# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Client Integration (Keras)

# This example takes Keras's [MNIST MLP example](https://github.com/keras-team/keras/blob/master/examples/mnist_mlp.py) and incorporates Verta's Client integration.
# + HOST = "app.verta.ai" PROJECT_NAME = "MNIST Multiclassification" EXPERIMENT_NAME = "FC-NN" # + # import os # os.environ['VERTA_EMAIL'] = # os.environ['VERTA_DEV_KEY'] = # + from verta import Client client = Client(HOST) proj = client.set_project(PROJECT_NAME) expt = client.set_experiment(EXPERIMENT_NAME) # - # ## Imports # + from __future__ import print_function from tensorflow import keras from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout from tensorflow.keras.optimizers import RMSprop # - # --- # # Log Workflow # ## Prepare Data batch_size = 128 num_classes = 10 epochs = 5 # + # the data, split between train and test sets (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.reshape(60000, 784) x_test = x_test.reshape(10000, 784) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) # - # ## Define Model # + model = Sequential() model.add(Dense(512, activation='relu', input_shape=(784,))) model.add(Dropout(0.2)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(num_classes, activation='softmax')) model.summary() model.compile( loss='categorical_crossentropy', optimizer=RMSprop(), metrics=['accuracy'], ) # - # ## Run and Log Training run = client.set_experiment_run() # + from verta.integrations.keras import VertaCallback history = model.fit( x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test), callbacks=[VertaCallback(run)], ) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) # - run # --- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # %load_ext autoreload # %autoreload 2 # + import json import time import pandas as pd import numpy as np from database_connector import connect, postgresql_to_dataframe from dil_preprocess import get_url_data, basic_pruning from dil_predict import init, predict_trees, reduce_leaky_endpoints from dil_postprocess import get_working_incs, get_dyn_urls, get_dyn_results, get_retest_urls, get_working_urls, get_working_urls_channels import qgrid #import ipysheet import redis r = redis.Redis() # - models = init() # Connect to the database site = "bitly.com" dat = get_url_data(site) dat.info() dat.groupby(["cookies"]).count() # Compare tika with file, both have some problems? # e.g. incomplete html, json, ... # (tika somewhat strange for empty files) qgrid.show_grid(dat[["resp_body_info", "resp_body_tika_info"]]) dat.groupby(["req_url"])["cookies"].agg(["nunique", "count"]).sort_values("nunique", ascending=False) af, d, poss, results = basic_pruning(dat, log=True) # Problem: many of the URLs in the attack frame are not predictable/guessable by a real attacker? 
qgrid.show_grid(af) # + # Methods that can leak the same thing are already reduced to one method # firefox 24 working leak_channels, 11 inc methods (not link-prefetch) # chromium 19 working leak_channels, 10 inc methods (not link-prefetch, not object) # Overlap: 18 leak channels, 6 only in firefox, 1 only in chromium # - af leaky_endpoints = predict_trees(af, log=True) leaks = reduce_leaky_endpoints(leaky_endpoints, log=True) # All URLs x working leak methods qgrid.show_grid(leaks) incs = get_working_incs(leaks) incs urls = get_dyn_urls(leaks, incs, d, poss, log=True, unpruned=False) # Get all results of the dynamic confirmation #site = "pdffiller.com" df = get_dyn_results(site) display(df.info()) df[["timed_out"]].value_counts() df.loc[(df["browser_id"] == 1) & (df["retest_num"] == 1)].sort_values(["test_id", "retest_num", "cookies"]) display(df.sort_values(["test_id", "browser_id", "cookies"])) display(df.groupby(["test_id", "browser_id"])[["events_id", "global_properties_id", "object_properties_id", "window_properties_id"]].agg("nunique")) # Find out what happenend with Firefox (and Chrome) # Firefox all same results! (maybe setting of cookies did not work?) # Chrome unsame results are only onblur/window.height (i.e., random stuff): seems like setting of cookies did not work? :( # Checked that, setting of cookies worked. Then the problem might be with our test application, e.g., cookie is invalid? # Or the problem is that the browser do not send the cookies along for some reason! # Secure? leaker website also has to be https? (does not matter too much) # Problem: cookie is only there for one request! then the cookie is gone :(??? # Django kills invalid session cookies by setting a new empty one # Now, we login directly ensuring that we have a valid session cookie print(df.columns) qgrid.show_grid(df.loc[df["browser_id"] == 1].sort_values(["test_id", "browser_id", "retest_num", "cookies"])[["test_id", "browser_id", "retest_num", "cookies", "events_id", "object_properties_id", "global_properties_id", "window_properties_id"]]) # + retest, _, _ = get_retest_urls(df, log=True) # run retest! print("run retest") display(retest) # Retest done; check if it worked: got same results for cookies/cookies - no-cookies/no-cookies twice and different results for cookies/no-cookies twice (implicitly given by the first condition as we only check the ones that had different results in the first test) # Dynamic fp_confirmation is hard/error-prone: e.g., a postMessage with a timestamp will be counted as a FP using our method # (can be a real FP, if both states receive the postMessage with different timstamps) # Other example: /random_image/ only available for members will always have different image dimensions # One solution would be to just check if a postMessage was received or not? 
(but this has another problem: if both states receive a distinct postMessage) # Reload data after retest is done _, pot, pot_leaks = get_retest_urls(get_dyn_results(site), log=True, retest_num=1) print("reloaded data") # Check that the potential leak is stable (has the same result twice) working, leak_urls = get_working_urls(pot, pot_leaks, log=True) display(working) display(leak_urls) # Alternative #conf = pot.groupby(["browser_id", "test_id", "cookies"])[["events_id", "global_properties_id", "object_properties_id", "window_properties_id"]].agg(["nunique", "count"]) #conf_miss = conf[conf.filter(regex="count", axis=1).isin([1]).all(axis=1)] #print(f"Dropping missing URLs: {conf_miss.shape}") #conf = conf.drop(conf_miss.index) # Does not drop the corresponding one (cookie/non-cookie) #conf = conf[conf.filter(regex="nunique", axis=1).isin([1]).all(axis=1)].reset_index() ##display(conf) #conf = df.loc[(df["browser_id"].isin(conf["browser_id"])) & (df["test_id"].isin(conf["test_id"])) & (df["retest_num"] == 1)].sort_values("test_id") #display(conf) #display(conf[~conf.isin(dup)].dropna()) # - # ### Not all URLs where found (and not all methods) # Find out why? # - ~~Initial crawl was incorrect~~ (redo crawl: answer was not the problem) # - all urls found + cookies are correct # - ~~Preprocessing/basic bruning too strict/incorrect~~ several fixes applied # - ~~Trees are inaccurate (too strict)~~ # - ~~Postprocessing is incomplete/has errors~~ several fixes applied # - ... # + summary = working_df.sort_values(["url", "method", "inc_method", "browser"]).groupby(["url", "browser", "inc_method"])["method"].unique().to_frame() #display(summary) df_unpruned = get_dyn_results(f"{site}-unpruned") working_df_unpruned , _, _ = get_working_urls_channels(df_unpruned, log=False) summary_unpruned = working_df_unpruned.sort_values(["url", "method", "inc_method", "browser"]).groupby(["url", "browser", "inc_method"])["method"].unique().to_frame() #display(summary_unpruned) # only unpruned: # - /leak14/: only leaks for `sec_fetch_site` == "cross-site", so it is removed by the basic pruning step (should not occur in the wild) # - ~~others, e.g., /leak16/ iframe: bug in preprocess xfo: fixed~~ # - others, e.g., /leak6/ link-prefetch: equivalent to other methods or trees not used because often not reliable enough: # rethink some of this? event_set of some inclusion method seem to work?, trees could also be inaccurate? :( # - /leak9/: redirect, depending on the resulting page (e.g., not existinge vs existing) many other methods can work too # (for redirect ones, add other methods as well?) 
# only pruned: # ~~iframe-csp: bug, was missing from test~~ with pd.option_context("max_rows", None): display(summary.join(summary_unpruned, rsuffix="-unpruned", how="outer")) # - working_df_unpruned.loc[working_df_unpruned["url"].str.contains("leak9")] display(df.loc[df["apg_url"].str.contains(r"/img/.*leak1/")].sort_values(["test_id", "browser_id", "retest_num", "cookies"])) display(df_unpruned.loc[df_unpruned["apg_url"].str.contains(r"/img/.*leak1/")].sort_values(["test_id", "browser_id", "retest_num", "cookies"])) display(tf.loc[tf["test_id"] == 4659295]) display(tf.loc[tf["test_id"] == 4659375]) # + import ipywidgets as widgets from ipysheet import Cell, column, from_dataframe, to_dataframe, to_array try: testapp_frame = pd.read_csv("testapp_frame") testapp_frame = testapp_frame.fillna('') sheet = from_dataframe(testapp_frame) except (NameError, FileNotFoundError): nrows = 5 sheet = ipysheet.sheet(columns=1,rows=nrows) column1 = ipysheet.column(0, [None] * nrows) row_button = widgets.Button(description='Add Row') column_button = widgets.Button(description='Add Column') out = widgets.Output() def add_row(_): sheet.rows += 1 for col in sheet.cells: # this assumes that each cell is a column, this might break otherwise col.row_end +=1 col = np.append(col,[None]) # Change None to whatever default value you want def add_column(_): """Only works for the initial run, does not work after data is imported anymore. Adding a colum, saving and reloading the frame works! Adding and directly editing does not work """ sheet.columns +=1 # Need to increment index first to avoid a ValueError ipysheet.column(sheet.columns-1,[None]*sheet.rows) row_button.on_click(add_row) column_button.on_click(add_column) display(widgets.VBox([widgets.HBox([row_button,column_button]),sheet])) # - testapp_frame = pd.DataFrame(to_array(sheet)) testapp_frame.to_csv("testapp_frame", index=False) testapp_frame for site in [ '172.17.0.1:44320', 'vimeo.com', 'amazon.in', 'unsplash.com', 'goodreads.com', 'digg.com', 'coursera.org', 'epa.gov', 'chess.com', 'stripe.com', 'avast.com', 'bitnami.com', 'envato.com', 'ning.com', 'postgresql.org', 'urbandictionary.com', 'readthedocs.io', 'technologyreview.com', 'hackmd.io']: df = get_dyn_results(site) print(f"Doing {site}, df.shape: {df.shape}") working_df, working_urls, url_dict = get_working_urls_channels(df, log=False) display(working_urls) display(working_df) # pd.DataFrame.from_dict(json.loads(json.dumps(working_df.to_dict("list")))) # Reset counter, to be able to retest site import json #r.set("hackmd.io", json.dumps([ {'domain': 'hackmd.io', 'name': 'sectionFilterApplied', 'value': 'true', 'path': '/', 'httpOnly': False, 'secure': False}, {'domain': 'hackmd.io', 'secure': True, 'value': 's%3A93EUlGSOqODk1Dm6cd4twh1NcRy5Fi4v.8KZtWWt67yLRN%2FCpXEzExoXJlY0sOxBcOTIfdWVPg%2BY', 'expiry': 1725511081, 'path': '/', 'httpOnly': True, 'name': 'connect.sid'}, {'domain': 'hackmd.io', 'secure': True, 'value': 'en-US', 'expiry': 1655837481, 'path': '/', 'httpOnly': False, 'name': 'locale'}, {'domain': 'hackmd.io', 'name': 'sectionsSortStrategy', 'value': 'cat_new_to_old', 'path': '/', 'httpOnly': False, 'secure': False}, {'domain': 'hackmd.io', 'name': 'overviewLayoutStrategy', 'value': '', 'path': '/', 'httpOnly': False, 'secure': False}, {'domain': 'hackmd.io', 'name': '_csrf', 'value': 'Vi1fP7b2S0iCMyxwHtmz6m5A', 'path': '/', 'httpOnly': False, 'secure': False}, {'domain': 'hackmd.io', 'name': 'notesSortStrategy', 'value': 'new_to_old', 'path': '/', 'httpOnly': False, 'secure': False}])) 
for site in ["172.17.0.1:44320", "172.17.0.1:44320-unpruned"]: print(r.get(f"{site}::first_count")) r.set(f"{site}::first_count", 0) r.set(f"{site}::second_count", 0) # # OLD stuff below from helper_functions import get_ef, get_gf, get_of, get_wf ef = get_ef() gf = get_gf() of = get_of() wf = get_wf() res = pot res = res.merge(ef, how="left", on="events_id") res = res.merge(gf, how="left", on="global_properties_id") res = res.merge(of, how="left", on="object_properties_id") res = res.merge(wf, how="left", on="window_properties_id") display(pot.sort_values(["test_id", "browser_id", "cookies"])) display(res.sort_values(["test_id", "browser_id", "cookies"])[["test_id", "browser_id", "cookies", "retest_num", "op_frame_count", "gp_window_postMessage"]]) res["event_set"] qgrid.show_grid(df.loc[df["test_id"].isin(pot["test_id"]) & (df["window_properties_id"] != 109)].sort_values(["test_id", "cookies"])) import json import subprocess import os # save URLs dict to file (json?) and start the dynamic confirmation # Run the automator framework with correct db settings + dict input + higher timeouts # Start the framework twice? once with cookies and once without cookies?! # Add cookies column to db/what about test? (maybe better to just put it into another table in the db!) # (XSSI handling needs to be added) url_dict_path = f"data/{site}.json" with open(url_dict_path, "w") as f: json.dump(urls, f) print(site) os.environ["PIPENV_DOTENV_LOCATION"] = "../.env" #print(subprocess.check_output(["pipenv", "run", "python", "test_browsers.py", "local_grid", f"../analysis/{url_dict_path}", site, "True", "Test"], cwd="../automator")) import glob at = af.reset_index(drop=True) display(at) start = time.time() for file in glob.glob("trees/tenmin/mojo/[1,2]/*.mojo"): break if "conflicted" in file: continue print(file) res = h2o.mojo_predict_pandas(at[th_headers], file, genmodel_jar_path="/home/jannis/Downloads/h2o-3.32.1.1/h2o.jar") if res["predict"].nunique() > 1: res = pd.concat([at, res], axis=1) valid = res.groupby("URL")["predict"].nunique() valid = valid[valid > 1] leaky = res.loc[res["URL"].isin(valid.index)] display(leaky) print(f"Took {time.time() - start} seconds") gen_path = "/home/jannis/Downloads/h2o-3.32.1.1/h2o.jar" dat = af.groupby("URL") # Replace by only good working ones?, otherwise we have too many trees! files = glob.glob("trees/mojo/[1,2]/*.mojo") files = [file for file in files if not "conflicted" in file] print(len(files)) for key, item in dat: continue df = dat.get_group(key) both = df[th_headers] working = [] for file in files: # Remove the (errornous) output of h2o: change file at path: /..../site-packages/h2o/utils/shared_utils.py line 414: to_csv add index=False res = h2o.mojo_predict_pandas(both, file, genmodel_jar_path=gen_path) if res["predict"].nunique() == 2: working.append(file) print(working) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''env'': venv)' # name: python_defaultSpec_1599703934604 # --- # Simple tip to avoid problems with data structures due to the broadcasting features in numpy/python # + import numpy as np a = np.random.randn(5) # rank 1 array # + tags=[] print(a) # + tags=[] print(a.shape) # + tags=[] print(a.T) # bad! 
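# An aside, not part of the original example: the usual defensive fix is to avoid rank-1 arrays, i.e. request explicit 2-D shapes (or reshape) and assert them, so broadcasting surprises fail fast. A minimal sketch, using fresh variable names so the original cells below behave exactly as before:

# + tags=[]
import numpy as np

# Sketch of the defensive pattern: ask for an explicit column vector and
# assert its shape instead of relying on a rank-1 array.
b = np.random.randn(5, 1)        # column vector, shape (5, 1)
assert b.shape == (5, 1)         # fails loudly if the shape ever drifts
print((b @ b.T).shape)           # (5, 1) @ (1, 5) -> (5, 5) outer product

# A rank-1 array can always be made explicit with reshape.
c = np.random.randn(5).reshape(5, 1)
assert c.shape == (5, 1)
# -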
# + tags=[] print(a @ a.T) # + tags=[] a = np.random.randn(5,1) print(a) # + tags=[] print(a.T) # + tags=[] print(a @ a.T) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #import the libraries import pandas as pd import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier # + #read the data df = pd.read_csv('winequality-red.csv') #convert the target variable to a string df['quality'] = df['quality'].astype(str) #separate the target variable from the data Y = df.pop('quality') X = df #separate the data into training and test sets xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size=0.2) #create a model rf = RandomForestClassifier() #train the model rf.fit(xtrain, ytrain) # - #import the partial dependence library import partial_dependence as pdp_plot #create a list of class labels class_label = list(rf.classes_) # bin a particular variable and get the plot for the rest of the variables. Number of bins must be specified pdp_plot.PartialDependence.plot_binning(xtest, 'fixed acidity', rf, class_label, class_label[3], 4, 10) #for a specified number of bins, the variables can be ranked by the total variance ot the plots #this function prints the variables according to the rank and also returns a list of variables in the ranked order pdp_plot.PartialDependence.get_rank_on_binning(xtest, rf, class_label, class_label[3], 4) #trying binning on alcohol pdp_plot.PartialDependence.plot_binning(xtest, 'alcohol', rf, class_label, class_label[3], 4, 10) #trying binning on density pdp_plot.PartialDependence.plot_binning(xtest, 'density', rf, class_label, class_label[3], 4, 10) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandana as pdna import numpy as np from urbansim.utils import misc, networks from scripts import datasources from scripts import models from scripts import variables from urbansim_templates import modelmanager as mm from urbansim_templates.models import LargeMultinomialLogitStep import orca import os; os.chdir('../') import warnings;warnings.simplefilter('ignore') # %%time orca.run(['initialize_network_walk']) # %%time orca.run(['network_aggregations_walk']) nodeswalk = orca.get_table('nodeswalk').to_frame() parcels = orca.get_table('parcels').to_frame() merged = pd.merge(nodeswalk, parcels, left_index=True, right_on='node_id_walk') len(merged) # + nodeswalk_upper = nodeswalk.quantile(.99) nodeswalk_c = nodeswalk.clip_upper(nodeswalk_upper, axis=1) nodeswalk_c['prop_children_500_walk'] = (nodeswalk_c['children_500_walk'] > 0).astype(int) / nodeswalk_c['hh_500_walk'] nodeswalk_c['prop_singles_500_walk'] = nodeswalk_c['singles_500_walk'] / nodeswalk_c['hh_500_walk'] nodeswalk_c['prop_elderly_500_walk'] = nodeswalk_c['elderly_hh_500_walk'] / nodeswalk_c['hh_500_walk'] nodeswalk_c['prop_black_500_walk'] = nodeswalk_c['pop_black_500_walk'] / nodeswalk_c['pop_500_walk'] nodeswalk_c['prop_white_500_walk'] = nodeswalk_c['pop_white_500_walk'] / nodeswalk_c['pop_500_walk'] nodeswalk_c['prop_asian_500_walk'] = nodeswalk_c['pop_asian_500_walk'] / nodeswalk_c['pop_500_walk'] nodeswalk_c['prop_hisp_500_walk'] = nodeswalk_c['pop_hisp_500_walk'] / 
nodeswalk_c['pop_500_walk'] nodeswalk_c['prop_rich_500_walk'] = nodeswalk_c['rich_500_walk'] / nodeswalk_c['pop_500_walk'] nodeswalk_c['prop_poor_500_walk'] = nodeswalk_c['poor_500_walk'] / nodeswalk_c['pop_500_walk'] nodeswalk_c['prop_children_1500_walk'] = (nodeswalk_c['children_1500_walk'] > 0).astype(int)/nodeswalk_c['hh_1500_walk'] nodeswalk_c['prop_singles_1500_walk'] = nodeswalk_c['singles_1500_walk'] / nodeswalk_c['hh_1500_walk'] nodeswalk_c['prop_elderly_1500_walk'] = nodeswalk_c['elderly_hh_1500_walk'] / nodeswalk_c['hh_1500_walk'] nodeswalk_c['prop_black_1500_walk'] = nodeswalk_c['pop_black_1500_walk'] / nodeswalk_c['pop_1500_walk'] nodeswalk_c['prop_white_1500_walk'] = nodeswalk_c['pop_white_1500_walk'] / nodeswalk_c['pop_1500_walk'] nodeswalk_c['prop_asian_1500_walk'] = nodeswalk_c['pop_asian_1500_walk'] / nodeswalk_c['pop_1500_walk'] nodeswalk_c['prop_hisp_1500_walk'] = nodeswalk_c['pop_hisp_1500_walk'] / nodeswalk_c['pop_1500_walk'] nodeswalk_c['prop_rich_1500_walk'] = nodeswalk_c['rich_1500_walk'] / nodeswalk_c['pop_1500_walk'] nodeswalk_c['prop_poor_1500_walk'] = nodeswalk_c['poor_1500_walk'] / nodeswalk_c['pop_1500_walk'] nodeswalk_c['pop_jobs_ratio_1500_walk'] = nodeswalk_c['pop_1500_walk'] / (nodeswalk_c['jobs_500_walk']+1) nodeswalk_c['avg_hhs_500_walk'] = nodeswalk_c['pop_500_walk'] / (nodeswalk_c['hh_500_walk']+1) nodeswalk_c['avg_hhs_1500_walk'] = nodeswalk_c['pop_1500_walk'] / (nodeswalk_c['hh_1500_walk']+1) nodeswalk_c['pop_jobs_ratio_1500_walk'] = nodeswalk_c['pop_1500_walk'] / (nodeswalk_c['jobs_500_walk']+1) nodeswalk_c['avg_hhs_500_walk'] = nodeswalk_c['pop_500_walk'] / (nodeswalk_c['hh_500_walk']+1) nodeswalk_c['avg_hhs_1500_walk'] = nodeswalk_c['pop_1500_walk'] / (nodeswalk_c['hh_1500_walk']+1) # - len(pd.merge(nodeswalk_c, parcels, left_index=True, right_on='node_id_walk')) nodeswalk_c.to_csv('/home/data/2018-07/nodeswalk_c_max.csv') nodeswalk = pd.read_csv('/home/data/2018-07/nodeswalk_c_max.csv', index_col='osmid') len(pd.merge(nodeswalk, parcels, left_index=True, right_on='node_id_walk')) # %%time orca.run(['initialize_network_small']) # %%time orca.run(['network_aggregations_small']) nodessmall = orca.get_table('nodessmall').to_frame() merged = pd.merge(nodessmall, parcels, left_index=True, right_on='node_id_small') len(merged) nodessmall_upper = nodessmall.quantile(.99) nodessmall_c = nodessmall.clip_upper(nodessmall_upper, axis=1) nodessmall_c['pop_jobs_ratio_10000'] = nodessmall_c['pop_10000'] / (nodessmall_c['jobs_10000'] + 1) nodessmall_c['pop_jobs_ratio_25000'] = nodessmall_c['pop_25000'] / (nodessmall_c['jobs_25000'] + 1) merged = pd.merge(nodessmall, parcels, left_index=True, right_on='node_id_small') len(merged) nodessmall_c.to_csv('/home/data/2018-07/nodessmall_c_max.csv') orca. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 1. Introduction # In this part of the project, we will clusterize data obtained from the previous notebook. The `set_option` is defined to show all columns and all the importations are made before the start. 
import pandas as pd import numpy as np pd.set_option('display.max_columns', None) pd.set_option('max_columns', None) from sklearn.cluster import KMeans from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import PowerTransformer from sklearn.preprocessing import Normalizer from sklearn import preprocessing df = pd.read_csv('para_cluster.csv') df = df.drop(['Unnamed: 0'],axis=1) df.columns df.shape # To avoid collinearity, variables with correlation higher than 0.9 will be dropped. In statistics, multicollinearity (also collinearity) is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. In this situation, the coefficient estimates of the multiple regression may change erratically in response to small changes in the model or the data. corr_matrix = df.corr().abs() upper = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(np.bool)) to_drop = [column for column in upper.columns if any(upper[column] >0.90)] #to_drop df = df.drop(['Consumo de Energia Elétrica - Comércio e Serviços (Em MWh)', 'Consumo de Energia Elétrica - Residencial (Em MWh)', 'Consumo de Energia Elétrica - Iluminação e Serviços Públicos e Outros (Em MWh)', 'Empregos Formais de Homens ', 'Empregos Formais de Mulheres ', 'Empregos Formais das Pessoas de até 24 Anos ', 'Empregos Formais das Pessoas de 25 a 39 Anos ', 'Empregos Formais das Pessoas de 40 a 59 Anos ', 'Empregos Formais das Pessoas de 60 Anos e Mais ', 'Empregos Formais das Pessoas com Ensino Fundamental Incompleto ', 'Empregos Formais das Pessoas com Ensino Fundamental Completo ', 'Empregos Formais das Pessoas com Ensino Médio Completo ', 'Empregos Formais das Pessoas com Ensino Superior Completo ', 'Empregos Formais da Indústria ', 'Empregos Formais da Construção ', 'Empregos Formais do Comércio Atacadista e Varejista e do Comércio e Reparação de Veículos Automotores e Motocicletas ', 'Empregos Formais dos Serviços ', 'Participação dos Empregos Formais dos Serviços no Total de Empregos Formais (Em %)', 'Frota de Veículos - Total ', 'Frota de Automóveis ', 'Frota de Ônibus ', 'Frota de Caminhões ', 'Frota de Reboques ', 'Frota de Motocicletas e Assemelhados ', 'Frota de Microônibus e Camionetas ', 'Frota de Veículos de Outro Tipo ', 'Índice de Desenvolvimento Humano Municipal - IDHM - Ranking dos Municípios ', 'Índice de Desenvolvimento Humano Municipal - IDHM Educação ', 'Índice de Desenvolvimento Humano Municipal - IDHM Renda ', 'Leitos de Internação ', 'Leitos SUS ', 'Leitos SUS (Coeficiente por mil habitantes)', 'Taxa de Fecundidade Geral (Por mil mulheres entre 15 e 49 anos)', 'Óbitos Gerais (por local de residência) ', 'Óbitos da População de 15 a 34 Anos ', 'Óbitos da População de 60 Anos e Mais ', 'PIB (Em mil reais correntes)', 'População ', 'População Masculina ', 'População Feminina ', 'População de 0 a 3 Anos ', 'População de 4 a 6 Anos ', 'População de 6 Anos ', 'População de 7 a 10 Anos ', 'População de 11 a 14 Anos ', 'População de 15 a 17 Anos ', 'População de 18 a 19 Anos ', 'Auxiliares de Enfermagem Registrados no COREN/SP ', 'Dentistas Registrados no CRO/SP ', 'Enfermeiros Registrados no COREN/SP ', 'Fonoaudiólogos registrados no CRFa/SP ', 'Médicos Registrados no CRM/SP ','Empregos Formais da Agricultura. Pecuária. Produção Florestal. 
Pesca e Aquicultura ', 'Índice de Desenvolvimento Humano Municipal - IDHM Longevidade ', 'Índice Paulista de Responsabilidade Social - IPRS - Dimensão Riqueza ', 'Índice Paulista de Responsabilidade Social - IPRS - Dimensão Longevidade ', 'Índice Paulista de Responsabilidade Social - IPRS - Dimensão Escolaridade ', 'Taxa Geométrica de Crescimento Anual da População - 2010/2020 (Em % a.a.)', 'Dentistas Registrados no CRO/SP (Coeficiente por dois mil habitantes)', 'Técnicos de Prótese Dentária Registrados no CRO/SP (Coeficiente por dois mil habitantes)', 'Fonoaudiólogos registrados no CRFa/SP (Coeficiente por mil habitantes)','Area', 'Técnicos de Enfermagem Registrados no COREN/SP ', 'Técnicos de Prótese Dentária Registrados no CRO/SP ', 'Domicílios Particulares com Renda per Capita até 1/2 Salário Mínimo - Censo Demográfico (Em %)', 'Domicílios Particulares com Renda per Capita até 1/4 do Salário Mínimo - Censo Demográfico (Em %)', 'Consumo de Energia Elétrica - Total (Em MWh)'],axis=1) df.shape df.head() # New data from `municipio.csv` and `covid.csv` are stored into the notebook's memory. municipios = pd.read_csv('municipio.csv') municipios = municipios.drop(['Unnamed: 0'],axis=1) municipios.head() covid = pd.read_csv("DATA/covid.csv", sep=";", encoding='latin-1') covid = covid.drop(['Unnamed: 5', 'Unnamed: 6'], axis=1) covid.rename(columns={'Cod_IBGE':'Cód. IBGE'}, inplace=True) covid.rename(columns={'Município':'Localidades'}, inplace=True) covid.head() # ## 2. Data Scaling # As we have many variables with different ranges and meaning, scaler and/or normalization is necessary. Several scaler can be used. Indeed many estimators are designed with the assumption that each feature takes values close to zero or more importantly that all features vary on comparable scales. The Normalizer rescales the vector for each sample to have unit norm, independently of the distribution of the samples. # + #scaler = preprocessing.RobustScaler() #scaler = preprocessing.MinMaxScaler() #scaler = preprocessing.StandardScaler() df_scaled = Normalizer().fit_transform(df) #df_scaled = PowerTransformer(method='yeo-johnson').fit_transform(df) names = df.columns df_scaled = pd.DataFrame(df_scaled, columns=names) # - ssd = [] K = range(1,30) for k in K: km = KMeans(n_clusters=k) km = km.fit(df_scaled) ssd.append(km.inertia_) # In cluster analysis, the elbow method is a heuristic used in determining the number of clusters in a data set. The method consists of plotting the explained variation as a function of the number of clusters, and picking the elbow of the curve as the number of clusters to use. In this case, we have used 15 clusters. 
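# As a cross-check of reading the elbow off the plot, the bend in the inertia curve can also be located programmatically. This is a minimal sketch added here (not part of the original analysis), reusing the `ssd` list computed above; the notebook still proceeds with the 15 clusters chosen from the plot below.

# +
import numpy as np

# Heuristic elbow: the k where the inertia curve bends most sharply,
# approximated by the largest second difference of ssd.
ssd_arr = np.array(ssd)
curvature = np.diff(ssd_arr, n=2)        # length len(ssd) - 2
k_values = np.arange(2, len(ssd_arr))    # K started at 1, so these align with curvature
print("Heuristic elbow at k =", k_values[np.argmax(curvature)])
# -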
import seaborn as sns import matplotlib.pyplot as plt plt.figure(figsize=(10,6)) plt.plot(K, ssd, 'bx-') plt.xlabel('k') plt.ylabel('ssd') plt.title('Elbow Method For Optimal k') #plt.show() kmeans = KMeans(n_clusters=15) model = kmeans.fit(df_scaled) pred = model.labels_ df['cluster'] = pred # + from sklearn.decomposition import PCA pca = PCA(n_components=2) pca_model = pca.fit_transform(df_scaled) data_transform = pd.DataFrame(data = pca_model, columns = ['PCA1', 'PCA2']) data_transform['cluster'] = pred # - import seaborn as sns import matplotlib.pyplot as plt plt.figure(figsize=(8,8)) g = sns.scatterplot(data=data_transform, x='PCA1', y='PCA2', hue='cluster') title = plt.title('Countries Clusters with PCA') df_total = pd.concat([municipios, df], axis=1) sns.countplot(df_total['cluster']) df_total = df_total.rename(columns={'Município': 'Localidades'}) df_total = df_total.merge(covid,on='Localidades', how='left' ) # Saving the dataframe for maping tools. df_total.to_csv('clusterizacao_para_mapa.csv') # ### 3. Visualization df_ead = df_total df_ead = df_ead.drop(['Cód. IBGE_x','Cód. IBGE_y', 'Grande região', 'Mun_Total de casos', 'Mun_Total de óbitos'],axis=1) # > Those are the variables that we have used in k-means. To choose which you would like to plot, go to the following cell and type the name of the variable. In this example case, we have chosen 'Índice de Desenvolvimento Humano Municipal - IDHM'. After that, you can run the last cell. A new page is going to pop up. df_ead.columns col = 'Índice de Desenvolvimento Humano Municipal - IDHM ' # + #grid = sns.pairplot(df_ead5, hue="cluster", diag_kws={'bw': 0.4}) #dfl = df_ead4.set_index('cluster').stack().reset_index().rename(columns={'level_1': 'groups', 0: 'values'}) import plotly.express as px from plotly.offline import plot fig = px.box(df_ead, x="cluster", y=col) plot(fig) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline from matplotlib import style style.use('fivethirtyeight') import matplotlib.pyplot as plt import numpy as np import pandas as pd import datetime as dt # # Reflect Tables into SQLAlchemy ORM # Python SQL toolkit and Object Relational Mapper import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, func, inspect engine = create_engine("sqlite:///Resources/hawaii.sqlite") inspector = inspect(engine) # reflect an existing database into a new model Base = automap_base() # reflect the tables Base.prepare(engine,reflect = True) # We can view all of the classes that automap found Base.classes.keys() columns = inspector.get_columns('measurement') for column in columns: print(column["name"], column["type"]) columns = inspector.get_columns('station') for column in columns: print(column["name"], column["type"]) # Save references to each table Station = Base.classes.station Measurement = Base.classes.measurement # Create our session (link) from Python to the DB Session = Session(engine) # # Exploratory Climate Analysis # Calculate the date 1 year ago from the last data point in the database last_twelve_months = dt.date(2017,8,23)-dt.timedelta(days=365) last_twelve_months # Design a query to retrieve the last 12 months of precipitation data and plot the results results = Session.query(Measurement.date, 
Measurement.prcp).filter(Measurement.date>=last_twelve_months).all() #results # Save the query results as a Pandas DataFrame and set the index to the date column measurements = pd.DataFrame(results) measurements = measurements.rename(columns={"date": "Date","prcp": "Precipitation"}) measurements.set_index(measurements["Date"], inplace=True) del measurements["Date"] measurements.head(5) # Sort the dataframe by date measurements = measurements.sort_values("Date") measurements # Use Pandas Plotting with Matplotlib to plot the data precipitation_plot = measurements.plot(figsize = (15,8), color = 'navy', rot=45) plt.tight_layout() plt.ylabel("Precipitation", size=15) plt.title("Last 12 Months of Precipitation", size=25) plt.savefig("Last_12_Months_of_Precipitation.png") # Use Pandas to calculate the summary statistics for the precipitation data measurements.describe() # Design a query to show how many stations are available in this dataset? no_stations = Session.query(func.count(Station.station)).all() print(f'There are {no_stations} stations available') # What are the most active stations? (i.e. what stations have the most rows)? # Choose the station with the highest number of temperature observations. # List the stations and the counts in descending order. active_stations= Session.query(Measurement.station,func.count(Measurement.station)).group_by(Measurement.station).\ order_by(func.count(Measurement.station).desc()).all() active_stations # Using the station id from the previous query, calculate the lowest temperature recorded, lowest_temp = Session.query(func.min(Measurement.tobs),func.max(Measurement.tobs),func.avg(Measurement.tobs)).\ filter(Measurement.station=="USC00519281").all() lowest_temp # Query the last 12 months of temperature observation data for this station and plot the results as a histogram last_twelve_months = dt.date(2017,8,23)-dt.timedelta(days=365) station_9281 = Session.query(Measurement.tobs).filter(Measurement.station == "USC00519281").\ filter(Measurement.date>=last_twelve_months).all() station_9281_df = pd.DataFrame(station_9281, columns = ["tobs"]) station_9281_df = station_9281_df.rename(columns = {"tobs": "Temperature"}) station_9281_df.head(10) station_9281_df.plot.hist(bins=12) plt.title("Temperature Past 12 Months") plt.xlabel("Temperature") plt.tight_layout() plt.savefig("Temperature_Past_12_Months") # ## Bonus Challenge Assignment # + # This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' # and return the minimum, average, and maximum temperatures for that range of dates def calc_temps(start_date, end_date): """TMIN, TAVG, and TMAX for a list of dates. Args: start_date (string): A date string in the format %Y-%m-%d end_date (string): A date string in the format %Y-%m-%d Returns: TMIN, TAVE, and TMAX """ return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\ filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all() # function usage example print(calc_temps('2012-02-28', '2012-03-05')) # - # Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax # for your trip using the previous year's data for those same dates. # Plot the results from your previous query as a bar chart. 
# Use "Trip Avg Temp" as your Title # Use the average temperature for the y value # Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr) # + # Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates. # Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation # + # Create a query that will calculate the daily normals # (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day) def daily_normals(date): """Daily Normals. Args: date (str): A date string in the format '%m-%d' Returns: A list of tuples containing the daily normals, tmin, tavg, and tmax """ sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)] return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all() daily_normals("01-01") # + # calculate the daily normals for your trip # push each tuple of calculations into a list called `normals` # Set the start and end date of the trip # Use the start and end date to create a range of dates # Stip off the year and save a list of %m-%d strings # Loop through the list of %m-%d strings and calculate the normals for each date # - # Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index # Plot the daily normals as an area plot with `stacked=False` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Recommendations with IBM # # In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform. # # # You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way assure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.** # # By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations. # # # ## Table of Contents # # I. [Exploratory Data Analysis](#Exploratory-Data-Analysis)
# II. [Rank Based Recommendations](#Rank)
# III. [User-User Based Collaborative Filtering](#User-User)
# IV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)
# V. [Matrix Factorization](#Matrix-Fact)
# VI. [Extras & Concluding](#conclusions) # # At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data. # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import project_tests as t import pickle import math from sklearn.metrics import f1_score # %matplotlib inline df = pd.read_csv('data/user-item-interactions.csv') df_content = pd.read_csv('data/articles_community.csv') del df['Unnamed: 0'] del df_content['Unnamed: 0'] # Show df to get an idea of the data df.head() # - # Show info of df df.info() # Column 'article_id' in df is of type float. # Show df_content to get an idea of the data df_content.head() # Show info of df_content df_content.info() # Column 'article_id' in df_content is of type int. # Shape of df(user_item_interactions) df.shape # Finding missing values in df df.isnull().any() # So, there are missing values in column email in df(user_item_interactions). # Print number of missing values in df df.isnull().sum() # So, there are 17 missing values in column email in df. # Shape of df_content df_content.shape # Finding missing values in df_content df_content.isnull().any() # So, there are missing values in columns doc_body and doc_description in df_content. # Print number of missing values in df_content df_content.isnull().sum() # So, there are 14 missing values in column doc_body and 3 missing values in column doc_description. # find duplicates in df df.duplicated().value_counts() # So, there are 12311 duplicates in df. # find duplicates in df_content df_content.duplicated().value_counts() # So, there are no duplicates in df_content. # ### Part I : Exploratory Data Analysis # # Use the dictionary and cells below to provide some insight into the descriptive statistics of the data. # # `1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article. # Group df by email and check article_id's count df.groupby('email')['article_id'].value_counts() # Get number of user_item inetractions for each user df.groupby('email')['article_id'].count() # How many unique users have interacted with articles len(df.groupby('email')) # Distribution of user_item interactions df.groupby('email')['article_id'].count().values # Getting counts of distribution of article intreactions df.groupby('email')['article_id'].count().value_counts().sort_index() # Get info about the distribution of user_item interactions df.groupby('email')['article_id'].count().describe() # Plotting histogram of distribution of user_item interactions fig, ax = plt.subplots(figsize=(8,6)) ax.hist(df.groupby('email')['article_id'].count().values, bins=20, range=(1,364)) ax.set_xticks(np.arange(0,364,25)) ax.set_yticks(np.arange(0,4000, 500)) ax.set_title('Distribution of user_item interactions') ax.grid(which='major', axis='y') ax.grid(which='major', axis='x') plt.show(); # Plotting histogram of distribution of user_item interactions(outliers removed) i.e. 
# removed values with 363 & 364 number of interactions as there were only 1 user whom had 363 & 364 number of interactions fig, ax = plt.subplots(figsize=(8,6)) ax.hist(df.groupby('email')['article_id'].count().values, bins=20, range=(1,175)) ax.set_xticks(np.arange(0,175,10)) ax.set_yticks(np.arange(0,4000, 500)) ax.set_title('Distribution of user_item interactions(outliers removed)') ax.grid(which='major', axis='y') ax.grid(which='major', axis='x') plt.show(); # + # Fill in the median and maximum number of user_article interactios below median_val = 3 # 50% of individuals interact with ____ number of articles or fewer. max_views_by_user = 364 # The maximum number of user-article interactions by any 1 user is ______. # - # `2.` Explore and remove duplicate articles from the **df_content** dataframe. # Find and explore duplicate entries in df_content df_content.duplicated().sum() # Find and explore duplicate articles in df_content df_content['article_id'].duplicated().sum() # There are no duplicate entries in df_content but there are 5 duplicate articles in df_content. # Get duplicate articles ID's in df_content print('Number of duplicate articles are : {}'.format(df_content['article_id'].duplicated().sum())) print("Duplicate articles ID's are : {}".format(df_content[df_content['article_id'].duplicated()]['article_id'].values)) # Get observations with duplicate articles ID's dup_article_id = df_content[df_content['article_id'].duplicated()]['article_id'].values.tolist() df_content.loc[np.where(df_content['article_id'].isin(dup_article_id))[0]] # Get duplicate articles entries df_content[df_content['article_id'].duplicated()] # Remove any rows that have the same article_id - only keep the first df_content.drop_duplicates(subset='article_id', keep='first', inplace=True) df_content.shape # `3.` Use the cells below to find: # # **a.** The number of unique articles that have an interaction with a user. # **b.** The number of unique articles in the dataset (whether they have any interactions or not).
# **c.** The number of unique users in the dataset. (excluding null values)
# **d.** The number of user-article interactions in the dataset. # Shape of df and unique values in each of column of df print(df.shape) print(df.nunique()) # a. The number of unique articles that have an interaction with a user. df['article_id'].nunique() # b. The number of unique articles in the dataset (whether they have any interactions or not). df_content['article_id'].nunique() # Check null values in column email of df df['email'].isnull().value_counts() # So, there are 17 observatiosn which has null values in column email. # c. The number of unique users in the dataset. (excluding null values) df[~df['email'].isnull()]['email'].nunique() # d. The number of user-article interactions in the dataset df.shape unique_articles = 714 # The number of unique articles that have at least one interaction total_articles = 1051 # The number of unique articles on the IBM platform unique_users = 5148 # The number of unique users user_article_interactions = 45993 # The number of user-article interactions # `4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below). # Find the most viewed article_id, as well as how often it was viewed df.groupby('article_id')['email'].count().sort_values(ascending=False).head() most_viewed_article_id = '1429.0' # The most viewed article in the dataset as a string with one value following the decimal max_views = 937 # The most viewed article in the dataset was viewed how many times? # + ## No need to change the code here - this will be helpful for later parts of the notebook # Run this cell to map the user email to a user_id column and remove the email column def email_mapper(): coded_dict = dict() cter = 1 email_encoded = [] for val in df['email']: if val not in coded_dict: coded_dict[val] = cter cter+=1 email_encoded.append(coded_dict[val]) return email_encoded email_encoded = email_mapper() del df['email'] df['user_id'] = email_encoded # show header df.head() # - # Number of unique user id's in df # null value in column email was also assigned a unique user id df['user_id'].nunique() # + ## If you stored all your results in the variable names above, ## you shouldn't need to change anything in this cell sol_1_dict = { '`50% of individuals have _____ or fewer interactions.`': median_val, '`The total number of user-article interactions in the dataset is ______.`': user_article_interactions, '`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user, '`The most viewed article in the dataset was viewed _____ times.`': max_views, '`The article_id of the most viewed article is ______.`': most_viewed_article_id, '`The number of unique articles that have at least 1 rating ______.`': unique_articles, '`The number of unique users in the dataset is ______`': unique_users, '`The number of unique articles on the IBM platform`': total_articles } # Test your dictionary against the solution t.sol_1_test(sol_1_dict) # - # ### Part II: Rank-Based Recommendations # # Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. 
In these cases, the popularity of an article can really only be based on how often an article was interacted with. # # `1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below. # + def get_top_articles(n, df=df): ''' INPUT: n - (int) the number of top articles to return df - (pandas dataframe) df as defined at the top of the notebook OUTPUT: top_articles - (list) A list of the top 'n' article titles ''' # Your code here n_top_articles_title = [] for article_id in get_top_article_ids(n, df): n_top_articles_title.append(df[df['article_id'] == float(article_id)]['title'][:1].values[0]) return n_top_articles_title # Return the top article titles from df (not df_content) def get_top_article_ids(n, df=df): ''' INPUT: n - (int) the number of top articles to return df - (pandas dataframe) df as defined at the top of the notebook OUTPUT: n_top_articles_ids - (list of str) A list of the top 'n' article ids ''' # Your code here n_top_articles_ids = df.groupby('article_id')['user_id'].count().sort_values(ascending=False).index[:n].tolist() # Convert all elements of list(float) to list(str) for i in range(len(n_top_articles_ids)): n_top_articles_ids[i] = str(n_top_articles_ids[i]) return n_top_articles_ids # Return the top article ids # - print(get_top_articles(10)) print(get_top_article_ids(10)) # + # Test your function by returning the top 5, 10, and 20 articles top_5 = get_top_articles(5) top_10 = get_top_articles(10) top_20 = get_top_articles(20) # Test each of your three lists from above t.sol_2_test(get_top_articles) # - # ### Part III: User-User Based Collaborative Filtering # # # `1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns. # # * Each **user** should only appear in each **row** once. # # # * Each **article** should only show up in one **column**. # # # * **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column**. It does not matter how many times a user has interacted with the article, all entries where a user has interacted with an article should be a 1. # # # * **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column**. # # Use the tests to make sure the basic structure of your matrix matches what is expected by the solution. # + # create the user-article matrix with 1's and 0's def create_user_item_matrix(df): ''' INPUT: df - pandas dataframe with article_id, title, user_id columns OUTPUT: user_item - user item matrix Description: Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with an article and a 0 otherwise ''' # Fill in the function here user_item = df.groupby(['user_id','article_id'])['article_id'].count().unstack() for col in user_item.columns: user_item[col] = user_item[col].apply(lambda x: 0 if math.isnan(float(x)) else 1) return user_item # return the user_item matrix user_item = create_user_item_matrix(df) # - user_item ## Tests: You should just need to run this cell. Don't change the code. assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right." assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right." assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right." 
print("You have passed our quick tests! Please proceed!") # `2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users. # # Use the tests to test your function. def find_similar_users(user_id, user_item=user_item): ''' INPUT: user_id - (int) a user_id user_item - (pandas dataframe) matrix of users by articles: 1's when a user has interacted with an article, 0 otherwise OUTPUT: similar_users - (list) an ordered list where the closest users (largest dot product users) are listed first Description: Computes the similarity of every pair of users based on the dot product Returns an ordered ''' # compute similarity of each user to the provided user # Calculate similarity of provided user with all other users similarity_users = user_item.loc[user_id,:].dot(np.transpose(user_item)) # sort by similarity similarity_users_sorted = similarity_users.sort_values(ascending=False) # create list of just the ids similar_ids_list = similarity_users_sorted.index.tolist() # remove the own user's id similar_ids_list.remove(user_id) most_similar_users = similar_ids_list return most_similar_users # return a list of the users in order from most to least similar # Do a spot check of your function print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10])) print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5])) print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3])) # `3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user. 
# + def get_article_names(article_ids, df=df): ''' INPUT: article_ids - (list of str) a list of article ids df - (pandas dataframe) df as defined at the top of the notebook OUTPUT: article_names - (list) a list of article names associated with the list of article ids (this is identified by the title column) ''' # Your code here article_names = [] for a_id in article_ids: article_names.append(df[df['article_id'] == float(a_id)]['title'][:1].values[0]) return article_names # Return the article names associated with list of article ids def get_user_articles(user_id, user_item=user_item): ''' INPUT: user_id - (int) a user id user_item - (pandas dataframe) matrix of users by articles: 1's when a user has interacted with an article, 0 otherwise OUTPUT: article_ids - (list of str) a list of the article ids seen by the user article_names - (list) a list of article names associated with the list of article ids (this is identified by the doc_full_name column in df_content) Description: Provides a list of the article_ids and article titles that have been seen by a user ''' # Your code here article_ids = user_item.loc[user_id, user_item.loc[user_id,:] == 1].index.tolist() for i in range(len(article_ids)): article_ids[i] = str(article_ids[i]) article_names = get_article_names(article_ids) return article_ids, article_names # return the ids and names def user_user_recs(user_id, m=10): ''' INPUT: user_id - (int) a user id m - (int) the number of recommendations you want for the user OUTPUT: recs - (list) a list of recommendations for the user Description: Loops through the users based on closeness to the input user_id For each user - finds articles the user hasn't seen before and provides them as recs Does this until m recommendations are found Notes: Users who are the same closeness are chosen arbitrarily as the 'next' user For the user where the number of recommended articles starts below m and ends exceeding m, the last items are chosen arbitrarily ''' # Your code here recs = np.array([]) # articles_seen by user (we don't want to recommend these) article_ids, article_names = get_user_articles(user_id) # Similar users to provided user similar_users = find_similar_users(user_id) for sim_user in similar_users: sim_user_article_ids, sim_user_article_names = get_user_articles(sim_user) #Obtain recommendations for each similar user new_recs = np.setdiff1d(np.array(sim_user_article_ids), np.array(article_ids), assume_unique=True) # Update recs with new recs recs = np.unique(np.concatenate([new_recs, recs], axis=0)) # If we have enough recommendations exit the loop if len(recs) >= m: break recs = recs.tolist() recs = recs[:m] return recs # return your recommendations for this user_id # - # Check Results get_article_names(user_user_recs(1, 10)) # Return 10 recommendations for user 1 # Test your functions here - No need to change this code - just run this cell assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your the get_article_names function doesn't work quite how we expect." 
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), "Oops! Your the get_article_names function doesn't work quite how we expect." assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0']) assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']) assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0']) assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']) print("If this is all you see, you passed all of our tests! Nice job!") # `4.` Now we are going to improve the consistency of the **user_user_recs** function from above. # # * Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions. # # # * Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose articles with the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the **top_articles** function you wrote earlier. 
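# The re-ranking described above boils down to a two-key sort: order neighbors by similarity first and
# break ties with their total number of interactions (and likewise rank candidate articles by popularity).
# A minimal sketch with hypothetical neighbor data, before the real implementation below:

# +
toy_neighbors = pd.DataFrame({'neighbor_id':      [10, 11, 12],
                              'similarity':       [5, 5, 3],
                              'num_interactions': [8, 20, 50]})

# of the two equally similar neighbors, the more active one (11) now comes first
print(toy_neighbors.sort_values(by=['similarity', 'num_interactions'], ascending=False))
# -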
# + def get_top_sorted_users(user_id, df=df, user_item=user_item): ''' INPUT: user_id - (int) df - (pandas dataframe) df as defined at the top of the notebook user_item - (pandas dataframe) matrix of users by articles: 1's when a user has interacted with an article, 0 otherwise OUTPUT: neighbors_df - (pandas dataframe) a dataframe with: neighbor_id - is a neighbor user_id similarity - measure of the similarity of each user to the provided user_id num_interactions - the number of articles viewed by the user - if a u Other Details - sort the neighbors_df by the similarity and then by number of interactions where highest of each is higher in the dataframe ''' # Your code here # Compute similarity of each user to the provided user similarity_users = user_item.loc[user_id,:].dot(np.transpose(user_item)) # Create neighbors_df neighbors_df = similarity_users.reset_index().rename(columns={'user_id':'neighbor_id', user_id:'similarity'}) # Compute num_interactions for each neighbor_id and create another dataframe num_interactions_df num_interactions_df = df.groupby('user_id')['article_id'].count().reset_index() num_interactions_df = num_interactions_df.rename(columns={'article_id':'num_interactions'}).drop(columns=['user_id']) # Join two dataframes (neighbors_df & num_interactions_df) neighbors_df = neighbors_df.join(num_interactions_df) # remove the own user's id and sort by similarity idx = neighbors_df[neighbors_df['neighbor_id'] == user_id].index neighbors_df.drop(index=idx,inplace=True) neighbors_df.sort_values(by=['similarity', 'num_interactions'], ascending=False, inplace=True) return neighbors_df # Return the dataframe specified in the doc_string def user_user_recs_part2(user_id, m=10): ''' INPUT: user_id - (int) a user id m - (int) the number of recommendations you want for the user OUTPUT: recs - (list) a list of recommendations for the user by article id rec_names - (list) a list of recommendations for the user by article title Description: Loops through the users based on closeness to the input user_id For each user - finds articles the user hasn't seen before and provides them as recs Does this until m recommendations are found Notes: * Choose the users that have the most total article interactions before choosing those with fewer article interactions. * Choose articles with the articles with the most total interactions before choosing those with fewer total interactions. 
''' # Your code here recs = [] rec_names = [] try: # articles alread read by user (we don't want to recommend these) article_ids, article_names = get_user_articles(user_id) except KeyError: #user does not exist print('User does not exist, Recommending Top Articles') recs = get_top_article_ids(m) rec_names = get_top_articles(m) return recs, rec_names # Create user_article_df and compute interactions_count of each article dict_1 = {'article_id': article_ids, 'article_name': article_names} user_article_df = pd.DataFrame(dict_1) user_article_df['art_inter_count'] = user_article_df['article_id'].apply(lambda x:\ df[df['article_id'] == float(x)]\ ['user_id'].count()) # Sort user_article_df by interactions_count of each article user_article_df.sort_values(by=['art_inter_count'], ascending=False, inplace=True) # Get sorted article id's for provided user article_ids_sorted = user_article_df['article_id'].astype(float).values # Similar users to provided user by using new function get_top_sorted_users() similar_users = get_top_sorted_users(user_id)['neighbor_id'].values.tolist() for sim_user in similar_users: sim_user_article_ids, sim_user_article_names = get_user_articles(sim_user) # Compute interactions_count of each article for similar user # Create another dataframe sim_user_article_df and compute article interactions count for similar user dict_2 = {'sim_user_article_id': sim_user_article_ids, 'sim_user_article_name': sim_user_article_names} sim_user_article_df = pd.DataFrame(dict_2) sim_user_article_df['sim_user_art_inter_count'] = sim_user_article_df['sim_user_article_id'].\ apply(lambda x: df[df['article_id'] == float(x)]\ ['user_id'].count()) # Sort sim_user_article_df by interactions_count of each article sim_user_article_df.sort_values(by=['sim_user_art_inter_count'], ascending=False, inplace=True) # Get sorted article id's for similar user sim_user_article_ids_sorted = sim_user_article_df['sim_user_article_id'].astype(float).values #Obtain recommendations for each similar user new_recs = np.setdiff1d(np.array(sim_user_article_ids_sorted), np.array(article_ids_sorted), assume_unique=True) # Update recs with new recs for ele in new_recs: if str(ele) not in recs: recs.append(str(ele)) # If we have enough recommendations exit the loop if len(recs) >= m: break recs = recs[:m] rec_names = get_article_names(recs) return recs, rec_names # - # Quick spot check - don't change this code - just use it to test your functions rec_ids, rec_names = user_user_recs_part2(20, 10) print("The top 10 recommendations for user 20 are the following article ids:") print(rec_ids) print() print("The top 10 recommendations for user 20 are the following article names:") print(rec_names) # `5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below. 
# Find the user that is most similar to user 1
find_similar_users(1)[0]

# Find the 10th most similar user to user 131
find_similar_users(131)[9]

### Tests with a dictionary of results
user1_most_sim = find_similar_users(1)[0]  # Find the user that is most similar to user 1
user131_10th_sim = find_similar_users(131)[9]  # Find the 10th most similar user to user 131

# +
## Dictionary Test Here
sol_5_dict = {
    'The user that is most similar to user 1.': user1_most_sim,
    'The user that is the 10th most similar to user 131': user131_10th_sim,
}

t.sol_5_test(sol_5_dict)
# -

# `6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.

# **1. For a new user, we cannot use any recommendation method that relies on user similarity. Since a new user has most likely not read any articles yet, we cannot compute a similarity metric for them. This is known as the cold start problem, so collaborative filtering cannot make recommendations for a brand-new user. Instead, we can fall back on rank-based recommendations, i.e. recommend the most popular articles to new users.**
#
# **2. The downside of this approach is that it could skew the recommendation algorithm later on: it would consider new users who have interacted with all the same articles to be similar, even though this is only because they were all shown the same articles by the rank-based method to begin with.**
#
# **3. Measuring article popularity by the number of interactions also creates a feedback loop: articles recommended in the new-user scenario are likely to get more hits, which further increases their chances of being shown to the next new user.**

# `7.` Using your existing functions, provide the top 10 recommended articles you would provide for a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.

# Getting recommendations for new user '0.0', who does not exist
# Here we made sure that when collaborative filtering hits the cold start problem,
# we fall back to rank-based recommendations
user_user_recs_part2('0.0')

# +
new_user = '0.0'

# What would your recommendations be for this new user '0.0'?  As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to 
new_user_recs = get_top_article_ids(10)  # Your recommendations here

set(new_user_recs)

# +
assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."

print("That's right! Nice job!")
# -

# ### Part IV: Content Based Recommendations
#
# Another method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns holds content-related information.
#
# `1.` Use the function body below to create a content based recommender.
Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations. # + # import libraries import re import nltk from nltk.corpus import stopwords from nltk.tokenize import word_tokenize from nltk.stem import WordNetLemmatizer # import warnings filter from warnings import simplefilter # ignore all future warnings simplefilter(action='ignore', category=FutureWarning) nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger', 'stopwords']) # - # Define function tokenize to normalize, tokenize and lemmatize text string def tokenize(text): """Normalize, tokenize and lemmatize text string Args: text: string, String containing text for processing Returns: clean_tokens: list, List containing normalized and lemmatized word tokens """ # Replace URL links in text string with string 'urlplaceholder' url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+' detected_urls = re.findall(url_regex, text) for url in detected_urls: text = text.replace(url, "urlplaceholder") # Substitute characters in text string which match regular expression r'[^a-zA-Z0-9]' # with single whitespace text = re.sub(r'[^a-zA-Z0-9]', ' ', text) # Get word tokens from text string tokens = word_tokenize(text) # Instantiate WordNetLemmatizer lemmatizer = WordNetLemmatizer() # Get stop words in 'English' language stop_words = stopwords.words("english") # Clean tokens clean_tokens = [] for tok in tokens: # convert token to lowercase as stop words are in lowercase tok_low = tok.lower() if tok_low not in stop_words: # Lemmatize token and remove the leading and trailing spaces from lemmatized token clean_tok = lemmatizer.lemmatize(tok_low).lower().strip() clean_tokens.append(clean_tok) return clean_tokens def make_content_recs(_id, _id_type='user', m=10, df=df): ''' INPUT: _id (str) - either a user_id or article_id _id_type (str)- "user" or "article" m (int) - number of recommendations to return OUTPUT: recs (list of str) - list of article ids that are recommended rec_names (list) - list of article names that are recommended Description: This content based recommender looks at the articles that the user has interacted with. It goes through each article title and use the NLTK library, finds the most common words (related to content) throughout all the articles. Based on these most common words, the recommender looks at the sums of words in the title of each article, and based on the number of matches as well as the general popularity of the article it gives back the best recommendations. 
''' recs, rec_names = None, None if _id_type == 'user': user_id = _id try: #get already read articles article_ids, _ = get_user_articles(int(float(user_id))) except KeyError: #user does not exist print('User does not exist, Recommending Top Articles') recs = get_top_article_ids(m) rec_names = get_article_names(recs) return recs, rec_names elif _id_type == 'article': a_id = str(_id) #Check if article exists in df if a_id in df['article_id'].values.astype(str).tolist(): article_ids = [] article_ids.append(a_id) else: #user does not exist print('Article does not exist, Recommending Top Articles') recs = get_top_article_ids(m) rec_names = get_article_names(recs) return recs, rec_names else: print("Please enter _id_type_ correctly. id_type must be 'user' or 'article'") return recs, rec_names # Create dataframe df_title without any duplicates of 'article_id' which would give unique titles df_title = df.drop_duplicates(subset=['article_id'], keep='first').sort_values(by=['article_id']).reset_index() df_title.drop(columns=['index'], inplace=True) titles = df_title[df_title['article_id'].isin(np.array(article_ids, dtype=float))]['title'] #tokenize the text in each article title tokens_list = tokenize(titles.str.cat(sep=' ')) #find the top occuring words top_words = pd.value_counts(tokens_list).sort_values(ascending=False)[:10].index.tolist() # Count number of occurences of each top word in all article titles (this measures similarity) word_matches_dict={} for word in top_words: word_count = pd.Series(df_title['title'].str.count(word).fillna(0)) #get occurences of each word in title word_matches_dict[word] = word_count # Create dataframe df_top_words with dict word_matches_dict df_top_words = pd.DataFrame(word_matches_dict) # num_cols == num of most common words(which can be atmost 10) df_top_words['total_matches'] = df_top_words.sum(axis=1) df_top_words['article_id'] = df_title['article_id'] # Get article interactions count and create another dataframe df_art_int art_inter_count = df.groupby('article_id')['user_id'].count() df_art_int = pd.DataFrame({'art_inter_count': art_inter_count}).reset_index() # Merging two dataframes df_top_words & df_art_int df_top_words = df_top_words.merge(df_art_int, on='article_id') # Sort df_top_words by total_matches & article interactions count df_top_words.sort_values(by = ['total_matches', 'art_inter_count'], ascending=False, inplace=True) #drop already read articles df_recs = df_top_words[~df_top_words['article_id'].isin(np.array(article_ids, dtype=float))] # Get recs and rec_names recs = df_recs['article_id'][:m].values.astype(str).tolist() rec_names = get_article_names(recs) return recs, rec_names # `2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender? 
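# One improvement discussed in the summary below is to use the richer text available in `df_content`
# (for example `doc_description`, one of the columns named in the Part IV introduction) instead of only
# the titles in `df`. The cell below is a minimal sketch of what that could look like using TF-IDF and
# cosine similarity; it assumes scikit-learn is available, and the `content_recs_from_description`
# helper is hypothetical and separate from the recommender implemented above.

# +
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def content_recs_from_description(article_id, m=10, df_content=df_content):
    '''Return the m article_ids in df_content whose doc_description is most similar
    (by TF-IDF cosine similarity) to the description of the given article_id.'''
    docs = df_content['doc_description'].fillna('')
    doc_vecs = TfidfVectorizer(stop_words='english').fit_transform(docs)

    # position of the query article inside df_content
    pos = int(np.where(df_content['article_id'].values == article_id)[0][0])

    sims = cosine_similarity(doc_vecs[pos], doc_vecs).flatten()
    ranked = sims.argsort()[::-1]                     # most similar first
    ranked = [i for i in ranked if i != pos][:m]      # drop the query article itself
    return df_content['article_id'].values[ranked].tolist()

# example call (any article_id present in df_content would work):
# content_recs_from_description(df_content['article_id'].iloc[0], m=5)
# -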
# Number of unique articles in df
df['article_id'].nunique()

# Number of unique articles in df_content
df_content['article_id'].nunique()

# Find common articles in df & df_content
art_df_set = set(df['article_id'].values.astype(int).tolist())
art_df_content_set = set(df_content['article_id'].values.astype(int).tolist())
len(art_df_set.intersection(art_df_content_set))

# Number of unique articles that were interacted with (appear in df) but are not in df_content
df['article_id'].nunique() - len(art_df_set.intersection(art_df_content_set))

# Number of unique articles that are in df_content but were never interacted with (do not appear in df)
df_content['article_id'].nunique() - len(art_df_set.intersection(art_df_content_set))

# **This content based recommender looks at the articles the user has interacted with. Using the NLTK-based tokenizer above, it finds the most common words across the titles of those articles.**
#
# **Based on these most common words, the recommender then looks at every article that has at least one interaction, counts how often each common word occurs in its title, and sums those frequencies into a total match score. Articles are ranked by this match score and, as a tie-breaker, by their overall popularity (number of interactions), and the top-ranked unseen articles are returned as recommendations.**
#
# **If the user has not read any articles yet (i.e. a new user), we cannot make content based recommendations, so the recommender falls back to returning the most popular articles via the rank-based method.**
#
# **There is a lot of room for improvement and optimization. For example, one could build a custom NLTK corpus that filters out uninformative article words; currently I used the standard NLTK corpora.**
#
# **Currently the recommender only uses the titles of articles that appear in df, i.e. the 714 unique articles with at least one interaction. Of those 714, only 437 are also present in df_content, which means 277 interacted-with articles have no doc_body, doc_description or doc_full_name information in df_content. Conversely, 614 unique articles appear in df_content but were never interacted with, so they do not appear in df at all. In other words, 277 unique articles in df are missing information in df_content, and 614 unique articles in df_content are missing interaction information in df.**
#
# **We could have let the recommender also use the text in doc_body, doc_description and doc_full_name from df_content, but only the 437 common articles would benefit; the other 277 interacted-with articles would not. Because doc_body and doc_description contain much more text than a title, mixing the two sources would bias the common-word counts against those 277 articles. To treat all 714 interacted-with articles equally, I therefore used only the title column from df.**
#
# **We could also have gone the other way and used only the content in df_content. That would cover all 1,051 articles in df_content, but it has its own downsides. First, if a user interacted with an article that is not in df_content, we could not make any content based recommendation from that interaction. Second, even when the article is in df_content, extracting top words from the title and then matching them against different fields such as doc_body and doc_description is not a sound way to measure similarity; in my opinion similarity should be computed between the same features. A df_content-only approach fits best when we are given just an article_id: we could extract the top words from doc_body, doc_description and doc_full_name and recommend similar articles, but that approach ignores user-article interactions entirely.**
#
# **So for content based recommendations there are many cases and sub-cases depending on whether the recommender should use df only, df_content only, or both; the right choice depends on the specific problem at hand. Since this part is an extra, I am keeping the current version of the content based recommender.**

# `3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.

# make recommendations for a brand new user
make_content_recs('0.0', _id_type='user')

# make recommendations for a user who has only interacted with article id '1427.0'
make_content_recs('1427.0', _id_type='article')

# ### Part V: Matrix Factorization
#
# In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.
#
# `1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook.

# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.p')

# quick look at the matrix
user_item_matrix.head()

# Shape of user_item_matrix
user_item_matrix.shape

# `2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.

# Perform SVD on the User-Item Matrix Here
u, s, vt = np.linalg.svd(user_item_matrix)  # use the built in to get the three matrices

# Shape of u, s & vt matrix
print(u.shape, s.shape, vt.shape)

# **Our situation is different from the one in the lesson because this matrix does not have any missing values. SVD can only be performed when there are no missing values in the matrix.
Even with just one nan value we cannot perform SVD. In this user_item_matrix, we have 0 where user has not interacted with the article and 1 if user has interacted with the article, no matter how many times.** # `3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features. # + # Compute prediction error and plot accuracy vs. number of latent features num_latent_feats = np.arange(10,700+10,20) sum_errs = [] for k in num_latent_feats: # restructure with k latent features s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :] # take dot product user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new)) # compute error for each prediction to actual value diffs = np.subtract(user_item_matrix, user_item_est) # total errors and keep track of them err = np.sum(np.sum(np.abs(diffs))) sum_errs.append(err) plt.plot(num_latent_feats, 1 - (np.array(sum_errs))/(user_item_matrix.shape[0]*user_item_matrix.shape[1])); plt.xlabel('Number of Latent Features'); plt.ylabel('Accuracy'); plt.title('Accuracy vs. Number of Latent Features'); # - # `4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below. # # Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below: # # * How many users can we make predictions for in the test set? # * How many users are we not able to make predictions for because of the cold start problem? # * How many articles can we make predictions for in the test set? # * How many articles are we not able to make predictions for because of the cold start problem? 
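# The four counts asked for above come straight from set arithmetic: a test user or article can only be
# predicted if it also appears in the training data. A minimal self-contained sketch, using the same
# head/tail split as the cells below (the variable names here are ad hoc and separate from the ones
# created in the next cells):

# +
toy_train, toy_test = df.head(40000), df.tail(5993)

train_user_set, test_user_set = set(toy_train['user_id']), set(toy_test['user_id'])
train_art_set, test_art_set = set(toy_train['article_id']), set(toy_test['article_id'])

print('predictable test users:   ', len(test_user_set & train_user_set))
print('cold-start test users:    ', len(test_user_set - train_user_set))
print('predictable test articles:', len(test_art_set & train_art_set))
print('cold-start test articles: ', len(test_art_set - train_art_set))
# -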
# Shape of df df.shape # + df_train = df.head(40000) df_test = df.tail(5993) def create_test_and_train_user_item(df_train, df_test): ''' INPUT: df_train - training dataframe df_test - test dataframe OUTPUT: user_item_train - a user-item matrix of the training dataframe (unique users for each row and unique articles for each column) user_item_test - a user-item matrix of the testing dataframe (unique users for each row and unique articles for each column) test_idx - all of the test user ids test_arts - all of the test article ids ''' # Your code here user_item_train = create_user_item_matrix(df_train) user_item_test = create_user_item_matrix(df_test) test_idx = user_item_test.index.tolist() test_arts = user_item_test.columns.tolist() return user_item_train, user_item_test, test_idx, test_arts user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test) # - # Shape of user_item_train matrix user_item_train.shape # Shape of user_item_test matrix user_item_test.shape # Find number of users which are common to both the training and testing dataframe len(set(user_item_train.index.tolist()).intersection(set(user_item_test.index.tolist()))) # Find number of articles which are common to both the training and testing dataframe len(set(user_item_train.columns.tolist()).intersection(set(user_item_test.columns.tolist()))) # + # Replace the values in the dictionary below a = 662 b = 574 c = 20 d = 0 sol_4_dict = { 'How many users can we make predictions for in the test set?': c, 'How many users in the test set are we not able to make predictions for because of the cold start problem?': a, 'How many movies can we make predictions for in the test set?': b, 'How many movies in the test set are we not able to make predictions for because of the cold start problem?': d } t.sol_4_test(sol_4_dict) # - # `5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`. # # Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data. 
# fit SVD on the user_item_train matrix u_train, s_train, vt_train = np.linalg.svd(user_item_train) # fit svd similar to above then use the cells below # Find shape of u_train, s_train & vt_train matrix print(u_train.shape, s_train.shape, vt_train.shape) # + # Use these cells to see how well you can use the training # decomposition to predict on test data # Find users which are common to both the training and testing dataframe user_common = np.array(list(set(user_item_train.index.tolist()).intersection(set(user_item_test.index.tolist())))) print('20 user_ids common to both training & testing dataframe are :\n{}'.format(user_common)) # Get index of these 20 user_id's in matrix u_train print('\nindex of these 20 user_ids in the u_train matrix are : \n{}'.format(user_common - 1)) # - # Get u_train matrix only for 20 common users u_train_common = u_train[user_common - 1, :] # Get shape of u_train matrix only for 20 common users u_train_common.shape # + # Find articles which are common to both the training and testing dataframe article_common = list(set(user_item_train.columns.tolist()).intersection(set(user_item_test.columns.tolist()))) #print('article_ids common to both training & testing dataframe are :\n{}'.format(article_common)) # Get index of these common article_id's in matrix u_train article_pos_u_train = [user_item_train.columns.tolist().index(ele) for ele in article_common] #print('\nindex of these common article_ids in the u_train matrix are : \n{}'.format(article_pos_u_train)) # - # Get vt_train matrix only for common articles vt_train_common = vt_train[:, article_pos_u_train] # Get shape of vt_train matrix for common articles vt_train_common.shape # Get 20 common users in user_item_test dataframe user_item_test_common = user_item_test.loc[user_common,:] # Get shape of user_item_test_common user_item_test_common.shape # Get shapes of u_train_common, s_train & vt_train_common matrix print(u_train_common.shape, s_train.shape, vt_train_common.shape) # + #make predictions based on training set SVD for the overlapping 20 users that are also in the test set #compare these predictions with the actual test matrix to get the test error for 20 common users # Also, make predictions for the entire training set # Compare the predictions with the original values in the training set to get the training error for the entire training set # Also, compute the f1-score of the training data as well as for the common test data num_latent_feats = np.arange(10,700+10,20) sum_errs_train = [] sum_errs_test_common = [] f1_train = [] f1_test_common = [] for k in num_latent_feats: # restructure with k latent features s_train_new, u_train_new, vt_train_new = np.diag(s_train[:k]), u_train[:, :k], vt_train[:k, :] s_train_common_new, u_train_common_new, vt_train_common_new = np.diag(s_train[:k]), u_train_common[:, :k],\ vt_train_common[:k, :] # take dot product user_item_train_est = np.around(np.dot(np.dot(u_train_new, s_train_new), vt_train_new)) user_item_test_common_est = np.around(np.dot(np.dot(u_train_common_new, s_train_common_new), vt_train_common_new)) # compute error for each prediction to actual value diffs_train = np.subtract(user_item_train, user_item_train_est) diffs_test_common = np.subtract(user_item_test_common, user_item_test_common_est) # total errors and keep track of them err_train = np.sum(np.sum(np.abs(diffs_train))) sum_errs_train.append(err_train) err_test_common = np.sum(np.sum(np.abs(diffs_test_common))) sum_errs_test_common.append(err_test_common) # compute f1 score (macro) for each 
prediction to actual value
    f1_train.append(f1_score(np.array(user_item_train).flatten(),
                             user_item_train_est.flatten(), labels=[1.0], average='macro'))
    f1_test_common.append(f1_score(np.array(user_item_test_common).flatten(),
                                   user_item_test_common_est.flatten(), labels=[1.0], average='macro'))

# +
# Plot Accuracy vs. Number of Latent Features for the training data and for the
# common test data (the test users that also appear in the training data)
fig1, ax1 = plt.subplots()
ax1.plot(num_latent_feats, 1 - (np.array(sum_errs_train))/(user_item_train.shape[0]*user_item_train.shape[1]), label='Train')
ax1.plot(num_latent_feats, 1 - (np.array(sum_errs_test_common))/(user_item_test_common.shape[0]*\
                                                                 user_item_test_common.shape[1]), label='Test')
handler, label = ax1.get_legend_handles_labels()  # use ax1 here (ax is not defined)
ax1.legend(handler, label, loc='center right')
ax1.grid(linestyle='--')
ax1.set_title('Accuracy vs. Number of Latent Features')
ax1.set_xlabel('Number of Latent Features')
ax1.set_ylabel('Accuracy')
plt.show()

# +
# Plot F1 score vs. Number of Latent Features for the training data and for the
# common test data (the test users that also appear in the training data)
fig2, ax2 = plt.subplots()
ax2.plot(num_latent_feats, f1_train, label='Train f1 score (macro)')
ax2.plot(num_latent_feats, f1_test_common, label='Test f1 score (macro)')
handler, label = ax2.get_legend_handles_labels()
ax2.legend(handler, label, loc='center right')
ax2.grid(linestyle='--')
ax2.set_title('F1 score (macro) vs. Number of Latent Features')
ax2.set_xlabel('Number of Latent Features')
ax2.set_ylabel('f1-score (macro)')
plt.show()
# -

# `6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles?

# **1. In the accuracy plot above, the test accuracy curve moves in the opposite direction of the training accuracy curve: adding latent features makes the model fit the training set better but generalize worse to the test data, i.e. it overfits. Based on this I would keep relatively few latent features. Since only 20 users appear in both the training and testing sets, there is very little data for judging how well the SVD-based predictions match the actual values. Although the accuracy plot looks impressive, this is largely due to the class imbalance between 1's and 0's: the matrix is mostly zeros, so even a small number of latent features reproduces most of the original matrix. We could also shuffle the data (e.g. a shuffled train/test split) so that the model is trained on a more diverse set of interactions and generalizes better when predicting values for the test set.**
#
# **2. Because actual interactions are rare compared to non-interactions (the classes are imbalanced), accuracy is not an appropriate metric here. The test F1 score increases until roughly 20 latent features and then decreases as more latent features are added, so the model starts to over-fit beyond about 20 latent features.
Test f1 score is very low and the best number of latent feature as per f1-score curve appears to be about 20.** # # **3. I would not yet implement a recommendation system solely using SVD as the training and testing sample is still quite small. Since we only have overlap of a few users and some articels between training and testing set, the SVD recommendations does not work well in this case. Nevertheless, this approach showed the benefits and possibility of training and testing sets even with recommendation systems. The same training and testing methodology could apply across the other recommendation methods (collaborative filtering, content based etc).** # # **4. As an alternative to the offline approach we used here, we could do an online approach where we could run an A/B experiment to determine the impacts of implementing one or more recommendation systems into our user base. A simple A/B experiment for this situation might be to randomly assign half of users to a control group that receives no recommendations. A second group randomly receives recommendations using a mix of the methods provided above.** # # **5. For this, we could split the users by cookie based diversion, so that an equal number of users are split between A and B groups. This would be the invariant metric.** # # **6. The evaluation metrics for this scenario could be like(the mean/median number of interactions by users in each group, rate of clicks on the recommended articles from the recommendation section, time spent on article after click through, rate of users that read/scroll to the end of the article).** # # **7. We could then perform a hypothesis test where the null is that there is no difference in number of interactions against an alternative that there is a difference (or that the recommendation system increases the number of user-article interactions).** # # **8. We could then use some reasonable alpha level to check each evaluation metric for statistical significance and to understand if the recommendation system increases user engagement. In that case, we can move forward using the results as a basis for using the recommendation system.** # Create .html file or .pdf file of the notebook from subprocess import call call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.3 64-bit (''aiml1'': conda)' # metadata: # interpreter: # hash: 4679b77d4638167d678d58301c2a68afb4de0bda7d7a8bd5f7f4da2854807b37 # name: 'Python 3.8.3 64-bit (''aiml1'': conda)' # --- # # Numpy Intro # # ## Reference: # # 1. https://www.w3schools.com/python/numpy_intro.asp # 2. https://numpy.org/ # 3. 
http://www.learningaboutelectronics.com/Articles/How-to-create-an-array-of-random-integers-in-Python-with-numpy.php # # ## Getting started # + import numpy as np arr = np.array([1,2,3,4,5]) print(arr) print(type(arr)) print(np.__version__) # - # ## Using tuple to create array arr = np.array((1,2,3,4,5,6,7)) print(arr) # ## Dimensions # + # 0 dimensional, with single value arr = np.array(42) print(arr) # check number of dimensions print("# of dimensions : ", arr.ndim) # + # 1 dimensional, with one array arr = np.array([7,14,21,28,35,42]) print(arr) # check number of dimensions print("# of dimensions : ", arr.ndim) # + # 2 dimensional, an array that has 1D arrays as elements arr = np.array([ [1,2], [3,4] ]) print(arr) # check number of dimensions print("# of dimensions : ", arr.ndim) # + # 3 dimensional, an array that has 2D arrays as its elements arr = np.array([ [ [1,2], [3,4] ], [ [5,6], [7,9] ] ]) print(arr) # check number of dimensions print("# of dimensions : ", arr.ndim) # - # ## Creating higher dimensional array using numpy arr = np.array([1,2,3,4], ndmin=5) print(arr) # check number of dimensions print("# of dimensions : ", arr.ndim) arr = np.array([ [[1,2,3,4],[5,6,7,8]], [[11,12,13,14],[15,16,17,18]] ], ndmin=5) print(arr) # check number of dimensions print("# of dimensions : ", arr.ndim) # ## Array indexing # Indexing is used to access the elemets in the array # + print('accessing the 1st level') print('-'*20) print(arr[0]) print('-'*20) print('accessing the 2nd level') print('-'*20) print(arr[0][0]) print('-'*20) print('accessing the 3rd level') print('-'*20) print(arr[0][0][0]) print('-'*20) print('accessing the 4th level') print('-'*20) print(arr[0][0][1][1]) print('-'*20) print('accessing the 5th level') print('-'*20) print(arr[0][0][1][1][3]) # - # ## Array can be accessed by passing an array of indices # # arr\[l,m,n,o\] => returns the 4D array element, if its present # + print('accessing the 1st level') print('-'*20) print(arr[0]) print('-'*20) print('accessing the 2nd level') print('-'*20) print(arr[0,0]) print('-'*20) print('accessing the 3rd level') print('-'*20) print(arr[0,0,0]) print('-'*20) print('accessing the 4th level') print('-'*20) print(arr[0,0,1,1]) print('-'*20) print('accessing the 5th level') print('-'*20) print(arr[0,0,1,1,3]) # - # ## Negative indexing # + print('accessing the 3rd level, last element') print('-'*20) print(arr[0,0,-1]) print('-'*20) print('accessing the 4th level, last element') print('-'*20) print(arr[0,0,-1,-1]) print('-'*20) print('accessing the 5th level, second last element of the first element at 4th level') print('-'*20) print(arr[0,0,-1,-2,-2]) print('-'*20) # - # ## Array slicing # # Slicing helps in taking elements from one index to another index # # e.g. # # arr\[start:end\] # arr\[start:end:step\] # arr\[start:end,start,:end\] etc. 
# # + # 1D array arr = np.random.randint(1,100,10) print(arr) print("# of dims",arr.ndim) print("-"*20) print("printing elements from 2nd to 4th") print(arr[1:4]) print("printing elements from 4nd to end") print(arr[4:]) print("printing elements till 7th") print(arr[:7]) # - # ## Negative slicing # + arr = np.random.randint(1,100,10) print(arr) print("# of dims",arr.ndim) print("-"*20) print("printing 3rd last elements till 2nd last elemet") print(arr[-3:-1]) print("-"*20) print("printing 6th last elements till the last elemet") print(arr[-6:]) # - # ## using step # + arr = np.random.randint(1,100,10) print(arr) print("# of dims",arr.ndim) print("-"*20) print("elements at the 2nd index steps") print(arr[::2]) print("-"*20) print("elements at the 2nd index steps from 1st till 5th element") print(arr[:5:2]) print("-"*20) print("elements at even 2nd indices steps from 1st till 5th element") print(arr[1:5:2]) print("-"*20) # + # 2D array arr = np.random.randint(1,100, size=(2,5)) print(arr) print("# of dims",arr.ndim) print("-"*20) print("printing elements from index 0 ") print(arr[0:]) print("# of dims",arr[0:].ndim) print("-"*20) print("printing the first element from 2D array ") print(arr[0:1]) print("# of dims",arr[0:1].ndim) print("-"*20) print("printing the second elements from 2D array ") print(arr[1:]) print("# of dims",arr[1:].ndim) print("-"*20) print("printing the second element of the second element from 2D array ") print(arr[1:]) print("# of dims",arr[1:,1:2].ndim) print("-"*20) # + # 2D array arr = np.random.randint(1,100, size=(2,5)) print(arr) print("# of dims",arr.ndim) print("-"*20) print("printing the 3rd and 4th elements from all rows of 2D array") print(arr[0:,2:4]) print("# of dims",arr[0:,2:4].ndim) print("-"*20) print("printing the 3rd and 4th elements from the second row of 2D array") print(arr[1:2,2:3]) print("# of dims",arr[1:2,2:3].ndim) print("-"*20) # + # 3D array print("3 dimentional array, with 2 rows and 3 elements") arr = np.random.randint(1,10, size=(3,2,3)) print(arr) print("# of dims",arr.ndim) print("-"*20) print("3 dimentional array, with 5 rows and 7 elements") arr = np.random.randint(1,100, size=(3,5,7)) print(arr) print("# of dims",arr.ndim) print("-"*20) # + print("the first item in the 3D array using simple index gives 1D array") print(arr[0]) print("# of dims",arr[0].ndim) print("-"*20) print("the first item in the 3D array, using slice gives 3D array") print(arr[0:1]) print("# of dims",arr[0:1].ndim) print("-"*20) # - print("the second element in 3D array") print(arr[1:2]) print("# of dims",arr[0:1].ndim) print("-"*20) print("the 3rd and 4th element from the second element in 3D array") print(arr[1:2,2:4]) print("# of dims",arr[1:2,2:4].ndim) print("-"*20) # + print("the 6th element from the 3rd and 4th element from the second element in 3D array") print(arr[1:2,2:4,6:7]) print("# of dims",arr[1:2,2:4, 6:7].ndim) print("-"*20) print("the 6th element, from the 5th element from the second element in 3D array") """ 1. 1:2 idexes the 2nd element of 3D array 2. 4:5 indexes the 5th element of the array obtained as a result of step 1 3. 5:6 indexes the 6th element of the array obtained as a result of step 2 """ print(arr[1:2,4:5,5:6]) print("# of dims",arr[1:2, 4:5, 5:6].ndim) print("-"*20) # - # ## Data types # # Below is a list of all data types in NumPy and the characters used to represent them. 
# # i - integer # b - boolean # u - unsigned integer # f - float # c - complex float # m - timedelta # M - datetime # O - object # S - string # U - unicode string # V - fixed chunk of memory for other type ( void ) # # + arr = np.array([1, 2, 3, 4]) print(arr) print("datatype is : ",arr.dtype) print("-"*20) arr = np.array(["apple", 'banana', 'mango', 'peach']) print(arr) print("datatype is : ",arr.dtype) print("-"*20) print("with a defined datatype") arr = np.array([1,2,3,4], dtype="S") print(arr) print("datatype is : ",arr.dtype) print("-"*20) # - # For i, u, f, S and U we can define size as well. # + arr = np.array([1, 2, 3, 4], dtype='i4') print(arr) print(arr.dtype) print("-"*20) arr = np.array([1, 2, 3, 4], dtype='i8') print(arr) print(arr.dtype) print("-"*20) # - # ## Converting datatype of existing array # + arr = np.array([1.1, 2.1, 3.1]) print('Original array : ',arr) print(arr.dtype) print('-'*20) newarr = arr.astype('i') print('New array',newarr) print(newarr.dtype) print('-'*20) # + arr = np.array([1.1, 2.1, 3.1]) print('Original array : ',arr) print(arr.dtype) print('-'*20) newarr = arr.astype(int) print('New array',newarr) print(newarr.dtype) print('-'*20) # + arr = np.array([1.1, 2.1, 0.0]) print('Original array : ',arr) print(arr.dtype) print('-'*20) newarr = arr.astype(bool) print('New array',newarr) print(newarr.dtype) print('-'*20) # + arr = np.array([1.1, 2.1, 0.0]) print('Original array : ',arr) print(arr.dtype) print('-'*20) newarr = arr.astype(str) print('New array',newarr) print(newarr.dtype) print('-'*20) # - # ## Copy vs View # + print("changing the element at index 0 of the original array has no effect on copy") print("-"*20) arr = np.array([1, 2, 3, 4, 5]) print("arr before changing the element :",arr) x = arr.copy() arr[0] = 42 print("arr :",arr) print("x :",x) # + print("changing the element at index 0 of the original array has effect on view") print("-"*20) arr = np.array([1, 2, 3, 4, 5]) print("arr before changing the element :",arr) x = arr.view() arr[0] = 42 print("arr :",arr) print("x :",x) # - # ### Every NumPy array has the attribute `base` that returns None if the array owns the data. # # # + arr = np.array([1, 2, 3, 4, 5]) x = arr.copy() y = arr.view() print("base for copy : ",x.base) print("base for view : ",y.base , ", => returns the original array") # - # ## Shape print("3 dimentional array, with 5 rows and 7 elements") arr = np.random.randint(1,100, size=(3,5,7)) print(arr) print("# of dims",arr.ndim) print("Shape of the array is : ", arr.shape) print("Size of the array is : ", arr.size) print("-"*20) # ## Reshape # + print("3 dimentional array, with 4 rows and 2 elements") arr = np.random.randint(1,100, size=(3,4,2)) print(arr) print("# of dims",arr.ndim) print("Shape of the array is : ", arr.shape) print("Size of the array is : ", arr.size) print("-"*20) print("Reshaping array to 4 dimentional array, with 2 rows of 3, 2D arrays") """ 4th dimension has 2 3D arrays 3rd dimesion has 3 2D arrays 2nd dimension has 2 arrays with 2 elements each 1st dimention has 2 elements """ newarr = arr.reshape(2,3,2,2) # should be equal to dimension print(newarr) print("# of dims in newarr",newarr.ndim) print("Shape of the newarr is : ", newarr.shape) # why this is not changing? print("Size of the newarr is : ", newarr.size) print("-"*20) # + # can reshape only to same size array # Unknown dimension # Note: We can not pass -1 to more than one dimension. 
""" 4th dimension has 6 3D arrays 3rd dimesion has 2 2D arrays 2nd dimension has 2 arrays with 1 elements each 1st dimention has 1 elements """ newarr = arr.reshape(-1,2,2,1) # should be equal to dimension, 4 in this case as we want 4D array print(newarr) print("# of dims in newarr",newarr.ndim) print("Shape of the newarr is : ", newarr.shape) print("Size of the newarr is : ", newarr.size) print("-"*20) # - # ## Flattening the arrays flatarr = newarr.reshape(-1) print(flatarr) print("# of dims in flatarr",flatarr.ndim) print("Shape of the flatarr is : ", flatarr.shape) print("Size of the flatarr is : ", flatarr.size) print("-"*20) # **Note:** There are a lot of functions for changing the shapes of arrays in numpy `flatten`, `ravel` and also for rearranging the elements `rot90`, `flip`, `fliplr`, `flipud` etc. These fall under Intermediate to Advanced section of numpy. print("flatten", newarr.flatten()) print("ravel ", newarr.ravel()) # ## Iterating # 1D array arr = np.random.randint(1,100,10) for i in arr: print(i) # 2D array arr = np.random.randint(1,100, size=(3,5)) for k,v in enumerate(arr): print("row-"+str(k)+" : ",v) # 3D array arr = np.random.randint(1,100, size=(3,5,2)) for i,x in enumerate(arr): print("row-"+str(i)+" : \n",x) print("-"*20) for j,y in enumerate(x): print("row-"+str(i)+","+str(j)+" : ",y) print("="*20) # 4D array arr = np.random.randint(1,100, size=(2,1,3,4)) for i,x in enumerate(arr): print("4D row-"+str(i)+" : \n",x) print("-"*20) for j,y in enumerate(x): print("3D row-"+str(i)+","+str(j)+" : \n",y) for k,z in enumerate(y): print("2D row-"+str(i)+","+str(j)+","+str(k)+" : ",z) for l,a in enumerate(z): print("1D row-"+str(i)+","+str(j)+","+str(k)+","+str(l)+" : ",a) print("="*20) # ## Iterating Arrays Using `nditer()` # # The function nditer() is a helping function that can be used from very basic to very advanced iterations. # + # 3D array arr = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) # prints each element in the nd array for x in np.nditer(arr): print(x) # - # ## Iterating Array With Different Data Types # # We can use `op_dtypes` argument and pass it the expected datatype to change the datatype of elements while iterating. # # NumPy does not change the data type of the element in-place (where the element is in array) so it needs some other space to perform this action, that extra space is called buffer, and in order to enable it in `nditer()` we pass `flags=['buffered']`. # + # 3D array arr = np.random.randint(1,100, size=(2,2,2)) for i in np.nditer(arr, op_dtypes=['S'], flags=['buffered']): print(i) # - # ## nditer step size # # # + # 3D array arr = np.random.randint(1,100, size=(5,3,8)) print(arr) print('-'*20) print(arr[1:5:2, fc00:e968:6179::de52:7100,3::4]) """ 1. identify the rows from 3D array, 1:5:2 => every other row starting from 2nd element, (index 1 and 3) 2. get row at index 0 and 2 from array at index 1 and 3 from prev step 3. get every 4th element, starting form index 3 from prev step result """ for x in np.nditer(arr[1:5:2, fc00:e968:6179::de52:7100,3::4]): print(x) # - # ## Enumerated Iteration Using `ndenumerate()` arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8]]) for idx, x in np.ndenumerate(arr): print(idx, x) arr = np.random.randint(1,100,size=(4,5,2)) for idx,x in np.ndenumerate(arr): print(idx,x) # ## Joining NumPy Arrays # # Joining means putting contents of two or more arrays in a single array. # In SQL we join tables based on a key, whereas in NumPy we join arrays by axes. 
# axis = 0 => rows # axis = 1 => cols # # - concatinate # - stack # - hstack # - vstack # - dstack # # # + # 1D concatination arr1 = np.random.randint(1,100,size=(10)) arr2 = np.random.randint(1,100,size=(10)) arr = np.concatenate((arr1,arr2)) print(arr1) print(arr2) print(arr) # + # 2D concatination arr1 = np.random.randint(1,100,size=(3,10)) arr2 = np.random.randint(1,100,size=(3,1))# just 1 col but same number of rows # arr2 = np.random.randint(1,100,size=(4,1))# just 1 col but different number of rows, this does not work in case of axis 1, will work for axis = 0 # with axis = 1 arr = np.concatenate((arr1,arr2), axis=1) print(arr1) print("-"*20) print(arr2) print("-"*20) print(arr) # - # ## Stack # + arr1 = np.random.randint(1,50,size=(3,7)) arr2 = np.random.randint(1,50,size=(3,7)) arr = np.stack((arr1,arr2)) print(arr1) print("-"*20) print(arr2) print("-"*20) print("this puts arr1 on top of arr2 and creates 3D array as arr1 and arr2 are 2D arrays, has 2 rows") print(arr) print("# of dims",arr.ndim) print("shape of dims",arr.shape) # + arr1 = np.random.randint(1,50,size=(3,7)) arr2 = np.random.randint(1,50,size=(3,7)) arr = np.stack((arr1,arr2), axis=1) print(arr1) print("-"*20) print(arr2) print("-"*20) print("this puts row at index i from arr1 and arr2 on top of each other and creates 3D array as arr1 and arr2 are 2D arrays, has 3 rows now") print(arr) print("# of dims",arr.ndim) print("shape of dims",arr.shape) # + # hstack arr1 = np.random.randint(1,50,size=(3,7)) arr2 = np.random.randint(1,50,size=(3,7)) arr = np.hstack((arr1,arr2)) print(arr1) print("-"*20) print(arr2) print("-"*20) print("concatinates corresponding row from arr1 and arr2 at index i, changes number of columns ") print(arr) print("# of dims",arr.ndim) print("shape of dims",arr.shape) # + # vstack arr1 = np.random.randint(1,50,size=(3,7)) arr2 = np.random.randint(1,50,size=(3,7)) arr = np.vstack((arr1,arr2)) print(arr1) print("-"*20) print(arr2) print("-"*20) print("stacks one array on top of another array, changes number of rows") print(arr) print("# of dims",arr.ndim) print("shape of dims",arr.shape) # + # dstack arr1 = np.random.randint(1,50,size=(2,7)) arr2 = np.random.randint(1,50,size=(2,7)) arr3 = np.random.randint(1,50,size=(2,7)) arr = np.dstack((arr1,arr2,arr3)) print(arr1) print("-"*20) print(arr2) print("-"*20) print("creates rows in 3D array = # of rows in arr1,arr2,arr3, 2 in this case") print("creates colums = # of arrays, arr1,arr2,arr3 = 3") print("creates rows in each 2D array = # of cols in arr1,arr2,arr3 = 7") print(arr) print("# of dims",arr.ndim) print("shape of dims",arr.shape) # - # ## Splitting NumPy Arrays # # Splitting is reverse of joining # # we use `array_split` method as `split` method does not adjust if the number of elemets dont match # # split array into 3 arrays arr = np.random.randint(1,50,size=(10)) newarr = np.array_split(arr, 3) print(arr) print("-"*20) print() print(newarr) # split array into 4 arrays arr = np.random.randint(1,50,size=(6)) newarr = np.array_split(arr, 4) print(arr) print("-"*20) print(newarr) # + # 2D array, split array into 3 arrays, col axis, so rows are adjusted arr = np.random.randint(1,50,size=(5,3)) newarr = np.array_split(arr, 3) print(arr) print("-"*20) # array will be 2D, number of columns will not change number of rows will be adjuted if required for k,val in np.ndenumerate(newarr): print(k,"\n",val) # - # 2D array, split array into 2 arrays, row axis, so cols are adjusted arr = np.random.randint(1,50,size=(4,3)) newarr = np.array_split(arr, 2, 
axis=1) print(arr) print("-"*20) print(newarr) print("-"*20) # array will be 2D, number of rows will not change number of cols will be adjuted if required # first roe has 2 cols # second row has 1 col # + # hsplit arr = np.random.randint(1,50,size=(2,8)) newarr = np.hsplit(arr, 4) print(arr) print("-"*20) print(newarr) print("-"*20) # need 4 arrays # first 2 cols for each row make the first array # 3rd and 4th cols for each row make the 2nd array # 5th and 6th cols for each row make the 3rd array # 7th and 8th cols for each row make the 4th array # - # **`Note:`** Similar alternates to `vstack()` and `dstack()` are available as `vsplit()` and `dsplit()`. # ## NumPy Searching Arrays # # Searching array returns the indices of the result # To search an array, use the `where()` method. arr = np.array([1, 2, 3, 4, 5, 4, 4]) x = np.where(arr == 4) print(x) # + # every third element till 50 arr = np.arange(1,50,3) print(arr) # index for every even number in the above list x = np.where(arr%2 == 0) print(x) # + # every fifth element till 50 arr = np.arange(1,50,5) print(arr) # index for every odd number in the above list x = np.where(arr%2 == 1) print(x) # - # ### search sorted arr = np.arange(2,50,2) print(arr) # search single value in sorted order x = np.searchsorted(arr,14) print(x) arr = np.arange(2,50,2) print(arr) # search single value in sorted order from right side x = np.searchsorted(arr, 14, side='right') # this one has issue print('Index of 14 is : ',x) print("Element at "+str(x)+" is : ",arr[x]) arr = np.arange(2,50,2) print(arr) # search single value in sorted order from right side x = np.searchsorted(arr, [14,48,40]) print(x) # ## Sorting Arrays # # putting elements in ordered sequence # + # number arr = np.array([3, 2, 0, 1]) print(arr,"\n", np.sort(arr),'\n----\n') # String arr = np.array(['banana', 'cherry', 'apple']) print(arr, "\n", np.sort(arr),"\n----\n") # boolean arr = np.array([True, False, True]) print(arr, "\n", np.sort(arr),"\n----\n") # 2D array arr = np.array([[3, 2, 4], [5, 0, 1]]) print(arr, "\n----\n", np.sort(arr)) # - # reversed sort arr = np.array([3, 2, 45, 0, 1]) print(arr, "\n----\n", np.sort(arr)[::-1]) print('-'*20) arr = np.array([[3, 2, 4], [5, 0, 1]]) print(arr, "\n----\n", np.sort(arr)[:,::-1]) # ## Filter arrays # # Getting some elements out of an existing array and creating a new array out of them is called filtering. # # In NumPy, you filter an array using a boolean index list. # # # # A boolean index list is a list of booleans corresponding to indexes in the array. # # If the value at an index is True that element is contained in the filtered array, if the value at that index is False that element is excluded from the filtered array. 
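# The examples that follow build the boolean list either element by element or from a single comparison; boolean masks can also be combined element-wise with `&`, `|` and `~`. A minimal sketch (the array and conditions here are illustrative, not taken from the examples below):

# +
arr = np.arange(1, 21)
mask = (arr % 2 == 0) & (arr > 10)   # even AND greater than 10
print(mask)
print(arr[mask])
# -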
arr = np.array([41, 42, 43, 44]) # print item at index 0 and 2 x = [True, False, True, False] newarr = arr[x] print(newarr) # + # print elements higher than 42 arr = np.array([41, 42, 43, 44]) # create empty filter array filter_arr = [] for i in arr: if i >42: filter_arr.append(True) else: filter_arr.append(False) # pass the filter_array to actual array newarr = arr[filter_arr] print(filter_arr) print(newarr) # + # Create a filter array that will return only even elements from the original array arr = np.arange(1,21) # create empty filter array filter_arr = [] for i in arr: if i %2 == 0: filter_arr.append(True) else: filter_arr.append(False) # pass the filter_array to actual array newarr = arr[filter_arr] print(filter_arr) print(newarr) # - # # Creating Filter Directly From Array # Create a filter array that will return only values higher than 42: arr = np.array([41, 42, 43, 44]) filter_arr = arr > 42 newarr = arr[filter_arr] print(filter_arr) print(newarr) # Create a filter array that will return only even elements from the original array: arr = np.array([1, 2, 3, 4, 5, 6, 7]) filtered_arr = arr %2 ==0 newarr = arr[filtered_arr] print(filter_arr) print(newarr) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import os import re import warnings warnings.filterwarnings("ignore") # + X_train = pd.read_csv("train.csv") X_test = pd.read_csv('test.csv') X_train = X_train.drop_duplicates(subset='text') X_train = X_train[['text', 'target']] X_test = X_test[['text']] # + REPLACE_NO_SPACE = re.compile("(\.)|(\;)|(\:)|(\!)|(\?)|(\,)|(\")|(\()|(\))|(\[)|(\])|(\d+)") REPLACE_WITH_SPACE = re.compile("()|(\-)|(\/)") NO_SPACE = "" SPACE = " " def preprocess_reviews(reviews): reviews = [REPLACE_NO_SPACE.sub(NO_SPACE, line.lower()) for line in reviews] reviews = [REPLACE_WITH_SPACE.sub(SPACE, line) for line in reviews] return reviews X_train = preprocess_reviews(X_train) X_test = preprocess_reviews(X_test) # - X_train # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import glob import json import os import matplotlib.pyplot as plt from matplotlib.ticker import AutoMinorLocator import numpy as np # - with open('config.json', 'r') as f: settings = json.load(f) results_dict = {} def gather_results(results_dict, log_file_path): """ """ log_file = '/'.join(log_file_path.split('/')[-2:]) results_dict[log_file] = {} with open(log_file_path, 'r') as f: data = f.read() data = data.strip().split('\n') for i, line in enumerate(data): if '#Parameter' in line: results = [data[i+2:i+7]][0] if 'Retrieval Execution Time' in line: results_dict[log_file]['total_time'] = line.split()[-1] # Gather ancillary data results_dict[log_file]['machine_type'] = log_file.split('/')[0] results_dict[log_file]['iteration'] = os.path.basename(log_file).split('.log')[0][-1] results_dict[log_file]['target'] = os.path.basename(log_file).split('_')[1] results_dict[log_file]['method'] = os.path.basename(log_file).split('_')[0] results_dict[log_file]['results'] = {} # Parse results in log file for item in results: item = item.split('INFO: ')[-1] parameter = item.split()[0] results_dict[log_file]['results'][parameter] = {} 
results_dict[log_file]['results'][parameter]['low_error'] = item.split()[1] results_dict[log_file]['results'][parameter]['median'] = item.split()[2] results_dict[log_file]['results'][parameter]['upper_error'] = item.split()[3] results_dict[log_file]['results'][parameter]['best_fit'] = item.split()[4] return results_dict def get_total_times(results_dict, target, machine_type, method): """ """ total_times = [] for log_file in results_dict: if target in log_file and machine_type in log_file and method in log_file: total_time = results_dict[log_file]['total_time'] hours, minutes, seconds = total_time.split(':') total_seconds = (int(hours) * 3600) + (int(minutes) * 60) + int(seconds) total_times.append(total_seconds) average_time = np.mean(total_times) return average_time log_files = glob.glob(os.path.join(settings['results_dir'], '*pu', '*.log')) for log_file_path in log_files: results_dict = gather_results(results_dict, log_file_path) # + targets = ['hd209458b', 'wasp-19b', 'hat-p-12b', 'hat-p-1b'] cpu_multinest_times = [get_total_times(results_dict, target, 'cpu', 'multinest') for target in targets] gpu_multinest_times = [get_total_times(results_dict, target, 'gpu', 'multinest') for target in targets] cpu_emcee_times = [get_total_times(results_dict, target, 'cpu', 'emcee') for target in targets] gpu_emcee_times = [get_total_times(results_dict, target, 'gpu', 'emcee') for target in targets] groups = np.arange(4) width = 0.2 plt.style.use('ggplot') fig = plt.figure(figsize=(9, 6), dpi=300) ax = fig.add_subplot(111) ax.set_ylabel('Mean Retrieval Time (Minutes)', fontweight='bold', fontsize=14) ax.set_xlabel('Exoplanet Name', fontweight='bold', fontsize=14) p1 = ax.bar(groups - 0.03 - width, cpu_multinest_times, width, color='firebrick', label='mulitnest, CPU') p2 = ax.bar(groups - 0.03 - (0.45*width), gpu_multinest_times, width, color='lightcoral', label='multinest, GPU') p3 = ax.bar(groups + 0.03 + (0.45*width), cpu_emcee_times, width, color='steelblue', label='emcee, CPU') p4 = ax.bar(groups + 0.03 + width, gpu_emcee_times, width, color='lightskyblue', label='emcee, GPU') ax.grid(False) ax.set_xticks([0, 1, 2, 3]) ax.set_xticklabels(('hd209458b', 'wasp-19b', 'hat-p-12b', 'hat-p-1b'), fontweight='bold') ax.xaxis.set_ticks_position('none') ax.set_yticks([0, 300, 600, 900, 1200, 1500, 1800, 2100, 2400]) ax.set_yticklabels(('0', '5', '10', '15', '20', '25', '30', '35', '40'), fontweight='bold') ax.tick_params(axis='both', which='major', labelsize=13) legend = plt.legend(edgecolor='dimgray', prop={'weight': 'bold', 'size': 14}) plt.setp(legend.get_texts(), color='dimgray') plt.tight_layout() plt.savefig('../figures/timing_results.png') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Adversarial Search Algorithm in Game Theory # # ## Mimimax Algorithm ## # #### _Constraints_: #### # 1. 2-player zero-sum (one player wins and the other loses) adversarial game # 2. Players take turns to make move # 3. Assume opponent plays optimal way # 4. Need a score(value) associated with each state (usaully from evaluation funciton) # 5. Need a winning and losing state # #### _Description_: #### # - Each possible move will generate a successor state, and player will choose the move that will lead to the state with highest score. 
# - There will be two kinds of layers, MAX layer and MIN layer # - MAX layer gets the maximum value of its children # - MIN layer gets the minimum value of its children # - MAX layer and MIN layer happening iteratively and repeately, +1 level for each layer and +1 depth for each pair,until the game tree reaches a terminal state or the maximum depth defined # #### _Example_: #### # ![image info](./minimax.jpg) # #### _Complexity_: #### # _Assmue on average each node has b successors and depth is d_ # # O(bˣbˣbˣ...ˣb) = O(b^d) class MinimaxPolicy(): def __init__(self, index=0, depth=2): """Abstract class of Algorithms using Minimax Policy By default 3 kinds of optimizer defined 1. Minimizer, returns the tupe with minimum first value 2. Maximizer, returns the tupe with maximum first value 3. Expectation_Adder, returns a tuple containing sum of first values and None Parameters ---------- index : int Current agent index in agent array. depth : int The depth of game tree going to expand. """ self.index = 0 self.depth = depth self.minimizer = lambda *iterable: min(iterable, key=lambda val: val[0]) self.maximizer = lambda *iterable: max(iterable, key=lambda val: val[0]) self.expectation_adder = lambda *iterable: (sum([val[0] for val in iterable]), None) def evaluationFunction(self, game_state): """ @todo To be implemented according to the rule of game """ raise NotImplementedError("To be implemented") def get_optimize_specifics(self, agent_index): """ Get optimizer and inital best score Abstract function to be defined in inheritor separately. Parameters ---------- agent_index : int The agent index in agent array. Returns ------- (function, (float, Action)) tuple of optimizer and inital best score from agent index """ raise NotImplementedError("To be implemented") def minimax(self, agent_index, game_state, depth, alpha=None, beta=None): """ Get optimizer and inital best score Abstract function to be defined in inheritor separately. Parameters ---------- agent_index : int The agent index in agent array. 
game_state: State State of the game depth: int Current depth alpha: int Alpha value if using alpha-beta pruning beta: int Beta value if using alpha-beta purning Returns ------- (function, (float, Action)) tuple of optimizer and inital best score from agent index """ # Check Terminal State or not if game_state.isWin() or game_state.isLose(): return (self.evaluationFunction(game_state), None) optimizer, best_score = self.get_optimize_specifics(agent_index) # Take one step ahead if agent_index + 1 < game_state.getNumAgents(): next_agent_index = agent_index + 1 next_depth = depth else: next_agent_index = 0 next_depth = depth + 1 # Traverse through possible successors legal_actions = game_state.getLegalActions(agent_index) for action in legal_actions: successor_state = game_state.generateSuccessor(agent_index, action) # Get score of current node if reaches the max depth # otherwise keep expanding if next_depth > self.depth: successor_score = (self.evaluationFunction(successor_state), None) else: successor_score = (self.minimax(next_agent_index, successor_state, next_depth, alpha, beta)[0], action) # Update Best score and alpha beta values if applies if optimizer == self.maximizer: best_score = optimizer(best_score, successor_score) alpha = alpha and optimizer(alpha, best_score) elif optimizer == self.minimizer: best_score = optimizer(best_score, successor_score) beta = beta and optimizer(beta, best_score) elif optimizer is self.expectation_adder: best_score = optimizer(best_score, (1./len(legal_actions) * successor_score[0], None)) else: raise NotImplementedError("To be implemented") # Pruning if applies if alpha and beta and alpha[0] >= beta[0]: return best_score return best_score class MinimaxAgent(MinimaxPolicy): def __init__(self, index, depth): """Agent using minimax algorithm Parameters ---------- index : int Current agent index in agent array. depth : int The depth of game tree going to expand. """ self._player_optimizer = (self.maximizer, (-float('inf'), None)) self._opponent_optimizer = (self.minimizer, (float('inf'), None)) return super().__init__(index=index, depth=depth) def evaluationFunction(self, game_state): """ Parameters ---------- game_state : State Game State. 
Returns ------- int Value associated with the game state """ game_state.get_score() def get_optimize_specifics(self, agent_index): """ Get optimizer and inital best score """ if agent_index == self.index: return (self.maximizer, (-float('inf'), None)) else: return (self.minimizer, (float('inf'), None)) def getAction(self, gameState): """ Returns the action associated with best score """ _, action = self.minimax(self.index, gameState, 1) return action # ## Alpha-Beta Pruning (Optimation Method of Minimax) ## # #### _Desciption_: #### # - There will be two variables storing evaluated max and min values, which are ⍺ and β respectively # - Initial value of ⍺ is -∞, and initial value of β is ∞ # - MAX layer only update ⍺ # - MIN layer only update β # - Pruning the rest whenever ⍺ >= β # #### _Example_: #### # ![image info](./alpha-beta.jpg) # ``` # Step 1: # MAX{ ⍺ = -∞ # β = ∞ # ↓ # MIN{3, 5, 10}, # MIN{2, a, b}, # MIN{7, 2, 3}, # } # # Step 2: # MAX{ ⍺ = -∞ # β = 3 # ↓ # MIN{3, 5, 10}, # MIN{2, a, b}, # MIN{7, 2, 3}, # } # # Step 3: # MAX{ ⍺ = -∞ # β = 3 # ↓ # MIN{3, 5, 10}, # MIN{2, a, b}, # MIN{7, 2, 3}, # } # # Step 4: # MAX{ ⍺ = 3 # MIN{3, 5, 10}, # β = ∞ # ↓ # MIN{2, a, b}, # MIN{7, 2, 3}, # } # # Step 5: # MAX{ ⍺ = 3 # MIN{3, 5, 10}, # β = 2(pruning because MIN{2, a, b} <= 2 <= 3, result of outer MAX will never fall on MIN{2, a, b}) # ↓ # MIN{2, a, b}, # MIN{7, 2, 3}, # } # # Step 6: # MAX{ ⍺ = 3 # MIN{3, 5, 10}, # MIN{2, a, b}, # β = ∞ # ↓ # MIN{7, 2, 3}, # } # # Step 7: # MAX{ ⍺ = 3 # MIN{3, 5, 10}, # MIN{2, a, b}, # β = 7 # ↓ # MIN{7, 2, 3}, # } # # Step 8: # MAX{ ⍺ = 3 # MIN{3, 5, 10}, # MIN{2, a, b}, # β = 2(pruning because MIN{7, 2, 3} <= 2 <= 3, result of outer MAX will never fall on MIN{7, 2, 3}) # ↓ # MIN{7, 2, 3}, # } # ``` # #### _Complexity_: #### # # _Assmue each node has b successors and depth is d_
# _Worst Case_: no pruning happens, so the complexity is the same as plain minimax, O(b^d)
# _Best Case_: First evaluated node is the best node, so that 1 for MAX layer and b for last MIN layer(it is possible to have multiple MIN layers) # # O(1ˣbˣ1ˣbˣ...ˣb) = O(b^(d/2)) # # Therefore, within the same amount of time Minimax with alpha-beta pruning could traverse 2 times deeper class AlphaBetaAgent(MinimaxPolicy): def __init__(self, index, depth): """Agent using Alpha-Beta Pruning algorithm Parameters ---------- index : int Current agent index in agent array. depth : int The depth of game tree going to expand. """ self._player_optimizer = (self.maximizer, (-float('inf'), None)) self._opponent_optimizer = (self.minimizer, (float('inf'), None)) return super().__init__(index=index, depth=depth) def evaluationFunction(self, game_state): """ Parameters ---------- game_state : State Game State. Returns ------- int Value associated with the game state """ game_state.get_score() def get_optimize_specifics(self, agent_index): """ Get optimizer and inital best score """ if agent_index == self.index: return self._player_optimizer else: return self._opponent_optimizer def getAction(self, gameState): """ Returns the action associated with best score """ _, action = self.minimax(self.index, gameState, 1, (-float('inf'), None), (float('inf'), None)) return action # ## Expectimax ## # #### _Description_: #### # - MAX layer remains the same # - MIN layer is replaced by chance nodes, that's to say each possible opponent move is associated with a weight(possibility) and the result is the sum of (weight ˣ score) # - Minimax almost could be considered as a special case of expectimax that min value has weight of 1.0 and others are 0 # ``` # Solves more realistic problem that result will not be too much biased by the minimum value since weights could be changed according to different kinds of opponents, so we don't need to assume opponent makes the optimal move. # ``` # #### _Example_: #### # ![image info](./expectimax.jpg) # # ``` # MAX{ # w1ˣA + w2ˣB + w3ˣC, # w4ˣD + w5ˣE + w6ˣF, # w7ˣG + w8ˣH + w9ˣI, # } # ``` class ExpectimaxAgent(MinimaxPolicy): def __init__(self, index, depth): """Agent using Expectimax algorithm Parameters ---------- index : int Current agent index in agent array. depth : int The depth of game tree going to expand. """ self._player_optimizer = (self.maximizer, (-float('inf'), None)) self._opponent_optimizer = (self.expectation_adder, (0, None)) return super().__init__(index=index, depth=depth) def evaluationFunction(self, game_state): """ Parameters ---------- game_state : State Game State. Returns ------- int Value associated with the game state """ game_state.get_score() def get_optimize_specifics(self, agent_index): """ Get optimizer for player or opponent """ if agent_index == self.index: return self._player_optimizer else: return self._opponent_optimizer def getAction(self, gameState): """ Returns the action associated with best score """ _, action = self.minimax(self.index, gameState, 1) return action # ## Optimizations ## # ### Zobrist Hash ### # ##### _Decription_: ##### # - Compute the hash of a state # ##### _Steps_: ##### # 1. Initialize a 3 dimensional table that contains keys for each possible case on board # 2. Start with 0 and do XOR operation for each non empty position on board # ##### _Advantages_: ##### # 1. When player makes a move, no need to recalculate everything because (A XOR B XOR B = A) # 2. Less hash table collision (Complex mathmatical prove behind this, Skip) # # ##### _Disadvantages_: ##### # 1. 
Common drawback of tabulation hash that requires certain amount of memory to store keys # ##### _Example_: ##### # # _Tic-Tac-Toe_: # + from random import randint # dictionary storing the piece keys to zobrist table pieces = { 'x': 0, 'o': 1, } board_height = 3 board_width = 4 # Zobrist table value for each piece in board zobrist_table = [[[randint(1, 2**63) for _ in pieces] for _ in range(board_width)] for _ in range(board_height)] def get_hash(board): height = len(board) width = len(board[0]) h_val = 0 for y in range(height): for x in range(width): piece = board[y][x] if piece in pieces: piece_key = pieces[piece] h_val ^= zobrist_table[y][x][piece_key] return h_val #@todo wrap this function in a class so that previous_board_hash == hash(previous_board) def update_hash(board, previous_board, previous_hash, positions): new_hash = previous_hash for position in positions: y, x = position previous_piece = previous_board[y][x] if previous_piece in pieces: piece_key = pieces[previous_piece] new_hash ^= zobrist_table[y][x][piece_key] current_piece = board[y][x] if current_piece in pieces: piece_key = pieces[current_piece] new_hash ^= zobrist_table[y][x][piece_key] return new_hash previous_board = [ ['x', 'o', '_', '_'], ['_', 'x', '_', 'o'], ['_', 'o', '_', 'x'], ] board = [ ['x', 'o', 'o', '_'], ['_', 'x', 'o', 'o'], ['_', 'o', 'o', 'x'], ] # updated ((0, 2), (1, 2), (2, 2)) updated_positions = ((0, 2), (1, 2), (2, 2)) # previous hash previous_hash = get_hash(previous_board) print(previous_hash) # updated hash current_hash = update_hash(board, previous_board, previous_hash, updated_positions) print(current_hash) # Should get the same value as previous_hash print(update_hash(previous_board, board, current_hash, updated_positions)) # - # ### Evaluaton Function ### # ##### _Decription_: ##### # # To get value of the state, depending on rules of the game, also there are some learnings about using Reinforcement Learning on evaluation function. # ## More Exploration ## # ### Monte Carlo Simulation ### # #### _Example_: #### # ![img](./monte-carlo.jpg) # # Sampling randomly on the square area and will get the result that # # # of samples in circle # ---------------------- ≈ π # # of total samples # # This can also be used in simulating odds for gambling games such as Texas Holdem Poker # ====================================================================================================================== # # Thanks # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Saving plots using PdfPages # # In this tutorial, we create a parabola. On each consecutive page of the pdf, we plot a growing range of this parabola. 
As you cycle through the pages, the parabola seems to grow # + import pandas as pd import matplotlib.pyplot as plt from matplotlib.backends.backend_pdf import PdfPages parabola = [x ** 2 for x in range(100)] pp = PdfPages("growingParabola.pdf") for i in range(1, len(parabola)): fig, ax = plt.subplots(figsize = (40,20)) ax.set_ylim(bottom = 0, top = parabola[-1]) ax.set_xlim(left = 0, right = len(parabola)) plt.plot(parabola[:i], linewidth = 5) pp.savefig(fig, bbox_inches = "tight") plt.close() pp.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Tested single data split only, not cross validation applied. # # And results are not very consistent, vary with each run... # # ### Polarity classification # # INFO:tensorflow:***** Eval results ***** # INFO:tensorflow: eval_accuracy = 0.8264151 # INFO:tensorflow: eval_loss = 0.4085315 # INFO:tensorflow: global_step = 744 # INFO:tensorflow: loss = 0.41608095 # # ### Category classification # # INFO:tensorflow:***** Eval results ***** # INFO:tensorflow: eval_accuracy = 0.49811321 # INFO:tensorflow: eval_loss = 1.4121249 # INFO:tensorflow: global_step = 223 # INFO:tensorflow: loss = 1.3853042 # # ! git clone https://github.com/daisukelab/dl-cliche.git # ! cd dl-cliche && pip install . # ! rm -fr dl-cliche from dlcliche.utils import * from dlcliche.nlp_mecab import * # Download dataset & stop_words_ja.txt # ! wget https://s3-ap-northeast-1.amazonaws.com/dev.tech-sketch.jp/chakki/public/chABSA-dataset.zip # ! unzip -q chABSA-dataset.zip && rm chABSA-dataset.zip && rm -r __MACOSX # ! ls chABSA-dataset # ! cd chABSA-dataset && wget https://raw.githubusercontent.com/chakki-works/chABSA-dataset/master/notebooks/resource/stop_words_ja.txt # ! wget https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip # ! unzip multilingual_L-12_H-768_A-12.zip # ! 
rm multilingual_L-12_H-768_A-12.zip # + DATA = Path('chABSA-dataset') def check_data_existence(folder): file_count = len(list(folder.glob("e*_ann.json"))) if file_count == 0: raise Exception("Processed Data does not exist.") else: print("{} files ready.".format(file_count)) check_data_existence(DATA) stop_words = [] with (DATA/"stop_words_ja.txt").open(encoding="utf-8") as f: stop_words = f.readlines() stop_words = [w.strip() for w in stop_words] print("{} stop words ready.".format(len(stop_words))) # - labels = [] # make labels (exclude NULL and OOD) for e in ["market", "company", "business", "product"]: for a in ["general", "sales", "profit", "amount", "price", "cost"]: labels.append(e + "#" + a) if e in ["market"]: break; print(labels) # + import json import numpy as np import pandas as pd from collections import Counter sentences = [] dataset = [] tokenizer = get_mecab_tokenizer() for f in DATA.glob("e*_ann.json"): with f.open(encoding="utf-8") as j: d = json.load(j) for s in d["sentences"]: tokenized = tokenizer.tokenize(s["sentence"].upper()) for o in s["opinions"]: if o["category"] in labels: # sentence index + category dataset.append((len(sentences), o["category"], o["polarity"])) sentences.append(tokenized) # - # ## Polarity classification # + from sklearn.model_selection import train_test_split Y = 2 dataset = np.array(dataset) Xtrn, Xval, ytrn, yval = train_test_split(dataset[:, 0], dataset[:, Y], test_size=0.1, random_state=0) def write_dataset(filename, X, y): with open(filename, 'w') as f: for _x, _y in zip(X, y): w = list(sentences[int(_x)]) f.write(_y+'\t'+' '.join(w)+'\n') write_dataset(DATA/'train.tsv', Xtrn, ytrn) write_dataset(DATA/'valid.tsv', Xval, yval) len(Xtrn), len(Xval), list(set(dataset[:, Y])) # - # ! export BERT_BASE_DIR=./multilingual_L-12_H-768_A-12 && export ChABSA_DIR=./chABSA-dataset && python run_classifier.py --task_name=generic --do_train=true --do_eval=true --data_dir=$ChABSA_DIR --vocab_file=$BERT_BASE_DIR/vocab.txt --bert_config_file=$BERT_BASE_DIR/bert_config.json --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt --max_seq_length=128 --train_batch_size=32 --learning_rate=2e-5 --num_train_epochs=3.0 --output_dir=/tmp/chabsa_output/ # ## Category classification # + from sklearn.model_selection import train_test_split Y = 1 dataset = np.array(dataset) Xtrn, Xval, ytrn, yval = train_test_split(dataset[:, 0], dataset[:, Y], test_size=0.1, random_state=0) def write_dataset(filename, X, y): with open(filename, 'w') as f: for _x, _y in zip(X, y): w = list(sentences[int(_x)]) f.write(_y+'\t'+' '.join(w)+'\n') write_dataset(DATA/'train.tsv', Xtrn, ytrn) write_dataset(DATA/'valid.tsv', Xval, yval) len(Xtrn), len(Xval), list(set(dataset[:, Y])) # - # ! export BERT_BASE_DIR=./multilingual_L-12_H-768_A-12 && export ChABSA_DIR=./chABSA-dataset && python run_classifier.py --task_name=generic --do_train=true --do_eval=true --data_dir=$ChABSA_DIR --vocab_file=$BERT_BASE_DIR/vocab.txt --bert_config_file=$BERT_BASE_DIR/bert_config.json --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt --max_seq_length=128 --train_batch_size=32 --learning_rate=2e-5 --num_train_epochs=3.0 --output_dir=/tmp/chabsa2_output/ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # Krisk is created for building statistical interactive visualization with pandas+Jupyter integration on top of Echarts. 
import pandas as pd import krisk.plot as kk # Use this when you want to nbconvert the notebook (used by nbviewer) from krisk import init_notebook; init_notebook() # We will be using GapMinder data for examples below. # + df = pd.read_csv('http://www.stat.ubc.ca/~jenny/notOcto/STAT545A/' 'examples/gapminder/data/' 'gapminderDataFiveYear.txt', sep='\t') df.head() # - # Let's start by small example. Using bar plot to count the data of category, kk.bar(df,'continent') # Note that by default, the plot already used a tooltip. You can hover the plot to see the y-value. # We also can plot bar by averaging GDP per capita for each continent, kk.bar(df,'continent',y='gdpPercap',how='mean') # We can change x as year, and use the grouping on continent, kk.bar(df,'year',y='gdpPercap',c='continent',how='mean') # Stacked and annotate the chart, (kk.bar(df,'year',y='gdpPercap',c='continent',how='mean',stacked=True,annotate=True) .set_size(width=1000)) # Next we can do the same thing with line chart, using area, annotate, and tooltip based on axis, p = kk.line(df,'year',y='gdpPercap',c='continent',how='mean', stacked=True,annotate='all',area=True) p.set_tooltip_style(trigger='axis',axis_pointer='shadow') p.set_size(width=1000) # We can also create a histogram and add theme into it, p = (kk.hist(df,x='lifeExp',c='continent',stacked=True,bins=100)) p.set_tooltip_style(trigger='axis',axis_pointer='shadow') p.set_theme('vintage') # Let's get a little bit advanced. We're going to create scatter points of GapMinder data in 2007. We use Life Expectancy, GDP per Capita, and Population as x,y,size respectively. We also want to add the information on the tooltip, add and reposition toolbox, legend, and title. p = kk.scatter(df[df.year == 2007],'lifeExp','gdpPercap',s='pop',c='continent') p.set_size(width=1000, height=500) p.set_tooltip_format(['country','lifeExp','gdpPercap','pop','continent']) p.set_theme('dark') p.set_toolbox(save_format='png',restore=True,data_zoom=True) p.set_legend(orient='vertical',x_pos='-1%',y_pos='-3%') p.set_title('GapMinder of 2007',x_pos='center',y_pos='-5%') # In the next few notebooks, we're going to dig deeper at each of the feature, including what's not being discussed here. But this introduction should give a sense of what krisk is capable of. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from netCDF4 import Dataset import math import warnings import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.ticker as t from mpl_toolkits.basemap import Basemap,shiftgrid,addcyclic from cdo import Cdo,CDOException,CdoTempfileStore cdo=Cdo() mpl.rc("text", usetex=False) mpl.rc('axes',titlesize=20,labelsize=17,linewidth=1.2) mpl.rc('xtick',labelsize=15) mpl.rc('ytick',labelsize=15) mpl.rcParams['xtick.major.size']=5.5 mpl.rcParams['xtick.minor.size']=3.5 mpl.rcParams['ytick.major.size']=5.5 mpl.rcParams['ytick.minor.size']=3.5 mpl.rcParams['legend.fontsize']=15 warnings.filterwarnings("ignore") def graph_Antarctica(path_array,legend_array,name_files,var,year,mesi,\ title,ylabel,data172,temp,cost,position=(0.5,-0.16)): data_mean=np.zeros((len(path_array),12)) for m in range(0,len(path_array)): if year[m]<10: string="000"+str(year[m]) elif year[m]>=10 and year[m]<100: string="00"+str(year[m]) elif year[m]>=100 and year[m]<1000: string="0"+str(year[m]) else: string=str(year[m]) if temp==True: data=[np.reshape([float(i)*cost for i in " ".join(cdo.output(input="-addc,-273.15 -selmonth,"\ +str(j)+" -selname,"+var+" "+path_array[m]+"../"+name_files[m]+"_PLA."+string+".nc"))\ .split()],[nlat,nlon]) for j in range(1,13)] else: data=[np.reshape([float(i)*cost for i in " ".join(cdo.output(input="-selmonth,"+str(j)\ +" -selname,"+var+" "+path_array[m]+"../"+name_files[m]+"_PLA."\ +string+".nc")).split()],[nlat,nlon]) for j in range(1,13)] lats=Dataset(path_array[m]+"../"+name_files[m]+"_PLA."+string+".nc","r").variables["lat"][::-1] data2=np.zeros((len(data),nlon*6*mul)) #print(np.shape(data)) for k in range(0,len(data)): count=0 count_mean=0 for i in range(0,len(data[k])): for j in range(0,len(data[k][i])): if i>25*mul: if data172[i][j]>=0.5: #print(i,j) #print(count) data2[k][count]=data[k][i][j]*np.cos(np.pi*lats[i]/180) count_mean+=np.cos(np.pi*lats[i]/180) count+=1 data_mean[m]=[np.sum(i)/count_mean for i in data2] #print(count) graph(mesi,data_mean,title+" Annual Antarctica Cycle",legend_array,"upper center","Time [month]",\ ylabel,True,False,position=position) return data_mean def graph_60N(path_array,legend_array,name_files,var,year,mesi,title,ylabel,data172,temp,cost): data_mean=np.zeros((len(path_array),12)) for m in range(0,len(path_array)): if year[m]<10: string="000"+str(year[m]) elif year[m]>=10 and year[m]<100: string="00"+str(year[m]) elif year[m]>=100 and year[m]<1000: string="0"+str(year[m]) else: string=str(year[m]) if temp==True: data=[np.reshape([float(i)*cost for i in " ".join(cdo.output(input="-addc,-273.15 -selmonth,"\ +str(j)+" -selname,"+var+" "+path_array[m]+"../"+name_files[m]+"_PLA."+string+".nc"))\ .split()],[nlat,nlon]) for j in range(1,13)] else: data=[np.reshape([float(i)*cost for i in " ".join(cdo.output(input="-selmonth,"+str(j)\ +" -selname,"+var+" "+path_array[m]+"../"+name_files[m]+"_PLA."+string+".nc")).\ split()],[nlat,nlon]) for j in range(1,13)] lats=Dataset(path_array[m]+"../"+name_files[m]+"_PLA."+string+".nc","r").variables["lat"][::-1] data2=np.zeros((len(data),nlon*6*mul)) #print(np.shape(data),lats) for k in range(0,len(data)): count=0 count_mean=0 for i in range(0,len(data[k])): for j in range(0,len(data[k][i])): if i<6*mul: if data172[i][j]>=0.5: 
data2[k][count]=data[k][i][j]*data172[i][j]*np.cos(np.pi*lats[i]/180) count_mean+=data172[i][j]*np.cos(np.pi*lats[i]/180) count+=1 data_mean[m]=[np.sum(i)/count_mean for i in data2] print(count) graph(mesi,data_mean,title+" Annual North Cycle",legend_array,"upper center","Time [month]"\ ,ylabel,True,False) return data_mean def read_graph_file(name_file,path,month,title,mesi,column,bar_title,vmin,vmax,cbar_type): """name_file,path,month,title,mesi,column,bar_title,vmin,vmax,cbar_type """ if month==True: with open(path+name_file,'r') as f: str_data=f.readline() f.seek(0) data=[[float(num) for num in line.split()] for line in f] for i in range(0,int(len(data)*column/tot)): data_res=np.reshape(data[i*int(tot/column)+1+i:(i+1)*int(tot/column)+1+i],[nlat,nlon]) Title=title+" "+mesi[i] graphycs_v(data_res[::-1],Title,'cyl',cbar_type,False,bar_title,vmin,vmax) return data,str_data else: with open(path+name_file,'r') as f: str_data=f.readline() data=[[float(num) for num in line.split()] for line in f] data_res=np.reshape(data,[nlat,nlon]) graphycs_v(data_res[::-1],title,'cyl',cbar_type,False,bar_title,vmin,vmax) return data_res,str_data def graphycs_v(data_array,title,proj,bar,savefig,bar_title,vmn,vmx): fig = plt.figure(figsize=(8,8)) m = Basemap(projection=proj, llcrnrlat=-90, urcrnrlat=90,\ llcrnrlon=0, urcrnrlon=360, resolution='c', lon_0=0) m.drawcoastlines() m.drawparallels(np.arange(-90,91,30),labels=[1,0,0,0]) m.drawmeridians(np.arange(-180.,181.,60.),labels=[0,0,0,1]) m.imshow(data_array,cmap=bar,vmax=vmx,vmin=vmn) cbar = plt.colorbar(orientation='vertical', shrink=0.5) cbar.set_label(bar_title,rotation=-90,fontsize=14,labelpad=25) plt.title(title+"\n",fontsize=17,fontweight="bold") if savefig==True : fig.savefig("grafici/"+title,bbox_inches='tight') def graph(x,data_array,title,legend_array,loc_legend,xlabel,ylabel,save,zonal,position=(0.5,-0.16)): fig,ax=plt.subplots(figsize=(7,5)) for i in range(0,len(data_array)): ax.plot(x[i],data_array[i],label=legend_array[i]) legend=ax.legend(loc="best", shadow=True) ax.set_title(title+"\n",fontweight="bold") ax.set_xlabel(xlabel) ax.set_ylabel(ylabel) ax.yaxis.set_minor_locator(t.AutoMinorLocator()) if zonal==True: ax.set_xlim(-90,90) ax.xaxis.set_minor_locator(t.MultipleLocator(10)) ax.set_xticks((90,60,30,0,-30,-60,-90)) ax.set_xticklabels(["90N","60N","30N","0","30S","60S","90S"]) ax.grid(linestyle='--') elif "Cycle" in title: ax.grid(axis='y',linestyle='--') else: ax.grid(axis='y',linestyle='--') #ax.xaxis.set_major_locator(t.MultipleLocator(20)) plt.show() if save==True: fig.savefig("grafici/"+title,bbox_inches='tight') data=[] def load_file1(cost,file_array,path_array): data=[] if "ZM" in file_array[0]: data=[[cost*i for i in np.reshape(Dataset(path_array[j]+file_array[j],"r").\ variables[file_array[j].split("_")[-1].replace(".nc","") ][:],[nlat])]\ for j in range(0,len(file_array))] else: #print(np.shape(Dataset(path_array[0]+file_array[0],"r").variables[file_array[0].split("_")[-1].replace(".nc","") ][:])) data=[[cost*i for i in np.reshape(Dataset(path_array[j]+file_array[j],"r").variables[file_array[j]\ .split("_")[-1].replace(".nc","") ][:],[int(file_array[j].split("_")[-2].replace("Y",""))])]\ for j in range(0,len(file_array))] #print("1") return data def load_file2(cost,file_array,path_array,typology,temp,step,start): if typology=="global": data=[[cost*float(j) for j in np.reshape([cdo.output(input="-timselmean,"+str(step[k])+","\ +str(start[k])+" -selmonth,"+str(i)\ +" "+path_array[k]+file_array[k]) for i in\ range(1,13)],[12])] for k 
in range(0,len(file_array))] elif typology=="south": if temp==True: data0=[[cdo.output(input="-gridboxmean,nlon,16 -timselmean,"+str(step[k])+\ ","+str(start[k])+" -addc,-273.15 -selmonth,"+str(i)+" "+\ path_array[k]+file_array[k])[0].split() for i in \ range(1,13)]\ for k in range(0,len(file_array))] else: data0=[[cdo.output(input="-gridboxmean,nlon,16 -timselmean,"+str(step[k])+\ ","+str(start[k])+" -selmonth,"+str(i)+" "+\ path_array[k]+file_array[k])[0].split() for i in range(1,13)]\ for k in range(0,len(file_array))] data=[[cost*float(data0[k][i][1]) for i in range(0,len(data0[k]))] for k in range(0,len(data0))] elif typology=="north": if temp==True: data0=[[cdo.output(input="-gridboxmean,nlon,16 -timselmean,"+str(step[k])+\ ","+str(start[k])+" -addc,-273.15 -selmonth,"+str(i)+" "+\ path_array[k]+file_array[k])[0].split() for i \ in range(1,13)]\ for k in range(0,len(file_array))] else: data0=[[cdo.output(input="-gridboxmean,nlon,16 -timselmean,"+str(step[k])+\ ","+str(start[k])+" -selmonth,"+str(i)+" "+\ path_array[k]+file_array[k])[0].split() for i in range(1,13)]\ for k in range(0,len(file_array))] data=[[cost*float(data0[k][i][0]) for i in range(0,len(data0[k]))] for k in range(0,len(data0))] else: print("input sbagliato") return data def graph_globe(data,lons,lats,proj,save,title,cbar_title,vmn,vmx,step_bar,cmap=plt.cm.jet_r): fig = plt.figure(figsize=(10,10)) m = Basemap(projection=proj, llcrnrlat=-90, urcrnrlat=90,\ llcrnrlon=0, urcrnrlon=360, resolution='c', lon_0=0) m.drawcoastlines() if proj=="cyl": m.drawparallels(np.arange(-90,91,30),labels=[1,0,0,0]) m.drawmeridians(np.arange(-180.,181.,60.),labels=[0,0,0,1]) m.imshow(data[::-1],cmap=cmap,vmin=vmn,vmax=vmx) cbar = plt.colorbar(orientation='vertical', shrink=0.5,ticks=np.linspace(vmn,vmx,step_bar)) else: m.drawmapboundary() m.drawparallels(np.arange(-90,90,30),labels=[1,0,0,0]) m.drawmeridians(np.arange(m.lonmin,m.lonmax+30,60),labels=[0,0,0,1]) var_cyclic, lons_cyclic = addcyclic(data,lons) var_cyclic, lons_cyclic = shiftgrid(180.,var_cyclic,lons_cyclic,start=False) lon2d, lat2d = np.meshgrid(lons_cyclic, lats) x,y = m(lon2d, lat2d) cs = m.contourf(x,y,var_cyclic, cmap=cmap,levels=t.MaxNLocator(nbins=step_bar)\ .tick_values(vmn,vmx),extend="both") cbar = plt.colorbar(cs,orientation='vertical', shrink=0.5,ticks=t.MaxNLocator\ (nbins=step_bar).tick_values(vmn,vmx)) cbar.set_label(cbar_title,rotation=-90,labelpad=25) plt.title(title+"\n",fontweight="bold") if save==True: fig.savefig("grafici/"+title,bbox_inches='tight') def print_value(data,starts,ends): for i in range(0,len(data)): print("Mean "+str(starts[i])+"-"+str(ends[i])+": "+str(np.mean(data[i][starts[i]:ends[i]]))\ +" dev.std: "+str(np.std(data[i][starts[i]:ends[i]]))) def Net_Flux_Sup(nome_file,type_file,path_arrays,lenght,load_file,typology,temp,step,start): var=["rls","rss","hfss","hfls","prsn"] path_array=[path_arrays for i in range(0,len(var))] if load_file==1: data=load_file1(1,[nome_file+type_file+i+".nc" for i in var],path_array) else: data=load_file2(1,[nome_file+type_file+i+".nc" for i in var],path_array,typology,temp,\ [step for i in range(0,len(var))],[start for i in range(0,len(var))] ) data0=np.reshape(data[0],[lenght])+np.reshape(data[1],[lenght])+np.reshape(data[2],[lenght])+\ np.reshape(data[3],[lenght])-1000*334000*np.reshape(data[4],[lenght]) return np.reshape(data0,[lenght]) def all_graph(var,title_var,ylabel,name_files,path_array,legend_array,starts,ends,step,lats,lons,mesi\ ,cost,temp,string="ab",position=(0.5,-0.16)): if "a" in string: #Global 
Annual Cycle in_global=[name_files[i]+"_YM_FM_"+str(ends[i])+"Y_"+var+".nc" for i in range(0,len(name_files))] data_global=load_file1(cost,in_global,path_array) graph(time,data_global,title_var+" Global Annual Mean",legend_array,"upper center","Time [year]"\ ,ylabel,True,False,position) print_value(data_global,starts,ends) else: data_global=0 if "b" in string: #Zonal in_zonal=[name_files[i]+"_YM_"+str(step[i])+"YM_ZM_"+str(ends[i])+"Y_"+var+".nc" for i in \ range(0,len(name_files))] data_zonal=load_file1(cost,in_zonal,path_array) graph(lats,data_zonal,title_var+" Zonal Mean",legend_array,"upper center","Latitude [°]",ylabel,\ True,True,position) else: data_zonal=0 if "c" in string: #1 year cycle #Global in_global_cycle=[name_files[i]+"_FM_"+str(ends[i])+"Y_"+var+".nc" for i in range(0,len(name_files))] data_global_cycle=load_file2(cost,in_global_cycle,path_array,"global",temp,step,starts) graph(mesi,data_global_cycle,title_var+" Global Annual Cycle",legend_array,"upper center","Time [month]",\ ylabel,True,False,position) else: data_global_cycle=0 if "n" in string: #North in_north_cycle=[i+"_all_"+var+".nc" for i in name_files] data_north_cycle=load_file2(cost,in_north_cycle,path_array,"north",temp,step,starts) graph(mesi,data_north_cycle,title_var+" North Annual Cycle",legend_array,"upper center","Time [month]",\ ylabel,True,False,position) else: data_north_cycle=0 if "s" in string: #South in_south_cycle=[i+"_all_"+var+".nc" for i in name_files] data_south_cycle=load_file2(cost,in_south_cycle,path_array,"south",temp,step,starts) graph(mesi,data_south_cycle,title_var+" South Annual Cycle",legend_array,"upper center","Time [month]",\ ylabel,True,False,position) else: data_south_cycle=0 return data_global,data_zonal,data_global_cycle,data_north_cycle,data_south_cycle def all_graph_res(var,title_var,ylabel,name_files,path_array,legend_array,starts,ends,step,lats,lons,mesi,cost,\ temp): #Global Annual Cycle in_global=[name_files[i]+"_YM_FM_"+str(ends[i])+"Y_"+var+".nc" for i in range(0,len(name_files))] data_global=load_file1(cost,in_global,path_array) graph(time,data_global,title_var+" Global Annual Mean",legend_array,"upper center","Time [year]",ylabel,\ True,False) print_value(data_global,starts,ends) #Zonal in_zonal=[name_files[i]+"_YM_"+str(step[i])+"YM_ZM_"+str(ends[i])+"Y_"+var+".nc" for i in range(0,len(name_files))] data_zonal=load_file1(cost,in_zonal,path_array) graph(lats,data_zonal,title_var+" Zonal Mean",legend_array,"upper left","Latitude [°]",ylabel,True,True) return data_global,data_zonal def all_graph_sum(var1,var2,title_var,ylabel,name_files,path_array,legend_array,starts,\ ends,step,lats,lons,mesi,cost,temp,string="ab",position=(0.5,-0.16)): if "a" in string: #Global Annual Cycle in_global1=[name_files[i]+"_YM_FM_"+str(ends[i])+"Y_"+var1+".nc" for i in range(0,len(name_files))] in_global2=[name_files[i]+"_YM_FM_"+str(ends[i])+"Y_"+var2+".nc" for i in range(0,len(name_files))] data_global=[np.add(load_file1(cost,in_global1,path_array)[i],load_file1(cost,in_global2,path_array)\ [i]) for i in range(0,len(name_files))] graph(time,data_global,title_var+" Global Annual Mean",legend_array,"upper center","Time [year]",\ ylabel,True,False,position) print_value(data_global,starts,ends) else: data_global=0 if "b" in string: #Zonal in_zonal1=[name_files[i]+"_YM_"+str(step[i])+"YM_ZM_"+str(ends[i])+"Y_"+var1+".nc" for i in range(0,len(name_files))] in_zonal2=[name_files[i]+"_YM_"+str(step[i])+"YM_ZM_"+str(ends[i])+"Y_"+var2+".nc" for i in range(0,len(name_files))] 
data_zonal=[np.add(load_file1(cost,in_zonal1,path_array)[i],load_file1(cost,in_zonal2,path_array)[i])\ for i in range(0,len(name_files))] #print(np.shape(data_zonal)) graph(lats,data_zonal,title_var+" Zonal Mean",legend_array,"upper center","Latitude [°]",ylabel,\ True,True,position) else: data_zonal=0 if "c" in string: #1 year cycle #Global in_global_cycle1=[name_files[i]+"_FM_"+str(ends[i])+"Y_"+var1+".nc" for i in range(0,len(name_files))] in_global_cycle2=[name_files[i]+"_FM_"+str(ends[i])+"Y_"+var2+".nc" for i in range(0,len(name_files))] data_global_cycle=[np.add(load_file2(cost,in_global_cycle1,path_array,"global",temp,step,starts)[i],\ load_file2(cost,in_global_cycle2,path_array,"global",temp,step,starts)[i])\ for i in range(0,len(name_files))] graph(mesi,data_global_cycle,title_var+" Global Annual Cycle",legend_array,"upper center","Time [month]",\ ylabel,True,False,position) else: data_global_cycle=0 if "n" in string: #North in_north_cycle1=[i+"_all_"+var1+".nc" for i in name_files] in_north_cycle2=[i+"_all_"+var2+".nc" for i in name_files] data_north_cycle=[np.add(load_file2(cost,in_north_cycle1,path_array,"north",\ temp,step,starts)[i],load_file2(cost,in_north_cycle2,path_array,\ "north",temp,step,starts)[i]) \ for i in range(0,len(name_files))] graph(mesi,data_north_cycle,title_var+" North Annual Cycle",legend_array,\ "upper center","Time [month]",ylabel,True,False,position) else: data_north_cycle=0 if "s" in string: #South in_south_cycle1=[i+"_all_"+var1+".nc" for i in name_files] in_south_cycle2=[i+"_all_"+var2+".nc" for i in name_files] data_south_cycle=[np.add(load_file2(cost,in_south_cycle1,path_array,"south",\ temp,step,starts)[i],load_file2(cost,in_south_cycle2,path_array,\ "south",temp,step,starts)[i]) \ for i in range(0,len(name_files))] graph(mesi,data_south_cycle,title_var+" South Annual Cycle",legend_array,\ "upper center","Time [month]",ylabel,True,False,position) else: data_south_cycle=0 return data_global,data_zonal,data_global_cycle,data_north_cycle,data_south_cycle def graph_level(var,title_var,ylabel,name_files,path_array,ends,steps,z_press,lat,cost,var_min,var_max): data_globe=[[cost*i for i in Dataset(path_array[j]+name_files[j]+"_YM_"+str(steps[j])+"YM_ZM_"+str(ends[j])+"Y_"+var+".nc").\ variables[var][:]]for j in range(0,len(name_files))] for i in range(0,len(name_files)): for j in range(i+1,len(name_files)): fig,ax=plt.subplots(figsize=(8,6)) x,y=np.meshgrid(lat,z_press) cs=ax.contourf(x,y,np.reshape(data_globe[i],(13,nlat))-np.reshape(data_globe[j],(13,nlat)),\ cmap=plt.cm.jet,levels=np.linspace(var_min,var_max,21),extend="both") #ax.xaxis.set_major_locator(plt.MultipleLocator(10)) ax.yaxis.set_ticks([1000,850,700,500,400,300,200,100,30]) ax.set_ylim(1000,30) ax.set_xlabel("Latitude [°]") ax.set_ylabel(" Pressure [hPa]") ax.set_xlim(-86,86) ax.set_xticks((80,60,40,20,0,-20,-40,-60,-80)) ax.set_xticklabels(["80N","60N","40N","20N","0","20S","40S","60S","80S"]) ax.grid() cbar = plt.colorbar(cs,orientation='vertical', shrink=0.9,ticks=np.linspace(var_min,var_max,15)) cbar.set_label(ylabel,rotation=-90,labelpad=25) #plt.gca().invert_yaxis() plt.title(title_var+" ("+name_files[i]+"-"+name_files[j]+") \n",fontweight="bold") plt.savefig("grafici/"+title_var+" ("+name_files[i]+"-"+name_files[j]+")") def all_graph_globe(var,title_var,ylabel,name_files,path_array,lon,lat,cost,proj_type,end,step,color=plt.cm.jet): data_globe=[[cost*i for i in Dataset(path_array[j]+name_files[j]+"_YM_"+str(step[j])+"YM_"+\ str(end[j])+"Y_"+var+".nc").variables[var][0]]for j in \ 
range(0,len(name_files))] #print(math.factorial(len(name_files))/(math.factorial(len(name_files)-2)*math.factorial(2))) min_max=np.zeros((int(math.factorial(len(name_files))/(math.factorial(len(name_files)-2)*\ math.factorial(2))),2)) #print(np.shape(data_globe)) #print(np.reshape(data_globe[0],[nlat,nlon])[0]) count=0 for i in range(0,len(name_files)): for j in range(i+1,len(name_files)): min_max[count]=[np.min(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon])),\ np.max(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon]))] count+=1 #print(min_max) max_lim=int(np.max(min_max)+1) min_lim=int(np.min(min_max)-1) lim=np.max([np.abs(min_lim),np.abs(max_lim)]) for i in range(0,len(name_files)): for j in range(i+1,len(name_files)): max_lim=int(np.max(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon]))+1) min_lim=int(np.min(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon]))-1) lim=np.max([np.abs(min_lim),np.abs(max_lim)]) graph_globe(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon]),lons[i],\ lats[i],proj_type,True,title_var+" ("+name_files[i]+"-"+name_files[j]+")",ylabel,\ -lim,lim,15,cmap=color) #print(np.max(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon])),np.min(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon]))) return data_globe def all_graph_globe2(data_globe,title_var,ylabel,lon,lat,proj_type): min_max=np.zeros((int(math.factorial(len(name_files))/(math.factorial(len(name_files)-2)\ *math.factorial(2))),2)) count=0 for i in range(0,len(data_globe)): for j in range(i+1,len(data_globe)): min_max[count]=[np.min(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon])),\ np.max(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon]))] count+=1 #print(min_max) max_lim=int(np.max(min_max)+1) min_lim=int(np.min(min_max)-1) lim=np.max([np.abs(min_lim),np.abs(max_lim)]) for i in range(0,len(data_globe)): for j in range(i+1,len(data_globe)): max_lim=int(np.max(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon]))+1) min_lim=int(np.min(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon]))-1) lim=np.max([np.abs(min_lim),np.abs(max_lim)]) graph_globe(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon]),lons[i],lats[i]\ ,proj_type,True,title_var+" ("+name_files[i]+"-"+name_files[j]+")",ylabel,-lim,\ lim,15) #print(np.max(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon])),np.min(np.reshape(data_globe[i],[nlat,nlon])-np.reshape(data_globe[j],[nlat,nlon]))) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: py36 # --- # ## Rolling Windows # Drawing # Drawing # Drawing # # ### How to fit # - Fit whole window as many-to-one into RNN # Drawing # # - Fit whole window as one-to-one into CNN or Fully-Connected # Drawing import matplotlib.pyplot as plt import pandas as pd import numpy as np from IPython.display import display # # Dataset df = pd.read_csv( 'cansim-0800020-eng-6674700030567901031.csv', skiprows=6, skipfooter=9, engine='python') df.head() # ### Adjust date # + from pandas.tseries.offsets import MonthEnd df['Adjustments'] = pd.to_datetime(df['Adjustments']) + MonthEnd(1) df = df.set_index('Adjustments') display(df.head()) 
display(df.plot()) # - # ### Split train-testset # + split_date = pd.Timestamp('01-01-2011') train = df.loc[:split_date, ['Unadjusted']] test = df.loc[split_date:, ['Unadjusted']] # - ax = train.plot(); test.plot(ax=ax) plt.legend(['train', 'test']); # #### MinMax Scaling input # + from sklearn.preprocessing import MinMaxScaler sc = MinMaxScaler() train_sc = sc.fit_transform(train) test_sc = sc.transform(test) # - ax = pd.DataFrame(train_sc).plot(); pd.DataFrame(test_sc).plot(ax=ax) plt.legend(['train', 'test']); # #### Create Sliding Windows # + train_sc_df = pd.DataFrame(train_sc, columns=['MinMax_Scaled'], index=train.index) test_sc_df = pd.DataFrame(test_sc, columns=['MinMax_Scaled'], index=test.index) train_sc_df.head() # + for s in range(1, 13): train_sc_df['shift_{}'.format(s)] = train_sc_df['MinMax_Scaled'].shift(s) test_sc_df['shift_{}'.format(s)] = test_sc_df['MinMax_Scaled'].shift(s) train_sc_df.head(13) # - # #### Define train-testset # + X_train = train_sc_df.dropna().drop('MinMax_Scaled', axis=1) y_train = train_sc_df.dropna()[['MinMax_Scaled']] X_test = test_sc_df.dropna().drop('MinMax_Scaled', axis=1) y_test = test_sc_df.dropna()[['MinMax_Scaled']] display(X_train.head()) display(y_train.head()) # + X_train = X_train.values X_test= X_test.values y_train = y_train.values y_test = y_test.values print(X_train.shape) print(y_train.shape) print('') print(X_test.shape) print(y_test.shape) # - # # Model # ## Fully connected on Windows # + from keras.models import Sequential from keras.layers import Dense import keras.backend as K from keras.callbacks import EarlyStopping K.clear_session() model = Sequential() model.add(Dense(12, input_dim=12, activation='relu')) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.summary() # - early_stop = EarlyStopping(monitor='loss', patience=3, verbose=1) history = model.fit( X_train, y_train, epochs=200,batch_size=1, validation_split=0.25, verbose=0, callbacks=[early_stop]) historydf = pd.DataFrame(history.history, index=history.epoch) historydf.plot(); # #### Evaluate y_train_pred = model.predict(X_train) y_test_pred = model.predict(X_test) # + from sklearn.metrics import mean_squared_error as mse print("The Mean Squared Error on the Train set is:\t{:0.5f}".format(mse(y_train, y_train_pred))) print("The Mean Squared Error on the Test set is:\t{:0.5f}".format(mse(y_test, y_test_pred))) # + from sklearn.metrics import r2_score print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred))) print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred))) # - plt.plot(y_test) plt.plot(y_test_pred) plt.legend(['real vals', 'predicted']); # ## LSTM on Windows # + # Convert to tensor: (batch_size, timesteps, input_dim) X_train_t = X_train.reshape(X_train.shape[0], 1, 12) X_test_t = X_test.reshape(X_test.shape[0], 1, 12) print(X_train_t.shape) print(X_test_t.shape) # + from keras.models import Sequential import keras.backend as K from keras.layers import LSTM, Dense from keras.callbacks import EarlyStopping K.clear_session() model = Sequential() model.add(LSTM(6, input_shape=(1, 12))) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.summary() # - early_stop = EarlyStopping(monitor='loss', patience=3, verbose=1) history = model.fit( X_train_t, y_train, epochs=100, batch_size=1, verbose=0, validation_split=0.25, callbacks=[early_stop]) historydf = pd.DataFrame(history.history, index=history.epoch) historydf.plot(); # #### Evaluate 
y_train_pred = model.predict(X_train_t) y_test_pred = model.predict(X_test_t) # + from sklearn.metrics import mean_squared_error as mse print("The Mean Squared Error on the Train set is:\t{:0.5f}".format(mse(y_train, y_train_pred))) print("The Mean Squared Error on the Test set is:\t{:0.5f}".format(mse(y_test, y_test_pred))) # + from sklearn.metrics import r2_score print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred))) print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred))) # - plt.plot(y_test) plt.plot(y_test_pred) plt.legend(['real vals', 'predicted']); # ## => The models are extremely good # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from scipy.stats import gamma import seaborn as sns dados_gama = gamma.rvs(a=4, size=1000) sns.histplot(dados_gama, kde=True) min(dados_gama), max(dados_gama) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt from sklearn.cluster import KMeans from sklearn.datasets.samples_generator import make_blobs from sklearn import model_selection import pandas as pd X, y = make_blobs(n_samples=2000, cluster_std=1.5, n_features=3, centers=5, random_state=19850725) # + fig = plt.figure(figsize=(15,10)) ax = fig.add_subplot(111, projection='3d') ax.scatter(X[:,0], X[:,1], X[:,2], c=y, marker='o') plt.show() # + splitter = model_selection.StratifiedShuffleSplit(n_splits=1, test_size=0.25) splits = list(splitter.split(X=X,y=y)) train_index = splits[0][0] test_index = splits[0][1] train_df = pd.DataFrame(X[train_index]) train_df['cluster'] = y[train_index] print(len(train_df)) test_df = pd.DataFrame(X[test_index]) test_df['cluster'] = y[test_index] print(len(test_df)) # + fig = plt.figure(figsize=(20,10)) ax = fig.add_subplot(121, projection='3d') ax.scatter(train_df.iloc[:,0], train_df.iloc[:,1], train_df.iloc[:,2], c=train_df['cluster'], marker='o') ax = fig.add_subplot(122, projection='3d') ax.scatter(test_df.iloc[:,0], test_df.iloc[:,1], test_df.iloc[:,2], c=test_df['cluster'], marker='o') plt.show() # - train_df.to_csv(path_or_buf="data/train-data.csv", header=False, index=True) test_df.to_csv(path_or_buf="data/test-data.csv", header=False, index=True) k_means = KMeans(init='k-means++', n_clusters=3, n_init=100) k_means.fit(train_df.iloc[:,[0,1,2]]) training_assigments = k_means.predict(train_df.iloc[:,[0,1,2]]) test_assigments = k_means.predict(test_df.iloc[:,[0,1,2]]) print(training_assigments[0:100]) # + fig = plt.figure(figsize=(20,10)) ax = fig.add_subplot(121, projection='3d') ax.scatter(train_df.iloc[:,0], train_df.iloc[:,1], train_df.iloc[:,2], c=training_assigments, marker='o') ax = fig.add_subplot(122, projection='3d') ax.scatter(test_df.iloc[:,0], test_df.iloc[:,1], test_df.iloc[:,2], c=test_assigments, marker='o') plt.show() # + fig = plt.figure(figsize=(15,10)) ax = fig.add_subplot(321) ax.scatter(train_df.iloc[:,0], train_df.iloc[:,1], c=training_assigments, marker='o') ax = fig.add_subplot(322) ax.scatter(test_df.iloc[:,0], test_df.iloc[:,1], c=test_assigments, marker='o') ax = fig.add_subplot(323) ax.scatter(train_df.iloc[:,0], train_df.iloc[:,2], c=training_assigments, 
marker='o') ax = fig.add_subplot(324) ax.scatter(test_df.iloc[:,0], test_df.iloc[:,2], c=test_assigments, marker='o') ax = fig.add_subplot(325) ax.scatter(train_df.iloc[:,2], train_df.iloc[:,3], c=training_assigments, marker='o') ax = fig.add_subplot(326) ax.scatter(test_df.iloc[:,2], test_df.iloc[:,3], c=test_assigments, marker='o') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Storage # # This notebook illustrates how simulations and results can be saved to file. # + import pypesto import pypesto.optimize as optimize import pypesto.visualize as visualize from pypesto.store import (save_to_hdf5, read_from_hdf5) import numpy as np import scipy as sp import matplotlib.pyplot as plt import tempfile # %matplotlib inline # - # ## Define the objective and problem # + objective = pypesto.Objective(fun=sp.optimize.rosen, grad=sp.optimize.rosen_der, hess=sp.optimize.rosen_hess) dim_full = 10 lb = -3 * np.ones((dim_full, 1)) ub = 3 * np.ones((dim_full, 1)) problem = pypesto.Problem(objective=objective, lb=lb, ub=ub) # create optimizers optimizer = optimize.ScipyOptimizer(method='l-bfgs-b') # set number of starts n_starts = 20 # - # ## Objective function traces # # During optimization, it is possible to regularly write the objective function trace to file. This is useful e.g. when runs fail, or for various diagnostics. Currently, pyPESTO can save histories to 3 backends: in-memory, as CSV files, or to HDF5 files. # ### Memory History # To record the history in-memory, just set `trace_record=True` in the `pypesto.HistoryOptions`. Then, the optimization result contains those histories: # + # record the history history_options = pypesto.HistoryOptions(trace_record=True) # Run optimizaitons result = optimize.minimize( problem=problem, optimizer=optimizer, n_starts=n_starts, history_options=history_options) # - # Now, in addition to queries on the result, we can also access the # + print("History type: ", type(result.optimize_result.list[0].history)) # print("Function value trace of best run: ", result.optimize_result.list[0].history.get_fval_trace()) fig, ax = plt.subplots(1, 2) visualize.waterfall(result, ax=ax[0]) visualize.optimizer_history(result, ax=ax[1]) fig.set_size_inches((15, 5)) # - # ### CSV History # The in-memory storage is however not stored anywhere. To do that, it is possible to store either to CSV or HDF5. This is specified via the `storage_file` option. If it ends in `.csv`, a `pypesto.objective.history.CsvHistory` will be employed; if it ends in `.hdf5` a `pypesto.objective.history.Hdf5History`. Occurrences of the substring `{id}` in the filename are replaced by the multistart id, allowing to maintain a separate file per run (this is necessary for CSV as otherwise runs are overwritten). # + # record the history and store to CSV history_options = pypesto.HistoryOptions(trace_record=True, storage_file='history_{id}.csv') # Run optimizaitons result = optimize.minimize( problem=problem, optimizer=optimizer, n_starts=n_starts, history_options=history_options) # - # Note that for this simple cost function, saving to CSV takes a considerable amount of time. This overhead decreases for more costly simulators, e.g. using ODE simulations via AMICI. 
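# Because each start writes its own `history_{id}.csv`, the raw traces can also be
# inspected directly with pandas. This is only a minimal sketch: it assumes the
# multistart ids are numbered from 0 and makes no assumption about the exact column
# layout, which is determined by pyPESTO's `CsvHistory` and may differ between versions.

# +
import pandas as pd

# Minimal sketch: peek at the CSV trace written for the first start (assumed id 0).
trace_df = pd.read_csv('history_0.csv')
print(trace_df.shape)
trace_df.head()
# -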
# + print("History type: ", type(result.optimize_result.list[0].history)) # print("Function value trace of best run: ", result.optimize_result.list[0].history.get_fval_trace()) fig, ax = plt.subplots(1, 2) visualize.waterfall(result, ax=ax[0]) visualize.optimizer_history(result, ax=ax[1]) fig.set_size_inches((15, 5)) # - # ### HDF5 History # + [markdown] pycharm={"name": "#%% md\n"} # Just as in CSV, writing the history to HDF5 takes a considerable amount of time. # If a user specifies a HDF5 output file named `my_results.hdf5` and uses a parallelization engine, then: # * a folder is created to contain partial results, named `my_results/` (the stem of the output filename) # * files are created to store the results of each start, named `my_results/my_results_{START_INDEX}.hdf5` # * a file is created to store the combined result from all starts, named `my_results.hdf5`. # Note that this file depends on the files in the `my_results/` directory, so **cease to function** if # `my_results/` is deleted. # + pycharm={"name": "#%%\n"} # record the history and store to CSV history_options = pypesto.HistoryOptions(trace_record=True, storage_file='history.hdf5') # Run optimizaitons result = optimize.minimize( problem=problem, optimizer=optimizer, n_starts=n_starts, history_options=history_options) # + pycharm={"name": "#%%\n"} print("History type: ", type(result.optimize_result.list[0].history)) # print("Function value trace of best run: ", result.optimize_result.list[0].history.get_fval_trace()) fig, ax = plt.subplots(1, 2) visualize.waterfall(result, ax=ax[0]) visualize.optimizer_history(result, ax=ax[1]) fig.set_size_inches((15, 5)) # + [markdown] pycharm={"name": "#%% md\n"} # ## Result storage # # Result objects can be stored as HDF5 files. When applicable, this is preferable to just pickling results, which is # not guaranteed to be reproducible in the future. # + pycharm={"name": "#%%\n"} # Run optimizaitons result = optimize.minimize( problem=problem, optimizer=optimizer, n_starts=n_starts) # + pycharm={"name": "#%%\n"} result.optimize_result.list[0:2] # + [markdown] pycharm={"name": "#%% md\n"} # As usual, having obtained our result, we can directly perform some plots: # + pycharm={"name": "#%%\n"} # plot waterfalls visualize.waterfall(result, size=(15,6)) # - # ### Save optimization result as HDF5 file # # The optimization result can be saved with `pypesto.store.write_result()`. This will write the problem and the # optimization result, and the profiling and sampling results if available, to HDF5. 
# All of them can be disabled with boolean flags # (see [the documentation](https://pypesto.readthedocs.io/en/latest/api_store.html#pypesto.store.write_result)) # + pycharm={"name": "#%%\n"} fn = tempfile.mktemp(".hdf5") # Write result save_to_hdf5.write_result(result, fn) # + [markdown] pycharm={"name": "#%% md\n"} # ### Read optimization result from HDF5 file # # When reading in the stored result again, we recover the original optimization result: # + pycharm={"name": "#%%\n"} # Read result and problem result = read_from_hdf5.read_result(fn) # - result.optimize_result.list[0:2] # plot waterfalls pypesto.visualize.waterfall(result, size=(15,6)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Ejercicio La Biblioteca | Repaso Week 1 # ## Precurso DSNov20 - The Bridge # # La versión original de esta Práctica es en español por lo que respetaremos sus orígenes y a su autora, , y lo dejaremos en este idioma. Disculpen las molestias que esto pudiera ocasionarle como estudiante. # ![biblioteca](https://static2.abc.es/media/familia/2018/11/02/AdobeStock_210538561-kJTF--620x349@abc.jpg) # Es tu primer día de trabajo en la Biblioteca de tu barrio y ya tienes tu primera tarea. Te acaban de pasar una lista con libros y la lista de unidades disponibles. libros = ["El mal de Corcira", "Un mundo feliz", "Lolita", "Crimen y castigo", "Python from for to pro",\ "El señor de los anillos", "Cien años de soledad", "", "Lectura Fácil", "Seda",\ "La chica de nieve", "El día que se perdió la cordura", "Data Science"] biblioteca = [("El mal de Corcira",4),("Un mundo feliz", 2),("Lolita", 5),\ ("Crimen y castigo",2),("Python from for to pro", 0),("El señor de los anillos", 6),\ ("Cien años de soledad", 5),("Harry Potter", 9),("Lectura Fácil", 4),("Seda", 2),\ ("La chica de nieve", 6),("El día que se perdió la cordura", 3), ("Data Science", 0)] # 1. ¿Cuántos títulos diferentes tienen en esta biblioteca? #your code here print(len(libros)) print(len(biblioteca)) # 2. ¿Cuántas letras componen la palabra *Seda*? #your code here x = len("Seda") print(x) print(len(biblioteca[9][0])) for _ in biblioteca: print(_) for u in biblioteca: if u[0] == "Seda": print(len(u[0])) # 3. ¿Cuántas unidades hay del libro *Seda*? #your code here for u in biblioteca: if u[0] == "Seda": print(u[1]) # 4. Quien registraba los libros antes de ti dejo pendiente de añadir a la lista **libros** la variable **pendiente**, además debió confundir el famoso libro de George Orwell *1984*, con un número, asignándolo como un integer. # ¿Puedes cambiarlo y pasar a string este elemento y añadirlo a **libros**? pendiente = 1984 #your code here pendiente = str(pendiente) libros.append(pendiente) print(libros) libros # 5. Te piden que añadas a esta lista el nuevo libro de Los Juegos del hambre que se titula *Balada de pájaros cantores y serpientes*. Has contado las unidades y han llegado 10. # # a. Crea una variable con el título del libro, que se llame **libro_1**. Añade este elemento a la lista **libros**. # # b. Crea una variable con el número de unidades, que se llame **uds_1**. # # c. Crea una variable que sea una lista llamada **nuevo_libro** en el que su primer elemento sea **libro_1** y el segundo **uds_1**. # # d. Convierte a **nuevo_libro** a tupla. # (muestra qué tipo es ahora esta variable) # # e. 
Añade **nuevo_libro** a la lista **biblioteca** libros #your code here libro_1 = "Balada de pájaros cantores y serpientes" libros.append(libro_1) libros uds_1 = 10 nuevo_libro = [libro_1, uds_1] print(type(nuevo_libro)) nuevo_libro = tuple(nuevo_libro) print(type(nuevo_libro)) biblioteca.append(nuevo_libro) biblioteca nuevo_libro2 = (libro_1, uds_1) print(type(nuevo_libro2)) nuevo_libro2 type(nuevo_libro2[0]) # 6. Acaban de traer una unidad más de *El mal de Corcira*, añade una unidad más al segundo elemento del primer elemento de la lista **biblioteca**. biblioteca t = list(biblioteca[0]) t t[1] = t[1] + 1 t t = tuple(t) t biblioteca[0] = t biblioteca # Busca en google este error y explica porqué no se puede añadir una unidad más. ¿Se te ocurre cómo podrías alterar este dato? # + #your comment here #done! # - # Convierte la tupla en una lista para poder modificar el segundo elemento y añadir esta unidad. Asigna la tupla convertida a lista a la variable **tup_to_list** haz los cambios, agrega la unidad y vuelve a añadir la lista ya convertida en tupla a la lista **biblioteca**. # + #your code here #done above! # - # Ahora, tenemos dos tuplas con el libro *El mal de Corcira*, pero esto no es lo que queremos. Elimina el primer elemento de la lista **biblioteca**. # # Hint: https://www.programiz.com/python-programming/methods/list/remove #your code here y = ('El mal de Corcira', 5) biblioteca.append(y) biblioteca biblioteca.pop() i = ('El mal de Corcira', 5) biblioteca.remove(i) biblioteca # 7. Te han pedido que localices los títulos de los libros de los que no disponen de unidades. Es decir, su segundo elemento, es igual a 0. biblioteca for elem in biblioteca: if elem[1] == 0: print(elem[0]) # 8. ¿Cómo meterías estos dos elementos en una lista llamada **missing**? # + #your code here missing = [] for elem in biblioteca: if elem[1] == 0: print("elem:", elem) libro_sin_unidad = elem[0] print("libro_sin_unidad:", libro_sin_unidad) missing.append(libro_sin_unidad) print("missing:", missing) print("##################") # - # 9. Como en cualquier jornada de trabajo, recibes miles de email, hay uno que no habías visto pero en el que tu jefa te pide hacer un pequeño programita (función) que recoja el título de un libro y la cantidad de libros, este último parámetro por defecto será 1, chequée si tenemos ese título en la lista `libros` y si lo tenemos, sume esa cantidad a su cantidad en la lista `biblioteca` y si no, añada el título a `libros` y en una tupla nueva con la cantidad correspondiente a la lista `biblioteca`. print(libros) print() print(biblioteca) def inventario(titulo, cantidad = 1, libros = libros, biblioteca = biblioteca): if titulo not in libros: libros.append(titulo) biblioteca.append((titulo, cantidad)) else: for tup in biblioteca: if tup[0] == titulo: lista = list(tup) lista[1]+=cantidad biblioteca.append(tuple(lista)) biblioteca.remove(tup) break return libros, biblioteca # Pruébalo añadiendo el título de "Guía del Autopista Galáctico", cantidad 42. libros, biblioteca = inventario("Guía del Autopista Galáctico", 42) biblioteca[-1] # ### Bonus Track. # 10. ¿Cuál es el libro con más unidades? ¿Cuál es la media de libros por título? unidades = [] for tupl in biblioteca: unidades.append(tupl[1]) for i, uni in enumerate(unidades): if uni == max(unidades): print(f"El libro con más unidades es {biblioteca[i][0]} con {uni} unidades") import statistics as stats print(f"La media de libros por título es: {round(stats.mean(unidades), 4)} unidades") # 11. 
¿Cuál tiene el título más largo y cuál el más corto? # + longitud = [] for libro in libros: longitud.append(len(libro)) for i, lon in enumerate(longitud): if lon == max(longitud): print(f"El título más largo es {libros[i]}") if lon == min(longitud): print(f"El título más corto es {libros[i]}") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7.4 64-bit # language: python # name: python37464bit297b18a1a3e542689b13a325bcf6fffe # --- # #### Part 1 | Load and Validate the Data # #### Part 2 | Merge Dataframes # #### Part 3 | Data Exploration # #### Part 4 | Visualizations # ## Part 1 | Load and Validate the Data # + # imports import pandas as pd import os import matplotlib.pyplot as plt import seaborn as sns # - math_path = '/home/chase/repos/Student_Performance/data/student-mat.csv' math = pd.read_csv(math_path, sep=';') port_path = '/home/chase/repos/Student_Performance/data/student-por.csv' port = pd.read_csv(port_path, sep=';') print(math.shape) print(math.head()) print(port.shape) print(port.head()) # ## Part 2 | Merge Dataframes # + # merge datasets df = pd.concat([math, port]) # rename columns df.columns = ['school','sex','age','address','family_size','parents_status','mother_education','father_education', 'mother_job','father_job','reason','guardian','commute_time','study_time','failures','school_support', 'family_support','paid_classes','activities','nursery','desire_higher_edu','internet','romantic','family_quality', 'free_time','go_out','weekday_alcohol_use','weekend_alcohol_use','health','absences','period1_score','period2_score', 'final_score'] df.head() # + # convert and assign final score to pass(1) & fail(0) # pass >= 14 # fail < 14 df['passing_grade'] = 'passing_grade' df.loc[(df.final_score >= 14), 'passing_grade'] = 1 df.loc[(df.final_score < 14), 'passing_grade'] = 0 df.head() # + # check for nan's df.isnull().sum() # - # ## Part 3 | Data Exploration print(df.shape) print(df.dtypes) # + # convert passing grades to ints df['passing_grade'] = df.passing_grade.astype(int) df.dtypes # + # save df as csv for later use df.to_csv('/home/chase/repos/Student_Performance/data/student_performance.csv') # - df.describe() # + # baseline for later df['passing_grade'].value_counts(normalize=True).max() # - # ## Part 4 | Visualizations # + # distribution plot of final_grade sns.distplot(df['final_score']); # + # distribution plot of passing_grade sns.distplot(df['passing_grade']); # + # passing_grade correlation heat map corr = df.corr() plt.figure(figsize=(15,15)) sns.heatmap(corr, annot=True, cmap="Blues") plt.title('Correlation Heatmap', fontsize=15); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import random import time import matplotlib.pyplot as plt import numpy as np from random import choices import sympy from sympy import Matrix from sympy import fraction from IPython.display import display, Markdown def var(v): return sympy.symbols(v) # + a = [2, 1, 1, 0, 3, 1, 0, 0, 4] #upper triangle 2*3*4 = 24 b = [1, 0, 0, 0, 1, 0, 0, 0, 1] #identity I = 1 c = [75, 36, 94, 93, 52, 8, 67, 57, 15, 3, 49, 61, 51, 26, 11, 23, 6, 63, 92, 79, 10, 16, 42, 66, 95] #5x5 d = [1, 6, 4, 2, 4, -1, -1, 2, 5] #dne inverse = 0 p = [1/2, 1/2, 0, 1/2, 1/2, 0, 0, 0, 1] #projection matrix P^2 = P pe = [0, 0, 1, 
0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1] #permutation matrix P*P.T = P.T*P = P e = [1, 1, 1, 1, 2, 2, 1, 2, 3] v = [var('x%d' % i) if i%(5+1)==0 else 0 for i in range(5**2)] #diagonal variable matrix: D = prod {diagonals} v2 = [var('x%d' % i) for i in range(3**2)] #full variable matrix F = [1, 2, 3, 2, 2, 3, 3, 3, 3] s = [3, -2, 4, -2, 6, 2, 4, 2, 3] o = [3, 0, 0, 0, 0, 2, 0, 1, 0] r = [8,3,0,0,5,0,6,0,5,3, 6,7,9,5,7,5,8,0,4,5, 1,2,9,5,7,3,5,0,5,6, 8,5,0,6,8,7,0,2,7,5, 6,8,4,4,4,4,5,8,5,6, 9,2,1,2,1,8,3,9,4,5, 6,4,6,2,3,2,6,3,7,9, 4,2,9,9,2,6,4,2,3,0] c_1 = 3 c_2 = 5 c_3 = 4 c_4 = 8 # - class sq_matrix: def __init__(self, m, cols, title="A"): self.m = m self.cols = cols self.title = title self.m_T = self.T(self.m[:], self.cols) self.m_ID = [1 if i%(self.cols+1)==0 else 0 for i in range(self.cols**2)] ''' Matrix defs. ''' #get sq_matrix 2d index def get_mat_indx(self, i, j): return self.mat_indx(self.m, self.cols, i, j) #get matrix 2d index def mat_indx(self, m, cols, i, j): #gets content of a_ij of nxn matrix A #subtract by 1 to get math-like form, st. a11 is the top left most content i -= 1 j -= 1 return m[i*cols + j] #swap entry of a matrix def swap_entry(self, m, cols, i_1, j_1, i_2, j_2): #for M nxn, does m_i1j1 => mi2j2 temp = self.mat_indx(m, cols, i_1, j_1) m[(i_1-1)*cols+(j_1-1)] = self.mat_indx(m, cols, i_2, j_2) m[(i_2-1)*cols+(j_2-1)] = temp #get transpose of sq_matrix def get_T(self): return self.T(self.m, self.cols) #gets transpose of matrix def T(self, m, cols): for i in range(1, cols+1): for j in range(i, cols+1): self.swap_entry(m, cols, i, j, j, i) return m ''' Row/Col. handling ''' #get row vector def get_row(self, r): if r > self.cols or r < 1: raise ValueError(f"Row must be in matrix: Given {r} for ({self.cols}x{self.cols}) matrix") return self.to_matrix()[r-1] #get row indexes def get_row_elems(self, r): if r > self.cols or r < 1: raise ValueError(f"Row must be in matrix: Given {r} for ({self.cols}x{self.cols}) matrix") r -= 1 #start at first elem of row, ie row 2 for 3x3 is: [2*3] st = r*self.cols return [i for i in range(st, st+self.cols)] #get col. indexes def get_col_elems(self, c): if c > self.cols or c < 1: raise ValueError(f"Col. must be in matrix: Given {r} for ({self.cols}x{self.cols}) matrix") c -= 1 #start at first elem of row, ie row 2 for 3x3 is: [2*3] st = c return [st+i*self.cols for i in range(0, self.cols)] #gets col. vector def get_col(self, c): if c > self.cols or c < 1: raise ValueError(f"Col. 
must be in matrix: Given {r} for ({self.cols}x{self.cols}) matrix") self.get_T() ans = self.to_matrix()[c-1] self.get_T() return ans ''' Determinant ''' #gets minor M_ij of a matrix def minor(self, m, cols, i, j): #gets M_ij by adding a_ij st m_n = [] mtx = sq_matrix(m, cols) dels = mtx.get_row_elems(i)+mtx.get_col_elems(j) for x in range(cols*cols): if x not in dels: m_n.append(m[x]) return self.det(m_n, cols-1) #gets determinant of sq_matrix def get_det(self, i=1): return self.det(self.m, self.cols) #get determinant of a matrix def det(self, m, cols, i=1): #find determinant of nxn matrix via cofactoring if cols < 1: return "must have > 1 col" #base case 2x2 matrix #print(m) if cols == 2: return m[0]*m[3]-m[1]*m[2] #sum of a ij * cij, for j = 1 -> n M_ij = lambda i,j: self.minor(m, cols, i, j) C_ij = lambda i,j: (-1)**(i+j) * M_ij(i, j) Cof_ij = lambda i,j: self.mat_indx(m, cols, i, j)*C_ij(i, j) return sum([Cof_ij(i, j) for j in range(1, cols+1)]) ''' Inverse of matrix ''' def get_inverse(self, sep=False): return self.inverse(self.m, self.cols, sep) def inverse(self, m, cols, sep=False): det = self.det(m, cols) if not det: return sq_matrix([0],1) if cols==2: return det, sq_matrix([m[3], -m[1], -m[2], m[0]],2) cof_mat = cols**2*[0] #adj(A) = Cof^T = ((-1)^(i+j) M_ji) Cof_ij = lambda i,j: (-1)**(i+j) * self.minor(m, cols, i, j) for row in range(cols): for col in range(cols): cof_mat[row*cols + col] = Cof_ij(col+1, row+1) cof_mat = sq_matrix(cof_mat, cols) if not sep: cof_mat.m = [x / det if x!=0 else 0 for x in cof_mat.m] return cof_mat return det, cof_mat ''' Eigenvalues ''' #gets eq. det(sq_matrix - λ*I) def eigenvalues_lhs_mat(self): lmb_ID_a = sq_matrix(self.m[:], self.cols) lmb_ID_b = sq_matrix(self.m_ID[:], self.cols) lmb_ID_b.scalar(sympy.symbols('λ')) return (lmb_ID_a - lmb_ID_b).m def eigenvalues_lhs(self): return self.det(self.eigenvalues_lhs_mat(), self.cols) #gets eigen values by using lhs = 0 via sympy solve def get_eigenvalues(self, no_imag=True, rational=False): res = sympy.solve(self.eigenvalues_lhs(), sympy.symbols('λ')) if rational: res = [sympy.Rational(i) for i in res] else: res = [sympy.N(i) for i in res] reals = [] imags = [] for i in res: if i.is_real: reals.append(float(i)) for i in res: if not i.is_real: imags.append(i) if no_imag: return reals return reals, imags def get_trace(self): return self.trace(self.m, self.cols) def trace(self, m, cols): mat = sq_matrix(m, cols) return sum([i**2 for i in mat.get_eigenvalues()]) ''' Eigenvectors ''' def get_eigenvectors(self): pass ''' RREF ''' #does RREF on sq_matrix def get_RREF(self): return self.RREF(self.m, self.cols) #does RREF via sympy rref def RREF(self, m, cols): mat = sq_matrix(m, cols) mtx = mat.to_matrix() mtx = Matrix(mtx) rref_mtx = list(mtx.rref()[0]) res = sq_matrix(rref_mtx, cols) return res def get_pivots(self): return self.pivots(self.m, self.cols) def pivots(self, m, cols): rref = self.RREF(m, cols) mtx = rref.to_matrix() pivots = [] for i in mtx: temp = list(filter(lambda x: x!=0, i)) if temp: pivots.append(temp[0]) return pivots def get_rank(self): return self.rank(self.m , self.cols) def rank(self, m, cols): return len(self.pivots(m, cols)) def full_rank(self): return self.get_rank()==self.cols ''' Cramer's Rule ''' #finds solution x, for Ax = b, via cramer's rule def cramer(self, b): if len(b) != self.cols: raise ValueError(f"vector must have same length as matrix cols: Given ({len(b)}), ({self.cols}x{self.cols})") #x_i = det(A_i)/det(A) det = self.get_det() cols = [self.get_col(i) for i in range(1, 
self.cols+1)] A_i = [i[:] for i in [cols]*self.cols] temp = [] #generate A_i's for i in range(len(A_i)): A_i[i][i] = b for j in A_i[i]: temp += j A_i[i] = temp temp = [] A_i[i] = sq_matrix(sq_matrix(A_i[i], self.cols).m_T, self.cols) dets = [self.det(i.m, self.cols) for i in A_i] res = [d/det for d in dets] return res ''' Check Matrix Type ''' #checks if matrix is invertible def inv(self): return self.get_det()!=0 #checks if matrix is identity def iden(self): return self.get_det()==1 #check if sq_matrix is symmetric def sym(self): return self.m==self.m_T #check if sq_matrix is proj. matrix def proj(self): return self*self==self #check if orthogonal matrix def ortho(self): tmp = sq_matrix(self.m_T, self.cols) tmp_id = sq_matrix(self.m_ID, self.cols) return tmp*self == self*tmp == tmp_id def perm(self): for i in self.m: if i not in [0, 1] or i<0: return False return self.ortho() ''' Operators ''' #dot product of two vectors def dot(self, v1, v2): return sum([v1[i]*v2[i] for i in range(self.cols)]) #matrix mul. of two matrices def __mul__(self, other): if self.cols != other.cols: raise ValueError(f"Both matrices must have same size: Given ({self.cols}x{self.cols}), ({other.cols}x{other.cols})") m_tmp = [] for row in range(1, self.cols+1): for col in range(1, self.cols+1): res = self.dot(self.get_row(row), other.get_col(col)) if res % 1 == 0: res = int(res) m_tmp.append(res) return sq_matrix(m_tmp, self.cols) #multiplies matrix by scalar def scalar(self, s): self.m = [x*s if x!=0 else 0 for x in self.m] self.update() #add two matrices together def __add__(self, other): if self.cols != other.cols: raise ValueError(f"Both matrices must have same size: Given ({self.cols}x{self.cols}), ({other.cols}x{other.cols})") return sq_matrix([self.m[i]+other.m[i] for i in range(self.cols*self.cols)], self.cols) #sub two matrices together def __sub__(self, other): if self.cols != other.cols: raise ValueError(f"Both matrices must have same size: Given ({self.cols}x{self.cols}), ({other.cols}x{other.cols})") return sq_matrix([self.m[i]-other.m[i] for i in range(self.cols*self.cols)], self.cols) #checks if self matrix == other matrix def __eq__(self, other): return self.m==other.m #updates m_T def update(self): self.m_T = self.T(self.m[:], self.cols) ''' Printing ''' #generates summary of matrix def summary(self): start = time.time() transp = sq_matrix(self.m_T, self.cols) rref = self.get_RREF() eigenvals = self.get_eigenvalues(False) print(self.cols*"-----") print(f"{self.title} ({self.cols}x{self.cols}):") self.print_matrix() print(f"{self.title}.T:") transp.print_matrix() print(f"{self.title}^-1:") if not self.inv(): print("Matrix not invertible") else: self.print_inverse() print(f"{self.title}_RREF:") rref.print_matrix() print(f"\nDet: {self.get_det()} | Exists A^-1: {self.inv()} | Is I_{self.cols}: {self.iden()} \n") print(f"Symmetric: {self.sym()} | Projection: {self.proj()} | Orthogonal: {self.ortho()} | Permutation: {self.perm()} \n") print(f"Real Eigenvalues: {eigenvals[0]}") print(f"Complex Eigenvalues: {eigenvals[1]} \n") print(f"Rank: {self.get_rank()} | Full Rank: {self.full_rank()}") print(self.cols*"-----") end = time.time() elp = end-start print(f"took {round(elp*1000,3)}ms") #returns 2d matrix rep. of 1d matrix def to_matrix(self): return [[self.get_mat_indx(row, col) for col in range(1, self.cols+1)] for row in range(1, self.cols+1)] #prints 2d matrix rep. 
via latex md def to_latex(self): #for row in range(self.cols): # print(self.to_matrix()[row]) bls = chr(92) res = bls + "begin{bmatrix}" for row in range(1, self.cols+1): for ele in self.get_row(row): res += f"{ele} &" res = res[:-1] + 2*bls res = res[:-2] + bls+"end{bmatrix}" return res def print_matrix(self): display(Markdown(self.to_latex())) def print_inverse(self): res = self.get_inverse(True) display(Markdown(f"$\\frac{1}{ {res[0]} } * $")) res[1].print_matrix() def get_heatmap(self, cm='binary', interp = 'nearest', xlims=None, ylims=None): self.heatmap(self.m, cm, interp, xlims, ylims) def heatmap(self, m, cm='binary', interp = 'nearest', xlims=None, ylims=None): if xlims != None: plt.xlim([xlims[0],xlims[1]]) if ylims != None: plt.ylim([ylims[0],ylims[1]]) np_m = np.matrix(sq_matrix(m[:], self.cols).to_matrix()) plt.imshow(np_m, cmap=cm, interpolation=interp) plt.show() #prints 2d rep. of matrix via print(sq_matrix) def __str__(self): self.print_matrix() return "" class gen_sq_matrix(sq_matrix): def __init__(self): self.m = [0,0,0,0] self.cols = 2 self.title = "A" self.update() def update(self): self.m_T = self.T(self.m[:], self.cols) def to_mat(self): return sq_matrix(self.m, self.cols, self.title) def I(self, n=None): if n==None or n < 2: n = self.cols self.m = [1 if i%(n+1)==0 else 0 for i in range(n**2)] self.cols = n self.update() def full_num(self, num, n=None): if n==None or n < 2: n=self.cols self.m = n**2*[num] self.cols = n self.update() def empty(self, n=None): self.full_num(0, n) self.update() def ones(self, n=None): self.full_num(1, n) self.update() def scalar(self, s): self.m = [x*s if x!=0 else 0 for x in self.m] self.update() def gauss(self, amp, sigs, mus): self.fapply_2d(lambda e_i, x, y: amp*np.exp(-( ((x-mus[0])**2/(2*sigs[0])) + ((y-mus[1])**2/(2*sigs[1]))))) def fapply_1d(self, f): self.m = [f(self.m[x], x) for x in range(self.cols**2)] self.update() def fapply_2d(self, f): for row in range(self.cols): for col in range(self.cols): self.m[row*self.cols + col] = f(self.m[row*self.cols + col], row+1, col+1) self.update() class chem_balance(sq_matrix): def __init__(self, r1, r2): #r1 = reactant, r2 = reaction #Element: (element, #) #Compound [Element, ...] #Reactant/Reaction = [Compound, ...] 
self.r1 = r1 self.r2 = r2 self.contains = self.contents() self.num_elems = len(self.contains) self.elems = list(self.contains.keys()) def contents(self): dic = {} for comp in self.r1+self.r2: for elem in comp: #print(elem) key = elem[0] val = elem[1] if elem[0] in dic: dic[key][0] += 1 dic[key][1] += val else: dic[key] = [1, val] return dic def make_rows(self): rows = [] for ele, comp in enumerate(self.r1+self.r2): r = [] for indx, i in enumerate(self.elems): udp = False for elem in comp: key = elem[0] val = -elem[1] if ele >= len(self.r1) else elem[1] if key==i: r.append(val) udp = True if not udp: r.append(0) rows.append(r) return rows def to_mat(self): rows = self.make_rows() diff = len(rows)-len(rows[0]) if(len(rows) > len(rows[0])): for i in rows: i.append([0]*diff) if(len(rows) < len(rows[0])): for i in range(diff): rows.append([0]*len(rows[0])) mat = [] for i in rows: mat += i mat_T = sq_matrix(mat, len(rows)).m_T res = sq_matrix(mat_T, len(rows)) return res def balance(self): res = Matrix(self.to_mat().to_matrix()).nullspace()[0] denoms = [fraction(i)[1] for i in res] lcm = np.lcm.reduce(denoms) return lcm*res def latex_result(self): bal = self.balance() res = "" for index, comp in enumerate(self.r1+self.r2): res = res + "" + str(bal[index]) for elem in comp: key = elem[0] val = elem[1] res = res + str(key) + "_" + str(val) if index==len(self.r1)-1: res = res + "\\rightarrow" if index!=len(self.r1+self.r2)-1 and index!=len(self.r1)-1: res = res + "+" display(Markdown("$" + res + "$")) def __str__(self): for i in self.r1: print(i) for j in self.r2: print(j) return "" # + ra1 = [("P", 1), ("Cl", 5)], [("H", 2), ("O", 1)] ra2 = [("H", 3), ("P", 1), ("O", 4)], [("H", 1), ("Cl", 1)] #rb1 = [("N", 2)], [("H", 2)] #rb2 = [("N", 1), ("H", 3)], [] #print(ra1) #print(rb1) M = chem_balance(ra1, ra2) M.latex_result() # - rot = [sympy.cos(var("x")), -sympy.sin(var("x")), sympy.sin(var("x")), sympy.cos(var("x"))] M = sq_matrix(rot, 2) M.summary() M = gen_sq_matrix() M.ones(3) M.fapply_1d(lambda e, i: i) M = M.to_mat() M.summary() # + n = 250 gen_a = gen_sq_matrix() gen_a.ones(n) gen_a.scalar(1) gen_a.fapply_2d(lambda e, x, y: np.ceil(np.sin(x/y * 180/np.pi))%2==0) gen_a.get_heatmap('hot') gen_a.fapply_2d(lambda e, x, y: np.floor(np.sin(x/y * 180/np.pi))%2==0) gen_a.get_heatmap('hot') # + cols = 100 amp = 1 sig_x, sig_y = [cols*(cols/10)/2, cols*(cols/10)/2] x_0, y_0 = [cols/2, cols/2] M = gen_sq_matrix() M.ones(cols) M.fapply_2d(lambda e, x, y: x) M = M.to_mat() M.get_heatmap() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline import pandas as pd from rdkit import Chem from rdkit.Chem import Draw from rdkit.Chem.Scaffolds import MurckoScaffold from rdkit.Chem import rdMolDescriptors from multiprocessing import Pool, cpu_count import numpy as np import matplotlib.pyplot as plt import itertools from rdkit.Chem import PandasTools import glob from pathlib import Path import os OVERWRITE_FILES = False def normalize_smiles(smiles): mol = Chem.MolFromSmiles(smiles) if mol: return Chem.MolToSmiles(mol, isomericSmiles=False) else: return np.NaN def normalize_smiles_series(smiles_series): return smiles_series.map(normalize_smiles) def parallelize_dataframe(df, func): parts = np.array_split(df, cpu_count()) pool = Pool(cpu_count()) df = pd.concat(pool.map(func, list(parts))) pool.close() pool.join() return df def 
generate_murcko_scaffold(smile): try: mol = Chem.MolFromSmiles(smile) if mol: scaffold = MurckoScaffold.GetScaffoldForMol(mol) return Chem.MolToSmiles(scaffold, isomericSmiles=False) else: return np.NaN except: return np.NaN def generate_topological_scaffold(smile): try: mol = Chem.MolFromSmiles(smile) if mol: scaffold = MurckoScaffold.MakeScaffoldGeneric(MurckoScaffold.GetScaffoldForMol(mol)) return Chem.MolToSmiles(scaffold, isomericSmiles=False) else: return np.NaN except: return np.NaN def generate_murcko_scaffold_series(data): return data.map(generate_murcko_scaffold) def generate_topological_scaffold_series(data): return data.map(generate_topological_scaffold) def calcLogP(smiles): return rdMolDescriptors.CalcCrippenDescriptors(Chem.MolFromSmiles(smiles))[0] def calcLogP_series(df): return df.map(calcLogP) # + filternames = ["NoFilter", "CompoundSimilarity", "IdenticalMurckoScaffold", "IdenticalTopologicalScaffold", "ScaffoldSimilarity"] filternames_in_plots = [ "No memory", "CompoundSimilarity memory", "IdenticalMurckoScaffold memory", "IdenticalTopologicalScaffold memory", "ScaffoldSimilarity memory"] target_params = { "DRD2": {"maxstep": 300, "minactivity": 0.7}, "HTR1A": {"maxstep": 300, "minactivity": 0.7}, "clogP": {"maxstep": 150, "minactivity": 1.} } pathname = f"{Path.home()}/REINVENT/results/*/scaffold_memory.csv" for path in glob.glob(pathname): folder = path.split("/")[-2].replace(" ","_") if os.path.exists(f"data/memories/{folder}/memory_preprocessed.csv.gz") and not OVERWRITE_FILES: print(f"Skipping {folder} as it seems to already be processed") continue elements = folder.split("_") if len(elements) > 7: continue target, filtername, minsimilarity, bucket_size, outputmode, temperature, experience_replay = elements minsimilarity = float(minsimilarity) bucket_size = int(bucket_size) temperature = float(temperature) experience_replay = bool(experience_replay) memory = pd.read_csv(path) if len(memory) <= 1: print(f"{path} contains nothing") continue memory.rename(columns = {'SMILES':'GENERATED_SMILES'}, inplace = True) memory['SMILES'] = parallelize_dataframe(memory["GENERATED_SMILES"], normalize_smiles_series) memory = memory.dropna() memory = memory.sort_values(by=['step']) memory = memory.drop_duplicates('SMILES', keep="first") memory["Murcko Scaffold"] = parallelize_dataframe(memory["SMILES"], generate_murcko_scaffold_series) memory["Topological Scaffold"] = parallelize_dataframe(memory["SMILES"], generate_topological_scaffold_series) memory = memory.dropna() memory['ID'] = ["generated_{}_{}_{}_{}_{}_{}_{}_{}".format(target.replace(" ","_"), filtername, minsimilarity, bucket_size, outputmode, temperature, experience_replay, i) for i in range(len(memory))] maxstep = target_params[target]["maxstep"] memory = memory.query("step < @maxstep") os.makedirs(f"data/memories/{folder}", exist_ok=True) memory.to_csv(f"data/memories/{folder}/memory_preprocessed.csv.gz", index=False) os.makedirs(f"to_fragment/{folder}", exist_ok=True) memory[["SMILES","ID"]].to_csv(f"to_fragment/{folder}/generated_to_fragment.smi" ,sep=",",index=False,header=False) # - targets = ["DRD2"] for target in targets: df = pd.read_pickle(f"{Path.home()}/projects/reinvent-classifiers/{target}_df.pkl.gz").query("activity_label == 1")[["Original_Entry_ID","DB","RDKIT_SMILES","trainingset_class","cluster_id","activity_label","cfp"]] if os.path.exists(f"data/{target}/actives.pkl.gz") and not OVERWRITE_FILES: print(f"Skipping {target} as it seems to already be processed") continue _training = 0 _test = 0 
_validation = 0 _target = target def make_id(row): global _training, _test, _validation, _target template = "{}_{}_{}_{}" if row["trainingset_class"] == "training": _training += 1 return template.format(row["Original_Entry_ID"], _target, row["trainingset_class"], _training) elif row["trainingset_class"] == "test": _test += 1 return template.format(row["Original_Entry_ID"], _target, row["trainingset_class"], _test) elif row["trainingset_class"] == "validation": _validation += 1 return template.format(row["Original_Entry_ID"], _target, row["trainingset_class"], _validation) df['ID'] = df.apply(make_id, axis=1) df["Murcko Scaffold"] = parallelize_dataframe(df["RDKIT_SMILES"], generate_murcko_scaffold_series) df["Topological Scaffold"] = parallelize_dataframe(df["RDKIT_SMILES"], generate_topological_scaffold_series) os.makedirs(f"data/{target}/", exist_ok=True) df.to_pickle(f"data/{target}/actives.pkl.gz") df = df[["RDKIT_SMILES","ID"]] os.makedirs(f"to_fragment/{target}", exist_ok=True) df[["RDKIT_SMILES","ID"]].to_csv(f"to_fragment/{target}/actives_to_fragment.smi",sep=",",index=False,header=False) # + from joblib import Parallel, delayed from rdkit import Chem from rdkit import rdBase from rdkit.Chem import AllChem from rdkit.Chem import SaltRemover from rdkit.Chem import rdmolops rdBase.DisableLog('rdApp.error') def _initialiseNeutralisationReactions(): patts = ( # Imidazoles ('[n+;H]', 'n'), # Amines ('[N+;!H0]', 'N'), # Carboxylic acids and alcohols ('[$([O-]);!$([O-][#7])]', 'O'), # Thiols ('[S-;X1]', 'S'), # Sulfonamides ('[$([N-;X2]S(=O)=O)]', 'N'), # Enamines ('[$([N-;X2][C,N]=C)]', 'N'), # Tetrazoles ('[n-]', '[nH]'), # Sulfoxides ('[$([S-]=O)]', 'S'), # Amides ('[$([N-]C=O)]', 'N'), ) return [(Chem.MolFromSmarts(x), Chem.MolFromSmiles(y, False)) for x, y in patts] _reactions = _initialiseNeutralisationReactions() def _neutraliseCharges(mol, reactions=None): global _reactions if reactions is None: reactions = _reactions replaced = False for i, (reactant, product) in enumerate(reactions): while mol.HasSubstructMatch(reactant): replaced = True rms = AllChem.ReplaceSubstructs(mol, reactant, product) mol = rms[0] if replaced: return mol, True else: return mol, False def _getlargestFragment(mol): frags = rdmolops.GetMolFrags(mol, asMols=True, sanitizeFrags=True) maxmol = None for mol in frags: if mol is None: continue if maxmol is None: maxmol = mol if maxmol.GetNumHeavyAtoms() < mol.GetNumHeavyAtoms(): maxmol = mol return maxmol _saltremover = SaltRemover.SaltRemover() def valid_size(mol, min_heavy_atoms, max_heavy_atoms, element_list, remove_long_side_chains): """Filters molecules on number of heavy atoms and atom types""" if mol: correct_size = min_heavy_atoms < mol.GetNumHeavyAtoms() < max_heavy_atoms if not correct_size: return valid_elements = all([atom.GetAtomicNum() in element_list for atom in mol.GetAtoms()]) if not valid_elements: return has_long_sidechains = False if remove_long_side_chains: # remove aliphatic side chains with at least 4 carbons not in a ring sma = '[CR0]-[CR0]-[CR0]-[CR0]' has_long_sidechains = mol.HasSubstructMatch(Chem.MolFromSmarts(sma)) return correct_size and valid_elements and not has_long_sidechains def standardize_smiles(smiles, min_heavy_atoms=10, max_heavy_atoms=50, element_list=[6, 7, 8, 9, 16, 17, 35], remove_long_side_chains=False, neutralise_charges=True): mol = Chem.MolFromSmiles(smiles) if mol: mol = _getlargestFragment(mol) if mol: mol = rdmolops.RemoveHs(mol, implicitOnly=False, updateExplicitCount=False, sanitize=True) if mol: mol = 
_saltremover.StripMol(mol, dontRemoveEverything=True) if mol and neutralise_charges: mol, _ = _neutraliseCharges(mol) if mol: rdmolops.Cleanup(mol) rdmolops.SanitizeMol(mol) mol = rdmolops.RemoveHs(mol, implicitOnly=False, updateExplicitCount=False, sanitize=True) if mol and valid_size(mol, min_heavy_atoms, max_heavy_atoms, element_list, remove_long_side_chains): return Chem.MolToSmiles(mol, isomericSmiles=False) return np.NaN def standardize_smiles_from_file(fname): """Reads a SMILES file and returns a list of RDKIT SMILES""" with open(fname, 'r') as f: smiles_list = [line.strip().split(" ")[0] for line in f] return standardize_smiles_list(smiles_list) def standardize_smiles_list(smiles_list): """Reads a SMILES list and returns a list of RDKIT SMILES""" smiles_list = Parallel(n_jobs=-1, verbose=0)(delayed(standardize_smiles)(line) for line in smiles_list) smiles_list = [smiles for smiles in set(smiles_list) if smiles is not None] logging.debug("{} unique SMILES retrieved".format(len(smiles_list))) return smiles_list # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import codecs import numpy as np from sklearn.preprocessing import LabelEncoder from sklearn.feature_extraction.text import TfidfVectorizer # ## 1 数据准备 # + import xgboost as xgb # 1 导入数据 labels = [] text = [] with codecs.open('output/data_clean_split.txt','r',encoding='utf-8') as f: document_split = f.readlines() for document in document_split: temp = document.split('\t') labels.append(temp[0]) text.append(temp[1].strip()) # 2 标签转换为数字 label_encoder = LabelEncoder() y = label_encoder.fit_transform(labels) # 3 TF-IDF提取文本特征 tfv1 = TfidfVectorizer(min_df=4, max_df=0.6) tfv1.fit(text) features = tfv1.transform(text) # 4 切分数据集 from sklearn.model_selection import train_test_split x_train_tfv, x_valid_tfv, y_train, y_valid = train_test_split(features, y, stratify=y, random_state=42, test_size=0.1, shuffle=True) # - # ## 2 定义损失函数 def multiclass_logloss(actual, predicted, eps=1e-15): """对数损失度量(Logarithmic Loss Metric)的多分类版本。 :param actual: 包含actual target classes的数组 :param predicted: 分类预测结果矩阵, 每个类别都有一个概率 """ # Convert 'actual' to a binary array if it's not already: if len(actual.shape) == 1: actual2 = np.zeros((actual.shape[0], predicted.shape[1])) for i, val in enumerate(actual): actual2[i, val] = 1 actual = actual2 clip = np.clip(predicted, eps, 1 - eps) rows = actual.shape[0] vsota = np.sum(actual * np.log(clip)) return -1.0 / rows * vsota # ## 3 使用模型分类 # + # 基于tf-idf特征,使用xgboost clf = xgb.XGBClassifier(max_depth=7, n_estimators=200, colsample_bytree=0.8, subsample=0.8, nthread=10, learning_rate=0.1) clf.fit(x_train_tfv.tocsc(), y_train) predictions = clf.predict_proba(x_valid_tfv.tocsc()) print ("logloss: %0.3f " % multiclass_logloss(y_valid, predictions)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import KFold from util import gini_normalized from parameters import parameters, batch_size, epochs, layers, activation_functions, loss, alpha from preprocessing import preproc # + # Importing the train dataset dataset_train = pd.read_csv('train.csv') # Importing the test dataset 
dataset_test = pd.read_csv('test.csv') # preprocessing train dataset X_train, y_train = preproc(dataset_train, 'train', oneHot=False, feature_selection=True) # preprocessing test dataset X_test, y_test = preproc(dataset_test, 'test', oneHot=False, feature_selection=True) # + # Now let's make the Classifier! # Fitting Random Forest Classification to the Training set class_weight = {0: 1., 1: alpha} K = 5 kf = KFold(n_splits=K, random_state=42, shuffle=True) # - i=0 results = [] for train_index, test_index in kf.split(X_train): train_x, train_y = X_train[train_index], y_train[train_index] eval_x, eval_y = X_train[test_index], y_train[test_index] classifier = RandomForestClassifier(n_estimators=30, criterion = 'gini', max_depth=5, random_state = 1, max_features='auto', class_weight=class_weight) classifier.fit(train_x, train_y) res_eval = classifier.predict(eval_x) res = classifier.predict(X_test) results.append(res) print('gini_eval', i) gini_score = gini_normalized(eval_y, res_eval) print(gini_score) i+=1 def to_csv(y_pred, ids): import csv with open('sumbission_5Kfold_random_forest.csv', 'w') as csvfile: spamwriter = csv.writer(csvfile, delimiter=',') spamwriter.writerow(['id', 'target']) for i in range(len(y_pred)): spamwriter.writerow([ids[i], y_pred[i]]) submission = (results[0] + results[1] + results[2] + results[3] + results[4]) / 5 idx = dataset_test.iloc[:, 0].values to_csv(submission,idx) # gini score for this submission : 0.04576 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import ipyvuetify as v import ipyvuedraggable as d from traitlets import (Any, Unicode, List) # Output a dummy Draggable instance, so ipyvue gets loaded. 
This does not # happen automatically when only VueTemplate is used d.Draggable() # + def getItems(): return [ { 'id': 1, 'avatar': "https://s3.amazonaws.com/vuetify-docs/images/lists/1.jpg", 'title': "Brunch this life?", 'subtitle': "Subtitle 1" }, { 'id': 2, 'avatar': "https://s3.amazonaws.com/vuetify-docs/images/lists/2.jpg", 'title': "Winter Lunch", 'subtitle': "Subtitle 2" }, { 'id': 3, 'avatar': "https://s3.amazonaws.com/vuetify-docs/images/lists/3.jpg", 'title': "Oui oui", 'subtitle': "Subtitle 3" } ] def getItems2(): return [ { 'id': 4, 'avatar': "https://s3.amazonaws.com/vuetify-docs/images/lists/4.jpg", 'title': "Brunch this weekend?", 'subtitle': "Subtitle 4" }, { 'id': 5, 'avatar': "https://s3.amazonaws.com/vuetify-docs/images/lists/5.jpg", 'title': 'Summer BBQ', 'subtitle': "Subtitle 5" } ] # + class MyDraggable(v.VuetifyTemplate): items = List(getItems()).tag(sync=True) items2 = List(getItems2()).tag(sync=True) template = Unicode(''' FIRST LIST SECOND LIST ''').tag(sync=True) MyDraggable() # + def makeListItem(item): return v.ListItem(children=[ v.ListItemAvatar(children=[ v.Html(tag='img', attributes={'src': item['avatar']}) ]), v.ListItemContent(children=[ v.ListItemTitle(children=[item['title']]), v.ListItemSubtitle(children=[item['subtitle']]) ]) ]) dg1 = d.Draggable( v_model=getItems(), group={'name': 'people'}, children=[makeListItem(item) for item in getItems()]) def update_dg1(change): dg1.children=[makeListItem(item) for item in dg1.v_model] dg1.observe(update_dg1, names=['v_model']) dg2 = d.Draggable( v_model=getItems2(), group={'name': 'people'}, children=[makeListItem(item) for item in getItems2()]) def update_dg2(change): dg2.children=[makeListItem(item) for item in dg2.v_model] dg2.observe(update_dg2, names=['v_model']) v.Content(children=[ v.Container(fluid=True, children=[ v.Layout(align_start=True, justify_center=True, children=[ v.Flex(xs4=True, class_='elevation-1 pa-3 ma-2', children=[ v.List(two_line=True, children=[ v.Subheader(children=['FIRST LIST']), dg1 ]) ]), v.Flex(xs4=True, class_='elevation-1 pa-3 ma-2', children=[ v.List(two_line=True, children=[ v.Subheader(children=['SECOND LIST']), dg2 ]) ]) ]) ]) ]) # + class MyDraggableArea(v.VuetifyTemplate): items = List(getItems() + getItems2()).tag(sync=True) items2 = List().tag(sync=True) template = Unicode(''' FIRST LIST DROP AREA
Nr of items: {{ items2.length }}
''').tag(sync=True) css = Unicode(''' /* Hide dragged element in target */ .droptarget > [draggable=true] { display: none; } .sortable-ghost.no-drop { cursor: no-drop; } .droparea { height: 200px; border: 1px solid black; } ''').tag(sync=True) methods = Unicode('''{ checkMove(e) { duplicate = this.items2.some(item => item.id === e.draggedContext.element.id) canMove = e.to.id !== "source" && !duplicate; cancelDrop = e.to.id === "catchAll" if (canMove && !cancelDrop) { e.dragged.classList.remove('no-drop') } else { e.dragged.classList.add('no-drop') } return canMove; }, }''').tag(sync=True) MyDraggableArea() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # We'll try to: # - import the data # - preprocess for a tensorflow model (following time series forecasting tutorial) # - complete the first part of the modelling (up to a simple single-window model) # + import os import datetime import IPython import IPython.display import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns import tensorflow as tf mpl.rcParams['figure.figsize'] = (8, 6) mpl.rcParams['axes.grid'] = False # + train_path = '/home/joyvan/cfm/train_dc2020.h5' df = pd.read_hdf(train_path) df.head() df = df.loc[df['stock_id'] == 387 ] # make a datetime (to improve?) def mk_dt(df,num): df_dt= df['day_id'] + df[(num,'tod')]/1e11 return df_dt # - # We'd like to predict the next _source_id_ of trades. We have trades with a time of day, and a day ID for each day. the dataset runs over a whole year. # For one row in the dataset we have the whole time window encapsulated in the ten trades, whose columns are # # ```sh # (x,'source_id'), (x,'price'), (x,'qty'), (x,'tod') # ``` # # We're going to isolate these into a reduced dataset and discard the rest, as the OB info will require significantly more preprocessing work to include in any model *note; we have one row of 6 order books for the latest trade (the label trade) and nothing for the rest of the window... 
so LSTM is not suited there* # + for j in range(10): df[(j,'dt')] = mk_dt(df,j) df[(4,'dt')].head() # + _cols_sid= [(0,'source_id'),(1,'source_id'),(2,'source_id'),(3,'source_id'),(4,'source_id'),(5,'source_id'),(6,'source_id'),(7,'source_id'),(8,'source_id'),(9,'source_id')] _cols_price=[(0,'price'),(1,'price'),(2,'price'),(3,'price'),(4,'price'),(5,'price'),(6,'price'),(7,'price'),(8,'price'),(9,'price')] _cols_qty=[(0,'qty'),(1,'qty'),(2,'qty'),(3,'qty'),(4,'qty'),(5,'qty'),(6,'qty'),(7,'qty'),(8,'qty'),(9,'qty')] _cols_tod=[(0,'tod'),(1,'tod'),(2,'tod'),(3,'tod'),(4,'tod'),(5,'tod'),(6,'tod'),(7,'tod'),(8,'tod'),(9,'tod')] #def mk_ds() plot_features=df[[_cols_sid[0],_cols_price[0],_cols_qty[0]]] plot_features.index = df[(0,'dt')].sort_values() _ = plot_features.plot(subplots=True) plot_features.head() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt # # Siu et al 2004 Gamma Garch Option Pricing def Y(r,v,a,h,Xt): """equation 3.12 siu et al""" return r + v*h**0.5 - 0.5* h - (a*h)**0.5 + Xt def h(omega,alpha,beta,a,ht,Xt): """equation 3.13 siu et al""" return omega + alpha*(Xt - (a*ht)**0.5)**2 + beta*ht def htalt(a,v,h): """equation 3.14 siu et al""" return a**2 * (1- np.exp((v*h**0.5 - 0.5 * h - (a*h)**0.5)/a)) def X(a,b): """returns gamma / Xt in siu et al""" return np.random.gamma(a, b, 1)[0] def b(v,h,a): """equation 3.11 in siu et al""" return 1/((1- np.exp((v*h**0.5 - 0.5 * h - (a*h)**0.5)/a))) # + omega = 0.00003577 alpha = 0.155966 beta = 0.646049 v = 0.04873824 # lambda in siu et al r = 0.001 # risk free rate r ht = 1.4 # starting value for variance a = 4 # shape parameter for gamma distribution print ("{:<6} {:<6} {:<6} {:<6}".format('bt','Xt','ht', "Yt")) myb = [] myX = [] myh = [] myY = [] for i in range(25): bt = b(v,ht,a) Xt = X(a, bt) ht = h(omega,alpha,beta,a,ht,Xt) #ht = htalt(a,v,ht) Yt = Y(r,v,a,ht,Xt) print ("{:<6.4f} {:<6.4f} {:<6.4f} {:<6.4f}".format( bt, Xt, ht, Yt)) myb.append(bt), myX.append(Xt), myh.append(ht), myY.append(Yt) # - plt.figure(figsize = (12,8)) plt.subplot(111) plt.plot(myb, color = "blue", label = "bt") plt.plot(myX, color = "black", label = "Xt") plt.plot(myh, color = "red", label = "ht") plt.plot(myY, color = "yellow", label = "Yt") plt.legend() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # TABLE OF CONTENTS: # # These tips are focused on general Python tips I think that are good to know. # # Please go through official documentation if you want more thorough examples. # # Topics: # - [Additional Operators](#op) # - [Global](#global) # - [Comparisons](#compare) # - [Enumerate](#enum) # - [Comprehension](#comp) # - [List](#list) # - [Set](#set) # - [Dict](#dict) # + # # Uncomment if you want to use inline pythontutor # from IPython.display import IFrame # IFrame('http://www.pythontutor.com/visualize.html#mode=display', height=1500, width=750) # - # # Additional Operators # [Return to table of contents](#toc) # # Operators besides you typical `+`, `-`, `/`, etc. 
# `~` Inversion, is the bitwise complement operator in python which essentially calculates `-x - 1` # # `=` Assign value of right side of expression to left side operand `x = y + z` # # `+=` Add AND: Add right side operand with left side operand and then assign to left operand `a+=b` `a=a+b` # # `-=` Subtract AND: Subtract right operand from left operand and then assign to left operand `a-=b` `a=a-b` # # `*=` Multiply AND: Multiply right operand with left operand and then assign to left operand `a*=b` `a=a*b` # # `/=` Divide AND: Divide left operand with right operand and then assign to left operand `a/=b` `a=a/b` # # `%=` Modulus AND: Takes modulus using left and right operands and assign result to left operand `a%=b` `a=a%b` # # `//=` Divide(floor) AND: Divide left operand with right operand and then assign the value(floor) to left operand `a//=b` `a=a//b` # # `**=` Exponent AND: Calculate exponent(raise power) value using operands and assign value to left operand `a**=b` `a=a**b` # # `&=` Performs Bitwise AND on operands and assign value to left operand `a&=b` `a=a&b` # # `|=` Performs Bitwise OR on operands and assign value to left operand `a|=b` `a=a|b` # # `^=` Performs Bitwise xOR on operands and assign value to left operand `a^=b` `a=a^b` # # `>>=` Performs Bitwise right shift on operands and assign value to left operand `a>>=b` `a=a>>b` # # `<<=` Performs Bitwise left shift on operands and assign value to left operand `a <<= b` `a= a << b` # Or assignment # # Assigning a variable based on another variable's assignment. var = None b = None or var print(b) var = 5 b = None or var print(b) # # Global # [Return to table of contents](#toc) # Global lets you access global variables. In this example the global variable `c` is outside of the functions scope. # + # c = 1 # def add(): # c = c + 2 # print(c) # add() # # UnboundLocalError: local variable 'c' referenced before assignment # + c = 1 def add(): global c c = c + 2 print(c) add() # - # # Comparisons # [Return to table of contents](#toc) # Max # + # Find the max of an array/variables/values etc. print(max(5, 2)) print(max([10,11,12,13])) # - # Min # + # Find the min of an array/variables/values etc. print(min(5, 2)) print(min([10,11,12,13])) # - # float("inf")/float("-inf") # + # Setting a value to infinity or -infinity lets you have an easy comparion print(float("inf") > 309840) # - print(float("-inf") < -930984) # # Enumerate # [Return to table of contents](#toc) # # This lets you take the element's index and use it as a variable. # + pies = ["apple", "blueberry", "lemon"] for num, i in enumerate(pies): print(num, ":", i) # + # For dictionaris this is what it would look like. pies = {"pie1":"apple", "pie2":"blueberry", "pie3":"lemon"} for key, value in pies.items(): print(key, ":", value) # - # # Comprehensions # [Return to table of contents](#toc) # # Are a quicker way to create lists, dicts and sets, they act like for loops and can take conditions as well as if else statements. # List comprehension # # [Return to table of contents](#toc) # + # Manual way to make a list list_1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] list_2 = [11, 12, 13, 14, 15, 16, 17, 18, 19, 20] print(list_1) print(list_2) # + # Using list() list_1_with_list = list(range(1, 11)) list_2_with_list = list(range(11, 21)) print(list_1_with_list) print(list_2_with_list) # + # List comprehension autogenerates the list, they function closely to for loops. 
list_1_with_comp = [x for x in range(1, 11)] list_2_with_comp = [x for x in range(11, 21)] print(list_1_with_comp) print(list_2_with_comp) # + # Works with functions. def addition(x): return x + x [addition(x) for x in range(0, 3)] # + # Also works with conditions # The % is modulus which gives you the remainder after division. # If an number is even add 1 if it's odd add 3 [x + 1 if x%2 ==0 else x + 3 for x in range(1, 11)] # + # Without an else statement if goes at the end of the statement. [x + 1 for x in range(1, 11) if x%2 ==0] # + # Nested loop example to show how they function like for loops. for a in range(0, 3): for b in range(0, 5): print(a, b) # + # List comp also works as a nested loop. [[a, b] for a in range(0, 3) for b in range(0, 5)] # - # Set comprehension # # [Return to table of contents](#toc) # + # Set comprehension is same format as list comprehension but uses curly brackets. set_1_with_comp = {x for x in range(1, 11)} set_2_with_comp = {x for x in range(11, 21)} # - print(set_1_with_comp) print(set_2_with_comp) # Dict comprehension # # [Return to table of contents](#toc) # # Examples from: http://cmdlinetips.com/2018/01/5-examples-using-dict-comprehension/ (More samples there as well) # dict comprehension to create dict with numbers as values {str(i):i for i in [1,2,3,4,5]} # + # create list of fruits fruits = ["apple", "mango", "banana", "cherry"] # dict comprehension to create dict with fruit name as keys {f:len(f) for f in fruits} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="RBZZDXflq-pK" colab_type="code" colab={} import pandas as pd # + id="SuCuu0fa1AYk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 228} outputId="5ca051ab-c0d0-4002-d0a2-6b827927ad47" # !wget 'https://raw.githubusercontent.com/unburied/DiY-Help/master/Help%20Section%20Articles-February%2013%2C%202020%20(1).csv' # + id="2gu9Vc_-rZ57" colab_type="code" colab={} # Load data and confirm shape equates to web scraped export data ='Help Section Articles-February 13, 2020 (1).csv' df = pd.read_csv(data) assert df.shape == (427,4) # + id="03jILfO-tpi0" colab_type="code" colab={} import unicodedata as uc # clean content data def normaled_unicode(x): return uc.normalize('NFKD', str(x)) df['content'] = df.content.apply(normaled_unicode) # get word count to output def word_count(x): return len(x.split()) df['word_count'] = df.content.apply(word_count) # + id="fJfYRYXb4SLM" colab_type="code" colab={} VERBOSE = 75 result = df[df.word_count > VERBOSE] # + id="rZxiQvMH5NQp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 621} outputId="ced07c1e-e1cb-48c6-ea10-baf7604a6298" result.tail() # + id="x4RGsHrF8Ben" colab_type="code" colab={} from datetime import date today = date.today() today = today.strftime("%B %d, %Y") file_name = 'Word Counts' + today + '.csv' result.to_csv(file_name, index = False) # + id="fcJ-4wFP_HmA" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # [View in Colaboratory](https://colab.research.google.com/github/findingfoot/ML_practice-codes/blob/master/Data_Collection.ipynb) # + 
id="OLiZ91247O6P" colab_type="code" colab={} import warnings warnings.filterwarnings('ignore') import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.python.framework import ops ops.reset_default_graph() sess = tf.Session() # + id="bM7Evstg7ZKN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="937d5d22-944b-4468-e075-ac3826989af7" #Getting data from sklearn from sklearn.datasets import load_iris iris = load_iris() print(len(iris.data)) print(len(iris.target)) print(iris.data[14]) print(set(iris.target)) # + id="Sl0GOjuY7idv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="e1de94e1-d228-4529-f168-019c50d957c4" # downloading data from github import requests birthdata_url = 'https://github.com/nfmcclure/tensorflow_cookbook/raw/master/01_Introduction/07_Working_with_Data_Sources/birthweight_data/birthweight.dat' birth_file = requests.get(birthdata_url) birth_data = birth_file.text.split('\r\n') birth_header = birth_data[0].split('\t') birth_data = [[float(x) for x in y.split('\t') if len(x)>=1] for y in birth_data[1:] if len(y)>=1] print(len(birth_data)) print(len(birth_data[0])) # + id="PMcTJkTS8FHr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 2886} outputId="fbaad018-a2d8-462d-c811-9eae095ac225" import requests housing_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data' housing_header = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV'] housing_file = requests.get(housing_url) housing_data = [[float(x) for x in y.split(' ') if len(x)>=1] for y in housing_file.text.split('\n') if len(y)>=1] print(len(housing_data)) print(len(housing_data[0])) print(housing_data) # + id="_G3AOzzp_Kj7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 598} outputId="54ebc784-32d8-4d87-e7b5-2ca794273540" from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data/', one_hot=True) print(len(mnist.train.images)) print(len(mnist.test.images)) print(len(mnist.validation.images)) print(mnist.train.labels[2,:]) # + id="qraglttLu9kk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 368} outputId="4af43012-3d0b-48d1-d44f-05fec1a7ddd2" from PIL import Image (X_train, y_train) ,(X_test,y_test) = tf.contrib.keras.datasets.cifar10.load_data() print(X_train.shape) print(X_test.shape) print(y_train.shape) # %matplotlib inline img = Image.fromarray(X_train[0,:,:,:]) plt.imshow(img) # + id="LlO65NSEyZUm" colab_type="code" colab={} import requests import io from zipfile import ZipFile # Get/read zip file zip_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip' r = requests.get(zip_url) z = ZipFile(io.BytesIO(r.content)) file = z.read('SMSSpamCollection') # Format Data text_data = file.decode() text_data = text_data.encode('ascii',errors='ignore') text_data = text_data.decode().split('\n') text_data = [x.split('\t') for x in text_data if len(x)>=1] [text_data_target, text_data_train] = [list(x) for x in zip(*text_data)] print(len(text_data_train)) print(set(text_data_target)) print(text_data_train[1]) # + id="Q9YXam301Hez" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="7befc381-8128-43e6-cd62-5d7a69c4b0a8" import requests import io import tarfile movie_data_url = 
'http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz' r = requests.get(movie_data_url) streaming_data = io.BytesIO(r.content) temp = io.BytesIO() while True: s = streaming_data.read(16384) if not s: break temp.write(s) streaming_data.close() temp.seek(0) # Extract tar file tar_file = tarfile.open(fileobj=temp, mode="r:gz") pos = tar_file.extractfile('rt-polaritydata/rt-polarity.pos') neg = tar_file.extractfile('rt-polaritydata/rt-polarity.neg') # Save pos/neg reviews pos_data = [] for line in pos: pos_data.append(line.decode('ISO-8859-1').encode('ascii',errors='ignore').decode()) neg_data = [] for line in neg: neg_data.append(line.decode('ISO-8859-1').encode('ascii',errors='ignore').decode()) tar_file.close() print(len(pos_data)) print(len(neg_data)) print(pos_data[0]) # + id="mrFNQLDc3qRY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6c310c69-06bb-4417-9cb6-6ea17a14024f" shakespeare_url = 'http://www.gutenberg.org/cache/epub/100/pg100.txt' sp_data = requests.get(shakespeare_url) sp_file = sp_data.content sp_text = sp_file.decode('utf-8') sp_text = sp_text[2308:] print(len(sp_text)) # + id="LTTt2ELz3y3x" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="9403344b-e905-4ee0-f176-c0a3bbfe64af" import requests import io from zipfile import ZipFile sentence_url = 'http://www.manythings.org/anki/deu-eng.zip' r = requests.get(sentence_url) z = ZipFile(io.BytesIO(r.content)) file = z.read('deu.txt') # Format Data eng_ger_data = file.decode() eng_ger_data = eng_ger_data.encode('ascii',errors='ignore') eng_ger_data = eng_ger_data.decode().split('\n') eng_ger_data = [x.split('\t') for x in eng_ger_data if len(x)>=1] [english_sentence, german_sentence] = [list(x) for x in zip(*eng_ger_data)] print(len(english_sentence)) print(len(german_sentence)) print(eng_ger_data[14]) # + id="2PxCIqDs4hUX" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: py37_pytorch # language: python # name: conda-env-py37_pytorch-py # --- import pandas as pd import pandas_profiling as pp import numpy as np # ## Read Data # Pandas has the ability to read data from various formats, including: # - CSV # - Excel # - Html # - Json # - Feather # - Parquet # Let's start by reading a table from a csv file. Pandas puts the data in an object known as a `DataFrame`.
# The data we are using here is air emissions from industrial facilities in Queensland for the 2005/2006 inventory year, taken from [data.gov.au](http://data.gov.au).

df = pd.read_csv('../../data/processed/npi-2006-qld-air-total-emissions.csv')

# The `DataFrame` is now stored in the variable `df`. We can print it:

df

# Similarly, the other file formats supported by Pandas can be read with their respective read methods.
# e.g. `pd.read_excel()` # Pandas can also read the data from SQL database and put them directly into a pandas data frame. To do that, we need to pass in the query and the connection object. # + import sqlite3 #create a connection to database conn = sqlite3.connect('../01 SQL/Sales.db') # write a query query = ''' SELECT * from Customers LIMIT 5 ''' pd.read_sql(query,conn) # - # __Tip:__ Sometimes you just want to quickly copy a portion of a dataset from a webpage or excel into a notebook. An easy way to do that is to copy the data from the source and then use `pd.read_clipboard()`. This method will create a pandas data frame from the data you copied. Note that this method only works if the notebook is running on your local machine (not on an external server). # ## Basic Analysis # - info # - describe # - pandas profiling # - value_counts() # df = pd.read_csv('../../data/processed/npi-2006-qld-air-total-emissions.csv') # We can use `.head()` and `.tail()` to view only top or bottom rows of the table. # top rows df.head() # bottom rows df.tail() # __Note:__ You can specify how many rows from the top or bottom of the table you want by passing in a number.
# e.g. `df.head(10)` or `df.tail(3)` # We can use `.columns` to get a list of column names. df.columns # Using `.info()` method you can get a list of the columns and the type of data stored in each. df.info() # You can also get some basic statistical analysis of the data using `.describe()` method. df.describe() # To get a more detailed analysis of the data in the table we can use a package called `pandas-profiling`. This package extends Pandas and adds detailed reports of the data. df.profile_report() # ## Subsetting and indexing # # # There are multiple ways to get a subset of the data. `loc` is used when we want to specify the names of columns and `iloc` when we want to use the index of the columns.
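# Before applying this to the emissions table, here is a tiny self-contained sketch (toy data, not the dataset above) contrasting label-based `loc` with position-based `iloc`:

# +
import pandas as pd

toy = pd.DataFrame({'a': [10, 20, 30], 'b': [1.5, 2.5, 3.5]}, index=['x', 'y', 'z'])

# label-based: row 'y', column 'b'
print(toy.loc['y', 'b'])

# position-based: second row, second column
print(toy.iloc[1, 1])

# label slices include the end label, position slices do not
print(toy.loc['x':'y', 'a'])   # rows 'x' and 'y'
print(toy.iloc[0:1, 0])        # row 'x' only
# -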
# df.columns # We can use `loc` by specifying the rows and columns we want. e.g. `df.loc[{row(s)}, {column(s) name}]` # + # we can get a subset of a single column df.loc[:10,'jurisdiction'] # Notice we used :10 for rows which means rows 0 to 10 rows # - # __Note:__ `loc` has a unique property. Since it is designed to work with names of columns and rows, when you want to get a subset of rows, the result it returns is inclusive. In other words when we passed in `:10` in almost every other python object that means `0` to `9`, but in `loc` it means `0` to `10`. Likewise, `10:20` in `loc` means rows `10` to `20`. # We can also get a subset of multiple columns by passing a list of columns we want. df.loc[10:20,['Year','facility_name','substance','quantity_in_kg']] # `iloc` works similar to `loc`, but instead of names we pass in index of the rows or the columns we want. # a single column df.iloc[:10,5] # __Note:__ Notice the number of rows here, and compare it with when we used `loc`. # multiple columns df.iloc[10:20,[1,9,-3,-1]] # Another useful method to get a subset of data is using boolean indexing. Booleans are either True or False. If we pass a list of booleans, pandas will return only the rows with True in the list. df['substance'] == 'Oxides of Nitrogen' # The list above has the value true only on the rows where the substance is "Oxides of Nitrogen".
# __Note:__ you can only see a small portion of the data so the True values might not be visible. # Now if we pass this as an index into a data frame we only get the rows where substance is "Oxides of Nitrogen". df[df['substance'] == 'Oxides of Nitrogen'] # This method can also be used with `loc` and `iloc`. df.loc[df['substance'] == 'Oxides of Nitrogen',['facility_name','substance','quantity_in_kg']] # ## Sorting # To sort the data in the table based on a certain column we can use the `.sort_values` method. When sorting we need to specify which column we want to sort and whether we want to sort in ascending order or descending order. df.sort_values(by='jurisdiction_facility_id',ascending=False) # __Note:__ many methods return the result as a data frame as well. This allows us to chain these operations to make the code shorter and easier to read.
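# As a tiny illustration of such chaining (toy data, not the emissions table), a filter, a sort and a `head` can be written as one expression because each step returns a DataFrame:

# +
import pandas as pd

tiny = pd.DataFrame({'substance': ['A', 'B', 'A', 'B'], 'qty': [4.0, 1.0, 3.0, 2.0]})

# filter, then sort, then take the top row, all in one chained expression
print(tiny[tiny['substance'] == 'A'].sort_values(by='qty', ascending=False).head(1))
# -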
# # Let's sort the table based on the amount of Oxides of Nitrogen only. df.loc[df['substance'] == 'Oxides of Nitrogen'].sort_values(by='quantity_in_kg',ascending=False) # We can also sort based on multiple columns. To do so we need to pass in the name of the column in a list (in the order we want them to be used for sorting) and also a list to specify whether each column should be ascending or descending. df.loc[df['substance'] == 'Oxides of Nitrogen'].sort_values(by=['site_address_postcode','facility_name'],ascending=[True,False]) # ## Data operations # - merge # - groupby # - pivot_table # - crosstab # __Groupby:__ It aggregates the data into groups (similar to groupby in SQL). For instance, what if we wanted an average emission of each substance across all the sites? To calculate that we use `.groupby()` method. # Since we want the average amount of substances, the columns we need will be __substance__ and __quantity_in_kg__. df[['substance','quantity_in_kg']].groupby(by='substance') # But it doesn't show us any tables. The reason is pandas has grouped the data into `DataFrameGroupBy` object and now we need to specify how the values should be aggregated. In this case since we want the average we use `.mean()`. df[['substance','quantity_in_kg']].groupby(by='substance').mean() # There are other useful aggregation functions such as `.std()` for standard deviation, `.median()` for median, `.count()` for the number of rows in each group, `.sum()` for sum, etc. You can also define your own aggregation function. agg_func = lambda x: np.sqrt((x**2).mean()) # root mean of squares df[['substance','quantity_in_kg']].groupby(by='substance').apply(agg_func) # Do you know how to use *__lambda__* functions? If not check out this page to learn about them. # ### Pivot Table # Another way to represent the data is using pivot tables. You might be familiar with pivot tables in Excel. You can perform the same operations here as well. # Let's create a pivot table that shows the amount of each substance in every postcode in the dataset. df.pivot_table(index='site_address_postcode',columns='substance',values='quantity_in_kg',aggfunc='mean') # __Note:__ `NaN` stands for Not a Number. In this case it means there was no value available for that cell. This means where you see `NaN` in the table there was no emission recorded for that substance in that specific postcode. This probably means that we can assume the emission was zero. We could let pandas know by passing in `fill_value = 0`. Then, where no value is available pandas put zero instead. # df.pivot_table(index='site_address_postcode',columns='substance',values='quantity_in_kg',aggfunc='mean',fill_value = 0) # #### Exercise # Now to practice what we have learned so far, let's create a table of the total emissions of the top 10 substances (most commonly recorded substances in the dataset) for each postcode.
# Also, let's keep the empty fields as `NaN`. # + # 1. Find how many times each substance has been recorded substances = df[['site_address_postcode','substance']].groupby(by='substance').count() # 2. Sort it and find the substances that have been recorded the most substances.columns = ['Count'] top10 = substances.sort_values(by='Count',ascending=False)[:10] top10 # 3. Create a table that shows the total amount of substances per postcode # 4. Get a subset of the table which only includes the top 10 substances # - # #### Solution: # + # you can replace site_address_postcode by any other column. Since we are only counting it doesn't matter which column use. substances = df[['site_address_postcode','substance']].groupby(by='substance').count() # Now sort it substances.columns = ['Count'] top10 = substances.sort_values(by='Count',ascending=False)[:10] # top10 is a data frame and top10.index contains the name of the substance. # Create the pivot table pivot = df.pivot_table(index='site_address_postcode',columns='substance',values='quantity_in_kg',aggfunc='sum') # get only the columns for top 10 substances pivot_top10 = pivot[top10.index] pivot_top10 # - # Now it's a good time to discuss dealing with missing values in a table. # ## Missing Values # There might be missing data in a table. Having `NaN` in the table can cause trouble in the analysis so we need to decide how we are going to deal with it. A few common scenarios are: # 1. filling the missing values with a number e.g. zero # 2. removing rows with missing values # 3. removeing rows with multiple missing values and filling the remaining with a new value # To replace the missing value with a fixed number we can use `.fillna()` method. dfnew = pivot_top10.fillna(value=0) dfnew # __Note:__ When using `fillna` the changes are not saved in the data frame. The default settings only returns the result and keeps the original data frame intact. If you want to save the changes in the same data frame you can pass in `inplace = True`. # There are other ways to fill the missing values. In some cases you might want to use different values for each column. A common example is using mean or median of a column for the missing values. fill_values = pivot_top10.mean() dfnew = pivot_top10.fillna(value=fill_values) dfnew # Pandas has other methods for filling the missing values including forward and backward filling. Forward filling replaces the missing values by the last valid value in the table and backward filling replaces the missing values by next valid value. These techniques are useful for sequential data such as time series and wouldn't make sense to be applied to tabular data.
# To use these methods, instead of passing a value to `fillna` you pass a method: `method = "ffill"` for forward filling and `method = "bfill"` for backward filling.

# If you simply want to get rid of rows with missing values you can use `.dropna()`

dfnew = pivot_top10.dropna()
dfnew

# __Note:__ Notice that the number of rows in the table above is much lower than in the original table.

# If we remove every row that contains a missing value we might lose a significant portion of the data. Alternatively, we can keep only the rows that have enough non-missing values. To do so, we set a threshold with `thresh`: the minimum number of non-missing values a row needs in order to be kept (a short sketch after this cell illustrates both `thresh` and the filling methods).

# keep only the rows with at least 3 non-missing values
dfnew = pivot_top10.dropna(thresh=3)
dfnew

# Now we have more rows compared to when we dropped every row with a missing value.
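# A minimal sketch of both options on a small hypothetical frame (not the pivot table above):

# +
import numpy as np
import pandas as pd

small = pd.DataFrame({'a': [1.0, np.nan, np.nan, 4.0],
                      'b': [np.nan, 2.0, np.nan, np.nan],
                      'c': [1.0, 2.0, 3.0, 4.0]})

# forward fill: each NaN is replaced by the last valid value above it
# (a NaN with no valid value above it stays NaN; newer pandas versions
# prefer .ffill()/.bfill() over fillna(method=...))
print(small.ffill())

# thresh: keep only the rows with at least 2 non-missing values
print(small.dropna(thresh=2))
# -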
# Next step is to replace the missing values using the techniques discussed above. # ## Saving Data # After analysis and reshaping the data you might want to save the results in a file. Similar to reading files, pandas supports multiple file formats to save the tables. pivot_top10.to_csv('final_table.csv') # ## Pandas Plotting # Pandas dataframes have plotting methods which help to visualise the data. The following plots are supported in pandas: # - 'line' : line plot (default) # - 'bar' : vertical bar plot # - 'barh' : horizontal bar plot # - 'hist' : histogram # - 'box' : boxplot # - 'kde' : Kernel Density Estimation plot # - 'density' : same as 'kde' # - 'area' : area plot # - 'pie' : pie plot # - 'scatter' : scatter plot # - 'hexbin' : hexbin plot. # # You can select which plot you want to use by setting `kind` to the string for the plot.
# There are a few other useful options you can set:
# - xlim, ylim: to set the limits of the axes
# - logx, logy, loglog: to set whether an axis should be displayed on a logarithmic scale
# - title: to set the title of the plot
# - figsize: to set the size of the plot
#
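# A small sketch of how `kind` and these options can be combined (random toy data, not the emissions table):

# +
import numpy as np
import pandas as pd

demo = pd.DataFrame({'x': np.arange(1, 51), 'y': np.random.lognormal(size=50)})

# kind selects the plot type; the keyword options tune the axes and the figure
demo.plot(kind='scatter', x='x', y='y', logy=True,
          title='Toy scatter on a log y-axis', figsize=(6, 4), xlim=(0, 55))
# -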

Let's try a few types of charts and graphs. # Top 10 postcodes with largest carbon monoxide emission pivot_top10.sort_values(by='Carbon monoxide',ascending = False)[:10].plot(kind = 'barh',y='Carbon monoxide') # histogram of benzene emission pivot_top10.plot(kind='hist',y = 'Benzene',bins = 50) # + # kernel density estimation plot of benzene emission pivot_top10.plot(kind='kde',y = ['Benzene','Toluene (methylbenzene)'],logx=True) # - # histogram of benzene emission in each postcode pivot_top10.plot(kind='box',logy = True,rot=90) pivot_top10.plot(kind='scatter',x='Toluene (methylbenzene)',y='Benzene', loglog=True) # pie chart of emission of the substances in postcode 4008 pivot_top10.loc[4008,:].plot(kind='pie',subplots=True, figsize = (10,10)) # We will discuss producing more advanced plots in the next notebooks where we learn about various plotting packages in python. # ## References and further reading # The following sources have been used in the creation of this notebook: # - [Pandas documentation](https://pandas.pydata.org/) # - [Pandas in 10 minutes](https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html) # # What we covered in this notebook is only a fraction of capabilities Pandas offer. If you are interested to know more about pandas we recommend the following sources: # - https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf # - https://www.kaggle.com/learn/pandas # - https://www.kaggle.com/kashnitsky/topic-1-exploratory-data-analysis-with-pandas # - https://www.youtube.com/watch?v=ZyhVh-qRZPA&list=PL-osiE80TeTsWmV9i9c58mdDCSskIFdDS # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: bts # language: python # name: bts # --- import numpy as np import matplotlib.pyplot as plt # + max_period = 50 num_choices = 2 min_prob = .72 max_prob = .82 num_probs = 200 max_streak = max_period num_states = max_streak + 1 ## Get array of choices and possible states streaks = np.array(range(0, num_states)) choices = np.array(range(0, num_choices)) probs = np.linspace(min_prob, max_prob, num_probs) probs_plus_1 = np.concatenate((probs , np.array([1]))) # + ## Calculate matrices choices_mat, streaks_mat ,probs_mat = np.meshgrid(choices, streaks, probs) ## Streak Updating # If don't make pick streaks_stay_mat = streaks_mat[:, 0, :] # If make pick and win streaks_win_mat = np.minimum(streaks_mat[:, 1, :] + 1, max_period) # If make pick and lose streaks_lose_mat = np.zeros((num_states, num_probs), dtype='int') #### Probs updating probs_space = np.tile(range(0, num_probs), (num_states, 1)) Opts = {} V_funcs = {} G_funcs = {} Cutoffs = {} ## Value function in Last peripd is just the streak state V_funcs['V' + str(max_period)] = streaks_mat[:, 0, :] # - for period in range(max_period-1, -1, -1): next = period + 1 next_V = V_funcs['V' + str(next)] Exp = np.zeros((num_states, num_choices, num_probs)) Exp_V_stay = np.mean(next_V[streaks_stay_mat, probs_space], axis=1) Exp[:, 0, :] = np.tile(Exp_V_stay, (num_probs, 1)).T Exp_V_win = np.mean(next_V[streaks_win_mat, probs_space], axis=1) Exp_V_lose = np.mean(next_V[streaks_lose_mat, probs_space], axis=1) Exp[:, 1, :] = ( np.outer(Exp_V_win, probs_mat[0, 1, :]) + np.outer(Exp_V_lose, (1-probs_mat[0, 1, :])) ) G_funcs['G' + str(period)] = np.array(np.argmax(Exp, axis=1), dtype=float) G_funcs['G' + str(period)][next:, :] = np.nan V_funcs['V' + str(period)] = np.array(np.amax(Exp, axis=1), 
dtype=float) V_funcs['V' + str(period)][next:, :] = np.nan cutoff_idx = np.array(num_probs - np.sum(G_funcs['G' + str(period)], axis=1)) opt_streaks = streaks[~np.isnan(cutoff_idx)] cutoff_idx = cutoff_idx[~np.isnan(cutoff_idx)] cutoff_idx = np.array(cutoff_idx, dtype=int) opt_probs = probs_plus_1[cutoff_idx] cutoff = np.vstack((opt_streaks, opt_probs)) Cutoffs['C' + str(period)] = cutoff for p in range(0, max_period, 24): plt.plot(Cutoffs['C' + str(p)][0,:], Cutoffs['C' + str(p)][1,:]) plt.show() est_val = V_funcs['V0'][0, 0] print(est_val) Cutoffs['C' + str(p)] # + # MaxStreak(t) = max(MaxStreak(t-1), CurrentStreak(t)) # CurrentStreak(t) = CurrentStreak(t-1) + H(t) # V(MaxStreak(T), CurrentStreak(T), p(T)) = MaxStreak # V(MaxStreak, CurrentStreak, T-1, p(t-1)) = argmax(s , E(V(MaxStreak(T), CurrentStreak(T), p(T)))) # v(MaxStreak, CurrentStreak, t, p) = max(MaxStreak, CurrentStreak) + E(V(MaxStreak, CurrentStreak, t+1, H)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # . 2019 г. # # # Естественная сепарация # # Источники: # # . Modeling downhole natural separation : дис. – The University of Tulsa, 2004. # # Естественная сепарация на приеме погружного оборудования играет существенную роль в работе скважины. За счет того, что часть свободного газа удаляется в затрубное пространство, плотность ГЖС в НКТ возрастает, и следовательно снижается газлифтный эффект. С другой стороны, сепарация газа позволяет нормально работать погружному оборудованию, например, создавать больший перепад давления. # # Как и гидравлические корреляции, модели естественной сепарации деляться на экспериментальные корреляции и механистические модели. Новая корреляция Marquez относится к первому типу. # # В данном методике приняты следующие допущения: # 1. Объемное содержание газа поступившее в насос определяется как: # $$\alpha_p (1 - \alpha_p)^{n-1} = \frac{V_{Sgz}^i}{V_{\infty z}}$$ # 2. Учитываются скорости проскальзывания газа, как в вертикальном, так и в радиальном направлении; # 3. Для пробкового и эмульсионного режимов течения автор пренебрегает эффектом воздействия с другими пузырьками газа и на основе анализа экспериментальных данных принимает значение $n$ равным нулю. # # Коэффициент естественной сепарации рассчитывается по формуле # # $$E = 1 -[-\frac{V_{\infty r}}{V_{\infty z}}(\frac{A_p}{A_a}) + \frac{V_{SLz}^i}{V_{\infty z}}] $$ # # где отношение скоростей проскальзывания в вертикальном и радиальном направлении # # $$\frac{V_{\infty r}}{V_{\infty z}} = \frac{\rho_L}{(\rho_L + \rho_g)g} (V_{Lr} \frac{dV_{Lr}}{dr})$$ # # а $V_{SLz}^i, V_{Sgz}^i$ - приведенные скорости жидкости и газа вдоль оси насоса # # Автор ввел параметр M в виде # # $$M = -\frac{V_{\infty r}}{V_{\infty z}}\frac{A_p}{A_a} $$ # # или # # $$M = -[\frac{ab+c(\frac{V_{SLz}^i}{V_{\infty z}})^d}{b+(\frac{V_{SLz}^i}{V_{\infty z}})^d}]$$ # # где а = -0,0093 ; b = 57,758 ; c = 34,4 ; d = 1,308 – коэффициенты М параметра определялись из экспериментальных данных # # $A_p, A_a$ - площади поперечного сечения приема насоса и эксплуатационной колонны. 
# # Итоговая упрощенная формула: # # $$E = ([1 + [\frac{ab+c(\frac{V_{SLz}^i}{V_{\infty z}})^d}{b+(\frac{V_{SLz}^i}{V_{\infty z}})^d}]]^{272} + [\frac{V_{SLz}^i}{V_{\infty z}}]^{272} )^{1/272} - \frac{V_{SLz}^i}{V_{\infty z}}$$ # # # Перед расчетом необходимо воспользоваться механистической моделью Caetano (1992) для определения режимов течения в затрубном пространстве вертикальной скважины и для вычисления $V_{\infty z}$ - скорости проскальзывания пузырьков газа в вертикальном направлении # # # + import sys sys.path.append('../') import uniflocpy.uTools.uconst as uc import uniflocpy.uTools.data_workflow as tool import uniflocpy.uMultiphaseFlow.flow_pattern_annulus_Caetano as FPA import uniflocpy.uPVT.PVT_fluids as PVT import uniflocpy.uMultiphaseFlow.natural_separation as nat_sep import plotly.graph_objs as go from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot from plotly import tools init_notebook_mode(connected=True) # - # Создание необходимых экземляров для работы # + annular = FPA.flow_pattern_annulus_Caetano() fluid_flow = PVT.FluidFlow() flow_data = tool.Data() pattern_data = tool.Data() separation = nat_sep.new_correlation_Marquez() annular_data = tool.Data() fluid_flow_data = tool.Data() separation_data = tool.Data() # - # Задание термобарических условий на приеме погружного оборудования, конструкцию приема # # Расчет параметров по кольцевому пространству проиизводится исходя из того, что площадь поперечного сечения кольцевого пространства равна площади трубы при расчете многофазного потока. Возможно вместо этого подхода требуется применение гидравлического диаметра # + p_bar = 40 t_c = 60 annular.d_cas_in_m = 0.140 annular.d_tube_out_m = 0.100 d_annular_hydr_m = annular.d_cas_in_m - annular.d_tube_out_m #TODO какой диаметр при расчетах потока использовать? Ap = uc.pi / 4 * (annular.d_cas_in_m ** 2 - annular.d_tube_out_m ** 2) fluid_flow.d_m = (Ap * 4 / uc.pi) ** (1 / 2) # - # Здесь производится расчет. 
Расчитанные параметры многофазного потока вливаются в расчет режима течения # в кольцевом пространстве, а затем необходимые параметры передаются модулю расчета естественной сепарации # # + separation_data.clear_data() fluid_flow_data.clear_data() annular_data.clear_data() for i in range(1,200): fluid_flow.qliq_on_surface_m3day = i fluid_flow.calc(p_bar, t_c) annular.surface_tension_gl_Nm = fluid_flow.sigma_liq_Nm annular.rho_liq_kgm3 = fluid_flow.rho_liq_kgm3 annular.rho_gas_kgm3 = fluid_flow.fl.rho_gas_kgm3 annular.rho_mix_kgm3 = fluid_flow.rhon_kgm3 annular.mu_mix_pasec = fluid_flow.mun_cP / 10 ** 3 vs_gas_msec = fluid_flow.vsg_msec vs_liq_msec = fluid_flow.vsl_msec annular.calc_pattern(vs_liq_msec, vs_gas_msec) separation.v_infinite_z_msec = annular.v_infinite_z_msec separation.vs_liq_z_msec = annular.vs_liq_msec value_of_natural_separation_Marquez = separation.calc() separation_data.get_data(separation) fluid_flow_data.get_data(fluid_flow) annular_data.get_data(annular) # + def trace(data, number_param): tracep = go.Scattergl( x = fluid_flow_data.get_values(1), y = data.get_values(number_param), name = data.get_name(number_param), mode = 'markers' ) return tracep def plot(): layout = dict(title = 'Естественная сепарация по новой корреляции Marquez', yaxis = dict(range=[0,1], title = 'Коэффициент естественной сепарации, д.ед.'), xaxis = dict(title = 'Дебит жидкости в поверхностных условиях, м3/сут')) fig = dict(data=data, layout=layout) iplot(fig, filename='basic-scatter') # - separation_data.print_all_names() fluid_flow_data.print_all_names() # + trace1 = trace(separation_data, 4) data = [trace1] plot() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## dbscan clustering using `cuml` # # + # %load_ext autoreload # %autoreload 2 import utils from cuml.cluster import DBSCAN import numpy as np # + # load embeddings matrix using KGEmbeddingStore class emb_store = utils.load_embedding_store() X = emb_store.ent_embedding_matrix dim = X.shape[1] dim # + tags=[] labels = {} for EPS in [0.25, 0.5, 0.75]: print(f"EPS = {EPS}") dbscan_float = DBSCAN( eps=EPS, min_samples=2, verbose=False, ) labels[EPS] = dbscan_float.fit_predict(X) print(f"no clusters = {len(np.unique(labels[EPS])) - 1}") with open(f"./dbscan_cluster_idxs_EPS_{EPS}.txt", "wb") as f: np.savetxt(f, labels[EPS].astype(int), fmt='%i', delimiter=",") # - len(np.unique(labels[EPS])) - 1 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # import the necessary packages from imutils.video import VideoStream import numpy as np import argparse import imutils import time import cv2 import os def detect_and_predict_age(frame, faceNet, ageNet, minConf=0.5): # define the list of age buckets our age detector will predict AGE_BUCKETS = ["(0-2)", "(4-6)", "(8-12)", "(15-20)", "(25-32)", "(38-43)", "(48-53)", "(60-100)"] # initialize our results list results = [] # grab the dimensions of the frame and then construct a blob from it (h, w) = frame.shape[:2] blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), (104.0, 177.0, 123.0)) # pass the blob through the network and obtain the face detections faceNet.setInput(blob) detections = faceNet.forward() # loop over the detections for i in range(0, 
detections.shape[2]): # extract the confidence (i.e., probability) associated with the prediction confidence = detections[0, 0, i, 2] # filter out weak detections by ensuring the confidence is greater than the minimum confidence if confidence > minConf: # compute the (x, y)-coordinates of the bounding box for the object box = detections[0, 0, i, 3:7] * np.array([w, h, w, h]) (startX, startY, endX, endY) = box.astype("int") # extract the ROI of the face face = frame[startY:endY, startX:endX] # ensure the face ROI is sufficiently large if face.shape[0] < 20 or face.shape[1] < 20: continue # construct a blob from *just* the face ROI faceBlob = cv2.dnn.blobFromImage(face, 1.0, (227, 227), (78.4263377603, 87.7689143744, 114.895847746), swapRB=False) # make predictions on the age and find the age bucket with the largest corresponding probability ageNet.setInput(faceBlob) preds = ageNet.forward() i = preds[0].argmax() age = AGE_BUCKETS[i] ageConfidence = preds[0][i] # construct a dictionary consisting of both the face bounding box location along with the age prediction, # then update our results list d = {"loc": (startX, startY, endX, endY),"age": (age, ageConfidence)} results.append(d) # return our results to the calling function return results # + #load the files and directories face = os.path.join("face_detector") age = os.path.join("age_detector") # load our serialized face detector model from disk prototxtPath = os.path.sep.join([face, "deploy.prototxt"]) weightsPath = os.path.sep.join([face, "res10_300x300_ssd_iter_140000.caffemodel"]) faceNet = cv2.dnn.readNet(prototxtPath, weightsPath) # load our serialized age detector model from disk prototxtPath = os.path.sep.join([age, "age_deploy.prototxt"]) weightsPath = os.path.sep.join([age, "age_net.caffemodel"]) ageNet = cv2.dnn.readNet(prototxtPath, weightsPath) # - conf = 0.5 # initialize the video stream and allow the camera sensor to warm up vs = VideoStream(src=0).start() time.sleep(2.0) # + # loop over the frames from the video stream while True: # grab the frame from the threaded video stream and resize it to have a maximum width of 400 pixels frame = vs.read() frame = imutils.resize(frame,width = 400) # detect faces in the frame, and for each face in the frame, predict the age results = detect_and_predict_age(frame,faceNet,ageNet,minConf=conf) # loop over the results for r in results: # draw the bounding box of the face along with the associated predicted age text = "{}: {:.2f}%".format(r["age"][0], r["age"][1] * 100) (startX, startY, endX, endY) = r["loc"] y = startY - 10 if startY - 10 > 10 else startY + 10 cv2.rectangle(frame, (startX, startY), (endX, endY),(0, 0, 255), 2) cv2.putText(frame, text, (startX, y), cv2.FONT_HERSHEY_SIMPLEX, 0.45, (0, 0, 255), 2) # show the output frame cv2.imshow("frame",frame) key = cv2.waitKey(1) & 0xFF # if the `q` key was pressed, break from the loop if key == ord("q"): break # do a bit of cleanup cv2.destroyAllWindows() vs.stop() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="4Pjmz-RORV8E" # # Part 5: Labeling with zero-shot classification # # This notebook shows how zero-shot classification can be used to perform text classification/labeling. txtai provides a light-weight wrapper around the zero-shot-classification pipeline in Hugging Face Transformers. This method works impressively well out of the box. 
Kudos to the Hugging Face team for the phenomenal work on zero-shot classification! # # The examples in this notebook pick the best matching label using a list of labels for a snippet of text. # # [tldrstory](https://github.com/neuml/tldrstory) has full-stack implementation of a zero-shot classification system using Streamlit, FastAPI and Hugging Face Transformers. There is also a [Medium article describing tldrstory](https://towardsdatascience.com/tldrstory-ai-powered-understanding-of-headlines-and-story-text-fc86abd702fc) and zero-shot classification. # # + [markdown] id="Dk31rbYjSTYm" # # Install dependencies # # Install txtai and all dependencies # + id="XMQuuun2R06J" # %%capture # !pip install git+https://github.com/neuml/txtai # + [markdown] id="PNPJ95cdTKSS" # # Create a Labels instance # # The Labels instance is the main entrypoint for zero-shot classification. This is a light-weight wrapper around the zero-shot-classification pipeline in Hugging Face Transformers. # # In addition to the default model, additional models can be found on the [Hugging Face model hub](https://huggingface.co/models?search=mnli). # # + id="nTDwXOUeTH2-" # %%capture from txtai.pipeline import Labels # Create labels model labels = Labels() # Alternate models can be used via passing the model path as shown below # labels = Labels("roberta-large-mnli") # + [markdown] id="-vGR_piwZZO6" # # Applying labels to text # # The example below shows how a zero-shot classifier can be applied to arbitary text. The default model for the zero-shot classification pipeline is *bart-large-mnli*. # # Look at the results below. It's nothing short of amazing✨ how well it performs. These aren't all simple even for a human. For example, intercepted was purposely picked as that is more common in football than basketball. The amount of knowledge stored in larger Transformer models continues to impress me. # + colab={"base_uri": "https://localhost:8080/"} id="-K2YJJzsVtfq" outputId="ce29f32c-49da-4178-ae27-7c444df62340" sections = ["Dodgers lose again, give up 3 HRs in a loss to the Giants", "Giants 5 Cardinals 4 final in extra innings", "Dodgers drop Game 2 against the Giants, 5-4", "Flyers 4 Lightning 1 final. 45 saves for the Lightning.", "Slashing, penalty, 2 minute power play coming up", "What a stick save!", "Leads the NFL in sacks with 9.5", "UCF 38 Temple 13", "With the 30 yard completion, down to the 10 yard line", "Drains the 3pt shot!!, 0:15 remaining in the game", "Intercepted! Drives down the court and shoots for the win", "Massive dunk!!! they are now up by 15 with 2 minutes to go"] # List of labels tags = ["Baseball", "Football", "Hockey", "Basketball"] print("%-75s %s" % ("Text", "Label")) for text in sections: print("%-75s %s" % (text, labels(text, tags)[0][0])) # + [markdown] id="t-tGAzCxsHLy" # # Let's try emoji 😀 # # Does the model have knowledge of emoji? Check out the run below, sure looks like it does! Notice the labels are applied based on the perspective from which the information is presented. 
# + colab={"base_uri": "https://localhost:8080/"} id="uIf064M9pbjn" outputId="6cf15ade-286f-4167-d7d6-449849bacd16" tags = ["😀", "😡"] for text in sections: print("%-75s %s" % (text, labels(text, tags)[0][0])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 18.1 Unit overview # ### 18.1.1 Motivation # # https://youtu.be/Xj-QvRBDEi8 # # Notes: # # - Airline overbooked tickets due to 10% passengers on average being not present. What's the maximum number of tickets an airline should sell on a flight with 300 seats? You'll know how to calculate it using central limit theory. # ## 18.2 Lecture 18 Inequalities, convergence, and the Weak Law of Large Numbers # ### 18.2.1 Unit 8 overview # # https://youtu.be/Kgjc7VU3nUk # # The material is covered in section 5.1-5.4 and 9.1 of the text. # # - Weaklaw of large numbers: the observed avg of a large number of independent identically distributed random variables, called sample mean, converged to expected value # - Central limit theorem: provide details of the distribution of the sum of n i.i.d r.v.s. # - Estimate an unknow mean with measurements of "accuracy of estimates" and "confidence intervals" # Note: It's different from previous estimation on: # (1) make no assumption on the unknown quantity that is being estimated. # (2) not modeled as a r.v but just as an unknown constant # (3) cannot invoke the Bayers rule # Thus it's a very different conceptual setting, called non-Bayersian or classical statistics. # - Basic rule of classical statistics # ### 18.2.2 The Markov inequality # # https://youtu.be/7OYJoExDHA0 # # Notes: # # Basic ideas under the Markov inequality and many other inequalities and bounds in probability theory: # # - learn about probabiliteis of "extreme events" which some variables take very large value # - Markov inequality: if $X \geq 0$ and $a > 0$, then $P(X \geq a) \leq \frac{E[X]}{a}$: the probability of X that exceeds a is bounded # # **Derive method 1** # Assume X is a conteneous r.v.s (the discrete case is similar), then # # $E[X] = \int_{0}^\infty xf_x (x)d_x \geq \int_{a}^\infty xf_x(x)d_x \geq \int_{a}^\infty af_x(x)d_x = aP(x \geq a)$ # # **Derive method 2** # # $$ # Y = \left\{ # \begin{array}\\ # 0 & \mbox{if } X < a \\ # a & \mbox{if } X \geq a \\ # \end{array} # \right. # $$ # # We know $aP(X \geq a) = E[Y] \leq E[X]$ # # - Two examles: watch them again # ### 18.2.3 Exercise Markov inequality # # # ### 18.2.4 The Chebyshev inequality # # https://youtu.be/1vUkgWZf8xU # # # Notes: # # - The Chebyshev inequality: Given a r.v. X with finite mean $\mu$ and variance $\sigma^2$, if the variance is small, then X is unlikely to be too far from the mean. Mathematically, $P(|X - \mu| \geq c) \leq \frac{\sigma^2}{c^2}$ # # - It is a simple application of the Markov inequality but contain a different message # # - How to prove: $P(|x - \mu|) = P((x - \mu)^2) \leq \frac{E[(x-\mu)^2]}{c^2} = \frac{\sigma^2}{c^2}$ # # Application Example 1: # The distance from mean is at least k standard deviations where k > 0. 
# # $P(|X - \mu| \geq k\sigma) \leq \frac{\sigma^2}{k^2 \sigma} = \frac{1}{k^2}$ # # # Application Example 2: # # X is exponential ($\lambda = 1$): $P(X \geq a) \leq \frac{1}{a}$ (Markov) # # # # It indicates that The Chebyshev inequality gives us a much smaller bound and therefore more informative than what we obtained from the Markov inequality as it exploits more information about the distribution of the r.v. X., which not just use the knowledge of mean of the r.v. but also use some information about the variance of the r.v. # ### 18.2.5 Exercise Chebyshev inequality # # # ### 18.2.6 Exercise Chebyshev versus Markov # # # ### 18.2.7 The Weak Law of Large Numbers # # https://youtu.be/YOPiC3oLogU # ### 18.2.8 Exercise Sample mean bounds # # # ### 18.2.9 Polling # # https://youtu.be/-sLyJw06tL8 # # **The pollster's problem** # # - You want to predict what p actually is before the referendum starts. Here p is fraction of population that will vote "yes" in a referendum # - You select randomly/uniformly a number of people out of the population and record their answer. That is ith randomly selected person polled: # $X_i = \left\{ # \begin{array}\\ # 1, & \mbox{if yes,}\\ # 0, & \mbox{if no.} # \end{array} # \right.$ # thus $E[x_i] = p$ # Note: We assume we select those people independently then there is always a chance that the 1st person polled will be the same as the 2nd person polled, which we don't want to happen. However, if we assume the population is very large, then this won't happen and will not be a concern. # - we obtain $M_n = (X_1 + ... + X_n)/n$: fraction of "yes" in the sample, which is a reasonable estimate of the unknow fraction p # - Here is the scenario: # # Boss: Please find the exactly value of p # # You: There is no way to calculate p exactly on the basis of a finite and random poll.Therefore, there will be some error in the estimation of p # # Boss: Ok. Then try to give me an estimate of p which is very accurate within 1%. Can you do this for me? # # You: Ok. Let me try polling 10,000 people and see if I can guarantee for you such a small error. You realized later that actually there is no way of guaranteeing such a small error with certainty. So you told your boss that " I cannot guarantee with certainty that the error is going to be small, but I can guarantee that the error that I get is small with high probability. # # - $|M_n - p| < 0.01$ try n = 10,000 samples # # - $P(|M_{10000} - p| \geq 0.01) \leq \frac{\sigma^2}{n \epsilon^2} = \frac{p(1-p)}{10^4 10^{-4}} \leq \frac{1}{4}$ # # You: if I sample 10,000 people, then the probability of an error more than 1% is going to be less than 25% # # Boss: Well, 25% is too big which is unacceptable. I'd like you have a probability of error that is less than 5%. # # You: caluclate $\frac{1/4}{n 10^{-4}} \leq 0.05$ => $n \geq \frac{10^6}{20} = 50,000$ and tell your boss that one way of guaranteeing 5% is to take 50000 samples. Here 0.01 is the accuracy, and 5% is the confidence interval. # # Summary: the above calculation is based on the Chebyshev inequality which is not that accurate. It turns out that if we use more accurate estimates of this probability, we will find actually much smaller values of n will be enough for our purpose. 
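# As a quick numerical check of the calculation above (a small sketch, not part of the course material): since $p(1-p) \leq 1/4$, the Chebyshev bound $P(|M_n - p| \geq \epsilon) \leq \frac{1}{4 n \epsilon^2} \leq \delta$ gives a required sample size of $n \geq \frac{1}{4 \epsilon^2 \delta}$.

# +
def min_sample_size_chebyshev(eps, delta):
    # worst case p(1-p) <= 1/4, so P(|Mn - p| >= eps) <= 1/(4*n*eps**2) <= delta
    return 1 / (4 * eps**2 * delta)

print(min_sample_size_chebyshev(0.01, 0.25))   # about 10,000 samples for error probability <= 25%
print(min_sample_size_chebyshev(0.01, 0.05))   # about 50,000 samples for error probability <= 5%
# -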
# ### 18.2.10 Exercise Polling # # # # **MIT Staff Q&A** # # - the probabilistic assumption being made here is the following: sampling is done with replacement and every individual in the population has an equal chance of being polled on each iteration of the sampling (You randomly pick (sample) the first individual from the population; then randomly pick the second individual, then the thirds, then the fourth, ....). Of course, people's opinions are not "independent". But the answers we obtain from randomly sampled individuals, are statistically independent and distributed according the population of opinions from which we are sampling. # # - **Q**: "a) is ambiguous, the accuracy requirements can be looser or no, this is not the mathematical case, and even in answer you use word "most"." # # **A**:I agree with you opinion. In the "real world", the accuracy and confidence requirements can be dramatically different: in experimental physics the statistical standards are very high, in observational studies of social issues, the accuracy is determined by the available data and are often much lower than for experiments ("polls") conducted in a lab. # # - **Q**: What does "the Chebyshev bound is too conservative." mean? # # **A**: It means although it is right, but maybe not close to the true probability. Think about I tell you that someone's height is less than 3 meters. I am right(no one is higher than 3 meters), but this is not a useful information, in this case, I am conservative. Hope it helps. # # # ### 18.2.11 Convergence in probability # # https://youtu.be/yT102uPFJw4 # # **Notes**: # # - **Definition**: a sequence of random variables $Y_n$ (not necessarily independent) **converges in probability** to a number **a** if: # for any $\epsilon > 0, \lim_{x \to \infty}P(|Y_n - a| \geq \epsilon) = 0$ # # - In WLLN, it tells us that with high probability, the sample mean falls close to the true mean as n goes to infinity which we interpret it as "$M_n$ converges to $\mu$" in the sense of convergence in probability. # # - Convergence comparison # # # - Some properties of convergence of sequences: # # Support that $X_n -> a, Y_n -> b in probability. We do not make any assumptions about independence. That's to say we don't assume the Xn's are independent of each other and the sequence of Xn's is independent of Yn. # # Property 1: If g is continuous, then g(Xn) -> g(a); # # Property 2: Xn + Yn -> a + b # # **However**, E[Xn] need not converge to a, which means convergence of random variables in probability does not imply convergence of expectations. # ### 18.2.12 Convergence in probability examples # # https://youtu.be/Ti0syfE2e4c # # **Example 1:** # # A sequence of random variables Yn that are discrete. Most of the probabilitiy is concentrated at 0 but there is also a small probability of a large value. Do we have convergence in probability to 0? # # # # - convergence in probability (CIP) does not imply convergence of expectations as CIP has to deal with the bulk of the distribution, it only cares the tail of the distribution has small probability. On the other hand, the expectation is highly sensitive to the tail of the distribution. It might be that the tail only has a small probability but if that probability is assigned to a very large value, then the expectation will be strongly affected and can be quite different from the limit of the random variable. 
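# A small simulation sketch of this point (one concrete choice of distribution, not necessarily the one used in the lecture): let $Y_n = n^2$ with probability $1/n$ and $0$ otherwise, so $Y_n \to 0$ in probability while $E[Y_n] = n$ grows without bound.

# +
import numpy as np

rng = np.random.default_rng(0)

for n in [10, 100, 1000]:
    samples = np.where(rng.random(100_000) < 1 / n, n**2, 0)
    # the fraction of draws far from 0 shrinks like 1/n, but the sample mean grows like n
    print(n, (samples > 0.5).mean(), samples.mean())
# -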
# # **Example 2:** # # # **Steps to show convergence in probability** # - Step 1: make a guess as to what is the value that the sequence converges to. # - Step 2: write down an expression for the probability of being epsilon away from the conjectured limit. # - Step 3: calculate that probability # ### 18.2.13 Exercise Convergence in probability # # # ### 18.2.14 Related topics # # https://youtu.be/mR2udEr2pPM # ## 18.3 Lecture 19 The Central Limit Theorem # ### 18.3.1 Lecture 19 overview and slides # # https://youtu.be/H-llL9ke_pA # # Section 5.4 of the textbook. # ### 18.3.2 The Central Limit Theorem # # https://youtu.be/ethm2HoZtE8 # ### 18.3.3 Exercise CLT # # # ### 18.3.4 Discussion of the CLT # # https://youtu.be/43tM56dYwIw # ### 18.3.5 Exercise CLT applicability # # # # ### 18.3. 6 Illustration of the CLT # # https://youtu.be/Q8g3sPvxHUw # ### 18.3.7 CLT examples # # https://youtu.be/0XaD4PXmJbk # ### 18.3.8 Exercise CLT practice # # # ### 18.3. 9 Normal approximation to the binomial # # https://youtu.be/Zr8KmnfQNJw # ### 18.3.10 Exercise CLT for the binomial # # # ### 18.3.11 Polling revisited # # https://youtu.be/8PLZvnmTkQk # ## 18.4 Lecture 20 An introduction to classical statistics # ### 18.4.1 Lecture 20 overview and slides # # https://youtu.be/UC9Tlm3l7AE # # Refer to section 9.1 of the text. # # Notes: # # a brief introduction to the so-called classical (non-Bayesian) statistical methods: # # - the general framework # - estimation based on sample means, confidence intervals, and maximum likelihood estimation. # ### 18.4.2 Overview of the classical statistical framework # # https://youtu.be/vdaxiJBG7qs # ### 18.4.3 The sample mean and some terminology # # https://youtu.be/Fz_7ymWGx0o # ### 18.4.4 Exercis Estimator properties # # # ### 18.4.5 On the mean squared error of an estimator # # https://youtu.be/Rw5uq3Js9ZA # ### 18.4.6 Confidence intervals # # https://youtu.be/16P8lB9TPMo # ### 18.4.7 Exercise Bias and MSE # # # ### 18.4.8 Exercise Confidence interval interpretation # # # ### 18.4.9 Exercise A simple CI # # # ### 18.4.10 Confidence intervals for an unknown mean # # https://youtu.be/ekLmfHCbXOA # ### 18.4.11 Exercise CI's via the CLT # # # ### 18.4.12 Confidence intervals for the mean when the variance is unknown # # https://youtu.be/SBS06u2KwFU # ### 18.4.13 Other natural estimators # # https://youtu.be/Vev2rMQOYtw # ### 18.4.14 Exercise Natural estimators # # # ### 18.4.15 Maximum likelihood estimation # # https://youtu.be/af6iD1AA6rY # ### 18.4.16 Maximum likelihood estimation examples # # https://youtu.be/osCutN2MFzA # ### 18.4.17 Exercise ML estimation # # # ## 18.5 Solved Problems # ### 18.5.1 Convergence in probability example # # # # https://youtu.be/rTectvxtPm4 # # # ### 18.5.2 Convergence in probability and in the mean square - Part I # # # # https://youtu.be/HRFM5q6374o # # # ### 18.5.3 Convergence in probability and in the mean square - Part II # # # # https://youtu.be/eoKxh7eznr4 # ### 18.5.4 Probability bounds # # # # https://youtu.be/8s6hXXA9mKM # ### 18.5.5 Using the CLT to estimate the probability of a wrong decision # # # # https://youtu.be/qjtb4VTNOQQ # ### 18.5.6 Using the CLT # # # # https://youtu.be/9Xl2fRK3q9M # ### 18.5.7 An unbiased variance estimator # # # # https://youtu.be/6H_Wmn9kozg # # Solution can be found on pages 467-468 of the text. 
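# As a small numerical preview of these ideas (a sketch with simulated data, not an example from the lecture), the sample mean and a CLT-based 95% confidence interval can be computed as follows:

# +
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=400)   # pretend these are the observations

n = len(x)
mean = x.mean()
se = x.std(ddof=1) / np.sqrt(n)            # estimated standard error of the sample mean

# CLT-based 95% confidence interval: mean +/- 1.96 * standard error
print(mean - 1.96 * se, mean + 1.96 * se)
# -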
# ## 18.6 Additional theoretical material # ### 18.6.1 Convergence of the sum of two random variables # # # # https://youtu.be/p3oDXka-dgw # ### 18.6.2 Jensen's inequality # # # # https://youtu.be/AG8Aa3fTucA # ### 18.6.3 Hoeffding's inequality # # # # https://youtu.be/ABmRmeRGtb0 # ## 18.7 Problem Set 8 # ### 18.7.1 Convergence in probability # # # # ### 18.7.2 Find the limits # # # ### 18.7.3 The sample mean # # # # ### 18.7.4 Airline overbooking # # # ### 18.7.5 Maximum likelihood estimation # # # ### 18.7.6 Tossing a triple of coins # # # ## 18.8 Unit 8 Summary # # https://youtu.be/rojy7JB76wM # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Starbucks Capstone Challenge # # ### Introduction # # This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks. # # Not all users receive the same offer, and that is the challenge to solve with this data set. # # Your task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products. # # Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement. # # You'll be given transactional data showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer. # # Keep in mind as well that someone using the app might make a purchase through the app without having received an offer or seen an offer. # # ### Example # # To give an example, a user could receive a discount offer buy 10 dollars get 2 off on Monday. The offer is valid for 10 days from receipt. If the customer accumulates at least 10 dollars in purchases during the validity period, the customer completes the offer. # # However, there are a few things to watch out for in this data set. Customers do not opt into the offers that they receive; in other words, a user can receive an offer, never actually view the offer, and still complete the offer. For example, a user might receive the "buy 10 dollars get 2 dollars off offer", but the user never opens the offer during the 10 day validity period. The customer spends 15 dollars during those ten days. There will be an offer completion record in the data set; however, the customer was not influenced by the offer because the customer never viewed the offer. 
# # ### Cleaning # # This makes data cleaning especially important and tricky. # # You'll also want to take into account that some demographic groups will make purchases even if they don't receive an offer. From a business perspective, if a customer is going to make a 10 dollar purchase without an offer anyway, you wouldn't want to send a buy 10 dollars get 2 dollars off offer. You'll want to try to assess what a certain demographic group will buy when not receiving any offers. # # ### Final Advice # # Because this is a capstone project, you are free to analyze the data any way you see fit. For example, you could build a machine learning model that predicts how much someone will spend based on demographics and offer type. Or you could build a model that predicts whether or not someone will respond to an offer. Or, you don't need to build a machine learning model at all. You could develop a set of heuristics that determine what offer you should send to each customer (i.e., 75 percent of women customers who were 35 years old responded to offer A vs 40 percent from the same demographic to offer B, so send offer A). # # Data Sets # # The data is contained in three files: # # * portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.) # * profile.json - demographic data for each customer # * transcript.json - records for transactions, offers received, offers viewed, and offers completed # # Here is the schema and explanation of each variable in the files: # # **portfolio.json** # * id (string) - offer id # * offer_type (string) - type of offer ie BOGO, discount, informational # * difficulty (int) - minimum required spend to complete an offer # * reward (int) - reward given for completing an offer # * duration (int) - time for offer to be open, in days # * channels (list of strings) # # **profile.json** # * age (int) - age of the customer # * became_member_on (int) - date when customer created an app account # * gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F) # * id (str) - customer id # * income (float) - customer's income # # **transcript.json** # * event (str) - record description (ie transaction, offer received, offer viewed, etc.) # * person (str) - customer id # * time (int) - time in hours since start of test. The data begins at time t=0 # * value - (dict of strings) - either an offer id or transaction amount depending on the record # # **Note:** If you are using the workspace, you will need to go to the terminal and run the command `conda update pandas` before reading in the files. This is because the version of pandas in the workspace cannot read in the transcript.json file correctly, but the newest version of pandas can. You can access the termnal from the orange icon in the top left of this notebook. # # You can see how to access the terminal and how the install works using the two images below. First you need to access the terminal: # # # # Then you will want to run the above command: # # # # Finally, when you enter back into the notebook (use the jupyter icon again), you should be able to run the below cell without any errors. 
# + import pandas as pd import numpy as np import math import json % matplotlib inline # read in the json files portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True) profile = pd.read_json('data/profile.json', orient='records', lines=True) transcript = pd.read_json('data/transcript.json', orient='records', lines=True) # - # ### Reading The Datasets portfolio.head(10) portfolio.shape[0] portfolio.shape[1] print('portfolio: rows = {} ,columns = {}'.format((portfolio.shape[0]),(portfolio.shape[1]))) portfolio.describe() portfolio.info() portfolio.offer_type.value_counts() portfolio.reward.value_counts() # + import matplotlib.pyplot as plt plt.figure(figsize=[6,6]) fig, ax = plt.subplots() y_counts = portfolio['offer_type'].value_counts() y_counts.plot(kind='barh').invert_yaxis() for i, v in enumerate(y_counts): ax.text(v, i, str(v), fontsize=14) plt.title('Different offer types') # - # Discount and bogo are equally given and on maximum times # + plt.figure(figsize=[6,6]) fig, ax = plt.subplots() y_counts = portfolio['duration'].value_counts() y_counts.plot(kind='barh').invert_yaxis() for i, v in enumerate(y_counts): ax.text(v, i, str(v), color='black', fontsize=14) plt.title('Different offer types\' duartion') # - # Here we can see that most of the offers are for the duration of 7 days # ### Profile profile.head(8) print('profile: rows = {} ,columns = {}'.format((profile.shape[0]),(profile.shape[1]))) profile.describe() profile.isnull().sum() profile.shape import seaborn as sns plt.figure(figsize=[6,6]) fig, ax = plt.subplots() y_counts = profile['gender'].value_counts() y_counts.plot(kind='barh').invert_yaxis() for i, v in enumerate(y_counts): ax.text(v, i, str(v), color='black', fontsize=14) plt.title('Count of Genders') plt.pie(profile['gender'].value_counts() , labels = ['Male' , 'Female' , 'Other']) # Mostly male are interested in the offers and they are the major ones # ### Transcript transcript.head(9) transcript.describe() transcript.info() print('transcript: rows = {} ,columns = {}'.format((profile.shape[0]),(profile.shape[1]))) # ### Cleaning The Datasets # #### Portfolio # # Renaming 'id' to 'offer_id' portfolio.columns = ['channels', 'difficulty', 'duration', 'offer_id', 'offer_type', 'reward'] portfolio.columns portfolio.head() # # Profile # # Renaming 'id' to 'customer_id' , filling the missing values of age and income with mean value , filling the missing values of gender with mode profile.columns profile.columns = ['age', 'became_member_on', 'gender', 'customer_id', 'income'] profile.columns profile['age'].fillna(profile['age'].mean()) #filling missing age with average age profile['income'].fillna(profile['income'].mean()) #filling missing income with average income profile['gender'].fillna(profile['gender'].mode()[0]) #filling missing gender with the most occuring gender profile.head() profile.isnull().sum() # So there is not any missing value remaining in the profile dataframe # # Transcript # # Renaming 'person' to 'customer_id' , splitting the 'value' column based on its keys and # dropping the unnecessary columns transcript.columns transcript.columns = ['event', 'customer_id', 'time', 'value'] #changing the column name transcript.head() transcript.value.astype('str').value_counts().to_dict() #converting the values in the column 'value' to dictionary transcript['offer_id'] = transcript.value.apply(lambda x: x.get('offer_id')) #splitting the 'value' into separate columns.here is 'offer_id' transcript['offer id'] = transcript.value.apply(lambda x: 
transcript.value.astype('str').value_counts().to_dict()  # inspect the distinct dictionaries stored in 'value'

# +
# split 'value' into separate columns; the offer key appears both as 'offer_id' and 'offer id'
transcript['offer_id'] = transcript.value.apply(lambda x: x.get('offer_id'))
transcript['offer id'] = transcript.value.apply(lambda x: x.get('offer id'))

# merge 'offer id' and 'offer_id' into the single column 'offer_id'
transcript['offer_id'] = transcript.apply(lambda x: x['offer id'] if x['offer_id'] is None else x['offer_id'], axis=1)
transcript.drop('offer id', axis=1, inplace=True)
# -

transcript.head(10)

# +
# split the reward and amount values out of 'value'
transcript['offer_reward'] = transcript['value'].apply(lambda x: x.get('reward'))
transcript['amount'] = transcript['value'].apply(lambda x: x.get('amount'))
transcript.drop('value', inplace=True, axis=1)
# -

transcript.isnull().sum()

transcript.fillna(0, inplace=True)  # fill the remaining missing values with 0

transcript.head(10)

# ### Exploratory Data Analysis

# ### Now we will merge the dataframes

merge_df = pd.merge(portfolio, transcript, on='offer_id')  # merge portfolio and transcript on 'offer_id'
final_df = pd.merge(merge_df, profile, on='customer_id')   # merge the result with profile on 'customer_id'

# Explore the final merged dataframe
final_df

# ### Now we will see the different offer types and their counts

final_df['offer_type'].value_counts().plot.barh(title='Offer types with their counts')

# Discount and BOGO are the most frequently sent offer types.

# ### Now we will see the different events and their counts

final_df['event'].value_counts().plot.barh(title='Different events and their counts')

# In most cases an offer is received but never completed, which suggests that many people simply ignore the offers they receive.

# ### Now we will analyse this data on the basis of the age of the customers

sns.distplot(final_df['age'], bins=50, hist_kws={'alpha': 0.4});

# Customers with an age above 100 act as outliers, so we remove them.

final_df = final_df[final_df['age'] <= 100]

# Plot the age distribution again
sns.distplot(final_df['age'], bins=50, hist_kws={'alpha': 0.4});

# Most customers fall in the 45-60 age group, more than any other group, which is quite interesting.
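# To back up the observation that many offers are ignored, one can look at the share of each event per offer type. This is an optional sketch, not part of the original analysis; it assumes the `final_df` built above, and note that informational offers have no 'offer completed' event by design, so their completion share is zero by construction.

# +
# Share of each event within each offer type; rows sum to 1.
event_share = (
    final_df.groupby('offer_type')['event']
    .value_counts(normalize=True)
    .unstack(fill_value=0)
    .round(3)
)
event_share
# -

# A ratio such as 'offer completed' / 'offer received' per offer type (or per demographic slice) is one way to turn this into the kind of heuristic mentioned in the project brief.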
# ### Now we will analyse this data on the basis of the income of the customers

sns.distplot(final_df['income'], bins=50, hist_kws={'alpha': 0.4});

final_df['income'].mean()

# Most Starbucks customers have an income in the range of 55k - 75k, with a mean income of 66413.35.

# ### Now we will see how our final dataframe depends on the 'gender' feature

final_df['gender'].value_counts().plot.barh(title='Analysing the gender of customers')

# Most of the customers are male.

# ### We will analyse the dataframe on the basis of 'offer_type', split by gender

sns.countplot(x='offer_type', hue='gender', data=final_df)

# The counts for male and female customers are approximately equal within the BOGO and discount offers.

# ### Now we will see the relation between gender and events

sns.countplot(x='event', hue='gender', data=final_df)

# To summarise the exploratory data analysis: most customers only receive the offers and do not view them, and the number of people who complete the offers they receive is quite small. Most of the offers made by Starbucks are BOGO and discount, most customers are in the 45-60 age group, the most common gender is male, and customer incomes mostly fall in the 55k - 75k range.

# # Making a Machine Learning Model

# First, analyse our final dataset
final_df

# #### We will now encode the categorical features 'offer_type', 'gender' and 'age'
# #### We will also encode offer_id and customer_id

# +
final_df = pd.get_dummies(final_df, columns=['offer_type', 'gender', 'age'])

# processing offer_id: map each offer id string to an integer
offer_id = final_df['offer_id'].unique().tolist()
offer_map = dict(zip(offer_id, range(len(offer_id))))
final_df.replace({'offer_id': offer_map}, inplace=True)

# processing customer_id: map each customer id string to an integer
customer_id = final_df['customer_id'].unique().tolist()
customer_map = dict(zip(customer_id, range(len(customer_id))))
final_df.replace({'customer_id': customer_map}, inplace=True)
# -

final_df.head()

# #### Now we will scale the numerical data, including 'income', 'difficulty', 'duration' and more
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
numerical_columns = ['income', 'difficulty', 'duration', 'offer_reward', 'time', 'reward', 'amount']
final_df[numerical_columns] = scaler.fit_transform(final_df[numerical_columns])
final_df.head()

# #### We will encode the values in the 'event' column

final_df['event'] = final_df['event'].map({'offer received': 1, 'offer viewed': 2, 'offer completed': 3})

final_df2 = final_df.drop('event', axis=1)

# #### Now encoding the channels column

final_df2['web'] = final_df2['channels'].apply(lambda x: 1 if 'web' in x else 0)
final_df2['mobile'] = final_df2['channels'].apply(lambda x: 1 if 'mobile' in x else 0)
final_df2['social'] = final_df2['channels'].apply(lambda x: 1 if 'social' in x else 0)
final_df2['email'] = final_df2['channels'].apply(lambda x: 1 if 'email' in x else 0)

# now drop the channels column
final_df2.drop('channels', axis=1, inplace=True)

final_df2['became_member_on'] = final_df2['became_member_on'].apply(lambda x: pd.to_datetime(str(x), format='%Y%m%d'))

# add new columns for the membership month and year
final_df2['month_member'] = final_df2['became_member_on'].apply(lambda x: x.month)
final_df2['year_member'] = final_df2['became_member_on'].apply(lambda x: x.year)

# drop the became_member_on column
final_df2.drop('became_member_on', axis=1, inplace=True)

final_df2.shape

# # Training Our Dataset

# ### Now splitting our data into training and test sets

independent_variables = final_df2       # all independent variables, excluding 'event'
dependent_variable = final_df['event']  # the target variable 'event'

from sklearn.model_selection import train_test_split

# split into training and test sets, with the test set being 30% of the data
x_train, x_test, y_train, y_test = train_test_split(independent_variables, dependent_variable, test_size=0.3, random_state=1)

x_train.shape

x_test.shape

# # Testing Our Dataset

# We will implement a number of classification methods and determine which one works best for this problem.

from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
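# Since the metric discussion below points to class imbalance, it can be worth checking how unbalanced the three event classes actually are, and optionally stratifying the split. This is an optional sketch, not part of the original pipeline; it reuses the variables defined above and keeps its results in separate names (`xs_train`, etc.) so the original split is untouched.

# +
# How unbalanced are the classes? (1 = received, 2 = viewed, 3 = completed)
print(dependent_variable.value_counts(normalize=True).round(3))

# A stratified split keeps those proportions identical in the train and test sets.
xs_train, xs_test, ys_train, ys_test = train_test_split(
    independent_variables, dependent_variable,
    test_size=0.3, random_state=1, stratify=dependent_variable)
print(ys_train.value_counts(normalize=True).round(3))
# -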
# We will judge the predicted output on two metrics: the accuracy score and the F score.
# We use an F score because it handles class imbalance better than plain accuracy, which makes it a more suitable metric for this model.

from sklearn.metrics import confusion_matrix, accuracy_score, fbeta_score


def train_test_f1(model):
    """
    Returns the micro-averaged F-beta score (beta = 0.5) on the training and test sets for a given model.

    model : an (unfitted) scikit-learn classifier

    Returns
    f1_score_train : F score on the training set (in percent)
    f1_score_test : F score on the test set (in percent)
    """
    model.fit(x_train, y_train)
    predict_train = model.predict(x_train)
    predict_test = model.predict(x_test)
    f1_score_train = fbeta_score(y_train, predict_train, beta=0.5, average='micro') * 100
    f1_score_test = fbeta_score(y_test, predict_test, beta=0.5, average='micro') * 100
    return f1_score_train, f1_score_test


# ### Implementing the KNN Model

knn = KNeighborsClassifier()
f1_score_train_knn, f1_score_test_knn = train_test_f1(knn)  # calculating the F scores

# ### Implementing the Logistic Regression

logistic = LogisticRegression()
f1_score_train_logistic, f1_score_test_logistic = train_test_f1(logistic)  # calculating the F scores

# ### Implementing the Random Forest Classifier

random_forest = RandomForestClassifier()
f1_score_train_random, f1_score_test_random = train_test_f1(random_forest)  # calculating the F scores

# ### Implementing the Decision Tree Classifier

decision_tree = DecisionTreeClassifier()
f1_score_train_decision, f1_score_test_decision = train_test_f1(decision_tree)  # calculating the F scores

# # Concluding from the above models and scores

f1_scores_models = {'model_name': [knn.__class__.__name__, logistic.__class__.__name__,
                                   random_forest.__class__.__name__, decision_tree.__class__.__name__],
                    'Training set F1 Score': [f1_score_train_knn, f1_score_train_logistic,
                                              f1_score_train_random, f1_score_train_decision],
                    'Test set F1 Score': [f1_score_test_knn, f1_score_test_logistic,
                                          f1_score_test_random, f1_score_test_decision]}
f1_scores_df = pd.DataFrame(f1_scores_models)
f1_scores_df

# From the table above we can conclude that the KNeighborsClassifier performed worst. The RandomForestClassifier achieved a good training score of 93.58 but performed badly on the test set with a score of 64.266. The DecisionTreeClassifier performed best, with a training score of 94.89 and a test score of 86.02, which means the model was able to distinguish between the offer events to a large extent. Given that this is a practical case study on a real-world dataset, we can say that the model performed successfully.
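# As an optional follow-up (not part of the original notebook), the `confusion_matrix` imported above can show where the best model still confuses events. This is a minimal sketch assuming the fitted `decision_tree` and the train/test split defined above; the row/column labels are only for readability.

# +
# Confusion matrix for the decision tree on the test set.
# Rows are true classes, columns are predicted classes (1 = received, 2 = viewed, 3 = completed).
cm = confusion_matrix(y_test, decision_tree.predict(x_test), labels=[1, 2, 3])
pd.DataFrame(cm,
             index=['true received', 'true viewed', 'true completed'],
             columns=['pred received', 'pred viewed', 'pred completed'])
# -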
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + papermill={"duration": 7.929453, "end_time": "2022-02-18T19:58:58.405664", "exception": false, "start_time": "2022-02-18T19:58:50.476211", "status": "completed"} tags=[] from tqdm.auto import tqdm import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.metrics import roc_auc_score from sklearn.model_selection import train_test_split , StratifiedKFold import tensorflow as tf import tensorflow.keras.backend as K from tensorflow.keras.optimizers import Adam from tensorflow.keras.models import Model, load_model, save_model from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau from tensorflow.keras.layers import Input,Dense, LSTM, RNN, Bidirectional, GlobalAveragePooling2D , Dropout, Conv1D, Flatten from transformers import TFAutoModel , AutoTokenizer # import ray # from ray import tune # + papermill={"duration": 0.031405, "end_time": "2022-02-18T19:58:58.457299", "exception": false, "start_time": "2022-02-18T19:58:58.425894", "status": "completed"} tags=[] class config: #train_path = "../input/dravidianlangtech2022-personal/Train_Data_Combined.csv" #val_path = "../input/dravidianlangtech2022-personal/Validation_Data_Combined.csv" test_path = "../input/test-for-dravid-lang-tech-new/ta-misogyny-test.csv" save_dir = "./result" seed = 55 try: AUTOTUNE = tf.data.AUTOTUNE except: AUTOTUNE = tf.data.experimental.AUTOTUNE epochs = 50 max_len = 64 batch_size = 32 hf_path = "google/muril-base-cased" tokenizer_path = "../input/with-n-abusive-comment-detection-tamil-dravidianl/result/muril_tokenizer" model_weights = "../input/with-n-abusive-comment-detection-tamil-dravidianl/result" def seed_everything(seed = config.seed): print(f"seeded everything to seed {seed}") os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) tf.random.set_seed(seed) if not os.path.exists(config.save_dir): os.makedirs(config.save_dir) seed_everything() # + papermill={"duration": 0.05603, "end_time": "2022-02-18T19:58:58.533176", "exception": false, "start_time": "2022-02-18T19:58:58.477146", "status": "completed"} tags=[] col_names = ['text'] df_test = pd.read_csv(config.test_path,names=col_names,sep='\t') # + papermill={"duration": 0.048078, "end_time": "2022-02-18T19:58:58.601649", "exception": false, "start_time": "2022-02-18T19:58:58.553571", "status": "completed"} tags=[] df_test.info() # + papermill={"duration": 3.25411, "end_time": "2022-02-18T19:59:01.877389", "exception": false, "start_time": "2022-02-18T19:58:58.623279", "status": "completed"} tags=[] #Tokenization Process tokenizer = AutoTokenizer.from_pretrained(config.hf_path) tokenizer.save_pretrained(os.path.join(config.save_dir , "muril_tokenizer")) # + papermill={"duration": 0.034419, "end_time": "2022-02-18T19:59:01.935196", "exception": false, "start_time": "2022-02-18T19:59:01.900777", "status": "completed"} tags=[] def fast_encode(texts, tokenizer, chunk_size=512, maxlen=config.max_len): input_ids = [] tt_ids = [] at_ids = [] for i in tqdm(range(0, len(texts), chunk_size)): text_chunk = texts[i:i+chunk_size] encs = tokenizer( text_chunk, max_length = config.max_len, padding='max_length', truncation=True ) input_ids.extend(encs['input_ids']) tt_ids.extend(encs['token_type_ids']) at_ids.extend(encs['attention_mask']) return {'input_ids': 
input_ids, 'token_type_ids': tt_ids, 'attention_mask':at_ids} # + papermill={"duration": 0.113987, "end_time": "2022-02-18T19:59:02.071772", "exception": false, "start_time": "2022-02-18T19:59:01.957785", "status": "completed"} tags=[] test_token_data = fast_encode(list(df_test['text'].values), tokenizer) #train_token_data['label'] = list(df_train['label'].values) # + papermill={"duration": 0.049698, "end_time": "2022-02-18T19:59:02.145001", "exception": false, "start_time": "2022-02-18T19:59:02.095303", "status": "completed"} tags=[] df_tokenized_test = pd.DataFrame(test_token_data) len(df_tokenized_test['input_ids'][0]) # + papermill={"duration": 0.035817, "end_time": "2022-02-18T19:59:02.209855", "exception": false, "start_time": "2022-02-18T19:59:02.174038", "status": "completed"} tags=[] del test_token_data # + papermill={"duration": 0.03299, "end_time": "2022-02-18T19:59:02.267257", "exception": false, "start_time": "2022-02-18T19:59:02.234267", "status": "completed"} tags=[] #preparing dataset def test_prep_function(embeddings): input_ids = embeddings['input_ids'] attention_mask = embeddings['attention_mask'] token_type_ids = embeddings['token_type_ids'] #target = tf.cast(target, tf.int32) return {'input_ids': input_ids ,'token_type_ids':token_type_ids,'attention_mask': attention_mask} # + papermill={"duration": 6.154511, "end_time": "2022-02-18T19:59:08.445075", "exception": false, "start_time": "2022-02-18T19:59:02.290564", "status": "completed"} tags=[] # Detect hardware, return appropriate distribution strategy try: # TPU detection. No parameters necessary if TPU_NAME environment variable is # set: this is always the case on Kaggle. tpu = tf.distribute.cluster_resolver.TPUClusterResolver() print('Running on TPU ', tpu.master()) except ValueError: tpu = None if tpu: tf.config.experimental_connect_to_cluster(tpu) tf.tpu.experimental.initialize_tpu_system(tpu) strategy = tf.distribute.experimental.TPUStrategy(tpu) else: # Default distribution strategy in Tensorflow. Works on CPU and single GPU. 
strategy = tf.distribute.get_strategy() print("REPLICAS: ", strategy.num_replicas_in_sync) # + papermill={"duration": 0.035582, "end_time": "2022-02-18T19:59:08.505089", "exception": false, "start_time": "2022-02-18T19:59:08.469507", "status": "completed"} tags=[] def create_model(transformer_model): input_id_layer = Input(shape=(config.max_len,) ,dtype = tf.int32 , name = 'input_ids') attention_mask_layer = Input(shape=(config.max_len,) , dtype = tf.int32 , name = 'attention_mask') token_type_layer = Input(shape=(config.max_len,) , dtype = tf.int32 , name = 'token_type_ids') transformer = transformer_model(input_ids = input_id_layer ,token_type_ids=token_type_layer,attention_mask = attention_mask_layer)[0] x = Dropout(0.5)(transformer) x = Conv1D(1,1)(x) x = Flatten()(x) predictions = Dense(8, activation = "softmax")(x) model = Model(inputs=[input_id_layer ,token_type_layer, attention_mask_layer], outputs = predictions) model.compile( optimizer = Adam(learning_rate= 0.01), metrics = ['accuracy'], loss = 'sparse_categorical_crossentropy' ) return model # + papermill={"duration": 75.051693, "end_time": "2022-02-18T20:00:23.581194", "exception": false, "start_time": "2022-02-18T19:59:08.529501", "status": "completed"} tags=[] with strategy.scope(): transformer_model = TFAutoModel.from_pretrained(config.hf_path) transformer_model.bert.trainable = False model = create_model(transformer_model) model.summary() tf.keras.utils.plot_model(model, show_shapes=True,show_dtype=True) # + papermill={"duration": 32.095706, "end_time": "2022-02-18T20:00:55.705306", "exception": false, "start_time": "2022-02-18T20:00:23.609600", "status": "completed"} tags=[] from sklearn.metrics import accuracy_score test_embeddings = {'input_ids': df_tokenized_test['input_ids'].tolist() ,'token_type_ids': df_tokenized_test['token_type_ids'].tolist(),"attention_mask":df_tokenized_test['attention_mask'].tolist()} #y_train = df_tokenized_train['label'] #y_test = df_tokenized_val['label'] #train_dataset = tf.data.Dataset.from_tensor_slices((train_embeddings , y_train)) test_dataset = tf.data.Dataset.from_tensor_slices((test_embeddings)) test_dataset = ( test_dataset .map(test_prep_function , num_parallel_calls = config.AUTOTUNE) .batch(config.batch_size) .prefetch(config.AUTOTUNE) ) test_steps = len(test_embeddings['input_ids'])//config.batch_size model.load_weights(f'{config.model_weights}/muril_fold_trained.h5') y_predict = model.predict(test_dataset , verbose = 1) predictions = y_predict preds_classes = np.argmax(predictions, axis=-1) # + papermill={"duration": 0.042108, "end_time": "2022-02-18T20:00:55.777918", "exception": false, "start_time": "2022-02-18T20:00:55.735810", "status": "completed"} tags=[] preds_classes # + papermill={"duration": 0.0389, "end_time": "2022-02-18T20:00:55.847720", "exception": false, "start_time": "2022-02-18T20:00:55.808820", "status": "completed"} tags=[] df_pred = pd.DataFrame(preds_classes,columns=['label']) # + papermill={"duration": 0.03987, "end_time": "2022-02-18T20:00:55.918401", "exception": false, "start_time": "2022-02-18T20:00:55.878531", "status": "completed"} tags=[] df_pred.replace({0:'Counter-speech', 1:'Homophobia', 2:'Hope-Speech', 3:'Misandry', 4:'Misogyny', 5:'None-of-the-above', 6:'Transphobic', 7:'Xenophobia'},inplace = True) # + papermill={"duration": 0.04833, "end_time": "2022-02-18T20:00:55.997638", "exception": false, "start_time": "2022-02-18T20:00:55.949308", "status": "completed"} tags=[] df_pred # + papermill={"duration": 0.039655, "end_time": 
"2022-02-18T20:00:56.068705", "exception": false, "start_time": "2022-02-18T20:00:56.029050", "status": "completed"} tags=[] df_test[list(df_pred.columns)] = df_pred # + papermill={"duration": 0.047372, "end_time": "2022-02-18T20:00:56.147316", "exception": false, "start_time": "2022-02-18T20:00:56.099944", "status": "completed"} tags=[] df_test # + papermill={"duration": 0.04811, "end_time": "2022-02-18T20:00:56.227769", "exception": false, "start_time": "2022-02-18T20:00:56.179659", "status": "completed"} tags=[] df_test.to_csv('BpHigh_tamil.tsv',sep="\t") # + papermill={"duration": 0.042532, "end_time": "2022-02-18T20:00:56.302594", "exception": false, "start_time": "2022-02-18T20:00:56.260062", "status": "completed"} tags=[] df_test.label.value_counts() # + papermill={"duration": 0.031737, "end_time": "2022-02-18T20:00:56.366397", "exception": false, "start_time": "2022-02-18T20:00:56.334660", "status": "completed"} tags=[] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="Z03qx2oKw0CO" colab_type="code" colab={} # #!pip install eli5 # + id="ovmRP6uVw2X6" colab_type="code" outputId="5e56af78-bd2d-44e0-e0ff-4e9456e49b2f" executionInfo={"status": "ok", "timestamp": 1581682473380, "user_tz": -60, "elapsed": 12416, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 168} import pandas as pd import numpy as np from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor from sklearn.metrics import mean_absolute_error from sklearn.model_selection import cross_val_score import eli5 from eli5.sklearn import PermutationImportance from ast import literal_eval from tqdm import tqdm_notebook # + id="m6odCqaEG-iw" colab_type="code" outputId="f2027afa-a813-4947-c289-f62ca2673aaa" executionInfo={"status": "ok", "timestamp": 1581682488135, "user_tz": -60, "elapsed": 1625, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # cd '/content/drive/My Drive/Colab Notebooks/dw_matrix' # + id="E5gjHZvjHRxS" colab_type="code" outputId="acaf792c-bd6d-478a-d7b9-da849cdc39d9" executionInfo={"status": "ok", "timestamp": 1581682498857, "user_tz": -60, "elapsed": 7069, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # ls data # + id="uHC-STIfHTJh" colab_type="code" colab={} df = pd.read_csv('data/men_shoes.csv', low_memory=False) # + id="pQDA4tM-Hl_p" colab_type="code" colab={} def run_model(feats, model = DecisionTreeRegressor(max_depth=5)): X = df[ feats ].values y = df['prices_amountmin'].values scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error') return np.mean(scores), np.std(scores) # + id="tase49BLH2vj" colab_type="code" colab={} df['brand_cat'] = df.brand.map(lambda x: str(x).lower()).factorize()[0] # + id="jRAFdSrsICEZ" colab_type="code" outputId="81677e98-dd66-4f3d-a749-28e7e4638b5d" executionInfo={"status": "ok", "timestamp": 1581682500852, "user_tz": -60, "elapsed": 3959, "user": {"displayName": "", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 34} run_model(['brand_cat']) # + id="uEMn6VhWIIdI" colab_type="code" 
outputId="7e76f6a6-3a19-4006-ba6c-59d9032d23ac" executionInfo={"status": "ok", "timestamp": 1581682503525, "user_tz": -60, "elapsed": 4533, "user": {"displayName": "", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 34} model = RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0) run_model(['brand_cat'], model) # + id="kDL2Tve5Iuqi" colab_type="code" outputId="98d8515e-498a-435e-953d-14fb1cb87b91" executionInfo={"status": "ok", "timestamp": 1581682507411, "user_tz": -60, "elapsed": 1714, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 173} df.features.values # + id="uCIk2LcOI9IR" colab_type="code" colab={} def parse_features(x): output_dict = {} if str(x) == 'nan' : return output_dict features = literal_eval(x.replace('\\"','"')) for item in features: key = item['key'].lower().strip() value = item['value'][0].lower().strip() output_dict[key] = value return output_dict df['features_parsed'] = df['features'].map(parse_features) # + id="XqI-cbrxSV3Q" colab_type="code" outputId="5fc1242d-17c6-457a-cbcd-300dcf83f31c" executionInfo={"status": "ok", "timestamp": 1581682509508, "user_tz": -60, "elapsed": 1989, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 34} keys = set() # df['features_parsed'].map( lambda x: keys.update(x.keys()) ) df['features_parsed'].map( lambda x: keys.update(x.keys()) ) len(keys) # + id="iJK5BtyBS_zu" colab_type="code" outputId="631a2b15-cbab-465b-ca0f-498fe12ae2b1" executionInfo={"status": "ok", "timestamp": 1581682513817, "user_tz": -60, "elapsed": 4659, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["51ee90867947452db562527d41a94004", "c20db350e88b4065a208bac78699c142", "debd164b64ed46c88e32cfb330a8795f", "7485d4e095e246dbb271f617ab63f1fa", "", "8d671e3f28b344e1a6f18863e3587c89", "", ""]} def get_name_feat(key): return 'feat_' + key for key in tqdm_notebook( keys): df[get_name_feat(key)] = df.features_parsed.map(lambda feats: feats[key] if key in feats else np.nan) # + id="ACjIX79ZW2cb" colab_type="code" outputId="ee88a986-e2cd-411a-a273-01e71a1dc9fa" executionInfo={"status": "ok", "timestamp": 1581682516210, "user_tz": -60, "elapsed": 764, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 153} df.columns # + id="ab71HJUMaIZV" colab_type="code" colab={} keys_stat = {} for key in keys: keys_stat[key] = df[ False == df[get_name_feat(key)].isnull() ].shape[0]/df.shape[0] * 100 # + id="jregWCHaa0dT" colab_type="code" outputId="a55ef616-8483-4133-c728-2d3cfe446247" executionInfo={"status": "ok", "timestamp": 1581682522125, "user_tz": -60, "elapsed": 1951, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 102} {k:v for k,v in keys_stat.items() if v > 30} # + id="ipAMxx8Oksde" colab_type="code" outputId="1a559165-093f-43a1-8735-851d85bf56be" executionInfo={"status": "ok", "timestamp": 1581682522131, "user_tz": -60, "elapsed": 1417, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": 
"https://localhost:8080/", "height": 1000} keys_stat # + id="Z_lIcmvNa2Pr" colab_type="code" outputId="e6ba527e-f4de-4bcd-ed3d-665963f42a8d" executionInfo={"status": "ok", "timestamp": 1581682526480, "user_tz": -60, "elapsed": 2008, "user": {"displayName": "017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 1000} # df['feat_brand_cat'] = df.feat_brand.factorize()[0] # df['feat_color_cat'] = df.feat_color.factorize()[0] # df['feat_gender_cat'] = df.feat_gender.factorize()[0] # df['feat_manufacturer part number_cat'] = df['feat_manufacturer part number'].factorize()[0] # df['feat_material_cat'] = df.feat_material.factorize()[0] # df['feat_sport_cat'] = df.feat_sport.factorize()[0] # df['feat_style_cat'] = df.feat_style.factorize()[0] for key in keys_stat: df[get_name_feat(key) + '_cat'] = df[get_name_feat(key)].factorize()[0] print(get_name_feat(key)) # get_name_feat(key) # + id="Jy7qEoHlcP0l" colab_type="code" colab={} feats = ['brand_cat'] # + id="kQwFFEH0b7sG" colab_type="code" outputId="b3243d3b-ddc8-4579-e2dc-68449cc4f31d" executionInfo={"status": "ok", "timestamp": 1581682533012, "user_tz": -60, "elapsed": 3505, "user": {"displayName": "", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 34} model = RandomForestRegressor(max_depth=5, n_estimators=100) run_model(['brand_cat'], model=model) # + id="yWNymkyOcz8D" colab_type="code" outputId="4abbb300-3957-4ca4-d901-4c55652e18c4" executionInfo={"status": "ok", "timestamp": 1581682538755, "user_tz": -60, "elapsed": 8107, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 34} feats = ['brand_cat', 'feat_brand_cat', 'feat_color_cat', 'feat_gender_cat', 'feat_manufacturer part number_cat', 'feat_material_cat'] run_model(feats, model) # + id="iYaZCchDeWWO" colab_type="code" outputId="7f0c93e9-b8b3-4e00-e968-ada27fcd2859" executionInfo={"status": "ok", "timestamp": 1581682542706, "user_tz": -60, "elapsed": 9897, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 34} feats = ['brand_cat', 'feat_brand_cat', 'feat_gender_cat', 'feat_material_cat', 'feat_style_cat'] run_model(feats, model) # + id="WHAZSMyZjbZu" colab_type="code" colab={} feats_cat = [x for x in df.columns if '_cat' in x] # + id="RmQ1v6C7kBzE" colab_type="code" colab={} feats_cat.remove('feat_catalog') # + id="kRfL-Do-jlyL" colab_type="code" outputId="2bef6206-fdb3-4c8a-a69d-af5d3999f30d" executionInfo={"status": "ok", "timestamp": 1581683027631, "user_tz": -60, "elapsed": 72573, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 34} feats += feats_cat feats = list(set(feats)) run_model(feats_cat, model) # + id="gHGySEOUdGiS" colab_type="code" outputId="8ad1a2d4-abcd-480f-c358-ebfab6747b79" executionInfo={"status": "ok", "timestamp": 1581683280067, "user_tz": -60, "elapsed": 252411, "user": {"displayName": "Mateusz \u017bukowski", "photoUrl": "", "userId": "04666613203844295870"}} colab={"base_uri": "https://localhost:8080/", "height": 391} X = df [ feats ].values y = df['prices_amountmin'].values m = model.fit(X, y) perm = PermutationImportance(m, random_state=0).fit(X, y); eli5.show_weights(perm, feature_names=feats) # --- # jupyter: # jupytext: # 
text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + pycharm={"name": "#%%\n"} """install dependencies""" # !pip install sqlalchemy # for use in main.py # !pip install click # + """create in-memory database""" from sqlalchemy import (Column, Integer, Boolean, Float, DateTime, MetaData, Table, create_engine, CheckConstraint) engine = create_engine('sqlite://', echo=False) # in-memory database meta = MetaData() pago = Table('PAGO', meta, Column('id_pago', Integer, primary_key=True, nullable=False), Column('id_contrato', Integer,nullable=False), Column('id_cliente', Integer,nullable=False), CheckConstraint("""typeof(id_contrato) = "integer" & id_contrato >= 0 & typeof(id_cliente) = "integer" & id_cliente >= 0 & typeof(id_pago) = "integer" & id_pago >= 0 & typeof(monto) = "float" & monto > 0""", name='is a natural number'), Column('fecha', DateTime,nullable=False), Column('monto', Float,nullable=False), Column('fecha_registro', DateTime,nullable=False), Column('activo', Boolean,nullable=False) ).create(engine) # + pycharm={"name": "#%%\n"} """define ORM functionality""" from typing import Optional from sqlalchemy.sql import Update,Select from sqlalchemy.orm import Session,declarative_base from sqlalchemy import select,desc,update from datetime import datetime Base = declarative_base() DATE_FORMAT = '%d-%m-%Y' class Pago(Base): __tablename__: str = "PAGO" # Identificador del pago id_pago = Column(Integer, nullable=False, primary_key=True, autoincrement=True) id_contrato = Column(Integer, nullable=False) # Identificador del contrato id_cliente = Column(Integer, nullable=False) # Identificador del cliente fecha = Column(DateTime, nullable=False) # Fecha de pago monto = Column(Float, nullable=False) # Monto del pago # true si el registro está vigente, false si el registro ya no es válido (eliminado lógico) activo = Column(Boolean, default=True) fecha_registro = Column(DateTime, default=datetime.now()) # Fecha de registro del pago @classmethod def select_latest_pagos(cls, id_contrato: int, id_cliente: int, activo=True)->Select: return select(cls).where( cls.id_contrato == id_contrato, cls.id_cliente == id_cliente, cls.activo == activo, ).order_by(desc(cls.fecha)) @classmethod def replace_consecutive_ids(cls, starting_id_pago:int, id_contrato: int) -> Update: """replace consecutive ids found, i.e rows with (id_pago > starting_id_pago)""" return update(cls).where( cls.id_pago > starting_id_pago, cls.id_contrato == id_contrato, cls.activo).values(id_pago=cls.id_pago + 1) @classmethod def add_registry(cls, _session, pago_to_add): """related logic for specific constraint and business logic 05 august 2021.- Se pueden recibir pagos con fechas anteriores a los pagos ya registrados de un contrato, es decir, si ya existen N pagos con fechas F0,...,FN en la tabla de pagos y se recibe un pago con fecha F' donde F' < {Fk,...,Fm} (F' es anterior a 1 o varios pagos de un contrato), se desactivarán todos los pagos del contrato posteriores a F', se insertará el nuevo pago con fecha F' y se insertarán nuevos registros para los pagos que ya existían (posteriores a F'), de tal manera que para todos los pagos de un mismo contrato si Fi < Fj entonces id_pago[i] < id_pago[j]""" ID_CONTRATO = pago_to_add.id_contrato LATEST_PAGO_STM = cls.select_latest_pagos( id_contrato=pago_to_add.id_contrato, id_cliente=pago_to_add.id_cliente ).limit(1) # 
https://docs.sqlalchemy.org/en/14/core/connections.html#sqlalchemy.engine.Result.scalars latest_pago: Optional[cls] = _session.execute(LATEST_PAGO_STM).scalars().first() if latest_pago is None: # no related info found _session.add(pago_to_add) else: # info found if pago_to_add.fecha > latest_pago.fecha: _session.add(pago_to_add) else: UPDATE_STM = cls.replace_consecutive_ids(pago_to_add. id_pago,ID_CONTRATO) _session.execute(UPDATE_STM) def __repr__(self): fecha,id_contrato,monto = self.fecha.strftime(DATE_FORMAT),self.id_contrato,self.monto return f"Pago con {fecha=}, {id_contrato=}, {monto=}" "ok" # + pycharm={"name": "#%%\n"} """a. Escribir una función en Python que reciba como parámetro los datos de un pago e inserte el pago en la tabla considerando la regla 2 anterior.""" PAGOS_TO_INSERT = [ Pago(id_contrato=12, id_cliente=99, fecha=datetime(2021,8,5), monto=7000), Pago(id_contrato=12, id_cliente=99, fecha=datetime(2021,8,6), monto=1280), Pago(id_contrato=12, id_cliente=99, fecha=datetime(2021,8,7), monto=4900), Pago(id_contrato=12, id_cliente=99, fecha=datetime(2021,8,4), monto=4900) ] if __name__ == '__main__': with Session(engine) as session: # table = list(session.execute(select(Pago)).scalars().all()) table = list(session.execute(select(Pago))) for row in table: print(row) for _pago in PAGOS_TO_INSERT: Pago.add_registry(session, _pago) session.commit() # + pycharm={"name": "#%%\n"} with Session(engine) as session: found = list(session.execute( select(Pago).where( # Pago.id_pago > 4, Pago.id_contrato == 12, Pago.activo == True) )) # + pycharm={"name": "#%%\n"} """b. Habilitar interfaz de cualquier tipo (web, terminal, etc.) para interactuar con la función descrita en el punto anterior""" # el archivo fue llamado main.py """c. Usar sqlalchemy para modelar la tabla e interactuar con ella (insert, update, select, etc.).""" # hecho """d. Estructura el código de tal manera que sea modular""" # hecho """e. Agrega validaciones y manejo de errores, debe ser a prueba de todo.""" # hecho # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CS-109B Introduction to Data Science # ## Lab 6: Convolutional Neural Networks 2 # # **Harvard University**
# **Spring 2020**
# **Instructors:** , , and
# **Lab Instructors:** and
# **Content:** , , # # --- # RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES import requests from IPython.core.display import HTML styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2019-CS109B/master/content/styles/cs109.css").text HTML(styles) # ## Learning Goals # # In this lab we will continue with Convolutional Neural Networks (CNNs), will look into the `tf.data` interface which enables us to build complex input pipelines for our data. We will also touch upon visualization techniques to peak into our CNN's hidden layers. # # By the end of this lab, you should be able to: # # - know how a CNN works from start to finish # - use `tf.data.Dataset` to import and, if needed, transform, your data for feeding into the network. Transformations might include normalization, scaling, tilting, resizing, or applying other data augmentation techniques. # - understand how `saliency maps` are implemented with code. # # # ## Table of Contents # # 1. **Part 1**: [Beginning-to-end Convolutional Neural Networks](#part1). # 2. **Part 2**: [Image Pipelines with `tf.data.Dataset`](#part2). # 3. **Part 3**: [Hidden Layer Visualization, Saliency Maps](#part3). # + import numpy as np from scipy.optimize import minimize from sklearn.utils import shuffle import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (5,5) # %matplotlib inline # - import tensorflow as tf from tensorflow import keras from tensorflow.keras.models import Sequential, Model from tensorflow.keras.layers import Dense, Conv2D, Conv1D, MaxPooling2D, MaxPooling1D,\ Dropout, Flatten, Activation, Input from tensorflow.keras.optimizers import Adam, SGD, RMSprop from tensorflow.keras.utils import to_categorical from tensorflow.keras.metrics import AUC, Precision, Recall, FalsePositives, \ FalseNegatives, TruePositives, TrueNegatives from tensorflow.keras.preprocessing import image from tensorflow.keras.regularizers import l2 from __future__ import absolute_import, division, print_function, unicode_literals tf.keras.backend.clear_session() # For easy reset of notebook state. print(tf.__version__) # You should see a > 2.0.0 here! from tf_keras_vis.utils import print_gpus print_gpus() # + ## Additional Packages required if you don't already have them # While in your conda environment, # imageio # Install using "conda install imageio" # pillow # Install using "conda install pillow" # tensorflow-datasets # Install using "conda install tensorflow-datasets" # tf-keras-vis # Install using "pip install tf-keras-vis" # tensorflow-addons # Install using "pip install tensorflow-addons" # - from tf_keras_vis.saliency import Saliency from tf_keras_vis.utils import normalize import tf_keras_vis.utils as utils from matplotlib import cm from tf_keras_vis.gradcam import Gradcam np.random.seed(109) tf.random.set_seed(109) # ## Part 0: Running on SEAS JupyterHub # # **PLEASE READ**: [Instructions for Using SEAS JupyterHub](https://canvas.harvard.edu/courses/65462/pages/instructions-for-using-seas-jupyterhub?module_item_id=638544) # # SEAS and FAS are providing you with a platform in AWS to use for the class (accessible from the 'Jupyter' menu link in Canvas). These are AWS p2 instances with a GPU, 10GB of disk space, and 61 GB of RAM, for faster training for your networks. Most of the libraries such as keras, tensorflow, pandas, etc. are pre-installed. If a library is missing you may install it via the Terminal. # # **NOTE: The AWS platform is funded by SEAS and FAS for the purposes of the class. 
It is FREE for you - not running against your personal AWS credit. For this reason you are only allowed to use it for purposes related to this course, and with prudence.** # # **Help us keep this service: Make sure you stop your instance as soon as you do not need it. Your instance will terminate after 30 min of inactivity.** # # ![aws-dog](../images/aws-dog.jpeg) # *source: CS231n Stanford, Google Cloud Tutorial* # # # ## Part 1: Beginning-to-end Convolutional Neural Networks # # ![cnn](../images/CNN.png) # # *image [source](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/)* #

# We will go through the various steps of training a CNN, including:
# - the difference between cross-validation and a single validation split (see the short sketch after this list)
# - specifying a loss, metrics, and an optimizer
# - performing validation
# - using callbacks, specifically `EarlyStopping`, which stops training once the validation metrics stop improving
# - the significance of the learning rate
#
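# This lab itself uses a single validation split (via `validation_data` / `validation_split`). The sketch below is not part of the original lab: it contrasts that with K-fold cross-validation on a toy dense network and synthetic data, just to make the first bullet concrete. All names and data here are made up for illustration.

# +
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

# toy data: 200 samples, 10 features, binary labels
rng = np.random.default_rng(0)
X_toy = rng.normal(size=(200, 10)).astype('float32')
y_toy = (X_toy[:, 0] > 0).astype('float32')

def make_toy_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1, activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

# (a) single validation split: one model, one held-out fraction
make_toy_model().fit(X_toy, y_toy, epochs=3, validation_split=0.2, verbose=0)

# (b) K-fold cross-validation: K models, every sample is validated exactly once
scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_toy):
    fold_model = make_toy_model()
    fold_model.fit(X_toy[train_idx], y_toy[train_idx], epochs=3, verbose=0)
    scores.append(fold_model.evaluate(X_toy[val_idx], y_toy[val_idx], verbose=0)[1])
print('mean CV accuracy:', np.mean(scores))
# -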

#
Table Exercise: Use the whiteboard next to your table to draw a CNN from start to finish as per the instructions. We will then draw it together in class.
# [Back to Table of Contents](#top) # # ## Part 2: Image Preprocessing: Using `tf.data.Dataset` import tensorflow_addons as tfa import tensorflow_datasets as tfds # `tf.data` API in `tensorflow` enables you to build complex **input pipelines** from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. # # The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data API` makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations. # # The `tf.data API` introduces a `tf.data.Dataset` that represents a sequence of **elements**, consistινγ of one or more **components**. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label. # # To create an input pipeline, you must start with a data **source**. For example, to construct a Dataset from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`. # # The Dataset object is a Python iterable. You may view its elements using a for loop: # + dataset = tf.data.Dataset.from_tensor_slices(tf.random.uniform([4, 10], minval=1, maxval=10, dtype=tf.int32)) for elem in dataset: print(elem.numpy()) # - # Once you have a Dataset object, you can **transform** it into a new Dataset by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the [documentation](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) for `tf.data.Dataset` for a complete list of transformations. # # The `map` function takes a function and returns a new and augmented dataset. dataset = dataset.map(lambda x: x*2) for elem in dataset: print(elem.numpy()) # Datasets are powerful objects because they are effectively dictionaries that can store tensors and other data such as the response variable. We can also construct them by passing small sized `numpy` arrays, such as in the following example. # # Tensorflow has a plethora of them: # + # uncomment to see available datasets #tfds.list_builders() # - # #### `mnist` dataset # load mnist (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() x_train.shape, y_train.shape # take only 10 images for simplicity train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)) # In case you want to retrieve the images/numpy arrays for element in iter(train_dataset.take(1)): image = element[0].numpy() print(image.shape) print(image.shape) plt.figure() plt.imshow(image, cmap='gray') plt.show() # Once you have your Model, you may pass a Dataset instance directly to the methods `fit()`, `evaluate()`, and `predict()`. The difference with the way we have been previously using these methods is that we are not passing the images and labels separately. They are now both in the Dataset object. 
# # ``` # model.fit(train_dataset, epochs=3) # # model.evaluate(test_dataset) # ``` # #### Data Augmentation fig, axes = plt.subplots(1,6, figsize=(10,3)) for i, (image, label) in enumerate(train_dataset.take(4)): axes[i].imshow(image) axes[i].set_title(f'{label:.2f}') image_flip_up = tf.image.flip_up_down(np.expand_dims(image, axis=2)).numpy() image_rot_90 = tf.image.rot90(np.expand_dims(image, axis=2), k=1).numpy() axes[4].imshow(image_flip_up.reshape(28,-1)) axes[4].set_title(f'{label:.2f}-flip') axes[5].imshow(image_rot_90.reshape(28,-1)) axes[5].set_title(f'{label:.2f}-rot90') plt.show(); # #### Note: # # The tf.data API is a set of utilities in TensorFlow 2.0 for loading and preprocessing data in a way that's fast and scalable. You also have the option to use the `keras` [`ImageDataGenerator`](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator), that accepts `numpy` arrays, instead of the Dataset. We think it's good for you to learn to use Datasets. # # As a general rule, for input to NNs, Tensorflow recommends that you use `numpy` arrays if your data is small and fit in memory, and `tf.data.Datasets` otherwise. # # #### References: # 1. `tf.data.Dataset` [Documentation](https://www.tensorflow.org/api_docs/python/tf/data/Dataset). # 2. Import [`numpy` arrays in Tensorflow](https://www.tensorflow.org/tutorials/load_data/numpy) # ### The Street View House Numbers (SVHN) Dataset # # We will play with the SVHN real-world image dataset. It can be seen as similar in flavor to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images. # # All digits have been resized to a fixed resolution of 32-by-32 pixels. The original character bounding boxes are extended in the appropriate dimension to become square windows, so that resizing them to 32-by-32 pixels does not introduce aspect ratio distortions. Nevertheless this preprocessing introduces some distracting digits to the sides of the digit of interest. Loading the .mat files creates 2 variables: X which is a 4-D matrix containing the images, and y which is a vector of class labels. To access the images, $X(:,:,:,i)$ gives the i-th 32-by-32 RGB image, with class label $y(i)$. # # ![svhn](../images/svhn.png) # # *, , , , , . Ng Reading Digits in Natural Images with Unsupervised Feature Learning NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011.* # Will take some time but will only load once train_svhn_cropped, test_svhn_cropped = tfds.load('svhn_cropped', split=['train', 'test'], shuffle_files=False) isinstance(train_svhn_cropped, tf.data.Dataset) # # convert to numpy if needed features = next(iter(train_svhn_cropped)) images = features['image'].numpy() labels = features['label'].numpy() images.shape, labels.shape for i, element in enumerate(train_svhn_cropped): if i==1: break; image = element['image'] label = element['label'] print(label) # + # batch_size indicates that the dataset should be divided in batches # each consisting of 4 elements (a.k.a images and their labels) # take_size chooses a number of these batches, e.g. 
3 of them for display batch_size = 4 take_size = 3 # Plot fig, axes = plt.subplots(take_size,batch_size, figsize=(10,10)) for i, element in enumerate(train_svhn_cropped.batch(batch_size).take(take_size)): for j in range(4): image = element['image'][j] label = element['label'][j] axes[i][j].imshow(image) axes[i][j].set_title(f'true label={label:d}') # - # Here we convert from a collection of dictionaries to a collection of tuples. We will still have a `tf.data.Dataset` # + def normalize_image(img): return tf.cast(img, tf.float32)/255. def normalize_dataset(element): img = element['image'] lbl = element['label'] return normalize_image(img), lbl # - train_svhn = train_svhn_cropped.map(normalize_dataset) test_svhn = test_svhn_cropped.map(normalize_dataset) isinstance(train_svhn, tf.data.Dataset) # #### Define our CNN model # + n_filters = 16 input_shape = (32, 32, 3) svhn_model = Sequential() svhn_model.add(Conv2D(n_filters, (3, 3), activation='relu', input_shape=input_shape)) svhn_model.add(MaxPooling2D((2, 2))) svhn_model.add(Conv2D(n_filters*2, (3, 3), activation='relu')) svhn_model.add(MaxPooling2D((2, 2))) svhn_model.add(Conv2D(n_filters*4, (3, 3), activation='relu')) svhn_model.add(Flatten()) svhn_model.add(Dense(n_filters*2, activation='relu')) svhn_model.add(Dense(10, activation='softmax')) svhn_model.summary() # + loss = keras.losses.sparse_categorical_crossentropy # we use this because we did not 1-hot encode the labels optimizer = Adam(lr=0.001) metrics = ['accuracy'] # Compile model svhn_model.compile(optimizer=optimizer, loss=loss, metrics=metrics) # - # #### With Early Stopping # + # %%time batch_size = 64 epochs=15 callbacks = [ keras.callbacks.EarlyStopping( # Stop training when `val_accuracy` is no longer improving monitor='val_accuracy', # "no longer improving" being further defined as "for at least 2 epochs" patience=2, verbose=1) ] history = svhn_model.fit(train_svhn.batch(batch_size), #.take(50), # change 50 only epochs=epochs, callbacks=callbacks, validation_data=test_svhn.batch(batch_size)) #.take(50)) # + def print_history(history): fig, ax = plt.subplots(1, 1, figsize=(8,4)) ax.plot((history.history['accuracy']), 'b', label='train') ax.plot((history.history['val_accuracy']), 'g' ,label='val') ax.set_xlabel(r'Epoch', fontsize=20) ax.set_ylabel(r'Accuracy', fontsize=20) ax.legend() ax.tick_params(labelsize=20) fig, ax = plt.subplots(1, 1, figsize=(8,4)) ax.plot((history.history['loss']), 'b', label='train') ax.plot((history.history['val_loss']), 'g' ,label='val') ax.set_xlabel(r'Epoch', fontsize=20) ax.set_ylabel(r'Loss', fontsize=20) ax.legend() ax.tick_params(labelsize=20) plt.show(); print_history(history) # - svhn_model.save('svhn_good.h5') # #### Too High Learning Rate # + loss = keras.losses.sparse_categorical_crossentropy optimizer = Adam(lr=0.5) # really big learning rate metrics = ['accuracy'] # Compile model svhn_model.compile(optimizer=optimizer, loss=loss, metrics=metrics) # + # %%time batch_size = 64 epochs=10 history = svhn_model.fit(train_svhn.batch(batch_size), #.take(50), # change 50 to see the difference epochs=epochs, validation_data=test_svhn.batch(batch_size)) #.take(50)) # - print_history(history) fig.savefig('../images/train_high_lr.png') # #### Too Low Learning Rate # # Experiment with the learning rate using a small sample of the training set by using .take(num) which takes only `num` number of samples. 
# ``` # history = svhn_model.fit(train_svhn.batch(batch_size).take(50)) # ``` # + #loss = keras.losses.categorical_crossentropy loss = keras.losses.sparse_categorical_crossentropy # we use this because we did not 1-hot encode the labels optimizer = Adam(lr=1e-5) # very low learning rate metrics = ['accuracy'] # Compile model svhn_model.compile(optimizer=optimizer, loss=loss, metrics=metrics) # + # %%time batch_size = 32 epochs=10 history = svhn_model.fit(train_svhn.batch(batch_size).take(50), epochs=epochs, validation_data=test_svhn.batch(batch_size)) #.take(50)) # - print_history(history) fig.savefig('../images/train_50.png') # #### Changing the batch size # + #loss = keras.losses.categorical_crossentropy loss = keras.losses.sparse_categorical_crossentropy # we use this because we did not 1-hot encode the labels optimizer = Adam(lr=0.001) metrics = ['accuracy'] # Compile model svhn_model.compile(optimizer=optimizer, loss=loss, metrics=metrics) # + # %%time batch_size = 2 epochs=5 history = svhn_model.fit(train_svhn.batch(batch_size), epochs=epochs, validation_data=test_svhn.batch(batch_size)) # - print_history(history) # [Back to Table of Contents](#top) # ## Part 3: Hidden Layer Visualization, Saliency Maps # # [Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps](https://arxiv.org/pdf/1312.6034.pdf) # # It is often said that Deep Learning Models are black boxes. But we can peak into these boxes. # #### Let's train a small model on MNIST from tensorflow.keras.datasets import mnist # load MNIST data (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train.min(), x_train.max() # + x_train = x_train.reshape((60000, 28, 28, 1)) # Reshape to get third dimension x_test = x_test.reshape((10000, 28, 28, 1)) x_train = x_train.astype('float32') / 255 # Normalize between 0 and 1 x_test = x_test.astype('float32') / 255 # Convert labels to categorical data y_train = to_categorical(y_train) y_test = to_categorical(y_test) # - x_train.min(), x_train.max() # (train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data( # path='mnist.npz') x_train.shape # + class_idx = 0 indices = np.where(y_test[:, class_idx] == 1.)[0] # pick some random input from here. idx = indices[0] img = x_test[idx] # - np.unique(y_test[:, class_idx]) # + # pick some random input from here. idx = indices[0] # Lets sanity check the picked image. 
from matplotlib import pyplot as plt # %matplotlib inline plt.rcParams['figure.figsize'] = (18, 6) #plt.imshow(test_images[idx][..., 0]) img = x_test[idx] * 255 img = img.astype('float32') img = np.squeeze(img) # trick to reduce img from (28,28,1) to (28,28) plt.imshow(img, cmap='gray'); # + input_shape=(28, 28, 1) num_classes = 10 model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax', name='preds')) model.summary() # - model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy']) num_samples = x_train.shape[0] num_samples # + # %%time batch_size = 32 epochs = 10 model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_split=0.2, shuffle=True) # - score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) # ### Let's look at the layers with `tf.keras.viz` # # https://pypi.org/project/tf-keras-vis/ # # And an example: https://github.com/keisen/tf-keras-vis/blob/master/examples/visualize_conv_filters.ipynb # We can identify layers by their layer id: # Alternatively we can specify layer_id as -1 since it corresponds to the last layer. layer_id = 0 model.layers[layer_id].name, model.layers[-2].name # Or you may look at their output output = [model.layers[layer_id].output] output # + # # You may also replace part of your NN with other parts, # # e.g. replace the activation function of the last layer # # with a linear one # model.layers[-1].activation = tf.keras.activations.linear # - # Generate Feature Maps def get_feature_maps(model, layer_id, input_image): """Returns intermediate output (activation map) from passing an image to the model Parameters: model (tf.keras.Model): Model to examine layer_id (int): Which layer's (from zero) output to return input_image (ndarray): The input image Returns: maps (List[ndarray]): Feature map stack output by the specified layer """ model_ = Model(inputs=[model.input], outputs=[model.layers[layer_id].output]) return model_.predict(np.expand_dims(input_image, axis=0))[0,:,:,:].transpose((2,0,1)) # Choose an arbitrary image image_id = 67 img = x_test[image_id,:,:,:] img.shape img_to_show = np.squeeze(img) plt.imshow(img_to_show, cmap='gray') # Was this successfully predicted? img_batch = (np.expand_dims(img,0)) print(img_batch.shape) predictions_single = model.predict(img_batch) print(f'Prediction is: {np.argmax(predictions_single[0])}') # layer id should be for a Conv layer, a Flatten will not do maps = get_feature_maps(model, layer_id, img)# [0:10] maps.shape # + # Plot just a subset maps = get_feature_maps(model, layer_id, img)[0:10] fig, ax = plt.subplots() img = np.squeeze(img) ax.imshow(img + 0.5) label = y_test[image_id,:] label = int(np.where(label == 1.)[0]) ax.set_title(f'true label = {label}') f, ax = plt.subplots(3,3, figsize=(8,8)) for i, axis in enumerate(ax.ravel()): axis.imshow(maps[i], cmap='gray') # - # ### `tf_keras_vis.gradcam.Gradcam` # # [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization](https://arxiv.org/pdf/1610.02391.pdf) #from tensorflow.keras import backend as K # Define modifier to replace a softmax function of the last layer to a linear function. 
def model_modifier(m): m.layers[-1].activation = tf.keras.activations.linear # + #img_batch = (np.expand_dims(img,0)) # Define modifier to replace a softmax function of the last layer to a linear function. def model_modifier(m): m.layers[-1].activation = tf.keras.activations.linear # Create Saliency object saliency = Saliency(model, model_modifier) # Define loss function. Pass it the correct class label. loss = lambda output: tf.keras.backend.mean(output[:, tf.argmax(y_test[image_id])]) # - # Generate saliency map print(img_batch.shape) # + saliency_map = saliency(loss, img_batch) saliency_map = normalize(saliency_map) f, ax = plt.subplots(nrows=1, ncols=2, figsize=(10, 5)) #, subplot_kw={'xticks': [], 'yticks': []}) ax[0].imshow(saliency_map[0], cmap='jet') ax[1].imshow(img); # + # from matplotlib import cm # from tf_keras_vis.gradcam import Gradcam # Create Gradcam object gradcam = Gradcam(model, model_modifier) # Generate heatmap with GradCAM cam = gradcam(loss, img_batch) cam = normalize(cam) f, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 5), subplot_kw={'xticks': [], 'yticks': []}) for i in range(len(cam)): heatmap = np.uint8(cm.jet(cam[i])[..., :3] * 255) ax.imshow(img) ax.imshow(heatmap, cmap='jet', alpha=0.5) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Digital House - Data Science a Distancia # # ## Trabajo Práctico 2 # # # # ### Autores: , , , , # #

May 2022

# #### Aspectos técnicos # La notebook se ejecuta correctamente en una instalación estándar de Anaconda versión 4.11.0 build 3.21.6, Python 3.9.7 # # #### Librerías necesarias import numpy as np import pandas as pd import re # %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split # from sklearn.preprocessing import StandardScaler from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score # + tags=[] import statsmodels.api as sm # - # Ignore some warnings from warnings import simplefilter simplefilter(action='ignore', category=FutureWarning) simplefilter(action='ignore', category=UserWarning) # --- # #### Dataset data_final_url = "../Data/properatti_final.csv" data = pd.read_csv(data_final_url, encoding="utf-8") # + mask = data['state_name'] == 'Capital Federal' mask = np.logical_and(mask, data['property_type'] == 'apartment') data = data[mask].copy() mask = data['place_name'] == 'Belgrano' mask = np.logical_or(mask, data['place_name'] == 'Palermo') mask = np.logical_or(mask, data['place_name'] == 'Recoleta') data = data[mask].copy() # - # --- # #### Seleccionar, renombrar y ajustar valores de las características data.reset_index(inplace=True) data.rename(columns={'property_type' : 'tipo'}, inplace=True) data.rename(columns={'price_aprox_usd' : 'precio'}, inplace=True) data.rename(columns={'surface_covered_in_m2' : 'superficie'}, inplace=True) # Trabajar con precio en miles de dólares data['precio'] = (data['precio'] / 1000).round(3) # --- data.shape data.head() data['place_name'].value_counts() # --- # # + sns.set() plt.style.use('seaborn') sns.mpl.rcParams['axes.titlesize'] = 20 sns.mpl.rcParams['axes.labelsize'] = 16 # + tags=[] features = ['precio', 'superficie', 'cochera', 'pileta', 'parrilla'] # - fig, ax = plt.subplots(figsize=(20, 8)) sns.heatmap(data[features].corr(), annot=True, vmin=-1, vmax=1, cmap='Oranges', ax=ax) ax.set(title='Relación de características observadas') plt.show() fig, ax = plt.subplots(figsize=(16, 8)) sns.regplot(data=data, x='superficie', y='precio', ci=None, scatter_kws={'color':'r', 's':4}, ax=ax) ax.set(title='Precio según Superficie Cubierta', ylabel='Precio en Miles de dólares', xlabel='Superficie Cubierta') plt.show() # + [markdown] tags=[] # --- # #### Variables calculadas # - data['logprecio'] = np.log(data['precio']) fig, ax = plt.subplots(figsize=(16, 8)) sns.regplot(data=data, x='superficie', y='logprecio', ci=None, scatter_kws={'color':'r', 's':4}, ax=ax) ax.set(title='Precio según Superficie Cubierta', ylabel='Logaritmo del Precio en Miles de dólares', xlabel='Superficie Cubierta') plt.show() # --- # # # + def train_LinearRegression(X, y) : u""" Performs Ordinary Least Squares linear regression one from Scikit-Learn linear models and two from statsmodels Args: * X array of array of features * y array of target values """ Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=1) model = LinearRegression(fit_intercept=True) model.fit(Xtrain, ytrain) ypred = model.predict(Xtest) df = pd.DataFrame() df = pd.concat([df, pd.DataFrame(columns=['Features'], data=[ str(list(X.columns)) ] )], axis=1) df = pd.concat([df, pd.DataFrame(columns=['MAE'], data=[mean_absolute_error(ytest, ypred).round(3)] )], axis=1) df = pd.concat([df, pd.DataFrame(columns=['sqrt MSE'], data=[np.sqrt(mean_squared_error(ytest, ypred)).round(3)] )], axis=1) result_train = sm.OLS(ytrain, sm.add_constant(Xtrain)).fit() r2a_train = 
result_train.rsquared_adj.round(3) result_test = sm.OLS(ytest, sm.add_constant(Xtest)).fit() r2a_test = result_test.rsquared_adj.round(3) df = pd.concat([df, pd.DataFrame(columns=['R2 adj train'], data=[r2a_train] )], axis=1) df = pd.concat([df, pd.DataFrame(columns=['R2 adj test'], data=[r2a_test] )], axis=1) return model, df # - # --- # ### Train models # pd.options.display.max_colwidth = 80 # + y = data['logprecio'] metrics = pd.DataFrame() data['s2'] = data['superficie'] ** 2 # + X = data[['superficie']] model, metric = train_LinearRegression(X, y) metrics = metrics.append(metric) # + X = data[['superficie', 's2']] model, metric = train_LinearRegression(X, y) metrics = metrics.append(metric) # + X = data[['superficie', 's2','cochera']] model, metric = train_LinearRegression(X, y) metrics = metrics.append(metric) # + X = data[['superficie', 's2','pileta']] model, metric = train_LinearRegression(X, y) metrics = metrics.append(metric) # + X = data[['superficie', 's2','parrilla']] model, metric = train_LinearRegression(X, y) metrics = metrics.append(metric) # + X = data[['superficie', 's2','cochera', 'pileta']] model, metric = train_LinearRegression(X, y) metrics = metrics.append(metric) # - # --- display(metrics.head(20)) fig, ax = plt.subplots(figsize=(12, 4)) sns.barplot(x=metrics['sqrt MSE'], y=metrics['Features'], palette='coolwarm', ax=ax) ax.set(title='Evaluación de Métricas', ylabel='Características', xlabel='Raíz Cuadrada de la Media de Errores al Cuadrado') plt.show() fig, ax = plt.subplots(figsize=(12, 4)) sns.barplot(x=metrics['R2 adj test'], y=metrics['Features'], palette='coolwarm', ax=ax) ax.set(title='Evaluación de Métricas', ylabel='Características', xlabel='R^2 Ajustado (evaluación)') plt.show() # departamento de 1 m2 cubierto sin cochera ni pileta import math my_X = [[70, 70*70, 0, 0]] my_y = model.predict(my_X)[0] display(my_y, math.exp(my_y)) # --- # ### Gauss-Markov # # + def gauss_markov_test(Xtest, ytest): u""" funcion para visualizar e identificar supuestos de linealidad sobre la regression lineal Args: * Xtest - bserverd features * ytest - observed values """ model = sm.OLS(ytest, sm.add_constant(Xtest)).fit() display(model.summary()) ypred = model.predict() resid = model.resid rstud = model.get_influence().resid_studentized_internal rsqrt = np.sqrt(np.abs(rstud)) plt.figure(figsize=(16, 6)) sns.regplot(x = ypred, y = ytest, lowess = True, line_kws = {'color': 'red'}) plt.title('Linealidad de Valores Observados', fontdict = {'fontsize': 18}) plt.xlabel('Valores Predecidos', fontdict = {'fontsize': 14}) plt.ylabel('Valores Observados', fontdict = {'fontsize': 14}) plt.show() plt.figure(figsize=(16, 6)) sns.regplot(x = ypred, y = resid, lowess = True, line_kws = {'color': 'red'}) plt.title('Linealidad de Residuos', fontdict = {'fontsize': 18}) plt.xlabel('Valores Predecidos', fontdict = {'fontsize': 14}) plt.ylabel('Residuos', fontdict = {'fontsize': 14}) plt.show() plt.figure(figsize=(16, 8)) sns.regplot(x = ypred, y = rsqrt, lowess = True, line_kws = {'color': 'red'}) plt.title('Scale Location', fontdict = {'fontsize': 18}) plt.xlabel('Valores Predecidos', fontdict = {'fontsize': 14}) plt.ylabel('Residuos Normalizados', fontdict = {'fontsize': 14}) plt.show() sns.mpl.rcParams['figure.figsize'] = (16, 8) fig, ax = plt.subplots(2) sm.graphics.tsa.plot_acf(x = resid, ax = ax[0], lags = 40 , alpha = 0.05, title = '') ax[0].set_title('Correlación de características', fontdict = {'fontsize': 18}) plt.subplots_adjust(top = 1.5, wspace = 2) 
sm.ProbPlot(model.resid).qqplot(ax = ax[1], line = 's') ax[1].set_title('Normalidad de los Residuos', fontdict = {'fontsize': 18}) ax[1].set_xlabel("Valores Teóricos", fontsize = 14) ax[1].set_ylabel("Valores Reales", fontsize = 14) plt.show() return X = data[['superficie', 's2', 'cochera', 'pileta']] y = data['logprecio'] gauss_markov_test(X, y) # - # --- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ### PIL from Pillow # PIL stands for Python Imaging Library # + ## Import Module from PIL import Image ## Execption Hanlding of file import try: img = Image.open("data/barr.png") except IOError: pass # - # #### Retrieve the Image Size width, height = img.size print(width, height) # #### Rotating an Image # + ## Rotate the Image img_rotated = img.rotate(180) ## Save the rotated Image img_rotated.save("data/rotated_barr.png") # - # #### Transposing an Image # + ## Transpose transposed_img = img.transpose(Image.FLIP_LEFT_RIGHT) ## Save Transposed Image transposed_img.save("data/transposed_barr.png") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Inverse perspective mapping # ## Setting up Colab colab_nb = 'google.colab' in str(get_ipython()) if colab_nb: from google.colab import drive drive.mount('/content/drive') if colab_nb: # %cd /content/drive/My Drive/aad/code/tests/lane_detection # ## Exercise # Solve the TODO items in `exercises/lane_detection/camera_geometry.py` which are labeled as **"TODO step 2"**. # # The cells below will help you check if your implementation is correct. You might want to read them before you start with your implementation. # ### Unit test # execute this cell to run unit tests on your implementation of step 2 # %cd ../../../ # !python -m code.tests.lane_detection.camera_geometry_unit_test 2 # %cd - # ### Test by visual inspection # When you change the boolean below to `True`, your code will be run. Otherwise the sample solution will be run. The images that the code generates should be the same for your code and the sample solution. run_student_code = False # %load_ext autoreload # %autoreload 2 import sys from pathlib import Path sys.path.append(str(Path('../../'))) if run_student_code: from exercises.lane_detection import camera_geometry else: from solutions.lane_detection import camera_geometry import numpy as np import matplotlib.pyplot as plt import cv2 # First we construct the pixel coordinates $(u,v)$ for the left lane boundary, in the same way that we did it in the chapter on image formation: # + image_fn = str(Path("../../../data/Town04_Clear_Noon_09_09_2020_14_57_22_frame_625_validation_set.png").absolute()) image = cv2.imread(image_fn) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) boundary_fn = image_fn.replace(".png", "_boundary.txt") boundary_gt = np.loadtxt(boundary_fn) trafo_fn = image_fn.replace(".png", "_trafo.txt") trafo_world_to_cam = np.loadtxt(trafo_fn) cg = camera_geometry.CameraGeometry() K = cg.intrinsic_matrix left_boundary_3d_gt_world = boundary_gt[:,0:3] uv = camera_geometry.project_polyline(left_boundary_3d_gt_world, trafo_world_to_cam, K) u,v = uv[:,0], uv[:,1] plt.plot(u,v) plt.imshow(image); # - # Now we have image coordinates $(u,v)$ in our numpy array `uv`. 
Let us try to reconstruct the 3d coordinates using equation # $$ # \begin{pmatrix} X_c \\ Y_c \\Z_c \end{pmatrix} = \frac{h}{ \mathbf{n_c}^T \mathbf{K}^{-1} (u,v,1)^T} \mathbf{K}^{-1} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} # $$ # The relevant code is implemented in camera_geometry.py in the function `uv_to_roadXYZ_camframe()`. # Reconstruct the left boundary starting from the known u,v reconstructed_lb_3d_cam = [] for u,v in uv: xyz = cg.uv_to_roadXYZ_camframe(u,v) reconstructed_lb_3d_cam.append(xyz) reconstructed_lb_3d_cam = np.array(reconstructed_lb_3d_cam) # + # Map reconstructed left boundary into world reference frame def map_between_frames(points, trafo_matrix): x,y,z = points[:,0], points[:,1], points[:,2] homvec = np.stack((x,y,z,np.ones_like(x))) return (trafo_matrix @ homvec).T trafo_cam_to_world = np.linalg.inv(trafo_world_to_cam) reconstructed_lb_3d_world = map_between_frames(reconstructed_lb_3d_cam, trafo_cam_to_world) # - # plot both ground truth and reconstructed left boundary 3d in X-Y-plane plt.plot(left_boundary_3d_gt_world[:,0], left_boundary_3d_gt_world[:,1], label="ground truth") plt.plot(reconstructed_lb_3d_world[:,0], reconstructed_lb_3d_world[:,1], ls = "--", label="reconstructed") plt.axis("equal") plt.legend(); # You should see that the lines overlap. Finally, we can also do this comparison in the road frame instead of the world frame. # + # compare ground truth and reconstructed boundary in road frame trafo_world_to_road = cg.trafo_cam_to_road @ trafo_world_to_cam left_boundary_3d_gt_road = map_between_frames(left_boundary_3d_gt_world, trafo_world_to_road) reconstructed_lb_3d_road = map_between_frames(reconstructed_lb_3d_cam, cg.trafo_cam_to_road) # plot both ground truth and reconstructed left boundary 3d in Z-(-X)-plane (which is X-Y in road iso 8855) plt.plot(left_boundary_3d_gt_road[:,2], -left_boundary_3d_gt_road[:,0], label="ground truth") plt.plot(reconstructed_lb_3d_road[:,2], -reconstructed_lb_3d_road[:,0], ls = "--", label="reconstructed") plt.axis("equal") plt.legend(); # - # You should see that the lines overlap. 
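# As a side note, the back-projection formula above fits in a few lines of NumPy. The sketch below is only an illustration under explicit assumptions (a known camera height, the road normal expressed in camera coordinates, and the intrinsic matrix); the names are illustrative and it is not the code from `camera_geometry.py`.

# +
def uv_to_xyz_cam_sketch(u, v, K_mat, normal_cam, h_cam):
    """Back-project a road pixel (u, v) to 3d camera coordinates.

    Implements X_c = h / (n_c^T K^{-1} (u, v, 1)^T) * K^{-1} (u, v, 1)^T,
    i.e. the intersection of the viewing ray through (u, v) with the road plane.
    """
    K_inv = np.linalg.inv(K_mat)
    ray = K_inv @ np.array([u, v, 1.0])   # viewing ray direction K^{-1} (u, v, 1)^T
    scale = h_cam / (normal_cam @ ray)    # scale factor to reach the road plane
    return scale * ray
# -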
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np # %matplotlib inline import matplotlib.pyplot as plt import cv2 import tensorflow as tf import keras from keras.models import Sequential, load_model from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout from keras.losses import categorical_crossentropy from keras.optimizers import adam, sgd from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import ModelCheckpoint from PIL import Image # - train_path = '../DATASET/TRAIN' test_path = '../DATASET/TEST' IMG_BREDTH = 30 IMG_HEIGHT = 60 num_classes = 2 # + train_batch = ImageDataGenerator(featurewise_center=False, samplewise_center=False, featurewise_std_normalization=False, samplewise_std_normalization=False, zca_whitening=False, rotation_range=45, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True, vertical_flip=False).flow_from_directory(train_path, target_size=(IMG_HEIGHT, IMG_BREDTH), classes=['O', 'R'], batch_size=100) test_batch = ImageDataGenerator().flow_from_directory(test_path, target_size=(IMG_HEIGHT, IMG_BREDTH), classes=['O', 'R'], batch_size=100) # + def cnn_model(): model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(IMG_HEIGHT,IMG_BREDTH,3))) model.add(Conv2D(32, kernel_size=(3, 3), activation='relu')) model.add(Conv2D(32, kernel_size=(3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) model.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) model.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(512, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.summary() return model def use_model(path): model = load_model('best_waste_classifier.h5') pic = plt.imread(path) pic = cv2.resize(pic, (IMG_BREDTH, IMG_HEIGHT)) pic = np.expand_dims(pic, axis=0) classes = model.predict_classes(pic) # code using PIL # model = load_model('best_waste_classifier.h5') # pic1 = plt.imread(path) # pic = Image.open(path).resize((IMG_BREDTH, IMG_HEIGHT)) # plt.imshow(pic1) # if model.predict_classes(np.expand_dims(pic, axis=0)) == 0: # classes = 'ORGANIC' # elif model.predict_classes(np.expand_dims(pic, axis=0)) == 1: # classes = 'RECYCLABLE' return classes # - model = cnn_model() checkpoint = ModelCheckpoint('best_waste_classifier.h5', monitor='val_loss', verbose=0, save_best_only=True, mode='auto') model.compile(loss='categorical_crossentropy', optimizer=adam(lr=1.0e-4), metrics=['accuracy']) # + # run code to train the neural network # model = model.fit_generator(train_batch, # validation_data=test_batch, # epochs=100, # verbose=1, # callbacks=[checkpoint]) # - print(use_model('../../../../../Downloads/185284489.jpg')) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import json import os ## initial testing how json file works json_string = open("../audio_features_good_data/NLX-1499781672246905747.json") # 
print("--------",json_string.read()) parsed_json = json.load(json_string) print(parsed_json.keys()) print(parsed_json['features']['audio'].keys()) print('length of spectral_centroid ', len(parsed_json['features']['audio']['spectral_centroid'])) print('length of ZCR ',len(parsed_json['features']['audio']['ZCR'])) # - sample_list = [] for json_file in os.listdir('../audio_features_good_data/'): json_string = open("../audio_features_good_data/" + json_file) parsed_json = json.load(json_string) sample_list.append(parsed_json) #creating an aggregate sample dictionary, with sample id being the key # mapped to a dictionary, in which timepoint is key, and rest of data # are the value sample_ag_list = {} for sample in sample_list: if sample["sampleID"] in ["","Au","A","Augus"]: sample['sampleID'] = '79' if sample["sampleID"] in ["dcorprew"]: sample['sampleID'] = '14' if sample["sampleID"] in ["lgr2"]: sample['sampleID'] = '53' if sample["sampleID"] in ["nr"]: sample['sampleID'] = '54' if sample['sampleID'] not in sample_ag_list.keys(): sample_ag_list[sample["sampleID"]] = {} sample_ag_list[sample["sampleID"]][sample["howlong"]] = sample print(sample_ag_list['32']['0'].keys()) print(sample_ag_list['32']['0']['features']['audio'].keys()) print(len(sample_ag_list['32']['0']['features']['audio']['chroma_vector'][1])) # + dif_list_15 = [] dif_list_30 = [] dif_list_60 = [] dif_list_90 = [] dif_list_120 = [] # print len(sample_ag_list) # print sample_ag_list.keys() # for value in sample_ag_list.values(): # print value.keys() for sampleID in sample_ag_list.keys(): if '0' in sample_ag_list[sampleID].keys() and '15' in sample_ag_list[sampleID].keys(): dif_list_15.append([sample_ag_list[sampleID]['0'],sample_ag_list[sampleID]['15']]) if '0' in sample_ag_list[sampleID].keys() and '30' in sample_ag_list[sampleID].keys(): dif_list_30.append([sample_ag_list[sampleID]['0'],sample_ag_list[sampleID]['30']]) if '0' in sample_ag_list[sampleID].keys() and '60' in sample_ag_list[sampleID].keys(): dif_list_60.append([sample_ag_list[sampleID]['0'],sample_ag_list[sampleID]['60']]) if '0' in sample_ag_list[sampleID].keys() and '90' in sample_ag_list[sampleID].keys(): dif_list_90.append([sample_ag_list[sampleID]['0'],sample_ag_list[sampleID]['90']]) if '0' in sample_ag_list[sampleID].keys() and '120' in sample_ag_list[sampleID].keys(): dif_list_120.append([sample_ag_list[sampleID]['0'],sample_ag_list[sampleID]['120']]) print (len(dif_list_15),len(dif_list_30),len(dif_list_60),len(dif_list_90),len(dif_list_120)) # + from numpy import * for pair in dif_list_120: ZCR1 = array(pair[0]['features']['audio']['ZCR']) ZCR2 = array(pair[1]['features']['audio']['ZCR']) print(mean(ZCR1), mean(ZCR2)) print(std(ZCR1), std(ZCR2)) # + import matplotlib.mlab as mlab import matplotlib.pyplot as plt import numpy as np def plot_hist(data, sample_index, field, num_bins): """ this function plots a histogram of single time point voice feature sample on a range INPUT: data: total data sample organized in dictionary format with first layer being sample_id, second layer being time_point sample_index: index of list of samples to be ploted, in form of list of tuples (sample_id, timepoint) field: string value of the voice field/feature to be ploted on historgram num_bins: the number of bins in the histogram OUTPUT: plotting of histogram """ field_all = np.array([],dtype=np.float32) for idx in sample_index: field_all = np.append(field_all, data[str(idx[0])][str(idx[1])]['features']['audio'][field]) n, bins, patches = plt.hist(field_all, num_bins, 
normed=1) plt.xlabel('Field value') plt.ylabel('Probability') plt.title('Histogram of '+field+': mean='+str(np.around(np.mean(field_all),decimals=3))+', std='+str(np.around(np.std(field_all),decimals=3))) plt.show() plot_hist(sample_ag_list, [(32,0)], 'ZCR', 20) plt.figure(2) plot_hist(sample_ag_list, [(32,15)], 'ZCR', 20) plt.figure(3) plot_hist(sample_ag_list, [(32,30)], 'ZCR', 20) plt.figure(4) plot_hist(sample_ag_list, [(32,60)], 'ZCR', 20) # + def plot_time_multi_sample(data, sample_index, field, summary_method): """ this function plots the field across different time points among different samples INPUT: data: total data sample organized in dictionary format with first layer being sample_id, second layer being time_point sample_index: index of list of samples to be ploted, in form of list sample_id field: string value of the voice field/feature to be ploted on historgram summary_method: way to summarize 1200 numbers into 1 number, option = mean, std OUTPUT: display line graph of field across time """ for idx in sample_index: sample = data[str(idx)] summary_values = [] time_values = [] for sample_time in sample.keys(): if summary_method == 'mean': value = np.mean(sample[sample_time]['features']['audio'][field]) elif summary_method == 'std': value = np.std(sample[sample_time]['features']['audio'][field]) summary_values.append(value) time_values.append(int(sample_time)) plt.plot(time_values, summary_values, label="sample "+str(idx)) plt.legend() plt.xlabel('Time(minutes)') plt.ylabel('Field value') plt.title('Progression of '+field+' over time') plt.show() plot_time_multi_sample(sample_ag_list, [32, 79], 'ZCR', 'mean') plot_time_multi_sample(sample_ag_list, [32, 79], 'ZCR', 'std') # + def plot_time_multi_feature(data, sample_index, field_list, summary_method): """ this function plots different field across different time points on one sample INPUT: data: total data sample organized in dictionary format with first layer being sample_id, second layer being time_point sample_index: sample_id of the sample in interest field_list: list of string value of the voice field/feature to be ploted on historgram summary_method: way to summarize 1200 numbers into 1 number, option = mean, std OUTPUT: display line graph of field across time """ sample = data[str(sample_index)] for field in field_list: summary_values = [] time_values = [] for sample_time in sample.keys(): if summary_method == 'mean': value = np.mean(sample[sample_time]['features']['audio'][field]) elif summary_method == 'std': value = np.std(sample[sample_time]['features']['audio'][field]) summary_values.append(value) time_values.append(int(sample_time)) plt.plot(time_values, summary_values, label=field) plt.legend() plt.xlabel('time(minutes)') plt.ylabel('field value') plt.title('Progression of '+field+' over time') plt.show() plot_time_multi_feature(sample_ag_list, 32, ['ZCR','spectral_centroid','energy'], 'mean') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 1A.algo - Tri plus rapide que prévu # # Dans le cas général, le coût d'un algorithme de tri est en $O(n \ln n)$. Mais il existe des cas particuliers pour lesquels on peut faire plus court. Par exemple, on suppose que l'ensemble à trier contient plein de fois le même élément. 
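# (As a compact preview of the idea developed below: instead of sorting the full array, one can sort its frequency table and expand it back. The sketch uses `collections.Counter`; the rest of the notebook implements the same histogram idea by hand.)

# +
# Compact illustration with collections.Counter
from collections import Counter
import random

small = [random.randint(0, 9) for _ in range(30)]
counts = Counter(small)                                        # value -> number of occurrences
sorted_small = [v for v in sorted(counts) for _ in range(counts[v])]
assert sorted_small == sorted(small)
# -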
from jyquickhelper import add_notebook_menu add_notebook_menu() # %matplotlib inline # ## trier un plus petit ensemble import random ens = [random.randint(0,99) for i in range(10000)] # On peut calculer la distribution de ces éléments. # + def histogram(ens): hist = {} for e in ens: hist[e] = hist.get(e, 0) + 1 return hist hist = histogram(ens) list(hist.items())[:5] # - # Plutôt que de trier le tableau initial, on peut trier l'histogramme qui contient moins d'élément. sorted_hist = list(hist.items()) sorted_hist.sort() # Puis on recontruit le tableau initial mais trié : # + def tableau(sorted_hist): res = [] for k, v in sorted_hist: for i in range(v): res.append(k) return res sorted_ens = tableau(sorted_hist) sorted_ens[:5] # - # On crée une fonction qui assemble toutes les opérations. Le coût du nivrau tri est en $O(d \ln d + n)$ où $d$ est le nombre d'éléments distincts de l'ensemble initial. # + def sort_with_hist(ens): hist = histogram(ens) sorted_hist = list(hist.items()) sorted_hist.sort() return tableau(sorted_hist) from random import shuffle shuffle(ens) # %timeit sort_with_hist(ens) # - def sort_with_nohist(ens): return list(sorted(ens)) shuffle(ens) # %timeit sort_with_nohist(ens) # Les temps d'exécution ne sont pas très probants car la fonction `sort` est immplémentée en C et qu'elle utilise l'algorithme [timsort](https://en.wikipedia.org/wiki/Timsort). Cet algorithme est un algorithme adaptatif tel que [smoothsort](https://en.wikipedia.org/wiki/Smoothsort). Le coût varie en fonction des données à trier. Il identifie d'abord les séquences déjà triées, trie les autres parties et fusionne l'ensemble. Trier un tableau déjà trié revient à détecter qu'il est déjà trié. Le coût est alors linéaire $O(n)$. Cela explique le commentaire *The slowest run took 19.47 times longer than the fastest.* ci-dessous où le premier tri est beaucoup plus long que les suivant qui s'appliquent à un tableau déjà trié. Quoiqu'il en soit, il n'est pas facile de comparer les deux implémentations en terme de temps. def sort_with_nohist_nocopy(ens): ens.sort() return ens shuffle(ens) # %timeit sort_with_nohist_nocopy(ens) # ## évolution en fonction de n # # Pour réussir à valider l'idée de départ. On regarde l'évolution des deux algorithmes en fonction du nombre d'observations. # + def tableaux_aleatoires(ns, d): for n in ns: yield [random.randint(0,d-1) for i in range(n)] import pandas import time def mesure(enss, fonc): res = [] for ens in enss: cl = time.perf_counter() fonc(ens) diff = time.perf_counter() - cl res.append(dict(n=len(ens), time=diff)) return pandas.DataFrame(res) df = mesure(tableaux_aleatoires(range(100, 30000, 100), 100), sort_with_nohist) df.plot(x="n", y="time") # - df = mesure(tableaux_aleatoires(range(100, 30000, 100), 100), sort_with_hist) df.plot(x="n", y="time") # L'algorithme de tri de Python est plutôt efficace puisque son coût paraît linéaire en apparence. df = mesure(tableaux_aleatoires(range(100, 30000, 200), int(1e10)), sort_with_nohist) df.plot(x="n", y="time") # On ajoute un logarithme. from math import log df["nlnn"] = df["n"] * df["n"].apply(log) * 4.6e-8 df.plot(x="n", y=["time", "nlnn"]) # Il faut grossier le trait. 
from math import exp list(map(int, map(exp, range(5, 14)))) df100 = mesure(tableaux_aleatoires(map(int, map(exp, range(5, 14))), 100), sort_with_nohist) dfM = mesure(tableaux_aleatoires(map(int, map(exp, range(5, 14))), 1e9), sort_with_nohist) df = df100.copy() df.columns = ["n", "d=100"] df["d=1e9"] = dfM["time"] df.plot(x="n", y=["d=100", "d=1e9"]) # L'algorithme de tri [timsort](https://en.wikipedia.org/wiki/Timsort) est optimisé pour le cas où le nombre de valeurs distinctes est faible par rapport à la taille du tableau à trier. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # All the IPython Notebooks in **Python Functions** lecture series by Dr. are available @ **[GitHub](https://github.com/milaan9/04_Python_Functions)** # # # Python Global, Local and Nonlocal variables # # In this class, you’ll learn about Python Global variables, Local variables, Nonlocal variables and where to use them. # # When we define a function with variables, then those variables scope is limited to that function. In Python, the scope of a variable is the portion of a program where the variable is declared. Parameters and variables defined inside a function are not visible from outside the function. Hence, it is called the variable’s local scope. # # >**Note:** The inner function does have access to the outer function’s local scope. # # When we are executing a function, the life of the variables is up to running time. Once we return from the function, those variables get destroyed. So function does no need to remember the value of a variable from its previous call. # ## Global Variables # # In Python, a variable declared outside of the function or in global scope is known as a global variable. This means that a global variable can be accessed inside or outside of the function. # # For example: # + # Example 1: Create a Global Variable global_var = 999 def fun1(): print("Value in 1st function:", global_var) def fun2(): print("Value in 2nd function:", global_var) fun1() fun2() # + # Example 2: x = "global" def fun(): print("x inside:", x) fun() print("x outside:", x) # - # In the above code, we created **`x`** as a global variable and defined a **`fun()`** to print the global variable **`x`**. Finally, we call the **`fun()`** which will print the value of **`x`**. # # What if you want to change the value of **`x`** inside a function? # + # Example 3: x = "global" def fun(): x = x * 2 print(x) fun() # - # **Explanation:** # # The output shows an error because Python treats **`x`** as a local variable and **`x`** is also not defined inside **`fun()`**. # # To make this work, we use the **`global`** keyword. Visit **[Python Global Keyword](https://github.com/milaan9/04_Python_Functions/blob/main/003_Python_Function_global_Keywords.ipynb)** to learn more. 
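# As a quick illustration of that fix (kept separate from the numbered examples that follow), declaring **`x`** as **`global`** inside the function lets it rebind the module-level variable:

# +
# Illustration: the global keyword removes the error seen in Example 3
x = "global"

def fun_fixed():
    global x          # x now refers to the global variable
    x = x * 2
    print(x)

fun_fixed()           # prints "globalglobal"
# -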
# + # Example 4: global_lang = 'DataScience' def var_scope_test(): local_lang = 'Python' print(local_lang) var_scope_test() # Output 'Python' # outside of function print(global_lang) # Output 'DataScience' print(local_lang) # NameError: name 'local_lang' is not defined # + # Example 5: a=90 # 'a' is a variable defined outside of function, i.e., Global variable def print_data(): a=6 # 'a' is a variable defined inside of function, i.e., local variable b=30 print("(a,b):(",a,",",b,")") print_data() #(a,b):( 5 , 10 ) print("Global a :",a) #Global x : 50 print("Local b : ",b) #b is local veriable - throw NameError # - # ## Local Variables # # A variable declared inside the function's body or in the local scope is known as a local variable. # # If we try to access the local variable from the outside of the function, we will get the error as **`NameError`**. # + # Example 1: Accessing local variable outside the scope def fun(): y = "local" fun() print(y) # - # The output shows an error because we are trying to access a local variable **`y`** in a global scope whereas the local variable only works inside **`fun()`** or local scope. # + # Example 2: Create a Local Variable # Normally, we declare a variable inside the function to create a local variable. def fun(): y = "local" print(y) fun() # - # Let's take a look at the cell **In [2]: # Example 3:** where **`x`** was a global variable and we wanted to modify **`x`** inside **`fun()`**. # + # Exercise 3: def fun1(): loc_var = 999 # local variable print("Value is :", loc_var) def fun2(): print("Value is :", loc_var) fun1() fun2() # - # ## Global and local variables # # Here, we will show how to use global variables and local variables in the same code. # + # Example 1: Using Global and Local variables in the same code x = "global" def fun(): global x y = "local" x = x * 2 print(x) print(y) fun() # - # **Explanation**: # # In the above code, we declare **`x`** as a global and **`y`** as a local variable in the **`fun()`**. Then, we use multiplication operator **`*`** to modify the global variable **`x`** and we print both **`x`** and **`y`**. # # After calling the **`fun()`**, the value of **`x`** becomes **`global global`** because we used the **`x * 2`** to print two times **`global`**. After that, we print the value of local variable **`y`** i.e **`local`**. # + # Example 2: Global variable and Local variable with same name x = 9 def fun(): x = 19 print("local x:", x) fun() print("global x:", x) # - # **Explanation**: # # In the above code, we used the same name **`x`** for both global variable and local variable. We get a different result when we print the same variable because the variable is declared in both scopes, i.e. the local scope inside **`fun()`** and global scope outside **`fun()`**. # # When we print the variable inside **`fun()`** it outputs **`local x: 19`**. This is called the local scope of the variable. # # Similarly, when we print the variable outside the **`fun()`**, it outputs **`global x: 9`**. This is called the global scope of the variable. # + # Exercise 3: def my_func(): # for this Function I am not writing any argument in parenthesis '()' x = 10 print("Value inside the body of function:",x) x = 20 # first, this line to execute my_func() # second, the body of function will execute print("Value outside of function:",x) # finally, this line will execute # - # **Explanation:** # # Here, we can see that the value of **`x`** is 20 initially. 
Even though the function **`my_func()`** changed the value of **`x`** to 10, it did not affect the value outside the function. # # This is because the variable **`x`** inside the function is different (local to the function) from the one outside. Although they have the same names, they are two different variables with different scopes. # # On the other hand, variables outside of the function are visible from inside. They have a global scope. # # We can read these values from inside the function but cannot change (write) them. In order to modify the value of variables outside the function, they must be declared as global variables using the keyword global. # ## Nonlocal Variables # # Nonlocal variables are used in nested functions whose local scope is not defined. This means that the variable can be neither in the local nor the global scope. # # Let's see an example of how a global variable is created in Python. # # We use **`nonlocal`** keywords to create nonlocal variables. # + # Example 1: Create a nonlocal variable x1 = "global" # Global variable def outer_fun(): # main function x1 = "local" # 'x' is local variable for main function and it is nested variable for nested function print("variable type for Outer function:", x1) def inner_fun(): # nested fucntion nonlocal x1 # using local variable 'x' in nested function as nonloval variable x1 = "nonlocal" # changing the value of my 'x' print("variable type for Inner function:", x1) # print 'nonlocal' inner_fun() #print("outer:", x1) # print 'nonlocal' outer_fun() print("Variable type of x1:", x1) # - # In the above code, there is a nested **`inner()`** function. We use nonlocal keywords to create a **`nonlocal`** variable. The **`inner()`** function is defined in the scope of another function **`outer()`**. # # > **Note**: If we change the value of a nonlocal variable, the changes appear in the local variable. # + # Exercise 2: def outer_fun(): x = 999 def inner_fun(): # local variable now acts as global variable nonlocal x x = 900 print("value of x inside inner function is:", x) inner_fun() print("value of x inside outer function is:", x) outer_fun() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="5fCEDCU_qrC0" # #Data Science for Customer Insights - Python Introduction Course # # Welcome to Data Science for Customer Insights. In the following tasks, you will learn how to do Data Science with Python. In the end, you will be able to: # # - do basic coding and use Python loops, lists, functions and more # - access and apply machine learning algorithms to data sets # - understand fundamental machine learning concepts # # Please go trough all tasks and answer the questions carefully. **This is task-based learning - you will have to Google a lot**. You can use any credible source material, you like, e.g.: # # - stackoverflow # - SoloLearn # - DataCamp # - udemy # # Again - you will potentially have to Google everything, this is normal! # # Also, every time you see those signs "**...**" you have to change them to generate the correct code. 
# # + [markdown] id="UQWo_jfy4H9s" # **Before you start: Please provide your information** # # Name: # # Matr.: # + [markdown] id="GJBs_flRovLc" # ##First Steps # # + [markdown] id="p4NFErGadias" # **Print Function** # # By using the print statement, we're able to output text. # # *Please run the following code*: # + id="gJr_9dXGpJ05" print ('Hello World') # + [markdown] id="L3qFPIIqrc77" # # + [markdown] id="2fhs6GZ4qFMx" # Basic Operations # # With Python, you can easily do some calculations. # # *Tap around to perform an addition with a result of 5:* # + id="c6i-78HEXGNu" 3 # + [markdown] id="0DsykyJTXNao" # *Try to print a substraction with 7 as a result:* # # # + id="B3fhbNoMXLuf" 10 # + [markdown] id="8QlK460xXJqJ" # *Use parantheses to determine which operations are performed first - try to get 30:* # + id="-gE-Ez1qtyIA" 3*4+6 # + [markdown] id="bX2Xoej0WmJn" # *Try to divide by 2 to get a result of 2.00* # # ** Side Note:** we call 2.00 a float variable, while 2 is called an integer variable. Despite seemingly being the same, Python sees them as different datatypes which for example leads to differnt memory needs. # + id="4ty9qsqVWVq7" 4 # + [markdown] id="lSrWNr3MuFUS" # The minus sign symbolizes a negative number which has no effect on how to perform an operation with them. # # *Tap around and try to get a positive result of 48!* # + id="WgHrAlz6ZTB-" (-3+-1)*(12) # + [markdown] id="Wt_Y_ZWA9miH" # **Comments** # # You can comment your code, to explain something for later reference. # # *Please turn "This is just a comment" into a proper comment.* # + id="ssxN9tka9-2o" 4 + 4 This is just a comment # + [markdown] id="4i-XuhKW3owN" # **Storing Values** # # You can store values in variables. # # *Please complete the code to generate 42 as an output.* # + id="T29nLva63oG7" ... = 21 ... = ... * ... print(...) # + [markdown] id="Jhy8suwbZtde" # **ZeroDivisionError** # # Dividing by zero produces an error, as no answer can be calculated. # # *You don't believe that? Tap around and divide 3 by zero.* # + id="ghD_v1QJaWhM" # + [markdown] id="8bPzVa96a-zX" # **Other numerical Operations** # # You can also use Python for exponentiation or the quotient and remainder of a division. # # *Perform an exponentiation with a result of 32:* # + id="vHWTZFAsbnrU" 2 # + [markdown] id="e0KghhEMdLLP" # *Try to get the remainder of the division 1.25 by 0.5 (should be 0.25):* # + id="cKOpQ-2ebvmM" # + [markdown] id="OSqSFbg6jRw7" # ##Data Types # + [markdown] id="KA7WEmmTjbRl" # If Python handles data, it has to somehow turn it into 1s and 0s. Depending on the type of data, Python uses different internal representations. It is therefore very important, to always make sure your data has the right data type. # + [markdown] id="fxXZEWORXump" # **Integers** # # Integers are whole numbers like: # # - 5 # - 3 # # + [markdown] id="SpL66XSAV3PW" # **Floats** # # Numbers that aren't integers are represented as floats in Python ("a floating point number"). Some examples are: # - 0.8 # - -5.346 # # How to create a float? # By entering a number with a decimal point, or by using operations such as division on integers as well as operations on a float and an integer. # # *Please build a float dividing 12 by 4 and try to convert it back into an integer (should be 3 not 3.0):* # + id="pN81xZjUXB5Z" # create float print() # turn it into an integer print() # + [markdown] id="VprZLg-EY5s-" # **Strings** # # You have to use a string if you want to use text in Python. 
It's one of the basic things that store text like: # # - Python is fun # - I love Data Science # # How to create a string? By entering text between two single or double quotation marks. # # *Run the code to create a string:* # + id="98w-iUSGZ6ko" "Python is fun" # + [markdown] id="JX8ScvvleqQO" # *Try to print I love Data Science using single marks.* # + id="lt-l5lUJaBfd" # + [markdown] id="VeQ8nTgbbyur" # *What's wrong with the following string? Please correct it so that the type is 'str' and there is no error message!* # + id="zFxpt_tYaiPd" type('Lisa's brother likes good food.') # + [markdown] id="IgVEuFwsb2hE" # *Please change the following sentences to bring the second sentence in the second line:* # + id="_TdTOOA1cC7c" print ('Lisa likes good food. In the future, he wants to open a restaurant.') # + [markdown] id="We12AcbmdcPU" # **Booleans&Comparisons** # # You've already learned about integers, floats and strings. In Python, another type is the Boolean type. It has two values: True and False. By comparing values (using the equal operator ==), you can create Booleans. # # *Please define my_boolean as True:* # + id="S7bs7aEXgjML" my_boolean = print(my_boolean) # + [markdown] id="G-Uv7DJigzb9" # *Please compare two numbers and see if you get the "True" statement:* # + id="As8PeDsQnU3P" # + [markdown] id="61liw4eOhy2i" # *Is "hi" equal to "hello"? Try it out!* # + id="0srfVv0vnmYr" "hi" ... "hello" # + [markdown] id="6c-qfGoloEBC" # The not equal operator (!=) is another comparison operator. # # *Please run the code:* # + id="zFYUiLPWoLFX" 3!=3 # + [markdown] id="siEObqC2iNHa" # *Please generate a True statement using the operator:* # + id="owFd0uatoOFn" ... != ... # + [markdown] id="crYwcDotoYtw" # In Python, you can also use operators (> and <) to determine whether one number (float or integer) is smaller than or greater than another. # # *Tap around (change the ...) and generate one True as well as one False statement:* # + id="RV-FHVego3uL" 24 ... 16 # + id="KPY852q_o9rJ" 6 ... 6.0 # + [markdown] id="agDDzvQlrHxO" # ** Side Note: In the first lessons you've learned about the basic operations in Python. To be better prepared for the following steps, the table below provides a summary: ** # # # + [markdown] id="UdRyKR44dcNI" # ##Control Structures # # + [markdown] id="AX8KEqV7d2jR" # **if&else statement** # + [markdown] id="hdtFz5UXptLZ" # # # *Now it's your turn! Change the following code (...) to generate "else":* # + id="E6xzkR1ukfO0" if 2 + ... == 4 ... if 3 * ... == 9 ... print ("if") else: print ("else") # + [markdown] id="datsEgDB4iXQ" # ** Side Note:** Indentations are very important in Python, as they structure your code. # # # + [markdown] id="40mdnbNieJTh" # **Boolean Logic** # # Boolean Logic is used to make more complicated conditions for if statements that rely on more than one condition. The Boolean operators are **and, or and not**. # + [markdown] id="OdysNNSonWA0" # *Tap around to get a false statement:* # + id="7htGUSgym9KX" print ( 2 < 3 and 2 ... 6 ) # + [markdown] id="dHpbmcEBn_34" # *Change and correct the following code to print "Hi":* # + id="XlAjNuIqoKgs" age = ... money = 400 if age > 20 ... money > 50 ... ...("Hi"... # + [markdown] id="0JOau1ITeOBp" # **Lists** # # Lists are used to store multiple items. By using square brackets with commas separating items, you can create a list. # # *Try to print the first argument of the list ("Hello").* # + id="zjovDwhNlYFn" words = ["Hello", "world", "!"] print (words...) 
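# ** Side Note:** list indices start at 0, so the first element of a list is at position 0 and the third at position 2. A small illustration, separate from the exercises:

# +
fruits = ["apple", "banana", "cherry"]
print(fruits[0])    # first element -> apple
print(fruits[2])    # third element -> cherry
print(len(fruits))  # number of elements -> 3
# -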
# + [markdown] id="F8VgN4fgrdDZ" # *Change the code to create a list and print its 3rd element:* # + id="d5rWZJjOrgwC" the_list = ... 33, 66, 99 ... ...(the_list[...]) # + [markdown] id="UEmilTRFecFV" # **Loops** # # A for loop is used to repeat a block of code multiple times. It is common to use the for loop when the number of iterations is fixed (e.g. iterating over a fixed list of items in a shopping list), but there are other loops as well. # # *Change the code below to print all elements of your list:* # + id="hrSVVOz2Uzm9" your_list = ['a','b','c','d'] for ... ... ... : print (...) # + [markdown] id="QOEUQc1SVzqh" # *Please change the ... to correct the code so that the loop is broken after the loop reaches the 'c' in your list.* # + id="kYHdF1AVlZYy" for item ... your_list: print(item) if item ... ...: print ("Breaking") ... print("Finished") # + [markdown] id="hDqpOb_IXfK0" # What's wrong with the following code? # *Please find the correct code to count all the 't' letters in the string. ATTENTION, you have to correct multiple errors and may need to change pre-written code.* # + id="if61RplmXxGC" str = "testing for loops to count 't's" count = 0 for ... in str: if ... == 't' : count ... = 1 print (count) # + [markdown] id="XBlj6_t8efo8" # **Range** # # The range() function returns a sequence of numbers. By default, it starts from 0, increments by 1 and stops before the specified number. # # *The code below should generate a list containing all of the integers, up to 10 (not incl.). Please change the following code (...):* # + id="jeR4mJGUlad7" numbers = list(...(10)) print(...) # + [markdown] id="cLKi_bFvDN8K" # *Now change the code below to print every even number from 10 to 19 using range:* # + id="TjGtMu2hDOz6" numbers = list(...) print(...) # + [markdown] id="OwuxHmxllTwN" # ##Functions & Modules # + [markdown] id="EMtDBH2Xe6kR" # **Functions** # # In addition to using pre-defined functions like range() or print(), you can create your own functions by using the def statement. You have to define functions before they are called! This is especially useful, if you want to do similar operations again and again. # # *Try for yourself and build a function "quattro_hello" which prints "hello" four times.* # # + id="pHnpri3gHXV8" def ...: # build a proper loop to print your output four times print (...) # + [markdown] id="aDR8mvcz-q1e" # *Now call your function and see if it prints 'hello' three times:* # + id="Z8N9zB0V8jq7" quattro_hello() # + [markdown] id="mzUjE-zrHiu6" # It is also possible to define a function that takes an argument. # # *Please change the following code to build a function that prints a given text and adds a question mark:* # # # + id="3qJbedzsHyTE" def hello_function (): print (... + "?") hello_function("HI") hello_function("...") # + [markdown] id="K5e3PBnT9G0o" # ** Side Note:** Python can take any form of object as input to a function AND (almost) everything in Python is an object !! # # # # # # # # # + [markdown] id="V5kSE42tJNkw" # *And 2 arguments (develop a code to get a result of 16):* # + id="tNiEftMNJzDD" def print_sum(...,...): print(... + ...) print_sum(6,11) # + [markdown] id="oMgTXZFfK94l" # *Fill in the ... to define a function that takes two arguments and prints their multiplication:* # + id="8xBfautGKOid" ... print_mult(x,y)... print(x * ... ) ...(2,2) # + [markdown] id="-caDGUcZLO9j" # *Fill in the ... 
to define a function that prints whether the first given paramter is greater or equal than the second paramter - or not:* # + id="OixS1ZvgAPJk" def compare(x, y): if ...: print(...) else: print(...) print(compare(...,...)) # + [markdown] id="lks5vCIQgFBR" # **Modules & Libraries** # # Modules refer to a file containing Python functions. A file containing Python code, for example: example.py, is called a module, and its module name would be example. The basic way to use a mdule is to add import *some_module_name* at the top of your code, and then using *some_module_name*.var to access functions and values with the name var in the module. This allows you to access complex functions programmed by other people. # # # + [markdown] id="SV5FevBtl459" # *Please import a fitting module and generate 5 random numbers from the interval [0,10):* # + id="R1892agsAnxy" import ... for i in range(5): value = ... print(value) # + [markdown] id="pvTTfdOUm96f" # *Please import only the pi constant from the respective module and print pi:* # + id="Mfxt3G-Ym-wd" from ... import ... print(...) # + [markdown] id="wFCVUGx_Bqef" # # + [markdown] id="SpnHNXNvUVUY" # **Pip Install** # # pip is a package management program for Python packages from the Python Package Index (PyPI). It allows you to install and manage software packages written in Python. Basically, it lets you install different modules. # # Before you learn how to install and import packages, the first step would be to actually make sure you've the correct Python verison installed in your system. # # *Fill in the ... to check this and the version installed:* # + id="UhimRI0A1QJ1" # !python ... v # + [markdown] id="LZsAOsGNt6t2" # *Please correct the following code to install the scenedetect package:* # + id="sezVDvVCA9HX" import scenedetect # are you able to import the package? # + id="ovmkymv7unbc" ... scenedetect # + id="dzrXbT3YBGNV" import scenedetect # you should be able to import it now! # + [markdown] id="qs52gjw6uzY8" # Uninstalling/removing a package is very easy with pip. # # *Please uninstall scenedetect again:* # + id="1G_MJA3_vNuN" pip ... # + [markdown] id="-Rh3-Vt9Nev9" # ##Exceptions # + [markdown] id="1RR8qzFXgkJh" # **Exception and Exception Handling** # + [markdown] id="qQamrW33vP7o" # Exception is an event that occurs due to incorrect code or input. # # *Please generate a zero devision error:* # + id="Dr5Bfn-ev_FC" # + [markdown] id="7EcpGATAv_-m" # *Please create a NameError:* # + id="VDMdlsJcxItH" # + [markdown] id="CjSKE_T4xJO-" # *Please generate a TypeError:* # + id="WNVtpN4mxSPV" # + [markdown] id="4pz8WKMMxWZf" # *Please create a ValueError:* # + id="4PdQE07HCnBS" # + [markdown] id="RjaaEpTMyB-y" # To handle exceptions, and to execute further code despite occuring errors, you can use a try/except statement. # # *Please change the following code to print "an error occured - but the interpreter continued":* # + id="N7fSjbYFye3c" try: num1 = 7 num2 = 0 print(...) print("calculation done") except: print ("an error occured - but the interpreter continued") # + [markdown] id="SSSAZzFFD3dW" # # + [markdown] id="KJxnVSYyVPOt" # ##Data Science Modules # + [markdown] id="8pFuDtAcVWWE" # **Numpy** # # *Please fill in the ... to learn more about Numpy:* # # + [markdown] id="Q4KPTtl2OtfM" # NumPy is a program library for the programming language Python. NumPy extends Python with **...** for scientific computing and **...** calculations. The library enables efficient computing with vectors, **...** and multidimensional **...**. 
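# ** Side Note:** a tiny, self-contained illustration of the vectorized array computing described above:

# +
import numpy as np

arr = np.array([1.0, 2.0, 3.0, 4.0])
print(arr * 10)      # element-wise: [10. 20. 30. 40.], no explicit loop
print(arr.mean())    # 2.5
# -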
# + [markdown] id="A1yba1TJbiNI" # *Please import Numpy as np:* # + id="BENNc8B4T2rb" import ... as ... # + [markdown] id="rRYIaE6Tb0WF" # *Please define a list 'cvalues' with different temperatures:* # + id="2eHWb-Dyb9b5" ... = [20.1, 20.8, 21.9, 22.5, 22.7, 21.8, 21.3, 20.9, 20.1] # + [markdown] id="GB0td9p2w5Au" # *From our list 'cvalues' we now create a one-dimensional NumPy array, so please change the following code:* # + id="16eyb_-TcC1J" C = np. ... (cvalues) print (C, type(C)) # + [markdown] id="ti-xqhLVxUk9" # *Now let us assume that we need the values in degrees Fahrenheit instead of degrees Celsius. # This can be done very easily with a NumPy array. The solution of our problem are simple scalar operations. Let's do this:* # + id="XR28FF30cWOL" print(...) # + [markdown] id="qFIvnXHOyEZC" # *Please find out what the shape of 'C' is:* # + id="5qqmqIrvdYWP" # + [markdown] id="wL-SPKbVE9fD" # *Try to create a numpy array with the shape 4x2 (4 rows, 2 columns), containing only zeros* # + id="Ldf8EhxHE8v6" # + [markdown] id="w_wm9FcxVd8T" # **Pandas** # # pandas is a program library for the Python programming language that provides tools for data management and analysis. In particular, it contains data structures and operators for accessing tables and time series. # # # + [markdown] id="cMs0xplyHutE" # *Please import pandas as pd and check the version:* # + id="EvlvkfJvH8NK" import pandas as pd print(pd.__ ... __) # + [markdown] id="mRYZ8zxhIvMV" # *Create a pandas series from a list with strings, a numpy array with integers and a dictionary with the combination of your list and your numpy array:* # + id="pzGh2XV4oIAZ" # Inputs import pandas as pd import numpy as np mylist = ... myarr = ... mydict = ...(...(mylist, myarr)) # Solution ser1 = pd.Series(mylist) ser2 = pd.Series(myarr) ser3 = pd.Series(mydict) print(ser1.head()) print(ser2.head()) print(ser3.head()) # + [markdown] id="LJ3KXj28sUCS" # ** Side Note:** Note that a Pandas Series consists of an index (left column) and values (right column). This is important for accessing values. # + [markdown] id="4WXvLc8CJJ6O" # *Now create a 4x4 DataFrame with random integer values:* # # --- # # # + id="N8PIWzFpJoLI" # create a numpy 4x4 numpy matrix with random integer values myarr = (np. ... (4,4)*10). ... .astype(int) df = ...(myarr) print(df) # + [markdown] id="ppVXphb1Le4l" # Let's learn something about indexing and slicing. # # **.loc()** has multiple access methods like: # # - A single scalar label # - A list of labels # - A slice object # - A Boolean array # # **loc** takes two single/ list/ range operator separated by ','. The first one indicates the row and the second one indicates columns. # # *Please complete all the tasks:* # + id="JW58HwihvfSt" #import the pandas library as pd import pandas as pd import numpy as np # you may reuse myarr from above, otherwise please recreate a 4x4 numy array df = ...(myarr,index = ['a','b','c','d'], columns = ['A', 'B', 'C', 'D']) #select the value in row b, columnd C print(...) #select all values in row b print(...) #select all values in from row b to d, and column A to C print(...) # + [markdown] id="WiHop8xU_bLo" # *You can use boolean logic to slice dataframes as well. This is extremely handy in many situations. Please complete the tasks given in the comments. **Do not manually select the correct column (by just stating column X and row y)**:* # + id="wgmuagXd_bVw" # select all rows with a positive value in the 'A' column print (...) 
# select all columns with a positive value in the 'b' row print (...) # + [markdown] id="VrxtswA2Mupp" # Now we are working with **.iloc()**. Pandas provide various methods in order to get purely integer based indexing. Like python and numpy, these are 0-based indexing. # # The various access methods are as follows: # - An Integer # - A list of integers # - A range of values # # *Tap around and change the ... to complete the tasks in the comments:* # + id="GeSB-zAiMwGt" # you may reuse df, otherwise, please create a new 4 by 4 dataframe # print the first 3 columns using 'i' and the loop for i in range(3): print(...) # print every second row using range ...[range( ... ) ... ] # + [markdown] id="l5yZGuq6RDy4" # You can read existing tables with pandas, to work with them in Python. The most common table format is csv. # # *Please download the following file (https://drive.google.com/file/d/1rabckY6MNXi3kGEDJ2KIXGB1rsmdwbEr/view?usp=sharing), upload it to your Colab environment and change the path to read it:* # # + id="5Ow2FxtgQers" import pandas as pd df = pd.read_csv ('...') print(df) # + [markdown] id="7B-Izk3RMSk3" # *Let's change the Data a bit:* # + id="1nSbRxA1UetU" # Delete the 'fuel consumption' column df = # Delete all rows, where the price is not available df = # Let's only keep cars, with more than 100 PS df = df # Let's see if the table looks good # + [markdown] id="5cPeqmMgNs1r" # *Now, please save the dataframe as a new csv file* # + id="fS1ycn55NtFz" name = 'cars_new.csv' ... # + [markdown] id="mjutWdU7VgJF" # **Matplotlib** # # Matplotlib is a program library for the programming language Python, which allows to plot your data. # # # # # + [markdown] id="yR-VrJyXpQnx" # # *Please generate a positive linear function and plot it using matplotlib:* # + id="jvNbwu8_yEEe" import numpy as np import matplotlib.pyplot as plt a = np. ... # please create 10 evenly spaced numbes between 0 and 1 b = np. ... # please create 10 evenly spaced numbes between 0 and 10 plt.plot(a,b) plt.show() # + [markdown] id="DrJomQ5EREim" # *Now, try to plot a scatter plot, where you see each individual point* # + id="DsMTSObFQDNU" ...(a,b) plt.show() # + [markdown] id="pTAh3IpX6Fng" # *Now, we want to build a histogram:* # + id="yIlOVSBBR1td" # Let's start by creating a list of 100 random numbers between 0 and 90 import random random_numbers=[] for i in range(100): random_numbers. ... # Now let's plot a histogram with 10 bins import ... as plt plt. ... ( ... ) plt.show() # + [markdown] id="5F0JvOsjVogd" # **Sklearn** # # scikit-learn is a open source library for data analysis in Python based on other Python libraries: NumPy, SciPy and Matplotlib. It contains a number of implementations for various common machine learning algorithms. # # *Let's start by creating some data:* # + id="xTiKJ44odfuQ" import matplotlib.pyplot as plt import numpy as np # create 40 random float numbers X = 10* ... y = 3 * X + np.random.randn(40)*5 # Take a look at your data with a scatter plot ... plt.show() # + [markdown] id="rFRG_YiVh6OM" # In many cases, it can be useful or necessary to standardize your data, so that both your X and y have a mean of 0 and a standard deviation of 1. # # *Try to standardize X and y using sklearn methods. You should receive 2 True as a result:* # + id="GXwazIichb9i" from sklearn import ... import numpy as np X_scaled = ... y_scaled = ... 
print(X_scaled.mean().round(10) == 0 and y_scaled.mean().round(10) == 0) print(X_scaled.std().round(10) == 1 and y_scaled.std().round(10) == 1) # + [markdown] id="G2s_v9ZdjwVJ" # ** Side Note:** We may have to add a 'round(10)' here as Python works numerically - instead of getting an exact 0, you may sometimes get something like 2.2204460492503132e-17, which is practically 0, once we round to the tenth decimal point. # + [markdown] id="yW99dkVKkk3M" # *Let's make sure standardization did not change any proportions, by again producing a scatter plot with the scaled data:* # + id="dnbjxif3g1VK" ...(X_scaled, y_scaled) plt.show() # + [markdown] id="B4z-ea5EIfoC" # # + [markdown] id="Ky94Fe9pVu8a" # ##Regression with Sklearn # + [markdown] id="_KADPUVuDC1w" # There are **five basic steps** when you’re implementing linear regression: # # 1. Import the packages and classes you need. # 2. Provide data to work with and eventually do appropriate transformations. # 3. Create a regression model and fit it with existing data. # 4. Check the results of model fitting to know whether the model is satisfactory. # 5. Apply the model for predictions. # + [markdown] id="3vxgqggvDgEG" # **Step 1: Import packages and classes** # # + [markdown] id="-4w2k0VNDzkd" # *Please import the package numpy and the class LinearRegression:* # + id="QOXsjNrzDqTL" import numpy as np from ... import LinearRegression # + [markdown] id="25zDeGGiEBUw" # The fundamental data type of NumPy is the array type called numpy.ndarray. From now on, we're using the term array to refer to instances of the type numpy.ndarray. # + [markdown] id="0qNh58vEEMWL" # **Step 2: Provide data:** # # The second step is defining data to work with. The inputs (regressors, 𝑥) and output (predictor, 𝑦) should be arrays (the instances of the class numpy.ndarray) or similar objects. Please reuse X_scaled and y_scaled. If this did not work, use the alternative data provided. # # *If necessary, change the following code to provide data for the regression:* # + id="XfhGOPWEuI72" # IF YOU DID NOT MANAGE TO CREATE X_scaled and y_scaled, uncomment and use the following lines # X_scaled = np.array([-0.51, 0.67, -0.47, 0.6 , 1.71, -0.16, 0.34, -1.02, -0.68, # 2.43, -1.2 , 0.51, -0.91, 0.66, -0.53, -2.31, -0.65, 1.34, # 0.11, 1.1 , 1.52, -0.96, -1.29, -0.71, -0.3 , 1.46, -1.08, # 0.24, -0. , 0.28, 0.9 , 0.75, -0.62, -0.71, 1.56, -0.33, # 0.14, -1.43, -0.66, 0.2 ]) # y_scaled = np.array([-0.46, 0.63, -0.3 , 0.66, 1.62, -0.55, 0.27, -1.03, -0.68, # 2.25, -1.25, 0.47, -0.51, 0.63, -0.62, -2.42, -0.76, 1.44, # -0.12, 1.38, 1.6 , -1.22, -1.21, -0.63, -0.37, 1.24, -1.18, # 0.37, 0.17, 0.4 , 0.88, 0.77, -0.49, -0.44, 1.63, -0.37, # 0.02, -1.36, -0.51, 0.07]) # + [markdown] id="Sgdxd-cp4VBx" # *Please print the variables X_scaled and y_scaled to check if they are correct:* # + id="CpgJWagDE-x9" ... ... # + [markdown] id="L6PcWKhwFML5" # **Step 3: Create a model and fit it** # # The next step is to create a linear regression model and fit it using the existing data. # + [markdown] id="Hk6SGwfuFihF" # *Please create and fit the model (fitting means, sklearn will estimate your parameters for a given Input using the specified model):* # **Attention: Many times, sklearn needs input of a specific shape. If this is the case (as it will be here), you may be required to reshape data, even if is basically the same, e.g. reshaping a (n,) array to a (n,1) array.** # + id="X5HYk7zYICma" model = ... 
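# A possible sketch (one valid approach); the reshape to (n, 1) is needed because
# sklearn expects a two-dimensional feature matrix:
# model = LinearRegression().fit(X_scaled.reshape(-1, 1), y_scaled)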
# + [markdown] id="IYmtW_4dFwfi" # **Step 4: Get results** # # Once you have your model fitted, you can get the results to check whether the model works satisfactorily and interpret it. # # *Please obtain the coefficient of determination (𝑅²):* # + id="owg_cgsgF7Uf" r_sq = model. ... print('coefficient of determination:', r_sq) # + [markdown] id="Q7IOBX0JGG9U" # *Now, please also take a look at the estimated parameters for the intercept and the slope:* # + id="2t08j85rGJiP" print('intercept:', ...) print('slope:', ...) # + [markdown] id="hKQtcJTsGRSO" # The code above illustrates how to get 𝑏₀ and 𝑏₁. You may notice that the intercept is a scalar, while the slope is an array, as you can have multiple independent variables (X). # + [markdown] id="BQbgOa03Gcjq" # **Step 5: Predict response** # # Once there is a satisfactory model, you can use it for predictions with either existing or new data. # # *To obtain the predicted response for X_scaled, change the following code:* # + id="Yn_1GeBTI1xm" y_pred = ... print('predicted response:', y_pred , sep='\n') # + [markdown] id="GmP1TirNI7UT" # *Let's plot both original data and our predicted values as a line:* # + id="X_zYLgzOJCCe" import matplotlib.pyplot as plt plt.scatter(...) plt.plot(...) plt.show() # + [markdown] id="nj0pBvAEGyOU" # In practice, regression models are often applied for forecasts. # # *Try to use the model to forecast a y value for a single random X value:* # **Attention: Make sure, you provide your data in the correct format expected by your model - this can not be an integer ;)** # + id="g5epV7S-MZsF" X = ... y_new = ... print(y_new) # + [markdown] id="DmI6Szxaid4z" # ## Machine Learning Pipeline # # We will now execute a Machine Learning pipeline including data preparation, model training and performance evaluation using a Random Forest and a Linear Regression as a baseline model. # # We will use a dataset, where customers rated the quality of wine (For details, see: https://archive.ics.uci.edu/ml/datasets/Wine). # # + [markdown] id="VWmk2uMXYoRB" # **Step 1: Data Perparation** # # # *Please load the two datasets properly, and store them in two dataframes:* # + id="COtKM6dDRd5u" import pandas as pd url_white = 'https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv' url_red = 'https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv' df_white = ... df_red = ... print(df_white.shape,df_red.shape) # does this look resonable? Make sure you have the correct number of columns. If necessary, take a look at your data! # + [markdown] id="atoUpOnrMum_" # *Now add a column "color" with the wine colors white = 0 or red = 1 respectively (to preserve this information) and combine both dataframes into one:* # + id="MK9qxcwYMtuP" ... # add color column to df_white ... # add color column to df_red df_wines = ... # combine the two dataframes in a single long table print(df_wines.shape) # this should be (6497, 13) # + [markdown] id="bsIP2kuhMA68" # Remember, we want to understand **what drives customer's quality perception for wines**. To accomplish that, we want to predict the perceived quality of a wine (our dependent variable, y) using all other available variables (indepentend variables, X). # # *Let's take a brief look at your dependend variable - how many wines per quality rating are there?* # + id="nv32nSUxVLTR" ... 
# create a table with the number of occurrences per quality value (e.g., there should be 193 wines rated 8)

# + [markdown] id="ZGgViSRJjy1s"
# *Now split the data into the dependent variable (y) - quality - and the independent variables (X):*

# + id="vOIyY8IkMg3R"
X,y = ...

print(y.head(3))
X.head(3) # Does this look good?

# + [markdown] id="NCcOLK-ZMkUI"
# **Step 2: Training**
#
# *Let's split the dataset so that 80% of the data is used to train the model and 20% is kept for future evaluation:*

# + id="i1LDvrZ5MorA"
from sklearn.model_selection import ...

X_train, X_test, y_train, y_test = ...

print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)

# + [markdown] id="9FVKFEvSMtzc"
# Now we can define and fit our model on the training dataset. Let's do this for both a random forest and a linear regression model. Using two models will allow us to better evaluate our model performance in the end.
#
# *Please fit a linear regression and a random forest model:*

# + id="Q31IBY8OMw7k"
from sklearn.ensemble import ... # take suitable random forest module
from sklearn.linear_model import LinearRegression

model_rfo = ... # initialize random forest model
model_lin = ... # initialize linear regression model

model_rfo.fit(...)
model_lin.fit(...)

# + [markdown] id="pTwMheqMM0Kf"
# **Step 3: Performance Evaluation**
#
# *Now use the fitted models to evaluate performance on your 20% holdout test dataset, using the mean squared error (*MSE*) performance metric:*
# Note that neither model has 'seen' this data before. This is crucial.

# + id="4E6JDqYOM7AR"
from sklearn.metrics import ... # please use MSE

y_hat_rfo = ... # make predictions using your random forest model
y_hat_lin = ... # make predictions using your linear regression model

mse_rfo = ... # calculate random forest prediction MSE
mse_lin = ... # calculate linear regression prediction MSE

print('MSE Random Forest: %.3f' % mse_rfo)
print('MSE Linear Regression: %.3f' % mse_lin)

# + [markdown] id="uHDkZUZMWP-B"
#

# + [markdown] id="oyUb34dPpqdk"
# *It is always a good idea to get a visual intuition as well, so let's quickly plot ground truth y_test against our predictions y_hat_rfo and y_hat_lin*:

# + id="4FtycYu3cdE-"
import matplotlib.pyplot as plt

... # generate a scatter plot for your random forest predictions
plt.show()

... # generate a scatter plot for your linear regression predictions
plt.show()

# + [markdown] id="7CN3utEwql_5"
# Taking both the MSE and the visual representation - which model performs better?

# + id="WK9El9w1qv1v"
# write your answer

# + [markdown] id="QED3CFGxq1Wi"
# **Step 4: Generating Insights**
#
# Let's use our model to find out which variable is most important for quality.
#
# *Please create a dataframe which has all the X variables and their respective importance, and sort the dataframe by importance:*

# + id="Pacpzzrhd1fe"
insights = pd.DataFrame(columns=['variable','importance'])

insights['variable'] = ...
insights['importance'] = ...

insights. ... # please print the sorted table - what is most important?
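# A possible sketch for this cell (assuming model_rfo and X from the previous steps;
# fitted scikit-learn random forests expose a feature_importances_ attribute):
# insights['variable'] = X.columns
# insights['importance'] = model_rfo.feature_importances_
# print(insights.sort_values(by='importance', ascending=False))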
# + [markdown] id="tClVksnmXsNn" # ## Save & Submit # # Please make sure you included your name and matrikel at the top of this notebook, save your work **with your name in the file name**, and send **two** files to jasper dot schwenzow at uni dash hamburg dot de: # - Ipynb *(Go to File -> Download .ipynb)* # - PDF *(Go to File -> Print -> Save as PDF)* (optional) # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Universidade Federal do Rio Grande do Sul (UFRGS) # Programa de Pós-Graduação em Engenharia Civil (PPGEC) # # # PEC00144: Experimental Methods in Civil Engineering # # # ### Part II: Instrumentation # [8. Analog signals processing](#section_8) # #    [8.1. Autocovariance function and stationarity](#section_81) #    [8.2. Fourier series and Fourier transform](#section_82) #    [8.3. Power spectral density and periodograms](#section_83) #    [8.4. White noise and pink noise](#section_84) #    [8.5. Signal derivation and integration](#section_85) #    [8.6. Basic stationarity test](#section_86) #    [8.7. Level upcrossing rate and peak factor](#section_87) #    [8.8. Frequency domain signal filtering](#section_88) #    [8.9. Cross correlation and coherence functions](#section_89) # # --- # _Prof. , Dr.techn._ [(ORCID)](https://orcid.org/0000-0001-5640-1020) # _Porto Alegre, RS, Brazil_ # # + # Importing Python modules required for this notebook # (this cell must be executed with "shift+enter" before any other Python cell) import numpy as np import pandas as pd import matplotlib.pyplot as plt from MRPy import MRPy # - # ## 8. Analog signals processing # # ### 8.1. Autocovariance function and stationarity # # Analog signals are the outcome of some physical measurement, usually some electrical # quantity like voltage, current, or electric charge. The magnitude of such quantities # are expected to be _analogous_ to some observed physical quantity, like stress, # force, displacement, acceleration, etc. # # An analog signal is generally considered as a random process, $X(t)$. A random process # is a time dependent random variable, what requires its statistical properties to be also # regarded as time dependent. # # The autocovariance function, $C_X(\tau)$, of a random process, $X(t)$, is defined as the # first cross moment between the process amplitude at two time instants: # # $$ C_X(t_1, t_2) = {\rm E}\left\{ X(t_1) X(t_2) \right\} - \mu_1\mu_2$$ # # where $\mu_1$ and $\mu_2$ are the mean value of $X(t)$ at instants $t_1$ and $t_2$, # respectively. # If a random process is _stationary_, its statistical properties are assumed to be # _independent of time_, and the expression above depends only on the time gap, # $\tau = t_2 - t_1$: # # $$ C_X(\tau) = {\rm E}\left\{ X(t) X(t + \tau) \right\} - \mu_X^2 $$ # # The definitions above exclude the mean value of $X(t)$, keeping only the time # dependent part of its amplitude. Furthermore, it is evident that the autocovariance # function is a _pair function_ (symmetric over the axis $\tau = 0$) and that at origin: # # $$ C_X(0) = {\rm E}\left\{ X^2(t) \right\} - \mu_X^2 = \sigma_X^2$$ # # which is the process _variance_. # The autocovariance function can be normalized by $\sigma_X^2$ resulting in the # process _autocorrelation function_: # # $$ R^2_X(\tau) = \frac{C_X(\tau)}{\sigma_X^2} $$ # # which is also a pair function such that $-1 \leq R_X(\tau) \leq +1$. # # ### 8.2. 
Fourier series and Fourier transform # # An introduction to Fourier analysis can be found in [Class 7](https://nbviewer.jupyter.org/github/mmaiarocha/PEC00025/blob/master/resources/Class_07_FourierTransform.ipynb?flushcache=true) # of the course [Introduction to Vibration Theory](https://github.com/mmaiarocha/PEC00025). # # ### 8.3. Power spectral density and periodograms # # The power spectral density, $S_X(\omega)$, of a stationary random process is defined # as the Fourier transform of the autocovariance function, $C_X(\tau)$: # # $$ S_X(\omega) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} {C_X(\tau) e^{-i\omega\tau} \, d\tau}$$ # # This means that $S_X(\omega)$ and $C_X(\tau)$ are a Fourier transform pair, and consequently: # # $$ C_X(\tau) = \int_{-\infty}^{+\infty} {S_X(\omega) e^{i\omega\tau} \, d\omega} $$ # # The definitions above implies that: # # $$ C_X(0) = \int_{-\infty}^{+\infty} {S_X(\omega) \, d\omega} = \sigma_X^2$$ # # what means that the total integral of the spectral density is, by definition, # the process variance. # # The ``MRPy`` module provides a straightforward method for visualizing the # spectral density estimator, called _periodogram_, and the corresponding autocorrelation # function. Below is a short script where a cosine wave is analysed: # # + Td = 2. # total signal duration (s) fs = 512 # sampling rate (Hz) N = int(Td*fs) # total number of samples f0 = 8. # cosine wave frequency (Hz) X0 = MRPy.harmonic(NX=1, N=N, fs=fs, X0=1.0, f0=f0, phi=0.0) f00 = X0.plot_time(fig=0, figsize=(8,3), axis_t=[0, Td, -2.0, 2.0]) f01 = X0.plot_freq(fig=1, figsize=(8,3), axis_f=[0, 16, 0.0, 2.0]) # periodogram f02 = X0.plot_corr(fig=2, figsize=(8,3), axis_T=[0, 1, -2.0, 2.0]) # autocovariance # - # It can be seen that the periodogram of a sine signal is a impulse function # at the sine frequency. # # Below it is show how to simulate a process with given autocorrelation function. # # + Sx, fs = X0.periodogram() X1 = MRPy.from_periodogram(Sx, fs) f00 = X1.plot_time(fig=0, figsize=(8,3), axis_t=[0, Td, -2.0, 2.0]) f01 = X1.plot_freq(fig=1, figsize=(8,3), axis_f=[0, 16, 0.0, 2.0]) # periodogram f02 = X1.plot_corr(fig=2, figsize=(8,3), axis_T=[0, 1, -2.0, 2.0]) # autocovariance # + tau = X1.T_axis() kt = (tau <= 0.5) Cx = np.zeros(tau.shape) Cx[kt] = 1 - 2*tau[kt] X2 = MRPy.from_autocov(Cx, tau[-1]) f00 = X2.plot_time(fig=0, figsize=(8,3), axis_t=[0, 8, -4.0, 4.0]) f01 = X2.plot_freq(fig=1, figsize=(8,3), axis_f=[0, 8, 0.0, 2.0]) f02 = X2.plot_corr(fig=2, figsize=(8,3), axis_T=[0, 4, -2.0, 2.0]) # - # ### 8.4. White noise and pink noise # # A _Gaussian white noise_ is a signal with random amplitude with normal distribution and # a constant power density all over the frequency domain: # # $$ S_X(\omega) = S_0 $$ # # The associated autocorrelation function is a Dirac's Delta at the origin, with an # _infinite_ pulse integral. # This signal in practice must have a limited band, otherwise the corresponding variance # would be infinite. For practical purposes a signal is considered to be a white noise # if the power is constant (within some statistical error) over some relevant # frequency band $\Delta\omega = \omega_2 - \omega_1$: # # $$ S_X(\omega) = S_0, \hspace{5mm} {\rm for} # \hspace{5mm} \omega_1 \leq \omega \leq \omega_2 $$ # # and zero otherwise. The corresponding autocorrelation function is: # # $$C_X(\tau) = \frac{4S_0}{\tau} \sin\left( \frac{ \Delta\omega}{2} \tau \right) # \cos\left( \omega_0 \tau \right) $$ # # where $\omega_0 = (\omega_1 + \omega_2)/2$ is the band center. 
# The corresponding variance is: # # $$ \sigma_X^2 = C_X(0) = 2\Delta\omega S_0$$ # # As the band width $\Delta\omega$ decreases, the signal above approaches a # cosine wave with frequency $\omega_0$, as described in the previous section. # # Let us take a look on a band-limited Gaussian white noise simulation with ``MRPy``. # The simulation uses $S_0 = 1$, hence the standard deviation will be # $\sigma_X = \sqrt{2\Delta\omega}$. # + X = MRPy.white_noise(NX=1, N=2**15, fs=512, band=[8, 10]) print('Mean value: {0:7.4f}'.format(X[0].mean())) print('Standard deviation: {0:7.4f}'.format(X[0].std()),'\n') f03 = X.plot_time(fig=3, figsize=(8,3), axis_t=[0, Td, -10.00, 10.00]) f04 = X.plot_freq(fig=4, figsize=(8,3), axis_f=[0, 32, 0.00, 1.00]) f05 = X.plot_corr(fig=5, figsize=(8,3), axis_T=[0, 1, -1.00, 1.00]) # - # The ``MRPy`` module uses a simulation technique that gives an almost perfectly # constant periodogram, as specified. # # ### 8.5. Signal derivation and integration # # As a starting point, let us calculate the following derivative: # # $$ \frac{d}{d\tau}\left[ X(t) X(t+\tau) \right] = # X(t) \cdot \frac{d X(t + \tau)}{d(t+\tau)} \cdot \frac{d(t+\tau)}{dt} = # X(t) \dot{X}(t+\tau) $$ # # Now, using the expected value operator and considering autocovariance symmetry: # # $$ \frac{d C_X(\tau)}{d\tau} = {\rm E}\left\{ X(t) \dot{X}(t+\tau) \right\} # = {\rm E}\left\{ X(t-\tau) \dot{X}(t) \right\} $$ # # and following the logic to find the second derivative: # # $$ \frac{d^2 C_X(\tau)}{d\tau^2} = -{\rm E}\left\{\dot{X}(t) \dot{X}(t+\tau) \right\} # = - C_{\dot{X}} (\tau)$$ # # With this result at hand, we go back to the relation between power density and # autocovariance: # # $$ C_X(\tau) = \int_{-\infty}^{+\infty} {S_X(\omega) e^{i\omega\tau} \, d\omega} $$ # # and apply double derivative: # # \begin{align*} # \frac{d C_X(\tau)}{d\tau} &= # \int_{-\infty}^{+\infty} {i\omega S_X(\omega) e^{i\omega\tau} \, d\omega} \\ # \frac{d^2 C_X(\tau)}{d\tau^2} &= # - \int_{-\infty}^{+\infty} {\omega^2 S_X(\omega) e^{i\omega\tau} \, d\omega} # \end{align*} # # Now, considering that the following relation is also valid: # # $$ C_\dot{X}(\tau) = \int_{-\infty}^{+\infty} {S_\dot{X}(\omega) e^{i\omega\tau} \, d\omega} $$ # # It finally results that: # # \begin{align*} # S_\dot{X} (\omega) &= \omega^2 S_{X} (\omega) \\ # S_\ddot{X}(\omega) &= \omega^4 S_{X}(\omega) # \end{align*} # # These relations allow us to calculate the spectral density of velocity and # acceleration processes from the spectral density of displacement process, or vice-versa. # They are quite useful for converting signal amplitudes obtained with one type of # transducer (for instance, an accelerometer) to amplitudes as they would have been # obtained with other type of transducer (for instance, displacement). # # The ``MRPy`` class provides the calculation of derivatives and integrals in # frequency domain, as demonstrated below. The methods allow the definition # of a passing frequency band, for simultaneously eliminating noise errors. 
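#
# Before the ``MRPy`` demonstration, here is a minimal NumPy-only sketch of the underlying
# idea (a hypothetical example signal; differentiation amounts to multiplying the Fourier
# amplitudes by $i\omega$, which is where the $\omega^2$ factor in the spectral relations
# above comes from):

# +
fs = 512                                     # sampling rate (Hz)
t  = np.arange(0, 2, 1/fs)                   # 2 seconds of time axis
x  = np.cos(2*np.pi*8*t)                     # 8 Hz cosine

Xf = np.fft.rfft(x)                          # one-sided spectrum
w  = 2*np.pi*np.fft.rfftfreq(len(x), 1/fs)   # angular frequencies (rad/s)

xdot = np.fft.irfft(1j*w*Xf, n=len(x))       # derivative evaluated in the frequency domain

err = np.max(np.abs(xdot + 2*np.pi*8*np.sin(2*np.pi*8*t)))
print('Maximum deviation from the analytical derivative: {0:.2e}'.format(err))
# -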
# # + acc = MRPy.white_noise(N=2**15, fs=512) + 0.001*MRPy.harmonic(N=2**15, fs=512, f0=0.5) Td = acc.Td f03 = acc.plot_time(fig=3, figsize=(8,3), axis_t=[0, Td, -10.00, 10.00]) f04 = acc.plot_freq(fig=4, figsize=(8,3), axis_f=[0, 1, 0.00, 0.01]) f05 = acc.plot_corr(fig=5, figsize=(8,3), axis_T=[0, 0.1, -1.00, 1.00]) # + V1 = acc.integrate() # integration without filtering #V1 = acc.differentiate() # integration without filtering f03 = V1.plot_time(fig=3, figsize=(8,3))#, axis_t=[0, Td, -0.50, 0.50]) f04 = V1.plot_freq(fig=4, figsize=(8,3))#, axis_f=[0, 2, 0.00, 0.10]) f05 = V1.plot_corr(fig=5, figsize=(8,3))#, axis_T=[0, 32, -1.00, 1.00]) # + V2 = acc.integrate(band=[0.3, 0.7]) # integration with filtering f06 = plt.figure(6, figsize=(8,3)) f06a = plt.plot(V1.t_axis(), V1[0], 'r') f06b = plt.plot(V2.t_axis(), V2[0], 'b') #plt.axis([0, Td, -1, 1]) plt.grid(True) # - # Whenever a signal is integrated without a low frequency cut, # a _zero drift_ is expected to happen! This can be avoided by setting a lower frequency # bound as high as possible without attenuating the useful part of the signal. # # ### 8.6. Basic stationarity test # # Previously we have demonstrated that: # # $$ \frac{d C_X(\tau)}{d\tau} = {\rm E}\left\{ X(t) \dot{X}(t+\tau) \right\} = # \int_{-\infty}^{+\infty} {i\omega S_X(\omega) e^{i\omega\tau} \, d\omega} $$ # # By making $\tau = 0$ in the relations above gives: # # $$ {\rm E}\left\{ X(t) \dot{X}(t) \right\} = # \int_{-\infty}^{+\infty} {i\omega S_X(\omega) \, d\omega} $$ # # The power spectral density is a _pair function_. Multiplying it by $i\omega$ # necessarily yields an _odd function_. Consequently the integral vanishes and: # # $$ {\rm E}\left\{ X(t) \dot{X}(t) \right\} = 0 $$ # # This relation can be a shortcut for ascertaining the signal stationarity and hence # validating the constancy of its spectral density. # # This result can be demonstrated through simulation with the ``MRPy`` module. # A Gaussian white noise is simulated and its derivative is calculated. Both # procedures restrain the signals to the same limited band: # + X = MRPy.white_noise(N=2**15, fs=512, band=[8, 16]) Xdot = X.differentiate(band=[8, 16]) f07 = plt.figure(7, figsize=(6,6)) f07a = plt.plot(X[0], Xdot[0], 'b.') plt.axis([-15, 15, -500, 500]) plt.grid(True) # - # The lack of correlation between the process and its derivative observed in the plot # above is an evidence of process stationarity. # ### 8.7. Level upcrossing and peak factors # # For a stationary process, the expected number of upcrossings, $N^{+}_a(T)$, of a given # amplitude level, $a$, within a given observation time, $T$, is given by: # # $$ N^{+}_a(T) = \nu^{+}_a T$$ # # where $\nu^{+}_a$ is the _upcrossing rate_ of level $a$. In other words, the number of # upcrossings is proportional to the observation time. 
#
# Level upcrossing
#
# The upcrossing rate is calculated by integrating the joint probability distribution
# of the process amplitude and its derivative, such that the amplitude is fixed at
# level $a$ and only positive values are regarded for its derivative:
#
# $$ \nu^{+}_a = \int_0^{\infty} {p_{X\dot{X}}(x=a, \dot{x}) \, \dot{x} \, d{\dot{x}}} $$
#
# This result can be particularized for a Gaussian process, also considering that
# stationarity implies that the process and its derivative are uncorrelated:
#
# $$ \nu^{+}_a = \frac{1}{2\pi} \, \frac{\sigma_\dot{X}}{\sigma_X} \,
#               \exp \left( -\frac{a^2}{2\sigma_X^2} \right) $$
#
# Recalling from previous results that:
#
# $$ \sigma_X^2 = \int_{-\infty}^{+\infty} {S_X(\omega) \, d\omega} $$
#
# and that:
#
# $$ \sigma_\dot{X}^2 = \int_{-\infty}^{+\infty} {\omega^2 S_X(\omega) \, d\omega} $$
#
# it follows that the upcrossing rate of any stationary signal can be estimated by
# integrating its periodogram as indicated above.
#
# For the fluctuating part of the displacement response, since the analysis was carried
# out in the frequency domain, time domain results can only be obtained in statistical
# terms. The NBR-6123 approach consists in adopting a peak factor of the modal response,
# $g_k = 4$, applied to the standard deviation of the fluctuating response.
# However, since the response spectrum is available, a more precise estimate of this
# peak factor can be obtained with Davenport's formula:
#
# $$g_k = \sqrt{2 \ln (\nu^{+}_0 T)} + \frac{0.5772}{\sqrt{2 \ln (\nu^{+}_0 T)}}$$
#
# where $T$ is the observation time, taken as 600 s in NBR-6123, and $\nu^{+}_0$ is the
# rate of upcrossings of the zero level in the positive direction (_zero upcrossing rate_),
# calculated from the spectrum as:
#
# $$ \nu^{+}_0 = \sqrt{\frac{\int_0^\infty{f^2 S_X(f) \; df}}
#                           {\int_0^\infty{ S_X(f) \; df}}} $$
#
# Note that the denominator inside the square root is the variance of the modal response.
#
# Applying the expressions above to the calculation example gives:

# +
X  = MRPy.white_noise(N=2**16, fs=256)   # full band white noise
Y  = X.sdof_Duhamel(4, 0.1)              # mass-spring system response
Y  = Y/Y[0].std()                        # force standard deviation equal to 1
Tm = X.Td

gD = Y.Davenport(T=600)   # Gaussian narrow-band process
gS = Y.splitmax (T=600)   # Prof. Rocha's method

Ypk = Y[0].mean() + gD[0]*Y[0].std()     # mean + g*(standard deviation)

print('Peak factor from Davenport:  {0:6.3f}'.format(gD[0]))
print('Peak factor from splitmax:   {0:6.3f}'.format(gS[0]))
print('Peak value for displacement: {0:6.3f}'.format(Ypk))

f09  = plt.figure(9, figsize=(12,4))
f09a = plt.plot(Y.t_axis(), Y[0], 'r', lw=1)

plt.axis([0, X.Td, -6, 6])
plt.grid(True)
# -

# ### 8.8. Frequency domain signal filtering
#
#

# +
Td  = 4.           # total signal duration
fs  = 2048         # sampling rate
N   = int(Td*fs)   # total number of samples
f60 = 60.          # sine wave frequency (Hz)

t  = np.linspace(0, Td, N)
xi = 2*np.random.randn(N) + 1*np.cos(2*np.pi*f60*t)

X = MRPy(xi, fs=fs)
Y = X   # try: Y = X.filtered(band=[59.8, 60.2], mode='pass'), and also mode='stop'

f08  = plt.figure(8, figsize=(12,4))
f08a = plt.plot(X.t_axis(), X[0], 'r', lw=0.5)
f08b = plt.plot(Y.t_axis(), Y[0], 'b', lw=0.8)

plt.axis([0, 1, -10, 10])
plt.grid(True)
# -

Y.plot_freq(fig=9)

# ### 8.9.
Cross correlation and coherence functions # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import patsy as pt import numpy as np import seaborn as sns import matplotlib.pyplot as plt from sklearn import preprocessing # %matplotlib inline import re import pymc3 as pm import matplotlib.ticker as tk import re from sklearn.model_selection import StratifiedKFold import pickle # ## Import data df = pd.read_csv('outputs/ala1_trials_clean.csv') df = df.rename(columns={'project_name': 'basis', 'cluster__n_clusters': 'n', 'test_mean': 'y'}).\ loc[:, ['basis', 'y', 'n']] # ## Scale predictors # to_scale = ['n'] scaler = preprocessing.MinMaxScaler() vars_scaled = pd.DataFrame(scaler.fit_transform(df.loc[:, to_scale]), columns=[x+'_s' for x in to_scale]) df = df.join(vars_scaled) df.T # ## Create design matrix y = df.loc[:, 'y'] X = df.loc[:, df.columns.difference(['y'])] X_c = pt.dmatrix('~ 0 + n_s + C(basis)', data=df, return_type='dataframe') X_c = X_c.rename(columns=lambda x: re.sub('C|\\(|\\)|\\[|\\]','',x)) # ## Model fitting functions # + def gamma(alpha, beta): def g(x): return pm.Gamma(x, alpha=alpha, beta=beta) return g def hcauchy(beta): def g(x): return pm.HalfCauchy(x, beta=beta) return g def fit_model_1(y, X, kernel_type='rbf'): """ function to return a pymc3 model y : dependent variable X : independent variables prop_Xu : number of inducing varibles to use X, y are dataframes. We'll use the column names. """ with pm.Model() as model: # Covert arrays X_a = X.values y_a = y.values X_cols = list(X.columns) # Globals prop_Xu = 0.1 # proportion of observations to use as inducing variables l_prior = gamma(1, 0.05) eta_prior = hcauchy(2) sigma_prior = hcauchy(2) # Kernels # 3 way interaction eta = eta_prior('eta') cov = eta**2 for i in range(X_a.shape[1]): var_lab = 'l_'+X_cols[i] if kernel_type=='RBF': cov = cov*pm.gp.cov.ExpQuad(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i]) if kernel_type=='Exponential': cov = cov*pm.gp.cov.Exponential(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i]) if kernel_type=='M52': cov = cov*pm.gp.cov.Matern52(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i]) if kernel_type=='M32': cov = cov*pm.gp.cov.Matern32(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i]) # Covariance model cov_tot = cov # Model gp = pm.gp.MarginalSparse(cov_func=cov_tot, approx="FITC") # Noise model sigma_n =sigma_prior('sigma_n') # Inducing variables num_Xu = int(X_a.shape[0]*prop_Xu) Xu = pm.gp.util.kmeans_inducing_points(num_Xu, X_a) # Marginal likelihood y_ = gp.marginal_likelihood('y_', X=X_a, y=y_a,Xu=Xu, noise=sigma_n) mp = pm.find_MAP() return gp, mp, model # - # ## Main testing loop # + # Inputs kernels = ['M32', 'M52', 'RBF', 'Exponential' ] # Outputs pred_dfs = [] # iterator kf = StratifiedKFold(n_splits=10) for i in range(len(kernels)): print(kernels[i]) for idx, (train_idx, test_idx) in enumerate(kf.split(X.values, X['basis'])): print('\tfold: {}'.format(idx)) # subset dataframes for training and testin y_train = y.iloc[train_idx] X_train = X_c.iloc[train_idx, :] y_test = y.iloc[test_idx] X_test = X_c.iloc[test_idx, :] # Fit gp model gp, mp, model = fit_model_1(y=y_train, X=X_train, kernel_type=kernels[i]) # Get predictions for evalution with model: # predict latent mu, var = gp.predict(X_test.values, point=mp, diag=True,pred_noise=False) sd_f = np.sqrt(var) # predict target 
(includes noise) _, var = gp.predict(X_test.values, point=mp, diag=True,pred_noise=True) sd_y = np.sqrt(var) res = pd.DataFrame({'f_pred': mu, 'sd_f': sd_f, 'sd_y': sd_y, 'y': y_test.values}) res.loc[:, 'kernel'] = kernels[i] res.loc[:, 'fold_num'] = idx pred_dfs.append(pd.concat([X_test.reset_index(), res.reset_index()], axis=1)) pred_dfs = pd.concat(pred_dfs) null_mu = np.mean(y) null_sd = np.std(y) # - # ## Evaluate kernels # + def ll(f_pred, sigma_pred, y_true): # log predictive density tmp = 0.5*np.log(2*np.pi*sigma_pred**2) tmp += (f_pred-y_true)**2/(2*sigma_pred**2) return tmp sll = ll(pred_dfs['f_pred'], pred_dfs['sd_y'], pred_dfs['y']) sll = sll - ll(null_mu, null_sd, pred_dfs['y']) pred_dfs['msll'] = sll pred_dfs['smse'] = (pred_dfs['f_pred']-pred_dfs['y'])**2/np.var(y) pred_dfs.to_pickle('outputs/kernel_cv_fits.p') msll = pred_dfs.groupby(['kernel'])['msll'].mean() smse = pred_dfs.groupby(['kernel'])['smse'].mean() summary = pd.DataFrame(smse).join(other=pd.DataFrame(msll), on=['kernel'], how='left') summary.to_csv('outputs/kernel_cv_fits_summary.csv') # - summary # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This notebook contains the code for the meta-analysis of healthy lung data for ACE2, TMPRSS2, and CTSL. It contains the hold-out analysis for the complex model with interaction terms that was run on the cell-level data. This script contains the code that was run on the full data and does not test for smoking associations. import scanpy as sc import numpy as np import scipy as sp import matplotlib.pyplot as plt import pandas as pd from matplotlib import rcParams from matplotlib import colors from matplotlib import patches import seaborn as sns import batchglm import diffxpy.api as de import patsy as pat from statsmodels.stats.multitest import multipletests import logging, warnings import statsmodels.api as sm # + plt.rcParams['figure.figsize']=(8,8) #rescale figures sc.settings.verbosity = 3 #sc.set_figure_params(dpi=200, dpi_save=300) sc.logging.print_versions() de.__version__ logging.getLogger("tensorflow").setLevel(logging.ERROR) logging.getLogger("batchglm").setLevel(logging.INFO) logging.getLogger("diffxpy").setLevel(logging.INFO) pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 35) warnings.filterwarnings("ignore", category=DeprecationWarning, module="tensorflow") # + #User inputs folder = '/storage/groups/ml01/workspace/malte.luecken/2020_cov19_study' adata_diffxpy = '/storage/groups/ml01/workspace/malte.luecken/2020_cov19_study/COVID19_lung_atlas_revision_v3.h5ad' output_folder = 'diffxpy_out/' de_output_base = 'COVID19_lung_atlas_revision_v3_lung_cov19_poissonglm_holdouts_nUMIoffset_testInts' # - # # Read the data adata = sc.read(adata_diffxpy) adata adata.obs.age = adata.obs.age.astype(float) adata.obs.dtypes adata.obs['dataset'] = adata.obs['last_author/PI'] adata.obs.dataset.value_counts() # # Filter the data # Keep only datsets with: # - more than 1 donor # - non-fetal # - lung # Remove fetal datasets dats_to_remove = set(['Rawlins', 'Spence', 'Linnarsson']) # + dat = adata.obs.groupby(['donor']).agg({'sex':'first', 'age':'first', 'dataset':'first'}) # Single donor filter don_tab = dat['dataset'].value_counts() dats_to_remove.update(set(don_tab.index[don_tab == 1])) # - dats_to_remove = list(dats_to_remove) dats_to_remove adata = 
adata[~adata.obs.dataset.isin(dats_to_remove)].copy() adata.obs.lung_vs_nasal.value_counts() # Filter for only lung data adata = adata[adata.obs.lung_vs_nasal.isin(['lung']),].copy() adata # Rename smoking status covariate adata.obs['smoking_status'] = adata.obs.smoked_boolean adata.obs.dataset.value_counts() adata.obs['sample'].nunique() adata.obs['donor'].nunique() # # Check the data np.mean(adata.X.astype(int) != adata.X) # Check if any non-integer data in a particular dataset for dat in adata.obs.dataset.unique(): val = np.mean(adata[adata.obs.dataset.isin([dat]),:].X.astype(int) != adata[adata.obs.dataset.isin([dat]),:].X) if val != 0: print(f'dataset= {dat}; value= {val}') adata[adata.obs.dataset.isin([dat]),:].X[:20,:20].A # All counts are integers # # Fit models and perform DE cluster_key = 'ann_level_2' clust_tbl = adata.obs[cluster_key].value_counts() clusters = clust_tbl.index[clust_tbl > 1000] ct_to_rm = clusters[[ct.startswith('1') for ct in clusters]] clusters = clusters.drop(ct_to_rm.tolist()).tolist() clusters # Calculate DE genes per cluster. adata adata.obs['total_counts_scaled'] = adata.obs['total_counts']/adata.obs['total_counts'].mean() # Get interquartile range for ages to test adata.obs.groupby(['donor']).agg({'age':'first'}).age.quantile([0.25,0.5,0.75]) # + formula = "1 + sex + age + sex:age + dataset" tested_coef = ["sex[T.male]", "age"] dmat = de.utils.design_matrix( data=adata, formula="~" + formula, as_numeric=["age"], return_type="patsy" ) to_test = dict() to_test['age'] = [32,62] to_test['sex[T.male]'] = [0,1] dmat[1] # - # ### Function definition to test effect sizes at particular covariate values def calc_effects(dmat, cov_mat, params, effect, coefs): from patsy.design_info import DesignMatrix from diffxpy.api.stats import wald_test_chisq dmat_cond = isinstance(dmat, tuple) and isinstance(dmat[0], DesignMatrix) if not dmat_cond: raise ValueError("`dmat` should be a patsy output Design Matrix.") effect_list = ['sex[T.male]', 'age', 'smoking_status[T.True]'] if not effect in effect_list: raise ValueError(f'{effect} is not one of: ' f'{effect_list}') if not isinstance(coefs, dict): raise TypeError('`coefs` should contain a dictionary of coefficients ' 'where the effects should be evaluated.') ## Note: this is only correct when 3 covariates are tested in combinations #if np.sum([coef in coefs for coef in effect_list]) < 2: # raise ValueError('The `coefs` dict must contain values for the two ' # 'coefficient not tested in:' # f'{effect_list}') if 'smoking_status[T.True]' in coefs and coefs['smoking_status[T.True]'] not in [0,1]: raise ValueError('Smoking status should be encoded as 0 or 1.') if 'sex[T.male]' in coefs and coefs['sex[T.male]'] not in [0,1]: raise ValueError('Sex should be encoded as 0 or 1.') if 'age' in coefs and not (isinstance(coefs['age'], float) or isinstance(coefs['age'], int)): raise ValueError('Age should be a numerical value.') coef_list = [] for term in dmat[1]: if effect not in term: coef_list.append(0) elif term == effect: coef_list.append(1) else: t_list = term.split(':') t_list.remove(effect) coef_list.append(coefs[t_list[0]]) C = np.array(coef_list) val = np.matmul(C,np.array(params)) stderr = np.sqrt(np.matmul(np.matmul(C.T,cov_mat),C)) pval = wald_test_chisq(np.array([val]).reshape(1,1), np.array([stderr**2]).reshape(1,1,1))[0] return (val, stderr, pval) # ## Poisson GLM # + # Poisson GLM loop de_results_lvl2_glm = dict() # Test over clusters for clust in clusters: res_list = [] adata_tmp = adata[adata.obs[cluster_key] == clust,:] 
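    # Leave-one-dataset-out scheme: each dataset is held out in turn and the Poisson GLM
    # is refit on the remaining data, so that effect estimates can later be checked for
    # consistency across holdouts.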
hold_outs = np.unique(adata_tmp.obs["dataset"].values) for ho in hold_outs: adata_tmp_ho = adata_tmp[~adata_tmp.obs.dataset.isin([ho]),:].copy() print(f'Holdout {ho} in cluster {clust}:') print(pd.crosstab(adata_tmp_ho.obs['smoking_status'], adata_tmp_ho.obs['sex'])) # Filter out genes to reduce multiple testing burden sc.pp.filter_genes(adata_tmp_ho, min_cells=10) if adata_tmp_ho.n_vars == 0: print('No genes expressed in more than 10 cells!') continue if len(adata_tmp_ho.obs.sex.value_counts())==1: print(f'{clust} only has 1 type of male/female sample.') continue print(f'Testing {adata_tmp_ho.n_vars} genes...') print("") # List to store results de_results_list = [] # Set up design matrix dmat = de.utils.design_matrix( data=adata_tmp_ho, #[idx_train], formula="~" + formula, as_numeric=["age"], return_type="patsy" ) # Test if model is full rank if np.linalg.matrix_rank(np.asarray(dmat[0])) < np.min(dmat[0].shape): print(f'Cannot test {clust} as design matrix is not full rank.') continue for i, gene in enumerate(adata_tmp_ho.var_names): # Specify model pois_model = sm.GLM( endog=adata_tmp_ho.X[:, i].todense(), #[idx_train, :], exog=dmat[0], offset=np.log(adata_tmp_ho.obs['total_counts_scaled'].values), family=sm.families.Poisson() ) # Fit the model pois_results = pois_model.fit() # Get the covariance matrix cov_mat = pois_results.cov_params() # Test over coefs for coef in tested_coef: iter_coefs = tested_coef.copy() iter_coefs.remove(coef) for c1 in to_test[iter_coefs[0]]: coef_vals = {iter_coefs[0]:c1} val, stderr, pval = calc_effects( dmat = dmat, cov_mat = cov_mat, params = pois_results.params, effect = coef, coefs = coef_vals) case = iter_coefs[0]+':'+str(c1) case = case.replace('sex[T.male]:0','F').replace('sex[T.male]:1','M') case = case.replace('age:32','32yr').replace('age:62','62yr') case = case.replace('_',' ') # Output the results nicely de_results_temp = pd.DataFrame({ "gene": gene, "cell_identity": clust, "covariate": coef, "eval_at": case, "holdout": ho, "coef": val, "coef_sd": stderr, "pval": pval }, index= [clust+"_"+gene+"_"+coef]) de_results_list.append(de_results_temp) de_results = pd.concat(de_results_list) de_results['adj_pvals'] = multipletests(de_results['pval'].tolist(), method='fdr_bh')[1] res_list.append(de_results) # Store the results if len(res_list) > 0: de_results_lvl2_glm[clust] = pd.concat(res_list, ignore_index=True) # Join the dataframes: full_res_lvl2_glm = pd.concat([de_results_lvl2_glm[i] for i in de_results_lvl2_glm.keys()], ignore_index=True) # - # ## Inspect some results de_results_lvl2_glm.keys() full_res_lvl2_glm = full_res_lvl2_glm.sort_values(by=['gene', 'cell_identity', 'covariate']) full_res_lvl2_glm full_res_lvl2_glm.loc[(full_res_lvl2_glm['gene'] == 'ACE2') & (full_res_lvl2_glm['adj_pvals'] < 0.05),] # ### Aggregate hold-out results statistics # + def prop_signif(series): return (series < 0.05).mean() def prop_pos(series): return (series > 0).mean() def prop_pos_zero(series): return (series >= 0).mean() def prop_neg_zero(series): return (series <= 0).mean() # + res_summary_lvl2 = full_res_lvl2_glm.groupby(['gene', 'cell_identity', 'covariate', 'eval_at']).agg({ 'adj_pvals':prop_signif, 'coef':['mean', 'std', prop_pos], 'holdout':'count' }).reset_index() res_summary_lvl2 # - # # Level 3 annotation cluster_key = 'ann_level_3' clust_tbl = adata.obs[cluster_key].value_counts() clusters = clust_tbl.index[clust_tbl > 1000] ct_to_rm = clusters[[ct.startswith('1') or ct.startswith('2') for ct in clusters]] clusters = 
clusters.drop(ct_to_rm.tolist()).tolist() clusters # + adata_sub = adata[adata.obs.ann_level_3.isin(clusters),:] adata_sub adata_sub.obs.donor.nunique() adata_sub.obs['sample'].nunique() # - # ## Poisson GLM # + # Poisson GLM loop de_results_lvl3_glm = dict() # Test over clusters for clust in clusters: res_list = [] adata_tmp = adata_sub[adata_sub.obs[cluster_key] == clust,:] hold_outs = np.unique(adata_tmp.obs["dataset"].values) for ho in hold_outs: adata_tmp_ho = adata_tmp[~adata_tmp.obs.dataset.isin([ho]),:].copy() print(f'Holdout {ho} in cluster {clust}:') print(pd.crosstab(adata_tmp_ho.obs['smoking_status'], adata_tmp_ho.obs['sex'])) # Filter out genes to reduce multiple testing burden sc.pp.filter_genes(adata_tmp_ho, min_cells=10) if adata_tmp_ho.n_vars == 0: print('No genes expressed in more than 10 cells!') continue if len(adata_tmp_ho.obs.sex.value_counts())==1: print(f'{clust} only has 1 type of male/female sample.') continue print(f'Testing {adata_tmp_ho.n_vars} genes...') print("") # List to store results de_results_list = [] # Set up design matrix dmat = de.utils.design_matrix( data=adata_tmp_ho, formula="~" + formula, as_numeric=["age"], return_type="patsy" ) # Test if model is full rank if np.linalg.matrix_rank(np.asarray(dmat[0])) < np.min(dmat[0].shape): print(f'Cannot test {clust} as design matrix is not full rank.') continue for i, gene in enumerate(adata_tmp_ho.var_names): # Specify model pois_model = sm.GLM( endog=adata_tmp_ho.X[:, i].todense(), exog=dmat[0], offset=np.log(adata_tmp_ho.obs['total_counts_scaled'].values), family=sm.families.Poisson() ) # Fit the model pois_results = pois_model.fit() # Get the covariance matrix cov_mat = pois_results.cov_params() # Test over coefs for coef in tested_coef: iter_coefs = tested_coef.copy() iter_coefs.remove(coef) for c1 in to_test[iter_coefs[0]]: coef_vals = {iter_coefs[0]:c1} val, stderr, pval = calc_effects( dmat = dmat, cov_mat = cov_mat, params = pois_results.params, effect = coef, coefs = coef_vals) case = iter_coefs[0]+':'+str(c1) case = case.replace('sex[T.male]:0','F').replace('sex[T.male]:1','M') case = case.replace('age:32','32yr').replace('age:62','62yr') case = case.replace('_',' ') # Output the results nicely de_results_temp = pd.DataFrame({ "gene": gene, "cell_identity": clust, "covariate": coef, "eval_at": case, "holdout": ho, "coef": val, "coef_sd": stderr, "pval": pval }, index= [clust+"_"+gene+"_"+coef]) de_results_list.append(de_results_temp) de_results = pd.concat(de_results_list) de_results['adj_pvals'] = multipletests(de_results['pval'].tolist(), method='fdr_bh')[1] res_list.append(de_results) # Store the results if len(res_list) > 0: de_results_lvl3_glm[clust] = pd.concat(res_list, ignore_index=True) # Join the dataframes: full_res_lvl3_glm = pd.concat([de_results_lvl3_glm[i] for i in de_results_lvl3_glm.keys()], ignore_index=True) # - # ## Inspect some results de_results_lvl3_glm.keys() full_res_lvl3_glm = full_res_lvl3_glm.sort_values(by=['gene', 'cell_identity', 'covariate']) full_res_lvl3_glm.loc[full_res_lvl3_glm['gene'] == 'ACE2'] full_res_lvl3_glm.loc[full_res_lvl3_glm['gene'] == 'TMPRSS2'] full_res_lvl3_glm.loc[full_res_lvl3_glm['gene'] == 'CTSL'] full_res_lvl3_glm.loc[(full_res_lvl3_glm['gene'] == 'ACE2') & (full_res_lvl3_glm['adj_pvals'] < 0.05),] # ### Aggregate hold-out results statistics # + res_summary_lvl3 = full_res_lvl3_glm.groupby(['gene', 'cell_identity', 'covariate', 'eval_at']).agg({ 'adj_pvals':prop_signif, 'coef':['mean', 'std', prop_pos], 'holdout':'count' }).reset_index() 
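# The aggregation above summarises, per gene / cell type / covariate / evaluation point:
# the fraction of holdouts with adjusted p-value below 0.05 (prop_signif), the mean, std
# and sign agreement (prop_pos) of the coefficient across holdouts, and the holdout count.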
res_summary_lvl3 # + prop_agreement = (res_summary_lvl3[('coef','prop_pos')] >= 0.8) | (res_summary_lvl3[('coef','prop_pos')] <= 0.2) gene_mask = (res_summary_lvl3['gene'] == 'ACE2') signif = (res_summary_lvl3[('adj_pvals', 'prop_signif')] >= 0.5) res_summary_lvl3.loc[(prop_agreement & gene_mask)] res_summary_lvl3.loc[(prop_agreement & gene_mask & signif)] # + prop_agreement = (res_summary_lvl3[('coef','prop_pos')] >= 0.8) | (res_summary_lvl3[('coef','prop_pos')] <= 0.2) gene_mask = (res_summary_lvl3['gene'] == 'TMPRSS2') signif = (res_summary_lvl3[('adj_pvals', 'prop_signif')] >= 0.5) res_summary_lvl3.loc[(prop_agreement & gene_mask)] res_summary_lvl3.loc[(prop_agreement & gene_mask & signif)] # + prop_agreement = (res_summary_lvl3[('coef','prop_pos')] >= 0.8) | (res_summary_lvl3[('coef','prop_pos')] <= 0.2) gene_mask = (res_summary_lvl3['gene'] == 'CTSL') signif = (res_summary_lvl3[('adj_pvals', 'prop_signif')] >= 0.5) res_summary_lvl3.loc[(prop_agreement & gene_mask)] res_summary_lvl3.loc[(prop_agreement & gene_mask & signif)] # + # Find number of disagreeing holdout datasets h_count = res_summary_lvl3[('holdout','count')] prop_pos = res_summary_lvl3[('coef','prop_pos')] dat_diff_pos = h_count - prop_pos*h_count dat_diff_neg = prop_pos*h_count dat_diff = pd.concat([dat_diff_pos, dat_diff_neg], axis=1).min(axis=1).astype(int) res_summary_lvl3['holdout_dataset_dis'] = dat_diff #dat_diff = np.min(dat_diff_pos, dat_diff_neg) # - res_summary_lvl3.columns # + gene_mask = (res_summary_lvl3['gene'] == 'ACE2') ct_mask = (res_summary_lvl3['cell_identity'].isin(['AT2', 'Basal', 'Multiciliated lineage', 'Secretory'])) ace2_holdout_res = res_summary_lvl3.loc[(gene_mask & ct_mask)][[('holdout_dataset_dis',''), ('covariate',''), ('cell_identity',''), ('eval_at','')]] ace2_holdout_res['cov_eval'] = [' '.join([i1, i2]) for i1,i2 in zip(ace2_holdout_res[('covariate','')], ace2_holdout_res[('eval_at','')])] ace2_holdout_res = ace2_holdout_res.pivot(index='cov_eval', columns='cell_identity') ace2_holdout_res = ace2_holdout_res.drop(columns=['eval_at', 'covariate']) ace2_holdout_res.columns = ace2_holdout_res.columns.get_level_values(2) ace2_holdout_res.index = [item.replace('sex[T.male]', 'Sex').replace('smoking_status[T.True]', 'Smoking status') for item in ace2_holdout_res.index.tolist()] rcParams['figure.figsize'] = (6,6) p1 = sns.heatmap(ace2_holdout_res, cbar=False, cmap='Blues', annot=True, linewidths=.5) plt.tick_params(axis='both', which='major', labelsize=10, labelbottom = False, bottom=False, top = False, labeltop=True) p1.set_yticklabels(ace2_holdout_res.index, rotation=0) plt.ylabel('') plt.xlabel('') plt.savefig(folder+'/'+output_folder+de_output_base+'_annlvl3_ace2_dataset_disagreements.pdf', dpi=300, bbox_inches='tight') plt.show() rcParams['figure.figsize'] = (8,8) # + gene_mask = (res_summary_lvl3['gene'] == 'TMPRSS2') ct_mask = (res_summary_lvl3['cell_identity'].isin(['AT2', 'Multiciliated lineage'])) tmprss2_holdout_res = res_summary_lvl3.loc[(gene_mask & ct_mask)][[('holdout_dataset_dis',''), ('covariate',''), ('cell_identity',''), ('eval_at','')]] tmprss2_holdout_res['cov_eval'] = [' '.join([i1, i2]) for i1,i2 in zip(tmprss2_holdout_res[('covariate','')], tmprss2_holdout_res[('eval_at','')])] tmprss2_holdout_res = tmprss2_holdout_res.pivot(index='cov_eval', columns='cell_identity') tmprss2_holdout_res = tmprss2_holdout_res.drop(columns=['eval_at', 'covariate']) tmprss2_holdout_res.columns = tmprss2_holdout_res.columns.get_level_values(2) tmprss2_holdout_res.index = 
[item.replace('sex[T.male]', 'Sex').replace('smoking_status[T.True]', 'Smoking status') for item in tmprss2_holdout_res.index.tolist()] rcParams['figure.figsize'] = (3,6) p1 = sns.heatmap(tmprss2_holdout_res, cbar=False, cmap='Blues', annot=True, linewidths=.5) plt.tick_params(axis='both', which='major', labelsize=10, labelbottom = False, bottom=False, top = False, labeltop=True) p1.set_yticklabels(tmprss2_holdout_res.index, rotation=0) plt.ylabel('') plt.xlabel('') plt.savefig(folder+'/'+output_folder+de_output_base+'_annlvl3_tmprss2_dataset_disagreements.pdf', dpi=300, bbox_inches='tight') plt.show() rcParams['figure.figsize'] = (8,8) # + gene_mask = (res_summary_lvl3['gene'] == 'CTSL') ct_mask = (res_summary_lvl3['cell_identity'].isin(['AT2', 'Multiciliated lineage'])) ctsl_holdout_res = res_summary_lvl3.loc[(gene_mask & ct_mask)][[('holdout_dataset_dis',''), ('covariate',''), ('cell_identity',''), ('eval_at','')]] ctsl_holdout_res['cov_eval'] = [' '.join([i1, i2]) for i1,i2 in zip(ctsl_holdout_res[('covariate','')], ctsl_holdout_res[('eval_at','')])] ctsl_holdout_res = ctsl_holdout_res.pivot(index='cov_eval', columns='cell_identity') ctsl_holdout_res = ctsl_holdout_res.drop(columns=['eval_at', 'covariate']) ctsl_holdout_res.columns = ctsl_holdout_res.columns.get_level_values(2) ctsl_holdout_res.index = [item.replace('sex[T.male]', 'Sex').replace('smoking_status[T.True]', 'Smoking status') for item in ctsl_holdout_res.index.tolist()] rcParams['figure.figsize'] = (3,6) p1 = sns.heatmap(ctsl_holdout_res, cbar=False, cmap='Blues', annot=True, linewidths=.5) plt.tick_params(axis='both', which='major', labelsize=10, labelbottom = False, bottom=False, top = False, labeltop=True) p1.set_yticklabels(ctsl_holdout_res.index, rotation=0) plt.ylabel('') plt.xlabel('') plt.savefig(folder+'/'+output_folder+de_output_base+'_annlvl3_ctsl_dataset_disagreements.pdf', dpi=300, bbox_inches='tight') plt.show() rcParams['figure.figsize'] = (8,8) # - # # Store results res_summary_lvl2.columns = ['_'.join(col).strip('_') for col in res_summary_lvl2.columns.values] res_summary_lvl3.columns = ['_'.join(col).strip('_') for col in res_summary_lvl3.columns.values] res_summary_lvl2.to_csv(folder+'/'+output_folder+de_output_base+'_lvl2_summary.csv') full_res_lvl2_glm.to_csv(folder+'/'+output_folder+de_output_base+'_lvl2_full.csv') res_summary_lvl3.to_csv(folder+'/'+output_folder+de_output_base+'_lvl3_summary.csv') full_res_lvl3_glm.to_csv(folder+'/'+output_folder+de_output_base+'_lvl3_full.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # DLW Practical 1: MNIST # # From linear to non-linear models with MNIST # # **Introduction** # # In this practical we will experiment further with linear and non-linear models using the MNIST dataset. MNIST consists of images of handwritten digits that we want to classify correctly. # # **Learning objectives**: # * Implement a linear classifier on the MNIST image data set in Tensorflow. # * Modify the code to to make the classifier non-linear by introducing a hidden non-linear layer. # # **What is expected of you:** # * Step through the code and make sure you understand each step. What test set accuracy do you get? # * Modify the code to make the classifier non-linear by adding a non-linear activation function layer in Tensorflow. What accuracy do you get now? 
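#
# *For reference, one possible way to add the hidden layer is sketched below (a hypothetical
# sketch, not the only solution); it reuses `x`, `x_dim` and `n_classes` from the code cell
# that follows and would replace the single linear layer `y_ = tf.add(tf.matmul(x, W), b)`:*
#
#     n_hidden = 256                                 # hidden layer width (a free choice)
#     W1 = tf.Variable(tf.random_normal([x_dim, n_hidden]))
#     b1 = tf.Variable(tf.ones([n_hidden]))
#     h  = tf.nn.relu(tf.add(tf.matmul(x, W1), b1))  # non-linear hidden layer
#     W2 = tf.Variable(tf.random_normal([n_hidden, n_classes]))
#     b2 = tf.Variable(tf.ones([n_classes]))
#     y_ = tf.add(tf.matmul(h, W2), b2)              # logits, fed to softmax as before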
# # *Some parts of the code were adapted from the DL Indaba practicals.* # + import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data def display_mnist_images(gens, num_images): plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' fig, axs = plt.subplots(1, num_images, figsize=(25, 3)) for i in range(num_images): reshaped_img = (gens[i].reshape(28, 28) * 255).astype(np.uint8) axs.flat[i].imshow(reshaped_img) plt.show() # download MNIST dataset # mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # visualize random MNIST images # batch_xs, batch_ys = mnist.train.next_batch(10) list_of_images = np.split(batch_xs, 10) display_mnist_images(list_of_images, 10) x_dim, train_examples, n_classes = mnist.train.images.shape[1], mnist.train.num_examples, mnist.train.labels.shape[1] ###################################### # define the model (build the graph) # ###################################### x = tf.placeholder(tf.float32, [None, x_dim]) W = tf.Variable(tf.random_normal([x_dim, n_classes])) b = tf.Variable(tf.ones([n_classes])) y = tf.placeholder(tf.float32, [None, n_classes]) y_ = tf.add(tf.matmul(x, W), b) prob = tf.nn.softmax(y_) ######################## # define loss function # ######################## cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_, labels=y)) learning_rate = 0.01 train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy_loss) ########################### # define model evaluation # ########################### actual_class, predicted_class = tf.argmax(y, 1), tf.argmax(prob, 1) correct_prediction = tf.cast(tf.equal(predicted_class, actual_class), tf.float32) classification_accuracy = tf.reduce_mean(correct_prediction) ######################### # define training cycle # ######################### num_epochs = 50 batch_size = 20 # initializing the variables before starting the session # init = tf.global_variables_initializer() # launch the graph in a session (use the session as a context manager) # with tf.Session() as sess: # run session # sess.run(init) # start main training cycle # for epoch in range(num_epochs): avg_cost = 0. avg_acc = 0. 
total_batch = int(mnist.train.num_examples / batch_size) # loop over all batches # for i in range(total_batch): batch_x, batch_y = mnist.train.next_batch(batch_size) # run optimization op (backprop), cost op and accuracy op (to get training losses) # _, c, a = sess.run([train_step, cross_entropy_loss, classification_accuracy], feed_dict={x: batch_x, y: batch_y}) # compute avg training loss and avg training accuracy # avg_cost += c / total_batch avg_acc += a / total_batch # display logs per epoch step # if epoch % 1 == 0: print("Epoch {}: cross-entropy-loss = {:.4f}, training-accuracy = {:.3f}%".format(epoch + 1, avg_cost, avg_acc * 100)) print("Optimization Finished!") # calculate test set accuracy # test_accuracy = classification_accuracy.eval({x: mnist.test.images, y: mnist.test.labels}) print("Accuracy on test set = {:.3f}%".format(test_accuracy * 100)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.0.3 # language: julia # name: julia-1.0 # --- using MultiResolutionIterators, CorpusLoaders include("../src/ner.jl") using LinearAlgebra, Statistics dataset = flatten_levels(collect(CorpusLoaders.load(GMB())) , lvls(GMB, :document)) |> full_consolidate X = [word.(sentence) for sentence in dataset] Y = [CorpusLoaders.named_entity.(sentence) for sentence in dataset] unique(vcat(Y...)) ner = NERTagger() # + function try_outs(ner, x_in, y_in) unique_labels = unique(ner.model.labels) num_labels = length(unique_labels) confusion_matrix = zeros(Int, (num_labels, num_labels)) for (x_seq, y_seq) in zip(x_in, y_in) preds = ner(x_seq) for (pred, logit) in zip(preds, y_seq) pred == "MISC" && continue if(logit == "O") confusion_matrix[findfirst(x -> x==pred, unique_labels), findfirst(x -> x=="O", unique_labels)] += 1 elseif(logit == "Location") confusion_matrix[findfirst(x -> x==pred, unique_labels), findfirst(x -> x=="LOC", unique_labels)] += 1 elseif(logit == "Person") confusion_matrix[findfirst(x -> x==pred, unique_labels), findfirst(x -> x=="PER", unique_labels)] += 1 elseif(logit == "Organization") confusion_matrix[findfirst(x -> x==pred, unique_labels), findfirst(x -> x=="ORG", unique_labels)] += 1 else continue end end end # print(confusion_matrix) s1 = sum(confusion_matrix, dims=2) s2 = sum(confusion_matrix, dims=1) dg = diag(confusion_matrix) s1 = [s1[1:2]..., s1[4:5]...] s2 = [s2[1:2]..., s2[4:5]...] dg = [dg[1:2]..., dg[4:5]...] 
a = mean(dg ./ s1) b = mean(dg ./ s2) println("Precision and recall are:", a, " ", b) println("F1 is:", (2 * a * b) / (a + b)) end # - try_outs(ner, X, Y) i = 5 collect(zip(ner(X[i]), X[i], Y[i])) i = 95 collect(zip(ner(X[i]), X[i], Y[i])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## F@H FEP/ML hybrid # #### Code for correcting AFE calculations in a F@H virtual screen by using temporal/ sequential learning # Collaboration between & groups # # Code by: # - # - # To do: # # Output test scatter # # Input real F@H dataset # # If above done, then: # # pivot code to use F@H data instead of toy # # Select DNN loss -> consider using the same for SKOPT loss # # rewrite SKOPT confergence fn to support replicates # # + import hydra # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd from tqdm.notebook import tqdm # - # ## Data preparation # + # for now, provide toy set of perturbations: # currently code just repeats these 3 points *300. Remove when real dataset is used. perturbation_paths = [ ["mobley_9185328~mobley_9185328"], ["mobley_9209581~mobley_9565165"], ["mobley_9209581~mobley_9821936"]] # compute ligand molecular properties from DB path hydra.computeLigMolProps("./input/ligands/") pass # - # using stored molprops and list of perturbations, compute relative features (subtract/concat bitstrings): hydra.computePertMolProps(perturbation_paths=perturbation_paths) # + # labels/labels.csv constructed in bash, follow same format for real dataset. Combine labels and features; # requires length/indices matching: hydra.buildTrainingSet( "labels/labels.csv", "features/MOLPROPS/featurised_molprops.h5", "trainingsets/MOLPROPS_trainingset.h5", ) # + # PCA + normalise dataset. 
Process in chunks so that we can potentially handle larger datasets with linear scaling: hydra.normaliseDataset( path_to_raw_trainingset="trainingsets/MOLPROPS_trainingset.h5", path_to_save_loc="trainingsets_prepared/", feature_type="MOLPROPS", chunksize=6000) # because of split processing in this function, n_components has to be found manually (see commented section in fn) # + # construct dataset splits necessary for tensorflow/sklearn: X_train, y_train, X_test, y_test = hydra.importDataSet( "ddGoffset", "trainingsets_prepared/MOLPROPS/data.h5") # - # ## Train models # + # construct DNN training function: fitness, dimensions, default_parameters = hydra.denseNN( X_train, y_train, X_test, y_test, "MOLPROPS") # + # use SKOPT to optimise hyperparameters & write outputs: n_calls = 11 hydra.trainCorrector(fitness, dimensions, n_calls, default_parameters, "MOLPROPS") # - # ## Output analysis # show standard loss plot of DNN architecture with best loss after hyperparameter optimisation: from IPython.display import Image Image(filename='output/MOLPROPS_top_performer_loss_plot.png', width=600, height=300) # + # plot above model's predictions: # read in predictions: model_preds = pd.read_csv("output/MOLPROPS_top_performer.csv") # plot simple scatter: fig, axes = plt.subplots(1,1, figsize=(4,4)) ax = axes # change in case of multiple subplots exp = model_preds["Exp1"].values.tolist() pred = model_preds["Pred1"].values.tolist() ax.scatter(exp, pred) # figure out limits: max_lim = max(exp + pred)*1.1 min_lim = min(exp + pred)*1.1 ax.set_ylim(min_lim, max_lim) ax.set_xlim(min_lim, max_lim) # formatting: ax.set_ylabel(r"Predicted $\Delta G_{offset}\ /\ kcal\cdot mol^{-1}$") ax.set_xlabel(r"F@H experimental $\Delta G_{offset}\ /\ kcal\cdot mol^{-1}$") plt.show() # - # here use function to plot hyperparameter config convergence: ax = hydra.plotSKOPTConvergence("output/MOLPROPS_skopt_convergence.csv") ax.set_ylabel("Global minimal SKOPT loss") ax.set_xlabel("SKOPT calls") plt.show() # + # here use function to correct and validate FEP/ML hybrid: # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Quantum Ensemble as Simple Averaging # # ### Theory and Implementation # # ##### (Fixed $U_{(i,j)}$ for independent quantum trajectories) # + [markdown] slideshow={"slide_type": "skip"} # $$\newcommand{\ket}[1]{\left|{#1}\right\rangle}$$ # $$\newcommand{\bra}[1]{\left\langle{#1}\right|}$$ # $$\newcommand{\braket}[1]{\left\langle{#1}\right\rangle}$$ # + [markdown] slideshow={"slide_type": "subslide"} # ### (Step 1) State Preparation # + [markdown] slideshow={"slide_type": "subslide"} # # + Given a 2-qubits $control$ register $(d=2) \implies $ ensemble of $B=2^2$ classifiers # # # # + $data$ register: *qubit encoding strategy* $\implies$ $N$ $2-$dimensional observations are encoded using $2 \times N$ qubits: # $$ # \begin{align} # \text{data register: } \underset{features}{\big( \overset{4}{\underset{i=1}{\otimes}} \left| x_i \right\rangle \big)}\otimes \underset{ labels}{\big( \overset{4}{\underset{i=1}{\otimes}} \left| y_i \right\rangle \big)} # \end{align} # $$ # # where $x_i$ and $y_i$ are encoded as follows: # $$ # \begin{align} # \left| x_i \right\rangle = x_{i,1}\left| 0 \right\rangle + x_{i,2}\left| 1 \right\rangle # \end{align} # $$ # # and # # if $\left| y_i \right\rangle=\left| 0 
\right\rangle$ the $i$-th observation belongs to the class $0$. Otherwise if $\left| y_i \right\rangle=\left| 1 \right\rangle$ the $i$-th observation belongs to the class $1$. # + [markdown] slideshow={"slide_type": "subslide"} # ### (Step 1) State Preparation # + [markdown] slideshow={"slide_type": "fragment"} # $$ # \begin{align*} # \left|\Phi_0\right\rangle &= # \big( H^{\otimes 2} \otimes S_{(x,y)} \big)\left|0\right\rangle \otimes \left|0\right\rangle \otimes \left|0\right\rangle \nonumber \\ # & = # \left|c_1\right\rangle \otimes \left|c_2\right\rangle \otimes \left|x\right\rangle \left|y\right\rangle \nonumber\\ # & = # \frac{1}{\sqrt{2}}\big(\left|0\right\rangle+\left|1\right\rangle\big) \otimes \frac{1}{\sqrt{2}}\big(\left|0\right\rangle+\left|1\right\rangle\big) \otimes \left|x_0,x_1,x_2,x_3\right\rangle \left|y_0,y_1,y_2,y_3\right\rangle \end{align*} # $$ # + [markdown] slideshow={"slide_type": "fragment"} # where $S_x$ is the routine which encodes in the amplitudes of a qubit a real vector $x$ and $H$ is the Hadamard transformation. # + [markdown] slideshow={"slide_type": "subslide"} # ### (Step 2) Sampling in Superposition # # The second step regards the generation of $2^d$ different transformations of the training set in superposition, each entangled with a state of the control register. To this end, $d$ steps are necessary, where each step consists in the entanglement of the $i$-th control qubit with two transformations of $\left|x,y\right\rangle$ based on two random unitaries, $U_{(i,1)}$ and $U_{(i,2)}$, for $i = 1,2$. # # As shown in the **Appendix A**, *Sampling in Superposition* step leads to the following quantum state: # \begin{align} # \ket{\Phi_{2}} # = \frac{1}{2}\Big[ # \hspace{.2em} &\ket{00} U_{(2,1)}U_{(1,1)}\ket{x,y} # \nonumber \\ + & # \ket{01} U_{(2,1)}U_{(1,2)}\ket{x,y} # \nonumber \\ + & # \ket{10} U_{(2,2)}U_{(1,1)}\ket{x,y} # \nonumber \\ + & # \ket{11} U_{(2,2)}U_{(1,2)}\ket{x,y} # \Big] \nonumber \\ # & \hspace{-2.75em} = \frac{1}{\sqrt{4}} \sum_{b=1}^{4} \ket{b} V_b\ket{x,y} # \end{align} # + [markdown] slideshow={"slide_type": "subslide"} # In order to obtain independend quantum trajectories we provide the following definition for $U_{(i,j)}$: # $$U_{(1,1)} = \text{swap}(x_0,x_2) \times \text{swap}(y_0,y_2)$$ # $$U_{(1,2)} = \text{swap}(x_1,x_3) \times \text{swap}(y_1,y_3)$$ # $$U_{(2,1)} = \mathbf{I} $$ # $$U_{(2,2)} = \text{swap}(x_2,x_3) \times \text{swap}(y_2,y_3)$$ # # where $ \mathbf{I}$ is the identity matrix. Thus, the step of *Sampling in Superposition* leads to: # + [markdown] slideshow={"slide_type": "subslide"} # \begin{align*} # \left|\Phi_{2}\right\rangle = \frac{1}{2}\Big[ # & \left|11\right\rangle \left|x_0, x_3, x_1, x_2\right\rangle \left|y_0, y_3, y_1, y_2\right\rangle # \\ + & # \left|10\right\rangle \left|x_2, x_1, x_3, x_0\right\rangle \left|y_2, y_1, y_3, y_0\right\rangle \nonumber\\ # \hspace{.1em} # + & # \left|01\right\rangle \left|x_0, x_3, x_2, x_1\right\rangle \left|y_0, y_3, y_2, y_1\right\rangle \\ # + & # \left|00\right\rangle \left|x_2, x_1, x_0, x_3\right\rangle \left|y_2, y_1, y_0, y_3\right\rangle # \Big] # \end{align*} # + [markdown] slideshow={"slide_type": "subslide"} # We can see that swap operations allows to entangle different observations (in terms of the indices of the qubits) to different state of the $control$ register. 
In particular, if considering the last qubit of the *features* and *labels* (sub-)registers, the above choice for $U_{(i,j)}$ guarantees that each quantum state of the control register is entangled with a different training observation. Using a compact representation: # + [markdown] slideshow={"slide_type": "fragment"} # \begin{align} # \left|\Phi_{2^{'}}\right\rangle = \frac{1}{2}\Big[ # \left|11\right\rangle \left|x_2\right\rangle \left|y_2\right\rangle # + # \left|10\right\rangle\left|x_0\right\rangle\left|y_0\right\rangle # + # \left|01\right\rangle\left|x_1\right\rangle\left|y_1\right\rangle # + # \left|00\right\rangle\left|x_3\right\rangle \left|y_3\right\rangle # \Big] = # \frac{1}{\sqrt{4}}\sum_{i=0}^{3}\left|i\right\rangle\left|x_i,y_i\right\rangle # \end{align} # + [markdown] slideshow={"slide_type": "fragment"} # Notice that, in this case the $i$-th basis state does not correspond to the integer representation of the binary state. # + [markdown] slideshow={"slide_type": "subslide"} # ### (Step 3) Learning via interference # + [markdown] slideshow={"slide_type": "fragment"} # First, the $test$ register is initialised to encode the test set, $\tilde{x}$, considering also an additional register to store the final prediction: # # + [markdown] slideshow={"slide_type": "fragment"} # \begin{align} # (S_{\tilde{x}} \otimes \mathbb{1}) \left|0\right\rangle \left|0\right\rangle =\left|x^{(test)}\right\rangle \left|0\right\rangle # \end{align} # + [markdown] slideshow={"slide_type": "fragment"} # Then, the $data$ and $test$ registers interact via interference using the quantum version of the cosine classifier (gate $F$) to compute the estimates of the target variable: # + [markdown] slideshow={"slide_type": "fragment"} # \begin{align*} # \left|\Phi_{f}\right\rangle # = & \Big(\mathbb{1}^{\otimes 2} \otimes F \Big) \left|\Phi_{d}\right\rangle \nonumber \\ # = & (\mathbb{1}^{\otimes d} \otimes F )\Bigg[\frac{1}{\sqrt{2^d}}\sum_{b=1}^{2^d} \left|b\right\rangle \left|x_b, y_b\right\rangle\Bigg] \otimes \left|x^{(test)}\right\rangle \left|0\right\rangle \nonumber \\ # = & \frac{1}{\sqrt{2^d}}\sum_{b=1}^{2^d} \left|b\right\rangle \left|x_b, y_b\right\rangle\left|x^{(test)}\right\rangle \left|\hat{f}_b\right\rangle # \end{align*} # + [markdown] slideshow={"slide_type": "fragment"} # where $\hat{f}_b$ represents the $b$-th prediction for $\tilde{x}$ given the $b$-th training set. 
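# + [markdown] slideshow={"slide_type": "skip"}
# Before moving on to the measurement step, the swap bookkeeping of Step 2 can be checked classically. The sketch below is an illustration only (plain Python lists stand in for the qubit indices): it applies each combination of the fixed $U_{(i,j)}$ to the index list $[0,1,2,3]$ and inspects the last position.

# + slideshow={"slide_type": "skip"}
# Apply the swap pattern of each control state to the index list [0, 1, 2, 3].
def swap(seq, i, j):
    out = list(seq)
    out[i], out[j] = out[j], out[i]
    return out

U11 = lambda s: swap(s, 0, 2)   # swap(x_0, x_2)
U12 = lambda s: swap(s, 1, 3)   # swap(x_1, x_3)
U21 = lambda s: list(s)         # identity
U22 = lambda s: swap(s, 2, 3)   # swap(x_2, x_3)

for control, (u2, u1) in {"00": (U21, U11), "01": (U21, U12),
                          "10": (U22, U11), "11": (U22, U12)}.items():
    permuted = u2(u1([0, 1, 2, 3]))
    print(control, permuted, "-> last index:", permuted[-1])

# + [markdown] slideshow={"slide_type": "skip"}
# Each control state ends with a different index (3, 1, 0, 2), so every basis state of the control register is paired with a distinct training observation, as in the compact form of $\left|\Phi_{2^{'}}\right\rangle$.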
# + [markdown] slideshow={"slide_type": "subslide"} # ### (Step 4) Measurement # + [markdown] slideshow={"slide_type": "fragment"} # \begin{align*} # \left\langle M \right\rangle = & # \frac{1}{2^d}\sum_{b=1}^{2^d} \left\langle\hat{f}_b|M|\hat{f}_b\right\rangle = # \frac{1}{2^d}\sum_{b=1}^{2^d}\left\langle M_b \right\rangle \nonumber \\ # = & \frac{1}{B} \sum_{b=1}^B \hat{f}_b = \hat{f}_{bag}(\tilde{x}|x,y) # \end{align*} # + [markdown] slideshow={"slide_type": "subslide"} # ## Quantum Implementation # + slideshow={"slide_type": "fragment"} # Import pakages and functions import sys sys.path.insert(1, '../') from Utils import * from modeling import * # + slideshow={"slide_type": "fragment"} # load the toy dataset X_data, Y_data, x_test = load_data_custom() # Generate the quantum circuit qc = ensemble_fixed_U(X_data, Y_data, x_test) # + [markdown] slideshow={"slide_type": "fragment"} # qc.draw(output='mpl', scale=.6, # style={'fontsize':15, 'dpi':200}) # + [markdown] slideshow={"slide_type": "slide"} # # Quantum Ensemble as Simple Averaging - Experiments # - # This notebook reproduces the results in Section 4.3 of the paper *Quantum Ensemble for Classification* where is shown that the quantum ensemble algorithm is able to compute the expectation value of multiple quantum trajectories in superposition with just one execution of the quantum cosine classifier. # # For more details about the theoretical background see **Quantum Ensemble - Independent Trajectories.ipynb** # + # Import packages and functions import sys sys.path.insert(1, '../') from Utils import * from modeling import * Create the toy dataset reported in Table 1 and execute the (classical) cosine classifiers. # + # Load data without normalisation X_data, Y_data, x_test = load_data_custom(normalize = False) # Create table as shown in the paper (Table 1) data = pd.DataFrame(X_data, columns = [r'$X_1$', r'$X_2$']) # Extract the value of the target variable as integer y = [c[1] for c in Y_data] # Compute the cosine distance between the training points and the test point dist = [cosine_similarity([x], [x_test])[0][0] for x in X_data] # Compute the value of the cosine distance classifier # for the four training points from the test point p = [cosine_classifier(x, x_test)[0][0] for x in X_data] # Extract the probabilities for the test point to be classified in class # 1 according to the (classical) cosine classifies Equation (16) probs = [] for i,j in zip(y,p): if i == 0: probs.append(1-j) else: probs.append(j) # Create dataset as in paper (Table 1) probs = np.array(probs) # Rename columns data[r'$y$'] = np.array(y) data[r'$d($$\cdot$$, $ $x^{(test)})$'] = np.round(dist,2) data[r'$P($$y^{(test)}$$=1$$|b$ $)$'] = probs # Rename rows data.index = [r'$x_1$', r'$x_2$', r'$x_3$', r'$x_4$',] #Visualize dataset data # + slideshow={"slide_type": "subslide"} # Load normalised data X_data, Y_data, x_test = load_data_custom() #Visualisation of quantum cosine classifier quantum_cosine = quantum_cosine_classifier(X_data[0], x_test, Y_data[0] ) quantum_cosine.draw(output='mpl', scale=.7) # + [markdown] slideshow={"slide_type": "subslide"} # For each training point in *data* the quantum cosine classifier is executed to compute the prediction of the target variable for the test point $\tilde{x}$. Thus, given the measurements of the quantum circuts, the target probabilities are retrieved using the function *retrieve_proba*. 
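# + [markdown] slideshow={"slide_type": "skip"}
# The conversion itself amounts to turning the shot counts returned by the simulator into a pair $[p_0, p_1]$ of empirical class probabilities. A minimal sketch of such a conversion is shown below; it is an illustration only, the actual `retrieve_proba` lives in `Utils.py` and may label or post-select the measured bits differently.

# + slideshow={"slide_type": "skip"}
# Hypothetical counts dict (bitstring of the measured prediction qubit -> shots);
# the assumption that the first character is the prediction bit is for illustration.
def counts_to_proba(counts):
    shots = sum(counts.values())
    p1 = sum(c for bits, c in counts.items() if bits[0] == "1") / shots
    return [1 - p1, p1]

print(counts_to_proba({"0": 612, "1": 412}))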
# + slideshow={"slide_type": "subslide"}
qc1 = quantum_cosine_classifier(X_data[0], x_test, Y_data[0])
r1 = exec_simulator(qc1)
r1 = retrieve_proba(r1)

qc2 = quantum_cosine_classifier(X_data[1], x_test, Y_data[1])
r2 = exec_simulator(qc2)
r2 = retrieve_proba(r2)

qc3 = quantum_cosine_classifier(X_data[2], x_test, Y_data[2])
r3 = exec_simulator(qc3)
r3 = retrieve_proba(r3)

qc4 = quantum_cosine_classifier(X_data[3], x_test, Y_data[3])
r4 = exec_simulator(qc4)
r4 = retrieve_proba(r4)

out = [r1, r2, r3, r4]

# + [markdown] slideshow={"slide_type": "subslide"}
# We compute the average of the predictions provided by the four quantum cosine classifiers, which corresponds to the classical ensemble prediction with simple averaging as the aggregation strategy.

# + slideshow={"slide_type": "subslide"}
p0 = [p[0] for p in out]
p1 = [p[1] for p in out]
r_avg = [np.mean(p0), np.mean(p1)]
print(np.mean(p0), np.mean(p1))

# + slideshow={"slide_type": "subslide"}
qc = ensemble_fixed_U(X_data, Y_data, x_test)
qc.draw(output='mpl', scale=.6,
        #filename='output/ensemble_circuit.png',
        style={'fontsize': 15, 'dpi': 200})

# + slideshow={"slide_type": "subslide"}
r = exec_simulator(qc, n_shots=8192)
r_ens = retrieve_proba(r)
print(r_ens)

# +
# collect the results
output_simulator = [r1, r2, r3, r4, r_avg, r_ens]
data_pred = pd.DataFrame(output_simulator,
                         columns=['p0', 'p1'],
                         index=['qc1', 'qc2', 'qc3', 'qc4', 'AVG', 'Ensemble'])
data_pred
# data_pred.to_csv('output/sim_results.csv', index=False)

# + slideshow={"slide_type": "subslide"}
plot_cls(output_simulator, title='')

# + [markdown] slideshow={"slide_type": "subslide"}
# The probabilities provided by the quantum cosine classifiers ($f_1$, $f_2$, $f_3$, $f_4$) closely match those of the classical cosine classifier (*data*). Furthermore, the average of the four classifiers is almost identical to the quantum ensemble prediction, which requires only a single execution of the cosine classifier.
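# + [markdown] slideshow={"slide_type": "subslide"}
# The agreement can also be checked numerically, using `r_avg` and `r_ens` computed in the cells above:

# + slideshow={"slide_type": "subslide"}
# Difference between the classical average of the four cosine classifiers and
# the single-circuit quantum ensemble estimate of P(y_test = 1).
print("classical average p1:", r_avg[1])
print("quantum ensemble p1: ", r_ens[1])
print("absolute difference: ", abs(r_avg[1] - r_ens[1]))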
# + [markdown] slideshow={"slide_type": "subslide"} # ### Multiple Experiments # # In order to show that the result of quantum ensemble holds regardless of the data reported in *Table 1*, the same experiment is performed on $20$ randomly generated datasets, and the average of the quantum cosine classifiers and the quantum ensemble prediction are compared # + slideshow={"slide_type": "subslide"} seed = 543 n_shots = 8192 N_runs = 20 y_labels =[[0,1], [1,0]] p1_avg = [] p1_ens = [] np.random.seed(seed) for run in np.arange(N_runs): # print(run) x1 = [np.random.randint(1, 9), np.random.randint(1, 9)] x2 = [np.random.randint(1, 9), np.random.randint(1, 9)] x3 = [np.random.randint(1, 9), np.random.randint(1, 9)] x4 = [np.random.randint(1, 9), np.random.randint(1, 9)] y1 = y_labels[np.random.randint(0, 2)] y2 = y_labels[np.random.randint(0, 2)] y3 = y_labels[np.random.randint(0, 2)] y4 = y_labels[np.random.randint(0, 2)] Y_data = [y1, y2, y3, y4] X_data = [x1, x2, x3, x4] x_test = [np.random.randint(1, 9), np.random.randint(1, 9)] X_data, Y_data, x_test = load_data_custom(X_data, Y_data, x_test = x_test) qc1 = quantum_cosine_classifier(X_data[0], x_test, Y_data[0] ) r1 = exec_simulator(qc1) r1 = retrieve_proba(r1) qc2 = quantum_cosine_classifier(X_data[1], x_test, Y_data[1]) r2 = exec_simulator(qc2) r2 = retrieve_proba(r2) qc3 = quantum_cosine_classifier(X_data[2], x_test, Y_data[2]) r3 = exec_simulator(qc3) r3 = retrieve_proba(r3) qc4 = quantum_cosine_classifier(X_data[3], x_test, Y_data[3]) r4 = exec_simulator(qc4) r4 = retrieve_proba(r4) out = [r1, r2, r3, r4] p0 = [p[0] for p in out] p1 = [p[1] for p in out] r_avg = [np.mean(p0), np.mean(p1)] # print('AVG:', r_avg) qc = ensemble_fixed_U(X_data, Y_data, x_test) qc = transpile(qc, basis_gates = ['u1', 'u2', 'u3', 'cx'], optimization_level=3) r = exec_simulator(qc, n_shots=n_shots) r_ens = retrieve_proba(r) # print('Ensemble', r_ens) out = [r1, r2, r3, r4, r_avg, r_ens] p1_avg.append(r_avg[1]) p1_ens.append(r_ens[1]) avg_vs_ensemble(p1_avg, p1_ens) # + # Execution on real device IBMQ.load_account() provider = IBMQ.get_provider(hub='ibm-q') provider.backends() backend_16 = provider.get_backend('ibmq_16_melbourne') backend_5 = provider.get_backend('ibmq_rome') def run_real_device(qc, backend, shots=8192): job = execute(qc, backend, shots=shots) results = job.result() r = results.get_counts(qc) return r # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:anaconda3] # language: python # name: conda-env-anaconda3-py # --- # + import pandas as pd import os from pysam import VariantFile import matplotlib.pyplot as plt import numpy as np import seaborn as sns from functools import reduce plt.style.use('aa_paper') # %matplotlib inline # - # # Reformat VCFs in parallel # # Using a script called `get_gens_df.py` in `AlleleAnalyzer/generate_gens_dfs/get_gens_df.py`, we reformat the 1000 Genomes VCFs in order to more easily annotate variants for whether they are near or in PAM sites. This is necessary because in ordinary VCF files, variants can have multiple alleles listed on one line, and these need to be split up for annotation based on each individual allele. # # For the 1000 Genomes analysis, we parallelized this process by splitting the genome into 10kb windows. (Will this make too many files? Maybe 500kb would be more feasible, then redo any that don't work, similar to ExAc approach. 
This approach is used because 1000 Genomes data is whole-genome rather than exome, like ExAc. # # Make 10 kb windows of the genome using `bedtools makewindows`. # # ## hg38 # # `bedtools makewindows -g hg38.sizes -w 10000 > hg38.10kbwindows.bed` # # 321,184 regions for hg38. # # ### Add unique regions IDs # + hg38_regions = pd.read_csv('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/dat/hg38.10kbwindows.bed', sep='\t', header=None, names=['chrom','start','stop']) hg38_regions['region_id'] = 'region_' + hg38_regions.index.astype(str) # # hg38_regions.to_csv('dat/1kgp_hg38_regions.bed', sep='\t', index=False, header=False) # - hg38_regions = pd.read_csv('dat/1kgp_hg38_regions.bed', sep='\t', header=None, names=['chrom','start','stop','region_id']) # ## Check whether all regions were appropriately reformatted # + hg38_regions['gens_fname'] = '/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_formatted_variants/' + hg38_regions['region_id'] + '.h5' # hg38_regions['gens_complete'] = hg38_regions['gens_fname'].map(os.path.isfile) # - hg38_regions.query('~gens_complete') len(hg38_regions.query('~gens_complete')) # ## Check whether annotations were completed for appropriate regions # + hg38_regions['region_id'] = hg38_regions['region_id'].str.replace('_','') hg38_regions['annots_fname'] = '/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_annotated_variants/' + \ hg38_regions['region_id'] + '.h5' # - hg38_regions['annots_file_exists'] = hg38_regions['annots_fname'].map(os.path.isfile) hg38_regions.to_csv('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/region_annots.tsv', sep='\t', index=False) len(hg19_regions.query('~annots_file_exists')) # ## hg38 # # `bedtools makewindows -g hg38.sizes -w 10000 > hg38.10kbwindows.bed` # # 321,184 regions for hg38. # # ### add unique region IDs hg38_regions = pd.read_csv('dat/1kgp_hg38_regions.bed', sep='\t', header=None, names=['chrom','start','stop','region_id']) # + # check that gens file completed hg38_regions['gens_fname'] = '/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_formatted_variants/' + hg38_regions['region_id'] + '.h5' # hg38_regions['gens_complete'] = hg38_regions['gens_fname'].map(os.path.isfile) # - len(hg38_regions.query('gens_complete')) hg38_regions.to_csv('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/region_annots.tsv', sep='\t', index=False) # # ExcisionFinder results # # # # `python gene_targ_variation.py /pollard/home/kathleen/projects/AlleleAnalyzer/manuscript_analyses/1000genomes_analysis/get_gene_list/gene_list_hg19.tsv /pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg19_analysis/1kgp_excisionfinder_results/results_by_chrom/ /pollard/home/kathleen/projects/AlleleAnalyzer/manuscript_analyses/1000genomes_analysis/src/hg19_analysis/plotting/` # # Targetable genes per person, just change the dir where h5 files are pulled to do 5kb window analysis. 
# # `python targ_genes_per_person.py /pollard/home/kathleen/projects/AlleleAnalyzer/manuscript_analyses/1000genomes_analysis/get_gene_list/gene_list_hg38.tsv /pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_excisionfinder_results/results_by_chrom/ targ_genes_per_person` # # with 5kb flanking # # `python targ_genes_per_person.py /pollard/home/kathleen/projects/AlleleAnalyzer/manuscript_analyses/1000genomes_analysis/get_gene_list/gene_list_hg38.tsv /pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_excisionfinder_results/results_by_chrom_5kb_window targ_genes_per_person_5kb` # # # Determine mean # putatively targetable autosomal genes per person in 1000 Genomes Cohort def translate_gene_name(gene_name): """ HDF5 throws all sort of errors when you have weird punctuation in the gene name, so this translates it to a less offensive form. """ repls = ("-", "dash"), (".", "period") trans_gene_name = reduce(lambda a, kv: a.replace(*kv), repls, str(gene_name)) return trans_gene_name genes = pd.read_csv('/pollard/home/kathleen/projects/AlleleAnalyzer/manuscript_analyses/1000genomes_analysis/get_gene_list/gene_list_hg38.tsv', sep='\t') autosomal_genes = genes.query('(chrom != "chrX") and (chrom != "chrY")') protein_coding_autosomal_genes = set(genes[genes['name'].str.startswith('NM')]['official_gene_symbol'].tolist()) genes.query('official_gene_symbol == "BEST1"') targ_genes_per_person = np.load('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_excisionfinder_results/results_by_chrom/targ_genes_per_persongenes_per_person.npy').item() targ_genes_per_person_5kb = np.load('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_excisionfinder_results/results_by_chrom_5kb_window/targ_genes_per_person_5kbgenes_per_person.npy').item() # + gene_dict = {} genes_eval = 0 # for c_dir in os.listdir('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_excisionfinder_results/results_by_chrom'): for chrom in list(range(1,23)): c_dir = os.path.join('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_excisionfinder_results/results_by_chrom',f'chr{chrom}_ef_results/') genes_in_dir = os.listdir(os.path.join('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_excisionfinder_results/results_by_chrom', c_dir)) for gene in genes_in_dir: if gene.endswith('.h5'): genes_eval += 1 gene_dict[gene[:-3]] = translate_gene_name(gene[:-3]) # print(genes_in_dir[:5]) # + gene_dict = {} genes_eval = 0 # for c_dir in os.listdir('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_excisionfinder_results/results_by_chrom'): for chrom in list(range(1,23)): c_dir = os.path.join('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_excisionfinder_results/results_by_chrom_5kb_window/',f'chr{chrom}_ef_results/') genes_in_dir = os.listdir(os.path.join('/pollard/data/projects/AlleleAnalyzer_data/1kgp_data/hg38_analysis/1kgp_excisionfinder_results/results_by_chrom', c_dir)) for gene in genes_in_dir: if gene.endswith('.h5'): genes_eval += 1 gene_dict[gene[:-3]] = translate_gene_name(gene[:-3]) # print(genes_in_dir[:5]) # + gene_dict_df = pd.DataFrame.from_dict(gene_dict, orient='index') gene_dict_df['gene'] = gene_dict_df.index gene_dict_df.columns = ['translated_gene','gene'] gene_dict_df.head() 
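# Persist the gene-name translation table so downstream scripts can map translated names back to the originals.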
gene_dict_df.to_csv('/pollard/data/projects/AlleleAnalyzer_data/AlleleAnalyzer_supporting_data/1000_genomes_analysis/hg38_analysis/excisionFinder_results/5kb_window/gene_dict.tsv', sep='\t', index=False) # - print(genes_eval) len(genes) # ## Gene only (autosomal) # + ppl = [] num_targ_genes = [] cas = [] for key in targ_genes_per_person: ppl.append(key) num_targ_genes.append(len(protein_coding_autosomal_genes.intersection(set(targ_genes_per_person[key])))) targ_genes_per_person_df = pd.DataFrame({'ppl':ppl, 'num_targ_genes':num_targ_genes}) targ_genes_per_person_df['perc_targ_genes'] = targ_genes_per_person_df['num_targ_genes'].divide(len(protein_coding_autosomal_genes)) * 100.0 targ_genes_per_person_df['perc_targ_genes'].mean() # - # ## Gene + 5 kb # + ppl = [] num_targ_genes = [] cas = [] for key in targ_genes_per_person_5kb: ppl.append(key) num_targ_genes.append(len(protein_coding_autosomal_genes.intersection(set(targ_genes_per_person_5kb[key])))) targ_genes_per_person_df = pd.DataFrame({'ppl':ppl, 'num_targ_genes':num_targ_genes}) targ_genes_per_person_df['perc_targ_genes'] = targ_genes_per_person_df['num_targ_genes'].divide(len(protein_coding_autosomal_genes)) * 100.0 targ_genes_per_person_df['perc_targ_genes'].mean() # - # # people targetable # # In this faceted density plot, height of the colored portion indicates the proportion of genes where the specified percentage of the 1000 genomes cohort is putatively targetable. # + # inspired and helped by this page: https://seaborn.pydata.org/examples/kde_joyplot.html plot_df = pd.read_csv('/pollard/home/kathleen/projects/AlleleAnalyzer/manuscript_analyses/1000genomes_analysis/src/hg19_analysis/plotting/targ_per_gene_and_cas.tsv', sep='\t') cas_dict = np.load('/pollard/data/projects/AlleleAnalyzer_data/cas_abbrev_dict.npy').item() cas_dict['StCas9'] = 'StCas9' cas_dict['all'] = 'all' plot_df['% people targetable per gene'] = plot_df['% people targetable']*100.0 sns.set(style="white", rc={"axes.facecolor": (0, 0, 0, 0)}, font_scale=1.2) cas_list = plot_df.Cas.drop_duplicates().tolist() pal = sns.cubehelix_palette(len(cas_list), rot=-.25, light=.7) g = sns.FacetGrid(plot_df, row='Cas', hue='Cas', aspect=10, size=.5, palette=pal) g.map(sns.kdeplot, '% people targetable per gene', shade=True, alpha=1, lw=1.5, bw=.2) g.map(sns.kdeplot, '% people targetable per gene', color="w", lw=2, bw=.2) g.map(plt.axhline, y=0, lw=2) # Define and use a simple function to label the plot in axes coordinates def label(x, color, label): ax = plt.gca() ax.text(0, .2, cas_dict[label], fontweight="bold", ha="left", va="center", fontsize=11,transform=ax.transAxes) g.map(label, '% people targetable per gene') # Set the subplots to overlap g.fig.subplots_adjust(hspace=-.25) # Remove axes details that don't play well with overlap g.set_titles("") g.set(yticks=[]) g.despine(bottom=True, left=True) # g.savefig('people_targetable_per_gene_per_cas.pdf', dpi=300, bbox_inches='tight') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import os import sys import time from datetime import datetime, timedelta from typing import Optional import ipywidgets as ipw import matplotlib.pyplot as plt import matplotlib_inline.backend_inline from ipyflex import FlexLayout from widget_helpers import price_card, stylesheet from openbb_terminal import api as openbb # %matplotlib widget 
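# Apply OpenBB's matplotlib style to the charts rendered in this dashboard.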
openbb.theme.applyMPLstyle() ipw.HTML(f"") # + def get_exchange_rate(currency_pair: str) -> str: """Get exchange rate for a currency pair.""" from_symbol, to_symbol = currency_pair.split("/") exchange_data = openbb.forex.models.av.get_quote( to_symbol=to_symbol, from_symbol=from_symbol ) return exchange_data["Realtime Currency Exchange Rate"]["5. Exchange Rate"][:6] def get_candle_widget( output: Optional[ipw.Output], to_symbol: str, from_symbol: str, ) -> ipw.Output: """Plot a candle chart for a currency pair.""" start_date = datetime.now() - timedelta(days=1) start_date = start_date.strftime("%Y-%m-%d") exchange_rate_data = openbb.forex.models.av.get_historical( to_symbol=to_symbol, from_symbol=from_symbol, start_date=start_date, resolution="i", interval="15", ) fig, ax = plt.subplots(1, 1, figsize=(11, 4)) openbb.forex.candle( data=exchange_rate_data, to_symbol=to_symbol, from_symbol=from_symbol, external_axes=[ax], ) ax.set_xlim(0, len(exchange_rate_data.index)) fig.canvas.header_visible = False fig.canvas.footer_visible = False with output: output.clear_output(True) fig.canvas.show() return output def on_dropdown_change(change): """Update charts on change of dropdown selection.""" if change["type"] == "change" and change["name"] == "value": output = ipw.Output() get_candle_widget( to_symbol=to_widget.value, from_symbol=from_widget.value, ) dashboard.children["Candle"] = output # - exchange_rates = { "EUR/USD": {"latest": None, "previous": None}, "USD/JPY": {"latest": None, "previous": None}, "GBP/USD": {"latest": None, "previous": None}, "AUD/USD": {"latest": None, "previous": None}, "USD/CAD": {"latest": None, "previous": None}, # "USD/CNY": {"latest": None, "previous": None}, # "USD/CHF": {"latest": None, "previous": None}, # "USD/HKD": {"latest": None, "previous": None}, } def compose_widgets(): widgets = {} for currency_pair in exchange_rates: price = exchange_rates[currency_pair]["latest"] price_color = "neutral_color" if exchange_rates[currency_pair]["previous"] is not None: if ( exchange_rates[currency_pair]["latest"] > exchange_rates[currency_pair]["previous"] ): price_color = "up_color" elif ( exchange_rates[currency_pair]["latest"] < exchange_rates[currency_pair]["previous"] ): price_color = "down_color" widgets[currency_pair] = ipw.HTML( price_card(ticker=currency_pair, price=price, price_color=price_color) ) return widgets # + # for currency_code in rates_to_usd: # rates_to_usd[currency_code]["previous"] = rates_to_usd[currency_code]["latest"] # rates_to_usd[currency_code]["latest"] = get_rate_against_usd(currency_code) # - widgets = compose_widgets() # + currency_list = openbb.forex.models.av.get_currency_list() from_widget = ipw.Dropdown( options=currency_list, value="EUR", description="From:", disabled=False, layout=ipw.Layout(margin="130"), ) to_widget = ipw.Dropdown( options=currency_list, value="USD", description="To:", disabled=False, ) exchange_selection = ipw.VBox([from_widget, to_widget]) widgets["Select"] = exchange_selection # - output = ipw.Output() output = get_candle_widget( output=output, to_symbol=to_widget.value, from_symbol=from_widget.value, ) widgets["Candle"] = output from_widget.observe(on_dropdown_change) to_widget.observe(on_dropdown_change) dashboard = FlexLayout( layout_config={"borderLeft": False, "borderRight": False, "enableSection": False}, style={ "height": "calc(100vh - 80px)", "backgroundColor": "rgb(0 0 0)", "fontFamily": "Consolas", "fontWeight": 800, }, header={ "title": "Currencies", "style": { "backgroundColor": "rgb(0 0 0)", 
"fontWeight": 400, "fontSize": "28px", }, "buttons": [], }, widgets=widgets, template=os.path.join("widgets", "currencies.json"), editable=False, ) dashboard # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="dy3f-wbDVvlf" import pymongo # + id="r25shoWjV0yF" sidharth = pymongo.MongoClient("mongodb://:@cluster0-shard-00-00.rd8et.mongodb.net:27017,cluster0-shard-00-01.rd8et.mongodb.net:27017,cluster0-shard-00-02.rd8et.mongodb.net:27017/myFirstDatabase?ssl=true&replicaSet=atlas-232pjm-shard-0&authSource=admin&retryWrites=true&w=majority") # + [markdown] id="p1hEC8U7EZiN" # * create a Database: # + id="2hst6d_kV00-" db = sidharth["Guvi_assessment"] # + [markdown] id="mBb1vInOEW4O" # * create a Collection: # # + id="Ui2Zro9wV04g" col = db["Telephone_Directory"] # + [markdown] id="meuI8bKyEdQW" # # create a Directory: # + id="lOnqqtuSpOFP" directory = [{"name":"SARAVANAN","department":"REVENUE_department","designation":"SUB_COLLECTOR","mobile":9445000413}, {"name":"DIVYASHRI","department":"REVENUE_department","designation":"RDO","mobile":9444964899}, {"name":"RAJASEKAR","department":"RURAL_DEVELOPMENT","designation":"ASST_DIRECTOR ","mobile":7402606005}, {"name":"KAVITHA","department":"RURAL_DEVELOPMENT","designation":"EXECUTIVE_ENGINEER","mobile":7373704561}, {"name":"MAHESHWARI","department":"MUNICIPALITY","designation":"COMMISSIONER","mobile":7397372823}, {"name":"ANANDHAJODHI","department":"MUNICIPALITY","designation":"MUNICIPAL_ENGINEER","mobile":9443439329}, {"name":"SATHIYASEELAN","department":"HIGHWAYS","designation":"DIVISIONAL_ENGINEER","mobile":9443434004}, {"name":"RAMESH","department":"PWD","designation":"ENGINEER_WRO","mobile":9884777234}, {"name":"ARASU","department":"PWD","designation":"ENGINEER_BUILDINGS","mobile":9443991445}, {"name":"PALANI","department":"HEALTH_department","designation":"DDHS","mobile":9442309909}, {"name":"ANURADHA","department":"HEALTH_department","designation":"D.O.FOOD_SAFETY","mobile":9443520332}, {"name":"ABDUL_BHARI","department":"FIRE_AND_RESCUE","designation":"ADO","mobile":9445086137}, {"name":"SEKAR","department":"TNEB","designation":"SUPERINTENDING_ENGINEER","mobile":9445855444}, {"name":"KOTEESWARI","department":"TNEB","designation":"AEE","mobile":9445855156}, {"name":"PREMAVATHY","department":"AGRICULTURE","designation":" DIRECTOR","mobile":9445335303}, {"name":"GANESAN","department":"AGRICULTURE","designation":"ASSISTANT","mobile":9894215521}, {"name":"SATHYA_MOORTHI","department":"EDUCATION","designation":"CEO","mobile":9443065779}, {"name":"ELLAPPAN","department":"EDUCATION","designation":"DEO","mobile":9442630704}, {"name":"RADHAKRISHNAN","department":"EDUCATION","designation":"DEO","mobile":9444334963}, {"name":"DHINAKARAN","department":"TRANSPORT","designation":"RTO","mobile":9384808153}, {"name":"PARVEND","department":"TRANSPORT","designation":"RTO","mobile":9384808145}, {"name":"SATHISH","department":"FOREST","designation":"DFO","mobile":8919104054}, {"name":"MALA","department":"SLUM_CLEARANCE_BOARD","designation":"EE","mobile":8637400624}, {"name":"SASIKUMAR","department":"FACTORIES","designation":"DD_INSPECTOR","mobile":9941186500}, {"name":"VIMALANADHAN","department":"FACTORIES","designation":"INSPECTOR_OF_LABOUR","mobile":8778619552} ] # + [markdown] id="E5nsYf1rEofx" # # Insert the record into the collection: # + id="yZrLSg3RpOIn" a = col.insert_many(directory) # + [markdown] 
id="fn4gn8hREufZ" # # Make a query to find records you just created: # + colab={"base_uri": "https://localhost:8080/"} id="M0C72C2a8w6U" outputId="cf8879ed-5686-43dc-849a-351e74d6c750" for i in col.find({},{"_id":0}): print(i) # + [markdown] id="UAUVTRb9EN60" # # Modify the records, use the update_one() method: # + colab={"base_uri": "https://localhost:8080/"} id="oLBI8dwf8w0Q" outputId="4fcea143-0ce4-416a-c67c-31e6e4a9f8da" # updating the mobile_number & department of Radhakrishnan: col.update_one({"name":"RADHAKRISHNAN"},{"$set":{"department":"HEALTH_department","mobile":9999555500}}) # + colab={"base_uri": "https://localhost:8080/"} id="kAhm0NvtFsz5" outputId="fcb69810-5fac-4f2b-c038-9c228a6247a8" for i in col.find({"department":{"$in":["HEALTH_department","EDUCATION"]}},{"_id":0,"name":1,"department":1,"mobile":1}): print(i) # + [markdown] id="f_nje_dAE75R" # # Delete the record, use delete_one() method: # + colab={"base_uri": "https://localhost:8080/"} id="a6HZcRXfE_mp" outputId="2b983b0e-1f1c-4eeb-bcb2-a191ca7d97ff" # A record belonging to Forest department is deleted: col.delete_one({"department":"FOREST"}) # + colab={"base_uri": "https://localhost:8080/"} id="9Q4bPlvxJf6m" outputId="93c1ea07-9fdf-479f-9a94-e66fb2e45ac9" for i in col.find({},{"_id":0,"department":1,"mobile":1}): print(i) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # List and its default Functions a = ["asish","rishabh","ankur"] print(len(a)) #define a list my_list = [4,7,0,3] #get an iterator using iter() my_iter = iter(my_list) # + #iterate through it using next() #output:4 print(next(my_iter)) # - #output:7 print(next(my_iter)) # + #next(obj) is same as obj.__next__() #output:0 print(my_iter.__next__()) # - #output:3 print(my_iter.__next__()) #This will raise error,no items left next(my_iter) >>> for element in my_list: print (element) # + number = [1,2,3,4,5,6,7,8,9] largest_number = max(number); print("The largest number is:", largest_number) # + languages = ["Python", "C Programming", "Java"] largest_string = max(languages); print("The largest string is:", largest_string) # + number = [1,2,3,4,5,6,7,8,9] smallest_number = min(number); print("The smallest number is:", smallest_number) # + languages = ["Python", "C Programming", "Java"] smallest_string = min(languages); print("The smallest string is:", smallest_string) # + #How range works in python #empty range print(list(range(0))) #using range(stop) print(list(range(10))) #using range(start, stop) print(list(range(1, 10))) # + start=2 stop=14 step=2 print(list(range(start, stop, step))) # - # # Dictionary and its default Functions # + d= {1: "one", 2: "two"} d.clear() print('d =', d) # + original = {1: "one", 2: "two"} new = original.copy() print('original:', original) print('new:', new) # + person = {'name': 'Asish', 'age':22} print('Name: ',person.get('name')) print('Age: ',person.get('age')) #value is not provided print('Salary: ',person.get('salary')) #value is provided print('Salary: ',person.get('salary',0.0)) # + #random sales dictionary sales = { 'apple': 2, 'orange': 3, 'grapes' : 4} items = sales.items() print('original items:', items) #delete an item from dictionary del[sales['apple']] print('updated items:',items) # + person = {'name': 'Asish', 'age': 22, 'salary':3500.0} print(person.keys()) empty_dict = {} print(empty_dict.keys()) # + #random sales dictionary sales = { 
'apple': 2, 'orange': 3, 'grapes' : 4} element = sales.pop('apple') print('The popped element is:',element) print('The dictionary is:',sales) # + person = {'name': 'Asish', 'age': 22, 'salary':3500.0} #('salary',3500.0)is inserted at the last,so it is removed. result=person.popitem() print('Return value =', result) print('Person =', person) #inserting a new element pair person['profession'] = 'Plumber' #now ('profession', 'plumber')is the lastest element result=person.popitem() print('Return value =', result) print('Person =', person) # + person = {'name': 'Asish', 'age':22} age=person.setdefault('age') print('Person =', person) print('Age =', age) # + d= {1: "one", 2: "three"} d1= {2: "two"} #update the value of key 2 d.update(d1) print(d) d1= {3: "three"} #adds element with key 3 d.update(d1) print(d) # + #random sales dictionary sales = { 'apple': 2, 'orange': 3, 'grapes' : 4} print(sales.values()) # - # # Sets and its default Functions # + #set of vowels vowels = {'a','e', 'i', 'u'} #adding 'o' vowels.add('o') print('vowels are:',vowels) #adding 'a' again vowels.add('a') print('vowels are:',vowels) # + #set of vowels vowels = {'a', 'e', 'i', 'o', 'u'} print('vowels(before clear):', vowels) #clearing vowels vowels.clear() print('vowels(after clear):', vowels) # + numbers = {1,2,3,4} new_numbers = numbers new_numbers.add(5) print('numbers:', numbers) print('new_numbers:', new_numbers) # + A = {'a', 'b', 'c','d'} B = {'c', 'f', 'g'} #Equivalent to A-B print(A.difference(B)) #Equivalent to B-A print(B.difference(A)) # + A = {'a', 'b', 'c','d'} B = {'c', 'f', 'g'} result = A.difference_update(B) print('A = ', A) print('B = ', B) print('result = ', result) # + A = {'a', 'b', 'c','d'} print('Return value is', A.pop()) print('A = ', A) # + #language set language = {'English', 'French', 'German'} #removing 'German' from language language.remove('German') #updated language set print('updated language set:', language) # + A = {'a', 'c', 'd'} B = {'c', 'd', 2} C = {1,2,3} print('A U B =', A.union(B)) print('B U C =', B.union(C)) print('A U B U C =', A.union(B, C)) print('A.union()=', A.union()) # + A = {'a', 'b'} B = {1,2,3} result = A.update(B) print('A = ', A) print('result = ', result) # - # # Tuple and explore default methods # + #Use of Tuple count() #Vowels tuple vowels = ('a', 'e', 'i', 'o', 'i', 'u') #count element 'i' count = vowels.count('i') #print count print('The count of i is:', count) #count element 'p' count = vowels.count('p') #print count print('The count of p is:', count) # + #Vowels tuple vowels = ('a', 'e', 'i', 'o', 'i', 'u') #index of 'e' in vowels index=vowels.index('e') print('The index of e:', index) #element of i is searched #index of the first 'i' is returned index = vowels.index('i') print('The index of i:', index) # - # # String and explore default methods # + #define string string = "Python is awesome,isn't it?" substring = "is" count = string.count(substring) #print count print("The count is:", count) # + string = "python is AWesome." capitalized_string = string.capitalize() print('Old String: ', string) print('Capitalized String:', capitalized_string) # + # unicode string string = 'pythön!' 
# print string print('The string is:', string) # default encoding to utf-8 string_utf = string.encode() # print result print('The encoded version is:', string_utf) # + string = "Python is awesome" new_string = string.center(24) print("Centered String: ", new_string) # + s = 'this is good' print(s.islower()) s = 'th!s is a1so g00d' print(s.islower()) s = 'this is Not good' print(s.islower()) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] id="gW-Ju3o3WxJj" # ### Zaimportowanie bibliotek # - import datetime import numpy as np import pandas as pd import tensorflow as tf import matplotlib.pyplot as plt print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) # + [markdown] id="xM12BryyW1Uz" # ### Wczytanie wierszy i odpowiednich kolumn z plików # + id="xok-RpbxznTO" columns = ['station_id', 'day', 'month', 'year', 'mean temperature [deg F]', 'mean dew point [deg F]', 'mean pressure (sea level) [Pa]', 'mean pressure (station) [Pa]', 'mean visibility [mile]', 'mean wind speed [knot]', 'max wind gust [knot]', 'max temperature [deg F]', 'min temperature [deg F]', 'total precipitation [inch]', 'snow depth [inch]'] train_df = pd.read_excel('data_JA_df_training.xlsx')[columns] test_df = pd.read_excel('data_JA_df_evaluation.xlsx')[columns] # + [markdown] id="-Hli_B2JX2sn" # ### Znormalizowanie danych # + id="XriI-lSqX1sg" columns = ['station_id', 'mean dew point [deg F]', 'mean pressure (sea level) [Pa]', 'mean pressure (station) [Pa]', 'mean visibility [mile]', 'mean wind speed [knot]', 'max wind gust [knot]', 'max temperature [deg F]', 'min temperature [deg F]', 'total precipitation [inch]', 'snow depth [inch]'] train_mean = train_df[columns].mean() train_std = train_df[columns].std() test_mean = test_df[columns].mean() test_std = test_df[columns].std() train_df[columns] = (train_df[columns] - train_mean) / train_std test_df[columns] = (test_df[columns] - test_mean) / test_std # + [markdown] id="YoFS71TnW8MH" # ### Dodanie kolumny z datą do późniejszego przygotowania sekwencji # + id="ioit4mGHTpgM" train_df['date'] = train_df.apply(lambda d: datetime.date(day=int(d.day), month=int(d.month), year=int(d.year)), axis=1) test_df['date'] = test_df.apply(lambda d: datetime.date(day=int(d.day), month=int(d.month), year=int(d.year)), axis=1) # + [markdown] id="KcAicOCPXBc5" # ### Pogrupowanie po stacjach # + colab={"base_uri": "https://localhost:8080/"} id="iYUEA976Mq28" outputId="31c23e36-575a-43ca-cf16-71b2678391b1" df_stations_train = train_df.groupby("station_id") df_stations_test = test_df.groupby("station_id") print(df_stations_train.size()) print(df_stations_test.size()) # + [markdown] id="BnULcSQnXENY" # ### Utworzenie sekwencji treningowej i testowej # + id="JWYbtrSJPKOC" two_weeks_diff = datetime.timedelta(days = 13) # w sumie 14 dni st_index = 0 stations_train_value = [] stations_train_label = [] for name, station_train in df_stations_train: stations_train_value.append([]) stations_train_label.append([]) for index, station_measurement in station_train.iterrows(): m_station_id = station_measurement.station_id m_date_to = station_measurement.date m_date_from = m_date_to - two_weeks_diff; two_weeks_sequence = station_train[(station_train['date'] >= m_date_from) & (station_train['date'] <= m_date_to)] if len(two_weeks_sequence) == 14: 
stations_train_value[st_index].append(two_weeks_sequence[columns]) stations_train_label[st_index].append(station_measurement['mean temperature [deg F]']) st_index += 1 # + id="qdEDOwDpvPkZ" two_weeks_diff = datetime.timedelta(days = 13) # w sumie 14 dni st_index = 0 stations_test_value = [] stations_test_label = [] for name, station_train in df_stations_test: stations_test_value.append([]) stations_test_label.append([]) for index, station_measurement in station_train.iterrows(): m_station_id = station_measurement.station_id m_date_to = station_measurement.date m_date_from = m_date_to - two_weeks_diff; two_weeks_sequence = station_train[(station_train['date'] >= m_date_from) & (station_train['date'] <= m_date_to)] if len(two_weeks_sequence) == 14: stations_test_value[st_index].append(two_weeks_sequence[columns]) stations_test_label[st_index].append(station_measurement['mean temperature [deg F]']) st_index += 1 # + [markdown] id="kKT0kmi7pMfH" # ### Zamiana list na numpy array # + colab={"base_uri": "https://localhost:8080/"} id="y0hFUr4IO3Mn" outputId="af11239f-ad6e-4b9d-e0d4-ef7c530956aa" train_final_value = np.array(stations_train_value) train_final_label = np.array(stations_train_label) print(train_final_value.shape) print(train_final_label.shape) for index, row in enumerate(train_final_value): train_final_value[index] = np.array(row).astype(np.float32) for index, row in enumerate(train_final_label): train_final_label[index] = np.array(row).astype(np.float32) print(train_final_value[0][0]) print(train_final_label[0][0]) # + colab={"base_uri": "https://localhost:8080/"} id="MZBBetSgvo5Z" outputId="e20e2d0f-b495-4191-ef0a-0d29f9a3ec44" test_final_value = np.array(stations_test_value) test_final_label = np.array(stations_test_label) print(test_final_value.shape) print(test_final_label.shape) for index, row in enumerate(test_final_value): test_final_value[index] = np.array(row).astype(np.float32) for index, row in enumerate(test_final_label): test_final_label[index] = np.array(row).astype(np.float32) print(test_final_value[0][0]) print(test_final_label[0][0]) # + [markdown] id="ABd2G_Z6p_be" # ### Połączenie sekwencji z różnych stacji # + id="zCPS-qN-qILS" train_final_value_concat = np.concatenate(train_final_value) train_final_label_concat = np.concatenate(train_final_label) test_final_value_concat = np.concatenate(test_final_value) test_final_label_concat = np.concatenate(test_final_label) # + [markdown] id="bXPnsadkutqL" # ### Sprawdzenie przykładowych elementów # + colab={"base_uri": "https://localhost:8080/"} id="z2A2Y1W1uw8g" outputId="a5c0161e-4631-4176-b27c-10d0ddec8a91" print(train_final_value_concat[0]) print(train_final_label_concat[0]) # + colab={"base_uri": "https://localhost:8080/"} id="M_MPZI7WwARv" outputId="cbb949bb-1a59-4d5e-9d14-542639143285" print(test_final_value_concat[0]) print(test_final_label_concat[0]) # + [markdown] id="DQ6ocStLpRk0" # ### Zbudowanie modelu # + colab={"base_uri": "https://localhost:8080/"} id="VHVRLZ0QyWPe" outputId="ae7bb6c9-46fb-4cc6-d551-8a91e1aca00e" lstm_model_lr = tf.keras.models.Sequential([ tf.keras.layers.LSTM(256, activation='relu', return_sequences=True), tf.keras.layers.LSTM(128, activation='relu', return_sequences=True), tf.keras.layers.LSTM(64, activation='relu'), tf.keras.layers.Dense(units=1, activation='relu') ]) # + colab={"base_uri": "https://localhost:8080/"} id="capXMiWgus7W" outputId="724b16ac-5995-4f4f-b6ae-7159abab619a" MAX_EPOCHS = 30 early_stopping=tf.keras.callbacks.EarlyStopping( monitor='val_mean_squared_error', 
min_delta=0, patience=10, verbose=2, mode='auto', baseline=None, restore_best_weights=True) lstm_model_lr.compile(loss=tf.losses.MeanSquaredError(), optimizer=tf.optimizers.Adam(learning_rate=0.0001), metrics=[tf.metrics.MeanSquaredError()], ) history = lstm_model_lr.fit(train_final_value_concat, train_final_label_concat, validation_split=0.2, epochs=MAX_EPOCHS, callbacks=[early_stopping]) # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="f_Hys0k-rTSX" outputId="42853e7e-352d-4673-9b32-3c74db37ca73" plt.plot(history.history['mean_squared_error']) plt.plot(history.history['val_mean_squared_error']) plt.title('model MSE') plt.ylabel('MSE') plt.xlabel('epoch') plt.legend(['train', 'val'], loc='upper left') plt.show() # + [markdown] id="s7Aj-o_swM6t" # ### Ewaluacja # + colab={"base_uri": "https://localhost:8080/"} id="3Zo0QkCsylFt" outputId="f2e95342-f4d3-4ce8-e184-4a86da87a6db" lstm_model_lr.evaluate(test_final_value_concat, test_final_label_concat) # + [markdown] id="7MN91LtB_wq9" # ### Wyświetlenie wykresu # + id="4Xbi2Srl_yOV" predictions = lstm_model_lr.predict(test_final_value_concat, batch_size = 1) start = 0 end = 200 fig = plt.figure() ax = fig.add_subplot(111) ax.plot(test_final_label_concat[start:end], 'b', predictions[start:end], 'g') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- subreddits = ['news', 'politics', 'worldnews', 'Liberal', 'progressive', 'democrats', 'Conservative', 'The_Donald', 'Republican'] news_subreddits = ['news', 'politics', 'worldnews'] left_subreddits = ['Liberal', 'progressive', 'democrats'] right_subreddits = ['Republican'] #'Conservative', 'The_Donald', import os os.chdir('../../..') import convokit from convokit import Corpus, FightingWords root_dir = "/Users/calebchiam/Documents/datasets/" subreddit_corpora = dict() for subreddit in left_subreddits + right_subreddits: subreddit_corpora[subreddit] = Corpus(filename=root_dir + subreddit + "-filtered-labelled-small") for subreddit, corpus in subreddit_corpora.items(): print(subreddit) corpus.print_summary_stats() print() # + political_corpus = None for subreddit, corpus in subreddit_corpora.items(): if political_corpus is None: political_corpus = corpus else: political_corpus = political_corpus.merge(corpus) # - political_corpus.print_summary_stats() FW = FightingWords(l1_selector=lambda utt: utt.meta['subreddit'] == 'democrats', l2_selector=lambda utt: utt.meta['subreddit'] == 'Republican', ngram=(1,2)) # %matplotlib qt FW.summarize(political_corpus) FW.annot_method FW.top_k # %matplotlib inline FW.plot_fighting_words(max_label_size=15) FW.top_k = 15 FW.get_zscore('reform') FW.get_zscore('control') FW.get_zscore('liberals') FW.get_zscore('idiots') FW.get_zscore('trump') FW.get_zscore('cuck') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="37puETfgRzzg" # # Data Preprocessing Tools # + [markdown] colab_type="text" id="EoRP98MpR-qj" # ## Importing the libraries # + colab={} colab_type="code" id="N-qiINBQSK2g" import numpy as np import matplotlib.pyplot as plt import pandas as pd # + [markdown] colab_type="text" id="RopL7tUZSQkT" # ## Importing the dataset # + colab={} colab_type="code" id="WwEPNDWySTKm" dataset 
= pd.read_csv('Data.csv') # pandas dataframe - each column is a categorical variable # Every machine learning model has features and labels. # Features(independent variables) are inputs, label(dependent variables/target) are output # last column --> labels, all columns except last column --> features # X --> features, y --> label # iloc[r:r,c:c] # [:] --> no range, means take everything # [:, :-1] --> take all rows, all columns excluding last one (uppper bound not included in slicing) # [:, -1] --> take all rows, of last column # -1 --> index of last column # Create maxtrix of features and label variables X = dataset.iloc[:, :-1].values y = dataset.iloc[:, -1].values # + colab={"base_uri": "https://localhost:8080/", "height": 188} colab_type="code" id="hCsz2yCebe1R" outputId="1e4cc568-4e51-4b38-9d46-4aa3f15204be" print(X) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="eYrOQ43XcJR3" outputId="e0873b2a-3b08-4bab-ef0d-15b88858ca44" print(y) # + [markdown] colab_type="text" id="nhfKXNxlSabC" # ## Taking care of missing data # + # Checking for missing data np.isnan(np.sum(X[:, 1:3])) # + colab={} colab_type="code" id="c93k7ipkSexq" from sklearn.impute import SimpleImputer imputer = SimpleImputer(missing_values=np.nan, strategy='mean') # select only the numberical columns imputer.fit(X[:, 1:3]) # fit() will look at NaN values, and will compute mean X[:, 1:3] = imputer.transform(X[:, 1:3]) # transform() will replace NaN by mean(replace) # update columns with returned by transform() # - np.isnan(np.sum(X[:, 1:3])) # + colab={"base_uri": "https://localhost:8080/", "height": 188} colab_type="code" id="3UgLdMS_bjq_" outputId="254af4e0-681e-47f5-aaa7-b9c6f43258e9" print(X) # + [markdown] colab_type="text" id="CriG6VzVSjcK" # ## Encoding categorical data # + [markdown] colab_type="text" id="AhSpdQWeSsFh" # ### Encoding the Independent Variable # + colab={} colab_type="code" id="5hwuVddlSwVi" # Ml models cannot learn from strings. It can only learn from binary numbers - 0 and 1 # so we will convert all columns of str to int # note: some columns have type object. 
Conversion --> object to str to int # convert categorical variables to dummy variables (0 or 1) from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OneHotEncoder # [0,1,2] --> indexes of the columns ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [0])], remainder='passthrough') X = np.array(ct.fit_transform(X)) # fit() and transform() at the same time # remainder='passthrough' --> columns, other than the ones specified, will remain untouched # + colab={"base_uri": "https://localhost:8080/", "height": 188} colab_type="code" id="f7QspewyeBfx" outputId="5b35feef-7fe2-46ef-ce70-80495f94f4ed" print(X) # + [markdown] colab_type="text" id="DXh8oVSITIc6" # ### Encoding the Dependent Variable # + colab={} colab_type="code" id="XgHCShVyTOYY" # Replace "Yes" or "No" by binary variables # Label Encoder is only for labels/output from sklearn.preprocessing import LabelEncoder le = LabelEncoder() y = le.fit_transform(y) # send one single matrix/vector, y doesn't necessarily need to be a numpy array # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="FyhY8-gPpFCa" outputId="7f76ef29-5423-4c3e-cf69-45fbc366a997" print(y) # + [markdown] colab_type="text" id="qb_vcgm3qZKW" # ## Splitting the dataset into the Training set and Test set # + colab={} colab_type="code" id="pXgA6CzlqbCl" from sklearn.model_selection import train_test_split # test size = 0.2 --> Train = 80% of data, test = 20% of data # random_state = 0 --> to keep the random split the same - to make results reproducable -- like np.random.seed() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1) # Train set --> train model on existing observations # Test set --> evaluate model on new observations (future data) # X_train corresponds to y_train # X_test corresponds to y_test # - X_train.shape, y_train.shape, X_test.shape, y_test.shape # + colab={"base_uri": "https://localhost:8080/", "height": 154} colab_type="code" id="GuwQhFdKrYTM" outputId="de1e527f-c229-4daf-e7c5-ea9d2485148d" print(X_train) # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="TUrX_Tvcrbi4" outputId="9a041a9b-2642-4828-fa2f-a431d7d77631" print(X_test) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="pSMHiIsWreQY" outputId="5afe91e0-9244-4bf5-ec1b-e3e092b85c08" print(y_train) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="I_tW7H56rgtW" outputId="2a93f141-2a99-4a69-eec5-c82a3bb8d36b" print(y_test) # + [markdown] colab_type="text" id="TpGqbS4TqkIR" # ## Feature Scaling # + colab={} colab_type="code" id="AxjSUXFQqo-3" # Scale features to make sure all values are in the same scale (Prevent one feature from dominating the other) # Feature scaling is done after train,test split (to prevent information leakage) # Scaling on both train and test set must be same (Better to do at once). Although they are done seperately # Scaling is not done on y(labels) # Feature scaling is not needed for all ML algorithms. Only for some. from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train[:, 3:] = sc.fit_transform(X_train[:, 3:]) # Take all rows, and columns from 3rd to last (excluding dummy variable columns) X_test[:, 3:] = sc.transform(X_test[:, 3:]) # only apply transform() because test state must be applied the same scale as train state # in order to get the same transformation. 
(fit_transform would have given a new scale) # IMP: Standardisation is not done on dummy variables (variables that have been one hot encoded) # because we lose our numerical values # Only apply them on numerical variables # + colab={"base_uri": "https://localhost:8080/", "height": 154} colab_type="code" id="DWPET8ZdlMnu" outputId="dea86927-5124-4e2a-e974-2804df9a913c" print(X_train) # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="sTXykB_QlRjE" outputId="b68f0cfc-d07c-48cb-80d0-6800028c41f9" print(X_test) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: keras # language: python # name: keras # --- # # Instructions # + # # BEFORE RUNNING # # Preprocess data in a terminal from this directory using # # $ python ./process-data.py # # Note: This script may take several hours on a modern computing device. # # - # # Imports # + # %matplotlib inline import pandas as pd import matplotlib import numpy as np import matplotlib.pyplot as plt import seaborn as sns import sys sys.path.append("../") import game_utils # - # # Performance by Group Size # ## Config data_dir = '../../data/experiment3/' # ## Load Data # + games = [] games += game_utils.get_games(data_dir, 'experiment') data = game_utils.get_data(data_dir, games) # # only one game had six players # data = data[data['active_n_players'] < 6] data.rename(columns={'difficulty': 'Noise Level'}, inplace=True) data["Noise Level"].replace({'1en01':'Low', '2en01':'High'}, inplace=True) # - with pd.option_context('display.max_rows', None): print(data.groupby('game')['score'].mean()) data.groupby(['active_n_players','Noise Level'])['score'].count()/data.groupby(['active_n_players','Noise Level'])['active_n_players'].mean() data[data['active_n_players'] == 5].groupby('game')['score'].mean() len(set(data[(data['active_n_players'] == 5) & (data['Noise Level'] == 'Low')]['game'])) np.mean(data['Noise Level'] == 'High') # ## Plot Performance # + sns.set(font = "serif", context = "poster", style = "white") sns.despine() sns.factorplot("active_n_players", "second_half_score", hue = "Noise Level", markers = ["o", "s"], linestyles = ["-","--"], data = data, kind="point", dodge = 0.15, units = "game", order = [1,2,3,4,5,6]) plt.xlabel('Number of Players') plt.ylabel('Mean Score') plt.savefig('../../plots/performance-summary.pdf') # - # # State Analysis # ## Plot State Frequency by Score def state_analysis(score, states, subset, group_size): sns.set(font = "serif", context = "poster", style = "white") fig, ax = plt.subplots() #plt.rcParams.update(pd.tools.plotting.mpl_stylesheet) #ax.set_color_cycle(['b', 'g', 'y']) #mpl.rc('font',family='Times New Roman') from seaborn import color_palette with color_palette("colorblind"): for s in states: plt.plot(score,states[s],label = s, lw = 10) #mpl.rc('font',family='Times New Roman') plt.xlabel('Score', fontsize=50) plt.ylabel('Probability', fontsize=50) if subset == '1en01': noise_level = "Low" elif subset == '2en01': noise_level = "High" plt.title('Noise Level: ' + noise_level + ', Group Size: ' + str(group_size), fontsize = 30) plt.xlim((0,1)) plt.ylim((0,1)) plt.legend(loc='upper left') plt.setp(plt.gca().get_legend().get_texts(), fontsize='40') ax.tick_params(axis='x', labelsize=40) ax.tick_params(axis='y', labelsize=40) fig = plt.gcf() fig.set_size_inches(12.5,12.5) fig.savefig('../../plots/states-'+subset+'-'+str(group_size)+'.pdf')#,dpi=100) # ## Low Noise 
in_dir = '../../processed/'
state_df = []
for subset in ['1en01','2en01']:
    for n in range(2,6):
        score, states = game_utils.get_state_scores(in_dir, subset, n)
        for i in range(len(score)):
            for s in ['exploring', 'exploiting', 'copying']:
                state_df += [[subset, n, s, score[i], states[s][i]]]
state_df = pd.DataFrame(state_df)
state_df.columns = ['Noise Level', 'Num Players', 'State', 'Score', 'Proportion']

# +
#sns.set(font = "serif", context = "poster", style = "white")
sns.set_context("notebook", font_scale=1.5, rc={"lines.linewidth": 0.1})
sns.factorplot('Score', 'Proportion', hue = 'Noise Level', col = 'State', row = 'Num Players', kind = 'point', data = state_df)
# -

in_dir = '../../processed/'
subset = '1en01'
for n in range(2,6):
    score, states = game_utils.get_state_scores(in_dir, subset, n)
    state_analysis(score, states, subset, n)

# ## High Noise

in_dir = '../../processed/'
subset = '2en01'
for n in range(2,6):
    score, states = game_utils.get_state_scores(in_dir, subset, n)
    state_analysis(score, states, subset, n)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: keras
#     language: python
#     name: keras
# ---

# - title: Pseudo online LSTM RNN - revisited
# - author:
# - date: 2018-10-04
# - category: ai
# - tags: keras, lstm, rnn, online learning

# The goal of this exercise is to train an LSTM model using streaming data points $(X_t)$ to predict
# the next data point $(X_{t+1})$. To train the model, the input stream needs to be converted into
# samples over time and into batches of such samples.
#
# First, some code for plots:

# +
import numpy as np
from keras import layers, models
import matplotlib.pyplot as plt

def plot_samples(samples):
    fig = plt.figure(figsize=(10,6))
    ax = fig.add_subplot(1,1,1)
    ax.set_ylim(-1,1)
    sample0 = samples[0]
    ax.plot(sample0['data'][:,0],"--",label=sample0['name'])
    sample1 = samples[1]
    ax.plot(sample1['data'][:,0],label=sample1['name'],linewidth=3,alpha = 0.5)
    if len(samples) > 2:
        sample2 = samples[2]
        ax.plot(sample2['data'][:,0],label=sample2['name'],linewidth=2,alpha = 0.5)
    plt.legend()
    plt.show()
# -

# ## Simple utility class to convert data stream to batches

class OnlineToBatch(object):
    def __init__(self, n_batch, t_sample, input_shape):
        self.n_batch = n_batch
        self.t_sample = t_sample
        self.input_shape = input_shape
        self.buffer = []

    def append(self, input):
        # buffer incoming points; once enough have accumulated, emit a (batch_x, batch_y) pair and reset
        self.buffer.append(input)
        if len(self.buffer) == self.t_sample + self.n_batch + 1:
            batch_x = np.array([self.buffer[i:i+self.t_sample] for i in range(0, self.n_batch)]).reshape((self.n_batch, self.t_sample) + self.input_shape)
            batch_y = np.array([self.buffer[i+1:i+self.t_sample+1] for i in range(0, self.n_batch)]).reshape((self.n_batch, self.t_sample) + self.input_shape)
            self.buffer = []
            return (batch_x, batch_y)
        return None
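# (Not part of the original post.) A minimal sanity check of OnlineToBatch, assuming the class above:
# with t_sample=3 and n_batch=4 a batch should be emitted after 3 + 4 + 1 = 8 points, with batch_x and
# batch_y both of shape (4, 3, 1) and batch_y shifted one time step ahead of batch_x.

# +
otb = OnlineToBatch(n_batch=4, t_sample=3, input_shape=(1,))
for t in range(8):
    out = otb.append(np.sin(t / 10.0))
    if out is not None:
        bx, by = out
        print(bx.shape, by.shape)  # expected: (4, 3, 1) (4, 3, 1)
# -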
# ## Create LSTM model

# Here we create a simple LSTM based model to test how the OnlineToBatch drives training.

# +
def define_model(t_sample, n_batch, data_point_shape=(1,)):
    input_layer = layers.Input(batch_shape=(None, t_sample) + data_point_shape, name="input")
    rnn = layers.LSTM(16, return_sequences=True, name="RNN")(input_layer)
    dense = layers.TimeDistributed(layers.Dense(np.empty(data_point_shape).size, name="dense"))(rnn)
    model = models.Model(inputs=[input_layer], outputs=[dense])
    model.compile(loss="mean_squared_error", sample_weight_mode="temporal", optimizer="rmsprop")
    return model

X_t_shape = (1,)
t_sample = 100
n_batch = 200
model = define_model(t_sample = t_sample, data_point_shape = X_t_shape, n_batch = n_batch)
model.summary()
# -

# ## Train the model on a noisy $sin$ wave input stream

# +
online_to_batch = OnlineToBatch(n_batch=n_batch, t_sample=t_sample, input_shape=X_t_shape)
last_x_batch = None
last_y_batch = None
for i in range(0, 2000):
    x = np.sin(i/10.0 + np.random.random_sample() * 0.2)
    data = online_to_batch.append(x)
    if data is not None:
        (last_x_batch, last_y_batch) = data
        plot_samples([ {'name': 'X', 'data': last_x_batch[0]}, {'name': 'y', 'data': last_y_batch[0]} ])
        model.fit(last_x_batch, last_y_batch, validation_split=0.2, epochs=20, verbose=0)
# -

# ## Predict using the model

# +
predict_t = t_sample * 3
test_x = np.empty((predict_t,) + X_t_shape)
test_x[0:t_sample] = last_x_batch[-1]
for i in range(predict_t - t_sample):
    pred = model.predict(np.array([test_x[i:t_sample+i]]))[0][-1]
    test_x[t_sample + i] = pred
test_x[0:t_sample-1] = np.nan
test_x = test_x[1:]
plot_samples([ {'name': 'X', 'data': last_x_batch[-1]}, {'name': 'ypred', 'data': test_x} ])
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: PythonData
#     language: python
#     name: pythondata
# ---

# +
# The next code corresponds to the PyCitySchools_Challenge_starter_code.ipynb file provided for this challenge.

# +
# Dependencies and Setup
import pandas as pd

# File to Load (Remember to change the path if needed.)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"

# Read the School Data and Student Data and store into a Pandas DataFrame
school_data_df = pd.read_csv(school_data_to_load)
student_data_df = pd.read_csv(student_data_to_load)

# Cleaning Student Names and Replacing Substrings in a Python String
# Add each prefix and suffix to remove to a list.
prefixes_suffixes = ["Dr. ", "Mr. ","Ms. ", "Mrs. ", "Miss ", " MD", " DDS", " DVM", " PhD"]

# Iterate through the words in the "prefixes_suffixes" list and replace them with an empty string, "".
for word in prefixes_suffixes:
    student_data_df["student_name"] = student_data_df["student_name"].str.replace(word,"")

# Check names.
student_data_df.head(10)
# -

# ## Deliverable 1: Replace the reading and math scores.
#
# ### Replace the 9th grade reading and math scores at Thomas High School with NaN.

# Install numpy using conda install numpy or pip install numpy.
# Step 1. Import numpy as np.
import numpy as np

# Step 2. Use the loc method on the student_data_df to select all the reading scores from the 9th grade at Thomas High School and replace them with NaN.
student_data_df.loc[(student_data_df["school_name"]=="Thomas High School") & (student_data_df["grade"]=="9th"),["reading_score"]]=np.nan

# Step 3. Refactor the code in Step 2 to replace the math scores with NaN.
student_data_df.loc[(student_data_df["school_name"]=="Thomas High School") & (student_data_df["grade"]=="9th"),["math_score"]]=np.nan

# Step 4. Check the student data for NaN's.
student_data_df.tail(10)
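# (Not part of the original starter code.) An optional sanity check of the replacement above: counting
# the NaNs for Thomas High School 9th graders should match the number of students whose scores were cleared.
student_data_df.loc[(student_data_df["school_name"] == "Thomas High School") &
                    (student_data_df["grade"] == "9th"),
                    ["reading_score", "math_score"]].isnull().sum()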
# ## Deliverable 2: Repeat the school district analysis

# ### District Summary

# Combine the data into a single dataset
school_data_complete_df = pd.merge(student_data_df, school_data_df, how="left", on=["school_name", "school_name"])
school_data_complete_df.head()

# +
# Calculate the Totals (Schools and Students)
school_count = len(school_data_complete_df["school_name"].unique())
student_count = school_data_complete_df["Student ID"].count()

# Calculate the Total Budget
total_budget = school_data_df["budget"].sum()
total_budget
# -

# Calculate the Average Scores using the "clean_student_data".
average_reading_score = school_data_complete_df["reading_score"].mean()
average_reading_score

# Calculate the Average Scores using the "clean_student_data".
average_math_score = school_data_complete_df["math_score"].mean()
average_math_score

# +
# Step 1. Get the number of students that are in ninth grade at Thomas High School.
# These students have no grades.
students_9th_thomas=school_data_complete_df.loc[(school_data_complete_df["school_name"]=="Thomas High School") & (school_data_complete_df["grade"]=="9th")].count()["Student ID"]

# Get the total student count
student_count_no_dishon = school_data_complete_df["Student ID"].count()

# Step 2. Subtract the number of students that are in ninth grade at
# Thomas High School from the total student count to get the new total student count.
student_count=student_count_no_dishon-students_9th_thomas
student_count
# -

# Calculate the passing rates using the "clean_student_data".
pass_math_count = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)].count()["student_name"]
pass_math_count

# Calculate the passing rates using the "clean_student_data".
pass_reading_count = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)].count()["student_name"]
pass_reading_count

# Step 3. Calculate the passing percentages (perc) with the new total student count.
pass_math_perc=float(pass_math_count)/float(student_count) *100
pass_math_perc

# Step 3.1 Calculate the passing percentages (perc) with the new total student count.
pass_reading_perc=float(pass_reading_count)/float(student_count) *100
pass_reading_perc

# +
# Calculate the students who passed both reading and math.
passing_math_reading = school_data_complete_df[(school_data_complete_df["math_score"] >= 70) & (school_data_complete_df["reading_score"] >= 70)]

# Calculate the number of students that passed both reading and math.
overall_passing_math_reading_count = passing_math_reading["student_name"].count()

# Step 4. Calculate the overall passing percentage with the new total student count.
overall_passing_percentage=float(overall_passing_math_reading_count) / float(student_count) * 100
overall_passing_percentage

# +
# Create a DataFrame
district_summary_df = pd.DataFrame(
    [{"Total Schools": school_count,
      "Total Students": student_count,
      "Total Budget": total_budget,
      "Average Math Score": average_math_score,
      "Average Reading Score": average_reading_score,
      "% Passing Math": pass_math_perc,
      "% Passing Reading": pass_reading_perc,
      "% Overall Passing": overall_passing_percentage}])

# Format the "Total Students" to have the comma for a thousands separator.
district_summary_df["Total Students"] = district_summary_df["Total Students"].map("{:,}".format)

# Format the "Total Budget" to have the comma for a thousands separator, a decimal separator, and a "$".
district_summary_df["Total Budget"] = district_summary_df["Total Budget"].map("${:,.2f}".format) # Format the columns. district_summary_df["Average Math Score"] = district_summary_df["Average Math Score"].map("{:.1f}".format) district_summary_df["Average Reading Score"] = district_summary_df["Average Reading Score"].map("{:.1f}".format) district_summary_df["% Passing Math"] = district_summary_df["% Passing Math"].map("{:.1f}".format) district_summary_df["% Passing Reading"] = district_summary_df["% Passing Reading"].map("{:.1f}".format) district_summary_df["% Overall Passing"] = district_summary_df["% Overall Passing"].map("{:.1f}".format) # Display the data frame district_summary_df # - # ## School Summary # + # Determine the School Type per_school_types = school_data_df.set_index(["school_name"])["type"] # Calculate the total student count. per_school_counts = school_data_complete_df["school_name"].value_counts() # Calculate the total school budget and per capita spending per_school_budget = school_data_complete_df.groupby(["school_name"]).mean()["budget"] # Calculate the per capita spending. per_school_capita = per_school_budget / per_school_counts # Calculate the average test scores. per_school_math = school_data_complete_df.groupby(["school_name"]).mean()["math_score"] per_school_reading = school_data_complete_df.groupby(["school_name"]).mean()["reading_score"] # Calculate the passing scores by creating a filtered DataFrame. per_school_passing_math = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)] per_school_passing_reading = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)] # Calculate the number of students passing math and passing reading by school. per_school_passing_math = per_school_passing_math.groupby(["school_name"]).count()["student_name"] per_school_passing_reading = per_school_passing_reading.groupby(["school_name"]).count()["student_name"] # Calculate the percentage of passing math and reading scores per school. per_school_passing_math = per_school_passing_math / per_school_counts * 100 per_school_passing_reading = per_school_passing_reading / per_school_counts * 100 # Calculate the students who passed both reading and math. per_passing_math_reading = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70) & (school_data_complete_df["math_score"] >= 70)] # Calculate the number of students passing math and passing reading by school. per_passing_math_reading = per_passing_math_reading.groupby(["school_name"]).count()["student_name"] # Calculate the percentage of passing math and reading scores per school. per_overall_passing_percentage = per_passing_math_reading / per_school_counts * 100 # + # Create the DataFrame per_school_summary_df = pd.DataFrame({ "School Type": per_school_types, "Total Students": per_school_counts, "Total School Budget": per_school_budget, "Per Student Budget": per_school_capita, "Average Math Score": per_school_math, "Average Reading Score": per_school_reading, "% Passing Math": per_school_passing_math, "% Passing Reading": per_school_passing_reading, "% Overall Passing": per_overall_passing_percentage}) per_school_summary_df.head() # + # Format the Total School Budget and the Per Student Budget per_school_summary_df["Total School Budget"] = per_school_summary_df["Total School Budget"].map("${:,.2f}".format) per_school_summary_df["Per Student Budget"] = per_school_summary_df["Per Student Budget"].map("${:,.2f}".format) # Display the data frame per_school_summary_df # + # Step 5. 
Get the number of 10th-12th graders from Thomas High School (THS). students_ths_rest = school_data_complete_df.loc[( (school_data_complete_df["school_name"] == "Thomas High School") & (school_data_complete_df["grade"] != "9th")), "Student ID"].count() students_ths_rest # + # Step 6. Get all the students passing math from THS rest_ths_pass_math_score = school_data_complete_df.loc[ (school_data_complete_df["school_name"] == "Thomas High School") & (school_data_complete_df["math_score"] >= 70), "Student ID"].count() rest_ths_pass_math_score # + # Step 7. Get all the students passing reading from THS rest_ths_pass_reading_score = school_data_complete_df.loc[ (school_data_complete_df["school_name"] == "Thomas High School") & (school_data_complete_df["reading_score"] >= 70), "Student ID"].count() rest_ths_pass_reading_score # + # Step 8. Get all the students passing math and reading from THS rest_ths_pass_reading_math_score = school_data_complete_df.loc[ (school_data_complete_df["school_name"] == "Thomas High School") & ((school_data_complete_df["math_score"] >= 70) & (school_data_complete_df["reading_score"] >= 70)), "Student ID"].count() rest_ths_pass_reading_math_score # - # Step 9. Calculate the percentage of 10th-12th grade students passing math from Thomas High School. ths_pass_math_perc=float(rest_ths_pass_math_score)/float(students_ths_rest)*100 ths_pass_math_perc # Step 10. Calculate the percentage of 10th-12th grade students passing reading from Thomas High School. ths_pass_reading_perc=float(rest_ths_pass_reading_score)/float(students_ths_rest)*100 ths_pass_reading_perc # Step 11. Calculate the overall passing percentage of 10th-12th grade from Thomas High School. ths_overall_pass_perc=float(rest_ths_pass_reading_math_score)/float(students_ths_rest)*100 ths_overall_pass_perc # Step 12. Replace the passing math percent for Thomas High School in the per_school_summary_df. per_school_summary_df.loc[["Thomas High School"],["% Passing Math"]]=ths_pass_math_perc # Step 13. Replace the passing reading percentage for Thomas High School in the per_school_summary_df. per_school_summary_df.loc[["Thomas High School"],["% Passing Reading"]]=ths_pass_reading_perc # Step 14. Replace the overall passing percentage for Thomas High School in the per_school_summary_df. per_school_summary_df.loc[["Thomas High School"],["% Overall Passing"]]=ths_overall_pass_perc per_school_summary_df # ## High and Low Performing Schools # Sort and show top five schools. max_schools=per_school_summary_df.sort_values(["% Overall Passing"], ascending=False) max_schools.head() # Sort and show top five schools. min_schools=per_school_summary_df.sort_values(["% Overall Passing"], ascending=True) min_schools.head() # ## Math and Reading Scores by Grade # + # Create a Series of scores by grade levels using conditionals. students_ninth = school_data_complete_df[(school_data_complete_df["grade"] == "9th")] students_tenth = school_data_complete_df[(school_data_complete_df["grade"] == "10th")] students_eleventh = school_data_complete_df[(school_data_complete_df["grade"] == "11th")] students_twelfth = school_data_complete_df[(school_data_complete_df["grade"] == "12th")] # Group each school Series by the school name for the average math score. 
ninth_math_score = students_ninth.groupby(["school_name"]).mean()["math_score"] tenth_math_score = students_tenth.groupby(["school_name"]).mean()["math_score"] eleventh_math_score = students_eleventh.groupby(["school_name"]).mean()["math_score"] twelfth_math_score = students_twelfth.groupby(["school_name"]).mean()["math_score"] # Group each school Series by the school name for the average reading score. ninth_reading_score = students_ninth.groupby(["school_name"]).mean()["reading_score"] tenth_reading_score = students_tenth.groupby(["school_name"]).mean()["reading_score"] eleventh_reading_score = students_eleventh.groupby(["school_name"]).mean()["reading_score"] twelfth_reading_score = students_twelfth.groupby(["school_name"]).mean()["reading_score"] # + # Combine each Series for average math scores by school into single data frame. math_scores_per_grade = pd.DataFrame({ "9th": ninth_math_score, "10th": tenth_math_score, "11th": eleventh_math_score, "12th": twelfth_math_score}) math_scores_per_grade.head() # + # Combine each Series for average reading scores by school into single data frame. reading_scores_per_grade = pd.DataFrame({ "9th": ninth_reading_score, "10th": tenth_reading_score, "11th": eleventh_reading_score, "12th": twelfth_reading_score}) reading_scores_per_grade.head() # - # Format each grade column. math_scores_per_grade["9th"] = math_scores_per_grade["9th"].map("{:.1f}".format) math_scores_per_grade["10th"] = math_scores_per_grade["10th"].map("{:.1f}".format) math_scores_per_grade["11th"] = math_scores_per_grade["11th"].map("{:.1f}".format) math_scores_per_grade["12th"] = math_scores_per_grade["12th"].map("{:.1f}".format) # + # Remove the index. math_scores_per_grade.index.name = None # Display the data frame math_scores_per_grade.head() # + ## Remove the index. reading_scores_per_grade.index.name = None # Display the data frame reading_scores_per_grade.head() # - # ## Scores by School Spending # + # Establish the spending bins and group names. spending_bins=[0,585,630,645,675] group_names=["<$584","$585-629","$630-644","$645-675"] # Categorize spending based on the bins. per_school_summary_df["Spending Ranges (Per Student)"]=pd.cut(per_school_capita,spending_bins,labels=group_names) # - # Calculate averages for the desired columns. 
spent_math_score=per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["Average Math Score"] spent_reading_score=per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["Average Reading Score"] spent_pass_math=per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Passing Math"] spent_pass_reading=per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Passing Reading"] spent_pass_overall=per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Overall Passing"] # Create the DataFrame spent_summary_df=pd.DataFrame({ "Average Math Score":spent_math_score, "Average Reading Score":spent_reading_score, "% Passing Math":spent_pass_math, "% Passing Reading":spent_pass_reading, "% Overall Passing":spent_pass_overall}) # Format the DataFrame spent_summary_df["Average Math Score"] = spent_summary_df["Average Math Score"].map("{:.1f}".format) spent_summary_df["Average Reading Score"] = spent_summary_df["Average Reading Score"].map("{:.1f}".format) spent_summary_df["% Passing Math"] = spent_summary_df["% Passing Math"].map("{:.1f}".format) spent_summary_df["% Passing Reading"] = spent_summary_df["% Passing Reading"].map("{:.1f}".format) spent_summary_df["% Overall Passing"] = spent_summary_df["% Overall Passing"].map("{:.1f}".format) # Printing the DataFrame spent_summary_df # ## Scores by School Size # + # Establish the bins. enrollment_bins=[0,1000,2000,5000] enrollment_names=["Small (<1000)","Medium (1000-2000)","Large (2000-5000)"] # Categorize spending based on the bins. per_school_summary_df["School Size"]=pd.cut(per_school_summary_df["Total Students"],enrollment_bins,labels=enrollment_names) # - # Calculate averages for the desired columns. size_math_score=per_school_summary_df.groupby(["School Size"]).mean()["Average Math Score"] size_reading_score=per_school_summary_df.groupby(["School Size"]).mean()["Average Reading Score"] size_pass_math=per_school_summary_df.groupby(["School Size"]).mean()["% Passing Math"] size_pass_reading=per_school_summary_df.groupby(["School Size"]).mean()["% Passing Reading"] size_pass_overall=per_school_summary_df.groupby(["School Size"]).mean()["% Overall Passing"] # Assemble into DataFrame. size_score_summary_df=pd.DataFrame({ "Average Math Score":size_math_score, "Average Reading Score":size_reading_score, "% Passing Math":size_pass_math, "% Passing Reading":size_pass_reading, "% Overall Passing":size_pass_overall}) # Format the DataFrame size_score_summary_df["Average Math Score"]=size_score_summary_df["Average Math Score"].map("{:.1f}".format) size_score_summary_df["Average Reading Score"]=size_score_summary_df["Average Reading Score"].map("{:.1f}".format) size_score_summary_df["% Passing Math"]=size_score_summary_df["% Passing Math"].map("{:.1f}".format) size_score_summary_df["% Passing Reading"]=size_score_summary_df["% Passing Reading"].map("{:.1f}".format) size_score_summary_df["% Overall Passing"]=size_score_summary_df["% Overall Passing"].map("{:.1f}".format) # + # Printing the DataFrame size_score_summary_df # - # ## Scores by School Type # + # WE DO NOT ADD ANOTHER COLUMN SIN "SCHOOL TYPE" IS ALREADY IN THE DATAFRAME WE'RE WORKING ON # Calculate averages for the desired columns. 
type_math_score=per_school_summary_df.groupby(["School Type"]).mean()["Average Math Score"] type_reading_score=per_school_summary_df.groupby(["School Type"]).mean()["Average Reading Score"] type_pass_math=per_school_summary_df.groupby(["School Type"]).mean()["% Passing Math"] type_pass_reading=per_school_summary_df.groupby(["School Type"]).mean()["% Passing Reading"] type_pass_overall=per_school_summary_df.groupby(["School Type"]).mean()["% Overall Passing"] # - # Assemble into DataFrame. type_score_summary_df=pd.DataFrame({ "Average Math Score":type_math_score, "Average Reading Score":type_reading_score, "% Passing Math":type_pass_math, "% Passing Reading":type_pass_reading, "% Overall Passing":type_pass_overall}) # # Format the DataFrame type_score_summary_df["Average Math Score"]=type_score_summary_df["Average Math Score"].map("{:.1f}".format) type_score_summary_df["Average Reading Score"]=type_score_summary_df["Average Reading Score"].map("{:.1f}".format) type_score_summary_df["% Passing Math"]=type_score_summary_df["% Passing Math"].map("{:.1f}".format) type_score_summary_df["% Passing Reading"]=type_score_summary_df["% Passing Reading"].map("{:.1f}".format) type_score_summary_df["% Overall Passing"]=type_score_summary_df["% Overall Passing"].map("{:.1f}".format) type_score_summary_df # + # END OF THE CHALLENGE 4 CODE # - # ## Previous code performed for Module 4 activites # + # Reading the schools csv file #Adding Pandas dependency. import pandas as pd #Adding the CSV files school_data_to_load = "Resources/schools_complete.csv" student_data_to_load = "Resources/students_complete.csv" #Confirmming Jupyter is reading the csv files correctly school_data_df = pd.read_csv(school_data_to_load) school_data_df # + # PRO TIP 1 - Head #Confirmming Jupyter is reading the csv files correctly school_data_df = pd.read_csv(school_data_to_load) school_data_df.head() # + # PRO TIP 2 - Tail #Confirmming Jupyter is reading the csv files correctly school_data_df = pd.read_csv(school_data_to_load) school_data_df.tail() # + # Reading the students csv file #Confirmming Jupyter is reading the csv files correctly student_data_df = pd.read_csv(student_data_to_load) student_data_df # + # COUNT METHOD - SCHOOLS # count function school_data_df.count() # + # COUNT METHOD (total data) - STUDENTS # count function student_data_df.count() # + # ISNULL METHOD (emtpy cells) - SCHOOLS # is null function school_data_df.isnull() # + # ISNULL METHOD - STUDENTS # is null function student_data_df.isnull() # + # ISNULL METHOD + SUM - STUDENTS # is null function student_data_df.isnull().sum() # + # NOTNULL METHOD (full cels)- SCHOOLS # not null function school_data_df.notnull() # + # NOTNULL METHOD + SUM - STUDENTS # not null function student_data_df.notnull().sum() # + # DATA TYPE - SCHOOLS # dtypes function school_data_df.dtypes # + # DATA TYPE/ONE COLUMN WITH ONE WORD - SCHOOLS # dtypes function school_data_df.budget.dtype # + # DATA TYPE/ONE COLUMN WITH TWO WORDS - SCHOOLS # dtypes function school_data_df["School ID"].dtype # + # DATATYPE - STUDENTS # not null function student_data_df.dtypes # + #NEW LIST TO CLEAN DATA # Add each prefix and suffix to remove to a list. prefixes_suffixes = ["Dr. ", "Mr. ","Ms. ", "Mrs. ", "Miss ", " MD", " DDS", " DVM", " PhD"] # Iterate through the words in the "prefixes_suffixes" list and replace them with an empty space, "". 
for word in prefixes_suffixes: student_data_df["student_name"] = student_data_df["student_name"].str.replace(word,"") student_data_df # + # MERGE # Combine the data into a single dataset. school_data_complete_df = pd.merge(student_data_df, school_data_df, on=["school_name", "school_name"]) school_data_complete_df.head() # + # CHECKING THE DATA WITH "COUNT" AGAINST FIRST "COUNT" BEFORE CLEANSE # Get the total number of students. student_count = school_data_complete_df.count() student_count # + # COUNT the number of students student_count = school_data_complete_df["Student ID"].count() student_count # + # Getting the number of schools with the original DataFrame - Alternative 1 # Calculate the total number of schools. school_count = school_data_df["school_name"].count() school_count # + # UNIQUE - Getting the number 15 schools using the combined DataFrame - Alternative 2.1 # Calculate the total number of schools school_count_2 = school_data_complete_df["school_name"].unique() school_count_2 # + # Getting the number of schools using the combined DataFrame after UNIQUE function - Alternative 2.2 #Using len to get the total numbers of schools len(school_data_complete_df["school_name"].unique()) # + # Getting the budget in the correct way (not using sum on the combined DataFrame) # Calculate the total budget. total_budget = school_data_df["budget"].sum() total_budget # + # Getting the average reading score # Calculate the average reading score. average_reading_score = school_data_complete_df["reading_score"].mean() average_reading_score # + # Getting the average math score # Calculate the average math score. average_math_score = school_data_complete_df["math_score"].mean() average_math_score # + # Determining passing grade for math passing_math = school_data_complete_df["math_score"] >= 70 passing_math # + # Determining passing grade for reading passing_reading = school_data_complete_df["reading_score"] >= 70 passing_reading # - # Get all the students who are passing math in a new DataFrame. passing_math = school_data_complete_df[school_data_complete_df["math_score"] >= 70] passing_math.head() # Get all the students that are passing reading in a new DataFrame. passing_reading = school_data_complete_df[school_data_complete_df["reading_score"] >= 70] passing_reading.head() # + # Getting the students that passed math passing_math_count = passing_math["student_name"].count() passing_math_count # + # Getting the students that passed reading passing_reading_count = passing_reading["student_name"].count() passing_reading_count # + # Getting the percentages # Calculate the percent that passed math. passing_math_percentage = passing_math_count / float(student_count) * 100 # Calculate the percent that passed reading. passing_reading_percentage = passing_reading_count / float(student_count) * 100 print(passing_math_percentage) print(passing_reading_percentage) # + # Getting who passed both # Calculate the students who passed both math and reading. passing_math_reading = school_data_complete_df[(school_data_complete_df["math_score"] >= 70) & (school_data_complete_df["reading_score"] >= 70)] passing_math_reading.head() # + # Number of students who passed both overall_passing_math_reading_count = passing_math_reading["student_name"].count() overall_passing_math_reading_count # - # Calculate the overall passing percentage. 
overall_passing_percentage = overall_passing_math_reading_count / student_count * 100 overall_passing_percentage # + # Creating a DataFrame summary # declaring the new DataFrame dictionary district_summary_df = pd.DataFrame([{"Total Schools": school_count, "Total Students": student_count, "Total Budget": total_budget, "Average Math Score": average_math_score, "Average Reading Score": average_reading_score, "% Passing Math": passing_math_percentage, "% Passing Reading": passing_reading_percentage, "% Overall Passing": overall_passing_percentage}]) district_summary_df # + # FORMAT function # Format the "Total Students" to have the comma for a thousands separator. district_summary_df["Total Students"] = district_summary_df["Total Students"].map("{:,}".format) district_summary_df["Total Students"] # + # FUNCTIONS # Define a function that calculates the percentage of students that passed both # math and reading and returns the passing percentage when the function is called. def passing_math_percent(pass_math_count, student_count): return pass_math_count / float(student_count) * 100 # - passing_math_count = 29370 total_student_count = 39170 # Call the function. passing_math_percent (passing_math_count, total_student_count) # + # FORMAT function # Format "Total Budget" to have the comma for a thousands separator, a decimal separator, and a "$". district_summary_df["Total Budget"] = district_summary_df["Total Budget"].map("${:,.2f}".format) district_summary_df["Total Budget"] # + # FORMAT function # Format the columns. district_summary_df["Average Math Score"] = district_summary_df["Average Math Score"].map("{:.1f}".format) district_summary_df["Average Reading Score"] = district_summary_df["Average Reading Score"].map("{:.1f}".format) district_summary_df["% Passing Math"] = district_summary_df["% Passing Math"].map("{:.0f}".format) district_summary_df["% Passing Reading"] = district_summary_df["% Passing Reading"].map("{:.0f}".format) district_summary_df["% Overall Passing"] = district_summary_df["% Overall Passing"].map("{:.0f}".format) # - district_summary_df # + # REORDERING columns # Reorder the columns in the order you want them to appear. new_column_order = ["Total Schools", "Total Students", "Total Budget","Average Math Score", "Average Reading Score", "% Passing Math", "% Passing Reading", "% Overall Passing"] # Assign district summary df the new column order. district_summary_df = district_summary_df[new_column_order] district_summary_df # + # Placing a NEW INDEX # Determine the school type. per_school_types = school_data_df.set_index(["school_name"])["type"] per_school_types # + # Making it a dataframe # Add the per_school_types into a DataFrame for testing. df = pd.DataFrame(per_school_types) df # + # Getting the number of students using SCHOOL_DATA_DF # Calculate the total student count. per_school_counts = school_data_df["size"] per_school_counts # - # Calculate the total student count. per_school_counts = school_data_df.set_index(["school_name"])["size"] per_school_counts # + # Getting the number of students using SCHOOL_DATA_COMPLETE_DF # Calculate the total student count. per_school_counts = school_data_complete_df["school_name"].value_counts() per_school_counts # + # Getting the budget per student # Calculate the total school budget. per_school_budget = school_data_df.set_index(["school_name"])["budget"] per_school_budget # - # Calculate the per capita spending. 
per_school_capita = per_school_budget / per_school_counts per_school_capita # + # Getting the average grades per school # Calculate the math scores. student_school_math = student_data_df.set_index(["school_name"])["math_score"] student_school_math # + # GROUP BY function # Calculate the average math scores. per_school_averages = school_data_complete_df.groupby(["school_name"]).mean() per_school_averages # + # Calculate the average test score for math per_school_math = school_data_complete_df.groupby(["school_name"]).mean()["math_score"] per_school_math # + # Calculate the average test score for math per_school_reading = school_data_complete_df.groupby(["school_name"]).mean()["reading_score"] per_school_reading # + # Getting the passing percentages per school # To get the passing percentages, we need to: # 1. Determine what is the passing grade. # 2. Get the number of students who passed math and reading. # 3. Get the students who passed math and passed reading # Calculate the passing scores by creating a filtered DataFrame. per_school_passing_math = school_data_complete_df[(school_data_complete_df["math_score"] >= 70)] per_school_passing_reading = school_data_complete_df[(school_data_complete_df["reading_score"] >= 70)] per_school_passing_math # + # Calculate the number of students passing math and passing reading by school. per_school_passing_math = per_school_passing_math.groupby(["school_name"]).count()["student_name"] per_school_passing_reading = per_school_passing_reading.groupby(["school_name"]).count()["student_name"] per_school_passing_math # + # Calculate the percentage of passing math and reading scores per school. per_school_passing_math = per_school_passing_math / per_school_counts * 100 per_school_passing_reading = per_school_passing_reading / per_school_counts * 100 per_school_passing_math # + # Calculate the students who passed both math and reading. per_passing_math_reading = school_data_complete_df[(school_data_complete_df["math_score"] >= 70) & (school_data_complete_df["reading_score"] >= 70)] per_passing_math_reading.head() # - # Calculate the number of students who passed both math and reading. per_passing_math_reading = per_passing_math_reading.groupby(["school_name"]).count()["student_name"] # Calculate the overall passing percentage. per_overall_passing_percentage = per_passing_math_reading / per_school_counts * 100 per_overall_passing_percentage # + # Adding a list of values with keys to create a new DataFrame. per_school_summary_df = pd.DataFrame({ "School Type": per_school_types, "Total Students": per_school_counts, "Total School Budget": per_school_budget, "Per Student Budget": per_school_capita, "Average Math Score": per_school_math, "Average Reading Score": per_school_reading, "% Passing Math": per_school_passing_math, "% Passing Reading": per_school_passing_reading, "% Overall Passing": per_overall_passing_percentage}) per_school_summary_df.head() # + # Adding $ sign, two decimals and thousands separator # Format the Total School Budget and the Per Student Budget columns. per_school_summary_df["Total School Budget"] = per_school_summary_df["Total School Budget"].map("${:,.2f}".format) per_school_summary_df["Per Student Budget"] = per_school_summary_df["Per Student Budget"].map("${:,.2f}".format) # Display the data frame per_school_summary_df.head() # + # 4.8.8 In case we had to reorder columns # Reorder the columns in the order you want them to appear. 
new_column_order = ["School Type", "Total Students", "Total School Budget", "Per Student Budget", "Average Math Score", "Average Reading Score", "% Passing Math", "% Passing Reading", "% Overall Passing"] # Assign district summary df the new column order. per_school_summary_df = per_school_summary_df[new_column_order] per_school_summary_df.head() # + # 4.9.1 Sort values function for higher overall scores # Sort and show top five schools. top_schools = per_school_summary_df.sort_values(["% Overall Passing"], ascending=False) top_schools.head() # + # 4.9.2 Sort values function for lowers overall scores # Sort and show top five schools. bottom_schools = per_school_summary_df.sort_values(["% Overall Passing"], ascending=True) bottom_schools.head() # + # 4.10.1 - Grades per level # Create a grade level DataFrames. ninth_graders = school_data_complete_df[(school_data_complete_df["grade"] == "9th")] tenth_graders = school_data_complete_df[(school_data_complete_df["grade"] == "10th")] eleventh_graders = school_data_complete_df[(school_data_complete_df["grade"] == "11th")] twelfth_graders = school_data_complete_df[(school_data_complete_df["grade"] == "12th")] ninth_graders.head() # + # 4.10.2 - Score Averages Grouped by School Name # Group each grade level DataFrame by the school name for the average math score. ninth_grade_math_scores = ninth_graders.groupby(["school_name"]).mean()["math_score"] tenth_grade_math_scores = tenth_graders.groupby(["school_name"]).mean()["math_score"] eleventh_grade_math_scores = eleventh_graders.groupby(["school_name"]).mean()["math_score"] twelfth_grade_math_scores = twelfth_graders.groupby(["school_name"]).mean()["math_score"] eleventh_grade_math_scores # + # Group each grade level DataFrame by the school name for the average reading score. ninth_grade_reading_scores = ninth_graders.groupby(["school_name"]).mean()["reading_score"] tenth_grade_reading_scores = tenth_graders.groupby(["school_name"]).mean()["reading_score"] eleventh_grade_reading_scores = eleventh_graders.groupby(["school_name"]).mean()["reading_score"] twelfth_grade_reading_scores = twelfth_graders.groupby(["school_name"]).mean()["reading_score"] twelfth_grade_reading_scores # + # 4.10.3 Comine each grade level Series into a DataFrame # Combine each grade level Series for average math scores by school into a single DataFrame. math_scores_by_grade = pd.DataFrame({ "9th": ninth_grade_math_scores, "10th": tenth_grade_math_scores, "11th": eleventh_grade_math_scores, "12th": twelfth_grade_math_scores}) math_scores_by_grade.head() # + # Combine each grade level Series for average reading scores by school into a single DataFrame. reading_scores_by_grade = pd.DataFrame({ "9th": ninth_grade_reading_scores, "10th": tenth_grade_reading_scores, "11th": eleventh_grade_reading_scores, "12th": twelfth_grade_reading_scores}) reading_scores_by_grade.head() # + # 4.10.4 - Format the Averages and Remove the Index Name # Math # Format each grade column. math_scores_by_grade["9th"] = math_scores_by_grade["9th"].map("{:.1f}".format) math_scores_by_grade["10th"] = math_scores_by_grade["10th"].map("{:.1f}".format) math_scores_by_grade["11th"] = math_scores_by_grade["11th"].map("{:.1f}".format) math_scores_by_grade["12th"] = math_scores_by_grade["12th"].map("{:.1f}".format) # Make sure the columns are in the correct order. math_scores_by_grade = math_scores_by_grade[ ["9th", "10th", "11th", "12th"]] # Remove the index name. math_scores_by_grade.index.name = None # Display the DataFrame. 
math_scores_by_grade.head() # + # Reading # Format each grade column. reading_scores_by_grade["9th"] = reading_scores_by_grade["9th"].map("{:,.1f}".format) reading_scores_by_grade["10th"] = reading_scores_by_grade["10th"].map("{:,.1f}".format) reading_scores_by_grade["11th"] = reading_scores_by_grade["11th"].map("{:,.1f}".format) reading_scores_by_grade["12th"] = reading_scores_by_grade["12th"].map("{:,.1f}".format) # Make sure the columns are in the correct order. reading_scores_by_grade = reading_scores_by_grade[ ["9th", "10th", "11th", "12th"]] # Remove the index name. reading_scores_by_grade.index.name = None # Display the data frame. reading_scores_by_grade.head() # + # 4.11.1 Establish the Spending Ranges per Student # Get the descriptive statistics for the per_school_capita. # per_school_capita.describe() # + # CUT FUNCTION # Cut the per_school_capita into the spending ranges. # ALWAYS put the 0 when defining separtions spending_bins = [0, 585, 615, 645, 675] pd.cut(per_school_capita, spending_bins) # + # Cut the per_school_capita into the spending ranges. # At first we used 615 but the amount of schools was not proportionate, so we changed to 630 as the second paremeter. spending_bins = [0, 585, 630, 645, 675] per_school_capita.groupby(pd.cut(per_school_capita, spending_bins)).count() # - # Establish the spending bins and group names. spending_bins = [0, 585, 630, 645, 675] group_names = ["<$584", "$585-629", "$630-644", "$645-675"] # + # 4.11.2 - Categorize the Spending Bins # Categorize spending based on the bins. per_school_summary_df["Spending Ranges (Per Student)"] = pd.cut(per_school_capita, spending_bins, labels=group_names) per_school_summary_df # + # 4.11.3 - Group by the Spending Ranges # Calculate averages for the desired columns. spending_math_scores = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["Average Math Score"] spending_reading_scores = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["Average Reading Score"] spending_passing_math = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Passing Math"] spending_passing_reading = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Passing Reading"] overall_passing_spending = per_school_summary_df.groupby(["Spending Ranges (Per Student)"]).mean()["% Overall Passing"] overall_passing_spending # + # 4.11.4 - Create a DataFrame for the Scores by School Spending # Assemble into DataFrame. spending_summary_df = pd.DataFrame({ "Average Math Score" : spending_math_scores, "Average Reading Score": spending_reading_scores, "% Passing Math": spending_passing_math, "% Passing Reading": spending_passing_reading, "% Overall Passing": overall_passing_spending}) spending_summary_df # + # Formatting spending_summary_df["Average Math Score"] = spending_summary_df["Average Math Score"].map("{:.1f}".format) spending_summary_df["Average Reading Score"] = spending_summary_df["Average Reading Score"].map("{:.1f}".format) spending_summary_df["% Passing Math"] = spending_summary_df["% Passing Math"].map("{:.0f}".format) spending_summary_df["% Passing Reading"] = spending_summary_df["% Passing Reading"].map("{:.0f}".format) spending_summary_df["% Overall Passing"] = spending_summary_df["% Overall Passing"].map("{:.0f}".format) spending_summary_df # + # 4.12.1 - Create Bins for School Size # Establish the bins. 
size_bins = [0, 1000, 2000, 5000] group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"] # + # 4.12.2 Categorize the Size Bins # Categorize spending based on the bins. per_school_summary_df["School Size"] = pd.cut(per_school_summary_df["Total Students"], size_bins, labels=group_names) per_school_summary_df.head() # + # 4.12.3 - Group by School Size # Calculate averages for the desired columns. size_math_scores = per_school_summary_df.groupby(["School Size"]).mean()["Average Math Score"] size_reading_scores = per_school_summary_df.groupby(["School Size"]).mean()["Average Reading Score"] size_passing_math = per_school_summary_df.groupby(["School Size"]).mean()["% Passing Math"] size_passing_reading = per_school_summary_df.groupby(["School Size"]).mean()["% Passing Reading"] size_overall_passing = per_school_summary_df.groupby(["School Size"]).mean()["% Overall Passing"] # + # 4.12.4 - Creat a DataFrame for th Scores by School Size # Assemble into DataFrame. size_summary_df = pd.DataFrame({ "Average Math Score" : size_math_scores, "Average Reading Score": size_reading_scores, "% Passing Math": size_passing_math, "% Passing Reading": size_passing_reading, "% Overall Passing": size_overall_passing}) size_summary_df # + # Formatting. size_summary_df["Average Math Score"] = size_summary_df["Average Math Score"].map("{:.1f}".format) size_summary_df["Average Reading Score"] = size_summary_df["Average Reading Score"].map("{:.1f}".format) size_summary_df["% Passing Math"] = size_summary_df["% Passing Math"].map("{:.0f}".format) size_summary_df["% Passing Reading"] = size_summary_df["% Passing Reading"].map("{:.0f}".format) size_summary_df["% Overall Passing"] = size_summary_df["% Overall Passing"].map("{:.0f}".format) size_summary_df # + # 4.13.1 - Group by School Type # Calculate averages for the desired columns. type_math_scores = per_school_summary_df.groupby(["School Type"]).mean()["Average Math Score"] type_reading_scores = per_school_summary_df.groupby(["School Type"]).mean()["Average Reading Score"] type_passing_math = per_school_summary_df.groupby(["School Type"]).mean()["% Passing Math"] type_passing_reading = per_school_summary_df.groupby(["School Type"]).mean()["% Passing Reading"] type_overall_passing = per_school_summary_df.groupby(["School Type"]).mean()["% Overall Passing"] # + # 4.13.2 - Create a DataFrame for the Scores by School Type # Assemble into DataFrame. 
type_summary_df = pd.DataFrame({ "Average Math Score" : type_math_scores, "Average Reading Score": type_reading_scores, "% Passing Math": type_passing_math, "% Passing Reading": type_passing_reading, "% Overall Passing": type_overall_passing}) type_summary_df # + # Formatting type_summary_df["Average Math Score"] = type_summary_df["Average Math Score"].map("{:.1f}".format) type_summary_df["Average Reading Score"] = type_summary_df["Average Reading Score"].map("{:.1f}".format) type_summary_df["% Passing Math"] = type_summary_df["% Passing Math"].map("{:.0f}".format) type_summary_df["% Passing Reading"] = type_summary_df["% Passing Reading"].map("{:.0f}".format) type_summary_df["% Overall Passing"] = type_summary_df["% Overall Passing"].map("{:.0f}".format) type_summary_df # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## bibliotecas import pandas as pd import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from sklearn.cluster import KMeans from sklearn import preprocessing from sklearn.metrics import silhouette_score # ## DataSet df = pd.read_csv('Mall_Customers.csv') # https://www.kaggle.com/vjchoudhary7/customer-segmentation-tutorial-in-python df.head(2) # ## Exploratory Analysis # size of the dataset df.shape # descriptive analysis of variables df.describe() # visualizing the distribution of the variables plt.subplot(1,3,1) plt.hist(df['Age']) plt.title('Age') plt.subplot(1,3,2) plt.hist(df['Annual Income (k$)']) plt.title('Annual Income') plt.subplot(1,3,3) plt.hist(df['Spending Score (1-100)']) plt.title('Spending Score') plt.tight_layout() # boxplots to visualize outliers plt.subplot(1,3,1) plt.boxplot(df['Age']) plt.title('Age') plt.subplot(1,3,2) plt.boxplot(df['Annual Income (k$)']) plt.title('Annual Income') plt.subplot(1,3,3) plt.boxplot(df['Spending Score (1-100)']) plt.title('Spending Score') plt.tight_layout() # Looking for missing data df.isnull().sum() # Finding the mean values of annual income for each age and seeing the top 5 ages with the biggest mean income df_age = df.groupby('Age').agg({'Annual Income (k$)': 'mean'}) df_age.sort_values('Annual Income (k$)', ascending = False).head(5) # Checking if there is a correlation between age and income df_age.reset_index().corr() # Finding the mean values of spending score for each age and seeing the top 5 ages with the biggest mean score df_age2 = df.groupby('Age').agg({'Spending Score (1-100)': 'mean'}) df_age2.sort_values('Spending Score (1-100)', ascending = False).head(5) # Checking if there is a correlation between age and income df_age2.reset_index().corr() # Finding if there is a correlation between spending score and annual income df[['Annual Income (k$)', 'Spending Score (1-100)']].corr() # ## Clustering the customers # Visualizating each customer graphically fig = plt.figure() ax = Axes3D(fig) ax.scatter(df['Age'], df['Annual Income (k$)'], df['Spending Score (1-100)']) ax.set_xlabel('Age') ax.set_ylabel('Annual Income (k$)') ax.set_zlabel('Spending Score (1-100)') # normalizating the variables col = ['Age','Annual Income (k$)', 'Spending Score (1-100)'] df_std = df.copy() df_std=pd.DataFrame(preprocessing.scale(df_std[col])) df_std.columns = col # Elbow method to determine the optimal number of groups wcss = [] for i in range(1, 11): km = KMeans(n_clusters = i, init = 'k-means++', max_iter = 300, n_init = 10, random_state 
= 0)
    km.fit(df_std[col])
    wcss.append(km.inertia_)

plt.plot(range(1, 11), wcss)
plt.title('The Elbow Method', fontsize = 20)
plt.xlabel('No. of Clusters')
plt.ylabel('WCSS')
plt.show()

kmeans = KMeans(n_clusters=4,init='k-means++',max_iter = 100, n_init = 25, random_state = 0).fit(df_std[col])
centroids = kmeans.cluster_centers_
df["clusters"] = kmeans.fit_predict(df_std[col])

# centroid columns follow the order of col: 0 = Age, 1 = Annual Income, 2 = Spending Score
plt.scatter(df_std['Annual Income (k$)'], df_std['Spending Score (1-100)'], c= df_std['Age'])
plt.scatter(centroids[:, 1], centroids[:, 2], c='red', s=50)

fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(df_std['Age'], df_std['Annual Income (k$)'], df_std['Spending Score (1-100)'])
ax.set_xlabel('Age')
ax.set_ylabel('Annual Income (k$)')
ax.set_zlabel('Spending Score (1-100)')
ax.scatter(centroids[:, 0], centroids[:, 1], centroids[:, 2], c='red', s=50)

# Visualizing each group
clusters = df.groupby('clusters')[['Age','Annual Income (k$)','Spending Score (1-100)']].mean()
clusters.plot.bar(figsize=(10,7.5))

# #### Group Characterization

# * Group 0: the highest mean annual income but a low spending score. These customers have money to spend but are not spending it at the mall, so targeted marketing strategies could focus on this group specifically.
# * Group 1: a high spending score relative to annual income, and also the youngest group, so marketing could focus on young people who already tend to spend more at the mall.
# * Group 2: the highest mean age, with a spending score above Group 0 but below Group 1, which suggests these older customers do not spend much time at the mall.
# * Group 3: the highest spenders with a high annual income, making them the most important group for the mall.
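# (Not in the original notebook.) silhouette_score is imported above but never used; as a rough
# complement to the elbow plot, this sketch compares candidate cluster counts on the scaled data
# (it assumes df_std and col from the cells above).
for k in range(2, 7):
    labels = KMeans(n_clusters=k, init='k-means++', n_init=10, random_state=0).fit_predict(df_std[col])
    print(k, round(silhouette_score(df_std[col], labels), 3))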
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd, json, matplotlib.pyplot as plt from pdb import set_trace # %matplotlib inline # ls *.csv sc = pd.read_csv('scenario.csv') df = pd.read_csv('data_by_sphere.csv') sum_by_sphere = df.loc[df.year == 2012].groupby('sphere', as_index=False).k_t_ne.sum() sum_by_sphere.columns = ['sphere', 'total_ktne'] df df = df.merge(sum_by_sphere, on='sphere', how='left') df.sphere = df.sphere.apply(str.capitalize) df.sphere = df.sphere.str.replace('Total', 'Всього') df.sort_values(['total_ktne', 'sphere', 'year', 'is_vde', 'k_t_ne'], ascending=[False, True, True, True, False] ).drop('total_ktne', axis=1 ).to_csv('data_by_sphere.csv', index=False) measures = { 'ДОМОГОСПОДАРСТВА': [ 'Термомодернізація житлових будинків', 'Модернізація систем енергозабезпечення об’єктів ЖКГ', 'Автоматизовані індивідуальні теплові пункти (ІТП)', 'Впровадження когенераційних систем для багатоповерхових будинків', 'Техніка А++, А+++', '40-50% населення використовує сонячні панелі на дахах будинків', 'Опалення та гаряча вода за рахунок біопалива та відходів, централізованого ВДЕ-тепла та використання сонячних колекторів', ], 'ТРАНСПОРТ': [ 'Легкові авто: 90% електромобілі, 10% на біопаливі', 'Електроавтобуси та електропотяги', 'Авіаційний, водний, вантажний транспорт на біопаливі', ], 'ПРОМИСЛОВІСТЬ': [ '88% - енергія з відновлюваних джерел та альтернативних видів палива', 'Електрифікація промисловості', 'Заміна енергоємного обладнання на нове та ефективне', 'Більші інвестиції з боку промислових виробників, зокрема електроенергетичної галузі', ], 'СІЛЬСЬКЕ ГОСПОДАРСТВО': [ 'Використання ВДЕ для виробництва електроенергії та тепла', 'СГ Транспорт на біопаливі', 'Вирощування енергетичних культур', 'Нові технології використання сільськогосподарських машин та механізмів', ], 'СФЕРА ПОСЛУГ': [ 'ВДЕ 88%', 'Заходи з енергоефективності та енергозаощадження', 'Термоізоляція приміщень', 'Використання високоефективних технічних приладів (А++, А+++)', ], 'Всього': [ 'Економічні наслідки:', 'Скорочення потреб в енергоресурсах на 42%', 'Економія до 75% енергоресурсів для опалення завдяки повній реконструкції будинків', 'Економія 30% коштів завдяки енергоефективності та енергозбереженню', 'Енергоємність ВВП України знизиться у 3 рази', 'Не потрібно імпортувати газ, вугілля, ядерне паливо', 'Соціальні наслідки:', 'Нова енергетична інфраструктура', 'Нові робочі місця в галузі ВДЕ та енергоефективності', 'Нові можливості для розвитку на місцях: локальне виробництво енергії, використання місцевих ресурсів', 'Зменшення кількості працівників у видобувній галузі', 'Скорочення викидів парникових газів 90 % від рівня 1990 р.', 'Зниження негативного впливу від енергетики на здоров’я людей', ] } def reshape_df(d): new = d.pivot('year', 'source', 'k_t_ne').reset_index() new['sphere'] = d.sphere.iloc[0] return new.reindex(['year', 'sphere', 'Біопаливо та відходи', 'Вугілля', 'Газ', 'Електроенергія', 'Електроенергія з ВДЕ', 'Нафта та нафтопродукти', 'Сонячна енергія', 'Теплова енергія', 'Теплова енергія з ВДЕ'], axis=1) df.groupby(['year', 'sphere'], as_index=False).apply(reshape_df).reset_index(drop=True).fillna(0) def reshape_df(d): new = d.pivot('year', 'sphere', 'k_t_ne').reset_index() new['source'] = d.source.iloc[0] new['is_vde'] = d.is_vde.iloc[0] return new.reindex(['year', 'source', 'is_vde', 'Всього', 
'Промисловість', 'Домогосподарства', 'Транспорт', 'Сфера послуг', 'Сільське господарство'], axis=1) df.groupby(['year', 'source'], as_index=False).apply(reshape_df ).reset_index(drop=True ).fillna(0 # ).to_csv('data_by_source.csv', index=False) pd.read_csv('data_by_source.csv') order = df[['source', 'is_vde']].drop_duplicates( ).reset_index( ).sort_values('index', ascending=False ).reset_index(drop=True) order = dict(map(lambda v: (v[1], {'index': v[0], 'is_vde': bool(v[2])}), order.values)) with open('sources_order.json', 'w') as f: json.dump(order, f, allow_nan=False, ensure_ascii=False) measures = {k.capitalize(): v for k, v in measures.items()} with open('measures.json', 'w') as f: json.dump(measures, f, allow_nan=False, ensure_ascii=False) # ## new data # + df = pd.read_csv('data_from_report.csv', sep='\t') df.columns = ['source', 'sphere', 'scenario', 'Y2012', 'Y2015', 'Y2020', 'Y2025', 'Y2030', 'Y2035', 'Y2040', 'Y2045', 'Y2050'] df = pd.wide_to_long(df, stubnames='Y', i=['source', 'sphere', 'scenario'], j='year').reset_index() df.columns = ['source', 'sphere', 'scenario', 'year', 'ktne'] df = df.loc[df.source != 'Всього'].copy() # + totals = df.loc[df.sphere == 'загалом'].copy() df = df.loc[df.sphere != 'загалом'].copy() new_total = df.groupby(['source', 'scenario', 'year'], as_index=False).ktne.sum() new_total['sphere'] = 'загалом' df = pd.concat([df, new_total], ignore_index=True, sort=False) # df.to_csv('data_report_longest.csv', index=False) # - df = pd.read_csv('data_report_longest.csv') # + def reshape_df(d): new = d.pivot('year', 'sphere', 'ktne').reset_index() new['source'] = d.source.iloc[0] new['scenario'] = d.scenario.iloc[0] return new df.groupby(['year', 'source', 'scenario'], as_index=False ).apply(reshape_df ).reset_index(drop=True ).fillna(0 ).to_csv('data_report_wide.csv', index=False) # - # ### parse costs import re # + with open('costs_to_parse.txt') as f: costs = f.read() costs = re.sub('\s{2,}|\s+(?<=\d)|\s+(?=\d)', '\n', costs) scens = re.split('\s*-{3,}\s*', costs) snames = ['Консервативний', 'Ліберальний', 'Революційний'] sdfs = []; for i, s in enumerate(scens): sdf = pd.DataFrame(list(map(lambda r: r.strip().split('\n'), re.findall('[^0-9]+[^А-яЄєЇїІіҐґ]+', s)))) sdf['scenario'] = snames[i] sdfs.append(sdf) sdf = pd.concat(sdfs, ignore_index=True, sort=False ).replace({'-': 0}) sdf.columns = ['action', 'Y2012', 'Y2015', 'Y2020', 'Y2025', 'Y2030', 'Y2035', 'Y2040', 'Y2045', 'Y2050', 'scenario'] sdf = pd.wide_to_long(sdf, stubnames='Y', i=['action', 'scenario',], j='year').reset_index() sdf.columns = ['action', 'scenario', 'year', 'm_euro'] # sdf.to_csv('costs.csv', index=False) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="gqWWnLRWENCi" colab_type="code" colab={} # !echo '{"username":"ronniesong0809","key":"269788090750b1eb7d55a04d86886708"}' > kaggle.json # !cp /content/kaggle.json /root/.kaggle/kaggle.json # + id="r4f3T6XZaqqj" colab_type="code" outputId="7b2e4784-9790-40b6-8f49-7ee4e0b316d2" colab={"base_uri": "https://localhost:8080/", "height": 34} # !cp /content/kaggle.json /root/.kaggle/kaggle.json # !chmod 600 /root/.kaggle/kaggle.json # !kaggle config set -n path -v/content # + id="WZi-s11tAfzP" colab_type="code" outputId="e9c8f715-fc0c-4996-9e51-6d4b11b0b3a5" colab={"base_uri": "https://localhost:8080/", "height": 
68} # !kaggle datasets download -d paultimothymooney/chest-xray-pneumonia # + id="H6ZhhFl19DN-" colab_type="code" colab={} # !unzip datasets/paultimothymooney/chest-xray-pneumonia/chest-xray-pneumonia.zip # + id="bUHTcnt37Tuy" colab_type="code" colab={} import pandas as pd import numpy as np import os import matplotlib.pyplot as plt # %matplotlib inline # + id="b8LbRxUn7XcB" colab_type="code" outputId="5d91fcc0-1d81-48fa-f0a5-19669bcc2831" colab={"base_uri": "https://localhost:8080/", "height": 34} import keras from keras.models import Sequential from keras.layers import Conv2D,MaxPool2D,Dense,Dropout,Softmax,Input,Flatten, SeparableConv2D from keras.optimizers import Adam,RMSprop,SGD from keras.layers.merge import add from keras.layers import Dense, Activation, Flatten from keras.layers import Conv2D, MaxPooling2D, BatchNormalization from keras.layers import BatchNormalization from keras.metrics import categorical_accuracy from keras.preprocessing.image import ImageDataGenerator from sklearn.metrics import roc_auc_score,roc_curve,accuracy_score,recall_score # from tensorflow import set_random_seed import tensorflow as tf # + id="tkSP5sFf7ed3" colab_type="code" colab={} os.environ['PYTHONHASHSEED'] = "0" np.random.seed(1) # set_random_seed(2) tf.random.set_seed(2) # + [markdown] id="xAMgZSovBAt1" colab_type="text" # ## Model 1: Simple CNN # + id="YWbxCW1WA_0w" colab_type="code" colab={} model = Sequential() model.add(Conv2D(filters=32, kernel_size=(3,3), activation="relu", padding="same", input_shape=(64,64,1))) model.add(Conv2D(filters=32, kernel_size=(3,3), activation="relu", padding="same")) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(rate=0.25)) model.add(Conv2D(filters=64, kernel_size=(3,3), activation="relu", padding="same")) model.add(Conv2D(filters=64, kernel_size=(3,3), activation="relu", padding="same")) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(rate=0.25)) model.add(Flatten()) model.add(Dense(1024,activation="relu")) model.add(BatchNormalization()) model.add(Dropout(rate=0.4)) model.add(Dense(2, activation="softmax")) # + [markdown] id="WIkjg-atC3OL" colab_type="text" # ## Model 2: VGG16 # + id="-eEMxIhAC56O" colab_type="code" colab={} # from keras.layers import SeparableConv2D, ZeroPadding2D # model = Sequential() # model.add(Conv2D(64, (3,3), activation='relu', padding='same', name='Conv1_1', input_shape=(224,224,1))) # model.add(Conv2D(64, (3,3), activation="relu", padding="same")) # # model.add(BatchNormalization()) # model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding="SAME")) # model.add(SeparableConv2D(128, (3,3), activation='relu', padding='same')) # model.add(SeparableConv2D(128, (3,3), activation='relu', padding='same')) # # model.add(BatchNormalization()) # model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding="SAME")) # model.add(SeparableConv2D(256, (3,3), activation='relu', padding='same')) # model.add(SeparableConv2D(256, (3,3), activation='relu', padding='same')) # model.add(SeparableConv2D(256, (3,3), activation='relu', padding='same')) # # model.add(BatchNormalization()) # model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding="SAME")) # model.add(SeparableConv2D(512, (3,3), activation='relu', padding='same')) # model.add(SeparableConv2D(512, (3,3), activation='relu', padding='same')) # model.add(SeparableConv2D(512, (3,3), activation='relu', padding='same')) # # model.add(BatchNormalization()) # model.add(MaxPooling2D(pool_size=(2, 
2), strides=(2, 2), padding="SAME")) # model.add(SeparableConv2D(512, (3,3), activation='relu', padding='same')) # model.add(SeparableConv2D(512, (3,3), activation='relu', padding='same')) # model.add(SeparableConv2D(512, (3,3), activation='relu', padding='same')) # # model.add(BatchNormalization()) # model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding="SAME")) # model.add(Flatten()) # model.add(Dense(4096, activation='relu')) # model.add(Dense(4096, activation='relu')) # # model.add(Dropout(0.7)) # model.add(Dense(1000, activation='relu')) # # model.add(Dropout(0.5)) # model.add(Dense(2, activation='softmax')) # + id="LblRJecABHLW" colab_type="code" outputId="3ef23511-c999-4d60-e166-5e790afe86ad" colab={"base_uri": "https://localhost:8080/", "height": 663} model.summary() # + id="2r5sGSNP7hBc" colab_type="code" outputId="6322f418-59bf-472f-a378-9e1740bfd65f" colab={"base_uri": "https://localhost:8080/", "height": 68} gen = ImageDataGenerator() train_batches = gen.flow_from_directory("chest_xray/chest_xray/train", model.input_shape[1:3], color_mode="grayscale", shuffle=True, seed=1, batch_size=16) valid_batches = gen.flow_from_directory("chest_xray/chest_xray/val", model.input_shape[1:3], color_mode="grayscale", shuffle=True, seed=1, batch_size=16) test_batches = gen.flow_from_directory("chest_xray/chest_xray/test", model.input_shape[1:3], color_mode="grayscale", shuffle=False, batch_size=8) # + [markdown] id="l7RSffCOBxVp" colab_type="text" # ## **Experiment 1.** learning rate = 0.01 # + id="D8i4D_TRBKS7" colab_type="code" outputId="abc60bb6-7fa6-48e9-e754-c864226c02e3" colab={"base_uri": "https://localhost:8080/", "height": 136} model.compile(Adam(lr=0.001),loss="categorical_crossentropy", metrics=["accuracy"]) model.fit_generator(train_batches, validation_data=valid_batches, epochs=3) # + id="R3GIQE86UbUM" colab_type="code" outputId="5ab99b6a-47e8-4572-ed36-250afd4b8598" colab={"base_uri": "https://localhost:8080/", "height": 272} p = model.predict_generator(test_batches, verbose=True) pre = pd.DataFrame(p) pre["filename"] = test_batches.filenames pre["label"] = (pre["filename"].str.contains("PNEUMONIA")).apply(int) pre['pre'] = (pre[1]>0.5).apply(int) print(pre) # + id="3GlXF49nUdnz" colab_type="code" outputId="0c73b3cd-c8f2-4d03-f4a7-5908f4ae619d" colab={"base_uri": "https://localhost:8080/", "height": 34} recall_score(pre["label"], pre["pre"]) # + id="w4mL0WESUe6A" colab_type="code" outputId="b52fc11a-03bf-45ec-9591-3b85ddd1d9b0" colab={"base_uri": "https://localhost:8080/", "height": 34} roc_auc_score(pre["label"],pre[1]) # + id="ebdrbMPlUgWQ" colab_type="code" outputId="3d308110-88b3-4979-d695-105864615d19" colab={"base_uri": "https://localhost:8080/", "height": 296} tpr,fpr,thres = roc_curve(pre["label"],pre[1]) roc = pd.DataFrame([tpr,fpr]).T roc.plot(x=0,y=1) # + [markdown] id="KKHHXKBIUg3F" colab_type="text" # ## **Experiment 2.** learning rate = 0.001 # + id="gYZ8JDFFUZVt" colab_type="code" outputId="468e3341-c09c-4f13-8288-7b8977da4f89" colab={"base_uri": "https://localhost:8080/", "height": 136} model.compile(Adam(lr=0.001),loss="categorical_crossentropy", metrics=["accuracy"]) model.fit_generator(train_batches, validation_data=valid_batches, epochs=3) # + id="lGb9mTUHBeRy" colab_type="code" outputId="b4284156-78ac-4c7f-d2ba-9605e308a0cf" colab={"base_uri": "https://localhost:8080/", "height": 272} p = model.predict_generator(test_batches, verbose=True) pre = pd.DataFrame(p) pre["filename"] = test_batches.filenames pre["label"] = 
(pre["filename"].str.contains("PNEUMONIA")).apply(int) pre['pre'] = (pre[1]>0.5).apply(int) print(pre) # + id="1SeACpnKBgfz" colab_type="code" outputId="7fb904c0-11f3-4365-a77b-da2b99e72cea" colab={"base_uri": "https://localhost:8080/"} recall_score(pre["label"], pre["pre"]) # + id="Z56dibIqG4o4" colab_type="code" outputId="2800cb90-54d3-44b3-9f07-844e25faf19b" colab={"base_uri": "https://localhost:8080/"} roc_auc_score(pre["label"],pre[1]) # + id="SXbGksFkG3pA" colab_type="code" outputId="54a40e3c-e7d7-4b21-e849-acad78f5cd75" colab={"base_uri": "https://localhost:8080/", "height": 296} tpr,fpr,thres = roc_curve(pre["label"],pre[1]) roc = pd.DataFrame([tpr,fpr]).T roc.plot(x=0,y=1) # + [markdown] id="_x4qqbHqBv2l" colab_type="text" # ## **Experiment 3.** learning rate = 0.0001 # + id="8p1pCeFiBL8e" colab_type="code" outputId="b8a3984a-01fa-4d1d-e111-cd9f09fe745b" colab={"base_uri": "https://localhost:8080/", "height": 136} model.compile(Adam(lr=0.0001), loss="categorical_crossentropy", metrics=["accuracy"]) model.fit_generator(train_batches, validation_data=valid_batches, epochs=3) # + id="y8pcUn4UBub9" colab_type="code" outputId="7c9dd651-7add-4f09-a172-611c6e79742d" colab={"base_uri": "https://localhost:8080/", "height": 34} p = model.predict_generator(test_batches, verbose=True) pre = pd.DataFrame(p) pre["filename"] = test_batches.filenames pre["label"] = (pre["filename"].str.contains("PNEUMONIA")).apply(int) pre['pre'] = (pre[1]>0.5).apply(int) # + id="IeiPJQpdBmJm" colab_type="code" outputId="4ba40536-e982-49d3-914f-0f9b148d24df" colab={"base_uri": "https://localhost:8080/"} recall_score(pre["label"],pre["pre"]) # + id="y27upvTRTxAK" colab_type="code" outputId="b8aecbea-c29f-407a-fd56-a81b06eb841a" colab={"base_uri": "https://localhost:8080/"} roc_auc_score(pre["label"],pre[1]) # + id="fGbBJpzrTyOK" colab_type="code" outputId="94848fe9-0929-4455-d3f4-7bef14bb9d03" colab={"base_uri": "https://localhost:8080/", "height": 296} tpr,fpr,thres = roc_curve(pre["label"],pre[1]) roc = pd.DataFrame([tpr,fpr]).T roc.plot(x=0,y=1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- print("") r = int(input()) w = int(input()) h = int(input()) aoc = 3.14*r*r poc = 2*3.14*r aos = h*h aor = w*h print(aoc , " " ,poc , " " , aos , " " , aor) c = int(input()) f = (c * 1.8) + 32 print(f) i = int(input()) j = int(input()) k = int(input()) if i>k and i>j : print("{} is greatest".format(i)) elif j>i and j>k: print("{} is greatest".format(j)) else : print("{} is greatest".format(k)) # + import cmath a = int(input()) b = int(input()) c = int(input()) d = (b**2) - (4*a*c) s1= (-b-cmath.sqrt(d))/(2*a) s2 = (-b+cmath.sqrt(d))/(2*a) print("{0} and {1}".format(s1,s2)) # - for i in range(65,127): print(i, " ",chr(i)) x = int(input()) fact = 1 for i in range(1,x+1): fact *= i print(fact) x = int(input()) y = int(input()) while(y): x,y = y,x%y print(x) a = int(input()) b = int(input()) print(a**b) num = int(input()) if num > 1: for i in range(2, num): if (num % i) == 0: print(num, "is not a prime") break else: print(num, "is a prime") else: print(num, "is not a prime") num = int(input()) s = 0 t = num while temp > 0: d = t % 10 s += d ** 3 t //= 10 if num == s: print(num,"is an Armstrong number") else: print(num,"is not an Armstrong number") for num in range(1, 1000 + 1): s = 0 t = num while t > 0: d = t % 10 s += d ** 3 t //= 10 if num == s: 
print(num) def nas(num): return num == sum([int(x) ** len(str(num)) for x in str(num)]) num = int(input()) nas(num) # + def fibo(n): a = 0 b = 1 c = a+b print(0) print(1) while c # + run_control={"frozen": false, "read_only": false} import numpy as np # %matplotlib inline import sys sys.path.append("../") from optimize_dhamed import * # + run_control={"frozen": false, "read_only": false} c_l = [np.genfromtxt("count_matrix_1.txt")] # + run_control={"frozen": false, "read_only": false} v_ar = np.genfromtxt("wfile.txt")[:,1].reshape((9,1)) # + run_control={"frozen": false, "read_only": false} gi= np.zeros(9) gi[0] = 1 gi[-1] = 0.1 # + [markdown] run_control={"frozen": false, "read_only": false} # * Subtract minimum bias energy. # + run_control={"frozen": false, "read_only": false} v_ar.shape # + run_control={"frozen": false, "read_only": false} og = run_dhamed(c_l, -np.log(v_ar), g_init=-(np.zeros(9))*-1.0, numerical_gradients=False, gtol=10**-9, maxiter=10000) # + run_control={"frozen": false, "read_only": false} plt.plot(og*-1) # + run_control={"frozen": false, "read_only": false} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- import numpy as np import tensorflow as tf import tensorflow.contrib.slim as slim from six.moves import cPickle as pickle from six.moves import range import matplotlib.pyplot as plt # + pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) # + image_size = 28 num_labels = 10 num_channels = 1 # grayscale import numpy as np def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size, image_size, num_channels)).astype(np.float32) # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...] labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) # - def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0]) def model(images, is_training=False): # Create a small network. 
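# The small network assembled below consists of two 3x3 slim conv layers with 16 filters
# (the first strided by 2), a 3x3 max pool with stride 2, a flatten, and a 10-way fully
# connected layer whose output is passed through a softmax. Note that the is_training
# flag in the signature is not used by any layer here.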
net = slim.conv2d(images, 16, [3, 3], stride=2, scope='conv1') net = slim.conv2d(net, 16, [3, 3], scope='conv2') # net = slim.ops.conv2d(net, 128, [3, 3], scope='conv3') net = slim.max_pool2d(net, [3, 3], stride=2, scope='pool3') net = slim.flatten(net) net = slim.fully_connected(net, 10, scope='logits') net = tf.nn.softmax(net) return net # + train_log_dir = 'slim-test' if not tf.gfile.Exists(train_log_dir): tf.gfile.MakeDirs(train_log_dir) with tf.Graph().as_default(): # Set up the data loading: images, labels = train_dataset, tf.convert_to_tensor(train_labels) # Define the model: train_prediction = model(images, is_training=True) # Specify the loss function: slim.losses.softmax_cross_entropy(train_prediction, labels) total_loss = slim.losses.get_total_loss() tf.summary.scalar('losses/total_loss', total_loss) # Specify the optimization scheme: optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001) # create_train_op that ensures that when we evaluate it to get the loss, # the update_ops are done and the gradient updates are computed. train_tensor = slim.learning.create_train_op(total_loss, optimizer) # Actually runs training. slim.learning.train(train_tensor, train_log_dir) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Reproduce Leaf Counting # Keras Implementation of InceptionV3 # # + # ensures back compatibility from tensorflow.keras import backend as K # for reading and preprocessing data from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.models import Model from tensorflow.keras.layers import Flatten, Dense, Dropout from tensorflow.keras.optimizers import Adam # inceptionv3 model from keras with pretrained weights from tensorflow.python.keras.applications.inception_v3 import InceptionV3, preprocess_input import tensorflow as tf # for plots et al. 
import matplotlib.pyplot as plt import numpy as np from sklearn.metrics import classification_report, confusion_matrix import pandas as pd import matplotlib.pyplot as plt import seaborn as sn # + # set these parameters dataset = 'LeafCount' modelName = 'InceptionV3' load_weights = True # + DATASET_PATH = '../dataset/' IMAGE_SIZE = (256, 256) NUM_CLASSES = 9 BATCH_SIZE = 8 NUM_EPOCHS = 20 WEIGHTS_FINAL = ''.join(['model-',modelName,'-',dataset,'.h5']) # + # specify data augmentation parameters for training images train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input, rotation_range=30, width_shift_range=0.2, height_shift_range=0.2, zoom_range=0.4, horizontal_flip=True, fill_mode='nearest', validation_split=0.1) train_batches = train_datagen.flow_from_directory(DATASET_PATH + '/train', target_size=IMAGE_SIZE, interpolation='bicubic', class_mode='categorical', shuffle=True, batch_size=BATCH_SIZE, subset="training", classes=['1','2','3','4','5','6','7','8','9+']) valid_batches = train_datagen.flow_from_directory(DATASET_PATH + '/train', target_size=IMAGE_SIZE, interpolation='bicubic', class_mode='categorical', shuffle=True, batch_size=BATCH_SIZE, subset="validation", classes=['1','2','3','4','5','6','7','8','9+']) # + # show class indices print('****************') for cls, idx in train_batches.class_indices.items(): print('Class #{} = {}'.format(idx, cls)) print('****************') # + # inceptionv3 - model setup model = InceptionV3(include_top=False, weights='imagenet', input_tensor=None, input_shape=(IMAGE_SIZE[0], IMAGE_SIZE[1], 3)) # + # ResNet model from keras with pretrained weights from tensorflow.python.keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input # adding final FC layer at the end of model x = model.output x = Flatten()(x) x = Dropout(0.5)(x) output_layer = Dense( NUM_CLASSES, activation='softmax', name='softmax')(x) model = Model(inputs=model.input, outputs=output_layer) # ensure all layers are trainable for layer in model.layers: layer.trainable = True # setting up optimizer for model model.compile(optimizer=Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy']) # - print(model.summary()) # + # load saved weights # if load_weights: model.load_weights(WEIGHTS_FINAL) # - # train the model hist = model.fit_generator(train_batches, steps_per_epoch = train_batches.samples // BATCH_SIZE, validation_data = valid_batches, validation_steps = valid_batches.samples // BATCH_SIZE, epochs = NUM_EPOCHS) # save trained weights model.save(WEIGHTS_FINAL) # + # Plot Results N=NUM_EPOCHS plt.style.use("ggplot") plt.figure() plt.plot(np.arange(0, N), hist.history["loss"], label="train_loss") plt.plot(np.arange(0, N), hist.history["val_loss"], label="val_loss") plt.title("Training/Validation Loss on Dataset") plt.xlabel("Epoch #") plt.ylabel("Loss") plt.legend(loc="lower left") plt.savefig("plot_loss.png") plt.figure() plt.plot(np.arange(0, N), hist.history["accuracy"], label="train_acc") plt.plot(np.arange(0, N), hist.history["val_accuracy"], label="val_acc") plt.title("Training/Validation Accuracy on Dataset") plt.xlabel("Epoch #") plt.ylabel("Accuracy") plt.legend(loc="lower left") plt.savefig("plot.png") # + # test model test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input) test_batches = test_datagen.flow_from_directory(DATASET_PATH + '/test', target_size=IMAGE_SIZE, interpolation='bicubic', class_mode='categorical', shuffle=False, batch_size=1, classes=['1','2','3','4','5','6','7','8','9+']) # out = 
model.evaluate_generator(test_batches, use_multiprocessing=True) # print(list(zip(model.metrics_names,out))) # - test_batches.reset() Y_pred = model.predict_generator(test_batches, use_multiprocessing=True) # print(len(Y_pred)) # + y_pred = np.argmax(Y_pred, axis=1) print('\n\n Classification Report\n') target_names = list(test_batches.class_indices.keys()) print(target_names) print(classification_report(list(test_batches.classes), y_pred, target_names=target_names)) print('\n\nConfusion Matrix\n') cm = confusion_matrix(test_batches.classes, y_pred) row_sums = cm.sum(axis=1) cm = cm / row_sums # df_cm = pd.DataFrame(cm, index = ['cocklebur','foxtail','pigweed','ragweed'], columns = ['cocklebur','foxtail','pigweed','ragweed']) df_cm = pd.DataFrame(cm) # print(df_cm) # flights = df_cm.pivot("month", "year", "passengers") plt.figure(figsize = (5,5)) plt.title(modelName) sn.heatmap(df_cm, annot=True, fmt='0.0%', cbar=False, xticklabels=['1','2','3','4','5','6','7','8','9+'], yticklabels=['1','2','3','4','5','6','7','8','9+']) saveName = ''.join([modelName,'-',dataset,'.png']) plt.savefig(saveName) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="4WkKysp1WiQv" colab_type="code" colab={} from google.colab import drive import pandas as pd import numpy as np import datadotworld as dw # + id="mERMNk7nYQMl" colab_type="code" colab={} # # !pip install datadotworld # # !pip install datadotworld[pandas] # + id="niYI1OLhYfM2" colab_type="code" colab={} # # !dw configure # + id="FIcUVlvAY3QR" colab_type="code" colab={} # drive.mount("/content/drive") # + id="J9OsQMB-ZDur" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="2153f8d0-f7a3-40f1-9252-a10b01f438b7" executionInfo={"status": "ok", "timestamp": 1581527808835, "user_tz": -60, "elapsed": 1704, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} cd "drive//My Drive/Colab Notebooks/dw_Matrix" # + id="uW13OtDqZzQ2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="874653d8-1371-4986-db42-df923009cc5d" executionInfo={"status": "ok", "timestamp": 1581527848289, "user_tz": -60, "elapsed": 4856, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} # ls matrix_one # + id="N-TjBv2lZ8IR" colab_type="code" colab={} # !mkdir data # + id="B6vbR5VJaBwT" colab_type="code" colab={} # !echo 'data' > .gitignore # + id="zX5XQHzvae3o" colab_type="code" colab={} # !git add .gitignore # + id="Z6u7Mct1akVK" colab_type="code" colab={} data = dw.load_dataset('datafiniti/mens-shoe-prices') # + id="iyQVMMlRa_YN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 125} outputId="5723252d-8012-47d8-8e28-fe3db868ae27" executionInfo={"status": "ok", "timestamp": 1581528230607, "user_tz": -60, "elapsed": 1789, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} df = data.dataframes['7004_1'] df.shape # + id="bqrWzOKbbMFR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 618} outputId="4857aee8-ffe6-460e-d74b-4bb2b1920416" executionInfo={"status": "ok", "timestamp": 1581528255217, "user_tz": -60, "elapsed": 678, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} df.sample(5) # + id="bt3HhnHZbgfp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 228} 
outputId="20616b4d-f7b9-432e-bb3f-98df55ca43c5" executionInfo={"status": "ok", "timestamp": 1581528377531, "user_tz": -60, "elapsed": 680, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} df.columns # + id="0Dny0JP7btA9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 105} outputId="a4670222-647e-4ede-dd3c-d7a07bb74fa6" executionInfo={"status": "ok", "timestamp": 1581528433818, "user_tz": -60, "elapsed": 786, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} df.prices_currency.unique() # + id="ljR6A-gIcMEs" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 263} outputId="f4c2d3d9-2919-40da-e417-c355d90e6af4" executionInfo={"status": "ok", "timestamp": 1581528514611, "user_tz": -60, "elapsed": 662, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} df.prices_currency.value_counts(normalize=True) # + id="dkCKn3_ncYlq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="367983c9-cb97-45c9-965f-c2034a6524b2" executionInfo={"status": "ok", "timestamp": 1581528634527, "user_tz": -60, "elapsed": 771, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} df_usd = df[ df.prices_currency == 'USD'].copy() df_usd.shape # + id="7be_bhe4c9D2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="3738e74d-4553-47fb-983e-c618a3e9681c" executionInfo={"status": "ok", "timestamp": 1581528921310, "user_tz": -60, "elapsed": 741, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} df_usd['prices_amountmin'] = df_usd.prices_amountmin.astype(np.float) df_usd['prices_amountmin'].hist() # + id="XJtTC3bWdcds" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="2e8aceb8-2cbd-456c-f53a-c7d87d18ce58" executionInfo={"status": "ok", "timestamp": 1581529127694, "user_tz": -60, "elapsed": 750, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} filter_max = np.percentile(df_usd['prices_amountmin'],99) filter_max # + id="F-aOJaDYeN9y" colab_type="code" colab={} df_usd_filter = df_usd[ df_usd['prices_amountmin'] < filter_max ] # + id="52nUZN_zeuDN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="727f77eb-e68f-4c7f-acf8-89779a87e895" executionInfo={"status": "ok", "timestamp": 1581529280544, "user_tz": -60, "elapsed": 1193, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} df_usd_filter.prices_amountmin.hist(bins=100) # + id="2ZYV48WofMbu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="b02821cd-dbfc-4514-dc81-e09ac8a46597" executionInfo={"status": "ok", "timestamp": 1581529539988, "user_tz": -60, "elapsed": 1824, "user": {"displayName": "", "photoUrl": "", "userId": "02669959677222457689"}} # ls matrix_one # + id="wYBUAU74gWJT" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Image Preperation # #### Data set: https://github.com/ardamavi/Sign-Language-Digits-Dataset import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Activation, Dense, Flatten, BatchNormalization, Conv2D, MaxPool2D from 
tensorflow.keras.optimizers import Adam from tensorflow.keras.metrics import categorical_crossentropy from tensorflow.keras.preprocessing.image import ImageDataGenerator from sklearn.metrics import confusion_matrix import itertools import os import shutil import random import matplotlib.pyplot as plt # %matplotlib inline pwd # To show current Jupyter notebook directory train_path = 'Sign-Language-Digits-Dataset/train' valid_path = 'Sign-Language-Digits-Dataset/valid' test_path = 'Sign-Language-Digits-Dataset/test' train_batches = ImageDataGenerator(preprocessing_function = keras.applications.mobilenet.preprocess_input).flow_from_directory( train_path, target_size = (224, 224), batch_size = 10) valid_batches = ImageDataGenerator(preprocessing_function = keras.applications.mobilenet.preprocess_input).flow_from_directory( valid_path, target_size = (224, 224), batch_size = 10) test_batches = ImageDataGenerator(preprocessing_function = keras.applications.mobilenet.preprocess_input).flow_from_directory( test_path, target_size = (224, 224), batch_size = 10, shuffle = False) # ## Modify Model mobile = keras.applications.mobilenet.MobileNet() mobile.summary() x = mobile.layers[-6].output predictions = Dense(10, activation = 'softmax')(x) model = keras.Model(inputs = mobile.input, outputs = predictions) model.summary() for layer in model.layers[:23]: layer.trainable = False # ## Training Model model.compile(Adam(learning_rate = 0.0001), loss = 'categorical_crossentropy', metrics = ['accuracy']) model.fit_generator(train_batches, steps_per_epoch = 18, validation_data = valid_batches, validation_steps = 3, epochs = 60, verbose = 2) # ## Predict Sign Language Digits test_labels = test_batches.classes predictions = model.predict(test_batches, steps = 5, verbose = 0) # This function has been taken from the website of scikit Learn. link: https://scikit-learn.org/0.18/auto_examples/model_selection/plot_confusion_matrix.html def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') cm = confusion_matrix(test_labels, predictions.argmax(axis = 1)) test_batches.class_indices cm_plot_labels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] plot_confusion_matrix(cm, cm_plot_labels, title = 'Confusion Matrix') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Sampling algorithm # --- # Implementation of Vertex's sampling algorithm described in [_Web-scale information extraction with vertex_](https://ieeexplore.ieee.org/abstract/document/5767842) # # ## Outline # --- # 1. [Retrieve XPaths from a page](#1.-retrieve-xpaths-from-a-page) # 2. [Compute XPath weight]() # 3. 
[Sampling algorithm]() # # ## 1. Retrieve XPaths from a page # As reported in the paper: # # > One simple way to achieve this is to treat each page as a set of XPaths contained in it, and then greedily select pages that cover the most number of uncovered XPaths. # # However, the paper does not specify which Xpaths are extracted from a page. Therefore we decided to extract XPaths which retrieves textual leaf nodes. # # To do so we used the [lxml]() library to select all leaf textual nodes in a page. Then, using the same library we obtained the respective XPath of each leaf node previously selected. import requests from lxml import html from collections import defaultdict # ### ``get_all_xpath`` # Given a page HTML source code returns a dict { _xpath_ : _value_ }, where _xpath_ is an xpath and _value_ is the string retrieved from the xpath on _src_ def get_all_xpath(html_src): # select nodes whose children include text nodes XPATH_SELECTOR = "//*[child::text()]" root = html.fromstring(html_src) tree = root.getroottree() # leaf_nodes is not properly a list of all leaf nodes. # It contains nodes which are parent of text elements in the DOM leaf_nodes = root.xpath(XPATH_SELECTOR) xpath_value_dict = {} # extract xpath from previously selected nodes and filter out "noisy" nodes for leaf in leaf_nodes: xpath = tree.getpath(leaf) + "/text()" # Filtering out xpaths which extract javascript code or css stylesheet if "/script" not in xpath and "/noscript" not in xpath and "/style" not in xpath: selected_values = root.xpath(xpath) selected_string = ''.join(selected_values).strip() # Filtering out xpaths which extract empty strings if selected_string: xpath_value_dict.update({xpath: selected_string}) return xpath_value_dict # ## 2. Compute XPaths weights # --- # #### get_data_structures # Return necessary data structures for computing xpaths weights def get_data_structures(url_to_html_map): url_to_xpaths = {} xpath_to_value_list = defaultdict(list) for url in list(url_to_html_map): page = url_to_html_map[url] xpath_to_single_value = get_all_xpath(page) xpath_list = list(xpath_to_single_value) url_to_xpaths[url] = xpath_list for xpath in xpath_to_single_value: value = xpath_to_single_value[xpath] xpath_to_value_list[xpath].append(value) return (url_to_xpaths, xpath_to_value_list) # ### Compute frequency # Given a list of values extracted from a xpath _Xi_ returns the frequency of _Xi_ def compute_frequency(values_list): return len(values_list) # ### Compute informativeness # Given cluster size and a list of values extracted from a xpath _Xi_ returns the informativeness of _Xi_ def compute_informativeness(M, values_list): values_set = set(values_list) Ti = len(values_set) sum_F_Xi = compute_frequency(values_list) return 1 - sum_F_Xi/(M*Ti) # ### xpath weight # Given a list of values extracted from a xpath _Xi_ returns the weight of _Xi_ def xpath_weight(cluster_size, list_of_values): return compute_frequency(list_of_values)*compute_informativeness(cluster_size, list_of_values) # ### xpath_to_weight # Arguments: # - **xpath_to_values_map**: dictionary where keys are xpath and values are values retrieved from the xpath # - **cluster_size** # # Returns a dictionary where keys are xpaths and values are their weights def xpath_to_weight(xpath_to_values_map, cluster_size): result = {} for xpath in xpath_to_values_map: list_of_values = xpath_to_values_map[xpath] weight = xpath_weight(cluster_size, list_of_values) result.update({xpath: weight}) return result # ### page_weight # Arguments: # - **list of xpath**: 
list of xpath of a given page # - **xpath_to_weight_map**: dictionary where keys are xpath and values are their weights # - **cluster_size** # - **intersection** (optional): if None nothing happens. Otherwise only xpath in **list of xpath** $\cap$ **intersection** will be considered in computing weight # # Returns page's weight def page_weight(list_of_xpath, xpath_to_weight_map, cluster_size, intersection = None): weight = 0 if intersection is None: intersection = list_of_xpath for xpath in list_of_xpath: if xpath in intersection: weight_of_xpath = xpath_to_weight_map[xpath] weight += weight_of_xpath return weight # ### Max weight page # Arguments: # - **url_to_xpaths_map**: dictionary where keys are urls and values are xpaths extracted from urls # - **xpath_to_weight_map**: dictionary where keys are xpath and values are their weights # - **cluster_size** # - **intersection** (optional): if None nothing happens. Otherwise only xpath in **list of xpath** $\cap$ **intersection** will be considered in computing weight # # Output: # - the URL of the page with the highest weight value def max_weight_page(url_to_xpaths_map, xpath_to_weight_map, cluster_size, intersection = None): max_weight = 0 max_weight_page = None for url in url_to_xpaths_map: xpaths = url_to_xpaths_map[url] weight = page_weight(xpaths, xpath_to_weight_map, cluster_size, intersection) if weight > max_weight: max_weight = weight max_weight_page = url print("INFO\tMax weight url is {}".format(max_weight_page)) print("INFO\tMax weight is {}".format(max_weight)) return max_weight_page # ### coverage # Returns cluster's page coverage. TODO: add more explanations def coverage(X, sample_pages_urls, cluster_pages_urls, url_to_xpaths_map, xpath_to_weight_map): covered = 0 cluster_size = len(cluster_pages_urls) for url in cluster_pages_urls: if url not in sample_pages_urls: xpaths = url_to_xpaths_map[url] weight = page_weight(xpaths, xpath_to_weight_map, cluster_size, X) if weight == 0: covered = covered + 1 return (covered + len(sample_pages_urls))/cluster_size #another metric to evaluate sample coverage def coverage2(samplePagesUrl,urlToXpathsMap,XpathNumber): sampleXpathList=[] for url in samplePagesUrl: xpaths=urlToXpathsMap[url] sampleXpathList=sampleXpathList+xpaths sampleXpathSet=set(sampleXpathList) return (len(sampleXpathSet)/XpathNumber) # ## 3. Sampling algorithm # --- # + from copy import copy def sampling(url_to_html_map, k = 20): cluster_size = len(url_to_html_map) cluster_pages_urls = list(url_to_html_map) url_to_xpaths_map, xpath_to_values_map = get_data_structures(url_to_html_map) url_to_xpaths_map_copy=copy(url_to_xpaths_map) xpath_to_weight_map = xpath_to_weight(xpath_to_values_map, cluster_size) xPathsSize=len(xpath_to_weight_map) X = list(xpath_to_values_map) #insert dictionary keys into a list result = [] iteration_no = 1 while X and len(result) < k: print("-------------------") print("INFO\tIteration {}".format(iteration_no)) max_weight_url = max_weight_page(url_to_xpaths_map, xpath_to_weight_map, cluster_size, X) result.append(max_weight_url) X = [xpath for xpath in X if xpath not in url_to_xpaths_map[max_weight_url]] url_to_xpaths_map.pop(max_weight_url) coverage_value = coverage(X, result, cluster_pages_urls, url_to_xpaths_map_copy, xpath_to_weight_map) coverage2_value= coverage2(result, url_to_xpaths_map_copy, xPathsSize) print("INFO\tWeight based Coverage is {}".format(coverage_value)) print("INFO\tXpath based Coverage is {}".format(coverage2_value)) iteration_no = iteration_no +1 return result # - # ## 4. 
Testing # %matplotlib inline # Importing libraries import matplotlib.pyplot as plt import pandas as pd df = pd.read_csv('datasets/books_dataset.csv', nrows = 100) df.head() df.describe() def create_dict(df): result = {} for index, row in df.iterrows(): key = row['url'] value = row['src'] result.update({key: value}) return result cluster = create_dict(df) sample_pages = sampling(cluster) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt import time # #%matplotlib inline # + # %matplotlib notebook import matplotlib #modify some matplotlib parameters to manage the images for illustrator matplotlib.rcParams['pdf.fonttype'] = 42 matplotlib.rcParams['ps.fonttype'] = 42 # - # create a initial grid grid_size = 500 #evetually define as an input grid = np.zeros((grid_size,grid_size)) # define the initial pattern grid[int(grid_size/2), int(grid_size/2)]=2 grid def show_grid(grid_array): plt.figure() plt.imshow(grid, cmap=plt.cm.gray) plt.show() show_grid(grid) # + # Define rules def rule1(grid): # 2 = growing cell # 1 = stationary cell # 0 = empty space #this rule makes each cell divide only one time per step new_grid = grid #to separate the evaluation from the actualization #g_ones = grid[grid == 1] g_index = np.nonzero(grid == 2) # growth index = where cell value == 2 #g_mask = np.array([[0,1,2],[3,8,4],[5,6,7]]) #position 8 is the center and not selectable #shuffle the elements in order to eliminate the bias of the computation order arr = np.arange(g_index[0].shape[0]) np.random.shuffle(arr) #for i in range(len(g_index[0])): #go through every position with a "growing cell" ---> value = 2 for i in arr: #go through every position with a "growing cell" ---> value = 2 mask_pos = np.array([[-1,-1],[0,-1],[1,-1],[-1,0],[1,0],[-1,1],[0,1],[1,1]]) remove_list = [] #initialize a list to store "neighbours spaces"( = mask_pos) ocuped by cells # go through every position surrounding position --> eigh posibilities for j in range(len(mask_pos)): m = g_index[0][i] + mask_pos[j][0] n = g_index[1][i] + mask_pos[j][1] if grid[m,n] !=0 : #make a list with the positions which are not empty places remove_list.append(j) new_mask_pos = np.delete(mask_pos, remove_list, 0) # to exit when there is not a surrounding empty position l = len(new_mask_pos) if l > 1: r_pos = np.random.randint(l) # a random number between [0,len[ new_pos = new_mask_pos[r_pos] #m = g_ones_index[0][i] + mask_pos[new_pos][0] #n = g_ones_index[1][i] + mask_pos[new_pos][1] m = g_index[0][i] + new_pos[0] n = g_index[1][i] + new_pos[1] grid[m,n] = 2 elif l == 1: new_pos = new_mask_pos[0] m = g_index[0][i] + new_pos[0] n = g_index[1][i] + new_pos[1] grid[m,n] = 2 else: #when len(new_mask_pos) == 0 m = g_index[0][i] n = g_index[1][i] grid[m,n] = 1 # then, that position will not be evaluated again return(grid) # + # Define rules def rule2(grid): # 2 = growing cell # 1 = stationary cell # 0 = empty space #this rule makes each cell divide only one time per step #g_ones = grid[grid == 1] g_index = np.nonzero(grid == 2) # growth index = where cell value == 2 #g_mask = np.array([[0,1,2],[3,8,4],[5,6,7]]) #position 8 is the center and not selectable #choose a random cell in the dividing state cell_pos = int(np.random.rand(1)[0]*g_index[0].shape[0]) mask_pos = np.array([[-1,-1],[0,-1],[1,-1],[-1,0],[1,0],[-1,1],[0,1],[1,1]]) remove_list = [] 
#initialize a list to store "neighbours spaces"( = mask_pos) ocuped by cells # go through every position surrounding position --> eigh posibilities for j in range(len(mask_pos)): m = g_index[0][cell_pos] + mask_pos[j][0] n = g_index[1][cell_pos] + mask_pos[j][1] if grid[m,n] !=0 : #make a list with the positions which are not empty places remove_list.append(j) new_mask_pos = np.delete(mask_pos, remove_list, 0) # to exit when there is not a surrounding empty position l = len(new_mask_pos) if l > 1: r_pos = np.random.randint(l) # a random number between [0,len[ new_pos = new_mask_pos[r_pos] #m = g_ones_index[0][i] + mask_pos[new_pos][0] #n = g_ones_index[1][i] + mask_pos[new_pos][1] m = g_index[0][cell_pos] + new_pos[0] n = g_index[1][cell_pos] + new_pos[1] grid[m,n] = 2 elif l == 1: new_pos = new_mask_pos[0] m = g_index[0][cell_pos] + new_pos[0] n = g_index[1][cell_pos] + new_pos[1] grid[m,n] = 2 else: #when len(new_mask_pos) == 0 m = g_index[0][cell_pos] n = g_index[1][cell_pos] grid[m,n] = 1 # then, that position will not be evaluated again return(grid) # + # Define rules def rule3(grid): # 2 = growing cell # 1 = stationary cell # 0 = empty space #this rule makes each cell divide only one time per step #g_ones = grid[grid == 1] g_index = np.nonzero(grid == 2) # growth index = where cell value == 2 #g_mask = np.array([[0,1,2],[3,8,4],[5,6,7]]) #position 8 is the center and not selectable #choose a random cell in the dividing state index_pos = int(np.random.rand(1)[0]*g_index[0].shape[0]) m = g_index[0][index_pos] n = g_index[1][index_pos] #define the neighborhood nb = grid[m-1:m+2,n-1:n+2] #nb = neighborhood #define the free spaces in nb fs = np.where(nb == 0) if fs[0].shape[0] > 0: # go if there are a place in the neighbour if len(fs[0]) == 1: grid[m,n] = 1 # then, that position will not be evaluated again #grown over an empty position new_pos = int(np.random.rand(1)[0]*fs[0].shape[0]) #new pos in the neighbour matrix m_new = m + fs[0][new_pos] - 1 #-1 to convert [0 1 2] to [-1 0 1] n_new = n + fs[1][new_pos] - 1 grid[m_new, n_new] = 2 #save the cell grid index positions cell_index = [m,n] ncell_index = [m_new, n_new] else: grid[m,n] = 1 return(grid) # + #to show rule 3 time1 = time.clock() # create a initial grid grid_size = 150 #evetually define as an input grid = np.zeros((grid_size,grid_size)) # define the initial pattern #grid[int(grid_size/2), int(grid_size/2)]=2 grid = initial_pattern(grid, 0) #perform the loop steps = 10000 #sleep_time = 0.01 #to save the last figure filename = 'Segregation\\image_%03d.jpg' #filename = os.path.join(fpath, 'image_%05d.jpg') fig = plt.figure() fig.show() show_grid(grid) #show the initial grid plt.title('step 0') every = 100 count = 0 for i in range(steps): #time.sleep(sleep_time) grid = rule3(grid) #plt.imshow(grid, cmap=plt.cm.gray) #plt.title('step '+ str(i+1)) #fig.canvas.draw() if i%every == 0 or i == steps-1: count += 1 plt.title('step '+ str(i+1)) plt.imshow(grid, cmap=plt.cm.gray) #plt.savefig(str(filename) + ".pdf", transparent=True) plt.savefig(filename%(count))#, transparent=True) #plt.imshow(grid, cmap=plt.cm.gray) elapsed = time.clock() - time1 print(elapsed) # - def select_cell(grid): # 2 = growing cell # 1 = stationary cell # 0 = empty space #this rule makes each cell divide only one time per step g_index = np.nonzero(grid == 2) # growth index = where cell value == 2 #choose a random cell in the dividing state index_pos = int(np.random.rand(1)[0]*g_index[0].shape[0]) m = g_index[0][index_pos] n = g_index[1][index_pos] #save the cell 
grid index positions cell_index = [m,n] return(cell_index) def check_nbhd(grid, cell_index): #chek free spaces in the neighbourhood # fs: array # index of free spaces in the neighborhood m = cell_index[0] n = cell_index[1] #define the neighborhood nb = grid[m-1:m+2,n-1:n+2] #nb = neighborhood #define the free spaces in nb fs = np.where(nb == 0) return(fs) def nb_prob(grid, cell_index, prob_dist = 'contact_linear'): # assign division probabilities based on empty space cell contacts # prob_dist: uniform - contact_linear - contact_exp # contact linear is the default or if another thing is written # fs: array # index of free spaces in the neighborhood # return # prob: list # list with the [0,1] probability partition limit of each free space # e.g. prob = [0.23, 0.81, 1] --> second cell has bigger probability m = cell_index[0] n = cell_index[1] #define neighborhood nb = grid[m-1:m+2,n-1:n+2] #nb = neighborhood #define the free spaces in nb fs = np.where(nb == 0) #define cell spaces in bn cs = np.where(nb != 0) fs_num = len(fs[0]) prob = np.zeros(fs_num) contacts = np.zeros(fs_num) if prob_dist != 'uniform': # if prob_dist is something different from the options, contact_linear is the default for i in range(fs_num): mg = m + fs[0][i] - 1 #-1 to convert [0 1 2] to [-1 0 1] ng = n + fs[1][i] - 1 i_nb = grid[mg-1:mg+2,ng-1:ng+2] # i position neighborhood occup = np.where(i_nb != 0) contacts[i] = len(occup[0]) #save the number of contacts of this position if prob_dist == 'contact_exp': contacts = np.exp(contacts) else: contacts = np.ones(fs_num) #assign uniform values total = sum(contacts) prob[0] = (contacts[0]/total) for i in range(1,fs_num): prob[i] = prob[i-1]+contacts[i]/total return(prob) def cell_divition_uniform(grid, cell_index, fs): # uniform neighborhood divition probability #fs: free neighborhood spaces m = cell_index[0] n = cell_index[1] if len(fs[0]) == 1: grid[m,n] = 1 # then, that position will not divide again #grown over an empty position #new_pos = int(np.random.rand(1)[0]*fs[0].shape[0]) new_pos = int(np.random.rand(1)[0]*fs[0].shape[0]) #new pos in the neighbour matrix m_new = m + fs[0][new_pos] - 1 #-1 to convert [0 1 2] to [-1 0 1] n_new = n + fs[1][new_pos] - 1 grid[m_new, n_new] = 2 # crates the new cell ncell_index = [m_new ,n_new] return(grid, ncell_index) def cell_divition(grid, cell_index, fs, fs_proba): #fs: free neighborhood spaces #fs_proba: free spaces growth probabilities m = cell_index[0] n = cell_index[1] if len(fs[0]) == 1: grid[m,n] = 1 # then, that position will not divide again #grown over an empty position rand_val = np.random.rand(1)[0] # find the first position which is bigger than rand_val new_pos = np.where( (fs_proba > rand_val) == True )[0][0] #new pos in the neighbour matrix m_new = m + fs[0][new_pos] - 1 #-1 to convert [0 1 2] to [-1 0 1] n_new = n + fs[1][new_pos] - 1 grid[m_new, n_new] = 2 # crates the new cell ncell_index = [m_new ,n_new] return(grid, ncell_index) plt.figure() im_grid = np.zeros((100,100,4)) im_grid[:,:,0] = np.ones((100,100))*0 im_grid[:,:,1] = np.ones((100,100))*80 im_grid[:,:,2] = np.ones((100,100))*0 im_grid[:,:,3] = np.ones((100,100))*1 plt.imshow(im_grid) plt.figure() im_grid[:,:,1] = np.ones((100,100))*1 plt.imshow(im_grid) def initial_plasmids(grid, pattern_num = 0, num_plas = 2, max_copy = 4): # grid: initial grid c_index = np.nonzero(grid) # c_index, #cell_number = c_index[0].shape[0] gs = grid.shape pattern = np.zeros((gs[0],gs[1],max_copy)) #initialize the pattern array # add different patterns if pattern_num == 0: 
#random plasmid pattern for i in range(c_index[0].shape[0]): #assign a random plasmid pattern to each cell position pattern[c_index[0][i],c_index[1][i],:] = ((num_plas +1 )*np.random.rand(max_copy)).astype(int) #num_plas +1 to add "no-plasmid" state elif pattern_num == 1: pattern = np.ones((grid.shape)) return(pattern) def role_divideFlag(plasmids): #plasmids: cell plasmids vector max_plasmids = plasmids.shape[0] num_plasmids = np.nonzero(plasmids)[0].shape[0] divisor = max_plasmids*1.1 #arbitrary defined to make division(max_plasmids number) < 1 # make a cuadratic function of probabilities probability = (num_plasmids/divisor)**2 #if a cell has no plasmids --> will not divide if np.random.rand(1) < probability: return(1) # divide else: return(0) # not divide #Probability tables #plasmid_nums = np.arange(max_plasmids +1) #probability = (plasmid_nums/divisor)**2 def create_image(grid, plasgrid): im_s = plasgrid.shape aux_imR = np.zeros((im_s[0],im_s[1],im_s[2])) aux_imG = np.zeros((im_s[0],im_s[1],im_s[2])) for i in range(im_s[2]): aux_imR[:,:,i] = 1*(plasgrid[:,:,i]==1) aux_imG[:,:,i] = 1*(plasgrid[:,:,i]==2) aux_imR = np.sum(aux_imR,axis=2) aux_imG = np.sum(aux_imG,axis=2) aux_transparency = 0.5*(grid[:,:]==1) + 1*(grid[:,:]==2) # create the image im_grid = np.zeros((im_s[0],im_s[1],im_s[2])) im_grid[:,:,0] = np.multiply(np.ones((im_s[0],im_s[1])),aux_imR) im_grid[:,:,1] = np.multiply(np.ones((im_s[0],im_s[1])),aux_imG) #im_grid[:,:,2] = np.ones((100,100))*250 im_grid[:,:,3] = np.multiply(np.ones((im_s[0],im_s[1])),aux_transparency) # stationary cell -> transparency = 0.5) return(im_grid) def create_image2(grid, plasgrid): im_s = plasgrid.shape aux_imR = np.zeros((im_s[0],im_s[1],im_s[2])) aux_imG = np.zeros((im_s[0],im_s[1],im_s[2])) for i in range(im_s[2]): aux_imR[:,:,i] = 1*(plasgrid[:,:,i]==1) aux_imG[:,:,i] = 1*(plasgrid[:,:,i]==2) aux_imR = np.multiply(1*(np.sum(aux_imR,axis=2)>0),50*(grid[:,:]==1)) + 1*(np.sum(aux_imR,axis=2)>0) aux_imG = np.multiply(1*(np.sum(aux_imG,axis=2)>0),50*(grid[:,:]==1)) + 1*(np.sum(aux_imG,axis=2)>0) # create the image im_grid = np.zeros((im_s[0],im_s[1],3)) im_grid[:,:,0] = np.multiply(np.ones((im_s[0],im_s[1])),aux_imR) im_grid[:,:,1] = np.multiply(np.ones((im_s[0],im_s[1])),aux_imG) #im_grid[:,:,2] = np.ones((100,100))*250 return(im_grid) def plasmid_gProb(g_ratio=[1,1], p_types = [1,2]): #define a growtn probability (=velocity) based on the plasmids #g_ratio: ratio of growth rate between genotypes (i.e. 
plasmids) #p_types: plasmids types or labels #built the probability class vector cat_len = len(g_ratio) probs = np.zeros(cat_len) denominator = sum(g_ratio) probs[0] = g_ratio[0]/denominator for i in range(1,cat_len): probs[i] = probs[i-1]+g_ratio[i]/denominator return(probs) def plasm_g_test(plasmids,probs): #perform the probability test rand_val = np.random.rand(1) pos = np.where( (probs > rand_val) == True )[0][0] ptype = pos + 1 found = np.where(plasmids == ptype)[0] growth = False if found.size>0: growth = True return(growth) def cell_ratio(plasmgrid, ptype = [1,2]): c_num_plasm = np.sum(plasmgrid>0, axis=2) #number of plasmids in each grid plasm_sum = np.sum(plasmgrid, axis = 2) divition = np.divide(plasm_sum,c_num_plasm) #total = np.sum(np.isnan(divition) == False, axis = (0,1)) #it include cells with mix plasmids found = np.zeros(len(ptype)) total = 0 for i in range(len(ptype)): found[i] = len(np.where(divition == ptype[i])[0]) total += found[i] ratio = found[0]/total return(ratio) count=0 plasmids = np.ones(4)*2 for i in range(1000): plas_probs = plasmid_gProb(g_ratio= [1,2]) ifG = plasm_g_test(plasmids, plas_probs) if ifG == True: count+=1 print(count) #main sim_num = 1 all_ratios = [] for j in range(sim_num): time1 = time.clock() # create a initial empty grid grid_size = 1000 #evetually define as an input grid = np.zeros((grid_size,grid_size)) # define the initial grid and plasmid pattern grid = initial_pattern(grid, 1) # show_grid(grid) plasm_grid = initial_plasmids(grid) # Show the initial state # im_grid = create_image(grid, plasm_grid) # plt.imshow(im_grid) #perform the loop steps = 100000 #sleep_time = 0.01 #to save the last figure #filename = 'null' # filename = 'Segregation\\ratios\\image_%03d.jpg' filename = 'Seg_1ratio.jpg' # fig = plt.figure() # plt.title('step 0') #this two lines to save secuential images every = 100 #every how many save images count = 0 #define plasmid grwoth ratio plas_probs = plasmid_gProb(g_ratio= [1,1]) # g_ratio = [2,1] --> plasmid 1 divide twice fast than plasmid 2 ratios = [] #to store the cell type ratios for i in range(steps): #select a random growing cell cell_pos = select_cell(grid) free_nb = check_nbhd(grid, cell_pos) if free_nb[0].shape[0] > 0: # go if there is a place in the neighborhood plasmids = plasm_grid[cell_pos[0], cell_pos[1],:] #get its plasmids c_growth = plasm_g_test(plasmids, plas_probs) if c_growth == True: # tal vez poner esto arriba y chequear que los plasmidos no estene en cero #update its plasmids and cell state, n:new n_plasmids, n_state = plasmid_update(plasmids, cell_pos) plasm_grid[cell_pos[0], cell_pos[1],:] = n_plasmids grid[cell_pos[0], cell_pos[1]] = n_state #state will not be evaluated before role_divide #role_divide function shouldn´t allow divition of that cell divide_flag = role_divideFlag(n_plasmids) #perform the divition if flag changed if divide_flag != 0: #assign a cell to a new position free_proba = nb_prob(grid, cell_pos, prob_dist = 'contact_exp') grid, nCell_pos = cell_divition(grid, cell_pos, free_nb, free_proba) #split the mother plasmids m_plasmids, c_plasmids = divide_plasmids(n_plasmids) #assign mother and child plasmids plasm_grid[cell_pos[0], cell_pos[1],:] = m_plasmids plasm_grid[nCell_pos[0], nCell_pos[1],:] = c_plasmids else: grid[cell_pos[0],cell_pos[1]] = 1 #save cell type ratios if i%every == 0: ratios.append(cell_ratio(plasm_grid)) #Plot the result if i == steps-1: #if i%every == 0 or i == steps-1: # count += 1 plt.title('step '+ str(i+1)) im_grid = create_image2(grid, plasm_grid) 
plt.imshow(im_grid) # #fig.canvas.draw() # #plt.savefig(str(filename) + ".pdf", transparent=True) #plt.savefig(filename%(count), transparent=True) # plt.savefig(filename%(j), transparent=True) plt.savefig(filename, transparent=True) all_ratios.append(np.asarray(ratios)) elapsed = time.clock() - time1 print(elapsed) mean_ratio = 0 plt.figure() for i in range(len(all_ratios)): plt.plot(all_ratios[i]) mean_ratio += all_ratios[i][-1] plt.show() mean_ratio= mean_ratio/len(all_ratios) print(mean_ratio) ratio11=all_ratios ratio11_mean = mean_ratio # + plt.figure() for i in range(len(ratio11)): plt.plot(ratio11[i]) plt.title('growth ratio 1:1') plt.ylabel('cell type ratio') plt.xlabel('check point step number') plt.show() plt.savefig('ratio11', transparent=True) # - ratio32=all_ratios ratio32_mean = mean_ratio ratio43=all_ratios ratio43_mean = mean_ratio # + plt.figure() for i in range(len(ratio43)): plt.plot(ratio43[i]) plt.title('growth ratio 4:3') plt.ylabel('cell type ratio') plt.xlabel('check point step number') plt.show() plt.savefig('ratio43', transparent=True) # - ratio109=all_ratios ratio109_mean = mean_ratio # + mean_ratios = [ratio11_mean,ratio109_mean,ratio43_mean,ratio32_mean] expected = [1/2, 10/19, 4/7, 3/5] plt.figure() plt.plot([1, 2, 3, 4], mean_ratios, 'bo', label= 'observed ratio') plt.plot([1, 2, 3, 4], expected, 'ro', label = 'growth ratio') plt.ylabel('cell type ratio') plt.legend() plt.xticks([]) #plt.xticks(['1:1','10:9','4:3','3:2']) plt.xticks([1,2,3,4], ['1:1','10:9','4:3','3:2']) plt.show() plt.savefig('ratios_obs_exp', transparent=True) # + mean_ratios = [ratio11_mean,ratio109_mean,ratio43_mean,ratio32_mean] expected = [1/2, 10/19, 4/7, 3/5] plt.figure() plt.plot(expected, mean_ratios, 'bo') plt.xlabel plt.show() # - def plasmid_update(plasmids, pos_index): #plasmids: vector with plasmids. 
e.g [0,1,1,0,2] state = 2 # cell state = growing state plasmids_pos = np.nonzero(plasmids) empty_pos = np.where(plasmids == 0) num_plas = plasmids_pos[0].shape[0] if num_plas == 0: #it means no plasmid in the cell state = 1 #to not evaluate this cell in the loop again elif num_plas == plasmids.shape[0]: #it means all plasmids positions are full return(plasmids, state) else: copied_pos = np.random.randint(num_plas) plasmids[empty_pos[0][0]] = plasmids[plasmids_pos[0][copied_pos]] #copy the plasmid in the first free space return(plasmids, state) def divide_plasmids(plasmids): #plasmids: cell plasmids p_size = plasmids.size mother_p = np.zeros(p_size) child_p = np.zeros(p_size) np.random.shuffle(plasmids) #shuffle the plasmids if (p_size & 1) == 1: #odd case #sum a random value to choose which cell keep more plasmids rand_val = np.random.rand(1) half_p = int(p_size/2 + rand_val) else: #even case half_p = int(p_size/2) mother_p[:half_p] = plasmids[:half_p] child_p[half_p:]= plasmids[half_p:] return(mother_p, child_p) def initial_pattern(grid, pattern_num): pattern = {} #initiate initial pattern dictionary # add different patterns pattern[0] = np.array([[2]]) pattern[1] = np.array([[0, 0, 2, 0, 0],[0,2,2,2,0],[2,2,1,2,2],[0,2,2,2,0],[0,0,2,0,0]]) pattern[2] = np.ones((2,35))*2 #make elements which are not in the border to be = 1 fixed_pat = pattern[pattern_num] #put the pattern in the grid gs = grid.shape m0 = int(gs[0]/2) n0 = int(gs[1]/2) ps = fixed_pat.shape mpm = int(ps[0]/2) npm = int(ps[1]/2) for i in range(ps[0]): for j in range(ps[1]): m = m0 + (i - mpm) n = n0 + (j - npm) grid[m,n] = fixed_pat[i,j] return(grid) # + #perform the loop steps = 50 sleep_time = 0.1 fig = plt.figure() fig.show() show_grid(grid) #show the initial grid plt.title('step 0') for i in range(steps): time.sleep(sleep_time) grid = rule1(grid) plt.imshow(grid, cmap=plt.cm.gray) plt.title('step '+ str(i+1)) fig.canvas.draw() # - plt.figure() plt.imshow(grid) # + #perform the loop steps = 10 sleep_time = 0.5 show_grid(grid) #show the initial grid plt.figure(0) for i in range(steps): time.sleep(sleep_time) grid = rule1(grid) show_grid(grid) # + #to show time1 = time.clock() # create a initial grid grid_size = 150 #evetually define as an input grid = np.zeros((grid_size,grid_size)) # define the initial pattern #grid[int(grid_size/2), int(grid_size/2)]=2 grid = initial_pattern(grid, 2) #perform the loop steps = 10000 #sleep_time = 0.01 #to save the last figure filename = 'null' fig = plt.figure() fig.show() show_grid(grid) #show the initial grid plt.title('step 0') for i in range(steps): #time.sleep(sleep_time) #grid = rule1(grid) grid = rule3(grid) #plt.imshow(grid, cmap=plt.cm.gray) plt.title('step '+ str(i+1)) #fig.canvas.draw() if i == steps-1: plt.savefig(str(filename) + ".pdf", transparent=True) plt.imshow(grid, cmap=plt.cm.gray) elapsed = time.clock() - time1 print(elapsed) # + #to show time1 = time.clock() # create a initial grid grid_size = 150 #evetually define as an input grid = np.zeros((grid_size,grid_size)) # define the initial pattern grid = initial_pattern(grid, 0) #perform the loop steps = 15000 #sleep_time = 0.01 #to save the last figure filename = 'null' fig = plt.figure() plt.title('step 0') for i in range(steps): #time.sleep(sleep_time) #grid = rule1(grid) grid = rule2(grid) #plt.imshow(grid, cmap=plt.cm.gray) plt.title('step '+ str(i+1)) if i == steps-1: plt.imshow(grid, cmap=plt.cm.gray) #fig.canvas.draw() plt.savefig(str(filename) + ".pdf", transparent=True) elapsed = time.clock() - time1 
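# Note: time.clock(), used for the timings in these cells, was deprecated and removed
# in Python 3.8; on newer interpreters the same pattern can use time.perf_counter(),
# e.g.  t0 = time.perf_counter(); ...; elapsed = time.perf_counter() - t0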
print(elapsed) # - mask[1,2] vals = np.array([[-1,-1],[0,-1],[1,-1],[-1,0],[1,0],[-1,1],[0,1],[1,1],[0,0]]) vals[4][1] pos = np.nonzero(grid == 0) print(pos[0][10]) len(pos[1]) grid[grid == 1] # + #make automata loop time_steps = 100 for t in time_steps: # - np.random.randint(8) # a random number between [0,8[ --> eigh posibilities grid[0,0] # + time1 = time.clock() arr = np.random.rand(5000,5000) elapsed = time.clock() - time1 print(elapsed) # it takes 0.21186613101770035 seg for me # - np.amin(np.nonzero(grid == 2)[0]) # + #hacer clases # to not make: # cell = [px,py,vx,vy] # cell[0]+= cell[2]*deltaT # make classes is the same in general terms (in computation time, etc), but is a more recomended paradigma # because is more clear and organized. class cell: def __init__(self,px,py): #primer parametro siempre es self -->que hace referencia al objeto cell self.px = px self.py = py self.vx = 0 self.vy = 0 # la gracia es que puedo definir funciones dentro de la clase def mover(self,vx,vy,t): #siempre el primero es self! self.px += vx*t self.py += vy*t celula = cell(-1,1) print(celula.px) #--> arroja -1 ceula.px +=1 print(celula.px) #--> arroja 0 celula.mover(-1,1,1) lista = [] for n in range(10): lista.append(cell(n/10,n/10)) # or to make some computations faster when it is a big number of elements (>1000) class sim: def __init__(self, ncells): positions = np.array((ncells,2)) #Also you can create an inheritance class class bacterium(cell): #that means bacterium "is a" cell def infect(): #then you can add new functions to the class bacterium # todos los objetos en python pueden guardarse con el packete pickle # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- import numpy as np a = np.random.random((12,3,3)) a.size b =np.array([1,2,3,4]) np.prod(b) b.prod() c = np.arange(20).reshape((4,5)) c c.sum(axis = 0) c.sum(axis = 1) c.sum(axis = 1).shape np.prod(c.shape) d = np.arange(-12,13).reshape(5,5) d d * (d>0) d[:,np.newaxis].shape e = np.array([1,2,3,4,4]) f = d[np.arange(5),e] f f[:,np.newaxis].shape #this is same as keepdims = True (this one is better any day) import matplotlib.pyplot as plt # %matplotlib inline mu, sigma = 0, 0.1 # mean and standard deviation s = np.random.normal(mu, sigma, 1000) count, bins, ignored = plt.hist(s, 30, normed=True) plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ), linewidth=2, color='r') plt.show() np.random.normal(0.1,0.1,(2,2)) 2**2 a = {'a':1,'b':2} a.update({'c':3,'d':8}) a 'w'+str(2) a = [1,2,3,4] for i,j in enumerate(a): print (i,j) a = np.arange(25).reshape((5,5)) a np.mean(a,axis = 1) np.mean(a,axis =0) p = (2,3) mask = (np.random.randn(*p) < 0.2)/0.2 q = np.arange(6).reshape(2,-1) q mask q* mask np.random.randn(*p) a = (2,3,4,5) a = tuple(list(a) + [2]) a a[-1] a[:-1] a b = np.arange(100).reshape(10,10) b b.shape np.pad(b,(1,1),'constant') np.pad(b,(0,1),'constant') c = np.arange(64).reshape(4,4,4) c.shape c[0] np.pad(c,(0,1),'constant').shape np.pad(b,((0,),(1,)),'constant') 2/ 2 c = np.arange(100).reshape((10,2,5)) c c.reshape(-1) np.zeros((2,2)) c[1,:,:] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6.9 64-bit # language: python # name: python3 # --- # # Gearbox Fault Diagnosis # # - 
[https://www.kaggle.com/brjapon/gearbox-fault-diagnosis](https://www.kaggle.com/brjapon/gearbox-fault-diagnosis) # - [https://openei.org/datasets/dataset/gearbox-fault-diagnosis-data](https://openei.org/datasets/dataset/gearbox-fault-diagnosis-data) # import pandas as pd import matplotlib.pyplot as plt def load_data(label='h', load=0): return pd.read_csv(f'{label}30hz{load}.csv.gz') pd.read_csv('h30hz90.csv.gz').plot() pd.read_csv('b30hz90.csv.gz').plot() print("Healthy | BrokenTooth") for i in range(0, 100, 10): print(pd.read_csv(f'h30hz{i}.csv.gz').shape, pd.read_csv(f'b30hz{i}.csv.gz').shape) def plot_sequence(df, st=0, ed=None, ax=None, figsize=(10, 3), individual=True): if ed is None: ed = df.shape[0] if individual: if not ax is None: assert len(ax) == 4 else: fig, ax = plt.subplots(4, figsize=figsize) for i in range(4): df.iloc[st:ed, i].plot(ax=ax[i], figsize=figsize, legend=True) else: if ax is None: fig, ax = plt.subplots(figsize=figsize) df.iloc[st:ed].plot(ax=ax, figsize=figsize, legend=True) plot_sequence(load_data(), st=100, ed=200, individual=False) # + label = 'h' load = 0 wd = 20 hg = 8 fig, ax = plt.subplots(5) plot_sequence(load_data(label=label, load=load), st=0, ed=500, ax=ax[:4], figsize=(wd, hg), individual=True) plot_sequence(load_data(label=label, load=load), st=0, ed=500, ax=ax[-1], figsize=(wd, hg), individual=False) ax[-1].set_xlabel('Time') for axi in ax: axi.set_ylabel('Value') name = 'Healty' if label == 'h' else 'BrokenTooth' ax[0].set_title(f'{name}: Load= {load}') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + class node: """An implementation of a node, which tracks a value, left and right children, anda parent. """ def __init__(self, v=None, l=None, r=None, p=None): self.v = v self.l = l self.r = r self.p = p class q: """An implementation of a queue """ def __init__(self, v): if v is not None: self.first = self.last = node(v) else: self.first=None self.last=None def enq(self, v): """Add an element to the end of a queue """ ## If the queue is empty, add a single node if self.last is None: self.first = self.last = node(v) else: ### if the queue is not empty, add a not at the end of the queue old_last = self.last self.last = node(v) old_last.l = self.last def deq(self): """Remove the first element of a queue """ if self.first is None: return None elif self.first is self.last: ## in this case there is only one node out = self.first self.first = None self.last = None return out.v else: ## In this case there is more than one node out = self.first self.first = out.l return out.v # + class tree: """Binary search tree that allows for insertion, deletion, and traversals. """ def __init__(self, v=None): self.root = node(v=v) def insert(self, v, current=None): """Insert an element, always as a leaf. """ if current is None: current = self.root if current is None: current = node(v=v) if current.v is None: current.v = v elif current.v <= v: if current.r is None: current.r = node(v, p=current) current.r.p = current else: self.insert(v, current.r) else: if current.l is None: current.l = node(v, p=current) current.l.p = current else: self.insert(v, current.l) def delete(self, v, current=None): """Find an node with a value and delete it. 
""" if current is None: current = self.root if current.v is None: raise ValueError("Value %s not found" %v) if current.v == v: if current.l is None and current.r is None: current = None elif current.l is not None: p = current.p p.l = current.l current.l.p = p else: p = current.p p.r = current.r current.r.p = p if current.v > v and current.l is not None: self.delete(v, current=current.l) if current.v < v and current.r is not None: self.delete(v, current=current.r) #################### Traversals #################### def pre_order(self, current=None): """Recursively traverse elements, in the order root --> left --> right The same order as DFS. """ if current is None: current = self.root yield current.v if current.l is not None: for el in self.pre_order(current=current.l): yield el if current.r is not None: for el in self.pre_order(current=current.r): yield el def in_order(self, current=None): """Recursively traverse elements, in the order left --> root --> right Should return the elements in order for a binary search tree. """ if current is None: current = self.root if current.l is not None: for el in self.in_order(current=current.l): yield str(el) yield current.v if current.r is not None: for el in self.in_order(current=current.r): yield str(el) def post_order(self, current=None): """Recursively traverse elements, in the order left --> right --> root """ if current is None: current = self.root if current.l is not None: for el in self.post_order(current=current.l): yield el if current.r is not None: for el in self.post_order(current=current.r): yield el yield current.v def BFS(self, current=None): """Traverse the nodes in breath-first order. Uses a queue to store future nodes to traverse. Elements will display ordered first by generation, and then from left to right """ if current is None: current = self.root Q = q(current) while Q.first is not None: current = Q.deq() for child in [current.l, current.r]: if child is not None: Q.enq(child) yield current.v def DFS(self, current=None): """Traverse the nodes in depth-first order. Uses a stack to store future nodes to traverse. Elements will displayed in order for a binary search tree. Should give the same result as pre-order traversal. 
""" if current is None: current = self.root S = [current] while S: current = S.pop() yield current.v for child in [current.r, current.l]: if child is not None: S.append(child) # - #################### Example #################### T = tree() for el in [4,5,7,6,3,2,1]: T.insert(el) for el in T.pre_order(): print el for el in T.in_order(): print el for el in T.post_order(): print el for el in T.BFS(): print el for el in T.DFS(): print el # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] colab_type="text" id="ncGMi7XXo-8g" # # Kapitel 5: Featureauswahl # + colab={} colab_type="code" id="tum7pL55o-8h" import warnings warnings.filterwarnings('ignore') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="qPUJu4mOo-8k" outputId="66cb69b1-1d62-45cc-d337-6c571dae9e06" # %matplotlib inline # %pylab inline # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="iQF9mgcYo-8q" outputId="240588fa-9695-49d5-85c8-a11204421ef4" import sklearn print(sklearn.__version__) # + colab={} colab_type="code" id="DfcQU94uo-8u" import numpy as np # + colab={} colab_type="code" id="cjffiT9Po-8x" import matplotlib.pyplot as plt # + [markdown] colab_type="text" id="p5HliKB5o-8z" # ## Fluch der hohen Dimensionen # + colab={} colab_type="code" id="HNaFi456o-8z" n = 100 vmin = 0; vmax = 10 x1 = np.random.uniform(vmin, vmax, n) x2 = np.random.uniform(vmin, vmax, n) x3 = np.random.uniform(vmin, vmax, n) # + colab={} colab_type="code" id="gvuboZwlo-82" # #plt.hist? # + colab={"base_uri": "https://localhost:8080/", "height": 660} colab_type="code" id="VkIb2Y-Go-84" outputId="b1d8c08b-fc70-4b47-8a49-a9be56e60717" # Eine Dimension fig = plt.figure(figsize=(16, 11)) ax = fig.add_subplot(111) ax.hist(x1, alpha=0.6, edgecolor='black', lw=1, bins=np.arange(0, 11, 1)) ax.set_xlabel('X1') ax.set_ylabel('n samples') # fig.savefig('ML_0512.png', bbox_inches='tight') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 660} colab_type="code" id="eUKEnZPfo-86" outputId="6d118081-e3c7-44fc-deb8-f9f3129242b1" # Zwei Dimensionen fig = plt.figure(figsize=(16, 11)) ax = fig.add_subplot(111) ax.scatter(x1, x2, c="b", marker="o") ax.set_xlabel('X1') ax.set_ylabel('X2') ax.set_xticks(np.arange(0, 11, 1) ) ax.set_yticks(np.arange(0, 11, 1) ) ax.grid(color='k', linestyle='-', linewidth=1, alpha=0.6) # fig.savefig('ML_0513.png', bbox_inches='tight') plt.show() # plt.clf() # + colab={"base_uri": "https://localhost:8080/", "height": 629} colab_type="code" id="wmk2Mw5Uo-88" outputId="e3ba42bc-8fe4-46d6-c20d-dd30436bde98" # Drei Dimensionen from mpl_toolkits.mplot3d import Axes3D fig = plt.figure(figsize=(16, 11)) ax = fig.add_subplot(111, projection='3d') ax.scatter(x1, x2, x3, c="b", marker="o") # ax.plot_wireframe((3,4,4,3,3, 3,4,4,3,3, 3,3,4,4,4,4), # (5,5,6,6,5, 5,5,6,6,5, 6,6,6,6,5,5), # (2,2,2,2,2, 3,3,3,3,3, 3,2,2,3,3,2), # color='r', rstride=1, cstride=1, alpha=0.9) ax.set_xticks(np.arange(0, 11, 1) ) ax.set_yticks(np.arange(0, 11, 1) ) ax.set_zticks(np.arange(0, 11, 1) ) ax.grid(color='k', linestyle='-', linewidth=1, alpha=0.6) ax.set_xlabel('X1') ax.set_ylabel('X2') ax.set_zlabel('X3') # fig.savefig('ML_0514.png', bbox_inches='tight') plt.show() # plt.clf() # + [markdown] colab_type="text" id="oJ0D3ChGo-8_" # ## Overfitting und Underfitting: Model-Komplexität vs 
Datenmenge # + colab={} colab_type="code" id="6rBG3mrWo-8_" np.random.RandomState(1) n_samples = 20 X = np.random.uniform(-2, 2, n_samples) y = X**3 + np.random.uniform(0, 2, n_samples) # + colab={"base_uri": "https://localhost:8080/", "height": 678} colab_type="code" id="If_D49vDo-9D" outputId="d4a79d14-f448-4d06-f8d2-594b23dfad87" fig, ax = plt.subplots(figsize=(11, 11)) print(X.shape, y.shape) plt.scatter(X, y, color='navy', s=30, marker='o') plt.xlabel('x') plt.ylabel('y') # fig.savefig('ML_0504.png', bbox_inches='tight') plt.show() # plt.clf() # + colab={"base_uri": "https://localhost:8080/", "height": 660} colab_type="code" id="cVljjox3o-9F" outputId="b8eb6bc8-836f-4d2e-e4f9-6623256fc28e" from sklearn.pipeline import make_pipeline from sklearn.linear_model import Ridge, LinearRegression from sklearn.preprocessing import PolynomialFeatures fig, ax = plt.subplots(figsize=(11, 11)) plt.scatter(X, y, color='navy', s=30, marker='o') x_plot = np.linspace(-2, 2, 100) poly_model = make_pipeline(PolynomialFeatures(3), LinearRegression()) poly_model.fit(X[:, np.newaxis], y) y_plot = poly_model.predict(x_plot[:, np.newaxis]) plt.plot(x_plot, y_plot, lw=2, color="red") plt.ylim(-12, 12) plt.xlabel('x') plt.ylabel('y') # fig.savefig('ML_0505.png', bbox_inches='tight') plt.show() # plt.clf() # + colab={"base_uri": "https://localhost:8080/", "height": 660} colab_type="code" id="qx9d0Lg1o-9H" outputId="14de6319-8e3c-4046-84ed-f143e14411c1" fig, ax = plt.subplots(figsize=(11, 11)) plt.scatter(X, y, color='navy', s=30, marker='o') poly_model = make_pipeline(PolynomialFeatures(1), LinearRegression()) poly_model.fit(X[:, np.newaxis], y) y_plot = poly_model.predict(x_plot[:, np.newaxis]) plt.plot(x_plot, y_plot, lw=2, color="red") plt.ylim(-9, 9) plt.xlabel('x') plt.ylabel('y') # fig.savefig('ML_0507.png', bbox_inches='tight') plt.show() # plt.clf() # + colab={"base_uri": "https://localhost:8080/", "height": 664} colab_type="code" id="m9gyCATFo-9J" outputId="1880e461-b905-45bd-abf1-a8091c959f27" fig, ax = plt.subplots(figsize=(11, 11)) plt.scatter(X, y, color='navy', s=30, marker='o') poly_model = make_pipeline(PolynomialFeatures(20), LinearRegression()) poly_model.fit(X[:, np.newaxis], y) y_plot = poly_model.predict(x_plot[:, np.newaxis]) plt.plot(x_plot, y_plot, lw=2, color="red") plt.ylim(-10, 10) plt.xlabel('x') plt.ylabel('y') # fig.savefig('ML_0506.png', bbox_inches='tight') plt.show() # plt.clf() # + [markdown] colab_type="text" id="2VeSYK7qo-9K" # ### Mehr Datensätze # + colab={} colab_type="code" id="T6d_sBJTo-9L" n_samples = 200 X = np.random.uniform(-2, 2, n_samples) y = X**3 + np.random.uniform(0, 2, n_samples) # + colab={"base_uri": "https://localhost:8080/", "height": 296} colab_type="code" id="tnEhZ8qIo-9O" outputId="816a1831-4c17-41eb-cbc8-35140202af52" print(X.shape, y.shape) plt.scatter(X, y, color='navy', s=30, marker='o', label="training points") plt.xlabel('x') plt.ylabel('y') # fig.savefig('ML_0508.png', bbox_inches='tight') plt.show() # plt.clf() # + colab={"base_uri": "https://localhost:8080/", "height": 660} colab_type="code" id="d9sBpbCpo-9R" outputId="cf69de1d-b147-4332-abb5-64d6aa678ca8" fig, ax = plt.subplots(figsize=(11, 11)) plt.scatter(X, y, color='navy', s=30, marker='o', label="training points") poly_model = make_pipeline(PolynomialFeatures(3), LinearRegression()) poly_model.fit(X[:, np.newaxis], y) y_plot = poly_model.predict(x_plot[:, np.newaxis]) plt.plot(x_plot, y_plot, lw=2, color="red") plt.ylim(-12, 12) plt.xlabel('x') plt.ylabel('y') # fig.savefig('ML_0509.png', 
bbox_inches='tight') plt.show() # plt.clf() # + colab={"base_uri": "https://localhost:8080/", "height": 664} colab_type="code" id="dvzRCcsCo-9T" outputId="677cacbc-6b3f-490f-90c2-9cfc69704775" fig, ax = plt.subplots(figsize=(11, 11)) plt.scatter(X, y, color='navy', s=30, marker='o', label="training points") poly_model = make_pipeline(PolynomialFeatures(20), LinearRegression()) poly_model.fit(X[:, np.newaxis], y) y_plot = poly_model.predict(x_plot[:, np.newaxis]) plt.plot(x_plot, y_plot, lw=2, color="red") plt.ylim(-8, 8) plt.xlabel('x') plt.ylabel('y') # fig.savefig('ML_0510.png', bbox_inches='tight') plt.show() # plt.clf() # + colab={"base_uri": "https://localhost:8080/", "height": 660} colab_type="code" id="B9dJdby6o-9V" outputId="70a3e065-b6de-42c1-9fb7-31bd45a4d35e" fig, ax = plt.subplots(figsize=(11, 11)) plt.scatter(X, y, color='navy', s=30, marker='o', label="training points") poly_model = make_pipeline(PolynomialFeatures(1), LinearRegression()) poly_model.fit(X[:, np.newaxis], y) y_plot = poly_model.predict(x_plot[:, np.newaxis]) plt.plot(x_plot, y_plot, lw=2, color="red") plt.ylim(-9, 9) plt.xlabel('x') plt.ylabel('y') # fig.savefig('ML_0511.png', bbox_inches='tight') plt.show() # plt.clf() # + [markdown] colab_type="text" id="SGNT--o_o-9Y" # ## Univariate Feature Exploration # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="8mh52lDYo-9Y" outputId="5e31f1c4-c451-40cc-e0a4-4ece05f28ea9" from sklearn.datasets import load_iris from sklearn.svm import SVC iris = load_iris() X = iris.data y = iris.target print(X.shape, y.shape) import numpy as np import matplotlib.pyplot as plt svc = SVC(kernel='linear', C=1E0) print(X.shape, y.shape) n_classes = 3 colors = 'byr' CMAP = colors plot_step = 0.01 # Add random noise rns = np.random.RandomState(12) #noise1 = rns.lognormal(mean=1, size=(len(X), 1)) noise2 = rns.uniform(0, 6, size=(len(X), 1)) #X_noise = np.hstack([X, noise1]) X_noise = np.hstack([X, noise2]) # + colab={"base_uri": "https://localhost:8080/", "height": 755} colab_type="code" id="fcMKN8vZo-9a" outputId="94cbe773-9315-43d7-fea7-3256d2052239" Y_feature_names = iris.feature_names Y_target_names = iris.target_names Y_feature_names = np.append(Y_feature_names, 'noise1') #Y_feature_names = np.append(Y_feature_names, 'noise2') Y_target_names = np.append(Y_target_names, 'noise1') #Y_target_names = np.append(Y_target_names, 'noise2') #fig = plt.figure(1, figsize=(9, 16)) fig = plt.figure(1, figsize=(16, 9)) BINS = [] BINS.append(np.arange(4, 8, 0.1)) BINS.append(np.arange(2, 5, 0.1)) BINS.append(np.arange(1, 7, 0.1)) BINS.append(np.arange(0, 3, 0.1)) BINS.append(np.arange(0, 6, 0.1)) #BINS.append(np.arange(0, 6, 0.1)) for fid in range(4): #for fid in range(5): X = X_noise[:, fid] y = iris.target #plt.subplot(3, 2, fid + 1) plt.subplot(2, 2, fid + 1) plt.xlabel(Y_feature_names[fid]) plt.ylabel('n examples') plt.axis("tight") for i, color in zip(range(n_classes), colors): idx = np.where(y == i) clf = svc.fit(X.reshape([150,1]), y) print(clf.score(X.reshape([150,1]), y)) plt.hist(X[idx], alpha=0.6, color=color, edgecolor='black', lw=1, label=Y_target_names[i], bins=BINS[fid]) if fid==3: plt.legend(loc='upper right') plt.axis("tight") plt.show() # fig.savefig('ML_0501.png', bbox_inches='tight') # plt.clf() # + [markdown] colab_type="text" id="hPXEwCqyo-9b" # ## Bivariate Feature Exploration # + colab={"base_uri": "https://localhost:8080/", "height": 553} colab_type="code" id="cCdZA0VXo-9b" outputId="a1315ff2-af3d-4abd-fcd6-7139c127cbdf" from scipy.stats 
import pearsonr Y_feature_names = iris.feature_names #Y_target_names = iris.target_names #Y_feature_names = np.append(Y_feature_names, 'noise1') #Y_feature_names = np.append(Y_feature_names, 'noise2') #Y_target_names = np.append(Y_target_names, 'noise1') #Y_target_names = np.append(Y_target_names, 'noise2') n_classes = 3 colors = 'byr' CMAP = colors plot_step = 0.01 #____________________________________________________________________ fig = plt.figure(1, figsize=(18, 9)) pos = [[6.2, 4.2], [4.5, 6.5], [7, 0.5], [3.5, 3], [3.5, 1], [5, 0.5]] for pairidx, pair in enumerate([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]): X = iris.data[:, pair] y = iris.target plt.subplot(2, 3, pairidx + 1) plt.xlabel(iris.feature_names[pair[0]]) plt.ylabel(iris.feature_names[pair[1]]) plt.axis("tight") for i, color in zip(range(n_classes), colors): idx = np.where(y == i) plt.scatter(X[idx, 0], X[idx, 1], c=color, edgecolor='black', lw=2, label=iris.target_names[i], cmap=CMAP) r = "r = " + str(round(pearsonr(X[:, 0], X[:, 1])[0], 3)) plt.text(pos[pairidx][0], pos[pairidx][1], r) plt.axis("tight") plt.axis("tight") plt.legend(loc='upper left') plt.show() # fig.savefig('ML_0502.png', bbox_inches='tight') # plt.clf() # + [markdown] colab_type="text" id="YFFZBNy-o-9e" # ## Korrelation zwischen Feature und Target # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="vKuVNpILo-9f" outputId="c56b8185-809f-4085-f9ea-8de88b9576f8" from sklearn.datasets import load_iris import numpy as np from scipy.stats import pearsonr # pearson package from scipy iris = load_iris() # reload data X = iris.data y = iris.target for fid in (0, 1, 2, 3): # loop over all features idx = np.where( (y == 0) | (y == 1) ) x = X[idx] x = x[:, fid] print(iris.feature_names[fid], pearsonr(x, y[idx])[0]) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="D7Bfdkw6o-9i" outputId="555238c6-a7e5-4823-9ee2-42bcbf26d0cb" x = np.random.uniform(-1, 1, 1000) print(pearsonr(x, x**2)[0]) # + [markdown] colab_type="text" id="fl690nJfo-9j" # ## Principal Component Analyse # + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="JwLnd5Kqo-9j" outputId="e480faa7-addc-400e-d42a-ad5e2ffb9ea9" import numpy as np import matplotlib.pyplot as plt rns = np.random.RandomState(12) size = 200 X = np.zeros((size, 2)) x1 = rns.uniform(0, 2, size=size) x2 = -1.1*x1+1.8 + rns.normal(0, 0.2, size=size) X[:, 0] = x1 X[:, 1] = x2 from sklearn.decomposition import PCA pca = PCA(n_components=2, whiten=True) pca.fit(X) print(pca.explained_variance_) print() print(pca.components_) print() print(pca.mean_) print() # + colab={"base_uri": "https://localhost:8080/", "height": 698} colab_type="code" id="tGknYWlxo-9l" outputId="b25a5ff3-7239-45db-f3f8-160d51dcb38b" fig = plt.figure(figsize=(16, 11)) plt.scatter(X[:, 0], X[:, 1]) arrowprops = dict(arrowstyle='->', linewidth=2, shrinkA=0, shrinkB=0) for length, vector in zip(pca.explained_variance_, pca.components_): print(vector) v = vector * 1 * np.sqrt(length) ax = plt.gca() ax.annotate('', pca.mean_ + v, pca.mean_, arrowprops=arrowprops) plt.axis('equal') plt.xlim(0, 2) plt.ylim(0, 2) plt.xlabel('x1') plt.ylabel('x2') # fig.savefig('ML_0515.png', bbox_inches='tight') plt.show() # plt.clf() # + colab={"base_uri": "https://localhost:8080/", "height": 664} colab_type="code" id="M--XsTOZo-9n" outputId="f0029650-7b82-461a-e80e-14df679f30b4" fig = plt.figure(figsize=(16, 11)) X_pca = pca.transform(X) plt.scatter(X_pca[:, 0], X_pca[:, 1]) 
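# The two annotate calls below draw arrows from the origin along the coordinate axes
# of the transformed (whitened) space, i.e. along the directions of the two principal
# components after pca.transform.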
plt.annotate('', [0, 2], [0, 0], arrowprops=arrowprops) plt.annotate('', [2, 0], [0, 0], arrowprops=arrowprops) plt.axis('equal') plt.xlim(-3, 3) plt.ylim(-3, 3) plt.xlabel('pca component 1') plt.ylabel('pca component 2') # fig.savefig('ML_0516.png', bbox_inches='tight') plt.show() # plt.clf() # + colab={"base_uri": "https://localhost:8080/", "height": 497} colab_type="code" id="dxr12DIzo-9p" outputId="7d30abee-f9c7-4729-9d8e-26924551b929" from sklearn.datasets import load_iris from sklearn.decomposition import PCA import matplotlib.pyplot as plt import numpy as np n_classes = 3 colors = 'byr' CMAP = colors iris = load_iris() X = iris.data y = iris.target Y_target_names = iris.target_names pca = PCA(n_components=2, whiten=True) pca.fit(X) #_________________________________________________________ fig = plt.figure(figsize=(12, 8)) X_pca = pca.transform(X) for i, color in zip(range(n_classes), colors): idx = np.where(y == i) plt.scatter(X_pca[idx, 0], X_pca[idx, 1], label = Y_target_names[i], c=color, edgecolor='black', lw=2, cmap=CMAP) plt.axis("tight") plt.xlabel('pca component 1') plt.ylabel('pca component 2') plt.legend(loc='upper center') # fig.savefig('ML_0519.png', bbox_inches='tight') plt.show() # plt.clf() # + [markdown] colab_type="text" id="5X0moN-Co-9r" # ## Featureselektion # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="vtXJy9Hgo-9r" outputId="a9659352-9fb5-4088-e079-7646c5b66093" from sklearn.datasets import load_iris from sklearn.svm import SVC import numpy as np iris = load_iris() X = iris.data y = iris.target # reference score svc = SVC(kernel='linear', C=1) clf = svc.fit(X, y) print(clf.score(X, y)) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="O0lPqoFGo-9t" outputId="02f43923-2db2-403b-9059-46a305ad5de7" # Add random noise as non informative data rns = np.random.RandomState(12) noise = rns.uniform(0, 6, size=(len(X), 1)) X = np.hstack([X, noise]) # Score with all noise clf = svc.fit(X, y) print(clf.score(X, y)) # + colab={} colab_type="code" id="EGnTye5ro-9v" from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import f_classif selector = SelectKBest(f_classif, k=4) X_sel = selector.fit_transform(X, y) # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="BZXW3xZEo-9x" outputId="3af3d2d8-393b-4466-bfb2-6115ffb136ed" print(selector.scores_) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="X9icJTY6o-90" outputId="9aa9cf77-68fd-43fa-a689-6666ee833e10" svc = SVC(kernel='linear', C=1) clf = svc.fit(X_sel, y) print(clf.score(X_sel, y)) # + [markdown] colab_type="text" id="Iy6elm8Qo-91" # ## Selektion nach Tree-Modellen # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="KVrKK5pNo-92" outputId="a000c871-0d82-42cc-a7c9-e65c24c2bbe8" from sklearn.feature_selection import SelectFromModel from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier() clf.fit(X, y) print(clf.feature_importances_) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="MNS96I_Ro-93" outputId="d6640305-5a6b-487f-c6f7-48e3ee2ff1bc" selector = SelectFromModel(clf, threshold=0.02) X_sel = selector.fit_transform(X, y) print(selector.get_support()) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="masCa8pso-95" outputId="12ee5033-fbce-4733-f296-fa6fdd85a0fb" svc = SVC(kernel='linear') clf = svc.fit(X_sel, y) 
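# The score printed below is computed on the same data the model was fitted on and
# therefore tends to be optimistic; a held-out split gives a fairer estimate.
# Minimal sketch (the split and the names X_tr/X_te/y_tr/y_te are illustrative
# additions, not part of the original notebook):
from sklearn.model_selection import train_test_split
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
print(SVC(kernel='linear').fit(X_tr, y_tr).score(X_te, y_te))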
print(clf.score(X_sel, y)) # + [markdown] colab_type="text" id="htdK7Awto-97" # ## Rekursive Eliminierung nach Modellen # + colab={"base_uri": "https://localhost:8080/", "height": 153} colab_type="code" id="06QeC-0Yo-97" outputId="d578ac96-4e96-4548-dc69-1b320b159991" from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(random_state=12) clf.fit(X, y) # + colab={} colab_type="code" id="wf-DKLjZo--A" from sklearn.feature_selection import RFE # Original selector = RFE(clf, 4) selector = RFE(clf, n_features_to_select=4) # + colab={} colab_type="code" id="w7VvfvFdo--D" selector = selector.fit(X, y) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="GDjg3CvSo--F" outputId="ab75bb52-e9e3-4887-803f-711d829262a1" print(selector.get_support()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] nbsphinx="hidden" # This notebook is part of the `nbsphinx` documentation: http://nbsphinx.readthedocs.io/. # - # # Usage # # ## Installation # # Install `nbsphinx` with `pip`: # # python3 -m pip install nbsphinx --user # # If you suddenly change your mind, you can un-install it with: # # python3 -m pip uninstall nbsphinx # # Depending on your Python installation, you may have to use `python` instead of `python3`. # Recent versions of Python already come with `pip` pre-installed. # If you don't have it, you can [install it manually](https://pip.pypa.io/en/latest/installing/). # ## Sphinx Setup # # In the directory with your notebook files, run this command (assuming you have [Sphinx](http://sphinx-doc.org/) installed already): # # python3 -m sphinx.quickstart # # Answer the questions that appear on the screen. In case of doubt, just press the `` key repeatedly to take the default values. # # After that, there will be a few brand-new files in the current directory. # You'll have to make a few changes to the file named [conf.py](conf.py). You should at least check if those two variables contain the right things: # # ```python # extensions = [ # 'nbsphinx', # 'sphinx.ext.mathjax', # ] # exclude_patterns = ['_build', '**.ipynb_checkpoints'] # ``` # # Once your `conf.py` is in place, edit the file named `index.rst` and add the file names of your notebooks (with or without the `.ipynb` extension) to the [toctree](http://www.sphinx-doc.org/en/stable/markup/toctree.html) directive. # ## Running Sphinx # # To create the HTML pages, use this command: # # python3 -m sphinx # # If you have many notebooks, you can do a parallel build by using the `-j` option: # # python3 -m sphinx -j # # For example, if your source files are in the current directory and you have 4 CPU cores, you can run this: # # python3 -m sphinx . _build -j4 # # Afterwards, you can find the main HTML file in `_build/index.html`. # # Subsequent builds will be faster, because only those source files which have changed will be re-built. # To force re-building all source files, use the `-E` option. # # To create LaTeX output, use: # # python3 -m sphinx -b latex # # If you don't know how to create a PDF file from the LaTeX output, you should have a look at [Latexmk](http://users.phys.psu.edu/~collins/software/latexmk-jcc/) (see also [this tutorial](http://mg.readthedocs.io/latexmk.html)). # # Sphinx can automatically check if the links you are using are still valid. 
# Just invoke it like this: # # python3 -m sphinx -b linkcheck # ## Watching for Changes with `sphinx-autobuild` # # If you think it's tedious to run the Sphinx build command again and again while you make changes to your notebooks, you'll be happy to hear that there is a way to avoid that: [sphinx-autobuild](https://pypi.python.org/pypi/sphinx-autobuild)! # # It can be installed with # # python3 -m pip install sphinx-autobuild --user # # You can start auto-building your files with # # sphinx-autobuild # # This will start a local webserver which will serve the generated HTML pages at http://localhost:8000/. # Whenever you save changes in one of your notebooks, the appropriate HTML page(s) will be re-built and when finished, your browser view will be refreshed automagically. # Neat! # # You can also abuse this to auto-build the LaTeX output: # # sphinx-autobuild -b latex # # However, to auto-build the final PDF file as well, you'll need an additional tool. # Again, you can use `latexmk` for this (see [above](#Running-Sphinx)). # Change to the build directory and run # # latexmk -pdf -pvc # # If your PDF viewer isn't opened because of LaTeX build errors, you can use the command line flag `-f` to *force* creating a PDF file. # ## Automatic Creation of HTML and PDF output on [readthedocs.org](https://readthedocs.org) # # This is very easy! # # 1. Create an account on https://readthedocs.org/ and add your Github/Bitbucket repository (or any publicly available Git/Subversion/Mercurial/Bazaar repository). # # 1. Create a file named [requirements.txt](requirements.txt) (or whatever name you wish) in your repository containing the required pip packages: # # sphinx>=1.4 # nbsphinx # ipykernel # # 1. In the "Advanced Settings" on readthedocs.org, specify the path to your `requirements.txt` file (or however you called it) in the box labelled "Requirements file". Kinda obvious, isn't it? # # 1. Still in the "Advanced Settings", make sure the right Python interpreter is chosen. This must be the same version (2.x or 3.x) as you were using in your notebooks! # # 1. Make sure that in the "Settings" of your Github repository, under "Webhooks & services", "ReadTheDocs" is listed and activated in the "Services" section. If not, use "Add service". # There is probably a similar thing for Bitbucket. # # 1. Done! # # After that, you only have to "push" to your repository, and the HTML pages and the PDF file of your stuff are automagically created on readthedocs.org. Awesome! # # You can even have different versions of your stuff, just use Git tags and branches and select in the readthedocs.org settings (under "Admin", "Versions") which of those should be created. # ## HTML Themes # # The `nbsphinx` extension does *not* provide its own theme, you can use any of the available themes or [create a custom one](http://www.sphinx-doc.org/en/stable/theming.html#creating-themes), if you feel like it. # # The following links show how the `nbsphinx` documentation looks like in different themes. 
# # ### Sphinx's Built-In Themes # # * `alabaster`: # [example](http://nbsphinx.readthedocs.io/en/alabaster-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/alabaster-theme^...alabaster-theme) # # * `pyramid`: # [example](http://nbsphinx.readthedocs.io/en/pyramid-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/pyramid-theme^...pyramid-theme) # # * `classic`: # [example](http://nbsphinx.readthedocs.io/en/classic-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/classic-theme^...classic-theme) # # * `bizstyle`: # [example](http://nbsphinx.readthedocs.io/en/bizstyle-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/bizstyle-theme^...bizstyle-theme) # # * `haiku`: # [example](http://nbsphinx.readthedocs.io/en/haiku-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/haiku-theme^...haiku-theme) # # * `traditional`: # [example](http://nbsphinx.readthedocs.io/en/traditional-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/traditional-theme^...traditional-theme) # # * `agogo`: # [example](http://nbsphinx.readthedocs.io/en/agogo-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/agogo-theme^...agogo-theme) # # * `nature`: # [example](http://nbsphinx.readthedocs.io/en/nature-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/nature-theme^...nature-theme) # # ### 3rd-Party Themes # # * `sphinx_rtd_theme`: # [example](http://nbsphinx.readthedocs.io/en/readthedocs-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/readthedocs-theme^...readthedocs-theme) # # * `bootstrap`: # [example](http://nbsphinx.readthedocs.io/en/bootstrap-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/bootstrap-theme^...bootstrap-theme) # # * `cloud`, `redcloud`: # [example](http://nbsphinx.readthedocs.io/en/cloud-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/cloud-theme^...cloud-theme) # # * `sphinx_py3doc_enhanced_theme`: # [example](http://nbsphinx.readthedocs.io/en/py3doc-enhanced-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/py3doc-enhanced-theme^...py3doc-enhanced-theme) # # * `basicstrap`: # [example](http://nbsphinx.readthedocs.io/en/basicstrap-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/basicstrap-theme^...basicstrap-theme) # # * `dotted`: # [example](http://nbsphinx.readthedocs.io/en/dotted-theme/), # [usage](https://github.com/spatialaudio/nbsphinx/compare/dotted-theme^...dotted-theme) # # If you know of another Sphinx theme that should be included here, please open an [issue on Github](https://github.com/spatialaudio/nbsphinx/issues). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # ## Reg 180905188 # ### Roll No 29 C # # ## 1. Write a program to find the area of rectangle. Take input from user. # ## Eg. x= int(input(‘Enter number:’)) # + length = int(input("Enter number:")) breadth = int(input("Enter number:")) print("length is :" , length ) print("breadth is :" , breadth) print("Area of rectangle is :" , length * breadth) # - # ## 2. Write a program to swap the values of two variables. # + def swap(): a = 5 b = 7 print("a = " , a , "b= " , b) temp = a a = b b = temp print("a = " , a , "b= " , b) swap() # - # ## 3. Write a program to find whether a number is even or odd. 
# + def odd_or_even(num): return (num % 2) == 0 print(odd_or_even(7)) # - # ## 4. Write a program to check the largest among the given three numbers. # + def largest_among_three(a,b,c): if a > b and a > c: return a elif b > a and b > c: return b else: return c a = 5 b = 6 c = 7 print(largest_among_three(a,b,c)) # - # ## 5. Write a program to demonstrate while loop with else. for i in range(1,5): print("Loop iteration : " , i) else: print("inside else") # ## 6. Write a program to print the prime numbers for a user provided range. # + lower = int(input()) upper = int(input()) print("Prime numbers between", lower, "and", upper, "are:") for num in range(lower, upper + 1): # all prime numbers are greater than 1 if num > 1: for i in range(2, num): if (num % i) == 0: break else: print(num) # - # ## 7. Write a program to demonstrate List functions and operations. # + my_list = [1,2,3,4] print(my_list) my_list.append(5) print(my_list) my_list.pop() print(my_list) my_list.insert(2,100) print(my_list) my_list2 = my_list.copy() print(my_list2) my_list.reverse() print(my_list) my_list.sort() print(my_list) my_list.count(2) print(my_list) print("index of 2 in the list is :", my_list.index(2)) my_list.clear() print(my_list) # - # ## 8. Consider the tuple(1,3,5,7,9,2,4,6,8,10). Write a program to print half its values in one line and the other half in the next line. # + my_tuple = (1,3,5,7,9,2,4,6,8,10) list1 = [] list2 = [] for i in my_tuple: if i % 2 == 0: list1.append(i) else: list2.append(i) print(list1) print(list2) # - # ## 9. Consider the tuple (12, 7, 38, 56, 78 ). Write a program to print another tuple whose values are even number in the given tuple. # + input = (12, 7, 38, 56, 78 ) even_list = [] for i in input: if i % 2 == 0: even_list.append(i) even_tuple = tuple(even_list) print(even_tuple) # - # ## 10. Write a Python program to print negative Numbers in a List using for loop. Eg. [11, -21, 0, 45, 66, -93]. # + input_list = [11, -21, 0, 45, 66, -93] for i in input_list: if i < 0: print(i) # - # ## 11. Write a program to print negative Numbers in a List using while loop. # + input_list = [11, -21, 0, 45, 66, -93] i = 0 while i < 6: if input_list[i] < 0: print(input_list[i]) i = i + 1 # - # ## 12. Write a Python program to count positive and negative numbers in a List. # + input_list = [11, -21, 0, 45, 66, -93] positives = 0 negatives = 0 for i in input_list: if i < 0: negatives = negatives + 1 elif i > 0: positives = positives + 1 print("positives: " , positives) print("negatives: " , negatives) # - # ## 13. Write a Python program to remove all even elements from a list . # + num_list = [11, 21, 10, 45, 66, 93] print(num_list) for i in num_list: if i % 2 == 0: num_list.remove(i) print(num_list) # - # ## 14. Define a dictionary containing Students data {Name, Height, Qualification}. # a) Convert the dictionary into DataFrame # b) Declare a list that is to be converted into a new column (Address} # c) Using 'Address' as the column name and equate it to the list and display the result. # + my_dict = { "Name" : [ "Kaustav" , "Sahil" ], "Height" : [ "170 cm" , "165 cm" ] , "Qualification" : ["B. Tech CSE" , "B. Tech CSE"] } print(my_dict) print() import pandas as pd df_my_dict = pd.DataFrame.from_dict(my_dict) print(df_my_dict) print() address_list = ["Gurgaon" , "Jammu"] df_my_dict["Address"] = address_list print(df_my_dict) # - # ## 15. Define a dictionary containing Students data {Name, Height, Qualification}. 
# a) Convert the dictionary into DataFrame # b) Use DataFrame.insert() to add a column and display the result. # + my_dict = { "Name" : [ "Kaustav" , "Sahil" ], "Height" : [ "170 cm" , "165 cm" ] , "Qualification" : ["B. Tech CSE" , "B. Tech CSE"] } print(my_dict) print() import pandas as pd df_my_dict = pd.DataFrame.from_dict(my_dict) print(df_my_dict) print() address_list = ["Gurgaon" , "Jammu"] df_my_dict.insert(column="Address" , value=address_list , loc=3 ) print(df_my_dict) # - # ## End # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import dautil as dl import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns from IPython.display import HTML context = dl.nb.Context('exp_smoothing') lr = dl.nb.LatexRenderer(chapter=6, start=3, context=context) lr.render(r's_{t}= \alpha x_{t} + (1-\alpha)s_{t-1},\ t>0') lr.render(r's_{t} = \alpha x_{t} + (1-\alpha)(s_{t-1} + b_{t-1}') lr.render(r'b_{t} = \beta (s_t - s_{t-1}) + (1-\beta)b_{t-1}') def grid_mse(i, j, devs): alpha = 0.1 * i beta = 0.1 * j cell = dl.ts.double_exp_smoothing(devs.values, alpha, beta) return dl.stats.mse(devs, cell) wind = dl.data.Weather.load()['WIND_SPEED'].dropna() wind = dl.ts.groupby_year(wind).mean() devs = dl.ts.rolling_deviations(wind, 12).dropna() # %matplotlib inline dl.nb.RcWidget(context) dl.nb.LabelWidget(2, 2, context) # + sp = dl.plotting.Subplotter(2, 2, context) sp.label(ylabel_params=dl.data.Weather.get_header('WIND_SPEED')) sp.ax.plot(wind.index, wind) cp = dl.plotting.CyclePlotter(sp.next_ax()) cp.plot(devs.index, devs, label='Rolling Deviations') cp.plot(devs.index, dl.ts.exp_smoothing(devs.values, 0.7), label='Smoothing') sp.label() alphas = 0.01 * np.arange(1, 100) errors = [dl.stats.mse(devs, dl.ts.exp_smoothing(devs.values, alpha)) for alpha in alphas] sp.label(advance=True) sp.ax.plot(alphas, errors) sp.label(advance=True) rng = range(1, 10) df = dl.report.map_grid(rng, rng, ["alpha", "beta", "mse"], grid_mse, devs) sns.heatmap(df, cmap='Blues', square=True, annot=True, fmt='.1f', ax=sp.ax) HTML(sp.exit()) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler, LabelEncoder from sklearn.linear_model import LinearRegression, SGDRegressor, Ridge, Lasso, ElasticNet from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import make_pipeline from sklearn.metrics import mean_squared_error, r2_score from sklearn.feature_selection import SelectFromModel import matplotlib.pyplot as plt import numpy as np # - df = pd.read_csv('Resources/sales.csv') df.head() df=df.drop(columns=['Date', 'year', 'day', 'month','weekly_sales']) df.head() df=df[['Store','Product','week_of_year','Base Price','Price','promotion','Is_Holiday','Weekly_Units_Sold']] df.head() df['Temp']='_' df['Store'] = df['Temp'].str.cat(df['Store'].values.astype(str)) df.head() del df['Temp'] df df['Temp']='_' df['Product'] = df['Temp'].str.cat(df['Product'].values.astype(str)) df.head() del df['Temp'] df df['Temp']='_' df['week_of_year'] = 
df['Temp'].str.cat(df['week_of_year'].values.astype(str)) del df['Temp'] df.head() # LabelEncoding Is_Holiday column df['Is_Holiday']=LabelEncoder().fit_transform(df['Is_Holiday']) df.head() # + # Create features X=df.drop(columns=['Weekly_Units_Sold'], axis = 1) # One Hot Encode. X=pd.get_dummies(X) # Create target. y = df['Weekly_Units_Sold'] # - X X.describe() # ## Linear Regression # + # Split data X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) # Scale the data scaler = StandardScaler().fit(X_train) X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) # + # Create and train the model model = LinearRegression().fit(X_train_scaled, y_train) # Generate predictions y_pred = model.predict(X_test_scaled) # + # Score Data training_score = model.score(X_train_scaled, y_train) testing_score = model.score(X_test_scaled, y_test) score = r2_score(y_test, y_pred) print(f"Training Score: {training_score}") print(f"Testing Score: {testing_score}") print('---------------------') print(f"R2 Score: {score}") # + from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.ensemble import GradientBoostingRegressor GBR = GradientBoostingRegressor() parameters = {'learning_rate': [0.045, 0.05, 0.055, 0.06, 0.065], 'subsample' : [0.09, 0.085, 0.08, 0.075, 0.07], 'n_estimators' : [900, 950, 1000, 1050, 1100], 'max_depth' : [6, 7, 8, 9, 10] } grid_GBR = GridSearchCV(estimator=GBR, param_grid = parameters, cv = 2, n_jobs=-1) grid_GBR.fit(X_train_scaled, y_train) # Generate predictions y_pred = grid_GBR.predict(X_test_scaled) print(" Results from Grid Search " ) print("\n The best estimator across ALL searched params:\n",grid_GBR.best_estimator_) print("\n The best score across ALL searched params:\n",grid_GBR.best_score_) print("\n The best parameters across ALL searched params:\n",grid_GBR.best_params_) score = r2_score(y_test, y_pred) score # + # # Apply trees in the ensemble to X, return leaf indices. # apply(X) # # Fit the gradient boosting model. # fit(X, y[, sample_weight, monitor]) # # Get parameters for this estimator. # get_params([deep]) X = [] predict(X) # # Return the coefficient of determination of the prediction. # score(X, y[, sample_weight]) # # Set the parameters of this estimator. # set_params(**params) # # Predict regression target at each stage for X. # staged_predict(X) # - # Visualizing the regression coefficients. plt.rcParams["figure.figsize"] = (20,13) plt.xticks(rotation=90) plt.bar(X_train.columns, model.coef_,) plt.show() # + # filter for product 1 Prod1_df = df[df['Product'] == '_1'] # Create features X = Prod1_df.drop(columns=['Weekly_Units_Sold'], axis = 1) # One Hot Encode. X=pd.get_dummies(X) # Create target. 
y = Prod1_df['Weekly_Units_Sold'] # Split data X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) # Scale the data scaler = StandardScaler().fit(X_train) X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) # Create and train the model model = LinearRegression().fit(X_train_scaled, y_train) # Generate predictions y_pred = model.predict(X_test_scaled) # Score Data training_score = model.score(X_train_scaled, y_train) testing_score = model.score(X_test_scaled, y_test) score = r2_score(y_test, y_pred) print(f"Training Score for Product 1: {training_score}") print(f"Testing Score for Product 1: {testing_score}") print('---------------------') print(f"R2 Score for Product 1: {score}") # + from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.ensemble import GradientBoostingRegressor GBR = GradientBoostingRegressor() parameters = {'learning_rate': [0.05, 0.055, 0.06, 0.065, 0.07], 'subsample' : [0.08, 0.075, 0.07, 0.065, 0.06], 'n_estimators' : [900, 950, 1000, 1050, 1100], 'max_depth' : [5, 6, 7, 8, 9] } grid_GBR = GridSearchCV(estimator=GBR, param_grid = parameters, cv = 2, n_jobs=-1) grid_GBR.fit(X_train_scaled, y_train) # Generate predictions y_pred = grid_GBR.predict(X_test_scaled) print(" Results from Grid Search " ) print("\n The best estimator across ALL searched params:\n",grid_GBR.best_estimator_) print("\n The best score across ALL searched params:\n",grid_GBR.best_score_) print("\n The best parameters across ALL searched params:\n",grid_GBR.best_params_) score = r2_score(y_test, y_pred) score # + # filter for product 2 Prod2_df = df[df['Product'] == '_2'] # Create features X = Prod2_df.drop(columns=['Weekly_Units_Sold'], axis = 1) # One Hot Encode. X=pd.get_dummies(X) # Create target. 
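# Note: the next line reuses Prod1_df from the Product 1 cell; for this
# 'filter for product 2' section the intended target is presumably
# y = Prod2_df['Weekly_Units_Sold'].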
y = Prod1_df['Weekly_Units_Sold'] # Split data X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) # Scale the data scaler = StandardScaler().fit(X_train) X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) # Create and train the model model = LinearRegression().fit(X_train_scaled, y_train) # Generate predictions y_pred = model.predict(X_test_scaled) # Score Data training_score = model.score(X_train_scaled, y_train) testing_score = model.score(X_test_scaled, y_test) score = r2_score(y_test, y_pred) print(f"Training Score for Product 2: {training_score}") print(f"Testing Score for Product 2: {testing_score}") print('---------------------') print(f"R2 Score for Product 2: {score}") # + from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.ensemble import GradientBoostingRegressor GBR = GradientBoostingRegressor() parameters = {'learning_rate': [0.025, 0.03, 0.035, 0.04, 0.045], 'subsample' : [0.6, 0.055, 0.05, 0.045, 0.04], 'n_estimators' : [1400, 1450, 1500, 1550, 1650], 'max_depth' : [8, 9, 10, 11, 12] } grid_GBR = GridSearchCV(estimator=GBR, param_grid = parameters, cv = 2, n_jobs=-1) grid_GBR.fit(X_train_scaled, y_train) # Generate predictions y_pred = grid_GBR.predict(X_test_scaled) score = r2_score(y_test, y_pred) print(" Results from Grid Search " ) print("\n The best estimator across ALL searched params:\n",grid_GBR.best_estimator_) print("\n The best score across ALL searched params:\n",grid_GBR.best_score_) print("\n The best parameters across ALL searched params:\n",grid_GBR.best_params_) score # + # filter for product 3 Prod3_df = df[df['Product'] == '_3'] # Create features X = Prod1_df.drop(columns=['Weekly_Units_Sold'], axis = 1) # One Hot Encode. X=pd.get_dummies(X) # Create target. 
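# Note: X in this 'filter for product 3' cell was built from Prod1_df while y below
# is taken from Prod3_df; for a consistent Product 3 model both should presumably
# come from Prod3_df.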
y = Prod3_df['Weekly_Units_Sold'] # Split data X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) # Scale the data scaler = StandardScaler().fit(X_train) X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) # Create and train the model model = LinearRegression().fit(X_train_scaled, y_train) # Generate predictions y_pred = model.predict(X_test_scaled) # Score Data training_score = model.score(X_train_scaled, y_train) testing_score = model.score(X_test_scaled, y_test) score = r2_score(y_test, y_pred) print(f"Training Score for Product 3: {training_score}") print(f"Testing Score for Product 3: {testing_score}") print('---------------------') print(f"R2 Score for Product 3: {score}") # + from sklearn import datasets from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.ensemble import GradientBoostingRegressor GBR = GradientBoostingRegressor() parameters = {'learning_rate': [0.04, 0.0425, 0.0475, 0.05], 'subsample' : [0.7, 0.6, 0.5, 0.4], 'n_estimators' : [1600, 1650,1700, 1750], 'max_depth' : [1, 2, 3, 4] } grid_GBR = GridSearchCV(estimator=GBR, param_grid = parameters, cv = 2, n_jobs=-1) grid_GBR.fit(X_train_scaled, y_train) # Generate predictions y_pred = grid_GBR.predict(X_test_scaled) print(" Results from Grid Search " ) print("\n The best estimator across ALL searched params:\n",grid_GBR.best_estimator_) print("\n The best score across ALL searched params:\n",grid_GBR.best_score_) print("\n The best parameters across ALL searched params:\n",grid_GBR.best_params_) score = r2_score(y_test, y_pred) score # - # ## Lasso Regression # + # Create and train the model model = Lasso(max_iter=10000).fit(X_train_scaled, y_train) # Generate predictions y_pred = model.predict(X_test_scaled) # + # Score Data training_score = model.score(X_train_scaled, y_train) testing_score = model.score(X_test_scaled, y_test) score = r2_score(y_test, y_pred) print(f"Training Score: {training_score}") print(f"Testing Score: {testing_score}") print('---------------------') print(f"R2 Score: {score}") # - # Visualizing the regression coefficients. plt.rcParams["figure.figsize"] = (20,13) plt.xticks(rotation=90) plt.bar(X_train.columns, model.coef_) plt.show() # ## Ridge # + # Create and train the model model = Ridge(alpha=100).fit(X_train_scaled, y_train) # Generate predictions y_pred = model.predict(X_test_scaled) # + # Score Data training_score = model.score(X_train_scaled, y_train) testing_score = model.score(X_test_scaled, y_test) score = r2_score(y_test, y_pred) print(f"Training Score: {training_score}") print(f"Testing Score: {testing_score}") print('---------------------') print(f"R2 Score: {score}") # - # Visualizing the regression coefficients. plt.rcParams["figure.figsize"] = (20,13) plt.xticks(rotation=90) plt.bar(X_train.columns, model.coef_) plt.show() # ## ElasticNet # + # Create and train the model model = ElasticNet(alpha=10).fit(X_train_scaled, y_train) # Generate predictions y_pred = model.predict(X_test_scaled) # + # Score Data training_score = model.score(X_train_scaled, y_train) testing_score = model.score(X_test_scaled, y_test) score = r2_score(y_test, y_pred) print(f"Training Score: {training_score}") print(f"Testing Score: {testing_score}") print('---------------------') print(f"R2 Score: {score}") # - # Visualizing the regression coefficients. 
plt.rcParams["figure.figsize"] = (20,13) plt.xticks(rotation=90) plt.bar(X_train.columns, model.coef_) plt.show() dnn_model = build_and_compile_model(normalizer) dnn_model.summary() # Select the columns that have a nonzero value from the LASSO regression. reg = Lasso(max_iter=5000).fit(X_train_scaled, y_train) sel = SelectFromModel(reg) sel.fit(X_train_scaled, y_train) X_selected_train, X_selected_test, y_train, y_test = train_test_split(sel.transform(X), y, random_state=1) scaler = StandardScaler().fit(X_selected_train) X_selected_train_scaled = scaler.transform(X_selected_train) X_selected_test_scaled = scaler.transform(X_selected_test) reg = LinearRegression().fit(X_selected_train_scaled, y_train) reg.score(X_selected_test_scaled, y_test) reg = LinearRegression().fit(X_train, y_train) print(reg.coef_) plt.bar(X.columns, reg.coef_) plt.show() def rmse(X, predictions): return np.sqrt(np.mean(np.square(X - predictions))) # + # Create and train the model model = LinearRegression().fit(X_train, y_train) # Generate predictions predictions_test = model.predict(X_test) # Compute loss to evalute the model loss = rmse(y_test, predictions_test) print('Test Loss for LinearRegression:', loss) # Generate predictions for train set predictions_train = model.predict(X_train) # Compute loss on train set to evalute the model loss = rmse(y_train, predictions_train) print('Training Loss for LinearRegression:', loss) # - # + # # Scale the data # scaler = StandardScaler().fit(X_train) # + # model=LinearRegression() # + # model.fit(X_train, y_train) # + training_score = model.score(X_train, y_train) testing_score = model.score(X_test, y_test) ### END SOLUTION print(f"Training Score: {training_score}") print(f"Testing Score: {testing_score}") # - score = model.score(X, y) print(f"R2 Score: {score}") predictions = model.predict(X) # Plot Residuals plt.scatter(predictions, predictions - y) plt.hlines(y=0, xmin=predictions.min(), xmax=predictions.max()) plt.show() from sklearn.svm import SVR from sklearn.preprocessing import PolynomialFeatures from sklearn.pipeline import make_pipeline import numpy as np # + # USE WEIRD MAGICAL FUNCTIONS TO FIT THE CASE DATA TO A SECORD ORDER POLYNOMIAL poly_model = make_pipeline(PolynomialFeatures(4), LinearRegression()) poly_model.fit(X[:, np.newaxis], y) Xfit = np.linspace(0, len(X), 1000) yfit = poly_model.predict(Xfit[:, np.newaxis]) plt.plot(X, y) plt.plot(Xfit, yfit) plt.legend(loc="best") # - reg = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000, tol=1e-3)) reg.fit(X,y) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) # Create and train the model model = LinearRegression().fit(X_train, y_train) -- --- -- jupyter: -- jupytext: -- text_representation: -- extension: .hs -- format_name: light -- format_version: '1.5' -- jupytext_version: 1.14.4 -- kernelspec: -- display_name: Haskell -- language: haskell -- name: haskell -- --- -- ## Haskell Basics for Functional Programming -- *Some snippets in this notebook are taken from BLG458E Functional Programming course slides which can be accessed through the following link: https://www.slideshare.net/uyar/tag/blg458e* -- ### Definitions -- -- Definitions associate an identifier(name) with a value of a specific type -- **Format:** -- -- ```haskell -- name :: type -- name = expression -- ``` -- -- *Frequently used types: Int, Float, String, Char, Bool, Double, Integer* -- + age :: Int -- age is an integer age = 20 -- age is equal to 20 name :: String -- name is a string name = "Bob" -- name is equal 
to Bob gpa :: Float -- gpa is a float gpa = 3.78 -- gpa is equal to 3.78 -- - age name gpa -- **Example** -- Calculate circumference of a circle whose radius is 7.2 -- + radius :: Float radius = 7.2 circumference :: Float circumference = 2 * 3.14159 * radius circumference -- - -- ### Local Definitions -- -- We can also define identifiers locally *which can only used inside a block* -- **Format:** -- -- -- -- ```haskell -- name = expression -- where -- name1 :: type1 -- name1 = expression1 -- -- name2 :: type2 -- name2 = expression2 -- ... -- ``` -- -- Note: Haskell can infer the types of identifiers, so name::type parts can be left out. -- **Example** -- Calculate BMI of a person whose weight is 78 kg and height is 1.82 m -- + bmi :: Float bmi = weight / (height * height) -- we'll define weight and height inside where where weight = 78 -- Haskell infers the type of weight and height height = 1.82 bmi -- - weight -- locally-defined identifiers aren't accessible outside of their scope -- ### Functions -- -- **Definition:** -- -- ```haskell -- functionName :: param1Type -> param2Type -> ... -> paramkType -> resultType -- functionName param1 param2 ... paramk = result -- ``` -- -- For example, you can call functions like this: -- `functionName p1 p2 p3` where p1, p2, and p3 represent parameters of the function -- square :: Int -> Int -- parameter is of type Int and result is also of Type Int square x = x * x -- parameter is x, and result is x^2 square 7 -- ### Local Function Definitions -- -- You can define functions locally as well using *where* keyword -- **Example** -- Let's write a function to calculate a3 - b3 differenceOfCubes :: Int -> Int -> Int differenceOfCubes a b = cube a - cube b where cube :: Int -> Int cube a = a * a * a differenceOfCubes 5 3 -- ### Prefix vs Infix -- -- Functions can be called in infix format using backticks. -- -- For example you can call mod function using **mod 10 3** or **10 \`mod\` 3** mod 10 3 10 `mod` 3 mod 10 3 == 10 `mod` 3 -- You can also use operators in prefix format. -- All you need to do is put the operator inside paranthesis as follows: *(/) 10 4* (/) 10 4 -- ### Conditional Statements -- -- In order to write conditional statements, we can use *Guards* -- -- A guard is a boolean expression to check some condition -- -- **Format:** -- -- ```haskell -- functionName :: p1Type -> p2Type -> ... -> pkType -> resultType -- functionName p1 p2 .. pk -- | guard1 = expression1 -- | guard2 = expression2 -- ... -- | otherwise = expression -- ``` -- -- Return value is the expression of the first guard whose value is True. -- If none of the guards is true, last expression (otherwise case) is returned. -- maxOfTwo :: Int -> Int -> Int maxOfTwo a b | a >= b = a | otherwise = b maxOfTwo 3 7 -- We can also use if-then-else statements in order to write conditional statements. -- **Format** -- `if condition then x else y` if 12 > 5 then "12 is larger than 5" else "12 is not larger than 5" -- ### Error Messages -- -- We can use *error* keyword to define error messages. multiInverse :: Float -> Float multiInverse a | a == 0 = error "Division by Zero!" 
| otherwise = 1 / a multiInverse 5 multiInverse 0 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Lab 3: Gaussian process regression # # ### Machine Learning 1, September 2015 # # * The lab exercises should be made in groups of two, three or four people. # * The deadline is October 25th (Sunday) 23:59. # * Assignment should be sent to (). The subject line of your email should be "lab\#\_lastname1\_lastname2\_lastname3". # * Put your and your teammates' names in the body of the email. # * Attach the .IPYNB (IPython Notebook) file containing your code and answers. Naming of the file follows the same rule as the subject line. For example, if the subject line is "lab01\_Kingma\_Hu", the attached file should be "lab01\_Kingma\_Hu.ipynb". Only use underscores ("\_") to connect names, otherwise the files cannot be parsed. # # Notes on implementation: # # * You should write your code and answers in an IPython Notebook: http://ipython.org/notebook.html. If you have problems, please contact us. # * Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline. # * NOTE: Make sure we can run your notebook / scripts! # $\newcommand{\bx}{\mathbf{x}}$ # $\newcommand{\bxp}{\mathbf{x}^{'}}$ # $\newcommand{\bw}{\mathbf{w}}$ # $\newcommand{\bt}{\mathbf{t}}$ # $\newcommand{\by}{\mathbf{y}}$ # $\newcommand{\bm}{\mathbf{m}}$ # $\newcommand{\bb}{\mathbf{b}}$ # $\newcommand{\bS}{\mathbf{S}}$ # $\newcommand{\ba}{\mathbf{a}}$ # $\newcommand{\bz}{\mathbf{z}}$ # $\newcommand{\bv}{\mathbf{v}}$ # $\newcommand{\bq}{\mathbf{q}}$ # $\newcommand{\bp}{\mathbf{p}}$ # $\newcommand{\bh}{\mathbf{h}}$ # $\newcommand{\bI}{\mathbf{I}}$ # $\newcommand{\bX}{\mathbf{X}}$ # $\newcommand{\bT}{\mathbf{T}}$ # $\newcommand{\bPhi}{\mathbf{\Phi}}$ # $\newcommand{\bW}{\mathbf{W}}$ # $\newcommand{\bV}{\mathbf{V}}$ # $\newcommand{\xm}{\mathbf{x}_m}$ # $\newcommand{\xn}{\mathbf{x}_n}$ # $\newcommand{\y}{\mathbf{y}}$ # $\newcommand{\K}{\mathbf{K}}$ # $\newcommand{\zero}{\mathbf{0}}$ # $\newcommand{\yi}{\y_i}$ # $\newcommand{\thetav}{\mathbf{\theta}}$ # $\newcommand{\t}{\mathbf{t}}$ # $\newcommand{\x}{\mathbf{x}}$ # $\newcommand{\tN}{\mathbf{t}_N}$ # $\newcommand{\xN}{\mathbf{x}_N}$ # $\newcommand{\k}{\mathbf{k}}$ # $\newcommand{\C}{\mathbf{C}}$ # $\newcommand{\CN}{\mathbf{C}_N}$ # $\newcommand{\KN}{\mathbf{K}_N}$ # $\newcommand{\eyeN}{\mathbf{I}_N}$ # # Gaussian process regression # # For this Lab we will be refer to Bishop sections 6.4.2 and 6.4.3. You may also want to refer to Rasmussen's Gaussian Process text which is available online at http://www.gaussianprocess.org/gpml/chapters/ and especially to the project found at http://www.automaticstatistician.com/index.php by Ghahramani for some intuition in GP. To understand Gaussian processes, it is highly recommended understand how marginal, partitioned Gaussian distributions can be converted into conditional Gaussian distributions. This is covered in Bishop 2.3 and summarized in Eqns 2.94-2.98. # # # # ### Sinusoidal Data # We will use the same data generating function that we used previously for regression. You can change sigma/beta, but keep it reasonable. Definitely play around once you have things working. Make use of these functions as you wish. 
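# Before generating the data, here is a small self-contained sketch (our own illustration, not part of the original lab) of the partitioned-Gaussian result referred to above: conditioning one component of a joint Gaussian on the other, as summarized in the Bishop Eqns 2.94-2.98 mentioned earlier. The numbers are arbitrary.

# +
# Joint Gaussian over (x_a, x_b): mean and covariance entries chosen arbitrarily.
mu_a, mu_b = 0.0, 1.0
S_aa, S_ab, S_bb = 2.0, 0.8, 1.0   # S_ba equals S_ab in this scalar case

# Observing x_b turns p(x_a | x_b) into another Gaussian with a shifted mean
# and a reduced variance; this is the same manipulation the GP predictive
# equations rely on.
x_b_obs = 2.0
mu_cond = mu_a + (S_ab / S_bb) * (x_b_obs - mu_b)
var_cond = S_aa - (S_ab / S_bb) * S_ab

print("conditional mean = %.3f, conditional variance = %.3f" % (mu_cond, var_cond))
# -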
# %pylab inline import numpy as np import pylab as pp import matplotlib.pyplot as plt import math import numpy.matlib sigma = 0.5 beta = 1.0 / pow(sigma,2) # this is the beta used in Bishop Eqn. 6.59 N_test = 100 x_test = np.linspace(-1,1,N_test); mu_test = np.zeros( N_test ) # + def true_mean_function( x ): return np.sin( 2*pi*(x+1) ) def add_noise( y, sigma ): return y + sigma*np.random.randn(len(y)) def generate_t( x, sigma ): return add_noise( true_mean_function( x), sigma ) # - y_test = true_mean_function( x_test ) t_test = add_noise( y_test, sigma ) pp.plot( x_test, y_test, 'b-', lw=2) pp.plot( x_test, t_test, 'go') # ### 1. Sampling from the Gaussian process prior (30 points) # We will implement Gaussian process regression using the kernel function in Bishop Eqn. 6.63. # # #### 1.1 k_n_m( xn, xm, thetas ) (10 points) # To start, implement function "k_n_m( xn, xm, thetas )" that takes scalars $\xn$ and $\xm$, and a vector of $4$ thetas, and computes the kernel function Bishop Eqn. 6.63 (10 points). def k_n_m(xn,xm,thetas): return thetas[0]*np.exp((-thetas[1]/2) * np.sum(pow(xn-xm,2))) + thetas[2]+thetas[3]*xn.transpose().dot(xm) # #### 1.2 computeK( X1, X2, thetas ) (5 points) # Eqn 6.60 is the marginal distribution of mean ouput of $N$ data vectors: $p(\y) = \mathcal{N}(\zero, \K)$. Notice that the expected mean function is $0$ at all locations, and that the covariance is a $N$ by $N$ kernel matrix $\K$. Write a function "computeK( X1, X2, thetas )" that computes the kernel matrix. Hint: use k_n_m as part of an innner loop (of course, there are more efficient ways of computing the kernel function making better use of vectorization, but that is not necessary) (5 points). def computeK( X1, X2, thetas ): K= np.empty([len(X1),len(X2)]) for n in range(0,len(X1)): for m in range(0,len(X2)): K[n,m]=k_n_m(np.array(X1[n]),np.array(X2[m]),thetas) return K # #### 1.3 Plot function samples (15 points) # Now sample mean functions at the x_test locations for the theta values in Bishop Figure 6.5, make a figure with a 2 by 3 subplot and make sure the title reflects the theta values (make sure everything is legible). In other words, sample $\yi \sim \mathcal{N}(\zero, \K_{\thetav})$. Make use of numpy.random.multivariate_normal(). On your plots include the expected value of $\y$ with a dashed line and fill_between 2 standard deviations of the uncertainty due to $\K$ (the diagonal of $\K$ is the variance of the model uncertainty) (15 points). # + def plot_1(): N_test = 100 x_test = np.linspace(-1,1,N_test) fig, ax = plt.subplots(2,3) thetas = [[1.00,4.00,0.00,0.00],[9.00,4.00,0.00,0.00],[1.00,64.00,0.00,0.00],[1.00,0.25,0.00,0.00],[1.00,4.00,10.00,0.00],[1.00,4.00,0.00,5.00]] i=0 for theta in thetas: col = i%3 row = math.floor(i/3) i+=1 K=computeK(x_test,x_test,theta) var =K.diagonal() std=np.array([math.sqrt(x) for x in var]) ys=np.random.multivariate_normal(np.zeros(N_test),K,5) for y in ys: ax[row,col].plot(x_test,y,'-') ax[row,col].set_title('('+','.join(str(x) for x in theta)+")") #mean function ax[row,col].plot(x_test,np.zeros(N_test),'--') ax[row,col].fill_between(x_test,np.zeros(N_test)+2*std,np.zeros(N_test)-2*std,color='pink',alpha=0.1) # ax.ylim((-1.5,1.5)) plt.show() plot_1() # - # ### 2. Predictive distribution (35 points) # So far we have sampled mean functions from the prior. We can draw actual data $\t$ two ways. The first way is generatively, by first sampling $\y | \K$, then sampling $\t | \y, \beta$ (Eqns 6.60 followed by 6.59). 
The second way is to integrate over $\y$ (the mean draw) and directly sample $\t | \K, \beta$ using Eqn 6.61. This is the generative process for $\t$. Note that we have not specified a distribution over inputs $\x$; this is because Gaussian processes are conditional models. Because of this we are free to generate locations $\x$ when playing around with the GP; obviously a dataset will give us input-output pairs. # # Once we have data, we are interested in the predictive distribution (note: the prior is the predictive distribution when there is no data). Consider the joint distribution for $N+1$ targets, given by Eqn 6.64. Its covariance matrix is composed of block components $\CN$, $\k$, and $c$. The covariance matrix $CN$ for $\tN$ is $\CN = \KN + \eyeN / \beta$. We have just made explicit the size $N$ of the matrix; $N$ is the number of training points. The kernel vector $\k$ is a $N$ by $1$ vector of kernel function evaluations between the training input data and the test input vector. The scalar $c$ is a kernel evaluation at the test input. # # #### 2.1 gp_predictive_distribution(...) (10 points) # Write a function "gp_predictive_distribution(x_train, t_train, x_test, theta, beta, C = None)" that computes Eqns 6.66 and 6.67, except allow for an arbitrary number of test points (not just one) and now the kernel matrix is for training data. By having C as an optional parameter, we can avoid computing it more than once (for this problem it is unimportant, but for real problems this is an issue). The function should compute $\C$, $\k$, and $c$ and the mean and noise functions. Do not forget: the computeK function computes $\K$, not $\C$! (10 points) def gp_predictive_distribution(x_train, t_train, x_test, theta, beta, C = None): # computing ingredients c=[] for i in range(0,len(x_test)): t= computeK([x_test[0]],[x_test[0]],theta) c.append(t[0][0]+(1/beta)) k = computeK(x_train,x_test,theta) C= C if C!=None else computeK(x_train,x_train,theta)+np.matlib.identity(np.shape(x_train)[0])/beta k_C=k.transpose().dot(np.linalg.inv(C)) # mean computation mean = k_C.dot(t_train) # var computation var = c-k_C.dot(k) return mean, var # #### 2.2 gp_log_likelihood(...) (10 points) # Later, to learn the hyperparameters, we will need to compute the log-likelihood of the of the training data. Implicitly, this is conditioned on the value setting for $\thetav$. Write a function "gp_log_likelihood( x_train, t_train, theta, C = None, invC = None )", where C and invC can be stored and reused. (10 points) # + def gp_log_likelihood( x_train, t_train, theta, beta, C = None, invC = None ): C= C if C!=None else computeK(x_train,x_train,theta)+np.matlib.identity(np.shape(x_train)[0])/beta invC= invC if invC!=None else np.linalg.inv(C) N=np.shape(x_train)[0] res= (-np.log(np.linalg.det(C))-t_train.T.dot(invC).dot(t_train)-N*np.log(2*np.pi))/2 return np.asarray(res)[0][0] x_train = np.linspace(-1, 1, 5) y_train = true_mean_function(x_train) t_train = add_noise(y_train, sigma) #beta = 1 x_test = np.linspace(-1, 1, 7) thetas = np.array([(1, 4, 0, 0), (9, 4, 0, 0), (1, 64, 0, 0), (1, 0.25, 0, 0), (1, 4, 10, 0), (1, 4, 0, 5)]) print (gp_log_likelihood(x_train,t_train,thetas[0],beta)) # - # #### 2.3 Plotting (10 points) # Repeat the 6 plots above, but this time conditioned on the training points. Use the sinuosoidal data generator to create 2 training points where x is sampled uniformly between $-1$ and $1$. For these plots, feel free to use the provided function "gp_plot". 
Make sure you put the parameters in the title and this time also the log-likelihood. (10 points) Try to understand the two types of uncertainty! If you do not use "gp_plot", please add a fill between for the model and target noise. def gp_plot( x_test, y_test, mu_test, var_test, x_train, t_train, theta, beta ): # x_test: the test data # y_test: the true function at x_test # mu_test: predictive mean at x_test # var_test: predictive covariance at x_test # t_train: the training values # theta: the kernel parameters # beta: the precision (known) # the reason for the manipulation is to allow plots separating model and data stddevs. std_total = np.sqrt(np.diag(var_test)) # includes all uncertainty, model and target noise std_model = np.sqrt( std_total**2 - 1.0/beta ) # remove data noise to get model uncertainty in stddev std_combo = std_model + np.sqrt( 1.0/beta ) # add stddev (note: not the same as full) pp.plot( x_test, y_test, 'b', lw=3) pp.plot( x_test, mu_test, 'k--', lw=2 ) pp.fill_between( x_test, mu_test+2*std_combo,mu_test-2*std_combo, color='k', alpha=0.25 ) pp.fill_between( x_test, mu_test+2*std_model,mu_test-2*std_model, color='r', alpha=0.25 ) pp.plot( x_train, t_train, 'ro', ms=10 ) def plot_2_3(N_train,N_test): thetas = [[1.00,4.00,0.00,0.00],[9.00,4.00,0.00,0.00],[1.00,64.00,0.00,0.00],[1.00,0.25,0.00,0.00],[1.00,4.00,10.00,0.00],[1.00,4.00,0.00,5.00]] x_train = np.random.uniform(-1, 1,N_train) t_train = generate_t(x_train,sigma) x_test = np.linspace(-1, 1,N_test) t_test = generate_t(x_test,sigma) y_test =true_mean_function(x_test) # fig, ax = plt.subplots(2,3) for row in range(2): for col in range(3): idx = row * 3 + col plt.subplot(2, 3, idx + 1) mu,var = gp_predictive_distribution(x_train,t_train,x_test,thetas[idx],beta) mu=np.squeeze(np.asarray(mu.T)) gp_plot(x_test,y_test,mu,var,x_train,t_train,thetas[idx],beta) plt.title(str(thetas[idx])) plt.show() # + N_train = 2 N_test=100 plot_2_3(N_train,N_test) # - # #### 2.4 More ploting (5 points) # Repeat the 6 plots above, but this time conditioned a new set of 10 training points. (5 points) # + N_train = 10 N_test=100 plot_2_3(N_train,N_test) # - # ### 3. Learning the hyperparameters (45 points) # Learning the values of the parameter $\thetav$ can be very tricky for Gaussian processes in general, but when the data is univariate like ours, we can visualize the fit and see how plausible it looks. # # #### 3.1 Derivatives (5 points) # Maximum likelihood or MAP learning is the most common way of setting the parameters, though a fully Bayesian approach is possible too. We will look at ML today. For this, we start with the dervivative of the log-likelihood with respect to the parameters $\thetav$; this is Eqn 6.70. This, in turn, requires the derivative of the kernel matrix $\CN$ wrt $\thetav$. This is the matrix of element-wise derivatives of the kernel function. Write the derivatives for $\theta_0$ to $\theta_3$ for our kernel function (5 points). # $\frac{\partial k(x_n,x_m)}{\partial \theta_0}= \exp(-\frac{\theta_1}{2} ||x_n-x_m||^2)$ # # $\frac{\partial k(x_n,x_m)}{\partial \theta_1}= - \frac{\theta_0}{2} || x_n-x_m||^2 \exp(-\frac{\theta_1}{2} ||x_n-x_m||^2)$ # # $\frac{\partial k(x_n,x_m)}{\partial \theta_2}= 1$ # # $\frac{\partial k(x_n,x_m)}{\partial \theta_3}= x_n^Tx_m$ # # #### 3.2 Questions (5 points) # Which parameters in $\thetav$ are unconstrained, that is, where any positive/ negative values are valid? 
(5 points) # We start by decomposing our kernal function into a sum of 3 kernal functions: # # $k(x_n,x_m)= \theta_0 k_1(x_n,x_m) + \theta_2 k_2(x_n,x_m) + \theta_3 k_3(x_n,x_m)$ # # As we can see $\theta_0$, $\theta_2$, and $\theta_3$ are constants and therefore should be >0 for the kernal to be valid. On the other hand, $\theta_1$ plays a role of a bandwidth in $k_1$ and is not constrained. # #### 3.3 More derivatives (5 points) # For parameters that are constrained to be positive, the usual approach is to use the exponential of the free-parameter in the kernel function, but perform gradient ascent on the unconstrained values. Consider the case $\theta_i = \exp( \phi_i)$, where $\phi_i$ is unconstrained. Write the derivative for $\phi_i$ in terms of the derivatives you already computed (5 points). Hint: use the chain rule and do not repeat the full derivation. # # ___answer___ # # $\frac{\partial k(x_n,x_m)}{\partial \phi_0}= \exp(-\frac{\theta_1}{2} ||x_n-x_m||^2 + \phi_0)$ # # # $\frac{\partial k(x_n,x_m)}{\partial \phi_2}= \exp(\phi_2)$ # # $\frac{\partial k(x_n,x_m)}{\partial \phi_3}= x_n^Tx_m \exp(\phi_3)$ # # #### 3.4 Grid search (10 points) # Grid-search: for the same training set you have above, perform a small grid search over $\thetav$ (try at least 20 combinations). Have your grid-search loop or function print out rows of log-likelihood + $\thetav$ sorted by best to worst. Use the log-likelihood to select the best $\thetav$ and the worst. Plots both the same way as the subplots above (ie a 1 by 2 subplot of best and worst). (10 points) # + def grid_search(iter): sigma = 0.5 beta = 1.0 / pow(sigma,2) # this is the beta used in Bishop Eqn. 6.59 N_train = 30 N_test = 100 x_train = np.random.uniform(-1, 1,N_train) t_train = generate_t(x_train,sigma) x_test = np.linspace(-1, 1,N_test) t_test = generate_t(x_test,sigma) y_test =true_mean_function(x_test) res ={} for i in range(0,iter): thetas=np.random.uniform(1,10,4) lh=gp_log_likelihood(x_train,t_train,thetas,beta) res[lh]=thetas print ('likelihood: '+str(lh)+' thetas: '+str(thetas)) keys=sorted(res.keys()) best_theta = np.array(res[keys[iter-1]]) worst_theta = np.array(res[keys[0]]) # best plt.subplot(1, 2, 1) mu,var = gp_predictive_distribution(x_train,t_train,x_test,best_theta,beta) mu=np.squeeze(np.asarray(mu.T)) gp_plot(x_test,y_test,mu,var,x_train,t_train,best_theta,beta) plt.title("The best theta: "+str(best_theta)) # worst plt.subplot(1, 2, 2) mu,var = gp_predictive_distribution(x_train,t_train,x_test,worst_theta,beta) mu=np.squeeze(np.asarray(mu.T)) gp_plot(x_test,y_test,mu,var,x_train,t_train,worst_theta,beta) plt.title("The worst theta: "+ str(worst_theta)) plt.show() grid_search(300) # - # #### 3.5 Questions (10 points) # Selecting kernel functions can be somewhat of an art. There are charateristics of kernel functions that are useful for some data sets, but not others. Complicating the matter is the ability to combine kernels with different characteristics (long term trends + seasonal fluctuations). Describe the charactistics of the kernel function we are using in terms of (signal, scale, offsets, etc). You may want to play around with $\thetav$ and see what each parameter does/affects/etc. (5 points) Describe why the best parameters work well for the training data and explain why the bad parameter settings perform poorly (in terms of the first part of the question). 
(5 points) # ___answer___ # # As we showed in 3.2 the kernal function can be decomposed as a combination of other kernel functions, we shall continue our discussion from that point. # $k(x_n,x_m)= \theta_0 k_1(x_n,x_m) + \theta_2 k_2(x_n,x_m) + \theta_3 k_3(x_n,x_m)$ # # $k_1$ kernel corresponds to the similarity measure between points (signal capturing). $k_2 $ and $k_3$ operate as offsets and it was observed that it’s possible to achieve a good fit to the true function without the offset. Theta parameters have also some interpretation. $\theta_1$ corresponds to the bandwidth of the unnormalised gaussian distribution, and $\theta_0$ corresponds to the scaling factor of the distribution. Since $k_2(x_n,x_m)=x_n^0x_m^0$, $\theta_2$ plays a role of the offset, and $\theta_3$ is the scaling factor of the linear kernel. # # # As we stated previously, it’s possible to achieve a good data fit without the offset (i.e. only with a similarity kernel and two thetas), the opposite is also true, i.e. it’s not possible to achieve a good data fit without the similarity kernel and the corresponding thetas. The worst performance models have been observed to have low values for $\theta_0$ and $\theta_1$, and that implies that it can’t capture the signal. The key aspect in the kernel is obviously the similarity measure between two data points, and say, if both $\theta_0$ and $\theta_1$ are set close to 0, then the kernel loses its power, and the expectation on the plot becomes similar to a constant function. # #### 3.6 Bonus: Implementation (20 points) # Implement gradient-ascent (or descent if you wish) using the combination of a) the log-likelihood objective function and b) the gradients you calculated above. Run on the training data above and show the log-likehood curve as it learns and a plot of the final model. Feel free to use available software (eg search for "minimize.py" which uses conjugate gradient descent, or something in scipy). NB: log-likelihood should be monotonically increasing. You are encouraged to also search and use "checkgrad". (20 points) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Visualizing GSD File # # In this example, we will use `fresnel` to visualize a gsd file. We will color the particles & bonds by types, as well as visualize the simulation box. # # We will need the [gsd](https://gsd.readthedocs.io/en/stable/) package to run this example. import fresnel import gsd.hoomd import numpy as np # First we read in the `.gsd` file. # + with gsd.hoomd.open(name="molecules.gsd", mode="rb") as gsd_file: snap = gsd_file[0] box = snap.configuration.box # - # We want to color by particle type. We will color A types red, B types blue, and C types green. 
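# As an optional aside (added here, not part of the original example), the typeid-to-color mapping can also be written as a small lookup table indexed by typeid. The toy typeids below are made up so this cell stands alone; the next cell performs the real mapping on the snapshot using boolean masks.

# +
import numpy as np

toy_typeids = np.array([0, 1, 2, 1, 0])          # pretend particle typeids
color_table = np.array([[0.95, 0.0, 0.0],        # color for type 0
                        [0.0, 0.95, 0.0],        # color for type 1
                        [0.0, 0.0, 0.95]])       # color for type 2
toy_colors = color_table[toy_typeids]            # one RGB row per particle, shape (5, 3)
print(toy_colors)
# -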
# + N = snap.particles.N particle_types = snap.particles.typeid colors = np.empty((N, 3)) # Color by typeid colors[particle_types == 0] = fresnel.color.linear([.95, 0, 0]) # A type colors[particle_types == 1] = fresnel.color.linear([0, .95, 0]) # B type colors[particle_types == 2] = fresnel.color.linear([0, 0, .95]) # C type # + scene = fresnel.Scene() # Spheres for every particle in the system geometry = fresnel.geometry.Sphere(scene, N=N, radius=0.2) geometry.position[:] = snap.particles.position geometry.material = fresnel.material.Material(roughness=0.9) geometry.outline_width = 0.05 # use color instead of material.color geometry.material.primitive_color_mix = 1.0 geometry.color[:] = fresnel.color.linear(colors) # - # create box in fresnel fresnel.geometry.Box(scene, box, box_radius=.07) # We will visualize bonds using cylinders, and color the bonds to match the particle types. To aid visualization, we will first remove any bonds that span the periodic boundary. # + all_bonds = np.stack( [ snap.particles.position[snap.bonds.group[:, 0]], snap.particles.position[snap.bonds.group[:, 1]], ], axis=1, ) # Use a distance cutoff (L/2) to filter bonds that span the periodic boundary bond_distances = np.linalg.norm(all_bonds[:,0,:]-all_bonds[:,1,:], axis=1) # This simple method will work for cubic cells L = box[0] bond_indices = np.where(bond_distances < L/2)[0] filtered_bonds = all_bonds[bond_indices, :, :] N_bonds = filtered_bonds.shape[0] bonds = fresnel.geometry.Cylinder(scene, N=N_bonds) bonds.material = fresnel.material.Material(roughness=0.5) bonds.outline_width = 0.05 # Color by bond typeid bond_ids = snap.bonds.typeid[bond_indices] bond_colors = np.empty((N_bonds, 3)) bond_colors[bond_ids == 0] = fresnel.color.linear([0, .95, 0]) # B-B Bonds bond_colors[bond_ids == 1] = fresnel.color.linear([0, 0, .95]) # C-C Bonds bonds.material.primitive_color_mix = 1.0 bonds.points[:] = filtered_bonds bonds.color[:] = np.stack( [fresnel.color.linear(bond_colors), fresnel.color.linear(bond_colors)], axis=1 ) bonds.radius[:] = [0.1] * N_bonds # - # Now that we have everything setup, we will render everything and apply some ring lighting conditions. scene.camera = fresnel.camera.Orthographic.fit(scene) scene.lights = fresnel.light.lightbox() fresnel.pathtrace(scene, light_samples=5) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Supervised Emphasized Denoising AutoEncoder # > A tutorial of supervised emphasized Denoising AutoEncoder (DAE) # # - toc: true # - badges: true # - comments: true # - categories: [notebook, kaggle] # This notebook was originally published [here](https://www.kaggle.com/jeongyoonlee/supervised-emphasized-denoising-autoencoder) at Kaggle. # # --- # # In this notebook, I will show how to build supervised emphasized Denoising AutoEncoder (DAE) with Keras. With pseudo label, we can train a classifier and the DAE together instead of training them separately as done in previous TPS competitions. # # If you're interested in how different components of DAE (denoising, stacked layers, emphasis, etc.) contribute to its performance, please check out [Vincent et al. (2010) "Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion", JMLR](https://www.jmlr.org/papers/volume11/vincent10a/vincent10a.pdf). 
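# As a quick stand-alone illustration (added here, not from the original notebook), the "emphasis" in the reconstruction objective simply up-weights the error on the entries corrupted by the masking noise relative to the untouched ones. The toy arrays and the weights `alpha` / `beta_w` below are made up for the sketch.

# +
import numpy as np

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])             # a tiny batch of clean inputs
mask = np.array([[True, False, False],
                 [False, True, False]])     # entries hit by the masking noise
x_noisy = np.where(mask, 0.0, x)            # zero-masking corruption
x_hat = 0.9 * x                             # stand-in for decoder(x_noisy)

alpha, beta_w = 2.0, 1.0                    # emphasis: corrupted entries count double
sq_err = (x - x_hat) ** 2
emphasized_loss = alpha * sq_err[mask].mean() + beta_w * sq_err[~mask].mean()
print(f'emphasized reconstruction loss: {emphasized_loss:.4f}')
# -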
# # This notebook is built on top of my previous notebook, [AutoEncoder + Pseudo Label + AutoLGB](https://www.kaggle.com/jeongyoonlee/autoencoder-pseudo-label-autolgb/). The first part (section 1, 2, 3 and 5) of the notebook is the same as the previous one. # # The contents of the notebook are as follows: # 1. **Package Installation**: Installing latest version of `Kaggler` using `Pip`. # 2. **Feature Engineering**: [code](https://www.kaggle.com/udbhavpangotra/tps-apr21-eda-model) by @udbhavpangotra # 3. **Feature Transformation**: Using `kaggler.preprocessing.LabelEncoder` to impute missing values and group rare categories automatically. # 4. **Stacked Emphasized Denoising AutoEncoder (DAE)**: Adding random noise mask and **emphasized** version of AutoEncoder, called "Embphasized Denoising AutoEncoder". # 5. **LightGBM Model Training**: 5-fold CV + Pseudo label from @hiro5299834's [data](https://www.kaggle.com/hiro5299834/tps-apr-2021-voting-pseudo-labeling) + `kaggler.model.AutoLGB`'s feature selection and hyperparameter optimization # 6. **Supervised DAE**: Training the classifier and DAE simultaneously. # # # Part 1: DAE + AutoLGB # ## Load Libraries and Install `Kaggler` # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _kg_hide-input=true _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # - import lightgbm as lgb from matplotlib import pyplot as plt import numpy as np import pandas as pd from pathlib import Path import tensorflow as tf from tensorflow import keras from tensorflow.keras import backend as K from tensorflow.keras.losses import mean_squared_error from tensorflow.keras.metrics import AUC from tensorflow.python.keras.utils import control_flow_util import seaborn as sns from sklearn.model_selection import StratifiedKFold from sklearn.preprocessing import StandardScaler from sklearn.metrics import roc_auc_score, confusion_matrix import warnings # + _kg_hide-output=true # !pip install kaggler # + import kaggler from kaggler.model import AutoLGB from kaggler.preprocessing import LabelEncoder print(f'Kaggler: {kaggler.__version__}') print(f'TensorFlow: {tf.__version__}') # - warnings.simplefilter('ignore') plt.style.use('fivethirtyeight') pd.set_option('max_columns', 100) # ## Feature Engineering (ref: [code](https://www.kaggle.com/udbhavpangotra/tps-apr21-eda-model) by @udbhavpangotra) # + data_dir = Path('/kaggle/input/tabular-playground-series-apr-2021/') trn_file = data_dir / 'train.csv' tst_file = data_dir / 'test.csv' sample_file = data_dir / 'sample_submission.csv' pseudo_label_file = '/kaggle/input/tps-apr-2021-label/voting_submission_from_5_best.csv' target_col = 
'Survived' id_col = 'PassengerId' feature_name = 'dae' algo_name = 'lgb' model_name = f'{algo_name}_{feature_name}' feature_file = f'{feature_name}.csv' predict_val_file = f'{model_name}.val.txt' predict_tst_file = f'{model_name}.tst.txt' submission_file = f'{model_name}.sub.csv' # - trn = pd.read_csv(trn_file, index_col=id_col) tst = pd.read_csv(tst_file, index_col=id_col) sub = pd.read_csv(sample_file, index_col=id_col) pseudo_label = pd.read_csv(pseudo_label_file, index_col=id_col) print(trn.shape, tst.shape, sub.shape, pseudo_label.shape) tst[target_col] = pseudo_label[target_col] n_trn = trn.shape[0] df = pd.concat([trn, tst], axis=0) df.head() # + # Feature engineering code from https://www.kaggle.com/udbhavpangotra/tps-apr21-eda-model df['Embarked'] = df['Embarked'].fillna('No') df['Cabin'] = df['Cabin'].fillna('_') df['CabinType'] = df['Cabin'].apply(lambda x:x[0]) df.Ticket = df.Ticket.map(lambda x:str(x).split()[0] if len(str(x).split()) > 1 else 'X') df['Age'].fillna(round(df['Age'].median()), inplace=True,) df['Age'] = df['Age'].apply(round).astype(int) df['Fare'].fillna(round(df['Fare'].median()), inplace=True,) df['FirstName'] = df['Name'].str.split(', ').str[0] df['SecondName'] = df['Name'].str.split(', ').str[1] df['n'] = 1 gb = df.groupby('FirstName') df_names = gb['n'].sum() df['SameFirstName'] = df['FirstName'].apply(lambda x:df_names[x]) gb = df.groupby('SecondName') df_names = gb['n'].sum() df['SameSecondName'] = df['SecondName'].apply(lambda x:df_names[x]) df['Sex'] = (df['Sex'] == 'male').astype(int) df['FamilySize'] = df.SibSp + df.Parch + 1 feature_cols = ['Pclass', 'Age','Embarked','Parch','SibSp','Fare','CabinType','Ticket','SameFirstName', 'SameSecondName', 'Sex', 'FamilySize', 'FirstName', 'SecondName'] cat_cols = ['Pclass','Embarked','CabinType','Ticket', 'FirstName', 'SecondName'] num_cols = [x for x in feature_cols if x not in cat_cols] print(len(feature_cols), len(cat_cols), len(num_cols)) # - # ## Feature Transformation Using `Kaggler` # + for col in ['SameFirstName', 'SameSecondName', 'Fare', 'FamilySize', 'Parch', 'SibSp']: df[col] = np.log2(1 + df[col]) scaler = StandardScaler() df[num_cols] = scaler.fit_transform(df[num_cols]) lbe = LabelEncoder(min_obs=50) df[cat_cols] = lbe.fit_transform(df[cat_cols]).astype(int) # - # ## Emphasized Denoising AutoEncoder (DAE) Using `Keras` # + encoding_dim = 128 masking_prob = .2 emphasis_ratio = 2. 
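# masking_prob is the fraction of the (concatenated) input values that get randomly
# zeroed out during training, and emphasis_ratio up-weights the reconstruction loss
# on those masked entries; the seed defined below is reused for both the noise mask
# and the cross-validation splits.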
seed = 42 def get_dae(encoding_dim, dropout=.2): num_dim = len(num_cols) num_input = keras.layers.Input((num_dim,), name='num_input') cat_inputs = [] cat_embs = [] emb_dims = 0 for col in cat_cols: cat_input = keras.layers.Input((1,), name=f'{col}_input') emb_dim = max(8, int(np.log2(1 + df[col].nunique()) * 4)) cat_emb = keras.layers.Embedding(input_dim=df[col].max() + 1, output_dim=emb_dim)(cat_input) cat_emb = keras.layers.Dropout(dropout)(cat_emb) cat_emb = keras.layers.Reshape((emb_dim,))(cat_emb) cat_inputs.append(cat_input) cat_embs.append(cat_emb) emb_dims += emb_dim merged_inputs = keras.layers.Concatenate()([num_input] + cat_embs) batch_size, merged_inputs_dim = merged_inputs.get_shape() training = K.learning_phase() def mask_inputs(): mask = tf.random.stateless_binomial(shape=(batch_size, merged_inputs_dim), seed=seed, counts=tf.ones((merged_inputs_dim,)), probs=[masking_prob] * merged_inputs_dim) return tf.where(mask == 1, tf.zeros_like(merged_inputs), merged_inputs) masked_inputs = control_flow_util.smart_cond(training, mask_inputs, lambda: merged_inputs) encoded = keras.layers.Dense(encoding_dim, activation='relu')(masked_inputs) encoded = keras.layers.Dropout(dropout)(encoded) encoded = keras.layers.Dense(encoding_dim, activation='relu')(encoded) encoded = keras.layers.Dropout(dropout)(encoded) encoded = keras.layers.Dense(encoding_dim, activation='relu')(encoded) decoded = keras.layers.Dense(encoding_dim, activation='relu')(encoded) decoded = keras.layers.Dropout(dropout)(decoded) decoded = keras.layers.Dense(encoding_dim, activation='relu')(decoded) decoded = keras.layers.Dropout(dropout)(decoded) decoded = keras.layers.Dense(num_dim + emb_dims, activation='linear')(decoded) encoder = keras.Model([num_input] + cat_inputs, encoded) ae = keras.Model([num_input] + cat_inputs, decoded, name='ae') reconstruction_loss = K.mean( # masked inputs mean_squared_error(merged_inputs, tf.where(merged_inputs != masked_inputs, decoded, merged_inputs)) / masking_prob * emphasis_ratio \ # original inputs + mean_squared_error(merged_inputs, tf.where(merged_inputs == masked_inputs, decoded, merged_inputs)) / (1. 
- masking_prob) ) ae.add_loss(reconstruction_loss) ae.compile(optimizer='adam') return ae, encoder # - ae, encoder = get_dae(encoding_dim) ae.summary() # + _kg_hide-output=true inputs = [df[num_cols].values] + [df[x].values for x in cat_cols] ae.fit(inputs, inputs, epochs=30, batch_size=16384, shuffle=True, validation_split=.2) # - encoding = encoder.predict(inputs) print(encoding.shape) np.savetxt(feature_file, encoding, fmt='%.6f', delimiter=',') # ## Model Training + Feature Selection + HPO Using `Kaggler`'s `AutoLGB` # + n_fold = 5 X = pd.concat((df[feature_cols], pd.DataFrame(encoding, columns=[f'enc_{x}' for x in range(encoding_dim)])), axis=1) y = df[target_col] X_tst = X.iloc[n_trn:] cv = StratifiedKFold(n_splits=n_fold, shuffle=True, random_state=seed) p = np.zeros_like(y, dtype=float) p_tst = np.zeros((tst.shape[0],)) for i, (i_trn, i_val) in enumerate(cv.split(X, y)): if i == 0: clf = AutoLGB(objective='binary', metric='auc', random_state=seed) clf.tune(X.iloc[i_trn], y[i_trn]) features = clf.features params = clf.params n_best = clf.n_best print(f'{n_best}') print(f'{params}') print(f'{features}') trn_data = lgb.Dataset(X.iloc[i_trn], y[i_trn]) val_data = lgb.Dataset(X.iloc[i_val], y[i_val]) clf = lgb.train(params, trn_data, n_best, val_data, verbose_eval=100) p[i_val] = clf.predict(X.iloc[i_val]) p_tst += clf.predict(X_tst) / n_fold print(f'CV #{i + 1} AUC: {roc_auc_score(y[i_val], p[i_val]):.6f}') np.savetxt(predict_val_file, p, fmt='%.6f') np.savetxt(predict_tst_file, p_tst, fmt='%.6f') # - print(f' CV AUC: {roc_auc_score(y, p):.6f}') print(f'Test AUC: {roc_auc_score(pseudo_label[target_col], p_tst)}') # ## Submission File for DAE + AutoLGB n_pos = int(0.34911 * tst.shape[0]) th = sorted(p_tst, reverse=True)[n_pos] print(th) confusion_matrix(pseudo_label[target_col], (p_tst > th).astype(int)) sub[target_col] = (p_tst > th).astype(int) sub.to_csv(submission_file) # # Part 2: Supervised DAE # + feature_name = 'dae' algo_name = 'sdae' model_name = f'{algo_name}_{feature_name}' feature_file = f'{feature_name}.csv' predict_val_file = f'{model_name}.val.txt' predict_tst_file = f'{model_name}.tst.txt' submission_file = f'{model_name}.sub.csv' # - # ## Supervised DAE with `Keras` # We are adding a classifier **head** to the DAE network. It requires the additional loss and metric for the classifier in addition to the `reconstruction_loss` for DAE. 
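# Before the full `get_sdae` definition below, here is a stripped-down sketch (ours; the layer sizes and names such as `toy_in` and `recon` are made up) of the pattern it uses: the unsupervised reconstruction term is attached to the graph with `add_loss`, while the classifier head gets its own named loss and metric in `compile`.

# +
from tensorflow import keras
from tensorflow.keras import backend as K

toy_in = keras.Input((8,), name='toy_input')
h = keras.layers.Dense(16, activation='relu')(toy_in)
recon = keras.layers.Dense(8, activation='linear', name='recon')(h)
clf_out = keras.layers.Dense(1, activation='sigmoid', name='clf')(h)

toy_model = keras.Model(toy_in, [recon, clf_out])
# Unsupervised term: attached directly to the graph, so it needs no target at fit() time.
toy_model.add_loss(K.mean(keras.losses.mean_squared_error(toy_in, recon)))
# Supervised term: a named loss and metric for the classifier head only.
toy_model.compile(optimizer='adam',
                  loss={'clf': 'binary_crossentropy'},
                  metrics={'clf': [keras.metrics.AUC()]})
toy_model.summary()
# -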
def get_sdae(encoding_dim, dropout=.2): num_dim = len(num_cols) num_input = keras.layers.Input((num_dim,), name='num_input') cat_inputs = [] cat_embs = [] emb_dims = 0 for col in cat_cols: cat_input = keras.layers.Input((1,), name=f'{col}_input') emb_dim = max(8, int(np.log2(1 + df[col].nunique()) * 4)) cat_emb = keras.layers.Embedding(input_dim=df[col].max() + 1, output_dim=emb_dim)(cat_input) cat_emb = keras.layers.Dropout(dropout)(cat_emb) cat_emb = keras.layers.Reshape((emb_dim,))(cat_emb) cat_inputs.append(cat_input) cat_embs.append(cat_emb) emb_dims += emb_dim inputs = [num_input] + cat_inputs merged_inputs = keras.layers.Concatenate()([num_input] + cat_embs) # masking batch_size, merged_inputs_dim = merged_inputs.get_shape() training = K.learning_phase() def mask_inputs(): mask = tf.random.stateless_binomial(shape=(batch_size, merged_inputs_dim), seed=seed, counts=tf.ones((merged_inputs_dim,)), probs=[masking_prob] * merged_inputs_dim) return tf.where(mask == 1, tf.zeros_like(merged_inputs), merged_inputs) masked_inputs = control_flow_util.smart_cond(training, mask_inputs, lambda: merged_inputs) # encoder encoded_1 = keras.layers.Dense(encoding_dim, activation='relu')(masked_inputs) encoded_1 = keras.layers.Dropout(dropout)(encoded_1) encoded_2 = keras.layers.Dense(encoding_dim, activation='relu')(encoded_1) encoded_2 = keras.layers.Dropout(dropout)(encoded_2) encoded_3 = keras.layers.Dense(encoding_dim, activation='relu')(encoded_2) encoded_concat = keras.layers.Concatenate()([encoded_1, encoded_2, encoded_3]) encoder = keras.Model(inputs, encoded_concat) decoded = keras.layers.Dense(encoding_dim, activation='relu')(encoded_3) decoded = keras.layers.Dropout(dropout)(decoded) decoded = keras.layers.Dense(encoding_dim, activation='relu')(decoded) decoded = keras.layers.Dropout(dropout)(decoded) decoded = keras.layers.Dense(num_dim + emb_dims, activation='linear')(decoded) ae = keras.Model([num_input] + cat_inputs, decoded) # classifier clf_encoded_input = keras.Input((encoding_dim * 3,)) x = keras.layers.Dense(encoding_dim, 'relu')(clf_encoded_input) x = keras.layers.Dropout(dropout)(x) clf_output = keras.layers.Dense(1, activation='sigmoid')(x) clf = keras.Model(inputs=clf_encoded_input, outputs=clf_output, name='clf') outputs = [ae(inputs), clf(encoder(inputs))] model = keras.Model(inputs, outputs, name='sdae') reconstruction_loss = K.mean( # masked inputs mean_squared_error(merged_inputs, tf.where(merged_inputs != masked_inputs, decoded, merged_inputs)) / masking_prob * emphasis_ratio \ # original inputs + mean_squared_error(merged_inputs, tf.where(merged_inputs == masked_inputs, decoded, merged_inputs)) / (1. 
- masking_prob) ) model.add_loss(reconstruction_loss) model.compile(optimizer='adam', loss={'clf': 'binary_crossentropy'}, metrics={'clf': [AUC()]}) return model, encoder sdae, encoder = get_sdae(encoding_dim) sdae.summary() # ## Model Training: Supervised DAE with 5-CV # + _kg_hide-output=true n_fold = 5 X = df[feature_cols] y = df[target_col] X_tst = X.iloc[n_trn:] inputs_tst = [X_tst[num_cols].values] + [X_tst[x].values for x in cat_cols] cv = StratifiedKFold(n_splits=n_fold, shuffle=True, random_state=seed) p = np.zeros_like(y, dtype=float) p_tst = np.zeros((tst.shape[0],)) for i, (i_trn, i_val) in enumerate(cv.split(X, y)): X_trn = X.iloc[i_trn] X_val = X.iloc[i_val] inputs_trn = [X[num_cols].values[i_trn]] + [X[x].values[i_trn] for x in cat_cols] inputs_val = [X[num_cols].values[i_val]] + [X[x].values[i_val] for x in cat_cols] sdae, _ = get_sdae(encoding_dim) sdae.fit(inputs_trn, y[i_trn], epochs=20, batch_size=16384, shuffle=True, validation_data=(inputs_val, y[i_val])) p[i_val] = sdae.predict(inputs_val)[1].flatten() p_tst += sdae.predict(inputs_tst)[1].flatten() / n_fold print(f'CV #{i + 1} AUC: {roc_auc_score(y[i_val], p[i_val]):.6f}') np.savetxt(predict_val_file, p, fmt='%.6f') np.savetxt(predict_tst_file, p_tst, fmt='%.6f') # - print(f' CV AUC: {roc_auc_score(y, p):.6f}') print(f'Test AUC: {roc_auc_score(pseudo_label[target_col], p_tst)}') n_pos = int(0.34911 * tst.shape[0]) th = sorted(p_tst, reverse=True)[n_pos] print(th) confusion_matrix(pseudo_label[target_col], (p_tst > th).astype(int)) sub[target_col] = (p_tst > th).astype(int) sub.to_csv(submission_file) # # Part 3: Simple Ensemble # + submission_file = 'simple_ensemble_dae.csv' model_names = ['lgb_dae', 'sdae_dae'] predict_val_files = [f'{x}.val.txt' for x in model_names] predict_tst_files = [f'{x}.tst.txt' for x in model_names] dict_val_predict = {} dict_tst_predict = {} for name, val_file, tst_file in zip(model_name, predict_val_files, predict_tst_files): dict_val_predict[name] = np.loadtxt(val_file) dict_tst_predict[name] = np.loadtxt(tst_file) p = pd.DataFrame(dict_val_predict).mean(axis=1).values p_tst = pd.DataFrame(dict_tst_predict).mean(axis=1).values print(f' CV AUC: {roc_auc_score(y, p):.6f}') print(f'Test AUC: {roc_auc_score(pseudo_label[target_col], p_tst)}') # - n_pos = int(0.34911 * tst.shape[0]) th = sorted(p_tst, reverse=True)[n_pos] print(th) confusion_matrix(pseudo_label[target_col], (p_tst > th).astype(int)) sub[target_col] = (p_tst > th).astype(int) sub.to_csv(submission_file) # If you find it helpful, please upvote the notebook and give a star to [Kaggler](http://github.com/jeongyoonlee/Kaggler). If you have questions and/or feature requests for Kaggler, please post them as Issue in the Kaggler GitHub repository. # # Happy Kaggling! 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pyanitools as pyt import pyNeuroChem as pyc import hdnntools as hdt import numpy as np from ase_interface import ANIENS from ase_interface import ensemblemolecule from ase.atoms import Atoms import os import matplotlib.pyplot as plt # + h5test = '/nh/nest/u/jsmith/Research/tin_research/metal_data/U12.0_test.h5' ens_path = '/nh/nest/u/jsmith/Research/tin_research/metal_data/models/gutz_lc_models/' cns = '/train0/rX-2.8R_32-2.0A_a8-8.params' sae = '/train0/sae.dat' nnf = '/train' Nn = 8 # + model_paths = sorted(os.listdir(ens_path)) comp_all = {} for model_path in model_paths: wkpath = ens_path+model_path print(wkpath) aens = ensemblemolecule(wkpath + cns,wkpath + sae,wkpath + nnf,Nn) comp = {'E':[[],[]], 'F':[[],[]],} adl = pyt.anidataloader(h5test) for data in adl: print(data['path']) X = data['coordinates'] S = data['species'] E = data['energies'] F = data['forces'] C = data['cell'] for x,e,f,c in zip(X,E,F,C): comp['E'][0].append(e) comp['F'][0].append(f.flatten()) aens.set_pbc(True,True,True) celi = (np.linalg.inv(C)).astype(np.float32) aens.set_cell((C).astype(np.float32), celi) aens.set_molecule(X=x,S=S) ea,fa,es,fs = aens.compute_mean_props() comp['E'][1].append(ea) comp['F'][1].append(fa.flatten()) comp['E'][0] = np.array(comp['E'][0]) comp['E'][1] = np.array(comp['E'][1]) comp['F'][0] = np.concatenate(comp['F'][0]) comp['F'][1] = np.concatenate(comp['F'][1]) comp_all[model_path] = comp # + plot_data = {'Emae':[], 'Fmae':[], 'perc':[],} for key in comp_all.keys(): percent = float(key.split('_')[1][:-1]) Emae = hdt.calculaterootmeansqrerror(comp_all[key]['E'][0],comp_all[key]['E'][1]) Fmae = hdt.calculaterootmeansqrerror(comp_all[key]['F'][0],comp_all[key]['F'][1]) plot_data['perc'].append(percent) plot_data['Emae'].append(Emae) plot_data['Fmae'].append(Fmae) print(percent,Emae,Fmae) # + import matplotlib.ticker as ticker fig, axs = plt.subplots(2, 1, figsize=(6, 4), dpi=200, sharey=False) axs[0].plot(plot_data['perc'], plot_data['Emae'], '-o') axs[1].plot(plot_data['perc'], plot_data['Fmae'], '-o') # remove the x and y ticks for ax in axs: #ax.set_yscale('log') ax.set_xscale('log') ax.xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.0f')) #ax.set_yticks([]) axs[0].set_title("Energy", y=0.8) axs[1].set_title("Force", y=0.8) axs[1].set_xlabel("Percent of data") axs[0].set_ylabel("Energy RMSE") axs[1].set_ylabel("Force RMSE") #print(fig.get_yticks()) #axs[0].yaxis.set_ticks_position('none') #axs[1].yaxis.set_ticks_position('none') #axs[0].yaxis.set_ticks(np.arange(min(plot_data['Emae']), max(plot_data['Emae']), 0.02)) #axs[0].yaxis.set_major_formatter(ticker.FormatStrFormatter('%0.1e')) #axs[0].xaxis.set_major_formatter(ticker.FormatStrFormatter('%0.0f')) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Import the required libraries: Pandas, Numpy, Matplotlib and Seaborn import pandas as pd import numpy as np import seaborn as sns # For mathematical calculations import matplotlib.pyplot as plt # For plotting graphs from datetime import datetime # To access datetime from pandas import Series # To work on series # %matplotlib inline import warnings # To ignore the 
warnings warnings.filterwarnings("ignore") # + # Let us load and read the data from the csv file df=pd.read_csv("StudentsPerformance.csv") df.columns # - df.sample(2) df.info() df["Total_Marks"]=df["math score"]+df["reading score"]+df["writing score"] df.head(2) df["Percentage"]=df["Total_Marks"]*1/3 df.head(2) # + # Top 10 Best Performed Students Overall top_10 =df.sort_values(["Percentage"], ascending=[False])[:10] top_10 # - # #### Seaborn Categorical Plots in Python # To demonstrate the various categorical plots used in Seaborn, we will use the in-built dataset present in the seaborn library which is the ‘tips’ dataset. # # barplot # # countplot # # boxplot # # violinplot # # striplot # # swarmplot # # # + # Load data using Seaborn Library sf=sns.load_dataset('tips') sf.head() # - # ### 1. Bar Plot # + # Bar plot visualizing the top 10 performers sns.barplot(x='gender',y='Percentage',data=top_10) # - sns.barplot(x='gender',y='Percentage',hue="parental level of education",data=top_10, palette="Set2") # Here parameters x, y refers to the name of the variables in the dataset provided in parameter ‘df’. # ### 2. Count Plot # # This is essentially the same as barplot except the estimator is explicitly counting the number of occurrences. Which is why we only pass the x value. # + # Count plot # counts the number of occurences of the gender column in the top 10 best performers sns.countplot(x='gender',data=top_10) # - sns.countplot(x='gender', hue="Percentage", data=top_10) # ### 3. Box Plot # # A box plot (or box-and-whisker plot) shows the distribution of quantitative data in a way that facilitates comparisons between variables or across levels of a categorical variable. The box shows the quartiles of the dataset while the whiskers extend to show the rest of the distribution, except for points that are determined to be “outliers” using a method that is a function of the inter-quartile range. # + # Categorical Variable vs Numerical Continuous Variable # Box plot Visualization # Comparing the student performance against the gender column sns.boxplot(x='gender',y='Percentage',data=top_10, palette='rainbow') # - # It’s also possible to add a nested categorical variable with the hue parameter. # + # Categorical Variable vs Categorical Variable Vs Numerical Continuous Variable # Adding a nested categorical variable with the Hue Parameter # It’s also possible to add a nested categorical variable with the hue parameter. sns.boxplot(x="gender",y="Percentage",hue="parental level of education",data=top_10, palette="coolwarm") # - # #### Violin plot # # A violin plot plays a similar role as a box and whisker plot. It shows the distribution of quantitative data across several levels of one (or more) categorical variables such that those distributions can be compared. Unlike a box plot, in which all of the plot components correspond to actual datapoints, the violin plot features a kernel density estimation of the underlying distribution. sns.violinplot(x='gender',y='Percentage',data=top_10, palette='rainbow') # hue can also be applied to violin plot. # # Output gives: sns.violinplot(x="gender",y="Percentage",hue="parental level of education",data=top_10, palette='Set1') # #### Strip plot AND swarn plot # # The stripplot will draw a scatterplot where one variable is categorical. A strip plot can be drawn on its own, but it is also a good complement to a box or violin plot in cases where you want to show all observations along with some representation of the underlying distribution. 
# # The swarmplot is similar to stripplot(), but the points are adjusted (only along the categorical axis) so that they don’t overlap. This gives a better representation of the distribution of values, although it does not scale as well to large numbers of observations (both in terms of the ability to show all the points and in terms of the computation needed to arrange them). sns.stripplot(x="gender", y="Percentage", data=top_10) sns.stripplot(x="gender",y="Percentage",data=top_10,jitter=True,hue='parental level of education',palette='Set1') # #### Command for swarm plot sns.swarmplot(x="gender", y="Percentage", data=top_10) sns.swarmplot(x="gender",y="Percentage",hue='parental level of education',data=top_10, palette="Set1", split=True) # ### DONE # + # Box plot for the entire whole dataframe with orient='h' sns.boxplot(data=df,palette='coolwarm',orient='h') # + # Categorical Variable vs Categorical Variable Vs Numerical Continuous Variable # Adding a nested categorical variable with the Hue Parameter # It’s also possible to add a nested categorical variable with the hue parameter. sns.boxplot(x="gender",y="Percentage",hue="parental level of education",data=df, palette="coolwarm") # - sns.violinplot(x="gender",y="Percentage",hue="parental level of education",data=df, palette='Set1') sns.stripplot(x="gender",y="Percentage",data=df,jitter=True,hue='parental level of education',palette='Set1') # + # Categorical Variable vs Numerical Continuous Variable # Box plot Visualization # Comparing the total performance of all the students against the gender column sns.boxplot(x='gender',y='Percentage',data=df, palette='rainbow') # - sns.stripplot(x="gender", y="Percentage", data=df) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + deletable=true editable=true slideshow={"slide_type": "skip"} language="html" # # + deletable=true editable=true slideshow={"slide_type": "skip"} from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # ### Import Modules # + deletable=true editable=true slideshow={"slide_type": "fragment"} import pandas as pd from matplotlib import pyplot as plt # %matplotlib inline import seaborn as sns # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # ### The dataset # + deletable=true editable=true slideshow={"slide_type": "fragment"} df = pd.read_csv('data_simpsons_episodes.csv') df.head() # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # ### Scatterplots # + deletable=true editable=true slideshow={"slide_type": "fragment"} sns.stripplot(x="season", y="us_viewers_in_millions", data=df); # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # ### Swarmplot # + deletable=true editable=true slideshow={"slide_type": "fragment"} sns.swarmplot(x="season", y="us_viewers_in_millions", data=df); # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # ### Boxplot # + deletable=true editable=true slideshow={"slide_type": "fragment"} sns.boxplot(x="season", y="us_viewers_in_millions", data=df); # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # ### Violinplot # + deletable=true editable=true slideshow={"slide_type": "fragment"} sns.violinplot(x="season", 
y="us_viewers_in_millions", data=df); # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # ### Barplot # + deletable=true editable=true slideshow={"slide_type": "fragment"} sns.barplot(x="season", y="us_viewers_in_millions", data=df); # + deletable=true editable=true slideshow={"slide_type": "slide"} sns.countplot(x="season", data=df); # + [markdown] deletable=true editable=true slideshow={"slide_type": "slide"} # ### Wide form plots # + deletable=true editable=true slideshow={"slide_type": "slide"} df = pd.read_csv('data-alcohol.csv') df.head() # + slideshow={"slide_type": "slide"} sns.boxplot(data=df, orient="h"); # + slideshow={"slide_type": "skip"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # So in explore_SLFV_GP.ipynb, I tried a bunch of different things on a VERY big lightcurve. But I think I'm getting ahead of myself, so I'm gonna take a step back here... # + import numpy as np import pandas as pd from TESStools import * import os import warnings from multiprocessing import Pool, cpu_count from scipy.stats import multivariate_normal from tqdm.notebook import tqdm import h5py as h5 import pymc3 as pm import pymc3_ext as pmx import aesara_theano_fallback.tensor as tt from celerite2.theano import terms, GaussianProcess from pymc3_ext.utils import eval_in_model import arviz as az import exoplanet print(f"exoplanet.__version__ = '{exoplanet.__version__}'") # - from aesara_theano_fallback import __version__ as tt_version from celerite2 import __version__ as c2_version pm.__version__, pmx.__version__, tt_version, c2_version # # Ok here is our example data we're going to be working with. It's almost two years of TESS observations, with a year in between them cool_sgs = pd.read_csv('sample.csv',index_col=0) example = cool_sgs[cool_sgs['CommonName']=='HD 269953'] tic = example.index[0] lc, lc_smooth = lc_extract(get_lc_from_id(tic), smooth=128) time, flux, err = lc['Time'].values, lc['Flux'].values, lc['Err'].values # # Let's parse the lightcurve into TESS Sectors. 
# + orbit_times = pd.read_csv('../data/orbit_times_20210629_1340.csv',skiprows=5) sector_group = orbit_times.groupby('Sector') sector_starts = sector_group['Start TJD'].min() sector_ends = sector_group['End TJD'].max() sectors = pd.DataFrame({'Sector':sector_starts.index,'Start TJD':sector_starts.values,'End TJD':sector_ends.values}) fig = plt.figure(dpi=300) plt.scatter(time, flux, s=1, c='k') for i,row in sectors.iterrows(): plt.axvline(x=row['Start TJD'], c='C0') plt.axvline(x=row['End TJD'], c='C3') plt.text(0.5*(row['Start TJD']+row['End TJD']),1.007,int(row['Sector'])) # - sector_lcs = [] for i,row in sectors.iterrows(): sec_lc = lc[(lc['Time']>=row['Start TJD'])&(lc['Time']<=row['End TJD'])] if len(sec_lc) > 0: sec_lc.insert(3,'Sector',np.tile(int(row['Sector']),len(sec_lc))) sector_lcs.append(sec_lc) lc_new = pd.concat(sector_lcs) lc_new all_sectors = np.unique(lc_new['Sector']) this_sector = lc_new[lc_new['Sector'] == all_sectors[0]] this_sector # + this_time, this_flux, this_err = this_sector['Time'].values, this_sector['Flux'].values, this_sector['Err'].values pseudo_NF = 0.5 / (np.mean(np.diff(this_time))) rayleigh = 1.0 / (this_time.max() - this_time.min()) ls = LombScargle(this_time,this_flux,dy=this_err,) freq,power=ls.autopower(normalization='psd',maximum_frequency=pseudo_NF) power /= len(this_time) fig, ax = plt.subplots(2, 1, dpi=300) ax[0].scatter(this_time, this_flux,s=1,c='k') ax[0].plot(lc_smooth['Time'],lc_smooth['Flux'],c='C2') ax[0].set(xlim=(this_time.min(),this_time.max())) ax[1].loglog(freq, power) # - # # Let's fit the GP to this! # Here's a cute function that does that, but the mean can be any number of sinusoids! def pm_fit_gp_sin(time, flux, err, fs=None, amps=None, phases=None, model=None, return_var=False, thin=50): """ Use PyMC3 to do a maximum likelihood fit for a GP + multiple periodic signals Inputs ------ time : array-like Times of observations flux : array-like Observed fluxes err : array-like Observational uncertainties fs : array-like, elements are PyMC3 distributions Array with frequencies to fit, default None (i.e., only the GP is fit) amps : array-like, elements are PyMC3 distributions Array with amplitudes to fit, default None (i.e., only the GP is fit) phases : array-like, elements are PyMC3 distributions Array with phases to fit, default None (i.e., only the GP is fit) model : `pymc3.model.Model` PyMC3 Model object, will fail unless given return_var : bool, default True If True, returns the variance of the GP thin : integer, default 50 Calculate the variance of the GP every `thin` points. 
Returns ------- map_soln : dict Contains best-fit parameters and the gp predictions logp : float The log-likelihood of the model bic : float The Bayesian Information Criterion, -2 ln P + m ln N var : float If `return_var` is True, returns the variance of the GP """ assert model is not None, "Must provide a PyMC3 model object" #Step 1: Mean model mean_flux = pm.Normal("mean_flux", mu = 1.0, sigma=np.std(flux)) if fs is not None: #Making a callable for celerite mean_model = tt.sum([a * tt.sin(2.0*np.pi*f*time + phi) for a,f,phi in zip(amps,fs,phases)],axis=0) + mean_flux #And add it to the model pm.Deterministic("mean", mean_model) else: mean_model = mean_flux mean = pm.Deterministic("mean", mean_flux) #Step 2: Compute Lomb-Scargle Periodogram pseudo_NF = 0.5 / (np.mean(np.diff(time))) rayleigh = 1.0 / (time.max() - time.min()) ls = LombScargle(time,flux) freq,power=ls.autopower(normalization='psd',maximum_frequency=pseudo_NF) power /= len(time) #Step 3: Do the basic peridogram fit to guess nu_char and alpha_0 popt, pcov, resid = fit_red_noise(freq, power) a0, tau_char, gamma, aw = popt nu_char = 1.0/(2*np.pi*tau_char) # A jitter term describing excess white noise (analogous to C_w) log_jitter = pm.Uniform("log_jitter", lower=np.log(aw)-15, upper=np.log(aw)+15, testval=np.log(np.median(np.abs(np.diff(flux))))) # A term to describe the SLF variability # sigma is the standard deviation of the GP, tau roughly corresponds to the #breakoff in the power spectrum. rho and tau are related by a factor of #pi/Q (the quality factor) #guesses for our parameters omega_0_guess = 2*np.pi*nu_char Q_guess = 1/np.sqrt(2) sigma_guess = a0 * np.sqrt(omega_0_guess*Q_guess) * np.power(np.pi/2.0, 0.25) #sigma logsigma = pm.Uniform("log_sigma", lower=np.log(sigma_guess)-10, upper=np.log(sigma_guess)+10) sigma = pm.Deterministic("sigma",tt.exp(logsigma)) #rho (characteristic timescale) logrho = pm.Uniform("log_rho", lower=np.log(0.01/nu_char), upper=np.log(100.0/nu_char)) rho = pm.Deterministic("rho", tt.exp(logrho)) nuchar = pm.Deterministic("nu_char", 1.0 / rho) #tau (damping timescale) logtau = pm.Uniform("log_tau", lower=np.log(0.01*2.0*Q_guess/omega_0_guess),upper=np.log(100.0*2.0*Q_guess/omega_0_guess)) tau = pm.Deterministic("tau", tt.exp(logtau)) nudamp = pm.Deterministic("nu_damp", 1.0 / tau) #We also want to track Q, as it's a good estimate of how stochastic the #process is. Q = pm.Deterministic("Q", np.pi*tau/rho) kernel = terms.SHOTerm(sigma=sigma, rho=rho, tau=tau) gp = GaussianProcess( kernel, t=time, diag=err ** 2.0 + tt.exp(2 * log_jitter), quiet=True, ) # Compute the Gaussian Process likelihood and add it into the # the PyMC3 model as a "potential" gp.marginal("gp", observed=flux-mean_model) # Compute the mean model prediction for plotting purposes pm.Deterministic("pred", gp.predict(flux-mean_model)) # Optimize to find the maximum a posteriori parameters map_soln = pmx.optimize() logp = model.logp(map_soln) # parameters are tau, sigma, Q/rho, mean, jitter, plus 3 per frequency (rho is fixed) if fs is not None: n_par = 5.0 + (3.0 * len(fs)) else: n_par = 5.0 bic = -2.0*logp + n_par * np.log(len(time)) #compute variance as well... 
if return_var: eval_in_model(gp.compute(time[::thin],yerr=err[::thin]), map_soln) mu, var = eval_in_model(gp.predict(flux[::thin], t=time[::thin], return_var=True), map_soln) return map_soln, logp, bic, var return map_soln, logp, bic with pm.Model() as model: map_soln, logp, bic = pm_fit_gp_sin(this_time, this_flux, this_err, model=model) fig = plt.figure(dpi=300) plt.scatter(this_time, this_flux, c='k', s=1) plt.plot(this_time, map_soln['pred']+map_soln['mean_flux']) plt.scatter(this_time, resid_flux,c='k',s=1) # + resid_flux = this_flux - (map_soln['pred']+map_soln['mean_flux']) ls_resid = LombScargle(this_time,resid_flux,dy=this_err,) freq_r,power_r=ls_resid.autopower(normalization='psd',maximum_frequency=pseudo_NF) power_r /= len(this_time) fig, ax = plt.subplots(2, 1, dpi=300) ax[0].scatter(this_time, resid_flux,s=1,c='k') ax[0].set(xlim=(this_time.min(),this_time.max())) ax[1].loglog(freq_r, power_r) # - # # Let's try this with two sectors of data! two_sec = lc_new[lc_new['Sector'] < 3] two_sec time, flux, err = lc[['Time','Flux','Err']].values.T time def gp_multisector(lc, fs=None, amps=None, phases=None, model=None, return_var=False, thin=50): """ Use PyMC3 to do a maximum likelihood fit for a GP + multiple periodic signals, but now with a twist: handles multiple sectors! Inputs ------ ls : `pandas.DataFrame` Dataframe containing the lightcurve. Must have Time, Flux, Err, and Sector as columns. fs : array-like, elements are PyMC3 distributions Array with frequencies to fit, default None (i.e., only the GP is fit) amps : array-like, elements are PyMC3 distributions Array with amplitudes to fit, default None (i.e., only the GP is fit) phases : array-like, elements are PyMC3 distributions Array with phases to fit, default None (i.e., only the GP is fit) model : `pymc3.model.Model` PyMC3 Model object, will fail unless given return_var : bool, default True If True, returns the variance of the GP thin : integer, default 50 Calculate the variance of the GP every `thin` points. 
Returns ------- map_soln : dict Contains best-fit parameters and the gp predictions logp : float The log-likelihood of the model bic : float The Bayesian Information Criterion, -2 ln P + m ln N var : float If `return_var` is True, returns the variance of the GP """ assert model is not None, "Must provide a PyMC3 model object" time, flux, err, sectors = lc[['Time','Flux','Err','Sector']].values.T #Step 1: Mean model mean_flux = pm.Normal("mean_flux", mu = 1.0, sigma=np.std(flux)) if fs is not None: #Making a callable for celerite mean_model = tt.sum([a * tt.sin(2.0*np.pi*f*time + phi) for a,f,phi in zip(amps,fs,phases)],axis=0) + mean_flux #And add it to the model pm.Deterministic("mean", mean_model) else: mean_model = mean_flux mean = pm.Deterministic("mean", mean_flux) #Step 2: Compute Lomb-Scargle Periodogram pseudo_NF = 0.5 / (np.mean(np.diff(time))) rayleigh = 1.0 / (time.max() - time.min()) ls = LombScargle(time,flux) freq,power=ls.autopower(normalization='psd',maximum_frequency=pseudo_NF) power /= len(time) #Step 3: Do the basic peridogram fit to guess nu_char and alpha_0 popt, pcov, resid = fit_red_noise(freq, power) a0, tau_char, gamma, aw = popt nu_char = 1.0/(2*np.pi*tau_char) # A jitter term per sector describing excess white noise (analogous to C_w) jitters = [pm.Uniform(f"log_jitter_S{int(s)}", lower=np.log(aw)-15, upper=np.log(aw)+15, testval=np.log(np.median(np.abs(np.diff(flux))))) for s in np.unique(sectors)] # A term to describe the SLF variability, shared across sectors #guesses for our parameters omega_0_guess = 2*np.pi*nu_char Q_guess = 1/np.sqrt(2) sigma_guess = a0 * np.sqrt(omega_0_guess*Q_guess) * np.power(np.pi/2.0, 0.25) #sigma logsigma = pm.Uniform("log_sigma", lower=np.log(sigma_guess)-10, upper=np.log(sigma_guess)+10) sigma = pm.Deterministic("sigma",tt.exp(logsigma)) #rho (characteristic timescale) logrho = pm.Uniform("log_rho", lower=np.log(0.01/nu_char), upper=np.log(100.0/nu_char)) rho = pm.Deterministic("rho", tt.exp(logrho)) nuchar = pm.Deterministic("nu_char", 1.0 / rho) #tau (damping timescale) logtau = pm.Uniform("log_tau", lower=np.log(0.01*2.0*Q_guess/omega_0_guess),upper=np.log(100.0*2.0*Q_guess/omega_0_guess)) tau = pm.Deterministic("tau", tt.exp(logtau)) nudamp = pm.Deterministic("nu_damp", 1.0 / tau) #We also want to track Q, as it's a good estimate of how stochastic the #process is. Q = pm.Deterministic("Q", np.pi*tau/rho) kernel = terms.SHOTerm(sigma=sigma, rho=rho, tau=tau) #A number of GP objects with shared hyperparameters gps = [GaussianProcess( kernel, t=time[sectors==s], diag=err[sectors==s] ** 2.0 + tt.exp(2 * j), quiet=True,) for s,j in zip(np.unique(sectors),jitters) ] for s,gp in zip(np.unique(sectors),gps): # Compute the Gaussian Process likelihood and add it into the # the PyMC3 model as a "potential" gp.marginal(f"gp_S{int(s)}", observed=(flux-mean_model)[sectors==s]) # Compute the mean model prediction for plotting purposes pm.Deterministic(f"pred_S{int(s)}", gp.predict((flux-mean_model)[sectors==s])) # Optimize to find the maximum a posteriori parameters map_soln = pmx.optimize() logp = model.logp(map_soln) # parameters are logtau, logsigma, logrho, mean, jitter*n_sectors, plus 3 per frequency (rho is fixed) base_par = 4 + len(np.unique(sectors)) if fs is not None: n_par = base_par + (3.0 * len(fs)) else: n_par = base_par bic = -2.0*logp + n_par * np.log(len(time)) #compute variance as well... 
if return_var: eval_in_model(gp.compute(time[::thin],yerr=err[::thin]), map_soln) mu, var = eval_in_model(gp.predict(flux[::thin], t=time[::thin], return_var=True), map_soln) return map_soln, logp, bic, var return map_soln, logp, bic with pm.Model() as model_m: map_soln, logp, bic = gp_multisector(two_sec, model=model_m) with pm.Model() as model_all: map_soln, logp, bic = gp_multisector(lc_new, model=model_all) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Random Forest Regression # Importing the libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt # - # Loading the dataset data = pd.read_csv("boston.csv") data.info() # Getting information about our dataset data.head() # Displays the first 5 elements of our dataset # Replacing 0 values in columns 'ZN' and 'CHAS' with NaN data.ZN.replace(0, np.nan, inplace = True) data.CHAS.replace(0, np.nan, inplace = True) data.info() # Checking for null values data.isnull().sum()/ len(data) * 100 # Checking the % of missing values in our dataset # As we can see below both “ZN” and “CHAS” are missing more than # 70% data so we will remove both these features. data.drop(['Unnamed: 0', 'ZN', 'CHAS'], axis = 1, inplace = True) data.info() data.isnull().sum()/ len(data) * 100 # Again calculating percentage of missing values # To get basic stats about our data like mean, median, count etc. # We use .describe() method as shown below: data.describe() # Splitting our data into dependent and independent variables X = data.iloc[:, :-1].values y = data.iloc[:, -1].values # + # Splitting our data into Test set and Training set from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0) y_train = y_train.reshape(-1,1) y_test = y_test.reshape(-1, 1) # + # Feature Scaling from sklearn.preprocessing import StandardScaler sc_X = StandardScaler() sc_y = StandardScaler() X_train = sc_X.fit_transform(X_train) X_test = sc_X.fit_transform(X_test) y_test = sc_y.fit_transform(y_test) y_train = sc_y.fit_transform(y_train) # - # Importing the RandomForestRegressor class from sklearn.ensemble library from sklearn.ensemble import RandomForestRegressor model = RandomForestRegressor(n_estimators = 500, criterion = 'mse', random_state = 0) model.fit(X_train, y_train) # Making predictions with our model y_pred = model.predict(X_test) y_pred = y_pred.reshape(-1, 1) y_pred = sc_y.transform(y_pred) # Calculating the performance metrics from sklearn import metrics print("MAE", metrics.mean_absolute_error(y_test, y_pred)) print("MSE", metrics.mean_squared_error(y_test, y_pred)) print("RMSE", np.sqrt(metrics.mean_squared_error(y_test, y_pred))) print("Score:", model.score(X_test, y_test)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="2BxBk84bp4wC" colab_type="text" # # Crossentropy method # # In this notebook, we will imprement crossentropy method to solve the ['Taxi' problem in Open AI Gym](https://gym.openai.com/envs/Taxi-v2/) # # First, we have to make sure we are connected to the right **python 3 reutime and using the GPU**. 
(Click the 'Runtime' tab and choose 'Change runtime type'), then import the required package (all are already installed in Google Colab) # + id="w4ucVW4Qp4wD" colab_type="code" colab={} import gym import numpy as np, pandas as pd # + [markdown] id="-7adMV64Et-r" colab_type="text" # Then we create the 'Taxi' enviroment from the Gym. Render the initial state to check it is imported properly # + id="IPiSeYf5Erqi" colab_type="code" colab={} env = gym.make("Taxi-v2") env.reset() env.render() # + [markdown] id="Tl3wtYs_GIje" colab_type="text" # Find out how many states and action we can have in this enviroment. # + id="6kcwaSI7p4wI" colab_type="code" colab={} n_states = env.observation_space.n n_actions = env.action_space.n print("n_states=%i, n_actions=%i"%(n_states, n_actions)) # + [markdown] id="DogESWKmp4wL" colab_type="text" # ## Task 1: Create stochastic policy # # Let's create a policy, for crossentrupy method with stochastic policy (updated each epoch), it will be a probability distribution of taking action a will result in state s. # # ```policy[s,a] = P(take action a | in state s)``` # # It could be represented in a 2-D array, and we want to initialize it __uniformly__. In another word, at the beginning state,probability of choosing all actions should be equal (and adding up to 1). # # With `n_state` and `n_action`: # + id="79LDIEUmp4wM" colab_type="code" colab={} policy = np.ones((n_states,n_actions)) * (1/n_actions) # + id="a-RRIPR3p4wO" colab_type="code" colab={} assert type(policy) in (np.ndarray,np.matrix) assert np.allclose(policy,1./n_actions) assert np.allclose(np.sum(policy,axis=1), 1) # + [markdown] id="VtB4cVZJp4wQ" colab_type="text" # ## Task 2: Play the game # # Let's play the game with our policy. The following function will 'play the game' i.e. picking actions according to the state we are in and following the policy given. It will keep on 'playing' till the game is over or the step (t) reached the maximum limit. (To avoid endless loop) The action and states will be reconded and returned. Also the performance of the policy (sum of rewards) will also be returned. # + id="PQdj_dHLp4wQ" colab_type="code" colab={} def generate_session(policy,t_max=10**4): """ Play game until end or for t_max ticks. :param policy: an array of shape [n_states,n_actions] with action probabilities :returns: list of states, list of actions and sum of rewards """ states,actions = [],[] total_reward = 0. s = env.reset() for t in range(t_max): a = np.random.choice(n_actions,1,p=policy[s,:])[0] new_s, r, done, info = env.step(a) #Record state, action and add up reward to states,actions and total_reward accordingly. states.append(s) actions.append(a) total_reward += r s = new_s if done: break return states, actions, total_reward # + id="-OAc-QDyp4wS" colab_type="code" colab={} s,a,r = generate_session(policy) assert type(s) == type(a) == list assert len(s) == len(a) assert type(r) in [float,np.float] # + [markdown] id="FLet-FrI-E1C" colab_type="text" # ## Let's see the initial reward distribution # # In the following cell we play the game 200 times using the function `generate_session` we implemented above and plot to examine the performance of our policy... 
# + id="vCqRGN36p4wU" colab_type="code" colab={} import matplotlib.pyplot as plt # %matplotlib inline sample_rewards = [generate_session(policy,t_max=1000)[-1] for _ in range(200)] plt.hist(sample_rewards,bins=20); plt.vlines([np.percentile(sample_rewards, 50)], [0], [100], label="50'th percentile", color='green') plt.vlines([np.percentile(sample_rewards, 90)], [0], [100], label="90'th percentile", color='red') plt.legend() # + [markdown] id="ZYBfD-5t-tS6" colab_type="text" # You can see that it is not that great :-( So we now use crossentropy method to improve 'train' a better policy. # + [markdown] id="VgEK8Fsnp4wX" colab_type="text" # ## Task 3: Crossentropy method steps # # The goal is to select the state-action pairs which perform better than a certain percentile (elite_states, elites_actions). So we know what is a better choice actions in a certain state. You can say it's trail and error, and yes, that is the basic of reinforcement learning. # + id="c0jacA8Ep4wY" colab_type="code" colab={} def select_elites(states_batch,actions_batch,rewards_batch,percentile=50): """ Select states and actions from games that have rewards >= percentile :param states_batch: list of lists of states, states_batch[session_i][t] :param actions_batch: list of lists of actions, actions_batch[session_i][t] :param rewards_batch: list of rewards, rewards_batch[session_i] :returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions Please return elite states and actions in their original order [i.e. sorted by session number and timestep within session] If you're confused, see examples below. Please don't assume that states are integers (they'll get different later). """ def gen_list(input_list, output_list): for indx,val in enumerate(input_list): if rewards_batch[indx] >= reward_threshold: if isinstance(val, list): for i in range(len(val)): output_list.append(val[i]) else: output_list.append(val) reward_threshold = np.percentile(rewards_batch,percentile) elite_states = [] elite_actions = [] gen_list(states_batch,elite_states) gen_list(actions_batch,elite_actions) return elite_states,elite_actions # + id="puefOHVFp4wa" colab_type="code" colab={} states_batch = [ [1,2,3], #game1 [4,2,0,2], #game2 [3,1] #game3 ] actions_batch = [ [0,2,4], #game1 [3,2,0,1], #game2 [3,3] #game3 ] rewards_batch = [ 3, #game1 4, #game2 5, #game3 ] test_result_0 = select_elites(states_batch, actions_batch, rewards_batch, percentile=0) test_result_40 = select_elites(states_batch, actions_batch, rewards_batch, percentile=30) test_result_90 = select_elites(states_batch, actions_batch, rewards_batch, percentile=90) test_result_100 = select_elites(states_batch, actions_batch, rewards_batch, percentile=100) assert np.all(test_result_0[0] == [1, 2, 3, 4, 2, 0, 2, 3, 1]) \ and np.all(test_result_0[1] == [0, 2, 4, 3, 2, 0, 1, 3, 3]),\ "For percentile 0 you should return all states and actions in chronological order" assert np.all(test_result_40[0] == [4, 2, 0, 2, 3, 1]) and \ np.all(test_result_40[1] ==[3, 2, 0, 1, 3, 3]),\ "For percentile 30 you should only select states/actions from two first" assert np.all(test_result_90[0] == [3,1]) and \ np.all(test_result_90[1] == [3,3]),\ "For percentile 90 you should only select states/actions from one game" assert np.all(test_result_100[0] == [3,1]) and\ np.all(test_result_100[1] == [3,3]),\ "Please make sure you use >=, not >. Also double-check how you compute percentile." 
print("Ok!") # + [markdown] id="Tm6nZp6cAQEd" colab_type="text" # ## Task 4: Update policy # # Now we have the 'elites' list, we can update our policy so the probability of the state-action pairs are propotional to their occurance of in the 'elites' list. In other words, the state-action which seems to have a higher chance to perform better get selected more. # + id="lRMzX-7Lp4we" colab_type="code" colab={} def update_policy(elite_states,elite_actions): """ Given old policy and a list of elite states/actions from select_elites, return new updated policy where each action probability is proportional to policy[s_i,a_i] ~ #[occurences of si and ai in elite states/actions] Don't forget to normalize policy to get valid probabilities and handle 0/0 case. In case you never visited a state, set probabilities for all actions to 1./n_actions :param elite_states: 1D list of states from elite sessions :param elite_actions: 1D list of actions from elite sessions """ new_policy = np.zeros([n_states,n_actions]) esp = 0.0000001 for i,s in enumerate(elite_states): new_policy[s,elite_actions[i]] += 1 for s in range(n_states): if all(new_policy[s,:] == 0): new_policy[s,:] = 1./n_actions else: new_policy[s,:] /= np.sum(new_policy[s,:]) #Don't forget to set 1/n_actions for all actions in unvisited states. return new_policy # + id="mumQNLK6p4wg" colab_type="code" colab={} elite_states, elite_actions = ([1, 2, 3, 4, 2, 0, 2, 3, 1], [0, 2, 4, 3, 2, 0, 1, 3, 3]) new_policy = update_policy(elite_states,elite_actions) assert np.isfinite(new_policy).all(), "Your new policy contains NaNs or +-inf. Make sure you don't divide by zero." assert np.all(new_policy>=0), "Your new policy can't have negative action probabilities" assert np.allclose(new_policy.sum(axis=-1),1), "Your new policy should be a valid probability distribution over actions" reference_answer = np.array([ [ 1. , 0. , 0. , 0. , 0. ], [ 0.5 , 0. , 0. , 0.5 , 0. ], [ 0. , 0.33333333, 0.66666667, 0. , 0. ], [ 0. , 0. , 0. , 0.5 , 0.5 ]]) assert np.allclose(new_policy[:4,:5],reference_answer) print("Ok!") # + [markdown] id="tCK1uSLTp4wj" colab_type="text" # ## Let's train it # Let's put all our work above together: Generate sessions, select N best and fit to those. (Double check you are using the GPU runtime type, training may take a while, be patient) # + id="GJVx8Ifpp4wk" colab_type="code" colab={} from IPython.display import clear_output def show_progress(batch_rewards, log, percentile, reward_range=[-990,+10]): """ A convenience function that displays training progress. No cool math here, just charts. 
""" mean_reward, threshold = np.mean(batch_rewards), np.percentile(batch_rewards, percentile) log.append([mean_reward,threshold]) clear_output(True) print("mean reward = %.3f, threshold=%.3f"%(mean_reward, threshold)) plt.figure(figsize=[8,4]) plt.subplot(1,2,1) plt.plot(list(zip(*log))[0], label='Mean rewards') plt.plot(list(zip(*log))[1], label='Reward thresholds') plt.legend() plt.grid() plt.subplot(1,2,2) plt.hist(batch_rewards,range=reward_range); plt.vlines([np.percentile(batch_rewards, percentile)], [0], [100], label="percentile", color='red') plt.legend() plt.grid() plt.show() # + id="hmoj_BaJp4wo" colab_type="code" colab={} #reset policy just in case policy = np.ones([n_states, n_actions]) / n_actions # + id="hT5piqEyp4wq" colab_type="code" colab={} n_sessions = 250 #sample this many sessions percentile = 50 #take this percent of session with highest rewards learning_rate = 0.5 #add this thing to all counts for stability log = [] for i in range(100): # %time sessions = [generate_session(policy) for i in range(n_sessions)] batch_states,batch_actions,batch_rewards = zip(*sessions) elite_states, elite_actions = select_elites(batch_states,batch_actions,batch_rewards,percentile) new_policy = update_policy(elite_states,elite_actions) policy = learning_rate * new_policy + (1-learning_rate) * policy #display results on chart show_progress(batch_rewards, log, percentile) if np.mean(batch_rewards) > -20: break # + [markdown] id="ZmQcJYjAp4wt" colab_type="text" # ## Footnote: Reflecting on results # # You may have noticed that the taxi problem quickly converges from <-1000 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode. # # In case CEM failed to learn how to win from one distinct starting point, it will siply discard it because no sessions from that starting point will make it into the "elites". # # To mitigate that problem, you can either reduce the threshold for elite sessions (duct tape way) or change the way you evaluate strategy (theoretically correct way). You can first sample an action for every possible state and then evaluate this choice of actions by running _several_ games and averaging rewards. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Normalization for Better Generalization and Faster Training # > Different types of Normalization layers ( Batch Norm, Layernorm) # - toc: true # - badges: true # - comments: true # - image:https://i.imgur.com/Mpeu82o.jpg # - author: # - categories: [NLP, Batchnorm, layernorm, normalization] # ## Batch Normalization # # Training Deep Neural Networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization and makes it notoriously hard to train models with saturating nonlinearities. so to overcome this, we can do a normalization after some layers as below. # #
# # ![BN](https://i.imgur.com/l26z2Zz.png "Credit: https://arxiv.org/pdf/2003.07845.pdf") # #
# # It calculates the batch mean and standard deviation, uses them to normalize the data, and also maintains a running mean and standard deviation that are used at inference time.
# One intuition about why BatchNorm works is that it reduces internal covariate shift. You can check that in the video below.
#
#
# # >youtube: https://www.youtube.com/watch?v=nUUqwaxLnWs # # # #
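# The mechanics described above (normalize with the batch statistics during training, keep running statistics for inference) can be written out directly. The following is a minimal NumPy sketch, not from the original post, of a BatchNorm layer's forward pass with a toy batch:

# +
import numpy as np

def batch_norm_train(x, running_mean, running_var, gamma=1.0, beta=0.0,
                     momentum=0.99, eps=1e-5):
    """Normalize a batch with its own statistics and update the running ones."""
    mu, var = x.mean(axis=0), x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)          # normalize with batch statistics
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return gamma * x_hat + beta, running_mean, running_var

def batch_norm_infer(x, running_mean, running_var, gamma=1.0, beta=0.0, eps=1e-5):
    """At inference time the stored running statistics are used instead."""
    return gamma * (x - running_mean) / np.sqrt(running_var + eps) + beta

x = np.random.randn(32, 6) * 3.0 + 2.0             # toy batch: 32 samples, 6 features
out, rm, rv = batch_norm_train(x, np.zeros(6), np.ones(6))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))  # roughly 0 and 1 per feature
# -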
# # Another intuition:
#
# Batch Normalization normalizes the activations in the intermediate layers. BN primarily enables training with a larger learning rate, which leads to faster convergence and better generalization.
#
# Training with a large batch size tends to converge to sharp minima, and sharp minima tend to generalize worse, so the noise in SGD plays an important role in regularizing the network. Similarly, a higher learning rate biases the network towards wider minima and therefore better generalization. However, training with a higher learning rate without normalization can cause the updates to explode.
#
# If we compare gradients with and without batch normalization, the gradients of the network without batch norm are larger and heavier tailed, as shown below, so with BN we can train with larger learning rates.
#
# ![BNGradeints](https://i.imgur.com/NuppcAM.png "Credit: https://arxiv.org/pdf/1806.02375.pdf")
#
#
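# As an illustrative sketch of this point (the model, toy data, and learning rate below are assumptions for demonstration, not from the original post), BatchNormalization layers can be placed after the Dense layers and the model trained with a fairly aggressive learning rate:

# +
import numpy as np
import tensorflow as tf

# Toy data, purely for illustration
X = np.random.randn(1024, 6).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")

inputs = tf.keras.Input(shape=(6,))
h = tf.keras.layers.Dense(32, use_bias=False)(inputs)
h = tf.keras.layers.BatchNormalization()(h)
h = tf.keras.layers.Activation("relu")(h)
h = tf.keras.layers.Dense(32, use_bias=False)(h)
h = tf.keras.layers.BatchNormalization()(h)
h = tf.keras.layers.Activation("relu")(h)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(h)
model = tf.keras.Model(inputs, outputs)

# A relatively large learning rate that BN makes usable on this toy problem
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.5),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, batch_size=64, epochs=3, verbose=0)
# -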
# # >Important: BN is widely adopted in computer vision, but it leads to significant performance degradation for NLP. Nowadays Layer Normalization is the preferred normalization technique for NLP tasks.
#
#
# #
# >Note: BN cannot be applied to online learning tasks, and it cannot be applied to extremely large distributed models where the mini-batches have to be small. For feed-forward networks BN can be applied directly, because each layer has a fixed number of neurons and the mean and variance statistics of each neuron can be stored for use at prediction time. In RNNs, however, different mini-batches may have different input sequence lengths, which makes it difficult to compute the statistics, and the test sequence length cannot be greater than the maximum training sequence length.
#
#
# # You can check the figure below, from a paper that compares BN in CV and NLP. The differences between the running mean/variance and the batch mean/variance exhibit very high variance, with extreme outliers, in Transformers.
#
# ![BN](https://i.imgur.com/5xCloXd.png "Credit: https://arxiv.org/pdf/2003.07845.pdf")
#

import tensorflow as tf
input_layer = tf.keras.Input(shape=(6,))
bn_layer = tf.keras.layers.BatchNormalization()
bn_layer_out = bn_layer(input_layer)
print('Number of weights is', len(bn_layer.get_weights()))

# # If we have `n` features as input to the BN layer, the layer holds weights of total size `(4, n)`: `n` values for each of gamma, beta, the moving mean and the moving variance (set up by gamma_initializer, beta_initializer, moving_mean_initializer and moving_variance_initializer). Only gamma and beta are trained by gradient descent; the moving statistics are updated during training and used at inference.
# Please read the TensorFlow documentation to learn more about the training and inference modes of the BN layer; it is very important to handle this mode correctly.

# ## Layer Normalization
#
# Unlike Batch Normalization, it normalizes "horizontally", i.e. it normalizes each data point across its features, so $\mu$ and $\sigma$ do not depend on the batch. Layer Normalization therefore does not have to keep a "running mean" and "running variance".
#
# ![layernorm](https://i.imgur.com/XRuwFls.png "Credit: https://papers.nips.cc/paper/8689-understanding-and-improving-layer-normalization.pdf")
#
# It gives better results because of the gradients with respect to $\mu$ and $\sigma$ in Layer Normalization. The derivative with respect to $\mu$ re-centers the network gradients to zero, and the derivative with respect to $\sigma$ reduces the variance of the network gradients, which can be seen as a kind of re-scaling.
#
#
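# To make the difference in normalization axes concrete, here is a minimal NumPy sketch (not from the original post; it ignores the learnable scale/shift and the running statistics): BatchNorm normalizes each feature across the batch, while LayerNorm normalizes each sample across its features.

# +
import numpy as np

x = np.random.randn(4, 6)  # (batch, features)
eps = 1e-5

# BatchNorm-style: statistics per feature, computed over the batch axis
bn = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

# LayerNorm-style: statistics per sample, computed over the feature axis
ln = (x - x.mean(axis=1, keepdims=True)) / np.sqrt(x.var(axis=1, keepdims=True) + eps)

print(bn.mean(axis=0).round(6))  # ~0 for every feature
print(ln.mean(axis=1).round(6))  # ~0 for every sample
# -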
# #
# >Important: The parameters of LayerNorm, including the bias and gain, increase the risk of over-fitting, and do not work in most cases. - https://papers.nips.cc/paper/8689-understanding-and-improving-layer-normalization.pdf. You can remove these using the `center` and `scale` parameters in `Tensorflow`.
#

import tensorflow as tf
input_layer = tf.keras.Input(shape=(6,))
norm_layer = tf.keras.layers.LayerNormalization(scale=False, center=False)
norm_layer_out = norm_layer(input_layer)
print('Number of weights is', len(norm_layer.get_weights()))

# >Note: If there is no gain and bias, the number of weights is zero.

import tensorflow as tf
input_layer = tf.keras.Input(shape=(10,), batch_size=1)
norm_layer = tf.keras.layers.LayerNormalization(scale=True, center=True)
norm_layer_out = norm_layer(input_layer)
print('Number of weights is', len(norm_layer.get_weights()))

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="j2l0x3S2XOXs" colab_type="text"
# # An intuitive introduction to the Entropy
#
# Let $X$ be a discrete random variable (RV) taking values from set $\mathcal{X}$ with probability mass function $P(X)$.
#
# *Definition*: the entropy $H(X)$ of the discrete random variable $X$ is
# \begin{equation}
# H(X) = \sum_{x\in\mathcal{X}}P(X)\log \frac{1}{P(X)} = -\sum_{x\in\mathcal{X}}P(X)\log P(X).
# \end{equation}
#
# How to make sense of this definition? We'll, rather informally, argue below that the entropy of an RV provides a lower bound on the amount of information provided by the RV, which we'll define as the average number of bits required to transmit the value the RV has taken.
#
# As a motivating example, consider asking your friend for advice. The probabilities of his answers are given in the table below:
#
# | $x$      | $P(x)$ |
# |----------|--------|
# | OK       | $1/2$  |
# | Average  | $1/4$  |
# | Bad      | $1/8$  |
# | Terrible | $1/8$  |
#
# To transmit the answer of your friend you must introduce an *encoding*, e.g.:
#
# | $x$      | $P(x)$ | Code 1 |
# |----------|--------|--------|
# | OK       | $1/2$  | 00     |
# | Average  | $1/4$  | 01     |
# | Bad      | $1/8$  | 10     |
# | Terrible | $1/8$  | 11     |
#
# Under this encoding, we spend 2 bits per answer.
#
# However, we could also consider a variable-length code that uses shorter codewords for more frequent symbols:
#
# | $x$      | $P(x)$ | Code 2 |
# |----------|--------|--------|
# | OK       | $1/2$  | 0      |
# | Average  | $1/4$  | 10     |
# | Bad      | $1/8$  | 110    |
# | Terrible | $1/8$  | 111    |
#
# Under this encoding the average number of bits to encode an answer is:
# \begin{equation}
# \mathbb{E}[L] = \frac{1}{2} \cdot 1 + \frac{1}{4} \cdot 2 + \frac{1}{8} \cdot 3 + \frac{1}{8} \cdot 3 = \frac{7}{4}.
# \end{equation}
#
# Thus, the new code is more efficient. Is it the best we can do? (We check this numerically below.)

# + [markdown] id="uOpiKOWvn92s" colab_type="text"
# ### The code space
#
# We'll now try to formalize the coding task, i.e. the assignment of code lengths to possible values of the RV.
#
# Let's first observe an important property of our code: in a variable-length coding, no codeword can be the prefix of another one. Otherwise, decoding is not deterministic. Therefore, whenever a value is assigned a symbol of length $L$, $1/2^L$ of the code space is reserved and not available to other codes.
#
# This can be visualised as a code space.
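# Here is a small numerical check, not part of the original notebook, of the quantities discussed above: the entropy of the answer distribution, the average codeword length of the two codes, and the fraction of the code space each code reserves (each codeword of length $L$ reserves $1/2^L$ of it):

# +
import numpy as np

p = {'OK': 0.5, 'Average': 0.25, 'Bad': 0.125, 'Terrible': 0.125}
code1 = {'OK': '00', 'Average': '01', 'Bad': '10', 'Terrible': '11'}
code2 = {'OK': '0', 'Average': '10', 'Bad': '110', 'Terrible': '111'}

# Entropy in bits
H = -sum(px * np.log2(px) for px in p.values())

# Expected codeword length for each code
EL1 = sum(p[x] * len(code1[x]) for x in p)
EL2 = sum(p[x] * len(code2[x]) for x in p)

# Fraction of the code space reserved: each codeword of length L takes 1/2**L
space1 = sum(0.5 ** len(c) for c in code1.values())
space2 = sum(0.5 ** len(c) for c in code2.values())

print(f"H(X) = {H} bits, E[L] code 1 = {EL1}, E[L] code 2 = {EL2}")
print(f"code space used: code 1 = {space1}, code 2 = {space2}")
# -

# Both codes fill the code space exactly (the reserved fractions sum to 1), and Code 2 already matches the entropy of 1.75 bits per answer, so no prefix code can do better on average for this distribution.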
Below, we indicate the codes assigned to symbols in the example and grey-out codes that are not available because the shorter codes are used: # # "Visual Information theory": https://colah.github.io/posts/2015-09-Visual-Information/ # 2. ad TM Cover, "Elements of Information Theory", chapter 2 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import warnings warnings.filterwarnings('ignore') u_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code'] users = pd.read_csv('./archive/ml-100k/u.user', sep='|', names=u_cols, encoding='latin-1') users.head() i_cols = ['movie_id', 'title' ,'release date','video release date', 'IMDb URL', 'unknown', 'Action', 'Adventure', 'Animation', 'Children\'s', 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical', 'Mystery', 'Romance', 'Sci-Fi', 'Thriller', 'War', 'Western'] movies = pd.read_csv('./archive/ml-100k/u.item', sep='|', names=i_cols, encoding='latin-1') movies.head() movies = movies[['movie_id', 'title']] r_cols = ['user_id', 'movie_id', 'rating', 'timestamp'] ratings = pd.read_csv('./archive/ml-100k/u.data', sep='\t', names=r_cols, encoding='latin-1') ratings.head() ratings = ratings.drop('timestamp', axis=1) from sklearn.model_selection import train_test_split X = ratings.copy() y = ratings['user_id'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y) from sklearn.metrics import mean_squared_error def rmse(y_true, y_pred): return np.sqrt(mean_squared_error(y_true, y_pred)) def baseline(user_id, movie_id): return 3.0 def score(cf_model): id_pairs = zip(X_test['user_id'], X_test['movie_id']) y_pred = np.array([cf_model(user_id, movie_id) for user_id, movie_id in id_pairs]) y_true = np.array(X_test['rating']) return rmse(y_true, y_pred) score(baseline) # # User based collaborative filtering r_matrix = X_train.pivot_table(values='rating', index='user_id', columns='movie_id') r_matrix.head() def cf_user_mean(user_id, movie_id): if movie_id in r_matrix: mean_rating = r_matrix[movie_id].mean() else: mean_rating = 3.0 return mean_rating score(cf_user_mean) r_matrix_dummy = r_matrix.copy().fillna(0) from sklearn.metrics.pairwise import cosine_similarity cosine_sim = cosine_similarity(r_matrix_dummy, r_matrix_dummy) cosine_sim = pd.DataFrame(cosine_sim, index=r_matrix.index, columns=r_matrix.index) cosine_sim.head() def cf_user_wmean(user_id, movie_id): if movie_id in r_matrix: sim_score = cosine_sim[user_id] m_ratings = r_matrix[movie_id] idx = m_ratings[m_ratings.isnull()].index m_ratings = m_ratings.dropna() sim_score = sim_score.drop(idx) wmean_rating = np.dot(sim_score, m_ratings)/sim_score.sum() else: wmean_rating = 3.0 return wmean_rating score(cf_user_wmean) merged_df = pd.merge(X_train, users) merged_df.head() gender_mean = merged_df[['movie_id', 'sex', 'rating']].groupby(['movie_id','sex']).mean() users = users.set_index('user_id') def cf_gender(user_id, movie_id): if (movie_id in gender_mean) & (movie_id in r_matrix): gender = users.loc[user_id]['sex'] if gender in gender_mean[movie_id]: gender_rating = gender_mean[movie_id][gender] else: gender_rating = 3.0 else: gender_rating = 3.0 return gender_rating score(cf_gender) gen_occ_mean = merged_df[['sex', 'rating', 'movie_id', 'occupation']].pivot_table(values='rating', index='movie_id', 
columns=['occupation', 'sex'],aggfunc='mean') gen_occ_mean.head() def cf_gen_occ(user_id, movie_id): if (movie_id in gen_occ_mean) & (movie_id in r_matrix): user = users[user_id] occ = user['occupation'] gen = user['sex'] if occ in gen_occ_mean.loc[movie_id]: if gender in gen_occ_mean.loc[movie_id][occ]: rating = gen_occ_mean.loc[movie_id][occ][gender] if np.isnan(rating): rating = 3.0 return rating return 3.0 score(cf_gen_occ) from surprise import Reader, Dataset, KNNBasic from surprise.model_selection import cross_validate reader = Reader() data = Dataset.load_from_df(ratings, reader) knn = KNNBasic() cross_validate(knn, data, measures=['RMSE']) from surprise import SVD svd = SVD() cross_validate(svd, data, measures=['RMSE']) ratings = pd.read_csv('./archive/ratings_small.csv') data = Dataset.load_from_df(ratings[['userId', 'movieId', 'rating']], reader) svd = SVD() trainset = data.build_full_trainset() svd.fit(trainset) id_map = pd.read_csv('./archive/movies_metadata.csv') id_to_title = id_map.set_index('id') title_to_id = id_map.set_index('title') cosine_sim_map = {} for i in cosine_sim.index: title = id def hybrid(userId, title): idx = title_to_id.loc[title]['id'] tmdbId = title_to_id.loc[title]['id'] movie_id = title_to_id.loc[title]['id'] sim_scores = list(enumerate(cosine_sim[int(idx)])) sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True) sim_scores = sim_scores[1:26] movie_indices = [i[0] for i in sim_scores] # print(movie_indices) movies = id_map.iloc[movie_indices][['title', 'vote_count', 'vote_average','id']] movies['est'] = movies['id'].apply(lambda x: svd.predict(userId, x).est) movies = movies.sort_values('est', ascending=False) return movies.head(10) hybrid(1, 'Toy Story') hybrid(2, 'Toy Story') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_python3 # language: python # name: conda_python3 # --- print ('hello world') a = 3 b = 4 a+b # + #test print # - print (a) print ('Define new variable') a = 5 # %matplotlib inline import numpy as np import matplotlib.pyplot as plt X = np.linspace(-np.pi, np.pi, 256, endpoint = True) C,S = np.cos(X), np.sin(X) plt.plot(X,C) plt.plot(X,S) plt.show # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import glob import pandas as pd #获取指定目录下的所有图片 files = glob.glob('seq2seq*') dfs=[] for f in files: print(f) dfs.append(pd.read_csv(f)) df_ensemble = pd.DataFrame(columns=[dfs[0].columns]) df_ensemble.FORE_data = dfs[0].FORE_data df_ensemble[[' t2m', ' rh2m', ' w10m']] = 0 df_ensemble = pd.DataFrame(columns=[dfs[0].columns]) df_ensemble.FORE_data = dfs[0].FORE_data df_ensemble[[' t2m', ' rh2m', ' w10m']] = 0 for i in range(len(dfs)): df_ensemble[[' t2m', ' rh2m', ' w10m']] += dfs[i][[' t2m', ' rh2m', ' w10m']].values df_ensemble[[' t2m', ' rh2m', ' w10m']] = df_ensemble[[' t2m', ' rh2m', ' w10m']].values / len(dfs) df_ensemble.to_csv('./ensemble_avg_2018101503.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- 5 > 8 condition = 5 > 8 print(condition) 5 < 8 5 <= 8 8 <= 8 10 <= 8 8 == 8 8 == 7 8.5 != 8.5 8.5 != 8.4 prix_1 = 10.5 prix_2 = 5.2 total = 
prix_1 + prix_2 prix_max = 20 total <= prix_max total # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Hard Core Tutorial # ================== # # Typically in fitting, performance matters. Python is slow since it does tons of extra stuff(name lookup etc.) for a good reason. We can fix that with cython and numpy. This tutorial will demonstate how one would write a model which can take data and fit to the data. We will be demonstrating two ways: fastway and generic way. As a bonus we will show how to parallelize your cost function. # ## Basic Cython # Before we go on let's talk about how to use cython efficiently. Cython speed things up by using static type where it can. Generally the more type information you tell them the better it can generate C code. # # Cython has a very handy option call annotate which lets you know which line of code is static type which one make a call to python object. # %pylab inline # %load_ext Cython from iminuit import Minuit # + magic_args="--annotate" language="cython" # # def slow_f(n): # x = 100. # for i in range(n): # x+=n # return x # # # you tell it more type information like this # def fast_f(int n): # cdef double x=100. # cdef int i # for i in range(n): # x+=n # return x # # - # You can see that there is yellow code line and white code line. # # - yellow code line means calling to python code # - white code line means native C # # Basically, your goal is to get as many white lines as possible. By telling as much type information as possible. # # You can also click on each line to see what code cython actually generate for you. (You many need to double click it) # %timeit -n10 -r10 slow_f(100) # %timeit -n10 -r10 fast_f(100) # ### Quick And Dirty way # Let's look at how to write a cython cost function np.random.seed(0) data = 2 + 3 * randn(int(1e6)) # mu=2, sigma=3 hist(data, bins=100, histtype='step'); # + magic_args="--force" language="cython" # # use --annotate if you wonder what kind of code it generates # cimport cython # import numpy as np # cimport numpy as np # overwritten those from python with cython # from libc.math cimport exp, M_PI, sqrt, log # from iminuit.util import describe, make_func_code # # @cython.embedsignature(True) # dump the signatre so describe works # cpdef double mypdf(double x, double mu, double sigma): # #cpdef means generate both c function and python function # cdef double norm = 1./(sqrt(2*M_PI)*sigma) # cdef double ret = exp(-1*(x-mu)*(x-mu)/(2.*sigma*sigma))*norm # return ret # # cdef class QuickAndDirtyLogLH: # cdef is here to reduce name lookup for __call__ # cdef np.ndarray data # cdef int ndata # # def __init__(self, data): # self.data = data # self.ndata = len(data) # # @cython.embedsignature(True) # you need this to dump function signature in docstring # def compute(self, double mu, double sigma): # #this line is a cast not a copy. Let cython knows mydata will spit out double # cdef np.ndarray[np.double_t, ndim=1] mydata = self.data # cdef double loglh = 0. 
# cdef double thisdata # for i in range(self.ndata): # thisdata = mydata[i] # loglh -= log(mypdf(mydata[i],mu,sigma)) # return loglh # - describe(mypdf) # works because we embedded the signature lh = QuickAndDirtyLogLH(data) describe(lh.compute) # works because we embedded the signature m = Minuit(lh.compute, mu=1.5, sigma=2.5, error_mu=0.1, error_sigma=0.1, limit_sigma=(0.1,10.0), errordef=1) m.migrad(); # %%timeit -n1 -r1 m = Minuit(lh.compute, mu=1.5, sigma=2.5, error_mu=0.1, error_sigma=0.1, limit_sigma=(0.1,10.0),print_level=0, errordef=1) m.migrad() # Have your cython PDF in a separate file # --------------------------------------- # # Lots of time your stuff is incredibly complicated and doesn't fit in ipython notebook. Or you may want to reuse your PDF in many notebooks. We have external_pdf.pyx in the same directory as this tutorial. This is how you load it. import os os.path.exists(np.get_include() + "/numpy/arrayobject.h") import pyximport import pyximport; pyximport.install( setup_args=dict( include_dirs=[np.get_include()], #include directory # libraries = ['m']#'stuff you need to link (no -l) # library_dirs ['some/dir']#library dir # extra_compile_args = ['-g','-O2'], # extra_link_args=['-some-link-flags'], ), reload_support=True, #you may also find this useful ) #if anything funny is going on look at your console import external_pdf # + # reload(external_pdf) #you may find this useful for reloading your module # - lh = external_pdf.External_LogLH(data) m = Minuit(lh.compute,mu=1.5, sigma=2.5, error_mu=0.1, error_sigma=0.1, limit_sigma=(0.1,10.0), errordef=1) m.migrad(); # %%timeit -r1 -n1 m = Minuit(lh.compute, mu=1.5, sigma=2.5, error_mu=0.1, error_sigma=0.1, limit_sigma=(0.1,10.0), print_level=0, errordef=1) m.migrad() # ### Generic Reusable Cost Function # # Sometime we want to write a cost function that will take in any pdf and data and compute appropriate # cost function. This is slower than the previous example but will make your code much more reusable. This is how you do it. # + language="cython" # # use --annotate if you wonder what kind of code it generates # cimport cython # import numpy as np # cimport numpy as np # overwritten those from python with cython # from iminuit.util import make_func_code, describe # from libc.math cimport log # # cdef class LogLH: # cdef is here to reduce name lookup for __call__(think of struct) # cdef np.ndarray data # cdef int ndata # cdef public func_code # cdef object pdf # # def __init__(self, pdf, data): # self.data = data # self.ndata = len(data) # #the important line is here # self.func_code = make_func_code(describe(pdf)[1:])#1: dock off independent param # self.pdf = pdf # # #@(False) # you can turn off bound checking # def __call__(self, *args): # cdef np.ndarray[np.double_t, ndim=1] mydata = self.data # this line is very important # #with out this line cython will have no idea about data type # cdef double loglh = 0. 
# cdef list larg = [0.]+list(args) # # for i in range(self.ndata): # #it's slower because we need to do so many python stuff # #to do generic function call # #if you are python hacker and know how to get around this # #please let us know # larg[0] = mydata[i] # loglh -= log(self.pdf(*larg)) # return loglh # - # And your favorite PDF # + language="cython" # #use --annotate if you wonder what kind of code it generates # from libc.math cimport exp, M_PI, sqrt, log # cimport cython # # @cython.binding(True) # def mypdf(double x, double mu, double sigma): # #cpdef means generate both c function and python function # cdef double norm = 1./(sqrt(2*M_PI)*sigma) # cdef double ret = exp(-1*(x-mu)*(x-mu)/(2.*sigma*sigma))*norm # return ret # - mylh = LogLH(mypdf,data) print(describe(mypdf)) print(describe(mylh)) describe(mypdf) m = Minuit(mylh, mu=1.5, sigma=2.5, error_mu=0.1, error_sigma=0.1, limit_sigma=(0.1,10.0), errordef=1) # it's slower than before m.migrad(); # %%timeit -r2 -n1 m=Minuit(mylh, mu=1.5, sigma=2.5, error_mu=0.1, error_sigma=0.1, limit_sigma=(0.1,10.0), print_level=0, errordef=1) m.migrad() # you can feel it's much slower than before # show histogram of data and fitted pdf x = linspace(-10,12,100) initial = np.fromiter((mypdf(xx, 1.5, 2.5) for xx in x), float); fit = np.fromiter((mypdf(xx, m.values['mu'], m.values['sigma']) for xx in x), float); plot(x, initial, label='initial values') plot(x, fit, label='best fit') hist(data, density=True, bins=100, histtype='step', label='data') legend(); # ### Parallel processing with `multiprocessing` # This example is most likely not working on windows. You need to implement a bunch of workaround for lack of fork() on Windows such that it's easier to install a decent linux on your computer. # # The idea here on how to parallelize your cost function is to separate you data into multiple chunks and have each worker calculate your cost function, collect them at the end and add them up. # # The tool we will be showing here is Python multiprocess. We will be rewriting our generic cost funciton but now with multiprocessing. # # You might be worried about that forking process will copy your data. 
Most modern OS use [Copy On Write](http://en.wikipedia.org/wiki/Copy-on-write mechanism)(look at wiki) # what this means is that when it forks a process # it doesn't copy memory there unless you are writing it # + magic_args="-f" language="cython" # cimport cython # import numpy as np # cimport numpy as np #overwritten those from python with cython # from libc.math cimport exp, M_PI, sqrt, log, floor # from libc.stdio cimport printf # from iminuit.util import make_func_code, describe # import multiprocessing as mp # from multiprocessing import Manager # import os # #import logging # #logger = multiprocessing.log_to_stderr() # #don't do this in ipython either it will crash(watch ipython #2438) # #logger.setLevel(multiprocessing.SUBDEBUG) # # @cython.embedsignature(True)#dump the signatre so describe works # cpdef double mypdf(double x, double mu, double sigma): # #cpdef means generate both c function and python function # cdef double norm # cdef double ret # norm = 1./(sqrt(2*M_PI)*sigma) # ret = exp(-1*(x-mu)*(x-mu)/(2.*sigma*sigma))*norm # return ret # # cdef class Multiprocess_LogLH:#cdef is here to reduce name lookup for __call__ # cdef np.ndarray data # #cdef double* databuffer#i'm not really sure if numpy will do copy on write only or not # cdef int ndata # cdef public func_code # cdef object pdf # cdef int njobs # cdef list starts # cdef list stops # cdef object pool # cdef int i # cdef object manager # cdef object results # def __init__(self, pdf, np.ndarray[np.double_t] data, njobs=None): # self.data = data # #self.databuffer = data.data # self.ndata = len(data) # # #the important line is here # self.func_code = make_func_code(describe(pdf)[1:])#1: dock off independent param # self.pdf = pdf # # #worker pool stuff # self.njobs = njobs if njobs is not None else mp.cpu_count() # print('Number of CPU: ',self.njobs) # #determine chunk size # chunk_size = floor(self.ndata/self.njobs) # self.starts = [i*chunk_size for i in range(self.njobs)] # self.stops = [(i+1)*chunk_size for i in range(self.njobs)] # self.stops[-1] = self.ndata #add back last couple data from round off # self.i = 0 # self.manager = Manager() # self.results = self.manager.Queue() # # @cython.embedsignature(True) # cpdef process_chunk(self, # int pid, int start, int stop, tuple args, object results): #start stop is [start, stop) # #be careful here there is a bug in ipython preventing # #child process from printing to stdout/stderr (you will get a segfault) # #I submitted a patch https://github.com/ipython/ipython/pull/2712 # #Ex: For now, do something like this if you need to debug # #msg = str(('Start Worker:', pid, start, stop,os.getpid()))+'\n' # #printf(msg) # #it will run fine as a python script though # cdef np.ndarray[np.double_t, ndim=1] mydata = self.data # get cython to know the type # cdef int i # cdef double loglh = 0. # cdef double tmp = 0. # cdef tuple t # #calculate lh for this chunk # cdef list larg = [0.]+list(args) # for i in range(start,stop): # tmp = mydata[i] # larg[0] = tmp # loglh -= log(self.pdf(*larg)) # results.put(loglh) #put result in multiprocessing.queue # return loglh#return isn't necessary since it will be ignored but useful for testing # # def __call__(self, *args): # cdef double ret=0 # pool = [mp.Process(target=self.process_chunk, # args=(i,self.starts[i],self.stops[i],args,self.results)) # for i in range(self.njobs)] # # you may think that forking this many time is inefficient # # We can do better but this is not bad. 
Since most of the time # # will be spend on calculating your loglh this is cheap compared to those. # # If forking is more expensive than what each worker does then... # # your problem is something else. # # Copy on write ensures that data points will never be copied (unless you write to it) # self.i+=1 # for p in pool: p.start() #start everyone # for p in pool: p.join() #wait for everyone to finish # while not self.results.empty(): #collect the result # tmp = self.results.get() # ret += tmp # return ret # - mlh = Multiprocess_LogLH(mypdf,data) #good idea to debug it in non-multiprocess environment first import multiprocessing as mp q = mp.Queue() mlh.process_chunk(0, 0, 10, (0,1), q) #then see if it works on one point mlh(0, 1) m = Minuit(mlh,mu=1.5, sigma=2.5, error_mu=0.1, error_sigma=0.1, limit_sigma=(0.1, 10.0), errordef=1) m.migrad(); # %%timeit -r1 -n1 m = Minuit(mlh,mu=1.5, sigma=2.5, error_mu=0.1, error_sigma=0.1, limit_sigma=(0.1, 10.0), print_level=0, errordef=1) m.migrad() # ### Parallel Computing With Cython and OpenMP # *For this tutorial you will need a compiler with openmp support. GCC has one. However, clang does NOT support it.* # # **This is not recommended. Do this only when overhead of multiprocessing is too big for you. The performance gain is probably not worth your headache.** # # Computer nowadays are multi-core machines so it makes sense to utilize all of them. This method is fast but quite restricted and cubersome since you need to write function such that cython can figure out its reentrant-ness. And you need some understanding of thread-local and thread-share variable. # # You can read [prange](http://wiki.cython.org/enhancements/prange) from cython wiki for more information and how to gain a more complete control over paralelization. The official documentation is [here](http://docs.cython.org/src/userguide/parallelism.html) # + magic_args="-f -c-fopenmp --link-args=-fopenmp -c-g" language="cython" # # use --annotate if you wonder what kind of code it generates # cimport cython # import numpy as np # cimport numpy as np #overwritten those from python with cython # from libc.math cimport exp, M_PI, sqrt, log # from iminuit.util import describe, make_func_code # import multiprocessing # from cython.parallel import prange # # # # notice nogil a the end (no global intepreter lock) # # cython doesn't do a super thorough check for this # # so make sure your function is reentrant this means approximately # # just simple function compute simple stuff based on local stuff and no read/write to global # @cython.embedsignature(True)#dump the signatre so describe works # @cython.cdivision(True) # cpdef double mypdf(double x, double mu, double sigma) nogil: # #cpdef means generate both c function and python function # cdef double norm # cdef double ret # norm = 1./(sqrt(2*M_PI)*sigma) # ret = exp(-1*(x-mu)*(x-mu)/(2.*sigma*sigma))*norm # return ret # # cdef class ParallelLogLH:#cdef is here to reduce name lookup for __call__ # cdef np.ndarray data # cdef int ndata # cdef int njobs # cdef np.ndarray buf#buffer for storing result from each job # def __init__(self, data, njobs=None): # self.data = data # self.ndata = len(data) # self.njobs = njobs if njobs is not None else multiprocessing.cpu_count() # print('Number of CPU: ',self.njobs) # # @cython.boundscheck(False) # @cython.embedsignature(True)#you need this to dump function signature in docstring # def compute(self, double mu, double sigma): # cdef np.ndarray[np.double_t, ndim=1] mydata = self.data # cdef double loglh = 0. 
# cdef tuple t # cdef double thisdata # cdef int i=0 # #in parallel computing you need to be careful which variable is # #thread private which variable is shared between thread # #otherwise you will get into hard to detect racing condition # #cython rule of thumb(guess rule) is # # 1) assigned before use is thread private # # 2) read-only is thread-shared # # 3) inplace modification only is thread shared # cdef int njobs = self.njobs # cdef double tmp # with nogil: # for i in prange(self.ndata, # num_threads=njobs, # chunksize=100000, # schedule='dynamic'):#split into many threads # thisdata = mydata[i] #this is assigned before read so it's thread private # tmp = mypdf(thisdata,mu,sigma) #also here assigne before read # loglh -= log(tmp) #inplace modification so loglh is thread shared # return loglh # - plh = ParallelLogLH(data) plh.compute(1.5, 2.0) describe(plh.compute) m = Minuit(plh.compute,mu=1.5, sigma=2.5, error_mu=0.1, error_sigma=0.1, limit_sigma=(0.1,10.0), print_level=0, errordef=1) m.migrad(); # %%timeit -n2 -r2 m=Minuit(plh.compute,mu=1.5, sigma=2.5, error_mu=0.1, error_sigma=0.1, limit_sigma=(0.1,10.0), print_level=0, errordef=1) m.migrad() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction of Numpy Broadcasting # > https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html Korean Translation, Arrangement # - toc: True # # > https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html Korean Translation, Arrangement #hide import numpy as np a = np.array([0,1,2]) b = np.array([5,5,5]) a + b # Broadcasting은 다른 사이즈를 가진 배열을 아래와 같은 이항연산(binart operation)을 가능하게 만든다. 예를들어 스칼라(1차원배열)를 배열에 쉽게 더할 수 있다. a + 5 # 이러한 연산을 `5`를 `[5,5,5]`로 늘리거나 복사하고 더한다고 생각할 수 있다. Numpy Broadcasting의 장점은 그런 복사가 실제로 일어나지 않아 메모리를 절약한다. 하지만 복사한다고 생각하는 것이 Broadcasting을 이해하기에 편하다. # # 또한 배열을 더 높은 차원의 배열로 확장할 수 있다. 1차원의 배열을 2차원의 배열에 더한 결과를 보자 # M = np.ones((3,3)) M M + a # 1차원 배열 `a`가 `M`의 2번째차원에 맞추기 위해 broadcasting되었다. # # 위와 같이 간단한 예는 이해하기 쉽지만 더 복잡한 경우에는 두개의 배열에서 동시에 broadcasting이 일어난다. 아래의 예를 살펴보자 # + a = np.arange(3) b = np.arange(3)[:, np.newaxis] print(a) print(b) # - a + b # 하나의 값을 다른 배열에 맞추기 위해 broadcasting했을 때와 마찬가지로 위의 예에서는 `a`, `b` 두 배열 전부 공통 크기를 맞추기 위해 broadcasting 되었고 2차원 배열이 되었다. # # # Rules of Broadcasting # # Numpy Broadcasting은 두 배열사이의 상호작용을 결정하기 위해 엄격한 규칙을 따른다. # # - Rule1: 만약 두 배열의 차원의 개수가 다르면 작은 차원을 가진 배열의 왼쪽으로 패딩된다 # - Rule2: 만약 두배열이 어떠한 한 차수의 개수가 일치하지 않고, 그 배열의 차원의 shape 1을 가진다면 그 차원은 다른 배열의 shape에 맞춰 늘어난다. # - Rule3: 만약 어떠한 차원의 크기가 일치하지않고 1의 shape을 가진 배열이 없다면 error가 발생하게 된다. # ## Broadcasting example 1 # # 하나의 배열에서만 broadcasting이 필요한 경우 M = np.ones((2,3)) a = np.arange(3) # 위의 두 배열에 대한 연산을 살펴보자. # 두 배열의 shape은 아래와 같다. # # - `M.shape = (2,3)` # - `a.shape = (3,)` # # Rule1에 의해 더 적은 차원을 가지는 a배열은 왼쪽으로 1로 패딩되는 것을 알수 있다. # # - `M.shape -> (2,3)` # - `a.shape -> (1,3)` # # Rule2에 의해 첫 번째 차원의 크기가 일치하지 않아 1의 크기를 가진 차원이 더 큰 크기를 가진 차원에 맞춰지는 것을 알 수 있다. # # - `M.shape ->(2,3)` # - `a.shape ->(2,3)` # # 이제 차원의 크기들이 일차하는 것을 알 수 있다. 최종 shape은 `(2,3)`이 된다 M + a # ## Broadcasting example 2 # # 두 배열 모두 broadcasting이 필요한 경우 a = np.arange(3).reshape((3,1)) b = np.arange(3) # 위 배열들의 shape은 아래와 같다. # # - `a.shape = (3,1)` # - `b.shape = (3,)` # # Rule1에 의해 b는 첫번째 차원에 크기1을 가지도록 패딩된다. 
# # - `a.shape -> (3,1)` # - `b.shape -> (1,3)` # # 그리고 Rule2에 의해 두 열 모두 다른 배열의 같은차원의 크기와 일치하도록 broadcasting된다. # # - `a.shape -> (3,3)` # - `b.shape -> (3,3)` # a + b # ## Broadcasting example 3 # # 두 배열이 broadcasting 될 수 없는 예 M = np.ones((3,2)) a = np.arange(3) # 첫 예제와 달리 M에 전치되었다. 이러한 상황이 계산에 어떠한 영향을 미칠까? 배열의 shape은 아래와 같다. # # - `M.shape = (3,2)` # - `a.shape = (3,)` # # Rule1에 의해 a의 왼쪽으로 크기1을 가진 차원이 추가된다. # # - `M.shape -> (3,2)` # - `a.shape -> (1,3)` # # Rule2에 의해 a의 첫번째 차원의 크기가 M과 같도록 늘어난다. # # - `M.shape -> (3,2)` # - `a.shape -> (3,3)` # # 두 배열의 두 번째 차원의 크기각 둘 중 하나라도 1이 아닌 상태에서 차원의 크기가 일치하지 않아 Rule3에 의해 error가 발생한다. M + a # 배열 a를 오른쪽으로 padding하면 위와 같은 에러가 발생하지 않지만 규칙으로는 불명확함을 없애기 위해 왼쪽으로 패딩하도록 되어있다. 만약 오른쪽을 패딩하고 싶다면 아래와 같이 np.newaxis를 사용하면 된다 a[:, np.newaxis].shape M + a[:, np.newaxis] # Numpy Broadcasting은 어떠한 넘파이 이항연산 함수에서도 사용가능하다. 예를 들어 `logaddexp(a,b) - log(exp(a) + exp(b))`와 같은 함수도 사용가능하다 np.logaddexp(M, a[:, np.newaxis]) # # Broadcasting in Practice # # ## Centering an array # # Numpy의 ufunc들은 Numpy유저들이 느린 파이썬 반복문을 사용하지 않고 다양한 연산을 수행할 수 있도록 만들어 줬다. 대표적으로 Centering할때 편리학게 할 수있다. X = np.random.random((10, 3)) # + # 첫번째 차원 방향으로 mean함수 실행 Xmean = X.mean(0) Xmean # - # Xmean을 X에 빼줘서 centring X_centered = X - Xmean # + # cetering 되었기 때문에 X_centered를 첫번째 차원뱡향의 평균을 구했을 때 0에 근접해야함 X_centered.mean(0) # - # ## Plotting a two-dimensional function # # Broadcasting은 이차원 함수로부터 나온 결과 이미지를 보여주는 데에 유용하게 사용된다. # + # x and y have 50 steps form 0 to 5 x = np.linspace(0, 5, 50) y = np.linspace(0, 5, 50)[:, np.newaxis] z = np.sin(x) ** 10 + np.cos(10 + y*x) * np.cos(x) # - #hide # %matplotlib inline import matplotlib.pyplot as plt plt.imshow(z, origin='lower', extent=[0, 5, 0, 5], cmap='viridis') plt.colorbar(); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import pandas as pd import numpy as np iris = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data") iris.head() df = iris.copy() df.head() df.dtypes df.columns df.columns = ['sl','sw','pl','pw','flower_types'] df.head() df.shape df.describe() df.pw df['sw'] df.iloc[1:6,2:4] df.loc[1:5] df.isnull() df.isnull().sum() # + #To get the indexs of the dataframe, index attribute is used iris.index # - df.head() # drop() function is used to delete a row from the dataset df.drop(0) df # + #No changes have been made because drop function does not delete the rows from the original dataframe. #It creates a copy of the dataframe and returns the updated copy #To make changes in the original dataframe we need to pass a inplace=True argument. 
#This ensures that the changes are made in the original dataset
iris.drop(0, inplace=True)
iris.head()

# +
#To delete more than one row in one go, we can pass a list of row labels
iris.drop([4,5,8], inplace=True)
iris.head(10)

# +
# index[i] is used to access the ith row, not the row whose label is i
print(iris.index[0])
#Therefore iris.index[i] can also be passed to the drop function to delete the ith row
iris.drop(iris.index[0], inplace=True)
iris.head(10)

# +
#to get the ith row from the start we use iloc
print(iris.head())
iris.iloc[0]

# +
#Similarly, to access the row with label i, we use loc
print(iris.head())
iris.loc[2]

# +
#To add a row to a dataframe:
iris.loc[0] = [1, 2, 3, 4, "Iris-setosa"]
#The entry will be added at the end
iris.tail()

# +
#To reset the indices, the reset_index() function is used
print("Dataframe before reset index")
print(iris.head(10))
iris.reset_index(inplace=True)
print("\nDataframe after reset index")
print(iris.head(10))
#Note that after resetting, the old indices are added as a column in the updated dataframe
# -

#removing a column
df.drop('pl', axis= 1, inplace=True)
df.head()

#another way to delete a column is by using del
del df['sl']
df.head()

df.head()

df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data")
df.head()

df.columns = ['sl','sw','pl','pw','flower_types']
df.head()

#adding a new column
df['add_sl_sw'] = df['sl'] + df['sw']
df.head()

df.iloc[2:4, 1:3] = np.nan
df.head()

df.dropna(inplace=True)
df.head()

df.iloc[2:4, 1:3] = np.nan
df.head()

setosa = df[df['flower_types']=='Iris-setosa']
setosa_sw_mean = setosa['sw'].mean()
print(setosa_sw_mean)
setosa_pl_mean = setosa['pl'].mean()
print(setosa_pl_mean)

df['sw'].fillna(setosa_sw_mean, inplace=True)
df['pl'].fillna(setosa_pl_mean, inplace=True)
df.head()

df['gender'] = 'Female'
df.head()

df.iloc[1:20,6] = 'Male'
df.head()

# +
def f(s):
    if s == 'Male':
        return 0
    else:
        return 1

df['sex'] = df['gender'].apply(f)
df.head()
# -

del df['gender']
df.head()

# ## Problem Statement

# #### Find and print the count of each kind of flower (separated by space)?
df.count()
df.flower_types.value_counts()

# #### Find the data of flower "Iris-virginica" type where petal length > 1.5?
iris_virginica = df[df['flower_types'] == 'Iris-virginica']
iris_virginica[iris_virginica.pl > 1.5]

# #### Find and print the maximum, minimum and average values of each feature for each kind of flower?
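# One way to answer this per flower type (a minimal sketch; it assumes the `df` built
# above, with the renamed numeric columns 'sl', 'sw', 'pl', 'pw' and the label column
# 'flower_types') is to group by the flower label before aggregating. The
# `df.describe()` line that follows summarizes the whole frame without grouping.

# aggregate min, max and mean of each numeric feature within each flower type
per_flower_stats = df.groupby('flower_types')[['sl', 'sw', 'pl', 'pw']].agg(['min', 'max', 'mean'])
print(per_flower_stats)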
df.describe().loc[['min','max','mean']] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.11 64-bit (''otopy38'': conda)' # name: python3 # --- # # Schema from example oto_data = { "_id" : "5e3868eb61e48d0017ab68b0", "kind" : "OutlierModel", "bt" : 1580755178368101, "tt" : 0, "modelId" : "5c1a7b648a6df285f82bdd4f", "distance" : 204311, "closest_centroid" : 82, "outlier_score" : 90.9653, "centroid_rareness" : 4.26592, "is_outlier" : 1, "source" : "40FWF4_R-", "account" : "analogdevices_dev" } # # Using schema # ## Validation of a schema # + from schema import Schema, And, Use, Optional, SchemaError schema = Schema([{'name': And(str, len), 'age': And(Use(int), lambda n: 18 <= n <= 99), Optional('gender'): And(str, Use(str.lower), lambda s: s in ('squid', 'kid'))}]) data = [{'name': 'Sue', 'age': '28', 'gender': 'Squid'}, {'name': 'Sam', 'age': '42'}, {'name': 'Sacha', 'age': '20', 'gender': 'KID'}] # + validated = schema.validate(data) assert validated == [{'name': 'Sue', 'age': 28, 'gender': 'squid'}, {'name': 'Sam', 'age': 42}, {'name': 'Sacha', 'age' : 20, 'gender': 'kid'}] # - """{'_id': str, 'kind': str, 'bt': int, 'tt': int, 'modelId': str, 'distance': int, 'closest_centroid': int, 'outlier_score': float, 'centroid_rareness': float, 'is_outlier': int, 'source': str, 'account': str} """ schema_otosense= Schema([{'_id': str, 'kind':Use(str), 'bt': Use(int), 'tt':Use(int), 'modelId':Use(str), 'distance':Use(int), 'closest_centroid':Use(int), 'outlier_score':Use(int), 'centroid_rareness':Use(float), 'is_outlier':Use(bool), 'source':Use(str), 'account':And(str, lambda s: s in ('analogdevices_dev', 'analogdevices_prod')) }]) schema_otosense.validate([oto_data]) # # Validation using Great expectations import great_expectations as ge import pandas as pd data_list = [{'_id': '5e3868eb61e48d0017ab68b0', 'kind': 'OutlierModel', 'bt': 1580755178368101, 'tt': 0, 'modelId': '5c1a7b648a6df285f82bdd4f', 'distance': 204311, 'closest_centroid': 82, 'outlier_score': 90, 'centroid_rareness': 4.26592, 'is_outlier': True, 'source': '40FWF4_R-', 'account': 'analogdevices_dev'}, {'_id': '4e3868eb61e48d0017ab6898', 'kind': 'OutlierModel', 'bt': 1580755178368156, 'tt': 0, 'modelId': '5c1a7b648a6df285f82bdd4f', 'distance': 204316, 'closest_centroid': 80, 'outlier_score': 10, 'centroid_rareness': 4.26592, 'is_outlier': True, 'source': '40FWF4_R-', 'account': 'analogdevices_dev'}, {'_id': '3e3868eb61e48d0017ab6800', 'kind': 'OutlierModel', 'bt': 1580755178368101, 'tt': 0, 'modelId': '5c1a7b648a6df285f82bdd4f', 'distance': 20400, 'closest_centroid': 17, 'outlier_score': 9, 'centroid_rareness': 3.1, 'is_outlier': True, 'source': '40FWF4_R-', 'account': 'analogdevices_prod'} ] df_csv = pd.DataFrame(data_list).to_csv('data_great_expect.csv') df=ge.read_csv('data_great_expect.csv') df # + feature_columns = ['kind', 'bt', 'tt','modelId', 'distance','closest_centroid'] for col in feature_columns: df.expect_column_to_exist(col) df.expect_column_values_to_be_of_type('kind', 'str') # - df.get_expectation_suite() # # Generating data from a model from sdv import SDV #synthetic data vault sdv = SDV() df_data = pd.DataFrame(data_list) sdv.fit(tables =[df_data]) # df_data # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + 
[markdown] id="BxC7W6HplhEW" # # League of Legends Diamond Ranked Games (10 min) # + [markdown] id="OKooCxuIlkyQ" # ## (1) introduction # "League of Legends" is a team-based strategy game. Two teams with five powerful summoners will face each other in a canyon. The goal of the team is to tear down each other's base. # # A typical League of Legends game usually lasts 30 to 45 minutes, and each game can be divided into three stages: the laning stage, the mid-term and late-stage. Players usually spend the first 10 to 15 minutes in their own branch (up, middle, down, JG) to develop in order to gain the advantage of equipment and level as early as possible. # # The combination of heroes of each team will have a crucial impact on the outcome of the game, because some heroes are very strong in the early game, while other heroes will grow substantially in the middle and late game. # In the high score game of LOL (diamond to master segment), the final direction of the game is analyzed based on the data of the first 10 minutes. # - # ### dataset # # https://www.kaggle.com/bobbyscience/league-of-legends-diamond-ranked-games-10-min # + [markdown] id="i0cKFYOCmOda" # ## (2) method # # Based on neural network, we use the model to train the dataset and predict the result, win or lose. # # The neural network structure is that: # # input size = 9 # # 16 units dense layer # # 32 units full connect layer # # 16 units dense layer # # 8 units dense layer # # the binary output layer # # # + id="n4yDdq5xWDEU" import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout import numpy as np import pandas as pd from sklearn.preprocessing import MinMaxScaler from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt # + id="YT89HBsDWGIf" df = pd.read_csv('./data/lol/high_diamond_ranked_10min.csv') cols = ['gameId', 'blueWardsPlaced', 'blueWardsDestroyed', 'blueDeaths', 'blueEliteMonsters', 'blueHeralds', 'blueTowersDestroyed', 'blueTotalGold', 'blueAvgLevel', 'blueTotalExperience', 'blueTotalMinionsKilled', 'blueTotalJungleMinionsKilled', 'blueGoldDiff', 'blueExperienceDiff', 'blueGoldPerMin', 'redWardsPlaced', 'redWardsDestroyed', 'redFirstBlood', 'redDeaths', 'redEliteMonsters', 'redHeralds', 'redTowersDestroyed', 'redTotalGold', 'redAvgLevel', 'redTotalExperience', 'redTotalMinionsKilled', 'redTotalJungleMinionsKilled', 'redGoldDiff', 'redExperienceDiff', 'redGoldPerMin'] df = df.drop(cols, axis=1) y = df['blueWins'] # + colab={"base_uri": "https://localhost:8080/"} id="T0VjI836Xvjn" outputId="a6ee141a-ecfb-4a3e-9747-7df17310ad28" clist = df[df.columns[1:]].apply(lambda x: x.corr(df['blueWins'])) cols=[] for col in clist.index: if (clist[col]>0.2 or clist[col]<-0.2): cols.append(col) df = df[cols] print(df) # + id="IesTWlTYXwrQ" scaler = MinMaxScaler() scaler.fit(df) X = scaler.transform(df) X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2) # + colab={"base_uri": "https://localhost:8080/"} id="obPJbc-SX24e" outputId="eaa1265c-be8e-4274-ab99-ef7cb3690af6" model = Sequential() model.add(Dense(16, input_shape=(9,))) model.add(Dense(32)) model.add(Dense(16)) model.add(Dense(8)) model.add(Dense(1,activation='sigmoid')) model.compile(loss='binary_crossentropy',optimizer='SGD',metrics=['accuracy']) history = model.fit(X_train,y_train,validation_split=0.3, epochs=10) # + [markdown] id="c4k0A4Q4nz7e" # ## (3) results # # accuracy # + colab={"base_uri": "https://localhost:8080/", "height": 285} id="Q6wzyYY5cdn9" 
outputId="1fa0c0f1-7c3b-4c8d-94dd-a4fb2db0aff1" plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) # + [markdown] id="wEb_AM3vn7lE" # loss # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="hgzfXRC7cmhg" outputId="e492a809-07e9-4818-ce5b-6df37d863a7b" plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) # + [markdown] id="tAruggwYn_X5" # ## (4) discussion # # The results are good at about 70% accuracy, but not good enough for the prediction. There are only 9 elements for the input, which is a very low-dimensional data set, because "League of Legends" may capture hundreds of variables from each game. For further experiments, functions such as hero combination, time range, and champion proficiency of a specific player can be included in the analysis. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" import warnings warnings.filterwarnings("ignore") data = pd.read_csv("/kaggle/input/amazon-fine-food-reviews/Reviews.csv") print(data.shape) # - data.columns data = data[data['Score']!=3] data['target'] = data['Score'].apply(lambda x : 1 if x>3 else 0) #Sorting data according to ProductId in ascending order data=data.sort_values('ProductId', axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last') #Deduplication of entries data=data.drop_duplicates(subset={"UserId","ProfileName","Time","Text"}, keep='first', inplace=False) print(data.shape) data.head() # + # https://stackoverflow.com/a/47091490/4084039 # https://stackoverflow.com/questions/16206380/python-beautifulsoup-how-to-remove-all-tags-from-an-element import re from bs4 import BeautifulSoup def decontracted(phrase): # specific phrase = re.sub(r"won't", "will not", phrase) phrase = re.sub(r"can\'t", "can not", phrase) # general phrase = re.sub(r"n\'t", " not", phrase) phrase = re.sub(r"\'re", " are", phrase) phrase = re.sub(r"\'s", " is", phrase) phrase = re.sub(r"\'d", " would", phrase) phrase = re.sub(r"\'ll", " will", phrase) phrase = re.sub(r"\'t", " not", phrase) phrase = re.sub(r"\'ve", " have", phrase) phrase = re.sub(r"\'m", " am", phrase) return phrase stopwords= ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\ "you'll", "you'd", 'your', 
'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \ 'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\ 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \ 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \ 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \ 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\ 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\ 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\ 'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \ 's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \ 've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\ "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\ "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \ 'won', "won't", 'wouldn', "wouldn't"] from tqdm import tqdm def preprocess_text(text_data): preprocessed_text = [] for sentance in tqdm(text_data): sentance = re.sub(r"http\S+", "", sentance) sentance = BeautifulSoup(sentance, 'lxml').get_text() sentance = decontracted(sentance) sentance = re.sub("\S*\d\S*", "", sentance).strip() sentance = re.sub('[^A-Za-z]+', ' ', sentance) #sent = ' '.join(e for e in sent.split() if e.lower() not in stopwords) preprocessed_text.append(sentance.lower().strip()) return preprocessed_text # - from wordcloud import WordCloud from tqdm import tqdm_notebook as tqdm import matplotlib.pyplot as plt def plot_word_cloud(data): text = " ".join(review.lower() for review in data) wordcloud = WordCloud(width = 800, height = 800, background_color ='white', stopwords = stopwords, min_font_size = 10).generate(text) plt.figure(figsize = (8, 8), facecolor = None) plt.imshow(wordcloud) plt.axis("off") plt.tight_layout(pad = 0) plt.show() preprocessed_reviews = preprocess_text(data['Text'].values) labels = data['target'].values data['Text'] = preprocessed_reviews # %%time plot_word_cloud(data['Text'].values) # %%time plot_word_cloud(data.loc[data['target'] == 0]['Text'].values) # %%time plot_word_cloud(data.loc[data['target'] == 1]['Text'].values) pd.Series(labels).value_counts() # + from keras.preprocessing.sequence import pad_sequences from keras.preprocessing.text import one_hot from keras.preprocessing.text import Tokenizer from sklearn.model_selection import train_test_split from scipy import sparse x_train,x_cv, y_train, y_cv = train_test_split(preprocessed_reviews, labels, test_size=0.2, random_state=42, stratify=labels) MAX_SEQUENCE_LENGTH = 4000 MAX_NUM_WORDS = 20000 tokenizer = Tokenizer(num_words=MAX_NUM_WORDS, filters='!"#$%&()*+,-./:;<=>?@[\\]^`{|}~\t\n') tokenizer.fit_on_texts(x_train) encoded_docs_train = tokenizer.texts_to_sequences(x_train) padded_text_train=pad_sequences(encoded_docs_train,maxlen=MAX_SEQUENCE_LENGTH, padding="post", truncating="post").astype('int16') encoded_docs_cv = tokenizer.texts_to_sequences(x_cv) padded_text_cv=pad_sequences(encoded_docs_cv,maxlen=MAX_SEQUENCE_LENGTH, padding="post", truncating="post").astype('int16') word_index = 
tokenizer.word_index # - import pickle with open('tokenizer.pickle', 'wb') as handle: pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL) print(padded_text_train[0]) print(padded_text_cv.shape) from sklearn.preprocessing import Normalizer transformer = Normalizer() transformer.fit(padded_text_train) padded_text_train_norm = transformer.transform(padded_text_train) padded_text_cv_norm =transformer.transform(padded_text_cv) # ### Training a simple model # + from sklearn.svm import SVC from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import RandomizedSearchCV from sklearn.metrics import accuracy_score, f1_score, roc_auc_score, confusion_matrix from scipy.stats import uniform from sklearn.linear_model import LogisticRegression as lr import seaborn as sns clf = lr() clf.fit(padded_text_train_norm, y_train) y_pred = clf.predict(padded_text_cv_norm) print("Acccuracy is",accuracy_score(y_cv,y_pred)) print("F1 score is", f1_score(y_cv, y_pred)) confusion_matrix_test = confusion_matrix(y_cv, y_pred) sns.heatmap(confusion_matrix_test,xticklabels=["1","0"], yticklabels=["1","0"],annot=True).set_title("Confusion Matrix") plt.show() # - # ### Comaparing and Fine tuning ML classification models from xgboost import XGBClassifier models = { 'lr': lr(), 'svc': SVC(), 'dtc':DecisionTreeClassifier(), 'rfc': RandomForestClassifier(), 'xgbc':XGBClassifier(objective = 'binary:logistic', verbosity = 0) } def train_and_test_model(model_key, distributions, model_name): print("Model training", model_name) model = models[model_key] clf = RandomizedSearchCV(model, distributions, random_state=0, n_iter = 10, verbose=0) clf.fit(padded_text_train_norm, y_train) print(clf.best_estimator_) final_model = clf.best_estimator_ final_model.fit(padded_text_train_norm, y_train) y_pred = final_model.predict(padded_text_cv) print("Acccuracy is",accuracy_score(y_cv,y_pred)) print("F1 score is", f1_score(y_cv, y_pred)) confusion_matrix_test = confusion_matrix(y_cv, y_pred) sns.heatmap(confusion_matrix_test,xticklabels=["1","0"], yticklabels=["1","0"],annot=True).set_title("Confusion Matrix") plt.show() distributions = dict(C=uniform(loc=0.00001, scale=10000), penalty=['l2', 'l1'], class_weight=['balanced', None]) train_and_test_model('lr', distributions, "logistic regression") distributions = dict(C=uniform(loc=0.00001, scale=10000),kernel=['linear', 'poly', 'rbf', 'sigmoid'] ,coef0=uniform(loc=0, scale=10), class_weight=['balanced', None], gamma=['scale', 'auto']) train_and_test_model('svc', distributions, "SVM") from scipy.stats import randint distributions = dict(criterion=['gini', 'entropy'], splitter=['best', 'random'], max_depth = randint(1, 100), min_samples_split= uniform(loc=0, scale=1), max_features=['auto', 'sqrt', 'log2'], class_weight=['balanced', None]) train_and_test_model('dtc', distributions, "Decisoin Tree Classifier") distributions = dict(criterion=['gini', 'entropy'], n_estimators = randint(10, 500) , max_depth = randint(1, 100), min_samples_split= uniform(loc=0, scale=1), max_features=['auto', 'sqrt', 'log2'], class_weight=['balanced', 'balanced_subsample', None]) train_and_test_model('rfc', distributions, "Random forest classifier") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3-azureml # kernelspec: # display_name: Python 3.6 - AzureML # language: python # name: python3-azureml # --- # + import 
numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # !curl https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv -o ../covid-19-confirmed-cases.csv # !curl https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv -o ../covid-19-recovered-cases.csv # !curl https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv -o ../covid-19-recovered-deaths.csv # - ConfirmedCases = pd.read_csv('../covid-19-confirmed-cases.csv') RecoveredCases = pd.read_csv('../covid-19-recovered-cases.csv') Deaths = pd.read_csv('../covid-19-recovered-deaths.csv') # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} ConfirmedCases = Dataset.get_by_name(workspace, name='COVID19_Confirmed').to_pandas_dataframe() RecoveredCases = Dataset.get_by_name(workspace, name='COVID19_Recovered').to_pandas_dataframe() Deaths = Dataset.get_by_name(workspace, name='COVID19_Deaths').to_pandas_dataframe() print("Succeeded") # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} ConfirmedCases_ValueColumns = ConfirmedCases.columns[4:] RecoveredCases_ValueColumns = RecoveredCases.columns[4:] Deaths_ValueColumns = Deaths.columns[4:] print("Succeeded") # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} ConfirmedCases = pd.melt(ConfirmedCases, id_vars=['Country/Region'], value_vars = ConfirmedCases_ValueColumns, var_name = "Date", value_name = "ConfirmedCases") RecoveredCases = pd.melt(RecoveredCases, id_vars=['Country/Region'], value_vars = RecoveredCases_ValueColumns, var_name = "Date", value_name = "RecoveredCases") Deaths = pd.melt(Deaths, id_vars = ["Country/Region"], value_vars = Deaths_ValueColumns, var_name = "Date", value_name = "Deaths") print("Succeeded") # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} ConfirmedCases["Date"] = pd.to_datetime(ConfirmedCases["Date"]) RecoveredCases["Date"] = pd.to_datetime(RecoveredCases["Date"]) Deaths["Date"] = pd.to_datetime(Deaths["Date"]) # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} ConfirmedCases_ByDate = pd.pivot_table(ConfirmedCases, values='ConfirmedCases', index=["Date"], aggfunc=np.sum).sort_values(by = "Date") RecoveredCases_ByDate = pd.pivot_table(RecoveredCases, values = "RecoveredCases", index = ["Date"], aggfunc = np.sum).sort_values(by = "Date") Deaths_ByDate = pd.pivot_table(Deaths, values = "Deaths", index = ["Date"], aggfunc = np.sum).sort_values(by = "Date") # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} #Creating By Country Pivot Tables ConfirmedCases_ByCountry = pd.pivot_table(ConfirmedCases, values = "ConfirmedCases", index = ["Country/Region"], aggfunc = np.max) RecoveredCases_ByCountry = pd.pivot_table(RecoveredCases, values = "RecoveredCases", index = ["Country/Region"], aggfunc = np.max) Deaths_ByCountry = pd.pivot_table(Deaths, values = "Deaths", index = ["Country/Region"], aggfunc = np.max) print("Succeeded") # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} 
COVID19_ByDate = ConfirmedCases_ByDate.merge(RecoveredCases_ByDate, on = "Date").merge(Deaths_ByDate, on = "Date")
COVID19_ByCountry = ConfirmedCases_ByCountry.merge(RecoveredCases_ByCountry, on = "Country/Region").merge(Deaths_ByCountry, on = "Country/Region")
print("Succeeded")

# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Exercise 18
#
#
# For this homework we consider a set of observations on a number of red and white wine varieties, involving their chemical properties and rankings by tasters. The wine industry shows a recent growth spurt as social drinking is on the rise. The price of wine depends on a rather abstract concept of wine appreciation by wine tasters, opinion among whom may have a high degree of variability, so pricing depends to some extent on a volatile factor. Another key factor in wine certification and quality assessment is physicochemical tests, which are laboratory-based and take into account factors like acidity, pH level, presence of sugar and other chemical properties. For the wine market, it would be of interest if human tasting quality could be related to the chemical properties of wine, so that the certification, quality assessment and assurance process becomes more controlled.
#
# Two datasets are available: one is on red wine and has 1599 different varieties, and the other is on white wine and has 4898 varieties. All wines are produced in a particular area of Portugal. Data are collected on 12 different properties of the wines, one of which is Quality, based on sensory data; the rest are chemical properties of the wines, including density, acidity, alcohol content, etc. All chemical properties of wines are continuous variables. Quality is an ordinal variable with a possible ranking from 1 (worst) to 10 (best). Each variety of wine is tasted by three independent tasters, and the final rank assigned is the median rank given by the tasters.
#
# A predictive model developed on this data is expected to provide guidance to vineyards regarding the quality and price expected for their produce, without heavy reliance on the volatility of wine tasters.

import pandas as pd
import numpy as np
import zipfile

with zipfile.ZipFile('wine_data.zip', 'r') as z:
    f = z.open('Wine_data_red.csv')
    data_r = pd.io.parsers.read_table(f, sep=',')
    f = z.open('Wine_data_white.csv')
    data_w = pd.io.parsers.read_table(f, sep=',')

data = data_w.assign(type = 'white')
data = data.append(data_r.assign(type = 'red'), ignore_index=True)
data.sample(5)

# # Exercise 18.1
#
# Show the frequency table of the quality by type of wine

# # Exercise 18.2 (2 points)
#
# * Standardize the features (not the quality)
# * Create a binary target for each type of wine
# * Create two linear SVMs for the white and red wines, respectively.
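# A minimal sketch of one possible approach to Exercises 18.1 and 18.2 (not an official
# solution): it assumes the combined `data` frame built above exposes a `quality` column,
# as in the original UCI files, plus the `type` column added while loading, and it defines
# the binary target as `quality >= 7`, which is only one reasonable threshold choice.

# +
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

# 18.1: frequency table of quality by type of wine
print(pd.crosstab(data['type'], data['quality']))

# 18.2: standardize the features, build a binary target and fit one linear SVM per type
for wine_type in ['white', 'red']:
    subset = data[data['type'] == wine_type]
    X = subset.drop(columns=['quality', 'type'])    # chemical features only
    y = (subset['quality'] >= 7).astype(int)        # assumed binary target: "good" wine
    X_std = StandardScaler().fit_transform(X)       # standardized features
    X_train, X_test, y_train, y_test = train_test_split(X_std, y, test_size=0.3, random_state=0)
    svm = LinearSVC(max_iter=10000).fit(X_train, y_train)
    print(wine_type, 'linear SVM accuracy:', svm.score(X_test, y_test))
# -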
# # # Exercice 18.3 (2 points) # # Test the two SVM's using the different kernels (‘poly’, ‘rbf’, ‘sigmoid’) # # # Exercice 18.4 (2 points) # # Using the best SVM find the parameters that gives the best performance # # 'C': [0.1, 1, 10, 100, 1000], 'gamma': [0.01, 0.001, 0.0001] # # Exercice 18.5 # # Compare the results with other methods # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Creating an Interpolated Structured Data Grid # This script allows you to input an unstructured dataset, e.g. from a CFD velocity data file and interpolate it into a structured grid of your chosen size. # # ![grid%20%281%29.png](attachment:grid%20%281%29.png) # Structured output velocity file vs unstructured grids CFD velocity data input. Source: # *, "The leading edge vortex and its secondary structures: a numerical study of the flow topology of pitching and plunging airfoils", MEng Disseration, University of Glasgow, January 2021* # # # # # # ### Sample header from input file: # # "U:0","U:1","U:2","Q-criterion","Points:0","Points:1","Points:2" # # 0,0,0,-2.0633e+05,0.076136,-3.4993e-05,0.03 # # 0,0,0,-2.9188e+07,0.0762,-3.2004e-05,0.03 # # 0.1312,0,0,-1.7476e+05,0.076137,-4.4772e-05,0.03 # # 0.1312,0,0,-2.494e+07,0.076207,-3.7501e-05,0.03 # # 0,0,0,-1.7728e+05,0.076066,-3.8283e-05,0.03 # # 0.1312,0,0,-49700,0.076066,-4.8514e-05,0.03 # # 0.1312,0,0,-7.0466e+06,0.076207,3.7501e-05,0.03 # # 0,0,0,-9.4372e+07,0.0762,3.2004e-05,0.03 # # 0.1312,0,0,-0,0.076138,-5.5822e-05,0.03 # + import pandas as pd import numpy as np from scipy import interpolate from IPython.display import clear_output import os initialFrame = 1 finalFrame = 2 frameStep = 1 for i in range(initialFrame,finalFrame+frameStep,frameStep): #input file paths input_file = os.getcwd() input_file += '/InputVelocity/velocity_' #sample velocity files for you to try this out input_file += str(i) input_file += '.csv' #output file paths output_file = os.getcwd() output_file += '/StructuredVelocityOutput/' output_file += str(i) output_file += '.txt' df = pd.read_csv(input_file) df = df.drop(["U:2","Q-criterion","Points:2"], axis = 1) df = df.rename(columns = {'Points:0' : 'X', 'Points:1': 'Y', 'U:0': 'U', 'U:1':'V'}) x = df['X'].to_numpy() #x input coordinates of velocity file y = df['Y'].to_numpy() #y input coordinates of velocity file u = df['U'].to_numpy() #u input coordinates of velocity file v = df['V'].to_numpy() #v input coordinates of velocity file xgrid = np.linspace(-0.05, 0.05, 100) #output grid (initial x, final x, resolution) ygrid = np.linspace(-0.05, 0.05, 100) #output grid (initial y, final x, resolution) xx, yy = np.meshgrid(xgrid, ygrid) #grid is meshed points = np.transpose(np.vstack((x, y))) #creating a joint (x,y) matrix u_interp = interpolate.griddata(points, u, (xx, yy), method='cubic') #interpolating u v_interp = interpolate.griddata(points, v, (xx, yy), method='cubic') #interpolating v x1 = pd.DataFrame (data=np.hstack(xx), columns=['X']) y1 = pd.DataFrame (data=np.hstack(yy), columns=['Y']) u1 = pd.DataFrame (data=np.hstack(u_interp), columns=['U']) v1 = pd.DataFrame (data= np.hstack(v_interp), columns=['V']) df = pd.concat([x1,y1,u1,v1], axis=1) #df = df.round({'X': 4, 'Y': 4}) #df.groupby(['X', 'Y']).mean() df = df.drop_duplicates(['X', 'Y']) #df = df.dropna() df = df.sort_values(by=['X', 'Y']) print('Processing 
',round((i-1)/(finalFrame-initialFrame)*100,2), '%') clear_output(wait=True) df.to_csv(output_file, sep=' ', index = False, header = False) # - df # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd df = pd.read_csv("dataset.csv") df.head() df.RCL.unique() from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.naive_bayes import GaussianNB from sklearn.model_selection import train_test_split import numpy as np from sklearn.metrics import confusion_matrix from sklearn.metrics import f1_score from sklearn.metrics import accuracy_score from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 from sklearn.feature_selection import VarianceThreshold import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline del df['Time'] df.head() print(pd.Series(df['Subject'])) X = df.iloc[:, 0:5] y = df.iloc[:, -1] y.head() # + from sklearn import preprocessing x = X.values #returns a numpy array min_max_scaler = preprocessing.MinMaxScaler() x_scaled = min_max_scaler.fit_transform(x) X = pd.DataFrame(x_scaled) # - from sklearn.metrics import classification_report from sklearn.metrics import accuracy_score X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.20) classifier = DecisionTreeClassifier() classifier.fit(X_train, y_train) import pickle pickle.dump(classifier, open('model1.pkl','wb')) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.9 64-bit (''teste'': conda)' # language: python # name: python37964bittesteconda1543f1c219704c4ea077a4e6376e3e28 # --- # # # # JAI - Trust your data # # ## Resolution: finding duplicates in your data # This is an example of how to use the resolution capabilities of JAI. # # In this notebook we will use a subset of the [PC Games 2020](https://www.kaggle.com/jesneuman/pc-games) dataset to search for duplicate values in the 'Genres' column. # # You can install JAI in your environment using `pip install jai-sdk`. # # And you can read the docs [here](https://jai-sdk.readthedocs.io/en/stable/)! # # If you have any comments or suggestions, feel free to contact us: # # *In no direction that we turn do we find ease or comfort. If we are honest and if we have the will to win we find only danger, hard work and iron resolution.* - # + # JAI imports from jai import Jai from jai.processing import process_resolution # I/O import import pandas as pd # - # ## Reading data # it might take a few seconds to download this dataset (10MB) to your computer DATASET_URL = "https://jaipresentation.blob.core.windows.net/data/games_jai.parquet" df_games = pd.read_parquet(DATASET_URL) # checking values in the Platform column df_games["Platform"].value_counts() # We can see column `Platform` has some values that actually refer to the same thing (i.e., "Linux, macOS, PC" and "Linux, PC, macOS"). In other words, these are duplicate values. # ## We can use JAI to find duplicates and standardize their values! 
j = Jai("YOUR_AUTH_KEY") # ### We call `resolution` passing a given `name` for the database and the `data` itself (i.e., column "Platform") db_name = "games_resolution" col = "Platform" results = j.resolution(name=db_name, data=df_games[col], db_type="FastText") # ## OK, how do I interpret these results? # Each index in the `results` dataframe is related to an integer in the `resolution_id` column. This integer, in turn, is also an index! And it indicates which other sample is related to that index. So index 0 points to `resolution_id` 0, stating that sample number 0 is related to itself. No surprises there. results.head(10) # But what about indexes that are related to samples other than themselves? This will clear things up. Let's look at samples where the index DOES NOT match the `resolution_id` column: res2 = results.copy() res2["map_id"] = res2.index res3 = res2.loc[res2["resolution_id"] != res2["map_id"]] res3 # **Now we can see that samples 5 and 15 are actually referring to the same thing! Let's see if it checks out:** print(f"Item 5: {df_games[col].iloc[5]}\nItem 15: {df_games[col].iloc[15]}") # It does check out! These samples are clearly a permutation of one another. # ## We can create groups of samples that refer to the same thing # This makes it easier for us to check if the output of `fill` is actually making any sense # get groups groups = dict() for i in range(res3.shape[0]): fixed = res3["resolution_id"].iloc[i] moving = res3["map_id"].iloc[i] fixed_name = df_games[col].iloc[fixed] moving_name = df_games[col].iloc[moving] if fixed_name not in groups: groups[fixed_name] = {moving_name} else: groups[fixed_name].add(moving_name) # get all platforms that correspond to 'Linux, PC, macOS, PlayStation 4' groups["Linux, PC, macOS, PlayStation 4"] # The cell above shows all values that belong to the same group (`"Linux, PC, macOS, PlayStation 4"`). They are all permutations of one another and are indeed duplicates! # # Finally, we can standardize column `Platform` mapping permutations to a single, consistent value using the `results` dataframe. 
results.head(10) new_platform = [df_games[col].iloc[results["resolution_id"].iloc[item]] for item in range(results.shape[0])] new_platform_col = "Platform resolved" df_games_resolved = df_games.copy() df_games_resolved[new_platform_col] = new_platform print(f"[df_games]\nItem 5: {df_games[col].iloc[5]}\nItem 15: {df_games[col].iloc[15]}") print(f"\n[df_games_resolved]\nItem 5: {df_games_resolved[new_platform_col].iloc[5]}\nItem 15: {df_games_resolved[new_platform_col].iloc[15]}") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 深层神经网络 # 前面一章我们简要介绍了神经网络的一些基本知识,同时也是示范了如何用神经网络构建一个复杂的非线性二分类器,更多的情况神经网络适合使用在更加复杂的情况,比如图像分类的问题,下面我们用深度学习的入门级数据集 MNIST 手写体分类来说明一下更深层神经网络的优良表现。 # # ## MNIST 数据集 # mnist 数据集是一个非常出名的数据集,基本上很多网络都将其作为一个测试的标准,其来自美国国家标准与技术研究所, National Institute of Standards and Technology (NIST)。 训练集 (training set) 由来自 250 个不同人手写的数字构成, 其中 50% 是高中学生, 50% 来自人口普查局 (the Census Bureau) 的工作人员,一共有 60000 张图片。 测试集(test set) 也是同样比例的手写数字数据,一共有 10000 张图片。 # # 每张图片大小是 28 x 28 的灰度图,如下 # # ![](https://ws3.sinaimg.cn/large/006tKfTcly1fmlx2wl5tqj30ge0au745.jpg) # # 所以我们的任务就是给出一张图片,我们希望区别出其到底属于 0 到 9 这 10 个数字中的哪一个。 # # ## 多分类问题 # 前面我们讲过二分类问题,现在处理的问题更加复杂,是一个 10 分类问题,统称为多分类问题,对于多分类问题而言,我们的 loss 函数使用一个更加复杂的函数,叫交叉熵。 # # ### softmax # 提到交叉熵,我们先讲一下 softmax 函数,前面我们见过了 sigmoid 函数,如下 # # $$s(x) = \frac{1}{1 + e^{-x}}$$ # # 可以将任何一个值转换到 0 ~ 1 之间,当然对于一个二分类问题,这样就足够了,因为对于二分类问题,如果不属于第一类,那么必定属于第二类,所以只需要用一个值来表示其属于其中一类概率,但是对于多分类问题,这样并不行,需要知道其属于每一类的概率,这个时候就需要 softmax 函数了。 # # softmax 函数示例如下 # # ![](https://ws4.sinaimg.cn/large/006tKfTcly1fmlxtnfm4fj30ll0bnq3c.jpg) # # 对于网络的输出 $z_1, z_2, \cdots z_k$,我们首先对他们每个都取指数变成 $e^{z_1}, e^{z_2}, \cdots, e^{z_k}$,那么每一项都除以他们的求和,也就是 # # $$ # z_i \rightarrow \frac{e^{z_i}}{\sum_{j=1}^{k} e^{z_j}} # $$ # # 如果对经过 softmax 函数的所有项求和就等于 1,所以他们每一项都分别表示属于其中某一类的概率。 # # ## 交叉熵 # 交叉熵衡量两个分布相似性的一种度量方式,前面讲的二分类问题的 loss 函数就是交叉熵的一种特殊情况,交叉熵的一般公式为 # # $$ # cross\_entropy(p, q) = E_{p}[-\log q] = - \frac{1}{m} \sum_{x} p(x) \log q(x) # $$ # # 对于二分类问题我们可以写成 # # $$ # -\frac{1}{m} \sum_{i=1}^m (y^{i} \log sigmoid(x^{i}) + (1 - y^{i}) \log (1 - sigmoid(x^{i})) # $$ # # 这就是我们之前讲的二分类问题的 loss,当时我们并没有解释原因,只是给出了公式,然后解释了其合理性,现在我们给出了公式去证明这样取 loss 函数是合理的 # # 交叉熵是信息理论里面的内容,这里不再具体展开,更多的内容,可以看到下面的[链接](http://blog.csdn.net/rtygbwwwerr/article/details/50778098) # # 下面我们直接用 mnist 举例,讲一讲深度神经网络 # + import numpy as np import torch from torchvision.datasets import mnist # 导入 pytorch 内置的 mnist 数据 from torch import nn from torch.autograd import Variable # - # 使用内置函数下载 mnist 数据集 train_set = mnist.MNIST('./data', train=True, download=True) test_set = mnist.MNIST('./data', train=False, download=True) # 我们可以看看其中的一个数据是什么样子的 a_data, a_label = train_set[0] a_data a_label # 这里的读入的数据是 PIL 库中的格式,我们可以非常方便地将其转换为 numpy array a_data = np.array(a_data, dtype='float32') print(a_data.shape) # 这里我们可以看到这种图片的大小是 28 x 28 print(a_data) # 我们可以将数组展示出来,里面的 0 就表示黑色,255 表示白色 # # 对于神经网络,我们第一层的输入就是 28 x 28 = 784,所以必须将得到的数据我们做一个变换,使用 reshape 将他们拉平成一个一维向量 # + def data_tf(x): x = np.array(x, dtype='float32') / 255 x = (x - 0.5) / 0.5 # 标准化,这个技巧之后会讲到 x = x.reshape((-1,)) # 拉平,直接是一个-1便可以 x = torch.from_numpy(x) return x #目的是先对numpy进行标准化,然后进行展平,然后进行转化成tensor #以上是一个转化方法,下边做成各种set train_set = mnist.MNIST('./data', train=True, transform=data_tf, download=True) # 重新载入数据集,申明定义的数据变换 test_set = mnist.MNIST('./data', train=False, transform=data_tf, download=True) # - a, a_label = train_set[0] print(a.shape) 
print(a_label) from torch.utils.data import DataLoader # 使用 pytorch 自带的 DataLoader 定义一个数据迭代器 train_data = DataLoader(train_set, batch_size=64, shuffle=True) test_data = DataLoader(test_set, batch_size=128, shuffle=False) # 使用这样的数据迭代器是非常有必要的,如果数据量太大,就无法一次将他们全部读入内存,所以需要使用 python 迭代器,每次生成一个批次的数据 a, a_label = next(iter(train_data)) # 打印出一个批次的数据大小 print(a.shape) print(a_label.shape) # 使用 Sequential 定义 4 层神经网络 net = nn.Sequential( nn.Linear(784, 400), nn.ReLU(), nn.Linear(400, 200), nn.ReLU(), nn.Linear(200, 100), nn.ReLU(), nn.Linear(100, 10) ) net # 交叉熵在 pytorch 中已经内置了,交叉熵的数值稳定性更差,所以内置的函数已经帮我们解决了这个问题 # 定义 loss 函数 criterion = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(net.parameters(), 1e-1) # 使用随机梯度下降,学习率 0.1 # + # 开始训练 losses = [] acces = [] eval_losses = [] eval_acces = [] for e in range(20): train_loss = 0 train_acc = 0 net.train()#这个网络结构要提前声明 for im, label in train_data:#对于train集之中数据 im = Variable(im)#对输入进行形成这中形式 label = Variable(label) # 前向传播 out = net(im) loss = criterion(out, label) # 反向传播 optimizer.zero_grad() loss.backward() optimizer.step() # 记录误差 train_loss += loss.item()#每一次都是一个常数 # 计算分类的准确率 _, pred = out.max(1)#这个啥意思?哦哦是概率最大的类别 num_correct = (pred == label).sum().item() acc = num_correct / im.shape[0]#求准确率 train_acc += acc losses.append(train_loss / len(train_data)) acces.append(train_acc / len(train_data)) # 在测试集上检验效果 eval_loss = 0 eval_acc = 0 net.eval() # 将模型改为预测模式 for im, label in test_data:#在test集之中 im = Variable(im)#输入时variable形式 label = Variable(label)#标签也是variable形式 out = net(im)#输出向量 loss = criterion(out, label)#误差和label差距交叉熵 # 记录误差 eval_loss += loss.item() # 记录准确率 _, pred = out.max(1) num_correct = (pred == label).sum().item() acc = num_correct / im.shape[0] eval_acc += acc eval_losses.append(eval_loss / len(test_data)) eval_acces.append(eval_acc / len(test_data)) print('epoch: {}, Train Loss: {:.6f}, Train Acc: {:.6f}, Eval Loss: {:.6f}, Eval Acc: {:.6f}' .format(e, train_loss / len(train_data), train_acc / len(train_data), eval_loss / len(test_data), eval_acc / len(test_data))) # - # 画出 loss 曲线和 准确率曲线 import matplotlib.pyplot as plt # %matplotlib inline plt.title('train loss') plt.plot(np.arange(len(losses)), losses) plt.plot(np.arange(len(acces)), acces) plt.title('train acc') plt.plot(np.arange(len(eval_losses)), eval_losses) plt.title('test loss') plt.plot(np.arange(len(eval_acces)), eval_acces) plt.title('test acc') # 可以看到我们的三层网络在训练集上能够达到 99.9% 的准确率,测试集上能够达到 98.20% 的准确率 # **小练习:看一看上面的训练过程,看一下准确率是怎么计算出来的,特别注意 max 这个函数** # # **自己重新实现一个新的网络,试试改变隐藏层的数目和激活函数,看看有什么新的结果** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="0eXVUpoKZwNr" # #Task 1 :Try Linear Regression just using numpy (Without Tensorflow/Pytorch or other torch library). 
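# Before the scikit-learn version below, here is a minimal sketch of what "linear
# regression just using numpy" could look like: the closed-form least-squares fit via
# `np.linalg.lstsq`. The toy arrays simply reuse the first five rows of the inputs and
# targets defined in the next cells, so the numbers are illustrative only.

# +
import numpy as np

X_demo = np.array([[73., 67., 43.],
                   [91., 88., 64.],
                   [87., 134., 58.],
                   [102., 43., 37.],
                   [69., 96., 70.]])
Y_demo = np.array([[56., 70.],
                   [81., 101.],
                   [119., 133.],
                   [22., 37.],
                   [103., 119.]])

# append a bias column of ones and solve the least-squares problem min ||Xb @ W - Y||^2
Xb = np.hstack([X_demo, np.ones((X_demo.shape[0], 1))])
W, *_ = np.linalg.lstsq(Xb, Y_demo, rcond=None)   # W is (4, 2): three weights plus a bias per target

Y_pred = Xb @ W
print('mean squared error:', np.mean((Y_pred - Y_demo) ** 2))
# -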
# + id="W6NK3ltgYHJ3" executionInfo={"status": "ok", "timestamp": 1630477588788, "user_tz": -330, "elapsed": 1397, "user": {"displayName": "dhaval karen", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgvY6o4MuHT6YUeaqCq-6tAM61bz0NJJ6eXkl6VmA=s64", "userId": "03115065192551383929"}} import numpy as np from sklearn.linear_model import LinearRegression # + colab={"base_uri": "https://localhost:8080/"} id="lHVGJ7nNYHJ4" executionInfo={"status": "ok", "timestamp": 1630477591687, "user_tz": -330, "elapsed": 32, "user": {"displayName": "dhaval karen", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgvY6o4MuHT6YUeaqCq-6tAM61bz0NJJ6eXkl6VmA=s64", "userId": "03115065192551383929"}} outputId="40601b94-7325-4fd6-ecb5-57dc3f2936ef" inputs = np.array([[73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70], [73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70], [73, 67, 43], [91, 88, 64], [87, 134, 58], [102, 43, 37], [69, 96, 70]], dtype='float32') print(inputs.shape) # + id="3NunoIBCYHJ6" executionInfo={"status": "ok", "timestamp": 1630477595596, "user_tz": -330, "elapsed": 10, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgvY6o4MuHT6YUeaqCq-6tAM61bz0NJJ6eXkl6VmA=s64", "userId": "03115065192551383929"}} targets = np.array([[56, 70], [81, 101], [119, 133], [22, 37], [103, 119], [56, 70], [81, 101], [119, 133], [22, 37], [103, 119], [56, 70], [81, 101], [119, 133], [22, 37], [103, 119]], dtype='float32') # + id="GGlX1RMbYHJ6" executionInfo={"status": "ok", "timestamp": 1630477599362, "user_tz": -330, "elapsed": 382, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgvY6o4MuHT6YUeaqCq-6tAM61bz0NJJ6eXkl6VmA=s64", "userId": "03115065192551383929"}} from sklearn.model_selection import train_test_split x_train,x_test,y_train,y_test = train_test_split(inputs,targets,test_size=0.4, random_state=58) # + colab={"base_uri": "https://localhost:8080/"} id="_7ez_dU0YHJ7" executionInfo={"status": "ok", "timestamp": 1630477601550, "user_tz": -330, "elapsed": 27, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgvY6o4MuHT6YUeaqCq-6tAM61bz0NJJ6eXkl6VmA=s64", "userId": "03115065192551383929"}} outputId="ac06cf40-b1fc-4e9b-b5ae-cee0783dfc26" reg = LinearRegression().fit(x_train,y_train) print(reg) # + colab={"base_uri": "https://localhost:8080/"} id="qisKJXBxYHJ8" executionInfo={"status": "ok", "timestamp": 1630477605536, "user_tz": -330, "elapsed": 346, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgvY6o4MuHT6YUeaqCq-6tAM61bz0NJJ6eXkl6VmA=s64", "userId": "03115065192551383929"}} outputId="2dcbfde6-5e24-4d62-d17a-486e2388f09c" print(reg.intercept_) print(reg.coef_) # + colab={"base_uri": "https://localhost:8080/"} id="-wIb51_WYHJ9" executionInfo={"status": "ok", "timestamp": 1630477610002, "user_tz": -330, "elapsed": 418, "user": {"displayName": "dhaval karen", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgvY6o4MuHT6YUeaqCq-6tAM61bz0NJJ6eXkl6VmA=s64", "userId": "03115065192551383929"}} outputId="4b8bf666-7d6c-40f0-90c3-66593b03a7ff" y_pred = reg.predict(x_test) print(y_pred,"\n") print(y_test) # + colab={"base_uri": "https://localhost:8080/"} id="yrVwSpngYHJ-" executionInfo={"status": "ok", "timestamp": 1630477619116, "user_tz": -330, "elapsed": 360, "user": {"displayName": "dhaval karen", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgvY6o4MuHT6YUeaqCq-6tAM61bz0NJJ6eXkl6VmA=s64", "userId": 
"03115065192551383929"}} outputId="723dfcb3-320e-4a31-fc0e-e30beb135126" from sklearn import metrics print('\n Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Casa # language: casa # name: casapy # --- # ## Flux density values for standard flux calibrators # # To convert correlation coefficients to absolute flux densities # ### J0408-6545 # RA: 04h08m20.4s # Decl: -65d45m09.6s # Coefficients: a=-0.9790, b=3.3662, c=-1.1216, d=0.0861 # $log_{10}(S) = a + b*log_{10}(f) +c*log_{10}(f)^2 + d*log_{10}(f)^3$ # with $S$ in Jy and $f$ in MHz # + a=-0.9790 b=3.3662 c=-1.1216 d=0.0861 f = 1284. # MHz (L-band) log_S = a + b*np.log10(f) + c*np.log10(f)**2 + d*np.log10(f)**3 print('Calculated Stokes I for J0408-6545 {} [Jy]'.format(10**log_S)) # - # ``` # setjy(vis=msfile, field='J0408-6545', scalebychan=True, standard='manual', fluxdensity=[17.1,0,0,0]) # ``` # ### J1939-6342 # RA: 19h39m25.05s # Decl: -63d42m43.63s # Coefficients: a=-30.7667 b=26.4908 c=-7.0977 d=0.605334 # $log_{10}(S) = a + b*log_{10}(f) +c*log_{10}(f)^2 + d*log_{10}(f)^3$ # with $S$ in Jy and $f$ in MHz # + a=-30.7667 b=26.4908 c=-7.0977 d=0.605334 f = 1284. # MHz (L-band) log_S = a + b*np.log10(f) + c*np.log10(f)**2 + d*np.log10(f)**3 print('Calculated Stokes I for J1939-6342 {} [Jy]'.format(10**log_S)) # - # ``` # setjy(vis=msfile, field='J1939-6342', scalebychan=True, standard='Stevens-Reynolds 2016', fluxdensity=-1) # ``` # ### J1331+3030 # RA: 13h31m08.3s # Decl: +30d30m32.96s # ``` # setjy(vis=msfile, field='3C286', scalebychan=True, standard='Perley-Butler 2013', fluxdensity=-1) # ``` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # default_exp data_processing.time_series # - # export import numpy as np import pandas as pd import sys import os from scipy import stats # + # export def cast_date(series, to='month', type='str'): if to == 'month': fmt = '%Y-%m' dt_type = 'datetime64[M]' elif to == 'day': fmt = '%Y-%m-%d' dt_type = 'datetime64[D]' if type == 'str': return pd.to_datetime(series).dt.strftime(fmt) elif type == 'datetime': return pd.to_datetime(series).astype(dt_type) # + #export def count_time_value(dataframe, ts_col, group_cols, by='day'): df = dataframe.copy() df[f'ts_{by}'] = cast_date(df[ts_col], to=by) groups = [f'ts_{by}'] groups.extend(group_cols) return df.groupby(groups).size().reset_index(name='counts') # - #export def time_over_time(data_frame: pd.DataFrame = None, ts: str=None, value: str=None , groups: str=None, shift: int=1, drop_last_value: bool=False)->pd.DataFrame: """ return: series that contains time over time """ df = data_frame.copy() if groups: last_value_col = 'last_{g_col}_{v_col}'.format(g_col='_'.join(groups), v_col=value) df[last_value_col] = df.groupby(groups)[value].shift(shift) else: last_value_col = 'last_{v_col}'.format(v_col=value) df[last_value_col] = df[value].shift(shift) tot_col = '{value}_{u}o{u}_{shift}'.format(value=value, u=ts[0], shift=shift) df[tot_col] = df[value] / df[last_value_col] return df[tot_col] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: 
python3 # --- # + import sys import datetime as dt import os.path import math import numpy as np import cupy as cp import ceo import matplotlib.pyplot as plt # %matplotlib inline from scipy import ndimage import scipy.interpolate as scyint from collections import OrderedDict import IPython # %pylab inline # - # ## Initialize CEO objects # Make sure that the .ceo file containing the theoretical KL modes contains two sets (one for central, one for outer), and that the central KL pupil has the right OC ratio # + jupyter={"outputs_hidden": true} #-- Karhunen-Loeve per M2 segment M2_n_modes = 600 gmt = ceo.GMT_MX(M2_mirror_modes=u"Karhunen-Loeve", M2_N_MODE=M2_n_modes) # + jupyter={"outputs_hidden": true} D = 25.5 nPx = 1024 gs = ceo.Source("R",zenith=0.,azimuth=0., rays_box_size=D, rays_box_sampling=nPx, rays_origin=[0.0,0.0,25]) # - # ## Estimate central obscuration of central segment (S7) # From the optical design document (GMT-DOC-00010 Rev. F), the inner clear aperture (ICA) diameter (baffled) of S7 is 2875 mm, and the clear aperture diameter of all segments is 8365 mm. Hence, the central occultation ratio of S7 is: # $ \frac{2875}{8365}=0.344$. # # Update 1): Based on the GMT-CAD-161007 Rev C, there is a M2 (circular) baffle 3.7 m in diameter. This becomes the dominant obstruction to compute the effective central occultation ratio of S7, becoming: # $ \frac{3700}{8365}=0.44232$. # # Update 2): Based on new information, the M2 baffle will be reduced to 3.3 m in diameter, becoming: $ \frac{3300}{8365}=0.3945$. # + gmt.reset() gs.reset() gmt.propagate(gs, project_truss_onaxis=True) ## Piston masks for each segment P = np.rollaxis( np.array(gs.rays.piston_mask ),0,3) ## Find center coordinates (in pixels) of each segment mask u = np.arange(gs.n) v = np.arange(gs.m) x,y = np.meshgrid(u,v) x = x.reshape(1,-1,1) y = y.reshape(1,-1,1) xc = np.sum(x*P,axis=1)/P.sum(axis=1) yc = np.sum(y*P,axis=1)/P.sum(axis=1) ## Polar coordinates rho = np.hypot( x - xc[:,np.newaxis,:], y - yc[:,np.newaxis,:]) #temporal rho vector theta = np.arctan2( y - yc[:,np.newaxis,:], x - xc[:,np.newaxis,:]) * P # + active="" # ## Preliminary estimation of radius (in pixels) of each segment mask (assuming that there is no central obscuration) # Rs = np.sqrt(P.sum(axis=1)/np.pi) # # ## Estimate central obscuration area of each segment mask # # ##--- Method 1. # ## Note: this method works when there are no other mask features (like spiders) # #ObsArea = np.sum(rho < 0.9*Rs[:,np.newaxis,:] * ~P.astype('bool'), axis=1) # # ## Improve estimation of radius of each segment mask # Rs = np.sqrt( (P.sum(axis=1)+ObsArea) / np.pi) # # ## Determine central occultation diameter (in % of segment size) # Roc = np.sqrt(ObsArea/np.pi) / Rs # # print('Segment diameter estimation [m]: ') # print(np.array_str(Rs.ravel()*2*D/nPx, precision=3, suppress_small=True)) # # print("\nCentral occultation ratio for each segment: ") # print(np.array_str(Roc.ravel(), precision=3, suppress_small=True)) # + ##--- Method 2. ## Note: Uses radial profiles of segment masks to estimate radius and OC more precisely. 
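## Added note on the method below: every pixel of a segment mask is labelled with an
## integer radial-distance bin, ndimage.mean over those labels gives the mask's mean
## value per bin (a radial profile), and the first and last bins with a non-zero
## profile locate the central-obscuration (inner) radius and the outer radius in pixels.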
Roc_pix = [] Rs = [] for this_seg in range(7): nbins = np.round(rho[this_seg].max()) Rflabel = np.rint(nbins * rho[this_seg]/rho[this_seg].max()).reshape((nPx,nPx)) Rfidx = np.arange(0,Rflabel.max()+1) Sm = np.squeeze(P[this_seg,:]).reshape((nPx,nPx)) Smprof = ndimage.mean(Sm, labels=Rflabel, index=Rfidx) midx = np.squeeze(np.argwhere(Smprof > 0)) Roc_pix.append( np.where(midx[0] <=1, 0, midx[0]) ) #OC radius Rs.append( midx.max()+1) # Estimate semi-major axis Roc_pix = np.array(Roc_pix) Rs = np.array(Rs) Roc = Roc_pix/Rs print('Segment diameter estimation [m]: ') print(np.array_str(Rs.ravel()*2*D/nPx, precision=3, suppress_small=True)) print("\nCentral occultation ratio for each segment: ") print(np.array_str(Roc.ravel(), precision=3, suppress_small=True)) # + # Show quality of central segment fitting this_seg=6 Sm = np.squeeze(P[this_seg,:]).reshape((nPx,nPx)) Sf = np.squeeze(np.logical_and(rho[this_seg,:]Roc[this_seg]*Rs[this_seg])).reshape((nPx,nPx)) #plt.imshow(Sm.astype('int'), interpolation='None') plt.imshow(Sm.astype('int')-Sf.astype('int'), interpolation='None') plt.xlim([xc[this_seg]-Rs[this_seg]-5, xc[this_seg]+Rs[this_seg]+5]) plt.ylim([yc[this_seg]-Rs[this_seg]-5, yc[this_seg]+Rs[this_seg]+5]) plt.colorbar() # - # ## Retrieve Rod's KL modes # *Note:* There are two sets: # 1. one for outer segments; # 2. one for the central segment. (Rod updated the M2_KarhunenLoeve.ceo file with theoretical KL modes defined in a circular pupil with the requested OC ratio) ## Retrieve M2 KL modes M2 = gmt.M2.modes.M.host() print(M2.shape) # + jupyter={"outputs_hidden": true} ## Select central segment OC OC_S7 = 0.3945 #OC_S7 = 0.344 #OC_S7 = 0.359 # + jupyter={"outputs_hidden": true} #Create circular mask rows = gmt.M2.modes.N_SAMPLE cols = gmt.M2.modes.N_SAMPLE nsets = gmt.M2.modes.N_SET nkls = gmt.M2.modes.N_MODE xVec = np.linspace(-1,1,cols) yVec = np.linspace(-1,1,rows) [x,y] = np.meshgrid(xVec,yVec) # rows x cols r = np.hypot(x,y) #Mask for outer segments M2masko = np.full((rows,cols),np.nan) M2masko[(r <= 1)]=1.0 M2npo = np.sum(r <= 1) #Mask for central segment M2maskc = np.full((rows,cols),np.nan) M2maskc[np.logical_and(r <= 1, r >= OC_S7)] = 1.0 M2npc = np.sum(M2maskc == 1) # + #### Check visually that the mask for central segment matches the actual segment pupil mask this_seg=6 extenttt = np.squeeze([xc[this_seg]-Rs[this_seg], xc[this_seg]+Rs[this_seg], yc[this_seg]-Rs[this_seg], yc[this_seg]+Rs[this_seg]]) Sm = np.squeeze(P[this_seg,:]).reshape((nPx,nPx)) fig, (ax1,ax2) = plt.subplots(ncols=2) fig.set_size_inches(15,5) imm = ax1.imshow(Sm, interpolation='None')#, extent=[-1,1,-1,1]) ax1.set_xlim([xc[this_seg]-Rs[this_seg], xc[this_seg]+Rs[this_seg]]) ax1.set_ylim([yc[this_seg]-Rs[this_seg], yc[this_seg]+Rs[this_seg]]) ax1.grid() imm1 = ax2.imshow(M2maskc, extent=extenttt) ax2.grid() # + ## Choose KL to display this_set = 1 # 0: outer segments; 1: central segment this_kl = 596 if this_set == 0: M2mask = M2masko M2np = M2npo else: M2mask = M2maskc M2np = M2npc KLmap = np.reshape(M2[:,this_set*nkls+this_kl], (rows,cols) )*M2mask KLrms = np.sqrt( np.sum(KLmap[M2mask==1]**2)/M2np ) print("RMS of KL mode %d of set %d is: %.2f"%(this_kl, this_set, KLrms)) fig, (ax1,ax2) = plt.subplots(ncols=2) fig.set_size_inches(15,5) imm = ax1.imshow(KLmap, cmap=plt.cm.winter) fig.colorbar(imm, ax=ax1) ax1.set_title('M2 KL %d'%(this_kl), fontsize=15) ax2.plot(xVec,KLmap[:,int(cols/2)]) ax2.plot(yVec,KLmap[int(rows/2),:]) ax2.grid() # - # ## Compute the cross-product matrix # + jupyter={"outputs_hidden": 
true} ## Choose set to process this_set = 1 # 0: outer segments; 1: central segment # + if this_set == 0: M2mask = M2masko M2np = M2npo else: M2mask = M2maskc M2np = M2npc KLmat = [] for ii in range(nkls): KLmap = np.reshape(M2[:,this_set*nkls+ii], (rows,cols) )*M2mask KLmat.append( KLmap[M2mask==1]) KLmat = np.transpose(KLmat) Dmat = np.matmul(np.transpose(KLmat), KLmat)/M2np; fig, (ax1,ax2) = plt.subplots(ncols=2) fig.set_size_inches(15,5) imm = ax1.imshow(Dmat, cmap=plt.cm.winter) fig.colorbar(imm, ax=ax1) ax1.set_title('cross-product matrix', fontsize=15) ax2.plot(np.sqrt(np.diag(Dmat)), 'o--') ax2.grid() # - ## KL modes that have large RMS w.r.t to the majority (as seen in plot above) np.where(np.sqrt(np.diag(Dmat)) > 0.9) # ## Re-orthonormalize KL modes # + Lmat = np.linalg.cholesky(Dmat) Umat, Smat, Vmat =np.linalg.svd(Lmat) fig, ax = plt.subplots() fig.set_size_inches(7,5) ax.plot(Smat/np.max(Smat), 'o-', ) ax.grid() ax.tick_params(labelsize=14) ax.set_xlabel('eigenmode number', fontsize=14) ax.set_ylabel('normalized singular value', fontsize=14) #ax.set_xlim([2400,2500]) # + inv_cond = 1e-12 inv_Lmat = np.linalg.pinv(Lmat, rcond=inv_cond) KLmato = np.matmul(KLmat, np.transpose(inv_Lmat)) Dmato = np.matmul(np.transpose(KLmato), KLmato)/M2np; fig, (ax1,ax2) = plt.subplots(ncols=2) fig.set_size_inches(15,5) imm = ax1.imshow(Dmato, cmap=plt.cm.winter) fig.colorbar(imm, ax=ax1) ax1.set_title('cross-product matrix', fontsize=15) ax2.plot(np.diag(Dmato), 'o--') ax2.grid() ax2.set_ylim([0,1.2]) # + ## Visualize re-ortho modes this_kl=595 KLmap = np.zeros((rows,cols)) KLmap[M2mask==1] = KLmato[:,this_kl] fig, (ax1,ax2) = plt.subplots(ncols=2) fig.set_size_inches(15,5) imm = ax1.imshow(KLmap, cmap=plt.cm.winter) fig.colorbar(imm, ax=ax1) ax1.set_title('M2 KL %d'%(this_kl), fontsize=15) ax2.plot(xVec,KLmap[:,int(cols/2)]) ax2.plot(yVec,KLmap[int(rows/2),:]) ax2.grid() # - # ## Create set of pure segment piston, tip, and tilt # + jupyter={"outputs_hidden": true} PTTmat = np.zeros((M2np,3)) PTTmat[:,0] = 1 PTTmat[:,1] = x[M2mask==1] PTTmat[:,2] = y[M2mask==1] PTT_Dmat = np.matmul(np.transpose(PTTmat), PTTmat)/M2np; PTT_Lmat = np.linalg.cholesky(PTT_Dmat) PTT_inv_Lmat = np.linalg.pinv(PTT_Lmat) PTTmato = np.matmul(PTTmat, np.transpose(PTT_inv_Lmat)) # + ## Visualize PTT modes this_kl=1 KLmap = np.zeros((rows,cols)) KLmap[M2mask==1] = PTTmato[:,this_kl] fig, (ax1,ax2) = plt.subplots(ncols=2) fig.set_size_inches(15,5) imm = ax1.imshow(KLmap, cmap=plt.cm.winter) fig.colorbar(imm, ax=ax1) ax1.set_title('M2 PTT %d'%(this_kl), fontsize=15) ax2.plot(xVec,KLmap[:,int(cols/2)]) ax2.plot(yVec,KLmap[int(rows/2),:]) ax2.grid() # - # ## Remove PTT from all KL modes, and merge with pure PTT modes # + jupyter={"outputs_hidden": true} inv_PTTmato = np.linalg.pinv(PTTmato) ptt_coeffs = np.matmul(inv_PTTmato, KLmato) KLmato_pttf = KLmato - np.matmul(PTTmato, ptt_coeffs) ModesMat = np.hstack((PTTmato, KLmato_pttf[:,3:])) # + ## Visualize final modes this_kl=1 KLmap = np.zeros((rows,cols)) KLmap[M2mask==1] = ModesMat[:,this_kl] fig, (ax1,ax2) = plt.subplots(ncols=2) fig.set_size_inches(15,5) imm = ax1.imshow(KLmap, cmap=plt.cm.winter) fig.colorbar(imm, ax=ax1) ax1.set_title('M2 KL %d'%(this_kl), fontsize=15) ax2.plot(xVec,KLmap[:,int(cols/2)]) ax2.plot(yVec,KLmap[int(rows/2),:]) ax2.grid() # - # ## Extrapolate outside pupil (required by CEO) # + #--- Extrapolate (using near-neighbor method) to points outside mirror ModesMatCEO = np.zeros((rows,cols,nkls)) maskOffOut = np.logical_and(np.isnan(M2mask), r >= 0.9) # 
points outside circle maskOffIn = np.logical_and(np.isnan(M2mask), r <= 0.9) # points within OC pointsData = np.concatenate([x[ M2mask==1][:,None], y[ M2mask==1][:,None]],axis=1) pointsOut = np.concatenate([x[ maskOffOut][:,None], y[ maskOffOut][:,None]],axis=1) pointsIn = np.concatenate([x[ maskOffIn][:,None], y[ maskOffIn][:,None]],axis=1) for this_kl in range(nkls): ModesMatCEO[M2mask==1,this_kl] = ModesMat[:,this_kl] ModesMatCEO[maskOffOut,this_kl] = scyint.griddata(pointsData, ModesMatCEO[M2mask==1,this_kl], pointsOut, method='nearest') if this_set == 1: ModesMatCEO[ maskOffIn,this_kl] = scyint.griddata(pointsData, ModesMatCEO[M2mask==1,this_kl], pointsIn, method='cubic') # + ## Visualize extrapolated modes this_kl=3 KLmap = ModesMatCEO[:,:,this_kl] fig, (ax1,ax2) = plt.subplots(ncols=2) fig.set_size_inches(15,5) imm = ax1.imshow(KLmap, cmap=plt.cm.winter) fig.colorbar(imm, ax=ax1) ax1.set_title('M2 KL %d'%(this_kl), fontsize=15) ax2.plot(xVec,KLmap[:,int(cols/2)]) ax2.plot(yVec,KLmap[int(rows/2),:]) ax2.grid() # + jupyter={"outputs_hidden": true} ### save outer KL modes to KL1 KL1 = [] for this_kl in range(nkls): KL1.append(ModesMatCEO[:,:,this_kl]) # + jupyter={"outputs_hidden": true} ### save central segment KL modes to KL2 KL2 = [] for this_kl in range(nkls): KL2.append(ModesMatCEO[:,:,this_kl]) # + jupyter={"outputs_hidden": true} suit = OrderedDict() suit['Ni'] = np.array( rows, dtype=np.int32) #assume number of rows = number of cols suit['L'] = np.array( 1.05, dtype=np.double) # size of M2 segment suit['N_SET'] = np.array( 2, dtype=np.int32) suit['N_MODE'] = np.array( len(KL1), dtype=np.int32) suit['s2b'] = np.array( [0,0,0,0,0,0,1], dtype=np.int32) suit['M'] = np.dstack(KL1+KL2).flatten(order='F') path_to_modes = '/storage/data02/gmtMirrors_repository/M2_KarhunenLoeveModes_ortho_S7OC%1.3f_cubicInt.ceo'%OC_S7 with open(path_to_modes,'w') as f: for key in suit: suit[key].tofile(f) # + jupyter={"outputs_hidden": true} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #Import packages import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline sns.set() # Draw 1,000 samples from uniform & plot results x = np.random.rand(1000) plt.hist(x); # + # Bootstrap WRONG proportions = [] for i in range(1000): # Modded to 1000 for testing subset = np.random.choice(x, size=200) # Wrong size for question clicks = subset <= .5 n_clicks = sum(clicks) prop = n_clicks/len(clicks) proportions.append(prop) print(np.mean(proportions), np.std(proportions)) # Backwards wrong table method! bootm = np.mean(proportions) boots = np.std(proportions) print(bootm - 1.96*boots, bootm + 1.96*boots) # Using true critical t's TOO WIDE # + # Bootstrap RIGHT n = len(x) # Sample size for each bootstrap resample (must equal original!) m = 10000 # Number of bootstrap samples alpha = 0.05 # Interval type (classical) proportions = [] for i in range(m): bootsample = np.random.choice(x, size = n, replace = True) n_clicks = sum(bootsample <= .5) prop = n_clicks/n proportions.append(prop) sp = np.sort(proportions) # Sort! 
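# Percentile-method interval: with m = 10000 bootstrap samples and alpha = 0.05, the
# endpoints printed below are sp[int(0.025*10000)] = sp[250] and sp[int(0.975*10000)] = sp[9750].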
# For alpha/2, 1-alpha/2 interval, need to multiply against m the number of bootstrap samples: print(sp[int(alpha/2*m)], sp[int((1-alpha/2)*m)]) # + m = 1000 alpha = 0.05 int((alpha/2)*m) # - len(clicks) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Exception handling try: a=5 b='0' print(a/b) except: print('Some error occurred.') print("Out of try except blocks.") # Some error occurred. # Out of try except blocks. # + try: a=5 b='0' print (a+b) except TypeError: print('Unsupported operation') print ("Out of try except blocks") # Unsupported operation # Out of try except blocks # + try: a=5 b=0 print (a/b) except TypeError: print('Unsupported operation') except ZeroDivisionError: print ('Division by zero not allowed') print ('Out of try except blocks') # Division by zero not allowed # Out of try except blocks # - # ### else and finally try: #statements in try block except: #executed when error in try block else: #executed if try block is error-free finally: #executed irrespective of exception occured or not # + try: print('try block') x=int(input('Enter a number: ')) y=int(input('Enter another number: ')) z=x/y except ZeroDivisionError: print("except ZeroDivisionError block") print("Division by 0 not accepted") else: print("else block") print("Division = ", z) finally: print("finally block") x=0 y=0 print ("Out of try, except, else and finally blocks." ) # try block # Enter a number: 10 # Enter another number: 2 # else block # Division = 5.0 # finally block # Out of try, except, else and finally blocks. # - # ### Raise an exception # + try: x=int(input('Enter a number upto 100: ')) if x > 100: raise ValueError(x) except ValueError: print(x, "is out of allowed range") else: print(x, "is within the allowed range") # Enter a number upto 100: 200 # 200 is out of allowed range # Enter a number upto 100: 50 # 50 is within the allowed range # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # !pip install translators import pandas as pd # current version have logs, which is not very comfortable import translators as ts from multiprocessing import Pool from tqdm import * CSV_PATH = '../input/jigsaw-multilingual-toxic-comment-classification/jigsaw-toxic-comment-train.csv' LANG = 'es' API = 'google' # + _uuid="0b35a13e-fcba-449b-9404-3c8e54aa8b8f" _cell_guid="d6bdf012-2b31-45cb-afb1-968fb7d74897" def translator_constructor(api): if api == 'google': return ts.google elif api == 'bing': return ts.bing elif api == 'baidu': return ts.baidu elif api == 'sogou': return ts.sogou elif api == 'youdao': return ts.youdao elif api == 'tencent': return ts.tencent elif api == 'alibaba': return ts.alibaba else: raise NotImplementedError(f'{api} translator is not realised!') def translate(x): try: return [x[0], translator_constructor(API)(x[1], 'en', LANG), x[2]] except: return [x[0], None, [2]] def imap_unordered_bar(func, args, n_processes: int = 48): p = Pool(n_processes, maxtasksperchild=100) res_list = [] with tqdm(total=len(args)) as pbar: for i, res in tqdm(enumerate(p.imap_unordered(func, args))): pbar.update() res_list.append(res) pbar.close() p.close() p.join() return res_list def main(): df = pd.read_csv(CSV_PATH).sample(100) tqdm.pandas('Translation 
progress') df[['id', 'comment_text', 'toxic']] = imap_unordered_bar(translate, df[['id', 'comment_text', 'toxic']].values) df.to_csv(f'jigsaw-toxic-comment-train-{API}-{LANG}.csv') if __name__ == '__main__': main() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: venv_oecdflows # language: python # name: venv_oecdflows # --- # # Working on Model import torch import torch.nn as nn import pandas as pd import numpy as np import os # # Approach Based on the Following # + # https://github.com/AndriyMulyar/bert_document_classification # - # ### Unfortunately as shown in the following lines, this repo has problems!!!! from bert_document_classification.models import SmokerPhenotypingBert from bert_document_classification.models import ObesityPhenotypingBert smoking_classifier = SmokerPhenotypingBert(device='cuda', batch_size=10) #defaults to GPU prediction| # + smoking_classifier = SmokerPhenotypingBert(device='cpu', batch_size=10) #defaults to GPU prediction # - obesity_classifier = ObesityPhenotypingBert(device='cpu', batch_size=10) #or CPU if you would like. smoking_classifier.predict(["I'm a document! Make me long and the model can still perform well!"]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: my_py # language: python # name: my_py # --- # + import pandas as pd import os import datetime import pickle import re import time from collections import Counter import numpy as np import nltk nltk.data.path # - from nltk.tokenize import sent_tokenize, word_tokenize #nltk.download('stopwords') from nltk.corpus import stopwords import spacy nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner']) import random # ## 1. Import and Preprocess Data # Define a text preprocessor (lemmatizer) def my_preprocessor(text): doc=nlp(text.lower()) lemmas=[token.lemma_ for token in doc if not token.is_punct | token.is_space] texts_out=" ".join(lemmas) return texts_out # Import expanded reg sentences df_regSentsExpand=pd.read_pickle('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/allRegSentsExpand.pkl') print(df_regSentsExpand.info()) # Convert reg sentences to list regSentsExpand=df_regSentsExpand['RegSentsExpand'].tolist() print(len(regSentsExpand), regSentsExpand[0]) # Examples print(regSentsExpand[4031]) print(my_preprocessor(regSentsExpand[4031])) # Preprocess all expanded reg sentences regSentsExpand_lemmatized=[my_preprocessor(sent) for sent in regSentsExpand] # Examples print(len(regSentsExpand_lemmatized), regSentsExpand_lemmatized[0]) print(regSentsExpand[1],regSentsExpand_lemmatized[1]) # + # # Save preprocessed expanded reg sentences # with open('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/regSentsExpand_lemmatized', 'wb') as fp: # pickle.dump(regSentsExpand_lemmatized, fp) # - # ## 2. 
Match Noun Chunks # Import preprocessed expanded reg sentences with open ('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/regSentsExpand_lemmatized', 'rb') as fp: regSentsExpand_lemmatized = pickle.load(fp) print(len(regSentsExpand_lemmatized), regSentsExpand_lemmatized[0]) # Use all unique noun chunks in rule titles with open ('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/ruleTitleUniqueNounChunks', 'rb') as fp: nounchunks = pickle.load(fp) print(len(nounchunks),nounchunks[0:20]) # Compile a new re pattern with all noun chunks pattern=re.compile(r"\b"+r"\b|\b".join(map(re.escape, nounchunks))+r"\b") # + # Match noun chunks in all expanded reg sentences start_time = time.time() nounchunk_match=[] nounchunk_match_words=[] for sent in regSentsExpand_lemmatized: match_words=[] match=0 find=pattern.findall(sent) if len(find)>0: match_words=find match=len(find) nounchunk_match.append(match) nounchunk_match_words.append(match_words) print("--- %s seconds ---" % (time.time() - start_time)) # - # Examples of matched noun chunks print(len(nounchunk_match), len(nounchunk_match_words)) print(nounchunk_match_words[-1], nounchunk_match[-1]) print(df_regSentsExpand.head()) print(df_regSentsExpand[df_regSentsExpand['NounChunksMatch']>0].info()) # Export results: matched noun chunks in expanded reg sentences df_regSentsExpand['NounChunksMatch']=nounchunk_match df_regSentsExpand['NounChunkMatchWords']=nounchunk_match_words df_regSentsExpand.to_pickle('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/RegSentsExpand_NounChunks3.pkl') # ## 3. All Noun Chunk Occurences Across All Unique Articles df_regSentsExpand=pd.read_pickle('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/RegSentsExpand_NounChunks3.pkl') print(df_regSentsExpand.info()) # Remove duplicated articles IDs_nodup=pd.read_csv('/home/ec2-user/SageMaker/New Uncertainty/IDs_no_duplicates.csv') print(IDs_nodup.info()) df_regSentsExpand['ID']=df_regSentsExpand['ID'].astype('int64') df_regSentsExpand=IDs_nodup.merge(df_regSentsExpand,on='ID',how='left').reset_index(drop=True) print(df_regSentsExpand.info()) # All unique macthed noun chunks and occurences from expanded reg sentences allMatchWords=[] for list in df_regSentsExpand['NounChunkMatchWords']: allMatchWords=allMatchWords+list print(len(allMatchWords)) print(allMatchWords[100]) allMatchWordsCount=Counter(allMatchWords) print(allMatchWordsCount) df_MatchWords = pd.DataFrame(allMatchWordsCount.items(),columns = ['Noun Chunks','Occurences']) print(df_MatchWords.info()) df_MatchWords=df_MatchWords.sort_values('Occurences',ascending=False).reset_index(drop=True) print(df_MatchWords.head()) # Export noun chunk occurences df_MatchWords.to_csv('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/RegSentsExpand_NounChunkOccurences3.csv',index=False) # ## 4. 
Remove General Terms after Human Auditing # Remove general terms from matched noun chunks in expanded reg sentences df_MatchWordsRemove=pd.read_csv('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/RegSentsExpand_NounChunkOccurences3_Remove.csv') print(df_MatchWordsRemove.info()) # Noun chunks to be removed ("general terms") matchwords_remove=df_MatchWordsRemove['Noun Chunks'].tolist() print(matchwords_remove[0:20]) # Reg sections with all matched noun chunks df_regSentsExpand=pd.read_pickle('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/RegSentsExpand_NounChunks3.pkl') print(df_regSentsExpand.info()) # Remove general terms from all matched noun chunks df_regSentsExpand['NounChunkMatchWordsFiltered']=np.nan df_regSentsExpand['NounChunkMatchFiltered']=0 for i in range(0,len(df_regSentsExpand['NounChunkMatchWords'])): df_regSentsExpand['NounChunkMatchWordsFiltered'][i]=[w for w in df_regSentsExpand['NounChunkMatchWords'][i] if w not in matchwords_remove] df_regSentsExpand['NounChunkMatchFiltered'][i]=len(df_regSentsExpand['NounChunkMatchWordsFiltered'][i]) print(df_regSentsExpand.info()) print(df_regSentsExpand[df_regSentsExpand['NounChunkMatchFiltered']>0].info()) df_regSentsExpand.head() # Check some examples for i in range(0,100): print(df_regSentsExpand['NounChunkMatchWords'][i]) print(df_regSentsExpand['NounChunkMatchWordsFiltered'][i],'\n') df_regSentsExpand.to_pickle('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/RegSentsExpand_NounChunks3.pkl') # ## 5. Filtered Noun Chunk Occurences Acorss Regulation-related Articles df_regSentsExpand=pd.read_pickle('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/RegSentsExpand_NounChunks3.pkl') print(df_regSentsExpand.info()) # Remove duplicated articles IDs_nodup=pd.read_csv('/home/ec2-user/SageMaker/New Uncertainty/IDs_no_duplicates.csv') print(IDs_nodup.info()) df_regSentsExpand['ID']=df_regSentsExpand['ID'].astype('int64') df_regSentsExpand=IDs_nodup.merge(df_regSentsExpand,on='ID',how='left').reset_index(drop=True) print(df_regSentsExpand.info()) # Regulation-related articles df_reg=df_regSentsExpand[df_regSentsExpand['NounChunkMatchFiltered']>0].reset_index(drop=True) print(df_reg.info()) # All unique macthed noun chunks and occurences from expanded reg sentences allMatchWords=[] for list in df_reg['NounChunkMatchWordsFiltered']: allMatchWords=allMatchWords+list print(len(allMatchWords)) print(allMatchWords[0]) allMatchWordsCount=Counter(allMatchWords) print(allMatchWordsCount) df_MatchWords = pd.DataFrame(allMatchWordsCount.items(),columns = ['Noun Chunks','Occurences']) print(df_MatchWords.info()) df_MatchWords=df_MatchWords.sort_values('Occurences',ascending=False).reset_index(drop=True) print(df_MatchWords.head()) # Export noun chunk occurences df_MatchWords.to_csv('/home/ec2-user/SageMaker/New Uncertainty/Reg Relevance/RegSentsExpand_FilteredNounChunkOccurences.csv',index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import pandas as pd import matplotlib.pyplot as plt import numpy as np from scipy.stats import norm import matplotlib.pyplot as plt import datetime import math import matplotlib.pyplot as plt import keras import pandas as pd import numpy as np from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers import Dropout from keras.layers import * from 
sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error from sklearn.model_selection import train_test_split from keras.callbacks import EarlyStopping print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) # # Load Stockmarket Data sp500 = pd.read_csv("sp500.csv") spy = pd.read_csv("spy.csv") sp500.head() spy.head() # ## Verify that the SPY ETF is correlated with the S&P 500 # We verify that the data sets are correlated, and as expected we find all features to be perfectly correlated, with exception to volume. sp500.corrwith(spy) # ## Feature Engineering data = spy data['d1'] = data.close_adj/data.close_adj.shift(1) data['d2'] = data.close_adj/data.close_adj.shift(2) data['d3'] = data.close_adj/data.close_adj.shift(3) data = data.drop([0,1,2]) data.isna().sum() data print ("Average 1 day change",data['d1'].mean()) print ("Average 2 day change",data['d2'].mean()) print ("Average 3 day change",data['d3'].mean()) def plot_dist(data, detail=None, n = 100, s=3, ): vars_name = data.name mu, std = norm.fit(data) plt.hist(data, bins = n, density=True, alpha=0.5, color='b') xmin, xmax = plt.xlim() x = np.linspace( start = xmin, stop = xmax, num = 100) p = norm.pdf(x, mu, std) plt.plot(x, p, 'k', linewidth=2) for i in range(1, s+1): for j in [1, -1]: plt.axvline( x = mu + i * j * std, linewidth=.5, color="g", linestyle = "-.") plt.axvline(x=mu, linewidth=.5, color="r") plt.xlim(mu - s*std, mu + s*std) plt.legend([f'Mu: {mu:.5f}', f'Std: {std:.5f}']) title = f'Distribution of {vars_name}' if detail != None: title = title+'\n'+detail plt.title(title) return plt.show() plot_dist(data['d1'], "Change over 1 Days") plot_dist(data['d2'], "Change over 2 Days") plot_dist(data['d3'], "Change over 3 Days") # + tags=[] def calculate_z(column): return (column - column.mean())/np.std(column) # - np.std(data['d1']) data['d1_z'] = calculate_z(data['d1']) data['d2_z'] = calculate_z(data['d2']) data['d3_z'] = calculate_z(data['d3']) plot_dist(data['d1_z'], "Z-Score for Single Day Change") plot_dist(data['d2_z'], "Z-Score for Two Day Change") plot_dist(data['d3_z'], "Z-Score for Three Day Change") # ### Day Week Month data['date'] = pd.to_datetime(data['date']) data['day'] = data['date'].dt.day_name() data['month'] = data['date'].dt.month_name() data # # Baseline Model drop_columns = ['date', 'open', 'high', 'low', 'close', 'volume', 'd1_z',"d2_z","d3_z"] data_prepared = data.drop(columns=drop_columns) data_prepared = pd.get_dummies(data_prepared) columns = data_prepared.columns scaler = MinMaxScaler() data_prepared[columns] = scaler.fit_transform(data_prepared[columns]) data_prepared['output'] = list(zip(data_prepared['d1'], data_prepared['d2'], data_prepared['d3'])) data_prepared['output'] = data_prepared['d1'] data_d1 = data_prepared.drop(columns=['d1','d2','d3']) data_d1 data_zipped = data_prepared.drop(columns=['d1','d2','d3']) data_zipped # ## Create Training Packets final = data_d1 records = np.array(final) records.shape def history(data, days, output): X = [] y = np.array(data[days:,output]) for i in range(days, len(data)): X.append(data[i-60:i]) X = np.array(X) #X = np.reshape(X, (X.shape[0], X.shape[1], 1)) return X,y features, predictors = history(records, 60, 18) print (f'Inputs {features.shape}') print (f'Outputs {predictors.shape}') # + [markdown] tags=[] # ## Split Training/Test # - split = int(len(features)*.74) print (split) split = 1792 X_training = features[:split] y_training = predictors[:split] 
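# The remaining (later) windows form the test set below; the split is chronological rather
# than shuffled, so the model is only evaluated on data that comes after the training period.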
X_test = features[split:] y_test = predictors[split:] print (f"Training Set: {len(X_training)}, {len(y_training)}") print (f"Test Set: {len(X_test)}, {len(y_test)}") y_training # + [markdown] tags=[] # ## Model # - X_training.shape[1],X_training.shape[2] model = Sequential() model.add(LSTM(units = 50, return_sequences = True, input_shape = (X_training.shape[1], X_training.shape[2]))) model.add(Dropout(0.2)) model.add(LSTM(units = 50, return_sequences = True)) model.add(Dropout(0.2)) model.add(LSTM(units = 50, return_sequences = True)) model.add(Dropout(0.2)) model.add(LSTM(units = 50)) model.add(Dropout(0.2)) model.add(Dense(units = 1)) model.summary() model.compile(optimizer = 'adam', loss = 'mean_squared_error') callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=5) # Fitting the RNN to the Training set history = model.fit( X_training, y_training, epochs = 10000, batch_size = 1, shuffle = False, validation_split = .3, callbacks = [callback]) len(y_test) results = model.evaluate(X_test, y_test) print (results) predicted_change = model.predict(X_test) predicted_change plt.plot(data.loc[800:, ‘Date’],dataset_test.values, color = ‘red’, label = ‘Real TESLA Stock Price’) plt.plot(df.loc[800:, ‘Date’],predicted_change, color = ‘blue’, label = ‘Predicted TESLA Stock Price’) plt.xticks(np.arange(0,459,50)) plt.title('TESLA Stock Price Prediction') plt.xlabel('Time') plt.ylabel('TESLA Stock Price') plt.legend() plt.show() len(data) data test_scaled.shape X_train = [] y_train = [] engineer_memory(data, history, # + [markdown] colab_type="text" id="4DbQxcXuzVmc" # # Scatter plot for EDA (Exploratory Data Analysis) # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 173} colab_type="code" executionInfo={"elapsed": 764, "status": "ok", "timestamp": 1532865915321, "user": {"displayName": "", "photoUrl": "//lh5.googleusercontent.com/-AsF_YBOVN9s/AAAAAAAAAAI/AAAAAAAAAAA/qYdp7i1L4LY/s50-c-k-no/photo.jpg", "userId": "112579363612867944767"}, "user_tz": -180} id="nXPMB3fCzUjR" outputId="d202462a-e302-4d3e-b1c5-ae18ba9a2388" import matplotlib.pyplot as plt import pandas as pd import itertools from sklearn.datasets import load_iris iris = load_iris() features = iris.data.T print("features:", iris.feature_names) print("target classes:", iris.target_names) print("features.shape", features.shape) print("iris.target.shape", iris.target.shape) # Build dataframe from features and series from target X = pd.DataFrame(features.T, columns=iris.feature_names) # note we transpose the feature columns y = pd.Series(iris.target) print(y.value_counts()) print(">> 3 target classes, each containing 50 samples") # + [markdown] colab_type="text" id="iWQk6-KXDVzX" # ## Scatter plots showing 3 out of 4 features at a time (third feature is the point size) # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 781} colab_type="code" executionInfo={"elapsed": 1921, "status": "ok", "timestamp": 1532865917527, "user": {"displayName": "", "photoUrl": "//lh5.googleusercontent.com/-AsF_YBOVN9s/AAAAAAAAAAI/AAAAAAAAAAA/qYdp7i1L4LY/s50-c-k-no/photo.jpg", "userId": "112579363612867944767"}, "user_tz": -180} id="aG5h5-nD5Gfs" outputId="0744ed9e-439a-42fa-d2d3-82c2a614be1b" all_pairs = list(itertools.combinations(list(X.columns), 3)) # C(4,3) = 4! / (3! * 1!) 
= 24 / (6 * 1) = 4 print("Feature combinations:") for i,v in enumerate(all_pairs): print(i,v) print("---") plt.figure(figsize=(10, 10)) for i,v in enumerate(all_pairs): f0 = v[0] f1 = v[1] f2 = v[2] ax = plt.subplot(2, 2, i+1) ax.set_title("Point sizes based on: %s" % f2) plt.scatter(X[f0], X[f1], alpha=0.3, s=100*X[f2], c=iris.target, cmap='jet') plt.suptitle('Iris dataset scatter plots of 3 out of 4 features at at time') plt.xlabel(f0) plt.ylabel(f1) # + [markdown] colab_type="text" id="L40QRJJ8CK1s" # **Note how the clusters shown above are separated much better when petal length is used for the y-axis.** # + [markdown] colab_type="text" id="YnybX2C6Dh_E" # ## Scatter plots showing 2 out of 4 features at a time # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 693} colab_type="code" executionInfo={"elapsed": 1760, "status": "ok", "timestamp": 1532865919509, "user": {"displayName": "", "photoUrl": "//lh5.googleusercontent.com/-AsF_YBOVN9s/AAAAAAAAAAI/AAAAAAAAAAA/qYdp7i1L4LY/s50-c-k-no/photo.jpg", "userId": "112579363612867944767"}, "user_tz": -180} id="LMFU_n5yCO3W" outputId="ede451db-1c0a-4600-d45f-d1e76b0d87a8" all_pairs = list(itertools.combinations(list(X.columns), 2)) # C(4,2) = 4! / (2! * 2!) = 24 / (2 * 2) = 6 print("Feature combinations:") for i,v in enumerate(all_pairs): print(i,v) print("---") plt.figure(figsize=(12, 8)) for i,v in enumerate(all_pairs): f0 = v[0] f1 = v[1] ax = plt.subplot(2, 3, i+1) plt.scatter(X[f0], X[f1], c=iris.target, cmap='jet') plt.suptitle('Iris dataset scatter plots of 2 out of 4 features at at time') plt.xlabel(f0) plt.ylabel(f1) # + colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 1236} colab_type="code" executionInfo={"elapsed": 2905, "status": "ok", "timestamp": 1532868731806, "user": {"displayName": "", "photoUrl": "//lh5.googleusercontent.com/-AsF_YBOVN9s/I/AAAAAAAAAAA/qYdp7i1L4LY/s50-c-k-no/photo.jpg", "userId": "112579363612867944767"}, "user_tz": -180} id="YH-DDcvoE1YF" outputId="e266b131-6ef6-4926-f1e4-fff187f6596b" # All pair permutations # P(4,2) = 4! / 2! 
= 24 / 2 = 12 COLS = list(X.columns) all_pairs = [(f1,f0) for f0 in COLS for f1 in COLS] print("All feature permuations:") for i,v in enumerate(all_pairs): print(i,v) print("---") plt.figure(figsize=(14, 14)) for i,v in enumerate(all_pairs): f0 = v[0] f1 = v[1] ax = plt.subplot(4, 4, i+1) plt.scatter(X[f0], X[f1], c=iris.target, cmap='jet') plt.suptitle('Iris dataset scatter plots of 2 out of 4 features at at time while maintaining feature alignment') plt.xlabel(f0) plt.ylabel(f1) # + [markdown] colab_type="text" id="kaE9JazLGwED" # **Note how the clusters shown above are separated much better based on petal length and/or petal width.** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Social Networks and Text Analysis - Introduction to Networks # + executionInfo={"elapsed": 1525, "status": "ok", "timestamp": 1609841370249, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="XOideyTrBBKC" #Importing the necessary libraries for this lab: #------------------------------------------------ import networkx as nx #---> Library for network analysis import matplotlib.pyplot as plt #---> Library for creating plots import collections #---> Library for operating with dictionaries import random #---> Library for generating random numbers/distributions import numpy as np #---> Library for efficiently operating with arrays/matrices/vectors from pylab import rcParams #---> Library for set the attributes of the figures #Magic functions (%) for setting up the matplotlib and increase the resolution of the plots: # %matplotlib inline # %config InlineBackend.figure_format = 'retina' # - # ## Lets play with networks! # ## 1 Create the empty network # + executionInfo={"elapsed": 1504, "status": "ok", "timestamp": 1609841370250, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="rlu6dsraBBKR" G=nx.Graph() #---> Create an empty undirected network (use nx.DiGraph() for the directed version) G.add_node(1) #---> Add one node G.add_nodes_from(range(10)) #---> Add nodes from a list or array, here we add nodes from 0 to 9 # + colab={"base_uri": "https://localhost:8080/", "height": 319} executionInfo={"elapsed": 2004, "status": "ok", "timestamp": 1609841370881, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="_aN0b_psBBKS" outputId="1bf83317-64b5-4154-bb66-0dc45663f400" nx.draw(G, node_color='darkblue') #---> Plot the network which just contains nodes. This function can change #color, layout, size, labels and other attributes of the nodes and edges. # + [markdown] id="KJT390jnBBKU" # ## 2 Add the edges # + [markdown] id="uCMFeuTbBBKU" # Each edge is composed by source and target, and can be added to the network one by one or in a list/array. Wighted edges can be added here too. 
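# As a quick aside (a minimal sketch that is not part of the lab steps below; the graph `H`
# and its weights are invented purely for illustration): the weighted form mentioned above
# can be written in a single call with `add_weighted_edges_from`, which stores the third
# element of each tuple as the 'weight' edge attribute.

# +
H = nx.Graph()                                  #---> Throwaway graph used only for this aside
H.add_weighted_edges_from([(1, 2, 0.5),
                           (2, 3, 1.5)])        #---> Each tuple is (source, target, weight)
print(H.edges(data=True))                       #---> The weight appears as an edge attribute
# -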
# + colab={"base_uri": "https://localhost:8080/", "height": 319} executionInfo={"elapsed": 1965, "status": "ok", "timestamp": 1609841370882, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="rXvOA6Y1BBKV" outputId="a1350aff-338c-4f33-f5b2-f29d5aa68d94" G.add_edge(1,2) #---> Add one edge as a tupple () e=[(2,3),(9,3),(8,4),(3,5)] #---> Create a list [] of tuples () being the edges G.add_edges_from(e) #---> Add the list of edges nx.draw(G, with_labels=True, node_color='darkblue', font_color='white') #---> Plot the network # + [markdown] id="67CmwS-JBBKV" # ## 3 Check the adjacency matrix # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1937, "status": "ok", "timestamp": 1609841370884, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="MfPu0OR7BBKW" outputId="6e582a7c-ab21-413a-9766-c86867eef6a8" print(nx.adjacency_matrix(G)) #---> Check the numpy array with the adjacency network with values different to 0 #This network is unweighted then all the next values are 1 # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1910, "status": "ok", "timestamp": 1609841370885, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="1xn0_4A0BBKX" outputId="2d6aafb1-6abf-4fad-e525-5682ecc541ef" #To visualize the edges and their attributes (eg weight) use the next function. #This object can be converted to a dict, or indexed as one, in wich the first keys are the sources, #the second keys are the targets and edge attributes can be other objects: G.adj # + colab={"base_uri": "https://localhost:8080/", "height": 470} executionInfo={"elapsed": 2120, "status": "ok", "timestamp": 1609841371134, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="KqKM1Xw4BBKX" outputId="fcd866bb-1ddd-4b40-d1a1-e4215b9ab727" print(nx.to_numpy_matrix(G)) #--> visualize the entire adjacency matrix plt.imshow(nx.to_numpy_matrix(G)) #--> This function create a heatmaps from 2-dimensional numpy arrays. cbar = plt.colorbar() #--> set the colorbar of the heatmap cbar.set_ticks([0,1]) #--> set the range of the color bar cbar.ax.set_yticklabels(['Zero','One'],) #--> set the label of the number to display in the color bar cbar.set_label('link', rotation=270) #--> set the label of the color bar and rotate it plt.xlabel('node idx') #--> set the label of the x axis plt.ylabel('node idx') #--> set the label of the y axis # - # ## 4 Work with weighted networks # + executionInfo={"elapsed": 2115, "status": "ok", "timestamp": 1609841371135, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="FEy_NBQSBBKY" #Iterate over the edges list and add an random uniform weight from 0 to 1: for e in G.edges(): G[e[0]][e[1]]['weight'] = random.uniform(0, 1) #<-- Add the edge attribute of weight to each node # + colab={"base_uri": "https://localhost:8080/", "height": 636} executionInfo={"elapsed": 2731, "status": "ok", "timestamp": 1609841371774, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="Fj3D0xTlBBKb" outputId="7d69aa3d-cf0a-440c-8675-a23eae0e5cdf" #Check again the adjacency matrix of the network and notice the differences: print(nx.to_numpy_matrix(G)) #--> visualize the entire adjacency matrix plt.imshow(nx.to_numpy_matrix(G)) #--> This function create a heatmaps from 2-dimensional numpy arrays. 
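#--> (Note: nx.to_numpy_matrix was removed in NetworkX 3.x; nx.to_numpy_array returns the
#--> same adjacency information as a plain ndarray and can be used in its place.)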
cbar = plt.colorbar() #--> set the colorbar of the heatmap cbar.set_label('Weight', rotation=270) #--> set the label of the color bar and rotate it plt.xlabel('node idx') #--> set the label of the x axis plt.ylabel('node idx') #--> set the label of the y axis # + [markdown] id="U_6OSV32BBKe" # ## 5 Network statistics # - # ### 5.1 Degree distribution # + colab={"base_uri": "https://localhost:8080/", "height": 295} executionInfo={"elapsed": 2710, "status": "ok", "timestamp": 1609841371789, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="SjfKKIIHBBKf" outputId="1041caef-2f6a-4c94-9409-56a6d5c3698e" G =nx.karate_club_graph() #<-- G will be an existing famous network of networkx degree_sequence = sorted([d for n, d in G.degree()], reverse=True) #<-- Save the degree of each node #and order the list from highest to lowest print("Degree sequence", degree_sequence) degreeCount = collections.Counter(degree_sequence) #<-- Count the frequency (number of times) of each degree print("Degree frequencies", degreeCount) deg, cnt = zip(*degreeCount.items()) #<-- Function that create lists of iterables, #one for the degrees and the other one for the frquencies rcParams['figure.figsize'] = 10, 5 #<-- Set the plot size fig, ax = plt.subplots() #<-- Create the subplots plt.bar(deg, cnt, width=0.80, color='darkblue') #<-- Plot a bar plot with the degrees and their frequencies #Set plot attributes as title, x and y labels, and ticks with frequencies larger than zero plt.title("Degree Histogram") plt.ylabel("Count") plt.xlabel("Degree") ax.set_xticks([d + 0.4 for d in deg]) ax.set_xticklabels(deg) #Draw the network inside the barplot plt.axes([0.4, 0.4, 0.5, 0.5]) #Select the largest connected component of the network: Gcc = G.subgraph(sorted(nx.connected_components(G), key=len, reverse=True)[0]) pos = nx.spring_layout(G) #<-- Set the layout of the network plt.axis('off') #<-- Remove the axis of the network plot nx.draw_networkx_nodes(G, pos, node_color= 'darkblue',node_size=20) #<-- Plot the nodes nx.draw_networkx_edges(G, pos, alpha=0.4) #<-- Plot the edges plt.show() # - print('Average degree', np.mean(G.degree())) print('Connectance', nx.density(G)) # + [markdown] id="w10gvoFkBBKh" # ### 5.2 Shortest Path # + colab={"base_uri": "https://localhost:8080/", "height": 319} executionInfo={"elapsed": 3242, "status": "ok", "timestamp": 1609841372359, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="FwZDEszdBBKj" outputId="3ca72a1c-47f0-4102-fc89-e9fc278a79d8" R = nx.karate_club_graph() #<-- R will be an existing famous network of networkx #We will detected the shorstet path from one node to the other one source=14 #<-- Source node target=16 #<-- Target node def short_path_plot(G,source,target): '''This function calculates the shortest path between two nodes in a network Attributes: G: The networkx object source: Name of the source node target: Name of the source node''' pos = nx.spring_layout(G) #<-- Set the layout of the network nx.draw(G,pos,node_color='k', with_labels=True, font_color='white') #<-- Plot the original network in black path = nx.shortest_path(G,source=14,target=16) #<-- Select the nodes in the shortest path print(path) path_edges = list(zip(path,path[1:])) #<-- Create a list of iterables with the edges of the shortest path nx.draw_networkx_nodes(G,pos,nodelist=path,node_color='r', label=True) #<-- Plot the nodes nx.draw_networkx_edges(G,pos,edgelist=path_edges,edge_color='r',width=10) #<-- Plot the edges 
plt.axis('equal') plt.show() return #Run the created funtion short_path_plot(R,source,target) # + [markdown] id="KkcLHm1FBBKm" # ### 5.3 Generating the directed version # + colab={"base_uri": "https://localhost:8080/", "height": 319} executionInfo={"elapsed": 3991, "status": "ok", "timestamp": 1609841373152, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="x5WpdErCBBKn" outputId="01ed0b54-fd99-48cb-88c8-e9b057426b3e" DiR=nx.Graph.to_directed(R) #<-- Set the network to a directed onde pos = nx.spring_layout(DiR) nx.draw(DiR, pos = pos, with_labels=True, node_color='darkblue', font_color='white', node_size=500) #<-- Plot the network #with some attributes # + [markdown] id="wD9TnbGrBBKn" # Removing some random edges and plot the network with the same layout to know the differences: # + colab={"base_uri": "https://localhost:8080/", "height": 336} executionInfo={"elapsed": 4632, "status": "ok", "timestamp": 1609841373824, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="1rnVZDE3BBKo" outputId="675eddfd-0811-4ee7-8061-bde65de0d38a" n_edgestoremove=random.randint(0,len(DiR.edges)) #<-- Select the number of edeges to remove print('num edges removed '+str(n_edgestoremove)) edges_to_remove=random.sample(set(range(len(DiR.edges))), n_edgestoremove) #<-- Select a random sample of edges #with the number of edges to remove DiR.remove_edges_from(np.array(DiR.edges)[edges_to_remove]) #<-- Remove the selected edges #Plot the network with the same layout of the original network: nx.draw(DiR, pos = pos, node_size=500, with_labels=True, node_color='darkblue', font_color='white') # - # ### 5.4 Degree and Indegree # + colab={"base_uri": "https://localhost:8080/", "height": 590} executionInfo={"elapsed": 4812, "status": "ok", "timestamp": 1609841374039, "user": {"displayName": "", "photoUrl": "", "userId": "13951013551349997385"}, "user_tz": 0} id="jP86QXpmBBKp" outputId="d5118c3c-a547-4841-9b04-92c4313b59a7" #Set the plot attributes for the OUT degree: #------------------------------------------- degree_sequence = sorted([d for n, d in DiR.out_degree()], reverse=True) #<-- Save the degree of each node #and order the list from highest to lowest degreeCount = collections.Counter(degree_sequence) #<-- Count the frequency (number of times) of each degree deg, cnt = zip(*degreeCount.items())#<-- Function that create lists of iterables, #one for the degrees and the other one for the frquencies plt.bar(deg, cnt, width=0.80, color='darkblue') #<-- Plot a bar plot with the degrees and their frequencies plt.title("Out Degree Histogram", size=20) plt.xticks(size=16) plt.yticks(size=16) plt.ylabel("Count", size=16) plt.xlabel("Degree", size=16) plt.show() #<- Show the plot created in previous lines #Set the plot attributes for the IN degree: #------------------------------------------- degree_sequence = sorted([d for n, d in DiR.in_degree()], reverse=True) #<-- Save the degree of each node #and order the list from highest to lowest degreeCount = collections.Counter(degree_sequence) #<-- Count the frequency (number of times) of each degree deg, cnt = zip(*degreeCount.items()) #<-- Function that create lists of iterables, #one for the degrees and the other one for the frquencies plt.bar(deg, cnt, width=0.80, color='darkblue') #<-- Plot a bar plot with the degrees and their frequencies plt.title("In Degree Histogram", size=20) plt.xticks(size=16) plt.yticks(size=16) plt.ylabel("Count", size=16) plt.xlabel("Degree", size=16) 
plt.show() #<- Show the plot created in previous lines # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # PyGromos File Examples # + pycharm={"name": "#%%\n"} import os, sys root_dir = os.getcwd() #if package is not installed and path not set correct - this helps you out :) sys.path.append(root_dir+"/..") # + [markdown] pycharm={"name": "#%% md\n"} # ## IMD - Simulation Paramter File # + pycharm={"name": "#%%\n"} from pygromos import files in_imd = "../pygromos/data/imd_templates/md.imd" out_imd="test.imd" #load imd = files.Imd(in_imd) #change Solvent number in system: imd.SYSTEM.NSM = 500 #add EDS block imd.edit_EDS(NUMSTATES=2, S=1.0, EIR=[0.0 for x in range(2)]) #get_file content: print(imd) #store again out_imd = imd.write(out_imd) # + [markdown] pycharm={"name": "#%% md\n"} # ## CNF - Coordinate File # + pycharm={"name": "#%%\n"} """ CNF """ from pygromos import files #load_cnf in_cnf="in_dir/to/my.cnf" out_cnf="in_dir/to/new.cnf" cnf = files.Cnf(in_cnf) #get residues of cnf residues = cnf.get_residues() #Delete Residues delete_ligands = ["delete ME!"] for resn in delete_ligands: cnf.delete_residue(resName=resn) #set new title cnf._blocks["TITLE"].content = " Ligands:\t " + " ".join(lig_sys) + "\n" #cleaning cnf.clean_posiResNums() #get_file content: print(cnf) #store again out_cnf = cnf.write(out_path=out_cnf) # - # ## TRC - Coordinate Trajectory File # + pycharm={"name": "#%%\n"} #This snippet converts multiple cnfs to a trajectory. import glob from pygromos import files in_cnfs_reg = "*.cnf" out_trc_path = "concat_cnfs.trc" cnfs = glob.glob(in_cnfs_reg) out_trc = files.Trc.cnfs_to_trc(cnfs) print(out_trc) out_trc.write(out_trc_path) # + [markdown] pycharm={"name": "#%% md\n"} # ## TOP - topology File # # + [markdown] pycharm={"name": "#%% md\n"} # ## Other Files # # ### MTB - topology building block file # # ### IFP - topology parameter file # + pycharm={"name": "#%%\n"} ifp_path = root+"\\examples\\test\\54a7.ifp" sys.path.append(root+"/../../") import pygromos from pygromos.files.topology import ifp #parse forcefield file myfp = ifp.ifp(ifp_path) myfp.write(os.getcwd()+"/fun.ifp") # + pycharm={"name": "#%%\n"} #parse output and write out again test = ifp.ifp(os.getcwd()+"/fun.ifp") test.write(os.getcwd()+"/fun.ifp") # + [markdown] pycharm={"name": "#%% md\n"} # ### disres - distance restraint file # - # ## PTP-Files in PyGromos # # Here I try to give a few example on what s possible with the ptp obj in pygromos. 
# + pycharm={"name": "#%%\n"} from pygromos.files.topology.ptp import Pertubation_topology as PTP from pygromos.files.blocks.topology_blocks import MPERTATOM, atom_eds_pertubation_state, pertubation_eds_state # - # ### defining some state types for later use :) # + pycharm={"name": "#%%\n"} dummy_type = pertubation_eds_state(IAC=22, CHARGE=0.0) my_type = pertubation_eds_state(IAC=99, CHARGE=-1.0) # - # ### Read in ptp file: # + pycharm={"name": "#%%\n"} #Read in ptp file: path= "../pygromos/tests/testfiles/ptp/eds_short.ptp" ptp = PTP(path) print(ptp) # - # ### Add atom or state or overwrite atominformation (except atom.NR) # + pycharm={"name": "#%%\n"} new_atoms_state = [atom_eds_pertubation_state(NR=x, NAME="H", STATES={7: my_type}) for x in range(1, 4)] ptp.add_block(bloc=MPERTATOM(new_atoms_state)) print(ptp) # - # ### delete full state # + pycharm={"name": "#%%\n"} ptp.MPERTATOM.delete_state(stateIDs=[1,3]) print(ptp) # - # ### delete specific atoms # + pycharm={"name": "#%%\n"} ptp.MPERTATOM.delete_atom(atomNR=[1,2,7,8,9]) print(ptp) # - # #3# Write out ptp file # + pycharm={"name": "#%%\n"} ptp.write("fun.ptp") # - # ### Building ptp from scratch and generate all possible state combinations: # + pycharm={"name": "#%%\n"} print(ptp.MPERTATOM.STATEATOMHEADER) # + pycharm={"name": "#%%\n"} import numpy as np from itertools import combinations from pygromos.files.topology.ptp import Pertubation_topology as PTP from pygromos.files.blocks.topology_blocks import atom_eds_pertubation_state, pertubation_eds_state #INPUT: ## parameters outpath_new_ptp = "fun.ptp" ## the states to use: o_state = pertubation_eds_state(IAC=16, CHARGE=-1.0) h_state = pertubation_eds_state(IAC=33, CHARGE=-2.0) ## Map for molecule ID with Atom IDS ### First atom assumed to be O and last two Hs molecules_atoms = {1: [1,2,3], 2: [4,5,6], 3: [7,8,9],} #BUILD UP STATES ## Generate active state mapping: max_active_mols_same_time = len(molecules_atoms) molecule_states={} state_ID=1 for active_mols in range(1, max_active_mols_same_time+1): combis = list(combinations(molecules_atoms, active_mols)) molecule_states.update({state_ID+ind: x for ind, x in enumerate(combis)}) state_ID = state_ID+len(combis) #gives the state number as key and all the active molecules in this state print("gives the state number as key and all the active molecules in this state") print(molecule_states) print() #build state atoms for ptp new_atoms_state_dict = {} for state, molecules in molecule_states.items(): for molecule in molecules: if(molecule in new_atoms_state_dict): atoms = new_atoms_state_dict[molecule] atoms[0].STATES.update({state: o_state}) atoms[1].STATES.update({state: h_state}) atoms[2].STATES.update({state: h_state}) else: atoms = [atom_eds_pertubation_state(NR=molecules_atoms[molecule][0], NAME="O", STATES={state: o_state}), atom_eds_pertubation_state(NR=molecules_atoms[molecule][1], NAME="H1", STATES={state: h_state}) , atom_eds_pertubation_state(NR=molecules_atoms[molecule][2], NAME="H2", STATES={state: h_state})] new_atoms_state_dict.update({molecule: atoms}) print("gives the atom_perttubation states for all mols") print(new_atoms_state_dict) print() ##finally make a list for our ptp file (#ThanksGromos) new_atoms_state = np.concatenate(list(new_atoms_state_dict.values())) #print(list(map(lambda x: x.STATES, new_atoms_state))) #BUILD PTP ptp = PTP() ptp.MPERTATOM.add_state_atoms(state_atoms=new_atoms_state) print(ptp) ptp.write(outpath_new_ptp) #TADAAAAA - DONE // --- // jupyter: // jupytext: // text_representation: // 
extension: .js // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: Javascript (Node.js) // language: javascript // name: javascript // --- var _ = require("lodash"); // This should work but fails with "Error: Cannot find module 'lodash'" _.first([1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ###### Copyright © , Anvetsu Technologies Pvt. Ltd (2015) # # Data Structures & Types # ## 1. Mutable data structures as default arguments # ### 1.1. Show me the Code! def add_name(names=[]): """ Add a name to a list """ name=raw_input("Enter name: ").strip() names.append(name) return names print add_name() print add_name() print add_name() # ## Gotcha !!! # #### This is because, # # 1. The function definition line is evaluated just once and not every time the function is called. # 2. This is because Python functions are first class objects. # ### 1.2. Show me the Fix ! # #### 1.2.1. Use a place-holder value instead of modifying the default value directly. def add_name(names=None): if names==None: names = [] name=raw_input("Enter name: ").strip() names.append(name) return names print add_name() print add_name() names = ['Appu','Dhruv'] print add_name(names) # ##### 1.2.2. Use a sentinel object. sentinel = object() def add_name(names=sentinel): if names==sentinel: names = [] name=raw_input("Enter name: ").strip() names.append(name) return names names = ['Appu','Dhruv'] print add_name(names) # ##### 1.2.3. Use an inner function, which is always evaluated from the context of the outer function. def add_name(): def inner(names=[]): name=raw_input("Enter name: ").strip() names.append(name) return names return inner() add_name() add_name() add_name() # #### 1.3. Valid uses of this behavior # ##### 1.3.1. A caching memoizer pattern def fibonacci(n, memo={}): """ Return n'th fibonacci number """ # Uses an inline caching dictionary # as a memoizing data structure if n in memo: print '*** memoized data ***' return memo[n] a, b, c = 0, 1, 1 for i in range(n-1): c = a + b a, b = b, c memo[n] = c return c fibonacci(1) fibonacci(2); fibonacci(2) fibonacci(3); fibonacci(3) # #### 1.4. More Reading # # 1. https://stackoverflow.com/questions/1132941/least-astonishment-in-python-the-mutable-default-argument # 1. http://effbot.org/zone/default-values.htm # ## 2. Mutable Argument Modification / Name Binding # ### 2.1. Show me the Code! # + def f(x, l): """ A function taking a list as argument """ # This is a silly function, really. if len(l)<5: # L1 l = g(x) # L2 # L3 l.append(x) # L4 def g(x): """ A functon """ return [x*x]*5 # - nums = range(5) f(10, nums) print nums nums=range(3) f(10, nums) print nums # 'nums' remains the same. Not surprised ? Good, Surprised - Not so good :) # ## Gotcha !!! # #### This is because, # # 1. The __nums__ that is replaced in line #2 is a __new__ object recieved from g(...). It doesn't # replace the original object. # 2. This is because in Python objects are bound to variables by name. Names _refer_ to objects, they don't bind strongly to them. # 3. In order to modify a mutable, you need to call methods on it that modifies it. In case of list, these are _append_, _extend_, _remove_, _pop_ etc. # ### 2.2. Show me the Fix ! 
# + def f(x, l): """ A function taking a list as argument """ if len(l)<5: # L1 l = g(x) # L2 # L3 l.append(x) # L4 # Return it return l def g(x): """ A functon """ return ([x]*5) # - nums=range(3) nums = f(10, nums) print nums # ## 3. Immutable Variable Comparison # ### 3.1. Show me the Code! def greet(greeting, default_value="Hi"): """ Greet someone with a greeting """ if greeting is not default_value: greeting = default_value + ", " + greeting print greeting # Test 1 greet("Hi") # Test 2 greet("Good Morning") # Test 3 greet("Good Morning", "Hello there") # Test 4 greet("Hello there", "Hello there") # Fine # Test 5 greet("Hi, how do you do!", "Hi, how do you do!") # Test 6 greeting="Hello there" greet(greeting, default_value="Hello there") # Hmmm, not what you expected ? # ## Gotcha !!! # #### This is because, # # 1. You used __is__, the identity comparison operator instead of __!=__, the equality comparison operator. # 1. However, the code still works as expected in __Test 4__ above because Python optimizes string memory for literal strings. Since # both arguments are passed as literal strings and their value is the same, Python creates the object just once for both arguments, # so the _is_ comparison works. # 1. In __Test 6__, we use a separate name _greeting_ for the first argument and the literal string for the second. Hence Python doesn't # get a chance to optimize in this case and the _is_ comparison fails. # # ### 3.2. Show me the Fix ! def greet(greeting, default_value="Hi"): """ Greet someone with a greeting """ # Simple: Use == or != operator always if greeting != default_value: greeting = default_value + ", " + greeting print greeting # Test 4 greet("Hello there", "Hello there") # Test 6 greeting="Hello there" greet(greeting, default_value="Hello there") # ## 4. Integer vs Float Division # #### Integer division in Python always produces an integer result, ignoring the fractional part. Moreover, it __floors__ the result which can sometimes be a little confusing. # ### 4.1. Show me the Code! 5/2 # Not 2.5, but 2, i.e the answer rounded off -5/2 # Prints -3, not -2, i.e answer is floored away from zero # ### 4.2. Notes # This is pretty well known behaviour of Python. It is not exactly a Gotcha, but newbie programmers are caught off-guard when # they encounter this behaviour for the first time. It does take a while to get used to it. # ### 4.3. Workarounds # #### 4.3.2 Workaround #1 - Specifically use float division # + # Just remember to convert one of the numbers to float, typically multiplying by 1.0. # This is what I do. x=5 y=1.0*x/2 print y # - x=5 y=x/2.0 print y # #### 4.3.2 Workaround #2 - Backported from future # + from __future__ import division print "True division =>", 5 / 2 print "Floor division =>", 5 // 2 # - # For Python 2.x, import __division__ from the future (means a feature backported from Python 3.x). Then # you get two division operators, __/__ performing true division and the new __//__ performing floor division. # #### 4.3.3 Workaround #3 - Use decimal module # + import decimal x=decimal.Decimal(5) y=decimal.Decimal(2) z=x/y print z # - # __NOTE__ - Above is overkill for such a simple example. __Decimal__ types are more useful to get absolute precision for your floating point numbers. We will see another example below. # #### 4.4. More Reading # # 1. http://python-history.blogspot.in/2010/08/why-pythons-integer-division-floors.html # 1. https://stackoverflow.com/questions/183853/in-python-what-is-the-difference-between-and-when-used-for-division # ## 5. 
Floating Point Precision & Round-Off # #### Floating point numbers are always represented as a round-off to their actual internal value. In Python, sometimes these can cause some unexpected results. These are not a bug in the language or your code, but simply some interesting results of the way programming languages represent floating point numbers and display them. # ### 5.1. Precision # #### 5.1.1. Show me the Code! # + x=0.1 y=0.2 z=x+y print z # All good # - # However, z # #### 5.1.2. Notes # ##### What is happening here ? # # When you print the variable z, print takes care to represent the number rounded off to the closest # value. However when you inspect the number by not printing it, you get to see the actual number internally represented. # Technically this is called a __Representation Error__ . # ### 5.2 Round-off # #### 5.2.1. Show me the Code! x = 0.325 print round(x, 2) # Good x = 0.365 print round(x, 2) # What the ...!!! # #### 5.2.2. Notes # ##### What is happening here ? # # Since the decimal fraction 0.365 is exactly half-way between 0.37 and 0.38, sometimes it could be represented by a binary # fraction which is closer to 0.36 than it is closer to 0.37. But how to find the exact precision of a float in Python ? x=0.365 x # Doesn't help! # + # Solution - Use decimal module import decimal decimal.Decimal(0.365) # - decimal.Decimal(0.325) # Now you understand why 0.325 nicely rounds to 0.33 # As you can see, 0.365 is internally represented by 0.3649999999999999911182158029987476766109466552734375 which is closer to 0.36 when rounded off to 2 decimal places. Which is why round(0.365) produces 0.36. # #### 5.2.3. Workarounds # ##### 5.2.3.1 Use ceil for rounding up # + import math x=0.365 # Multiple and divide by power of 10 equal to precision required math.ceil(pow(10,2)*x)/pow(10,2) # - # ##### 5.2.3.2 Use decimal module # + from decimal import * x=Decimal(0.365).quantize(Decimal('0.01'), rounding=ROUND_UP) y=round(x, 2) print y # - # #### 5.3. More Reading # # 1. https://docs.python.org/2/tutorial/floatingpoint.html # 1. https://stackoverflow.com/questions/4518641/how-to-round-off-a-floating-number-in-python # # 6. Modifying Mutables inside Immutables # #### When you have mutables (lists, dictionaries) as elements inside immutables (tuples here) you can have some unexpected results when trying to modify the former. # ### 6.1. Show me the Code! def make_shipment(container, items, index=0): """ Modify objects to be shipped in 'container' by adding objects from 'items' into it at index 'index' """ # container is a tuple containing lists container[index] += items # Real-life example - container of items to be exported container = (['apples','mangoes','oranges'], ['silk','cotton','wool']) make_shipment(container, ['papayas']) # #### However, print container # But container is modified as well! # ## Gotcha !!! # #### This is because, # # 1. For mutable types in Python, # # >>> x += [y] # # is not exactly the same as, # # >>> x = x + [y] # # 2. In the first one, __x__ remains the same, but in second case, a new object is created and assigned to __x__ . # 3. Hence when, # # container[index] += items # # is performed, the referenced list changes in-place. The item assignment doesn't work, but when the exception # occurs, the item has already been changed in place. # ### 6.2. Show me the Fix! 
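# First, a quick check of the claim above: the augmented assignment raises a
# `TypeError`, yet the inner list has already been extended by the time the
# exception is raised.

# +
container = (['apples','mangoes','oranges'], ['silk','cotton','wool'])
try:
    # list.__iadd__ extends the list in place first; only the subsequent
    # tuple item assignment fails.
    container[0] += ['papayas']
except TypeError as e:
    print(e)
print(container[0])   # 'papayas' is already inside, despite the exception
# -

# The fix is simply to call the list's own mutating methods instead.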
def make_shipment(container, items, index=0): """ Modify objects to be shipped in 'container' by adding objects from 'items' into it at index 'index' """ # container is a tuple containing lists # Use .extend(...) container[index].extend(items) # Real-life example - container of items to be exported container = (['apples','mangoes','oranges'], ['silk','cotton','wool']) make_shipment(container, ['papayas']) print container def make_shipment(container, items, index=0): """ Modify objects to be shipped in 'container' by adding objects from 'items' into it at index 'index' """ # container is a tuple containing lists # Or retrieve the item at index to a variable item = container[index] # Then add to it. item += items # Real-life example - container of items to be exported container = (['apples','mangoes','oranges'], ['silk','cotton','wool']) make_shipment(container, ['papayas']) print container # #### 6.3. More Reading # # 1. http://web.archive.org/web/20031203024741/http://zephyrfalcon.org/labs/python_pitfalls.html # # 7. Boolean Type Fallacy # #### Python doesn't respect its own boolean types. In fact, the two boolean types __True__ and __False__ can be quite flexible if you chose them to be. A developer can (often accidentally) overwrite Python's boolean types causing all kinds of problems and in this case, a bit of fun :) # ### 7.1. Show me the Fun! # #### This show is named __"The Blind Truthness of Falsehood"__ # + print True print False x='blind' True=x ## Fun print 'Love is',True print 'Hate is',not x # Now watch the fun! # Python allows you to overwrite its default boolean types. False=True # Yes you can do this in #Python. print 'Love is',x # What do you expect to get printed ? print 'Love is',True,'as well as',False print 'Hate is',False # What do you expect to get printed ? print 'Hate is',False,'as well as',False print # REAL-LIFE, NEAR-DEATH EXAMPLE # Point-blank situation no_bullet_in_gun = False if no_bullet_in_gun: print "GO AHEAD, SHOOT ME IN THE HEAD !" # Goes ahead... your life ends here. True='dead' else: print "NO PLEASE... I BEG YOU TO SPARE ME...!" True='alive' print 'I am',True # + # Reset our world to sanity True, False=bool(1), bool(0) no_bullet_in_gun=False if no_bullet_in_gun: print "GO AHEAD, SHOOT ME IN THE HEAD !" x='dead' else: print "NO PLEASE... I BEG YOU TO SPARE ME...!" # Spares you, you live to write more code in Python, but x='alive' # hopefully not like the one above. print 'I am',x # - # ### 7.2. Show me the Fix ! # #### You are kidding right ? # Word of advice - Don't overwrite your boolean types though Python allows it. It is harmful to health. # ###### Copyright © , Anvetsu Technologies Pvt. 
Ltd (2015) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: base # language: python # name: base # --- import pandas as pd import numpy as np import glob import warnings from bs4 import BeautifulSoup from urllib.request import urlopen import matplotlib.pyplot as plt import seaborn as sns warnings.filterwarnings('ignore') # %matplotlib inline import os import json import re data_path = "/Users/kaushik/Desktop/runasdus/src/com/lab/data/" for fname in glob.glob(r"{}/*".format(data_path)): pol = os.path.join(fname, "priv.html") try: html = open(pol, "r").read() soup = BeautifulSoup(html, features="html.parser") soup_text = soup.get_text() print(soup_text) except: print("Exception occurred!") continue for dirname in os.listdir("/Users/kaushik/Desktop/runasdus/src/com/lab/data/"): pol = os.path.abspath(os.path.join(dirname, "dom_ind.html")) print(pol) try: html = open(pol, "r").read() soup = BeautifulSoup(html, features="html.parser") soup_text = soup.get_text() print(soup_text) except: continue for fname in glob.glob(r"{}/*.csv".format(annot_dpath)): #Extract path basename basename = os.path.basename(fname) #Create directories if they don't exist os.makedirs(op_annotations_dpath, exist_ok = True) os.makedirs(op_segments_dpath, exist_ok = True) #Extract policyID from basename policy_id = basename.split('_')[0] policy_df = pd.read_csv(fname, header=None, usecols=[0, 4, 5, 6], names=['annotation_ID', 'segment_ID', 'category', 'attr_val']) #Set policyID in each table policy_df.loc[:,"policy_ID"] = policy_id #Replace extension santized_policy_fpath = os.path.splitext(basename)[0]+'.html' # Parse html text html = open(os.path.join(sanitized_pol_dpath, santized_policy_fpath), "r").read() soup = BeautifulSoup(html, features="html.parser") soup_text = soup.get_text() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Agregación # # >_A menudo es necesario transformar o filtrar datos en el proceso de visualización. En `Altair` puedes hacer esto de dos maneras: # Antes de la definición del gráfico, utilizando transformaciones de datos estándar de `Pandas`. # Dentro de la definición del gráfico, utilizando las herramientas de transformación de datos de Vega-Lite. # En la mayoría de los casos, le sugerimos que utilice el primer enfoque, porque es más directo para aquellos que están familiarizados con la manipulación de datos en `Python`, y porque el paquete `Pandas` ofrece mucha más flexibilidad que Vega-Lite en las manipulaciones de datos disponibles. # El segundo enfoque resulta útil cuando el origen de datos no es __DataFrame__, sino, por ejemplo, un URL a un archivo __JSON__ o __CSV__. También puede ser útil en un gráfico compuesto en el que diferentes vistas del conjunto de datos requieren diferentes transformaciones. 
# Este segundo enfoque, que especifica las transformaciones de datos dentro de la especificación del gráfico, se puede lograr utilizando los métodos `transform_ *` de los objetos de nivel superior: # >###### [Documentación de `Altair`](https://altair-viz.github.io/user_guide/transform.html) # En este capítulo aprenderemos de las # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.2.0 # language: julia # name: julia-1.2 # --- # Julia # # [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/alan-turing-institute/MLJ.jl/master?filepath=binder%2FMLJ_demo.ipynb) # # ## Lightning encounter with Julia programming language # # ###### Julia related content prepared by [@ablaom](https://github.com/ablaom) # # Interacting with Julia at the REPL, or in a notebook, feels very much # # the same as python, MATLAB or R: print("Hello world!") 2 + 2 typeof(42.0) # ## Just-in-time compilation # # Here's a function used in generating the famous Mandelbrot set, # # which looks pretty much the same in python, MATLAB or R: function mandel(z) c = z maxiter = 80 for n in 1:maxiter if abs(z) > 2 return n-1 end z = z^2 + c end return maxiter end # In particular, notice the absence of type annotations. The crucial difference is what happens when you call this function: @time mandel(1.2) # time call on a Float64 # This is actually pretty lousy, slower than python. However, trying again: @time mandel(3.4) # time on another Float64 # Thousands of times faster, second time around! What happenend? # # When you call `mandel(1.2)` in python, say, then the defining code # is interpreted each time. When you call `mandel(1.2)` in Julia for # the first time Julia inspects the of the argument, namely `Float64`, # and using this information *compiles* an efficient type-specfic # version of `mandel`, which it caches for use in any subsequent call # *on the same type*. Indeed if we call `mandel` on a new type, a new # compilation will be needed: @time mandel(1.0 + 5.0im) @time mandel(2.0 + 0.5im) # Since plotting the Mandelbrot set means calling `mandel` millions of # times on the same type, the advantage of just-in-time compilation is # obvious. # + using PyPlot plt.imshow([mandel(x + y * im) for y = -1:0.001:1, x = -2:0.001:1]) # - # ## Multiple dispatch # # You will never see anything like `A.add(B)` in Julia because Julia # is not a traditional object-oriented language. In Julia, function and # structure are kept separate, with the help of abstract types and # multiple dispatch, as we explain next # In addition to regular concrete types, such as `Float64` and # `String`, Julia has a built-in heirarchy of *abstract* types. These # generally have subtypes but no instances: typeof(42) supertype(Int64) supertype(Signed) subtypes(Integer) Bool <: Integer # is Bool a subtype of Integer? Bool <: String # In Julia, which is optionally typed, one uses type annotations to # adapt the behaviour of functions to their types. If we define divide(x, y) = x / y # then `divide(x, y)` will make sense whenever `x / y` makes sense (for # the built-in function `/`). 
For example, we can use it to divide two # integers, or two matrices: divide(1, 2) divide([1 2; 3 4], [1 2; 3 7]) # To vary the behaviour for specific types we make type annotatations: divide(x::Integer, y::Integer) = floor(x/y) divide(x::String, y::String) = join([x, y], " / ") divide(1, 2) divide("Hello", "World!") # In the case of `Float64` the original "fallback" method still # applies: divide(1.0, 2.0) # ## User-defined types # # Users can define their own abstract types and composite types: # + abstract type Organism end struct Animal <: Organism name::String is_hervibore::Bool end struct Plant <: Organism name::String is_flowering::Bool end describe(o::Organism) = string(o.name) # fall-back method function describe(p::Plant) if p.is_flowering text = " is a flowering plant." else text = " is a non-flowering plant." end return p.name*text end # - describe(Animal("Elephant", true)) describe(Plant("Fern", false)) # ## Type inference and multiple dispatch # # *Type inference* is the process of identifying the types of the arguments to dispatch the right method. # # Blogpost about [type dispatch](http://www.stochasticlifestyle.com/type-dispatch-design-post-object-oriented-programming-julia/) by [](http://www.chrisrackauckas.com/). # + function function_x(x::String) println("this is a string: $x") end function function_x(x::Int) println("$(x^2) is the square of $x") end # - # each call to the function_x() will dispatch the corresponding method depending on the parameter's type function_x("a string") function_x(2) # ## Automatic differentiation # # Differentiation of almost arbitrary programs with respect to their input. ([source]( https://render.githubusercontent.com/view/ipynb?commit=89317894e2e5370a80e45d52db8a4055a4fdecd6&enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f6d6174626573616e636f6e2f454d455f4a756c69615f776f726b73686f702f383933313738393465326535333730613830653435643532646238613430353561346664656364362f315f496e74726f64756374696f6e2e6970796e62&nwo=matbesancon%2FEME_Julia_workshop&path=1_Introduction.ipynb&repository_id=270611906&repository_type=Repository#Automatic-differentiation) by [@matbesancon](https://github.com/matbesancon)) # + using ForwardDiff function sqrt_babylonian(s) x = s / 2 while abs(x^2 - s) > 0.001 x = (x + s/x) / 2 end x end # - sqrt_babylonian(2) - sqrt(2) @show ForwardDiff.derivative(sqrt_babylonian, 2); @show ForwardDiff.derivative(sqrt, 2); # ## Unitful computations # Physicists' dreams finally made true. ([soure](https://render.githubusercontent.com/view/ipynb?commit=89317894e2e5370a80e45d52db8a4055a4fdecd6&enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f6d6174626573616e636f6e2f454d455f4a756c69615f776f726b73686f702f383933313738393465326535333730613830653435643532646238613430353561346664656364362f315f496e74726f64756374696f6e2e6970796e62&nwo=matbesancon%2FEME_Julia_workshop&path=1_Introduction.ipynb&repository_id=270611906&repository_type=Repository#Unitful-computations) by [@matbesancon](https://github.com/matbesancon)) using Unitful using Unitful: J, kg, m, s 3J + 1kg * (1m / 1s)^2 # MLJ # # # MLJ # # MLJ (Machine Learning in Julia) is a toolbox written in Julia providing a common interface and meta-algorithms for selecting, tuning, evaluating, composing and comparing machine learning models written in Julia and other languages. MLJ is released under the MIT licensed and sponsored by the [Alan Turing Institute](https://www.turing.ac.uk/). 
# ### The MLJ Universe # # The functionality of MLJ is distributed over a number of repositories # illustrated in the dependency chart below. # # [MLJ](https://github.com/alan-turing-institute/MLJ) * [MLJBase](https://github.com/alan-turing-institute/MLJBase.jl) * [MLJModelInterface](https://github.com/alan-turing-institute/MLJModelInterface.jl) * [MLJModels](https://github.com/alan-turing-institute/MLJModels.jl) * [MLJTuning](https://github.com/alan-turing-institute/MLJTuning.jl) * [MLJLinearModels](https://github.com/alan-turing-institute/MLJLinearModels.jl) * [MLJFlux](https://github.com/alan-turing-institute/MLJFlux.jl) * [MLJTutorials](https://github.com/alan-turing-institute/MLJTutorials) * [MLJScientificTypes](https://github.com/alan-turing-institute/MLJScientificTypes.jl) * [ScientificTypes](https://github.com/alan-turing-institute/ScientificTypes.jl) # # #
# *(figure: MLJ repository dependency chart)*
#
# # *Dependency chart for MLJ repositories. Repositories with dashed # connections do not currently exist but are planned/proposed.* # MLJ provides access to to a wide variety of machine learning models. For the most up-to-date list of available models `models()`. using MLJ models() # ## Fit, predict, transform # # The following example is using the `fit()`, `predict()`, and `transform()` functions of MLJ. import Statistics using PrettyPrinting using StableRNGs X, y = @load_iris; # let's also load the DecisionTreeClassifier: @load DecisionTreeClassifier tree_model = DecisionTreeClassifier() # ## MLJ Machine # # In MLJ, a *model* is an object that only serves as a container for the hyperparameters of the model. A *machine* is an object wrapping both a model and data and can contain information on the *trained* model; it does *not* fit the model by itself. However, it does check that the model is compatible with the scientific type of the data and will warn you otherwise. tree = machine(tree_model, X, y) # A machine is used both for supervised and unsupervised model. In this tutorial we give an example for the supervised model first and then go on with the unsupervised case. # # ## Training and testing a supervised model # # Now that you've declared the model you'd like to consider and the data, we are left with the standard training and testing step for a supervised learning algorithm. # # ## Splitting the data # # To split the data into a training and testing set, you can use the function `partition` to obtain indices for data points that should be considered either as training or testing data: rng = StableRNG(566) train, test = partition(eachindex(y), 0.7, shuffle=true, rng=rng) test[1:3] # ## Fitting and testing the machine # # To fit the machine, you can use the function `fit!` specifying the rows to be used for the training: fit!(tree, rows=train) # Note that this **modifies** the machine which now contains the trained parameters of the decision tree. You can inspect the result of the fitting with the `fitted_params` method: fitted_params(tree) |> pprint # This `fitresult` will vary from model to model though classifiers will usually give out a tuple with the first element corresponding to the fitting and the second one keeping track of how classes are named (so that predictions can be appropriately named). # # You can now use the machine to make predictions with the `predict` function specifying rows to be used for the prediction: ŷ = predict(tree, rows=test) @show ŷ[1] # Note that the output is probabilistic, effectively a vector with a score for each class. You could get the mode by using the `mode` function on `ŷ` or using `predict_mode`: ȳ = predict_mode(tree, rows=test) @show ȳ[1] @show mode(ŷ[1]) # To measure the discrepancy between ŷ and y you could use the average cross entropy: mce = cross_entropy(ŷ, y[test]) |> mean round(mce, digits=4) # # [Check out MLJ example with TreeParzen.jl](TreeParzen_example.ipynb) # # A more advanced example using MLJ using StableRNGs import DataFrames @load RidgeRegressor pkg=MultivariateStats # In this example we will show how to generate a model from a network; there are two approaches: # # * using the `@from_network` macro # * writing the model in full # # the first approach should usually be the one considered as it's simpler. # # Generating a model from a network allows subsequent composition of that network with other tasks and tuning of that network. 
# # ### Using the @from_network macro # # Let's define a simple network # # *Input layer* # + rng = StableRNG(6616) # for reproducibility x1 = rand(rng, 300) x2 = rand(rng, 300) x3 = rand(rng, 300) y = exp.(x1 - x2 -2x3 + 0.1*rand(rng, 300)) X = DataFrames.DataFrame(x1=x1, x2=x2, x3=x3) test, train = partition(eachindex(y), 0.8); Xs = source(X) ys = source(y, kind=:target) # - # *First layer* # + std_model = Standardizer() stand = machine(std_model, Xs) W = MLJ.transform(stand, Xs) box_model = UnivariateBoxCoxTransformer() box = machine(box_model, ys) z = MLJ.transform(box, ys) # - # *Second layer* ridge_model = RidgeRegressor(lambda=0.1) ridge = machine(ridge_model, W, z) ẑ = predict(ridge, W) # *Output* ŷ = inverse_transform(box, ẑ) # No fitting has been done thus far, we have just defined a sequence of operations. # # To form a model out of that network is easy using the `@from_network` macro: @from_network CompositeModel(std=std_model, box=box_model, ridge=ridge_model) <= ŷ; # The macro defines a constructor CompositeModel and attributes a name to the different models; the ordering / connection between the nodes is inferred from `ŷ` via the `<= ŷ`. # # **Note**: had the model been probabilistic (e.g. `RidgeClassifier`) you would have needed to add `is_probabilistic=true` at the end. cm = machine(CompositeModel(), X, y) res = evaluate!(cm, resampling=Holdout(fraction_train=0.8, rng=51), measure=rms) round(res.measurement[1], sigdigits=3) # ## Check out more [Data Science tutorials in Julia](https://alan-turing-institute.github.io/DataScienceTutorials.jl/). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Opis problema # Naš zadatak je da iz skupa podataka o pacijentima koji su podvrgnuti **elektrokonverziji** napravimo **klasifikacioni model** koji predviđa da li će procedura biti uspešna. # U ovoj svesci predstavljeni su različiti modeli dobijeni primenom drveta odlučivanja, naivnog Bajesa, metode potpornih vektora i neuronskih mreža. # ## Učitavanje podataka # Podaci su dimenzija **147 x 49**. Redovi predstavljaju pacijente indeksirane po broju iz baze, a atributi su karakteristike zdravstvenog stanja pacijenata i karakteristike primenjene elektrokonverzije. # Ulazni atributi su celobrojnog tipa, a ciljni atribut (uspešnost elektrokonverzije) je istinitosnog tipa. Važno je napomenuti da je skup podataka **nebalansiran**: klasa *True (uspešna elektrokonverzija)* je zastupljena u 130 zapisa, dok je klasa *False (neuspešna elektrokonverzija)* zastupljena u svega 17 zapisa. # + import pandas as pd df = pd.read_csv( 'elektroprecisceno.csv', index_col=0 ) df # - # ## Podela podataka na trening i test # Skup podataka delimo na deo za treniranje modela i deo za testiranje dobijenog modela. Podela će se vršiti pre primene svakog od različitih algoritama kako bi se poništile moguće transformacije i kako bi za svaki bile izabrane odrednice kreiranja trening skupa. # # Prosleđeni argument *random_state* je postavljen na 0 jer se na taj način dobija uvek isto podeljen skup podataka, što je pogodno za upoređivanje modela. # Argument *stratify* dobija vrednost ciljane kolone kako bi raspodela podataka u trening i test skupu ostala približno ista kao u početnom skupu, što je posebno važno jer je u pitanju nebalansirani skup. # Koristi se i *sampler* za dodatno balansiranje u trening skupu. 
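# A quick, self-contained check of the two points above (a sketch on synthetic labels
# that roughly mimic the 130/17 imbalance, not the notebook's data): stratification
# keeps the class ratio in both splits, and the oversampler balances only the
# training part.

# +
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split as tts_demo
from imblearn.over_sampling import RandomOverSampler

y_demo = pd.Series([True]*130 + [False]*17)             # synthetic, imbalanced labels
X_demo = pd.DataFrame({'x': np.arange(len(y_demo))})    # dummy feature

X_tr, X_te, y_tr, y_te = tts_demo(X_demo, y_demo, random_state=0,
                                  stratify=y_demo, test_size=0.35)
print(y_tr.value_counts(normalize=True))                # ~ same ratio as the full data
print(y_te.value_counts(normalize=True))

X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)
print(pd.Series(y_bal).value_counts())                  # training classes now balanced
# -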
# + from sklearn.model_selection import train_test_split X = df.iloc[:, :-1] y = df.iloc[:, -1] tts = train_test_split def train_test_split(X, y, sampler=None, test=0.35): X_train, X_test, y_train, y_test = tts(X, y, random_state=0, stratify=y, test_size=test) if sampler is not None: X_train, y_train = sampler(random_state=0).fit_resample(X_train, y_train) return X_train, X_test, y_train, y_test # - # ## Kvalitet modela # Mera koja nam daje najviše informacija o kvalitetu modela je **balansirana tačnost** (*macro_avg*) koja predstavlja srednju vrednost odziva za svaku klasu. Korišćeni su **matrica konfuzije** (u vidu **toplotne mape**) i **izveštaj klasifikacije** koji nam pružaju uvid u različite osobine modela. Funkcija prijavljuje i loše klasifikovane neuspešne procedure. Prikazana je i **ROC kriva**, za koju važi da je površina ispod nje dobra mera uspeha binarnog klasifikatora, kao i kriva koja prikazuje **odnos preciznosti i odziva** na ključnoj klasi neuspešnih procedura, a čija je numerička mera prosečna preciznost. # + import numpy as np import matplotlib.pyplot as plt from sklearn.metrics import plot_roc_curve, plot_precision_recall_curve,\ classification_report, plot_confusion_matrix _, XXX, _, _ = train_test_split(X, y) def kvalitet_modela(clf, X_test, y_test): y_pred = clf.predict(X_test) print(classification_report(y_test, y_pred)) fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(15, 4)) plot_confusion_matrix(clf, X_test, y_test, ax=ax1, colorbar=False, cmap='Reds') ax1.set_xlabel('Predviđeno') ax1.set_ylabel('Stvarno') ax1.set_title('Matrica konfuzije') plot_roc_curve(clf, X_test, y_test, pos_label=False, lw=2, color='red', ax=ax2) ax2.set_xlabel('Stopa lažno neuspešnih') ax2.set_ylabel('Stopa stvarno neuspešnih') ax2.set_title('ROC kriva neuspešnih') ax2.legend(loc='lower right') plot_precision_recall_curve(clf, X_test, y_test, pos_label=False, lw=2, color='red', ax=ax3) ax3.set_xlabel('Odziv neuspešnih (specifičnost)') ax3.set_ylabel('Preciznost neuspešnih') ax3.set_title('Odnos preciznosti i odziva') ax3.legend(loc='upper right') plt.show() try: print('Greške:', XXX.iloc[np.where((y_pred != y_test) & ~y_test)].index.to_numpy()) except: pass # - # ## Stablo odlučivanja # **Stablo odlučivanja** je klasifikator koji u svakom unutrašnjem čvoru sadrži upit za određeni atribut na osnovu kog se pretraga dalje usmerava kroz adekvatne grane, u naredne čvorove. Za razliku od unutrašnjih čvorova, svakom listu je pridružena odgovarajuća klasa. Da bismo klasifikovali neki podatak na osnovu atributa se usmeravamo kroz stablo odgovarajućim granama, sve do lista koji će reći kojoj klasi podatak pripada. # # Za treniranje i kreiranje modela stabla odlučivanja, zbog nebalansiranosti skupa podataka, prosleđen je parametar *class_weight* sa vrednošću *'balanced'* koja označava da su težine svake klase obrnuto srazmerne njihovoj zastupljenosti u skupu. # + from sklearn.tree import DecisionTreeClassifier X_train, X_test, y_train, y_test = train_test_split(X, y) tree = DecisionTreeClassifier(random_state=0, class_weight='balanced') tree.fit(X_train, y_train) kvalitet_modela(tree, X_test, y_test) # - # ## Skraćivanje stabla # # Dobijeno stablo je u potpunosti prilagođeno trening podacima i ne pokazuje dovoljno dobre performanse nad test podacima. Zbog toga treba pokušati sa poboljšanjem modela **odsecanjem delova stabla**. # Korišćeno je odsecanje modifikacijom vrednosti parametra *ccp_alpha* gde se za veće vrednosti odseca veći broj čvorova. 
# Za različite vrednosti parametra ispitan je kvalitet dobijenog stabla (*balanced_accuracy_score*). Međutim, najbolji kvalitet se dobija za *alpha=0.0*, što znači da se *odsecanjem stabla ne dobijaju bolji rezultati*. # + from sklearn.metrics import balanced_accuracy_score ccp_alphas = tree.cost_complexity_pruning_path(X_train, y_train).ccp_alphas[:-1] trees = [] for ccp_alpha in ccp_alphas: tree_pruned = DecisionTreeClassifier(random_state = 0, class_weight='balanced', ccp_alpha = ccp_alpha) tree_pruned.fit(X_train, y_train) trees.append(tree_pruned) train_scores = [balanced_accuracy_score(y_train,tree_pruned.predict(X_train)) for tree_pruned in trees] test_scores = [balanced_accuracy_score(y_test,tree_pruned.predict(X_test)) for tree_pruned in trees] plt.xlabel('alpha') plt.ylabel('balanced_accuracy_score') plt.plot(ccp_alphas, train_scores, marker='o', label='train', drawstyle='steps-post') plt.plot(ccp_alphas, test_scores, marker='o', label='test', drawstyle='steps-post') plt.legend() plt.show() # - # Konačno stablo odgovara početnom stablu s obzirom da modifikacije ne doprinose kvalitetu. Balansirana tačnost stabla iznosi $63\%$, pri čemu False klasi odgovara odziv $33\%$ što nije dovoljno dobro. # Konačno stablo prikazano je na slici ispod: # + import sklearn.tree fig = plt.figure(figsize=(20,15)) _ = sklearn.tree.plot_tree(tree, feature_names=X_train.columns, class_names=['False','True'], filled=True) # - # ## Ispitivanje povezanosti i značaj atributa # Funkcija *permutation_importance* izračunava značaj atributa za klasifikator nad određenim skupom podataka. # Značaj se izračunava kao smanjenje nečistoće čvora otežano verovatnoćom dosezanja tog čvora. Što je veća vrednost koja se dobije, to je atribut značajniji. # + from sklearn.inspection import permutation_importance importances = sorted(zip(tree.feature_importances_, X.columns), reverse=True) for importance, column in importances: if importance >= 1e-06: print(f'{column} : {importance:.6f}') # - # Dendrogram i toplotna mapa(heatmap) predstavljaju **korelacije između atributa**. Presekom napravljenim na dendrogramu moguće je napraviti selekciju atributa. # + from scipy.stats import spearmanr from scipy.cluster import hierarchy fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(13.5, 7)) corr = spearmanr(X).correlation corr_linkage = hierarchy.ward(corr) dendro = hierarchy.dendrogram(corr_linkage, labels=X.columns.tolist(), ax=ax1) dendro_idx = np.arange(0, len(dendro['ivl'])) ax2.imshow(corr[dendro['leaves'], :][:, dendro['leaves']]) ax2.set_xticks(dendro_idx) ax2.set_yticks(dendro_idx) ax2.set_xticklabels(dendro['ivl'], rotation='vertical') ax2.set_yticklabels(dendro['ivl']) fig.tight_layout() plt.show() # - # ## Primena šume za klasifikaciju # **Slučajna šuma** odgovara kombinaciji različitih drveta odlučivanja na različitim poduzorcima skupa podataka. Nad rezultatima koje drveta odlučivanja koristi se usrednjavanje kako bi se poboljšala tačnost predviđanja i kontrolisalo preprilagođavanje. Veličina poduzoraka se kontroliše parametrom *max_samples*, ali samo ukoliko je parametar *bootstrap* postavljen na True (ukoliko je *bootstrap* postavljen na False ceo skup se koristi za izgradnju svakog drveta). # # Kao i kod drveta odlučivanja, parametar *random_state* je fiksiran i težine klasa su balansirane. # Parametar *n_estimators* predstavlja broj stabala koji želimo da se kreira i na osnovu kojih se procenjuje konačan rezultat (uzima se srednja vrednost za svaku procenu). 
# Ispitano je više vrednosti za *n_estimators* i *max_samples* i među njima najbolju balansiranu tačnost daju vrednosti u navedenom modelu. # + from sklearn.ensemble import RandomForestClassifier forest = RandomForestClassifier(random_state=0, bootstrap=True, n_estimators=2, max_samples=50, class_weight='balanced') forest.fit(X_train, y_train) kvalitet_modela(forest, X_test, y_test) # - # ## # Klasifikator **** procenjuje uslovnu verovatnoću klase sa pretpostavkom da su atributi međusobno nezavisni. Korelisani atributi mogu narušiti performanse naivnog Bajesovog klasifikatora jer narušavaju pretpostavku o nezavisnosti. Na toplotnoj mapi smo videli da postoje korelacije između određenih atributa, međutim njih nema mnogo pa ima smisla da ipak pokušamo sa upotrebom ovog klasifikatora. # # Korišćena su dva drugačija klasifikatora, **GaussianNB** i **ComplementNB**. GaussianNB pretpostavlja da je raspodela atributa normalna. Zbog toga ćemo pre primene ovog klasifikatora podatke standardizovati. # # Dobijen model GaussianNB klasifikatora za veličinu test skupa $35\%$ skoro sve podatke svrstava u False klasu. Može biti iskorišćena činjenica da model ne zahteva veliki trening skup kako bi pokazao dobre performanse, pa promenom veličine test skupa na $70\%$ dobijaju se nešto bolji rezultati. Međutim, oni i dalje nisu dovoljno dobri jer balansirana tačnost iznosi samo $48\%$, gde je odziv za obe klase približno $50\%$. # + from sklearn.preprocessing import StandardScaler from sklearn.naive_bayes import GaussianNB from sklearn.pipeline import Pipeline X_train_g, X_test_g, y_train_g, y_test_g = train_test_split(X, y, test=0.7) gnb = Pipeline([('std', StandardScaler()), ('gnb', GaussianNB())]) gnb.fit(X_train_g, y_train_g) kvalitet_modela(gnb, X_test_g, y_test_g) # - # ComplementNB je dopuna MultionimalNB koji je posebno prilagođen nebalansiranim podacima. ComplementNB ne obrađuje negativne vrednosti atributa, pa je dodatno potrebno podatke skalirati tako da se svi nalaze u nekom pozitivnom opsegu. # # Model ComplementNB klasifikatora sa RandomOverSampler pokazuje dosta bolje osobine jer balansirana tačnost sada iznosi $73\%$. Model ima vrlo slične performanse kao stablo odlučivanja, ali pogađa jednu dodatnu neuspešnu elektrokonverziju. # + from sklearn.preprocessing import MinMaxScaler from imblearn.over_sampling import RandomOverSampler from sklearn.naive_bayes import ComplementNB X_train_rand, _, y_train_rand, _ = train_test_split(X, y, RandomOverSampler) cnb = Pipeline([('mms', MinMaxScaler()), ('cnb', ComplementNB())]) cnb.fit(X_train_rand, y_train_rand) kvalitet_modela(cnb, X_test, y_test) # - # ## Neuronska mreža # **Veštačka neuronska mreža** predstavlja simulaciju biološkog nervnog sistema jer ljudski mozak uči promenama jačina sinaptičkih veza između neurona nakon ponovljene simulacije istim impulsima. # Ona se sastoji od skupa čvorova koji simuliraju neurone i direktnih veza koje simuliraju sinaptičke veze. Promene težina direktnih veza odgovaraju promenama jačina sinaptičkih veza, a ulazni podaci odgovaraju impulsima. # Najjednostavniji model neuronske mreže je **perceptron**. Granica odlučivanja kod perceptrona je linearna hiperravan koja razdvaja podatke na dve klase. Iz toga razloga, perceptron nije odgovarajući za primenu kod ovog skupa podataka jer smo PCA analizom uvideli da podaci nisu linearno razdvojivi. # Zbog toga nam je potrebna **višeslojna neuronska mreža** koja modeluje kompleksnije odnose između ulaznih i izlaznih promenljivih. 
U sklearn-u klasa **MLPClassifier** implementira algoritam višeslojnih neuronskih mreža. Model je veoma osetljiv na razlike u opsege podataka, pa je podatke potrebno je skalirati pre treniranja modela. _ = ''' X_train, X_test, y_train, y_test = train_test_split(X, y, RandomOverSampler) std_scaler = StandardScaler() std_scaler.fit(X_train) X_train = std_scaler.transform(X_train) X_test = std_scaler.transform(X_test) ''' # ## Određivanje parametara metoda # GridSearchCV je funkcija koja implementira iscrpnu pretragu nad specifikovanim vrednostima parametara klasifikatora. Parametri se optimizuju unakrsnom validacijom nad mrežom parametara. # Za funkciju ocenjivanja modela korišćene su balansirana tačnost i površina ispod roc krive sa akcentom na balansiranoj tačnosti. # # Parametri potrebni za višeslojnu neuronsku mrežu:
# **hidden_layer_sizes** - number of nodes in each hidden layer
# **activation** - activation function used in the hidden layers
# **solver** - solver used to optimise the weights
# **alpha** - regularisation parameter, controls overfitting
# **learning_rate** - how the learning rate changes across the iterations of the algorithm
# **learning_rate_init** - initial value of the learning rate
# **power_t** - exponent used when the learning rate schedule is inverse scaling
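# A search over these parameters is stubbed out below; as a compact, runnable
# illustration of the same idea, here is a hypothetical toy version on synthetic
# data (not the notebook's dataset), showing how the winning configuration and its
# cross-validated balanced accuracy are read off the fitted search:

# +
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X_toy, y_toy = make_classification(n_samples=200, n_features=10,
                                   weights=[0.85], random_state=0)

search = RandomizedSearchCV(
    MLPClassifier(max_iter=1000, random_state=0),
    param_distributions={'hidden_layer_sizes': [(6, 4), (10, 5)],
                         'alpha': [0.0001, 0.001],
                         'learning_rate_init': [0.001, 0.01]},
    n_iter=5, cv=3, scoring='balanced_accuracy', random_state=0)
search.fit(X_toy, y_toy)

print(search.best_params_)             # winning hyperparameter combination
print(round(search.best_score_, 3))    # its mean cross-validated balanced accuracy
# -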
_ = ''' from sklearn.model_selection import RandomizedSearchCV from sklearn.neural_network import MLPClassifier mlp = MLPClassifier(solver='sgd', max_iter=400, random_state=0) param_grid = [{ 'hidden_layer_sizes':[(6,4),(10,5),(15,10)], 'activation':['relu','tanh'], 'alpha':[0.0001,0.001,0.005], 'batch_size':[1,10,50], 'power_t':[0.3,0.5,0.7], 'learning_rate':['constant','invscaling', 'adaptive'], 'learning_rate_init':[0.001,0.01] }] grid = RandomizedSearchCV(estimator=mlp, param_distributions=param_grid, cv=3, refit='balanced_accuracy', scoring= ['roc_auc','balanced_accuracy']) grid.fit(X_train, y_train) ''' _ = ''' from sklearn.model_selection import RandomizedSearchCV from sklearn.neural_network import MLPClassifier mlp = MLPClassifier(max_iter=400, random_state=0) param_grid = [{ 'hidden_layer_sizes':[(6,4),(10,5),(15,10)], 'activation':['relu','tanh'], 'solver':['adam','lbfgs'], 'batch_size':[1,5,10], 'alpha':[0.0001,0.001,0.005], 'learning_rate_init':[0.001,0.01,0.1] }] grid = RandomizedSearchCV(estimator=mlp, param_distributions=param_grid, cv=3, refit='balanced_accuracy', scoring= ['roc_auc','balanced_accuracy']) grid.fit(X_train, y_train) ''' # ## Kreiranje i primena neuronske mreže # Neuronska mreža se kreira sa parametrima dobijenim funkcijom GridSearchCV, kao i uz RandomOverSampler. # Finalna neuronska mreža dobijena je kombinovanjem različitih pokretanja funkcije GridSearchCV. # # Dobijeni model za svaku pojedinačnu kombinciju parametara dobijenu iz mreže parametara pokazuje manju balansiranu tačnost nego dobijenu unakrsnom validacijom. # Balansirana tačnost iznosi $76\%$, pri čemu je odziv za *False* klasu jednak $67\%$, a za *True* klasu $85\%$. # Nakon primene balansiranja trening skupa pomoću RandomOverSampler dobijen je kvalitetniji model. # + from sklearn.neural_network import MLPClassifier mlp = MLPClassifier(solver='sgd', activation='tanh', hidden_layer_sizes=(15,10), learning_rate='invscaling', learning_rate_init=0.001, power_t = 0.6, batch_size=5, max_iter=400, random_state=0) mlp = Pipeline([('std', StandardScaler()), ('mlp', mlp)]) mlp = mlp.fit(X_train_rand, y_train_rand) kvalitet_modela(mlp, X_test, y_test) # - # ## Metoda potpornih vektora # Granica odlučivanja kojoj klasi podatak pripada predstavljena je podskupom trening podataka: **potpornim vektorima**. # Suština metoda je određivanje **maksimalne margine hiperravni**. Kada je margina mala, model je podložniji preprilagođavanju i slabo generalizuje nad novim podacima. # **Nelinearni SVM** se primenjuje na linearno nerazdvojive skupove podataka, a klasa koja mu odgovara u sklearn-u je **SVC** koji će biti ovde primenjen. # + from imblearn.over_sampling import ADASYN X_train_ada, _, y_train_ada, _ = train_test_split(X, y, ADASYN) # - # ## Izbor parametara metoda # Za izbor parametara je ponovo korišćena funkcija GridSearchCV sa istim osobinama. # # Parametri potrebni za metodu potpornih vektora:
# **C** - penalty applied to decision boundaries with large slack-variable values
# **kernel** - type of kernel function used to transform the feature space
# **gamma** - kernel coefficient (for 'rbf', 'poly' and 'sigmoid')
# **coef0** - independent term of the kernel function (for 'poly' and 'sigmoid')
# **degree** - degree of the polynomial kernel (only for 'poly')
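# For orientation, these parameters enter sklearn's kernels as follows:
# linear $\langle x, x' \rangle$, poly $(\gamma \langle x, x' \rangle + r)^{d}$,
# rbf $\exp(-\gamma \lVert x - x' \rVert^{2})$ and sigmoid $\tanh(\gamma \langle x, x' \rangle + r)$,
# where $r$ is `coef0` and $d$ is `degree`. A small check of the polynomial case (a sketch):

# +
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel

x = np.array([[1.0, 2.0]])
z = np.array([[3.0, 0.5]])
gamma, coef0, degree = 0.5, 0.3, 2

by_hand = (gamma * (x @ z.T).item() + coef0) ** degree
print(by_hand)                                                           # manual (gamma*<x,z> + coef0)**degree
print(polynomial_kernel(x, z, degree=degree, gamma=gamma, coef0=coef0))  # same value, computed by sklearn
# -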
# _ = ''' from sklearn.model_selection import GridSearchCV from sklearn.svm import SVC svc = SVC(random_state=0, class_weight='balanced') param_grid = [{ 'C':[0.1, 0.5, 1.0], 'kernel': ['linear','poly','rbf','sigmoid'], 'gamma':['scale','auto'], 'coef0':[0.3,0.5,0.7,0.9], 'degree':[1,2,3,4,5,6,7], }] grid = GridSearchCV(estimator=svc, param_grid=param_grid, cv=5, refit='balanced_accuracy', scoring= ['balanced_accuracy','roc_auc']) grid.fit(X_train, y_train) ''' # ## Kreiranje modela # Klasifikator potpornih vektora se kreira sa parametrima dobijenim pomoću funkcije GridSearchCV, kao i ASASYN oversamplera. # Pretraga parametara pozvana je nad neizmenjenim skupom za trening i nad balansiranim skupom za trening. # Oprobana su oba modela, a predstavljeni pokazuje bolje osobine. Dodatno, izmenjen je parametar degree i dobijen je bolji model. # Balansirana tačnost sada iznosi $70\%$, pri čemu je odziv *False* klase $67\%$, a *True* klase $74\%$. # + from sklearn.svm import SVC # GridSearchCV nad neizmenjenim trening skupom: {'C': 0.1, 'coef0': 0.5, 'degree': 1, 'gamma': 'scale', 'kernel': 'poly'} # GridSearchCV nad balansiranim trening skupom: {'C': 1.0, 'coef0': 0.3, 'degree': 2, 'gamma': 'scale', 'kernel': 'poly'} svc = SVC(kernel='poly', C = 0.1, coef0 = 0.5, degree= 3, gamma='scale', class_weight='balanced', probability=True) svc = Pipeline([('std', StandardScaler()), ('svc', svc)]) svc = svc.fit(X_train_ada, y_train_ada) kvalitet_modela(svc, X_test, y_test) # - # ## Ansambl metoda # + import warnings from mlxtend.classifier import EnsembleVoteClassifier def ansambl(*args): ens = EnsembleVoteClassifier(args, fit_base_estimators=False) with warnings.catch_warnings(): warnings.simplefilter('ignore') ens.fit(_, y_train) return ens # - # Kako su se dosad slučajna šuma i komplementarni naivni Bajes prikazali kao relativno uspešni, a pritom prave samo jednu zajedničku grešku, ima smisla napraviti **ansambl** koji spaja ova dva klasifikatora. Za funkciju odlučivanja može se uzeti pomnožena funkcija odlučivanja ova dva klasifikatora – pacijentu se dodeljuje klasa True samo ako oba člana ansambla tako misle. Ovo je po svim parametrima najbolji rezultat trenutno. # + forcnb = ansambl(forest, cnb) kvalitet_modela(forcnb, X_test, y_test) # - # Balansiranjem trening skupa dobijen je kvalitetniji model neuronskih mreža koji takođe može biti iskorišćen u ansamblu sa komplementarnim naivnim Bajesom, jer sa njim nema zajedničkih grešaka. Ovaj model je još bolji po svim parametrima najbolji i sveukupno najbolji. # + mlpcnb = ansambl(mlp, cnb) kvalitet_modela(mlpcnb, X_test, y_test) # - # Moguće je efikasno kombinovati i višeslojni perceptron sa klasifikatorom zasnovanim na potpornim vektorima, ali ipak sa osetno više grešaka. Slična situacije je i sa ostalim kombinacijama, tako da ipak ne postoji nijedna koja bi mogla da nadmaši prethodnu. 
# + mlpsvc = ansambl(mlp, svc) kvalitet_modela(mlpsvc, X_test, y_test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Testing hypotheses via a mixture estimation model # Simple demo of the idea in https://arxiv.org/pdf/1412.2044v2.pdf # Related discussion: ['s Blog post](https://xianblog.wordpress.com/2016/12/05/bayesian-parameter-estimation-versus-model-comparison/), [Discussion on X validated](https://xianblog.wordpress.com/2016/12/05/bayesian-parameter-estimation-versus-model-comparison/) # The data and code is based on the [BEST example of PyMC3](https://github.com/pymc-devs/pymc3/blob/master/docs/source/notebooks/BEST.ipynb) # + import numpy as np import pymc3 as pm import pandas as pd import seaborn as sns sns.set(color_codes=True) # %pylab inline # %config InlineBackend.figure_format = 'retina' # %qtconsole --colors=linux # np.random.seed(20090425) # + # drug = (101,100,102,104,102,97,105,105,98,101,100,123,105,103,100,95,102,106, # 109,102,82,102,100,102,102,101,102,102,103,103,97,97,103,101,97,104, # 96,103,124,101,101,100,101,101,104,100,101) # placebo = (99,101,100,101,102,100,97,101,104,101,102,102,100,105,88,101,100, # 104,100,100,100,101,102,103,97,101,101,100,101,99,101,100,100, # 101,100,99,101,100,102,99,100,99) Nsbj = 42 drug = np.random.normal(loc=101.8,scale=1,size=Nsbj) placebo = np.random.normal(loc=100.5,scale=1,size=Nsbj) y1 = np.array(drug) y2 = np.array(placebo) y = pd.DataFrame(dict(value=np.r_[y1, y2], group=np.r_[['drug']*len(drug), ['placebo']*len(placebo)])) y.hist('value', by='group'); # - # # Bayesian Estimation Supersedes the T-Test (BEST) model with pm.Model() as model: μ1 = pm.Flat('μ1') σ1 = pm.HalfFlat('σ1') group1 = pm.Normal('drug', mu=μ1, sd=σ1, observed=y1) μ2 = pm.Flat('μ2') σ2 = pm.HalfFlat('σ2') group2 = pm.Normal('placebo', mu=μ2, sd=σ2, observed=y2) start = pm.find_MAP() start # + # Bayesian Estimation Supersedes the T-Test (BEST) model μ_m = y.value.mean() μ_s = y.value.std() * 2 σ_low = 0 σ_high = 10 with pm.Model() as model: μ1 = pm.Normal('μ1', μ_m, sd=μ_s) μ2 = pm.Normal('μ2', μ_m, sd=μ_s) σ1 = pm.Uniform('σ1', lower=σ_low, upper=σ_high) σ2 = pm.Uniform('σ2', lower=σ_low, upper=σ_high) group1 = pm.Normal('drug', mu=μ1, sd=σ1, observed=y1) group2 = pm.Normal('placebo', mu=μ2, sd=σ2, observed=y2) diff_of_means = pm.Deterministic('μ1-μ2', μ1 - μ2) diff_of_stds = pm.Deterministic('σ1-σ2', σ1 - σ2) effect_size = pm.Deterministic('effect size', diff_of_means / np.sqrt((σ1**2 + σ2**2) / 2)) trace = pm.sample(1000, njobs=4) # - pltvar = ['μ1', 'μ2', 'σ1', 'σ2'] maplist = [float(start[val]) for val in pltvar] maplist pltvar = ['μ1', 'μ2', 'σ1', 'σ2'] pm.plot_posterior(trace, varnames=pltvar, ref_val=maplist, color='#87ceeb'); pm.plot_posterior(trace, varnames=['μ1-μ2', 'σ1-σ2', 'effect size'], ref_val=0, color='#87ceeb'); pm.forestplot(trace, varnames=[v.name for v in model.vars[:2]]) pm.forestplot(trace, varnames=['μ1', 'μ2', 'σ1', 'σ2', 'μ1-μ2', 'σ1-σ2', 'effect size']); pm.df_summary(trace, varnames=['μ1-μ2', 'σ1-σ2', 'effect size']) # # Testing hypotheses via a mixture estimation model # # In the Kamary et al (2014) they introduced a novel paradigm for Bayesian testing of hypotheses and Bayesian model comparison. 
Their alternative to the traditional construction of Bayes factors or posterior probabilities of a model given the data is to consider the hypotheses or models under # comparison as components of a mixture model. Therefore, the original testing problem is replaced with an estimation one that focuses on the probability or weight of a given hypothesis within the mixture model. # # A reference $\text{Beta}(a_0,a_0)$ prior on the mixture weights can be used for the common problem of # testing two contrasting hypotheses or models. In this case the sensitivity of the posterior estimates of the weights to the choice of $a_0$ vanishes as the sample size increases. Kamary et al advocate a default choice of $a_0 = 0.5$, derived from Rousseau and Mengersen (2011) # # ### Update 2017-03-20 # Sort the mixture weight so to break the multimodality [ref](http://mc-stan.org/documentation/case-studies/identifying_mixture_models.html). However, one must be careful of placing the order of the mixture component, as the less possible component should be place in the front. As a heuristic we can place the component coding for the null hypothesis before the alternative hypothesis, as our strawman null hypothesis is mostly false. Of course, if the null hypothesis is actually more plausible this approach will backfired. # + # Hypotheses testing via mixture model estimation import theano.tensor as tt from pymc3.math import logsumexp # from pymc3.dist_math import logpow, gammaln with pm.Model() as model2: #H0 group0_mean = pm.Normal('group0_mean', μ_m, sd=μ_s) group0_std = pm.Uniform('group0_std', lower=σ_low, upper=σ_high) group1_h0 = pm.Normal('drug_h0', mu=group0_mean, sd=group0_std) group2_h0 = pm.Normal('placebo_h0', mu=group0_mean, sd=group0_std) #H1 group1_mean = pm.Normal('group1_mean', μ_m, sd=μ_s) group2_mean = pm.Normal('group2_mean', μ_m, sd=μ_s) group1_std = pm.Uniform('group1_std', lower=σ_low, upper=σ_high) group2_std = pm.Uniform('group2_std', lower=σ_low, upper=σ_high) group1_h1 = pm.Normal('drug_h1', mu=group1_mean, sd=group1_std) group2_h1 = pm.Normal('placebo_h1', mu=group2_mean, sd=group2_std) a0 = pm.Dirichlet('a0', a=np.ones((2))*.5) # use sort to break multimodality a = pm.Deterministic('mixing',tt.sort(a0)) group1 = pm.Mixture('drug', w=a, comp_dists=[group1_h0.distribution,group1_h1.distribution], observed=y1) group2 = pm.Mixture('placebo', w=a, comp_dists=[group2_h0.distribution,group2_h1.distribution], observed=y2) # a = pm.Beta('mixing',alpha=.5,beta=.5) # def mixturelogp(w,dist1,dist2): # logp1 = dist1.distribution.logp # logp2 = dist2.distribution.logp # def logp_(value): # logps = tt.log(w) + logp1(value) + tt.log(1-w) + logp2(value) # return tt.sum(logsumexp(logps)) # return logp_ # group1 = pm.DensityDist('drug', mixturelogp(a,group1_h0,group1_h1), observed=y1) # group2 = pm.DensityDist('placebo', mixturelogp(a,group2_h0,group2_h1), observed=y2) #start = pm.find_MAP() trace = pm.sample(2000, njobs=4, tune=2000) # - pm.traceplot(trace, varnames=['mixing', 'group0_mean', 'group0_std', 'group1_mean', 'group2_mean', 'group1_std', 'group2_std']) plt.show() accept = trace.get_sampler_stats('mean_tree_accept', burn=burnin) print('The accept rate is: %.5f' % (accept.mean())) diverge = trace.get_sampler_stats('diverging',burn=burnin) print('Effective samples') print(pm.diagnostics.effective_n(trace)) print('Diverge of the trace') print(diverge.nonzero()) energy = trace['energy'] energy_diff = np.diff(energy) sns.distplot(energy - energy.mean(), label='energy') sns.distplot(energy_diff, 
label='energy diff') plt.legend() plt.show() pm.plot_posterior(trace, varnames=['mixing', 'group0_mean', 'group0_std', 'group1_mean', 'group2_mean', 'group1_std', 'group2_std'], color='#87ceeb'); # Note that, the mixture model is not very well parameterized (in more ambiguous situation the estimation could be off badly). A better parameterization or other MC sampling techinque should be used for any real problem. # # Model selection via Bayes Factor # + with pm.Model() as model3: delta = pm.Cauchy('delta', alpha=0, beta=1) group_mean = pm.Normal('group_mean', μ_m, sd=μ_s) group_std = pm.Uniform('group_std', lower=σ_low, upper=σ_high) alpha = delta*group_std group1 = pm.Normal('drug', mu=group_mean+alpha/2, sd=group_std, observed=y1) group2 = pm.Normal('placebo', mu=group_mean-alpha/2, sd=group_std, observed=y2) trace3 = pm.sample(2000, njobs=2, tune=1000) pm.plot_posterior(trace3[1000:], varnames=['group_mean', 'group_std', 'delta'], color='#87ceeb'); # + def display_delta(trace, x): # BFs based on density estimation (using kernel smoothing instead of spline) from scipy.stats.kde import gaussian_kde from scipy.stats import cauchy pm.summary(trace, varnames=['delta']) tmp = pm.df_summary(trace, varnames=['delta']) # 95% confidence interval: x0 = tmp.values[0, 3] x1 = tmp.values[0, 4] t_delt = trace['delta'][:] my_pdf = gaussian_kde(t_delt) plt.plot(x, my_pdf(x), '--', lw=2.5, alpha=0.6, label='Posterior') # distribution function plt.plot(x, cauchy.pdf(x), 'r-', lw=2.5, alpha=0.6, label='Prior') posterior = my_pdf(0) # this gives the pdf at point delta = 0 prior = cauchy.pdf(0) # height of order-restricted prior at delta = 0 BF01 = posterior/prior print ('the Bayes Factor is %.5f' %(BF01)) plt.plot([0, 0], [posterior, prior], 'k-', [0, 0], [posterior, prior], 'ko', lw=1.5, alpha=1) plt.xlabel('Delta') plt.ylabel('Density') plt.legend(loc='upper left') plt.show() x = np.linspace(-3, 3, 100) display_delta(trace3[1000:], x) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Code to plot performance of classifier over time import matplotlib.pyplot as plt import numpy as np import csv from os.path import join from matplotlib import colors # helper function def csv2array(file): """ Read in CSV file and return values as numpy array """ with open(file) as csvfile: d = np.asarray(list(csv.reader(csvfile))).astype(np.float) return(d) # Import files base_dir = 'plotdata' self_acc = csv2array(join(base_dir, 'self_indiv_acc.csv')) xtic = csv2array(join(base_dir,'self_xtics.csv')) a1 = csv2array(join(base_dir, 'self_sig_acc1.csv')) a2 = csv2array(join(base_dir, 'self_sig_acc2.csv')) a3 = csv2array(join(base_dir, 'self_sig_acc3.csv')) a4 = csv2array(join(base_dir, 'self_sig_acc4.csv')) a5 = csv2array(join(base_dir, 'self_sig_acc5.csv')) x1 = csv2array(join(base_dir, 'self_sig_x1.csv')) x2 = csv2array(join(base_dir, 'self_sig_x2.csv')) x3 = csv2array(join(base_dir, 'self_sig_x3.csv')) x4 = csv2array(join(base_dir, 'self_sig_x4.csv')) x5 = csv2array(join(base_dir, 'self_sig_x5.csv')) self_sig_clusters = [[x1,a1], [x2,a2], [x3,a3], [x4,a4], [x5,a5]] other_acc = csv2array(join(base_dir, 'other_indiv_acc.csv')) xtic = csv2array(join(base_dir,'other_xtics.csv')) a1 = csv2array(join(base_dir, 'other_sig_acc1.csv')) a2 = csv2array(join(base_dir, 'other_sig_acc2.csv')) a3 = csv2array(join(base_dir, 'other_sig_acc3.csv')) a4 = 
csv2array(join(base_dir, 'other_sig_acc4.csv')) a5 = csv2array(join(base_dir, 'other_sig_acc5.csv')) a6 = csv2array(join(base_dir, 'other_sig_acc6.csv')) a7 = csv2array(join(base_dir, 'other_sig_acc7.csv')) a8 = csv2array(join(base_dir, 'other_sig_acc8.csv')) x1 = csv2array(join(base_dir, 'other_sig_x1.csv')) x2 = csv2array(join(base_dir, 'other_sig_x2.csv')) x3 = csv2array(join(base_dir, 'other_sig_x3.csv')) x4 = csv2array(join(base_dir, 'other_sig_x4.csv')) x5 = csv2array(join(base_dir, 'other_sig_x5.csv')) x6 = csv2array(join(base_dir, 'other_sig_x6.csv')) x7 = csv2array(join(base_dir, 'other_sig_x7.csv')) x8 = csv2array(join(base_dir, 'other_sig_x8.csv')) other_sig_clusters = [[x1,a1], [x2,a2], [x3,a3], [x4,a4], [x5,a5], [x6,a6], [x7,a7], [x8,a8]] # + # generate figure plt.figure(figsize=(15,4), dpi=72) plt.rcParams.update({'font.size':20}) plt.rcParams.update({'legend.handlelength':.1}) A = self_acc B = other_acc nsub, npnt = A.shape N = 256 vals = np.ones((N,4)) vals[:,0] = np.linspace(0/256, 1, N) vals[:,1] = np.linspace(24/256, 1, N) vals[:,2] = np.linspace(24/256, 1, N) newcmp = colors.ListedColormap(vals) color_shuffle = np.arange(37)*5 np.random.shuffle(color_shuffle) top_y = .4 mid_y = .3 top_adj = top_y - .5 mid_adj = mid_y - .5 A_mean = np.mean(A,0) + top_adj A_sd = np.std(A,0) A_upper = A_mean + A_sd A_lower = A_mean - A_sd B_mean = np.mean(B,0) + mid_adj B_sd = np.std(B,0) B_upper = B_mean + B_sd B_lower = B_mean - B_sd # plot lines plt.plot([xtic[0][0],xtic[0][-1]],[top_y, top_y],'-',color='dimgray', alpha=.5) plt.plot([xtic[0][0],xtic[0][-1]],[mid_y, mid_y],'-',color='dimgray', alpha=.5) plt.fill_between(xtic[0],A_upper,A_lower,color='indianred', alpha=.1) plt.fill_between(xtic[0],B_upper,B_lower,color='dodgerblue', alpha=.1) plt.plot(xtic[0],np.mean(A,0)+top_adj,'-', color='indianred',linewidth=1,alpha=.5) plt.plot(xtic[0],np.mean(B,0)+mid_adj,'-', color='dodgerblue',linewidth=1,alpha=.5) # add rasters noleg = True for sc in self_sig_clusters: x, a = sc if noleg: plt.plot(x[0],a[0]+top_adj,'-',linewidth=3, color='indianred', label='Self') noleg = False else: plt.plot(x[0],a[0]+top_adj,'-',linewidth=3, color='indianred') for xi in x[0]: plt.plot([xi,xi],[.53,.55],'-',linewidth=4,color='indianred') noleg = True for sc in other_sig_clusters: x, a = sc if noleg: plt.plot(x[0],a[0]+mid_adj,'-',linewidth=3, color='dodgerblue', label='Other') noleg = False else: plt.plot(x[0],a[0]+mid_adj,'-',linewidth=3, color='dodgerblue') for xi in x[0]: plt.plot([xi,xi],[.5,.52],'-',linewidth=4,color='dodgerblue') # make time = 0 plt.plot([0,0],[.2,.58],'-k') plt.xlim((-200,800)) plt.ylim((0.2,0.56)) plt.xlabel('Time (ms)') plt.yticks([],[]) plt.yticks([mid_y,top_y],['Chance (50%)','Chance (50%)']) plt.title('Decoding Accuracy', loc='left', x=-.05) leg = plt.legend(bbox_to_anchor=(1.05, .9), loc='upper left', frameon=False) leg.get_lines()[0].set_linewidth(20) leg.get_lines()[1].set_linewidth(20) plt.tight_layout() plt.savefig('svm_self_other.png', facecolor='white') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:20181015] # language: python # name: conda-env-20181015-py # --- # + # %matplotlib inline from datetime import time, datetime import os.path from matplotlib import pyplot as plt import pandas import numpy import xarray as xr import rasterio import rasterio.features import fiona import dask from dask.delayed import delayed from 
dask.distributed import LocalCluster, Client import tempfile import datacube from datacube import Datacube from datacube.virtual import construct, construct_from_yaml from datacube.ui.task_app import year_splitter # - # ### Settings # + n_workers = 14 mem_per_worker = 8e9 # 8e9 is 8GB (8,000,000,000 bytes) chunk_size = {'time': 1, 'x': 2000, 'y': 2000} # - # ### Set up a local dask cluster # This lets several processes work at the same time, and manage total memory usage # # We also get a dashboard to see how the system is running cluster = LocalCluster(local_dir=tempfile.gettempdir(), n_workers=n_workers, threads_per_worker=1, memory_limit=mem_per_worker) client = Client(cluster) dask.config.set(get=client.get) client cluster.worker_kwargs dc = Datacube() # ### Construct virtual product LS7_BROKEN_DATE = datetime(2003, 5, 31) is_pre_slc_failure = lambda dataset: dataset.center_time < LS7_BROKEN_DATE def wofls_fuser(dest, src): where_nodata = (src & 1) == 0 numpy.copyto(dest, src, where=where_nodata) return dest fc_land_only_yaml = """ transform: apply_mask mask_measurement_name: water preserve_dtype: false input: juxtapose: - collate: - transform: apply_mask mask_measurement_name: pixelquality preserve_dtype: false input: juxtapose: - product: ls5_fc_albers group_by: solar_day measurements: [PV, NPV, BS] - transform: make_mask input: product: ls5_pq_albers group_by: solar_day fuse_func: datacube.helpers.ga_pq_fuser flags: ga_good_pixel: true mask_measurement_name: pixelquality - transform: apply_mask mask_measurement_name: pixelquality preserve_dtype: false input: juxtapose: - product: ls7_fc_albers group_by: solar_day measurements: [PV, NPV, BS] # dataset_predicate: __main__.is_pre_slc_failure - transform: make_mask input: product: ls7_pq_albers group_by: solar_day fuse_func: datacube.helpers.ga_pq_fuser flags: ga_good_pixel: true mask_measurement_name: pixelquality - transform: apply_mask mask_measurement_name: pixelquality preserve_dtype: false input: juxtapose: - product: ls8_fc_albers group_by: solar_day measurements: [PV, NPV, BS] - transform: make_mask input: product: ls8_pq_albers group_by: solar_day fuse_func: datacube.helpers.ga_pq_fuser flags: ga_good_pixel: true mask_measurement_name: pixelquality - transform: make_mask input: product: wofs_albers group_by: solar_day fuse_func: __main__.wofls_fuser flags: water_observed: false mask_measurement_name: water """ fc_land_only = construct_from_yaml(fc_land_only_yaml) # ### Set up geometry functions def geometry_mask(geoms, geobox, all_touched=False, invert=False, chunks=None): """ Create a mask from shapes. By default, mask is intended for use as a numpy mask, where pixels that overlap shapes are False. :param list[Geometry] geoms: geometries to be rasterized :param datacube.utils.GeoBox geobox: :param bool all_touched: If True, all pixels touched by geometries will be burned in. If false, only pixels whose center is within the polygon or that are selected by Bresenham's line algorithm will be burned in. :param bool invert: If True, mask will be True for pixels that overlap shapes. 
""" data = rasterio.features.geometry_mask([geom.to_crs(geobox.crs) for geom in geoms], out_shape=geobox.shape, transform=geobox.affine, all_touched=all_touched, invert=invert) if chunks is not None: data = dask.array.from_array(data, chunks=tuple(chunks[d] for d in geobox.dims)) coords = [xr.DataArray(data=coord.values, name=dim, dims=[dim], attrs={'units': coord.units}) for dim, coord in geobox.coords.items()] return xr.DataArray(data, coords=coords) def get_shapes(shape_file): with fiona.open(shape_file) as shapes: crs = datacube.utils.geometry.CRS(shapes.crs_wkt) for shape in shapes: geom = datacube.utils.geometry.Geometry(shape['geometry'], crs=crs) yield geom, shape['properties'] shape_file = os.path.expanduser('SA_2016_edited_3577.shp') shapes = list(get_shapes(shape_file)) start_year, end_year = 1987, 2018 time_range = (str(start_year), str(end_year)) time_range def fc_summary(data): fc = data[['BS', 'PV', 'NPV']].sum(dim=('x', 'y')) fc_sum = fc.to_array('variable').sum(dim='variable') area = fc * (25 * 25 / 1_000_000) area = area.rename({'BS': 'BS_area', 'PV': 'PV_area', 'NPV': 'NPV_area'}) for da in area.data_vars.values(): da.attrs['units'] = 'km2' fc = fc * 100 / fc_sum for da in fc.data_vars.values(): da.attrs['units'] = '%' fc = fc.merge(area) return fc def keepna(a, dim=None, thresh=None): if type(a) is xr.Dataset: return a.apply(keepna, keep_attrs=True, dim=dim, thresh=thresh) keep_dim = [] if dim is None else [dim] dims = [d for d in a.dims if d not in keep_dim] if thresh is None: keep = numpy.isfinite(a).sum(dim=dims) > 0 else: keep = numpy.isfinite(a).sum(dim=dims) >= thresh return a.where(keep, other=numpy.nan) def plot_stacked(daily_data, catchment_id, show=True): if not show: plt.ioff() fig,ax = plt.subplots(figsize=(10,5)) ax.stackplot(daily_data.time.data, daily_data.BS, daily_data.NPV, daily_data.PV, colors = ['tan','olive','darkolivegreen',], labels=['BS','NPV','PV',]) plt.legend(loc='upper center', ncol = 3) plt.title(f'FC Components: Catchment ID {catchment_id}', size=12) plt.ylabel('Percentage (%)', size=12) #Set Y label plt.xlabel('Date', size=12) #Set X label plt.savefig(f'/short/v10/adh547/abs_fc_sa2/{catchment_id}_monthly_plot.png'); plt.close(fig) # Turn interactive back on if not show: plt.show() # ### Process the query # For each year and polygon query the product, apply the gemotry mask and compute the frational cover stats # # Using `client.compute()` lets us use the monthly results in calculating the annual results at the same time. # Use this list instead of shapes to just the big outback South Australian area s2 = [(g,p) for g, p in shapes if str(p['SA2_MAIN16']) == '406021141'] # If we have enough resources, we can start the query and calculation of the next year's data while the previous is still being calculated. `by_slice=False` will be faster, but use more memory. # # For larger areas `by_slice` will need to be `True`, so that the compute cluster does not become overwhelmed. # # If you get the error: # > `distributed.nanny - WARNING - Worker exceeded 95% memory budget. 
Restarting` # # then you will need to set `by_slice=True` by_slice=True for geometry, properties in s2: catchment_id = str(properties['SA2_MAIN16']) print(f"Catchment ID: {catchment_id}, size: {properties['AREASQKM16']}km^2, time: {time_range}") monthly_values = [] annual_values = [] mask = None for sub_time_range in year_splitter(time_range[0], time_range[-1]): print(f' lazy loading {sub_time_range}...') data = fc_land_only.load(dc, dask_chunks=chunk_size, time=sub_time_range, geopolygon=geometry) print(f' lazy loaded {sub_time_range}') if mask is None: mask = geometry_mask([geometry], data.geobox, invert=True, chunks=data.chunks) data = data.where(mask) data = data.resample(time='1MS').mean(dim='time', skipna=True) data = keepna(data, dim='time', thresh=0.9*int(mask.sum())) monthly_data = fc_summary(data) annual_data = monthly_data.resample(time='1YS').mean(dim='time', skipna=True) print(f" calculating for {dict(monthly_data.sizes)}") monthly_data, annual_data = client.compute([monthly_data, annual_data], sync=by_slice) print(" compute submitted") monthly_values.append(monthly_data) annual_values.append(annual_data) if not by_slice: print(" all years queried, hard load data") monthly_values = client.gather(monthly_values) annual_values = client.gather(annual_values) monthly_values = xr.concat(monthly_values, dim='time').dropna(dim='time') plot_stacked(monthly_values, catchment_id, show=False) annual_values = xr.concat(annual_values, dim='time').dropna(dim='time') print(" all data loaded, save to csv") monthly_values.to_dataframe().to_csv(f"/short/v10/adh547/abs_fc_sa2/{catchment_id}_monthly.csv") annual_values.to_dataframe().to_csv(f"/short/v10/adh547/abs_fc_sa2/{catchment_id}_annual.csv") print(f" Catchment {catchment_id} done") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import random, math n = int(input('请输入一个大于0的整数,回车结束。')) m = int(input('请输入一个大于0的整数,回车结束。')) k = int(input('请输入一个大于m的整数,回车结束。')) i = 0 sum = 0 total = 0 while i < n: i += 1 number = random.randint(m, k) a = math.log(number) b = 1/a sum += a total += b print(sum) print(total) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # --- # layout: post # title: "Monte-Carlo Integration made easy" # desc: "A practical Python explanation of Monte-Carlo integration" # long_desc: "A simple tutorial on Monte Carlo integration using Python. In this article, we'll cover how MC integration works, when to use MC integration and also how to use it efficiently via importance sampling, all in easy to read Python." # date: ###DATE # categories: [tutorial] # tags: [statistics] # loc: ###LOC # permalink: ###LINK # redirect_from: "/montecarlo" # # math: true # --- # Integrating a function is tricky. A lot of the time, the math is beyond us. Or beyond me, at the very least, and so I turn to my computer, placing the burden on its silent, silicon shoulders. And yet this isn't the end of it, because there are a host of ways to perform numerical integration. # # Do we want the simple rectangle rule? The superior trapezoidal rule? Simpson's rule? Do we want to adaptively sample? How many dimensions is this in anyway - 1D, 2D, 3D... 100D? 
# # Unfortunately, every algorithm listed above falls over at higher dimensionality, simply because most of them are based off a grid. If you have a 100 points in a grid, for a 1D integral, thats easy, 100 points. For a 2D grid, well now its 10 thousand cells. For a 3D grid, thats a million voxels. This is exponential scaling. This is bad news. # # So instead we turn to the amazing algorithm of Monte-Carlo integration. Monte-Carlo here means its based off random numbers (yes, I'm glossing over a lot), and so we perform Monte-Carlo integration essentially by just taking the average of our function after evaluating it at some random points. # # Let's just illustrate this with an example, starting with Simpson's rule. Let's integrate the super simple function: # # $$ \int_0^{1.5\pi} \sin(x) \ dx $$ # + tags=["remove"] # Remove from base import * plt.rcParams['axes.xmargin'] = 0 plt.rcParams['axes.ymargin'] = 0 plt.rcParams['axes.prop_cycle'] = ( cycler(color=['#50d460', '#dbce1a', '#F77F00', '#FCBF49', '#EAE2B7']) + cycler(linestyle=['-', '--', ':', '-.', '-'])) # + import numpy as np import matplotlib.pyplot as plt from scipy.integrate import simps def fn(x): return np.sin(x) + 1 xs = np.linspace(0, 1.5 * np.pi, 100) ys = fn(xs) area = simps(ys, x=xs) plt.plot(xs, ys, label="Function") plt.fill_between(xs, 0, ys, alpha=0.1) plt.text(1, 0.75, f"Area from Simps is {area:0.3f}", fontsize=12) plt.xlabel("x"), plt.legend(); plt.ylim(0, 2.2); ###REMOVE # - # Great, so how would we use Monte-Carlo integration to get another esimtate? # + tags=["remove"] plt.rcParams['axes.prop_cycle'] = ( cycler(color=['#3ccfb4', '#dbce1a', '#F77F00', '#FCBF49', '#EAE2B7']) + cycler(linestyle=['-', '--', ':', '-.', '-'])) # + width = 1.5 * np.pi - 0 # The width from 0 to 1.5pi samples = np.random.uniform(low=0, high=width, size=1000000) mc_area = fn(samples).mean() * width plt.plot(xs, ys, label="Function") plt.fill_between(xs, 0, ys, alpha=0.1) plt.text(1, 0.75, f"Area from Simps is {area:0.3f}", fontsize=12) plt.text(1, 0.5, f"Area from MC Integration is {mc_area:0.3f}", fontsize=12) plt.xlabel("x"), plt.legend(); plt.ylim(0, 2.2); ###REMOVE # - # Alright, so let's dig into this a bit: # # 1. Why does this work? # 2. What is `width`? # 3. Why did I have to ask for a million samples!?!? # # Lets start with \#1: **How does this work?** # # Monte-Carlo integration is all about that [Law of Large Numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers). To summarise the Wiki page, the LLN states that if you do an experiment over and over, the average of your experiment should converge to the expected value. Our experiment here is "sampling the function (uniformly)", so the LLN says if we keep sampling it, the average result should converge to the mean of the function. This should be intuitive - if you roll a fair 6-sided die a lot and take an average, you'd expect that you'd get around the same amount of each number, which would give you an average of 3.5. # # Let's merge in **What is `width`** now. If we have the average of a function over some arbitrary $x$-domain, to get the area we need to factor in how big that $x$-domain is. We are essentially finding the area of a rectangle `width` wide, with an average height given by our samples. # # If we want to be more formal about this, what we are doing is combining both our original function # # $$ f(x) = \sin(x), $$ # # and the probability density function that describes how we draw our samples. 
In our case, this function is - in English - uniformly between $0$ and $1.5\pi$, and in mathematics: # # $$p(x) = # \begin{cases} # \frac{1}{1.5\pi}, & \text{if}\ 0 < x < 1.5\pi \\ # 0, & \text{otherwise} # \end{cases}$$ # # The "width" comes in to our final result when you add the probability in to our equation: # # $$\begin{align}\int_0^{1.5\pi} \sin(x)\ dx &= \int_0^{1.5\pi} \sin(x)\ \ \frac{1.5\pi}{1.5\pi} \ dx\\ # &= \int_0^{1.5\pi} 1.5\pi\sin(x) \ \ p(x) \ dx # \end{align}$$ # # Sorry for the math, but hopefully you can see that if we separate the equation so that we can get our sample function on the right, the `width` factor comes out naturally. Conceptually, it's easier to think of it using the rectangle analogy above, but that doesn't generalise too well. # # Finally, **why did we need so many samples?** # # The answer is that I wanted to make sure it agreed very well with the result from Simpsons' rule. Monte-Carlo integration has uncertainty, but you can quantify that: # # $$ \text{error} = \frac{\sigma(x)}{\sqrt{N}}$$ # # where $\sigma$ is the standard deviation, $x$ is what we average (so really our samples times our width), and $N$ is the number of points. For us, the plot should really look like this: # + tags=["remove"] plt.rcParams['axes.prop_cycle'] = ( cycler(color=['#1ca7e8', '#dbce1a', '#F77F00', '#FCBF49', '#EAE2B7']) + cycler(linestyle=['-', '--', ':', '-.', '-'])) # + error = np.std(samples * width) / np.sqrt(samples.size) plt.plot(xs, ys, label="Function") plt.fill_between(xs, 0, ys, alpha=0.1) plt.text(0.7, 0.75, f"Area from Simps is {area:0.3f}", fontsize=12) plt.text(0.7, 0.5, f"Area from MC Integration is {mc_area:0.3f}±{error:0.3f}", fontsize=12) plt.xlabel("x"), plt.legend(); plt.ylim(0, 2.2); ###REMOVE # - # Of course, Simpsons' rule has error too, let's not forget that! # # ## Importance sampling! # # Importance sampling is the way that we can improve the accuracy of our estimates. It's conceptually simple - in the plot above, we could get better accuracy if we estimated the peak between 1 and 2 more throughly than if we estimated the area after 4 more thoroughly. So why are we uniformly sampling our distribution, when some areas are much more important?? # # For a super easy example, lets change the function. We now care about # # $$\int_{-\infty}^{\infty} \frac{1 + x^2}{\sqrt{2\pi}} e^{-x^2/2}\ dx.$$ # # Uniformly sampling this would be crazy - how can we sample from $-\infty$ to $\infty$??? Instead, what we do is we look at the function and we separate it out. We say, "Hey, this looks like a polynomial times a normal distribution". Or more formally: # # $$\int_{-\infty}^{\infty} (1+x^2) \ \ \mathcal{N}(0, 1)\ dx,$$ # # where $\mathcal{N}(0,1)$ is a normal distribution, centered at 0, with a width of 1. And just like before, we now have two parts - the first part to calculate, and the second part we can sample from. 
To do this, and then create a plot showing each sample, is simple: # + tags=["remove"] plt.rcParams['axes.prop_cycle'] = ( cycler(color=['#32a840', '#dbce1a', '#F77F00', '#FCBF49', '#EAE2B7']) + cycler(linestyle=['-', '--', ':', '-.', '-'])) np.random.seed(2) # + # MC integration here samples_2 = np.random.normal(size=1000) fn_2 = 1 + samples_2 ** 2 area_2 = fn_2.mean() error_2 = np.std(fn_2) / np.sqrt(fn_2.size) # Simps integration here def fn2(xs): return (1 + xs**2) * np.exp(-(xs**2)/2) / np.sqrt(2 * np.pi) xs = np.linspace(-5, 5, 200) ys = fn2(xs) area_simps = simps(ys, x=xs) # And of course, plotting here plt.plot(xs, ys, label="Function", lw=3) plt.fill_between(xs, 0, ys, alpha=0.1) plt.text(-4.8, 0.5, f"MC Area={area_2:0.2f}±{error_2:0.2f}", fontsize=12) plt.text(-4.8, 0.43, f"Simps Area={area_simps:0.2f}", fontsize=12) plt.plot((samples_2, samples_2), ([0 for i in samples_2], [fn2(i) for i in samples_2]), c='#1c93e8', lw=0.2, ls='-', zorder=-1, alpha=0.5) plt.xlabel("x"), plt.legend(); plt.ylim(0, 0.55); ###REMOVE # - # !!!main # # Where each blue horiztonal line shows us one specific sample. # You can see that for us to get close to Simpons' rule we need far less samples, because we're sampling more efficiently. # # When using importance sampling, note that you don't need to have a probability function you can sample with *perfectly* in your equation. You can put any PDF in (just like we did with the uniform distribution), and simply divide the original equation by that PDF. Imaging if we changed our function from above just a tiny bit: # # $$\int_{-\infty}^{\infty} \frac{1 + x^2}{\sqrt{2\pi}} e^{-x^4/4}\ dx.$$ # # + def fn3(xs): return (1 + xs**2) * np.exp(-(xs**4)/4) / np.sqrt(2 * np.pi) ys3 = fn3(xs) plt.plot(xs, ys, label="Original") plt.plot(xs, ys3, label="Modified to be $x^4/4$") plt.legend(); plt.xlabel("x"); plt.ylim(0, 0.65); ###REMOVE # - # That's fine! We can still use that normal distribution from before, we just add it into the equation. Normally, your function will not be nice and analytic like the one we've tried to use, so we can state in general: # # $$\begin{align} \text{integral} &= \int_{-\infty}^{\infty} f(x) dx\\ # &= \frac{f(x)}{p(x)} \ p(x) \ dx\\ # \end{align}$$ # # where $p(x)$ in our example will be the normal distribution. # + tags=["remove"] plt.rcParams['axes.prop_cycle'] = ( cycler(color=['#e81c1c', '#e89a1c', '#F77F00', '#FCBF49', '#EAE2B7']) + cycler(linestyle=['-', '--', ':', '-.', '-'])) np.random.seed(2) # + from scipy.stats import norm # MC integration here x_samp = norm.rvs(size=2000) p_of_x = norm.pdf(x_samp) vals = fn3(x_samp) / p_of_x area = vals.mean() error = np.std(vals) / np.sqrt(vals.size) # Simps integration here xs = np.linspace(-5, 5, 200) ys = fn3(xs) area_simps = simps(ys, x=xs) # And of course, plotting here plt.plot(xs, ys, label="Function", lw=3) plt.fill_between(xs, 0, ys, alpha=0.1) plt.text(-4.8, 0.5, f"MC Area={area:0.2f}±{error:0.2f}", fontsize=12) plt.text(-4.8, 0.43, f"Simps Area={area_simps:0.2f}", fontsize=12) plt.plot((x_samp, x_samp), ([0 for i in x_samp], [fn3(i) for i in x_samp]), c='#e89a1c', lw=0.2, ls='-', zorder=-1, alpha=0.3) plt.xlabel("x"), plt.legend(); plt.ylim(0, 0.65); ###REMOVE # - # So hopefully you can see how this would be useful. To summarise, the general process for Monte-Carlo integration is: # # 1. Have your function to integrate. 1D, 2D, 3D, doesn't matter. # 2. Pick a distribution that looks *something* like your function, or just use a Uniform distribution # 3. 
Sample points from that distribution # 4. Get the function at those points, and divide by $p(x)$. # 5. Take the mean for the estimate, and the standard deviation / root(N) for the error. # 6. Celebrate # # Finally, obviously I've kept the examples here to 1D for simplicity, but I really should stress that MC integration shines in higher dimensions. If you have a 10 dimensional function that looks roughly Gaussian (like a normal), you can sample from a 10 dimensional normal, and apply all the same steps above, nothing at all changes. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: tpot # language: python # name: tpot # --- import os import math import pandas as pd import numpy as np import seaborn as sns from pandas import datetime from matplotlib import pyplot as plt from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error from sklearn.metrics import r2_score # + ## convert one to multiple series def lag_ahead_series(data, n_in=1, n_out=1, n_vars = 1,dropnan=True): df = pd.DataFrame(data) cols, names = list(), list() # input sequence (t-n, ... t-1) for j in range(n_vars): for i in range(n_in, 0, -1): cols.append(df.iloc[:,j].shift(i)) names.append('{}{}(t-{})'.format(df.columns[0],j+1, i)) # forecast sequence (t+1, ... t+n) for j in range(n_vars): for i in range(0, n_out): cols.append(df.iloc[:,j].shift(-i)) names += [('{}{}(t+{})'.format(df.columns[0],j+1, i)) ] # put it all together agg = pd.concat(cols, axis=1) agg.columns = names #drop rows with NaN values if dropnan: agg.dropna(inplace=True) return agg ## Distribution plot funciton def distri_plot(df): f, axes = plt.subplots(3, 3, figsize=(15, 11), sharex=False) for idx, col_name in enumerate(df.columns, 0): idx = int(idx) ## jump to plotting energy if(col_name == "rain"): sns.distplot(df["energy"],ax=axes[2,2]) return sns.distplot(df[col_name],ax=axes[idx//3,idx%3]) ## plot plt.tight_layout() ## Scatter plot function def scatter_plot(df): f, axes = plt.subplots(4, 2, figsize=(15, 11), sharex=False) for idx, col_name in enumerate(df.columns, 0): idx = int(idx) if(idx >= 8): return ## jump to plotting energy sns.scatterplot(x= col_name,y = "energy", data = df, ax=axes[idx//2,idx%2]) ## plot plt.tight_layout() ## plot dataframe creation def plot_df(arr, name): plot_df = pd.DataFrame() i = 0 for row in arr: plot_df.insert(i, "{}_{}".format(name,i), row, True) i += 1 return plot_df def get_eval(y, yhat): print("MSE: {}".format(mean_squared_error(y,yhat))) print("MAE: {}".format(mean_absolute_error(y,yhat))) print("r2_score: {}".format(r2_score(y,yhat, multioutput = "variance_weighted"))) ## feature/ target construction fucntion with lag variable def feature_target_construct(df, load_lag, target_ahead, temp_ahead, wd_on = False): tempcols = ['temperature','temperature.1','temperature.2','temperature.3'] load = df['TotalLoad'] f_temp = pd.DataFrame() for col in tempcols: temp = lag_ahead_series(df[col], n_in = 0, n_out = temp_ahead + 1, n_vars = 1, dropnan = True) f_temp = pd.concat([f_temp, temp], axis = 1) t = lag_ahead_series(load, n_in = 0, n_out = target_ahead + 1, n_vars = 1, dropnan = True) if(target_ahead > temp_ahead): num_ahead = target_ahead f_temp = f_temp.iloc[load_lag:-num_ahead + temp_ahead,:] t = t.iloc[load_lag:,:] elif(target_ahead < temp_ahead): num_ahead = temp_ahead f_temp = f_temp.iloc[load_lag:,:] t = t.iloc[load_lag:-num_ahead + target_ahead,:] else: 
num_ahead = temp_ahead f_temp = f_temp.iloc[load_lag:,:] t = t.iloc[load_lag:,:] ## load lag series f_load = lag_ahead_series(load, n_in = load_lag, n_out = 0, n_vars = 1, dropnan = True).iloc[:-num_ahead,:] ## feature concatanation if wd_on: weekday = pd.get_dummies(df.iloc[load_lag:-num_ahead,-1]) f = pd.concat([weekday, f_temp, f_load], axis = 1) else: f = pd.concat([f_temp, f_load], axis = 1) ## target load values return f, t # - train = pd.read_csv("../data/train_elia.csv", index_col= 'time') test = pd.read_csv("../data/test_elia.csv", index_col= 'time') # load_lag, target_ahead , temp_ahead, weekday_on f_train, t_train = feature_target_construct(train, 4, 4, 4, True) f_test, t_test = feature_target_construct(test, 4, 4, 4, True) # ### TPOT: 6 hour ahead, 15 mins # + from tpot import TPOTRegressor from mmm import mul_reg_config_dict from sklearn.model_selection import TimeSeriesSplit #import mul_config as mc train_X, val_X, train_y, val_y = train_test_split(f_train, t_train, train_size = 0.1, shuffle = False) tpot_reg = TPOTRegressor(generations=20, population_size=60, offspring_size=None, mutation_rate=0.9, crossover_rate=0.1, scoring='neg_mean_squared_error', cv=TimeSeriesSplit(n_splits = 5), subsample=1.0, n_jobs=4, max_time_mins=None, max_eval_time_mins=5, random_state=123, config_dict=mul_reg_config_dict, template=None, warm_start=False, memory=None, use_dask=False, periodic_checkpoint_folder=None, early_stop=None, verbosity=0, disable_update_check=False) tpot_reg.fit(train_X , train_y) # - #val_X, val_y = train.iloc[500:600,:8], ahead_e.iloc[500:600,:] yhat = tpot_reg.predict(f_test) # ### Result Evaluation get_eval(t_test, yhat) # + ## assignment real = t_test.to_numpy() guess = yhat real = real[1:2,:] guess = guess[1:2,:] rpdf = plot_df(real, "observed") gpdf = plot_df(guess, "prediction") #plot ax = plt.gca() gpdf.plot(figsize=(25,10), colormap = 'plasma',style='--x',legend = True, ax = ax) rpdf.plot(figsize=(25,10), color = 'g',style ='-o',legend = True, ax = ax, lw = 4) ax.legend(frameon=False, loc='upper right', ncol=6, prop={'size': 16}) plt.show() # - # ### TPOT: 24 hour ahead, half-hourly # + from tpot import TPOTRegressor from tpot_mulr import mul_reg_config_dict from sklearn.model_selection import train_test_split #import mul_config as mc train_X, val_X, train_y, val_y = train_test_split(ahead_w, ahead_e, train_size = 0.05, test_size = 0.95, random_state = 123) tpot_reg = TPOTRegressor(generations=30, population_size=60, n_jobs=4, verbosity=2, random_state=123, subsample= 0.8, config_dict=mul_reg_config_dict) tpot_reg.fit(train_X , train_y) # - yhat = tpot_reg.predict(test_X) mean_squared_error(test_y, yhat) mean_absolute_error(test_y, yhat) r2_score(test_y, yhat) # + ## assignment real = test_y.to_numpy() guess = yhat real = real[50:51,:49] guess = guess[50:51,:49] rpdf = plot_df(real, "observed") gpdf = plot_df(guess, "prediction") #plot ax = plt.gca() gpdf.plot(figsize=(25,10), colormap = 'plasma',style='--',legend = True, ax = ax) rpdf.plot(figsize=(25,10), color = 'g',style ='-o',legend = True, ax = ax, lw = 4) ax.legend(frameon=False, loc='upper right', ncol=6, prop={'size': 16}) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Beginning Programming in Python # # ### Lists/Strings # #### CSE20 - Spring 2021 # # # Interactive 
Slides: [https://tinyurl.com/cse20-spr21-lists-strings](https://tinyurl.com/cse20-spr21-lists-strings) # + [markdown] slideshow={"slide_type": "slide"} # # Lists # # - Commonly in programming we want to keep a collection of values rather than a single value. # - In Python, one way this can be done is by using `list`s # - A `list` can store between 0 and 9,223,372,036,854,775,807 items(though a list that large would probably crash your computer). # - A `list` can store different types in its list. # - A `list` is a **mutable** collection, meaning we can change the contents of the list # - Typical naming convention for `list` variables is to name them in the plural, i.e. `items`, `values`, or `cars` # + [markdown] slideshow={"slide_type": "slide"} # # Lists: Instantiation (Creation) # - There are a few ways to instantiate a list. The way we'll go over today is using square brackets `[]` # - some_numbers = [1, 2, 3, 4] some_letters = ["a", "b", "c"] some_numbers_and_letters = [1, "b", 3] empty_list = [] # + [markdown] slideshow={"slide_type": "slide"} # # Lists: Access # - To access/retrieve the values stored in a `list` variable we use square brackets `[]` # - To retrieve a single value we use an index (starting from the left 0, or from the right with -1), i.e. `list_variable[0]` # - To retrieve multiple items we use a `slice`. A `slice` is denoted using a colon `:`, and bounding indices can be placed on either side of the colon. Indices in `slice` notation are on a half closed interval where `list_variable[start:end]` operates on the interval `[start, end)` # - The contents of a list can be changed by assigning a new value at a index. `list_variable[idx] = new_value` # + [markdown] slideshow={"slide_type": "slide"} # # Lists: Access # - some_values = [1, "b", 3, "d"] print(some_values[0]) print(some_values[-1]) print(some_values[1:]) print(some_values[:2]) print(some_values[1:3]) print(some_values[:]) # + [markdown] slideshow={"slide_type": "slide"} # # Lists: Updates # - some_values = [1, "b", 3, "d"] print(some_values) some_values[0] = "a" print(some_values) some_values[0:2] = [1, 2] print(some_values) # + [markdown] slideshow={"slide_type": "slide"} # # `list` Methods # - `list`'s are considered `object`s, we'll go over `object`s in more detail when we go over Object Oriented Programming (OOP). # - For now you need to know that objects can have functions called `methods`, which can be "called" by using the `list_variable.method_name()` notation. # + [markdown] slideshow={"slide_type": "slide"} # # `list` Methods: `append()` # - `append()` adds a value to the end of the `list` # - some_values = [] some_values.append("Howdy") print(some_values) some_values.append("There") print(some_values) some_values.append("Friend") print(some_values) # + [markdown] slideshow={"slide_type": "slide"} # # `list` Methods: `pop()` # - `pop()` removes a value from the end of the `list` # - some_values = ["Howdy", "There", "Friend"] print(some_values) last_item = some_values.pop() print("some_values: ", some_values) print("last_item: ", last_item) # + [markdown] slideshow={"slide_type": "slide"} # # `list` Methods: `remove()` # - `remove()` removes the first value in the `list` that matches the given argument # - some_values = ["Howdy", "There", "Friend"] print(some_values) some_values.remove("There") print("some_values: ", some_values) # + [markdown] slideshow={"slide_type": "slide"} # # `list` Methods: `index()` # - `index()` returns the "index" you would need to use to get the get the given argument. 
# - some_values = ["Howdy", "There", "Friend"] print(some_values) there_idx = some_values.index("Friend") print("there_idx: ", there_idx) print("some_values[there_idx]: ", some_values[there_idx]) # + [markdown] slideshow={"slide_type": "slide"} # # `list` Methods: `count()` # - `count()` returns the number of times a given argument occurs in a `list` # - some_values = ["Howdy", "There", "Friend"] print("Howdy occurs", some_values.count("Howdy"), "time(s)") some_values.append("Howdy") print("Howdy occurs", some_values.count("Howdy"), "time(s)") print(some_values) # + [markdown] slideshow={"slide_type": "slide"} # # `list` Methods: `reverse()` # - `reverse()` reverses the order of the elements in the list # - some_values = ["Howdy", "There", "Friend", "Hello"] print("some_values: ", some_values) some_values.reverse() print("some_values: ", some_values) # + [markdown] slideshow={"slide_type": "slide"} # # `list` Methods: `extend()` or `+` # - `+` like with strings will concatenate two lists together # - `extend()` concatenates two lists, but does it "in-place". Its like using `+=` for concatenation. # + slideshow={"slide_type": "-"} some_values = ["Howdy", "There", "Friend"] other_values = ["How", "Are", "You"] concat = other_values + some_values print(concat) some_values.extend(other_values) print(some_values) some_values.extend(other_values) print(some_values) # + [markdown] slideshow={"slide_type": "slide"} # # Built-in Functions That are Compatible With `list`s # - `len()` will return the length(number of elements in) of the `list` # - `max()` will return the maximum element in the list # - `min()` will return the minimum element in the list # - `sum()` will return the sum of all the elements in the list # + [markdown] slideshow={"slide_type": "slide"} # # Built-in Functions That are Compatible With `list`s # + slideshow={"slide_type": "-"} some_values = [1, 2, 3, 4, 5] print(some_values) print("There are", len(some_values), "values in the list") print("The largest value is", max(some_values)) print("The smallest value is", min(some_values)) print("The sum of the values in the list is", sum(some_values)) # + [markdown] slideshow={"slide_type": "slide"} # # Strings # # - Strings are like a list of characters but are different in a couple important ways: # - They are **immutable** (can't be changed) # - They don't support methods that imply mutability like `pop()`, `extend()`, `reverse()`, etc. # - Some helpful methods not apart of list include `.lower()` and `.upper()` # - `split()` can break a string into a list of strings, splitting the string based on the input argument # - More info in the string [documentation](https://docs.python.org/3/library/stdtypes.html#string-methods) # + [markdown] slideshow={"slide_type": "slide"} # # Strings # - class_name = "CSE20E40EABCjdfhsjkdfhkdjsfhskdjfhksjdhfkjlsdahf" print(class_name[0]) print(class_name.index("E")) print(class_name.count("2")) print(class_name.lower()) print(class_name.split("h")) # + [markdown] slideshow={"slide_type": "slide"} # # Membership Operator `in` `not in` # # - You can test whether or not a `list` or string contains a value by using `in` and `not in` # + some_numbers = [1, 2, 3] contains_one = 4 in some_numbers print(contains_one) class_name = "CSE20" contains_cse = "C20" in class_name print(contains_cse) # + [markdown] slideshow={"slide_type": "slide"} # # What's Due Next? 
# # - zybooks Chapter 3 due April 18th 11:59 PM # - Assignment 2 due April 25th 11:59 PM # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Fairness Analysis of NamSor's Gender API Endpoint using Aequitas # # Part I: Fairness of Gender API Endpoint by Gender and Ethnicity # + import pandas as pd import seaborn as sns from aequitas.group import Group from aequitas.bias import Bias from aequitas.fairness import Fairness from aequitas.plotting import Plot import pdfkit as pdf # import warnings; warnings.simplefilter('ignore') # %matplotlib inline # - df = pd.read_csv("data/compas_ethnicity_predictions.csv") df.head() # Non String columns will lead to problems later so we have to find out if there are any non_attr_cols = ['id', 'model_id', 'entity_id', 'score', 'label_value', 'rank_abs', 'rank_pct'] attr_cols = df.columns[~df.columns.isin(non_attr_cols)] # index of the columns that are df.columns[(df.dtypes != object) & (df.dtypes != str) & (df.columns.isin(attr_cols))] # And delete them. df = df.drop(['Unnamed: 0'], axis=1) df.head() # we only want to look at ethnicity here, since that is what we calculated label_value for df = df.drop(['race_pred', 'first', 'last'], axis=1) # if we don't drop the tables, Aequitas thinks these are attributes by which groups should be separated # now we remove groups that are too small df = df[df.race.isin(['African-American', 'Caucasian', 'Hispanic'])] df.shape df["combined_attribute"] = df["sex"] + " " + df["race"] df.head() # ## Group Metrics # First, let's calculate the group metrics like tp, fn, and so on. For determining which score is considered to be a prediction of the classification being correct we will use a score treshold that is the maximum score, or minimum probabilityCalibrated. We do this, because NamSor does make a prediction starting at that score and we want to look at the raw prediction first. We will look at different scores later on to see which score produces which fairness. 
t = df[["score"]].max()[0] g = Group() xtab, _ = g.get_crosstabs(df, attr_cols=["sex", "race", "combined_attribute"], score_thresholds= {'score': [t]}) absolute_metrics = g.list_absolute_metrics(xtab) df[["score"]].max()[0] xtab[[col for col in xtab.columns if col not in absolute_metrics]] def _color_red_or_green_or_orange(val): # https://stackoverflow.com/questions/28075699/coloring-cells-in-pandas try: if(val > 0.8 and val < 1.2): color = 'green' elif(val < 0.8): color = 'orange' else: color = 'red' return 'color: %s' % color except: return 'color: black' xtab.columns xtab #[['attribute_name', 'attribute_value'] + absolute_metrics] # ## Disparities of Group Metrics #aq_palette = sns.diverging_palette(240, 10, n=9) #sns.diverging_palette(225, 35, n=2) b = Bias() # + bdf = b.get_disparity_predefined_groups(xtab, original_df=df, ref_groups_dict={'sex':'Male', 'race':'Caucasian', 'combined_attribute':'Male Caucasian'}) bdf_overview = bdf[["attribute_name", "attribute_value", "precision_disparity", "fdr_disparity"]] bdf_overview.style.applymap(_color_red_or_green_or_orange) # # - # ## Fairness Metrics f = Fairness() fdf = f.get_group_value_fairness(bdf) def _color_red_or_green(val): # https://stackoverflow.com/questions/28075699/coloring-cells-in-pandas color = 'green' if val else 'red' return 'color: %s' % color fdf[["attribute_name", "attribute_value", "Impact Parity", "Precision Parity", "FDR Parity", "FPR Parity", "TPR Parity", "Equalized Odds", "TypeI Parity"]].style.applymap(_color_red_or_green) gaf = f.get_group_attribute_fairness(fdf.fillna(True)) gaf[["attribute_name", "Impact Parity", "Precision Parity", "FDR Parity", "FPR Parity", "TPR Parity", "Equalized Odds", "TypeI Parity"]].style.applymap(_color_red_or_green) gof = f.get_overall_fairness(fdf) gof xtab[['attribute_value'] + absolute_metrics].round(2).to_latex('graphics/group_metrics_maxthreshold_ethnicity.tex') fdf[["attribute_value", "ppr_disparity", "pprev_disparity", "precision_disparity", "fdr_disparity", "fpr_disparity", "tpr_disparity"]].round(2).to_latex('graphics/disparities_maxthreshold_ethnicity.tex') fdf[["attribute_value", "Impact Parity", "Precision Parity", "FPR Parity", "TPR Parity", "Equalized Odds", "TypeI Parity", "Unsupervised Fairness"]].to_latex('graphics/fairness_maxthreshold_ethnicity.tex') # ### Analysis # We find that: # * PPREV or **Impact Parity** # * PPV- or Precision- or **Predictive Parity** only on a high level between Men and Women or on subgroup level between Caucasian men and Caucasian women # * TPR- or Recall Parity or **Equal Opportunity** # * **Equalized Odds** (TPR and FPR) # * **Predictive Equality** (FPR) # * **Equal Opportunity** (TPR) # # There is no: # * PPV- or Precision- or **Predictive Parity** (the colors are confusing here - if there is PPV Parity there has to be FDR Parity also) for groups other than Caucasian # * Because of Equal Opportunity and limited Predictive Equality there is also limited **Overall Accuracy Equality** # * Because of PPV/Predictive Parity and limited FPR/Predictive Equality there is also limited **Type I** # * PPR- or Demographic- or **Statistical Parity** # * Because of Impact Parity but no Statistical Parity, there is no **Unsupervised Fairness**. # These last two depend on a representative data set, which we have not. However # # The unfairness does not negatively impact the concerned groups but if looking at the disparities, one can see that all groups except Caucasian women obtain **more precise** results than Caucasian men, the reference group. 
Therefore, one could say that the ethnicity API endpoint is actually biased against Caucasians. It seems to be harder to infer the ethnicity of Caucasians. # # Not measured: # * FOR Parity # * Type II Parity (FNR/Equal Opportunity and FOR Parity) # * Supervised Fairness (Type I and II Parity) # ## Impact of Treshold chosen # Now we will check whether chosing different tresholds results in different fairness. # + from numpy import arange disparities_res = pd.DataFrame({'model_id' : []}) fairness_res = pd.DataFrame({'model_id' : []}) overall_fairness_res = pd.DataFrame({'model_id' : []}) all_tables = [] complete_tables = [] g = Group() tresholds = arange(0.1, 0.5, 0.1) for t in tresholds: xtab, _ = g.get_crosstabs(df, attr_cols=["combined_attribute"], score_thresholds= {'score': [t]}) absolute_metrics = g.list_absolute_metrics(xtab) b = Bias() bdf = b.get_disparity_predefined_groups(xtab, original_df=df, ref_groups_dict={'combined_attribute':'Male Caucasian'}, input_group_metrics=["ppr", "tpr", "fpr", "precision", "tnr", "npv", "pprev", "for", "fdr", "fnr"], check_significance=False, mask_significance=False) f = Fairness() fdf = f.get_group_value_fairness(bdf) gaf = f.get_group_attribute_fairness(fdf) # save results fdf['model_id'] = t.round(2) fairness_by_group = fdf[['model_id', 'attribute_value'] + f.list_parities(fdf)] fairness_res = fairness_res.append(fairness_by_group, sort=False) gaf['model_id'] = t.round(2) overall_fairness_res = overall_fairness_res.append(gaf, sort=False) bdf['model_id'] = t.round(2) disparities_res = disparities_res.append(bdf, sort=False) complete_tables.append(bdf) # - disparity_overview = disparities_res[["model_id", "attribute_value", "ppr_disparity", "tpr_disparity", "fpr_disparity", "precision_disparity", "tnr_disparity", "npv_disparity", "pprev_disparity", "for_disparity", "fdr_disparity", "fnr_disparity"]].groupby(["model_id", "attribute_value"]).mean() disparity_overview.style.applymap(_color_red_or_green_or_orange) fairness_overview = fairness_res[["model_id", "attribute_value"] + f.list_parities(fdf)].groupby(["model_id", "attribute_value"]).max() fairness_overview.style.applymap(_color_red_or_green) overall_fairness_res # For score = 0.1 the groups become too small to return any meaningful results. # # We find that: # * PPV- or Precision- or **Predictive Parity** only on a high level between Men and Women or on subgroup level between Caucasian men and Caucasian women # # # Tendencies: # * lower thresholds result in lower **Impact Parity**, # * lower thresholds result in lower TPR- or Recall Parity or **Equal Opportunity**, **Equalized Odds** (TPR and FPR), **Predictive Equality** (FPR), **Equal Opportunity** (TPR), **Overall Accuracy Equality** except for Hispanics. # * lower thresholds result in higher **FOR Parity**, **FNR Parity** and thus **Type II Parity** # # There is no: # * PPV- or Precision- or **Predictive Parity** (the colors are confusing here - if there is PPV Parity there has to be FDR Parity also) for groups other than Caucasian # * **Type I** Parity # * Supervised Fairness # * PPR- or Demographic- or **Statistical Parity** # * Because of Impact Parity but no Statistical Parity, there is no **Unsupervised Fairness**. # These last two are actually Fairness measures on the data set, which can not be given if the data set is not representational of all groups! # # Our observation from before can be confirmed: All groups except Caucasian women obtain **better** results than Caucasian men, the reference group. 
Therefore, one could say that the ethnicity API endpoint is actually biased against Caucasians. disparity_overview.to_html('graphics/disparities_by_threshold_ethnicity.html') fairness_overview.to_html('graphics/fairness_by_threshold_ethnicity.html') overall_fairness_res.to_html('graphics/overall_fairness_by_threshold_ethnicity.html') disparities_res[["model_id", "attribute_value", "pprev_disparity", "npv_disparity", "precision_disparity", "fpr_disparity", "fnr_disparity", "for_disparity"]].round(2).groupby(["model_id", "attribute_value"]).mean() disparities_res[["model_id", "attribute_value", "pprev_disparity", "npv_disparity", "precision_disparity", "fpr_disparity", "fnr_disparity", "for_disparity"]].round(2).groupby(["model_id", "attribute_value"]).mean().to_latex('graphics/disparities_by_threshold_ethnicity_small.tex') disparity_overview.to_latex('graphics/disparities_by_threshold_ethnicity.tex') fairness_overview.to_latex('graphics/fairness_by_threshold_ethnicity.tex') overall_fairness_res.to_latex('graphics/overall_fairness_by_threshold_ethnicity.tex') aqp = Plot() i = 0.1 for table in complete_tables: fdf = f.get_group_value_fairness(table) plot = aqp.plot_fairness_group_all(fdf, ncols=5, metrics = "all") plot.savefig('graphics/ethnicity_endpoint_disparities_{}.pdf'.format(i), format='pdf') i = i + 0.1 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- # # Exercise 1: A Fully Linear Model # # *This is the companion notebook to Exercise 1 in Hogg, Price-Whelan & Leistedt (2020).* # + # Standard library packages import os import sys # Third-party packages import astropy.table as at import corner import matplotlib as mpl import matplotlib.pyplot as plt # %matplotlib inline import numpy as np # Set up a deterministic random number generator: # Note: this requires numpy>=1.17 rnd = np.random.Generator(np.random.PCG64(seed=42)) # - # For this exercise, we will use a quadratic polynomial as our parametric model to both generate some simulated data, and to represent the data when inferring the parameters of the model: # $$ # f(x \,;\, \alpha, \beta, \gamma) = \alpha\, x^2 + \beta \, x + \gamma # $$ # where $(\alpha, \beta, \gamma)$ are the (linear) coefficients of the quadratic polynomial. # # To start off, let's define true parameter values, which we will use to generate the simulated data: true_pars = (-3.21, 2.44, 14.82) K = len(true_pars) # number of parameters # We can now generate our simulated data and store it as an astropy `Table` object for easy writing and printing to latex (for including in the companion paper). We round the values to make it more concise when including the data in the companion article. The ranges used below in the random number generators were chosen arbitrarily. # + N = 4 # number of data points t = at.Table() t['x'] = np.round(np.sort(rnd.uniform(-5, 5, size=N)), decimals=1) sigma_y = np.round(np.sort(rnd.uniform(0.5, 4, size=N)), decimals=1) t['y'] = np.round(rnd.normal(np.poly1d(true_pars)(t['x']), sigma_y), decimals=1) t['sigma_y'] = sigma_y t.write('data1.csv', overwrite=True) # to archive t.write(sys.stdout, format='ascii.latex') # to include in Latex # - # Next, we set up the 2D arrays to represent the various matrices we need to solve the problem. First, the (inverse) covariance tensor of the data, $\mathrm{C}^{-1}$. 
Here, we assume the data are independent, so this matrix is diagonal: Cinv = np.diag(1 / t['sigma_y']**2) # Next, we set up the design matrix $\mathrm{M}$. For this, we use the `numpy` convenience function `numpy.vander`: M = np.vander(t['x'], K) M # Finally, we have to create numpy arrays to specify the parameters of our (Gaussian) prior pdf. Here, we also assume that the prior pdf is a product of independent Gaussians, so the (inverse) variance tensor is also diagonal. The values used here are specified in the companion paper (under Exercise 1): # + mu = np.array([1, 3, 9]) sigmas = np.array([5, 2, 8]) Linv = np.diag(1 / sigmas**2) # - # We now have all of the objects we need to compute the MAP parameter vector and the associated variance tensor of the posterior (Gaussian) pdf. Following the nomenclature and expressions in the companion paper, Ainv = Linv + M.T @ Cinv @ M A = np.linalg.inv(Ainv) a = A @ (Linv @ mu + M.T @ Cinv @ t['y']) a # We now have the mean and variance tensor of the posterior pdf, so we can directly generate posterior samples of our parameters using `numpy.random.multivariate_normal`: samples = rnd.multivariate_normal(a, A, size=4096) # To complete the exercise, we must now plot the data, the MAP model, and the 68% credible region: # + plt.figure(figsize=(5.25, 5)) plt.errorbar(t['x'], t['y'], t['sigma_y'], ls='none', marker='o', zorder=10, label='data') xpred = np.linspace(-5, 5, 128) plt.plot(xpred, a[0] * xpred**2 + a[1] * xpred + a[2], marker='', color='tab:blue', zorder=1, label='MAP model') ypred = (samples[:, 0][None] * xpred[:, None]**2 + samples[:, 1][None] * xpred[:, None] + samples[:, 2][None]) yfill = np.percentile(ypred, q=[16, 84], axis=1) plt.fill_between(xpred, yfill[0], yfill[1], zorder=0, color='tab:blue', alpha=0.3, lw=0, label='68% credible region') plt.xlim(-5, 5) plt.xlabel('$x$') plt.ylabel('$y$') plt.legend(loc='lower center', fontsize=16) plt.tight_layout() plt.savefig('../paper/exercise1.pdf') # - # As a final visualization, we can make a [`corner`](https://corner.readthedocs.io/en/latest/) plot of the samples to visualize projections of the posterior samples we generated: _ = corner.corner(samples, truths=true_pars, labels=[r'$\alpha$', r'$\beta$', r'$\gamma$']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sys sys.path.append("/home/ecbm4040/Final_Project/e4040-2021Fall-Project-SCNN-as6430-as6456-vsk2123/src/") # + id="e2be64ad" import tensorflow as tf import numpy as np import random from tensorflow import keras from tensorflow.keras import utils as np_utils #from keras import utils as np_utils from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, Dropout, BatchNormalization, Softmax from tensorflow.keras import Model from tensorflow.keras.utils import to_categorical from tensorflow.keras.losses import categorical_crossentropy import datetime from time import time from matplotlib import pyplot as plt from keras.preprocessing.image import ImageDataGenerator from tensorflow.python.keras.layers.pooling import GlobalAveragePooling2D # + colab={"base_uri": "https://localhost:8080/"} id="238e6748" outputId="2faa18bd-ca54-44e0-9ae1-0dc01e0f8d82" # Load and prepare data (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() # Scale images to the [0, 1] range 
x_train = x_train.astype("float32") / 255 x_test = x_test.astype("float32") / 255 print("x_train shape:", x_train.shape) print(x_train.shape[0], "train samples") print(x_test.shape[0], "test samples") input_shape = (x_train.shape[1], x_train.shape[2], x_train.shape[3]) output_size=10 # convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train, output_size) y_test = keras.utils.to_categorical(y_test, output_size) print(y_train.shape) print(y_test.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="G_G1nSk_lS0Y" outputId="5f995f6d-bb10-4e06-c979-30b047c48618" # Plot sample images before augmentation for i in range(0,9): plt.subplot(330 + 1 + i) plt.imshow(x_train[i]) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="JghQdsPzlDMV" outputId="131e3091-a1bf-4678-d66f-7a99f1b91348" # Data augmentation code def HSV_perturbations(image): """ Takes an input image and returns it with either the randomly adjusted hue or saturation (or may be makes no HSV change at all) with a probability of 1/3 """ choice=random.randint(1,3) print(choice) image = np.array(image) if choice ==1: return tf.image.random_hue(image, 1/random.randint(1,10)) elif choice ==2: return tf.image.random_saturation(image, 5, 10) else: return image # so as to avoid not change hue for every image datagen = ImageDataGenerator( horizontal_flip=True, width_shift_range=0.1, height_shift_range=0.1, brightness_range=[0.5,1.5] ) datagen.fit(x_train) # Plot sample augmented images for X_batch, y_batch in datagen.flow(x_train, y_train, batch_size=9): for i in range(0, 9): plt.subplot(330 + 1 + i) plt.imshow(X_batch[i].astype(np.uint8)) plt.show() break # + id="356e89ec" # Define CNN model architecture def spatialCNN(): model = Sequential() model.add(Conv2D(96, kernel_size=(5,5),padding="same", input_shape=input_shape)) model.add(MaxPooling2D(pool_size=(3,3),strides=(2,2))) model.add(Conv2D(192, kernel_size=(5,5),padding="same")) model.add(MaxPooling2D(pool_size=(3,3),strides=(2,2))) model.add(Flatten()) model.add(Dense(1024)) model.add(Dense(512)) model.add(Dense(output_size, activation='softmax')) return model # + [markdown] id="Uy5dI4zMgc8D" # # + colab={"base_uri": "https://localhost:8080/"} id="08e3a1dc" outputId="4238f6b6-f706-405d-d544-62459dbfe076" # Compiling and training the model batch_size=128 nb_epochs=50 train_generator = datagen.flow(x_train, y_train, batch_size=batch_size) valid_generator = datagen.flow(x_test, y_test, batch_size=batch_size) standard_cnn_model = spatialCNN() print(standard_cnn_model.summary()) standard_cnn_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) #history=standard_cnn_model.fit(x=x_train, y=y_train, batch_size=batch_size,epochs=nb_epochs, validation_data=(x_test, y_test)) history=standard_cnn_model.fit_generator(train_generator,epochs=nb_epochs,steps_per_epoch=len(x_train)//batch_size, validation_data=valid_generator) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as ec from selenium.webdriver.common.action_chains import ActionChains import urllib from bs4 import BeautifulSoup 
import time import csv import re import sys import os import numpy as np import pandas as pd from IPython.core.debugger import set_trace path = 'datasets/alltrails/' chrome_options = webdriver.chrome.options.Options() chrome_options.add_argument("--start-maximized") #chrome_options.add_argument('--headless') #chrome_options.add_argument('--no-sandbox') #chrome_options.add_argument('--disable-dev-shm-usage') chrome_path = r'C:\Users\Sumit\Anaconda3\pkgs\python-chromedriver-binary-77.0.3865.40.0-py36_0\Lib\site-packages\chromedriver_binary\chromedriver.exe' import pickle infile=open('AllTrailsScraper_notebook_env.db','rb') df_trails = pickle.load(infile) infile.close() def login(): driver.get("https://www.alltrails.com/login?ref=header#") email = driver.find_element_by_id("user_email").send_keys("") password = driver.find_element_by_id("user_password").send_keys("") login_button = driver.find_element_by_class_name("login") login_button.click() def scrape_trail(browser, hike_url): browser.get(hike_url) soup = BeautifulSoup(browser.page_source, "lxml") return soup # + # Below processing should already have been done: # df_trails['trail'] = df_trails['trail'].str.split('?').str[0] # df_trails['trail_url'] = df_trails['trail_url'].str.split('?').str[0] # df_trails['trail_url'] = df_trails['trail_url'].str.replace('/explore/','/') # + driver = webdriver.Chrome(chrome_path, options=chrome_options) login() time.sleep(1) df_trail_latlon = pd.DataFrame() for index, row in df_trails.iterrows(): park = row['park'] trail = row['trail'] trail_url = row['trail_url'] print(trail_url) soup = scrape_trail(driver, trail_url) #print(soup.getText()) e_trail_lat = soup.find('meta', {'property': 'place:location:latitude'}) e_trail_lon = soup.find('meta', {'property': 'place:location:longitude'}) trail_lat = float(e_trail_lat['content']) trail_lon = float(e_trail_lon['content']) print(trail_lat) print(trail_lon) df_trail_latlon = df_trail_latlon.append({'park':park,'trail':trail,'trail_url':trail_url,'trail_lat':trail_lat,'trail_lon':trail_lon}, ignore_index=True) # e_rev = soup.findAll('div', {'itemprop': 'review'}) # print('Reviews: '+str(len(e_rev))) # for review in e_rev: # e_rev_date = review.find('meta', {'itemprop': 'datePublished'}) # e_rev_body = review.find('p', {'itemprop': 'reviewBody'}) # rev_date = e_rev_date['content'] # rev_body = e_rev_body.getText() # # print(rev_date) # # print(rev_body) # df_trail_reviews = df_trail_reviews.append({'park':park,'trail':trail,'date':rev_date,'review':rev_body}, ignore_index=True) # df_trail_reviews.to_csv(path+'reviews_'+park+'_'+trail+'.csv') driver.quit() # - df_trail_latlon.to_csv(path+'traillatlon.csv') df_trail_latlon # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Code More, Code Daily # - Goal is to Write code regularly # - my own implementation of the tricks with added links or personal experimetnation # [🐍PyTricks]: Python list slice syntax fun # --- # Python's list slice syntax can be used without indices # for a few fun and useful things: # You can clear all elements from a list: listy = [1, 2,3,4,5] print(listy) del listy[:] # deletes all items in place? listy # You can replace all elements of a list # without creating a new list object: a = listy listy[:] = [7,8,9] # selects all items and assignes ne values? 
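# Yes: slice assignment (`listy[:] = [7, 8, 9]`) replaces the contents of the existing
# list object in place rather than rebinding the name, so the alias `a` created above
# still points at the same (now updated) list — which is why `a is listy` is True below.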
print(listy) a a is listy # you can also create a shallow copy of a list b = listy[:] b b is listy a is listy a is b # assignment via slicing is trickyyyyy good to know! a.append([1,3]) print(a) print(listy) b # [🐍PyTricks]: Python 3.5+ type annotations # --- # Fri, Jul 5, # *mypy not able to install via anaconda navigator?* # Python 3.5+ supports 'type annotations' that can be # used with tools like [Mypy](http://mypy-lang.org/) to write statically typed Python: # - Mypy is an optional static type checker for Python that aims to combine the benefits of dynamic (or "duck") typing and static typing. Mypy combines the expressive power and convenience of Python with a powerful type system and compile-time type checking. Mypy type checks standard Python programs; run them using any Python VM with basically no runtime overhead. def my_add(a: int, b: int) -> int: return a + b my_add(5,10) my_add(5,"10") 5 + "10" # not a good test...? not raising on str # my_add(5, 10.0) # not working? # from docs https://github.com/python/mypy : # - see example on [online playground](https://mypy-play.net/) # - https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html # - http://mypy-lang.org/examples.html : # - word frequencies with dynamic typic # - simple class # - prime number sieve with generators # # + from typing import Iterator def fib(n: int) -> Iterator[int]: a, b = 0, 1 while a < n: yield a a, b = b, a + b # - fib(123) fib("10") # http://mypy-lang.org/examples.html : word frequencies with dynamic typic # # - [file object returned by open](https://docs.python.org/3.7/glossary.html#term-file-object) # - [`re.sub(pattern, repl, string, count=0, flags=0)`](https://devdocs.io/python~3.7/library/re#re.sub) Return the string obtained by replacing the leftmost non-overlapping occurrences of pattern in string by the replacement repl. # + # Mypy with dynamic typing # filename: wordfreq.py import sys import re if not sys.argv[1:]: # retrieve when run in command line with filename as arg raise RuntimeError('Usage: wordfreq FILE') dictionary = {} with open(sys.argv[1]) as file: # s(treams) - file object, text file for s in f: for word in re.sub('\W', ' ', ).split(): # add word to dictionary d[word] = d.get(word, 0) + 1 #list comprehension lst = [(freq, word) for word, freq in d.items()] for freq, word in sorted(lst): print('%-6d %s' % (freq, word) ) # + # Display the frequencies of words in a file. import sys import re from typing import Dict if not sys.argv[1:]: raise RuntimeError('Usage: wordfreq FILE') d = {} # type: Dict[str, int] with open(sys.argv[1]) as f: for s in f: for word in re.sub('\W', ' ', s).split(): d[word] = d.get(word, 0) + 1 # Use list comprehension l = [(freq, word) for word, freq in d.items()] for freq, word in sorted(l): print('%-6d %s' % (freq, word)) # - # - Copy to file and run...? # - or hmmm find a data set to use on thats persnonal from my note sin the past eg on Evernote, Workflowy, Google Docs, ... 
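# As a tie-in to the type-annotation trick above, here is a minimal typed rework of the
# first word-frequency cell (a sketch, not the original): that cell mixes up `file`/`f`
# and `dictionary`/`d` and drops the string argument to `re.sub`. Saved to a .py file,
# this version can also be checked with mypy. The path in the usage note is a placeholder.

# +
import re
from typing import Dict


def word_freq(path: str) -> Dict[str, int]:
    """Count word frequencies in a UTF-8 text file (hypothetical helper)."""
    counts: Dict[str, int] = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            # strip non-word characters, then count whitespace-separated words
            for word in re.sub(r'\W', ' ', line).split():
                counts[word] = counts.get(word, 0) + 1
    return counts


# usage (placeholder path):
# for freq, word in sorted((n, w) for w, n in word_freq('some_notes.txt').items()):
#     print('%-6d %s' % (freq, word))
# -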
# import sys print(sys.argv[1]) # + # Mypy with dynamic typing import re if not sys.argv[1:]: # retrieve when run in command line with filename as arg raise RuntimeError('Usage: wordfreq FILE') dictionary = {} filename = './data/WorkFlowy-sept-15-2017.txt' with open(filename,encoding='utf-8') as file: # s(treams) - file object, text file for stream in file: for word in re.sub('\W', ' ', stream).split(): # add word to dictionary dictionary[word] = dictionary.get(word, 0) + 1 #list comprehension lst = [(freq, word) for word, freq in dictionary.items()] for freq, word in sorted(lst): print('%-6d %s' % (freq, word) ) # - size = len(lst) print(size) # + sizeInDaMiddle = len(lst[int(size*0.25) : int(size*0.75) ]) # size of between 20% - 80% second and third quantile? print(sizeInDaMiddle) print(sizeInDaMiddle/size*100) # percentage of total... lol makes sense haha sanity chekc # - # Get unique numbers ? use set propert!~https://stackoverflow.com/questions/12897374/get-unique-values-from-a-list-in-python list(set(size)) lst[10:20] # list of tuple keywords pairs... type(lst) type(lst[1]) set(dictionary.values()) # unique counts.... ? # Okay but what question am I Investigating? I want to see what topics eg Javascript, Python or Cooking etc is being talked aobut... hmmm # # **PAUSE FOR NEXT TIME** # **ERROR Unicode decode error** ? `'charmap' codec can't decode byte 0x9d in position 5443: character maps to ` # # - https://stackoverflow.com/a/21129492/11539023 answer # - ecxport again.. or export as OPML specification ? # - review open file docs! # # **Fix** open(filename) -> open(filename,encoding='utf-8') # # [🐍PyTricks]: Merging two dicts in Python 3.5+ with a single expression # --- # Sat, May 25 2019 # Screencast: [A Shorthand for Merging Dictionaries in Python 3.5+](https://www.youtube.com/watch?v=Duexw08KaC8) # # Detailed info from on [PEP 448 -- Additional Unpacking Generalizations](https://www.python.org/dev/peps/pep-0448/) # - * iterable unpacking operator and ** dictionary unpacking operators # - function calls, in comprehensions and generator expressions, and in displays. # # More [examples here](https://codeyarns.com/2012/04/26/unpack-operator-in-python/) # + # How to Merge two dictionaries? # May 25, 2019 x = {'a': 1, 'b':2} y = {'b': 3, 'b':4} print({**x, **{'c':1,'d':2}}) z = {**x, **y} print(z) # {'a': 1, 'b': 4}, cant have dupes so last one was added # for dicts : ** unpacks the values , while * unpacks the keys print(*x,*y,*{'c':1,'d':2}) print(y) # -> cant create duplicates, strangely python does not raise a warning # In these examples, Python merges dictionary keys # in the order listed in the expression, overwriting # duplicates from left to right. # - print(*[1,23],*[2],3,*{'x':3}) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:.conda-jme] # language: python # name: conda-env-.conda-jme-py # --- # ## Firstly we set up 8 dask workers import numpy as np import dask from dask.distributed import Client, LocalCluster import dask.bag as db cluster = LocalCluster(n_workers=8) # explicitly connect to the cluster we just created client = Client(cluster) client # ## And we define a function # # $$ # g(x) = \sqrt{|x|} ^{\ln |x|} # $$ # # This is a test function for each pixel. In the real case we use an optimization workflow for each pixel. 
# + def g(x): return np.sqrt(np.abs(x)) ** np.log(np.abs(x)) g(3) # - # ## ...and a large array # # with 2 million pixels. Pixel values are normally distributed. test_array = np.random.randn(2000000) # Now we are going to apply $g(x)$ to each pixel in the test array. # # ## Serial # + # %%time results_serial = np.zeros((2000000, 1)) for i in range(len(test_array)): results_serial[i] = g(test_array[i]) # - results_serial[[0, 1, 2, 3, 10, -10, -3, -2, -1]].flatten() # ## Parallel # # Using `dask.bag` b = db.from_sequence(test_array, npartitions=24) b = b.map(g) b.visualize() # %%time results_parallel = b.compute() results_parallel = np.array(results_parallel) results_parallel[[0, 1, 2, 3, 10, -10, -3, -2, -1]] # - Serial: **~11 s** # - Parallel (8 workers): **~1.5 min** # # Why? # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Séance 2 – 07/10/2020 : révisions # ## Correction des exos # # Voir [https://github.com/clement-plancq/python-im-2/tree/master/corrections](https://github.com/clement-plancq/python-im-2/tree/master/corrections) # Les exos (à part ceux de CodinGame) sont liés à des tests, dans un fichier commençant par `test_`, ils peuvent être lancés avec [pytest](https://docs.pytest.org/en/latest/). Pour cela il faut installer le module `pytest`. # # Je vous rappelle la commande à utiliser pour installer un module : # `python3 -m pip install pytest` # ou # `python3 -m pip install pytest --user` # ## Les listes : fonctions # # - Les listes héritent des fonctions des *sequences*, elles ont également des [fonctions propres](https://docs.python.org/3.6/tutorial/datastructures.html#more-on-lists) # - Parmi ces fonctions nous utiliserons principalement : # - `append(x)` : ajoute un élément x à la fin de la liste (haut de la pile*) # - `extend([x, y, z])` : ajoute tous les éléments de la liste arg à la fin de la liste # - `pop(index=-1)` : supprime et renvoie l'élément de la liste à la position `index` # - `index(x)` : renvoie l'index du premier élément de valeur x # - `count(x)` : renvoie le nombre de fois où x apparaît # - `sort(key=None, reverse=False)` : trie et modifie la liste, lire la [doc](https://docs.python.org/3.6/howto/sorting.html#sortinghowto) pour en savoir plus sur les ordres de tri. stack = [12, 15, 12, 7, 18] stack.index(12) stack.count(12) stack.sort() stack stack.append(23) stack stack.append([35, 46]) stack stack.extend([51, 52]) stack # ### ✍️ Exo ✍️ def tokenize(sentence): """ Tokenize la phrase donnée en argument (sep = espace). Renvoie une liste de mots. Les mots composés avec un tiret sont décomposés dans des sous-listes. 
Args: sentence (string): la phrase à tokenizer Returns: list """ words = [] for item in sentence.split(): if '-' in item: words.append(item.split('-')) else: words.append(item) return words assert tokenize("je suis né dans le gris par accident") == \ ['je', 'suis', 'né', 'dans', 'le', 'gris', 'par', 'accident'] assert tokenize("tout mon cœur est resté là-bas") == \ ['tout', 'mon', 'cœur', 'est', 'resté', ['là', 'bas']] # # Les listes en compréhension # # - Elles permettent de définir des listes par filtrage ou opération sur les éléments d'une autre liste # - La [PEP 202](http://www.python.org/dev/peps/pep-0202/) conseille de préférer les listes en compréhension aux fonctions `map()` et `filter()` # - C'est puissant et concis, *so pythonic* [i ** 2 for i in range(10)] # + slideshow={"slide_type": "subslide"} [i ** 2 for i in range(10) if i % 2 == 0] # - [(i, j) for i in range(2) for j in ['a', 'b']] # ### ✍️ Exo ✍️ # Utilisez une liste en compréhension sur la sortie de votre fonction tokenize de manière à ne retenir que les noms # composés words = tokenize("De-ci de-là, cahin-caha, va trottine, va chemine, va petit âne") compounds = [word for word in words if type(word) is list] assert compounds == [['De', 'ci'], ['de', 'là,'], ['cahin', 'caha,']] # ## Parcours de liste # # La boucle `for` est particulièrement adaptée pour parcourir les *iterables* et donc les listes # + slideshow={"slide_type": "subslide"} voyelles = ['a', 'e', 'i', 'o', 'u'] for item in voyelles: print(item) # - # La fonction `enumerate` peut être utile dans certains cas, elle renvoie un `tuple` contenant l'indice et la valeur de l'item à l'indice concerné for i, item in enumerate(voyelles): print(i, item) # ## Copie de liste # # Dans `y = x`, `y` n'est pas un copie de x, les deux pointent vers le même objet x = [1, 2, 3] y = x y[0] = 4 x # Pour copier une liste il faut utiliser : x = [1, 2, 3] y = x[:] # ou y = list(x) y[0] = 4 x # # Déballage de séquences # # - Le *sequence unpacking* permet d'effectuer plusieurs affectations simultanées # - L'*unpacking* s'applique souvent sur des tuples x, y, z = (1, 2, 3) y lexique = [("maison", "mEz§"), ("serpent", "sERp@")] for ortho, phon in lexique: print(phon) # - On peut aussi utiliser `*` pour déballer une séquence en argument de fonction bornes = (0, 10) for i in range(*bornes): print(i) # # Les ensembles # # Les ensembles (`set`) sont des collections non ordonnées d'élements sans doublons # Les ensembles supportent les fonctions mathématiques d'union, d'intersection, de différence ([doc](https://docs.python.org/3.6/library/stdtypes.html#set)) # # - `value in s` renvoie si `value` est un élément de `s` # - `union(*sets)` renvoie l'union de tous les `sets` (l'ensemble des valeur contenues dans tous les sets). # - `intersection(*sets)` renvoie l'intersection de tous les `sets` (l'ensemble des valeur contenues dans au moins un set). # ens0 = set() # on crée l'ensemble vide ens0 ens1 = {'le', 'guépard', 'le', 'poursuit'} ens1 ens2 = {"avec", "le", "chandelier", "dans", "la", "cuisine"} ens1.intersection(ens2) # ### ✍️ Exo # # Dans cet extrait de données tirées des [listes de Swadesh de langues austronésiennes](https://en.wiktionary.org/wiki/Appendix:Austronesian_Swadesh_lists), ici pour le tagalog et le cebuano, trouvez les mots en commun. 
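# Quick illustration before the exercise: `union` keeps the values found in
# *at least one* of the sets, `intersection` keeps the values found in *every*
# set, and `difference` keeps the values of the first set that are absent from
# the second.

# +
a = {'le', 'guépard', 'le', 'poursuit'}
b = {'avec', 'le', 'chandelier', 'dans', 'la', 'cuisine'}

print(a | b)    # union        -> a.union(b)
print(a & b)    # intersection -> a.intersection(b), here {'le'}
print(a - b)    # difference   -> a.difference(b)
# -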
tagalog = {'i':'ako', 'you_sg':'ikaw', 'he':'siya', 'we':'tayo', 'you_pl':'kayo', 'they':'sila',\ 'this':'ito', 'that':'iyan', 'here':'dito', 'there':'doon', 'who':'sino',\ 'what':'ano', 'where':'saan', 'when':'kailan', 'how':'paano'} cebuano = {'i':'ako', 'you_sg':'ikaw', 'he':'siya', 'we':'kita', 'you_pl':'kamo', 'they':'sila',\ 'this':'kiri', 'that':'kana', 'here':'diri', 'there':'diha', 'who':'kinsa',\ 'what':'unsa', 'where':'asa', 'when':'kanus-a', 'how':'unsaon'} set(tagalog.values()).intersection(set(cebuano.values())) # # Les dictionnaires # # - Les dictionnaires (`dict`) sont des structures de données associatives de type clé: valeur # - Les clés d'un dictionnaire sont uniques, seuls les types *hashable* (*immutable* et objets que vous avez définis) peuvent être des clés ([doc](https://docs.python.org/3.6/library/stdtypes.html#mapping-types-dict)) # # - `key in d` renvoie True si `key` est une clé de `d` # - `keys()` renvoie la liste des clés # - `values()` renvoie la liste des valeurs # - `items()` renvoie la liste des couples clé:valeur (tuple) # - `get(key, default=None)` renvoie la valeur associée à `key`. Si `key` n'existe pas, renvoie l'argument `default`. Ne modifie pas le dictionnaire. # - `setdefault(key, default=None)` si `key` n'existe pas, insère `key` avec la valeur `default` dans le dictionnaire puis renvoie la valeur associée à la clé. d = {'Perl':'Larry Wall', 'Python':'', 'C++':''} d['Perl'] d['Ruby'] d.setdefault('Ruby', 'je sais pas') d # ## Module collections # # - Le module *collections* propose des implémentations de structures de données supplémentaires # - Dans la liste (voir [doc](https://docs.python.org/3.6/library/collections.html)), deux pourront nous intéresser : # # - `defaultdict` # # `defauldict` est similaire à un `dict` mais il permet l'autovivification # # Son implémentation le rend plus rapide qu'un dictionnaire utilisé avec la fonction `setdefault` # import collections lexique = [("couvent", "kuv"), ("couvent", "kuv@")] dico = collections.defaultdict(list) for ortho, phon in lexique: dico[ortho].append(phon) dico # ## Module collections # # - `Counter` # # `Counter` est un dictionnaire où les valeurs attendues sont les nombres d'occurences des clés from collections import Counter cnt = Counter() list = ['le', 'guépard', 'le', 'poursuit'] for item in list: cnt[item] += 1 cnt # ### ✍️ Exo # # Faites la même chose avec un dictionnaire # ## Les fichiers # # * Pour travailler avec les fichiers on doit procéder à trois opérations : # 1. Ouverture avec la fonction `open` (lève l'exception `FileNotFoundError` en cas d'échec) # 2. Lecture (`read` ou `readline` ou `readlines`) et/ou écriture (`write`) # 3. Fermeture du fichier avec la fonction `close` # # * Ouverture # * `open` est une fonction qui accepte de nombreux arguments : RTFM # * `open` renvoie un objet de type `file` # * Le plus souvent elle s'emploie de la manière suivante: # ```python # >>> #f = open(filename, mode) # >>> f = open('nom_fichier', 'w') # ``` # Les modes sont : # # * `r` : lecture (défaut) # * `w` : écriture # * `x` : création et écriture (échec si le fichier existe déjà) # * `a` : concaténation (append) # # # * `b` : mode binaire # * `t` : mode texte (défaut) # * `+` : read/write (ex: r+b) # # ## Les fichiers : ouverture # # La documentation de Python conseille cette façon de faire : # ```python # with open('mon_fichier', 'r') as f: # read_data = f.read() # ``` # L'utilisation du mot clé `with` garantit la fermeture du fichier même si une exception est soulevée. 
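# A short sketch of that guarantee: the file object is closed as soon as the
# `with` block is left, even when an exception is raised inside it (the file
# name 'demo.txt' is only for illustration).

# +
with open('demo.txt', 'w') as f:
    f.write('première ligne\n')
print(f.closed)     # True: closed at the end of the block

try:
    with open('demo.txt') as f:
        raise ValueError('boom')
except ValueError:
    pass
print(f.closed)     # True: closed despite the exception
# -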
# ## Les fichiers : lecture # # * `read(size=-1)` lit les `size` premiers octets (mode `b`) ou caractères (mode `t`). Si `size` < 0, lit tout le fichier. # * `readline(size=-1)` lit au plus `size` caractères ou jusqu'à la fin de ligne. Si `size` < 0, lit toute la ligne. Il est conseillé de ne pas toucher à `size`. # * `readlines(hint=-1)` lit `hint` lignes du fichier. Si `hint` < 0, lit toutes les lignes du fichier. # * un objet `file` est un itérable ! (*the pythonic way*) # # ```python # for line in f: # process(line) # ``` # ## Les fichiers : écriture et fermeture # # * `write(text)` écrit `texte` dans le fichier? # * `close()` ferme le fichier. # # En règle générale veillez à toujours fermer les objets fichiers. # En mode écriture oublier de fermer un fichier peut réserver des mauvaises surprises # # * fonction `print` # ```python # with open('mon_fichier', 'w') as output_f: # for item in words: # print(item, file=output_f) # ``` # * `sys.stdin`, `sys.stdout` et `sys.stderr` sont des objets de type `file` # ### ✍️ Exo # # Lisez le fichier `data/austronesian_swadesh.csv` et écrivez les mots des langues Ilocano et Malagasy dans deux fichiers distincts. # Les données viennent de [Wiktionary](https://en.wiktionary.org/wiki/Appendix:Austronesian_Swadesh_lists). # # (Essayez de faire comme si vous ne connaissiez pas le module csv sinon la partie qui suit n'aura aucun intérêt.) # + # c'est compliqué sans le module csv quand même # - # ## Module csv # # La documentation est ici : [https://docs.python.org/3/library/csv.html](https://docs.python.org/3/library/csv.html) # Parce que les données au format csv sont très répandues et parce qu'il peut être pénible de le lire correctement, le module csv est là pour vous aider. # Pour le dire vite il y a deux façons de l'utiliser : reader/writer ou DictReader/DictWriter. # - `csv.reader` # + import csv swadesh_light = [] with open('data/austronesian_swadesh.csv') as csvfile: reader = csv.reader(csvfile, delimiter=',', quotechar='"') # à l'ouverture je spécifie les séparateur de champ et de chaîne for row in reader: # l'objet reader est un itérable swadesh_light.append(row[0:3]) print(' | '.join(row[0:3])) # row est une liste de chaînes de caractères # - # - `csv.writer` with open('swadesh_light.csv', 'w') as csvfile: writer = csv.writer(csvfile, delimiter='|',quotechar='"') #writer.writerows(swadesh_light) ici on écrit tout en une fois for item in swadesh_light: writer.writerow(item) # writerow reçoit une liste de chaînes # - csv.DictReader # Cette classe s'appuie sur la ligne d'en-tête pour créer une suite de dictionnaires. # S'il n'y a pas de ligne d'en-tête on peut utiliser une liste `fieldnames` en paramètre. with open('data/austronesian_swadesh.csv') as csvfile: reader = csv.DictReader(csvfile, delimiter=',',quotechar='"') for row in reader: # ici row est un dictionnaire print(f"{row['Ilocano']} | {row['Malagasy']}") # - csv.DictWriter # Cette fois il s'agit de générer un fichier csv à partir d'une séquence de dictionnaires. Le paramètre `fieldnames` est obligatoire. 
with open('swadesh_light.csv', 'w') as csvfile: fieldnames = ['english', 'ilocano'] writer = csv.DictWriter(csvfile, fieldnames=fieldnames, delimiter='|',quotechar='$') writer.writeheader() for num, en, ilo in swadesh_light: writer.writerow({'english': en, 'ilocano': ilo}) # ## Module `re` import re # - `re` est un module particulièrement important, vous devez lire la [doc](https://docs.python.org/3/library/re.html), absolument # # - La doc officielle est parfois aride, ce [howto](https://docs.python.org/3.6/howto/regex.html) rédigé par est plus digeste # # # a minima vous devez connaître les fonctions : # # - `findall` : trouve toutes les occurences du motif, retourne une liste de chaînes trouvées # - `search` : trouve le motif, retourne un objet `Match`, `None` sinon # - `match` : détermine si le motif est présent au début de la chaîne, retourne un objet `Match`, `None` sinon # - `split` : découpe une chaîne selon un motif, retourne une liste de chaînes # - `sub` : remplace les occurences d'un motif par une chaîne de remplacement # - `compile` : compilation d'un motif (pattern), retourne un objet `Pattern` if re.search(r"(\w|\s)+", "Un léopard me pourchasse"): print("Cours !") re.sub(r'e|é', 'i', 'éléphanteau') # ## `\w` et Python3 # # `\w` est la classe prédéfinie des caractères alphanumériques : # # - En Python 2 `\w` ~correspond~ correspondait à `[A-Za-z0-9_]`, avec les locales il est possible d'y ajouter d'autres caractères # # - En Python 3 `\w` correspond à tous les caractères qui ont la propriété Unicode Letter d'après le module `unicodedata` (sauf si le motif est compilé en binaire ou si l'option `re.ASCII` est activée) if re.search(r"\w", "馬青區團長成中央代表"): print("Yeah !") if re.search(r"\w", "هيلاري كلينتون"): print("Yeah !") # ### ☕ Exos ☕ # 1. Écrire une fonction qui reçoit deux noms de langue austronésiennes, une liste de mots en anglais et renvoie chacun des mots anglais avec leur traduction dans les deux langues. # + def get_austro_words(langue1, langue2, words): """ Reçoit un couple de langues (langue1, langue2) et une liste de mots (words) Cherche dans la liste Swadesh des langues austronésiennes les traductions des mots dans ces deux langues. Renvoie un dictionnaire {'langue1': [w1, w2], 'langue2': [w1, w2]} Liste vide si la langue n'est pas répertoriée dans la liste """ # votre code assert get_austro_words('Malay', 'Malagasy', ['new', 'old', 'good']) == \ { 'Malay':['baharu', 'lama', 'bagus, baik'], 'Malagasy':['vaovao', 'onta, hantitra', 'tsara'] } assert get_austro_words('Malay', 'Balinese', ['new', 'old', 'good']) == \ { 'Malay':['baharu', 'lama', 'bagus, baik'], 'Balinese':[] } # - # 2. Pour chaque mot du Cebuano de la liste Swadesh austronésienne, trouvez les mots des autres langues qui ont les deux ou trois premiers caractères en commun. # (optionnel si vous voulez jouer avec les expressions régulières) Si le mot commence par une voyelle, elle pourra différer dans les autres langues. Ex: isa / usa seront considérées comme similaires (i/u) parce qu'à part la première lettre voyelle elles sont similaires. # 3. 
Sans rechercher de solution sur internet, essayez d'implémenter une fonction qui calcule la distance de Levenshtein # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + """ This code performs graph classification with a simple Graph Convolutional Neural Network based on Spektral So far no graphic card needed and it only works on woodycap """ #import random #for shuffling lists import spektral as spektral #package based on Keras that is foundation for GNNs import tables #show h5 files hirarchy import h5py import numpy as np import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 12}) #change font size of diagrams import os os.environ['KMP_DUPLICATE_LIB_OK']='True' #for compatability? import pandas as pd import tensorflow as tf from tensorflow.keras.losses import CategoricalCrossentropy from tensorflow.keras.metrics import categorical_accuracy from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam from tensorflow.keras.models import Model from tensorflow.keras.layers import Dense, Dropout print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) #show number of GPUs from spektral.datasets import TUDataset #example dataset (molecules?) from spektral.layers import GINConv, GlobalAvgPool, GCNConv, GlobalSumPool, GCSConv from spektral.transforms import AdjToSpTensor, LayerPreprocess from spektral.data import Dataset, DisjointLoader, Graph, BatchLoader #Dataset to create a custom dataset from spektral.transforms.normalize_adj import NormalizeAdj from ctapipe.visualization import CameraDisplay #for visualisation of hexagonal pixel plots from ctapipe.io import EventSource from ctapipe.utils.datasets import get_dataset_path from ctapipe.instrument import CameraGeometry #for obtaining adjacency matrix of the camera #this code makes Nvidia RTX3080 also compatible with the code from tensorflow.compat.v1 import ConfigProto from tensorflow.compat.v1 import InteractiveSession config = ConfigProto() config.gpu_options.allow_growth = True session = InteractiveSession(config=config) # - # ################################################################################ # # Config # ################################################################################ # learning_rate = 1e-3 # Learning rate # channels = 128 # Hidden units # layers = 3 # GIN layers # epochs = 10 # Number of training epochs # batch_size = 32 # Batch size # # ################################################################################ # # Load data # ################################################################################ # dataset = TUDataset("PROTEINS", clean=True) # # # Parameters # F = dataset.n_node_features # Dimension of node features (here it is 4) # n_out = dataset.n_labels # Dimension of the target (here 2) # # # print(dataset) # print(type(dataset)) # print(dataset[1]) # # # # # Train/test split # idxs = np.random.permutation(len(dataset)) # split = int(0.9 * len(dataset)) # idx_tr, idx_te = np.split(idxs, [split]) # dataset_tr, dataset_te = dataset[idx_tr], dataset[idx_te] # # loader_tr = DisjointLoader(dataset_tr, batch_size=batch_size, epochs=epochs) # loader_te = DisjointLoader(dataset_te, batch_size=batch_size, epochs=1) # # # ################################################################################ # # Build model # 
################################################################################ # class GIN0(Model): # def __init__(self, channels, n_layers): # super().__init__() # self.conv1 = GINConv(channels, epsilon=0, mlp_hidden=[channels, channels]) # self.convs = [] # for _ in range(1, n_layers): # self.convs.append( # GINConv(channels, epsilon=0, mlp_hidden=[channels, channels]) # ) # self.pool = GlobalAvgPool() # self.dense1 = Dense(channels, activation="relu") # self.dropout = Dropout(0.5) # self.dense2 = Dense(n_out, activation="softmax") # # def call(self, inputs): # x, a, i = inputs # x = self.conv1([x, a]) # for conv in self.convs: # x = conv([x, a]) # x = self.pool([x, i]) # x = self.dense1(x) # x = self.dropout(x) # return self.dense2(x) # # # # Build model # model = GIN0(channels, layers) # optimizer = Adam(learning_rate) # loss_fn = CategoricalCrossentropy() # # # ################################################################################ # # Fit model # ################################################################################ # @tf.function(input_signature=loader_tr.tf_signature(), experimental_relax_shapes=True) # def train_step(inputs, target): # with tf.GradientTape() as tape: # predictions = model(inputs, training=True) # loss = loss_fn(target, predictions) + sum(model.losses) # gradients = tape.gradient(loss, model.trainable_variables) # optimizer.apply_gradients(zip(gradients, model.trainable_variables)) # acc = tf.reduce_mean(categorical_accuracy(target, predictions)) # return loss, acc # # # epoch = step = 0 # results = [] # for batch in loader_tr: # step += 1 # loss, acc = train_step(*batch) # results.append((loss, acc)) # if step == loader_tr.steps_per_epoch: # step = 0 # epoch += 1 # print("Ep. {} - Loss: {}. Acc: {}".format(epoch, *np.mean(results, 0))) # results = [] # # ################################################################################ # # Evaluate model # ################################################################################ # results = [] # for batch in loader_te: # inputs, target = batch # predictions = model(inputs, training=False) # results.append( # ( # loss_fn(target, predictions), # tf.reduce_mean(categorical_accuracy(target, predictions)), # ) # ) # print("Done. Test loss: {}. 
Test acc: {}".format(*np.mean(results, 0))) # + #load event data of gammas (and later also protons) (DL1.h5) as h5 data containers file = "/home/saturn/caph/mpp228/CTA_data/Prod5_GRID/Prod5_Paranal_AdvancedBaseline_NSB1x_gamma-diffuse_North_20deg_ctapipe_v0.10.5_DL1/gamma_20deg_0deg_run7239___cta-prod5-paranal_desert-2147m-Paranal-dark_cone10_merged.DL1.h5" file_proton = "/home/saturn/caph/mpp228/CTA_data/Prod5_GRID/Prod5_Paranal_AdvancedBaseline_NSB1x_proton_North_20deg_ctapipe_v0.10.5_DL1/proton_20deg_0deg_run4386___cta-prod5-paranal_desert-2147m-Paranal-dark_merged.DL1.h5" h5file = tables.open_file(file, driver="H5FD_CORE") protons = tables.open_file(file_proton, driver="H5FD_CORE") print(h5file) # + #overview of the loaded h5 data and a few subfolders #command: shows subgroup names: ._g_getnchildren #command: print(h5file.root.dl1.event.telescope.images._f_iter_nodes) iterate over all nodes=subtables in the group print(h5file.root.dl1.event.telescope.images.tel_001) print(h5file.root.dl1.event.telescope.images.tel_001.description) print(h5file.root.dl1.event.telescope.images.tel_001.cols) print(h5file.root.dl1.event.telescope.images.tel_001.nrows) print(np.array(h5file.root.dl1.event.telescope.images.tel_001.col("image")).shape) #print(np.array(h5file.root.dl1.event.telescope.images.tel_001.col("image"))[5,:]) #show one image print("\n") #this seems to be DL2 data format (whatever that means), these are not the relevant training images print(h5file.root.simulation.event.telescope.images.tel_001) print(h5file.root.simulation.event.telescope.images.tel_001.description) print(h5file.root.simulation.event.telescope.images.tel_001.cols) print(h5file.root.simulation.event.telescope.images.tel_001.nrows) print(np.array(h5file.root.simulation.event.telescope.images.tel_001.col("true_image")).shape) #show geometry of the cams (all of them at the end of print(h5file)) print("\n") print(h5file.root.configuration.instrument.telescope.camera.geometry_LSTCam.description) print(h5file.root.configuration.instrument.telescope.camera.geometry_LSTCam.nrows) # - print(protons) print(np.array(protons.root.dl1.event.telescope.images.tel_001.col("image")).shape) # + #plot adjacency matrix of given geometry in input file using ctapipe geom = CameraGeometry.from_name("LSTCam") #geometry of LST camera plt.figure(figsize=(8, 3)) plt.subplot(1, 2, 1) plt.imshow(geom.neighbor_matrix, origin="lower") #geom.neighbor_matrix is the adjacency matrix plt.title("Pixel Neighbor Matrix LSTCam") #adjacency matrix for LST camera plt.subplot(1, 2, 2) plt.scatter(geom.pix_x, geom.pix_y) plt.title("Pixel Positions LSTCam (1855)") geom = CameraGeometry.from_name("NectarCam") #geometry that looks like LST camera plt.figure(figsize=(8, 3)) plt.subplot(1, 2, 1) plt.imshow(geom.neighbor_matrix, origin="lower") plt.title("Pixel Neighbor Matrix NectarCam") plt.subplot(1, 2, 2) plt.scatter(geom.pix_x, geom.pix_y) plt.title("Pixel Positions NectarCam (1855?)") geom = CameraGeometry.from_name("CHEC") #??? plt.figure(figsize=(8, 3)) plt.subplot(1, 2, 1) plt.imshow(geom.neighbor_matrix, origin="lower") plt.title("Pixel Neighbor Matrix CHEC") plt.subplot(1, 2, 2) plt.scatter(geom.pix_x, geom.pix_y) plt.title("Pixel Positions CHEC (2048)") geom = CameraGeometry.from_name("FlashCam") #??? 
plt.figure(figsize=(8, 3)) plt.subplot(1, 2, 1) plt.imshow(geom.neighbor_matrix, origin="lower") plt.title("Pixel Neighbor Matrix FlashCam") plt.subplot(1, 2, 2) plt.scatter(geom.pix_x, geom.pix_y) plt.title("Pixel Positions FlashCam (1764)") print(geom.neighbor_matrix) #adjacency matrix for one camera # + #this path is DL1 data #collect gamma images from all telescope types and putting it to a list called images tel = h5file.root.dl1.event.telescope.images._f_iter_nodes() images = [] for tel in tel: images.append(tel.col("image")) print(len(images)) print(images) #images contains ~180 telescope folders (LST, FlashCam,...) with few hundred images each #collect proton images from all telescope types and putting it to a list called images_proton tel_proton = protons.root.dl1.event.telescope.images._f_iter_nodes() images_proton = [] for tel_proton in tel_proton: images_proton.append(tel_proton.col("image")) print(images_proton) #images contains ~180 telescope folders (LST, FlashCam,...) with few hundred images each print(len(images_proton)) #also 180 events # + 'this codeblock is redundant if one only wants LST images with 1855 pixels' print(np.array(images[7]).shape) #one element meaning one telescope camera type (for example LST, FlashCam,...) has this shape print(np.array(images[7])[11,:]) #list of pixel values (in photoelectrons?) of one specific image 11 from one specific telescope camera 7 #collect geometries for all 180 entries of telescope camera types (for example LST, FlashCam,...) geometries = [] #collect adjacency matrix for every telescope camera depending on the number of pixels of the camera for i in range(len(images)): if np.ma.size(np.array(images[i]), axis = 1) == 1855: geometries.append(CameraGeometry.from_name("LSTCam").neighbor_matrix) elif np.ma.size(np.array(images[i]), axis = 1) == 2048: geometries.append(CameraGeometry.from_name("CHEC").neighbor_matrix) elif np.ma.size(np.array(images[i]), axis = 1) == 1764: geometries.append(CameraGeometry.from_name("FlashCam").neighbor_matrix) else: Print("Something went wrong") print(len(geometries)) print(geometries[1]) #adjacency matrix of telescope camera [1] #now we have images (180 tel with a few hundred images each) # and also geometries with 180 ajacency matrices # + #select only LST gamma images (or at least images with 1855 pixels (do not know the difference so far)) and save tem in images_lst images_lst_list = [] for i in range(len(images)): if np.ma.size(np.array(images[i]), axis = 1) == 1855: images_lst_list.append(images[i]) else: pass images_lst = np.empty((0,1855)) for i in range(len(images_lst_list)): images_lst = np.concatenate((images_lst, np.array(images_lst_list[i])), axis = 0) images_lst = images_lst[1:,:] #get rid of numpy empty entry print("Now gamma image shape is:", images_lst.shape) #select only LST proton images (or at least images with 1855 pixels (do not know the difference so far)) and save tem in images_lst_proton images_lst_proton_list = [] for i in range(len(images_proton)): if np.ma.size(np.array(images_proton[i]), axis = 1) == 1855: images_lst_proton_list.append(images_proton[i]) else: pass images_lst_proton = np.empty((0,1855)) for i in range(len(images_lst_proton_list)): images_lst_proton = np.concatenate((images_lst_proton, np.array(images_lst_proton_list[i])), axis = 0) images_lst_proton = images_lst_proton[1:,:] #get rid of numpy empty entry print("Now proton image shape is:", images_lst_proton.shape) #collect LST geometry geometrie_lst = CameraGeometry.from_name("LSTCam").neighbor_matrix 
#this collects LST adjacency matrix geometrie_lst = geometrie_lst.astype(np.float32) #convert from bool to float for GNN input print("Now adjacency shape is:", geometrie_lst.shape) print("\n", geometrie_lst) # + #prepare gamma labels for each graph (meaning each image) labels = np.ones(np.size(images_lst, 0)) print(labels.shape) #12621 times number one as the target value #prepare proton labels for each graph (meaning each image) labels_proton = np.zeros(np.size(images_lst_proton, 0)) print(labels_proton.shape) #12621 times number one as the target value # - #merging proton and gamma images and labels labels = np.concatenate((labels,labels_proton)) images_lst = np.concatenate((images_lst, images_lst_proton)) print(labels.shape) print(images_lst.shape) # + # spektral custom dataset class so one transforms numpy arrays to Spektral datasets (for GNN input) class MyDataset(Dataset): #in brackets one puts inherited classes """ A dataset created by a numpy array of images of shape (number_of_images, number_of_pixels) and by a numpy array of labels of shape (number_of_images) """ def __init__(self, images, labels, **kwargs): #initiate instance self.images = images self.labels = labels super().__init__(**kwargs) #inherit from other classes def read(self): #this function returns a list of graph objects so that Spektral takes it as the GNNs input # We must return a list of Graph objects output = [] for i in range(np.size(self.labels)): #do for all training images output.append(spektral.data.graph.Graph(x = self.images[i,:], a = geometrie_lst, y = self.labels[i])) return output #create a new instance from the class Dataset dataset = MyDataset(images_lst[:,:], labels[:]) #, transforms=NormalizeAdj() #this takes a lot of ram input_gnn = dataset.read() #write down a list of graph objects calling a class method of the instance print(dataset) #this only gives position in RAM space print(type(dataset)) print(dataset[3]) print() print(input_gnn[10:13]) print(type(input_gnn)) print(input_gnn[4]) print(len(input_gnn)) # - #create GNN class MyFirstGNN(Model): def __init__(self, n_hidden, n_labels): super().__init__() self.graph_conv = GCNConv(n_hidden, activation="relu") #A graph convolutional layer where n_hidden is number of channels self.pool = GlobalSumPool() #graph pooling layer self.graph_conv = GCNConv(n_hidden, activation="relu") self.dropout = Dropout(0.1) #original was 0.5 self.dense = Dense(n_labels, 'sigmoid') #original sofmax function def call(self, inputs): out = self.graph_conv(inputs) out = self.dropout(out) out = self.pool(out) out = self.dense(out) return out class Net(Model): def __init__(self): super().__init__() self.conv1 = GCSConv(32, activation="relu") self.conv2 = GCSConv(32, activation="relu") self.conv3 = GCSConv(32, activation="relu") self.global_pool = GlobalAvgPool() self.dense = Dense(dataset.n_labels, activation="sigmoid") def call(self, inputs): x, a = inputs x = self.conv1([x, a]) x = self.conv2([x, a]) x = self.conv3([x, a]) output = self.global_pool(x) output = self.dense(output) return output #define strategy for splitting training over multiple GPUs #does not work currently strategy = tf.distribute.MirroredStrategy(devices=None) #devices=None automatically detects all devices print('Number of devices: {}'.format(strategy.num_replicas_in_sync)) #show number of devices for training # + #compile model model = Net() #first number = number of channels #original was 32 model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[tf.keras.metrics.AUC(from_logits=True)]) 
#load dater using batch-loader for fitting the model loader = BatchLoader(dataset, batch_size=32, shuffle=True) model.fit(loader.load(), steps_per_epoch=loader.steps_per_epoch, epochs=50) """ loss2=tf.keras.losses.BinaryCrossentropy(from_logits=True) #compile model model2 = MyFirstGNN(32, dataset.n_labels) #first number = number of channels #original was 32 model2.compile(optimizer="SGD", loss=loss2, metrics=["sparse_categorical_accuracy"]) #load dater using batch-loader for fitting the model loader2 = BatchLoader(dataset, batch_size=32, shuffle=True) model2.fit(loader2.load(), steps_per_epoch=loader2.steps_per_epoch, epochs=10) loss3 = tf.keras.losses.CategoricalCrossentropy(from_logits=False,label_smoothing=0.0,axis=-1,reduction="auto", name="categorical_crossentropy") #compile model model3 = MyFirstGNN(32, dataset.n_labels) #first number = number of channels #original was 32 model3.compile(optimizer="SGD", loss=loss3, metrics=["sparse_categorical_accuracy"]) #load dater using batch-loader for fitting the model loader3 = BatchLoader(dataset, batch_size=32, shuffle=True) model3.fit(loader3.load(), steps_per_epoch=loader3.steps_per_epoch, epochs=10) """ # + #checking layers and gnn structure #commit to github # + model.summary() #evaluate the model on an unseen test set using a different loader loader_test = BatchLoader(test_dataset, batch_size=32, shuffle=True) loss = model.evaluate(loader_test.load(), steps=loader_test.steps_per_epoch) print('Test loss: {}'.format(loss)) # - # + """ This example shows how to define your own dataset and use it to train a non-trivial GNN with message-passing and pooling layers. The script also shows how to implement fast training and evaluation functions in disjoint mode, with early stopping and accuracy monitoring. The dataset that we create is a simple synthetic task in which we have random graphs with randomly-colored nodes. The goal is to classify each graph with the color that occurs the most on its nodes. For example, given a graph with 2 colors and 3 nodes: x = [[1, 0], [1, 0], [0, 1]], the corresponding target will be [1, 0]. """ import numpy as np import scipy.sparse as sp import tensorflow as tf from tensorflow.keras.layers import Dense from tensorflow.keras.losses import CategoricalCrossentropy from tensorflow.keras.metrics import categorical_accuracy from tensorflow.keras.models import Model from tensorflow.keras.optimizers import Adam from spektral.data import Dataset, DisjointLoader, Graph from spektral.layers import GCSConv, GlobalAvgPool from spektral.transforms.normalize_adj import NormalizeAdj ################################################################################ # Config ################################################################################ learning_rate = 1e-2 # Learning rate epochs = 10 # Number of training epochs es_patience = 10 # Patience for early stopping batch_size = 32 # Batch size ################################################################################ # Load data ################################################################################ class MyDataset(Dataset): """ A dataset of random colored graphs. The task is to classify each graph with the color which occurs the most in its nodes. The graphs have `n_colors` colors, of at least `n_min` and at most `n_max` nodes connected with probability `p`. 
""" def __init__(self, n_samples, n_colors=3, n_min=10, n_max=100, p=0.1, **kwargs): self.n_samples = n_samples self.n_colors = n_colors self.n_min = n_min self.n_max = n_max self.p = p super().__init__(**kwargs) def read(self): def make_graph(): n = np.random.randint(self.n_min, self.n_max) colors = np.random.randint(0, self.n_colors, size=n) # Node features x = np.zeros((n, self.n_colors)) x[np.arange(n), colors] = 1 # Edges a = np.random.rand(n, n) <= self.p a = np.maximum(a, a.T).astype(int) a = sp.csr_matrix(a) # Labels y = np.zeros((self.n_colors,)) color_counts = x.sum(0) y[np.argmax(color_counts)] = 1 return Graph(x=x, a=a, y=y) # We must return a list of Graph objects return [make_graph() for _ in range(self.n_samples)] data = MyDataset(1000, transforms=NormalizeAdj()) print("This is the same type of dataset as mine above:") print(data) print(type(data)) print(data[3]) # Train/valid/test split idxs = np.random.permutation(len(data)) split_va, split_te = int(0.8 * len(data)), int(0.9 * len(data)) idx_tr, idx_va, idx_te = np.split(idxs, [split_va, split_te]) data_tr = data[idx_tr] data_va = data[idx_va] data_te = data[idx_te] # Data loaders loader_tr = DisjointLoader(data_tr, batch_size=batch_size, epochs=epochs) loader_va = DisjointLoader(data_va, batch_size=batch_size) loader_te = DisjointLoader(data_te, batch_size=batch_size) ################################################################################ # Build model ################################################################################ class Net(Model): def __init__(self): super().__init__() self.conv1 = GCSConv(32, activation="relu") self.conv2 = GCSConv(32, activation="relu") self.conv3 = GCSConv(32, activation="relu") self.global_pool = GlobalAvgPool() self.dense = Dense(data.n_labels, activation="softmax") def call(self, inputs): x, a, i = inputs x = self.conv1([x, a]) x = self.conv2([x, a]) x = self.conv3([x, a]) output = self.global_pool([x, i]) output = self.dense(output) return output model = Net() optimizer = Adam(lr=learning_rate) loss_fn = CategoricalCrossentropy() ################################################################################ # Fit model ################################################################################ @tf.function(input_signature=loader_tr.tf_signature(), experimental_relax_shapes=True) def train_step(inputs, target): with tf.GradientTape() as tape: predictions = model(inputs, training=True) loss = loss_fn(target, predictions) + sum(model.losses) gradients = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) acc = tf.reduce_mean(categorical_accuracy(target, predictions)) return loss, acc def evaluate(loader): output = [] step = 0 while step < loader.steps_per_epoch: step += 1 inputs, target = loader.__next__() pred = model(inputs, training=False) outs = ( loss_fn(target, pred), tf.reduce_mean(categorical_accuracy(target, pred)), len(target), # Keep track of batch size ) output.append(outs) if step == loader.steps_per_epoch: output = np.array(output) return np.average(output[:, :-1], 0, weights=output[:, -1]) epoch = step = 0 best_val_loss = np.inf best_weights = None patience = es_patience results = [] for batch in loader_tr: step += 1 loss, acc = train_step(*batch) results.append((loss, acc)) if step == loader_tr.steps_per_epoch: step = 0 epoch += 1 # Compute validation loss and accuracy val_loss, val_acc = evaluate(loader_va) print( "Ep. 
{} - Loss: {:.3f} - Acc: {:.3f} - Val loss: {:.3f} - Val acc: {:.3f}".format( epoch, *np.mean(results, 0), val_loss, val_acc ) ) # Check if loss improved for early stopping if val_loss < best_val_loss: best_val_loss = val_loss patience = es_patience print("New best val_loss {:.3f}".format(val_loss)) best_weights = model.get_weights() else: patience -= 1 if patience == 0: print("Early stopping (best val_loss: {})".format(best_val_loss)) break results = [] ################################################################################ # Evaluate model ################################################################################ model.set_weights(best_weights) # Load best model test_loss, test_acc = evaluate(loader_te) print("Done. Test loss: {:.4f}. Test acc: {:.2f}".format(test_loss, test_acc)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="Wp1un6KrkCtI" outputId="356f5c15-95a0-460d-94b7-c1c6486a3396" # tensorflow와 tf.keras를 임포트합니다 import tensorflow as tf from tensorflow import keras # 헬퍼(helper) 라이브러리를 임포트합니다 import numpy as np import matplotlib.pyplot as plt print(tf.__version__) # + colab={"base_uri": "https://localhost:8080/"} id="a2uT7t5Fmy1c" outputId="ae12bc2f-692c-4cd7-9ee2-94b632c292cb" print("Hello world!") # + colab={"base_uri": "https://localhost:8080/"} id="xb2PZHp9n0Ir" outputId="b66253bb-6e20-41b9-8950-e569d286a691" fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() # + id="2C5i118ErKG7" class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] # + colab={"base_uri": "https://localhost:8080/"} id="fi-3UKtprOby" outputId="0d0f4f61-bc84-4a0c-bc5c-a71391befc9b" train_images.shape # + colab={"base_uri": "https://localhost:8080/"} id="N8a_0UYerbUj" outputId="3d6e3436-d079-4276-b6be-46b84f31561a" len(train_labels) # + colab={"base_uri": "https://localhost:8080/"} id="XSr8Z-u4rk2y" outputId="8d5c6688-5bef-46c8-fc64-b10fdf2876b2" train_labels # + colab={"base_uri": "https://localhost:8080/"} id="XMHVmHwUr1PS" outputId="c848e418-17e5-4e11-f9a4-a92ed6a6efc1" test_images.shape # + colab={"base_uri": "https://localhost:8080/"} id="37hMMQYksJmi" outputId="3f78e13b-4e89-4a35-88a3-03833703dd9b" len(test_labels) # + colab={"base_uri": "https://localhost:8080/", "height": 264} id="dvaXzFIGsXCa" outputId="65191f3d-09db-45b3-94c7-61a4ec965e5d" plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.grid(False) plt.show() # + id="NqYZOIp_st6S" train_images = train_images / 255.0 test_images = test_images / 255.0 # + colab={"base_uri": "https://localhost:8080/", "height": 588} id="c3NaE5Brsx4B" outputId="d05eadb3-9594-4e6c-8fbe-a514703054e9" plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap=plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() # + id="Hjmftl5XtaPj" model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(128, activation='relu'), keras.layers.Dense(10, activation='softmax') ]) # + id="tCtdwQt5uMjj" model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # + colab={"base_uri": 
"https://localhost:8080/"} id="Pu1beuCPvRLL" outputId="055828b5-a810-4c6d-f9c6-a1208890f4df" model.fit(train_images, train_labels, epochs=5) # + colab={"base_uri": "https://localhost:8080/"} id="W1H22PyiveB_" outputId="dd60ac17-0f19-475d-a100-1dedb22c5e2d" test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2) print('\n테스트 정확도:', test_acc) # + id="dcj690iKxBSa" predictions = model.predict(test_images) # + colab={"base_uri": "https://localhost:8080/"} id="unlSzRoQxD0b" outputId="1b14de5a-4a40-417d-f458-014d3d8e1ca1" predictions[0] # + colab={"base_uri": "https://localhost:8080/"} id="Ys35KcH2xGaT" outputId="aa2959a4-f3b1-4602-a129-00ad3c93f2a1" np.argmax(predictions[0]) # + colab={"base_uri": "https://localhost:8080/"} id="j2xxadNGxlO6" outputId="c6895cf3-7151-4386-efcd-d5ac3c49a414" test_labels[0] # + id="ZIdMjEPEx4r4" def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') # + colab={"base_uri": "https://localhost:8080/", "height": 203} id="j4f9jneSx6P6" outputId="63a72ec9-1b46-4ca0-b8fb-fd30ff795849" i = 0 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 203} id="qDlzH86Rx77C" outputId="39f8a9a9-e526-4ba4-b1ad-eddaa84b8ee8" i = 12 plt.figure(figsize=(6,3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1,2,2) plot_value_array(i, predictions, test_labels) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 538} id="QrZKFASux98K" outputId="d2008a89-5e5f-41c7-f022-9aec4d990444" # 처음 X 개의 테스트 이미지와 예측 레이블, 진짜 레이블을 출력합니다 # 올바른 예측은 파랑색으로 잘못된 예측은 빨강색으로 나타냅니다 num_rows = 5 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions, test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions, test_labels) plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="KNbB1Ci2yPmq" outputId="06ac126e-83d5-4679-f4a6-d1b8c047a245" # 테스트 세트에서 이미지 하나를 선택합니다 img = test_images[0] print(img.shape) # + colab={"base_uri": "https://localhost:8080/"} id="Wn9jJw2fySkk" outputId="c55e890f-8e30-4c30-ce2e-141d1beaa5a8" # 이미지 하나만 사용할 때도 배치에 추가합니다 img = (np.expand_dims(img,0)) print(img.shape) # + colab={"base_uri": "https://localhost:8080/"} id="5U51fEJXyVL6" outputId="bdafdcdf-7057-4d45-ce88-03b04f18aec8" predictions_single = model.predict(img) print(predictions_single) # + colab={"base_uri": "https://localhost:8080/", "height": 300} id="_OEFO9P_yXGu" outputId="4fa178a9-5417-47fd-9a05-3d7f4fbbffe3" 
plot_value_array(0, predictions_single, test_labels) _ = plt.xticks(range(10), class_names, rotation=45) # + id="LZrujFLOyagj" # MIT License # # Copyright (c) 2017 # # Permission is hereby granted, free of charge, to any person obtaining a # # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. # + colab={"base_uri": "https://localhost:8080/"} id="3kU6gefICTBc" outputId="f07fa35c-2bc1-4e62-dd91-9984d8dbb320" from google.colab import drive drive.mount('/content/drive') # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="z5XScG-EDcJd" outputId="aa4fc2a6-78e4-48fc-93b8-021a4b8d1850" import cv2 import numpy as np from matplotlib import pyplot as pl img = cv2.imread("/content/drive/MyDrive/Screenshot_20210528-120608_Samsung Internet.jpg") img_cvt=cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img_cvt) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="WJTnBtWME_cV" outputId="cba47630-0bfe-4166-e608-acdd1fec6eef" import cv2 import numpy as np from matplotlib import pyplot as pl img = cv2.imread("/content/drive/MyDrive/outer1.jpg") img_cvt=cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img_cvt) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="Ecu9rVdNFIVk" outputId="f57ea675-c663-4d2a-b785-06e00bbe4a1a" import cv2 import numpy as np from matplotlib import pyplot as pl img = cv2.imread("/content/drive/MyDrive/outer2.jpg") img_cvt=cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img_cvt) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="tWsWw0wfFK28" outputId="b3b4049f-cb05-49be-8169-5ae477c211e3" import cv2 import numpy as np from matplotlib import pyplot as pl img = cv2.imread("/content/drive/MyDrive/outer3.jpg") img_cvt=cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img_cvt) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="_FA9vQEQFNKM" outputId="5ef8d6fb-4e18-47c3-bf8a-7e97e85ec1b7" import cv2 import numpy as np from matplotlib import pyplot as pl img = cv2.imread("/content/drive/MyDrive/shose1.jpg") img_cvt=cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img_cvt) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="ZADFh5r-FV-z" outputId="16ed2a2f-7f09-47c1-d9b4-21f0b5029245" import cv2 import numpy as np from matplotlib import pyplot as pl img = cv2.imread("/content/drive/MyDrive/shose2.jpg") img_cvt=cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img_cvt) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="vEfCrAMzFZs6" outputId="0d913a4e-a20c-4d1d-cd8f-18f5fcd7ea29" import cv2 import numpy 
as np from matplotlib import pyplot as pl img = cv2.imread("/content/drive/MyDrive/shose3.jpg") img_cvt=cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img_cvt) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="6oKVuhTwFcUi" outputId="d1cd99c8-c68c-4379-f8ff-f8c990253d9a" import cv2 import numpy as np from matplotlib import pyplot as pl img = cv2.imread("/content/drive/MyDrive/watch1.jpg") img_cvt=cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img_cvt) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="kLdJbgtAFz-D" outputId="fdcf9039-c14e-4982-db2c-ac15e272e865" import cv2 import numpy as np from matplotlib import pyplot as pl img = cv2.imread("/content/drive/MyDrive/shose4.jpg") img_cvt=cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img_cvt) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 264} id="MdC8xIvJF3bj" outputId="1a4d73dd-65bc-40fe-c6e9-c40cd567c9df" import cv2 import numpy as np from matplotlib import pyplot as pl img = cv2.imread("/content/drive/MyDrive/shose4.jpg") img=cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) img = cv2.resize(img, dsize=(28, 28), interpolation=cv2.INTER_AREA) plt.imshow(img) plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="O7z4SU6cHSM0" outputId="a4af4889-179e-4080-98b1-983d4aa26038" img.shape # + id="F5QhCZ-SHwkM" img = img/255. # + colab={"base_uri": "https://localhost:8080/", "height": 264} id="3yWwC7xCJRN9" outputId="bb06c093-9887-4c98-a977-9045105256b8" plt.imshow(img, cmap='Greys_r') plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="K7qOKh5nJT8k" outputId="3f83947c-44d8-4ac6-a5a7-aef4ededa0a8" img # + colab={"base_uri": "https://localhost:8080/"} id="Mhn2VdXhJYbc" outputId="17dd1490-429c-4578-cfd8-e859c930f786" img.shape # + colab={"base_uri": "https://localhost:8080/"} id="7Neo-3lALHUF" outputId="6432cdee-8b8f-4c28-e080-77d17492c4a1" # 이미지 하나만 사용할 때도 배치에 추가합니다 img = (np.expand_dims(img,0)) print(img.shape) # + colab={"base_uri": "https://localhost:8080/"} id="AZiwmiAdLLGJ" outputId="5836797b-07ec-4748-d5f3-8999af4ab26e" predictions_single = model.predict(img) print(predictions_single) # + colab={"base_uri": "https://localhost:8080/", "height": 300} id="oqLYEchMLM1E" outputId="2278bc4e-ccbe-44e5-f908-20057de6500e" plot_value_array(0, predictions_single, test_labels) _ = plt.xticks(range(10), class_names, rotation=45) # + colab={"base_uri": "https://localhost:8080/", "height": 547} id="3Y5sexmYRH_U" outputId="88e69cb5-9a86-4f35-d657-b1620a9f49ef" import cv2 import numpy as np from matplotlib import pyplot as plt fname='t2_negative.png' img = cv2.imread("/content/drive/MyDrive/shose5.jpg") img =cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) img = cv2.resize(img, dsize=(28, 28), interpolation=cv2.INTER_AREA) img = img/255. 
plt.imshow(img) plt.show() img = (np.expand_dims(img,0)) predictions_single = model.predict(img) plot_value_array(0, predictions_single, test_labels) _ = plt.xticks(range(10), class_names, rotation=45) # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.4.1 # language: julia # name: julia-1.4 # --- # + [markdown] slideshow={"slide_type": "slide"} # ## Comments # + # This is a single-line comment # + slideshow={"slide_type": "fragment"} #= This is a multi-line comment, and this is the second line, and the thrid line, and the fourth line :D =# # - # ## Documentation (Docstrings) # + """ mulitply(x, y) Mulitply `x` and `y` together. # Arguments - `x::Integer`: a number - `y::Integer=1`: another number # Examples ```julia-repl julia> mulitply(2, 3) ``` """ mulitply(x, y) = println(x * y) mulitply(4, 4) # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.6.1 # language: julia # name: julia-0.6 # --- # + active="" # As you walk through the door, a glowing humanoid shape yells in your direction. "You there! Your state appears to be idle. Come help us repair the corruption in this spreadsheet - if we take another millisecond, we'll have to display an hourglass cursor!" # # The spreadsheet consists of rows of apparently-random numbers. To make sure the recovery process is on the right track, they need you to calculate the spreadsheet's checksum. For each row, determine the difference between the largest value and the smallest value; the checksum is the sum of all of these differences. # # For example, given the following spreadsheet: # # 5 1 9 5 # 7 5 3 # 2 4 6 8 # The first row's largest and smallest values are 9 and 1, and their difference is 8. # The second row's largest and smallest values are 7 and 3, and their difference is 4. # The third row's difference is 6. # In this example, the spreadsheet's checksum would be 8 + 4 + 6 = 18. # # What is the checksum for the spreadsheet in your puzzle input? # - spreadsheet = readdlm("inputs/day2.txt", Int32) @time sum(maximum(spreadsheet, 2) - minimum(spreadsheet, 2)) # + active="" # --- Part Two --- # # "Great work; looks like we're on the right track after all. Here's a star for your effort." However, the program seems a little worried. Can programs be worried? # # "Based on what we're seeing, it looks like all the User wanted is some information about the evenly divisible values in the spreadsheet. Unfortunately, none of us are equipped for that kind of calculation - most of us specialize in bitwise operations." # # It sounds like the goal is to find the only two numbers in each row where one evenly divides the other - that is, where the result of the division operation is a whole number. They would like you to find those numbers on each line, divide them, and add up each line's result. # # For example, given the following spreadsheet: # # 5 9 2 8 # 9 4 7 3 # 3 8 6 5 # In the first row, the only two numbers that evenly divide are 8 and 2; the result of this division is 4. # In the second row, the two numbers are 9 and 3; the result is 3. # In the third row, the result is 2. # In this example, the sum of the results would be 4 + 3 + 2 = 9. # # What is the sum of each row's result in your puzzle input? 
# - function evenly_divide(row) L = length(row) row = sort(row) for i = 1:L for j = (i+1):L r = row[j] // row[i] if r.den == 1 return r.num end end end end a = [5 9 2 8; 9 4 7 3; 3 8 6 5] sum(mapslices(evenly_divide, a, 2)) @time sum(mapslices(evenly_divide, spreadsheet, 2)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import argparse import logging import os import torch import sys sys.path.append(os.path.join(os.path.dirname("__file__"), '../multimodal_seq2seq_gSCAN/')) import random import copy from seq2seq.gSCAN_dataset import GroundedScanDataset from seq2seq.model import Model from seq2seq.train import train from seq2seq.predict import predict_and_save from tqdm import tqdm, trange from GroundedScan.dataset import GroundedScan from typing import List from typing import Tuple from collections import defaultdict from collections import Counter import json import numpy as np from seq2seq.gSCAN_dataset import Vocabulary from seq2seq.helpers import sequence_accuracy from experiments_utils import * FORMAT = "%(asctime)-15s %(message)s" logging.basicConfig(format=FORMAT, level=logging.DEBUG, datefmt="%Y-%m-%d %H:%M") logger = logging.getLogger(__name__) def isnotebook(): try: shell = get_ipython().__class__.__name__ if shell == 'ZMQInteractiveShell': return True # Jupyter notebook or qtconsole elif shell == 'TerminalInteractiveShell': return False # Terminal running IPython else: return False # Other type (?) except NameError: return False # Probably standard Python interpreter use_cuda = True if torch.cuda.is_available() and not isnotebook() else False device = "cuda" if use_cuda else "cpu" if use_cuda: logger.info("Using CUDA.") logger.info("Cuda version: {}".format(torch.version.cuda)) # - def evaluate_syntactic_dependency(flags): for argument, value in flags.items(): logger.info("{}: {}".format(argument, value)) # 1. preparing datasets logger.info("Loading datasets.") compositional_splits_data_path = os.path.join(flags["data_directory"], "dataset.txt") compositional_splits_preprocessor = DummyGroundedScanDataset(compositional_splits_data_path, flags["data_directory"], input_vocabulary_file=flags["input_vocab_path"], target_vocabulary_file=flags["target_vocab_path"], generate_vocabulary=False, k=flags["k"]) compositional_splits_dataset = \ GroundedScan.load_dataset_from_file( compositional_splits_data_path, save_directory=flags["output_directory"], k=flags["k"]) logger.info("Loading models.") # 2. 
load up models raw_example = None for _, example in enumerate(compositional_splits_dataset.get_examples_with_image(flags["split"], True)): raw_example = example break single_example = compositional_splits_preprocessor.process(raw_example) model = Model(input_vocabulary_size=compositional_splits_preprocessor.input_vocabulary_size, target_vocabulary_size=compositional_splits_preprocessor.target_vocabulary_size, num_cnn_channels=compositional_splits_preprocessor.image_channels, input_padding_idx=compositional_splits_preprocessor.input_vocabulary.pad_idx, target_pad_idx=compositional_splits_preprocessor.target_vocabulary.pad_idx, target_eos_idx=compositional_splits_preprocessor.target_vocabulary.eos_idx, **input_flags) model = model.cuda() if use_cuda else model _ = model.load_model(flags["resume_from_file"]) # TODO: let us enable multi-gpu settings here to save up times logger.info("Starting evaluations.") input_levDs = [] pred_levDs = [] accuracies = [] corrupt_accuracies = [] example_count = 0 limit = flags["max_testing_examples"] split = flags["split"] dataloader = [example for example in compositional_splits_dataset.get_examples_with_image(split, True)] random.shuffle(dataloader) # shuffle this to get a unbiased estimate of accuracies dataloader = dataloader[:limit] if limit else dataloader for _, example in enumerate(tqdm(dataloader, desc="Iteration")): # non-corrupt single_example = compositional_splits_preprocessor.process(example) output = predict_single(single_example, model=model, max_decoding_steps=30, pad_idx=compositional_splits_preprocessor.target_vocabulary.pad_idx, sos_idx=compositional_splits_preprocessor.target_vocabulary.sos_idx, eos_idx=compositional_splits_preprocessor.target_vocabulary.eos_idx, device=device) pred_command = compositional_splits_preprocessor.array_to_sentence(output[3], vocabulary="target") accuracy = sequence_accuracy(output[3], output[4][0].tolist()[1:-1]) accuracies += [accuracy] # corrupt corrupt_example = make_corrupt_example(example, flags["corrupt_methods"]) corrupt_single_example = compositional_splits_preprocessor.process(corrupt_example) corrupt_output = predict_single(corrupt_single_example, model=model, max_decoding_steps=30, pad_idx=compositional_splits_preprocessor.target_vocabulary.pad_idx, sos_idx=compositional_splits_preprocessor.target_vocabulary.sos_idx, eos_idx=compositional_splits_preprocessor.target_vocabulary.eos_idx, device=device) corrupt_pred_command = compositional_splits_preprocessor.array_to_sentence(corrupt_output[3], vocabulary="target") corrupt_accuracy = sequence_accuracy(corrupt_output[3], corrupt_output[4][0].tolist()[1:-1]) corrupt_accuracies += [corrupt_accuracy] input_levD = levenshteinDistance(example['input_command'], corrupt_example['input_command']) pred_levD = levenshteinDistance(pred_command, corrupt_pred_command) input_levDs.append(input_levD) pred_levDs.append(pred_levD) example_count += 1 exact_match = 0 for acc in accuracies: if acc == 100: exact_match += 1 exact_match = exact_match * 1.0 / len(accuracies) corrupt_exact_match = 0 for acc in corrupt_accuracies: if acc == 100: corrupt_exact_match += 1 corrupt_exact_match = corrupt_exact_match * 1.0 / len(corrupt_accuracies) logger.info("Eval Split={}, Original Exact Match %={}, Corrupt Exact Match %={}".format(split, exact_match, corrupt_exact_match)) return {"input_levDs" : input_levDs, "pred_levDs" : pred_levDs, "accuracies" : accuracies, "corrupt_accuracies" : corrupt_accuracies} if __name__ == "__main__": input_flags = vars(get_gSCAN_parser().parse_args()) 
saved_to_dict = evaluate_syntactic_dependency(flags=input_flags) split = input_flags["split"] corrupt_methods = input_flags["corrupt_methods"] if input_flags["save_eval_result_dict"]: torch.save(saved_to_dict, os.path.join( input_flags["output_directory"], f"eval_result_split_{split}_corrupt_{corrupt_methods}_dict.bin") ) else: logger.info("Skip saving results.") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data Input and Output # # Reading DataFrames from external sources using pd.read functions import numpy as np import pandas as pd # ## CSV # # ### CSV Input dataframe = pd.read_csv('train.csv') # ### CSV Output dataframe.to_csv('train2.csv',index=False) #If index=FALSE then csv does not store index values # ## Excel # # Using Pandas, one can read excel files, however it can only import data. It does not fetch formulae or any formatting/images/macros and having such things in excel files can crash the python function to crash and not execute successfully. # ### Excel Input pd.read_excel('Consumer.xlsx',sheet_name='Data1') # ### Excel Output dataframe.to_excel('Consumer2.xlsx',sheet_name='Sheet1') # ### The END # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={} colab_type="code" id="58OiOlTxMvwh" #concorrência - Códigos adaptados de [FORBES, Elliot. Learning Concurrency in Python: Build highly efficient, robust, and concurrent applications. 2017.] # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 955, "status": "ok", "timestamp": 1597495270821, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="e1YSKJBIjvwL" outputId="29836548-ac45-4d4e-9ad2-18fabe20d2cd" #identificando a quantidade de núcleos disponíveis para o sistema import multiprocessing multiprocessing.cpu_count() #conta a quantidade de núcleos disponíveis no sistema # + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" executionInfo={"elapsed": 13610, "status": "ok", "timestamp": 1597495628883, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="9QLPusRsmAvV" outputId="96d182f6-68c1-455a-9aa6-8f496e595681" #processamento sequencial import threading #módulo para a construção de threads import urllib.request #módulo para a requisição de url import time #módulo para tratar tempo #função criada para a realização do download das imagens def downloadImagens(imagePath, fileName): print("Realizando o download .... 
", imagePath) urllib.request.urlretrieve(imagePath, fileName) #realiza a requisição para a página da web t0 = time.time() #armazena o tempo inicial de execução for i in range(10): #imageName = "imagens/image-" + str(i) + ".jpg" #coloca o nome em cada uma das imagens baixadas imageName = str(i) #coloca o nome em cada uma das imagens baixadas downloadImagens("http://lorempixel.com/400/200/sports", imageName) #aplica o download da imagem t1 = time.time() #tempo final após a execução totalTime = t1 - t0 #diferença de tempo entre o valor inical de execução e o final print("Tempo toal de execução {}".format(totalTime)) # + colab={"base_uri": "https://localhost:8080/", "height": 374} colab_type="code" executionInfo={"elapsed": 2094, "status": "ok", "timestamp": 1597495930593, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="nZEnC-NFo1Nf" outputId="130f39c0-e51f-4194-ceae-814601ddc636" #exedcução do download de imagens via multiplas threads import threading import urllib.request import time def downloadImagens(imagePath, fileName): print("Realizando o download .... ", imagePath) urllib.request.urlretrieve(imagePath, fileName) print("Download Finalizado") def executeThread(i): imageName = str(i) #coloca o nome em cada uma das imagens baixadas downloadImagens("http://lorempixel.com/400/200/sports", imageName) t0 = time.time() threads = [] #lista vazia que vai conter todas as threads criadas #cria das 10 threads, cada uma delas será responsável por realizar o download for i in range(10): thread = threading.Thread(target=executeThread, args=(i,)) threads.append(thread) thread.start() #garante que as execuções foram finalizadas for i in threads: i.join() #calcula o tempo de execução t1 = time.time() totalTime = t1 - t0 print("Tempo total de execução {}".format(totalTime)) # + [markdown] colab_type="text" id="PviI5qNYOpZf" # **Compartilhamento de tempo** # + colab={} colab_type="code" id="R90lL9EROolp" import threading #módulo para a construção de multithreads import time #módulo para a medição de tempo import random #módulo para geração de números randomicos counter = 10 #contador inicial #função que adiciona um número para o contador def tarefaA(): global counter #variável global while counter < 30: counter += 1 #incrementa o contador print("A tarefaA incrementou o contador para {}".format(counter)) sleepTime = random.randint(0,1) #escolhe, randomicamente, um valor entre 0 e 3 time.sleep(sleepTime) #coloca a tread para dormir #função que retira um número do contador def tarefaB(): global counter #variável global while counter > -30: counter -= 1 #decrementa o contador print("A tarefaB decrementou o contador para {}".format(counter)) sleepTime = random.randint(0,3) #escolhe, randomicamente, um valor entre 0 e 3 time.sleep(sleepTime) #coloca a tread para dormir t0 = time.time() thread1 = threading.Thread(target=tarefaA) #instancia um objeto da classe Thread para executar #a tarefaA thread2 = threading.Thread(target=tarefaB) #instancia um objeto da classe Thread para executar #a tarefaB thread1.start() #inicia a tread1 thread2.start() #inicia thread2 thread1.join() #ceritifica o fim da execução thread2.join() #certifica do fim da execução t1 = time.time() print("Tempo total de execução {}".format(t1-t0)) # + [markdown] colab_type="text" id="ZEv721D1A0L3" # **Estados de execução de uma Thread** # + colab={"base_uri": "https://localhost:8080/", "height": 119} 
colab_type="code" executionInfo={"elapsed": 10728, "status": "ok", "timestamp": 1597520482102, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="ls7X8ruRA3cl" outputId="189306fe-cc18-4b04-bdbd-2005cedfde9f" import threading import time # função que, simplesmente, realiza o print de uma mensagem de execução e def threadWorker(): # Neste ponto é onde ocorre a mudança do 'Runnable' para o 'Running' print("A thread entrou no estado 'Running'") # quando chamamos a função time.sleep() a #thread entra para o estado de not-running print('A thread entrou no estado "Non-Running"') time.sleep(10) # quando a tarefa é finalizada, a thread é terminada print("Execução da thread foi finalizada") #garbage collector # neste momento a threada ainda "não possui estados" #não existe alocação de recursos print("Thread Criada") myThread = threading.Thread(target=threadWorker) #quando é chamado o método myThread.start(), python realiza a #alocação de recursos e, posteriormente, passa para o estado de # "Start" para o "Runnable", mas sem entrar em execução. print("Thread no estado 'Runnable'") myThread.start() #quando o metodo join é chamado, a thread passa para o estado # "terminated" myThread.join() print("A thread está no estado de 'Terminated'") # + [markdown] colab_type="text" id="bI5BpEbyEvhJ" # **Outro exemplo de execução de uma Thread** # + colab={"base_uri": "https://localhost:8080/", "height": 547} colab_type="code" executionInfo={"elapsed": 616, "status": "ok", "timestamp": 1597520582200, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="EMRp6YBvEznI" outputId="0fc252f9-dc7c-4767-db5c-085ed5b76ccd" import threading import time import random #função que recebe um número e cira uma thread def executeThread(i): print("Thread {} incializada \n".format(i)) sleepTime = random.randint(1,10) time.sleep(sleepTime) print("Thread {} finalizou a execução".format(i)) for i in range(10): thread = threading.Thread(target=executeThread, args=(i,)) thread.start() print("Número de threads ativas:" , threading.enumerate()) # + [markdown] colab_type="text" id="YJigjQKFpGqr" # **Herança com Threads** # + colab={"base_uri": "https://localhost:8080/", "height": 119} colab_type="code" executionInfo={"elapsed": 552, "status": "ok", "timestamp": 1597525715306, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="H5zfWtzOpKFT" outputId="597fe291-193b-4515-a9f3-fb9bdd7594d1" from threading import Thread #define a classe como filha da classe Thread class MinhaClasseThread(Thread): def __init__(self): print("Olá, construtor thread!!") Thread.__init__(self) #define a função run() que é chamada quando thread.start() def run(self): print("\nThread em execução.") #instanciando um objeto da classe criada minhaThread=MinhaClasseThread() print("Objeto criado") minhaThread.start() print("Thread inicalizada") minhaThread.join() print("Thread finalizada") # + [markdown] colab_type="text" id="Xai_uqZMvx-a" # **Multiplas Threads** # + colab={"base_uri": "https://localhost:8080/", "height": 374} colab_type="code" executionInfo={"elapsed": 534, "status": "ok", "timestamp": 1597526000492, "user": {"displayName": "", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="RUgcgZDUv1AG" outputId="a559df1e-b407-43ef-8e3b-bfee0c16ec28" import threading class minhaThread (threading.Thread): def __init__(self, threadID, nome, contador): threading.Thread.__init__(self) self.threadID = threadID self.nome = nome self.contador = contador def run(self): print("Iniciando thread %s com %d processos" %(self.name,self.contador)) processo(self.nome, self.contador) print("Finalizando " + self.nome) def processo(nome, contador): while contador: print( "Thread %s fazendo o processo %d" % (nome, contador)) contador -= 1 # Criando as threads thread1 = minhaThread(1, "Alice", 8) thread2 = minhaThread(2, "Bob", 8) # Comecando novas Threads thread1.start() thread2.start() threads = [] threads.append(thread1) threads.append(thread2) for t in threads: t.join() print("Saindo da main") # + [markdown] colab_type="text" id="sihVaGMQxYLE" # **Contando Threads ativas** # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 4597, "status": "ok", "timestamp": 1597526097273, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="CPFSAiFhxbo5" outputId="65cae8bc-8d7f-4a35-e713-803db5bcb747" import threading import time import random def minhaThread(i): print("Thread {}: inicializada".format(i)) time.sleep(random.randint(1,5)) print("\nThread {}: finalizada".format(i)) for i in range(random.randint(2,50)): thread=threading.Thread(target=minhaThread,args=(i,)) thread.start() time.sleep(4) print("Total de Threads ativas: {}".format(threading.active_count())) # + [markdown] colab_type="text" id="uTfSAOr0zGUG" # **Em qual thread estamos?** # + colab={"base_uri": "https://localhost:8080/", "height": 187} colab_type="code" executionInfo={"elapsed": 631, "status": "ok", "timestamp": 1597526158699, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="WJK3Sxw9zV75" outputId="71bb8462-8735-4ef0-d201-c22053a08240" import threading import time def threadTarget(): print("Thread atual: {}".format(threading.current_thread())) threads = [] for i in range(10): thread = threading.Thread(target=threadTarget) thread.start() threads.append(thread) for thread in threads: thread.join() # + [markdown] colab_type="text" id="txQicA-J00jQ" # **Main Thread** # + colab={"base_uri": "https://localhost:8080/", "height": 153} colab_type="code" executionInfo={"elapsed": 5514, "status": "ok", "timestamp": 1597526195086, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="RB5cC-Fbz2cS" outputId="b9527f90-fb89-4a11-acce-be10afeb2a13" import threading import time def myChildThread(): print("Thread Filha inicializada ----") time.sleep(5) print("Thread Atual ----------") print(threading.current_thread()) print("-------------------------") print("Main Thread -------------") print(threading.main_thread()) print("-------------------------") print("Thread Filha Finalizada") child = threading.Thread(target=myChildThread) child.start() child.join() # + [markdown] colab_type="text" id="OYaGnqlc1jfR" # **Identificando as Threads** # + 
colab={"base_uri": "https://localhost:8080/", "height": 122} colab_type="code" executionInfo={"elapsed": 684, "status": "ok", "timestamp": 1597526235658, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="h9Re7tQ91mGd" outputId="1174c99e-409e-42cd-dbea-e3493054f58e" import threading import time def myThread(): print("Thread {} inicializada".format(threading.currentThread().getName())) time.sleep(10) print("Thread {} finalizada".format(threading.currentThread().getName())) for i in range(4): threadName = "Thread-" + str(i) thread = threading.Thread(name=threadName, target=myThread) thread.start() print("{}".format(threading.enumerate())) # + [markdown] colab_type="text" id="4TubT6mI6Ypc" # **Deadlock** # + colab={"base_uri": "https://localhost:8080/", "height": 714} colab_type="code" executionInfo={"elapsed": 35482, "status": "error", "timestamp": 1597524147680, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="xlJV0qx36cLp" outputId="2aaa09dd-b11e-4c97-82fc-c69d3418a6ba" import threading import time import random class Filosofos(threading.Thread): def __init__(self, name, leftFork, rightFork): print("{} sentou na mesa".format(name)) threading.Thread.__init__(self, name=name) self.leftFork = leftFork self.rightFork = rightFork def run(self): print("{} começou a pensar".format(threading.currentThread().getName())) while True: time.sleep(random.randint(1,5)) print("{} parou de pensar".format(threading.currentThread().getName())) self.leftFork.acquire() time.sleep(random.randint(1,5)) try: print("{} pegou o garfo da esquerda.".format(threading.currentThread().getName())) self.rightFork.acquire() try: print("{} pegou os dois garfos e começou a comer".format(threading.currentThread().getName())) finally: self.rightFork.release() print("{} soltou o garfo da direita".format(threading.currentThread().getName())) finally: self.leftFork.release() print("{} soltou o garfo da esquerda".format(threading.currentThread().getName())) fork1 = threading.RLock() fork2 = threading.RLock() fork3 = threading.RLock() fork4 = threading.RLock() fork5 = threading.RLock() philosopher1 = Filosofos("Kant", fork1, fork2) philosopher2 = Filosofos("Aristotle", fork2, fork3) philosopher3 = Filosofos("Spinoza", fork3, fork4) philosopher4 = Filosofos("Marx", fork4, fork5) philosopher5 = Filosofos("Russell", fork5, fork1) philosopher1.start() philosopher2.start() philosopher3.start() philosopher4.start() philosopher5.start() philosopher1.join() philosopher2.join() philosopher3.join() philosopher4.join() philosopher5.join() # + [markdown] colab_type="text" id="r0E9F6W48Iud" # **Semáforos** # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 25999, "status": "ok", "timestamp": 1597524683216, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="K8uG50RB-yrP" outputId="363ef48b-c83c-4821-dd52-a8e81b182bad" import threading import time import random class TicketSeller(threading.Thread): ticketsSold = 0 def __init__(self, semaphore): threading.Thread.__init__(self); self.sem = semaphore print("Venda de ingressos inicializada") def run(self): global ticketsAvailable running = 
True while running: self.randomDelay() self.sem.acquire() if(ticketsAvailable <= 0): running = False else: self.ticketsSold = self.ticketsSold + 1 ticketsAvailable = ticketsAvailable - 1 print("{} acabou de vender 1 ({} restantes)".format(self.getName(), ticketsAvailable)) self.sem.release() print("Venda de ingresso {} Ingressos vendidos no total {}".format(self.getName(), self.ticketsSold)) def randomDelay(self): time.sleep(random.randint(0,4)/4) # definição do nosso semáforo semaphore = threading.Semaphore() # Número de ingressos disponíveis ticketsAvailable = 200 # os nossos vendedores sellers = [] for i in range(4): seller = TicketSeller(semaphore) seller.start() sellers.append(seller) # joining all our sellers for seller in sellers: seller.join() # + [markdown] colab_type="text" id="95fwmZEhOjUc" # **Queue em Python** # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 12721, "status": "ok", "timestamp": 1597527614849, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiC2kqFihAn3Ile03oz-6rO8qVjEHv1DhGQ0ngQ5g=s64", "userId": "06907869093485551957"}, "user_tz": 180} id="heafwE_rOiq9" outputId="8f39742d-1e36-4ff2-a763-fc34558e7e6f" #código adaptado de http://www.learn4master.com/algorithms/python-queue-for-multithreading # put(), get(), join() e task_done(). import threading import time from queue import Queue def consumidor(q): while(True): name = threading.currentThread().getName() print("Thread: {0} deseja obter um item da queue[tamanho atual = {1}] na data = {2} \n".format(name, q.qsize(), time.strftime('%H:%M:%S'))) item = q.get(); time.sleep(3) # demora 3 segundos para adicionar um item print("Thread: {0} terminou de processar o item da queue[tamanho atual = {1}] na data = {2} \n".format(name, q.qsize(), time.strftime('%H:%M:%S'))) q.task_done() def produtor(q): # a thread principal vai adicionar itens para a queue for i in range(10): name = threading.currentThread().getName() print("Thread: {0} começou a adicionar um item na queue[tamanho atual = {1}] na data = {2} \n".format(name, q.qsize(), time.strftime('%H:%M:%S'))) item = "item-" + str(i) q.put(item) print("Thread: {0} adicionou um item na queue[tamanho atual = {1}] na data = {2} \n".format(name, q.qsize(), time.strftime('%H:%M:%S'))) q.join() if __name__ == '__main__': q = Queue(maxsize = 3) threads_num = 3 # criação de 3 threads consumidoras for i in range(threads_num): t = threading.Thread(name = "ThreadConsumidora-"+str(i), target=consumidor, args=(q,)) t.start() # criação de uma thread produtora t = threading.Thread(name = "ThreadProdutora", target=produtor, args=(q,)) t.start() q.join() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Dark current and hot pixels # Every image from a CCD contains *dark current*, which are counts in a raw image caused by thermal effects in the CCD. # the dark current is modern CCDs is extremely small if the camera is cooled in some way. Cameras cooled with liquid nitrogen have nearly zero dark current while thermo-electrically cooled CCDs have a somewhat larger dark current. The dark current in a CCD operating at room temperature will typically be very large. 
# # Even a camera in which the dark current is *typically* very small will have a small fraction of pixels, called hot pixels, in which the dark current is much higher. # # The next notebook walks through how to identify those pixels and how to decide the right way to remove dark current from your data. # # EVERYTHING BELOW MOVES INTO LATER NOTEBOOK # + from astropy.nddata import CCDData from astropy.visualization import hist import matplotlib.pyplot as plt import numpy as np from convenience_functions import show_image # - dark_1000 = CCDData.read('master_dark_exposure_1000.0.fit') show_image(dark_1000, cmap='gray') plt.figure(figsize=(20, 10)) hist(dark_1000.data.flatten(), bins=100); plt.semilogy() plt.grid() plt.xlabel('Counts') plt.figure(figsize=(20, 10)) hist(dark_1000.data.flatten()/1000 * 1.5, bins=10000, density=True); plt.semilogy() plt.grid() #plt.xlim(0, .1) plt.xlabel('dark current, electrons per second') bop = hist(dark_1000.data.flatten()/1000 * 1.5, bins=10000, density=True); bop print(len(bop[0]), len(bop[1])) frac_pix = np.cumsum(bop[0] * (bop[1][1:] - bop[1][:-1])) plt.figure(figsize=(20, 10)) plt.bar(bop[1][:-1], 1 - frac_pix, (bop[1][1:] - bop[1][:-1])); plt.semilogy() #plt.xlim(0, .2) #plt.ylim(0.01, 1) plt.grid() (dark_1000.data.flatten()/1000 * 1.5).mean() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Convolutional Neural Network with CIFAR10 dataset # # Use the CIFAR10 [dataset](https://www.cs.toronto.edu/~kriz/cifar.html), which contains 50,000 32x32 color training images, labeled over 10 categories, and 10,000 test images. # # As this dataset is also included in Keras datasets, we just ask the keras.datasets module for the dataset. 
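# The `create_model` function further down in this notebook is intentionally left open ("Your model here"). As a reference point only, the cell below sketches one small Conv2D/MaxPooling stack that could be plugged into that placeholder; the architecture, the layer sizes, and the helper name `example_cifar10_model` are assumptions of this sketch, not a prescribed solution.

# +
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D

def example_cifar10_model(input_shape, num_classes):
    # Two convolution/pooling blocks followed by a small dense classifier.
    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=input_shape))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(num_classes, activation='softmax'))
    # Same compile settings as the commented-out line inside create_model below.
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# -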
# ## Import Classes and Functions # + import numpy as np from keras.datasets import cifar10 from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D from keras import backend as K from keras.utils import to_categorical # - # ## Initialize Random Number Generator # fix random seed for reproducibility seed = 7 np.random.seed(seed) # + num_classes = 10 # input image dimensions img_rows, img_cols = 32, 32 # - # ## Load The Dataset # The data, shuffled and split between train and test sets (X_train, y_train), (X_test, y_test) = cifar10.load_data() if K.image_data_format() == 'channels_first': X_train = X_train.reshape(X_train.shape[0], 3, img_rows, img_cols) X_test = X_test.reshape(X_test.shape[0], 3, img_rows, img_cols) input_shape = (3, img_rows, img_cols) else: X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 3) X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 3) input_shape = (img_rows, img_cols, 3) # ### Normalize the data X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 print('X_train shape:', X_train.shape) print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') # ### Convert class vectors to binary class matrices y_train = to_categorical(y_train, num_classes) y_test = to_categorical(y_test, num_classes) # ### Define The Neural Network Model def create_model(): model = Sequential() ## Your model here # model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model # ### Create the Model model = create_model() # ### Define training parameters batch_size = 1 epochs = 1 # ### Train the model model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(X_test, y_test)) # ### Evaluate the model score = model.evaluate(X_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: TF2 GPU # language: python # name: tf2_gpu # --- # # Assignment 4 # ## # ### December 2019 # *** import os ### The Notebook can be excecuted in GPU-Accelerated Mode or Not by adjusting the following flag ### use_gpu = True if not use_gpu: ### Disable GPU ### os.environ["CUDA_VISIBLE_DEVICES"] = "-1" else: ### Enable the first available GPU ### os.environ["CUDA_VISIBLE_DEVICES"] = "0" from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint from tensorflow.keras.models import Model, Sequential, load_model from tensorflow.keras.layers import Dense from tensorflow.keras.optimizers import RMSprop, SGD, Adam import matplotlib.pyplot as plt import matplotlib.image as mpimg import pandas as pd import numpy as np import random from sklearn.metrics import f1_score from collections import Counter from sklearn.dummy import DummyClassifier from sklearn.preprocessing import MultiLabelBinarizer import warnings warnings.filterwarnings("ignore") # %matplotlib inline # # My Data Structure # # * The Data were downloaded and stored in a folder named `DataFolder` at the same level as the Notebook. The folder includes the following folders/files: # # * `training-set` # # * `validation-set` # # * `test-set\dummy-class` : The dummy-class subfolder is needed for the flow_from_directory method of Keras Generator. 
# # * `train_concepts.csv` : the image IDs of the training set with their gold (i.e., known correct) tags, separated with `;`. # # * `val_concepts.csv` : the validation image IDs with their gold tags, separated with `;`. # # * `string_concepts.csv`: all the available tag IDs and their corresponding name, separated with tabs. data_path = 'DataFolder/' train_data_path = data_path + '/training-set/' validation_data_path = data_path + '/validation-set/' test_data_path = data_path + '/test-set/' train_tag_id = pd.read_csv(data_path + '/train_concepts.csv') val_tag_id = pd.read_csv(data_path + '/val_concepts.csv') concept_id = pd.read_csv(data_path + '/string_concepts.csv', sep='\t', header=None) concept_id.rename(columns={0: "tags", 1: "concepts"}, inplace=True) # ## Data Exploration # *** # * Plot some images. # * For those images, fetch their tag IDs and their tag names. # + ### Read 3 random Train Images rand_3_train = random.choices(os.listdir(train_data_path), k=3) img1 = mpimg.imread(train_data_path + rand_3_train[0]) img2 = mpimg.imread(train_data_path + rand_3_train[1]) img3 = mpimg.imread(train_data_path + rand_3_train[2]) ### Find their Tag ID's img1_tags = train_tag_id[train_tag_id['image'] == rand_3_train[0].split('.')[0]]['tags'].values[0].split(';') img2_tags = train_tag_id[train_tag_id['image'] == rand_3_train[1].split('.')[0]]['tags'].values[0].split(';') img3_tags = train_tag_id[train_tag_id['image'] == rand_3_train[2].split('.')[0]]['tags'].values[0].split(';') ### Find their tag names img1_names = [concept_id[concept_id['tags'] == f]['concepts'].values[0] for f in img1_tags] img2_names = [concept_id[concept_id['tags'] == f]['concepts'].values[0] for f in img2_tags] img3_names = [concept_id[concept_id['tags'] == f]['concepts'].values[0] for f in img3_tags] ### Plot them with their ID's and names f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize = (5, 11)) ax1.imshow(img1, aspect = "auto") ax1.title.set_text(img1_tags) ax1.set_xlabel(img1_names) ax2.imshow(img2, aspect = "auto") ax2.title.set_text(img2_tags) ax2.set_xlabel(img2_names) ax3.imshow(img3, aspect = "auto") ax3.title.set_text(img3_tags) ax3.set_xlabel(img3_names) f.subplots_adjust(hspace=0.5) # + ### Read 3 random Validation Images rand_3_val = random.choices(os.listdir(validation_data_path), k=3) ### Plot them img1 = mpimg.imread(validation_data_path + rand_3_val[0]) img2 = mpimg.imread(validation_data_path + rand_3_val[1]) img3 = mpimg.imread(validation_data_path + rand_3_val[2]) ### Find their Tag ID's img1_tags = val_tag_id[val_tag_id['image'] == rand_3_val[0].split('.')[0]]['tags'].values[0].split(';') img2_tags = val_tag_id[val_tag_id['image'] == rand_3_val[1].split('.')[0]]['tags'].values[0].split(';') img3_tags = val_tag_id[val_tag_id['image'] == rand_3_val[2].split('.')[0]]['tags'].values[0].split(';') ### Find their tag names img1_names = [concept_id[concept_id['tags'] == f]['concepts'].values[0] for f in img1_tags] img2_names = [concept_id[concept_id['tags'] == f]['concepts'].values[0] for f in img2_tags] img3_names = [concept_id[concept_id['tags'] == f]['concepts'].values[0] for f in img3_tags] ### Plot them with their ID's and names f, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize = (5, 11)) ax1.imshow(img1, aspect = "auto") ax1.title.set_text(img1_tags) ax1.set_xlabel(img1_names) ax2.imshow(img2, aspect = "auto") ax2.title.set_text(img2_tags) ax2.set_xlabel(img2_names) ax3.imshow(img3, aspect = "auto") ax3.title.set_text(img3_tags) ax3.set_xlabel(img3_names) f.subplots_adjust(hspace=0.5) # - # * How many 
tags are there in total? # *** ### Let's find the total number of tag ID's total_tag_ids = concept_id['tags'].unique().shape[0] print("The total number of Unique CUI tags is: {}".format(total_tag_ids)) # + ### We find 5528 unique CUI tags, which is consistent with the number the authors report after their preprocessing (page 3) # - def cui_to_concept(cui_code): return concept_id[concept_id['tags'] == cui_code]['concepts'].values[0] # * Which ones are the most frequent? # *** ### Let's count the frequency in the training set. array_of_all_train_tags = np.concatenate(train_tag_id['tags'].str.split(';').apply(lambda x: np.array(x)).values) count_array = Counter(array_of_all_train_tags) most_6 = count_array.most_common(6) print("The most Common TAG id's are:") for tag, times in most_6: print("CUI : {} | UMLS term : {} | Images: {}".format(tag,cui_to_concept(tag),times)) # + ### We find exactly the same results as the authors report (page 3) # - ### Let's count the frequency in the validation set. array_of_all_val_tags = np.concatenate(val_tag_id['tags'].str.split(';').apply(lambda x: np.array(x)).values) count_array = Counter(array_of_all_val_tags) most_6 = count_array.most_common(6) print("The most Common TAG id's are:") for tag, times in most_6: print("CUI : {} | UMLS term : {} | Images: {}".format(tag,cui_to_concept(tag),times)) # * How many tags are there per image? # *** ### Let's count the range of tags in the training set. array_of_train_tags_per_img = train_tag_id['tags'].str.split(';').apply(lambda x: len(x)).values print("Minimum tags assigned per image: {}".format(array_of_train_tags_per_img.min())) print("Maximum tags assigned per image: {}".format(array_of_train_tags_per_img.max())) print("On average tags assigned per image: {}".format(int(array_of_train_tags_per_img.mean()))) ### Let's count the range of tags in the validation set. array_of_val_tags_per_img = val_tag_id['tags'].str.split(';').apply(lambda x: len(x)).values print("Minimum tags assigned per image: {}".format(array_of_val_tags_per_img.min())) print("Maximum tags assigned per image: {}".format(array_of_val_tags_per_img.max())) print("On average tags assigned per image: {}".format(int(array_of_val_tags_per_img.mean()))) # *** # ## Data Preprocessing # # * Preprocess the images so that you can use them as input. # # * You may have to preprocess the labels as well. # + ### The first step is to create a pandas dataframe that has the image file name (.jpg) in one column and the one-hot encoded labels in another # + train_tag_id['tags'] = train_tag_id['tags'].str.split(';') train_tag_id['image'] = train_tag_id['image'] + '.jpg' val_tag_id['tags'] = val_tag_id['tags'].str.split(';') val_tag_id['image'] = val_tag_id['image'] + '.jpg' # + ### Note: We don't need to create and pass the labels as the UMLS terms; rather, we can keep the CUI codes as labels. ### We simply create a dictionary to map each predicted code back to its UMLS term when needed. # - tag_to_name_dict = concept_id.set_index('tags').to_dict()['concepts'] # + ### Because the data cannot fit in memory we will BATCH-PROCESS them using the Keras preprocessing library. # + ### First we create an Image Data Generator that will apply some transformations on the TRAIN images ### in order to boost our network's robustness. datagen = ImageDataGenerator( rotation_range=10, width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1, rescale=1./255.) ### With this image generator we choose to load a batch of 12 images from our train directory.
### Shuffle them for randomness. ### Rescale them to 72 by 72. ### And finally normalize them in the range 0-1. ### Furthermore, we also convert the labels associated with each one to multilabel encoded format. train_datagen = datagen.flow_from_dataframe( dataframe=train_tag_id, directory=train_data_path, x_col="image", y_col="tags", batch_size=12, seed=1453, shuffle=True, class_mode="categorical", classes= list(concept_id['tags'].unique()), target_size=(72,72)) # - ### Our Train Data Generetor has the attributes __len__ and batch_size that we will use to get our images in batches as well as validate the number of batches we get. print("We will perform {} iterations of {} image preprocessing steps.".format(train_datagen.__len__(), train_datagen.batch_size)) # + ### Let's get the first batch and observe its type / size # - print("Type: {}, Size: {}".format(type(train_datagen[0]),len(train_datagen[0]))) # + ### We can see that it is a tuple of (x_train, y_train) data. Let's see x_train data: # - train_datagen[0][0].shape # + ### It contains 12 Images of 72 x 72 pixels and 3 color channels ( Channel Last Format ). ### Now let's see y_train data. # - train_datagen[0][1].shape # + ### It contains 12 vectors of size 5528 with 1 at the position of the classes associated with each image and 0 elsewhere. ### So with this approach we managed to preprocess both the image as well as the labels AND make them fit into memory. # + ### Let's do the same with the validation data, only this time the only transformation needed is the rescaling. datagen2 = ImageDataGenerator(rescale=1./255.) val_datagen = datagen2.flow_from_dataframe( dataframe=val_tag_id, directory=validation_data_path, x_col="image", y_col="tags", batch_size=12, seed=1453, shuffle=False, class_mode="categorical", classes= list(concept_id['tags'].unique()), target_size=(72,72)) # - # ## Build a Baseline # # * Think of a baseline classifier that you could use to to measure your efforts. # # * That could be a classifier that produces always the most frequent labels. # # * Alternatively (and probably better), it could be a classifier that samples from the labels based on their frequency. # + ### Let's Create 2 Classifiers. We only need the train_y and val_y arrays, ### as the x_train and x_val data could not be used to adjust or tune any parameters in our dummy classifiers. ### For this reason we dont need to preprocess the images, instead we can use a simpler method suitable only for our labels. mlb = MultiLabelBinarizer(classes=list(concept_id['tags'].unique())) train_y = mlb.fit_transform(train_tag_id['tags'].values) val_y = mlb.fit_transform(val_tag_id['tags'].values) # - # ### Dummy Most Frequent Classifier ### dummy_clf = DummyClassifier() dummy_clf.fit(np.zeros(train_y.shape[0]),train_y) dummy_predictions = dummy_clf.predict(np.zeros(val_y.shape[0])) # ### Now lets create a classifier that samples from the lablels ### class MySamplingClassifier: def __init__(self): """ Due to the fact that the 'Random Picking' is implemented with a random uniform distribution, it is limited by a quantized lower bound on the probabilities it can output.So some of the extremely low probabilities could not be achieved numerically. Thus i chose to rescale all probabilities in a greater range without harming the validity of the results. 
""" self.n_classes_ = None self.freq_bins_ = None self.rescale_factor = 1000 * 1000 return def fit(self,train_data): self.n_classes_ = train_data.shape[1] self.freq_bins_ = self.rescale_factor * (train_data.sum(axis=0) / train_data.shape[0]) def predict_n_times(self,n): prediction = np.empty((n, self.n_classes_)) for i in range(0,n): rand = np.random.uniform(low=0.0, high=self.rescale_factor, size=(1,self.n_classes_))[0] pred = np.less(rand,self.freq_bins_) prediction[i] = pred return prediction scl = MySamplingClassifier() scl.fit(train_y) prob_predictions = scl.predict_n_times(val_y.shape[0]) # ### F1 Score Re-Implemented Here ### # + ### After inspecting the python script of the competition we can reduce it to the following function def f1_score_calc(y_pred, y_true): ### Initialize Score ### total_score = 0 ### Make Sure Data have the same Shape ### assert y_pred.shape == y_true.shape ### Initialize the number of eligible examples (examples with 0 classes in Ground Truth are ignored) ### eligible_examples = y_pred.shape[0] for idx in range(0,y_pred.shape[0]): pred_slice = y_pred[idx] true_slice = y_true[idx] if true_slice.sum() == 0: eligible_examples -= 1 else: total_score += f1_score(y_true=true_slice, y_pred=pred_slice, average='binary') return total_score / eligible_examples # - ### Let's see how the dummy classifier performs. f1_score_calc(y_pred=dummy_predictions, y_true=val_y) ### Let's see how the Frequency Based Sampling classifier performs. f1_score_calc(y_pred=prob_predictions, y_true=val_y) # + ### We dont observe any huge difference between the two methods ### # - # ## Build a Neural Network # # * Transfer Learning from DenseNet. # * We will implement a Freeze - Warmup - ReTrain Schema by: # * Fine tuning only the last layer with an adaptive optimizer at a relatively high learning rate. # * Re-train only the convolutional network with our linear classifier frozen using a non-adaptive optimizer at lower momentum and low learning rate. # + ### As a classic approach to transfer learning settings we will get a pretrained CNN and drop its last layer in order to fine tune it. ### After reading the paper we decide to choose the pretrained DenseNet model that can be directly loaded from Keras. from tensorflow.keras.applications import DenseNet121 as DN ### If the model named best_frozen_upper_model exists in the same directory skip this part and load the model instead, it is the model ### i trained with the following algorithm. if os.path.isfile('./best_frozen_upper_model.h5'): skip = True else: skip = False # - if not skip: ### Let's Create The Basis (DenseNet121) an perform the freeze-warmup schema. ### Get base model and repuropose(retrain) a new dense layer that will perform the multiclass classification. base_model = DN(include_top=False, weights='imagenet', input_tensor=None, input_shape=(72,72,3), pooling='avg') # Lets get the last layer of the base model base_model_output = base_model.output # let's add a fully-connected layer with sigmoid activtion function for multiclass classification. 
n_output_classes = concept_id['tags'].unique().shape[0] predictions = Dense(n_output_classes, activation='sigmoid')(base_model_output) # Global Model = Base Model + New MultiClass Classifier gmodel = Model(inputs=base_model.input, outputs=predictions) # Freeze all convolutional DenseNet layers for layer in base_model.layers: layer.trainable = False # Compile the model gmodel.compile(Adam(lr=0.0001), loss="binary_crossentropy", metrics=["mse","accuracy"]) # Train the model on the new data for a single epoch at small batch size (12) Training_Steps = train_datagen.n//train_datagen.batch_size Validation_Steps = val_datagen.n//val_datagen.batch_size ### Finally we will implement 2 Callbacks: ### Early Stopping will stop training if the validation error starts rising, indicating overfit. ### Model Checkpoint will save the best model up-to the current epoch. We will later load THAT model in order to ### continue with the training schema. es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=2) mc = ModelCheckpoint('best_frozen_upper_model.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True) gmodel.fit_generator(generator=train_datagen, steps_per_epoch=Training_Steps, validation_data=val_datagen, validation_steps=Validation_Steps, epochs=10, callbacks = [es,mc] ) # * Unfreezing and Retraining # ##### We load the best produced model so far and then we retrain following the 2nd step of our schema. # ##### We opt for a non adaptive optimizer with small learning rate in order to avoid the "Catastrophic Forgetting" Effect, # ##### that could possibly impede the already optimized kernels of our convnet instead of further fine tune them. ### Again skip the following part if the produced final_model exists localy if os.path.isfile('./final_model.h5'): skip = True else: skip = False if not skip: ### If there is a hanging gmodel object delete it try: del gmodel except: pass ### Let's load the best model so far with the pretrained model frozen ### gmodel = load_model('best_frozen_upper_model.h5') ### Now lets unfreeze all the Convolutional Network but freeze our Classifier(last layer) at the end ### for layer in gmodel.layers[:-1]: layer.trainable = True gmodel.layers[-1].trainable = False ### Re-compile the model again ### gmodel.compile(optimizer=RMSprop(lr=0.0001, momentum=0.9), loss='binary_crossentropy', metrics=["mse","accuracy"]) Training_Steps = train_datagen.n//train_datagen.batch_size Validation_Steps = val_datagen.n//val_datagen.batch_size es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=2) mc = ModelCheckpoint('final_model.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True) gmodel.fit_generator(generator=train_datagen, steps_per_epoch=Training_Steps, validation_data=val_datagen, validation_steps=Validation_Steps, epochs=10, callbacks = [es,mc] ) # ## Assessment # # * For each validation image, measure the [F1 score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) of the predicted tags. You can use the evaluation code of the competition, which can be found in the official site, in the Evaluation Methodology section of the [competition web page](https://www.imageclef.org/2019/medical/caption/). Actually that's probably the best course of action to avoid getting bogged down in differences in F1 score implementations. # # * Calculate the average for all the *validation* images. 
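# As a sanity check on the per-image averaging described above, scikit-learn can compute the same kind of quantity directly with `average='samples'` (an F1 per image, then the mean over images). The helper below is only an illustrative sketch (the name `mean_per_image_f1` is ours, not part of the competition kit); unlike `f1_score_calc` defined earlier, it does not skip images with no gold tags, so the two numbers can differ slightly.

# +
from sklearn.metrics import f1_score

def mean_per_image_f1(y_pred, y_true):
    # Per-image ("samples"-averaged) F1 over multi-hot indicator matrices.
    return f1_score(y_true=y_true, y_pred=y_pred, average='samples')
# -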
final_model = load_model('final_model.h5') # + ### Useful Functions Block ### # + def create_predictions(model, generator): generator.reset() raw_predictions = model.predict_generator(generator, len(generator)) return raw_predictions def create_result_files(filename, raw_predictions, generator, class_dict=None, threshold=None): ### If threshold is the default threshold set it to 0.16 ### if threshold is None: threshold = 0.16 threshold = np.ones_like(raw_predictions) * threshold boolean_results = np.greater(raw_predictions , threshold) * 1.0 predictions = [] ### We need the dictionary mapping each class-name (CUI code) to the logit the generator has internally. ### If the generator used to create the predictions is one of (train,validation) the class indicies can be ### retrieved from the class attribute "class_indicies", else the class mapping needs to be passed explicitly if class_dict is None: labels = generator.class_indices else: labels = class_dict labels = dict((v,k) for k,v in labels.items()) for row in boolean_results: l=[] for index in np.nonzero(row)[0]: l.append(labels[index]) predictions.append(",".join(l)) filenames=generator.filenames with open(filename, "w") as text_file: for image, class_predictions in zip(filenames,predictions): name = str(image).split('\\')[-1] string_to_print = name +'\t'+ ";".join(class_predictions.split(',')) print(string_to_print, file=text_file) return def convert_dataframe_to_one_hot(df,generator): """ Converts the label column of a dataframe according to a generators inner mapping. """ labels = generator.class_indices y_true_list = [] for row in df['tags'].values: indicies = np.empty(len(row)) multi_hot = np.zeros(5528) for idx, cui_code in enumerate(row): multi_hot[int(labels[cui_code])] = 1 y_true_list.append(multi_hot) y_true_validation = np.array(y_true_list) return y_true_validation # - ### Predict over the Validation Images ### raw_predictions_on_val_images = create_predictions(model=final_model, generator=val_datagen) # + ### Get the real y_values ### # - y_true_validation = convert_dataframe_to_one_hot(df=val_tag_id, generator=val_datagen) # ### Grid Searching on Threshold Values ### # ### I performed the following Grid Searches: # * From 0 to 0.5 @ 0.1 step and got peak at 0.2 # * From 0.1 to 0.2 @ 0.05 and got peak at 0.15 # * So here i present the ultimate grid search t_list = np.arange(0.14, 0.18, 0.01) f1_score_list = [] best_f_score = 0 best_threshold = 0 for t in t_list: threshold = t threshold = np.ones_like(raw_predictions_on_val_images) * threshold boolean_results = np.greater(raw_predictions_on_val_images , threshold) * 1.0 f = f1_score_calc(y_pred = boolean_results, y_true = y_true_validation) if f >= best_f_score: best_f_score = f best_threshold = t f1_score_list.append(f) import seaborn as sns sns.set_style('darkgrid') plt.figure(figsize=(9, 10)) plt.ylabel('F1 Score') plt.xlabel('Global Sigmoid Threshold') plt.axhline(y=best_f_score, linestyle=':', linewidth=2, color ='r') plt.axvline(x=best_threshold, alpha=0.6, linewidth=1, color='r') plt.plot(t_list, f1_score_list) plt.savefig('Grid_Search_on_Validation.jpg') plt.show() plt.close() print("Best F1 Score on Validation Dataset: {} @ {} threshold".format(round(best_f_score,5), best_threshold)) # + ### We found that the best F1 Score on Validation Dataset is 0.23475 at a threshold of 0.15 ### ### Now let's leave the result file for the validation images ### # - ### Create the Result File for Validation Images ### create_result_files(filename='final_val_results.txt', 
raw_predictions=raw_predictions_on_val_images, generator=val_datagen, class_dict=val_datagen.class_indices, threshold=0.15) ### Let's Create the GT File ### def create_gt_file(gt_df,filename='gt_val.txt'): filenames = gt_df['image'].values gt = gt_df['tags'].values with open(filename, "w") as text_file: for image, ground_truth in zip(filenames,gt): name = str(image) string_to_print = name +'\t'+ ";".join(ground_truth) print(string_to_print, file=text_file) return create_gt_file(gt_df=val_tag_id) # + ### Let's Run the Competition's Evaluation File ### # - # !python evaluate-f1.py final_val_results.txt gt_val.txt # * After finessing your model on the validation images, you will use it on the test images. # + ### Perform the same for Test Images ### ### We created a dummy folder named dummy class in order for this to work ### # + datagen3 = ImageDataGenerator(rescale=1./255.) final_generator = datagen3.flow_from_directory(directory=test_data_path, target_size=(72,72), classes=None, batch_size=10, shuffle=False, seed=1453) # - ### Predict over the Test Images ### final_generator.reset() raw_predictions_on_test_images = create_predictions(model=final_model, generator=final_generator) ### Create the Result File for Test Images ### create_result_files(filename='final_test_results.txt', raw_predictions=raw_predictions_on_test_images, generator=final_generator, class_dict=val_datagen.class_indices, threshold=0.15) # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.5.2 # language: julia # name: julia-0.5 # --- # ### Background # 1. C-field theory # * Projected fields # * Projected Gross-Pitaevskii Equation # 2. Spectral-Galerkin method # * $2M$ Rule for quadrature integration # * Integrating general nonlinearities # ### C-field definition # \begin{equation} # \psi(x,t)=\sum_n c_n(t)\phi_n(x), # \end{equation} # where the modes $\phi_n(x)$ are a complete and orthonormal set: # \begin{equation} # \int dx\;\phi^*_n(x)\phi_m(x)=\delta_{nm},\\ # \mathbb{1}=\sum_n|\phi_n\rangle\langle\phi_n|. # \end{equation} # Completeness then gives a construction for the Dirac delta: # \begin{equation} # \delta(x-x')=\sum_n\phi_n(x)\phi_n^*(x'). # \end{equation} # ### Defining the C-region # To define the C-region, we introduce the projection operator # \begin{equation} # \hat{\cal P}=\bar{\sum_n}|\phi_n\rangle\langle \phi_n| # \end{equation} # # where the bar notation refers to the cutoff defining the $C$-region. The quantum number is restricted to $n\leq M-1$, giving a total of $M$ modes in the 1D $C$-region, indexed by quantum numbers $n\in C=\{ 0,1,\dots, M-1\}$. # # The Dirac delta function for the $C$-region is # \begin{equation} # \langle x|\hat{\cal P}|x'\rangle=\bar{\sum_n}\phi_n(x)\phi_n^*(x')\equiv\delta(x,x'). # \end{equation} # ### Projected $\delta$-function # The projected delta function $\delta(x,x')$ plays a fundamental role in c-field theory. It forms the kernel of the projector acting on functions in position space: # \begin{equation} # {\cal P}f(x)\equiv \int dx'\;\delta(x,x')f(x') # \end{equation} # For any function residing in $C$, projection is the identity. Equivalently, the action of $\delta(x,x')$ is equivalent to the defining property of $\delta(x-x')$: # \begin{equation} # \int dx'\;\delta(x,x')f(x')=f(x). # \end{equation} # However, when considering $\delta(x,x')$, the order of arguments is significant. 
# ## Projected Gross-Pitaevskii equation # ### Introduction to PGPE # Given a basis as above, one start from the GPE # \begin{equation} # i\hbar\frac{\partial \psi}{\partial t}=\left(-\frac{\hbar^2\nabla^2}{2m}+V(x)+g|\psi|^2\right)\psi # \end{equation} # and substitute # \begin{equation} # \psi=\bar{\sum_n}c_n(t)\phi_n(x), # \end{equation} # and project onto $\phi_n(x)$. # ## Spectral basis # We express the equation in the spectral basis # \begin{equation} # (\hat T+\hat V)\phi_n(x)=\epsilon_n\phi_n(x). # \end{equation} # We arrive at the equation # \begin{equation} # i\hbar\frac{\partial c_n}{\partial t}=\epsilon_nc_n+g\int dx\;\phi_n^*(x)|\psi(x)|^2\psi(x) # \end{equation} # In the harmonic oscillator basis, our modes take the form # \begin{equation} # \phi_n(x)=e^{-x^2/2}P_n(x) # \end{equation} # where $P_n(x)$ contains the polynomial part of degree $n$ and normalization factors. # There are two obvious ways to express the integral. # #### 1. Direct substitution of basis decomposition # \begin{equation} # I_n\equiv \int dx\;\phi_n^*(x)|\psi(x)|^2\psi(x) # \end{equation} # Firstly, we can substitute the wavefunction: # \begin{equation} # I_n=\bar{\sum}_{mpq}c_m^*c_pc_q\int dx\;\phi_n^*(x)\phi_m^*(x)\phi_p(x)\phi_q(x), # \end{equation} # a triple summation over a set of known matrix elements for each mode $n$. For each $n$ this triple summation requires $O(M^3)$ arithmetic operations. The total cost of evaluating the PGPE right hand side via summation is $O(M^4)$. This approach is numerically expensive, and scales poorly with $M$. # #### 2. Quadrature method # This computational effort can be reduced via an alternate approach using Gaussian quadrature. # # >#### Philosophy: # We will transform onto a *particular* (in general non-uniform) spatial grid that will guarantee that the non-linear term is projected faithfully onto $\phi_n(x)$, *for all modes in the C-field*. # # For the integral at hand, the highest order term can be found by considering the cutoff mode for each wavefunction $\psi(x)$, and can be written as # \begin{equation} # I_{M-1}=\int dx\;e^{-2x^2}R_{4(M-1)}(x), # \end{equation} # where $R$ expresses the polynomial part. # ### The $2M$ rule # As we will see below, for this $C$ region consisting of $M$ modes, the nonlinear integral may be evaluated exactly by mapping the coefficients $c_n$ to a spatial grid of $2M$ points, corresponding to a Gaussian quadrature rule of order $2M$. Thus we can state our general rule, for a cubic nonlinearity of Gross-Pitaevskii form: # # >#### The 2M rule: # A field represented by $M$ modes may be evolved at working precision using a particular grid of $2M$ spatial points. # # The Fourier spectral method is a special case for which the wieght function becomes unity, and for which the $2M$ rule achieves spatial sampling at the *Nyquist frequency*. # ### Gauss-Hermite quadrature # For any integral of the form # \begin{equation} # I=\int_{-\infty}^{\infty} dx \;e^{-x^2}Q_{2N-1}(x), # \end{equation} # with $Q_{2N-1}(x)$ restricted to maximum polynomial degree $2N-1$ # \begin{equation} # Q_{2N-1}(x)=a_0+a_1x+\dots+a_{2N-1}x^{2N-1}, # \end{equation} # there exists a quadrature rule of order $N$, involving $N$ roots $x_k$, and $N$ weights $w_k$, $k\in 1,\dots, N$, that will evaluated all such integrals exactly. # # From a linear algebra point of view this result is not so surprising: The Gaussian-weighted integral of an arbitrary polynomial of degree $2N-1$ involves $2N$ uknown coefficients, and $2N$ known integrals. 
# Given $2N$ free parameters, here given by the roots and weights of the quadrature rule, the $2N$ unknowns may be solved for exactly, by solving a system of simultaneous linear equations. # # The integral may be evaluated "exactly" as # \begin{equation} # I=\sum_k w_kQ_{2N-1}(x_k), # \end{equation} # where the meaning of "exact" is that it is accurate to machine precision; typically the accuracy will be of order 1e-16. # ### Projected time evolution: work step # #### Algorithm # >1. cast the integral in the above form $I$, # 2. Transform to the quadrature grid $c_n\to \psi(x_k)$ # 2. Evaluate $Q_{2N-1}(x_k)$ for the resulting polynomial part, and # 3. Evaluate the sum weighted by $w_k$, giving the projection of $|\psi|^2\psi\to c_n'$ # # Step 1. is done by the change of variables $x=y/\sqrt{2}$ # we find # \begin{align} # I_{M-1}&=\int_{-\infty}^{\infty} dy\;e^{-y^2}\left[\frac{R_{4(M-1)}(y/\sqrt{2})}{\sqrt{2}}\right], # \end{align} # where the terms in parentheses correspond to $Q_{2N-1}(x)$ defined above. # # According to the rules of Gaussian quadrature, a rule of order $N$ will integrate all polynomials up to and including degree $2N-1$. A rule of order $2M$ will thus evaluate all terms up to order $4M-1$. This is the lowest order rule that will guarantee exactness. # Hence # >**All projected integrals in the PGPE for $M$ modes are exact within a Gaussian quadrature of order $2M$**. # A rule of order $n$ will integrate polynomials up to and including degree $2n-1$. Thus a rule of order $JM$ will integrate terms up to order $2JM-1$, giving numerically exact evalution of the nonlinear term for all modes in $C$. The variable transformation is # $$ x=\frac{y}{\sqrt{J}}$$ # giving the integral # $$I_{M-1}=\int_{-\infty}^{\infty} dy\;e^{-y^2} \frac{1}{\sqrt{J}}R_{2J(M-1)}(y/\sqrt{J}),$$ # All powers of $\psi(x)$, i.e. both odd and even, are accommodated by putting $2J\to K$, to give # $$ # I_{M-1}\equiv \int dx\;\phi_n^*(x)\psi(x)^{K-2}\psi(x), # $$ # to give the form # $$ # I_{M-1}=\int_{-\infty}^{\infty} dx \;e^{-Kx^2/2}R_{K(M-1)}(x). # $$ # A rule of order $KM/2$ will integrate all terms up to order $KM-1>K(M-1)$. The variable transformation is # $$x=\sqrt{\frac{2}{K}}y,$$ # giving the integral # $$I_{M-1}=\int_{-\infty}^{\infty} dy\;e^{-y^2} \sqrt{\frac{2}{K}}R_{K(M-1)}\left(\sqrt{\frac{2}{K}}y\right) ,$$ # ### General Anisotropic form # The most general formulation we require must accommodate different oscillator fequencies in different spatial directions: # $$ # I_{M-1}\equiv \int dx\;\phi_n^*(x)\psi(x)^{K-2}\psi(x), # $$ # to give the form # $$ # I_{M-1}=\int_{-\infty}^{\infty} dx \;e^{-K\omega x^2/2}R_{K(M-1)}(x). # $$ # A rule of order $KM/2$ will integrate all terms with polynomial degree up to $KM-1>K(M-1)$. 
The variable transformation is # $$x=\sqrt{\frac{2}{K\omega}}y,$$ # giving the integral # $$I_{M-1}=\int_{-\infty}^{\infty} dy\;e^{-y^2} \sqrt{\frac{2}{K\omega}}R_{K(M-1)}\left(\sqrt{\frac{2}{K\omega}}y\right).$$ # # In the package `ProjectedGPE.jl`, the function that sets up the roots `x`, weights `w`, and transformation matrix `T` to evaluate integrals of this form is # `nfieldtrans.jl` # You should be able to run this example to compute wavefunction norm: using ProjectedGPE; M = 20 x,w,T = nfieldtrans("Hermite",M,2) #2 field product in the integral to be evaluated c = randn(M) + im*randn(M); ψ = T*c N = sum(w.*abs(ψ).^2) # and compare with the direct summation of coefficients: Nsum = sum(abs(c).^2) abs(N-Nsum) # Integrals of higher order field products are computed with comparable accuracy. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + a = -1 b = 1 c = 0 c = a + b num = int(input(" Enter Limit : ")) print(" Fibonacci Series Till ",num," Is") while(c < num): print(" ",c,end = " ") a = b b = c c = a + b # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CX 4230, Project 2A # # ## Setup # # Run these code cells to get started. # Our usual multidimensional array tools import numpy as np import scipy as sp import scipy.sparse import math from scipy.integrate import odeint # Core plotting support import matplotlib.pyplot as plt from matplotlib import animation # %matplotlib inline # ## Conceptual model # # There are several possible conceptual models. Recall the one discussed in class: # # * Model three population types as _fractions_ of the total population that vary continuously in time: $S \equiv S(t)$ for suscetible people, $I \equiv I(t)$ for infected people, and $R \equiv R(t)$ for recovered people. # * Ignore the spatial dimension and assume the subpopulations are _well-mixed_, i.e., everyone is connected to everyone else. # * Let $\tau$ be a rate of infection spread, having inverse-time units. # * Let $\rho$ be a rate of recovery, also having inverse-time units. # # From these starting assumptions, we developed a conceptual model of the system as a system of _ordinary differential equations_ (ODEs): # # $$ # \begin{eqnarray} # \dfrac{dI(t)}{dt} & = & \tau I(t) S(t) - \rho I(t) \\ # \dfrac{dS(t)}{dt} & = & -\tau I(t) S(t) \\ # \dfrac{dR(t)}{dt} & = & \rho I(t) # \end{eqnarray} # $$ # # > Recall that the $\rho I(t)$ term measures the fraction of the infected population that recovers per unit time. We also discussed a variation on this term in class, which expresses recovery using a _delay_ term, $I(t-k)$; such a term results in a [_delay differential equation_](https://en.wikipedia.org/wiki/Delay_differential_equation). TAU = 0.1 RHO = 0.5 L_x = 1.0 # Length of x L_y = 1.0 # Length of y C = 0.1 # Diffusion coefficient V = 0.7 # Vaccinate coefficient N = 15 # Number of (spatial) grid points x-axis. 
The grid is M x N M = 15 # Number of grid points y-axis dx = float (L_x) / (N-1) # Grid resolution dy = float (L_y) / (M-1) I0 = 1.0 S0 = 1.0 - I0 R0 = 0.0 print ("=== Simulation parameters ===") print (" Domain of x: [0, L_x=%g]" % L_x) print (" Domain of y: [0, L_y=%g]" % L_y) print (" Diffusion coefficient: C=%g (units: [len]^2 / [time])" % C) print (" Grid resolution: n=%d points ==> dx=%g [len]" % (N, dx)) print (" Grid resolution: m=%d points ==> dx=%g [len]" % (M, dy)) # ## Simulator # # To build a "simulator" for this system of ODEs, we can use a black-box ODE solver. Let's use Scipy's [`odeint()`](http://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.integrate.odeint.html#scipy.integrate.odeint). # Recall that a generic ODE solver expects to be given a system of the form, # # $$ # \dfrac{d\vec{y}(t)}{dt} = f(t, \vec{y}), # $$ # # where $\vec{y}(t)$ is the vector whose $m$ components are the continuous state variables of the system, # # $$ # \begin{eqnarray} # \vec{y}(t) # & \equiv & \left(\begin{array}{c} # y_0(t) \\ # y_1(t) \\ # \vdots \\ # y_{m-1}(t) # \end{array}\right), # \end{eqnarray} # $$ # # and $\vec{f}(t, \vec{y})$ is a vector function having $m$ components that depend on the time $t$ and the state variable (components) of $\vec{y}$: # # $$ # \begin{eqnarray} # \vec{f}(t, \vec{y}) # & \equiv & \left(\begin{array}{c} # f_0(t, \vec{y}) \\ # f_1(t, \vec{y}) \\ # \vdots \\ # f_{m-1}(t, \vec{y}) # \end{array}\right). # \end{eqnarray} # $$ # Function to compute a centered impulse as an initial condition def gen_initial (m,n,y0,dx,dy, impulse=None): u = np.zeros ((m + 2,n + 2, 3), dtype=float) i_mid = int (n/2) j_mid = int (m/2) for x in range(1, n + 1): for y in range(1, m + 1): u[y][x][:] = np.array([0,1,0]) if impulse is None: u[j_mid][i_mid][:] = y0 u[j_mid+3][i_mid-3][:] = y0 else: for point in impulse: (x,y) = (int(point[0]/dx)+1,int(point[1]/dx)+1) u[x,y] = y0 return u def plot(G, vmin, vmax): plt.pcolor (G, vmin=vmin, vmax=vmax, edgecolor='black') ticks = [vmin, vmax] plt.colorbar (ticks=ticks) plt.axes().set_xlim([0,G.shape[0]]) plt.axes().set_ylim([0,G.shape[1]]) plt.axes().set_aspect('equal') def x_axis_diffusion (G, x, y, c=C, dx=dx): if ( x == 1 ): return c*( 0 - G[y][x][0] + G[y][x+1][0])/dx**2 elif (x == (G.shape[1] - 2) ): return c*( G[y][x-1][0] - G[y][x][0])/dx**2 else: return c*(G[y][x-1][0] - 2*G[y][x][0] + G[y][x+1][0])/dx**2 def y_axis_diffusion (G, x, y, c=C, dy=dy): if ( y == 1 ): return c*( 0 - G[y][x][0] + G[y+1][x][0])/dy**2 elif (y == (G.shape[0] - 2) ): return c*( G[y-1][x][0] - G[y][x][0])/dy**2 else: return c*(G[y-1][x][0] - 2*G[y][x][0] + G[y+1][x][0])/dy**2 def f_sirode (G, t, m=M, n=N, tau=TAU, rho=RHO, c=C, dx=dx, dy=dy): assert type (G) is np.ndarray assert type (t) is float G = np.reshape(G, (m+2, n+2,3), order='C') newG = np.zeros((m+2, n+2, 3), dtype=float) # @YOUSE: Compute `f[:]` for x in range(1, G.shape[1]-1): for y in range(1, G.shape[0]-1): f = np.zeros (np.array(G[y][x]).shape) x_diff = x_axis_diffusion(G, x, y, c, dx) y_diff = y_axis_diffusion(G, x, y, c, dy) cell = G[y][x] f[0] = tau*cell[0]*cell[1] - rho*cell[0] + (x_diff + y_diff)*cell[1] f[1] = -tau*cell[0]*cell[1] - (x_diff + y_diff)*cell[1] f[2] = rho*cell[0] newG[y][x][:] = f return newG.ravel() def sim (c, m, n, tau, rho, i0, dx, dy, t_max, n_t, verbose=False, impulse=None): y0 = np.array ([i0, 1.0-i0, 0.0]) G = gen_initial (m, n, y0, dx, dy, impulse) t_all = np.linspace (0.0, t_max, n_t) U = odeint (f_sirode, G.ravel(order='C'), t_all, args=(m,n, tau, 
rho,c,dx,dy)) U = np.reshape(U, (n_t, m+2, n+2, 3), order='C') if verbose: print ("Time points:", t_all) return (U, t_all) def show_results(U, n_t): minInit = np.amin(U[0,:,:,:]) maxInit = np.amax(U[0,:,:,:]) plt.figure(1) plt.title('Initial Infected') plot(U[0,:,:,0], minInit, maxInit) #print('\n\n===================Initial Infected===================') #print(U[0,:,:,0]) plt.figure(2) plt.title('Initial Susceptible') plot(U[0,:,:,1], minInit, maxInit) #print('\n\n===================Initial Susceptible===================') #print(U[0,:,:,1]) plt.figure(3) plt.title('Initial Recovered') plot(U[0,:,:,2], minInit, maxInit) #print('\n\n===================Initial Recovered===================') #print(U[0,:,:,2]) minFinal = np.amin(U[n_t-1,:,:,:]) maxFinal = np.amax(U[n_t-1,:,:,:]) plt.figure(4) plt.title('Final Infected') plot(U[n_t - 1,:,:,0], minFinal, maxFinal) #print('\n\n===================Final Infected===================') #print(U[n_t - 1,:,:,0]) plt.figure(5) plt.title('Final Susceptible') plot(U[n_t - 1,:,:,1], minFinal, maxFinal) #print('\n\n===================Final Susceptible===================') #print(U[n_t - 1,:,:,1]) plt.figure(6) plt.title('Final Recovered') plot(U[n_t - 1,:,:,2],minFinal, maxFinal) #print('\n\n===================Final Recovered===================') #print(U[n_t - 1,:,:,2]) # + # Test code: Display a plotting widget that lets you adjust # the simulation parameters and visualize the results. from ipywidgets import interact def isim (tau=TAU, rho=RHO, i0=1, t_max=10.0, n_t=10, c=C, m=M, n=N, lx=L_x, ly=L_y): assert t_max > 0.0 dx = float (lx) / (n-1) # Grid resolution dy = float (ly) / (m-1) (U, t_all) = sim(c, m, n, tau, rho, i0, dx, dy, t_max, n_t) show_results(U, n_t) interact (isim , tau=(0.0, 2, 0.1) , rho=(0.0, 2.0, 0.1) , i0=(0.0, 1.0, 0.01) , t_max=(0.0, 20, 1.0) , n_t=(1, 500, 10) , c=(0, 0.5, 0.1) , m=(5, 30, 1) , n=(5, 30, 1) , lx=(0.5, 2, 0.1) , ly=(0.5, 2, 0.1) ) # - # ## Challenge a t_max = 100 t_n = 1000 (U, t_all) = sim (0.2, 11, 11, 0.8, 1.0/4, 0.5, 0.1, 0.1, t_max, t_n, verbose=False, impulse=[(0.5,0.5)]) #sim (c, m, n, tau, rho, i0, dx, dy, t_max, n_t, verbose=False, impulse=None) i_means = [] s_means = [] r_means = [] for t in range(len(t_all)): i_mean = np.sum(U[t,:,:,0])/11**2 s_mean = np.sum(U[t,:,:,1])/11**2 r_mean = np.sum(U[t,:,:,2])/11**2 i_means.append(i_mean) s_means.append(s_mean) r_means.append(r_mean) assert abs(( s_mean + i_mean + r_mean ) - 1.0) < 0.0001 if (i_mean < 1e-5) or (s_mean < 1e-5): plt.figure(1) plt.title('Infected and Recovered at time ' + str(t_all[t])) plt.plot (t_all[0:t+1], i_means, 'r', label = 'Infected') plt.plot (t_all[0:t+1], r_means,'g', label = 'Recover') plt.plot (t_all[0:t+1], s_means, 'y', label = 'Susceptible') plt.legend(loc='center right') break # ## Challenge b # Function to compute a centered impulse as an initial condition def gen_initial_b (m,n,y0,dx,dy, impulse=None): u = np.zeros ((m + 2,n + 2, 4), dtype=float) i_mid = int (n/2) j_mid = int (m/2) for x in range(1, n + 1): for y in range(1, m + 1): u[y][x][:] = np.array([0,1,0,0]) if impulse is None: u[j_mid][i_mid][:] = y0 u[j_mid+3][i_mid-3][:] = y0 else: for point in impulse: (x,y) = (int(point[0]/dx)+1,int(point[1]/dx)+1) u[x,y] = y0 return u def f_sirode_b (G, t, m=M, n=N, tau=TAU, rho=RHO, c=C, v=V, dx=dx, dy=dy): assert type (G) is np.ndarray assert type (t) is float G = np.reshape(G, (m+2, n+2,4)) newG = np.zeros((m+2, n+2, 4), dtype=float) # @YOUSE: Compute `f[:]` for x in range(1, G.shape[1]-1): for y in range(1, G.shape[0]-1): f = 
np.zeros (np.array(G[y][x]).shape) x_diff = x_axis_diffusion(G, x, y, c, dx) y_diff = y_axis_diffusion(G, x, y, c, dy) cell = G[y][x] f[0] = tau*cell[0]*cell[1] - rho*cell[0] + (x_diff + y_diff)*cell[1] f[1] = -tau*cell[0]*cell[1] - (x_diff + y_diff)*cell[1] - v*cell[1]*cell[0]/(cell[0]+cell[1]) f[2] = rho*cell[0] f[3] = v*cell[1]*cell[0]/(cell[0]+cell[1]) newG[y][x] = f return np.reshape(newG, (m+2)*(n+2)*4) def sim_b (c, v, m, n, tau, rho, i0, dx, dy, t_max, n_t, verbose=False, impulse=None): y0 = np.array ([i0, 1.0-i0, 0.0, 0.0]) G = gen_initial_b (m, n, y0, dx, dy, impulse) t_all = np.linspace (0.0, t_max, n_t) U = odeint (f_sirode_b, np.reshape(G, (m+2)*(n+2)*4), t_all, args=(m,n, tau, rho,c,dx,dy)) U = np.reshape(U, (n_t, m+2, n+2, 4)) if verbose: print ("Time points:", t_all) return (U, t_all) def show_results_b(U, n_t): minInit = np.amin(U[0,:,:,:]) maxInit = np.amax(U[0,:,:,:]) plt.figure(1) plt.title('Initial Infected') plot(U[0,:,:,0], minInit, maxInit) #print('\n\n===================Initial Infected===================') #print(U[0,:,:,0]) plt.figure(2) plt.title('Initial Susceptible') plot(U[0,:,:,1], minInit, maxInit) #print('\n\n===================Initial Susceptible===================') #print(U[0,:,:,1]) plt.figure(3) plt.title('Initial Recovered') plot(U[0,:,:,2], minInit, maxInit) #print('\n\n===================Initial Recovered===================') #print(U[0,:,:,2]) plt.figure(4) plt.title('Initial Vaccinate') plot(U[0,:,:,3], minInit, maxInit) #print('\n\n===================Initial Recovered===================') #print(U[0,:,:,3]) minFinal = np.amin(U[n_t-1,:,:,:]) maxFinal = np.amax(U[n_t-1,:,:,:]) plt.figure(5) plt.title('Final Infected') plot(U[n_t - 1,:,:,0], minFinal, maxFinal) #print('\n\n===================Final Infected===================') #print(U[n_t - 1,:,:,0]) plt.figure(6) plt.title('Final Susceptible') plot(U[n_t - 1,:,:,1], minFinal, maxFinal) #print('\n\n===================Final Susceptible===================') #print(U[n_t - 1,:,:,1]) plt.figure(7) plt.title('Final Recovered') plot(U[n_t - 1,:,:,2],minFinal, maxFinal) #print('\n\n===================Final Recovered===================') #print(U[n_t - 1,:,:,2]) plt.figure(8) plt.title('Final Vaccinate') plot(U[n_t - 1,:,:,3], minInit, maxInit) #print('\n\n===================Initial Recovered===================') #print(U[n_t - 1,:,:,3]) # + def isim_b (tau=TAU, rho=RHO, i0=1, t_max=10.0, n_t=10, c=C, v=V, m=M, n=N, lx=L_x, ly=L_y): assert t_max > 0.0 dx = float (lx) / (n-1) # Grid resolution dy = float (ly) / (m-1) (U, t_all) = sim_b(c, v, m, n, tau, rho, i0, dx, dy, t_max, n_t) show_results_b(U, n_t) interact (isim_b , tau=(0.0, 2, 0.1) , rho=(0.0, 2.0, 0.1) , i0=(0.0, 1.0, 0.01) , t_max=(0.0, 20, 1.0) , n_t=(1, 500, 10) , c=(0, 0.5, 0.1) , v=(0, 1, 0.1) , m=(5, 30, 1) , n=(5, 30, 1) , lx=(0.5, 2, 0.1) , ly=(0.5, 2, 0.1) ) # + t_max = 100 t_n = 1000 (U, t_all) = sim_b (0.2, 0.1, 11, 11, 0.8, 1.0/4, 0.5, 0.1, 0.1, t_max, t_n, verbose=False, impulse=[(0.5,0.5)]) #sim_b (c, v, m, n, tau, rho, i0, dx, dy, t_max, n_t, verbose=False, impulse=None): i_means = [] s_means = [] r_means = [] v_means = [] for t in range(len(t_all)): i_mean = np.sum(U[t,:,:,0])/11**2 s_mean = np.sum(U[t,:,:,1])/11**2 r_mean = np.sum(U[t,:,:,2])/11**2 v_mean = np.sum(U[t,:,:,3])/11**2 i_means.append(i_mean) s_means.append(s_mean) r_means.append(r_mean) v_means.append(v_mean) assert abs(( s_mean + i_mean + r_mean + v_mean) - 1.0) < 0.0001 if (i_mean < 1e-5) or (s_mean < 1e-5): plt.figure(1) plt.title('At time ' + str(t_all[t])) 
plt.plot (t_all[0:t+1], i_means, 'r', label = 'Infected') plt.plot (t_all[0:t+1], r_means,'g', label = 'Recover') plt.plot (t_all[0:t+1], s_means, 'y', label = 'Susceptible') plt.plot (t_all[0:t+1], v_means, 'b', label = 'Vaccinated') plt.legend(loc='center right') break # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 環境構築 # ### (option) Colab 上で実行する場合は以下を実行 # + import torch torch.cuda.get_device_name(0) # - from google.colab import drive drive.mount('/content/drive') # %cd /content # !git clone https://github.com/rs1004/object_detection.git # %cd /content/object_detection/data # !unzip -q /content/drive/MyDrive/data/voc07+12.zip # %cd .. # !ls -1 ./data/voc07+12/train | wc -l # !ls -1 ./data/voc07+12/val | wc -l # ### 共通 # !pip install -e . # ## 設定 CONFIG = './configs/fcos/fcos_512_r50_voc_aug.py' # ## 学習 # + tags=[] # !python src/train.py $CONFIG # - # ## 予測・評価 # !python src/test.py $CONFIG # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 12 - Lists # ------------------------------- # Lists are ordered collections of data items, just like tuples. The major difference with tuples is that lists are mutable. This makes them a highly flexible data structure, that you will find many uses for. # --- # ## List basics # A list is a collection of elements. # # The elements of a list are *ordered*. Because they are ordered, you can access each of the elements of a list using an index, just like you can access the characters of a string, and just like you can access the elements of a tuple. # # In Python, lists are recognizable from the fact that they enclose their elements in square brackets (`[]`). You can get the number of elements in a list by using the `len()` function. You can use a `for` loop to traverse the elements of a list. You can mix data types in a list. You can apply the `max()`, `min()` and `sum()` functions to a list. You can test for the existence of an element in a list using the `in` operator (or for the non-existence by using `not in`). # + fruitlist = ["apple", "banana", "cherry", 27, 3.14] print( len( fruitlist ) ) for element in fruitlist: print( element ) print( fruitlist[2] ) numlist = [314, 315, 642, 246, 129, 999] print( max( numlist ) ) print( min( numlist ) ) print( sum( numlist ) ) print( 100 in numlist ) print( 999 in numlist ) # - # **Exercise**: Write a `while` loop to print the elements of a list. # Traversing a list using while. fruitlist = ["apple", "banana", "cherry ", "durian", "orange"] # Apart from the square brackets, lists seem to be a lot like tuples. Yet there is a big difference... # --- # ## Lists are mutable # Because lists are mutable, you can change the contents of a list. # # To overwrite an element of a list, you can assign a new value to it. fruitlist = ["apple", "banana", "cherry", "durian", "orange"] print( fruitlist ) fruitlist[2] = "strawberry" print( fruitlist ) # You can also overwrite list slices by assigning a new list to the slice. The slice you remove need not be of equal length to the new list you insert. 
fruitlist = ["apple", "banana", "cherry", "durian", "orange"] print( fruitlist ) fruitlist[1:3] = ["raspberry", "elderberry", "strawberry", "blueberry"] print( fruitlist ) # You can insert new elements into a list by assigning them to an empty slice. fruitlist = ["apple", "banana", "cherry", "durian", "orange"] print( fruitlist ) fruitlist[1:1] = ["raspberry", "elderberry", "strawberry", "blueberry"] print( fruitlist ) # You can delete elements from a list by assigning an empty list to a slice. fruitlist = ["apple", "banana", "cherry", "durian", "orange"] print( fruitlist ) fruitlist[1:3] = [] print( fruitlist ) # Using slices and assignments, you can adapt a list in any way that you like. However, it is easier to change lists using methods. There are many helpful methods available, which I am going to discuss below. # # **Exercise**: Change the list in the code block below by turning every word in the list into a word consisting of only capitals. At this point in the notebook, the way to do that is by using a while loop that uses a variable i that starts at 0 and runs up to `len(fruitlist)-1`. This is an index for all the elements of `fruitlist`, which you can change by simply assigning a new value to them. # Making capital fruits. fruitlist = ["apple", "banana", "cherry", "durian", "orange"] # --- # ## Lists and operators # Lists support the use of the operators `+` and `*`. These operators work similar as to how they work for strings. # # You can add two lists together with the `+` operator, the result of which is a list which contains the elements of both lists involved. Of course, you have to assign the result to a variable to store it. # # You can multiply a list by a number to create a list that contains the elements of the original list, repeated as often as the number indicates. This can be a fast approach to create a list with all equal elements. fruitlist = ["apple", "banana"] + ["cherry", "durian"] print( fruitlist ) numlist = 10 * [0] print( numlist ) # Note: With the `+` you can add a list to another list, but you cannot add a new element to a list, unless you turn that new element into a list with a single element by putting two square brackets around it. If you try to add something to a list that is not a list, Python will try to interpret it as a list -- if it can do that (which it can, for instance, for a string, which it can consider a list of letters); it will then still do the addition but the result will not be what you want. For instance, the code below tries to add a "cherry" to a list, but only the second addition actually does what is intended. # + fruitlist = ["apple", "banana"] fruitlist += "cherry" print( fruitlist ) fruitlist = ["apple", "banana"] fruitlist += ["cherry"] print( fruitlist ) # - # --- # ## List methods # Python supports many methods to change lists or get information from them. You do not need to import a module to use them. Since they are methods, you call them using the syntax `.()`. # # **Important!** Lists are mutable and these methods actually *change* the list! It is not as you are used to with string methods, where the methods create a new string, and return it, while the original string remains. Most list methods have an irrevocable effect on the list they work on. Usually they have no return value, and you do not need one either, as the purpose of the methods is to change the list. # ### `append()` # `append()` attaches an item at the end of a list. You call the method with the item you wish to add as argument. 
fruitlist = ["apple", "banana", "cherry", "durian"] print( fruitlist ) fruitlist.append( "orange" ) print( fruitlist ) # An alternative for using the `append()` method is to add a list with one new element to the existing list with a `+`, and assign the resulting list to the original list variable. However, the `append()` method is preferable as it is more readable. `.append()` is equivalent to `[len():] = []`, or simply ` += []`. # ### `extend()` # `extend()` makes a list longer by appending the elements of another list at the end. You call the method with the list of which you want to add the elements as argument. fruitlist = ["apple", "banana", "cherry", "durian"] print( fruitlist ) fruitlist.extend( ["raspberry", "elderberry", "strawberry", "blueberry"] ) print( fruitlist ) # Just as with the `append()` method, you can extend an existing list with a new list by simply using the `+` operator, and assigning the result to the original list variable. And just as with the `append()` method, using the `extend()` method is preferable. `.extend()` is equivalent to `[len():] = `. # ### `insert()` # `insert()` allows you to insert an element at a specific position in a list. It is called with two arguments, the first being the index of the location where you wish to insert the new element, and the second the new element itself. To insert an element at the front of the list, you can use index 0. fruitlist = ["apple", "banana", "cherry", "durian"] print( fruitlist ) fruitlist.insert( 2, "orange" ) print( fruitlist ) # `.insert(,)` is equivalent to `[:] = []`. # ### `remove()` # `remove()` allows you to remove an element from a list. The element you wish to remove is given as argument. If the element occurs in the list multiple times, only the first occurrence will be removed. If you try to remove an element that is not on the list, a runtime error is generated. fruitlist = ["apple", "banana", "cherry", "banana", "durian"] print( fruitlist ) fruitlist.remove( "banana" ) print( fruitlist ) # ### `pop()` # Like `remove()`, `pop()` removes an element from the list, but does so by index. It has one optional argument, which is the index of the element that you wish to remove. If you do not provide that argument, `pop()` removes the last element from the list. If the index is beyond the boundaries of the list, `pop()` generates a runtime error. # # A major difference with `remove()` is that `pop()` actually has a return value, namely the element that gets removed. This allows you to quickly process all the elements of a list, while emptying the list at the same time. fruitlist = ["apple", "banana", "cherry", "durian"] print( fruitlist ) print( fruitlist.pop() ) print( fruitlist ) print( fruitlist.pop( 0 ) ) print( fruitlist ) # ### `del` # `del` is neither a method nor a function, but since it is often metioned in one breath with `remove()` and `pop()`, I place it here. `del` is a keyword that allows you to delete a list element, or list slice, by index. It is similar to `pop()` in functionality, but does not have a return value. Also, `pop()` cannot be used on slices. To use `del`, use the statement `del []` or `del [:]`. fruitlist = ["apple", "banana", "cherry", "banana", "durian"] del fruitlist[3] print( fruitlist ) # ### `index()` # `index()` returns the index of the first occurrence on the list of the element that is given to `index()` as argument. A runtime error is generated if the element is not found on the list. 
fruitlist = ["apple", "banana", "cherry", "banana", "durian"] print( fruitlist.index( "banana" ) ) # ### `count()` # `count()` returns an integer that indicates how often the element that is passed to it as an argument occurs in the list. fruitlist = ["apple", "banana", "cherry", "banana", "durian"] print( fruitlist.count( "banana" ) ) # ### ``sort()`` # `sort()` sorts the elements of the list, from low to high. If the elements of the list are strings, it does an alphabetical sort. If the elements are numbers, it does a numeric sort. If the elements are mixed, it generates a runtime error, unless certain arguments are given. # + fruitlist = ["apple", "strawberry", "banana", "raspberry", "cherry", "banana", "durian", "blueberry"] fruitlist.sort() print( fruitlist ) numlist = [314, 315, 642, 246, 129, 999] numlist.sort() print( numlist ) # - # To do a reverse sort, you can add an argument `reverse=`. fruitlist = ["apple", "strawberry", "banana", "raspberry", "cherry", "banana", "durian", "blueberry"] fruitlist.sort( reverse=True ) print( fruitlist ) # Another argument that you can give `sort()` is a key. You have to provide this argument as `.sort( key= )`, whereby `` is a function that takes one argument (the element that is to be sorted) and returns a value that is used as key. A typical use for the `key` argument is if you want to sort a list of strings, but want to do the sorting case-insensitively. So as key you want to use the elements, but in lower case, i.e., you want to apply the function `str.lower()` to the element. You call the sort as in the following example: fruitlist = ["apple", "Strawberry", "banana", "raspberry", "CHERRY", "banana", "durian", "blueberry"] fruitlist.sort() print( fruitlist ) fruitlist.sort( key=str.lower ) # case-insensitive sort print( fruitlist ) # Note that for the key argument, you do not place parentheses after the function name. This is not a function call, it is an argument that tells Python which function to use to generate the key. # # You can write your own function to be used as key. For example, in the code below, `numlist` is sorted with the last digit of each number as the most important digit, followed by the middle digit: # + def revertdigits( element ): return (element%10)*100 + (int( element/10 )%10)*10 + int( element/100 ) numlist = [314, 315, 642, 246, 129, 999] numlist.sort( key=revertdigits ) print( numlist ) # - # Here is another example, that sorts a list of strings by string length, followed by alphabetical order: # + def len_alphabetical( element ): return len( element ), element fruitlist = ["apple", "strawberry", "banana", "raspberry", "cherry", "banana", "durian", "blueberry"] fruitlist.sort( key=len_alphabetical ) print( fruitlist ) # - # Note that the `len_alphabetical()` function returns a tuple. When two tuples are compared, first the first elements of both tuples are compared, and if they are equal, the second elements are compared. You can use this knowledge to create a `key` function which sorts a mixed list, e.g., a list which consists of both strings and numbers. Just let the `key` function return a tuple of which the first item indicates the type of the element represented by a number, and the second the element itself. 
# + def mixed_key( element ): if isinstance( element, str ): return 1, element return 0, element mixedlist = ["apple", 0, "strawberry", 5, "banana", 2, "raspberry", 9, "cherry", "banana", 7, 7, 6, "blueberry"] mixedlist.sort( key=mixed_key ) print( mixedlist ) # - # At this point I can give a typical example of the use of "anonymous functions", which I introduced in the chapter on functions. Using an anonymous function to specify the key for the `sort()` method keeps the code for the key next to where you call the `sort()`, instead of elsewhere in the program. This may improve readability. Here is once more the example of the sorting of a list of strings by length, followed by alphabetical order, now using an anonymous function: fruitlist = ["apple", "strawberry", "banana", "raspberry", "cherry", "banana", "durian", "blueberry"] fruitlist.sort( key=lambda x: (len(x),x) ) print( fruitlist ) # ### `reverse()` # `reverse()` simply reverses the elements of the list. fruitlist = ["apple", "strawberry", "banana", "raspberry", "cherry", "banana", "durian", "blueberry"] fruitlist.reverse() print( fruitlist ) # ### Practice # **Exercise**: Write a program that asks the user to enter some data, for instance the names of their friends. When the user wants to stop providing inputs, he just presses *Enter*. The program then displays an alphabetically sorted list of the data items entered. Do not just print the list, but print each item separately, on a different line. # Present a sorted list. # **Exercise**: Sort a list of numbers using their absolute values. # Sorting with absolute values. numlist = [-10, -7, -3, -2, 0, 1, 2, 5, 7, 8, 10] # **Exercise**: Count how often each letter occurs in a string (case-insensitively). You can ignore every character that is not a letter. Of course, you should not introduce 26 variables to do this; instead, use a list of 26 items that all start at zero. Print the resulting counts. As index for the list you can use `ord(letter) - ord("a")`, where letter is a lower case letter (the `ord()` function is explained in the strings chapter). If you have troubles with this exercise, see the hints below the code block. # Letter counting. text = """Now, it's quite simple to defend yourself against a man armed with a banana. First of all you force him to drop the banana; then, second, you eat the banana, thus disarming him. You have now rendered him helpless.""" # Hints: If I ask you to count how many times the letter "a" occurs in the string, you use one variable `count_a`, which you start at zero. Then you process the letters in the string using `for letter in text`, and every time you note that `letter` is "a", you add 1 to `count_a`. If I ask you to do the same for the letter "b", you may decide to track a `count_b` variable next to `count_a`. If I ask you to do that for all the letters of the alphabet (which I do indeed), a solution using 26 separate variables and 26 conditions would be very inelegant. However, a list of length 26 is actually equal to 26 variables. Call this list `countlist` and fill it with 26 times the value 0. The first element of this list, with index 0, you use to track the number of "a"s. The second, with index 1, you use to track the number of "b"s. Etcetera. But how do you know which of the elements of the list you have to increase when you have a variable `letter` which contains one of the letters of the alphabet? If you take `ord(letter)`, you get a number. If `letter` is "a", this number is 97, if `letter` is "b", the number is 98, etcetera. 
This means that if I calculate `ord(letter)-97`, and `letter` is indeed a (lower-case) letter of the alphabet, I get a number between 0 and 25, which I can use as the index for the list. Instead of 97, I prefer that you use `ord("a")`, as 97 is a "magic number". This means that if `letter` is in the alphabet, then you can add 1 to the variable that tracks the number of times the letter occurs using the command `countlist[ord(letter)-ord("a")] += 1`. # --- # ## Aliasing # If you assign variable that contains a list to another variable, you might expect that you create a copy of the list in the second variable. But you are not doing that. You are actually creating an *alias* for the list, i.e., a new variable that is referring to the same list. This means that the new variable can be treated as a list, but any change that you make to the list it refers to, is visible in the original list variable, and vice versa. They are not different lists. fruitlist = ["apple", "banana", "cherry", "durian"] newfruitlist = fruitlist print( fruitlist ) print( newfruitlist ) newfruitlist[2] = "orange" print( fruitlist ) print( newfruitlist ) # Every variable in Python has an identification number. You can see it with the `id()` function. The ID number indicates which memory spot the variable refers to. For an alias of a list, the ID is the same as for the original list. fruitlist = ["apple", "banana", "cherry", "durian"] newfruitlist = fruitlist print( id( fruitlist ) ) print( id( newfruitlist ) ) # If you want to create a copy of a list, you can do so using a little trick. Instead of using ` = `, you use the command ` = [:]`. # + fruitlist = ["apple", "banana", "cherry", "durian"] newfruitlist = fruitlist verynewfruitlist = fruitlist[:] print( id( fruitlist ) ) print( id( newfruitlist ) ) print( id( verynewfruitlist ) ) fruitlist[2] = "orange" print( fruitlist ) print( newfruitlist ) print( verynewfruitlist ) # - # ### `is` # The keyword `is` is introduced to compare the identities of two variables. # + fruitlist = ["apple", "banana", "cherry", "durian"] newfruitlist = fruitlist verynewfruitlist = fruitlist[:] print( fruitlist is newfruitlist ) print( fruitlist is verynewfruitlist ) print( newfruitlist is verynewfruitlist ) # - # As you can see, the keyword `is` manages to determine that `fruitlist` and `newfruitlist` are aliases, but that `verynewfruitlist` is not the same list. If you compare them with the `==` operator, the results are not the same as comparing them with `is`: # + fruitlist = ["apple", "banana", "cherry", "durian"] newfruitlist = fruitlist verynewfruitlist = fruitlist[:] print( fruitlist == newfruitlist ) print( fruitlist == verynewfruitlist ) print( newfruitlist == verynewfruitlist ) # - # The `==` operator actually compares the contents of the lists, so it returns `True` for all comparisons. For data types for which `==` is not defined, it executes an identity comparison, but for lists it has been defined as a comparison of the contents. I will return to this topic when discussing "operator overloading", much later in the course. # ### Shallow vs. deep copies # If (some of) the items of your list are lists themselves (or other mutable data structures which are introduced in the next chapters), copying the list using the ` = [:]` syntax may give problems. 
The reason is that such a copy is a "shallow copy", which means that it copies each of the elements of the list with a regular assignment, which entails that the items in the list that are lists themselves become aliases of the items on the original list. # + numlist = [ 1, 2, [3, 4] ] copylist = numlist[:] numlist[0] = 5 numlist[2][0] = 6 print( numlist ) print( copylist ) # - # In the code above, you can see that the assignment `numlist[0] = 5` only has an effect on `numlist`, as `copylist` contains a copy of numlist. However, since this is a shallow copy, the assignment to `numlist[2][0]` has an effect on both lists, as the sublist `[3, 4]` is stored in `copylist` as an alias. # # If you want to create a "deep copy" of a list (i.e., a copy that also contains true copies of all mutable substructures of the list, which in turn contain true copies of all their mutable substructures, etcetera), then you can use the `copy` module for that. The `deepcopy()` function from the copy module allows you to create deep copies of any mutable data structure. # + from copy import deepcopy numlist = [ 1, 2, [3, 4] ] copylist = deepcopy( numlist ) numlist[0] = 5 numlist[2][0] = 6 print( numlist ) print( copylist ) # - # Note that the `copy` module also contains a function `copy()` that makes shallow copies. If you wonder why that function is included as you can easily create shallow copies of lists with the ` = [:]` command: the `copy` module not only works for lists, but for any mutable data structure. Not for all such data structures there exist shortcuts to create shallow copies. # ### Passing lists as arguments # When you pass a list as an argument to a function, this is a "pass by reference". The parameter that the function has access to will be an alias for the list that you pass. This means that a function that you pass a list to, can actually change the contents of the list. # # This is important, so I repeat it: when you pass a mutable data structure to a function, this is a "pass by reference", meaning that the data structure is passed as an alias and the function can change the contents of the data structure. # # You have to know whether a function that you pass a list to will or will not change the list. If you do not want the function to change the list, and you do not know if it will, you best pass a deep copy of the list to the function. # + def changelist( x ): if len( x ) > 0: x[0] = "CHANGE!" fruitlist = ["apple", "banana", "cherry", "durian"] changelist( fruitlist ) print( fruitlist ) # - # The reason that a list is "passed by reference" and not "by value" is that technically, every argument that is passed to a function must be stored in the computer in a specific block of memory that is part of the processor. This is called the "stack", and it is pretty limited in size. Since lists can be really long, allowing a program to place a list on the stack would cause all kinds of annoying runtime errors. In Python, as in most other programming languages, for the most part only basic data types (such as integers, floats, and strings) are passed by value. # --- # ## Nested lists # The elements of a list may be lists themselves (which also may contains lists, etcetera). This is a good way to create a matrix in a program. 
For instance, you can create a Tic-Tac-Toe board, where a dash (`-`) represents an empty cell, as follows: board = [ ["-", "-", "-"], ["-", "-", "-"], ["-", "-", "-"] ] # The first row of the board is represented by `board[0]`, the second row by `board[1]`, and the third row by `board[2]`. If you want to access the first cell of the first row, that is `board[0][0]`, the second cell is `board[0][1]` and the third cell is `board[0][2]`. # # For example, the following code places an "X" in the middle of the board, and an "O" in the upper right corner. It also displays the board in a nice way (with markers for rows and columns around it). # + def display_board( b ): print( " 1 2 3" ) for row in range( 3 ): print( row+1, end=" ") for col in range( 3 ): print( b[row][col], end=" " ) print() board = [ ["-", "-", "-"], ["-", "-", "-"], ["-", "-", "-"] ] board[1][1] = "X" board[0][2] = "O" display_board( board ) # - # --- # ## List casting # You can type cast a sequence of elements to a list using the `list()` function. t1 = ( "apple", "banana", "cherry" ) print( t1 ) print( type( t1 ) ) fruitlist = list( t1 ) print( fruitlist ) print( type( fruitlist ) ) # This is sometimes necessary, in particular when you have an "iterator" available and you want to use the elements in a list format. An iterator is a function that generates a sequence. An example of an iterator that I already discussed is the `range()` function. The `range()` function generates a sequence of numbers. If you want to use these numbers as a list, you can use list casting. numlist = range( 1, 11 ) print( numlist ) numlist = list( range( 1, 11 ) ) print( numlist ) # You can turn a string into a list of its characters by using a list casting on the string. # --- # ## List comprehensions # List comprehensions are a concise way to create lists. They are typical for Python, but you do not find them in many other programming languages. They are not actually needed, as you can use functions to achieve the same effect, but as they are often used in examples (especially by people who want to show off their Python abilities to create short statements that have extensive effects), I thought it prudent to discuss them. # # Suppose that you want to create a list consisting of the squares of the numbers 1 to 25. A function that creates such a list is: # + def squareslist(): squares = [] for i in range( 1, 26 ): squares.append( i*i ) return squares sl = squareslist() print( sl ) # - # In Python, you can create that list with one single statement, namely as follows: sl = [ x*x for x in range( 1, 26 )] print( sl ) # Now suppose that you want to create this list, but want to leave out (for some reason) the squares of any numbers that end in 5. That would add at least two lines to the function above, but with list comprehensions you can still do it with that single line: sl = [ x*x for x in range( 1, 26 ) if x%10 != 5] print( sl ) # A list comprehension consists of an expression in square brackets, followed by a `for` clause, followed by zero or more `for` and/or `if` clauses. The result is a list that contains the elements that result from evaluating the expression for the combination of the `for` and `if` clauses. # # The results can become quite complex. 
For instance, here is a list comprehension that creates a list of tuples with three integers between 1 and 4, whereby the three integers are all different: triplelist = [ (x,y,z) for x in range( 1, 5 ) for y in range( 1, 5 ) for z in range( 1, 5 ) if x != y if x != z if y != z] print( triplelist ) # If you find list comprehensions hard to use, remember that there is absolutely no reason to use them except for keeping code concise, and that keeping code readable and understandable is far more important than keeping it concise. # --- # ## What you learned # In this chapter, you learned about: # # - Lists # - Mutability of lists # - Using `+` and `*` with lists # - List methods `append()`, `extend()`, `insert()`, `remove()`, `pop()`, `index()`, `count()`, `sort()`, and `reverse()` # - `del` with lists # - Aliasing # - The keyword `is` # - Creating list copies # - Creating deep copies of lists using `deepcopy()` # - Using lists as arguments # - Nested lists # - List casting # - List comprehensions # ------------- # ## Exercises # ### Exercise 12.1 # A magic 8-ball, when asked a question, provides a random answer from a list. The code below contains a list of possible answers. Create a magic 8-ball program that asks a question, then gives a random answer. # Magic 8-ball answers = [ "It is certain", "It is decidedly so", "Without a doubt", "Yes, definitely", "You may rely on it", "As I see it, yes", "Most likely", "Outlook good", "Yes", "Signs point to yes", "Reply hazy try again", "Ask again later", "Better not tell you now", "Cannot predict now", "Concentrate and ask again", "Don't count on it", "My reply is no", "My sources say no", "Outlook not so good", "Very doubtful" ] # ### Exercise 12.2 # A playing card consists of a suit ("Hearts", "Spades", "Clubs", or "Diamonds") and a value ("Ace", 2, 3, 4, 5, 6, 7, 8, 9, 10, "Jack", "Queen", "King"). Create a list of all possible playing cards, which is a deck. Then create a function that shuffles the deck, producing a random order. # Shuffling a deck. # ### Exercise 12.3 # A first-in-first-out (FIFO) structure, also called a "queue", is a list that gets new elements added at the end, while elements from the front are removed and processed. Write a program that processes a queue. In a loop, ask the user for input. If the user just presses the *Enter* key, the program ends. If the user enters anything else, except for a single question mark (`?`), the program considers what the user entered a new element and appends it to the queue. If the user enters a single question mark, the program pops the first element from the queue and displays it. You have to take into account that the user might type a question mark even if the queue is empty. # Queue. # ### Exercise 12.4 # Count how often each letter occurs in a string (case-insensitively). You can ignore every character that is not a letter. Print the letters with their counts, in order from highest count to lowest count. # Letter counting. # ### Exercise 12.5 # The sieve of Eratosthenes is a method to find all prime numbers between 1 and a given number using a list. This works as follows: Fill the list with the sequence of numbers from 1 to the highest number. Set the value of 1 to zero, as 1 is not prime. Now loop over the list. Find the next number on the list that is not zero, which, at the start, is the number 2. Now set all multiples of this number to zero. Then find the next number on the list that is not zero, which is 3. Set all multiples of this number to zero. 
Then the next number, which is 5 (because 4 has already been set to zero), and do the same thing again. Process all the numbers of the list in this way. When you have finished, the only numbers left on the list are primes. Use this method to determine all the primes between 1 and 100. # Eratosthenes. # ### Exercise 12.6 # Write a Tic-Tac-Toe program that allows two people to play the game against each other. In turn, ask each player which row and column they want to play. Make sure that the program checks if that row/column combination is empty. When a player has won, end the game. When the whole board is full and there is no winner, announce a draw. # Tic-Tac-Toe player. # This is a fairly long program to write (60 lines or so). It will definitely help to use some functions. I recommend that you create a function `display_board()` that gets the board as parameter and displays it, a function `getRowCol()` that asks for a row or a column (depending on a parameter) and checks whether the user entered a legal value, and a function `winner()` that gets the board as argument and checks if there is a winner. Keep track of who the current player is using a global variable `player` that you can pass to a function as an argument if the function needs it. I also use a function `opponent()`, that takes the player as argument and returns the opponent. I use that to switch players after each move. # # The main program will be something along the lines of (in pseudo-code): # # display board # while True: # ask for row # ask for column # if row/column combination already occupied: # display error message # continue # place player marker on row/column combination # display board # if there is a winner # announce winner # break # if the board is full # announce draw # break # switch players # ### Exercise 12.7 # Create a program that is a simplified version of the game "Battleship". The computer creates (in memory) a grid that is 4 cells wide and 3 cells high. The rows of the grid are numbered 1 to 3, and the columns of the grid are labeled `A` to `D`. The computer hides a battleship in three random cells in the grid. Each battleship occupies exactly one cell. Battleships are not allowed to touch each other horizontally or vertically. Make sure that the program places the battleships randomly, so not pre-configured. # # The computer asks the player to "shoot" at cells of the grid. The player does so by entering the column letter and row number of the cell which he wants to shoot at (e.g., "D3"). If the cell which the player shoots at contains nothing, the computer responds with "Miss!". If the cell contains a battleship, the computer responds with "You sunk my battleship!", and removes the battleship from the cell (i.e., a second shot at the same cell is a miss). As soon as the player hits the last battleship, the computer responds with displaying how many shots the player needed to shoot down all three battleships, and the program ends. # # To help with debugging the game, at the start the computer should display the grid with periods marking empty cells and `X`s marking cells with battleships. # # Hint: If you have troubles with this exercise, start by using a board which has the battleships already placed. Once the rest of the code works, add a function that places the battleships at random, at first without checking if they are touching one another. Once that works, add code that disallows battleships touching each other. # Battleship. 
# ### Exercise 12.8 # The "subset sum" problem asks the question whether a list of integers contains a subset of integers that, when summed, gives zero as answer. For instance, for the list `[1, 4, -3, -5, 7]` the answer is "yes", as `1 + 4 - 5 = 0`. However, for the list `[1, 4, -3, 7]` the answer is "no", as there is no subset of integers that adds up to zero. Write a program that solves the "subset sum" problem for a list of integers. If there is a solution, print it; if not, report that there is no solution. # # Hint: This problem is tackled best using recursion. If you skipped that chapter, you better skip this exercise too. # Subset sum problem. # --- # ## Python 2 # In Python 2, sorting a mixed list does not give a runtime error. The `sort()` function also supports an argument `cmp=`, that allows you to specify a function that does the comparison between two elements. This function is no longer supported in Python 3, but you can use the `key` parameter for the same purpose. In the Python module `functools` a function `cmp_to_key()` is found that turns the old-style `cmp` specification into the new-style `key` specification. # # In Python 2, the `range()` function produces an actual list, and you do not need list casting to turn the result of it into one. # -------- # End of Chapter 12. Version 1.3. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="s6LccGLI5deK" #

# # Problem 1: Creating DataFrame Objects

# # - Buatlah objek dataframes dari objek dictionary, Bebas! # + colab={} colab_type="code" id="V2jNr8n-5deM" # Membuat dataframes dari dictionary import pandas as pd df = pd.DataFrame({ 'No': [1, 2, 3, 4, 5], 'Nama': ['Steven', 'Axel', 'William', 'Rigo', 'Reuben'], 'Berat Badan': [80, 50, 70, 75, 60], 'Tinggi Badan': [169, 160, 172, 176, 170] }) df = df.set_index('No') df # + [markdown] colab_type="text" id="KWEFNf9B5deX" #

# # Problem 2: Reading Data from a File

# # Download file bernama [diamonds.csv](https://drive.google.com/uc?export=download&id=1qQiM9utQUThTPq8jk8stPMh_y2zfqYOd), kemudian baca data tersebut dan assign ke dalam suatu variabel kemudian print. # + colab={} colab_type="code" id="Jy2KtDNs5deZ" # Membaca data dan memasukannya kedalam suatu variabel df = pd.read_csv('diamonds-1.csv') df # + [markdown] colab_type="text" id="1o9m_N295del" # Expected Result: # # ![alt text](https://drive.google.com/uc?id=1BvgM5mJEVfj0fllrjkBfQwn0QIFkry_t) # + [markdown] colab_type="text" id="LtLWF_Ot5deo" #

# # Problem 3: DataFrame vs. Series

# # - Apa perbedaan Series dengan list dan dictionary # - Apa perbedaan dataframe dan series # - Buatlah suatu data series # # Perbedaan Series dengan list dan dictionary: # # - Series merupakan class object yang berasal dari modul Pandas, sedangkan list dan dictionary adalah built-in data structure di Python. # - Series bisa dibilang adalah gabungan dari list dan dictionary karena ia merupakan labeled array seperti yang ada pada dictionary dan dapat di slicing atau indexing seperti di list, namun pandas Series hanya dapat memuat data 1 dimensi. # # Perbedaan Data Frame dengan Series: # # - Series merupakan data structure untuk setiap kolom yang ada pada data frame # - Data Frame merupakan sekumpulan pandas Series # - Series merupakan 1 dimensional data structure # - Data Frame merupakan 2 dimensional data structure # + [markdown] colab_type="text" id="6F2EDcPp5der" # Jawab: # + colab={} colab_type="code" id="RBPrIKlZ5dev" # code here # seri = pd.Series(['Steven', 'Axel', 'William', 'Rigo', 'Reuben'], name='Nama', index=range(1, 6)) seri = pd.Series([[1,2,3], [4,5,6]]) seri # + [markdown] colab_type="text" id="JNuHgN945de8" #

# # Problem 4: Simple Data Inspection

# # Di soal no 2 kamu telah membaca data tentang diamonds. pada soal ini cobalah suatu metode dari pandas untuk mengetahui beberapa data pertama dan beberapa data terakhir. kemudian sebutkan nilai dari: # - baris pertama dari column price # - baris terakhir dari column color # # tunjukan cara kalian menemukan nilai tersebut di block di bawah ini. # - Bisa dilihat dari tampilan data framenya, atau bisa juga dengan metode indexing # + colab={} colab_type="code" id="MRvNarxL5de-" # code here display(df.head()) display(df.tail()) print(df['price'][0]) print(df['color'][len(df) - 1]) # + [markdown] colab_type="text" id="q-u9QIjJ5dfK" #

# # Problem 5: Descriptive Statistics

# # Jelaskan apa itu deskriptis statistik! # + [markdown] colab_type="text" id="iKTeB5Gd5dfM" # Jawab: # # Statistik deskriptif digunakan untuk mendeskripsikan ciri-ciri dasar data dalam suatu penelitian. Mereka memberikan ringkasan sederhana tentang sampel dan ukuran. Statistika deskriptif mampu menyajikan informasi inti dari kumpulan data yang ada. Informasi yang di dapatkan yang berasal dari statistika deskriptif ini adalah ukuran pemusatan data (central tendency), ukuran penyebaran data (varians dan standar deviasi), dan juga kecenderungan suatu gugus data. # + [markdown] colab_type="text" id="5SKD0rNu5dfP" #

# # Problem 6: Descriptive Statistics Practice

# # - Gunakan suatu metode dari pandas untuk mengetahui deskriptif statistik dari suatu data # - Berapa nilai rata-rata dari column price # - Berapa nilai standar deviasi dari column depth # - Berapa nilai maximum dari column carat # + colab={} colab_type="code" id="giNtJG9Y5dfS" # code here display(df.describe()) print(f'Rata-rata column price: {df["price"].mean()}') print(f'Standar deviasi column depth: {df["depth"].std()}') print(f'Nilai maximum column carat: {df["carat"].max()}') # + [markdown] colab_type="text" id="M6miZVtG5dfd" # Jawab: # - Rata-rata column price: 3932.799722 # - Standar deviasi column depth: 1.432621 # - Nilai maximum column carat: 5.010000 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Trail-1 # @create 2021-9-8 import cv2 import matplotlib.pyplot as plt import math import numpy as np # %matplotlib inline cv2.__version__ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ### What I really need to test here is how CAMB is computing the power spectrum # #### How are the parameters interpreted when I feed them in, what is the sigma 8 value, how does changing the input change the output # %matplotlib inline import os import sys import numpy as np from astropy.io import fits as pf from sklearn.neighbors import KernelDensity as kde from scipy import integrate import camb from camb import model from scipy.special import j0 from scipy import interpolate import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D as ax3d from skmonaco import mcquad from skmonaco import mcmiser import time #Import SpIES / SHELA data data = '../Data_Sets/QSO_Candidates_allcuts_with_errors_visualinsp.fits' obs = pf.open(data)[1].data Z = obs.zphotNW gdx = ((Z >= 2.9)&(Z <= 5.2) & (obs.Good_obj == 0)) #gdx = Z>0 #Set up a KDE for dNdz tmpz = Z[gdx][:, np.newaxis] #change the array from row shape (1) to column shape (1,) print np.shape(tmpz) sample_range = np.linspace(min(tmpz[:, 0]), max(tmpz[:, 0]), len(tmpz[:, 0]))[:, np.newaxis] est = kde(bandwidth=0.1,kernel='gaussian') #Set up the Kernel histkde = est.fit(tmpz).score_samples(sample_range) #fit the kernel to the data and find the density of the grid #Interpolate (you get the same function back) to plug in any z in the range (as opposed to set z values) dNdz = interpolate.interp1d(sample_range.flatten(),np.exp(histkde)) print sample_range.flatten() print 'done' #Plot the KDE dndz plt.plot(sample_range[:,0],np.exp(histkde)) plt.xlabel('z') #plt.plot(sample_range[:,0],dNdz(sample_range[:,0])) #plt.plot(bins[:-1],num,linestyle = 'steps-mid') ZE = np.linspace(min(Z),max(Z),100) xo=integrate.quad(dNdz,min(sample_range),max(sample_range)) #quad(f(x),xlower,xupper, args) print xo #plt.savefig('dndz.png') plt.show() # + # Compute the matter power spectrum from CAMB and Generate the P(z,k) function to output the power at any given redshift #and wavenumber #First define Planck 2015 cosmological parameters H = 70 #H0. 
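# NB: despite the "physical density" labels below, these values are assigned directly to
# pars.omegab / pars.omegac / pars.omegav, which this CAMB interface appears to treat as the
# density parameters Omega_b, Omega_c, Omega_Lambda (they sum to 1 together with omegav = 0.725),
# rather than the physical densities omega = Omega*h^2. If physical densities are intended, the
# commented-out pars.set_cosmology(H0=H, ombh2=ob, omch2=oc, omk=0) call in this cell is the
# unambiguous way to pass them.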
oc = 0.229 #physical density of CDM ob = 0.046 #physical density of baryons #Conversion to density param: Omega_Matter = (oc+ob)/(H0/100.)**2 #Set up parameters in CAMB pars = camb.CAMBparams() #H0 is hubble parameter at z=0, ombh2 is the baryon density (physical), omch2 is the matter density (physical) #mnu is sum of neutrino masses, omk is curvature parameter (set to 0 for flat), meffsterile is effective mass of sterile neutrinos #pars.set_cosmology(H0=H,ombh2=ob, omch2=oc,omk=0)#,mnu=0,meffsterile=0) pars.H0=H pars.omegab=ob pars.omegac=oc pars.omegav=0.725 pars.set_dark_energy() #Set parameters using standard power law parameterization.If nt=None, uses inflation consistency relation. #ns is scalar speectral index pars.InitPower.set_params(ns=0.960) camb.set_halofit_version(version='original') #uses the Smith 2003 halo model ze=np.linspace(0,20,150) ka=np.logspace(-4,1,len(ze))#np.linspace(0,10,100) #Get the matter power spectrum interpolation object (based on RectBivariateSpline). #pars: input parameters, zs: redshift range, nonlinear: generate nonlinear power spectrum, hubble_units=True: output as Mpc/h^-3 #instead of Mpc^-3 PK = camb.get_matter_power_interpolator(pars,zs = ze,zmax = ze[-1], nonlinear=True, hubble_units=True, k_hunit=True, kmax = ka[-1]) #Generate the power using the interpolator and the z and k arrays #Power = PK.P(z,k) # - # + def dimpower(Pk,z,k): delta = Pk.P(z,k) * k**3/(2*np.pi**2) return delta k=np.logspace(-3,2,1000) #k=10**kay plt.figure(2) plt.plot(k,dimpower(PK,0,k)) plt.plot(k,dimpower(PK,2,k)) plt.plot(k,dimpower(PK,6,k)) plt.xscale('log') plt.yscale('log') plt.ylim(10**-3,10**4) plt.xlim(10**-3,10**1) plt.show() # - # + # Print the parameters out pars2 = camb.CAMBparams() #Not non-linear corrections couples to smaller scales than you want pars2.set_matter_power(redshifts=[0., 1.0, 6.0], kmax=10.0) H = 70 #H0. oc = 0.229 #physical density of CDM ob = 0.046 #physical density of baryons #H0 is hubble parameter at z=0, ombh2 is the baryon density (physical), omch2 is the matter density (physical) #mnu is sum of neutrino masses, omk is curvature parameter (set to 0 for flat), meffsterile is effective mass of sterile neutrinos #pars.set_cosmology(H0=H,ombh2=ob, omch2=oc,omk=0)#,mnu=0,meffsterile=0) pars2.H0=H pars2.omegab=ob pars2.omegac=oc pars2.omegav=0.725 pars2.set_dark_energy() #Set parameters using standard power law parameterization.If nt=None, uses inflation consistency relation. 
#ns is scalar speectral index pars2.InitPower.set_params(As = 2e-9,ns=0.960) ''' #Linear spectra pars.NonLinear = model.NonLinear_none results = camb.get_results(pars) kh, z, pk = results.get_matter_power_spectrum(minkh=1e-4, maxkh=10, npoints = 1000) s8 = np.array(results.get_sigma8()) print s8 ''' #Non-Linear spectra (Halofit) pars2.NonLinear = model.NonLinear_both camb.set_halofit_version(version='original') #uses the Smith 2003 halo model camb.get_transfer_functions(pars2) results = camb.get_results(pars2) results.calc_power_spectra(pars2) kh_nonlin, z_nonlin, pk_nonlin = results.get_matter_power_spectrum(minkh=1e-4, maxkh=10, npoints = 1000) s82 = np.array(results.get_sigma8()) print results.get_matter_transfer_data() print s82 print '' print z_nonlin print np.shape(pk_nonlin) print np.shape(kh_nonlin) # + plt.figure(2,figsize = [8,8]) plt.plot(kh_nonlin,pk_nonlin[0],label='z = %s'%z_nonlin[0]) plt.plot(kh_nonlin,pk_nonlin[1],label='z = %s'%z_nonlin[1]) plt.plot(kh_nonlin,pk_nonlin[2],label='z = %s'%z_nonlin[2]) plt.plot(kh_nonlin,PK.P(z_nonlin[2],kh_nonlin),label='interpolator') plt.xscale('log') plt.yscale('log') plt.legend() plt.show() plt.figure(3,figsize = [8,8]) #plt.plot(kh_nonlin,pk_nonlin[2],label='z = %s'%z_nonlin[2]) plt.plot(kh_nonlin/0.7,pk_nonlin[2]/PK.P(z_nonlin[2],kh_nonlin)-1,label='set/interp') plt.xscale('log') #plt.yscale('log') plt.legend() plt.show() # - print results.get_params().InitPower.ScalarPowerAmp[0] # + # Print the parameters out pars3 = camb.CAMBparams() #Not non-linear corrections couples to smaller scales than you want pars3.set_matter_power(redshifts=[0., 1.0, 6.0], kmax=10.0) H = 70 #H0. oc = 0.229 #physical density of CDM ob = 0.046 #physical density of baryons #H0 is hubble parameter at z=0, ombh2 is the baryon density (physical), omch2 is the matter density (physical) #mnu is sum of neutrino masses, omk is curvature parameter (set to 0 for flat), meffsterile is effective mass of sterile neutrinos #pars.set_cosmology(H0=H,ombh2=ob, omch2=oc,omk=0)#,mnu=0,meffsterile=0) pars3.H0=H pars3.omegab=ob pars3.omegac=oc pars3.omegav=0.725 pars3.set_dark_energy() #Set parameters using standard power law parameterization.If nt=None, uses inflation consistency relation. 
#ns is scalar speectral index sig8old = float(s82[-1]) print sig8old pars3.InitPower.set_params(As = 2e-9*(0.8/sig8old)**2,ns=0.960) #Non-Linear spectra (Halofit) pars3.NonLinear = model.NonLinear_both camb.set_halofit_version(version='original') #uses the Smith 2003 halo model camb.get_transfer_functions(pars3) results2 = camb.get_results(pars3) results2.calc_power_spectra(pars3) kh_nonlin, z_nonlin, pk_nonlin = results2.get_matter_power_spectrum(minkh=1e-4, maxkh=10, npoints = 1000) s83 = np.array(results2.get_sigma8()) print results.get_matter_transfer_data() print s83 print '' print z_nonlin print np.shape(pk_nonlin) print np.shape(kh_nonlin) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np mylist=[1,2,3] type(mylist) np.array(mylist) type(mylist) mylist=[[1,2,3],[4,5,6],[7,8,9]] np.array(mylist) mymatrix=np.array(mylist) mymatrix.shape np.arrange(0,10) np.arange(0,10) np.arange(0,10,3) np.zeros(5) np.zeros(4,10) np.zeros((4,10)) np.ones((5,4)) np.ones((5,4)) + 2 [1,1,1,1] *6 np.linspace(0,10,3) np.linspace(0,10,20) np.eye(5) np.eye(4) np.eye(4.4) np.random.rand(5,5) np.random.rand(3) np.random.randn(10) np.random.randn(1) np.random.randint(1,100,10) np.random.seed(555) np.random.rand(4) np.random.seed(42) np.random.rand(4) arr=np.arange(25) arr ranarr=np.random.randint(0,50,10) ranarr arr.reshape(5,5) ranarr.min() # + active="" # # - ranarr # + active="" # ranarr.max() # - ranarr.max() ranarr.max() ranarr.argmax() ranarr.argmin() ranarr.dtype arr=np.arange(0,11) arr arr[:5] arr[0:5] arr[6:] arr+100 arr/100 arr arr ** 2 arr slice_of_arr=arr[0:6] slice_of_arr slice_of_arr[:]=99 slice_of_arr arr arr_copy=arr.copy() arr_copy[:]=1000 arr_copy arr arr_2d=np.array([[5,10,15],[20,35,30],[35,40,45]]) # + active="" # # - arr_2d # + active="" # # - arr_2d.shape arr_2d[1] # + active="" # # - arr_2d[1][1] arr_2d[1,1] arr_2d[2,1] arr_2d[:2] arr_2d[:2,1:] arr=np.arange(1,11) arr arr>4 bool_arr=arr>4 bool_arr arr[bool_arr] arr[arr>4] arr[arr<=6] arr=np.arange(0,10) arr 1/arr arr/arr np.sqrt(arr) np.log(arr) arr.sum() arr.mean() arr_2dw=np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12]]) arr_2dw arr_2dw.shape arr_2dw.sum() arr_2dw.sum(axis=0) arr_2dw.sum(axis=1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer tweets = ["""'@MartinSLewis Do you know where we stand with stag/hen company ‘Last night of freedom’. We have a large group booked""", """@tomGadsby I wish @DonaldTrump had reacted better to corona!""", """Went to see the new Marvel movie last night, the film was amazing"""] s = SentimentIntensityAnalyzer() for tweet in tweets: print(tweet[:50], s.polarity_scores(tweet)) s.polarity_scores(tweets[0]) sentiments = [] 'meh' 'i hate everything and i love everything' #slightly negative 'i love everything and i hate everything' #slightly positive ### building blocks *neg, neu, and pos ranges from 0-1-0 means minimum of that emotion, 1 means maximum presence of emotion * compound - ranges between -1 and 1. 
0 is fully neutral, 1 is fully positive, and -1 is fully negative # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Model_Training_with_BERT.ipynb: #

Use Text Extensions for Pandas to integrate BERT tokenization with model training for named entity recognition, all within Pandas DataFrames.

#
# # Introduction # # This notebook shows how to use the open source library [Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas) to seamlessly integrate BERT tokenization and embeddings with model training for named entity recognition using [Pandas](https://pandas.pydata.org/) DataFrames. # # This example will build on the analysis of the [CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/) corpus done in [Analyze_Model_Outputs](./Analyze_Model_Outputs.ipynb) to train a new model for named entity recognition (NER) using state-of-the-art natural language understanding with BERT tokenization and embeddings. While the model used is rather simple and will only get modest scoring results, the purpose is to demonstrate how Text Extensions for Pandas integrates BERT from [Huggingface Transformers](https://huggingface.co/transformers/index.html) with the `TensorArray` extension for model training and scoring, all within Pandas DataFrames. See [Text_Extension_for_Pandas_Overview](./Text_Extension_for_Pandas_Overview.ipynb) for `TensorArray` specification and more example usage. # # The notebook is divided into the following steps: # # 1. Retokenize the entire corpus using a "BERT-compatible" tokenizer, and map the token/entity labels from the original corpus on to the new tokenization. # 1. Generate BERT embeddings for every token in the entire corpus in one pass, and store those embeddings in a DataFrame column (of type TensorDtype) alongside the tokens and labels. # 1. Persist the DataFrame with computed BERT embeddings to disk as a checkpoint. # 1. Use the embeddings to train a multinomial logistic regression model to perform named entity recognition. # 1. Compute precision/recall for the model predictions on a test set. # # ## Environment Setup # # This notebook requires a Python 3.7 or later environment with NumPy, Pandas, scikit-learn, PyTorch and Huggingface `transformers`. # # The notebook also requires the `text_extensions_for_pandas` library. You can satisfy this dependency in two ways: # # * Run `pip install text_extensions_for_pandas` before running this notebook. This command adds the library to your Python environment. # * Run this notebook out of your local copy of the Text Extensions for Pandas project's [source tree](https://github.com/CODAIT/text-extensions-for-pandas). In this case, the notebook will use the version of Text Extensions for Pandas in your local source tree **if the package is not installed in your Python environment**. # + import gc import os import sys from typing import * import numpy as np import pandas as pd import sklearn.pipeline import sklearn.linear_model import transformers # And of course we need the text_extensions_for_pandas library itself. try: import text_extensions_for_pandas as tp except ModuleNotFoundError as e: # If we're running from within the project source tree and the parent Python # environment doesn't have the text_extensions_for_pandas package, use the # version in the local source tree. if not os.getcwd().endswith("notebooks"): raise e if ".." not in sys.path: sys.path.insert(0, "..") import text_extensions_for_pandas as tp # - # # Named Entity Recognition with BERT on CoNLL-2003 # # [CoNLL](https://www.conll.org/), the SIGNLL Conference on Computational Natural Language Learning, is an annual academic conference for natural language processing researchers. Each year's conference features a competition involving a challenging NLP task. 
The task for the 2003 competition involved identifying mentions of [named entities](https://en.wikipedia.org/wiki/Named-entity_recognition) in English and German news articles from the late 1990's. The corpus for this 2003 competition is one of the most widely-used benchmarks for the performance of named entity recognition models. Current [state-of-the-art results](https://paperswithcode.com/sota/named-entity-recognition-ner-on-conll-2003) on this corpus produce an F1 score (harmonic mean of precision and recall) of 0.93. The best F1 score in the original competition was 0.89. # # For more information about this data set, we recommend reading the conference paper about the competition results, ["Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition,"](https://www.aclweb.org/anthology/W03-0419/). # # **Note that the data set is licensed for research use only. Be sure to adhere to the terms of the license when using this data set!** # # The developers of the CoNLL-2003 corpus defined a file format for the corpus, based on the file format used in the earlier [Message Understanding Conference](https://en.wikipedia.org/wiki/Message_Understanding_Conference) competition. This format is generally known as "CoNLL format" or "CoNLL-2003 format". # # In the following cell, we use the facilities of Text Extensions for Pandas to download a copy of the CoNLL-2003 data set. Then we read the CoNLL-2003-format file containing the `test` fold of the corpus and translate the data into a collection of Pandas [DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) objects, one Dataframe per document. Finally, we display the Dataframe for the first document of the `test` fold of the corpus. # Download and cache the data set. # NOTE: This data set is licensed for research use only. Be sure to adhere # to the terms of the license when using this data set! data_set_info = tp.io.conll.maybe_download_conll_data("outputs") data_set_info # ## Show how to retokenize with a BERT tokenizer. # # The BERT model is originally from the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by , , , . The model is pre-trained with masked language modeling and next sentence prediction objectives, which make it effective for masked token prediction and NLU. # # With the CoNLL-2003 corpus loaded, it will need to be retokenized using a "BERT-compatible" tokenizer. Then we can map the token/entity labels from the original corpus on to the new tokenization. # # We will start by showing the retokenizing process for a single document before doing the same on the entire corpus. # + # Read in the corpus in its original tokenization. corpus_raw = {} for fold_name, file_name in data_set_info.items(): df_list = tp.io.conll.conll_2003_to_dataframes(file_name, ["pos", "phrase", "ent"], [False, True, True]) corpus_raw[fold_name] = [ df.drop(columns=["pos", "phrase_iob", "phrase_type"]) for df in df_list ] test_raw = corpus_raw["test"] # Pick out the dataframe for a single example document. example_df = test_raw[5] example_df # - # The `example_df` contains columns `span` and `sentence` of dtypes `SpanDtype` and `TokenSpanDtype`. These represent spans from the target text, and here they contain tokens of the text and the sentence containing that token. See the notebook [Text_Extension_for_Pandas_Overview](./Text_Extension_for_Pandas_Overview.ipynb) for more on `SpanArray` and `TokenSpanArray`. 
example_df.dtypes # ### Convert IOB-Tagged Data to Lists of Entity Mentions # # The data we've looked at so far has been in [IOB2 format](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)). # Each row of our DataFrame represents a token, and each token is tagged with an entity type (`ent_type`) and an IOB tag (`ent_iob`). The first token of each named entity mention is tagged `B`, while subsequent tokens are tagged `I`. Tokens that aren't part of any named entity are tagged `O`. # # IOB2 format is a convenient way to represent a corpus, but it is a less useful representation for analyzing the result quality of named entity recognition models. Most tokens in a typical NER corpus will be tagged `O`, any measure of error rate in terms of tokens will over-emphasizing the tokens that are part of entities. Token-level error rate implicitly assigns higher weight to named entity mentions that consist of multiple tokens, further unbalancing error metrics. And most crucially, a naive comparison of IOB tags can result in marking an incorrect answer as correct. Consider a case where the correct sequence of labels is `B, B, I` but the model has output `B, I, I`; in this case, last two tokens of model output are both incorrect (the model has assigned them to the same entity as the first token), but a naive token-level comparison will consider the last token to be correct. # # The CoNLL 2003 competition used the number of errors in extracting *entire* entity mentions to measure the result quality of the entries. We will use the same metric in this notebook. To compute entity-level errors, we convert the IOB-tagged tokens into pairs of ``. # Text Extensions for Pandas includes a function `iob_to_spans()` that will handle this conversion for you. # Convert the corpus IOB2 tagged DataFrame to one with entity span and type columns. spans_df = tp.io.conll.iob_to_spans(example_df) spans_df # ### Initialize our BERT Tokenizer and Model # # Here we configure and initialize the [Huggingface transformers BERT tokenizer and model](https://huggingface.co/transformers/model_doc/bert.html). Text Extensions for Pandas provides a `make_bert_tokens()` function that will use the tokenizer to create BERT tokens as a span column in a DataFrame, suitable to compute BERT embeddings with. # + # Huggingface transformers BERT Configuration. bert_model_name = "dslim/bert-base-NER" tokenizer = transformers.BertTokenizerFast.from_pretrained(bert_model_name, add_special_tokens=True) # Retokenize the document's text with the BERT tokenizer as a DataFrame # with a span column. bert_toks_df = tp.io.bert.make_bert_tokens(example_df["span"].values[0].target_text, tokenizer) bert_toks_df # - # BERT tokenization includes special zero-length tokens. bert_toks_df[bert_toks_df["special_tokens_mask"]] # Align the BERT tokens with the original tokenization. bert_spans = tp.TokenSpanArray.align_to_tokens(bert_toks_df["span"], spans_df["span"]) pd.DataFrame({ "original_span": spans_df["span"], "bert_spans": bert_spans, "ent_type": spans_df["ent_type"] }) # Generate IOB2 tags and entity labels that align with the BERT tokens. # See https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging) bert_toks_df[["ent_iob", "ent_type"]] = tp.io.conll.spans_to_iob(bert_spans, spans_df["ent_type"]) bert_toks_df[10:20] # Create a Pandas categorical type for consistent encoding of categories # across all documents. 
ENTITY_TYPES = ["LOC", "MISC", "ORG", "PER"] token_class_dtype, int_to_label, label_to_int = tp.io.conll.make_iob_tag_categories(ENTITY_TYPES) token_class_dtype # The traditional way to transform NER to token classification is to # treat each combination of {I,O,B} X {entity type} as a different # class. Generate class labels in that format. classes_df = tp.io.conll.add_token_classes(bert_toks_df, token_class_dtype) classes_df # ## Show how to compute BERT embeddings # # We are going to use the BERT embeddings as the feature vector to train our model. First, we will show how they are computed # + # Initialize the BERT model that will be used to generate embeddings. bert = transformers.BertModel.from_pretrained(bert_model_name) # Force garbage collection in case this notebook is running on a low-RAM environment. gc.collect() # Compute BERT embeddings with the BERT model and add result to our example DataFrame. embeddings_df = tp.io.bert.add_embeddings(classes_df, bert) embeddings_df[["token_id", "span", "input_id", "ent_iob", "ent_type", "token_class", "embedding"]].iloc[10:20] # - embeddings_df[["span", "ent_iob", "ent_type", "embedding"]].iloc[70:75] # The `embedding` column is an extension type `TensorDtype` that holds a #`TensorArray` provided by Text Extensions for Pandas. embeddings_df["embedding"].dtype # A `TensorArray` can be constructed with a NumPy array of arbitrary dimensions, added to a DataFrame, then used with standard Pandas functionality. See the notebook [Text_Extension_for_Pandas_Overview](./Text_Extensions_for_Pandas.ipynb) for more on `TensorArray`. # Zero-copy conversion to NumPy can be done by first unwrapping the # `TensorArray` with `.array` and calling `to_numpy()`. embeddings_arr = embeddings_df["embedding"].array.to_numpy() embeddings_arr.dtype, embeddings_arr.shape # ## Generate BERT tokens and BERT embeddings for the entire corpus # # Text Extensions for Pandas has a convenience function that will combine the above cells to create BERT tokens and embeddings. We will use this to add embeddings to the entire corpus. # Example usage of the convenience function to create BERT tokens and embeddings. tp.io.bert.conll_to_bert(example_df, tokenizer, bert, token_class_dtype) # When this notebook is running on a resource-constrained environment like [Binder](https://mybinder.org/), # there may not be enough RAM available to hold all the embeddings in memory. # So we use [Gaussian random projection](https://scikit-learn.org/stable/modules/random_projection.html#gaussian-random-projection) to reduce the size of the embeddings. # The projection shrinks the embeddings by a factor of 3 at the expense of a small # decrease in model accuracy. # # Change the constant `SHRINK_EMBEDDINGS` in the following cell to `False` if you want to disable this behavior. # + SHRINK_EMBEDDINGS = True PROJECTION_DIMS = 256 RANDOM_SEED=42 import sklearn.random_projection projection = sklearn.random_projection.GaussianRandomProjection( n_components=PROJECTION_DIMS, random_state=RANDOM_SEED) def maybe_shrink_embeddings(df): if SHRINK_EMBEDDINGS: df["embedding"] = tp.TensorArray(projection.fit_transform(df["embedding"])) return df # + # Run the entire corpus through our processing pipeline. 
bert_toks_by_fold = {} for fold_name in corpus_raw.keys(): print(f"Processing fold '{fold_name}'...") raw = corpus_raw[fold_name] bert_toks_by_fold[fold_name] = tp.jupyter.run_with_progress_bar( len(raw), lambda i: maybe_shrink_embeddings(tp.io.bert.conll_to_bert( raw[i], tokenizer, bert, token_class_dtype))) bert_toks_by_fold["dev"][20] # - # ## Collate the data structures we've generated so far # Create a single DataFrame with the entire corpus's embeddings. corpus_df = tp.io.conll.combine_folds(bert_toks_by_fold) corpus_df # ## Checkpoint # # With the `TensorArray` from Text Extensions for Pandas, the computed embeddings can be persisted as a tensor along with the rest of the DataFrame using standard Pandas input/output methods. Since this is a costly operation and the embeddings are deterministic, it can save lots of time to checkpoint the data here and save the results to disk. This will allow us to continue working with model training without needing to re-compute the BERT embeddings again. # # ### Save DataFrame with Embeddings Tensor # Write the tokenized corpus with embeddings to a Feather file. # We can't currently serialize span columns that cover multiple documents (see issue #73 https://github.com/CODAIT/text-extensions-for-pandas/issues/73), # so drop span columns from the contents we write to the Feather file. cols_to_drop = [c for c in corpus_df.columns if "span" in c] corpus_df.drop(columns=cols_to_drop).to_feather("outputs/corpus.feather") # ### Load DataFrame with Previously Computed Embeddings # Read the serialized embeddings back in so that you can rerun the model # training parts of this notebook (the cells from here onward) without # regenerating the embeddings. corpus_df = pd.read_feather("outputs/corpus.feather") corpus_df # ## Training a model on the BERT embeddings # # Now we will use the loaded BERT embeddings to train a multinomial model to predict the token class from the embeddings tensor. # Extract the training set DataFrame. train_df = corpus_df[corpus_df["fold"] == "train"] train_df # + # Train a multinomial logistic regression model on the training set. MULTI_CLASS = "multinomial" # How many iterations to run the BGFS optimizer when fitting logistic # regression models. 100 ==> Fast; 10000 ==> Full convergence LBGFS_ITERATIONS = 10000 base_pipeline = sklearn.pipeline.Pipeline([ # Standard scaler. This only makes a difference for certain classes # of embeddings. #("scaler", sklearn.preprocessing.StandardScaler()), ("mlogreg", sklearn.linear_model.LogisticRegression( multi_class=MULTI_CLASS, verbose=10, max_iter=LBGFS_ITERATIONS )) ]) X_train = train_df["embedding"].values Y_train = train_df["token_class_id"] base_model = base_pipeline.fit(X_train, Y_train) base_model # - # ## Make Predictions on Token Class from BERT Embeddings # # Using our model, we can now predict the token class from the test set using the computed embeddings. # Define a function that will let us make predictions on a fold of the corpus. def predict_on_df(df: pd.DataFrame, id_to_class: Dict[int, str], predictor): """ Run a trained model on a DataFrame of tokens with embeddings. :param df: DataFrame of tokens for a document, containing a TokenSpan column called "embedding" for each token. :param id_to_class: Mapping from class ID to class name, as returned by :func:`text_extensions_for_pandas.make_iob_tag_categories` :param predictor: Python object with a `predict_proba` method that accepts a numpy array of embeddings. 
:returns: A copy of `df`, with the following additional columns: `predicted_id`, `predicted_class`, `predicted_iob`, `predicted_type` and `predicted_class_pr`. """ result_df = df.copy() class_pr = tp.TensorArray(predictor.predict_proba(result_df["embedding"])) result_df["predicted_id"] = np.argmax(class_pr, axis=1) result_df["predicted_class"] = [id_to_class[i] for i in result_df["predicted_id"].values] iobs, types = tp.io.conll.decode_class_labels(result_df["predicted_class"].values) result_df["predicted_iob"] = iobs result_df["predicted_type"] = types result_df["predicted_class_pr"] = class_pr return result_df # Make predictions on the test set. test_results_df = predict_on_df(corpus_df[corpus_df["fold"] == "test"], int_to_label, base_model) test_results_df.head() # Take a slice to show a region with more entities. test_results_df.iloc[40:50] # ## Compute Precision and Recall # # With our model predictions on the test set, we can now compute precision and recall. To do this, we will use the following steps: # # 1. Split up test set predictions by document, so we can work on the document level. # 1. Join the test predictions with token information into one DataFrame per document. # 1. Convert each DataFrame from IOB2 format to span, entity type pairs as done before. # 1. Compute accuracy for each document as a DataFrame. # 1. Aggregate per-document accuracy to get overal precision/recall. # + # Split model outputs for an entire fold back into documents and add # token information. # Get unique documents per fold. fold_and_doc = test_results_df[["fold", "doc_num"]] \ .drop_duplicates() \ .to_records(index=False) # Index by fold, doc and token id, then make sure sorted. indexed_df = test_results_df \ .set_index(["fold", "doc_num", "token_id"], verify_integrity=True) \ .sort_index() # Join predictions with token information, for each document. test_results_by_doc = {} for collection, doc_num in fold_and_doc: doc_slice = indexed_df.loc[collection, doc_num].reset_index() doc_toks = bert_toks_by_fold[collection][doc_num][ ["token_id", "span", "ent_iob", "ent_type"] ].rename(columns={"id": "token_id"}) joined_df = doc_toks.copy().merge( doc_slice[["token_id", "predicted_iob", "predicted_type"]]) test_results_by_doc[(collection, doc_num)] = joined_df # Test results are now in one DataFrame per document. test_results_by_doc[("test", 0)].iloc[40:60] # + # Convert IOB2 format to spans, entity type with `tp.io.conll.iob_to_spans()`. test_actual_spans = {k: tp.io.conll.iob_to_spans(v) for k, v in test_results_by_doc.items()} test_model_spans = {k: tp.io.conll.iob_to_spans(v, iob_col_name = "predicted_iob", entity_type_col_name = "predicted_type") .rename(columns={"predicted_type": "ent_type"}) for k, v in test_results_by_doc.items()} test_model_spans[("test", 0)].head() # - # Compute per-document statistics into a single DataFrame. test_stats_by_doc = tp.io.conll.compute_accuracy_by_document(test_actual_spans, test_model_spans) test_stats_by_doc # Collection-wide precision and recall can be computed by aggregating # our DataFrame. tp.io.conll.compute_global_accuracy(test_stats_by_doc) # ### Adjusting the BERT Model Output # # The above results aren't bad for a first shot, but taking a look a some of the predictions will show that sometimes the tokens have been split up into multiple entities. 
This is because the BERT tokenizer uses WordPiece to make subword tokens, see https://huggingface.co/transformers/tokenizer_summary.html and https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf for more information. # # This is going to cause a problem when computing precision/recall because we are comparing exact spans, and if the entity is split, it will be counted as a false negative _and_ possibly one or more false positives. Luckily we can fix up with Text Extension for Pandas. # # Let's drill down to see an example of the issue and how to correct it. # Every once in a while, the BERT model will split a token in the original data # set into multiple entities. For example, look at document 202 of the test set: test_model_spans[("test", 202)].head(10) # Notice `[150, 151): 'W'` and `[151, 156): 'idnes'`. These outputs are part # of the same original token, but have been split by the model. # + # We can use spanner algebra in `tp.spanner.overlap_join()` # to fix up these outputs. spans_df = test_model_spans[("test", 202)] toks_df = test_raw[202] # First, find which tokens the spans overlap with: overlaps_df = ( tp.spanner.overlap_join(spans_df["span"], toks_df["span"], "span", "corpus_token") .merge(spans_df) ) overlaps_df.head(10) # - # Next, compute the minimum span that covers all the corpus tokens # that overlap with each entity span. agg_df = ( overlaps_df .groupby("span") .aggregate({"corpus_token": "sum", "ent_type": "first"}) .reset_index() ) agg_df.head(10) # Finally, take unique values and covert character-based spans to token # spans in the corpus tokenization (since the new offsets might not match a # BERT tokenizer token boundary). cons_df = ( tp.spanner.consolidate(agg_df, "corpus_token")[["corpus_token", "ent_type"]] .rename(columns={"corpus_token": "span"}) ) cons_df["span"] = tp.TokenSpanArray.align_to_tokens(toks_df["span"], cons_df["span"]) cons_df.head(10) # Text Extensions for Pandas contains a single function that repeats the actions of the # previous 3 cells. tp.io.bert.align_bert_tokens_to_corpus_tokens(test_model_spans[("test", 202)], test_raw[202]).head(10) # Run all of our DataFrames through `align_bert_tokens_to_corpus_tokens()`. keys = list(test_model_spans.keys()) new_values = tp.jupyter.run_with_progress_bar( len(keys), lambda i: tp.io.bert.align_bert_tokens_to_corpus_tokens(test_model_spans[keys[i]], test_raw[keys[i][1]])) test_model_spans = {k: v for k, v in zip(keys, new_values)} test_model_spans[("test", 202)].head(10) # Compute per-document statistics into a single DataFrame. test_stats_by_doc = tp.io.conll.compute_accuracy_by_document(test_actual_spans, test_model_spans) test_stats_by_doc # Collection-wide precision and recall can be computed by aggregating # our DataFrame. tp.io.conll.compute_global_accuracy(test_stats_by_doc) # These results are a bit better than before, and while the F1 score is not high compared to todays standards, it is decent enough for a simplistic model. More importantly, we did show it was fairly easy to create a model for named entity recognition and analyze the output by leveraging the functionalitiy of Pandas DataFrames along with [Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas) `SpanArray`, `TensorArray` and integration with BERT from [Huggingface Transformers](https://huggingface.co/transformers/index.html). 
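# For reference, the F1 score discussed in this notebook is the harmonic mean of precision and recall.
# A minimal sketch of that calculation (the precision/recall values below are hypothetical, not numbers produced by the cells above):
precision, recall = 0.72, 0.68  # hypothetical values
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.3f}")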
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import numpy as np import sys if "../" not in sys.path: sys.path.append("../") from lib.envs.gridworld import GridworldEnv env = GridworldEnv() env.P.keys() #for saTuple in env.P[0]: # print env.P[0][saTuple] print env.P[0] print np.ones([env.nS, env.nA]) / env.nA def policy_eval(policy, env, discount_factor=1.0, theta=0.00001): """ Evaluate a policy given an environment and a full description of the environment's dynamics. Args: policy: [S, A] shaped matrix representing the policy. env: OpenAI env. env.P represents the transition probabilities of the environment. env.P[s][a] is a (prob, next_state, reward, done) tuple. theta: We stop evaluation once our value function change is less than theta for all states. discount_factor: gamma discount factor. Returns: Vector of length env.nS representing the value function. """ # Start with a random (all 0) value function V = np.zeros(env.nS) #while True: # prevent infinite loops for cnt in range(1000): delta = 0.0 for si in range(policy.shape[0]): v = V[si] tmp = 0 for saInd in env.P[si]: saTuple = env.P[si][saInd][0] tmp += policy[si][saInd] * saTuple[0] * (saTuple[2] + discount_factor * V[saTuple[1]]) V[si] = tmp delta = max(delta,abs(v - V[si])) if delta < theta: break return np.array(V) random_policy = np.ones([env.nS, env.nA]) / env.nA v = policy_eval(random_policy, env) # Test: Make sure the evaluated policy is what we expected expected_v = np.array([0, -14, -20, -22, -14, -18, -20, -20, -20, -20, -18, -14, -22, -20, -14, 0]) np.testing.assert_array_almost_equal(v, expected_v, decimal=2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 9.4.2 동영상을 frame별 화상 데이터로 변환 # # 다운로드한 동영상을 frame별로 jpeg 형식의 화상 데이터로 변환합니다. # # ## 사전 준비 # # ffmepg가 설치되어 있지 않은 경우에는 # 다음 명령을 터미널에서 실행하여, Ubuntu에 ffmpeg를 설치합니다. # # - sudo apt install ffmpeg # # + import os import subprocess # 터미널에서 실행할 명령을 수행할 수 있다 # 동영상이 저장된 "kinetics_videos"폴더에 있는, 클래스의 종류와 경로를 취득 dir_path = './data/kinetics_videos' class_list = os.listdir(path=dir_path) print(class_list) # 각 클래스의 동영상 파일을 화상 파일로 변환한다 for class_list_i in (class_list): # 클래스별 루프 # 클래스의 폴더 경로를 취득 class_path = os.path.join(dir_path, class_list_i) # 각 클래스 폴더 내의 동영상 파일을 하나식 처리하는 루프 for file_name in os.listdir(class_path): # 파일명과 확장자로 분할 name, ext = os.path.splitext(file_name) # mp4 파일이 아니거나, 폴더 등은 처리하지 않음 if ext != '.mp4': continue # 동영상 파일을 화상으로 분할하여 저장할 폴더명을 취득 dst_directory_path = os.path.join(class_path, name) # 위의 화상 저장 폴더가 없으면 작성 if not os.path.exists(dst_directory_path): os.mkdir(dst_directory_path) # 동영상 파일의 경로를 취득 video_file_path = os.path.join(class_path, file_name) # ffmpeg를 실행하여, 동영상 파일을 jpg로 바꿈(높이 256 픽셀로, 폭은 화면 비율을 바꾸지 않음) # kinetics 동영상은 10초이며, 대략 300개의 파일이 됨(30 frames /sec) cmd = 'ffmpeg -i \"{}\" -vf scale=-1:256 \"{}/image_%05d.jpg\"'.format( video_file_path, dst_directory_path) print(cmd) subprocess.call(cmd, shell=True) print('\n') print("동영상 파일을 화상 파일로 변환했습니다.") # - # 끝 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Q1. 
Create the pattern below using nested for loops in Python. # #

# *
# * *
# * * *
# * * * *
# * * * * *
# * * * *
# * * *
# * *
# *
#
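# A minimal sketch using nested for loops that prints exactly the pattern above (assuming 5 rows, as in the answer cell that follows):
rows = 5
# Upper half: 1 .. rows stars per row
for i in range(1, rows + 1):
    for j in range(i):
        print("*", end=" ")
    print()
# Lower half: rows-1 .. 1 stars per row
for i in range(rows - 1, 0, -1):
    for j in range(i):
        print("*", end=" ")
    print()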

# + rows=5 #Printing above half for i in range(rows): for j in range(i): print('*',end=" ") print('\n') #Printing the lower half for i in range(rows,0,-1): for j in range(i): print('*',end=" ") print("\n") # - # ## Q2. Write a Python program to reverse a word after accepting the input from the user. # # ### Sample Output: # # #### Input word: ineuron # #### Output: norueni word=input("Input word:") print("Output: ",word[::-1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: scale # language: python # name: scale # --- import tensorflow as tf from tensorflow import keras from tensorflow.keras.layers import Dense, DenseFeatures, Lambda, Add, Dot, Activation, Layer from tensorflow.keras import Model from tensorflow.keras.initializers import GlorotNormal from Layer_FM import Layer_FM import numpy as np from utils.conf_process import myconfigparser from utils.feature_process import tuple_feature_process from utils.active_method import activate from utils.optimizer_method import get_optimizer from utils.distribut_conf import set_dist_env EMBEDDING_SIZE = 8 HIDDEN_UNITS = [32, 8] BATCH_SIZE = 4 def build_model_columns(conf_path): deep_columns = list() sparse_cols = [] dense_cols = [] sparse_col_names = [] dense_col_names = [] default_dict = {} embedding_nums = 0 cfg = myconfigparser() cfg.read(conf_path) sections = cfg.sections() for part_key in sections: items_lst = cfg.items(part_key) if part_key == "deep": for items in items_lst: feature_name = items[0].strip() feature_col,default_val,embedding_num = tuple_feature_process(items,int(EMBEDDING_SIZE)) embedding_nums += embedding_num if feature_name not in default_dict: default_dict[feature_name] = default_val deep_columns.append(feature_col) # if isinstance(feature_col, tf.feature_column.indicator_column): # sparse_cols.append(feature_col) # sparse_col_names.append(feature_name) # else: # dense_cols.append(feature_col) # dense_col_names.append(feature_name) #获取所有特征列(cfg取出的格式为[(特征名,具体值)]) columns = cfg.items("use")[0][1].strip().split(",") sparse_cols_names = cfg.items("sparse_feature")[0][1].strip().split(",") numeric_cols_names = cfg.items("numeric_feature")[0][1].strip().split(",") default_dict["label"] = 0 cols_default = list(map(lambda x:["-1"] if x not in default_dict else [default_dict[x]], columns)) return deep_columns, columns, cols_default,embedding_nums, sparse_cols_names, numeric_cols_names deep_columns, columns, cols_default, embedding_num, sparse_cols_names, numeric_cols_names = build_model_columns('data/feature_engineering.conf') # ## Split sparse and numeric feature columns # + sparse_feature_cols = filter(lambda col: col.name.split('_indicator')[0] in sparse_cols_names, deep_columns) numeric_feature_cols = filter(lambda col: col.name in numeric_cols_names, deep_columns) sparse_feature_cols = list(sparse_feature_cols) numeric_feature_cols = list(numeric_feature_cols) print(sparse_feature_cols) print(numeric_feature_cols) # - # ## Generate dataset from csv LABEL_COLUMN = 'label' def get_dataset(file_path, **kwargs): dataset = tf.data.experimental.make_csv_dataset( file_path, batch_size=5, # Artificially small to make examples easier to show. 
label_name=LABEL_COLUMN, na_value="?", num_epochs=1, ignore_errors=True, **kwargs) return dataset raw_train_data = get_dataset('data/train.data.csv', column_names=columns) def show_batch(dataset): for batch, label in dataset.take(1): for key, value in batch.items(): print("{:20s}: {}".format(key,value.numpy())) show_batch(raw_train_data) sparse_train_data = get_dataset('data/train.data.csv', column_names=columns,select_columns=sparse_cols_names.copy()) # use copy to avoid make_csv_dataset changes the select_columns from str list to index list numeric_train_data = get_dataset('data/train.data.csv', column_names=columns,select_columns=numeric_cols_names.copy()) show_batch(sparse_train_data) print() show_batch(numeric_train_data) # ## sp_feature_names = sparse_cols_names.copy() sp_feature_names.remove('label') sp_feature_names num_feature_names = numeric_cols_names.copy() num_feature_names.remove('label') num_feature_names sparse_train_batch, labels_batch = next(iter(sparse_train_data)) sparse_train_batch # ## Customize FM layer # ## Generate Inputs Layer sp_features_len = len(sp_feature_names) num_features_len = len(num_feature_names) print(sp_features_len) print(num_features_len) sparse_inputs = { colname : tf.keras.layers.Input(shape=(), name=colname, dtype=tf.string, sparse=True) for colname in sp_feature_names } numeric_inputs = { colname : tf.keras.layers.Input(shape=(), name=colname, dtype='float32', sparse=False) for colname in num_feature_names } print(sparse_inputs) print(numeric_inputs) # ## Model Definition # + sp_dense = DenseFeatures(sparse_feature_cols, name='sparse_input_layer')(sparse_inputs) num_dense = DenseFeatures(numeric_feature_cols, name='numeric_input_layer')(numeric_inputs) #---linear--- both = tf.keras.layers.concatenate([sp_dense, num_dense], name='both') first_order = Dense(1, input_shape=both.shape, name='first_order')(both) #---fm--- fm_inputs = {'sparse': sp_dense, 'numeric': num_dense} second_order, v_em = Layer_FM(sp_features_len, num_features_len, EMBEDDING_SIZE, name='FM')(fm_inputs) # ---deep--- field_size = sp_features_len + num_features_len deep_inputs = tf.reshape(v_em, shape=(-1, field_size*EMBEDDING_SIZE)) for layer_id, num_hidden_units in enumerate(HIDDEN_UNITS): layer_name = 'deep_%d'%(layer_id) deep_inputs = Dense(num_hidden_units, activation='relu', kernel_initializer=GlorotNormal(), name=layer_name)(deep_inputs) y_deep = Dense(1, activation='relu', kernel_regularizer=tf.keras.regularizers.l2)(deep_inputs) outputs = first_order+second_order+y_deep outputs = Activation('sigmoid')(outputs) model = Model([sparse_inputs, numeric_inputs], outputs) # - model.summary() from tensorflow.keras.utils import plot_model plot_model(model, to_file='model.png') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Information about this notebook # # This example script was provided as part of the Data Management Project (INF) within the TR-172 ["ArctiC Amplification: Climate Relevant Atmospheric and SurfaCe Processes, and Feedback Mechanisms" (AC)³](http://www.ac3-tr.de/) funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) # # Author: , [Institute of Environmental Physics](http://www.iup.uni-bremen.de), University of Bremen, Germany, # # Github repository: https://github.com/ac3-tr/ac3-notebooks # # **Setup instructions for a reference Python Environment can be found 
on the [Github page](https://github.com/ac3-tr/ac3-notebooks)** import matplotlib.pyplot as plt import matplotlib.dates as dates from matplotlib.collections import LineCollection import numpy as np import datetime as dt import cartopy.crs as ccrs import os from pyhdf import SD # %matplotlib inline # # TCCON data from Ny Ålesund # # # ## Dataset resources # # **Title:** NDACC FTIR O$_3$ profiles from Ny Ålesund, Svalbard # # **Author** # # **Year** 2019 # # **Institute** Institute of Environmental Physics, University of Bremen, Bremen (DE) # # **Data hosted at** [FTP Access](ftp://ftp.cpc.ncep.noaa.gov/ndacc/station/nyalsund/hdf/ftir/) # # **DOI** *no DOI assigned yet* # # **License** [TCCON Data Use Policy](https://data.caltech.edu/tindfiles/serve/90348ea4-f340-4f43-8db2-b9beb7845519/) # # # ## Abstract # Profile retrieval results of Ozone at Ny-Ålesund, Svalbard. Derived from solar absorption spectra, measured with the Fourier-Transform InfraRed (FTIR) Spectrometer, hosted at the AWIPEV observatory. # # ## Reading example dataset # # The dataset can be downloaded via the link above and saved in the current working directory of this notebook. The HDF file can be opened and read with pyhdf. Parameter description can be printed. # + datafolder = '../ac3/INF/pangaea_download/' fname = 'groundbased_ftir.o3_awi001_ny.alesund_20170403t140027z_20170915t111845z_005.hdf' data = SD.SD(os.path.join(datafolder, fname)) o3vmr = data.select('O3.MIXING.RATIO.VOLUME_ABSORPTION.SOLAR') for k in o3vmr.attributes().keys(): print (k, o3vmr.attributes()[k]) o3vmr = o3vmr.get()[:] o3col = data.select('O3.COLUMN_ABSORPTION.SOLAR').get()[:] o3vmr_ap = data.select('O3.MIXING.RATIO.VOLUME_ABSORPTION.SOLAR_APRIORI').get()[:] o3vmr_avk = data.select('O3.MIXING.RATIO.VOLUME_ABSORPTION.SOLAR_AVK').get()[:] alt = data.select('ALTITUDE').get() dat = np.array([dt.datetime(d.year, d.month, d.day, d.hour, d.minute, d.second) for d in dates.num2date(data.select('DATETIME').get()+730120.0)]) data.end() # - # ## Ozone profile plot # # Using Matplotlib's LineCollection method, the day of year cann be assigned a color and the profiles displayed in bulk, with the color showing the time information. # + Coll = [list(zip(o3vmr[i], alt)) for i in range(o3vmr.shape[0])] cmap = plt.get_cmap('winter') line_segments = LineCollection(Coll, cmap=cmap, linewidth=1) line_segments.set_array(np.array([i.toordinal()-dt.datetime(2017,1,1).toordinal() for i in dat])) fig, ax = plt.subplots(1, figsize=(7,7)) ax.set_xlabel('O$_3$ mixing ratio [ppmv]') ax.set_ylabel('Altitude [km]') ax.set_xlim(0,8) ax.set_ylim(0,max(alt)) ax.add_collection(line_segments) axcb = fig.colorbar(line_segments, pad=0.02) axcb.set_label('Day of year 2017') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="PmrOLukFMW9v" # # Import packages # + colab={} colab_type="code" id="uFFLDRpGVu3J" import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.stats import multivariate_normal from sklearn.metrics import classification_report from matplotlib import cm # + [markdown] colab_type="text" id="0VqZGutNc65m" # # Load training data # # Our data has 2D feature $x1, x2$. Data from the two classes is are in $\texttt{class1_train}$ and $\texttt{class2_train}$ respectively. Each file has two columns corresponding to the 2D feature. 
# + colab={} colab_type="code" id="jyVpK1m7drij" class1_train = pd.read_csv('class1_train').to_numpy() class2_train = pd.read_csv('class2_train').to_numpy() # + [markdown] colab_type="text" id="cV4oAZdlYAwV" # # Visualize training data # Generate 2D scatter plot of the training data. Plot the points from class 1 in red and the points from class 2 in blue. # + colab={} colab_type="code" id="c3D3W5XGYCkB" import seaborn as sns classes = ['class-1','class-2'] for i in range(class1_train.shape[0]): plt.scatter(class1_train[i][0],class1_train[i][1] ,c="red",alpha=0.6, edgecolors='none') plt.xlabel('Growth') plt.ylabel('Population Size') for j in range(class2_train.shape[0]): plt.scatter(class1_train[j][0],class1_train[j][1] ,c="blue") # + [markdown] colab_type="text" id="EBa6Br1-ZF9D" # # Maximum likelihood estimate of parameters # # We will model the likelihood, $P(\mathbf{x}|C_1)$ and $P(\mathbf{x}|C_2)$ as $\mathcal{N}(\mathbf{\mu_1},|\Sigma_1)$ and $\mathcal{N}(\mathbf{\mu_2},|\Sigma_2)$ respectively. The prior probability of the classes are called, $P(C_1)=\pi_1$ and $P(C_2)=\pi_2$. # # The maximum likelihood estimate of the parameters as follows: # \begin{align*} # \pi_k &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)}{N}\\ # \mathbf{\mu_k} &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)\mathbf{x}^i}{\sum_{i=1}^N \mathbb{1}(t^i=k)}\\ # \Sigma_k &= \frac{\sum_{i=1}^N \mathbb{1}(t^i=k)(\mathbf{x}^i-\mathbf{\mu_k})(\mathbf{x}^i-\mathbf{\mu_k})^T}{\sum_{i=1}^N \mathbb{1}(t^i=k)}\\ # \end{align*} # # Here, $t^i$ is the target or class of $i^{th}$ sample. $\mathbb{1}(t^i=k)$ is 1 if $t^i=k$ and 0 otherwise. # # Compute maximum likelihood values estimates of $\pi_1$, $\mu_1$, $\Sigma_1$ and $\pi_2$, $\mu_2$, $\Sigma_2$ # # Also print these values # # + colab={} colab_type="code" id="REKlzGnKclHE" def calculate_pi_1(): num = class1_train.shape[0] deno = class1_train.shape[0] + class2_train.shape[0] return num/deno def calculate_pi_2(): num = class2_train.shape[0] deno = class1_train.shape[0] + class2_train.shape[0] return num/deno def calculate_mu_1(): return class1_train.mean(axis=0) def calculate_mu_2(): return class2_train.mean(axis=0) def calculate_cov_1(): x = class1_train print(x.shape) mu = x.mean(axis=0) x_norm = x-mu x_transpose = x_norm.transpose() return np.cov(x_transpose) def calculate_cov_2(): x = class2_train print(x.shape) mu = x.mean(axis=0) x_norm = x-mu x_transpose = x_norm.transpose() return np.cov(x_transpose) print( 'pi_1 : {} and pi_2 : {}'.format(calculate_pi_1(),calculate_pi_2())) print( 'mu_1 : {} and mu_2 : {}'.format(calculate_mu_1(),calculate_mu_2())) print( 'sigma_1 : \n{} \n sigma_2 : \n{}'.format(calculate_cov_1(),calculate_cov_2())) # + [markdown] colab_type="text" id="pHshjXHQ8rlb" # # Visualize the likelihood # Now that you have the parameters, let us visualize how the likelihood looks like. # # 1. Use $\texttt{np.mgrid}$ to generate points uniformly spaced in -5 to 5 along 2 axes # 1. Use $\texttt{multivariate_normal.pdf}$ to get compute the Gaussian likelihood for each class # 1. Use $\texttt{plot_surface}$ to plot the likelihood of each class. # 1. Use $\texttt{contourf}$ to plot the likelihood of each class. # # You may find the code in the lecture notebook helpful. # # For the plots, use $\texttt{cmap=cm.Reds}$ for class 1 and $\texttt{cmap=cm.Blues}$ for class 2. Use $\texttt{alpha=0.5}$ to overlay both plots together. 
# + colab={} colab_type="code" id="Zjslmo-j83KH" from matplotlib import cm x,y = np.mgrid[-5:5:.01, -5:5:.01] pos = np.empty(x.shape + (2,)) pos[:, :, 0] = x; pos[:, :, 1] = y mu1 = calculate_mu_1() mu2 = calculate_mu_2() cov1 = calculate_cov_1() cov2 = calculate_cov_2() rv1 = multivariate_normal(mean = mu1, cov = cov1) rv2 = multivariate_normal(mean = mu2, cov = cov2) fig = plt.figure(figsize=(20,10)) ax = fig.add_subplot(121, projection='3d') plt.xlabel('x') plt.ylabel('y') ax.plot_surface(x,y,rv1.pdf(pos), cmap=cm.Reds,alpha=0.5) ax.plot_surface(x,y,rv2.pdf(pos), cmap=cm.Blues,alpha=0.5) plt.subplot(122) plt.contourf(x, y, rv1.pdf(pos), cmap=cm.Reds,alpha=0.5) plt.contourf(x, y, rv2.pdf(pos), cmap=cm.Blues,alpha=0.5) plt.colorbar() plt.xlabel('x') plt.ylabel('y') # + [markdown] colab_type="text" id="BPZBa1Z5AfLc" # #Visualize the posterior # Use the prior and the likelihood you've computed to obtain the posterior distribution for each class. # # Like in the case of the likelihood above, make same similar surface and contour plots for the posterior. # + colab={} colab_type="code" id="oTQTLL0CAiij" likelihood1 = rv1.pdf(pos) likelihood2 = rv2.pdf(pos) pi1 = len(class1_train)/(len(class1_train)+len(class2_train)) pi2 = len(class2_train)/(len(class1_train)+len(class2_train)) p1 = (likelihood1 * pi1)/(likelihood1*pi1+likelihood2*pi2) p2 = (likelihood2 * pi2)/(likelihood1*pi1+likelihood2*pi2) x, y = np.mgrid[-5:5:.01, -5:5:.01] pos = np.empty(x.shape + (2,)) pos[:, :, 0] = x; pos[:, :, 1] = y fig = plt.figure(figsize=(20,10)) ax = fig.add_subplot(131, projection='3d') plt.xlabel('x') plt.ylabel('y') ax.plot_surface(x,y,p1, cmap=cm.Reds,alpha=0.5) ax.plot_surface(x,y,p2, cmap=cm.Blues,alpha=0.5) plt.subplot(132) plt.contourf(x,y,p1,cmap=cm.Reds,alpha=0.5) plt.contourf(x,y,p2,cmap=cm.Blues,alpha=0.5) plt.xlabel('x') plt.ylabel('y') # + [markdown] colab_type="text" id="3-z8dLtbEkdi" # # Decision boundary # 1. Decision boundary can be obtained by $P(C_2|x)>P(C_1|x)$ in python. Use $\texttt{contourf}$ to plot the decision boundary. Use $\texttt{cmap=cm.Blues}$ and $\texttt{alpha=0.5}$ # 1. Also overlay the scatter plot of train data points from the 2 classes on the same plot. Use red color for class 1 and blue color for class 2 # + colab={} colab_type="code" id="0GPzpqy2Dy_b" des = p2>p1 plt.contourf(x,y,p1,cmap=cm.Reds,alpha=0.5) plt.contourf(x,y,p2,cmap=cm.Blues,alpha=0.5) plt.contourf(x,y,des,cmap=cm.Greens,alpha=0.3) plt.xlabel('x') plt.ylabel('y') plt.scatter(class1_train[:,0],class1_train[:,1],marker='*',color='red') plt.scatter(class2_train[:,0],class2_train[:,1],marker='+',color='blue') # + [markdown] colab_type="text" id="HBtAykz2FihL" # # Test Data # Now let's use our trained model to classify test data points # # 1. $\texttt{test_data}$ contains the $x1,x2$ features of different data points # 1. $\texttt{test_label}$ contains the true class of the data points. 0 means class 1. 1 means class 2. # 1. Classify the test points based on whichever class has higher posterior probability for each data point # 1. 
Use $\texttt{classification_report}$ to test the classification performance # + colab={} colab_type="code" id="VbxiXB0bD6le" test = pd.read_csv('test').to_numpy() test_data, test_label = test[:,:2], test[:,2] test_data # + ## likelihood l1 = rv1.pdf(test_data) l2 = rv2.pdf(test_data) # - ##Posterior p1_test= (l1*pi1)/(l1*pi1+l2*pi2) p2_test= (l2*pi2)/(l1*pi1+l2*pi2) # + ## Descision bundory test_data_predict=p2_test>p1_test test_data_predict # + test_data_predict = np.where(test_data_predict==True,1,0) test_data_predict # - from sklearn.metrics import classification_report # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np # 一维向量 v = np.array([1, 2, 3, 4]) print('v: ', v) # 二维向量 m = np.array([[1, 2], [3, 4]]) print('m: ', m) # 显示维度 print('v.shape: ', v.shape) print('m.shape: ', m.shape) # 显示大小 print('v.size: ', v.size) print('m.size: ', m.size) # + # 数组生成函数 # 固定步长 x = np.arange(1, 10, 1) # arguments: start, stop, step print('x:\n', x) # 固定长度 y = np.linspace(-10, 10, 25) print('y:\n', y) z = np.logspace(0, 10, 10, base=np.e) print('z:\n', z) # + # 生成随机矩阵 # uniform random numbers in [0,1] np.random.rand(5, 5) # standard normal distributed random numbers np.random.randn(5, 5) # + # 对角阵 np.diag([1, 2, 3]) # 次对角阵 np.diag([1, 2, 3, 4], k = 1) # 零矩阵 np.zeros([3,3]) # 一矩阵 np.ones([3,3]) # + # 索引 print(m) print(m[0, 0]) print(m[:,1]) # + # 切片, 同list a = np.arange(5) a[::2] # parameters: start, end, step # - A = np.array([[n+m*10 for n in range(5)] for m in range(5)]) print(A) print(A[1:3, 1:3]) x = np.arange(0, 10, 0.5) x[x>3] x[(x>3) & (x<5)] np.where((x>3) & (x<5)) # + a = np.repeat('abc',3) a = np.array([[1, 2], [3, 4]]) np.repeat(a, 3) np.tile(a, 3) # + # 使用矢量化函数 def Theta(x): if x>=0: return 1 else: return 0 # 矢量化 Theta_vec = np.vectorize(Theta) Theta_vec(np.array([-1, 0, -1, 1, 3])) # + # any 和 all 的用法 M = np.array([[1, 4], [9, 16]]) if (M > 5).any(): print('At least one element in M is bigger than 5') else: print('no element in M is larger than 5') if (M > 5).all(): print('All elements in M are bigger than 5') else: print('Not all elements in M is larger than 5') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="P4f8LKqeA57v" # # Data Upload # + id="JXe9qYNAVTPK" executionInfo={"status": "ok", "timestamp": 1645715200702, "user_tz": -180, "elapsed": 3094, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} # Main Packages import numpy as np import pandas as pd import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns from matplotlib.ticker import PercentFormatter import calendar import plotly.graph_objects as go from plotly.subplots import make_subplots # %matplotlib inline import plotly.express as px # + id="8N1C1wPFEDa6" executionInfo={"status": "ok", "timestamp": 1645715202702, "user_tz": -180, "elapsed": 269, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} import sqlite3 conn = sqlite3.connect(":memory:") # + id="mLVXhn6f6Kx2" executionInfo={"status": "ok", "timestamp": 1645715215574, "user_tz": -180, "elapsed": 11580, "user": {"displayName": "", 
"photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} # Customers, Sellers customers= pd.read_csv('/content/drive/MyDrive/Colab Notebooks/olist_customers_dataset.csv') sellers= pd.read_csv('/content/drive/MyDrive/Colab Notebooks/olist_sellers_dataset.csv') # Product products= pd.read_csv('/content/drive/MyDrive/Colab Notebooks/olist_products_dataset.csv') products_name = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/product_category_name_translation.csv') # Order orders= pd.read_csv('/content/drive/MyDrive/Colab Notebooks/olist_orders_dataset.csv') orders_item= pd.read_csv('/content/drive/MyDrive/Colab Notebooks/olist_order_items_dataset.csv') orders_payment= pd.read_csv('/content/drive/MyDrive/Colab Notebooks/olist_order_payments_dataset.csv') orders_review= pd.read_csv('/content/drive/MyDrive/Colab Notebooks/olist_order_reviews_dataset.csv') # + [markdown] id="De8C6h7S7kxG" # # Data Preprocessing # + [markdown] id="hbFu4Gd1AFxZ" # ## Data Integration # + id="TU1hjxDBAFVm" executionInfo={"status": "ok", "timestamp": 1645717100199, "user_tz": -180, "elapsed": 2849, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} full_data = pd.merge(orders,orders_item,left_on='order_id',right_on='order_id') full_data = full_data.merge(orders_payment,on ='order_id') full_data = full_data.merge(orders_review,on ='order_id') full_data = full_data.merge(customers,on ='customer_id') full_data = full_data.merge(sellers,on ='seller_id') full_data = full_data.merge(products,on ='product_id') full_data = full_data.merge(products_name,on ='product_category_name') # + [markdown] id="ickSW7vm_TGC" # ## Data Reduction # + id="sTLJGPMb_ULe" executionInfo={"status": "ok", "timestamp": 1645717105704, "user_tz": -180, "elapsed": 11, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} full_data.drop(['product_name_lenght', 'product_description_lenght' ,'product_photos_qty' ,'product_weight_g' , 'product_length_cm', 'product_height_cm' ,'product_width_cm','product_category_name','review_comment_title', 'review_comment_message','review_creation_date','review_answer_timestamp','order_delivered_carrier_date'],axis=1,inplace=True) # + [markdown] id="gf-fmDQR_gsC" # ## Data Cleaning - Missing Data # + colab={"base_uri": "https://localhost:8080/"} id="tHpJ-9UZ6RAo" executionInfo={"status": "ok", "timestamp": 1645362895186, "user_tz": -180, "elapsed": 997, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="9411a25e-9900-43cd-b88f-ba4f33e14ffc" for feature in full_data.columns: if full_data[feature].isnull().sum() >=1: print(feature, '---> total nulls are',(full_data[feature].isnull().sum())) # + id="KCzoJXVq6aI7" executionInfo={"status": "ok", "timestamp": 1645717110082, "user_tz": -180, "elapsed": 2, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} #Fill the nulls --> order_approved_at full_data["order_approved_at"].fillna(full_data["order_purchase_timestamp"], inplace=True) #Fill the nulls --> order_delivered_customer_date full_data["order_delivered_customer_date"].fillna(full_data["order_estimated_delivery_date"], inplace=True) # + colab={"base_uri": "https://localhost:8080/"} id="69ZggpVB6cbV" executionInfo={"status": "ok", "timestamp": 
1645362904266, "user_tz": -180, "elapsed": 1674, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="9c30a56e-c0e0-4bc9-c466-b798a5117330" full_data.info() # + [markdown] id="elHins77ALxS" # ## Data Transformation and Feature Extraction # + id="7nKexHGPYCR3" executionInfo={"status": "ok", "timestamp": 1645717124951, "user_tz": -180, "elapsed": 11125, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} #As the type of the purchase date is object, the type datetime64[ns] is converted first. #And time, day, month, month-year and year information is received full_data['order_purchase_timestamp'] = full_data['order_purchase_timestamp'].apply(pd.to_datetime) full_data['purchase_time'] = full_data['order_purchase_timestamp'].dt.hour full_data['purchase_MonthY'] = full_data['order_purchase_timestamp'].dt.to_period('M') full_data['purchase_year'] = full_data['order_purchase_timestamp'].dt.year full_data['purchase_month'] = pd.Series(pd.Categorical(full_data['order_purchase_timestamp'].dt.month_name(), categories=list(calendar.month_name))) full_data['purchase_day'] = pd.Series(pd.Categorical(full_data['order_purchase_timestamp'].dt.day_name(), categories=list(calendar.day_name))) # + id="TMN-sTARjWvs" executionInfo={"status": "ok", "timestamp": 1645717127697, "user_tz": -180, "elapsed": 618, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} # Date information formatted as %Y-%m-%d Dates = ['order_purchase_timestamp', 'order_approved_at', 'order_delivered_customer_date', 'order_estimated_delivery_date','shipping_limit_date' ,'order_delivered_customer_date'] for columns in Dates: full_data[columns] = pd.to_datetime(full_data[columns], format='%Y-%m-%d').dt.date # + id="dSNR1KXbislR" executionInfo={"status": "ok", "timestamp": 1645717130246, "user_tz": -180, "elapsed": 663, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} #Estimated Delivery Date - Actual Delivery Date = Delta (Difference) on Delivery Date full_data['shipping_time_delta'] = full_data['order_estimated_delivery_date'] - full_data['order_delivered_customer_date'] #Actual Delivery Date - Purchase Date = Actual Delivery Time full_data['shipping_duration'] = full_data['order_delivered_customer_date'] - full_data['order_purchase_timestamp'] #Estimated Delivery Date - Purchased Date = Estimated Delivery Time full_data['estimated_duration'] = full_data['order_estimated_delivery_date'] - full_data['order_purchase_timestamp'] # + colab={"base_uri": "https://localhost:8080/"} id="CaJU5ph66-ft" executionInfo={"status": "ok", "timestamp": 1645717137567, "user_tz": -180, "elapsed": 5, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="68dd3461-172a-41c9-8ed7-b410c4000149" full_data.info() # + [markdown] id="hl4xIpu-AR9t" # ## Data Overview # # + colab={"base_uri": "https://localhost:8080/"} id="sIpE5g6M7MkD" executionInfo={"status": "ok", "timestamp": 1645715363034, "user_tz": -180, "elapsed": 4, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="3b74a468-347e-44c4-be97-223d6ef23a9a" totalOrders = full_data.order_id.nunique() print('Unique customer 
cities:',full_data.customer_city.nunique()) print('Unique customer states:',full_data.customer_state.nunique()) print('Unique seller cities:',full_data.seller_city.nunique()) print('Unique seller states:',full_data.seller_state.nunique()) print('Average price:',full_data.price.sum() / totalOrders) print('Average qnt of products by order:',full_data.order_item_id.sum() / totalOrders) print('Average freight price:',full_data.freight_value.sum() / totalOrders) print('Total revenue for the period was:',full_data.price.sum()) print('Number of unique customers:',full_data.customer_unique_id.nunique()) print('Total order quantity:', totalOrders) print('Average number of product by order:',full_data.freight_value.sum() / totalOrders) # + colab={"base_uri": "https://localhost:8080/", "height": 522} id="GeaGN36g7QkL" executionInfo={"status": "ok", "timestamp": 1645717142257, "user_tz": -180, "elapsed": 7, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="a1e75330-f88a-4c55-b1cf-c070933e993d" full_data # + [markdown] id="OrkFQPym67pU" # # Product Analysis # + colab={"base_uri": "https://localhost:8080/"} id="4b7v7qEakmy_" executionInfo={"status": "ok", "timestamp": 1645451968606, "user_tz": -180, "elapsed": 264, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="49d75dfb-fed8-40b2-f659-0a157005be29" #Product_id info was too long and complex, so last eight digits were checked for uniqueness len(full_data['product_id'].unique()) == len(full_data['product_id'].str[-8:].unique()) # + id="Eu6RPYPwkoxP" #Grafiklerde kullanmak amacıyla id kısaltıldı full_data['product_id_shorten']=full_data['product_id'].str[-8:] # + colab={"base_uri": "https://localhost:8080/"} id="8nAUVkKYksEd" executionInfo={"status": "ok", "timestamp": 1645452015246, "user_tz": -180, "elapsed": 305, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="43ce9549-f4a9-43d1-e295-256f90436f13" #Id information has been shortened for use in graphics len(full_data['customer_unique_id'].unique()) == len(full_data['customer_unique_id'].str[-8:].unique()) # + id="xnQ5Xl9jkuUL" #The same operation was done for the customer_id information. full_data['customer_unique_id_shorten']=full_data['customer_unique_id'].str[-8:] # + colab={"base_uri": "https://localhost:8080/", "height": 415} id="nvnqx5kLEeVY" executionInfo={"status": "ok", "timestamp": 1645452351027, "user_tz": -180, "elapsed": 556, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="7ad9b036-ea05-4390-f20d-0a93f10539da" #Bar graph of the general review score of the products plt.figure(figsize=(10,6)) sns.countplot(x='review_score', data=full_data, palette='YlGnBu') plt.title("Review Scores", fontsize=15, weight='bold',pad = 10.0) plt.xlabel('Score', fontsize = 15) plt.ylabel('Count', fontsize = 15) plt.show() # + id="noigqyTREi0c" executionInfo={"status": "ok", "timestamp": 1645716781045, "user_tz": -180, "elapsed": 5781, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} #The first 30 columns of the data are taken to be able to write #SQL queries. 
data = full_data.iloc[:,:27] data.to_sql("data",conn, if_exists="replace") # + colab={"base_uri": "https://localhost:8080/", "height": 268} id="L_mpO5fThbIp" executionInfo={"status": "ok", "timestamp": 1645715609869, "user_tz": -180, "elapsed": 502, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="71e4c05f-4436-4296-86a8-19989d6893e2" #Average scores analyzed according to order status information order_status = pd.read_sql(""" SELECT order_status, COUNT(*) AS count, AVG(review_score) AS AVG_Score from data GROUP BY order_status ORDER BY count DESC """ , conn ) order_status #It is seen that the number of submissions is quite high and has a good average score. # + [markdown] id="T6-c6Czu7ADE" # ## Top 10 Products # + colab={"base_uri": "https://localhost:8080/", "height": 363} id="wToswI21nJZn" executionInfo={"status": "ok", "timestamp": 1645453741560, "user_tz": -180, "elapsed": 14, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="9162f571-88df-4802-8b80-9dc10d31b01b" #It is assumed that each row in the dataset is separate items of the order. #For this reason, it is grouped according to product_id and the rows are counted. top_ten_products = pd.read_sql(""" select product_id, Count(*) AS num , AVG(review_score) AS AVG_review,product_category_name_english AS Category_nam from data GROUP BY product_id ORDER BY num DESC LIMIT 10 """,conn) top_ten_products #The ten best-selling products are in 6 different categories. # + colab={"base_uri": "https://localhost:8080/"} id="vhFjqai7r_zV" executionInfo={"status": "ok", "timestamp": 1645453784439, "user_tz": -180, "elapsed": 4, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="9957025b-5c66-4b20-a543-0e84b7eedf8f" top_ten_products['AVG_review'].mean() #The scores of the products with high sales are also at a good level. 
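# +
# A small follow-up sketch (not part of the original query): the simple mean of the per-product
# review averages above weights every product equally. Weighting by sales volume can give a
# slightly different picture; 'num' and 'AVG_review' are the columns returned by the SQL above.
weighted_avg_review = (
    (top_ten_products['AVG_review'] * top_ten_products['num']).sum()
    / top_ten_products['num'].sum()
)
weighted_avg_review
# -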
# + colab={"base_uri": "https://localhost:8080/", "height": 410} id="Ogma_13l7VCF" executionInfo={"status": "ok", "timestamp": 1645366489213, "user_tz": -180, "elapsed": 1040, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="a4436400-0d62-40e8-c18b-df150e0e3555" #Plotting Top 10 Products plt.figure(figsize=(10,6)) sns.countplot(x='product_id_shorten', data=full_data, palette='cool', order=full_data['product_id_shorten'].value_counts()[:10]\ .sort_values().index).set_title("Top 10 Products", fontsize=15, weight='bold') plt.xlabel("Product ID", fontsize = 14) plt.ylabel("Sales Amount", fontsize = 14) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 415} id="nrcchj2p7WKt" executionInfo={"status": "ok", "timestamp": 1645454168926, "user_tz": -180, "elapsed": 862, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="5164e32e-9ee3-4123-b1d8-1ca19322bb6e" #Top 10 Product And Their Categories plt.figure(figsize=(10,6)) sns.set_style("whitegrid") group_product= full_data.groupby(['product_id_shorten','product_category_name_english',])['product_id_shorten']\ .count().sort_values(ascending=False).head(10) group_product.plot(kind='barh',color='dodgerblue') plt.title("Top 10 Products And Their Categories", fontsize=15, weight='bold',pad = 10.0) plt.xlabel("Sales Amount", fontsize = 15) plt.ylabel("Product ID - Product Category", fontsize = 15) plt.show() # + [markdown] id="Wwc10Kk77JDT" # ## Top 10 Categories # + colab={"base_uri": "https://localhost:8080/", "height": 363} id="CA28sNCFsKIF" executionInfo={"status": "ok", "timestamp": 1645455039962, "user_tz": -180, "elapsed": 284, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="73626383-0cc0-4885-b4f1-d6203cc4e7f8" top_ten_categories = pd.read_sql(""" SELECT product_category_name_english AS Category_Name, COUNT(*) AS num, SUM(price) AS Total_Price, AVG(review_score) AS AVG_Score , COUNT(DISTINCT(seller_id)) AS Num_of_Seller FROM data GROUP BY product_category_name_english ORDER BY num DESC limit 10 """ , conn ) top_ten_categories #Although the most sold product is in the furniture_decor category, the most sold category is the bed_bath_table category. #The two categories with the highest total price are health_beauty and watches_gifts. #Bed_bath_table has more sales and almost one of the highest turnovers, although the number of sellers is lower. #Products in this category can be kept in their own warehouses by the e-commerce company. 
# + colab={"base_uri": "https://localhost:8080/"} id="RpFtFv_Qsv4F" executionInfo={"status": "ok", "timestamp": 1645453983782, "user_tz": -180, "elapsed": 2438, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="6708a7f4-c911-4cc6-e090-6c3c8e319d18" #Average review score of the top ten categories top_ten_categories['AVG_Score'].mean() # + colab={"base_uri": "https://localhost:8080/", "height": 414} id="TS-OVmQfgCC5" executionInfo={"status": "ok", "timestamp": 1645534566207, "user_tz": -180, "elapsed": 852, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="43f962e8-9c2b-429f-8102-92abc39e6672" #Plotting Top 10 Categories group_category= full_data.groupby(by='product_category_name_english')[['order_id']].agg(['count']).sort_values(by=('order_id','count'),ascending=False).head(10) group_category.plot(kind='barh',color='mediumpurple',figsize=(10,6)) plt.title("Top 10 Categories", fontsize=15, weight='bold',pad = 10.0) plt.xlabel('Number of customers', fontsize = 15) plt.ylabel('Product Categories', fontsize = 15) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 363} id="Ijn1vW_PcNaP" executionInfo={"status": "ok", "timestamp": 1645716075746, "user_tz": -180, "elapsed": 432, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="d1e8290a-e808-4251-e082-a2efb6beaffd" #Let's look at category 2 with the highest turnover because the number of sellers is low. We can store products with low number of vendors ourselves. watches_gifts = pd.read_sql(""" SELECT customer_city AS Customer_City, COUNT(*) AS Num, SUM(price) AS Total_Price, AVG(review_score) AS AVG_Score , COUNT(DISTINCT(seller_id)) AS Num_of_Seller, MIN(DATE(order_purchase_timestamp)) AS First_order, MAX(DATE(order_purchase_timestamp)) AS Last_order , julianday('2018-09-01 00:00:00.000')-julianday(MAX(DATE(order_purchase_timestamp))) AS num_of_days_from_last_order FROM data WHERE product_category_name_english = 'watches_gifts' GROUP BY customer_city ORDER BY num DESC limit 10 """ , conn ) watches_gifts #Here the satisfaction rate of the city of Belo Horizonte is at a good level with 4.3. #Let's examine this city and have an idea about its sellers. # + colab={"base_uri": "https://localhost:8080/", "height": 363} id="0WkeWLfjdV3v" executionInfo={"status": "ok", "timestamp": 1645716124626, "user_tz": -180, "elapsed": 372, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="bcb1ad81-988b-4035-b1aa-d7f45e649129" #Watches-gifs sellers in Belo Horizonte df = pd.read_sql(""" SELECT seller_id, COUNT(order_id)AS Num, SUM(price) AS Total_Price, AVG(review_score) AS AVG_Score, MIN(DATE(order_purchase_timestamp)) AS First_order, MAX(DATE(order_purchase_timestamp)) AS Last_order , julianday('2018-09-01 00:00:00.000')-julianday(MAX(DATE(order_purchase_timestamp))) AS num_of_days_from_last_order FROM data WHERE product_category_name_english = 'watches_gifts' AND customer_city = 'belo horizonte' GROUP BY seller_id ORDER BY num DESC LIMIT 10 """ , conn ) df #Although the sellers in the 3rd and 4th index have high scores, they have not made a sale for two months. Sales of these vendors can be encouraged. 
# + colab={"base_uri": "https://localhost:8080/", "height": 363} id="Kn0UzeSGf6QH" executionInfo={"status": "ok", "timestamp": 1645716309109, "user_tz": -180, "elapsed": 733, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="b1438140-832d-424f-ac24-290e20391fbe" #The ten categories with the least sales last_ten_categories = pd.read_sql(""" SELECT product_category_name_english AS Category_Name, COUNT(*) AS Num, SUM(price) AS Total_Price, AVG(review_score) AS AVG_Score , COUNT(DISTINCT(seller_id)) AS Num_of_Seller, MIN(DATE(order_purchase_timestamp)) AS First_order, MAX(DATE(order_purchase_timestamp)) AS Last_order , julianday('2018-09-01 00:00:00.000')-julianday(MAX(DATE(order_purchase_timestamp))) AS num_of_days_from_last_order FROM data GROUP BY product_category_name_english ORDER BY num limit 10 """ , conn ) last_ten_categories #There has been no shopping in the Security category for a year and the score is low. #Although the music category has 19 sellers and a high score, its sales are low. #High-score categories can be examined and studies can be carried out on those who may have a demand. # + colab={"base_uri": "https://localhost:8080/", "height": 363} id="WHR-1RIsjlyE" executionInfo={"status": "ok", "timestamp": 1645716392979, "user_tz": -180, "elapsed": 707, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="00f60587-d2b9-4dc0-f94a-d8f1a3229b28" #Top 10 categories with lowest scores last_ten_score_categ= pd.read_sql(""" SELECT product_category_name_english AS Category_name ,COUNT(*) AS num, AVG(review_score) AS AVG_Score, SUM(price) AS Total_Price, COUNT(DISTINCT(seller_id)) AS Num_of_Seller , MIN(DATE(order_purchase_timestamp)) AS First_order, MAX(DATE(order_purchase_timestamp)) AS Last_order , julianday('2018-09-01 00:00:00.000')-julianday(MAX(DATE(order_purchase_timestamp))) AS num_of_days_from_last_order FROM data GROUP BY product_category_name_english ORDER BY AVG(review_score) limit 10 """ , conn ) last_ten_score_categ #Although the sales are high in the office_furniture category, its score is 3.5. # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="gnGeSFC2x1hv" executionInfo={"status": "ok", "timestamp": 1645716462323, "user_tz": -180, "elapsed": 287, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="d28f6aa3-e4b8-40f3-9250-3e462986f07e" #Let's take a closer look at the office furniture category. furniture_office = pd.read_sql(""" Select seller_id, COUNT(*) AS Num, AVG(review_score) AS AVG_Score, MIN(DATE(order_purchase_timestamp)) AS First_order, MAX(DATE(order_purchase_timestamp)) AS Last_order , julianday('2018-09-01 00:00:00.000')-julianday(MAX(DATE(order_purchase_timestamp))) AS num_of_days_from_last_order from data WHERE product_category_name_english = 'office_furniture' GROUP BY seller_id ORDER BY Num DESC """,conn) furniture_office #It is seen that the seller's scores in the 3rd and 6th indexes are low. #On the other hand, it is striking that the seller in the 2nd index has not made sales for more than 1 year and has a score of 3.5, which is not bad. 
# + colab={"base_uri": "https://localhost:8080/", "height": 425} id="S6Z5swjI0mUb" executionInfo={"status": "ok", "timestamp": 1645716522893, "user_tz": -180, "elapsed": 463, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="5233faf7-a1b8-4e2d-bc14-e3efed494c18" #Top 10 selling categories and their percentage in total orders group_category= full_data.groupby(by='product_category_name_english')[['order_id']].agg(['count']).sort_values(by=('order_id','count'),ascending=False) group_category['Percent share of customers'] = group_category.apply(lambda x: (x/(x.sum()))*100) group_category.head(10) # + colab={"base_uri": "https://localhost:8080/", "height": 542} id="cI_9vhbNP86w" executionInfo={"status": "ok", "timestamp": 1645456351415, "user_tz": -180, "elapsed": 448, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="8b0c3580-d18f-4695-fc54-91d2e6f438ff" fig = px.pie(values=group_category['Percent share of customers'], names=group_category.index,title= 'Percentages of Product Categories') fig.update_traces(textposition='inside') fig.update_layout(uniformtext_minsize=15, uniformtext_mode='hide') fig.show() #The top 10 categories account for almost 65% of all orders. # + [markdown] id="LQY6doiE59BI" # # Time Series Analysis # + [markdown] id="6SJC-D1K6E3I" # ## Monthly Sales Analysis # + colab={"base_uri": "https://localhost:8080/", "height": 436} id="rxaZOFVHpiko" executionInfo={"status": "ok", "timestamp": 1645716801208, "user_tz": -180, "elapsed": 2625, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="43c5e933-3fb2-4ddd-cc8e-74acf065626d" #Average sales by month month = full_data.groupby('purchase_month').size().sort_values() fig=plt.figure(figsize=(10,6)) sns.set_style("darkgrid") sns.barplot(y=month.index, x=month.values) plt.title('Average Monthly Purchase Amount',fontsize=20,pad = 10.0) plt.xlabel('Amount of Purchase', fontsize = 15) plt.ylabel('Month', fontsize = 15) #This image may mislead us, as we have the data for the whole of 2017 and about half of 2018. # + [markdown] id="C283N-iY6KbM" # ## Year-Monthly Sales Analysis # + colab={"base_uri": "https://localhost:8080/", "height": 436} id="OEU9JliZ72Pz" executionInfo={"status": "ok", "timestamp": 1645716837647, "user_tz": -180, "elapsed": 1137, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="67978c33-9cc6-4df3-edd4-0d7e25a72610" #Let's take a look at all the months we have as Month-Year: Month_Year = full_data.groupby('purchase_MonthY').size().sort_index() fig=plt.figure(figsize=(10,6)) sns.barplot(y=Month_Year.index, x=Month_Year.values) plt.title('Year-Monthly Purchase',fontsize=20,pad = 10.0) plt.xlabel('Amount of Purchase', fontsize = 15) plt.ylabel('Year-Month', fontsize = 15) #It is seen that there is a big sale in November due to Black Friday. 
# + [markdown] id="pzYTXJrz6P76" # ## Daily Sales Analysis # + colab={"base_uri": "https://localhost:8080/", "height": 436} id="vyqaJTJt74p2" executionInfo={"status": "ok", "timestamp": 1645716873681, "user_tz": -180, "elapsed": 15, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="11fadb83-4d24-4b71-eeb0-14dfc39c4a28" #Sales by day day = full_data.groupby('purchase_day').size().sort_values() fig=plt.figure(figsize=(10,6)) sns.barplot(y=day.index, x=day.values) plt.title('Average Daily Purchase Amount',fontsize=20,pad = 10.0) plt.xlabel('Amount of Purchase', fontsize = 15) plt.ylabel('Day', fontsize = 15) #It can be said that more shopping is done on weekdays. # + [markdown] id="LpPvEJ786R34" # ## Hourly Sales Analysis # + colab={"base_uri": "https://localhost:8080/", "height": 440} id="ej8GtXcz78zV" executionInfo={"status": "ok", "timestamp": 1645717153307, "user_tz": -180, "elapsed": 912, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="955da792-1795-4090-b7f9-a38e7b20dcea" #Sales by the hour full_data.groupby('purchase_time').size().sort_index().plot(kind='bar',figsize=(15,6),color='lightseagreen') plt.title('Average Hourly Purchase Amount',fontsize=20,pad = 10.0) plt.xlabel('Amount of Purchase', fontsize = 15) plt.ylabel('Purchase Amount', fontsize = 15) #It is seen that sales are made between 10 a.m. and 10 p.m. # + [markdown] id="mCuxUpuV5FT8" # # # Customer Analysis # + colab={"base_uri": "https://localhost:8080/", "height": 404} id="sjuQ0OI3qVzY" executionInfo={"status": "ok", "timestamp": 1645436562797, "user_tz": -180, "elapsed": 1132, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="08f73d6f-5acc-4390-bb6c-0fae91d5b38c" #If we look at the cumulative number of customers: customer_cumulative = full_data.groupby('customer_id',as_index=False).agg({'order_purchase_timestamp': 'min'}) customer_cumulative.groupby('order_purchase_timestamp').count().cumsum().plot(figsize=(15,6),color='blue') plt.title('Customers Cumulative',fontsize=20,pad = 10.0) plt.xlabel('Timeline', fontsize = 14) plt.ylabel('Customer count cumulative', fontsize = 14); #As expected, an increasing graph emerges. # + id="Sl8ULoh-qb1B" executionInfo={"status": "ok", "timestamp": 1645717327487, "user_tz": -180, "elapsed": 963, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} colab={"base_uri": "https://localhost:8080/", "height": 423} outputId="43fb3a2b-d2da-4c0b-c6f8-66c33fa2ce66" #Let's take a look at the last time customers shopped. 
customer_orders = pd.read_sql(""" SELECT customer_unique_id AS Customer_ID, COUNT(order_id) AS num_of_order, SUM(price) AS price, AVG(review_score) AS AVG_Score, DATE(MAX(order_purchase_timestamp)) AS last_order , julianday('2018-09-01 12:08:15.310')-julianday(DATE(MAX(order_purchase_timestamp))) AS not_order_for from data GROUP BY customer_unique_id ORDER BY last_order """ , conn ) #Customers who have not shopped in the last 100 days inactive_customer = customer_orders[customer_orders['not_order_for'] > 100] #Customers who shopped in the last 100 days active_customer = customer_orders[customer_orders['not_order_for'] < 100] inactive_customer # + colab={"base_uri": "https://localhost:8080/", "height": 468} id="9wezwn1hqgMm" executionInfo={"status": "ok", "timestamp": 1645717335137, "user_tz": -180, "elapsed": 1848, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="a009f7b9-66be-420d-a0e5-4b802f2046b1" #Inactive Customer Graph inactive_graph = pd.concat([inactive_customer['num_of_order'] , inactive_customer['not_order_for']],axis=1) inactive_graph.plot(figsize=(12,7)) plt.title("Inactive Customers", fontsize=15, weight='bold',pad = 10.0) plt.xlabel('Number of Customer', fontsize = 15) plt.ylabel('Number of Days Without Purchase', fontsize = 15) plt.show() #The orange line represents how many days each customer has not shopped. The blue line shows the total number of purchases for each customer. # + colab={"base_uri": "https://localhost:8080/"} id="7Ihcn-4jt_qI" executionInfo={"status": "ok", "timestamp": 1645717470183, "user_tz": -180, "elapsed": 2192, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="05874a56-2a2e-4a34-ec3f-818003c087f8" total_customers = inactive_customer['Customer_ID'].count() + active_customer['Customer_ID'].count() total_customers # + colab={"base_uri": "https://localhost:8080/"} id="UqNveJamqpr6" executionInfo={"status": "ok", "timestamp": 1645717482192, "user_tz": -180, "elapsed": 307, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="d9ec672c-4bc6-4023-8859-e99376dd8fa4" inactive_customer['Customer_ID'].count() / total_customers #79 percent of our customers are inactive # + colab={"base_uri": "https://localhost:8080/"} id="NET7_B68vEBZ" executionInfo={"status": "ok", "timestamp": 1645717523904, "user_tz": -180, "elapsed": 373, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="144e518c-a03f-4357-ba34-671bb9832493" active_customer['Customer_ID'].count() / total_customers #About 20 percent of our customers are active # + colab={"base_uri": "https://localhost:8080/"} id="_pvl4C4guM2K" executionInfo={"status": "ok", "timestamp": 1645717554880, "user_tz": -180, "elapsed": 311, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="d4f36d44-930e-4792-c66a-ac325e9e5db5" total_price = inactive_customer['price'].sum() + active_customer['price'].sum() total_price # + colab={"base_uri": "https://localhost:8080/"} id="cpb5ASNUt6in" executionInfo={"status": "ok", "timestamp": 1645717556613, "user_tz": -180, "elapsed": 305, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": 
"14093034254050366238"}} outputId="e2bbc9d0-0bea-4161-d220-814792c6d378" inactive_customer['price'].sum() / total_price #Inactive customers account for 79 percent of our earnings. # + colab={"base_uri": "https://localhost:8080/"} id="MMYmPy8KXPFq" executionInfo={"status": "ok", "timestamp": 1645717557648, "user_tz": -180, "elapsed": 4, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="fffd8aaf-ca05-4058-ad71-01f2fc910429" active_customer['price'].sum() / total_price # + colab={"base_uri": "https://localhost:8080/"} id="lwDL9gLua8N7" executionInfo={"status": "ok", "timestamp": 1645717736275, "user_tz": -180, "elapsed": 3, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="6508bd4b-a0c5-406f-808c-4d79f81efb9c" active_customer[active_customer['AVG_Score']>4]['AVG_Score'].count() / active_customer['AVG_Score'].count() #approximately 63 percent of our active customers are satisfied with their purchases # + colab={"base_uri": "https://localhost:8080/"} id="R5a1Bplnq-RH" executionInfo={"status": "ok", "timestamp": 1645717686925, "user_tz": -180, "elapsed": 287, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="89cc27a3-d3fe-4bc1-c333-57f66fa660b2" inactive_customer[inactive_customer['AVG_Score']>4]['AVG_Score'].count() / inactive_customer['AVG_Score'].count() #our inactive customers are slightly less satisfied # + colab={"base_uri": "https://localhost:8080/"} id="Q_htWFSCXraw" executionInfo={"status": "ok", "timestamp": 1645717707587, "user_tz": -180, "elapsed": 282, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="12a2f0c7-2eff-47e4-8fcf-631890a3390b" inactive_customer[inactive_customer['AVG_Score']<3]['AVG_Score'].count() / inactive_customer['AVG_Score'].count() #15 percent of inactive customers have a review score of less than 3 # + colab={"base_uri": "https://localhost:8080/"} id="RQgnfqYRXv2X" executionInfo={"status": "ok", "timestamp": 1645717731782, "user_tz": -180, "elapsed": 3, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="df0d3e9a-45cb-4b29-be7f-ac1a6ad24b1e" active_customer[active_customer['AVG_Score']<3]['AVG_Score'].count() / active_customer['AVG_Score'].count() # + [markdown] id="F_9XgLpI5QCM" # # ## City ​​Based Customer Analysis # + colab={"base_uri": "https://localhost:8080/"} id="6zelBJEYnPhb" executionInfo={"status": "ok", "timestamp": 1645717827689, "user_tz": -180, "elapsed": 308, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="6cb7094c-eebc-4a37-a7ed-b0622d446e40" #Total number of customers Number_of_customers = full_data.customer_unique_id.nunique() Number_of_customers # + colab={"base_uri": "https://localhost:8080/", "height": 582} id="jHJmobUwmff-" executionInfo={"status": "ok", "timestamp": 1645717831156, "user_tz": -180, "elapsed": 806, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="b122caf0-31b7-4f04-9ae3-bdc64153d1d9" #Cities and percentages of customers customer_city = 
full_data.groupby(by='customer_city')[['customer_unique_id']].agg(['count']).sort_values(by=('customer_unique_id','count'),ascending=False)[0:15] customer_city['Percent share of customers'] = customer_city.apply(lambda x: (x/(Number_of_customers))*100) customer_city # + colab={"base_uri": "https://localhost:8080/", "height": 582} id="si_HfxrDvxzU" executionInfo={"status": "ok", "timestamp": 1645717857476, "user_tz": -180, "elapsed": 7, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="a66c4ceb-ea5a-4145-e8d0-97c691fea639" #Total earnings and average earnings by cities full_data.groupby(by='customer_city')[['payment_value']].agg(['sum','mean']).sort_values(by=('payment_value','sum'),ascending=False)[0:15] # + colab={"base_uri": "https://localhost:8080/", "height": 526} id="1BeYUA-RVPJj" executionInfo={"status": "ok", "timestamp": 1645301473555, "user_tz": -180, "elapsed": 1342, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="0a282375-cd48-4849-9f36-0375f5e40cf0" #Graph of percentages of cities where customers are located sns.set_style("whitegrid") plt.figure(figsize=(15,6)) df = full_data.groupby(by='customer_city')[['customer_unique_id']].agg(['count']).sort_values(by=('customer_unique_id','count'),ascending=False)[:20] df['Percent share of customers'] = df.apply(lambda x: (x/(Number_of_customers))*100) df['Percent share of customers'].plot(kind='bar',color='dodgerblue') plt.title('Customer Percentages of Cities',fontsize=20,pad = 10.0) plt.xlabel('Customer Cities', fontsize = 15) plt.ylabel('Percent share of customers', fontsize = 15) plt.legend() plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 616} id="a0sdICdRsE0A" executionInfo={"status": "ok", "timestamp": 1645304287244, "user_tz": -180, "elapsed": 988, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="253f30b6-5c95-4e8e-d679-a376fc50f9a0" # Adding cummulative percentage for Pareto Chart df = full_data.groupby(by='customer_city')[['customer_unique_id']].agg(['count']).sort_values(by=('customer_unique_id','count'),ascending=False)[:20] df['Percent share of customers'] = df.apply(lambda x: (x/(Number_of_customers))*100) df["cumpercentage"] = df["Percent share of customers"].cumsum()/df["Percent share of customers"].sum()*100 #Plotting Pareto chart fig, ax = plt.subplots(figsize=(15,8)) ax.bar(df.index, df[('customer_unique_id','count')], color="dodgerblue",width=0.8) ax2 = ax.twinx() ax2.plot(df.index, df["cumpercentage"], color="orangered", marker="D", ms=7) ax2.yaxis.set_major_formatter(PercentFormatter()) ax.tick_params(axis="x",rotation=90) ax.tick_params(axis="y", colors="C0") ax2.tick_params(axis="y", colors="orangered") plt.title('Percentages and Sum of Customer Cities',fontsize=20,pad = 10.0) plt.show() # + [markdown] id="ytovqbG05WQv" # ## State ​​Based Customer Analysis # + colab={"base_uri": "https://localhost:8080/", "height": 582} id="nRpZGkz7nw20" executionInfo={"status": "ok", "timestamp": 1645481571719, "user_tz": -180, "elapsed": 8, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="6d6df154-dd69-4dac-ba60-56fafcc9f0d7" #Countries and percentages of customers customer_state = 
full_data.groupby(by='customer_state')[['customer_unique_id']].agg(['count']).sort_values(by=('customer_unique_id','count'),ascending=False)[0:15] customer_state['Percent share of customers'] = customer_state.apply(lambda x: (x/(Number_of_customers))*100) customer_state # + colab={"base_uri": "https://localhost:8080/", "height": 582} id="7UBk5Duhv-WP" executionInfo={"status": "ok", "timestamp": 1645481602613, "user_tz": -180, "elapsed": 485, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="46adf846-54de-48d6-925f-44fe0b826ea4" #Total and average earnings from countries full_data.groupby(by='customer_state')[['payment_value']].agg(['sum','mean']).sort_values(by=('payment_value','sum'),ascending=False)[0:15] # + colab={"base_uri": "https://localhost:8080/", "height": 424} id="jljdTfTPW_YT" executionInfo={"status": "ok", "timestamp": 1645301608196, "user_tz": -180, "elapsed": 915, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="406f1bab-5e30-4152-f352-83e17aa3b562" #Graph of percentage of customers in countries plt.figure(figsize=(15,6)) df = full_data.groupby(by='customer_state')[['customer_unique_id']].agg(['count']).sort_values(by=('customer_unique_id','count'),ascending=False)[:20] df['Percent share of customers'] = df.apply(lambda x: (x/(Number_of_customers))*100) df['Percent share of customers'].plot(kind='bar',color='teal') plt.title('Customer Percentages of States',fontsize=20,pad = 10.0) plt.xlabel('Customer States', fontsize = 15) plt.ylabel('Percent share of customers', fontsize = 15) plt.legend() plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 616} id="DmeN48ftotcw" executionInfo={"status": "ok", "timestamp": 1645304262324, "user_tz": -180, "elapsed": 1363, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="bd674779-7f9c-40fa-f123-5bb6cf6f0ddc" # Adding cummulative percentage for Pareto Chart df = full_data.groupby(by='customer_state')[['customer_unique_id']].agg(['count']).sort_values(by=('customer_unique_id','count'),ascending=False) df['Percent share of customers'] = df.apply(lambda x: (x/(Number_of_customers))*100) df["cumpercentage"] = df["Percent share of customers"].cumsum()/df["Percent share of customers"].sum()*100 #Plotting Pareto chart fig, ax = plt.subplots(figsize=(15,10)) ax.bar(df.index, df[('customer_unique_id','count')], color="forestgreen",width=0.8) ax2 = ax.twinx() ax2.plot(df.index, df["cumpercentage"], color="orangered", marker="D", ms=7) ax2.yaxis.set_major_formatter(PercentFormatter()) ax.tick_params(axis="y", colors="forestgreen") ax2.tick_params(axis="y", colors="orangered") plt.title('Percentages and Sum of Customer States',fontsize=20,pad = 10.0) plt.show() # + [markdown] id="Z0Ip-WEz5ZvG" # ## Payment Types # + colab={"base_uri": "https://localhost:8080/", "height": 238} id="3wnuqCAL-5hD" executionInfo={"status": "ok", "timestamp": 1645305594064, "user_tz": -180, "elapsed": 1622, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="d2c5f313-cb47-4243-804e-8d6b69d31369" #Let's create dataframe for payment types at look at some numbers df = 
full_data.groupby(by='payment_type')[['customer_unique_id']].agg(['count']).sort_values(by=('customer_unique_id','count'),ascending=False) df['Percent customers'] = df.apply(lambda x: (x/(Number_of_customers))*100) df['Payments'] = np.round(full_data.groupby(by='payment_type')[['payment_value']].agg(['sum']),4) total_payment_value = df['Payments'].sum() #Calculating percent shares def percent_payment(x): z = x/total_payment_value return z * 100 df['Percent payment value'] = df['Payments'].apply(lambda x: percent_payment(x)) df # + colab={"base_uri": "https://localhost:8080/", "height": 542} id="hqh2vo8c2oDP" executionInfo={"status": "ok", "timestamp": 1645305864300, "user_tz": -180, "elapsed": 282, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="8ee831e2-89a1-4a8d-c3c3-4c488e7090ba" fig = make_subplots(rows=1, cols=2, specs=[[{'type':'domain'}, {'type':'domain'}]]) fig.add_trace(go.Pie(labels=df.index, values=df['Percent customers'], name="Percent customers"), 1, 1) fig.add_trace(go.Pie(labels=df.index, values=df['Percent payment value'], name="'Percent payment value'"), 1, 2) # Use `hole` to create a donut-like pie chart fig.update_traces(hole=.4, hoverinfo="label+percent+name") fig.update_layout( title_text="Payment Types", # Add annotations in the center of the donut pies. annotations=[dict(text='Customer', x=0.18, y=0.5, font_size=20, showarrow=False), dict(text='Price', x=0.82, y=0.5, font_size=20, showarrow=False)]) fig.show() # + colab={"base_uri": "https://localhost:8080/", "height": 363} id="2UGLdoMgw6n4" executionInfo={"status": "ok", "timestamp": 1645522195707, "user_tz": -180, "elapsed": 6, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="2fc52a9c-6f77-4beb-f46c-c6d47a5af8dc" #Most used installment amounts payment_seq = pd.read_sql ( """ select payment_sequential, COUNT(*) as num ,AVG(review_score) AS score from data GROUP BY payment_sequential ORDER BY num DESC LIMIT 10 """,conn) payment_seq #In general, there is a cash purchase, which may be due to the low price per order. 
#Installments can be made by increasing the debt # + [markdown] id="hiJOihvvX38m" # # # Seller Analysis # + [markdown] id="9DvpPxAWnuti" # ## Total Number of Sellers # + colab={"base_uri": "https://localhost:8080/", "height": 404} id="CoWEV9HZqhfK" executionInfo={"status": "ok", "timestamp": 1645520556139, "user_tz": -180, "elapsed": 2145, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="68701d7f-ce07-4669-c7c9-8097cbe26538" #Graph of number of sellers over time seller_cumulative = full_data.groupby('seller_id',as_index=False).agg({'order_purchase_timestamp': 'min'}) seller_cumulative.groupby('order_purchase_timestamp').count().cumsum().plot(figsize=(15,6),color='blue') plt.title('Sellers Cumulative',fontsize=20,pad = 10.0) plt.xlabel('Timeline', fontsize = 14) plt.ylabel('Seller count cumulative', fontsize = 14); # + colab={"base_uri": "https://localhost:8080/"} id="KuEhJNsTrHh4" executionInfo={"status": "ok", "timestamp": 1645521556150, "user_tz": -180, "elapsed": 309, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="0b01abc0-15b0-415e-ecee-4f66204500af" #Total number of sellers Number_of_sellers = full_data.seller_id.nunique() Number_of_sellers # + [markdown] id="r5G0bd8nnpN0" # ## Actice and Inactive Sellers # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="3bc4H4qgq6z1" executionInfo={"status": "ok", "timestamp": 1645520763323, "user_tz": -180, "elapsed": 285, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="c36257f9-582d-487f-fa40-b1dbee889fc0" #Let's look at the last time sellers made a sale. 
seller_orders = pd.read_sql(""" SELECT seller_id AS Seller_ID, COUNT(order_id) AS num_of_order, SUM(price) AS price, AVG(review_score) AS AVG_Score, DATE(MAX(order_purchase_timestamp)) AS last_order , julianday('2018-07-01 12:08:15.310')-julianday(DATE(MAX(order_purchase_timestamp))) AS not_order_for from data GROUP BY seller_id ORDER BY last_order """ , conn ) #Sellers who have not made a sale in the last 100 days inactive_seller = seller_orders[seller_orders['not_order_for'] > 100] #Sellers who have made a sale in the last 100 days active_seller = seller_orders[seller_orders['not_order_for'] < 100] inactive_seller # + colab={"base_uri": "https://localhost:8080/", "height": 468} id="g7irpPngreBF" executionInfo={"status": "ok", "timestamp": 1645521033617, "user_tz": -180, "elapsed": 732, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="1d0a3ca5-fa0a-4cf6-cf9f-40c80cdfc9b6" #Graph of inactive sellers and total sales inactive_graph = pd.concat([inactive_seller['num_of_order'] , inactive_seller['not_order_for']],axis=1) inactive_graph.plot(figsize=(12,7)) plt.title("Inactive Sellers", fontsize=15, weight='bold',pad = 10.0) plt.xlabel('Number of Sellers', fontsize = 15) plt.ylabel('Number of Days Without Purchase', fontsize = 15) plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="XM0LnNA9tqSr" executionInfo={"status": "ok", "timestamp": 1645522526193, "user_tz": -180, "elapsed": 274, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="877ffc38-0aa2-45fe-abf8-bb929e5be75d" #Inactive sellers, what percentage of total sellers inactive_seller['Seller_ID'].count() / Number_of_sellers # + colab={"base_uri": "https://localhost:8080/"} id="hbWFeRA9tsWr" executionInfo={"status": "ok", "timestamp": 1645522548493, "user_tz": -180, "elapsed": 275, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="26ce0ab4-dbbd-4002-8a5f-5292ff6571ff" #Active sellers, what percentage of total sellers active_seller['Seller_ID'].count() / Number_of_sellers # + colab={"base_uri": "https://localhost:8080/"} id="8Gkqi_YtvRwv" executionInfo={"status": "ok", "timestamp": 1645521775136, "user_tz": -180, "elapsed": 2940, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="3909571e-c59a-4a18-b5dc-3ebe1eea7d12" #Total Price total_price = inactive_seller['price'].sum() + active_seller['price'].sum() total_price # + colab={"base_uri": "https://localhost:8080/"} id="16adNiBjvY8X" executionInfo={"status": "ok", "timestamp": 1645521790056, "user_tz": -180, "elapsed": 274, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="1f5599f7-cb15-4bd2-db47-c2eaa184acd9" #Percentage of inactive sellers in total price inactive_seller['price'].sum() / total_price # + colab={"base_uri": "https://localhost:8080/"} id="PZB8UAbHveJu" executionInfo={"status": "ok", "timestamp": 1645521807874, "user_tz": -180, "elapsed": 290, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="2d91f9a1-11a0-4687-daca-cd9c11b8cf8d" #Percentage of active sellers in total price active_seller['price'].sum() / total_price # + 
colab={"base_uri": "https://localhost:8080/"} id="85NLTkiCvslZ" executionInfo={"status": "ok", "timestamp": 1645521870825, "user_tz": -180, "elapsed": 6, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="e503a68c-61f3-4127-9742-ba0263d7db90" #total order total_order = inactive_seller['num_of_order'].sum() + active_seller['num_of_order'].sum() total_order # + colab={"base_uri": "https://localhost:8080/"} id="_wULPcuvvyF0" executionInfo={"status": "ok", "timestamp": 1645521894423, "user_tz": -180, "elapsed": 12, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="9e336299-3a3b-4e94-c763-192006429f6f" #Percentage of inactive sellers in total sales inactive_seller['num_of_order'].sum() / total_order # + colab={"base_uri": "https://localhost:8080/"} id="bo6mzOtdv4Qz" executionInfo={"status": "ok", "timestamp": 1645521903523, "user_tz": -180, "elapsed": 278, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="f28e59bd-0311-44e6-bae7-ae06fde1b1a4" #Percentage of active sellers in total sales active_seller['num_of_order'].sum() / total_order # + colab={"base_uri": "https://localhost:8080/"} id="rrBtHXUDv7ZE" executionInfo={"status": "ok", "timestamp": 1645522710348, "user_tz": -180, "elapsed": 1764, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="e6b646cb-d45f-41ce-a4c6-4bb338cec1a6" #Percentage of inactive sellers with a score greater than 4 inactive_seller[inactive_seller['AVG_Score']>4]['AVG_Score'].count() / inactive_seller['AVG_Score'].count() # + colab={"base_uri": "https://localhost:8080/"} id="3YRpTkGpwH8u" executionInfo={"status": "ok", "timestamp": 1645522751272, "user_tz": -180, "elapsed": 272, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="5f80d8df-7f40-45ee-eeb9-0e5606dd0be7" #Percentage of active sellers with a score greater than 4 active_seller[active_seller['AVG_Score']>4]['AVG_Score'].count() / active_seller['AVG_Score'].count() # + colab={"base_uri": "https://localhost:8080/"} id="BvxZKPLLlRzk" executionInfo={"status": "ok", "timestamp": 1645522954799, "user_tz": -180, "elapsed": 270, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="475ff45d-a4b2-4433-84ce-00235fc727f0" seller_inf = pd.read_sql ( """ select seller_id, COUNT(*) as num ,AVG(review_score) AS score from data GROUP BY seller_id ORDER BY score DESC """,conn) good_sellers = seller_inf[seller_inf['num']>500] print("Number of sellers selling more than 500 units :",good_sellers['seller_id'].count()) print("Average number of sales of sellers selling more than 500 units:",good_sellers['num'].mean()) print("Average score of sales of sellers selling more than 500 units :",good_sellers['score'].mean()) # + [markdown] id="zbbiTudZnjE_" # ## States of Sellers # + colab={"base_uri": "https://localhost:8080/", "height": 615} id="NwmnbgT-wfHq" executionInfo={"status": "ok", "timestamp": 1645522063082, "user_tz": -180, "elapsed": 967, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} 
outputId="6a628f86-9341-491c-e3df-189108305bc8" #Countries where sellers are located num_sellers = full_data.seller_id.nunique() # Adding cummulative percentage for Pareto Chart df = full_data.groupby(by='seller_state')[['seller_id']].agg(['count']).sort_values(by=('seller_id','count'),ascending=False) df['Percent share of seller'] = df.apply(lambda x: (x/(num_sellers))*100) df["cumpercentage"] = df["Percent share of seller"].cumsum()/df["Percent share of seller"].sum()*100 #Plotting Pareto chart fig, ax = plt.subplots(figsize=(15,10)) ax.bar(df.index, df[('seller_id','count')], color="forestgreen",width=0.8) ax2 = ax.twinx() ax2.plot(df.index, df["cumpercentage"], color="orangered", marker="D", ms=7) ax2.yaxis.set_major_formatter(PercentFormatter()) ax.tick_params(axis="y", colors="forestgreen") ax2.tick_params(axis="y", colors="orangered") plt.title('Percentages and Sum of Sellers States',fontsize=20,pad = 10.0) plt.show() # + [markdown] id="cjlj7IbZsTyx" # # Delivery Analysis # + colab={"base_uri": "https://localhost:8080/"} id="-Y0Ep1Dprh3i" executionInfo={"status": "ok", "timestamp": 1645537541602, "user_tz": -180, "elapsed": 264, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="5ab192aa-1d1f-4000-f8d7-ed8615e1ea26" print("Shipping Time",full_data['shipping_duration'].mean()) print("Estimated Time",full_data['estimated_duration'].mean()) #It can be said that the delivery times are generally successful. # + colab={"base_uri": "https://localhost:8080/", "height": 394} id="AaWtknKFrnNC" executionInfo={"status": "ok", "timestamp": 1645537557069, "user_tz": -180, "elapsed": 279, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="f3609119-38a0-4de3-c885-cf5c6ff75a85" #Top 10 countries with shortest delivery time full_data.groupby('customer_state')[['shipping_duration']].mean().sort_values(by='shipping_duration').head(10) # + colab={"base_uri": "https://localhost:8080/"} id="WAkYxUTQrq45" executionInfo={"status": "ok", "timestamp": 1645537696190, "user_tz": -180, "elapsed": 321, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="7062d57c-6e0b-45ec-9ae1-2fdae36508b0" #How many of my orders were delivered late? late_delivery = full_data[full_data['order_estimated_delivery_date'] < full_data['order_delivered_customer_date']] ['order_id'].count() late_delivery # + colab={"base_uri": "https://localhost:8080/"} id="bdAI9Pl1sB3v" executionInfo={"status": "ok", "timestamp": 1645537710332, "user_tz": -180, "elapsed": 399, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14093034254050366238"}} outputId="f72a607a-49a4-415a-843b-64b370a728de" totalOrders = full_data.order_id.nunique() late_delivery / totalOrders #Considering the percentage, a very small amount was delivered late. Overall good delivery. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: '' # name: sagemath # --- # + language="html" # # # # # # # # - # **Important:** to view this notebook properly you will need to execute the cell above, which assumes you have an Internet connection. It should already be selected, or place your cursor anywhere above to select. 
Then press the "Run" button in the menu bar above (the right-pointing arrowhead), or press Shift-Enter on your keyboard. # $\newcommand{\identity}{\mathrm{id}} # \newcommand{\notdivide}{\nmid} # \newcommand{\notsubset}{\not\subset} # \newcommand{\lcm}{\operatorname{lcm}} # \newcommand{\gf}{\operatorname{GF}} # \newcommand{\inn}{\operatorname{Inn}} # \newcommand{\aut}{\operatorname{Aut}} # \newcommand{\Hom}{\operatorname{Hom}} # \newcommand{\cis}{\operatorname{cis}} # \newcommand{\chr}{\operatorname{char}} # \newcommand{\Null}{\operatorname{Null}} # \newcommand{\lt}{<} # \newcommand{\gt}{>} # \newcommand{\amp}{&} # $ #

Section 14.2 The Class Equation

#

Let $X$ be a finite $G$-set and $X_G$ be the set of fixed points in $X\text{;}$ that is,

# \begin{equation*} # X_G = \{ x \in X : gx = x \text{ for all } g \in G \}. # \end{equation*} #

Since the orbits of the action partition $X\text{,}$

# \begin{equation*} # |X| = |X_G| + \sum_{i = k}^n |{\mathcal O}_{x_i}|, # \end{equation*} #

where $x_k, \ldots, x_n$ are representatives from the distinct nontrivial orbits of $X\text{.}$

#

Now consider the special case in which $G$ acts on itself by conjugation, $(g,x) \mapsto gxg^{-1}\text{.}$ The center of $G\text{,}$

# \begin{equation*} # Z(G) = \{x : xg = gx \text{ for all } g \in G \}, # \end{equation*} #

is the set of points that are fixed by conjugation. The nontrivial orbits of the action are called the conjugacy classes of $G\text{.}$ If $x_1, \ldots, x_k$ are representatives from each of the nontrivial conjugacy classes of $G$ and $|{\mathcal O}_{x_1}| = n_1, \ldots, |{\mathcal O}_{x_k}| = n_k\text{,}$ then

# \begin{equation*} # |G| = |Z(G)| + n_1 + \cdots + n_k. # \end{equation*} #

The stabilizer subgroups of each of the $x_i$'s, $C(x_i) = \{ g \in G: g x_i = x_i g \}\text{,}$ are called the centralizer subgroups of the $x_i$'s. From Theorem 14.11, we obtain the class equation:

# \begin{equation*} # |G| = |Z(G)| + [G: C(x_1) ] + \cdots + [ G: C(x_k)]. # \end{equation*} #

One of the consequences of the class equation is that the order of each conjugacy class must divide the order of $G\text{.}$

#
Example 14.12

It is easy to check that the conjugacy classes in $S_3$ are the following:

# \begin{equation*} # \{ (1) \}, \quad \{ (123), (132) \}, \quad \{(12), (13), (23) \}. # \end{equation*} #

The class equation is $6 = 1+2+3\text{.}$
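The arithmetic in this example can also be verified by brute force. The following plain Python sketch (illustrative only, not part of the text) builds $S_3$ as tuples of images, conjugates each element by every group element, and checks that the class sizes add up to $|G|\text{.}$

# +
from itertools import permutations

def compose(p, q):
    # return the permutation "first apply q, then p"
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

G = list(permutations(range(3)))      # the six elements of S_3
classes, seen = [], set()
for x in G:
    if x in seen:
        continue
    # conjugacy class of x: { g x g^{-1} : g in G }
    cls = {compose(compose(g, x), inverse(g)) for g in G}
    classes.append(cls)
    seen |= cls

print(sorted(len(c) for c in classes))            # [1, 2, 3]
print(sum(len(c) for c in classes) == len(G))     # True: 6 = 1 + 2 + 3
# -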

#
Example 14.13

The center of $D_4$ is $\{ (1), (13)(24) \}\text{,}$ and the conjugacy classes are

# \begin{equation*} # \{ (13), (24) \}, \quad \{ (1432), (1234) \}, \quad \{ (12)(34), (14)(23) \}. # \end{equation*} #

Thus, the class equation for $D_4$ is $8 = 2 + 2 + 2 + 2\text{.}$

#
Example 14.14

For $S_n$ it takes a bit of work to find the conjugacy classes. We begin with cycles. Suppose that $\sigma = ( a_1, \ldots, a_k)$ is a cycle and let $\tau \in S_n\text{.}$ By Theorem 6.16,

# \begin{equation*} # \tau \sigma \tau^{-1} = ( \tau( a_1), \ldots, \tau(a_k)). # \end{equation*} #

Consequently, any two cycles of the same length are conjugate. Now let $\sigma = \sigma_1 \sigma_2 \cdots \sigma_r$ be a cycle decomposition, where the length of each cycle $\sigma_i$ is $r_i\text{.}$ Then $\sigma$ is conjugate to every other $\tau \in S_n$ whose cycle decomposition has the same lengths.

The number of conjugacy classes in $S_n$ is the number of ways in which $n$ can be partitioned into sums of positive integers. In the case of $S_3\text{,}$ for example, we can partition the integer 3 into the following three sums:

# \begin{align*} # 3 & = 1 + 1 + 1\\ # 3 & = 1 + 2\\ # 3 & = 3; # \end{align*} #

therefore, there are three conjugacy classes. The number of such partitions of a positive integer $n$ is the value of the partition function $p(n)\text{,}$ which grows very rapidly with $n\text{.}$ Although no simple closed-form formula for $p(n)$ is known, its values can be computed by recurrence, so counting the conjugacy classes of $S_n$ is feasible even for fairly large $n\text{.}$
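The partition count mentioned above can be computed with a short dynamic program; the sketch below (plain Python, illustrative only) counts the partitions of $n\text{,}$ and hence the conjugacy classes of $S_n\text{.}$

# +
def partition_count(n):
    # ways[m] holds the number of partitions of m using the parts considered so far
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for m in range(part, n + 1):
            ways[m] += ways[m - part]
    return ways[n]

print([partition_count(n) for n in range(1, 8)])   # 1, 2, 3, 5, 7, 11, 15
# -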

#
Proof

We apply the class equation

# \begin{equation*} # |G| = |Z(G)| + n_1 + \cdots + n_k. # \end{equation*} #

Since each $n_i \gt 1$ and $n_i \mid |G|\text{,}$ where $|G|$ is a power of the prime $p\text{,}$ each $n_i$ must itself be a positive power of $p\text{;}$ hence, $p$ divides each $n_i\text{.}$ Also, $p \mid |G|\text{;}$ hence, the class equation shows that $p$ must divide $|Z(G)| = |G| - (n_1 + \cdots + n_k)\text{.}$ Since the identity is always in the center of $G\text{,}$ $|Z(G)| \geq 1\text{.}$ Being a positive multiple of $p\text{,}$ $|Z(G)| \geq p\text{,}$ and there exists some $g \in Z(G)$ such that $g \neq 1\text{.}$

#
Proof

By Theorem 14.15, $|Z(G)| = p$ or $p^2\text{.}$ If $|Z(G)| = p^2\text{,}$ then we are done. Suppose that $|Z(G)| = p\text{.}$ Then $Z(G)$ and $G / Z(G)$ both have order $p$ and must both be cyclic groups. Choosing a generator $aZ(G)$ for $G / Z(G)\text{,}$ we can write any element $gZ(G)$ in the quotient group as $a^m Z(G)$ for some integer $m\text{;}$ hence, $g = a^m x$ for some $x$ in the center of $G\text{.}$ Similarly, if $hZ(G) \in G / Z(G)\text{,}$ there exists a $y$ in $Z(G)$ such that $h = a^n y$ for some integer $n\text{.}$ Since $x$ and $y$ are in the center of $G\text{,}$ they commute with all other elements of $G\text{;}$ therefore,

# \begin{equation*} # gh = a^m x a^n y = a^{m+n} x y = a^n y a^m x = hg, # \end{equation*} #

and $G$ must be abelian.

# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 1.123465, "end_time": "2021-07-26T05:28:47.868961", "exception": false, "start_time": "2021-07-26T05:28:46.745496", "status": "completed"} tags=[] # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) from sklearn.preprocessing import StandardScaler from sklearn.metrics import mean_squared_error from sklearn.metrics import accuracy_score from sklearn.metrics import roc_auc_score import matplotlib.pyplot as pt import seaborn as sns from sklearn.model_selection import train_test_split pd.set_option('display.max_columns', None) # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # + [markdown] papermill={"duration": 0.022244, "end_time": "2021-07-26T05:28:47.915261", "exception": false, "start_time": "2021-07-26T05:28:47.893017", "status": "completed"} tags=[] # # Input of training data from input folder # Links - https://www.kaggle.com/azeembootwala/titanic # + papermill={"duration": 0.078653, "end_time": "2021-07-26T05:28:48.016371", "exception": false, "start_time": "2021-07-26T05:28:47.937718", "status": "completed"} tags=[] input_ads = pd.read_csv('../input/titanic/train_data.csv') input_ads.drop(columns=['Unnamed: 0','Title_1','Title_2','Title_3','Title_4'],inplace=True) #Dropping un-necessary columns #----------------------------------------------------------------- print(input_ads.shape) input_ads.head() # + [markdown] papermill={"duration": 0.023364, "end_time": "2021-07-26T05:28:48.063177", "exception": false, "start_time": "2021-07-26T05:28:48.039813", "status": "completed"} tags=[] # # Null Check # + papermill={"duration": 0.044028, "end_time": "2021-07-26T05:28:48.131003", "exception": false, "start_time": "2021-07-26T05:28:48.086975", "status": "completed"} tags=[] pd.DataFrame(input_ads.isnull().sum()).T # + [markdown] papermill={"duration": 0.023811, "end_time": "2021-07-26T05:28:48.180166", "exception": false, "start_time": "2021-07-26T05:28:48.156355", "status": "completed"} tags=[] # # Describe of the whole data # + papermill={"duration": 0.082647, "end_time": "2021-07-26T05:28:48.287308", "exception": false, "start_time": "2021-07-26T05:28:48.204661", "status": "completed"} tags=[] input_ads.describe() # + [markdown] papermill={"duration": 0.024066, "end_time": "2021-07-26T05:28:48.335969", "exception": false, "start_time": "2021-07-26T05:28:48.311903", "status": "completed"} tags=[] # ### Note - The data is already standardized since it is 
imported from a pre-processed public dataset # + papermill={"duration": 0.036644, "end_time": "2021-07-26T05:28:48.396994", "exception": false, "start_time": "2021-07-26T05:28:48.360350", "status": "completed"} tags=[] #Total survived vs not-survived split in the training data input_ads['Survived'].value_counts() # + [markdown] papermill={"duration": 0.024375, "end_time": "2021-07-26T05:28:48.446310", "exception": false, "start_time": "2021-07-26T05:28:48.421935", "status": "completed"} tags=[] # # Data Splitting & Pre-Processing # + papermill={"duration": 0.05372, "end_time": "2021-07-26T05:28:48.524769", "exception": false, "start_time": "2021-07-26T05:28:48.471049", "status": "completed"} tags=[] target = 'Survived' #To predict #-------------------------------------------------------------------------------- #Splitting into X & Y datasets (supervised training) X = input_ads[[cols for cols in list(input_ads.columns) if target not in cols]] y = input_ads[target] #-------------------------------------------------------------------------------- #Since test data is already placed in the input folder separately, we will just import it test_ads = pd.read_csv('../input/titanic/test_data.csv') test_ads.drop(columns=['Unnamed: 0','Title_1','Title_2','Title_3','Title_4'],inplace=True) #Dropping un-necessary columns #Splitting into X & Y datasets (supervised training) X_test = test_ads[[cols for cols in list(test_ads.columns) if target not in cols]] y_test = test_ads[target] print('Train % of total data:',100 * X.shape[0]/(X.shape[0] + X_test.shape[0])) #-------------------------------------------------------------------------------- #Manipulation of datasets for convenience and consistency X_arr = np.array(X) X_test_arr = np.array(X_test) y_arr = np.array(y).reshape(X_arr.shape[0],1) y_test_arr = np.array(y_test).reshape(X_test_arr.shape[0],1) #-------------------------------------------------------------------------------- #Basic Summary print(X_arr.shape) print(X_test_arr.shape) print(y_arr.shape) # + [markdown] papermill={"duration": 0.02458, "end_time": "2021-07-26T05:28:48.574178", "exception": false, "start_time": "2021-07-26T05:28:48.549598", "status": "completed"} tags=[] # # Logistic Regression from scratch # + [markdown] papermill={"duration": 0.02451, "end_time": "2021-07-26T05:28:48.623562", "exception": false, "start_time": "2021-07-26T05:28:48.599052", "status": "completed"} tags=[] # ### Defining fwd prop UDF, Cost function UDF & initiating weights and intercepts # + papermill={"duration": 0.045569, "end_time": "2021-07-26T05:28:48.694159", "exception": false, "start_time": "2021-07-26T05:28:48.648590", "status": "completed"} tags=[] #Sigmoid function for the forward propagation as well as backward propagation def sigmoid(arr): sig = 1/(1 + np.exp(-arr)) return sig #Fn for forward propagation of the model (to caculate the predictions) #-------------------------------------------------------------------------- def fwd_prop(X_arr,w,b): a = np.dot(X_arr,w) + b sig_a = sigmoid(a) #print('Shape of a:',a.shape) return sig_a #Fn to calculate cost for logistic regression #-------------------------------------------------------------------------------------------------- def cost_fn(y_true,y_pred,n_examples,reg_alpha,reg_type,w_): #Applying regularizations if reg_type=='L1': reg = np.sum(abs(w_)) elif reg_type=='L2': reg = 0.5 * np.sum(np.square(w_)) cost = (-1/n_examples) * np.sum((y_true * np.log(y_pred)) + ((1-y_true) * np.log(1 - y_pred))) + (reg_alpha*reg) return cost #Fn to convert 
probabilities into class 0 or 1 based on threshold #-------------------------------------------------------------------------- def prob_to_class(arr,threshold): mask = arr>threshold #print(mask) arr_class = mask.astype(int) return arr_class #Initiating the weight and intercept vectors with appropriate dimensions #-------------------------------------------------------------------------- np.random.seed(100) #Setting seed for consistency in case of random number generation #Weights #---------------------------------- w = np.zeros((X.shape[1],1)) print(X_arr.shape[1]) print(w.shape) print(w) #Intercept #---------------------------------- b = np.zeros(1) b #Testing the forward propagation function #---------------------------------- a = fwd_prop(X_arr,w,b) a.shape # + [markdown] papermill={"duration": 0.02577, "end_time": "2021-07-26T05:28:48.746278", "exception": false, "start_time": "2021-07-26T05:28:48.720508", "status": "completed"} tags=[] # ### UDF for batch_gradient_descent # #### 1. If batch_size=1, it becomes stochastic gradient descent # + papermill={"duration": 0.045224, "end_time": "2021-07-26T05:28:48.817395", "exception": false, "start_time": "2021-07-26T05:28:48.772171", "status": "completed"} tags=[] def batch_gradient_descent(y_arr_overall,X_arr_overall,w_,b_,n_iters=10,lr=0.01,batch_size=1,reg_alpha=1,reg_type='L2'): print('Total training rows :',X_arr_overall.shape[0]) #---------------------------------------------------------------------------------------- #Creating x-y batches according to the provided batch_size n_batches = X.shape[0]//batch_size print('Total Batches to create in each epoch/iter :',n_batches) batches_x = np.array_split(X_arr_overall,n_batches) print('Total Batches of X:',len(batches_x)) batches_y = np.array_split(y_arr,n_batches) print('Total Batches of y:',len(batches_y)) cost_history = [] #Cache for cost function o/p at necessary intervals for plotting later #---------------------------------------------------------------------------------------- for i in range(n_iters): #Total iterations/epochs to train on if i%1000==0: print('#-------------------- Epoch number :',i,'--------------------#') for j in range(len(batches_x)): #For each batch created for each epoch/iter #print('Batch No :',j) X_arr_ = batches_x[j] y_arr_ = batches_y[j] n_examples = X_arr_.shape[0] #print(n_examples) #---------------------------------------------------------------------------------------- #Forward propagation of the model - calculation of the model prediction a_temp = fwd_prop(X_arr_,w_,b_) cost = cost_fn(y_arr_,a_temp,n_examples,reg_alpha,reg_type,w_) if cost == np.inf: print('---- Inf encountered due to exploding gradients ----') return w_,b_,cost_history #---------------------------------------------------------------------------------------- if reg_type=='L1': reg_derivative = np.divide(w_, abs(w_), out=np.zeros_like(w_), where=abs(w_)!=0) reg_derivative = np.where(reg_derivative==np.inf,0,reg_derivative) elif reg_type=='L2': reg_derivative = w_ #Calculating the gradients for the current batch dz = (1/n_examples) * ((a_temp-y_arr_)) #Derivative of Loss fn 'L'(binary crossentropy) wrt z = wx+b dw = np.dot(X_arr_.T,dz) + ((1/n_examples)*(reg_alpha * reg_derivative)) #Derivative of w (weights) wrt 'L' (Applying chain rule of differentiation) db = (1/n_examples) * np.sum(dz) #Derivative of b (intercept) wrt 'L' (Applying chain rule of differentiation) #Updating the weight and the intercept w_ = w_ - (lr * dw) b_ = b_ - (lr * db) #Updating cost into the cache cost_history = 
cost_history + [cost] #------------------------------------------------- #Progress at regular intervals if (i%5000==0): print(i,': Cost ------->',cost) f_train_a = fwd_prop(X_arr_overall,w_,b_) #Results on whole training data after every 5k epochs #print(f_train_a.shape) f_train_a = prob_to_class(arr=f_train_a,threshold=0.5) print(f_train_a.shape) #print(y_arr_overall.shape) print('ROC AUC of training set :',roc_auc_score(y_arr_overall,f_train_a)) print('Accuracy of training set :',accuracy_score(y_arr_overall,f_train_a)) return w_,b_,cost_history # + [markdown] papermill={"duration": 0.025386, "end_time": "2021-07-26T05:28:48.868578", "exception": false, "start_time": "2021-07-26T05:28:48.843192", "status": "completed"} tags=[] # ### Training the logistic regression model # + papermill={"duration": 32.939846, "end_time": "2021-07-26T05:29:21.834557", "exception": false, "start_time": "2021-07-26T05:28:48.894711", "status": "completed"} tags=[] #Training the model on the training data with the below training specs #----------------------------------------------------------------------------------------------------------------------- epochs = 20001 learning_rate=0.00006 batch_size_=50 #----------------------------------------------------------------------------------------------------------------------- w_final,b_final,cost_history = batch_gradient_descent(y_arr_overall=y_arr, #Train y array X_arr_overall=X_arr, #Train X array w_=w, #Passing zero initiated weight vector b_=b, #Passing zero initiaed intercept vector n_iters=epochs, #Total epochs/iters for Gradient Descent lr=learning_rate, #Learning rate for Gradient Descent batch_size=batch_size_, #Batch size for Gradient Descent (1 for SGD) reg_alpha=0.05, #Regularization factor reg_type='L1') #Regularization Type # + [markdown] papermill={"duration": 0.031872, "end_time": "2021-07-26T05:29:21.898525", "exception": false, "start_time": "2021-07-26T05:29:21.866653", "status": "completed"} tags=[] # ### Plotting cost over epochs (Should have a sharp decrease) # + papermill={"duration": 1.377651, "end_time": "2021-07-26T05:29:23.308133", "exception": false, "start_time": "2021-07-26T05:29:21.930482", "status": "completed"} tags=[] #Cost plot over epochs (1 value at end of each epoch) - over the last batch sns.set_style('darkgrid') ax = sns.lineplot(x=list(range(0,epochs)),y=cost_history) ax.set(xlabel='No of epochs',ylabel='Cost',title='Cost vs Epochs - Logistic Regression') # + [markdown] papermill={"duration": 0.033019, "end_time": "2021-07-26T05:29:23.375003", "exception": false, "start_time": "2021-07-26T05:29:23.341984", "status": "completed"} tags=[] # ### UDF for predicting # + papermill={"duration": 0.043439, "end_time": "2021-07-26T05:29:23.451733", "exception": false, "start_time": "2021-07-26T05:29:23.408294", "status": "completed"} tags=[] def predict(w_,b_,test_x,test_y): print("Testing on :",test_x.shape[0],'rows') a_temp = fwd_prop(test_x,w_,b_) #Using the weights(w_) and bias(b_) vectors derived from training a_temp = prob_to_class(arr=a_temp,threshold=0.5) print('Shape of prediction :',a_temp.shape) print('ROC AUC of test set :',roc_auc_score(test_y,a_temp)) print('Accuracy of test set :',accuracy_score(test_y,a_temp)) print(a_temp[0:3]) return a_temp # + [markdown] papermill={"duration": 0.033609, "end_time": "2021-07-26T05:29:23.519425", "exception": false, "start_time": "2021-07-26T05:29:23.485816", "status": "completed"} tags=[] # # Predictions from the manual created linear regression model # + papermill={"duration": 
0.046624, "end_time": "2021-07-26T05:29:23.599635", "exception": false, "start_time": "2021-07-26T05:29:23.553011", "status": "completed"} tags=[] predictions_ = predict(w_final,b_final,X_test_arr,y_test_arr) # + [markdown] papermill={"duration": 0.033442, "end_time": "2021-07-26T05:29:23.666971", "exception": false, "start_time": "2021-07-26T05:29:23.633529", "status": "completed"} tags=[] # # Linear Regression from sklearn as benchmark # + papermill={"duration": 0.833429, "end_time": "2021-07-26T05:29:24.534275", "exception": false, "start_time": "2021-07-26T05:29:23.700846", "status": "completed"} tags=[] from sklearn.linear_model import LogisticRegression #--------------------------------------------------------------------------------------- log_reg = LogisticRegression(penalty='l2',random_state=100,solver='sag',max_iter=20001,tol=1e-4,C=20) log_reg.fit(X_arr,y_arr) prediction_sklearn = log_reg.predict(X_test_arr) #--------------------------------------------------------------------------------------- print('ROC AUC of test set :',roc_auc_score(y_test_arr,prediction_sklearn)) print('Accuracy of test set :',accuracy_score(y_test_arr,prediction_sklearn)) # + [markdown] papermill={"duration": 0.033841, "end_time": "2021-07-26T05:29:24.603395", "exception": false, "start_time": "2021-07-26T05:29:24.569554", "status": "completed"} tags=[] # # SGD Classifier # - (Logistic regression with normal stochastic gradient descent) - Directly comparable with manual implementation # + papermill={"duration": 0.059878, "end_time": "2021-07-26T05:29:24.697642", "exception": false, "start_time": "2021-07-26T05:29:24.637764", "status": "completed"} tags=[] from sklearn.linear_model import SGDClassifier #--------------------------------------------------------------------------------------- log_reg_sgd = SGDClassifier(loss='log',penalty='l2',alpha=0.05,random_state=100,epsilon=learning_rate,max_iter=epochs,tol=1e-10) log_reg_sgd.fit(X_arr,y_arr) prediction_sklearn_sgd = log_reg_sgd.predict(X_test_arr) #--------------------------------------------------------------------------------------- print('ROC AUC of test set :',roc_auc_score(y_test_arr,prediction_sklearn_sgd)) print('Accuracy of test set :',accuracy_score(y_test_arr,prediction_sklearn_sgd)) # + [markdown] papermill={"duration": 0.035107, "end_time": "2021-07-26T05:29:24.768326", "exception": false, "start_time": "2021-07-26T05:29:24.733219", "status": "completed"} tags=[] # # Comparison to SGD Classifier (Because, sklearn logistic regression has other optimization techniques, its not directly comparable) # + [markdown] papermill={"duration": 0.034991, "end_time": "2021-07-26T05:29:24.838288", "exception": false, "start_time": "2021-07-26T05:29:24.803297", "status": "completed"} tags=[] # ### Percent deviation in the weights (with respect to manual logistic regression weights) # + papermill={"duration": 0.045572, "end_time": "2021-07-26T05:29:24.919133", "exception": false, "start_time": "2021-07-26T05:29:24.873561", "status": "completed"} tags=[] 100 * (w_final-log_reg_sgd.coef_.ravel().reshape(11,1))/w_final # + [markdown] papermill={"duration": 0.035106, "end_time": "2021-07-26T05:29:24.989741", "exception": false, "start_time": "2021-07-26T05:29:24.954635", "status": "completed"} tags=[] # ## Insights : # 1. Though the weights are very different as we see above, the models are predicting similarly. # 2. 
This indicates that there are multiple solutions possible for the data in hand # + [markdown] papermill={"duration": 0.035422, "end_time": "2021-07-26T05:29:25.061409", "exception": false, "start_time": "2021-07-26T05:29:25.025987", "status": "completed"} tags=[] # # END # + papermill={"duration": 0.036186, "end_time": "2021-07-26T05:29:25.133339", "exception": false, "start_time": "2021-07-26T05:29:25.097153", "status": "completed"} tags=[] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="tc4Xb8Fyz9Re" # # - # IMPORTING LIBRARIES # + id="6Fvv3zM70hIu" import sklearn.datasets as datasets import pandas as pd # + [markdown] id="7r_HJVUW0jBa" # IMPORTING DATASET # + colab={"base_uri": "https://localhost:8080/"} id="-9bkxQwD0psJ" outputId="e6724b86-9f05-4001-93b9-c4d17196006c" iris=datasets.load_iris() df=pd.DataFrame(iris.data, columns=iris.feature_names) print(df.head(5)) y=iris.target print(y) # + [markdown] id="yZc6lfeT1Eod" # THE DECISION TREE ALGORITHUM # + colab={"base_uri": "https://localhost:8080/"} id="rDaUKqfd1Le6" outputId="9f65be7c-e108-446b-f5e4-ee8854c6bedd" from sklearn.tree import DecisionTreeClassifier dtree=DecisionTreeClassifier() dtree.fit(df,y) # + [markdown] id="CERKYXMF1RRI" # Visualize the Decision Tree # + [markdown] id="rGacOwWu1Z_Q" # >>Installing the required libraries # + colab={"base_uri": "https://localhost:8080/"} id="m1XpW0ND1fas" outputId="d532fcb8-8794-447b-bc2f-5ea029ef984b" # !pip install pydotplus # !apt-get install graphviz -y # + [markdown] id="-yxgR-HY199i" # >>Visualize the graph # + colab={"base_uri": "https://localhost:8080/", "height": 802} id="cqWT0jWJ10nK" outputId="5afdcc36-ba60-431c-fd9c-3fe7d684132c" from sklearn.externals.six import StringIO from IPython.display import Image from sklearn.tree import export_graphviz import pydotplus dot_data = StringIO() export_graphviz(dtree, out_file=dot_data, feature_names=iris.feature_names, filled=True, rounded=True, special_characters=True) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: actionshandstools # language: python # name: actionshandstools # --- # + import os import cv2 # %cd /home/egoodman/intermountain/videos def extract_frames_from_video(video_location, num_frames_to_collect): vid_cap = cv2.VideoCapture(video_location) num_frames_in_video = int(vid_cap.get(cv2.CAP_PROP_FRAME_COUNT)) sample_every = int(num_frames_in_video / (num_frames_to_collect-1)) print("\n\nExtracting frames from video at location {} having {} frames".format(video_location, num_frames_in_video)) print("Therefore, we will sample every {} frames".format(sample_every)) count = 1 frame_no = 0 ret = True while ret: vid_cap.set(cv2.CAP_PROP_POS_FRAMES, frame_no) ret, frame = vid_cap.read() if ret: frame_name = video_location[-15:-4] + "-frame-" + str(frame_no) + ".jpg" print("Writing frame", count, frame_name) cv2.imwrite(frame_name, frame) frame_no += sample_every count += 1 head_dir = "/home/egoodman/intermountain/videos/" for file in os.listdir(): if file.endswith(".avi"): complete_video_path = head_dir + file extract_frames_from_video(complete_video_path, 20) # --- # jupyter: # jupytext: # text_representation: # extension: .py # 
format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # # %load HC(SLINK).py ''' Created on the 12th, Sep, 2017 @author : ''' from math import * # definition of distance between numeric data points # input : two points a and b, # distance used among 'euclidean' 'squared' 'manhattan' and 'max' # output : distance between two numeric points def norm(a,b, metric = 'euclidean'): try : if(len(a) != len(b)): raise ValueError("two vectors are not of the same dimension") exit() k =0 for i in range(len(a)): if(metric == 'euclidean' or 'squared'): k+= (a[i] - b[i])**2 #print('eucliean',k) if(metric =='manhattan'): k+= abs(a[i]-b[i]) #print('manhattan',k) if(metric== 'max'): k =max(k, abs(a[i]-b[i])) #print('max',k) if(metric == 'euclidean'): k = sqrt(k) return(k) except TypeError: print("Not all data points are numeric") # function to execute SLINK algo # input : dataset who is a list of data points in form of list, # dimension of data points # output : pointer representations of dendrograms Pi and Lambda def SLINK(Dataset, d): n = len(Dataset) # All the data points are labelled as 1, 2, ..., n # A(i) is Lambda, noting the lowest level at which i is no longer the last point in his cluster # B(i) is the last point in the cluster which i then joins A = [n+1 for i in range(n)] B = [n*2 for i in range(n)] #initialisation A[0] = 1 B[0] = 10000 for k in range(1,n): B[k] = k A[k] = 10000 M = [0 for i in range(k+1)] for i in range(k): M[i] = norm(Dataset[i],Dataset[k]) for i in range(k): if(A[i]>= M[i]): M[B[i]] = min(M[B[i]], A[i]) A[i] = M[i] B[i] = k if(A[i] < M[i]): M[B[i]] = min(M[B[i]], M[i]) for i in range(k): if(A[i] >= A[B[i]]): B[i] = k return(A,B) # - e =[2,2] b =[1,2] c =[1,1] d =[0,0] a = [3,3] f =[2.5,2] Dataset = [a,b,c,d,e,f] Dataset.append([1.5,0]) Dataset.append([3,4]) res = SLINK(Dataset,2) print(res) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Exercises # # This chapter is an intermezzo that allows us to check and have a deeper understanding of the concepts seen so far by means of exercises. We will see how the code shown can be rewritten to take advantage of battle-tested solutions and idioms that emerges from daily practice. # # First of all, we import some modules (be free to skim the corresponding documentation for each one of them), import functools, operator, math, itertools, random, collections, statistics, bisect, operator, heapq # that contains useful definitions for the code that we are going to write. Moreover, an utility for generators, def take(iterable, n): return map(lambda p: p[1], zip(range(n), iterable)) # that consumes an iterable and return a generator that will yield $n$ objects at most. 
For the sake of clarity, taken = take(itertools.count(), 50) taken # is a actually generator and its content equals assert list(taken) == list(range(50)) # Before starting, we initialize the random generator with a nice prime random.seed(11) # ## Intersection # + A = list(range(10000)) B = list(range(10000)) random.shuffle(A) random.shuffle(B) # - def intersection(A, B): B = set(B) return (a for a in A if a in B) # %timeit list(intersection(A, B)) # %timeit list(zip(A, set(B))) def intersection(A, B): A, B = iter(sorted(A)), iter(sorted(B)) a, b = next(A), next(B) while True: try: if a == b: yield a a, b = next(A), next(B) elif a < b: a = next(A) else: b = next(B) except StopIteration: break # %timeit list(intersection(A, B)) # ## (Pythagorean) tuples # # Let def tuples(*slices): return itertools.product(*map(lambda s: range(s.start, s.stop), slices)) # **INTERMEZZO** def A(a, b, c, d): pass def A(*args): return list(map(lambda i: i + 4, args)) def AA(args): return list(map(lambda i: i + 4, args)) def B(a, b, *args): return [a, b] + list(map(lambda i: i + 4, args)) A(1, 2, 3) A([1, 2, 3]) AA([1, 2, 3]) B(1,) B(1, 2) B(1, 2, 3) A() A(1, 2, 3) A(1, 2, 3, 4, 5, 6, 7) container = range(5) A( *container ) # --- # where help(itertools.product) # Consider the application to an empty sequence of `slide`s, units = tuples() units # then saturate it list(units) # Now, build tuples using just a `slide` object, singletons = tuples(slice(5, 11)) singletons # then saturate it list(singletons) # Now, build tuples using a twin `slide` object, s = slice(5, 11) pairs = tuples(s, s) pairs # then saturate it list(pairs) # Now, build tuples using a three different `slide` objects (taking into account of splitting the returned generator), triples_a, triples_b = itertools.tee(tuples(slice(5, 11), slice(6, 13), slice(7, 14))) # where help(itertools.tee) # then saturate it list(triples_a) # Now a corner case, but still interesting for ensuring a sound behavior, triples = tuples(slice(5, 11), slice(6, 6), slice(7, 14)) # ouch! L = [1, 2, 3, 4] L[2:2] L[slice(2, 2)] # then saturate it list(triples) # who we have to blame? # Finally, let type(True) def is_pythagorean(tup: tuple, n=2) -> bool: # is_pythagorean is a *predicate* '''A Pythagorean triple consists of three positive integers a, b, and c, such that a^2 + b^2 = c^2. Such a triple is commonly written (a, b, c), and a well-known example is (3, 4, 5). If (a, b, c) is a Pythagorean triple, then so is (ka, kb, kc) for any positive integer k. A primitive Pythagorean triple is one in which a, b and c are coprime (that is, they have no common divisor larger than 1). See also https://en.wikipedia.org/wiki/Pythagorean_triple. ''' a, b, c = tup # tuple unpacking return (a**n + b**n == c**n) if a <= b <= c else False # in filter(is_pythagorean, triples_b) list(filter(is_pythagorean, triples_b)) # do a selection # and help(is_pythagorean) # just to show that writing docstrings is cool and useful. 
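# As a small follow-up sketch (an illustrative addition; `is_primitive` is an ad hoc helper based on the docstring's definition), the *primitive* triples can be picked out with `math.gcd`:
# +
import math, itertools

def is_primitive(tup) -> bool:
    """True when the entries of the triple have no common divisor larger than 1."""
    a, b, c = tup
    return math.gcd(math.gcd(a, b), c) == 1

candidates = itertools.product(range(1, 30), repeat=3)
pythagorean = [t for t in candidates if t[0] <= t[1] <= t[2] and t[0]**2 + t[1]**2 == t[2]**2]
list(filter(is_primitive, pythagorean))  # e.g. (3, 4, 5), (5, 12, 13), (7, 24, 25), ...
# -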
# ## `sum_upto` # Let def sum_upto(n): return functools.reduce(operator.add, range(n+1)) # and test according to Euler's quicker formula n = 100 v = sum_upto(n) gauss = (n*(n+1)/2) assert v == gauss == 5050 # where help(functools.reduce) # and help(operator.add) # ## `sqrt` # Let def sqrt(n): refined = n while True: yield refined refined = (n/refined + refined)/2 # to enumerate 15 approximation of the square root of 37 n = 37 list(take(sqrt(37), 15)) # and check with respect to math.sqrt(n) # where help(math.sqrt) # ## $\pi$ # # According to https://en.wikipedia.org/wiki/Leibniz_formula_for_%CF%80, let def pi_Leibniz(): d = 0 for i, coeff in enumerate(itertools.count(1, step=2)): yield 4*d d += (-1)**i/coeff # in list(take(pi_Leibniz(), 1000))[-10:] # and check against the math.pi # where help(itertools.count) # ## The Collatz's conjecture # # Consider the following operation on an arbitrary positive integer: # # If the number is even, divide it by two. # If the number is odd, triple it and add one. # # See also https://en.wikipedia.org/wiki/Collatz_conjecture. Let def collatz(n): while True: yield n n = 3*n + 1 if n % 2 else n // 2 # be aware that we lose track of the original `n`! # in [list(take(collatz(n), 15)) for n in range(1, 20)] # ## Fibonacci numbers # # Directly from https://docs.python.org/3/library/functools.html#functools.cache: @functools.lru_cache() def factorial(n): print('•', end='') return n * factorial(n-1) if n else 1 # no previously cached result, makes 11 recursive calls (count the • symbols) factorial(10) # just looks up cached value result factorial(5) # makes two new recursive calls, the other 10 are cached factorial(12) # ## Uniform `random` on segmented interval # # The problem here reads as follow: sample uniformly from $[a, b)$ and $[c, d)$ where $b <= c$.
Eventually, try to generate to an arbitrary sequence of `slice`s, assuming they are fed in sorted order with respect to `<`. help(random.random) def samples(*slices): step = 1/len(slices) steps = itertools.count(step, step) bins = [(s, sl) for sl, s in zip(slices, steps)] while True: r = random.random() i = bisect.bisect_left(bins, (r, None)) sl = slices[i] yield abs(sl.stop - sl.start) * (r - (i*step))/step + sl.start samples(slice(10, 20), slice(35, 40)) # Then define the generator with respect to $[10, 20)$ and $[35, 40)$ observations = take(samples(slice(10, 20), slice(35, 40)), 1_000_000) observations # have a look at some observations sorted([i for _, i in zip(range(100), observations)]) # then observe the quantiles: statistics.quantiles(observations) # it looks uniform. By the way, use different intervals, $[14, 20)$ and $[35,40)$, observations = take(samples(slice(14, 20), slice(35, 40)), 1_000_000) # look again at some observations, sorted([i for _, i in zip(range(100), observations)]) # and check the corresponding quantiles statistics.quantiles(observations) # it should be uniform too. Finally, we test the corner case where $b=c$, so let $[10, 20)$ and $[20,40)$, observations = take(samples(slice(10, 20), slice(20, 40)), 1_000_000) # look again at some observations, sorted([i for _, i in zip(range(100), observations)]) # and check the corresponding quantiles statistics.quantiles(observations) # it should be uniform either. Finally, attempt a sampling from `4` slices, observations = take(samples(slice(0, 5), slice(10, 15), slice(20, 25), slice(30, 35)), 1_000_000) # look again at some observations, sorted([i for _, i in zip(range(100), observations)]) # and check the corresponding quantiles statistics.quantiles(observations) # it should be uniform either. # ## Bernoulli random variable int(True) # this is a very quick check to see if a Boolean can be used as integer def Bernoulli(p): 'This is a generator for a Bernoulli random variable of parameter `p` for success.' while True: # forever we loop r = random.random() # get a sample yield int(r < p) # if that sample denotes a success or a failure we *yield* that outcome B = Bernoulli(p=0.6) # B is our random variable B next(B) next(B) next(B) next(B) list(take(B, 20)) C = collections.Counter(take(B, 1_000_000)) C C[1]/(C[0]+C[1]) # where print(collections.Counter.__doc__) # ## Russian Peasant Multiplication # # Let def halves_doubles(n, m): halving = n doubling = m acc = 0 while halving: digit = halving % 2 acc = acc + digit * doubling yield (digit, halving, doubling, acc) halving = halving >> 1 # int(halving / 2) doubling = doubling << 1 # in list(halves_doubles(89, 18)) # see https://en.wikipedia.org/wiki/Ancient_Egyptian_multiplication and also https://www.cut-the-knot.org/Curriculum/Algebra/PeasantMultiplication.shtml. Then, def rpm(n, m): *prefix, (b, h, d, s) = halves_doubles(n, m) return s # so the check passes, assert rpm(89, 18) == 89 * 18 == 1602 # because bin(89) # Of course, it works too when the first number is even, rpm(6, 100) # Of course our implementation # %timeit rpm(293819385789379687596845, 921038209831568476843584365) # is *slower* than the primitive one # %timeit 293819385789379687596845 * 921038209831568476843584365 # because arithmetic is performed in the virtual machine. 
# Let us give a strict version also, def rpm_strict(n, m): halving = n doubling = m acc = 0 while halving: digit = halving % 2 acc = acc + digit * doubling halving = halving >> 1 doubling = doubling << 1 return acc # check that it is correct, rpm_strict(89, 18) # and observe that it is a little bit *faster* than our former implementation # %timeit rpm_strict(293819385789379687596845, 921038209831568476843584365) # ## Fixed sum def subarrays(L): return (L[i:j] for i in range(len(L)) for j in range(i, len(L)+1)) L = [-1, 5, 8, -9, 4, 1] list(subarrays(L)) def fixed_sum(L, n): return filter(lambda s: sum(s)==n, subarrays(L)) list(fixed_sum(L, 10)) def partial_sums(L): g = itertools.accumulate(subarrays(L), lambda s, each: s + each[-1] if each else 0, initial=0) next(g) # to ignore the initial 0 given above return g list(partial_sums(L)) # Toward an optimization... def subarrays_rev(L): return (tuple(L[i:j]) for i in range(len(L)-1, -1, -1) for j in range(i+1, len(L)+1)) list(subarrays_rev(L)) def fixed_sum_rev(L, n, cache={}): for tup in subarrays_rev(L): rest = tup[1:] s = tup[0] + cache.get(rest, 0) cache[tup] = s if s == n: yield tup cache = {} list(fixed_sum_rev(L, 10, cache)) cache # have a look at the collected values def sample(n): O, b, *rest = bin(random.getrandbits(n)) # because `string`s are iterable objects indeed. return list(map(int, rest)) # where help(random.getrandbits) LL = sample(1000) assert set(map(tuple, fixed_sum(LL, 10))) == set(fixed_sum_rev(LL, 10)) # %timeit list(fixed_sum(LL, 10)) # %timeit list(fixed_sum_rev(LL, 10)) # **INTERMEZZO** if 4 < 8: print('a') else: pass b = if 4 < 8: ''' lots of code ''' else: 6 b = 5 if 4 < 8 else 6 b # ## Some strange uses of recursion # # For more on this recursion schemata see https://www.cs.ox.ac.uk/people/ralf.hinze/publications/ICFP09.pdf and also https://www.sciencedirect.com/science/article/pii/S1571066104809721. # ### Constants def const(n): yield n yield from const(n) const(1) ones = const(1) list(take(ones, 10)) # ### Nats def nats(): yield 0 g = nats() # !! yield from map(lambda n: n + 1, g) list(take(nats(), 10)) # ### Primes # Consider the following functional specification for the naturals that are also *primes* # ```haskell # primes = filterPrime [2..] # where filterPrime (p:xs) = # p : filterPrime [x | x <- xs, x `mod` p /= 0] # ``` def primes(): def P(numbers): prime = next(numbers) # get the next prime from the iterator `it`. yield prime # yield the next prime number def not_divisible_by_prime(n): # a mnemonic predicate. q, r = divmod(n, prime) return r != 0 yield from P(filter(not_divisible_by_prime, numbers)) # `numbers` has been advanced before. yield from P(itertools.count(2)) list(take(primes(), 20)) # ### Fibonacci, again # # Remember, # $$ # f_{n+2} = f_{n+1} + f_{n}, \quad \text{where} \quad f_{0} = 0 \wedge f_{1} = 1 # $$ def fibs(first=0, second=1): yield first # the first number in the Fibonacci series, yield second # ... and the second one. f, ff = itertools.tee(fibs(first, second)) # duplicate the stream of fibonacci numbers. next(ff) # advance just one of them yield from map(operator.add, f, ff) # according to the Fibonacci rule, yield all the rest. list(take(fibs(), 20)) # #### ...and again # + from sympy import IndexedBase, init_printing # SymPy for symbolic computation init_printing() # pretty printing math symbols and expressions # - x = IndexedBase('x') x[1] # indexing as done in math. 
fibos = list(take(fibs(x[0], x[1]), 20)) # generate an abstract schema fibos [expr.subs({x[0]:0, x[1]:1}) for expr in fibos] # Fibonacci numbers, as usual. [expr.subs({x[0]:2, x[1]:1}) for expr in fibos] # Lucas numbers, less usual. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Week 1. Hyperparameter tuning: Searching the best architecture # # --- # # ### Neural Architecture Search # # - Is a technique for automating the design of ANN. # - It helps finding the optimal architecture # - This is a search over a huge space # # ### Types of parameteres in ML Models # Trainable parameters: # - Learned by the algorithm during training # # Hyperparameters: # - Set before launching the learning process # - Not updated in each training step # # ### Manual hyperparameter tuning is not scalable # - Hyperparameters can be numerous even for small models # - Tuning them manually can be a real brain teaser # - Tuning helps with model performance # # ### Automating hyperparameter tuning with Keras Tuner # - Automation is key: open source resources to the rescue # Keras Tuner: # - Hyperparameter tuning with TF.2 # # ## Keras Autotuner # # --- # # - Do the model need more or less hidden units to perform well? # - How does model size affect the convergence speed? # - Is there any trade off between convergen speed, model size and accuracy? # - Search automation is the natural path to take # - Keras tunner built in search functionality # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #

2014
Python 3, 2020
# #
#

How To Do Things With Words (And Counters) #
or #
Statistical Natural Language Processing in Python #
or #
Everything I Needed to Know I Learned From Sesame Street

#
#
#
One, two, three, ah, ah, ah! — The Count #
#
# # This notebook will look at data, models, and tasks involving stastical natural language processing. # # First some boring preliminaries: # + import re import math import random import matplotlib.pyplot as plt from collections import Counter from itertools import permutations from functools import lru_cache from typing import List, Tuple, Set, Dict, Callable Word = str # We implement words as strings cat = ''.join # Function to concatenate strings together # - # # Data: Text and Words # # Before we can do things with words, we need some words. I happen to have a big text called [big.txt](big.txt). We can read it, and see how big it is (in characters): TEXT = open('big.txt').read() len(TEXT) # Over six million characters. # # Now break the characters into words (or more formal-sounding: *tokens*). We'll ignore capitalization and punctuation and anything but the 26 letters (other tokenizers *do* deal with such things). The function `tokens` turns text into a list of words, and `sentence` puts tokens back together into a string: # + def tokens(text) -> List[Word]: """List all the word tokens (consecutive letters) in a text. Normalize to lowercase.""" return re.findall('[a-z]+', text.lower()) sentence = ' '.join # Function to join words with spaces # - tokens('This is: "A test". 😋') sentence(tokens('This is: "A test". 😋')) # How many words are in the text? WORDS = tokens(TEXT) len(WORDS) # Over a million words. Here are the first 10: sentence(WORDS[:10]) # # Model: The Bag of Words Model # # `WORDS` is a list of the words in `TEXT`, and with a little work it can also serve as a *generative model* of text. We know that language is very complicated, but we can create a simplified model of language that captures part of the complexity. # # In the so-called [**bag of words** model](https://en.wikipedia.org/wiki/Bag-of-words_model), we ignore the order of words, but maintain their frequency. Think of it this way: take all the words from the text, and throw them into a bag. Shake the bag, and then generate a sentence by pulling words out of the bag one at a time. Chances are the "sentence" won't be grammatical or sensible, but it will have words in roughly the right proportions. Here we generate a "sentence" with this approach: def sample(words, n=10) -> str: """Sample n random words from a list of words.""" return [random.choice(words) for _ in range(n)] sentence(sample(WORDS)) # `WORDS` is a million elements list, and has a lot of repetition. For example: WORDS.count('the') # A more compact representation for a bag of words is a `collections.Counter`: a dictionary of `{'word': count}` entries (with some handy extra [methods](https://docs.python.org/3/library/collections.html#collections.Counter)). We could use `Counter` directly, but I will define `Bag` as a subclass of Counter (note that [bag](https://en.wikipedia.org/wiki/Multiset) is considered a synonym of [multiset](https://en.wikipedia.org/wiki/Multiset)): class Bag(Counter): """A bag of words.""" Bag(tokens('Is this a test? It is a test! A test it is!')) # Let's make a `Bag` of `WORDS` and get a feel for what's there: # + BAG = Bag(WORDS) len(BAG) # Number of different words # - BAG.most_common(20) # Most common words # # Model: Zipf's Law # # In 1935, linguist observed that in any big text, the *n*th most frequent word appears with a frequency of roughly 1/*n* of the most frequent word. 
He get's credit for [*Zipf's Law*](https://en.wikipedia.org/wiki/Zipf%27s_law), even though [](https://en.wikipedia.org/wiki/Felix_Auerbach) made a similar observation in 1913. If we plot the frequency of words, most common first, on a log-log plot, they should come out as a straight line if Zipf's Law holds. Here we see that it is a fairly close fit: # + def zipf_plot(bag): M = max(bag.values()) # Count for most common word X = range(1, len(bag) + 1) plt.yscale('log'); plt.xscale('log'); plt.title('Frequency of n-th most frequent word and 1/n line.') plt.xlabel('Word Index Number'); plt.ylabel('Word Frequency') plt.plot(X, [M/i for i in X], ':', label='1/n') plt.plot(X, [c for (w, c) in bag.most_common()], '.', label='actual word frequency') plt.grid(); plt.legend(loc=1) zipf_plot(BAG) # - # # Task: Spelling Correction # # Given a word *w*, find the most likely correction *c* = `correct(`*w*`)`. # # **Approach:** Try all candidate words *c* that are known words that are near *w*. Choose the most likely one. # # How to balance *near* and *likely*? # # For now, in a trivial way: always prefer nearer, but when there is a tie on nearness, use the word with the highest `WORDS` count. Measure nearness by *edit distance*: the minimum number of deletions, transpositions, insertions, or replacements of characters. By trial and error, we determine that going out to edit distance 2 will give us reasonable results. Then we can define `correct(`*w*`)`: # # # def correct(word) -> Word: """Find the best spelling correction for this word.""" # Prefer edit distance 0, then 1, then 2; otherwise default to word itself. candidates = (known({word}) or known(edits1(word)) or known(edits2(word)) or {word}) return max(candidates, key=BAG.get) # The functions `known` and `edits0` are easy; and `edits2` is easy if we assume we have `edits1`: # + def known(words) -> Set[Word]: """Return the subset of `words` that are in the known `BAG` of words.""" return words.intersection(BAG) def edits2(word) -> Set[str]: """Return all strings that are two edits away from this word.""" return {e2 for e1 in edits1(word) for e2 in edits1(e1)} # - # Now for `edits1(word)`: the set of candidate words that are one edit away. For example, given `"wird"`, this would include `"weird"` (inserting an `e`) and `"word"` (replacing a `i` with a `o`), and also `"iwrd"` (transposing `w` and `i`; then `known` can be used to filter this out of the set of final candidates). How could we get them? One way is to *split* the original word in all possible places, each split forming a *pair* of words, `(a, b)`, before and after the place, and at each place, either delete, transpose, replace, or insert a letter: # # #
| pairs:          | Ø, wird  | w, ird  | wi, rd  | wir, d  | wird, Ø  | Notes: (a, b) pair          |
| deletions:      | Ø+ird    | w+rd    | wi+d    | wir+Ø   |          | Delete first char of b      |
| transpositions: | Ø+iwrd   | w+rid   | wi+dr   |         |          | Swap first two chars of b   |
| replacements:   | Ø+?ird   | w+?rd   | wi+?d   | wir+?   |          | Replace char at start of b  |
| insertions:     | Ø+?+wird | w+?+ird | wi+?+rd | wir+?+d | wird+?+Ø | Insert char between a and b |
# + def edits1(word) -> Set[str]: """Return all strings that are one edit away from this word.""" edits = set() for a, b in splits(word): if b: edits.add(a+b[1:]) # deletion if len(b) >= 2: edits.add(a+b[1]+b[0]+b[2:]) # transposition for c in alphabet: edits.add(a+c+b[1:]) # replacement edits.add(a+c+b) # insertion return edits return set(deletes + transposes + replaces + inserts) def splits(word) -> List[Tuple[str, str]]: """Return a list of all possible (first, rest) pairs that comprise word.""" return [(word[:i], word[i:]) for i in range(len(word)+1)] alphabet = 'abcdefghijklmnopqrstuvwxyz' # - splits('wird') ' '.join(sorted(edits1('wird'))) known(edits1('wird')) len(edits2('wird')) # + s = 'Speling ERRURS in "somethink." Whutever; unusuel misteakes everyware?' {w: correct(w) for w in tokens(s)} # - # Can we make the output prettier than that? Can we preserve the punctuation and capitalization in a text? # + def correct_text(text) -> str: "Correct all the words within a text, leaving the rest untouched." return re.sub('[a-zA-Z]+', correct_match, text) def correct_match(match: re.Match) -> Word: "Spell-correct word in match, and preserve proper upper/lower/title case." word = match.group() return case_of(word)(correct(word.lower())) def case_of(word) -> Callable: """Guess what function would give the capitalization of `word`.""" return next(c for c in (str.upper, str.lower, str.capitalize, str) if c(word) == word) # - correct_text(s) correct_text('Audiance sayzs: TUMBLR ...') # So far so good. You can probably think of ways to make this better; we'll consider some later. # # Model: Probabilities with the Bag of Words Model # # In the bag of words model, what's the probability of picking a particular word out of the bag? What's the probability of a sequence of words? # # We'll denote that probability of a single word as `Pword(word)` in Python; the probability of a sequence of words will be `Pwords(words)`. I'll redefine `Bag` to make bags be both a Counter and a `ProbabilityFunction`: a callable object, `P`, that will give you the probability of some outcome when invoked with `P(outcome)`. # + class ProbabilityFunction: def __call__(self, outcome): """The probability of `outcome`.""" if not hasattr(self, 'total'): self.total = sum(self.values()) return self[outcome] / self.total class Bag(Counter, ProbabilityFunction): """A bag of words.""" # - # Now the following assignment allows us to get the probability of a word with `Pword(word)`: Pword = Bag(WORDS) Pword('the') Pword['the'] # The above says that the probability of picking `the` randomly out of the bag of words is about 7%, and the actual count of `the` is 80,029. Below we see probabilities for more words: sorted({(Pword(w), w) for w in tokens('''"The" is the most common word in English; old-fashioned words like "victuals" and lamentations" are rare.''')}) # Now, what is the probability of a *sequence* of words? In the general case, this is defined by the definition of joint probability: # # $P(w_1 \ldots w_n) = P(w_1) \times P(w_2 \mid w_1) \times \ldots P(w_n \mid w_1 \ldots w_{n-1}) = \Pi_{i=1}^n P(w_i \mid w_1 \ldots w_{i-1})$ # # In the bag of words model, each word is drawn from the bag *independently* of the others. So $P(w_i \mid w_1 \ldots w_{i-1}) = P(w_i)$, and we have: # # $P(w_1 \ldots w_n) = P(w_1) \times P(w_2) \times \ldots P(w_n) = \Pi_{i=1}^n P(w_i)$ # # Now clearly this model is wrong; the probability of a sequence of words in English *does* depend on the order of the words. 
But, as the statistician George Box said, *All models are wrong, but some are useful.* The bag of words model, wrong as it is, has many useful applications. # #
# #
All models are wrong, but some are useful
—George Box (1919-2013)

# # # Below we define `Pwords` to compute the product of the individual word probabilities: # + def Pwords(words: List[Word]) -> float: "Probability of a sequence of words, assuming each word is independent of others." return Π(Pword(w) for w in words) def Π(nums) -> float: "Multiply the numbers together. (Like `sum`, but with multiplication.)" result = 1 for num in nums: result *= num return result # - Pwords(tokens('this is a test of the good words')) Pwords(tokens('here lies another sentence with occasionally unusual terms')) # Although both sentences have eight words, the second is judged to be 10 billion times less probable. # # Task: Word Segmentation # # **Task**: *given a sequence of characters with no spaces separating words, recover the sequence of words.* # # Why? Some languages have no word delimiters: [不带空格的词](http://translate.google.com/#auto/en/%E4%B8%8D%E5%B8%A6%E7%A9%BA%E6%A0%BC%E7%9A%84%E8%AF%8D) # # In hastily-written text there might be spelling errors that include [wordsruntogether](https://www.google.com/search?q=wordsruntogether). # # Some specialized sub-genres have no word delimiters; an example is domain names in URLs. Sometimes this can go [poorly](https://www.boredpanda.com/worst-domain-names/?utm_source=google&utm_medium=organic&utm_campaign=organic), as in the website `choosespain.com` which once was a tourist information site encouraging you to "choose Spain" for your travel, but then they noticed that the domain name could also be parsed as "chooses pain". # # To find the best segmentation, we can again take advantage of the bag-of-words assumption that words are independent of each other. If we segment the text into a first word and remaining characters, then the best segmentation with that first word is arrived at by finding the best segmentation of the remaining characters (which does not depend on the first word), where "best" means having maximum probability according to `Pwords`. Thus if we try all possible splits, the split with the maximum probability will be the overall best. # # segment('choosespain') == # max((['c'] + segment('hoosespain'), # ['ch'] + segment('oosespain'), # ... # ['choose'] + segment('spain'), # ... # ['choosespain'] + segment('')), # key=Pwords) # # To make this somewhat efficient, we need to avoid re-computing the segmentations of the remaining characters. This can be done explicitly by *dynamic programming* or implicitly with *memoization* or *caching* of function results, as is done by `functools.lru_cache`. Also, we needn't consider all possible lengths for the first word; we can impose a maximum length. What should it be? A little more than the longest word seen so far. 
max(len(w) for w in BAG) def splits(text, start=0, end=20) -> Tuple[str, str]: """Return a list of all (first, rest) pairs; start <= len(first) <= L.""" return [(text[:i], text[i:]) for i in range(start, min(len(text), end)+1)] splits('word') splits('choosespain', 1, 7) @lru_cache(None) def segment(text) -> List[Word]: """Return a list of words that is the most probable segmentation of text.""" if not text: return [] else: candidates = ([first] + segment(rest) for (first, rest) in splits(text, 1)) return max(candidates, key=Pwords) segment('choosespain') segment('speedofart') decl = ('wheninthecourseofhumaneventsitbecomesnecessaryforonepeoplet' + 'odissolvethepoliticalbandswhichhaveconnectedthemwithanother' + 'andtoassumeamongthepowersoftheearththeseparateandequalstati' + 'ontowhichthelawsofnatureandofnaturesgodentitlethem') sentence(segment(decl)) # That looks good! It should be "nature's" not "natures," but our approach does not allow apostrophes. Some more examples: segment('smallandinsignificant') # That looks good. What about: segment('largeandinsignificantnumbers') # That's not how I would segment it. Let's look at the probabilities: (Pwords(['large', 'and', 'insignificant']), Pwords(['large', 'and', 'in', 'significant'])) # The probabilities are close, but the segmentation with fewer words is preferred. The bag of words model does not know that "small" goes better with "insignificant" and "large" goes better with "significant," because the model assumes all words are independent. # # Summary: # # - Overall, looks pretty good! # - The bag-of-words assumption is a limitation. # # Data: Billions of Words # # Let's move up from millions to [*billions and billions*](https://en.wikipedia.org/wiki/Billions_and_Billions) of words. I happen to have a word count data file available in the format `"word \t count"`. Let's arrange to read it in: def load_counts(lines, sep='\t', fn=str.lower) -> Bag: """Return a Bag initialized from key/count pairs, one on each line.""" bag = Bag() for line in lines: word, count = line.split(sep) bag[fn(word)] += int(count) return bag # We'll make `P1w` be a bag of billions of words (and, as a `Bag`, also a probability function): # + P1w = load_counts(open('count_1w.txt')) len(P1w), sum(P1w.values())/1e9 # - # A third of a million distinct words with a total count of 588 billion tokens. P1w.most_common(20) # The Zipf plot for 588 billion word tokens is similar to the one for a million word tokens: zipf_plot(P1w) # # Model: the Bigram Model # # Even with billions of words of data, we won't have enough information to create a full joint probability model, where # # $P(w_1 \ldots w_n) = \Pi_{i=1}^n P(w_i \mid w_1 \ldots w_{i-1})$ # # Why not? Consider a 30-word sentence. We've probably never seen those words in that exact order before, so we have no information about $P(w_{30} \mid w_1 \ldots w_{29})$. But we are much more likely to have seen the two-word sequences that make up the sentence. We call these two-word sequences **bigrams**. The **bigram model** says that the probability of each word depends on the immediately previous word but is independent of the words that came before that. It is less-wrong than the bag of words model. 
We can write an equation for the probability of a sequence of words (making the assumption that $w_0$ is a special "start" or "separator" symbol that we denote as ``): # # $P(w_1 \ldots w_n) = P(w_1 \mid \mbox{}) \times P(w_2 \mid w_1) \times \ldots P(w_n \mid w_{n-1}) = \Pi_{i=1}^{n} P(w_i \mid w_{i-1})$ # # The conditional probability of each word given the previous word is given by: # # $P(w_i \mid w_{i-1}) = \frac{P(w_{i-1}w_i)}{P(w_{i-1})} $ # # That is, the bigram probability of the word and the previous word, divided by the unigram probability of the previous word. # # The bigram model can be thought of as a "multiple bags of words" model, where there are multiple bags, one for each distinct word, and a special bag marked ``. (We assume that each sentence boundary in the text is also annotated with a ``.) To build a model, take a text, # cut it up into slips of paper with two words on them, put each slip into the bag labelled with the first word on the slip. To generate language from the model, first pick a slip from the bag labeled `` and output the second word on the slip. Now pick a slip from the bag labeled with that second word, and output the second word on *that* slip. Continue in this manner. # # # The file `count_2w.txt` has bigram data in the form `"word1 word2 \t count"`. We'll load this into the bag `P2w`. P2w = load_counts(open('count_2w.txt')) len(P2w), sum(P2w.values())/1e9 # Over a quarter million distinct two-word pairs, with a total count of 225 billion. Let's see the most common two-word sequences: P2w.most_common(20) # We'll use `Pwords2` for the probability of a word sequence in the bigram model, and `cPword` for the conditional probability of a word given the previous word. # # But we run into a problem: in `cPword` it is natural to divide by the probability of the previous word, but we don't want to divide by zero in the case where we haven't seen that word before. So we will use a **backoff model** that says if we don't have counts for both words, we'll fall back to the unigram probability of the word (ignoring the previous word). # + def Pwords2(words, prev='') -> float: """The probability of a sequence of words, using bigram data, given previous word.""" return Π(cPword(w, (prev if (i == 0) else words[i-1]) ) for (i, w) in enumerate(words)) def cPword(word, prev) -> float: """Conditional probability of word, given previous word.""" bigram = prev + ' ' + word if P2w(bigram) > 0 and P1w(prev) > 0: return P2w(bigram) / P1w(prev) else: # Average the back-off value and zero. return P1w(word) # - P2w('of the') P2w('the of') # We'll look at all 24 permutations of the words in "this is a test" and compute the probability for each under the bigram model. We see that "this is a test" is the most probable permutation, and "this a is test" is the least probable, almost a million times less probable: sorted((Pwords2(t), sentence(t)) for t in permutations(tokens('this is a test')))[::-1] # Under the bag of words model, all permutations have the same probability (except for roundoff error in the least-significant digit): sorted((Pwords(t), t) for t in permutations(tokens('this is a test')))[::-1] # # Task: Segmentation with a Bigram Model # Now let's re-do segmentation using a bigram model. To make `segment2`, we copy `segment`, and make sure to pass around the previous token, and to evaluate probabilities with `Pwords2` instead of `Pwords`. # + @lru_cache(None) def segment2(text, prev=''): "Return best segmentation of text; use bigram data." 
if not text: return [] else: candidates = ([first] + segment2(rest, first) for (first, rest) in splits(text, 1)) return max(candidates, key=lambda words: Pwords2(words, prev)) def splits(text, start=0, end=20) -> Tuple[str, str]: """Return a list of all (first, rest) pairs; start <= len(first) <= L.""" return [(text[:i], text[i:]) for i in range(start, min(len(text), end)+1)] # - segment2('choosespain') segment2('speedofart') # For the following example, both `segment` and `segment2` perform perfectly: tolkein = 'adrybaresandyholewithnothinginittositdownonortoeat' sentence(segment(tolkein)), sentence(segment2(tolkein)) # But for the next example, `segment` gets three words wrong, while `segment2` only misses one word, `unregarded`, which is not in `P1w`. adams = ('faroutintheunchartedbackwatersoftheunfashionableendofthewesternspiral' + 'armofthegalaxyliesasmallunregardedyellowsun') sentence(segment(adams)) sentence(segment2(adams)) 'unregarded' in P1w # Conclusion? The bigram model is a little better, but these examples haven't demonstrated a large difference. Even with hundreds of billions of words as data, we still don't have a complete model of English words. # # Theory: Evaluation # # So far, we've got an intuitive feel for how this all works. But we don't have any solid metrics that quantify the results. Without metrics, we can't say if we are doing well, nor if a change is an improvement. In general, # when developing a program that relies on data to help make # predictions, it is good practice to divide your data into three sets: #

#   1. Training set: the data used to create our spelling model; this was the big.txt file.
#
#   2. Development set: a set of input/output pairs that we can use to rank the performance of our program as we are developing it.
#
#   3. Test set: another set of input/output pairs that we use to rank our program after we are done developing it. The development set can't be used for this purpose—once the programmer has looked at the development test it is tainted, because the programmer might modify the program just to pass the development test. That's why we need a separate test set that is only looked at after development is done.
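# As a minimal, self-contained sketch (not part of the original workflow; the `examples` list and the 80/10/10 proportions below are invented for illustration), here is one way such a three-way split might look:

# +
import random

examples = [('input-%d' % i, 'output-%d' % i) for i in range(100)]  # hypothetical labeled pairs
random.seed(0)
random.shuffle(examples)

n = len(examples)
train_set = examples[:int(0.8 * n)]                # used to build the model
dev_set = examples[int(0.8 * n):int(0.9 * n)]      # used to tune while developing
test_set = examples[int(0.9 * n):]                 # looked at only after development is done

len(train_set), len(dev_set), len(test_set)
# -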
# # For this program, the training data is the word frequency BAG, the development set is the examples like `"choosespain"` that we have been playing with, and now we need a test set. # + def test_segmenter(segmenter, tests): "Try segmenter on tests; report failures; return fraction correct." return sum([test_one_segment(segmenter, test) for test in tests]), len(tests) def test_one_segment(segmenter, test): words = tokens(test) result = segmenter(cat(words)) correct = (result == words) if not correct: print('expected:', sentence(words)) print(' got:', sentence(result)) return correct cat = ''.join proverbs = ("""A little knowledge is a dangerous thing A man who is his own lawyer has a fool for his client All work and no play makes Jack a dull boy Better to remain silent and be thought a fool that to speak and remove all doubt; Do unto others as you would have them do to you Early to bed and early to rise, makes a man healthy, wealthy and wise Fools rush in where angels fear to tread Genius is one percent inspiration, ninety-nine percent perspiration If you lie down with dogs, you will get up with fleas Lightning never strikes twice in the same place Power corrupts; absolute power corrupts absolutely Here today, gone tomorrow See no evil, hear no evil, speak no evil Sticks and stones may break my bones, but words will never hurt me Take care of the pence and the pounds will take care of themselves Take care of the sense and the sounds will take care of themselves The bigger they are, the harder they fall The grass is always greener on the other side of the fence The more things change, the more they stay the same Those who do not learn from history are doomed to repeat it""" .splitlines()) # - test_segmenter(segment, proverbs) test_segmenter(segment2, proverbs) # This confirms that both segmenters are good, and that `segment2` is slightly better, getting 20 out of 20 examples right, compared to 18 out of 20 for `segment`. There is much more that can be done in terms of the variety of tests, and in measuring statistical significance. # # Theory and Practice: Smoothing # # Here are some more test cases: # # + tests = ['this is the oligonucleotide test', 'this is the neverbeforeseen test', 'this is the zqbhjhsyefvvjqc test'] {test: Pwords2(tokens(test)) for test in tests} # - # The issue here is the finality of a probability of zero. Out of the three 15-letter words, it turns out that "oligonucleotide" is in the dictionary, but if it hadn't been, if somehow our corpus of words had missed it, then the probability of that whole phrase would have been zero. It seems that is too strict; there must be some "real" words that are not in our dictionary, so we shouldn't give them probability zero. There is also a question of likelyhood of being a "real" word. It does seem that "neverbeforeseen" is more English-like than "zqbhjhsyefvvjqc", and so perhaps should have a higher probability. # # We can address this by assigning a non-zero probability to words that are not in the dictionary. This is even more important when it comes to multi-word phrases (such as bigrams), because many legitimate bigrams will not appear in the training data. # # We can think of our probability model as being overly spiky; it has a spike of probability mass wherever a word or phrase occurs in the corpus. What we would like to do is *smooth* over those spikes so that we get a model that does not depend so much on the details of our corpus. 
# For example, Laplace was asked for an estimate of the probability of the sun rising tomorrow. From past data, he knows that the sun has risen $n/n$ times for the last *n* days (with some uncertainty about the value of *n*), so the maximum likelihood estimate is a probability of 1. But Laplace wanted to balance the observed data against the possibility that tomorrow either the sun will rise or it won't, so he came up with an estimate of $(n + 1) / (n + 2)$.
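# To see what this estimate looks like numerically (the values of *n* below are arbitrary, chosen only for illustration), compare Laplace's $(n + 1)/(n + 2)$ with the maximum likelihood estimate of 1:

for n in (1, 10, 365, 1_000_000):
    print(n, (n + 1) / (n + 2))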
#
# > *What we know is little, and what we are ignorant of is immense.* (Laplace, 1749-1827)
#
# # # A generalization of Laplace's approach is called [additive smoothing](https://en.wikipedia.org/wiki/Additive_smoothing): we add a **pseudocount** (which does not have to be 1) to each of the outcomes we have seen so far, plus one more for the unseen outcomes. We will redefine `Bag` and `P1w` to use additive smoothing. This time, I can't have the `ProbabilityFunction` as a mixin, because I need to supply the psuedocount as an argument when the bag is constructed, and then use it within the probability function. Thus, I will redefine `Bag` as follows: # class Bag(Counter): """A bag of words with a probability function using additive smoothing.""" def __init__(self, elements=(), pseudocount=1): self.update(elements) self.pseudocount = pseudocount def __call__(self, outcome): """The probability of `outcome`, smoothed by adding pseudocount to each outcome.""" if not hasattr(self, 'denominator'): self.denominator = sum(self.values()) + self.pseudocount * (len(self) + 1) return (self[outcome] + self.pseudocount) / self.denominator P1w = Bag(P1w) # Now an unknown word has a non-zero probability, as do sequences that contain unknown words, but known words still have higher probabilities: P1w('neverbeforeseen') {test: Pwords2(tokens(test)) for test in tests} # There are many more advanced ways to do smoothing, but we won't cover them here. # # There is one more issue to contend with that we will mention but not fix: the smallest positive 64-bit floating point number is about $10^{-323}$. That means that if we are trying to compute the probability of a long sequence of words, we will eventually reach a point where we get **floating point underflow** and the probability is incorrectly reported as zero. We see that this happens somewhere around a 100 word sequence, depending on the exact words: {n: Pwords(sample(WORDS, n)) for n in range(70, 151, 10)} # This problem can be addressed by adding logarithms rather than multiplying probabilities, or by re-scaling numbers when they get too small. # # Task: Secret Codes # # Let's tackle one more task: decoding secret messages. We'll start with the simplest of codes, a rotation cipher, sometimes called a shift cipher or a Caesar cipher (because this was state-of-the-art crypotgraphy in 100 BC). First, a method to encode: # + def rot(msg, n=13): "Encode a message with a rotation (Caesar) cipher." return encode(msg, alphabet[n:]+alphabet[:n]) def encode(msg, key): "Encode a message with a substitution cipher." table = str.maketrans(upperlower(alphabet), upperlower(key)) return msg.translate(table) def upperlower(text): return text.upper() + text.lower() # + msg = 'This is a secret message.' rot(msg, 1) # - rot(msg, 13) rot(rot(msg, 13), 13) # Decoding a Caesar cipher message is easy: try all 26 candidates, and find the one with the maximum `Pwords`: def decode_rot(secret): "Decode a secret message that has been encoded with a rotation cipher." candidates = [tokens(rot(secret, i)) for i in range(26)] return sentence(max(candidates, key=Pwords)) # + secret = rot(msg, 17) decode_rot(secret) # - # Let's make it a bit harder. When the secret message contains separate words, it is too easy to decode by guessing that the one-letter words are most likely "I" or "a". So what if the encode routine squished all the letters together: def squish_rot(msg, n=13): "Encode a message with a rotation (Caesar) cipher, keeping letters only." return encode(cat(tokens(msg)), alphabet[n:]+alphabet[:n]) secret = squish_rot('Who knows the answer this time? Anyone? Anyone? 
Bueller?', 19) secret # That looks harder to decode. A decoder will have to try all rotations, then segment each rotation, and find the candidate with the best `Pwords`: def decode_rot(secret): """Decode a secret message that has been encoded with a rotation cipher, and which has had all the non-letters squeezed out.""" candidates = [segment(rot(secret, i)) for i in range(26)] return max(candidates, key=lambda msg: Pwords(msg)) sentence(decode_rot(secret)) # Almost right, but didn't quite get the proper name. # # What about a general substitution cipher? The problem is that there are $26! \approx 10^{26}$ substitution ciphers, and we can't enumerate all of them. We would need to search through the space: initially make some guess at a substitution, then swap two letters; if that looks better keep going, if not try something else. This approach solves most substitution cipher problems, although it can take a few minutes on a message of length 100 words or so. # # # What's Next? # # What to do next? Here are some options: # # - **Spelling correction**: Use bigram or trigram context; make a model of spelling errors/edit distance; go beyond edit distance 2; make it more efficient # - **Evaluation**: Make a serious test suite; search for best parameters (e.g. $c_1, c_2, c_3$) # - **Smoothing**: Implement Kneser-Ney and/or Interpolation; do letter *n*-gram-based smoothing # - **Secret Codes**: Implement a search over substitution ciphers # - **Classification**: Given a corpus of texts, each with a classification label, write a classifier that will take a new text and return a label. Examples: spam/no-spam; favorable/unfavorable; what author am I most like; reading level. # - **Clustering**: Group data by similarity. Find synonyms/related words. # - **Parsing**: Representing nested structures rather than linear sequences of words. relations between parts of the structure. Implicit missing bits. Inducing a grammar. # - **Meaning**: What semantic relations are meant by the syntactic relations? # - **Translation**: Using examples to transform one language into another. # - **Question Answering**: Using examples to transfer a question into an answer, either by retrieving a passage, or by synthesizing one. # - **Speech**: Dealing with analog audio signals rather than discrete sequences of characters. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np import sklearn, sys import pandas as pd import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error # ## Versions # + libraries = (('Matplotlib', matplotlib), ('Numpy', np), ('Pandas', pd), ('Seaborn', sns), ('Sklearn', sklearn)) print("Python version:", sys.version, '\n') for lib in libraries: print('{0} version: {1}'.format(lib[0], lib[1].__version__)) # - # ## Introduction # Building a model is simple but assessing your model and tuning it require care and proper technique. Unfortunately, this is a place where novice modelers make disastrous mistakes. So while this topic is not as exciting as a shiny new algorithm, it is nonetheless extraordinarily important. You must know this inside and out. There's no two ways about it. 
# # Let's motivate the discussion with a real-world example. # # The [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml/index.php) contains many wonderful datasets that you can download and experiment on. Datasets usually come with at least some description. Furthermore, the datasets are grouped according to a number of attributes like Classification, Regression, Clustering or Multivariate, Time Series, Text. It really is a great resource to hone your modeling skills. # # Anyway, for the purposes of this demonstration, we'll use the [Forest Fires](http://archive.ics.uci.edu/ml/datasets/Forest+Fires) dataset. # # The task is this: predict the burned area of forest fires, in the northeast region of Portugal, by using meteorological and other data. # # A bit of information about the features: # 1. X - x-axis spatial coordinate within the Montesinho park map: 1 to 9 # 2. Y - y-axis spatial coordinate within the Montesinho park map: 2 to 9 # 3. month - month of the year: 'jan' to 'dec' # 4. day - day of the week: 'mon' to 'sun' # 5. FFMC - FFMC index from the FWI system: 18.7 to 96.20 # 6. DMC - DMC index from the FWI system: 1.1 to 291.3 # 7. DC - DC index from the FWI system: 7.9 to 860.6 # 8. ISI - ISI index from the FWI system: 0.0 to 56.10 # 9. temp - temperature in Celsius degrees: 2.2 to 33.30 # 10. RH - relative humidity in %: 15.0 to 100 # 11. wind - wind speed in km/h: 0.40 to 9.40 # 12. rain - outside rain in mm/m2 : 0.0 to 6.4 # 13. area - the burned area of the forest (in ha): 0.00 to 1090.84 # # Ok, let's get the data, clean it up, and then build a Linear Regression model. # #### Get Data url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/forest-fires/forestfires.csv' df = pd.read_csv(url) # #### View Data df.head() # Right away we see a problem. The columns *month* and *day* are coded as text, not numbers. This is a problem for our Linear Regression model. We need to convert those categorical features. There are many ways to do this but for the purposes of Linear Regression we are going to do something called one-hot encoding. What this does is take a single categorical feature like day, which consists of Sunday through Saturday, and splits it into numerous indicator features. Specifically, a column will be created for each day of the week in this example. So one column will parse into seven. # # You may be wondering how many columns result from one-hot encoding. It's simply the number of categories within a feature. Of course you can one-hot encode multiple categorical features. Just keep in mind that your data matrix can expand wide very quickly using this schema. # #### Clean Data # Pandas has a very nice method called *get_dummies* that will one-hot encode the categorical features for us automatically. It'll even delete the original feature, which is a nice touch. Here we go! df = pd.get_dummies(df) # check column names df.columns # check dataframe df.head() # Something should worry you very much at this point. If it doesn't, go back and look at the columns again. Think it through before I give you the answer. There's a crucial problem we need to address. Can you spot it? # **Spoiler Alert:** I'm going to tell you what's wrong. I hope you spotted it yourself. # # Think about what's happening for the day of week indicator features. We have a column for each day of the week. They can only take values of 0 or 1, indicating if a fire occurred on that given day. The features are mutually exclusive meaning there can only be a 1 in one of the columns. 
All the rest have to be 0 because it's impossible for the same day to be both Monday and Thursday, for example. # # Ok, so what's the problem? # # Think about the coding. Common convention states a week starts on Sunday. So we have features for Sunday through Saturday. But I don't need an indicator feature for Saturday. That's already encoded implicity when Sunday=0, Monday=0, Tuesday=0, Wednesday=0, Thursday=0, and Friday=0. If you're up on your linear algebra you realize that adding Saturday causes linear dependence, which is a no-no for Linear Regression which assumes independent features. Therefore, we **must** drop one column of each one-hot encoded feature. In this case we need to drop one column from month and one from day of week. Then we'll be in good shape. df.drop(labels=['month_dec', 'day_sat'], axis=1, inplace=True) df.columns df.head() # check range of values df.max() - df.min() # Some of the variables have relatively high variance, like *DMC* and *DC*, whereas others are constrained between 0 and 1, like day of week. Linear Regression can adjust with the magnitude of its coefficients, but it's really good practice to normalize or standardize first, especially when we use regularization or Gradient Descent. We won't normalize/standardize here. You'll understand why shortly. # # For now let's pretend we're in good shape. Let's model. # #### Fit Model data = df.copy() target = data.pop('area') lr = LinearRegression(fit_intercept=True) lr.fit(data, target) # #### How'd we do? # Let's look at $R^2$ and Root Mean Squared Error (RMSE) to see how our model performed. # R^2 lr.score(data, target) predictions = lr.predict(data) mse = mean_squared_error(target, predictions) rmse = np.sqrt(mse) rmse # #### Interpretation # Right away we can see the $R^2$ is abysmal. It's really not too surprising because if you look at the documentation on UCI you'll notice that the target variable is highly skewed with several high leverage points. This is worthy of investigation and could yield substantial performance gains. Review SSE, SST, and $R^2$ if you're unclear as to why. # # The RMSE is a measure of how far off on average our model is from ground truth. I'm using the term *average* loosely here because it's really the average square root of the squared residuals. Yikes, that's a mouthful. Said another way, it's one way to measure the magnitude of errors, though it's not the only one. Mean Absolute Error is another. They will give you different answers, so you should ponder on that. # # Here's the thing: our model is rubbish no matter what. We could have had an $R^2$ approaching 1 or an RMSE close to 0 but that's totally and completely meaningless in the real-world. We have no idea how this model would generalize to data it hasn't seen. We merely have a measure of how well it's doing on the data it sees. This is a major problem for predictive analytics. You can have what seems like an incredible model but then you unleash it in the wild and it performs horribly. Understanding why this is the case is absolutely essential. # ## Why This Model Sucks # In the most extreme case, I can create a model that is really a lookup table. You give me an input and I give you the output. Another way to say this is take a model and let it memorize the data it can see. The result: an $R^2$ of 1 and an RMSE of 0. # # Clearly nobody thinks that's a great model. The point of building a model is to predict something interesting. You can't do that with a lookup table. 
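# To make the "lookup table" point concrete, here is a toy sketch (the data below is invented purely for illustration): a model that simply memorizes its training pairs is perfect on inputs it has seen and has nothing sensible to say about anything else.

# +
seen_x = [1, 2, 3, 4]
seen_y = [10, 20, 30, 40]
memorized = dict(zip(seen_x, seen_y))

def lookup_model(x):
    # Perfect recall on memorized inputs; no prediction at all for unseen ones.
    return memorized.get(x)

[lookup_model(x) for x in seen_x], lookup_model(5)
# -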
Yet, that's exactly how we tried to assess our Linear Regression model above - give it some data and then see how well it does predicting that SAME data. That is why it's rubbish. PLEASE DO NOT EVER DO THIS! # # What we've done is look at something called *in-sample error* (ISE). It is a useful metric but only tells one half of the story. # ## Out-of-Sample Error # The other half of the story is something called *out-of-sample error* which I'll denote henceforth at OSE. Simply put, OSE is how well the model performs on data it's never seen. # # But where do we get this data? # # Easy, holdout some data at the beginning. Don't let the model see it until the very end. Then make predictions and see how well it performs. This gives you an indication as to how well your model will do in the wild. # # This process we just discussed is called *Train/Test split*. You determine how much data to holdout at the beginning, split the data into a training dataset and a test dataset, model on the training set, and then calculate ISE and OSE. # # Let's see how to do this with Sklearn. # ## Train/Test Split X_train, X_test, y_train, y_test = train_test_split(data, target, shuffle=True, test_size=0.5, random_state=49) # show dataset sizes data_list = (('X_train', X_train), ('X_test ', X_test), ('y_train', y_train), ('y_test ', y_test)) for item in data_list: print('{}: {}'.format(item[0], len(item[1]))) # ## Fit Model on Training Data lr_split = LinearRegression(fit_intercept=True) lr_split.fit(X_train, y_train) # ## Functions to Calculate ISE and OSE def calc_ISE(X_train, y_train, model): '''returns the in-sample R^2 and RMSE; assumes model already fit.''' predictions = model.predict(X_train) mse = mean_squared_error(y_train, predictions) rmse = np.sqrt(mse) return model.score(X_train, y_train), rmse def calc_OSE(X_test, y_test, model): '''returns the out-of-sample R^2 and RMSE; assumes model already fit.''' predictions = model.predict(X_test) mse = mean_squared_error(y_test, predictions) rmse = np.sqrt(mse) return model.score(X_test, y_test), rmse is_r2, ise = calc_ISE(X_train, y_train, lr_split) os_r2, ose = calc_OSE(X_test, y_test, lr_split) # show dataset sizes data_list = (('R^_in', is_r2), ('R^2_out', os_r2), ('ISE', ise), ('OSE', ose)) for item in data_list: print('{:10}: {}'.format(item[0], item[1])) # #### Interpretation # We can see that the in-sample $R^2$ is pretty low but what's interesting here is that the out-of-sample $R^2$ is lower. In fact, it's slightly below zero. Even more telling is the RMSE values. The RMSE for the data the model saw (ISE) is significantly lower (by a factor of 3) than the RMSE for the data the model has never seen (OSE). In machine learning speak our model is *overfitting* meaning it's doing a much better job on the data it has seen but does not generalize well. The greater the gap between between in-sample and out-of-sample, the greater the overfitting. You can equate overfitting with "memorizing" the data. It becomes more and more like creating a lookup table. # # So the big takeaway here is that you *must* calculate ISE and OSE to get an accurate picture as to how your model is doing. This necessiates holding out some data at the beginning so you can test your model on data it's never seen. I showed you how to do that with sklearn's *train_test_split*. # # Now you may be wondering how to address overfitting. To understand that, we need to discuss something called the **Bias-Variance Tradeoff**, which is a topic for another day. 
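# As a small follow-up (reusing the `ise` and `ose` values computed above), one rough, informal way to quantify the overfitting just described is the gap between out-of-sample and in-sample RMSE:

print('RMSE gap (OSE - ISE): {:.2f}'.format(ose - ise))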
# # Before we wrap up, there's one more subtle item we need to address: the downside of train/test split. # ## Downside of Train/Test Split # I just told you that train/test split gives you both sides of the story - how well your model performs on data it's seen and data it hasn't. That's true to an extent but there's something subtle you need to be aware of. Let me show you be example before explaining it. Let's try a few different train/test splits and check ISE and OSE values. # #### Multiple Train/Test Splits # create array of random_state values random_states = np.random.randint(1, 100, size=5) random_states for random_state in random_states: # split data according to random state X_train, X_test, y_train, y_test = train_test_split(data, target, shuffle=True, test_size=0.5, random_state=random_state) # instantiate mmodel lr = LinearRegression(fit_intercept=True) # fit model lr.fit(X_train, y_train) # capture key metrics is_r2, ise = calc_ISE(X_train, y_train, lr) os_r2, ose = calc_OSE(X_test, y_test, lr) # round values is_r2, os_r2 = round(is_r2, 4), round(os_r2, 4) ise, ose = round(ise, 4), round(ose, 4) # print key metrics print('Random State: {}'.format(random_state)) print('IS_R^2: {} | IS_RMSE: {}'.format(is_r2, ise)) print('OS_R^2: {} | OS_RMSE: {}'.format(os_r2, ose)) print('-'*34) # #### Takeaways # * $R^2$ is always higher in-sample as opposed to out-of-sample # * RMSE show great variability in-sample vs out-of-sample # #### Discussion # It's no surprise that $R^2$ is higher in-sample. The surprise here is RMSE. What's particularly interesting is that sometimes ISE is higher than OSE and sometimes it's the other way around. This is a small dataset so the skewed distribution in the target variable is having major consequences. A much larger dataset would still be affected but to a much smaller degree. With that in mind, you'll almost always see OSE's that are higher than ISE's. If not, there's something funky going on in your data like we have here. It's a good red flag to keep in mind when doing EDA. Anyway, we get very different results depending on how we split the data. In this case, I didn't change the proportion of data that's selected, merely how it's split. So how you split can dramatically affect your model. In some cases it generalizes well and other times it doesn't. # # An obvious question you're probably asking is how do I best split my data? Trial and error? # # No, there's a better method for small to medium-sized datasets and it's called Cross-Validation. We'll pickup that discussion next time. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + import numpy as np import tensorflow as tf from sklearn.linear_model import LogisticRegression from sklearn.metrics import roc_auc_score from sklearn.preprocessing import LabelEncoder import json import pickle import pandas as pd from sklearn.externals import joblib # %pylab inline # - # # Get data # ! mkdir ./tmp # ! wget http://files.grouplens.org/datasets/movielens/ml-100k.zip -O ./ml-100k.zip # ! unzip -o ./ml-100k.zip # ! cat ./ml-100k/README #remove broken symbols # ! iconv -f utf-8 -t utf-8 -c ml-100k/u.item > ml-100k/u.item2 # # user part # ! 
head -3 ./ml-100k/u.user df_user = pd.read_csv('./ml-100k/u.user', sep='|', names='user id | age | gender | occupation | zip code'.split(' | ')) df_user['living_area'] = df_user['zip code'].map(lambda x: x[0]) del df_user['zip code'] df_user.head() res = [] for age in list(map(str, df_user['age'].values)): res.append(int(round(int(age), -1))) df_user['age'] = res for f in ['age', 'gender', 'occupation', 'living_area']: print(f) print(df_user[f].nunique()) print('----') # + features_list = ['age', 'gender', 'occupation', 'living_area'] s_users = [] le = LabelEncoder() users_mat = [] for feature in features_list: col = le.fit_transform(df_user[feature].values) users_mat.append(col) s_users.append(len(le.classes_)) users_mat = np.array(users_mat).T print(users_mat.shape) # - users = {} for i, id in enumerate(df_user['user id'].values): users[id] = users_mat[i] # # item part # ! head -3 ./ml-100k/u.item2 df_item = pd.read_csv('./ml-100k/u.item2', sep='|', names=(['id', 'title', 'release_date', 'video_release_date', 'url'] + ['g{}'.format(i) for i in range(19)]) ) df_item['year'] = df_item['release_date'].map(lambda x: str(x).split('-')[-1]) res = [] for age in list(map(str, df_item['year'].values)): if age == 'nan': age='1600' res.append(int(round(int(age), -1))) df_item['decade'] = res for f in ['decade']: print(f) print(df_item[f].nunique()) print('----') # + features_list = ['decade'] + ['g{}'.format(i) for i in range(19)] s_item = [] items_mat = [] for feature in features_list: col = le.fit_transform(df_item[feature].values) items_mat.append(col) s_item.append(len(le.classes_)) items_mat = np.array(items_mat).T print(items_mat.shape) # - items = {} for i, id in enumerate(df_item['id'].values): items[id] = items_mat[i] # # ratings part # ! head -3 ./ml-100k/u.data df_data = pd.read_csv('./ml-100k/u.data', sep='\t', names='user id | item id | rating | timestamp'.split(' | ') ) df_data['target'] = df_data['rating'] > 4.5 data = df_data[['user id', 'item id']].as_matrix() target = df_data['target'].values print('Mean target: {}'.format(np.mean(target==True))) # split to pos/neg samples positive_idx = np.where(target==True)[0] negative_idx = np.where(target!=True)[0] from sklearn.cross_validation import train_test_split pos_idx_tr, pos_idx_te = train_test_split(positive_idx, random_state=42, test_size=0.5) neg_idx_tr, neg_idx_te = train_test_split(negative_idx, random_state=42, train_size=len(pos_idx_tr)) def build_matrix(pos_idx, neg_idx): rows_user = [] rows_item = [] rows_pair = [] for idx in list(pos_idx) + list(neg_idx): u, i = data[idx] # values should be 1-based rows_user.append(users[u] + 1) rows_item.append(items[i] + 1) # u and i already 1-based rows_pair.append(data[idx]) X = np.hstack(map(np.array, [rows_user, rows_pair, rows_item])) Y = np.zeros(len(pos_idx) + len(neg_idx)) Y[:len(pos_idx)] = 1 perm = np.random.permutation(X.shape[0]) return X[perm], Y[perm] # + n_users = 943 n_items = 1682 X_tr, Y_tr = build_matrix(pos_idx_tr, neg_idx_tr) X_te, Y_te = build_matrix(pos_idx_te, neg_idx_te) # sizes of categorical features s_features = s_users + [n_users, n_items] + s_item # - print('X_tr shape: ', X_tr.shape) print('X_te shape: ', X_te.shape) print('Num of features: ', len(s_features)) print('Size of feature space: ', np.prod(s_features)) print('Sizes of features: ', s_features) # dump to disk joblib.dump((X_tr, Y_tr, s_features), './tmp/train_categotical.jl') joblib.dump((X_te, Y_te, s_features), './tmp/test_categorical.jl') # --- # jupyter: # jupytext: # text_representation: # 
extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import random # + #WrightFisherSimulation popSize=100 nGen=100 genomeLength=100 muRates={ 'A': {'C': 0.1,'G': 0.1,'T': 0.1}, 'C':{'A': 0.1,'G': 0.1,'T': 0.1} , 'G':{'A': 0.1,'C': 0.1,'T': 0.1}, 'T':{'A': 0.1,'C': 0.1,'G': 0.1}} def mutate(popDict,muRates): muPopDict=popDict for j in list(popDict.keys()): template=popDict[j] position=random.choice(range(genomeLength)) mutation=random.choice(list(muRates[template[position]].keys())) if random.random()", "8ed74b1d3482486785f72546a4203efe", "", "", "", "ac28f855322242258e947cafe0cdf46f", "", "dfab5adca25d4f399eb8789a1b48c3e7", "0a875081b61e401081ba106d0976cb3e", "9a552e7f9f864203a770cbd53600f9ed", "86dab5d84e9e47ee825aaabe39bcedc6", "8dddf754681d4e6ba209e9e705ccc1ad", "d36e06be25154c2fbd98c951dfa28460", "81840fd518154627aaed69b4dbc27a7d"]} id="4K6Ah5u2cZa9" outputId="2f541504-cbed-47d8-950e-de8a7b29196a" import stanza # Download the stanza model or you will get this error: stanza.download('en') # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="rD_VNj-Ea8cD" outputId="c7f5f6bd-ead6-4df8-dfe3-1c96a377baa4" import binarytree as bt # pip install binarytree from nltk.tree import Tree from nltk.draw import TreeWidget from PIL import Image, ImageDraw from nltk.draw.util import CanvasFrame from IPython.display import Image, display from wordnet import find_relation, get_word_sets # internally defined from Udep2Mono.polarization import PolarizationPipeline # internally defined Udep2Mono package. # + id="wxA0lKCPZp5Q" nounModifiers = {"det", "nummod", "amod", "obl:tmod", "acl:relcl", "nmod", "case", "nmod:pass", "acl", "Prime","cc"} verbModifiers = {"advmod", "obl","xcomp","advcl","mark","aux"} nounCategories = {"compound"} verbs = {"VBZ", "VBP", "VBD", "VBG"} modified = {"NN", "PRP", "JJ", "VB"}.union(verbs) modifiers = nounModifiers.union(verbModifiers) offFocus = {"expl"} contents = {"nsubj", "obj", "cop", "compound", "conj", "nsubj:pass"} cont_npos = {"nsubj": 'nn', "obj": 'nn', "cop": 'vbz', "verb": 'vbz'} mark_toProp = {"+": {"hyponym", "synonym"}, "-": {"hypernym", "synonym"}, "=": {"synonym"}} clause_prop = {"which", "that", "who"} be_verbs = {"is", "am", "are", "be"} directions = {0: "lexical", 1: "phrasal", 2: "syntatic_variation", 3: "implicative"} arrows = { "+": "\u2191", "-": "\u2193", "=": "=", "0": "" } def annotation2string(annotation): '''self contained function, no change needed.''' annotated = list(annotation['annotated'].popkeys()) def compose_token(word): if '-' in word[0]: orig = word[0].split('-') return ' '.join([x + arrows[word[2]] for x in orig]) else: return word[0] + arrows[word[2]] annotated_sent = ' '.join([compose_token(x) for x in annotated]) return annotated_sent class Unode: def __init__(self, prop, word, npos, mark): self.nexts = dict() self.prop = prop self.isRoot = False self.nexts["all"] = set() self.word = word self.npos = npos self.mark = mark self.phrases = set() self.pair = -1 self.pairParts = dict() self.start = -1 self.end = -1 self.nodes = set() self.cc = None def add_Unode(self, node): # print(node.prop) if(self.isRoot): self.nexts[node.prop].add(node) else: self.nexts["all"].add(node) def addNode(self, node): self.nodes.add(node) def getText(self): if(self.isRoot): output = "" for cont in ["nsubj", "verb", "obj"]: for ele in self.nexts[cont]: output += ele.getText() output += " " return output.strip() else: 
if(self.nexts["all"] == set()): return self.word output = self.word for element in self.nexts["all"]: if(element.prop == "amod"): output = " " + output output = element.getText() + output else: output += " " output += element.getText() return output def get_inText(self, index): connected_info = "" if(self.isRoot): for key in self.nexts.keys(): if(key != "all"): print(key) for keyItem in self.nexts[key]: connected_info += (key + ": " + keyItem.get_inText(index + 1) + " ") return "{ " + connected_info + "}" else: for node in self.nexts["all"]: if(node != None): # print("111") connected_info += node.get_inText(index + 1) return "{ The " + str(index) + " layer" + ": " + self.word + connected_info + "}" def get_magicText(self): connected_info = "" if(self.isRoot): for key in self.nexts.keys(): component = "" if(key != "all"): print(key) for keyItem in self.nexts[key]: component += " (" + keyItem.get_magicText() + ")" component = "(" + key + " " + component + ")" connected_info += component return "(" + connected_info + ")" else: for node in self.nexts["all"]: if(node != None): # print("111") connected_info += "(" + node.get_magicText() + ")" if(self.nexts["all"] == set()): if(self.pair != -1): return self.word + str(self.pair) return self.word if(self.pair != -1): return self.word + str(self.pair) + connected_info return self.word + connected_info def addNum(self, num): self.pair = num def addPart(self, newNode, type1): if(type1 not in self.pairParts): self.pairParts[type1] = set() self.pairParts[type1].add(newNode) def getParts(self): # return verb-obj subParts now return self.pairParts["obj"] def addCC(self,node): self.cc = node class PairCounter: def __init__(self, initial=0): self.nsubj = initial self.obj = initial def incrementN(self): self.nsubj += 1 def incrementO(self): self.obj += 1 class Ugraph: def __init__(self, rootNode): self.root = rootNode self.root.isRoot = True self.root.nexts.pop("all", None) for main in {"nsubj", "obj", "verb"}: self.root.nexts[main] = set() self.nodes = set() self.contentSet = set() self.chunks = set() self.Pairs = dict() self.Pairs["nsubj"] = dict() self.Pairs["obj"] = dict() def add_node(self, node): self.nodes.add(node) self.root.addNode(node) def add_edge(self, node1, node2): if(node1.isRoot): self.contentSet.add(node2.word) node1.add_Unode(node2) def contains(self, word_assigned): return word_assigned in self.contentSet def get_magicText(self): return self.root.get_magicText() def addPair(self, newNode, num, type1): newNode.addNum(num) if(num not in self.Pairs[type1]): self.Pairs[type1][num] = [None] if(newNode.prop == "verb"): self.Pairs[type1][num][0] = newNode else: self.Pairs[type1][num].append(newNode) if(len(self.Pairs[type1][num]) > 1 and self.Pairs[type1][num][0] is not None): if(newNode.prop == "verb"): self.Pairs[type1][num][0].addPart( self.Pairs[type1][num][-1], "obj") else: self.Pairs[type1][num][0].addPart(newNode, "obj") def jupyter_draw_nltk_tree(self, tree): cf = CanvasFrame() tc = TreeWidget(cf.canvas(), tree) tc['node_font'] = 'arial 14 bold' tc['leaf_font'] = 'arial 14' tc['node_color'] = '#005990' tc['leaf_color'] = '#3F8F57' tc['line_color'] = '#175252' cf.add_widget(tc, 20, 20) cf.print_to_file('../data/tree_img/tree.ps') cf.destroy() os.system( 'magick convert ../data/tree_img/tree.ps ../data/tree_img/tree.png') display(Image(filename='../data/tree_img/tree.png')) def visualize_tree(self, tree): btree = Tree.fromstring(tree.replace('[', '(').replace(']', ')')) self.jupyter_draw_nltk_tree(btree) def printUgraph_inText(self, 
Ugraph): print(Ugraph.root.get_inText(1)) def combine_comp(self, tree, node): if(tree.right == None): node.word = node.word + " " + tree.val node.end = tree.id return else: node.word = node.word + " " + tree.left.val return self.combine_comp(tree.right, node) def mono2Graph_recur(self, sent_tree, G, mods, pos=None, counter=-1): if(sent_tree is None): return else: if(any(list(map(lambda x: sent_tree.val is not None and x in sent_tree.val, list(modifiers))))): if("acl" in sent_tree.val): pipeTemp = GraphPipeline() G_prime = pipeTemp.mono2Graph(sent_tree.left) mods.add(G_prime.root) else: left_result = self.mono2Graph_recur( sent_tree.left, G, set(), sent_tree.val, counter) if(left_result is not None): if(type(left_result) is set): for item_result in left_result: if(item_result is not None): mods.add(item_result) else: mods.add(left_result) return self.mono2Graph_recur(sent_tree.right, G, mods, pos, counter) else: if ((sent_tree.left is None and sent_tree.right is None) or sent_tree.val == "compound"): if(sent_tree.val == 'and'): return if(sent_tree.val == "compound"): newNode = Unode(pos, sent_tree.left.val, sent_tree.pos, sent_tree.mark) newNode.start = sent_tree.left.id self.combine_comp(sent_tree.right, newNode) if(pos in contents or pos == "verb"): G.add_edge(G.root, newNode) if(pos != "nsubj"): G.addPair(newNode, counter.obj, "obj") if(pos == "verb" and sent_tree.parent.val != "cop"): counter.incrementO() for node in mods: if(node.npos == "CC"): newNode.addCC(node) else: G.add_edge(newNode, node) return newNode newNode = Unode(pos, sent_tree.val, sent_tree.pos, sent_tree.mark) newNode.start = sent_tree.id newNode.end = sent_tree.id G.add_node(newNode) if (any(list(map(lambda x: sent_tree.pos is not None and x in sent_tree.pos, list(modified)))) or any(list(map(lambda x: pos is not None and x in pos, list(contents)))) or pos == "verb"): if(pos in contents or pos == "verb"): G.add_edge(G.root, newNode) if(pos != "nsubj"): G.addPair(newNode, counter.obj, "obj") if(pos == "verb" and sent_tree.parent.val != "cop"): counter.incrementO() for node in mods: if(node.npos == "CC"): newNode.addCC(node) else: G.add_edge(newNode, node) return newNode else: mods.add(newNode) return newNode else: if(any(list(map(lambda x: sent_tree.val is not None and x in sent_tree.val, list(contents))))): pos_left = sent_tree.val pos_right = pos if("nsubj" in sent_tree.val): pos_right = "verb" pos_left = sent_tree.val[0:5] if("cop" in sent_tree.val): pos_left = "verb" pos_right = "obj" self.mono2Graph_recur(sent_tree.left, G,set(),pos_left,counter) output = self.mono2Graph_recur(sent_tree.right, G, mods, pos_right,counter) counter.incrementO() return output if('conj' in sent_tree.val): if (any(list(map(lambda x: pos is not None and x in pos, list(modifiers))))): results = set() results.add(self.mono2Graph_recur(sent_tree.left, G, set(), pos,counter)) results.add(self.mono2Graph_recur(sent_tree.right, G, set(), pos,counter)) return results else: self.mono2Graph_recur(sent_tree.left, G, set(), pos,counter) self.mono2Graph_recur(sent_tree.right, G, mods, pos,counter) elif("aux" in sent_tree.val): self.mono2Graph_recur( sent_tree.right, G, mods, "verb", counter) elif("obj" in sent_tree.val and pos != "verb"): right_result = self.mono2Graph_recur( sent_tree.right, G, set(), "Prime", counter) if(right_result is not None): mods.add(right_result) self.mono2Graph_recur( sent_tree.left, G, mods, pos_left, counter) else: self.mono2Graph_recur( sent_tree.left, G, set(), pos_left, counter) self.mono2Graph_recur( sent_tree.right, 
G, mods, pos_right, counter) elif(any(list(map(lambda x: sent_tree.val is not None and x in sent_tree.val, list(offFocus))))): self.mono2Graph_recur( sent_tree.right, G, mods, pos, counter) class GraphPipeline: def __init__(self): self.graph_logs = [] def mono2Graph(self, sent_info): G = Ugraph(Unode("root", "Root", "r00t", "=")) self.graph_logs.append(G) counter = PairCounter() G.mono2Graph_recur(sent_info, G, set(), "verb", counter) return G class Chunk: def __init__(self, node, nodeList): self.node = node self.nodeList = nodeList self.ifVP = False class Chunker: def __init__(self): self.ifGraph = False def insert_byOrder(self, nodeList, totalList): index = 0 for i in range(len(totalList)): if(nodeList[-1].end < totalList[i][0].start): break index += 1 totalList.insert(index, nodeList) return index def check_nodesForChunk(self, nodeList, center, total): size = len(nodeList) splitpos = [0, size] for j in range(size - 1): if(nodeList[j][-1].end + 1 != nodeList[j+1][0].start): if(j < center): if(j >= splitpos[0]): splitpos[0] = j else: if((j+1) < splitpos[1]): splitpos[1] = j+1 outList = nodeList[splitpos[0]:splitpos[1]] newList = [] for k in range(len(outList)): for node in outList[k]: newList.append(node) newChunk = Chunk(nodeList[center][0], newList) total.append(newChunk) return newChunk def construct_sentence(self, root): listNodes = list(root.nodes) output = [] for k in range(len(root.nodes)): index = 0 for i in range(len(output)): if(listNodes[k].end < output[i].start): break index += 1 output.insert(index, listNodes[k]) newChunk = Chunk(None, output) return newChunk def chunk_from_nodes(self, node, results): if(node.isRoot): self.make_chunks(node, results) # considering chunks from clause unrelated to main clause now output = self.construct_sentence(node) results.append(output) return output if(node.nexts["all"] == set()): return Chunk(node, [node]) # sorting goes: tempList = [] for nodeItem in node.nexts["all"]: result = self.chunk_from_nodes(nodeItem, results) if(result is not None): if(nodeItem.cc != None): self.insert_byOrder([nodeItem.cc],tempList) self.insert_byOrder(result.nodeList, tempList) center = self.insert_byOrder([node], tempList) output = self.check_nodesForChunk(tempList, center, results) return output def combine_conj_chunk(self, chunkList): out_results = [] chunkList.sort(key=(lambda x: x.node.start)) size = len(chunkList) splitIndex = 0 if(chunkList != []): temp = Chunk(chunkList[0].node, chunkList[0].nodeList.copy()) else: return [] for j in range(size -1): if(chunkList[j+1].node.cc != None and chunkList[j].nodeList[-1].end + 1 == chunkList[j+1].node.cc.start): temp.nodeList += [chunkList[j+1].node.cc] temp.nodeList += chunkList[j+1].nodeList else: if(temp != []): out_results.append(temp) temp = Chunk(chunkList[j+1].node, chunkList[j+1].nodeList.copy()) out_results.append(temp) return out_results def make_chunks(self, graph_or_root, results): if(type(graph_or_root) is Ugraph): root = graph_or_root.root else: root = graph_or_root cont_out = dict() for cont in root.nexts.keys(): for contNode in root.nexts[cont]: comp = self.chunk_from_nodes(contNode, results) if(cont in cont_out): cont_out[cont].append(comp) else: cont_out[cont] = [] cont_out[cont].append(comp) if(cont in cont_out): cont_out[cont] = self.combine_conj_chunk(cont_out[cont]) results += cont_out[cont] if("verb" in cont_out and "obj" in cont_out): for vbChunk in cont_out["verb"]: for objChunk in cont_out["obj"]: if(vbChunk.node.pair == objChunk.node.pair): vb = vbChunk.nodeList obj = objChunk.nodeList 
if(vb[-1].end +1 == obj[0].start): vpChunk = Chunk(vbChunk.node, vb+obj) vpChunk.ifVP = True results.append(vpChunk) elif("verb" in cont_out and not "obj" in cont_out): for vbChunk in cont_out["verb"]: vbChunk.ifVP = True outList = set() for nodeChunk in results: tempStr = "" for node in nodeChunk.nodeList: tempStr += node.word tempStr += " " #if(nodeChunk.ifVP): # tempStr = "Somebody " + tempStr outList.add(tempStr.rstrip()) return list(outList) def get_chunks_byDepTree(self, tree): pipe1 = GraphPipeline() g1 = pipe1.mono2Graph(tree) return self.make_chunks(g1, []) # + colab={"base_uri": "https://localhost:8080/", "height": 253} id="DV68-8sDaHhu" outputId="59f22af6-ac99-40e0-c4f2-c807f15681ab" sentences = ["There is a girl with a bag", "Here is the homework that I just wrote", "This is the pizza that I just ordered","There is no cat who playing with a device"] pipeline = PolarizationPipeline(verbose = 1) results = [] results_tree = [] for sent in sentences: tree = pipeline.single_polarization(sent)["polarized_tree"] results_tree.append(tree) results.append(pipeline.postprocess(tree,"")) print(results) # visualize_tree(results[2]) # + id="q7PSo4Iwbh03" gp = GraphPipeline() chunker = Chunker() results = [] gh1 = gp.mono2Graph(results_tree[3]) chunks = chunker.make_chunks(gh1, results) print(chunks) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import requests import pandas as pd page = requests.get("https://solicitors.lawsociety.org.uk/person/130606") page.status_code # + df_summary = pd.DataFrame() for index in range(3000, 4000): page = requests.get("https://solicitors.lawsociety.org.uk/person/"+str(index)) data = {'index': index, 'status': page.status_code} df_new = pd.DataFrame.from_records([data]) df_summary = df_summary.append(df_new) print('-', end='') # - df_summary.status.value_counts() df = df.append(df_summary) df.status.value_counts() df.loc[df.status == 200] # + from bs4 import BeautifulSoup soup = BeautifulSoup(page.content, 'html.parser') # - soup.find('div', id='languages-spoken-accordion').get_text().strip() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd all_proposals_data = pd.read_csv("All Proposals.csv") # - # ## Doing Rounds 9 and 10 # At our request, the Ocean engineers graciously added time stamp data for Rounds 9 and 10 in the "OceanDAOVotes.xlsx" file. We will be using a slightly different process to extract the timestamps here. # + def transfer_data(df1: pd.DataFrame, df2: pd.DataFrame, source_col: str, match_col: str, target_cols: list) -> None: """ This function transfers the information in the target_col column from df2 to df1. Modifies data frame in place. 
""" for target_col in target_cols: names_dict = {x:y for x,y in zip(df2[match_col], df2[target_col])} df1[target_col] = df1[source_col].map(names_dict) return None def lookup_time_excel_sheet(wallet:str, hash_str:str) -> str: truncate_hash = hash_str[0:31] get_sheet = pd.read_excel("OceanDAOVotes.xlsx", sheet_name = truncate_hash) get_voter = get_sheet[get_sheet["address"] == wallet] get_time = get_voter["created"] real_time = get_time.loc[0] print(real_time) return get_time def lookup_time(wallet:str, hash_str:str) -> str: get_voter = snapshot_data[snapshot_data["voter"]==wallet] get_vote = get_voter[get_voter["ipfsHash"] == hash_str] get_time = get_vote["created"] return get_time def create_timestamp_column(df:pd.DataFrame) -> list: timestamps = [lookup_time(df.loc[k,"address"],df.loc[k,"ipfsHash"]) for k in range(len(df))] return timestamps def truncate_hash(ipfshash:str) -> str: """ Truncate the hash to 31 characters for accessing sheet names. """ return ipfshash[:31] def get_project_data(ipfshash:str, round_number:int, proposal_name:str) -> pd.DataFrame: """ Given the ipfshash for a project, a round number, and a proposal name, access the data corresponding to that. """ truncated_hash = truncate_hash(ipfshash) project_data = pd.read_excel("OceanDAOVotes.xlsx", sheet_name = truncated_hash) project_data["round"] = round_number project_data["Proposal Name"] = proposal_name project_data["ipfsHash"] = ipfshash project_data.rename(columns = {'balace': 'balance'}, inplace = True) return project_data def get_round_data(round_number: int) -> pd.DataFrame: """ Get the data from a given round. """ our_sheet_name = "Round " + str(round_number) + " Results" base_data = pd.read_excel("OceanDAOVotes.xlsx", sheet_name = our_sheet_name) return base_data def get_project_data_from_round(round_number: int) -> pd.DataFrame: """ Get the project data from a given round. 
""" round_data = get_round_data(round_number) project_df_list = [] for project_index in range(len(round_data)): project_ipfshash = round_data.loc[project_index,"ipfsHash"] proposal_name = round_data.loc[project_index,"Project Name"] project_data = get_project_data(project_ipfshash, round_number, proposal_name) project_df_list.append(project_data) all_data = pd.concat(project_df_list) return all_data choice_map = {1: "Yes", 2: "No"} info_to_add = ["Project Name", "Proposal State", "Proposal Standing", "Grant Category", "Earmarks", "OCEAN Granted"] info_to_drop_at_end = ["choice","version","authorIpfsHash", "relayIpfsHash", "round"] early_rounds_cols = ['address', 'balance', 'timestamp', 'Vote', 'Round', 'Project Name', 'Proposal State', 'Proposal Standing', 'Grant Category', 'Earmarks', 'OCEAN Granted'] def process_very_late_round_data(rd_num:int, cols_to_keep) -> pd.DataFrame: rd_data = get_project_data_from_round(rd_num) rd_proposal_data = all_proposals_data[all_proposals_data["Round"] == rd_num] rd_data["Vote"] = rd_data["choice"].map(choice_map) rd_data["Round"] = rd_num transfer_data(rd_data, rd_proposal_data, "ipfsHash", "ipfsHash", info_to_add) rd_data["OCEAN Granted"].fillna(value = 0, inplace = True) rd_data.reset_index(inplace=True) #rd_data["created"] = [int(x) for x in rd_data["created"]] rd_data.rename(columns = {"created":"timestamp"}, inplace = True) rd_data = rd_data[cols_to_keep] return rd_data # - rd9data = process_very_late_round_data(9, early_rounds_cols) rd10data = process_very_late_round_data(10,early_rounds_cols) late_rds_data = pd.concat([rd9data, rd10data]) late_rds_data.fillna(0, inplace = True) rds_1_through_8 = pd.read_csv("../final-data/ocean-votes-round-1-to-8-with-timestamps.csv") all_rounds_with_timestamps = pd.concat([rds_1_through_8,late_rds_data]) # + from datetime import datetime human_times = [datetime.utcfromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S') for ts in all_rounds_with_timestamps["timestamp"]] all_rounds_with_timestamps["datetime"] = human_times all_rounds_with_timestamps.to_csv("../final-data/ocean-votes-rounds_1_to_10_with_almost_all_timestamps.csv") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="ug7Q7DElsGfU" colab_type="text" # **PARKINSON'S DISEASE PREDICTION USING MACHINE LEARNING TECHNIQUES** # + [markdown] id="QgvQmFAisPpl" colab_type="text" # IMPORTING REQUIRED LIBRARIES # + id="vN4EmIGim_uF" colab_type="code" colab={} import numpy as np import pandas as pd import warnings import os, sys from sklearn.preprocessing import MinMaxScaler from xgboost import XGBClassifier from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score import seaborn as sns from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import f_classif from imblearn.over_sampling import SMOTE from imblearn.combine import SMOTETomek sm=SMOTE(random_state=2) smk = SMOTETomek(random_state= 2) from sklearn.preprocessing import StandardScaler from sklearn.svm import SVC from sklearn.model_selection import cross_val_score from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import LogisticRegression from sklearn.metrics import classification_report,confusion_matrix # + [markdown] id="wZocAleNsdr6" colab_type="text" # LAUNCHING # + id="IAMbx0pqnFdh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} 
outputId="1f3d46a2-0b0c-4ccc-e565-7b8d1171b2ae" df=pd.read_csv('/content/parkinsons.data') df.head() # + [markdown] id="lj6NZa86tFcA" colab_type="text" # REMOVING NON CATEGORICAL FEATURE BEFORE MOVING AHEAD # + id="1v1UUuZ7tJRT" colab_type="code" colab={} df = df.drop(['name'],axis = 1) # + [markdown] id="dt3OCR9XshLp" colab_type="text" # DATA TREATMENT # + id="0rKsFlQ8nPcl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 371} outputId="b45dab01-5ce4-4fbc-d56d-f59ce6821d20" #CHECKING FOR MISSING VALUES AND REPRESENTING IT USING HEAT MAPS sns.heatmap(df.isnull(),cmap='coolwarm',xticklabels = True,yticklabels = False,cbar=False) # + [markdown] id="-Yt22NFLthfE" colab_type="text" # PREPARING x AND y # + id="bP0sZPsrnfTh" colab_type="code" colab={} y = df['status'] X = df.drop(['status'],axis =1 ) # + id="rp5UA_qyoID_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} outputId="e222a06f-f4bc-4cd3-e7aa-0a0fb6dd5eb6" X.head() # + id="TGsNznBaoL_z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="2c17eb98-3020-44f1-85f7-24d5e5a6ce86" y.head() # + [markdown] id="JkvW3YH1ttTU" colab_type="text" # SPLITTING DATA INTO TEST AND TRAIN # + id="qBgsLZwRoNSO" colab_type="code" colab={} X_train,X_test,y_train, y_test = train_test_split( X, y ,test_size = 0.3 , random_state = 123) # + [markdown] id="sFrkYGm-t9Eb" colab_type="text" # BALANCING DATA FOR IMPROVING CLASSIFICATION PERFORMANCE # + id="yh1gNZ73oRIk" colab_type="code" colab={} def balancing(X_train,y_train) : #Smote if( x ==1): #X_train, y_train = sm.fit_sample(X_train, y_train.values.ravel()) X_train, y_train = sm.fit_sample(X_train, y_train) #smotetomek elif(x ==2) : X_train, y_train = smk.fit_sample(X_train, y_train.values.ravel()) else : print('wrong input') y_train = pd.DataFrame(y_train) X_train = pd.DataFrame(X_train) X_train = X_train.round(2) #print(X_train.head()) return X_train,y_train # + id="KKEO_QaAod6Q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="7c569a71-eef5-4a20-acb6-0b63cd3c0b4b" x = 1 X_train,y_train = balancing(X_train,y_train) # + [markdown] id="QkGxRtcmuC_I" colab_type="text" # RENAMING THE COLUMNS FOR BETTER UNDERSTANDING AND READIBILITY # + id="HSNCR-nwogkr" colab_type="code" colab={} X_train = X_train.rename(columns={0:'id',1:'MDVP:Fo(Hz)',2:'MDVP:Fhi(Hz)',3:'MDVP:Flo(Hz)',4:'MDVP:Jitter(%)',5 :'MDVP:Jitter(Abs)',6:'MDVP:RAP',7:'MDVP:PPQ',8:'Jitter:DDP',9:'MDVP:Shimmer', 10:'MDVP:Shimmer(dB)',11:'Shimmer:APQ3',12:'Shimmer:APQ5', 13:'MDVP:APQ',14:'Shimmer:DDA',15:'NHR',16:'HNR', 17:'RPDE',18:'DFA',19:'spread1',20:'spread2',21: 'D2', 22:'PPE'}) y_train = y_train.rename(columns={0:'status'}) # + id="1ECC5GDtovww" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="f7e6abc7-1802-440e-9475-78ab2084f487" y_train['status'].value_counts() # + [markdown] id="aEUr0DHqugwV" colab_type="text" # STANDARDIZING THE MULTIVARIATE DATA # + id="9iWSaVTkoy9j" colab_type="code" colab={} sc_X = StandardScaler() X_train = sc_X.fit_transform(X_train) X_test = sc_X.fit_transform(X_test) # + [markdown] id="lZcRNPzRurz4" colab_type="text" # APPLYING MACHINE LEARNING ALGORITHMS # + id="JUQ4h7z9o46C" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="3b124da4-d69e-45d9-b7dd-e79fa804d52a" #support vector classifier svc_model = SVC(class_weight ='balanced',probability = True) svc_model.fit(X_train,y_train) # + id="LgSjg_6EqvjU" 
colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="42e4f1ef-4c4e-4fde-b2f2-86f69282e087" svc_predictions = svc_model.predict(X_test) print(accuracy_score(y_test,svc_predictions)) # + id="yU1-t4hCq40D" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="606b28a5-79d6-4fb3-e8d6-f1bce6d71702" print(classification_report(y_test,svc_predictions)) print(confusion_matrix(y_test,svc_predictions)) # + id="sf_a7J1gq73p" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} outputId="7f2ca52f-ce72-4e97-9931-b3bc68d081aa" svc_score = cross_val_score(estimator = svc_model , X = X_train , y = y_train,cv = 5) print(svc_score.mean()) # + id="TIlD3t8frCiX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="ce8faca4-e206-45d6-e60d-61c0dc5fedb3" #NAIVE BAYES nb = GaussianNB() nb.fit(X_train,y_train) # + id="ktAr51GtrGhu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2e05663d-8a34-4bf2-c4b1-3e3bd21016d6" nb_predictions = nb.predict(X_test) print(accuracy_score(y_test,nb_predictions)) # + id="olmZPIkWrPDp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="94b95826-a324-49fd-b4a5-7d0e7ed6ca9b" print(classification_report(y_test,nb_predictions)) print(confusion_matrix(y_test,nb_predictions)) # + id="ZR6E4I2wrTzm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 156} outputId="7016e338-4dbb-4e63-89fe-1d808f8581f0" #LOGISTIC REGRESSION logreg = LogisticRegression() logreg.fit(X_train,y_train) # + id="h1jyT4g2rYDR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f3baae9e-5e83-4b14-dfa8-d0b65f2b7e5c" logreg_predictions = logreg.predict(X_test) print(accuracy_score(y_test,logreg_predictions)) # + id="CdnHDE3src5F" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="b942247e-e634-4515-b812-e0ce63205ac4" print(classification_report(y_test,logreg_predictions)) print(confusion_matrix(y_test,logreg_predictions)) # + id="DK5ztzDDrgCh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 207} outputId="6456310f-7d3e-46b8-e0d2-c8207483d7b6" #RANDOM FOREST from sklearn.ensemble import RandomForestClassifier rfc = RandomForestClassifier(n_estimators =10 ) rfc.fit(X_train,y_train) # + id="a1bT7k4froEr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="718bcb17-b2a4-4baf-d77d-c60d60719b29" rfc_predictions = rfc.predict(X_test) print(accuracy_score(y_test,rfc_predictions)) # + id="XeDySf8brreZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="045dfa14-3d99-4d97-c5e5-de60998b7bf6" print(classification_report(y_test,rfc_predictions)) print(confusion_matrix(y_test,rfc_predictions)) # + id="gyQK0-Gort7S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} outputId="cff37704-3359-4b8d-9349-9f28fab0fd8b" rfc_score = cross_val_score(estimator = rfc , X = X_train , y = y_train,cv =5) print(rfc_score.mean()) # + id="i-EkLPblryBT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} outputId="ba9bdc60-c24b-4f1f-9b8e-642dbe53c900" import xgboost as xgb model = xgb.XGBClassifier() model.fit(X_train,y_train) # + id="peF8UeA7r5U3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0c419a83-0cc1-4094-f241-ba95835f65a2" predictions = 
model.predict(X_test) print(accuracy_score(y_test,predictions)) # + id="9ZEEyANZr_Ex" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="c3c795c5-c89e-4a79-d5e2-7c12657407a8" print(classification_report(y_test,predictions)) print(confusion_matrix(y_test,predictions)) # + id="tA2NitsJsA05" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 122} outputId="17adc3f1-6554-46d6-a9ee-55158b0ed205" score = cross_val_score(estimator = model , X = X_train , y = y_train,cv =5) print(score.mean()) # + [markdown] id="HWJ9QajwyesZ" colab_type="text" # **INFERENCE: Considering the accuracy scores and cross validation scores of various predictions over the same pre processed data, our proposed algorithm , XGBoost comes out as the winner** # + [markdown] id="lKCUqJWdy0Z0" colab_type="text" # **CONCLUSION:XGBOOST has given promising results in terms of early prediction on Parkinson's disease and can be explored furthur in order to making a feasible model for medical application** # # --- # # # # --- # # # **As tested in this project, # XGBoost with its ability to predict the disease at an early # stage could change lives of many, paving the way for a # better life for one and all** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Deriving evolver and driver classes # # In `micromagneticmodel` package, base classes `micromagneticmodel.Driver` and `micromagneticmodel.Evolver` are defined. Their purpose is to build individual evolvers and drivers in a particular micromagnetic calculator. In this tutorial, we will demonstrate some of their basic properties, on an example of a `micromagneticmodel.Driver` class. The behaviour of the `micromagneticmodel.Evolver` class is the same. # # Let us derive `MyDriver` class. In order to define it, `_allowed_attributes` list must be defined. It is a list of strings, which lists the kwargs which are allowed to be passed. # + import micromagneticmodel as mm class MyDriver(mm.Driver): _allowed_attributes = ['arg1', 'arg2'] def drive(system): return system # - # `Driver` class does not require any parameters to be passed at initialisation. If a keyword argument is from `_allowed_kwargs` list, it will be assigned as a class attribute. Otherwise, `AttributeError` will be raised. driver = MyDriver(arg1=1, arg2='value') # The attributes are driver.arg1 driver.arg2 # If we try to pass a keyword argument at initialisation, which is not in the `_allowed_kwargs` list, `AttributeError` is rased: try: driver = MyDriver(arg3=1) except AttributeError: print('Exception raised.') # The main driver method which must be implemented by a derived class is `drive`. try: driver.drive() except NotImplementedError: print('Exception raised.') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:synthetic-observables] # language: python # name: conda-env-synthetic-observables-py # --- # # Compute Timelags # In this notebook, we'll compute the timelags and cross-correlation for every channel pair and heating type and save them as FITS files. 
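#
# As a quick reminder of what is being computed: for a given channel pair, the timelag in each pixel is the temporal offset that maximises the cross-correlation of the two intensity light curves, while the correlation map records the corresponding peak cross-correlation value. The cell below is a minimal NumPy sketch of that idea for a single pair of 1D light curves; the names `intensity_a`, `intensity_b` and `cadence` are placeholders, and this is not the `synthesizAR` implementation, which performs the equivalent computation over full data cubes with Dask.

# +
import numpy as np

cadence = 10.0  # assumed sampling cadence in seconds
t = np.arange(0, 6 * 3600, cadence)
intensity_a = np.exp(-((t - 7000.0) / 2000.0) ** 2)  # light curve in the first channel
intensity_b = np.exp(-((t - 9000.0) / 2000.0) ** 2)  # same pulse, peaking ~2000 s later

# evaluate the cross-correlation of the mean-subtracted light curves at every offset
cc = np.correlate(intensity_b - intensity_b.mean(),
                  intensity_a - intensity_a.mean(), mode='full')
offsets = (np.arange(cc.size) - (intensity_a.size - 1)) * cadence

# the timelag is the offset that maximises the cross-correlation
print(f'timelag = {offsets[np.argmax(cc)]:.0f} s')  # ~ +2000 s for this synthetic pair
# -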
# + import os import sys import numpy as np import distributed import matplotlib.pyplot as plt import matplotlib.colors from sunpy.map import Map,GenericMap from astropy.coordinates import SkyCoord import astropy.units as u from astropy.utils.console import ProgressBar import synthesizAR from synthesizAR.instruments import InstrumentSDOAIA from synthesizAR.analysis import DistributedAIACube,AIATimelags from synthesizAR.visualize import bgry_004_idl_cmap # %matplotlib inline # - synthesizAR.version.version # Spin up a local Dask cluster. cluster = distributed.LocalCluster(n_workers=32,threads_per_worker=2,) client = distributed.Client(cluster) client channels = [94,131,171,193,211,335] channel_pairs = [(94,335), (94,171), (94,193), (94,131), (94,211), (335,131), (335,193), (335,211), (335,171), (211,131), (211,171), (211,193), (193,171), (193,131), (171,131),] heating = ['high_frequency', 'intermediate_frequency', 'low_frequency', 'cooling_outofphase_long', 'cooling'] labels = ['High', 'Intermediate', 'Low', 'Random', 'Cooling'] intensity_file_format = '/storage-home/w/wtb2/data/timelag_synthesis_v2/{}/nei/SDO_AIA/{}/map_t{:06d}.fits' result_file_format = '/storage-home/w/wtb2/projects/synthetic-observables-paper-models/paper/data/{}/{}_{}_{}.fits' timelag_bounds = (-6*u.hour,6*u.hour) # Now, compute the timelag and correlation maps for all of the channel pairs and all of the different heating models. # ## High Frequency tl = AIATimelags(*[DistributedAIACube.from_files([intensity_file_format.format('high_frequency', c, i) for i in range(500,2500)]) for c in channels]) for ca,cb in channel_pairs: timelag_map = tl.make_timelag_map(f'{ca}',f'{cb}', timelag_bounds=timelag_bounds,chunks=(tl[0].shape[1]//3,tl[0].shape[2]//3)) timelag_map.save(result_file_format.format('high_frequency', 'timelag', ca, cb)) correlation_map = tl.make_correlation_map(f'{ca}',f'{cb}', timelag_bounds=timelag_bounds,chunks=(tl[0].shape[1]//3,tl[0].shape[2]//3)) correlation_map.save(result_file_format.format('high_frequency', 'correlation', ca, cb)) # ## Intermediate Frequency tl = AIATimelags(*[DistributedAIACube.from_files([intensity_file_format.format('intermediate_frequency', c, i) for i in range(500,2500)]) for c in channels]) for ca,cb in channel_pairs: timelag_map = tl.make_timelag_map(f'{ca}',f'{cb}', timelag_bounds=timelag_bounds,chunks=(tl[0].shape[1]//3,tl[0].shape[2]//3)) timelag_map.save(result_file_format.format('intermediate_frequency', 'timelag', ca, cb)) correlation_map = tl.make_correlation_map(f'{ca}',f'{cb}', timelag_bounds=timelag_bounds,chunks=(tl[0].shape[1]//3,tl[0].shape[2]//3)) correlation_map.save(result_file_format.format('intermediate_frequency', 'correlation', ca, cb)) # ## Low Frequency tl = AIATimelags(*[DistributedAIACube.from_files([intensity_file_format.format('low_frequency', c, i) for i in range(500,2500)]) for c in channels]) for ca,cb in channel_pairs: timelag_map = tl.make_timelag_map(f'{ca}',f'{cb}', timelag_bounds=timelag_bounds,chunks=(tl[0].shape[1]//3,tl[0].shape[2]//3)) timelag_map.save(result_file_format.format('low_frequency', 'timelag', ca, cb)) correlation_map = tl.make_correlation_map(f'{ca}',f'{cb}', timelag_bounds=timelag_bounds,chunks=(tl[0].shape[1]//3,tl[0].shape[2]//3)) correlation_map.save(result_file_format.format('low_frequency', 'correlation', ca, cb)) # ## Random tl = AIATimelags(*[DistributedAIACube.from_files([intensity_file_format.format('cooling_outofphase_long', c, i) for i in range(500,2500)]) for c in channels]) for ca,cb in channel_pairs: timelag_map = 
tl.make_timelag_map(f'{ca}',f'{cb}', timelag_bounds=timelag_bounds,chunks=(tl[0].shape[1]//3,tl[0].shape[2]//3)) timelag_map.save(result_file_format.format('random', 'timelag', ca, cb)) correlation_map = tl.make_correlation_map(f'{ca}',f'{cb}', timelag_bounds=timelag_bounds,chunks=(tl[0].shape[1]//3,tl[0].shape[2]//3)) correlation_map.save(result_file_format.format('random', 'correlation', ca, cb)) # ## Cooling tl = AIATimelags(*[DistributedAIACube.from_files([intensity_file_format.format('cooling', c, i) for i in range(0,1000)]) for c in channels]) for ca,cb in channel_pairs: timelag_map = tl.make_timelag_map(f'{ca}',f'{cb}', timelag_bounds=timelag_bounds,chunks=(tl[0].shape[1]//3,tl[0].shape[2]//3)) timelag_map.save(result_file_format.format('cooling', 'timelag', ca, cb)) correlation_map = tl.make_correlation_map(f'{ca}',f'{cb}', timelag_bounds=timelag_bounds,chunks=(tl[0].shape[1]//3,tl[0].shape[2]//3)) correlation_map.save(result_file_format.format('cooling', 'correlation', ca, cb)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Running Jupyter Notebook on EC2 # ### This Tutorial is for running Jupyter Notebook on Amazon EC2 instance. This is not a beginner's tutorial but I will be explaining every step so that everyone can follow. # ### Step 1 - Launch your EC2 by following the steps in this blog - : # # # ### When Instance is launched, make sure you have something like the following screen: # # # ![title](Capture1.PNG) # ### Also make sure that Anaconda is installed on your instance by typing: # #### - "which python /home/ubuntu/anaconda3/bin/python" on the terminal # #### It will show you the following output # # # ![title](Capture2.PNG) # ### Step 2 - Now follow the following steps for creating password of your Jupyter Notebook: # #### - run these commands one by one on the terminal: # #### - ipython # #### - from IPython.lib import passwd # #### - passwd() # #### - Enter password: (Write your password here) # #### - Verify your password # #### - Terminal will show something like this - 'sha1:98ff0e580111:12798c72623a6eecd54b51c006b1050f0ac1a62d' [save this output somewhere else] # #### - exit # ### Step 3 - Create a config file by running following command on terminal: # # #### - jupyter notebook --generate-config # # ### Now lets create http certificates by runing following commands one by one on terminal: # # #### - mkdir certs # #### - cd certs # #### - sudo openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem # #### - Now few questions will be asked from you, answer them as you want # # ### Step 4 - Configuring Jupyter # # #### - Go to "cd ~/.jupyter" using terminal # #### - edit it by using "vi jupyter_notebook.config.py" on terminal or copy this file to your computer for editing using any online software such as FileZilla # #### - Insert following lines at the top of the document: # #### - c = get_config() # #### - c.IPKernelApp.pylab = 'inline' # if you want plotting support always in your notebook # #### - c.NotebookApp.certfile = u'/home/ubuntu/certs/mycert.pem' #location of your certificate file # #### - c.NotebookApp.ip = '0.0.0.0' # #### - c.NotebookApp.open_browser = False #so that the ipython notebook does not opens up a browser by default # #### - c.NotebookApp.password = u':' [change this with your password that you saved in Step 2] # #### - 
c.NotebookApp.port = 8888 # # ### Exit the file # ### Step 5 - Starting Jupyter Notebook # # #### Create folder for your notebooks by writing following commands on terminal: Now just write "jupyter notebook" on the terminal # # #### - Terminal will look something like this: # # ![title](Capture3.PNG) # # ### Now open a browser and write: # #### - https://[Your Public DNS]:8888/ [change your public DNS here] # #### - It will end up looking like this - https://ec2-54-160-17-108.compute-1.amazonaws.com:8888/ # #### - You can get your public DNS from your instance page while it is running, shown as follows: # # ![title](Capture4.PNG) # ### Now you will be able to use Jupyter Notebook on EC2 instance - Congrats :) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="n7pdtN1ZwFFj" # + [markdown] id="K7NHbklvwjDQ" # --- # title: "Reading Tabular Data into DataFrames" # teaching: 10 # exercises: 10 # questions: # - "How can I read tabular data?" # objectives: # - "Import the Pandas library." # - "Use Pandas to load a simple CSV data set." # - "Get some basic information about a Pandas DataFrame." # keypoints: # - "Use the Pandas library to get basic statistics out of tabular data." # - "Use `index_col` to specify that a column's values should be used as row headings." # - "Use `DataFrame.info` to find out more about a dataframe." # - "The `DataFrame.columns` variable stores information about the dataframe's columns." # - "Use `DataFrame.T` to transpose a dataframe." # - "Use `DataFrame.describe` to get summary statistics about data." # --- # ## Use the Pandas library to do statistics on tabular data. # # * Pandas is a widely-used Python library for statistics, particularly on tabular data. # * Borrows many features from R's dataframes. # * A 2-dimensional table whose columns have names # and potentially have different data types. # * Load it with `import pandas as pd`. The alias pd is commonly used for Pandas. # * Read a Comma Separated Values (CSV) data file with `pd.read_csv`. # * Argument is the name of the file to be read. # * Assign result to a variable to store the data that was read. # # ~~~ # import pandas as pd # # data = pd.read_csv('data/gapminder_gdp_oceania.csv') # print(data) # ~~~ # {: .language-python} # ~~~ # country gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 \ # 0 Australia 10039.59564 10949.64959 12217.22686 # 1 New Zealand 10556.57566 12247.39532 13175.67800 # # gdpPercap_1967 gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 \ # 0 14526.12465 16788.62948 18334.19751 19477.00928 # 1 14463.91893 16046.03728 16233.71770 17632.41040 # # gdpPercap_1987 gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 \ # 0 21888.88903 23424.76683 26997.93657 30687.75473 # 1 19007.19129 18363.32494 21050.41377 23189.80135 # # gdpPercap_2007 # 0 34435.36744 # 1 25185.00911 # ~~~ # {: .output} # # * The columns in a dataframe are the observed variables, and the rows are the observations. # * Pandas uses backslash `\` to show wrapped lines when output is too wide to fit the screen. # # > ## File Not Found # > # > Our lessons store their data files in a `data` sub-directory, # > which is why the path to the file is `data/gapminder_gdp_oceania.csv`. 
# > If you forget to include `data/`, # > or if you include it but your copy of the file is somewhere else, # > you will get a [runtime error]({{ page.root }}/04-built-in/#runtime-error) # > that ends with a line like this: # > # > ~~~ # > FileNotFoundError: [Errno 2] No such file or directory: 'data/gapminder_gdp_oceania.csv' # > ~~~ # > {: .error} # {: .callout} # # ## Use `index_col` to specify that a column's values should be used as row headings. # # * Row headings are numbers (0 and 1 in this case). # * Really want to index by country. # * Pass the name of the column to `read_csv` as its `index_col` parameter to do this. # # ~~~ # data = pd.read_csv('data/gapminder_gdp_oceania.csv', index_col='country') # print(data) # ~~~ # {: .language-python} # ~~~ # gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 gdpPercap_1967 \ # country # Australia 10039.59564 10949.64959 12217.22686 14526.12465 # New Zealand 10556.57566 12247.39532 13175.67800 14463.91893 # # gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 gdpPercap_1987 \ # country # Australia 16788.62948 18334.19751 19477.00928 21888.88903 # New Zealand 16046.03728 16233.71770 17632.41040 19007.19129 # # gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 gdpPercap_2007 # country # Australia 23424.76683 26997.93657 30687.75473 34435.36744 # New Zealand 18363.32494 21050.41377 23189.80135 25185.00911 # ~~~ # {: .output} # # ## Use the `DataFrame.info()` method to find out more about a dataframe. # # ~~~ # data.info() # ~~~ # {: .language-python} # ~~~ # # Index: 2 entries, Australia to New Zealand # Data columns (total 12 columns): # gdpPercap_1952 2 non-null float64 # gdpPercap_1957 2 non-null float64 # gdpPercap_1962 2 non-null float64 # gdpPercap_1967 2 non-null float64 # gdpPercap_1972 2 non-null float64 # gdpPercap_1977 2 non-null float64 # gdpPercap_1982 2 non-null float64 # gdpPercap_1987 2 non-null float64 # gdpPercap_1992 2 non-null float64 # gdpPercap_1997 2 non-null float64 # gdpPercap_2002 2 non-null float64 # gdpPercap_2007 2 non-null float64 # dtypes: float64(12) # memory usage: 208.0+ bytes # ~~~ # {: .output} # # * This is a `DataFrame` # * Two rows named `'Australia'` and `'New Zealand'` # * Twelve columns, each of which has two actual 64-bit floating point values. # * We will talk later about null values, which are used to represent missing observations. # * Uses 208 bytes of memory. # # ## The `DataFrame.columns` variable stores information about the dataframe's columns. # # * Note that this is data, *not* a method. (It doesn't have parentheses.) # * Like `math.pi`. # * So do not use `()` to try to call it. # * Called a *member variable*, or just *member*. # # ~~~ # print(data.columns) # ~~~ # {: .language-python} # ~~~ # Index(['gdpPercap_1952', 'gdpPercap_1957', 'gdpPercap_1962', 'gdpPercap_1967', # 'gdpPercap_1972', 'gdpPercap_1977', 'gdpPercap_1982', 'gdpPercap_1987', # 'gdpPercap_1992', 'gdpPercap_1997', 'gdpPercap_2002', 'gdpPercap_2007'], # dtype='object') # ~~~ # {: .output} # # ## Use `DataFrame.T` to transpose a dataframe. # # * Sometimes want to treat columns as rows and vice versa. # * Transpose (written `.T`) doesn't copy the data, just changes the program's view of it. # * Like `columns`, it is a member variable. 
# # ~~~ # print(data.T) # ~~~ # {: .language-python} # ~~~ # country Australia New Zealand # gdpPercap_1952 10039.59564 10556.57566 # gdpPercap_1957 10949.64959 12247.39532 # gdpPercap_1962 12217.22686 13175.67800 # gdpPercap_1967 14526.12465 14463.91893 # gdpPercap_1972 16788.62948 16046.03728 # gdpPercap_1977 18334.19751 16233.71770 # gdpPercap_1982 19477.00928 17632.41040 # gdpPercap_1987 21888.88903 19007.19129 # gdpPercap_1992 23424.76683 18363.32494 # gdpPercap_1997 26997.93657 21050.41377 # gdpPercap_2002 30687.75473 23189.80135 # gdpPercap_2007 34435.36744 25185.00911 # ~~~ # {: .output} # # ## Use `DataFrame.describe()` to get summary statistics about data. # # `DataFrame.describe()` gets the summary statistics of only the columns that have numerical data. # All other columns are ignored, unless you use the argument `include='all'`. # ~~~ # print(data.describe()) # ~~~ # {: .language-python} # ~~~ # gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 gdpPercap_1967 \ # count 2.000000 2.000000 2.000000 2.000000 # mean 10298.085650 11598.522455 12696.452430 14495.021790 # std 365.560078 917.644806 677.727301 43.986086 # min 10039.595640 10949.649590 12217.226860 14463.918930 # 25% 10168.840645 11274.086022 12456.839645 14479.470360 # 50% 10298.085650 11598.522455 12696.452430 14495.021790 # 75% 10427.330655 11922.958888 12936.065215 14510.573220 # max 10556.575660 12247.395320 13175.678000 14526.124650 # # gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 gdpPercap_1987 \ # count 2.00000 2.000000 2.000000 2.000000 # mean 16417.33338 17283.957605 18554.709840 20448.040160 # std 525.09198 1485.263517 1304.328377 2037.668013 # min 16046.03728 16233.717700 17632.410400 19007.191290 # 25% 16231.68533 16758.837652 18093.560120 19727.615725 # 50% 16417.33338 17283.957605 18554.709840 20448.040160 # 75% 16602.98143 17809.077557 19015.859560 21168.464595 # max 16788.62948 18334.197510 19477.009280 21888.889030 # # gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 gdpPercap_2007 # count 2.000000 2.000000 2.000000 2.000000 # mean 20894.045885 24024.175170 26938.778040 29810.188275 # std 3578.979883 4205.533703 5301.853680 6540.991104 # min 18363.324940 21050.413770 23189.801350 25185.009110 # 25% 19628.685413 22537.294470 25064.289695 27497.598692 # 50% 20894.045885 24024.175170 26938.778040 29810.188275 # 75% 22159.406358 25511.055870 28813.266385 32122.777857 # max 23424.766830 26997.936570 30687.754730 34435.367440 # ~~~ # {: .output} # # * Not particularly useful with just two records, # but very helpful when there are thousands. # # > ## Reading Other Data # > # > Read the data in `gapminder_gdp_americas.csv` # > (which should be in the same directory as `gapminder_gdp_oceania.csv`) # > into a variable called `americas` # > and display its summary statistics. # > # > > ## Solution # > > To read in a CSV, we use `pd.read_csv` and pass the filename `'data/gapminder_gdp_americas.csv'` to it. # > > We also once again pass the column name `'country'` to the parameter `index_col` in order to index by country. # > > The summary statistics can be displayed with the `DataFrame.describe()` method. # > > ~~~ # > > americas = pd.read_csv('data/gapminder_gdp_americas.csv', index_col='country') # > > americas.describe() # > > ~~~ # > >{: .language-python} # > {: .solution} # {: .challenge} # # > ## Inspecting Data # > # > After reading the data for the Americas, # > use `help(americas.head)` and `help(americas.tail)` # > to find out what `DataFrame.head` and `DataFrame.tail` do. # > # > 1. 
What method call will display the first three rows of this data? # > 2. What method call will display the last three columns of this data? # > (Hint: you may need to change your view of the data.) # > # > > ## Solution # > > 1. We can check out the first five rows of `americas` by executing `americas.head()` # > > (allowing us to view the head of the DataFrame). We can specify the number of rows we wish # > > to see by specifying the parameter `n` in our call # > > to `americas.head()`. To view the first three rows, execute: # > > # > > ~~~ # > > americas.head(n=3) # > > ~~~ # > > {: .language-python} # > > ~~~ # > > continent gdpPercap_1952 gdpPercap_1957 gdpPercap_1962 \ # > > country # > > Argentina Americas 5911.315053 6856.856212 7133.166023 # > > Bolivia Americas 2677.326347 2127.686326 2180.972546 # > > Brazil Americas 2108.944355 2487.365989 3336.585802 # > > # > > gdpPercap_1967 gdpPercap_1972 gdpPercap_1977 gdpPercap_1982 \ # > > country # > > Argentina 8052.953021 9443.038526 10079.026740 8997.897412 # > > Bolivia 2586.886053 2980.331339 3548.097832 3156.510452 # > > Brazil 3429.864357 4985.711467 6660.118654 7030.835878 # > > # > > gdpPercap_1987 gdpPercap_1992 gdpPercap_1997 gdpPercap_2002 \ # > > country # > > Argentina 9139.671389 9308.418710 10967.281950 8797.640716 # > > Bolivia 2753.691490 2961.699694 3326.143191 3413.262690 # > > Brazil 7807.095818 6950.283021 7957.980824 8131.212843 # > > # > > gdpPercap_2007 # > > country # > > Argentina 12779.379640 # > > Bolivia 3822.137084 # > > Brazil 9065.800825 # > > ~~~ # > > {: .output} # > > 2. To check out the last three rows of `americas`, we would use the command, # > > `americas.tail(n=3)`, analogous to `head()` used above. However, here we want to look at # > > the last three columns so we need to change our view and then use `tail()`. To do so, we # > > create a new DataFrame in which rows and columns are switched: # > > # > > ~~~ # > > americas_flipped = americas.T # > > ~~~ # > > {: .language-python} # > > # > > We can then view the last three columns of `americas` by viewing the last three rows # > > of `americas_flipped`: # > > ~~~ # > > americas_flipped.tail(n=3) # > > ~~~ # > > {: .language-python} # > > ~~~ # > > country Argentina Bolivia Brazil Canada Chile Colombia \ # > > gdpPercap_1997 10967.3 3326.14 7957.98 28954.9 10118.1 6117.36 # > > gdpPercap_2002 8797.64 3413.26 8131.21 33329 10778.8 5755.26 # > > gdpPercap_2007 12779.4 3822.14 9065.8 36319.2 13171.6 7006.58 # > > # > > country Costa Rica Cuba Dominican Republic Ecuador ... \ # > > gdpPercap_1997 6677.05 5431.99 3614.1 7429.46 ... # > > gdpPercap_2002 7723.45 6340.65 4563.81 5773.04 ... # > > gdpPercap_2007 9645.06 8948.1 6025.37 6873.26 ... 
# > > # > > country Mexico Nicaragua Panama Paraguay Peru Puerto Rico \ # > > gdpPercap_1997 9767.3 2253.02 7113.69 4247.4 5838.35 16999.4 # > > gdpPercap_2002 10742.4 2474.55 7356.03 3783.67 5909.02 18855.6 # > > gdpPercap_2007 11977.6 2749.32 9809.19 4172.84 7408.91 19328.7 # > > # > > country Trinidad and Tobago United States Uruguay Venezuela # > > gdpPercap_1997 8792.57 35767.4 9230.24 10165.5 # > > gdpPercap_2002 11460.6 39097.1 7727 8605.05 # > > gdpPercap_2007 18008.5 42951.7 10611.5 11415.8 # > > ~~~ # > > {: .output} # > > # > > This shows the data that we want, but we may prefer to display three columns instead of three rows, # > > so we can flip it back: # > > ~~~ # > > americas_flipped.tail(n=3).T # > > ~~~ # > > {: .language-python} # > > __Note:__ we could have done the above in a single line of code by 'chaining' the commands: # > > ~~~ # > > americas.T.tail(n=3).T # > > ~~~ # > > {: .language-python} # > {: .solution} # {: .challenge} # # # > ## Reading Files in Other Directories # > # > The data for your current project is stored in a file called `microbes.csv`, # > which is located in a folder called `field_data`. # > You are doing analysis in a notebook called `analysis.ipynb` # > in a sibling folder called `thesis`: # > # > ~~~ # > your_home_directory # > +-- field_data/ # > | +-- microbes.csv # > +-- thesis/ # > +-- analysis.ipynb # > ~~~ # > {: .output} # > # > What value(s) should you pass to `read_csv` to read `microbes.csv` in `analysis.ipynb`? # > # > > ## Solution # > > We need to specify the path to the file of interest in the call to `pd.read_csv`. We first need to 'jump' out of # > > the folder `thesis` using '../' and then into the folder `field_data` using 'field_data/'. Then we can specify the filename `microbes.csv. # > > The result is as follows: # > > ~~~ # > > data_microbes = pd.read_csv('../field_data/microbes.csv') # > > ~~~ # > >{: .language-python} # > {: .solution} # {: .challenge} # # > ## Writing Data # > # > As well as the `read_csv` function for reading data from a file, # > Pandas provides a `to_csv` function to write dataframes to files. # > Applying what you've learned about reading from files, # > write one of your dataframes to a file called `processed.csv`. # > You can use `help` to get information on how to use `to_csv`. # > > ## Solution # > > In order to write the DataFrame `americas` to a file called `processed.csv`, execute the following command: # > > ~~~ # > > americas.to_csv('processed.csv') # > > ~~~ # > >{: .language-python} # > > For help on `to_csv`, you could execute, for example: # > > ~~~ # > > help(americas.to_csv) # > > ~~~ # > >{: .language-python} # > > Note that `help(to_csv)` throws an error! This is a subtlety and is due to the fact that `to_csv` is NOT a function in # > > and of itself and the actual call is `americas.to_csv`. # > {: .solution} # {: .challenge} # + [markdown] id="8AONlfPywjiE" # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # SETTINGS # # This notebook performs initial data processing: # - importing the raw data # - converting feature types # - merging some data.frames # - saving data as two CSV files: `orders.csv` and `items.csv`. 
# # A detailed walkthrough of the code covering the key steps is provided in [this blog post](https://kozodoi.me/python/time%20series/demand%20forecasting/competitions/2020/07/27/demand-forecasting.html). # + ##### LIBRARIES import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import scipy.stats import os import time import datetime import random import multiprocessing import pickle import warnings import gc import sys # + ##### MODULES sys.path.append('../codes') from data_prep import print_factor_levels, print_missings, find_constant_features, split_nested_features from versioning import save_csv_version # + ##### SETTINGS warnings.filterwarnings('ignore') pd.set_option('display.max_columns', None) plt.style.use('dark_background') # %matplotlib inline gc.enable() # - # # DATA IMPORT # + ##### IMPORT infos = pd.read_csv('../data/raw/infos.csv', sep = '|') items = pd.read_csv('../data/raw/items.csv', sep = '|') orders = pd.read_csv('../data/raw/orders.csv', sep = '|') print(infos.shape) print(items.shape) print(orders.shape) # - infos.head() items.head() orders.head() # # PROCESSING # ### MERGE INFOS AND ITEMS # + ##### MERGER print(infos.shape) print(items.shape) items = pd.merge(infos, items, on = 'itemID', how = 'left') print(items.shape) del infos # - # ### CONVERT FEATURE TYPES print('-' * 50) print(items.dtypes) print('-' * 50) print(orders.dtypes) print('-' * 50) # + # items for var in ['itemID', 'brand', 'manufacturer', 'category1', 'category2', 'category3']: items[var] = items[var].astype('str').astype('object') # orders for var in ['transactID', 'itemID']: orders[var] = orders[var].astype('str').astype('object') # dates orders['time'] = pd.to_datetime(orders['time'].astype('str'), infer_datetime_format = True) # - # ### CHECK FEATURES print_factor_levels(items, top = 3) print_factor_levels(orders, top = 3) find_constant_features(items) find_constant_features(orders) # ### MISSING VALUES # change zeros to NA where relvant items.loc[items['brand'] == '0', 'brand'] = np.nan items.loc[items['customerRating'] == 0, 'customerRating'] = np.nan print_missings(items) print_missings(orders) # ### UNFOLD PROMOTIONS # split promotion feature items = split_nested_features(items, split_vars = 'promotion', sep = ',') items.head() # + # convert date types promotion_vars = items.filter(like = 'promotion_').columns for var in promotion_vars: items[var] = pd.to_datetime(items[var], infer_datetime_format = True) items.dtypes # - # # EXPORT # save data frame # save_csv_version() automatically adds version number to prevent overwriting save_csv_version('../data/prepared/orders.csv', orders, index = False, compression = 'gzip') save_csv_version('../data/prepared/items.csv', items, index = False, compression = 'gzip') print(orders.shape) print(items.shape) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from selenium import webdriver from selenium.common.exceptions import NoSuchElementException import json import re import urllib.request url = "https://www.bata.id/men/category/sandal" driver = webdriver.PhantomJS() driver.implicitly_wait(10) driver.get(url) print(driver.current_url) products = [] images = [] while True: for product in driver.find_elements_by_xpath("//div[@class='product details product-item-details']"): if product.text != '': details = re.split('\n',product.text) 
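# each non-empty product tile is expected to split into exactly two text lines, the model name and the price ("Harga"); tiles with any other structure are skipped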
if(len(details) == 2): products.append( { 'Model': details[0], 'Harga': details[1], }) try: driver.find_element_by_xpath("//a[@class='action next']").click() except NoSuchElementException as e: break; products # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import numpy as np import progressbar import matplotlib.pyplot as plt print(tf.__version__) import math import matplotlib.pyplot as plt import keras import pandas as pd import numpy as np from array import array from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers import Dropout from keras.layers import * from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error from sklearn.metrics import mean_absolute_error from sklearn.model_selection import train_test_split from keras.callbacks import EarlyStopping # df=pd.read_csv("BTC-USD.csv") # df.tail(5) # + import yfinance as yf stock = yf.Ticker("BTC-USD") hist = stock.history(period="5y") hist.tail(10) # - df=hist d=30 ahead=10 n=int(hist.shape[0]*0.8) training_set = df.iloc[:n, 1:2].values test_set = df.iloc[n:, 1:2].values # + sc = MinMaxScaler(feature_range = (0, 1)) training_set_scaled = sc.fit_transform(training_set) X_train = [] y_train = [] for i in range(d, n-ahead): X_train.append(training_set_scaled[i-d:i, 0]) y_train.append(training_set_scaled[i+ahead, 0]) X_train, y_train = np.array(X_train), np.array(y_train) X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1)) # - model = Sequential() #Adding the first LSTM layer and some Dropout regularisation model.add(LSTM(units = 100, return_sequences = True, input_shape = (X_train.shape[1], 1))) model.add(Dropout(0.2)) # Adding a second LSTM layer and some Dropout regularisation model.add(LSTM(units = 100, return_sequences = True)) model.add(Dropout(0.2)) # Adding a third LSTM layer and some Dropout regularisation model.add(LSTM(units = 50, return_sequences = True)) model.add(Dropout(0.2)) # Adding a fourth LSTM layer and some Dropout regularisation model.add(LSTM(units = 50)) model.add(Dropout(0.2)) # Adding the output layer model.add(Dense(units = 1)) # Compiling the RNN model.compile(optimizer = 'adam', loss = 'mean_squared_error') # Fitting the RNN to the Training set model.fit(X_train, y_train, epochs = 50, batch_size = 32) model.save("BTC-predict.h5") # + # Getting the predicted stock price dataset_train = df.iloc[:n, 1:2] dataset_test = df.iloc[n:, 1:2] dataset_total = pd.concat((dataset_train, dataset_test), axis = 0) inputs = dataset_total[len(dataset_total) - len(dataset_test) - d:].values inputs = inputs.reshape(-1,1) inputs = sc.transform(inputs) # - X_test = [] for i in range(d, inputs.shape[0]): X_test.append(inputs[i-d:i, 0]) X_test = np.array(X_test) X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1)) print(X_test.shape) predicted_stock_price = model.predict(X_test) predicted_stock_price = sc.inverse_transform(predicted_stock_price) df['Date']=df.index df=df.reset_index(drop=True) df # + plt.plot(df.loc[n:, 'Date'],dataset_test.values, color = 'red', label = 'Real Butcoin Stock Price') plt.plot(df.loc[n:, 'Date'],predicted_stock_price, color = 'blue', label = 'Predicted Bitcoin Stock Price') plt.title('Bitcoin Price Prediction') plt.xlabel('Time') plt.ylabel('Bitcoin Price') plt.legend() 
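# rotate the date tick labels so they stay readable on the time axis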
plt.xticks(rotation=90) plt.show() # - # ## Get tomorrow's predicted price df # + ## Add a dummy row at the end. This will not be used to predict. df.loc[len(df)]=df.loc[len(df)-1] # - df # + # Getting the predicted stock price dataset_train = df.iloc[:n, 1:2] dataset_test = df.iloc[n:, 1:2] dataset_total = pd.concat((dataset_train, dataset_test), axis = 0) inputs = dataset_total[len(dataset_total) - len(dataset_test) - d:].values inputs = inputs.reshape(-1,1) inputs = sc.transform(inputs) # - X_test = [] for i in range(d, inputs.shape[0]): X_test.append(inputs[i-d:i, 0]) X_test = np.array(X_test) X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1)) print(X_test.shape) predicted_stock_price = model.predict(X_test) predicted_stock_price = sc.inverse_transform(predicted_stock_price) float(predicted_stock_price[-1]) print("Tomorrow's predicted price = $", float(predicted_stock_price[-1])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # **Traffic Sign Recognition** # # ## Writeup # # ### You can use this file as a template for your writeup if you want to submit it as a markdown file, but feel free to use some other method and submit a pdf if you prefer. # # --- # # **Build a Traffic Sign Recognition Project** # # The goals / steps of this project are the following: # * Load the data set (see below for links to the project data set) # * Explore, summarize and visualize the data set # * Design, train and test a model architecture # * Use the model to make predictions on new images # * Analyze the softmax probabilities of the new images # * Summarize the results with a written report # # # [//]: # (Image References) # # [image1]: ./images/classes_vis.png "Classes visualization" # [image2]: ./images/class_vis.png "Diffrent images for one class" # [image3]: ./images/dataset_dist.png "Distribution of images across classes" # [image4]: ./images/jittered.png "Jittered image example" # [image5]: ./images/normalised.png "Normalised image" # [image6]: ./images/testsigns.png "New Test Signs" # [image7]: ./images/predictions.png "Predictions" # [image8]: ./images/softmax_prob.png "Softmax probabilities" # # ## Rubric Points # ### Here I will consider the [rubric points](https://review.udacity.com/#!/rubrics/481/view) individually and describe how I addressed each point in my implementation. # # --- # ### Writeup / README # # #### 1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf. You can use this template as a guide for writing the report. The submission includes the project code. # # You're reading it! # # ### Data Set Summary & Exploration # # #### 1. Provide a basic summary of the data set. In the code, the analysis should be done using python, numpy and/or pandas methods rather than hardcoding results manually. # # I used the pandas library to calculate summary statistics of the traffic # signs data set: # # * The size of training set is 34799 # * The size of the validation set is 4410 # * The size of test set is 12630 # * The shape of a traffic sign image is 32 x 32 x 3 (RGB) # * The number of unique classes/labels in the data set is 43 # # See code in Step 1 (**Traffic_Sign_ClassifierFinal.ipynb**). # # #### 2. Include an exploratory visualization of the dataset. 
#
# Below are exploratory visualizations of the dataset.
#
# ![alt text][image1]
# ![alt text][image2]
#
# * Many images differ in perspective, sign size, brightness and background. This is especially visible in the images for a particular class (here the 100 km/h speed limit).
# * All the images have the same size of 32 x 32 px (RGB).
#
# * The next bar charts illustrate the distribution of images across the datasets. It is clear from the histograms that some classes are underrepresented. This might result in lower performance of the ANN for these signs. Following the recommendation from [Traffic Sign Recognition with Multi-Scale Convolutional Networks](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf), fake (jittered) images should be added to the dataset.
# ![alt text][image3]
#
# See code in Step 1 (**Traffic_Sign_ClassifierFinal.ipynb**).
#
# ### Design and Test a Model Architecture
#
# #### 1. Describe how you preprocessed the image data. What techniques were chosen and why did you choose these techniques? Consider including images showing the output of each preprocessing technique. Pre-processing refers to techniques such as converting to grayscale, normalization, etc. (OPTIONAL: As described in the "Stand Out Suggestions" part of the rubric, if you generated additional data for training, describe why you decided to generate additional data, how you generated the data, and provide example images of the additional data. Then describe the characteristics of the augmented training set like number of images in the set, number of images for each class, etc.)
#
# Most of the improvements were inspired by [Traffic Sign Recognition with Multi-Scale Convolutional Networks](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf) and by a classroom exercise with the LeNet implementation.
#
# * Conversion to grayscale - Not much improvement recorded. However, training time seems shorter than for a 3-channel image, which can be useful with large datasets, e.g. when jittered images are added.
#
# * Standardisation to between -1 and 1 - Easy to implement and worked very well. However, it is important that the result is a float, otherwise the effect is quite the opposite: a big drop in accuracy.
#
# * Conversion to YUV and normalisation of the Y channel - A noticeable improvement with respect to the initial 89%, but in the later stage of the project this technique was dropped in favour of the global normalisation and standardisation.
#
# * Global normalisation - Implemented using cv2.equalizeHist(). This provided a significant boost to the training and validation accuracy. Very likely the increased contrast (see figure below) helps the network extract more features.
# ![alt text][image5]
# * Additional data - The histograms of the original dataset indicated that the classes were not equally represented. Also, the mentioned article suggests adding a few augmented images per original image. The suggested augmentations were: random translation by [-2, 2] pixels, random rotation within [-15, +15] degrees and random scaling within [-10, 10] %. Here is an example of an original image and an augmented image (a minimal code sketch of this jittering follows this list):
# ![alt text][image4]
# * The original dataset has 34799 images, whereas the augmented dataset contains 208794 images. The augmentation introduced another significant boost, allowing the validation accuracy to consistently exceed 96%.
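#
# A minimal sketch of the kind of jittering described above, using OpenCV and NumPy. The helper name `jitter` and the way the three transforms are combined are illustrative only, mirroring the quoted ranges (translation within [-2, 2] px, rotation within [-15, 15] degrees, scaling within [-10, 10] %); this is not the exact augmentation code from **Traffic_Sign_ClassifierFinal.ipynb**.

# +
import cv2
import numpy as np

def jitter(image):
    """Return a randomly translated, rotated and scaled copy of a 32x32 traffic sign image."""
    rows, cols = image.shape[:2]
    tx, ty = np.random.uniform(-2, 2, size=2)         # translation in pixels
    angle = np.random.uniform(-15, 15)                # rotation in degrees
    scale = np.random.uniform(0.9, 1.1)               # scaling factor (+/- 10 %)
    M = cv2.getRotationMatrix2D((cols / 2, rows / 2), angle, scale)
    M[:, 2] += (tx, ty)                               # add the random translation to the affine matrix
    return cv2.warpAffine(image, M, (cols, rows), borderMode=cv2.BORDER_REPLICATE)

# e.g. five jittered copies per original image (X_train is assumed to be an array of 32x32 images)
# X_augmented = np.array([jitter(img) for img in X_train for _ in range(5)])
# -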
# # Final pre-processing involves: # Step 1: Global normalisation # Step 2: Standardisation # Step 3: Generate additional jittered images # # See code in Step 2:Pre-process the Data Set (**Traffic_Sign_ClassifierFinal.ipynb**). # # #### 2. Describe what your final model architecture looks like including model type, layers, layer sizes, connectivity, etc.) Consider including a diagram and/or table describing the final model. # # The LeNet network, as suggested, was chosen as a base for my model. The mentioned paper suggested further modifications but once a dropout was added after each layer the model started to perform very well and focus was placed on tuning the hyperparameters and adding fake data. # # My final model consisted of the following layers: # # | Layer | Description | # |:---------------------:|:---------------------------------------------:| # | Input | 32x32x3 RGB image | # | Convolution 5x5 | 1x1 stride, valid padding, outputs 28x28x12 | # | RELU | | # | Dropout | 0.9 | # | Max pooling | 2x2 stride, same padding, outputs 14x14x6 | # | Dropout | 0.9 | # | Convolution 5x5 | 1x1 stride, valid padding, outputs 10x10x16 | # | RELU | | # | Dropout | 0.9 | # | Max pooling | 2x2 stride, same padding, outputs 5x5x16 | # | Dropout | 0.9 | # | Flatten | outputs 400 | # | Fully connected | outputs 120 | # | RELU | | # | Dropout | 0.9 | # | Fully connected | outputs 84 | # | RELU | | # | Dropout | 0.9 | # | Fully connected | outputs 43| # # See code in Step 2: Architecture (**Traffic_Sign_ClassifierFinal.ipynb**). # # #### 3. Describe how you trained your model. The discussion can include the type of optimizer, the batch size, number of epochs and any hyperparameters such as learning rate. # # * The Adam Optimiser as a default optimiser performed very well. The Gradient Descent Optimizer was also tried few times but its performance was worse than Adam's. # * The hyperparameters were adjust by trial and error. The final settings were Epochs=30, Batch=128, learning rate=0.0005, dropout probability 0.9 (10%), mu=0, sigma=0.1, # * Training time was quite fast and took approximately just several minutes on the NVIDA GTX 980. However, making TensorFlow to work on a GPU required quite some time. # # See code in Step 2:Train, Validate and Test the Model (**Traffic_Sign_ClassifierFinal.ipynb**). # # #### 4. Describe the approach taken for finding a solution and getting the validation set accuracy to be at least 0.93. Include in the discussion the results on the training, validation and test sets and where in the code these were calculated. Your approach may have been an iterative process, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think the architecture is suitable for the current problem. # # # My final model results were: # * training set accuracy of 100% # * validation set accuracy of 98.5% # * test set accuracy of 96.6% # # If an iterative approach was chosen: # * What was the first architecture that was tried and why was it chosen? # The adopted LeNet network from the classroom's handwriting exercise produced good initial results i.e. 89% hence it was used. # * What were some problems with the initial architecture? # A correct adoption in terms of layers sizes and selection of a correct sigma parameter. Once sigma was changed from 0.01 to 0.1 much better results were obtained. 
# * How was the architecture adjusted and why was it adjusted? Typical adjustments could include choosing a different model architecture, adding or taking away layers (pooling, dropout, convolution, etc), using an activation function or changing the activation function. One common justification for adjusting an architecture would be due to overfitting or underfitting. A high accuracy on the training set but low accuracy on the validation set indicates over fitting; a low accuracy on both sets indicates under fitting. # The attempt to replicate the suggested architecture as in the [Traffic Sign Recognition with Multi-Scale Convolutional Networks](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf) failed. Instead a dropout of 10% was introduced after each layer to address the overfitting. Different level of dropout was tried but 10% seem to work the best for my model. # * Which parameters were tuned? How were they adjusted and why? # The batch size and epochs were set fixed for most of trials. The major tuning parameter was the learning rate. It was initially set to 0.001 but gradually it was reduced to 0.0008 and then to 0.0005. # * What are some of the important design choices and why were they chosen? For example, why might a convolution layer work well with this problem? How might a dropout layer help with creating a successful model? # It was noticed that the model was overfitting. To address this a dropout was added. The introduction of dropout was one of the major boosts to the model's accuracy. # # If a well known architecture was chosen: # * What architecture was chosen? # * Why did you believe it would be relevant to the traffic sign application? # The convolutional networks used in the LeNet network and the mentioned paper demonstrated very good performance in symbols recognition. And as stated in the paper they can work in real time on embedded systems makes them an excellent tool for autonomous vehicles. # * How does the final model's accuracy on the training, validation and test set provide evidence that the model is working well? # 96.6% out of 12630 images were predicted correctly which is strong enough evidence that model works well. However, 4410 images for a validation set seem to be not enough to accurately monitor training progres. Further work would increase this amount to approx. 20% of a training dataset size. # # # ### Test a Model on New Images # # #### 1. Choose five German traffic signs found on the web and provide them in the report. For each image, discuss what quality or qualities might be difficult to classify. # # Here are 9 German traffic signs that I found on the web: # # ![alt text][image6] # # The images are different in quality and size; they were cropped out from the original images. # # The yield, no vehicles and no entry signs should be easy to recognise due to their unique features. Next 4 signs are triangular with a red border and black symbol. It expected these would be a bigger challenge for a model. The final two sings for speed limits also share some similar features so it would be interesting to see if the model can distinguish them correctly. # # The images were resized into 32x32 RGB px with use of cv2.resize(, , interpolation=cv2.INTER_AREA). The interpolation can play significant role so it was chosen such way the imported images resemble the quality of images from the training set. # # See code in Step 3: Load and Output the Images (**Traffic_Sign_ClassifierFinal.ipynb**). # # #### 2. 
Discuss the model's predictions on these new traffic signs and compare the results to predicting on the test set. At a minimum, discuss what the predictions were, the accuracy on these new predictions, and compare the accuracy to the accuracy on the test set (OPTIONAL: Discuss the results in more detail as described in the "Stand Out Suggestions" part of the rubric).
#
# Here are the results of the prediction:
# ![alt text][image7]
#
# | Image | Prediction |
# |:---------------------:|:---------------------------------------------:|
# | Yield | Yield |
# | No vehicles | **Speed limit 80km/h** |
# | No entry | No entry |
# | General caution | General caution |
# | Dangerous curve to the right | Dangerous curve to the right |
# | Road work | Road work |
# | Wild animals | Wild animals |
# | Speed limit 60km/h | Speed limit 60km/h |
# | Speed limit 80km/h | **Speed limit 120km/h** |
#
# The model was able to correctly guess 7 of the 9 traffic signs, which gives an accuracy of 77.78%.
#
# See code in Step 3: Predict the Sign Type for Each Image (**Traffic_Sign_ClassifierFinal.ipynb**).
#
# #### 3. Describe how certain the model is when predicting on each of the five new images by looking at the softmax probabilities for each prediction. Provide the top 5 softmax probabilities for each image along with the sign type of each probability. (OPTIONAL: as described in the "Stand Out Suggestions" part of the rubric, visualizations can also be provided such as bar charts)
#
# The bar charts below visualise the softmax probabilities for each image found on the web.
#
# ![alt text][image8]
#
# The final trained network was very confident in its guesses for the 7 correctly recognised images.
#
# The four triangular signs were predicted correctly with top probabilities reaching 100%; it is worth noticing that the remaining four probabilities for these images also mostly belong to triangular signs, so the network has clearly learned the shape they share.
#
# The predictions for the speed limit 60km/h, speed limit 80km/h and no vehicles signs also include other speed limit signs, which suggests the network has no trouble recognising the feature unique to this group: a white circle with a red border. Very likely this is why the no vehicles sign was misclassified as a speed limit 80km/h. A similar case occurs for the speed limit 80km/h sign, which was recognised as the speed limit 120km/h; however, one would rather expect it to be confused with another two-digit speed limit than with a three-digit one. It is worth highlighting that, prior to this final run, the network had no trouble recognising all the signs correctly with almost 100% confidence. As all hyperparameters were fixed, this difference is very likely due to the randomness of the generated additional images.
#
# There is still room for improvement, as demonstrated in [Traffic Sign Recognition with Multi-Scale Convolutional Networks](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf), where a network achieved a 99.17% test accuracy on this dataset. But even with such high accuracy it would be an interesting exercise to apply these models to videos, especially from city centres with many ads and other obstructions, which would make traffic sign recognition much more challenging.
#
# See code in Step 3: Softmax Probabilities For Each Image Found on the Web (**Traffic_Sign_ClassifierFinal.ipynb**).
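#
# Purely as an illustration of the quantity shown in these bar charts, the standalone NumPy cell below sketches how the top-5 softmax probabilities can be derived from a vector of class logits. The `logits` values here are made up for the 43 sign classes and are not outputs of the trained network.

# +
import numpy as np

def top_k_softmax(logits, k=5):
    """Return the k most probable class indices and their softmax probabilities."""
    z = logits - np.max(logits)             # shift logits for numerical stability
    probs = np.exp(z) / np.sum(np.exp(z))   # softmax
    top = np.argsort(probs)[::-1][:k]       # indices of the k largest probabilities
    return top, probs[top]

# made-up logits standing in for the network output on one web image
classes, p = top_k_softmax(np.random.randn(43))
print(list(zip(classes.tolist(), np.round(p, 3).tolist())))
# -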
# # ### (Optional) Visualizing the Neural Network (See Step 4 of the Ipython notebook for more details) # #### 1. Discuss the visual output of your trained network's feature maps. What characteristics did the neural network use to make classifications? # # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + def pass_x(path): x = np.empty((0,28),float) row = np.array([]) xf = open(path,'r') line = xf.readline() while line: for i in range(28): row = np.append(row,float(line)) line = xf.readline() x = np.append(x,np.expand_dims(row, axis=0),axis=0) row = np.array([]) xf.close() return x def pass_y(path): y = np.array([]) with open(path,'r') as f: for line in f: y = np.append(y,float(line)) if 'str' in line: break return y # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.6 64-bit (''venv'': venv)' # metadata: # interpreter: # hash: 1dd613a0944e0ee4877c49f3f7544f0058632aedafee0ea9352026a66388253a # name: python3 # --- # # Olympic Games Exploratory Data Analysis # Before we begin, let's set up some useful settings: # - Max number of columns to be displayed = 100 # - Max number of columns to be displayed = 100 # + import pandas as pd pd.set_option('display.max_columns', 100) pd.set_option('display.max_rows', 100) # - # ### First step: read and glimpse the dataset # # In this EDA, we'll use the ["120 years of Olympic history: athletes and results"](https://www.kaggle.com/heesoo37/120-years-of-olympic-history-athletes-and-results) Kaggle dataset, locally available in this repo in `raw_data\athlete_events.csv` . # # Let's first read the dataset: df = pd.read_csv("raw_data/athlete_events.csv") # ### Q0: How many rows and columns are there in this dataset? # print(df.shape) # Over 271 thousand competitors in the last 120 years of Olympics! Wow! # # # Let's get some basic info on the available data: print(df.info()) # Lots of infos available! Let's take a glimpse on actual data: df.head() # Each row represents a competitor in a specific event from a specific olympic games. Interesting, very interesting. # ### Q1: Which are the oldest olympic summer and winter games with data available in the dataset? # # To solve this one, we may resort to the `np.sort()` function: # # + import numpy as np np.sort(df['Year'].unique()) # .unique() to return only one ocurrence for each olympic year # - # The first olympic game with data available is actually the first one in modern age, 1896 Olympic Summer games, in Athens. # # # ### Q2: Which game had the greatest number of registered competitors? # # To answer this one, we may resort to `df.value_counts()` : df['Year'].value_counts() # Well, the one with greatest number of competitors was not one of the last ones, but rather the 1992 Summer Games! Very interesting! # # # ### Q3.1: What is the range of competing athletes' age? # # This one is rather simple: # + import numpy as np min_age_all_sports = np.amin(df['Age']) max_age_all_sports = np.amax(df['Age']) print(f'Age ranging from {min_age_all_sports} to {max_age_all_sports}') # - # ### Q3.2: What is the most common athlete age found in games? # # One could guess that most athletes are young, in their finest physical forms. But is this true? Let's find out. 
df.groupby(by="Age")["Age"].count().sort_values(ascending=False).head() # Interesting! Most common age is 23 years old, followed by ages in twenties range. But is the age spread or tightly concentrade around this value? # df["Age"].describe() # display all major statistics (mean, median, std, quartiles) at once # Well, indeed, most athletes (75%) had 28 or less years while competing. The youngest of all was a 10-year old child! And the oldest one was a 97-year old senior! Impressive! # # Is Age "evenly distributed", in the sense of being not side-skewed or not too spiked /flatted? We can quickly glance this by looking at its kurtosis and skewness values. df["Age"].skew() # retrieve its skewness # So, as Age distribution has a positive skewness, it is right-skewed, i.e. skewed towards the right, having its most common value (mode), mean and median all concentrated in the left side, with a long tail to the right df["Age"].kurt() # retrieve its kurtosis # As Age distribution has a kurtosis > 3, it is leptokurtic, i.e. a little "spikier" than normal distribution, with more mass concentrated around its central values (mean, median, mode). # # By only looking at its kurtosis and skewness, we found Age distritubion is assymetric to the left (i.e. with smaller values of Age being more common) and "spikier" (i.e. much concentrated around mean, median and mode). If are a "seeing is believing" kind of person, let's make a simple histogram/distribution graph to confirm it: # + import seaborn as sns sns.displot(df["Age"], discrete=True) # - # Just as we have found previously! # # So, this brief analysis confirm that, in general, most athletes are very young while competing, relying in their finest physical forms to complete most sports' events. # # But.. does this result hold for most sports? Is there a sport where seniors compete most? We shall see this one next. But first, one may ask: is medal-winners Age distribution any how similar to the general athlete distribution? Let's find out. df.query('Medal in ("Gold", "Silver", "Bronze")')["Age"].describe() print(df.query('Medal in ("Gold", "Silver", "Bronze")')["Age"].kurt()) print(df.query('Medal in ("Gold", "Silver", "Bronze")')["Age"].skew()) # So, winners' distribution is quite similar, assimetric to the left and spikier, but less so. We can check this in its distribution plot. sns.displot(df.query('Medal in ("Gold", "Silver", "Bronze")')["Age"], discrete=True) # Its form is very similar, see? OK, it's not that easy to see just looking at each graph. Let's plot them overlapped. # + mod_df = df.assign(Medallist=["Yes" if Medal in ("Gold", "Silver", "Bronze") else "No" for Medal in df.Medal ])[["Age", "Medallist"]] # compute a new column "Medallist" telling if athlete was or not a medalist (regardless of whic medal it achieved) sns.displot(mod_df, x="Age", hue ="Medallist", discrete=True) # - # Now, you see! Medallists' and non-medalists' Age distribution is very similar, with Medallist much less frequent (of course). # ### Q3.3: What is the distribution of age in various sports? # # Now, to answer this one, we must look not only at the most common value, but also other meaningful statistics of Age attribute in various sports. Let's start with the usual `describe` method: df[['Age', 'Sport']].groupby('Sport').describe() # Looking at table above, we see some interesting facts: # * Rhythmic Gymnastics is a clear outlier with youger athletes - its 75th-percentile is 20 years old! 
# * Shooting, Polo, Equestrianism, Croquet, Alpinism, Art Competitions, Roque are outliers with older athletes - their 75th-percentile are 39, 39, 40, 42.5, 47.5, 54, 61.5 years old, respectively! # * More popular team sports like Football (Soccer), Volleyball, Basketball are very alike and aligned with general statistics - their 75th-percentile are 26, 28, 28 years old, respectively # # # To better see the relationship, let's plot some of the above cited sports. # + sports_df = df.query(' Sport in ("Croquet", "Alpinism", "Roque") ')[['Age', 'Sport']] # selecting less popular sports together, to not distort graph with very different count scales sns.displot(sports_df, x="Age", hue ="Sport", discrete=True) # + sports_df = df.query(' Sport in ("Rhythmic Gymnastics", "Equestrianism") ')[['Age', 'Sport']] # selecting somewhat popular sports, to not distort graph with very different count scales sns.displot(sports_df, x="Age", hue ="Sport", discrete=True) # + sports_df = df.query(' Sport in ("Football", "Volleyball", "Basketball") ')[['Age', 'Sport']] # selecting very popular sports, to not distort graph with very different count scales sns.displot(sports_df, x="Age", hue ="Sport", discrete=True) # - # Now, it's important to note that some sports may have policies restricting teams mainly to youger athletes. This is indeed the case of Football, which has a policy restricting the number of athletes with more than 23 years, which clearly reflects in its above graph. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D from keras.layers import Activation, Dropout, Flatten, Dense from keras import backend as K # + # dimensions of our images. img_width, img_height = 150, 150 train_data_dir = '/Users/SyedAli/Desktop/CPics' validation_data_dir = '/Users/SyedAli/Desktop/CPics' nb_train_samples = 2000 nb_validation_samples = 800 epochs = 50 batch_size = 16 # + if K.image_data_format() == 'channels_first': input_shape = (3, img_width, img_height) else: input_shape = (img_width, img_height, 3) model = Sequential() model.add(Conv2D(32, (3, 3), input_shape=input_shape)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(64)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) # + # this is the augmentation configuration we will use for training train_datagen = ImageDataGenerator( rescale=1. / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) # this is the augmentation configuration we will use for testing: # only rescaling test_datagen = ImageDataGenerator(rescale=1. 
/ 255) train_generator = train_datagen.flow_from_directory( train_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') model.fit_generator( train_generator, steps_per_epoch=nb_train_samples // batch_size, epochs=epochs, validation_data=validation_generator, validation_steps=nb_validation_samples // batch_size) model.save_weights('first_try.h5') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import gc # # 1. generate file with count of meter_reading equals 0 # count meter_reading equals 0 per site_id, timestamp and meter # + # %%time train_df3= pd.read_csv("train.csv") building_df = pd.read_csv("building_metadata.csv") train_df3 = train_df3.merge(building_df[['site_id', 'building_id']], on='building_id', how='left') toto = train_df3.groupby(['site_id','timestamp', 'meter']).meter_reading.transform(lambda x: x[x.eq(0)].size) train_df3['cnt_tmp_meter_siteid'] = toto train_df3.to_feather('train_with_cnt_per_tmp_meter.feather') # - # # 2. generate file with hot encoding of meters per building # a table listing all the building with 4 columns corresponding to the 4 meters. if the building has a partical meter, the value is 1 else it is 0. # + # %%time train_df= pd.read_csv("train.csv") train_df = train_df.set_index(['building_id', 'meter', 'timestamp']) train_df = train_df.unstack(1) train_df.fillna(-1) train_df2 = train_df train_df2.columns = ['meter_reading_0','meter_reading_1','meter_reading_2','meter_reading_3',] train_df2.reset_index(inplace=True) for col in ['meter_reading_0','meter_reading_1','meter_reading_2','meter_reading_3',]: train_df2[col].loc[~train_df2[col].isnull()]=1 train_df2[col].loc[train_df2[col].isnull()]=0 del train_df2['timestamp'] train_df3 = train_df2.drop_duplicates(subset=['building_id', 'meter_reading_0','meter_reading_1','meter_reading_2','meter_reading_3',], keep="last").reset_index(drop=True) train_df3['meter_reading_0'] = train_df3.groupby('building_id')['meter_reading_0'].transform(max) train_df3['meter_reading_1'] = train_df3.groupby('building_id')['meter_reading_1'].transform(max) train_df3['meter_reading_2'] = train_df3.groupby('building_id')['meter_reading_2'].transform(max) train_df3['meter_reading_3'] = train_df3.groupby('building_id')['meter_reading_3'].transform(max) train_df3 = train_df3.drop_duplicates(subset=['building_id', 'meter_reading_0','meter_reading_1','meter_reading_2','meter_reading_3',], keep="last").reset_index(drop=True) train_df3.to_feather('building_all_meters.feather') # - # # 3. cleanup train dataset to remove "vertical lines" # in his kernel, Ganfear shows the raw data https://www.kaggle.com/ganfear/missing-data-and-zeros-visualized and visualize when the target (meter_reading) is missing or 0. We can clearly see that there are some vertical blue lines: these lines most likely mean that the 0s were added and it is even more likely that they were added when these vertical lines happen on several meters, at the same time and the same site. The goal of the code below is to remove most of them. # Note: a single groupby takes 3.5 hours to be calculated. 
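# As an aside, the zero-counts computed with `transform(lambda x: x[x.eq(0)].size)` in steps 1 and 3 can also be obtained with a vectorized `transform('sum')` on a boolean flag, which avoids the slow Python lambda. A minimal sketch mirroring the step-1 computation (`tmp` is just an illustrative name, using the same CSV files as above; timings not verified here):

# +
import pandas as pd

tmp = pd.read_csv("train.csv")
tmp = tmp.merge(pd.read_csv("building_metadata.csv")[['site_id', 'building_id']],
                on='building_id', how='left')

# Flag zero readings once, then let the built-in 'sum' aggregation broadcast the
# per-group count of zeros back onto every row of the group.
tmp['is_zero'] = (tmp['meter_reading'] == 0).astype('int8')
tmp['cnt_tmp_meter_siteid'] = tmp.groupby(['site_id', 'timestamp', 'meter'])['is_zero'].transform('sum')
# -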
# + # %%time building_df = pd.read_csv("building_metadata.csv") train = pd.read_csv("train.csv") building_df['cnt_building_per_site'] = building_df.groupby(['site_id']).building_id.transform(lambda x: x.size) train = train.merge(building_df[['building_id', 'cnt_building_per_site', 'site_id']], on='building_id', how='left') print('starting') train['cnt_mreadEQ0_per_tmp_site'] = train.groupby(['timestamp','site_id']).meter_reading.transform(lambda x: x[x.eq(0)].size) print('done 1') train['cnt_mreadEQ0_per_tmp_site_building'] = train.groupby(['timestamp','site_id','building_id']).meter_reading.transform(lambda x: x[x.eq(0)].size) print('done 2') train['sum_meter_reading'] = train.groupby(['building_id', 'meter']).meter_reading.transform('sum') print('done 3') print(train.shape) df_train2 = pd.read_feather('train_with_cnt_per_tmp_meter.feather') train['cnt_tmp_meter_siteid'] = df_train2.cnt_tmp_meter_siteid ### groupby('site_id','timestamp', 'meter') x[x.eq(0)].size) del df_train2 # + print(train.shape) train = train.query('not (building_id <= 104 & meter == 0 & timestamp <= "2016-05-21")') print(train.shape) train = train[~(((train.cnt_mreadEQ0_per_tmp_site_building>2) & (train.meter_reading==0)))] print(train.shape) train = train[~(((train.cnt_mreadEQ0_per_tmp_site_building==2) & (train.meter_reading==0) & (train.meter==0)))] print(train.shape) train = train[~(((train.site_id==5) &(train.cnt_tmp_meter_siteid/train.cnt_building_per_site>.35) & (train.meter_reading==0)))] print(train.shape) train = train[~(((train.site_id==6) &(train.cnt_tmp_meter_siteid/train.cnt_building_per_site>.35) & (train.meter_reading==0)))] print(train.shape) train = train[~(((train.site_id==8) &(train.cnt_tmp_meter_siteid/train.cnt_building_per_site>.35) & (train.meter_reading==0)))] print(train.shape) train = train[~(((train.site_id==9) &(train.cnt_tmp_meter_siteid/train.cnt_building_per_site>.35) & (train.meter_reading==0)))] print(train.shape) train = train[~(((train.site_id==14) &(train.cnt_tmp_meter_siteid/train.cnt_building_per_site>.35) & (train.meter_reading==0)))] print(train.shape) train = train[~(((train.site_id==15) &(train.cnt_tmp_meter_siteid/train.cnt_building_per_site>.35) & (train.meter_reading==0)))] print(train.shape) train = train[['building_id', 'meter', 'timestamp', 'meter_reading']].reset_index(drop=True) train.to_feather('train_cleanup_001.feather') # - # # 4. clean up datasets (train & test) # # ####### code extracted from https://www.kaggle.com/purist1024/ashrae-simple-data-cleanup-lb-1-08-no-leaks # + def make_is_bad_zero(Xy_subset, min_interval=48, summer_start=3000, summer_end=7500): """Helper routine for 'find_bad_zeros'. This operates upon a single dataframe produced by 'groupby'. We expect an additional column 'meter_id' which is a duplicate of 'meter' because groupby eliminates the original one.""" meter = Xy_subset.meter_id.iloc[0] is_zero = Xy_subset.meter_reading == 0 if meter == 0: # Electrical meters should never be zero. Keep all zero-readings in this table so that # they will all be dropped in the train set. 
return is_zero transitions = (is_zero != is_zero.shift(1)) all_sequence_ids = transitions.cumsum() ids = all_sequence_ids[is_zero].rename("ids") if meter in [2, 3]: # It's normal for steam and hotwater to be turned off during the summer keep = set(ids[(Xy_subset.timestamp < summer_start) | (Xy_subset.timestamp > summer_end)].unique()) is_bad = ids.isin(keep) & (ids.map(ids.value_counts()) >= min_interval) elif meter == 1: time_ids = ids.to_frame().join(Xy_subset.timestamp).set_index("timestamp").ids is_bad = ids.map(ids.value_counts()) >= min_interval # Cold water may be turned off during the winter jan_id = time_ids.get(0, False) dec_id = time_ids.get(8283, False) if (jan_id and dec_id and jan_id == time_ids.get(500, False) and dec_id == time_ids.get(8783, False)): is_bad = is_bad & (~(ids.isin(set([jan_id, dec_id])))) else: raise Exception(f"Unexpected meter type: {meter}") result = is_zero.copy() result.update(is_bad) return result def find_bad_zeros(X, y): """Returns an Index object containing only the rows which should be deleted.""" Xy = X.assign(meter_reading=y, meter_id=X.meter) is_bad_zero = Xy.groupby(["building_id", "meter"]).apply(make_is_bad_zero) return is_bad_zero[is_bad_zero].index.droplevel([0, 1]) # - # `find_bad_sitezero` identifies the "known-bad" electrical readings from the first 141 days of the data for site 0 (i.e. UCF). def find_bad_sitezero(X): """Returns indices of bad rows from the early days of Site 0 (UCF).""" return X[(X.timestamp < 3378) & (X.site_id == 0) & (X.meter == 0)].index # `find_bad_building1099` identifies the most absurdly high readings from building 1099. These are orders of magnitude higher than all data, and have been emperically seen in LB probes to be harmful outliers. def find_bad_building1099(X, y): """Returns indices of bad rows (with absurdly high readings) from building 1099.""" return X[(X.building_id == 1099) & (X.meter == 2) & (y > 3e4)].index # Finally, `find_bad_rows` combines all of the above together to allow you to do a one-line cleanup of your data. def find_bad_rows(X, y): return find_bad_zeros(X, y).union(find_bad_sitezero(X)).union(find_bad_building1099(X, y)) # + from meteocalc import Temp, dew_point, heat_index, wind_chill, feels_like def c2f(T): return T * 9 / 5. 
+ 32 def windchill(T, v): return (10*v**.5 - v +10.5) * (33 - T) def prepareweather(df): df['RH'] = 100 - 5 * (df['air_temperature']-df['dew_temperature']) # df['RH_above50'] = (df['RH'] > 50).astype(int) df['heat'] = df.apply(lambda x: heat_index(c2f(x.air_temperature), x.RH).c, axis=1) df['windchill'] = df.apply(lambda x: windchill(x.air_temperature, x.wind_speed), axis=1) df['feellike'] = df.apply(lambda x: feels_like(c2f(x.air_temperature), x.RH, x.wind_speed*2.237).c, axis=1) return df def add_lag_feature(weather_df, window=3): group_df = weather_df.groupby('site_id') cols = ['air_temperature', 'dew_temperature', 'heat', 'windchill', 'feellike'] rolled = group_df[cols].rolling(window=window, min_periods=0) lag_mean = rolled.mean().reset_index().astype(np.float32) lag_max = rolled.max().reset_index().astype(np.float16) lag_min = rolled.min().reset_index().astype(np.float16) lag_std = rolled.std().reset_index().astype(np.float16) for col in cols: weather_df[f'{col}_mean_lag{window}'] = lag_mean[col] # weather_df[f'{col}_max_lag{window}'] = lag_max[col] # weather_df[f'{col}_min_lag{window}'] = lag_min[col] # weather_df[f'{col}_std_lag{window}'] = lag_std[col] def fill_weather_dataset(weather_df): # Find Missing Dates time_format = "%Y-%m-%d %H:%M:%S" start_date = datetime.datetime.strptime(weather_df['timestamp'].min(),time_format) end_date = datetime.datetime.strptime(weather_df['timestamp'].max(),time_format) total_hours = int(((end_date - start_date).total_seconds() + 3600) / 3600) hours_list = [(end_date - datetime.timedelta(hours=x)).strftime(time_format) for x in range(total_hours)] missing_hours = [] for site_id in range(16): site_hours = np.array(weather_df[weather_df['site_id'] == site_id]['timestamp']) new_rows = pd.DataFrame(np.setdiff1d(hours_list,site_hours),columns=['timestamp']) new_rows['site_id'] = site_id weather_df = pd.concat([weather_df,new_rows]) weather_df = weather_df.reset_index(drop=True) # for col in weather_df.columns: # if col != 'timestamp': # if weather_df[col].isna().sum(): # weather_df['na_'+col] = weather_df[col].isna().astype(int) # weather_df['weath_na_total'] = weather_df.isna().sum(axis=1) # Add new Features weather_df["datetime"] = pd.to_datetime(weather_df["timestamp"]) weather_df["day"] = weather_df["datetime"].dt.day weather_df["week"] = weather_df["datetime"].dt.week weather_df["month"] = weather_df["datetime"].dt.month # Reset Index for Fast Update weather_df = weather_df.set_index(['site_id','day','month']) air_temperature_filler = pd.DataFrame(weather_df.groupby(['site_id','day','month'])['air_temperature'].mean(),columns=["air_temperature"]) weather_df.update(air_temperature_filler,overwrite=False) # Step 1 cloud_coverage_filler = weather_df.groupby(['site_id','day','month'])['cloud_coverage'].mean() # Step 2 cloud_coverage_filler = pd.DataFrame(cloud_coverage_filler.fillna(method='ffill'),columns=["cloud_coverage"]) weather_df.update(cloud_coverage_filler,overwrite=False) due_temperature_filler = pd.DataFrame(weather_df.groupby(['site_id','day','month'])['dew_temperature'].mean(),columns=["dew_temperature"]) weather_df.update(due_temperature_filler,overwrite=False) # Step 1 sea_level_filler = weather_df.groupby(['site_id','day','month'])['sea_level_pressure'].mean() # Step 2 sea_level_filler = pd.DataFrame(sea_level_filler.fillna(method='ffill'),columns=['sea_level_pressure']) weather_df.update(sea_level_filler,overwrite=False) wind_direction_filler = 
pd.DataFrame(weather_df.groupby(['site_id','day','month'])['wind_direction'].mean(),columns=['wind_direction']) weather_df.update(wind_direction_filler,overwrite=False) wind_speed_filler = pd.DataFrame(weather_df.groupby(['site_id','day','month'])['wind_speed'].mean(),columns=['wind_speed']) weather_df.update(wind_speed_filler,overwrite=False) # Step 1 precip_depth_filler = weather_df.groupby(['site_id','day','month'])['precip_depth_1_hr'].mean() # Step 2 precip_depth_filler = pd.DataFrame(precip_depth_filler.fillna(method='ffill'),columns=['precip_depth_1_hr']) weather_df.update(precip_depth_filler,overwrite=False) weather_df = weather_df.reset_index() weather_df = weather_df.drop(['datetime','day','week','month'],axis=1) weather_df = timestamp_align(weather_df) print('add heat, RH...') weather_df = prepareweather(weather_df) print('add lag features') add_lag_feature(weather_df, window=3) return weather_df # Original code from https://www.kaggle.com/gemartin/load-data-reduce-memory-usage by @gemartin from pandas.api.types import is_datetime64_any_dtype as is_datetime from pandas.api.types import is_categorical_dtype def reduce_mem_usage(df, use_float16=False): """ Iterate through all the columns of a dataframe and modify the data type to reduce memory usage. """ start_mem = df.memory_usage().sum() / 1024**2 print("Memory usage of dataframe is {:.2f} MB".format(start_mem)) for col in df.columns: if is_datetime(df[col]) or is_categorical_dtype(df[col]): continue col_type = df[col].dtype if col_type != object: c_min = df[col].min() c_max = df[col].max() if str(col_type)[:3] == "int": if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max: df[col] = df[col].astype(np.int8) elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max: df[col] = df[col].astype(np.int16) elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max: df[col] = df[col].astype(np.int32) elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max: df[col] = df[col].astype(np.int64) else: if use_float16 and c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max: df[col] = df[col].astype(np.float16) elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max: df[col] = df[col].astype(np.float32) else: df[col] = df[col].astype(np.float64) else: df[col] = df[col].astype("category") end_mem = df.memory_usage().sum() / 1024**2 print("Memory usage after optimization is: {:.2f} MB".format(end_mem)) print("Decreased by {:.1f}%".format(100 * (start_mem - end_mem) / start_mem)) return df def features_engineering(df): # Sort by timestamp df.sort_values("timestamp") df.reset_index(drop=True) # Add more features df["timestamp"] = pd.to_datetime(df["timestamp"],format="%Y-%m-%d %H:%M:%S") df["hour"] = df["timestamp"].dt.hour df["month"] = df["timestamp"].dt.month df["weekday"] = df["timestamp"].dt.weekday # # Remove Unused Columns # drop = ['sea_level_pressure", "wind_direction", "wind_speed", "precip_depth_1_hr"] # df = df.drop(drop, axis=1) gc.collect() return df def building_features(building_meta_df): building_addfeatures = pd.read_feather('building_all_meters.feather') for col in building_meta_df.columns: if col != 'timestamp': if building_meta_df[col].isna().sum(): building_meta_df['na_'+col] = building_meta_df[col].isna().astype(int) building_meta_df['build_na_total'] = building_meta_df.isna().sum(axis=1) building_meta_df = pd.concat([building_meta_df, building_addfeatures[['meter_reading_0', 'meter_reading_1', 'meter_reading_2', 'meter_reading_3']]], 
axis=1) from sklearn.preprocessing import LabelEncoder le = LabelEncoder() building_meta_df.primary_use = le.fit_transform(building_meta_df.primary_use) building_meta_df['cnt_building_per_site'] = building_meta_df.groupby(['site_id']).building_id.transform(lambda x: x.size) building_meta_df['cnt_building_per_site_prim'] = building_meta_df.groupby(['site_id', 'primary_use']).building_id.transform(lambda x: x.size) building_meta_df['sqr_mean_per_site'] = building_meta_df.groupby(['site_id', ]).square_feet.transform('median') building_meta_df['sqr_mean_per_prim_site'] = building_meta_df.groupby(['site_id', 'primary_use']).square_feet.transform('median') return building_meta_df # + import pandas as pd import numpy as np import os, gc import warnings from lightgbm import LGBMRegressor from sklearn.base import BaseEstimator, RegressorMixin, clone from sklearn.metrics import mean_squared_log_error pd.set_option("max_columns", 500) def input_file(file): path = f"./{file}" if not os.path.exists(path): return path + ".gz" return path def compress_dataframe(df): result = df.copy() for col in result.columns: col_data = result[col] dn = col_data.dtype.name if dn == "object": result[col] = pd.to_numeric(col_data.astype("category").cat.codes, downcast="integer") elif dn == "bool": result[col] = col_data.astype("int8") elif dn.startswith("int") or (col_data.round() == col_data).all(): result[col] = pd.to_numeric(col_data, downcast="integer") else: result[col] = pd.to_numeric(col_data, downcast='float') return result def read_train(): df = pd.read_feather("./train_cleanup_001.feather") dft = pd.read_csv('test.csv') # df = features_engineering(df) ############################### df.timestamp = (pd.to_datetime(df["timestamp"]) - pd.to_datetime("2016-01-01")).dt.total_seconds() // 3600 return compress_dataframe(df) def read_building_metadata(): df = pd.read_csv(input_file("building_metadata.csv")) df = building_features(df) df = compress_dataframe(df).fillna(-1).set_index("building_id") return df site_GMT_offsets = [-5, 0, -7, -5, -8, 0, -5, -5, -5, -6, -7, -5, 0, -6, -5, -5] def read_weather_train(fix_timestamps=True, interpolate_na=True, add_na_indicators=True): df = pd.read_csv(input_file("weather_train.csv"), parse_dates=["timestamp"]) ################ print('add heat, RH...') df = prepareweather(df) print('add lag features') add_lag_feature(df, window=3) ################# df.timestamp = (df.timestamp - pd.to_datetime("2016-01-01")).dt.total_seconds() // 3600 if fix_timestamps: GMT_offset_map = {site: offset for site, offset in enumerate(site_GMT_offsets)} df.timestamp = df.timestamp + df.site_id.map(GMT_offset_map) if interpolate_na: site_dfs = [] for site_id in df.site_id.unique(): # Make sure that we include all possible hours so that we can interpolate evenly site_df = df[df.site_id == site_id].set_index("timestamp").reindex(range(8784)) site_df.site_id = site_id for col in [c for c in site_df.columns if c != "site_id"]: if add_na_indicators: site_df[f"had_{col}"] = ~site_df[col].isna() site_df[col] = site_df[col].interpolate(limit_direction='both', method='linear') # Some sites are completely missing some columns, so use this fallback site_df[col] = site_df[col].fillna(df[col].median()) site_dfs.append(site_df) df = pd.concat(site_dfs).reset_index() # make timestamp back into a regular column elif add_na_indicators: for col in df.columns: if df[col].isna().any(): df[f"had_{col}"] = ~df[col].isna() return compress_dataframe(df).set_index(["site_id", "timestamp"]) def 
combined_train_data(fix_timestamps=True, interpolate_na=True, add_na_indicators=True): Xy = compress_dataframe(read_train().join(read_building_metadata(), on="building_id").join( read_weather_train(fix_timestamps, interpolate_na, add_na_indicators), on=["site_id", "timestamp"]).fillna(-1)) return Xy.drop(columns=["meter_reading"]), Xy.meter_reading def _add_time_features(X): return X.assign(tm_day_of_week=((X.timestamp // 24) % 7), tm_hour_of_day=(X.timestamp % 24)) # + def read_test(): df = pd.read_csv(input_file("test.csv"), parse_dates=["timestamp"]) df.timestamp = (df.timestamp - pd.to_datetime("2016-01-01")).dt.total_seconds() // 3600 return compress_dataframe(df).set_index("row_id") def read_weather_test(fix_timestamps=True, interpolate_na=True, add_na_indicators=True): df = pd.read_csv(input_file("weather_test.csv"), parse_dates=["timestamp"]) ############### print('add heat, RH...') df = prepareweather(df) print('add lag features') add_lag_feature(df, window=3) ############## df.timestamp = (df.timestamp - pd.to_datetime("2016-01-01")).dt.total_seconds() // 3600 if fix_timestamps: GMT_offset_map = {site: offset for site, offset in enumerate(site_GMT_offsets)} df.timestamp = df.timestamp + df.site_id.map(GMT_offset_map) if interpolate_na: site_dfs = [] for site_id in df.site_id.unique(): # Make sure that we include all possible hours so that we can interpolate evenly site_df = df[df.site_id == site_id].set_index("timestamp").reindex(range(8784, 26304)) site_df.site_id = site_id for col in [c for c in site_df.columns if c != "site_id"]: if add_na_indicators: site_df[f"had_{col}"] = ~site_df[col].isna() site_df[col] = site_df[col].interpolate(limit_direction='both', method='linear') # Some sites are completely missing some columns, so use this fallback site_df[col] = site_df[col].fillna(df[col].median()) site_dfs.append(site_df) df = pd.concat(site_dfs).reset_index() # make timestamp back into a regular column elif add_na_indicators: for col in df.columns: if df[col].isna().any(): df[f"had_{col}"] = ~df[col].isna() return compress_dataframe(df).set_index(["site_id", "timestamp"]) def combined_test_data(fix_timestamps=True, interpolate_na=True, add_na_indicators=True): X = compress_dataframe(read_test().join(read_building_metadata(), on="building_id").join( read_weather_test(fix_timestamps, interpolate_na, add_na_indicators), on=["site_id", "timestamp"]).fillna(-1)) return X # + X, y = combined_train_data() bad_rows = find_bad_rows(X, y) pd.Series(bad_rows.sort_values()).to_csv("rows_to_drop.csv", header=False, index=False) # - categorical_columns = [ "building_id", "meter", "site_id", "primary_use", "had_air_temperature", "had_cloud_coverage","had_dew_temperature", "had_precip_depth_1_hr", "had_sea_level_pressure", "had_wind_direction","had_wind_speed", "tm_day_of_week", "tm_hour_of_day" ] # + dropcol = [] categorical_columns = [f for f in categorical_columns if f not in dropcol] # - X.columns # + X = X.drop(index=bad_rows) y = y.reindex_like(X) # Additional preprocessing X = compress_dataframe(_add_time_features(X)) X = X.drop(dropcol, axis=1) # Raw timestamp doesn't help when prediction y = np.log1p(y) # - y XX = X.copy() XX['meter_reading'] = y.values XX.reset_index(drop=True, inplace=True) XX.to_feather('train_simple_cleanup.feather') Xt = combined_test_data() Xt = compress_dataframe(_add_time_features(Xt)) Xt = Xt.drop(dropcol, axis=1) # Raw timestamp doesn't help when prediction Xt = Xt.reset_index() Xt.to_feather('test_simple_cleanup.feather') # --- # jupyter: # jupytext: # 
text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Model Specification for 1st-Level fMRI Analysis # # Nipype provides also an interfaces to create a first level Model for an fMRI analysis. Such a model is needed to specify the study specific information, such as **condition**, their **onsets** and **durations**. For more information, make sure to check out [Model Specificaton](http://nipype.readthedocs.io/en/latest/users/model_specification.html) and [nipype.algorithms.modelgen](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html) # ## Simple Example # # Let's consider a simple experiment, where we have three different stimuli such as ``'faces'``, ``'houses'`` and ``'scrambled pix'``. Now each of those three conditions has different stimuli onsets, but all of them have a stimuli presentation duration of 3 seconds. # # So to summarize: # # conditions = ['faces', 'houses', 'scrambled pix'] # onsets = [[0, 30, 60, 90], # [10, 40, 70, 100], # [20, 50, 80, 110]] # durations = [[3], [3], [3]] # # The way we would create this model with Nipype is almsot as simple as that. The only step that is missing is to put this all into a ``Bunch`` object. This can be done as follows: # + from nipype.interfaces.base import Bunch conditions = ['faces', 'houses', 'scrambled pix'] onsets = [[0, 30, 60, 90], [10, 40, 70, 100], [20, 50, 80, 110]] durations = [[3], [3], [3]] subject_info = Bunch(conditions=conditions, onsets=onsets, durations=durations) # - # It's also possible to specify additional regressors. For this you need to additionally specify: # # - **``regressors``**: list of regressors that you want to include in the model (must correspond to the number of volumes in the functional run) # - **``regressor_names``**: name of the regressors that you want to include # ## Example based on dataset # # Now let's look at a TSV file from our tutorial dataset. # !cat /data/ds000114/task-fingerfootlips_events.tsv # We can also use [pandas](http://pandas.pydata.org/) to create a data frame from our dataset. 
import pandas as pd trialinfo = pd.read_table('/data/ds000114/task-fingerfootlips_events.tsv') trialinfo.head() # Before we can use the onsets, we first need to split them into the three conditions: for group in trialinfo.groupby('trial_type'): print(group[0]) print(group[1].onset, group[1].duration) # The last thing we now need to to is to put this into a ``Bunch`` object and we're done: # + from nipype.interfaces.base import Bunch conditions = [] onsets = [] durations = [] for group in trialinfo.groupby('trial_type'): conditions.append(group[0]) onsets.append(group[1].onset.tolist()) durations.append(group[1].duration.tolist()) subject_info = Bunch(conditions=conditions, onsets=onsets, durations=durations) subject_info.items() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import sys import numba import swifter import timeit import pandas as pd import warnings warnings.filterwarnings('ignore') __author__ = '' __version__ = 'Python 2' # - data = pd.read_csv('../Data/bid_fe.csv') data.sample(5) len(data) hashmap = dict() memberkeys = sorted(list(set(data.MemberKey)))[60000:] len(memberkeys) start = timeit.default_timer() for key in memberkeys: hashmap[key] = data.query("MemberKey == '{0}'".format(key)) stop = timeit.default_timer() print "Hashmap generated in {0} seconds".format((stop-start)) data = data[data['MemberKey'].isin(memberkeys)] len(data) # ## Lender Winning Bids # + def lender_winning_bids(x): return hashmap[x["MemberKey"]].query("Status == 'Winning' and ListingStartDate < '{0}'".format(x["ListingStartDate"]))["Bid_Key"].nunique() start = timeit.default_timer() data["LenderWinningBids"] = data[["MemberKey", "ListingStartDate"]].swifter.apply(lender_winning_bids, axis=1) stop = timeit.default_timer() print "Feature engineering completed in {0} seconds".format((stop-start)) # data[["MemberKey", "ListingKey", "ListingStartDate", "Bid_Key", "BidCreationDate", "Status", "LenderWinningBids"]].query("MemberKey=='365434772269169153CB3D3'").sort_values("ListingStartDate") # - # ## Lender Total Bids # + def lender_total_bids(x): return hashmap[x["MemberKey"]].query("ListingStartDate < '{0}'".format(x["ListingStartDate"]))["Bid_Key"].nunique() start = timeit.default_timer() data["LenderTotalBids"] = data[["MemberKey", "ListingStartDate"]].swifter.apply(lender_total_bids, axis=1) stop = timeit.default_timer() print "Feature engineering completed in {0} seconds".format((stop-start)) # data[["MemberKey", "ListingKey", "ListingStartDate", "Bid_Key", "BidCreationDate", "Status", "LenderWinningBids", "LenderTotalBids"]].head() # - data[["MemberKey", "ListingKey", "ListingStartDate", "Bid_Key", "BidCreationDate", "Status", "LenderWinningBids", "LenderTotalBids"]].query("MemberKey=='{0}'".format(memberkeys[0])).sort_values("ListingStartDate") # ## Save Data data.to_csv("lender_bid_fe_65k.csv", index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # .. 
_vat_userguide: # # Onderwijsnummer Strings # ============ # + active="" # Introduction # ------------ # # The function :func:`clean_nl_onderwijsnummer() ` cleans a column containing Onderwijsnummer (the Dutch student identification number) strings, and standardizes them in a given format. The function :func:`validate_nl_onderwijsnummer() ` validates either a single Onderwijsnummer strings, a column of Onderwijsnummer strings or a DataFrame of Onderwijsnummer strings, returning `True` if the value is valid, and `False` otherwise. # - # Onderwijsnummer strings can be converted to the following formats via the `output_format` parameter: # # * `compact`: only number strings without any seperators or whitespace, like "101222331" # * `standard`: Onderwijsnummer strings with proper whitespace in the proper places. Note that in the case of Onderwijsnummer, the compact format is the same as the standard one. # # Invalid parsing is handled with the `errors` parameter: # # * `coerce` (default): invalid parsing will be set to NaN # * `ignore`: invalid parsing will return the input # * `raise`: invalid parsing will raise an exception # # The following sections demonstrate the functionality of `clean_nl_onderwijsnummer()` and `validate_nl_onderwijsnummer()`. # ### An example dataset containing Onderwijsnummer strings import pandas as pd import numpy as np df = pd.DataFrame( { "vat": [ '1012.22.331', '2112.22.337', 'BE 428759497', 'BE431150351', "002 724 334", "hello", np.nan, "NULL", ], "address": [ "123 Pine Ave.", "main st", "1234 west main heights 57033", "apt 1 789 s maple rd manhattan", "robie house, 789 north main street", "1111 S Figueroa St, Los Angeles, CA 90015", "(staples center) 1111 S Figueroa St, Los Angeles", "hello", ] } ) df # ## 1. Default `clean_nl_onderwijsnummer` # # By default, `clean_nl_onderwijsnummer` will clean vat strings and output them in the standard format with proper separators. from dataprep.clean import clean_nl_onderwijsnummer clean_nl_onderwijsnummer(df, column = "vat") # ## 2. Output formats # This section demonstrates the output parameter. # ### `standard` (default) clean_nl_onderwijsnummer(df, column = "vat", output_format="standard") # ### `compact` clean_nl_onderwijsnummer(df, column = "vat", output_format="compact") # ## 3. `inplace` parameter # # This deletes the given column from the returned DataFrame. # A new column containing cleaned Onderwijsnummer strings is added with a title in the format `"{original title}_clean"`. clean_nl_onderwijsnummer(df, column="vat", inplace=True) # ## 4. `errors` parameter # ### `coerce` (default) clean_nl_onderwijsnummer(df, "vat", errors="coerce") # ### `ignore` clean_nl_onderwijsnummer(df, "vat", errors="ignore") # ## 4. `validate_nl_onderwijsnummer()` # `validate_nl_onderwijsnummer()` returns `True` when the input is a valid Onderwijsnummer. Otherwise it returns `False`. # # The input of `validate_nl_onderwijsnummer()` can be a string, a Pandas DataSeries, a Dask DataSeries, a Pandas DataFrame and a dask DataFrame. # # When the input is a string, a Pandas DataSeries or a Dask DataSeries, user doesn't need to specify a column name to be validated. # # When the input is a Pandas DataFrame or a dask DataFrame, user can both specify or not specify a column name to be validated. If user specify the column name, `validate_nl_onderwijsnummer()` only returns the validation result for the specified column. If user doesn't specify the column name, `validate_nl_onderwijsnummer()` returns the validation result for the whole DataFrame. 
from dataprep.clean import validate_nl_onderwijsnummer print(validate_nl_onderwijsnummer("1012.22.331")) print(validate_nl_onderwijsnummer("2112.22.337")) print(validate_nl_onderwijsnummer('BE 428759497')) print(validate_nl_onderwijsnummer('BE431150351')) print(validate_nl_onderwijsnummer("004085616")) print(validate_nl_onderwijsnummer("hello")) print(validate_nl_onderwijsnummer(np.nan)) print(validate_nl_onderwijsnummer("NULL")) # ### Series validate_nl_onderwijsnummer(df["vat"]) # ### DataFrame + Specify Column validate_nl_onderwijsnummer(df, column="vat") # ### Only DataFrame validate_nl_onderwijsnummer(df) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (delta) # language: python # name: delta # --- # + # # !conda install python-snappy # + # # %matplotlib widget # %matplotlib inline # # %matplotlib qt import json import requests import matplotlib.pyplot as plt import pandas as pd import numpy as np import folium import geopandas as gpd from shapely.geometry import box, Polygon from pyproj import CRS from tqdm.notebook import tqdm from tqdm import trange # - # remove folium warning import warnings; warnings.filterwarnings('ignore', 'GeoSeries.notna', UserWarning) # + dates = requests.get('https://openaltimetry.org/data/icesat2/getTrackDate').json() tracks = gpd.read_file(r"../data/icesat-2/icesat2_tracks.shp") # NL # extents = [5.35,53.4,6.2,53.5] extents = [5.8062744140625, 53.36202615413913, 6.1791229248046875, 53.5092846887053] # zoom = 10 # DK # extents = [8.0557, 54.8955, 8.7615, 55.5938] # zoom = 8 # FR extents = [-1.28608, 44.60415, -0.9908, 44.7969] zoom = 10 polygon_geom = Polygon(zip([extents[0],extents[2],extents[2],extents[0]], [extents[1],extents[1],extents[3],extents[3]])) polygon = gpd.GeoDataFrame(index=[0], crs=CRS('EPSG:4326'), geometry=[polygon_geom]) # Clip data tracks_clip = gpd.clip(tracks, polygon) track_ids = tracks_clip.TrackId.values # - center = [polygon_geom.centroid.xy[1][0], polygon_geom.centroid.xy[0][0]] m = folium.Map(center, zoom_start=zoom) folium.GeoJson(polygon_geom).add_to(m) folium.GeoJson(tracks_clip).add_to(m) folium.LatLngPopup().add_to(m) m track_ids # track_id, track_date overpasses = [] for track_id in track_ids: for track_date in dates['track_{}'.format(track_id)].split(','): overpasses.append([track_id, track_date]) # + rows = [] # download data for all track_id, track_date combinations with tqdm(overpasses, ncols='100%') as t: log = lambda s: t.set_description(s); t.refresh() for track_id, track_date in t: log(f'Downloading data for track: {track_id}, date: {track_date}') # Paste the OpenAltimetry API URL for Photon here: OA_API_URL = 'https://openaltimetry.org/data/api/icesat2/atl03?' 
\ '&minx={}&miny={}&maxx={}&maxy={}&date={}&trackId={}' \ '&beamName=gt3r&beamName=gt3l&beamName=gt2r&beamName=gt2l&beamName=gt1r&beamName=gt1l' \ .format(extents[0], extents[1], extents[2], extents[3], track_date, track_id) # This function will request the 6 beams data using OpenAltimetry's API r = requests.get(OA_API_URL + '&client=jupyter') photon_data = r.json() # iterate over 6 beams for beam in photon_data: # every beam has data series with different confidences: # # {'beam_name': 'gt3r', # 'total_photon_count': 43383, # 'select_photon_count': 43383, # 'percentage': 100.0, # 'series': [{'name': 'Noise', # 'photon_count': 580, # 'data': [[55.48735236638138, 8.49802945875813, 37.546936], # [55.48734604692815, 8.498028327844818, 42.50964], beam_name = beam['beam_name'] for s in beam['series']: # every series has name (confidence) series_name = s['name'] for o in s['data']: # add rows row = { 'track_id': track_id, 'date': track_date, 'beam': beam_name, 'series': series_name, 'lon': round(o[1], 6), 'lat': round(o[0], 6), 'h': o[2], } rows.append(row) # - df = pd.DataFrame(rows) df df.memory_usage() df.h.hist(bins=100) # + # df.to_parquet('../data/out/icesat-2-DK.parquet') # df = pd.read_parquet('../data/out/icesat-2-DK.parquet') # df.to_parquet('../data/out/icesat-2-NL.parquet') # df = pd.read_parquet('../data/out/icesat-2-DK.parquet') df.to_parquet('../data/out/icesat-2-FR.parquet') # - <<<<<<< LOCAL CELL DELETED >>>>>>> df.to_parquet('../data/out/icesat-2-DK.parquet') #df = pd.read_parquet('../data/out/icesat-2-DK.parquet') df.columns df.series.unique() df.beam.unique() track_ids = df.track_id.unique() track_ids [ [track_id, df[df.track_id == track_id].date.unique()] for track_id in track_ids] # for track_id in track_ids: # track_df = df[df.track_id == track_id] # for date in track_df.date.unique(): # track_df2 = track_df[track_df.date == date] # print(track_id, date, len(track_df2)) print(df.beam.unique()) # + fig, ax = plt.subplots(1, 2, figsize=(20, 5)) # (df.series != 'Noise') & (df.series != 'Buffer') & data = df [(df.series != 'Noise') & (df.track_id == '229') & (df.date == '2020-01-10') & (df.beam == 'gt2l')] ax[0].plot(data.h, '.', markersize=5, alpha=0.2) ax[1].plot(df.x, df.y, 'k.', markersize=0.1, alpha=0.2) ax[1].plot(data.x, data.y, 'r.', markersize=0.1, alpha=0.2) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd data=pd.read_csv('xclara.csv') data.head() import numpy as np c1=data['V1'].values c2=data['V2'].values from sklearn.cluster import KMeans range_cluster = [1,2,3,5,7,10,15] sse={} # dictionary for n_clusters in range_cluster: kmeans=KMeans(n_clusters=n_clusters) kmeans_label=kmeans.fit(data) sse[n_clusters]=kmeans_label.inertia_ #inertia is the score function for Kmeans #inertia finds the fitting from matplotlib import pyplot as plt #plt.figure() plt.plot(list(sse.keys()),list(sse.values())) plt.xlabel('Number of Cluster') plt.ylabel('cost') plt.show() #Prefered K is seen to be as 3 plt.scatter(c1,c2,c='yellow') #represents the enrties as a cluster kmeans=KMeans(n_clusters=3)# any numnber can be putt here kmeans_label=kmeans.fit_predict(data)#helps to make the graph plt.scatter(c1,c2,c=kmeans_label)# are the 3 clusters which are to be plotted centroids=kmeans.cluster_centers_ #center points of the clusters in the graph centroids ##### make a KNN code on your own 
using any Dataset # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from datascience import * # %matplotlib inline import matplotlib.pyplot as plots plots.style.use('fivethirtyeight') import numpy as np # ## Lecture 23 ## # ### Percentiles # Manually compute the 55th percentile. x = make_array(43, 20, 51, 7, 28, 34) # Step 1. Sort the data np.sort(x) # + # Step 2. Figure out where 55th percentile would be. # - # OR: 1 Line of Code percentile(55, x) # If we tried to compute which element to take... 55 / 100 * 6 # ### Sample Median sf = Table.read_table('san_francisco_2015.csv') sf # Who is making the most money sf.sort('Total Compensation', descending=True).show(5) # Who is making the least money sf.sort('Total Compensation', descending=False).show(5) min_salary = 10 * 20 * 52 sf = sf.where('Total Compensation', are.above(min_salary)) pop_median = percentile(50, sf.column('Total Compensation')) pop_median our_sample = sf.sample(300, with_replacement=False) our_sample.show(5) percentile(50, our_sample.column('Total Compensation')) sf_bins = np.arange(0, 700000, 25000) sf.hist('Total Compensation', bins=sf_bins) plots.title('Population Distribution'); our_sample.hist('Total Compensation', bins=sf_bins) plots.title('Sample Distribution'); # # Variability of the Estimate def generate_sample_median(samp_size): our_sample = sf.sample(samp_size, with_replacement=False) return percentile(50, our_sample.column('Total Compensation')) sample_median = generate_sample_median(300) sample_median error = our_sample_median - pop_median error # # Quantifying Uncertainty # + sample_medians = make_array() for i in np.arange(1000): new_median = generate_sample_median(300) sample_medians = np.append(sample_medians, new_median) # + med_bins = np.arange(90000, 125001, 2500) Table().with_column( 'Sample Medians', sample_medians ).hist(bins = med_bins) plots.scatter(pop_median, 0, color="red"); # + err_bins = np.arange(-15000, 12501, 2500) Table().with_column( 'Errors', sample_medians - pop_median ).hist(bins = err_bins) plots.scatter(0, 0, color="red"); # - # # Bootstrap # + # Take a bootstrap (re)sample of size 300, WITH replacement boot_sample = our_sample.sample(300, with_replacement=True) boot_sample.hist('Total Compensation', bins=sf_bins) plots.title('Bootstrap sample'); print("Population Median = ", pop_median) print("Our Sample Median = ", sample_median) print("Bootstrap Sample Median = ", percentile(50,boot_sample.column('Total Compensation'))) # - def one_bootstrap_median(): single_sample = our_sample.sample() return percentile(50, single_sample.column('Total Compensation')) bootstrap_medians = make_array() for i in np.arange(1000): new_median = one_bootstrap_median() bootstrap_medians = np.append(bootstrap_medians, new_median) # + Table().with_column( 'Bootstrap Medians', bootstrap_medians ).hist('Bootstrap Medians', bins=med_bins) plots.scatter(pop_median, 0, color="red"); plots.scatter(sample_median, 0, color="blue"); # - # ## Confidence Intervals # + # Make an interval based on the middle 95% of bootstrap samples left = percentile(2.5, bootstrap_medians) right = percentile(97.5, bootstrap_medians) Table().with_column( 'Bootstrap Medians', bootstrap_medians ).hist('Bootstrap Medians', bins=med_bins) plots.plot([left, right], [0,0], color="gold",lw=3, zorder=1); plots.scatter(pop_median, 0, color="red", zorder=2); 
plots.scatter(sample_median, 0, color="blue", zorder=2); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Load data import pandas as pd col_names = ['sentiment','id','date','query_string','user','text'] data_path = 'training.1600000.processed.noemoticon.csv' tweet_data = pd.read_csv(data_path, header=None, names=col_names, encoding="ISO-8859-1").sample(frac=1) # .sample(frac=1) shuffles the data tweet_data = tweet_data[['sentiment', 'text']] # Disregard other columns print(tweet_data.head()) # + colab={} colab_type="code" id="tUX3YexFy_kz" # Preprocess function import re allowed_chars = ' AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz0123456789~`!@#$%^&*()-=_+[]{}|;:",./<>?' punct = '!?,.@#' maxlen = 280 def preprocess(text): return ''.join([' ' + char + ' ' if char in punct else char for char in [char for char in re.sub(r'http\S+', 'http', text, flags=re.MULTILINE) if char in allowed_chars]])[:maxlen] # - # Apply preprocessing tweet_data['text'] = tweet_data['text'].apply(preprocess) # Put __label__ in front of each sentiment tweet_data['sentiment'] = '__label__' + tweet_data['sentiment'].astype(str) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="teQJsIJay_k1" outputId="5b3c28ae-05b4-4cee-e2f9-df50489c23ca" # Save data import os # Create directory for saving data if it does not already exist data_dir = './processed-data' if not os.path.isdir(data_dir): os.mkdir(data_dir) # Save a percentage of the data (you could also only load a fraction of the data instead) amount = 0.125 tweet_data.iloc[0:int(len(tweet_data)*0.8*amount)].to_csv(data_dir + '/train.csv', sep='\t', index=False, header=False) tweet_data.iloc[int(len(tweet_data)*0.8*amount):int(len(tweet_data)*0.9*amount)].to_csv(data_dir + '/test.csv', sep='\t', index=False, header=False) tweet_data.iloc[int(len(tweet_data)*0.9*amount):int(len(tweet_data)*1.0*amount)].to_csv(data_dir + '/dev.csv', sep='\t', index=False, header=False) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="CRbruBmilCOt" outputId="5151d827-f90d-4ce6-f6f8-59b02be7f74f" # Memory management del tweet_data import gc; gc.collect() # + colab={"base_uri": "https://localhost:8080/", "height": 292} colab_type="code" id="BR5iIWRgy_k5" outputId="0e1616ab-b1c5-4007-d11d-5ff522fea812" # Load the data into Corpus format from flair.data_fetcher import NLPTaskDataFetcher from pathlib import Path corpus = NLPTaskDataFetcher.load_classification_corpus(Path(data_dir), test_file='test.csv', dev_file='dev.csv', train_file='train.csv') # + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="_6XPFKdqy_k7" outputId="68c8caa5-0b73-4e00-bb1d-53c7d8b575d8" # Make label dictionary label_dict = corpus.make_label_dictionary() # + colab={"base_uri": "https://localhost:8080/", "height": 350} colab_type="code" id="f-nIq7k6y_k9" outputId="2ea897cc-175b-4830-894c-48227d50f46f" # Load embeddings from flair.embeddings import WordEmbeddings, FlairEmbeddings word_embeddings = [WordEmbeddings('glove'), # FlairEmbeddings('news-forward'), # FlairEmbeddings('news-backward') ] # + colab={} colab_type="code" id="WiVHs1aRy_k_" # Initialize embeddings from flair.embeddings import DocumentRNNEmbeddings document_embeddings = DocumentRNNEmbeddings(word_embeddings, hidden_size=512, reproject_words=True, 
reproject_words_dimension=256) # + colab={} colab_type="code" id="y7x4mV67BPAh" # Create model from flair.models import TextClassifier classifier = TextClassifier(document_embeddings, label_dictionary=label_dict) # + colab={} colab_type="code" id="TRCGx_VdA-5F" # Create model trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(classifier, corpus) # + colab={"base_uri": "https://localhost:8080/", "height": 868} colab_type="code" id="AHgC91YoA-2J" outputId="a613d884-a079-480c-97bc-4ddd500bf9b6" # Train the model trainer.train('model-saves', learning_rate=0.1, mini_batch_size=32, anneal_factor=0.5, patience=8, max_epochs=200) # + colab={} colab_type="code" id="ozdqd8KiA-zZ" # Load the model and make predictions from flair.data import Sentence classifier = TextClassifier.load('model-saves/final-model.pt') pos_sentence = Sentence(preprocess('I love Python!')) neg_sentence = Sentence(preprocess('Python is the worst!')) classifier.predict(pos_sentence) classifier.predict(neg_sentence) print(pos_sentence.labels, neg_sentence.labels) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %load_ext autoreload # %autoreload 2 # %matplotlib inline # %config InlineBackend.figure_format = 'retina' # + import itertools import sys sys.path.insert(0, '..') import warnings warnings.filterwarnings('ignore') import matplotlib import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = (10, 8) plt.rcParams['font.size'] = 12 plt.rcParams['font.sans-serif'] = "Ubuntu" plt.rcParams['font.weight'] = "light" plt.rcParams["figure.facecolor"] = "white" from matplotlib import ticker from matplotlib import patheffects import numpy as np import seaborn as sns import xarray as xr import covid19 def myLogFormat(y,pos): # Find the number of decimal places required decimalplaces = int(np.maximum(-np.log10(y),0)) # =0 for numbers >=1 # Insert that number into a format string formatstring = '{{:.{:1d}f}}'.format(decimalplaces) # Return the formatted tick label return formatstring.format(y) # - # ## data # + #istat_path = 'comuni_giornaliero.csv' #_, istat = covid19.data.read_istat(istat_path) #istat_italy = istat.sel(year=2020).sum(['location', 'age_class']) #istat_italy -= istat_italy.sel(time=slice(None, '2020-02-20')).mean() #istat_italy = istat_italy.sel(time=slice(None, '2020-04-30')) #istat_italy # - # data_italy_path = 'dpc-covid19-ita-andamento-nazionale.csv' data_italy_regions_path = 'dpc-covid19-ita-regioni.csv' # data_italy_path = covid19.data.download('andamento-nazionale') data_italy_regions_path = covid19.data.download('regioni') # + ds_italy_regions = covid19.data.read_dpc(data_italy_regions_path) # ds_italy_regions = ds_italy_regions.drop(dim='location', labels=['Italy / Lombardia']) ds_italy_regions['daily_deaths'] = ds_italy_regions['deaths'].diff('time') ds_italy_regions['daily_confirmed'] = ds_italy_regions['confirmed'].diff('time') ds_italy_regions['daily_tests'] = ds_italy_regions['tests'].diff('time') ds_italy_regions['daily_tested'] = ds_italy_regions['tested'].diff('time') ds_italy_regions['daily_tpr'] = ds_italy_regions['daily_confirmed'] / ds_italy_regions['daily_tests'] ds_italy_regions['mortality'] = ds_italy_regions['deaths'] / ds_italy_regions['population'] * 1_000_000 ds_italy_regions['daily_mortality'] = ds_italy_regions['daily_deaths'] / ds_italy_regions['population'] * 1_000_000 
ds_italy_regions['daily_prevalence'] = ds_italy_regions['daily_confirmed'] / ds_italy_regions['population'] * 1_000_000 ds_italy_regions['daily_tests_pm'] = ds_italy_regions['daily_tests'] / ds_italy_regions['population'] * 1_000_000 ds_italy_regions['daily_tested_pm'] = ds_italy_regions['daily_tested'] / ds_italy_regions['population'] * 1_000_000 ds_italy_regions['current_severe_pm'] = ds_italy_regions['current_severe'] / ds_italy_regions['population'] * 1_000_000 ds_italy_regions['current_critical_pm'] = ds_italy_regions['current_critical'] / ds_italy_regions['population'] * 1_000_000 ds_italy_regions['daily_critical_pm'] = ds_italy_regions['daily_critical'] / ds_italy_regions['population'] * 1_000_000 for kind in ['daily_tests', 'daily_confirmed', 'daily_deaths', 'daily_tested', 'daily_mortality', 'daily_prevalence', 'daily_tests_pm', 'daily_tested_pm', "daily_critical_pm"]: ds_italy_regions[kind + '7'] = ds_italy_regions[kind].rolling({'time': 7}).mean() for kind in ['daily_confirmed', 'daily_deaths', 'daily_mortality', 'daily_prevalence']: ds_italy_regions[kind + '14'] = ds_italy_regions[kind].rolling({'time': 14}).mean() ds_italy_regions['daily_tpr7'] = ds_italy_regions['daily_confirmed7'] / ds_italy_regions['daily_tests7'] ds_italy_regions = ds_italy_regions.fillna(0) ds_italy_regions = ds_italy_regions.assign_coords({'location': ('location', [l.partition(' / ')[2] for l in ds_italy_regions.location.values])}) ds_italy_regions = ds_italy_regions.drop(['lat', 'lon', 'state_region', 'country']) ds_italy_regions = ds_italy_regions.drop(['deaths', 'confirmed', 'tests', 'tested', 'mortality']) # ds_italy_regions = ds_italy_regions.sel(time=slice(None, '2020-07-20')) # - tmp = ds_italy_regions.sortby(-ds_italy_regions['current_critical_pm'].isel(time=-1)).isel(time=-1) REGIONS = list(tmp.location.astype(str).values) tmp.to_dataframe() # [['daily_prevalence', 'daily_tpr']] # + ds_italy = ds_italy_regions.sum('location') ds_italy['daily_mortality'] = ds_italy['daily_deaths'] / ds_italy['population'] * 1_000_000 ds_italy['daily_prevalence'] = ds_italy['daily_confirmed'] / ds_italy['population'] * 1_000_000 ds_italy['daily_tests_pm'] = ds_italy['daily_tests'] / ds_italy['population'] * 1_000_000 ds_italy['daily_tested_pm'] = ds_italy['daily_tested'] / ds_italy['population'] * 1_000_000 ds_italy['current_severe_pm'] = ds_italy['current_severe'] / ds_italy['population'] * 1_000_000 ds_italy['current_critical_pm'] = ds_italy['current_critical'] / ds_italy['population'] * 1_000_000 ds_italy['daily_critical_pm'] = ds_italy['daily_critical'] / ds_italy['population'] * 1_000_000 ds_italy['daily_tpr'] = ds_italy['daily_confirmed'] / ds_italy['daily_tests'] for kind in ['daily_tests', 'daily_confirmed', 'daily_deaths', 'daily_tested', 'daily_mortality', 'daily_prevalence', 'daily_tests_pm', 'daily_tested_pm', "daily_critical_pm"]: ds_italy[kind + '7'] = ds_italy[kind].rolling({'time': 7}).mean() ds_italy['daily_confirmed14'] = ds_italy['daily_confirmed'].rolling({'time': 21}).mean() ds_italy['daily_tpr7'] = ds_italy['daily_confirmed7'] / ds_italy['daily_tests7'] ds_italy.to_dataframe().tail(22)[::-1][["daily_confirmed", "daily_confirmed7", "daily_prevalence7"]] # - # ## situation report # + window = 7 rr = ds_italy_regions.isel(time=slice(-window, None)) var = rr['daily_tpr7'] var1 = rr['current_severe_pm'] it_rr = ds_italy.isel(time=slice(-window, None)) it_var = it_rr['daily_tpr7'].expand_dims(location=['Italia']) it_var1 = (it_rr['current_severe_pm']).expand_dims(location=['Italia']) _, ax = 
plt.subplots() ax.yaxis.grid(color="lightgrey", linewidth=0.5) ax.xaxis.grid(color="lightgrey", linewidth=0.5) ylim = (0.003, .25) xlim = (3, 600) covid19.plot.scatter_xarray(var1, var, ax=ax, xlim=xlim, ylim=ylim) covid19.plot.scatter_xarray(it_var1, it_var, ax=ax) ax.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1., decimals=0)) _ = ax.set( xscale='log', yscale='log', xlim=xlim, ylim=ylim, title=f'Parametri di rischio COVID-19 del {str(var1.time.max().values)[:10]} - medie su 7 giorni', xlabel='pazienti attualmente ricoverati per milione di abitanti', ylabel='nuovi postivi / tamponi giornalieri (test positivity rate)', ) ax.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.)) ax.xaxis.set_major_formatter(ticker.FuncFormatter(myLogFormat)) # x = np.arange(1000) * .001 # ax.plot(x, 0.08 * x ) # + window = 7 rr = ds_italy_regions.isel(time=slice(-window, None)) var = rr['daily_tpr7'] var1 = rr['daily_prevalence7'] it_rr = ds_italy.isel(time=slice(-window, None)) it_var = it_rr['daily_tpr7'].expand_dims(location=['Italia']) it_var1 = (it_rr['current_severe_pm']).expand_dims(location=['Italia']) _, ax = plt.subplots() ax.yaxis.grid(color="lightgrey", linewidth=0.5) ax.xaxis.grid(color="lightgrey", linewidth=0.5) ylim = (0.001, .25) xlim = (2, 1000) covid19.plot.scatter_xarray(var1, var, ax=ax, xlim=xlim, ylim=ylim) covid19.plot.scatter_xarray(it_var1, it_var, ax=ax) ax.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1., decimals=0)) _ = ax.set( xscale='log', yscale='log', xlim=xlim, ylim=ylim, title=f'Parametri di rischio COVID-19 del {str(var1.time.max().values)[:10]} - medie su 7 giorni', xlabel='nuovi casi per milione di abitanti', ylabel='percentuale di tamponi positivi', ) ax.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.)) ax.xaxis.set_major_formatter(ticker.FuncFormatter(myLogFormat)) # x = np.arange(1000) * .001 # ax.plot(x, 0.08 * x ) # - _, ax = plt.subplots() ax.set(yscale='log', title='nuovi postivi / tamponi giornalieri (test positivity rate)') covid19.plot.plot_data(ax, ds_italy['daily_tpr7'], label='test positivity rate', marker=None) data = ds_italy['daily_confirmed7'] / ds_italy['daily_tested7'] # covid19.plot.plot_data(ax, data, label='tested positivity rate', marker=None, ratio=1.9) ax.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.)) ax.set(ylim=(0.002, 0.25)) _ = ax.legend() # + DAY = np.timedelta64(24 * 60 * 60, 's') PALETTE_ONE = list(sns.color_palette()) START_FIT = '2020-02-23' STOP_FIT = '2020-04-01' LAST_DAY = ds_italy.time[-1].values SHOWUNTIL = LAST_DAY + 30 * DAY EXTRAPOLATE = (np.datetime64('2020-03-01'), SHOWUNTIL) XLIM = EXTRAPOLATE XXX = ['daily_tests_pm7', 'daily_prevalence7', 'current_severe_pm', 'current_critical_pm', 'daily_mortality7'] FIT_PARAMS = { 'daily_tests_pm7': [], 'daily_prevalence7': ['2020-07-19', '2020-10-06'], 'current_severe_pm': ['2020-08-13', '2020-10-06'], 'current_critical_pm': ['2020-08-07', '2020-10-10'], 'daily_mortality7': ['2020-08-19', '2020-10-12'], 'daily_tpr7': [], 'daily_confirmed7': ['2020-10-06', '2020-10-19', '2020-10-21', '2020-10-23', '2020-11-01', '2020-11-23', None], 'daily_deaths7': ['2020-10-15', '2020-10-29', '2020-11-12', '2020-11-18'], "daily_critical_pm7": [], } #FIT_PARAMS = { # 'daily_tests_pm7': [], # 'daily_prevalence7': [], # 'current_severe_pm': [], # 'current_critical_pm': ['2020-04-15', '2020-07-10'], # 'daily_mortality7': ['2020-04-19', '2020-08-01'], # 'daily_tpr7': [], # 'daily_confirmed7': ['2020-10-06', None], #} RATIO = { 
'current_critical_pm': 0.165, 'daily_mortality7': 1 / 45, "daily_critical_pm7": 1 / 85, } DEALY = { 'current_critical_pm': -5, 'daily_mortality7': -7.5, } LABEL = { 'current_severe': 'pazienti attualmente ricoverati in reparto', 'current_confirmed': 'attualmente positivi', 'current_severe_pm': 'pazienti attualmente ricoverati in reparto', 'current_critical': 'pazienti attualmente in terapia intensiva', 'current_critical_pm': 'pazienti attualmente in terapia intensiva', 'daily_tests7': 'tamponi giornalieri (media su 7 giorni)', 'daily_tests_pm7': 'tamponi giornalieri (media su 7 giorni)', 'daily_tested7': 'tamponi giornalieri (media su 7 giorni)', 'daily_deaths': 'decessi giornalieri', 'daily_deaths7': 'decessi giornalieri (media su 7 giorni)', 'daily_confirmed7': 'nuovi casi giornalieri (media su 7 giorni)', 'daily_deaths14': 'decessi giornalieri (media su 14 giorni)', 'daily_confirmed14': 'nuovi casi giornalieri (media su 14 giorni)', 'daily_prevalence7': 'nuovi casi giornalieri (media su 7 giorni)', 'daily_tpr7': 'percentuale di test positivi (media su 7 giorni)', 'daily_prevalence_screening7': 'nuovi casi da screening (media su 7 giorni)', 'daily_tests_pm7': 'tamponi giornalieri (media su 7 giorni)', 'daily_tested_pm7': 'tamponi giornalieri (media su 7 giorni)', 'daily_mortality7': 'decessi giornalieri (media su 7 giorni)', 'daily_mortality14': 'decessi giornalieri (media su 14 giorni)', "daily_critical_pm7": "ingressi giornalieri in terapia intensiva (media su 7 giorni)", } # + fits = {} for kind, breaks in FIT_PARAMS.items(): if isinstance(breaks, int): breaks = [np.datetime64(b) + breaks * DAY if b is not None else b for b in FIT_PARAMS['current_severe']] fits[kind] = covid19.fit.fit_exponential_segments(ds_italy[kind], breaks=breaks, min_value=0, valid_ratio=0.1) # fits['daily_deaths7'] = [fit.shift(7).scale(0.013) for fit in fits['daily_confirmed7']] # fits['daily_confirmed7'].append(fits['daily_confirmed7'][-1].scale(.75)) # fits # + SHOW = ['daily_prevalence7', 'current_severe_pm', 'current_critical_pm', 'daily_mortality7', 'daily_tests_pm7', "daily_critical_pm7"] EXTRAPOLATE = (np.datetime64('2020-02-24'), np.datetime64('2021-08-01')) XLIM = EXTRAPOLATE _, ax = covid19.plot.subplots(subplot_kw={'xlim': XLIM, 'yscale': 'log', 'ylim': (0.07, 7000)}) for (kind, fits_kind), color in zip(fits.items(), PALETTE_ONE): if SHOW is not None and kind not in SHOW: continue label = LABEL.get(kind, '') ratio = RATIO.get(kind, 1.) 
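# RATIO (defined above) holds hand-tuned rescaling factors so that series of very
# different magnitude can share one log axis; in this cell the value is looked up
# but not passed to plot_data, so the curves are drawn unscaled (the rescaled
# comparison, with the "moltiplicato ..." labels, is in the next cell).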
covid19.plot.plot_data(ax, ds_italy[kind], label=label, color=color, date_interval=28, marker=None, annotate=True) #for i, fit in enumerate(fits_kind): # covid19.plot.plot_fit(ax, fit, color=color, marker=None, label=f'stima esponenziale', extrapolate=[0, 7], alpha=0.5) # covid19.plot.plot_data(ax, istat_italy, label='Surplus di decessi giornalieri per tutte le cause (dati parziali ISTAT)', color=color, marker='^', linestyle=':', date_interval=7) _ = ax.set_title(f'COVID-19 Italia - dati Protezione Civile') _ = ax.set(xlabel="", ylabel="# per milione di abitanti") _ = ax.yaxis.set_major_formatter(ticker.FuncFormatter(myLogFormat)) _ = ax.legend(loc='upper left') # + SHOW = ['daily_prevalence7', 'current_severe_pm', 'current_critical_pm', 'daily_mortality7', 'daily_tests_pm7', "daily_critical_pm7"] SHOW = ['daily_prevalence7', 'daily_critical_pm7'] EXTRAPOLATE = (np.datetime64('2021-06-01'), np.datetime64('2021-08-01')) XLIM = EXTRAPOLATE dealy = 0 _, ax = covid19.plot.subplots(subplot_kw={'xlim': XLIM, 'yscale': 'log', 'ylim': (4, 100)}) for (kind, fits_kind), color in zip(fits.items(), PALETTE_ONE): if SHOW is not None and kind not in SHOW: continue label = LABEL.get(kind, '') ratio = RATIO.get(kind, 1.) if kind == 'daily_mortality7': dealy = 0 if ratio != 1: label += f" moltiplicato {int(1 / ratio)}" covid19.plot.plot_data(ax, ds_italy[kind], label=label, color=color, date_interval=7, marker=None, annotate=True, ratio=ratio, delay=dealy) #for i, fit in enumerate(fits_kind): # covid19.plot.plot_fit(ax, fit, color=color, marker=None, label=f'stima esponenziale', extrapolate=[0, 7], alpha=0.5) # covid19.plot.plot_data(ax, istat_italy, label='Surplus di decessi giornalieri per tutte le cause (dati parziali ISTAT)', color=color, marker='^', linestyle=':', date_interval=7) _ = ax.set_title(f'COVID-19 Italia - dati Protezione Civile') _ = ax.set(xlabel="", ylabel="# per milione di abitanti") _ = ax.yaxis.set_major_formatter(ticker.FuncFormatter(myLogFormat)) _ = ax.legend(loc='upper left') # - # + HIGHLIGHT_DAY = None XLIM1 = (np.datetime64('2020-10-01'), np.datetime64('2021-07-15')) SHOW = ['daily_confirmed7'] _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM1, 'ylim': (0, 46000)}) count = 0 for (kind, fits_kind), color in zip(fits.items(), PALETTE_ONE): count += 1 if SHOW is not None and kind not in SHOW: continue ratio = RATIO.get(kind, 1) label = LABEL.get(kind, '') covid19.plot.plot_data(ax, ds_italy[kind], label=label, color=color, date_interval=14, markersize=4) try: ax.bar(ds_italy.time, ds_italy[kind[:-1]], color=color, alpha=0.25, label='nuovi casi giornalieri') ax.bar(ds_italy.time[HIGHLIGHT_DAY::7], ds_italy[kind[:-1]][HIGHLIGHT_DAY::7], color=color, alpha=0.25) except: pass #for i, fit in enumerate(fits_kind[-1:]): # covid19.plot.plot_fit(ax, fit, color=color, marker=None, label=f'stima esponenziale', extrapolate=[-3,7]) # covid19.plot.plot_data(ax, istat_italy, label='Surplus di decessi giornalieri per tutte le cause (dati parziali ISTAT)', color=color, marker='^', linestyle=':', date_interval=7) # ax.plot([np.datetime64('2020-11-07') + i * DAY for i in range(-6, 8)], 0.73 * np.array([37000, 32500, 38700, 43100, 46400, 52200, 54600, 61800, 59200, 67700, 74400, 80200, 88800, 94200]), ' x', color=color, label='nuovi casi giornalieri nello scenario del 2020-10-28') _ = ax.set_title(f'COVID-19 Italia - dati Protezione Civile') _ = ax.set(xlabel="", ylabel="") _ = ax.legend(loc='upper left') _ = ax.set(ylim=(0, None)) #means = [] #values = 
list(ds_italy['daily_confirmed'][-14:-7].values) #for i in range(-6, 8): # mean = fit.predict(LAST_DAY + i * DAY) # means.append(round(mean)) # values.append(round(mean * 7 - sum(values[-6:]))) #print(np.array([int(round(v / 100)) * 100 for v in values])) #print(0.73 * np.array([int(round(v / 100)) * 100 for v in values])) # + SHOW = ['daily_confirmed7'] _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM1, 'ylim': (0, 12000)}) data = ds_italy_regions.sel(location='Lombardia') count = 0 for (kind, fits_kind), color in zip(region_fits['Lombardia'].items(), PALETTE_ONE): color = 'tab:pink' count += 1 if SHOW is not None and kind not in SHOW: continue ratio = RATIO.get(kind, 1) label = LABEL.get(kind, '') covid19.plot.plot_data(ax, data[kind], label=label, color=color, date_interval=7, markersize=4) try: ax.bar(ds_italy.time, data[kind[:-1]], color=color, alpha=0.25, label='nuovi casi giornalieri') ax.bar(ds_italy.time[HIGHLIGHT_DAY::7], data[kind[:-1]][HIGHLIGHT_DAY::7], color=color, alpha=0.25) except: pass #for i, fit in enumerate(fits_kind): # covid19.plot.plot_fit(ax, fit, color=color, marker=None, label=f'stima esponenziale', extrapolate=[-3,7]) # covid19.plot.plot_data(ax, istat_italy, label='Surplus di decessi giornalieri per tutte le cause (dati parziali ISTAT)', color=color, marker='^', linestyle=':', date_interval=7) # ax.plot([np.datetime64('2020-11-07') + i * DAY for i in range(-6, 8)], 0.73 * np.array([9270.0, 7650.0, 9450.0, 12150.0, 12510.0, 14580.0, 15120.0, 17100.0, 16200.0, 18810.0, 22410.0, 23760.0, 26820.0, 28620.0]), ' x', color=color, label='nuovi casi giornalieri nello scenario del 2020-10-28') _ = ax.set_title(f'COVID-19 Lombardia - dati Protezione Civile') _ = ax.set(xlabel="", ylabel="") _ = ax.legend(loc='upper left') _ = ax.set(ylim=(0, None)) #means = [] #values = list(data['daily_confirmed'][-14:-7].values) #for i in range(-6, 8): # mean = fit.predict(LAST_DAY + i * DAY) # means.append(round(mean)) # values.append(round(mean * 7 - sum(values[-6:]))) #print(np.array([int(round(v / 100)) * 100 for v in values])) #print((0.9 * np.array([int(round(v / 100)) * 100 for v in values])).tolist()) # + SHOW = ['daily_deaths7'] _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM1, 'ylim': (0, 1010)}) data = ds_italy fits_data = fits count = 0 for (kind, fits_kind), color in zip(fits_data.items(), PALETTE_ONE): count += 1 if SHOW is not None and kind not in SHOW: continue print(color) ratio = RATIO.get(kind, 1) label = LABEL.get(kind, '') covid19.plot.plot_data(ax, data[kind], label=label, color=color, date_interval=7, markersize=4) try: ax.bar(data.time, data[kind[:-1]], color=color, alpha=0.25, label='decessi giornalieri') ax.bar(data.time[HIGHLIGHT_DAY::7], data[kind[:-1]][HIGHLIGHT_DAY::7], color=color, alpha=0.25) except: pass #for i, fit in enumerate(fits_kind[-2:]): # covid19.plot.plot_fit(ax, fit, color=color, marker=None, label=f'stima esponenziale', extrapolate=[-3,7]) # covid19.plot.plot_data(ax, istat_italy, label='Surplus di decessi giornalieri per tutte le cause (dati parziali ISTAT)', color=color, marker='^', linestyle=':', date_interval=7) # ax.plot([np.datetime64('2020-11-23') + i * DAY for i in range(1, 8)], [835, 858, 760, 809, 804, 676, 747], ' x', color=color, label='stima dei decessi nello scenario del 2020-11-16') _ = ax.set_title(f'COVID-19 Italia - dati Protezione Civile') _ = ax.set(xlabel="", ylabel="") _ = ax.legend(loc='upper left') _ = ax.set(ylim=(0, None)) #means = [] #values = [int(v) for v in 
ds_italy['daily_deaths'].isel(time=slice(-7, None)).values] #for i in range(1, 8): # mean = fit.predict(LAST_DAY + i * DAY) # means.append(int(round(mean))) # value = int(round(mean * 7 - sum(values[-6:]))) # values.append(value) #print(means) #print(values) # + SHOW = ['daily_deaths7'] _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM1, 'ylim': (0, 260)}) data = ds_italy_regions.sel(location='Lombardia') # + ds_italy_regions.sel(location='Piemonte') + ds_italy_regions.sel(location="Valle d'Aosta") fits_data = region_fits['Lombardia'] count = 0 for (kind, fits_kind), color in zip(fits_data.items(), PALETTE_ONE): count += 1 if SHOW is not None and kind not in SHOW: continue print(color) ratio = RATIO.get(kind, 1) label = LABEL.get(kind, '') covid19.plot.plot_data(ax, data[kind], label=label, color=color, date_interval=7, markersize=4) try: ax.bar(data.time, data[kind[:-1]], color=color, alpha=0.25, label='decessi giornalieri') ax.bar(data.time[HIGHLIGHT_DAY::7], data[kind[:-1]][HIGHLIGHT_DAY::7], color=color, alpha=0.25) except: pass #for i, fit in enumerate(fits_kind[-2:]): # covid19.plot.plot_fit(ax, fit, color=color, marker=None, label=f'stima esponenziale', extrapolate=[-3,7]) # covid19.plot.plot_data(ax, istat_italy, label='Surplus di decessi giornalieri per tutte le cause (dati parziali ISTAT)', color=color, marker='^', linestyle=':', date_interval=7) #ax.plot([np.datetime64('2020-11-16') + i * DAY for i in range(1, 8)], [182, 191, 229, 160, 203, 227, 147], ' x', color=color, label='stima dei decessi nello scenario del 2020-11-16') _ = ax.set_title(f'COVID-19 Lombardia - dati Protezione Civile') _ = ax.set(xlabel="", ylabel="") _ = ax.legend(loc='upper left') _ = ax.set(ylim=(0, None)) # + SHOW = ['daily_tpr7'] _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM1, 'ylim': (0, 0.20)}) count = 0 for (kind, fits_kind), color in zip(fits.items(), PALETTE_ONE): count += 1 if SHOW is not None and kind not in SHOW: continue color = 'tab:blue' ratio = RATIO.get(kind, 1) label = LABEL.get(kind, '') covid19.plot.plot_data(ax, ds_italy[kind], label=label, color=color, date_interval=14) try: ax.bar(ds_italy.time, ds_italy[kind[:-1]], color=color, alpha=0.25, label='percentuale di test positivi (dato giornaliero)') ax.bar(ds_italy.time[HIGHLIGHT_DAY::7], ds_italy[kind[:-1]][HIGHLIGHT_DAY::7], color=color, alpha=0.25) except: pass for i, fit in enumerate(fits_kind): covid19.plot.plot_fit(ax, fit, color=color, marker=None, label=f'stima esponenziale', extrapolate=[0, 7]) # covid19.plot.plot_data(ax, istat_italy, label='Surplus di decessi giornalieri per tutte le cause (dati parziali ISTAT)', color=color, marker='^', linestyle=':', date_interval=7) _ = ax.set_title(f'COVID-19 Italia - dati Protezione Civile') _ = ax.set(xlabel="", ylabel="") _ = ax.legend(loc='upper left') ax.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1., decimals=0)) _ = ax.set(ylim=(0, None)) # + SHOW = ['daily_tpr7'] _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM1, 'ylim': (0, 0.30)}) data = ds_italy_regions.sel(location='Lombardia') count = 0 for (kind, fits_kind), color in zip(fits.items(), PALETTE_ONE): count += 1 if SHOW is not None and kind not in SHOW: continue color = 'tab:blue' ratio = RATIO.get(kind, 1) label = LABEL.get(kind, '') covid19.plot.plot_data(ax, data[kind], label=label, color=color, date_interval=14) try: ax.bar(data.time, data[kind[:-1]], color=color, alpha=0.25, label='percentuale di test positivi (dato giornaliero)') ax.bar(data.time[HIGHLIGHT_DAY::7], 
data[kind[:-1]][HIGHLIGHT_DAY::7], color=color, alpha=0.25) except: pass for i, fit in enumerate(fits_kind): covid19.plot.plot_fit(ax, fit, color=color, marker=None, label=f'stima esponenziale', extrapolate=[0, 7]) # covid19.plot.plot_data(ax, istat_italy, label='Surplus di decessi giornalieri per tutte le cause (dati parziali ISTAT)', color=color, marker='^', linestyle=':', date_interval=7) _ = ax.set_title(f'COVID-19 Italia - dati Protezione Civile') _ = ax.set(xlabel="", ylabel="") _ = ax.legend(loc='upper left') ax.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1., decimals=0)) _ = ax.set(ylim=(0, None)) # + REGIONS_FIT_PARAMS_DEFAULT = { 'daily_tests_pm7': [], 'daily_prevalence7': ['2020-07-10', None], 'current_severe_pm': ['2020-07-20', None], 'current_critical_pm': ['2020-07-25', None], 'daily_mortality7': ['2020-09-01', None], "daily_critical_pm7": [] } REGIONS_FIT_PARAMS = { 'Lombardia': { 'daily_tests7': [], 'daily_prevalence7': ['2020-07-23', '2020-10-06', None], 'current_severe_pm': [], 'current_critical_pm': ['2020-08-05', '2020-10-10', None], 'daily_mortality7': ['2020-08-01', '2020-10-15', None], 'daily_confirmed7': ['2020-10-06', '2020-10-23', '2020-11-01', None], 'daily_deaths7': ['2020-10-15', '2020-11-09', '2020-11-22'], 'daily_confirmed14': ['2020-10-06', '2020-10-23', '2020-11-01', None], 'daily_deaths14': ['2020-10-15', '2020-11-09', '2020-11-22'], }, 'Lazio': { 'daily_prevalence7': ['2020-08-01', None], 'current_severe_pm': [], 'current_critical_pm': ['2020-08-25', None], 'daily_mortality7': ['2020-09-05', None], }, 'Campania': { }, 'Sicilia': { 'daily_prevalence7': ['2020-07-17', None], 'daily_mortality7': ['2020-09-15', None], }, 'Veneto': { 'daily_prevalence7': ['2020-07-10', None], 'current_severe_pm': ['2020-08-10', None], }, 'Emilia-Romagna': { 'daily_prevalence7': ['2020-06-10', None], }, 'Sardegna': { 'daily_prevalence7': ['2020-07-10', '2020-09-05', None], 'current_severe_pm': ['2020-08-01', None], 'current_critical_pm': [], }, 'Liguria': { 'current_severe_pm': [], }, 'Puglia': { 'daily_prevalence7': ['2020-07-20', '2020-09-10'], 'current_severe_pm': ['2020-07-20', '2020-09-10'], 'current_critical_pm': ['2020-07-25', '2020-09-20'], }, 'Marche':{ 'daily_prevalence7': ['2020-07-30', None], }, 'Basilicata': { 'daily_prevalence7': ['2020-07-20', None], }, } STOP_FIT = '2020-04-03' region_fits = {} kind = 'daily_prevalence7' for region in REGIONS: data = ds_italy_regions.sel(location=region)[kind] region_fits[region] = {"daily_tests_pm7": [], 'daily_prevalence7': [], 'current_severe_pm': [], "daily_critical_pm7": []} region_fits[region][kind] = covid19.fit.fit_exponential_segments(data, breaks=[LAST_DAY - 7 * DAY, None], min_value=0, valid_ratio=0.1) # - # + EXTRAPOLATE = (np.datetime64('2020-04-01'), np.datetime64('2021-09-15')) XLIM = EXTRAPOLATE SHOW_REGIONS = {'daily_prevalence7', 'current_severe_pm', 'current_critical_pm', 'daily_mortality7'} SHOW_REGIONS = {'daily_prevalence7', 'daily_critical_pm7'} SHOW_REGIONS = {'daily_prevalence7'} for region in REGIONS: fit_kinds = region_fits[region] ds_region = ds_italy_regions.sel(location=region) _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM, 'yscale': 'log', 'ylim': (1 * 0.9, 1000 / 0.9)}) ax.yaxis.grid(color='lightgrey', linewidth=0.5) ax.xaxis.grid(color='lightgrey', linewidth=0.5) for (kind, fitsk), color, ratio, delay in zip(fit_kinds.items(), PALETTE_ONE, [0.1, 1, 1, 100, 40], [0, 0, 0, 0, 0]): if SHOW_REGIONS is not None and kind not in SHOW_REGIONS: continue label = LABEL[kind] if 
ratio == 1 else LABEL[kind] + f' per {ratio}' covid19.plot.plot_data(ax, ds_region[kind], label=label, color=color, delay=-delay, ratio=1 / ratio, marker=None, date_interval=28, annotate=True) for ff in fitsk: covid19.plot.plot_fit(ax, ff, extrapolate=[-2, 30], marker=None, color=color, label="tendenza ultimi 7 giorni") pop_perc = ds_region.population.values / ds_italy.population.values * 100 pos_perc = ds_region['daily_confirmed7'][-1].values / ds_italy['daily_confirmed7'][-1].values * 100 ax.set_title(f'COVID-19 {region} - {pop_perc:.1f}% popolazione - {pos_perc:.1f}% nuovi casi - al {str(LAST_DAY)[:10]}' ) ax.set(xlabel="", ylabel="# per milione di abitanti") ax.yaxis.set_major_formatter(ticker.FuncFormatter(myLogFormat)) ax.legend(loc='upper left') # + _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM1, 'yscale': 'log', 'ylim': (10 * 0.9, 10000 / 0.9)}) ax.yaxis.grid(color='lightgrey', linewidth=0.5) ax.xaxis.grid(color='lightgrey', linewidth=0.5) for region, color in zip(reversed(REGIONS), itertools.cycle(PALETTE_ONE)): fit_kinds = region_fits[region] ds_region = ds_italy_regions.sel(location=region) kind = 'daily_confirmed14' label = LABEL[kind] if ratio == 1 else LABEL[kind] + f' per {ratio} spostato di {delay} giorni' covid19.plot.plot_data(ax, ds_region[kind], label=region, color=color, marker=None, date_interval=7, annotate=True, annotate_add_label=True) ax.set_title(f'COVID-19 nuovi casi' ) _ = ax.set(xlabel="", ylabel="") # _ = ax.yaxis.set_major_formatter(ticker.FuncFormatter(myLogFormat)) # + _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM1, 'yscale': 'log', 'ylim': (0.5 * 0.9, 200 / 0.9)}) ax.yaxis.grid(color='lightgrey', linewidth=0.5) ax.xaxis.grid(color='lightgrey', linewidth=0.5) for region, color in zip(reversed(REGIONS), itertools.cycle(PALETTE_ONE)): fit_kinds = region_fits[region] ds_region = ds_italy_regions.sel(location=region) kind = 'daily_deaths7' label = LABEL[kind] if ratio == 1 else LABEL[kind] + f' per {ratio} spostato di {delay} giorni' covid19.plot.plot_data(ax, ds_region[kind], label=region, linewidth=1, color=color, marker=None, date_interval=7, annotate=True, annotate_add_label=True) ax.set_title(f'COVID-19 decessi (medie a 7 giorni)' ) _ = ax.set(xlabel="", ylabel="") # _ = ax.yaxis.set_major_formatter(ticker.FuncFormatter(myLogFormat)) # + _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM1, 'yscale': 'log', 'ylim': (2 * 0.9, 30 / 0.9)}) ax.yaxis.grid(color='lightgrey', linewidth=0.5) ax.xaxis.grid(color='lightgrey', linewidth=0.5) for region, color in zip(reversed(REGIONS), itertools.cycle(PALETTE_ONE)): fit_kinds = region_fits[region] ds_region = ds_italy_regions.sel(location=region) kind = 'daily_mortality14' label = LABEL[kind] if ratio == 1 else LABEL[kind] + f' per {ratio} spostato di {delay} giorni' covid19.plot.plot_data(ax, ds_region[kind], label=region, color=color, marker=None, linewidth=1.5, date_interval=7, annotate=True, annotate_add_label=True) ax.set_title(f'COVID-19 decessi' ) _ = ax.set(xlabel="", ylabel="# per milione di abitanti") # _ = ax.yaxis.set_major_formatter(ticker.FuncFormatter(myLogFormat)) # + _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': (np.datetime64('2020-10-25'), np.datetime64('2020-11-25')), 'yscale': 'log', 'ylim': (10 * 0.9, 150 / 0.9)}) ax.yaxis.grid(color='lightgrey', linewidth=0.5) ax.xaxis.grid(color='lightgrey', linewidth=0.5) for region, color in zip(reversed(REGIONS), itertools.cycle(PALETTE_ONE)): fit_kinds = region_fits[region] ds_region = 
ds_italy_regions.sel(location=region) kind = 'current_critical_pm' label = LABEL[kind] if ratio == 1 else LABEL[kind] + f' per {ratio} spostato di {delay} giorni' covid19.plot.plot_data(ax, ds_region[kind].rolling({'time': 7}).mean(), label=region, color=color, marker=None, linewidth=1, date_interval=7, annotate=True, annotate_add_label=True) ax.set_title(f'COVID-19 TI' ) _ = ax.set(xlabel="", ylabel="# per milione di abitanti") # _ = ax.yaxis.set_major_formatter(ticker.FuncFormatter(myLogFormat)) # - _, ax = covid19.plot.subplots(subplot_kw={'yscale': 'log', 'xlim': (np.datetime64('2020-02-24'), np.datetime64('2021-03-31')), 'ylim': (30, 11000)}) covid19.plot.plot_data(ax, ds_italy['current_critical'], marker=None, label='Pazienti in terapia intensiva', color='tab:red', annotate=True) covid19.plot.plot_data(ax, ds_italy['daily_deaths7'], marker=None, ratio=1/8, label='Decessi gioralieri per 8', color='tab:purple', date_interval=14, linestyle='--') covid19.plot.plot_data(ax, ds_italy['current_severe'], marker=None, ratio=15, label='Pazienti ospedalizzati diviso 15', color='tab:green', date_interval=14, linestyle='--') _ = ax.legend(loc='upper center') _ = ax.set(title='COVID-19 Italia - dati Protezione Civile', ylabel='', xlabel='') _, ax = covid19.plot.subplots(subplot_kw={'yscale': 'log', 'xlim': (np.datetime64('2020-10-01'), np.datetime64('2020-11-24')), 'ylim': (100, 10000)}) covid19.plot.plot_data(ax, ds_italy['daily_confirmed7'], marker=None, delay=5, ratio=5.8, label='Nuovi casi (media a 7 giorni) spostati di 5 giorni in avanti e diviso 5.8', color='tab:orange', linestyle='--') covid19.plot.plot_data(ax, ds_italy['current_critical'], marker=None, label='Pazienti in terapia intensiva', color='tab:red', date_interval=14, annotate=True) _ = ax.legend(loc='upper left') _ = ax.set(title='COVID-19 Italia - dati Protezione Civile', ylabel='', xlabel='') # + shift = 7 scale = 0.013 color_deaths = (0.4980392156862745, 0.4980392156862745, 0.4980392156862745) _, ax = covid19.plot.subplots(subplot_kw={'xlim': (np.datetime64('2020-10-01'), np.datetime64('2020-11-21')), 'yscale': 'log', 'ylim': (10, 700)}) covid19.plot.plot_data(ax, ds_italy['daily_confirmed7'], marker=None, delay=shift, ratio=1/scale, label=f'{scale * 100:.2g}% dei nuovi casi di {shift} giorni prima', color='tab:orange', linestyle='--') covid19.plot.plot_data(ax, ds_italy['daily_deaths7'], label='Decessi', color=color_deaths, date_interval=7, annotate=True) covid19.plot.plot_fit(ax, fits['daily_deaths7'][-2], marker=None, color=color_deaths, extrapolate=(-4, 7), label='stima') covid19.plot.plot_fit(ax, fits['daily_deaths7'][-1], marker=None, color=color_deaths, extrapolate=(-4, 7), label='stima') _ = ax.legend(loc='upper center') _ = ax.set(title='COVID-19 Italia - dati Protezione Civile (media a 7 giorni)', ylabel='', xlabel='') # - _, ax = covid19.plot.subplots(subplot_kw={'xlim': XLIM, 'yscale': 'log'}) covid19.plot.plot_data(ax, ds_italy['daily_confirmed14'], marker=None, delay=0, ratio=35, label='Nuovi casi spostati di 5 giorni in avanti e diviso 42', color='tab:orange', linestyle='--') covid19.plot.plot_data(ax, ds_italy['current_severe_pm'], marker=None, label='Pazienti ricoverati', color='tab:green', date_interval=14,) _ = ax.legend() _ = ax.set(title='COVID-19 Italia - medie a 7 giorni', ylabel='# per milione di abitanti', xlabel='') _, ax = covid19.plot.subplots(subplot_kw={'xlim': (np.datetime64('2020-05-25'), np.datetime64('2020-11-15')), 'yscale': 'log', 'ylim': (1, 1100)}) covid19.plot.plot_data(ax, 
ds_italy['current_critical'], marker=None, delay=4, ratio=12.5, label='Terapie intensive spostate di 4 giorni in avanti e diviso 12.5', color='tab:red', linestyle='--') covid19.plot.plot_data(ax, ds_italy['daily_deaths7'], marker=None, label='Decessi giornalieri (media a 7 giorni)', color='tab:purple', date_interval=14, annotate=True) _ = ax.legend() _ = ax.set(title='COVID-19 Italia - dati Protezione Civile', ylabel='', xlabel='') # + shift = 8 scale = 0.013 _, ax = covid19.plot.subplots(subplot_kw={'xlim': XLIM, 'yscale': 'log'}) covid19.plot.plot_data(ax, ds_italy['daily_confirmed7'], delay=shift, ratio=1/scale, marker=None, label='Nuovi casi', color='tab:orange', linestyle='--') covid19.plot.plot_data(ax, ds_italy['daily_deaths7'], marker=None, label='Stima denu nuovi casi dai decessi ', color='tab:purple', date_interval=14) _ = ax.legend() _ = ax.set(title='COVID-19 Italia - medie a 7 giorni', ylabel='', xlabel='') _, ax = covid19.plot.subplots(subplot_kw={'xlim': XLIM, 'yscale': 'log'}) covid19.plot.plot_data(ax, daily_confirmed_estimate / ds_italy['daily_confirmed7'], marker=None, label='Decessi giornalieri', color='tab:purple', date_interval=14,) _ = ax.legend() _ = ax.set(title='COVID-19 Italia - dati Protezione Civile', ylabel='', xlabel='') # - # + SHOW_REGIONS = None # {'daily_prevalence7', 'current_severe_pm','current_critical_pm', 'daily_mortality7'} for region in REGIONS: fit_kinds = region_fits[region] ds_region = ds_italy_regions.sel(location=region) _, ax = covid19.plot.subplots(1, subplot_kw={'xlim': XLIM, 'yscale': 'log', 'ylim': (0.02 * 0.9, 5000 / 0.9)}) ax.yaxis.grid(color='lightgrey', linewidth=0.5) ax.xaxis.grid(color='lightgrey', linewidth=0.5) for (kind, fitsk), color in zip(fit_kinds.items(), PALETTE_ONE): if SHOW_REGIONS is not None and kind not in SHOW_REGIONS: continue label = LABEL[kind] covid19.plot.plot_data(ax, ds_region[kind], label=label, color=color, marker=None, date_interval=14, annotate=True) for fit in fitsk: covid19.plot.plot_fit(ax, fit, color=color, marker=None, label=f'stima esponenziale', extrapolate=[-7, 7], alpha=0.2) pop_perc = ds_region.population.values / ds_italy.population.values * 100 pos_perc = ds_region['daily_confirmed7'][-1].values / ds_italy['daily_confirmed7'][-1].values * 100 ax.set_title(f'COVID-19 {region} - {pop_perc:.1f}% popolazione - {pos_perc:.1f}% positivi' ) ax.set(xlabel="", ylabel="# per milione di abitanti") ax.yaxis.set_major_formatter(ticker.FuncFormatter(myLogFormat)) ax.legend(loc='lower left') # - da = ds_italy['current_critical'][159:] for i in range(20): old = ds_italy['current_critical'][i] new = da[(da < old).argmin()] print(str(old.time.values)[:10], old.values, ds_italy['daily_deaths'].sel(time=old.time.values).values, str(new.time.values)[:10], new.values) daa = ds_italy['current_critical'] for i in range(10): try: new = da[i * 7 + 3] except: new = da[-1] # print(new, daa.values, (daa > new).values) old = daa[(daa <= new).argmin() - 1] old1 = daa[(daa <= new).argmin()] print( str(old.time.values)[:10], old.values, ds_italy['daily_deaths'].sel(time=old.time.values).values, ds_italy['current_severe'].sel(time=old.time.values).values, str(new.time.values)[:10], new.values, ds_italy['daily_deaths'].sel(time=new.time.values).values, ds_italy['current_severe'].sel(time=new.time.values).values, ) daa a = xr.DataArray( [56, 64, 105, 140, 166, 229, 295, 351, 462, 567, 650, 733], dims=['time'], coords={ 'time': ('time', [np.datetime64(t) for t in ['2020-08-18', '2020-08-25', '2020-09-01', '2020-09-08', 
'2020-09-15', '2020-09-22', '2020-09-29', '2020-10-06', '2020-10-13', '2020-10-20', '2020-10-27', '2020-11-03']]), 'old_time': ('time', ['2020-02-27', '2020-02-28', '2020-02-29', '2020-03-01', '2020-03-02', '2020-03-03', '2020-03-04', '2020-03-05', '2020-03-06', '2020-03-07', '2020-03-08', '2020-03-09']), } ) a # + # import matplotlib.patheffects as patheffects f, ax = covid19.plot.subplots() covid19.plot.plot_data(ax, da, marker=None, color='tab:red') # covid19.plot.plot_data(ax, ds_italy['daily_confirmed7'][159:], marker=None, delay=18, ratio=4.6, label='Nuovi casi spostati di 18 giorni in avanti e diviso 4.5', color='tab:orange', linestyle='--') covid19.plot.plot_data(ax, a, linestyle='', color='black') for xp, l in zip(a.time.values, a.old_time.values): yp = a.sel(time=xp).values ax.annotate(l, (xp - 3 * np.timedelta64(1, 'D'), yp + 10), path_effects=[ patheffects.Stroke(linewidth=4, foreground='white'), patheffects.Normal(), ]) ax.set(title='COVID-19 pazienti in terapia intensiva', xlabel='', ylim=(0, 820)) # + f, ax = covid19.plot.subplots() covid19.plot.plot_data(ax, ds_italy['current_critical'], marker=None, color='tab:red', date_interval=14) ax.set( ylim=(0, 500), ) for i in range(10): b = ds_italy['current_critical'][i + 2] c = da[(da < b).argmin()] ax.plot([b.time.values, c.time.values + 3 * np.timedelta64(day=1)], [b.values, b.values], color='tab:red') # - # + import math peak_infected_percent = .6 peak_infected = peak_infected_percent * ds_italy.population.values IFR_base = .006 T_ds_up = [7, 10, 20, 30] print(f"{peak_infected:,.0f} {peak_infected * IFR_base:,.0f}") # - # + def Rt(T_d=7, onset_day=5): return 2 ** (onset_day / T_d) print(f"{Rt():.2f}") # - 1.64*.8 # + 2 ** ((t - t_0) / T_d) T_d / ln(2) * 2 ** ((t - t_0) / T_d) + C peak_infected = T_d / ln(2) * 2 ** ((t - t_0) / T_d) ln2(peak_infected * ln(2) / T_d) * T_d = (t - t_0) # + def delta_time(T_d, peak_infected): return math.log2(peak_infected * math.log(2) / T_d) * T_d delta_time(7, peak_infected) # - 7 / math.log(2) * 2 ** ((152.5) / 7) cumul = 0 T_d = 7 for d in range(500): infected = 2 ** ((d + 107.5066) / T_d) cumul += infected theory = T_d / math.log(2) * 2 ** (d / T_d) print(f"{d} {np.datetime64('2020-10-25') + d * DAY:.10s} {infected/2:,.0f} {infected:,.0f} {cumul + 3458000:,.0f}") ds_italy['daily_confirmed'].sel(time=slice('2020-06-01')).sum().values * 2 + 3000000 def approx_tot(T_d): for d in range(100, 160): print(f"{np.datetime64('2020-07-16') + d * DAY:.10s} {d - 100} {2 ** (d / T_d):,.0f} {T_d / math.log(2) * 2 ** (d / T_d) + 3_500_000:,.0f}") approx_tot(7) from scipy import stats # + x = np.arange(90) sigma = 1 mu = 10 pd = stats.lognorm.pdf(x, sigma, scale=12) pd /= sum(pd) _ = plt.plot(x, pd) # - c = ds_italy['daily_confirmed7'] ds_italy['daily_deaths7'].plot(yscale='log') plt.plot(ds_italy.time, c) #ds_italy['current_critical'].plot() plt.plot(ds_italy.time + 3 * DAY, np.convolve(c, pd)[:-89] / 40) # plt.plot(ds_italy.time + 366 * DAY, ds_italy['daily_deaths7']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from astropy.table import Table import matplotlib.pyplot as plt import numpy as np # + IMAGE_FORMAT = 'eps' #IMAGE_DIR = 'C:/Users/jacob/Documents/GitHub/RotationCurves/images/' IMAGE_DIR = '/Users/kellydouglass/Documents/Research/Rotation_curves/RotationCurves/images/' #master_table = Table.read( 
'C:/Users/jacob/Documents/GitHub/RotationCurves/master_file.txt', format='ascii.ecsv') master_table = Table.read('/Users/kellydouglass/Documents/Research/Rotation_curves/RotationCurves/master_file_vflag.txt', format='ascii.ecsv') # + plt.rc('font', size=16) lwidth = 2 # Line width used in plots msize = 3 # Marker size used in plots ########################################################################### # Hard-coded entry for the bins for the histrogram plots at the end of this # function. #-------------------------------------------------------------------------- hist_range = ( 0, 5000) BINS = np.linspace( hist_range[0], hist_range[1], 100) ########################################################################### # + ''' ########################################################################### # Import the necessary data from the master_file. #-------------------------------------------------------------------------- vflag_list = master_table['vflag'] avg_chi_square_rot_master = master_table['avg_chi_square_rot'] pos_chi_square_rot_master = master_table['pos_chi_square_rot'] neg_chi_square_rot_master = master_table['neg_chi_square_rot'] mass_ratio_master = master_table['Mdark_Mstar_ratio'] ########################################################################### ########################################################################### # Initialize the arrays to store the chi square values. # # NOTE: chi square values are separated into difference histograms by the # average of the two rotation curves and the positive and negative # rotation curves. # NOTE: within these histograms, galaxies are separated by those in walls, # those in voids, and other. #-------------------------------------------------------------------------- avg_chi_square_rot_wall = [] avg_chi_square_rot_void = [] avg_chi_square_rot_other = [] pos_chi_square_rot_wall = [] pos_chi_square_rot_void = [] pos_chi_square_rot_other = [] neg_chi_square_rot_wall = [] neg_chi_square_rot_void = [] neg_chi_square_rot_other = [] ########################################################################### ''' ########################################################################### # For each galaxy in the master_table, separate the chi square values by # if the galaxy is within a wall, a void, or other. 
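# (vflag == 1 marks a void galaxy and vflag == 0 a wall galaxy; any other value,
# e.g. the 2 or -9 cases handled in the commented-out loop below, is grouped as "other".)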
#-------------------------------------------------------------------------- vboolean = master_table['vflag'] == 1 wboolean = master_table['vflag'] == 0 eboolean = np.logical_not(vboolean | wboolean) avg_chi_square_rot_void = master_table['avg_chi_square_rot'][vboolean] avg_chi_square_rot_wall = master_table['avg_chi_square_rot'][wboolean] avg_chi_square_rot_other = master_table['avg_chi_square_rot'][eboolean] pos_chi_square_rot_void = master_table['pos_chi_square_rot'][vboolean] pos_chi_square_rot_wall = master_table['pos_chi_square_rot'][wboolean] pos_chi_square_rot_other = master_table['pos_chi_square_rot'][eboolean] neg_chi_square_rot_void = master_table['neg_chi_square_rot'][vboolean] neg_chi_square_rot_wall = master_table['neg_chi_square_rot'][wboolean] neg_chi_square_rot_other = master_table['neg_chi_square_rot'][eboolean] mass_ratio_void = master_table['Mdark_Mstar_ratio'][vboolean] mass_ratio_wall = master_table['Mdark_Mstar_ratio'][wboolean] mass_ratio_other = master_table['Mdark_Mstar_ratio'][eboolean] ''' for vflag, chi_square, pos_chi_square, neg_chi_square \ in zip( vflag_list, avg_chi_square_rot_master, pos_chi_square_rot_master, neg_chi_square_rot_master): if vflag == 0: avg_chi_square_rot_wall.append( chi_square) pos_chi_square_rot_wall.append( pos_chi_square) neg_chi_square_rot_wall.append( neg_chi_square) elif vflag == 1: avg_chi_square_rot_void.append( chi_square) pos_chi_square_rot_void.append( pos_chi_square) neg_chi_square_rot_void.append( neg_chi_square) elif vflag == 2 or vflag == -9: avg_chi_square_rot_other.append( chi_square) pos_chi_square_rot_other.append( pos_chi_square) neg_chi_square_rot_other.append( neg_chi_square) ''' ########################################################################### ''' ########################################################################### # Calculate the mean, RMS, and standard deviation for the void, wall, and # total distributions in the average, positive and negative, histograms # below. 
#-------------------------------------------------------------------------- avg_chi_square_rot_wall_mean = np.mean( avg_chi_square_rot_wall) avg_chi_square_rot_void_mean = np.mean( avg_chi_square_rot_void) avg_chi_square_rot_other_mean = np.mean( avg_chi_square_rot_other) avg_chi_square_rot_wall_stdev = np.std( avg_chi_square_rot_wall) avg_chi_square_rot_void_stdev = np.std( avg_chi_square_rot_void) avg_chi_square_rot_other_stdev = np.std( avg_chi_square_rot_other) avg_chi_square_rot_wall_rms = np.sqrt( np.mean( avg_chi_square_rot_wall**2)) avg_chi_square_rot_void_rms = np.sqrt( np.mean( avg_chi_square_rot_void**2)) avg_chi_square_rot_other_rms = np.sqrt( np.mean( avg_chi_square_rot_other**2)) pos_chi_square_rot_wall_mean = np.mean( pos_chi_square_rot_wall) pos_chi_square_rot_void_mean = np.mean( pos_chi_square_rot_void) pos_chi_square_rot_other_mean = np.mean( pos_chi_square_rot_other) pos_chi_square_rot_wall_stdev = np.std( pos_chi_square_rot_wall) pos_chi_square_rot_void_stdev = np.std( pos_chi_square_rot_void) pos_chi_square_rot_other_stdev = np.std( pos_chi_square_rot_other) pos_chi_square_rot_wall_rms = np.sqrt( np.mean( pos_chi_square_rot_wall**2)) pos_chi_square_rot_void_rms = np.sqrt( np.mean( pos_chi_square_rot_void**2)) pos_chi_square_rot_other_rms = np.sqrt( np.mean( pos_chi_square_rot_other**2)) neg_chi_square_rot_wall_mean = np.mean( neg_chi_square_rot_wall) neg_chi_square_rot_void_mean = np.mean( neg_chi_square_rot_void) neg_chi_square_rot_other_mean = np.mean( neg_chi_square_rot_other) neg_chi_square_rot_wall_stdev = np.std( neg_chi_square_rot_wall) neg_chi_square_rot_void_stdev = np.std( neg_chi_square_rot_void) neg_chi_square_rot_other_stdev = np.std( neg_chi_square_rot_other) neg_chi_square_rot_wall_rms = np.sqrt( np.mean( neg_chi_square_rot_wall**2)) neg_chi_square_rot_void_rms = np.sqrt( np.mean( neg_chi_square_rot_void**2)) neg_chi_square_rot_other_rms = np.sqrt( np.mean( neg_chi_square_rot_other**2)) ########################################################################### ########################################################################### # Variables that are used in the fitting of the gaussian are located # directly below. #-------------------------------------------------------------------------- xmin, xmax = plt.xlim() x = np.linspace(xmin, xmax, 100) ########################################################################### ''' # - # ## Distribution of chi-square # + ########################################################################### # Plot the chi square distribution for the average chi rotation curve and # separate the distributions into walls and voids. 
#-------------------------------------------------------------------------- #avg_chi_square_rot_hist = plt.figure() plt.figure(figsize=(7,5)) plt.title("$\chi^2$ Distribtuion") # - - - - - - - - - - - - - - - - - - - plt.hist( avg_chi_square_rot_other, BINS, color='g', density=True, histtype='step', linewidth=lwidth, linestyle='--', label='Other') # p = norm.pdf(x, # avg_chi_square_rot_other_mean, avg_chi_square_rot_other_stdev) # plt.plot(x, p, 'g--', linewidth=2) # plt.axvline( avg_chi_square_rot_other_mean, # color='green', linestyle='-', linewidth=1.5) # plt.axvline( # avg_chi_square_rot_other_mean + avg_chi_square_rot_other_stdev, # color='green', linestyle=':', linewidth=1) # plt.axvline( # avg_chi_square_rot_other_mean - avg_chi_square_rot_other_stdev, # color='green', linestyle=':', linewidth=1) # plt.axvline( # avg_chi_square_rot_other_mean + 2*avg_chi_square_rot_other_stdev, # color='green', linestyle=':', linewidth=1) # plt.axvline( # avg_chi_square_rot_other_mean - 2*avg_chi_square_rot_other_stdev, # color='green', linestyle=':', linewidth=1) # _, mean_avg_chi_square_rot_other_ = plt.ylim() # plt.text(avg_chi_square_rot_other_mean + avg_chi_square_rot_other_mean/10, # mean_avg_chi_square_rot_other_ - mean_avg_chi_square_rot_other_/10, # 'Mean: {:.2f}'.format( avg_chi_square_rot_other_mean)) plt.hist( avg_chi_square_rot_void, BINS, color='r', density=True, histtype='step', linewidth=lwidth, label='Void') # p = norm.pdf(x, # avg_chi_square_rot_void_mean, avg_chi_square_rot_void_stdev) # plt.plot(x, p, 'r--', linewidth=2) # plt.axvline( avg_chi_square_rot_void_mean, # color='red', linestyle='-', linewidth=1.5) # plt.axvline( avg_chi_square_rot_void_mean + avg_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # plt.axvline( avg_chi_square_rot_void_mean - avg_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # plt.axvline( # avg_chi_square_rot_void_mean + 2*avg_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # plt.axvline( # avg_chi_square_rot_void_mean - 2*avg_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # _, mean_void_avg_chi_square_rot_ = plt.ylim() # plt.text(avg_chi_square_rot_void_mean + avg_chi_square_rot_void_mean/10, # mean_void_avg_chi_square_rot_ - mean_void_avg_chi_square_rot_/10, # 'Mean: {:.2f}'.format( avg_chi_square_rot_void_mean)) plt.hist( avg_chi_square_rot_wall, BINS, color='k', density=True, histtype='step', linewidth=lwidth, linestyle=':', label='Wall') # p = norm.pdf(x, # avg_chi_square_rot_wall_mean, avg_chi_square_rot_wall_stdev) # plt.plot(x, p, 'k--', linewidth=2) # plt.axvline( avg_chi_square_rot_wall_mean, # color='black', linestyle='-', linewidth=1.5) # plt.axvline( avg_chi_square_rot_wall_mean + avg_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # plt.axvline( avg_chi_square_rot_wall_mean - avg_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # plt.axvline( # avg_chi_square_rot_wall_mean + 2*avg_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # plt.axvline( # avg_chi_square_rot_wall_mean - 2*avg_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # _, mean_wall_avg_chi_square_rot_ = plt.ylim() # plt.text(avg_chi_square_rot_wall_mean + avg_chi_square_rot_wall_mean/10, # mean_wall_avg_chi_square_rot_ - mean_wall_avg_chi_square_rot_/10, # 'Mean: {:.2f}'.format( avg_chi_square_rot_wall_mean)) #ax = avg_chi_square_rot_hist.add_subplot(111) plt.tick_params( axis='both', 
direction='in') #ax.yaxis.set_ticks_position('both') #ax.xaxis.set_ticks_position('both') plt.ylabel('Galaxy fraction') plt.xlabel('$\chi^2$ (Goodness of Fit)') plt.legend() # textstr = '\n'.join(( # r'STDEV: $%.2f$' % ( avg_chi_square_rot_other_stdev, ), # r'$STDEV_{wall}$: $%.2f$' % ( avg_chi_square_rot_wall_stdev, ), # r'$STDEV_{void}$: $%.2f$' % ( avg_chi_square_rot_void_stdev, ), # r'RMS: $%.2f$' % ( avg_chi_square_rot_other_rms, ), # r'$RMS_{wall}$: $%.2f$' % ( avg_chi_square_rot_wall_rms, ), # r'$RMS_{void}$: $%.2f$' % ( avg_chi_square_rot_void_rms, ))) # props = dict( boxstyle='round', facecolor='cornsilk', alpha=0.6) # ax.text(0.72, 0.95, textstr, # verticalalignment='top', horizontalalignment='left', # transform=ax.transAxes, # color='black', fontsize=8, bbox=props) ''' plt.savefig( IMAGE_DIR + '/histograms/avg_chi_square_hist.' + IMAGE_FORMAT, format=IMAGE_FORMAT) ''' #plt.show() #plt.close() ########################################################################### # - # ## Distribution of chi square for positive curves # + ########################################################################### # Plot the chi square distribution for the maximum chi rotation curve and # separate the distributions into walls and voids. #-------------------------------------------------------------------------- #pos_chi_square_rot_hist = plt.figure() plt.figure(figsize=(7,5)) plt.title("$\chi^2$ Distribution (Positive)") # - - - - - - - - - - - - - - - - - - - - - - - - - plt.hist( pos_chi_square_rot_other, BINS, color='g', density=True, histtype='step', linewidth=lwidth, linestyle='--', label='Other') # p = norm.pdf(x, # pos_chi_square_rot_other_mean, pos_chi_square_rot_other_stdev) # plt.plot(x, p, color='purple', linestyle='--', linewidth=2) # plt.axvline( pos_chi_square_rot_other_mean, # color='purple', linestyle='-', linewidth=1.5) # plt.axvline( # pos_chi_square_rot_other_mean + pos_chi_square_rot_other_stdev, # color='purple', linestyle=':', linewidth=1) # plt.axvline( # pos_chi_square_rot_other_mean - pos_chi_square_rot_other_stdev, # color='purple', linestyle=':', linewidth=1) # plt.axvline( # pos_chi_square_rot_other_mean + 2*pos_chi_square_rot_other_stdev, # color='purple', linestyle=':', linewidth=1) # plt.axvline( # pos_chi_square_rot_other_mean - 2*pos_chi_square_rot_other_stdev, # color='purple', linestyle=':', linewidth=1) # _, mean_pos_chi_square_rot_other_ = plt.ylim() # plt.text(pos_chi_square_rot_other_mean + pos_chi_square_rot_other_mean/10, # mean_pos_chi_square_rot_other_ - mean_pos_chi_square_rot_other_/10, # 'Mean: {:.2f}'.format( pos_chi_square_rot_other_mean)) plt.hist( pos_chi_square_rot_void, BINS, color='r', density=True, histtype='step', linewidth=lwidth, label='Void') # p = norm.pdf(x, # pos_chi_square_rot_void_mean, pos_chi_square_rot_void_stdev) # plt.plot(x, p, 'r--', linewidth=2) # plt.axvline( pos_chi_square_rot_void_mean, # color='red', linestyle='-', linewidth=1.5) # plt.axvline( pos_chi_square_rot_void_mean + pos_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # plt.axvline( pos_chi_square_rot_void_mean - pos_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # plt.axvline( # pos_chi_square_rot_void_mean + 2*pos_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # plt.axvline( # pos_chi_square_rot_void_mean - 2*pos_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # _, mean_void_pos_chi_square_rot_ = plt.ylim() # plt.text(pos_chi_square_rot_void_mean + 
pos_chi_square_rot_void_mean/10, # mean_void_pos_chi_square_rot_ - mean_void_pos_chi_square_rot_/10, # 'Mean: {:.2f}'.format( pos_chi_square_rot_void_mean)) plt.hist( pos_chi_square_rot_wall, BINS, color='k', density=True, histtype='step', linewidth=lwidth, linestyle=':', label='Wall') # p = norm.pdf(x, # pos_chi_square_rot_wall_mean, pos_chi_square_rot_wall_stdev) # plt.plot(x, p, 'k--', linewidth=2) # plt.axvline( pos_chi_square_rot_wall_mean, # color='black', linestyle='-', linewidth=1.5) # plt.axvline( pos_chi_square_rot_wall_mean + pos_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # plt.axvline( pos_chi_square_rot_wall_mean - pos_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # plt.axvline( # pos_chi_square_rot_wall_mean + 2*pos_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # plt.axvline( # pos_chi_square_rot_wall_mean - 2*pos_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # _, mean_wall_pos_chi_square_rot_ = plt.ylim() # plt.text(pos_chi_square_rot_wall_mean + pos_chi_square_rot_wall_mean/10, # mean_wall_pos_chi_square_rot_ - mean_wall_pos_chi_square_rot_/10, # 'Mean: {:.2f}'.format( pos_chi_square_rot_wall_mean)) #ax = pos_chi_square_rot_hist.add_subplot(111) plt.tick_params( axis='both', direction='in') #ax.yaxis.set_ticks_position('both') #ax.xaxis.set_ticks_position('both') plt.ylabel('Galaxy fraction') plt.xlabel('$\chi^2$ (Goodness of Fit)') plt.legend() # textstr = '\n'.join(( # r'STDEV: $%.2f$' % ( pos_chi_square_rot_other_stdev, ), # r'$STDEV_{wall}$: $%.2f$' % ( pos_chi_square_rot_wall_stdev, ), # r'$STDEV_{void}$: $%.2f$' % ( pos_chi_square_rot_void_stdev, ), # r'RMS: $%.2f$' % ( pos_chi_square_rot_other_rms, ), # r'$RMS_{wall}$: $%.2f$' % ( pos_chi_square_rot_wall_rms, ), # r'$RMS_{void}$: $%.2f$' % ( pos_chi_square_rot_void_rms, ))) # props = dict( boxstyle='round', facecolor='cornsilk', alpha=0.6) # ax.text(0.72, 0.95, textstr, # verticalalignment='top', horizontalalignment='left', # transform=ax.transAxes, # color='black', fontsize=8, bbox=props) ''' plt.savefig( IMAGE_DIR + '/histograms/pos_chi_square_hist.' + IMAGE_FORMAT, format=IMAGE_FORMAT) ''' #plt.show() #plt.close() ########################################################################### # - # ## Distribution of chi square for negative rotation curve # + ########################################################################### # Plot the chi square distribution for the negative rotation curve and # separate the distributions into walls and voids. 
#-------------------------------------------------------------------------- #neg_chi_square_rot_hist = plt.figure() plt.figure(figsize=(7,5)) plt.title("$\chi^2$ Distribution (Negative)") plt.hist( neg_chi_square_rot_other, BINS, color='g', density=True, histtype='step', linewidth=lwidth, linestyle='--', label='Other') # p = norm.pdf(x, # neg_chi_square_rot_other_mean, neg_chi_square_rot_other_stdev) # plt.plot(x, p, 'g--', linewidth=2) # plt.axvline( neg_chi_square_rot_other_mean, # color='blue', linestyle='-', linewidth=1.5) # plt.axvline( # neg_chi_square_rot_other_mean + neg_chi_square_rot_other_stdev, # color='blue', linestyle=':', linewidth=1) # plt.axvline( # neg_chi_square_rot_other_mean - neg_chi_square_rot_other_stdev, # color='blue', linestyle=':', linewidth=1) # plt.axvline( # neg_chi_square_rot_other_mean + 2*neg_chi_square_rot_other_stdev, # color='blue', linestyle=':', linewidth=1) # plt.axvline( # neg_chi_square_rot_other_mean - 2*neg_chi_square_rot_other_stdev, # color='blue', linestyle=':', linewidth=1) # _, mean_neg_chi_square_rot_other_ = plt.ylim() # plt.text(neg_chi_square_rot_other_mean + neg_chi_square_rot_other_mean/10, # mean_neg_chi_square_rot_other_ - mean_neg_chi_square_rot_other_/10, # 'Mean: {:.2f}'.format( neg_chi_square_rot_other_mean)) plt.hist( neg_chi_square_rot_void, BINS, color='r', density=True, histtype='step', linewidth=lwidth, label='Void') # p = norm.pdf(x, # neg_chi_square_rot_void_mean, neg_chi_square_rot_void_stdev) # plt.plot(x, p, 'r--', linewidth=2) # plt.axvline( neg_chi_square_rot_void_mean, # color='red', linestyle='-', linewidth=1.5) # plt.axvline( neg_chi_square_rot_void_mean + neg_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # plt.axvline( neg_chi_square_rot_void_mean - neg_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # plt.axvline( # neg_chi_square_rot_void_mean + 2*neg_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # plt.axvline( # neg_chi_square_rot_void_mean - 2*neg_chi_square_rot_void_stdev, # color='red', linestyle=':', linewidth=1) # _, mean_void_neg_chi_square_rot_ = plt.ylim() # plt.text(neg_chi_square_rot_void_mean + neg_chi_square_rot_void_mean/10, # mean_void_neg_chi_square_rot_ - mean_void_neg_chi_square_rot_/10, # 'Mean: {:.2f}'.format( neg_chi_square_rot_void_mean)) plt.hist( neg_chi_square_rot_wall, BINS, color='k', density=True, histtype='step', linewidth=lwidth, linestyle=':', label='Wall') # p = norm.pdf(x, # neg_chi_square_rot_wall_mean, neg_chi_square_rot_wall_stdev) # plt.plot(x, p, 'k--', linewidth=2) # plt.axvline( neg_chi_square_rot_wall_mean, # color='black', linestyle='-', linewidth=1.5) # plt.axvline( neg_chi_square_rot_wall_mean + neg_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # plt.axvline( neg_chi_square_rot_wall_mean - neg_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # plt.axvline( # neg_chi_square_rot_wall_mean + 2*neg_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # plt.axvline( # neg_chi_square_rot_wall_mean - 2*neg_chi_square_rot_wall_stdev, # color='black', linestyle=':', linewidth=1) # _, mean_wall_neg_chi_square_rot_ = plt.ylim() # plt.text(neg_chi_square_rot_wall_mean + neg_chi_square_rot_wall_mean/10, # mean_wall_neg_chi_square_rot_ - mean_wall_neg_chi_square_rot_/10, # 'Mean: {:.2f}'.format( neg_chi_square_rot_wall_mean)) #ax = neg_chi_square_rot_hist.add_subplot(111) plt.tick_params( axis='both', direction='in') 
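# density=True normalises each step histogram to unit area, so the void, wall and
# "other" samples (which contain different numbers of galaxies) can be compared
# directly despite their different sizes.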
#ax.yaxis.set_ticks_position('both') #ax.xaxis.set_ticks_position('both') plt.ylabel('Galaxy fraction') plt.xlabel('$\chi^2$ (Goodness of Fit)') plt.legend() # textstr = '\n'.join(( # r'STDEV: $%.2f$' % ( neg_chi_square_rot_other_stdev, ), # r'$STDEV_{wall}$: $%.2f$' % ( neg_chi_square_rot_wall_stdev, ), # r'$STDEV_{void}$: $%.2f$' % ( neg_chi_square_rot_void_stdev, ), # r'RMS: $%.2f$' % ( neg_chi_square_rot_other_rms, ), # r'$RMS_{wall}$: $%.2f$' % ( neg_chi_square_rot_wall_rms, ), # r'$RMS_{void}$: $%.2f$' % ( neg_chi_square_rot_void_rms, ))) # props = dict( boxstyle='round', facecolor='cornsilk', alpha=0.6) # ax.text(0.72, 0.95, textstr, # verticalalignment='top', horizontalalignment='left', # transform=ax.transAxes, # color='black', fontsize=8, bbox=props) ''' plt.savefig( IMAGE_DIR + '/histograms/neg_chi_square_hist.' + IMAGE_FORMAT, format=IMAGE_FORMAT) ''' #plt.show() #plt.close() ########################################################################### # - # ## Relationship between chi square and mass ratio # + plt.figure( figsize=(7,5)) plt.loglog( avg_chi_square_rot_other, mass_ratio_other, 'go', markersize=msize, fillstyle='none', label='Other') plt.loglog( avg_chi_square_rot_wall, mass_ratio_wall, 'k^', markersize=msize, label='Wall') plt.loglog( avg_chi_square_rot_void, mass_ratio_void, 'ro', markersize=msize, label='Void') plt.xlabel('$\chi^2$ (Goodness of Fit)') plt.ylabel('$M_{DM}/M_*$') plt.legend() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # based on https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/quickstarts/client-libraries-rest-api?pivots=programming-language-python&tabs=version-3-1#named-entity-recognition-(ner) # requirements: # pip install azure-ai-textanalytics --pre import json import os.path from collections import defaultdict with open('.azure-key') as fh: key = fh.read() endpoint = 'https://climate-law-entity-extraction.cognitiveservices.azure.com/' # + pycharm={"name": "#%%\n"} from azure.ai.textanalytics import TextAnalyticsClient from azure.core.credentials import AzureKeyCredential def authenticate_client(): ta_credential = AzureKeyCredential(key) text_analytics_client = TextAnalyticsClient( endpoint=endpoint, credential=ta_credential) return text_analytics_client client = authenticate_client() # + pycharm={"name": "#%%\n"} def chunks(list, num_elements): for i in range(0, len(list), num_elements): yield list[i:i + num_elements] def read_file_in_parts(file_path, part_length_limit): """ splits the file along newlines to parts with given max length """ document_parts = [] with open(file_path, mode='r', encoding='utf8') as input_document: lines = input_document.readlines() part_character_count = 0 part = '' for line in lines: line_length = len(line.encode('utf8')) if part_character_count + line_length > part_length_limit: document_parts.append(part) part = '' part_character_count = 0 if line_length > part_length_limit: print("ERROR: line too long: " + line) continue part += line + '\n' part_character_count += line_length return document_parts # + pycharm={"name": "#%%\n"} # Actual limit of service is 5120 "text elements" but having a hard time matching this with characters or bytes in python DOCUMENT_CHARACTER_LIMIT = 4000 VERBOSE = False def entity_recognition_from_file(client, file_path, output_dir): """ reads document from file, extracts entities 
using Azure and writes to output file """ try: document_parts = read_file_in_parts(file_path, DOCUMENT_CHARACTER_LIMIT) aggregated_result = [] for chunk in chunks(document_parts, 5): aggregated_result.extend(client.recognize_entities(documents=chunk)) category_statistics = defaultdict(int) text_statistics = defaultdict(int) entities = [] for result in aggregated_result: if VERBOSE: print("Named Entities:\n") for entity in result.entities: if entity.category == "Quantity": # We skip quantities for now to reduce some noise, as we probably cant put it to use right now continue category_statistics[entity.category] += 1 subcategory_key = '{}_{}'.format(entity.category, entity.subcategory) category_statistics[subcategory_key] += 1 text_statistics[entity.text] += 1 entities.append({'text': entity.text, 'category': entity.category, 'subcategory': entity.subcategory, 'confidence_score': entity.confidence_score, 'offset': entity.offset, 'length': entity.length}) if VERBOSE: print("\tText: \t", entity.text, "\tCategory: \t", entity.category, "\tSubCategory: \t", entity.subcategory, "\n\tConfidence Score: \t", round(entity.confidence_score, 2), "\tLength: \t", entity.length, "\tOffset: \t", entity.offset, "\n") basename = os.path.basename(file_path) filename, extension = os.path.splitext(basename) target_filename = os.path.join(output_dir, filename + '_entities.json') print("Writing {} entities to {}".format(len(entities), target_filename)) result_dump = {'category_statistics': category_statistics, 'text_statistics': text_statistics, 'entities': entities} with open(target_filename, mode='w', encoding='utf8') as entities_fh: entities_fh.write(json.dumps(result_dump)) except Exception as err: print("Encountered exception. {}".format(err)) # + pycharm={"name": "#%%\n"} # Run for single document entity_recognition_from_file(client, '../documents/1004_0.txt', '../entities') # + pycharm={"name": "#%%\n"} documents_directory = '../documents' entities_directory = '../entities' # + pycharm={"name": "#%%\n"} # single threaded for entry in os.listdir(documents_directory): source_file = os.path.join(documents_directory, entry) if not os.path.isfile(source_file): continue entity_recognition_from_file(client, source_file, entities_directory) # + pycharm={"name": "#%%\n", "is_executing": true} from concurrent.futures.thread import ThreadPoolExecutor # multi threaded execution with ThreadPoolExecutor(max_workers=10) as executor: futures = [] for entry in os.listdir(documents_directory): source_file = os.path.join(documents_directory, entry) if not os.path.isfile(source_file): continue future = executor.submit(entity_recognition_from_file, client, source_file, entities_directory) futures.append(future) print([f.result() for f in futures]) # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # __Machine Learningn Lyfecycle Example__ # # **1.Define Objective** # # Objective: Infer how IQ, Year Experience, and Age effects income using a linear model. # # __2. 
Collect Data__ # import tensorflow as tf import numpy as np import pandas as pd from pandas import DataFrame as DF # **Create Dataset** # + np.random.seed(555) X1 = np.random.normal(loc=100,scale=15,size=200).astype(int) X2 = np.random.normal(10,4.5,200) X3 = np.random.normal(32,4,200).astype(int) dateOfBirth = np.datetime64('2017-10-31') - 365 * X3 intercept = 5 noiseOrError = np.random.normal(0,1.5,200) Y = np.array([0.3*x1 + 1.5*x2 + 0.83*x3 + intercept + e for x1,x2,x3,e in zip(X1,X2,X3,noiseOrError)]) # - # **3. Cleaning the Data** columns = ['iq','years_Experience','dateOfBirth'] dataFrame = DF(list(zip(X1,X2,dateOfBirth)), columns = columns) dataFrame['income'] = Y dataFrame.info() dataFrame.describe() dataFrame = dataFrame[dataFrame.years_Experience >= 0] dataFrame.describe() # **4. EDA ** dataFrame.describe(include=['datetime64']) import matplotlib.pyplot as plot # %matplotlib inline pd.plotting.scatter_matrix(dataFrame,figsize=(16,9)) print(dataFrame.corr()) import seaborn as sb plot.figure(figsize=(21,9)) sb.heatmap(dataFrame.corr()) # **5. Data Processing / Feature Engineering ** # + from datetime import datetime as dt dataFrame['age'] = dataFrame.dateOfBirth.apply(lambda x: (dt.strptime('2017-10-31','%Y-%m-%d')-x).days / 365) dataFrame.drop('dateOfBirth',axis=1, inplace=True) dataFrame.head() # - import matplotlib.pyplot as plot # %matplotlib inline pd.plotting.scatter_matrix(dataFrame,figsize=(16,9)) import seaborn as sb plot.figure(figsize=(21,9)) sb.heatmap(dataFrame.corr()) # **6. Train/Evaluate Models** # + import tensorflow as tf #Train/Test Split X = dataFrame.iloc[:,[0,1,3]] Y = dataFrame.income tranning_index = X.sample(frac=0.67).index x_training = X[X.index.isin(tranning_index)].values x_test = X[~X.index.isin(tranning_index)].values y_training = Y[Y.index.isin(tranning_index)].values y_test = Y[~Y.index.isin(tranning_index)].values # - # **Create Model** tf.reset_default_graph() # + session = tf.Session() #create parameters w = tf.get_variable(name='w', initializer=[[0.1],[0.1],[0.1]]) b = tf.get_variable(name='b', initializer=0.) 
#create input placeholders x = tf.placeholder('float32',name='x') y = tf.placeholder('float32',name='y_true') #create linear model yhat = tf.reshape(tf.matmul(x,w) + b, [-1,], name='yhat') # - # **create the loss and test score function** # + mse = tf.reduce_mean(tf.square(y-yhat),name='mse') rmse = tf.sqrt(mse,name='rmse') #test score NRMSE normalized root mean square error test_nrmse = tf.divide(rmse, tf.abs(tf.reduce_mean(y)),name='nrmse') # - #init variables init = tf.variables_initializer([w,b]) session.run(init) # **Tranning/Evaluate** # + #reset parameters w and b session.run(init) #run optimization again with smaller learning rate opt = tf.train.GradientDescentOptimizer(learning_rate=0.001) train = opt.minimize(rmse) for i in range(800): if(i%50 == 0) & (i > 0): nrmse = session.run(test_nrmse,{x: x_test, y: y_test}) print('test NRMSE: {}'.format(nrmse)) else: session.run(train, {x: x_training, y: y_training}) # + #0.3*x1 + 1.5*x2 + 0.83*x3 + 5 # - session.run([w,b]) # **Result: ** # # iq: for ever .001 increase in iq we have increase around ~260 dollar (260 = 0.265 * 1000) anually # for every additional year of expeience the person will earn 1400 dollar (1400 = 1.425 * 1000 ) # the last value 0.942 is for age # # # ** How our model evolve over time and how are the mse values changes ** # **Create Mode** # - create summeries # - merge summeries # - create a summery writer # - run merge summeries wit model in session # - use the weiter to write the output # tf.reset_default_graph() # + session = tf.Session() #create parameters w = tf.get_variable(name='w', initializer=[[0.1],[0.1],[0.1]]) tf.summary.scalar('wMean',tf.reduce_mean(w)) tf.summary.scalar('wSum',tf.reduce_sum(w)) tf.summary.histogram('weights',w) b = tf.get_variable(name='b', initializer=0.) 
tf.summary.scalar('intercept',b) #create input placeholders x = tf.placeholder('float32',name='x') y = tf.placeholder('float32',name='y_true') #create linear model yhat = tf.reshape(tf.matmul(x,w) + b, [-1,], name='yhat') # - # **create the loss and test score function** # + mse = tf.reduce_mean(tf.square(y-yhat),name='mse') rmse = tf.sqrt(mse,name='rmse') #test score NRMSE normalized root mean square error test_nrmse = tf.divide(rmse, tf.abs(tf.reduce_mean(y)),name='nrmse') #merge all the summeries and create the writer that will output them during the training summaries = tf.summary.merge_all() writer = tf.summary.FileWriter(logdir='temp/linear_logs', graph=session.graph) # - #init variables init = tf.variables_initializer([w,b]) session.run(init) # **Tranning/Evaluate** # + #reset parameters w and b session.run(init) #run optimization again with smaller learning rate opt = tf.train.GradientDescentOptimizer(learning_rate=0.001) train = opt.minimize(rmse) for i in range(800): if(i%50 == 0) & (i > 0): smry, nrmse = session.run([summaries, test_nrmse],{x: x_test, y: y_test}) writer.add_summary(smry,i) #print('test NRMSE: {}'.format(nrmse)) else: smry, _ = session.run([summaries,train], {x: x_training, y: y_training}) writer.add_summary(smry,i) # - session.run([w,b]) # **in the termenal write** # # tensorboard --logdir=temp/linear_logs # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Dependencies and Setup # %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np # Hide warning messages in notebook import warnings warnings.filterwarnings('ignore') # File to Load (Remember to Change These) mouse_drug_data_to_load = "data/mouse_drug_data.csv" clinical_trial_data_to_load = "data/clinicaltrial_data.csv" # Read the Mouse and Drug Data and the Clinical Trial Data mouse_data = pd.read_csv("data/mouse_drug_data.csv") drug_data = pd.read_csv("data/clinicaltrial_data.csv") # Combine the data into a single dataset combine_df =pd.merge(mouse_data, drug_data, on="Mouse ID") # Display the data table for preview combine_df.head() #drug_data #mouse_data # - # ## Tumor Response to Treatment # + # Store the Mean Tumor Volume Data Grouped by Drug and Timepoint #tumor = combine_df.loc[combine_df["Drug"] == "Stelasyn"] #print(tumor) #tumor_data = tumor_data.groupby(["Drug"]).mean() #tumor_data = tumor.groupby(["Timepoint"]).mean() #tumorVOL_df = pd.DataFrame(tumor_data) tumor_data = combine_df.groupby(["Drug", "Timepoint"])["Tumor Volume (mm3)"].mean() tumorVOL_df = pd.DataFrame(tumor_data) tumorVOL_df = tumorVOL_df.reset_index() mean_pivot = tumorVOL_df.pivot(index="Timepoint", columns="Drug")["Tumor Volume (mm3)"] uniques = tumorVOL_df["Drug"].unique() print(uniques) #tumorVOL_df mean_pivot # + colors = ["red", "yellow", "blue", "green", "orange", "black", "cyan", "purple", "grey", "brown"] alpha_list = ["1", "0", "1", "1", "0", "1", "0", "0", "0", "0"] dfList = [] count = 0 for drug in uniques: newDf = tumorVOL_df.loc[tumorVOL_df["Drug"]==drug] dfList.append(newDf) #print(dfList) for df in dfList: plt.scatter(dfList[count]["Timepoint"], dfList[count]["Tumor Volume (mm3)"], color=colors[count], alpha=alpha_list[count]) count +=1 #capDF = tumorVOL_df.loc[tumorVOL_df["Drug"]=="Capomulin"] #print(capDF) plt.xlabel("Timepoint") plt.ylabel("Tumor Volume") plt.title("Tumor Volume Over Time") plt.show() 
#plt.scatter(DFList[0:]["Timepoint"], capDF["Tumor Volume (mm3)"], color=colors[0:]) #plt.scatter(DFList[1]["Timepoint"], capDF["Tumor Volume (mm3)"], color=colors[1]) # + #scatter plot that shows how the tumor volume changes over time for each treatment. #plt.scatter(tumorVol_df("Tumor Volume (mm3)") # + # Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint # Convert to DataFrame # Preview DataFrame # - # + # Minor Data Munging to Re-Format the Data Frames # Preview that Reformatting worked # - # + # Generate the Plot (with Error Bars) # Save the Figure # - # Show the Figure plt.show() # ![Tumor Response to Treatment](../Images/treatment.png) # ## Metastatic Response to Treatment # + # Store the Mean Met. Site Data Grouped by Drug and Timepoint # Convert to DataFrame # Preview DataFrame # - # + # Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint # Convert to DataFrame # Preview DataFrame # - # + # Minor Data Munging to Re-Format the Data Frames # Preview that Reformatting worked # - # + # Generate the Plot (with Error Bars) # Save the Figure # Show the Figure # - # ![Metastatic Spread During Treatment](../Images/spread.png) # ## Survival Rates # + # Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric) # Convert to DataFrame # Preview DataFrame # - # + # Minor Data Munging to Re-Format the Data Frames # Preview the Data Frame # - # + # Generate the Plot (Accounting for percentages) # Save the Figure # Show the Figure plt.show() # - # ![Metastatic Spread During Treatment](../Images/survival.png) # ## Summary Bar Graph # + # Calculate the percent changes for each drug # Display the data to confirm # - # + # Store all Relevant Percent Changes into a Tuple # Splice the data between passing and failing drugs # Orient widths. Add labels, tick marks, etc. # Use functions to label the percentages of changes # Call functions to implement the function calls # Save the Figure # Show the Figure #fig.show() # - # ![Metastatic Spread During Treatment](../Images/change.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.6.6 64-bit (''base'': conda)' # name: python36664bitbaseconda80ab881b08fc4123973af10a07e0b9cc # --- # import library import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt import sklearn.linear_model import sklearn.model_selection import pandas as pd import seaborn as sns # ### In the assignment you should: # # - Load the data and check its correctness # - Explore the basic parameters: how many data points do we have? What are the targets and what is their distribution? Any kind of exploratory data analysis is welcome # - Identify the problem: is it regression? classification? # - Identify metric you're going to use # - Design and run the experiment: train and validate your model # - Compare your results with some kind of baseline (simplest possible solution to the problem) # - (Optional) estimate feature importances and select the most important features # # # # ### load data and check its correctness # + tags=[] # pd_data = pd.read_csv("FINAL-data/winequality-red.csv") nRow, nCol = pd_data.shape print(f'Wine quality : There are {nRow} rows and {nCol} columns') # - # check column types pd_data.dtypes # ### Explore the basic parameters: how many data points do we have? What are the targets and what is their distribution? 
# get the data description pd_data.describe() # check missing data pd_data.isna().sum() pd_data.corr() # + #PEARSON CORRELATION plt.figure(figsize = (15,10)) sns.heatmap(pd_data.corr(method="pearson")) plt.title('PEARSON CORRELATION', fontsize=15) # - fig, axes = plt.subplots(nrows=6,ncols=2) fig.set_size_inches(12, 10) sns.distplot(pd_data["fixed acidity"],ax=axes[0][0]) sns.distplot(pd_data["volatile acidity"],ax=axes[0][1]) sns.distplot(pd_data["citric acid"],ax=axes[1][0]) sns.distplot(pd_data["residual sugar"],ax=axes[1][1]) sns.distplot(pd_data["chlorides"],ax=axes[2][0]) sns.distplot(pd_data["free sulfur dioxide"],ax=axes[2][1]) sns.distplot(pd_data["total sulfur dioxide"],ax=axes[3][0]) sns.distplot(pd_data["density"],ax=axes[3][1]) sns.distplot(pd_data["pH"],ax=axes[4][0]) sns.distplot(pd_data["sulphates"],ax=axes[4][1]) sns.distplot(pd_data["alcohol"],ax=axes[5][0]) sns.distplot(pd_data["quality"],ax=axes[5][1]) # ### Identify the problem: is it regression? classification? # It is a classification problem. Quality of wine can be divided into several class with only discrete numbers. # + tags=[] # remove columns with low correlation. pd_data = pd_data.drop(["residual sugar", "free sulfur dioxide", "pH"], axis=1) #log some columns to let them have a normal distribution pd_data["log_total_SO2"] = np.log1p(pd_data["total sulfur dioxide"]) pd_data["log_alcohol"] = np.log1p(pd_data["alcohol"]) pd_data = pd_data.drop(["total sulfur dioxide", "alcohol", "fixed acidity"], axis=1) pd_data.corr() # + tags=[] X = pd_data.drop("quality", axis=1) Y = pd_data["quality"] print(X.shape) print(Y.shape) # - from imblearn.over_sampling import SMOTE sm=SMOTE() X,Y=sm.fit_sample(X,Y) # ### Identify metric you're going to use # Accuracy of prediction will be the only metric of measuring classification problem. 
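# As a quick illustration of this metric (a minimal sketch with made-up labels, not taken from the wine data): accuracy is simply the fraction of predicted labels that match the true labels.

# +
from sklearn.metrics import accuracy_score

# Hypothetical quality labels, for illustration only: 3 of the 5 predictions match.
y_true_demo = [5, 6, 5, 7, 6]
y_pred_demo = [5, 6, 6, 7, 5]
print(accuracy_score(y_true_demo, y_pred_demo))  # 3/5 = 0.6
# -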
from sklearn.preprocessing import StandardScaler, MinMaxScaler scaler = StandardScaler().fit(X) scaled_X = scaler.transform(X) # + tags=[] from sklearn.model_selection import train_test_split seed = 42 test_size = 0.20 X_train, X_test, Y_train, Y_test = train_test_split(scaled_X, Y, test_size = test_size, random_state = seed) print(X_train.shape) print(X_test.shape) print(Y_train.shape) print(Y_test.shape) # - # ### Design and run the experiment: train and validate your model # + tags=[] from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from matplotlib.colors import ListedColormap from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.datasets import make_moons, make_circles, make_classification from sklearn.neural_network import MLPClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.gaussian_process.kernels import RBF from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier from sklearn.naive_bayes import GaussianNB from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from xgboost import XGBClassifier from lightgbm import LGBMClassifier import time import datetime start = 0 end = 0 start = time.time() # user variables to tune folds = 10 # hold different regression models in a single dictionary models = {} models["DecisionTree"] = DecisionTreeClassifier() models["KNN"] = KNeighborsClassifier() models["RandomForest"] = RandomForestClassifier() models["AdaBoost"] = AdaBoostClassifier() models["GradientBoost"] = GradientBoostingClassifier() models["XGBoost"] = XGBClassifier() models["LGBM"] = LGBMClassifier() models["GaussianNB"] = GaussianNB() models["Quad"] = QuadraticDiscriminantAnalysis() # 10-fold cross validation for each model model_results = [] model_names = [] for model_name in models: model = models[model_name] k_fold = KFold(n_splits=folds, random_state=seed) results = cross_val_score(model, X_train, Y_train, cv=k_fold) model_results.append(results) model_names.append(model_name) print("{}: {}, {}".format(model_name, round(results.mean(), 3), round(results.std(), 3))) end = time.time() list_lapse = end - start print("Time taken for processing {}: {}".format(model_name, str(datetime.timedelta(seconds=list_lapse)))) # box-whisker plot to compare regression models figure = plt.figure(figsize = (20,8)) figure.suptitle('Classification models comparison') axis = figure.add_subplot(111) plt.boxplot(model_results) axis.set_xticklabels(model_names, rotation = 45, ha="right") axis.set_ylabel("Accuracy Score") plt.margins(0.05, 0.1) # + tags=[] # from sklearn.model_selection import RandomizedSearchCV # model = RandomForestClassifier(random_state = 42) # random_grid = {'bootstrap': [True, False], # 'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None], # 'max_features': ['auto', 'sqrt'], # 'min_samples_leaf': [1, 2, 4], # 'min_samples_split': [2, 5, 10], # 'n_estimators': [130, 180, 230]} # print('Parameters currently in use:\n') # print(model.get_params()) # #print random grid # print(random_grid) # rf_random = RandomizedSearchCV(estimator = model, param_distributions = random_grid, n_iter = 100, cv = 5, verbose=2, random_state=42, n_jobs = -1) # # Fit the random search model # rf_random.fit(X_train, Y_train) # rf_random.best_params_ # - # ### Compare your 
results with some kind of baseline (simplest possible solution to the problem) # + tags=[] from sklearn.metrics import accuracy_score import math import statistics #Predicting TEST model = RandomForestClassifier(random_state = 42) model.fit(X_train, Y_train) test_predict = model.predict(X_test) accuracy = accuracy_score(Y_test, test_predict) print("Prediction accuracy of RandomForest. Mean: {}, SD: {}".format(round(accuracy.mean(), 3), round(accuracy.std(), 3))) avg_quality = round(statistics.mean(Y_test)) avg_quality_list = [avg_quality for i in range(Y_test.size)] base_accuracy = accuracy_score(Y_test, avg_quality_list) print("Simple prediction by taking average of all quality score. Mean: {}, SD {}".format(round(base_accuracy.mean(), 3), round(base_accuracy.std(), 3))) # + tags=[] from sklearn.metrics import accuracy_score,classification_report,confusion_matrix print(accuracy_score(Y_test,test_predict)) print(classification_report(Y_test,test_predict)) sns.heatmap(confusion_matrix(Y_test,test_predict),annot=True,cmap="Purples") # + tags=[] import sklearn.metrics real = [9,7,5,4] prediction = [10,8,4,9] #1,1,1,5 = 8/4 =2 print(sklearn.metrics.mean_absolute_error(real,prediction)) print(sklearn.metrics.mean_relative_error(real,prediction)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lekcja 3: Instrukcje warunkowe # ## Spis treści # 1. Przypomnienie: Typ danych `bool`, operatory porównania, operatory logiczne # # # - 1.1. Operatory porównania # # # - 1.2. Operatory logiczne - logiczne zdania złożone # # # 2. Instrukcje warunkowe # # # - 2.1. Instrukcja `if` # # # - 2.2. Instrukcja `if` - `else` # # # - 2.3. Instrukcja `if` - `elif` - `elif` - ... - `elif` - `else` # # # - 2.4. Warunki zagnieżdżone # # # 3. Zadania domowe # ## 1. Przypomnienie: Typ danych `bool`, operatory porównania, operatory logiczne # W tej lekcji skupimy się na dwóch kwestiach: # # - Dowiedzieliśmy się podczas Lekcji 2 o typie danych Boolean (`bool`), opisującym zdania logiczne. Np. wyrażenie `(2 < 5)` jest zdaniem logicznym "2 jest mniejsze niż 5", które jest prawdziwe - mówimy, że jego wartość logiczna to "prawda", w Pythonie oznaczana przez słowo kluczowe `True`. Całe to zdanie jest odrębnym obiektem, "bytem", właśnie typu Boolean. Możemy w szczególności wartość takiego obiektu przypisać do zmiennej, np. `check = (2 < 5)`, i teraz `check` to zmienna typu `bool` o wartości `True`. (Pamiętamy, że moglibyśmy to napisać też bez nawiasów, `check = 2 < 5`, jako że Python najpierw wykonuje obliczenie po prawej stronie operatora przypisania `=`, a dopiero na końcu jego wynik podstawia do zmiennej.) Możemy pokusić się teraz o zdecydowane rozszerzenie funkcjonalności naszych programów - aby nasz program mógł podejmować jakieś, mniej lub bardziej złożone, decyzje, na podstawie warunków logicznych - "jeśli to zdanie logiczne jest prawdziwe, wykonaj następujący fragment kodu". Konstrukcje takie noszą nazwę **instrukcji warunkowych** ("conditional statements") i budowane są za pomocą słów kluczowych `if`, `else`, `elif`. # # - Druga kwestia jest bardziej techniczna: Do tej pory nasze programy były dość proste - były po prostu sekwencją wyrażeń zapisanych jedno po drugim, każde w nowej linijce. Interpreter Pythona przechodził przez taki kod linijka po linijce i wykonywał zawarte w nich instrukcje - można powiedzieć w sposób "linearny". 
Dowiemy się dzisiaj, że wszystkie takie programy były tak naprawdę pojedynczym **blokiem kodu** ("code block"). W kontekście instrukcji warunkowych będziemy mieć natomiast do czynienia z wieloma "rozgałęzionymi" blokami kodu, typu - "jeśli to zdanie jest prawdziwe, wykonaj następujący blok kodu; w przeciwnym razie, wykonaj inny blok kodu". Interpreter sprawdza wówczas prawdziwość danego warunku (zdania logicznego) i wykonuje tylko jeden blok kodu - na odpowiedniej "gałęzi" warunku logicznego. Dowiemy się także o tym, jak w Pythonie oznaczać blok kodu - poprzez **wcięcie** ("indentation") kodu od lewej strony złożone z czterech spacji. # ### 1.1. Operatory porównania # Przypomnijmy sobie teraz szybko to, co dowiedziliśmy się o typie danych `bool`. Podstawową metodą konstrukcji zdań logicznych jest użycie operatorów porównania, które tu sobie znowu wylistujmy: # # | Operator | Działanieh | # | --- | --- | # |

`==` | równe |
# | `!=` | różne |
# | `>` | większe niż |
# | `<` | mniejsze niż |
# | `>=` | większe lub równe niż |
# | `<=` | mniejsze lub równe niż
| # # Podkreślmy raz jeszcze, iż operatorem równości jest `==`, natomiast `=` jest operatorem przypisania do zmiennej. Zatem `x == 5` jest zdaniem logicznym "wartość zmiennej `x` jest równa 5" (może być ono prawdziwe bądź fałszywe, w zależności od wartości zmiennej `x`); natomiast `x = 5` jest instrukcją przypisania do zmiennej `x` wartości 5. # # Operatory te zdefiniowane są dla typów numerycznych (`int`, `float` - gdzie oznaczają znajome porównania arytmetyczne), a także dla typu tekstowego (`str` - gdzie odnoszą się do porządku leksykograficznego). Np.: 5 > 7 'HAniA' == 'Hania' # Możemy porównywać też całe wyrażenia, np.: 5 * 6 + 1 > 45 / 2 'HAniA'.lower() == 'Hania'.lower() # ... gdyż Python najpierw wykonuje obliczenia, otrzymując ostateczną wartość każdego wyrażenia, a dopiero na końcu dokonuje porównania. # Możemy oczywiście też porównywać wartości opisane przez zmienne, np.: # + x = 5 * 6 y = 23 - 1 x != y # x ma wartość 30, y ma wartość 22, zatem zdanie "x jest różne od y" jest prawdziwe # - # Szybkie ćwiczenie 1: Zdefiniuj zmienną `name`, do którego przypiszesz jakiś string, np. `'Ania'`. Napisz zdanie logiczne sprawdzające, czy imię w zmiennej `name` zgadza się z imieniem "Ola" (jest to imię pani porucznik MO, współpracującej z kpt. Żbikiem w historii "Kryptonim >>Walizka<<"). # + # szybkie ćwiczenie 1 - rozwiązanie # - # Szybkie ćwiczenie 2: Zdefiniuj zmienną `num`, do której przypisz jakąś liczbę całkowitą. Napisz zdanie logiczne sprawdzające, czy jest ona parzysta - wypisz tę informację dla użytkownika w postaci "Liczba <...> jest parzysta: <...>". # # Wskazówka: Pamiętaj o operatorze reszty z dzielenia `%`. Do samego drukowania wykorzystaj jedną z poznanych metod wypisywania stringów (konkatenację za pomocą operatora `+`, metodę `format`, bądź interpolację stringów). # + # szybkie ćwiczenie 2 - rozwiązanie # - # ### 1.2. Operatory logiczne - logiczne zdania złożone # Nic nie staje na przeszkodzie, aby budować bardziej złożone wyrażenia logiczne - mamy przecież do dyspozycji operatory logiczne! # # | Operator | Działanie | # | ---|--- | # |
`and` | "i"; zwraca `True`, gdy oba wyrażenia są `True` |
# | `or` | "lub"; zwraca `True`, gdy przynajmniej jedno wyrażenie jest `True` |
# | `not` | "nie"; negacja/zaprzeczenie, odwraca `True` na `False` i odwrotnie
| # Napiszmy dla przykładu następujący (dość uproszczony, oczywiście) algorytm wyszukiwania miejsca wypoczynku na wakacje: # + # definicje zmiennych price = 3500 hotel_stars = 4 all_inclusive = True distance_to_beach = 400 private_beach = False # czy jesteśmy zainteresowani ofertą? ( price <= 5000 ) and ( hotel_stars >= 4 ) and all_inclusive and ( distance_to_beach < 500 ) and private_beach # - # Mamy tutaj pięć zdań logicznych, a więc wyrażeń typu `bool`, połączonych operatorem `and` - całe takie zdanie złożone będzie miało wartość `True` jedynie wtedy, kiedy wszystkie zdania składowe będą mieć wartość `True`. # # Zwróćmy uwagę, że pośród pięciu zmiennych, jakie zdefiniowaliśmy powyżej, mamy dwie zmienne typu `bool`, tj. `all_inclusive` i `private_beach`. Zauważmy, jak są one użyte w zdaniu złożonym; nie musimy pisać np. `all_inclusive == True` (co byłoby najzupełniej poprawne) - wystarczy samo `all_inclusive`, jako że ono samo jest typu `bool`. # Aby poprawić czytelność takich złożonych zdań logicznych, przypomnijmy sobie ważny punkt z Lekcji 2 - wartości logiczne możemy w normalny sposób przypisywać do zmiennych. Wyrażenie logiczne `(price <= 5000)` jest jako całość obiektem typu `bool`, możemy więc przypisać je do zmiennej, `price_check = (price <= 5000)`, i teraz zmienna `price_check` jest typu `bool` o wartości `True` albo `False`, w zależności od tego, jaką wartość ma zmienna `price` w porównaniu do 5000. Moglibyśmy napisać więc np. (pomińmy nawiasy, abyśmy oswoili się z tą notacją!): # + price_check = price <= 5000 # do zmiennej o nazwie price_check przypisujemy wartość obiektu typu bool (price <= 5000) hotel_stars_check = hotel_stars >= 4 distance_to_beach_check = distance_to_beach < 500 is_interesting = price_check and hotel_stars_check and all_inclusive and distance_to_beach_check and private_beach # - # ... wszystkie te nowe zmienne są typu `bool` i mają następujące wartości: print( type( price_check ) ) print( price_check ) print( type( hotel_stars_check ) ) print( hotel_stars_check ) print( type( distance_to_beach_check ) ) print( distance_to_beach_check ) print( type( is_interesting ) ) print( is_interesting ) # Dłuższe ćwiczenie 3: Zmodyfikuj powyższą zmienną `is_interesting`, aby odpowiadała na wszystkie następujące warunki: (1) cena jest poniżej 5000, ale powyżej 2000; (2) liczba gwiazdek jest co najmniej 3 oraz oferta jest all inclusive, lub też liczba gwiazdek jest co najmniej 4; (3) odległość do plaży jest co najwyżej 200 i plaża jest prywatna, lub też odległość do plaży jest co najwyżej 50. Każdy z cząstkowych warunków zapisz jako osobną zmienną typu `bool`. # + # dłuższe ćwiczenie 3 - rozwiązanie # - # ## 2. Instrukcje warunkowe # Poznaliśmy już dobrze typ `bool` w Pythonie - jak tworzyć zdania logiczne za pomocą operatorów porównania; jak przypisywać je do zmiennych; jak wykonywać na tych zmiennych operacje logiczne. Przejdźmy teraz do głównej części lekcji - jak wykorzystywać zdania logiczne do pisania **instrukcji warunkowych**, a więc instrukcji typu "jeśli to zdanie logiczne jest prawdziwe, wykonaj następujący fragment kodu (a jeśli jest fałszywe, to wykonaj inny fragment kodu)". Nasz program będzie zatem zachowywał się inaczej w zależności od zadanego warunku logicznego. # # Są trzy wersje instrukcji warunkowych: # # - `if` # # - `if` - `else` # # - `if` - `elif` - `elif` - ... - `elif` - `else` # ### 2.1. Instrukcja `if` # Najprostsza składnia warunkowa ma następującą konstrukcję: # ``` # ... 
# statement # statement # # if condition: # statement # statement # ... # statement # # statement # statement # ... # ``` # # Wygląda ona tak: # - zaczyna się słówkiem kluczowym `if`; # - po nim następuje _dowolne wyrażenie logiczne_, tutaj nazwane `condition`; jest to dowolny obiekt typu `bool` (jak np. powyższa zmienna `is_interesting`), w szczególności mógłby on być powstały z kombinacji wielu innych warunków logicznych itd.; # - linijka kończy się dwukropkiem `:` - nie zapomnijcie go! # - następnie rozpoczyna się **blok kodu**, charakteryzujący się wcięciem na cztery spacje - ten blok kodu wykona się tylko i wyłącznie wtedy, jeśli zdanie logiczne `condition` będzie prawdziwe; jeśli będzie fałszywe, cały ten blok kodu zostanie przez interpreter pominięty - nic się nie wydarzy. # # Aby podkreślić wcięcie bloku kodu dodaliśmy kilka linijek przed i po instrukcji `if`. Zwróćmy też uwagę, iż blok kodu pod `if`-em może składać się z dowolnej liczby linijek. # Uwagi praktyczne nt. bloków kodu: # # - W innych językach programowania - np. C, C++, C#, Java, JavaScript, TypeScript - używa się nawiasów do oznaczenia bloków kodu, a wszystkie tzw. białe znaki (spacje, tabulatory) są ignorowane; w Pythonie jest inaczej - poczwórna spacja oznacza fragment bloku kodu. Jest to notacja bardzo czytelna - blok kodu jest łatwo widoczny - tym niemniej należy uważać na to, co jest wcięte, a co nie: wiele błędów pojawia się z powodu niewłaściwego wcięcia! # # - Edytor kodu jakim jest Jupyter Lab - a także większość innych nowoczesnych edytorów kodu, jak np. najpopularniejsze IDE Pythona, czyli PyCharm - automatycznie dokonują wcięcia na cztery spacje po przejściu `Enter`-em do nowej linijki z linijki typu `if condition:`. Edytor rozumie, iż po `if condition:` zaczyna się blok kodu i sam umieszcza kursor na wcięciu. # # - We wszystkich nowoczesnych edytorach, klawisz tabulatora `Tab` ma przypisane właśnie dokładnie cztery spacje - a więc nigdy nie trzeba ich ręcznie wpisywać, wystarczy nacisnąć `Tab`! Zauważmy jednak - to może nieco akademicki przykład - że gdybyśmy mieli zachciankę napisać kod np. w Notatniku Windowsa (absolutnie możemy to zrobić! kod to przecież czysty tekst), to klawisz `Tab` w Notatniku ma zwykle inną liczbę spacji niż cztery - i wówczas interpreter Pythona, dostając taki kod, by zaprotestował! # Prosty przykład: # + x = 78 if x % 2 == 0: # jeśli liczba x jest podzielna przez 2... print( f'{x} is an even number!' ) # ... to wypisz takie zdanie # - # Warunkiem logicznym jest tutaj `x % 2 == 0` - jeśli to zdanie logiczne jest prawdą, Python przechodzi do wykonywania wciętego bloku kodu poniżej. Dla większej czytelności moglibyśmy wziąć je w nawias: # ``` # if (x % 2 == 0): # ... # ``` # ... ale jak już zauważyliśmy, takich nawiasów często się nie pisze. # Blok kodu może zawierać dowolną liczbę linijek: # + x = 78 if x % 2 == 0: # jeśli liczba x jest podzielna przez 2... print( f'{x} is an even number!' ) x_by_2 = x / 2 print( f'{x} divided by 2 is {x_by_2}' ) # - # Spróbujcie wykonać powyższe komórki podając zarówno parzystą, jak i nieparzystą liczbę - w tym drugim przypadku nic się nie stanie: blok kodu nie wykona się, jako że warunek nie jest spełniony. # Oczywiście cały kod znajdujący się poza blokiem `if` zostanie wykonany bez względu na to, czy warunek jest spełniony - jego nasz `if` nie dotyczy: # + x = 78 if x % 2 == 0: # jeśli liczba x jest podzielna przez 2, wykonaj następujący blok kodu print( f'{x} is an even number!' 
) x_by_2 = x / 2 print( f'{x} divided by 2 is {x_by_2}' ) # tu już jesteśmy poza blokiem if (skończyło się wcięcie) i poniższy kod jest wykonywany niezależnie od powyższego if-a x_by_3 = x / 3 print( f'{x} divided by 3 is {x_by_3}' ) # - # Takie bloki `if` mogą oczywiście występować jeden po drugim, np.: # + x = 78 # pierwszy warunek if if x % 2 == 0: # jeśli liczba x jest podzielna przez 2, wykonaj następujący blok kodu print( f'{x} is an even number!' ) x_by_2 = x / 2 print( f'{x} divided by 2 is {x_by_2}' ) # drugi warunek if (zupełnie niezależny od poprzedniego) if x % 3 == 0: # jeśli liczba x jest podzielna przez 3, wykonaj następujący blok kodu print( f'{x} is divisible by 3!' ) x_by_3 = x / 3 print( f'{x} divided by 3 is {x_by_3}' ) # - # Szybkie ćwiczenie 4: Zdefiniuj zmienne `age` i `sex`, przypisz do nich wiek użytkownika (liczbę całkowitą dodatnią) i płeć (w formie stringu: albo `'M'`, albo `'F'`). Wydrukuj wiadomość, ile lat pozostało użytkownikowi do przejścia na emeryturę (wiek emerytalny to 60 lat dla kobiet i 65 lat dla mężczyzn), np. w formie `'You have <...> years till retirement.'`. # + # szybkie ćwiczenie 4 - rozwiązanie # - # ### 2.2. Instrukcja `if` - `else` # Konstrukcję warunkową `if` możemy rozszerzyć w następujący sposób: "jeśli dany warunek jest prawdziwy, wykonaj poniższy blok kodu - a w przeciwnym razie (czyli jeśli ten warunek jest fałszywy), wykonaj inny blok kodu". Ogólna składnia to: # ``` # if condition: # statement # statement # ... # statement # else: # statement # statement # ... # statement # ``` # Pierwsza część jest zatem identyczna, jak wcześniej. Mamy za to dodatkową część, rozpoczynającą się słowem kluczowym `else` i dwukropkiem `:`, po których zaczyna się nowy blok kodu - i jest on wykonywany wtedy, kiedy zdanie `condition` jest fałszywe. Zwróćmy uwagę na wcięcia obu bloków! # Zobaczmy to na przykładzie - zmodyfikujmy nasz program tak, by wypisywał informację, czy podana liczba jest parzysta czy też nieparzysta: # + x = 15 if x % 2 == 0: # jeśli liczba x jest podzielna przez 2... print( f'{x} is an even number!' ) # ... to wypisz takie zdanie else: # ... a w przeciwnym razie (czyli, gdy x jest nieparzysta) print( f'{x} is an odd number!' ) # ... to wypisz inne zdanie # - # Szybkie ćwiczenie 5: Zdefinuj zmienną `num`, przypisz do niej liczbę całkowitą. Napisz program, który sprawdza, czy liczba ta jest jednocześnie podzielna przez 2, 3 i 5. Jeśli tak, wydrukuj zdanie typu `'<...> is divisible by 2, 3, and 5'`; w przeciwnym razie, wydrukuj inne (przeczące) zdanie. # # Wskazówka: Jakiego operatora logicznego należy tu użyć? Możesz zapisać łączny warunek bezpośrednio po `if`-ie, lub też - dla większej czytelności - zdefiniować warunki cząstkowe jako osobne zmienne logiczne i potem połączyć je w `if`-ie. # + # szybkie ćwiczenie 5 - rozwiązanie # - # Szybkie ćwiczenie 6: Zdefiniuj zmienną `letter` i przypisz do niej string będący jedną literą (ograniczmy się w tym ćwiczeniu do małych liter i do języka angielskiego). Napisz program, który wydrukuje informację, czy jest to samogłoska, czy spółgłoska, tj. `'<...> is a vowel|consonant'`. # # Wskazówka: Napisz najpierw zdanie złożone `is_vowel`, sprawdzające, czy dana litera jest równa `'a'`, lub też `'e'`, lub `'i'`, `'o'`, `'u'` (jaki operator logiczny należy tu użyć?). Następnie napisz odpowiednią instrukcję `if` - `else`. # # Uwaga: `'y'` jest w języku angielskim czasem samogłoską, czasem spółgłoską; tutaj traktujemy ją jako spółgłoskę. 
Niżej nauczymy się sprawdzać bardziej skomplikowane sytuacje - nie tylko "jeśli ten warunek jest prawdziwy, zrób to, a jeśli nie, zrób tamto", ale wiele warunków naraz. Wtedy będziemy mogli wziąć `'y'` pod uwagę i wypisać `'y is sometimes a vowel and sometimes a consonant'`. # + # szybkie ćwiczenie 6 - rozwiązanie # - # ### 2.3. Instrukcja `if` - `elif` - `elif` - ... - `elif` - `else` # Jest to najpełniejsza składnia warunkowa w Pythonie - daje nam ona możliwość sprawdzenia dowolnej liczby warunków logicznych i wykonania odpowiedniego bloku kodu: # # - konstrukcja zaczyna się, jak zawsze, od `if`, sprawdzającego warunek logiczny `condition_1`; jeśli jest on prawdziwy, wykonywany jest następujący blok kodu, zaś pozostała część instrukcji warunkowej jest pomijana; # # - jeśli `condition_1` jest zdaniem fałszywym, interpreter przechodzi do sprawdzenia kolejnego warunku, `condition_2`, znajdującego się po słowie kluczowym `elif` (skrót od "else-if", znaczącego "jeśli poprzedni warunek jest fałszywy i jeśli obecny warunek jest prawdziwy") - i wykonuje znajdujący się pod spodem blok kodu jeśli `condition_2` jest zdaniem prawdziwym; # # - w ten sam sposób możemy sprawdzać dowolną liczbę warunków, każde po słowie `elif`; ważne jest zapamiętanie, że _wykonany zostanie tylko jeden blok kodu z całej serii_, tj. ten, dla którego pierwszego warunek będzie prawdziwy; reszta instrukcji warunkowej jest pomijana (nawet gdyby któryś inny warunek poniżej też był prawdziwy); # # - jeśli żaden z warunków - najpierw po `if`, a potem po kolejnych `elif`-ach - nie jest prawdziwy, zostaje wykonany kod znajdujący się na samym końcu, po słowie `else`; uwaga: ten blok `else` jest opcjonalny, nie musi go być, a wówczas jeśli żaden z warunków nie jest prawdziwy, to cała instrukcja warunkowa nie będzie miała po prostu żadnego efektu. # # ``` # if condition_1: # statement # statement # ... # statement # elif condition_2: # statement # statement # ... # statement # elif condition_3: # statement # statement # ... # statement # ... # elif condition_n: # statement # statement # ... # statement # else: # statement # statement # ... # statement # ``` # # Jak zawsze, każdy blok kodu jest wcięty; zawsze też po każdym warunku (i po `else`) jest dwukropek `:`. # Prosty przykład: # + x = -12 if x == 0: print( f'{x} is zero!' ) elif x > 0: print( f'{x} is positive!' ) else: print( f'{x} is negative!' ) # - # Dłuższe ćwiczenie 7: Zdefiniuj zmienną `month` będącą nazwą miesiąca (powiedzmy po angielsku, `'January'` itd.) i wypisz liczbę dni, jakie ten miesiąc ma; dla lutego napisz `'February has 28 or 29 days'`. # + # dłuższe ćwiczenie 7 - rozwiązanie # - # Dłuższe ćwiczenie 8: Napisz program dobierający rozmiar ramy rowerów trekkingowych wg poniższej tabelki: # # |
Wzrost rowerzysty do | Rozmiar ramy roweru trekkingowego |
# | --- | --- |
# | 152 cm | 16” (41 cm) |
# | 157 cm | 17” (43 cm) |
# | 162 cm | 18” (46 cm) |
# | 167 cm | 18” (46 cm) |
# | 172 cm | 19” (48 cm) |
# | 175 cm | 19” (48 cm) |
# | 177 cm | 20” (51 cm) |
# | 182 cm | 20” (51 cm) |
# | 187 cm | 21” (53 cm) |
# | 192 cm | 22” (56 cm) |
# | 197 cm | 23” (58 cm)
| # # Zauważ, że niektóre rozmiary ramy powtarzają się. Możesz uzupełnić ten program o wypisanie wiadomości w stylu `'We have no bike frames large enough!'`, jeśli wzrost będzie wykraczał poza zakres w tabelce. Przyjmij od użytkownika jego wzrost - poprzez zdefiniowanie zmiennej `user_height` - a następnie zwróć rozmiar ramy (najlepiej z ładną wiadomością); wybierz, czy w calach, czy w centymetrach. # + # dłuższe ćwiczenie 8 - rozwiązanie # - # Powyższe ćwiczenie jest może nieco monotonne - ale postaraj się jednak wypisać pełną instrukcję warunkową: mamy nadzieję, że dzięki temu nigdy nie zapomnisz, iż po warunku musi być dwukropek, a blok warunkowy ma być wcięty! # ### 2.4. Warunki zagnieżdżone # Poznaliśmy już zatem wszystkie trzy rodzaje instrukcji warunkowych w Pythonie. Konstrukcje te można jednak jeszcze bardziej skomplikować! - poprzez **zagnieżdżanie** ("nesting") instrukcji warunkowych w innych instrukcjach warunkowych. Jest to prostsze niż brzmi - oznacza jedynie, iż każdy blok w danej instrukcji warunkowej może sam w sobie zawierać instrukcję warunkową. Czemu by i nie? Blok kodu może zawierać przecież dowolny kod - w tym i inne instrukcje warunkowe. # # Zagnieżdżone bloki warunkowe muszą oczywiście też być wcięte - o kolejne cztery spacje! # Jako przykład wróćmy do naszego programu sprawdzającego, czy liczba całkowita jest parzysta. Utrudnijmy go jednak nieco: przyjmijmy, że zmienna `x` (typu `int`) pochodzi ze stringu `x_raw`, który użytkownik wpisał, np. `'17'`. (Służy do tego funkcja `input` - sprawdź w Internecie jej działanie!) Ten string `x_raw` następnie próbujemy przekonwertować na liczbę całkowitą instrukcją `x = int(x_raw)`, a dopiero potem sprawdzić jej parzystość. # # Teraz, gdybyśmy jako `x_raw` podali mu string niedający się zamienić na liczbę całkowitą, np. `'8.8'` lub `'abc'`, interpreter zareaguje błędem: # + x_raw = '8.8' x = int( x_raw ) if x % 2 == 0: print( f'{x} is an even number!' ) else: print( f'{x} is an odd number!' ) # - # Sytuację tego typu możemy wziąć pod uwagę dodając "zewnętrzną" instrukcję warunkową, która najpierw sprawdzi, czy wpisany tekst `x_raw` da się przekonwertować na liczbę całkowitą. Aby napisać ten warunek, przypomnijmy sobie z Lekcji 2 metodę stringów `isdigit`, która zwraca obiekt typu `bool` odpowiadający na pytanie, czy dany string jest łańcuchem tylko i wyłącznie cyfr: '1537652'.isdigit() '8.8'.isdigit() # tu oprócz cyfr mamy kropkę '-10'.isdigit() # tu oprócz cyfr mamy myślnik ' 5'.isdigit() # tu oprócz cyfr mamy spację 'abc'.isdigit() # Możemy więc rozważyć warunek logiczny `x_raw.isdigit()`; będzie on prawdziwy tylko wtedy, kiedy string `x_raw` będzie składał się tylko z cyfr, a wówczas operacja `x = int(x_raw)` bezproblemowo przekonwertuje go na odpowiednią liczbę całkowitą. # # Bądźmy jeszcze bardziej dokładni i postarajmy się wziąć pod uwagę sytuację, kiedy użytkownik przypadkowo wpisał spacje przed lub po danej liczbie. Weźmy pod uwagę też liczby ujemne. Aby to zrobić, ze stringu `x_raw` usuńmy najpierw wszystkie ewentualne spacje po lewej i prawej, a także ewentualny myślnik (znak minus) po stronie lewej. Przypomnijmy sobie z Lekcji 2 służące do tego metody stringów `strip` i `lstrip` (jest też `rstrip`). 
' 5 '.strip() '-10'.lstrip( '-' ) # Nasz warunek przyjmie więc postać: `x_raw.strip().lstrip('-').isdigit()` - najpierw usuń z `x_raw` spacje po obu stronach, potem usuń myślnik po lewej, na końcu sprawdź, czy otrzymany string składa się tylko i wyłącznie z cyfr: ' -7892 '.strip().lstrip( '-' ).isdigit() # Poćwiczyliśmy przy okazji kilka metod stringów! Zauważmy, że metody możemy składać jedna po drugiej, każda oczywiście po kolejnej kropce. Nierzadko warunki logiczne w instrukcjach warunkowych powstają właśnie z tego typu zastosowania metod. # Wróćmy do tematu zagnieżdżonych instrukcji warunkowych. Najbardziej "zewnętrzną" instrukcją będzie sprawdzenie powyższego warunku i tylko jeśli będzie on prawdziwy, to przejdziemy do "wewnętrznej" instrukcji warunkowej sprawdzającej parzystość: # + x_raw = '8.8' if x_raw.strip().lstrip( '-' ).isdigit(): x = int( x_raw ) if x % 2 == 0: print( f'{x} is an even number!' ) else: print( f'{x} is an odd number!' ) else: print( f'{x_raw} is not an integer! Please enter a valid integer number.' ) # - # Zwróćmy uwagę na podwójne wcięcie w wewnętrznych blokach warunkowych! Zwróćmy też uwagę, iż puste linijki - dodane tu dla większej czytelności - w niczym nie wadzą: Python poznaje blok kodu po wcięciu. # Dłuższe ćwiczenie 9: Zmodyfikuj program z ćwiczenia 8, tak by najpierw pytał o typ roweru - trekkingowy albo MTB - a następnie dla rowerów MTB dobierał ramę wg poniższej tabeli: # # |
Wzrost rowerzysty do | Rozmiar ramy roweru MTB |
# | --- | --- |
# | 152 cm | 14” (36 cm) |
# | 157 cm | 15” (38 cm) |
# | 162 cm | 16” (41 cm) |
# | 167 cm | 16” (41 cm) |
# | 172 cm | 17” (43 cm) |
# | 175 cm | 17” (43 cm) |
# | 177 cm | 18” (46 cm) |
# | 182 cm | 18” (46 cm) |
# | 187 cm | 19” (48 cm) |
# | 192 cm | 20” (51 cm) |
# | 197 cm | 21” (53 cm)
| # # (Dla rowerów trekkingowych użyj tabeli z ćwiczenia 8.) # + # dłuższe ćwiczenie 9 - rozwiązanie # - # ## 3. Zadania domowe # Dłuższe ćwiczenie 10: Ścianka kostki do gry. # ``` # ----- # | *| # | * | # |* | # ----- # ``` # # Napisz program, który przyjmie liczbę całkowitą od 1 do 6 (nazwijmy ją powiedzmy `num`) i wydrukuje reprezentację ścianki kostki do gry o tym numerze (jak wyżej pokazano dla liczby 3) - ścianki górne zaznaczone pięcioma myślnikami `'-'`, a także ścianki boczne zaznaczone znakiem "pipe" `'|'`; pośrodku tych ścianek mamy wydrukowane trzy linijki, po trzy znaki w każdej, z czego znak może być albo spacją `' '`, albo gwiazdką `'*'`. # # Wskazówka: Można by napisać instrukcję warunkową z sześcioma warunkami, np. `num == 1` itd., gdzie dla każdej wartości `num` drukowalibyśmy odpowiednią reprezentację ścianki. Ale postarajmy się być bardziej zmyślni i wykorzystać istniejącą tu symetrię! W tym celu, zdefiniuj trzy zmienne, `first_row`, `second_row`, `third_row`, które będą przybierały formę trójznakowych stringów, np. dla liczby 3 byłyby one równe odpowiednio `' *'`, `' * '`, `'* '`. Napisz teraz dwie oddzielne instrukcje warunkowe, każda z nich mająca tylko trzy (nie sześć!) warunki, a więc `if` - `elif` - `elif`. Pierwsza instrukcja niech przypisuje zmienne `first_row` oraz `third_row`, natomiast druga instrukcja zmienną `second_row`. Zastanów się, dlaczego tak warto zrobić (podpowiedź: pierwszy i trzeci rząd są bardzo do siebie podobne). # # Uwaga: Jest to przykład tzw. "ASCII art". # + # dłuższe ćwiczenie 10 - rozwiązanie # - # Dłuższe ćwiczenie 11: Kolor pola na szachownicy. # # Napisz program, który przyjmie liczbę całkowitą od 1 do 8 (nazwijmy ją `row`) oraz małą literę od `'a'` do `'h'` (nazwijmy ją `col`) i zwróci kolor (nazwijmy go `color`) odpowiedniego pola na szachownicy, tj. `'white'` albo `'black'`. # # Wskazówka: Zauważ, że kolor pola szachownicy zależy jednoznacznie od dwóch faktów: czy numer rzędu `row` jest parzysty, czy nie; oraz czy numer kolumny jest parzysty, czy nie - przy czym kolumny są numerowane literami, więc "parzystość" w tym przypadku oznacza, że `col` jest jedną z liter `'b'`, `'d'`, `'f'`, `'h'`. Zdefiniuj zmienne typu `bool`, nazwijmy je `row_even` i `col_even`, które będą miały wartość `True`, gdy rząd/kolumna będą parzyste w tym sensie. Napisz teraz zagnieżdżoną instrukcję warunkową przy użyciu tych dwóch zdań logicznych. # + # dłuższe ćwiczenie 11 - rozwiązanie # - # Wersja nieco bardziej zaawansowana, ale i elegancka: Powyżej zdefiniowana zmienna `col_even` była stworzona dość "manualnie", poprzez wylistowanie "parzystych" liter alfabetu (za pomocą odpowiedniego operatora logicznego). To byłby problem gdybyśmy np. mieli planszę szachową nie 8 x 8, ale np. 20 x 20. Spróbuj przepisać definicję zmiennej `col_even` w bardziej elegancki sposób - niech będzie ona liczbą odpowiadającą literze, tj. 1 zamiast `'a'`, 2 zamiast `'b'` itd. # # Wskazówka: Rozważ string pomocniczy `chess_alphabet = 'abcdefgh'`. Przypomnij sobie z Lekcji 2 metodę stringów `find`, pozwalającą znaleźć pozycję danego znaku w stringu (i pamiętaj, że pozycja taka liczona jest od zera!). # + # dłuższe ćwiczenie 11 - rozwiązanie alternatywne # - # Dłuższe ćwiczenie 12: Chatbot na siłowni. # # (a) Wstęp na siłownię mają osoby w wieku powyżej 14 lat. Napisz program, który przeprowadzi selekcję klientów, pytając ich o wiek (zdefiniuj zmienną `age`, przypisz jej jakąś wartość typu `int`) i wypisując odpowiednią wiadomość na ekranie. 
# # (b) Siłownia wprowadziła promocję - wejściówki dla pań kosztują 10 zł, a dla panów 15 zł. Zmodyfikuj poprzedni program poprzez rozszerzenie go o weryfikację płci (zdefiniuj zmienną `sex`, przypisz jej string `'M'` albo `'F'`) i wypisanie odpowiedniej wiadomości, czy klient może wejść, a jeśli tak, to ile ma zapłacić. # # (c) Siłownia wprowadziła zniżkę dla studentów - po okazaniu ważnej legitymacji studenckiej wejściówka kosztuje 5zł, bez względu na płeć. Zmodyfikuj odpowiednio poprzedni program (zdefiniuj zmienną `student`, przypisz jej jakąś wartość typu `bool`). # # (d) Siłownia udostępniła saunę dla klientów - jeżeli oprócz siłowni klient chce również skorzystać z sauny, całkowity koszt wstępu to 20 zł, bez względu na płeć i bez względu na ewentualną legitymację studencką. Zmodyfikuj odpowiednio poprzedni program (zdefiniuj zmienną `sauna`, przypisz jej jakąś wartość typu `bool`). # + # dłuższe ćwiczenie 12a - rozwiązanie # + # dłuższe ćwiczenie 12b - rozwiązanie # + # dłuższe ćwiczenie 12c - rozwiązanie # + # dłuższe ćwiczenie 12d - rozwiązanie # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import numpy as np; import math; import random; # + class Qubit(): def __init__(self, initialState = np.array([1,0], dtype=complex)): if (self.checkState(initialState)): self.initialState = initialState; self.currentState = initialState; else: raise Exception("Does not meet normalization constraint."); def getInitialState(self): return self.initialState; def checkState(self, state): if (np.linalg.norm(state) == 1): return True; else: return False; def getCurrentState(self): return self.currentState; def setCurrentState(self, state): self.currentState = state; def measure(self, iterations = 100): prob0 = abs(self.currentState[0] ** 2); prob1 = abs(self.currentState[1] ** 2); print(prob1); distribution = [0, 0]; for i in range(iterations): if (random.uniform(0, 1) > prob1): distribution[0] += 1; else: distribution[1] += 1; print(distribution); class Gate(): def __init__(self, transformation = np.mat('0 1; 1 0', dtype = complex)): print("Creating a new gate"); self.transformation = transformation; def processQubit(self, qubit): print((qubit.getCurrentState())); qubit.setCurrentState(np.asarray(self.transformation.dot(qubit.getCurrentState()))[0]); print(qubit.getCurrentState()); class HadamardGate(Gate): def __init__(self): Gate.__init__(self, 1/math.sqrt(2)*np.mat('1 1; 1 -1', dtype = complex)); print("Creating a hadamard gate"); class PauliXGate(Gate): def __init__(self): Gate.__init__(self, np.mat('0 1; 1 0', dtype = complex)); print("Creating a pauli-x gate"); class PauliYGate(Gate): def __init__(self): Gate.__init__(self, np.mat('0 -1j; 1j 0', dtype = complex)); print("Creating a hadamard gate"); class PauliZGate(Gate): def __init__(self): Gate.__init__(self, np.mat('1 0; 0 -1', dtype = complex)); print("Creating a hadamard gate"); # + qubit = Qubit(np.array([0,1], dtype=complex)); myGate = HadamardGate(); myGate.processQubit(qubit); qubit.measure(); # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This notebook was prepared by [](https://github.com/rishihot55). 
Source and license info is available on [Github](https://github.com/donnemartin/interactive-coding-challenges). # # Challenge Notebook # ## Problem: Find all valid combinations of n-pairs of parentheses. # # * [Constraints](#Constraints) # * [Test Cases](#Test-Cases) # * [Algorithm](#Algorithm) # * [Code](#Code) # * [Unit Test](#Unit-Test) # * [Solution Notebook](#Solution-Notebook) # ## Constraints # # * Is the input an integer representing the number of pairs? # * Yes # * Can we assume the inputs are valid? # * No # * Is the output a list of valid combinations? # * Yes # * Should the output have duplicates? # * No # * Can we assume this fits memory? # * Yes # ## Test Cases # #
# * None -> Exception
# * Negative -> Exception
# * 0 -> []
# * 1 -> ['()']
# * 2 -> ['(())', '()()']
# * 3 -> ['((()))', '(()())', '(())()', '()(())', '()()()']
# 
# ## Algorithm # # Refer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/recursion_dynamic/n_pairs_parentheses/n_pairs_parentheses_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. # ## Code class Parentheses(object): def find_pair(self, num_pairs): # TODO: implement me pass # ## Unit Test # + # # %load test_n_pairs_parentheses.py import unittest class TestPairParentheses(unittest.TestCase): def test_pair_parentheses(self): parentheses = Parentheses() self.assertRaises(TypeError, parentheses.find_pair, None) self.assertRaises(ValueError, parentheses.find_pair, -1) self.assertEqual(parentheses.find_pair(0), []) self.assertEqual(parentheses.find_pair(1), ['()']) self.assertEqual(parentheses.find_pair(2), ['(())', '()()']) self.assertEqual(parentheses.find_pair(3), ['((()))', '(()())', '(())()', '()(())', '()()()']) print('Success: test_pair_parentheses') def main(): test = TestPairParentheses() test.test_pair_parentheses() if __name__ == '__main__': main() # - # ## Solution Notebook # # Review the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/recursion_dynamic/n_pairs_parentheses/n_pairs_parentheses_solution.ipynb) for a discussion on algorithms and code solutions. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.8 ('base') # language: python # name: python3 # --- # # Python Learning Sessions: Feature Engineering # ## Basics # ### Pipelines # Piplines are a way to combine multiple steps of preprocessing into a single step. # Pandas has a built-in function for creating pipelines. The method can be called by a DataFrame.pipe() method. You can pass a function to the pipe() method and chain multiple transformations together by calling the pipe() method after another. # # SKLearn has a built-in pipeline class that sequentially applies each step of the pipeline. SKLearn also has ColumnTransformer class that can be used to combine multiple steps of preprocessing into a single step. These two classes can be used together to provide a more flexible way to combine multiple steps of preprocessing into a single step. # #### Pandas Pipeline # --- # When applying a series of transformations to a dataframe, we usually call it like this: # # ```python # df = h(df) # df = g(df, arg1=a) # df = func(df, arg2=b, arg3=c) # ``` # # Or we can nest the transformations using functions: # ```python # df = func(g(h(df), arg1=a), arg2=b, arg3=c) # ``` # # Or we can use the pipe() method to apply a series of transformations to a dataframe: # # ```python # (df.pipe(h) # .pipe(g, arg1=a) # .pipe(func, arg2=b, arg3=c) # ) # ``` # You can define a pipeline operator as a function that takes a dataframe and returns a dataframe. 
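# Before the full taxi example below, here is a minimal sketch of such a pipeline operator (the function name and the toy frame are illustrative, not part of the dataset loaded next): an ordinary function that takes a DataFrame (plus keyword arguments) and returns a DataFrame, so that several steps can be chained with `.pipe()`.

# +
import pandas as pd

def add_total(df, col_a, col_b):
    """Toy pipeline step: return a copy of df with an extra 'total' column."""
    df = df.copy()
    df["total"] = df[col_a] + df[col_b]
    return df

demo = pd.DataFrame({"fare": [10.0, 7.5], "tip": [2.0, 1.0]})
demo.pipe(add_total, col_a="fare", col_b="tip")
# -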
# + import pandas as pd import numpy as np link = 'https://raw.githubusercontent.com/mwaskom/seaborn-data/master/taxis.csv' df = pd.read_csv(link) df # + def get_unique_routes(df): """ Get the number of unique routes Parameters ---------- df : pandas.DataFrame DataFrame with the taxi data Returns ------- df : pandas.DataFrame DataFrame with the taxi data with the number of unique routes """ df = df.groupby(by=['pickup_borough', 'dropoff_borough']).agg({ 'distance': 'mean', 'fare': 'mean', 'tip': 'mean', 'tolls': 'mean', 'payment': pd.Series.mode, 'color': pd.Series.mode, }) df.reset_index(inplace=True) df['route'] = df.pickup_borough + '-' + df.dropoff_borough df.drop(['pickup_borough', 'dropoff_borough'], axis=1, inplace=True) return df def bin_distance(df, bins): """ Bin the distance column Parameters ---------- df : pandas.DataFrame DataFrame with the taxi data bins : list List of bins to use Returns ------- df : pandas.DataFrame DataFrame with the taxi data with the distance column binned """ df['distance_bin'] = pd.cut(df.distance, bins) df.drop(['distance'], axis=1, inplace=True) return df def get_tip_toll_percentage(df): """ Get the percentage of tip and tolls Parameters ---------- df : pandas.DataFrame DataFrame with the taxi data Returns ------- df : pandas.DataFrame DataFrame with the taxi data with the percentage of tip and tolls """ df['ave_tip_percentage'] = df.tip / (df.fare) df['ave_toll_percentage'] = df.tolls / (df.fare) df.drop(['tip', 'tolls'], axis=1, inplace=True) return df # - df.pipe(get_unique_routes)\ .pipe(bin_distance, bins=np.arange(0, 100, 5))\ .pipe(get_tip_toll_percentage) # #### SKLearn Pipeline, FeatureUnion, ColumnTransformer # * `Pipeline`: chaining estimators # * `FeatureUnion`: composite feature spaces # * `ColumnTransformer` for heterogeneous data # * `TransformedTargetRegressor`: transforming target in regression # + import numpy as np from sklearn.pipeline import make_pipeline from sklearn.pipeline import Pipeline, FeatureUnion from sklearn.impute import SimpleImputer from sklearn.compose import ColumnTransformer, TransformedTargetRegressor from sklearn.preprocessing import ( OneHotEncoder, StandardScaler, QuantileTransformer, MinMaxScaler ) from sklearn.linear_model import LinearRegression from sklearn.feature_selection import SelectKBest, VarianceThreshold from sklearn.feature_extraction.text import CountVectorizer from sklearn import set_config numeric_preprocessor = Pipeline( steps=[ ('quasi_constant_remover', VarianceThreshold(threshold=0.0)), ( "imputation_mean", SimpleImputer(missing_values=np.nan, strategy="mean") ), ("scaler", StandardScaler()), ] ) categorical_preprocessor = Pipeline( steps=[ ( "imputation_constant", SimpleImputer(fill_value="missing", strategy="constant"), ), ("onehot", OneHotEncoder(handle_unknown="ignore")), ] ) # Text preprocessing #------------------------------------------------------------------------------ from sklearn.base import BaseEstimator, TransformerMixin class AverageWordLengthExtractor(BaseEstimator, TransformerMixin): """Takes in dataframe, outputs average word length""" def __init__(self): pass def average_word_length(self, name): """Helper code to compute average word length of a name""" return np.mean([len(word) for word in name.split()]) def transform(self, df, y=None): """The workhorse of this feature extractor""" return df['road_name'].apply(self.average_word_length) def fit(self, df, y=None): """Returns `self` unless something different happens in train, test""" return self best_word_selector = 
Pipeline( steps=[ ('bow', CountVectorizer()), ('select_one', SelectKBest(k=1)) ] ) text_preprocessor = FeatureUnion( transformer_list=[ ('best_word', best_word_selector), ('ave', AverageWordLengthExtractor()) ] ) preprocessor = ColumnTransformer( [ ('keys', 'drop', ['cif', 'mastercif', 'acct_num']), ("categorical", categorical_preprocessor, ["state", "gender"]), ("numerical", numeric_preprocessor, ["age", "weight"]), ('text', text_preprocessor, ['feedback']), ] ) qt_transformer = QuantileTransformer(output_distribution='normal') linreg = LinearRegression(n_jobs=-1) regr = TransformedTargetRegressor( regressor=linreg, transformer=qt_transformer) clf = GBMClassifier() pipe = make_pipeline(preprocessor, clf) # - set_config(display="diagram") pipe # click on the diagram below to see the details of each step pipe.get_params() # ### Advantages of Pipelines # #### Avoid target leakage df.loc[lambda x: x.isna().any(axis=1), :] df.fillna(df.mode().iloc[0])\ .loc[df.isna().any(axis=1), :] df.fillna(df.iloc[:4000].mode().iloc[0])\ .loc[df.isna().any(axis=1), :] # #### Grid Search and cross-validation with pipelines is more efficient from sklearn.model_selection import GridSearchCV gs = GridSearchCV( estimator=pipe, param_grid={ 'text__ngram__ngram__ngram_range': [(1, 1), (1, 2), (1, 3)], 'text__ave__ave_word_length': [1, 2, 3], 'regr__regressor__alpha': [0.1, 0.01, 0.001], 'regr__regressor__max_iter': [100, 1000, 10000], 'regr__regressor__normalize': [True, False], 'regr__regressor__fit_intercept': [True, False], 'columntransformer__numerical__scaler':[ MinMaxScaler(), StandardScaler(), None, ] }, ) gs # # References # * [6.1. Pipelines and composite estimators](https://scikit-learn.org/stable/modules/compose.html) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Load Wisconsin Breast Cancer Dataset # + import numpy as np from sklearn.datasets import load_breast_cancer import matplotlib.pyplot as plt # %matplotlib inline # load data data = load_breast_cancer() X = data.data y = data.target print(X.shape, data.feature_names) # - # # Classification Model Evaluation Metrics # + from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) print(X_train.shape, X_test.shape) # + from sklearn import linear_model logistic = linear_model.LogisticRegression() logistic.fit(X_train,y_train) # - # ## Confusion Matrix # + import model_evaluation_utils as meu y_pred = logistic.predict(X_test) meu.display_confusion_matrix(true_labels=y_test, predicted_labels=y_pred, classes=[0, 1]) # - # ## True Positive, False Positive, True Negative and False Negative positive_class = 1 TP = 106 FP = 4 TN = 59 FN = 2 # ## Accuracy fw_acc = round(meu.metrics.accuracy_score(y_true=y_test, y_pred=y_pred), 5) mc_acc = round((TP + TN) / (TP + TN + FP + FN), 5) print('Framework Accuracy:', fw_acc) print('Manually Computed Accuracy:', mc_acc) # ## Precision fw_prec = round(meu.metrics.precision_score(y_true=y_test, y_pred=y_pred), 5) mc_prec = round((TP) / (TP + FP), 5) print('Framework Precision:', fw_prec) print('Manually Computed Precision:', mc_prec) # ## Recall fw_rec = round(meu.metrics.recall_score(y_true=y_test, y_pred=y_pred), 5) mc_rec = round((TP) / (TP + FN), 5) print('Framework Recall:', fw_rec) print('Manually Computed Recall:', mc_rec) # ## F1-Score fw_f1 = 
round(meu.metrics.f1_score(y_true=y_test, y_pred=y_pred), 5) mc_f1 = round((2*mc_prec*mc_rec) / (mc_prec+mc_rec), 5) print('Framework F1-Score:', fw_f1) print('Manually Computed F1-Score:', mc_f1) # ## ROC Curve and AUC meu.plot_model_roc_curve(clf=logistic, features=X_test, true_labels=y_test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Potential outcome model # + import matplotlib.pyplot as plt import pandas as pd import numpy as np pd.options.display.float_format = "{:,.2f}".format # - # The National Health Interview Survey (NHIS) data is collected on U.S. households since 1957. It covers a broad range of health-related topics from medical conditions, health insurance, and the number of doctor visits to measures of physical activity. Here we focus on indicators relevant for the Potential outcome model (POM) framework. In particular, we will compare the health status of hospitalized and non-hospitalized individuals in 2018. For this purpose, we use answers to the survey question During the past 12 months, has the respondent been hospitalized overnight? with potential answers Yes and No which we code as one and zero. Further, we consider answers to the questions Would you say your health, in general, is excellent, very good, good, fair, poor? where responses are coded as one for poor health up to five for excellent health. The survey also collects data on relevant characteristics as sex, age, level of education, hours worked last week, and total earnings. # ## Task A.1 # # Open a Jupyter Notebook and import the data set nhis-initial.xslx (available at https://bit.ly/nhis-initial). Try to think of ways to answer the following questions: Are there more females or males? Are there more individuals who hold a degree or not?. Now try to relate individual characteristics to the hospitalization status. Are high or low earners/old or young people more often hospitalized? df = pd.read_excel("data/nhis-initial.xls") df.index.set_names("Individual", inplace=True) df.head() # We want to study average age and working hours. stat = df["age"].mean() print(f"Average age in the sample is {stat:.2f}") stat = df["hours"].mean() print(f"Average of working hours per week in the sample is {stat:.0f}") for column in ["sex", "education", "earnings", "health"]: fig, ax = plt.subplots() info = df[column].value_counts(normalize=True).to_frame() x, y = info.index, info.to_numpy().flatten() ax.bar(x, y) ax.set_xlabel(column.capitalize()) ax.set_ylim(None, 1) ax.set_ylabel("Share") fig.show() # Now try to relate individual characteristics to the hospitalization status. df.groupby("hospitalized")[["age", "hours"]].mean() for column in ["sex", "education", "earnings"]: fig, ax = plt.subplots() rslt = df.groupby(column)["hospitalized"].mean() x, y = rslt.index, rslt.to_numpy() ax.bar(x, y) ax.set_ylabel("Share") fig.show() # ## Task A.2 # # Compute the average health status of hospitalized and non-hospitalized individuals. Who is healthier on average? What could be a reason for this difference? df.groupby("hospitalized")["health"].mean().to_frame() # ## Task A.3 # # Adjust the data set for the POM framework, with health status as the outcome and hospitalization as the treatment status. 
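# Before working with the NHIS file itself, a minimal sketch on a hypothetical toy frame (not part of the data set) shows the usual recoding pattern: rename the outcome to `Y` and the treatment to `D`, then store the potential outcomes `Y_1` and `Y_0`, which are observed only for treated and untreated units respectively.

# +
import numpy as np  # re-imported here so the cell is standalone
import pandas as pd

toy = pd.DataFrame({"hospitalized": [1, 0, 0, 1], "health": [3, 5, 4, 2]})
toy = toy.rename(columns={"health": "Y", "hospitalized": "D"})
# Y_1 is observed only for treated units (D == 1), Y_0 only for untreated ones.
toy["Y_1"] = np.where(toy["D"] == 1, toy["Y"], np.nan)
toy["Y_0"] = np.where(toy["D"] == 0, toy["Y"], np.nan)
toy
# -

# The same recoding is applied to the NHIS data below.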
# + df = pd.read_excel("data/nhis-initial.xls") df.rename(columns={"health": "Y", "hospitalized": "D"}, inplace=True) df["Y_1"] = np.where(df["D"] == 1, df["Y"], np.nan) df["Y_0"] = np.where(df["D"] == 0, df["Y"], np.nan) df.head() # - # ## Task A.4 # # Compute the naive estimate for the average treatment effect (ATE) stat = df["Y_1"].mean() - df["Y_0"].mean() print(f"Our naive estimate is {stat:.1f}") # ## Task B.1 # # As we’ve seen in the lecture, in reality, we can only ever observe one counterfactual; however, when simulating data, we can bypass this problem. The (simulated) data set nhis-simulated.xslx (available at https://bit.ly/nhis-simulated) # contains counterfactual outcomes, i.e., outcomes under control for individuals assigned to the treatment group and vice versa. Derive and compute the average outcomes in the two observable and two unobservables states. Design them similar # to Table 2.3 in Morgan & Winship (2014). df = pd.read_excel("data/nhis-simulated.xls") df.index.set_names("Individual", inplace=True) df.head() # + rslt = df.groupby("D")[["Y_1", "Y_0"]].mean() rslt.columns = ["E[Y_1|D]", "E[Y_0|D]"] rslt.index = ["Untreated", "Treated"] rslt # - # ## Task B.2 # # From here on we assume that 5% of the population take the treatment. Derive and explain Equation 2.12 from Morgan & Winship (2014) for the naive estimator as a decomposition of true ATE, baseline bias, and differential treatment effect bias. # This derivation is straightforward. # ## Task B.3 # # Compute the naive estimate and true value of the ATE for the simulated data. Is the naive estimator upwardly or downwardly biased? Calculate the baseline bias and differential treatment effect bias. How could we interpret these biases in our framework of health status of hospitalized and non-hospitalized respondents? # + pi = 0.05 # naive estimate naive = rslt.loc["Treated", "E[Y_1|D]"] - rslt.loc["Untreated", "E[Y_0|D]"] # baseline bias base = rslt.loc["Treated", "E[Y_0|D]"] - rslt.loc["Untreated", "E[Y_0|D]"] # differential effect diff = 0 diff += rslt.loc["Treated", "E[Y_1|D]"] - rslt.loc["Treated", "E[Y_0|D]"] diff -= rslt.loc["Untreated", "E[Y_1|D]"] - rslt.loc["Untreated", "E[Y_0|D]"] diff *= 1 - pi # true average treatment effect true = 0 true += pi * (rslt.loc["Treated", "E[Y_1|D]"] - rslt.loc["Treated", "E[Y_0|D]"]) true += (1 - pi) * (rslt.loc["Untreated", "E[Y_1|D]"] - rslt.loc["Untreated", "E[Y_0|D]"]) print(f"naive: {naive:.2f}, base: {base:.2f}, diff: {diff:.2f}, true: {true:.2f}") # We can also test the relationships just to be sure. np.testing.assert_almost_equal(true, naive - (base + diff), decimal=10) # - # ## Task B.4 # # Under which assumptions does the naive estimator provide the ATE? # We need the *stable unit treatment value assumption* and independence between potential outcomes and the treatment. # ## References # # * ., and . (2014). [*Counterfactuals and causal inference: Methods and principles for social research*](https://www.cambridge.org/de/academic/subjects/sociology/sociology-general-interest/counterfactuals-and-causal-inference-methods-and-principles-social-research-2nd-edition?format=PB). Cambridge, England: Cambridge University Press. # # * ., and . (2009). [*Mostly harmless econometrics: An empiricists companion*](https://press.princeton.edu/titles/8769.html). Princeton, NJ: Princeton University Press. # # * [National Health Interview Survey.](https://www.cdc.gov/nchs/nhis/index.htm) (2018). National Center for Health Statistics. 
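# For reference, the decomposition used in Tasks B.2 and B.3 above (Equation 2.12 in Morgan & Winship, 2014) can be written, with $\delta = Y_1 - Y_0$ and $\pi$ the treated share, as
#
# $$
# \underbrace{E[Y_1 \mid D=1] - E[Y_0 \mid D=0]}_{\text{naive estimate}}
# = \underbrace{E[\delta]}_{\text{true ATE}}
# + \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{baseline bias}}
# + \underbrace{(1-\pi)\bigl(E[\delta \mid D=1] - E[\delta \mid D=0]\bigr)}_{\text{differential treatment effect bias}}
# $$
#
# which is exactly the identity checked numerically above with `np.testing.assert_almost_equal(true, naive - (base + diff))`.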
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (Ubuntu Linux) # language: python # name: python3-ubuntu # resource_dir: /usr/local/share/jupyter/kernels/python3-ubuntu # --- from provers import * # + def uc2le(uc): """return <= relation as a dict of sets""" us = {x:set([x]+uc[x]) for x in uc} done = False while not done: done = True for v0, v1s in us.items(): old_len = len(v1s) for v2s in [us[v1] for v1 in v1s]: v1s |= v2s done = done and len(v1s) == old_len return us def uc2p9(uc): le = uc2le(uc) print(le) return [(f"{i}<={j}" if j in le[i] else f"-({i}<={j})") for i in uc for j in uc] def rn2p9(rn): return [f"-{i}={rn[i]}" for i in rn] p92ltx={"u":"u","v":"v","w":"w","x":"x","y":"y","z":"z","c":"c","d":"d","e":"1","1":"1","0":"0","b":"\\bot","t":"\\top","f":"f","g":"g", "~":"\\sim","-":"-","dd":"\\diamond","bx":"\\box"," r ":"\\triangleright"," t ":"\\triangleleft","tr":"\\vartriangleright","tl":"\\vartriangleleft", "i":"^{-1}","c":"\\smallsmile","'":"'","^":"\\wedge"," v ":"\\vee","*":"\\cdot","/":"/","\\":"\\backslash", "+":"+","<-":"\\ominus"," o ":"\\circ","->":"\\to", "<=":"\\le",">=":"\\ge","=":"=","!=":"\\ne","&":"\\text{ and }","|":"\\text{ or }", " -> ":"\\implies","<->":"\\iff","all":"\\forall","exists":"\\exists"} def uc2tikz(A): n = len(A['uc']) xy = A['xy'] st = '\n\\begin{tikzpicture}\n' st += '\\node at(0,-1)[n]{$\\mathbf'+A['nm']+'$};\n' for i in range(n-1,-1,-1): st += '\\node({})at({},{})[label={}:${}$]'.format(i,xy[i][0],xy[i][1],'right' if xy[i][0]>0 else 'left',A['lb'][i]) st += '{}'+''.join(['edge({})'.format(j) for j in A['uc'][i]])+';\n' return st + '\\end{tikzpicture}\n' def latex_op(s,m,lb,rmbnds=True): if type(m)==list: if len(m)==0: return s+'=[]' base = list(reversed(range(1 if rmbnds else 0,len(m)-1 if rmbnds else len(m)))) #print m,type(m[0]) if type(m[0])==list: if len(m[0])==0: return s+'=[[]]' st = r'$\begin{array}{|c|'+"".join(['c' for i in base])+'|}\\hline\n' st += '\\W{'+p92ltx[s]+'}&'+'&'.join('\\W{'+lb[d]+'}' for d in base)+r'\\\hline'+'\n' for i in base: st += lb[i]+'&'+'&'.join(('-' if m[i][j]==-1 \ else '\\rd{'+lb[m[i][j]]+'}' if i==j and m[i][i]==i \ else '\\g{'+lb[m[i][j]]+'}' if m[i][j]==m[j][i] and i x=y', 'x<=y & y<=z -> x<=z', '(x<=y) -> ((x*z)<=(y*z))', '(x<=y) -> ((z*x)<=(z*y))'] Lr=['x*y<=z<->y<=x\ z'] R=Lr+['x*y<=z<->x<=z/y'] Sgrp=['((x*y)*z)=(x*(y*z))'] Mon=Sgrp+['(x*1)=x', '(1*x)=x'] rdct=pos31 ax=PoMag+Mon+uc2p9(rdct['uc'])#+rn2p9(rdct['rn']) print(ax) a=p9(ax,[],1000,1000,len(rdct['uc'])) print(a) st=uc2tikz(rdct) st+="".join(alg2latex(x,'PoMon',rdct['lb'],False) for x in a) rdct=pos31 ax=PoMag+Sgrp+uc2p9(rdct['uc'])#+rn2p9(rdct['rn']) print(ax) a=p9(ax,[],1000,1000,len(rdct['uc'])) print(a) st=uc2tikz(rdct) st+="".join(alg2latex(x,'PoMon',rdct['lb'],False) for x in a) rdct=pos41 ax=PoMag+Sgrp+Lr+uc2p9(rdct['uc'])#+rn2p9(rdct['rn']) print(ax) a=p9(ax,[],1000,1000,len(rdct['uc'])) print(a) st+=uc2tikz(rdct) st+="".join(alg2latex(x,'LrPoSgrp',rdct['lb'],False) for x in a) rdct=pos42 ax=PoMag+Sgrp+Lr+uc2p9(rdct['uc'])#+rn2p9(rdct['rn']) print(ax) a=p9(ax,[],1000,1000,len(rdct['uc'])) print(a) st+=uc2tikz(rdct) st+="".join(alg2latex(x,'LrPoSgrp',rdct['lb'],False) for x in a) rdct=pos43 ax=PoMag+Sgrp+Lr+uc2p9(rdct['uc'])#+rn2p9(rdct['rn']) print(ax) a=p9(ax,[],1000,1000,len(rdct['uc'])) print(a) st+=uc2tikz(rdct) st+="".join(alg2latex(x,'LrPoSgrp',rdct['lb'],False) for x in a) print(st) 
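# A minimal, self-contained restatement (added for illustration; the tiny chain
# below is hypothetical and independent of the provers module) of the idea behind
# uc2le above: taking the reflexive-transitive closure of the upper-cover relation
# recovers the full <= relation of the poset.
def upper_covers_to_leq(uc):
    """Return {x: set of all y with x <= y} from an upper-cover dict."""
    le = {x: {x} | set(uc[x]) for x in uc}
    changed = True
    while changed:
        changed = False
        for x in le:
            closure = set().union(*(le[y] for y in le[x]))
            if not closure <= le[x]:
                le[x] |= closure
                changed = True
    return le

# Three-element chain 0 < 1 < 2: 0 lies below everything, 2 only below itself.
print(upper_covers_to_leq({0: [1], 1: [2], 2: []}))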
pos81={'uc':{0:[1,2],1:[3,4],2:[3,4],3:[5,6],4:[5,6],5:[7],6:[7],7:[]}, 'rn':{0:7,1:6,2:5,3:3,4:4,5:2,6:1,7:0}, 'xy':{0:(0,0),1:(-.5,1),2:(.5,1),3:(-.5,2),4:(.5,2),5:(.5,3),6:(-.5,3),7:(0,4)}, 'lb':{0:'0',1:'\\rn a',2:'\\rn b',3:'c',4:'d',5:'b',6:'a',7:'1'}, 'nm':'{InPos}_{8,1}'} pos82={'uc':{0:[1],1:[2,3],2:[4,5],3:[4,5],4:[6],5:[6],6:[7],7:[]}, 'rn':{0:7,1:6,2:5,3:4,4:3,5:2,6:1,7:0}, 'xy':{0:(0,0),1:(0,1),2:(-.5,2),3:(.5,2),4:(.5,3),5:(-.5,3),6:(0,4),7:(0,5)}, 'lb':{0:'0',1:'\\rn a',2:'\\rn b',3:'\\rn c',4:'c',5:'b',6:'a',7:'1'}, 'nm':'{InPos}_{8,2}'} D81={'uc':{0:[1],1:[2],2:[3,4],3:[5],4:[5],5:[6],6:[7],7:[]}, 'rn':{0:7,1:6,2:5,3:3,4:4,5:2,6:1,7:0}, 'xy':{0:(0,0),1:(0,1),2:(0,2),3:(-.5,3),4:(.5,3),5:(0,4),6:(0,5),7:(0,6)}, 'lb':{0:'0',1:'\\rn a',2:'\\rn b',3:'c',4:'d',5:'b',6:'a',7:'1'}, 'nm':'{InD}_{8,1}'} InTo8={'uc':{0:[1],1:[2],2:[3],3:[4],4:[5],5:[6],6:[7],7:[]}, 'rn':{0:7,1:6,2:5,3:4,4:3,5:2,6:1,7:0}, 'xy':{0:(0,0),1:(0,1),2:(0,2),3:(0,3),4:(0,4),5:(0,5),6:(0,6),7:(0,7)}, 'lb':{0:'0',1:'\\rn a',2:'\\rn b',3:'\\rn c',4:'c',5:'b',6:'a',7:'1'}, 'nm':'{InTo}_{8}'} Po=['x<=x', 'x<=y & y<=x -> x=y', 'x<=y & y<=z -> x<=z',] InPocrim=['x<=x', 'x<=y & y<=x -> x=y', 'x<=y & y<=z -> x<=z', '-(-(x))=x', '((x*y)<=z) <-> (y<=-((-(z)*x)))', '((x*y)*z)=(x*(y*z))', '(x*y)=(y*x)', '(x*7)=x', 'x<=7'] CyInPorim=['x<=x', 'x<=y & y<=x -> x=y', 'x<=y & y<=z -> x<=z', '-(-(x))=x', '((x*y)<=z) <-> (y<=-((-(z)*x)))', '((x*y)<=z) <-> (x<=-((y*-(z))))', '((x*y)*z)=(x*(y*z))', '(x*e)=x', '(e*x)=x', 'x<=e', 'e=7'] ax=CyInPorim+uc2p9(pos81['uc'])+rn2p9(pos81['rn']) print(ax) a=p9(ax,[],1000,1000,8) st=uc2tikz(pos81) st+="".join(alg2latex(x,'CyInPorim',pos81['lb']) for x in a) print(a) rdct=D81 ax=CyInPorim+uc2p9(rdct['uc'])+rn2p9(rdct['rn']) print(ax) a=p9(ax,['x*y=y*x'],1000,1000,8) st+=uc2tikz(rdct) st+="".join(alg2latex(x,'CyIInDMon',rdct['lb']) for x in a) print(a) rdct=InTo8 ax=CyInPorim+uc2p9(rdct['uc'])+rn2p9(rdct['rn']) print(ax) a=p9(ax,['x*y=y*x'],1000,1000,8) st+=uc2tikz(rdct) st+="".join(alg2latex(x,'CyIInDMon',rdct['lb']) for x in a) print(a) ax=InPocrim+uc2p9(pos82['uc'])+rn2p9(pos82['rn']) print(ax) a=p9(ax,[],1000,1000,8) st+=uc2tikz(pos82) st+="".join(alg2latex(x,'InPocrim',pos82['lb']) for x in a) print(st) a # + #Python 2.7 program to draw in LaTeX/Tikz/PDF residuated lattices of size <=6 # 2017-04-12 latex = r""" \documentclass{amsart} % process this file with LuaLaTeX to avoid memory errors \usepackage{tikz} \usepackage{pythontex} \advance\textheight by 1.8in \advance\textwidth by 2in \advance\topmargin by -1in \advance\oddsidemargin by -1in \advance\evensidemargin by -1in \title{Lattices of size up to 6} \date{\today} \begin{document} \maketitle \noindent There are $1+1+1+2+5+15=25$ lattices with $\le 6$ elements. In the list below, each lattice is named $B_k,C_n,D_{n,i}$ or $L_{n,i}$ depending on whether it is a Boolean algebra, a chain with $n\ge 3$ elements, a distributive lattice or a nondistributive lattice with $n\ge 5$ elements. The number $i$ is an index that enumerates nonisomorphic lattices of size $n$, in order of increasing height, increasing width and decreasing number of lines. 
\tikzstyle{every picture} = [scale=0.5] \tikzstyle{every node} = [draw, fill=white, circle, inner sep=0pt, minimum size=5pt] \tikzstyle{n} = [draw=none, rectangle, inner sep=0pt] %name style \tikzstyle{i} = [draw, fill=black, circle, inner sep=0pt, minimum size=5pt] %idempotent \tikzstyle{e} = [draw=none, rectangle, inner sep=0pt] \tikzstyle{nci} = [draw, fill=black, rectangle, inner sep=0pt, minimum size=5pt] \tikzstyle{nc} = [draw, fill=white, rectangle, inner sep=0pt, minimum size=5pt] \setlength{\arraycolsep}{2pt} %\begin{center} \sloppy """ class Alg(object): def __init__(self, op={}, index=None, size=None, aut=[]): """ Construct a finite algebra. The size of the algebra is the length of any nonconstant operation. INPUT: op -- a dictionary of operations on range(size). Entries are symbol:table pairs where symbol is a string that denotes the operation symbol, e.g. 'cdot', and table is an n-dimensional array with entries from range(size). n >= 0 is the arity of the operation (not explicitly coded but can be computed from the table). index -- a natural number giving the position of the algebra in a list of algebras aut -- an optional list of automorphisms (permutations of range(size)) """ self.op = op self.index = index self.size = size if size!=None else len(next(x for x in op.values() if type(x)==list)) self.aut = aut undefined = -1 def alg2latex(alg): ops = alg.op.items() st = r'$\begin{array}{|l|}\hline'+'\n' st += r'A_{'+str(alg.index)+'}'+\ (r'\\|\text{Aut}(A)|='+str(len(self.aut)) if alg.aut!=[] else '')+'\\\\\n' st += '\\ '.join([latex_op(kv[0],kv[1]) for kv in ops]) return st+r'\\\hline\end{array}$' def latex_sym(s): return '\\'+s if s[0].isalpha() and len(s)>1 else s def latex_op(s,m): if type(m)==list: if len(m)==0: return s+'=[]' base = range(len(m)) #print m,type(m[0]) if type(m[0])!=list: st = r'\begin{array}{c|c}' st += '&'+latex_sym(s)+r'\\\hline'+'\n' for i in base: st += str(i)+'&'+(str(m[i]) if m[i]!=undefined else '-')+r'\\'+'\n' return st + r'\end{array}'+'\n' else: if len(m[0])==0: return s+'=[[]]' st = r'\begin{array}{c|'+"".join(['c' for i in base])+r'}' st += latex_sym(s)+'&'+'&'.join(str(d) for d in base)+r'\\\hline'+'\n' #central = [i for i in base if all(m[i][j]==m[j][i] for j in base)] for i in base: st += str(i)+'&'+'&'.join(('-' if m[i][j]==undefined \ else '\\mathbf{'+str(m[i][j])+'}' if i==j and m[i][i]==i \ else '\\textcolor{lightgray}{'+str(m[i][j])+'}' if m[i][j]==m[j][i] \ else str(m[i][j])) for j in base)+r'\\'+'\n' return st + r'\end{array}'+'\n' else: return latex_sym(s)+'='+(str(m) if m!=undefined else '') def permutations(m,n): """ return Python generator for all permutations of {m,...,n-1} """ p = [m+i for i in range(n-m)] yield p n = len(p) j = 1 while j>=0: q = list(range(n)) j = n-2 while j>=0 and p[j]>p[j+1]: j = j-1 if j>=0: for k in range(j): q[k] = p[k] k = n-1 while p[j]>p[k]: k = k-1 q[j] = p[k] i = n-1 while i>j: q[i] = p[j+n-i] i = i-1 q[j+n-k]=p[j] p = q yield q def inverse_permutation(p): q = list(range(len(p))) for i in range(len(p)): q[p[i]]=i return q def check_permutations(m): """ If partial is false, return False if every completion of some isomorphic copy q(m) of m will be lexicographically larger than m. Return True otherwise. 
Remove a permutation if If partial is true, perms and invperms are lists of permuatations and their inverses pflags is an array of booleans deciding which perms still need to be checked pflags_list records which flags have been set (needed to restore on backtracking) For partial algebras, q is an isomorphism if m[i][j] defined ==> q[m[i][j]]=m[q[i]][q[j]] and the same must hold for the inverse qi """ global perms, invperms, pflags, pflags_list base = range(len(m)) for k in range(len(perms)): if pflags[k]: q = perms[k] qi = invperms[k] equal = True for i in base: for j in base: mqi = m[qi[i]][qi[j]] mij = m[i][j] if mqi == undefined or mij != q[mqi]: equal = False if mqi != undefined and q[mqi]mij: #get rid of q pflags[k] = False pflags_list += [k] if not equal: break #equal means q[m[qi[i]][qi[j]]] == m[i][j] != undefined if not equal: break return True def check_associativity(m,i,j): """ This function may update the partial operation m. For an equation r(xyz)=m[s(xyz)][t(xyz)] if lhs is defined and rhs is undefined, but s(xyz), t(xyz) are defined then assign m[s(xyz)][t(xyz)]=r(xyz) and add the pair (s(xyz),t(xyz)) to entries_list. Similarly for m[s(xyz)][t(xyz)]=r(xyz). Return False if both sides are defined but not equal. After trying all assignments, return True. This function can perhaps be made more efficient by using the values of i,j """ global entries_list base = range(len(m)) for x in base: for y in base: xy = m[x][y] if xy != undefined: for z in base: xyz = m[xy][z]; if xyz != undefined: yz = m[y][z] if yz != undefined: if m[x][yz] != undefined: if xyz!=m[x][yz]: return False else: m[x][yz] = xyz entries_list.append((x,yz)) else: if m[y][z] != undefined and m[x][m[y][z]] != undefined: m[xy][z] = m[x][m[y][z]] entries_list.append((xy,z)) return True def check_commutativity(m,i,j): global entries_list if m[j][i]!=undefined: if m[j][i]!=m[i][j]: return False else: m[j][i] = m[i][j] # since (i,j) is the first undefined position found, m[j][i] will always be undefined entries_list.append((j,i)) return True def is_partially_ordered(m,uc,le): n = len(m) for x in range(n): for y in uc[x]: for z in range(n): if m[x][z]>-1 and m[y][z]>-1 and le[m[x][z]][m[y][z]]==0: return False if m[z][x]>-1 and m[z][y]>-1 and le[m[z][x]][m[z][y]]==0: return False return True def is_op_automorphism(m,q): n = len(m) for i in range(n): for j in range(n): if m[q[i]][q[j]]!=q[m[i][j]]: return False return True def check_lattice_ordered(m,join): """ Return False if partial optable m does not distribute over join Return True otherwise """ global entries_list n = len(m) for x in range(n): for y in range(n): xy = m[x][y] if xy>-1: for z in range(n): xz = m[x][z] if xz>-1: yz = join[y][z] xyz = m[x][yz] if xyz>-1: if xyz!=join[xy][xz]: return False else: m[x][yz] = join[xy][xz] entries_list.append((x,yz)) zy = m[z][y] if zy>-1: xz = join[x][z] xyz = m[xz][y] if xyz>-1: if xyz!=join[xy][zy]: return False else: m[xz][y] = join[xy][zy] entries_list.append((xz,y)) return True def check_divisible(m,down): """ Return False if partial optable m is not divisible i.e. x<=y implies x=m[y][u]=m[v][y] for some u,v Return True otherwise Compute list S of elements in y-row (col). Compute k = |down[y] - S|. if k>#undefined in y-row, return False if #undefined=1 and k=1 let undefined = this elt. 
""" global entries_list n = len(m) for y in range(n): S = set([m[y][u] for u in range(n) if m[y][u]>-1]) E = set(down[y]).difference(S) U = [u for u in range(n) if m[y][u]==-1] if len(E)>len(U): return False if len(U)==1 and len(E)==1: m[y][U[0]] = E.pop() entries_list += [(y,U[0])] for y in range(n): S = set([m[u][y] for u in range(n) if m[u][y]>-1]) E = set(down[y]).difference(S) U = [u for u in range(n) if m[u][y]==-1] if len(E)>len(U): return False if len(U)==1 and len(E)==1: m[U[0]][y] = E.pop() entries_list += [(U[0],y)] return True def complete_l_operation(m,join,down,i,j,associative=False,commutative=False,divisible=False): """ find next i,j where alg[i][j] = -1 = undefined; for each val=0..n-1 set alg[i][j] = val, check axioms and permutations if ok, complete_ordered_operation(m,i,j+1) then restore and return """ global algebra_list,entries_list,pflags,pflags_list,gl_index ok = True n = len(m) while ok and i-1: j = j+1 if j>=n: j = 0 i = i+1 else: ok = False if ok: if check_lattice_ordered(m,join) and \ (not divisible or check_divisible(m,down)): gl_index+=1 algebra_list.append(Alg(op={'cdot':[x[:] for x in m],'vee':join},index=gl_index,size=n)) else: for val in range(n): if all([join[x][i]!=i or m[x][j]==-1 or join[m[x][j]][val]==val \ for x in range(i)]) and \ all([join[x][j]!=j or m[i][x]==-1 or join[m[i][x]][val]==val \ for x in range(j)]): ok = True el = len(entries_list) m[i][j] = val entries_list += [(i,j)] if commutative: m[j][i] = val entries_list += [(j,i)] if ok: ok = check_lattice_ordered(m,join) if ok and divisible: ok = check_divisible(m,down) if ok and associative: ok = check_associativity(m,i,j) pl = len(pflags_list) if ok: ok = check_permutations(m) if ok: complete_l_operation(m,join,down,i,j+1,associative,commutative,divisible) while len(entries_list)>el: e = entries_list.pop() m[e[0]][e[1]] = -1 while len(pflags_list)>pl: p = pflags_list.pop() pflags[p] = True def downsets(lat): n = lat.size jn = lat.op['vee'] down = [[x for x in range(n) if jn[x][y]==y] for y in range(n)] return down def find_l_groupoids(l,associative=False,commutative=False,divisible=False,idempotent=False,identity=False,id=None,zero=False): """ Return list of lattice ordered groupoids of lattice l as nxn matrices. The lattice is given by the join table. The operation is compatible with the order. 
""" global algebra_list,perms,invperms,entries_list,pflags,pflags_list algebra_list=[] entries_list=[] n = l.size join = l.op['vee'] down = downsets(l) m = [[-1 for x in range(n)] for x in range(n)] perms = [[0]+p+[n-1] for p in permutations(1,n-1) \ if is_op_automorphism(join,[0]+p+[n-1])] if zero else \ [p+[n-1] for p in permutations(0,n-1) \ if is_op_automorphism(join,p+[n-1])] if idempotent: for x in range(n): m[x][x] = x if zero: #perms = [[0]+p for p in permutations(1,n)] for x in range(n): m[x][0] = 0 m[0][x] = 0 if identity: if id==None: #should only try all canconical identity positions t = n if zero and n>1: s = 1 else: s = 0 else: s = id; t = id+1 for e in range(s,t): for x in range(n): m[e][x] = x m[x][e] = x perms = [[0]+p+[n-1] for p in permutations(1,n-1) \ if is_op_automorphism(join,[0]+p+[n-1])] if zero else \ [p+[n-1] for p in permutations(0,n-1) \ if is_op_automorphism(join,p+[n-1])] if all([p[e]>=e for p in perms]): # eliminate isomorphic (P,e) perms = [p for p in perms if p[e]==e] #keep e fixed invperms = [inverse_permutation(p) for p in perms] pflags = [True for i in range(len(perms))] pflags_list = [] complete_l_operation(m,join,down,0,0,associative,commutative,divisible) for x in range(n): m[e][x] = -1 m[x][e] = -1 else: invperms = [inverse_permutation(p) for p in perms] pflags = [True for i in range(len(perms))] pflags_list = [] complete_l_operation(m,join,down,0,0,associative,commutative,divisible) return algebra_list def find_l_semigroups(l): return find_l_groupoids(l,associative=True) def find_idempotent_semirings(n): #not correct alg_list = [] for l in find_joinsemilattices(n): alg_list.extend(find_l_semigroups(l)) return alg_list def find_l_semigroups_with_zero(l,e=None): return find_l_groupoids(l,associative=True,id=e,zero=True) def find_semirings0(n): alg_list = [] for l in find_lattices(n): alg_list.extend(find_l_semigroups_with_zero(l)) return alg_list def find_commutative_l_semigroups(l): return find_l_groupoids(l,associative=True,commutative=True) def find_l_bands(l): return find_l_groupoids(l,associative=True,idempotent=True) def find_l_semilattices(l): return find_l_groupoids(l,associative=True,commutative=True,idempotent=True) def find_l_monoids(l,e=None): return find_l_groupoids(l,associative=True,identity=True,id=e) def find_idempotent_semirings_with_unit(n): #not correct alg_list = [] for l in find_joinsemilattices(n): alg_list.extend(find_l_monoids(l)) return alg_list def find_commutative_l_monoids(l,e=None): return find_l_groupoids(l,associative=True,commutative=True,identity=True,id=e) def find_commutative_idempotent_semirings_with_unit(n): #not correct alg_list = [] for l in find_joinsemilattices(n): alg_list.extend(find_commutative_l_monoids(l)) return alg_list def find_l_monoids_with_zero(l,e=None): return find_l_groupoids(l,associative=True,identity=True,id=e,zero=True) #a=Alg({"\\cdot":[[0,1],[1,-1]], "\\sim":[0,-1]},1) #a.uc={0:[1],1:[]} #a.pos=[(0,0),(0,1)] lattices=[ # list of lattices of size up to 5 {'pos': {0: (0, 0)}, 'uc': {0: []}}, {'pos': {0: (0, 0), 1: (0, 1)}, 'uc': {0: [1], 1: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (0, 2)}, 'uc':{0: [1], 1: [2], 2: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (0, 2), 3:(0, 3)}, 'uc': {0: [1], 1: [2], 2: [3], 3: []}}, {'pos': {0: (0, 0), 1:(-1, 1), 2: (1, 1), 3: (0, 2)}, 'uc': {0: [1, 2], 1: [3], 2: [3], 3: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (0, 2), 3: (0, 3), 4: (0, 4)}, 'uc': {0: [1], 1: [2], 2: [3], 3: [4], 4: []}}, {'pos': {0: (0, 0), 1: (-1, 1), 2: (1, 1), 3: (0, 2), 4: (0, 3)}, 'uc': {0: 
[1, 2], 1: [3], 2: [3], 3: [4], 4: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (-1, 2), 3: (1, 2), 4: (0, 3)}, 'uc': {0: [1], 1: [2, 3], 2: [4], 3: [4], 4: []}}, {'pos': {0: (0, 0), 1: (1, 1), 2: (-1, 1.5), 3: (1, 2), 4: (0, 3)}, 'uc': {0: [2, 1], 1: [3], 2: [4], 3: [4], 4: []}}, {'pos': {0: (0, 0), 1: (-1, 1), 2: (0, 1), 3: (1, 1), 4: (0, 2)}, 'uc': {0: [1, 2, 3], 1: [4], 2: [4], 3: [4], 4: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (0, 2), 3: (0, 3), 4: (0, 4), 5: (0, 5)}, 'uc': {0: [1], 1: [2], 2: [3], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (-1, 1), 2: (1, 1), 3: (0, 2), 4: (0, 3), 5: (0, 4)}, 'uc': {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (-1, 2), 3: (1, 2), 4: (0, 3), 5: (0, 4)}, 'uc': {0: [1], 1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (0, 2), 3: (-1, 3), 4: (1, 3), 5: (0, 4)}, 'uc': {0: [1], 1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (1, 1), 2: (-1, 2), 3: (1, 2), 4: (1, 3), 5: (0, 4)}, 'uc': {0: [2, 1], 1: [3], 2: [5], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (1, 1), 2: (-1, 1.5), 3: (1, 2), 4: (0, 3), 5: (0, 4)}, 'uc': {0: [2, 1], 1: [3], 2: [4], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (1, 2), 3: (-1,2.5), 4: (1, 3), 5: (0, 4)}, 'uc': {0: [1], 1: [2, 3], 2: [4], 3: [5], 4: [5], 5: []}}, {'pos': {0:(-.5, 0), 1:(-1.5, 1), 2:(.5, 1), 3:(-.5, 2), 4:(1.5, 2), 5:(.5, 3)}, 'uc': {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (-1, 1), 2: (0, 1), 3: (1, 1), 4: (0, 2), 5: (0, 3)}, 'uc': {0: [1, 2, 3], 1: [4], 2: [4], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (-1, 2), 3: (0, 2), 4: (1, 2), 5: (0, 3)}, 'uc': {0: [1], 1: [2, 3, 4], 2: [5], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (-1, 1), 2: (1, 1), 3: (-1, 2), 4: (1, 2), 5: (0, 3)}, 'uc': {0: [2, 1], 1: [3], 2: [4], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (1, 1), 2: (-1, 1.5), 3: (0, 1.5), 4: (1, 2), 5: (0, 3)}, 'uc': {0: [2, 3, 1], 1: [4], 2: [5], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1:(-.01, 1), 2: (1, 1), 3: (-1, 1.5), 4: (1, 2), 5: (0, 3)}, 'uc': {0: [3, 1, 2], 1: [4], 2: [4], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (1, 1), 2: (-1, 1.5), 3:(-.01, 2), 4: (1, 2), 5: (0, 3)}, 'uc': {0: [2, 1], 1: [3, 4], 2: [5], 3: [5], 4: [5], 5: []}}, {'pos': {0:(0, 0), 1:(-1.5, 1), 2:(-0.5, 1), 3:(0.5, 1), 4:(1.5, 1), 5:(0, 2)}, 'uc': {0: [1, 2, 3, 4], 1: [5], 2: [5], 3: [5], 4: [5], 5: []}}, #jsl {'uc':{0:[2],1:[2],2:[]}, 'pos': {0:(0,0),1:(1,0),2:(.5,1)}}, {'uc':{0:[3],1:[3],2:[3],3:[]}, 'pos': {0:(0,0),1:(.7,0),2:(1.4,0),3:(.7,1)}}, {'uc':{0:[2],1:[2],2:[3],3:[]}, 'pos': {0:(0,0),1:(1,0),2:(.5,1),3:(.5,2)}}, {'uc':{0:[1],1:[3],2:[3],3:[]}, 'pos': {0:(0,0),1:(0,1),2:(1,1),3:(.5,2)}}, #msl {'uc':{0:[1,2],1:[],2:[]}, 'pos': {0:(0,0),1:(-.5,1),2:(.5,1)}}, {'uc':{0:[1,2,3],1:[],2:[],3:[]}, 'pos': {0:(0,0),1:(-.7,1),2:(0,1),3:(.7,1)}}, {'uc':{0:[1,2],1:[],2:[3],3:[]}, 'pos': {0:(0,0),1:(-.5,1),2:(.5,1),3:(.5,2)}}, {'uc':{0:[1],1:[2,3],2:[],3:[]}, 'pos': {0:(0,0),1:(0,1),2:(-.5,2),3:(.5,2)}}, #posets {'uc':{0:[1],1:[],2:[]}, 'pos':{0:(0,0),1:(0,1),2:(.7,0)}}, {'uc':{0:[2,3],1:[2,3],2:[],3:[]}, 'pos':{0:(0,0),1:(1,0),2:(0,1),3:(1,1)}}, {'uc':{0:[2],1:[2,3],2:[],3:[]}, 'pos':{0:(0,0),1:(1,0),2:(0,1),3:(1,1)}}, {'uc':{0:[2],1:[3],2:[],3:[]}, 'pos':{0:(0,0),1:(1,0),2:(0,1),3:(1,1)}}, {'uc':{0:[2,3],1:[],2:[],3:[]}, 'pos':{0:(0,0),1:(.7,0),2:(-.5,1),3:(.5,1)}}, {'uc':{0:[2],1:[2],2:[],3:[]}, 'pos':{0:(0,0),1:(1,0),2:(.5,1),3:(1.7,0)}}, 
{'uc':{0:[1],1:[],2:[],3:[]}, 'pos':{0:(0,0),1:(0,1),2:(.7,0),3:(1.4,0)}}, {'uc':{0:[1],1:[2],2:[],3:[]}, 'pos':{0:(0,0),1:(0,1),2:(0,2),3:(.7,0)}}, ] def permuted_binary_op(m,q): qi = inverse_permutation(q) return [[q[m[qi[x]][qi[y]]] for y in range(len(m))] for x in range(len(m))] def leq2uc(le): # assumes le[x][y] => x <= y (topologically sorted) n = len(le) uc = [] for a in range(n): S = [] # accumulate upper covers of a for x in range(a+1,n): if le[a][x]==1: y = len(S)-1 while y>=0 and le[S[y]][x]==0: y = y-1 if y<0: S.append(x) uc.append(S) return uc class Posetuc(object): def __init__(self,uppercovers): self.uc = dict([(x,uppercovers[x]) for x in range(len(uppercovers))]) def size(self): return len(self.uc) def get_leq(self): if hasattr(self,'leq'): return self.leq n = self.size() leq = [[0 for y in range(n)] for x in range(n)] for i in range(n): leq[i][i] = 1 for j in self.uc[i]: leq[i][j] = 1 for i in range(n): for j in range(i+1,n): if leq[i][j]: for k in range(j+1,n): if leq[j][k]: leq[i][k] = 1 self.leq = leq return leq def get_join(self): # Freese-Jezek-Nation p217 if hasattr(self,'join'): return self.join n = self.size() join = [[0 for x in range(n)] for x in range(n)] le = self.get_leq() if not all([le[x][n-1]==1 for x in range(n)]): return "poset has no top element" p = range(n-1,-1,-1) uc = [sorted([p[y] for y in self.uc[x]]) for x in p] S = [] for x in range(n): # x=x_k join[x][x] = x for y in S: T = [] for z in uc[x]: T.append(join[y][z]) # T = {x_i \vee z : z>-x_k} q = T[0] for z in T[1:]: if z>q: q = z #error in Listing 11.9 for z in T: if not le[p[q]][p[z]]: return "poset is not a semilattice: x="+x+" y="+y join[x][y] = q join[y][x] = q S.append(x) self.join = permuted_binary_op(join,p) return self.join def get_meet(self): # Freese-Jezek-Nation p217 if hasattr(self,'meet'): return self.meet n = self.size() meet = [[0 for x in range(n)] for x in range(n)] le = self.get_leq() if not all([le[0][x]==1 for x in range(n)]): return "poset has no bottom element" lc = self.get_lowercovers() S = [] for x in range(n): # x=x_k meet[x][x] = x for y in S: T = [] for z in lc[x]: T.append(meet[y][z]) # T = {x_i \wedge z : z>-x_k} q = T[0] for z in T[1:]: if z>q: q = z for z in T: if not le[z][q]: return "poset is not a semilattice: x="+x+" y="+y meet[x][y] = q meet[y][x] = q S.append(x) self.meet = meet return meet def get_lowercovers(self): if hasattr(self,'lc'): return self.lc lc = [[] for x in range(self.size())] for i in range(self.size()): for j in self.uc[i]: lc[j].append(i) self.lc = lc return lc def depths(self): # assumes P is topologically sorted l = [0 for x in range(len(self.uc))] lc = self.get_lowercovers() for i in range(len(self.uc)-1,-1,-1): for j in lc[i]: if l[j] x <= y (topologically sorted) n = len(jn) uc = [] for a in range(n): S = [] # accumulate upper covers of a for x in range(a+1,n): if jn[a][x]==x: y = len(S)-1 while y>=0 and jn[S[y]][x]!=x: y = y-1 if y<0: S.append(x) uc.append(S) return uc def table2list(A,sym="cdot"): jn = A.op["vee"] m = A.op[sym] e = findid(m) n = A.size B = list(range(1,e))+list(range(e+1,n)) #assumes 0=bottom d = dict((x,[]) for x in range(n)) p = A.pos for x in B: for y in B: if ((x>y and p[x][1]!=p[y][1]) or (xA.pos[i][1] or \ # A.pos[i-1][1] 2017-04-12 latex = r""" \documentclass{amsart} % process this file with LuaLaTeX to avoid memory errors \usepackage{tikz} \usepackage{pythontex} \advance\textheight by 1.8in \advance\textwidth by 2in \advance\topmargin by -1in \advance\oddsidemargin by -1in \advance\evensidemargin by -1in 
\title{Residuated lattices of size up to 6} \author{} \author{} \date{\today} \begin{document} \maketitle \noindent There are $1+1+3+20+149+1488=1662$ residuated lattices with $\le 6$ elements. In the list below, each algebra is named $R^{mn}_{ij}$ where $m$ is the cardinality and $n$ enumerates nonisomorphic lattices of size $m$, in order of decreasing height. The depth of the identity element $1$ is given by $i$, and $j$ enumerates nonisomorphic algebras. For lattices of the same height distributive lattices appear before modular lattices, followed by nonmodular lattices, and selfdual lattices appear before nonselfdual lattices. Algebras with more central elements (round circles) are listed earlier, hence commutative residuated lattices precede noncommutative ones. Finally, algebras are listed in decreasing order of number of idempotents (black nodes). The monoid operation is indicated by labels. If a nonobvious product $xy$ is not listed, then it can be deduced from the given information: either it follows from idempotence ($x^2=x$) indicated by a black node or from commutativity or there are products $uv=wz$ such that $u\le x\le w$ and $v\le y\le z$ (possibly $uv=\bot\bot$ or $wz=\top\top$). If you have comments or notice any issues in this list, please email \texttt{jipsen.AT.chapman.edu}. \tikzstyle{every picture} = [scale=0.5] \tikzstyle{every node} = [draw, fill=white, circle, inner sep=0pt, minimum size=5pt] \tikzstyle{n} = [draw=none, rectangle, inner sep=0pt] %name style \tikzstyle{i} = [draw, fill=black, circle, inner sep=0pt, minimum size=5pt] %idempotent \tikzstyle{e} = [draw=none, rectangle, inner sep=0pt] \tikzstyle{nci} = [draw, fill=black, rectangle, inner sep=0pt, minimum size=5pt] \tikzstyle{nc} = [draw, fill=white, rectangle, inner sep=0pt, minimum size=5pt] \begin{tikzpicture} \draw(0,1.4)node[label=right:{\ $=$ central nonidempotent}]{}; \draw(0,2.1)node[i,label=right:{\ $=$ central idempotent}]{}; \draw(0,0)node[nc,label=right:{\ $=$ noncentral nonidempotent}]{}; \draw(0,0.7)node[nci,label=right:{\ $=$ noncentral idempotent}]{}; \end{tikzpicture} \setlength{\arraycolsep}{2pt} %\begin{center} \sloppy """ class Alg(object): def __init__(self, op={}, index=None, size=None, aut=[]): """ Construct a finite algebra. The size of the algebra is the length of any nonconstant operation. INPUT: op -- a dictionary of operations on range(size). Entries are symbol:table pairs where symbol is a string that denotes the operation symbol, e.g. 'cdot', and table is an n-dimensional array with entries from range(size). n >= 0 is the arity of the operation (not explicitly coded but can be computed from the table). 
index -- a natural number giving the position of the algebra in a list of algebras aut -- an optional list of automorphisms (permutations of range(size)) """ self.op = op self.index = index self.size = size if size!=None else len(next(x for x in op.values() if type(x)==list)) self.aut = aut undefined = -1 def alg2latex(alg): ops = alg.op.items() st = r'$\begin{array}{|l|}\hline'+'\n' st += r'A_{'+str(alg.index)+'}'+\ (r'\\|\text{Aut}(A)|='+str(len(self.aut)) if alg.aut!=[] else '')+'\\\\\n' st += '\\ '.join([latex_op(kv[0],kv[1]) for kv in ops]) return st+r'\\\hline\end{array}$' def latex_sym(s): return '\\'+s if s[0].isalpha() and len(s)>1 else s def latex_op(s,m): if type(m)==list: if len(m)==0: return s+'=[]' base = range(len(m)) #print m,type(m[0]) if type(m[0])!=list: st = r'\begin{array}{c|c}' st += '&'+latex_sym(s)+r'\\\hline'+'\n' for i in base: st += str(i)+'&'+(str(m[i]) if m[i]!=undefined else '-')+r'\\'+'\n' return st + r'\end{array}'+'\n' else: if len(m[0])==0: return s+'=[[]]' st = r'\begin{array}{c|'+"".join(['c' for i in base])+r'}' st += latex_sym(s)+'&'+'&'.join(str(d) for d in base)+r'\\\hline'+'\n' #central = [i for i in base if all(m[i][j]==m[j][i] for j in base)] for i in base: st += str(i)+'&'+'&'.join(('-' if m[i][j]==undefined \ else '\\mathbf{'+str(m[i][j])+'}' if i==j and m[i][i]==i \ else '\\textcolor{lightgray}{'+str(m[i][j])+'}' if m[i][j]==m[j][i] \ else str(m[i][j])) for j in base)+r'\\'+'\n' return st + r'\end{array}'+'\n' else: return latex_sym(s)+'='+(str(m) if m!=undefined else '') def permutations(m,n): """ return Python generator for all permutations of {m,...,n-1} """ p = [m+i for i in range(n-m)] yield p n = len(p) j = 1 while j>=0: q = list(range(n)) j = n-2 while j>=0 and p[j]>p[j+1]: j = j-1 if j>=0: for k in range(j): q[k] = p[k] k = n-1 while p[j]>p[k]: k = k-1 q[j] = p[k] i = n-1 while i>j: q[i] = p[j+n-i] i = i-1 q[j+n-k]=p[j] p = q yield q def inverse_permutation(p): q = list(range(len(p))) for i in range(len(p)): q[p[i]]=i return q def check_permutations(m): """ If partial is false, return False if every completion of some isomorphic copy q(m) of m will be lexicographically larger than m. Return True otherwise. Remove a permutation if If partial is true, perms and invperms are lists of permuatations and their inverses pflags is an array of booleans deciding which perms still need to be checked pflags_list records which flags have been set (needed to restore on backtracking) For partial algebras, q is an isomorphism if m[i][j] defined ==> q[m[i][j]]=m[q[i]][q[j]] and the same must hold for the inverse qi """ global perms, invperms, pflags, pflags_list base = range(len(m)) for k in range(len(perms)): if pflags[k]: q = perms[k] qi = invperms[k] equal = True for i in base: for j in base: mqi = m[qi[i]][qi[j]] mij = m[i][j] if mqi == undefined or mij != q[mqi]: equal = False if mqi != undefined and q[mqi]mij: #get rid of q pflags[k] = False pflags_list += [k] if not equal: break #equal means q[m[qi[i]][qi[j]]] == m[i][j] != undefined if not equal: break return True def check_associativity(m,i,j): """ This function may update the partial operation m. For an equation r(xyz)=m[s(xyz)][t(xyz)] if lhs is defined and rhs is undefined, but s(xyz), t(xyz) are defined then assign m[s(xyz)][t(xyz)]=r(xyz) and add the pair (s(xyz),t(xyz)) to entries_list. Similarly for m[s(xyz)][t(xyz)]=r(xyz). Return False if both sides are defined but not equal. After trying all assignments, return True. 
This function can perhaps be made more efficient by using the values of i,j """ global entries_list base = range(len(m)) for x in base: for y in base: xy = m[x][y] if xy != undefined: for z in base: xyz = m[xy][z]; if xyz != undefined: yz = m[y][z] if yz != undefined: if m[x][yz] != undefined: if xyz!=m[x][yz]: return False else: m[x][yz] = xyz entries_list.append((x,yz)) else: if m[y][z] != undefined and m[x][m[y][z]] != undefined: m[xy][z] = m[x][m[y][z]] entries_list.append((xy,z)) return True def check_commutativity(m,i,j): global entries_list if m[j][i]!=undefined: if m[j][i]!=m[i][j]: return False else: m[j][i] = m[i][j] # since (i,j) is the first undefined position found, m[j][i] will always be undefined entries_list.append((j,i)) return True def is_partially_ordered(m,uc,le): n = len(m) for x in range(n): for y in uc[x]: for z in range(n): if m[x][z]>-1 and m[y][z]>-1 and le[m[x][z]][m[y][z]]==0: return False if m[z][x]>-1 and m[z][y]>-1 and le[m[z][x]][m[z][y]]==0: return False return True def is_op_automorphism(m,q): n = len(m) for i in range(n): for j in range(n): if m[q[i]][q[j]]!=q[m[i][j]]: return False return True def check_lattice_ordered(m,join): """ Return False if partial optable m does not distribute over join Return True otherwise """ global entries_list n = len(m) for x in range(n): for y in range(n): xy = m[x][y] if xy>-1: for z in range(n): xz = m[x][z] if xz>-1: yz = join[y][z] xyz = m[x][yz] if xyz>-1: if xyz!=join[xy][xz]: return False else: m[x][yz] = join[xy][xz] entries_list.append((x,yz)) zy = m[z][y] if zy>-1: xz = join[x][z] xyz = m[xz][y] if xyz>-1: if xyz!=join[xy][zy]: return False else: m[xz][y] = join[xy][zy] entries_list.append((xz,y)) return True def check_divisible(m,down): """ Return False if partial optable m is not divisible i.e. x<=y implies x=m[y][u]=m[v][y] for some u,v Return True otherwise Compute list S of elements in y-row (col). Compute k = |down[y] - S|. if k>#undefined in y-row, return False if #undefined=1 and k=1 let undefined = this elt. 
""" global entries_list n = len(m) for y in range(n): S = set([m[y][u] for u in range(n) if m[y][u]>-1]) E = set(down[y]).difference(S) U = [u for u in range(n) if m[y][u]==-1] if len(E)>len(U): return False if len(U)==1 and len(E)==1: m[y][U[0]] = E.pop() entries_list += [(y,U[0])] for y in range(n): S = set([m[u][y] for u in range(n) if m[u][y]>-1]) E = set(down[y]).difference(S) U = [u for u in range(n) if m[u][y]==-1] if len(E)>len(U): return False if len(U)==1 and len(E)==1: m[U[0]][y] = E.pop() entries_list += [(U[0],y)] return True def complete_l_operation(m,join,down,i,j,associative=False,commutative=False,divisible=False): """ find next i,j where alg[i][j] = -1 = undefined; for each val=0..n-1 set alg[i][j] = val, check axioms and permutations if ok, complete_ordered_operation(m,i,j+1) then restore and return """ global algebra_list,entries_list,pflags,pflags_list,gl_index ok = True n = len(m) while ok and i-1: j = j+1 if j>=n: j = 0 i = i+1 else: ok = False if ok: if check_lattice_ordered(m,join) and \ (not divisible or check_divisible(m,down)): gl_index+=1 algebra_list.append(Alg(op={'cdot':[x[:] for x in m],'vee':join},index=gl_index,size=n)) else: for val in range(n): if all([join[x][i]!=i or m[x][j]==-1 or join[m[x][j]][val]==val \ for x in range(i)]) and \ all([join[x][j]!=j or m[i][x]==-1 or join[m[i][x]][val]==val \ for x in range(j)]): ok = True el = len(entries_list) m[i][j] = val entries_list += [(i,j)] if commutative: m[j][i] = val entries_list += [(j,i)] if ok: ok = check_lattice_ordered(m,join) if ok and divisible: ok = check_divisible(m,down) if ok and associative: ok = check_associativity(m,i,j) pl = len(pflags_list) if ok: ok = check_permutations(m) if ok: complete_l_operation(m,join,down,i,j+1,associative,commutative,divisible) while len(entries_list)>el: e = entries_list.pop() m[e[0]][e[1]] = -1 while len(pflags_list)>pl: p = pflags_list.pop() pflags[p] = True def downsets(lat): n = lat.size jn = lat.op['vee'] down = [[x for x in range(n) if jn[x][y]==y] for y in range(n)] return down def find_l_groupoids(l,associative=False,commutative=False,divisible=False,idempotent=False,identity=False,id=None,zero=False): """ Return list of lattice ordered groupoids of lattice l as nxn matrices. The lattice is given by the join table. The operation is compatible with the order. 
""" global algebra_list,perms,invperms,entries_list,pflags,pflags_list algebra_list=[] entries_list=[] n = l.size join = l.op['vee'] down = downsets(l) m = [[-1 for x in range(n)] for x in range(n)] perms = [[0]+p+[n-1] for p in permutations(1,n-1) \ if is_op_automorphism(join,[0]+p+[n-1])] if zero else \ [p+[n-1] for p in permutations(0,n-1) \ if is_op_automorphism(join,p+[n-1])] if idempotent: for x in range(n): m[x][x] = x if zero: #perms = [[0]+p for p in permutations(1,n)] for x in range(n): m[x][0] = 0 m[0][x] = 0 if identity: if id==None: #should only try all canconical identity positions t = n if zero and n>1: s = 1 else: s = 0 else: s = id; t = id+1 for e in range(s,t): for x in range(n): m[e][x] = x m[x][e] = x perms = [[0]+p+[n-1] for p in permutations(1,n-1) \ if is_op_automorphism(join,[0]+p+[n-1])] if zero else \ [p+[n-1] for p in permutations(0,n-1) \ if is_op_automorphism(join,p+[n-1])] if all([p[e]>=e for p in perms]): # eliminate isomorphic (P,e) perms = [p for p in perms if p[e]==e] #keep e fixed invperms = [inverse_permutation(p) for p in perms] pflags = [True for i in range(len(perms))] pflags_list = [] complete_l_operation(m,join,down,0,0,associative,commutative,divisible) for x in range(n): m[e][x] = -1 m[x][e] = -1 else: invperms = [inverse_permutation(p) for p in perms] pflags = [True for i in range(len(perms))] pflags_list = [] complete_l_operation(m,join,down,0,0,associative,commutative,divisible) return algebra_list def find_l_semigroups(l): return find_l_groupoids(l,associative=True) def find_idempotent_semirings(n): #not correct alg_list = [] for l in find_joinsemilattices(n): alg_list.extend(find_l_semigroups(l)) return alg_list def find_l_semigroups_with_zero(l,e=None): return find_l_groupoids(l,associative=True,id=e,zero=True) def find_semirings0(n): alg_list = [] for l in find_lattices(n): alg_list.extend(find_l_semigroups_with_zero(l)) return alg_list def find_commutative_l_semigroups(l): return find_l_groupoids(l,associative=True,commutative=True) def find_l_bands(l): return find_l_groupoids(l,associative=True,idempotent=True) def find_l_semilattices(l): return find_l_groupoids(l,associative=True,commutative=True,idempotent=True) def find_l_monoids(l,e=None): return find_l_groupoids(l,associative=True,identity=True,id=e) def find_idempotent_semirings_with_unit(n): #not correct alg_list = [] for l in find_joinsemilattices(n): alg_list.extend(find_l_monoids(l)) return alg_list def find_commutative_l_monoids(l,e=None): return find_l_groupoids(l,associative=True,commutative=True,identity=True,id=e) def find_commutative_idempotent_semirings_with_unit(n): #not correct alg_list = [] for l in find_joinsemilattices(n): alg_list.extend(find_commutative_l_monoids(l)) return alg_list def find_l_monoids_with_zero(l,e=None): return find_l_groupoids(l,associative=True,identity=True,id=e,zero=True) #a=Alg({"\\cdot":[[0,1],[1,-1]], "\\sim":[0,-1]},1) #a.uc={0:[1],1:[]} #a.pos=[(0,0),(0,1)] lattices=[ # list of lattices of size up to 5 {'pos': {0: (0, 0)}, 'uc': {0: []}}, {'pos': {0: (0, 0), 1: (0, 1)}, 'uc': {0: [1], 1: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (0, 2)}, 'uc':{0: [1], 1: [2], 2: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (0, 2), 3:(0, 3)}, 'uc': {0: [1], 1: [2], 2: [3], 3: []}}, {'pos': {0: (0, 0), 1:(-1, 1), 2: (1, 1), 3: (0, 2)}, 'uc': {0: [1, 2], 1: [3], 2: [3], 3: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (0, 2), 3: (0, 3), 4: (0, 4)}, 'uc': {0: [1], 1: [2], 2: [3], 3: [4], 4: []}}, {'pos': {0: (0, 0), 1: (-1, 1), 2: (1, 1), 3: (0, 2), 4: (0, 3)}, 'uc': {0: 
[1, 2], 1: [3], 2: [3], 3: [4], 4: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (-1, 2), 3: (1, 2), 4: (0, 3)}, 'uc': {0: [1], 1: [2, 3], 2: [4], 3: [4], 4: []}}, {'pos': {0: (0, 0), 1: (1, 1), 2: (-1, 1.5), 3: (1, 2), 4: (0, 3)}, 'uc': {0: [2, 1], 1: [3], 2: [4], 3: [4], 4: []}}, {'pos': {0: (0, 0), 1: (-1, 1), 2: (0, 1), 3: (1, 1), 4: (0, 2)}, 'uc': {0: [1, 2, 3], 1: [4], 2: [4], 3: [4], 4: []}}, ] lat6=[ {'pos': {0: (0, 0), 1: (0, 1), 2: (0, 2), 3: (0, 3), 4: (0, 4), 5: (0, 5)}, 'uc': {0: [1], 1: [2], 2: [3], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (-1, 1), 2: (1, 1), 3: (0, 2), 4: (0, 3), 5: (0, 4)}, 'uc': {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (-1, 2), 3: (1, 2), 4: (0, 3), 5: (0, 4)}, 'uc': {0: [1], 1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (0, 2), 3: (-1, 3), 4: (1, 3), 5: (0, 4)}, 'uc': {0: [1], 1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (1, 1), 2: (-1, 2), 3: (1, 2), 4: (1, 3), 5: (0, 4)}, 'uc': {0: [2, 1], 1: [3], 2: [5], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (1, 1), 2: (-1, 1.5), 3: (1, 2), 4: (0, 3), 5: (0, 4)}, 'uc': {0: [2, 1], 1: [3], 2: [4], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (1, 2), 3: (-1,2.5), 4: (1, 3), 5: (0, 4)}, 'uc': {0: [1], 1: [2, 3], 2: [4], 3: [5], 4: [5], 5: []}}, {'pos': {0:(-.5, 0), 1:(-1.5, 1), 2:(.5, 1), 3:(-.5, 2), 4:(1.5, 2), 5:(.5, 3)}, 'uc': {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (-1, 1), 2: (0, 1), 3: (1, 1), 4: (0, 2), 5: (0, 3)}, 'uc': {0: [1, 2, 3], 1: [4], 2: [4], 3: [4], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (0, 1), 2: (-1, 2), 3: (0, 2), 4: (1, 2), 5: (0, 3)}, 'uc': {0: [1], 1: [2, 3, 4], 2: [5], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (-1, 1), 2: (1, 1), 3: (-1, 2), 4: (1, 2), 5: (0, 3)}, 'uc': {0: [2, 1], 1: [3], 2: [4], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (1, 1), 2: (-1, 1.5), 3: (0, 1.5), 4: (1, 2), 5: (0, 3)}, 'uc': {0: [2, 3, 1], 1: [4], 2: [5], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1:(-.01, 1), 2: (1, 1), 3: (-1, 1.5), 4: (1, 2), 5: (0, 3)}, 'uc': {0: [3, 1, 2], 1: [4], 2: [4], 3: [5], 4: [5], 5: []}}, {'pos': {0: (0, 0), 1: (1, 1), 2: (-1, 1.5), 3:(-.01, 2), 4: (1, 2), 5: (0, 3)}, 'uc': {0: [2, 1], 1: [3, 4], 2: [5], 3: [5], 4: [5], 5: []}}, {'pos': {0:(0, 0), 1:(-1.5, 1), 2:(-0.5, 1), 3:(0.5, 1), 4:(1.5, 1), 5:(0, 2)}, 'uc': {0: [1, 2, 3, 4], 1: [5], 2: [5], 3: [5], 4: [5], 5: []}}] def permuted_binary_op(m,q): qi = inverse_permutation(q) return [[q[m[qi[x]][qi[y]]] for y in range(len(m))] for x in range(len(m))] def leq2uc(le): # assumes le[x][y] => x <= y (topologically sorted) n = len(le) uc = [] for a in range(n): S = [] # accumulate upper covers of a for x in range(a+1,n): if le[a][x]==1: y = len(S)-1 while y>=0 and le[S[y]][x]==0: y = y-1 if y<0: S.append(x) uc.append(S) return uc class Posetuc(object): def __init__(self,uppercovers): self.uc = dict([(x,uppercovers[x]) for x in range(len(uppercovers))]) def size(self): return len(self.uc) def get_leq(self): if hasattr(self,'leq'): return self.leq n = self.size() leq = [[0 for y in range(n)] for x in range(n)] for i in range(n): leq[i][i] = 1 for j in self.uc[i]: leq[i][j] = 1 for i in range(n): for j in range(i+1,n): if leq[i][j]: for k in range(j+1,n): if leq[j][k]: leq[i][k] = 1 self.leq = leq return leq def get_join(self): # Freese-Jezek-Nation p217 if hasattr(self,'join'): return self.join n = self.size() join = [[0 for x in range(n)] for x in range(n)] le = 
self.get_leq() if not all([le[x][n-1]==1 for x in range(n)]): return "poset has no top element" p = range(n-1,-1,-1) uc = [sorted([p[y] for y in self.uc[x]]) for x in p] S = [] for x in range(n): # x=x_k join[x][x] = x for y in S: T = [] for z in uc[x]: T.append(join[y][z]) # T = {x_i \vee z : z>-x_k} q = T[0] for z in T[1:]: if z>q: q = z #error in Listing 11.9 for z in T: if not le[p[q]][p[z]]: return "poset is not a semilattice: x="+x+" y="+y join[x][y] = q join[y][x] = q S.append(x) self.join = permuted_binary_op(join,p) return self.join def get_meet(self): # Freese-Jezek-Nation p217 if hasattr(self,'meet'): return self.meet n = self.size() meet = [[0 for x in range(n)] for x in range(n)] le = self.get_leq() if not all([le[0][x]==1 for x in range(n)]): return "poset has no bottom element" lc = self.get_lowercovers() S = [] for x in range(n): # x=x_k meet[x][x] = x for y in S: T = [] for z in lc[x]: T.append(meet[y][z]) # T = {x_i \wedge z : z>-x_k} q = T[0] for z in T[1:]: if z>q: q = z for z in T: if not le[z][q]: return "poset is not a semilattice: x="+x+" y="+y meet[x][y] = q meet[y][x] = q S.append(x) self.meet = meet return meet def get_lowercovers(self): if hasattr(self,'lc'): return self.lc lc = [[] for x in range(self.size())] for i in range(self.size()): for j in self.uc[i]: lc[j].append(i) self.lc = lc return lc def depths(self): # assumes P is topologically sorted l = [0 for x in range(len(self.uc))] lc = self.get_lowercovers() for i in range(len(self.uc)-1,-1,-1): for j in lc[i]: if l[j] x <= y (topologically sorted) n = len(jn) uc = [] for a in range(n): S = [] # accumulate upper covers of a for x in range(a+1,n): if jn[a][x]==x: y = len(S)-1 while y>=0 and jn[S[y]][x]!=x: y = y-1 if y<0: S.append(x) uc.append(S) return uc def table2list(A,sym="cdot"): jn = A.op["vee"] m = A.op[sym] e = findid(m) n = A.size B = list(range(1,e))+list(range(e+1,n)) #assumes 0=bottom d = dict((x,[]) for x in range(n)) p = A.pos for x in B: for y in B: if ((x>y and p[x][1]!=p[y][1]) or (xA.pos[i][1] or \ A.pos[i-1][1]iddepth: c=1 A.index=1 iddepth=A.iddepth st += uc2tikz(A) return st def savefile(fn,st): fh = open(fn,'w') fh.write(st) fh.close() def main(): global latex ind = 0 si = 1 for i in range(len(RL)): ind += 1 if si","\\leftarrow":"<-", "\\le":"<=","\\ge":">=","=":"=","\\ne":"!=","\\text{ and }":" & ","\\text{ or }":" | ", "\\implies":" -> ","\\iff":" <-> ","\\forall":"all","\\exists":"exists"} opts=["op(700,infix,\"r\")","op(700,infix,\"t\")"] ################## Parser code (can ignore this) ################# # Terms are read using 's top-down parsing algorithm # symbol_table = {} def wrap(subt, t): # decide when to add parentheses during printing of terms return subt.tex() if subt.lbp > t.lbp or len(subt.a)<=1 else "("+subt.tex()+")" class symbol_base(object): a = [] def __repr__(self): return self.tex() def tex(self): if len(self.a) == 0: return self.id if len(self.a) == 1: if self.id[0]=="^": return wrap(self.a[0],self)+self.id return self.id+" "+wrap(self.a[0],self) if len(self.a) == 2: return wrap(self.a[0],self)+self.id+(" " if self.id[0]=='\\' else "")+wrap(self.a[1],self) return self.id+" "+self.a[0].id+self.a[1].id+self.a[2].id def symbol(id, bp=0): # identifier, binding power if id in symbol_table: s = symbol_table[id] # look symbol up in table s.lbp = max(bp, s.lbp) # update left binding power else: class s(symbol_base): # create class for this symbol pass s.id = id s.lbp = bp s.nulld = lambda self: self symbol_table[id] = s return s def advance(id=None): global token if id 
and token.id != id: raise SyntaxError("Expected "+id+" got "+token.id) token = next() def nulld(self): # null denotation expr = expression() advance(")") return expr def infix(id, bp): def leftd(self, left): # left denotation self.a = [left] self.a.append(expression(bp)) return self symbol(id, bp).leftd = leftd def prefix(id, bp): global token def nulld(self): # null denotation global token if token.id != "(": self.a = [expression(bp)] return self else: token = next() self.a = [] if token.id != ")": while 1: self.a.append(expression()) if token.id != ",": break advance(",") advance(")") return self symbol(id, bp).nulld = nulld def postfix(id, bp): def leftd(self,left): # left denotation self.a = [left] return self symbol(id, bp).leftd = leftd symbol("(").nulld = nulld symbol(")") symbol("[").nulld = nulld symbol("]") symbol("(end)") for st in VAR|CONST: symbol(st) for t in PREFIX: prefix(t[0],t[1]) for t in POSTFIX: postfix(t[0],t[1]) for t in INFIX: infix(t[0],t[1]) for st in VAR: for t in QUANT: prefix(t[0]+" "+st,t[1]) def tokenize(st): i = 0 while i='0' and st[j]<='9': j+=1 if j='a' and st[j]<='z') or (st[j]>='A' and st[j]<='Z')): j+=1 if j='a' and st[j]<='z') or (st[j]>='A' and st[j]<='Z')): j+=1 if st[i]=="{" and st[j]=="}": j+=1 tok = st[i:j] if tok in ["\\mathbf","\\forall","\\exists"]: j+=2 if j='0' and st[j]<='9': j+=1 if j='a' and st[j]<='z': j+=1 j+=1 if j0) and s.find('\\langle')==-1 and s.find('\\mathbf')==-1] def finitemembers(cl,info=False): #find fine spectrum in the class cl fs = re.search(r"\\begin{finitemembers}(.*?)\\end{finitemembers}",section(cl),flags=re.DOTALL) if info: print(fs) li = re.findall("\$(.*?)\$",fs.group(1),flags=re.DOTALL) if info: print(li) ind = [s.split("_")[1].split("=")[0].strip() for s in li if s.find("_")>=0 and s.find("=")>=0] ind = [int(s[1:-1]) if s[0]=="{" else int(s) for s in ind if s.find('n')==-1 and s.find('k')==-1] for i in range(min(8,len(ind))): if ind[i]!=i+1: raise Exception("index error at: ", i+1) val = [s.split("_")[1].split("=")[1].strip() for s in li if s.find("_")>=0 and s.find("=")>=0] return [int(s) for s in val if s!="" and s.find("n")==-1 and s.find("k")==-1] def subclasses(cl,info=False): #find subclasses of the class cl fs = re.search(r"\\begin{subclasses}(.*?)\\end{subclasses}",section(cl),flags=re.DOTALL) if info: print(fs) return re.findall(r"\\hyperlink{(.*?)}{",fs.group(1),flags=re.DOTALL) def set_subclasses(cl,sbcli,longname,st=latex_st): #replace subclasses with those in sbcli mo = section_mo(cl,st) newsbc = r"\\begin{subclasses}\n"+"\n\n".join(r" \\hyperlink{"+x+"}{"+x+":"+(longname[x] if x in longname.keys() else "")+"}" for x in sbcli)+r"\n\\end{subclasses}" newst = re.sub(r"\\begin{subclasses}.*?\\end{subclasses}",newsbc,mo.group(1),flags=re.DOTALL) return st[:mo.start(1)]+newst+st[mo.end(1):] def subclassposet(): m = re.findall(r"\\hypertarget{(.*?)}{",latex_st,flags=re.DOTALL) return {s:subclasses(s) for s in m} def superclasses(cl,info=False): #find subclasses of the class cl fs = re.search(r"\\begin{superclasses}(.*?)\\end{superclasses}",section(cl),flags=re.DOTALL) if info: print(fs) return re.findall(r"\\hyperlink{(.*?)}{",fs.group(1),flags=re.DOTALL) def set_superclasses(cl,spcli,longname,st=latex_st): #replace superclasses with those in spcli mo = section_mo(cl,st) newspc = r"\\begin{superclasses}\n"+"\n\n".join(r" \\hyperlink{"+x+"}{"+x+":"+(longname[x] if x in longname.keys() else "")+"}" for x in spcli)+r"\n\\end{superclasses}" newst = 
re.sub(r"\\begin{superclasses}.*?\\end{superclasses}",newspc,mo.group(1),flags=re.DOTALL) return st[:mo.start(1)]+newst+st[mo.end(1):] def superclassposet(): m = re.findall(r"\\hypertarget{(.*?)}{",latex_st,flags=re.DOTALL) return {s:superclasses(s) for s in m} def finitememberslatex(li): #convert list of numbers to fine spectrum in LaTeX return ",\n".join("$f_"+(str(i) if i<10 else "{"+str(i)+"}")+" = "+str(li[i-1])+"$" for i in range(1,len(li)+1))+"\n" def p9out(A): #output formula A in Prover9 format if A.a==[]: return p9sym[A.id] if A.id[:7] in ["\\forall","\\exists"]: return p9sym[A.id[:7]]+" "+A.id[8:]+"("+p9out(A.a[0])+")" if len(A.a)==1: #if symbol_table[p9sym[A.id]].lbp!=12: return p9sym[A.id]+"("+p9out(A.a[0])+")" #return "("+p9out(A.a[0])+")"+p9sym[A.id] return "("+p9out(A.a[0])+p9sym[A.id]+p9out(A.a[1])+")" po=["x<=x","x<=y & y<=x -> x=y","x<=y & y<=z -> x<=z"] msl=["(x^y)^z=x^(y^z)","x^y=y^x","x^x=x","x^y=x<->x<=y"] jsl=["(x v y)v z=x v(y v z)","x v y=y v x","x v x=x","x v y=y<->x<=y"] lat=msl+jsl+["x v(x^y)=x","x^(x v y)=x"] dlat=lat+["x^(y v z)=(x^y)v(x^z)"] to=lat+["x^y=x | x^y=y"] #["x<=y | y<=x"] ba=dlat+["x'v x=t","x'^x=b"] uo=[] axioms=[po,jsl,msl,lat,dlat,to,ba,uo] cli = [allclasses(chapter(i)) for i in range(2,10)] fam = {x:cx for cx in range(8) for x in cli[cx]} def finespectrum(cl,n,info=True,f=fam,new_ax=None): # call Prover9 on the translated full definition of cl and find the fine spectrum up to n if info: print(cl) try: ax = [p9out(parse(e)) for e in fulldefinition(cl)] except: print("############### Error, skipping", cl) return [] ax = [(x[1:-1] if x[0]=='(' and x[-1]==')' else x) for x in ax] ch_axioms = axioms[fam[cl]] if new_ax==None else new_ax if info: print(ch_axioms+ax) t = time.time() a = [[1]]+[prover9(ch_axioms+ax,[],10000,10000,k,options=opts) for k in range(2,n+1)] if info: print("Time: {:.2f}".format(time.time()-t), "sec") return [len(x) for x in a] def section_mo(cl,st=latex_st): cl = cl.replace('\\','\\\\').replace('$','\\$').replace('{','\\{') return re.search(r"(\\hypertarget{"+cl+r"}{.*?)\\hypertarget",st,flags=re.DOTALL) def set_finitemembers(cl,li,st=latex_st): #read cl string from latex_st, replace finitemembers entry and return new latex_st mo = section_mo(cl,st) newst = re.sub(r"\\begin{finitemembers}.*?\\end{finitemembers}",r"\\begin{finitemembers}\n"+finitememberslatex(li)+r"\\end{finitemembers}",mo.group(1),flags=re.DOTALL) return st[:mo.start(1)]+newst+st[mo.end(1):] def set_classname(cl,new_name,st=latex_st): #replace long class name of the class cl mo = section_mo(cl,st) newst = re.sub(r"\\section{.*?}",r"\\section{"+new_name+r"}",mo.group(1),flags=re.DOTALL) return st[:mo.start(1)]+newst+st[mo.end(1):] ch=jsl def compareandupdatefs(chax,cl): li = finitemembers(cl) sli = [x for x in li if x <= 1000] n = min(max(len(sli),3),6) fs = finespectrum(chax,cl,n,True) print(cl,li) return li[:n]==fs or fs==[], fs def sectionnames(st): return re.findall(r"\\hypertarget{(.*?)}{",st,flags=re.DOTALL) #axioms=[[],["x<=y->x=y"],po,jsl,msl,lat,dlat,to,ba,uo] def checkfs(li,info=False): global latex_st m=[[] for x in range(10)] for ch in li: st1=chapter(ch,latex_st) m[ch]=sectionnames(st1) print([len(x) for x in m]) for ch in li: print(ch,axioms[ch]) for cl in m[ch]: if fulldefinition(cl)!="none": fl = compareandupdatefs(axioms[ch-2],cl) if not fl[0]: print("***********",fl) latex_st = set_finitemembers(cl,fl[1],latex_st) def allclasses(st=latex_st): #return list of class names in st li=re.findall(r"\\hypertarget{(.*?)}{",st,flags=re.DOTALL) 
print("Number of classes:", len(li)) return li def allclassposets(st=latex_st): #return list of tikz diagrams return re.findall(r"(\\begin{tikzpicture}\[xscale=.6.*?\\end{tikzpicture}\n)",latex_st,flags=re.DOTALL) def allnodes(st): #return list of node names in st li=re.findall(r"\\node\((.*?)\)",st,flags=re.DOTALL) print("Number of nodes:", len(li)) return li def lowercovers(nd,st): #return list of lowercovers of node name nd in st edges=re.search(r"\\draw\("+nd+"\)(.*?);",st,flags=re.DOTALL) return re.findall(r"edge.*?\((.*?)\)",edges.group(1),flags=re.DOTALL) if edges!=None else [] def lc2uc(lc): uc={x:[] for x in lc} for x in lc: for y in lc[x]: if y in uc: uc[y].append(x) return uc # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from ObjectDetectionElsys.config import Config from ObjectDetectionElsys.augmentation import change_brightness_slightly, change_brightness_not_so_slightly, dropout, adjust_contrast, blur, grayscale, noise, sharpen from ObjectDetectionElsys.augmenter import augment_images # + cfg_path = './cfg/mobilenetyolov2.cfg' cfg = Config(cfg_path) images = './images/' annotations = './annotations/' augmenters = [blur, sharpen, noise, change_brightness_not_so_slightly, change_brightness_slightly, dropout] target = 5150 max_augs = 3 #dest_im = './images/' #dest_ann = './annotations/' # - augment_images(cfg, images, annotations, augmenters, target, max_augs) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Trees # # ## Binary Tree - PreOrder Traversal, InOrder Traversal and PostOrder Traversal # # # 1. PreOrder Traversal - First root, then left subtree then right subtree # 2. InOrder Traversal - First left subtree, then root then right subtree (Mathematical computations) # 3. PostOrder Traversal - First left subtree, then right subtree then root node (Infix Stack operations using ) # + # Definition for a binary tree node. 
class TreeNode: def __init__(self, x): self.val = x self.left = None self.right = None class Solution: pre = [] def PreOrder(self, root): if root: # first root value is printed in pre order pre.append(root.val) # then left subtree PreOrder(root.left) # then right subtree PreOrder(root.right) return pre def InOrder(self, root): if root: InOrder(root.left) print(root.val), InOrder(root.right) def PostOrder(self, root): if root: PostOrder(root.left) PostOrder(root.right) print(root.val), node = TreeNode(5) node.left = TreeNode(4) node.right = TreeNode(6) node.left.left = TreeNode(9) node.left.right = TreeNode(2) node.right.left = TreeNode(7) node.right.right = TreeNode(1) s = Solution() pre = s.PreOrder(node) print("Pre Order Traversal: ", pre) # + val = "" print(ord(val[4])) hashed = 0 for i in range(len(val)): hashed += (ord(val[i]) * (31 ** (len(val) - 1 - i))) % (10 ** 9 + 7) print(hashed) hashed - 4837933372 # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- import glob import matplotlib.pyplot as plt import pandas as pd import numpy as np import pickle from IPython.core.debugger import set_trace import difflib import csv from scipy.stats import entropy import re alldata_new.keys() # + # load data alldata_new = pickle.load(open('output/dtm_processed_output.p', 'rb')) doctitles=alldata_new['docnames'] doc_year=alldata_new['docs_per_year'] doc_ids =[0]+list(np.cumsum(doc_year)) term_topic = alldata_new['term_topic']# term_topic is n_years*n_topics*n_terms terms = alldata_new['terms'] term_frequency = alldata_new['term_frequency'][1:] # weirdly the first entry is empty doc_topicyrs = alldata_new['doc_topic'] doc_topic = [] doc_length=[] for year in range(len(term_topic)): doc_topic.append(alldata_new['doc_topic'][doc_ids[year]:doc_ids[year+1]])# doc_topic is nyear*n_docs given year*n_topics doc_length.append(alldata_new['doc_length'][doc_ids[year]:doc_ids[year+1]]) #doc_length is nyear*n_docs given year""" # rename topics by the hand-picked names topic_labels = pickle.load(open('topicnames.p','rb')) # - def stringdiff(a,b): return sum ( a[i] != b[i] for i in range(len(a)) ) # + def getlist(titles,doctitles): doclist=[] titlelist=[] titles = [k.lower() for k in titles] for doc in doctitles: for title in titles: matchratio = difflib.SequenceMatcher(None,title,doc).ratio() if matchratio >.7: print(doc+'\n'+title) doclist.append(doctitles.index(doc)) titlelist.append(title) if set(titlelist)==set(titles): break for t in titles: if t not in titlelist: print('\ncannot find: '+t) return([doclist,titlelist]) # given a list of paper, what are their main topics? 
for analyzing like a lab or an author def maintopics(doclist,*topic_labels): ntopics=20 doc_topfreq=np.empty((len(doclist),ntopics)) for k in range(len(doclist)): if isinstance(doclist[0],int): doc_topfreq[k]=alldata_new['doc_topic'][doclist[k]] elif len(doclist[0])==2: # year then index try: doc_topfreq[k]=doc_topic[doclist[k][0]][doclist[k][1]] except: year=doclist[k][0] print('year%d'%doclist[k][0]) print(len(doc_topic[year])) docdir = 'text_data/volume_{}/'.format(22+year) alldocs = glob.glob(docdir+'*.txt') print(len(alldocs)) set_trace() doc_topfreq = np.mean(doc_topfreq,axis=0) doc_topfreq = doc_topfreq/sum(doc_topfreq) maintopid = np.argsort(-doc_topfreq) doc_topfreq=doc_topfreq[maintopid] if topic_labels: maintopics=[topic_labels[0][idx] for idx in maintopid] return (maintopics,doc_topfreq) def lab_summary(titles,doctitles,label): [doclist,titlelist]=getlist(titles,doctitles) (mtops,meantpfreq)=maintopics(doclist,topic_labels) with open('result/lab_topic/'+label+'.txt','w') as f: for k in range(len(mtops)): f.write(mtops[k]+', freq={}'.format(meantpfreq[k])+'\n') f.write('\n papers included:\n') for title in titlelist: f.write(title) with open('result/lab_topic/'+label+'.csv','w') as f: csvwriter = csv.writer(f) for k in range(len(mtops)): csvwriter.writerow([mtops[k],meantpfreq[k]]) return(doclist,titlelist,mtops,meantpfreq) def labentropy(doclist): entrop=[] for doc in doclist: entrop.append(entropy(alldata_new['doc_topic'][doc])) return (np.mean(entrop)) # - label='alex' titles=['Computationally reproducible experiments','The Attentional Learning Trap and How to Avoid it','Online Experiments using jsPsych, psiTurk, and Amazon Mechanical Turk'] (doclist,titlelist,mtops,meantpfreq)=lab_summary(titles,label) label='anselm' titles=['Asking and evaluating natural language questions'] (doclist,titlelist,mtops,meantpfreq)=lab_summary(titles,label) # + # find the paper index for given titles label='gureckis' # find all titles from gureckis lab titles = [] with open('lab_paper/Gureckis','r') as f: for line in iter(f.readline, ''): if 'Annual Conference of the Cognitive Science' in line: line = line.lower() ids = re.search(r"(20[0-1][0-9])", line).end(0)+2 if '"' in line[ids:ids+4]: ids +=1 ide = line.find('" in ') else: ide = line.find('proceedings')-2 pptitle = line[ids:ide] titles.append(pptitle) (doclist,titlelist,mtops,meantpfreq)=lab_summary(titles,doctitles,label) gureckis_width=labentropy(doclist) pickle.dump([doclist,titlelist,mtops,meantpfreq,gureckis_width],open('result/lab_topic/'+label+'.p','wb')) # - mtops label='m_frank' [doclist,titlelist,mtops,meantpfreq,gureckis_width]=pickle.load(open('result/lab_topic/'+label+'.p','rb')) with open('result/lab_topic/'+label+'.csv','w') as f: csvwriter = csv.writer(f) for k in range(len(mtops)): csvwriter.writerow([mtops[k],meantpfreq[k]]) print('%d of %d papers are found for %s lab'%(len(doclist),len(titles),label)) label='m_frank' # find all titles from titles = [] with open('lab_paper/M_Frank','r') as f: for line in iter(f.readline, ''): if 'Annual Conference of the Cognitive Science' in line: ids = line.find('). 
')+3 ide = line.find('Proceedings')-2 pptitle = line[ids:ide] titles.append(pptitle) # lab summary in the topic space (doclist,titlelist,mtops,meantpfreq)=lab_summary(titles,doctitles,label) frank_width=labentropy(doclist) pickle.dump([doclist,titlelist,mtops,meantpfreq,gureckis_width],open('result/lab_topic/'+label+'.p','wb')) print(frank_width) print(entropy(meantpfreq)) mtops print('%d of %d papers are found for %s lab'%(len(doclist),len(titles),label)) label='Murphy' titles=['A knowledge resonance (KRES) model of category learning','Eyetracking as an implicit measure of category-based induction'] (doclist,titlelist,mtops,meantpfreq)=lab_summary(titles,label) murphy_width=labentropy(doclist) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CS_Signalling_ERBB_RAS_AKT import h5py import numpy as np import pandas as pd import re import sys sys.path.insert(0, 'CS_Signalling_ERBB_RAS_AKT_petab') import CS_Signalling_ERBB_RAS_AKT_petab as modelModule model = modelModule.getModel() res_file = '/home/dweindl/Downloads/ipopt-ERBB_RAS_AKT_Drugs_r389936-noHierarchical-noProtein-CV1-srv24ib.886758.0_rank00000.h5' # ## Fixed parameters model_k = model.getFixedParameterIds() with h5py.File(res_file, 'r') as f: res_k = f['/inputData/fixedParameters/parameterNames'][:] map_k = pd.DataFrame(data={'model_id': model_k, 'results_id': None}) map_k.set_index(['model_id'], inplace=True) map_k.head() # map species: for model_id in map_k.index: if model_id in res_k: map_k.loc[model_id] = model_id # show unmapped map_k.loc[map_k.results_id.isnull()] def update_parameter_name(old_name): # p_r_23811_k_RPKM2protein_reaction_23811 # -> reaction_23811_r23811_k_RPKM2protein # if no match, return as is # new_name = re.sub(r'p_r_(\d+)_(.*)_(reaction_\d+)', # r'\3_r\1_\2', old_name) new_name = re.sub(r'p_r_(\d+_.*)_(reaction_\d+)', r'r\1', old_name) return new_name # old names to new names: for results_id in res_k: model_id = update_parameter_name(results_id) if model_id in map_k.index: map_k.loc[model_id, 'results_id'] = results_id # show unmapped map_k.loc[map_k.results_id.isnull()] # + # TODO: estimate all Genespecific scaling # - map_k.to_csv("fixed_parameter_map.tsv", sep='\t') # ## Dynamic parameters model_p = model.getParameterIds() with h5py.File(res_file, 'r') as f: res_p = f['/inputData/parameters/modelParameterNames'][:] map_p = pd.DataFrame(data={'model_id': model_p, 'results_id': None}) map_p.set_index(['model_id'], inplace=True) map_p # + # old names to new names: for results_id in res_p: model_id = update_parameter_name(results_id) if model_id in map_p.index: map_p.loc[model_id, 'results_id'] = results_id # show unmapped map_p.loc[map_p.results_id.isnull()] # - # these parameters won't be used map_p.loc['observableParameter1_proliferation', 'results_id'] = 1.0 map_p.loc['noiseParameter1_proliferation', 'results_id'] = 1.0 map_p.to_csv("dynamic_parameter_map.tsv", sep='\t') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import torch from torch import nn from torch.autograd import Variable from data_loader import DataLoader, load_dictionary from model import UniSkip from config import * from datetime import datetime, timedelta # - d = 
DataLoader("./data/patent.refined.sep.unigram.id.txt") dictionary = load_dictionary("./data/patent.refined.sep.unigram.id.txt.pkl") len(dictionary) spm_vocab = {} with open('/mnt/48TB/temp/patent.refined.sep.mixed.txt.vocab') as f: for line in f.readlines(): spm_vocab[line.split()[0]] = line.split()[1] [w for w in dictionary if w not in spm_vocab] CUDA_DEVICE mod = UniSkip() if USE_CUDA: mod.cuda(CUDA_DEVICE) lr = 3e-4 optimizer = torch.optim.Adam(params=mod.parameters(), lr=lr) # + loss_trail = [] last_best_loss = None current_time = datetime.utcnow() def debug(i, loss, prev, nex, prev_pred, next_pred): global loss_trail global last_best_loss global current_time this_loss = loss.data[0] loss_trail.append(this_loss) loss_trail = loss_trail[-20:] new_current_time = datetime.utcnow() time_elapsed = str(new_current_time - current_time) current_time = new_current_time print("Iteration {}: time = {} last_best_loss = {}, this_loss = {}".format( i, time_elapsed, last_best_loss, this_loss)) print("prev = {}\nnext = {}\npred_prev = {}\npred_next = {}".format( d.convert_var_to_sentences(prev), d.convert_var_to_sentences(nex), d.convert_var_to_sentences(prev_pred), d.convert_var_to_sentences(next_pred), )) try: trail_loss = sum(loss_trail)/len(loss_trail) if last_best_loss is None or last_best_loss > trail_loss: print("Loss improved from {} to {}".format(last_best_loss, trail_loss)) save_loc = "./saved_models/skip-best".format(lr, VOCAB_SIZE) print("saving model at {}".format(save_loc)) torch.save(mod.state_dict(), save_loc) last_best_loss = trail_loss except Exception as e: print("Couldn't save model because {}".format(e)) # + print("Starting training...") # a million iterations for i in range(0, 894100): sentences, lengths = d.fetch_batch(32 * 10) loss, prev, nex, prev_pred, next_pred = mod(sentences, lengths) if i % 100 == 0: debug(i, loss, prev, nex, prev_pred, next_pred) optimizer.zero_grad() loss.backward() optimizer.step() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from __future__ import print_function # + def fac(x): if x == 1: return 1 else: return x*fac(x-1) print(fac(5)) print(fac(52)) print(fac(52).bit_length()) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="nXRYFcKeeCLh" colab_type="code" outputId="72acabfd-1729-41d7-831c-323f719e57ad" colab={"base_uri": "https://localhost:8080/", "height": 34} from google.colab import drive drive.mount('/content/drive', force_remount=True) gdrive_home_dir = '/content/drive/My Drive/' # + id="g310W7Bcmqtp" colab_type="code" outputId="8c95e655-57cd-4099-8acc-a02e13bfca62" colab={"base_uri": "https://localhost:8080/", "height": 170} # !ls '/content/drive/My Drive/dataset_fp/labelfinger.csv' # !ls '/content/drive/My Drive/dataset_fp/data/' # + id="tSHR-OA1c40N" colab_type="code" colab={} from __future__ import print_function import numpy as np # + id="sS3_oeBJcoWP" colab_type="code" colab={} np.random.seed(1337) # for reproducibility # + id="LyK02LtMcq6-" colab_type="code" outputId="13441e5f-a097-4522-b29c-f64ebfb3adb8" colab={"base_uri": "https://localhost:8080/", "height": 34} import pandas as pd from PIL import Image from keras.models import 
Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Convolution2D, MaxPooling2D from keras.utils import np_utils from keras import backend as K # + id="0T8w8xsKcpM7" colab_type="code" colab={} batch_size = 10 nb_classes = 10 nb_epoch = 2 # + id="uDpUl8gycjD9" colab_type="code" colab={} # To make the data ready for CNN, pictures are named with indexes, like '1.jpg', '2.jpg', etc.. def dir_to_dataset(mypath, loc_train_labels=""): dataset = [] gbr = pd.read_csv(loc_train_labels, sep="\t") #for file_count, file_name in enumerate( sorted(glob(glob_files),key=len) ): for i in range(1,81): image = Image.open(mypath + str(i)+'.jpg') img = Image.open(mypath + str(i)+'.jpg').convert('LA') #tograyscale ''' pixels_list = list(img.getdata()) for pixel in pixels_list: p = pixel dataset.append(p) ''' #pixels_list = list(img.getdata()) #width, height = img.size #pixels = [pixels[i * width:(i + 1) * width] for i in range(height)] pixels = [f[0] for f in list(img.getdata())] #<- original dataset.append(pixels) # outfile = glob_files+"out" # np.save(outfile, dataset) if len(loc_train_labels) > 0: df = pd.read_csv(loc_train_labels) return np.array(dataset), gbr["Class"].values else: return np.array(dataset) # + id="6ZglATaj7btE" colab_type="code" outputId="9cd4e7cf-afdf-4224-9f34-871a2f8583d6" colab={"base_uri": "https://localhost:8080/", "height": 802} if __name__ == '__main__': Data, y = dir_to_dataset("/content/drive/My Drive/dataset_fp/data/","/content/drive/My Drive/dataset_fp/labelfinger.csv") #Split the train set and validation set train_set_x = Data[:60] val_set_x = Data[10:] train_set_y = y[:60] val_set_y = y[10:] (X_train, y_train), (X_test, y_test) = (train_set_x,train_set_y),(val_set_x,val_set_y) # input image dimensions img_rows, img_cols = 640, 480 # number of convolutional filters to use nb_filters = 32 # size of pooling area for max pooling pool_size = (2, 2) # convolution kernel size kernel_size = (3, 3) # Checking if the backend is Theano or Tensorflow if K.image_dim_ordering() == 'th': X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols) X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols) input_shape = (1, img_rows, img_cols) else: X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1) X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 print('X_train shape:', X_train.shape) print(X_train.shape[0], 'train samples') print(X_test.shape[0], 'test samples') # convert class vectors to binary class matrices Y_train = np_utils.to_categorical(y_train, nb_classes) Y_test = np_utils.to_categorical(y_test, nb_classes) model = Sequential() model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], border_mode='valid', input_shape=input_shape)) model.add(Activation('relu')) model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1])) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=pool_size)) model.add(Dropout(0.25)) model.add(Flatten()) #model.add(Dense(128)) model.add(Dense(32)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(nb_classes)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy']) model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test, verbose=0) 
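# --- Added sketch (not part of the original notebook) -----------------------------------
# The cells above use the legacy Keras 1 API (Convolution2D, border_mode=, nb_epoch=,
# K.image_dim_ordering()). Assuming tensorflow.keras is available, the same architecture
# can be written with the current API roughly as below; this only rebuilds the model for
# comparison and does not retrain or replace the fitted model above.
def build_equivalent_model_tf_keras(input_shape, nb_filters=32, kernel_size=(3, 3),
                                    pool_size=(2, 2), nb_classes=10):
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
    m = Sequential([
        Conv2D(nb_filters, kernel_size, padding='valid', activation='relu',
               input_shape=input_shape),
        Conv2D(nb_filters, kernel_size, activation='relu'),
        MaxPooling2D(pool_size=pool_size),
        Dropout(0.25),
        Flatten(),
        Dense(32, activation='relu'),
        Dropout(0.5),
        Dense(nb_classes, activation='softmax'),
    ])
    m.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
    return m
# -----------------------------------------------------------------------------------------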
print(model.summary()) print('Test score:', score[0]) print('Test accuracy:', score[1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Anago36 # language: python # name: anago36 # --- # + import sys sys.executable # %load_ext autoreload # %autoreload 2 # %reload_ext autoreload ''' TRAINING_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/frame.train.conll" DEV_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/frame.dev.conll" TEST_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/frame.test.conll" ''' ''' TRAINING_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/ner_conll2003_en.train.conll" DEV_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/ner_conll2003_en.dev.conll" TEST_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/ner_conll2003_en.test.conll" ''' ''' TRAINING_DATA_PATH = "/Users/slouvan/sandbox/anago/data/conll2003/en/ner/train.lower.txt" DEV_DATA_PATH = "/Users/slouvan/sandbox/anago/data/conll2003/en/ner/valid.lower.txt" TEST_DATA_PATH = "/Users/slouvan/sandbox/anago/data/conll2003/en/ner/test.lower.txt" ''' ''' TRAINING_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/restaurant.train.conll" DEV_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/restaurant.dev.conll" TEST_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/restaurant.test.conll" ''' ''' TRAINING_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/movie_MIT.train.conll" DEV_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/movie_MIT.dev.conll" TEST_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/movie_MIT.test.conll" ''' TRAINING_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/atis-2.train.conll" DEV_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/atis-2.dev.conll" TEST_DATA_PATH = "/Users/slouvan/sandbox/cross-domain/data/atis.test.iob.conll" # - import anago from anago.reader import load_data_and_labels, load_glove x_train, y_train = load_data_and_labels(TRAINING_DATA_PATH) x_dev, y_dev = load_data_and_labels(DEV_DATA_PATH) x_test, y_test = load_data_and_labels(TEST_DATA_PATH) # + print(len(x_test)) print(len(y_train)) print(len(y_dev)) print(y_test[0]) cnt = 0 for x in x_train: if len(x) == 1: cnt +=1 print(cnt) # - embeddings = load_glove("/Users/slouvan/sandbox/cross-domain/data/glove.6B.100d.txt") model = anago.Sequence(embeddings=embeddings,char_feature=True) model.train(x_train, y_train, x_dev, y_dev) model.eval(x_test, y_test) model.save("/Users/slouvan/sandbox/anago/mymodel") # + from anago import config model = anago.Sequence(char_feature=True) model = model.load(config.ANAGO_CHECKPOINT_DIR) model.eval(x_test, y_test) # + ''' soccer o - o japan b-loc get o lucky o win o , o china b-per in o surprise o defeat o . 
o ''' model.analyze("i want to fly from milan to boston .".split()) # - model.eval(x_test, y_test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt import matplotlib.patches as patches import pyart from wsrlib import * # requirements: # - pyart # - boto3 # - pyproj # + #name = 'KBUF20160107_123931_V06' #name = 'KBUF20160806_101141_V06' def image2xy(x, y, r=0, dim=600, rmax=150000): ''' Convert from image coordinates to (x,y) coordinates offset from radar ''' x0 = y0 = dim/2.0 # origin x = (x - x0)*2*rmax/dim y = -(y - y0)*2*rmax/dim r = r*2*rmax/dim return x, y, r # + import pandas as pd roosts = pd.read_csv('roost_labels_KBUF2016.csv') roosts = roosts.loc[roosts['label']=='swallow-roost'] roosts['tot_refl'] = 0 groups = roosts.groupby('filename') n = len(groups) for j, (scan, group) in enumerate(groups): if (j < 133 or j > 138): continue print('Processing %s (%03d/%03d)' % (scan, j, n)) for i,row in group.iterrows(): #print(i) #print(i, row['track_id'], row['from_sunrise']) roosts.loc[i,'tot_refl'] = i*10 display(roosts) # - print(len(roosts['track_id'])) # + # File name and roost location name = 'KBUF20160826_101937_V06' # 1907,KBUF20160826_101937_V06,-14,0.657,235.294551,278.52,26.456692 x0, y0, r = image2xy(235.294551,278.52,26.456692) name = 'KBUF20160826_103825_V06' # 1913,KBUF20160826_103825_V06,5,1.024,64.915009,390.60,52.056629 x0, y0, r = image2xy(64.915009,390.60,52.056629) # name = 'KBUF20160808_101440_V06' # 1612,KBUF20160808_101440_V06,1,0.863,45.938196,389.39,45.936024 # x0, y0, r = image2xy(45.938196,389.39,45.936024) print(x0,y0,r) radar = read_s3(name) # + # Plot reflectivity disp = pyart.graph.RadarDisplay(radar) rmax = 150 # km lim = (-rmax, rmax) disp.plot('reflectivity', 0, title='NEXRAD Reflectivity', vmin=-5, vmax=35, colorbar_label='dBZ') disp.set_limits(xlim=lim, ylim=lim) # Plot roost rect = patches.Rectangle(((x0-r)/1000,(y0-r)/1000),2*r/1000,2*r/1000,linewidth=1,edgecolor='r',facecolor='none') plt.gca().add_patch(rect) plt.show() # - data, y, x, elev = radar2mat(radar, coords='cartesian', dim=600, r_max=150000) print(data['reflectivity'].shape) # + rng, az, elev, dbz = get_volumes(radar, coords='antenna') # Convert to reflectivity and set nan --> zero refl, _ = z_to_refl(idb(dbz)) refl[np.isnan(refl)] = 0 # + ## Pulse volume coordinate conversions # Get range along ground and height above ground [ground_rng, height] = slant2ground(rng, elev) # Convert compass bearing to mathematical angle theta = cmp2pol(az) # Convert to X,Y coordinates of each pulse volume [x, y] = pol2cart(theta, ground_rng) # + # Convert roost location to lat / lon [angle, dist_from_radar] = cart2pol(x0, y0) bearing = pol2cmp(angle) from pyproj import Geod geodesic = Geod(ellps='WGS84') roost_lon, roost_lat, _ = geodesic.fwd(radar.longitude['data'][0], radar.latitude['data'][0] , bearing, dist_from_radar) #roost_lon = do_alias(roost_lon, 180) print('lat,lon=%.4f,%.4f' % (roost_lat, roost_lon)) # + DIST = np.sqrt( (x-x0)**2 + (y-y0)**2 ) # distance of each sample volume to center of roost edges = np.arange(0, 1.5*r, 500) nbins = len(edges)-1 density = np.full(nbins, np.nan) for i in range(nbins): inds = (DIST >= edges[i]) & (DIST < edges[i+1]) # & (elev < 0.55) density[i] = np.sum(refl[inds]) # - bin_start = edges[:-1] plt.plot(bin_start, density, 'o') plt.show() # + 
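# Added helper sketch (not in the original notebook): the per-ring loop in the previous
# cell can also be expressed with np.histogram. It assumes DIST, refl and the bin-edge
# array `edges` defined above; apart from the right edge of the last bin, the weighted
# histogram gives the same ring totals as the explicit mask-and-sum loop.
def radial_reflectivity_profile(dist, refl, edges):
    """Sum the reflectivity of all pulse volumes falling into each distance ring."""
    totals, _ = np.histogram(dist, bins=edges, weights=refl)
    return totals

# Equivalent to the `density` array computed above:
# density_alt = radial_reflectivity_profile(DIST, refl, edges)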
inds = DIST < r plt.hist(DIST[inds], weights=refl[inds], bins=40) plt.show() inds = (DIST < r) & (height < 4000) plt.hist(height[inds], weights=refl[inds], bins=40) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 3.8.2 # language: python # name: 3.8.2 # --- # + papermill={"duration": 0.356807, "end_time": "2020-07-01T21:57:20.467394", "exception": false, "start_time": "2020-07-01T21:57:20.110587", "status": "completed"} tags=[] from dask.array import from_array from numpy import array A = array(range(36)).reshape((6,6)) d = from_array(A, chunks=(3,3)) d # + [markdown] papermill={"duration": 0.013713, "end_time": "2020-07-01T21:57:20.495329", "exception": false, "start_time": "2020-07-01T21:57:20.481616", "status": "completed"} tags=[] # ## `scipy.sparse` # + papermill={"duration": 0.051382, "end_time": "2020-07-01T21:57:20.560967", "exception": false, "start_time": "2020-07-01T21:57:20.509585", "status": "completed"} tags=[] from scipy.sparse import spmatrix, coo_matrix, csr_matrix, csc_matrix, dia_matrix sps = d.map_blocks(coo_matrix, chunks=(3,3)) spsc = sps.compute(scheduler="synchronous") spsc # + papermill={"duration": 0.02682, "end_time": "2020-07-01T21:57:20.602002", "exception": false, "start_time": "2020-07-01T21:57:20.575182", "status": "completed"} tags=[] spsc.todense() # + [markdown] papermill={"duration": 0.014072, "end_time": "2020-07-01T21:57:20.629306", "exception": false, "start_time": "2020-07-01T21:57:20.615234", "status": "completed"} tags=[] # ### `axis=None` # + [markdown] papermill={"duration": 0.013582, "end_time": "2020-07-01T21:57:20.657050", "exception": false, "start_time": "2020-07-01T21:57:20.643468", "status": "completed"} tags=[] # Dask dense blocks: # + papermill={"duration": 0.045208, "end_time": "2020-07-01T21:57:20.715680", "exception": false, "start_time": "2020-07-01T21:57:20.670472", "status": "completed"} tags=[] d.sum().compute(), d.sum(keepdims=False).compute(), d.sum(keepdims=True).compute() # + [markdown] papermill={"duration": 0.01538, "end_time": "2020-07-01T21:57:20.745500", "exception": false, "start_time": "2020-07-01T21:57:20.730120", "status": "completed"} tags=[] # Dask scipy.sparse blocks: # + papermill={"duration": 0.049385, "end_time": "2020-07-01T21:57:20.809746", "exception": false, "start_time": "2020-07-01T21:57:20.760361", "status": "completed"} tags=[] sps.sum().compute(), sps.sum(keepdims=False).compute(), sps.sum(keepdims=True).compute() # + [markdown] papermill={"duration": 0.017039, "end_time": "2020-07-01T21:57:20.841423", "exception": false, "start_time": "2020-07-01T21:57:20.824384", "status": "completed"} tags=[] # scipy.sparse, sans Dask: # + papermill={"duration": 0.032058, "end_time": "2020-07-01T21:57:20.889613", "exception": false, "start_time": "2020-07-01T21:57:20.857555", "status": "completed"} tags=[] spsc.sum(), spsc.sum(keepdims=False), spsc.sum(keepdims=True) # + [markdown] papermill={"duration": 0.015578, "end_time": "2020-07-01T21:57:20.920313", "exception": false, "start_time": "2020-07-01T21:57:20.904735", "status": "completed"} tags=[] # ### `axis=0` # + [markdown] papermill={"duration": 0.016251, "end_time": "2020-07-01T21:57:20.951688", "exception": false, "start_time": "2020-07-01T21:57:20.935437", "status": "completed"} tags=[] # Dask dense blocks: # + papermill={"duration": 0.043464, "end_time": "2020-07-01T21:57:21.011608", "exception": false, "start_time": 
"2020-07-01T21:57:20.968144", "status": "completed"} tags=[] d.sum(axis=0).compute(), d.sum(axis=0, keepdims=False).compute(), d.sum(axis=0, keepdims=True).compute() # + [markdown] papermill={"duration": 0.01525, "end_time": "2020-07-01T21:57:21.042524", "exception": false, "start_time": "2020-07-01T21:57:21.027274", "status": "completed"} tags=[] # Dask scipy.sparse blocks: # + papermill={"duration": 0.054799, "end_time": "2020-07-01T21:57:21.112333", "exception": false, "start_time": "2020-07-01T21:57:21.057534", "status": "completed"} tags=[] sps.sum(axis=0).compute(), sps.sum(axis=0, keepdims=False).compute(), sps.sum(axis=0, keepdims=True).compute() # + [markdown] papermill={"duration": 0.017029, "end_time": "2020-07-01T21:57:21.144274", "exception": false, "start_time": "2020-07-01T21:57:21.127245", "status": "completed"} tags=[] # scipy.sparse, sans Dask: # + papermill={"duration": 0.034129, "end_time": "2020-07-01T21:57:21.193918", "exception": false, "start_time": "2020-07-01T21:57:21.159789", "status": "completed"} tags=[] spsc.sum(axis=0), spsc.sum(axis=0, keepdims=False), spsc.sum(axis=0, keepdims=True) # + [markdown] papermill={"duration": 0.015816, "end_time": "2020-07-01T21:57:21.226067", "exception": false, "start_time": "2020-07-01T21:57:21.210251", "status": "completed"} tags=[] # ### `axis=1` # + [markdown] papermill={"duration": 0.017408, "end_time": "2020-07-01T21:57:21.260154", "exception": false, "start_time": "2020-07-01T21:57:21.242746", "status": "completed"} tags=[] # Dask dense blocks: # + papermill={"duration": 0.048616, "end_time": "2020-07-01T21:57:21.326152", "exception": false, "start_time": "2020-07-01T21:57:21.277536", "status": "completed"} tags=[] d.sum(axis=1).compute(), d.sum(axis=1, keepdims=False).compute(), d.sum(axis=1, keepdims=True).compute() # + [markdown] papermill={"duration": 0.018309, "end_time": "2020-07-01T21:57:21.364412", "exception": false, "start_time": "2020-07-01T21:57:21.346103", "status": "completed"} tags=[] # Dask scipy.sparse blocks: # + papermill={"duration": 0.052337, "end_time": "2020-07-01T21:57:21.433147", "exception": false, "start_time": "2020-07-01T21:57:21.380810", "status": "completed"} tags=[] sps.sum(axis=1).compute(), sps.sum(axis=1, keepdims=False).compute(), sps.sum(axis=1, keepdims=True).compute() # + [markdown] papermill={"duration": 0.016834, "end_time": "2020-07-01T21:57:21.468289", "exception": false, "start_time": "2020-07-01T21:57:21.451455", "status": "completed"} tags=[] # scipy.sparse, sans Dask: # + papermill={"duration": 0.030581, "end_time": "2020-07-01T21:57:21.515189", "exception": false, "start_time": "2020-07-01T21:57:21.484608", "status": "completed"} tags=[] spsc.sum(axis=1), spsc.sum(axis=1, keepdims=False), spsc.sum(axis=1, keepdims=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:sourmash-sbt2knn] # language: python # name: conda-env-sourmash-sbt2knn-py # --- # cd /home/olga/pureScratch/olgabot-maca/facs/ # ls -lha # + # # !ls -lha .sbt.tabula-muris-k21/ | head # + import glob import itertools from itertools import groupby import math import numpy as np import scipy.sparse from umap.umap_ import smooth_knn_dist, compute_membership_strengths from umap.spectral import spectral_layout # %load_ext autoreload # %autoreload 2 import sourmash from sourmash import signature as sig from sourmash.sbt import Leaf from sourmash.sbtmh import SigLeaf, 
create_sbt_index, load_sbt_index from sourmash import sourmash_args from sourmash.logging import notify from sourmash import knn from sourmash import umap # - # ### Settings for this session # + # --- Which signatures to use? --- # ksize = 21 moltype = 'DNA' # --- How to build sequence bloom tree? --- # bf_size = 1e5 n_children = 2 scaled = False # --- How to build K-nearest neighbor graph on similarities? --- # n_neighbors = 5 ignore_abundance = True downsample = False # --- UMAP settings --- n_epochs = 0 # - # %time tree = load_sbt_index("tabula-muris-k21.sbt.json") tree len(tree.nodes) set(type(node) for node in tree.nodes.values()) sum(1 for _ in knn.get_leaves(tree)) sum(1 for node in tree.nodes.values() if isinstance(node, (Leaf, SigLeaf))) # + # sum(1 for node in tree.nodes.values() if hasattr(node, 'data.minhash')) # - # %%time adjacencies = tree.nearest_neighbor_adjacencies(n_neighbors=n_neighbors, ignore_abundance=ignore_abundance, downsample=downsample) print(len(adjacencies)) len(adjacencies) sum(1 for node in tree.nodes.values() if isinstance(node, (Leaf, SigLeaf))) # + # # %%time # adjacencies = knn.nearest_neighbors(tree, n_neighbors=n_neighbors, # ignore_abundance=ignore_abundance, # downsample=downsample) # print(len(adjacencies)) # - len(adjacencies) adjacencies[:10] u_leaves = set(u for (u, v, similarity) in adjacencies) 'P9-MAA001884-3_38_F-1-1_S151' in u_leaves leaves = list(knn.get_leaves(tree)) len(leaves) sum(1 for _ in knn.get_leaves(tree)) len(tree.nodes) tree.nodes[10] set(type(node) for node in tree.nodes.values()) sum(1 for node in tree.nodes.values() if hasattr(node, 'minhash')) leaf_to_position = dict( (node.data.name(), position) for position, node in tree.nodes.items() if isinstance(node, Leaf)) print(len(leaf_to_position)) leaf_to_index = dict((name, i) for i, name in enumerate(leaf_to_position.keys())) len(leaf_to_index) # + knn_indices = [] knn_dists = [] for u, items in groupby(adjacencies, key=lambda x: x[0]): knn_indices_line = [] knn_dists_line = [] for u, v, similarity in items: index = leaf_to_index[v] if not isinstance(index, int): print(f"index {index} is not an integer") knn_indices_line.append(index) # Dissimilarity = 1-similarity dissimilarity = float(1 - similarity) if not isinstance(dissimilarity, float): print(f"dissimilarity {dissimilarity} is not a float") knn_dists_line.append(dissimilarity) # knn_indices_line = np.array(knn_indices_line) # knn_dists_line = np.array(knn_dists_line) knn_indices.append(knn_indices_line) knn_dists.append(knn_dists_line) knn_indices = np.array(knn_indices, dtype='int') knn_dists = np.array(knn_dists) # - [x for x in knn_indices if len(x) != n_neighbors] knn_indices.dtype knn_indices.astype(int) knn_dists.dtype knn_dists[0].dtype knn_indices knn_dists.astype(float) knn_dists[:10] leaf_to_index['P9-MAA001884-3_38_F-1-1_S151'] leaf_to_position['P9-MAA001884-3_38_F-1-1_S151'] knn_indices.dtypes graph = umap.fuzzy_simplicial_set(knn_indices, knn_dists, n_neighbors) # + # knn_indices, knn_dists, leaf_to_index = \ # sourmash_knn.similarity_adjacency_to_knn(adjacencies, tree) # print(f"knn_indices.shape: {knn_indices.shape}") # print(f"knn_dists.shape: {knn_dists.shape}") # print(f"len(leaf_to_index): {len(leaf_to_index)}") # - # %debug index_to_leaf = dict(zip(leaf_to_index.values(), leaf_to_index.keys())) graph = sourmash_knn.make_search_graph(knn_indices, knn_dists) # + graph, n_epochs = sourmash_knn._clean_graph(graph, n_epochs) # - # ## Get connected components and similarity_adjacency_to_knnem # + def 
convert_name_to_tree_position(tree, name): for position, node in tree.nodes.items(): if node.data.name() == name: return position label_to_tree_position = dict((name, convert_name_to_tree_position(name)) for name in leaf_to_index.keys()) len(label_to_tree_position) label_to_tree_position['P9-MAA001884-3_38_F-1-1_S151'] # + n_connected_components, labels = scipy.sparse.csgraph.connected_components(graph) for label in range(n_connected_components): graph_index_of_label = np.where(component_labels == label) tree_position first_sig = tree.get(graph) for i in graph_index_of_label[1:]: # - np.where(component_labels == label) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- Decision Trees for Regression # + #Use within an Jupyter notebook # %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.datasets import load_diabetes diabetes = load_diabetes() X = diabetes.data y = diabetes.target X_feature_names = ['age', 'gender', 'body mass index', 'average blood pressure','bl_0','bl_1','bl_2','bl_3','bl_4','bl_5'] pd.Series(y).hist(bins=50) # + bins = 50*np.arange(8) binned_y = np.digitize(y, bins) pd.Series(binned_y).hist(bins=50) # + from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,stratify=binned_y) # + from sklearn.tree import DecisionTreeRegressor dtr = DecisionTreeRegressor() dtr.fit(X_train, y_train) y_pred = dtr.predict(X_test) from sklearn.metrics import mean_absolute_error mean_absolute_error(y_test, y_pred) # - (np.abs(y_test - y_pred)/(y_test)).mean() pd.Series((y_test - y_pred)).hist(bins=50) pd.Series((y_test - y_pred)/(y_test)).hist(bins=50) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns; sns.set() import numpy as np # ### Motivating GMM: Weakness of k-Means from sklearn.datasets.samples_generator import make_blobs X, y_true = make_blobs(n_samples=400, centers=4, cluster_std=0.6, random_state=0) X = X[:, ::-1] # flipping axes for better plotting # plot the data with K-means labels from sklearn.cluster import KMeans kmeans = KMeans(4, random_state=0) labels = kmeans.fit(X).predict(X) plt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis'); # + from sklearn.cluster import KMeans from scipy.spatial.distance import cdist def plot_kmeans(kmeans, X, n_cluster=4, rseed=0, ax=None): labels = kmeans.fit_predict(X) # plot the input data ax = ax or plt.gca() ax.axis('equal') ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2); # plot the circle for the kmeans model centers = kmeans.cluster_centers_ radii = [cdist(X[labels == i], [center]).max() for i, center in enumerate(centers)] for c, r in zip(centers, radii): ax.add_patch(plt.Circle(c, r, fc='#CCCCCC', lw=3, edgecolor='k', alpha=0.5, zorder=1)) kmeans = KMeans(n_clusters=4, random_state=0) plot_kmeans(kmeans, X) # - rng = np.random.RandomState(13) X_stretched = np.dot(X, rng.randn(2, 2)) kmeans = KMeans(n_clusters=4, random_state=0) plot_kmeans(kmeans, X_stretched) # These two disadvantages of k-means—its lack of flexibility in cluster shape and lack # of probabilistic cluster 
assignment—mean that for many datasets (especially lowdimensional # datasets) it may not perform as well as you might hope. # ### Generalizing the E-M: Gaussian Mixture Models # A Gaussian mixture model (GMM) attempts to find a mixture of multidimensional # Gaussian probability distributions that best model any input dataset. from sklearn.mixture import GaussianMixture gmm = GaussianMixture(n_components=4).fit(X) labels = gmm.predict(X) plt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis') probs = gmm.predict_proba(X) print(probs[:5].round(3)) # visualizing the uncertainty by making it proportional to the size size = 50 * probs.max(1) ** 5 # square to k=empahasize tdifferences plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', s=size); # Under the hood, a Gaussian mixture model is very similar to k-means: it uses an # expectation–maximization approach that qualitatively does the following: # 1. Choose starting guesses for the location and shape # 2. Repeat until converged: # - E-step: for each point, find weights encoding the probability of membership in # each cluster # - M-step: for each cluster, update its location, normalization, and shape based # on all data points, making use of the weights # Let’s create a function that will help us visualize the locations and shapes of the GMM # clusters by drawing ellipses based on the gmm output # + from matplotlib.patches import Ellipse def draw_ellipse(position, covariance, ax=None, **kwargs): """Draw an ellipse with a given psition and covariance""" ax = ax or plt.gca() # Convert covariance to principal axes if covariance.shape == (2,2): U, s, Vt = np.linalg.svd(covariance) angle = np.degrees(np.arctan2(U[1, 0], U[0, 0])) width, height = 2 * np.sqrt(s) else: angle = 0 width, height = 2 * np.sqrt(covariance) # Draw the ellipse for nsig in range(1, 4): ax.add_patch(Ellipse(position, nsig * width, nsig * height, angle, **kwargs)) def plot_gmm(gmm, X, label=True, ax=None): ax = ax or plt.gca() labels = gmm.fit(X).predict(X) if label: ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2) else: ax.scatter(X[:, 0], X[:, 1], s=40, zorder=2) ax.axis('equal') w_factor = 0.2 / gmm.weights_.max() for pos, covar, w in zip(gmm.means_, gmm.covariances_, gmm.weights_): draw_ellipse(pos, covar, alpha=w * w_factor) # - gmm = GaussianMixture(n_components=4, random_state=42) plot_gmm(gmm, X) # using GMM for the stretched data gmm = GaussianMixture(n_components=4, covariance_type='full', random_state=42) plot_gmm(gmm, X_stretched) # The hyperparameter controls the degrees of freedom in the shape of each cluster is the covariance type # # ### GMM as Density Estimation # Though GMM is often categorized as a clustering algorithm, fundamentally it is an # algorithm for density estimation. That is to say, the result of a GMM fit to some data is # technically not a clustering model, but a generative probabilistic model describing the # distribution of the data. from sklearn.datasets import make_moons Xmoon, ymoon = make_moons(200, noise=0.05, random_state=0) plt.scatter(Xmoon[:, 0], Xmoon[:, 1]); #If we try to fit this to a two-component GMM viewed as a clustering model, gmm2 = GaussianMixture(n_components=2, covariance_type='full', random_state=0) plot_gmm(gmm2, Xmoon) gmm16 = GaussianMixture(n_components=16, covariance_type='full', random_state=1) plot_gmm(gmm16, Xmoon, label=False) # Here the mixture of 16 Gaussians serves not to find separated clusters of data, but # rather to model the overall distribution of the input data. 
This is a generative model of # the distribution, meaning that the GMM gives us the recipe to generate new random # data distributed similarly to our input. For example, here are 400 new points drawn # from this 16-component GMM fit to our original datam Xnew, y = gmm16.sample(400) plt.scatter(Xnew[:, 0], Xnew[:, 1], c=y, cmap='viridis'); # #### How many components # A generative model is inherently # a probability distribution for the dataset, and so we can simply evaluate the likelihood # of the data under the model, using cross-validation to avoid overfitting. Another # means of correcting for overfitting is to adjust the model likelihoods using some analytic # criterion such as the Akaike information criterion (AIC) or the Bayesian information # criterion (BIC). Scikit-Learn’s GMM estimator actually includes built-in # methods that compute both of these, and so it is very easy to operate on this # approach. # + n_components = np.arange(1, 21) models = [GaussianMixture(n, covariance_type='full', random_state=0).fit(Xmoon) for n in n_components] plt.plot(n_components, [m.bic(Xmoon) for m in models], label='BIC') plt.plot(n_components, [m.aic(Xmoon) for m in models], label='AIC') plt.legend(loc='best') plt.xlabel('n_components') # - # The optimal number of clusters is the value that minimizes the AIC or BIC, depending # on which approximation we wish to use. # Notice the important point: this choice of number of components measures how well # GMM works as a density estimator, not how well it works as a clustering algorithm. I’d # encourage you to think of GMM primarily as a density estimator, and use it for clustering # only when warranted within simple datasets # ### Example: GMM for Generating New Data # Here we will run # with this idea and generate new handwritten digits from the standard digits corpus # that we have used before from sklearn.datasets import load_digits digits = load_digits() digits.data.shape def plot_digits(data): fig, ax = plt.subplots(10, 10, figsize=(8, 8), subplot_kw=dict(xticks=[], yticks=[])) fig.subplots_adjust(hspace=0.05, wspace=0.05) for i, axi in enumerate(ax.flat): im = axi.imshow(data[i].reshape(8, 8), cmap='binary') im.set_clim(0, 16) plot_digits(digits.data) # GMMs can have difficulty converging in such a high dimensional # space, so we will start with an invertible dimensionality reduction algorithm on # the data. 
Here we will use a straightforward PCA, asking it to preserve 99% of the # variance in the projected data from sklearn.decomposition import PCA pca = PCA(0.99, whiten=True) data = pca.fit_transform(digits.data) data.shape n_components = np.arange(50, 210, 10) models = [GaussianMixture(n, covariance_type='full', random_state=0) for n in n_components] aics = [model.fit(data).aic(data) for model in models] plt.plot(n_components, aics); gmm = GaussianMixture(140, covariance_type='full', random_state=0) gmm.fit(data) print(gmm.converged_) # Now we can draw samples of 100 new points within this 41-dimensional projected # space, using the GMM as a generative model: data_newX, _ = gmm.sample(100) data_newX.shape digits_new = pca.inverse_transform(data_newX) plot_digits(digits_new) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Permutation Importance and Dealing with Correlated Features # %load_ext watermark # %watermark -a "" -u -d -p numpy,pandas,matplotlib,sklearn,mlxtend # %matplotlib inline import matplotlib.pyplot as plt # + import numpy as np np.random.seed(123) y = np.zeros(1000) y[:500] = 1 x1 = np.random.randn(1000) x2 = np.empty(1000) x2[:500] = np.random.randn(500) x2[500:] = np.random.randn(500)+4 x3 = x2 X = np.vstack((x3, x2, x1)).swapaxes(1, 0) X.shape # + from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test =\ train_test_split(X, y, test_size=0.3, random_state=123, stratify=y) # + from sklearn.ensemble import RandomForestClassifier forest = RandomForestClassifier(n_estimators=10, random_state=123, max_features=2) forest.fit(X_train, y_train) print('Training accuracy:', np.mean(forest.predict(X_train) == y_train)*100) print('Test accuracy:', np.mean(forest.predict(X_test) == y_test)*100) # - corr = np.corrcoef(X_train.T) corr.shape # + from mlxtend.plotting import heatmap heatmap(corr, cmap='binary') plt.savefig('5.pdf') plt.show() # + from mlxtend.evaluate import feature_importance_permutation imp_vals, imp_all = feature_importance_permutation( predict_method=forest.predict, X=X_test, y=y_test, metric='accuracy', num_rounds=50, seed=123) # + std = np.std(imp_all, axis=1) indices = np.argsort(imp_vals)[::-1] plt.figure() plt.title("Random Forest feature importance via permutation importance") plt.bar(range(X_train.shape[1]), imp_vals[indices], yerr=std[indices]) plt.xticks(range(X_train.shape[1]), indices) plt.xlim([-1, X_train.shape[1]]) plt.ylim([0, 0.6]) plt.tight_layout() plt.savefig('6.pdf') plt.show() # + X_train = X_train[:, 1:] X_test = X_test[:, 1:] forest = RandomForestClassifier(n_estimators=10, random_state=123, max_features=2) forest.fit(X_train, y_train) imp_vals, imp_all = feature_importance_permutation( predict_method=forest.predict, X=X_test, y=y_test, metric='accuracy', num_rounds=50, seed=123) std = np.std(imp_all, axis=1) indices = np.argsort(imp_vals)[::-1] plt.figure() plt.title("Random Forest feature importance via permutation importance") plt.bar(range(X_train.shape[1]), imp_vals[indices], yerr=std[indices]) plt.xticks(range(X_train.shape[1]), indices) plt.xlim([-1, X_train.shape[1]]) plt.ylim([0, 0.6]) plt.tight_layout() plt.savefig('7.pdf') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # 
language: python # name: python3 # --- # + [markdown] id="koVcufVBD9uv" # Please remember to run **pip3 install textattack[tensorflow]** in your notebook enviroment before the following codes: # # # Multi-language attacks # # TextAttack's four-component framework makes it trivial to run attacks in other languages. In this tutorial, we: # # - Create a model wrapper around Transformers [pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) # - Initialize a pre-trained [CamemBERT](https://camembert-model.fr/) model for sentiment classification # - Load the AlloCiné movie review sentiment classification dataset (from [`datasets`](https://github.com/huggingface/datasets/)) # - Load the `pwws` recipe, but use French synonyms from multilingual WordNet (instead of English synonyms) # - Run an adversarial attack on a French language model # # Voilà! # + [markdown] id="Abd2C3zJD9u4" # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/QData/TextAttack/blob/master/docs/2notebook/Example_4_CamemBERT.ipynb) # # [![View Source on GitHub](https://img.shields.io/badge/github-view%20source-black.svg)](https://github.com/QData/TextAttack/blob/master/docs/2notebook/Example_4_CamemBERT.ipynb) # + id="-fnSUl8ND9u5" from textattack.attack_recipes import PWWSRen2019 from textattack.datasets import HuggingFaceDataset from textattack.models.wrappers import ModelWrapper from transformers import AutoTokenizer, TFAutoModelForSequenceClassification, pipeline from textattack import Attacker import numpy as np # Quiet TensorFlow. import os if "TF_CPP_MIN_LOG_LEVEL" not in os.environ: os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3" class HuggingFaceSentimentAnalysisPipelineWrapper(ModelWrapper): """ Transformers sentiment analysis pipeline returns a list of responses like [{'label': 'POSITIVE', 'score': 0.7817379832267761}] We need to convert that to a format TextAttack understands, like [[0.218262017, 0.7817379832267761] """ def __init__(self, model): self.model = model#pipeline = pipeline def __call__(self, text_inputs): raw_outputs = self.model(text_inputs) outputs = [] for output in raw_outputs: score = output['score'] if output['label'] == 'POSITIVE': outputs.append([1-score, score]) else: outputs.append([score, 1-score]) return np.array(outputs) # + colab={"base_uri": "https://localhost:8080/"} id="i2WPtwO9D9u6" outputId="2f5e8fab-1047-417d-c90c-b9238b2886a4" tags=[] # Create the model: a French sentiment analysis model. # see https://github.com/TheophileBlard/french-sentiment-analysis-with-bert model = TFAutoModelForSequenceClassification.from_pretrained("tblard/tf-allocine") tokenizer = AutoTokenizer.from_pretrained("tblard/tf-allocine") pipeline = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer) model_wrapper = HuggingFaceSentimentAnalysisPipelineWrapper(pipeline) # Create the recipe: PWWS uses a WordNet transformation. recipe = PWWSRen2019.build(model_wrapper) # # WordNet defaults to english. Set the default language to French ('fra') # # See "Building a free French wordnet from multilingual resources", # . (ELRA) (ed.), # Proceedings of the Sixth International Language Resources and Evaluation (LREC’08). 
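# Added sanity-check sketch (not part of the original notebook): the wrapper defined above
# should return one [P(negative), P(positive)] row per input string, which is the score
# matrix format TextAttack expects. The French sentence is only an illustrative example.
example_scores = model_wrapper(["Ce film est absolument magnifique."])
print(example_scores.shape)  # expected shape: (1, 2)

# Point the PWWS WordNet transformation at the French synsets (next line):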
recipe.transformation.language = 'fra' dataset = HuggingFaceDataset('allocine', split='test') attacker = Attacker(recipe, dataset) attacker.attack_dataset() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import rpyc import time from IPython.display import display, clear_output ROBOT_HOSTNAME = "ev3dev.local" # # Preparation: Start rpyc on the ev3 # # For the remote procedure calls to work, a rpcy server has to be started on the ev3. This can be done by executing the `rpyc_server.sh` file. The commands below perform this action # ``` # ssh robot@ev3dev.local # maker # ./rpyc_server.sh # ``` # # Establish and test connection conn = rpyc.classic.connect(ROBOT_HOSTNAME) conn.execute("print('Hello Slave. I am your master!')") # # Robot setup: # # 1. Plug a large motor into output A # 2. Plug a medium motor into output B # # Test motors # This rotates the motors motors = conn.modules['ev3dev2.motor'] m1 = motors.LargeMotor('outA') m1.stop_action = 'brake' m2 = motors.MediumMotor('outB') m2.stop_action = 'brake' m1.run_timed(time_sp=1000, speed_sp=600) m2.run_timed(time_sp=1000, speed_sp=600) m1.wait_until_not_moving() m2.wait_until_not_moving() # + # change speed dynamicly speeds = [10*i for i in range(0,11)] + [10*i for i in range(9,-11, -1)] + [10*i for i in range(-9, 1)] m2.run_direct(duty_cycle_sp=0) for speed in speeds: m2.duty_cycle_sp=speed time.sleep(0.1) m2.stop() # - # # Measure angles # This measurs the angle of the modium motor for 30s. Try turning it! # + # reset motor position m2.position = 0 # mesure position in degrees start = time.time() while time.time() - start < 30: clear_output(wait=True) display('Angle: ' + str(m2.position)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Deep Neural Network for Beer Crown Defects Classification # # This model is generated based on CS230 course program assignment. The goal is to build a tensorflow model that can classify beer crown based on their differences: color, text, date code, and defects such as crimp flare, and scratches. It considers each pixel as individual feature, which works well with manipulated images from industrial production line, i.e. the images are standardized, similar to each other, with fixed ROI, consistent brightness, color and contrast. # # The model is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. # The SIGMOID output layer has been converted to a SOFTMAX for more than two classes. # # # The training is based on the following loss function: # $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$ # # Tensorflow function `tf.nn.sigmoid_cross_entropy_with_logits` is called to calculate the following cross entropy loss: # $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$ # # The procedure: # 1. Create placeholders # 2. Specify the computation graph corresponding to operations you want to compute # 3. Create the session # 4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. # # The model training includes the following items: # 1. Hyper parameter optimization; # 2. Number of layers test; # 3. 
Training time and Run time test; # 4. Test the minimum amount of images needed for a good model; # 5. Test the minimum amount of sample bottles needed for a good model; # 6. Test the minimum resolution of image needed for a good training. # # + colab={} colab_type="code" id="rhZ0RUw8T111" import math import numpy as np import h5py import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.python.framework import ops #from tf_utils1 import load_dataset, random_mini_batches, convert_to_one_hot, predict import os.path # %matplotlib inline np.random.seed(1) # + [markdown] colab_type="text" id="LW8S6sVzT12h" # # # 1. Load Datasets # # - **Training set**: 2400 pictures (240 by 240 pixels) of crowns representing numbers from 0 to 5 (400 pictures per each class). # - **Test set**: 600 pictures (240 by 240 pixels) of crowns representing numbers from 0 to 5 (100 pictures per each class). # # Below are examples for each class, and how to represent them. #
# **Figure 1**: CROWNS dataset
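# For reference, a minimal sketch of the label-to-class mapping (taken from the `className()`
# helper defined near the end of this notebook; the `CROWN_CLASS_NAMES` name itself is an
# editor addition and is not used elsewhere):

CROWN_CLASS_NAMES = {0: "Good crown",
                     1: "Crown with color difference",
                     2: "Crown with date code",
                     3: "Crown with crimp flare",
                     4: "Crown with different text",
                     5: "Crown with scratch"}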
# # # Run the following code to load the dataset. # + #import h5py #import numpy as np #import tensorflow as tf #import math # DIR = datasets #training 2400 + test 600, 240x240x3 # DIR = datasets_mini #training 120 + test 60, 240x240x3 # DIR = 'datasets_120' #training 2400 + test 600, 120x120x3 # DIR = datasetsAug #training 2400 + test 600, 120x120x3 def load_dataset(): #train_dataset = h5py.File(os.path.join(DIR, '/train_crown.h5'), "r") train_dataset = h5py.File('datasets_Aug/train_crown.h5', "r") train_set_x_orig = np.array(train_dataset["X"][:]) # your train set features train_set_y_orig = np.array(train_dataset["y"][:]) # your train set labels #test_dataset = h5py.File(os.path.join(DIR, '/test_crown.h5'), "r") test_dataset = h5py.File('datasets_Aug/test_crown.h5', "r") test_set_x_orig = np.array(test_dataset["X"][:]) # your test set features test_set_y_orig = np.array(test_dataset["y"][:]) # your test set labels # classes = np.array(test_dataset["list_classes"][:]) # the list of classes classes = np.array([0,1,2,3,4,5]) train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0])) test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0])) return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0): """ Creates a list of random minibatches from (X, Y) Arguments: X -- input data, of shape (input size, number of examples) Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples) mini_batch_size - size of the mini-batches, integer seed -- this is only for the purpose of grading, so that you're "random minibatches are the same as ours. Returns: mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y) """ m = X.shape[1] # number of training examples mini_batches = [] np.random.seed(seed) # Step 1: Shuffle (X, Y) permutation = list(np.random.permutation(m)) shuffled_X = X[:, permutation] shuffled_Y = Y[:, permutation].reshape((Y.shape[0],m)) # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case. 
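    # Editor's note: with m examples and batch size b there are floor(m / b) full minibatches
    # plus one smaller remainder batch whenever m % b != 0; e.g. m = 2400 and b = 64 give
    # 37 full batches of 64 plus a final batch of 32.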
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning for k in range(0, num_complete_minibatches): mini_batch_X = shuffled_X[:, k * mini_batch_size : k * mini_batch_size + mini_batch_size] mini_batch_Y = shuffled_Y[:, k * mini_batch_size : k * mini_batch_size + mini_batch_size] mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) # Handling the end case (last mini-batch < mini_batch_size) if m % mini_batch_size != 0: mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m] mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m] mini_batch = (mini_batch_X, mini_batch_Y) mini_batches.append(mini_batch) return mini_batches def convert_to_one_hot(Y, C): Y = np.eye(C)[Y.reshape(-1)].T return Y def predict(X, parameters): W1 = tf.convert_to_tensor(parameters["W1"]) b1 = tf.convert_to_tensor(parameters["b1"]) W2 = tf.convert_to_tensor(parameters["W2"]) b2 = tf.convert_to_tensor(parameters["b2"]) W3 = tf.convert_to_tensor(parameters["W3"]) b3 = tf.convert_to_tensor(parameters["b3"]) params = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3} #x = tf.placeholder("float", [172800, 1]) x = tf.placeholder("float", [43200, 1]) z3 = forward_propagation_for_predict(x, params) p = tf.argmax(z3) sess = tf.Session() prediction = sess.run(p, feed_dict = {x: X}) return prediction def forward_propagation_for_predict(X, parameters): """ Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] W3 = parameters['W3'] b3 = parameters['b3'] # Numpy Equivalents: Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1 A1 = tf.nn.relu(Z1) # A1 = relu(Z1) Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2 A2 = tf.nn.relu(Z2) # A2 = relu(Z2) Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3,Z2) + b3 return Z3 # + colab={} colab_type="code" id="wCgjv84yT12i" # Loading the dataset X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() # + [markdown] colab_type="text" id="JYimgnMbT12k" # Change the index below and run the cell to visualize some examples in the dataset. # + colab={} colab_type="code" id="wG0QwVtJT12k" # TO chck if image feature is created and loaded properly, display one image based on its index # If there is color difference, then most likely there is mistake in Create_datasets. index = 8 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) # + [markdown] colab_type="text" id="2WP4-S2CT12m" # Flatten the image dataset, then normalize it by dividing by 255. Convert each label to a one-hot vector. # + colab={} colab_type="code" id="tn3gF5xLT12m" # Flatten the training and test images X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T # Normalize image vectors X_train = X_train_flatten/255. X_test = X_test_flatten/255. 
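# Editor's note (assuming the 120x120x3 augmented set loaded above): flattening turns
# X_train_orig of shape (2400, 120, 120, 3) into X_train of shape (43200, 2400), one column
# per example, which matches the (n_x, m) layout expected by the placeholders created below.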
# Convert training and test labels to one hot matrices Y_train = convert_to_one_hot(Y_train_orig, 6) Y_test = convert_to_one_hot(Y_test_orig, 6) #print ("number of training examples = " + str(X_train.shape[1])) print ("number of test examples = " + str(X_test.shape[1])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) # + [markdown] colab_type="text" id="JSNd_DRWT12p" # # 2. Create Placeholders # # Create placeholders for `X` and `Y`. This will allow to later pass the training data in when run the session. # + colab={} colab_type="code" id="fcAcBRAAT12q" # Create placeholders in Tensorflow def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, size of an image vector (num_px * num_px = 240 * 240 * 3 = 172800 or 120 * 120 * 3 = 43200) n_y -- scalar, number of classes (from 0 to 5, so -> 6) Returns: X -- placeholder for the data input, of shape [n_x, None] and dtype "tf.float32" Y -- placeholder for the input labels, of shape [n_y, None] and dtype "tf.float32" Tips: - You will use None because it let's us be flexible on the number of examples you will for the placeholders. In fact, the number of examples during test/train is different. """ X = tf.placeholder(tf.float32, shape=[n_x, None], name="X") Y = tf.placeholder(tf.float32, shape=[n_y, None], name="Y") return X, Y # + colab={} colab_type="code" id="Ve9WOa1LT12r" #X, Y = create_placeholders(172800, 6) X, Y = create_placeholders(43200, 6) #print ("X = " + str(X)) #print ("Y = " + str(Y)) # + [markdown] colab_type="text" id="eyYz9y1XT12u" # # 3. Initializing the parameters # # Use Xavier Initialization for weights and Zero Initialization for biases. Xavier initialization, Kaiming initialization. # + colab={} colab_type="code" id="gPi-SeuWT12u" INITIALIZER = "Xavier" #INITIALIZER = "He" def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. 
The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] W3 : [6, 12] b3 : [6, 1] Returns: parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3 """ tf.set_random_seed(1) # Xavier initialization if INITIALIZER == "Xavier": #W1 = tf.get_variable("W1", [25,172800], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) W1 = tf.get_variable("W1", [25,43200], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer()) W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer()) W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer()) # Kaiming initialization initializer=tf.contrib.layers.variance_scaling_initializer()) if INITIALIZER == "He": #W1 = tf.get_variable("W1", [25,172800], initializer = tf.contrib.layers.variance_scaling_initializer(seed = 1)) W1 = tf.get_variable("W1", [25,43200], initializer = tf.contrib.layers.variance_scaling_initializer(seed = 1)) b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer()) W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.variance_scaling_initializer(seed = 1)) b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer()) W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.variance_scaling_initializer(seed = 1)) b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer()) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3} return parameters # + colab={} colab_type="code" id="CcuKNYinT12x" tf.reset_default_graph() with tf.Session() as sess: parameters = initialize_parameters() #print("W1 = " + str(parameters["W1"])) #print("b1 = " + str(parameters["b1"])) #print("W2 = " + str(parameters["W2"])) #print("b2 = " + str(parameters["b2"])) # + [markdown] colab_type="text" id="cnuAGFn2T120" # # 4. Forward propagation in tensorflow # # Implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. # # **Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`! 
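# In other words, the softmax activation $a^{[3]} = \mathrm{softmax}(z^{[3]})$ is applied inside
# `tf.nn.softmax_cross_entropy_with_logits`, so the graph only needs to expose `Z3` (the logits).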
# # # + colab={} colab_type="code" id="nC7CYNk0T120" def forward_propagation(X, parameters): """ Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] W3 = parameters['W3'] b3 = parameters['b3'] Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1 A1 = tf.nn.relu(Z1) # A1 = relu(Z1) Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, A1) + b2 A2 = tf.nn.relu(Z2) # A2 = relu(Z2) Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3, A2) + b3 return Z3 # + colab={} colab_type="code" id="hioQQqyxT122" tf.reset_default_graph() with tf.Session() as sess: #X, Y = create_placeholders(172800, 6) X, Y = create_placeholders(43200, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) #print("Z3 = " + str(Z3)) # + [markdown] colab_type="text" id="FDjgAHp6T125" # The forward propagation doesn't output any cache. You will understand why below, when we get to brackpropagation. # + [markdown] colab_type="text" id="RXqHnAEnT125" # # 5. Compute cost # # Simply call reduce mean apply to cross entroy loss fucntion. # ```python # tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...)) # ``` # + colab={} colab_type="code" id="1_bzQXSJT125" def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...) logits = tf.transpose(Z3) labels = tf.transpose(Y) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits =logits, labels =labels)) return cost # + colab={} colab_type="code" id="4HahBCJVT127" tf.reset_default_graph() with tf.Session() as sess: #X, Y = create_placeholders(172800, 6) X, Y = create_placeholders(43200, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) #print("cost = " + str(cost)) # + [markdown] colab_type="text" id="9O9sNnHQT12-" # # 6. Backward propagation & parameter updates # # This is where you become grateful to programming frameworks. All the backpropagation and the parameters update is taken care of in 1 line of code. It is very easy to incorporate this line in the model. # # Create an "`optimizer`" object. Call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate. # # This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs. # # + [markdown] colab_type="text" id="SKxhuoN2T12_" # # 7. Building the model # # Use Adam optimizer. Tested with Gradient Descent Optimizer. # User learning_rate, and minibatch_size and num_epochs to control performance and total training time. 
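# As a minimal sketch (mirroring the training cell that follows), the optimizer pattern boils
# down to two lines:
# ```python
# optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
# _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
# ```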
# + colab={} colab_type="code" id="siFLpYfkT12_" # global variables LEARNING_RATE = 0.0001 # 0.0001, 0.001, 0.0005, 0.0002 NUM_EPOCHS = 40 # 100, 200, 80, 40, 20 MINIBATCH_SIZE = 32 # 64 #16 #8 def model(X_train, Y_train, X_test, Y_test, learning_rate = LEARNING_RATE, num_epochs = NUM_EPOCHS, minibatch_size = MINIBATCH_SIZE, print_cost = True, print_accuracy = True): """ Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. Arguments: X_train -- training set, of shape (input size = 12288, number of training examples = 1080) Y_train -- test set, of shape (output size = 6, number of training examples = 1080) X_test -- training set, of shape (input size = 12288, number of training examples = 120) Y_test -- test set, of shape (output size = 6, number of test examples = 120) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: parameters -- parameters learnt by the model. They can then be used to predict. """ ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep consistent results seed = 3 # to keep consistent results (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set) n_y = Y_train.shape[0] # n_y : output size costs = [] # To keep track of the cost # Create Placeholders of shape (n_x, n_y) X, Y = create_placeholders(n_x, n_y) # Initialize parameters parameters = initialize_parameters() # Forward propagation: Build the forward propagation in the tensorflow graph Z3 = forward_propagation(X, parameters) # Cost function: Add cost function to tensorflow graph cost = compute_cost(Z3, Y) # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer. #optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost) optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost) # Initialize all the variables init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): epoch_cost = 0. # Defines a cost related to an epoch num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y). 
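                # Editor's note: `epoch_cost` below accumulates `minibatch_cost / minibatch_size`;
                # dividing by `num_minibatches` instead would give the true per-epoch average cost.
                # Since the divisor is constant, only the scale of the plotted curve is affected.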
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) epoch_cost += minibatch_cost / minibatch_size # Print the cost every epoch %100 % 5 if print_cost == True and epoch % 2 == 0: print ("Cost after epoch %i: %f" % (epoch, epoch_cost)) if print_cost == True and epoch % 2 == 0: costs.append(epoch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per twos)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # lets save the parameters in a variable parameters = sess.run(parameters) print ("Parameters have been trained!") # Calculate the correct predictions correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train})) print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test})) return parameters # - # # # Run the model and return parameters trained. This parameters will be saved for the future prediction actions. # + colab={} colab_type="code" id="AISfljZVT13B" import time tic = time.perf_counter() parameters = model(X_train, Y_train, X_test, Y_test) toc = time.perf_counter() print("Run time = " + str(toc-tic) +"s") # + [markdown] colab_type="text" id="cka8pF8BT13E" # # 8. Evaluate with images # # The images shall be different from those images in the train set and test set. # - For crowns with different color or text, the evaluation images are the same except orientation. # - For crimp flare and scratch defects, there are size, shape, position, and orientation difference. # - Create adversarial images for test. # # - def className(i): if pridectClassNumber == 0: predictClassName = " Good crown"; if pridectClassNumber == 1: predictClassName = " Crown with color difference"; if pridectClassNumber == 2: predictClassName = " Crown with date code"; if pridectClassNumber == 3: predictClassName = " Crown with crimp flare"; if pridectClassNumber == 4: predictClassName = " Crown with different text"; if pridectClassNumber == 5: predictClassName = " Crown with scratch"; return predictClassName # + colab={} colab_type="code" id="EJ8Aft1CT13F" import scipy import scipy.misc from PIL import Image from scipy import ndimage import time import warnings warnings.filterwarnings('ignore') # Evaluate with images #my_image = "good.jpg" # classID=0 #my_image = "color.jpg" # classID=1 #my_image = "date.jpg" # classID=2 #my_image = "flare.jpg" # classID=3 #my_image = "text.jpg" # classID=4 #my_image = "scratch.jpg" # classID=5 #my_image = "0.bmp" # classID=3 my_image = "40.bmp" # classID=5 fname = "images/" + my_image tic = time.perf_counter() image = np.array(plt.imread(fname, format=None)) # normalization image = image/255. my_image = scipy.misc.imresize(image, size=(120,120)).reshape((1, 120*120*3)).T my_image_prediction = predict(my_image, parameters) toc = time.perf_counter() print("Predict Run Time: " + str(round((toc-tic)*1000.)) +"ms") pridectClassNumber = np.squeeze(my_image_prediction) predictClassName = className(pridectClassNumber) plt.imshow(image) print("Predict Results : y = " + str(np.squeeze(my_image_prediction)) + predictClassName) # - # The End. 
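# A closing portability note (editor's sketch): `scipy.misc.imresize` was removed in SciPy 1.3,
# so the prediction cell above may fail on newer environments. An equivalent resize with Pillow
# (already imported above) is shown below; `my_image_alt` is a hypothetical name, and the
# commented-out line shows how it would feed into `predict`.

resized = np.asarray(Image.fromarray((image * 255).astype(np.uint8)).resize((120, 120)))
my_image_alt = (resized / 255.).reshape((1, 120 * 120 * 3)).T   # shape (43200, 1), as expected by predict()
# my_image_prediction = predict(my_image_alt, parameters)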
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Import required packages and functions and set the session seed import numpy as np np.random.seed(1234) from tensorflow import set_random_seed set_random_seed(1234) import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt import keras from keras.models import Sequential from keras.layers import Dense, Activation, Flatten, Conv2D, MaxPooling2D from keras.layers import Dropout, SpatialDropout2D from keras.applications import VGG19 from keras.applications.vgg19 import preprocess_input from keras.models import Model from keras.datasets import fashion_mnist from keras.utils import to_categorical from keras import models from keras import layers from keras import optimizers # Load the Fashion MNIST data from Keras (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() # Normalize the image data by dividing through the maximum pixel value (=255) train_images = train_images / train_images.max() test_images = test_images / test_images.max() # Build a simple three-layer (1 hidden layer) model # The input size is 28 x 28 pixels and is flattened to a vector of length 784 # The activation function is RELU (rectified linear unit) and performs the # multiplication of input and weights (plus bias) # The output (softmax) layer returns probabilities for all ten classes three_layer_model = Sequential() three_layer_model.add(Flatten(input_shape = (28, 28))) three_layer_model.add(Dense(128, activation = 'relu')) three_layer_model.add(Dense(10, activation = 'softmax')) # Compile the model with accuracy metric and adam optimizer # Sparse categorical cross-entropy is the loss function for integer labels # Fit the model using 70 percent of the data and 10 epochs three_layer_model.compile(loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) three_layer_model.fit(train_images, train_labels, epochs = 10, validation_split = 0.3, verbose = 2) # Compute and print the test loss and accuracy test_loss, test_acc = three_layer_model.evaluate(test_images, test_labels) print("Model with three layers and ten epochs -- Test loss:", test_loss * 100) print("Model with three layers and ten epochs -- Test accuracy:", test_acc * 100) # Similarly as before, build a five-layer (3 hidden layers) model five_layer_model = Sequential() five_layer_model.add(Flatten(input_shape = (28, 28))) five_layer_model.add(Dense(128, activation = 'relu')) five_layer_model.add(Dense(128, activation = 'relu')) five_layer_model.add(Dense(128, activation = 'relu')) five_layer_model.add(Dense(10, activation = 'softmax')) # Compile the model with accuracy metric and adam optimizer # Fit the model using 70 percent of the data and 10 epochs five_layer_model.compile(loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) five_layer_model.fit(train_images, train_labels, epochs = 10, validation_split = 0.3, verbose = 2) # Compute and print the test loss and accuracy test_loss, test_acc = five_layer_model.evaluate(test_images, test_labels) print("Model with five layers and ten epochs -- Test loss:", test_loss * 100) print("Model with five layers and ten epochs -- Test accuracy:", test_acc * 100) # Similarly as before, build a ten-layer (8 hidden layers) model ten_layer_model = Sequential() 
ten_layer_model.add(Flatten(input_shape = (28, 28))) ten_layer_model.add(Dense(128, activation = 'relu')) ten_layer_model.add(Dense(128, activation = 'relu')) ten_layer_model.add(Dense(128, activation = 'relu')) ten_layer_model.add(Dense(128, activation = 'relu')) ten_layer_model.add(Dense(128, activation = 'relu')) ten_layer_model.add(Dense(128, activation = 'relu')) ten_layer_model.add(Dense(128, activation = 'relu')) ten_layer_model.add(Dense(128, activation = 'relu')) ten_layer_model.add(Dense(10, activation = 'softmax')) # Compile the model with accuracy metric and adam optimizer # Fit the model using 70 percent of the data and 10 epochs ten_layer_model.compile(loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) ten_layer_model.fit(train_images, train_labels, epochs = 10, validation_split = 0.3, verbose = 2) # Compute and print the test loss and accuracy test_loss, test_acc = ten_layer_model.evaluate(test_images, test_labels) print("Model with ten layers and ten epochs -- Test loss:", test_loss * 100) print("Model with ten layers and ten epochs -- Test accuracy:", test_acc * 100) # Compile the model with accuracy metric and adam optimizer # Fit the model using 70 percent of the data and 50 epochs three_layer_model_50_epochs = three_layer_model.fit(train_images, train_labels, epochs = 50, validation_split = 0.3, verbose = 2) # Compute and print the test loss and accuracy test_loss, test_acc = three_layer_model.evaluate(test_images, test_labels) print("Model with three layers and fifty epochs -- Test loss:", test_loss * 100) print("Model with three layers and fifty epochs -- Test accuracy:", test_acc * 100) # + # Plot loss as function of epochs plt.subplot(1, 2, 1) plt.plot(three_layer_model_50_epochs.history['val_loss'], 'blue') plt.plot(three_layer_model_50_epochs.history['loss'], 'red') plt.legend(['Cross-validation', 'Training'], loc = 'upper left') plt.ylabel('Loss') plt.xlabel('Epoch') # Plot accuracy as function of epochs plt.subplot(1, 2, 2) plt.plot(three_layer_model_50_epochs.history['val_acc'], 'blue') plt.plot(three_layer_model_50_epochs.history['acc'], 'red') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.subplots_adjust(wspace = .35) # Include plot title and show the plot plt.suptitle('Model loss and accuracy over epochs for a three-layer neural network') plt.show() # - # Calculate and print predictions versus actual labels predictions = three_layer_model.predict(test_images) for i in range(10): print("Prediction " + str(i) + ": " + str(np.argmax(np.round(predictions[i])))) print("Actual " + str(i) + ": " + str(test_labels[i])) # Reload the data for a convolutional neural network (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() # Reshape the data to the correct format (the last 1 stands for greyscale) train_images = train_images.reshape(60000, 28, 28, 1) test_images = test_images.reshape(10000, 28, 28, 1) # Convert the image data to numeric data and normalize them train_images = train_images.astype('float32') test_images = test_images.astype('float32') train_images = train_images / train_images.max() test_images = test_images / test_images.max() # One-hot encode the label data # Convert every number to a vector of the length of the number of categories # The vector has zero everywhere except a one on the position of the number it # represents. 
Example: 3 = [0 0 0 1 0 0 0 0 0 0] train_labels_bin = to_categorical(train_labels) test_labels_bin = to_categorical(test_labels) # Build a convolutional neural network with two convolutional layers conv_model = Sequential() conv_model.add(Conv2D(128, (3, 3), input_shape = (28, 28, 1))) conv_model.add(Activation('relu')) conv_model.add(MaxPooling2D(pool_size = (2, 2))) conv_model.add(Conv2D(128, (3, 3))) conv_model.add(Activation('relu')) conv_model.add(MaxPooling2D(pool_size = (2, 2))) conv_model.add(Flatten()) conv_model.add(Dense(128)) conv_model.add(Dense(10)) conv_model.add(Activation('softmax')) # Compile and fit the model with adam optimizer and accuracy metric # Categorical cross-entropy is the loss function for one-hot encoded labels and # batch size equal to the number of neurons in the convolutional layers and 10 epochs conv_model.compile(loss = "categorical_crossentropy", optimizer = 'adam', metrics = ['accuracy']) conv_model.fit(train_images, train_labels_bin, batch_size = 128, epochs = 10, verbose = 2) # Compute and print the test loss and accuracy test_loss, test_acc = conv_model.evaluate(test_images, test_labels_bin) print("Convolutional model ten epochs -- Test loss:", test_loss * 100) print("Convolutional model ten epochs -- Test accuracy:", test_acc * 100) # Build a convolutional neural network with two convolutional layers # Decrease number of neurons and add dropout to reduce overfitting conv_model_reduce_overfit = Sequential() conv_model_reduce_overfit.add(Conv2D(64, (3, 3), input_shape = (28, 28, 1))) conv_model_reduce_overfit.add(Activation('relu')) conv_model_reduce_overfit.add(MaxPooling2D(pool_size = (2, 2))) conv_model_reduce_overfit.add(Dropout(0.5)) conv_model_reduce_overfit.add(Conv2D(64, (3, 3))) conv_model_reduce_overfit.add(SpatialDropout2D(0.5)) conv_model_reduce_overfit.add(Activation('relu')) conv_model_reduce_overfit.add(MaxPooling2D(pool_size = (2, 2))) conv_model_reduce_overfit.add(Flatten()) conv_model_reduce_overfit.add(Dense(64)) conv_model_reduce_overfit.add(Dropout(0.5)) conv_model_reduce_overfit.add(Dense(10)) conv_model_reduce_overfit.add(Activation('softmax')) # Compile and fit the model with adam optimizer and accuracy metric # Categorical cross-entropy is the loss function for one-hot encoded labels and # batch size equal to the number of neurons in the convolutional layers and 10 epochs # Add early stopping to avoid overfitting conv_model_reduce_overfit.compile(loss = "categorical_crossentropy", optimizer = 'adam', metrics = ['accuracy']) conv_callback = keras.callbacks.EarlyStopping(monitor = 'val_loss', patience = 3) conv_model_reduce_overfit.fit(train_images, train_labels_bin, validation_split = 0.3, epochs = 10, verbose = 2, callbacks = [conv_callback], batch_size = 64) # Compute and print the test loss and accuracy test_loss, test_acc = conv_model_reduce_overfit.evaluate(test_images, test_labels_bin) print("Convolutional model ten epochs reduced overfit -- Test loss:", test_loss * 100) print("Convolutional model ten epochs reduced overfit -- Test accuracy:", test_acc * 100) # Calculate and print predictions versus actual labels predictions = conv_model_reduce_overfit.predict(test_images) for i in range(10): print("Prediction " + str(i) + ": " + str(np.argmax(np.round(predictions[i])))) print("Actual " + str(i) + ": " + str(test_labels[i])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ultra) # language: 
python # name: ultra # --- # + [markdown] colab_type="text" id="ENoS3mTtGQts" # # Experiment 02: Deformations Experiments ETH-01 # # In this notebook, we are using the CLUST Dataset. # The sequence used for this notebook is ETH-01.zip # # + colab={} colab_type="code" id="ptO9fUvkD-Lc" import sys import random import os sys.path.append('../src') import warnings warnings.filterwarnings("ignore") from PIL import Image from sklearn.manifold import Isomap from utils.compute_metrics import get_metrics, get_majority_vote,log_test_metrics from utils.split import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA from sklearn.svm import LinearSVC from sklearn.svm import SVC from sklearn.model_selection import GroupKFold from tqdm import tqdm from pprint import pprint import torch from itertools import product import pickle import pandas as pd import numpy as np import mlflow import matplotlib.pyplot as plt #from kymatio.numpy import Scattering2D import torch from tqdm import tqdm from kymatio.torch import Scattering2D # - directory1=os.listdir('../data/02_interim/Data1') directory1.sort() directory3=os.listdir('../data/02_interim/Data3') directory3.sort() # # 1. Extract PCA Components with open('../data/03_features/scattering_features_deformation.pickle', 'rb') as handle: scattering_features1= pickle.load(handle) with open('../data/03_features/dataset_deformation.pickle', 'rb') as handle: dataset1 = pickle.load(handle) with open('../data/03_features/scattering_features_deformation3.pickle', 'rb') as handle: scattering_features3= pickle.load(handle) with open('../data/03_features/dataset_deformation3.pickle', 'rb') as handle: dataset3= pickle.load(handle) with open('../data/03_features/scattering_features_deformation5.pickle', 'rb') as handle: scattering_features5= pickle.load(handle) with open('../data/03_features/dataset_deformation5.pickle', 'rb') as handle: dataset5= pickle.load(handle) # ## ETH-1 PCA sc_features1 = scattering_features1.view(scattering_features1.shape[0], scattering_features1.shape[1] * scattering_features1.shape[2] * scattering_features1.shape[3]) X1 = sc_features1.cpu().numpy() #standardize scaler = StandardScaler() X1 = scaler.fit_transform(X1) pca = PCA(n_components=3, kernel ='poly') X1 = pca.fit_transform(X1) sc_features3 = scattering_features3.view(scattering_features3.shape[0], scattering_features3.shape[1] * scattering_features3.shape[2] * scattering_features3.shape[3]) X3 = sc_features3.cpu().numpy() #standardize scaler = StandardScaler() X3 = scaler.fit_transform(X3) pca = PCA(n_components=3, kernel ='rbf') X3 = pca.fit_transform(X3) sc_features5 = scattering_features5.view(scattering_features5.shape[0], scattering_features5.shape[1] * scattering_features5.shape[2] * scattering_features5.shape[3]) X5 = sc_features5.cpu().numpy() #standardize scaler = StandardScaler() X5 = scaler.fit_transform(X5) pca = PCA(n_components=3, kernel ='poly') X5 = pca.fit_transform(X5) plt.scatter(X3[:,0], X3[:,1], color='red') plt.scatter(X5[:,0], X5[:,1], color='green') # + from mpl_toolkits.mplot3d import Axes3D fig = plt.figure(figsize=(10,7)) ax = Axes3D(fig) #ax.scatter(X1[:,0][0:500], X1[:,1][0:500], X1[:,2][0:500], s=9,c='red') # ax.scatter(X3[:,0], X3[:,1], X3[:,2], s=9,c='green') # ax.scatter(X5[:,0], X5[:,1], X5[:,2], s=9,c='black') ax.scatter(X3[:,0][0:500], X3[:,1][0:500], X3[:,2][0:500], s=9,c='green') ax.scatter(X5[:,0][0:500], X5[:,1][0:500], X5[:,2][0:500], s=9,c='black') # ax.scatter(X_transformed3[:,0], 
X_transformed3[:,1], X_transformed3[:,2], s=9,color='green') # ax.scatter(X_transformed5[:,0], X_transformed5[:,1], X_transformed5[:,2], s=9,color='black') #ax.scatter(X1[:,0][0:500], X1[:,1][1000:1500], X1[:,2][1000:1500], s=9,c='red') # ax.scatter(X3[:,0][0:500], X3[:,1][1000:1500], X3[:,2][1000:1500], s=9,c='green') # ax.scatter(X5[:,0][0:500], X5[:,1][1000:1500], X5[:,2][1000:1500], s=9,c='black') ax.set_xlabel('x1') ax.set_ylabel('x2') ax.set_zlabel('x3') plt.show() # - from sklearn.manifold import LocallyLinearEmbedding embedding3 = LocallyLinearEmbedding(n_components=3) X_transformed3 = embedding3.fit_transform(X3[:1000]) embedding5 = LocallyLinearEmbedding(n_components=3) X_transformed5 = embedding5.fit_transform(X5[:1000]) X_transformed3.shape df = pd.DataFrame(X) df['order'] = dataset['order'] #df.corr() import seaborn as sns; sns.set_theme() figure(figsize = (10,10)) vec1 = df.corr()['order'].values vec2 = vec1.reshape(vec1.shape[0], 1) sns.heatmap(vec2) plt.show() def visualize_corr_pca_order(pca_c, df): plt.figure(figsize=(16,8)) x= df['order'] y= df[pca_c] plt.scatter(x,y) m, b = np.polyfit(x, y, 1) plt.plot(x, m*x + b, color='red') plt.ylabel('PCA Component '+ str(pca_c+1)) plt.xlabel('Frame Order') plt.show() visualize_corr_pca_order(1, df) print('Correlation between order and Pca component 2:', df.corr()['order'][1]) visualize_corr_pca_order(3, df) print('Correlation between order and Pca component 3:', df.corr()['order'][3]) def visualize_sub_plot(pca_c, df, x_num= 3, y_num =3): fig, axs = plt.subplots(x_num, y_num, figsize=(15,10)) size = len(df) plot_num = x_num * y_num frame = int(size/plot_num) start = 0 for i in range (x_num): for j in range (y_num): final = start + frame x= df['order'].iloc[start:final] y= df[pca_c].iloc[start:final] m, b = np.polyfit(x, y, 1) axs[i, j].set_ylabel('PCA Component '+ str(pca_c+1)) axs[i, j].set_xlabel('Frame Order') axs[i, j].plot(x, m*x + b, color='red') axs[i, j].scatter(x,y) start = start + frame plt.show() visualize_sub_plot(1, df, x_num= 3, y_num =3) visualize_sub_plot(3, df, x_num= 3, y_num =3) # # 5. 
Isometric Mapping Correlation with Order with open('../data/03_features/scattering_features_deformation.pickle', 'rb') as handle: scattering_features = pickle.load(handle) with open('../data/03_features/dataset_deformation.pickle', 'rb') as handle: dataset = pickle.load(handle) sc_features = scattering_features.view(scattering_features.shape[0], scattering_features.shape[1] * scattering_features.shape[2] * scattering_features.shape[3]) X = sc_features.cpu().numpy() #standardize scaler = StandardScaler() X = scaler.fit_transform(X) from sklearn.manifold import Isomap embedding = Isomap(n_components=2) X_transformed = embedding.fit_transform(X[0:500]) df = pd.DataFrame(X_transformed) df['order'] = dataset['order'] df.corr() from sklearn.manifold import Isomap def visualize_sub_plot_iso(pca_c, x_num= 3, y_num =3): fig, axs = plt.subplots(x_num, y_num, figsize=(15,13)) size =len(sc_features ) plot_num = x_num * y_num frame = int(size/plot_num) start = 0 x_total = [] for i in tqdm(range (x_num)): for j in tqdm(range (y_num)): final = start + frame embedding = Isomap(n_components=2) X_transformed = embedding.fit_transform(X[start:final]) x_total.extend(X_transformed) df = pd.DataFrame(X_transformed) df['order'] = dataset['order'].iloc[start:final].values x= df['order'] y= df[pca_c] start = start + frame #m, b = np.polyfit(x, y, 1) axs[i, j].set_ylabel('Iso Map Dimension '+ str(pca_c+1)) axs[i, j].set_xlabel('Frame Order') #axs[i, j].plot(x, m*x + b, color='red') axs[i, j].scatter(x,y) plt.show() return x_total x_total = visualize_sub_plot_iso(0, x_num= 3, y_num =3) #print('Correlation between order and Pca component 2:', df.corr()['order'][1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import visualProjector as vp import imp import matplotlib.pyplot as plt import numpy as np imp.reload(vp) size= 1600 sizeR = [size+1,np.int(size/2)] proj = vp.Projector(size = size) proj.addObject(3,0,0) proj.addObject(-3,0,0) proj.addObject(0,3,0) proj.addObject(0,-3,0) proj.addObject(0,0,3) proj.addObject(0,0,-3) # + #-10 up on image X = proj.render(True) # + # - proj.image() fig = plt.figure(figsize = (20,20)) plt.imshow(proj.image()) theta = np.linspace(-np.pi/2,np.pi/2,sizeR[1]) phi = np.linspace(-np.pi,np.pi,sizeR[0]) phi2d = np.array([phi,]*sizeR[1]) theta2d = np.array([theta,]*sizeR[0]).transpose() sinThetaIm = np.sin(theta2d) cosThetaCosPhiIm = np.cos(theta2d) * np.cos(phi2d) cosThetaSinPhiIm = np.cos(theta2d) * np.sin(phi2d) sinTheta = np.matrix.flatten(np.sin(theta2d)) cosThetaCosPhi = np.matrix.flatten(np.cos(theta2d) * np.cos(phi2d)) cosThetaSinPhi = np.matrix.flatten(np.cos(theta2d) * np.sin(phi2d)) theta2d fig=plt.figure(figsize = (20,20)) plt.imshow(sinThetaIm) fig=plt.figure(figsize = (20,20)) plt.imshow(cosThetaSinPhiIm) fig=plt.figure(figsize = (20,20)) plt.imshow(proj.image()*cosThetaCosPhiIm) fig=plt.figure(figsize = (20,20)) plt.imshow(proj.image()*cosThetaSinPhiIm) print(proj.image()[4,:-1]) fig = plt.figure(figsize=(10,50)) plt.plot(cosThetaCosPhiIm[4,:-1],'o') np.sum(proj.image()[4,:-1]*cosThetaCosPhiIm[4,:-1]) fig = plt.figure(figsize=(10,20)) plt.plot(proj.image()[4,:-1]*cosThetaCosPhiIm[4,:-1],'o') plt.plot(proj.image()[4,:-1]) np.sum(proj.pixels*np.matrix.flatten(sinTheta)) proj.pixels*np.matrix.flatten(sinTheta[:-1]) plt.imshow(1/np.cos(theta2d)[: ,:-1]) np.sum(np.sin(phi2d)) np.sum(np.cos(phi[:-1])) 
plt.plot(np.cos(phi),'o') np.sum(cosThetaCosPhiIm[:-1,:-1]) plt.plot(proj.image()[int(sizeR[1]/2),:-1]*cosThetaCosPhiIm[int(sizeR[1]/2),:-1],'o') plt.plot(proj.image()[int(sizeR[1]/2),:-1]) # + Y=[] X=[] for k in range(0,sizeR[1]): X.append(np.sum(proj.image()[k,:-1]*cosThetaCosPhiIm[k,:-1])) print(np.sum(proj.image()[:,:-1]*cosThetaCosPhiIm[:,:-1])) for k in range(0,sizeR[0]): Y.append(np.sum(proj.image()[:,k]*cosThetaCosPhiIm[:,k])) # - plt.plot(X) plt.plot(Y) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.7 ('LD') # language: python # name: python3 # --- # # Uncertain Zero Sum Competition # Uncomment the following cells (by deleting the leading `#`) if you are running this in Colab. # + # # !git clone https://github.com/wbarfuss/POLD.git # + # # !pip install pyDOE # + # # cd POLD # + # imports import os import hashlib import numpy as np import matplotlib.pyplot as plt from matplotlib.lines import Line2D from environments.Env_RoleChangingZeroSum import RoleChangingZeroSum as ENV from agents.deterministic import detQ from utils import quiver as qv from utils import interact as ia # document this session for reproducibility import IPython print(IPython.sys_info()) print() print('PACKAGES') print('----- pip -----') # !pip freeze | grep -E 'numpy|scipy|pandas|matplotlib|ipython|pyDOE' print('----- conda -----') # !conda list --export | grep -E 'numpy|scipy|pandas|matplotlib|ipython|pyDOE' # + def setup_agents(AGENTS, alpha, beta, gamma, ENV, obs_noise): alpha, beta, gamma = map(lambda x: np.round(x,4), [alpha, beta, gamma]) env = ENV(obsnoise=obs_noise) acts = env.actions() states = env.states() obs = env.observations() T = env.TransitionTensor(); assert np.allclose(T.sum(-1), 1) O = env.ObservationTensor(); assert np.allclose(O.sum(-1), 1) R = env.RewardTensor() F = env.FinalStates() agents = AGENTS(T, R, O, alpha, beta, gamma) fid = f"{ENV.__name__}_{env.noise}__"+\ f"{AGENTS.__name__}_{alpha}_{beta}_{gamma}" infodic = dict(acts=acts, states=states, obs=obs) return agents, fid, infodic def getXs(agents, n=4, radius=0.4, phi0=0): """ Get policies on a circle around the mean """ Xs = [agents.zeroIntelligence_behavior()] x0 = 0.5; y0 = 0.5 for phi in np.linspace(0, 2*np.pi, n)[0:-1]: X = agents.zeroIntelligence_behavior() X[0, 0, 0] = x0 + radius*np.cos(phi+phi0); X[0, 0, 1] = 1-X[0, 0, 0] X[1, 0, 0] = y0 + radius*np.sin(phi+phi0); X[1, 0, 1] = 1-X[1, 0, 0] if X.shape[1] == 2: X[1, 1, 0] = x0 + radius*np.cos(phi); X[1, 1, 1] = 1-X[1, 1, 0] X[0, 1, 0] = y0 + radius*np.sin(phi); X[0, 1, 1] = 1-X[0, 1, 0] Xs.append(X) return Xs def _transform_tensor_into_hash(tens): """Transform tens into a string for filename saving""" r = int(hashlib.sha512(str(tens).encode('utf-8')).hexdigest()[:16], 16) return r def compute_data(agents, NrRandomX=0, Xs=[]): assert NrRandomX > 0 or type(Xs) == list Xs = Xs + [agents.random_behavior() for _ in range(NrRandomX)] Xtrajs, Rtrajs = [], [] for i in range(len(Xs)): X = Xs[i] Xtraj, Rtraj, _ = ia.compute_detXtraj(agents, X, EpsMin=1e-5, Tmax=1000) Xtrajs.append(Xtraj); Rtrajs.append(Rtraj) print(i, Rtraj[-1].round(4), Rtraj.mean(0).round(4)) return dict(X=Xtrajs, R=Rtrajs) def get_data(agents, agentid, NrRandomX=0, Xs=[], datfolder=None): assert NrRandomX > 0 or type(Xs) == list if NrRandomX > 0: fn = os.path.expanduser(datfolder) + agentid + f"__{NrRandomX}.npz" else: h = _transform_tensor_into_hash(np.array(Xs)) fn = 
os.path.expanduser(datfolder) + agentid\ + f"__Xs{h}.npz" try: dat = np.load(fn, allow_pickle=True) ddic = dict(zip((k for k in dat), (dat[k] for k in dat))) print("Loading ", fn) except: print("Computing ", fn) ddic = compute_data(agents, NrRandomX=NrRandomX, Xs=Xs) if datfolder is not None: np.savez_compressed(fn, **ddic) return ddic def plot_single_condition(agents, data, gisp=None, axes=None, fsf = 1.0, plot=True): Q = agents.Q if gisp is not None: addgs = gisp.subgridspec fig = plt.gcf() elif axes is None: fig = plt.figure(figsize=(fsf*5, fsf*5)) addgs = fig.add_gridspec if axes is None: ws=0.1; hs=1.5 if Q == 2: gs = addgs(3, 2, wspace=ws, hspace=hs) axL = fig.add_subplot(gs[0:2, 0], ylim=(0,1), xlim=(0,1), xticks=(0,1), yticks=(0,1)) axR = fig.add_subplot(gs[0:2, 1], ylim=(0,1), xlim=(0,1), xticks=(0,1), yticks=(0,1)) axR.set_yticklabels([]) axL.annotate(f"Obs: KeepKick", xy=(0.0, 1.0), xycoords="axes fraction", color="k", ha="left", va="bottom") axR.annotate(f"Obs: KickKeep", xy=(1.0, 1.0), xycoords="axes fraction", color="k", ha="right", va="bottom") axLR = [axL, axR] else: assert Q == 1 gs = addgs(3, 4, wspace=ws, hspace=hs) axLR = [fig.add_subplot(gs[0:2, 1:3], xticks=(0,1), yticks=(0,1))] axLR[0].set_ylim(0,1); axLR[0].set_xlim(0,1) axLR[0].annotate(f"Obs: KK", xy=(0.5, 1.0), xycoords="axes fraction", color="k", ha="center", va="bottom") axT1 = fig.add_subplot(gs[2, :]) x=([0], range(Q), [0]); y=([1], range(Q), [0]) if plot: pAs = np.linspace(0.01, 0.99, 9) qv.plot_quiver(agents, x=x, y=y, pAs=pAs, NrRandom=16, kind="quiver+samples", dens=0.4, sf=0.5, policies_iter_steps=2, cmap='Greys', acts=["A", "B"], conds=None, axes=axLR); Xtrajs = data['X'] axes = qv.plot_trajectories(Xtrajs[0:-1], x=x, y=y, cols=['grey'], lws=[1], mss=['.'], msss=[0], lss=['-'], alphas=[0.5], axes=axLR) axes = qv.plot_trajectories(Xtrajs[-1:], x=x, y=y, cols=['purple'], lws=[1], mss=['.'], msss=[0], lss=['-'], alphas=[0.9], axes=axLR) for ax in axLR: ax.set_title(''); ax.set_xlabel(''); ax.set_ylabel('') Rtrajs = data['R'] for Rtraj in Rtrajs[0:-1]: axT1.plot(Rtraj[:, 0], color='darkgrey', alpha=0.5) axT1.plot(Rtraj[:, 1], color='lightgrey', alpha=0.5) axT1.plot(Rtrajs[-1][:, 0], color='blue', alpha=0.8) axT1.plot(Rtrajs[-1][:, 1], color='red', alpha=0.8) axT1.set_ylim(-0.05, 1.05); #axT2.set_ylim(-0.01, 1.01) axT1.set_xlabel('Time steps') axT1.set_yticks([0, 1]) axT1.set_yticklabels([-1, 1]) return axLR, axT1 #axT2 # + datfolder = "./data/" gamma = 0.90 beta = 200 / (1-gamma) alpha = 0.001 obsnoise = 0.25 fsf = 1.0 fig = plt.figure(figsize=(fsf*10, fsf*2.9)) gs = fig.add_gridspec(1, 3, wspace=0.25, hspace=0.35, left=0.1, right=0.98, bottom=0.18, top=0.78) plot = True # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - agents, fid, info = setup_agents(detQ, alpha, beta, gamma, ENV, obs_noise=0) Xs = getXs(agents, n=6, radius=0.4, phi0=0.15*np.pi)[1:] data = get_data(agents, fid, Xs=Xs, datfolder=datfolder) axs, axT = plot_single_condition(agents, data, gisp=gs[0], plot=plot) axT.annotate(f"A", xy=(-0.08, 6.8), xycoords="axes fraction", color="k", weight='semibold', ha="center", va="bottom", fontsize='large') axT.annotate(f"Obs. 
noise 0.0", xy=(0.5, 6.8), xycoords="axes fraction", color="k", weight='semibold', ha="center", va="bottom", fontsize='large') axs[0].set_xlabel(r'$p^1(Left | Obs)$'); axs[0].set_ylabel(r'$p^2(Left | Obs)$') axs[0].xaxis.labelpad = -9; axs[0].yaxis.labelpad = -6 axs[1].set_xlabel(r'$p^1(Left | Obs)$'); axs[1].xaxis.labelpad = -9 axs[0].annotate(f"Policy\nSpace", xy=(-0.57, 0.5), xycoords="axes fraction", color="k", ha="center", va="center", fontsize='large', weight='semibold', rotation=90) axT.annotate(f"Average\nReward", xy=(-0.265, 0.5), xycoords="axes fraction", color="k", ha="center", va="center", fontsize='large', weight='semibold', rotation=90) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - noise=0.25 agents, fid, info = setup_agents(detQ, alpha, beta, gamma, ENV, obs_noise=noise) Xs = getXs(agents, n=6, radius=0.4, phi0=0.15*np.pi)[1:] data = get_data(agents, fid, Xs=Xs, datfolder=datfolder) axs, axT = plot_single_condition(agents, data, gisp=gs[1], plot=plot) axT.annotate(f"B", xy=(-0.08, 6.8), xycoords="axes fraction", color="k", weight='semibold', ha="center", va="bottom", fontsize='large') axT.annotate(f"Obs. noise {noise}", xy=(0.5, 6.8), xycoords="axes fraction", color="k", ha="center", va="bottom", fontsize='large', weight='semibold') axs[0].set_xlabel(r'$p^1(Left | Obs)$'); axs[0].set_ylabel(r'$p^2(Left | Obs)$') axs[0].xaxis.labelpad = -9; axs[0].yaxis.labelpad = -6 axs[1].set_xlabel(r'$p^1(Left | Obs)$'); axs[1].xaxis.labelpad = -9 # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - agents, fid, info = setup_agents(detQ, alpha, beta, gamma, ENV, obs_noise=0.6) Xs = getXs(agents, n=6, radius=0.4, phi0=0.15*np.pi)[1:] data = get_data(agents, fid, Xs=Xs, datfolder=datfolder) axs, axT = plot_single_condition(agents, data, gisp=gs[2], plot=plot) axT.annotate(f"C", xy=(-0.08, 6.8), xycoords="axes fraction", color="k", weight='semibold', ha="center", va="bottom", fontsize='large') axT.annotate(f"Obs. 
noise >0.5", xy=(0.5, 6.8), xycoords="axes fraction", color="k", ha="center", va="bottom", fontsize='large', weight='semibold') axs[0].set_xlabel(r'$p^1(Left | Obs)$'); axs[0].set_ylabel(r'$p^2(Left | Obs)$') axs[0].xaxis.labelpad = -9; axs[0].yaxis.labelpad = -6 ledli = [Line2D([0],[0], color=c, alpha=0.95, lw=1) for c in ["red", "blue"]] ledla = [f"Agent {i}" for i in [1, 2]] axT.legend(ledli, ledla, loc="center right", bbox_to_anchor=(1.0250, 0.5), ncol=1, prop={'size': 7}) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - plt.savefig('figs/fig_ZeroSum.png', dpi=300, facecolor='w') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # KTRX Example # %load_ext autoreload # %autoreload 2 # + import pandas as pd import numpy as np from orbit.models.ktrlite import KTRLiteMAP from orbit.models.ktrx import KTRXFull, KTRXAggregated from orbit.estimators.pyro_estimator import PyroEstimatorVI from orbit.estimators.stan_estimator import StanEstimatorMAP from orbit.utils.features import make_fourier_series_df, make_fourier_series from orbit.diagnostics.plot import plot_predicted_data, plot_predicted_components, plot_ktr_lev_knots from orbit.diagnostics.metrics import smape from orbit.utils.dataset import load_iclaims, load_electricity_demand import matplotlib.pyplot as plt plt.style.use("fivethirtyeight") # %matplotlib inline # - pd.set_option('display.float_format', lambda x: '%.5f' % x) # ## Data # + df = load_iclaims() date_col = 'week' response_col = 'claims' print(df.shape) df.head() # - print(f'starts with {df[date_col].min()}\nends with {df[date_col].max()}\nshape: {df.shape}') test_size = 52 train_df = df[:-test_size] test_df = df[-test_size:] # ## KTRLite # + # level_knot_dates = pd.date_range(start='1981-01-01', end='1990-12-31', periods=21) # level_knot_dates # - ktrlite = KTRLiteMAP( response_col=response_col, date_col=date_col, # seasonality seasonality=[52], seasonality_fs_order=[3], level_knot_scale=.1, span_level=.05, # level_knot_dates=level_knot_dates, # date_freq='D', span_coefficients=.3, estimator_type=StanEstimatorMAP, n_bootstrap_draws=1e4, ) ktrlite.fit(train_df) predicted_df = ktrlite.predict(df=test_df, decompose=True) predicted_df.head() '{:.2%}'.format(smape(predicted_df['prediction'].values, test_df[response_col].values)) _ = plot_predicted_data(training_actual_df=train_df, predicted_df=predicted_df, date_col=date_col, actual_col=response_col, test_actual_df=test_df) _ = plot_predicted_components(predicted_df=predicted_df, date_col=date_col, plot_components=['trend', 'seasonality_52']) # get the knots lev_knots = ktrlite._aggregate_posteriors['map']['lev_knot'] decomp_df = ktrlite.predict(df=train_df, decompose=True) _ = plot_ktr_lev_knots(train_df, decomp_df, date_col, response_col, ktrlite._level_knot_dates, lev_knots) # ## KTRX # prepare the input level_knot_dates = ktrlite._level_knot_dates level_knots = ktrlite._aggregate_posteriors['map']['lev_knot'][0] seasonal_knots_input = { '_seas_coef_knot_dates': ktrlite._coef_knot_dates, '_sea_coef_knot': ktrlite._aggregate_posteriors['map']['coef_knot'], '_seasonality': ktrlite._seasonality, '_seasonality_fs_order': ktrlite._seasonality_fs_order, } # + ktrx = KTRXAggregated( response_col=response_col, date_col=date_col, level_knot_dates=level_knot_dates, level_knots=level_knots, seasonal_knots_input=seasonal_knots_input, 
regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'], level_knot_scale=.1, #.01 span_coefficients=0.3, rho_coefficients=0.15, # pyro optimization parameters seed=8888, num_steps=1000, num_sample=1000, learning_rate=0.1, #learning_rate_total_decay=0.05, verbose=True, message=100, aggregate_method="median", estimator_type=PyroEstimatorVI, ) ktrx.fit(train_df) # + # ktrx = KTRXFull( # response_col=response_col, # date_col=date_col, # level_knot_dates=level_knot_dates, # level_knots=level_knots, # seasonal_knots_input=seasonal_knots_input, # regressor_col=['trend.unemploy', 'trend.filling', 'trend.job'], # level_knot_scale=.1, #.01 # span_coefficients=0.3, # rho_coefficients=0.15, # # pyro optimization parameters # seed=8888, # num_steps=1000, # num_sample=1000, # learning_rate=0.1, # #learning_rate_total_decay=0.05, # verbose=True, # message=100, # estimator_type=PyroEstimatorVI, # ) # ktrx.fit(train_df) # - coef_df = ktrx.get_regression_coefs(coefficient_method="smooth") # include_ci=True print(coef_df.head()) print(coef_df.tail()) coef_df = ktrx.get_regression_coefs(coefficient_method="empirical") # include_ci=True print(coef_df.head()) print(coef_df.tail()) coef_df = ktrx.get_regression_coefs(coefficient_method="smooth" , date_array=['2017-06-11', '2017-06-18', '2017-06-25', '2017-07-02', '2017-07-09', '2017-07-16']) coef_df coef_df = ktrx.get_regression_coefs(coefficient_method="empirical" , date_array=['2017-06-11', '2017-06-18', '2017-06-25', '2017-07-02', '2017-07-09', '2017-07-16']) coef_df _ = ktrx.plot_regression_coefs(with_knot=True, coefficient_method='smooth', ncol=3, figsize=(10, 3)) predicted_df = ktrx.predict(df=test_df, coefficient_method="smooth", decompose=False) predicted_df.head() predicted_df = ktrx.predict(df=test_df, coefficient_method="empirical", decompose=False) predicted_df.head() '{:.2%}'.format(smape(predicted_df['prediction'].values, test_df[response_col].values)) _ = plot_predicted_data(training_actual_df=train_df, predicted_df=predicted_df, date_col=date_col, actual_col=response_col, test_actual_df=test_df) _ = plot_predicted_components(predicted_df=predicted_df, date_col=date_col, plot_components=['trend', 'seasonality_input', 'regression']) # ## Try a different data # + # from 2000-01-01 to 2008-12-31 df = load_electricity_demand() df['electricity'] = np.log(df['electricity']) date_col = 'date' response_col = 'electricity' print(df.shape) df.head() # - test_size = 365 train_df = df[:-test_size] test_df = df[-test_size:] # + ktrlite = KTRLiteMAP( response_col=response_col, date_col=date_col, # seasonality seasonality=[7, 365.25], seasonality_fs_order=[2, 5], level_knot_scale=.1, span_level=.05, # level_knot_dates=level_knot_dates, # date_freq='D', span_coefficients=.3, estimator_type=StanEstimatorMAP, n_bootstrap_draws=1e4, ) ktrlite.fit(train_df) # - # prepare the input level_knot_dates = ktrlite._level_knot_dates level_knots = ktrlite._aggregate_posteriors['map']['lev_knot'][0] seasonal_knots_input = { '_seas_coef_knot_dates': ktrlite._coef_knot_dates, '_sea_coef_knot': ktrlite._aggregate_posteriors['map']['coef_knot'], '_seasonality': ktrlite._seasonality, '_seasonality_fs_order': ktrlite._seasonality_fs_order, } # + ktrx = KTRXFull( response_col=response_col, date_col=date_col, level_knot_dates=level_knot_dates, level_knots=level_knots, seasonal_knots_input=seasonal_knots_input, level_knot_scale=.1, #.01 span_coefficients=0.3, rho_coefficients=0.15, # pyro optimization parameters seed=8888, num_steps=1000, num_sample=1000, learning_rate=0.1, 
#learning_rate_total_decay=0.05, verbose=True, message=100, estimator_type=PyroEstimatorVI, ) ktrx.fit(train_df) # - predicted_df = ktrx.predict(df=test_df, decompose=True) predicted_df.head() '{:.2%}'.format(smape(predicted_df['prediction'].values, test_df[response_col].values)) _ = plot_predicted_data(training_actual_df=train_df, predicted_df=predicted_df, date_col=date_col, actual_col=response_col, test_actual_df=test_df) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: claude # language: python # name: claude # --- import jax.numpy as jnp from jax import grad, jit, vmap, jacfwd, jacrev from jax import random import jax.scipy.sparse as sp import matplotlib.pyplot as plt import matplotlib from mpl_toolkits.axes_grid1 import make_axes_locatable import networkx as nx import pandas as pd from tqdm.auto import tqdm, trange from scipy.sparse import csc_matrix from IPython.core.display import display, HTML display(HTML("")) import claude as cl from claude import observables as obs from claude import constraints as const # + N = 1000 ba = nx.barabasi_albert_graph(N,30) gn = nx.scale_free_graph(1000) ba_adj = np.asarray(nx.adj_matrix(ba).todense()) gn_adj = np.asarray(nx.adj_matrix(gn).todense()) d = np.array(ba_adj.sum(axis=0)).flatten() dinv = 1/d din = np.array(gn_adj.sum(axis=0)).flatten() dout = np.array(gn_adj.sum(axis=1)).flatten() # - def plot_adj(adj): plt.imshow(adj,cmap=matplotlib.cm.get_cmap('gray_r'),vmin=0,vmax=1) plt.colorbar() # + [markdown] heading_collapsed=true # # Connectivity # + hidden=true m = cl.GraphEnsemble(1000) m.fit([const.Connectivity(343)], opt_kwargs={'nit':30,'fatol':1e-2,'disp':True}) # + hidden=true plt.figure(figsize=[15,5]) plt.subplot(1,3,1) plot_adj(ba_adj) plt.subplot(1,3,2) plot_adj(m.adj_matrix) plt.subplot(1,3,3) plt.hist([m.sample().sum() for i in trange(1000)]) # + [markdown] hidden=true # ## Subgraph connectivity # + hidden=true m = cl.GraphEnsemble(1000) m.fit([const.Connectivity(343, nodeset1=np.arange(100))], opt_kwargs={'nit':30,'fatol':1e-2,'disp':True}) # + hidden=true plt.figure(figsize=[15,5]) plt.subplot(1,3,1) plot_adj(ba_adj) plt.subplot(1,3,2) plot_adj(m.adj_matrix) plt.subplot(1,3,3) plt.hist([np.triu(m.sample()[:100,:100]).sum() for i in trange(1000)]) # + [markdown] hidden=true # ## Connectivity between 2 groups of nodes # + hidden=true m = cl.GraphEnsemble(1000) m.fit([const.Connectivity(343, nodeset1=np.arange(100), nodeset2=np.arange(150,200))], opt_kwargs={'nit':30,'fatol':1e-2,'disp':True}) # + hidden=true plt.figure(figsize=[15,5]) plt.subplot(1,3,1) plot_adj(ba_adj) plt.subplot(1,3,2) plot_adj(m.adj_matrix) plt.subplot(1,3,3) plt.hist([m.sample()[:100,150:200].sum() for i in trange(1000)]) # + [markdown] heading_collapsed=true # # Fixed degree sequence # + [markdown] hidden=true # ### Full degree sequence # + hidden=true m = cl.GraphEnsemble(1000) m.fit([const.DegreeSequence(d)], opt_kwargs={'nit':100,'fatol':1e-2,'disp':True}) #m.fit([const.DegreeSequence(get_din_dout(ba)[0]),const.Connectivity(250, nodeset1=np.arange(30), nodeset2=np.arange(40,50))], opt_kwargs={'nit':100,'fatol':1e-2}) #m.fit([const.DegreeSequence(d[:30], nodeset=np.arange(30))], opt_kwargs={'nit':30,'fatol':1e-2}) # + hidden=true plt.figure(figsize=[15,5]) plt.subplot(1,3,1) plot_adj(ba_adj) plt.subplot(1,3,2) plot_adj(m.adj_matrix) plt.subplot(1,3,3) plt.plot(d, m.adj_matrix.sum(axis=1),'.') plt.plot(plt.xlim(),plt.xlim()) # + [markdown] 
hidden=true # ### Partial degree sequence # + hidden=true m = cl.GraphEnsemble(1000) #m.fit([const.DegreeSequence(get_din_dout(ba)[0]),const.Connectivity(250, nodeset1=np.arange(30), nodeset2=np.arange(40,50))], opt_kwargs={'nit':100,'fatol':1e-2}) m.fit([const.DegreeSequence(d[:50], nodeset=np.arange(50))], opt_kwargs={'nit':30,'fatol':1e-2,'disp':True}) # + hidden=true plt.figure(figsize=[25,10]) plt.subplot(1,3,1) plot_adj(ba_adj) plt.subplot(1,3,2) plot_adj(m.adj_matrix) plt.subplot(1,3,3) plt.plot(d, '.', markersize=15) plt.plot(m.adj_matrix.sum(axis=1),'.',markersize=5) # + [markdown] hidden=true # ### Degree sequence in subgraph # + hidden=true m = cl.GraphEnsemble(1000) #m.fit([const.DegreeSequence(get_din_dout(ba)[0]),const.Connectivity(250, nodeset1=np.arange(30), nodeset2=np.arange(40,50))], opt_kwargs={'nit':100,'fatol':1e-2}) m.fit([const.DegreeSequence(np.arange(50), nodeset=np.arange(50), subgraph_nodeset=np.arange(100))], opt_kwargs={'nit':100,'fatol':1e-2,'disp':True}) #m.fit([const.DegreeSequence(d[:50], nodeset=np.arange(50))], opt_kwargs={'nit':30,'fatol':1e-2}) # + hidden=true plt.figure(figsize=[20,5]) plt.subplot(1,3,1) plot_adj(m.adj_matrix) plt.subplot(1,3,2) plt.plot(m.adj_matrix[:50,:100].sum(axis=1),'.') plt.subplot(1,3,3) plt.plot(m.adj_matrix.sum(axis=1),'.') plt.axvline(100,color='red') # + [markdown] heading_collapsed=true # # Joining constraints # + hidden=true m = cl.GraphEnsemble(1000) m.fit([const.DegreeSequence(get_din_dout(ba)[0]),const.Connectivity(250, nodeset1=np.arange(30), nodeset2=np.arange(40,50))], opt_kwargs={'nit':100,'fatol':1e-2,'disp':True}) # + hidden=true plt.figure(figsize=[20,5]) plt.subplot(1,3,1) plot_adj(ba_adj) plt.subplot(1,3,2) plot_adj(m.adj_matrix) plt.subplot(1,3,3) plt.hist([m.sample()[:30,40:50].sum() for i in trange(1000)]) # - # # Observables # ### RWR # + x = np.zeros(N) x[np.random.randint(0,N, int(N/100*20))] = 1 #x = np.random.rand(N) x= x/x.sum() x = jnp.asarray(x) lambd = 0.3 def propagate(adj,x,lambd=lambd): p = adj / jnp.reshape(adj.sum(axis=0), [adj.shape[0],1]) I = jnp.eye(p.shape[0]) return lambd*x.dot(jnp.linalg.inv(I - (1-lambd)*p)) def propagate2(adj,x,i,lambd=lambd): p = adj / jnp.reshape(adj.sum(axis=0), [adj.shape[0],1]) I = jnp.eye(p.shape[0]) return lambd*x.dot(jnp.linalg.inv(I - (1-lambd)*p))[i] # - m = cl.GraphEnsemble(1000) #m.fit([const.DegreeSequence(d)], opt_kwargs={'nit':100,'fatol':1e-2}) m.fit([const.DegreeSequence(d),const.Connectivity(250, nodeset1=np.arange(30), nodeset2=np.arange(40,50))], opt_kwargs={'nit':50,'fatol':1e-2,'disp':True}) k = obs.RandomWalkWithRestart.eval_transfer_matrix(ba_adj, lambd) mu = m.predict_mean(obs.RandomWalkWithRestart.func, f_args=[x, lambd, None, k]) sigma = m.predict_std(obs.RandomWalkWithRestart.grad, f_args=[x, lambd, None, k], obs_dim_idx=1) # + N_samples = 100 y = propagate(ba_adj, x) #samples = [] rdmy_ms = np.zeros([len(ba),N_samples]) for i in trange(N_samples): flag = False while not flag: smpl = m.sample() prop = propagate(smpl,x,lambd) if np.isnan(propagate(smpl,x)).sum() == 0: #samples.append(smpl) rdmy_ms[:,i] = propagate(smpl, x) flag = True z = (y - rdmy_ms.mean(axis=1))/rdmy_ms.std(axis=1) # - plt.figure(figsize=[13,5]) plt.subplot(121) plt.plot(rdmy_ms.mean(axis=1),mu,'.') plt.xlabel('Numerical expected value',fontsize=15) plt.ylabel('Predicted expected value',fontsize=15) plt.plot(plt.xlim(),plt.xlim()) plt.subplot(122) plt.plot(rdmy_ms.std(axis=1),sigma,'.') plt.xlabel('Numerical std. dev.',fontsize=15) plt.ylabel('Predicted std. 
dev.',fontsize=15) plt.plot(plt.xlim(),plt.xlim()) plt.tight_layout() # ### ANND mu = m.predict_mean(obs.AverageNeighborDegree.func) sigma = [m.predict_std(grad(obs.AverageNeighborDegree.func, argnums=0), f_args=[i]) for i in trange(N)] # + annd = obs.AverageNeighborDegree.func N_samples = 100 y = np.asarray(annd(csc_matrix(adj))).flatten() #samples = [] rdm_annd = np.zeros([N,N_samples]) for i in trange(N_samples): smpl = csc_matrix(m.sample()) prop = np.asarray(annd(smpl)).flatten() rdm_annd[:,i] = prop z = (y - rdm_annd.mean(axis=1))/rdm_annd.std(axis=1) # - plt.figure(figsize=[13,5]) plt.subplot(121) plt.plot(rdm_annd.mean(axis=1),mu,'.') plt.xlabel('Numerical expected value',fontsize=15) plt.ylabel('Predicted expected value',fontsize=15) plt.plot(plt.xlim(),plt.xlim()) plt.subplot(122) plt.plot(rdm_annd.std(axis=1),sigma,'.') plt.xlabel('Numerical std. dev.',fontsize=15) plt.ylabel('Predicted std. dev.',fontsize=15) plt.plot(plt.xlim(),plt.xlim()) plt.tight_layout() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="DXItCGVYhLmw" # ##Mask RCNN Architecture Explanation # + [markdown] id="xrmrZKPthQzI" # ###1. Object detection, Semantic Segmentation and Instance segmentation # + [markdown] id="8wPuBrJkzJYu" # # Architectures like YOLO and SSD are efficient at the task of `object localization`, which is detecting objects in the image and drawing a bounding box around them. But the other major task in computer vision is `semantic segmentation`, which is creating a pixel-level boundary instead of a generic bounding box around the object. The pixel-level classification allows for a much clearer distinction between different objects in the image. # \ # A specific case of semantic segmentation is `Instance segmentation`, wherein the pixels in the image, in addition to the different classes, are also classified based on the number of objects in each class. # \ # In this session, we'll discuss `Instance segmentation`. Specifically, we will discuss the Mask RCNN architecture, which is uniquely designed for this task. # + [markdown] id="vz3JF1p2zJHX" # ##1.1 Mask RCNN architecture # In 2017, He et al. proposed Mask R-CNN. It was developed on top of Faster RCNN, a Region-based Convolutional Neural Net that performs Object Detection and returns bounding boxes and the corresponding class labels. To this architecture (i.e., Faster RCNN) a branch is added for predicting the segmentation masks. # # ###Architecture Explanation # Mask RCNN, as the authors explain, is a natural extension of the Faster RCNN detector. # # # So similar to Faster RCNN, Mask RCNN has a backbone network and 2 stages thereafter. # \ # Backbone network : A pretrained CNN, usually Resnet50 or Resnet101, used as a feature extractor. The output feature map of this stage is fed to both Stage 1 and Stage 2. # \ # Note: While Resnet works fine in this network, the authors of Mask RCNN have proposed an improved Feature Pyramid Network backbone for the architecture. For simplicity, this will be discussed later. # \ # Stage 1 : A lightweight neural network that takes the feature map and outputs a set of rectangular regions of the feature map that potentially contain an object of one of the desired classes.
In the output, each rectangular region is associated with an objectness score and the coordinates of the rectangle. # # Stage 2 : For each proposed region, a fixed-size feature is extracted (via ROI Align, discussed later) and fed to heads that predict the class, a refined bounding box and a binary mask. # # # \ # Now the backbone network is a simple vanilla pre-trained Resnet network. So we will discuss the next part, i.e. the Stage 1 Region proposal network. # + [markdown] id="jRlfpnLtzJ0R" # ##1.2 Region Proposal Network & Anchor Boxes # The Region proposal network produces regions of interest from the feature map that it receives as input. It does this by moving a sliding window of size `n*n` along the feature map. Each window thus obtained is then fed to the fully connected layers - a box-regression layer and a box-classification layer. In addition, it is also fed to the network that predicts the masks for the object. # # \ # At each sliding-window location, we simultaneously predict multiple region proposals, where the maximum number of possible proposals for each location is denoted as k. So the regression layer has 4k outputs encoding the coordinates of k boxes, and the classification layer outputs 2k scores that estimate the probability of object or not object for each proposal. The k proposals are parameterized relative to k reference boxes, which we call `anchors`. An anchor is centered at the sliding window in question, and is associated with a scale and aspect ratio (second figure below). By default we use 3 scales and 3 aspect ratios, yielding k = 9 anchors at each sliding position. For a convolutional feature map of size W × H (typically ∼2,400), there are WHk anchors in total. # # Below is an illustration of anchor boxes distributed over the input image (in practice the anchor boxes are defined over the feature map from the backbone network, and the number of anchor boxes will be considerably more than shown here). # # (figure: anchor boxes over the input image) # # \ # Below is an illustration of the different scales and aspect ratios of the anchor boxes that are used in the architecture. # (figure: anchor scales and aspect ratios) # # # \ # Now the outputs of the Region proposal network are the `anchor class` and the refined `coordinates` for the bounding boxes. # # # * The `anchor class` is either `background` or `foreground`. # * The bounding boxes that are predicted often don't center over the object perfectly. Hence a delta for the coordinates is estimated by the RPN; these are the refined `coordinates`. # \ # After this, the predictions from the RPN are filtered based on the anchor class. Also, bounding boxes that are closely located to each other are eliminated by measuring their overlap, a technique called non-maximum suppression. Thus, a final set of proposals is selected for the next stage.
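# As a rough illustration of the anchors described above (a minimal sketch, not the actual RPN code; the pixel scale values are just the commonly quoted defaults), the k = 3 scales x 3 aspect ratios = 9 anchor boxes for a single sliding-window position can be generated like this:

# +
import numpy as np

def make_anchors(center_x, center_y, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return the k = len(scales) * len(ratios) anchors (x1, y1, x2, y2) centered at one location."""
    boxes = []
    for s in scales:
        for r in ratios:
            w = s * np.sqrt(r)          # width grows with the square root of the aspect ratio...
            h = s / np.sqrt(r)          # ...height shrinks by it, so the area stays roughly s**2
            boxes.append([center_x - w / 2, center_y - h / 2,
                          center_x + w / 2, center_y + h / 2])
    return np.array(boxes)

print(make_anchors(300, 300).shape)     # (9, 4): k = 9 anchors at this sliding position
# -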
# # # # # + [markdown] id="vpQGcCK4zFIv" # ##ROI Pool vs ROI Align # Region of Interest Pooling operation is a simple neural net layer that performs maxpooling on inputs of non-uniform sizes to gain a fixed size feature map # + [markdown] id="QVXFHMZ1YSFJ" # ##Mask Representations # + [markdown] id="TarfYbMQzKKL" # ## Loss Function # + [markdown] id="TmJdVKs-zKiD" # ##Training the network # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import random import os import sys import json import numpy as np from sklearn.cluster import KMeans import math from collections import Counter, defaultdict import itertools import time # + file_path = './hw6_clustering.txt' seed = 1 random.seed(seed) def data_generator(): data = np.loadtxt(file_path, delimiter=',') data = data.tolist() random.shuffle(data) data_batches = [] for i in range(0, len(data), math.floor(0.2*len(data))): d = np.array(data[i:i+math.floor(0.2*len(data))]) data_batches.append(d) return data_batches data_batches = data_generator() # + class stats: def __init__(self, points): # create the stats from points self.point_indices = points[:, 0].tolist() self.actual_clusters = points[:, 1].tolist() self.n = len(points) self.sum = np.sum(points[:, 2:], axis=0) self.sumsq = np.sum(np.power(points[:, 2:], 2), axis=0)\ def update_stats(self, point): self.point_indices.append(point[0]) self.actual_clusters.append(point[1]) self.n += 1 self.sum += point[2:] self.sumsq += np.power(point[2:], 2) return def merge_with(self, cluster): self.point_indices += cluster.point_indices self.actual_clusters += cluster.actual_clusters self.n += cluster.n self.sum += cluster.sum self.sumsq += cluster.sumsq def calculate_variance(self): return np.power((self.sumsq/self.n) - np.power(self.sum/self.n, 2), 1/2) def calculate_centroid(self): return self.sum / self.n def mahalanobis_distance(cluster, point): return np.power(np.sum(np.power((point - cluster.calculate_centroid()) / cluster.calculate_variance(), 2)), 1/2) def bfr(data, k, d, output_file_path): global seed rs = defaultdict(list) ds = defaultdict(list) cs = defaultdict(list) output_file = open(output_file_path, "wt") output_file.write("The intermediate results:\n") for i, batch in enumerate(data): if(i == 0): # Run K-Means (e.g., from sklearn) with a large K (e.g., 5 times of the number of the input clusters) on the data in memory using the Euclidean distance as the similarity measurement. model = KMeans(n_clusters=k*5, random_state=seed) model = model.fit(batch[:, 2:]) # In the K-Means result from Step 2, move all the clusters that contain only one point to RS index = defaultdict(list) for pos, centroid_id in enumerate(model.labels_): index[centroid_id].append(pos) rest_of_data = [] for centroid_id, positions in index.items(): if(len(positions) == 1): rs[centroid_id] = np.take(batch, positions, axis=0) if(len(positions) > 1): rest_of_data.append(np.take(batch, positions, axis=0)) rest_of_data = np.concatenate(rest_of_data, axis=0) # Run K-Means again to cluster the rest of the data points with K = the number of input clusters. model = KMeans(n_clusters=k, random_state=seed) model = model.fit(rest_of_data[:, 2:]) # Use the K-Means result from Step 4 to generate the DS clusters (i.e., discard their points and generate statistics). 
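# Each DS (and later CS) cluster is stored only as its summary statistics -- the point count N and the
# per-dimension SUM and SUMSQ held by the `stats` class above -- so its centroid is SUM/N and its
# per-dimension standard deviation is sqrt(SUMSQ/N - (SUM/N)**2). In later rounds a point is assigned
# to the nearest DS/CS cluster only when its Mahalanobis distance to that cluster is below 2*sqrt(d),
# where d is the number of dimensions.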
index = defaultdict(list) for pos, centroid_id in enumerate(model.labels_): index[centroid_id].append(pos) for centroid_id, positions in index.items(): points = np.take(rest_of_data, positions, axis=0) ds[centroid_id] = stats(points) points = [x for x in rs.values()] points = np.concatenate(points, axis=0) model = KMeans(n_clusters=min(5*k, len(points)), random_state=seed) model = model.fit(points[:, 2:]) index = defaultdict(list) for pos, centroid_id in enumerate(model.labels_): index[centroid_id].append(pos) rs = defaultdict(list) for centroid_id, positions in index.items(): p = np.take(points, positions, axis=0) if(len(positions) == 1): rs[centroid_id] = p if(len(positions) > 1): cs[centroid_id] = stats(p) else: for point in batch: cluster_id, min_dist = min(list(map(lambda x: (x[0], mahalanobis_distance(x[1], point[2:])), ds.items())), key=lambda x: x[1]) if(min_dist < 2*math.pow(d, 1/2)): ds[cluster_id].update_stats(point) else: if(len(cs) != 0): cluster_id, min_dist = min(list(map(lambda x: (x[0], mahalanobis_distance(x[1], point[2:])), cs.items())), key=lambda x: x[1]) if(min_dist < 2*math.pow(d, 1/2)): cs[cluster_id].update_stats(point) else: if(len(rs) == 0): rs[0] = np.expand_dims(point, axis=0) else: rs[max(rs.keys())+1] = np.expand_dims(point, axis=0) else: if(len(rs) == 0): rs[0] = np.expand_dims(point, axis=0) else: rs[max(rs.keys())+1] = np.expand_dims(point, axis=0) points = [x for x in rs.values()] points = np.concatenate(points, axis=0) model = KMeans(n_clusters=min(5*k, len(points)), random_state=seed) model = model.fit(points[:, 2:]) index = defaultdict(list) for pos, centroid_id in enumerate(model.labels_): index[centroid_id].append(pos) rs = defaultdict(list) for centroid_id, positions in index.items(): p = np.take(points, positions, axis=0) if(len(positions) == 1): rs[centroid_id] = p if(len(positions) > 1): if(len(cs) == 0): cs[0] = stats(p) else: cs[max(cs.keys())+1] = stats(p) # merge cs clusters if distance < 2 root d to_be_merged = [] for c1, c2 in itertools.combinations(cs.keys(), 2): dist = mahalanobis_distance(cs[c1], cs[c2].calculate_centroid()) if(dist < 2*math.pow(dist, 1/2)): to_be_merged.append((c1, c2)) for (c1, c2) in to_be_merged: if(c1 in cs and c2 in cs): cs[c1].merge_with(cs[c2]) del cs[c2] # after each round output number_of_ds_points = sum([x.n for x in ds.values()]) number_of_clusters_cs = len(cs) number_of_cs_points = sum([x.n for x in cs.values()]) number_of_rs_points = sum([len(x) for x in rs.values()]) if(i != len(data)-1): output_file.write("Round {}: {},{},{},{}\n".format(i+1, number_of_ds_points, number_of_clusters_cs, number_of_cs_points, number_of_rs_points)) # after last round # merge cs with ds with distance less than 2 root d merged_cs = [] for k, c in cs.items(): point = c.calculate_centroid() cluster_id, min_dist = min(list(map(lambda x: (x[0], mahalanobis_distance(x[1], point)), ds.items())), key=lambda x: x[1]) if(min_dist < 2*math.pow(d, 1/2)): ds[cluster_id].merge_with(c) merged_cs.append(k) for k in merged_cs: del cs[k] number_of_ds_points = sum([x.n for x in ds.values()]) number_of_clusters_cs = len(cs) number_of_cs_points = sum([x.n for x in cs.values()]) number_of_rs_points = sum([len(x) for x in rs.values()]) output_file.write("Round {}: {},{},{},{}\n".format(len(data), number_of_ds_points, number_of_clusters_cs, number_of_cs_points, number_of_rs_points)) gt = [] pred = [] original_index = [] cluster_id = 0 for i, x in ds.items(): gt += x.actual_clusters pred += [cluster_id]*x.n original_index += x.point_indices cluster_id 
+= 1 for i, x in cs.items(): gt += x.actual_clusters pred += [cluster_id]*x.n original_index += x.point_indices cluster_id += 1 for i, x in rs.items(): gt += x[:, 1].tolist() original_index += x[:, 0].tolist() pred += [-1]*len(x) gt = [int(x) for x in gt] pred = [int(x) for x in pred] original_index = [int(x) for x in original_index] final_output = sorted([(x,y) for x,y in zip(original_index, pred)]) output_file.write("The clustering results:\n") for x, y in final_output: output_file.write("{},{}\n".format(x, y)) from sklearn.metrics.cluster import v_measure_score print(v_measure_score(gt, pred)) output_file.close() return ds, cs, rs # - st = time.time() d = data_batches[0].shape[-1]-2 ds, cs, rs = bfr(data_batches, 10, d, "output.txt") print(time.time()-st) from sklearn.metrics.cluster import v_measure_score v_measure_score(gt, pred) ds[6].calculate_variance() mahalanobis_distance(ds[4], data_batches[1][1]) np.expand_dims(data_batches[0][0], axis=0) d = {1:1, 2: 2} del d[1] d model = KMeans(n_clusters=k*5, random_state=12) model = model.fit(x) clusters = {} for i, v in enumerate(model.labels_): if(v in clusters): clusters[v].append(i) else: clusters[v] = [] mask = [] for i, v in clusters.items(): if(len(v) == 1): mask.append(True) else: mask += [False] * len(v) c = Counter(model.labels_) set_rs = set() for i, v in c.items(): if(v == 1): set_rs.add(i) set_rs model.labels_ x[list(map(lambda x: x in set_rs, model.labels_))] rs_index = list(map(lambda v: len(v[1])==1, clusters.items())) x[rs_index, :] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Rolling Update Tests # # Check rolling updates function as expected. 
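# Each scenario below applies an updated manifest and then polls the Seldon-managed deployment until the rollout converges, while asserting that predictions keep returning one of the two expected values. A helper along these lines captures that polling pattern (a sketch using `subprocess` instead of the notebook's `!` shell magic; the label selector and JSON path are the ones used in the cells below):

# +
import json
import subprocess
import time


def wait_for_replicas(expected, selector="seldon-deployment-id=fixed", timeout=60):
    """Poll the deployment selected by `selector` until its replica count equals `expected`."""
    for _ in range(timeout):
        out = subprocess.run(
            ["kubectl", "get", "deploy", "-l", selector, "-o", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        items = json.loads(out)["items"]
        if items and int(items[0]["status"].get("replicas", 0)) == expected:
            return True
        time.sleep(1)
    return False
# -

# For example, `wait_for_replicas(3)` mirrors the replica check used in the "Change Image" scenario below.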
import json import time # !kubectl create namespace seldon # !kubectl config set-context $(kubectl config current-context) --namespace=seldon # ## Change Image # !kubectl apply -f resources/fixed_v1.yaml # !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \ # -o jsonpath='{.items[0].metadata.name}') # !curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \ # -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \ # -H "Content-Type: application/json" # !kubectl apply -f resources/fixed_v2.yaml time.sleep(5) # To allow operator to start the update for i in range(60): # responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json" response = json.loads(responseRaw[0]) assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5) # jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json data="".join(jsonRaw) resources = json.loads(data) numReplicas = int(resources["items"][0]["status"]["replicas"]) if numReplicas == 3: break time.sleep(1) print("Rollout Success") # !kubectl delete -f resources/fixed_v1.yaml # ## Separate Service Orchestrator # !kubectl apply -f resources/fixed_v1_sep.yaml # !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \ # -o jsonpath='{.items[0].metadata.name}') # !curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \ # -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \ # -H "Content-Type: application/json" # !kubectl apply -f resources/fixed_v2_sep.yaml time.sleep(5) # To allow operator to start the update for i in range(60): # responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json" response = json.loads(responseRaw[0]) assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5) # jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json data="".join(jsonRaw) resources = json.loads(data) numReplicas = int(resources["items"][0]["status"]["replicas"]) if numReplicas == 1: break time.sleep(1) print("Rollout Success") # !kubectl delete -f resources/fixed_v1_sep.yaml # ## Two PodSpecs # !kubectl apply -f resources/fixed_v1_2podspecs.yaml # !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \ # -o jsonpath='{.items[0].metadata.name}') # !curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \ # -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \ # -H "Content-Type: application/json" # !kubectl apply -f resources/fixed_v2_2podspecs.yaml time.sleep(5) # To allow operator to start the update for i in range(60): # responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json" response = json.loads(responseRaw[0]) assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5) # jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json data="".join(jsonRaw) resources = json.loads(data) numReplicas = int(resources["items"][0]["status"]["replicas"]) if numReplicas == 1: break time.sleep(1) print("Rollout Success") # !kubectl delete -f resources/fixed_v1_2podspecs.yaml # ## Two Models # !kubectl apply -f resources/fixed_v1_2models.yaml # !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \ # -o 
jsonpath='{.items[0].metadata.name}') # !curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \ # -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \ # -H "Content-Type: application/json" # !kubectl apply -f resources/fixed_v2_2models.yaml time.sleep(5) # To allow operator to start the update for i in range(60): # responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json" response = json.loads(responseRaw[0]) assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5) # jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json data="".join(jsonRaw) resources = json.loads(data) numReplicas = int(resources["items"][0]["status"]["replicas"]) if numReplicas == 3: break time.sleep(1) print("Rollout Success") # !kubectl delete -f resources/fixed_v2_2models.yaml # ## Model name changes # # This will not do a rolling update but create a new deployment. # !kubectl apply -f resources/fixed_v1.yaml # !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=fixed \ # -o jsonpath='{.items[0].metadata.name}') # !curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \ # -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions \ # -H "Content-Type: application/json" # !kubectl apply -f resources/fixed_v2_new_name.yaml time.sleep(5) # To allow operator to start the update for i in range(60): # responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/fixed/api/v1.0/predictions -H "Content-Type: application/json" response = json.loads(responseRaw[0]) assert(response['data']['ndarray'][0]==1 or response['data']['ndarray'][0]==5) # jsonRaw=!kubectl get deploy -l seldon-deployment-id=fixed -o json data="".join(jsonRaw) resources = json.loads(data) numItems = len(resources["items"]) if numItems == 1: break time.sleep(1) print("Rollout Success") # !kubectl delete -f resources/fixed_v2_new_name.yaml # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## QUICK REVISION NOTES 5 # ### Dictionary and its method # + # Dictionary is a combination of key-value pairs seperated by semi-colan ':' and kept inside '{}' # Dictionary is a mutable object but it doent allow mutable object as key # Key cant be duplicated but values can be duplicated. # Indexing and slicing is not possible. # To access a vale we have to use its key. # - dct = {'std': {'name':'abc', 'num':40}, 'std2':{'num':50}} print(dct,'\nType of dct is',type(dct), '\nID of dct is', id(dct)) # + # To access a value in dictionary print(dct['std']) print(dct['std']['name']) print() # To change the value in dictionary dct['std']['num'] = 10 print(dct) # + # To view all the methods in dictionary print(dir(dict)) # The identifiers starting and ending with underscore (_) are language defined identifiers # Other than that everything are the methods of dictionary # + # get and setdefault # get() - It returns the value of the particular key # - If the key is not available it returns None which can be edited by the user. 
print('Get method in dictionary') print(dct.get('std')) # Returns the value of 'std' print(dct.get('std3')) # Returns None since the key is not present print(dct.get('hello', 'key-value pair not available')) # Returns user defined output if the key not available print(dct) # setdefult() - It returns the value of the particular key. # - If the key is not available it returns None and adds None as value to mentioned key # - This can also be used to add key-value pair to dictionary print('\nSetdefault method in dictionary') sdct = {1:20, 'true': 56, 'class':10} print(sdct.setdefault('age')) # Key 'age is not available so it returns None and saves it as value' print(sdct.setdefault('name','krishna')) # Key 'name' is not available so it returns the assigned value & saves it print(sdct) # update() - Its is used to update the dictionary with new key value pair or assign new value to available key print('\nDefault method in dictionary') udct = {10:20,30:10,50:89,45:82} udct.update({10:'suresh'}) # return type is None udct.update({'name':'krishna',101:202,50:{5:5,60:6}}) # return type is None print(udct) # + # Removing a pair from dictionary # pop() - It is used to remove a particular element from the dictionary print('Pop method in dictionary') pdct = {10:40,50:5,6:60,8:80} print(pdct.pop(10)) # removes the pair and returns the removed value # print(pdct.pop(100)) # returns key error if the key is not present in the dictionary print(pdct.pop(100, 'Key not present')) # To avoid KeyError custom message can be used print(pdct) # returns the dictionary after removing the mentioned pair # popitem() - This removes a last pair from the dictionary. # - Returns key error if the dictionary is empty. print('\nPopitem method in dictionary') print(pdct.popitem()) # removes the last pair and returns it print(pdct) # returns the dictionary after removing last pair # + #items(), keys(), values() - These creates a sequence of required elements from the dictionary sdct = {10:1, 20:2, 30:3, 40:4, 50:5} print(sdct.keys()) print(sdct.values()) print(sdct.items()) # This can be looped through dictionary print('\nPrinting keys') for i in sdct.keys(): print(i, end=' ') print('\n\nPrinting values') for i in sdct.values(): print(i, end=' ') print('\n\nPrinting items') for i in sdct.items(): print(i, end=' ') # - # fromkeys() - This method is used to create default values for all keys in dictionary. 
fkdct={} fkdct = fkdct.fromkeys([1,2,3,4,5,6,7],'suresh') # Assign it to a variable will return the expected output print(fkdct) # + # copy() - This method creates a shallow copy of the dictionary spdct = {1:400, 2:800, 3:1200, 4:2000} cpdct = spdct.copy() print('Before making changes to copy') print(spdct,cpdct,sep='\n') print('\nAfter making changes to copy') cpdct.update({4:1600}) print(spdct,cpdct,sep='\n') # To create a deepcopy: from copy import deepcopy dpdct = deepcopy(spdct) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #export from local.torch_basics import * from local.test import * from local.layers import * from local.data.all import * from local.notebook.showdoc import show_doc from local.optimizer import * from local.learner import * from local.callback.progress import * # + #default_exp callback.fp16 # - #hide from local.utils.test import * # # Mixed precision training # # > Callback and utility functions to allow mixed precision training # ## A little bit of theory # Continuing the documentation on the fastai_v1 development here is a brief piece about mixed precision training. A very nice and clear introduction to it is [this video from NVIDIA](http://on-demand.gputechconf.com/gtc/2018/video/S81012/). # # ### What's half precision? # In neural nets, all the computations are usually done in single precision, which means all the floats in all the arrays that represent inputs, activations, weights... are 32-bit floats (FP32 in the rest of this post). An idea to reduce memory usage (and avoid those annoying cuda errors) has been to try and do the same thing in half-precision, which means using 16-bits floats (or FP16 in the rest of this post). By definition, they take half the space in RAM, and in theory could allow you to double the size of your model and double your batch size. # # Another very nice feature is that NVIDIA developed its latest GPUs (the Volta generation) to take fully advantage of half-precision tensors. Basically, if you give half-precision tensors to those, they'll stack them so that each core can do more operations at the same time, and theoretically gives an 8x speed-up (sadly, just in theory). # # So training at half precision is better for your memory usage, way faster if you have a Volta GPU (still a tiny bit faster if you don't since the computations are easiest). How do we do it? Super easily in pytorch, we just have to put .half() everywhere: on the inputs of our model and all the parameters. Problem is that you usually won't see the same accuracy in the end (so it happens sometimes) because half-precision is... well... not as precise ;). # # ### Problems with half-precision: # To understand the problems with half precision, let's look briefly at what an FP16 looks like (more information [here](https://en.wikipedia.org/wiki/Half-precision_floating-point_format)). # # ![half float](images/half.png) # # The sign bit gives us +1 or -1, then we have 5 bits to code an exponent between -14 and 15, while the fraction part has the remaining 10 bits. Compared to FP32, we have a smaller range of possible values (2e-14 to 2e15 roughly, compared to 2e-126 to 2e127 for FP32) but also a smaller *offset*. # # For instance, between 1 and 2, the FP16 format only represents the number 1, 1+2e-10, 1+2*2e-10... which means that 1 + 0.0001 = 1 in half precision. 
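# A quick way to see this limited resolution directly (a small check; `torch` is assumed to be importable, as elsewhere in this notebook):

# +
import torch

one = torch.tensor(1., dtype=torch.float16)
print(one + 1e-4)                        # prints 1.0: the increment is below the FP16 spacing around 1
print(one + 1e-3)                        # prints 1.0010: this increment is representable
print(torch.finfo(torch.float16).eps)    # 0.0009765625, the gap between 1 and the next FP16 number
# -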
That's what will cause a certain numbers of problems, specifically three that can occur and mess up your training. # 1. The weight update is imprecise: inside your optimizer, you basically do w = w - lr * w.grad for each weight of your network. The problem in performing this operation in half precision is that very often, w.grad is several orders of magnitude below w, and the learning rate is also small. The situation where w=1 and lr*w.grad is 0.0001 (or lower) is therefore very common, but the update doesn't do anything in those cases. # 2. Your gradients can underflow. In FP16, your gradients can easily be replaced by 0 because they are too low. # 3. Your activations or loss can overflow. The opposite problem from the gradients: it's easier to hit nan (or infinity) in FP16 precision, and your training might more easily diverge. # # ### The solution: mixed precision training # # To address those three problems, we don't fully train in FP16 precision. As the name mixed training implies, some of the operations will be done in FP16, others in FP32. This is mainly to take care of the first problem listed above. For the next two there are additional tricks. # # The main idea is that we want to do the forward pass and the gradient computation in half precision (to go fast) but the update in single precision (to be more precise). It's okay if w and grad are both half floats, but when we do the operation w = w - lr * grad, we need to compute it in FP32. That way our 1 + 0.0001 is going to be 1.0001. # # This is why we keep a copy of the weights in FP32 (called master model). Then, our training loop will look like: # 1. compute the output with the FP16 model, then the loss # 2. back-propagate the gradients in half-precision. # 3. copy the gradients in FP32 precision # 4. do the update on the master model (in FP32 precision) # 5. copy the master model in the FP16 model. # # Note that we lose precision during step 5, and that the 1.0001 in one of the weights will go back to 1. But if the next update corresponds to add 0.0001 again, since the optimizer step is done on the master model, the 1.0001 will become 1.0002 and if we eventually go like this up to 1.0005, the FP16 model will be able to tell the difference. # # That takes care of problem 1. For the second problem, we use something called gradient scaling: to avoid the gradients getting zeroed by the FP16 precision, we multiply the loss by a scale factor (scale=512 for instance). That way we can push the gradients to the right in the next figure, and have them not become zero. # # ![half float representation](images/half_representation.png) # # Of course we don't want those 512-scaled gradients to be in the weight update, so after converting them into FP32, we can divide them by this scale factor (once they have no risks of becoming 0). This changes the loop to: # 1. compute the output with the FP16 model, then the loss. # 2. multiply the loss by scale then back-propagate the gradients in half-precision. # 3. copy the gradients in FP32 precision then divide them by scale. # 4. do the update on the master model (in FP32 precision). # 5. copy the master model in the FP16 model. # # For the last problem, the tricks offered by NVIDIA are to leave the batchnorm layers in single precision (they don't have many weights so it's not a big memory challenge) and compute the loss in single precision (which means converting the last output of the model in single precision before passing it to the loss). 
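# Put together, one iteration of this five-step loop looks roughly like the following (a schematic sketch in plain PyTorch, not the fastai callback developed below; like the other cells here it assumes a CUDA device, and the tiny Linear model, random data and fixed scale of 512 are just for illustration):

# +
import torch
import torch.nn as nn
import torch.nn.functional as F

xb, yb = torch.randn(16, 10).cuda(), torch.randn(16, 1).cuda()
model16 = nn.Linear(10, 1).cuda().half()                                           # FP16 model
master = [p.detach().float().requires_grad_(True) for p in model16.parameters()]   # FP32 master copy
opt, scale = torch.optim.SGD(master, lr=1e-2), 512.

pred = model16(xb.half())                                     # 1. forward pass in FP16...
loss = F.mse_loss(pred.float(), yb)                           #    ...loss computed in FP32
(loss * scale).backward()                                     # 2. scale the loss, backward in FP16
for mp, p in zip(master, model16.parameters()):
    mp.grad = p.grad.detach().float() / scale                 # 3. copy gradients to FP32 and unscale
opt.step(); opt.zero_grad()                                   # 4. update the FP32 master weights
for mp, p in zip(master, model16.parameters()):
    p.data.copy_(mp.data)                                     # 5. copy the master weights back to FP16
# -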
# # ![Mixed precision training](images/Mixed_precision.jpeg) # # ### Dynamic loss scaling # # The only annoying thing with the previous implementation of mixed precision training is that it introduces one new hyper-parameter to tune, the value of the loss scaling. Fortunately for us, there is a way around this. We want the loss scaling to be as high as possible so that our gradients can use the whole range of representation, so let's first try a really high value. In all likelihood, this will cause our gradients or our loss to overflow, and we will try again with half that big value, and again, until we get to the largest loss scale possible that doesn't make our gradients overflow. # # This value will be perfectly fitted to our model and can continue to be dynamically adjusted as the training goes, if it's still too high, by just halving it each time we overflow. After a while though, training will converge and gradients will start to get smaller, so we al # so need a mechanism to get this dynamic loss scale larger if it's safe to do so. The strategy used in the Apex library is to multiply the loss scale by 2 each time we had a given number of iterations without overflowing. # ## Util functions # Before going in the main `Callback` we will need some helper functions. We use the ones from the [APEX library](https://github.com/NVIDIA/apex). # export from local.utils.fp16_utils import convert_network, model_grads_to_master_grads, master_params_to_model_params # ### Converting the model to FP16 # We will need a function to convert all the layers of the model to FP16 precision except the BatchNorm-like layers (since those need to be done in FP32 precision to be stable). In Apex, the function that does this for us is `convert_network`. We can use it to put the model in FP16 or back to FP32. # + model = nn.Sequential(nn.Linear(10,30), nn.BatchNorm1d(30), nn.Linear(30,2)).cuda() model = convert_network(model, torch.float16) for i,t in enumerate([torch.float16, torch.float32, torch.float16]): test_eq(model[i].weight.dtype, t) test_eq(model[i].bias.dtype, t) model = nn.Sequential(nn.Linear(10,30), BatchNorm(30, ndim=1), nn.Linear(30,2)).cuda() model = convert_network(model, torch.float16) for i,t in enumerate([torch.float16, torch.float32, torch.float16]): test_eq(model[i].weight.dtype, t) test_eq(model[i].bias.dtype, t) # - # ### Creating the master copy of the parameters # From our model parameters (mostly in FP16), we'll want to create a copy in FP32 (master parameters) that we will use for the step in the optimizer. Optionally, we concatenate all the parameters to do one flat big tensor, which can make that step a little bit faster. 
# # We can't use the FP16 util function here as it doesn't handle multiple parameter groups, which is the thing we use to # - do transfer learning and freeze some layers # - apply discriminative learning rates # - don't apply weight decay to some layers (like BatchNorm) or the bias terms # + #export from torch.nn.utils import parameters_to_vector def get_master(opt, flat_master=False): model_params = [[param for param in pg if param.requires_grad] for pg in opt.param_groups] if flat_master: master_params = [] for pg in model_params: mp = parameters_to_vector([param.data.float() for param in pg]) mp = torch.nn.Parameter(mp, requires_grad=True) if mp.grad is None: mp.grad = mp.new(*mp.size()) master_params.append([mp]) else: master_params = [[param.clone().float().detach() for param in pg] for pg in model_params] for pg in master_params: for param in pg: param.requires_grad_(True) return model_params, master_params # - #hide #cuda learn = synth_learner() learn.model = convert_network(nn.Sequential(nn.Linear(1,1), nn.Linear(1,1)), torch.float16).cuda() learn.splitter = lambda m: [list(m[0].parameters()), list(m[1].parameters())] learn.opt = learn.opt_func(learn.splitter(learn.model), learn.lr) model_p,master_p = get_master(learn.opt) test_eq(len(model_p), 2) #2 pqrqm groups test_eq(len(master_p), 2) for pg1,pg2 in zip(model_p,master_p): test_eq([p.float() for p in pg1], pg2) #Same values but different types for p in pg1: assert p.dtype == torch.float16 #hide #cuda #Flattened version model_pf,master_pf = get_master(learn.opt, flat_master=True) test_eq(len(model_pf), 2) #2 pqrqm groups test_eq(len(master_pf), 2) for pg1,pg2 in zip(model_pf,master_pf): test_eq(len(pg2), 1) #One flattened tensor test_eq([p.float().squeeze() for p in pg1], [p for p in pg2[0]]) #Same values but different types for p in pg1: assert p.dtype == torch.float16 # ### Copy the gradients from model params to master params # After the backward pass, all gradients must be copied to the master params before the optimizer step can be done in FP32. The corresponding function in the Apex utils is `model_grads_to_master_grads` but we need to adapt it to work with param groups. # export def to_master_grads(model_pgs, master_pgs, flat_master=False): for (model_params,master_params) in zip(model_pgs,master_pgs): model_grads_to_master_grads(model_params, master_params, flat_master=flat_master) #hide #cuda xb,yb = learn.dbunch.one_batch() pred = learn.model.cuda()(xb.cuda().half()) loss = F.mse_loss(pred, yb.cuda().half()) loss.backward() to_master_grads(model_p, master_p) to_master_grads(model_pf, master_pf, flat_master=True) test_eq([[p.grad.float() for p in pg] for pg in model_p], [[p.grad for p in pg] for pg in master_p]) test_eq([[p.grad.float().squeeze() for p in pg] for pg in model_pf], [[p for p in pg[0].grad] for pg in master_pf]) xb.shape # ### Copy the master params to the model params # After the step, we need to copy back the master parameters to the model parameters for the next update. The corresponding function in Apex is `master_params_to_model_params`. 
# export def to_model_params(model_pgs, master_pgs, flat_master:bool=False)->None: for (model_params,master_params) in zip(model_pgs,master_pgs): master_params_to_model_params(model_params, master_params, flat_master=flat_master) #hide #cuda learn.opt.param_groups = master_p learn.opt.step() to_model_params(model_p, master_p) test_close([[p.float() for p in pg] for pg in model_p], [[p for p in pg] for pg in master_p], eps=1e-3) #hide #cuda learn.opt.param_groups = master_pf learn.opt.step() to_model_params(model_pf, master_pf, flat_master=True) test_close([[p.float().squeeze() for p in pg] for pg in model_pf], [[p for p in pg[0]] for pg in master_pf], eps=1e-3) # ### Checking for overflow # For dynamic loss caling, we need to know when the gradients have gone up to infinity. It's faster to check it on the sum than to do `torch.isinf(x).any()`. # export def test_overflow(x): s = float(x.float().sum()) return (s == float('inf') or s == float('-inf') or s != s) x = torch.randn(3,4) assert not test_overflow(x) x[1,2] = float('inf') assert test_overflow(x) # Then we can use it in the following function that checks for gradient overflow: # export def grad_overflow(pgs): for pg in pgs: for p in pg: if p.grad is not None and test_overflow(p.grad.data): return True return False #hide #cuda assert not grad_overflow(model_p) assert not grad_overflow(model_pf) model_p[1][0].grad.data[0,0] = float('inf') model_pf[0][1].grad.data[0] = float('inf') assert grad_overflow(model_p) assert grad_overflow(model_pf) # ## MixedPrecision - # export @docs class MixedPrecision(Callback): "Run training in mixed precision" toward_end=True def __init__(self, loss_scale=512, flat_master=False, dynamic=True, max_loss_scale=2.**24, div_factor=2., scale_wait=500, clip=None): assert torch.backends.cudnn.enabled, "Mixed precision training requires cudnn." self.flat_master,self.dynamic,self.max_loss_scale = flat_master,dynamic,max_loss_scale self.div_factor,self.scale_wait,self.clip = div_factor,scale_wait,clip self.loss_scale = max_loss_scale if dynamic else loss_scale def begin_fit(self): self.learn.model = convert_network(self.model, dtype=torch.float16) self.model_pgs,self.master_pgs = get_master(self.opt, self.flat_master) #Changes the optimizer so that the optimization step is done in FP32. 
self.learn.opt.param_groups = self.master_pgs if self.dynamic: self.count = 0 def begin_batch(self): self.learn.xb = to_half(self.xb) def after_pred(self): self.learn.pred = self.pred.float() def after_loss(self): if self.training: self.learn.loss *= self.loss_scale def after_backward(self): self.learn.loss /= self.loss_scale #To record the real loss #First, check for an overflow if self.dynamic and grad_overflow(self.model_pgs): self.loss_scale /= self.div_factor self.model.zero_grad() raise CancelBatchException() #skip step and zero_grad to_master_grads(self.model_pgs, self.master_pgs, self.flat_master) for master_params in self.master_pgs: for param in master_params: if param.grad is not None: param.grad.div_(self.loss_scale) #Check if it's been long enough without overflow if self.clip is not None: for group in self.master_pgs: nn.utils.clip_grad_norm_(group, self.clip) if self.dynamic: self.count += 1 if self.count == self.scale_wait: self.count = 0 self.loss_scale *= self.div_factor def after_step(self): self.model.zero_grad() #Zero the gradients of the model manually (optimizer disconnected) to_model_params(self.model_pgs, self.master_pgs, self.flat_master) def after_fit(self): self.learn.model = convert_network(self.model, dtype=torch.float32) _docs = dict(begin_fit="Put the model in FP16 and prepare the two copies of the parameters", begin_batch="Put the input in FP16", after_pred="Put the output back to FP32 so that the loss is computed in FP32", after_loss="Apply loss scaling to avoid gradient underflow", after_backward="Copy the gradients to the master param and undo the loss scaling", after_step="Copy the master params to the model params", after_fit="Put the model back in FP32" ) # + #hide class TestBeforeMixedPrecision(Callback): run_before=MixedPrecision def begin_fit(self): test_eq(next(iter(self.model.parameters())).dtype, torch.float32) def begin_batch(self): test_eq(self.x.dtype, torch.float32) def after_pred(self): test_eq(self.pred.dtype, torch.float16) def after_loss(self): self.loss = self.learn.loss.detach().clone() def after_backward(self): self.has_overflown = grad_overflow(self.mixed_precision.model_pgs) self.grads = [p.grad.data.clone() for p in self.model.parameters()] self.old_params = [p.data.clone() for p in self.model.parameters()] def after_step(self): assert not self.has_overflown def after_cancel_batch(self): assert self.has_overflown class TestAfterMixedPrecision(Callback): run_after=MixedPrecision def begin_fit(self): test_eq(next(iter(self.model.parameters())).dtype, torch.float16) def after_fit(self): test_eq(next(iter(self.model.parameters())).dtype, torch.float32) def begin_batch(self): test_eq(self.x.dtype, torch.float16) def after_pred(self): test_eq(self.pred.dtype, torch.float32) def after_loss(self): loss_scale = self.mixed_precision.loss_scale if self.training else 1. 
test_eq(self.loss, self.test_before_mixed_precision.loss * loss_scale) def after_backward(self): tbmp = self.test_before_mixed_precision test_eq(self.loss, tbmp.loss) #Test gradients have been copied and scaled back test_close(sum([[p.grad.data for p in pg] for pg in self.mixed_precision.master_pgs], []), [g.float()/self.mixed_precision.loss_scale for g in tbmp.grads]) def after_step(self): tbmp,mp =self.test_before_mixed_precision,self.mixed_precision #Test master params have been copied to model test_close(sum([[p.data for p in pg] for pg in mp.master_pgs], []), [p.data.float() for p in self.model.parameters()], eps=1e-3) #Test update has been done properly for p,g,op in zip(self.model.parameters(), tbmp.grads, tbmp.old_params): test_close(p.data.float(), op.float() - self.lr*g.float()/self.mixed_precision.loss_scale, eps=1e-3) # - #hide #cuda learn = synth_learner(cbs=MixedPrecision(), cuda=True) learn.model = nn.Sequential(nn.Linear(1,1), nn.Linear(1,1)).cuda() learn.opt_func = partial(SGD, mom=0.) learn.splitter = lambda m: [list(m[0].parameters()), list(m[1].parameters())] learn.fit(3, cbs=[TestAfterMixedPrecision(), TestBeforeMixedPrecision()]) #Check loss scale did change assert 1 < learn.mixed_precision.loss_scale < 2**24 #Check the model did train for v1,v2 in zip(learn.recorder.values[0], learn.recorder.values[-1]): assert v2 10] = 1 # # would give a value of 1 to the variable new_var for the subset of passengers whose fares greater than 10. Remember that new_var has a value of 0 for all other values (including missing values). # # A new column called Child in the train data frame has been created for you that takes the value NaN for all observations. # # ### Instructions # Set the values of Child to 1 is the passenger's age is less than 18 years. # # Then assign the value 0 to observations where the passenger is greater than or equal to 18 years in the new Child column. # # Compare the normalized survival rates for those who are <18 and those who are older. Use code similar to what you had in the previous exercise. # + # Create the column Child and assign to 'NaN' train["Child"] = float('NaN') # Assign 1 to passengers under 18, 0 to those 18 or older. Print the new column. # train['Child'][train['Age'] >= 18] = 0 # train['Child'][train['Age'] < 18] = 1 train.loc[train['Age'] >= 18, 'Child'] = 0 train.loc[train['Age'] < 18, 'Child'] = 1 print(train['Child']) # Print normalized Survival Rates for passengers under 18 print(train["Survived"][train["Child"] == 1].value_counts(normalize = True)) # Print normalized Survival Rates for passengers 18 or older print(train["Survived"][train["Child"] == 0].value_counts(normalize = True)) # - # ## 6.First Prediction # https://campus.datacamp.com/courses/kaggle-python-tutorial-on-machine-learning/getting-started-with-python?ex=6 # # In one of the previous exercises you discovered that in your training set, females had over a 50% chance of surviving and males had less than a 50% chance of surviving. Hence, you could use this information for your first prediction: all females in the test set survive and all males in the test set die. # # You use your test set for validating your predictions. You might have seen that contrary to the training set, the test set has no Survived column. You add such a column using your predicted values. Next, when uploading your results, Kaggle will use this variable (= your predictions) to score your performance. 
# # ### Instructions # Create a variable test_one, identical to dataset test # Add an additional column, Survived, that you initialize to zero. # Use vector subsetting like in the previous exercise to set the value of Survived to 1 for observations whose Sex equals "female". # Print the Survived column of predictions from the test_one dataset. # + # Create a copy of test: test_one test_one = test # Initialize a Survived column to 0 test_one['Survived'] = 0 # Set Survived to 1 if Sex equals "female" and print the `Survived` column from `test_one` # test_one['Survived'][test_one['Sex'] == 'female'] = 1 test_one.loc[test_one['Sex'] == 'female', 'Survived'] = 1 print(test_one['Survived']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.6.13 64-bit (''clip'': conda)' # name: python3 # --- import clip import torch from dataset import build_loaders from clip.model import CLIP from tqdm.autonotebook import tqdm # from tqdm import tqdm_notebook as tqdm import torch.nn as nn import torch.nn.functional as F from transformers import get_cosine_schedule_with_warmup import numpy as np class AvgMeter: def __init__(self, name="Metric"): self.name = name self.reset() def reset(self): self.avg, self.sum, self.count = [0] * 3 def update(self, val, count=1): self.count += count self.sum += val * count self.avg = self.sum / self.count def __repr__(self): text = f"{self.name}: {self.avg:.4f}" return text def get_lr(optimizer): for param_group in optimizer.param_groups: return param_group["lr"] # + torch_cross_entropy_func = torch.nn.CrossEntropyLoss() def cross_entropy(preds, targets, reduction='none'): log_softmax = nn.LogSoftmax(dim=-1) loss = (-targets * log_softmax(preds)).sum(1) if reduction == 'none': return loss elif reduction == 'mean': return loss.mean() def paper_loss(logits_per_image, logits_per_text): labels = torch.Tensor(np.arange(logits_per_image.size()[0])).long().to(cfg.device) # labels = torch.eye(logits_per_image.size()[0]).cuda() loss_text = torch_cross_entropy_func(logits_per_text, labels) loss_image = torch_cross_entropy_func(logits_per_image, labels) loss = (loss_text + loss_image) / 2.0 return loss.mean() def custom_loss(logits_per_image, logits_per_text): logits = logits_per_text targets = F.softmax( (logits_per_image + logits_per_text) / 2, dim=-1 ) texts_loss = cross_entropy(logits, targets, reduction='none') images_loss = cross_entropy(logits.t(), targets.t(), reduction='none') loss = (images_loss + texts_loss) / 2.0 return loss.mean() # - def valid_epoch(model, valid_loader): loss_meter = AvgMeter() tqdm_object = tqdm(valid_loader, total=len(valid_loader)) for images, texts in tqdm_object: images = images.to(cfg.device) texts = texts.to(cfg.device) logits_per_image, logits_per_text = model(images, texts) # loss = custom_loss(logits_per_image, logits_per_text) loss = paper_loss(logits_per_image, logits_per_text) count = images.size(0) loss_meter.update(loss.item(), count) tqdm_object.set_postfix(valid_loss=loss_meter.avg) return loss_meter # def train_epoch(model, train_loader, optimizer, lr_scheduler, step): def train_epoch(model, train_loader, optimizer): # loss_meter = AvgMeter() # lr_meter = AvgMeter() tqdm_object = tqdm(train_loader, total=len(train_loader)) i = 0 for images, texts in tqdm_object: images = images.to(cfg.device) texts = texts.to(cfg.device) logits_per_image, logits_per_text = model(images, texts) # loss = 
custom_loss(logits_per_image, logits_per_text) loss = paper_loss(logits_per_image, logits_per_text) loss.backward() if (i+1) % cfg.gradient_accumulation == 0: optimizer.step() optimizer.zero_grad() scheduler.step() # tqdm_object.set_postfix(train_loss=loss_meter.avg, learing_rate=cfg.lr) tqdm_object.set_postfix(train_loss=loss.item(), learing_rate=scheduler.get_last_lr()[0]) wandb.log({"LR": scheduler.get_last_lr()[0], "Loss": loss}) i += 1 # loss_meter.update(loss.item(), count) # lr_meter.update(scheduler.get_last_lr()[0], count) # return loss_meter return None # + from config import CFG # config cfg = CFG() model = CLIP(embed_dim=cfg.embed_dim, # vision image_resolution=cfg.image_resolution, vision_layers=cfg.vision_layers, vision_width=cfg.vision_width, vision_patch_size=cfg.vision_patch_size, context_length=cfg.context_length, vocab_size=cfg.vocab_size, # text transformer_width=cfg.transformer_width, transformer_heads=cfg.transformer_heads, transformer_layers=cfg.transformer_layers).to(cfg.device) # DataLoader train_loader, test_loader = build_loaders(cfg=cfg) # optimizer = torch.optim.Adam(model.parameters(), lr=cfg.lr, betas=cfg.adam_beta, eps=1e-6, weight_decay=cfg.weight_decay) optimizer = torch.optim.AdamW(model.parameters(), lr=cfg.lr, betas=cfg.adam_beta, eps=cfg.eps, weight_decay=cfg.weight_decay) # in paper # lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( # optimizer, mode="min", patience=cfg.patience, factor=cfg.factor # ) warmup_steps = cfg.warmup_epochs * len(train_loader) // cfg.gradient_accumulation num_steps = cfg.epochs * len(train_loader) // cfg.gradient_accumulation + 1 scheduler = get_cosine_schedule_with_warmup(optimizer, warmup_steps, num_steps, last_epoch=-1) # - import wandb wandb.init(project="CLIP-GPU", settings=wandb.Settings(console='off')) wandb.config.update(vars(cfg)) best_loss = float('inf') for epoch in range(cfg.epochs): print(f"Epoch: {epoch + 1}") model.train() # train_loss = train_epoch(model, train_loader, optimizer, lr_scheduler, step) train_loss = train_epoch(model, train_loader, optimizer) model.eval() with torch.no_grad(): valid_loss = valid_epoch(model, test_loader) if valid_loss.avg < best_loss: best_loss = valid_loss.avg torch.save(model.state_dict(), "./models/2m"+str(valid_loss.avg)[:5]+".pt") print("saved the best model! ") # lr_scheduler.step(valid_loss.avg) # print("lr: ", get_lr(optimizer)) output.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # CheckAtlas examples : Evaluate and compare different atlases # In this example, we show how to run checkatlas in a folder and produce a summary html file. The three atlases are downloaded in Scanpy (.h5ad) format. # ## Download datasets # Datasets are downloaded from the cellxgene server. This might takes few minutes. # B-cells compartment
# From: Cross-tissue immune cell analysis reveals tissue-specific features in humans - Domínguez Conde et al. (2022) Science # + language="bash" # mkdir -p data1 # cd data1/ # curl -o B-cells_compartment.h5ad "https://corpora-data-prod.s3.amazonaws.com/71be997d-ff75-41b9-8a9f-1288c865f921/local.h5ad?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIATLYQ5N5XZ5IYTEDX%2F20220601%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20220601T092903Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IMEYCIQCuM0q8VWuj3%2Bphm9rrEY5kfiDpHNi1OdMGEy22J9HBywIhAPLJqFERoPTJlBQR%2FuDT87BYVxxPM%2BkB34Qflk6nq4eoKusDCEEQARoMMjMxNDI2ODQ2NTc1IgykmjPglXNgfrQ6scoqyAN8seihHvMaIliJhL1YBHG6xUDiwfJeZQDpiQc%2Ffgz3vpo92NkM%2Fi%2F4PBhWbl9m915ezNTpJI7TuNkeWZnn%2FBg7e3iheegwPZOoULzMoc9VZnF%2BNnSTouYBlJ1uRousmEzuUHLYSfqqnK50yfaTneQmH09TFpxpgxE3MluOYfompF1mmytjkIADlYCrYRcQ5YsXauKMr%2FMKXtXhmkKwrP%2FpfqZ3mu98JbZv0GjWyAAlgJquLXvt9xK1FGCzGjz8OeZ1cNJE4RhNfULfz2wlx3TY%2BUZno4NYJV%2BYQI2is6M%2F9ncjR2L4j%2BSITjaQv9rBUcDfoxUlaZ4II2u9cx8501dT7qdnWFWTVtr6JDSQ5zQz%2F6ed2Yx2fFh0AHkAadBLxp%2Fpw8mrdxQS%2Bc9MgRzMGT2Lgs3s3Vo3CWKG6d0BX6SQKX8fHKJ0aa20oPbx9ic8dFcgto%2FawyWWyx86YSRvTL5JOkD7OI208mf6Kf75gtkVHYbmDW0Ti3p3ShbPrvXNpSGiJ8dJD6eZS0dExJWpAz0f%2F1oGsaj88DGAQ2r3BkbrTeC0DeVt%2F6GXfnm4qE4u5w8yN6VSfBwl%2BYUEY8qTNbI6C0WtMmwibZUw78HclAY6pAExe1SacHFeycJugsezBpHyvWsGTn4zHAxFFDBz7nQ5HsJIgPHgrnKB3x%2Bwt52153mK3rrmGnoZ6ViuwsWgHRYFp5Hyh7d8mKFvLHBlKnAifq8nWMZQK25S1Vgw2L5jrVrEBtyQB%2BQ2emqM5uiPPtvlWrPr4VqjgoGRC1gOhqcK8lDaJgmDxsUlTYMazKosnUHek1hWwfeSlSfDj4wtHzhxb%2FWkHQ%3D%3D&X-Amz-Signature=725af38dbe890d4da80718a57f1a3d7fb000e60b24f79db3a7f339c0f76586d5" # - # Endothelial
# From: Tabula Sapiens - The Tabula Sapiens Consortium* et al. (2022) Science # + language="bash" # cd data1/ # curl -o Tabula_Sapiens_Endothelial.h5ad "https://corpora-data-prod.s3.amazonaws.com/5571ad37-d0ea-4372-9905-67b5a4deaf80/local.h5ad?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=EUG%2F20220601%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20220601T093425Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEEUaCXVzLXdlc3QtMiJHMEUCIQDO7nzihCvsRHuEOt1RIX90IZVpKfVvEhf8GD7aRraZfwIgNgJoiaeDA0bqTGdxiZO3Ul216bewEwlwpXFSO4ABMz%2BFiKhgefWKwMxVAJYZXMX79vDPfxRm3nl%2Bexg78decVGq96c2TJ70RaYzX9eltV91RKCofE7FI8hyibDDlGt15WWpoesyBhLeK1IdR1atlsaZ5I7%2FCwA0UfIBgV6nuenyU476%2BddGc%2FG7jFGWGNhcpBmd3VFsRBQjR%2FiNaGCR8AZibu4ZdL2jnlPx0ontq0rmE9J9h%2F0x1EE%2Fx5FZHDcuriiflw7%2BQ8rMM%2F8cCwvuhTPqw%2BQNGOAe98dBNwnVUZPDxMO4rJtphRGBCXrzReNRA88HrmJBA1weIeFxe8cQs6ihWyhVB777qdUC6CJ7RzzKolGAZoghp%2FhW7B3ol4ec4JW45sq5wMklZ%2FdFA9%2BM%2FL2O98sL8P4uqv3sVuWJt2M8ulr4pfo2X8psCpWQuJ3IdfvaoknU9nAXRwxuSgiDo6RDYdg7dOU9quqy9yZyIK1r4Wg%2Bv1x0PgqrQpbKfdamBMFfcg%2FJFhVuv8MUSUj2eyynSr3x8%2Brh4fsaqtP0WVbpPkNnBHukeL%2B10JG9dSpGWP4GTtsw%2BEpERTWL5bTd4mBo6cuZ6BbLftv%2FeVJ%2FqGdVY0zcV7wEsi%2FV6mtjrvcp1oWr1zCJ49uUBjqlAW%2F6TRywhy%2Fvue7RBEaFykKOzIHO2XwCLbEZpI8kcSJWgz3EgjPCqZGSn4d32CwpJPcsI4t%2B7IQzLhmmOFvMUADexFPj3joM2xF7j8N1dwiLK4mKsdMBP7%2FthtHX1aqXE5kAb9Nx1z3dqzH3oR4EZ4lZxL2MBCbssTavVBe5MWxauQRd4AdrdLc9QiaGCxWaRV4Gj6nq3CGPuA%2F%2F%2FiFFhUbsXNGfGA%3D%3D&X-Amz-Signature=6391a1a4a1581f4130d7287cd09eac6bd84e2c0fd3ff9dd240f00cb95df01201" # - # Fetal
# From: # Single cell derived mRNA signals across human kidney tumors - Young et al. (2021) Nat Commun # + language="bash" # cd data1/ # curl -o Fetal.h5ad "https://corpora-data-prod.s3.amazonaws.com/a51c6ece-5731-4128-8c1e-5060e80c69e4/local.h5ad?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIATLYQ5N5X2OOOPPN6%2F20220601%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20220601T093718Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEEUaCXVzLXdlc3QtMiJIMEYCIQDa%2F6DcbFSKCAKSZOahE4BDQFPD4MYy7HtENvd8EdXDKusDCD4QARoMMjMxNDI2ODQc1IgwjMpK93hi3mahqnTwqyAN3k8W2x%2FHWeEEdTeLEIU5Xzf%2BO5IShslHQatUWt%2FOAqUdDklzDnXZc8SrP0h0c%2F%2BfXOpqdWdXH5xJs2ZJBC9WGdi26LPlJWc293IrZx71kYIGMdRuQDaTB9g3osr%2B9I9Um48t5g%2F4aTHeHuBxjAh4a6ELJcGVio%2B2%2BHap5an9EgHAHbXsWGxwlj%2FNgxLSfPJF6ycnzPeT6Yn%2FK%2BK9EiqgHmi95luxg4seHcun5ZZnEO3Tnc3v2xJ%2BfIghpmikYQvWLcUi34ppOVnAPMa4d%2FXkgtuQUe0Sg0xLhzulsxyBDgp4yHqtkyvC89lq98cFjGB5qsnMwSrT2RiGPjFSAEVhPPwCky7GQZ3MGAEC50A8UXDcnprLObV%2BSDpfjg9OTDl4WRCQTiEEFAsLDlwOnCRyO0bSnItksbanr0wmSkCmUsJ6Ju%2FNxtui1VQZFx5dhfKU8xWOv%2B5nBCxED2%2Bv6LmalNzKqtlDI5Jkx3qok1%2FsUTBKP9eEDiM3aoSm4b95kIariV7pusfAttyJwu5oEM5YkR1XYNum7bJYNxEh64zRRM9b7zSOYlB3T%2BNl41QlPY6Ssc183lm10PbwkteRGlJAeBconJmwJS4Aw%2BOXblAY6pAHF4Ih5mLvCi0RKHu8VNXo9YhaQV%2FKDwBOXbrJ3ejikSLCANYGiKFhFT2ch5CZ8PKJDJsyYujWQ%2BjzBNc0K2JDtj4dOJdRFSspbgx%2BKg1Dl6sY3KJrCX%2Fn4YCe59vQehAGEH8GMYEpJemZShh%2B0V7m8Uc8jyV7fL81GQ%2BCs7zUDSNzjNNFte1uxPmrnt5Rth5tdrZW66IHm6wToY3sM1sixVBIfyA%3D%3D&X-Amz-Signature=fa37236f3b672df8f9fbfe2e6168836a7d2ef887d338c70ac7a6e3e7a4d4f234" # - # # ## Run checkatlas # If checkatlas is installed in your environment, you just need to run this cell. This will produce all metric tables and figures needed. # + language="bash" # python -m checkatlas data1/ # - # ## Run MultiQC # Once checkatlas has been run, all tables and fig cazn be found in the checkatlas_files folder. MultiQC will retrieve these files and create the html summary files. # WARNING: Install and run only MultiQC from https://github.com/becavin-lab/MultiQC/tree/checkatlas. Otherwise checkatlas files will not be taken into account. # + language="bash" # multiqc -f --cl-config "ignore_images: false" -c multiqc_config.yaml -n "CheckAtlas_example_1" -o "CheckAtlas_example_1" data1/ # - # If multiqc ran without error an html report has been created in CheckAtlas_example1/CheckAtlas_example1.html
# Open it and check your atlases ! # + from IPython.display import IFrame IFrame( src="CheckAtlas_example_1/CheckAtlas_example_1.html", width="100%", height="500px", ) # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # # High Energy Physics example tracks <- read.csv("data/tracks_real_sorted.csv") dim(tracks) tracks <- tracks[,2:175] tracks_pt <- sqrt(tracks[,4]^2+tracks[,5]^2) tracks <- tracks[,7:174] dim(tracks) 168/6 counter <- 1; for(i in seq(1,168, by=6)){ names(tracks)[i] <- paste("X", counter, sep=""); names(tracks)[i+1] <- paste("Y", counter, sep=""); names(tracks)[i+2] <- paste("Z", counter, sep=""); names(tracks)[i+3] <- paste("CH0_", counter, sep=""); names(tracks)[i+4] <- paste("CH1_", counter, sep=""); names(tracks)[i+5] <- paste("W", counter, sep=""); counter <- counter +1; } names(tracks) new_tracks <- data.frame(0) counter <- 1; for(i in seq(1,168, by=6)){ new_tracks <- cbind(new_tracks, tracks[,i]) new_tracks <- cbind(new_tracks, tracks[,i+1]) new_tracks <- cbind(new_tracks, tracks[,i+2]) counter <- counter +1; } dim(new_tracks) tracks <- new_tracks[,2:85] dim(tracks) head(tracks) names(tracks) counter <- 1; for(i in seq(1,84, by=3)){ names(tracks)[i] <- paste("X", counter, sep=""); names(tracks)[i+1] <- paste("Y", counter, sep=""); names(tracks)[i+2] <- paste("Z", counter, sep=""); counter <- counter +1; } tracks <- cbind(tracks, tracks_pt) names(tracks)[85] <- "pt" dim(tracks) model <- lm(tracks$pt ~ ., data=tracks) summary(model) set.seed(123) nrow(tracks) sample_size <- as.integer(nrow(tracks)*.8) sample_index <- sample(nrow(tracks), sample_size, replace = FALSE) tracks_train <- tracks[sample_index,] tracks_test <- tracks[-sample_index,] plot(tracks_test$X1, tracks_test$pt) tracks.lm <- lm(pt ~ poly(X1,2)+ poly(Y1,2) , data=tracks_train) summary(tracks.lm) predictions <- predict(tracks.lm, data = tracks_test[1:84]) sum((predictions-tracks_test$pt)^2) rmse <- sqrt(mean((tracks_test$pt - predictions)^2)) barplot(tracks_test$pt[tracks_test$pt < 5]) barplot(predictions) rmse write.csv(tracks, "data/tracks.csv", row.names=FALSE) tr <- read.csv("data/tracks.csv") names(tr) dim(tr) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + data1 = '''{ "calories": { "a": 420, "b": 380, "c": 390 }, "duration": { "a": 50, "b": 40, "c": 45, "d": [ { "1": "61", "2": "62", "3": "63" }, { "10": "71", "20": "72", "30": "73" } ] } }''' data2 = '''{ "duration": { "a": 50, "b": 40, "c": 45, "d": [ { "10": "71", "20": "72", "30": "73" }, { "1": "61", "2": "62", "3": "63" } ] }, "calories": { "a": 420, "b": 380, "c": 390 } }''' # + import json x = json.loads(data1) y = json.loads(data2) for i in x: print(x[i]) # try: # assert sorted(x) == sorted(y) # print("Same") # except: # print("Different") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python3 # --- #

# ## Table of contents
#
# +
# Import libraries
import random
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
# make_blobs now lives in sklearn.datasets (sklearn.datasets.samples_generator was removed in newer scikit-learn releases)
from sklearn.datasets import make_blobs
# %matplotlib inline
print('imported')
# -

# ## k-Means on a randomly generated dataset
#
# Let's create our own dataset for this lab!

# set up the random seed using numpy's random.seed() function, starting at 0
np.random.seed(0)

# Next, we create random clusters of points using the make_blobs class. make_blobs can take in many inputs.

#
# **Input**
#
# - `n_samples`: The total number of points equally divided among clusters.
#     - Value will be: 5000
# - `centers`: The number of centers to generate, or the fixed center locations.
#     - Value will be: [[4, 4], [-2, -1], [2, -3], [1, 1]]
# - `cluster_std`: The standard deviation of the clusters.
#     - Value will be: 0.9
#
# **Output**
#
# - `X`: Array of shape [n_samples, n_features]. (Feature Matrix)
#     - The generated samples.
# - `y`: Array of shape [n_samples]. (Response Vector)
#     - The integer labels for cluster membership of each sample.
# call make_blobs with the inputs listed above
X, y = make_blobs(n_samples=5000, centers=[[4, 4], [-2, -1], [2, -3], [1, 1]], cluster_std=0.9)

# display the plot of the randomly generated data
plt.scatter(X[:, 0], X[:, 1], marker='.')

# The scatter plot shows the 4 clusters; a quick shape check on the generated arrays follows.
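# The **Output** entries above can be verified directly: `X` should hold 5000 two-dimensional samples and `y` one integer label per sample. A minimal sanity check, assuming `X` and `y` come from the `make_blobs` call above:

# +
print("X shape:", X.shape)        # expected (5000, 2): n_samples x n_features
print("y shape:", y.shape)        # expected (5000,): one cluster label per sample
print("labels:", np.unique(y))    # expected [0 1 2 3]: one label per centre
# -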

# ### Setting up K-Means

# Now that we have our random data, let's set up our K-Means clustering.
# The KMeans class has many parameters that can be used; I will be using these three:
#
# - `init`: Initialization method of the centroids.
#     - Value will be: "k-means++"
#     - k-means++ selects initial cluster centers for k-means clustering in a smart way to speed up convergence.
# - `n_clusters`: The number of clusters to form, as well as the number of centroids to generate.
#     - Value will be: 4 (since we have 4 centers)
# - `n_init`: Number of times the k-means algorithm will be run with different centroid seeds. The final result will be the best output of the n_init consecutive runs in terms of inertia.
#     - Value will be: 12
#
# Initialize KMeans with these parameters, where the resulting estimator is called k_means.
k_means = KMeans(init="k-means++", n_clusters=4, n_init=12)

# fit the KMeans model with the feature matrix X
k_means.fit(X)

# grab the cluster labels from the .labels_ attribute and save them as k_means_labels
k_means_labels = k_means.labels_
k_means_labels

# grab the centroid coordinates from the .cluster_centers_ attribute and save them as k_means_cluster_centers
k_means_cluster_centers = k_means.cluster_centers_
k_means_cluster_centers
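# The `n_init` entry above refers to inertia: the fitted estimator exposes the within-cluster sum of squares as `inertia_` (the quantity the best of the n_init runs minimizes), and `predict` assigns new observations to the nearest learned centroid. A minimal sketch, assuming `k_means` has been fitted on `X` as above; the two query points are made up for illustration:

# +
print("Inertia (within-cluster sum of squared distances):", k_means.inertia_)

# assign two hypothetical new observations to their closest clusters
new_points = np.array([[4.0, 4.0], [0.0, 0.0]])
print("Predicted clusters for the new points:", k_means.predict(new_points))
# -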

# ### Creating the Visual Plot

# So now that we have the random data generated and the KMeans model initialized, let's plot them and see what it looks like!

# +
# Initialize the plot with the specified dimensions.
fig = plt.figure(figsize=(10, 8))

# Colors uses a color map, which will produce an array of colors based on the number of
# labels there are. We use set(k_means_labels) to get the unique labels.
colors = plt.cm.Spectral(np.linspace(0, 1, len(set(k_means_labels))))

# Create a plot
ax = fig.add_subplot(1, 1, 1)

# For loop that plots the data points and centroids. k will range from 0-3,
# which will match the possible clusters that each data point is in.
for k, col in zip(range(len([[4, 4], [-2, -1], [2, -3], [1, 1]])), colors):

    # Create a list of all data points, where the data points that are in the
    # cluster (e.g. cluster 0) are labeled True, else they are labeled False.
    my_members = (k_means_labels == k)

    # Define the centroid, or cluster center.
    cluster_center = k_means_cluster_centers[k]

    # Plot the data points with color col.
    ax.plot(X[my_members, 0], X[my_members, 1], 'w', markerfacecolor=col, marker='.')

    # Plot the centroid with the same color, but with a darker outline.
    ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=8)

# Title of the plot
ax.set_title('K-Means')

# Remove x-axis ticks
ax.set_xticks(())

# Remove y-axis ticks
ax.set_yticks(())

# Show the plot
plt.show()
# -

# ## Customer Segmentation with K-Means

# # download data # !wget -O Cust_Segmentation.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/Cust_Segmentation.csv # + # Import library import pandas as pd print('imported') # read the .csv file cust_df = pd.read_csv("Cust_Segmentation.csv") cust_df.head() # - #

# ### Pre-processing

# #### Normalizing
#
# Normalization is a statistical method that helps mathematically based algorithms interpret features with different magnitudes and distributions on an equal footing. We use ***StandardScaler()*** to normalize our dataset.

# +
from sklearn.preprocessing import StandardScaler

# The later cells work on a numeric dataframe called df; we assume the Address column of
# Cust_Segmentation.csv is the only non-numeric field and drop it before scaling.
df = cust_df.drop('Address', axis=1)

X = df.values[:, 1:]
X = np.nan_to_num(X)
Clus_dataSet = StandardScaler().fit_transform(X)
Clus_dataSet
# -
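# As a quick check, StandardScaler rescales each feature to (x - mean) / std, so every scaled column should have roughly zero mean and unit standard deviation. A minimal sketch, assuming Clus_dataSet was computed as above:

# +
print("Column means (should be close to 0):", Clus_dataSet.mean(axis=0).round(3))
print("Column stds (should be close to 1):", Clus_dataSet.std(axis=0).round(3))
# -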

# ### Modeling

# Without a clustering algorithm, segmenting these customers would amount to guessing; k-Means clustering makes this grouping much easier.
#
# Apply k-Means for a cluster labels. # + clusterNum = 4 k_means = KMeans(init = "k-means++", n_clusters = clusterNum, n_init = 12) k_means.fit(X) labels = k_means.labels_ print(labels) # - # ### Insights # assign the labels to each row in dataframe df['Clus_group'] = labels df.head() # the centroid mean values in each cluster df.groupby('Clus_group').mean() # + # plot the distribution of customers based on their age & income in 2D area = np.pi * ( X[:, 1])**2 plt.scatter(X[:, 0], X[:, 3], s=area, c=labels.astype(np.float), alpha=0.5) plt.xlabel('Age', fontsize=18) plt.ylabel('Income', fontsize=16) plt.show() # + from mpl_toolkits.mplot3d import Axes3D fig = plt.figure(1, figsize=(12, 10)) plt.clf() ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134) plt.cla() ax.set_xlabel('Education', fontsize=18) ax.set_ylabel('Age',fontsize=18) ax.set_zlabel('Income',fontsize=18) ax.scatter(X[:, 1], X[:, 0], X[:, 3], c= labels.astype(np.float)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Train an RBF network via gradient descent # # In this notebook, we show how to instantiate and train an RBF network (and an MLP network). We will test the OOD capabilities of the trained deterministic discriminative models by looking at the softmax entropy and some chosen OOD datasets. # + import os import sys curr_dir = os.path.basename(os.path.abspath(os.curdir)) # See __init__.py in folder "toy_example" for an explanation. if curr_dir == 'tutorials' and '..' not in sys.path: sys.path.insert(0, '..') from hypnettorch.data.mnist_data import MNISTData from hypnettorch.data.fashion_mnist import FashionMNISTData from hypnettorch.mnets import MLP from hypnettorch.utils import misc import matplotlib.pyplot as plt import numpy as np from sklearn.metrics import roc_auc_score from time import time import torch from torch import nn from finite_width.rbf_net import StackedRBFNet from IPython.display import display, Markdown, Latex #display(Markdown('*some markdown* $\phi$')) # %matplotlib inline # %load_ext autoreload # %autoreload 2 device = 'cuda' # - mnist = MNISTData('.', use_one_hot=True) fmnist = FashionMNISTData('.', use_one_hot=True) # + def test_net(net, data, use_test=True): with torch.no_grad(): test_in = data.input_to_torch_tensor( \ data.get_test_inputs() if use_test else data.get_val_inputs(), \ device, mode='inference') test_out = data.input_to_torch_tensor( \ data.get_test_outputs() if use_test else data.get_val_outputs(), device, mode='inference') test_lbls = test_out.max(dim=1)[1] logits = net(test_in) pred_lbls = logits.max(dim=1)[1] acc = torch.sum(test_lbls == pred_lbls) / test_lbls.numel() * 100. 
return acc def train_net(net, data, lr=1e-3, nepochs=10): criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(net.internal_params, lr=lr) for epoch in range(nepochs): i = 0 for batch_size, x, y in data.train_iterator(32): i += 1 x_t = data.input_to_torch_tensor(x, device, mode='train') y_t = data.output_to_torch_tensor(y, device, mode='train') # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize p_t = net(x_t) loss = criterion(p_t, y_t.max(dim=1)[1]) loss.backward() optimizer.step() if i % 500 == 0: print('[%d, %5d] loss: %.3f, val-acc: %.2f%%' % (epoch + 1, i + 1, loss.item(), test_net(net, data, use_test=False))) print('Training finished with test-acc: %.2f%%' % (test_net(net, mnist))) # + rbf_net = StackedRBFNet(n_in=np.prod(mnist.in_shape), n_nonlin_units=(100,), n_lin_units=(10,), use_bias=True, bandwidth=5000).to(device) train_net(rbf_net, mnist, lr=1e-2, nepochs=30) # + mlp_net = MLP(n_in=np.prod(mnist.in_shape), n_out=10, hidden_layers=(400,400)).to(device) train_net(mlp_net, mnist, lr=1e-3) # + def calc_auroc(net, ind_data, ood_data): with torch.no_grad(): ind_inps = ind_data.input_to_torch_tensor( \ ind_data.get_test_inputs(), device, mode='inference') ind_logits = net(ind_inps) ind_softmax = nn.functional.softmax(ind_logits, dim=1).\ cpu().detach().numpy() ind_entropies = - np.sum(ind_softmax * \ np.log(np.maximum(ind_softmax, 1e-5)), axis=1) ood_inps = ood_data.input_to_torch_tensor( \ ood_data.get_test_inputs(), device, mode='inference') ood_logits = net(ood_inps) ood_softmax = nn.functional.softmax(ood_logits, dim=1).\ cpu().detach().numpy() ood_entropies = - np.sum(ood_softmax * \ np.log(np.maximum(ood_softmax, 1e-5)), axis=1) y_true = [0]*len(ind_entropies) + [1]*len(ood_entropies) y_score = ind_entropies.tolist() + ood_entropies.tolist() auroc = roc_auc_score(y_true, y_score) return auroc print('MLP AUROC: %.3f' % (calc_auroc(mlp_net, mnist, fmnist))) print('RBF Net AUROC: %.3f' % (calc_auroc(rbf_net, mnist, fmnist))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Querying Hierarchical Data Using the `reach` Action in the `NETWORK` Actionset # In this demonstration, we load a sample company's organization structure into our CAS server, and query it to identify all employees in a single manager's reporting structure. # # This example emulates the same query, but conducted using recursive SQL queries, as shown in [Recursive Query with PROC SQL](https://kansascode.blogspot.com/2021/06/recursive-query-with-proc-sql.html?m=1) as well as [Learn PostgreSQL Recursive Query By Example](https://www.postgresqltutorial.com/postgresql-recursive-query/). # # We show how through use of the `reach` action, we identify all employees who report to a manager named ____. Traversal algorithms allow us to avoid recursion, making our code simpler to understand and debug. # # ---------------- # # The basic flow of this notebook is as follows: # 1. Load the organizational structure into a Pandas DataFrame as a set of links. # 2. Connect to a CAS server and upload our DataFrame. # 3. Execute the `reach` action and fetch the results. # 4. Compare results with the known results from the examples referenced above. # # ---------------- # __Prepared by:__ # (: [dtherrick](www.github.com/dtherrick)) # ## Imports # # The modules below are needed for this exercise. 
# # | Module | Description | # |:-------------------|:----------------------------------------------------------------------------------:| # | `os` | Allows access to environment variables. | # | `swat` | SAS Python module that orchestrates communication with a CAS server. | # | `pandas` | Data management module we use for preparation of local data. | # | `graphviz.Digraph` | Used to display the organizational structure with the results of our reach query. | # + import os import swat import pandas as pd from graphviz import Digraph # - # ## Sample Data # # We take our sample data from the original post about recursive SQL. # # | `employee_id` | `full_name` | `manager_id` | # |:-------------:|:-------------------:|:------------:| # | 1 | '' | NULL | # | 2 | '' | 1 | # | 3 | '' | 1 | # | 4 | '' | 1 | # | 5 | '' | 1 | # | 6 | '' | 2 | # | 7 | '' | 2 | # | 8 | '' | 2 | # | 9 | '' | 2 | # | 10 | '' | 3 | # | 11 | '' | 3 | # | 12 | '' | 3 | # | 13 | '' | 3 | # | 14 | '' | 4 | # | 15 | '' | 4 | # | 16 | '' | 7 | # | 17 | '' | 7 | # | 18 | '' | 8 | # | 19 | '' | 8 | # | 20 | '' | 8 | # # For our purposes, let's put that data into a master dataframe, which we can use both in our reach calculations as well as for verifications and visualization later in this exercise. # + # We use a list of tuples, along with a list of column names, to create the master dataframe. colNames = ["employee_id", "full_name", "manager_id"] data = [ (1, "", 0), (2, "", 1), (3, "", 1), (4, "", 1), (5, "", 1), (6, "", 2), (7, "", 2), (8, "", 2), (9, "", 2), (10, "", 3), (11, "", 3), (12, "", 3), (13, "", 3), (14, "", 4), (15, "", 4), (16, "", 7), (17, "", 7), (18, "", 8), (19, "", 8), (20, "", 8), ] # create the dataframe from the above source data. dfMasterData = pd.DataFrame(data, columns=colNames) # display the table, but we don't care about the index for this exercise. dfMasterData.style.hide_index() # - # ## Connect to our Viya (CAS) server # # We now need to connect to our CAS server. Contact your SAS administrator for the proper credentials to connect to CAS using `python-swat`. # # As a convention, I keep my CAS host and port set as environment variables. This allows me to both avoid placing user-specific data in notebooks, as well as adds a layer of security. # # For ease-of-reading, I assign those environment variables into `host` and `port` variables, then pass them into the connection statement. # # Once connected, we need access to the `PROC NETWORK` actions. In CAS terminology, that is an `actionset` so we load that now. # + host = os.environ['CAS_HOST_ORGRD'] port = int(os.environ['CAS_PORT']) # Connect to the server conn = swat.CAS(host, port) # Load the actionsets we need conn.loadactionset('network') # + [markdown] heading_collapsed=true # ## Load source tables into CAS # # All of the `network` actions require a graph table. In this exercise, we simply need to pass the action a `LinkSetIn` table that represents the relationships between the employees, and a `NodeSubsetIn` table that represents the specific nodes whose reach we wish to calculate. # # As we defined in the master data above, we only need to pass the `employee_id` and `manager_id` values from each row to create the links table. Since ____ does not have a manager, we want to skip his record in the links table we upload to CAS. The easiest way is to create an intermediate dataframe using the pandas `loc` method to exclude the first row, and limit to only the `employee_id` and `manager_id` columns. 
# # For the `NodeSubsetIn` table, we create an intermediate dataframe with a specific format. Its source must be a list of dicts, where each dictionary has two keys: `node` and `reach`. `node`'s value is the identifier of the node whose reach we calculate. `reach` is the order of reach calculations we want to calculate. Since we are only calculating for one source node, `reach`'s value is 1. # # Once the intermediate dataframes are created, we upload them to CAS using the `upload` method. The `casout=` argument allows us to define the names of the CAS tables. # + hidden=true dfLinkSetIn = dfMasterData.loc[dfMasterData.employee_id>1, ['employee_id', 'manager_id']] dfNodeSubsetIn = pd.DataFrame([{'node': 2, 'reach': 1}]) swat.options.cas.print_messages=False _ = conn.upload(dfLinkSetIn, casout=dict(name='LinkSetIn', replace=True)) _ = conn.upload(dfNodeSubsetIn, casout=dict(name='NodeSubsetIn', replace=True)) # - # ## Find all employees who report to # # We can find all of Megan's subordinates using the `reach` action. # # The reach action takes several options: # * `links`: The name of the source table that contains the full set of links that describe our graph. # * `nodessubset`: The name of the source table that contains the nodes and reach we wish to search against. # * `direction`: Define whether the graph in the `links` option is a directed or undirected graph. For this exercise, we must explicitly state that this is a directed graph using the string `directed`. # * `linksvar`: We include this option because we did not name our columns `from` and `to`. (If we had, we could ignore this option). For our calculation, we pass a dictionary where `manager_id` is the `from` value, and `employee_id` is the `to` value. # * `maxreach`: an integer value that defines how many levels deep we should calculate the reach. We use 2 in this case. # * `outReachNodes`: an output table with the nodes that are included in the reach calculations. # * `outReachLinks`: an output table with the link pairs that are included in the reach calculations. # * `outCounts`: an output summary table that has the total count of nodes included in the reach calculations. # %%time conn.network.reach( direction = "directed", links = {"name":"LinkSetIn"}, linksvar = {"from": "manager_id", "to": "employee_id"}, nodessubset = {"name":"NodeSubSetIn"}, outReachNodes = {"name":"ReachNodes", "replace":True}, outReachLinks = {"name":"ReachLinks", "replace":True}, outCounts = {"name":"ReachCounts", "replace":True}, maxreach = 2) # ## Review the output tables # # If the reach action was successful, it will output some summary and log information in the notebook. We asked it to generate three tables, so let's just fetch those tables and verify that the data looks OK. # # For the `ReachLinks` table, a quick pass at the first four rows shows Megan has four direct reports. If we glance back at the master data above, we see that, yes, her employee id is 2, and the `ReachLinks` table shows four employees with a manager id of 2. # # For `ReachCounts`, we see that node 2 connects to 10 nodes. This is good, since there are 20 total nodes - we know Megan doesn't manage everyone. # # For `ReachNodes`, we simply verify that there are 10 rows (nodes) included. # # Based on the quick verification, all three tables look good. 
conn.fetch('ReachLinks') conn.fetch('ReachNodes') conn.fetch('ReachCounts') # ## Verify that our call to the reach action matches the output of recursive SQL # # From the initial PostgreSQL recursive SQL article, we expect these results: # # employee_id | manager_id | full_name # -------------:|:------------:|:----------------- # 2 | 1 | # 6 | 2 | # 7 | 2 | # 8 | 2 | # 9 | 2 | # 16 | 7 | # 17 | 7 | # 18 | 8 | # 19 | 8 | # 20 | 8 | # # Let's use the `ReachNodes` table and the `dfMasterData` dataframe to try to reproduce this table. # # Our approach: # 1. Create a list of only the node id's in the `ReachNodes` returned by the reach action. A simple list comprehension accomplishes this, in the `report_list` variable. # 2. Then, use that list with the master data to generate a boolean pandas series, where values are True if they are in Megan's reporting reach, False if not. # 3. Filter the master dataframe to create a `Reports` table we can use to compare. # 4. Display that final reports table. # # We see that, yes, the reach action provides the same results as the recursive SQL statement. # + report_list = [int(i) for i in conn.fetch('ReachNodes')['Fetch']['node'].to_list()] reports_series = dfMasterData.employee_id.isin(report_list) dfReports = dfMasterData[reports_series] dfReports.style.hide_index() # - # ## Bonus: Verify Results Visually # # Finally, let's produce a tree diagram of our organizational structure, and highlight the employees in Megan's reporting structure. This approach allows us to quickly verify whether the reach action produced the correct reporting structure. # # We use `graphviz` since it provides an easy means to programmatically create a directed network graph diagram. Since we can't directly translate pandas dataframes to graphviz dot syntax, we need to convert our dataframes to Python lists. # # Our approach is as follows: # 1. Create a `graph` object from the master dataframe. Calling the `to_dict` method with `orient='records'` set returns a list of dictionaries, one for each row in the dataframe. # 2. Create two lists of tuples - one for the reporting list, and one for everyone else. We simply use list comprehensions filtered by the `report_list` we created above to generate these variables. # 3. Define our graphviz object as a `Digraph`. # 4. Add the nodes in the reporting structure. For our purposes, we highlight by filling these nodes with light blue. # 5. Add all remaining nodes. Leave them unfilled. # 6. Use the full graph - which already has the link relationships - to add the links to our visualization. # 7. Since this is a wide image, we create a `final_plot` object in which we pass our dot language and use the `unflatten` option with a `stagger` value of 4. # 8. Finally, display the plot. # + graph = dfMasterData.to_dict(orient='records') report_nodes = [(node['employee_id'], node['full_name']) for node in graph if node['employee_id'] in report_list] other_nodes = [(node['employee_id'], node['full_name']) for node in graph if node['employee_id'] not in report_list] dot = Digraph() for item in report_nodes: dot.node(str(item[0]), item[1], style='filled', fillcolor='lightblue') for item in other_nodes: dot.node(str(item[0]), item[1]) # note that we skip the first row in "graph" because that is the head of the tree with no manager. We don't need a phantom link called "0" for pair in graph[1:]: dot.edge(str(pair['manager_id']), str(pair['employee_id'])) final_plot = dot.unflatten(stagger=4) final_plot # - # ## Clean up everything. 
# # Make sure we know what tables we created, drop them, and close our connection. # (This is probably overkill, since everything in this session is ephemeral anyway, but good practice nonetheless. # + table_list = conn.tableinfo()["TableInfo"]["Name"].to_list() for table in table_list: conn.droptable(name=table, quiet=True) conn.close() # - # ## Conclusion # # In this notebook, we used the `reach` action in the `network` actionset for SAS Viya to determine all employees in a single person's reporting structure. We checked our results against the same calculations performed by recursive SQL statements in both PostgreSQL and SAS `PROC SQL`. # # Both a results table and a network diagram demonstrate that the reach action results are equivalent to the recursive SQL statements. Using the reach action is both less verbose, and requires less debugging and design than developing custom recursive queries. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Reyfman data import scanpy as sc import glob import anndata import scIB import pandas as pd import numpy as np files = glob.glob('../../../data/HLCA_extended/extension_datasets/raw/Reyfman/*filtered*') files adatas = {} for i in files: name = "_".join(i.split('/')[9].split('_')[1:3]) sub_id = i.split('/')[9].split('_')[0] adatas[name] = sc.read_10x_h5(i) adatas[name].obs['condition']=name.split('_')[0] #adatas[name].obs['condition']=name.split('_')[0] adatas[name].obs['subject_ID']=name adatas[name].var_names_make_unique() #adatas[name].obs_names = adatas[name].obs_names+name adata = anndata.concat(adatas) adata adata.obs_names_make_unique() adata.obs['dataset']='MisharinBudinger2019_disease' adata.obs['study']='MisharinBudinger2019' adata.obs scIB.pp.summarize_counts(adata) scIB.pp.plot_QC(adata, gene_threshold=(800,5000), count_threshold=(10,40000)) sc.pp.filter_cells(adata, max_genes=7000) sc.pp.filter_genes(adata, min_counts=1) adata.obs=adata.obs.iloc[:,0:4] adata.obs['preprocessing']='filter cells to max_genes=7000 and genes to min_counts=1' adata.obs.condition.unique() # # Assign further obs covariates adata adata.obs['sample'] = adata.obs['subject_ID'] # # Load annotations idents = pd.read_csv('../../../data/HLCA_extended/extension_datasets/raw/Reyfman/Reyfman_2019_idents.csv') samps = pd.read_csv('../../../data/HLCA_extended/extension_datasets/raw/Reyfman/Reyfman_samples.csv') idents.index = idents['Unnamed: 0'] samps.index = samps['paper_ID'] # + tags=[] samps idents adata.obs # - #Map obs names adata.obs_names = [samps.loc[adata.obs['sample'][idx]]['library_ID']+'_'+idx.split('-')[0] for idx in adata.obs_names] # + tags=[] adata.obs # - adata.obs['original_celltype_ann'] = idents[''] adata.obs adata.obs['original_celltype_ann'].value_counts() np.sum(adata.obs['original_celltype_ann'].isna()) # ### Filter out NA celltype cells adata = adata[~adata.obs['original_celltype_ann'].isin([np.nan])].copy() adata np.sum(adata_nona.obs['original_celltype_ann'].isna()) # # Write full object adata adata.write('../../../data/HLCA_extended/extension_datasets/ready/full/reyfman_disease.h5ad') # # Subset data adata = sc.read('ready/full/reyfman_disease.h5ad') gene_set = pd.read_csv('genes_for_mapping.csv') # cd ../scripts import preprocessing as pp # cd ../query_datasets/ adata_sub = pp.subset_and_pad_adata(gene_set, adata) # # Write output 
adata_sub.write('../../../data/HLCA_extended/extension_datasets/ready/subsetted/reyfman_disease_sub.h5ad') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Formation Data-Science: # ## Introduction au Machine learning (Jour 1) # # Auteurs : , # Ce notebook contient les éléments d'introduction au machine learning # # I - Prétraitement et visualiation sur *digits* # ## Imports et intialisation # + # %matplotlib inline import numpy as np # charge un package pour le numérique import matplotlib.pyplot as plt # charge un package pour les graphiques # - # ## Description du jeu de données: # On charge le jeu de données *digits* disponoible dans le package scikit-learn (nom d'import sklearn). Ce jeu de données contient des images de chiffres numérisés. On va s'en servir au cours de cette séance pour étudier les principaux enjeu en classification (supervisée). # + # Chargement des données disponible dans le package sklearn from sklearn.datasets import load_digits digits = load_digits() X, y = digits.data, digits.target print("Nombre de pixels : {}".format(X.shape[1])) print("Nombre d'observations : {}".format(X.shape[0])) print("Nombre de classes : {}".format(len(np.unique(y)))) # Choix d'une observation quelconques de la base idx_to_test = 15 print("Affichage d'une ligne de la matrice / image:") print(X[idx_to_test, :]) print("Affichage de la classe / chiffre associé:") print(y[idx_to_test]) # - # ## EXERCICE: # Faites varier le choix de l'indice. Sans afficher la classe arrivez-vous à reconnaitre le chiffre représenté? np.sum(y == 1) # ## Visualisation des observations: # # Les images scannées sont de taille 8 x 8 et comportent donc 64 pixels chacune. Elles sont stockées sous la forme de vecteurs ligne, qu'il faut remettre dans un ordre lisible pour les identifiés. L'affichage graphique est proposé avec les commandes qui suivent. # # Utilisation de la fonction imshow pour l'affichage de l'image numéro idx_to_test: imgplot = plt.imshow(np.reshape(X[idx_to_test, :], (8, 8))) # + # Amélioration de la visualisation (niveau de gris) et de la légende: imgplot = plt.imshow(np.reshape(X[idx_to_test, :], (8, 8)), cmap='gray', aspect='equal', interpolation='nearest') # Attention aux accents: ne pas oublier le u (Unicode) ci-dessous plt.title(u'Le chiffre numéro %s est un %s' % (idx_to_test, y[idx_to_test])) # + # plt.imshow? # - # # ## Statistiques élementaires : # Pour mieux comprendre la base de données on va s'intéresser à quelques statistiques. # On commence par calculer les moyennes et variances par classes pour chacun des chiffres. La moyenne par classe se visualise comme une image qui est une représentantion moyenne pour chaque chiffre de zéro à neuf. Idem pour la variance, ce qui permet alors de voir les parties avec les plus grandes variations entre les membres d'une même classe. # # Récupérer les modalités possible prises (Il y en a bien 10!) classes_list = np.unique(y).astype(int) print "Liste des classes en présence: ", classes_list # ## EXERCICE: # # - Calculer un représentant moyen du chiffre 0? 
# - Avec une boucle `for` calculer le représentant moyen pour chaque chiffre # ## Solution np.mean(X[y == 0, :], axis=0).shape imgplot = plt.imshow(np.reshape(np.mean(X[y == 0, :], axis=0), (8, 8)), cmap='gray', aspect='equal', interpolation='nearest') # + # Calculer un représentant moyen pour chaque chiffre Xi_mean = [np.mean(X[y == cls], axis=0) for cls in classes_list] # Fonction d'affichage d'une liste d'image def disp_pics(pic_list, title=''): """" Fonction qui affiche une liste d'image codée en vecteur """"" fig, axs = plt.subplots(nrows=2, ncols=5, figsize=(12, 4)) plt.suptitle(title, fontsize=16) for i in range(10): opt = dict(cmap='gray', aspect='equal', interpolation='nearest') axs.flat[i].imshow(pic_list[i].reshape(8, 8), **opt) axs.flat[i].set_title("Chiffre: %s" % i) # Contre-balancer l'affichage pas terrible de matplotlib plt.tight_layout() plt.subplots_adjust(top=0.85) # Affichage des images moyennes par classe pour les données disp_pics(Xi_mean, title=(u"Moyennes sur l'ensemble des données")) # - plt.matshow(np.cov(X[y == 0].T)) # Calculer de la variance par chiffre Xi_var = [np.var(X[y == cls], axis=0) for cls in classes_list] # Affichage des images de variance par classe pour les données disp_pics(Xi_var, title=(u"Variances sur l'ensemble des donneés")) # # II - Premiers classifieurs # ## Validation : découpage Apprentissage / Validation # # On procède de manière classique en réservant 80% des données pour la partie apprentissage, et 20% pour l'évaluation des classifieurs que l'on a construit sur la première partie. # En effet il n'est pas raisonnable de tester la perfromance sur 100% des données. Cela donnera lieux à du sur-apprentissage (en: overfitting). La généralisation des méthodes apprises serait alors très mauvaise from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=42) print("Nb d'échantillons d'apprentissage : {}".format(X_train.shape[0])) print("Nb d'échantillons de validation : {}".format(X_test.shape[0])) plt.hist(y_train, bins=10) plt.hist(y_test, bins=10) # ## Mesures de performance # + from sklearn.lda import LDA clf = LDA() # choix de la méthode LDA comme premier classifieur clf.fit(X_train, y_train) # calibration de la méthode sur nos données y_pred = clf.predict(X_test) # prédiction de la méthode sur la partie à tester # Chargement d'une mesure standard de performance from sklearn.metrics import accuracy_score # accuracy : pourcentage de bonnes predictions print "Accuracy : ", accuracy_score(y_test, y_pred) print "Accuracy bis: : ", np.mean(y_test == y_pred) # mesure d'erreur 0/1 print("Le classifieur propose une bonne prédiction dans %s %% des cas." % (100 * accuracy_score(y_test, y_pred))) # - # ## EXERCICE: # # - Construire un classifieur "naif" qui prédirait chaque chiffre entre 0 et 9 avec probabilité 0.1 (on pourra utiliser la commande np.random.randint). Mesurer sa performance sur l'échantillon test. 
# ## Correction: y_rand = np.random.randint(10, size=y_test.shape) accuracy_score(y_rand, y_pred) # ### Précision et rappel # Choix d'un chiffre: chf = 8 print "Chiffre choisi: %s " % chf # Chargement de deux autres mesure de performance from sklearn.metrics import precision_score, recall_score # + # Calcul de la précision: precision_value = precision_score(y_test, y_pred,average=None) print "Precision : ", precision_value[chf] print "Precision bis : ", np.sum(np.logical_and(y_pred == chf, y_test == chf)) / float(np.sum(y_test == chf)) print("Le classifieur prédit le chiffre %s " " avec raison dans %s %% de ses prédictions.\n" "Autrement dit, dans %s %% de ses prédictions le classifieur prédit %s" " alors que le vrai chiffre est différent.\n" % (chf, 100 * precision_value[chf], 100 * (1 - precision_value[chf]), chf)) # + # recall_score?? # + # Calcul du rappel: recall_value = recall_score(y_test, y_pred, average=None) # en Anglais rappel = recall print "Rappel : ", recall_value[chf] print "Rappel bis : ", np.mean(np.logical_and(y_pred == chf, y_test == chf)) print("Le classifieur prédit le chiffre %s avec raison dans %s" "%% des cas où le vrai chiffre est un %s.\n" "En revanche %s %% des chiffres qui sont vraiment des %s" " sont prédit à tord par un autre chiffre." % (chf, 100 * recall_value[chf], chf, 100 * (1 - recall_value[chf]), chf)) # - # # EXERCICE: # Afficher la précision et le rappel pour tous les valeurs de chiffre de 0 à 9 en utilisant une boucle "for" pour le classifieur que l'on vient de considérer. # ## Résumé de ces éléments avec scikit-learn: # + from sklearn.metrics import classification_report, f1_score print classification_report(y_test, y_pred) # - # # EXERCICE: # Vérifier que la précision et le rappel moyen donnés dans le dernier tableau sont la moyenne de la précision et du rappel obtenus pour chaque chiffre. # # III - Courbe d'apprentissage # Les courbes d'apprentissage permettent de quantifier le gain de performance obtenu en augmentant la taille des données. Cela permet de répondre à des questions comme: # # - Ai-je assez de données? # - Mon modèle est-il assez complexe pour mon problème? 
# + N_range = np.linspace(15, X_train.shape[0], 20).astype(int) clf = LDA() clf_name = 'LDA'# choix de la méthode LDA comme premier classifieur def plot_learning_curve(clf, clf_name): training_error = [] test_error = [] for N in N_range: XN = X_train[:N] yN = y_train[:N] clf.fit(XN, yN) training_error.append(accuracy_score(clf.predict(XN), yN)) test_error.append(accuracy_score(clf.predict(X_test), y_test)) plt.figure() plt.plot(N_range, training_error, label='training') plt.plot(N_range, test_error, label='test') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.title(u'Courbe d\'apprentissage pour la méthode ' + clf_name) plt.xlabel('Nombre de points d\'entrainement') plt.ylabel('Pourcentage d\'erreurs') #plot_learning_curve(clf, clf_name) from sklearn.neighbors import KNeighborsClassifier clf = KNeighborsClassifier(n_neighbors=15) plot_learning_curve(clf, 'knn') # + scores = [] neighbors_range = range(1, 30, 1) for n_neighbors in neighbors_range: clf.n_neighbors = n_neighbors clf.fit(X_train, y_train) scores.append(accuracy_score(clf.predict(X_test), y_test)) plt.plot(neighbors_range, scores) # - y_test.shape # # IV - Comparaison de performances de classifieurs # + # Chargement d'une autre méthode de classification (KNN) from sklearn.neighbors import KNeighborsClassifier # Liste des classifieurs évalués classifiers = [('LDA', LDA()), ('KNN_k=1', KNeighborsClassifier(n_neighbors=1))] # - # ## Calcul des métriques de performance # # Pour chaque classifieur on évalue la performance par le score sur les données de test et le temps d'éxecution from sklearn.metrics import confusion_matrix from sklearn.grid_search import GridSearchCV # + import pandas as pd # charge un package pour le traitement des données from timeit import timeit # charge un package pour des mesures de temps # Definition des métriques de performance def perf_compute(clf, name, loops=10): """ Calcule le temps d'apprentissage, de prediction, le score et la matrice de confusion d'un classifieur """ # On initialise le conteneur perf = pd.Series(name=name) # On crée les callables qu'on passera à la fonction de profiling fit = lambda: clf.fit(X_train, y_train) score = lambda: clf.score(X_test, y_test) # On profile le temps des phases d'entrainement et de prédiction en ms perf['train_tps'] = timeit(fit, number=loops) / loops * 1000 perf['test_tps'] = timeit(score, number=loops) / loops * 1000 perf['total_tps'] = perf.train_tps + perf.test_tps # On calcule le score en pourcentage perf['score'] = fit().score(X_test, y_test) * 100 # On calcule la matrice de confusion perf['conf_mat'] = [confusion_matrix(fit().predict(X_test), y_test)] # Normalisation par ligne de la matrice de confusion pour avoir des pourcentages d'erreurs. # cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] return perf # + # On lance le calcule de performance. On profile en bouclant 100 fois perfs = pd.DataFrame([perf_compute(clf, name) for name, clf in classifiers]) perfs = perfs.sort('score') perfs['train_tps test_tps total_tps score'.split()].T # + # Barplot des performances plt.style.use('ggplot') # sortie graphique améliorées (tester sans!) 
fig, axs = plt.subplots(ncols=2, figsize=(12, 4)) perfs['score'].plot(kind='barh', title='Pourcentage d\'erreurs commises (en %)', ax=axs[0]) perfs[['train_tps', 'test_tps', 'total_tps']].plot(kind='barh', title='Temps (en ms)', ax=axs[1]) # + import seaborn as sns def plot_conf_mat(perf, ax, title='Model'): """ Affichage de la matrice de confusion """ sns.heatmap(perf.conf_mat[0], ax=ax, square=True, annot=True) ax.set_title('{}: {}\nScore: {:.2f}'.format(title, perf.name, perf.score)) ax.set_xlabel('True label') ax.set_ylabel('Predicted label') # Affichage du plus mauvais et du meilleur classifieur # Les classifieurs sont classés par scores croissant fig, axs = plt.subplots(ncols=2, figsize=(12, 4)) plot_conf_mat(perfs.iloc[0], ax=axs[0], title='Pire model') plot_conf_mat(perfs.iloc[-1], ax=axs[1], title='Meilleur model') # - # courbe d'apprentissage pour KNN plot_learning_curve(classifiers[1][1], classifiers[1][0]) # # EXERCICE # # * Tester d'autres modèles: # # `from sklearn.naive_bayes import GaussianNB` # # `from sklearn.svm import SVC` # # `estimator = GaussianNB()` # # `estimator = SVC(gamma=0.001)` # # # * Les résultats varient en fonction du découpage apprentissage/validation initial. Obtenez-vous des les même résultat : # # * en changeant la graine de l'aléa ('random_state=10')? # # * en changeant le ratio apprentissage/validation? # # EXERCICE # # * Quelle valeur de k dans le k-NN donne la meilleure performance? Faire varier k entre 1 et 10 et afficher un graphique de performance en fonction de k. # # * Il y a-t-il un compromis entre la performance de prédiction et le temps de calcul? from sklearn.datasets import load_boston boston = load_boston() X, y = boston.data, boston.target print(boston.DESCR) plt.hist(y) # + # AdaBoostRegressor? # + from sklearn.cross_validation import cross_val_score from sklearn.neighbors import KNeighborsRegressor from sklearn.preprocessing import scale from sklearn.ensemble import RandomForestRegressor from sklearn.ensemble import GradientBoostingRegressor from sklearn.ensemble import AdaBoostRegressor # XX = scale(X) # estimator = KNeighborsRegressor(n_neighbors=1) estimator = RandomForestRegressor(n_estimators=100, max_depth=7, n_jobs=4) # estimator = GradientBoostingRegressor() # estimator = AdaBoostRegressor(base_estimator=estimator) scores = cross_val_score(estimator, X, y, cv=5, scoring='mean_squared_error') print np.sqrt(-np.mean(scores)) # - np.std(X, axis=0) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] deletable=true editable=true # ### Topic Modelling - and more - with Gensim! # # This tutorial will attempt to walk you through the entire process of analysing your text - from pre-processing to creating your topic models and visualising them. # # python offers a very rich suite of NLP and CL tools, and we will illustrate these to the best of our capabilities. # Let's start by setting up our imports. 
# # We will be needing: # ``` # - Gensim # - matplotlib # - spaCy # - pyLDAVis # ``` # # + import matplotlib.pyplot as plt import gensim import numpy as np import spacy from gensim.models import CoherenceModel, LdaModel, LsiModel, HdpModel from gensim.models.wrappers import LdaMallet from gensim.corpora import Dictionary import pyLDAvis.gensim import os, re, operator, warnings warnings.filterwarnings('ignore') # Let's not pay heed to them right now # %matplotlib inline # - # For this tutorial, we will be using the Lee corpus which is a shortened version of the [Lee Background Corpus](http://www.socsci.uci.edu/~mdlee/lee_pincombe_welsh_document.PDF). The shortened version consists of 300 documents selected from the Australian Broadcasting Corporation's news mail service. It consists of texts of headline stories from around the year 2000-2001. # # We should keep in mind we can use pretty much any textual data-set and go ahead with what we will be doing. # + # since we're working in python 2.7 in this tutorial, we need to make sure to clean our data to make it unicode consistent def clean(text): return unicode(''.join([i if ord(i) < 128 else ' ' for i in text])) test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) lee_train_file = test_data_dir + os.sep + 'lee_background.cor' text = open(lee_train_file).read() # - # ### Pre-processing data! # # It's been often said in Machine Learning and NLP algorithms - garbage in, garbage out. We can't have state-of-the-art results without data which is aa good. Let's spend this section working on cleaning and understanding our data set. # NTLK is usually a popular choice for pre-processing - but is a rather [outdated](https://explosion.ai/blog/dead-code-should-be-buried) and we will be checking out spaCy, an industry grade text-processing package. from spacy.en import English nlp = spacy.load("en") # For safe measure, let's add some stopwords. It's a newspaper corpus, so it is likely we will be coming across variations of 'said' and 'Mister' which will not really add any value to the topic models. my_stop_words = [u'say', u'\'s', u'Mr', u'be', u'said', u'says', u'saying'] for stopword in my_stop_words: lexeme = nlp.vocab[stopword] lexeme.is_stop = True doc = nlp(clean(text)) # Voila! With the `English` pipeline, all the heavy lifting has been done. Let's see what went on under the hood. doc # It seems like nothing, right? But spaCy's internal data structure has done all the work for us. Let's see how we can create our corpus. You can check out what a gensim corpus looks like [here](google.com). # we add some words to the stop word list texts, article = [], [] for w in doc: # if it's not a stop word or punctuation mark, add it to our article! if w.text != '\n' and not w.is_stop and not w.is_punct and not w.like_num: # we add the lematized version of the word article.append(w.lemma_) # if it's a new line, it means we're onto our next document if w.text == '\n': texts.append(article) article = [] # And this is the magic of spaCy - just like that, we've managed to get rid of stopwords, punctauation markers, and added the lemmatized word. There's lot more we can do with spaCy which I would really recommend checking out. # # Sometimes topic models make more sense when 'New' and 'York' are treated as 'New_York' - we can do this by creating a bigram model and modifying our corpus accordingly. 
bigram = gensim.models.Phrases(texts) texts = [bigram[line] for line in texts] dictionary = Dictionary(texts) corpus = [dictionary.doc2bow(text) for text in texts] # We're now done with a very important part of any text analysis - the data cleaning and setting up of corpus. It must be kept in mind that we created the corpus the way we did because that's how gensim requires it - most algorithms still require one to clean the data set the way we did, by removing stop words and numbers, adding the lemmatized form of the word, and using bigrams. # ### LSI # # LSI stands for Latent Semantic Indeixing - it is a popular information retreival method which works by decomposing the original matrix of words to maintain key topics. Gensim's implementation uses an SVD. lsimodel = LsiModel(corpus=corpus, num_topics=10, id2word=dictionary) lsimodel.show_topics(num_topics=5) # Showing only the top 5 topics # ### HDP # # HDP, the Hierarchical Dirichlet process is an unsupervised topic model which figures out the number of topics on it's own. hdpmodel = HdpModel(corpus=corpus, id2word=dictionary) hdpmodel.show_topics() # ### LDA # # LDA, or Latent Dirichlet Allocation is arguably the most famous topic modelling algorithm out there. Out here we create a simple topic model with 10 topics. ldamodel = LdaModel(corpus=corpus, num_topics=10, id2word=dictionary) ldamodel.show_topics() # ### pyLDAvis # # Thanks to pyLDAvis, we can visualise our topic models in a really handy way. All we need to do is enable our notebook and prepare the object. pyLDAvis.enable_notebook() pyLDAvis.gensim.prepare(ldamodel, corpus, dictionary) # ### Round-up # # Okay - so what have we learned so far? # By using spaCy, we cleaned up our data super fast. It's worth noting that by running our doc through the pipeline we also know about every single words POS-tag and NER-tag. This is useful information and we can do some funky things with it! I would highly recommend going through [this](https://github.com/explosion/spacy-notebooks) repository to see examples of hands-on spaCy usage. # # As for gensim and topic modelling, it's pretty easy to see how well we could create our topic models. Now the obvious next question is - how do we use these topic models? The [news classification notebook](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/gensim_news_classification.ipynb) in the Gensim [notebooks](https://github.com/RaRe-Technologies/gensim/tree/develop/docs/notebooks) directory is a good example of how we can use topic models in a practical scenario. # # We will continue this tutorial by demonstrating a newer topic modelling features of gensim - in particular, Topic Coherence. # # ### Topic Coherence # # Topic Coherence is a new gensim functionality where we can identify which topic model is 'better'. # By returning a score, we can compare between different topic models of the same. We use the same example from the news classification notebook to plot a graph between the topic models we have created. 
# + lsitopics = [[word for word, prob in topic] for topicid, topic in lsimodel.show_topics(formatted=False)] hdptopics = [[word for word, prob in topic] for topicid, topic in hdpmodel.show_topics(formatted=False)] ldatopics = [[word for word, prob in topic] for topicid, topic in ldamodel.show_topics(formatted=False)] # + lsi_coherence = CoherenceModel(topics=lsitopics[:10], texts=texts, dictionary=dictionary, window_size=10).get_coherence() hdp_coherence = CoherenceModel(topics=hdptopics[:10], texts=texts, dictionary=dictionary, window_size=10).get_coherence() lda_coherence = CoherenceModel(topics=ldatopics, texts=texts, dictionary=dictionary, window_size=10).get_coherence() # - def evaluate_bar_graph(coherences, indices): """ Function to plot bar graph. coherences: list of coherence values indices: Indices to be used to mark bars. Length of this and coherences should be equal. """ assert len(coherences) == len(indices) n = len(coherences) x = np.arange(n) plt.bar(x, coherences, width=0.2, tick_label=indices, align='center') plt.xlabel('Models') plt.ylabel('Coherence Value') evaluate_bar_graph([lsi_coherence, hdp_coherence, lda_coherence], ['LSI', 'HDP', 'LDA']) # We can see that topic coherence helped us get past manually inspecting our topic models - we can now keep fine tuning our models and compare between them to see which has the best performance. # # This also brings us to the end of the runnable part of this tutorial - we will continue however by briefly going over two more Jupyter notebooks I have previously worked on - mainly, [Dynamic Topic Modelling](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/ldaseqmodel.ipynb) and [Document Word Coloring](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/topic_methods.ipynb). 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + id="T4uqqLQm4gZM" import pandas as pd import numpy as np # + id="dbuXOrGw41jf" cluster = 4 # + id="T0FbuH-n4gZO" country_data = pd.read_csv("synthesized_data.csv") # + colab={"base_uri": "https://localhost:8080/"} id="ROU5nRbx4gZO" outputId="aedb0fb3-6609-4eb4-a7a2-fd22e943598c" country_data.shape # + colab={"base_uri": "https://localhost:8080/"} id="K7oA-Lct4gZP" outputId="80139f3c-942b-4a1d-ad81-2f21c516a2a8" country_data.isnull().sum() # + id="4Cg2HH-54gZP" country_data['Primary enrollment rate'] = country_data['Primary enrollment'] / country_data['Population (thousands)'] country_data['Secondary enrollment rate'] = country_data['Secondary enrollment'] / country_data['Population (thousands)'] country_data['Tertiary enrollment rate'] = country_data['Tertiary enrollment'] / country_data['Population (thousands)'] country_data['Lower secondary repeaters rate'] = country_data['Lower secondary repeaters count'] / country_data['Population (thousands)'] useful_data = country_data.loc[:, ~country_data.columns.isin(['Country', 'Primary enrollment', 'Secondary enrollment', 'Tertiary enrollment', 'Lower secondary repeaters count', 'Primary repeaters count', 'Average income', 'Average income currency'])] # + colab={"base_uri": "https://localhost:8080/"} id="aB1gheN-4gZQ" outputId="1472020b-9c8c-4a7a-f1bb-44589ff97570" useful_data.isnull().sum() # + id="Dnh3HeCa4gZQ" from sklearn.impute import SimpleImputer imputer = SimpleImputer(missing_values=np.nan, strategy='median') imputed_data = pd.DataFrame(imputer.fit_transform(useful_data)) standardized_data = (imputed_data-imputed_data.mean())/imputed_data.std() # + id="jzRFoVS94gZQ" from sklearn.decomposition import PCA pca_countries = PCA(n_components=2) principalComponents_countries = pca_countries.fit_transform(standardized_data) # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="aI6cXxXT4gZQ" outputId="ea4e03cf-841d-4e50-b0db-bf0697e52b16" import matplotlib.pyplot as plt # %matplotlib inline plt.scatter(principalComponents_countries[:, 0], principalComponents_countries[:, 1]) # + id="LWskfzdu4gZR" from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=cluster) cluster_vals = kmeans.fit_predict(standardized_data) # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="LlOYWggt4gZR" outputId="19b2f055-c900-4d0a-b235-7c7213a9cd66" plt.scatter(principalComponents_countries[cluster_vals ==0,0], principalComponents_countries[cluster_vals == 0,1], s=100, c='red') plt.scatter(principalComponents_countries[cluster_vals ==1,0], principalComponents_countries[cluster_vals == 1,1], s=100, c='black') plt.scatter(principalComponents_countries[cluster_vals ==2,0], principalComponents_countries[cluster_vals == 2,1], s=100, c='blue') plt.scatter(principalComponents_countries[cluster_vals ==3,0], principalComponents_countries[cluster_vals == 3,1], s=100, c='green') plt.scatter(principalComponents_countries[cluster_vals ==4,0], principalComponents_countries[cluster_vals == 4,1], s=100, c='cyan') plt.scatter(principalComponents_countries[cluster_vals ==5,0], principalComponents_countries[cluster_vals == 5,1], s=100, c='purple') plt.scatter(principalComponents_countries[cluster_vals ==6,0], principalComponents_countries[cluster_vals == 6,1], s=100, c='orange') plt.scatter(principalComponents_countries[cluster_vals 
==7,0], principalComponents_countries[cluster_vals == 7,1], s=100, c='yellow') plt.scatter(principalComponents_countries[cluster_vals ==8,0], principalComponents_countries[cluster_vals == 8,1], s=100, c='grey') plt.scatter(principalComponents_countries[cluster_vals ==9,0], principalComponents_countries[cluster_vals == 9,1], s=100, c='brown') plt.scatter(principalComponents_countries[cluster_vals ==10,0], principalComponents_countries[cluster_vals == 10,1], s=100, c='magenta') plt.scatter(principalComponents_countries[cluster_vals ==11,0], principalComponents_countries[cluster_vals == 11,1], s=100, c='lime') plt.scatter(principalComponents_countries[cluster_vals ==12,0], principalComponents_countries[cluster_vals == 12,1], s=100, c='pink') plt.scatter(principalComponents_countries[cluster_vals ==13,0], principalComponents_countries[cluster_vals == 13,1], s=100, c='maroon') plt.scatter(principalComponents_countries[cluster_vals ==14,0], principalComponents_countries[cluster_vals == 14,1], s=100, c='azure') # + id="Z0N0gvRw4gZS" cv_sr = pd.Series(cluster_vals) clusters = [] for i in range(cluster): clusters.append(country_data['Country'][cv_sr == i].to_list()) # + colab={"base_uri": "https://localhost:8080/"} id="wvWor1974gZS" outputId="932b4784-4e81-46e0-b494-4ceb88e0b0a0" clusters # + colab={"base_uri": "https://localhost:8080/"} id="Dv3hrrf24gZS" outputId="daba4935-4820-4061-9fc9-bb31e1160c61" education_data = standardized_data[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 20, 21, 22, 23]] cluster_vals_ed = kmeans.fit_predict(education_data) cv_sr_ed = pd.Series(cluster_vals_ed) clusters_ed = [] for i in range(cluster): clusters_ed.append(country_data['Country'][cv_sr_ed == i].to_list()) clusters_ed # + colab={"base_uri": "https://localhost:8080/"} id="5Pi9CnzM4gZS" outputId="064a3762-0b27-4dc7-f156-7dcec35fdc5a" economic_data = standardized_data[[0, 14, 15, 16, 17, 18, 19]] cluster_vals_ec = kmeans.fit_predict(economic_data) cv_sr_ec = pd.Series(cluster_vals_ec) clusters_ec = [] for i in range(cluster): clusters_ec.append(country_data['Country'][cv_sr_ec == i].to_list()) clusters_ec # + id="kYHPl9EF4gZT" import collections ed_ec_combined = collections.defaultdict(list) for i in range(cluster): ed_ec_combined[i] = collections.defaultdict(list) for i in range(178): country = country_data['Country'][i] ed_ec_combined[cv_sr_ed[i]][cv_sr_ec[i]].append(country) # + colab={"base_uri": "https://localhost:8080/"} id="P_999SEv4dhm" outputId="c20af508-7e81-49d2-fa7a-20e641c1b212" ed_ec_combined # + colab={"base_uri": "https://localhost:8080/", "height": 496} id="WN8PHvRf-0FA" outputId="9ae2c019-e29b-45d0-fae7-9c1913244074" from sklearn.metrics import confusion_matrix cm = confusion_matrix(cv_sr_ec, cv_sr_ed) print(cm) from mlxtend.plotting import plot_confusion_matrix fig, ax = plot_confusion_matrix(conf_mat=cm, figsize=(6, 6), cmap=plt.cm.Blues) plt.xlabel('Economic Clusters', fontsize=16) plt.ylabel('Education Clusters', fontsize=16) plt.title('Confusion Matrix', fontsize=16) plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="k70MHlxRH76q" outputId="a68fbdf9-3e55-4a79-ebc6-ac5effd73daa" df = country_data[country_data['Country'] == ed_ec_combined[2][2][0]] for entry in ed_ec_combined[2][2]: print(entry) df = df.append(country_data[country_data['Country'] == entry]) # + id="tFodBr_-IoQI" index = [8, 15, 18, 30, 44, 45, 57, 58, 62, 70, 72, 79, 81, 94, 97, 117, 121, 130, 132, 134, 144, 154, 155, 159, 168, 169] # + id="USF7KBXTNshH" df = pd.DataFrame(columns=imputed_data.columns) for i in 
index: df.loc[len(df.index)] = imputed_data.loc[i] # + id="QBaWJsQFectL" import csv table = [] sig_table = [] with open("all_feature_comparison.csv", "w", newline='') as f: with open("best_feature_comparison.csv", "w", newline='') as g: w1 = csv.writer(f) w2 = csv.writer(g) table.append(['', 'Mean of Best Cluster', 'Mean of All Countries', 'Difference (%)']) sig_table.append(['', 'Mean of Best Cluster', 'Mean of All Countries', 'Difference (%)']) for i in range(24): diff = abs((imputed_data[i].mean() - df[i].mean()) / df[i].mean()) * 100 row = [useful_data.columns[i], df[i].mean(), imputed_data[i].mean(), diff] table.append(row) if diff > 45: sig_table.append(row) w2.writerows(sig_table) w1.writerows(table) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="ANqFABrcyiuV" outputId="f9837f03-2475-43f8-c5b2-6e6d7c6fbf7f" from tabulate import tabulate print(tabulate(table, headers='firstrow', tablefmt='fancy_grid', numalign="right")) pd.DataFrame(table[1:], columns=table[0]).to_csv('all_feature_comparison.csv', index=False) # + colab={"base_uri": "https://localhost:8080/"} id="GVOLXFwv_aND" outputId="fd7b7205-5c9e-4411-a12d-8fab6e4729c7" print(tabulate(sig_table, headers='firstrow', tablefmt='fancy_grid', numalign="right")) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Readme: # # This notebook documents what I learnt from https://course.spacy.io/en/. It contains notes/sample code/sample problems from the course. # # Special thanks to the content creators and the presenter Ines. # # This notebook is intended for self-study, not for redistributing its contents. # If you want to learn more about spaCy, please visit https://spacy.io/ or https://course.spacy.io/en/ # # Thank you! # ### Chapter 4 : Training and Updating Models # * How does training work? # * Initialize the model weights randomly with nlp.begin_training # * Predict a few examples with the current weights by calling nlp.update # * Compare prediction with true labels # * Calculate how to change weights to improve predictions # * Update weights slightly # * Go back to step 2 # # Note : the gradient tells us how to change the weights # # * update an existing model : a few hundred to a few thousand examples # * train a new category : a few thousand to a million examples # * training lets you teach the model new labels, entity types or other classification schemes. # * spaCy’s components are supervised models for text annotations, meaning they can only learn to reproduce examples, not guess new labels from raw text. 
Training does not help with discovering patterns in unlabelled data # + import json from spacy.matcher import Matcher from spacy.lang.en import English with open("exercises/en/iphone.json", encoding="utf8") as f: TEXTS = json.loads(f.read()) nlp = English() matcher = Matcher(nlp.vocab) # Two tokens whose lowercase forms match "iphone" and "x" pattern1 = [{"LOWER": "iphone"}, {"LOWER": "x"}] # Token whose lowercase form matches "iphone" and a digit pattern2 = [{"LOWER": "iphone"}, {"IS_DIGIT": True}] # Add patterns to the matcher and check the result matcher.add("GADGET", None, pattern1, pattern2) for doc in nlp.pipe(TEXTS): print([doc[start:end] for match_id, start, end in matcher(doc)]) # + import json from spacy.matcher import Matcher from spacy.lang.en import English with open("exercises/en/iphone.json", encoding="utf8") as f: TEXTS = json.loads(f.read()) nlp = English() matcher = Matcher(nlp.vocab) pattern1 = [{"LOWER": "iphone"}, {"LOWER": "x"}] pattern2 = [{"LOWER": "iphone"}, {"IS_DIGIT": True}] matcher.add("GADGET", None, pattern1, pattern2) TRAINING_DATA = [] # Create a Doc object for each text in TEXTS for doc in nlp.pipe(TEXTS): # Match on the doc and create a list of matched spans spans = [doc[start:end] for match_id, start, end in matcher(doc)] # Get (start character, end character, label) tuples of matches entities = [(span.start_char, span.end_char, "GADGET") for span in spans] # Format the matches as a (doc.text, entities) tuple training_example = (doc.text, {"entities": entities}) # Append the example to the training data TRAINING_DATA.append(training_example) print(*TRAINING_DATA, sep="\n") # ('How to preorder the iPhone X', {'entities': [(20, 28, 'GADGET')]}) # ('iPhone X is coming', {'entities': [(0, 8, 'GADGET')]}) # ('Should I pay $1,000 for the iPhone X?', {'entities': [(28, 36, 'GADGET')]}) # ('The iPhone 8 reviews are here', {'entities': [(4, 12, 'GADGET')]}) # ("iPhone 11 vs iPhone 8: What's the difference?", {'entities': [(0, 9, 'GADGET'), (13, 21, 'GADGET')]}) # ('I need a new phone! Any tips?', {'entities': []}) # - # * How does training loop work? # * loop for a number of times # * shuffle the training data # * divide the data into batches # * update the model for each batch # * save the updated model # * Best practice when training models # * Models can forget things # * mix in previously correct predictions # * website v.s. 
persons # * run existing spaCy model over data and extract all other relevant entities # * Models can't learn everything # * local context / surrounding words # * label schemes need to be consistent and not too specific (clothing is better than adult clothing/children clothing) # * use rules from generic to specific # + # example 1 TRAINING_DATA = [ ( "i went to amsterdem last year and the canals were beautiful", {"entities": [(10, 19, "TOURIST_DESTINATION")]}, ), ( "You should visit Paris once in your life, but the Eiffel Tower is kinda boring", {"entities": [(17, 22, "TOURIST_DESTINATION")]}, ), ("There's also a Paris in Arkansas, lol", {"entities": []}), ( "Berlin is perfect for summer holiday: lots of parks, great nightlife, cheap beer!", {"entities": [(0, 6, "TOURIST_DESTINATION")]}, ), ] # this is subjective, it will be better if we just name it as GPE or location, and use the rule-based system # to determine whether such an entity is tourist destination or not # so we should replace all "TOURIST_DESTINATION" with "GPE" # for the third doc, both paris and arkansas will have "GPE" # "There's also a Paris in Arkansas, lol", # {"entities": [(15, 20, "GPE"), (24, 32, "GPE")]}, # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # # 2.1 Dask Delayed # # ## Delayed Functions # The Dask function `delayed` is a *decorator function* (see below) that delays the execution of a function when it is called, letting you determine when to actually execute the function's operation at a later time. # > **NOTE:** A *decorator function* (or just a *decorator*) is a function that takes another function as its argument and returns yet another function. Effectively, *decorator functions* act as wrapper functions, passing arguments through the wrapper to the wrapped function. # > # >Decorators can be applied in two different ways: # > # >```python # >def unwrapped_function(...): # > ... # > # >wrapped_function = decorator_function(unwrapped_function) # >``` # > # >where the `wrapped_function` is defined separately from the `unwrapped_function`, or # > # >```python # >@decorator_function # >def function(...): # > ... # >``` # > # >where the `function` is wrapped at the time it is defined using *decorator syntax*. from code.mysleep import sleep import dask # ## Example: *Slow Python Functions* # A simple function to increment an integer...slowly! def slow_inc(x): sleep(1) return x + 1 # A simple function to decrement an integer...slowly! def slow_dec(x): sleep(1) return x - 1 # %time i_2 = slow_inc(2) i_2 # %time d_i_2 = slow_dec(i_2) d_i_2 # ## Example: *Dask Delayed Functions* delayed_inc = dask.delayed(slow_inc) delayed_dec = dask.delayed(slow_dec) # %time delayed_i_2 = delayed_inc(2) delayed_i_2 # %time delayed_d_i_2 = delayed_dec(delayed_i_2) delayed_d_i_2 # ## Notice anything different? # # **1. Run Times:** The "tasks" ran almost instantaneously! # # **2. Return Values:** The `delayed` functions returned `Delayed` objects. # ## Delayed Objects # # When called, every `delayed` function returns a `Delayed` object. Each `Delayed` object represents a node in a *task graph*, and each `Delayed` object gives you the ability to examine and visualize the *task graph* that leads up to that node in the graph. 
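# As an aside, since `delayed` is a decorator function, it can also be applied with the
# decorator syntax described above rather than by wrapping an existing function. This is a
# minimal sketch; the toy function `slow_add` is introduced here only for illustration.

@dask.delayed
def slow_add(x, y):
    # Stand-in for an expensive computation; calling the decorated function builds a
    # task-graph node (a Delayed object) instead of executing immediately.
    sleep(1)
    return x + y

lazy_sum = slow_add(1, 2)  # a Delayed object; nothing has run yet
lazy_sum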
# # ![](images/inc-add.svg) # ## Delayed.compute() # # To force the `delayed` functions to compute and return the result, we call the `compute` method of the `Delayed` object. # %time delayed_i_2.compute() # %time delayed_d_i_2.compute() # ### Notice! # # The computation of `delayed_d_i_2` took 2 seconds, which is the time required to compute `slow_inc(2)` plus the time required to compute `slow_dec(3)`! # # But we already computed `delayed_i_2`, so why are we computing it, again? # ### NOTE: # # In addition to using the `compute` method of a Delayed object, you can also compute a `Delayed` object with the `compute` function in Dask. # %time _i_2, _d_i_2 = dask.compute(delayed_i_2, delayed_d_i_2) _i_2, _d_i_2 # ### Notice! # # Did you notice that this computed both `Delayed` objects at the same time (in parallel)? # ## Delayed.persist() # # To keep the computed result of a `Delayed` object in memory, so that it is available later, we use the `persist` method of the `Delayed` objects. # %time persist_i_2 = delayed_inc(2).persist() persist_i_2 # %time persist_i_2.compute() # %time persist_d_i_2 = delayed_dec(persist_i_2) persist_d_i_2 # %time persist_d_i_2.compute() # ### Notice! # # Now, the computation of `i2` only took as long as it took to compute `dec(3)` because the result of `i1` was persisted in memory. # ### NOTE: # # Like the `dask.compute` function, you can also persist `Delayed` objects with the `dask.persist` function: # %time _i_2, _d_i_2 = dask.persist(delayed_i_2, delayed_d_i_2) _i_2, _d_i_2 # ## Delayed.key # # Each `Delayed` object has a unique identifier, called a `key`, which can be returned with the `key` attribute. delayed_i_2.key persist_i_2.key delayed_d_i_2.key persist_d_i_2.key # ## Delayed.dask # # These `key`s are used to uniquely identify each task in a *Task Graph*, and the *Task Graph* can be viewed as dictionary-like object associated with the `dask` attribute of the `Delayed` object. # Short function to print out a Task Graph def print_dask(dobj): for key in dobj.dask: print('{}:'.format(key)) if isinstance(dobj.dask[key], tuple): print(' function: {}'.format(dobj.dask[key][0])) print(' arguments: {}'.format(dobj.dask[key][1:])) else: print(' value: {}'.format(dobj.dask[key])) print_dask(delayed_i_2) print_dask(delayed_d_i_2) # ## Delayed.visualize() # # It's kinda annoying that we have to write a special function to see what the graph looks like! # # Fortunately, there's a better way! Use the `visualize` method of the `Delayed` object. delayed_i_2.visualize() delayed_d_i_2.visualize() # ### Notice! # # If we visualize the persisted versions of these `Delayed` objects, what do you get? persist_i_2.visualize() persist_d_i_2.visualize() # The first objects in the Task Graphs are *data*, now! Before, they were function calls! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Numpy is the numerical computing package in Python import numpy as np # The import statement is used to import external libraries. All functions within the imported library are now accesible through that library's namespace. Since we imported numpy as "np", we can now access the functions within the Numpy library through: np.\ np.array( [1, 2, 3, 4] ) np.arange(10) np.arange(0, 10, 1) # The arange function takes in the "start" value, "stop" value, and the "step" size as inputs and returns an evenly spaced array. 
For more information, read the documentation: # + jupyter={"outputs_hidden": true} # np.linspace? # - np.linspace(0, 9,10) np.zeros(10) np.array([ [1,1], [1,1] ]) #arr = np.ones( (2,2) ) arr = np.ones ( (3,4) ) arr # Above, we created a matrix with 3 rows and 4 columns. It has a shape of 3 x 4 arr.shape np.full_like(np.zeros(10), 10) np.ones(10) * 10 x = 2 #integer y = "this is a string" x * y # ## Why use numpy? # # Firstly, numpy provides access to efficient and optimized implementation of array handling operations. For instance, let's repeat the exercise from the previous notebook, where we created a list of 1000 elements and performed certain operations on each element. # %%timeit x = [] for i in range(0, 1000, 1): x.append(i**2 + 0.5 * i + 2.5) # %%timeit xn = np.arange(0, 1000, 1) xn = xn**2 + 0.5 * xn + 2.5 #

# Numpy is ~2 orders of magnitude faster! # # The pure Python way of doing things is slower because Python is a dynamically typed language and the compiler may not implement memory optimizations such as loading up the array elements into memory before the operation is carried out. # # But the numpy way of doing things is faster because the underlying code is written in C, and we get all the optimization that comes with having defined types and compiler optimizations with memory management. We also avoid the overheads that come with storing the data type and checking it before every operation is carried out, leading to much faster running code! # ### Indexing in numpy x = np.arange(5, 10, 1) x x.shape print(x[0], x[1], x[2], x[3], x[4]) print(x[5]) # #### Index 5 threw an error because our array has elements in the 0th, 1st, 2nd, 3rd, and 4th positions only. Remember, indexing in Python begins from 0 x[ [0,1,2,3] ] # You can call elements of the array by passing an array of indices! x[ np.arange(4) ] # The above two cells are equivalent to each other. # ### Boolean indexing: # Pass an array of boolean (True/False) values and corresponding elements of the original array are selected x[[True, False, True, False, True]] x[((x % 2) == 1)] # What happened above? x % 2 # The percentage sign is known as the modulo operator, which returns the remainder after division. So 5 divided by 2 is equal to 2 with a remainder of 1, 6 divided by 2 is equal to 3 with a remainder of 0 and so on... (x % 2) == 1 # The above is a logical test for each element of the array produced after the modulo operation. And finally, we pass the boolean array produced from this logical test as boolean indexers of the original array! # ### Slicing print(x[0: 5]) print(x[-1], x[-2], x[-3], x[-4], x[-5]) # Numpy slicing operator is ":" the colon. Use the operator with the lower index and the higher index to access slices of the array x[0:3] x[0:-1] x[0:-2] # You can step over elements of the array using double colons :: followed by the step size x[::2] x[::-1] # It also works in reverse! x[::-2] # The indexing and slicing operations work on any N dimensional array. Just seperate the indices by commas: twoDarray = np.array([ [1,2,3], [4,5,6], [7,8,9]]) twoDarray # The same can be created using the reshape function in numpy np.arange(1, 10, 1).reshape(3,3) twoDarray[0,0] twoDarray[0,1] twoDarray[1,0] twoDarray[ [0, 1, 2], [0, 0, 0] ] twoDarray[:, 0] # The above two operations are equivalent, can you work out how? In the 1st case, an array of row indices and an array of column indices were passed. The corresponding elements located at (0,0), (1,0), and (2, 0) were retrieved. In the 2nd cell, the slicing operator selects all the rows and the 0th column. # # To see the slicing operation more explicitly, see below: twoDarray[0:3, 0] # ### Array operations # # All operations on the numpy object are by default applied to every element in the array x x + 10 x x = (x + 10) / 10 x x[3] = x[0] * 5 x # ### Exercise 03: Using both arange and linspace # # 1. Create an array of length 5, 10, 100, 1000 of equally spaced numbers between 0 and 1 # 1. Create a 2 dimensional array, "arr", of shape 3x3, with the first row having all 1's, second row having all 2's, and the third row having all 3's # 1. Access the diagonal elements of "arr" created above and multiply it by 10 # 1. Access the first column of "arr" and add it with the third column of "arr" # ## Exercise 04: Fancy indexing # # 1. Create an array ranging from 0 to 100: arange(0, 100, 1). 
# 1. Reshape this array into 10 rows and 10 columns. # 1. Using arrays of indices, select the diagonal elements alone. # 1. Using slicing operators, select the 1st 3 elements of the 1st row. # 1. Select all elements that are greater than 49. # 1. use np.triu_indices to extract the upper triangular elements of the matrix (all elements that lie on or above the diagonal) # + # # %load ./solutions/sol_np_reshape.py # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python2 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import xarray as xr import numpy as np import os,sys,glob os.getcwd() fl_wrfinput=glob.glob('wrfinput*') fl_wrfinput ds_disk = xr.open_dataset('wrfinput_d01.save.nc') dict_rules_suews={ 'LAI_SUEWS':[('lai',1,3)], 'albDecTr_SUEWS':0, 'albEveTr_SUEWS':0, 'albGrass_SUEWS':0, 'DecidCap_SUEWS':0, 'porosity_SUEWS':0, 'GDD_SUEWS':[('gdd',1,5)], 'HDD_SUEWS':[('hdd',1,6),('day',3,2)], 'state_SUEWS':[('nsurf',1,7)], 'soilmoist_SUEWS':[('nsurf',1,7)], 'surf_var_SUEWS':[('nsurf',1,7)], 'qn1_av_SUEWS':0, 'qn1_s_SUEWS':0, 'dqndt_SUEWS':0, 'dqnsdt_SUEWS':0, 'MeltWaterStore':[('nsurf',1,7)], 'SnowAlb':0, 'WUDay':[('wu',1,9)], 'z0m_in':0, 'zdm_in':0 } # + def gen_var_expand(name,rules,var_base=ds_disk['T2'].copy(deep=True)): var_new=var_base.rename(name.upper()); var_attrs=var_base.attrs for x_rule in rules: # print x_rule (name_dim,axis,rep) = x_rule var_new=np.repeat( var_new.expand_dims( name_dim,axis=axis),rep,axis=axis) var_new.attrs=var_attrs var_new.attrs['MemoryOrder']='XYZ' var_new.attrs[u'description']=name var_new.attrs['stagger']='Z' return var_new def gen_var_keep(name,rules,var_base=ds_disk['T2'].copy(deep=True)): var_new=var_base.rename(name) var_new.attrs[u'description']=name return var_new def gen_var(name,rules,var_base=ds_disk['T2'].copy()): if type(rules) is list: var_new=gen_var_expand(name,rules,var_base) else: var_new=gen_var_keep(name,rules,var_base) return var_new # - ds_new=xr.Dataset({name.upper():gen_var(name,rules) for name,rules in dict_rules_suews.items()}) # ds_disk.merge(ds_new) # ds_disk[['T2','Q2']].merge() # ds_new[['z0m_in','zdm_in']] ds_new['GDD_SUEWS'].attrs ds_disk['T2'].attrs ds_disk.SMOIS.attrs # ds_disk.merge(ds_new) ds_disk['SMOIS'].copy() ds_new['SnowAlb'].attrs ds_new['LAI_SUEWS'].attrs for var in ds_disk.variables: if 'FieldType' in ds_disk[var].attrs: print var,ds_disk[var].ndim,ds_disk[var].attrs['FieldType'] ds_merged=ds_disk.update(ds_new) for var in ds_merged.data_vars.keys(): if 'coordinates' in ds_merged[var].attrs: print var del ds_merged[var].attrs['coordinates'] # ds_merged['ALBBCK']=ds_merged['ALBBCK']+0.2 ds_merged.to_netcdf('wrfinput_d01.new1.nc',mode='w',format='NETCDF4_CLASSIC') rules=zip(*[('hdd',2,6),('day',4,2)]) var_base=ds_disk['T2'].copy() var_new=var_base.copy(); var_new=var_new.expand_dims(rules[0],axis=rules[1]) var_new=np.repeat(var_new,) ds_new.var() ds_disk['LAI_SUEWS'].shape ds_disk['SMOIS'].coords['XLAT'] xx=ds_disk['T2'].copy() yy=np.repeat(xx.expand_dims('nsurf',axis=2),5,axis=2) yy ds_disk[['T2','Q2']] yy=np.repeat(yy,5,axis=2) yy ds_disk['LAI_SUEWS']=xx xx=ds_disk['T2'].copy() yy=np.repeat(xx.expand_dims('nsurf',axis=2),5,axis=2) yy xx=xr.DataArray(np.ones((1,5,7,5))) xx ds_disk.coords['XLAT'] for var in ds_disk.data_vars.keys(): if 'coordinates' in ds_disk[var].attrs: print var del ds_disk[var].attrs['coordinates'] ds_disk.to_netcdf('xx.nc') da_test.name='LAI_SUEWS' 
da_test.attrs.update(description='test var for SUEWS') da_test ds_disk[['T2','Q2']].to_netcdf('xx.nc',mode='w',format='NETCDF3_CLASSIC') del ds_disk['Q2'].attrs['coordinates'] del ds_disk['T2'].attrs['coordinates'] ds_xx=xr.open_dataset('xx.nc') ds_xx['T2'] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + from Anechoic import get_ttv train, test, validate = get_ttv('./data', columns='reflection_count') # + import torch device = 'cuda' if torch.cuda.is_available() else 'cpu' print(f'Using {device} device') # - from torch import nn class NeuralNetwork(nn.Module): def __init__(self, input_shape, output_shape): super(NeuralNetwork, self).__init__() self.flatten = nn.Flatten() self.linear_relu_stack = nn.Sequential( nn.Linear(input_shape, 512), nn.ReLU(), nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, output_shape) ) def forward(self, x): x = self.flatten(x) logits = self.linear_relu_stack(x) return logits model = NeuralNetwork(100, 10).to(device) print(model) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="lKU8kmSs65xv" # # **Step 1**: _Hello World_ # + [markdown] id="ZrjGQON37lk2" # ## Installing Hub # + [markdown] id="9pcfYcPu7KxY" # Hub can be installed via `pip`. # + id="oC_N5qOx6o0d" from IPython.display import clear_output # !pip3 install hub clear_output() # + [markdown] id="z4_rfJ_GVxLz" # **By default, Hub does not install dependencies for audio, video, and google-cloud (GCS) support. They can be installed using:** # + id="AwpEic3jV2nV" #pip install hub[audio] -> Audio support via miniaudio #pip install hub[video] -> Video support via pyav #pip install hub[gcp] -> GSS support via google-* dependencies #pip install hub[all] -> Installs everything - audio, video and GCS support # + [markdown] id="0N-f2SYU7OjQ" # ## Fetching your first Hub dataset # + [markdown] id="9aNFn7rZ7qxP" # Begin by loading in [MNIST](https://en.wikipedia.org/wiki/MNIST_database), the hello world dataset of machine learning. # # First, load the `Dataset` by pointing to its storage location. Datasets hosted on the Activeloop Platform are typically identified by the namespace of the organization followed by the dataset name: `activeloop/mnist-train`. # + id="izccjS4k7NvX" import hub dataset_path = 'hub://activeloop/mnist-train' ds = hub.load(dataset_path) # Returns a Hub Dataset but does not download data locally # + [markdown] id="bR5n8yYg-0Wu" # ## Reading Samples From a Hub Dataset # + [markdown] id="0XdaAKaS-3NO" # Data is not immediately read into memory because Hub operates [lazily](https://en.wikipedia.org/wiki/Lazy_evaluation). You can fetch data by calling the `.numpy()` method, which reads data into a NumPy array. 
# # + id="6qpQeNoq-xfo" # Indexing W = ds.images[0].numpy() # Fetch image return a NumPy array X = ds.labels[0].numpy(aslist=True) # Fetch label and store as list of NumPy array # Slicing Y = ds.images[0:100].numpy() # Fetch 100 images and return a NumPy array if possible # This method produces an exception if # the shape of the images is not equal Z = ds.labels[0:100].numpy(aslist=True) # Fetch 100 labels and store as list of # NumPy arrays # + id="eNGHXfdKwJ7W" print('X is {}'.format(X)) # + [markdown] id="tmi2w0_e_LtH" # Congratulations, you've got Hub working on your local machine! 🤓 # + [markdown] id="G-DM6PKq_di2" # # **Step 2**: _Creating Hub Datasets_ # *Creating and storing Hub Datasets manually.* # + [markdown] id="FEzK8LTe_gJW" # Creating Hub datasets is simple, you have full control over connecting your source data (files, images, etc.) to specific tensors in the Hub Dataset. # + [markdown] id="EGXGvKU1qsp1" # ## Manual Creation # + [markdown] id="CQk29Mnhqn1V" # Let's follow along with the example below to create our first dataset. First, download and unzip the small classification dataset below called the *animals dataset*. # + id="QDJRrlDP_DsW" # Download dataset from IPython.display import clear_output # !wget https://github.com/activeloopai/examples/raw/main/colabs/starting_data/animals.zip clear_output() # + id="SIQf9cY6_vyn" # Unzip to './animals' folder # !unzip -qq /content/animals.zip # + [markdown] id="IIz-MYImAfCg" # The dataset has the following folder structure: # + [markdown] id="IuhZZqVIAqj_" # animals # - cats # - image_1.jpg # - image_2.jpg # - dogs # - image_3.jpg # - image_4.jpg # + [markdown] id="6Lez5uCJAto4" # Now that you have the data, you can **create a Hub `Dataset`** and initialize its tensors. Running the following code will create a Hub dataset inside of the `./animals_hub` folder. # # + id="qtzmT0iBNV23" import hub from PIL import Image import numpy as np import os ds = hub.empty('./animals_hub') # Creates the dataset # + [markdown] id="PQ5yt0aaNeP5" # Next, let's inspect the folder structure for the source dataset `'./animals'` to find the class names and the files that need to be uploaded to the Hub dataset. # + id="ubGLkgG8Njbb" # Find the class_names and list of files that need to be uploaded dataset_folder = './animals' class_names = os.listdir(dataset_folder) files_list = [] for dirpath, dirnames, filenames in os.walk(dataset_folder): for filename in filenames: files_list.append(os.path.join(dirpath, filename)) # + [markdown] id="CtVSh0FnNmyI" # Next, let's **create the dataset tensors and upload metadata**. Check out our page on [Storage Synchronization](https://docs.activeloop.ai/how-hub-works/storage-synchronization) for details about the `with` syntax below. # # + id="a6QDC6caNpiH" with ds: # Create the tensors with names of your choice. ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg') ds.create_tensor('labels', htype = 'class_label', class_names = class_names) # Add arbitrary metadata - Optional ds.info.update(description = 'My first Hub dataset') ds.images.info.update(camera_type = 'SLR') # + [markdown] id="TD-hCSBKBA_m" # **Note:** Specifying `htype` and `dtype` is not required, but it is highly recommended in order to optimize performance, especially for large datasets. Use `dtype` to specify the numeric type of tensor data, and use `htype` to specify the underlying data structure. More information on `htype` can be found [here](https://api-docs.activeloop.ai/htypes.html). 
# + [markdown] id="HR4kLo6YBOhO" # Finally, let's **populate the data** in the tensors. # + id="0QRAyS-HA-Fp" with ds: # Iterate through the files and append to hub dataset for file in files_list: label_text = os.path.basename(os.path.dirname(file)) label_num = class_names.index(label_text) #Append data to the tensors ds.append({'images': hub.read(file), 'labels': np.uint32(label_num)}) # + [markdown] id="lWqYzfI1DCPG" # **Note:** `ds.append({'images': hub.read(path)})` is functionally equivalent to `ds.append({'images': PIL.Image.fromarray(path)})`. However, the `hub.read()` method is significantly faster because it does not decompress and recompress the image if the compression matches the `sample_compression` for that tensor. Further details are available in the next section. # # **Note:** In order to maintain proper indexing across tensors, `ds.append({...})` requires that you to append to all tensors in the dataset. If you wish to skip tensors during appending, please use `ds.append({...}, skip_ok = True)` or append to a single tensor using `ds.tensor_name.append(...)`. # + [markdown] id="WzHVb521XSud" # Check out the first image from this dataset. More details about Accessing Data are available in **Step 4**. # + id="OMG2oif0XSDZ" Image.fromarray(ds.images[0].numpy()) # + [markdown] id="g8E_f-eXqy1c" # ## Automatic Creation # + [markdown] id="MCjy5dH9q3Gi" # The above animals dataset can also be converted to Hub format automatically using 1 line of code: # + id="CUtOL7F8q1xB" src = "./animals" dest = './animals_hub_auto' ds = hub.ingest(src, dest) # + id="o6xboPUKrs1l" Image.fromarray(ds.images[0].numpy()) # + [markdown] id="03b3r7owq7o8" # **Note**: Automatic creation currently only supports image classification datasets, though support for other dataset types is continually being added. A full list of supported datasets is available [here](https://api-docs.activeloop.ai/#hub.ingest). # + [markdown] id="PK_wpkYsDdH2" # ## Creating Tensor Hierarchies # + [markdown] id="1btlOtBDDe4G" # Often it's important to create tensors hierarchically, because information between tensors may be inherently coupled—such as bounding boxes and their corresponding labels. Hierarchy can be created using tensor `groups`: # + id="ICg3Z1z8CRGN" ds = hub.empty('./groups_test') # Creates the dataset # Create tensor hierarchies ds.create_group('my_group') ds.my_group.create_tensor('my_tensor') # Alternatively, a group can us created using create_tensor with '/' ds.create_tensor('my_group_2/my_tensor') # Automatically creates the group 'my_group_2' # + [markdown] id="wE-rWBCkpI9T" # Tensors in groups are accessed via: # + id="78s3Oa_jpKXV" ds.my_group.my_tensor #OR ds['my_group/my_tensor'] # + [markdown] id="3fhjWZ9hDvKe" # For more detailed information regarding accessing datasets and their tensors, check out **Step 4**. # + [markdown] id="46H4nEnZDv5m" # # **Step 3**: _Understanding Compression_ # # *Using compression to achieve optimal performance.* # + [markdown] id="_ajldDggEp8O" # **Data in Hub can be stored in raw uncompressed format. However, compression is highly recommended for achieving optimal performance in terms of speed and storage.** # # # Compression is specified separately for each tensor, and it can occur at the `sample` or `chunk` level. 
For example, when creating a tensor for storing images, you can choose the compression technique for the image samples using the `sample_compression` input: # + id="uOw9hc0jDpQY" import hub # Set overwrite = True for re-runability ds = hub.empty('./compression_test', overwrite = True) ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg') # + [markdown] id="nv4ktXoCE2K2" # In this example, every image added in subsequent `.append(...)` calls is compressed using the specified `sample_compression` method. # + [markdown] id="8WaFBxrEE9GI" # ### **Choosing the Right Compression** # + [markdown] id="yM8VtZ98FCUu" # There is no single answer for choosing the right compression, and the tradeoffs are described in detail in the next section. However, good rules of thumb are: # # # # 1. For data that has application-specific compressors (`image`, `audio`, `video`,...), choose the sample_compression technique that is native to the application such as `jpg`, `mp3`, `mp4`,... # 2. For other data containing large samples (i.e. large arrays with >100 values), `lz4` is a generic compressor that works well in most applications. `lz4` can be used as a `sample_compression` or `chunk_compression`. In most cases, `sample_compression` is sufficient, but in theory, `chunk_compression` produces slightly smaller data. # 3. For other data containing small samples (i.e. labels with <100 values), it is not necessary to use compression. # + [markdown] id="hotuAwslFbAu" # ### **Compression Tradeoffs** # + [markdown] id="QrWvN558v4xn" # **Lossiness -** Certain compression techniques are lossy, meaning that there is irreversible information loss when compressing the data. Lossless compression is less important for data such as images and videos, but it is critical for label data such as numerical labels, binary masks, and segmentation data. # # # **Memory -** Different compression techniques have substantially different memory footprints. For instance, png vs jpeg compression may result in a 10X difference in the size of a Hub dataset. # # # **Runtime -** The primary variables affecting download and upload speeds for generating usable data are the network speed and available compute power for processing the data . In most cases, the network speed is the limiting factor. Therefore, the highest end-to-end throughput for non-local applications is achieved by maximizing compression and utilizing compute power to decompress/convert the data to formats that are consumed by deep learning models (i.e. arrays). # # # **Upload Considerations -** When applicable, the highest uploads speeds can be achieved when the `sample_compression` input matches the compression of the source data, such as: # + id="qkJKv00UFexo" # sample_compression is "jpg" and appended image is "jpeg" ds.create_tensor('images_jpg', htype = 'image', sample_compression = 'jpg') ds.images_jpg.append(hub.read('./animals/dogs/image_3.jpg')) # + [markdown] id="3LMsd3K9GJJ9" # In this case, the input data is a `.jpg`, and the hub `sample_compression` is `jpg`. # # However, a mismatch between compression of the source data and sample_compression in Hub results in significantly slower upload speeds, because Hub must decompress the source data and recompress it using the specified `sample_compression` before saving. 
# + id="c5MaC1lBwa3a" # sample_compression is "jpg" and appended image is "jpeg" ds.create_tensor('images_png', htype = 'image', sample_compression = 'png') ds.images_png.append(hub.read('./animals/dogs/image_3.jpg')) # + [markdown] id="EXGCWxu7wjHX" # **NOTE:** Due to the computational costs associated with decompressing and recompressing data, it is important that you consider the runtime implications of uploading source data that is compressed differently than the specified sample_compression. # + [markdown] id="JGo-E8Z8Ho6F" # # **Step 4**: _Accessing Data_ # _Accessing and loading Hub Datasets._ # + [markdown] id="A8Mye_Z5Htut" # ## Loading Datasets # + [markdown] id="0DI_D7flHvEN" # Hub Datasets can be loaded and created in a variety of storage locations with minimal configuration. # + id="I9dl3mfENulO" import hub # + id="sltdan65HmRN" # Local Filepath ds = hub.load('./animals_hub') # Dataset created in Step 2 in this Colab Notebook # + id="41FBvx25NWMN" # S3 # ds = hub.load('s3://my_dataset_bucket', creds={...}) # + id="PuacdMOgNNmT" # Public Dataset hosted by Activeloop ## Activeloop Storage - See Step 6 ds = hub.load('hub://activeloop/k49-train') # + id="ocs18sNqNQfG" # Dataset in another workspace on Activeloop Platform # ds = hub.load('hub://workspace_name/dataset_name') # + [markdown] id="ZD60qFaAH2qg" # **Note:** Since `ds = hub.dataset(path)` can be used to both create and load datasets, you may accidentally create a new dataset if there is a typo in the path you provided while intending to load a dataset. If that occurs, simply use `ds.delete()` to remove the unintended dataset permanently. # + [markdown] id="1Kb9q_ZqIARN" # ## Referencing Tensors # + [markdown] id="bq5WSI5LIClV" # Hub allows you to reference specific tensors using keys or via the `.` notation outlined below. # # # **Note:** data is still not loaded by these commands. # + id="jr_ZEtBnN1Wp" ds = hub.dataset('hub://activeloop/k49-train') # + id="24trRqlLH0Tl" ### NO HIERARCHY ### ds.images # is equivalent to ds['images'] ds.labels # is equivalent to ds['labels'] ### WITH HIERARCHY ### # ds.localization.boxes # is equivalent to # ds['localization/boxes'] # ds.localization.labels # is equivalent to # ds['localization/labels'] # + [markdown] id="bjmnRLWHINXG" # ## Accessing Data # + [markdown] id="js3jsmBHIPqu" # Data within the tensors is loaded and accessed using the `.numpy()` command: # + id="6QUWjQNGILWQ" # Indexing ds = hub.dataset('hub://activeloop/k49-train') W = ds.images[0].numpy() # Fetch an image and return a NumPy array X = ds.labels[0].numpy(aslist=True) # Fetch a label and store it as a # list of NumPy arrays # Slicing Y = ds.images[0:100].numpy() # Fetch 100 images and return a NumPy array # The method above produces an exception if # the images are not all the same size Z = ds.labels[0:100].numpy(aslist=True) # Fetch 100 labels and store # them as a list of NumPy arrays # + [markdown] id="DykgrsBEIfk1" # **Note:** The `.numpy()` method will produce an exception if all samples in the requested tensor do not have a uniform shape. If that's the case, running `.numpy(aslist=True)` solves the problem by returning a list of NumPy arrays, where the indices of the list correspond to different samples. # + [markdown] id="K385Fpvmqc0l" # #**Step 5**: *Visualizing Datasets* # + [markdown] id="uSIK-TCAqqQF" # One of Hub's core features is to enable users to visualize and interpret large amounts of data. Let's load the COCO dataset, which is one of the most popular datasets in computer vision. 
# + id="_YRBC6ehqpgz" import hub ds = hub.load('hub://activeloop/coco-train') # + [markdown] id="o5TW4f4Zqzlw" # The tensor layout for this dataset can be inspected using: # + id="YU10NNvNqz54" ds.summary() # + [markdown] id="StDTRjIJq3qI" # The dataset can be [visualized in Platform](https://app.activeloop.ai/activeloop/coco-train), or using an iframe in a jupyter notebook: # + id="7G3X22Tdq5tn" ds.visualize() # + [markdown] id="L713rdfJtVKS" # **Note:** Visualizing datasets in [Activeloop Platform](https://app.activeloop.ai/) will unlock more features and faster performance compared to visualization in Jupyter notebooks. # + [markdown] id="ul8Q0CK6rH50" # ##Visualizing your own datasets # + [markdown] id="DQzmJazJrOaF" # Any hub dataset can be visualized using the methods above as long as it follows the conventions necessary for the visualization engine to interpret and parse the data. These conventions [are explained here](https://docs.activeloop.ai/dataset-visualization). # + [markdown] id="NQipSo2OF_lB" # # **Step 6**: _Using Activeloop Storage_ # # _Storing and loading datasets from Activeloop Platform Storage._ # + [markdown] id="2TJfXx2pgG7P" # ## Register # + [markdown] id="bA39G647GHX4" # You can store your Hub Datasets with Activeloop by first creating an account in [Activeloop Platform](https://app.activeloop.ai/) or in the CLI using: # + id="PCDC-5dmGFdJ" # !activeloop register # + [markdown] id="o-nf5Sb-gMED" # ## Login # + [markdown] id="o1iZpxtOGJ0N" # In order for the Python API to authenticate with the Activeloop Platform, you should log in from the CLI using: # + id="Z0OUCCMGGLv0" # !activeloop login # prompts for inputting username and password will follow ... # Alternatively, you can directly input your username and password in the same line: # # !activeloop login -u my_username -p my_password # + [markdown] id="FvBxhaAYGNOi" # You can then access or create Hub Datasets by passing the Activeloop Platform path to `hub.dataset()`. # + id="FeL0a2zwGXeU" import hub # platform_path = 'hub://workspace_name/dataset_name' # 'hub://jane_smith/my_awesome_dataset' ds = hub.dataset(platform_path) # + [markdown] id="huQQ1M8kGcyL" # **Note**: When you create an account in Activeloop Platform, a default workspace is created that has the same name as your username. You are also able to create other workspaces that represent organizations, teams, or other collections of multiple users. # + [markdown] id="vUdVLQUGGnsA" # Public datasets such as `hub://activeloop/mnist-train` can be accessed without logging in. # + [markdown] id="vgfj-ldqgZa_" # ## Tokens # + [markdown] id="HhguQ8IxgeBd" # Once you have an Activeloop account, you can create tokens in [Activeloop Platform](https://app.activeloop.ai/) (Organization Details -> API Tokens) and pass them to python commands that require authentication using: # + id="sxETFtMlgw0E" #ds = hub.load(platform_path, token = 'xyz') # + [markdown] id="LVma__gxGq97" # # **Step 7**: _Connecting Hub Datasets to ML Frameworks_ # # _Connecting Hub Datasets to machine learning frameworks such as PyTorch and TensorFlow._ # + [markdown] id="8r-AkeJMGwxB" # You can connect Hub Datasets to popular ML frameworks such as PyTorch and TensorFlow using minimal boilerplate code, and Hub takes care of the parallel processing! # + [markdown] id="Bnr9ItdkGzDk" # ## PyTorch # + [markdown] id="wKkrCv2NG1GG" # You can train a model by creating a PyTorch DataLoader from a Hub Dataset using `ds.pytorch()`. 
# + id="HP3C2uoAGnNK" import hub from torchvision import datasets, transforms, models ds = hub.dataset('hub://activeloop/cifar100-train') # Hub Dataset # + [markdown] id="F3J24ptPAyTw" # The transform parameter in `ds.pytorch()` is a dictionary where the `key` is the tensor name and the `value` is the transformation function that should be applied to that tensor. If a specific tensor's data does not need to be returned, it should be omitted from the keys. If a tensor's data does not need to be modified during preprocessing, the transformation function is set as `None`. # + id="AvqbqsCnA4P3" tform = transforms.Compose([ transforms.ToPILImage(), # Must convert to PIL image for subsequent operations to run transforms.RandomRotation(20), # Image augmentation transforms.ToTensor(), # Must convert to pytorch tensor for subsequent operations to run transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), ]) #PyTorch Dataloader dataloader= ds.pytorch(batch_size = 16, num_workers = 2, transform = {'images': tform, 'labels': None}, shuffle = True) # + [markdown] id="GX_MIn_rA70v" # You can iterate through the Hub DataLoader just like you would for a Pytorch DataLoader. Loading the first batch of data takes the longest time because the shuffle buffer is filled before any data is returned. # + id="NuowobdbA96I" for data in dataloader: print(data) break # Training Loop # + [markdown] id="-EQ2LUPydfPo" # **Note:** Some datasets such as imagenet contain both grayscale and color images, which can cause errors when the transformed images are passed to the model. To convert only the grayscale images to color format, you can add this Torchvision transform to your pipeline: # + id="3Wi0K1aVdjMr" # transforms.Lambda(lambda x: x.repeat(int(3/x.shape[0]), 1, 1)) # + [markdown] id="x5bX92ZUG_2F" # ## TensorFlow # + [markdown] id="jeRUG-arHP1F" # Similarly, you can convert a Hub Dataset to a TensorFlow Dataset via the `tf.Data` API. # + id="I1bma0HSHOAO" ds # Hub Dataset object, to be used for training ds_tf = ds.tensorflow() # A TensorFlow Dataset # + [markdown] id="guao84xTb4Zg" # # **Step 8**: _Parallel Computing_ # # _Running computations and processing data in parallel._ # + [markdown] id="BVcZ28epcKRc" # Hub enables you to easily run computations in parallel and significantly accelerate your data processing workflows. This example primarily focuses on parallel dataset uploading, and other use cases such as dataset transformations can be found in [this tutorial](https://docs.activeloop.ai/tutorials/data-processing-using-parallel-computing). # # Parallel compute using Hub has two core elements: #1. defining a function or pipeline that will run in parallel and #2. evaluating it using the appropriate inputs and outputs. Let's start with #1 by defining a function that processes files and appends their data to the labels and images tensors. # + [markdown] id="ZWNxzF1pcWxn" # **Defining the parallel computing function** # # The first step for running parallel computations is to define a function that will run in parallel by decorating it using `@hub.compute`. In the example below, `file_to_hub` converts data from files into hub format, just like in **Step 2: Creating Hub Datasets Manually**. 
If you have not completed Step 2, please complete the section that downloads and unzips the *animals* dataset # + id="JMjMF_-LcHtl" import hub from PIL import Image import numpy as np import os @hub.compute def file_to_hub(file_name, sample_out, class_names): ## First two arguments are always default arguments containing: # 1st argument is an element of the input iterable (list, dataset, array,...) # 2nd argument is a dataset sample # Other arguments are optional # Find the label number corresponding to the file label_text = os.path.basename(os.path.dirname(file_name)) label_num = class_names.index(label_text) # Append the label and image to the output sample sample_out.labels.append(np.uint32(label_num)) sample_out.images.append(hub.read(file_name)) return sample_out # + [markdown] id="d-ZhXH-pcgT8" # In all functions decorated using `@hub.compute`, the first argument must be a single element of any input iterable that is being processed in parallel. In this case, that is a filename `file_name`, becuase `file_to_hub` reads image files and populates data in the dataset's tensors. # # The second argument is a dataset sample `sample_out`, which can be operated on using similar syntax to dataset objects, such as `sample_out.append(...)`, `sample_out.extend(...)`, etc. # # The function decorated using `@hub.compute` must return `sample_out`, which represents the data that is added or modified by that function. # + [markdown] id="TIUiNuQqchnH" # **Executing the transform** # # To execute the transform, you must define the dataset that will be modified by the parallel computation. # + id="TZfEn1g_cno_" ds = hub.empty('./animals_hub_transform') # Creates the dataset # + [markdown] id="u7FIReeLcpka" # Next, you define the input iterable that describes the information that will be operated on in parallel. In this case, that is a list of files `files_list` from the animals dataset in Step 2. # + id="8CwypbTxcrx0" # Find the class_names and list of files that need to be uploaded dataset_folder = './animals' class_names = os.listdir(dataset_folder) files_list = [] for dirpath, dirnames, filenames in os.walk(dataset_folder): for filename in filenames: files_list.append(os.path.join(dirpath, filename)) # + [markdown] id="5IC-VRKVcuRI" # You can now create the tensors for the dataset and **run the parallel computation** using the `.eval` syntax. Pass the optional input arguments to `file_to_hub`, and we skip the first two default arguments `file_name` and `sample_out`. # # The input iterable `files_list` and output dataset `ds` is passed to the `.eval` method as the first and second argument respectively. # + id="p4H4Fug0cxJG" with ds: ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg') ds.create_tensor('labels', htype = 'class_label', class_names = class_names) file_to_hub(class_names=class_names).eval(files_list, ds, num_workers = 2) # + id="BfWc3_fkhr0W" Image.fromarray(ds.images[0].numpy()) # + [markdown] id="5xTj7kt0jrd3" # Congrats! You just created a dataset using parallel computing! 🎈 # + [markdown] id="iXRCphquSFs3" # # **Step 9**: _Dataset Version Control_ # # *Managing changes to your datasets using Version Control.* # + [markdown] id="4y_V53L8SCuB" # Hub dataset version control allows you to manage changes to datasets with commands very similar to Git. It provides critical insights into how your data is evolving, and it works with datasets of any size! # # Let's check out how dataset version control works in Hub! 
If you haven't done so already, please download and unzip the *animals* dataset from **Step 2**. # # First let's create a hub dataset in the `./version_control_hub` folder. # + id="YgEWowxySUDL" import hub import numpy as np from PIL import Image # Set overwrite = True for re-runability ds = hub.dataset('./version_control_hub', overwrite = True) # Create a tensor and add an image with ds: ds.create_tensor('images', htype = 'image', sample_compression = 'jpeg') ds.images.append(hub.read('./animals/cats/image_1.jpg')) # + [markdown] id="DNLh-JE5pkS_" # The first image in this dataset is a picture of a cat: # + id="w4hVjQaVpksW" Image.fromarray(ds.images[0].numpy()) # + [markdown] id="_CEF-kjySdLp" # ##Commit # + [markdown] id="joKq3VV0SdEW" # To commit the data added above, simply run `ds.commit`: # # + id="pj9uTZeSTGwT" first_commit_id = ds.commit('Added image of a cat') print('Dataset in commit {} has {} samples'.format(first_commit_id, len(ds))) # + [markdown] id="Tc2-MRmaSc4x" # Next, let's add another image and commit the update: # + id="zArtG0phTZRv" with ds: ds.images.append(hub.read('./animals/dogs/image_3.jpg')) second_commit_id = ds.commit('Added an image of a dog') print('Dataset in commit {} has {} samples'.format(second_commit_id, len(ds))) # + [markdown] id="fYjnY_1RTcjM" # The second image in this dataset is a picture of a dog: # + id="gbPu9JoFp0ap" Image.fromarray(ds.images[1].numpy()) # + [markdown] id="kWvgUH25Tj8V" # ##Log # + [markdown] id="CiqOb8POTkb4" # The commit history starting from the current commit can be show using `ds.log`: # # + id="XQSxvzIcTuU-" log = ds.log() # + [markdown] id="TgefyAuATwi4" # This command prints the log to the console and also assigns it to the specified variable log. The author of the commit is the username of the [Activeloop account](https://docs.activeloop.ai/getting-started/using-activeloop-storage) that logged in on the machine. # + [markdown] id="2JRpqeYqV-oT" # ##Branch # + [markdown] id="4TWcOT4RV-d4" # Branching takes place by running the `ds.checkout` command with the parameter `create = True`. Let's create a new branch `dog_flipped`, flip the second image (dog), and create a new commit on that branch. # + id="eY-CZmzrXr0X" ds.checkout('dog_flipped', create = True) with ds: ds.images[1] = np.transpose(ds.images[1], axes=[1,0,2]) flipped_commit_id = ds.commit('Flipped the dog image') # + [markdown] id="VUUMXFKEXuIq" # The dog image is now flipped and the log shows a commit on the `dog_flipped` branch as well as the previous commits on `main`: # + id="DIP6V3VFqPKS" Image.fromarray(ds.images[1].numpy()) # + id="O3-UgHZPX_0u" ds.log() # + [markdown] id="HCrKgp6FYDG9" # ##Checkout # + [markdown] id="07nHcIIiYFtW" # A previous commit of branch can be checked out using `ds.checkout`: # + id="YZe8iXjlYEdf" ds.checkout('main') Image.fromarray(ds.images[1].numpy()) # + [markdown] id="7AZXuEVYYVHm" # As expected, the dog image on `main` is not flipped. # + [markdown] id="gmydIxas3XsV" # ## Diff # + [markdown] id="dTEuB-4C3a-B" # Understanding changes between commits is critical for managing the evolution of datasets. Hub's `ds.diff` function enables users to determine the number of samples that were added, removed, or updated for each tensor. 
The function can be used in 3 ways: # + id="XhlPmK9E37Do" ds.diff() # Diff between the current state and the last commit # + id="tCa8-nlJ4Dxg" ds.diff(first_commit_id) # Diff between the current state and a specific commit # + id="Bj2Yez624Ecb" ds.diff(second_commit_id, first_commit_id) # Diff between two specific commits # + [markdown] id="i1GqH1JvYkNP" # ##HEAD Commit # # + [markdown] id="RbiRZ0eGiBrz" # Unlike Git, Hub's version control does not have a staging area because changes to datasets are not stored locally before they are committed. All changes are automatically reflected in the dataset's permanent storage (local or cloud). **Therefore, any changes to a dataset are automatically stored in a HEAD commit on the current branch**. This means that the uncommitted changes do not appear on other branches. Let's see how this works: # # You should currently be on the `main` branch, which has 2 samples. Let's adds another image: # # + id="FwuzyJUViZC6" print('Dataset on {} branch has {} samples'.format('main', len(ds))) with ds: ds.images.append(hub.read('./animals/dogs/image_4.jpg')) print('After updating, the HEAD commit on {} branch has {} samples'.format('main', len(ds))) # + [markdown] id="p3qePpVFqkG9" # The 3rd sample is also an image of a dog: # + id="tDfKKuhLqlMM" Image.fromarray(ds.images[2].numpy()) # + [markdown] id="4brOnBdyiq6p" # Next, if you checkout `dog_flipped` branch, the dataset contains 2 samples, which is sample count from when that branch was created. Therefore, the additional uncommitted third sample that was added to the `main` branch above is not reflected when other branches or commits are checked out. # + id="cvG-X9VqipM3" ds.checkout('dog_flipped') print('Dataset in {} branch has {} samples'.format('dog_flipped', len(ds))) # + [markdown] id="7aoAeA7vixsC" # Finally, when checking our the `main` branch again, the prior uncommitted changes and visible and they are stored in the `HEAD` commit on `main`: # + id="6DnXiwTmi6G9" ds.checkout('main') print('Dataset in {} branch has {} samples'.format('main', len(ds))) # + [markdown] id="ztVUV_BDqyHR" # The dataset now contains 3 samples and the uncommitted dog image is visible: # + id="ci0IHCP9q0In" Image.fromarray(ds.images[2].numpy()) # + [markdown] id="uinXs4r1i7Zz" # ##Merge - Coming Soon # # + [markdown] id="kQOGilvkjG2c" # Merging is a critical feature for collaborating on datasets, and Activeloop is currently working on an implementation. # + [markdown] id="Fz15ukH5jiIm" # Congrats! You just are now an expert in dataset version control!🎓 # + [markdown] id="vBM1DKntaXwS" # # **Step 10:** *Dataset Filtering* # + [markdown] id="W8K5WREdaf--" # Filtering and querying is an important aspect of data engineering because it enables users to focus on subsets of their datasets in order to obtain important insights, perform quality control, and train models on parts of their data. # # Hub enables you to perform queries using user-defined functions or Hub's Pythonic query language, all of which can be parallelized using our simple multi-processing API. # + [markdown] id="Rp9i4Flsai0p" # ## Filtering with user-defined-functions # + [markdown] id="ZtwDPVwmapjL" # The first step for querying using UDFs is to define a function that returns a boolean depending on whether an input sample in a dataset meets the user-defined condition. In this example, we define a function that returns `True` if the labels for a tensor are in the desired labels_list. 
If there are inputs to the filtering function other than `sample_in`, it must be decorated with `@hub.compute`. # + id="2lWTjNbUamyp" @hub.compute def filter_labels(sample_in, labels_list, class_names): text_label = class_names[sample_in.labels.numpy()[0]] return text_label in labels_list # + [markdown] id="zwITkEcmvdNo" # Let's load a dataset and specify the `labels_list` that we want to filter for. # + id="FLb_JgfxbYGA" import hub from PIL import Image ds = hub.load('hub://activeloop/cifar10-test') labels_list = ['automobile', 'ship'] # Desired labels for filtering class_names = ds.labels.info.class_names # Mapping from numeric to text labels # + [markdown] id="ccAa1QuZazC-" # The filtering function is executed using the `ds.filter()` command below, and it returns a virtual view of the dataset (`dataset_view`) that only contains the indices that met the filtering condition. Just like in the Parallel Computing API, the `sample_in` parameter does not need to be passed into the filter function when evaluating it, and multi-processing can be specified using the `scheduler` and `num_workers` parameters. # + id="mq4e5gRIbZ74" ds_view = ds.filter(filter_labels(labels_list, class_names), scheduler = 'threaded', num_workers = 0) # + [markdown] id="G2DowEHfbb2m" # The data in the returned `ds_view` can be accessed just like a regular dataset. # + id="4FCxLH0Nbec8" Image.fromarray(ds_view.images[0].numpy()) # + [markdown] id="5ibjQKAEbs9F" # **Note:** in most cases, multi-processing is not necessary for queries that involve simple data such as labels or bounding boxes. However, multi-processing significantly accelerates queries that must load rich data types such as images and videos. # + [markdown] id="-OjmRFFVb1pn" # ## Filtering using our pythonic query language # + [markdown] id="fgIOy-aJcOBF" # Queries can also be executed using hub's Pythonic query language. This UX is primarily intended for use in [Activeloop Platform](https://app.activeloop.ai/), but it can also be applied programmatically in Python. # + id="YS4CD0Wncb99" ds_view = ds.filter("labels == 'automobile' or labels == 'automobile'", scheduler = 'threaded', num_workers = 0) # + [markdown] id="FtaZboftcemF" # Tensors can be referred to by name, the language supports common logical operations (`in, ==, !=, >, <, >=, <=`), and numpy-like operators and indexing can be applied such as `'images.min > 5'`, `'images.shape[2]==1'`, and others. # + [markdown] id="KEi-DxIOcpHp" # Congrats! You just learned to filter data with hub! 
🎈 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Reproducing stance detection - UKP dataset # + import mlflow import pandas as pd mlflow.set_tracking_uri('http://localhost:5000') # - # ## In-topic runs = mlflow.search_runs(run_view_type=mlflow.tracking.client.ViewType.ACTIVE_ONLY, experiment_ids=[2, 4]) runs.groupby('params.task_name')['metrics.test_acc'].max().reset_index() # ## Cross-topic runs = mlflow.search_runs(run_view_type=mlflow.tracking.client.ViewType.ACTIVE_ONLY, experiment_ids=[6, 7]) cross_topic = runs.groupby(['params.task_name', 'params.test_id'])['metrics.test_acc'].max().unstack() cross_topic['mean'] = cross_topic.values.mean(axis=1) cross_topic # # REVIEWS dataset runs = mlflow.search_runs(run_view_type=mlflow.tracking.client.ViewType.ACTIVE_ONLY, experiment_ids=[3, 8]) runs.groupby('params.task_name')['metrics.test_acc', 'metrics.test_f1_macro'].max().reset_index() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:dl] # language: python # name: conda-env-dl-py # --- # ## Model training # > In this notebook we train a model for each stock and save it to disk. # ### 1) Here we import the data_manager and model modules # > * Here we use the `'companies()'` method from and `'data_manager'` module. # > * From the `'model'` module we use the `'model_selector'` method and the `'ModelLoader'` class. # + # %load_ext autoreload # %aimport data_manager # %aimport model # %autoreload 1 from data_manager import * from model import * # - # ### 2) Import company list # > Here we read a csv file and import a list of company trade symbols stocks = companies() symbols = stocks['Symbol'].values.tolist() print(symbols) # ### 3) RNN model training # > In this section we train a RNN model for each stock. # # In the model selection [notebook](./model_selection.ipynb) we seleted a bidirectional RNN model where we would pass parameter lists and then select the model with lowest MSE test. We use that model here to train every stock. print(final_model.__doc__) print(model_selector.__doc__) #our model selector input variables window_sizes = [5,7,10] dropouts = [0.25,0.4] learn_rates = [0.01,0.001] epochs = [100,200] batch_size = 50 # ### Train one stock # > Here lets train the first stock in our list and save to disk. # + result = model_selector(symbols[0], window_sizes, learn_rates, dropouts, epochs, batch_size,verbose=1) print("\nResults : ") print("-"*60) print(result[0]) print(result[1]) #save trained model ModelLoader.save(result[1]['ticker'],result[0],result[1]) print("Saved trained model for {}".format(result[1]['ticker'])) # - # ### Training remaining stocks # > * Here train the remaining stocks and save the model to directory. 
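# > The loop below calls `K.clear_session()` and `tf.reset_default_graph()` before training each new stock so that the previous model's graph and memory are released before the next model is built.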
# + from keras import backend as K import tensorflow as tf for ticker in symbols[1:]: #release memory K.clear_session() tf.reset_default_graph() result = model_selector(ticker, window_sizes, learn_rates, dropouts, epochs, batch_size,verbose=2) #save trained model ModelLoader.save(result[1]['ticker'],result[0],result[1]) print(" ==> Saved trained model for {}".format(result[1]['ticker'])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="E-QmT8-Qhgt9" # # 交差検証 # 「交差検証」により、過学習の問題に対処します。 # + [markdown] id="sW2dwjT2kYqQ" # ## データの準備 # 必要なライブラリの導入、データの読み込みと加工を行います。 # + id="Jwulz4eYsxZG" import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import StratifiedKFold from sklearn.metrics import accuracy_score import lightgbm as lgb train_data = pd.read_csv("train.csv") # 訓練データ test_data = pd.read_csv("test.csv") # テストデータ test_id = test_data["PassengerId"] # 結果の提出時に使用 data = pd.concat([train_data, test_data], sort=False) # テストデータ、訓練データを結合 # カテゴリデータの変換 data["Sex"].replace(["male", "female"], [0, 1], inplace=True) data["Embarked"].fillna(("S"), inplace=True) data["Embarked"] = data["Embarked"].map({"S": 0, "C": 1, "Q": 2}) # 欠損値を埋める data["Fare"].fillna(data["Fare"].mean(), inplace=True) data["Age"].fillna(data["Age"].mean(), inplace=True) # 新しい特徴量の作成 data["Family"] = data["Parch"] + data["SibSp"] # 不要な特徴量の削除 data.drop(["Name", "PassengerId", "SibSp", "Parch", "Ticket", "Cabin"], axis=1, inplace=True) # 入力と正解の作成 train_data = data[:len(train_data)] test_data = data[len(train_data):] t = train_data["Survived"] # 正解 x_train = train_data.drop("Survived", axis=1) # 訓練時の入力 x_test = test_data.drop("Survived", axis=1) # テスト時の入力 x_train.head() # + [markdown] id="mJypDLDg1lld" # ## 交差検証 # scikit-learnの`StratifiedKFold`により交差検証を行います。 # https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html # `StratifiedKFold`を使えば検証用データ中の0と1の割合を一定に保つことができます。 # + id="oeA3OUzc0Qld" y_valids = np.zeros((len(x_train),)) # 予測結果: 検証用データ y_tests = [] # 予測結果: テスト用データ skf = StratifiedKFold(n_splits=5, shuffle=True) # ハイパーパラメータの設定 params = { "objective": "binary", # 二値分類 "max_bin": 300, # 特徴量の最大分割数 "learning_rate": 0.05, # 学習率 "num_leaves": 32 # 分岐の末端の最大数 } categorical_features = ["Embarked", "Pclass", "Sex"] for _, (ids_train, ids_valid) in enumerate(skf.split(x_train, t)): x_tr = x_train.loc[ids_train, :] x_val = x_train.loc[ids_valid, :] t_tr = t[ids_train] t_val = t[ids_valid] # データセットの作成 lgb_train = lgb.Dataset(x_tr, t_tr, categorical_feature=categorical_features) lgb_val = lgb.Dataset(x_val, t_val, reference=lgb_train, categorical_feature=categorical_features) # モデルの訓練 model = lgb.train(params, lgb_train, valid_sets=[lgb_train, lgb_val], verbose_eval=20, # 学習過程の表示間隔 num_boost_round=500, # 学習回数の最大値 early_stopping_rounds=10) # 連続して10回性能が向上しなければ終了 # 結果を保持 y_valids[ids_valid] = model.predict(x_val, num_iteration=model.best_iteration) y_test = model.predict(x_test, num_iteration=model.best_iteration) y_tests.append(y_test) # + [markdown] id="wS5We0rU_vnG" # ## 正解率 # 検証用データによる予測結果と正解を使い、正解率を計算します。 # + id="XYpY6tgX0uqm" y_valids_bin = (y_valids>0.5).astype(int) # 結果を0か1に accuracy_score(t, y_valids_bin) # 正解率の計算 # + [markdown] id="tHN33EZaAgwM" # ## 提出用のデータ # 提出量データの形式を整え、CSVファイルに保存します。 # + 
id="TI9FZwRnAgwN" y_test_subm = sum(y_tests) / len(y_tests) # 平均をとる y_test_subm = (y_test > 0.5).astype(int) # 結果を0か1に # 形式を整える survived_test = pd.Series(y_test_subm, name="Survived") subm_data = pd.concat([test_id, survived_test], axis=1) # 提出用のcsvファイルを保存 subm_data.to_csv("submission_titanic_cv.csv", index=False) subm_data # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import unittest from abc import ABC from enum import Enum # creature removal (unsubscription) ignored in this exercise solution class Creature(ABC): def __init__(self, game, attack, defense): self.initial_defense = defense self.initial_attack = attack self.game = game @property def attack(self): pass @property def defense(self): pass def query(self, source, query): pass class WhatToQuery(Enum): ATTACK = 1 DEFENSE = 2 class Goblin(Creature): def __init__(self, game, attack=1, defense=1): super().__init__(game, attack, defense) @property def attack(self): q = Query(self.initial_attack, WhatToQuery.ATTACK) for c in self.game.creatures: c.query(self, q) return q.value @property def defense(self): q = Query(self.initial_defense, WhatToQuery.DEFENSE) for c in self.game.creatures: c.query(self, q) return q.value def query(self, source, query): if self != source and query.what_to_query == WhatToQuery.DEFENSE: query.value += 1 class GoblinKing(Goblin): def __init__(self, game): super().__init__(game, 3, 3) def query(self, source, query): if self != source and query.what_to_query == WhatToQuery.ATTACK: query.value += 1 else: super().query(source, query) class Query: def __init__(self, initial_value, what_to_query): self.what_to_query = what_to_query self.value = initial_value class Game: def __init__(self): self.creatures = [] # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import lightgbm as lgb import matplotlib.pyplot as plt # - # + DATA = '../data/run5' for fold_no in [1]: model_file = os.path.join(DATA, f"fold{fold_no}_model.txt") model = lgb.Booster(model_file=model_file) for importance_type in ['gain', 'split']: f, ax = plt.subplots(figsize=[7, 10]) lgb.plot_importance(model, max_num_features=50, ax=ax, importance_type=importance_type) plt.title("Light GBM Feature Importance") out_file = os.path.join(DATA, f"fold{fold_no}_importance_{importance_type}.png") plt.savefig(out_file) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: tensorflow # language: python # name: tensorflow # --- # ## Exercise 3 | Part 1: One-vs-all # ### =========== Part 1: Loading and Visualizing Data ============= # + from ex3 import * ## Setup the parameters you will use for this part of the exercise input_layer_size = 400 # 20x20 Input Images of Digits num_labels = 10 # 10 labels, from 1 to 10 # (note that we have mapped "0" to label 10) # Load Training Data print('Loading and Visualizing Data ...') from scipy import io as sio data = sio.loadmat('ex3data1.mat') # training data stored in arrays X, y X = data['X'] y = data['y'].reshape(-1) m = X.shape[0] # Randomly select 100 data points to display rand_indices = np.random.permutation(m) sel = X[rand_indices[:100], :] # 
%matplotlib inline _ = displayData(sel) # - # ### ============ Part 2a: Vectorize Logistic Regression ============ # + # Test case for lrCostFunction print('Testing lrCostFunction() with regularization') theta_t = np.array([-2, -1, 1, 2]) X_t = np.column_stack([np.ones(5), np.arange(0.1, 1.6, 0.1).reshape((5, 3), order='F')]) y_t = np.array([1, 0, 1, 0, 1]) >= 0.5 lambda_t = 3 J, grad = lrCostFunction(theta_t, X_t, y_t, lambda_t) print(f'\nCost: {J:f}') print('Expected cost: 2.534819') print('Gradients:') print(f' {grad} ') print('Expected gradients:') print(' 0.146561\n -0.548558\n 0.724722\n 1.398003') # - # ### ============ Part 2b: One-vs-All Training ============ # + print('Training One-vs-All Logistic Regression...') lambda_ = 1 all_theta = oneVsAll(X, y, num_labels, lambda_) # - # 好了,应该都收敛了。 # ### ================ Part 3: Predict for One-Vs-All ================ # + pred = predictOneVsAll(all_theta, X) print(f'Training Set Accuracy: {(pred == y).mean() * 100:f}') # - # 结果有点小误差,应该是$\lambda$取值问题。 # ###### 以上部分代码在[ex3.py](https://github.com/StevenPZChan/ml_dl_coursera_Andrew_Ng/blob/master/machine-learning-ex3/ex3.py)中 # ## Exercise 3 | Part 2: Neural Networks # ### =========== Part 1: Loading and Visualizing Data ============= # + ## Setup the parameters you will use for this exercise input_layer_size = 400 # 20x20 Input Images of Digits hidden_layer_size = 25 # 25 hidden units num_labels = 10 # 10 labels, from 1 to 10 # (note that we have mapped "0" to label 10) # Load Training Data print('Loading and Visualizing Data ...') from scipy import io as sio data = sio.loadmat('ex3data1.mat') X = data['X'] y = data['y'].reshape(-1) m = X.shape[0] # Randomly select 100 data points to display sel = np.random.permutation(m) sel = sel[:100] # %matplotlib inline _ = displayData(X[sel, :]) # - # ### ================ Part 2: Loading Pameters ================ # + print('Loading Saved Neural Network Parameters ...') # Load the weights into variables Theta1 and Theta2 data = sio.loadmat('ex3weights.mat') Theta1 = data['Theta1'] Theta2 = data['Theta2'] print(Theta1.shape, Theta2.shape) # - # ### ================= Part 3: Implement Predict ================= # + pred = predict(Theta1, Theta2, X); print(f'Training Set Accuracy: {(pred == y).mean() * 100:f}') # - # Displayed image: # + # Randomly permute examples rp = np.random.permutation(m) for i in range(m): # Display print('Displaying Example Image') _ = displayData(X[rp[i], :].reshape((-1, input_layer_size))) pred = predict(Theta1, Theta2, X[rp[i], :].reshape((-1, input_layer_size))) print(f'\nNeural Network Prediction: {pred} (digit {pred % 10})') # Pause with quit option s = input('Paused - press enter to continue, q to exit:') if s == 'q': break # - # 嗯,几个例子还是可以的。 # ### 总结:学会了`scipy.io.loadmat`导入`*.mat`数据的方法,学会了`matplotlib`将矩阵数据画成图的方法。进一步了解`scipy.optimize.minimize`的用法,其可以使用多种迭代方法。 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # PhysioNet/Computing in Cardiology Challenge 2020 # ## Classification of 12-lead ECGs # ### 3. 
Train Model # # Setup Notebook # + # Import 3rd party libraries import os import sys import ast import time import json import numpy as np import pandas as pd # Import local Libraries sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(os.getcwd())))))) from kardioml import DATA_PATH from kardioml.models.physionet2017.training.xgboost_model import Model from kardioml.data.data_loader import load_challenge_data # Configure Notebook import warnings warnings.filterwarnings('ignore') # %matplotlib inline # %load_ext autoreload # %autoreload 2 # - # # Import Data # ### Meta Data # + # Import to DataFrame meta_data = pd.read_csv(os.path.join(DATA_PATH, 'training', 'physionet_2017', 'meta_data.csv')) # View DataFrame meta_data.head() # - # ### Features # + # Import to DataFrame features = pd.read_csv(os.path.join(DATA_PATH, 'training', 'physionet_2017', 'features.csv')) # View DataFrame features.head() # - # ### Labels # + # Import to DataFrame labels = pd.read_csv(os.path.join(DATA_PATH, 'training', 'physionet_2017', 'labels.csv')) # View DataFrame labels.head() # - # # Hyper-Parameter Tuning # + # Set parameter bounds param_bounds = {'learning_rate': (0.01, 1.0), 'n_estimators': (500, 1500), 'max_depth': (2, 8), 'subsample': (0.5, 1.0), 'colsample_bytree': (0.5, 1.0), 'gamma': (0.001, 2.0), 'min_child_weight': (0, 10), 'max_delta_step': (0, 10)} # Set number of iterations n_iter = 40 # Set number CV folds cv_folds = 4 # Get 1-D labels for stratifying stratifier = meta_data['labels'].map(lambda val: ast.literal_eval(val)[0]) # Initialize model model = Model(features=features.drop(['dataset', 'filename', 'lead'], axis=1), labels=labels, cv_folds=cv_folds, stratifier=stratifier) # Run hyper-paramter search model.tune_hyper_parameters(param_bounds=param_bounds, n_iter=n_iter) # Save model model.save() # - # # Test Inference # + # Load test data data, header_data = load_challenge_data(filename=os.path.join(DATA_PATH, 'raw', 'Training_WFDB', 'A0100.mat')) # Run inference model.challenge_prediction(data=data, header_data=header_data) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib.pyplot as plt from skimage.measure import find_contours, regionprops, label from skimage import io import cv2 import numpy as np import skimage from skimage import measure print(skimage.__version__) from PIL import Image # + def imshow(*args,**kwargs): """ Handy function to show multiple plots in on row, possibly with different cmaps and titles Usage: imshow(img1, title="myPlot") imshow(img1,img2, title=['title1','title2']) imshow(img1,img2, cmap='hot') imshow(img1,img2,cmap=['gray','Blues']) """ cmap = kwargs.get('cmap', 'gray') title= kwargs.get('title','') axis_off = kwargs.get('axis_off','') if len(args)==0: raise ValueError("No images given to imshow") elif len(args)==1: plt.title(title) plt.imshow(args[0], interpolation='none') else: n=len(args) if type(cmap)==str: cmap = [cmap]*n if type(title)==str: title= [title]*n plt.figure(figsize=(n*5,10)) for i in range(n): plt.subplot(1,n,i+1) plt.title(title[i]) plt.imshow(args[i], cmap[i]) if axis_off: plt.axis('off') plt.show() def TissueMaskGenerationPatch(patchRGB): ''' Returns mask of tissue that obeys the threshold set by paip ''' r = patchRGB[:,:,0] < 235 g = patchRGB[:,:,1] < 210 b = patchRGB[:,:,2] < 235 tissue_mask = 
np.logical_or(r,np.logical_or(g,b)) return tissue_mask # - img_paths = ['../../results/saved_imgs/val_7/ref_142.png'] for img_path in img_paths: img = io.imread(img_path) _,w,_ = img.shape w = w//4 viable_mask = np.average(img[:,w*3:w*4,:],axis=2) slide_img = io.imread('../../results/saved_imgs/ref_imgs_pid/01_01_0083.png')[:,:2847,:] whole_tum = np.average(io.imread('../../results/saved_imgs/ref_imgs_pid/01_01_0083.png')[:,5694:,:],axis=2) kernel = np.ones((20, 20), dtype=np.uint8) img = cv2.morphologyEx(img_raw, cv2.MORPH_OPEN, kernel) kernel = np.ones((70, 70), dtype=np.uint8) img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel) img = np.uint8(img) print(img.shape) print(np.unique(img)) imshow(img_raw,img,slide_img,whole_tum) b = label(img) print(np.unique(b)) plt.imshow(b) r = regionprops(img) xtl,ytl,xbr,ybr = r[0].bbox wt = np.zeros_like(img_raw) wt[xtl:xbr,ytl:ybr] = 255 wt = wt*TissueMaskGenerationPatch(slide_img) imshow(wt,whole_tum,img_raw) np.unique(img_raw) actual_whole_tum = np.sum(whole_tum)/255 pred_whole_tum = np.sum(wt)/255 viable_tum = np.sum(img_raw)/255 print(actual_whole_tum,pred_whole_tum,viable_tum) print(viable_tum/actual_whole_tum,viable_tum/pred_whole_tum) for prop in r: for a in prop: print(a,prop[a]) # + # Find contours at a constant value of 0.8 contours = measure.find_contours(img, 0.8) # Display the image and plot all contours found fig, ax = plt.subplots() ax.imshow(img, interpolation='nearest', cmap=plt.cm.gray) for n, contour in enumerate(contours): ax.plot(contour[:, 1], contour[:, 0], linewidth=1) ax.axis('image') ax.set_xticks([]) ax.set_yticks([]) plt.show() # - aa,contours,_ = cv2.findContours(img.astype('uint8'), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) out = np.zeros_like(img) cv2.drawContours(out,contours,-1,255,3) imshow(out,img,aa) print(np.unique(out)) Image.fromarray(out.astype('uint8')) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # # Scientific modules and IPython # %matplotlib inline import matplotlib.pylab as plt # ## Core scientific packages # When people say that they do their scientific computations in Python it's only half true. Python is a construction set, similar to MITgcm or other models. Without packages it's only a core, that although very powerful, does not seems to be able to do much by itself. # # There is a set of packages, that almost every scientist would need: # # # # source: https://devopedia.org/python-for-scientific-computing # We are going to talk about several basic ones # ## Installation # Installation instructions can be found in the README.md file of this repository. Better to use [rendered version from GitHub](https://github.com/koldunovn/python_data_train). # # ## IPython # In order to be productive you need comfortable environment, and this is what IPython provides. It was started as enhanced python interactive shell, but with time become architecture for interactive computing. # ## Jupyter notebook # Since the 0.12 release, IPython provides a new rich text web interface - IPython notebook. Here you can combine: # #### Code execution print('I love Python') # #### Text (Markdown) # IPython [website](http://ipython.org/). 
# # List: # # * [Python on Codeacademy](http://www.codecademy.com/tracks/python) # * [Google's Python Class](https://developers.google.com/edu/python/) # # Code: # # print('hello world') # # #### $\LaTeX$ equations # $$\int_0^\infty e^{-x^2} dx=\frac{\sqrt{\pi}}{2}$$ # $$ # F(x,y)=0 ~~\mbox{and}~~ # \left| \begin{array}{ccc} # F''_{xx} & F''_{xy} & F'_x \\ # F''_{yx} & F''_{yy} & F'_y \\ # F'_x & F'_y & 0 # \end{array}\right| = 0 # $$ # #### Plots x = [1,2,3,4,5] plt.plot(x); # #### Rich media from IPython.display import YouTubeVideo YouTubeVideo('F4rFuIb1Ie4') # * [IPython website](http://ipython.org/) # * [Notebook gallery](https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks) # ## Run notebook # In order to start Jupyter notebook from the command line you have to type: # # jupyter notebook # # ### You can download and run this lectures: # Web version can be accesed from the [github repository](https://github.com/koldunovn/python_data_train). # ## Main IPython features # ### Getting help # You can use question mark in order to get help. To execute cell you have to press *Shift+Enter* # + # ? # - # Question mark after a function will open pager with documentation. Double question mark will show you source code of the function. # + # plt.plot?? # - # Press SHIFT+TAB after opening bracket in order to get help for the function (list of arguments, doc string). sum() # ## Magic functions # The magic function system provides a series of functions which allow you to # control the behavior of IPython itself, plus a lot of system-type # features. # Let's create some set of numbers using [range](http://docs.python.org/2/library/functions.html#range) command: list(range(10)) # And find out how long does it take to run it with *%timeit* magic function: # %timeit list(range(10)) # Print all interactive variables (similar to Matlab function): # %whos # ### Cell-oriented magic # Receive as argument both the current line where they are declared and the whole body of the cell. # %%timeit range(10) range(100) # Thre are several cell-oriented magic functions that allow you to run code in other languages: # + language="bash" # # echo "My shell is:" $SHELL # + language="perl" # # $variable = 1; # print "The variable has the value of $variable\n"; # - # You can write content of the cell to a file with *%%writefile* (or *%%file* for ipython < 1.0): # %%writefile hello.py #if you use ipython < 1.0, use %%file comand # #%%file a = 'hello world!' print(a) # And then run it: # %run hello.py # The *%run* magic will run your python script and load all variables into your interactive namespace for further use. # %whos # In order to get information about all magic functions type: # %magic # ### Links: # [The cell magics in IPython](http://nbviewer.ipython.org/urls/raw.github.com/ipython/ipython/1.x/examples/notebooks/Cell%20Magics.ipynb) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # + [markdown] nbpresent={"id": "12d91375-1556-4646-866e-11cc662f8f9c"} # # Manipulating MARC Records: Examples # + [markdown] nbpresent={"id": "cfc16a67-f38d-4382-b529-59192c3fb8d3"} # ## Introduction & Resources # # Many cultural heritage institutions hold inventory and bibliographic records in the *Machine Readable Cataloging* format, or [MaRC](https://en.wikipedia.org/wiki/MARC_standards). 
In this notebook, we will explore some basic uses of Python code to parse MaRC records, count and filter them, and then to transform them. # # The examples assume installation and use of the ``pymarc`` python library, which is available on github at https://github.com/edsu/pymarc. This library was developed by , , , . # # The activities below use a file of MaRC records retrieved from the Project MUSE database representing all book titles received during the month of February 2018, as of February 28, 2018 (see [Project MUSE's Download MaRC Records](https://muse.jhu.edu/about/librarians/marc_records.html) for information). When this data was uploaded to this exercise, it included 706 records. # # + [markdown] nbpresent={"id": "807667eb-eca4-43a8-9816-286f1d472d92"} # ## Basic Demo # # Let's start with this python code. This block shows how to open the MaRC file using pymarc, then look for the title field using the library's builtin ``title()`` function. Note that if you are familiar with MaRC format, you can ask for specific MaRC fields using the field number and identifiers to extract more granular information. # # + nbpresent={"id": "8815c6c5-7c9c-4c7f-94bc-d32cfaf4bd0a"} from pymarc import MARCReader with open('/marc-python-tests/Project_MUSE_2018_Complete_20180228.mrc', 'rb') as fh: #NB: this open command must point to the location of the marc file in your system reader = MARCReader(fh) Titlecount = 0 for record in reader: print(record.title()) Titlecount = Titlecount + 1 if Titlecount >= 10: break print('\nCounted',Titlecount,'Titles') # + [markdown] nbpresent={"id": "c79d4b49-a443-4583-95fa-fc5300627cf0"} # The above block adapts the basic example presented in the PyMarc documentation. # # First, it imports the ``MARCReader`` function from the pymarc library. # # Next, it opens the file of MaRC records and puts the data into the ``reader`` variable. # # A counter variable named ``Titlecount`` is set to 0. # # Then, using a basic ``for`` loop, the code iterates through the title fields, prints the Title as a text string, and stops when the count reaches 10. # # When you run the code block, you should see the following output: # 1. text strings of the 10 titles, # 2. a blank line, and # 3. a response that lists the Title count. # + [markdown] nbpresent={"id": "1bd59153-c74c-4b87-8b81-e69d3f1afac0"} # ## Filtering Duplicate Titles # # You might notice that there are a lot of titles that appear to be duplicates. There may be a variety of reasons for this (different editions, multiple copies in holdings, etc). But let's say that we want to print what we think is a list of unique titles in order to establish a good idea of how many works are represented. How could we filter out the duplicates? # # In this case we can use a variable to store the current title, the previous title, and to compare the two. We use a ``continue`` breakpoint to restart the loop if the title is already recorded. If we try this directly, we will break the code (below). Do you see what the error is? How can we get around this? 
# + nbpresent={"id": "0524d35d-8059-416c-ab52-1050e75e1bca"} from pymarc import MARCReader with open('/marc-python-tests/Project_MUSE_2018_Complete_20180228.mrc', 'rb') as fh: reader = MARCReader(fh) Titlecount = 0 for record in reader: titleCur = record.title() if titleCur == titlePrev: continue else: titlePrev = titleCur print(record.title()) Titlecount = Titlecount + 1 if Titlecount >= 10: break print('\nCounted',Titlecount,'Titles') # + [markdown] nbpresent={"id": "495930b5-9fba-4763-95d1-44d533a8bcfb"} # One way to fix this would be to use a try/except loop to establish the first variable. In other words, we will try the code, but if the ``titlePrev`` variable is not established, the code runs the except loop to establish it: # + nbpresent={"id": "831261ef-78cd-4535-9e09-22d6c0473e7a"} from pymarc import MARCReader with open('/marc-python-tests/Project_MUSE_2018_Complete_20180228.mrc', 'rb') as fh: reader = MARCReader(fh) Titlecount = 0 for record in reader: titleCur = record.title() try: if titleCur == titlePrev: continue else: titlePrev = titleCur print(record.title()) Titlecount = Titlecount + 1 if Titlecount >= 10: break except: titlePrev = titleCur print(record.title()) Titlecount = Titlecount + 1 if Titlecount > 10: break print('\nCounted',Titlecount,'Titles') # + [markdown] nbpresent={"id": "5f2950d8-521b-4256-a0c6-4ae00b1de08b"} # Now, we have a list of the first 10 unique title strings. Of course, there may still be "the same" titles that sneak through if they were keyed in according to different conventsion, for example there might be some that use the trailing `/`, which accords with the AACR2 convention for entering data in the MaRC 240 field for title, but not all of the records in the set include this in the text string that we see in the output. 
# + [markdown] nbpresent={"id": "4e6e4315-a2d8-43c1-81bc-34eba978cd05"} # ### Resources # # * [PyMarc](https://github.com/edsu/pymarc) # * [Project MUSE MaRC Records](https://muse.jhu.edu/about/librarians/marc_records.html) # * [Library of Congress MaRC Records](https://www.loc.gov/cds/products/marcDist.php) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.10 64-bit # language: python # name: python3 # --- # # Data cleaning import pandas as pd import numpy as np from matplotlib import pyplot as plt # %matplotlib inline import matplotlib matplotlib.rcParams["figure.figsize"] = (20, 10) df1 = pd.read_csv("../data/raw/Bengaluru_House_Data.csv") df1.head() df1.groupby('area_type')['area_type'].agg('count') df2 = df1.drop(['area_type', 'society', 'balcony', 'availability'], axis='columns') df2.head() df2.isnull().sum() df3 = df2.dropna() df3.isnull().sum() df3.shape df3['size'].unique() df3['bhk'] = df3['size'].apply(lambda x: int(x.split()[0])) df3.head() df3[df3.bhk>20] df3.total_sqft.unique() def isfloat(x): try: float(x) except: return False return True df3[~df3['total_sqft'].apply(isfloat)].head(10) def convert_sqft_to_num(x): tokens = x.split('-') if len(tokens) == 2: return (float(tokens[0])+float(tokens[1]))/2 try: return float(x) except: return None df4 = df3.copy() df4['total_sqft'] = df4['total_sqft'].apply(convert_sqft_to_num) df4.head() df4.loc[30] # # Feature engineering df5 = df4.copy() df5['price_per_sqft'] = (df5['price'] / df5['total_sqft']) * 100000 # multiplication id for converting to LAKH df5.head() len(df5.location.unique()) # 1304 locations is too much and will cause dimentionality curse when doing onehotencoding # + df5.location = df5.location.apply(lambda x: x.strip()) location_stats = df5.groupby('location')['location'].agg('count').sort_values(ascending=False) location_stats # - len(location_stats[location_stats<=10]) location_stats_less_than_10 = location_stats[location_stats<=10] location_stats_less_than_10 df5.location = df5.location.apply(lambda x: 'other' if x in location_stats_less_than_10 else x) len(df5.location.unique()) df5.head() # # Outliers removal df5[df5.total_sqft/df5.bhk<300].head() df5.shape df6 = df5[~(df5.total_sqft/df5.bhk<300)] df6.shape df6.price_per_sqft.describe() # + def remove_pps_outliers(df): df_out = pd.DataFrame() for key, subdf in df.groupby('location'): m = np.mean(subdf.price_per_sqft) st = np.std(subdf.price_per_sqft) reduced_df = subdf[(subdf.price_per_sqft>=(m-st)) & (subdf.price_per_sqft<=(m+st))] df_out = pd.concat([df_out, reduced_df], ignore_index=True) return df_out df7 = remove_pps_outliers(df6) df7.shape # + def plot_scatter_chart(df, location): bhk2 = df[(df.location == location) & (df.bhk==2)] bhk3 = df[(df.location == location) & (df.bhk==3)] matplotlib.rcParams['figure.figsize'] = (15,10) plt.scatter(bhk2.total_sqft, bhk2.price_per_sqft, color='blue', label='2 BHK', s=50) plt.scatter(bhk3.total_sqft, bhk3.price_per_sqft, color='green', marker='+', label='3 BHK', s=50) plt.xlabel("Total Square Feet Area") plt.ylabel("Price Per Square Feet") plt.title(location) plt.legend() plot_scatter_chart(df7, "") # - # **We should also remove properties where for same location, the price of (for example) 3 bedroom apartment is less than 2 bedroom apartment (with same square ft area). 
What we will do is for a given location, we will build a dictionary of stats per bhk, i.e.** # # { # '1' : { # 'mean': 4000, # 'std: 2000, # 'count': 34 # }, # '2' : { # 'mean': 4300, # 'std: 2300, # 'count': 22 # }, # } # **Now we can remove those 2 BHK apartments whose price_per_sqft is less than mean price_per_sqft of 1 BHK apartment** # # # + def remove_bhk_outliers(df): exclude_indices = np.array([]) for location, location_df in df.groupby('location'): bhk_stats = {} for bhk, bhk_df in location_df.groupby('bhk'): bhk_stats[bhk] = { 'mean': np.mean(bhk_df.price_per_sqft), 'std': np.std(bhk_df.price_per_sqft), 'count': bhk_df.shape[0] } for bhk, bhk_df in location_df.groupby('bhk'): stats = bhk_stats.get(bhk-1) if stats and stats['count']>5: exclude_indices = np.append(exclude_indices, bhk_df[bhk_df.price_per_sqft<(stats['mean'])].index.values) return df.drop(exclude_indices,axis='index') df8 = remove_bhk_outliers(df7) df8.shape # - plot_scatter_chart(df8,"") matplotlib.rcParams["figure.figsize"] = (20,10) plt.hist(df8.price_per_sqft,rwidth=0.8) plt.xlabel("Price Per Square Feet") plt.ylabel("Count") # # Outlier Removal Using Bathrooms Feature df8.bath.unique() plt.hist(df8.bath,rwidth=0.8) plt.xlabel("Number of bathrooms") plt.ylabel("Count") df8[df8.bath>10] # It is unusual to have 2 more bathrooms than number of bedrooms in a home df8[df8.bath>df8.bhk+2] # the business manager has a conversation with you (i.e. a data scientist) that if you have 4 bedroom home and even if you have bathroom in all 4 rooms plus one guest bathroom, you will have total bath = total bed + 1 max. Anything above that is an outlier or a data error and can be removed df9 = df8[df8.bath= 0: x[loc_index] = 1 return lr_clf.predict([x])[0] predict_price('1st Phase JP Nagar',1000, 2, 2) # # Exporting the model import pickle with open('home_prices_model.pickle','wb') as f: pickle.dump(lr_clf,f) # ! 
mv ./home_prices_model.pickle ../data/model/home_prices_model.pickle # # Export location and column information to a file that will be useful later on in our prediction application import json columns = { 'data_columns' : [col.lower() for col in X.columns] } with open("../data/cols/columns.json","w") as f: f.write(json.dumps(columns)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # $ \newcommand{\bra}[1]{\langle #1|} $ # $ \newcommand{\ket}[1]{|#1\rangle} $ # $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ # $ \newcommand{\dot}[2]{ #1 \cdot #2} $ # $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ # $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ # $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ # $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ # $ \newcommand{\mypar}[1]{\left( #1 \right)} $ # $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ # $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ # $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ # $ \newcommand{\onehalf}{\frac{1}{2}} $ # $ \newcommand{\donehalf}{\dfrac{1}{2}} $ # $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ # $ \newcommand{\vzero}{\myvector{1\\0}} $ # $ \newcommand{\vone}{\myvector{0\\1}} $ # $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $ # $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ # $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ # $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ # $ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $ # $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ # $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ # $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ # $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ # $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ # $ \newcommand{\greenbit}[1] {\mathbf{{\color{green}#1}}} $ # $ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}#1}}} $ # $ \newcommand{\redbit}[1] {\mathbf{{\color{red}#1}}} $ # $ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}#1}}} $ # $ \newcommand{\blackbit}[1] {\mathbf{{\color{black}#1}}} $ # Project | Quantum Tomography with Many Qubits #
# _prepared by _ #

# We design a simple Python object for experimenting with quantum tomography on multiple qubits.
#
# This will be the extension of the class given in our notebook [Quantum Tomography](../bronze/B48_Quantum_Tomography.ipynb).
#
# _All angles must be in radians._
# ### Create a python class called `UnknownQuantumSystem(the_number_of_qubits=2)`
#
# Any instance refers to a quantum system with the specified number of qubits, which is a two-qubit system if not specified.
#
# Each qubit of this system will be set to a quantum state specified with an angle in $ [0,\pi) $.
#
# The instance will have 1000 identical copies of this system. (You may ask the number of copies from the user.)
#
# You can define the methods of this class similar to the ones defined for "unknown_qubit()" so that any user can do experiments on the quantum system to learn the quantum state (the angle) of each qubit and then compare her results with the picked quantum states (angles).
# ### Experiment with your class
#
# Use your class as a user and develop a procedure for estimating the unknown quantum state of each qubit of the given quantum system.
# ### Program the quantum part
#
# - _This part of the project aims to give some ideas about how to simulate a generic quantum system._
# - _The difficulty level of this part is medium._
# - _Please do not use any scientific python library such as `NumPy`._
#
# Re-design your class without using any quantum programming library.
#
# You should write your own code to simulate the quantum system. Remark that:
# - The state of the system should be kept as a single vector, and you should not keep the state of each qubit separately.
# - If the quantum system has $n$ qubits, then its state vector can be represented by a $ 2^n $-dimensional list.
# - Then, each operator (rotation) should be defined as a $ (2^n \times 2^n) $-dimensional matrix even though it is applied to a single qubit.
# - Otherwise, there will be no difference between a single-qubit system and multi-qubit systems, and all calculations will be trivial.
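# As a starting point for the last part, here is a minimal sketch (plain Python only, no NumPy) of how a single-qubit rotation can be embedded into the full $ 2^n \times 2^n $ operator and applied to the $ 2^n $-dimensional state vector. It assumes the real-valued rotation convention used in the Bronze notebooks, and all names below (`kron`, `expand_single_qubit_gate`, `apply_operator`, ...) are illustrative choices, not part of the project statement.
# +
from math import cos, sin, pi
from random import uniform

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

def expand_single_qubit_gate(gate, target, n):
    """Embed a 2x2 gate acting on qubit `target` into the full (2^n x 2^n) operator.
    Qubit 0 is taken as the left-most factor of the tensor product."""
    identity = [[1.0, 0.0], [0.0, 1.0]]
    op = [[1.0]]
    for qubit in range(n):
        op = kron(op, gate if qubit == target else identity)
    return op

def apply_operator(op, state):
    """Multiply a (2^n x 2^n) matrix with a 2^n-dimensional state vector."""
    return [sum(op[i][j] * state[j] for j in range(len(state)))
            for i in range(len(op))]

n = 2
angles = [uniform(0, pi) for _ in range(n)]   # the hidden angles a user would later try to estimate
state = [0.0] * (2 ** n)
state[0] = 1.0                                 # start in |0...0>
for qubit, theta in enumerate(angles):
    rotation = [[cos(theta), -sin(theta)],
                [sin(theta),  cos(theta)]]
    state = apply_operator(expand_single_qubit_gate(rotation, qubit, n), state)

print(sum(amplitude ** 2 for amplitude in state))   # the norm should remain 1.0
# -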
# # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import os import caffe from google.protobuf import text_format from caffe.proto import caffe_pb2 # load PASCAL VOC labels labelmap_file = 'model/voc/labelmap_voc.prototxt' file = open(labelmap_file, 'r') labelmap = caffe_pb2.LabelMap() text_format.Merge(str(file.read()), labelmap) def get_labelname(labelmap, labels): num_labels = len(labelmap.item) labelnames = [] if type(labels) is not list: labels = [labels] for label in labels: found = False for i in xrange(0, num_labels): if label == labelmap.item[i].label: found = True labelnames.append(labelmap.item[i].display_name) break assert found == True return labelnames # + model_def = 'model/voc/deploy_merged.prototxt' model_weights = 'model/voc/pelee_merged.caffemodel' net = caffe.Net(model_def, # defines the structure of the model model_weights, # contains the trained weights caffe.TEST) # use test mode (e.g., don't perform dropout) # input preprocessing: 'data' is the name of the input blob == net.inputs[0] transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape}) transformer.set_transpose('data', (2, 0, 1)) transformer.set_input_scale('data', 0.017) transformer.set_mean('data', np.array([103.94,116.78,123.68])) # mean pixel transformer.set_raw_scale('data', 255) # the reference model operates on images in [0,255] range instead of [0,1] transformer.set_channel_swap('data', (2,1,0)) # the reference model has channels in BGR order instead of RGB # - def do_detect(image,colors): transformed_image = transformer.preprocess('data', image) net.blobs['data'].data[...] = transformed_image # Forward pass. detections = net.forward()['detection_out'] # Parse the outputs. det_label = detections[0,0,:,1] det_conf = detections[0,0,:,2] det_xmin = detections[0,0,:,3] det_ymin = detections[0,0,:,4] det_xmax = detections[0,0,:,5] det_ymax = detections[0,0,:,6] # Get detections with confidence higher than 0.4. 
top_indices = [i for i, conf in enumerate(det_conf) if conf >= 0.4] top_conf = det_conf[top_indices] top_label_indices = det_label[top_indices].tolist() top_labels = get_labelname(labelmap, top_label_indices) top_xmin = det_xmin[top_indices] top_ymin = det_ymin[top_indices] top_xmax = det_xmax[top_indices] top_ymax = det_ymax[top_indices] plt.imshow(image) currentAxis = plt.gca() currentAxis.axes.xaxis.set_visible(False) currentAxis.axes.yaxis.set_visible(False) font = {'weight' : 'bold', 'size' : 12} plt.rc('font', **font) for i in xrange(top_conf.shape[0]): xmin = int(round(top_xmin[i] * image.shape[1])) ymin = int(round(top_ymin[i] * image.shape[0])) xmax = int(round(top_xmax[i] * image.shape[1])) ymax = int(round(top_ymax[i] * image.shape[0])) score = top_conf[i] label = int(top_label_indices[i]) label_name = top_labels[i] display_txt = '%s: %.2f'%(label_name, score) coords = (xmin, ymin), xmax-xmin+1, ymax-ymin+1 color = colors[label] currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor=color, linewidth=2)) currentAxis.text(xmin, ymin, display_txt, bbox={'facecolor':color, 'alpha':0.8}) # + # set net to batch size of 1 image_resize = 304 net.blobs['data'].reshape(1,3,image_resize,image_resize) colors=[] for r in [0.2,0.4,0.8,0.6,1.0]: for g in [0.3,0.7]: for b in [0.4,0.8]: colors.append([r,g,b,1.0]) # + plt.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' image = caffe.io.load_image('samples/dog.jpg') do_detect(image,colors) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Compare performance of ImageDataGenerator and tf.data # A quick experiment to evaluate how tf.data stands up to ImageDataGenerator. Experiment was conducted 5 times with max deviation from these results ~12% # + import tensorflow as tf import tensorflow.keras as keras import numpy as np DATA_DIR = 'a folder with 320 jpeg images 299x299x3' keras.__version__,tf.__version__ # - iv3 = keras.applications.inception_v3.InceptionV3(include_top=False, weights='imagenet',input_shape=(299, 299, 3)) syntetic_data = np.random.randint(0,255,size=(6400,299,299,3)) # ## In-memory data as a baseline # %%time iv3.predict(syntetic_data,batch_size=64).shape # ## Using ImageDataGenerator # + from tensorflow.keras.preprocessing.image import ImageDataGenerator import glob import pandas as pd files = glob.glob(DATA_DIR+'/*.JPG') df = pd.DataFrame({'filename': files}) gen = ImageDataGenerator() \ .flow_from_dataframe(df,class_mode=None,batch_size=64,target_size=(299,299)) # - # %%time for i in range(100): #nxt = next(gen) nxt = keras.applications.inception_v3.preprocess_input(next(gen)) # needed for tensorflow backend #print('batch {} : {} {}'.format(i,nxt.shape, nxt.dtype)) iv3.compile(optimizer='adam',loss='mse') # why would that be required for predict_generator? -> RuntimeError: You must compile your model before using it. 
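# Note that the manual `preprocess_input` call above only affects the hand-rolled timing loop; `predict_generator` below consumes `gen` directly, so its batches are arguably not preprocessed the same way as in the tf.data pipeline further down. One way to keep the comparison apples-to-apples is to let the generator apply the preprocessing itself, as in this sketch (not part of the original experiment):
# +
from tensorflow.keras.preprocessing.image import ImageDataGenerator

gen_pre = ImageDataGenerator(preprocessing_function=keras.applications.inception_v3.preprocess_input) \
    .flow_from_dataframe(df, class_mode=None, batch_size=64, target_size=(299, 299))  # `df` as built above
# -
# The timing cells below could then be re-run with `gen_pre` in place of `gen`.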
# %%time out_generator = iv3.predict_generator(gen,steps=100) print(out_generator.shape) # ## Using tf.data pipeline # + dataset = tf.data.Dataset.from_tensor_slices(files) def _load_images(filename): image = tf.read_file(filename) image = tf.image.decode_jpeg(image,channels=3) image = tf.image.resize_images(image, [299, 299]) image = keras.applications.inception_v3.preprocess_input(image) return image dataset = dataset.map(_load_images) dataset = dataset.batch(64).repeat() iterator = dataset.make_one_shot_iterator() image = iterator.get_next() # - # %%time with tf.Session() as sess: for i in range(100): out = sess.run(image) #print('batch {} : {} {}'.format(i,out.shape, out.dtype)) # %%time out_tfdata = iv3.predict(image,steps=100) print(out_tfdata.shape) # ## Some observations # * tf.data and ImageDataGenerator have similar performance on a singe machine. Maybe for distrubuted training it will be worth it to use tf.data (and for TPU it is required). # * Observed CPU (i7-7820HQ) load was ~35% while GPU (NVidia Quadro M2200) load was 100%. # * Data loading (from SSD) does not appear as a bottleneck for big models (e.g. Inception v3). However, smaller models or faster/more GPUs may require faster disks/RAID ... # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Circles and Spheres # # ![Minecraft Circles and Spheres](http://www.minecraftdl.com/wp-content/uploads/2013/10/Dual-Sphere-Survival-Map.png) # # In our first class we learned about functions. A function is a task to do something. Some of the functions we used are **mc.postToChat** to post a message to Minecraft and **time.sleep** to pause our program execution for 1 second. In today's adventure, we learn more about functions. We will use new functions to build circles and spheres in Minecraft. # # # ## Functions and arguments # In our previous class we were introduced to functions. Sometimes you need more than just a task name to complete a task. # # E.g., "Practice you writing!", is not enough, you want to know for how long. "Practice your writing for 15 minutes!" is more helpful. # # If we had a function for the "practice writing" task, we would write is as, # # ```python # practiceWriting(15) # ``` # # the additional pieces information we pass to a function are called its arguments. A function can have zero or more arguments just like a real tasks need zero or more pieces of information to complete them. Both the functions that you used so far needed arguments. For the function examples below, # # ```python # mc.postToChat("Hi Kids") # time.sleep(1) # ``` # # The **mc.postToChat** function needs the argument _""_ which is the message to post to the Minecraft chat window. The **time.sleep** function needs the argument _1_ which is the number of seconds to pause the program from running. # ## Building functions # # The first building function **drawings.drawMyCircle** uses two arguments - the radius of the circle to build and the block id to use for building the circle. # # ```python # drawings.drawMyCircle(radius,blockId) # # e.g., drawings.drawMyCircle(5,block.STONE.id) # ``` # # The second building function **drawings.drawMySphere** uses two arguments - the radius of the sphere to build and the block id to use for building the sphere. 
# # ```python # drawings.drawMySphere(radius,blockId) # # e.g., drawings.drawMySphere(5,block.STONE.id) # ``` # # In the following tasks, you will use these functions with different arguments to build circles and spheres in Minecraft. # # **Note:** Before you start your tasks, **start Minecraft** and run the program cell below to make the building functions available to your programs. import sys sys.path.append('/home/pi/minecraft-programming') import mcpi.block as block import time import drawings # ## Task 1 # Use the building functions to build three circles with radii 5, 9 and 13 in the same position (center). # Task 1 program # ## Task 2 # Use the building functions to build three spheres of radius 5, 9 and 13 in three different positions. # Task 2 program # ## Task 3 # # Write a program to build a sphere of radius 5 at the center, a circle of radius 8 and a circle of radius 13 at Steve's current position # Task 3 program # ## Task 4 # Use a new function **drawings.drawMyHollowSphere**. The function takes two arguments. First the **radius** of the hollow sphere to build and second, the **blockId** to use for building the hollow sphere. # Task 4 program # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Numpy experiments # Import numpy package. import numpy as np # ## Reshaping # Optionally, the last index given in `reshape` can be `-1`, the shape will be such that the dimensions multiply to the size of the original array. a = np.arange(1, 21, dtype=np.int).reshape(4, -1) a # In three dimensions, the last dimension can be replaced by `-1`. b = a.reshape(2, 5, -1) b # ## Slicing # Define 3D data with a "width" of 5, a "height" of 4, and a "depth" of 3. a = np.arange(1, 61, dtype=np.int).reshape((3, 4, -1)) # When the last dimension is -1, it is computed automatically if possible, i.e., if the product of the sizes in each dimension is the size of the array. a.shape # When data is viewed as 3D, slice the "front" layer. a[0, :, :] # Slice the middle layer. a[1, :, :] # Slice the back layer. a[2, :, :] # Slice the top layer (first row is front). a[:, 0, :] # Slice the left layer (first row is front). a[:, :, 0] # Introducing new leading dimension. a[None].shape # Check that this is the original matrix, with size of leading dimension equal to 1. np.all(a == a[None][0]) # Add additional "last" dimension, check that the result is the top-front row, with each element in an individual dimension 1 subarray. a[:, :, :, None][0, 0] # It is important to realize that slicing doesn't copy, i.e., a slice yields a reference to the original data. Consider the following matrix. b = np.arange(1, 21).reshape(4, 5) b c = b[1:-1, 1:-1] c # Now we modify an element of `c` c[0, 0] = 0 c # Clearly, `c` was changed, but so is `b`. b # ## Fancy indexing # Create a new $4 \times 5$ array. a = np.arange(1, 21, dtype=np.int).reshape((4, -1)) a # Select the second and fourth row. a[[1, 3], :] # Select the first and fourth column. a[:, [0, 3]] # ## Structured arrays # Although strictly speaking not fancy indexing, we can create structured arrays, which you can think of as arrrays of structures. These arrays can be indexed by number, and by structure field name. 
b = np.zeros(10, dtype={'names': ('x', 'y'), 'formats': ('f8', 'f8')}) b # Each element acts as a tuple, the first field has name `x` and type `np.float`, the second has name `y`, and is also `np.float`. # The shape shows it is an 1-D array. b.shape # The type of the elements is perhaps a bit surprising. type(b[0]) # We can initialize the `x` and `y` fields of each array element with a single assingment. b['x'] = np.random.uniform(size=10) b['y'] = np.random.uniform(size=10) b b['x'] # Using a numerical index gives the corresponding element in the array. b[0] # Accessing elements by field number or field name is equivalent. b[4][1] == b[4]['y'] # ## Stacking # Create two matrices, both $4 \times 5$. a = np.arange(1, 21, dtype=np.int).reshape((4, -1)) b = np.arange(21, 41, dtype=np.int).reshape((4, -1)) print(a) print(b) # Stack matrices horizontally. np.hstack((a, b)) # Stack matrices vertically. np.vstack((a, b)) # Create a list of 3 arrays, each $4 \times 5$. arrays_2d = [] size_2d = 20 for i in range(3): upperleft = i*size_2d + 1 arrays_2d.append(np.arange(upperleft, upperleft + size_2d, dtype=np.int).reshape(4, -1)) # Create a 3D array out of the list of 2D arrays. # + array_3d = np.array(arrays_2d) array_3d.shape # - # Get front plane. array_3d[0, :, :] # Get top plane. array_3d[:, 0, :] # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R 4.0 # language: R # name: ir40 # --- # + [markdown] toc=true #

# Table of Contents
# # - # # Dependencies library(data.table) # # Functions # # Paths manifestpath <- "/Users/anabbi/OneDrive - UHN/Documents/IPD2/Manifests/" datapath <- "/Users/anabbi/OneDrive - UHN/Documents/IPD2/Data/" plotpath <- "/Users/anabbi/OneDrive - UHN/Documents/IPD2/Plots/" # # Main load(paste0(datapath, "ESTIMATE/estimate_manifest_primary_clean_final.RData")) # ## CBTN # This is just manifest export for arriba tsv files on cavatica arriba_manifest <- read.csv(paste0(manifestpath,"arriba_manifest.csv"),header = T, stringsAsFactors = F) head(arriba_manifest) # This file was made on cavatica notebook (analysis matrices) by compiling all arriba tsv files arriba_cbtn <- read.table(paste0(datapath, "fusion/all_arriba_CBTTC.tsv"),sep = "\t", header = T, comment.char = "", quote = "", stringsAsFactors = F) head(arriba_cbtn) arriba_manifest$Filename <- gsub(".arriba.fusions.tsv", "",arriba_manifest$name) arriba_df_manifest <- merge(arriba_manifest[,c("sample_id", "aliquot_id","Filename")], arriba_cbtn, by = "Filename") arriba_df_cbtn <- merge(arriba_df_manifest, estimate_manifest_primary_clean[,c("sample_id", "aliquot_id")], by = c("sample_id", "aliquot_id")) table(estimate_manifest_primary_clean$group) # + length(unique(arriba_cbtn$Filename)) length(unique(arriba_df_cbtn$Filename)) # - colnames(arriba_df_cbtn)[colnames(arriba_df_cbtn) == "X.gene1"] <- "gene1" head(arriba_df_cbtn) arriba_df_cbtn$Filename <- NULL # ## DKFZ # This is from Pengbo. Use colnames from cbttc arriba_dkfz <- fread(paste0(datapath, "fusion/arriba_ICGC_pengbo.txt"), sep = "\t", header = F) head(arriba_dkfz) colnames(arriba_dkfz)[2:25] <- colnames(arriba_df_cbtn)[3:26] colnames(arriba_dkfz)[1] <- "sample_id" arriba_df_dkfz <- merge(arriba_dkfz, estimate_manifest_primary_clean[,c("sample_id", "aliquot_id")], by = "sample_id") # + length(unique(arriba_dkfz$sample_id)) length(unique(arriba_df_dkfz$sample_id)) # - dkfz <- estimate_manifest_primary_clean[estimate_manifest_primary_clean$group == "ICGC",] allsampleswitharriba <- unique(arriba_dkfz$sample_id) dkfz$sample_id[!dkfz$sample_id %in% allsampleswitharriba] # There was no Arriba call available for the samples above arriba_df <- rbind(arriba_df_cbtn, arriba_df_dkfz) save(arriba_df, file = paste0(datapath, "fusion/arriba_df.RData")) table(arriba_df$confidence) # Remove low confidence arriba_df_conf <- arriba_df[arriba_df$confidence != "low",] # Only splice-site to splice-site head(arriba_df_conf) dim(arriba_df_conf) arriba_df_conf_ss <- arriba_df_conf[ arriba_df_conf$site1 == "splice-site",] arriba_df_conf_ss <- arriba_df_conf_ss[ arriba_df_conf_ss$site2 == "splice-site",] dim(arriba_df_conf_ss) save(arriba_df_conf_ss, file = paste0(datapath, "fusion/arriba_df_conf_ss.RData")) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="CjfzxXZLHed_" outputId="fde8240c-552b-4030-dc67-510ba2ae85cf" # clone the repo for running evaluation # !git clone https://github.com/AI4Bharat/indicTrans.git # %cd indicTrans # clone requirements repositories # !git clone https://github.com/anoopkunchukuttan/indic_nlp_library.git # !git clone https://github.com/anoopkunchukuttan/indic_nlp_resources.git # !git clone https://github.com/rsennrich/subword-nmt.git # %cd .. 
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="IeYW2BJhlJvx" outputId="80bdf9f2-b3ea-43c2-beb3-6a9673303099" # Install the necessary libraries # !pip install sacremoses pandas mock sacrebleu tensorboardX pyarrow indic-nlp-library # ! pip install mosestokenizer subword-nmt # Install fairseq from source # !git clone https://github.com/pytorch/fairseq.git # %cd fairseq # # !git checkout da9eaba12d82b9bfc1442f0e2c6fc1b895f4d35d # !pip install --use-feature=in-tree-build ./ # %cd .. # + id="TktUu9NW_PLq" # sanity check to see if fairseq is installed from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils # + colab={"base_uri": "https://localhost:8080/"} id="E_4JxNdRlPQB" outputId="a500b4ec-19c8-4f80-e17c-1168b8e7be23" # download the indictrans model # downloading the indic-en model # !wget https://storage.googleapis.com/samanantar-public/V0.3/models/indic-en.zip # !unzip indic-en.zip # downloading the en-indic model # # !wget https://storage.googleapis.com/samanantar-public/V0.3/models/en-indic.zip # # !unzip en-indic.zip # # downloading the indic-indic model # # !wget https://storage.googleapis.com/samanantar-public/V0.3/models/m2m.zip # # !unzip m2m.zip # %cd indicTrans # + colab={"base_uri": "https://localhost:8080/"} id="yTnWbHqY01-B" outputId="2104a004-864e-4af5-f2b2-94557f64110c" from indicTrans.inference.engine import Model indic2en_model = Model(expdir='../indic-en') # + colab={"base_uri": "https://localhost:8080/"} id="QTp2NOgQ__sB" outputId="793a1356-700e-48fe-f497-129ab847f3a4" ta_sents = ['அவனுக்கு நம்மைப் தெரியும் என்று தோன்றுகிறது', "நாங்கள் உன்னைத் பின்பற்றுவோம் (அ) தொடர்வோம். ", 'உங்களுக்கு உங்கள் அருகில் இருக்கும் ஒருவருக்கோ இத்தகைய அறிகுறிகள் தென்பட்டால், வீட்டிலேயே இருப்பது, கொரோனா வைரஸ் தொற்று பிறருக்கு வராமல் தடுக்க உதவும்.'] indic2en_model.batch_translate(ta_sents, 'ta', 'en') # + colab={"base_uri": "https://localhost:8080/", "height": 141} id="VFXrCNZGEN7Z" outputId="143d20ae-92f5-4a8d-c9ed-d638a5bb3977" ta_paragraph = """இத்தொற்றுநோய் உலகளாவிய சமூக மற்றும் பொருளாதார சீர்குலைவை ஏற்படுத்தியுள்ளது.இதனால் பெரும் பொருளாதார மந்தநிலைக்குப் பின்னர் உலகளவில் மிகப்பெரிய மந்தநிலை ஏற்பட்டுள்ளது. இது விளையாட்டு,மத, அரசியல் மற்றும் கலாச்சார நிகழ்வுகளை ஒத்திவைக்க அல்லது ரத்து செய்ய வழிவகுத்தது. அச்சம் காரணமாக முகக்கவசம், கிருமிநாசினி உள்ளிட்ட பொருட்களை அதிக நபர்கள் வாங்கியதால் விநியோகப் பற்றாக்குறை ஏற்பட்டது.""" indic2en_model.translate_paragraph(ta_paragraph, 'ta', 'en') # + id="Hi_D7s_VIjis" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ![license_header_logo](../../../images/license_header_logo.png) # # > **Copyright (c) 2021 ifAI Sdn. Bhd.**
#
# This program is part of OSRFramework. You can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#
# # Topic Modeling # ## Introduction # Another popular text analysis technique is called topic modeling. The ultimate goal of topic modeling is to find various topics that are present in your corpus. Each document in the corpus will be made up of at least one topic, if not multiple topics. # # In this notebook, we will be covering the steps on how to do **Latent Dirichlet Allocation (LDA)**, which is one of many topic modeling techniques. It was specifically designed for text data. # # To use a topic modeling technique, you need to provide (1) a document-term matrix and (2) the number of topics you would like the algorithm to pick up. # # Once the topic modeling technique is applied, your job as a human is to interpret the results and see if the mix of words in each topic make sense. If they don't make sense, you can try changing up the number of topics, the terms in the document-term matrix, model parameters, or even try a different model. # # Notebook Content # # * [Topic Modeling 1](#Topic-Modeling-1) # # # * [Topic Modeling 2](#Topic-Modeling-2) # # # * [Topic Modeling 3](#Topic-Modeling-3) # # # * [Identify Topics in Each Document](#Identify-Topics-in-Each-Document) # # # * [Additional Exercises](#Additional-Exercises) # ## Topic Modeling 1 # # ### Attempt #1 (All Text) # + # Let's read in our document-term matrix import pandas as pd import pickle data = pd.read_pickle('models/dtm_stop.pkl') data # + # Import the necessary modules for LDA with gensim # Terminal / Anaconda Navigator: conda install -c conda-forge gensim from gensim import matutils, models import scipy.sparse # import logging # logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) # - # One of the required inputs is a term-document matrix tdm = data.transpose() tdm.head() # We're going to put the term-document matrix into a new gensim format, from df --> sparse matrix --> gensim corpus sparse_counts = scipy.sparse.csr_matrix(tdm) corpus = matutils.Sparse2Corpus(sparse_counts) # Gensim also requires dictionary of the all terms and their respective location in the term-document matrix cv = pickle.load(open("models/cv_stop.pkl", "rb")) id2word = dict((v, k) for k, v in cv.vocabulary_.items()) # Now that we have the corpus (term-document matrix) and id2word (dictionary of location: term), we need to specify two other parameters - the number of topics and the number of passes. Let's start the number of topics at 2, see if the results make sense, and increase the number from there. # Now that we have the corpus (term-document matrix) and id2word (dictionary of location: term), # we need to specify two other parameters as well - the number of topics and the number of passes lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=2, passes=10) lda.print_topics() # LDA for num_topics = 3 lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=3, passes=10) lda.print_topics() # LDA for num_topics = 4 lda = models.LdaModel(corpus=corpus, id2word=id2word, num_topics=4, passes=10) lda.print_topics() # These topics aren't looking too great. We've tried modifying our parameters. Let's try modifying our terms list as well. # ## Topic Modeling 2 # ### Attempt #2 (Nouns Only) # One popular trick is to look only at terms that are from one part of speech (only nouns, only adjectives, etc.). Check out the UPenn tag set: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html. 
import nltk nltk.download('averaged_perceptron_tagger') # + # Let's create a function to pull out nouns from a string of text from nltk import word_tokenize, pos_tag def nouns(text): '''Given a string of text, tokenize the text and pull out only the nouns.''' is_noun = lambda pos: pos[:2] == 'NN' tokenized = word_tokenize(text) all_nouns = [word for (word, pos) in pos_tag(tokenized) if is_noun(pos)] return ' '.join(all_nouns) # - # Read in the cleaned data, before the CountVectorizer step data_clean = pd.read_pickle('models/data_clean.pkl') data_clean # Apply the nouns function to the transcripts to filter only on nouns data_nouns = pd.DataFrame(data_clean.transcript.apply(nouns)) data_nouns # + # Create a new document-term matrix using only nouns from sklearn.feature_extraction import text from sklearn.feature_extraction.text import CountVectorizer # Re-add the additional stop words since we are recreating the document-term matrix add_stop_words = ['like', 'im', 'know', 'just', 'dont', 'thats', 'right', 'people', 'youre', 'got', 'gonna', 'time', 'think', 'yeah', 'said'] stop_words = text.ENGLISH_STOP_WORDS.union(add_stop_words) # Recreate a document-term matrix with only nouns cvn = CountVectorizer(stop_words=stop_words) data_cvn = cvn.fit_transform(data_nouns.transcript) data_dtmn = pd.DataFrame(data_cvn.toarray(), columns=cvn.get_feature_names()) data_dtmn.index = data_nouns.index data_dtmn # + # Create the gensim corpus corpusn = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtmn.transpose())) # Create the vocabulary dictionary id2wordn = dict((v, k) for k, v in cvn.vocabulary_.items()) # - # Let's start with 2 topics ldan = models.LdaModel(corpus=corpusn, num_topics=2, id2word=id2wordn, passes=10) ldan.print_topics() # Let's try topics = 3 ldan = models.LdaModel(corpus=corpusn, num_topics=3, id2word=id2wordn, passes=10) ldan.print_topics() # Let's try 4 topics ldan = models.LdaModel(corpus=corpusn, num_topics=4, id2word=id2wordn, passes=10) ldan.print_topics() # ## Topic Modeling 3 # # ### Attempt #3 (Nouns and Adjectives) # Let's create a function to pull out nouns from a string of text def nouns_adj(text): '''Given a string of text, tokenize the text and pull out only the nouns and adjectives.''' is_noun_adj = lambda pos: pos[:2] == 'NN' or pos[:2] == 'JJ' tokenized = word_tokenize(text) nouns_adj = [word for (word, pos) in pos_tag(tokenized) if is_noun_adj(pos)] return ' '.join(nouns_adj) # Apply the nouns function to the transcripts to filter only on nouns data_nouns_adj = pd.DataFrame(data_clean.transcript.apply(nouns_adj)) data_nouns_adj # Create a new document-term matrix using only nouns and adjectives, also remove common words with max_df cvna = CountVectorizer(stop_words=stop_words, max_df=.8) data_cvna = cvna.fit_transform(data_nouns_adj.transcript) data_dtmna = pd.DataFrame(data_cvna.toarray(), columns=cvna.get_feature_names()) data_dtmna.index = data_nouns_adj.index data_dtmna # + # Create the gensim corpus corpusna = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtmna.transpose())) # Create the vocabulary dictionary id2wordna = dict((v, k) for k, v in cvna.vocabulary_.items()) # - # Let's start with 2 topics ldana = models.LdaModel(corpus=corpusna, num_topics=2, id2word=id2wordna, passes=10) ldana.print_topics() # Let's try 3 topics ldana = models.LdaModel(corpus=corpusna, num_topics=3, id2word=id2wordna, passes=10) ldana.print_topics() # Let's try 4 topics ldana = models.LdaModel(corpus=corpusna, num_topics=4, id2word=id2wordna, passes=10) 
ldana.print_topics() # ## Identify Topics in Each Document # Out of the 9 topic models we looked at, the nouns and adjectives, 4 topic one made the most sense. So let's pull that down here and run it through some more iterations to get more fine-tuned topics. # Our final LDA model (for now) ldana = models.LdaModel(corpus=corpusna, num_topics=4, id2word=id2wordna, passes=80) ldana.print_topics() # These four topics look pretty decent. Let's settle on these for now. # * Topic 0: mom, parents # * Topic 1: husband, wife # * Topic 2: guns # * Topic 3: profanity # Let's take a look at which topics each transcript contains corpus_transformed = ldana[corpusna] list(zip([a for [(a,b)] in corpus_transformed], data_dtmna.index)) # For a first pass of LDA, these kind of make sense to me, so we'll call it a day for now. # * Topic 0: mom, parents [Anthony, Hasan, Louis, Ricky] # * Topic 1: husband, wife [Ali, John, Mike] # * Topic 2: guns [Bill, Bo, Jim] # * Topic 3: profanity [Dave, Joe] # ## Additional Exercises # 1. Try further modifying the parameters of the topic models above and see if you can get better topics. # 2. Create a new topic model that includes terms from a different [part of speech](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html) and see if you can get better topics. # # Contributors # # **Author** #
# # References # # 1. [Natural Language Processing in Python](https://www.youtube.com/watch?v=xvqsFTUsOmc&t=6s) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + pycharm={"name": "#%%\n", "is_executing": true} # %env S3_DATASET_BUCKET={{YOUR_S3_BUCKET}} # %env S3_DATASET_TRAIN=knn/input/iris_train.csv # %env S3_DATASET_TEST=knn/input/iris_test.csv # %env S3_TRAIN_OUTPUT=knn/output # %env SAGEMAKER_ROLE={{YOUR_SAGEMAKER_ROLE}} # + pycharm={"name": "#%%\n"} import os import random import string import boto3 import matplotlib.pyplot as plt import pandas as pd import sagemaker from IPython.display import display from sagemaker import image_uris from sagemaker.deserializers import JSONDeserializer from sagemaker.estimator import Estimator, Predictor from sagemaker.inputs import TrainingInput from sagemaker.serializers import CSVSerializer from sklearn.model_selection import train_test_split # + pycharm={"name": "#%%\n"} # Define constants CSV_PATH = './tmp/iris.csv' S3_DATASET_BUCKET = os.getenv('S3_DATASET_BUCKET') S3_DATASET_TRAIN = os.getenv('S3_DATASET_TRAIN') S3_DATASET_TEST = os.getenv('S3_DATASET_TEST') S3_TRAIN_OUTPUT = os.getenv('S3_TRAIN_OUTPUT') SAGEMAKER_ROLE = os.getenv('SAGEMAKER_ROLE') ESTIMATOR_INSTANCE_COUNT = 1 ESTIMATOR_INSTANCE_TYPE = 'ml.m5.large' PREDICTOR_INSTANCE_TYPE = 'ml.t2.medium' PREDICTOR_ENDPOINT_NAME = f'sagemaker-knn-{PREDICTOR_INSTANCE_TYPE}'.replace('.', '-') # Define variables used over this notebook bucket = boto3.resource('s3').Bucket(S3_DATASET_BUCKET) train_df = None test_df = None train_object_path = None test_object_path = None knn = None predictor = None # + pycharm={"name": "#%%\n", "is_executing": true} # Download a sample csv # !mkdir -p tmp # !curl -o "$(pwd)/tmp/iris.csv" -L https://raw.githubusercontent.com/aws/amazon-sagemaker-examples/master/hyperparameter_tuning/r_bring_your_own/iris.csv # + pycharm={"name": "#%%\n", "is_executing": true} ############################################################ # Data preparation ############################################################ def load_csv(path: str) -> pd.DataFrame: """ Load a csv file to transform pandas DataFrame Args: path (str): Path to a csv file Returns: pd.DataFrame: DataFrame to be trained """ df = pd.read_csv(path) # Move the last label column to the first # See https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html#cdf-csv-format df = df[['Species', 'Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width']] # Convert target string to int df['Species'] = df['Species'].map({'setosa': 0, 'versicolor': 1, 'virginica': 2}) return df def plot(df: pd.DataFrame) -> None: """ Plot DataFrame Args: df (pd.DataFrame): DataFrame which you want to plot """ pd.plotting.scatter_matrix(df, figsize=(15, 15), c=df['Species']) plt.show() def upload_csv_to_s3(df: pd.DataFrame, object_path: str) -> str: """ Upload a csv file to be trained by SageMaker Args: df (pd.DataFrame): DataFrame which is saved as csv format object_path (str): An S3 object path under your specified bucket Returns: str: An S3 object uri """ filename = ''.join([random.choice(string.digits + string.ascii_lowercase) for i in range(10)]) path = os.path.abspath(os.path.join('./tmp', filename)) df.to_csv(path, header=False, index=False) # Change content-type because the default is binary/octet-stream bucket.upload_file(path, object_path, 
ExtraArgs={'ContentType': 'text/csv'}) return f's3://{bucket.name}/{object_path}' if __name__ == '__main__': # Prepare data df = load_csv(CSV_PATH) display(df) plot(df) train_df, test_df = train_test_split(df, shuffle=True, random_state=0) # type: (pd.DataFrame, pd.DataFrame) train_object_path = upload_csv_to_s3(train_df, S3_DATASET_TRAIN) test_object_path = upload_csv_to_s3(test_df, S3_DATASET_TEST) # + pycharm={"name": "#%%\n", "is_executing": true} ############################################################ # Model build ############################################################ def get_estimator(**hyperparams) -> Estimator: """ Get a SageMaker estimator Args: **hyperparams: Hyperparameters Returns: Estimator: A SageMaker estimator to which necessary arguments and hyperparameters are set """ estimator = Estimator( image_uri=image_uris.retrieve('knn', boto3.Session().region_name), # AWS provided container in ECR, role=SAGEMAKER_ROLE, instance_count=ESTIMATOR_INSTANCE_COUNT, instance_type=ESTIMATOR_INSTANCE_TYPE, input_mode='Pipe', output_path=f's3://{S3_DATASET_BUCKET}/{S3_TRAIN_OUTPUT}', sagemaker_session=sagemaker.Session(), ) hyperparams.update({'predictor_type': 'classifier'}) estimator.set_hyperparameters(**hyperparams) return estimator def train(estimator: Estimator, train_object_path: str, test_object_path: str) -> None: """ Train a SageMaker estimator synchronously Args: estimator (Estimator): A SageMaker estimator to be trained train_object_path (str): An S3 object path used as train data test_object_path (str): An S3 object path used as test data """ # Specify content-type because the default is application/x-recordio-protobuf train_input = TrainingInput(train_object_path, content_type='text/csv', input_mode='Pipe') test_input = TrainingInput(test_object_path, content_type='text/csv', input_mode='Pipe') estimator.fit({'train': train_input, 'test': test_input}) if __name__ == '__main__': knn = get_estimator(k=1, sample_size=1000) train(knn, train_object_path, test_object_path) # + pycharm={"name": "#%%\n", "is_executing": true} ############################################################ # Model deploy ############################################################ def deploy(estimator: Estimator) -> Predictor: """ Deploy a SageMaker estimator and create an inference endpoint Args: estimator (Estimator): A SageMaker estimator to be deployed Returns: Predictor: A SageMaker predictor which you use for inference """ return estimator.deploy( initial_instance_count=1, instance_type=PREDICTOR_INSTANCE_TYPE, serializer=CSVSerializer(), deserializer=JSONDeserializer(), endpoint_name=PREDICTOR_ENDPOINT_NAME ) def validate(predictor: Predictor, test_df: pd.DataFrame) -> pd.DataFrame: """ Get pandas DataFrame for validation This does not include scores such as accuracy, precision, etc. 
Args: predictor (Predictor): A SageMaker predictor test_df (pd.DataFrame): Test data Returns: pd.DataFrame: pandas DataFrame to be used for validation """ rows = [] for i, data in test_df.iterrows(): predict = predictor.predict( pd.DataFrame([data.drop('Species')]).to_csv(header=False, index=False), initial_args={'ContentType': 'text/csv'} ) predicted_label = predict['predictions'][0]['predicted_label'] row = data.tolist() row.append(predicted_label) row.append(data['Species'] == predicted_label) rows.extend([row]) return pd.DataFrame(rows, columns=('Species', 'Sepal.Length', 'Sepal.Width', 'Petal.Length', 'Petal.Width', 'Prediction', 'Result')) if __name__ == '__main__': predictor = deploy(knn) predictions = validate(predictor, test_df) display(predictions) # + pycharm={"name": "#%%\n", "is_executing": true} ############################################################ # Delete a model and an inference endpoint ############################################################ def delete_model(predictor: Predictor) -> None: """ Delete a SageMaker model Args: predictor (Predictor): A SageMaker predictor """ try: predictor.delete_model() print(f'Deleted a model') except BaseException as e: print(e) def delete_endpoint(predictor: Predictor) -> None: """ Delete a SageMaker endpoint including a SageMaker endpoint config Args: predictor (Predictor): A SageMaker predictor """ try: predictor.delete_endpoint(delete_endpoint_config=True) print(f'Deleted {predictor.endpoint_name}') except BaseException as e: print(e) if __name__ == '__main__': delete_model(predictor) delete_endpoint(predictor) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from __future__ import unicode_literals, print_function, division from io import open import unicodedata import re import random import torch import torch.nn as nn from torch import optim import torch.nn.functional as F import pandas as pd from tqdm import tqdm import time import math import matplotlib.pyplot as plt import matplotlib.ticker as ticker import pickle device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # - # # Introduction # Usually, we use traditional scoring methods for keywords extraction(eg. TFIDF). However, the traditional methods usually do poorly on a small-length text. Here we overcome this diffculity by using supervised learning method (ie. we tell our model what we want our outcome be like and hopes it learns the patterns). Using Sequence-to-Sequence neural network, which is two RNNs with one served as encoder one as decoder, we achieved good results tested on a chinese ecommerce platform. # # However, one difficulity we may have to overcome is finding good targets for our model to learn from. Luckly, if you are working in a ecommerce company, you already have natural targets for your model,customer-searched words. Binding the product descriptions with the customer-searched words when a customer clicks a product after searching, we can produce resonable training data for model to be trained on. # # Reference: https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html # # Functions for preparing data # Geting a list of words from a sentence. This function is typically useful for ecommerce sites that use chinese as their main language. Note that one can modify this part according to her needs. 
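# For example, with the implementation below a mixed string such as "蓝牙耳机 wireless" is split into the individual characters '蓝', '牙', '耳', '机' plus the ASCII token 'wireless' (whitespace separators may also be kept), so Chinese text is handled at the character level while Latin-script tokens stay whole.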
def get_word_list(s1): regEx = re.compile('([\u4e00-\u9fa5]|[^a-zA-Z0-9_-]+)') res = re.compile(r"([\u4e00-\u9fa5])") p1 = regEx.split(s1.lower()) str1_list = [] for str in p1: if res.split(str) == None: str1_list.append(str) else: ret = res.split(str) for ch in ret: str1_list.append(ch) list_word1 = [w for w in str1_list if len(w) != 0] return list_word1 # Lang tool class for translating words to one-hoc vector. # + SOS_token = 0 EOS_token = 1 class Lang: def __init__(self, name): self.name = name self.word2index = {} self.word2count = {} self.index2word = {0: "SOS", 1: "EOS"} self.n_words = 2 # Count SOS and EOS def addSentence(self, sentence): for word in get_word_list(sentence): self.addWord(word) def addWord(self, word): if word not in self.word2index: self.word2index[word] = self.n_words self.word2count[word] = 1 self.index2word[self.n_words] = word self.n_words += 1 else: self.word2count[word] += 1 # - def getPairs(version_num,reverse=False): print("Reading lines...") # Read the file and split into lines lines = open('training_data_'+str(version_num)+'.csv', encoding='utf-8').\ read().strip().split('\n') # Split every line into pairs and normalize pairs = [[s for s in l.split('\t')] for l in lines] # Reverse pairs, make Lang instances if reverse: pairs = [list(reversed(p)) for p in pairs] return pairs # + MAX_IN_LENGTH = 100 def filterPair(p): return len(get_word_list(p[0])) < MAX_IN_LENGTH and \ len(get_word_list(p[1])) < MAX_IN_LENGTH def filterPairs(pairs): return [pair for pair in pairs if filterPair(pair)] # - def prepareData(version_num,reverse=False): #reverse=False means desciption to keywords pairs = getPairs(version_num,reverse=reverse) print("Read %s sentence pairs" % len(pairs)) lang1 = Lang("My_Lang") pairs = filterPairs(pairs) print("Trimmed to %s sentence pairs" % len(pairs)) print("Counting words...") pbar = tqdm(total=len(pairs),position=0, leave=True) for pair in pairs: lang1.addSentence(pair[0]) lang1.addSentence(pair[1]) pbar.update() print("Counted words:") print(lang1.n_words) return lang1, pairs # # Model # ### Encoder # A encoder for encodeing the product description. We are using GRU here for simplicity, one can change it to LSTM for more complicate model. class EncoderRNN(nn.Module): def __init__(self, input_size, hidden_size): super(EncoderRNN, self).__init__() self.hidden_size = hidden_size self.embedding = nn.Embedding(input_size, hidden_size) self.gru = nn.GRU(hidden_size, hidden_size) def forward(self, input, hidden): embedded = self.embedding(input).view(1, 1, -1) output = embedded output, hidden = self.gru(output, hidden) return output, hidden def initHidden(self): return torch.zeros(1, 1, self.hidden_size, device=device) # ### Decoder # A decoder for producing predicted output keywords. Here we are Attention mechanism to improve performance. 
# Reference: https://arxiv.org/abs/1706.03762 class AttnDecoderRNN(nn.Module): def __init__(self, hidden_size, output_size, dropout_p=0.1, max_length=MAX_IN_LENGTH): super(AttnDecoderRNN, self).__init__() self.hidden_size = hidden_size self.output_size = output_size self.dropout_p = dropout_p self.max_length = max_length self.embedding = nn.Embedding(self.output_size, self.hidden_size) self.attn = nn.Linear(self.hidden_size * 2, self.max_length) self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size) self.dropout = nn.Dropout(self.dropout_p) self.gru = nn.GRU(self.hidden_size, self.hidden_size) self.out = nn.Linear(self.hidden_size, self.output_size) def forward(self, input, hidden, encoder_outputs): embedded = self.embedding(input).view(1, 1, -1) embedded = self.dropout(embedded) attn_weights = F.softmax( self.attn(torch.cat((embedded[0], hidden[0]), 1)), dim=1) attn_applied = torch.bmm(attn_weights.unsqueeze(0), encoder_outputs.unsqueeze(0)) output = torch.cat((embedded[0], attn_applied[0]), 1) output = self.attn_combine(output).unsqueeze(0) output = F.relu(output) output, hidden = self.gru(output, hidden) output = F.log_softmax(self.out(output[0]), dim=1) return output, hidden, attn_weights def initHidden(self): return torch.zeros(1, 1, self.hidden_size, device=device) # # Training # ### Tool functions for visualize progress. # + def asMinutes(s): m = math.floor(s / 60) s -= m * 60 return '%dm %ds' % (m, s) def timeSince(since, percent): now = time.time() s = now - since es = s / (percent) rs = es - s return '%s (- %s)' % (asMinutes(s), asMinutes(rs)) # - plt.switch_backend('agg') def showPlot(points): # %matplotlib inline plt.figure() fig, ax = plt.subplots() loc = ticker.MultipleLocator(base=0.2) ax.yaxis.set_major_locator(loc) plt.plot(points) # ### Preparing Training Data # + def indexesFromSentence(lang, sentence): return [lang.word2index[word] for word in get_word_list(sentence)] def tensorFromSentence(lang, sentence): indexes = indexesFromSentence(lang, sentence) indexes.append(EOS_token) return torch.tensor(indexes, dtype=torch.long, device=device).view(-1, 1) def tensorsFromPair(pair): input_tensor = tensorFromSentence(lang1, pair[0]) target_tensor = tensorFromSentence(lang1, pair[1]) return (input_tensor, target_tensor) # - # ### Train functions # + teacher_forcing_ratio = 0.7 def train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_IN_LENGTH): encoder_hidden = encoder.initHidden() encoder_optimizer.zero_grad() decoder_optimizer.zero_grad() input_length = input_tensor.size(0) target_length = target_tensor.size(0) encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device) loss = 0 for ei in range(input_length): encoder_output, encoder_hidden = encoder( input_tensor[ei], encoder_hidden) encoder_outputs[ei] = encoder_output[0, 0] decoder_input = torch.tensor([[SOS_token]], device=device) decoder_hidden = encoder_hidden use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False if use_teacher_forcing: # Teacher forcing: Feed the target as the next input for di in range(target_length): decoder_output, decoder_hidden, decoder_attention = decoder( decoder_input, decoder_hidden, encoder_outputs) loss += criterion(decoder_output, target_tensor[di]) decoder_input = target_tensor[di] # Teacher forcing else: # Without teacher forcing: use its own predictions as the next input for di in range(target_length): decoder_output, decoder_hidden, decoder_attention = decoder( 
decoder_input, decoder_hidden, encoder_outputs) topv, topi = decoder_output.topk(1) decoder_input = topi.squeeze().detach() # detach from history as input print(decoder_output.shape) print(target_tensor.shape) loss += criterion(decoder_output, target_tensor[di]) if decoder_input.item() == EOS_token: break loss.backward() encoder_optimizer.step() decoder_optimizer.step() return loss.item() / target_length # - def trainIters(encoder, decoder, n_iters, print_every=1000, plot_every=100, learning_rate=0.01): start = time.time() global plot_losses print_loss_total = 0 plot_loss_total = 0 encoder_optimizer = optim.SGD(encoder.parameters(), lr=learning_rate) decoder_optimizer = optim.SGD(decoder.parameters(), lr=learning_rate) training_pairs = [tensorsFromPair(random.choice(pairs)) for i in range(n_iters)] criterion = nn.NLLLoss() print("Start training...") for iter in range(1, n_iters + 1): training_pair = training_pairs[iter - 1] input_tensor = training_pair[0] target_tensor = training_pair[1] loss = train(input_tensor, target_tensor, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion) print_loss_total += loss plot_loss_total += loss if iter % print_every == 0: print_loss_avg = print_loss_total / print_every print_loss_total = 0 print('%s (%d %d%%) %.4f' % (timeSince(start, iter / n_iters), iter, iter / n_iters * 100, print_loss_avg)) if iter % plot_every == 0: plot_loss_avg = plot_loss_total / plot_every plot_losses.append(plot_loss_avg) plot_loss_total = 0 #showPlot(plot_losses) # ### Evaluate function # In this function, we produce three predicted keywords for each description by feeding the second and third most possible words into the model for the start of the second and third keywords, respectively. def evaluate(encoder, decoder, sentence, max_length=MAX_IN_LENGTH): with torch.no_grad(): input_tensor = tensorFromSentence(lang1, sentence) input_length = input_tensor.size()[0] encoder_hidden = encoder.initHidden() encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device) for ei in range(input_length): encoder_output, encoder_hidden = encoder(input_tensor[ei],encoder_hidden) encoder_outputs[ei] += encoder_output[0, 0] decoded_words_all = [] probability_all = [] decoder_input = torch.tensor([[SOS_token]], device=device) decoder_output, decoder_hidden_first, _ = decoder(decoder_input, encoder_hidden, encoder_outputs) topv_first, topi_first = decoder_output.data.topk(3) topv_first = topv_first[0] topi_first = topi_first[0] for i in range(3): decoded_words = [] probability = [] decoder_hidden = torch.clone(decoder_hidden_first) probability.append(str(round(math.exp(topv_first[i].item()),2))) if topi_first[i].item() == EOS_token: decoded_words.append('') continue else: decoded_words.append(lang1.index2word[topi_first[i].item()]) decoder_input = topi_first[i].squeeze().detach() for di in range(max_length-1): decoder_output, decoder_hidden, _ = decoder(decoder_input, decoder_hidden, encoder_outputs) topv, topi = decoder_output.data.topk(1) probability.append(str(round(math.exp(topv.item()),2))) if topi.item() == EOS_token: decoded_words.append('') break else: decoded_words.append(lang1.index2word[topi.item()]) decoder_input = topi.squeeze().detach() decoded_words_all.append(decoded_words) probability_all.append(probability) return decoded_words_all, probability_all def evaluateRandomly(encoder, decoder, n=10): for i in range(n): pair = random.choice(pairs) print('Input : ', pair[0]) print('Target : ', pair[1]) output_words,probability = evaluate(encoder, decoder, 
pair[0]) print(output_words) print(probability) # # Start Running # ### Load and save prepared-data # Typically you would want to do this just once, and use the saved prepared-data in the future. # + lang1,pairs = prepareData("v5", True) ''' with open('lang.pkl', 'wb') as output: pickle.dump(lang1, output, pickle.HIGHEST_PROTOCOL) with open('trimmed.pkl', 'wb') as output: pickle.dump(pairs, output, pickle.HIGHEST_PROTOCOL) ''' # - # ### Create encoder and decoder instence hidden_size = 256 encoder1 = EncoderRNN(lang1.n_words, hidden_size).to(device) attn_decoder1 = AttnDecoderRNN(hidden_size, lang1.n_words, dropout_p=0.3).to(device) plot_losses = [] # ### Start training # Since we are choosing data to train randomly. One can stop the training to evaluate, and re-run this block to continue training. # + trainIters(encoder1, attn_decoder1, 7500, print_every=500,plot_every=100) # - showPlot(plot_losses) # ### Save model using pickle # + with open('model.pkl', 'wb') as output: pickle.dump(encoder1, output, pickle.HIGHEST_PROTOCOL) pickle.dump(attn_decoder1, output, pickle.HIGHEST_PROTOCOL) # - # ### Evaluate evaluateRandomly(encoder1, attn_decoder1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 import imageio import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from skimage.transform import resize from IPython.display import HTML from tqdm import tqdm from matplotlib import pyplot as plt import warnings warnings.filterwarnings("ignore") # + source_image = imageio.imread('/home/wintersoldier/Downloads/forDeepFake.jpg') driving_video = imageio.mimread('/home/wintersoldier/Downloads/my_video3.mp4',memtest=False) # source_image2 = imageio.imread('/home/wintersoldier/Downloads/chris.jpg') source_image = resize(source_image, (256, 256))[..., :3] # source_image2 = resize(source_image2, (256, 256))[..., :3] source_image = resize(source_image, (256, 256))[..., :3] driving_video = [resize(frame, (256, 256))[..., :3] for frame in driving_video] # + #Resize image and video to 256x256 def display(source, driving, generated=None): fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6)) ims = [] for i in range(len(driving)): cols = [source] cols.append(driving[i]) if generated is not None: cols.append(generated[i]) im = plt.imshow(np.concatenate(cols, axis=1), animated=True) plt.axis('off') ims.append([im]) ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000) plt.close() return ani # HTML(display(source_image, driving_video).to_html5_video()) # - from demo import load_checkpoints generator, kp_detector = load_checkpoints(config_path='config/vox-256.yaml', checkpoint_path='/media/wintersoldier/Gaming_Disk/Downloads/vox-cpk.pth.tar') # + from demo import make_animation from skimage import img_as_ubyte predictions = make_animation(source_image, driving_video, generator, kp_detector, relative=True) #save resulting video # imageio.mimsave('../generated.mp4', [img_as_ubyte(frame) for frame in predictions]) #video can be downloaded from /content folder # + from demo import make_animation # from skimage import img_as_ubyte cap = cv2.VideoCapture(0) driving_video=[] while True: if cv2.waitKey(1) & 0xFF == ord('q'): break ret, frame = cap.read() frame = cv2.resize(cv2.cvtColor(frame,cv2.COLOR_BGR2RGB),(256,256)) cv2.imshow('the',frame) print(len(driving_video)) 
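# Buffer the captured webcam frames; once 256 frames have accumulated (checked just below), the whole batch is passed to make_animation(), one generated frame is shown, and the buffer is cleared.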
driving_video.append(frame) if(len(driving_video)==256): predictions = make_animation(source_image, driving_video, generator, kp_detector, relative=True) try: predictions = np.reshape(predictions,(10,256,256,3)) cv2.imshow('this',predictions[8]) except: pass driving_video=[] cap.release() cv2.destroyAllWindows() # - cap.release() HTML(display(source_image, driving_video, predictions).to_html5_video()) # + # main_prediction=np.reshape(main_prediction,(100,256,256,3)) # + # HTML(display(source_image, driving_video, main_prediction).to_html5_video()) # - np.shape(temp_driving_video) temp_driving_video = driving_video[121:122] # + from demo import make_animation from skimage import img_as_ubyte k=100 for i in range(k, k+1): temp_driving_video = driving_video[i:i+15] predictions = make_animation(source_image, temp_driving_video, generator, kp_detector, relative=True) HTML(display(source_image, temp_driving_video, predictions).to_html5_video()) #save resulting video # imageio.mimsave('../generated.mp4', [img_as_ubyte(frame) for frame in predictions]) #video can be downloaded from /content folder # + cap = cv2.VideoCapture(0) recorded=[] while True: ret, frame = cap.read() frame = cv2.resize(cv2.cvtColor(frame,cv2.COLOR_BGR2RGB),(256,256)) cv2.imshow('the',frame) print(len(driving_video)) recorded.append(frame) if cv2.waitKey(1) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() # - import cv2 plt.imshow(predictions[1]) HTML(display(source_image, driving_video, predictions).to_html5_video()) np.shape(predictions) for frame in predictions: frame = np.array(frame,'uint8') cv2.imshow('',frame) if cv2.waitKey(1) & 0xFF == ord('q'): break cv2.destroyAllWindows() for frame in predictions: frame = np.array(frame,'uint8') print(np.shape(frame)) np.shape(predictions) cv2.destroyAllWindows() predictions[0][0] def display(source, driving, generated=None): fig = plt.figure(figsize=(8 + 4 * (generated is not None), 6)) ims = [] for i in range(len(driving)): cols = [source] cols=[[]] cols.append(driving[i]) if generated is not None: cols.append(generated[i]) im = plt.imshow(np.concatenate(cols, axis=1), animated=True) plt.axis('off') ims.append([im]) ani = animation.ArtistAnimation(fig, ims, interval=50, repeat_delay=1000) plt.close() return ani arr = [img_as_ubyte(frame) for frame in predictions] # + active="" # # - import time for frame in arr: cv2.imshow('',frame) time.sleep(1) cv2.destroyAllWindows() imageio.mimsave('/home/wintersoldier/Desktop/generated.mp4', [img_as_ubyte(frame) for frame in predictions]) imageio.imsave('/home/wintersoldier/Desktop/generated.png', img_as_ubyte(predictions[4])) arr=[] cap = cv2.VideoCapture(0) while True: ret, frame = cap.read() cv2.imshow('', frame) if cv2.waitKey(1) & 0xFF == ord('q'): break cap.release() cv2.destroyAllWindows() # + import urllib import cv2 import numpy as np import time arr=[] # Replace the URL with your own IPwebcam shot.jpg IP:port url='http://192.168.1.103:8080/shot.jpg' while True: # Use urllib to get the image from the IP camera imgResp = urllib.request.urlopen(url) # Numpy to convert into a array imgNp = np.array(bytearray(imgResp.read()),dtype=np.uint8) # Finally decode the array to OpenCV usable format ;) img = cv2.imdecode(imgNp,-1) img = cv2.resize(img, (256,256)) # put the image on screen cv2.imshow('IPWebcam',img) arr.append(img) predictions = make_animation(source_image, arr, generator, kp_detector, relative=True) imageio.imsave('/home/wintersoldier/Desktop/cache.png', img_as_ubyte(predictions[0])) readImg= 
plt.imread('/home/wintersoldier/Desktop/cache.png') cv2.imshow('',readImg) arr = [] #To give the processor some less stress #time.sleep(0.1) # Quit if q is pressed if cv2.waitKey(1) & 0xFF == ord('q'): break cv2.destroyAllWindows() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Javascript library in a notebook: c3 # # The following cell should show a [c3.js](http://c3js.org/) graph. The results depends on the browser. It works fine on chrome. script = """ var chart = c3.generate({ bindto: '#__ID__', data: { columns: [ ['data1', 30, 200, 100, 400, 150, 250], ['data2', 50, 20, 10, 40, 15, 25] ] } }); setTimeout(function () { chart.load({ columns: [ ['data1', 230, 190, 300, 500, 300, 400] ] }); }, 1000); setTimeout(function () { chart.load({ columns: [ ['data3', 130, 150, 200, 300, 200, 100] ] }); }, 1500); setTimeout(function () { chart.unload({ ids: 'data1' }); }, 2000); """ # The script does not use the latest version of [d3.js](https://d3js.org/). See [C3JS - Cannot read property 'category10' of undefined](https://stackoverflow.com/questions/38344643/c3js-cannot-read-property-category10-of-undefined). from jyquickhelper import RenderJS css = ["https://raw.githubusercontent.com/sdpython/jyquickhelper/master/src/jyquickhelper/js/c3/c3.min.css"] jr = RenderJS(script, css=css, libs = [ dict(path="https://raw.githubusercontent.com/sdpython/jyquickhelper/master/src/jyquickhelper/js/d3/d3.v5.min.js", name="d3", exports="d3"), dict(path="https://raw.githubusercontent.com/sdpython/jyquickhelper/master/src/jyquickhelper/js/c3/c3.min.js", name="c3", exports="c3", deps=["d3"])]) jr # Here is the code it produces: print(jr.generate_html()[0]) print(jr.generate_html()[1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Question 1: # Make a regular expression to get all IP addresses from the below link and Extract the IP # addresses. 
# https://study-ccna.com/classes-of-ip-addresses/ # + import requests, re url = "https://study-ccna.com/classes-of-ip-addresses/" r = requests.get(url) data = r.text ip = r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' list1_ip = re.findall(ip, data) list1_ip = list(set(list1_ip)) for each in list1_ip: print(each) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Compile anomalies and NPC position into one file for analysis # #### created by sol, nov 26 019 # ### New data from chelle - nov 23, 2019 - v6 import numpy as np import pandas as pd import matplotlib.pyplot as plt import xarray as xr # + # npc position ds = xr.open_dataset('../../biforcation_figures_chelle/biforcation_data165_monthly_aviso2.nc') ds2=ds.lat.sel(time=slice('1993-01-01','2018-12-31')) # anomalies clim=ds2.groupby('time.month').mean('time') ds3=ds2.groupby('time.month')-clim npc165 = ds3.values ds = xr.open_dataset('../../biforcation_figures_chelle/biforcation_data135_monthly_aviso2.nc') ds2=ds.lat.sel(time=slice('1993-01-01','2018-12-31')) fecha=ds2.time.values # anomalies clim=ds2.groupby('time.month').mean('time') ds3=ds2.groupby('time.month')-clim npc135 = ds3.values # - plt.plot(fecha,npc165,label='165') plt.plot(fecha,npc135,label='135') plt.legend() plt.title('NPC Position') plt.show() # + boxl=['NPC','Bif','Cal2','Ak1','Ak2','NPCv','Bifv', 'Cal1','Cal3'] # currents ds = xr.open_dataset('../../timeseries_data/oscar1993-01-01data_minus_clim_v6.nc') # select V Cal1, V Ak1 for this analysis ds2=ds.sel(time=slice('1993-01-01','2018-12-31')) vcak = ds2.v[3,:].values vcac = ds2.v[7,:].values plt.plot(fecha,vcak,label='CAK') plt.plot(fecha,vcac,label='CAC') plt.legend() plt.title('Oscar V currents') plt.show() # SST ds = xr.open_dataset('../../timeseries_data/sst1993-01-01data_minus_clim_v6.nc') # select V Cal1, V Ak1 for this analysis ds2=ds.sel(time=slice('1993-01-01','2018-12-31')) tcak = ds2.analysed_sst[3,:].values tcac = ds2.analysed_sst[7,:].values plt.plot(fecha,tcak, label='CAK') plt.plot(fecha,tcac, label='CAC') t165 = ds2.analysed_sst[5,:].values t135 = ds2.analysed_sst[6,:].values plt.plot(fecha,t165, label='165W') plt.plot(fecha,t135, label='135W') plt.title('SST') plt.legend() plt.show() # - ds var = ['NPC165', 'NPC135','V_CAC','VCAK'] adt1 = xr.DataArray(npc165, dims=['time'], coords=[fecha]) adt2 = xr.DataArray(npc135, dims=['time'], coords=[fecha]) adt3 = xr.DataArray(vcac, dims=['time'], coords=[fecha]) adt4 = xr.DataArray(vcak, dims=['time'], coords=[fecha]) adt5 = xr.DataArray(tcac, dims=['time'], coords=[fecha]) adt6 = xr.DataArray(tcak, dims=['time'], coords=[fecha]) adt7 = xr.DataArray(t165, dims=['time'], coords=[fecha]) adt8 = xr.DataArray(t135, dims=['time'], coords=[fecha]) ads = xr.Dataset({'NPC165': adt1, 'NPC135':adt2, 'V_CAC': adt3, 'V_CAK':adt4, 'SST_CAC': adt5, 'SST_CAK':adt6, 'SST_NPC': adt7, 'SST_Bif':adt8}) ads.to_netcdf('../data/bifdatams_long_26nov2019.nc',mode='w') ads # seasonal anomalies adseas = ads.resample(time='3M',label='left',closed='left',loffset='1M').mean() adseas = adseas.sel(time=slice('1993-01-01','2018-12-31')) adseas.to_netcdf('../data/bifdatams_seas_long_26nov2019.nc',mode='w') dfseas = adseas.to_dataframe() dfseas = dfseas.rename(columns={'NPC165':'NPC@165W','NPC135':'NPC@135W','V_CAC':'CalCv','V_CAK':'AkCv','SST_NPC':'SST@165W','SST_Bif':'SST@135W'}) dfseas = 
dfseas.drop(['SST_CAC','SST_CAK'],axis=1) dfseas.to_csv('../data/MS_data_feb2020.csv') plt.figure(figsize=(16,4)) adseas.SST_NPC.plot() ads.SST_NPC.plot() plt.grid() plt.figure(figsize=(16,4)) adseas.SST_Bif.plot() ads.SST_Bif.plot() plt.grid() ds = xr.open_dataset('../../timeseries_data/sst1993-01-01data_minus_clim_v6.nc') ds 312/3 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="UfVeNJIVYVu9" # ## HandlersTimeProfiler # + [markdown] id="VK6Pe16EpvP3" # ###A.2.1. HandlersTimeProfiler # # 이그나이트의 [HnadlerTimeProfiler](https://pytorch.org/ignite/generated/ignite.contrib.handlers.time_profilers.HandlersTimeProfiler.html)를 이용해 데이터 로딩과 처리, 엔진에 적용된 각종 핸들러에 대한 간단한 프로파일링을 진행하여 아래와 같이 출력하거나 저장할 수 있다. 이그나이트에서 직접 제공하는 간단한 컨볼루션 네트워크에 대한 프로파일링 예제는 [이곳](https://github.com/pytorch/ignite/blob/9230a7319047b37ce19d956e024fa1b86030c30a/examples/notebooks/HandlersTimeProfiler_MNIST.ipynb)에서 참조하도록 한다. # # 상기 예제의 실행 결과는 다음과 같다 # #
# (HandlersTimeProfiler results table image omitted)
# # + [markdown] id="HLBKKwFyBI0h" # 결과 항목 중 제일 아래를 보면 Dataflow에 소요된 시간이 약 101초 임을 알 수 있는데, HandlersTimeProfiler 에서는 미니배치를 로딩하는 이벤트 발생 전과 후에 기록한 시간을 이용하여 이를 측정한다. 이와 마찬가지로 Processing에 소요된 시간은 이터레이션(iteration) 시작과 종료 이벤트 시점에 기록한 시간을 이용하여 측정하며, 예제에서는 약 25초가 소요되었음을 알 수 있다. # # + id="HdhoeHGgsimM" import os gpu_gtg = False if int(os.environ.get("COLAB_GPU")) > 0: gpu_gtg = "COLAB_GPU" in os.environ tpu_gtg = "COLAB_TPU_ADDR" in os.environ if tpu_gtg: # tpu print("TPU") #VERSION = "nightly" # https://github.com/pytorch/builder/pull/750 VERSION = "20210304" # was 20200607" # !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py # !python pytorch-xla-env-setup.py --version $VERSION # + colab={"base_uri": "https://localhost:8080/"} id="27ND-qg2tJEU" outputId="2586824f-1ce4-4390-c5ea-0cd4df8d3df6" # !pip install --pre pytorch-ignite # + id="bYoQTy8JtJBU" import numpy as np import torch import torch.nn as nn import torch.optim as optim import torchvision import torchvision.transforms as transforms from torchvision import datasets, models import torchsummary import ignite import ignite.distributed as idist from ignite.engine import Engine, Events, create_supervised_evaluator, create_supervised_trainer from ignite.metrics import Accuracy, Loss, RunningAverage, ConfusionMatrix from ignite.handlers import ModelCheckpoint, EarlyStopping from ignite.utils import setup_logger # + id="y3akwem4zQQf" from ignite.contrib.handlers import ProgressBar, HandlersTimeProfiler # + id="va-GVsDLyijz" def training(local_rank, config, **kwargs): print("local rank: ", local_rank) ########################################################### # 데이터 준비 train_transform = transforms.Compose( [ transforms.Pad(4), transforms.RandomCrop(32, fill=128), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)), ] ) test_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),]) if idist.get_local_rank() > 0: idist.barrier() trainset = torchvision.datasets.CIFAR10(root=config["data_path"], train=True, download=True, transform=train_transform) testset = torchvision.datasets.CIFAR10(root=config["data_path"], train=False, download=True, transform=test_transform) if idist.get_local_rank() == 0: idist.barrier() trainloader = idist.auto_dataloader(trainset, batch_size=config["batch_size"], shuffle=True, num_workers=config["num_workers"], drop_last=True) testloader = idist.auto_dataloader(testset, batch_size=config["batch_size"], shuffle=False, num_workers=config["num_workers"],) ########################################################### # 모델, 옵티마이저, 로스, 트레이너, 이밸류에이터 num_classes = 10 model = models.resnet18(num_classes = num_classes) model = idist.auto_model(model) optimizer = idist.auto_optim(optim.Adam(model.parameters(), lr=0.001)) criterion = nn.CrossEntropyLoss().to(idist.device()) trainer = create_supervised_trainer(model, optimizer, criterion, device=idist.device()) trainer.logger = setup_logger("hkim-trainer") #************************************************************* # HandlersTimeProfiler profiler = HandlersTimeProfiler() profiler.attach(trainer) # Init and attach progressbar pbar = ProgressBar(persist=True) pbar.attach(trainer, metric_names="all") metrics = { 'accuracy':Accuracy(), 'ce':Loss(criterion), } val_evaluator = create_supervised_evaluator(model, metrics=metrics, device=idist.device()) val_evaluator.logger = 
setup_logger("hkim-val_evaluator") # track a running average of the scalar loss output for each batch. RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss') ########################################################### # 이벤트 @trainer.on(Events.EPOCH_COMPLETED) def log_validation_results(trainer): state = val_evaluator.run(testloader) metrics = val_evaluator.state.metrics accuracy = metrics['accuracy']*100 loss = metrics['ce'] #log_metrics(val_evaluator.logger, state.epoch, state.times["COMPLETED"], "validation evaluator", state.metrics) pbar.log_message("Validation Results - Epoch: {} Avg accuracy: {:.2f} Avg loss: {:.2f}".format(state.epoch, accuracy, loss)) pbar.n = pbar.last_print_n = 0 trainer.run(trainloader, max_epochs=config["num_epochs"]) #************************************************************* # HandlersTimeProfiler profiler.print_results(profiler.get_results()) # + colab={"base_uri": "https://localhost:8080/", "height": 1000, "referenced_widgets": ["8aed36fc448e4a4d93d51571dfc27a76", "", "126b5fe1aa1f4fb8a6e6efeb8d2e36ff", "5877f880ed104ed79bd1b7054838237b", "", "82490e9176c44accaa67bb681ed0dfbf", "866e6260d3ad4e98b32e1f72643f47b7", "633ceb41c520422e857ba4138bd6f468", "", "", "cc9a9fcadd3f41dbb9f305ed916d71b7", "ffb8b4be834848168f5ea106ec0a09f9", "", "", "2d10aa4dd6734b76865679a203432259", "c41a97ff37624a69ac1411e1ae7956a4", "de1504db28584a2f8943127b97342ac4", "", "", "", "", "ac055ace583645bd89ef2635c653066c", "dcf3ae824de840dfabb011c7b49abed0", "0e7d23d510d54400b32211d14ed2d69c", "c58f79c223ef4ef18fe122da641e2b28", "829d616a99c04347b609ef2743a2727c", "", "", "7c438a6511d44ca783022fa2c5ce3c48", "e60f72d76bee43c4bfd89804edff644c", "2b436fdb7114401a8009affca4bea300", "", "e56e6b31f62e45e48e27d81c9b6dacd8", "ac17c15610424f6cb01ee7433f8567fd", "", "5ca4ded0a2934a5993b7bcb467a4d777", "", "", "", "", "", "", "", "", "", "", "a49d09561c5a4af4bed759c3a296242e", "482df02980ba425b9e3c4461fb618887", "", "", "11a65af7dd01461585b8603d59d2db31", "", "8be3770eca104a4d81a38cc78d60c39f", "", "4e5b66e8fa4d485fa668eec4743087a3"]} id="FXJzm8yqtI-k" outputId="5a893e87-0025-4d13-9a66-e4ebea5ab5aa" config = { "seed": 543, "data_path" : "./cifar10", "output_path" : "./output-cifar10/", "model" : "resnet18", "batch_size" : 512, "momentum" : 0.9, "weight_decay" : 1e-4, "num_workers" : 2, "num_epochs" : 5, "learning_rate" : 0.4, "num_warmup_epochs" : 4, "validate_every" : 3, "checkpoint_every" : 1000, "backend" : None, "resume_from" : None, "log_every_iters" : 15, "nproc_per_node" : None, "stop_iteration" : None, "with_amp" : False, "log_interval" : 10, "verbose_set" : False, "verbose_set2" : False, "verbose_loader" : False } if not (tpu_gtg or gpu_gtg): # cpu config["backend"] = 'gloo' config["nproc_per_node"] = 8 elif gpu_gtg: # gpu config["backend"] = 'nccl' config["nproc_per_node"] = 1 elif tpu_gtg: # tpu config["backend"] = 'xla-tpu' config["nproc_per_node"] = 8 else: # error raise RuntimeError("Unknown environment: tpu_gtg {}, gpu_gtg {}".format(tpu_gtg, gpu_gtg)) if config["backend"] == "xla-tpu" and config["with_amp"]: raise RuntimeError("The value of with_amp should be False if backend is xla") dist_configs = {'nproc_per_node': config["nproc_per_node"], "start_method": "fork"} #def log_metrics(logger, epoch, elapsed, tag, metrics): # metrics_output = "\n".join([f"\t{k}: {v}" for k, v in metrics.items()]) # logger.info(f"\nEpoch {epoch} - Evaluation time (seconds): {elapsed:.2f} - {tag} metrics:\n {metrics_output}") with idist.Parallel(backend=config["backend"], **dist_configs) 
as parallel: parallel.run(training, config, a=1, b=1) # + [markdown] id="3GzqJ_zsDAtS" # Processing 시간 항목은, 멀티노드 멀티CPU/GPU/TPU 환경하에 미니배치가 준비된 상태에서 계산되므로 노드 간 또는 GPU간 네트워킹 관련 오버헤드를 배제하고 프로파일 할 수 있다는 점에서 유용하다. 마찬가지로 Dataflow 시간 항목이 크다면 네트워킹 오버헤드가 크다고 생각할 수 있다는 점에서 개발자가 사용하는 시스템에 대한 평가가 일부 가능해진다. # + [markdown] id="OQL39wXmXx9k" # ## License # # + [markdown] id="YXio6q3iX5Ig" # --- # # # Note: This is not an official [LG AI Research](https://www.lgresearch.ai/) product but sample code provided for an educational purpose # #
# author: #
# email: / # # # --- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import glob import os import pandas as pd import numpy as np ##################### Traces description # 1. CLT_PUSH_START - SENDING Time between the scheduling of the request and its actual processing # 2. CLT_PUSH_END - CLT_PUSH_START Time to prepare the packet, send it to the NIC driver through rte_eth_tx_burst(), and free the dpdk mbuf # 3. SRV_POP_START - CLT_PUSH_END Time on the wire: item detected in the io queue's receive queue - client packet sent /!\ I think this can be negative if the server schedule's pop way before the client sends requests # 4. SRV_POP_END - SRV_POP_START Time to parse incoming packet + "waiting time" at the server's queue # 5. NET_RECEIVE - SRV_POP_END Time between message delivered to the application by dmtr_wait_any() and packet processed by the I/O queue # 6. HTTP_DISPATCH - NET_RECEIVE Time taken to select the HTTP recipient (either RR, or apply the filter, etc) # 7. START_HTTP - HTTP_DISPATCH Time spent in memory queue between network component and HTTP # 8. END_HTTP - START_HTTP Time spent performing HTTP processing # 9. HTTP_DONE - END_HTTP Time spent in memory queue between HTTP component and HTPP /!\ This include the "wait time" of dmtr_wait_any, as the same poll operates on both network sockets, and this memory queue # 10. SRV_PUSH_START - HTTP_DONE Time between the scheduling of the response and its actual processing # 11. SRV_PUSH_END - SRV_PUSH_START Time spent preparing the packet and sending it to the wire (identical to #2) # 12. CLT_POP_START - SRV_PUSH_END Time spent on the wire /!\ I think this can be negative as the client schedules the read as soon as it as sent the request # 13. CLT_POP_END - CLT_POP_START Time spent processing an incoming network packet (includes wait time) (identical to #4) # 14. 
COMPLETED - CLT_POP_END Time ellapsed between the reponse being delivered to the client by dmtr_wait_any(), and the response's being fully processed by the I/O queue TRACE_ORDER = [ 'SENDING', 'CLT_PUSH_START', # 'CLT_PUSH_END', # 'SRV_POP_START', 'SRV_POP_END', 'NET_RECEIVE', 'HTTP_DISPATCH', 'START_HTTP', 'END_HTTP', 'HTTP_DONE', 'SRV_PUSH_START', # 'SRV_PUSH_END', # 'CLT_POP_START', 'CLT_POP_END', 'COMPLETED' ] def read_tokens(trace_dir, exclude_first = 5): #REQ_ID SENDING READING COMPLETED PUSH_TOKEN POP_TOKEN) files = glob.glob(os.path.join(trace_dir, '*traces*')) files = list(filter(lambda x: not ('POP' in x or 'PUSH' in x), files)) if len(files) > 1: raise Exception("Too many files") df = pd.read_csv(files[0], sep='\t') min_time = df[df.columns[1]].min() df = df[df[df.columns[1]] > min_time + exclude_first * 1e9] return df def read_traces(trace_dir, label): files = glob.glob(os.path.join(trace_dir, '*%s-traces' % label)) if len(files) > 1: raise Exception("Too many files") df = pd.read_csv(files[0], sep='\t') return df def merge_trace(token_df, trace_df, token_label, col_label): trace_df = trace_df[['%s_TOKEN' % token_label, 'TIME']] df = pd.merge(token_df, trace_df, on='%s_TOKEN' % token_label) # return df return df.rename(columns={'TIME': col_label}) def merge_traces(token_df, trace_df, token_label, col_label): start_df = trace_df[trace_df.START] stop_df = trace_df[~trace_df.START] df = merge_trace(token_df, start_df, token_label, '%s_%s_START' % (col_label, token_label)) df = merge_trace(df, stop_df, token_label, '%s_%s_END' % (col_label, token_label)) return df col_labels = dict(client='CLT', server='SRV') token_labels = dict(client='rate_client', server='') def order_cols(df, subtract_root = True): col_order = list(filter(lambda x: x in df.columns, TRACE_ORDER, )) df = df[['REQ_ID'] +col_order].set_index('REQ_ID') if subtract_root: df[col_order] = df[col_order].apply(lambda x: x - df[col_order[0]]) return df def read_profiling_node(base_dir, experiment, node_label): client_dir = os.path.join(base_dir, experiment, node_label) token_df = read_tokens(client_dir) push_df = read_traces(client_dir, 'PUSH') pop_df = read_traces(client_dir, 'POP') df = merge_traces(token_df, push_df, 'PUSH', col_labels[node_label]) df = merge_traces(df, pop_df, 'POP', col_labels[node_label]) return order_cols(df) CLIENT_RCV = 'CLT_POP_END' CLIENT_SND = 'CLT_PUSH_START' def read_merged_profiling(base_dir, experiment): client_df = read_profiling_node(base_dir, experiment, 'client') server_df = read_profiling_node(base_dir, experiment, 'server') server_cols = server_df.columns client_cols = client_df.columns df = client_df.join(server_df) offset = df[CLIENT_SND] df[server_cols] = df[server_cols].apply(lambda x: x + offset) offset =( df[CLIENT_RCV] - df[server_cols[-1]]) / 2 df[server_cols] = df[server_cols].apply(lambda x: x + offset) return order_cols(df.reset_index()) # + COLORS = ["#700f00", "#013fb0", "#cbcd11", "#6b3a7d", "#ff392e", "#008eb2", "#ff8da5", "#000000", "#458f00", "#AAAAAA", "#123456", "#7192F1", "#013fb0", '#777777', '#BBBBBB' ] def stacked_plot(df, full_sep=False): columns = df.columns print(columns) bottom = 0 cols_so_far = [] for prev_col, next_col, color in zip(columns, columns[1:], COLORS): if not full_sep: bottom = df[prev_col] plt.bar(df.index, df[next_col] - df[prev_col], 1, bottom=bottom, color=color, label=prev_col) if full_sep: bottom = (bottom + df[next_col]- df[prev_col]).max() # - def plot_stacked_sample(df, sample_size=100, full_sep=False): df = 
df.sort_values(df.columns[-1]) lowest = df.iloc[:sample_size] highest = df.iloc[-sample_size:] middlest = df.iloc[int(len(df) / 2 - sample_size / 2): int(len(df) / 2 + sample_size / 2)] plt.figure(figsize=(9.5, 4)) ax1 = plt.subplot(131) stacked_plot(lowest.reset_index(drop=True), full_sep) ax2 = plt.subplot(132, sharey=ax1) stacked_plot(middlest.reset_index(drop=True), full_sep) plt.subplot(133, sharey=ax2) stacked_plot(highest.reset_index(drop=True), full_sep) plt.tight_layout() plt.subplots_adjust(top=.8) plt.sca(ax2) plt.legend(loc='lower center', bbox_to_anchor=(.5, 1), ncol=5) df = read_merged_profiling('profiling', 'all_pinned_8') plot_stacked_sample(df, 200) df = read_merged_profiling('profiling', 'all_pinned_14') plot_stacked_sample(df, 200) df = read_merged_profiling('profiling', 'all_pinned_file_only') plot_stacked_sample(df, 200) # + import numpy as np def plot_correlations(df): columns = df.columns for prev_col, next_col, color in zip(columns, columns[1:], COLORS): diffs = df[next_col] - df[prev_col] x = diffs / diffs.max() y = df.COMPLETED - df[columns[0]] plt.plot(x, y, '.', markersize=.1, color=color, label=prev_col) mask = ~x.isna() & ~y.isna() p = np.polyfit(x[mask], y[mask], 1) plt.plot(x, p[0] + p[1]*x, '-', color=color) # - df = read_merged_profiling('profiling', 'all_pinned_file_only') df = df.sort_values('COMPLETED') # df = df.iloc[-5000:] diffs = df.astype(float).diff(-1, axis=1) diffs = diffs[diffs.columns[~diffs.isna().all().values]] diffs['TOTAL'] = df['COMPLETED'] plt.figure() corr = diffs.corr() corr[corr == 1] = 0 plt.imshow(corr) plt.xticks(range(len(corr.columns))) plt.yticks(range(len(corr.columns))) plt.xlim([-.5, 10.5]) plt.ylim([-.5, 10.5]) plt.gca().set_xticklabels(corr.columns, rotation=-45, ha='left') plt.gca().set_yticklabels(corr.columns) plt.colorbar() plt.tight_layout() # plt.subplots_adjust(left=.3, top=.8) df = read_merged_profiling('profiling', 'all_pinned_file_only') df = df.sort_values('COMPLETED') # df = df.iloc[-5000:] diffs = df.astype(float).diff(-1, axis=1) diffs = diffs[diffs.columns[~diffs.isna().all().values]] diffs['TOTAL'] = df['COMPLETED'] plt.figure() corr = diffs.cov() # corr[corr > .98] = 0 plt.imshow(corr) plt.xticks(range(len(corr.columns))) plt.yticks(range(len(corr.columns))) plt.xlim([-.5, 10.5]) plt.ylim([-.5, 10.5]) plt.gca().set_xticklabels(corr.columns, rotation=-45, ha='left') plt.gca().set_yticklabels(corr.columns) plt.colorbar() plt.tight_layout() # plt.subplots_adjust(left=.3, top=.8) df = read_merged_profiling('profiling', 'all_pinned_regex_only') plot_stacked_sample(df, 200) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] id="PDDqFd8hon3v" colab_type="text" #

# # Create and benchmark train, validation, and test datasets

# # In this notebook, you will prepare the training, validation (sometimes called evaluation), and test datasets using the NYC taxi fare data. After that, you will benchmark against these datasets, in other words use your benchmark model to calculate the metric values for the training, validation, and test examples. In parallel, you will learn how to include various bash shell commands (e.g. ls, head, and others) in your Colab notebooks. # # # --- # Before you start, **make sure that you are logged in with your student account**. Otherwise you may incur Google Cloud charges for using this notebook. # # --- # # + id="Wq75B91eon3y" colab_type="code" cellView="form" colab={} import numpy as np import pandas as pd import seaborn as sns import shutil from google.cloud import bigquery #@markdown Copy-paste your GCP Project ID in the following field: PROJECT = "" #@param {type: "string"} #@markdown When running this cell you will need to **uncheck "Reset all runtimes before running"** as shown on the following screenshot: #@markdown ![](https://i.imgur.com/9dgw0h0.png) #@markdown Next, use Shift-Enter to run this cell and to complete authentication. try: from google.colab import auth auth.authenticate_user() print("AUTHENTICATED") except: print("FAILED to authenticate") # + [markdown] id="quQPY9r8MzQ6" colab_type="text" # Next, query for 1 out of 100,000 rows of the entire taxi fare dataset and apply the clean up pre-processing rules you have developed in the earlier lab. Based on the summary from the Pandas describe table, you can confirm that there are roughly 10,500 rows of cleaned-up data in the `tripsqc` variable. # + id="YdiXbfDuI-LI" colab_type="code" colab={} EVERY_N = 100000 bq = bigquery.Client(project=PROJECT) trips = bq.query(''' SELECT pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount FROM `nyc-tlc.yellow.trips` WHERE MOD(ABS(FARM_FINGERPRINT(STRING(pickup_datetime))), %d) = 1 #note that that trips with zero distance or #costing less than $2.50 are excluded AND trip_distance > 0 AND fare_amount >= 2.5 ''' % (EVERY_N)).to_dataframe() def preprocess(trips_in): trips = trips_in.copy(deep=True) trips.fare_amount = trips.fare_amount + trips.tolls_amount del trips['tolls_amount'] del trips['total_amount'] del trips['trip_distance'] del trips['pickup_datetime'] qc = np.all([\ trips['pickup_longitude'] > -78, \ trips['pickup_longitude'] < -70, \ trips['dropoff_longitude'] > -78, \ trips['dropoff_longitude'] < -70, \ trips['pickup_latitude'] > 37, \ trips['pickup_latitude'] < 45, \ trips['dropoff_latitude'] > 37, \ trips['dropoff_latitude'] < 45, \ trips['passenger_count'] > 0, ], axis=0) return trips[qc] tripsqc = preprocess(trips) tripsqc.describe() # + [markdown] id="bbsRzcqLon46" colab_type="text" #

# ## Create ML datasets

# # The next cell splits the cleaned up dataset randomly into training, validation and test sets. Since you are working with an in-memory dataset (for now), you will use a 70%-15%-15% split. Later you will learn about the benefits of allocating a larger percentage of the entire dataset for training. # + id="9WZgUiO0on47" colab_type="code" colab={} shuffled = tripsqc.sample(frac=1) trainsize = int(len(shuffled['fare_amount']) * 0.70) validsize = int(len(shuffled['fare_amount']) * 0.15) df_train = shuffled.iloc[:trainsize, :] df_valid = shuffled.iloc[trainsize:(trainsize+validsize), :] df_test = shuffled.iloc[(trainsize+validsize):, :] df_train.describe() # + id="G60adT61on5C" colab_type="code" colab={} df_valid.describe() # + id="1i2mo1CAon5H" colab_type="code" colab={} df_test.describe() # + [markdown] id="ng1gbvDuon5K" colab_type="text" # Let's write out the three dataframes to appropriately named csv files. The files will be useful for local training while you are developing your machine learning models. In future labs, you will scale out to a larger dataset using other serverless capabilities like Cloud Machine Learning Engine (Cloud MLE) and Dataflow. # + id="yuQwBVVpon5L" colab_type="code" colab={} def to_csv(df, filename): outdf = df.copy(deep=False) outdf.loc[:, 'key'] = np.arange(0, len(outdf)) # rownumber as key # reorder columns so that target is first column cols = outdf.columns.tolist() cols.remove('fare_amount') cols.insert(0, 'fare_amount') print (cols) # new order of columns outdf = outdf[cols] outdf.to_csv(filename, header=False, index_label=False, index=False) to_csv(df_train, 'taxi-train.csv') to_csv(df_valid, 'taxi-valid.csv') to_csv(df_test, 'taxi-test.csv') # + [markdown] id="MOPzJGT7OtWu" colab_type="text" # There are 2 ways to execute shell commands in the OS environment hosting this notebook: # # 1. You can prefix your `bash` command with an exclaimation mark as shown in the next code cell. # # 2. You can use the `%%bash` "magic" as the first line of a code cell. This approach is better suited for multi-line shell scripts. # # If you are interested in details about Jupyter "magics", you can learn more [here](https://nbviewer.jupyter.org/github/ipython/ipython/blob/1.x/examples/notebooks/Cell%20Magics.ipynb) # + [markdown] id="RGZsBGRcon5T" colab_type="text" #

# ## Verify that datasets exist

# + id="57XAd4O8on5U" colab_type="code" colab={} # !ls -l *.csv # + [markdown] id="f1AvPQIDon5Y" colab_type="text" # There are 3 .csv files corresponding to train, valid, test datasets. The ratio of file sizes reflect the percentages in the split of the data. # + id="Tcqtnc0Hon5Y" colab_type="code" colab={} language="bash" # head -10 taxi-train.csv # + [markdown] id="St5NHo1lon5b" colab_type="text" # Looks good! Now the datasets are prepared so you will be ready to train machine learning models, evaluate, and test them. # + [markdown] id="DBbYTPZAon5c" colab_type="text" #

# ## Benchmark

# # Before committing to a complex machine learning model, it is a good idea to come up with a very simple, heuristic model and use that as a benchmark. # # My model is going to simply divide the mean fare_amount by the mean trip_distance to come up with an average rate per kilometer and use that to predict. Let's compute the RMSE of such a model. # + id="o4TuDR8Jon5c" colab_type="code" colab={} def distance_between(lat1, lon1, lat2, lon2): # haversine formula to compute distance "as the crow flies". Taxis can't fly of course. dist = np.degrees(np.arccos(np.minimum(1,np.sin(np.radians(lat1)) * np.sin(np.radians(lat2)) + np.cos(np.radians(lat1)) * np.cos(np.radians(lat2)) * np.cos(np.radians(lon2 - lon1))))) * 60 * 1.515 * 1.609344 return dist def estimate_distance(df): return distance_between(df['pickuplat'], df['pickuplon'], df['dropofflat'], df['dropofflon']) def compute_rmse(actual, predicted): return np.sqrt(np.mean((actual-predicted)**2)) def print_rmse(df, rate, name): print ("{1} RMSE = {0}".format(compute_rmse(df['fare_amount'], rate*estimate_distance(df)), name)) FEATURES = ['pickuplon','pickuplat','dropofflon','dropofflat','passengers'] TARGET = 'fare_amount' columns = list([TARGET]) columns.extend(FEATURES) # in CSV, target is the first column, after the features columns.append('key') df_train = pd.read_csv('taxi-train.csv', header=None, names=columns) df_valid = pd.read_csv('taxi-valid.csv', header=None, names=columns) df_test = pd.read_csv('taxi-test.csv', header=None, names=columns) rate = df_train['fare_amount'].mean() / estimate_distance(df_train).mean() print ("Rate = ${0}/km".format(rate)) print_rmse(df_train, rate, 'Train') print_rmse(df_valid, rate, 'Valid') print_rmse(df_test, rate, 'Test') # + [markdown] id="Z5TGIWkQon5i" colab_type="text" #

# ## Benchmark on same dataset

# # The RMSE depends on the dataset and for meaningful and reproducible comparisons it is critical to measure on the same dataset each time. The following query will continue to be reused in the later labs: # + id="O9xVQv49on5j" colab_type="code" colab={} def create_query(phase, EVERY_N): """ phase: 1=train 2=valid """ base_query = """ SELECT (tolls_amount + fare_amount) AS fare_amount, CONCAT( STRING(pickup_datetime), CAST(pickup_longitude AS STRING), CAST(pickup_latitude AS STRING), CAST(dropoff_latitude AS STRING), CAST(dropoff_longitude AS STRING)) AS key, EXTRACT(DAYOFWEEK FROM pickup_datetime)*1.0 AS dayofweek, EXTRACT(HOUR FROM pickup_datetime)*1.0 AS hourofday, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers FROM `nyc-tlc.yellow.trips` WHERE {} AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 """ if EVERY_N == None: if phase < 2: # training selector = "MOD(ABS(FARM_FINGERPRINT(STRING(pickup_datetime))), 4) < 2" else: selector = "MOD(ABS(FARM_FINGERPRINT(STRING(pickup_datetime))), 4) = 2" else: selector = "MOD(ABS(FARM_FINGERPRINT(STRING(pickup_datetime))), %d) = %d" % (EVERY_N, phase) query = base_query.format(selector) return query sql = create_query(2, 100000) df_valid = bq.query(sql).to_dataframe() print_rmse(df_valid, 2.56, 'Final Validation Set') # + [markdown] id="MT9dwy_ton5m" colab_type="text" # The simple distance-based rule gives us a RMSE of $7.42. We have to beat this, of course, but you will find that simple rules of thumb like this can be surprisingly difficult to beat. # # Let's be ambitious, though, and make our goal to build ML models that have a RMSE of less than $6 on the test set. # + [markdown] id="HVuX8oeMon5m" colab_type="text" # Copyright 2019 CounterFactual.AI LLC. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 5. Final Model - Clean Notebook with Output for API # # ## Introduction # # Final model and the scripts to transfer to API. 
# ## Load libraries # + # Set up import pandas as pd import numpy as np import scipy as sp import matplotlib import matplotlib.pyplot as plt import pickle as pkl import seaborn as sns import itertools import json from sklearn.model_selection import train_test_split from sklearn.model_selection import cross_val_score from sklearn.model_selection import cross_validate from sklearn.model_selection import cross_val_predict from sklearn.model_selection import validation_curve from sklearn.model_selection import learning_curve from sklearn.model_selection import GridSearchCV from sklearn.model_selection import RandomizedSearchCV from sklearn.model_selection import StratifiedKFold from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 from sklearn.feature_selection import mutual_info_classif from sklearn.feature_selection import RFECV from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import scale from sklearn.preprocessing import Normalizer from sklearn.preprocessing import MinMaxScaler from sklearn.decomposition import PCA from sklearn.dummy import DummyClassifier from sklearn.cluster import KMeans from sklearn.pipeline import Pipeline from sklearn.pipeline import make_pipeline from sklearn.metrics import explained_variance_score from sklearn.metrics import mean_squared_error from sklearn.metrics import r2_score from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.metrics import roc_curve from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report from sklearn.metrics import precision_recall_curve from sklearn.metrics import roc_auc_score from sklearn.mixture import GaussianMixture from sklearn import tree from sklearn import svm from sklearn import ensemble from sklearn import naive_bayes from sklearn import linear_model import xgboost as xgb from feature_selector import FeatureSelector # %load_ext autoreload # %autoreload 2 # %matplotlib inline LOCAL_DIR = '/home/jovyan/notebook/' # - # Load data df = pd.read_csv(LOCAL_DIR + 'brainFitDx/data/sample/sample_2000_for_model.csv') df.head(5) # + # THESE ARE THE 'FINAL' FEATURES features2_1 = [ 'total_recall', 'serial7', #'tics', removed 'backwards_20', 'immediate_recall', 'delayed_recall', 'g1101', 'g1074a', 'g1158', 'g1226', 'g1240', 'g1242', 'g1289', 'g1290', 'g1323', 'g1329', 'g1340', 'g1345', 'g1361', 'g1363', 'g1369', 'g1374', 'g1395', 'g1416', 'g1425', 'g1442', #'g1805', #'g1814_05', #'g1814', removed 'g2707', 'g2710', 'g2723', 'g2725', 'g2726', 'g2745', 'g2755', 'g2847', 'g2851', 'g2860', 'g2865', 'g2870', 'g2915', 'g2916', 'g2918', 'g2940', 'g3002', 'g1_trend_health' ] # + # Corresponding feature names in web application new_names = ['23', '25', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '22', '24', 'total_recall' ] # - # ## Calculate sample statistics for key inputs (health, cognitive, living) temp_df = df.loc[:,features2_1] # + # compute unweighted benchmarks for health domains # cognitive health def cognitive_health(d): """ Args: d: df row Returns: percentage """ accuracy=(d['immediate_recall']+d['delayed_recall']+d['backwards_20']+d['serial7'])/27 return round(accuracy*100, 3) # physical health def physical_health(d): """ Args: d: df row Returns: percentage """ # binary questions binary_ques=['g1240', 'g1289', 'g1290', 'g1329', 
'g1345', 'g1442'] binary_score=[d[b] for b in binary_ques].count(1) if d['g1395']==5: binary_score+=1 # numeric questions numeric_ques=['g1226', 'g1242', 'g1323', 'g1340', 'g1361', 'g1363', 'g1369', 'g1374', 'g1416', 'g1_trend_health'] numeric_score=sum([d[n] for n in numeric_ques]) # final score final_score=(binary_score+numeric_score)/72 return (round((1-final_score)*100, 3)) # living health def living_health(d): """ Args: d: df row Returns: percentage """ score=0 # binary questions binary_ques=['g2707', 'g2710', 'g2723', 'g2725', 'g2726', 'g2745', 'g2755', 'g2847', 'g2851', 'g2860', 'g2865', 'g2870', 'g2915', 'g2916', 'g2918'] score+=[d[b] for b in binary_ques].count(1) # numeric questions if d['g2940']>=1: score+=1 if d['g3002']!=1: score+=1 # final score final_score=score/17 return round((1-final_score)*100, 3) # - # create columns temp_df['cognitive_health']=temp_df.apply(cognitive_health, axis=1) temp_df['physical_health']=temp_df.apply(physical_health, axis=1) temp_df['living_health']=temp_df.apply(living_health, axis=1) temp_df['cognitive_health'].plot.hist() temp_df['physical_health'].plot.hist() temp_df['living_health'].plot.hist() temp_df = temp_df.loc[:,['g1101', 'physical_health', 'cognitive_health', 'living_health']] temp_df2 = temp_df.sort_values('g1101') bins = [0, 50, 60, 70, 80, 90, 100, 120] temp_df2['group'] = np.digitize(temp_df2['g1101'],bins) grouped = temp_df2.groupby('group') for test in ['cognitive_health', 'physical_health', 'living_health']: print( grouped[test].describe()) for test in ['cognitive_health', 'physical_health', 'living_health']: grouped[test].describe().to_csv(str(test)+'.csv') # ## Final Model # Features features2_1.sort() print(features2_1) print(len(features2_1)) X = df.loc[:,features2_1] y = df['kmeans'] X.rename(columns=dict(zip(features2_1, new_names)), inplace=True) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42) X_train = pd.DataFrame(data=X_train, columns=new_names) X_test = pd.DataFrame(data=X_test, columns=new_names) # + # Estimate and examine model clf = xgb.XGBClassifier(objective='binary:logistic', gamma=0.5, min_child_weight=1, max_depth=3, learning_rate=0.3, seed=42) model = clf.fit(X_train,y_train) y_pred = model.predict(X_test) confusion = confusion_matrix(y_test, y_pred) sns.heatmap(pd.DataFrame(confusion), annot=True, cbar=None, cmap='Blues') print ('Classification report: \n{}'.format(classification_report(y_test, y_pred))) # - # ## Saving Model X = df.loc[:,features2_1] y = df['kmeans'] X.rename(columns=dict(zip(features2_1, new_names)), inplace=True) X.head(2) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42) X_train = pd.DataFrame(data=X_train, columns=new_names) X_test = pd.DataFrame(data=X_test, columns=new_names) # + # Save the model train_data = xgb.DMatrix(X_train, y_train) param = {'max_depth':3, 'eta':0.3, 'silent':1, 'gamma':0.5, 'min_child_weight':1, 'seed':42, 'objective':'binary:logistic'} num_round = 100 model = xgb.train(param, train_data, num_round) pkl.dump(model, open('xgb_model.pkl','wb'), protocol=2) # - # Load model xgb_model = pkl.load(open('xgb_model.pkl','rb')) xgb.plot_importance(xgb_model) # Save to file test data X_test2 = X_test.iloc[1:2,:] #X_test.to_csv('testinput.csv', index=False) X_test2.to_json('testinput.json',orient='records') X_test2 with open('testinput.json') as json_file: json_df = json.load(json_file) json.dumps(json_df) load_df2 = pd.DataFrame(json_df) load_df2 load_df = pd.read_json('testinput.json') 
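# Load the per-age-group benchmark distributions written out earlier with
# grouped[test].describe().to_csv(...); they are used below to place an individual's
# cognitive/physical/living scores within their age group.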
load_dfc = pd.read_csv('cognitive_health.csv') load_dfp = pd.read_csv('physical_health.csv') load_dfl = pd.read_csv('living_health.csv') load_dfc load_df = load_df2.loc[:,new_names] temp = load_df.columns.tolist() temp_df = pd.DataFrame(temp) temp_df.to_csv('finalfeatures.csv', header=False, index=False) load_df.head(4) # Do prediction load_data_matrix = xgb.DMatrix(load_df) # Predict preds = xgb_model.predict(load_data_matrix) print(preds) preds = xgb_model.predict(xgb.DMatrix(X_test)) sns.distplot(preds) # + # %%writefile xgb_model.py # Program to load model and predict class # Input: xgb_model.py %1 where %1 is a csv file of features # Output: array of predictions import os import sys import xgboost as xgb import pandas as pd import json import pickle as pkl # Load model xgb_model = pkl.load(open('xgb_model.pkl','rb')) # Read csv file inputcvs = pd.read_json(str(sys.argv[1])) json_2_df = inputcvs.loc[:,['23', '25', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '22', '24', 'total_recall' ]] # Process data load_data_matrix = xgb.DMatrix(json_2_df) print(xgb_model.predict(load_data_matrix)) # - # !python xgb_model.py testinput.json pred # ## To write the model.py xgb_model = pkl.load(open('xgb_model.pkl','rb')) data_json = pd.read_json('testinput.json') data = {"1":17,"2":70,"3":1,"4":1,"5":5,"6":3,"7":5,"8":5,"9":3,"10":5,"11":0,"12":5,"13":2,"14":2,"15":2,"16":3,"17":1,"18":3,"19":180,"20":5,"21":1,"22":[10,6],"23":1,"ts1":1532850133053,"24":[46,39,32,25,18],"ts2":1532850156469,"25":[10,6],"26":1,"27":5,"28":7,"29":6,"30":1,"31":5,"32":1,"33":1,"34":5,"35":5,"36":5,"37":1,"38":5,"39":5,"40":5,"41":1,"42":1} data['22'] = data['22'][0] data['25'] = data['25'][0] # + temp = 0 for i in range(5): if data['24'][i] == (100-47-7-i*7): temp += 1 data['24'] = temp # - data['total_recall'] = data['22'] + data['25'] data data_json.values # + # compute user performance metrics # cognitive health def compute_cog(d): """ Args: d: dict Returns: percentage """ accuracy=(d['22']+d['25']+d['23']+d['24'])/27 return round(accuracy*100, 3) # physical health def compute_physical(d): """ Args: d: dic Returns: percentage """ # binary questions binary_ques=[5, 7, 8, 10, 12, 20] binary_score=[d[str(b)] for b in binary_ques].count(1) if d['17']==5: binary_score+=1 # numeric questions numeric_ques=[4, 6, 9, 11, 13, 14, 15, 16, 18, 21] numeric_score=sum([d[str(n)] for n in numeric_ques]) # final score final_score=(binary_score+numeric_score)/72 return (round((1-final_score)*100, 3)) # living health def compute_living(d): """ Args: d: dict Returns: percentage """ score=0 score+=[d[str(k)] for k in range(26,41)].count(1) if d['41']>=1: score+=1 if d['42']!=1: score+=1 final_score=score/17 return round((1-final_score)*100, 3) # - cog_score = compute_cog(data) phy_score = compute_physical(data) liv_score = compute_living(data) load_df = pd.DataFrame(data, index=[0]) load_df = pd.read_json('testinput.json') json_2_df = load_df.loc[:,['23', '25', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '22', '24', 'total_recall' ]] json_2_df json_2_df load_data_matrix = xgb.DMatrix(json_2_df) res = xgb_model.predict(load_data_matrix) print(res) lw2009 = json_2_df['total_recall'][0] + 
json_2_df['23'][0] + json_2_df['24'][0] if lw2009 > 12: lw = 0 else: lw = 1 # + # Access info for scores i = 0 if json_2_df['2'][0] < 50: i = 1 elif json_2_df['2'][0] >= 50 and json_2_df['2'][0] < 60: i = 2 elif json_2_df['2'][0] >= 60 and json_2_df['2'][0] < 70: i = 3 elif json_2_df['2'][0] >= 70 and json_2_df['2'][0] < 80: i = 4 elif json_2_df['2'][0] >= 80 and json_2_df['2'][0] < 90: i = 5 elif json_2_df['2'][0] >= 90 and json_2_df['2'][0] < 100: i = 6 else: i = 7 # + # Given an age group cogs = load_dfc.loc[load_dfc['group']==i, ['mean', '25%', '50%', '75%', 'max',]].values.tolist()[0] cogs.append(cog_score) phys = load_dfp.loc[load_dfp['group']==i, ['mean', '25%', '50%', '75%', 'max',]].values.tolist()[0] phys.append(phy_score) livs = load_dfl.loc[load_dfl['group']==i, ['mean', '25%', '50%', '75%', 'max',]].values.tolist()[0] livs.append(liv_score) # - # Variables to pass results = {} results['Predicted'] = res[0] results['LW2009'] = lw2009 results['LWClass'] = lw results['Cog'] = cogs results['Phys'] = phys results['Livs'] = livs print (results) # + # %%writefile model.py import os import sys import xgboost as xgb import pandas as pd import numpy as np import json import pickle as pkl # Load model xgb_model = pkl.load(open('xgb_model.pkl','rb')) # Load distribution load_dfc = pd.read_csv('cognitive_health.csv') load_dfp = pd.read_csv('physical_health.csv') load_dfl = pd.read_csv('living_health.csv') # cognitive health def compute_cog(d): """ Args: d: dict Returns: percentage """ accuracy=(d['22']+d['25']+d['23']+d['24'])/27 return round(accuracy*100, 3) # physical health def compute_physical(d): """ Args: d: dic Returns: percentage """ # binary questions binary_ques=[5, 7, 8, 10, 12, 20] binary_score=[d[str(b)] for b in binary_ques].count(1) if d['17']==5: binary_score+=1 # numeric questions numeric_ques=[4, 6, 9, 11, 13, 14, 15, 16, 18, 21] numeric_score=sum([d[str(n)] for n in numeric_ques]) # final score final_score=(binary_score+numeric_score)/72 return (round((1-final_score)*100, 3)) # living health def compute_living(d): """ Args: d: dict Returns: percentage """ score=0 score+=[d[str(k)] for k in range(26,41)].count(1) if d['41']>=1: score+=1 if d['42']!=1: score+=1 final_score=score/17 return round((1-final_score)*100, 3) def api_predict(data): # Transform data data['22'] = data['22'][0] data['25'] = data['25'][0] temp = 0 for i in range(5): if data['24'][i] == (100-47-7-i*7): temp += 1 data['24'] = temp data['total_recall'] = data['22'] + data['25'] # Calculate scores cog_score = compute_cog(data) phy_score = compute_physical(data) liv_score = compute_living(data) # Create panda load_df = pd.DataFrame(data, index=[0]) #load_df = pd.read_json(inputfile) # Make sure order is consistent with training json_2_df = load_df.loc[:,['23', '25', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '22', '24', 'total_recall' ]] # Load data load_data_matrix = xgb.DMatrix(json_2_df) # Run prediction res = xgb_model.predict(load_data_matrix) # Calculate LW2009 score lw2009 = json_2_df['total_recall'][0] + json_2_df['23'][0] + json_2_df['24'][0] if lw2009 > 12: lw = 0 else: lw = 1 #print ('Internal info:') #print (data) #print (res, lw2009, lw) # Add in scores # Access info for scores i = 0 if json_2_df['2'][0] < 50: i = 1 elif json_2_df['2'][0] >= 50 and json_2_df['2'][0] < 60: i = 2 elif json_2_df['2'][0] >= 60 and 
json_2_df['2'][0] < 70: i = 3 elif json_2_df['2'][0] >= 70 and json_2_df['2'][0] < 80: i = 4 elif json_2_df['2'][0] >= 80 and json_2_df['2'][0] < 90: i = 5 elif json_2_df['2'][0] >= 90 and json_2_df['2'][0] < 100: i = 6 else: i = 7 # Given an age group cogs = load_dfc.loc[load_dfc['group']==i, ['mean', '25%', '50%', '75%', 'max',]].values.tolist()[0] cogs.append(cog_score) phys = load_dfp.loc[load_dfp['group']==i, ['mean', '25%', '50%', '75%', 'max',]].values.tolist()[0] phys.append(phy_score) livs = load_dfl.loc[load_dfl['group']==i, ['mean', '25%', '50%', '75%', 'max',]].values.tolist()[0] livs.append(liv_score) # Calculate Predicted Class, using a 0.5 hurdle - may need to change if res[0] > 0.5: pred_class = 1 else: pred_class = 0 # Calculate the Aggregate Dementia Risk Score: # Score = 1 if mixed results agg_score = 1 # Score = 2 if both dementia class if pred_class == 1 & lw == 1: agg_score = 2 # Score = 0 if both normal class if pred_class ==0 & lw == 0: agg_score = 0 #print(agg_score,cog_score, cogs[2], phy_score, phys[2], liv_score, livs[2]) pref_str = ','.join(map(str,[agg_score, cog_score, cogs[0], phy_score, phys[0], liv_score, livs[0]])) original_str = '{ "Predicted": ' + str(res[0]) + ', "LW2009": ' + str(lw2009) + ', "LWClass": ' + str(lw) + ', "Cog": ' + str(cogs) + ', "Phys": ' + str(phys) + ', "Livs": ' + str(livs) + '}' return pref_str # - import model model.api_predict({"1":17,"2":70,"3":5,"4":1,"5":1,"6":1,"7":1,"8":1,"9":3,"10":1,"11":1,"12":1,"13":2,"14":2,"15":2,"16":3,"17":1,"18":3,"19":120,"20":5,"21":1,"22":[10,6],"23":1,"ts1":1532850133053,"24":[46,39,32,25,18],"ts2":1532850156469,"25":[10,6],"26":1,"27":5,"28":7,"29":6,"30":1,"31":5,"32":1,"33":5,"34":5,"35":1,"36":5,"37":1,"38":5,"39":5,"40":1,"41":1,"42":1}) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import tensorflow as tf import matplotlib.pyplot as plt #Extracting DataFrame From Web : import pandas_datareader as web #Stock Symbol Used : Alphabet A Inc - > Google # From 1 April 2020 to 30 June 2020 #This data is full collection of all stock features from Google's Stock Market df_g=web.DataReader('GOOGL',data_source='yahoo',start='04-01-2008',end='07-15-2020') df_g.info() df_g.describe() # + def plot_series(time, series, format="-", start=0, end=None): plt.plot(time[start:end], series[start:end], format) plt.xlabel("Time") plt.ylabel("Value") plt.grid(True) # - df_g #Indicating Number Of days and features -> 123 days and 6 stock features # + temps=[] time_step=[] step=0 for row in range(3096): time_step.append(step) step = step + 1 ds = np.array(df_g['Close'].values) time = np.array(time_step) plt.figure(figsize=(10, 6)) plot_series(time, ds) # - ds_g = ds.reshape(-1,1) print(ds_g.shape) from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler(feature_range=(0,1)) series = scaler.fit_transform(ds_g) # + split_time = 2500 time_train = time[:split_time] x_train = series[:split_time] time_valid = time[split_time:] x_valid = series[split_time:] window_size = 30 batch_size = 32 shuffle_buffer_size = 1000 # - def windowed_dataset(series, window_size, batch_size, shuffle_buffer): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size + 1, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size + 1)) ds = ds.shuffle(shuffle_buffer) ds = ds.map(lambda 
w: (w[:-1], w[-1:])) return ds.batch(batch_size).prefetch(1) def windowed_valid_dataset(series, window_size, batch_size): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size + 1, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size + 1)) ds = ds.map(lambda w: (w[:-1], w[-1:])) return ds.batch(batch_size).prefetch(1) def model_forecast(model, series, window_size): ds = tf.data.Dataset.from_tensor_slices(series) ds = ds.window(window_size, shift=1, drop_remainder=True) ds = ds.flat_map(lambda w: w.batch(window_size)) ds = ds.batch(32).prefetch(1) forecast = model.predict(ds) return forecast train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size) print(train_set) #Checking The Batches And their corresponding or respective label ,x and y for x,y in train_set: print("x :",x.numpy()) print("y :",y.numpy()) x[0],y[0] #very First Set of batch of 64 values window and its respective 65th next value or label of first set.. # + tf.keras.backend.clear_session() tf.random.set_seed(51) np.random.seed(51) window_size = 64 batch_size = 256 train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size) print(train_set) print(x_train.shape) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding="causal", activation="relu", input_shape=[None, 1]), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.LSTM(64, return_sequences=True), tf.keras.layers.Dense(30, activation="relu"), tf.keras.layers.Dense(10, activation="relu"), tf.keras.layers.Dense(1), tf.keras.layers.Lambda(lambda x: x * 400) ]) optimizer = tf.keras.optimizers.SGD(lr=1e-6, momentum=0.9) model.compile(loss=tf.keras.losses.Huber(), optimizer='adam', metrics=["mse"]) history = model.fit(train_set,epochs=100) # - # + #plt.semilogx(history.history["lr"], history.history["loss"]) #plt.axis([1e-8, 1e-4, 0, 60]) # - window_size1 = 64 batch_size1 = 256 pd=model_forecast(model,series,window_size1) pd.shape pd=pd[split_time - window_size:-1, -1, 0] pd.shape plot_series(time_valid, x_valid) plot_series(time_valid,pd) rnn_forecast = model_forecast(model, series, window_size) rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0] print(rnn_forecast.shape) print(x_valid.shape) y=np.array(x_valid) print(x_valid.shape) plt.figure(figsize=(25, 6)) plot_series(time_valid, x_valid) plot_series(time_valid, rnn_forecast) tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy() print(rnn_forecast) print(x_valid) # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.5.3 # language: julia # name: julia-1.5 # --- # ## パッケージ # # ソフトウェアを再利用可能な形で配布する単位。 # # ### パッケージ管理の基本 # # Juliaには、標準でパッケージ管理ツールが同封されている。**パッケージ管理はJuliaのREPLで行うことが多い**。 # # \]を入力するとパッケージ管理モードに移行する # # statusコマンドで、現在インストールされている外部パッケージの状態を確認することができる。 # # add Distributionsなどのように**addコマンドでパッケージを追加する**。出力メッセージをみると**Project.tomlとManifest.tomlファイルが更新されている。これらのファイルはインストールされたパッケージの情報を管理する重要なファイルである**。 # # また、**インストールしたパッケージを削除するには、パッケージ管理モードでremoveコマンド(短縮形は、rm)を実行する** # # 以前インストールしたパッケージを更新したい場合はupdate(短縮形は、up)を用いる。引数を渡さずに使用すると、インストールされている全てのパッケージをアップデートしようとする。 # # 逆に、特定のパッケージをなんらかの理由でアップデートしたくない場合は、pinコマンドでパッケージのバージョンを固定する。pinコマンドで固定したバージョンは、freeコマンドで固定を解除できる。また、gcコマンドを使用するおt、パッケージの追加や削除、更新の繰り返しで発生した不要なデータが蓄積した時に自動的に削除してくれる。 # # ### プロジェクトのパッケージ管理 # # 
Project.toml→現在のプロジェクトが依存してるパッケージを管理するファイル。このファイルの中にはプロジェクト自体のメタデータや依存パッケージの一覧がある。 # # Manifest.toml→実際に使われるパッケージの正確なバージョンやインストール場所を管理するファイル。パッケージマネージャがProject.tomlファイルに記述された依存パッケージの一覧を元に依存解決をして生成する。つまり、Manifest.tomlファイルは、Juliaの実行時にどのパッケージが実際に使用されるかを厳密に指定する。 # # Project.tomlは人が読み書きするファイルだが、Manifest.tomlはJuliaのパッケージマネージャが自動的に生成するファイルなのでユーザが読み書きする必要性がない。 # # では、これら二つのファイルを使って実際にプロジェクトのパッケージ管理をしてみる。 # #### プロジェクトのパッケージ管理 # # まず、~/workspace/myprojectというディレクトリを作成して、Project.tomlを作成する # + active="" # # Project.toml # name = "myproject" # - # プロジェクト毎にパッケージ管理を有効にするには、パッケージ管理モードでactivateコマンドを実行する。具体的には、以下のようにactivate .で現在のディレクトリをプロジェクトとして有効化する。 # # 例: # # ・対象のディレクトリでproject.tomlを作成する。そして、対象のディレクトリでjulia -qを叩く # # ・pkg>の状態でactivate .を実行して現在のディレクトリをプロジェクトとして有効化する→(myproject) pkg>となる # # ・statusを実行すると何のパッケージもインストールされていないことがわかる # # ・ここからaddコマンドでパッケージを追加すると、現在のプロジェクトに依存パッケージとして追加される # # ・作成したproject.tomlファイルの内容を確認してみると、追加したパッケージが追記される # # project.tomlで追加したパッケージの右側にある文字列はUUIDと呼ばれるパッケージを識別するための識別子である。UUIDは覚えづらいので、addコマンドで管理するのが良い。 # # Project.tomlファイルの更新と同時にManifest.tomlファイルも作成される。 # # また、juliaコマンドを実行するときに、オプションとして$--project=@.$を指定すると、現在のディレクトリにあるプロジェクトを有効化して実行される。 # # 例えば、下記のコードをtest.jlとした時に、$julia --project=@. test.jl$とすれば良い。 # + active="" # using Distributions # println("OK") # - # ## パッケージの作成 # # Juliaのパッケージは、ディレクトリの中である決まった構造を取る必要性がある。この構造を守らないと、パッケージとして正しく動かないので注意する必要性がある。 # # ・**README.mdファイル**:パッケージの概要を説明するファイル # # ・**LICENSEファイル**:配布しているソースコードのライセンスを指定するファイル # # ・**Project.toml**:パッケージのメタ情報や依存パッケージを記述するファイル # # ・**srcディレクトリ**:ソースコードを収めるディレクトリ # # ・**testディレクトリ**:テストコードを収めるディレクトリ # # ・**docsディレクトリ**:ドキュメントを収めるディレクトリ # # Project.tomlファイルとsrcディレクトリは、パッケージには必須である。srcディレクトリには、そのパッケージと同じ名前のJuliaソースコードファイル(例えば、Example.jlというパッケージならExample.jlファイル)が収められている。 # # Licenseファイルやdocsディレクトリなどは必須ではないが、パッケージを公式パッケージとして登録する際に要求されるので、あらかじめ用意しておくのが望ましい。他にもパッケージの依存ライブラリやビルドスクリプトを収めるdepsディレクトリや、継続的インテグレーションのための設定ファイルなどがあることが望ましい。 # # 今回は、My-Package.jlという簡単なパッケージを作成する。今回は、~/workspace/MyPackageで作成する。 # # ・~/workspaceディレクトリに移動して、juliaのパッケージ管理モードに移行する。 # # ・**generate MyPackageコマンド**を実行すると、~workspace/MyPackageディレクトリが作成される。その中にProject.tomlファイルとsrc/MyPacage.jlファイルが作成される。 # # 注意:生成されたファイルの中身を確認すると、パッケージのUUIDは毎回変わるので心配ない # # ・自動生成されたパッケージはすぐに試すことができ、--projectオプションを使ってREPLを起動する。試しに、下記のMyPackage.jlを作成し、usingを使って検証することができる。また、**依存パッケージの追加はaddコマンドを用いるが、Statistics.jlなどの標準ライブラリにあるパッケージでもBase以外はaddコマンドで追加する必要があるので注意が必要である**。また、他のユーザーがパッケージをインストールする場合は、作成したパッケージの依存ライブラリをproject.tomlを参照して自動的にインストールするような仕組みになっている。 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import lux import pandas as pd # lux.config.plotting_backend = "matplotlib" lux.config.sampling = False # + df = pd.read_csv("/Users/s57405/git/iag_geo/spark_testing/apache_sedona/testing/junkpile/02_spatial_join_sql_st_subdivide_benchmark.csv") # convert times to seconds df["processing_time"] = (pd.to_timedelta(df["processing_time"]) .astype('timedelta64[s]') .astype(int) ) df["computer"] = df["computer"].astype("string") # filter by run name df2 = df[(df["computer"] == "macbook2-no-cache") & (df["processing_time"] < 330)] # .drop("computer", axis=1) # df2 = df[df["processing_time"] < 350] # - df2 df.recommendation df.info() # + # df.exported # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="g4_H6dxc41SG" # Matrix Algebra # + colab={"base_uri": "https://localhost:8080/"} id="Y_H23hju4zNy" outputId="fd1a658b-b0cc-40fe-d049-b34c4c4b3fba" #numpy import numpy as np a = np.array([1,2,3]) #This is an example of 1x3 matrix print(a) # + colab={"base_uri": "https://localhost:8080/"} id="4apBmsLw5FgQ" outputId="11fd62a8-5c00-4465-84ba-7f9ba5ad2c79" import numpy as np b = np.array (([1,2,3],[4,5,6])) #This is a 2x3 matrix print(b) # + colab={"base_uri": "https://localhost:8080/"} id="Ag9vp0Za5Jh0" outputId="1899fde0-6c5c-4306-b2fd-79baa85a6f66" import numpy as np c = np.array([[1,2,3],[4,5,6],[7,8,9]]) #This is an example of 3x3 matrix print(c) # + colab={"base_uri": "https://localhost:8080/"} id="cugzlrj05jN1" outputId="c354ed47-4703-4058-bf2e-97e704eb3d11" import numpy as np d = np.full ((3,3),7) print(d) # + colab={"base_uri": "https://localhost:8080/"} id="48ufys9R52lL" outputId="2c57a00c-bfea-4007-9ed6-3fc30f5dc422" import numpy as np e = np.array ([[1,2,3],[4,5,6],[7,8,9]]) print(e) e = np.diagonal ([[1,2,3],[4,5,6],[7,8,9]]) print(e) # + colab={"base_uri": "https://localhost:8080/"} id="cXuHfgmP6bLZ" outputId="fc4a49e9-4211-4c86-dd22-5c8947bfed16" import numpy as np f=np.eye(3) print(f) # + colab={"base_uri": "https://localhost:8080/"} id="mVWWmJTz6msX" outputId="a740b66b-d94e-49ca-dc85-fb0715382976" import numpy as np g = np.zeros((3,3)) print(g) # + colab={"base_uri": "https://localhost:8080/"} id="yCUTc7Lx61W_" outputId="174be243-2cd4-411a-8649-b233f3b1eca7" import numpy as np h = np.empty((0,12)) print(h) # + colab={"base_uri": "https://localhost:8080/"} id="CwXQmULh7FAC" outputId="27121a3a-8074-4d99-ed8e-4a9131b57ece" import numpy as np d = np.full ((3,3),7) print(d+d) # + colab={"base_uri": "https://localhost:8080/"} id="Cu_AaKTb7WWW" outputId="717f1452-a46d-4d7d-97be-099a501d1ebd" import numpy as np d = np.full ((3,3),7) print(d-d) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Mixup Deep Dive # > What is Mixup Data Augmentation really? # # - toc: true # - badges: true # - comments: true # - author: # - categories: [Computer Vision, Data Augmentation] # # Intro # # In this post we are going to dive into what Mixup is. Mixup is a very powerful data augmentation tool that is super helpful tool to have in your toolbox, especially when you don't have enough data and are overfitting. # # The goal of this post is not to show you the intricacies of training a model using mixup - that will be reserved for a future post. The goal of this post is to communicate an intuitive understanding of what mixup is and why it works. If you don't know what the tool is, it's impossible to have good intuition on how and when to use it. # # We will be using the Pets Dataset to demonstrate this. # # >Bonus Challenge: As you go through each step, think about what other kinds of data you may be able to apply these concepts to. Could you apply these transformations to NLP Embeddings? Could you apply these transformations to Tabular Data? 
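# Before setting anything up, here is a minimal, self-contained sketch of the core mixup
# operation applied to two made-up tabular feature vectors and their one-hot labels
# (a nudge toward the bonus challenge above). The feature values, the assumed 5-class label
# layout, and the 0.6/0.4 weights are purely illustrative; the real Pets example follows below.

# +
import torch

# Two hypothetical feature vectors (rows of tabular data, embeddings, or flattened pixels)
x1 = torch.tensor([0.2, 1.5, -0.3, 0.8])
x2 = torch.tensor([1.0, -0.5, 0.4, 0.1])

# Their one-hot labels for an assumed 5-class problem
y1 = torch.tensor([1., 0., 0., 0., 0.])
y2 = torch.tensor([0., 0., 1., 0., 0.])

# Mixup blends the inputs and the labels with the same weight
lam = 0.6                          # mixing weight, chosen arbitrarily here
x_mix = lam * x1 + (1 - lam) * x2
y_mix = lam * y1 + (1 - lam) * y2

print(x_mix)   # element-wise blend of the two feature vectors
print(y_mix)   # tensor([0.6000, 0.0000, 0.4000, 0.0000, 0.0000])
# -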
# # Setup # # ### Get Libraries/Data from fastai2.data.external import * from fastai2.vision.all import * from PIL import Image import matplotlib.pyplot as plt from pylab import rcParams from functools import partial,update_wrapper # + seed = 42 # Download and get path for dataseet path = untar_data(URLs.PETS) #Sample dataset from fastai2 path.ls() # - # ### Helper Functions def plot_images(imgs): rcParams['figure.figsize'] = 10, 20 imgs_len = len(imgs) for x in range(0,imgs_len): plt.subplot(1,imgs_len,x+1) plt.imshow(imgs[x]) # ### Data Setup # # Data Blocks and data loaders are convenient ways that fastai helps us manage an load data. There is a lot going on in DataBlock so I am going to break it down piece by piece. # # # #### DataBlock # pets = DataBlock( blocks = (ImageBlock, CategoryBlock), get_items = get_image_files, splitter= RandomSplitter(valid_pct = 0.2, seed=seed), get_y= using_attr(RegexLabeller(r'(.+)_\d+.jpg$'),'name'), item_tfms=Resize(460), batch_tfms=aug_transforms(min_scale = 0.9,size=224) ) # #### dataloader # # The dataloader is what we will actually interact with. In the DataBlock we defined lots of things we need to do to get and transform images, but not where to get them from. We define that in the dataloade dls = pets.dataloaders(path/"images") # # Mixup Explained # # So what is Mixup really? To undestand that, we are first going to look at a couple pictures and understand what a Neural network would normally see an image then use the same images and apply mixup after that. To say that another way. we want to understand the inputs and the outputs. To say that another way, we want to know our xs and our ys # # ### x: No Mixup # # Let's use 2 images as an example. I have plotted them below. im1 = tensor(Image.open((path/'images').ls()[8]).resize((500,371))).float()/255; im2 = tensor(Image.open((path/'images').ls()[6]).resize((500,371))).float()/255; plot_images([im1,im2]) # Great, so the inputs are the pictures. What are the outputs? Well the output is going to be what breed they are. Let's see what breed they are. (path/'images').ls()[8],(path/'images').ls()[6] # Ok we can see in the file name that the dog is a leonberger and the cat is a ragdoll. So now we need to translate that into the one-hot encoded matrix for our model to predict. Looking at dls.vocab gives us all the class names. dls.vocab # ### y: No Mixup # Let's define y for these 2 images. In a normal scenario, we have 1 column per class. When looking at the vocab above we saw that there were 37 classes. All of them will be 0 except the target. # # Let's start by figuring out which column is the target (ie leonberger and ragdoll). Then we just need a tensor of length 37 that is all zeros except that position which will be a 1. list(dls.vocab).index('leonberger'),list(dls.vocab).index('Ragdoll') # + # 37 classes long, all 0 except position 25 which represents leonberger and is 1 y_leonberger = tensor([0]*25+[1]+[0]*(37-26)) # 37 classes long, all 0 except position 8 which represents Ragdoll and is 1 y_Ragdoll = tensor([0]*8+[1]+[0]*(37-9)) print(y_leonberger) print(y_Ragdoll) # - # Great! We have our images that go in, and our output we want to predict. This is what a normal neural network is going to try to predict. Let's see whats different if we use these 2 images with the Mixup data Augmentation # ### x: Yes Mixup # # For the images, we are going to apply an augmentation. What Mixup does, it really mixing up 2 images together. The name is really exactly what it is doing. 
Let's apply mixup to an image and see what I mean. # # Let's take a *mix* of the 2 images. We will take 40% of the first image, and 60% of the second image and plot them. We are doing this by multiplying the actual pixel values. # # For example, If the pixel 1 value from image 1 * .4 + pixel 1 value from image 2 * .6 and that will equal pixel 1 value in my new image. Take a look at the third image and you can see it really does have a bit of each image in there. im_mixup = im1*.6+im2*.4 plot_images([im1,im2,im_mixup]) # ### y: Yes Mixup # # So now we have our new augmented image with mixup. Clearly it's not really fair to call it 100% of either class. In fact it's 60% of one class and 40% of the other. So how does our Y work? # # Well, we already have our ys when they are 100% of either class, so lets just take 60% of one + 40% of the other exactly like we did for our images. That should give us an appropriate label. # + # 37 classes long, all 0 except position 25 which represents leonberger and is 1 y_leonberger = tensor([0]*25+[1]+[0]*(37-26)) # 37 classes long, all 0 except position 8 which represents Ragdoll and is 1 y_Ragdoll = tensor([0]*8+[1]+[0]*(37-9)) y_mixup = y_leonberger*.6+y_Ragdoll*.4 print(y_leonberger) print(y_Ragdoll) print(y_mixup) # - # ### What weights? # # Here I took 60% of one image and 40% of the other. In reality you could to a 90/10 split. Or a 99/1 split. Or a 50/50 split. # # I picked relatively close weights so it's easy to see, but these weights are something to play with and tune when building your model. # # FastAI mixup # # Applying the basic Mixup in fastai is super easy. Here's how you can create a CNN using Mixup. learn = cnn_learner(dls,resnet34,cbs=MixUp) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Plotting the Climate Data by AEZ and Month # # This code comes from https://github.com/digitalearthafrica/thematic-layers/tree/main/CHPclim rainfall_AEZ.py and the climate data were retrieved using chpclim_download_and_cog.sh import sys import xarray as xr import geopandas as gpd from rasterio.features import rasterize from affine import Affine import numpy as np from matplotlib import pyplot as plt import pandas as pd # %matplotlib inline # ## Plotting the valid/clear obersvations valid_pts = {} def sort_for_plot(aez_name): CEO = f'../Results/WOfS_Assessment/Point_Based/{aez_name}_S2_WOfS.csv' input_data = pd.read_csv(CEO,delimiter=",") input_data=input_data.drop(['Unnamed: 0'], axis=1) input_data['CL_OBS_count'] = input_data.groupby('MONTH')['CLEAR_OBS'].transform('count') sorted_data = input_data.sort_values(['MONTH']) for_plot = sorted_data[~sorted_data.MONTH.duplicated(keep='first')] valid_pts[aez_name] = for_plot.CL_OBS_count.values sort_for_plot('Indian_ocean') sort_for_plot('Central') sort_for_plot('Sahel') sort_for_plot('Southern') sort_for_plot('Northern') sort_for_plot('Eastern') sort_for_plot('Western') valid_pts df_valid = pd.DataFrame.from_dict(valid_pts) df_valid.to_csv('../Results/WOfS_Assessment/Point_Based/Africa_s2_monthly_valid.csv') months = np.arange(1,13) plt.plot(months, valid_pts['Central'], label='Central') plt.plot(months, valid_pts['Eastern'], label='Eastern') plt.plot(months, valid_pts['Indian_ocean'], label='Indian Ocean') plt.plot(months, valid_pts['Northern'], label='Northern') plt.plot(months, valid_pts['Sahel'], label='Sahel') plt.plot(months, 
valid_pts['Southern'], label='Southern') plt.plot(months, valid_pts['Western'], label='Western') plt.ylim(0,450) plt.legend(ncol=3, loc='upper center', borderaxespad=0.) plt.title('S2 Valid points accross AEZs') plt.xlabel('Months') plt.ylabel('Number of Valid Points') plt.savefig('All_AEZ_valid_S2.png') plt.plot(months, valid_pts['Central']/sum(valid_pts['Central']), label='Central') plt.plot(months, valid_pts['Eastern']/sum(valid_pts['Eastern']), label='Eastern') plt.plot(months, valid_pts['IndianOcean']/sum(valid_pts['IndianOcean']), label='Indian Ocean') plt.plot(months, valid_pts['Northern']/sum(valid_pts['Northern']), label='Northern') plt.plot(months, valid_pts['Sahel']/sum(valid_pts['Sahel']), label='Sahel') plt.plot(months, valid_pts['Southern']/sum(valid_pts['Southern']), label='Southern') plt.plot(months, valid_pts['Western']/sum(valid_pts['Western']), label='Western') plt.ylim(0,0.25) plt.legend(ncol=3, loc='upper center', borderaxespad=0.) plt.title('Normalised Valid points accross AEZs') plt.xlabel('Months') plt.ylabel('Normalised Number of Valid Points') plt.savefig('All_AEZ_valid_norm.png') plt.plot(months, valid_pts['Central']/500, label='Central') plt.plot(months, valid_pts['Eastern']/500, label='Eastern') plt.plot(months, valid_pts['Indian_ocean']/300, label='Indian Ocean') plt.plot(months, valid_pts['Northern']/300, label='Northern') plt.plot(months, valid_pts['Sahel']/300, label='Sahel') plt.plot(months, valid_pts['Southern']/500, label='Southern') plt.plot(months, valid_pts['Western']/500, label='Western') plt.ylim(0,1.2) plt.legend(ncol=4, loc='upper center', borderaxespad=0.) plt.title('S2 Normalised Valid points accross AEZs') plt.xlabel('Months') plt.ylabel('Normalised Number of Valid Points') plt.savefig('All_AEZ_valid_total_norm_S2.png') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: RL # language: python # name: rl # --- # + import pandas as pd import numpy as np import pickle import itertools import os import random import math import time import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from collections import deque from tensorflow.keras.optimizers import Adam from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense, Conv1D, MaxPooling1D, Flatten, concatenate, Conv2D, MaxPooling2D from libs.utils import * from libs.generate_boxes import * # + os.environ['TF_CPP_MIN_LOG_LEVEL'] = '0' tf.get_logger().setLevel('INFO') tf.keras.backend.floatx() plt.style.use('fivethirtyeight') plt.rcParams['figure.figsize'] = (20,10) # - N_MDD = 32 boxes, gt_pos = generation_3dbox_random(case_size=[[20,20,20]],min_s=1, N_mdd = N_MDD) boxes = boxes[0] # + K = 3 n_candidates = 4 num_max_boxes = len(boxes) num_max_remain = num_max_boxes - K print(num_max_boxes, num_max_remain) # - env = Bpp3DEnv() env.reset() env.container.shape, env.container_h.shape boxes_all = np.array(boxes).copy() r_boxes = boxes_all.copy() k = min(K, len(r_boxes)) k box = r_boxes[0] r_list = [] for i in range(32): r_list.append(box) r_list = np.array(r_list) r_list selected = cbn_select_boxes(r_list[:n_candidates],k) s_order = get_selected_order(selected, k) s_order, selected # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tutorial N: Title # # **Filled notebook:** 
# [![View on Github](https://img.shields.io/static/v1.svg?logo=github&label=Repo&message=View%20On%20Github&color=lightgrey)](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/DL2/template/TemplateNotebook.ipynb) # [![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/DL2/template/TemplateNotebook.ipynb) # **Pre-trained models:** # [![View files on Github](https://img.shields.io/static/v1.svg?logo=github&label=Repo&message=View%20On%20Github&color=lightgrey)](https://github.com/phlippe/saved_models/tree/main/DL2/template/) # **Recordings:** # [![YouTube - Part N](https://img.shields.io/static/v1.svg?logo=youtube&label=YouTube&message=Part%20N&color=red)](https://youtu.be/waVZDFR-06U) # **Authors:** # Your name here # __TODOs:__ # # * Update the links for the filled notebook (both github and collab) to your new notebook # * Update the link for the saved models # * Update the link for the YouTube recording if you have any. If you want to upload one to the UvA DLC YouTube account, you can contact Phillip. # * Fill in the author names # Here, you are supposed to add some intro about the topic. Give a short abstract motivating the tutorial, and then detail what will be done. It is good to have pictures here as well. If you add images, make sure to use SVGs for best resolution, and put them in the same folder as your notebook. An example is given below (use any HTML editing you like). # #
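# For instance, a centered SVG can be embedded with plain HTML along these lines (the file
# name here is just a placeholder for your own figure):
#
# `<center width="100%"><img src="my_figure.svg" width="350px" style="padding: 10px"></center>`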
# # The next cell is where you import all your packages that you need. In case you have non-standard packages, make sure to install them to make it executable on GoogleColab (see for instance the PyTorch Lightning install). # + ## Standard libraries import os import numpy as np ## Imports for plotting import matplotlib.pyplot as plt plt.set_cmap('cividis') # %matplotlib inline from IPython.display import set_matplotlib_formats set_matplotlib_formats('svg', 'pdf') # For export import matplotlib matplotlib.rcParams['lines.linewidth'] = 2.0 import seaborn as sns sns.set() ## tqdm for loading bars from tqdm.notebook import tqdm ## PyTorch import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.data as data import torch.optim as optim ## Torchvision (TODO: ONLY NEEDED FOR VISION-BASED DATASETS) import torchvision from torchvision import transforms # PyTorch Lightning try: import pytorch_lightning as pl except ModuleNotFoundError: # Google Colab does not have PyTorch Lightning installed by default. Hence, we do it here if necessary # !pip install --quiet pytorch-lightning>=1.4 import pytorch_lightning as pl from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint # Import tensorboard (TODO: REMOVE IF YOU DO NOT WANT TO RUN TENSORBOARDS INTERACTIVELY) # %load_ext tensorboard # Path to the folder where the datasets are/should be downloaded (e.g. CIFAR10) DATASET_PATH = "../data" # Path to the folder where the pretrained models are saved (TODO: UPDATE LINK BELOW TO A FOLDER WITH THE NAME OF YOUR NOTEBOOK FOLDER) CHECKPOINT_PATH = "../../saved_models/DL2/template" # Setting the seed pl.seed_everything(42) # Ensure that all operations are deterministic on GPU (if used) for reproducibility torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu") print("Device:", device) # - # You will likely have some pretrained models that you want to share with the students, and download when running on GoogleColab. You can do this with the cell below. If you don't have any pretrained models, you can remove the cell. # + import urllib.request from urllib.error import HTTPError # Github URL where saved models are stored for this tutorial (TODO: UPDATE URL BELOW TO YOUR NOTEBOOK FOLDER) base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/DL2/template/" # Files to download pretrained_files = [] # (TODO: ADD A LIST OF STRINGS THAT ARE THE FILES YOU WANT TO DOWNLOAD. PATHS WITH RESPECT TO BASE_URL) # Create checkpoint path if it doesn't exist yet os.makedirs(CHECKPOINT_PATH, exist_ok=True) # For each file, check whether it already exists. If not, try downloading it. for file_name in pretrained_files: file_path = os.path.join(CHECKPOINT_PATH, file_name) if "/" in file_name: os.makedirs(file_path.rsplit("/",1)[0], exist_ok=True) if not os.path.isfile(file_path): file_url = base_url + file_name print(f"Downloading {file_url}...") try: urllib.request.urlretrieve(file_url, file_path) except HTTPError as e: print("Something went wrong. Please try later again, or contact the author with the full output including the following error:\n", e) # - # ## My Tutorial Topic # # Start your notebook from here. Introduce the topics, go step by step, don't forget to explain the code, etc. # # You can make use of different heading levels, they will be shown as tabs on the RTD website. # ## Conclusion # # Give a conclusion and summary of the notebook. 
Give a retroview: what have the students learned from this notebook, what is there to further explore in this topic, anything critical to keep in mind? # # ### References # # Give a list of references, especially the papers that introduce the methods you implemented in this notebook. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from aide_design.play import* from aide_design import floc_model as floc from aide_design import cdc_functions as cdc from aide_design.unit_process_design.prefab import lfom_prefab_functional as lfom from pytexit import py2tex import math # # 1 L/s Plants in Parallel # # CHANCEUX # ## , , # # # AguaClara has been designing and building water treatment plants since 2005 in Honduras, 2013 in India, and 2017 in Nicaragua. It has been providing gravity powered water treatment systems to thousands of people in rural communities. However, there were populations that could not be targeted due to the technology only being scaled up from 6 L/s. For towns and rural areas with populations with smaller needs, AguaClara technologies were out of their reach. # # Recently a one liter per second (1 LPS) plant was developed based on traditional AguaClara technology, to bring sustainable water treatment to towns with populations of 300 people per plant. # # The first 1 LPS plant fabricated was sent to Cuatro Comunidades in Honduras, where a built in place plant already exists, and is currently operating without the filter attachment, also known as Enclosed Stacked Rapid Sand Filter (EStaRS). EStaRS is the last step in the 1 LPS plant processes before chlorination and completes the 4 step water treatment process: flocculation, sedimentation, filtration, and chlorination. # Having water treatment plants for smaller flow rates would increase AguaClara’s reach and allow it to help more people. Despite being in its initial stages, the demand for this technology is increasing. Three 1 LPS plants were recently ordered for a town that did not have water treatment. However, the implementation of 1 LPS plants is a problem that has not yet been solved. # # This project has stemmed from the possibility of implementing AguaClara technologies to be helpful in Puerto Rico’s post hurricane rebuild effort. The goal of this project is to assess whether the portable 1 L/s plant could be a viable option to help rural communities have safe drinking water. The project models multiple 1 L/s plants working in parallel to provide for the community and plans for the future when communities would need to add capacity. For this project, the team has set 6 L/s as the design constraint. We need experience building and deploying 1 LPS plants to determine the economics and ease of operation to compare to those of built in place plants. For example, if we need 12 L/s, it could still be reasonable to use the 1 LPS plants in parallel or select a 16 L/s built in place plant if more than 12 L/s is needed. Because the dividing line between the modular prefabricated 1 LPS plants and the build in place plants is unknown, the team chose 6 L/s because it is the smallest built in place plant capacity. 
# # Our model is based on the following:
# * Standardized modular designs for each plant (one plant has one EStaRS and one flocculator)
# * One entrance tank and chemical dosing controller
# * Entrance tank accommodates up to 6 L/s of flow
# * Coagulant/chlorine dosing adjusted to the flow by the operator
# * Parallel layout for convenience
# * Extendable shelter walls to add capacity using chain-link fencing
# * Manifolds connecting up to 3 plants (accounting for 3 L/s) from the sedimentation tanks to the ESTaRS and after filtration for chlorination (using Ts and fernco caps)
# * Manifolds to prevent flow to the other filters from being cut off if a filter needs to be backwashed and lacks enough flow
# * Equal flow to the filters and to chlorination from the manifolds
#
#
# Calculations follow below.

# ### Chemical Dosing Flow Rates
# Below are the functions for calculating the flow rates of the coagulant and chlorine from the target plant flow rate. Q_Plant and the concentrations of PACl and Cl can be set by the engineer; Q_Plant is set to 3 L/s in this sample calculation.
#
# Chlorination would ideally be done at the end of filtration, where the flow recombines, so that the operator only has to administer chlorine at one point. However, our drafts did not account for that and instead lack piping that unites the top and bottom 1 L/s plants. Only the 6 L/s draft reflects this optimal design for chlorination.

# +
# Using Q_Plant as the target variable; sample values for plant conditions are included below
Temperature = u.Quantity(20, u.degC)
Desired_PACl_Concentration = 3*u.mg/u.L
Desired_Cl_Concentration = 3*u.mg/u.L
C_stock_PACl = 70*u.gram/u.L
C_stock_Cl = 51.4*u.gram/u.L
NuPaCl = cdc.viscosity_kinematic_pacl(C_stock_PACl, Temperature)
RatioError = 0.1
KMinor = 2
Q_Plant = 3*u.L/u.s

def CDC(Q_Plant, Desired_PACl_Concentration, C_stock_PACl):
    Q_CDC = (Q_Plant*Desired_PACl_Concentration/C_stock_PACl).to(u.mL/u.s)
    return Q_CDC

def Chlorine_Dosing(Q_Plant, Desired_Cl_Concentration, C_stock_Cl):
    Q_Chlorine = (Q_Plant*Desired_Cl_Concentration/C_stock_Cl).to(u.mL/u.s)
    return Q_Chlorine

print('The flow rate of coagulant is ', CDC(Q_Plant, Desired_PACl_Concentration, C_stock_PACl).to(u.L/u.hour))
print('The flow rate of chlorine is ', Chlorine_Dosing(Q_Plant, Desired_Cl_Concentration, C_stock_Cl).to(u.L/u.hour))
# -

# ### SPACE CONSTRAINTS
#
# In the code below the team calculates the floor plan area. The X distance and Y distance are the length and width of the floor plan, respectively. The dimensions of the sedimentation tank, flocculator, and entrance tank are accounted for in this calculation.

# +
# Calculating the Y distance for the sedimentation tank
# Properties of the sedimentation tank
Sed_Tank_Diameter = 0.965*u.m
Length_Top_Half = 1.546*u.m  # see image for clearer understanding
# math.cos expects radians; the degree quantity is converted to radians when coerced to a float
Y_Sed_Top_Half = Length_Top_Half*math.cos(60*u.degrees)
print(Y_Sed_Top_Half)
Y_Sed_Total = Sed_Tank_Diameter+Y_Sed_Top_Half
print(Y_Sed_Total)
# -

# SED TANK: Based on the calculation above, the space the sedimentation tank takes up on the floor plan is found to be 1.738 m.
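# As a quick check on the two cells above, here is a unit-free version of the same arithmetic in plain Python (no aide_design/pint registry). The numbers are copied from the sample conditions above, so this is only a sanity-check sketch, not part of the design code.

# +
import math

# Chemical dosing: Q_chem = Q_plant * C_desired / C_stock
Q_plant_Ls = 3.0                        # plant flow, L/s
Q_pacl_Ls = Q_plant_Ls * 3e-3 / 70.0    # (3 mg/L desired) / (70 g/L stock)
Q_cl_Ls = Q_plant_Ls * 3e-3 / 51.4      # (3 mg/L desired) / (51.4 g/L stock)
print('PACl feed:', round(Q_pacl_Ls * 3600, 3), 'L/hr')      # ~0.463 L/hr
print('Chlorine feed:', round(Q_cl_Ls * 3600, 3), 'L/hr')    # ~0.630 L/hr

# Sedimentation tank footprint in the Y direction
Sed_Diameter_m = 0.965
Length_Top_Half_m = 1.546
Y_Sed_Total_m = Sed_Diameter_m + Length_Top_Half_m * math.cos(math.radians(60))
print('Sed tank Y footprint:', round(Y_Sed_Total_m, 3), 'm')  # ~1.738 m
# -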
# # ![Image of Sed Tank](https://lh3.googleusercontent.com/p1raanocowKirq5nuu7_R9j0QmrkL-ZwmXQDB14wbfIGYeLaCbObV6iNNueZ1jES-VAhtrglIQpijE-3NXkhAaf7QPtTA8-ZCnRwVS6KrUppkWc1fRt1o3i9St7kPAHEO6GfJLZ5MOQwFQ7_LVAZvPZgqBjYFNb_Z2uLJcNLPx1tU7U-lldvox8AlYMOBSN7OWYe1BmtJT22DjcVUJP5DpFWR8qqSjYq1TsKp-x7h8OCUXAPsgKq9qxD53wT1nSAi3mlLjpgQiDWWmybrcYl-Rn_2VW1vKiea5JeKxwFvslweNbWVXRw9CSccDQl1gus2v85EUV9KXvdXxYs1CDwA7BoYpHJ63Ke5wrCl0uxTt30yukV4eIQUxROVphnkclJZmmkB9IrGPOTdbJV-xUt8LEHWXMMw8aPcHFr8Jwx066n4w3K75lIpqPpY2t05KAxIoAhA_lYo38H40EQdRwdo1Bt-5EdLG0d8sov8SaQFeCokNPaNBSmg1OfXaLSwcfeq7U9Y73mf_viFSNPI1yX-A-VvhvakJi_KYQu3HLm6fD8NKtGapcdnO2ZazRWsegzb6Guv8_8w2hStHPuPxuk21IBs978RkZrYdd87q5qfZh8CEE2z7gBjkr-68vpDRCHqEqEAk3asCDMLlXcDZFsrv_U9Uyi09iDmxw=w798-h660-no) # # This is a picture of the sedimentation tank with dimensions showing the distance jutting out from the sedimentation tank. This distance of 0.773 m is added to the sedimentation tank diameter totalling 1.738 meters. # # ESTaRS: The dimensions of the ESTaRS are set and did not need to be calculated. A single manifold can collect water from the sed tanks and send it to the EStaRS. There will be valves on the manifold system installed before the entrance to the ESTaRS to allow for backwashing. These valves can be shut to allow for enough flow to provide backwashing capacity. There will be a manifold connecting flow after filtration to equate the flow for chlorination. # # FLOCCULATOR: We want symmetrical piping systems coming out of the flocculator. There is a flocculator for each plant so that available head going into the parallel sedimentation tanks will be the same. We will have an asymmetrical exit lauder system coming out of the sedimentation tanks going into the ESTaRS (diagram). # # ENTRANCE TANK: The entrance tank is set to be at the front of the plant. The focus of this project is to calculate the dimensions and design the plant. The entrance tank dimensions should be left to be designed in the future. An estimated dimension was set in the drawing included in this project. There will be a grit chamber included after the entrance tank. The traditional design for rapid mix that is used in AguaClara plants will be included in the entrance tank. # # CONTACT CHAMBER: A contact chamber is included after the entrance tank to ensure that the coagulant is mixed with all of the water before it separates into the multiple treatment trains. Like the entrance tank, contact chamber dimensions should be left to be designed in the future. An estimated dimension was set in the drawing included in this project. # # WOODEN PLATFORM: The wooden platform is 4m long, 0.8m wide, and is 1.4 meters high allowing for the operator to be able to access the top of the sedimentation tank, flocculator, and ESTaRS. It would be placed in between every sedimentation tank. In the case of only a single sedimentation tank it would go on the right of the tank because the plant expands to the right. 
# + #Spacing due to Entrance Tank and contact chamber #estimated values Space_ET=0.5*u.m CC_Diameter=0.4*u.m Space_Between_ET_CC=0.1*u.m Space_CC_ET=1*u.m #Spacing due to Manifold between contact chamber and flocculators Space_CC_Floc=1.116*u.m #Spacing due to Flocculator Space_Flocc_Sed=.1*u.m Space_Flocculator=0.972*u.m #Spacing due to the Manifold Space_Manifold=0.40*u.m #Spacing due to ESTaRS Space_ESTaRS=0.607*u.m #Spacing for ESTaRS Manifold to the wall Space_ESTaRS_Wall=0.962*u.m # - # The Y distance below 3 L/s is set to be as a sum of the total Y distance of the flocculator, sedimentation tank, and ESTaRS. An additional 2 meters of Y distance is added for operator access around the plant. The lengths between the sedimentation tank, flocculator and ESTarS were kept minimal but additional Y distance can be taken off between the sedimentation tank and ESTaRS. This is because the ESTaRS can hide under the sloping half of the sedimentation tank but this orientation would not account for the manifold drawn in the picture. # The total Y distance is calculated below. # + Y_Length_Top=(Space_CC_ET+Space_CC_Floc+Space_Flocc_Sed +Space_Flocculator+Y_Sed_Total+Space_Manifold+Space_ESTaRS+ Space_ESTaRS_Wall) Y_Length_Bottom=Y_Length_Top-0.488*u.m # - # Below are functions that can be used to design a plant based on the desired flow rate # + def X(Q): if Q>3*u.L/u.s: X_Distance_Bottom=X(Q-3*u.L/u.s) X_Distance_Top=6.9*u.m return(X_Distance_Top,X_Distance_Bottom) else: Q_Plant=Q.to(u.L/u.s) Extra_Space=2*u.m X_Distance=(Q_Plant.magnitude-1)*1*u.m+(Q_Plant.magnitude)*.965*u.m+Extra_Space return(X_Distance.to(u.m)) def Y(Q): if Q>3*u.L/u.s: return(Y_Length_Top+Y_Length_Bottom) else: return(Y_Length_Top) print(X(Q_Plant_2).to(u.m)) def Area_Plant(Q): if Q>3*u.L/u.s: X_Distance_Bottom=X(Q-3*u.L/u.s) Area_Bottom=X_Distance_Bottom*Y_Length_Bottom*((Q/u.L*u.s-3)) Area_Top=X(3*u.L/u.s)*Y_Length_Top Area_Total=Area_Top+Area_Bottom return(Area_Total) else: H_Distance=X(Q_Plant) Y_Distance=Y_Length_Top Area_Total=H_Distance*Y_Distance return(Area.to(u.m**2)) # - # ![Image of Sample 3L/s Plant](https://lh3.googleusercontent.com/V-YxrMCMg9_wiKeqH9gQ4EtY4nMhrIReoT_mRcyQ9MP-E2SBRwHoEHz55j_dOMjecPnmn3PmLlWadpa0eDVQdmpDdZX84Z0UaWprMG4Yx72_OlK1oBUFyUQTIjCvhZAMbhQst_67I-QBewCbw3U4udEvq9jg6X-v1kQ7HxMmdBz6U1sR8fkmLaLw28YRFf1DBNGg8Xay9QoMV4THIRQ68_N_UQjlfwKxohvU01u2ezVRPDHv6vCkO5J7LQ1doC_x5zEipy6xF6opSgBY2F3m9U3xYS6ivIkicvq4EpaT7P3dYuxUJ8QLgzxrR-IIzWouJFBXaqIKM83bqvlzdqJAvY9iIp9dvzkNLQwtlMJBzm-fleR2KFL88BWhH5Airzcpy40MZhUoI5hGL-u6vd3QLMGnqn4FrwWMhcy_ijQrTwdGPftokr5UAHxpliaPeCiWawKUeBmAR1sgf_TVb_PB70k9Zrx2nmwDvVZSxYiSeqW0oQJYPLRTZvOhNeTYikLbtSbK-6JuH4H0fZxDahCXFvzhMpbsj3VKi1QrBQVWF9uqqEiE3I1tBH6aelC4TZVA_kb_pXtxHzhIHQQ4_ScsISiqvHIP6ju7S4hXeioQXpKGXvjSs7vHzBH9tvR5DauC1qqlMWGQEP64HRbcO-Y0vQl2faImCoQxBuU=w693-h748-no) # # # This is a layout of a sample plant for three 1 L/s tanks running in parallel. Check bottom of document for additional drafts. # # A platform will be in between sedimentation tanks as a way for the operators to have access to the sedimentation tanks. The platform height will be 1.4m to allow the plant operators 0.5m to view the sedimentation tanks just as in the built in place plants. The operator access requirements influence the optimal plant layout because the operators movements and safety have to be considered. The manifold will be built underneath the platform for space efficiency. 
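# Returning to the layout functions in the cell above the figure: as written they would not run cleanly. The print call references `Q_Plant_2`, which does not appear to be defined anywhere in this notebook, and the final branch of `Area_Plant` reads the global `Q_Plant` instead of its argument and returns the name `Area`, which is never assigned. The sketch below restates the intended footprint logic with plain floats in metres (values taken from the spacing constants above); it is illustrative only, not a drop-in replacement for the aide_design version.

# +
import math

Y_LENGTH_TOP_M = 6.895                      # sum of the Y spacings above, incl. the 1.738 m sed tank
Y_LENGTH_BOTTOM_M = Y_LENGTH_TOP_M - 0.488  # bottom half reuses the existing entrance tank

def x_length_m(q_lps):
    """Length of one row of 1 L/s trains: 0.965 m per sed tank, 1 m between trains, 2 m of access."""
    n = int(math.ceil(q_lps))
    return (n - 1) * 1.0 + n * 0.965 + 2.0

def plant_area_m2(q_lps):
    """Floor area: a single row up to 3 L/s, then a mirrored bottom row for the remainder."""
    if q_lps <= 3:
        return x_length_m(q_lps) * Y_LENGTH_TOP_M
    top = x_length_m(3) * Y_LENGTH_TOP_M
    bottom = x_length_m(q_lps - 3) * Y_LENGTH_BOTTOM_M
    return top + bottom

for q in (1, 3, 6):
    print(q, 'L/s ->', round(plant_area_m2(q), 1), 'm^2')  # 6 L/s comes out near the 91.7 m^2 quoted below
# -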
The image below shows the wooden platform with 4m of length but could be truncated or extended to depending on the situation. The last image shows the platform in between the sedimentation tanks in the 3 L/s sample plant. # # ![Image of Wooden Platform](https://lh3.googleusercontent.com/1z5hX0-m1B3zu8efVM1sCZ4_IkL-zZpqJFyW8wcMnI4FnQNVpA-cC57XayAvhaNqeX5fpH7H31KV0E46_h9XWuKrct8dYFw1YcdQRzLuJQRzB9uYBqm7cJ8E1Cwsk2dtqlASuEu9SrVeaRRwKz6ttEpko4woo5tCJDY-744e6CsGU_Ss4fQb-Xc8SOIHl86VBWaTv5R6IRGxKMrU58pstrsaYvxzd0LG2epTFfYkEgBi2StSJVqlYlvyXniLC9UMYbTOqgfRQlKM6drWkyxH4zkIZK3YuMOkXjQjvdlBvdyViBQQEKGue9at4hg2gM6-_VarCpPgWKeg9WLX7d2BkguIu6hsqnfdElIlnnODLRNS7lDPMeX6OK0D1s2DrpBrAGIILWwT1vswlHxDBvz7JGK5a4NacDJANWJr9WUyImCx5lyQa5Shw4Y02yVH1KwfVLzq_JmcmchdRKY12eI6VIa-PRP777ntAiwB-aU94swZ63lghoL6ELIQ1jjycXs8QogpKqL_cXOve79vlZrf1eDaOFSH5094i7_RdfLGKkKpsKOUJv_5W2gWfmGk15NL0u0lscmNT71ICL60iogp_6SWpLpaQL750Q3cb7dIPcnoiQoDDnLOSmHXRgI_3YPHacvHp-ZPL-ITJD7L2mIjpSXJDepvpbOnZms=w833-h702-no) # # Wooden platforms would fill up the space in between sedimentation tanks to allow for operator access to the top of sedimentation tanks, ESTaRS and flocculators. # # ![Image of Wooden Platform in Sample 3 L/s Plant](https://lh3.googleusercontent.com/qJuHz_JPzzyHVT-xCt_v-iqY4ADhb3MMK7mCJLTLaW41OtpqxTsNYosQGst_COv1K6ojb0HuNCJy87puD8GbvgmAK6qk31jAZE-B5lZrt4cIilgY2taCjjmT5YrDO35204mTgYIVkFGOnqoONidU4TqVZXyMX6qAzh3Yz8gqHfLGzZCWf1IGPGn2io9NSeK31OPaehKmWOKKZEQ0whzr0LcN8qDvq8gbiP1ms5Z8YljsJYj6k0GPkInNAAxs_4uKk41O9bGZuV7HeeUifv0u3oLGMqcAJI8fc12fd2BWROfeT9MadiXsDLXdMsyXG6PI0DVfhG_TDjL3QTMt5WVY89q_Gy24bMAlzAPhqoot6WtSGfzpxic3ioT6zCZQT9_y3stvWZPZfAXZqw6wjNiLJKh9MqaG9J_nbzfssNr45diqTxGFavwMLB9tJ_jNKHVrfQLv6vBZXx4fAUsfqiLUC0rsSfgx0KRvFjvR8LHFCvf5L7ncpFOzdSGrQnFS9IumBSw_0r1H4R9zF6_3hxw2wIh-s-JE6l1jyDfaWzS3WL_xtn8kuyy1Q0ds1UQCArtXX2VJ8J9cKkLgrPHy_WapMyKWtgEJs9B0JnpdKGsQX2BFJ6ZYcUV_Tcrri2hyVjkAisn7q7CfJOhgE75qqM1dPnbTJ9k=w1275-h742-no) # # ### Adding Capacity # When adding capacity up to plant flow rates of 3 L/s, vertical distance will be constant so adding capacity will only change the horizontal distance of the plant. We define a set of flocculator, sedimentation tank, and EStARS as a 1 L/s plant. # # After the capacity of the plant reaches 3 L/s, additional 1 L/s plants will be added to the bottom half of the building, using a mirrored layout as the top half of the building. The only difference between the spacing is that the additions no longer need another entrance tank so the width of the bottom half of the plant is shorter than the width of the top half. This was done instead of simply increasing the length of the plant each time capacity was added because the length of the pipe between the contact chamber and the farthest flocculator would become increasingly large. This addition of major losses would cause different flow rates between the farther 1LPS plant and the closest one. 
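# To make the major-loss argument above concrete, here is a rough Darcy-Weisbach sketch in plain Python. The pipe diameter, friction factor, and run lengths are assumed illustrative values, not the notebook's aide_design calculation; the point is simply that friction head loss grows linearly with pipe length, so trains farther from the contact chamber would receive less flow if the runs kept getting longer.

# +
import math

def major_loss_m(L_m, Q_m3s=0.001, D_m=0.0762, f=0.025, g=9.81):
    """Darcy-Weisbach friction head loss: h_f = f * (L/D) * V**2 / (2*g)."""
    area = math.pi * D_m**2 / 4   # pipe cross-section, m^2
    v = Q_m3s / area              # mean velocity, m/s
    return f * (L_m / D_m) * v**2 / (2 * g)

for L in (2, 6, 10):              # assumed runs to the 1st, 2nd, and 3rd flocculator
    print(L, 'm of pipe ->', round(major_loss_m(L) * 1000, 1), 'mm of head loss')
# -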
# # The following function will only account for adding 1 L/s plants one at at a time # def Additional_Area(Q_Plant,Q_Extra): if (Q_Plant+Q_Extra>3*u.L/u.s): X_Distance_Extra=X(Q_Extra) Y_Distance_Extra=Y_Length_Bottom Area=(X_Distance_Extra*Y_Distance_Extra).to(u.m**2) return(Area) else: Q=Q_Extra.to(u.L/u.s) Horizontal=(Q_Extra.magnitude)*.965*u.m+(Q_Extra.magnitude)*1*u.m Vertical=5.512*u.m Extra_Area=Vertical*Horizontal print('Extra length that has to be added is '+ut.sig(Horizontal,4)) return(ut.sig(Extra_Area,2)) # ### ESTaRS # The total surface area required for the ESTaRS filter is simply a product of the area one ESTaRS requires and the flow rate coming into the plant. The surface area required for an ESTaRS was measured in the DeFrees lab. It is approximately a square meter. # + ESTaRS_SA=1*u.m**2/u.L def Area_ESTaRS(Q): Q_Plant=Q.to(u.L/u.s) Surface_Area_ESTaRS=(ESTaRS_SA.magnitude)*Q_Plant.magnitude return(Surface_Area_ESTaRS*u.m**2) print('Surface area required for the ESTaR(s) is',Area_ESTaRS(Q_Plant)) # - # ### Flocculators # The total surface area required for the ESTaRS filter is simply a product of the area one ESTaRS requires and the flow rate coming into the plant. The surface area required for an ESTaRS was measured in the DeFrees lab. It is approximately a square meter. Flocc_SA=0.972*u.m*0.536*u.m*u.L/u.s def Area_Flocc(Q): Q_Plant=Q.to(u.L/u.s) Surface_Area_Flocc=(Flocc_SA.magnitude)*Q_Plant.magnitude return(Surface_Area_Flocc*u.m**2) print('Surface area required for the flocculator(s) is',Area_Flocc(Q_Plant)) # ### Manifold Headloss Calculations # The amount of headloss has to be minimal so that the amount of head availible coming out of the exit launder is enough to drive water fast enough to fluidize the sand bed in ESTaRS. This fluidization is required for backwashing the filter. Since the calculation for 4 inch pipe has a headloss for less than 1mm we conclude that the manifold is economically feasible. Any futher increase in the diameter of the manifold would become increasingly expensive. # # + SDR = 26 SF=1.33 Q_Manifold = 1 * u.L/u.s #max flowrate for a manifold Q_UpperBound=SF*Q_Manifold L_pipe = 5*u.m #Length sed tank to it’s ESTaRS K_minor_bend=0.3 K_minor_branch=1 K_minor_contractions=1 K_minor_total= 2*(K_minor_bend+K_minor_branch+K_minor_contractions) # (two bends, two dividing branch, and two contractions) # The maximum viscosity will occur at the lowest temperature. T_crit = u.Quantity(10,u.degC) nu = pc.viscosity_kinematic(T_crit) e = 0.1 * u.mm Manifold_1LPS_ID=4*u.inch Headloss_Max=10*u.mm Manifold_Diam=pc.diam_pipe(Q_Manifold,Headloss_Max,L_pipe,nu,e,K_minor_total).to(u.inch) print(Manifold_Diam.to(u.inch)) print('The minimum pipe inner diameter is '+ ut.sig(Manifold_Diam,2)+'.') Manifold_ND = pipe.ND_SDR_available(Manifold_Diam,SDR) print('The nominal diameter of the manifold is '+ut.sig(Manifold_ND,2)+' ('+ut.sig(Manifold_ND.to(u.inch),2)+').') HLCheck = pc.headloss(Q_UpperBound,Manifold_1LPS_ID,L_pipe,nu,e,K_minor_total) print('The head loss is',HLCheck.to(u.mm)) # - # # Drafts of 1 L/s Plant to 6 L/s Plant # # ![Image of Sample 1 L/s Plant](https://lh3.googleusercontent.com/vOmHWVvkOfpIfaNClasDE0f0aB0mTcMdVX020OsMH476zHjly2ZctM0Izli8-2TR5K4XsMdtVjhlgBtj5lQx_RksotACGf2EoBNn_h0cVoaiY79hoXAN3y6CFv9a9zTb0FKIXCImh0rX4AKo=w366-h722-no) # # A sample 1 L/s plant. The red t's indicate where the piping system can be expanded to include more 1 L/s systems. 
Additionally the t would be covered by a fernco cap so that the pipe can be removed when the plant is adding capacity. See cell below for pictures of manifolds with caps. # # ![Image of Sample 2 L/s Plant](https://lh3.googleusercontent.com/680lSRhg_LyDPOdNU8rphmoZi8-uOlX-FyMNOYBDfAGUSdkLUfmyxHBtwzlk9P8osyvCKmsD5J53_wwjL2sJQ99d0s-o41ONMhG5Sg6rRgX-nVY2mL4_a7OPrVYcM7NILp16NOVLnutWSDSZC_eXVHqfZjCQPRdKTN5L4j76-vl58QsKJnf_kYJXzmVuNlDTj2sKC53uRsQXrmJFVgBdYjrGn7usUqR6VEB4_1u5wCm4fpzbTnmH7O6KdwkKn7xaaEUvwx-qqCEsStbxG7UgB2lvv9xqd5Kd9r7jr83mu4If-qZB0ZfycvAFOwGfvrRi8r-ZaAefnSX59TkxehfAA6qgCnSsCoXWi1L0crTP2B4XYCdsvG4DD8Gvp6A5G34JOSsnMj56WqVJ1e-NH8qPqv7uF0X5_C23LuE1258E8ig4bBPSbrxc-PVnB_i0PjXxwDofplAS4EYflwkUaCdD8pzDORSokk1RwP4KPRaFBmSJwnuW8xRJJCVVJ_l7UL1ioHGpYHvUBCvOrYVCxGNzmot0OKPWkoefK23XzEMRkkoGOQXSf_CuM7S9vADNr2J9UZoO3CF-hLUNvyEtJbPFYCMGMLEe4x_ZMAEDcOmYtcXPPVtUtxcvEMZnL5vYPHqS3y_qGrkoNAx1h8fMtN7Q9JKHKLQj8gzHJuE=w524-h746-no) # # The sample 2 L/s plant. The red t's indicate where the piping system can be expanded to include more 1 L/s plants. Additionally the t exit that isn't connected to any piping would be covered by a fernco cap so that the pipe can be removed when the plant is adding capacity. # # ![Image of Sample 3L/s Plant](https://lh3.googleusercontent.com/V-YxrMCMg9_wiKeqH9gQ4EtY4nMhrIReoT_mRcyQ9MP-E2SBRwHoEHz55j_dOMjecPnmn3PmLlWadpa0eDVQdmpDdZX84Z0UaWprMG4Yx72_OlK1oBUFyUQTIjCvhZAMbhQst_67I-QBewCbw3U4udEvq9jg6X-v1kQ7HxMmdBz6U1sR8fkmLaLw28YRFf1DBNGg8Xay9QoMV4THIRQ68_N_UQjlfwKxohvU01u2ezVRPDHv6vCkO5J7LQ1doC_x5zEipy6xF6opSgBY2F3m9U3xYS6ivIkicvq4EpaT7P3dYuxUJ8QLgzxrR-IIzWouJFBXaqIKM83bqvlzdqJAvY9iIp9dvzkNLQwtlMJBzm-fleR2KFL88BWhH5Airzcpy40MZhUoI5hGL-u6vd3QLMGnqn4FrwWMhcy_ijQrTwdGPftokr5UAHxpliaPeCiWawKUeBmAR1sgf_TVb_PB70k9Zrx2nmwDvVZSxYiSeqW0oQJYTYikLbtSbK-6JuH4H0fZxDahCXFvzhMpbsj3VKi1QrBQVWF9uqqEiE3I1tBH6aelC4TZVA_kb_pXtxHzhIHQQ4_ScsISiqvHIP6ju7S4hXeioQXpKGXvjSs7vHzBH9tvR5DauC1qqlMWGQEP64HRbcO-Y0vQl2faImCoQxBuU=w693-h748-no) # # The sample 3 L/s plant. There are now no more t's that can be extended because the manifold was designed for up to 3 1 L/s plants. Further capacities are added on the bottom half of the plant like in the next 3 pictures. # # ![Image of Sample 4 L/s Plant](https://lh3.googleusercontent.--t2xC7Qk6BbyV_kaspNQIvaohz5Ce39HTBvfqm2mvJXvX9aTZpiJ5tEbSWDOFBm9wfLMUIlqWjjOG5awFN3TTx05H8ifQA6A4f8GUoWUpONKZweWX-LT04nWI_nwZZ0phYZ76P59g6yVsBSTKQP8aMrQ6LN32jx5JCB2tnpJTLe9LBDw5m_g03F4VaXUWAG0xNeCjNK3D_QpZdBK5jI2VIf2GeQYX-udo_NOTKbqVm_dsACQiK4-k5qlu2GwiRehMr33L0RYLhyQ_j2pKablJ15yZkmxdYEvinjiV-fmTVMUdcSQVwPBRnUd5PmXZNsTQ9TY7oG2NJQ93VPen6XTj_ntP93QbtrRkM5Ido4Y9dS9PcN7gtnYRygF7AOFcYF_PZ_lQ6er2fwZGIabYNKGPTO9SYKW4bj4Azf0kDJVCWilaYgyAc1JSqYbeGSziqO4nbztYHebUhbnBm7Afd4fb-D_-Y4ixOn_02imeI10V39fN4TPGdwXLo1RWkY=w443-h753-no) # # The sample 4 L/s plant. Like in the 1 L/s plant the red t's indicate the start of the 2nd manifold system. 
# # ![Image of Sample 5 L/s Plant](https://lh3.googleusercontent.com/s9mGAe84mRq88k-YgrdpcIdBlsmHjJWG963OY5eRWHZJFD6vHgkXqqILnqltZTaZ7DaVCGke-8MuVgI5w6ACJFfJPII27ZqWm7SN5K6JuvazJk5wjWJ8F5EI1vZKiTvG5iGL13TJOjWcyN-k1Bqa0H05SnXakpP46Cx4bk3TvTBBSKwfLe2Vz_a6dOpwAWq_YXdXtJb0UZ7LKviJQrGTCWmmarRdiJE54ojmVmd5y0x-0Fdo2XbgTOk3x1BvDCQEMjgof16jFgM-mKyzOB8jBsNx4vJHNr_uikZt2viZ_hLiPG3OAJjt5ARDR7DrLPbvrWrpAfSAb2yb-bKIoZnr8rd5VEXPDDtQfZX1FYiKEgcgwYzbiV5npkt_1Wtdlh6i1Rbs24stxSWIhp_KCxb3g7soCQ5TbyS5kKjcR0Rctxq-_7K1eTz_LZ6gt7Zg1c3oYzf3eZM0UQvcbwpsSnSqXwCR3XMOySWJpmpAYRS74qpLN0-UpLJsK8A4_I_6cKn-5vDW2ppNbn-NlwoAP0CtA3ArWFd00IGFypk9Tqj-SFwx7q3LYoZi3CzdXHz0kLC5Fu-4oJxkTQVxV01HitC2tym4N730Bq9ucAPvMjSQ-nBqp74IInh8klJ8hDdKj9ibkeb87xDScKOOJaAP1d10BE__6DST00aUmCM=w476-h816-no) # # The sample 5 L/s plant. The length (x-direction) is extended by the diameter of a sedimentation tank and one meter like in the top half of the plant. However the width of the bottom half is slightly lower because it can use the already built entrance tank used for the first 3 L/s systems. # # ![Image of Sample 6 L/s Plant](https://lh3.googleusercontent.com/gZmKRSfcJqXeg5z8neBhowTt9uldRwsMmXfMRdP0RwobzmrpCgnU0ZQP8yBecykp-dJzJvziPApaYK0IgQDHY3ZGRfin-e_5S6CjD09Ur0IEzSJvAHsB1Nh3Df9OwN11aIPR7qcbNr7PP9-DLK0p66GZlaqYtC1xpB-Nm7e964ph-p2koh6Gx6A5_waWGPtwGPn3D1Zj23llXZY8dS9h6d8obv3pvtfHYdhtXm9_XQtcU8vPVPwLiycqsn1YRuJ1Es0VBroD1flT_vKR3SX4rhkCvBCpibOpw7Yd2FSG5WndORJ7emOx0hSgcsihHelSh_8qx_IONfa_p63TsOqz69LgtuO4Vw1Yc7kzW4MEfpEIaD23oL7G4hARl-MctaXZHbFQbz7cj7EpWO0HBSfe2w-xXecyQCYWfBU76JVQp4j0nIz_fVj7q7wmes-tx9H1cZS573zSE6qMmv4B_hS5x5ChcdlrrWXgmMliWmhAGj7nILc5jL1nONxysZ5pZJ6NI6WZHHgnOA15_jF8AZpVc0VZ1PP-hHppZ2Ti7H3X4A7ioRrgayQoEbnYeZnbJybN73cDDvv7T9cAARdc2lQyVXVrWsraI7RbWHRzXWoHe5URBEzZh9YwE_etDyoY20B12dl6JyfianC31YeHJpiafo5EteQkCZ2U2GA=w543-h814-no) # # The sample 6 L/s plant. The building is now at full capacity with an area of 91.74 m^2. The flow reunites to allow for chlorination to be administered at one point. # # # # # Manifold Drafts # # ![Sample Manifold with 1 L/s](https://lh3.googleusercontent.com/RsMmKy_4KN5gcDDGpQtLyy4DU08QMxCoa55mIlPH5sNCzlE5lbuBLXeiNG-D8eWGqPx7aQuHewRx8iyIg07FZxUl_CPBQo8oDxFkqwK_5dbitdD6ZpyDqLALo6w_XqtnwcJ8YFjkLjiY30cGOTRMpkvdLv3fPbbsUOrKq3PnU7w8AtG-iwwAqX6Y_biCx_unSFe6DrAJHKYSa9kF0E13ysErqHZ5F252uaT6rm_EHOM7_LDsjMQnvh69bTn7jYacVkoVI6TPNRyMXuH9s9-n97VvgVIiZrrhUMedfjERZbs5Vnq9SD4h-xXfiHsNXWFa0qMd1gHTSGggZhkc3881d9TLfPSe09Fj5M_rDaPoOnfdeYHSTWSeMwVvpukVVAj04bENymvrmlv096DQyh2-CSClvaoSSPpezWHQUhpv-1cqZMla1PEJxcLOK5lkVg1crnb0lLs8FOxerstXBIo6LgNgHlqSNodcnemaMJcNkQDgFfoTAv1oILlIpitDM9thDsw27zD2pLkxzHGWG9TTa8A8G40lT1ZZ4EZ4KBkD-DcKRkR7LanbbUnEO7Uxn97Wigca9h_yfflzxt1x16Yf39ZtwbxjL8s5X6_MuMBAR6WFT-TPkg4tPztkkLSyCoLhtEcSqGDTCFEt5-9f-Rs6Jog-TtIUEHTN_dU=w692-h724-no) # # The manifold with 1 L/s. The elbow at the top is connected to the exit launder of the sedimentation tank. The flow goes from the exit launder into the reducer which is not yet connected in the draft to the ESTaRS. The removeable fernco cap that allows for further expansions of the manifold pipe system can be seen in the picture. 
# # ![Sample Manifold with 2 L/s](https://lh3.googleusercontent.com/OvySUVwaHjK581j2MFYvGfqTaz-ixQVdzZEk3Y7kxpsnl6EBxxhdckYc74NOiP6h_CKETRMY6gUuYnMz1BEWQtpWRHw3GZQXJ1BVriyIkite0n9qml6h3Z-gv6cyhtfCGqFHKUsOjKvQaMnmkn0PePuyQnmtRc-fozf8HzVTbvyQszkKp_MX16noVLBk8hHa2e-W1tCWYDJhjOrV2Mf8B3br2MEQvfFZ4t4BJZ7wrqIXAES2RwWAJjDf1fRSXbzSiaWAfgpGurip1rDzje2KthNzTaYPMqQrkwASF6K12i1KRnmasSrxkLquQLYuKcWfIw-X1ZafWt5NMQFBrbRE6KV1RnYT0czOLXLCrWREs21g0nPBLAxd-Z37muaSxkoI3M14cYQp96KDHpUHGSlLCRxewpOPkixNDs1OrvivKa7khHQEO08EijTakXfxGGuzZQIIgdbA-HruRrHslWdxE6T_Idw8auDxv555-FjJEaA0RH_8miInQ1g0-vthxgREWLTu4T1cgpXW92YdQRQJ5vLoUtIUo_QU8XsCZC2VUe3x32rFVOBm6--cchodCuUyHLvsszR1QFV_sXMg1yn7N6Czezaqev6OEa7ox8J69LaoWLSNZtlHicjD-o2Z2Su1d-S7EoLvObU7baGg9dAbu_WR9d2ZeJgaknc=w873-h765-no) # # The manifold with 2 L/s. Again a fernco cap is used to allow for future expansions up to 3 L/s. # # # ![Sample Manifold with 3 L/s](https://lh3.googleusercontent.com/15gWXhAqF-rKkFNxKsdT2Vv7HzsEg4jrewnjyNCiOw0qhEa66_wmgNYj7apCWUyZuNO_M841Enx9KVYAO8F73EX_QvwhcRgodkKeCVU_H9xDcrhO9w38irozRd8bSm_oiJeB_Q3FIBRIHVcyqm1g0g1qGi28_MP0DGBs6Boi9Qy7aSP96tSrnnR1jNvq8aNRIzEUUp6vgmHIQ4pWvG3utB7S2cMhqztagFBP1WRJ21-cK6E6lkuhtIzMu-uAMQVZkHMJlrAN8nZEc4Jroyk_qa56qkPQQs4wXgvOgmzYySRJlwHk8bEiT-1xMxqdM3Cd4hNzpnIAVwsWutuf0gmqSyS9jqflYnGKVVMYLAJZtgP7oOBWogwiReiH1q_LCPXorHAJyXjqePX-ojDn1QUuFWOHOuDvvmCqsqo_dmpCFSAToJYXOitX2FOPVNInu4eBsud9zV0DXRNWqKMd07lQneEJStoeDxqNGTjhEjCn4fWc0kbgybwgAcgpotTGCdYiibIjKCPAB1peb5E5wnUawTUkoUy18tZe5pIxcZUwAXp-tftufN8xIsO6VUtQVb5yODx2cOz_xJvg6T6JqawEeoJFMSRUBKl7lTaSrnv2M9PqVK8RjLtDX1jByobJ1BJEDlUCGw6Vjr_VqzWLyTozr3AJ0xacVb55KxY=w1063-h749-no) # # The manifold with 3 L/s. Here there are no fernco caps because the manifold is designed for up to connections with three 1 L/s plants. # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="_jbu5q7jgdCV" # # Playing with Asteroid # # > "Source separation with Asteroid" # # - toc: false # - branch: master # - hidden: true # - categories: [asteroid, separation] # # + id="vJGyKTQJqdzq" # %%capture # !pip install -U asteroid # + id="f5u8wRm3GYFr" # %%capture # !wget https://podcast.rasset.ie/podcasts/audio/2021/0626/20210626_rteraidion-bailiuchanbhairbre-bailichnbh_c21974765_21975131_232_.mp3 # + id="_1FfyTTCePSR" # %%capture # !ffmpeg -i /content/20210626_rteraidion-bailiuchanbhairbre-bailichnbh_c21974765_21975131_232_.mp3 -acodec pcm_s16le -ac 1 -ar 16000 /content/20210626_rteraidion-bailiuchanbhairbre-bailichnbh_c21974765_21975131_232_.wav # + id="go1wJBYJnsIx" colab={"base_uri": "https://localhost:8080/"} outputId="09c3538b-f5c3-4dd0-d83c-721be004bc12" # !asteroid-infer "mpariente/ConvTasNet_WHAM!_sepclean" --files /content/20210626_rteraidion-bailiuchanbhairbre-bailichnbh_c21974765_21975131_232_.wav --resample --ola-window 8000 --ola-hop 4000 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # # EEG forward operator with a template MRI # ======================================== # # This tutorial explains how to compute the forward operator from EEG data # using the standard template MRI subject ``fsaverage``. # # .. important:: Source reconstruction without an individual T1 MRI from the # subject will be less accurate. 
Do not over interpret
#              activity locations which can be off by multiple centimeters.
#
# .. note:: `plot_montage` shows all the standard montages in MNE-Python.
#
# .. contents:: This tutorial covers:
#    :local:
# :depth: 2 # # + # Authors: <> # <> # # License: BSD Style. import os.path as op import mne from mne.datasets import eegbci from mne.datasets import fetch_fsaverage # Download fsaverage files fs_dir = fetch_fsaverage(verbose=True) subjects_dir = op.dirname(fs_dir) # The files live in: subject = 'fsaverage' trans = op.join(fs_dir, 'bem', 'fsaverage-trans.fif') src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif') bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif') # - # Load the data # ------------- # # We use here EEG data from the BCI dataset. # # # + raw_fname, = eegbci.load_data(subject=1, runs=[6]) raw = mne.io.read_raw_edf(raw_fname, preload=True) # Clean channel names to be able to use a standard 1005 montage ch_names = [c.replace('.', '') for c in raw.ch_names] raw.rename_channels({old: new for old, new in zip(raw.ch_names, ch_names)}) # Read and set the EEG electrode locations montage = mne.channels.read_montage('standard_1005', ch_names=raw.ch_names, transform=True) raw.set_montage(montage) raw.set_eeg_reference(projection=True) # needed for inverse modeling # Check that the locations of EEG electrodes is correct with respect to MRI mne.viz.plot_alignment( raw.info, src=src, eeg=['original', 'projected'], trans=trans, dig=True) # - # Setup source space and compute forward # -------------------------------------- # # # + fwd = mne.make_forward_solution(raw.info, trans=trans, src=src, bem=bem, eeg=True, mindist=5.0, n_jobs=1) print(fwd) # for illustration purposes use fwd to compute the sensitivity map eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed') eeg_map.plot(time_label='EEG sensitivity', subjects_dir=subjects_dir, clim=dict(lims=[5, 50, 100])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Recommender Systems using Affinity Analysis #
# Here we will look at affinity analysis that determines when objects occur # frequently together. This is also called market basket analysis, after one of # the use cases of determining when items are purchased together frequently. # In this example, we wish to work out when # two movies are recommended by the same reviewers. # # ### Affinity analysis # Affinity analysis is the task of determining when objects are used in similar # ways. The data for affinity analysis is often described in the form of a # transaction. Intuitively, this comes from a transaction at a store—determining # when objects are purchased together. # # The classic algorithm for affinity analysis is called the Apriori algorithm. It addresses # the exponential problem of creating sets of items that occur frequently within a # database, called frequent itemsets. Once these frequent itemsets are discovered, # creating association rules is straightforward. # # #### Apriori algorithm # # First, we ensure that a rule # has sufficient support within the dataset. Defining a minimum support level is the # key parameter for Apriori. To build a frequent itemset, for an itemset (A, B) to have a # support of at least 30, both A and B must occur at least 30 times in the database. This # property extends to larger sets as well. For an itemset (A, B, C, D) to be considered # frequent, the set (A, B, C) must also be frequent (as must D). # These frequent itemsets can be built up and possible itemsets that are not frequent # (of which there are many) will never be tested. This saves significant time in testing # new rules. # Other example algorithms for affinity analysis include the Eclat and FP-growth # algorithms. There are many improvements to these algorithms in the data mining # literature that further improve the efficiency of the method. In this chapter, we will # focus on the basic Apriori algorithm. # # #### Choosing parameters # # To perform association rule mining for affinity analysis, we first use the Apriori # to generate frequent itemsets. Next, we create association rules (for example, if a # person recommended movie X, they would also recommend movie Y) by testing # combinations of premises and conclusions within those frequent itemsets. # For the first stage, the Apriori algorithm needs a value for the minimum support # that an itemset needs to be considered frequent. Any itemsets with less support will # not be considered. Setting this minimum support too low will cause Apriori to test a # larger number of itemsets, slowing the algorithm down. Setting it too high will result # in fewer itemsets being considered frequent. # # In the second stage, after the frequent itemsets have been discovered, association # rules are tested based on their confidence. We could choose a minimum confidence # level, a number of rules to return, or simply return all of them and let the user decide # what to do with them. # # Here, we will return only rules above a given confidence level. Therefore, # we need to set our minimum confidence level. Setting this too low will result in rules # that have a high support, but are not very accurate. Setting this higher will result in # only more accurate rules being returned, but with fewer rules being discovered. # ### The movie recommendation problem # # Product recommendation is big business. Online stores use it to up-sell to # customers by recommending other products that they could buy. Making better # recommendations leads to better sales. 
When online shopping is selling to millions # of customers every year, there is a lot of potential money to be made by selling more # items to these customers. # # Product recommendations have been researched for many years; however, the field # gained a significant boost when Netflix ran their Netflix Prize between 2007 and # 2009. This competition aimed to determine if anyone can predict a user's rating of a # film better than Netflix was currently doing. The prize went to a team that was just # over 10 percent better than the current solution. While this may not seem like a large # improvement, such an improvement would net millions to Netflix in revenue from # better movie recommendations. # # ### Obtaining the dataset # # Since the inception of the Netflix Prize, Grouplens, a research group at the University # of Minnesota, has released several datasets that are often used for testing algorithms # in this area. They have released several versions of a movie rating dataset, which # have different sizes. There is a version with 100,000 reviews, one with 1 million # reviews and one with 10 million reviews. # The datasets are available from http://grouplens.org/datasets/movielens/ # and the dataset we are going to use in this chapter is the MovieLens 1 million # dataset. Download this dataset and unzip it in your data folder. We then load the dataset using Pandas. The MovieLens dataset is in a good shape; however, there are some changes from the # default options in pandas.read_csv that we need to make. To start with, the data is # separated by tabs, not commas. Next, there is no heading line. This means the first # line in the file is actually data and we need to manually set the column names. When loading the file, we set the delimiter parameter to the tab character, tell pandas # not to read the first row as the header (with header=None), and set the column # names. ratings_filename = "data/ml-100k/u.data" import pandas as pd all_ratings = pd.read_csv(ratings_filename, delimiter="\t", header=None, names = ["UserID", "MovieID", "Rating", "Datetime"]) all_ratings["Datetime"] = pd.to_datetime(all_ratings['Datetime'],unit='s') all_ratings[:5] # Sparse data formats: # # This dataset is in a sparse format. Each row can be thought of as a cell in a large # feature matrix of the type used in previous chapters, where rows are users and # columns are individual movies. The first column would be each user's review # of the first movie, the second column would be each user's review of the second # movie, and so on. # There are 1,000 users and 1,700 movies in this dataset, which means that the full # matrix would be quite large. We may run into issues storing the whole matrix in # memory and computing on it would be troublesome. However, this matrix has the # property that most cells are empty, that is, there is no review for most movies for # most users. There is no review of movie #675 for user #213 though, and not for most # other combinations of user and movie. # As you can see, there are no review for most movies, such as #213 all_ratings[all_ratings["UserID"] == 675].sort("MovieID") # The format given here represents the full matrix, but in a more compact way. # The first row indicates that user #196 reviewed movie #242, giving it a ranking # of 3 (out of five) on the December 4, 1997. # Any combination of user and movie that isn't in this database is assumed to not exist. # This saves significant space, as opposed to storing a bunch of zeroes in memory. 
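# A small compatibility note: the two `.sort(...)` calls above target an older pandas API. `DataFrame.sort` was removed in pandas 0.20, so on a current install the equivalent calls use `sort_values`, as sketched below (assuming the `all_ratings` and `num_favorable_by_movie` frames defined above).

# +
# Modern-pandas equivalents of the .sort(...) calls above (a sketch, not part of the original notebook)
all_ratings[all_ratings["UserID"] == 675].sort_values("MovieID")
num_favorable_by_movie.sort_values("Favorable", ascending=False)[:5]
# -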
This # type of format is called a sparse matrix format. As a rule of thumb, if you expect # about 60 percent or more of your dataset to be empty or zero, a sparse format will # take less space to store. # # When computing on sparse matrices, the focus isn't usually on the data we don't # have—comparing all of the zeroes. We usually focus on the data we have and # compare those. # # ### The Apriori implementation # # The goal of this chapter is to produce rules of the following form: if a person # recommends these movies, they will also recommend this movie. We will also discuss # extensions where a person recommends a set of movies is likely to recommend # another particular movie. # # To do this, we first need to determine if a person recommends a movie. We can # do this by creating a new feature Favorable, which is True if the person gave a # favorable review to a movie: # Not all reviews are favourable! Our goal is "other recommended books", so we only want favourable reviews all_ratings["Favorable"] = all_ratings["Rating"] > 3 all_ratings[10:15] all_ratings[all_ratings["UserID"] == 1][:5] # We will sample our dataset to form a training dataset. This also helps reduce # the size of the dataset that will be searched, making the Apriori algorithm run faster. # We obtain all reviews from the first 200 users: # Sample the dataset. You can try increasing the size of the sample, but the run time will be considerably longer ratings = all_ratings[all_ratings['UserID'].isin(range(200))] # & ratings["UserID"].isin(range(100))] # Next, we can create a dataset of only the favorable reviews in our sample: # We start by creating a dataset of each user's favourable reviews favorable_ratings = ratings[ratings["Favorable"]] favorable_ratings[:5] # We will be searching the user's favorable reviews for our itemsets. So, the next thing # we need is the movies which each user has given a favorable. We can compute this # by grouping the dataset by the User ID and iterating over the movies in each group: # We are only interested in the reviewers who have more than one review favorable_reviews_by_users = dict((k, frozenset(v.values)) for k, v in favorable_ratings.groupby("UserID")["MovieID"]) len(favorable_reviews_by_users) # In the preceding code, we stored the values as a frozenset, allowing us to quickly # check if a movie has been rated by a user. Sets are much faster than lists for this type # of operation, and we will use them in a later code. # Finally, we can create a DataFrame that tells us how frequently each movie has been # given a favorable review: # Find out how many movies have favourable ratings num_favorable_by_movie = ratings[["MovieID", "Favorable"]].groupby("MovieID").sum() num_favorable_by_movie.sort("Favorable", ascending=False)[:5] # ### The Apriori algorithm revisited # # The Apriori algorithm is part of our affinity analysis and deals specifically with # finding frequent itemsets within the data. The basic procedure of Apriori builds # up new candidate itemsets from previously discovered frequent itemsets. These # candidates are tested to see if they are frequent, and then the algorithm iterates as # explained here: # # 1. Create initial frequent itemsets by placing each item in its own itemset. Only items with at least the minimum support are used in this step. # 2. New candidate itemsets are created from the most recently discovered frequent itemsets by finding supersets of the existing frequent itemsets. # 3. All candidate itemsets are tested to see if they are frequent. 
If a candidate is not frequent then it is discarded. If there are no new frequent itemsets from this step, go to the last step. # 4. Store the newly discovered frequent itemsets and go to the second step. # 5. Return all of the discovered frequent itemsets. # #### Implementation # # On the first iteration of Apriori, the newly discovered itemsets will have a length # of 2, as they will be supersets of the initial itemsets created in the first step. On the # second iteration (after applying the fourth step), the newly discovered itemsets will # have a length of 3. This allows us to quickly identify the newly discovered itemsets, # as needed in second step. # # We can store our discovered frequent itemsets in a dictionary, where the key is the # length of the itemsets. This allows us to quickly access the itemsets of a given length, # and therefore the most recently discovered frequent itemsets, with the help of the # following code: frequent_itemsets = {} # itemsets are sorted by length # We also need to define the minimum support needed for an itemset to be considered frequent. This value is chosen based on the dataset but feel free to try different # values. I recommend only changing it by 10 percent at a time though, as the time the # algorithm takes to run will be significantly different! Let's apply minimum support: # min_support = 50 # To implement the first step of the Apriori algorithm, we create an itemset with each # movie individually and test if the itemset is frequent. We use frozenset, as they # allow us to perform set operations later on, and they can also be used as keys in our # counting dictionary (normal sets cannot). # k=1 candidates are the isbns with more than min_support favourable reviews frequent_itemsets[1] = dict((frozenset((movie_id,)), row["Favorable"]) for movie_id, row in num_favorable_by_movie.iterrows() if row["Favorable"] > min_support) # We implement the second and third steps together for efficiency by creating a # function that takes the newly discovered frequent itemsets, creates the supersets, # and then tests if they are frequent. First, we set up the function and the counting # dictionary. In keeping with our rule of thumb of reading through the data as little as possible, # we iterate over the dataset once per call to this function. While this doesn't matter too # much in this implementation (our dataset is relatively small), it is a good practice to # get into for larger applications. We iterate over all of the users and their reviews. Next, we go through each of the previously discovered itemsets and see if it is a # subset of the current set of reviews. If it is, this means that the user has reviewed # each movie in the itemset. We can then go through each individual movie that the user has reviewed that isn't # in the itemset, create a superset from it, and record in our counting dictionary that # we saw this particular itemset. 
We end our function by testing which of the candidate itemsets have enough support # to be considered frequent and return only those : # + from collections import defaultdict def find_frequent_itemsets(favorable_reviews_by_users, k_1_itemsets, min_support): counts = defaultdict(int) for user, reviews in favorable_reviews_by_users.items(): for itemset in k_1_itemsets: if itemset.issubset(reviews): for other_reviewed_movie in reviews - itemset: current_superset = itemset | frozenset((other_reviewed_movie,)) counts[current_superset] += 1 return dict([(itemset, frequency) for itemset, frequency in counts.items() if frequency >= min_support]) # - # To run our code, we create a loop that iterates over the steps of the Apriori # algorithm, storing the new itemsets as we go. In this loop, k represents the length # of the soon-to-be discovered frequent itemsets, allowing us to access the previously # most discovered ones by looking in our frequent_itemsets dictionary using the # key k - 1. We create the frequent itemsets and store them in our dictionary by their # length. We want to break out the preceding loop if we didn't find any new frequent itemsets # (and also to print a message to let us know what is going on). If we do find frequent itemsets, we print out a message to let us know the loop will # be running again. This algorithm can take a while to run, so it is helpful to know that # the code is still running while you wait for it to complete! Finally, after the end of the loop, we are no longer interested in the first set of # itemsets anymore—these are itemsets of length one, which won't help us create # association rules – we need at least two items to create association rules. Let's # delete them : # # + import sys print("There are {} movies with more than {} favorable reviews".format(len(frequent_itemsets[1]), min_support)) sys.stdout.flush() for k in range(2, 20): # Generate candidates of length k, using the frequent itemsets of length k-1 # Only store the frequent itemsets cur_frequent_itemsets = find_frequent_itemsets(favorable_reviews_by_users, frequent_itemsets[k-1], min_support) if len(cur_frequent_itemsets) == 0: print("Did not find any frequent itemsets of length {}".format(k)) sys.stdout.flush() break else: print("I found {} frequent itemsets of length {}".format(len(cur_frequent_itemsets), k)) #print(cur_frequent_itemsets) sys.stdout.flush() frequent_itemsets[k] = cur_frequent_itemsets # We aren't interested in the itemsets of length 1, so remove those del frequent_itemsets[1] # - # This code may take a few minutes to run. print("Found a total of {0} frequent itemsets".format(sum(len(itemsets) for itemsets in frequent_itemsets.values()))) # As we can see it returns 2968 frequent itemsets of varying lengths. You'll notice # that the number of itemsets grows as the length increases before it shrinks. It grows # because of the increasing number of possible rules. After a while, the large number # of combinations no longer has the support necessary to be considered frequent. # This results in the number shrinking. This shrinking is the benefit of the Apriori # algorithm. If we search all possible itemsets (not just the supersets of frequent ones), # we would be searching thousands of times more itemsets to see if they are frequent. # ### Extracting association rules # # After the Apriori algorithm has completed, we have a list of frequent itemsets. # These aren't exactly association rules, but they are similar to it. 
A frequent itemset # is a set of items with a minimum support, while an association rule has a premise # and a conclusion. # # We can make an association rule from a frequent itemset by taking one of the movies # in the itemset and denoting it as the conclusion. The other movies in the itemset will # be the premise. This will form rules of the following form: if a reviewer recommends all # of the movies in the premise, they will also recommend the conclusion. # # For each itemset, we can generate a number of association rules by setting each # movie to be the conclusion and the remaining movies as the premise. # # In code, we first generate a list of all of the rules from each of the frequent itemsets, # by iterating over each of the discovered frequent itemsets of each length. We then iterate over every movie in this itemset, using it as our conclusion. # The remaining movies in the itemset are the premise. We save the premise and # conclusion as our candidate rule. This returns a very large number of candidate rules. We can see some by printing # out the first few rules in the list. # Now we create the association rules. First, they are candidates until the confidence has been tested candidate_rules = [] for itemset_length, itemset_counts in frequent_itemsets.items(): for itemset in itemset_counts.keys(): for conclusion in itemset: premise = itemset - set((conclusion,)) candidate_rules.append((premise, conclusion)) print("There are {} candidate rules".format(len(candidate_rules))) print(candidate_rules[:5]) # These rules were returned as the resulting output. # In these rules, the first part (the frozenset) is the list of movies in the premise, # while the number after it is the conclusion. In the first case, if a reviewer # recommends movie 50, they are also likely to recommend movie 64. # # Next, we compute the confidence of each of these rules. The process starts by creating dictionaries to store how many times we see the # premise leading to the conclusion (a correct example of the rule) and how many # times it doesn't (an incorrect example). We iterate over all of the users, their favorable reviews, and over each candidate # association rule. We then test to see if the premise is applicable to this user. In other words, did the # user favorably review all of the movies in the premise? If the premise applies, we see if the conclusion movie was also rated favorably. # If so, the rule is correct in this instance. If not, it is incorrect. We then compute the confidence for each rule by dividing the correct count by the # total number of times the rule was seen. # Now, we compute the confidence of each of these rules. 
This is very similar to what we did in chapter 1 correct_counts = defaultdict(int) incorrect_counts = defaultdict(int) for user, reviews in favorable_reviews_by_users.items(): for candidate_rule in candidate_rules: premise, conclusion = candidate_rule if premise.issubset(reviews): if conclusion in reviews: correct_counts[candidate_rule] += 1 else: incorrect_counts[candidate_rule] += 1 rule_confidence = {candidate_rule: correct_counts[candidate_rule] / float(correct_counts[candidate_rule] + incorrect_counts[candidate_rule]) for candidate_rule in candidate_rules} # Choose only rules above a minimum confidence level min_confidence = 0.9 # Filter out the rules with poor confidence rule_confidence = {rule: confidence for rule, confidence in rule_confidence.items() if confidence > min_confidence} print(len(rule_confidence)) # Now we can print the top five rules by sorting this confidence dictionary and # printing the results: from operator import itemgetter sorted_confidence = sorted(rule_confidence.items(), key=itemgetter(1), reverse=True) for index in range(5): print("Rule #{0}".format(index + 1)) (premise, conclusion) = sorted_confidence[index][0] print("Rule: If a person recommends {0} they will also recommend {1}".format(premise, conclusion)) print(" - Confidence: {0:.3f}".format(rule_confidence[(premise, conclusion)])) print("") # The resulting printout shows only the movie IDs, which isn't very helpful without # the names of the movies also. The dataset came with a file called u.items, which # stores the movie names and their corresponding MovieID (as well as other # information, such as the genre). # # We can load the titles from this file using pandas. Additional information about # the file and categories is available in the README that came with the dataset. # The data in the files is in CSV format, but with data separated by the | symbol; # it has no header and the encoding is important to set. The column names were # found in the README file # Even better, we can get the movie titles themselves from the dataset movie_name_filename = 'data/ml-100k/u.item' movie_name_data = pd.read_csv(movie_name_filename, delimiter="|", header=None, encoding = "mac-roman") movie_name_data.columns = ["MovieID", "Title", "Release Date", "Video Release", "IMDB", "", "Action", "Adventure", "Animation", "Children's", "Comedy", "Crime", "Documentary", "Drama", "Fantasy", "Film-Noir", "Horror", "Musical", "Mystery", "Romance", "Sci-Fi", "Thriller", "War", "Western"] # Getting the movie title is important, so we will create a function that will return a # movie's title from its MovieID, saving us the trouble of looking it up each time. We look up the movie_name_data DataFrame for the given MovieID and return only # the title column. We use the values parameter to get the actual value (and not the pandas Series # object that is currently stored in title_object). We are only interested in the first # value—there should only be one title for a given MovieID anyway! We end the function by returning the title as needed. 
def get_movie_name(movie_id): title_object = movie_name_data[movie_name_data["MovieID"] == movie_id]["Title"] title = title_object.values[0] return title get_movie_name(4) # We adjust our previous code for printing out the top # rules to also include the titles for index in range(5): print("Rule #{0}".format(index + 1)) (premise, conclusion) = sorted_confidence[index][0] premise_names = ", ".join(get_movie_name(idx) for idx in premise) conclusion_name = get_movie_name(conclusion) print("Rule: If a person recommends {0} they will also recommend {1}".format(premise_names, conclusion_name)) print(" - Confidence: {0:.3f}".format(rule_confidence[(premise, conclusion)])) print("") # The result is much more readable (there are still some issues, but we can ignore them # for now.) # ### Evaluation # # In a broad sense, we can evaluate the association rules using the same concept as for # classification. We use a test set of data that was not used for training, and evaluate # our discovered rules based on their performance in this test set. # # To do this, we will compute the test set confidence, that is, the confidence of each # rule on the testing set. # We won't apply a formal evaluation metric in this case; we simply examine the rules # and look for good examples. # # First, we extract the test dataset, which is all of the records we didn't use in the # training set. We used the first 200 users (by ID value) for the training set, and we will # use all of the rest for the testing dataset. As with the training set, we will also get the # favorable reviews for each of the users in this dataset as well. # Evaluation using test data test_dataset = all_ratings[~all_ratings['UserID'].isin(range(200))] test_favorable = test_dataset[test_dataset["Favorable"]] #test_not_favourable = test_dataset[~test_dataset["Favourable"]] test_favorable_by_users = dict((k, frozenset(v.values)) for k, v in test_favorable.groupby("UserID")["MovieID"]) #test_not_favourable_by_users = dict((k, frozenset(v.values)) for k, v in test_not_favourable.groupby("UserID")["MovieID"]) #test_users = test_dataset["UserID"].unique() test_dataset[:5] # We then count the correct instances where the premise leads to the conclusion, in the # same way we did before. The only change here is the use of the test data instead of # the training data. correct_counts = defaultdict(int) incorrect_counts = defaultdict(int) for user, reviews in test_favorable_by_users.items(): for candidate_rule in candidate_rules: premise, conclusion = candidate_rule if premise.issubset(reviews): if conclusion in reviews: correct_counts[candidate_rule] += 1 else: incorrect_counts[candidate_rule] += 1 # Next, we compute the confidence of each rule from the correct counts. test_confidence = {candidate_rule: correct_counts[candidate_rule] / float(correct_counts[candidate_rule] + incorrect_counts[candidate_rule]) for candidate_rule in rule_confidence} print(len(test_confidence)) sorted_test_confidence = sorted(test_confidence.items(), key=itemgetter(1), reverse=True) print(sorted_test_confidence[:5]) # Finally, we print out the best association rules with the titles instead of the # movie IDs. 
for index in range(10): print("Rule #{0}".format(index + 1)) (premise, conclusion) = sorted_confidence[index][0] premise_names = ", ".join(get_movie_name(idx) for idx in premise) conclusion_name = get_movie_name(conclusion) print("Rule: If a person recommends {0} they will also recommend {1}".format(premise_names, conclusion_name)) print(" - Train Confidence: {0:.3f}".format(rule_confidence.get((premise, conclusion), -1))) print(" - Test Confidence: {0:.3f}".format(test_confidence.get((premise, conclusion), -1))) print("") # The fifth rule, for instance, has a perfect confidence in the training data (1.000), but it # is only accurate in 60 percent of cases for the test data (0.609). Many of the other rules in # the top 10 have high confidences in test data though, making them good rules for # making recommendations. # ### Summary # # In this example, we performed affinity analysis in order to recommend movies based # on a large set of reviewers. We did this in two stages. First, we found frequent # itemsets in the data using the Apriori algorithm. Then, we created association rules # from those itemsets. # # The use of the Apriori algorithm was necessary due to the size of the dataset. # We performed training on a subset of our data in order to find the association rules, # and then tested those rules on the rest of the data—a testing set. From what we # discussed in the previous chapters, we could extend this concept to use cross-fold # validation to better evaluate the rules. This would lead to a more robust evaluation # of the quality of each rule # # ___ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # First, navigate to your chosen working directory. # + # # %cd ~/.. 
# # %pwd # %matplotlib inline # + random_seed = 34 import random random.seed(random_seed) import dataset import matplotlib.pyplot as plt import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from stuf import stuf import time import os gpu=5 os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"]="6"#str(gpu) from tensorflow.keras import backend as K import tensorflow as tf; tf.enable_eager_execution() tf_config=tf.ConfigProto(log_device_placement=True) tf_config.gpu_options.allocator_type = 'BFC' tf_config.gpu_options.allow_growth = True tf_config.allow_soft_placement = True sess = tf.Session(config=tf_config) K.set_session(sess) from tensorflow.python.client import device_lib def get_available_gpus(): local_device_protos = device_lib.list_local_devices() return [x.name for x in local_device_protos if x.device_type == 'GPU'] print(get_available_gpus()) import pyleaves from pyleaves.analysis.img_utils import convert_to_png from pyleaves import leavesdb from pyleaves.data_pipeline.tensorpack_loaders import get_multiprocess_dataflow # - DATASET = "Fossil" # tf.test.gpu_device_name() # dir(tf.test) # #### **1.** Initialize and connect to database in local filesystem # + # local_db = os.path.join(os.getcwd(),'pyleaves','leavesdb','resources','leavesdb.db') local_db = leavesdb.init_local_db() print(local_db) db = dataset.connect(f'sqlite:///{local_db}', row_type=stuf) # - # #### **2.** Print a summary of the database's contents db_summary = leavesdb.summarize_db(db) # #### **3.** Select a subset of datasets # ##### Here we select the Fossil dataset # + # data = leavesdb.db_query.load_Fossil_data(db) # data = leavesdb.db_query.load_data(db, dataset=DATASET) # data = leavesdb.db_query.load_Leaves_data(db) data = leavesdb.db_query.load_all_data(db) data_by_dataset = data.groupby(by='dataset') data_by_dataset_dict = {k:v for k,v in data_by_dataset} # - print(data.head(5)) new_data_locations = {} for dataset_name, rows in data_by_dataset_dict.items(): if dataset_name in ['PNAS','plant_village']: filepaths = list(rows['path'].values) labels = list(rows['family'].values) assert len(filepaths) == len(labels) print('Starting ', dataset_name, ' with ', len(filepaths), ' files') # filepaths = filepaths[:10] # labels = labels[:10] num_files = len(filepaths) start_time = time.perf_counter() new_dataset_paths = convert_to_png(filepaths, labels, dataset_name = dataset_name) end_time = time.perf_counter() total_time = end_time-start_time print(f'Finished copying {num_files} from {dataset_name} in {total_time:.3f} seconds at a rate of {num_files/total_time:.3f} images/sec') new_dataset_paths = list(new_dataset_paths) new_data_locations.update({dataset_name:new_dataset_paths}) # #### **4.** Encode labels as integers for feeding into model # + from pyleaves.data_pipeline.preprocessing import encode_labels data_df = encode_labels(data) data_df.sample(frac=1).head(10) # - print(len(data_df)) # + from pyleaves.data_pipeline.preprocessing import filter_low_count_labels, one_hot_encode_labels, one_hot_decode_labels test_size = 0.25 val_size = 0.25 data_df = filter_low_count_labels(data_df, threshold=3, verbose = True) data_df = encode_labels(data_df) #Re-encode numeric labels after removing sub-threshold classes so that max(labels) == len(labels) image_paths = data_df['path'].values.reshape((-1,1)) one_hot_labels = one_hot_encode_labels(data_df['label'].values) # + # train_data, test_data = train_test_split(data_df, test_size=test_size, 
random_state=random_seed, shuffle=True, stratify=data_df['label']) train_paths, test_paths, train_labels, test_labels = train_test_split(image_paths, one_hot_labels, test_size=test_size, random_state=random_seed, shuffle=True, stratify=data_df['label']) train_paths, val_paths, train_labels, val_labels = train_test_split(train_paths, train_labels, test_size=val_size, random_state=random_seed, shuffle=True, stratify=train_labels) train_data = {'path': train_paths, 'label': train_labels} val_data = {'path': val_paths, 'label': val_labels} test_data = {'path': test_paths, 'label': test_labels} data_splits = {'train': train_data, 'val': val_data, 'test': test_data} # train_gen = get_multiprocess_dataflow(train_data['path'], train_data['label'], size=(299,299), batch_size=32, num_prefetch=25, num_proc=5) # - # ## **Let's set up our model** # + plot_class_frequencies = leavesdb.utils.plot_class_frequencies plot_class_frequencies(labels=one_hot_decode_labels(train_data['label']).ravel().tolist()); plot_class_frequencies(labels=one_hot_decode_labels(val_data['label']).ravel().tolist()); # + num_classes = len(np.unique(data_df['label'])) img_size = [299,299] channels = 3 batch_size = 32 learning_rate=0.01 num_epochs = 1 def parse_function(filename, label): img = tf.io.read_file(filename) img = tf.io.decode_jpeg(img, channels=channels)#, dtype=tf.float32) img = tf.image.resize(img, img_size) return img, label #{'image':img, 'label':label} # def train_preprocess(img, label): # img = tf.image.resize(img, img_size) # return {'image':img, 'label':label} def get_tf_dataset(filenames, labels): data = tf.data.Dataset.from_tensor_slices((filenames, labels)) data = data.shuffle(len(filenames)) # data = data.interleave((lambda x, y: tf.data.Dataset(x,y).map(parse_function, num_parallel_calls=1)), cycle_length=4, block_length=16) data = data.map(parse_function, num_parallel_calls=tf.data.experimental.AUTOTUNE) # data = data.map(train_preprocess, num_parallel_calls=4) data = data.batch(batch_size) data = data.prefetch(tf.data.experimental.AUTOTUNE) # data = data.apply(tf.data.experimental.prefetch_to_device('/device:GPU:0')) return data ############################## def debug_parse_function(filename, label): img = tf.io.read_file(filename) img = tf.io.decode_jpeg(img, channels=channels)#, dtype=tf.float32) img = tf.image.resize(img, img_size) return img, label, filename #{'image':img, 'label':label} def debug_get_tf_dataset(filenames, labels): data = tf.data.Dataset.from_tensor_slices((filenames, labels)) data = data.shuffle(len(filenames)) # data = data.interleave((lambda x, y: tf.data.Dataset(x,y).map(parse_function, num_parallel_calls=1)), cycle_length=4, block_length=16) data = data.map(debug_parse_function, num_parallel_calls=tf.data.experimental.AUTOTUNE) # data = data.map(train_preprocess, num_parallel_calls=4) data = data.batch(batch_size) data = data.prefetch(tf.data.experimental.AUTOTUNE) data = data.cache() # data = data.apply(tf.data.experimental.prefetch_to_device('/device:GPU:0')) return data ############################## debug = False#True if debug == True: get_tf_dataset = debug_get_tf_dataset def decode_labels(data_df): data_df=data_df.groupby('label', group_keys=False).apply(lambda df: df.sample(1).loc[:,['label','family']]) data_df.sort_values(by='label', inplace=True) data_df.set_index(keys='label',drop=True, inplace=True) data_df = data_df.to_dict() return data_df['family'] # train_dataset = get_tf_dataset(filenames = train_data['path'].values, labels = train_data['label'].values) # 
val_dataset = get_tf_dataset(filenames = val_data['path'].values, labels = val_data['label'].values) train_dataset = get_tf_dataset(filenames = train_data['path'].ravel(), labels = train_data['label']) val_dataset = get_tf_dataset(filenames = val_data['path'].ravel(), labels = val_data['label']) label_map = decode_labels(data_df=data_df) num_samples_train = len(train_data['path']) num_samples_val = len(val_data['path']) num_samples_test = len(test_data['path']) print(num_samples_train) print(num_samples_val) print(num_samples_test) label_map[0] # + # for features in train_dataset.take(1): # image_batch = features[0].numpy().astype(np.int) # label_batch = features[1].numpy().astype(np.int) # plot_image_grid = pyleaves.analysis.img_utils.plot_image_grid # plot_image_grid(image_batch, label_batch, x_plots = 4, y_plots = 8) # + ############################################################################# # datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0/255.0) # train_data['label'] = train_data['label'].astype(str) # datagen_flow = datagen.flow_from_dataframe(train_data.iloc[:100,:], x_col='path', y_col='label', class_mode='sparse', batch_size=batch_size) # a=next(datagen_flow) # print(a[0].shape, a[1].shape) # + # import time # start_time = time.time() # n=100 # total_time = 0 # try: # for i, features in enumerate(train_dataset.take(n)): # # print(i, features[0].shape, features[1].shape) # run_time = time.time()-start_time # total_time += run_time # print(f'Took {run_time:.2f} seconds') # start_time = time.time() # except Exception as e: # print(e) # print(f'finished {i} iterations') # avg_time = total_time / i+1 # rate = (i+1)*batch_size/total_time # print(f'Avg time = {avg_time:.2f} | Ran {i+1} iterations using batch size = {batch_size} & {batch_size*n} samples') # print(f'rate = {rate}') # - model = pyleaves.models.inception_v3.build_model(num_classes, learning_rate=learning_rate) model.summary() # + # history = pyleaves.models.inception_v3.train_model(model, # train_dataset, # validation_data=val_dataset, # steps_per_epoch=int(num_samples_train//batch_size), # validation_steps=int(num_samples_val//batch_size), # max_epochs=num_epochs, # # callbacks=None, # workers=5, # initial_epoch=0) # + # train_dataset = train_dataset.make_initializable_iterator().get_next() # val_dataset = val_dataset.make_initializable_iterator().get_next() # - # current_epoch = 0 # while current_epoch < 20: # try: # history = model.fit( # train_dataset, # validation_data=val_dataset, # # steps_per_epoch=int(num_samples_train//batch_size), # # validation_steps=int(num_samples_val//batch_size), # epochs=num_epochs, # # callbacks=None, # workers=10, # initial_epoch=current_epoch, # verbose=1) # except KeyboardInterrupt: # break # except Exception as e: # print(f'current epoch = {current_epoch}, error: {e}') # + # read_data = tf.get_default_session().run(train_dataset) # + import time from tensorflow.python.framework.errors_impl import InvalidArgumentError filename_ids = [] batch_log = [] invalid_filenames = [] start_time = time.time() time_log = [] steps_per_epoch = num_samples_train//batch_size valid_filenames = [] reset_iter = True current_epoch = 0 while current_epoch < 20: if reset_iter == True: epoch_dataset = train_dataset.take(steps_per_epoch) reset_iter = False try: for i, (imgs, labels, filenames) in enumerate(epoch_dataset): run_time = time.time()-start_time time_log.append(run_time) print(f'Took {run_time:.2f} seconds') valid_filenames.append([fname.numpy().decode('utf-8') for fname in 
filenames]) start_time = time.time() except InvalidArgumentError as e: invalid_flag = 0 for j, fname in enumerate(filenames): fname = fname.numpy().decode('utf-8') if os.path.isfile(fname): filename_ids.append(i*batch_size+j) valid_filenames.append(fname) continue else: filename_ids.append(i*batch_size+j) invalid_filenames.append(fname) print(f'invalid filename = {fname}') invalid_flag = 1 print(f'current epoch = {current_epoch}, error: {e}', type(e)) continue except KeyboardInterrupt: break except Exception as e: reset_iter = True print(f'current epoch = {current_epoch}, error: {e}', type(e)) print(f'finished {i*batch_size} samples over {i} iterations in {np.sum(time_log):.2f} seconds') # - print(invalid_filenames) # + import cv2 plot_image_grid = pyleaves.analysis.img_utils.plot_image_grid invalid_filenames = np.concatenate([fname.numpy().tolist() for fname in invalid_filenames]).tolist() for i, fname in enumerate(invalid_filenames): if type(fname)==bytes: invalid_filenames[i] = fname.decode('utf-8') # + invalid_imgs = [] for i, fname in enumerate(invalid_filenames): img = cv2.imread(fname) img = cv2.resize(img, tuple(img_size)) invalid_imgs.append(img) invalid_images = np.stack(invalid_imgs) plot_image_grid(invalid_images, labels = np.ones(len(invalid_imgs)), x_plots = 4, y_plots = 8) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This is the sequel of tutorial_training_dnn.ipynb. # # Here, we will discuss about loading a trained network, forwarding data to it and how to extract features maps. ## Set up the sys.path in order to be able to import our modules import os import sys from matplotlib import pyplot as plt module_path = os.path.abspath(os.path.join('../python')) if module_path not in sys.path: sys.path.append(module_path) # First, you can load a model using load_model from Keras to load an hdf5 file containing a network (saved using save_model or a ModelCheckpoint also defined in Keras). # # Otherwise, using the DeepNet class, you can call the constructor with the file. # + # Input the hdf5 file, here we load a network trained on Tikhonov net_file = 'Deconv_net_tuto_copy.hdf5' ### Method 1 #from keras.models import load_model #dnn = load_model(net_file) #dnn is instance of the model class ### Method 2 from DeepDeconv.deepnetFCS.DeepNet import DeepNet dnn = DeepNet(model_file=net_file) #dnn is instance of DeepNet class # - dnn_alexis=DeepNet(model_file="../nets/DeconvNet2D_FCS_sc4_layer4x5x6x7_relu_growthRate12_reshfl_SNR20to100.hdf5") dnn.model.summary() # We also load a traditionnal CNN network from keras.models import load_model kk=load_model('/data/DeepDeconv/model/archive/unet_vs_cnn/cnn.hdf5') # We then print the structure of the network: kk.summary() # And display the structure of the network: # + from keras.utils import plot_model plot_model(kk, to_file='{0}.png'.format("CNN_10_layers"),show_shapes=True) from IPython.display import SVG from keras.utils.vis_utils import model_to_dot pydot_obj=model_to_dot(kk,show_shapes=True,show_layer_names=False) SVG(pydot_obj.create(prog='dot', format='svg')) # - # To forward data through the network use the predict method available for both classes. 
For the method 2: # + from DeepDeconv.utils.batch_utils import get_batch_from_fits import numpy as np # Input the file containing the galaxies and psfs for testing testset_file = '/data/DeepDeconv/data/vsc_euclidpsfs/reshuffle/image-shfl-0-multihdu.fits' noiseless_img_hdu = 0 psf_hdu = 1 targets_hdu = 2 # Create the set of test with 10 observations at SNR 50 test_data, target_data = get_batch_from_fits(testset_file, idx_list=np.arange(10), SNR=50, noiseless_img_hdu=noiseless_img_hdu, targets_hdu=targets_hdu, psf_hdu=psf_hdu, image_dim=96, image_per_row=100, deconv_mode='TIKHONOV') noisy_data, target_data = get_batch_from_fits(testset_file, idx_list=np.arange(10), SNR=50, noiseless_img_hdu=noiseless_img_hdu, targets_hdu=targets_hdu, psf_hdu=psf_hdu, image_dim=96, image_per_row=100, deconv_mode=None) dnn_reconstruction = dnn.predict(test_data, verbose=1) print(dnn_reconstruction.shape) # - # Using DNN from Alexis: dnn_reconstruction_alexis = dnn_alexis.predict(test_data, verbose=1) print(dnn_reconstruction.shape) plt.figure(figsize=(12,8)) plt.subplot(221),plt.imshow(target_data[0,:,:,0]) plt.subplot(222),plt.imshow(noisy_data[0,:,:,0]) plt.subplot(223),plt.imshow(dnn_reconstruction_alexis[0,:,:,0]) plt.subplot(224),plt.imshow(dnn_reconstruction[0,:,:,0]) # The DeepNet class also provides a method to extract intermediate features maps: # # get_layer_output(test_data, layer_idx) # # layer_idx (int): idx of the layer whose features maps will be extracted. # # Use dnn.model.summary() to retrieve the layers list. dnn.model.summary() # + # Suppose we want the last concatenation layer idx = -4 # We print the name to check if you have the correct layer print(dnn.model.layers[idx].name) # We extract the features maps for the same data set as above. # The layer has 48 neurons so we get an output of shape (10,96,96,48) feat_maps = dnn.get_layer_output(test_data, layer_idx=idx) print(feat_maps.shape) # + plt.figure(figsize=(16,16)) fig,axes=plt.subplots(4,4) for kx in range(4): for ky in range(4): axes[kx, ky].imshow(feat_maps[kx,:,:,ky]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="GJ-wL42Lxp7g" # %load_ext autoreload # %autoreload 2 # %matplotlib inline # + id="AO3i1A7iL--k" #from google.colab import drive #drive.mount('/content/gdrive') # + colab={"base_uri": "https://localhost:8080/"} id="WdH_c8nnqn-c" outputId="0dba1de3-0b50-4c27-c757-29d573843927" #Q1. Execute the following statement. What is displayed? What does it mean? # !pwd # + id="jhAroxykqtRF" #Q2: Execute the following statement. What happens? Examine the left column of your colab page to see what happens. # !git clone https://github.com/fastai/course-v3.git # + id="FOZb4AeKq6vQ" #Q3. Execute the following statement. What are displayed by the execution of sys.path? What do it mean? import sys sys.path.append('/content/course-v3/nbs/dl2') sys.path # + id="I86djNvcxp70" #export #Q4: "from exp.nb_01 import *" imports classes, functions, etc from module nb_01 in package exp. # Q4a. What do you know that exp is a package? # Q4b. How is the Python system able to get access to package exp? That is, how can it search for it? 
from exp.nb_02 import * import torch.nn.functional as F # + [markdown] id="iywbFmpbxp73" # ## Initial setup # + [markdown] id="G5DBpkZWxp75" # ### Data # + [markdown] id="Q_7vJLX_xp75" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=1786) # + id="xg8JmuuDLwmt" from IPython.display import Image from six.moves import urllib opener = urllib.request.build_opener() opener.addheaders = [('User-agent', 'Mozilla/5.0')] urllib.request.install_opener(opener) def get_data(): import os import torchvision.datasets as datasets root = '../data' if not os.path.exists(root): os.mkdir(root) train_set = datasets.MNIST(root=root, train=True, download=True) test_set = datasets.MNIST(root=root, train=False, download=True) #load validation set x_train, x_valid = train_set.data.split([50000, 10000]) y_train, y_valid = train_set.targets.split([50000, 10000]) return (x_train.view(50000, -1) / 256.0), y_train.float(), (x_valid.view(10000, -1))/ 256.0, y_valid.float() # + id="TRKV-q7Kxp76" mpl.rcParams['image.cmap'] = 'gray' # + id="i6um-ZEMxp76" #Q5: when you execute the following statement, where is the downloaded data stored? Examine the left column of your colab page. x_train,y_train,x_valid,y_valid = get_data() # + colab={"base_uri": "https://localhost:8080/"} id="dwBgUYjX1Gbc" outputId="7aa49d60-0624-4d7a-d29f-b57fc9aae53b" y_train.shape # + id="oHcenWs_rZ_I" #Q6: Execute the following statement. What does the number displayed mean? len(x_train) # + id="mxSwa7g5rcer" # + id="DESekbEYxp77" #Q7: Display the values of n,m, c, and nh. What are they? For what are they used in the following code? n,m = x_train.shape c = y_train.max()+1 nh = 50 # + id="_cCzC9iZxp77" #Q8: The following defines Model class, which will be used to create a neural net. #Q8a: What is nn.Module?, What is nn.Linear? What is nn.ReLU? #Q8b. What is the difference between nn.Linear itself and nn.Linear(n_in,nh) ? class Model(nn.Module): def __init__(self, n_in, nh, n_out): super().__init__() self.layers = [nn.Linear(n_in,nh), nn.ReLU(), nn.Linear(nh,n_out)] def __call__(self, x): for l in self.layers: x = l(x) return x # + id="LJx4LIA1xp78" #Q9a. When Model(m,nh, 10) is executed, which method in Model class is executed? model = Model(m, nh, 10) # + id="gPFt346vxp78" #Q10a. Is model a function or an object? If it is an object, how is it used as if it is a function? # When model(x_train) is executed, which method in Model is executed. What does this method create? pred = model(x_train) # + id="gXbLts9ZRMgd" pred # + id="FsWmZbCWRQd_" #Q11. What does pred[0] contain? pred[0] # + id="-Ejrm01LRSxj" #Q12. Execute the following statement. Is the resulting value near to 1? If not, what does it imply? pred[0].sum() # + [markdown] id="qhbGAsGWxp79" # ### Cross entropy loss # + [markdown] id="fFGQT0zpxp79" # First, we will need to compute the softmax of our activations. This is defined by: # # $$\hbox{softmax(x)}_{i} = \frac{e^{x_{i}}}{e^{x_{0}} + e^{x_{1}} + \cdots + e^{x_{n-1}}}$$ # # or more concisely: # # $$\hbox{softmax(x)}_{i} = \frac{e^{x_{i}}}{\sum_{0 \leq j \leq n-1} e^{x_{j}}}$$ # # In practice, we will need the log of the softmax when we calculate the loss. 
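# Before moving to the log version, here is a small, hedged sanity check of the plain
# softmax on a made-up tensor (illustrative only, not part of the original notebook;
# `torch` is assumed to be in the namespace via the star import above, as the later cells
# already rely on): each row of the result should be a probability distribution summing to 1.
# +
toy_logits = torch.tensor([[1.0, 2.0, 3.0],
                           [0.0, 0.0, 0.0]])
toy_softmax = toy_logits.exp() / toy_logits.exp().sum(-1, keepdim=True)
print(toy_softmax)          # each row is non-negative and normalized
print(toy_softmax.sum(-1))  # tensor([1., 1.])
# -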
# + id="Q3ZRxbanxp7_" def log_softmax(x): return (x.exp()/(x.exp().sum(-1,keepdim=True))).log() # + id="QDeeKw-Sxp8A" log_sm_pred = log_softmax(pred) # + colab={"base_uri": "https://localhost:8080/"} id="PJmGXhPfRYba" outputId="5678501b-3553-42cf-f595-3d7c32de04e4" log_sm_pred[0] # + id="JRmNSz1vRb0N" log_sm_pred[0].sum() # + [markdown] id="DfNuX2v1xp8B" # ## Log likelihood interpretation: https://datascience.stackexchange.com/questions/9302/the-cross-entropy-error-function-in-neural-networks # # One way to interpret cross-entropy is to see it as a (minus) log-likelihood for the data y′i, under a model yi. # # Namely, suppose that you have some fixed model (a.k.a. "hypothesis"), which predicts for n classes {1,2,…,n} their hypothetical occurrence probabilities y1,y2,…,yn. Suppose that you now observe (in reality) k1 instances of class 1, k2 instances of class 2, kn instances of class n, etc. According to your model the likelihood of this happening is: # # $P[data|model]:=y_{1}^{k_{1}}...y_{n}^{k_{n}}$ . # # Taking the logarithm and changing the sign: # −logP[data|model]= # $−k_{1} * logy_{1}−k_{2} * logy_{2}−⋯−k_{n}logy_{n} =−\sum_{i} k_{i}*logy_{i}$ # # If you now divide the right-hand sum by the number of observations N=k1+k2+⋯+kn, and denote the empirical probabilities as y′i=ki/N, you'll get the cross-entropy: # −1/N * logP[data|model]=−1/N # $\sum_{i}k_{i}logy_{i}=−\sum_{i}y′_{i}logy_{i} = H(y′,y)$ # # ## The Differnce between probabilities interpretation: https://datascience.stackexchange.com/questions/20296/cross-entropy-loss-explanation # # $H(p,\hat{p})=−\sum_{s \in B} p(x^{(s)}) \cdot log(\hat{p}(x^{(s)}))$ # # # In general, $ p(x^{(s)}) = [ p_{1} (x^{(s)}), ..., p_{n}(x^{(s)})]$ is a probability distribution over a set of categories. # But since our $ p(x^{(s)})$ are 1-hot encoded, that is, in the form of $ p(x^{(s)}) =[0,0,..,0,1,0..]$, where the probability of only one category is one and those of the other categories are all zero, this can be rewritten as # # $-\sum_{s \in B} [ 0*\log(\hat{p}_{1} (x^{(s)})) # + 1*\log(\hat{p}_{i} (x^{(s)}) ) +..+0*\log(\hat{p}_{n} (x^{(s)}) ) ] \\ # = -\sum_{s \in B} \log(\hat{p}_{i(s)} ) (x^{(s)}) )$ # # Here $i(s)$ is the index of the one-hot probability distribution $p(x^{(s)})$ where the probability is one. # # # + [markdown] id="KBtZhgePxp8D" # This can be done using numpy-style [integer array indexing](https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html#integer-array-indexing). Note that PyTorch supports all the tricks in the advanced indexing methods discussed in that link. 
# + id="3R_y0cP_xp8D" colab={"base_uri": "https://localhost:8080/"} outputId="0df5c3d0-18c7-4a14-870a-226f7ad09a24" y_train[:3] # + colab={"base_uri": "https://localhost:8080/"} id="VehY657jB7IB" outputId="9937fcbc-4ddb-43ed-a6fd-852936cc8d63" log_sm_pred[ [0,1,2]] # + id="uJM_9p0rxp8H" colab={"base_uri": "https://localhost:8080/"} outputId="929485f1-54f3-4291-cc50-cdb67c2e1a89" log_sm_pred[[0,1,2], [5,0,4]] # + id="oSK1T-mgxp8I" y_train.shape[0] # + colab={"base_uri": "https://localhost:8080/"} id="-ogBL3jZxPPN" outputId="01119f33-9e56-4fcd-e7a8-5cdad566afe1" y_train # + [markdown] id="h1bTmfz5xp8J" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=2081) # + colab={"base_uri": "https://localhost:8080/"} id="pPJ9QjlLQQc_" outputId="7323044b-38e6-4359-a4c5-183343ec16f8" log_sm_pred[0] # + colab={"base_uri": "https://localhost:8080/"} id="xoc2YwU5wATE" outputId="db5c56ac-c45e-4815-9908-ded483996b71" range(y_train.long().shape[0]) # + colab={"base_uri": "https://localhost:8080/"} id="8GMBhmq9w4QG" outputId="d18ff032-6585-4755-9142-e6ff64d13682" log_sm_pred[range(y_train.long().shape[0]), y_train[0:10].long() ].shape # + id="ZI0cXSLRxp8K" def nll(input, target): return -input[range(target.shape[0]), target].mean() # + colab={"base_uri": "https://localhost:8080/"} id="vzmLXXWiPTSF" outputId="a8bf7fd5-d0d3-4669-8bad-3ddfa532dac8" y_train.long().shape[0] # + colab={"base_uri": "https://localhost:8080/"} id="_qRbZvGePbHV" outputId="5d8c06da-0da9-4699-f69a-c65b3b91a183" x_train.shape # + id="Bs5UU4fyxp8K" loss = nll(log_sm_pred, y_train.long()) # + colab={"base_uri": "https://localhost:8080/"} id="rrPFVa91P0EO" outputId="522c4008-bfcb-486c-9f69-9aa19ee2bcf3" y_train # + colab={"base_uri": "https://localhost:8080/"} id="6uzxkFQaPv-W" outputId="f229a792-a57f-45a9-d73a-98bf5820ddf5" y_train.long() # + id="zkSkgQYQxp8L" colab={"base_uri": "https://localhost:8080/"} outputId="452d86ad-a756-4d0d-aa4f-c84ac6c5e235" loss # + [markdown] id="rHyuP5t7xp8M" # Note that the formula # # $$\log \left ( \frac{a}{b} \right ) = \log(a) - \log(b)$$ # # gives a simplification when we compute the log softmax, which was previously defined as `(x.exp()/(x.exp().sum(-1,keepdim=True))).log()` # + id="veHLG6DXxp8M" def log_softmax(x): return x - x.exp().sum(-1,keepdim=True).log() # + id="aelGhkV5xp8N" test_near(nll(log_softmax(pred), y_train.long()), loss) # + [markdown] id="lVPZMaTcxp8O" # Then, there is a way to compute the log of the sum of exponentials in a more stable way, called the [LogSumExp trick](https://en.wikipedia.org/wiki/LogSumExp). The idea is to use the following formula: # # $$\log \left ( \sum_{j=1}^{n} e^{x_{j}} \right ) = \log \left ( e^{a} \sum_{j=1}^{n} e^{x_{j}-a} \right ) = a + \log \left ( \sum_{j=1}^{n} e^{x_{j}-a} \right )$$ # # where a is the maximum of the $x_{j}$. # + id="No5iHYvCxp8P" def logsumexp(x): m = x.max(-1)[0] return m + (x-m[:,None]).exp().sum(-1).log() # + [markdown] id="UpzazYXDxp8Q" # This way, we will avoid an overflow when taking the exponential of a big activation. In PyTorch, this is already implemented for us. # + id="dQ_VQU0Sxp8Q" test_near(logsumexp(pred), pred.logsumexp(-1)) # + [markdown] id="wkGCt3oDxp8R" # So we can use it for our `log_softmax` function. # + id="q7nsaEFtxp8R" def log_softmax(x): return x - x.logsumexp(-1,keepdim=True) # + id="Wt1PreQAxp8S" test_near(nll(log_softmax(pred), y_train.long()), loss) # + [markdown] id="FcdHBuPOxp8S" # Then use PyTorch's implementation. 
# + id="U609ws15xp8T" test_near(F.nll_loss(F.log_softmax(pred, -1), y_train.long()), loss) # + [markdown] id="yOo22f7Exp8U" # In PyTorch, `F.log_softmax` and `F.nll_loss` are combined in one optimized function, `F.cross_entropy`. # + id="lU7uCHBwxp8U" test_near(F.cross_entropy(pred, y_train.long()), loss) # + [markdown] id="BdFUzc94xp8V" # ## Basic training loop # + [markdown] id="495DuZ5Xxp8V" # Basically the training loop repeats over the following steps: # - get the output of the model on a batch of inputs # - compare the output to the labels we have and compute a loss # - calculate the gradients of the loss with respect to every parameter of the model # - update said parameters with those gradients to make them a little bit better # + [markdown] id="6e6Y1AKyxp8W" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=2542) # + id="fDHaEy3oxp8X" loss_func = F.cross_entropy # This criterion combines log_softmax and nll_loss in a single function. # + id="wcuWlwDXxp8Y" #export def accuracy(out, yb): return (torch.argmax(out, dim=1)==yb).float().mean() # + id="ozBbmfr7xp8Z" colab={"base_uri": "https://localhost:8080/"} outputId="ecc22746-beca-402f-9126-fc1f10c21c9e" bs=64 # batch size xb = x_train[0:bs] # a mini-batch from x preds = model(xb) # predictions preds[0], preds.shape # + id="Ue2zaDOLxp8a" colab={"base_uri": "https://localhost:8080/"} outputId="7c2ce71e-5f8f-42d4-a0ad-2dd8b1fcd073" yb = y_train[0:bs] loss_func(preds, yb.long()) # + id="vYSsK_Rlxp8b" colab={"base_uri": "https://localhost:8080/"} outputId="856100ed-dc6e-4991-ebfb-d78b3618eb6a" accuracy(preds, yb) # + id="dxLMr6u-xp8c" lr = 0.5 # learning rate epochs = 1 # how many epochs to train for # + [markdown] id="9-hKTgAOFvu0" # # The mechanism for training the network: (1) computing the graidents of Loss with respect to tensors at each layer, (2) backpropgation: applying the chain rule to compute the gradient vector of the parameters, (3) updating the parameters. # # https://github.com/pytorch/pytorch/blob/35bd2b3c8b64d594d85fc740e94c30aa67892a34/torch/tensor.py # # https://github.com/pytorch/pytorch/blob/35bd2b3c8b64d594d85fc740e94c30aa67892a34/torch/tensor.py # # https://stackoverflow.com/questions/57248777/backward-function-in-pytorch # : Pytorch does not support this non-scalar function derivatives. Instead, pytorch assumes out is only an intermediate tensor and somewhere "upstream" there is a scalar loss function, that through chain rule provides d loss/ d out[i,j]. This "upstream" gradient is of size 2-by-3 and this is actually the argument you provide backward in this case: out.backward(g) where g_ij = d loss/ d out_ij. # # The gradients are then calculated by chain rule d loss / d a[i,j] = (d loss/d out[i,j]) * (d out[i,j] / d a[i,j]); In fact, So it's not just (d loss/d out[i,j]) * (d out[i,j] / d a[i,j]), but in fact sum_{k,l} (d loss/d out[k,l]) * (d out[k,l] / d a[i,j]). # # # The "gradient accumulation" # # https://discuss.pytorch.org/t/why-do-we-need-to-set-the-gradients-manually-to-zero-in-pytorch/4903/9 # # In the following example, y.backward() is called 5 times, so the final value of x.grad will be 5*cos(0)=5. # # import torch # from torch.autograd import Variable # # x = Variable(torch.Tensor([[0]]), requires_grad=True) # # for t in range(5): # y = x.sin() # y.backward() # # print(x.grad) # shows 5 # Calling x.grad.data.zero_() before y.backward() can make sure x.grad is exactly the same as current y’(x), not a sum of y’(x) in all previous iterations. 
# # x = Variable(torch.Tensor([[0]]), requires_grad=True) # # for t in range(5): # if x.grad is not None: # x.grad.data.zero_() # y = x.sin() # y.backward() # # print(x.grad) # shows 1 # I also got confused by this “zeroing gradient” when first learning pytorch. The doc of torch.autograd.backward does mention that # # This function accumulates gradients in the leaves - you might need to zero them before calling it. # # But this is quite hard to find and pretty confusing for (say) tensorflow users. # # Official tutorials like 60 Minute Blitz 45 or PyTorch with Examples 69 both say nothing about why one needs to call grad.data.zero_() during training. I think it would be useful to explain this a little more in beginner-level tutorials. RNN is a good example for why accumulating gradient (instead of refreshing) is useful, but I guess new users wouldn’t even know that backward() is accumulating gradient # # # # A more rigorous reference to autograd: # # https://blog.paperspace.com/pytorch-101-understanding-graphs-and-automatic-differentiation/ # # torch.no_grad(): # # When we are computing gradients, we need to cache input values, and intermediate features as they maybe required to compute the gradient later. # # The gradient of # b # = # w # 1 # ∗ # a # w.r.t it's inputs # w # 1 # and # a # is # a # and # w # 1 # respectively. We need to store these values for gradient computation during the backward pass. This affects the memory footprint of the network. # # While, we are performing inference, we don't compute gradients, and thus, don't need to store these values. Infact, no graph needs to be create during inference as it will lead to useless consumption of memory. # # PyTorch offers a context manager, called torch.no_grad for this purpose. # # # with torch.no_grad: # # inference code goes here # # No graph is defined for operations executed under this context manager. # # # # + id="Q0KH3m-Uxp8c" colab={"base_uri": "https://localhost:8080/"} outputId="10ebc4d7-40b1-48f7-b7c3-3a15095924e5" for epoch in range(epochs): for i in range((n-1)//bs + 1): # for each batch in the current epoch # set_trace() start_i = i*bs end_i = start_i+bs xb = x_train[start_i:end_i] yb = y_train[start_i:end_i] ybHat = model(xb) h = ybHat.register_hook( lambda grad: print (grad) ) loss = loss_func( ybHat, yb.long()) # r"""Computes the gradient of loss w.r.t. graph LEAVES. # This function accumulates gradients in the leaves - you might need to zero # ``.grad`` attributes or set them to ``None`` before calling it. # (This makes it easier for rnn, because each module will be back propogated through several times.) # Attribute grad of tensor ybHat (ybHat.grad = dloss/dybHat) is ``None`` by default and becomes a Tensor the first time a call to #func `backward` computes gradients for ybHat. #The attribute will then contain the gradients computed and future calls to #:func:`backward` will accumulate (add) gradients into it. #Since the backward() function accumulates gradients, and you don’t want to mix up gradients between minibatches, # you have to zero them out at the start of a new minibatch. 
# By the way, the best practice is to use the optimzer.zero_grad() ybHat.grad loss.backward() # When you call loss.backward(), all it does is compute gradient of loss # w.r.t all the parameters in loss that have requires_grad = True and store them in parameter.grad attribute for every parameter ybHat.grad #optimizer.step() updates all the parameters based on parameter.grad with torch.no_grad(): for l in model.layers: if hasattr(l, 'weight'): l.weight -= l.weight.grad * lr l.bias -= l.bias.grad * lr #we don't care about gradients from the previous batch. Not zeroing grads would lead to gradient accumulation across batches l.weight.grad.zero_() l.bias .grad.zero_() # + id="I0fC9kbvxp8g" colab={"base_uri": "https://localhost:8080/"} outputId="c8b375c1-825a-4a92-c027-49b59a68a658" loss_func(model(xb), yb.long()), accuracy(model(xb), yb) # + [markdown] id="n70culC-xp8h" # ## Using parameters and optim # + [markdown] id="Ztmln-E0xp8h" # ### Parameters # + [markdown] id="3JMWX2-fxp8i" # Use `nn.Module.__setattr__` and move relu to functional: # + [markdown] id="2qu-R0Ljxp8j" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=2818) # + id="lw5k0KJRxp8j" class Model(nn.Module): def __init__(self, n_in, nh, n_out): super().__init__() self.l1 = nn.Linear(n_in,nh) self.l2 = nn.Linear(nh,n_out) def __call__(self, x): return self.l2(F.relu(self.l1(x))) # + id="lq8zwnBlxp8k" model = Model(m, nh, 10) # + id="yFsAG7KCxp8k" colab={"base_uri": "https://localhost:8080/"} outputId="9d765f98-3b16-4a38-98b5-6bcc457da4f2" for name,l in model.named_children(): print(f"{name}: {l}") # + id="QQAcSLAVxp8l" colab={"base_uri": "https://localhost:8080/"} outputId="c1301582-6aa8-4bb1-9e3b-c9ad0f685747" model # + id="O77Ds-LRxp8m" colab={"base_uri": "https://localhost:8080/"} outputId="ae80b6f2-ead8-43f0-a448-3888e712dd07" model.l1 # + id="SUDx09pkxp8m" def fit(): for epoch in range(epochs): for i in range((n-1)//bs + 1): start_i = i*bs end_i = start_i+bs xb = x_train[start_i:end_i] yb = y_train[start_i:end_i] loss = loss_func(model(xb), yb.long()) loss.backward() with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad() # + id="zkjGdktlxp8o" colab={"base_uri": "https://localhost:8080/"} outputId="b2db913c-a4b9-4a7d-9003-28aadf55efff" fit() loss_func(model(xb), yb.long()), accuracy(model(xb), yb) # + [markdown] id="-wQ0CkLmxp8p" # Behind the scenes, PyTorch overrides the `__setattr__` function in `nn.Module` so that the submodules you define are properly registered as parameters of the model. # + id="nfLTjkNJxp8p" class DummyModule(): def __init__(self, n_in, nh, n_out): self._modules = {} self.l1 = nn.Linear(n_in,nh) self.l2 = nn.Linear(nh,n_out) def __setattr__(self,k,v): if not k.startswith("_"): self._modules[k] = v super().__setattr__(k,v) def __repr__(self): return f'{self._modules}' def parameters(self): for l in self._modules.values(): for p in l.parameters(): yield p # + id="BxdtM1ilxp8q" colab={"base_uri": "https://localhost:8080/"} outputId="6a4c6d65-f870-47a5-9ffb-4878cb76a6e9" mdl = DummyModule(m,nh,10) mdl # + id="asWZtQT7xp8r" colab={"base_uri": "https://localhost:8080/"} outputId="bc341473-2c04-4764-dc64-65061f646c72" [o.shape for o in mdl.parameters()] # + [markdown] id="PJzROcD-xp8r" # ### Registering modules # + [markdown] id="Gi6Bj3Egxp8s" # We can use the original `layers` approach, but we have to register the modules. 
# + [markdown] id="SCDxRdQexp8s" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=2997) # + id="8ax355bwxp8t" layers = [nn.Linear(m,nh), nn.ReLU(), nn.Linear(nh,10)] # + id="0FVOwYH8xp8t" class Model(nn.Module): def __init__(self, layers): super().__init__() self.layers = layers for i,l in enumerate(self.layers): self.add_module(f'layer_{i}', l) def __call__(self, x): for l in self.layers: x = l(x) return x # + id="x5hq-X7Vxp8u" model = Model(layers) # + id="lqYP2MXlxp8u" colab={"base_uri": "https://localhost:8080/"} outputId="2acdf4d1-9aee-4e4e-bf4f-5fb1488c41b1" model # + [markdown] id="diwEDyzJxp8v" # ### nn.ModuleList # + [markdown] id="yUVBUAqqxp8v" # `nn.ModuleList` does this for us. # + [markdown] id="w39ws0evxp8w" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=3173) # + id="YNrrTzq6xp8w" class SequentialModel(nn.Module): def __init__(self, layers): super().__init__() self.layers = nn.ModuleList(layers) def __call__(self, x): for l in self.layers: x = l(x) return x # + id="m59ypeCdxp8x" model = SequentialModel(layers) # + id="SVSNzdUmxp8x" colab={"base_uri": "https://localhost:8080/"} outputId="4be69e8d-f133-4c01-8617-44e8451cae20" model # + id="QheaMa-6xp8y" colab={"base_uri": "https://localhost:8080/"} outputId="b02d5cb4-cbd1-4f08-ae29-1be7c5332c91" fit() loss_func(model(xb), yb.long()), accuracy(model(xb), yb) # + [markdown] id="yk8EQfB6xp8y" # ### nn.Sequential # + [markdown] id="twIiTxJAxp8y" # `nn.Sequential` is a convenient class which does the same as the above: # + [markdown] id="eWOOJnAXxp8y" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=3199) # + id="YVIaRdsPxp8z" model = nn.Sequential(nn.Linear(m,nh), nn.ReLU(), nn.Linear(nh,10)) # + id="V_079mLyxp8z" colab={"base_uri": "https://localhost:8080/"} outputId="3994c698-7ae4-47ee-f9cf-291c194a3ce8" fit() loss_func(model(xb), yb.long()), accuracy(model(xb), yb) # + id="fX3SUZc5xp8z" # nn.Sequential?? 
# + id="s1EvUMznxp81" colab={"base_uri": "https://localhost:8080/"} outputId="c5637639-9c72-4336-ff2d-686a0bcb4816" model # + [markdown] id="1o20CTI0xp82" # ### optim # + [markdown] id="_8edAASgxp82" # Let's replace our previous manually coded optimization step: # # ```python # with torch.no_grad(): # for p in model.parameters(): p -= p.grad * lr # model.zero_grad() # ``` # # and instead use just: # # ```python # opt.step() # opt.zero_grad() # ``` # + [markdown] id="lV7oQXBxxp83" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=3278) # + id="rz8BYVQkxp84" class Optimizer(): def __init__(self, params, lr=0.5): self.params,self.lr=list(params),lr def step(self): with torch.no_grad(): for p in self.params: p -= p.grad * lr def zero_grad(self): for p in self.params: p.grad.data.zero_() # + id="Rr97GcPPxp84" model = nn.Sequential(nn.Linear(m,nh), nn.ReLU(), nn.Linear(nh,10)) # + id="lh_YE0ogxp84" opt = Optimizer(model.parameters()) # + id="UOZPZ1rsxp85" for epoch in range(epochs): for i in range((n-1)//bs + 1): start_i = i*bs end_i = start_i+bs xb = x_train[start_i:end_i] yb = y_train[start_i:end_i] pred = model(xb) loss = loss_func(pred, yb.long()) loss.backward() opt.step() opt.zero_grad() # + id="1K8AuK0yxp85" colab={"base_uri": "https://localhost:8080/"} outputId="96477a22-437a-4d1e-cedc-81ccfdc7ad38" loss,acc = loss_func(model(xb), yb.long()), accuracy(model(xb), yb) loss,acc # + [markdown] id="vtEnqOylxp86" # PyTorch already provides this exact functionality in `optim.SGD` (it also handles stuff like momentum, which we'll look at later - except we'll be doing it in a more flexible way!) # + id="1EbDEWXsxp87" #export from torch import optim # + id="DVc-OgiExp87" # optim.SGD.step?? # + id="wJyAMVzbxp88" def get_model(): model = nn.Sequential(nn.Linear(m,nh), nn.ReLU(), nn.Linear(nh,10)) return model, optim.SGD(model.parameters(), lr=lr) # + id="fu_n5Z2zxp89" colab={"base_uri": "https://localhost:8080/"} outputId="8de39113-a531-428e-f89e-e70459000741" model,opt = get_model() loss_func(model(xb), yb.long()) # + id="XT0H_e6Vxp8-" for epoch in range(epochs): for i in range((n-1)//bs + 1): start_i = i*bs end_i = start_i+bs xb = x_train[start_i:end_i] yb = y_train[start_i:end_i] pred = model(xb) loss = loss_func(pred, yb.long()) loss.backward() opt.step() opt.zero_grad() # + id="Do5mQXdWxp8-" colab={"base_uri": "https://localhost:8080/"} outputId="4289d3b3-4da9-498d-b2b9-96e5e69cc7dc" loss,acc = loss_func(model(xb), yb.long()), accuracy(model(xb), yb) loss,acc # + [markdown] id="UVaK2RCHxp8_" # Randomized tests can be very useful. 
# + [markdown] id="TSDu_zpXxp8_" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=3442) # + id="3GQE3EvMxp9A" assert acc>0.7 # + [markdown] id="1PWzGE8Rxp9A" # ## Dataset and DataLoader # + [markdown] id="nz8ts0aZxp9A" # ### Dataset # + [markdown] id="hhmhOE2xxp9B" # It's clunky to iterate through minibatches of x and y values separately: # # ```python # xb = x_train[start_i:end_i] # yb = y_train[start_i:end_i] # ``` # # Instead, let's do these two steps together, by introducing a `Dataset` class: # # ```python # xb,yb = train_ds[i*bs : i*bs+bs] # ``` # + [markdown] id="U-gLw07Fxp9B" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=3578) # + id="QdCJfv5Zxp9B" #export class Dataset(): def __init__(self, x, y): self.x,self.y = x,y def __len__(self): return len(self.x) def __getitem__(self, i): return self.x[i],self.y[i] # + id="5wzhoIAExp9C" train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid) assert len(train_ds)==len(x_train) assert len(valid_ds)==len(x_valid) # + id="BxsllyZ8xp9C" colab={"base_uri": "https://localhost:8080/"} outputId="5a240c6a-3e35-40d2-8956-49fe539d3d28" xb,yb = train_ds[0:5] assert xb.shape==(5,28*28) assert yb.shape==(5,) xb,yb # + id="PNvAKEUAxp9C" model,opt = get_model() # + id="6y3T2ONQxp9D" for epoch in range(epochs): for i in range((n-1)//bs + 1): xb,yb = train_ds[i*bs : i*bs+bs] pred = model(xb) loss = loss_func(pred, yb.long()) loss.backward() opt.step() opt.zero_grad() # + id="eiGSBU34xp9D" colab={"base_uri": "https://localhost:8080/"} outputId="16d2e73b-d668-4c50-8025-5a3ae2ea8682" loss,acc = loss_func(model(xb), yb.long()), accuracy(model(xb), yb) assert acc>0.7 loss,acc # + [markdown] id="0fgmSsJdxp9D" # ### DataLoader # + [markdown] id="QlMsRaK_xp9D" # Previously, our loop iterated over batches (xb, yb) like this: # # ```python # for i in range((n-1)//bs + 1): # xb,yb = train_ds[i*bs : i*bs+bs] # ... # ``` # # Let's make our loop much cleaner, using a data loader: # # ```python # for xb,yb in train_dl: # ... # ``` # + [markdown] id="Ka7lGsJoxp9E" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=3674) # + id="DcOXwOi-xp9F" class DataLoader(): def __init__(self, ds, bs): self.ds,self.bs = ds,bs def __iter__(self): for i in range(0, len(self.ds), self.bs): yield self.ds[i:i+self.bs] # + id="z1FCkkcdxp9F" train_dl = DataLoader(train_ds, bs) valid_dl = DataLoader(valid_ds, bs) # + id="dMXh88dvxp9G" xb,yb = next(iter(valid_dl)) assert xb.shape==(bs,28*28) assert yb.shape==(bs,) # + id="mWnYMKDhxp9G" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="12b6abb1-0f8c-4dac-ba8c-0ffbcee1bd21" plt.imshow(xb[0].view(28,28)) yb[0] # + id="mekhrFC3xp9G" model,opt = get_model() # + id="jIu_Wvwkxp9H" def fit(): for epoch in range(epochs): for xb,yb in train_dl: pred = model(xb) loss = loss_func(pred, yb.long()) loss.backward() opt.step() opt.zero_grad() # + id="JIFF7_-xxp9H" fit() # + id="RH5KjHAaxp9H" colab={"base_uri": "https://localhost:8080/"} outputId="6b7502d5-445e-465e-a214-710cb6fa098d" loss,acc = loss_func(model(xb), yb.long()), accuracy(model(xb), yb) assert acc>0.7 loss,acc # + [markdown] id="-9uDi1fcxp9H" # ### Random sampling # + [markdown] id="JRARDaX9xp9P" # We want our training set to be in a random order, and that order should differ each iteration. But the validation set shouldn't be randomized. 
# + [markdown] id="MUSSX0iNxp9P" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=3942) # + id="mgUen1XGxp9Q" class Sampler(): def __init__(self, ds, bs, shuffle=False): self.n,self.bs,self.shuffle = len(ds),bs,shuffle def __iter__(self): self.idxs = torch.randperm(self.n) if self.shuffle else torch.arange(self.n) for i in range(0, self.n, self.bs): yield self.idxs[i:i+self.bs] # + id="nV78WqQbxp9Q" small_ds = Dataset(*train_ds[:10]) # + id="Y-wrj8hrxp9Q" colab={"base_uri": "https://localhost:8080/"} outputId="423bda15-8805-4037-e7b6-65a7575a7933" s = Sampler(small_ds,3,False) [o for o in s] # + id="cAyM-UXDxp9R" colab={"base_uri": "https://localhost:8080/"} outputId="8f96ed10-0c06-4f7a-ced1-08591b7ad9ff" s = Sampler(small_ds,3,True) [o for o in s] # + id="Xf0r66xbxp9R" def collate(b): xs,ys = zip(*b) return torch.stack(xs),torch.stack(ys) class DataLoader(): def __init__(self, ds, sampler, collate_fn=collate): self.ds,self.sampler,self.collate_fn = ds,sampler,collate_fn def __iter__(self): for s in self.sampler: yield self.collate_fn([self.ds[i] for i in s]) # + id="JfhQOgsVxp9S" train_samp = Sampler(train_ds, bs, shuffle=True) valid_samp = Sampler(valid_ds, bs, shuffle=False) # + id="GZpHj2oqxp9S" train_dl = DataLoader(train_ds, sampler=train_samp, collate_fn=collate) valid_dl = DataLoader(valid_ds, sampler=valid_samp, collate_fn=collate) # + id="qG_iithdxp9S" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="6367fbae-db63-4292-8db2-21ecef0bbbe4" xb,yb = next(iter(valid_dl)) plt.imshow(xb[0].view(28,28)) yb[0] # + id="wDtE1Hlgxp9S" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="f64d8585-9b27-4833-915c-3569b6e0bc0c" xb,yb = next(iter(train_dl)) plt.imshow(xb[0].view(28,28)) yb[0] # + id="fkS0eizexp9T" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="51ad1749-fbb5-42da-adb4-2f649e7adecc" xb,yb = next(iter(train_dl)) plt.imshow(xb[0].view(28,28)) yb[0] # + id="pwxmZdIBxp9U" colab={"base_uri": "https://localhost:8080/", "height": 449} outputId="975b6c12-4885-496a-9318-90d8529be2e9" model,opt = get_model() fit() loss,acc = loss_func(model(xb), yb), accuracy(model(xb), yb) assert acc>0.7 loss,acc # + [markdown] id="BkOcYVz5xp9U" # ### PyTorch DataLoader # + [markdown] id="Ek63kxvpxp9U" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=4171) # + id="8heUT6Qaxp9V" #export from torch.utils.data import DataLoader, SequentialSampler, RandomSampler # + id="Mc6PTXcIxp9V" train_dl = DataLoader(train_ds, bs, sampler=RandomSampler(train_ds), collate_fn=collate) valid_dl = DataLoader(valid_ds, bs, sampler=SequentialSampler(valid_ds), collate_fn=collate) # + id="URCJObCIxp9V" colab={"base_uri": "https://localhost:8080/", "height": 413} outputId="b316b685-e677-440a-a87a-179efba49375" model,opt = get_model() fit() loss_func(model(xb), yb), accuracy(model(xb), yb) # + [markdown] id="RV_tHff0xp9W" # PyTorch's defaults work fine for most things however: # + id="m1-XnjlYxp9W" train_dl = DataLoader(train_ds, bs, shuffle=True, drop_last=True) valid_dl = DataLoader(valid_ds, bs, shuffle=False) # + id="hldX-jfTxp9W" colab={"base_uri": "https://localhost:8080/", "height": 449} outputId="1e61d521-5d11-4296-f054-d4b6da994933" model,opt = get_model() fit() loss,acc = loss_func(model(xb), yb), accuracy(model(xb), yb) assert acc>0.7 loss,acc # + [markdown] id="4wHMsZhExp9W" # Note that PyTorch's `DataLoader`, if you pass `num_workers`, will use multiple threads to call your `Dataset`. 
# + [markdown] id="IZdHSkBbxp9X" # ## Validation # + [markdown] id="KzRnuK0dxp9X" # You **always** should also have a [validation set](http://www.fast.ai/2017/11/13/validation-sets/), in order to identify if you are overfitting. # # We will calculate and print the validation loss at the end of each epoch. # # (Note that we always call `model.train()` before training, and `model.eval()` before inference, because these are used by layers such as `nn.BatchNorm2d` and `nn.Dropout` to ensure appropriate behaviour for these different phases.) # + [markdown] id="gG-PcbGPxp9X" # [Jump_to lesson 9 video](https://course.fast.ai/videos/?lesson=9&t=4260) # + id="gZpIe-Jqxp9X" def fit(epochs, model, loss_func, opt, train_dl, valid_dl): for epoch in range(epochs): # Handle batchnorm / dropout model.train() # print(model.training) for xb,yb in train_dl: loss = loss_func(model(xb), yb.long()) loss.backward() opt.step() opt.zero_grad() model.eval() # print(model.training) with torch.no_grad(): tot_loss,tot_acc = 0.,0. for xb,yb in valid_dl: pred = model(xb) tot_loss += loss_func(pred, yb.long()) tot_acc += accuracy (pred,yb) nv = len(valid_dl) print(epoch, tot_loss/nv, tot_acc/nv) return tot_loss/nv, tot_acc/nv # + [markdown] id="ZU-f2jwTxp9Y" # *Question*: Are these validation results correct if batch size varies? # + [markdown] id="kavB4SYyxp9Y" # `get_dls` returns dataloaders for the training and validation sets: # + id="Ze8BCcJrxp9Z" #export def get_dls(train_ds, valid_ds, bs, **kwargs): return (DataLoader(train_ds, batch_size=bs, shuffle=True, **kwargs), DataLoader(valid_ds, batch_size=bs*2, **kwargs)) # + [markdown] id="gLfgd85jxp9Z" # Now, our whole process of obtaining the data loaders and fitting the model can be run in 3 lines of code: # + id="1xdz5p95xp9a" colab={"base_uri": "https://localhost:8080/"} outputId="0b4b7c1d-dcf1-4070-f5a1-8c9325b72364" train_dl,valid_dl = get_dls(train_ds, valid_ds, bs) model,opt = get_model() loss,acc = fit(5, model, loss_func, opt, train_dl, valid_dl) # + id="TM7lifJHxp9a" assert acc>0.9 # + [markdown] id="I-fmPWSBxp9b" # ## Export # + id="LWl4Vw_Txp9b" outputId="21bd0810-c37a-4d12-9157-d733a96c0e26" # !python notebook2script.py 03_minibatch_training.ipynb # + id="OxGTEBHtxp9c" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.4 64-bit (''cindy'': virtualenv)' # language: python # name: python37464bitcindyvirtualenveabd95eadb5f430ab6f2454348bea7d0 # --- import pandas as pd # read in the cities data data=pd.read_csv("Resources/cities.csv") weather_data = data.set_index("City_ID") weather_data weather_data.to_html('raw_data.html') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="Y7JhFHZiisse" # + [markdown] id="j7HxRKLBcTel" # # using BeautifulSoup to Extract Data from Webpage # # # # + id="wjlG4uoYZVtU" colab={"base_uri": "https://localhost:8080/"} outputId="50677183-8312-4682-9803-181d38e07ea3" # !pip install requests --upgrade --quiet # + id="EmjrvYO20oAN" import requests # + id="Eu6h4Vrr0r3f" topics_url = 'https://www.myupchar.com/en/search?term=Paracetamol%20&filter=medicine' # + id="cTJX24WR3WGJ" response= requests.get(topics_url) # + colab={"base_uri": "https://localhost:8080/"} id="u-96p1re3lMD" 
outputId="84c4aa86-69ef-4e50-ae72-18901b2c317d" response.status_code # + colab={"base_uri": "https://localhost:8080/"} id="CV-4vc5n3rGF" outputId="fcd8c366-7dac-4537-8acc-eaa25cd69323" len(response.text) # + id="V5Eqkm9J4BpA" page_contents= response.text # + colab={"base_uri": "https://localhost:8080/", "height": 137} id="vzkwkfwJ4USn" outputId="3d2f2b85-0bdf-4536-9ec6-1bc9f9353238" page_contents[:1000] # + id="oox5rZH-4XTN" with open('webpage.html', 'w') as f: f.write(page_contents) # + colab={"base_uri": "https://localhost:8080/"} id="D7AEh5jW5Vwy" outputId="417e51a1-97c9-44d6-c28b-3b2cd70528d7" # !pip install beautifulsoup4 --upgrade --quiet # + id="EFjbmpkb4itL" from bs4 import BeautifulSoup # + id="4xFUWcwp4ivM" doc= BeautifulSoup(page_contents,'html.parser') # + id="lP0_qFXt4i1A" title_tag= doc.find_all('h5') # + colab={"base_uri": "https://localhost:8080/"} id="m9ESkklj4i33" outputId="0effb3ac-a63d-4a1e-8931-70800cf49123" title_tag # + colab={"base_uri": "https://localhost:8080/"} id="NxbaSgao4i6t" outputId="fd4d229d-5dc3-465d-8034-582a406279e5" title_tag titles= [] for tag in title_tag: titles.append(tag.text) print(titles[:]) # + id="0K2MtXwh4i9c" colab={"base_uri": "https://localhost:8080/"} outputId="997e3dfb-3cc0-409c-dce4-4cd6864f4192" # + colab={"base_uri": "https://localhost:8080/"} id="dFoqtMqj4i_L" outputId="f509780e-20bd-443e-fe06-71f3699dea25" len(titles) # + [markdown] id="lVvVYxnW_HK5" # topic urls # + id="T26tRE1F-I5e" link_tags= titles # + id="FyyGtCdL_FZm" colab={"base_uri": "https://localhost:8080/", "height": 163} outputId="8e40944f-8569-42e8-ee23-7098538fd9d4" div_tags=title_tags0.parent.parent # + id="kiCqWtV-_Ff6" link_tags= doc.find_all('a') # + id="7-K-m2aqoaRS" title_tag1=doc.find_all('a', {'class': 'surgery-section-bx'}) # + id="uSOR9a-H_FiE" selection_class = 'surgery-section-bx' title_tags= doc.find_all('a', {'class': selection_class}) # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="biD9sXwQ_Fk2" outputId="ae56f5a7-df57-4e77-a3fd-44e1bec159de" title_tags[0]['href'] # + id="VSMUWo7M_FnA" topic_urls0='https://www.myupchar.com/' + title_tags[0]['href'] # + id="aZYVrBkvyT1r" # + colab={"base_uri": "https://localhost:8080/"} id="slZR478X_Fq-" outputId="51be9426-cd4b-48d6-b043-6198506cd8d7" len(topic_urls0) # + colab={"base_uri": "https://localhost:8080/"} id="bJDkL1s6CH_S" outputId="1ae7eed7-2756-455e-b052-9f51ab47ee08" print(topic_urls0) # + id="8mOAi7B1CM8p" upchar_urls = [] base_url = 'https://www.myupchar.com/' for tag in title_tags: upchar_urls.append(base_url + tag['href']) # + colab={"base_uri": "https://localhost:8080/"} id="HojjVj1FCaeg" outputId="46801ee4-919f-4d96-d519-a50b766d2631" upchar_urls # + colab={"base_uri": "https://localhost:8080/"} id="7nzc88z6idrV" outputId="ed1819d4-cb46-4420-ede8-e7bdb4f0c2ef" len(titles) # + id="cORBLxB82GC2" item_dict= {'Brand Name': titles , 'URL': upchar_urls} # + colab={"base_uri": "https://localhost:8080/"} id="jlPNA3jL2eBT" outputId="312e77c5-79b9-4bcf-d2d0-fd8cbd04cb2f" titles # + id="_P9zdkuk15Qs" ext= pd.DataFrame(item_dict) # + colab={"base_uri": "https://localhost:8080/", "height": 793} id="bSWyOQwG4tHm" outputId="4623b1a3-72b1-4f01-a1a2-34e798898dd1" ext # + id="5nS6QrX3zHdq" from bs4 import BeautifulSoup doc= BeautifulSoup(page_contents,'html.parser') # + id="1_MpxMtQzHjp" # + id="HcRFqCk2zHms" # + id="-F5RblvczHo6" # + id="2e-ZF-FUzHrT" # + id="hXDbiJ2szHuR" # + id="a7JFl9iSzHwy" # + id="9nfgJn5dzH00" # + id="bpwnaxiRzH2u" # + id="1r_c4r9pidrX" # + id="Vrmzse3GckRt" # + 
id="g1v9nzLNckUT" # + id="_JXtLwMxckXg" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: angr # language: python # name: angr # --- # + import angr import angrcli.plugins.context_view import angrcli.plugins.watches from angrcli.interaction.explore import ExploreInteractive #import angr.sim_state #angr.sim_state.SimState.__repr__ = lambda self: self.context_view.pprint() proj = angr.Project("./memview.elf", load_options={'auto_load_libs': False}) cfg = proj.analyses.CFGFast() state = proj.factory.entry_state() e = ExploreInteractive(proj, state) # + # Task 1 # Find the addr of the global struct # angr has a knowledge base plugin to parse specific symbols into memory labels, struct_addr = proj.kb.labels.lookup('global_struct') # - angr.types.define_struct(""" struct example_struct { int f_integer; char f_string[4]; char* f_pointer; }; """) state.watches.add_watch(lambda state: state.mem[struct_addr].example_struct, 'struct watch') # + #e.state.mem[proj.kb.labels.lookup('global_struct')].example_struct.f_pointer.deref.string.concrete # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # [Datashader](http://datashader.org) renders data into regularly sampled arrays, a process known as [rasterization](https://en.wikipedia.org/wiki/Rasterisation), and then optionally converts that array into a viewable image (with one pixel per element in that array). # # In some cases, your data is *already* rasterized, such as data from imaging experiments, simulations on a regular grid, or other regularly sampled processes. Even so, the rasters you have already are not always the ones you need for a given purpose, having the wrong shape, range, or size to be suitable for overlaying with or comparing against other data, maps, and so on. Datashader provides fast methods for ["regridding"](https://climatedataguide.ucar.edu/climate-data-tools-and-analysis/regridding-overview)/["re-sampling"](http://gisgeography.com/raster-resampling/)/"re-rasterizing" your regularly gridded datasets, generating new rasters on demand that can be used together with those it generates for any other data types. Rasterizing into a common grid can help you implement complex cross-datatype analyses or visualizations. # # In other cases, your data is stored in a 2D array similar to a raster, but represents values that are not regularly sampled in the underlying coordinate space. Datashader also provides fast methods for rasterizing these more general rectilinear or curvilinear grids, known as [quadmeshes](#Quadmesh-Rasterization) as described later below. Fully arbitrary unstructured grids ([Trimeshes](Trimesh.ipynb)) are discussed separately. 
# # ## Re-rasterization # # First, let's explore the regularly gridded case, declaring a small raster using Numpy and wrapping it up as an [xarray](http://xarray.pydata.org) DataArray for us to re-rasterize: # + import numpy as np, datashader as ds, xarray as xr from datashader import transfer_functions as tf, reductions as rd def f(x,y): return np.cos((x**2+y**2)**2) def sample(fn, n=50, range_=(0.0,2.4)): xs = ys = np.linspace(range_[0], range_[1], n) x,y = np.meshgrid(xs, ys) z = fn(x,y) return xr.DataArray(z, coords=[('y',ys), ('x',xs)]) da = sample(f) # - # Here we are declaring that the first dimension of the ``DataArray`` (the rows) is called ``y`` and corresponds to the indicated continuous coordinate values in the list ``ys``, and similarly for the second dimension (the columns) called ``x``. The coords argument is optional, but without it the default integer indexing from Numpy would be used, which would not match how this data was generated (sampling over each of the ``ys``). # # DataArrays like `da` happen to be the format used for datashader's own rasterized output, so you can now immediately turn this hand-constructed array into an image using ``tf.shade()`` just as you would for points or lines rasterized by datashader: tf.shade(da) # ## Interpolation (upsampling) # # So, what if we want a larger version? We can do that with the Datashader `Canvas.raster` method. Note that `Canvas.raster` only supports regularly spaced rectilinear, 2D or 3D ``DataArray``s, and so will not accept the additional dimensions or non-separable coordinate arrays that xarray allows (for which see [Quadmesh](#Quadmesh-rasterization), below). Also see the Quadmesh docs below if you want to use a [GPU](https://datashader.org/user_guide/Performance.html#Data-objects) for processing your raster data, because the Datashader `Canvas.raster` implementation does not yet support GPU arrays. # # Assuming you are ready to use `Canvas.raster`, let's try upsampling with either nearest-neighbor or bilinear interpolation (the default): tf.Images(tf.shade(ds.Canvas().raster(da, interpolate='linear'), name="linear interpolation (default)"), tf.shade(ds.Canvas().raster(da, interpolate='nearest'), name="nearest-neighbor interpolation")) # As you can see, linear interpolation works well for smoothly varying values, like those in the lower left of this function, but doesn't help much for regions close to or beyond the sampling limits of the original raster, like those in the upper right. # # We can choose whatever output size we like and sample the grid over any range we like, though of course there won't be any data in regions outside of the original raster grid: tf.shade(ds.Canvas(plot_height=200, plot_width=600, x_range=(-2,5), y_range=(-0.1,0.4)).raster(da)) # ## Aggregation (downsampling) # # The previous examples all use upsampling, from a smaller to a larger number of cells per unit distance in X or Y. Downsampling works just as for points and lines in Datashader, supporting various aggregation functions. These aggregation functions determine the result when more than one raster grid cell falls into a given pixel's region of the plane. 
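# Before the Datashader examples that follow, here is a plain-NumPy sketch of the idea (an illustration only, not Datashader's internal implementation): when a 2x2 block of raster cells falls into a single output pixel, "mean" aggregation assigns that pixel the average of the block. The helper `block_mean` and the 4x4 test array are invented for this example.
# +
import numpy as np

def block_mean(arr, factor):
    """Average non-overlapping factor x factor blocks of a 2D array."""
    h, w = arr.shape
    trimmed = arr[:h - h % factor, :w - w % factor]
    return trimmed.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

block_mean(np.arange(16, dtype=float).reshape(4, 4), 2)  # each output value averages a 2x2 block
# -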
To illustrate downsampling, let's first render a 500x500 version of the above 50x50 array: da2 = sample(f, n=500) tf.shade(da2) # You can see that the version we upsampled to this size from the 50x50 samples is similar to this, but this one is much smoother and more accurately represents the underlying mathematical function, because it has sufficient resolution to avoid aliasing in the high-frequency portions of this function (towards the upper right). # # Now that we have this larger array, we can downsample it using a variety of aggregation functions: cvs = ds.Canvas(plot_width=150, plot_height=150) tf.Images(tf.shade(cvs.raster(da2), name="mean downsampling (default)"), tf.shade(cvs.raster(da2, agg=rd.min()), name="min downsampling"), tf.shade(cvs.raster(da2, agg=rd.max()), name="max downsampling"), tf.shade(cvs.raster(da2, agg=rd.mode()), name="mode downsampling"), tf.shade(cvs.raster(da2, agg=rd.std()), name="std downsampling")) # Here the default downsampling function ``mean`` renders a faithful size-reduced version of the original, with all the raster grid points that overlap a given pixel being averaged to create the final pixel value. The ``min`` and ``max`` aggregation functions take the minimum or maxiumum, respectively, of the values overlapping each pixel, and you can see here that the ``min`` version has larger light-blue regions towards the upper right (with each pixel reflecting the minimum of all grid cells it overlaps), while the ``max`` version has larger dark-blue regions towards the upper right. The ``mode`` version computes the most common value that overlaps this pixel (not very useful for floating-point data as here, but important for categorical data where ``mean`` would not be valid; in that case you can also use `first` or `last` to take the first or last value found for a given pixel). The ``std`` version reports the standard deviation of the grid cells in each pixel, which is low towards the lower left where the function is smooth, and increases towards the upper right, where the function value varies significantly per pixel (i.e., has many samples in the original grid with different values). # # The differences between min and max are clearer if we look at a regime where the function varies so much that it can only barely be faithfully be represented in a grid of this size: da3 = sample(f, n=500, range_=(0,3)) tf.shade(da3) # In the upper right of this plot, you can see that the function is varying with such high frequency that any downsampled version will fail to represent it properly. The aggregation functions in Datashader can help you see when that is happening: cvs = ds.Canvas(plot_width=150, plot_height=150) tf.Images(tf.shade(cvs.raster(da3), name="mean downsampling (default)"), tf.shade(cvs.raster(da3, agg=rd.min()), name="min downsampling"), tf.shade(cvs.raster(da3, agg=rd.max()), name="max downsampling"), tf.shade(cvs.raster(da3, agg=rd.mode()),name="mode downsampling"), tf.shade(cvs.raster(da3, agg=rd.std()), name="std downsampling")) # Here you can see that the ``mean`` downsampling looks like a good approximation to the original array, locally averaging the original function values in each portion of the array. However, if you were to zoom in and adjust for contrast, you would be able to see some of the inevitable aliasing artifacts that occur for any such representation in a too-small array. 
These aliasing effects are clearly visible in the ``min`` and ``max`` aggregation, because they keep the local minimum or maxiumum rather than averaging out the artifacts. Comparing ``mean`` and ``min`` or ``max`` (or subtracting ``min`` from ``max``) can help find regions of the array that are being poorly represented in the current view. # # ## Collections of raster data # # The above examples all use a single 2D DataArray. Datashader's `raster` support can also use a 3D DataArray as long as which "layer" along this third dimension is wanted is specified: # + def g(x, y, k=1): return np.cos(k*((x**2+y**2)**2)) ks = [1.1,3.3,33] xs = ys = np.linspace(0.0, 2.4, 200) y,k,x = np.meshgrid(ys, ks, xs, indexing='xy') da4 = xr.DataArray(g(x,y,k), coords=[('k',ks), ('y',ys), ('x',xs)]) tf.Images(tf.shade(ds.Canvas().raster(da4, layer=1.1), name="k=1.1"), tf.shade(ds.Canvas().raster(da4, layer=3.3), name="k=3.3"), tf.shade(ds.Canvas().raster(da4, layer=33), name="k=33")) # - # Here the factor "k" results in the function being evaluated at increasingly higher frequencies, eventually leading to complex [Moiré patterns](https://en.wikipedia.org/wiki/Moir%C3%A9_pattern) due to undersampling for k=33. 3D xarrays are useful for reading multi-layer image files, such as those from [xarray's rasterio-based support for reading from multilayer TIFFs](http://xarray.pydata.org/en/stable/io.html#rasterio). # # Similarly, Datashader can accept an xarray Dataset (a dictionary-like collection of aligned DataArrays), as long as the aggregation method specifies a suitable DataArray within the Dataset: # + def h(x,y): return np.sin((x**2+y**2)**2) def m(x,y): return np.exp(x+y) dd = xr.Dataset({'cos': sample(f, n=150), 'sin': sample(h, n=150), 'exp': sample(m, n=150)}) tf.Images(tf.shade(ds.Canvas().raster(dd, agg=rd.mean('cos')), name='cos ((x^2+y^2)^2)'), tf.shade(ds.Canvas().raster(dd, agg=rd.mean('sin')), name='sin ((x^2+y^2)^2)'), tf.shade(ds.Canvas().raster(dd, agg=rd.mean('exp')), name='exp (x+y)')) # - # ## Scaling up # # The above rasters are all relatively tiny, for illustration purposes, but Datashader's raster support is accelerated using multi-core machine-code generation from Numba, and can re-sample much larger rasters easily. For instance, rendering the above function into a 10000x10000 raster takes a few seconds on a four-core machine using Numpy, but it can then be re-sampled using Datashader in under 0.1 sec: # %time da5 = sample(m, n=10000, range_=(0,3)) # %time tf.shade(cvs.raster(da5)) # ## Interactivity # # In practice, it is usually a good idea to view your data interactively at many different zoom levels to see all the data available and to detect any sampling or aliasing issues, which you can do with Datashader as described in [3_Interactivity](../getting_started/3_Interactivity.ipynb). For such cases, you can define both upsampling and downsampling methods at the same time; whichever one is needed for a given zoom level and range of the data will be applied. # # You can see Datashader rasters at work in the [Landsat](https://examples.pyviz.org/landsat/landsat.html) example notebook, which also has examples of reading raster data from TIFF files using xarray and rasterio: # # # ## Quadmesh Rasterization # The `Canvas.raster` method described above supports re-rasterizing grids that are already regularly spaced. 
For the more general case of rectilinear and curvilinear grids, which are structured (unlike [Trimesh grids](Trimesh.ipynb)) but not necessarily regularly spaced, Datashader provides the `Canvas.quadmesh` method. Despite its similarities with `raster`, `quadmesh` is currently implemented separately from `raster`, and thus has quite different features and limitations. For instance, `quadmesh` supports [GPU](https://datashader.org/user_guide/Performance.html#Data-objects) arrays (including an optimized code path specifically speeding up regularly spaced rasters if they are provided), while `raster` offers optional interpolation and supports distributed computing using Dask, and many other small differences exist (see the docs for each method). So if you have a regularly spaced array, you will probably want to try both `quadmesh` and `raster` and see which one meets your needs. # # ### Rectilinear Quadmesh # A rectilinear quadmesh is specified as a 2-dimensional xarray `DataArray` with two 1-dimensional coordinates. The coordinates specify the center position of the quads in each row or column of the grid, allowing the spacing on x and y to be irregular, but leaving the two dimensions orthogonal (and thus always producing an axis-aligned rectangular image out of the provided rectangular array, with each patch also rectangular and axis aligned). # + da = xr.DataArray( [[1, 2, 3], [4, 5, 6], [7, 8, 9]], coords=[('y', [1, 6, 7]), ('x', [1, 2, 7])], name='Z') canvas = ds.Canvas() tf.shade(canvas.quadmesh(da, x='x', y='y')) # - # Because the quads may vary in size, a given quadmesh rasterization call may require a combination of upscaling and downscaling at different locations in the grid. `Canvas.raster`, in contrast, is always either upscaling or downscaling in a given call, never both. Quadmesh upscaling is performed with nearest-neighbor interpolation, and downscaling is performed using the aggregation function supplied as the `agg` argument to `Canvas.quadmesh`. If not supplied, the aggregation function defaults to `mean`. # + def rect_data(n): xs = np.linspace(0, 3, n) ** 2 ys = np.arange(n) zs = np.sin(xs * ys[:, np.newaxis]) da = xr.DataArray(zs, coords=[('y', ys), ('x', xs)], name='Z') return da canvas = ds.Canvas(plot_width=150, plot_height=150) tf.Images(*[tf.shade(canvas.quadmesh(rect_data(n),x='x', y='y', agg=ds.mean('Z')), name=str(n)) for n in [20, 80, 320, 1280]]) # - # ### Curvilinear Quadmesh # A curvilinear quadmesh is specified as an 2-dimensional xarray `DataArray` with 2-dimensional coordinates. When upsampling such a mesh, each array element will map to a quadrilateral patch (a "quad"), which need not be rectangular or axis aligned. The coordinates specify the center position of each quad in the grid, allowing the quadmesh to map from the underlying 2D array into any arbitrary overall shape and with arbitrarily varying grid spacing. # + Qy = [[1, 2, 4], [1, 2, 3], [1, 2, 3]] Qx = [[1, 1, 1], [2, 2, 2], [2.5, 3, 3]] Z = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] da = xr.DataArray(Z, name='Z', dims = ['y', 'x'], coords={'Qy': (['y', 'x'], Qy), 'Qx': (['y', 'x'], Qx)}) canvas = ds.Canvas() tf.shade(canvas.quadmesh(da, x='Qx', y='Qy')) # - # As with the rectilinear quadmesh, upscaling is performed with nearest-neighbor interpolation, and downscaling is performed using the aggregation function supplied as the `agg` argument to `Canvas.quadmesh` (thus combining the values from multiple quads into a given pixel). If not supplied, the aggregation function defaults to `mean`. 
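# As a hedged sketch (reusing the small curvilinear `da` defined just above), an explicit aggregation such as `ds.max('Z')` can be supplied instead of relying on the default `mean`; the tiny 2x2 Canvas here is an arbitrary choice that forces downscaling so the aggregation function actually comes into play.
# +
small_canvas = ds.Canvas(plot_width=2, plot_height=2)
tf.shade(small_canvas.quadmesh(da, x='Qx', y='Qy', agg=ds.max('Z')))
# -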
# + def curve_data(n): coords = np.linspace(-1.5,1.5,n) X,Y = np.meshgrid(coords, coords); Qx = np.cos(Y) - np.cos(X) Qy = np.sin(Y) + np.sin(X) Z = np.sqrt(X**2 + Y**2) return xr.DataArray(Z, name='Z', dims=['Y', 'X'], coords={'Qx': (['Y', 'X'], Qx), 'Qy': (['Y', 'X'], Qy)}) tf.Images(*[tf.shade(canvas.quadmesh(curve_data(n), x='Qx', y='Qy', agg=ds.mean('Z')), name=str(n)) for n in [8, 16, 32, 64]]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import to_rgba from smtools import sm_sort, tros_ms, lc_cmap # + x = np.linspace(0, 100, 61) y1 = x + 0.0001*x**3+15 i1 = x/100 y2 = x[::-1] + 2 i2 = 1 - x / 100 y3 = x + 40.5 i3 = np.sin(x/10)**2.0 dat = np.array([y1, y2, y3]).T intensity = np.array([i1, i2, i3]).T # Stark map eigenvalues are usually sorted, obscuring exact crossings. dat, intensity = tros_ms(dat, intensity) fig, ax = plt.subplots() for ix in range(3): ax.scatter(x, dat[:,ix], marker='.', s=600*intensity[:, ix]**2.0) plt.show() # + dat2, intensity2 = sm_sort(dat, intensity) fig, ax = plt.subplots() for ix in range(3): ax.scatter(x, dat2[:,ix], marker='.', s=600*intensity2[:, ix]**2.0) plt.show() # + fig, ax = plt.subplots() for ix in range(3): #cs = np.asarray([to_rgba(col['color'], a) for a in zs]) #RGBalpha alpha = np.clip(intensity2[:, ix], 0, 1) cs = np.asarray([to_rgba('C%d'%ix, a) for a in alpha]) #RGBalpha lc = lc_cmap(x, dat2[:,ix], cs) lc.set_linewidth(2) ax.add_collection(lc) ax.set_xlim(-10, 110) ax.set_ylim(-10, 220) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Environment (conda_amazonei_tensorflow2_p36) # language: python # name: conda_amazonei_tensorflow2_p36 # --- # + import numpy as np import cv2 as cv import os import pandas as pd from matplotlib import pyplot as plt # import pickle to save image data import pickle # to use all cpu cores we use Pool from multiprocessing from multiprocessing import Pool # - # ## load data details # + ground_truth_df = None duplicate_images_df = None with open( os.path.join('.', 'ISIC_2020_Training_GroundTruth_v2.csv') ) as fd: ground_truth_df = pd.read_csv(fd, encoding='utf8') with open( os.path.join('.', 'ISIC_2020_Training_Duplicates.csv')) as fd: duplicate_images_df = pd.read_csv(fd, encoding='utf8') # - # ## helper functions # + def scale_img(img_data, height=240, width=320): dim = (width, height) new_img = cv.resize(img_data, dim) return new_img def change_color_rgb_to_gray(img_data): gray = cv.cvtColor(img_data, cv.COLOR_RGB2GRAY) return gray def gray_scale_img(img_data): gray = change_color_rgb_to_gray(img_data) return scale_img(gray) def load_img(image_name): return cv.imread(os.path.join('.', 'images', f"{image_name}.jpg")) # - # ## drop duplicate images ground_truth_df = ground_truth_df.set_index('image_name').drop(index=duplicate_images_df['image_name_2']).reset_index() # ## save images to numpy array # ### scale down images to `height=240`, `width=320` and load them into tuple list `[(img_name, img_data)]` image_name = 'ISIC_8924942' img = cv.imread(os.path.join('.', 'images', f"{image_name}.jpg")) plt.imshow(scale_img(img)) print(scale_img(img).shape) # + def scale_load(img_name): return (img_name, 
scale_img(load_img(img_name))) with Pool(16) as p: image_data = p.map(scale_load, ground_truth_df['image_name']) # - # ### create dataframe with image data # + image_data_df = [[img_name, data] for img_name, data in image_data] # save image_data_df if necessary # open a file where data need to be stored #file = open('image_data_df.pkl', 'wb') # dump information to the file #pickle.dump(image_data_df, file) # close the file #file.close() # - images_df = pd.DataFrame(image_data_df, columns=['image_name', 'image_data']) images_df.head(1) # ### join dataframe with `ground_truth` details dataset_df = pd.merge(ground_truth_df, images_df, left_on='image_name', right_on='image_name') dataset_df.head(2) dataset_df.to_pickle('dataset_df.pickle') # ## load dataframe if starting notebook dataset_df = pd.read_pickle('dataset_df.pickle') dataset_df.head(2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd """ 使用pandas的read_excel函数, 读取bitcoin.xlsx文件, 并将文件数据赋值给变量df。 """ df = pd.read_excel('../datasets/bitcoin/bitcoin.xlsx') # 查看df df # + """ 选择df的前10行数据,并选择前5列,构成df_part """ df_part = df[:10][df.columns[0:5]] # 查看df_part df_part # + """ 将df_part的数据, 按照excel的格式写入到bitcoin_part.xlsx文件。 """ df_part.to_excel('../datasets/bitcoin/bitcoin_part.xlsx', index=False) """ 再次使用pandas的read_excel函数, 读取bitcoin_part.xlsx文件, 并将文件数据赋值给变量df_new。 """ df_new = pd.read_excel('../datasets/bitcoin/bitcoin_part.xlsx') # 校验df_new与df_part是否一致。s df_new # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt plt.style.use('classic') # %matplotlib inline # # Class 8: Deterministic Time Series Models II # ## Nonlinear First-Order Difference Equations # # Recall the Solow growth model with no exogenous productivity growth (written in "per worker" terms): # # \begin{align} # y_t & = Ak_t^{\alpha}\\ # k_{t+1} & = i_t + (1-\delta)k_{t}\\ # y_t & = c_t + i_t\\ # c_t & = (1-s)y_t, # \end{align} # # where $y_t$ is output per worker, $k_t$ is capital per worker, $c_t$ is consumption per worker, and $i_t$ is investment per worker. We've seen that by treating investment as an exogenous quantity, then the capital accumulation equation could be viewed as a linear first-order difference equation. But in the Solow model, investment is *not* exogenous and is equal to the savings rate times output per worker: # # \begin{align} # i_t & = sy_t = sAk_t^{\alpha} # \end{align} # # Therefore the equilibrium law of motion for capital per worker is: # # \begin{align} # k_{t+1} & = sAk_t^{\alpha} + (1-\delta)k_{t}, \label{eqn:capital_solution} # \end{align} # # which is a *nonlinear* first-order difference equation. In equilibrium capital in period $t+1$ is a *concave* (i.e., increasing at a decreasing rate) function of capital in period $t$. We can iterate on the nonlinear law of motion just like we iterated on the linear difference equation. 
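# As a minimal sketch of that iteration (using the parameter values from the table below: $k_0=10$, $s=0.1$, $A=10$, $\alpha=0.35$, $\delta=0.1$, $T=101$):
# +
# Iterate k[t+1] = s*A*k[t]**alpha + (1-delta)*k[t]
k0, s, A, alpha, delta, T = 10, 0.1, 10, 0.35, 0.1, 101

k = np.zeros(T)
k[0] = k0
for t in range(T-1):
    k[t+1] = s*A*k[t]**alpha + (1-delta)*k[t]

k[-1]   # approaches the steady state of roughly 34.55
# -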
#
# Let's suppose the following values for the simulation:
#
# | $k_0$ | $s$ | $A$ | $\alpha$ | $\delta$ | $T$ |
# |-------|-----|------|----------|-----------|------|
# | 10 | 0.1 | 10 | 0.35 | 0.1 | 101 |
#
# Where $T$ is the total number of simulation periods (i.e., $t$ will range from $0$ to $100$).

# +
# Create variables 'k0', 's', 'A', 'alpha','delta','T' to store parameter values for the simulation

# Initialize capital per worker variable 'capital' as an array of zeros with length T

# Set first value of capital equal to k0

# Iterate over t in range(T-1) to update subsequent values in the capital array

# Construct a plot of simulated capital per worker
# -

# It looks like capital per worker is converging toward a steady state value near $35$. You can verify that the steady state is in fact $34.55$.
#
# So computing nonlinear difference equations is essentially the same as computing linear difference equations. However, establishing the stability properties of nonlinear difference equations is more involved and we'll skip that. But in case you're interested, I've included an optional discussion at the bottom of this notebook.

# ## Systems of Difference Equations
#
# Most macroeconomic models are dynamic *systems* of equations. They are collections of several equations that simultaneously determine several variables. Working with them can be a little intimidating, but in principle, it's similar to working with a single dynamic equation.
#
# ### Example: Solow Growth Model Without Exogenous Growth
#
# The basic Solow growth model is a system of four equations that determine values of four variables: $k_t$, $y_t$, $c_t$, and $i_t$. In the Solow model, capital per worker is a *state variable* because the value of $k_t$ was actually determined in period $t-1$. In fact, in the Solow model without exogenous population or productivity growth, capital is the *only* state variable. That means that if you know the value of capital per worker, then you can compute all of the other quantities. The nonstate variables are called *control* variables.
#
# To make this point clear, here is the Solow growth model *solved* in terms of $k_t$:
#
# \begin{align}
# y_t & = Ak_t^{\alpha}\\
# i_t & = sAk_t^{\alpha}\\
# c_t & = (1-s)Ak_t^{\alpha}\\
# k_{t+1} & = sAk_t^{\alpha} + (1-\delta)k_{t}.
# \end{align}
#
# To simulate all of the endogenous variables, do these two steps:
#
# 1. Simulate an array of values for $k_t$
# 2. Compute the other variables using the array of $k_t$ values.
#
# Since we've already simulated capital, we can just compute simulated values for $y_t$, $c_t$, and $i_t$ right away.

# +
# Create variables 'output', 'consumption', and 'investment' equal to the respective variables in the model

# Construct a plot of simulated output per worker, consumption per worker, investment per worker, capital per worker.
# Create legend. Use ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) to put legend outside of axis
# Add grid if you want
# -

# Of course, if we want to compute multiple simulations, then we will want to write a function to compute the simulated values so that we don't have to repeat ourselves. The function in the following cell returns a `DataFrame` with columns containing simulated values of capital per worker, output per worker, consumption per worker, and investment per worker.

# Define a function that returns a DataFrame of simulated values from the Solow model with no exogenous growth.
CELL PROVIDED def solow(s,A,alpha,delta,T,k0): '''Function for computing a simulation of the Solow growth model without exogenous growth. The model is assmed to be in "per-worker" quantities: y[t] = A*k[t]^alpha k[t+1] = i[t] + (1-delta)*k[t] y[t] = c[t] + i[t] c[t] = (1-s)*y[t] Args: s (float): Saving rate A (float): TFP alpha (float): Capital share in Cobb-Dougla production function delta (float): Capital depreciation rate T (int): Number of periods to simulate k0 (float): Initial value of capital per worker Returns: Pandas DataFrame ''' # Initialize capital per worker values capital = np.zeros(T) # Set first value of capital per worker equal to k0 capital[0] = k0 # Iterate over t in range(T-1) to update subsequent values in the capital array for t in range(T-1): capital[t+1] = s*A*capital[t]**alpha + (1-delta)*capital[t] # Compute the values of the other variables output = A*capital**alpha consumption = (1-s)*output investment = s*output # Put simulated data into a DataFrame df = pd.DataFrame({'output':output,'consumption':consumption,'investment':investment,'capital':capital}) # Return the simulated data return df # Use the function `solow()` to simulate the Solow model with the same parameters that we've been using. # + # Simulate the model and store results in a variable called 'df' # Print the first five rows of df # - # Now, use the function `solow()` to simulte the model for five different initial values of capital per worker: $k_0 = 10, 20, 30, 40, 50$. Plot the trajectories for $y_t$ together. # + # Create variable 'initial_ks' that store the five initial values of k[t] # Create a figure and axis # Iterate over k0 in initial_ks and plot # Add title # Create legend. Use ax.legend(loc='center left', bbox_to_anchor=(1, 0.5)) to put legend outside of axis # Add grid if you want # - # ### Solow Model with Exogenous Labor # # Now consider the Solow growth model with exogenous labor growth. That is, let $L_t$ denote the quantity of labor and suppose that the population (and therefore the labor supply) grows at the constant rate $n$. This means: # # \begin{align} # L_{t+1} & = (1+n)L_t, # \end{align} # # where the intial value of labor $L_0$ must be given. The rest of the model in aggregate terms is: # # \begin{align} # Y_t & = AK_t^{\alpha}L_t^{1-\alpha}\\ # K_{t+1} & = I_t + (1-\delta)K_{t}\\ # Y_t & = C_t + I_t\\ # C_t & = (1-s)Y_t, # \end{align} # # where the intial value of capital $K_0$ is also given. # # Since capital *and* labor are both determined in the previous period by their respective laws of motion, the Solow model with exogenous labor growth has *two* state variables: $K_t$ and $L_t$. We can solve the capital law of motion for capital in terms of only the variables capital and labor and so the two equations that determine how the state of the economy evolves are: # # \begin{align} # K_{t+1} & = sAK_t^{\alpha}L_t^{1-\alpha} + (1-\delta)K_{t}\\ # L_{t+1} & = (1+n)L_t # \end{align} # # If we iterate on these two to compute simulations of $K_t$ and $L_t$, then we can compute $Y_t$, $C_t$, and $I_t$ easily. # # Let's suppose the following values for a simulation: # # | $L_0$ | $n$ | $K_0$ | $s$ | $A$ | $\alpha$ | $\delta $ | $T$ | # |-------|------|-------|------|------|----------|-----------|-----| # | 1 | 0.01 | 10 | 0.1 | 10 | 0.35 | 0.1 | 101 | # # Compute simulated values for $K_t$ and $L_t$ and use those values to compute and plot simulated series for output $Y_t$ and output *per worker* $Y_t/L_t$ side-by-side in the same figure. 
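# A minimal sketch of that two-state-variable simulation, using the parameter values from the table above ($L_0=1$, $n=0.01$, $K_0=10$, $s=0.1$, $A=10$, $\alpha=0.35$, $\delta=0.1$, $T=101$):
# +
K0, L0, n, s, A, alpha, delta, T = 10, 1, 0.01, 0.1, 10, 0.35, 0.1, 101

K = np.zeros(T); K[0] = K0
L = np.zeros(T); L[0] = L0
for t in range(T-1):
    K[t+1] = s*A*K[t]**alpha*L[t]**(1-alpha) + (1-delta)*K[t]
    L[t+1] = (1+n)*L[t]

Y = A*K**alpha*L**(1-alpha)   # aggregate output
Y_pw = Y/L                    # output per worker
# -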
# + # Create variables 'K0', 'L0', 'n', 's', 'A', 'alpha','delta','T' to store parameter values for the simulation # Initialize capital variable 'capital' as an array of zeros with length T # Set first value of capital equal to Kdd0 # Initialize the labor variable 'labor' as an array of zeros with length T # Set first value of capital equal to L0 # Iterate over t in range(T-1) to update subsequent values in the capital and labor arrays # Create variables 'output' and 'output_pw' that store output and output per worker # Create figure # Construct a plot of simulated output # Construct a plot of simulated output per worker # - # Because of exogenous labor growth, there is long-run growth in output but not output per worker. The function in the cell below simulates the Solow model with exogenous labor growth. # Define a function that returns a DataFrame of simulated value from the Solow model with exogenous labor growth. CELL PROVIDED def solow_w_L(s,A,alpha,delta,n,T,K0,L0): '''Function for computing a simulation of the Solow growth model with exogenous labor. Y[t] = A*K[t]^alpha*L[t]^(1-alpha) K[t+1] = I[t] + (1-delta)*K[t] Y[t] = C[t] + I[t] C[t] = (1-s)*Y[t] L[t+1] = (1+n)*L[t] Args: s (float): Saving rate A (float): TFP alpha (float): Capital share in Cobb-Douglas production function delta (float): Capital depreciation rate n (float): Labor growth rate T (int): Number of periods to simulate k0 (float): Initial value of capital per worker L0 (float): Initial labor Returns: Pandas DataFrame ''' # Initialize capital values capital = np.zeros(T) # Set first value of capital equal to K0 capital[0] = K0 # Initialize labor values labor = np.zeros(T) # Set first value of labor equal to L0 labor[0] = L0 # Iterate over t in range(T-1) to update subsequent values in the capital and labor arrays for t in range(T-1): capital[t+1] = s*A*capital[t]**alpha*labor[t]**(1-alpha) + (1-delta)*capital[t] labor[t+1] = (1+n)*labor[t] # Compute the values of the other aggregate variables output = A*capital**alpha*labor**(1-alpha) consumption = (1-s)*output investment = s*output # Compute the values of the "per worker" variables capital_pw = capital/labor output_pw = output/labor consumption_pw = consumption/labor investment_pw = investment/labor # Put simulated data into a DataFrame df = pd.DataFrame({'output':output, 'consumption':consumption, 'investment':investment, 'capital':capital, 'output_pw':output_pw, 'consumption_pw':consumption_pw, 'investment_pw':investment_pw, 'capital_pw':capital_pw}) # Return the simulated data return df # Use the function `solow_w_L()` to replicate the previous simulation and plots. # + # Replicate the previous simulation exercise using the solow_no_exo_growth() function. CELL PROVIDED # Simulate the model df = solow_w_L(s,A,alpha,delta,n,T,K0,L0) # Create a figure fig = plt.figure(figsize=(12,4)) # Construct a plot of simulated output ax = fig.add_subplot(1,2,1) ax.plot(df['output'],lw=3,alpha=0.75) ax.set_title('Ouput') ax.grid() # Construct a plot of simulated output per worker ax = fig.add_subplot(1,2,2) ax.plot(df['output_pw'],lw=3,alpha=0.75) ax.set_title('Ouput per Worker') ax.grid() # - # ## Stability of Nonlinear First-Order Difference Equations (Optional) # # Unlike the linear first-order difference equation, stability (i.e., whether the process diverges to infinity in absolute value) of nonlinear first-order difference equations is less straightforward to establish. 
In fact, proving or disproving *global stability* -- Will the process remain finite for *any* initial condition? -- is particularly challenging. An easier task is to establish the existience of *local stability*: that the process is stable in a *neighborhood* of a given point. The idea is to use the first-order Taylor series approximation (https://en.wikipedia.org/wiki/Taylor_series) to approximate the nonlinear equation with a linear one. An example using the Solow model will make this easier to explain. # # Let $k^*$ denote the steady state of capital per worker in the Solow model with no exogenous growth. Then: # # \begin{align} # k^* & = \left(\frac{sA}{\delta}\right)^{\frac{1}{1-\alpha}} # \end{align} # # The first-order Taylor series approximation to the capital law of motion around steady state capital per worker is: # # \begin{align} # k_{t+1} & \approx \left[ sA\left(k^*\right)^{\alpha} + (1-\delta)k^*\right] + \left[\alpha sA\left(k^*\right)^{\alpha-1} + 1-\delta\right]\left(k_t - k^*\right). \label{eqn:capital_approx} # \end{align} # # Equation (\ref{eqn:capital_approx}) is a linear first-order difference equation. As long as the coefficient on $k_t$, # # \begin{align} # \alpha sA\left(k^*\right)^{\alpha-1} + 1-\delta, # \end{align} # # is less than 1 in absolute value, then the approximated model is stable and we say that $k_t$ is stable in a neighborhood of the steady state $k^*$. Let's compute this coefficient for the Solow model using the same parameterization we used earlier: # # | $s$ | $A$ | $\alpha$ | $\delta $ | # |-----|------|----------|-----------| # | 0.1 | 10 | 0.35 | 0.1 | # + # Parameters # Computer steady state # Compute coefficient # - # So steady state capital per worker is about 35 and the coefficient on $k_t$ in the approximation is 0.935. So the Solow model as we have parameterized it is stable in the neighborhood of $k^*$. Can you find the upper bound on $\alpha$ for which the model will *not* be stable given $A = 10$, $s = 0.1$ and $\delta = 0.1$? # # An obvious question is: How good is the linear approximation? It turns out that for the capital law of motion in the Solow model, the linear approximation is really good. Let's plot the exact law of motion and the approximated law of motion for $k_t \in[0,100]$. # + # Create array of values for k[t] # Compute k[t+1] exactly using the law of motion for capital per worker # Compute k[t+1] approximately using the approximation to the law of motion for capital per worker # Create figure # Plot exact and approximate k[t+1] against for k[t] # - # The exact law of motion and the approximated law of motion intersect at the steady state and are hardly distinguishable elsewhere. You can see the difference between them only by zooming in on the extreme values. # + # Create figure # Plot exact and approximate k[t+1] against for k[t] between 0 and 10 # Plot exact and approximate k[t+1] against for k[t] between 80 and 100 # - # For $k_t$ near zero, the approximated law of motion is greater than the exact one but not by much. For $k_t$ near 100, the two laws of motion are still really close to each other. The point is that we can be confident that in the Solow growth model, capital will tend to converge toward the steady state regardless of its initial value. 
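# To make the steady-state and local-stability computation above concrete, here is a minimal sketch using the same parameterization ($s=0.1$, $A=10$, $\alpha=0.35$, $\delta=0.1$):
# +
s, A, alpha, delta = 0.1, 10, 0.35, 0.1

k_star = (s*A/delta)**(1/(1-alpha))                    # about 34.55
coefficient = alpha*s*A*k_star**(alpha-1) + 1 - delta  # about 0.935

print(k_star, coefficient, abs(coefficient) < 1)       # |coefficient| < 1: locally stable
# -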
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # My Music # + ## Basic stuff # %load_ext autoreload # %autoreload from IPython.core.display import display, HTML display(HTML("")) display(HTML("""""")) from mp3id import mp3ID from musicBase import musicBase from musicFinder import musicFinder from musicPath import pathBasics from timeUtils import clock, elapsed from discogsBase import discogs from multiArtist import multiArtist from time import sleep from pandas import DataFrame, Series #from discogs import discogs ## Python Version import sys print("Python: {0}".format(sys.version)) import datetime as dt start = dt.datetime.now() print("Notebook Last Run Initiated: "+str(start)) # - # # Music Analyzer (My Music) # + # %load_ext autoreload # %autoreload from searchUtils import findDirsPattern, findWalkExt from fileUtils import getBaseFilename, getFileBasics from fsUtils import setFile from ioUtils import saveFile, getFile from timeUtils import clock, elapsed from musicPath import pathBasics ############################################################################################################################## # My Music ############################################################################################################################## class musicAnalyzer(musicBase): def __init__(self, debug=False): self.name = "myanalyzer" musicBase.__init__(self, debug=debug) self.pb = pathBasics() self.flists = self.getMusicListFiles() self.tagDBs = self.getMusicTagDBs() self.pathDBs = self.getMusicPathDBs() # + def realName(x): if x is None: return [None,-1] lenx = len(x) if len(x) < 1: return [x,-1] if x[-1] != ")": return [x, None] if lenx >=5: if x[-3] == "(": try: num = int(x[-2:-1]) val = x[:-3].strip() return [val, num] except: return [x, None] if lenx >= 6: if x[-4] == "(": try: num = int(x[-3:-1]) val = x[:-4].strip() return [val, num] except: return [x, None] if lenx >= 7: if x[-4] == "(": try: num = int(x[-3:-1]) val = x[:-4].strip() return [val, num] except: return [x, None] return [x, None] # + from pandas import Series discdf = DataFrame(Series(df)) discdf.columns = ['Name'] tmp = DataFrame(discdf["Name"].apply(realName).tolist()) tmp.index = discdf.index discdf['Artist'] = tmp[0] discdf = discdf[~discdf['Name'].isna()] print(discdf.shape) artists = list(discdf["Artist"]) _ = clock("Last Run") # + # %load_ext autoreload # %autoreload mf = musicFinder(debug=False) ## Run This To Find Music Paths mf.findMusic() #"Soundtrack", "Live", "Classical", "Jazz", "MixTape", "Compilation", "Non Fiction", "Todo", "Moves", "Not In Discogs"]) ma = musicAnalyzer(debug=False) _ = clock("Last Run") # + start, cmt = clock("Getting My Artists/Albums") artistData = {} albumData = {} for pathfile in ma.pathDBs: # if sum([x in pathfile for x in ["Soundtrack", "Live", "Classical", "Jazz", "MixTape", "Compilation", "Non Fiction", "Todo", "Moves", "Not In Discogs"]]) > 0: # continue # if sum([x in pathfile for x in ["Matched"]]) == 0: # continue pathdata = getFile(pathfile) for mp3,mp3data in pathdata.items(): if artistData.get(mp3data['pbArtist']) is None: artistData[mp3data["pbArtist"]] = {} if artistData[mp3data["pbArtist"]].get(mp3data["pbAlbum"]) is None: artistData[mp3data["pbArtist"]][mp3data["pbAlbum"]] = [] artistData[mp3data["pbArtist"]][mp3data["pbAlbum"]].append(mp3) if albumData.get(mp3data["pbAlbum"]) is None: 
albumData[mp3data["pbAlbum"]] = {} if albumData[mp3data["pbAlbum"]].get(mp3data["pbArtist"]) is None: albumData[mp3data["pbAlbum"]][mp3data["pbArtist"]] = [] albumData[mp3data["pbAlbum"]][mp3data["pbArtist"]].append(mp3) print("Found {0} artists and {1} albums".format(len(artistData), len(albumData))) saveFile(idata=artistData, ifile="/Volumes/Music/MusicDB/artistData.p") saveFile(idata=albumData, ifile="/Volumes/Music/MusicDB/albumData.p") elapsed(start, cmt) _ = clock("Last Run") # - # # Find Artists disc = discogs() df = disc.getArtistIDToNameData() tmp = Series(df).apply(realName).tolist() artists = list(DataFrame(tmp)[0]) # + from searchUtils import findNearest from unicodedata import normalize from pandas import DataFrame def getMusicData(key, artist): retval = discdf[discdf[key] == artist] if retval.shape[0] > 0: return retval else: return None saveArtistData = {"Odd": {}, "Match": {}, "Near": {}, "None": {}} for artistName,artistAlbums in artistData.items(): rename = None if False: rename = None if renameReplacements.get(artistName) is not None: artistName = renameReplacements[artistName] rename = True artistName = normalize('NFC', artistName) keys = ["Artist", "Name"] done = False for key in keys: mdata = getMusicData(key, artistName) if isinstance(mdata, DataFrame): if key == "Artist": #print('Match\t',mdata.shape,'\t',artistName) saveArtistData["Match"][artistName] = {"Match": artistName, "Rename": rename, "Discog": mdata, "Albums": artistAlbums} else: print('Odd\t',mdata.shape,'\t',artistName) saveArtistData["Odd"][artistName] = {"Match": artistName, "Rename": rename, "Discog": mdata, "Albums": artistAlbums} done = True break if done: continue cutoff=0.9 if len(artistName) <= 6: cutoff = 0.8 cutoffArtists = findNearest(item=artistName, ilist=artists, num=1, cutoff=cutoff) if len(cutoffArtists) > 0: for artist in cutoffArtists: mdata = getMusicData("Artist", artist) saveArtistData["Near"][artistName] = {"Match": artist, "Rename": rename, "Discog": mdata, "Albums": artistAlbums} print('Near','\t',mdata.shape,'\t',artistName,'\t\t====> Discog Match =',artist) done = True else: if False: cutoffArtists = findNearest(item=artistName, ilist=artists, num=1, cutoff=cutoff - 0.15) if len(cutoffArtists) > 0: for artist in cutoffArtists: mdata = getMusicData("Artist", artist) saveArtistData["Near"][artistName] = {"Match": artist, "Rename": rename, "Discog": mdata, "Albums": artistAlbums} print('Near','\t',mdata.shape,'\t',artistName,'\t\t====> Discog Match =',artist) done = True if done: continue print('\t','\t','\t',artistName) saveArtistData["None"][artistName] = {"Match": None, "Rename": rename, "Discog": None, "Albums": artistAlbums} continue if artistName == " (2)": print(artistAlbums) 1/0 continue for albumName,albumVals in artistAlbums.items(): print("\t",albumName,len(albumVals)) break #albumData = {} # - # # Known Discog Renames from pandas import read_excel renames = read_excel('/Volumes/Music/Discog/db/DiscogRenames.xlsx')[["Unnamed: 1", "Unnamed: 2"]][1:] renames.columns = ["My Name", "Discog Name"] renames cutoff=0.8 for discogName in renames["Discog Name"].values: cutoffArtists = findNearest(item=discogName, ilist=artists, num=1, cutoff=cutoff) print(discogName,'\t',cutoffArtists) renames["Discog Name"].values # # Full Matches del saveArtistData["Match"]['11'] # + toget = [] for artistName,aData in saveArtistData["Match"].items(): rename = aData["Rename"] discog = aData["Discog"] albums = aData["Albums"] if discog is not None: matches = discog["Name"].values for val in 
matches: toget.append(val) else: continue saveFile(idata=toget, ifile="/Volumes/Music/MusicDB/toget.p") # + toget2 = [] for artistName,aData in saveArtistData["Near"].items(): rename = aData["Rename"] discog = aData["Discog"] albums = aData["Albums"] if discog is not None: matches = discog["Name"].values for val in matches: if val not in toget: toget2.append(val) else: continue for artistName,aData in saveArtistData["Odd"].items(): rename = aData["Rename"] discog = aData["Discog"] albums = aData["Albums"] if discog is not None: matches = discog["Name"].values for val in matches: if val not in toget: toget2.append(val) else: continue saveFile(idata=toget2, ifile="/Volumes/Music/MusicDB/toget2.p") # + toget3 = [] for i,(artistName,aData) in enumerate(saveArtistData["None"].items()): rename = aData["Rename"] discog = aData["Discog"] albums = aData["Albums"] matches = mulArts.getArtistNames(artistName) for name,val in matches.items(): if val is None: continue if isinstance(val,list): for artistID in val: toget3.append(artistID) else: toget3.append(val) toget3 = list(set(toget3)) saveFile(idata=toget3, ifile="/Volumes/Music/MusicDB/toget3.p") # - saveFile(idata=toget3, ifile="/Volumes/Music/MusicDB/toget3.p") # # Mark Soundtracks from searchUtils import findDirsPattern, findAll from fsUtils import moveDir from fileUtils import getDirBasics for dirval in findDirsPattern('/Volumes/Music/iTunes Compilation/iTunes Media/Music', pattern="OST"): if dirval.endswith("OST"): dst = " ".join([dirval[:-3].strip(), "Soundtrack"]) albumName = " ".join([getDirBasics(dirval)[-1][:-3].strip(), "Soundtrack"]) for val in findAll(dirval): for albumFile in findAll(val): m = mp3ID(albumFile) m.setAlbum(albumName) moveDir(dirval, dst) # # Near Matches toExcel = [] for i,(artistName,aData) in enumerate(saveArtistData["Near"].items()): print("{0: <3}/{1: <3}".format(i,len(saveArtistData["Near"])), end=" \t") rename = aData["Rename"] discog = aData["Discog"] albums = aData["Albums"] if discog is not None: matches = discog["Name"].values print('{0: <50}'.format(artistName),matches) else: print('{0: <50}{1: <8}{2}'.format(artistName,'x',len(albums))) # # Odd Names for artistName,aData in saveArtistData["Odd"].items(): rename = aData["Rename"] discog = aData["Discog"] albums = aData["Albums"] if discog is not None: matches = discog["Name"].values print('{0: <50}'.format(artistName),matches) else: print('{0: <50}{1: <8}{2}'.format(artistName,'x',len(albums))) # # None Match # + ## Abwaerts = Abwärts ## Move all ' vs ' into MixTape ## Known #Beatsnblends #Dr. # - #artistData[' & FM'] albumData["Flo.Rida-Only.One.Flo.Pt.1-(Retail)-2010-NoFS)"] # # # Find Matches # + from searchUtils import findNearest class multiArtist: def __init__(self, cutoff=0.9, discdata=None, exact=False): self.cutoff = cutoff self.discdata = discdata self.exact = exact self.basicdelims = ["Duet With", "Presents", "Featuring"] self.delims = [",", "&", " And ", "+", "/", "With The", " with ", " With ", " y ", " Y ", " feat.", " ft.", " Feat. 
", " x ", " X "] self.discArtists = [] if self.discdata is not None: self.discArtists = [x for x in discdata.keys() if x is not None] self.knownDelimArtists = {artist: True for artist in self.discArtists if self.nDelims(artist) > 0} def getDiscArtists(self): return self.discArtists def getKnownDelimArtists(self): return self.knownDelimArtists def isKnownArtist(self, artist): if isinstance(self.discdata, dict): return self.discdata.get(artist) != None return False def nDelims(self, artist): return sum([x in artist for x in self.delims]) def joinNames(self, delim, names): joinVal = delim if delim.startswith(" ") is False: joinVal = " {0}".format(joinVal) if delim.endswith(" ") is False: joinVal = "{0} ".format(joinVal) return joinVal.join(names) def getDelimData(self, val, debug=False): valdata = {d: [x.strip() for x in val.split(d)] for d in self.delims if d in val} if debug: print("\tDelimParsing ==> {0}".format(valdata)) for k,v in valdata.items(): if len(v) == 3: v.append(self.joinNames(delim=k, names=v[:2])) v.append(self.joinNames(delim=k, names=v[1:-1])) if debug: print("\tDelimParsing ==> {0}".format(valdata)) return valdata def getBasicDelimData(self, val): valdata = {d: [x.strip() for x in val.split(d)] for d in self.basicdelims if d in val} return valdata def getNdelims(self, val): return len(val) def addArtist(self, allArtists, val, debug=False): #print("L={0}".format(len(allArtists))) if allArtists.get(val) is None: if debug: print("Adding {0}. Sum is {1}".format(val, len(allArtists))) allArtists[val] = True def cleanArtist(self, artist): artist = artist.replace("(", "") artist = artist.replace(")", "") return artist def newMethod(self, artist, debug=False): if debug is True: print("Parsing [{0}]".format(artist)) artist = self.cleanArtist(artist) allArtists = {} d1delims = self.getBasicDelimData(artist) if len(d1delims) == 0: d1delims = self.getDelimData(artist) if debug: print('1','\t',artist,'===>',d1delims) knownArtists = set() if len(d1delims) == 0: self.addArtist(allArtists, artist, debug) knownArtists = set(allArtists.keys()) elif self.isKnownArtist(artist): self.addArtist(allArtists, artist, debug) knownArtists = set(allArtists.keys()) d1delims = {} ############################################################################## ## 1st Delimiter Split ############################################################################## for delim1, delimdata1 in d1delims.items(): delimdata1 = list(set(delimdata1).difference(knownArtists)) for artist1 in delimdata1: d2delims = self.getDelimData(artist1) if debug: print('2','\t',artist1,'===>',d2delims) if self.getNdelims(d2delims) == 0: self.addArtist(allArtists, artist1, debug) knownArtists = set(allArtists.keys()) continue elif self.isKnownArtist(artist1): self.addArtist(allArtists, artist1, debug) knownArtists = set(allArtists.keys()) d2delims = {} ############################################################################## ## 2nd Delimiter Split ############################################################################## for delim2, delimdata2 in d2delims.items(): delimdata2 = list(set(delimdata2).difference(knownArtists)) for artist2 in delimdata2: d3delims = self.getDelimData(artist2) if debug: print('3','\t',artist2,'===>',d3delims) if self.getNdelims(d3delims) == 0: self.addArtist(allArtists, artist2) knownArtists = set(allArtists.keys()) continue elif self.isKnownArtist(artist2): self.addArtist(allArtists, artist2) knownArtists = set(allArtists.keys()) d3delims = {} 
############################################################################## ## 3rd Delimiter Split ############################################################################## for delim3, delimdata3 in d3delims.items(): delimdata3 = list(set(delimdata3).difference(knownArtists)) for artist3 in delimdata3: d4delims = self.getDelimData(artist3) if self.getNdelims(d4delims) == 0: self.addArtist(allArtists, artist3) knownArtists = set(allArtists.keys()) continue elif self.isKnownArtist(artist3): self.addArtist(allArtists, artist3) knownArtists = set(allArtists.keys()) d4delims = {} ############################################################################## ## 4th Delimiter Split ############################################################################## for delim4, delimdata4 in d4delims.items(): delimdata4 = list(set(delimdata4).difference(knownArtists)) for artist4 in delimdata4: d5delims = self.getDelimData(artist4) if self.getNdelims(d5delims) == 0: self.addArtist(allArtists, artist4) knownArtists = set(allArtists.keys()) continue elif self.isKnownArtist(artist4): self.addArtist(allArtists, artist4) knownArtists = set(allArtists.keys()) d5delims = {} ############################################################################## ## 5th Delimiter Split ############################################################################## for delim5, delimdata5 in d5delims.items(): delimdata5 = list(set(delimdata5).difference(knownArtists)) for artist5 in delimdata5: d6delims = self.getDelimData(artist5) if self.getNdelims(d6delims) == 0: self.addArtist(allArtists, artist5) knownArtists = set(allArtists.keys()) continue elif self.isKnownArtist(artist5): self.addArtist(allArtists, artist5) knownArtists = set(allArtists.keys()) d6delims = {} ############################################################################## ## Combine Results ############################################################################## results = {} if self.discdata is not None and len(self.discArtists) > 0: for name in knownArtists: retval = self.discdata.get(name) if self.exact is False: if retval is None: retval = findNearest(name, self.discArtists, 1, self.cutoff) if len(retval) == 1: retval = self.discdata.get(retval[0]) else: retval = None results[name] = retval else: results = {k: ['?'] for k,v in allArtists.items()} return results def getArtistNames(self, artist, debug=False): return self.newMethod(artist, debug) if self.nDelims(artist) == 0: names = {artist: []} return names if self.discdata is not None and len(self.discArtists) > 0: retval = self.discdata.get(artist) if retval is not None: return {artist: retval} else: retval = findNearest(artist, self.discArtists, 1, self.cutoff) if len(retval) == 1: return {artist: self.discdata.get(retval[0])} names = {artist: None} names = self.splitArtist(names) names = self.unravelDict(names) names = self.unravelDict(names) return names def unravelDict(self, dvals): fvals = {} for k,v in dvals.items(): if isinstance(v, dict): for k2,v2 in v.items(): if isinstance(v2, dict): for k3,v3 in v.items(): if isinstance(v3, dict): for k4,v4 in v.items(): fvals[k4] = v4 else: fvals[k3] = v3 else: fvals[k2] = v2 else: fvals[k] = v return fvals def splitArtistDelim(self, artist, delval): names = {} if delval not in artist: return None for val in artist.split(delval): val = val.strip() if self.discdata is not None and len(self.discArtists) > 0: retval = self.discdata.get(val) if retval is not None: names[val] = retval else: retval = findNearest(val, self.discArtists, 1, 
self.cutoff) if len(retval) == 0: names[val] = None else: names[val] = retval else: names[val] = [-1] if len(names) == 0: return None if any(names.values()) is False: return None return names def splitArtist(self, result): delims = self.delims #print("Input: {0}".format(result)) for name in result.keys(): #print(" Name -->",name,"\tCurrent Value -->",result[name]) if result[name] is None: for delim in delims: #print("\tDelim: {0} for {1}".format(delim, name)) result2 = self.splitArtistDelim(name,delval=delim) #print("\tDelim Result: {0}".format(result2)) if result2 is not None: result[name] = result2 #print("\tName:",name,'\tResult:',result2) for name2 in result2.keys(): if result2[name2] is None: for delim2 in delims: #print("\t\tDelim2: {0} for {1}".format(delim2, name2)) result3 = self.splitArtistDelim(name2,delval=delim2) #print("\t\tDelim Result: {0}".format(result3)) if result3 is not None: #print("\t\tName:",name2,'\tResult:',result3) result2[name2] = result3 ## Breaking from delim2 break ## Breaking from delim break return result # - #artistDB = disc.getArtistNameToIDData() artistNameToID = {} for artistID, artistName in discdf["Artist"].to_dict().items(): if artistNameToID.get(artistName) is None: artistNameToID[artistName] = [] artistNameToID[artistName].append(artistID) for i,(artistName,aData) in enumerate(saveArtistData["None"].items()): rename = aData["Rename"] discog = aData["Discog"] albums = aData["Albums"] matches = mulArts.getArtistNames(artistName) if all(matches.values()): for albumName,albumValueData in albums.items(): if len(albumValueData) == 1: print('{0: <50}'.format(artistName),matches) print("\t{0}\t{1}\n".format(len(albumValueData), albumName)) #discdf[discdf.index == '730612'] discdf[discdf.index == '135946'] for artistName,aData in saveArtistData["None"].items(): rename = aData["Rename"] discog = aData["Discog"] albums = aData["Albums"] matches = mulArts.getArtistNames(artistName) if not all(matches.values()): print('{0: <50}'.format(artistName),matches) # + x = artistNames[4] items = artistNames my_str_as_bytes = str.encode(x, 'utf-8') print(my_str_as_bytes,type(my_str_as_bytes)) # ensure it is byte representation from unicodedata import normalize keys = ['NFC', 'NFKC', 'NFD', 'NFKD'] for key in keys: print(x,'<---',key) print(findNearest(item=x, ilist=items, num=1, cutoff=1.0)) my_str = normalize(key, x) print(findNearest(item=my_str, ilist=items, num=1, cutoff=1.0)) print(my_str,type(my_str)) my_str_as_bytes = str.encode(my_str, 'utf-8') print(my_str_as_bytes,type(my_str_as_bytes)) # ensure it is byte representation my_decoded_str = my_str_as_bytes.decode() print(my_decoded_str,type(my_decoded_str)) # ensure it is string representation print(findNearest(item=my_decoded_str, ilist=items, num=1, cutoff=1.0)) print("") # - # # Test ifile = "/Volumes/Music/iTunes MixTape/iTunes Media/Music/24 Shots/01 Intro (Power Of The Dollar).mp3" isFile(ifile) # + x = artistNames[4] x = '' items = artistNames from unicodedata import normalize keys = ['NFC', 'NFKC', 'NFD', 'NFKD'] for key in keys: print(x,'<---',key) print(findNearest(item=x, ilist=items, num=1, cutoff=1.0)) my_str = normalize(key, x) print(findNearest(item=my_str, ilist=items, num=1, cutoff=1.0)) print(my_str,type(my_str)) my_str_as_bytes = str.encode(my_str, 'utf-8') print(my_str_as_bytes,type(my_str_as_bytes)) # ensure it is byte representation my_decoded_str = my_str_as_bytes.decode() print(my_decoded_str,type(my_decoded_str)) # ensure it is string representation print(findNearest(item=my_decoded_str, 
ilist=items, num=1, cutoff=1.0)) print("") # - m = mp3ID(ifile) m.getInfo() pb = pathBasics(debug=False) pbc = pb.getPaths(ifile) pbc.getDict() # + from mp3id import mp3ID mp3Vals = {} for artistName,aData in saveArtistData['Odd'].items(): print(artistName) discog = aData['Discog'] artist = None if isinstance(discog, DataFrame): if discog.shape[0] == 1: artist = discog['Artist'][0] #.values if artist is None: continue for album,albumFiles in aData['Albums'].items(): print(' ',album) for albumFile in albumFiles: print('\t',albumFile) m = mp3ID(albumFile) mArtist = m.getArtist()[0] if mArtist == artistName: print("\t\tArtist: {0} <--> {1}".format(mArtist,artist)) m.setArtist(artist) mAlbumArtist = m.getAlbumArtist()[0] if mAlbumArtist == artistName: print("\t\tAlbumArtist: {0} <--> {1}".format(mAlbumArtist,artist)) m.setAlbumArtist(artist) #mp3Vals[albumFile] = mid.getInfo() # - mp3Vals # # Multi Artists artistDB = disc.getArtistIDToNameData() artistNameToID = {v: k for k,v in artistDB.items()} mulArts = multiArtist(cutoff=0.9, discdata=artistNameToID, exact=False) #len(artistDB) len(artistNameToID) renameReplacements findNearest(item="", ilist=artistDB.keys(), num=10, cutoff=0.7) # + x = "" from unicodedata import normalize keys = ['NFC', 'NFKC', 'NFD', 'NFKD'] for key in keys: continue print(x,'<---',key) print(findNearest(item=x, ilist=items, num=1, cutoff=1.0)) my_str = normalize(key, x) print(findNearest(item=my_str, ilist=items, num=1, cutoff=1.0)) print(my_str,type(my_str)) my_str_as_bytes = str.encode(my_str, 'utf-8') print(my_str_as_bytes,type(my_str_as_bytes)) # ensure it is byte representation my_decoded_str = my_str_as_bytes.decode() print(my_decoded_str,type(my_decoded_str)) # ensure it is string representation print(findNearest(item=my_decoded_str, ilist=items, num=1, cutoff=1.0)) print("") # - for artistName,artistData in saveArtistData.items(): found = artistData[0] if found is False: retval = ma.getArtistNames(artistName) if len(retval) > 1: artistNames = [k for k,v in retval.items() if v is not None] print("{0: <50}{1}".format(artistName,artistNames)) else: print("{0: <50}{1}".format(artistName,artistData[0])) lvals = list(discdf["Artist"]) test = "*NSYNC" from searchUtils import findNearest findNearest(item=test, ilist=lvals, num=1, cutoff=0.9) def getMusicData(artist): retval = musicdf[musicdf["Artist"] == artist] if retval.shape[0] > 0: return retval else: return None # + artistIDtoName = disc.getArtistIDToNameData() artistIDtoRefs = disc.getArtistIDToRefData() albumIDtoName = disc.getAlbumIDToNameData() albumIDtoRef = disc.getAlbumIDToRefData() artistIDtoAlbumNames = disc.getArtistIDAlbumNames() artistIDtoAlbumIDs = disc.getArtistIDAlbumIDs() artistMetaData = disc.getAlbumArtistMetaData() # + colArtistIDtoName = disc.getCollectionIDToNameData() colArtistIDtoRefs = disc.getCollectionIDToRefData() colArtistReftoCnts = disc.getCollectionRefCountsData() artistIDtoName = disc.getArtistIDToNameData() artistIDtoRefs = disc.getArtistIDToRefData() albumIDtoName = disc.getAlbumIDToNameData() albumIDtoRef = disc.getAlbumIDToRefData() artistIDtoAlbumNames = disc.getArtistIDAlbumNames() artistIDtoAlbumIDs = disc.getArtistIDAlbumIDs() artistMetaData = disc.getAlbumArtistMetaData() def splitMetaData(x): retval = {} if isinstance(x, dict): for k,v in x.items(): retval[k] = [z[0] for z in v.most_common(3)] else: retval = None return retval from pandas import Series, DataFrame sArtistToRef = Series(artistIDtoRefs) sArtistToName = Series(artistIDtoName) sAlbumToRef = Series(albumIDtoRef) 
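# Toy illustration (made-up IDs and names, not Discogs data) of the
# dict -> Series -> DataFrame -> join pattern this cell uses to assemble the
# artist discography table; import repeated so the snippet stands alone.
from pandas import Series, DataFrame
_demo_ref = Series({"a1": "/artist/1", "a2": "/artist/2"})
_demo_name = Series({"a1": "Artist One", "a2": "Artist Two"})
_demo_df = DataFrame(_demo_ref).rename(columns={0: "Ref"}).join(
    DataFrame(_demo_name).rename(columns={0: "Name"}))
# _demo_df is indexed by artist ID with columns ["Ref", "Name"]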
sAlbumToName = Series(albumIDtoName) sArtistToAlbums = Series(artistIDtoAlbumIDs) sArtistToAlbumNames = Series(artistIDtoAlbumNames) sArtistAlbums = Series([dict(zip(x, y)) for x,y in list(zip(sArtistToAlbums.values, sArtistToAlbumNames))], index=sArtistToAlbums.index) sArtistMetaData = Series(artistMetaData) sArtistMetaData = sArtistMetaData.apply(splitMetaData) cols = ["Ref"] discdf = DataFrame(sArtistToRef) discdf.columns = cols discdf = discdf.join(DataFrame(sArtistToName)) cols += ["Name"] discdf.columns = cols tmp = DataFrame(DataFrame(sArtistMetaData)[0].tolist()) tmp.index = sArtistMetaData.index discdf = discdf.join(tmp) cols += ["Extra Artists", "Genres", "Styles"] discdf.columns = cols discdf = discdf.join(DataFrame(sArtistAlbums)) cols += ["Albums"] discdf.columns = cols sColArtistToRef = Series(colArtistIDtoRefs) sColArtistToName = Series(colArtistIDtoName) sColArtistRefToCnts = Series(colArtistReftoCnts) cols = ["Ref"] coldiscdf = DataFrame(sColArtistToRef) coldiscdf.columns = cols coldiscdf = coldiscdf.join(DataFrame(sColArtistToName)) cols += ["Name"] coldiscdf.columns = cols colrefdf = DataFrame(sColArtistRefToCnts) colrefdf.columns = ["Counts"] colrefdf.reset_index(inplace=True) colrefdf.columns = ["Ref", "Counts"] coldiscdf = coldiscdf.merge(colrefdf, on="Ref") coldiscdf.index = DataFrame(sColArtistToRef).index coldiscdf = coldiscdf.merge(discdf, on=["Ref", "Name"], how='left') coldiscdf.index = DataFrame(sColArtistToRef).index # - # + sColArtistToRef = Series(colArtistIDtoRefs) sColArtistToName = Series(colArtistIDtoName) sColArtistRefToCnts = Series(colArtistReftoCnts) cols = ["Ref"] coldiscdf = DataFrame(sColArtistToRef) coldiscdf.columns = cols coldiscdf = coldiscdf.join(DataFrame(sColArtistToName)) cols += ["Name"] coldiscdf.columns = cols colrefdf = DataFrame(sColArtistRefToCnts) colrefdf.columns = ["Counts"] colrefdf.reset_index(inplace=True) colrefdf.columns = ["Ref", "Counts"] coldiscdf = coldiscdf.merge(colrefdf, on="Ref") coldiscdf.index = DataFrame(sColArtistToRef).index coldiscdf = coldiscdf.merge(discdf, on=["Ref", "Name"], how='left') coldiscdf.index = DataFrame(sColArtistToRef).index # - coldiscdf.to_pickle("ArtistDiscog.p") coldiscdf[coldiscdf["Name"] == ""] colrefdf coldiscdf.shape coldiscdf.merge(colrefdf, on="Ref", how="left") # + coldiscdf.merge(sColArtistRefToCnts, on="") # - discdf.shape sArtistToAlbums = Series(artistIDtoAlbumIDs) sArtistToAlbumNames = Series(artistIDtoAlbumNames) s = Series([dict(zip(x, y)) for x,y in list(zip(sArtistToAlbums.values, sArtistToAlbumNames))], index=sArtistToAlbums.index) s fixes = {} iDir = None debug = False for ifile,iPathData in jzpathdata.items(): iTagData = jztagdata[ifile] pathArtist = iPathData['pbArtist'] maPathArtists = ma.getArtistNames(pathArtist) maTagArtists = ma.getArtistNames(iTagData['Artist']) maTagAlbumArtists = ma.getArtistNames(iTagData['AlbumArtist']) if maTagArtists == maTagAlbumArtists: if maTagAlbumArtists == maPathArtists: continue print("Looks good") else: if debug: print("\nDifferent Artists:") print("\tPath: {0}".format(maPathArtists.keys())) print("\tTag: {0}".format(maTagArtists.keys())) print("\tTagAA: {0}".format(maTagAlbumArtists.keys())) if iDir is None: iDir = ifile.split(pathArtist)[0] fixes[iDir] = {} print(" Adding {0}".format(iDir)) if fixes[iDir].get(pathArtist) is None: fixes[iDir][pathArtist] = {} print("\tAdding {0}".format(pathArtist)) iData = ifile.split(pathArtist)[1][1:] fixes[iDir][pathArtist][iData] = {"Error": "Path", "Path": maPathArtists, "Artist": maTagArtists, 
"AlbumArtist": maTagAlbumArtists} print("\t Adding {0}".format(iData)) else: if debug: print("\nUnknown Artist to Discogs:") print("\tPath: {0}".format(maPathArtists)) print("\tTag: {0}".format(maTagArtists)) print("\tTagAA: {0}".format(maTagAlbumArtists)) if iDir is None: iDir = ifile.split(pathArtist)[0] fixes[iDir] = {} print(" Adding {0}".format(iDir)) if fixes[iDir].get(pathArtist) is None: fixes[iDir][pathArtist] = {} print("\tAdding {0}".format(pathArtist)) iData = ifile.split(pathArtist)[1][1:] fixes[iDir][pathArtist][iData] = {"Error": "Tag", "Path": maPathArtists, "Artist": maTagArtists, "AlbumArtist": maTagAlbumArtists} print("\t Adding {0}".format(iData)) saveFile(idata=fixes, ifile="fixes.yaml") disc = discogs() bsdata = searchDiscogs("Duke Ellington with the Ron Collier Orchestra Collages", requireUS=True, release=True) from webUtils import getHTML from artist import artist #bsdata = getHTML(bsdata) # + h4s = bsdata.findAll("h4") h5s = bsdata.findAll("h5") for ih4,h4 in enumerate(h4s): print(h4) spans = h4.findAll("span") ref = None if len(spans) == 0: ref = h4.find("a") else: ref = spans[0].find("a") if ref is None: continue try: href = ref.attrs.get('href') name = ref.text.strip() except: print("Could not get name/href from {0}".format(ref)) continue # + ah5s # - artistData # + albumData = [x for x in h4s if len(x.findAll('a')) > 0] albumData = [x.findAll('a') for x in albumData if not x.find('a').attrs['href'].startswith("/sell/")] artistData = [x.findAll("a") for x in h5s if len(x.findAll('span')) > 0] #artistData = [x.findAll("a") for x in artistData] # - baseURL = disc.discogURL url = urllib.parse.urljoin(baseURL, href) self.downloadAlbumURL(url, savename) def getRefName(ref): try: href = ref.attrs.get('href') name = ref.text.strip() except: href = None name = None return href,name artistData alb = album() # + baseURL = disc.discogURL for (albums,artists) in zip(albumData,artistData): for album in albums: albumRef,albumName = getRefName(album) print(albumRef,'\t',albumName) try: albumID = str(int(albumRef.split("/")[-1])) except: print("Could not get album ID from {0}".format(albumRef)) albumID = None print(albumID) url = urllib.parse.urljoin(baseURL, albumRef) for artist in artists: artistRef,artistName = getRefName(artist) print(artistRef,'\t',artistName) # - album # + from searchUtils import findDirsPattern, findWalkExt from fileUtils import getBaseFilename, getFileBasics from ioUtils import saveFile, getFile from timeUtils import clock, elapsed from musicPath import pathBasics ############################################################################################################################## # My Music ############################################################################################################################## class musicFileAnalyzer(musicBase): def __init__(self, debug=False): self.name = "myfileanalyzer" musicBase.__init__(self, debug=debug) self.pb = pathBasics() self.pathDBs = self.getMusicPathDBs() self.tagDBs = self.getMusicTagDBs() # - mfa = musicFileAnalyzer() # + from fileUtils import getDirname from collections import Counter artistPathData = {} albumPathData = {} for cls in ma.getMusicClasses(): pathData = ma.getMusicPathDB(cls) tagData = ma.getMusicTagDB(cls) start, cmt = clock(" Checking {0} mp3s".format(len(pathData))) for mp3,pb in pathData.items(): artist = pb["pbArtist"] if artist is not None: if artistPathData.get(artist) is None: artistPathData[artist] = {} else: if artistPathData.get(artist) is None: 
artistPathData[artist] = {} album = pb["pbAlbum"] if album is not None: if artistPathData[artist].get(album) is None: artistPathData[artist][album] = {} else: if artistPathData[artist].get(album) is None: artistPathData[artist][album] = {} cls = pb["pbClass"] if album is not None: if artistPathData[artist][album].get(cls) is None: artistPathData[artist][album][cls] = {} else: if artistPathData[artist][album].get(cls) is None: artistPathData[artist][album][cls] = {} dirval = getDirname(mp3) if artistPathData[artist][album][cls].get(dirval) is None: artistPathData[artist][album][cls][dirval] = [] artistPathData[artist][album][cls][dirval].append([pb, tagData[mp3]]) elapsed(start, cmt) print("") savename = setFile(ma.getDBDir(), "ArtistAlbumTagData.p") saveFile(ifile=savename, idata=artistPathData, debug=True) # - savename = setFile(ma.getDBDir(), "ArtistAlbumTagData.p") artistPathData = getFile(savename) nArt = 0 for artist,artistData in artistPathData.items(): nArt += 1 if nArt == 10: break for album,albumData in artistData.items(): for cls,clsData in albumData.items(): for dirval,dirvalData in clsData.items(): dval = dirval.replace(ma.getMusicDir(), "") nFiles = len(dirvalData) partist = artist if len(artist) >= 40: partist = "{0}...".format(artist[:36]) palbum = album if len(album) >= 40: palbum = "{0}...".format(album[:36]) print("{0: <40}{1: <40}{2: <20}{3: <5}{4}".format(partist,palbum,cls,nFiles,dirval)) artistPathData for ifile in ma.flists: tagfile = ifile.replace(".p", "-Tags.p") pathfile = ifile.replace(".p", "-Paths.p") fileData = getFile(ifile) tagData = getFile(tagfile) pathData = getFile(pathfile) from searchUtils import findExt files = findExt(basedir=ma.getDBDir(), ext="*.p") files = [x for x in files if sum([y in x for y in ["Tags.p", "Paths.p"]]) == 0] files import os, sys def splitall(path): allparts = [] while 1: parts = os.path.split(path) if parts[0] == path: # sentinel for absolute paths allparts.insert(0, parts[0]) break elif parts[1] == path: # sentinel for relative paths allparts.insert(0, parts[1]) break else: path = parts[0] allparts.insert(0, parts[1]) return allparts os.path.split("24 Shots/01 Intro (Power Of The Dollar)/dfd") idir files[-2] ifile filename dirval artist # %load_ext autoreload # %autoreload disc = discogs() # + import argparse as ap from os import getcwd from mp3id import mp3ID mid = mp3ID(ifile) minfo = mid.getInfo() from glob import glob from fsUtils import isFile from fileUtils import getExt from searchUtils import findWalkExt, findWalkPattern from os import listdir, walk class mp3Tagger: def __init__(self, disc=None): self.name = "mp3tagger" self.cwd = getcwd() self.infos = {} self.ma = multiArtist(cutoff=0.9, discdata=None, exact=False) if disc is not None: self.setDiscogs(disc) self.exts = mp3ID().mp3exts def setDiscogs(self, disc): artistDB = disc.getArtistVariationNameToIDsData() self.setArtistsDB(artistDB=artistDB) def setArtistsDB(self, artistDB, cutoff=0.9): self.ma = multiArtist(cutoff=0.9, discdata=artistDB, exact=False) def getFiles(self, dirval): self.infos = {} files = findWalkPattern(basedir=dirval, pattern=".") files = [x for x in files if getExt(x) in self.exts] for i,ifile in enumerate(files): if self.infos.get(ifile) is None: mid = mp3ID(ifile) minfo = mid.getInfo() self.infos[ifile] = minfo def get(self): return self.infos def getInfo(self, dirval=None): if dirval is not None: self.getFiles(dirval) print("{0: <3}{1: <4}{2: <4}{3: <50}{4: <50}{5: <40}{6: <40}".format("#", "Trk", "Dsc", "Title", "Album", "Artist", "Album 
Artist")) i = 0 for ifile,minfo in self.infos.items(): print("{0: <3}".format(i+1), end="") trkno = minfo.get('TrackNo') if trkno is None: trkno = -1 elif isinstance(trkno, list): if len(trkno) == 1: trkno = trkno[0] else: trkno = str(trkno) print("{0: <4}".format(trkno), end="") discno = minfo.get('DiscNo') if discno is None: discno = -1 elif isinstance(discno, list): if len(discno) == 1: discno = discno[0] else: discno = str(discno) print("{0: <4}".format(discno), end="") title = minfo.get('Title') if title is None: title = "?" elif isinstance(title, list): if len(title) == 1: title = title[0] else: title = str(title) print("{0: <50}".format(title), end="") album = minfo.get('Album') if album is None: album = "?" elif isinstance(album, list): if len(album) == 1: album = album[0] else: album = str(album) print("{0: <50}".format(album), end="") artist = minfo.get('Artist') if artist is None: artist = "?" elif isinstance(artist, list): if len(artist) == 1: artist = artist[0] else: artist = str(artist) print("{0: <40}".format(artist), end="") albartist = minfo.get('AlbumArtist') if albartist is None: albartist = "?" elif isinstance(albartist, list): if len(albartist) == 1: albartist = albartist[0] else: albartist = str(albartist) print("{0: <40}".format(albartist), end="") print("") i += 1 def getAlbumArtists(self): return [x.get("AlbumArtist") for ifile,x in self.infos.items()] # - mt = mp3Tagger(disc=disc) mt.getInfo(dirval="/Volumes/Music/iTunes Jazz/iTunes Media/Music/ & Qu") # + from ioUtils import saveFile retvals = {} for ifile,minfo in mt.get().items(): retvals[ifile] = {} for key in ["Artist", "AlbumArtist"]: val = minfo.get(key) if isinstance(val, list): if len(val) == 1: retval = mt.ma.getArtistNames(val[0]) if len(retval) > 1: artistNames = [k for k,v in retval.items() if v is not None] retvals[ifile][key] = artistNames saveFile(idata=retvals, ifile="tmp.yaml") # - retvals tmp=[[' & Qu'], [' & Qu'], [' & Qu'], [' & Qu'], [' & Qu'], [' & Quincy Jones'], [' & Quincy Jones'], [' & Quincy Jones'], [' & Quincy Jones'], [' & Quincy Jones'], [' & Quincy Jones'], [' & Quincy Jones'], [' & Quincy Jones'], [' & Qu'], [' & Quincy Jones'], [' & Quincy Jones']] m.getDBDir # + idir="/Volumes/Music/iTunes Jazz/iTunes Media/Music/ & from random import shuffle print("There are {0} chart artists".format(len(chartArtists))) shuffle(chartArtists) for i,artist in enumerate(chartArtists): if prevs.get(artist) is not None: continue retval = ma.getArtistNames(artist) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction to Python # # ### Pandas Intro # ### Pandas import pandas as pd import numpy as np import matplotlib.pyplot as plt import os # %matplotlib inline datapath = "../Data/" # ### Pandas Data Structures: Series obj = pd.Series([4, 12, -5, 3, 5]) obj obj.values obj.index obj.index = ['Bob', 'Steve', 'Jeff', 'Ryan', 'Fernie'] obj obj2 = pd.Series([4, 7, -5, 3], index=['d', 'b', 'a', 'c']) obj2 obj2['c'] obj2[['c', 'a', 'd']] obj2[obj2 < 0] obj2 * 2 np.exp(obj2) sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000} obj3 = pd.Series(sdata) obj3 states = ['California', 'Ohio', 'Oregon', 'Texas'] obj4 = pd.Series(sdata, index=states) obj4 pd.isnull(obj4) pd.notnull(obj4) obj3.add(obj4, fill_value=10) obj4.name = 'Population' obj4.index.name = 'State' obj4 # #### Pandas Data Structures: Dataframe data = 
{'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], 'year': [2000, 2001, 2002, 2001, 2002], 'pop': [1.5, 1.7, 3.6, 2.4, 2.9]} frame = pd.DataFrame(data) frame d = pd.DataFrame(data, columns=['year', 'state', 'pop']) d d.set_index('year', inplace=True) d.drop(2000, axis=0) d frame2 = pd.DataFrame(data, columns=['year', 'state', 'pop', 'debt'], index=['one', 'two', 'three', 'four', 'five']) frame2 frame2['nova'] = 13 frame2 frame2.loc['three'] frame2.iloc[2] frame2.nova = 23 frame2 frame2.columns print(frame2['state']) print() print(type(frame2['state'])) frame2.state #frame2.loc['three'] frame2.loc['three','state'] frame2['debt'] = 16.5 frame2 frame2['debt'] = np.arange(5.) frame2 val = pd.Series([-1.2, -1.5, -1.7], index=['two', 'four', 'five']) frame2['debt'] = val frame2 frame2['eastern'] = frame2.state == 'Ohio' frame2 del frame2['eastern'] frame2.columns pivot = frame2.pivot(index= 'year', columns='state', values='pop') pivot transpose = pivot.T transpose pop = {'Nevada': {2001: 2.4, 2002: 2.9},'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}} frame3 = pd.DataFrame(pop) frame3 frame3.T type(pop) pd.DataFrame(pop, index=[2001, 2002, 2003]) pdata = {'Ohio': frame3['Ohio'][:-1],'Nevada': frame3['Nevada'][:2]} pd.DataFrame(pdata) frame3.index.name = 'year'; frame3.columns.name = 'state' frame3 pop = {'Nevada': {2001: 2.4, 2002: 2.9},'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}} frame4 = pd.DataFrame(pop) frame4 frame4.loc[2000,'Nevada'] = 2 frame4 frame5 = pd.concat([frame4, frame4], axis=0) frame5.iloc[3,1] = 32 frame5 frame5.drop_duplicates(['Nevada']) dates = pd.date_range("20160101", periods=10, freq='D') data = np.random.random((10,3)) column_names = ['Column1', 'Column2', 'Column3'] dates data df = pd.DataFrame(data, index=dates, columns=column_names) df.head(10) df.iloc[0:11,1] df[1:3] df['20160104':'20160107'] df[(df.index>'20150101')&(df.index<'20160106')] df.query('(Column1 < Column2) & (Column1 < Column3)') df.loc['20160101':'20160102',['Column1','Column3']] df.info() df.iloc[3:5, 0:2] df.describe() df.info() df.sort_index(axis=0, ascending=True,) # inplace=True) df[sorted(df.columns)] df.sort_values(by='Column2') # + dates1 = pd.date_range("20160101", periods=6) data1 = np.random.random((6,2)) column_names1 = ['ColumnA', 'ColumnB'] dates2 = pd.date_range("20160101", periods=7) data2 = np.random.random((7,2)) column_names2 = ['ColumnC', 'ColumnD'] df1 = pd.DataFrame(data1, index=dates1, columns=column_names1) df2 = pd.DataFrame(data2, index=dates2, columns=column_names2) # - df1 df2 #https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html df1.join(df2, how='left') df1.join(df2, how='right') df1.join(df2, how='outer') df1.join(df2, how='inner') df2['ColumnA'] = df1.ColumnA+1 df1.join(df2, how='left', rsuffix='_df2') # + del df2['ColumnA'] df3 = df1.join(df2) # add a column to df to group on df3['ProfitLoss'] = pd.Series(['Profit', 'Loss', 'Profit', 'Same', 'Profit', 'Loss', 'Profit', 'Profit', 'Same', 'Loss'], index=dates) # - df3['Aluno'] = pd.Series(['Alessandra', 'Alessandra', 'Alessandra', 'Marcos', 'Ana', 'Ana', 'Marcos', 'Ana', 'Ana', 'Juliana'], index=dates) df3 grupos = df3.groupby('ProfitLoss')#.mean() grupos.mean() max(['abacaxi', 'laranja', 'maçã']) df4 = df3.groupby(['Aluno','ProfitLoss']).max() df4 df4.index.get_level_values('Aluno') df4.loc[('Ana','Profit'), 'ColumnA'] # + active="" # #### https://hackernoon.com/fundamental-python-data-science-libraries-a-cheatsheet-part-2-4-fcf5fab9cdf1 # !pip install quandl # # import quandl # # # set up the 
Quandl connection # api_key = 'GETYOURAPIKEY' # quandl.ApiConfig.api_key = api_key # quandl_code = "BITSTAMP/USD" # # # get the data from the API # bitcoin_data = quandl.get(quandl_code, start_date="2017-01-01", end_date="2018-01-17", returns="numpy") # # # set up the data in pandas # df = pd.DataFrame(data=bitcoin_data, columns=['Date', 'High', 'Low', 'Last', 'Bid', 'Ask', 'Volume', 'VWAP']) # # # make the 'Date' column the index # df.set_index('Date', inplace=True) # # # find a rolling 30 day average # df['RollingMean'] = df['Last'].rolling(window=30).mean().shift(1) # # # label when the last price is less than L30D average # df['Buy'] = df['Last'] < df['RollingMean'] # # # create a strategic trading DataFrame # trading_info = df.loc[:,['Last', 'RollingMean', 'Buy']] # # trading_info.tail(10) # lets look at last 10 days # - # Plotting with Pandas # -- dfvote = pd.read_excel(os.path.join('../Data','votesurvey.xls'), 'votesurvey') dfvote.head() dfvote.sort_values(by=['Age','Expected salary'], ascending=[True, False])[0:10] for i in dfvote.index: dfvote.loc[i, 'Random'] = np.random.randint(10) dfvote.Random = dfvote.Random.astype(int) dfvote.head() # Histogram fig=plt.figure(figsize=(15,8)) #Create one or more subplots using add_subplot, because you can't create blank figure ax = fig.add_subplot(1,1,1) #Variable ax.hist(dfvote['Age'],bins = 7) # Here you can play with number of bins Labels and Tit plt.title('Age distribution') plt.xlabel('Age') plt.ylabel('#Citizens') plt.show() # Box Plot dfvote.Age.hist(); fig=plt.figure() ax = fig.add_subplot(1,1,1) ax.boxplot(dfvote['Age']) plt.show() dfvote.boxplot(by='Gender'); # Violin Plot (using Seaborn) # Obs: Seaborn changes some settings on matplotlib An alternative is to import this way: # import seaborn.apionly as sns import seaborn as sns sns.violinplot(dfvote['Age'], dfvote['Gender']) sns.despine() # Bar Chart #var = df.groupby('Gender').Random.sum() #grouped sum of at Gender level var = dfvote.groupby('Gender').Random.mean() #grouped mean of at Gender level fig = plt.figure() ax1 = fig.add_subplot(1,1,1) ax1.set_xlabel('Gender') ax1.set_ylabel('Sum of Sales') ax1.set_title("Gender wise mean of ") #sum or mean var.plot(kind='bar'); # Line Chart var = dfvote.groupby('Candidate').Age.mean() fig = plt.figure() ax1 = fig.add_subplot(1,1,1) #ax1.set_xlabel('Candidate') ax1.set_ylabel('Mean of Ages') ax1.set_title("Candidate wise mean of ages") var.plot(kind='line') # Stacked Column Chart var = dfvote.groupby(['Age','Gender']).Random.sum() var.unstack().plot(kind='bar',stacked=True, color=['red','blue'], grid=False) # Scatter Plot fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.scatter(dfvote['Age'],dfvote['Random'],s=dfvote['Expected salary']/500) #You can also add more variables here to represent color and size. plt.show() dfvote.plot.scatter(x='Age',y='Expected salary', c='Random', cmap='jet'); # Bubble Plot fig = plt.figure() ax = fig.add_subplot(1,1,1) # Added third variable income as size of the bubble ax.scatter(dfvote['Age'],dfvote['Expected salary'], s=dfvote['Random']**3) plt.show() # Pie chart var=dfvote.groupby(['Gender']).sum().stack() temp=var.unstack() type(temp) x_list = temp['Random'] label_list = temp.index #The pie chart is oval by default. 
To make it a circle use plt.axis("equal") plt.axis("equal") #To show the percentage of each pie slice, pass an output format to the autopctparameter plt.pie(x_list,labels=label_list,autopct="%1.1f%%") plt.title("Gender Distribution") plt.show() # Heat Map # + #Generate a random number, you can refer your data values also data = np.random.rand(8,2) rows = list('12345678') #rows categories columns = list('MF') #column categories fig,ax=plt.subplots() #Advance color controls ax.pcolor(data,cmap=plt.cm.Reds,edgecolors='k') # Here we position the tick labels for x and y axis ax.set_xticks(np.arange(0,2)+0.5) ax.set_yticks(np.arange(0,8)+0.5) ax.xaxis.tick_top() ax.yaxis.tick_left() #Values against each labels ax.set_xticklabels(columns,minor=False,fontsize=20) ax.set_yticklabels(rows,minor=False,fontsize=20) plt.show() print(data) # - # ## An example: Baby names in the USA names1880 = pd.read_csv(os.path.join('../Data','names','yob1880.txt'), names=['name', 'sex', 'births']) names1880[0:20] #names1880.head() names1880.tail() names1880.groupby('sex').births.sum() years = range(1880, 2013) pieces = [] columns = ['name', 'sex', 'births'] for year in years: path = os.path.join('../Data','names','yob{}.txt'.format(year)) frame = pd.read_csv(path, names=columns) frame['year'] = year pieces.append(frame) # Concatenate everything into a single DataFrame names = pd.concat(pieces, ignore_index=True) names names.head(10) names.groupby('sex').births.sum() total_births = names.pivot_table('births', index='year', columns='sex', aggfunc=sum) total_births total_births.tail() total_births.plot(title='Total births by sex and year', figsize=(12,6)); # + def add_prop(group): # Integer division floors births = group.births.astype(float) group['percent'] = births / births.sum() return group names = names.groupby(['year', 'sex']).apply(add_prop) # - #names names.head() names[names.percent > 0.085] names[names.name.str.startswith('Renat')] np.allclose(names.groupby(['year', 'sex']).percent.sum(), 1) # + def get_top1000(group): return group.sort_values(by='births', ascending=False)[:1000] grouped = names.groupby(['year', 'sex']) top1000 = grouped.apply(get_top1000) # - #top1000 pd.options.display.float_format = '{:,.3f}'.format top1000[:15] boys = top1000[top1000.sex == 'M'] girls = top1000[top1000.sex == 'F'] Walter_names = boys[boys.name=='Bob'] Walter_names[:10] total_births_top1000 = top1000.pivot_table('births', index='year', columns='name', aggfunc=sum) subset = total_births_top1000[['John', 'Harry', 'Mary', 'Marilyn']] subset.plot(subplots=True, figsize=(12, 10), grid=False, title="Number of births per year") table = top1000.pivot_table('percent', index='year', columns='sex', aggfunc=sum) table.plot(title='Sum of table1000.percent by year and sex', yticks=np.linspace(0, 1.2, 13), xticks=range(1880, 2020, 10)) df = boys[boys.year == 2010] prop_cumsum = df.sort_values(by='percent', ascending=False).percent.cumsum() # + def get_quantile_count(group, q=0.5): group = group.sort_values(by='percent', ascending=False) return group.percent.cumsum().values.searchsorted(q) + 1 prop_cumsum.values.searchsorted(0.5) # - diversity = top1000.groupby(['year', 'sex']).apply(get_quantile_count) diversity = diversity.unstack('sex') diversity.head() diversity.plot(title="Number of popular names in top 50%") # extract last letter from name column get_last_letter = lambda x: x[-1] last_letters = names.name.map(get_last_letter) last_letters.name = 'last_letter' table = names.pivot_table('births', index=last_letters, columns=['sex', 
'year'], aggfunc=sum) subtable = table.reindex(columns=[1910, 1960, 2010], level='year') subtable.head() subtable.sum() letter_prop = subtable / subtable.sum().astype(float) fig, axes = plt.subplots(2, 1, figsize=(10, 8)) letter_prop['M'].plot(kind='bar', rot=0, ax=axes[0], title='Male') letter_prop['F'].plot(kind='bar', rot=0, ax=axes[1], title='Female',legend=False) letter_prop = table / table.sum().astype(float) dny_ts = letter_prop.ix[['d', 'n', 'y'], 'M'].T dny_ts.head() dny_ts.plot() all_names = top1000.name.unique() mask = np.array(['lesl' in x.lower() for x in all_names]) lesley_like = all_names[mask] lesley_like filtered = top1000[top1000.name.isin(lesley_like)] filtered.groupby('name').births.sum() table = filtered.pivot_table('births', index='year', columns='sex', aggfunc='sum') table = table.div(table.sum(1), axis=0) table.tail() table.plot(style={'M': 'k-', 'F': 'k--'}) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="1yKhLP03hArF" # As we are not developing a chit chat but a virtual conversation between a user and an expert, we will make some assumptions: # # • The user will not use emoji or abbreviations in the same sentence with an important request. # # • The user might use emoji/abbreviations as a reaction/reply or in a follow up discussion. # # • The user might use emoji/abbreviations in a more complex input and a reply might be regarded as emotional and thus might help the conversation going forward. # # Taking this into consideration, we will implement a secondary flow that will have specific answers/policies and can exist together with the main flow, or standalone if no other input is provided by the user. # # Important: Abbreviations that are not considered as specific to chit chat needs will be treated in the main flow, together with the sentence-intent. For example: abbreviation for institutes that are NER. # # + id="9QrCI1_cg6TF" # IN: inputs from Auto-correctand Procesing I # OUT: send identified emoji/abbreviations to the following pipelines: Untrained NIU/Reaction Analysis # + [markdown] id="HK9_k_wXhXjI" # Objectives: # # • Gathering the emoji/abbreviations from auto-correct, processing and mark them for Untrained NIU/Reactions; # # Language specificities: no; # # Dependencies: Auto-correct /Processing I/Untrained NIU/ Reaction analysis; # # Database/ Vocabularies needed: Emoji/Abbreviations vocabularies; # # + id="KFSoniTkhqp6" # To dos: # 1. Mark sentences with emoji/abbreviations for Untrained answers. # 2. Classify all the emoji/abbreviations in 6 categories (sad, blink, kiss, smile, cool, laughing out loud). # 3. Sad and Smile will be sent to the Reaction analysis pipeline. # 4. The rest will be sent to Untrained NIU. 
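# A minimal sketch of the routing described in the to-do list above. The category map and the two pipeline names below are illustrative placeholders, not the project's real emoji/abbreviation vocabularies.
# +
EMOJI_CATEGORIES = {
    ":(": "sad", ":'(": "sad",
    ";)": "blink",
    ":*": "kiss",
    ":)": "smile", ":-)": "smile",
    "8)": "cool",
    "lol": "laughing out loud",
}

def route_token(token):
    """Return (category, target pipeline) for an emoji/abbreviation token."""
    category = EMOJI_CATEGORIES.get(token.lower())
    if category is None:
        return None, None                      # not a chit-chat token
    if category in ("sad", "smile"):
        return category, "Reaction analysis"   # to-do item 3
    return category, "Untrained NIU"           # to-do item 4

print(route_token(":)"))   # ('smile', 'Reaction analysis')
print(route_token("lol"))  # ('laughing out loud', 'Untrained NIU')
# -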
# + [markdown] id="0ouZs8mAhzeR" # Use adapted code # # Code example, but please adapt to the objectives # # # # # # + id="E44ouNp-jnBK" # https://github.com/amanchadha/coursera-deep-learning-specialization/blob/master/C5%20-%20Sequence%20Models/Week%202/Emojify/Emojify%20-%20v2.ipynb # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### This notebook compares Turn Point Runs using 1000, 2000 and 3000 particles # + import matplotlib.pyplot as plt import numpy as np import pandas as pd import netCDF4 as nc import seaborn as sns import matplotlib.colors as mcolors from matplotlib.colors import LinearSegmentedColormap from mpl_toolkits.axes_grid1 import make_axes_locatable import matplotlib.patches as mpatches import cmocean as cm import glob import os import xarray as xr import datetime from salishsea_tools import viz_tools, tidetools, geo_tools, gsw_calls, wind_tools import pickle from IPython.core.display import display, HTML display(HTML("")) from sys import platform # %matplotlib inline if platform == 'darwin': bathy_dir = '/Users/rmueller/Data/SalishSeaCast/grid/' results_dir = '/Users/rmueller/Projects/MIDOSS/results/nbr_partic_test/' out_dir = '/Users/rmueller/Documents/Presentations/Graphics/MIDOSS/SOILED/nbr_partic/' else: display('Update file paths for oceans machine') # - # define number of particles and the compute time for each run nbr_partic = [1000,2000,3000] compute_time = [102, 104, 102] cpu_time = [5934.55, 6025.39, 5670.18] tp_akns={} # this is the way to define a new dictionary thick2D={} # one dictionary is never enough thick2D_t={} # every house ought to have at least three! 
for nparticles in nbr_partic: tp_akns[nparticles] = xr.open_dataset(f'{results_dir}Lagrangian_AKNS_crude_TP_01aug17-08aug17_akns{nparticles}.nc') thick2D[nparticles] = tp_akns[nparticles].Thickness_2D thick2D_t[nparticles] = thick2D[nparticles].sum(dim='time') # ### Use sum of thickness 2D to help discern whether the mass discrepency is an output error or a mass calculation error [nt,ny,nx] = thick2D[1000].shape np.sum(thick2D[1000][1,:,:].values) # + thick2D_timeseries = {} for nparticles in nbr_partic: thick2D_timeseries[nparticles] = np.array([]) for time_index in range(nt): timeseries_value = np.sum(thick2D[nparticles][time_index,:,:].values) # The first value of sum is negative so I explicitely calculate it here to make sure the results isn't a funky issue with the append function, which it isn't if time_index == 0: thick2D_timeseries[nparticles] = timeseries_value else: thick2D_timeseries[nparticles] = np.append(thick2D_timeseries[nparticles], timeseries_value) # - # + fs = 20 fig = plt.figure(figsize=(20,20)) ax1 = fig.add_subplot(131) ax2 = fig.add_subplot(132) ax3 = fig.add_subplot(133) # convert xarray into numpy using ".values" in order to gain access to different visualization tools mappable = ax1.pcolormesh(thick2D_t[1000].values, vmin = 0, vmax = 40, cmap = cm.cm.balance) mappable = ax2.pcolormesh(thick2D_t[2000].values, vmin = 0, vmax = 40, cmap = cm.cm.balance) mappable = ax3.pcolormesh(thick2D_t[3000].values, vmin = 0, vmax = 40, cmap = cm.cm.balance) # add land mask to ax1 and ax2 viz_tools.plot_land_mask(ax1,'/Users/rmueller/Projects/MIDOSS/MIDOSS-MOHID-grid/AfterNEMOBathy201702.nc', color = 'burlywood') viz_tools.plot_land_mask(ax2,'/Users/rmueller/Projects/MIDOSS/MIDOSS-MOHID-grid/AfterNEMOBathy201702.nc', color = 'burlywood') viz_tools.plot_land_mask(ax3,'/Users/rmueller/Projects/MIDOSS/MIDOSS-MOHID-grid/AfterNEMOBathy201702.nc', color = 'burlywood') ax1.set_title(r'1000 ($\sum$ thickness =' + f'{thick2D_timeseries[1000][nt-1]:5.1f})', fontsize = fs) ax2.set_title(r'2000 ($\sum$ thickness =' + f'{thick2D_timeseries[2000][nt-1]:5.1f})', fontsize = fs) ax3.set_title(r'3000 ($\sum$ thickness =' + f'{thick2D_timeseries[3000][nt-1]:5.1f})', fontsize = fs) ax1.set_ylim(200,400) ax2.set_ylim(200,400) ax3.set_ylim(200,400) ax1.set_xlim(50,350) ax2.set_xlim(50,350) ax3.set_xlim(50,350) ax1.set_aspect(aspect=1) ax2.set_aspect(aspect=1) ax3.set_aspect(aspect=1) # - # ### Check to make sure that the mass of oil in these three cases is the same # + #### Load header information with open(f'{results_dir}resOilOutput_1000.sro', 'r') as the_file: all_data = [line.strip() for line in the_file.readlines()] header = all_data[4] # Order header into list array by splitting up string header_arr = [] header_arr = header.split(' ') # Remove emtpy entries from list header_arr = np.asarray([x for x in header_arr if x != '']) # load mass balance area values mass_bal = {} mass_bal_infile = {} for nparticles in nbr_partic: mass_bal_infile[nparticles] = f'{results_dir}resOilOutput_{nparticles}.sro' mass_bal[nparticles] = np.genfromtxt(mass_bal_infile[nparticles], skip_header=6, skip_footer=4) # - header_arr #number of time points and parameters [nt,npoints] = mass_bal[1000].shape # indices for mass balance (etc) i_mevaporated = 15 i_mdispersed = 18 i_mdissolved = 24 i_mbio = 37 i_voloilbeached = 8 i_volumebeached = 9 i_volumeoil = 10 i_volume = 11 i_vwatercontent = 34 i_msedimented = 21 i_mwatercontent = 33 i_density = 35 i_viscosity = 36 i_area = 12 i_theorical_area = 13 # analyte mass (I thought 
this was dissolved but the numbers are more reflective of dispersed) i_analytemass0 = 42 i_analytemass1 = 43 i_analytemass2 = 44 i_analytemass3 = 45 i_analytemass4 = 46 # biodegredation i_bio0 = 47 i_bio1 = 48 i_bio2 = 49 i_bio3 = 50 i_bio4 = 51 # plot up volume plt.plot(mass_bal[1000][range(nt),i_volume],'b') plt.plot(mass_bal[2000][range(nt),i_volume]) plt.plot(mass_bal[3000][range(nt),i_volume],'g') plt.ylabel(header_arr[i_volume]) plt.xlabel('Hours after spill ') plt.legend(['1000', '2000', '3000']) plt.title('NBR_PARTIC comparison') plot_mass = [i_mevaporated, i_mdispersed, i_mdissolved, i_mbio] for imass in plot_mass: for nparticles in nbr_partic: plt.plot(mass_bal[nparticles][range(nt),imass]) plt.ylabel('Mass (Tonnes)') plt.xlabel('Time after oil release (hours) ') plot_mass = [i_mevaporated, i_mdispersed, i_mdissolved, i_mbio] for imass in plot_mass: plt.figure() for nparticles in nbr_partic: plt.plot(mass_bal[nparticles][range(nt),imass]) plt.legend(['1000','2000','3000']) plt.ylabel('Mass (Tonnes)') plt.xlabel('Time after oil release (hours) ') plt.title(header_arr[imass]) # ### Evaluate results from changing Mass used in MDispersedDT calculation from Me%Var%MassOil to Me%Var%MassINI # + nbr_partic = [1000,2000, 3000, 3001, 3002] # load mass balance area values mass_bal = {} mass_bal_infile = {} for nparticles in nbr_partic: if nparticles==3001: mass_bal_infile[nparticles] = f'{results_dir}resOilOutput_3000_fix1.sro' mass_bal[nparticles] = np.genfromtxt(mass_bal_infile[nparticles], skip_header=6, skip_footer=4) elif nparticles==3002: mass_bal_infile[nparticles] = f'{results_dir}resOilOutput_3000_fix2.sro' mass_bal[nparticles] = np.genfromtxt(mass_bal_infile[nparticles], skip_header=6, skip_footer=4) else: mass_bal_infile[nparticles] = f'{results_dir}resOilOutput_{nparticles}.sro' mass_bal[nparticles] = np.genfromtxt(mass_bal_infile[nparticles], skip_header=6, skip_footer=4) # - plot_mass = [i_mevaporated, i_mdispersed, i_mdissolved, i_mbio] for imass in plot_mass: plt.figure() for nparticles in nbr_partic: plt.plot(mass_bal[nparticles][range(nt),imass]) plt.legend(['1000','2000','3000','3000(MassINI)','3000(bio mass fix)']) plt.ylabel('Mass (Tonnes)') plt.xlabel('Time after oil release (hours) ') plt.title(header_arr[imass]) mass_bal_infile[nparticles] # #### Test whether mass issue is an output or whether the 2D thickness values also show different mass values # + ## calculate the increase in surface mass when comparing NBR_PARTIC = 2000 and 3000 to 1000. # - # The first value of this sum is negative, so I'm plotting subsequent values surfacemass_timeseries = {} for nparticles in nbr_partic[0:3]: surfacemass_timeseries[nparticles] = 500 * 440 * mass_bal[nparticles][range(nt),i_density] * thick2D_timeseries[nparticles] * 1e-9 plt.plot(surfacemass_timeseries[nparticles][1:-1]) display(f'first value < 0 for nbrpartic[{nparticles}]: {thick2D_timeseries[nparticles][0]} microns. 
Plotting subsequent values.') plt.ylabel('Integrated surface mass (tonnes)') plt.xlabel('Time after oil release (hours) ') plt.title('Spatially-integrated surface mass (tonnes) after first saved output (which is negative)') plt.legend([f'1000 (t(end) = {surfacemass_timeseries[1000][nt-1]:3.1f})', f'2000 (t(end) = {surfacemass_timeseries[2000][nt-1]:3.1f})', f'3000 (t(end) = {surfacemass_timeseries[3000][nt-1]:3.1f})' ]) # ### verify that the mass is comparable in the output from the version of NBR_PARTIC = 3000 pre- and post- bug fix to remove biodegredation mass (which is << than other mass losses; hence, the comparison ought to be comparable) tp_akns[3002] = xr.open_dataset(f'{results_dir}Lagrangian_AKNS_crude_TP_01aug17-08aug17_akns3000_fix2.nc') thick2D[3002] = tp_akns[3002].Thickness_2D thick2D_t[3002] = thick2D[3002].sum(dim='time') thick2D_timeseries_fix2 = np.array([]) for time_index in range(nt): timeseries_value = np.sum(thick2D[3002][time_index,:,:].values) thick2D_timeseries_fix2 = np.append(thick2D_timeseries_fix2, timeseries_value) surfacemass_timeseries_fix2 = 500 * 440 * mass_bal[3000][range(nt),i_density] * thick2D_timeseries_fix2 * 1e-9 surfacemass_timeseries_fix2.shape plt.plot(surfacemass_timeseries_fix2[1:-1]) display(f'first value < 0 for nbrpartic[3000]: {thick2D_timeseries_fix2[0]} microns. Plotting subsequent values.') plt.ylabel('Integrated surface mass (tonnes)') plt.xlabel('Time after oil release (hours) ') plt.title('Spatially-integrated surface mass (tonnes) for run with biodegredation mass removed from MassOil') plt.legend([f'3000-biofix (t(end) = {surfacemass_timeseries_fix2[nt-1]:3.1f})']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.3 64-bit (''3.8.3'': pyenv)' # name: python_defaultSpec_1595582370349 # --- import numpy as np fan = 0.0 rain = 1.0 people = 0.0 def activation_function(x): if x >= 0.5: return 1 else: return 0 def predict(fan, rain, people): inputs = np.array([fan, rain, people]) weights_input_to_hidden_1 = [0.25, 0.25, 0] weights_input_to_hidden_2 = [0.5, -0.4, 0.9] weights_input_to_hidden = np.array([weights_input_to_hidden_1, weights_input_to_hidden_2]) weights_input_to_output = np.array([-1, 1]) hidden_input = np.dot(weights_input_to_hidden, inputs) print("Hidden input: " + str(hidden_input)) hidden_output = np.array([activation_function(x) for x in hidden_input]) print("Hidden output: " + str(hidden_output)) output = np.dot(weights_input_to_output, hidden_output) print("Output: " + str(output)) return activation_function(output) == 1 # + tags=[] print("Result: " + str(predict(fan, rain, people))) # - # [Resource](https://www.youtube.com/watch?v=AZG0j0pNY-4) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pylab import rcParams from windrose import (WindroseAxes, WindAxes, WindAxesFactory) from matplotlib import pyplot as plt import matplotlib.cm as cm import matplotlib as mpl import numpy as np import pandas as pd rose_color = (.918,.576,.576) speed_color = (1.0,74.9,52.5) plt.style.use('seaborn-whitegrid') plt.ion() font = {'family' : 'monospace', 'weight' : 'medium', 'size' : 14} plt.rc('font', **font) pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) # - def plot_wind_rose(df, bins = 
np.arange(0, 25, 5), quantiles = np.arange(5,15,step=5), cmap=cm.viridis): ws = df['Speed'] wd = df['Direction'] fig = plt.figure() # windrose_axes = WindAxesFactory.create('windroseaxes', ax=ax) windrose_axes = WindroseAxes.from_ax(fig=fig) windrose_axes.bar(wd , ws, normed=True, opening=0.8, edgecolor='white', bins=bins[:-1], cmap=cmap) windrose_axes.set_yticks(quantiles) windrose_axes.set_yticklabels([str(t / 100 ) for t in quantiles]) # windrose_axes.set_legend() ax2 = fig.add_axes([0.0, .1, 0.05, 0.8]) norm = mpl.colors.BoundaryNorm(bins, cmap.N) cb = mpl.colorbar.ColorbarBase(ax2, cmap=cmap, norm=norm, spacing='proportional', extend='max', ticks=bins, format='%1i') # boundaries=bounds, cb.set_ticklabels([str(b) + 'm/s' for b in bins]) plt.subplots_adjust(top = 1, bottom = 0, right = 1, left = 0, hspace = 0, wspace = 0) plt.margins(0,0) print(bins) print(quantiles) return fig ws = np.random.random(500) * 8 + 25 wd = np.random.random(500) * 0 + 90 print(np.min(wd), np.max(wd)) ax = WindroseAxes.from_ax() ax.bar(wd, ws, normed=True, opening=0.8, edgecolor='white') ax.set_legend() df = pd.read_csv ('/home/ctripp/data/hopp/resource/loc_1_high_correlation.csv') fig = plot_wind_rose(df) plt.savefig('loc_1_high_correlation_rose.svg', format='svg') df = pd.read_csv ('/home/ctripp/data/hopp/resource/loc_2_low_correlation.csv') fig = plot_wind_rose(df) plt.savefig('loc_2_low_correlation_rose.svg', format='svg') def plot_wind_rose2(df, bins = np.arange(0, 25, 5), quantiles = np.arange(5,15,step=5), cmap=cm.viridis): ws = df['Speed'] wd = df['Direction'] # (3, 5) + (3, 1) fig = plt.figure(figsize=(3,6)) # windrose_axes = WindAxesFactory.create('windroseaxes', ax=ax) windrose_axes = WindroseAxes.from_ax(fig=fig) windrose_axes.bar(wd , ws, normed=True, opening=0.85, edgecolor='white', bins=bins[:-1], cmap=cmap) windrose_axes.set_yticks(quantiles) windrose_axes.set_yticklabels([str(t / 100 ) for t in quantiles]) # windrose_axes.set_legend() ax2 = fig.add_axes([0.05, .15, .90, .05]) norm = mpl.colors.BoundaryNorm(bins, cmap.N) cb = mpl.colorbar.ColorbarBase(ax2, cmap=cmap, norm=norm, spacing='proportional', extend='max', ticks=bins, format='%1i', orientation='horizontal') # boundaries=bounds, labels = [str(b) for b in bins] labels[-1] = labels[-1] + 'm/s' cb.set_ticklabels(labels) plt.subplots_adjust(top = 1, bottom = 0, right = 1, left = 0, hspace = 0, wspace = 0) plt.margins(0,0) print(bins) print(quantiles) return fig df = pd.read_csv ('/home/ctripp/data/hopp/resource/loc_1_high_correlation.csv') fig = plot_wind_rose2(df) plt.savefig('loc_1_high_correlation_rose.svg', format='svg') df = pd.read_csv ('/home/ctripp/data/hopp/resource/loc_2_low_correlation.csv') fig = plot_wind_rose2(df) plt.savefig('loc_2_low_correlation_rose.svg', format='svg') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:da2] # language: python # name: conda-env-da2-py # --- import numpy as np import pandas as pd from collections import defaultdict # + ### Global variables ROOT = "/users/kcochran/projects/domain_adaptation_nosexchr/" tfs = ["CTCF", "CEBPA", "Hnf4a", "RXRA"] tfs_latex_names = ["CTCF", "CEBPA", "HNF4A", "RXRA"] all_trainspecies = ["mm10", "hg38"] model_names_dict = {"mm10" : "Mouse", "hg38" : "Human"} # - # # Load Predictions and Labels # + def get_preds_file(tf, train_species, test_species): # this file is created by the 0_generate_predictions notebook return ROOT + 
"/model_out/" + tf + "_" + train_species + "-trained_" + test_species + "-test.preds.npy" def load_average_test_set_preds(test_species): # takes a while to run. preds_dict = defaultdict(lambda : dict()) # loop over mouse-trained, human-trained models, and DA mouse-trained models for train_species in all_trainspecies: for tf in tfs: # load predictions for all 5 independent model runs preds_file = get_preds_file(tf, train_species, test_species) try: preds_dict[train_species][tf] = np.mean(np.load(preds_file), axis = 1) except: print("Could not load preds file:", preds_file) return preds_dict avg_preds_human_test = load_average_test_set_preds("hg38") # + def get_test_bed_file(tf, species): # should be in BED file format # this file is specific to each tf -- the last column # should contain the binding label for each window return(ROOT + "data/" + species + "/" + tf + "/chr2.bed") def get_test_labels(tf, species = "hg38"): # This function reads in the test-data bed file # for a given species and TF and returns the binding labels # for each example in that file. labels_file = get_test_bed_file(tf, species) with open(labels_file) as f: return np.array([int(line.split()[-1]) for line in f]) def get_all_test_labels_and_indices(preds_dict, species = "hg38"): labels = dict() bound_indices = dict() for tf in tfs: # load binding labels from bed file labels_for_tf = get_test_labels(tf) # assuming the mm10 entry exists, species doesn't matter here # shape of preds array should be the same in all cases len_to_truncate_by = preds_dict["mm10"][tf].shape[0] # truncate binding labels to be multiple of batch size, like preds labels[tf] = labels_for_tf[:len_to_truncate_by] # get indices where label is bound bound_indices[tf] = np.nonzero(labels[tf])[0] return labels, bound_indices labels, bound_indices = get_all_test_labels_and_indices(avg_preds_human_test) # - # ## Find FPs, FNs, etc. # + def get_fp_fn_indices(preds_dict, bound_indices): # this function categorizes sites in the test set as being # FPs, or FNs, recording the index of each site for # each category. # we will do this for every TF/training-species combo # (assuming the sites we're looking at are all from the same genome) fp_indices = defaultdict(lambda : defaultdict(lambda : [])) fn_indices = defaultdict(lambda : defaultdict(lambda : [])) for tf in tfs: for train_species in all_trainspecies: print(tf, train_species) # using a threshold of 0.5 here to decide what is predicted as bound predicted_as_bound_indices = np.nonzero(preds_dict[train_species][tf] > 0.5)[0] # for each example the models predicted as bound... for index in predicted_as_bound_indices: # if it's not actually bound, it's a FP if not index in bound_indices[tf]: fp_indices[tf][train_species].append(index) # for each example that is actually bound... for index in bound_indices[tf]: # if it's not predicted to be bound, it's a FN if index not in predicted_as_bound_indices: fn_indices[tf][train_species].append(index) return fp_indices, fn_indices def get_site_indices(preds_dict, bound_indices): # This function uses the predictions and binding labels for # all sites in the test set to determine if sites are # both-model false positives, false negatives, mouse-model FPs, etc. 
# Here we use a definition of "differentially predicted site" # to find mouse-model FPs or FNs that requires the difference # between the mouse-model prediction and the human-model prediction # to be at least 0.5 fp_indices, fn_indices = get_fp_fn_indices(preds_dict, bound_indices) site_indices = defaultdict(lambda : dict()) for tf in tfs: # if the site is a TP for both training species, it's a "bothTP" site_indices[tf]["bothTP"] = set(tp_indices[tf][species1]).intersection(set(tp_indices[tf][species2])) # ditto for FPs and FNs site_indices[tf]["bothFP"] = set(fp_indices[tf][species1]).intersection(set(fp_indices[tf][species2])) site_indices[tf]["bothFN"] = set(fn_indices[tf][species1]).intersection(set(fn_indices[tf][species2])) # get the set of sites with sufficiently differential predictions between training species diff_pred_mm10_overpred = set(np.nonzero(preds_dict[species1][tf] - preds_dict[species2][tf] > 0.5)[0]) diff_pred_mm10_underpred = set(np.nonzero(preds_dict[species2][tf] - preds_dict[species1][tf] > 0.5)[0]) # if the mouse-trained model alone overpredicts an unbound site, it's an "mFP" site_indices[tf]["mFP"] = set(fp_indices[tf][species1]).intersection(diff_pred_mm10_overpred) # if the mouse-trained model alone underpredicts a bound site, it's an "mFN" site_indices[tf]["mFN"] = set(fn_indices[tf][species1]).intersection(diff_pred_mm10_underpred) return site_indices site_subset_indices = get_site_indices(avg_preds_human_test, bound_indices) # - # # Load Repeat Annotations # + REPEAT_TYPES = ["DNA", "LINE", "Low_complexity", "LTR", "Simple_repeat", "SINE", "Unknown"] # Removed due to < 500 instances in the test set: ["RC", "Retroposon", "RNA", "rRNA", "Satellite", "scRNA", "srpRNA", "tRNA"] def get_rmsk_file(): # this file is downloaded by an earlier script return ROOT + "data/hg38/rmsk.bed" def read_and_filter_rmsk_file(repeat_name, is_subfam = False, test_chrom = "chr2"): # This function reads in the RepeatMasker bed file, # filters for only rows listing annotations of one # repeat type, and then returns only the start and # end coordinate for each annotation. # We're assuming all the test set examples are on # one chromosome, so we don't need the first column. # assuming bed format filename = get_rmsk_file() df = pd.read_csv(filename, sep = "\t", usecols = [5, 6, 7, 10, 11], header = None) if is_subfam: df = df[df[10] == repeat_name] df = df[df[5] == test_chrom] else: df = df[df[11] == repeat_name] df = df[df[5] == test_chrom] sorted_repeat_coords = sorted(list(zip(df[6], df[7])), key = lambda tup : tup[0]) return np.array(sorted_repeat_coords) def get_repeat_and_test_set_overlap(list_a, list_b): # This function is similar to bedtools intersect, # but returns a binary yes/no for overlap for each # window in list_a. # Assumes everything's on the same chromosome # Assumes inputs are lists of 2-ples: (start, stop) # output is list with len == len(list_a) matches = [] b_index = 0 for a_item in list_a: a_start, a_end = a_item while True: if b_index >= len(list_b): matches.append(False) break b_start, b_end = list_b[b_index] # the -1 is because bed files are 1-indexed if b_start > a_end - 1: matches.append(False) break elif b_end <= a_start: b_index += 1 else: matches.append(True) break assert len(matches) == len(list_a) return np.array(matches) def get_test_bed_coords(species = "hg38"): # This function loads in the bed file for the test set # and keeps only the start and end coords for each entry. 
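# Quick toy check of the sweep in get_repeat_and_test_set_overlap() defined
# above (assumes that function and numpy are available); coordinates are
# (start, end) tuples on a single chromosome, sorted by start.
_windows = [(0, 100), (100, 200), (200, 300)]
_repeats = [(150, 180), (250, 400)]
print(get_repeat_and_test_set_overlap(_windows, _repeats))
# expected: [False  True  True]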
# Here we assume the test set is 1 chromosome # later analysis will assume the coords are sorted, # as in `sort -k1,1 -k2,2n $bed_file` # TF doesn't matter here because we're not using labels test_bed = get_test_bed_file(tfs[0], species) df = pd.read_csv(test_bed, sep = "\t", usecols = [1, 2], header = None) return df.to_numpy() def get_all_repeat_labels_and_indices(): all_windows_coords = get_test_bed_coords() repeat_labels = dict() repeat_indices = dict() for repeat_type in REPEAT_TYPES: print(repeat_type) repeat_type_coords = read_and_filter_rmsk_file(repeat_type) # filtering for repeat types with at least 500 instances # in the test set, so we don't get incorrectly extreme results assert len(repeat_type_coords) > 500, (repeat_type, len(repeat_type_coords)) repeat_labels[repeat_type] = get_repeat_and_test_set_overlap(all_windows_coords, repeat_type_coords) repeat_indices[repeat_type] = set(np.nonzero(repeat_labels[repeat_type])[0]) return repeat_labels, repeat_indices repeat_labels, repeat_indices = get_all_repeat_labels_and_indices() # + def calc_repeat_fracs_from_site_overlap(bound_indices, labels, site_subset_indices, repeat_indices): # This function calculates what fraction of sites are overlapping a given repeat type, # for various categories of sites. The output is returned in nested dictionaries, one # for each TF, because the site categorizations are specific to the TF. repeat_fracs = defaultdict(lambda : dict()) for tf in tfs: # Bound site repeat fraction = (# bound sites overlapping repeat) / (# bound sites) num_bound_sites_with_repeat = len(set(bound_indices[tf]).intersection(repeat_indices)) repeat_fracs[tf]["bound"] = num_bound_sites_with_repeat / len(bound_indices[tf]) # this arithmetic reverses binary-numeric labels (0 is now 1, 1 is now 0) num_unbound_sites = sum(labels[tf] * -1 + 1) # Unbound site repeat fraction = (# unbound sites overlapping repeat) / (# unbound sites) # where (# unbound sites overlapping repeat) = (# repeat-overlap sites in test set not in bound site set) num_unbound_sites_with_repeat = len(repeat_indices.difference(set(bound_indices[tf]))) repeat_fracs[tf]["unbound"] = num_unbound_sites_with_repeat / num_unbound_sites # for each of the specific categories of sites we're interested in... # (e.g. "false positives", "mouse-model false negatives") for site_type in site_subset_indices[tf].keys(): # calc total # of sites in this category num_sites = len(site_subset_indices[tf][site_type]) if num_sites > 0: # calc # of sites in this category that overlap the given repeat type num_sites_with_repeat = len(site_subset_indices[tf][site_type].intersection(repeat_indices)) # finally, calc fraction of sites in this category that overlap the repeat type repeat_fracs[tf][site_type] = num_sites_with_repeat / num_sites else: repeat_fracs[tf][site_type] = np.nan return repeat_fracs def generate_all_repeat_fracs_for_table(bound_indices, labels, site_subset_indices, repeat_indices): # This function creates the dictionary needed for the print_full_table() function below, # where each key is a repeat type and the value is nested dictionaries of the fraction # of sites overlapping that repeat type for a given TF (since sets of sites, such as # "bound site" and "mouse-model false positive", are different for each TF). 
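# Toy illustration (synthetic index sets) of the overlap fraction computed
# below: size of (site category intersected with repeat-overlapping windows)
# divided by the size of the site category.
_bound_sites = {0, 1, 2, 3}
_repeat_windows = {2, 3, 7, 9}
print(len(_bound_sites & _repeat_windows) / len(_bound_sites))   # 0.5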
repeat_fracs = dict() for repeat_type in REPEAT_TYPES: repeat_fracs[repeat_type] = calc_repeat_fracs_from_site_overlap(bound_indices, labels, site_subset_indices, repeat_indices[repeat_type]) return repeat_fracs all_repeat_fracs = generate_all_repeat_fracs_for_table(bound_indices, labels, site_subset_indices, repeat_indices) # + def fix_repeat_name(repeat_name): # If a repeat name has an underscore in it, this will # mess up the latex formatting, so we replace the # underscores with spaces and then capitalize the first # letter of each word in the repeat name if "_" in repeat_name: return " ".join(repeat_name.split("_")).title() return repeat_name def print_full_table(all_repeat_fracs, header = None, row_order = None): # This function prints a fully latex-formatted table, made up of # one sub-table for each repeat type in the all_repeat_fracs dict. # The columns of the table are given below in header, and the # rows of the sub-tables are the TFs. print(r'\begin{table*}{') print(r'\setlength{\tabcolsep}{0.8em}') print(r'\centering \begin{tabular}{@{}ccccccc@{}}\toprule') if header is None: header = "TF & Bound & FN (Both Models) & FN (Mouse Only) & Unbound & FP (Both Models) & FP (Mouse Only)" col_order = ["bound", "bothFN", "mFN", "unbound", "bothFP", "mFP"] print(header + r' \\\midrule') if row_order is None: row_order = tfs last_row = row_order[-1] last_repeat_type = REPEAT_TYPES[-1] for repeat_type in REPEAT_TYPES: print(r'\multicolumn{7}{c}{\textbf{' + fix_repeat_name(repeat_type) + r'}} \\\midrule') repeat_fracs = all_repeat_fracs[repeat_type] for row_key in row_order: row = [repeat_fracs[row_key][col] for col in col_order] row_as_str = ["%0.1f" % (100 * num) + r'\%' for num in row] row_as_str[-1] = r'\textbf{' + row_as_str[-1] + r'}' tf_fancy_name = tfs_latex_names[tfs.index(row_key)] if row_key is not last_row: print(tf_fancy_name + " & " + " & ".join(row_as_str) + r' \\') else: if repeat_type == last_repeat_type: print(tf_fancy_name + " & " + " & ".join(row_as_str) + r' \\\bottomrule') else: print(tf_fancy_name + " & " + " & ".join(row_as_str) + r' \\\midrule') print(r'\end{tabular}}{}') print(r'\captionof{table}{Percent of windows overlapping various RepeatMasker-defined ' + \ 'repeat elements, for different categories of genomic windows from the held-out test set. ' + \ 'Only RepeatMasker repeat classes with at least 500 distinct annotations within the test ' + \ 'set are shown. FPs: false positives. FNs: false negatives. Mouse Only: specific to' + \ r'mouse-trained models. 
See Methods for more details on site categorization.}') print(r'\end{table*}') print_full_table(all_repeat_fracs) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:tf] # language: python # name: conda-env-tf-py # --- # + import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # + tf.reset_default_graph() images, labels = mnist.train.next_batch(256) weights1 = tf.get_variable("weights1", (784, 100)) layer1 = tf.nn.relu(images @ weights1) weights2 = tf.get_variable("weights2", (100, 100)) layer2 = tf.nn.relu(layer1 @ weights2) weights3 = tf.get_variable("weights3", (100, 10)) outputs = layer2 @ weights3 loss = tf.losses.softmax_cross_entropy(labels, outputs) train_op = tf.train.AdamOptimizer().minimize(loss) total_steps = 100 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for step in range(total_steps): sess.run([train_op]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Analysis Classes - a Unified Interface to Analysis Results # # MVG comes with a set of analysis classes which provide a unified interface to analysis results *irrespective of the specific feature*. Note that the analysis classes are helper classes, as such the mvg class does *not* depend on them. # # Except for step (1) requesting the specific feature analysis, the following generic workflow holds: # # 1. Request a specific analysis # 2. Retrieve results and parse them into an analysis_class object # 3. Use generic methods like plot(), summary() or to_df() # # One application is to use the analysis classes interactively from a Python REPL session. # # #### Or in (pseudo) code # ```python # result = parse_results(session.get_analysis_results(request_id)) # call API to get results # result.plot() # plot results # result.summary() # print summary table # df = result.to_df() # convert to DataFrame # result.save_pkl() # save to pickle file # ``` # # ### Prerequisites # For running the examples in this notebook: # # 1. Installed mvg package # 2. A token for API access from Viking Analytics # 3. The database needs to be populated with our example assets. This can be achieved by running the ["Sources and Measurement"](2-sources_and_measurements.ipynb) example. # ## Example flow # # ### Importing the required packages, classes and functions # + import os from mvg import MVG, analysis_classes # mvg library with python bindings to mvg-API from mvg.analysis_classes import parse_results # analysis classes # - # ### Create a Session and test API access # # Note that the `TOKEN` is used both for authorization and authentication. Thus, each unique token represents a unique user and each user has their own unique database on the Viking Analytics MultiViz Vibration service. 
# # **You need to insert your token received from Viking Analytics here:** # + tags=["parameters"] # Replace by your own Token TOKEN = os.environ["TEST_TOKEN"] # use our own token ENDPOINT = "https://api.beta.multiviz.com" # - session = MVG(ENDPOINT, TOKEN) session.check_version() # Check if API is accessible # + [markdown] pycharm={"name": "#%% md\n"} # ### Running the Analyses # # Once the API session is live, we start by checking if the source u0001 we will use is available in the database: # + pycharm={"name": "#%%\n"} SOURCE_ID = "u0001" session.get_source(SOURCE_ID) # + [markdown] pycharm={"name": "#%% md\n"} # ### Requsting the Analysis # # We will now request an analysis (first two lines, uncomment one of them) and wait for the results to become available. # # The results as returned will be stored in a dictionary named `raw_result`. The raw results are shown in the results cell, mainly to show that they are not optimized for readability or interpretation. # + pycharm={"name": "#%%\n"} # Specifc part : Select one of two anlysis here by un/commenting selected_feature = "KPIDemo" # selected_feature = "ModeId" # Generic Part: request analysis and wait for results analysis_request = session.request_analysis(SOURCE_ID, selected_feature) print(f"Waiting for {analysis_request}") session.wait_for_analyses([analysis_request["request_id"]]) # Generic Part: Displaying unparsed results raw_result = session.get_analysis_results(analysis_request["request_id"]) raw_result # + [markdown] pycharm={"name": "#%% md\n"} # ### Showing and Browsing the results using analysis_classes # To make the results more accessible, we'll use the analysis_classes. The parse_results function will take the raw_results of (any) analysis and represent them in a python object with a number of convenience methods for summarising, plotting and exporting. For the full list of provided methods check __[the documentation](https://vikinganalytics.github.io/mvg/content/utilities_reference/analysis_classes.html)__. # # The parse function will automatically determine the kind (feature) of analysis based on the raw_results. # Once the results are parsed, we can summarize them using the summary() method *irrespective of which analysis them stem from*. To verify this you can rerun the cell above by selecting another feature for the analysis. # # #### Timestamps # The Vibration API requires timestamps to be represented in EPOCH time. To display human interpertable timestamps a timezone and a time unit (specifying if the timestamps are seconds 's' or milliseconds 'ms' from EPOCH). This information can be given in the parse_results calls 2nd and 3rd argument). If they are left blank EPOCH times are kept. # When exporting the results to a DataFrame, a column called "datetime" will be appended to show the human interpretable times. # + pycharm={"name": "#%%\n"} # Parse result = parse_results(raw_result, "Europe/Stockholm", "s") # result = parse_results(raw_result) # show only raw timestamps # Show summary summary = result.summary() # - # ### Plotting # For visual representation of the results there is the 'plot' method: # # Please not that when plotting the results for the KPIDemo feature, one selects the KPI to be displayed by passing the parameter `"kpi"`. If this parameter is not included, the plot function will display the results of the first KPI after the timestamps, which is the RMS value. 
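#
# As a hedged illustration (not part of the original example), a specific KPI could be
# selected by passing its name via the `kpi` parameter described above. The column name
# "rms" below is an assumption -- check the columns of `result.to_df()` for the KPI
# names your analysis actually returned:
#
# ```python
# result.plot(kpi="rms")  # hypothetical KPI name; verify against the to_df() columns
# ```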
# + pycharm={"name": "#%%\n"} result.plot() # + [markdown] pycharm={"name": "#%% md\n"} # ### Export results to DataFrame # The to_df() method will export results to a DataFrame. Note that the format of the DataFrame depends on the specific analysis and that not all of the results can be represented as a data frame. # + pycharm={"name": "#%%\n"} result.to_df() # + [markdown] pycharm={"name": "#%% md\n"} # ### Full results # In case the full server results are needed in the form they were returned, we can obtain them with the results() method: # + pycharm={"name": "#%%\n"} result.results() # - # ## Black Sheep Analysis example # # The BlackSheep is a population analysis and has a somewhat different call signature as it requires a number of assets to be submitted to analysis. # + pycharm={"name": "#%%\n"} # Specific signature for BlackSheep POPULATION_SOURCES = ["u0001","u0001","u0001","u0001","u0001","u0001","u0001","u0001","u0001","u0004","u0002","u0003","u0001"] analysis_request = session.request_population_analysis(POPULATION_SOURCES, "BlackSheep", parameters={"atypical_threshold": 0.15}) # Generic part, same as above print(f"Waiting for {analysis_request}") session.wait_for_analyses([analysis_request["request_id"]]) raw_result = session.get_analysis_results(analysis_request["request_id"]) raw_result # + [markdown] pycharm={"name": "#%% md\n"} # ### Using the analysis classes # We use exactly the same code as above to inspect the results: # - # Parse blacksheep_result = analysis_classes.parse_results(raw_result, "Europe/Stockholm", "s") # Show summary blacksheep_result.summary() blacksheep_result.plot() # ### Serializing # Finally, we can save the object including the results to pickle. If no name is given it is saved under the name `".pkl"`. # + pycharm={"name": "#%%\n"} blacksheep_result.save_pkl() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ## Overview: # This notebook demonstrates how to leverage different data formats for OMXWare results for downstream analysis. 
# # + # %matplotlib inline import json from IPython.display import display import omxware # --- Authentication options --- # # generate token with OMXWare user name and password (likely done once and then stored in file, see usage below) # token = omxware.get_token('harsha', '') # fill in password to initiate token # or # use previously generated token loaded from file token_path = "./omxware_collaborative_work/super_awesome_token" # update for your path to the token file you create f = open(token_path) token = f.readline() # provide your token to authenticate with OMXWare omx = omxware.omxware(token) # - # ### Retrieve gene data from OMXWare as a Pandas Dataframe # + search_string = 'sporulation' response = omx.genes(gene_name=search_string, page_size=25) #total_results = response.total_results() #print(total_results) results_df = response.results(type='df') results_df.head() # - # ### Distribution of Genes by Genera response.show_facets(name='genera', topN=7) # ### Retrieve gene data from OMXWare as JSON results_json = response.results(type='json') print(json.dumps(results_json[:3], indent=4, sort_keys=True)) # ### Retrieve gene data from OMXWare as an Object # + results_list = response.results(type='list') # By default, the API returns a `list` print("Returns: List of {} objects \nResults: {}\n".format(response.type(), response.total_results()) ) gene = results_list[0] print("Id \t\t=> " + gene.id()) print("Name \t\t=> " + gene.name()) print("Sequence \t=> " + gene.sequence()[:100] + "...") print("Sequence length => " + str(gene.sequence_length())) print("\n\n JSON:") print(gene.json()) # - # ### Retrieve gene data from OMXWare as FASTA new_gene_object = omx.genes(ids='00054a98f8ddd95e3f46d9d757137284').results(type='fasta') print(new_gene_object) # ### Retrieve Gene query results as fasta results_fasta = response.results(type='fasta') print(results_fasta) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # In this example, we load up two corpora - one for training a classifier, and one for testing it. # # We preprocess the two corpora by extracting text parses for the utterances in them. Following this, we extract politeness features from these text parses and train a politeness classifier on these politeness features from the training corpus. We then use the trained classifier to compute the average politeness score of utterances from two subreddits (r/aww and r/politics) in the testing corpus. 
import convokit from convokit import Corpus, download, TextParser, PolitenessStrategies, Classifier train_corpus = Corpus(filename=download('wiki-politeness-annotated')) test_corpus = Corpus(filename=download('reddit-corpus-small')) # ## Step 1: Preprocessing parser = TextParser() parser.transform(train_corpus) parser.transform(test_corpus) # ## Step 2: Feature extraction ps = PolitenessStrategies() ps.transform(train_corpus) ps.transform(test_corpus) # ## Step 3: Analysis clf = Classifier(obj_type='utterance', pred_feats=['politeness_strategies'], labeller=lambda utt: utt.meta['Binary']==1) clf.fit(train_corpus) clf.transform(test_corpus) aww_vals = clf.summarize(test_corpus, selector=lambda utt: utt.meta['subreddit']=='aww') politics_vals = clf.summarize(test_corpus, selector=lambda utt: utt.meta['subreddit']=='politics') print(aww_vals['pred_score'].mean()) print(politics_vals['pred_score'].mean()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] toc=true #

# Table of Contents
    # - # Upload a notebook to connect that: # # - Uses your Planck function (or mine) and numpy.diff to find # # $$\frac{dE_\lambda}{d\lambda}$$ # # for a range of wavelengths between 0.1 - 20 microns and # temperatures of Temp = 250 K, 300 K, 350 K, 400 K. # # - With day3\_datetime.ipynb\_ Section 3 as a guide, plot these 4 # curves with a legend and show that each one passes through # 0 (i.e. the Planck function reaches a maximum) at a wavelength # of: # # $$\lambda_{max} = \frac{a}{Temp}$$ # # in agreement with Stull 2.14 (Wien's law) # # (i.e. -- your graph should have 4 lines, a legend, and four # colored dots at coordinates ($\lambda_{max}$, 0) that hopefully # lie on appropriate the line. # # + import numpy as np from matplotlib import pyplot as plt # # get Stull's c_1 and c_2 from fundamental constants # c=2.99792458e+08 #m/s -- speed of light in vacuum h=6.62606876e-34 #J s -- Planck's constant kb=1.3806503e-23 # J/K -- Boltzman's constant c=3.e8 #speed of light in vacuum (m/s) c1=2.*h*c**2.*np.pi c2=h*c/kb sigma=2.*np.pi**5.*kb**4./(15*h**3.*c**2.) def planckwavelen(wavel,Temp): """ Calculate the blackbody radiant exitence (Stull 2.13) Parameters ---------- wavel: float or array wavelength (meters) Temp: float temperature (K) Returns ------- Elambda: float or arr monochromatic radiant exitence (W/m^2/m) """ Elambda=c1/(wavel**5.*(np.exp(c2/(wavel*Temp)) -1)) return Elambda # + # %matplotlib inline a= 2897. #micron Kelvins -- Stull eq. 2.14 Temps = [250, 300, 350, 400] #Kelvins wavelengths = np.arange(0.1, 20,0.01) #microns # # initialize a dictionary with empty lists # # below I do it the long way # # shorter way, use defaultdict: # from collections import defaultdict # hold_dict = defaultdict(list) # # hold_dict={} for a_temp in Temps: hold_dict[a_temp] = [] dlambda = np.diff(wavelengths) # # empty list hold_list to hold the wavelengths of maxima # hold_max=[] for a_temp in Temps: E_vals = planckwavelen(wavelengths*1.e-6,a_temp)*1.e-6 #W/m^2/micron dE = np.diff(E_vals) deriv = dE/dlambda hold_dict[a_temp]=deriv hold_max.append(a/a_temp) plt.close('all') avg_wavelength=(wavelengths[1:] + wavelengths[:-1])/2. fig, ax = plt.subplots(1,) for a_temp in hold_dict.keys(): deriv = hold_dict[a_temp] max_wavelen = a/a_temp line_label = "{} K".format(a_temp) the_line=ax.plot(avg_wavelength,deriv,label=line_label) # # make the points and lines the same color # color=the_line[0].get_color() ax.plot(max_wavelen,0,'o',color=color) _=ax.legend() # - # ## # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### NER Model Training # This notebook contains all the training runs and hyperparameter tuning used to train the NER model. 
import pandas as pd import numpy as np from sklearn.feature_extraction import DictVectorizer from sklearn.feature_extraction.text import HashingVectorizer #from sklearn.linear_model import Perceptron from sklearn.model_selection import train_test_split #from sklearn.linear_model import SGDClassifier #from sklearn.linear_model import PassiveAggressiveClassifier #from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import classification_report import pickle PATH = '../GLG-Old Models, datasets/' df = pd.read_csv(PATH + 'ner_dataset.csv', encoding = "ISO-8859-1") #df = df[:500000] df.head() df.shape df.fillna(method='ffill',inplace=True) df.head() # + #X = df.drop('Tag',axis=1) #v = DictVectorizer(sparse=False) #X = v.fit_transform(X.to_dict('records')) #y = df['Tag'] classes = list(np.unique(df['Tag'])) #X_train, X_text, y_train, y_test = train_test_split(X, y, test_size=0.15,random_state=4) # - import sklearn_crfsuite from sklearn_crfsuite import scorers from sklearn_crfsuite import metrics from collections import Counter class SentenceGetter(object): def __init__(self, data): self.n_sent = 1 self.data = data self.empty = False agg_func = lambda s: [(w, p, t) for w, p, t in zip(s['Word'].values.tolist(), s['POS'].values.tolist(), s['Tag'].values.tolist())] self.grouped = self.data.groupby('Sentence #').apply(agg_func) self.sentences = [s for s in self.grouped] def get_next(self): try: s = self.grouped['Sentence: {}'.format(self.n_sent)] self.n_sent += 1 return s except: return None getter = SentenceGetter(df) sentences = getter.sentences def word2features(sent, i): word = sent[i][0] postag = sent[i][1] features = { 'bias': 1.0, 'word.lower()': word.lower(), 'word[-3:]': word[-3:], 'word[-2:]': word[-2:], 'word.isupper()': word.isupper(), 'word.istitle()': word.istitle(), 'word.isdigit()': word.isdigit(), 'postag': postag, 'postag[:2]': postag[:2], } if i > 0: word1 = sent[i-1][0] postag1 = sent[i-1][1] features.update({ '-1:word.lower()': word1.lower(), '-1:word.istitle()': word1.istitle(), '-1:word.isupper()': word1.isupper(), '-1:postag': postag1, '-1:postag[:2]': postag1[:2], }) else: features['BOS'] = True if i < len(sent)-1: word1 = sent[i+1][0] postag1 = sent[i+1][1] features.update({ '+1:word.lower()': word1.lower(), '+1:word.istitle()': word1.istitle(), '+1:word.isupper()': word1.isupper(), '+1:postag': postag1, '+1:postag[:2]': postag1[:2], }) else: features['EOS'] = True return features def sent2features(sent): return [word2features(sent, i) for i in range(len(sent))] def sent2labels(sent): return [label for token, postag, label in sent] def sent2tokens(sent): return [token for token, postag, label in sent] X = [sent2features(s) for s in sentences] y = [sent2labels(s) for s in sentences] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=4) # + ## X_p = list(np.array([sent2features(s) for s in sentences]).flatten()) ## y_p = list(np.array([sent2labels(s) for s in sentences]).flatten()) ## X_train_p, X_test_p, y_train_p, y_test_p = train_test_split(X_p, y_p, test_size=0.1, random_state=4,stratify=y) # - # 08:54 crf = sklearn_crfsuite.CRF( # algorithm='lbfgs', algorithm='l2sgd', # c1=0.1, # c2=0.1, max_iterations=100, all_possible_transitions=True ) crf.fit(X_train, y_train) crf = sklearn_crfsuite.CRF( algorithm='lbfgs', c1=0.1, c2=0.1, max_iterations=100, all_possible_transitions=True ) crf.fit(X_train, y_train) pd.DataFrame(y_train) # Test size = 0.33 y_pred = crf.predict(X_test) print(metrics.flat_classification_report(y_test, 
y_pred)) # Test size = 0.33 y_pred = crf.predict(X_test) print(metrics.flat_classification_report(y_test, y_pred, labels=classes)) #Default, test_size = 0.5 y_pred = crf.predict(X_test) print(metrics.flat_classification_report(y_test, y_pred)) filename = 'crf_ner_model.sav' pickle.dump(crf, open(filename, 'wb')) print(metrics.flat_classification_report(y_train, y_train)) #SGD, test_size = 0.5 y_pred = crf.predict(X_test) print(metrics.flat_classification_report(y_test, y_pred)) #SGD y_pred = crf.predict(X_test) print(metrics.flat_classification_report(y_test, y_pred)) # SGD y_pred = crf.predict(X_test) print(metrics.flat_classification_report(y_test, y_pred, labels=classes)) y_pred = crf.predict(X_test) print(metrics.flat_classification_report(y_test, y_pred, labels=classes)) y_pred = crf.predict(X_test) print(metrics.flat_classification_report(y_test, y_pred)) y_pred = crf.predict(X_test) print(metrics.flat_classification_report(y_test, y_pred)) y_pred = crf.predict(X_test) print(metrics.flat_classification_report(y_test, y_pred))#, labels = classes)) # ### Predict on Some Sample Queries # + import re def prep_query(phrase): split_query = re.findall(r"[\w']+|[.,!?;]", phrase) pos_tags = pos_tag(split_query) df_query = pd.DataFrame({'Sentence #':['Sentence: 1'] * len(pos_tags), 'Word':[pair[0] for pair in pos_tags], 'POS':[pair[1] for pair in pos_tags], 'Tag':[None] * len(pos_tags)}) return df_query # + s = " is a former host on The Apprentice. He is an American businessman and former President." s = 'hello how are you' s = 'The Second World War started in 1914 and ended in 1918' s = 'The Korean War started in 1939 and ended in 1945' s = 'Iraq and Iran were once at war. Saddam Hussein was involved' s = 'The World Cup is a quadrennial sporting event. FIFA is the governing body involved.' x = prep_query(s) # + getter_query = SentenceGetter(x) sentences_query = getter_query.sentences X_query = [sent2features(s) for s in sentences_query] X_words = [s[0] for s in sentences_query[0]] pred = crf.predict(X_query) list(zip(pred[0],X_words)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="4Rnwb_rgmJZB" colab_type="text" # # Try computer vision yourself in under 5 mintes! # #### Upload your own image and see what objects a computer vision model can find in it # --- # *Last Updated 18 May 2020* # Computer vision is a field of machine learning where computers are trained to recognize and identify patterns from images. Nowadays, computer vision is so common that many smartphones use it to sort the image gallery into categories like holidays, pets or even recognize friends faces. You can try using the search feature in your phone's image gallery to see what classes of objects your phone knows! # # Here, we use a pre-trained "lightweight" model known for its speed that's still relatively accurate, [YOLOv2 in Darkflow](https://github.com/thtrieu/darkflow) (the Tensorflow implementation of YOLO in Darknet) to detect 80 classes of everyday objecs and animals, including bird, cat, airplane, bottle, tv and more. # # Notes: This notebook runs entirely in Google Colab and doesn't require any software installations or downloads to your local machine. It is best to run this demo on a computer instead of a phone. 
# + [markdown] id="iJz5m4BKmJZD" colab_type="text" # ## Installs # --- # Run this code block by dragging your mouse over the brackets to the left of line 1 and press the "play" button. It takes about 1-2 minutes to run and you will see some text being output beneath the block. After it is finished, scroll down to **Model Preparation**. # + id="QW2sYEc47x5A" colab_type="code" colab={} # Install required libraries # !pip install tensorflow-gpu==1.15.0rc2 # !pip install imageio # Download and build darkflow (the tensorflow implementation of YOLO) import os import pathlib if "darkflow-master" in pathlib.Path.cwd().parts: while "darkflow-master" in pathlib.Path.cwd().parts: os.chdir('..') elif not pathlib.Path("darkflow-master").exists(): # !git clone --depth 1 https://github.com/thtrieu/darkflow.git # Compile darkflow # %cd darkflow # !python setup.py build_ext --inplace # Change darkflow to darkflow-master to distinguish between folder names # %cd ../ # !mv darkflow darkflow-master # %cd darkflow-master # Upload yolo.weights, pre-trained weights file (for YOLO v2) from an external Google drive weights = 'yolo' weights_file = weights + '.weights' if not os.path.exists('weights_file'): # !gdown --id 0B1tW_VtY7oniTnBYYWdqSHNGSUU # !mkdir bin # !mv yolo.weights bin # Imports # %cd darkflow-master # %tensorflow_version 1.15.0rc2 # For importing/exporting files, working with arrays, etc import time import urllib import numpy as np import pandas as pd import imageio # For actual object detection import tensorflow as tf from darkflow.net.build import TFNet threshold = 0.25 # For drawing onto and plotting images import matplotlib.pyplot as plt import cv2 # %config InlineBackend.figure_format = 'svg' # + [markdown] id="X2fF0fSxmJZR" colab_type="text" # ### Model Preparation # --- # Drag your mouse over the brackets to the left of line 1 and press the "play" button on the right. This step takes ~30 seconds. 
After it is finished, scroll down to **Object Detection - Find out what objects YOLO can see in your image!** # + id="R2wnfMcDmJZS" colab_type="code" colab={} # For uploading an image from url # Modified from https://www.pyimagesearch.com/2015/03/02/convert-url-to-image-with-python-and-opencv/ def url_to_image(url): resp = urllib.request.urlopen(url) image = np.asarray(bytearray(resp.read()), dtype="uint8") image = cv2.imdecode(image, cv2.IMREAD_COLOR) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) return image # Define parameters for feeding images through the model # Can change detection confidence threshold here params = { 'model': 'cfg/yolo.cfg', 'load': 'bin/yolo.weights', 'threshold': threshold, 'gpu': 1.0 } # Run the model tfnet = TFNet(params) # For drawing bounding boxes around detected objects on image def boxing(image, predictions): newImage = np.copy(image) im_height, im_width, im_depth = image.shape global labels labels = [] for result in predictions: # Only show boxes that are above confidence threshold confidence = result['confidence'] if confidence > threshold: xmin = result['topleft']['x'] ymin = result['topleft']['y'] xmax = result['bottomright']['x'] ymax = result['bottomright']['y'] #global label label = result['label'] + " " + str(round(confidence, 2)) labels.append(label) # Draw boxes on image fontScale = max(im_width,im_height)/500 fontThickness = int(max(im_width,im_height)/200) newImage = cv2.rectangle(newImage, (xmin, ymax), (xmax, ymin), (255, 0, 157), 3) newImage = cv2.putText(newImage, label, (xmin, ymax-5), cv2.FONT_HERSHEY_SIMPLEX, fontScale, (0, 0, 0), fontThickness*2, cv2.LINE_AA) newImage = cv2.putText(newImage, label, (xmin, ymax-5), cv2.FONT_HERSHEY_SIMPLEX, fontScale, (255, 255, 255), fontThickness, cv2.LINE_AA) return newImage # + [markdown] colab_type="text" id="1zL-PctVuqZv" # ## Object Detection - Find out what objects YOLO can see in your image # --- # You can either **A) Load an image in by URL** or **B) Load an image in from file**. # + [markdown] id="7gd4TbiOEXff" colab_type="text" # **A) Load in individual image by URL** # Read in any image from a URL and see what objects YOLO can find! # 1. To get an image from a url: # * For images from [CCSearch](https://ccsearch.creativecommons.org/), click the image you want. Next, right click the image and select "open image in new tab." # * For images from [Google Images](https://images.google.com/), right click the image you want and select "open image in new tab." # 2. Copy the url and paste it within the quotes on line 2 in the code block below. # 3. Drag your mouse over the brackets to the left of line 1 and press the "play" button on the right. # 4. Optional: Adjust the detection confidence threshold on line 6 and press "play" again to display modified results. 
# + id="qNsodQP3t_o5" colab_type="code" colab={} # Insert your URL here url = "https://farm2.staticflickr.com/1238/709521497_2122cdde9e_b.jpg" # Set confidence threshold for detection # Optional: You can adjust this value (0-1) to see more or less detected objects in the image threshold = 0.25 # Read in image from URL image = url_to_image(url) # Use YOLO for object detection # Record inference time start_time = time.time() result = tfnet.return_predict(image) end_time = time.time() # Plot and show detection boxes on images _, ax = plt.subplots(figsize=(10, 10)) ax.imshow(boxing(image, result)) # Add titles to plot if result and labels: plt.title('Wow, I found some cool stuff in {} seconds!'.format(format(end_time-start_time, '.2f'))) else: plt.title("I didn't find anything in the picture :(") # + [markdown] id="mkNIUT8kDJrx" colab_type="text" # **B) Load in individual image by file** # Read in any image from file and see what objects YOLO can find! # To get an image from file: # 1. Click the folder icon in the left side panel. # 2. Click "Upload" # 3. Select any image from your computer to upload. # 4. Copy your image filename within the quotes on line 3 in the code block below. # 5. Drag your mouse over the brackets to the left of line 1 and press the "play" button on the right. # 6. Optional: Adjust the detection confidence threshold on line 7 and press "play" again to display modified results. # + id="tsUS-2fUrSdy" colab_type="code" colab={} # Insert your filename at line 3 inpath = '/content/' filename = '[yourfilenamehere].jpg' # Set confidence threshold for detection # Optional: You can adjust this value (0-1) to see more or less detected objects in the image threshold = 0.25 # Read in image from file fpath = inpath + filename image = imageio.imread(fpath, pilmode='RGB') # Use YOLO for object detection # Record inference time start_time = time.time() result = tfnet.return_predict(image) end_time = time.time() # Plot and show detection boxes on images _, ax = plt.subplots(figsize=(10, 10)) ax.imshow(boxing(image, result)) # Add titles to plot if result and labels: plt.title('Wow, I found some cool stuff in {} seconds!'.format(format(end_time-start_time, '.2f'))) else: plt.title("I didn't find anything in the picture :(") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Using SQL Databases in Python # # You can, of course, access databases from your code, not just from Jupyter or the command line. # # We will first create a database, and then access it using a Python SQL library. # # # %load_ext sql # #%config SqlMagic.autocommit=False # %sql mysql+pymysql://root:root@127.0.0.1:3306/mysql # %sql show databases; # + # if you have the germplasm database, please drop it now! 
# if not, then move to the next box # %sql drop database germplasm # - # %sql create database germplasm; # %sql show databases # %sql use germplasm; # + # #%sql drop table stock # #%sql drop table germplasm # #%sql drop table gene # %sql CREATE TABLE stock(id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, amount FLOAT NOT NULL, date DATE NOT NULL, location VARCHAR(20) NOT NULL); # %sql DESCRIBE stock # %sql CREATE TABLE germplasm(id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, taxonid INTEGER NOT NULL, allele VARCHAR(10) NOT NULL, gene_id INTEGER NOT NULL, stock_id INTEGER NOT NULL); # %sql DESCRIBE germplasm # %sql CREATE TABLE gene(id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, gene VARCHAR(10) NOT NULL, gene_name VARCHAR(30) NOT NULL, embl VARCHAR(70) NOT NULL); # %sql DESCRIBE gene # + # %sql INSERT INTO stock(id, amount, date, location) VALUES (1, 5, '2013-05-10', 'Room 2234'); # %sql INSERT INTO stock(id, amount, date, location) VALUES (2, 9.8, '2015-1-12', 'Room 998'); # %sql INSERT INTO germplasm (taxonid, allele, gene_id, stock_id) VALUES (4150, 'def-1', 1, 1 ); # %sql INSERT INTO germplasm (taxonid, allele, gene_id, stock_id) VALUES (3701, 'ap3', 2, 2 ); # %sql INSERT INTO germplasm (taxonid, allele, gene_id, stock_id) VALUES (3701, 'ag', 3, 2 ); # %sql INSERT INTO gene (id, gene, gene_name, embl) VALUES (1, 'DEF', "Deficiens", 'https://www.ebi.ac.uk/ena/data/view/AB516402'); # %sql INSERT INTO gene (id, gene, gene_name, embl) VALUES (2, 'AP3', "Apetala3", 'https://www.ebi.ac.uk/ena/data/view/AF056541'); # %sql INSERT INTO gene (id, gene, gene_name, embl) VALUES (3, 'AG', "Agamous", 'https://www.ebi.ac.uk/ena/data/view/AL161549'); # - #
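# A quick sanity check (a hedged addition, not part of the original lesson) to confirm
# that all three tables are populated before moving on:

# +
# %sql SELECT * FROM stock
# %sql SELECT * FROM germplasm
# %sql SELECT * FROM gene
# -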
    # # ## Our germplasm database is now set up. # # The data structure is approximately the same as the final data structure yesterday: # # **Stock table** # # **Gene table** # # **Germplasm table** linked to **Stock table** (many-to-one) (several germplasms point to same stock) # **Germplasm table** linked to **Gene table** (one-to-one) (every germplasm points to one gene) # # + import pymysql.cursors # Connect to the database connection = pymysql.connect(host='localhost', user='root', password='', db='germplasm', charset='utf8mb4', # note utf8... this is important for unusual characters! cursorclass=pymysql.cursors.DictCursor) try: with connection.cursor() as cursor: # Read a single record sql = "SELECT * FROM stock" cursor.execute(sql) results = cursor.fetchall() print(results) print("") for result in results: print(result['amount']," is located in ",result['location']) finally: print("") connection.close() # - # ## Open a new jupyter notebook to Lesson 3 - mySQL section # # Try some of the SELECT queries we learned in that lesson. # + # try queries here # - #
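# For example (a hedged sketch based only on the schema created above, not an official
# solution), a JOIN that follows the germplasm -> gene and germplasm -> stock links:

# +
# %sql SELECT germplasm.allele, gene.gene_name, stock.amount, stock.location FROM germplasm JOIN gene ON germplasm.gene_id = gene.id JOIN stock ON germplasm.stock_id = stock.id
# -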
    # ## You can issue ANY mysql command in this way # # Including _create database_, _create table_, and _insert_ data commands. # # For example: # + connection = pymysql.connect(host='localhost', user='root', password='', db='germplasm', charset='utf8mb4', # note utf8... this is important for unusual characters! cursorclass=pymysql.cursors.DictCursor) try: with connection.cursor() as cursor: # Read a single record sql = "create database testing123" cursor.execute(sql) finally: print("") connection.close() # %sql show databases # + connection = pymysql.connect(host='localhost', user='root', password='', # PAY ATTENTION HERE!!!!!!!!!!!!!!! db='testing123', charset='utf8mb4', # note utf8... this is important for unusual characters! cursorclass=pymysql.cursors.DictCursor) try: with connection.cursor() as cursor: # Read a single record sql = "create table test1(id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, data1 VARCHAR(20) NOT NULL )" cursor.execute(sql) sql = "create table test2(id INTEGER NOT NULL AUTO_INCREMENT PRIMARY KEY, otherdata VARCHAR(20) NOT NULL )" cursor.execute(sql) finally: print("") connection.close() # %sql use testing123 # #%sql show tables # #%sql desc test1 # %sql desc test2 # - # %sql show databases # # Insert a new record into the Germplasm database # # First, look at the schemas: # # %sql use germplasm # %sql desc gene # %sql desc stock # %sql desc germplasm # ## test data to show how to retrieve the latest unique ID number: # # **Gene**: gene=TEST, gene_name=testing, embl=http://TESTTEST # # (Remember the SQL "last_insert_id()" function!!) # # + connection = pymysql.connect(host='localhost', user='root', password='', # PAY ATTENTION HERE!!!!!!!!!!!!!!! db='germplasm', charset='utf8mb4', # note utf8... this is important for unusual characters! cursorclass=pymysql.cursors.DictCursor) try: with connection.cursor() as cursor: # Read a single record sql = "INSERT INTO gene (gene, gene_name, embl) VALUES ('TEST', 'testing', 'https://TESTTEST')" cursor.execute(sql) sql = "SELECT last_insert_id()" cursor.execute(sql) results = cursor.fetchall() print(results) gene_id = results[0]['last_insert_id()']# note that results is a LIST, so we need to take element 0 first print("The unique ID for the last gene entered was {}".format(gene_id)) finally: print("") connection.close() # - # ## new data: # # **Gene**: gene=WUS, gene_name=wuschel, embl=http://ABC123 # # **Stock**: amount=10, date=12/09/2018, location=Room990 # # **Germplasm**: taxonid=3701, allele=wus-1, **gene_id=??, stock_id=??** # # # + connection = pymysql.connect(host='localhost', user='root', password='', db='germplasm', charset='utf8mb4', # note utf8... this is important for unusual characters! cursorclass=pymysql.cursors.DictCursor) try: with connection.cursor() as cursor: # you will need to create 3 INSERT INTO statements # you will need to capture the unique ID from two # of those to create the germplasm entry sql = "" finally: print("") connection.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import collections # ## Accessing Values # The `ChainMap` manages a sequence of dictionaries, and searches through them in the order they are given, to find values associated with keys. 
# + a= {'a':'A','c':'C'} b= {'b':'B','c':'D'} m=collections.ChainMap(a,b) print('Indivisual Values') print('a={}'.format(m['a'])) print('b={}'.format(m['b'])) print('c={}'.format(m['c'])) print() print('Keys={}'.format(list(m.keys()))) print('Value={}'.format(list(m.values()))) print() print("Print Items") for k,v in m.items(): print('{}={}'.format(k,v)) print('"d" in m:{}'.format(('d' in m))) # - # **Note** # The child mapping are searched in the order they are passed to the constructor, so the value reported for the key 'c' comes from the "a" dictionary. # ## Reordering # # the ChainMap stores the list of mappings over which it searchs in a list in its maps attribute. This list is mutable. so it is possible to add new mappings directly or change the order of the elements to control lookup and update behavior # + import collections a = {'a': 'A', 'c': 'C'} b = {'b': 'B', 'c': 'D'} m = collections.ChainMap(a, b) print(m.maps) print('c = {}\n'.format(m['c'])) # reverse the list m.maps = list(reversed(m.maps)) print(m.maps) print('c = {}'.format(m['c'])) # - # ## Updating Values # # A ChainMap does not cache the values in the child mappings, thus, if their contents are modified, the results are reflected when ChainMap is accessed. # + import collections a = {'a': 'A', 'c': 'C'} b = {'b': 'B', 'c': 'D'} m = collections.ChainMap(a, b) print('Before: {}'.format(m['c'])) a['c'] = 'E' print('After : {}'.format(m['c'])) # - # ChainMap Provides a convenience method for creating a new instance with one extra mapping at the front of maps list to make it easy to avoid modifying the existing underlying data structures. # + import collections a = {'a': 'A', 'c': 'C'} b = {'b': 'B', 'c': 'D'} m1 = collections.ChainMap(a, b) m2 = m1.new_child() print('m1 before:', m1) print('m2 before:', m2) m2['c'] = 'E' print('m1 after:', m1) print('m2 after:', m2) # - # for situation where the new context is known or buit in advance, it is also possible to pass a mapping to `new_child()' # + import collections a = {'a': 'A', 'c': 'C'} b = {'b': 'B', 'c': 'D'} c = {'c': 'E'} m1 = collections.ChainMap(a, b) m2 = m1.new_child(c) print('m1["c"] = {}'.format(m1['c'])) print('m2["c"] = {}'.format(m2['c'])) # - # we can also skip the first map in the search. # `parents` Property returning a new `ChainMap` containing all of the maps in the current instance except the first one. # + import collections a = {'a': 'A', 'c': 'C'} b = {'b': 'B', 'c': 'D'} c = {'e': 'E', 'f': 'F'} m1 = collections.ChainMap(a, b,c) m1.parents # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lab 2 # # ## Question 1 # # Collect any 10 documents (English text documents) from the web and create inverted index by # doing necessary preprocessing steps using python. 
# + # Import Statements import os import nltk # - # + import requests from bs4 import BeautifulSoup from bs4.element import Comment import re import string import nltk from nltk.stem.porter import PorterStemmer index = {} def visible_text(element): if element.parent.name in ['style', 'title', 'script', 'head', '[document]', 'class', 'a', 'li']: return False elif isinstance(element, Comment): return False elif re.match(r"[\s\r\n]+",str(element)): return False return True def read_url(url, url_number): ps = PorterStemmer() r = requests.get(url) soup = BeautifulSoup(r.content, 'html.parser') text = soup.findAll(text = True) result = list(filter(visible_text, text)) counter = 0; words = [] for i in result: temp = i.split(' ') for word in temp: k = [] temp_word = word.lower() for c in temp_word: if c not in list(string.punctuation): k.append(c) temp_word = ''.join(k) words.append(temp_word) for i in words: if(i.isalpha()): i = ps.stem(i) if not i in index.keys(): index[i] = [(url_number, counter)] counter = counter + len(i) + 1 else: index[i].append((url_number, counter)) counter = counter + len(i) + 1 return None urls = [ 'https://en.wikipedia.org/wiki/Google', 'https://en.wikipedia.org/wiki/Facebook', 'https://en.wikipedia.org/wiki/Netflix', 'https://en.wikipedia.org/wiki/Amazon_(company)', 'https://en.wikipedia.org/wiki/Microsoft', 'https://en.wikipedia.org/wiki/Tesla,_Inc.', 'https://en.wikipedia.org/wiki/Apple_Inc.', 'https://en.wikipedia.org/wiki/Silicon_Valley_(TV_series)', 'https://en.wikipedia.org/wiki/Wikipedia', 'https://en.wikipedia.org/wiki/Uber', ] for i in range(len(urls)) : read_url(urls[i], (i+1)) sorted_keys = sorted(index.keys()) f = open("output2.txt", "w") output_line = "Word".ljust(15) + "Frequency".ljust(15) + "Posting List".ljust(15) + "\n" f.writelines(output_line) f.writelines('-------------------------------------------------------------------------\n\n') for i in sorted_keys: print(i, len(index[i]), index[i]) output_string = str(i).ljust(15) + str(len(index[i])).ljust(15) + str(index[i]).ljust(15) + "\n" f.writelines(output_string) f.writelines('\n') f.close() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from reader import load_dataframe import pandas as pd import numpy as np import os import glob from datetime import date, datetime # %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import requests import math import matplotlib.ticker as mtick plot_colors=["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71", '#ffce44'] # - # df = load_dataframe() df = pd.read_csv('ny_sf_all_data_0703.csv', parse_dates=[1,2]) df.sample(5) #convert start time to hour decimal to graph # df['start hour'] = df['starttime'].apply(lambda x: x.hour + (x.minute/60)) df['start hour'] = df['starttime'].apply(lambda x: x.hour + (round((x.minute/60)*4))/4) #rounds to quarter hour # df['age'] = df['age'].fillna('Other') df.head() #age grouping age_bins = [0, 20, 30, 40, 50, 60, 100] age_groups = ['0-20', '21-30', '31-40', '41-50', '51-60', '61+'] df['age groups'] = pd.cut(df['age'], age_bins, labels=age_groups) df.sample(5) # ### age groups, average age df.groupby('age groups').agg(['count', 'median'])['start hour'] #create data frame for each city grouped = df.groupby('city') ny = grouped.get_group('New York') sf = grouped.get_group('San Francisco') avg_age = df['age'].mean() 
sf_avg_age = sf['age'].mean() ny_avg_age = ny['age'].mean() print('the overall average age is: {}'.format(avg_age)) print('the average age in New York is: {}'.format(ny_avg_age)) print('the average age in San Francisco is: {}'.format(sf_avg_age)) age_group_count = df.groupby('city')['age groups'].value_counts() age_group_count = pd.DataFrame([age_group_count]).transpose() age_group_count # ny_age = ny['age groups'].value_counts().sort_index() # sf_age = sf['age groups'].value_counts().sort_index() # print(ny_age) # print(sf_age) # + #NEED TO GRAPH AGE GROUPS # plt.figure(figsize=(10,8)) # age_group_count.plot(kind='bar') # # sf_plt = plt.bar(sf_age['index'], sf_age['age groups']) # # ny_plt = plt.bar(ny_age['index'], ny_age['age groups']) # plt.title('Bike Users by Age', fontsize=18) # plt.xlabel("Age Group", fontsize = 14) # plt.ylabel("Number of Users", fontsize = 14) # # plt.xticks(np.arange(6),('0-20', '21-30','31-40','41-50','51-60', '61+')) # plt.xticks(np.arange(6), age_groups) # plt.show() # - # ### count by age group, by city #NY age group breakdown try: ny_age_group = ny.groupby('age groups').count()['usertype'].reset_index() except KeyError: pass ny_age_group #SF age group break down try: sf_age_group = sf.groupby('age groups').count()['usertype'].reset_index() except KeyError: pass sf_age_group ny_age_gender = ny.groupby(['age groups', 'gender']).gender.count().unstack().fillna(0) ny_age_gender =ny_age_gender.drop(['Unknown'], axis =1).reset_index() ny_age_gender = ny_age_gender[['age groups', 'Male', 'Female']] ny_age_gender # age group break down by gender NY plot_colors=['#9b59b6', '#34495e', '#95a5a6'] ny_age_gender.plot(kind='bar', color =plot_colors, stacked=True, figsize=(10,8)) plt.legend(fontsize=14) plt.title('New York Users by Age & Gender', fontsize=18) plt.xlabel("Age Group", fontsize = 14) plt.ylabel("Number of Users", fontsize = 14) plt.xticks(np.arange(6),('0-20', '21-30','31-40','41-50','51-60', '61+'), fontsize= 14, rotation =0) plt.show() sf_age_gender = sf.groupby(['age groups', 'gender']).gender.count().unstack().fillna(0) sf_age_gender =sf_age_gender.drop(['Other'], axis =1).reset_index() sf_age_gender # age group break down by gender SF plot_colors=['#9b59b6', '#34495e', '#95a5a6'] sf_age_gender.plot(kind='bar', color =plot_colors, stacked=True, figsize=(10,8)) plt.legend(fontsize=14) plt.title('San Francisco Users by Age & Gender', fontsize=18) plt.xlabel("Age Group", fontsize = 14) plt.ylabel("Number of Users", fontsize = 14) plt.xticks(np.arange(6),('0-20', '21-30','31-40','41-50','51-60', '61+'), fontsize= 14, rotation =0) plt.show() #NewYork user count by age group plt.figure(figsize=(10,8)) plt.bar(ny_age_group['age groups'], ny_age_group['city'], color='#34495e', align="center") plt.title("New York Bike Users, By Age", fontsize = 18) plt.xlabel("Age Group", fontsize = 14) plt.ylabel("Number of Users", fontsize = 14) #SanFran user count by age group plt.figure(figsize=(10,8)) plt.bar(sf_age_group['age groups'], sf_age_group['usertype'], color='#9b59b6', align="center") plt.title("San Francisco Bike Users, By Age", fontsize = 18) plt.xlabel("Age Group", fontsize = 14) plt.ylabel("Number of Users", fontsize = 14) # ### time of day by age df['start date'], df['start time'] = df['starttime'].str.split(' ',1).str df['end date'], df['end time'] = df['stoptime'].str.split(' ', 1).str # df.drop(['starttime', 'stoptime'], axis = 1) df.head() sf['tripduration'].hist(bins=100) sf.groupby('age groups').count() zero = sf[sf['age groups'] == '0-20'] twenty1= 
sf[sf['age groups'] == '31-40' ] forty1 = sf[sf['age groups'] == '41-50' ] fifty1 = sf[sf['age groups'] == '51-60' ] sixty1 = sf[sf['age groups'] == '61+' ] thirty1 = sf[sf['age groups'] == '21-30' ] plt.figure(figsize=(20,8)) sns.set(rc={'lines.linewidth':4.5}, font_scale=2) sns.distplot(zero['start hour'], hist=False, color ='#95a5a6', label = '0-20') sns.distplot(twenty1['start hour'], hist=False, color ='#9b59b6', label = '21-30') sns.distplot(thirty1['start hour'], hist=False, color = '#3498db', label = '31-40') sns.distplot(forty1['start hour'], hist=False, color = '#e74c3c',label = '41-50') sns.distplot(fifty1['start hour'], hist=False, color = '#34495e',label = '51-60') sns.distplot(sixty1['start hour'], hist=False, color = '#2ecc71',label = '61+') plt.legend() plt.title("Start Time by Age Group_San Francisco") plt.xlabel("Time") plt.xticks(np.arange(0, 25, step=4), ('midnight', '4am', '8am', 'noon', '4pm', '8pm', 'midnight')) plt.ylabel("User Count") plt.show() # ### percent users by city & age group # + #SanFran sf_total_riders = sf['usertype'].count() print('total number of users in SanFran is: {}'.format(sf_total_riders)) sf_age_group['age group percent'] = sf_age_group['usertype'].map(lambda x: (x/sf_total_riders)*100) sf_age_group['age group percent'] = sf_age_group['age group percent'].map('{0:.2f}%'.format) #export to excel to handle "other" category for non-reporting sf_age_group.to_clipboard(excel=True) sf_age_group_df = pd.read_csv('sf_age_group_df.csv') sf_age_group_df # - sns.set() sf_age_group_df.plot(kind='bar', color = plot_colors, figsize=(10,8), legend=False) plt.title('Age Group breakdown San Francisco', fontsize=18) plt.xlabel("Age Group", fontsize = 14) plt.ylabel("Number of Riders", fontsize = 14) plt.xticks(np.arange(7),('0-20', '21-30','31-40','41-50','51-60', '61+', 'other'), fontsize= 14, rotation=0) plt.show() # + sns.set() age_0 = float(sf_age_group_df.iloc[0]['usertype']) age_21 = sf_age_group_df.iloc[1]['usertype'] age_31 = sf_age_group_df.iloc[2]['usertype'] age_41 = sf_age_group_df.iloc[3]['usertype'] age_51 = sf_age_group_df.iloc[4]['usertype'] age_61 = sf_age_group_df.iloc[5]['usertype'] age_other = sf_age_group_df.iloc[6]['usertype'] colors = plot_colors labels = '0-20', '21-30','31-40','41-50','51-60', '61+', 'other' ages = [age_0, age_21, age_31, age_41, age_51, age_61, age_other] plt.pie(ages, labels=labels, colors=plot_colors, startangle=90) plt.title('Age Group breakdown San Francisco', fontsize=18) plt.axis('equal') plt.show() # + #NewYork ny_total_riders = ny['usertype'].count() print('total number of users in New York is: {}'.format(ny_total_riders)) ny_age_group['age group percent'] = ny_age_group['usertype'].map(lambda x: (x/ny_total_riders)*100) ny_age_group['age group percent'] = ny_age_group['age group percent'].map('{0:.2f}%'.format) #export to excel to handle "other" category for non-reporting ny_age_group.to_clipboard(excel=True) ny_age_group_df = pd.read_csv('ny_age_group_df.csv') ny_age_group_df # - sns.set() ny_age_group_df.plot(kind='bar', color = '#34495e', figsize=(10,8), legend=False) plt.title('Age Group breakdown New York', fontsize=18) plt.xlabel("Age Group", fontsize = 14) plt.ylabel("Number of Riders", fontsize = 14) plt.xticks(np.arange(7),('0-20', '21-30','31-40','41-50','51-60', '61+', 'other'), fontsize= 14,rotation=0) plt.show() # + sns.set() age_0 = float(ny_age_group_df.iloc[0]['usertype']) age_21 = ny_age_group_df.iloc[1]['usertype'] age_31 = ny_age_group_df.iloc[2]['usertype'] age_41 = 
ny_age_group_df.iloc[3]['usertype'] age_51 = ny_age_group_df.iloc[4]['usertype'] age_61 = ny_age_group_df.iloc[5]['usertype'] age_other = ny_age_group_df.iloc[6]['usertype'] colors = plot_colors labels = '0-20', '21-30','31-40','41-50','51-60', '61+', 'other' ages = [age_0, age_21, age_31, age_41, age_51, age_61, age_other] plt.pie(ages, labels=labels, colors=plot_colors, startangle=90) plt.title('Age Group breakdown New York', fontsize=18) plt.axis('equal') plt.show() # - # testing customer type by age - works ny_user_type = ny.groupby('usertype')['age groups'].value_counts() ny_user_type try: ny_age_group = ny.groupby('age groups').count()['city'].reset_index() except KeyError: pass # ### user type by age # + #NewYork ny_total_riders = ny['usertype'].count() print('total number of users in New York is: {}'.format(ny_total_riders)) ny_usertype = ny.groupby('usertype')['age groups'].value_counts(normalize=True) ny_usertype # - ny_usertype.unstack(0).plot(kind='bar', figsize=(10,8)) plt.legend(fontsize=14) plt.title('Customer vs Subscriber - New York', fontsize=18) plt.xlabel("Age Group", fontsize = 14) plt.ylabel("Number of Users", fontsize = 14) plt.gca().set_yticklabels(['{:.0f}%'.format(x*100) for x in plt.gca().get_yticks()], fontsize=12) plt.xticks(np.arange(6),('0-20', '21-30','31-40','41-50','51-60', '61+'), fontsize= 12, rotation=0) plt.show() # + #San Fran Customer Type sf_total_riders = sf['usertype'].count() print('total number of users in San Francisco is: {}'.format(ny_total_riders)) sf_usertype = sf.groupby('usertype')['age groups'].value_counts(normalize=True) sf_usertype # - ny_usertype.unstack(0).plot(kind='bar', figsize=(10,8)) plt.legend(fontsize=14) plt.title('Customer vs Subscriber - New York', fontsize=18) plt.xlabel("Age Group", fontsize = 14) plt.ylabel("Number of Users", fontsize = 14) plt.gca().set_yticklabels(['{:.0f}%'.format(x*100) for x in plt.gca().get_yticks()], fontsize=12) plt.xticks(np.arange(6),('0-20', '21-30','31-40','41-50','51-60', '61+'), fontsize= 12, rotation=0) plt.show() # ### avg trip duration by age #NewYork ny_trip_duration = ny.groupby('age groups')['tripduration'].mean().reset_index() # ny_trip_duration['ny tripduration'] = ny_trip_duration['tripduration'].map(lambda x: x/60) ny_trip_duration #SanFran sf_trip_duration = sf.groupby('age groups')['tripduration'].mean().reset_index() # sf_trip_duration['sf tripduration'] = sf_trip_duration['tripduration'].map(lambda x: x/60) sf_trip_duration # + trip_duration =sf_trip_duration.join(ny_trip_duration, lsuffix='_sf', rsuffix='_ny') trip_duration['tripduration_sf'] = trip_duration['tripduration_sf'].map(lambda x: x/60) trip_duration['tripduration_ny'] = trip_duration['tripduration_ny'].map(lambda x: x/60) trip_duration # - trip_duration.plot(kind='bar', color = plot_colors, figsize=(10,8)) plt.legend(fontsize=14) plt.title('New York Trip duration by age group', fontsize=18) plt.xlabel("Age Group", fontsize = 14) plt.ylabel("Average Ride Duration (in minutes)", fontsize = 14) plt.xticks(np.arange(6),('0-20', '21-30','31-40','41-50','51-60', '61+'), fontsize= 14, rotation=0) plt.show() # ### popular stations by age ny_drop = ny.dropna(axis=0) start_station = ny_drop.groupby('age groups')['start station name'].value_counts() start_station = start_station.groupby('age groups').nlargest(5) # start_station.plot(kind='bar', stacked=True, figsize=(10,8)) start_station = pd.DataFrame(start_station) start_station.index.names=['age group', 'x', 'start station'] start_station.reset_index(inplace=True) 
start_station= start_station[['age group', 'start station', 'start station name']] start_station # ### bikes around the world url = 'https://bikeshare-research.org/api/v1/categories/base/fields/country' response = requests.get(url).json() df= pd.DataFrame([x for x in response]) country_count = df.groupby('country').count().reset_index() country_count = country_count.sort_values(by=['bssid']) top5 = country_count.nlargest(5, 'bssid') top5 # + # cool live map of bikes in use # http://bikes.oobrien.com/#zoom=3&lon=-16.1133&lat=37.9504 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" os.environ['CUDA_VISIBLE_DEVICES'] = '2' import torch import torch.nn as nn import torch.optim as optim import torch.optim.lr_scheduler as lr_scheduler from torch.utils.data import DataLoader from torch.autograd import Variable import math, random, sys import numpy as np import argparse from collections import deque # import cPickle as pickle import pickle # from fast_jtnn import * from hgraph import * import rdkit from cnn_finetune import make_model import torch.nn as nn from albumentations.pytorch import ToTensorV2 import albumentations as A from albumentations import ( HorizontalFlip, IAAPerspective, ShiftScaleRotate, CLAHE, RandomRotate90, Transpose, ShiftScaleRotate, Blur, OpticalDistortion, GridDistortion, HueSaturationValue, IAAAdditiveGaussianNoise, GaussNoise, MotionBlur, MedianBlur, IAAPiecewiseAffine, IAASharpen, IAAEmboss, RandomContrast, RandomBrightness, Flip, OneOf, Compose, RandomCrop, Normalize, Resize ) import cv2 from tqdm import tqdm lg = rdkit.RDLogger.logger() lg.setLevel(rdkit.RDLogger.CRITICAL) # + class ARGS: def __init__(self): self.train = './train_processed_bms_ver2/mol/' self.vocab = 'data/chembl/vocab.txt' self.atom_vocab = common_atom_vocab self.save_dir = 'vae_model/' self.load_model = None self.seed = 7 self.rnn_type = 'LSTM' self.hidden_size = 250 self.embed_size = 250 self.batch_size = 32 self.latent_size = 32 self.depthT = 15 self.depthG = 15 self.diterT = 1 self.diterG = 3 self.dropout = 0.0 self.lr = 1e-3 self.clip_norm = 5.0 # self.beta = 0.0 self.step_beta = 0.001 self.max_beta = 1.0 self.warmup = 10000 self.kl_anneal_iter = 2000 self.epoch = 100 self.anneal_rate = 0.9 self.anneal_iter = 25000 self.kl_anneal_iter = 2000 self.print_iter = 50 self.save_iter = 5000 args = ARGS() # - # ## Model # + if not torch.cuda.is_available(): device = torch.device('cpu') else: device = torch.device('cuda:0') print (device) # + # class JTNNVAE_edit(JTNNVAE): # def __init__(self, vocab, hidden_size, latent_size, depthT, depthG): # super().__init__(vocab, hidden_size, latent_size, depthT, depthG) # def forward_edit(self, x_batch, beta, x_tree_vecs, x_mol_vecs): # x_batch, x_jtenc_holder, x_mpn_holder, x_jtmpn_holder = x_batch # _, x_tree_mess, _ = self.encode(x_jtenc_holder, x_mpn_holder) # z_tree_vecs,tree_kl = self.rsample(x_tree_vecs, self.T_mean, self.T_var) # z_mol_vecs,mol_kl = self.rsample(x_mol_vecs, self.G_mean, self.G_var) # kl_div = tree_kl + mol_kl # word_loss, topo_loss, word_acc, topo_acc = self.decoder(x_batch, z_tree_vecs) # assm_loss, assm_acc = self.assm(x_batch, x_jtmpn_holder, z_mol_vecs, x_tree_mess) # return word_loss + topo_loss + assm_loss + beta * kl_div, kl_div.item(), word_acc, topo_acc, assm_acc def make_cuda(tensors): tree_tensors, 
graph_tensors = tensors make_tensor = lambda x: x if type(x) is torch.Tensor else torch.tensor(x) tree_tensors = [make_tensor(x).cuda().long() for x in tree_tensors[:-1]] + [tree_tensors[-1]] graph_tensors = [make_tensor(x).cuda().long() for x in graph_tensors[:-1]] + [graph_tensors[-1]] return tree_tensors, graph_tensors class HierVAE_edit(HierVAE): def __init__(self, args): super().__init__(args) # def rsample(self, z_vecs, W_mean, W_var, perturb=True): # batch_size = z_vecs.size(0) # z_mean = W_mean(z_vecs) # z_log_var = -torch.abs( W_var(z_vecs) ) # kl_loss = -0.5 * torch.sum(1.0 + z_log_var - z_mean * z_mean - torch.exp(z_log_var)) / batch_size # epsilon = torch.randn_like(z_mean).cuda() # z_vecs = z_mean + torch.exp(z_log_var / 2) * epsilon if perturb else z_mean # return z_vecs, kl_loss # def sample(self, batch_size, greedy): # root_vecs = torch.randn(batch_size, self.latent_size).cuda() # return self.decoder.decode((root_vecs, root_vecs, root_vecs), greedy=greedy, max_decode_step=150) # def reconstruct(self, batch): # graphs, tensors, _ = batch # tree_tensors, graph_tensors = tensors = make_cuda(tensors) # root_vecs, tree_vecs, _, graph_vecs = self.encoder(tree_tensors, graph_tensors) # root_vecs, root_kl = self.rsample(root_vecs, self.R_mean, self.R_var, perturb=False) # return self.decoder.decode((root_vecs, root_vecs, root_vecs), greedy=True, max_decode_step=150) def forward(self, graphs, tensors, orders, beta, perturb_z=True): tree_tensors, graph_tensors = tensors = make_cuda(tensors) root_vecs, tree_vecs, _, graph_vecs = self.encoder(tree_tensors, graph_tensors) root_vecs, root_kl = self.rsample(root_vecs, self.R_mean, self.R_var, perturb_z) kl_div = root_kl loss, wacc, iacc, tacc, sacc = self.decoder((root_vecs, root_vecs, root_vecs), graphs, tensors, orders) return loss + beta * kl_div, kl_div.item(), wacc, iacc, tacc, sacc def forward_edit(self, graphs, tensors, orders, root_vecs, beta, perturb_z=True): tree_tensors, graph_tensors = tensors = make_cuda(tensors) # root_vecs, tree_vecs, _, graph_vecs = self.encoder(tree_tensors, graph_tensors) root_vecs, root_kl = self.rsample(root_vecs, self.R_mean, self.R_var, perturb_z) kl_div = root_kl loss, wacc, iacc, tacc, sacc = self.decoder((root_vecs, root_vecs, root_vecs), graphs, tensors, orders) return loss + beta * kl_div, kl_div.item(), wacc, iacc, tacc, sacc # + vocab = [x.strip("\r\n ").split() for x in open(args.vocab)] args.vocab = PairVocab(vocab) model = HierVAE_edit(args).cuda() # print (model) print("Model #Params: %dK" % (sum([x.nelement() for x in model.parameters()]) / 1000,)) for param in model.parameters(): if param.dim() == 1: nn.init.constant_(param, 0) else: nn.init.xavier_normal_(param) optimizer = optim.Adam(model.parameters(), lr=args.lr) scheduler = lr_scheduler.ExponentialLR(optimizer, args.anneal_rate) if args.load_model: print('continuing from checkpoint ' + args.load_model) model_state, optimizer_state, total_step, beta = torch.load(args.load_model) model.load_state_dict(model_state) optimizer.load_state_dict(optimizer_state) else: total_step = beta = 0 param_norm = lambda m: math.sqrt(sum([p.norm().item() ** 2 for p in m.parameters()])) grad_norm = lambda m: math.sqrt(sum([p.grad.norm().item() ** 2 for p in m.parameters() if p.grad is not None])) meters = np.zeros(6) # + # model.load_state_dict(torch.load('./fast_molvae/vae_model/model.iter-1980')) # model.load_state_dict(torch.load('./ckpt/chembl-pretrained/model.ckpt')) model_state, optimizer_state, total_step, beta = 
torch.load('./ckpt/bms-pretrained/model.ckpt.5160') model.load_state_dict(model_state) model = model.to(device) optimizer_vae = optim.Adam(model.parameters(), lr=args.lr) scheduler = lr_scheduler.ExponentialLR(optimizer_vae, args.anneal_rate) # model.eval() # for param in model.parameters(): # param.requires_grad = False # + model_K = make_model('inception_v4', num_classes=10, pretrained=True) for param in model_K.parameters(): param.requires_grad = False model_K._classifier.weight.requires_grad = True model_K._classifier.bias.requires_grad = True class InceptionV4Bottom(nn.Module): def __init__(self, original_model): super(InceptionV4Bottom, self).__init__() self.features = nn.Sequential( # stop at conv4 *list(original_model.children())[:-1] ) # self.features = nn.Sequential( # # stop at conv4 # *list(original_model.children()) # ) dim1 = 512 dim2 = 250 self.num_ftrs = original_model._classifier.in_features self.classifier1 = nn.Sequential(nn.Linear(self.num_ftrs, dim1), nn.BatchNorm1d(dim1), nn.ReLU(), nn.Linear(dim1, dim2)) def forward(self, x1): x1 = self.features(x1) # print (x1.shape) x1 = x1.view(-1, self.num_ftrs) # x2 = self.dense1(x2) # print (x2.shape, x1.shape) # x = torch.cat((x1, x2), dim=1) o1 = self.classifier1(x1) return o1 model_cnn = InceptionV4Bottom(model_K) # + def unfreeze(model, ct_c = 7): for param in model.parameters(): param.requires_grad = True ct = 0 for child in model.features[0].children(): ct += 1 if ct < ct_c: for param in child.parameters(): param.requires_grad = False # unfreeze(model_cnn,3) # + model_cnn = model_cnn.to(device) # Observe that all parameters are being optimized optimizer = optim.Adam(model_cnn.parameters(), lr=0.001) # - IMG_SIZE = 512 transform = A.Compose([ A.Resize(int(IMG_SIZE), int(IMG_SIZE)), A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)), ToTensorV2() ]) dataset = DataFolder_ver2_BMS(args.train, '/home/quang/working/Theory_of_ML/hgraph2graph/train_processed_bms_ver2/ids', args.batch_size, transform, '/home/quang/working/Theory_of_ML/mbs-molecular-captioning/images/train/') for batch, images_list, labels in tqdm(dataset): break # + import matplotlib.pyplot as plt import torchvision.utils as vutils from IPython.display import SVG from rdkit import Chem from rdkit.Chem import Draw def show(img, size = 8): plt.figure(figsize=(size,size)) npimg = img.numpy() plt.imshow(np.transpose(npimg, (1,2,0)), interpolation='nearest') plt.show() show(vutils.make_grid(images_list[:8], padding=10, normalize=True), size = 12) plt.show() smiles_code = labels[7] m = Chem.MolFromSmiles(smiles_code) print (Chem.MolToInchi(m)) SVG(Draw.MolsToGridImage([m], molsPerRow=4, subImgSize=(180, 150), useSVG=True)) # - # ## Training # + for epoch in range(args.epoch): dataset = DataFolder_ver2_BMS(args.train, '/home/quang/working/Theory_of_ML/hgraph2graph/train_processed_bms_ver2/ids', args.batch_size, transform, '/home/quang/working/Theory_of_ML/mbs-molecular-captioning/images/train/') for batch, images_list, labels in tqdm(dataset): total_step += 1 # images_list, tree_batch, jtenc_holder, mpn_holder, (jtmpn_holder,batch_idx) = batch_t # batch = tree_batch, jtenc_holder, mpn_holder, (jtmpn_holder,batch_idx) images_list = images_list.to(device) optimizer.zero_grad() optimizer_vae.zero_grad() mol_vec = model_cnn(images_list) loss, kl_div, wacc, iacc, tacc, sacc = model.forward_edit(*batch, mol_vec, beta=beta) # loss, kl_div, wacc, tacc, sacc = model.forward_edit(batch, beta, tree_vec, mol_vec) loss.backward() optimizer_vae.step() optimizer.step() 
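# Note: args.clip_norm is defined in ARGS above but not applied in this loop.
# If gradient clipping is desired, a call such as
#   nn.utils.clip_grad_norm_(model.parameters(), args.clip_norm)
#   nn.utils.clip_grad_norm_(model_cnn.parameters(), args.clip_norm)
# would go between loss.backward() and the optimizer steps.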
if total_step % args.anneal_iter == 0: scheduler.step() print("learning rate: %.6f" % scheduler.get_lr()[0]) # try: # optimizer.zero_grad() # tree_vec, mol_vec = model_cnn(images_list) # loss, kl_div, wacc, iacc, tacc, sacc = model.forward_edit(*batch, mol_vec, beta=beta) # # loss, kl_div, wacc, tacc, sacc = model.forward_edit(batch, beta, tree_vec, mol_vec) # loss.backward() # optimizer.step() # except Exception as e: # print (e) # continue meters = meters + np.array([kl_div, loss.item(), wacc * 100, iacc * 100, tacc * 100, sacc * 100]) if total_step % args.print_iter == 0: meters /= args.print_iter print("[%d] Beta: %.3f, KL: %.2f, loss: %.3f, Word: %.2f, %.2f, Topo: %.2f, Assm: %.2f, PNorm: %.2f, GNorm: %.2f" % (total_step, beta, meters[0], meters[1], meters[2], meters[3], meters[4], meters[5], param_norm(model), grad_norm(model))) sys.stdout.flush() meters *= 0 # if total_step % args.save_iter == 0: # torch.save(model.state_dict(), args.save_dir + "/model.iter-" + str(total_step)) # if total_step % args.anneal_iter == 0: # scheduler.step() # print ("learning rate: %.6f" % scheduler.get_lr()[0]) # if total_step % args.kl_anneal_iter == 0 and total_step >= args.warmup: # beta = min(args.max_beta, beta + args.step_beta) torch.save(model_cnn.state_dict(), "./model_cnn/model_cnn_hgraph_ver3_good_100eps_512.iter-" + str(total_step)) # - # ## Validation # + # model_cnn.load_state_dict(torch.load('./model_cnn/model_cnn.iter-145')) # + # dataset_val = DataFolder_ver2_BMS('./val_processed_bms/mol/', '/home/quang/working/Theory_of_ML/hgraph2graph/val_processed_bms/ids',args.batch_size, transform, '/home/quang/working/Theory_of_ML/mbs-molecular-captioning/images/val/') # for batch, images_list, labels in tqdm(dataset_val): # # total_step += 1 # # # images_list, tree_batch, jtenc_holder, mpn_holder, (jtmpn_holder,batch_idx) = batch_t # # # batch = tree_batch, jtenc_holder, mpn_holder, (jtmpn_holder,batch_idx) # # images_list = images_list.to(device) # # optimizer.zero_grad() # # tree_vec, mol_vec = model_cnn(images_list) # # loss, kl_div, wacc, iacc, tacc, sacc = model.forward_edit(*batch, mol_vec, beta=beta) # # # loss, kl_div, wacc, tacc, sacc = model.forward_edit(batch, beta, tree_vec, mol_vec) # # loss.backward() # # optimizer.step() # break dataset_val = DataFolder_ver2_BMS('./val_processed_bms/mol/', '/home/quang/working/Theory_of_ML/hgraph2graph/val_processed_bms/ids',args.batch_size, transform, '/home/quang/working/Theory_of_ML/mbs-molecular-captioning/images/val/') total_labels = [] total_preds = [] for batch, images_list, labels in tqdm(dataset_val): with torch.no_grad(): images_list = images_list.to(device) mol_vec = model_cnn(images_list) root_vecs, root_kl = model.rsample(mol_vec, model.R_mean, model.R_var, perturb=True) outputs = model.decoder.decode((root_vecs, root_vecs, root_vecs), greedy=True, max_decode_step=150) total_labels.extend(labels) total_preds.extend(outputs) # - # ## Levenshtein distance for SMILES code # + import Levenshtein def get_score(y_true, y_pred): scores = [] for true, pred in zip(y_true, y_pred): score = Levenshtein.distance(true, pred) scores.append(score) avg_score = np.mean(scores) return avg_score score = get_score(total_labels, total_preds) print (score) # + import Levenshtein def get_score_inchi(y_true, y_pred): scores = [] for true, pred in zip(y_true, y_pred): inchi_pred = Chem.MolToInchi(Chem.MolFromSmiles(pred)) inchi_true = Chem.MolToInchi(Chem.MolFromSmiles(true)) score = Levenshtein.distance(inchi_true, inchi_pred) scores.append(score) avg_score = 
np.mean(scores) return avg_score score = get_score_inchi(total_labels, total_preds) print (score) # - # ## Validity score # + invalid_smiles = 0 for smile_temp in tqdm(total_preds): m = Chem.MolFromSmiles(smile_temp) if m is None: invalid_smiles += 1 print (1 - invalid_smiles/len(total_preds)) # - # ## Visualization # + images_list = images_list.to(device) mol_vec = model_cnn(images_list) root_vecs, root_kl = model.rsample(mol_vec, model.R_mean, model.R_var, perturb=True) outputs = model.decoder.decode((root_vecs, root_vecs, root_vecs), greedy=True, max_decode_step=150) # - for i in range(len(outputs)): print ('GT: ',labels[i]) print ('Pred: ', outputs[i]) i = 5 m = Chem.MolFromSmiles(labels[i]) # print (Chem.MolToInchi(m)) SVG(Draw.MolsToGridImage([m], molsPerRow=4, subImgSize=(180, 150), useSVG=True)) smiles_code = labels[i] m = Chem.MolFromSmiles(smiles_code) print (Chem.MolToInchi(m)) SVG(Draw.MolsToGridImage([m], molsPerRow=4, subImgSize=(180, 150), useSVG=True)) smiles_code = 'C1#CCC[CH2:1]CC#CCC1' m = Chem.MolFromSmiles(smiles_code) print (Chem.MolToInchi(m)) SVG(Draw.MolsToGridImage([m], molsPerRow=4, subImgSize=(180, 150), useSVG=True)) smiles_code = 'C1#CC[CH2:1][CH2:1][CH2:1]C#CCC1' m = Chem.MolFromSmiles(smiles_code) print (Chem.MolToInchi(m)) SVG(Draw.MolsToGridImage([m], molsPerRow=4, subImgSize=(180, 150), useSVG=True)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.7 64-bit (''.venv'': venv)' # language: python # name: python3 # --- # # Demo de Altair # # Exploremos rápidamente como se ve el código para crear visualizaciones con Altair. import altair as alt import pandas as pd from vega_datasets import data # Comencemos con unos ejemplos de la galería de altair: https://altair-viz.github.io/gallery/index.html # + barras_df = pd.DataFrame({ 'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'], 'b': [28, 55, 43, 91, 81, 53, 19, 87, 52] }) alt.Chart(barras_df).mark_bar().encode( x='a', y='b' ) # - # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # + #https://vincentarelbundock.github.io/Rdatasets/datasets.html #Yearly batting records for all major league baseball players #read in the CSV file as available on the site players <- read.csv(file="Documents/baseball.csv", header=TRUE, sep=",") head(players) # - library(dplyr) playerst <- tbl_df(players) playerst glimpse(playerst) data <- sample_n(players, 30) glimpse(data) #filter only players with over 200 hits in a season over200 <- filter(players, h > 200) head(over200) nrow(over200) over200and40hr <- filter(players, h > 200 & hr > 40) head(over200and40hr) nrow(over200and40hr) filter(players, h > 300) # add column percentage of times at bat player got a hit pct <- mutate(players, hitpct = h / ab) head(pct) summary(pct) summarise(pct, mean(hitpct, na.rm = TRUE)) library(magrittr) pct %>% summarise(hitpct %>% mean(na.rm = TRUE)) quantile(pct$hitpct, probs = 0.99, na.rm = TRUE) top_players <- filter(pct, hitpct > 0.47) top_players <- top_players[order(top_players$hitpct) , ] head(top_players) nrow(top_players) #new top_players %>% filter(id == "ruthba01") # + #new TODO #starwars %>% # group_by(species) %>% # summarise( # n = n(), # mass = mean(mass, na.rm = TRUE) # ) %>% # filter(n > 1) # - teamhitpct <- summarise(group_by(pct, team), mean(hitpct, 
na.rm = TRUE)) names(teamhitpct) <- c("team", "hitpct") summary(teamhitpct) teamhitpct <- teamhitpct[order(-teamhitpct$hitpct) , ] head(teamhitpct) quantile(teamhitpct$hitpct, probs = 0.99) #new by_team <- group_by(players, team) by_team #new summarize(by_team,mean(h / ab)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Question: When during the day is most dangerous? # # Let's see how accidents are splitted based on the place of the event and see where we can feel to be the safest. # # First step before any data analysis is to import required libraries and data. Any information required to understand columns is available here: https://www.kaggle.com/ahmedlahlou/accidents-in-france-from-2005-to-2016. # + import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns dataset = pd.read_csv('data/caracteristics.csv', encoding='latin1') dataset.head() # - # Let's check the 'lum' column and change the values there. # * 1 - Full day # * 2 - Twilight or dawn # * 3 - Night without public lighting # * 4 - Night with public lighting not lit # * 5 - Night with public lighting on # # But first check if there are a missing values in column that we want to work on. dataset.columns[dataset.isna().sum() != 0] # Now, let's change the 'lum' column to be more understandable by humans. dataset['lum'] = dataset['lum'].astype(str) dataset['lum'] = dataset['lum'].replace({'2': 'Twilight or dawn', '1': 'Full day', '3': 'Night no public lighting', '4': 'Night lighting not lit', '5': 'Night with public lighting on'}) plt.clf() ax = sns.countplot(y = 'lum', data=dataset) ax.set_title('Number of accidents based on the lightning conditions') ax.set_xlabel('Number of accidents') ax.set_ylabel('Light conditions') plt.show() # Now let's see summarization when during all that years there was the highest number of accidents. First let's check what type is in 'hrmn' column which represents combined hour and minutes. dataset.dtypes # Ceate new column 'hr' where we'll put only hour of accident in 24 hour format. dataset['hr'] = dataset['hrmn']/100 dataset['hr'] = dataset['hr'].astype('int64') dataset.head() plt.clf() plt.figure(figsize=(10,7)) ax = sns.countplot(x='hr', data=dataset, palette="GnBu_d") ax.set_title('Number of accidents during 24-hours') ax.set_xlabel('Hour') ax.set_ylabel('Number of accidents') plt.show() # It looks like the most of accidents are happening during the day, with two peaks. First peak is between 8:00 and 9:00 and second between 17:00 and 18:00. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ! explorer . # # ファイルの整理 # ## 課題 # たまった写真をGOOGLE DRIVEに入れたい # しかし、写真を貯めているフォルダの構造は、めちゃくちゃで、重複しているファイルもある。フォルダのサイズは80GB! 
# とても手作業でやる気が起こらないので、スクリプトを組んだ # ## 方針 # * フォルダ内に同一ファイル名が存在しないように、一括でリネームする。 # * フォルダ内のファイルを全てpandas dataframeに格納 # * ハッシュなるものを使って、同一ファイルを抽出、pandas dataframeで重複ファイルの無いリストを作る # * 上記のリストのファイルを新しいフォルダに移動 # * 最後に人が見ながら、不要なファイルを削除 # # スクリプト # + # -*- coding: utf-8; py-indent-offset:4 -*- ############################################################################### # # Copyright (C) # ############################################################################### import os,hashlib class env_copy_right(): ''' working_dirにあるファイルのパスとハッシュを取得する ''' def __init__(self): self.file_list = [] def get_list(self,working_dir): for root, dirs, files in os.walk(working_dir): for filename in files: fname=os.path.join(root, filename) with open(fname, 'rb') as f: checksum = hashlib.md5(f.read()).hexdigest() self.file_list.append([fname,checksum]) if __name__ == '__main__': working_dir = "D:\\新しいフォルダー" obj=env_copy_right() obj.get_list(working_dir) # - # ## 取得したファイルを片っ端からリネーム # + import os x=0 for i in obj.file_list: root, ext = os.path.splitext(i[0]) pa=root+"kiroku_"+str(x)+ext print(pa) os.rename(i[0], pa) x=x+1 # - # ## データフレームを作成 import pandas as pd df=pd.DataFrame(obj.file_list) df.columns = ["path","checksum"] #カラム名を付ける # ## 重複を削除してdf1として再定義 df1=df.drop_duplicates(['checksum']) df1=df1.reset_index( drop = True ) df1 # ## df1のファイル名を取得 import os, shutil file_name_list=[] for i, v in df1.iterrows(): print(v['path']) p=os.path.basename(v['path']) file_name_list.append(p) file_name_list # ## データフレームに追加 df2=pd.DataFrame(file_name_list) df2.columns = ["file_name"] df2 # ## 同一ファイル名が無い事を確認(今回、階層をなくしたいため) # ### ファイル名の重複を確認 df2.duplicated(['file_name']).any() # ### 重複している場合は、ファイル名を出力する df3=df2.apply(pd.value_counts) df3.loc[(df3["file_name"] > 1)] # ## ファイル名をデータフレームに追加する df1=df1.join(df2) df1 # ## ファイルをコピー # + run_control={"marked": false} ls=list(df1["path"]) # - # ls import shutil for i in ls: shutil.copy(i, "D:\\pic") # # 構成部品 # ## ハッシュとは # ハッシュ値、大きくアバウトに言ってしまうと、データを特定するするために、あるアルゴリズム(関数)から算出される値。 # 簡単な例では、データの同一性をチェックするための「チェックサム」もその1つ。コンパイラの高速テーブル検索でもハッシュが使われる。 # 「チェックサム」では、データを一定のビット数で区切ってその総和を、送り側で計算してデータに付加しておく。これが「チェックサム」といわれるの由来。 # 受け側でこの「チェックサム」を除く、純データの「チェックサム」を計算し直し、付き合わせることでデータの同一性をかなりの高確率で保証できる。 import hashlib # fname="test_file - コピー" fname="test_file" # fname="haikei_2.ipynb" with open(fname, 'rb') as f: checksum = hashlib.md5(f.read()).hexdigest() checksum # ## ファイルパス # + import os working_dir = 'C:\\Users\\mineo\\Anaconda3\\envs\\mine01' file_list = [] for root, dirs, files in os.walk(working_dir): for filename in files: file_list.append(os.path.join(root, filename)) # print(file_list) # df_list = [pd.read_table(file) for file in file_list] # if df_list: # final_df = pd.concat(df_list) # final_df.to_csv(os.path.join(root, "Final.csv")) file_list # - len(file_list) # ## 合成 hikaku_1=set() for i in file_list: hikaku_1.add(i[0]+"_____hash_____"+i[1]) hikaku_1 len(hikaku_1) # ## 重複ファイルの確認 # ハッシュでソートして、同じハッシュのファイルを表示する df.sort_values(by=["checksum"], inplace=True, ascending=False) df=df.reset_index( drop = True ) df1=df.apply(pd.value_counts) df2=df1.loc[(df1["checksum"] > 1)] # + run_control={"marked": true} pd.set_option("display.max_colwidth", 80) df3=df.loc[(df["checksum"] == df2.index[10])] df3["path"] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Extract Embeddings from Input import os import json import numpy as np 
import pandas as pd import tensorflow as tf from numpy.linalg import norm from invoke import run, exceptions # ### Extract Embeddings bucket_name = 'ekaba-assets' model_dir = 'biomedbert_large_bert_weights_and_vocab' config = 'large_bert_config.json' vocab = 'vocab.txt' checkpoint = tf.train.latest_checkpoint('gs://{}/{}'.format(bucket_name, model_dir)) voc_fname = 'gs://{}/{}/{}'.format(bucket_name, model_dir, vocab) config_fname = 'gs://{}/{}/{}'.format(bucket_name, model_dir, config) data = pd.read_pickle('biosaq_format.pkl') data.head() # convert series to .txt file def series_to_file(series, fname): series.to_csv('{}.txt'.format(fname), index=False) series_to_file(data.question, 'question') series_to_file(data.answer, 'answer') print(checkpoint) print(voc_fname) print(config_fname) file_name = os.path.relpath('answer.txt') output_dir = 'output_{}.jsonl'.format(file_name.split('.')[0]) print(file_name) print(output_dir) # !python3 bert/extract_features.py \ # --input_file={file_name} \ # --output_file={output_dir} \ # --vocab_file={voc_fname} \ # --bert_config_file={config_fname} \ # --init_checkpoint={checkpoint} \ # --layers=-1 \ #,-2,-3,-4 \ # --max_seq_length=128 \ # --batch_size=8 # ### Format outout JSONl with open(output_dir) as f: data = f.read() def get_sent_embed(output_jsonl) : #We will run the model and get the outputs json_lines = output_jsonl.split('\n') #Removing the blank strings json_lines = list(filter(None,json_lines)) #getting the dimensions & getting the output of the query line_q = json.loads(json_lines[0]) embed_size = len(line_q['features'][0]['layers'][0]['values']) #Temp list for saving the tokens token_temp_q = [] #array for saving the embeddings feat_embed_q = np.array(line_q['features'][0]['layers'][0]['values']) #Getting the final df df_query = pd.DataFrame() for j,feature in enumerate(line_q['features']): token_temp_q.append(feature['token']) #final_output_embeddings tokens_query = ' '.join(token_temp_q[1:len(token_temp_q)-1]) #final query dataframe df_query['documents'] = [tokens_query] df_query['embedding'] = [feat_embed_q] #--------------------------------------- answers ----------------------------------------------# #Defining the lists sent_embed = [] tokens = [] #Getting the final df df_ans = pd.DataFrame() #Running for the sentence for i in range(1,len(json_lines)): line = json.loads(json_lines[i]) feat_embed = np.array(line['features'][0]['layers'][0]['values']) #Temp list for saving the tokens token_temp = [] for j,feature in enumerate(line['features']): token_temp.append(feature['token']) #sanity checks if feat_embed.sum() == 0 : print ('Check_model') #final_output_embeddings sent_embed.append(feat_embed) tokens.append(' '.join(token_temp[1:len(token_temp)-1])) df_ans['documents'] = tokens df_ans['embedding'] = sent_embed return df_query, df_ans # %%time df_query, df_ans = get_sent_embed(data) df_query df_ans.head() len(df_ans.embedding[0]) df_ans.to_pickle('answer.pkl') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg # %matplotlib qt img = mpimg.imread('perspective_transform.jpg') plt.imshow(img) 21,40 21,68 78,76 7850 def warp(img): img_size = (img.shape[1], img.shape[0]) src = np.float32([[21,40],[21,68],[78,76],[78,50]]) dst = np.float32([[21,45],[21,74],[65,74],[65,45]]) M = 
cv2.getPerspectiveTransform(src, dst) minv = cv2.getPerspectiveTransform(dst, src) warped = cv2.warpPerspective(img, M, img_size, flags= cv2.INTER_LINEAR) return warped warped_img = warp(img) f, (ax1, ax2) = plt.subplots(1, 2, figsize = (20, 10)) ax1.set_title('orginal') ax1.imshow(img) ax2.set_title('warped') ax2.imshow(warped_img) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Alpha Update # # Consider an agent with private belief of $\hat \alpha$ who takes an attestation action while the state is characterized by: # # $C$: commitment for payout at successful completion
# $R$: current reserve funds in the bonding curve
# $S$: total bond token supply
# $S_0$: total bond tokens locked in negative attestation
# $S_1$: total bond tokens locked in positive attestation
# $S_{free} = S-S_0-S_1$: bond tokens not committed to attestations
# $Q_0$: total claims on rewards given negative attestation
# $Q_1$: total claims on rewards given positive attestation
# $\alpha$: the system's current belief of the likelihood of success
#
# ### Expected Payout
# The close-out 'expected' payout at the system level can be described as
#
# Aggregate for holders of the free supply $S_{free}$
# $$(\alpha C + R) \frac{S_{free}}{S}$$
#
# Aggregate for holders of the negative outcome claims $Q_0$
# $$(1-\alpha) \frac{S_0}{S} R$$
#
# Aggregate for holders of the positive outcome claims $Q_1$
# $$\alpha \frac{S_1}{S} (C+R)$$
#
# Note that the total expected payout for the whole system is given by
# $$\mathbb E(\Theta) = (\alpha C + R) \frac{S_{free}}{S} + (1-\alpha) \frac{S_0}{S} R + \alpha \frac{S_1}{S} (C+R)$$
# Rearranging,
# $$ \mathbb E(\Theta) = (\alpha C + R) \frac{S_{free}}{S} + \alpha \frac{S_1}{S}C + \frac{S_0}{S}R + \alpha R \frac{S_1-S_0}{S}$$
#
# If $S_0=S_1$, the above collapses to $(\alpha C + R)$.
#
# Now let's suppose $S_0\not=S_1$ but find $\alpha$ such that this equality holds anyway:
#
# $(\alpha C + R) = (\alpha C + R) \frac{S_{free}}{S} + \alpha \frac{S_1}{S}C + \frac{S_0}{S}R + \alpha R \frac{S_1-S_0}{S}$
#
# Distributing the multiplications:
#
# $\alpha C + R = \alpha C\frac{S_{free}}{S} + R \frac{S_{free}}{S} + \alpha \frac{S_1}{S}C + \frac{S_0}{S}R + \alpha R \frac{S_1-S_0}{S}$
#
# Grouping the terms with $\alpha$:
#
# $\alpha C -\alpha \frac{S_{free}}{S}C-\alpha \frac{S_1-S_0}{S}R - \alpha \frac{S_1}{S}C = \frac{S_{free}}{S}R + \frac{S_0}{S}R - R$
#
# Regrouping:
#
# $\alpha \left(C -\frac{S_{free}}{S}C-\frac{S_1-S_0}{S}R - \frac{S_1}{S}C \right)= \left(\frac{S_{free}}{S} + \frac{S_0}{S} - 1\right)R$
#
# Swapping signs:
#
# $\alpha \left(\frac{S_{free}}{S}C+\frac{S_1-S_0}{S}R + \frac{S_1}{S}C -C\right)= \left(1-\frac{S_{free}}{S} - \frac{S_0}{S} \right)R$
#
# Factoring:
#
# $\alpha C \left(\frac{S_{free}}{S}+\frac{S_1-S_0}{S}\frac{R}{C} + \frac{S_1}{S} -1\right)= \left(1-\frac{S_{free}}{S} - \frac{S_0}{S} \right)R$
#
# Collapsing with $S = S_0+S_1+S_{free}$:
#
# $\alpha C \left(\frac{S_1-S_0}{S}\frac{R}{C} -\frac{S_0}{S}\right)= \frac{S_1}{S}R$
#
# Multiplying both sides by $S$ and distributing:
#
# $\alpha \left(S_1R-S_0R -S_0 C\right)= S_1R$
#
# A valid starting condition $(\alpha, S_1, S_0, R, C)$ must satisfy $S_1 R \ge S_0(R+C)$, with
#
# $$\alpha = \frac{S_1 R}{S_1R-S_0R -S_0 C}$$
#
# As $S_1$ dominates $S_0$: $\alpha \rightarrow 1$
    # As $S_0$ dominates $S_1$: $\alpha \rightarrow 0$ # # Spot checks in desmos imply that alpha should be: # # $$\alpha = \frac{S_1 R}{S_1R-S_0R +S_0 C}$$ # # https://www.desmos.com/calculator/ucgbgiy7ci # ![](https://i.imgur.com/BfMSeHS.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- # # PyBPS Tutorial # # 1. Initialization # # The first thing to do is obviously to import the `pybps` package. # At the same time, we also import other useful packages. # + import pybps import os import sys import re import sqlite3 import pandas as pd from pandas.io import sql import matplotlib.pyplot as plt # - # Once the `pybps` is imported into `python`, the first thing to do is to create an instance of the `BPSProject` class to get all the information required to run a particular simulation project from the project directory and hold it in a series of instance variables. This way, the project information will be easily retrieved when the different functions that manage simulation pre/post-processing and running are called. # # The simplest and quickest way to get ready is to instanciate the `BPSProject` class with the `path` and `batch` arguments defined. `path` is the path (relative or absolute) to the directory holding the simulation files for a particular project. `batch` is a flag which sets whether the simulation project corresponds to a single run (`batch = False`, which is the default value) or to a batch run (`batch = True`). # # In the present tutorial, we will use the very simple `Begin` example `TRNSYS` project that can be found in the `Examples` directory of any `TRNSYS` installation. We just made some modifications to the output (using Type46) and added parameters in an external `parameters.txt` file for the batch run. This example project can be found in the `Examples/TRNSYS` folder found in the `PyBPS` package. Note that as of today, `PyBPS` has only been tested with `TRNSYS` simulation projects, altought its functionnalities could easily be used with any other text-file-based simulation tool. bps = pybps.BPSProject('Examples/TRNSYS/Begin', batch=True) # Another way to create an instance of the `BPSProject` class is to call it without any arguments and then use the `path` and `batch` methods to set both variables. In the case of the `batch` method, calling it sets `batch` to `True` (since by default it is set to `False`, which corresponds to a single simulation run). bps = pybps.BPSProject() bps.path('Examples/TRNSYS/Begin') bps.batch() # Once we have got our `bps` object created, we can check the simulation project info obtained from the project folder and stored in the object's attributes. Behind the scenes, the `BPSProject` class uses two hidden methods to detect the simulation tool to be used (based on file extensions found in given directory) and to get the info required to run single or batch runs. The basic info needed to run any tool is contained in the `config.ini` file is the base folder of the `pybps` package. 
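# As a minimal illustration, a `config.ini` of this kind could be read with Python's standard `configparser`; the `[TRNSYS]` section layout and the `extension` key below are assumed placeholders rather than the actual `pybps` schema (only an `executable` entry is shown later in this tutorial).

# +
# Sketch under assumed key names, not the real pybps config schema.
# Assumed config.ini contents:
#
#   [TRNSYS]
#   executable = C:\TRNSYS18\Exe\TrnEXE.exe
#   extension = .dck
import configparser

cfg = configparser.ConfigParser()
cfg.read('config.ini')

# Sections and options behave like nested dictionaries
print(cfg['TRNSYS']['executable'])
print(cfg['TRNSYS']['extension'])
# -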
# # If the project is of the "single run" type, the following instance variables hold the basic info needed to actually run the simulation: # Path to the folder containing simulation files bps.path # Simulation tool name bps.sim_tool # Simulation input file path bps.simfile_path # Basic config info needed to call the proper commands to run the simulation tool and identify the basic simulation files. # This info is contained in the "config.ini" file. bps.config # Particular configuration parameters can be acceded like in any python dictionnary bps.config['executable'] # If the simulation project happens to be a batch run, an additional set of instance variables is created to hold information about the template and parameter files, as well as the list of jobs to be run. In `pybps`, template files are simulation files containing parameters to be replaced prior to running simulation. Parameters are identified as strings surrounded by `%` signs, like `%PAR%` for example. The user has to create the template files (replacing acordingly the simulation parameters with parameters search strings) prior to calling the `pybps` package and place a parameter file in csv format in the project folder. Template and parameter files should contain a specific search string in their filename to be recognized as such by `pybps`. By default, users should include `_TMP` in template filenames and `_PAR` in parameter filenames. These are just the default settings and can be modified in the `config.ini` file. # # If the simulation project was identified by the user as corresponding to a batch run (`batch = True`) and `pybps` can't find any template or parameter file, it will give an error message and exit. # Unique ID for the batch to be run. Allows for succesive batch runs with different sets of parameters to be run within a same directory without the risk to overwrite cases. # Also helps for storing info in sql databases bps.series_id # List of paths to the template files found in the project directory bps.tmpfiles_pathlist # Path to parameter file bps.paramfile_path # List of jobs to be run # This is actually a list of dicts containing all the parameters for the current batch run bps.job_list # Number of simulation jobs to be run bps.njob # ## 2. Pre-process Simulation Data # job = pybps.BPSJob('Examples/TRNSYS/Begin_PARAM/SIM00001') job.path # ## 3. Run Simulation Jobs # bpsbatch.job[1].run() bpsbatch.run() # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.5.0-dev # language: julia # name: julia-0.5 # --- # #### This notebook generates a ball tree and then shows a few pretty plots. 
push!(LOAD_PATH, "../..") using NearestNeighbors using Colors using PyPlot using PyCall @pyimport matplotlib.patches as patch tree = BallTree(rand(2,100), Euclidean(); leafsize = 10) # + import NearestNeighbors.HyperSphere # Adds a sphere to an axis function add_sphere(ax, hs::HyperSphere, col) ell = patch.Circle(hs.center, radius = hs.r, facecolor="none", edgecolor=col) ax[:add_artist](ell) end # Skip non leaf nodes offset = tree.tree_data.n_internal_nodes + 1 nleafs = tree.tree_data.n_leafs # Range of leaf nodes index_range = offset: offset + nleafs - 1 # Generate some nice colors cols = distinguishable_colors(length(index_range), RGB(0,0,0)) # Create figure cfig = figure() ax = cfig[:add_subplot](1,1,1) ax[:set_aspect]("equal") axis((-.25,1.25,-.25,1.25)) for (i, idx) = enumerate(index_range) col = cols[i] # Get the indices of the leaf nodes into the tree data range = NearestNeighbors.get_leaf_range(tree.tree_data, idx) d = tree.data[:, range] # Plot the points in the hyper spehre plot(vec(d[1,:]), vec(d[2,:]), "*", color = (col.r, col.g, col.b)) # And the hypersphere itself sphere = tree.hyper_spheres[idx] add_sphere(ax, sphere, (col.r, col.g, col.b)) end title("Leaf nodes with their corresponding points") # + # Range of leaf nodes index_range = 1: offset + nleafs - 1 # Generate some nice colors cols = distinguishable_colors(length(index_range), RGB(0,0,0)) # Create figure cfig = figure() ax = cfig[:add_subplot](1,1,1) ax[:set_aspect]("equal") axis((-.5,1.5,-.5,1.5)) for (i, idx) = enumerate(index_range) col = cols[i] # And the hypersphere itself sphere = tree.hyper_spheres[idx] add_sphere(ax, sphere, (col.r, col.g, col.b)) end # + import NearestNeighbors: isleaf, getleft, getright # Two different colors for internal and leaf nodes col_leaf = RGB(1.0,0.0,0.0) col_internal = RGB(0.0,0.0,0.0) # Create figure cfig = figure() ax = cfig[:add_subplot](1,1,1) ax[:set_aspect]("equal") axis((-.5,1.5,-.5,1.5)) function plot_tree(index, lvl) leaf = isleaf(tree.tree_data.n_internal_nodes, index) col = leaf ? 
col_leaf : col_internal sphere = tree.hyper_spheres[index] add_sphere(ax, sphere, (col.r, col.g, col.b)) # Recursively call the tree plotter on the sub trees if !leaf plot_tree(getleft(index), lvl+1) plot_tree(getright(index), lvl+1) end end plot_tree(1, 1) title("Leaf nodes in red, internal nodes in black"); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from gpcharts import figure import json from dateutil import parser # + # my_plot = figure(title='Demo') # my_plot.plot([1, 2, 10, 15, 12, 23]) # - # ls -al with open('data.json') as dataFile: jsonData = json.load(dataFile) dt = parser.parse(jsonData["cc45108d4bf137346761df0d022f1383"]["time"]) jsonData["cc45108d4bf137346761df0d022f1383"]["time"] dt.hour timeSlice = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] counter = 0 for key in jsonData: userKey = jsonData[key]["userKey"] if userKey != "encryptedUserKey" and userKey != "" and userKey != "": dt = parser.parse(jsonData[key]["time"]) hour = (dt.hour + 9) % 24 timeSlice[hour] += 1 counter += 1 timeSlice my_plot = figure(title='Demo') my_plot.plot(timeSlice) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] nbgrader={} # # Project Euler: Problem 52 # + [markdown] nbgrader={} # https://projecteuler.net/problem=52 # # It can be seen that the number, $125874$, and its double, $251748$, contain exactly the same digits, but in a different order. # # Find the smallest positive integer, $x$, such that $2x$, $3x$, $4x$, $5x$, and $6x$, contain the same digits. # + [markdown] nbgrader={} # First, write a function `same_digits(x,y)` that returns `True` if two integers `x` and `y` have the exact same set of digits and multiplicities and `False` if they have different digits. # + nbgrader={"checksum": "aad5ed41801af39fc06f00c8d275a010", "solution": true} def same_digits(x, y): value = True xs = str(x) ys = str(y) xl = [i for i in xs] yl = [j for j in ys] xs = sorted(xl) ys = sorted(yl) if len(xs) != len(ys): value = False else: for i in range(0,len(xs)): if xs[i]!=ys[i]: value = False return value # + deletable=false nbgrader={"checksum": "dd0aff5d565bc794cee175aaa6c0cb3d", "grade": true, "grade_id": "projecteuler52a", "points": 4} assert same_digits('132', '321') assert not same_digits('123', '3') assert not same_digits('456', '0987654321') # + [markdown] nbgrader={} # Now use the `same_digits` function to solve this Euler problem. As you work on this problem, be careful to debug and test your code on small integers before trying it on the full search. # - # Searching for the smallest. 
So I will look from the bottom and the first one I find is the smallest # + deletable=false nbgrader={"checksum": "6cff4e8e53b15273846c3aecaea84a3d", "solution": true} myint = 0 for b in range(1,200000): if same_digits(b, 2*b) and same_digits(b, 3*b) and same_digits(b, 4*b) and same_digits(b, 5*b) and same_digits(b, 6*b) : myint = b break myint # + deletable=false nbgrader={"checksum": "dafbda681e8fb50925790dc1d0600750", "grade": true, "grade_id": "projecteuler52b", "points": 6} assert True # leave this cell to grade the solution # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import mxnet as mx from mxnet.gluon import nn from mxnet import nd # 使用 NDArray 实现一个 Gluon 层 from mxnet import gluon my_param = gluon.Parameter('my parameter', shape=(3, 3)) my_param.initialize() print(my_param.data()) my_param.grad() # 自定义层时 通常使用 nn.Block 自带的一个 ParameterDict 类型的成员变量 params pd = gluon.ParameterDict(prefix='block1_') pd.get('my_parameter1', shape=(3, 3)) pd class MyDense(nn.Block): def __init__(self, units, in_units, **kwargs): super(MyDense, self).__init__(**kwargs) with self.name_scope(): self.weight = self.params.get('weight', shape=(in_units, units)) self.bias = self.params.get('bias', shape=(units,)) def forward(self, x): linear = nd.dot(x, self.weight.data()) + self.bias.data() return nd.relu(linear) dense = MyDense(5, in_units=10, prefix='o_my_dense_') dense.params dense.initialize() dense(nd.random.uniform(shape=(100, 10))) net = nn.Sequential() with net.name_scope(): net.add(MyDense(32, in_units=64)) net.add(MyDense(2, in_units=32)) net.initialize() net(nd.random.uniform(shape=(2,64))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pathlib import Path import glob import hashlib import pandas as pd import pyarrow as pa import pyarrow.parquet as pq # - csv_files = glob.glob('../data/yellow_tripdata_2016-*.csv') for csv_file in csv_files: # Change the file suffix p = Path(csv_file) parquet_file = p.parent / f"{p.name[:-3]}parquet" str_parquet_file = p.parent / f"str_{p.name[:-3]}parquet" cat_parquet_file = p.parent / f"cat_{p.name[:-3]}parquet" # Read in the CSV and already convert datetime columns df = pd.read_csv( csv_file, parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"], index_col=False, infer_datetime_format=True, ) # store_and_fwd_flag is actually boolean but read in as string. # Manually change it to have a more efficient storage. 
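# An equivalent, slightly more explicit alternative (sketch, not what this notebook uses)
# is df['store_and_fwd_flag'].map({'Y': True, 'N': False}), which keeps missing values
# as NaN instead of silently turning them into False.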
df['store_and_fwd_flag'] = df['store_and_fwd_flag'] == 'Y' # Store it with the default options: # * a single RowGroup, no chunking # * SNAPPY compression df.to_parquet(parquet_file, engine="pyarrow") df['str'] = df.apply(lambda x: hashlib.sha256(str(x).encode()).hexdigest(), axis = 1) df.to_parquet(str_parquet_file, engine="pyarrow") df['str'] = df['str'].apply(lambda s: f"{s[0]}-{s[1]}-{s[2]}") df.to_parquet(cat_parquet_file, engine="pyarrow") for csv_file in csv_files: # Change the file suffix p = Path(csv_file) parquet_file = p.parent / f"{p.name[:-3]}parquet" str_parquet_file = p.parent / f"str_{p.name[:-3]}parquet" str_csv_file = p.parent / f"str_{p.name}" cat_parquet_file = p.parent / f"cat_{p.name[:-3]}parquet" cat_csv_files = p.parent / f"cat_{p.name}" df = pd.read_parquet(str_parquet_file) df.to_csv(str_csv_file, index=False) df = pd.read_parquet(cat_parquet_file) df.to_csv(cat_csv_files, index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # Unified batch and streaming ingestion API # ### Storage Write API - https://cloud.google.com/bigquery/docs/write-api # ### BigQuery Storage V1 -https://cloud.google.com/bigquery/docs/reference/storage/rpc/google.cloud.bigquery.storage.v1 # ##### "Pending" mode chosen from the available three modes - ( Commited , Pending , Buffered ) # ### BigQuery Storage API V1 Methods - https://cloud.google.com/bigquery/docs/reference/storage/rpc import sys # #!{sys.executable} -m pip install --upgrade google-cloud-bigquery-storage from dotenv import load_dotenv load_dotenv() # !{sys.executable} -m pip show google-cloud-bigquery-storage # + from google.cloud import bigquery from google.cloud.bigquery import Table from googleapiclient import discovery from oauth2client.client import GoogleCredentials from google.cloud import bigquery_storage_v1 from google.cloud.bigquery_storage_v1 import BigQueryWriteClient from google.cloud.bigquery_storage_v1 import types from google.cloud.bigquery_storage_v1 import writer from google.protobuf import descriptor_pb2 # - # ### Make Proto2 stubs with proto2 messages , since Storage API V1 only support protobuf V2 # ![image.png](attachment:463134ee-8162-4160-b0ed-0ba7f5b782ac.png) project_id = 'bigquery-project-32089' dataset_id = 'customers_dataset' table_id = 'customer_records' dataset_full_name = f"{project_id}.{dataset_id}" table_full_name = f"{project_id}.{dataset_id}.{table_id}" # + import json from collections import namedtuple import customers_1_pb2 import customers_2_pb2 import customers_3_pb2 # + def customerRecordsDecoder(customerDict): return namedtuple('X', customerDict.keys(), rename=True)(*customerDict.values()) def create_row_data(customer): row = customers_1_pb2.Customers_1_Message() row._id = customer._0 row.index = customer.index row.guid = customer.guid row.isActive = customer.isActive row.age = customer.age row.eyeColor = customer.eyeColor row.name = customer.name row.gender = customer.gender row.company = customer.company row.email = customer.email row.phone = customer.phone row.address = customer.address row.about = customer.about row.registered = customer.registered row.latitude = customer.latitude row.longitude = customer.longitude for i in customer.tags: row.tags.append(i) fri = row.Friends() for friend in customer.friends: fri.id = friend.id fri.name = friend.name row.friends.append(fri) row.greeting = 
customer.greeting row.favoriteFruit = customer.favoriteFruit return row.SerializeToString() def append_rows_pending(project_id: str, dataset_id: str, table_id: str): linesx = open(r'.\data\customers_1.json').read().splitlines() """Create a write stream, write data, and commit the stream.""" write_client = bigquery_storage_v1.BigQueryWriteClient() parent = write_client.table_path(project_id, dataset_id, table_id) write_stream = types.WriteStream() # When creating the stream, choose the type. Use the PENDING type to wait # until the stream is committed before it is visible. See: write_stream.type_ = types.WriteStream.Type.PENDING write_stream = write_client.create_write_stream( parent=parent, write_stream=write_stream ) stream_name = write_stream.name # Create a template with fields needed for the first request. request_template = types.AppendRowsRequest() # The initial request must contain the stream name. request_template.write_stream = stream_name # So that BigQuery knows how to parse the serialized_rows, generate a # protocol buffer representation of your message descriptor. proto_schema = types.ProtoSchema() proto_descriptor = descriptor_pb2.DescriptorProto() ##customer_record_pb2.CustomerRecord.DESCRIPTOR.CopyToProto(proto_descriptor) customers_1_pb2.Customers_1_Message.DESCRIPTOR.CopyToProto(proto_descriptor) proto_schema.proto_descriptor = proto_descriptor proto_data = types.AppendRowsRequest.ProtoData() proto_data.writer_schema = proto_schema request_template.proto_rows = proto_data # Some stream types support an unbounded number of requests. Construct an # AppendRowsStream to send an arbitrary number of requests to a stream. append_rows_stream = writer.AppendRowsStream(write_client, request_template) # Create a batch of row data by appending proto2 serialized bytes to the # serialized_rows repeated field. proto_rows = types.ProtoRows() count = 0 #request = types.AppendRowsRequest() # Strips the newline character for line in linesx: ### request.offset = count count += 1 # Parse JSON into an object with attributes corresponding to dict keys. customer = json.loads(line.strip(), object_hook=customerRecordsDecoder) proto_rows.serialized_rows.append(create_row_data(customer)) ## proto_rows.serialized_rows.append(create_row_data(2, "Bob")) # Set an offset to allow resuming this stream if the connection breaks. # Keep track of which requests the server has acknowledged and resume the # stream at the first non-acknowledged message. If the server has already # processed a message with that offset, it will return an ALREADY_EXISTS # error, which can be safely ignored. # # The first request must always have an offset of 0. request = types.AppendRowsRequest() request.offset = 0 proto_data = types.AppendRowsRequest.ProtoData() proto_data.rows = proto_rows request.proto_rows = proto_data response_future_1 = append_rows_stream.send(request) # Send another batch. ## proto_rows = types.ProtoRows() ## proto_rows.serialized_rows.append(create_row_data(3, "Charles")) # Since this is the second request, you only need to include the row data. # The name of the stream and protocol buffers DESCRIPTOR is only needed in # the first request. ## request = types.AppendRowsRequest() ## proto_data = types.AppendRowsRequest.ProtoData() ## proto_data.rows = proto_rows ## request.proto_rows = proto_data # Offset must equal the number of rows that were previously sent. 
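# Note: in this notebook every row parsed from customers_1.json is packed into the
# single request sent above (offset 0); the surrounding commented-out "second batch"
# lines appear to come from the upstream sample and are not executed here.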
## request.offset = 2 ## response_future_2 = append_rows_stream.send(request) print(response_future_1.result()) ## print(response_future_2.result()) # Shutdown background threads and close the streaming connection. append_rows_stream.close() # A PENDING type stream must be "finalized" before being committed. No new # records can be written to the stream after this method has been called. write_client.finalize_write_stream(name=write_stream.name) # Commit the stream you created earlier. batch_commit_write_streams_request = types.BatchCommitWriteStreamsRequest() batch_commit_write_streams_request.parent = parent batch_commit_write_streams_request.write_streams = [write_stream.name] write_client.batch_commit_write_streams(batch_commit_write_streams_request) print(f"Writes to stream: '{write_stream.name}' have been committed.") # - # ## Append rows pending append_rows_pending(project_id, dataset_id, table_id) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Duplication # # Пример отчета по пересечению аудитории интернет-проектов (WEB-Index). # # ## Описание задачи # # Посчитаем пересечение аудитории проектов: # - Vk.com # - Odnoklassniki.ru # # Общие параметры: # - Период: Февраль 2020 # - География: Россия 100 000+ # - Население: 12+ # - Фильтр по типу использования: нет, считаем по всем (Web Desktop, Web Mobile, App Online, App Offline) # # Статистики: # # - Reach # - Reach Column% # - Reach Row% # # # Для расчета потребуется сформировать три задания для Responsum: # # - данные по пересечению аудитории проектов # - данные по аудитории проектов # - данные по аудитории всего Интернета # # + # %reload_ext autoreload # %autoreload 2 import sys import os import re import json import datetime import time import pandas as pd import matplotlib.pyplot as plt from pathlib import Path from bokeh.io import output_notebook, show from bokeh.plotting import figure from IPython.display import JSON from bokeh.models import HoverTool from bokeh.layouts import gridplot import logging import seaborn as sns # from ipywidgets import interact, interactive, fixed, interact_manual # import ipywidgets as widgets from mediascope_api.core import net as msnet from mediascope_api.responsum import catalogs as rc from mediascope_api.responsum import tasks as rt logging.basicConfig(format='%(asctime)s %(levelname)s:%(message)s', level=logging.INFO, datefmt='%I:%M:%S') logger = logging.getLogger() logger.setLevel(logging.INFO) # pd.set_option("display.max_rows", 200) # pd.set_option("display.max_colwidth", 50) # pd.set_option("display.precision", 6) output_notebook() # - # ## Общие параметры для заданий # # Для начала зададим общие параметры # + # задаем параметры # выбираем тип установки mobile facility = 'mobile' # возможные значения: 'desktop', 'mobile', 'desktop_pre' # создаем объекты для работы с каталогами и заданиями, # а так же загружаем каталоги rcats = rc.ResponsumCats(facility) rtask = rt.ResponsumTask(facility) # задаем период расчета date_from = '2020-02-01' date_to = '2020-02-29' # задаем типы пользования Интернетом usetypes = rcats.get_usetype('all') # проверяем, что значения параметров установлены верно rtask.save_report_info(facility, date_from, date_to, usetypes) rtask.show_report_info() print(f"Объектов в media-каталоге: {rcats.holdings.shape[0]}") # - # ### Получим ID проектов # Для построения отчета нам необходимо получить 
идентификаторы сайтов __Vk.com__ и __Odnoklassniki.ru__, для этого воспользуемся ноутбуком, в котором приведены примеры работы с [каталогами](catalogs.ipynb): # # - Vk.com site_id = 16571 # - Odnoklassniki.ru site_id = 12808 # ## Задания # Перейдем к формированию заданий. # # # ### Задание №1. Расчет пересечения аудитории проектов # + # задаем название проекта для отображения в DataFrame project_name = 'dup' # задаем медиа фильтр и duplication фильтр, в нашем случае это ID проекта VK.com и Odnoklassniki.ru media_filter = "site = 16571" dup_media_filter = "site = 12808" # задаем фильтр, нас интересуют города России с населением 100 тыс. чел. и больше demo_filter = "CITY_TYPE2 != 4" # задаем список статистик для расчета statistics = ["Reach"] # указываем порядок группировки structure = { "usetype": False, "media": ["site"], "duplication": ["site"] } # формируем из заданных параметров задание для Responsum в формате JSON task_json = rtask.build_duplication_task(task_name=project_name, facility=facility, date_from=date_from, date_to=date_to, usetype_filter=usetypes, media_filter=media_filter, dup_media_filter=dup_media_filter, demo_filter=demo_filter, statistics=statistics, structure=structure) # oтправляем задание на расчет и ждем выполнения task_dup = rtask.wait_task(rtask.send_duplication_task(task_json)) # получаем результат df_dup = rtask.result2table(rtask.get_result(task_dup), project_name=project_name) # df_uni df_dup # - # Посчитаем дополнительные отчеты, необходимые для расчета Reach Row% и Reach Column%. # # # ### Задание №2. Расчет Total для проектов # + # задаем название проекта для отображения в DataFrame project_name = 'total_project' # задаем фильтр по медиа media_filter = "site = 16571 OR site = 12808" # задаем фильтр, нас интересуют города России с населением 100 тыс. чел. и больше demo_filter = "CITY_TYPE2 != 4" # задаем список статистик для расчета statistics = ["Reach"] # указываем порядок разбивки по сайтам structure = { "usetype": False, "media": ["site"] } # формируем из заданных параметров задание для Responsum в формате JSON task_json = rtask.build_audience_task(task_name=project_name, facility=facility, date_from=date_from, date_to=date_to, usetype_filter=usetypes, media_filter=media_filter, demo_filter=demo_filter, statistics=statistics, structure=structure) # отправляем задание на расчет и ждем выполнения task_audience = rtask.wait_task(rtask.send_audience_task(task_json)) # получаем результат df_total_prj = rtask.result2table(rtask.get_result(task_audience), project_name=project_name) df_total_prj # - # ### Задание №3. Расчет Total Internet # # Расчет данных по всем проектам займет достаточно много времени у Responsum, поэтому воспользуемся технической страницей WEB-Index, в которой учитывается вся аудитория. # Техническая страница WEB-Index __site_id = 101__ # # Важно учитывать в случае расчета статистики OTS: # - если считать с ограничением по Технической странице в медиа-фильтре, то полученная статистика OTS будет показывать общее количество заходов в Интернет за указанный период; # - если считать без ограничений в медиа-фильтре (media_filter = None), то полученная статистика OTS будет показывать общее количество загрузок страниц интернет-проектов, содержащихся в вашем медиа-каталоге, за указанный период; # - статистика OTS корректна для работы с десктопными данными и данными по рекламным кампаниям. 
# + # задаем название проекта для отображения в DataFrame project_name = 'total' # задаем техническую страницу WEB-Index в качестве медиа фильтра media_filter = 'site = 101' # задаем фильтр, нас интересуют города России с населением 100 тыс. чел. и больше demo_filter = "CITY_TYPE2 != 4" # задаем список статистик для расчета statistics = ["Reach"] # указываем порядок группировки, в нашем случае ее нет structure = { "usetype": False } # формируем из заданных параметров задание для Responsum в формате JSON task_json = rtask.build_audience_task(task_name=project_name, facility=facility, date_from=date_from, date_to=date_to, usetype_filter=usetypes, media_filter=media_filter, demo_filter=demo_filter, statistics=statistics, structure=structure) # отправляем задание на расчет и ждем выполнения task_audience = rtask.wait_task(rtask.send_audience_task(task_json)) # получаем результат df_total = rtask.result2table(rtask.get_result(task_audience), project_name=project_name) df_total # - # ### Формируем итоговую таблицу # # Объединим полученные результаты и посчитаем доли. # # Для этого воспользуемся методом библиотеки Mediascope _calc_duplication_row_col_. # # Для этого метода на вход передаются три DataFrame: # # - данные по пересечению аудитории проектов # - Total для проектов # - Total Internet df_duplication = rtask.calc_duplication_row_col(df_dup, df_total_prj, df_total) df_duplication # #### Округлим полученные значения до второго знака # Воспользуемся стандартным методом библиотеки pandas df_duplication[['rnd_reach', 'rnd_prc_reach']] = df_duplication[['media_stat_reach', 'stat_prc_reach']].round(2) df_duplication # ## Экспорт в Excel writer = pd.ExcelWriter(rtask.get_excel_filename('duplication-VK-OK')) df_info = rtask.get_report_info() df_duplication.to_excel(writer, 'Report') df_info.to_excel(writer, 'Info', index=False) writer.save() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import logging import os from gensim import corpora, utils from gensim.models.wrappers.dtmmodel import DtmModel import numpy as np # if not os.environ.get('DTM_PATH', None): # raise ValueError("SKIP: You need to set the DTM path") # - import pandas as pd from os import listdir from os.path import isfile, join mypath = '/home/avikbasu/WORK/Economics_of_Innovation/Removed_duplicates_timewise' onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))] yearwise_docs = {} for file in onlyfiles: df = pd.read_csv(os.path.join(mypath,file), header=None) yearwise_docs[file] = list(df[10]) # + #remove stopwords import re,nltk from nltk.tokenize import RegexpTokenizer from nltk.stem.porter import PorterStemmer from gensim import corpora, models from nltk.stem.wordnet import WordNetLemmatizer from nltk.corpus import wordnet, stopwords import gensim import os import csv import matplotlib as plt tag_to_type = {'J': wordnet.ADJ, 'V': wordnet.VERB, 'R': wordnet.ADV} def get_wordnet_pos(treebank_tag): return tag_to_type.get(treebank_tag[:1], wordnet.NOUN) ## from def cleanData(doc): shortword = re.compile(r'\W*\b\w{1,3}\b') nonan = re.compile(r'[^a-zA-Z ]') # tokenizer = RegexpTokenizer(r'\w+') stop = stopwords.words('english') + ['paper','system'] lmtzr = WordNetLemmatizer() tag_to_type = {'J': wordnet.ADJ, 'V': wordnet.VERB, 'R': wordnet.ADV} tokens = nltk.word_tokenize(shortword.sub('',nonan.sub('',doc.lower()))) tokens = [token for token in 
tokens if not token in stop] #print tokens tags = nltk.pos_tag(tokens) is_noun = lambda pos: pos[:2] == 'NN' or pos[:2] == 'NNP' or pos[:2] == 'NNS' or pos[:2] == 'NNPS' tokenized = nltk.word_tokenize(lines) nouns = [word for (word, pos) in nltk.pos_tag(tokenized) if is_noun(pos)] #print tags #raw_input() finalTokens = [] for word, tag in zip(tokens, tags): finalTokens.append(lmtzr.lemmatize(word, get_wordnet_pos(tag[1]))) return finalTokens # - clean_yearwise = {} for year in yearwise_docs: print "Working on year {}".format(year) new_docs = [] for doc in yearwise_docs[year]: clean_doc = cleanData(doc) new_docs.append(clean_doc) clean_yearwise[year] = new_docs import pickle as pkl with open('cleaned_yearwise.pkl','wb') as fp: pkl.dump(clean_yearwise, fp) yearwise_docs['2001.csv'] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline from sklearn.pipeline import Pipeline from sklearn.pipeline import FeatureUnion from sklearn.base import clone from sklearn.model_selection import learning_curve from sklearn.feature_selection import SelectFromModel from sklearn.model_selection import cross_validate from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.linear_model import LogisticRegression from xgboost import XGBClassifier import xgboost as xgb TRAIN_DATA_PATH = "Data/train.csv.zip" TEST_DATA_PATH = "Data/test.csv.zip" label_cols = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate'] comment_col = 'comment_text' train = pd.read_csv(TRAIN_DATA_PATH) test = pd.read_csv(TEST_DATA_PATH) COMMENT = 'comment_text' train[COMMENT].fillna("unknown", inplace=True) test[COMMENT].fillna("unknown", inplace=True) xgboost_pipeline = Pipeline([ ('vect', FeatureUnion([ ('word_vect', TfidfVectorizer()), ('char_vect', TfidfVectorizer()) ])), ('selection', SelectFromModel(LogisticRegression(solver = 'sag'))), ('clf', XGBClassifier(max_depth=3, learning_rate=0.1, n_estimators=100, silent=True, objective='binary:logistic', booster='gbtree', n_jobs=1, nthread=None, gamma=0, min_child_weight=1, max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5, random_state=0, seed=None, missing=None)) ]) params = { 'vect__word_vect__sublinear_tf': True, 'vect__word_vect__strip_accents': 'unicode', 'vect__word_vect__analyzer': 'word', 'vect__word_vect__token_pattern': r'\w{1,}', 'vect__word_vect__max_features': 50000, 'vect__word_vect__ngram_range': (1, 2), 'vect__char_vect__sublinear_tf': True, 'vect__char_vect__strip_accents': 'unicode', 'vect__char_vect__analyzer': 'char', 'vect__char_vect__max_features': 50000, 'vect__char_vect__ngram_range': (2, 6), 'selection__threshold': 0.2, 'clf__learning_rate': 0.2 } xgboost_pipeline.set_params(**params) label = 'severe_toxic' print ('Running for ' + label) cv[label] = cross_validate(xgboost_pipeline, train[comment_col], train[label], cv = 5, n_jobs = 3, verbose = 10, scoring = ('accuracy', 'roc_auc', 'neg_log_loss')) print (cv[label]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 # language: python # name: python3 # --- # ## Save ECG records on 16 bits binary representation import numpy as np import wfdb def packBytes(narray): packed = [] buff = '' tmp = 0 for n in range(len(narray)): value = narray[n] #Get 3 most significant bits msb = bin((value >> 8) & 0xFF) if len(msb)<5: #pad with zeroes msb = '0'*(5-len(msb))+msb[2:] else: msb = msb[-3:] #print("parsed: "+msb) if n%2: #Store 3 most significant bits buff = buff + "00" + msb #print(buff) packed.append(int(buff, 2)) packed.append(value) else: #Store 8 less significant bits packed.append(value & 0xFF) #tmp = value >> (8) & 0xff if(n==len(narray)-1): packed.append(int(msb+"00000", 2)) buff = msb #print("buffer without parsing: "+bin((value >> 8) & 0xff)) #print("buffer after parsing: "+buff) return np.array(packed, dtype='uint8') #Save array to binary file def saveBinaryFile(file_name, darray): file_name = file_name + '.txt' fh = open(file_name, "bw") darray.tofile(fh, format="%08b") # + ''' Read ECG record data. Each sample is given in a 'int16' data type holding the corresponding decimal value stored on the 11 bits register of the ADC register. ''' record = 201#221 digital_record = wfdb.rdrecord(str(record), physical=False, return_res=16) #Get np array containing digital signal d_signal = digital_record.d_signal #Choose one of the channels channel = 0 signal = d_signal[:, channel].astype('uint16') #signal = d_signal[:, channel] # - #Compress signal arr = packBytes(signal) #Save binary file saveBinaryFile(str(record)+"C", arr) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: hello_spinetoolbox # language: python # name: hello # --- # + import pandas as pd import sys import numpy as np import matplotlib.pyplot as plt Plants = pd.read_excel(sys.path[0] +"/../data/powerplantsGermany/Kraftwerksliste_2020_1.xlsx", sheet_name="Gesamtkraftwerksliste BNetzA", usecols="A:X", skiprows = "1:10" #index_col="Kraftwerksnummer Bundesnetzagentur"# , parse_dates="Commisionedyear" ) # - # print(Plants.Technology.unique()) # print(Plants.fuel.unique()) # + pycharm={"name": "#%%\n"} print(Plants.head()) # - x = range(2128) plt.scatter(x,Plants["Commisionedyear"]) table = Plants[Plants["Status"]=="in Betrieb"].groupby(['Fuel', 'Commisionedyear']) print(table) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os os.getcwd() os.chdir('C:\\Users\\Admin\\Desktop\\') os.getcwd() import pandas as pd import numpy as np tips = pd.read_csv('tips.csv') tips a=input(int(x)) a pd.read_csv('C:\\Users\\Admin\\Desktop\\tips.csv') type(tips) len(tips) len(tips.columns) tips.head(2) tips.tail(6) tips[['total_bill','sex','smoker','time']] tips[tips['time']=='Dinner'] stats.to_json('stats.json') tips[(tips['time']=='Dinner') & (tips['tip']>5)].tail() len(tips[(tips['size']>5) | (tips['total_bill']>45)]) tips.groupby(['smoker','sex']).size() tips.groupby(['smoker','day','sex']).agg({'tip':np.mean, 'size':np.sum}) df_male=tips[(tips['sex']=='Male')] df_male.groupby(['smoker']).agg({'tip':np.mean}) dplyr stats= pd.read_csv('DemographicData.csv') len(stats) len(stats.columns) stats.info() data=stats.describe() data.to_csv('data.csv') stats stats.columns = ['CountryName', 'CountryCode','BirthRate','InternetUsers','IncomeGroup'] 
stats.rename(columns={'CN':'CountryName'}, inplace=True) stats stats[1:4][['CountryName','BirthRate']] stats[stats['CountryName'].isin(['India','Pakistan'])] stats['MyCalc'] = stats.BirthRate*stats.InternetUsers stats stats.sort_values(['MyCalc'], ascending=1) stats.drop('MyCalc',axis=1,inplace=True) stats raw_data = {'c_id':[1,2,3], 'c_name':['a','b','c']} raw_data1 = {'c_id':[4,5,6], 'c_name':['y','x','z']} a=pd.DataFrame(raw_data) b = pd.DataFrame(raw_data1) a b pd.concat([a,b],axis=0) stats[stats.InternetUsers<2] stats.IncomeGroup.unique() stats.drop(['InternetUsers','IncomeGroup'],axis=1) stats import seaborn as sns sns.lmplot(data=stats, x='InternetUsers', y='BirthRate',fit_reg=False,hue='IncomeGroup') stats[stats.InternetUsers<2] df1 = pd.DataFrame(np.array([['a',5,10],['b',4,61], ['c',24,19]]), columns=['name','attr1','attr2']) df1 df2 = pd.DataFrame(np.array([['a',5,19],['b',4,15], ['c',2,9]]), columns=['name','attr1','attr2']) df2 df3 = pd.DataFrame(np.array([['a',5,20],['b',3,61], ['e',3,19]]), columns=['name','attr1','attr2']) df1 df2 df3 pd.merge(pd.merge(df1, df2, on='name', how='right'), df3, on='name', how='right') df1.merge(df2,on='name').merge(df3, on='name') pd.merge(df1,df2, how='inner', left_on=['name','attr1']) from dfply import * flight_data = pd.read_csv('flight_data.csv') flight_data (flight_data >> select (X.origin, X.dest, X.hour)) (flight_data>> drop(X.year, X.month, X.day)) (flight_data>> select(~X.hour, ~X.minute)) (flight_data>> mask(X.month==1, X.day==1, X.origin=='JFK', X.hour>10)) (flight_data>> arrange(X.distance, X.hour,ascending=False)) (flight_data>> mask(X.origin=='EWR')>> mutate(new_distance=X.distance/1000, carrier_origin = X.carrier + X.origin)) (flight_data>> group_by(X.origin)) (flight_data>> group_by(X.origin)>> summarise(mean_disctance=X.distance.min())) (flight_data>> mask(X.hour>10)>> mutate(speed = X.distance/(X.air_time*60))>> group_by(X.origin)>> summarise(mean_speed=X.speed.mean())>> arrange(X.mean_speed, ascending=False)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + values = df.Prices * df.Amount df['Values'] = values.where(df.Action == 'Sell', other=-values) In [39]: df Out[39]: # + active="" # ''' # Prices Amount Action Values # 0 3 57 Sell 171 # 1 89 42 Sell 3738 # 2 45 70 Buy -3150 # 3 6 43 Sell 258 # 4 60 47 Sell 2820 # 5 19 16 Buy -304 # 6 56 89 Sell 4984 # 7 3 28 Buy -84 # 8 56 69 Sell 3864 # 9 90 49 Buy -4410''' # - order_df['Value'] = order_df.apply(lambda row: (row['Prices']*row['Amount'] if row['Action']=='Sell' else -row['Prices']*row['Amount']), axis=1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 import numpy as np import random import tensorly as tl from tensorly.decomposition import tucker # # Data Ingestion # We read all of the files, and get the number of frames in each one. When reading them as tensors we will truncate to the smallest number of frames. I strived to take videos of the same length (~11s), but small discrepancies are bound to exist. For our particular application, truncation ought not to matter too much. 
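# The truncation rule described above is applied below by passing `max_frames=commute_frames` to the reader. A generic helper like the following sketch (the name `truncate_to_shortest` is ours, not part of the original notebook) makes the "truncate to the smallest number of frames" idea explicit rather than assuming the commute video is the shortest:
# +
# Sketch: cut every list of frames to the length of the shortest one.
def truncate_to_shortest(*frame_lists):
    """Return the input frame lists, each truncated to the shortest length among them."""
    min_frames = min(len(frames) for frames in frame_lists)
    return [frames[:min_frames] for frames in frame_lists]
# -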
# + # Create VideoCapture objects parking_lot = cv2.VideoCapture('parking_lot.MOV') patio = cv2.VideoCapture('patio.MOV') commute = cv2.VideoCapture('commute.MOV') # Get number of frames in each video parking_lot_frames = int(parking_lot.get(cv2.CAP_PROP_FRAME_COUNT)) patio_frames = int(patio.get(cv2.CAP_PROP_FRAME_COUNT)) commute_frames = int(commute.get(cv2.CAP_PROP_FRAME_COUNT)) parking_lot_frames, patio_frames, commute_frames # + # Get dimensions of each frame parking_lot_height = int(parking_lot.get(cv2.CAP_PROP_FRAME_HEIGHT)) parking_lot_width = int(parking_lot.get(cv2.CAP_PROP_FRAME_WIDTH)) patio_height = int(patio.get(cv2.CAP_PROP_FRAME_HEIGHT)) patio_width = int(patio.get(cv2.CAP_PROP_FRAME_WIDTH)) commute_height = int(commute.get(cv2.CAP_PROP_FRAME_HEIGHT)) commute_width = int(commute.get(cv2.CAP_PROP_FRAME_WIDTH)) print(parking_lot_height, parking_lot_width) print(patio_height, patio_width) print(commute_height, commute_width) # - # Based on the number of frames and the dimensions of the frames, we need a 4D tensor (314x1080x1920x3) to hold these videos: # - 314 for the frames of the images (we truncate the extra frames for the patio and parking lot videos) # - 1080x1920 for the height and width of the images # - 3 for the RGB color channels # Create function to read all frames of a video in an array def read_frames(video_capture, max_frames): """ INPUTS: video_capture: an OpenCV VideoCapture object whose frames we want to read max_frames: the maximum number of frames we want to read OUTPUT: array of all the frames until max_frames """ # Initialize empty array frames_array = [] # Keep track of the frame number frame_nb = 0 # iterate through the frames and append them to the array while video_capture.isOpened() and frame_nb < max_frames: ret, frame = video_capture.read() if not ret: break frames_array.append(frame) if cv2.waitKey(1) & 0xFF == ord('q'): break frame_nb += 1 # release the video capture video_capture.release() cv2.destroyAllWindows() # return the array return(frames_array) # Read in all the videos parking_lot_array = read_frames(video_capture=parking_lot, max_frames=commute_frames) patio_array = read_frames(video_capture=patio, max_frames=commute_frames) commute_array = read_frames(video_capture=commute, max_frames=commute_frames) # # Data Manipulation # We create tensors out of the NumPy arrays with the TensorLy library. # Create tensors from matrices parking_lot_tensor = tl.tensor(parking_lot_array) patio_tensor = tl.tensor(patio_array) commute_tensor = tl.tensor(commute_array) # To speed up later steps, we randomly select 50 frames of the tensors to focus on. # Set the seed for reproducibility random.seed(42) random_frames = random.sample(range(0, commute_frames), 50) # Use these random frames to subset the tensors subset_parking_lot = parking_lot_tensor[random_frames,:,:,:] subset_patio = patio_tensor[random_frames,:,:,:] subset_commute = commute_tensor[random_frames, :, :, :] # Convert to double, otherwise Tucker decomposition will not work. # Convert three tensors to double subset_parking_lot = subset_parking_lot.astype('d') subset_patio = subset_patio.astype('d') subset_commute = subset_commute.astype('d') # # Naive Comparison # A natural way of comparing two tensors is to compute the norm of the difference between them. 
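# One caveat worth keeping in mind (our note, not part of the original analysis): the raw norm grows with tensor size and pixel scale, so a scale-free variant is often easier to interpret alongside the absolute values computed below. A small sketch, relying on the `tl` import above:
# +
# Sketch: relative distance between two tensors of equal shape.
def relative_difference(a, b):
    """Norm of the difference, normalised by the average norm of the two inputs."""
    return tl.norm(a - b) / (0.5 * (tl.norm(a) + tl.norm(b)))
# -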
# + # Parking and patio parking_patio_naive_diff = tl.norm(subset_parking_lot - subset_patio) # Parking and commute parking_commute_naive_diff = tl.norm(subset_parking_lot - subset_commute) # Patio and commute patio_commute_naive_diff = tl.norm(subset_patio - subset_commute) # Print our differences print("The difference between parking and patio tensors is {}, {} between parking and commute and {} between patio and commute".format(int(parking_patio_naive_diff), int(parking_commute_naive_diff), int(patio_commute_naive_diff))) # - # # Unsupervised Learning # Now that we have the tensors, we can perform Tucker decomposition to get a more robust representation (using the resulting core tensor). This rids us of noise and we get a better sense of the similarity between two videos. # # The main tuning parameter is the n-rank of the tensor. If we were seeking the optimal decomposition, AIC criterion could be used to choose the best value of the hyperparameter. Nevertheless, in this specific context we are not looking for an optimal setting, rather something that is usable. Besides, we need similar dimensions across tensors to be able to make comparisons. # # For this reason, we chose n-rank of [2,2,2,2] for all tensors and compare the resulting core tensors. Choosing this somewhat small n-rank helps by limiting the computational complexity of our operations (trying out n-rank of [5,5,5,5] will exceed the capabilities of LAPACK, which is used under the hood). # Get core tensor for the parking lot video core_parking_lot, factors_parking_lot = tucker(subset_parking_lot, ranks = [2,2,2,2]) # Get core tensor for the patio video core_patio, factors_patio = tucker(subset_patio, ranks = [2,2,2,2]) # Get core tensor for the commute video core_commute, factors_commute = tucker(subset_commute, ranks = [2,2,2,2]) # Compare core parking lot and patio parking_patio_diff = tl.norm(core_parking_lot - core_patio) int(parking_patio_diff) # Compare core parking lot and commute parking_commute_diff= tl.norm(core_parking_lot - core_commute) int(parking_commute_diff) # Compare core patio and commute patio_commute_diff = tl.norm(core_patio - core_commute) int(patio_commute_diff) # # Conclusion # # Leveraging Tucker decomposition allows us to make robust comparisons between videos by extracting the core tensor, the main information contained in it. # # This has very broad applications (recommender systems, material science) but also needs a lot of computing power, with some potential for parallelization for this to be used at scale. # # For more information on the mathematical underpinning of tensor decomposition as well as broader context on this analysis, please refer to the Medium article linked in the associated README file. 
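# As an aside to the rank discussion above (our addition, not part of the original analysis): instead of a full AIC treatment, a rough data-driven check on the n-rank is to look at the relative reconstruction error for a few candidate ranks. The sketch below keeps the `ranks=` keyword used in this notebook; newer TensorLy releases spell it `rank=` and expect `tucker_to_tensor((core, factors))` as a single tuple.
# +
def tucker_reconstruction_error(tensor, rank):
    """Relative reconstruction error of a Tucker decomposition at the given n-rank (sketch)."""
    core, factors = tucker(tensor, ranks=rank)
    approx = tl.tucker_to_tensor(core, factors)  # newer TensorLy: tl.tucker_to_tensor((core, factors))
    return tl.norm(tensor - approx) / tl.norm(tensor)

# Beware: each call re-runs the decomposition, which is slow on the full-resolution frames.
# for rank in ([2, 2, 2, 2], [3, 3, 3, 3]):
#     print(rank, tucker_reconstruction_error(subset_commute, rank))
# -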
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import sys from os.path import join as oj sys.path.insert(1, oj(sys.path[0], 'train_model')) from train_model import sent_util import torch from torchtext import data, datasets # + # To train model, first run 'train.py' from train_model dir # get model snapshot_dir = 'train_model/results/' snapshot_file = oj(snapshot_dir, 'best_snapshot_devacc_84.9770642201835_devloss_0.5329645872116089_iter_7000_model.pt') model = sent_util.get_model(snapshot_file) # get data inputs, answers, train_iterator, dev_iterator = sent_util.get_sst() # - # Find sentence used in figure 2 batch_nums = list(range(6920)) data = sent_util.get_batches(batch_nums, train_iterator, dev_iterator) for ind in range(6919): text = data[ind].text.data[:, 0] words = [inputs.vocab.itos[i] for i in text] if words[0] == 'it' and words[1] == "'s" and words[2] == 'easy': high_level_comp_ind = ind break # Produce CD importance scores for phrases used in figure 2 pos, pos_irrel = sent_util.CD(data[high_level_comp_ind], model, start = 0, stop = 15) print(' '.join(words[:16]), pos[0] - pos[1]) neg, neg_irrel = sent_util.CD(data[high_level_comp_ind], model, start = 16, stop = 26) print(' '.join(words[16:]), neg[0] - neg[1]) # Sanity check: CD is a decomposition, so an effective way to check for bugs is to verify that the decomposition holds (up to numerical errors) print(pos + pos_irrel) linear_bias = model.hidden_to_label.bias.data.cpu().numpy() print((model(data[high_level_comp_ind]).data.cpu().numpy() - linear_bias)[0]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="0qW5ai19Slq_" colab_type="text" # # **Import and other Reqs** # + id="qpugzmJTKlam" colab_type="code" outputId="26af5e76-db86-48bb-891f-63f1f3150f6f" colab={"base_uri": "https://localhost:8080/", "height": 268} # !pip install wikipedia # + id="b2HO8XOAIBfJ" colab_type="code" colab={} import spacy from spacy import displacy import wikipedia import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # + id="MB9bMM1OIUhX" colab_type="code" colab={} nlp = spacy.load('en') #en for english, fr for french and so on # + id="1CBsTwMOzLG-" colab_type="code" colab={} from spacy.lang.en.stop_words import STOP_WORDS # + id="TDfwhHUNopBe" colab_type="code" colab={} import warnings warnings.filterwarnings("ignore") # + id="o9p_f7ZVJPQt" colab_type="code" outputId="00fe1f44-0271-41df-ae04-b3ae3b614839" colab={"base_uri": "https://localhost:8080/", "height": 54} from google.colab import drive drive.mount('/gdrive') # %cd /gdrive # + id="Y-REEROEeG3b" colab_type="code" colab={} """ !ls #similar to linux ls command %cd .. 
#similar to linux directory navigation %cd 'My Drive/CIS_508/Colab Notebooks/NER' """ # + [markdown] id="ak6P7jDQJmK2" colab_type="text" # # **Read text document** # + id="bKi8A9mPIbjI" colab_type="code" outputId="a65464ca-7701-4eae-83e3-10c74de31b8b" colab={"base_uri": "https://localhost:8080/"} doc = nlp("SpaCY is a cool tool") doc # + [markdown] id="ezcrnqCxJq7f" colab_type="text" # # **Read from a file** # + id="pLQBfUsGIh86" colab_type="code" outputId="fe0885f5-3662-4e9e-e5ff-07ce3547b34e" colab={"base_uri": "https://localhost:8080/", "height": 90} myfile = open('/gdrive/My Drive/CIS_508/Colab Notebooks/NER/trial.txt').read() docfile = nlp(myfile) docfile # + [markdown] id="_6bVfj-5KfEr" colab_type="text" # # **Read from wikipedia** # + id="Ye_GInrDKeuS" colab_type="code" colab={} page = wikipedia.page("Vinodhini") page_url = page.url page_title = page.title content = page.content wikitext = nlp(content) # + [markdown] id="q0_GPDnGNs10" colab_type="text" # # **Explore the basic functions** # + id="2NY56KxiNtAj" colab_type="code" outputId="479b9768-e844-4cb0-c176-ef6a6023f2f0" colab={"base_uri": "https://localhost:8080/"} print(docx) print("\n",docfile) # + id="3tqJpMiyN0z-" colab_type="code" outputId="cd046fea-09e8-499c-c5de-8a276bfd5547" colab={"base_uri": "https://localhost:8080/"} print(docx.text) print(docfile.text) # + [markdown] id="b-V4GJjdJ1Nw" colab_type="text" # # **Tokenization - Sentence Tokens** # + id="OpDuCDY7J0vX" colab_type="code" outputId="bf237dd5-5c58-428d-d2fc-d8976ac99feb" colab={"base_uri": "https://localhost:8080/"} num = 1 for sentence in docfile.sents: print(str(num), ": ", sentence) num += 1 # + id="FOMQqEdQJjwI" colab_type="code" outputId="c97b67a9-2521-431d-f25f-9c5e0d80137e" colab={"base_uri": "https://localhost:8080/"} for num, sentence in enumerate(docfile.sents): #enum starts from 0 print(f'{num}: {sentence}') # + [markdown] id="fQyO5YjkL9pS" colab_type="text" # # **Tokenization - Word Tokens** # + id="fCBw9BvwL4fU" colab_type="code" outputId="cbe5fb06-d1ca-4735-c1e5-3b91a27902fa" colab={"base_uri": "https://localhost:8080/"} for word in docx: print(word.text) #word.text and word are the same here #get a list of word tokens words = [word for word in docx] words # + id="bPVHgjxZMIll" colab_type="code" colab={} for word in docfile: print(word) words = [word.text for word in docfile] words # + id="p-6C66-LMIrl" colab_type="code" colab={} #split using spaces are not exactly same because word tokenizer takes care of the punctuation and other nuances. 
But the basic idea is that docfile.text.split(" ") # + [markdown] id="7mYmlijvSdZo" colab_type="text" # # **Other word functions** # + id="WZlMj4GlMIwx" colab_type="code" colab={} for word in docfile: print(word.text, word.shape_) # + id="W-IUxKXoMIuZ" colab_type="code" outputId="6769f8b2-b12c-4a51-d459-414cd2b23cc4" colab={"base_uri": "https://localhost:8080/"} for word in docx: print(word.text, word.shape_, word.is_alpha, word.is_stop) #word.shape is not human readable format # + [markdown] id="4o6S2N_eU382" colab_type="text" # # **POS Tagging** # + id="O_rCnmt5MIzO" colab_type="code" outputId="70d71ffc-c8e5-4961-b5df-8155e5ba591d" colab={"base_uri": "https://localhost:8080/"} for word in docx: print(word.text, word.pos_) #word.pos is not in human readable form # + id="ER734GPZVgRb" colab_type="code" outputId="f9aa0e9c-d577-4790-a0cd-4db951f1dcd3" colab={"base_uri": "https://localhost:8080/"} print(spacy.explain('DET')) print(spacy.explain('PROPN')) # + id="p6ALETsWVgUF" colab_type="code" outputId="511af52e-ba10-4c61-8552-ea8c2cbc50ad" colab={"base_uri": "https://localhost:8080/"} #Now let us mess with Spacy and see if it is able to tag the infamous sentence buff = nlp('Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo') buff # + [markdown] id="yXw_IyhVWhxM" colab_type="text" # ![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAVgAAAFBCAYAAAAytbcTAAAgAElEQVR4Ae19C8weRfU+EKWIl5iGkqZUo02MyQ8VJBpNFC8xaPL7g9oQElTyNSEkhp9oovEGUloEK3douRUo5VakBUsplDtFaEFoKVbukCLQG/dSEITvK8L888zu2T07776X3W8vs/s+k3zf7rszO3PmmTPPnp2dObOTYSACRIAIEIFSENiplFyZKREgAkSACBgSLJWACBABIlASAiTYkoBltkSACBABEix1gAgQASJQEgKpBHv/uocM/4gBdYA6QB3orgODcHJXgh0d22H4RwyoA9QB6kCnDuDBM0ggwfJBwgcpdYA6kFEHSLAZAeNTuvMpTUyICXUgXQcKJViOw3QfhyE2xIY6MBw6oB82hRPsIOMNTEMEiAARaCMCIFQSbBtblnUiAkSgdgRIsLU3AQUgAkSgrQiQYNvasqwXESACtSNAgq29CSgAESACbUWABNvWlmW9iA) # + id="TchFxLwLVgWm" colab_type="code" outputId="77b33f20-50fa-4e2f-cc8d-6a7c4342adf9" colab={"base_uri": "https://localhost:8080/"} for word in buff: print(word.text, word.pos_) # Ofcourse it doesn't tag it right, because it is a very complicated sentence # + id="Vdry4zEuVgaA" colab_type="code" outputId="f6ed55f0-0346-43df-bbd6-f47e85d390b5" colab={"base_uri": "https://localhost:8080/"} for word in nlp('I fish a fish'): print(word.text, word.pos_, word.tag_) #spacy got this sentence right # + id="5NLTWtN8MI3h" colab_type="code" outputId="6bad461e-8c5e-451c-a084-9774f63f845c" colab={"base_uri": "https://localhost:8080/"} spacy.explain('VBP') # + id="hY9NgAduXScx" colab_type="code" outputId="01c55ea5-d925-4a59-e9fe-ede91f183a1b" colab={"base_uri": "https://localhost:8080/"} for word in nlp('All the faith I had had no effect on the outcome of my life'): print((word.text, word.pos_, word.tag_)) # + id="DpDnFoi4XSfR" colab_type="code" outputId="e440e58d-774e-4f59-e373-a76c6b4d684e" colab={"base_uri": "https://localhost:8080/"} print(spacy.explain('VBD')) print(spacy.explain('VBN')) # + [markdown] id="-xzDiRX2Zs7Y" colab_type="text" # # **Syntactical Dependency** # + id="cBbbGAViXSiM" colab_type="code" outputId="8e1d2f49-c84c-40e5-db12-2254bd3cda5a" colab={"base_uri": "https://localhost:8080/"} for word in nlp(' Rose'): print(word.text, word.pos_, word.tag_, word.dep_) # + id="bx1bBK0oXSkP" colab_type="code" outputId="57eb2564-3ec4-495b-98e9-60ea449d1171" colab={"base_uri": "https://localhost:8080/"} spacy.explain('nsubj') # + id="RZQDMuFfXSmy" colab_type="code" outputId="a79b76a6-9ca7-43d3-da08-4dbfae1c35d9" colab={"base_uri": "https://localhost:8080/"} 
displacy.render(nlp('Jack likes Rose'), style = 'dep', jupyter = True) # + [markdown] id="e1JPY204aG20" colab_type="text" # # **Lemmatization** # # Stemming just cuts the suffix and prefix wheras lemmatization turns words into meaningful root words # + id="zE_HRonTaKHp" colab_type="code" outputId="8a595a37-e819-4cdd-a71a-a3f402c13ff4" colab={"base_uri": "https://localhost:8080/"} doc = nlp("Study studying studious studio student sturdy") for word in doc: print(word.text, word.lemma_, word.pos_) # + id="zGTblzTkaKMq" colab_type="code" outputId="91f3de35-4f63-474a-96f2-a1ab3022ccce" colab={"base_uri": "https://localhost:8080/"} doc = nlp("She walks regularly. walking walks walked walker walkman was were be is") for word in doc: print(word.text, word.lemma_, word.pos_) #she walks (walks) is identified as verb wheras the second walks is identified as noun. This is really good # + id="fBL7fjf8aKPg" colab_type="code" colab={} # + [markdown] id="Rx1ovjHvcD4I" colab_type="text" # # **Named Entity Recognition** # + id="3jhF3suqaKR6" colab_type="code" outputId="1b068700-f733-415e-f557-8b4bdb3ef898" colab={"base_uri": "https://localhost:8080/"} for word in docfile.ents: print(word.text, word.label_) print("\n",docfile) # + id="ttPZ1QaBaKUg" colab_type="code" outputId="c5eab96d-a357-4220-e902-ccd6d833c328" colab={"base_uri": "https://localhost:8080/"} d = nlp("Facebook, America, Explosion.ai") print(d) for word in d.ents: print(word.text, word.label_) # + id="VKLHvAaWaKYn" colab_type="code" outputId="c7b94eed-949a-47af-8d0c-67c31a9ce418" colab={"base_uri": "https://localhost:8080/"} displacy.render(docfile, style = 'ent', jupyter = True) # + [markdown] id="vYB2O6Z3eA-5" colab_type="text" # # **Visualizing with Displacy** # + id="7QkHI2EKeGt7" colab_type="code" outputId="cfbf5ff7-43f9-445d-b4af-ec7a982931fb" colab={"base_uri": "https://localhost:8080/"} displacy.render(docfile, style = 'ent', options = options, jupyter = True) #doesn't change for text display # + id="NV8PSWLceGwm" colab_type="code" colab={} options = {'compact':True, 'bg':'cornflowerblue', 'color':'#fff', 'font': 'Verdana'} # + id="lrH6Nys9aKW-" colab_type="code" outputId="d3161cd9-ad82-41ba-954f-a5b6746c4867" colab={"base_uri": "https://localhost:8080/"} displacy.render(docfile, style = 'dep', options = options, jupyter = True) #works well for pictorial display # + id="MLDQDhoFeGy4" colab_type="code" colab={} svg = displacy.render(docfile, style = 'dep', options = options, jupyter = False) with open('/gdrive/My Drive/CIS_508/Colab Notebooks/NER/Dependency.svg', 'w', encoding = 'utf-8') as f: f.write(svg) # + [markdown] id="SB40Nx_riS0o" colab_type="text" # # **Semantic Similarity** # # Use case: Recommendation systems # + id="iTt9QaHweG5Z" colab_type="code" colab={} dog = nlp("dog") wolf = nlp("wolf") elephant = nlp("elephant") cat = nlp("cat") # + id="bo7SjozIiSLY" colab_type="code" outputId="122ab031-6ccd-467e-a254-b43cc33204e0" colab={"base_uri": "https://localhost:8080/"} dog.similarity(cat) #higher the value more the similarity # + id="sMaaSH3YiSOM" colab_type="code" outputId="561f8a4a-fdd6-4845-c706-6f04044c16c3" colab={"base_uri": "https://localhost:8080/"} dog.similarity(wolf) # + id="IEkpoLy6iSVk" colab_type="code" outputId="0c3f436b-68f2-4edd-9fa7-a59ad8fa91ac" colab={"base_uri": "https://localhost:8080/"} dog.similarity(elephant) # + id="YnrDj2mRiSRA" colab_type="code" colab={} smart = nlp("smart") clever = nlp("clever") foolish = nlp("foolish") # + id="6u4vIbjmeG2a" colab_type="code" 
outputId="a2cf3853-2552-4e64-a606-cee6b3a30d09" colab={"base_uri": "https://localhost:8080/"} smart.similarity(clever) # + id="zRoqb5ExaKKN" colab_type="code" outputId="4b2bee23-fe92-49c6-e3de-7b0c8b198fbf" colab={"base_uri": "https://localhost:8080/"} clever.similarity(foolish) #weirdly clever and foolish are more similar than smart and clever. # + id="ug7K1mClnj_h" colab_type="code" outputId="dfe42550-0095-4da5-b92f-7f194fdda4fc" colab={"base_uri": "https://localhost:8080/"} data = nlp("mom dad cat dog") for i in data: for j in data: print((i, j, i.similarity(j))) # + id="AgVh0hQ6vKfw" colab_type="code" outputId="31956c61-0475-4653-9827-cc57b6546b1a" colab={"base_uri": "https://localhost:8080/"} data_sim = [(i.text, j.text, i.similarity(j)) for j in data for i in data] type(data_sim[0][0]) # + id="gMFqKRDrnkCN" colab_type="code" outputId="ad8f8f9f-1aaf-4ee9-bc51-46634a374db9" colab={"base_uri": "https://localhost:8080/"} df = pd.DataFrame(data_sim, columns = ['Token 1', 'Token 2', 'Similarity']) type(df['Token 1'][0]) # + id="dqeJ8xXXvm3h" colab_type="code" outputId="c68369e8-6939-4318-a6c8-2930218ef66b" colab={"base_uri": "https://localhost:8080/"} df_viz = df.replace({'mom':0, 'dad':1, 'cat':2, 'dog':3}) df_viz # + id="uO0erp1hnkEm" colab_type="code" outputId="24344aaa-b0c0-4765-f218-26a1aa1b9785" colab={"base_uri": "https://localhost:8080/"} fig, ax = plt.subplots() sns.heatmap(df_viz.corr(), annot = True) plt.show() #this makes no sense to me # + id="kd60GHf8xwrf" colab_type="code" outputId="7d497abd-0f23-4a17-a0aa-2dae72fa980d" colab={"base_uri": "https://localhost:8080/"} fig, ax = plt.subplots() sns.heatmap(df_viz, annot = True) plt.show() # + [markdown] id="WqAm5_ddx-Bv" colab_type="text" # # **StopWords Processing** # + id="1i6tznZ2x9Bj" colab_type="code" outputId="4e822b0a-9fa3-4f4d-bdf7-6c1909906f95" colab={"base_uri": "https://localhost:8080/"} print(STOP_WORDS) # + id="c0-kp7OVx9Gt" colab_type="code" outputId="4c189ca1-61fb-47f5-929e-a40b47ca9a8c" colab={"base_uri": "https://localhost:8080/"} for word in docx: print(word.text, word.is_stop) # + id="VDFnOtcfx9JG" colab_type="code" outputId="00ee34f9-138c-4f47-862e-7cfacb6a17a4" colab={"base_uri": "https://localhost:8080/"} nlp.vocab["Hamburger"].is_stop # + id="tFHy2d0Kx9NO" colab_type="code" outputId="c152174f-54e2-489b-c39e-cae323ce54b1" colab={"base_uri": "https://localhost:8080/"} for word in docx: if word.is_stop: print(word.text) # + id="NW-8bNStx9R0" colab_type="code" outputId="ac26e396-1322-41dc-e96c-8b71260092f1" colab={"base_uri": "https://localhost:8080/"} non_stop_words = [ word for word in docx if word.is_stop == False] non_stop_words # + id="zVl-KWy1x9Ya" colab_type="code" outputId="bcc89f94-4d56-41f6-e7eb-9d5868f87d69" colab={"base_uri": "https://localhost:8080/"} lol = nlp("lol") for word in lol: print(word.is_stop) # + [markdown] id="05usXAgB09Xd" colab_type="text" # Add new stop words to the list # + id="8k5I3eJ43eMp" colab_type="code" outputId="40a3e1ce-f331-4755-ff1b-1ac7cda20521" colab={"base_uri": "https://localhost:8080/"} #In normal spacy vocabulary for enlish language London and cousins are not stop_words for word in docfile: if word.is_stop: print((word.text, word.is_stop)) # + id="gtAO8oDbx9Uu" colab_type="code" colab={} custom_stop_words = ["London", "cousins"] # + id="Ra06SB-o3s0W" colab_type="code" colab={} #we have manually over-written the is_stop for those custom stop words for word in custom_stop_words: nlp.vocab[word].is_stop = True # + id="9vtvByYWx9L5" colab_type="code" 
outputId="d6501486-b97e-48fe-c422-13ccf3a53a80" colab={"base_uri": "https://localhost:8080/"} #now London and cousins appear in the stop words list for word in docfile: if word.is_stop: print((word.text, word.is_stop)) # + id="9OO6QOD-x9FU" colab_type="code" colab={} #reset London and cousins back for word in custom_stop_words: nlp.vocab[word].is_stop = False # + [markdown] id="i-eoStWU5quE" colab_type="text" # # **Noun Chunks** # + id="AqA4Wi1047mF" colab_type="code" colab={} doc = nlp("The man, although he had good looks, kept sulking about his fat body") # + id="DmH448ab47on" colab_type="code" outputId="a33ec1a6-e16b-410b-caee-447fdc6d210b" colab={"base_uri": "https://localhost:8080/"} #retrieve chunks of nouns like below for token in doc.noun_chunks: print(token.text) # + id="NS8f6EV447vy" colab_type="code" outputId="716a8ec8-6025-4ea2-8505-c35271cc575e" colab={"base_uri": "https://localhost:8080/"} #get to the root word of the chunks for token in doc.noun_chunks: print(token.root.text) # + id="LnPDUxj047tq" colab_type="code" outputId="044a8c78-8397-43af-93b0-bbe17c3ed7a9" colab={"base_uri": "https://localhost:8080/"} #head of the nouns for token in doc.noun_chunks: print(token.root.text, "..", token.root.head.text) # + [markdown] id="bLuii2Ds7s1f" colab_type="text" # # **Sentence Boundary detection** # + id="rbcta0NU47rK" colab_type="code" colab={} def mycustom_boundary(docx): for token in docx[:-1]: if (token.text == '---'): docx[token.i+1].is_sent_start = True return docx # + id="6pElsRpu7ySV" colab_type="code" colab={} nlp.add_pipe(mycustom_boundary, before='parser') # + id="wr17TmoY7yUx" colab_type="code" colab={} #figure out what is wrong my_sent = nlp(u"Helloooo --- what are you --- doing") for s in my_sent.sents: print(s) # + id="TyDQi7Ax7yY6" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Section I. INTRODUCTION # # # Chapter 1. What is Robotics? # What is a robot? # ---------------- # # It might come as a surprise that it is actually tricky to define the # word "robot." Contrast the idea of a science fiction android with a # remote control flying drone. The android (appears) to think, feel, and # move with the intellect of a human, while the drone simply executes the # command of its operator. Yet, most people are in agreement that they can # be rightfully called "robots." # # There are some *necessary* conditions for something to be a robot. A # robot is a machine that: # # - Moves in the physical world, # # - Senses its environment and/or itself, and # # - Uses computational elements to generate its movement. # # However, these are not *sufficient* for qualifying a machine as a robot. # For example, your washing machine (if built in the last 10 years) will # likely have all of these characteristics. A washing machine certainly # moves; it senses the settings of its controls, the weight of your # laundry, the incoming water temperature, possibly the dirtiness of your # laundry, and so on; and will adjust its cycle speeds and duration # accordingly using computation circuits. But there seems to be something # missing here, since few people would refer to their washing machine as a # robot! 
Similarly, your (relatively modern) car performs fuel injection, # anti-lock braking, cruise control, and airbag deployment using several # computers and sensors to monitor fuel efficiency, tire pressure, # velocity, seat occupancy, etc. Yet, we are not ready to call it a robot # just yet[1]. # # # Let us propose a couple of other possible criteria to call a system a # robot. What about if a robot were required to: # # - Exhibit autonomy or automation, and # # - Exhibit apparently intelligent behavior? # # These provide a sharper boundary, as they would disqualify the washing # machine and non-self-driving car from consideration as robots. But, # there exist many robots that are not autonomous, such as the # remote-controlled drone mentioned above, or surgical robots under # surgeon control, like the Intuitive Surgical Da Vinci robot. # # An intelligence criterion is also difficult to apply because since it is # challenging to define "intelligence" without delving into a # philosophical minefield! By using the phrase "apparently intelligent" we # sidestep the issue by assuming a human judge. But what is agreed upon as # "intelligent" may change from year to year; compared to those devices in # the 1950's, our modern washing machines and cars are actually quite # smart! Perhaps as the control and artificial intelligence technology # used in robots becomes more widely adopted, the line dividing robot and # non-robot machines will become blurrier and blurrier... until the term # "robot" has lost its meaning. # # Overall, it may be a pointless exercise to extract a precise definition # of robot. In any case, the layperson's "I know a robot when I see one" # should suffice for the purposes of this book. # # ------------------------------------------------------- # # [1]: Presumably, by time of publication, self-driving cars are not yet widely commercially available. # # # How to Develop a Robot # ---------------------- # # A roboticist is a thinker, not a tinkerer. Although many students begin # tinkering with robots at a relatively young age, this process is not # usually the best way to develop a robot that performs a task well. # Robotics is a more *deliberate* way of reaching the end goal that is # informed through decades of prior research, analysis, and practical # experience. One way to define it would be as follows: # # > **Robotics**: the study of systematic, principled techniques to aid in # > the development of robots that perform desired functions. # # Although robots are some of the most complex machines in the world, # there is a logic to how they should be developed. A good roboticist will # follow this logic whilst using any available techniques at his/her # disposal. # # Specifically, the recipe for developing an intelligent robot must follow # these major steps: # # 1. **Fabrication**: Design and fabricate a mechanism with sensing, actuation, and # computing capabilities to fulfill the intended task. # # 2. **Measurement**: Develop measurement apparatus(es) and a testing protocol to observe # the function of the mechanism # # 3. **Calibration**: Use measurements to calibrate (or learn) a model of the mechanism's # dynamics, actuation, and sensing. # # 4. **Control**: Develop and validate a control subsystem that maintains system # stability and provides medium-level functionality for planning. # # 5. **Knowledge representation**: Decide upon a knowledge representation to be shared between planning # and perception subsystems. # # 6. 
**Perception**: Develop and evaluate a perception subsystem that produces knowledge # relevant to the task (robot state, maps, object identities) # # 7. **Planning**: Implement and test a planning subsystem that generates feasible # trajectories to accomplish high-level tasks. # # 8. **Supervisor**: Develop a high-level supervisor that schedules tasks, provides a # user interface, etc. # # 9. **Testing and evaluation**: Test the entire system in the field (on the real task in the real # world, and perhaps with human end-users). # # It is important to note that these steps are almost never followed # linearly, because robot design is a cyclic process of trial and error. # Any robotics project will incur many, many *design cycles* over its # lifetime. For example, unless the team is limited to an established # robot platform, or purchasing off-the-shelf parts, redesigns of the # mechanism usually occur after steps 3, 4, and 6. Mechanical redesigns # may also occur after planning tests to make task execution more # successful. On the software side, new knowledge, perception, and # planning requirements are bound to arise as tasks are tested more # thoroughly. After testing with end-users, it is not uncommon to go all # the way "back to the drawing board" to build a new mechanism! A wise # roboticist will develop their later components to rapidly accommodate # minor mechanical changes. # # Buying an established robot platform can greatly speed up development # time by shortcutting steps 1-4, but many vendors sell fairly raw # hardware (requiring a rehash of steps 2-4). Also, there may be # locked-in decisions that prevent certain functionality to be implemented # later. To use a robot as a haptic device, make sure its motor # controllers provide high-rate feedback and a force control mode! To have # your robot navigate outdoors, make sure that it has a laser sensor or # stereo vision rather than a structured light sensor! This makes it very # important to examine the technical specs of a robot — even the most # mundane details — with a fine-toothed comb before making a purchase. # # The theory, mathematics, and algorithms that are discussed in this book # are designed to facilitate the development of functional robots. For # example, it is unnecessary to go to "square one" for every component of # a robot, as long as that component is deeply understood. A good # roboticist will understand which off-the-shelf techniques apply, and # where. However, there is no substitute for real-world testing! # # Testing is one of the most painful parts of robotics, but ultimately one # of its most satisfying. Although a given technique might be # theoretically beautiful, it will have been largely tested in the lab, # making certain idealizations of the real world, or for a slightly # different use case. The bulk of time spent doing robotics work is # usually tweaking, tuning, and testing these tried-and-true techniques # until they fit for the problem at hand. But once the technique is # validated by thorough field testing, its merits are unimpeachable! # # In summary, a good roboticist: # # - Understands the robot development process. # # - "Minds the gaps:" understands how their sub-system works with other # components of the project. # # - Is not afraid of testing. # # - Clearly communicates the assumptions, performance measures, and # limitations of their sub-system. 
# # - Understands the assumptions made by any technique before employing # it successfully in practice — or making the tweaks necessary to # make it work. # # - Is aware of classical and state-of-the-art techniques. # # - Is up-to-date on current design tools, hardware, software, # algorithms, and programming languages used in robotics. # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import xarray as xr import numpy as np import regionmask import geopandas as gpd import pandas as pd import matplotlib.pyplot as plt # + # borrows heavily from http://www.matteodefelice.name/post/aggregating-gridded-data/ # attempt to apply a mask to the data to generate data suitable for post-processing (country by country mean etc) # - # Load the shapefile PATH_TO_SHAPEFILE = 'shapefiles/ref-nuts-2021-10m.shp/NUTS_RG_10M_2021_4326_LEVL_0.shp/NUTS_RG_10M_2021_4326_LEVL_0.shp' nuts = gpd.read_file(PATH_TO_SHAPEFILE) # + # ozone data in regular lat/lon grid # - o3 = xr.open_dataset('cmip6_data/u-be647_ann_mean_surface_level_o3.nc') # roll longitude from 0-360 to -180 to 180 o3 = o3.assign_coords(longitude=(((o3.longitude + 180) % 360) - 180)).sortby('longitude') RMM_AIR = 28.97 # g mol-1 RMM_OZONE = 47.997 # g mol-1 MMR_to_PPBV = RMM_AIR/RMM_OZONE*1e9 o3.o3.values = o3.o3.values*MMR_to_PPBV # CALCULATE MASK nuts_mask_poly = regionmask.Regions_cls(name = 'nuts_mask', numbers = list(range(0,37)), names = list(nuts.NUTS_ID), abbrevs = list(nuts.NUTS_ID), outlines = list(nuts.geometry.values[i] for i in range(0,37))) mask = nuts_mask_poly.mask(o3.isel(time = 0).sel(latitude = slice(32, 75), longitude = slice(-30, 50)), lat_name='latitude', lon_name='longitude') ID_REGION = 8 print(nuts.NUTS_ID[ID_REGION]) lat = mask.latitude.values lon = mask.longitude.values sel_mask = mask.where(mask == ID_REGION).values id_lon = lon[np.where(~np.all(np.isnan(sel_mask), axis=0))] id_lat = lat[np.where(~np.all(np.isnan(sel_mask), axis=1))] data_to_plot = o3.sel(latitude = slice(id_lat[0], id_lat[-1]), longitude = slice(id_lon[0], id_lon[-1])).compute().where(mask == region) from mpl_toolkits.axes_grid1 import make_axes_locatable plt.figure(figsize=(12,5.5), dpi=200) ax = plt.subplot(1,2,1) (o3_uk.o3.isel(time = 0)).plot(ax = ax, cmap=plt.get_cmap('Reds'), vmin=0, vmax=65, levels=14,cbar_kwargs={'label': 'Surface O3 / ppbv'}) nuts.plot(ax = ax, alpha = 0.3, facecolor = 'none') plt.title('Surface O3 over UK in ppbv 2015') plt.legend() ax.set_ylabel('Latitude') ax.set_ylabel('Longitude') ax = plt.subplot(1,2,2) (o3_uk.o3.isel(time = 84)-o3_uk.o3.isel(time = 0)).plot(ax = ax, cmap=plt.get_cmap('bwr'), vmin=-10, vmax=10, levels=21, cbar_kwargs={'label': 'Difference in O3 / ppbv'}) nuts.plot(ax = ax, alpha = 0.3, facecolor = 'none') ax.set_ylabel('Latitude') ax.set_ylabel('Longitude') plt.title('Change in O3 over UK in ppbv 2015-2100') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.2 64-bit (''ocr-uva'': conda)' # name: python392jvsc74a57bd0cc29f658ddb1b0f0a648f4c47acf5938bc6d1ad3f68ae93354e191176a755a49 # --- # + from gensim.models import Word2Vec import os import re import string from nltk.tokenize import RegexpTokenizer # + # Load data newseye_path = os.path.join('..', 'data', 'newseye') icdar_2017_1_path = 
os.path.join(newseye_path, '2017', 'full', 'eng_monograph') icdar_2017_2_path = os.path.join(newseye_path, '2017', 'full', 'eng_periodical') icdar_2019_path = os.path.join(newseye_path, '2019', 'full', 'EN') # - tokenizer = RegexpTokenizer(r'\w+') # + documents = [] for icdar_path in [icdar_2017_1_path, icdar_2017_2_path, icdar_2019_path]: for filename in os.listdir(icdar_path): file_path = os.path.join(icdar_path, filename) with open(file_path, 'r', encoding='utf-8') as text_file: file_lines = text_file.readlines() gt_line = file_lines[2] processed_line = gt_line.replace('[ GS_aligned]', '').replace('#', '').replace('@', '') text_nonum = re.sub(r'\d+', '', processed_line) text_nopunct = "".join([char.lower() for char in text_nonum if char not in string.punctuation]) text_no_doublespace = re.sub('\s+', ' ', text_nopunct).strip() result = tokenizer.tokenize(text_no_doublespace) documents.append(result) # - model_path = 'gensim_default_eng.model' def load_model(): if not os.path.exists(model_path): return None model = Word2Vec.load(model_path) return model # + # TRAIN def create_model(corpus): model = Word2Vec(vector_size=300, window=5, min_count=5, workers=2) model.build_vocab(corpus, progress_per=10000) model.train(corpus, total_examples=model.corpus_count, epochs=300, report_delay=1) model.save() return model # - model = load_model() if model is None: print('Model is not loaded. Creating and training now...') model = create_model(documents) words = ['man', 'new', 'time', 'day', 'good', 'old', 'little', 'one', 'two', 'three'] for word in words: print(f'-- \'{word}\':') print(model.wv.most_similar(positive=[word])) # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.6.1 # language: julia # name: julia-1.6 # --- # + #using ReadWriteDlm2 #using Pkg #Pkg.add("Dierckx") # - # + using DifferentialEquations using Plots; pyplot() using DelimitedFiles #https://docs.julialang.org/en/v1/stdlib/DelimitedFiles/ using Interpolations using Dierckx # - # #ToDo # # * include fundamnetal constants as module # # module myconstants # include("myconstants.jl") # end # # using myconstants # print(myconstants.CGS_C) # # * numerical derivative of array to calculate gamma # * root finding # * time everything # + eos_tablefile = "./eos_tables/eosDD2.lorene" eos_table = readdlm(eos_tablefile,Float64, skipstart=10, comments=true, comment_char='#') #, delim::AbstractChar, Float64 , eol::AbstractChar; header=false, skipblanks=true, use_mmap, quotes=true, dims) index = eos_table[:,1,1,1] rho = eos_table[:,2,1,1] edens = eos_table[:,3,1,1] pres = eos_table[:,4,1,1] rho_scaling = 3.2202e+15 edens_scale = lgrho = log10.(rho) lgedens = log10.(edens) lgpres = log10.(pres) #lgrhomin = Float64() #lgrhomax = Float64() rho_new = minimum(lgrho):0.1:maximum(lgrho) print(rho_new) rho2pres = LinearInterpolation(lgrho, lgpres) #rho2pres = CubicSplineInterpolation(lgrho, lgpres; bc=Line(OnGrid())) rho2edens = LinearInterpolation(lgrho, lgedens) pres2edens = LinearInterpolation(lgpres,lgedens) pres2rho = LinearInterpolation(lgpres,lgrho) lgpres_new = [rho2pres(x) for x in rho_new] lgedens_new = [rho2edens(x) for x in rho_new] scatter(rho,edens,markersize=3) plot!(10 .^ rho_new, 10 .^ lgedens_new,markersize=1, xaxis=:log, yaxis=:log,xlabel="rho",ylabel="edens",lw=1,title="INTERPOLATION SUCCESSFUL!") #,color=[:red,:green]) gr() scatter!(rho,pres,markersize=2) plot!(10 .^ rho_new, 10 .^ 
lgpres_new, xaxis=:log, yaxis=:log,xlabel="rho",ylabel="pres",lw=1,title="pres") #plot(pln_edens, pln_pres, layout = (1, 2), legend = false) #pl_edens = plot!(10 .^ rho_new, 10 .^ lgedens_new) #pl_pres = plot!(10 .^ rho_new, 10 .^ lgpres_new) #plot!(pln_edens, pln_pres, layout = (1, 2), legend = false) # - # + function Λ(C, Y) # """ Compute the dimensionless tidal deformability parameter Lambda from the compactness C and # the Lindblom y-potential at the surface of a polytropic star""" # Eq.(C1,C2) of Lindblom & Indik 2014 zeta = 4 * C^3 * (13 - 11 * Y + C*(3*Y-2) + 2*(C^2)*(1+Y)) + 3 * ((1-2*C)^2) * (2 - Y + 2*C*(Y-1)) * log(1-2*C) + 2 * C * (6 - 3*Y + 3*C*(5*Y-8)) Lambda_dimensionless = (16/(15*zeta)) * ((1-2*C)^2) * (2 + 2*C*(Y-1) - Y) #dimensionless tidal deformability #lambda_dimensional = Lambda_dimensionless/const.CGS_G *(const.CGS_G*m*const.CGS_MSUN/const.CGS_C**2)**5 end #= function tab = LoadEOSTab(fname) % Load EOS table % Format (ASCII, 4 cols): % % # % # % # % nlines % # % # % # % index n[1/cm^3] e[g/cm^3] p[dyn/cm^2] -- convert them to G=c=M_⊙=1 unit % fid = fopen(fname,'r'); col = textscan(fid,'%n%n%n%n','Headerlines',7); fclose(fid); # %Length_fm = 1.476701332464468e+18; % cm # %rahul: New Value %rahul: old value units.vol = 3.219379180793e15; %3.2202e+15; this is (GM/c^2)^3 units.pres = 5.548820759138184e+38; units.mden = 6.176656704073912e17; %6.173895728686583e+17; units.vol/units.Msun units.mB = 1.66e-24 %1.675e-24; units.Msun = 1.9885e33 %1.9889e+33; tab.rho = col{2} * units.mB / units.mden; # %tab.lgn = log10( col{2} * units.vol ); tab.lgr = log10( tab.rho ); # %tab.lgr = log10( col{2} * units.mB * units.vol / units.mden ); % rho = mB n % ! tab.lge = log10( col{3} / units.mden ); tab.lgp = log10( col{4} / units.pres ); =# @time print(Λ(0.3,1000.)) # + ## in the unit, G=c=M⊙=1 # interpolating EoS table -- https://discourse.julialang.org/t/using-interpolations/23933/32 function tov!(du,u,p,r) #G=1 mg = u[1] pres = u[2] #u[3] is not used in the differential system anywhere. 
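# Note on the state vector: u = [m_grav, p, m_baryon, y].
# du[1] and du[2] below are the TOV structure equations dm/dr = 4π r² ε and
# dp/dr = -(ε + p)(m + 4π r³ p) / (r (r - 2m)) in G = c = M⊙ = 1 units,
# du[3] is the corresponding baryon-mass equation, and du[4] is the Lindblom
# y-potential ODE whose surface value feeds Λ(C, Y) defined above.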
yp = u[4] eden = 10^pres2edens(log10(pres)) rho = 10^pres2rho(log10(pres)) eos_gamma = 2.1 du[1] = 4.0*pi* r^2 *eden #dm_grav/dr du[2] = -(eden + pres)*(mg + 4.0*pi*r^3*pres) / (r*(r - 2.0*mg)) #dpres/dr du[3] = du[1]*(1-2*mg/r)^(-0.5) #dm_baryon/dr du[4] = -yp^2/r -(r + 4*pi*r^3*(pres-rho))*yp/(r*(r-2*u[1])) + (4*(u[1]+4*pi*r^3*pres)^2)/(r*(r-2*u[1])^2) + 6/(r-2*u[1]) - 4*pi*(r^2)*(5*rho+9*pres+(rho+pres)^2/(pres*eos_gamma))/((r-2*u[1])) end rhoc = 8 Δr = 1.e-7 mg0 = 4/3 * π * Δr^3 * rhoc p0 = 10^rho2pres(log10(rhoc)) ## get pressure at rhoc using EOS mb0 = mg0/(Δr-2mg0) y0 = 2 u0 = [mg0;p0;mb0;y0] rspan = (1.e-7,20.0) @time prob = ODEProblem(tov!,u0,rspan) sol = solve(prob) plot(sol,vars=(2,3)) # - # + tov_filename = "./matlab_Sebastiano/tov/Sequences/Stable/DD2_sequence.txt" tovseq = readdlm(tov_filename,Float64, skipstart=0, comments=true, comment_char='#') #, delim::AbstractChar, T::Type, eol::AbstractChar; header=false, skipstart=0, skipblanks=true, use_mmap, quotes=true, dims, comments=false, comment_char='#') #"APR4_data.out" #print(tovseq) # - # ## Practice with several julia commands # + using Interpolations using Plots; pyplot() #default(show = true) xs = 1:1:10 f(x) = log(x) A = [f(x) for x in xs] interp_linear = LinearInterpolation(xs, A) #etp = CubicSplineInterpolation(xs, A; bc=Line(OnGrid()), extrapolation_bc=Throw()) #print(etp(271.8281),'\n') print(interp_linear(2.718281),'\n') # exactly log(3) print(interp_linear(3.1),'\n') # approximately log(3.1) #interp_linear(0.9) # outside grid: error interp_linear_extrap = LinearInterpolation(xs, A,extrapolation_bc=Line()) print(interp_linear_extrap(0.9)) # outside grid: linear extrapolation xnew = 1:0.2:10 Anew = [interp_linear(x) for x in xnew] print('\n',Anew) plot(xnew,Anew) plot!(xs,A,seriestype = :scatter, title = "My Scatter Plot") # - # + using DifferentialEquations using Plots function lorenz!(du,u,p,t) du[1] = 10.0*(u[2]-u[1]) du[2] = u[1]*(68.0-u[3]) - u[2] du[3] = u[1]*u[2] - (8/3)*u[3] end u0 = [1.0;2.0;10.0] tspan = (0.0,100.0) @time prob = ODEProblem(lorenz!,u0,tspan) sol = solve(prob) #plot(sol,vars=(2,3)) plot(sol,vars=(1,2,3)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Final Micro Project # The time has come to apply what you have learned throughout the course by doing a mini project. # You have two options now. # # 1. Choose from our list of projects # - [Meteorite Landings](#Meteorite-Landings) # - [Significant Earthquakes](#Significant-Earthquakes) # - [World Ocean Atlas](#World-Ocean-Atlas) # - [Arctic Sea Ice](#Arctic-Sea-Ice) # - [Other](#Other-Data) # # 2. Use your own data # # - If you find our ideas terrible or you are eager to start using Python in your work, you can use your own data. # - The only requirement is that data should be in a format that we have used in this course, preferably **CSV/ASCII** or **netCDF** file. # - We might not be very helpful, but at least we can help you get started and/or point you to a relevant resource. # ## Meteorite Landings # NASA has a great [archive of open data](https://data.nasa.gov/browse) that you can use to master your data analysis skills. Moreover, there is even a small Python library to fetch some of the datasets: [pyNASA](https://bmtgoncalves.github.io/pyNASA/). 
# For example, Meteorite Landings dataset can be either downloaded manually from https://data.nasa.gov/Space-Science/Meteorite-Landings/gh4g-9sfh or fetched as a `pandas.DataFrame` using `pyNASA.meteorite()` method. Some further ideas: # * `pandas` package will be most useful to read in the data, as well as analyse them # * Use `cartopy` to plot the data using longitude and latitude columns # * Use scatter plot to show meteorites' mass in colour # * Create a histogram of earthquakes magnitude # ## Significant Earthquakes # US Geological Survey (USGS) provides various [earthquakes data](https://earthquake.usgs.gov/data/data.php#eq) on a global scale. Its Earthquake Catalog contains earthquake source parameters (e.g. hypocenters, magnitudes, phase picks and amplitudes) and other products (e.g. moment tensor solutions, macroseismic information, tectonic summaries, maps) produced by contributing seismic networks. # If you follow this [link](http://earthquake.usgs.gov/earthquakes/search/), you can search throught the catalog and filter data by the magnitude, time and geographic region. In the `data/` folder, we provide an [example dataset](../data/earthquakes_2015_2016_gt45.csv) of earthquakes with magnitude >4.5 that occurred around the world throughout the last year. # So if you want to build your project on these data, some possible ideas are: # * `pandas` package will be most useful to read in the data, as well as analyse them # * Use `cartopy` to plot the data using longitude and latitude columns # * Explore `pandas`' `groupby()` method, which you can use to aggregate data by time or other parameter # * Create a histogram of earthquakes magnitude # To get you started, we provided the minimal code to load the data. # + # import pandas as pd # df = pd.read_csv('../data/earthquakes_2015_2016_gt45.csv', parse_dates = ['time',], index_col='time') # df.head() # - # ## World Ocean Atlas # NOAA's [World Ocean Atlas](https://www.nodc.noaa.gov/OC5/WOA09/netcdf_data.html) (WOA) provides open-access gridded data of temperature, salinity and other ocean parameters. # It is a set of objectively analyzed climatological fields at standard depth levels for annual, seasonal, and monthly compositing periods. It also includes associated statistical fields of observed oceanographic profile data. # WOA is available over a variety of resolutions (5$^\circ$, 1$^\circ$ or 1/4$^\circ$grid). It is also available in different data formats. # If you choose to analyse these data, we suggest that you follow these initial steps: # # * download 5$^\circ$ data of temperature and oxygen, in NetCDF format. # * plot the sea surface data (or depth of choice) on a global map # * do not use jet/rainbow colormap! # * calculate an average depth profile and plot it beside the map # # Bonus step: # * Rather than creating two sections of code for each variable, can you create a function that will produce plots for either the temperature or oxygen variables? # + # Hint: # To download temperature data into xarray, you can use the following e.g. # WOA_temp = xr.open_dataset('https://data.nodc.noaa.gov/thredds/dodsC/ncei/woa/temperature/decav/5deg/woa18_decav_t00_5d.nc', decode_times=False) # decode_times=False is needed when xarray cannot process dates on import (you will see an error if use without). 
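# One possible shape for the bonus-step helper above (a sketch, not the only solution; the
# function name and the 'lat'/'lon'/'depth' dimension names are assumptions, so check them
# against the actual WOA file with ds.dims before using this):
# import matplotlib.pyplot as plt
# def woa_map_and_profile(ds, var_name, depth_index=0):
#     """Map of `var_name` at one depth level, plus its horizontally averaged depth profile."""
#     fig, (ax_map, ax_prof) = plt.subplots(1, 2, figsize=(12, 4))
#     ds[var_name].isel(time=0, depth=depth_index).plot(ax=ax_map, cmap='viridis')
#     ds[var_name].isel(time=0).mean(dim=('lat', 'lon')).plot(y='depth', ax=ax_prof)
#     ax_prof.invert_yaxis()
#     return fig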
# To access oxygen, you can use the following URL: # https://data.nodc.noaa.gov/thredds/dodsC/ncei/woa/oxygen/all/1.00/woa18_all_o00_01.nc # - # ## Arctic Sea Ice # ### Data # * In this project you are offered to use NOAA/NSIDC Climate Data Record of Passive Microwave Sea Ice Concentration. # * In the `../data/` directory, there are 2 netCDF files `seaice_conc_monthly*` that correspond to September 1991 ([original FTP link](ftp://sidads.colorado.edu/pub/DATASETS/NOAA/G02202_v2/north/monthly/seaice_conc_monthly_nh_f08_199109_v02r00.nc)) and September 2012 ([original FTP link](ftp://sidads.colorado.edu/pub/DATASETS/NOAA/G02202_v2/north/monthly/seaice_conc_monthly_nh_f17_201209_v02r00.nc)). # * If you want to download data for other months, visit the [NSIDC's data portal](https://nsidc.org/data/search/#keywords=sea+ice/sortKeys=score,,desc/facetFilters=%257B%257D/pageNumber=1/itemsPerPage=25). # ### Ideas for the project # * Plot one of the time slices on a map with North Polar Stereographic projection # * Create a figure with 3 subplots # * Plot the 1991 sea ice concentration in the 1st subplot, 2012 sea ice in the 2nd, and the difference in the 3rd. # ### Getting started # For this project, we recommend that you: # * use `xarray` for opening and reading the netCDF files # * may use `xarray.open_mf_dataset()` to load both files at once # * use `cartopy` for creating a plot with a correct map projection # * use appropriate colormaps for the sea ice concentration and difference # To get started, copy the following cell into your **Project** notebook. # + # import cartopy.crs as ccrs # import matplotlib.pyplot as plt # import xarray as xr # + # ds = xr.open_mfdataset('../data/seaice_conc_monthly_*.nc') ## or # ds1 = xr.open_dataset('../data/seaice_conc_monthly_nh_f08_199109_v02r00.nc') # ds2 = xr.open_dataset('../data/seaice_conc_monthly_nh_f17_201209_v02r00.nc') # + ## Extract longitude and latitude values, then the sea ice concentration itself # + ## Code for creating a map # fig = plt.figure() # ax = fig.add_subplot(111, projection=ccrs.???(central_longitude=0)) # ax.coastlines(resolution='110m', linewidth=0.5) # ax.gridlines() # ax.set_extent([-180, 180, 40, 90], crs=ccrs.PlateCarree()) # - # ## Other Data # * In the `../data/` directory, there are several files that haven't been used in the course: # - `met_brw_insitu_1_obop_hour_2015.txt` with accompanying `met_brw_readme.txt` - hourly meteorological observations in Barrow, Alaska (BRW) # - `plot_extent_n_v2.csv` - daily total sea ice extent in the Arctic # - `north_sea_data.csv` - ship observations from several cruises in the North Sea (data provided by ) # * You can use any of them: open, plot, calculate statistics, etc. # ### Advanced: geospatial data on Amazon # * Amazon Web Services (AWS) host a number of open datasets of geospatial data: satellite, radar, DEM, weather forecast data, etc.: https://aws.amazon.com/earth/ # * You can try to use the AWS API to download data and plot it with cartopy. 
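# Finally, a compact starter sketch tying together the `pandas` and `cartopy` suggestions above, using the earthquake file already provided in `../data/`. Column names such as 'latitude', 'longitude' and 'mag' are assumed from the standard USGS export, so check `df.columns` first.
# +
# import cartopy.crs as ccrs
# import matplotlib.pyplot as plt
# import pandas as pd
# df = pd.read_csv('../data/earthquakes_2015_2016_gt45.csv', parse_dates=['time'], index_col='time')
# fig = plt.figure(figsize=(10, 5))
# ax = fig.add_subplot(111, projection=ccrs.Robinson())
# ax.coastlines(resolution='110m', linewidth=0.5)
# sc = ax.scatter(df['longitude'], df['latitude'], c=df['mag'], s=4, transform=ccrs.PlateCarree())
# fig.colorbar(sc, label='Magnitude')
# plt.figure()
# df['mag'].hist(bins=30)  # separate figure: histogram of magnitudes
# -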
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LinearSegmentedColormap import pandas as pd import seaborn as sns import copy from load_data import load_expression from scipy.stats import pearsonr from model import SpiceMix from Result import SpiceMixResult # - from IPython.core.display import HTML HTML(""" """) import h5py with h5py.File("../../SeqFISHPlus/previous_runs/new_format/SpiceMix_K20_random_seed_0.hdf5", 'r') as h5file: print(h5file["weights"]["0"].keys()) # + STARmap_spatial_result = SpiceMixResult( path2dataset='../../SeqFISHPlus/', result_filename="../../SeqFISHPlus/previous_runs/new_format/SpiceMix_K20_random_seed_0.hdf5", neighbor_suffix="", ) STARmap_nmf_result = SpiceMixResult( path2dataset='../../SeqFISHPlus', result_filename="../../SeqFISHPlus/previous_runs/new_format/NMF_K20_random_seed_0.hdf5", neighbor_suffix="", ) # - # Prior to downstream analysis, we should make sure the models have converged fig, axes = plt.subplots(1, 2, figsize=(12, 6)) STARmap_spatial_result.plot_convergence(axes[0], label='SpiceMix', c='C0') STARmap_nmf_result.plot_convergence(axes[1], label='NMF', c='C0') for ax in axes.flat: ax.set_title('Q function') ax.set_xlabel('Iteration') ax.set_ylabel('Q decrement') ax.set_yscale('log') ax.set_ylim(10**-1, 10**3) ax.legend() # Load latent states for all cells estimated by the last iteration of SpiceMix/NMF STARmap_spatial_result.load_latent_states(iiter=-1) STARmap_nmf_result.load_latent_states(iiter=-1) # Do hierarchical clustering on ALL cells in the latent space # TODO: accelerate this function for datasets of 10k+ cells fig, axes = plt.subplots(1, 2, figsize=(12, 6)) STARmap_spatial_result.determine_optimal_clusters(axes[0], K_range=np.arange(2, 41)) STARmap_nmf_result.determine_optimal_clusters(axes[1], K_range=np.arange(2, 41)) STARmap_spatial_result.determine_clusters(16) STARmap_nmf_result.determine_clusters(16) # + plot_height = int(np.sqrt(STARmap_spatial_result.num_repli)) plot_width = STARmap_spatial_result.num_repli // plot_height if plot_height * plot_width < STARmap_spatial_result.num_repli + 1: plot_width += 1 fig, axes = plt.subplots(plot_height, plot_width, figsize=(plot_height * 4, plot_width * 3), squeeze=False) STARmap_spatial_result.plot_multireplicate(axes, "cluster") # + plot_height = int(np.sqrt(STARmap_nmf_result.num_repli)) plot_width = STARmap_nmf_result.num_repli // plot_height if plot_height * plot_width < STARmap_nmf_result.num_repli + 1: plot_width += 1 fig, axes = plt.subplots(plot_height, plot_width, figsize=(plot_height * 4, plot_width * 3), squeeze=False) STARmap_nmf_result.plot_multireplicate(axes, "cluster") # - replicate = 0 replicate_string = str(replicate) # + # We ovrlap latent states on the spatial space # SpiceMix metagenes are expected to show clearer spatial patterns with less background expressions segmentdata = copy.deepcopy(plt.get_cmap('Reds')._segmentdata) segmentdata['red' ][0] = (0., 1., 1.) segmentdata['green'][0] = (0., 1., 1.) segmentdata['blue' ][0] = (0., 1., 1.) 
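# The three assignments above replace the lowest anchor of the 'Reds' colormap with pure white (R = G = B = 1), so cells with zero metagene weight blend into the white background instead of showing up light pink.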
cmap = LinearSegmentedColormap('', segmentdata=segmentdata, N=256) plot_height = 8 plot_width = 5 fig, axes = plt.subplots(plot_height, plot_width, figsize=(plot_width * 3, plot_height * 3)) fig.suptitle('In situ expression of metagenes - SpiceMix (top) vs NMF (bottom)', y=.92) STARmap_spatial_result.plot_metagenes(axes[[0, 2, 4, 6]], cmap=cmap, replicate=replicate, s=0.8) STARmap_nmf_result.plot_metagenes(axes[[1, 3, 5, 7]], s=1, cmap=cmap, replicate=replicate) plt.show() # - # do dimensionality reduction by UMAP kwargs=dict( n_components=2, n_neighbors=30, min_dist=0.2, random_state=0, ) STARmap_spatial_result.UMAP(**kwargs) STARmap_nmf_result.UMAP(**kwargs) # + # Visualize cells in the latent space colored by # SpiceMix/NMF clusters (cluster) # cell types from the original analysis (cell type) # replicates (repli) # Result.visualizeFeaturesSpace is a wraper of Result.visualizeFeatureSpace and handles the custom order of metagenes kwargs = dict(s=5) fig, axes = plt.subplots(2, 2, figsize=(16, 10)) STARmap_spatial_result.plot_feature(axes[0, 0], 'cluster', key_x='UMAP 1', key_y='UMAP 2', **kwargs) STARmap_spatial_result.plot_feature(axes[0, 1], 'replicate', key_x='UMAP 1', key_y='UMAP 2', **kwargs) STARmap_nmf_result.plot_feature(axes[1, 0], 'cluster', key_x='UMAP 1', key_y='UMAP 2', **kwargs) STARmap_nmf_result.plot_feature(axes[1, 1], 'replicate', key_x='UMAP 1', key_y='UMAP 2', **kwargs) axes[0, 0].set_title('SpiceMix clusters') axes[0, 1].set_title('Replicates') axes[1, 0].set_title('NMF clusters') axes[1, 1].set_title('Replicates') # + adjacency_lists = STARmap_spatial_result.dataset["Es"][replicate] num_metagenes = STARmap_spatial_result.hyperparameters["K"] empirical_spicemix_affinities = np.zeros((num_metagenes, num_metagenes)) embeddings = STARmap_spatial_result.data[STARmap_spatial_result.weight_columns].values embeddings /= embeddings.sum(axis=1, keepdims=True) print(embeddings) empirical_matches = [[[] for _ in range(num_metagenes)] for _ in range(num_metagenes)] for cell, adjacency_list in adjacency_lists.items(): cell = int(cell) cell_embedding = embeddings[cell] # average_adjacent_embedding = embeddings[adjacency_list].mean(axis=0) for adjacent_cell in adjacency_list: adjacent_embedding = embeddings[adjacent_cell] for first_metagene in range(num_metagenes): for second_metagene in range(first_metagene, num_metagenes): match = cell_embedding[first_metagene], adjacent_embedding[second_metagene] empirical_matches[first_metagene][second_metagene].append(match) for first_metagene in range(num_metagenes): for second_metagene in range(first_metagene, num_metagenes): x, y = np.array(empirical_matches[first_metagene][second_metagene]).T empirical_spicemix_affinities[first_metagene][second_metagene], _ = pearsonr(x, y) empirical_spicemix_affinities += empirical_spicemix_affinities.T - np.diag(np.diag(empirical_spicemix_affinities)) # + adjacency_lists = STARmap_spatial_result.dataset["Es"][replicate] num_metagenes = STARmap_spatial_result.hyperparameters["K"] empirical_nmf_affinities = np.zeros((num_metagenes, num_metagenes)) embeddings = STARmap_nmf_result.data[STARmap_nmf_result.weight_columns].values embeddings /= embeddings.sum(axis=1, keepdims=True) empirical_matches = [[[] for _ in range(num_metagenes)] for _ in range(num_metagenes)] for cell, adjacency_list in adjacency_lists.items(): cell = int(cell) cell_embedding = embeddings[cell] # average_adjacent_embedding = embeddings[adjacency_list].mean(axis=0) for adjacent_cell in adjacency_list: adjacent_embedding = 
embeddings[adjacent_cell] for first_metagene in range(num_metagenes): for second_metagene in range(first_metagene, num_metagenes): match = cell_embedding[first_metagene], adjacent_embedding[second_metagene] empirical_matches[first_metagene][second_metagene].append(match) for first_metagene in range(num_metagenes): for second_metagene in range(first_metagene, num_metagenes): x, y = np.array(empirical_matches[first_metagene][second_metagene]).T empirical_nmf_affinities[first_metagene][second_metagene], _ = pearsonr(x, y) empirical_nmf_affinities += empirical_nmf_affinities.T - np.diag(np.diag(empirical_nmf_affinities)) # + segmentdata = copy.deepcopy(plt.get_cmap('bwr')._segmentdata) for key in ['red', 'green', 'blue']: segmentdata[key] = [(1.-i, k, j) for (i, j, k) in segmentdata[key][::-1]] cm = LinearSegmentedColormap('', segmentdata=segmentdata, N=256) kwargs = dict( cmap=cm, ) empirical_metagene_correlations = STARmap_spatial_result.calculate_metagene_correlations(replicate_string, STARmap_spatial_result.data[STARmap_spatial_result.data["replicate"] == replicate_string], STARmap_spatial_result.weight_columns) fig, axes = plt.subplots(1, 3, figsize=(8, 10)) STARmap_spatial_result.plot_metagene_heatmap(axes[0], -empirical_spicemix_affinities.astype(float), cmap=cm) STARmap_spatial_result.plot_affinity_metagenes(axes[1], cmap=cm) STARmap_nmf_result.plot_metagene_heatmap(axes[2], -empirical_nmf_affinities.astype(float), cmap=cm) # axes[0].plot(axes[0, 0], 'cluster', key_x='UMAP 1', key_y='UMAP 2', **kwargs) # axes[1].plot_feature(axes[0, 1], 'replicate', key_x='UMAP 1', key_y='UMAP 2', **kwargs) # STARmap_nmf_result.plot_feature(axes[1, 0], 'cluster', key_x='UMAP 1', key_y='UMAP 2', **kwargs) # STARmap_nmf_result.plot_feature(axes[1, 1], 'replicate', key_x='UMAP 1', key_y='UMAP 2', **kwargs) plt.tight_layout() axes[0].set_title('SpiceMix Empirical Metagene Correlation') axes[1].set_title('SpiceMix $\Sigma_x^{-1}$') axes[2].set_title('NMF Empirical Metagene Correlation') # + segmentdata = copy.deepcopy(plt.get_cmap('Reds')._segmentdata) segmentdata['red' ][0] = (0., 1., 1.) segmentdata['green'][0] = (0., 1., 1.) segmentdata['blue' ][0] = (0., 1., 1.) cm = matplotlib.colors.LinearSegmentedColormap('', segmentdata=segmentdata, N=256) fig, axes = plt.subplots(1, 2, figsize=(15, 6)) obj_SpiceMix.visualizeFeatureEnrichment( axes[0], cmap=cm, ignores_y=['NA'], normalizer_raw=StandardScaler(with_mean=False).fit_transform, ) obj_NMF .visualizeFeatureEnrichment( axes[1], cmap=cm, normalizer_raw=StandardScaler(with_mean=False).fit_transform, ) # + segmentdata = copy.deepcopy(plt.get_cmap('RdBu')._segmentdata) for key in ['red', 'green', 'blue']: segmentdata[key] = [(1.-i, k, j) for (i, j, k) in segmentdata[key][::-1]] cm = matplotlib.colors.LinearSegmentedColormap('', segmentdata=segmentdata, N=256) kwargs=dict( cmap=cm, ) # In the first column are the gene IDs, and in the second column are annotations, # which are cell types in this example gene_list_plot = np.array([ ('Slc17a7', 'E'), # putative marker of all excitatory neural types ('Nov', 'E-L2/3'), # STARmap, Fig. 2J ('Cux2', 'E-L2/3 L4'), # STARmap, Fig. 2J ('Rorb', 'E-L4 L5'), # STARmap, Fig. 2J ('Ctgf', 'E-L6'), # STARmap, Fig. 2J & tasic ('Gad1', 'I'), # putative marker of all inhibitory neural types ('Pvalb', 'I'), # putative marker of PVALB subtype ('Sst', 'I'), # putative marker of SST subtype ('Vip', 'I'), # putative marker of VIP subtype ('Reln', 'I'), ('Aqp4', 'Astro'), # tasic ('Enpp2', 'Oligo'), # STARmap Fig. 
S6B ('Mbp', 'Oligo'), # tasic ('Mgp', 'Smc'), # STARmap Fig. S6B ('Sema3c', 'Endo'), # STARmap Fig. S6B ]) # Extract gene IDs and prepend string 'expr ' keys_x = [f'expr {_[0]}' for _ in gene_list_plot] fig, axes = plt.subplots(1, 2, figsize=(15, 6)) obj_SpiceMix.visualizeFeatureEnrichment( axes[0], keys_x=keys_x, **kwargs, ignores_y=['NA'], normalizer_raw=StandardScaler().fit_transform, normalizer_avg=lambda x: StandardScaler().fit_transform(x.T).T, ) obj_NMF .visualizeFeatureEnrichment( axes[1], keys_x=keys_x, **kwargs, normalizer_raw=StandardScaler().fit_transform, normalizer_avg=lambda x: StandardScaler().fit_transform(x.T).T, ) for ax in axes: ax.set_xticklabels(' - '.join(_) for _ in gene_list_plot) # - # plot the empirical affinity between cell types segmentdata = copy.deepcopy(plt.get_cmap('RdBu')._segmentdata) for channel in ['red', 'green', 'blue']: segmentdata[channel] = [(1.-i, k, j) for (i, j, k) in segmentdata[channel][::-1]] cm = matplotlib.colors.LinearSegmentedColormap('', segmentdata=segmentdata, N=256) kwargs = dict( cmap=cm, ) fig, axes = plt.subplots(1, 2, figsize=(13, 6)) obj_SpiceMix.plotAffinityClusters(axes[0], ignores={'NA'}, **kwargs) obj_NMF .plotAffinityClusters(axes[1], ignores={'NA'}, **kwargs) # + segmentdata = copy.deepcopy(plt.get_cmap('bwr')._segmentdata) for key in ['red', 'green', 'blue']: segmentdata[key] = [(1.-i, k, j) for (i, j, k) in segmentdata[key][::-1]] cm = matplotlib.colors.LinearSegmentedColormap('', segmentdata=segmentdata, N=256) kwargs = dict( cmap=cm, ) fig, axes = plt.subplots(1, 2, figsize=(13, 6)) obj_SpiceMix.plotAffinityMetagenes(axes[0], iteration=-1, **kwargs) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Raphael_PVT.py d/f PE251_Project_v12 # 16-09-2020 # Version for Git Hub #------------------------------------- #PE251_Project_v12 d/f PE25_Project_v11 # 21-08-2020 #1. To test the crashes with z8 #2. And see if I can dow away by callin EOS function 3 times. 
#-------------------------------------------------------- #PE251_Project_v11 d/f PE251_Project_V9 # 17-07-2020 #Help create Undersaturated line for PVTO for Condesate #-------------------------------------- #PE251_Project_v9 d/f PE251_Project_V8 #15-07-2020 # Check that it works for Gas Condensates #------------------------- #PE251_Project_v8 d/f PE251_Project_v7 #12-07-2020 # to fix the issue with z3 fluid # The issue was the firt guess value for V in the Flash Equation #-------------------------------- #PE251_Project_v7 d/f PE251_Project_v5 #-------Caution--------------- # LBC Viscosity calculation is still being improved # At present its value is 5 times more than that given by commercial simulator #----------------------------------- #26-06-2020 # The values for Pc, Tc,acent and Vcrit have been modified (accidentally in PE251-Project_V4) # So this shall be deleted #----------------------------------- #PE251-Project_V5 d/f PE251-Project_V4 #18-06-2020 # To fix issues with composition in Table 10.6, Page 211 #------------------------- #User Input # Composition is in Mole Fractions: Please ensure they add up to 1.00 #Sample Compostion #z1={'N2':0.00291,'CO2':0.00481,'C1':0.17813,'C2':0.01454,'C3':0.02914,'iC4':0.01146, # 'nC4':0.0275,'iC5':0.01769,'nC5':0.02425,'C6':0.03949,'C7':0.04976,'C8':0.05467, 'C9':0.04387,'C10':0.50178} # Volve Data Set: Mathematically Recombined Samples z1={'N2':0.0047,'CO2':0.0162,'C1':0.4012,'C2':0.0587,'C3':0.0547,'iC4':0.0078,'nC4':0.0285, 'iC5':0.0107,'nC5':0.0167,'C6':0.0227,'C7':0.0344,'C8':0.0313,'C9':0.023,'C10':0.2894} #--Gas Condensate #z1={'CO2': 0.0018,'N2':0.0013,'C1':0.6192, 'C2': 0.1408,'C3':0.0835,'iC4':0.0097,'nC4':0.0341,'iC5':0.0084, # 'nC5':0.0148,'C6':0.0179,'C7':0.0685} #z1={'CO2': 0.0569,'N2':0.0037,'C1':0.86,'C2':0.0348,'C3':0.0152,'iC4':0.0036,'nC4':0.0044, # 'iC5':0.0016,'nC5':0.0012,'C6':0.0018,'C7':0.0021,'C8':0.0024,'C9':0.0024,'C10':0.0011, # 'C11':0.0012,'C12':0.0014,'C13':0.0011,'C14':0.0013,'C15':0.0010,'C16':0.0028} #z1={'CO2': 0.0569,'N2':0.0037,'C1':0.86,'C2':0.0348,'C3':0.0152,'iC4':0.0036,'nC4':0.0044, # 'iC5':0.0016,'nC5':0.0012,'C6':0.0018,'C7':0.0021,'C8':0.0024,'C9':0.0024,'C10':0.0099} #z1={'C1':0.2,'C2':0.1,'C6':0.2,'C10':0.5} # Res Temperature (Degree F) T_res= 232 # Pressures P_list=[100,200,300,400,500,900,1250,1500,1750,2000,2250,2500,3000,3500,4000,4500,5000] #P_list=[400,3000] #-------------------------- #Specify the Fluid Type (oil or gas) fluid_type='Oil' fluid=fluid_type.upper() # Separator Stages num_stages=1 #SepStage 1 # Psep1 (psia), Tsep (Degree F) Psep1=14.7 Tsep1=60 #SepStage 2 #Psep2 (psia), Tsep (Degree F) Psep2=14.7 Tsep2=60 # # The Initial guess value for Vapour Fraction # In case of convergence PROBLEMS ty modifying this value # guessV is between 0.0 and 1.0 guessV=0.5 # + # By # PE251-Project_v4 to incorporate LBC Viscosity # 06-06-2020 #------------------------------ # Create Dictionries to access properties of the components # commenced on 23-12-2019 #--------------------------------------------------------- # This is now complete as it can now calculate Bo and Rs # Monday 23-12-2019 #--------------------------------------------------------- #Modify all terms to follow PVTi manual 17-11-2019 import math as mt import numpy as np import matplotlib.pyplot as plt import sys import pandas as pd #import LBC1 #import pedersen1 #----------------------------------- #Fixed Input: Please do not modify #----------------------------------- # R=10.732 psi.ft3/lb-mole.R R=10.732 
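# Units of the component property tables below (inferred from the values -- please verify against your own source): Pc in psia, Tc in degrees Rankine, MW in lb/lb-mole, Vc in ft3/lb-mole; acent and the BIC (binary interaction coefficient) matrix are dimensionless.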
#Pc={'CO2':1070,'C1':667,'C5':489,'C16':266} Pc={'N2':492.31,'CO2':1070,'H2S': 1296.2,'C1': 667,'C2':708.34,'C3':615.76,'iC4':529.3,'nC4':550.7, 'iC5':491.58,'nC5':488.79,'C6':435.9,'C7':426.18,'C8':417.66,'C9':381.51,'C10':350.94, 'C11':323.46,'C12':301.71,'C13':284.22,'C14':269.82,'C15':255.27,'C16':240.72} #Critical Temperature Tc in Rankine #Tc={'CO2':548,'C1':344,'C5':846,'C16':1185} Tc={'N2':227.49,'CO2':672.81,'H2S':672.81,'C1': 343.41,'C2':551.04,'C3':665.97,'iC4':734.91,'nC4':765.69, 'iC5':829.05,'nC5':845.61,'C6':913.83,'C7':986.73,'C8': 1035.33,'C9':1085.73,'C10':1127.13, 'C11':1166.73,'C12':1202.73,'C13':1236.93,'C14':1271.13,'C15':1303.53,'C16':1330} #acentricity factor #acent={'CO2':0.225,'C1':0.008,'C5':0.251, 'C16':0.575} acent={'N2':0.045,'CO2':0.225,'H2S':0.10,'C1':0.013,'C2':0.0986,'C3':0.1524,'iC4':0.1848,'nC4':0.201, 'iC5':0.227,'nC5':0.251,'C6':0.299,'C7':0.30,'C8':0.312,'C9':0.348,'C10':0.385, 'C11':0.419,'C12':0.454,'C13':0.484,'C14':0.516,'C15':0.55,'C16':0.582} MW={'N2':28,'CO2':44,'H2S':32,'C1':16,'C2':30,'C3':44,'iC4':58,'nC4':58, 'iC5':72,'nC5':72,'C6':86,'C7':100,'C8':114,'C9':128,'C10':142,'C11':156,'C12':170, 'C13':184,'C14':198,'C15':212,'C16':226} Vc={'N2':1.4417,'CO2':1.5698,'H2S': 1.55057,'C1':1.5698,'C2':2.3707,'C3':3.2037,'iC4':4.2129,'nC4':4.0817, 'iC5':4.93377,'nC5':4.9817,'C6':5.6225,'C7':6.2792,'C8':6.937,'C9':7.7529,'C10':8.5539, 'C11':9.4028,'C12':10.204,'C13':10.941,'C14':11.693,'C15':12.478,'C16':13.311} #BIC={'CO2':{'CO2':0.00,'C1':0.15,'C5':0.00,'C16':0.00}, # 'C1':{'CO2':0.15,'C1':0.00,'C5':0.02,'C16':0.05}, # 'C5':{'CO2':0.00,'C1':0.02,'C5':0.00,'C16':0.00}, # 'C16':{'CO2':0.00,'C1':0.05,'C5':0.00,'C16':0.00}} #---------- N2 CO2 H2S C1 BIC={'N2' :{'N2':0.00, 'CO2':0.00, 'H2S':0.00, 'C1':0.025, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'CO2' :{'N2':0.00, 'CO2':0.00, 'H2S': 0.00, 'C1':0.15, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0,'C15':0,'C16':0}, 'H2S' :{'N2':0.13, 'CO2':0.135,'H2S':0.00, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C1' :{'N2':0.025,'CO2':0.15,'H2S':0.07, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0.02,'nC5':0.02,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0.05}, 'C2' :{'N2':0.01, 'CO2':0.13 ,'H2S':0.085,'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C3' :{'N2':0.09, 'CO2':0.125, 'H2S':0.080,'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'iC4':{'N2':0.095, 'CO2':0.12, 'H2S':0.075, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'nC4':{'N2':0.095, 'CO2':0.115,'H2S':0.075, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'iC5':{'N2':0.100, 'CO2':0.00, 'H2S':0.070, 'C1':0.02, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'nC5':{'N2':0.110, 'CO2':0.00, 'H2S':0.070, 'C1':0.02, 
'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C6': {'N2':0.110, 'CO2':0.115, 'H2S':0.055, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C7': {'N2':0.110, 'CO2':0.115, 'H2S':0.050, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C8': {'N2': 0.110, 'CO2':0.115, 'H2S':0.048, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C9': {'N2':0.110, 'CO2':0.115, 'H2S':0.046, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C10':{'N2':0.110, 'CO2':0.115, 'H2S':0.045, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C11':{'N2':0.110, 'CO2':0.115, 'H2S':0.045, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C12':{'N2':0.110, 'CO2':0.115, 'H2S':0.045, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C13':{'N2':0.110, 'CO2':0.115, 'H2S':0.045, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C14':{'N2':0.110, 'CO2':0.115, 'H2S':0.045, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, 'C15':{'N2':0.110, 'CO2':0.115, 'H2S':0.045, 'C1':0.00, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}, #--The numbers in C16 line are arbitrary 'C16':{'N2':0.110, 'CO2':0.115, 'C5':0.000, 'C1': 0.05, 'C2':0,'C3':0,'iC4':0,'nC4':0,'iC5':0,'nC5':0,'C6':0,'C7':0,'C8':0,'C9':0,'C10':0,'C11':0,'C12':0, 'C13':0,'C14':0, 'C15':0,'C16':0}} # - #--------------- # Tuning # Specify highest Fraction #--------------- Pc['C10']=Pc['C10']*1.0 Tc['C10']=Tc['C10']*1.0 def check_sum_func(z1): #Check if the sum=1.0 if abs(sum(z1.values())-1.0)>0.001: print ('sum of mole fractions=',sum(z1.values())) print ('Check your Sample Composition') else: print ('sum of mole fraction=',sum(z1.values())) check_sum_func(z1) #----This function is NOT used------ def criticals_fluid_func(z): Tc_fluid=0 Pc_fluid=0 for i in z: Tc_fluid=Tc_fluid+z[i]*Tc[i] Pc_fluid=Pc_fluid+z[i]*Pc[i] print ('Tc_fluid=',Tc_fluid, 'Pc_fluid=', Pc_fluid) return Tc_fluid, Pc_fluid def wilson(P,T_R,comp_list): #kval2 and k1 are temperary variables. #kount=0 kval={} for key in comp_list: #while kount0: # print ('ok2') # V=1.0 # L=0.0 # ylist=comp_list # for i in range(0,len(comp_list)): # xlist.append(0.0) #elif F_0<0: # print ('ok3') # V=0.0 # L=1.0 # xlist=comp_list # for i in range(0,len(comp_list)): # ylist.append(0.0) #print ('x=', "%.2f" % x1, 'y=',"%.2f" % y1) #print ('ok3') #print ( 'kval=',kval) #print ('L=',L, 'V=',V) return F_0,F_1,xlist,ylist,L,V # + # Following PVTi Manual # Calcuation of Aj=OmegaA(T,j)*Prj/Trj^2 Eq. 
7.98 #--------------------- def a_calc1(P,T_R,comp_list): OmegaA0=0.457235529 Pr={} Tr={} OmegaA={} Aj={} #i=0 for key in comp_list: #while i < len(comp_list): Pr[key]=(P/Pc[key]) Tr[key]=(T_R/Tc[key]) #while i < len(z_list): OmegaA[key]=(OmegaA0*(1+(0.37464+1.54226*acent[key]-0.2669*acent[key]**2) *(1-mt.sqrt(Tr[key])))**2) Aj[key]=OmegaA[key]*(P/Pc[key])/(T_R/Tc[key])**2 #i=i+1 return Aj # Eq 7.97 PVTi manual # Eq 7.95 PVTi manual #----------------------- def a_calc2(P,T_R,comp_list): Ajk=0 S={} Aij=[[],[]] A=0 Aj=a_calc1(P,T_R,comp_list) S_temp=0 #i=0 #while i < len(comp_list): for i in comp_list: # j=0 #while j < len(comp_list): for j in comp_list: #while j <= i: #if (i != j): try: BIC_temp=BIC[i][j] except: BIC_temp=0.0 # Ajk=(1-BIC[i][j])*mt.sqrt(Aj[i]*Aj[j]) Ajk=(1-BIC_temp)*mt.sqrt(Aj[i]*Aj[j]) Aij.append(Ajk) A=A+Ajk*comp_list[i]*comp_list[j] S_temp=S_temp+Ajk*comp_list[j] #print ('i=',i,'j=',j,'Ajk=',Ajk, 'A=',A) #print (comp_list[i],comp_list[j]) #+Ajk*comp_list[j] #j=j+1 S[i]=(S_temp) S_temp=0 #i=i+1 #print ('A=',A,'S=',S) return A,S # + def b_calc(P,T_R,comp_list): OmegaB0=0.077796074 Pr=[] Tr=[] Bj={} xBj={} B=0 #i=0 #while i < len(comp_list): for i in comp_list: #Pr.append(P/Pc[i]) #Tr.append(T_R/Tc[i]) Pr=P/Pc[i] Tr=T_R/Tc[i] Bj[i]=OmegaB0*Pr/Tr #Bj.append(OmegaB0*Pr/Tr) xBj[i]=Bj[i]*comp_list[i] # Bj.append(OmegaB0*Pr[i]/(Tr[i]) # Bj.append(OmegaB0*Pr[i]/Tr[i]) # print ('bi=',bi, 'z_list=', z_list) B = 0 for i in xBj: B= B + xBj[i] # i=i+1 #print ('b_calc') #print ('Bj=',Bj,'B=',B) return B,Bj # + def eos_zeta(A,B): # Eq. 790 to Eq.792 # zeta as in PV=zetaRT zeta_liq=0 zeta_vap=0 zeta_single=0 m1=1+np.sqrt(2) m2=1-np.sqrt(2) E2=(m1+m2-1)*B-1 E1=A-(2*(m1+m2)-1)*B**2-(m1+m2)*B E0= -(A*B+m1*m2*B**2*(B+1)) #E0E1E2=[E0,E1,E2] NROOT, zeta1,zeta2,zeta3= cubroot(E2,E1,E0) #[zeta_list2]=np.roots[E0,E1,E2] #print ('-----eos_zeta----------') #print ('A=',A, 'B=',B) #print ('E2=',E2,'E1=',E1,'E0=',E0) #print ('NROOT=',NROOT, 'zeta1=',zeta1,'zeta2=',zeta2,'zeta3=',zeta3) zeta_list=[zeta1,zeta2,zeta3] zeta_list = [i for i in zeta_list if i >= 0.0] # Number of Roots Real and Positive NROOT_r_p=len(zeta_list) #print ('---eos_zeta-------') #print ('NROOT=',NROOT,'zeta=',zeta_list) zeta_list.sort() if NROOT_r_p==0: zeta_single=0.0 elif NROOT_r_p==1: zeta_single=zeta_list[0] elif NROOT_r_p>1: zeta_liq=zeta_list[0] zeta_vap=zeta_list[-1] return NROOT_r_p,zeta_single,zeta_liq,zeta_vap #function liq_vap_zeta is not called def liq_vap_zeta(P,T_R,comp_list): A,S=a_calc2(P,T_R,comp_list) B,Bj=b_calc(P,T_R,comp_list) NROOT, zeta_single, zeta_liq,zeta_vap=eos_zeta(A,B) #print ('NROOT=',NROOT,'zeta_single=', zeta_single, 'zeta_liq=',zeta_liq, 'zeta_vap=',zeta_vap) return zeta_single, zeta_liq, zeta_vap def liq_zeta(P,T_R,xlist): A,S=a_calc2(P,T_R,xlist) B,Bj=b_calc(P,T_R,xlist) NROOT_r_p, zeta_single, zeta_liq,zeta_vap=eos_zeta(A,B) if NROOT_r_p==0 or NROOT_r_p==1: zeta=zeta_single elif NROOT_r_p>1: zeta=zeta_liq #print ('-----liq_zeta-----') #print ('NROOT=',NROOT) #print ('zeta_single=',zeta_single) #print ('zeta_liq=',zeta_liq,'zeta_vap=',zeta_vap) #print('zeta=',zeta) return zeta def vap_zeta(P,T_R,ylist): A,S=a_calc2(P,T_R,ylist) B,Bj=b_calc(P,T_R,ylist) NROOT_r_p, zeta_single, zeta_liq, zeta_vap=eos_zeta(A,B) #print ('NROOT=',NROOT, 'zeta_single=',zeta_single) if NROOT_r_p==0 or NROOT_r_p==1: zeta=zeta_single elif NROOT_r_p>1: zeta=zeta_vap #print ('-----vap_zeta-----') #print ('NROOT=',NROOT) #print ('zeta_single=',zeta_single) #print ('zeta_liq=',zeta_liq,'zeta_vap=',zeta_vap) #print 
('zeta=',zeta) return zeta # - #Eq 7.93 PVTi manual def fugacity(P, T_R, zeta,comp_list): # need to call eos_vol fug={} #print ('zeta=',zeta) #print ('comp_list=',comp_list) B,Bj=b_calc(P,T_R,comp_list) A,S=a_calc2(P,T_R,comp_list) #print ('fugacity') #print ('A=', A, 'Bj=',Bj,'B=',B,'zeta=',zeta) #---------------------------- m1=1+np.sqrt(2) m2=1-np.sqrt(2) #i=0 for i in comp_list: #while i < len(comp_list): # subdivide the equation to make calculation easier: fug1=-np.log(zeta-B) #print ('zeta=',zeta,'B=',B,'fug1=',fug1) fug2= A/(m1-m2)/B #print ('fug2=',fug2) #print ('A=',A,'B=',B) fug3= 2*S[i]/A -Bj[i]/B #print ('fug3=',fug3) fug4= np.log((zeta+m2*B)/(zeta+m1*B)) #print ('fug4=',fug4) fug5= (Bj[i]/B)*(zeta-1) #print ('fug5=',fug5) fug6= fug1+fug2*fug3*fug4+fug5 #rint ('fug6=',fug6) fug7= P*comp_list[i]*np.exp(fug6) #rint ('fug7=',fug7) fug[i]=(fug7) #i=i+1 return fug def cubroot(p,q,r): # By alfa= (p**2-3.0*q)/9.0 beta= (2.0*p**3-9.0*p*q+27*r)/54 delta= alfa**3-beta**2 #print ('delta=', delta) pi=4.0*np.arctan(1.0) x1=-999 x2=-999 x3=-999 if (delta>=0): phi=np.arccos(beta/np.sqrt(alfa**3)) x1= -2.0*np.sqrt(alfa)*np.cos(phi/3.0)-p/3.0 x2= -2.0*np.sqrt(alfa)*np.cos((phi+2.0*pi)/3.0)-p/3.0 x3= -2.0*np.sqrt(alfa)*np.cos((phi+4.0*pi)/3.0)-p/3.0 nroot=3 else: a=(np.sqrt(beta**2-alfa**3)+abs(beta))**(1.0/3.0) b=alfa/a if(beta<=0.0): x1=a+b-p/3.0 else: x1= -a-b-p/3.0 nroot=1 #print ('Inside cuberoot') #print ('p=',p,'q=',q,'r=',r) #print ('beta=',beta,'x1=',x1) #print ('NROOT=',nroot, x1,x2,x3) return(nroot,x1,x2,x3) # + # To be called with P, T_F and mole fraction of the feed stream def workflow(P,T_F,comp_list): T_R=T_F+460 kval=wilson(P, T_R,comp_list) F_0,F_1,xlist,ylist,L,V=liq_vap_molfrac(kval,comp_list,guessV) #print ('F_0=',F_0,'F_1=',F_1) #print ('workflow') print ('In Workflow','KVAL=', kval,'V=',V) print ('F_0=',F_0,'F_1=',F_1) if F_1>0 or F_0<0: print ('------Single Phase------') if fluid=="OIL": zeta_v=1.0 zeta_l=liq_zeta(P,T_R,comp_list) vap_v=0.0 liq_v=zeta_l*R*T_R/P V=0.0 L=1.0 for i in comp_list: xlist[i]=comp_list[i] ylist[i]=0 if fluid=="GAS": zeta_v=vap_zeta(P,T_R,comp_list) #zeta_l=1.0 zeta_l=1.0 vap_v=zeta_v*R*T_R/P liq_v=0.0 V=1.0 L=0.0 for i in comp_list: xlist[i]=0 ylist[i]=comp_list[i] #------------------------------------- # Commented for _V0 # if F_0<0: # #print ('F_0<0') # zeta_l=liq_zeta(P,T_R,comp_list) # #zeta_v=1.0 Arbitrary # zeta_v=1.0 # liq_v=zeta_l*R*T_R/P # vap_v=0.0 # V=0.0 # L=1.0 # for i in comp_list: #xlist[i]=comp_list[i] # ylist[i]=0 #---------------------------------- if F_0>=0 and F_1<=0: print ('---Two Phases----') #print ('L=',L, 'V=',V) #print ('xlist=',xlist, 'ylist=',ylist) #zeta_single,zeta_l,zeta_v= liq_vap_zeta(P,T_R,comp_list) #print ('zeta=',zeta_single,'zeta_l=',zeta_l,'zeta_v=',zeta_v) zeta_l=liq_zeta(P,T_R,xlist) zeta_v=vap_zeta(P,T_R,ylist) liq_v=L*zeta_l*R*T_R/P vap_v=V*zeta_v*R*T_R/P #print ('zeta_l=',zeta_l,'zeta_v=',zeta_v) #print ('xlist=',xlist) #print ('ylist=',ylist) fug_l=fugacity(P,T_R,zeta_l,xlist) fug_v=fugacity(P,T_R,zeta_v,ylist) #print ('fug_l=',fug_l) #print ('fug_v=',fug_v) #fugcomp is a temporary variable to compare liquid and vapour fugacities fugcomp=0.0 for i in comp_list: fugcomp=fugcomp+(fug_l[i]/fug_v[i]-1.0)**2.0 #print ('fugcomp=', fugcomp) iter=1 while abs(fugcomp)>0.001 and iter<=500: #print ('-------While Loop-----------') #print ('iter=',iter,'fugcomp=',fugcomp) kval_old=kval kval={} for i in comp_list: kval[i]=(kval_old[i]*fug_l[i]/fug_v[i]) #print ('In Workflow, while loop','KVAL=', kval,'V=',V) 
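# Successive substitution: each K-value has just been rescaled by its liquid/vapour fugacity ratio; the flash is now re-solved with the updated K-values, and the loop repeats until the component fugacities agree in both phases (fugcomp within tolerance) or the iteration cap is reached.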
F_0,F_1,xlist,ylist,L,V=liq_vap_molfrac(kval,comp_list,V) #print ('KVAL=',kval,'V=',V) #print ('F_0=',F_0,'F_1=',F_1) #print ('fugcomp=',fugcomp) #print ('ok4') #print ('L=',L,'V=',V) if F_1>0 or F_0<0: print('------Single Phase-------') print('------Inside While Loop--') print ('F_0=',F_0,'F_1=',F_1) if fluid=="OIL": zeta_v=1.0 zeta_l=liq_zeta(P,T_R,comp_list) vap_v=0.0 liq_v=zeta_l*R*T_R/P V=0.0 L=1.0 for i in comp_list: xlist[i]=comp_list[i] ylist[i]=0 break if fluid=="GAS": zeta_l=0.0 zeta_v=vap_zeta(P,T_R,comp_list) liq_v=0 vap_v=zeta_v*R*T_R/P V=1.0 L=0.0 for i in comp_list: xlist[i]=0 ylist[i]=comp_list[i] break #------------------------------------------- #Commented for _V9 # if F_0<0: # print ('F_0<0') # print ('F_0=',F_0,'F_1=',F_1) # zeta_l=liq_zeta(P,T_R,comp_list) # zeta_v=0.0 # liq_v=zeta_l*R*T_R/P # vap_v=0.0 # V=0.0 # L=1.0 # for i in comp_list: # xlist[i]=comp_list[i] # ylist[i]=0 # break #---------------------------------------- #print ('xlist=',xlist,'ylist=',ylist) zeta_l=liq_zeta(P,T_R,xlist) zeta_v=vap_zeta(P,T_R,ylist) liq_v=L*zeta_l*R*T_R/P vap_v=V*zeta_v*R*T_R/P #print ('workflow', 'liq_v=',liq_v) fug_l=fugacity(P,T_R,zeta_l,xlist) fug_v=fugacity(P,T_R,zeta_v,ylist) #print ('fug_l=',fug_l) #print ('fug_v=', fug_v) # update the value of fugcomp fugcomp=0.0 for i in comp_list: fugcomp=fugcomp+(fug_l[i]/fug_v[i]-1.0)*2 #print ('fugcomp=', fugcomp) #print ('iter=',iter) iter=iter+1 #return (xlist,ylist,L,V,zeta_l, liq_v, zeta_v, vap_v) #print ('fugcomp=',fugcomp) return (xlist,ylist,L,V,zeta_l,zeta_v, liq_v, vap_v) # + #--function to calculate Bo and Rs #--single stage separator def blackoil(comp_list,T_F,P): Bo=[] Bo_us=[] Bg=[] Rs=[] Rv=[] visco=[] viscg=[] L1_list=[] zeta_l1_list=[] liq_vol1_list=[] liq_vol_tbd_list=[] #Pressure for Liquid Tables P_liq=[] for i in range (0,len(P)): #--Reservoir--- x1,y1,L1,V1,zeta_l1,zeta_v1,liq_v1,vap_v1=workflow(P[i],T_F, comp_list) liq_vol1=liq_v1/5.615 vap_vol1=vap_v1 moles_L1=L1 moles_V1=V1 L1_list.append(moles_L1) #zeta_l1_list.append(zeta_l1) liq_vol1_list.append(liq_vol1) #liq_vol_tbd_list.append(liq_vol_tbd) print ('--------Reservoir---------------') print ('Pressure=',P[i],'Temperature(F)=',T_F) print ('zeta_l1=', zeta_l1, 'zeta_v1=',zeta_v1) print ('moles_L1=',moles_L1,'moles_V1=',moles_V1) print ('liq_vol1=',"%.2f" % liq_vol1, "BBL") print ('vap_vol1=', "%.2f" % vap_vol1,'ft3') print ('---------Compostion----------------') print ('x1=',x1) print ('y1=',y1) #---------------2 Stage Separator---------------- if num_stages==2: print ('-------Separtor----------') print ('num_stages=',num_stages) if fluid=='GAS': print ("Fluid Type=",fluid) #--------------Sep1 (V1-->L2v,V2v) x2v,y2v,L2v,V2v,zeta_l2v,zeta_v2v,liq_v2v,vap_v2v=workflow(Psep1,Tsep1,y1) moles_L2v=moles_V1*L2v moles_V2v=moles_V1*V2v liq_vol2v=liq_v2v vap_vol2v=vap_v2v print ('--------Stage 1-------') print ('-------Composition----') print ('x2v=', x2v) print ('y2v=', y2v) print ('--------mole fraction and volume----') print('moles_L2v=',moles_L2v, 'moles_V2v=', moles_V2v) print('liq_vol2v=',liq_vol2v, 'ft3', 'vap_vol2v=', vap_vol2v,'ft3') #--------------Sep2 (L2v-->L3v,V3v) x3v,y3v,L3v,V3v,zeta_l3v,zeta_v3v,liq_v3v,vap_v3v=workflow(Psep2,Tsep2,x2v) moles_L3v=moles_L2v*L3v moles_V3v=moles_L2v*V3v liq_vol3v=moles_L2v*liq_v3v vap_vol3v=moles_L2v*vap_v3v print ('--------Stage 2-------') print ('-------Composition----') print ('x3v=', x3v) print ('y3v=', y3v) print ('--------mole fraction and volume----') print ('moles_L3v=',moles_L3v,'moles_V3v=',moles_V3v) print 
('liq_vol3v=',liq_vol3v,'ft3' 'vap_vol3v=',vap_vol3v,'ft3') print ('---------System Output V1------------') # print('moles_L2x=',moles_L2x,'moles_V2x=',moles_V2x) liq_vol_sys_V1=liq_vol3v # Convert Vapour Volume for Sep Cond1 to standard conditions Bg_sep_cond1v= 0.0283*zeta_v2v*(Tsep1+460)/Psep1 print ('Bg_sep_cond1v=',Bg_sep_cond1v) #vap_vol_sys_V1=vap_vol2v/(0.0283*zeta_v2v*(Tsep1+460)/Psep2)+vap_vol3v vap_vol_sys_V1=vap_vol2v/Bg_sep_cond1v+vap_vol3v print('liq_vol_sys_V1=',liq_vol_sys_V1, 'vap_vol_sys_V1=',vap_vol_sys_V1) print('xBg=',"%.2f" % (0.0283*zeta_v1*(T_F+460)/P[i]/5.615*1000),'BBL/Mscf') print('Rv=',"%.2f" %(liq_vol_sys_V1*1000/5.615/vap_vol_sys_V1),'STB/Mscf') Bg.append(0.0283*zeta_v1*(T_F+460)/P[i]/5.615*1000) Rv.append(liq_vol_sys_V1*1000/5.615/vap_vol_sys_V1) # to calulate condensate properties for PVTO if L1>0: #---First Stage Separator (L1-->L2l,V2l) print ('-------------------------------') print ('In 2 Phase Region: Where L1>0.0') print ('-----Sep Stage 1 (Liquid)---------') x2l,y2l,L2l,V2l,zeta_l2l,zeta_v2l,liq_v2l,vap_v2l=workflow(Psep1,Tsep1,x1) moles_L2l=moles_L1*L2l moles_V2l=moles_L1*V2l #workflow takes into account Liquid Mole Fraction while calculating volume liq_vol2l=moles_L1*liq_v2l vap_vol2l=moles_L1*vap_v2l #------------------------------------------------- print ('liq_v2l=',liq_v2l) print('moles_L2l=',moles_L2l, 'moles_V2l=',moles_V2l) print('liq_vol2l=',liq_vol2l,'ft3','vap_vol2l=',vap_vol2l,'ft3') #--------------------------------------------------- #---Second Stage Separator (L2l-->L3l,V3l) print('------Liquid Sep Stage 2 (Liquid)-----') x3l,y3l,L3l,V3l,zeta_l3l,zeta_v3l,liq_v3l,vap_v3l=workflow(Psep2,Tsep2,x2l) moles_L3l=moles_L2l*L3l moles_V3l=moles_L2l*V3l #workflow takes into account Liquid Mole Fraction while calculating volume liq_vol3l=moles_L2l*liq_v3l vap_vol3l=moles_L2l*vap_v3l #----------------------------------------------------- print ('liq_v3l=',liq_v3l) print('moles_L3l=',moles_L3l,'moles_V3l=',moles_V3l ) print('liq_vol3l=',liq_vol3l,'ft3','vap_vol3l=',vap_vol3l,'ft3') #----To Develop Undersaturated Line (Flash x1 at highes pressure) x11,y11,L11,V11,zeta_l11,zeta_v11,liq_v11,vap_v11=workflow(max(P_list),T_res,x1) if L11==0 or V11==0: #us: undersaturated liq_vol_us11=moles_L1*(liq_v11+vap_v11) else: liq_vol_us11=moles_L1*liq_v11 print('liq_vol_us11',liq_vol_us11) print ('Max P for us lines=', max (P_list)) print ('Bo_us=', "%.2f" % (liq_vol_us11/liq_vol3l)) #---------System Output L1--------------------------- liq_vol_sys_L1=liq_vol3l Bg_sep_cond1l= 0.0283*zeta_v2l*(Tsep1+460)/Psep1 vap_vol_sys_L1=vap_vol2l/Bg_sep_cond1l+vap_vol3l print ('Bg_sep_con1l=',Bg_sep_cond1l) print('Bo=',"%.2f" % (liq_vol1/liq_vol_sys_L1*5.615), "BBL/STB") print ('Rs=', "%.2f" % (vap_vol_sys_L1*5.615/liq_vol_sys_L1/1000),'Mscf/STB') Bo.append(liq_vol1/liq_vol_sys_L1*5.615) Bo_us.append(liq_vol_us11/liq_vol_sys_L1) Rs.append(vap_vol_sys_L1*5.615/liq_vol_sys_L1/1000) #------------------------------------------ if fluid=='OIL': print ("Fluid Type=",fluid) #--sep Cond1 #Liquid and vapour based on x1 x2,y2,L2,V2,zeta_l2,zeta_v2,liq_v2,vap_v2=workflow(Psep1,Tsep1,x1) moles_L2=moles_L1*L2 moles_V2=moles_L1*V2 #workflow takes into account Liquid Mole Fraction while calculating volume) liq_vol2=moles_L1*liq_v2 # First Stage Vapour is conversted to Stock Tank condtions vap_vol2=moles_L1*vap_v2*Psep1/14.7/zeta_v2 #--Sep Cond2 #Liquid and vapour based on x2 x3,y3,L3,V3,zeta_l3,zeta_v3,liq_v3,vap_v3=workflow(Psep2,Tsep2,x2) moles_L3=moles_L2*L3 moles_V3=moles_L2*V3 
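# Mole bookkeeping: L3 and V3 are phase fractions of the stage-2 feed (the stage-1 liquid), so multiplying by moles_L2 keeps everything on the basis of one mole of original reservoir feed.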
#workflow takes into account Liquid Mole Fraction while calculating volume) liq_vol3=moles_L2*liq_v3 vap_vol3=moles_L2*vap_v3 #----------------------------------------------- print ('---------Sep Stage 1--------------') print ('---------Liquid Composition--------') print('x2=',x2) print ('---------Vapour Composition------') print('y2=',y2) print('moles_L2=',moles_L2, 'moles_v2=',moles_V2) print ('liq_vol2=',liq_vol2,'ft3','vap_vol2=',vap_vol2,'ft3') print ('---------Sep Stage 2--------------') print ('-----------Liquid Composition-------') print('x3=',x3) print ('-----------Vapour Composition-------') print('y3=',y3) print('moles_L3=',moles_L3,'moles_V3=',moles_V3) print('liq_vol3=',liq_vol3,'ft3', 'vap_vol3=',vap_vol3,'ft3') liq_vol_sys=liq_vol3/5.615 vap_vol_sys=vap_vol2+vap_vol3 print ('---------System Output------------') print ('liq_vol_sys=',"%.2f" % liq_vol_sys, 'STB') print ('vap_vol_sys=',"%.2f" % vap_vol_sys, 'scf') print('Bo=',"%.2f" % (liq_vol1/liq_vol_sys), "BBL/STB") print ('Rs=', "%.2f" % (vap_vol_sys/liq_vol_sys/1000),'Mscf/STB') print('Bg=',"%.2f" % (0.0283*zeta_v1*(T_F+460)/P[i]/5.615*1000),'BBL/Mscf') # print('Rv=',"%.2f" %(liq_vol_sys*1000/vap_vol_sys)) Bg.append(0.0283*zeta_v1*(T_F+460)/P[i]/5.615*1000) Bo.append(liq_vol1/liq_vol_sys) Rs.append(vap_vol_sys/liq_vol_sys/1000) # Rv.append(liq_vol_sys*1000/vap_vol_sys) #----------------Single Stage Separator--------------- if num_stages==1: print ('num_stages=',num_stages) #--sep Cond1 #Liquid and vapour based on y1 if fluid=='GAS': print ("Fluid Type=",fluid) x2v,y2v,L2v,V2v,zeta_l2v,zeta_v2v,liq_v2v,vap_v2v=workflow(Psep1,Tsep1,y1) moles_L2v=moles_V1*L2v moles_V2v=moles_V1*V2v liq_vol2v=liq_v2v vap_vol2v=vap_v2v # to calulate condensate properties for PVTO if L1>0: print ('----------2 Phase Region--------') x2l,y2l,L2l,V2l,zeta_l2l,zeta_v2l,liq_v2l,vap_v2l=workflow(Psep1,Tsep1,x1) moles_L2x=moles_L1*L2l moles_V2x=moles_L1*V2l #workflow takes into account Liquid Mole Fraction while calculating volume liq_vol2x=moles_L1*liq_v2l/5.615 vap_vol2x=moles_L1*vap_v2l print('Bo=',"%.2f" % (liq_vol1/liq_vol2x), "BBL/STB") print ('Rs=', "%.2f" % (vap_vol2x/liq_vol2x/1000),'Mscf/STB') Bo.append(liq_vol1/liq_vol2x) Rs.append(vap_vol2x/liq_vol2x/1000) #--------------------------------------------------- print('moles_L2x=',moles_L2x, 'moles_V2x=',moles_V2x) print('liq_vol2x=',liq_vol2x, 'vap_vol2x=',vap_vol2x) #--------------------------------------------------- print('moles_L2v=',moles_L2v, 'moles_V2v=',moles_V2v) print('liq_vol2v=',liq_vol2v, 'vap_vol2v=',vap_vol2v) #----To Develop Undersaturated Line (Flash x1 at highes pressure) #---- Copied and pasted from 2-Stage Seprator x11,y11,L11,V11,zeta_l11,zeta_v11,liq_v11,vap_v11=workflow(max(P_list),T_res,x1) if L11==0 or V11==0: #us: undersaturated liq_vol_us11=moles_L1*(liq_v11+vap_v11) else: liq_vol_us11=moles_L1*liq_v11 print('liq_vol_us11',liq_vol_us11) print ('Max P for us lines=', max (P_list)) print ('Bo_us=', "%.2f" % (liq_vol_us11/liq_vol2x)) Bo_us.append(liq_vol_us11/liq_vol2x/5.615) #----------------------------------------------- print ('---------System Output-----------') print ('---------1 Stage Separator-------') print('x2v=',x2v) print('y2v=',y2v) print('moles_L1=',moles_L1,'moles_V1=',moles_V1) # print('moles_L2x=',moles_L2x,'moles_V2x=',moles_V2x) print('liq_vol2v=',liq_vol2v, 'vap_vol2v=',vap_vol2v) print('Bg=',"%.2f" % (0.0283*zeta_v1*(T_F+460)/P[i]/5.615*1000),'BBL/Mscf') print('Rv=',"%.2f" %(liq_vol2v*1000/5.615/vap_vol2v),'STB/Mscf') 
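# Bg below comes from the real-gas law: 0.0283*z*T/P is the gas formation volume factor in reservoir ft3 per scf (T in degrees Rankine, P in psia); dividing by 5.615 ft3/bbl and multiplying by 1000 expresses it in BBL/Mscf, matching the printout above.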
Bg.append(0.0283*zeta_v1*(T_F+460)/P[i]/5.615*1000) Rv.append(liq_vol2v*1000/5.615/vap_vol2v) if fluid=="OIL": #print ('Fluid Type=',fluid) #--sep Cond1 #Liquid and vapour based on x1 x2,y2,L2,V2,zeta_l2,zeta_v2,liq_v2,vap_v2=workflow(Psep1,Tsep1,x1) moles_L2=moles_L1*L2 moles_V2=moles_L1*V2 print ('--------System Output--------------') print ('---------1 Stage Separator---------') print ('---------Liquid Composition--------') print ('x2=',x2) print ('---------Vapour Composition--------') print ('y2=',y2) #workflow takes into account Liquid Mole Fraction while calculating volume) liq_vol2=moles_L1*liq_v2 vap_vol2=moles_L1*vap_v2 #----------------------------------------------- # print ('---------stock tank------------') # print('x2=',x2,'y2=',y2) # print('moles_L2=',moles_L2,'moles_V2=',moles_V2) # print('liq_vol2=',liq_vol2, 'vap_vol2=',vap_vol2) #--system output-- liq_vol_sys=liq_vol2/5.615 vap_vol_sys=vap_vol2 print ('liq_vol_sys=',"%.2f" % liq_vol_sys, 'STB') print ('vap_vol_sys=',"%.2f" % vap_vol_sys, 'scf') print('Bg=',"%.2f" % (0.0283*zeta_v1*(T_F+460)/P[i]/5.615*1000),'BBL/Mscf') #print('Rv=',"%.2f" %(liq_vol_sys*1000/vap_vol_sys)) Bg.append(0.0283*zeta_v1*(T_F+460)/P[i]/5.615*1000) Bo.append(liq_vol1/liq_vol_sys) Rs.append(vap_vol_sys/liq_vol_sys/1000) return (Bg,Bo,Bo_us,Rs,Rv, L1_list,zeta_l1_list,liq_vol1_list,liq_vol_tbd_list) # + Bg,Bo,Bo_us,Rs,Rv,L1_list,zeta_l1_list,liq_vol1_list,liq_vol_tbd_list=blackoil(z1,T_res,P_list) # + def MW_func(z): MW_mix=0.0 for i in z: MW_mix=MW_mix+z[i]*MW[i] #print ('MW_mix=',MW_mix) return(MW_mix) def density(P, T_F,z): (xlist,ylist,L,V,zeta_l, zeta_v, liq_v, vap_v)=workflow(P,T_F,z) # Critical densities from LBC1 print ('I am in density') #print ('xlist=',xlist) #print ('ylist=',ylist) print ('L=',L,'V=',V) #print ('liq_v=',liq_v, 'vap_v=',vap_v) if L>0: liq_den_c=rhocfunc(xlist) #print ('liq_den_c=',liq_den_c) Mw_liq=MW_func(xlist) #liquid density lb/ft3 liq_den1=Mw_liq*L/liq_v print ('liq-den1=',liq_den1) #liquid density g/cm3 liq_den2=liq_den1/62.4 # molar density lb-moles/ft3 liq_den3=L/liq_v print ('liq_den3=',liq_den3,'lb-moles/ft3') #reduced density liq_red_den= liq_den3/liq_den_c print ('liq_red_den=',liq_red_den) if V>0: vap_den_c=rhocfunc(ylist) Mw_vap=MW_func(ylist) #vap_denisty lb/ft3 vap_den1=Mw_vap*V/vap_v #vap_density g/cm3 vap_den2=vap_den1/62.4 # molar denstiy lb_moles/ft3 vap_den3=V/vap_v print ('vap_den3=',vap_den3) #reduced density vap_red_den= vap_den3/vap_den_c print ('vap_red_den=',vap_red_den) if V==0: vap_den1=liq_den1 vap_den2=liq_den2 vap_red_den=liq_red_den if L==0: liq_den1=vap_den1 liq_den2=vap_den2 liq_red_den=vap_red_den return (liq_den1,liq_den2,liq_red_den,vap_den1,vap_den2,vap_red_den) def epsillionifunc(z): # Equation 10.44 epsillioni={} for i in z: #print ('MW=',MW[i], 'Tc=', Tc[i], 'Pc=', Pc[i]) epsillioni[i]=Tc[i]**(1/6)/mt.sqrt(MW[i])/Pc[i]**(2/3) return (epsillioni) def Trfunc(z,T_F): T_R=T_F+460 Tr={} for i in z: Tr[i]=(T_R/Tc[i]) return (Tr) def eta_i_starfunc(z,T_F): Tr=Trfunc(z,T_F) epsillioni=epsillionifunc(z) eta_i_star={} for i in z: if Tr[i]<=1.5: eta_i_star[i]=(34e-5*1/epsillioni[i]*Tr[i]**0.94) else: eta_i_star[i]=(17.78e-5/epsillioni[i]*(4.58*Tr[i]-1.67)**(5.0/8)) #print (i,eta_i_star) return (eta_i_star) def eta_starfunc(z,T_F): eta_i_star=eta_i_starfunc(z,T_F) eta_star=0.0 num=0.0 den=0.0 for i in z: #numerator num=num+z[i]*eta_i_star[i]*mt.sqrt(MW[i]) #denominator den=den+z[i]*mt.sqrt(MW[i]) eta_star=num/den return (eta_star) def rhocfunc(z): # Equation 10.39 RHOc_mix=0.0 den=0.0 for i 
in z: #denominator den=den+(z[i]*Vc[i]) RHOc_mix= 1/den return (RHOc_mix) def epsillionfunc(z): # Eqauation 10.38 num=0.0 den1=0.0 den2=0.0 for i in z: num=num+z[i]*Tc[i] den1=den1+z[i]*MW[i] den2=den2+z[i]*Pc[i] epsillion=num**(1/6)/den1**(1/2)/den2**(2/3) return (epsillion) def lbc_visc(P, T_F,z): (xlist,ylist,L,V,zeta_l,zeta_v,liq_v,vap_v)=workflow(P,T_F,z) print ('-----------LBC Viscosity------------') # print ('Pressure=',P) # print ('xlsit=',xlist) # print ('ylist=',ylist) #Table 10.3 a1=0.1023 a2=0.023364 a3=0.058533 a4=-0.040758 a5=0.0093324 liq_den1,liq_den2,liq_red_den,vap_den1,vap_den2,vap_red_den=density(P,T_F,z) print ('from density') print ('liq_red_den=',liq_red_den,'vap_red_den=',vap_red_den) #liquid_viscosity if L>0: #temporary variables visc_liq1,visc_liq2,visc_liq3,visc_liq4 visc_liq1=a1+a2*liq_red_den+a3*liq_red_den**2+a4*liq_red_den**3+a5*liq_red_den**4 visc_liq2= visc_liq1**4 visc_liq3= visc_liq2-1e-4 visc_liq4= visc_liq3/epsillionfunc(xlist) #liquid_viscosity visc_liq= visc_liq4+eta_starfunc(xlist,T_F) print ('visc_liq=',visc_liq) #----------------------------------------------- print ("Undersaturated Viscosity") liq_den1_us,liq_den2_us,liq_red_den_us,vap_den1_us,vap_den2_us,vap_red_den_us=density(max(P_list),T_F,xlist) visc_liq1_us=a1+a2*liq_red_den_us+a3*liq_red_den_us**2+a4*liq_red_den_us**3+a5*liq_red_den_us**4 visc_liq2_us= visc_liq1_us**4 visc_liq3_us= visc_liq2_us-1e-4 visc_liq4_us= visc_liq3_us/epsillionfunc(xlist) #liquid_viscosity visc_liq_us= visc_liq4_us+eta_starfunc(xlist,T_F) print ('visc_liq_us=',visc_liq_us) print ('L=',L) #------------------------------------------ if V>0: print ('V>0') #temporary variables visc_vap1,visc_vap2,visc_vap3,visc_vap4 visc_vap1=a1+a2*vap_red_den+a3*vap_red_den**2+a4*vap_red_den**3+a5*vap_red_den**4 visc_vap2= visc_vap1**4 visc_vap3= visc_vap2-1e-4 visc_vap4= visc_vap3/epsillionfunc(ylist) #vapour viscosty visc_vap=visc_vap4+eta_starfunc(ylist,T_F) print ('vap_red_den=',vap_red_den) print ('visc_vap=',visc_vap) if L==0.0: visc_liq=visc_vap visc_liq_us=visc_vap if V==0.0: visc_vap=visc_liq visc_liq_us=visc_liq return (visc_liq,visc_vap,visc_liq_us) # + visc_liq_list=[] visc_vap_list=[] # visc_liq_us_list=[] for P in P_list: print ("Pressure=",P) visc_liq,visc_vap,visc_liq_us=lbc_visc(P,T_res,z1) visc_liq_list.append(visc_liq) visc_vap_list.append(visc_vap) visc_liq_us_list.append(visc_liq_us) # - #For undersaturated Lines P_max=[] for i in range (0,len(Bo_us)): P_max.append(max(P_list)) print (P_max) # + #PVTO Undersaturated Lines PVTO_us=pd.DataFrame(list(zip(Rs,P_max,Bo_us,visc_liq_us_list)), columns=['Rs(Mscf/STB)','Pressure(psia)','Bo(BBL/STB)', 'visc_liq(cp)']) #PVTO Saturated Lines PVTO_s=pd.DataFrame(list(zip(Rs,P_list, Bo,visc_liq_list)), columns =['Rs(Mscf/STB)', 'Pressure(psia)','Bo(BBL/STB)', 'visc_liq(cp)']) # + def PVTG_func(P_list,Rv,Bg,visc_vap_list,L1_list): PVTG= pd.DataFrame(list(zip(P_list,Rv,Bg,visc_vap_list,L1_list)), columns =[ 'Pressure (psia)','Rv(STB/Mscf)','Bg (BBL/Mscf)','visc_vap (cp)','L1_list']) PVTG.set_index("Pressure (psia)", inplace = True) col,row=PVTG.shape slash=[] for i in range (0,col,1): slash.append('/') PVTG['slash']=slash PVTG=PVTG.drop(columns=['L1_list']) PVTG.to_excel('PVTG.xls') print (PVTG) return PVTG def PVTO_cond_func(Rs,P_list,Bo,visc_liq_list,L1_list): #--- For Gas Condesates #--PVTO Undersaturated Lines #For undersaturated Lines P_max=[] for i in range (0,len(Bo_us)): P_max.append(max(P_list)) #print (P_max) 
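# The undersaturated branch is tabulated at the highest pressure in P_list: P_max simply repeats max(P_list) once per undersaturated Bo entry, to be zipped against the saturated Rs values below.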
PVTO_us=pd.DataFrame(list(zip(Rs,P_max,Bo_us,visc_liq_us_list)), columns=['Rs(Mscf/STB)','Pressure(psia)','Bo(BBL/STB)', 'visc_liq(cp)']) #--PVTO Saturated Lines PVTO_s=pd.DataFrame(list(zip(Rs,P_list, Bo,visc_liq_list)), columns =['Rs(Mscf/STB)', 'Pressure(psia)','Bo(BBL/STB)', 'visc_liq(cp)']) #'L1_list']) PVTO= pd.concat([PVTO_s, PVTO_us],sort=False) PVTO.sort_values(by='Rs(Mscf/STB)',inplace=True) #PVTO.set_index("Rs (Mscf/STB)", inplace = True) #col,row=PVTO.shape # slash=[] # for i in L1_list: # if i<1.0: # slash.append('/') # else: # slash.append(' ') # PVTO['slash']=slash # PVTO=PVTO.drop(columns=['L1_list']) print ('--Please manually edit the PVTO to conform to Eclipse Requirements') print ('--(i) remove repetetive RS values from the first column, saturated values') print ('--(ii) Adjust the Slashes') print ('--These issues will be fixed in future versions') PVTO.to_excel('PVTO.xlsx') print (PVTO) return PVTO_s,PVTO # + def PVDG_func(P_list,Bg,visc_vap_list,L1_list): PVDG= pd.DataFrame(list(zip(P_list,Bg,visc_vap_list,L1_list)), columns =[ 'Pressure (psia)', 'Bg (BBL/Mscf)', 'visc_vap (cp)','L1_list']) PVDG.set_index("Pressure (psia)", inplace = True) BelowBP=PVDG['L1_list']<1.0 PVDG=PVDG.loc[BelowBP] col,row=PVDG.shape slash=[] for i in range (0,col,1): slash.append('/') PVDG['slash']=slash PVDG=PVDG.drop(columns=['L1_list']) PVDG.to_excel('PVDG.xlsx') print (PVDG) return PVDG def PVTO_Oil_func(Rs,P_list,Bo,visc_liq_list,L1_list): #----for Oil PVTO=pd.DataFrame(list(zip(Rs,P_list,Bo,visc_liq_list,L1_list)), columns =['Rs (Mscf/STB)', 'Pressure (psia)', 'Bo (BBL/STB)', 'visc_liq (cp)','L1_list']) PVTO.set_index("Rs (Mscf/STB)", inplace = True) col,row=PVTO.shape slash=[] for i in L1_list: if i<1.0: slash.append('/') else: slash.append(' ') PVTO['slash']=slash PVTO=PVTO.drop(columns=['L1_list']) print ('--Please manually edit the PVTO') print ('--(i) remove RS values from the first column, saturated values') print ('--(ii) And add a slash at the end of the same row') print ('--These issues will be fixed in future versions') PVTO.to_excel('PVTO.xlsx') print (PVTO) return PVTO # - def PVTO_PVDG_func(P_list,Bg,visc_vap_list,L1_list,Rs, Bo, visc_liq_list): if fluid=="OIL": PVTO=PVTO_Oil_func(Rs,P_list,Bo,visc_liq_list,L1_list) PVDG=PVDG_func(P_list,Bg,visc_vap_list,L1_list) return(PVTO,PVDG) #PVTO_PVDG_func(P_list,Bg,visc_vap_list,L1_list,Rs, Bo, visc_liq_list) def PVTO_PVTG_func(P_list,Bg,visc_vap_list,Rv,L1_list,Rs,Bo,visc_liq_list): if fluid=="GAS": PVTO_s,PVTO=PVTO_cond_func(Rs,P_list,Bo,visc_liq_list,L1_list) PVTG=PVTG_func(P_list,Rv,Bg,visc_vap_list,L1_list) return(PVTO_s,PVTO,PVTG) #PVTO_s,PVTO,PVTG= PVTO_PVTG_func(P_list,Bg,visc_vap_list,Rv,L1_list,Rs,Bo,visc_liq_list) # + if fluid=='GAS': PVTO_s,PVTO,PVTG=PVTO_PVTG_func(P_list,Bg,visc_vap_list,Rv,L1_list,Rs,Bo,visc_liq_list) plt.plot(PVTG.index,PVTG['Bg (BBL/Mscf)']) plt.ylabel ('Bg (BBL/Mscf)') plt.xlabel ('pressure (psia)') plt.show() plt.plot (PVTG.index, PVTG['Rv(STB/Mscf)']) plt.ylabel ('Rv(STB/Mscf)') plt.xlabel ('pressure (psia)') plt.show() plt.plot (PVTG.index, PVTG['visc_vap (cp)']) plt.ylabel ('visc vap (cp)') plt.xlabel ('pressure (psia)') plt.show() plt.plot(PVTO_s['Pressure(psia)'],PVTO_s['Bo(BBL/STB)']) ls='dotted' plt.ylabel ('Bo (BBL/STB)') plt.xlabel ('pressure (psia)') plt.show() plt.plot(PVTO_s['Pressure(psia)'],PVTO_s['Rs(Mscf/STB)']) plt.ylabel ('Rs (Mscf/STB)') plt.xlabel ('pressure (psia)') plt.show() plt.plot (PVTO_s['Pressure(psia)'],PVTO_s['visc_liq(cp)']) plt.ylabel ('visc liq (cp)') plt.xlabel 
('pressure (psia)') plt.show() plt.plot (P_list,L1_list) plt.ylabel ('Liquid Mole Fraction') plt.xlabel ('pressure (psia)') plt.show() # - if fluid=='OIL': PVTO,PVDG=PVTO_PVDG_func(P_list,Bg,visc_vap_list,L1_list,Rs,Bo,visc_liq_list) plt.plot(PVTO['Pressure (psia)'],PVTO['Bo (BBL/STB)']) plt.ylabel ('Bo (BBL/STB)') plt.xlabel ('pressure (psia)') plt.show() # plt.plot(PVTO['Pressure (psia)'],PVTO.index) plt.ylabel ('Rs (Mscf/STB)') plt.xlabel ('pressure (psia)') plt.show() # plt.plot (PVTO['Pressure (psia)'],PVTO['visc_liq (cp)']) plt.ylabel ('visc liq (cp)') plt.xlabel ('pressure (psia)') plt.show() # plt.plot(PVDG.index,PVDG['Bg (BBL/Mscf)']) plt.ylabel ('Bg (BBL/Mscf)') plt.xlabel ('pressure (psia)') plt.show() # plt.show() plt.plot (PVDG.index, PVDG['visc_vap (cp)']) plt.ylabel ('visc vap (cp)') plt.xlabel ('pressure (psia)') plt.show() # plt.plot (P_list,L1_list) plt.ylabel ('Liquid Mole Fraction') plt.xlabel ('pressure (psia)') plt.show() # + #------Complicated Separator---------- z1= [0.2, 0.4, 0.25, 0.15] #z1=[0.19, 0.31, 0.38, 0.12] #initializing the values moles_V1=0.1 moles_V2=0.0 moles_V3=0.0 moles_V4=0.0 moles_L1=0.1 moles_L2=0.0 moles_L3=0.0 moles_L4=0.0 output_moles=0.0 #y3 initialized as it only is available for Separator 2 in second iteration y3=[0,0,0,0] feed_sep2=[0,0,0,0] iter=1 #while abs((1+moles_L4)/(moles_L1+moles_V1)-1.0)>0.01 and iter<=100: while abs (1-output_moles)>0.01 and iter<=20: print ('iter=',iter) print ('z1=',z1) #--Sep 1--- x1,y1,L1,V1,zeta_l1,liq_v1,zeta_v1,vap_v1=workflow(1750,160, z1) moles_L1=(1+moles_L4)*L1 moles_V1=(1+moles_L4)*V1 print ('Sep 1') print('x1=',x1,) print('y1=',y1) print ('L1=',L1,'V1=',V1) print('moles_L1=',moles_L1,'moles_V1=',moles_V1) #---feed composition for Sep 2 --- for i in range (0,len(z1)): feed_sep2[i]=(y1[i]*moles_V1+y3[i]*moles_V3)/(moles_V1+moles_V3) print ('moles_V1=',moles_V1,'moles_V3=',moles_V3) print ('feed_sep2=',feed_sep2) #--Sep 2------- x2,y2,L2,V2,zeta_l2,liq_v2,zeta_v2,vap_v2=workflow(100,-150,feed_sep2) moles_V2=(moles_V1+moles_V3)*V2 moles_L2=(moles_V1+moles_V3)*L2 print ('Sep 2') print('x2=',x2) print('y2=',y2) print ('L2=',L2,'V2=',V2) print('moles_L2=',moles_L2,'moles_V2=',moles_V2) #--Sep 3 ------ x3,y3,L3,V3,zeta_l3,liq_v3,zeta_v3,vap_v3=workflow(100,70,x1) moles_V3=moles_L1*V3 moles_L3=moles_L1*L3 print ('Sep 3') print('x3=',x3) print ('y3=',y3) print ('L3=',L3,'V3=',V3) print('moles_L3=',moles_L3,'moles_V3=',moles_V3) #--Sep 4------- x4,y4,L4,V4,zeta_l4,liq_v4,zeta_v4,vap_v4=workflow(100,60,x2) moles_V4=moles_L2*V4 moles_L4=moles_L2*L4 print ('Sep 4') print('x4=',x4) print('y4=',y4) print ('L4=',L4,'V4=',V4) print('moles_L4=',moles_L4,'moles_V4=',moles_V4) output_moles=moles_V2+moles_V4+moles_L3 print ('output_moles=',output_moles) #---feed composition for Sep 1 --- for i in range (0,len(z1)): z1[i]=(z1[i]+moles_L4*x4[i])/(1+moles_L4) print ('z1=',z1) iter=iter+1 # + #P_Diff=[5000,4500,4000,3500,3000,2500,2000,1500,1000,500,100,50,14.7] P_Diff=[3000,2000,1000,15] list_liq_v1=[] list_L1=[] def DL_func(P_Diff,T_res,z1): # Calculate residual at 14.7psia and 60 Deg.F for P in P_Diff: x1,y1,L1,V1,zeta_l1,zeta_v1,liq_v1,vap_v1=workflow(P,T_res, z1) list_liq_v1.append(liq_v1) list_L1.append(L1) z1=x1 return (P_Diff,list_liq_v1,list_L1) (P_Diff,list_liq_v1,list_L1)=DL_func(P_Diff, T_res, z1) DiffLib= pd.DataFrame(list(zip(P_Diff,list_liq_v1,list_L1)), columns =[ 'Pressure (psia)','liq_v1','Liquid Mole Fraction']) DiffLib # + 
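# Quick depletion check: flash the original fluid at 2000 psia and 232 degF, feed the resulting liquid to a 1000 psia flash, and that liquid to a 14.7 psia flash -- a short, hand-rolled differential-liberation-style sequence.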
x2000,y2000,L2000,V2000,zeta_l2000,zeta_v2000,liq_v2000,vap_v2000=workflow(2000,232,z1) print('x2000=',x2000,'L2000=',L2000,'liq_v2000=',liq_v2000,'L2000=',L2000) x1000,y1000,L1000,V1000,zeta_l1000,zeta_v1000,liq_v1000,vap_v1000=workflow(1000,232,x2000) print('x1000=',x1000,'L1000=',L1000,'liq_v1000=',liq_v1000,'L1000=',L1000) x15,y15,L15,V15,zeta_l15,zeta_v15,liq_v15,vap_v15=workflow(14.7,232,x1000) print('x15=',x15,'L15=',L15,'liq_v15=',liq_v15,'L15=',L15) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Install Olliepy # %%capture # !pip install -U Olliepy # # Import pandas and olliepy import pandas as pd from olliepy import InteractiveDashboard # # Import data df = pd.read_csv('data/sales_data.csv') df.head() # # Define columns to be used in the dashbaord # + categorical_columns = ['year', 'country', 'region', 'remote_sale', 'salesman_position', 'product_type', 'product_subtype'] numerical_columns = ['latitude', 'longitude', 'number_of_sales','distance_travelled_in_KM', 'sales_amount_in_dollars','unit_price'] date_columns = ['date'] # - # # Generate dashboard using olliepy dashboard = InteractiveDashboard(title='Sales dashboard', output_directory='.', dashboard_folder_name='SALES_DASHBOARD', dataframes=[df], dataframes_names=['Sales'], numerical_columns=numerical_columns, categorical_columns=categorical_columns, date_columns=date_columns, generate_encryption_secret=False) # ## (Optional) Bin numerical features to be used in bar, row, pie, heatmap, etc. dashboard.bin_numerical_feature('number_of_sales', 'number_of_sales_binned', 10, 'n_sales') dashboard.bin_numerical_feature('distance_travelled_in_KM', 'distance_binned', 10, 'distance') # ## Create dashboard with histograms and count plots for the provided features dashboard.create_dashboard(auto_generate_distribution_plots=True) # ## Serve dashboard and display in a new tab dashboard.serve_dashboard_from_local_server(mode='server', load_existing_dashboard=False) # ## Serve dashboard and display in jupyter dashboard.serve_dashboard_from_local_server(mode='jupyter', load_existing_dashboard=False) # ## Save dashboard and zip it to share it with someone or display it locally if you are using a cloud solution dashboard.save_dashboard(zip_dashboard=True) # # Load an existing dashboard import olliepy dashboard = olliepy.load_interactive_dashboard(dashboard_path='./SALES_DASHBOARD') # ## Copy charts and number displays from an existing dashboard to a new one dashboard_v2 = InteractiveDashboard(title='Sales dashboard V2', output_directory='.', dashboard_folder_name='SALES_DASHBOARD_V2', dataframes=[df.sample(300)], dataframes_names=['Sales'], numerical_columns=numerical_columns, categorical_columns=categorical_columns, date_columns=date_columns, generate_encryption_secret=False) dashboard_v2.bin_numerical_feature('number_of_sales', 'number_of_sales_binned', 10, 'n_sales') dashboard_v2.bin_numerical_feature('distance_travelled_in_KM', 'distance_binned', 10, 'distance') dashboard_v2.update_charts(dashboard.get_charts(), keep_existing=False) dashboard_v2.update_number_displays(dashboard.get_number_displays(), keep_existing=False) dashboard_v2.create_dashboard(auto_generate_distribution_plots=False) dashboard_v2.serve_dashboard_from_local_server(mode='server', load_existing_dashboard=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: 
light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py35] # language: python # name: conda-env-py35-py # --- # ### Play with various sorts # # Evaluate various sorts. # # See [Instertion Sort example](http://www.geekviewpoint.com/python/sorting/insertionsort) # + debugging = False debugging = True logging = True def dbg(f, *args): if debugging: print((' DBG:' + f).format(*args)) def log(f, *args): if logging: print((f).format(*args)) def logError(f, *args): if logging: print(('*** ERROR:' + f).format(*args)) def className(instance): return type(instance).__name__ # - # ### The TestSet Mechanism # # %load TestHarness.py # ### The Unit Under Test #======================================================================= # Author: # Title: Insertionsort # Project: geekviewpoint # Package: algorithms # # Statement: # Given a disordered list of integers (or any other items), # rearrange the integers in natural order. # # Sample Input: [8,5,3,1,9,6,0,7,4,2,5] # Sample Output: [0,1,2,3,4,5,5,6,7,8,9] # # Time Complexity of Solution: # Best O(n); Average O(n^2); Worst O(n^2). # # Approach: # Insertion sort is good for collections that are very small # or nearly sorted. Otherwise it's not a good sorting algorithm: # it moves data around too much. Each time an insertion is made, # all elements in a greater position are shifted. #======================================================================= def insertionsort( aList ): for i in range( 1, len( aList ) ): tmp = aList[i] k = i dbg('--- i={0} tmp={1} aList={2}', i, tmp, aList) while k > 0 and tmp < aList[k - 1]: aList[k] = aList[k - 1] k -= 1 aList[k] = tmp a = [8,5,3,1,9,6,0,7,4,2,5] bubbleSort(a) a #======================================================================= # BubbleSort # # Time Complexity of Solution: # Best O(n); Average O(n^2); Worst O(n^2). # # Approach: # Bubble sort is goodf for very small collections that are almost # already sorted. #======================================================================= def bubbleSort( a ): m = len(a) - 1 swapped = True def swap(i, j): nonlocal swapped t = a[i] a[i] = a[j] a[j] = t swapped = True while swapped: swapped = False dbg('--- m={0} a={1}', m, a) for i in range( 0, m ): if a[i] > a[i+1]: swap(i, i+1) m -= 1 # Ass end is sorted already. #======================================================================= # QuickSort # # Time Complexity of Solution: # Best O(n); Average O(n*logn); Worst O(n^2). # # Approach: # Quick sort is good for non pathological lists #======================================================================= def quickSort( a ): def sortPartition(i, j, name): def swap(i, j): t = a[i] a[i] = a[j] a[j] = t # A single element is always sorted. if j - i < 1: return # Any two elements are easily sorted. if j - i == 1: if a[i] > a[j]: swap(i, j) return pivot = a[i] # Select an arbitrary value. 
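# Partition note for the code that follows: leftFence scans right past values <= pivot and
# rightFence scans left past values >= pivot; if the fences have not crossed, the two
# out-of-place values are swapped, and the pivot is finally swapped to rightFence so that
# the sub-slices on either side of it can be sorted recursively.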
leftFence = i+1 # Exclude the pivot value rightFence = j while leftFence <= rightFence and a[leftFence] <= pivot : leftFence += 1 while leftFence <= rightFence and a[rightFence] >= pivot : rightFence -= 1 if leftFence < rightFence: swap(leftFence, rightFence) swap(i, rightFence) # Relocate the pivot to the center sortPartition(i, rightFence-1, 'left') sortPartition(rightFence+1, j, 'right') sortPartition(0, len(a)-1, 'starting') a = [5,8,3,1,9,6,0,213,1,2,41,1,423,1,21,3,1,22,1,2,4,6,33,1,7,4,2] quickSort(a) a a = [2, 1, 1, 1, 1, 2] quickSort(a) a # ### The Test Cases # + #simpletest = lambda A : [spiral_square(A)] def simpletest(A): l = [x for x in spiral_square2(A)] return l A2 = [[1, 2], [3, 4]] A3 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]] A4 = [[ 1, 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12], [13, 14, 15, 16]] A4A = ['good men to come'.split(), 'all now is to'.split(), 'for time the the'.split(), 'country their of aid'.split()] A5 = [[1, 2, 3, 4, 100], [5, 6, 7, 8, 200], [9, 10, 11, 12, 300], [13, 14, 15, 16, 400], [17, 18, 19, 20, 500]] c0 = TestCase('0x0', simpletest, [ [] ], []) c1 = TestCase('1x1', simpletest, [ [[1]] ], [1]) c2 = TestCase('2x2', simpletest, [A2], [1, 2, 4, 3]) c3 = TestCase('3x3', simpletest, [A3], [5, 4, 1, 2, 3, 6, 9, 8, 7]) c4 = TestCase('4x4', simpletest, [A4], [6, 7, 11, 10, 9, 5, 1, 2, 3, 4, 8, 12, 16, 15, 14, 13]) c4a = TestCase('4x4 words', simpletest, [A4A], 'now is the time for all good men to come to the aid of their country'.split()) tester = TestSet([c0, c1, c2, c3, c4, c4a]) # - tester.run_tests() # ### Some Ad Hoc Tests [x for x in spiral_square(A3)] (.33333333333333333333333333333).as_integer_ratio() (1/3).as_integer_ratio() from fractions import Fraction Fraction(0.333333333333333).limit_denominator() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7 - Spark (local) # language: python # name: spark-3-python # --- # # Data preparation for the SDG Indicators by (1) UN and (2) WorldBank # ## (1) UN data set # # **We use UN SDG's data set and convert this data set, so every country, continent, etc. is in a separate csv file.** # # To get started, we download the entire available data from https://unstats.un.org/sdgs/indicators/database/ and call it un_data.csv. # # # Let's load the data set and look at its columns and rows to figure out how it is structured. # # # **We aim to have one pandas data frame per country, with all indicators. We save them as separate csv files.** # Let's start with the usual imports and loading the data set. import numpy as np import pandas as pd import math import os import pickle import copy import matplotlib.pyplot as plt from sklearn.preprocessing import scale import warnings warnings.filterwarnings('ignore') # loading data set all_data = pd.read_csv('utils/data/data.csv', dtype=object) # the percentage of targets we have data far print(round(len(all_data.Target.unique())/169, 3)*100, '%') # UN data for SDG 13 SDG13_data = pd.read_csv('utils/SDG13_data.csv', dtype=object) # check SDG13_data.head() # take out data not belonging to SDG 13 sdg13_data = SDG13_data[SDG13_data['Goal']=='13'] # The data set is structured by indicators and years in rows in one large data frame with all countries. We would like to have one data frame per country. Hence, we first extract the names of *regional groupings*, i.e. countries, continents, etc., and the names of so-called *other groupings*. 
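# (As an aside, a minimal sketch of the eventual per-country split, assuming only the
# `GeoAreaName` column used throughout this notebook; the actual split is done further
# below with `isin`, and `country_frames` here is a hypothetical name.)
country_frames = {name: grp for name, grp in all_data.groupby('GeoAreaName')}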
# # According to the UN Statistics Division, other groupings include Least Developed Countries (LDC), Land Locked Developing Countries (LLDC), Small Island Developing States (SIDS), Developed Regions, and Developing Regions. # # Developing Regions are Latin America and the Caribbean, South-Eastern Asia, Southern Asia, Southern Asia (excluding India), Caucasus and Central Asia, Eastern Asia (excluding Japan and China), Western Asia (exc. Armenia, Azerbaijan, Cyprus, Israel and Georgia), Eastern Asia (excluding Japan), Oceania (exc. Australia and New Zealand), Sub-Saharan Africa (inc. Sudan), and Northern Africa (exc. Sudan). # # **All these groupings can be subject to separate network analyses of the indicators later on.** # # # # # Let's first see all different columns of our data frame before we only see these different groupings. list(all_data) # We have even lots of information on a sub-indicator level and this might be subject to more detailed analyses later on. We could, e.g., indicator 4.6.1* explore by disparate age goups and by sex. # # \* *Indicator 4.6.1: Proportion of population in a given age group achieving at least a fixed level of proficiency in functional (a) literacy and (b) numeracy skills, by sex.* # # # We keep this possibility open, but now, let's not go further into a sub-indicator level and see the different groupings. groupings_UN = list(SDG13_data['GeoAreaName'].unique()) groupings_UN #all_data.replace({"Democratic People's Republic of Korea": "Korea, Dem. People's Rep.", 'Gambia': 'Gambia, The', 'United Kingdom of Great Britain and Northern Ireland': 'United Kingdom', 'Congo': 'Congo, Rep.', 'Democratic Republic of the Congo': 'Congo, Dem. Rep.', 'Czechia': 'Czech Republic', 'Iran (Islamic Republic of)': 'Iran, Islamic Rep.', "Côte d'Ivoire": "Cote d'Ivoire", 'Kyrgyzstan': 'Kyrgyz Republic', "Lao People's Democratic Republic": 'Lao PDR', 'Republic of Moldova': 'Moldova', 'Micronesia (Federated States of)': 'Micronesia, Fed. Sts.', 'Slovakia': 'Slovak Republic', 'Viet Nam': 'Vietnam', 'Egypt': 'Egypt, Arab Rep.', 'United Republic of Tanzania': 'Tanzania','United States of America': 'United States', 'Venezuela (Bolivarian Republic of)': 'Venezuela, RB', 'Yemen': 'Yemen, Rep.', 'Bahamas': 'Bahamas, The', 'Bolivia (Plurinational State of)': 'Bolivia'}, inplace=True) sdg13_data.replace({"Republic of Korea": "Korea, Rep.", "Democratic People's Republic of Korea": "Korea, Dem. People's Rep.", 'Gambia': 'Gambia, The', 'United Kingdom of Great Britain and Northern Ireland': 'United Kingdom', 'Congo': 'Congo, Rep.', 'Democratic Republic of the Congo': 'Congo, Dem. Rep.', 'Czechia': 'Czech Republic', 'Iran (Islamic Republic of)': 'Iran, Islamic Rep.', "Côte d'Ivoire": "Cote d'Ivoire", 'Kyrgyzstan': 'Kyrgyz Republic', "Lao People's Democratic Republic": 'Lao PDR', 'Republic of Moldova': 'Moldova', 'Micronesia (Federated States of)': 'Micronesia, Fed. 
Sts.', 'Slovakia': 'Slovak Republic', 'Viet Nam': 'Vietnam', 'Egypt': 'Egypt, Arab Rep.', 'United Republic of Tanzania': 'Tanzania','United States of America': 'United States', 'Venezuela (Bolivarian Republic of)': 'Venezuela, RB', 'Yemen': 'Yemen, Rep.', 'Bahamas': 'Bahamas, The', 'Bolivia (Plurinational State of)': 'Bolivia'}, inplace=True) # only take World Bank countries c = pd.read_csv('utils/countries_wb.csv', dtype=str, delimiter=';', header=None) countries = list(c[0]) countries # + # list of keys to delete delete_groups = [] for g in list(groupings_UN): if g not in countries: delete_groups.append(g) # delete for dg in delete_groups: groupings_UN.remove(dg) # - # loading World Bankd data set wb_data = pd.read_csv('utils/data/wb_data.csv', dtype=object) wb_data.drop(wb_data.tail(5).index,inplace=True) # 5 last rows are blank / have other info wb_data.tail() # + columns = list(wb_data.columns) for column in columns[4:]: columns.append(column[:4]) columns.remove(column) wb_data.columns = columns # - years = columns[4:] # Let's now save a data frame with all of the meta-information. We delete the columns which are specific in area and time, and of course we do not want to have the values in this data frame. In the end, we delete all duplicate entries in the column **SeriesCode**. So, we are left with the information we wanted: mapping the series codes to the indicators, the Source for the data, the Units measured in, etc. info = sdg13_data.drop(columns=['GeoAreaCode', 'GeoAreaName', 'TimePeriod', 'Value', 'Time_Detail']).drop_duplicates(subset=['Indicator', 'SeriesCode']) # check info.head() # list of all series codes of SDG 13 seriescodes_13 = set(list(info['SeriesCode'])) seriescodes_13 # count how many we have len(seriescodes_13) # We convert the data set into multiple small data sets by creating a dictionary that contains the groupings' names as keys. # # First, we create empty data frames for each key. #dict_all = {c: pd.DataFrame() for c in countries} dict_13 = {c: pd.DataFrame() for c in countries} # check, should be empty #dict_all.get('Belize') dict_13.get('Belize') # Second, we replace each of the empty data frames with the data we have available for them. Note, that our dictionary will be the ensamble of all groupings. for c in countries: # memory-intensive #dict_all[c] = all_data[all_data['GeoAreaName'].isin(['{}'.format(c)])] dict_13[c] = sdg13_data[sdg13_data['GeoAreaName'].isin(['{}'.format(c)])] # check dict_13['France'] # Now, we have one data frame per country. The next step is to have years as columns. # # The next cell gives us the series codes in the rows and the years in the columns. These series codes are unique descriptions of the sub-indicators and we match these series codes to indicators and all other information in a different data frame. for c in countries: #dict_all[c] = dict_all.get(c).pivot_table(values='Value', index='SeriesCode', columns='TimePeriod', dropna=False, aggfunc='first') if c not in groupings_UN: dict_13[c] = pd.DataFrame(index=seriescodes_13, columns=years) else: dict_13[c] = dict_13.get(c).pivot_table(values='Value', index='SeriesCode', columns='TimePeriod', dropna=False, aggfunc='first') # check dict_13['France'] # ## Cleaning up and transforming all country data frames into the same dimensions # # We have a couple of things to do to make our data frames workable: # 1. We have some values in the data frames which we do not want, as e.g. ,, = , N, etc. We replace them with appropriate values, i.e. 0, or simply a space. # 2. 
Some data frames have data from **1990** to **2018**, some others from **1992** to **2018**. We want to have all data frames having data from **1990** to **2018**, i.e. an equal amount of columns. The additional columns are filled with NaNs. # 3. Some data frames have not all indicators and sub-indicators listed, but we would like to have all of them in all data frames. These additional rows are filled with NaNs. # Let's start with the first task, i.e. cleaning up the data frames. # # We first need to define lists for all years, i.e. **1990** to **2018** and all indicators and sub-indicators, i.e. series codes. # Change 'Haiti' in the cell below to a few other countries and you'll see that they can have different lengths. We need to bring all on the same length. We agree on having data for the **years 1990 to 2019**. # # Now, we insert the missing years for all groupings. We want to have NaNs in those columns. # example list(dict_13['Germany']) # Firstly, we insert the missing years as columns for all groupings. for c in countries: # memory-intensive for year in years: if year not in list(dict_13[c]): dict_13[c]['{}'.format(year)] = np.nan # having the years in order dict_13[c] = dict_13[c][years] # check dict_13['Germany'] # Secondly, we insert the missing series codes as rows. # # Let's first see how many rows do we have for France ? len(list(dict_13['Nicaragua'].index)) # Let's have all $J$ sub-indicators we want for each country as rows. We fill these rows with NaNs. for c in countries: for seriescode in seriescodes_13: if seriescode not in list(dict_13[c].index): dict_13[c].loc[seriescode] = np.nan # fill these rows with NaNs # check: do we have J many? len(list(dict_13['Nicaragua'].index)) # convert all to floats for c in countries: for year in years: for seriescode in seriescodes_13: if not isinstance(dict_13[c].loc[seriescode, year], float): dict_13[c].loc[seriescode, year] = float(dict_13[c].loc[seriescode, year].replace(',', '').replace('<', '').replace('>', '').replace('=', '').replace('N', '0').replace(' - ', '0').replace('0V', '0').replace('. . .', '0')) # double-check: are all series codes as rows? len(list(dict_13['Nicaragua'].index)) # Finally, we can save all countries as different csv files and as one `dict`. if not os.path.exists('csv_original'): os.mkdir('csv_original') for c in countries: dict_13[c].to_csv(r'csv_original/{}.csv'.format(c)) # Having the information file might also be helpful. # + if not os.path.exists('utils'): os.mkdir('utils') info.to_csv(r'utils/info.csv') # - # as one pickle file dict13 = open('utils/data/dict_13.pkl', 'wb') pickle.dump(dict_13, dict13) dict13.close() # as one pickle file dictall = open('utils/data/dict_all.pkl', 'wb') pickle.dump(dict_all, dictall) dictall.close() # CHECKPOINT dictall = open('utils/data/dict_all.pkl', 'rb') dict_all = pickle.load(dictall) dictall.close() # ## Visualising time-series # # We quickly visualise the time-series to get a better idea of the characteristics of our data set. 
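# Before the plots, one aside on the element-wise float conversion performed above: the same
# cleaning can be done column-wise. A hypothetical vectorised sketch (note that it coerces
# markers such as 'N' or '. . .' to NaN rather than 0, so it illustrates the idea and is not
# a drop-in replacement):
cleaned = dict_13['France'].astype(str).replace(r'[,<>=]', '', regex=True)
cleaned = cleaned.apply(pd.to_numeric, errors='coerce')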
# + f, (ax, ax2) = plt.subplots(2, 1, sharex=True) ax2.plot(list(range(2000, 2020)), dict_all['Bolivia'].loc['DC_ODA_BDVL'], color='#42B24C', linewidth=5) ax.plot(list(range(2000, 2020)), dict_all['Bolivia'].loc['DC_TRF_TOTL'], color='#DE0E68', linewidth=5) ax2.set_ylim(0, 251) # biodiversity ODA ax.set_ylim(250, 2501) # total ODA # hide the spines between ax and ax2 ax.spines['bottom'].set_visible(False) ax2.spines['top'].set_visible(False) ax.xaxis.tick_top() ax.tick_params(labelsize=20, labeltop='off') # don't put tick labels at the top ax2.xaxis.tick_bottom() plt.xticks(np.arange(2000, 2019, step=2), size=20) ax2.tick_params(labelsize=20) f.set_figheight(8) f.set_figwidth(12) plt.show() # - plt.figure(figsize=(12,8)) plt.plot(list(range(2000, 2020)), dict_all['Bolivia'].loc['DC_ODA_BDVL'], color='#42B24C', linewidth=5) plt.xticks(np.arange(2000, 2019, step=2), size=20) plt.yticks(np.arange(0, 251, step=25), size=20) plt.show() plt.figure(figsize=(12,8)) plt.plot(list(range(2000, 2020)), dict_all['Bolivia'].loc['DC_TRF_TOTL'], color='#DE0E68', linewidth=5) plt.xticks(np.arange(2000, 2019, step=2), size=20) plt.yticks(np.arange(0, 2501, step=250), size=20) plt.show() plt.figure(figsize=(12,8)) plt.yticks(np.arange(0, 2001, step=250), size=20) plt.xticks(np.arange(0, 251, step=25), size=20) plt.plot(dict_all['Bolivia'].loc['DC_ODA_BDVL'], dict_all['Bolivia'].loc['DC_TRF_TOTL'], '--bo'); #, s=100, color='black') # ## Data standardisation # We have saved the original data set, but it is often useful to have the data standardised. Standardising a data set involves rescaling the distribution of values so that the mean of observed values is 0 and the standard deviation is 1. Standardisation is often required by machine learning algorithms when your time series data has input values with differing scales. # # We create a new dictionary `dict_all_std` to keep the standardised values separate to the original ones. # CHECKPOINT (we don't want to re-run the entire script every time we continue working on it) dict_all = pickle.load(open('utils/data/dict_all.pkl', 'rb')) dict_all_std = pickle.load(open('utils/data/dict_all_std.pkl', 'rb')) # + # ~15 minutes computing time #dict_all_std = copy.deepcopy(dict_all) dict_13_std = copy.deepcopy(dict_13) for country in countries: for seriescode in seriescodes_13: dict_13_std[country].loc[seriescode] = scale(dict_13[country].loc[seriescode]) #dict_all_std[country].loc[seriescode] = scale(dict_all[country].loc[seriescode]) # - #check print('Original value', dict_13['France'].loc['VC_DSR_DAFF']) print('-------') print('Standardised value', dict_13_std['France'].loc['VC_DSR_DAFF']) # We better save `dict_all_std`... # + # as csv files per grouping if not os.path.exists('csv_standardised'): os.mkdir('csv_standardised') for group in groupings: dict_13_std[group].to_csv(r'csv_standardised/{}.csv'.format(group)) # as one pickle file stand = open('utils/data/dict_13_std.pkl', 'wb') pickle.dump(dict_all_std, stand) stand.close() # - # ## (2) World Bank data set # # **We use World Bank's data set and convert this data set, so every country, continent, etc. is in a separate csv file.** # # To get started, we download the entire available data from http://datatopics.worldbank.org/sdgs/ and call it wb_data.csv. # # # Let's load the data set and look at its columns and rows to figure out how it is structured. # # # **We aim to have one pandas data frame per country, with all indicators. 
We save them as separate csv files.** # + # only execute when new indicators are added -> call new metadata file wb_info_new.csv wb_info_new = pd.read_csv('utils/wb_info.csv', header=None, dtype=object) print(len(wb_info_new)) # - # the percentage of targets we have data far print(round(len(wb_info_new[4].unique())/169, 4)*100, '%') # are any indicators double? wb_info_new[wb_info_new.duplicated(subset=[2])==True] # drop this indicator wb_info_new.drop_duplicates(subset=[2], inplace=True) print(len(wb_info_new)) # + i = 0 for code in wb_info_new[0]: if code not in list(wb_info[0]): print(code) i += 1 print() print('# added indicators:', i) print() j = 0 for code in wb_info[0]: if code not in list(wb_info_new[0]): print(code) j += 1 print() print('# deleted indicators:', j) # - all_countries = list(wb_data['Country Name'].unique()) # save countries np.savetxt('utils/countries_wb_all.csv', all_countries, delimiter=';', fmt='%s') dict_all_wb = {country: pd.DataFrame() for country in countries} for country in countries: print(country) dict_all_wb[country] = wb_data[wb_data['Country Name'].isin(['{}'.format(country)])] dict_all_wb[country] = dict_all_wb[country].drop(columns=['Country Name', 'Country Code', 'Series Name']) dict_all_wb[country].set_index('Series Code', inplace=True) dict_all_wb[country] = pd.concat([dict_all_wb[country], dict_13[country]]) # adding series codes for SDG 13 dict_all_wb[country] = dict_all_wb[country].replace('..', np.nan).astype(float) dict_all_wb[country] = dict_all_wb[country].drop(index='DT.ODA.ODAT.CD') seriescodes_wb = set(list(dict_all_wb['Germany'].index)) # + # saving data for country in countries: dict_all_wb[country].to_csv(r'csv_original/{}_wb.csv'.format(country)) # as one pickle file dictall = open('utils/data/dict_all_wb.pkl', 'wb') pickle.dump(dict_all_wb, dictall) dictall.close() # - # ## Data standardisation # + dict_all_wb_std = copy.deepcopy(dict_all_wb) for country in countries: for seriescode in seriescodes_wb: # adding noise as representative for measurement errors #noise = np.random.normal(scale=0.1, size=len(dict_all_wb[country].loc[seriescode])) #dict_all_wb[country].loc[seriescode] = dict_all_wb[country].loc[seriescode] + noise dict_all_wb_std[country].loc[seriescode] = scale(dict_all_wb[country].loc[seriescode]) # - #check print('Original value', dict_all_wb['Belgium'].loc['ER.H2O.FWTL.ZS']) print('-------') print('Standardised value', dict_all_wb_std['Belgium'].loc['ER.H2O.FWTL.ZS']) # We better save `dict_all_wb_std`. # + # as csv files per grouping if not os.path.exists('csv_standardised'): os.mkdir('csv_standardised') for country in countries: dict_all_wb_std[country].to_csv(r'csv_standardised/{}_wb.csv'.format(country)) # as one pickle file stand = open('utils/data/dict_all_wb_std.pkl', 'wb') pickle.dump(dict_all_wb_std, stand) stand.close() # - // --- // jupyter: // jupytext: // text_representation: // extension: .groovy // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: Groovy // language: groovy // name: groovy // --- // ## StackView Op // + //load ImageJ // %classpath config resolver scijava.public https://maven.scijava.org/content/groups/public // %classpath add mvn net.imagej imagej 2.0.0-rc-67 //create ImageJ object ij = new net.imagej.ImageJ() // - // This `Op` wraps the `Views.stack()` method of ImgLib2, taking multiple n-dimensional `RandomAccessibleInterval`s and stacking them to form a single (n+1)-dimensional `RandomAccessibleInterval`. 
ij.op().help('stackView') // We are interested in the first of the two, for this notebook. To create the stack we need two `Img`s, which we can create using [`equation`](../image/equation.ipynb) // + import net.imglib2.FinalInterval import net.imglib2.img.Img import net.imglib2.type.numeric.integer.UnsignedByteType dims = new FinalInterval(200, 150) imgList = new ArrayList() input1 = ij.op().create().img(dims, new UnsignedByteType()) equation1 = "127 * Math.sin(p[0] / 4) + 128" ij.op().run("equation", input1, equation1) input2 = ij.op().create().img(dims, new UnsignedByteType()) equation2 = "127 * Math.cos(p[1] / 4) + 128" ij.op().run("equation", input2, equation2) imgList.add(input1) imgList.add(input2) //TODO show side by side // - // We can stack them by passing both into `Views.stack()`: // + import net.imglib2.view.Views stack = ij.op().run("stackView", imgList) ij.notebook().display(stack) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # default_exp text.data.question_answering # default_cls_lvl 3 # - # all_slow #hide # %reload_ext autoreload # %autoreload 2 # %matplotlib inline # # text.data.question_answering # # > Question/Answering tasks are models that require two text inputs (a context that includes the answer and the question). The objective is to predict the start/end tokens of the answer in the context). This module contains the bits required to use the fastai DataBlock API and/or mid-level data processing pipelines to organize your data for question/answering tasks. # + # export import ast from functools import reduce from datasets import Dataset from fastcore.all import * from fastai.data.block import DataBlock, CategoryBlock, ColReader, ColSplitter from fastai.imports import * from fastai.losses import CrossEntropyLossFlat from fastai.torch_core import * from fastai.torch_imports import * from transformers import AutoModelForQuestionAnswering, PretrainedConfig, PreTrainedTokenizerBase, PreTrainedModel, logging from blurr.text.data.core import TextInput, BatchTokenizeTransform, Preprocessor, first_blurr_tfm from blurr.text.utils import get_hf_objects logging.set_verbosity_error() # + # hide_input import pdb from datasets import load_dataset from fastai.data.core import DataLoader, DataLoaders, TfmdDL from fastai.data.external import untar_data, URLs from fastai.data.transforms import * from fastcore.test import * from nbdev.showdoc import show_doc from blurr.utils import print_versions from blurr.text.data.core import TextBlock from blurr.text.utils import BlurrText NLP = BlurrText() os.environ["TOKENIZERS_PARALLELISM"] = "false" pd.set_option("display.max_colwidth", 100) print("What we're running with at the time this documentation was generated:") print_versions("torch fastai transformers") # - # hide # cuda torch.cuda.set_device(1) print(f"Using GPU #{torch.cuda.current_device()}: {torch.cuda.get_device_name()}") # ## Setup # # We'll use a subset of `squad_v2` to demonstrate how to configure your blurr code for extractive question answering raw_datasets = load_dataset("squad_v2", split=["train[:1000]", "validation[:200]"]) raw_train_ds, raw_valid_ds = raw_datasets[0], raw_datasets[1] # + raw_train_df = pd.DataFrame(raw_train_ds) raw_valid_df = pd.DataFrame(raw_valid_ds) raw_train_df["is_valid"] = False raw_valid_df["is_valid"] = True print(len(raw_train_df)) print(len(raw_valid_df)) # - 
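# A minimal sanity-check sketch: in `squad_v2` the raw character offsets should line up with
# the answer text inside the context, which is exactly what the preprocessing below relies on.
first = raw_train_df.iloc[0]
if len(first["answers"]["text"]) > 0:
    start_char = first["answers"]["answer_start"][0]
    answer_text = first["answers"]["text"][0]
    assert first["context"][start_char:start_char + len(answer_text)] == answer_text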
raw_train_df.head(2) raw_valid_df.head(2) squad_df = pd.concat([raw_train_df, raw_valid_df]) len(squad_df) # + squad_df["ans_start_char_idx"] = squad_df.answers.apply(lambda v: v["answer_start"][0] if len(v["answer_start"]) > 0 else "0") squad_df["answer_text"] = squad_df.answers.apply(lambda v: v["text"][0] if len(v["text"]) > 0 else "") squad_df["ans_end_char_idx"] = squad_df["ans_start_char_idx"].astype(int) + squad_df["answer_text"].str.len() print(len(squad_df)) squad_df[squad_df.is_valid == True].head(2) # + model_cls = AutoModelForQuestionAnswering pretrained_model_name = "roberta-base" #'xlm-mlm-ende-1024' hf_arch, hf_config, hf_tokenizer, hf_model = get_hf_objects(pretrained_model_name, model_cls=model_cls) max_seq_len = 128 vocab = dict(enumerate(range(max_seq_len))) # - # ## Preprocessing # # With version 2.0.0 of `BLURR`, we include a `Preprocessor` for question answering that can either truncate texts or else chunk long documents into multiple examples. # # **Note**: Unlike other NLP tasks in BLURR, extractive question answering ***requires*** preprocessing in order to convert our raw start/end character indices into start/end token indices unless your dataset includes the later. Token indicies, rather than character indices, will be used as our targets and are dependent on your tokenizer of choice. # export class QAPreprocessor(Preprocessor): def __init__( self, # A Hugging Face tokenizer hf_tokenizer: PreTrainedTokenizerBase, # The number of examples to process at a time batch_size: int = 1000, # The unique identifier in the dataset. If not specified and "return_overflowing_tokens": True, an "_id" attribute # will be added to your dataset with its value a unique, sequential integer, assigned to each record id_attr: Optional[str] = None, # The attribute in your dataset that contains the context (where the answer is included) (default: 'context') ctx_attr: str = "context", # The attribute in your dataset that contains the question being asked (default: 'question') qst_attr: str = "question", # The attribute in your dataset that contains the actual answer (default: 'answer_text') ans_attr: str = "answer_text", # The attribute in your dataset that contains the actual answer (default: 'answer_text') ans_start_char_idx: str = "ans_start_char_idx", # The attribute in your dataset that contains the actual answer (default: 'answer_text') ans_end_char_idx: str = "ans_end_char_idx", # The attribute that should be created if your are processing individual training and validation # datasets into a single dataset, and will indicate to which each example is associated is_valid_attr: Optional[str] = "is_valid", # Tokenization kwargs that will be applied with calling the tokenizer (default: {"return_overflowing_tokens": True}) tok_kwargs: dict = {"return_overflowing_tokens": True}, ): # these values are mandatory tok_kwargs = {**tok_kwargs, "return_offsets_mapping": True} # shift the question and context appropriately based on the tokenizers padding strategy if hf_tokenizer.padding_side == "right": tok_kwargs["truncation"] = "only_second" text_attrs = [qst_attr, ctx_attr] else: tok_kwargs["truncation"] = "only_first" text_attrs = [ctx_attr, qst_attr] super().__init__(hf_tokenizer, batch_size, text_attr=text_attrs[0], text_pair_attr=text_attrs[1], tok_kwargs=tok_kwargs) store_attr() def process_df(self, training_df: pd.DataFrame, validation_df: Optional[pd.DataFrame] = None): df = super().process_df(training_df, validation_df) # a unique Id for each example is required to properly score 
question answering results when chunking long # documents (e.g., return_overflowing_tokens=True) chunk_docs = self.tok_kwargs.get("return_overflowing_tokens", False) max_length = self.tok_kwargs.get("max_length", self.hf_tokenizer.model_max_length) if self.id_attr is None and chunk_docs: df.insert(0, "_id", range(len(df))) # process df in mini-batches final_df = pd.DataFrame() for g, batch_df in df.groupby(np.arange(len(df)) // self.batch_size): final_df = final_df.append(self._process_df_batch(batch_df, chunk_docs, max_length)) final_df.reset_index(drop=True, inplace=True) return final_df def process_hf_dataset(self, training_ds: Dataset, validation_ds: Optional[Dataset] = None): ds = super().process_hf_dataset(training_ds, validation_ds) return Dataset.from_pandas(self.process_df(pd.DataFrame(ds))) # ----- utility methods ----- def _process_df_batch(self, batch_df, is_chunked, max_length): batch_df.reset_index(drop=True, inplace=True) # grab our inputs inputs = self._tokenize_function(batch_df.to_dict(orient="list")) offset_mapping = inputs.pop("offset_mapping") sample_map = inputs.pop("overflow_to_sample_mapping", batch_df.index.tolist()) proc_data = [] for idx, offsets in enumerate(offset_mapping): example_idx = sample_map[idx] row = batch_df.iloc[example_idx] input_ids = inputs["input_ids"][idx] seq_ids = inputs.sequence_ids(idx) # get question and context associated with the inputs at "idx" qst_mask = [i != 1 if self.hf_tokenizer.padding_side == "right" else i != 0 for i in seq_ids] qst_offsets = [offsets[i] for i, is_qst in enumerate(qst_mask) if is_qst and seq_ids[i] is not None] ctx_offsets = [offsets[i] for i, is_qst in enumerate(qst_mask) if not is_qst and seq_ids[i] is not None] proc_qst = row[self.qst_attr][min(qst_offsets)[0] : max(qst_offsets)[1]] proc_ctx = row[self.ctx_attr][min(ctx_offsets)[0] : max(ctx_offsets)[1]] # if we are chunking long documents, we need to tokenize the chunked question, context in order to correctly assign # the start/end token indices, else we can just the above since we are only looking at one example at a time if is_chunked: chunk_texts = (proc_qst, proc_ctx) if self.hf_tokenizer.padding_side == "right" else (proc_ctx, proc_qst) chunk_inputs = self.hf_tokenizer(chunk_texts[0], chunk_texts[1]) chunk_input_ids = chunk_inputs["input_ids"] chunk_qst_mask = [i != 1 if self.hf_tokenizer.padding_side == "right" else i != 0 for i in chunk_inputs.sequence_ids()] else: chunk_input_ids, chunk_qst_mask = input_ids, qst_mask # lastly we iterate over the input tokens to see if we can fine the answer tokens within (ignoring the input tokens # belonging to the "question" as we only want to find answers that exist in the "context") tok_input = self.hf_tokenizer.convert_ids_to_tokens(chunk_input_ids) tok_ans = self.hf_tokenizer.tokenize(str(row[self.ans_attr])) start_idx, end_idx = 0, 0 for idx, (tok, is_qst_tok) in enumerate(zip(tok_input, chunk_qst_mask)): try: if is_qst_tok == False and tok == tok_ans[0] and tok_input[idx : idx + len(tok_ans)] == tok_ans: # ensure we are within the max_length last_idx = idx + len(tok_ans) if last_idx < max_length: start_idx, end_idx = idx, idx + len(tok_ans) break except: pass # update the oringal example information with the processed question, context, start/end "token" indices, and # a boolean indicating whether the question is answerable overflow_row = row.copy() overflow_row[f"proc_{self.qst_attr}"] = proc_qst overflow_row[f"proc_{self.ctx_attr}"] = proc_ctx overflow_row["ans_start_token_idx"] = start_idx 
overflow_row["ans_end_token_idx"] = end_idx overflow_row["is_answerable"] = start_idx != 0 and end_idx != 0 proc_data.append(overflow_row) return pd.DataFrame(proc_data) # #### How to preprocess your data # + tok_kwargs = {"return_overflowing_tokens": True, "max_length": max_seq_len, "stride": 64} preprocessor = QAPreprocessor(hf_tokenizer, id_attr="id", tok_kwargs=tok_kwargs) proc_df = preprocessor.process_df(squad_df) print(len(proc_df)) proc_df.head(4) # - sampled_df = proc_df.sample(n=10) for row_idx, row in sampled_df.iterrows(): test_example = row inputs = hf_tokenizer(row.proc_question, row.proc_context) if test_example.is_answerable: # print(test_example.answer_text) test_eq( test_example.answer_text, hf_tokenizer.decode(inputs["input_ids"][test_example.ans_start_token_idx : test_example.ans_end_token_idx]).strip(), ) else: test_eq(test_example.ans_start_token_idx, 0) test_eq(test_example.ans_end_token_idx, 0) # If you want to remove texts longer than your model will hold (and include only answerable contexts) # + preprocessor = QAPreprocessor(hf_tokenizer, tok_kwargs={"return_overflowing_tokens": False, "max_length": max_seq_len}) proc2_df = preprocessor.process_df(squad_df) proc2_df = proc2_df[(proc2_df.ans_end_token_idx < max_seq_len) & (proc2_df.is_answerable)] print(len(proc2_df)) proc2_df.head(2) # - # ## Mid-level API # ### `QATextInput` - # export class QATextInput(TextInput): pass # ### `QABatchTokenizeTransform` - # export class QABatchTokenizeTransform(BatchTokenizeTransform): def __init__( self, # The abbreviation/name of your Hugging Face transformer architecture (e.b., bert, bart, etc..) hf_arch: str, # A specific configuration instance you want to use hf_config: PretrainedConfig, # A Hugging Face tokenizer hf_tokenizer: PreTrainedTokenizerBase, # A Hugging Face model hf_model: PreTrainedModel, # To control whether the "labels" are included in your inputs. If they are, the loss will be calculated in # the model's forward function and you can simply use `PreCalculatedLoss` as your `Learner`'s loss function to use it include_labels: bool = True, # The token ID that should be ignored when calculating the loss ignore_token_id=CrossEntropyLossFlat().ignore_index, # To control the length of the padding/truncation. It can be an integer or None, # in which case it will default to the maximum length the model can accept. If the model has no # specific maximum input length, truncation/padding to max_length is deactivated. # See [Everything you always wanted to know about padding and truncation](https://huggingface.co/transformers/preprocessing.html#everything-you-always-wanted-to-know-about-padding-and-truncation) max_length: int = None, # To control the `padding` applied to your `hf_tokenizer` during tokenization. If None, will default to # `False` or `'do_not_pad'. # See [Everything you always wanted to know about padding and truncation](https://huggingface.co/transformers/preprocessing.html#everything-you-always-wanted-to-know-about-padding-and-truncation) padding: Union[bool, str] = True, # To control `truncation` applied to your `hf_tokenizer` during tokenization. If None, will default to # `False` or `do_not_truncate`. # See [Everything you always wanted to know about padding and truncation](https://huggingface.co/transformers/preprocessing.html#everything-you-always-wanted-to-know-about-padding-and-truncation) truncation: Union[bool, str] = "only_second", # The `is_split_into_words` argument applied to your `hf_tokenizer` during tokenization. 
Set this to `True` # if your inputs are pre-tokenized (not numericalized) is_split_into_words: bool = False, # Any other keyword arguments you want included when using your `hf_tokenizer` to tokenize your inputs. tok_kwargs: dict = {}, # Keyword arguments to apply to `BatchTokenizeTransform` **kwargs ): # "return_special_tokens_mask" and "return_offsets_mapping" are mandatory for extractive QA in blurr tok_kwargs = {**tok_kwargs, **{"return_special_tokens_mask": True, "return_offsets_mapping": True}} super().__init__( hf_arch, hf_config, hf_tokenizer, hf_model, include_labels=include_labels, ignore_token_id=ignore_token_id, max_length=max_length, padding=padding, truncation=truncation, is_split_into_words=is_split_into_words, tok_kwargs=tok_kwargs, **kwargs ) def encodes(self, samples, return_batch_encoding=False): updated_samples, batch_encoding = super().encodes(samples, return_batch_encoding=True) for idx, s in enumerate(updated_samples): # cls_index: location of CLS token (used by xlnet and xlm); is a list.index(value) for pytorch tensor's s[0]["cls_index"] = (s[0]["input_ids"] == self.hf_tokenizer.cls_token_id).nonzero()[0] # p_mask: mask with 1 for token than cannot be in the answer, else 0 (used by xlnet and xlm) s[0]["p_mask"] = s[0]["special_tokens_mask"] trgs = s[1:] if self.include_labels and len(trgs) > 0: s[0].pop("labels") # this is added by base class, but is not needed for extractive QA s[0]["start_positions"] = trgs[0] s[0]["end_positions"] = trgs[1] if return_batch_encoding: return updated_samples, inputs return updated_samples # ## Examples # # The following eamples demonstrate several approaches to construct your `DataBlock` for question answering tasks using the mid-level API # ### Using the mid-level API # #### Batch-Time Tokenization # ##### Step 1: Get your Hugging Face objects # + pretrained_model_name = "distilroberta-base" hf_arch, hf_config, hf_tokenizer, hf_model = get_hf_objects(pretrained_model_name, model_cls=AutoModelForQuestionAnswering) max_seq_len = 128 vocab = dict(enumerate(range(max_seq_len))) # - # ##### Step 2: Preprocess dataset # + tok_kwargs = {"return_overflowing_tokens": True, "max_length": max_seq_len, "stride": 24} preprocessor = QAPreprocessor(hf_tokenizer, id_attr="id", tok_kwargs=tok_kwargs) proc_df = preprocessor.process_df(squad_df) proc_df.head(1) # - # ##### Step 3: Create your `DataBlock` # + before_batch_tfm = QABatchTokenizeTransform(hf_arch, hf_config, hf_tokenizer, hf_model, max_length=max_seq_len) blocks = ( TextBlock(batch_tokenize_tfm=before_batch_tfm, input_return_type=QATextInput), CategoryBlock(vocab=vocab), CategoryBlock(vocab=vocab), ) dblock = DataBlock( blocks=blocks, get_x=lambda x: (x.proc_question, x.proc_context), get_y=[ColReader("ans_start_token_idx"), ColReader("ans_end_token_idx")], splitter=ColSplitter(), n_inp=1, ) # - # ##### Step 4: Build your `DataLoaders` dls = dblock.dataloaders(proc_df, bs=4) len(dls.train), len(dls.valid) b = dls.one_batch() len(b), len(b[0]), len(b[1]), len(b[2]) b[0]["input_ids"].shape, b[0]["attention_mask"].shape, b[1].shape, b[2].shape b[0]["start_positions"], b[0]["end_positions"] # export @typedispatch def show_batch( # This typedispatched `show_batch` will be called for `QuestionAnswerTextInput` typed inputs x: QATextInput, # Your targets y, # Your raw inputs/targets samples, # Your `DataLoaders`. 
This is required so as to get at the Hugging Face objects for # decoding them into something understandable dataloaders, # Your `show_batch` context ctxs=None, # The maximum number of items to show max_n=6, # Any truncation your want applied to your decoded inputs trunc_at=None, # Any other keyword arguments you want applied to `show_batch` **kwargs ): # grab our tokenizer tfm = first_blurr_tfm(dataloaders, tfms=[QABatchTokenizeTransform]) hf_tokenizer = tfm.hf_tokenizer res = L() for sample, input_ids, start, end in zip(samples, x, *y): txt = hf_tokenizer.decode(sample[0], skip_special_tokens=True)[:trunc_at] found = start.item() != 0 and end.item() != 0 ans_text = hf_tokenizer.decode(input_ids[start:end], skip_special_tokens=True) res.append((txt, found, (start.item(), end.item()), ans_text)) display_df(pd.DataFrame(res, columns=["text", "found", "start/end", "answer"])[:max_n]) return ctxs # The `show_batch` method above allows us to create a more interpretable view of our question/answer data. dls.show_batch(dataloaders=dls, max_n=4) # #### Passing extra information # # As mentioned in the `data.core` module documentation, BLURR now also allows you to pass extra information alongside your inputs in the form of a dictionary. If we are splitting long documents into chunks but want to predict/aggregation by example (rather than by chunk), we'll need to include a unique identifier for each example. When we look at `modeling.question_answer` module, we'll see how the question answering bits can use such an Id for this purpose. # # ##### Step 1: Get your Hugging Face objects # + pretrained_model_name = "bert-large-uncased-whole-word-masking-finetuned-squad" hf_arch, hf_config, hf_tokenizer, hf_model = get_hf_objects(pretrained_model_name, model_cls=AutoModelForQuestionAnswering) max_seq_len = 128 vocab = dict(enumerate(range(max_seq_len))) # - # ##### Step 2: Preprocess dataset # + preprocessor = QAPreprocessor( hf_tokenizer, id_attr="id", tok_kwargs={"return_overflowing_tokens": True, "max_length": max_seq_len, "stride": 64} ) proc_df = preprocessor.process_df(squad_df) proc_df.head(1) # - # ##### Step 2: Create your `DataBlock` # + before_batch_tfm = QABatchTokenizeTransform(hf_arch, hf_config, hf_tokenizer, hf_model, max_length=max_seq_len) blocks = ( TextBlock(batch_tokenize_tfm=before_batch_tfm, input_return_type=QATextInput), CategoryBlock(vocab=vocab), CategoryBlock(vocab=vocab), ) # since its preprocessed, we include an "text" key with the values of our question and context def get_x(item): return {"text": (item.proc_question, item.proc_context), "id": item.id} dblock = DataBlock( blocks=blocks, get_x=get_x, get_y=[ItemGetter("ans_start_token_idx"), ItemGetter("ans_end_token_idx")], splitter=ColSplitter(), n_inp=1, ) # - # ##### Step 3: Build your `DataLoaders` dls = dblock.dataloaders(proc_df, bs=4) len(dls.train), len(dls.valid) b = dls.one_batch() len(b), len(b[0]), len(b[1]), len(b[2]) b[0].keys() b[0]["input_ids"].shape, b[0]["attention_mask"].shape, b[1].shape, b[2].shape # We can see that any additional data is now located in the inputs dictionary b[0]["id"] dls.show_batch(dataloaders=dls, max_n=4) # ## Export - # + # hide from nbdev.export import notebook2script notebook2script() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.5 64-bit # name: python395jvsc74a57bd0bd4c67ef142469da7dc4d338a32ac40116904d26076b8e6aa587d80720bc6a2b # --- from 
datetime import datetime starttime=datetime.strptime("20180301095416","%Y%m%d%H%M%S") # print(starttime.day) endtime = datetime.strptime("20180301100225","%Y%m%d%H%M%S") # starttime, endtime = endtime, starttime delta = endtime - starttime min_total = round(delta.seconds / 60) print(min_total) # + from datetime import datetime import pandas as pd # filename = "data/Subway_20180301.txt" filename = "data/Subway_20190301_top100000.txt" # res = {} df = pd.read_csv(filename) # - def time_group(etime:datetime.time) -> int: return etime.hour * 6 + (etime.minute // 10) + 1 # + tags=[] for _, line in df.iterrows(): starttime = datetime.strptime(str(line["UpTime"]), "%Y%m%d%H%M%S") endtime = datetime.strptime(str(line["DownTime"]), "%Y%m%d%H%M%S") if starttime.day != 1 or endtime.day != 1: continue delta = endtime - starttime if delta.days < 0: continue if delta.seconds > 7200: continue if _ > 100: break print(starttime,time_group(starttime)) # - newindex = {} for i in range(144): hour = i // 6 dec_min = i % 6 newindex[i] = str(hour).rjust(2,'0') + ":" + str(dec_min * 10).rjust(2,'0') + '-' + str(hour).rjust(2,'0') + ":" + str(dec_min * 10 + 9).rjust(2,'0') print(newindex) # + import pandas as pd df = pd.read_csv("./out/PeopleInSubwayTime.csv") data = [df.iloc[:,0].tolist(), df.iloc[:,1].tolist()] df = pd.read_csv("./out/PeopleInSubwayCount.csv") data = [df.iloc[:, 0].tolist(), df.iloc[:, 1].tolist()] print(data) # - from mytool import csv2js csv2js() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # #%save MonCoreNLP.py _ih[2] # + """ serveur dans (home simph) ./nltk-data/stanford-corenlp-full-2016-10-31/ lancement java -mx6g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9009 -timeout 15000 warning : il a besoin de 6G 5g de ram pour traiter 2 langues comme en et fr - mettre dans la directory les models java pour chaque langue....voir le site stanford coreNLP server. 
""" import time import requests, logging, sys import pprint, json, time import codecs root = logging.getLogger('Root') root.setLevel(logging.WARNING) lhandler = logging.StreamHandler(sys.stdout) formatter = logging.Formatter( '%(asctime)s [%(levelname)s] : %(message)s', '%Y-%m-%d %H:%M:%S') lhandler.setFormatter(formatter) root.addHandler(lhandler) class CoreNLP: root.debug('Object instantiating..') annotator_full_list = ["tokenize", "cleanxml", "ssplit", "pos", "lemma", "ner", "regexner", "truecase", "parse", "depparse", "dcoref", "relation", "natlog", "quote", "sentiment"] language_full_list = ['en', 'fr', 'ar', 'zh', 'de', 'es'] url = 'http://127.0.0.1:9009' out_format = 'json' def __init__(self, url=url, language = "en", annotator_list=annotator_full_list, isPrint = False, ): if isPrint : root.setLevel(logging.DEBUG) assert url.upper().startswith('HTTP'), \ 'url string should be prefixed with http' if 'SENTIMENT' in map(str.upper, annotator_list): root.warning('You are using "Sentiment" annotator which is'\ 'not supported by Old version of CoreNLP') if url.endswith('/'): self.url = url[:-1] else: self.url = url assert isinstance(annotator_list, list), \ 'annotators can be passed only as a python list' if len(annotator_list) == 14: root.info('Using all the annotators, It might take a while') self.annotator_list = annotator_list self.language = language self.isPrint = isPrint common=set(self.annotator_list).intersection(self.annotator_full_list) not_suprtd_elem = set(self.annotator_list) - common assertion_error = 'annotator not supported: ' + str(not_suprtd_elem) assert not not_suprtd_elem, assertion_error common = set([self.language]).intersection(self.language_full_list) not_existing_language = set ( [self.language ]) - common assertion_error = 'language not supported: ' + str(not_existing_language) assert not not_existing_language, assertion_error @staticmethod def server_connection(current_url, data,): root.debug('server connection: ' + current_url) # modif PL root.debug ("data :"+ data ) #print (type(data)) assert isinstance(data, unicode) and data, 'Enter valid string input' try: data_utf = data.encode('utf-8') except: print (repr(data)) raise try: server_out = requests.post(current_url, data_utf, headers={'Connection': 'close'}) except requests.exceptions.ConnectionError: root.error('Connection Error, check you have server running') raise Exception('Check your CoreNLP Server status \n' 'if not sure, Check the pywrap doc for Server instantiation') return server_out def url_calc(self, serializer=''): s_string = '/?properties={"annotators": "' anot_string = ','.join(self.annotator_list) m_string = '", "outputFormat": "' + self.out_format f_string = '", "serializer": "' + serializer + '"}&pipelineLanguage='+self.language return self.url + s_string + anot_string + m_string + f_string def basic(self, data, out_format='json', serializer=''): self.out_format = out_format format_list = ['JSON', 'XML', 'TEXT', 'SERIALIZED'] assert out_format.upper() in format_list, \ 'output format not supported, check stanford doc' if out_format.upper() == 'SERIALIZED' and not serializer: root.info( 'Default Serializer is using - ' + 'edu.stanford.nlp.pipeline.ProtobufAnnotationSerializer') serializer = ('edu.stanford.nlp.pipeline.' 
'ProtobufAnnotationSerializer') current_url = self.url_calc(serializer) return self.server_connection(current_url, data,) @staticmethod def tokensregex(data, pattern='', custom_filter=''): root.info('TokenRegex started') return CoreNLP.regex('/tokensregex', data, pattern, custom_filter) @staticmethod def semgrex(data, pattern='', custom_filter=''): root.info('SemRegex started') return CoreNLP.regex('/semgrex', data, pattern, custom_filter) @staticmethod def tregex(data, pattern='', custom_filter=''): root.info('Tregex started') return CoreNLP.regex('/tregex', data, pattern, custom_filter) @classmethod def regex(cls, endpoint, data, pattern, custom_filter): url_string = '/?pattern=' + str(pattern) +'&filter=' + custom_filter current_url = cls.url + endpoint + url_string root.info('Returning the data requested') return cls.server_connection(current_url, data) @staticmethod def process_sentences(sentences): assert isinstance(sentences, list), 'it should be a list' index = 0 new_index = 0 token_dict = { 'index':[], 'truecaseText':[], 'ner':[], 'before':[], 'originalText':[], 'characterOffsetBegin':[], 'lemma':[], 'truecase':[], 'pos':[], 'characterOffsetEnd':[], 'speaker':[], 'word':[], 'after':[], 'normalizedNER':[] } for sentence in sentences: index = new_index tokens = sentence['tokens'] for val in tokens: #workaround to handle length inconsistancie with normalizedNER, rethink the logic if 'ner' in val.keys() and 'normalizedNER' not in val.keys(): token_dict['normalizedNER'].append(0) for key, val in val.items(): if key == 'index': new_index = index + int(val) token_dict[key].append(str(new_index)) else: try: token_dict[key].append(val) except KeyError: token_dict[key] = [val] root.info('New key added: ' + key) return [token_dict, sentences] def arrange(self, data): root.info('Executing custom function') assert isinstance(data, unicode) and data, 'Enter valid string input' if 'lemma' not in self.annotator_list: self.annotator_list.append('lemma') current_url = self.url_calc() r = self.server_connection(current_url, data) try: r = r.json() rs = r['sentences'] except ValueError: root.error('Value Error: '+r.text+', Check special chars in input') rs = [] return self.process_sentences(rs) listePOS = """ CC Coordinating conjunction CD Cardinal number DT Determiner EX Existential there FW Foreign word IN Preposition or subordinating conjunction JJ Adjective JJR Adjective, comparative JJS Adjective, superlative LS List item marker MD Modal NN Noun, singular or mass NNS Noun, plural NNP Proper noun, singular NNPS Proper noun, plural PDT Predeterminer POS Possessive ending PRP Personal pronoun PRP$ Possessive pronoun RB Adverb RBR Adverb, comparative RBS Adverb, superlative RP Particle SYM Symbol TO to UH Interjection VB Verb, base form VBD Verb, past tense VBG Verb, gerund or present participle VBN Verb, past participle VBP Verb, non­3rd person singular present VBZ Verb, 3rd person singular present WDT Wh­determiner WP Wh­pronoun WP$ Possessive wh­pronoun WRB Wh­adverb """ full_annotator_list = ["ner", "pos", "lemma", "depparse", "relation"] language ="fr" cf = CoreNLP(url='http://localhost:9009', language = language, annotator_list=full_annotator_list, isPrint = False ) """ #Calling basic function which would return a 'requests' object out = cf.basic(data, out_format='json' ) print ('Basic')haque pp.pprint(out.json()) """ pp = pprint.PrettyPrinter(indent=4) full_annotator_list = ["tokenize", "cleanxml", "ssplit", "pos", "lemma", "ner", "regexner", "truecase", "parse", "depparse", "dcoref", 
"relation", "natlog", "quote"] full_annotator_list = ["ner", "pos", "lemma", "depparse", "relation"] language ="fr" cf = CoreNLP(url='http://localhost:9009', language = language, annotator_list=full_annotator_list, isPrint = False ) def get_enhancedPlusPlusDependencies (data) : t0 = time.time() Arrange, sentences = cf.arrange(data,) #print ("temps de calcul = %1.2f secondes" %(time.time() - t0)) #print ("\n ###################### ==== Affichage du dictionnaire Arrange ===== ###################### \n") #pp.pprint (Arrange) ner = Arrange ['ner'] roles = Arrange ['pos'] words = Arrange ['word'] #print ("ner" + str(ner)) #print ("\n ###################### ==== Affichage du dictionnaire sentences ===== ###################### \n") #pp.pprint (sentences) enhancedPlusPlusDependencies = sentences[0] ['enhancedPlusPlusDependencies'] #print ("\n ## == Affichage de la valeur associée à la clé enhancedPlusPlusDependencies dans le dictionnaire sentences == ## \n") #pp.pprint(enhancedPlusPlusDependencies) return enhancedPlusPlusDependencies, ner , roles, words def lecture_fichier(path = "./definition de famille (simple).txt") : data = [] with codecs.open(path, "r", "utf-8") as f : for line in f.readlines(): if not line.strip().startswith("#") and not "#--" in line.strip() and not line.strip() == "": data.append(line.strip("\r\n")) f.close() return data # + # #%save affichage.py _ih[27] # + def creationArbreDependancePosition (enhancedPlusPlusDependencies) : arbre = {} for group in enhancedPlusPlusDependencies : dep = group [0] # 'dep' == 0 governor = group [3] #'governor' == 3 dependent = group [1] # 'dependent' == 1 try: arbre [governor].append((dependent, dep)) except: arbre [governor]= [(dependent, dep),] continue return arbre def creationArbreDependanceNom (enhancedPlusPlusDependencies) : arbre = {} for group in enhancedPlusPlusDependencies : dep = group [0] # 'dep' == 0 governorGloss = group [4] # 'governorGloss' == 4 dependentGloss = group [2] # 'dependentGloss' == 2 #print("aaaaaaaaaaaaaaaaa", governorGloss) try: arbre [governorGloss].append((dependentGloss, dep)) except: arbre [governorGloss]= [(dependentGloss, dep),] continue return arbre arbre = creationArbreDependanceNom (enhancedPlusPlusDependencies) #premierMot = "ROOT" def affichageArbre (mot, arbre,memoire = {}) : try: listeDependence = arbre[mot] except : return for dependentGloss, dep in listeDependence : print (mot, "=> " , dependentGloss," pour type de dependance: ", dep) if not dependentGloss in memoire : affichageArbre (dependentGloss, arbre, memoire = memoire) memoire [dependentGloss] = True continue return #affichageArbre (premierMot, arbre ) # + # #%save lecture_data.py _ih[31] # - # + # #%save family.py _ih[22] # + import time, codecs #from lecture_data import lecture_fichier #from enhanced import get_enhancedPlusPlusDependencies #from affichage import creationArbreDependanceNom, affichageArbre # - """ Generalisation intemporelle et hors espace......Concept pas objet anime ou inanimé un chat est un carnivore un carnivore est un mangeur(nc) uniquement de viande. un mangeur est un (objet animé) qui est (mange V) La regle est de generaliser les qualités du carnivore au chat par exemple Le chat de Merlin mange de l'herbe. réponse : impossible un chat est un carnivore donc il ne mange pas de l'herbe. retour: Le chat de Merlin avale de l'herbe pour se purger. 
réponse OK mot1 => "mot general" dico car taille faible et inversement le retour est un Btree car beaucoup de cas possible (exemple homme = 4 milliards d'individus nommés ) le graphe de calcul se fait sur le mot ensuite sur le "mot general" et ainsi de suite... arret des le premier résultat ? tous les mots concept peuvent etre potentiellement un généralisation mais pas de boucle????? Quand un concept se met à avoir une consistance temps et espace alors il deviend une instance par généralisable... Un mot peut posseder plusieurs significations au sens concept ou instanciation... Un mot peut avoir plusieurs sens (grouper les sens mais comment ?) un mot = "dhdhddhdh" une liste des sens , chaque sens est une liste d'objet qui est relié aux autres par mot, sens , position """ # + """ Cas d'équivallences #######Cas 1 {nom propre 1} ((etre present)|(be present)) {article indéfini + nom singulier } (de | of ) {nom propre 2} est équivallent à {nom propre 2} ((avoir present) | (have present) ) {article indefini + nom singulier} ("désigné(e)+par " | "called, " ) {nom propre 1} Exemple : Merlin est un fils de Patrick. Donne Patrick a un fils,nommé Merlin. #######Cas 2 (Chaque | Every ) {nom singulier 1} ((avoir present) | (have present)) {article indefini + nom singulier 2} et (All | Tous | Toutes) {article defini plural + singular noun plural } (have present)) {article indefini + nom singulier 2} est équivallent à {article indefini + nom sigulier 2 } ("is part of every" | "est une partie de chaque" {nom singulier 1} Exemple : Every car have an engine. All the cars have an engine. equivallent An engine is a part of every car. ou Chaque voiture a un moteur. Toutes les voitures ont un moteur. équivallent Un moteur est ume partie de chaque voiture. """ # - """ Groupement de la connaissance dans la mémoire on connait Cas 1 {nom propre 1} ((etre present)|(be present)) {article indéfini + nom singulier } (de | of ) {nom propre 2} est équivallent à {nom propre 2} ((avoir present) | (have present) ) {article indefini + nom singulier} ("désigné(e)+par " | "called, " ) {nom propre 1} Si {nom propre 2} ((avoir present) | (have present) ) {article indefini + nom singulier} ("désigné(e)+par " | "called, " ) {nom propre 3} + {nom propre 2} ((avoir present) | (have present) ) {article indefini + nom singulier} ("désigné(e)+par " | "called, " ) {nom propre 1} Alors {nom propre 2} ((avoir present) | (have present) ) {article indefini + nom singulier} ("désigné(e)+par " | "called, " { nom propre 3 } (and" | "et") {nom propre 1} """ """ Raisonnement dans le temps passé vs present (+ futur ? ) / contexte {nom propre 1} ( (be past) | (être past)) {article defini + nom singulier} of {nom propre 2} devient (nom propre 2) (("have not past)| (non plus avoir past) ) {nom singulier} (anymore | ) ET {nom propre 1} ( (be past) | (être past)) {article defini | indefini + nom singulier} of {nom propre 2} devient (nom propre 2) ((have past)| ( avoir past) ) {indefinite article + nom singulier} ("called ,"| "désigné(e)+par) {nom propre 1} """ """ Détection des conflits et génération de questions pour résoudre le conflit (Every | chaque ) {nom singulier 1} ((be present ) | (être present) ) {article indefini + nom singular 2 } ( (or) | (ou)) {article indefini + nom singular 3 } En conflit avec {nom propre 4} ((be present ) | (être present) ) {article indefini + nom singulier 2 } ( (and) | (et) {article indefini + nom singulier 3 } exemple : Chaque personne est un homme ou une femme. Nadia est une femme et un homme. 
=> conflit si Nadia est une personne. Regle pour raisonner : # inference (Every | chaque ) {nom singulier 1} ((be present ) | (être present) ) {article indefini + nom singular 2 } ( (or) | (ou)) {article indefini + nom singular 3 } ET {nom propre 1 } ((be present ) | (être present) ) {article indefini + nom singulier 1 } => {nom propre 1 } ((be present ) | (être present) ) {article indefini + nom singular 2 } ( (or) | (ou)) {article indefini + nom singular 3 } # Question {nom propre 1 } ((be present ) | (être present) ) {article indefini + nom singular 2 } ( (or) | (ou)) {article indefini + nom singular 3 } donne la question ((be present ) | (être present) ) {nom propre 1 } {article indefini + nom singular 2 } ( (or) | (ou)) {article indefini + nom singular 3 } ? # ou est exclusif ((be present ) | (être present) ) {nom propre 1 } {article indefini + nom singular 2 } ( (or) | (ou)) {article indefini + nom singular 3 } ? reponse : {nom propre 1 } ((be present negation ) | (être present negation ) ) {article indefini + nom singular 2 } Conclusion : {nom propre 1 } ((be present ) | (être present) ) {article indefini + nom singular 3 } Ou réponse : {nom propre 1 } ((be present negation ) | (être present negation ) ) {article indefini + nom singular 3 } Conclusion : {nom propre 1 } ((be present ) | (être present) ) {article indefini + nom singular 2 } """ """ Archivage dans le temps des faits (généralisation avec une variable temps dans le contexte, par défaut le temps de la machine ) {nom propre 1} ((etre present)|(be present)) {article défini + nom singulier } (de | of ) {nom propre 2} équivallent à {nom propre 2} ((avoir present) | (have present) ) {article défini + nom singulier} ("désigné(e)+par " | "called, " ) {nom propre 1} Donc {nom propre 1} ((etre present)|(be present)) {article indéfini + nom singulier } (de | of ) {nom propre 2} SUIVI PAR {nom propre 3} ((etre present)|(be present)) {article indéfini + nom singulier } (de | of ) {nom propre 2} CONCLUSION {nom propre 2} ((avoir past) | (have past ) ) {article défini + nom singulier} ("désigné(e)+par " | "called, " ) {nom propre 1} ET {nom propre 3} ((avoir present) | (have present ) ) {article défini + nom singulier} ("désigné(e)+par " | "called, " ) {nom propre 2} EXEMPLE est le president des Etats Unis. est le president des Etats Unis. => The United State avait un president nommé, . The United State a un president nommé, . """ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Analyze and prediction on the Kaggle's Titanic dataset # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline #Alterando configurações padrão dos plots plt.style.use('fivethirtyeight') plt.rcParams['figure.figsize'] = (10, 5) # - df_train = pd.read_csv('dataset/train.csv') # ## Dataset analyzes and cleaning df_train.head() df_train.describe() df_train.dtypes #setting passanger ID as the dataframe index df_train.set_index('PassengerId', inplace=True, verify_integrity=True) sns.pairplot(df_train, kind="scatter", plot_kws=dict(s=80, edgecolor="white", linewidth=2.5)) plt.show() # **Ticket Class**: To better visualize the ticket class as a categorical variable, I created a new row where I name each class as *class-1, class-2* and *class-3*. This will help to label the following visualizations. 
df_train['Class_label'] = df_train['Pclass'].map({1:'class-1', 2:'class-2', 3:'class-3'})

# ### Age NaN values
# There are missing values for Age, and at this point I don't know whether they can interfere with the prediction. To gain some insight into this issue I explored the data to better understand the problem.
#
# ***Note:*** The test dataset can have NaN values as well, so I think some data normalization may be needed before validating the model.

# Age distribution
df_train['Age'].plot(kind='hist', title='Age Distribution')
plt.show()
#df_train.plot(kind='hist', y='Age', x='Pclass')

# Age distribution by Sex
# reference: https://medium.com/@lorran.rodr/criando-a-pir%C3%A2mide-et%C3%A1ria-dos-acidentes-de-trabalho-no-brasil-com-seaborn-cd960f56ffa2
df_age = df_train[['Sex', 'Age']].copy()
df_age['age_group'] = pd.cut(df_age.Age, [0, 10, 20, 30, 40, 50, 60, 70, 80])
df_age['count'] = 1
pv = pd.pivot_table(data=df_age, values='count', columns='Sex', index='age_group', aggfunc=np.sum, dropna=True).fillna(0)
pv

df_age['piram_age'] = df_age.apply(lambda x: x['Age']*(-1) if x['Sex'] == 'male' else x['Age'], axis=1)
df_age.sort_values('Age', ascending=False)
df_age.head(50)

bar_order = df_age.Age.unique()[::-1]
# exploratory: PassengerId is the index of df_age, so expose it as a column before plotting
sns.barplot(x='PassengerId', y='Sex', data=df_age.loc[df_age['Sex']=='male', :].reset_index())

sns.violinplot(data=df_train, x='Class_label', y='Age', inner=None,
               order=["class-1", "class-2", "class-3"]).set_title('Age distribution by Ticket class')
plt.show()

# Passengers per class
df_train.groupby(['Class_label'])['Class_label'].count().plot(kind='pie', autopct='%.1f%%')
plt.show()

# TO DOs
# - Create an overlaid age distribution plot by sex
# - Order the violin plot by class

# + active=""
# # Null values for Age
# df_train.loc[df_train['Age'].isnull() == True]
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernel_info:
#     name: python3
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # WeatherPy
# ----
#
# #### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.

# !pip install citipy

# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import scipy.stats as st

# Import API key
from api_keys import weather_api_key

# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy

# Output File (CSV)
output_data_file = "output_data/cities.csv"

# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# -

# ## Generate Cities List

# +
# List for holding lat_lngs and cities
lat_lngs = []
cities = []

# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)

# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
    city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name

    # If the city is unique, then add it to our cities list
    if city not in cities:
        cities.append(city)

# Print the city count to confirm sufficient count
len(cities)
# -

# ### Perform API Calls
# * Perform a weather check on each city using a series of successive API calls.
# * Include a print log of each city as it's being processed (with the city number and city name).
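#
# Before the full retrieval loop in the next cell, here is a minimal sketch of the per-city request that loop performs, factored into a helper. It is illustrative only and not used below; the endpoint, query parameters and response keys are the ones this notebook already references, while `fetch_city_weather` is a hypothetical name introduced for the sketch.

# +
import requests  # already imported above; repeated so the sketch is self-contained

def fetch_city_weather(city, api_key, units="imperial"):
    """Return a flat dict of the fields kept in this notebook, or None if the city is unknown."""
    base_url = "http://api.openweathermap.org/data/2.5/weather?"
    response = requests.get(f"{base_url}appid={api_key}&units={units}&q={city}").json()
    try:
        return {
            "City": response["name"],
            "Lat": response["coord"]["lat"],
            "Lng": response["coord"]["lon"],
            "Max Temp": response["main"]["temp_max"],
            "Humidity": response["main"]["humidity"],
            "Cloudiness": response["clouds"]["all"],
            "Wind Speed": response["wind"]["speed"],
            "Country": response["sys"]["country"],
            "Date": response["dt"],
        }
    except KeyError:
        # Cities the API cannot resolve return an error payload without these keys
        return None

# Example usage (requires a valid key): fetch_city_weather("london", weather_api_key)
# -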
# # + url = "http://api.openweathermap.org/data/2.5/weather?" units = "imperial" query_url = f"{url}appid={weather_api_key}&units={units}&q=" record = 0 set = 1 city_name = [] lat = [] long = [] max_temp = [] humidity = [] clouds = [] wind_speed = [] country = [] date = [] print("Beginning Data Retrieval") print("-----------------------------") for city in cities: #Count of cities/sets of 50 if record == 50: record = 0 set += 1 record += 1 #Try validating each city try: #Pull weather info with successful attempts weather_response = requests.get(query_url+city).json() city_name.append(weather_response["name"]) lat.append(weather_response["coord"]["lat"]) long.append(weather_response["coord"]["lon"]) max_temp.append(weather_response["main"]["temp_max"]) humidity.append(weather_response["main"]["humidity"]) clouds.append(weather_response["clouds"]["all"]) wind_speed.append(weather_response["wind"]["speed"]) country.append(weather_response["sys"]["country"]) date.append(weather_response["dt"]) print(f"Processing Record {record} of Set {set} | {city}") except: #Print out for each city that can not be validated print("City not found. Skipping...") continue # - # ### Convert Raw Data to DataFrame # * Export the city data into a .csv. # * Display the DataFrame # + #Put data into dictionary wx_dict = {'City':city_name, 'Lat':lat, 'Lng':long, 'Max Temp':max_temp, 'Humidity':humidity, 'Cloudiness':clouds, 'Wind Speed':wind_speed, 'Country':country, 'Date':date} #Put dictionary into DataFrame wx_summary = pd.DataFrame(wx_dict) #Print DataFrame into a csv wx_summary.to_csv(output_data_file) #Print DataFrame here wx_summary # + #Perform quick analysis of data in DataFrame wx_analysis = pd.DataFrame(wx_summary.describe()) #Print quick analysis wx_analysis # - # ## Inspect the data and remove the cities where the humidity > 100%. # ---- # Skip this step if there are no cities that have humidity > 100%. # Get the indices of cities that have humidity over 100%. # Make a new DataFrame equal to the city data to drop all humidity outliers by index. # Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data". # ## Plotting the Data # * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels. # * Save the plotted figures as .pngs. # ## Latitude vs. Temperature Plot # + #Outline Scatter Plot data plt.scatter(wx_summary["Lat"],wx_summary["Max Temp"]) #Label graph plt.xlabel("Latitude") plt.ylabel("Max Temperature (F)") plt.title("City Latitude vs. Max Temperature") plt.grid(True) #Save Scatter Plot plt.savefig("output_data/Latitude vs. Temperature Plot.png") #Print Scatter Plot plt.show() #The plot shows that temperatures are warmer the closer they are to the equator and colder at further latitudes # - # ## Latitude vs. Humidity Plot # + #Outline Scatter Plot data plt.scatter(wx_summary["Lat"],wx_summary["Humidity"]) #Label graph plt.xlabel("Latitude") plt.ylabel("Humidity (%)") plt.title("City Latitude vs. Humidity") plt.grid(True) #Save Scatter Plot plt.savefig("output_data/Latitude vs. Humidity Plot.png") #Print Scatter Plot plt.show() #The plot appears to show that cities at latitudes further from the equator have a higher percentage of humidity #and almost no low percentages of humidity in the same areas # - # ## Latitude vs. Cloudiness Plot # + #Outline Scatter Plot data plt.scatter(wx_summary["Lat"],wx_summary["Cloudiness"]) #Label graph plt.xlabel("Latitude") plt.ylabel("Cloudiness (%)") plt.title("City Latitude vs. 
Cloudiness") plt.grid(True) #Save Scatter Plot plt.savefig("output_data/Latitude vs. Cloudiness Plot.png") #Print Scatter Plot plt.show() #The plot appears to show that there is some higher percentages of cloudiness further from the equator # - # ## Latitude vs. Wind Speed Plot # + #Outline Scatter Plot data plt.scatter(wx_summary["Lat"],wx_summary["Wind Speed"]) #Label graph plt.xlabel("Latitude") plt.ylabel("Wind Speed (mph)") plt.title("City Latitude vs. Wind Speed") plt.grid(True) #Save Scatter Plot plt.savefig("output_data/Latitude vs. Wind Speed Plot.png") #Print Scatter Plot plt.show() #The plot appears to show there is very little correlation between windspeed and latitude # - # ## Linear Regression #Outline data in northern and southern hemisphere northern_hem = wx_summary.loc[wx_summary["Lat"] > 0] southern_hem = wx_summary.loc[wx_summary["Lat"] < 0] # #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression # + #Create equation (slope, intercept, rvalue, pvalue, stderr) = st.linregress(northern_hem["Lat"],northern_hem["Max Temp"]) regress_values = northern_hem["Lat"] * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) #Outline graph plt.scatter(northern_hem["Lat"],northern_hem["Max Temp"]) plt.plot(northern_hem["Lat"],regress_values,"r-") plt.annotate(line_eq,(5,-15),fontsize=15,color="red") #Label and detail graph plt.xlabel("Latitude") plt.ylabel("Max Temp (F)") plt.title("Northern Hemisphere - City Latitude vs. Max Temp") #Save Scatter Plot plt.savefig("output_data/Northern Hemisphere - Max Temp vs. Latitude.png") #Print Scatter Plot plt.show() #Print R-Value print(f"The r-value is: {rvalue**2}") #The plot shows there is a direct correlation between high max temps and lower latitudes as well as low max temps # and higher latitudes # - # #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression # + #Create equation (slope, intercept, rvalue, pvalue, stderr) = st.linregress(southern_hem["Lat"],southern_hem["Max Temp"]) regress_values = southern_hem["Lat"] * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) #Outline graph plt.scatter(southern_hem["Lat"],southern_hem["Max Temp"]) plt.plot(southern_hem["Lat"],regress_values,"r-") plt.annotate(line_eq,(-25,40),fontsize=15,color="red") #Label and detail graph plt.xlabel("Latitude") plt.ylabel("Max Temp (F)") plt.title("Southern Hemisphere - City Latitude vs. Max Temp") #Save Scatter Plot plt.savefig("output_data/Southern Hemisphere - Max Temp vs. Latitude.png") #Print Scatter Plot plt.show() #Print R-Value print(f"The r-value is: {rvalue**2}") #The plot shows there is a correlation between medium to high max temps and latitudes closer to the equator #as well as low max temps and further away from the equator # - # #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression # + #Create equation (slope, intercept, rvalue, pvalue, stderr) = st.linregress(northern_hem["Lat"],northern_hem["Humidity"]) regress_values = northern_hem["Lat"] * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) #Outline and detail graph plt.scatter(northern_hem["Lat"],northern_hem["Humidity"]) plt.plot(northern_hem["Lat"],regress_values,"r-") plt.annotate(line_eq,(40,20),fontsize=15,color="red") #Label graph plt.xlabel("Latitude") plt.ylabel("Humidity (%)") plt.title("Northern Hemisphere - City Latitude vs. Humidity") #Save Scatter Plot plt.savefig("output_data/Northern Hemisphere - Humidity (%) vs. 
Latitude.png") #Print Scatter Plot plt.show() #Print R-Value print(f"The r-value is: {rvalue**2}") #This plot appears to show that humidity is becomes reliably higher at latitudes further from the equator and lower #humidities are essentially non-existent # - # #### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression # + #Create equation (slope, intercept, rvalue, pvalue, stderr) = st.linregress(southern_hem["Lat"],southern_hem["Humidity"]) regress_values = southern_hem["Lat"] * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) #Outline and detail graph plt.scatter(southern_hem["Lat"],southern_hem["Humidity"]) plt.plot(southern_hem["Lat"],regress_values,"r-") plt.annotate(line_eq,(-50,15),fontsize=15,color="red") #Label graph plt.xlabel("Latitude") plt.ylabel("Humidity (%)") plt.title("Southern Hemisphere - City Latitude vs. Humidity") #Save Scatter Plot plt.savefig("output_data/Southern Hemisphere - Humidity (%) vs. Latitude.png") #Print Scatter Plot plt.show() #Print R-Value print(f"The r-value is: {rvalue**2}") #This plot appears to show there is a very very slight correlation between the rise in humidity as the latitudes #approach the equator # - # #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression # + #Create equation (slope, intercept, rvalue, pvalue, stderr) = st.linregress(northern_hem["Lat"],northern_hem["Cloudiness"]) regress_values = northern_hem["Lat"] * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) #Outline and detail graph plt.scatter(northern_hem["Lat"],northern_hem["Cloudiness"]) plt.plot(northern_hem["Lat"],regress_values,"r-") plt.annotate(line_eq,(45,25),fontsize=15,color="red") #Label graph plt.xlabel("Latitude") plt.ylabel("Cloudiness (%)") plt.title("Northern Hemisphere - City Latitude vs. Cloudiness") #Save Scatter Plot plt.savefig("output_data/Northern Hemisphere - Cloudiness (%) vs. Latitude.png") #Print Scatter Plot plt.show() #Print R-Value print(f"The r-value is: {rvalue**2}") #This plot appears to show there is a correlation between low percentages of cloudiness at lower latitudes and higher #percentages of cloudiness at higher latitudes # - # #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression # + #Create equation (slope, intercept, rvalue, pvalue, stderr) = st.linregress(southern_hem["Lat"],southern_hem["Cloudiness"]) regress_values = southern_hem["Lat"] * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) #Outline and detail graph plt.scatter(southern_hem["Lat"],southern_hem["Cloudiness"]) plt.plot(southern_hem["Lat"],regress_values,"r-") plt.annotate(line_eq,(-50,60),fontsize=15,color="red") #Label graph plt.xlabel("Latitude") plt.ylabel("Cloudiness (%)") plt.title("Southern Hemisphere - City Latitude vs. Cloudiness") #Save Scatter Plot plt.savefig("output_data/Southern Hemisphere - Cloudiness (%) vs. Latitude.png") #Print Scatter Plot plt.show() #Print R-Value print(f"The r-value is: {rvalue**2}") #This plot appears to show there is a very slight correlation between the rise in cloudiness as the latitudes #approach the equator # - # #### Northern Hemisphere - Wind Speed (mph) vs. 
Latitude Linear Regression # + #Create equation (slope, intercept, rvalue, pvalue, stderr) = st.linregress(northern_hem["Lat"],northern_hem["Wind Speed"]) regress_values = northern_hem["Lat"] * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) #Outline and detail graph plt.scatter(northern_hem["Lat"],northern_hem["Wind Speed"]) plt.plot(northern_hem["Lat"],regress_values,"r-") plt.annotate(line_eq,(10,30),fontsize=15,color="red") #Label graph plt.xlabel("Latitude") plt.ylabel("Wind Speed (mph)") plt.title("Northern Hemisphere - City Latitude vs. Wind Speed") #Save Scatter Plot plt.savefig("output_data/Northern Hemisphere - Wind Speed (mph) vs. Latitude.png") #Print Scatter Plot plt.show() #Print R-Value print(f"The r-value is: {rvalue**2}") #This plot appears to show that wind speeds increases slightly from the equator to higher latitudes but is overall rather low # - # #### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression # + #Create equation (slope, intercept, rvalue, pvalue, stderr) = st.linregress(southern_hem["Lat"],southern_hem["Wind Speed"]) regress_values = southern_hem["Lat"] * slope + intercept line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)) #Outline and detail graph plt.scatter(southern_hem["Lat"],southern_hem["Wind Speed"]) plt.plot(southern_hem["Lat"],regress_values,"r-") plt.annotate(line_eq,(-25,25),fontsize=15,color="red") #Label graph plt.xlabel("Latitude") plt.ylabel("Wind Speed (mph)") plt.title("Southern Hemisphere - City Latitude vs. Wind Speed") #Save Scatter Plot plt.savefig("output_data/Southern Hemisphere - Wind Speed (mph) vs. Latitude.png") #Print Scatter Plot plt.show() #Print R-Value print(f"The r-value is: {rvalue**2}") #This plot appears to show that wind speed reduces as it approaches the equator from lower latitudes # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Logistic model quasi-Laplace with Mr.Ash # # - toc: true # - badges: true # - comments: true # - categories: [jupyter] # ### About # # This is a proof of concept implementation of [quasi-Laplace approximation]({% post_url 2020-06-22-intuition-for-quasi-Laplace %}) with Mr.Ash. The quasi-Laplace approximation introduces a regularizer on the likelihood and approximates the posterior with a Gaussian distribution. This approach can be used for generalized linear model with any link function. # # Bayesian logistic regression with a Gaussian prior (ridge regression) uses the Laplace approximation to approximate the posterior with a Gaussian distribution. The variational treatment of logistic model by Jordan and Jaakkola (2000) also leads to a Gaussian approximation of the posterior distribution. The greater flexibility of the variational approximation leads to improved accuracy compared to Laplace method. # # Here, I compare the quasi-Laplace approach with cross-validated Lasso ($L_1$ penalty). I used the $R^2$ statistic by Tjur{% fn 1 %} to quantify the quality of classification, and the Gini index{% fn 2 %} to quantify the sparsity of the coefficients. # # > Note: Coefficients are generally underestimated by QL irrespective of the choice of regularizer. Lasso overestimates the coefficients. # # # # I found that the optimization depends on the choice of Gaussian mixture family. 
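#
# Before the implementation, a small self-contained illustration of the two summary statistics used throughout: Tjur's $R^2$ (mean predicted probability among cases minus mean predicted probability among controls) and the Gini index of a coefficient vector (0 for a perfectly uniform vector, approaching 1 for a perfectly sparse one). The toy numbers are arbitrary; the formulas mirror the `tjur_R2` and `gini` helpers defined later in this notebook.

# +
# collapse-hide
import numpy as np

def tjur_r2_from_probs(prob, y):
    # Difference of mean predicted probabilities between the two observed classes
    return np.mean(prob[y == 1]) - np.mean(prob[y == 0])

def gini_index(x):
    # Sparsity measure computed on the sorted absolute values
    n = len(x)
    xsort = np.sort(np.abs(x))
    t1 = xsort / np.sum(np.abs(x))
    t2 = (n - np.arange(n) - 1 + 0.5) / n
    return 1 - 2 * np.sum(t1 * t2)

toy_prob = np.array([0.9, 0.8, 0.3, 0.2])
toy_y    = np.array([1, 1, 0, 0])
toy_beta = np.array([0.0, 0.0, 0.0, 1.5, 0.0])
print(tjur_r2_from_probs(toy_prob, toy_y))  # 0.6
print(gini_index(toy_beta))                 # 0.8 (sparser vectors score closer to 1)
# -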
# + #collapse_hide import numpy as np np.set_printoptions(precision = 4, suppress=True) import pandas as pd from scipy import optimize from scipy import stats from scipy.special import digamma from sklearn import linear_model import matplotlib.pyplot as plt from matplotlib import cm from matplotlib import ticker as plticker from mpl_toolkits.axes_grid1 import make_axes_locatable import sys sys.path.append("../../utils/") import mpl_stylesheet mpl_stylesheet.banskt_presentation(fontfamily = 'latex-clearsans', fontsize = 18, colors = 'banskt', dpi = 72) # - # ### Generate toy data # # Let us consider a logistic model with sparse coefficients, so that the number of causal variables `ncausal` is much less than the number of variables `nvar` in the model. To ensure sparsity of the coefficients, we use a Gaussian mixture prior of `nGcomp` components with variances given by $\sigma_k^2$ (`sigmak2`) and unknown mixture coefficients. The sparsity of the initialization is controlled by the variable `sparsity` that specifies the mixture coefficient of the zero-th component $\mathcal{N}(0, 0)$ (or the delta function). We are particularly interested in systems where the number of samples `nsample` is less than `nvar`. Validation is performed on separate test dataset with `ntest` samples. # + nsample = [50, 200] nvar = [100, 100] ncausal = 5 nGcomp = 10 sparsity = 0.95 prior_variance = 5 ntest = 2000 hg2 = 0.8 sigmak2 = np.array([prior_variance * np.square(np.power(2, (i)/nGcomp) - 1) for i in range(nGcomp)]) # - # The data is generated from a liability threshold model such that the explanatory variables explain only a given fraction `hg2` of the response variable. $\mathbf{X}$ is centered and scaled such that for each variable $j$, the variance is $\mathrm{var}(\mathbf{x}_j) = 1$. # + # collapse-hide def standardize(X): Xnorm = (X - np.mean(X, axis = 0)) Xstd = Xnorm / np.std(Xnorm, axis = 0) return Xstd def logistic_data(X, beta): Xbeta = np.dot(X, beta) pred = 1 / (1 + np.exp(-Xbeta)) Y = np.random.binomial(1, pred) return Y, pred def liability_model(X, beta, prevalence = 0.5): hg2 = np.sum(np.square(beta)) eff = np.dot(X, beta) rand_std = np.sqrt(1 - hg2) rand_eff = np.random.normal(0, rand_std, X.shape[0]) liability = eff + rand_eff cutoff = stats.norm.ppf(1 - prevalence) cases = np.where(liability >= cutoff)[0] ctrls = np.where(liability < cutoff)[0] Y = np.zeros(X.shape[0]) Y[cases] = 1 return Y, liability def tjur_R2(X, Y, beta): Xbeta = np.dot(X, beta) pred = 1 / (1 + np.exp(-Xbeta)) R2 = np.mean(pred[Y == 1]) - np.mean(pred[Y == 0]) return R2 def gini(x): n = len(x) xabs = abs(x) xsort = xabs[np.argsort(xabs)] t1 = xsort / np.sum(xabs) t2 = (n - np.arange(n) - 1 + 0.5) / n gini = 1 - 2 * np.sum(t1 * t2) return gini def simulate_data(nsample, nvar, ncausal, ntest, hg2): X = np.random.rand(nsample * nvar).reshape(nsample, nvar) X = standardize(X) Xtest = np.random.rand(ntest * nvar).reshape(ntest, nvar) Xtest = standardize(Xtest) beta = np.zeros(nvar) causal_idx = np.random.choice(nvar, ncausal, replace = False) beta[causal_idx] = np.random.normal(loc = 0., scale = 1., size = ncausal) beta = beta * np.sqrt(hg2 / np.sum(np.square(beta))) Y, Y_liab = liability_model(X, beta) Ytest, Ytest_liab = liability_model(Xtest, beta) #Y, Y_liab = logistic_data(X, beta) #Ytest, Ytest_liab = logistic_data(Xtest, beta) return X, Y, beta, Xtest, Ytest, Y_liab, Ytest_liab # - # Let us have a look at the generated data. 
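#
# As a quick sanity check on the liability-threshold step in `liability_model` above: with a prevalence of 0.5 the cutoff is $\Phi^{-1}(1 - 0.5) = 0$, so samples with positive liability become cases. The sketch below restates only that step on a few arbitrary toy liability values.

# +
# collapse-hide
import numpy as np
from scipy import stats

prevalence = 0.5
cutoff = stats.norm.ppf(1 - prevalence)           # 0.0 for a 50% prevalence
toy_liability = np.array([-1.2, -0.1, 0.3, 2.0])
toy_labels = (toy_liability >= cutoff).astype(int)
print(cutoff, toy_labels)                         # 0.0 [0 0 1 1]
# -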
# + # collapse-hide X = [None for n in nsample] Xtest = [None for n in nsample] Y = [None for n in nsample] Ytest = [None for n in nsample] Y_liab = [None for n in nsample] Ytest_liab = [None for n in nsample] beta = [None for n in nsample] for i, n in enumerate(nsample): X[i], Y[i], beta[i], Xtest[i], Ytest[i], Y_liab[i], Ytest_liab[i] \ = simulate_data(n, nvar[i], ncausal, ntest, hg2) print(f'There are {ncausal} non-zero coefficients with heritability {hg2:.4f}') fig = plt.figure(figsize = (12,6)) ax = [fig.add_subplot(121), fig.add_subplot(122)] for i, n in enumerate(nsample): Xbeta = np.dot(X[i], beta[i]) pred = 1 / (1 + np.exp(-Xbeta)) ax[i].scatter(Xbeta, Y[i], s = 10) ax[i].scatter(Xbeta, pred, s = 10) ax[i].set_xlabel(r'$\sum_i X_{ni} \beta_i$') ax[i].set_ylabel(r'$Y_n$') plt.tight_layout() plt.show() # - # ### Implementation of quasi-Laplace with variational approximation # # See Latex notes for details of the theory. Current implementation focus on readability and debug. Computational efficiency has been ignored. # # > Warning: ELBO decreasing with parameter update. # # # # There must be an error in theory or implementation. Debug! # + # collapse-hide def regularized_log_likelihood(beta, X, Y, L): nvar = beta.shape[0] Xbeta = np.dot(X, beta) ## Function llb = np.sum(np.dot(Y, Xbeta) - np.log(1 + np.exp(Xbeta))) reg = - 0.5 * np.einsum('i,i->i', np.square(beta), L) loglik = llb + reg ## Gradient pred = 1 / (1 + np.exp(-Xbeta)) der = np.einsum('i,ij->j', (Y - pred), X) - np.multiply(beta, L) return -loglik, -der def precisionLL(X, beta, L): nvar = X.shape[1] Xbeta = np.einsum('i,ji->j', beta, X) pred = 1 / (1 + np.exp(-Xbeta)) Sinv = np.einsum('i,i,ij,ik->jk', pred, (1 - pred), X, X) Sinv[np.diag_indices(nvar)] += L return Sinv def get_regularized_mode(X, Y, beta0, L): nsample, nvar = X.shape args = X, Y, L gmode = optimize.minimize(regularized_log_likelihood, beta0, args=args, method='L-BFGS-B', jac=True, bounds=None, options={'maxiter': 20000000, 'maxfun': 20000000, 'ftol': 1e-9, 'gtol': 1e-9 #'disp': True }) return gmode.x def get_elbo_qL(alpha, mu, s2, pi, sigmak2, XPX, SinvM, M, L, logdetS, debug = False): Ebeta = np.sum(np.multiply(alpha, mu)[:, 1:], axis = 1) mu2 = np.square(mu) t1 = - 0.5 * np.dot(Ebeta.T, np.dot(XPX, Ebeta)) t2 = np.dot(Ebeta.T, SinvM) t3 = 0 t4 = 0 for j in range(nvar): Ebeta2j = 0 for k in range(nGcomp): mom2jk = mu2[j, k] + s2[j, k] Ebeta2j += alpha[j, k] * mom2jk t4jk = 0 if alpha[j, k] != 0: t4jk += np.log(pi[k]) t4jk += - np.log(alpha[j, k]) if k > 0: t4jk += 0.5 * np.log(s2[j, k]) t4jk += - 0.5 * np.log(sigmak2[k]) t4jk += - 0.5 * mom2jk / sigmak2[k] t4jk += 0.5 t4jk *= alpha[j, k] t4 += t4jk t3 += 0.5 * XPX[j, j] * (Ebeta[j] * Ebeta[j] - Ebeta2j) logdetL = np.sum(np.log(L)) t0 = - 0.5 * (logdetL + logdetS + np.dot(M.T, SinvM)) elbo = t0 + t1 + t2 + t3 + t4 if debug: print(t0, t1 + t2, t3, t4) return elbo #def get_elbo(alpha, mu, s2, pi, sigmak2, XPX, SinvM, M, L, logdetS): def get_elbo(alpha, mu, s2, logpi, sigmak2, XPX, SinvM, M, L, logdetS): nvar, nGcomp = alpha.shape Ebeta = np.sum(np.multiply(alpha, mu)[:, 1:], axis = 1) mu2 = np.square(mu) logdetL = np.sum(np.log(L)) t0 = - 0.5 * (logdetL + logdetS + np.dot(M.T, SinvM)) t1 = - 0.5 * np.dot(Ebeta.T, np.dot(XPX, Ebeta)) t2 = np.dot(Ebeta.T, SinvM) t3 = 0 t4 = 0 for j in range(nvar): Ebeta2j = 0 for k in range(nGcomp): mom2jk = mu2[j, k] + s2[j, k] Ebeta2j += alpha[j, k] * mom2jk t4jk = 0 if alpha[j, k] != 0: #t4jk += np.log(pi[k]) t4jk += logpi[k] t4jk += - np.log(alpha[j, k]) if k > 0: t4jk += 
0.5 * np.log(s2[j, k]) t4jk += - 0.5 * np.log(sigmak2[k]) t4jk += - 0.5 * mom2jk / sigmak2[k] t4jk += 0.5 t4jk *= alpha[j, k] t4 += t4jk t3 += 0.5 * XPX[j, j] * (Ebeta[j] * Ebeta[j] - Ebeta2j) t5 = np.sum(np.multiply(- np.sum(alpha, axis = 0), logpi)) elbo = t0 + t1 + t2 + t3 + t4 + t5 return elbo def ash_qL_regularizer(X, Y, binit, L, mode_optim = True): nsample, nvar = X.shape if mode_optim: M = get_regularized_mode(X, Y, binit, L) else: M = binit.copy() Sinv = precisionLL(X, M, L) S = np.linalg.inv(Sinv) sgndetS, logdetS = np.linalg.slogdet(S) XPX = Sinv.copy() XPX[np.diag_indices(nvar)] -= L return M, S, Sinv, logdetS, XPX def ash_qL_pi_init(nGcomp, pi0, sparse = True): ''' By default, pi is initialized with sparsity, i.e. pi[0] > pi[j], for j > 0 ''' pi = np.zeros(nGcomp) pi[0] = pi0 pi[1:(nGcomp - 1)] = (1 - pi0) / (nGcomp - 1) pi[nGcomp - 1] = 1 - np.sum(pi) if not sparse: pi = np.repeat(1 / nGcomp, nGcomp) return pi def ash_logistic_qL(X, Y, nGcomp, sigmak2, binit, Linit, a_dirich = None, maxiter = 1000, tol = 1e-10, pi0 = 0.8, sparse_init = True, reg_step = 1, reg_tol = 0.1, debug = True, debug_step = 10): ''' Current focus is on readability and debug. ''' nsample, nvar = X.shape # default dirichlet prior on pi if a_dirich is None: a_dirich = np.repeat(1., nGcomp) a_dirich[0] = 1. # soft max at pi[0] ''' Initialize ''' L = Linit.copy() M, S, Sinv, logdetS, XPX = ash_qL_regularizer(X, Y, binit, L) SinvM = np.dot(Sinv, M) SinvRtj = np.dot(Sinv, M) - np.multiply(np.diag(Sinv), M) pi = ash_qL_pi_init(nGcomp, pi0, sparse = sparse_init) logpi = np.log(pi) alpha = np.repeat(np.array([pi]), nvar, axis = 0) logalpha = np.log(alpha) s2 = np.zeros((nvar, nGcomp)) s2[:, 1:] = 1 / (np.diag(XPX).reshape(-1, 1) + 1 / sigmak2[1:]) mu = np.multiply(s2, (SinvM - SinvRtj).reshape(-1,1)) R = np.sum(np.multiply(alpha, mu)[:, 1:], axis = 1) ''' Iteration ''' elbo_old = get_elbo(alpha, mu, s2, logpi, sigmak2, XPX, SinvM, M, L, logdetS) pi_old = pi.copy() pi_reg = pi.copy() if debug: print(f"Starting ELBO: {elbo_old:.4f} Pi0: {pi[0]:.4f}") for it in range(maxiter): # E-step for j in range(nvar): SinvRtj = np.sum(np.multiply(Sinv[:, j], R[:])) - Sinv[j, j] * R[j] s2[j, 0] = 0. mu[j, 0] = 0. #logalpha[j, 0] = np.log(pi[0]) logalpha[j, 0] = logpi[0] for k in range(1, nGcomp): s2[j, k] = 1 / (XPX[j, j] + (1 / sigmak2[k])) mu[j, k] = s2[j, k] * (SinvM[j] - SinvRtj) ssf = np.log(s2[j, k] / sigmak2[k]) ssr = mu[j, k] * mu[j, k] / s2[j, k] logalpha[j, k] = logpi[k] + 0.5 * ssf + 0.5 * ssr #logalpha[j, k] = np.log(pi[k]) + 0.5 * ssf + 0.5 * ssr maxL = np.max(logalpha[j, :]) alpha[j, :] = np.exp(logalpha[j, :] - maxL) alpha[j, :] /= np.sum(alpha[j, :]) R[j] = np.sum(np.multiply(alpha[j, 1:], mu[j, 1:])) # M-step bk = np.sum(alpha, axis = 0) + a_dirich pi = bk / np.sum(bk) logpi = digamma(bk) - digamma(np.sum(bk)) #for k in range(nGcomp): # pi[k] = np.sum(alpha[:, k]) / nvar # Conditional M-step if (it + 1) % reg_step == 0: eps_reg = max([abs(x - y) for x, y in zip(pi, pi_reg)]) if eps_reg > reg_tol: print(f"Recalculating regularizer, step {it}") L = 1 / np.sum(alpha * sigmak2, axis = 1) M, S, Sinv, logdetS, XPX = ash_qL_regularizer(X, Y, R, L) SinvM = np.dot(Sinv, M) pi_reg = pi.copy() elbo = get_elbo(alpha, mu, s2, logpi, sigmak2, XPX, SinvM, M, L, logdetS) eps_elbo = elbo - elbo_old eps_pi = max([abs(x - y) for x, y in zip(pi, pi_old)]) if debug and (it % debug_step == 0): print(f"Iteration {it}. ELBO: {elbo:.4f} Pi0: {pi[0]:.4f} epsilon: {eps_pi:.4f}, {eps_elbo:.4f}") ''' Check termination, continue otherwise. 
''' if eps_pi < tol: print(eps_pi) break ''' Keep this step in memory. Use deep copy to avoid in place substitutions. ''' pi_old = pi.copy() alpha_old = alpha.copy() mu_old = mu.copy() s2_old = s2.copy() R_old = R.copy() elbo_old = elbo return pi, alpha, mu, s2, R, L # - # Hold the result summary statistic in a data frame # + # collapse-hide res_df = [pd.DataFrame(columns = ['Method', 'RMSE', 'Tjur R2', 'Gini Index']) for n in nsample] data_Xbeta = [np.dot(Xtest[i], beta[i]) for i in range(len(nsample))] data_tjur = [tjur_R2(Xtest[i], Ytest[i], beta[i]) for i in range(len(nsample))] data_rmse = [np.sqrt(np.mean(np.square(data_Xbeta[i] - np.dot(Xtest[i], beta[i])))) for i in range(len(nsample))] data_gini = [gini(beta[i]) for i in range(len(nsample))] for i in range(len(nsample)): res_df[i] = res_df[i].append(pd.DataFrame([['Data', data_rmse[i], data_tjur[i], data_gini[i]]], columns = res_df[i].columns)) # - # ### Fitting # # - Lasso # # # + # collapse-hide clf = [linear_model.LogisticRegressionCV(cv = 5, penalty='l1', solver='saga', fit_intercept = False, max_iter = 5000) for n in nsample] for i, n in enumerate(nsample): clf[i].fit(X[i], Y[i]) clf_Xbeta = [np.dot(Xtest[i], clf[i].coef_[0]) for i in range(len(nsample))] clf_tjur = [tjur_R2(Xtest[i], Ytest[i], clf[i].coef_[0]) for i in range(len(nsample))] clf_rmse = [np.sqrt(np.mean(np.square(clf_Xbeta[i] - np.dot(Xtest[i], beta[i])))) for i in range(len(nsample))] clf_gini = [gini(clf[i].coef_[0]) for i in range(len(nsample))] for i in range(len(nsample)): res_df[i] = res_df[i].append(pd.DataFrame([['LASSO', clf_rmse[i], clf_tjur[i], clf_gini[i]]], columns = res_df[i].columns)) # - # - QL Ash # # # + # collapse-hide pi = [None for n in nsample] alpha = [None for n in nsample] mu = [None for n in nsample] s2 = [None for n in nsample] R = [None for n in nsample] L = [None for n in nsample] #probk = ash_qL_pi_init(nGcomp, sparsity) for i in range(len(nsample)): gold_reg = np.repeat(1e4, nvar[i]) for j, b in enumerate(beta[i]): if b != 0: gold_reg[j] = 1 / prior_variance #fix_reg = np.repeat(1 / sigmak2[nGcomp - 1], nvar[i]) #fix_reg = np.repeat( 1 / np.dot(probk, sigmak2), nvar[i]) fix_reg = np.repeat(1 / prior_variance, nvar[i]) binit = np.zeros(nvar[i]) Linit = fix_reg pi[i], alpha[i], mu[i], s2[i], R[i], L[i] \ = ash_logistic_qL(X[i], Y[i], nGcomp, sigmak2, binit, Linit, tol = 1e-4, debug_step = 1, maxiter = 1000, reg_step = 10, pi0 = sparsity, sparse_init = True) qLash_Xbeta = [np.dot(Xtest[i], R[i]) for i in range(len(nsample))] qLash_rmse = [np.sqrt(np.mean(np.square(qLash_Xbeta[i] - np.dot(Xtest[i], beta[i])))) for i in range(len(nsample))] qLash_r2 = [tjur_R2(Xtest[i], Ytest[i], R[i]) for i in range(len(nsample))] qLash_gini = [gini(R[i]) for i in range(len(nsample))] for i in range(len(nsample)): res_df[i] = res_df[i].append(pd.DataFrame([['qLash', qLash_rmse[i], qLash_r2[i], qLash_gini[i]]], columns = res_df[i].columns)) # - # ### Results i = 0 print (f'N = {nsample[i]}, P = {nvar[i]}') res_df[i] i = 1 print (f'N = {nsample[i]}, P = {nvar[i]}') res_df[i] # + # collapse-hide fig = plt.figure(figsize = (12, 7)) ax = [fig.add_subplot(121), fig.add_subplot(122)] for i, n in enumerate(nsample): whichX = Xtest[i].copy() whichY = Ytest[i].copy() Xbeta = np.dot(whichX, beta[i]) pred_data = 1 / (1 + np.exp(-Xbeta)) ax[i].scatter(Xbeta, whichY, s = 1) Xbeta_clf = np.dot(whichX, clf[i].coef_[0]) pred_clf = 1 / (1 + np.exp(-Xbeta_clf)) ax[i].scatter(Xbeta, pred_clf, s = 1, label = 'Lasso') Xbeta_qL = np.dot(whichX, R[i]) pred_qL = 1 / (1 + 
np.exp(-Xbeta_qL)) ax[i].scatter(Xbeta, pred_qL, s = 1, label = 'qLash') leg = ax[i].legend(markerscale = 10) ax[i].set_xlabel(r'$\sum_i X_{ni} \beta_i$') ax[i].set_ylabel(r'$p_n$') ax[i].set_title(f'N = {nsample[i]}, P = {nvar[i]}', pad = 20) plt.tight_layout() plt.show() # - # {{ '. (2009). Coefficients of Determination in Logistic Regression Models - A New Proposal: The Coefficient of Discrimination. The American Statistician, 63(4), 366-372. DOI: [10.1198/tast.2009.08210](https://doi.org/10.1198/tast.2009.08210)' | fndetail: 1 }} # # {{ 'For a review of sparsity indices, see Rickard (2009). Comparing Measures of Sparsity. arXiv. DOI: [arXiv:0811.4706v2](https://arxiv.org/abs/0811.4706v2)' | fndetail: 2 }} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="7UgxMWEAMJFu" import torch import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg import torchvision import torchvision.transforms as transforms from torchvision.utils import save_image import math import cv2 from google.colab.patches import cv2_imshow from keras.regularizers import l2 from keras.models import Sequential from keras.optimizers import Adam, SGD from keras.engine.training import Model from keras import backend as K, regularizers from keras.callbacks import LearningRateScheduler from keras.preprocessing.image import ImageDataGenerator from keras.layers import Add, Conv2D, MaxPooling2D, Dropout, Flatten, Dense, BatchNormalization, Activation import scipy.io as sio from keras.utils import to_categorical import keras.models # + id="TkaXpsM6TqXF" # IMPORTANT! READ THIS! # Must manually import these following 2 files from the drive folder # Simply download files to local pc and upload to colab runtime # 8x8_mask.png # 32x32_white.jpg # + id="mjsJHpdyuKRm" # %%shell # rm -rf * wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1mbkgkpRXeUDwTxVjxdSkkFUGrzr62bVd' -O 8x8_mask.png wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1mMqJCzdQDEv2O4dESOghX7GGaf30cthc' -O 32x32_white.jpg # getting the keras_model direc zipped file (>100 MB through wget) wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1Dki6AJan6sthZy9lfDWtvms3GN8rJPKW' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1Dki6AJan6sthZy9lfDWtvms3GN8rJPKW" -O keras_model.zip && rm -rf /tmp/cookies.txt # unzip under /content/ directory unzip -q keras_model.zip # + colab={"base_uri": "https://localhost:8080/", "height": 83, "referenced_widgets": ["ab59188e5e954df88693c28bdcac952f", "7af27c0b9ed84441983c504b08e282a9", "0ff71eda46bf4764b0eed20b3b2cd2c4", "69cc21c4348141639cc9807da9310fe5", "8f1aa88279e2489b8a00f0d7be0af77e", "639c600d72d84e15b1b324915391b06b", "83c748fc1e30437192ca3d0a09e4eb50", "fddce1b979f84bb3a82c98ff846ae84d"]} id="1ZjEKrbJMV3Q" outputId="aa9e7416-cfbb-4eac-afcb-3181ac919807" # Get SVHN dataset transform = transforms.ToTensor() svhn_data = torchvision.datasets.SVHN(root="data", split="train", transform=transform, download=True) # + id="X0eOCwn4Mgnm" # Save 1000 samples in runtime for i in range(40): if i % 2 == 0: save_image(svhn_data[i][0], str(int(i/2)) + ".jpg") # + id="9Oj8NpBjNJGb" # OpenCV Image Inpainting mask = 
cv2.imread('8x8_mask.png',0) images = [] for i in range(20): img = cv2.imread(str(i) + ".jpg") dst = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA) images.append(dst) # + colab={"base_uri": "https://localhost:8080/", "height": 722} id="90zWxCozN6Dq" outputId="5a28a8f0-8d5d-4042-c275-26bbb021f63e" # Plot the 20 original images from the dataset fig = plt.figure(figsize=(25, 4)) for idx in range(20): img = cv2.imread (str(idx) + '.jpg') ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) # Plot the 20 images with mask applied base = cv2.imread('32x32_white.jpg',0) mask_sample = cv2.rectangle(base, (12,12), (20,20), (0,0,0), -1) fig = plt.figure(figsize=(25, 4)) for idx in range(20): img = cv2.imread (str(idx) + '.jpg') base = cv2.imread('32x32_white.jpg',0) mask_sample = cv2.rectangle(base, (12,12), (20,20), (0,0,0), -1) res = cv2.bitwise_and(img,img,mask = mask_sample) ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) plt.imshow(cv2.cvtColor(res, cv2.COLOR_BGR2RGB)) # Plot the 20 inpainted images by openCV fig = plt.figure(figsize=(25, 4)) for idx in range(20): ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) plt.imshow(cv2.cvtColor(images[idx], cv2.COLOR_BGR2RGB)) # + id="CmmA4LgcBIQM" mask = cv2.imread('6x6_mask.png',0) image = [] label = [] for i in range(5): save_image(svhn_data[i][0], str(i) + ".jpg") img = cv2.imread(str(i) + ".jpg") dst = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA) image.append(dst) label.append(svhn_data[i][1]) # + id="0rVzi1JuB8S3" image_tensor = torch.Tensor(image) label_tensor = torch.Tensor(label) dataset = torch.utils.data.TensorDataset(image_tensor, label_tensor) dataloader = torch.utils.data.DataLoader(dataset) # + id="dhXyMUP1uO1P" loaded_model = keras.models.load_model('/content/keras_model') # + id="iK6oOkYguQKT" from keras.preprocessing.image import ImageDataGenerator # applying transformation to image train_gen = ImageDataGenerator( rotation_range=15, zoom_range = 0.10, width_shift_range=0.3, height_shift_range=0.3, brightness_range=[0.2,1.0] ) # + id="YpqnpkZvuRPw" def get_accuracy_keras (data_loader, batch_size, verbose=False): ''' get accuracy by randomly selecting 10 batches from the data_loader to speed up the training ''' loss, accuracy = 0.0, 0.0 samples = 0 for i, (d_images, labels) in enumerate(data_loader): #KERAS part images = d_images.cpu().detach().numpy() labels = labels.cpu().detach().numpy() one_hot_labels = to_categorical(labels, 10) # images = np.transpose(images, (0, 2, 3, 1)) # Converting the arrays to Float type images = images.astype('float32') # Normalizing images = images / 255.0 keras_batch = train_gen.flow(images, one_hot_labels, batch_size=batch_size) new_loss, new_accuracy = loaded_model.evaluate(keras_batch, verbose=verbose) loss += new_loss accuracy += new_accuracy samples += 1 print(f'sample count: {samples}') return loss/samples, accuracy/samples # + id="J3Dq1k_juStH" loss, acc = get_accuracy_keras(dataloader, batch_size=1) print(loss) print(acc) # + id="l4feuxb-uTwl" blocked_image = [] blocked_label = [] for idx in range(1000): save_image(svhn_data[i][0], str(i) + ".jpg") img = cv2.imread (str(idx) + '.jpg') base = cv2.imread('32x32_white.jpg',0) mask_sample = cv2.rectangle(base, (12,12), (20,20), (0,0,0), -1) res = cv2.bitwise_and(img,img,mask = mask_sample) blocked_image.append(res) blocked_label.append(svhn_data[i][1]) # + id="Zem5huTauU62" blocked_image_tensor = torch.Tensor(blocked_image) blocked_label_tensor = torch.Tensor(blocked_label) 
blocked_dataset = torch.utils.data.TensorDataset(blocked_image_tensor, blocked_label_tensor) blocked_dataloader = torch.utils.data.DataLoader(blocked_dataset) # + id="7a5V83zVuW1p" blocked_loss, blocked_acc = get_accuracy_keras(blocked_dataloader, batch_size=1) print(blocked_loss) print(blocked_acc) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:tensorflow] # language: python # name: conda-env-tensorflow-py # --- # # CIFAR-10 # # Link: https://www.cs.toronto.edu/~kriz/cifar.html # # The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. # # The classes are completely mutually exclusive. There is no overlap between automobiles and trucks. "Automobile" includes sedans, SUVs, things of that sort. "Truck" includes only big trucks. Neither includes pickup trucks. # all the imports import tensorflow as tf import numpy as np import glob # ## Downloading and Preparing the Dataset # The archive contains the files data_batch_1, data_batch_2, ..., data_batch_5, as well as test_batch. Each of these files is a Python "pickled" object produced with Pickle. # # Loaded in this way, each of the batch files contains a dictionary with the following elements: # - data -- a 10000x3072 numpy array of uint8s. Each row of the array stores a 32x32 colour image. The first 1024 entries contain the red channel values, the next 1024 the green, and the final 1024 the blue. The image is stored in row-major order, so that the first 32 entries of the array are the red channel values of the first row of the image. # - labels -- a list of 10000 numbers in the range 0-9. The number at index i indicates the label of the ith image in the array data. 
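#
# Given that row-major, channel-planar layout (1024 red values, then 1024 green, then 1024 blue), a single 3072-entry row can be rebuilt into a 32x32x3 image by first reshaping to (channels, height, width) and then moving the channel axis last. This is a minimal, self-contained sketch on a dummy row, independent of the loading code below.

import numpy as np

dummy_row = np.arange(3072, dtype=np.uint8)             # stands in for one row of the 'data' array
img = dummy_row.reshape(3, 32, 32).transpose(1, 2, 0)   # -> (32, 32, 3): channel planes moved last
print(img.shape)                                        # (32, 32, 3)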
# get all the dataset file names filenames_train = np.array(glob.glob("./cifar-10-batches-py/data_batch_*"), dtype=str) filename_test = "./cifar-10-batches-py/test_batch" filename_label = "./cifar-10-batches-py/batches.meta" filenames_train, filename_test, filename_label # unpickle the data and return the dictionary containing the data and label def unpickle(file): import pickle with open(file, 'rb') as f: dict = pickle.load(f, encoding='bytes') return dict train_images = [] labels = [] for file in filenames_train: cifar = unpickle(file) train_images.extend(cifar[b'data']) labels += cifar[b'labels'] X_train = np.array(train_images, dtype=float) X_train = X_train.reshape(-1, 32, 32, 3) X_train.shape _y_train = np.array(labels).reshape(-1,1) y_train = np.zeros((50000,10)) for i in range(50000): y_train[i][_y_train[i]] = 1 y_train.shape X_train[94], y_train[:10] test_images = [] test_labels = [] cifar = unpickle(filename_test) test_images = cifar[b'data'] test_labels = cifar[b'labels'] X_test = np.array(test_images, dtype=float) X_test = X_test.reshape(-1, 32, 32, 3) X_test.shape _y_test = np.array(test_labels).reshape(-1,1) y_test = np.zeros((10000,10)) for i in range(10000): y_test[i][_y_test[i]] = 1 y_test.shape X_test[94], y_test[94] # ## Create a tensorflow Input pipeline n_batch = 512 buffer_size = 1024 # create the image parser function def _parser_function(image, label): image_norm = tf.divide(image, tf.convert_to_tensor(255.0, dtype=tf.float64)) return image_norm, label # Input Pipeline # create input pipeline using tf.data dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train)) dataset = dataset.shuffle(buffer_size=buffer_size) dataset = dataset.apply(tf.contrib.data.map_and_batch(map_func=_parser_function, batch_size=n_batch)) iterator = dataset.make_initializable_iterator() # create batch iterators input_mini_batch, label_mini_batch = iterator.get_next() # Test Pipeline # create test pipeline using tf.data test_dataset = tf.data.Dataset.from_tensor_slices((X_test, y_test)) test_dataset = test_dataset.shuffle(buffer_size=buffer_size) test_dataset = test_dataset.apply(tf.contrib.data.map_and_batch(map_func=_parser_function, batch_size=n_batch)) test_iterator = test_dataset.make_initializable_iterator() # create batch iterators X_test_mini_batch, y_test_mini_batch = test_iterator.get_next() # ## Define CNN Model # We'll use a simple 2 layered CNN model for this approach X = tf.placeholder(tf.float32, [None, 32, 32, 3]) y_ = tf.placeholder(tf.float32, [None, 10]) # + # create weights and biases with a small amount of noise def weight_variable(shape): initial = tf.truncated_normal(shape, stddev=0.1) return tf.Variable(initial) def bias_variable(shape): initial = tf.constant(0.1, shape=shape) return tf.Variable(initial) # + # convolution and pooling layers def conv2d(x, W): return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') def max_pool_2x2(x): return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') # + # define weights and biases W_conv1 = weight_variable([5,5,3,64]) b_conv1 = bias_variable([64]) W_conv2 = weight_variable([5,5,64,128]) b_conv2 = bias_variable([128]) # fully connected layers W_fc1 = weight_variable([8192,2048]) b_fc1 = bias_variable([2048]) W_fc2 = weight_variable([2048,1024]) b_fc2 = bias_variable([1024]) W_fc3 = weight_variable([1024,10]) b_fc3 = bias_variable([10]) # + # create CNN tensorflow model conv1 = conv2d(X, W_conv1) conv1_activated = tf.nn.relu(conv1 + b_conv1) conv1_maxpool = max_pool_2x2(conv1_activated) 
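# Shape check (batch dimension omitted, assuming NHWC inputs as fed above): the 32x32x3 input
# stays 32x32x64 after the 'SAME' 5x5 convolution and becomes 16x16x64 after the 2x2 max-pool;
# the second conv/pool pair below yields 8x8x128, i.e. 8*8*128 = 8192 features, which matches
# the input size of W_fc1.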
conv2 = conv2d(conv1_maxpool, W_conv2) conv2_activated = tf.nn.relu(conv2 + b_conv2) conv2_maxpool = max_pool_2x2(conv2_activated) print(conv2_maxpool) # - # flatten output from conv layers x_fc = tf.reshape(conv2_maxpool, shape=[-1, 8*8*128]) # + # fully conected layer 1 fc1 = tf.add(tf.matmul(x_fc, W_fc1), b_fc1) fc1_activated = tf.nn.relu(fc1) # fully conected layer 2 fc2 = tf.add(tf.matmul(fc1_activated, W_fc2), b_fc2) fc2_activated = tf.nn.relu(fc2) # fully conected layer 3 y = tf.add(tf.matmul(fc2_activated, W_fc3), b_fc3) # - # ## Cost function # We'll use softmax as our cost function cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=y, labels=y_)) # ## Optimizer # We'll use Adam Optimizer for testing purposes optim = tf.train.AdamOptimizer() # ### Now let's minimize the cost using our optimizer train = optim.minimize(cost) # ### Setup graph to calculate accuracy # returns a boolean array predictions = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) # cast the boolean array to float and calculate accuracy accuracy = tf.reduce_mean(tf.cast(predictions, tf.float32)) # ## Create a tensorflow graph # init the tensorflow session and create train graph # initialize tensorflow interactive session# sess = tf.InteractiveSession() # initialize variables inside the graph tf.global_variables_initializer().run() # ### Load Graph # + active="" # # load graph # saver = tf.train.Saver() # saver.restore(sess, "./model.ckpt") # - # ### Save Graph def save_graph(epoch): saver = tf.train.Saver() save_path = saver.save(sess, "./model_epoch_final.ckpt", global_step=epoch) # Now let's create a function to calculate accuracy of our model def calculate_accuracy(): # reset iterators sess.run(iterator.initializer) sess.run(test_iterator.initializer) train_acc = 0 train_iters = 0 # check train accuracy over 5 mini batches, to reduce computation for _ in range(10): train_iters += 1 X_test, y_test = sess.run([input_mini_batch, label_mini_batch]) train_acc += sess.run(accuracy, feed_dict={X: X_test, y_: y_test}) test_acc = 0 test_iters = 0 while True: try: test_iters += 1 X_test, y_test = sess.run([X_test_mini_batch, y_test_mini_batch]) test_acc += sess.run(accuracy, feed_dict={X: X_test, y_: y_test}) except tf.errors.OutOfRangeError: break # print accuracy print(f"Train accuracy: {train_acc/train_iters}") print(f"Test accuracy: {test_acc/test_iters}") # Finally it's time to train our model n_epochs = 20 for epoch in range(0,n_epochs): epoch_loss = 0 sess.run(iterator.initializer) while True: try: X_for_train, y_for_train = sess.run([input_mini_batch, label_mini_batch]) t, c = sess.run([train, cost], feed_dict={X: X_for_train, y_: y_for_train}) epoch_loss += c except tf.errors.OutOfRangeError: break # print loss print(f"Epoch {epoch} out of {n_epochs}, loss: {epoch_loss}") # print train and test accuracies calculate_accuracy() # Save Graph if epoch % 10 == 0 and epoch >= 2: save_graph(epoch) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Viewing Tracks in Napari # , 12/01/2022 # # Based on [Single Cell Tracking Napari Tutorial](https://napari.org/tutorials/tracking/cell_tracking.html). 
Data from [Cell Tracking Challenge](http://celltrackingchallenge.net/3d-datasets/) # ### Packages # + import os import napari import numpy as np import pandas as pd from skimage.io import imread from skimage.measure import regionprops_table # - # ### Definitions PATH = 'Fluo-N3DH-CE/' NUM_IMAGES = 195 # ### Define functions def load_image(idx: int): """Load an image from the sequence. Parameters ---------- idx : int Index of the image to load. Returns ------- image : np.ndarray The image specified by the index, idx """ filename = os.path.join(PATH, '01_GT/TRA', f'man_track{idx:0>3}.tif') return imread(filename) def regionprops_plus_time(idx): """Return the unique track label, centroid and time for each track vertex. Parameters ---------- idx : int Index of the image to calculate the centroids and track labels. Returns ------- data_df : pd.DataFrame The dataframe of track data for one time step (specified by idx). """ props = regionprops_table(stack[idx, ...], properties=('label', 'centroid')) props['frame'] = np.full(props['label'].shape, idx) return pd.DataFrame(props) def root(node: int): """Recursive function to determine the root node of each subgraph. Parameters ---------- node : int the track_id of the starting graph node. Returns ------- root_id : int The track_id of the root of the track specified by node. """ if full_graph[node] == 0: # we found the root return node return root(full_graph[node]) # ### Get data to be displayed # + # Read in image data - manually annotated stack = np.asarray([load_image(i) for i in range(NUM_IMAGES)]) # Find the centroid of each cell data_df_raw = pd.concat( [regionprops_plus_time(idx) for idx in range(NUM_IMAGES)] ).reset_index(drop=True) # sort the data lexicographically by track_id and time data_df = data_df_raw.sort_values(['label', 'frame'], ignore_index=True) # create the final data array: track_id, T, Z, Y, X data = data_df.loc[ :, ['label', 'frame', 'centroid-0', 'centroid-1', 'centroid-2'] ].to_numpy() # - # ### View Manual Annotations napari.view_image(stack, name='image') napari.run() # ### View Tracklets napari.view_tracks(data, name='tracklets') napari.run() # ### Create graph to represent associations between tracks lbep = np.loadtxt(os.path.join(PATH, '01_GT/TRA', 'man_track.txt'), dtype=np.uint) full_graph = dict(lbep[:, [0, 3]]) graph = {k: v for k, v in full_graph.items() if v != 0} # ### Get root node for lineage trees roots = {k: root(k) for k in full_graph.keys()} properties = {'root_id': [roots[idx] for idx in data[:, 0]]} # ### Read in original image data timelapse = np.asarray( [imread(os.path.join(PATH, '01', f't{i:0>3}.tif')) for i in range(NUM_IMAGES)] ) # ### Visualise tracks and cells # + # scale factor for dimensions in TZYX order SCALE = (1.0, 1.0, 0.09, 0.09) viewer = napari.Viewer() viewer.add_image(timelapse, scale=SCALE, name='Fluo-N3DH-CE') viewer.add_tracks(data, properties=properties, graph=graph, scale=SCALE, name='tracks') napari.run() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import oci import random import time # Default config file and profile config = oci.config.from_file() compartment_id = config['tenancy'] identity = oci.identity.IdentityClient(config) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 
Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np from sklearn.metrics import mean_absolute_error as mea from fbprophet import Prophet import matplotlib.pyplot as plt # %matplotlib inline # + path = 'https://raw.githubusercontent.com/dataworkshop/5dwchallenge_2019/master/challenge5/input/mauna_loa.csv' df = pd.read_csv(path) df.head() # + df['date'] = pd.to_datetime(df[['year', 'month', 'day']]) df['ppm_fixed'] = df['ppm'].map(lambda x: np.nan if x<0 else x) df['ppm_fixed'].fillna(method='backfill', inplace=True) plt.plot(df.date, df.ppm_fixed); # + cut_year = 2008 train = df[df.year < cut_year] test = df[ df.year >= cut_year ] plt.plot(train.date, train.ppm_fixed, label='train'); plt.plot(test.date, test.ppm_fixed, label = 'test'); # - # # Prophet # + fb_df = train[ ['date', 'ppm_fixed']].copy() fb_df.columns = ['ds', 'y'] fb_df.head() # + m = Prophet() m.fit(fb_df) # - future = m.make_future_dataframe(periods=len(test), freq='W', include_history=False) future.tail() test.tail() forecast = m.predict(future) forecast.head() # + train = df[df.year < cut_year] test = df[ df.year >= cut_year ] #plt.plot(train.date, train.ppm_fixed, label='train'); plt.plot(test.date, test.ppm_fixed, label = 'test'); plt.plot(test.date, forecast.yhat, label = 'forecast'); plt.legend(); # - m.plot(forecast); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 8.1-1 # 8.1-1. The Childfair Company has three plants producing child push chairs that are to be shipped to four # distribution centers. Plants 1, 2, and 3 produce 12, 17, and 11 shipments per month, respectively. Each # distribution center needs to receive 10 shipments per month. The distance from each plant to the respective # distributing centers is given as shown below (in miles) # # | Plant| Dist 1 | Dist 2 | Dist 3 | Dist 4 | # | --- | --- | --- | --- | --- | # | Plant 1 | 800 | 1300 | 400 | 700 | # | Plant 2 | 1100 | 1400 | 600 | 1000 | # | Plant 3 | 600 | 1200 | 800 | 900 | # # # # The freight cost for each shipment is $100 plus 50 cents per mile. How much should be shipped from each plant to each # of the distribution centers to minimize the total shipping cost? (a) Formulate this problem as a transportation # problem by constructing the appropriate parameter table. (b) Draw the network representation of this problem. (c) # Obtain an optimal solution. 
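#
# For part (a), the unit cost on each plant–centre arc is the $100 fixed freight charge plus $0.50 per mile, and total supply (12 + 17 + 11 = 40) exactly matches total demand (4 x 10), so the transportation problem is balanced. The short sketch below computes that parameter table directly from the distance matrix in the statement; it is independent of the CPLEX model that follows, which reads the same data from CSV files.

# +
# Per-shipment cost = fixed charge + per-mile rate * distance
distances = {
    'Plant 1': [800, 1300, 400, 700],
    'Plant 2': [1100, 1400, 600, 1000],
    'Plant 3': [600, 1200, 800, 900],
}
fixed_charge, per_mile = 100, 0.50
cost_table = {plant: [fixed_charge + per_mile * d for d in row]
              for plant, row in distances.items()}
print(cost_table)
# {'Plant 1': [500.0, 750.0, 300.0, 450.0],
#  'Plant 2': [650.0, 800.0, 400.0, 600.0],
#  'Plant 3': [400.0, 700.0, 500.0, 550.0]}
# -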
import pandas as pd from docplex.mp.model import Model import os # Read in external data df_arc = pd.read_csv('data_arc.csv') df_plant = pd.read_csv('data_plant.csv') df_distribution = pd.read_csv('data_distribution.csv') # Check df_arc.head() # Check df_plant.head() # Check df_distribution.head() # Indices/sets arcs = list((t.plant, t.distribution) for t in df_arc.itertuples()) plants = df_plant['plant'] distributions = df_distribution['distribution'] # Check arcs # Check plants # Check distributions # Parameters distance = dict([((t.plant, t.distribution), t.distance) for t in df_arc.itertuples()]) capacity = df_plant['capacity'] demand = df_distribution['demand'] # Check distance # Check capacity # Check demand # Fixed parameters shipment_cost = 100 per_mile_cost = .50 max_demand = 10 # Create model m = Model('8.1-1') m # Create decision variables x = m.integer_var_dict(arcs, name='x') x # Objective function m.minimize(m.sum(distance[ij] * per_mile_cost * x[ij] for ij in arcs) + m.sum(shipment_cost * x[ij] for ij in arcs)) # Check print(m.export_to_string()) # capacity constraint for i in plants: m.add_constraint(m.sum(x[(i, j)] for j in distributions) <= capacity[i - 1], ctname='capacity_constraint_%d' % i) # Check print(m.export_to_string()) # demand constraint for j in distributions: m.add_constraint(m.sum(x[i, j] for i in plants) >= demand[i - 1], ctname='demand_constraint_%d' % j) # Check print(m.export_to_string()) # Set parameters for solving model m.parameters.timelimit = 120 m.parameters.mip.strategy.branch = 1 m.parameters.mip.tolerances.mipgap = 0.15 # Solve model soln = m.solve(log_output=True) # Display results print(m.get_solve_status()) soln.display() m.g # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # [Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb) # # The Extended Kalman Filter #format the book # %matplotlib inline # %load_ext autoreload # %autoreload 2 from __future__ import division, print_function import sys sys.path.insert(0,'./code') from book_format import load_style load_style() # At this point in the book we have developed the theory for the linear Kalman filter. Then, in the last two chapters we broached the topic of using Kalman filters for nonlinear problems. In this chapter we will learn the Extended Kalman filter (EKF). The EKF handles nonlinearity by linearizing the system at the point of the current estimate, and then the linear Kalman filter is used to filter this linearized system. It was one of the very first techniques used for nonlinear problems, and it remains the most common technique. Most filters in real world use are EKFs. # # The EKF provides significant mathematical challenges to the designer of the filter; this is the most challenging chapter of the book. To be honest, I do everything I can to avoid the EKF in favor of other techniques that have been developed to filter nonlinear problems. However, the topic is unavoidable; all classic papers and a majority of current papers in the field use the EKF. Even if you do not use the EKF in your own work you will need to be familiar with the topic to be able to read the literature. # ## Linearizing a System # Consider the function $f(x)=x^2−2x$. We want a linear approximation of this function so that we can use it in the Kalman filter. 
We will see how it is used in the Kalman filter in the next section, so don't worry about that yet. We can see that there is no single linear function (line) that gives a close approximation of this function. However, during each innovation (update) of the Kalman filter we know its current state, so if we linearize the function at that value we will have a close approximation. For example, suppose our current state is $x=1.5$. What would be a good linearization for this function? # # We can use any linear function that passes through the curve at (1.5,-0.75). For example, consider using f(x)=8x−12.75 as the linearization, as in the plot below. # + import numpy as np import matplotlib.pyplot as plt xs = np.arange(0, 2, 0.01) ys = [x**2 - 2*x for x in xs] def y(x): return 8*x - 12.75 plt.plot(xs, ys) plt.plot([1.25, 1.75], [y(1.25), y(1.75)]) plt.xlim(1, 2) plt.ylim([-1.5, 1]); # - # This is not a good linearization for $f(x)$. It is exact for $x=1.5$, but quickly diverges when $x$ varies by a small amount. # # A much better approach is to use the slope of the function at the evaluation point as the linearization. We find the slope by taking the first derivative of the function: # # $$f(x) = x^2 -2x \\ # \frac{df}{dx} = 2x - 2$$, # # so the slope at 1.5 is $2*1.5-2=1$. Let's plot that. # + def y(x): return x - 2.25 plt.plot(xs, ys) plt.plot([1, 2], [y(1), y(2)]) plt.xlim(1, 2) plt.ylim([-1.5, 1]); # - # Here we can see that this linearization is much better. It is still exactly correct at $x=1.5$, but the errors are very small as x varies. Compare the tiny error at $x=1.4$ vs the very large error at $x=1.4$ in the previous plot. This does not constitute a formal proof of correctness, but this sort of geometric depiction should be fairly convincing. Certainly it is easy to see that in this case if the line had any other slope the errors would accumulate more quickly. # ## Linearizing the Kalman Filter # # To implement the extended Kalman filter we will leave the linear equations as they are, and use partial derivatives to evaluate the system matrix $\mathbf{F}$ and the measurement matrix $\mathbf{H}$ at the state at time t ($\mathbf{x}_t$). In other words we linearize the equations at time t by finding the slope (derivative) of the equations at that time. Since $\mathbf{F}$ also depends on the control input vector $\mathbf{u}$ we will need to include that term: # # $$ # \begin{aligned} # F # &\equiv {\frac{\partial{f}}{\partial{x}}}\biggr|_{{x_t},{u_t}} \\ # H &\equiv \frac{\partial{h}}{\partial{x}}\biggr|_{x_t} # \end{aligned} # $$ # # All this means is that at each update step we compute $\mathbf{F}$ as the partial derivative of our function $f()$ evaluated at x. We then use a computational technique, such as Taylor expansion, to turn this into a set of linear equations. # # For nonlinear problems our function $f()$ is a set of differential equations. Modeling physical systems with differential equations is well outside the scope of this book. You will need to be reasonably well versed in this branch of applied mathematics to successfully implement the EKF for your problem. If you have not read it yet, please read the section **Modeling Dynamic Systems** in the **Kalman Filter Math** chapter as it contains the math that you will need to complete this chapter. # # I think the easiest way to understand the EKF is to start off with an example. Perhaps the reason for some of my mathematical choices will not be clear, but trust that the end result will be an EKF. 
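# As a quick check of the linearization idea above, the sketch below uses SymPy to compute the slope of $f(x) = x^2 - 2x$ at the evaluation point $x=1.5$ and to build the corresponding tangent line. It should reproduce the $y(x) = x - 2.25$ line used in the earlier plot; this is only an illustration, not part of the filter implementation.

# +
import sympy

x = sympy.symbols('x')
f = x**2 - 2*x
slope = sympy.diff(f, x).subs(x, 1.5)        # 2*1.5 - 2 = 1
tangent = slope*(x - 1.5) + f.subs(x, 1.5)   # line through (1.5, -0.75)
print(slope, sympy.expand(tangent))          # slope -> 1, tangent -> x - 2.25
# -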
# # # **orphan** # The extended Kalman filter (EKF) linearizing the process model for each evolution. For example, consider the problem of tracking a cannonball in flight. Obviously it follows a curved flight path. However, if our update rate is small enough, say 1/100 second, then the trajectory over that time is nearly linear. If we linearize that short segment we will get an answer very close to the actual value, and we can use that value to perform the prediction step of the filter. More often you will have to perform numeric integration. There are many ways to linearize a set of nonlinear differential equations, and the topic is somewhat beyond the scope of this book. In practice, a Taylor series approximation is frequently used with EKFs, and that is what we will use. # ## Example: Tracking a Flying Airplane # We will start by simulating tracking an airplane by using ground based radar. Radars work by emitting a beam of radio waves and scanning for a return bounce. Anything in the beam's path will reflects some of the signal back to the radar. By timing how long it takes for the reflected signal to get back to the radar the system can compute the *slant distance* - the straight line distance from the radar installation to the object. # # For this example we want to take the slant range measurement from the radar and compute the horizontal position (distance of aircraft from the radar measured over the ground) and altitude of the aircraft, as in the diagram below. import ekf_internal ekf_internal.show_radar_chart() # As discussed in the introduction, our measurement model is the nonlinear function $x=\sqrt{slant^2 - altitude^2}$. Therefore we will need a nonlinear # # Predict step: # # $$ # \begin{array}{ll} # \textbf{Linear} & \textbf{Nonlinear} \\ # \mathbf{\bar{x}} = \mathbf{Fx} & \mathbf{\bar{x}} = \underline{f(x)} \\ # \mathbf{\bar{P}} = \mathbf{FPF}^\mathsf{T} + \mathbf{Q} & \mathbf{\bar{P}} = \mathbf{FPF}^\mathsf{T} + \mathbf{Q} # \end{array} # $$ # # Update step: # # $$ # \begin{array}{ll} # \textbf{Linear} & \textbf{Nonlinear} \\ # \mathbf{K} = \mathbf{\bar{P}H}^\mathsf{T}(\mathbf{H\bar{P}H}^\mathsf{T} + \mathbf{R})^{-1}& \mathbf{K} = \mathbf{PH}^\mathsf{T}(\mathbf{HPH}^\mathsf{T} + \mathbf{R})^{-1}\\ # \mathbf{x} = \mathbf{\bar{x}} + \mathbf{K}(\mathbf{z}-\mathbf{H\bar{x}}) & \mathbf{x} = \mathbf{\bar{x}} + \mathbf{K}(\mathbf{z}-\underline{h(x)}) \\ # \mathbf{P} = \mathbf{\bar{P}}(\mathbf{I} - \mathbf{KH}) & \mathbf{P} = \mathbf{\bar{P}}(\mathbf{I} - \mathbf{KH})\\ # \end{array} # $$ # As we can see there are two minor changes to the Kalman filter equations, which I have underlined. The first change replaces the equation $\mathbf{x} = \mathbf{Fx}$ with $\mathbf{x} = f(\mathbf{x})$. In the Kalman filter, $\mathbf{Fx}$ is how we compute the new state based on the old state. However, in a nonlinear system we cannot use linear algebra to compute this transition. So instead we hypothesize a nonlinear function $f()$ which performs this function. Likewise, in the Kalman filter we convert the state to a measurement with the linear function $\mathbf{Hx}$. For the extended Kalman filter we replace this with a nonlinear function $h()$, giving $\mathbf{z}_x = h(\mathbf{x})$. # # The only question left is how do we implement and use $f()$ and $h()$ in the Kalman filter if they are nonlinear? We reach for the single tool that we have available for solving nonlinear equations - we linearize them at the point we want to evaluate the system. 
For example, consider the function $f(x) = x^2 -2x$. # # The rest of the equations are unchanged, so $f()$ and $h()$ must produce a matrix that approximates the values of the matrices $\mathbf{F}$ and $\mathbf{H}$ at the current value for $\mathbf{x}$. We do this by computing the partial derivatives of the state and measurements functions: # ### Design the State Variables # So we want to track the position of an aircraft assuming a constant velocity and altitude, and measurements of the slant distance to the aircraft. That means we need 3 state variables - horizontal distance, velocity, and altitude. # # $$\mathbf{x} = \begin{bmatrix}distance \\velocity\\ altitude\end{bmatrix}= \begin{bmatrix}x_{pos} \\x_{vel}\\ x_{alt}\end{bmatrix}$$ # ### Design the System Model # We will model this as a set of differential equations. So we need an equation in the form # $$\dot{\mathbf{x}} = \mathbf{Ax} + \mathbf{w}$$ # # where $\mathbf{w}$ is the system noise. # # Let's work out the equation for each of the rows in $\mathbf{x}.$ # # The first row is $\dot{x}_{pos}$, which is the velocity of the airplane. So we can say # # $$\dot{x}_{pos} = x_{vel}$$ # # The second row is $\dot{x}_{vel}$, which is the acceleration of the airplane. We assume constant velocity, so the acceleration equals zero. However, we also assume system noise due to things like buffeting winds, errors in control inputs, and so on, so we need to add an error $w_{acc}$ to the term, like so # # $$\dot{x}_{vel} = 0 + w_{acc}$$ # # The final row contains $\dot{x}_{alt}$, which is the rate of change in the altitude. We assume a constant altitude, so this term is 0, but as with acceleration we need to add in a noise term to account for things like wind, air density, and so on. This gives us # # $$\dot{x}_{alt} = 0 + w_{alt}$$ # # We turn this into matrix form with the following: # # $$\dot{\mathbf{x}} = \begin{bmatrix} 0 & 1 & 0 \\ 0& 0& 0 \\ 0&0&0\end{bmatrix} # \begin{bmatrix}x_{pos} \\x_{vel}\\ x_{alt}\end{bmatrix} + \begin{bmatrix}0 \\w_{vel}\\ w_{alt}\end{bmatrix} # $$ # # Now we have our differential equations for the system we can somehow solve for them to get our familiar Kalman filter state equation # # $$ \mathbf{x}=\mathbf{Fx}$$ # # Solving an arbitrary set of differential equations is beyond the scope of this book, however most Kalman filters are amenable to Taylor-series expansion which I will briefly explain here without proof. The section **Modeling Dynamic Systems** in the **Kalman Filter Math** chapter contains much more information on this technique. # # Given the partial differential equation # # $$\mathbf{F} = \frac{\partial f(\mathbf{x})}{\partial x}$$ # # the solution is $e^{\mathbf{F}t}$. This is a standard answer learned in a first year partial differential equations course, and is not intuitively obvious from the material presented so far. However, we can compute the exponential matrix $e^{\mathbf{F}t}$ using a Taylor-series expansion in the form: # # $$\Phi = \mathbf{I} + \mathbf{F}\Delta t + \frac{(\mathbf{F}\Delta t)^2}{2!} + \frac{(\mathbf{F}\Delta t)^3}{3!} + \ldots$$ # # You may expand that equation to as many terms as required for accuracy, however many problems only use the first term # # $$\Phi \approx \mathbf{I} + \mathbf{F}\Delta t$$ # We can then compute the system matrix by substituting $\Phi$ in $x(t_k) = \Phi(\Delta t)x(t_{k-1})$. Thus, $\Phi$ is our system matrix. # # We cannot use Greek symbols in Python, so the code uses the symbol `F` for $\Phi$. This is admittedly confusing. 
In the math above $\mathbf{F}$ represents the system of partial differential equations, and $\Phi$ is the system matrix. In the Python the partial differential equations are not represented in the code, and the system matrix is `F`. # ### Design the Measurement Model # The measurement function for our filter needs to take the filter state $\mathbf{x}$ and turn it into a slant range distance. This is nothing more than the Pythagorean theorem. # # $$h(\mathbf{x}) = \sqrt{x_{pos}^2 + x_{alt}^2}$$ # # The relationship between the slant distance and the position on the ground is nonlinear due to the square root term. # So what we need to do is linearize the measurement function at some point. As we discussed above, the best way to linearize an equation at a point is to find its slope, which we do by taking its derivative. # # $$ # \mathbf{H} \equiv \frac{\partial{h}}{\partial{x}}\biggr|_x # $$ # # The derivative of a matrix is called a Jacobian, which in general takes the form # # $$\frac{\partial \mathbf{h}}{\partial \mathbf{x}} = # \begin{bmatrix} # \frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} &\dots \\ # \frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2} &\dots \\ # \vdots & \vdots # \end{bmatrix} # $$ # # In other words, each element in the matrix is the partial derivative of the function $h$ with respect to the variables $x$. For our problem we have # # $$\mathbf{H} = \begin{bmatrix}\frac{\partial h}{\partial x_{pos}} & \frac{\partial h}{\partial x_{vel}} & \frac{\partial h}{\partial x_{alt}}\end{bmatrix}$$ # # where $h(x) = \sqrt{x_{pos}^2 + x_{alt}^2}$ as given above. # # Solving each in turn: # # $$\begin{aligned} # \frac{\partial h}{\partial x_{pos}} &= \\ &=\frac{\partial}{\partial x_{pos}} \sqrt{x_{pos}^2 + x_{alt}^2} \\ &= \frac{x_{pos}}{\sqrt{x^2 + x_{alt}^2}} # \end{aligned}$$ # # and # # $$\begin{aligned} # \frac{\partial h}{\partial x_{vel}} &=\\ # &= \frac{\partial}{\partial x_{vel}} \sqrt{x_{pos}^2 + x_{alt}^2} \\ # &= 0 # \end{aligned}$$ # # and # # $$\begin{aligned} # \frac{\partial h}{\partial x_{alt}} &=\\ &= \frac{\partial}{\partial x_{alt}} \sqrt{x_{pos}^2 + x_{alt}^2} \\ &= \frac{x_{alt}}{\sqrt{x_{pos}^2 + x_{alt}^2}} # \end{aligned}$$ # # giving us # # $$\mathbf{H} = # \begin{bmatrix} # \frac{x_{pos}}{\sqrt{x_{pos}^2 + x_{alt}^2}} & # 0 & # & # \frac{x_{alt}}{\sqrt{x_{pos}^2 + x_{alt}^2}} # \end{bmatrix}$$ # # This may seem daunting, so step back and recognize that all of this math is doing something very simple. We have an equation for the slant range to the airplane which is nonlinear. The Kalman filter only works with linear equations, so we need to find a linear equation that approximates $\mathbf{H}$ As we discussed above, finding the slope of a nonlinear equation at a given point is a good approximation. For the Kalman filter, the 'given point' is the state variable $\mathbf{x}$ so we need to take the derivative of the slant range with respect to $\mathbf{x}$. # # To make this more concrete, let's now write a Python function that computes the Jacobian of $\mathbf{H}$. The `ExtendedKalmanFilter` class will be using this to generate `ExtendedKalmanFilter.H` at each step of the process. 
from math import sqrt def HJacobian_at(x): """ compute Jacobian of H matrix for state x """ horiz_dist = x[0] altitude = x[2] denom = sqrt(horiz_dist**2 + altitude**2) return array ([[horiz_dist/denom, 0., altitude/denom]]) # Finally, let's provide the code for $h(\mathbf{x})$ def hx(x): """ compute measurement for slant range that would correspond to state x. """ return (x[0]**2 + x[2]**2) ** 0.5 # Now lets write a simulation for our radar. # + from numpy.random import randn import math class RadarSim(object): """ Simulates the radar signal returns from an object flying at a constant altityude and velocity in 1D. """ def __init__(self, dt, pos, vel, alt): self.pos = pos self.vel = vel self.alt = alt self.dt = dt def get_range(self): """ Returns slant range to the object. Call once for each new measurement at dt time from last call. """ # add some process noise to the system self.vel = self.vel + .1*randn() self.alt = self.alt + .1*randn() self.pos = self.pos + self.vel*self.dt # add measurement noise err = self.pos * 0.05*randn() slant_dist = math.sqrt(self.pos**2 + self.alt**2) return slant_dist + err # - # Now we can implement our filter. I have not yet designed $\mathbf{R}$ and $\mathbf{Q}$ which is required to get optimal performance. However, we have already covered a lot of confusing material and I want you to see concrete examples as soon as possible. Therefore I will use 'reasonable' values for $\mathbf{R}$ and $\mathbf{Q}$. # # The `FilterPy` library provides the class `ExtendedKalmanFilter`. It works very similar to the `KalmanFilter` class we have been using, except that it allows you to provide functions that compute the Jacobian of $\mathbf{H}$ and the function $h(\mathbf{x})$. We have already written the code for these two functions, so let's get going. # # We start by importing the filter and creating it. There are 3 variables in `x` and only 1 measurement. At the same time we will create our radar simulator. # # ```python # from filterpy.kalman import ExtendedKalmanFilter # # rk = ExtendedKalmanFilter(dim_x=3, dim_z=1) # radar = RadarSim(dt, pos=0., vel=100., alt=1000.) # ``` # # We will initialize the filter near the airplane's actual position # # ```python # rk.x = array([radar.pos, radar.vel-10, radar.alt+100]) # ``` # # We assign the system matrix using the first term of the Taylor series expansion we computed above. # # ```python # dt = 0.05 # rk.F = eye(3) + array([[0, 1, 0], # [0, 0, 0], # [0, 0, 0]])*dt # ``` # # After assigning reasonable values to $\mathbf{R}$, $\mathbf{Q}$, and $\mathbf{P}$ we can run the filter with a simple loop # # ```python # for i in range(int(20/dt)): # z = radar.get_range() # rk.update(array([z]), HJacobian_at, hx) # rk.predict() # ``` # # Putting that all together along with some boilerplate code to save the results and plot them, we get # + from filterpy.kalman import ExtendedKalmanFilter from numpy import eye, array, asarray dt = 0.05 rk = ExtendedKalmanFilter(dim_x=3, dim_z=1) radar = RadarSim(dt, pos=0., vel=100., alt=1000.) 
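# dim_x=3 matches the [position, velocity, altitude] state designed above; dim_z=1 is the single slant-range measurement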
# make an imperfect starting guess rk.x = array([radar.pos-100, radar.vel+100, radar.alt+1000]) rk.F = eye(3) + array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])*dt rk.R = radar.alt * 0.05 # 5% of distance rk.Q = array([[0, 0, 0], [0, 1, 0], [0, 0, 1]]) * 0.001 rk.P *= 50 xs = [] track = [] for i in range(int(20/dt)): z = radar.get_range() track.append((radar.pos, radar.vel, radar.alt)) rk.update(array([z]), HJacobian_at, hx) xs.append(rk.x) rk.predict() xs = asarray(xs) track = asarray(track) time = np.arange(0, len(xs)*dt, dt) ekf_internal.plot_radar(xs, track, time) # - # ## Using SymPy to compute Jacobians # Depending on your experience with derivatives you may have found the computation of the Jacobian above either fairly straightforward, or quite difficult. Even if you found it easy, a slightly more difficult problem easily leads to very difficult computations. # # As explained in Appendix A, we can use the SymPy package to compute the Jacobian for us. # + import sympy sympy.init_printing() x_pos, x_vel, x_alt = sympy.symbols('x_pos, x_vel x_alt') H = sympy.Matrix([sympy.sqrt(x_pos**2 + x_alt**2)]) state = sympy.Matrix([x_pos, x_vel, x_alt]) H.jacobian(state) # - # This result is the same as the result we computed above, and at much less effort on our part! # ## Designing Q # **author's note: ignore this, it to be revised - noise in position and altitude is independent, not dependent** # # Now we need to design the process noise matrix $\mathbf{Q}$. From the previous section we have the system equation # # $$\dot{\mathbf{x}} = \begin{bmatrix} 0 & 1 & 0 \\ 0& 0& 0 \\ 0&0&0\end{bmatrix} # \begin{bmatrix}x_{pos} \\x_{vel}\\ x_{alt}\end{bmatrix} + \begin{bmatrix}0 \\w_{vel}\\ w_{alt}\end{bmatrix} # $$ # # where our process noise is # # $$w = \begin{bmatrix}0 \\w_{vel}\\ w_{alt}\end{bmatrix}$$ # # We know from the Kalman filter math chapter that # # $$\mathbf{Q} = E(ww^T)$$ # # where $E(\bullet)$ is the expected value. We compute the expected value as # # $$\mathbf{Q} = \int_0^{dt} \Phi(t)\mathbf{Q}\Phi^T(t) dt$$ # Rather than do this by hand, let's use sympy. # + import sympy from sympy import Matrix sympy.init_printing(use_latex='mathjax') w_vel, w_alt, dt = sympy.symbols('w_vel w_alt \Delta{t}') w = Matrix([[0, w_vel, w_alt]]).T phi = Matrix([[1, dt, 0], [0, 1, 0], [0,0,1]]) q = w*w.T sympy.integrate(phi*q*phi.T, (dt, 0, dt)) # - # ## Robot Localization # # So, time to try a real problem. I warn you that this is far from a simple problem. However, most books choose simple, textbook problems with simple answers, and you are left wondering how to implement a real world solution. # # We will consider the problem of robot localization. We already implemented this in the **Unscented Kalman Filter** chapter, and I recommend you read that first. In this scenario we have a robot that is moving through a landscape with sensors that give range and bearings to various landmarks. This could be a self driving car using computer vision to identify trees, buildings, and other landmarks. Or, it might be one of those small robots that vacuum your house. It could be a search and rescue device meant to go into dangerous areas to search for survivors. It doesn't matter too much. # # Our robot is wheeled, which means that it manuevers by turning it's wheels. When it does so, the robot pivots around the rear axle while moving forward. This is nonlinear behavior which we will have to account for. The robot has a sensor that gives it approximate range and bearing to known targets in the landscape. 
This is nonlinear because computing a position from a range and bearing requires square roots and trigonometry. # # ### Robot Motion Model ekf_internal.plot_bicycle() # At a first approximation n automobile steers by turning the front tires while moving forward. The front of the car moves in the direction that the wheels are pointing while pivoting around the rear tires. This simple description is complicated by issues such as slippage due to friction, the differing behavior of the rubber tires at different speeds, and the need for the outside tire to travel a different radius than the inner tire. Accurately modelling steering requires an ugly set of differential equations. For Kalman filtering, especially for lower speed robotic applications a simpler *bicycle model* has been found to perform well. # # I have depicted this model above. Here we see the front tire is pointing in direction $\alpha$. Over a short time period the car moves forward and the rear wheel ends up further ahead and slightly turned inward, as depicted with the blue shaded tire. Over such a short time frame we can approximate this as a turn around a radius $R$. If you google bicycle model you will find that we can compute the turn angle $\beta$ with # # $$\beta = \frac{d}{w} \tan{(\alpha)}$$ # # and the turning radius R is given by # # $$R = \frac{d}{\beta}$$ # # where the distance the rear wheel travels given a forward velocity $v$ is $d=v\Delta t$. # # If we let $\theta$ be our current orientation then we can compute the position $C$ before the turn starts as # # $$ C_x = x - R\sin(\theta) \\ # C_y = y + R\cos(\theta) # $$ # # After the move forward for time $\Delta t$ the new position and orientation of the robot is # # $$\begin{aligned} x &= C_x + R\sin(\theta + \beta) \\ # y &= C_y - R\cos(\theta + \beta) \\ # \theta &= \theta + \beta # \end{aligned} # $$ # # Once we substitute in for $C$ we get # # $$\begin{aligned} x &= x - R\sin(\theta) + R\sin(\theta + \beta) \\ # y &= y + R\cos(\theta) - R\cos(\theta + \beta) \\ # \theta &= \theta + \beta # \end{aligned} # $$ # # You don't really need to understand this math in detail, as it is already a simplification of the real motion. The important thing to recognize is that our motion model is nonlinear, and we will need to deal with that with our Kalman filter. # ### Design the State Variables # # For our robot we will maintain the position and orientation of the robot: # # $$\mathbf{x} = \begin{bmatrix}x \\ y \\ \theta\end{bmatrix}$$ # # I could include velocities into this model, but as you will see the math will already be quite challenging. # # Our control input $\mathbf{u}$ is the velocity and steering angle # # $$\mathbf{u} = \begin{bmatrix}v \\ \alpha\end{bmatrix}$$ # ### Design the System Model # # In general we model our system as a nonlinear motion model plus noise. # # $$x^- = x + f(x, u) + \mathcal{N}(0, Q)$$ # # Using the motion model for a robot that we created above, we can expand this to # # $$\begin{bmatrix}x\\y\\\theta\end{bmatrix}^- = \begin{bmatrix}x\\y\\\theta\end{bmatrix} + # \begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\ # R\cos(\theta) - R\cos(\theta + \beta) \\ # \beta\end{bmatrix}$$ # # We linearize this with a taylor expansion at $x$: # # $$f(x, u) \approx \mathbf{x} + \frac{\partial f(x, u)}{\partial x}$$ # # We replace $f(x, u)$ with our state estimate $\mathbf{x}$, and the derivative is the Jacobian of $f$. 
# The Jacobian $\mathbf{F}$ is # # $$\mathbf{F} = \frac{\partial f(x, u)}{\partial x} =\begin{bmatrix} # \frac{\partial \dot{x}}{\partial x} & # \frac{\partial \dot{x}}{\partial y} & # \frac{\partial \dot{x}}{\partial \theta}\\ # \frac{\partial \dot{y}}{\partial x} & # \frac{\partial \dot{y}}{\partial y} & # \frac{\partial \dot{y}}{\partial \theta} \\ # \frac{\partial \dot{\theta}}{\partial x} & # \frac{\partial \dot{\theta}}{\partial y} & # \frac{\partial \dot{\theta}}{\partial \theta} # \end{bmatrix} # $$ # # When we calculate these we get # # $$\mathbf{F} = \begin{bmatrix} # 1 & 0 & -R\cos(\theta) + R\cos(\theta+\beta) \\ # 0 & 1 & -R\sin(\theta) + R\sin(\theta+\beta) \\ # 0 & 0 & 1 # \end{bmatrix}$$ # # We can double check our work with SymPy. # + from sympy import symbols a, x, y, v, w, theta, time = symbols('a, x, y, v, w, theta, t') d = v*time beta = (d/w)*sympy.tan(a) R = w/sympy.tan(a) fxu = Matrix([[x-R*sympy.sin(theta)+R*sympy.sin(theta+beta)], [y+R*sympy.cos(theta)-R*sympy.cos(theta+beta)], [theta+beta]]) fxu.jacobian(Matrix([x, y, theta])) # - # Now we can turn our attention to the noise. Here, the noise is in our control input, so it is in *control space*. In other words, we command a specific velocity and steering angle, but we need to convert that into errors in $x, y, \theta$. In a real system this might vary depending on velocity, so it will need to be recomputed for every prediction. I will choose this as the noise model; for a real robot you will need to choose a model that accurately depicts the error in your system. # # $$\mathbf{M} = \begin{bmatrix}0.01 vel^2 & 0 \\ 0 & \sigma_\alpha^2\end{bmatrix}$$ # # If this was a linear problem we would convert from control space to state space using the by now familiar $\mathbf{FMF}^\mathsf{T}$ form. Since our motion model is nonlinear we do not try to find a closed form solution to this, but instead linearize it with a Jacobian which we will name $\mathbf{V}$. # # $$\mathbf{V} = \frac{\partial f(x, u)}{\partial u} \begin{bmatrix} # \frac{\partial \dot{x}}{\partial v} & \frac{\partial \dot{x}}{\partial \alpha} \\ # \frac{\partial \dot{y}}{\partial v} & \frac{\partial \dot{y}}{\partial \alpha} \\ # \frac{\partial \dot{\theta}}{\partial v} & \frac{\partial \dot{\theta}}{\partial \alpha} # \end{bmatrix}$$ # # Let's compute that with SymPy: fxu.jacobian(Matrix([v, a])) # # **authors note: explain FPF better** # # This gives us the final form of our prediction equations: # # $$\begin{aligned} # \mathbf{\bar{x}} &= \mathbf{x} + # \begin{bmatrix}- R\sin(\theta) + R\sin(\theta + \beta) \\ # R\cos(\theta) - R\cos(\theta + \beta) \\ # \beta\end{bmatrix}\\ # \mathbf{\bar{P}} &=\mathbf{FPF}^{\mathsf{T}} + \mathbf{VMV}^{\mathsf{T}} # \end{aligned}$$ # # One final point. This form of linearization is not the only way to predict $\mathbf{x}$. For example, we could use a numerical integration technique like *Runge Kutta* to compute the position of the robot in the future. In fact, if the time step is relatively large you will have to do that. As I am sure you are realizing, things are not as cut and dried with the EKF as it was for the KF. For a real problem you have to very carefully model your system with differential equations and then determine the most appropriate way to solve that system. The correct approach depends on the accuracy you require, how nonlinear the equations are, your processor budget, and numerical stability concerns. These are all topics beyond the scope of this book. 
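# Before moving on to the measurement model, here is a minimal numeric sketch of the bicycle motion update derived above, which can be handy for sanity-checking the Jacobians. The wheelbase, speed, steering angle and time step below are illustrative values only, not part of the filter design.

# +
import numpy as np

def bicycle_step(x, y, theta, v, alpha, w=0.5, dt=0.1):
    """Advance (x, y, theta) by dt using the bicycle model above."""
    d = v * dt                          # distance travelled by the rear wheel
    beta = (d / w) * np.tan(alpha)      # turn angle over this step
    R = w / np.tan(alpha)               # turning radius (equivalently d / beta)
    # no guard for alpha == 0 here; RobotEKF.move() below handles that case
    x += -R * np.sin(theta) + R * np.sin(theta + beta)
    y += R * np.cos(theta) - R * np.cos(theta + beta)
    return x, y, theta + beta

print(bicycle_step(2.0, 6.0, 0.3, v=1.1, alpha=0.01))
# -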
# ### Design the Measurement Model # # Now we need to design our measurement model. For this problem we are assuming that we have a sensor that receives a noisy bearing and range to multiple known locations in the landscape. The measurement model must convert the state $\begin{bmatrix}x & y&\theta\end{bmatrix}^\mathsf{T}$ into a range and bearing to the landmark. Using $p$ be the position of a landmark, the range $r$ is # # $$r = \sqrt{(p_x - x)^2 + (p_y - y)^2}$$ # # We assume that the sensor provides bearing relative to the orientation of the robot, so we must subtract the robot's orientation from the bearing to get the sensor reading, like so: # # $$\phi = \arctan(\frac{p_y - y}{p_x - x}) - \theta$$ # # # Thus our function is # # # $$\begin{aligned} # \mathbf{x}& = h(x,p) &+ \mathcal{N}(0, R)\\ # &= \begin{bmatrix} # \sqrt{(p_x - x)^2 + (p_y - y)^2} \\ # \arctan(\frac{p_y - y}{p_x - x}) - \theta # \end{bmatrix} &+ \mathcal{N}(0, R) # \end{aligned}$$ # # This is clearly nonlinear, so we need linearize $h(x, p)$ at $\mathbf{x}$ by taking its Jacobian. We compute that with SymPy below. px, py = symbols('px, py') z = Matrix([[sympy.sqrt((px-x)**2 + (py-y)**2)], [sympy.atan2(py-y, px-x) - theta]]) z.jacobian(Matrix([x, y, theta])) # Now we need to write that as a Python function. For example we might write: # + from math import sqrt def H_of(x, landmark_pos): """ compute Jacobian of H matrix where h(x) computes the range and bearing to a landmark for state x """ px = landmark_pos[0] py = landmark_pos[1] hyp = (px - x[0, 0])**2 + (py - x[1, 0])**2 dist = sqrt(hyp) H = array( [[-(px - x[0, 0]) / dist, -(py - x[1, 0]) / dist, 0], [ (py - x[1, 0]) / hyp, -(px - x[0, 0]) / hyp, -1]]) return H # - # We also need to define a function that converts the system state into a measurement. from math import atan2 def Hx(x, landmark_pos): """ takes a state variable and returns the measurement that would correspond to that state. """ px = landmark_pos[0] py = landmark_pos[1] dist = sqrt((px - x[0, 0])**2 + (py - x[1, 0])**2) Hx = array([[dist], [atan2(py - x[1, 0], px - x[0, 0]) - x[2, 0]]]) return Hx # ### Design Measurement Noise # # This is quite straightforward as we need to specify measurement noise in measurement space, hence it is linear. It is reasonable to assume that the range and bearing measurement noise is independent, hence # # $$R=\begin{bmatrix}\sigma_{range}^2 & 0 \\ 0 & \sigma_{bearing}^2\end{bmatrix}$$ # # ### Implementation # # We will use `FilterPy`'s `ExtendedKalmanFilter` class to implment the filter. The prediction of $\mathbf{x}$ is nonlinear, so we will have to override the method `predict()` to implement this. I'll want to also use this code to simulate the robot, so I'll add a method `move()` that computes the position of the robot which both `predict()` and my simulation can call. You would not need to do this for a real robot, of course. # # The matrices for the prediction step are quite large; while trying to implement this I made several errors before I finally got it working. I only found my errors by using SymPy's `evalf` function, which allows you to evaluate a SymPy `Matrix` for specific values of the variables. I decided to demonstrate this technique, and to eliminate a possible source of bugs, by using SymPy in the Kalman filter. You'll need to understand a couple of points. # # First, `evalf` uses a dictionary to pass in the values you want to use. 
For example, if your matrix contains an x and y, you can write # # ```python # M.evalf(subs={x:3, y:17}) # ``` # # to evaluate the matrix for `x=3` and `y=17`. # # Second, `evalf` returns a `sympy.Matrix` object. You can convert it to a numpy array with `numpy.array(m)`, but the result uses type `object` for the elements in the array. You can convert the array to an array of floats with ``numpy.array(m).astype(float)`. # # So, here is the code: from filterpy.kalman import ExtendedKalmanFilter as EKF from numpy import dot, array, sqrt class RobotEKF(EKF): def __init__(self, dt, wheelbase, std_vel, std_steer): EKF.__init__(self, 3, 2, 2) self.dt = dt self.wheelbase = wheelbase self.std_vel = std_vel self.std_steer = std_steer a, x, y, v, w, theta, time = symbols( 'a, x, y, v, w, theta, t') d = v*time beta = (d/w)*sympy.tan(a) r = w/sympy.tan(a) self.fxu = Matrix( [[x-r*sympy.sin(theta)+r*sympy.sin(theta+beta)], [y+r*sympy.cos(theta)-r*sympy.cos(theta+beta)], [theta+beta]]) self.F_j = self.fxu.jacobian(Matrix([x, y, theta])) self.V_j = self.fxu.jacobian(Matrix([v, a])) # save dictionary and it's variables for later use self.subs = {x: 0, y: 0, v:0, a:0, time:dt, w:wheelbase, theta:0} self.x_x, self.x_y, = x, y self.v, self.a, self.theta = v, a, theta def predict(self, u=0): self.x = self.move(self.x, u, self.dt) self.subs[self.theta] = self.x[2, 0] self.subs[self.v] = u[0] self.subs[self.a] = u[1] F = array(self.F_j.evalf(subs=self.subs)).astype(float) V = array(self.V_j.evalf(subs=self.subs)).astype(float) # covariance of motion noise in control space M = array([[self.std_vel*u[0]**2, 0], [0, self.std_steer**2]]) self.P = dot(F, self.P).dot(F.T) + dot(V, M).dot(V.T) def move(self, x, u, dt): h = x[2, 0] v = u[0] steering_angle = u[1] dist = v*dt if abs(steering_angle) < 0.0001: # approximate straight line with huge radius r = 1.e-30 b = dist / self.wheelbase * tan(steering_angle) r = self.wheelbase / tan(steering_angle) # radius sinh = sin(h) sinhb = sin(h + b) cosh = cos(h) coshb = cos(h + b) return x + array([[-r*sinh + r*sinhb], [r*cosh - r*coshb], [b]]) # Now we have another issue to handle. The residual is notionally computed as $y = z - h(x)$ but this will not work because our measurement contains an angle in it. Suppose z has a bearing of $1^\circ$ and $h(x)$ has a bearing of $359^\circ$. Naively subtracting them would yield a bearing difference of $-358^\circ$, which will throw off the computation of the Kalman gain. The correct angle difference in this case is $2^\circ$. So we will have to write code to correctly compute the bearing residual. def residual(a, b): """ compute residual (a-b) between measurements containing [range, bearing]. Bearing is normalized to [-pi, pi)""" y = a - b y[1] = y[1] % (2 * np.pi) # force in range [0, 2 pi) if y[1] > np.pi: # move to [-pi, pi) y[1] -= 2 * np.pi return y # The rest of the code runs the simulation and plots the results, and shouldn't need too much comment by now. I create a variable `landmarks` that contains the coordinates of the landmarks. I update the simulated robot position 10 times a second, but run the EKF only once. This is for two reasons. First, we are not using Runge Kutta to integrate the differental equations of motion, so a narrow time step allows our simulation to be more accurate. Second, it is fairly normal in embedded systems to have limited processing speed. This forces you to run your Kalman filter only as frequently as absolutely needed. 
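# Before running the simulation, here is a quick sanity check of the `residual()` function defined above: a measured bearing of 1 degree against a predicted bearing of 359 degrees should give a residual of about +2 degrees, not -358. The range component is left at zero; the values are made up purely for illustration.

# +
import numpy as np

a = np.array([0., np.radians(1)])
b = np.array([0., np.radians(359)])
print(np.degrees(residual(a, b)))   # expect approximately [0., 2.]
# -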
# + from filterpy.stats import plot_covariance_ellipse from math import sqrt, tan, cos, sin, atan2 dt = 1.0 def z_landmark(lmark, sim_pos, std_rng, std_brg): x, y = sim_pos[0, 0], sim_pos[1, 0] d = np.sqrt((lmark[0] - x)**2 + (lmark[1] - y)**2) a = atan2(lmark[1] - y, lmark[0] - x) - sim_pos[2, 0] z = np.array([[d + randn()*std_rng], [a + randn()*std_brg]]) return z def ekf_update(ekf, z, landmark): ekf.update(z, HJacobian=H_of, Hx=Hx, residual=residual, args=(landmark), hx_args=(landmark)) def run_localization(landmarks, std_vel, std_steer, std_range, std_bearing): ekf = RobotEKF(dt, wheelbase=0.5, std_vel=std_vel, std_steer=std_steer) ekf.x = array([[2, 6, .3]]).T ekf.P = np.diag([.1, .1, .1]) ekf.R = np.diag([std_range**2, std_bearing**2]) sim_pos = ekf.x.copy() # simulated position # steering command (vel, steering angle radians) u = array([1.1, .01]) plt.scatter(landmarks[:, 0], landmarks[:, 1], marker='s', s=60) for i in range(200): sim_pos = ekf.move(sim_pos, u, dt/10.) # simulate robot plt.plot(sim_pos[0], sim_pos[1], ',', color='g') if i % 10 == 0: ekf.predict(u=u) plot_covariance_ellipse( (ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2], std=6, facecolor='b', alpha=0.08) x, y = sim_pos[0, 0], sim_pos[1, 0] for lmark in landmarks: z = z_landmark(lmark, sim_pos, std_range, std_bearing) ekf_update(ekf, z, lmark) plot_covariance_ellipse( (ekf.x[0,0], ekf.x[1,0]), ekf.P[0:2, 0:2], std=6, facecolor='g', alpha=0.4) plt.axis('equal') plt.show() return ekf # + landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5]]) landmarks = array([[5, 10], [10, 5], [15, 15]]) ekf = run_localization( landmarks, std_vel=0.1, std_steer=np.radians(1), std_range=0.3, std_bearing=0.1) print(ekf.P.diagonal()) # + landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5]]) ekf = run_localization( landmarks, std_vel=0.1, std_steer=np.radians(1), std_range=0.3, std_bearing=0.1) print(ekf.P.diagonal()) # - # I have plotted the landmarks as solid squares. The path of the robot is drawn with a dashed line, which is admittedly hard to see. The covariance after the predict step is drawn in a very light shade of blue, and the covariance after all of the landmark measurements are incorporated are shown in green. To make them visible at this scale I have set the ellipse boundary at 6$\sigma$. # # From this we can see that there is a lot of uncertainty added by our motion model, and that most of the error in in the direction of motion. We can see that from the shape of the blue ellipses. After a few steps we can see that the filter incorporates the landmark measurements. # # We can see the fantastic effect that multiple landmarks has on our uncertainty by only using the first two landmarks. ekf = run_localization( landmarks[0:2], std_vel=1.e-10, std_steer=1.e-10, std_range=1.4, std_bearing=.05) print(ekf.P.diagonal()) # We can see that the covariance gets smaller as it passes through the landmarks but quickly expands once past them. Let's see what happens with only one landmark ekf = run_localization( landmarks[0:1], std_vel=1.e-10, std_steer=1.e-10, std_range=1.4, std_bearing=.05) print(ekf.P.diagonal()) # As you probably suspected, only one landmark produces a very bad covariance. What is worse, the filter starts to diverge from the robot's path. On the other hand, a large number of landmarks allows us to make very accurate estimates. 
# +
landmarks = array([[5, 10], [10, 5], [15, 15], [20, 5], [15, 10],
                   [10, 14], [23, 14], [25, 25], [10, 20]])

ekf = run_localization(
    landmarks, std_vel=0.1, std_steer=np.radians(1),
    std_range=0.3, std_bearing=0.1)
print(ekf.P.diagonal())
# -

# ### Discussion
#
# I said that this was a 'real' problem, and in some ways it is. I've seen alternative presentations that used robot motion models that led to much easier Jacobians. On the other hand, my model of an automobile's movement is itself simplistic in several ways. First, it uses the *bicycle model* to compute how it moves. A real car has two sets of tires, and each travels on a different radius. The wheels do not grip the surface perfectly. I also assumed that the robot was able to respond instantaneously to my control input changes. In fact, I didn't even bother changing the control input during the run. The authors of *Probabilistic Robotics* write that simplified models are justified because the filters perform well when used to track real vehicles. The lesson here is that while you have to have a reasonably accurate nonlinear model, it does not need to be perfect to operate well. As a designer you will need to balance the fidelity of your model with the difficulty of the math and the computation required to implement the equations.
#
# Another way in which this problem was simplistic is that we assumed that we knew the correspondence between the landmarks and measurements. But suppose we are using radar - how would we know that a specific signal return corresponded to a specific building in the local scene? This question hints at SLAM algorithms - simultaneous localization and mapping. SLAM is not the point of this book, so I will not elaborate on this topic.
#
# However, this example should underscore how difficult EKFs can be. EKFs have a well-deserved reputation for difficulty. Especially when the problem is highly nonlinear, you must design your filter with great care.

# ## UKF vs EKF
#
# I implemented this tracking problem using an unscented Kalman filter in the previous chapter. The difference in implementation should be very clear. Computing the Jacobians for the state and measurement models was not trivial, and we used a very rudimentary model for the motion of the car. I am justified in using this model because the research resulting from the DARPA car challenges has shown that it works well in practice. Nonetheless, a different problem, such as an aircraft or a rocket, may yield a Jacobian that is very difficult or impossible to compute. In contrast, the UKF only requires you to provide a function that computes the system motion model and another for the measurement model. This will always be easier than deriving a Jacobian analytically. In fact, there are many physical processes for which we cannot find an analytical solution. It is beyond the scope of this book, but in that case you have to design a numerical method to compute the Jacobian. That is a very nontrivial undertaking, and you will spend a significant portion of a master's degree at a STEM school learning various techniques to handle such situations. Even then you'll likely only be able to solve problems related to your field - an aeronautical engineer learns a lot about Navier-Stokes equations, but not much about modelling chemical reaction rates.
#
# So, UKFs are easy. Are they accurate? Everything I have read states that there is no way to prove that a UKF will always perform as well as or better than an EKF. However, in practice, they do perform better.
# You can search and find any number of research papers that show the UKF outperforming the EKF in various problem domains. It's not hard to understand why this would be true. The EKF works by linearizing the system model and measurement model at a single point.
#
# Let's look at a specific example. I will take the function $f(x) = x^3$ as our nonlinear function and pass a Gaussian distribution through it. I will compute an accurate answer using a Monte Carlo simulation. I do this by generating 50,000 points distributed according to the Gaussian, passing each point through the function, and then computing the mean and variance of the result.
#
# First, let's see how the EKF fares. The EKF linearizes the function by taking the derivative and evaluating it at the mean $x$ to get the slope of the tangent to the function at that point. This slope becomes the linear function that we use to transform the Gaussian. Here is a plot of that.

import nonlinear_plots
nonlinear_plots.plot_ekf_vs_mc()

# We can see from both the graph and the printout at the bottom that the EKF has introduced quite a bit of error.
#
# In contrast, here is the performance of the UKF evaluated with the same Gaussian and function.

nonlinear_plots.plot_ukf_vs_mc(alpha=0.001, beta=3., kappa=1.)

# Here we can see that the computation of the UKF's mean is accurate to 2 decimal places. The standard deviation is slightly off, but you can also fine-tune how the UKF computes the distribution by using the $\alpha$, $\beta$, and $\kappa$ parameters for generating the sigma points. Here I used $\alpha=0.001$, $\beta=3$, and $\kappa=1$. Feel free to modify them in the function call to see the result - you should be able to get better results than I did. However, avoid overtuning the UKF for a specific test - it may perform better for your test case, but worse in general.
#
# This is one contrived example, but as I said, the literature is filled with detailed studies of real-world problems that exhibit similar performance differences between the two filters.

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Projeto I - Computação Científica II
# ### Solução de EDOs
# > Autor:
    # > Contato:
    # > Repo: [@mirandagil](https://github.com/mirandagil)
    import numpy as np import matplotlib.pyplot as plt import matplotlib.colors as plt_color import scipy.integrate # ### Encontrando a solução da EDO # # $$\frac{\mathrm{d}u}{\mathrm{d}t} = Au + b$$ # # $$ # u(t) = e^{At}u_0+ A^{-1} (e^{At}-I)b # $$ # ### Verificando sanidade das matrizes def verifica_defeituosa(A): ## input: A -> Matriz def cauchy_schwarz(u,v): ## input: u - vetor nx1, v -> vetor nx1 norm_u = np.linalg.norm(u) norm_v = np.linalg.norm(v) scalar_product = np.dot(u,v) if abs(scalar_product) == abs(norm_u * norm_v): return 1 else: return 0 M_eigen_vects = np.linalg.eig(A)[1] ## Retorna a matriz de autovetores em forma de coluna eigen_vects = [M_eigen_vects[:,i] for i in range(0,M_eigen_vects.shape[1])] ## Retorna cada autovetor como um vetor linha for i in range(0, len(eigen_vects)): for j in range(1,len(eigen_vects)): if i != j: if cauchy_schwarz(eigen_vects[i],eigen_vects[j]): return 1 break return 0 # Caso matriz diagonalizavel # $$e^{At} = Se^{\Lambda t} S^{-1} $$ def e_to_diag(A, t): eigen = np.linalg.eig(A) M_eigen_vects = eigen[1] ## Retorna a matriz de autovetores em forma de coluna eigen_values= eigen[0]*t e_to = np.e**eigen_values diag_A = np.diag(e_to) inv_M = np.linalg.inv(M_eigen_vects) return M_eigen_vects @ (diag_A) @ inv_M # Caso matriz defeituosa # $$ # e^{At} = I + \frac{At}{1!} + \frac{A^2t^2}{2!} +\frac{A^3t^3}{3} +\frac{A^4t^4}{4!} + \dots # $$ def e_to_defect(A, t, termo = 20): I = np.identity(np.shape(A)[0]) e_to = I for i in range(1,termo): e_to += np.linalg.matrix_power(A*t, i)/np.math.factorial(i) return e_to # ### Solver para ED def ed_solver(A, b, c, ts): is_defect = verifica_defeituosa(A) ## Booleano para defeituosa I = np.identity(np.shape(A)[0]) sol = [c] b_zero = not np.all(b) ## Booleano para b = 0 if b_zero: ## Rotinas caso b = 0, assim podemos ignorar o fato de A não invertivel for t in ts: if is_defect: e_to_A = e_to_defect(A, t) else: e_to_A = e_to_diag(A,t) u_n = e_to_A@c sol.append(u_n) else: ## Rotinas para b != 0 try: ## Vai executar caso A tenha inversa A_inv = np.linalg.inv(A) for t in ts: if is_defect: e_to_A = e_to_defect(A, t) else: e_to_A = e_to_diag(A,t) u_n = e_to_A@c + A_inv@(e_to_A-I)@b sol.append(u_n) except: ## Erro, A não é inversivel print("A matriz A não é inversível, e portanto o problema não tem solução pois b é não nulo.") return np.array(sol) # ### Função para exibir solução de uma edo dada a partir de uma Matriz def exibe_solucao_1(A, b, c, Tf = 10): ts = np.arange(0, Tf, 0.01) sol = ed_solver(A, b, c, ts) sol = sol[1:] sol_T = sol.T i = 0 plt.figure(figsize = (10,5)) for s in sol_T: i += 1 plt.plot(ts, s, label = '$x_'+ str(i) + '$') plt.legend(bbox_to_anchor = (1,1)) plt.title('Solução da Equação Diferencial Matricial', fontweight = 'bold') plt.xlabel('tempo (t)') plt.grid(alpha=0.3) plt.show() ts = np.arange(0,30,0.05) A = np.array([[0,1],[-3,0]]) b = [0,0] c = [-3,0] exibe_solucao_1(A, b, c) ts = np.arange(0,30,0.05) A = np.array([[0,1],[-1,1]]) b = [0,-1] c = [0,1] exibe_solucao_1(A, b, c) # ### Gerar matriz A, dada uma equação diferencial # # A função `gen_mAb(eq)` gera a matriz $A$ e o vetor $b$ dada uma lista com coeficientes da equação diferencial, para a forma # $$\frac{\mathrm{d}u}{\mathrm{d}t} = Au + b$$ def gen_mAb(eq): ## Input -> eq: lista de coeficientes da ed. 
col_len = len(eq) - 2 ## Armazena o tamanho da matrix A nxn, que é o tamanho do array retirados dois coeficientes A = np.zeros(shape=(col_len, col_len)) ## Gera a matriz A com zeros I = np.eye(col_len) ## Gera uma matriz identidade I[0] = I[0]*0 ## Transforma a primeira linha de I em zeros I_desl = np.roll(I, -1, 0) ## Desloca a matriz identidade para a direita b = np.zeros(col_len) ## Gera o vetor b com zeros b[-1] = eq[-1] ## Insere como último elemento de b a constante c for (i,j) in zip(np.flip(eq[1:-1]),range(0,col_len)): ## Insere os coeficientes na última linha da matriz A A[-1][j] = -i/eq[0] A = A + I_desl ## Soma A com a identidade deslocada return A, b ## Output -> A: Matriz do sistema de edos de primeira ordem. b: Vetor constante # ### exibe_solucao_2 def exibe_solucao_2(eq,c, Tf = 5): A, b = gen_mAb(eq) ## Chama o metódo para gerar a matriz A associada ao sistema linear de ordem 1 #c = np.flip(c) ## Inverte o vetor de condições iniciais ts = np.arange(0, Tf, 0.01) ## Definindo o dominio ### Rotinas para preparar a solução sol = ed_solver(A, b, c, ts) sol = sol[1:] sol_T = sol.T ## Rotina para plottagem plt.figure(figsize = (10,5)) plt.plot(ts, sol_T[0], label = '$u(t)$') plt.legend(bbox_to_anchor = (1,1)) plt.title('Solução da Equação Diferencial', fontweight = 'bold') plt.xlabel('tempo (t)') plt.grid(alpha=0.3) plt.show() # $2y'' -y' + 5y = 0$ eq = [2,-1,5,0] c = [1,0] exibe_solucao_2(eq, c, Tf = 6) # $2y'' + y = 1$ eq = [2,0,1,1] c = [2,1] exibe_solucao_2(eq,c, Tf = 30) # + Tf1 = 0.5 test_matrix_1 = np.array([[-7., 0., 6., 8.], [-8., 0., 8., 3.], [ 4., -6., 2., 0.], [ 0., -2., 4., 0.]]) #test_matrix_1 = np.array(test_matrix_1) cond_inicial_1 = np.array([15, 0, -0.5, -30]) b_1 = np.array([0]*4) b_2 = np.array([0,100, -19, -100]) test_matrix_2 = [[ -7. , 0. , 6. , 8. ], [ -8. , 0. , 8. , 3. ], [ 4. , -6. , 2. , 0. ], [ 14.5, -6. , -7. , -12. 
]] test_matrix_2 = np.array(test_matrix_2) test_matrix_3 = (test_matrix_1 + test_matrix_1.T)/2 from scipy.linalg import orth P = orth(test_matrix_1) test_matrix_4 = P @ np.diag([0.3, 2, -4, 0]) @ np.linalg.inv(P) #1.2 exibe_solucao_1(test_matrix_1,b_1,cond_inicial_1,Tf1) #1 exibe_solucao_1(test_matrix_1,b_2,cond_inicial_1,Tf1) #0.6 exibe_solucao_1(test_matrix_2, b_2, cond_inicial_1,Tf1) #1.2 exibe_solucao_1(test_matrix_3, b_1, cond_inicial_1,Tf1) #1 exibe_solucao_2([-1, 0, -3, 0, -4, 1, 0], [-3, 1, 3, -0.3, -3], 10) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:pytorch] # language: python # name: conda-env-pytorch-py # --- import os import re from collections import Counter import slimit from slimit.parser import Parser from slimit.visitors.nodevisitor import ASTVisitor from slimit.lexer import Lexer from functions import lex, remove_comments input_len = 30 # + # _actual is the result of removing all missing and undecodable files file_dir = '/home/irteam/users/data/150kJavaScript/' lexer = Lexer() with open(file_dir+'programs_training_actual.txt') as f: file_list=f.readlines() # - idx=400 for i,file in enumerate(file_list[idx:idx+1]): file = file.strip() with open(file_dir+file) as f: text = f.read() tokens = lex(text,'value') for line in ' '.join(tokens).split(';'): print(line) print('\n') lexer.input('var a = ') for token in lexer: print(token) out_file.fll # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + from PIL import Image, ImageOps print('confirm') ans = input() block_size = 400 num, i, j = 0, 0, 0 # y, x size_y = 5 # y size_x = 5 # x result = Image.new('RGB', (block_size * (size_x + 1), block_size * (size_y + 1)), (250, 250, 250)) for page_num in range(1, 65): for img_num in range(48): img = Image.open(f'res/{page_num}/{img_num}_th.png') # Read the two images img = img.resize((block_size, block_size)) # Resize image result.paste(img, (j * block_size, i * block_size)) if j < size_x: j += 1 else: i += 1 j = 0 if i > size_y: inverted_image = ImageOps.invert(result) inverted_image.save(f"plates/{num}.png", "PNG") num += 1 i, j = 0, 0 result = Image.new('RGB', (block_size * (size_x + 1), block_size * (size_y + 1)), (250, 250, 250)) # result.save(f"plates/{num}.png", "PNG") # + from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.common.proxy import Proxy, ProxyType from selenium.webdriver.support.ui import Select from time import sleep import random import socket import struct def parse_paper(): chromeOptions = webdriver.ChromeOptions() prefs = {"download.default_directory" : f"/home/pavtiger/Docs/SymbolsAnalyse-v2/svg/"} # Download path chromeOptions.add_experimental_option("prefs", prefs) webdriver.DesiredCapabilities.CHROME['acceptSslCerts']=True driver = webdriver.Chrome('./chromedriver', options=chromeOptions) for number in range(85): if os.path.isfile(f"/home/pavtiger/Docs/SymbolsAnalyse-v2/svg/{number}.svg"): continue driver.get("https://www.autotracer.org") # Access website driver.find_element_by_id("fileupfield").send_keys(os.getcwd() + f'/plates/{number}.png') # Upload path sleep(3) try: # Open advanced menu content = driver.find_element_by_class_name('toggle-advanced-options-button') 
content.click() sleep(3) except: pass # Chose number of colors select = Select(driver.find_element_by_id('selectColorCount')) select.select_by_visible_text('2') select = Select(driver.find_element_by_id('selectSmooth')) select.select_by_visible_text('Smoother') sleep(3) check = driver.find_element_by_id('checkboxIgnoreWhiteBackground') check.click() driver.find_element_by_id("fbut").click() # Generate result sleep(2) try: error = driver.find_elements_by_xpath("//*[contains(text(), 'Your IP Address did exhaust the maximum numbers')]") print(error.get_attribute('innerHTML')) print('banned') except: pass while True: try: driver.find_element_by_id("result").find_element_by_id("result-svg-preview") # Check if exists result = driver.find_element_by_id("result") links = result.find_elements_by_xpath("//a[@href]") for l in links: if "FW" in l.get_attribute("href"): print(l.get_attribute("href")) l.click() break except: continue sleep(30) driver.close() # + import os parse_paper() # if os.path.exists(f'res/{str(page)}_'): # parse_paper(str(page) + '_') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # # %load /Users/facai/Study/book_notes/preconfig.py # %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns sns.set(color_codes=True) #sns.set(font='SimHei', font_scale=2.5) #plt.rcParams['axes.grid'] = False import numpy as np import pandas as pd #pd.options.display.max_rows = 20 #import sklearn #import itertools #import logging #logging.basicConfig() #logger = logging.getLogger() #logger.setLevel(logging.DEBUG) from IPython.display import Image # - # Chapter 5 Monte Carlo Methods # ================ # # Monte Carlo methods require only *experience*: sample sequences of states, actions, and rewards from actual or simulated interaction with an environment. # # requirements: averaging sample returns => we define Monte Carlo methods only for episodic tasks. # ### 5.1 Monte Carlo Prediction # # the value of a state: expected return starting from that state. # # An obvious way to estimate it from experience: simply to average the returns observed after visits to that state. # # $s$ may be visited multiple times in the same episode: # + first-visit MC method # + every-visit MC method Image('./res/first_visit_mc.png') # Monte Carlo methods do not *bootstrap*: the estimate for one state does not build upon the estimate of any other state. # # The computational expense of estimating the value of a single state is independent of the number of states. # => One can generate many sample episodes starting from the states of interest, averaging returns from only these states, ignoring all others. # ### 5.2 Monte Carlo Estimation of Action Values # # If a model is not available, useful to estimate *action* values $q_\pi(s, a)$ rather than *state* values. # # maintain exploration problem: many state-action pairs may never be visited. # + exploring starts: every pair has a nonzero probability of being selected as start point. # + make policy stochasic with a nonzero probability of selecting all actions in each state. 
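# The first-visit MC prediction described in 5.1 above is short enough to sketch in code. The sketch below assumes episodes are available as lists of (state, reward) pairs; the episode format, function name and the toy data at the end are illustrative, not the book's own code.

# +
from collections import defaultdict

def first_visit_mc_prediction(episodes, gamma=1.0):
    """Estimate V(s) by averaging returns that follow the first visit to s."""
    returns_sum = defaultdict(float)
    returns_count = defaultdict(int)
    for episode in episodes:                # episode: [(S0, R1), (S1, R2), ...]
        first_visit = {}                    # state -> time of its first visit
        for t, (s, _) in enumerate(episode):
            if s not in first_visit:
                first_visit[s] = t
        G = 0.0
        for t in reversed(range(len(episode))):   # accumulate the return backwards
            s, r = episode[t]
            G = gamma * G + r
            if first_visit[s] == t:               # only the first visit contributes
                returns_sum[s] += G
                returns_count[s] += 1
    return {s: returns_sum[s] / returns_count[s] for s in returns_sum}

# toy usage: two tiny episodes over states 'A' and 'B'
print(first_visit_mc_prediction([[('A', 0), ('B', 1)], [('A', 1)]]))
# -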
# ### 5.3 Monte Carlo Control # # $\pi(s) \doteq \operatorname{arg max}_a q(s, a)$ Image('./res/gpi.png') Image('./res/monte_carlo_es.png') # ### 5.4 Monte Carlo Control without Exploring Starts # # ensure that all actions are selected infinitely often is for the agent to continue to select them: # + on-policy: evaluate or imporve the policy that is used to make decisions. # + off-policy: evaluate or improve a policy different from that used to generate the data. # # policy is *soft*, meaning that $\pi(a \mid s) > 0$ for all $s \in \mathcal{S}$ and all $a \in \mathcal{A}(s)$, but gradually shifted closer and closer to a deterministic optimal policy. # # $\epsilon$-soft policies: $\pi(a \mid s) \geq \frac{\epsilon}{|\mathcal{A}(s)|}$ for all states and actions, for some $\epsilon > 0$. # + $\epsilon$-greedy policies: most of time greedy, sometimes random. # + For any $\epsilon$-soft policy $\pi$, any $\epsilon$-greedy policy with respect to $q_\pi$ is guaranteed to be better than or equal to $\pi$. Image('./res/on_epsilon_soft.png') # ### 5.5 Off-policy Prediction via Importances Sampling # # use two policies: # + target policy $\pi$: the potimal policy that is learned about. # + behavior policy $b$: more exploratory policy that is used to generate behavior. # # assumption of *coverage*: $\pi(a \mid s) > 0$ implies $b(a \mid s) > 0$. # # we wish to estimate $v_\pi$ or $q_\pi$, but all we have are episodes following another policy $b$, where $b \neq \pi$. # => importance sampling: a general technique for estimating expected values under one distribution given samples from another. # => importance-sampling ratio $\rho_{t:T-1}$: # # Given a starting state $S_t$, we have: $\operatorname{Pr}\{A_t, S_{t+1}, A_{t+1}, \cdots, S_T \mid S_t, A_{t:T-1} \sim \pi\} = \prod_{K=t}^{T-1} \pi(A_k \mid S_k) p(S_{k+1} \mid S_k, A_k)$ # # \begin{equation} # \rho_{t:T-1} \doteq \prod_{K=t}^{T-1} \frac{\pi(A_k \mid S_k)}{b(A_k \mid S_k)} # \end{equation} # # So we can have the right expected value by: # # \begin{align} # \mathbb{E}[G_t \mid S_t] &= v_b(S_t) \\ # \mathbb{E}[\rho_{t:T-1} G_t \mid S_t] &= v_\pi(S_t) # \end{align} # # To estimate $v_\pi(s)$, we simply scale the returns by the ratios and average the results: # + ordinary importance sampling: $V(s) \doteq \frac{\sum_{t \in \mathcal{J}(s)} \rho_{t:T(t)-1} G_t}{|\mathcal{J}(s)|}$: unbiased, but it can be extreme. # + weighted importance sampling: $V(s) \doteq \frac{\sum_{t \in \mathcal{J}(s)} \rho_{t:T(t)-1} G_t}{\sum_{t \in \mathcal{J}(s)} \rho_{t:T(t)-1}}$: biased, but its variance is bounded. # ### 5.6 Incremental Implementation Image('./res/off_policy_predict.png') # ### 5.7 Off-policy Monte Carlo Control Image('./res/off_policy_control.png') # Potential problem: this method learns only from the tails of episodes, when all of the remaining actions in the episode are greedy. If nongreedy actions are commom => greatly slow learning. # ### 5.8 Disounting-aware Importance Sampling # ### 5.9 Per-decision Importance Sampling # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="GZGZrmvqHRql" # ## Welcome to Python Fundamentals # In this module, we are going to establish or review our skills in Python programming. 
In this notebook we are going to cover: # * Variables and Data Types # * Operations # * Input and Output Operations # * Logic Control # * Iterables # * Functions # + [markdown] id="MHQmxmhMHbDQ" # Variable and Data Types # + colab={"base_uri": "https://localhost:8080/"} id="H4huFtVrHglQ" outputId="0fb81bbd-b80b-423f-9e3a-695431e2912e" x = 8 a,b = 0,-4 b # + colab={"base_uri": "https://localhost:8080/"} id="WHa6-UvbHr3W" outputId="6f3daaa3-8344-46aa-f2ec-7ce3ff3536f9" type(a) int # + colab={"base_uri": "https://localhost:8080/"} id="cqO4dAezHxdd" outputId="97a429e7-098a-4771-8ef7-5b177a5c9517" y= 2.5 type(y) # + colab={"base_uri": "https://localhost:8080/"} id="io_2KXkLH37G" outputId="f3c0baad-a9aa-4a06-c22c-696adb2d861b" x= float(x) type (x) # + colab={"base_uri": "https://localhost:8080/"} id="49cHKweQI3W5" outputId="229f1451-62b9-4fd3-a555-085463c3a8cb" c,h,e ="0", '2', 'three' type (u) # + colab={"base_uri": "https://localhost:8080/"} id="cM9vxi7IJNLN" outputId="c25e426a-f456-490a-c33a-357122f7cd64" s_int = int(s) s_int # + [markdown] id="C_ssg1-vJeN2" # ##Operations # + [markdown] id="qbYXRgdoJrQy" # ###Arithmethic # + id="z9lQjesnJwIv" n,i,c,a=1.0, 2.0, 3.0, 4.0 # + colab={"base_uri": "https://localhost:8080/"} id="uoBiyA_pKWgN" outputId="2a2bfc17-4b59-4bae-be52-dbbb046955c5" #Addition S=n+i S # + colab={"base_uri": "https://localhost:8080/"} id="tYQ5Q2hdK4v8" outputId="dbb2ed10-5cf2-4251-8737-821221bf05de" ###Subtraction D= i-a D # + colab={"base_uri": "https://localhost:8080/"} id="qNT6mGuOLRZ1" outputId="d89f49a5-c52c-4903-882b-1fc3d6204bf9" ### Multiplication P=n*i P # + colab={"base_uri": "https://localhost:8080/"} id="zqHGf-0wLnao" outputId="aafe465e-51b5-4929-d29c-2528381bfa30" ###Division Q= n/i Q # + colab={"base_uri": "https://localhost:8080/"} id="aSCT4FAjL14O" outputId="0a33df50-45d8-4f6e-a259-8efbf823bac9" ###Floor Division Fq = n//i Fq # + colab={"base_uri": "https://localhost:8080/"} id="H65AKSUoMCRf" outputId="49d29729-fc7b-4989-b0f7-c1e55e2c8f64" ### Exponentiation E = n**i E # + colab={"base_uri": "https://localhost:8080/"} id="ERblO-k9MP1B" outputId="ca1958f7-d44a-49e8-c4d0-eb4a30131aeb" ### Modulo mod = a%n mod # + [markdown] id="L5dhl2A0S0vP" # ##Assignment Operations # + id="KrlAFyYJTB1c" C, U, T, E = 5.0, 6.0, 7.0, 8.0 # + colab={"base_uri": "https://localhost:8080/"} id="WniLaVWGTJZj" outputId="bf794948-16a6-4eab-b603-809a4ec69375" C += n C # + colab={"base_uri": "https://localhost:8080/"} id="l2raQOSETUAj" outputId="4b283f78-4d5f-4c1c-f4c7-2cfae6faa919" E -= a E # + colab={"base_uri": "https://localhost:8080/"} id="za1tB5IRTdkG" outputId="75842b22-0f07-4d67-eeea-c156a038338d" T *= 7.0 T # + colab={"base_uri": "https://localhost:8080/"} id="NntG_vbqTl_U" outputId="304db5eb-4261-4143-852f-cb2530eb8009" E **8.0 E # + [markdown] id="SKvgBzPbTuTL" # ###Comparators # + id="jX0dz_ooTy_N" res_1, res_2, res_3 = 1, 2.0, "1" true_val= 1.0 # + colab={"base_uri": "https://localhost:8080/"} id="KFL6MzmdUCCj" outputId="5025938b-dc44-4c04-91e2-afb0008ab208" ## Equality res_1==true_val # + colab={"base_uri": "https://localhost:8080/"} id="UEszL0HTUJd8" outputId="3234eb72-4bfc-44b2-f442-41493af1649b" ## Non_equality res_1==true_val # + colab={"base_uri": "https://localhost:8080/"} id="1ppj9rvkUQ0i" outputId="8d0de564-9a71-4684-9c7c-9ea1d6e6780e" ## Inequality t1=res_1 >res_2 t2= res_1 = res_2/2 t4= res_1 <=res_2 t4 # + [markdown] id="H-a64u_LUzKM" # ###Logical # + colab={"base_uri": "https://localhost:8080/"} id="VzQ2DQnIU107" outputId="95c65da3-c6a5-4c59-b204-c0cd8862fbed" 
res_1 == true_val # + colab={"base_uri": "https://localhost:8080/"} id="o7yYAwJXU9dj" outputId="bf4c7b88-493c-408f-9b2b-e3666877cf9a" res_1 is true_val # + colab={"base_uri": "https://localhost:8080/"} id="6OA_tsj3VBqb" outputId="f1a55275-0a6d-4e1c-f98c-7ed7c2759523" res_1 is not true_val # + colab={"base_uri": "https://localhost:8080/"} id="C2x8KM5UVGl1" outputId="8b2b24d9-0393-4e80-c99c-8fe0af19e662" p, q = True, False conj = p and q conj # + colab={"base_uri": "https://localhost:8080/"} id="33gDlX4AVQGy" outputId="a8e5a6bd-dd1b-4e41-a305-51c24d0a12e5" p, q = True, False disj = p or q conj # + [markdown] id="72nhig4uVcyE" # ###l/O # + colab={"base_uri": "https://localhost:8080/"} id="GlGpeoo_WHMl" outputId="b60dfe7a-7b84-4401-db7c-f69d78ec90a4" print("si sir magpapaulan ng uno") # + id="mjqh0SI55s3t" cnt=1 # + colab={"base_uri": "https://localhost:8080/"} id="vuj6hgUW5vpj" outputId="546f4154-d9ae-4281-a8f1-4291b1429d76" string = "si sir magpapaulan ng uno yan" print(string, ",Current run count is", cnt) cnt +=1 # + colab={"base_uri": "https://localhost:8080/"} id="Z-Bhrq0Z6Ghb" outputId="4c02e75c-1eae-4281-cf5f-d8cba1d7703a" print(f"{string}, Current count is: {cnt}") # + colab={"base_uri": "https://localhost:8080/"} id="QgHTM8UB6f6a" outputId="ea32fc6d-8f5f-4cd3-f68d-594ea6282f3b" sem_grade = 90.123456789101213 name = "" print("Congrats {}, your semestral grade is: {}".format(name, sem_grade)) # + colab={"base_uri": "https://localhost:8080/"} id="LOosuG0p8Ivb" outputId="a283f58e-1ea2-4b10-a8a8-21c96dcea6dc" w_pg, w_mg, w_fg = 0.3, 0.3, 0.4 print("The weights of your semestral grades are:\ \n\t{:.2%} for Prelims\ \n\t{:.2%} for Midterms, and\ \n\t{:.2%} for Finals.".format(w_pg, w_mg, w_fg)) # + colab={"base_uri": "https://localhost:8080/", "height": 53} id="UfS-427U9M0m" outputId="8f74000a-ce1d-4161-b673-8664ea9de171" x = input("Input your name: ") x # + id="AEzNatys2E5s" outputId="75ba9942-ad08-4ada-85be-e56589ceb85b" colab={"base_uri": "https://localhost:8080/"} name = input("Silvertounge: ") pg = input("Enter prelim grade: ") mg = input("Enter midterm grade: ") fg = input("Enter finals grade: ") sem_grade = 90 print("Hello {}, your semestral grade is: {}".format(name, sem_grade)) # + [markdown] id="kbajUjHk39VM" # # Looping Statements # + [markdown] id="Xh6qJRgk4AdS" # ###While # + id="ZTZ0OspO4Jv-" outputId="c8806fce-84d2-4485-aae9-a7136da761fe" colab={"base_uri": "https://localhost:8080/"} ## while loops i, j = 0, 10 while(i<=j): print(f"{i}\t|\t{j}") i+=1 # + [markdown] id="b7OUK8ONNBi1" # ### For # + id="JdlDy2mg42pF" outputId="537c47a0-05c8-47af-8eae-4482d08bba84" colab={"base_uri": "https://localhost:8080/"} # for(int i =0; i<10; i++){#printf(i)#} i=0 for i in range(11): print(i) # + id="bKmGyqWm57GS" outputId="6013dcc8-e289-465e-c039-d6dc690feecd" colab={"base_uri": "https://localhost:8080/"} playlist = ["Unwell", "Perfect", "Iris"] print('Now Playing:\n') for song in playlist: print(song) # + [markdown] id="C3dDhF-K6b77" # #Flow Control # + [markdown] id="VUpyvQWe6h-k" # ###Condition Statements # + id="fwZwVHIs6nLJ" outputId="940301a5-cb01-4eb1-8b10-c76d43ad23fd" colab={"base_uri": "https://localhost:8080/"} numeral1, numeral2, = 33, 19 if(numeral1 == numeral2): print("frfr") elif(numeral1>numeral2): print("ngl") else: print("istg") # + [markdown] id="i3jwapc37UXB" # #Functions # + id="257RAivS7anx" # void Delete(int userid){ # deleter(userid): #} def delete_user(userid): print("Succesfully deleter user:{}".format(userid)) def delete_all_users(): print("succesfully deleted 
all users") # + id="Y68GhriE8M5c" outputId="a76b4cec-5fed-4c3e-938c-bdfbc75871b6" colab={"base_uri": "https://localhost:8080/"} userid= 202010126 delete_user(202010126) delete_all_users() # + id="376aItxN_6yp" outputId="9a84bd8e-e310-41f5-c162-f207f0975b16" colab={"base_uri": "https://localhost:8080/"} def add(addend1, addend2): print("I know how to add addend1 and addend2") return addend1 + addend2 def power_of_base2(exponent): return 2**exponent addend1 = 100 addend2 = 500 exponent = 100 #add(addend1, addend2) power_of_base2(exponent) # + [markdown] id="V0uhWwVkNdG8" # #Calculator # + id="DFGeAlY9QKrr" outputId="52c6b6c4-54ae-432e-e1d6-2b646c484e63" colab={"base_uri": "https://localhost:8080/"} print() name = input('\tWhat is your name?'); course = input('\tWhat is your course'); prelim = float(input('\tGive Prelim Grade : ')); midterm = float(input('\tGive Midterm Grade : ')); final = float(input('\tGive Final Grade : ')); grade = (prelim) + (midterm) + (final); average = grade/3; print(); print("\t===== DISPLAYING RESULTS ====="); print(); print("\tYour name is", name , "and your course is", course); print(); if average > 70: print("\tYour grade is \U0001F600"); elif average == 70: print("\tYour grade is \U0001F606"); elif average < 70: print("\tYour grade is \U0001F62D"); print(); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt import pickle f = open('/home/xw17070/Documents/data_caching_coding/finished_jobs_bl.txt', 'rb') finished_jobs = pickle.load(f) f = open('/home/xw17070/Documents/data_caching_coding/finished_jobs_size_bl.txt', 'rb') finished_job_size = pickle.load(f) f = open('/home/xw17070/Documents/data_caching_coding/exceed_deadlines_bl.txt', 'rb') exceed_deadlines = pickle.load(f) f = open('/home/xw17070/Documents/data_caching_coding/exceed_deadline_size_bl.txt', 'rb') exceed_deadline_size = pickle.load(f) # + num_ep = 20 print('average finished jobs in an ep:', finished_jobs[0]/num_ep) print('ave TP in an ep:', np.sum(finished_job_size)/num_ep*1) print('average number of lost jobs:', exceed_deadlines[0]/num_ep) print('average TP of lost jobs:', np.sum(exceed_deadline_size)/num_ep*1) print('finished rate (in number of jobs):', finished_jobs[0]/(finished_jobs[0]+exceed_deadlines[0])) print('finished rate (in TP):', np.sum(finished_job_size)/(np.sum(finished_job_size)+np.sum(exceed_deadline_size))) # - a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] a[3:-2] a = np.reshape([0, 0, 1, 1, 2, 2, 3, 3], (2, 4)) a a[0][1] f = open('/home/xw17070/Documents/data_caching_hie_2/finished_jobs_DQN.txt', 'rb') finished_jobs = pickle.load(f) f = open('/home/xw17070/Documents/data_caching_hie_2/lost_jobs_DQN.txt', 'rb') lost_jobs = pickle.load(f) # + finished_eps = [] finished_TP = [] for ii in range(0, len(finished_jobs)): finished_eps.append(len(finished_jobs[ii])) finished_TP.append(sum(finished_jobs[ii])) plt.plot(finished_eps) plt.xlabel('eps') plt.title('number of finished jobs') print('average finished jobs in an ep:', np.mean(finished_eps)) plt.figure() plt.plot(finished_TP) plt.xlabel('eps') plt.title('throughput of finished jobs') print('ave TP in an ep:', np.mean(finished_TP)) range_lost = [] ddl_lost = [] for jj in range(0, len(lost_jobs)): cur_range_lost = lost_jobs[jj][0] cur_ddl_lost = lost_jobs[jj][1] range_lost.append(len(cur_range_lost)) 
ddl_lost.append(len(cur_ddl_lost)) plt.figure() plt.plot(range_lost) print('average lost due to range:', np.mean(range_lost)) plt.plot(ddl_lost) print('average lost due to deadline:', np.mean(ddl_lost)) # - np.mean(finished_eps) / (np.mean(range_lost) + np.mean(ddl_lost) + np.mean(finished_eps)) # + f = open('/home/xw17070/Documents/data_caching_hie/rewards_list.txt', 'rb') rewards_list = pickle.load(f) def running_mean(x, N): cumsum = np.cumsum(np.insert(x, 0, 0)) return (cumsum[N:] - cumsum[:-N]) / N rewards_list = rewards_list[10:] eps, rews = np.array(rewards_list).T smoothed_rews = running_mean(rews, 20) plt.plot(eps[-len(smoothed_rews):], smoothed_rews) plt.plot(eps, rews, color='grey', alpha=0.3) plt.xlabel('Episode') plt.ylabel('Total Reward') # f = open('/home/xw17070/Documents/data_caching_hie/losses_list.txt', 'rb') # loss_list = pickle.load(f) # plt.figure() # plt.plot(range(0,len(loss_list)), loss_list) # - np.mean(rewards_list[-50:]) rewards_list[-50:2:] # + def running_mean(x, N): cumsum = np.cumsum(np.insert(x, 0, 0)) return (cumsum[N:] - cumsum[:-N]) / N rewards_list = rewards_list[10:] eps, rews = np.array(rewards_list).T smoothed_rews = running_mean(rews, 20) plt.plot(eps[-len(smoothed_rews):], smoothed_rews) plt.plot(eps, rews, color='grey', alpha=0.3) plt.xlabel('Episode') plt.ylabel('Total Reward') plt.figure() plt.plot(range(0,len(loss_list)), loss_list) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #Huntsman Telephoto Array specifications # # ##Introduction # # The Huntsman Telephoto Array is an astronomical imaging system consisting of 1-10 imaging units attached to a telescope mount. The concept is closely based on the [Dragonfly Telephoto Array](http://www.dunlap.utoronto.ca/instrumentation/dragonfly/), pictured below. # # ![](http://www.dunlap.utoronto.ca/wp-content/uploads/2013/05/Dragonfly_1080px_Oct-2014.jpg) # # ###Imaging units # # Each imaging unit comprises a [Canon EF 400mm f/2.8L IS II USM camera lens](http://www.canon.com.au/en-AU/EOS-Professional/Product-Range/Lenses/EF-Super-Telephoto/EF400mm-f28L-IS-II-USM-Lens), an [SBIG STF-8300M ccd camera](https://www.sbig.com/products/cameras/stf-series/stf/stf-8300m/) and an adaptor from [Birger Engineering](birger). The prototype imaging unit ('the Huntsman Eye') is shown below. The tripod adaptor bracket is bolted to the main lens body, this bracket could be removed and the bolt holes used for direct attachment to a support structure. # # ![](https://scontent-nrt1-1.xx.fbcdn.net/hphotos-xaf1/t31.0-8/10548205_670322309702850_4884215092166711965_o.jpg) # # ###Telescope mount # # The Huntsman Telephoto Array will use a [Software Bisque Paramount ME II telescope mount](http://www.bisque.com/sc/pages/ParamountMEII.aspx). A [Solidworks eDrawing](http://www.bisque.com/sc/media/p/66949.aspx) is available as part of the [mount documentation set](http://www.bisque.com/sc/media/g/pmeiidocs/default.aspx). # # ![](http://www.bisque.com/help/paramountme/me_ii_render_3.jpg) # # ###Support structure # # The Huntsman Telephoto Array support structure must enable the imaging units to be assembled into an array and attached to the telescope mount. The Dragonfly Telephoto Array has adopted a modular solution based on tubular structures around each lens, as can be seen in the photo below. 
# # ![](http://news.yale.edu/sites/default/files/imce/stars-two.jpg) # # ### Enclosure # # The Huntsman Telephoto Array is expected to be housed in an [Astro Dome 4500](http://www.astrodomes.com/adome4500.php), a 4.5 metre diameter telescope dome with a 1.2 metre wide aperture. A 21 year old example at Mt Kent Observatory is shown below. # # ![](http://drhotdog.smugmug.com/photos/i-PdCsXsv/0/L/i-PdCsXsv-L.jpg) # ## Science requirements # ### Spatial sampling # # Derived from the specifications of the chosen hardware. # + import math from astropy import units as u pixel_pitch = 5.4 * u.micron / u.pixel # STF-8300M pixel pitch focal_length = 400 * u.millimeter # Canon EF 400 mm f/2.8L IS II USM focal length resolution = (3326, 2504) * u.pixel # STF-8300M resolution in pixels, (x, y) sampling = (pixel_pitch / focal_length).to(u.radian/u.pixel, equivalencies = u.equivalencies.dimensionless_angles()) sampling.to(u.arcsec/u.pixel) # - # * **Each imaging unit shall deliver an on-sky spatial sampling of $2.8\pm 0.1'' /$ pixel** # ### Field of view # # Derived from the specifications of the chosen hardware. fov = resolution * sampling fov.to(u.degree) # * **Each imaging unit shall deliver an instantaneous field of view of $2.6 \pm 0.1 \times 1.9 \pm 0.1$ degrees** # ### Exposure time # # Individual exposure times of 5-30 minutes are anticipated (5-10 minutes for broadband observations, 30 minutes for narrowband). exposure_times = ((5, 10, 30) * u.minute) exposure_times # * **The system shall meet all requirements with exposure times of up to 30 minutes** # ### Number of imaging units # # The maximum number of imaging units per telescope mount is really determined by the mount payload mass limit and the aperture size of the enclosure. The Dragonfly Telephoto Array are currently operating with 10 imaging units on a single mount, the Huntsman Telephoto Array should be capable of at least matching this. n_units = (1, 4, 10) n_units # * **The system shall support up to at least 10 imaging units per telescope mount** # ###Imaging unit alignment # # Given the large field of view tight coalignment of individual imaging units is not required, or even particularly desirable. coalignment_tolerance = 5 * u.arcminute coalignment_tolerance # * **All imaging units should point in the same direction to within a tolerance of 5 arcminutes radius on sky (TBC)** # # All data will be resampled prior to combination so some relative rotation between imaging units is acceptable. north_alignment_tolerance = 2.5 * u.degree north_alignment_tolerance # * **All imaging units shall have the camera y axis aligned with the North-South axis to within a tolerance of $\pm$2.5 degrees (TBC)** # ###Image quality # # [Abraham & (2014)](http://arxiv.org/abs/1401.5473) report that imaging units of the design proposed for the Huntsman Telephoto Array are capable of producing a point spread function (PSF) with full width at half maximum (FWHM) of $\sim1.5''$, as measured by (undersampled) 3rd order polynomial fitting by SExtractor. When image sensor tilts (PSF degradation $<0.4''$) and imperfect telescope tracking are taken into account average FWHM of $< 2''$ were still achieved across the entire field of view. The Huntsman Telephoto Array should at least match this. 
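# As an illustrative check of the "(undersampled)" remark above (a sketch, not part of the original
# requirements): reusing the `sampling` quantity computed in the spatial sampling section, a PSF
# with the quoted 1.5'' FWHM spans only about half a pixel at the ~2.8''/pixel plate scale.

# +
psf_fwhm = 1.5 * u.arcsecond       # FWHM quoted above for the prototype imaging units
(psf_fwhm / sampling).to(u.pixel)  # roughly 0.5 pixel, i.e. well below Nyquist sampling
# -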
central_fwhm = 1.5 * u.arcsecond tilt_fwhm_degradation = 0.4 * u.arcsecond max_fwhm = 2 * u.arcsecond max_fwhm # * **The system shall deliver a PSF with average FWHM $< 2''$ over the full field of view, as measured using a 3rd order polynomial fit performed wth the SExtrator software** # ###Filters # # For the primary science project we anticipate using SDSS-type g & r bandpass filters, typically with half of the imaging units equipped with one and half with the other though there may be targets for which we would want to use a different mix of filters. During bright of Moon it will not be possible to make useful observations for the primary science project and so during these times we may use narrowband filters, e.g. H-$\alpha$. To do this it must be possible to change filters between nights but it is not necessary that this be a motorised/automated process. # # * **Each imaging unit shall be equipped with an optical bandpass filter** # * **It must be possible to change filters between nights** # * **The set of filters shall contain at least one SDSS-type filter of either g or r band for each imaging unit** # ### Sky coverage # # The system should allow the observation of targets at any position on the sky that corresponds to a reasonable airmass, i.e. $<2$. max_zenith_distance = 60 * u.degree max_zenith_distance # * **The system shall satisfy all functional requirements (e.g. image quality, alignment) while observing any sky position with a zenith distance less than 60 degrees. The system is not required to meet functional requirements if observing a sky position with a zenith distance of greater than 60 degrees** # ## Mechanical requirements # ### Support structure(s) # # * **The mechanical support structure(s) shall allow the number of imaging units specified in the science requirements to be attached to the telescope mount** n_units # ### Imaging unit interface # # * **The support structure(s) shall attach to the imaging units via the Canon EF 400mm f/2.8L IS II USM camera lens tripod mount bolt hole pattern and/or clamping of the camera lens body** # ### Telescope mount interface # # * **The support structure(s) shall attach to the telescope mount via the standard interface plate, the Paramount ME II Versa-Plate (drawing [here](http://www.bisque.com/sc/media/p/75171.aspx))** # ### Alignment # # * **The support structure(s) shall ensure that the imaging units are aligned to within the tolerances specified in the science requirements** coalignment_tolerance north_alignment_tolerance # ### Flexure # # The support structure(s) must be rigid enough so that flexure will not prevent the system from achieving the image quality specification from the science requirements. This requires the pointing of all imaging units to remain constant relative to either the telescope mount axes (if not autoguiding) or the autoguider pointing (if using autoguiding) to within a set tolerance for the duration of any individual exposure. # # The tolerance can be calculated from the delivered image quality specification and expected imaging unit image quality. fwhm_to_rms = (2 * (2 * math.log(2))**0.5)**-1 max_flexure_rms = fwhm_to_rms * (max_fwhm**2 - (central_fwhm + tilt_fwhm_degradation)**2)**0.5 max_flexure_rms # A given exposure time corresponds to an angle of rotation about the telescope mount hour angle axis. 
ha_angles = (exposure_times.to(u.hour) * (u.hourangle / u.hour)).to(u.degree) ha_angles # * **The support structure(s) shall ensure that the pointing of all imaging units shall remain fixed relative to the telescope mount axes to within 0.27 arcseconds rms while the hour angle axis rotates through any 7.5 degree angle, for any position of the declination axis, within the sky coverage requirement's zenith distance range** max_zenith_distance # ### Mass # # The telescope mount is rated for a maximum payload (not including counterweights) of 109 kg, therefore the total mass of imaging units plus support structure(s) should not exceed this value. The mass of the lens is 4.1 kg (source [here](http://www.the-digital-picture.com/Reviews/Lens-Specifications.aspx?Lens=741&LensComp=0&Units=M)), the mass of the CCD camera is 0.8 kg (source [here](https://www.sbig.com/products/cameras/stf-series/stf/stf-8300m/)) and the mass of the adaptor is estimated to be no more than 0.2 kg. # + lens_mass = 4.1 * u.kilogram camera_mass = 0.8 * u.kilogram adaptor_mass = 0.2 * u.kilogram imaging_unit_mass = lens_mass + camera_mass + adaptor_mass max_payload_mass = 109 * u.kilogram max_struture_mass = max_payload_mass - max(n_units) * imaging_unit_mass max_struture_mass # - # * **The total mass of all support structure(s) shall be less than 58 kg** # ### Footprint # # The support structure(s) needs to position the imaging units such that their combined beam footprint will pass through the dome aperture without vignetting. Translating this requirement into an allowed space envelope for the imaging units is not straightforward as the geometry is complicated: the axes of the telescope mount will be offset from each other, from the geometric centre of the dome and from the centre of the imaging unit array. 3D modelling of the mount, imaging unit array and enclosure will be required to verify this for all sky positions however as a general principle the imaging units should be as closely packed as possible to minimise the overall size of their combined beam footprint. # ## Environmental # # The system is intended to be placed at a moderate altitude site in mainland Australia. The expected ranges of environmental conditions during operation and storage are as follows. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- pip install nltk pip install gensim # + # %matplotlib inline import warnings warnings.filterwarnings("ignore") import sqlite3 import pandas as pd import numpy as np import nltk import string import matplotlib.pyplot as plt import seaborn as sns from sklearn.feature_extraction.text import TfidfTransformer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.feature_extraction.text import CountVectorizer from sklearn.metrics import confusion_matrix from sklearn import metrics from sklearn.metrics import roc_curve, auc from nltk.stem.porter import PorterStemmer import re # Tutorial about Python regular expressions: https://pymotw.com/2/re/ import string from nltk.corpus import stopwords from nltk.stem import PorterStemmer from nltk.stem.wordnet import WordNetLemmatizer from gensim.models import Word2Vec from gensim.models import KeyedVectors import pickle from tqdm import tqdm import os # + # using the SQLite Table to read data. 
con=sqlite3.connect("C:\\Users\\Hemant\\Downloads\\18_2157_bundle_archive\\database.sqlite") #filtering only positive and negative reviews i.e. # not taking into consideration those reviews with Score=3 # SELECT * FROM Reviews WHERE Score != 3 LIMIT 500000, will give top 500000 data points # you can change the number to any other number based on your computing power # filtered_data = pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 LIMIT 500000""", con) # for tsne assignment you can take 5k data points filtered_data = pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 LIMIT 5000""", con) # Give reviews with Score>3 a positive rating, and reviews with a score<3 a negative rating. def partition(x): if x < 3: return 0 return 1 #changing reviews with score less than 3 to be positive and vice-versa actualScore = filtered_data['Score'] positiveNegative = actualScore.map(partition) filtered_data['Score'] = positiveNegative print("Number of data points in our data", filtered_data.shape) filtered_data.head(3) # - f = pd.read_sql_query("""select * from Reviews""", con) f.head(5) f.isnull().sum() display= pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 AND UserId="AR5J8UI46CURR" ORDER BY ProductID """, con) display.head() sort_data = filtered_data.sort_values('ProductId', axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last') final = sort_data.drop_duplicates(subset = {"UserId","ProfileName","Time","Text"}, keep = 'first', inplace=False) final.shape (final['Id'].size*1.0)/(filtered_data['Id'].size*1.0)*100 # + #here HelpfulnessNumerator(yes) > HelpfulnessDenominator(yes+no) display= pd.read_sql_query(""" SELECT * FROM Reviews WHERE Score != 3 AND Id=44737 OR Id=64422 ORDER BY ProductID """, con) display.head() # - final=final[final.HelpfulnessNumerator<=final.HelpfulnessDenominator] # + #Before starting the next phase of preprocessing lets see the number of entries left print(final.shape) #How many positive and negative reviews are present in our dataset? final['Score'].value_counts() # - # BOW # # Text Preprocessing. # Now that we have finished deduplication our data requires some preprocessing before we go on further with analysis and making the prediction model. # # Hence in the Preprocessing phase we do the following in the order below:- # # 1.Begin by removing the html tags # 2.Remove any punctuations or limited set of special characters like , or . or # etc. 
# 3.Check if the word is made up of english letters and is not alpha-numeric # 4.Check to see if the length of the word is greater than 2 (as it was researched that there is no adjective in 2-letters) # 5.Convert the word to lowercase # 6.Remove Stopwords # 7.Finally Snowball Stemming the word (it was obsereved to be better than Porter Stemming) # After which we collect the words used to describe positive and negative reviews # + # printing some random reviews sent_0 = final['Text'].values[0] print(sent_0) print("="*50) sent_1000 = final['Text'].values[1000] print(sent_1000) print("="*50) sent_1500 = final['Text'].values[1500] print(sent_1500) print("="*50) sent_4900 = final['Text'].values[4900] print(sent_4900) print("="*50) # + # remove urls from text python: https://stackoverflow.com/a/40823105/4084039 sent_0 = re.sub(r"http\S+", "", sent_0) sent_1000 = re.sub(r"http\S+", "", sent_1000) sent_150 = re.sub(r"http\S+", "", sent_1500) sent_4900 = re.sub(r"http\S+", "", sent_4900) print(sent_0) # + # https://stackoverflow.com/questions/16206380/python-beautifulsoup-how-to-remove-all-tags-from-an-element from bs4 import BeautifulSoup soup = BeautifulSoup(sent_0, 'lxml') text = soup.get_text() print(text) print("="*50) soup = BeautifulSoup(sent_1000, 'lxml') text = soup.get_text() print(text) print("="*50) soup = BeautifulSoup(sent_1500, 'lxml') text = soup.get_text() print(text) print("="*50) soup = BeautifulSoup(sent_4900, 'lxml') text = soup.get_text() print(text) # + # https://stackoverflow.com/a/47091490/4084039 import re def decontracted(phrase): # specific phrase = re.sub(r"won't", "will not", phrase) phrase = re.sub(r"can\'t", "can not", phrase) # general phrase = re.sub(r"n\'t", " not", phrase) phrase = re.sub(r"\'re", " are", phrase) phrase = re.sub(r"\'s", " is", phrase) phrase = re.sub(r"\'d", " would", phrase) phrase = re.sub(r"\'ll", " will", phrase) phrase = re.sub(r"\'t", " not", phrase) phrase = re.sub(r"\'ve", " have", phrase) phrase = re.sub(r"\'m", " am", phrase) return phrase # - sent_1500 = decontracted(sent_1500) print(sent_1500) print("="*50) #remove words with numbers python: https://stackoverflow.com/a/18082370/4084039 sent_0 = re.sub("\S*\d\S*", "", sent_0).strip() print(sent_0) #remove spacial character: https://stackoverflow.com/a/5843547/4084039 sent_1500 = re.sub('[^A-Za-z0-9]+', ' ', sent_1500) print(sent_1500) # + # https://gist.github.com/sebleier/554280 # we are removing the words from the stop words list: 'no', 'nor', 'not' #

the literal "<br /><br />" tags ==> after the above steps, we are getting "br br" # so we are including them in the stop words list # instead, had the reviews contained proper "<br />" markup,
    these tags would have revmoved in the 1st step stopwords= set(['br', 'the', 'i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've",\ "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', \ 'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their',\ 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', \ 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', \ 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', \ 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after',\ 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further',\ 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',\ 'most', 'other', 'some', 'such', 'only', 'own', 'same', 'so', 'than', 'too', 'very', \ 's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', \ 've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn',\ "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn',\ "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", \ 'won', "won't", 'wouldn', "wouldn't"]) # - # Combining all the above stundents from tqdm import tqdm preprocessed_reviews = [] # tqdm is for printing the status bar for sentance in tqdm(final['Text'].values): sentance = re.sub(r"http\S+", "", sentance) #removes links sentance = BeautifulSoup(sentance, 'lxml').get_text() #removes all tags sentance = decontracted(sentance) #a function converts 're - are sentance = re.sub("\S*\d\S*", "", sentance).strip() #removes digits sentance = re.sub('[^A-Za-z]+', ' ', sentance) #removes special character # https://gist.github.com/sebleier/554280 sentance = ' '.join(e.lower() for e in sentance.split() if e.lower() not in stopwords) preprocessed_reviews.append(sentance.strip()) preprocessed_reviews[1500] # + #BoW count_vect = CountVectorizer() #in scikit-learn count_vect.fit(preprocessed_reviews) print("some feature names ", count_vect.get_feature_names()[:10]) print('='*50) final_counts = count_vect.transform(preprocessed_reviews) print("the type of count vectorizer ",type(final_counts)) print("the shape of out text BOW vectorizer ",final_counts.get_shape()) print("the number of unique words ", final_counts.get_shape()[1]) # + #bi-gram, tri-gram and n-gram #removing stop words like "not" should be avoided before building n-grams # count_vect = CountVectorizer(ngram_range=(1,2)) # please do read the CountVectorizer documentation http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html # you can choose these numebrs min_df=10, max_features=5000, of your choice count_vect = CountVectorizer(ngram_range=(1,2), min_df=10, max_features=5000) #min_df = ignore the words less than this frequency and max_features ordering of words final_bigram_counts = count_vect.fit_transform(preprocessed_reviews) print("the type of count vectorizer ",type(final_bigram_counts)) print("the shape of out text BOW vectorizer ",final_bigram_counts.get_shape()) print("the number of unique words including both unigrams and bigrams ", 
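# Note: step 7 of the preprocessing checklist above mentions Snowball stemming, but the combined
# loop does not apply it. A minimal, optional sketch of that step using NLTK's SnowballStemmer,
# applied after stop-word removal (whether to feed stemmed text into the vectorizers below is a
# modelling choice, not something done in the original notebook):

# +
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")
stemmed_reviews = [' '.join(stemmer.stem(word) for word in review.split())
                   for review in preprocessed_reviews]
stemmed_reviews[1500]
# -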
final_bigram_counts.get_shape()[1]) # + tf_idf_vect = TfidfVectorizer(ngram_range=(1,2), min_df=10) #1, 2 bigrams 1,3 for trigrams tf_idf_vect.fit(preprocessed_reviews) print("some sample features(unique words in the corpus)",tf_idf_vect.get_feature_names()[0:10]) print('='*50) final_tf_idf = tf_idf_vect.transform(preprocessed_reviews) print("the type of count vectorizer ",type(final_tf_idf)) print("the shape of out text TFIDF vectorizer ",final_tf_idf.get_shape()) print("the number of unique words including both unigrams and bigrams ", final_tf_idf.get_shape()[1]) # - # Train your own Word2Vec model using your own text corpus i=0 list_of_sentance=[] for sentance in preprocessed_reviews: list_of_sentance.append(sentance.split()) # + # Using Google News Word2Vectors # in this project we are using a pretrained model by google # its 3.3G file, once you load this into your memory # it occupies ~9Gb, so please do this step only if you have >12G of ram # we will provide a pickle file wich contains a dict , # and it contains all our courpus words as keys and model[word] as values # To use this code-snippet, download "GoogleNews-vectors-negative300.bin" # from https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit # it's 1.9GB in size. # http://kavita-ganesan.com/gensim-word2vec-tutorial-starter-code/#.W17SRFAzZPY # you can comment this whole cell # or change these varible according to your need is_your_ram_gt_16g=False want_to_use_google_w2v = False want_to_train_w2v = True if want_to_train_w2v: # min_count = 5 considers only words that occured atleast 5 times w2v_model=Word2Vec(list_of_sentance,min_count=5,size=50, workers=4) print(w2v_model.wv.most_similar('great')) print('='*50) print(w2v_model.wv.most_similar('worst')) elif want_to_use_google_w2v and is_your_ram_gt_16g: if os.path.isfile('GoogleNews-vectors-negative300.bin'): w2v_model=KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True) print(w2v_model.wv.most_similar('great')) print(w2v_model.wv.most_similar('worst')) else: print("you don't have gogole's word2vec file, keep want_to_train_w2v = True, to train your own w2v ") # - w2v_words = list(w2v_model.wv.vocab) print("number of words that occured minimum 5 times ",len(w2v_words)) print("sample words ", w2v_words[0:50]) # average Word2Vec # compute average word2vec for each review. 
sent_vectors = []; # the avg-w2v for each sentence/review is stored in this list for sent in tqdm(list_of_sentance): # for each review/sentence sent_vec = np.zeros(50) # as word vectors are of zero length 50, you might need to change this to 300 if you use google's w2v cnt_words =0; # num of words with a valid vector in the sentence/review for word in sent: # for each word in a review/sentence if word in w2v_words: vec = w2v_model.wv[word] sent_vec += vec cnt_words += 1 if cnt_words != 0: sent_vec /= cnt_words sent_vectors.append(sent_vec) print(len(sent_vectors)) print(len(sent_vectors[0])) # S = ["abc def pqr", "def def def abc", "pqr pqr def"] model = TfidfVectorizer() model.fit(preprocessed_reviews) # we are converting a dictionary with word as a key, and the idf as a value dictionary = dict(zip(model.get_feature_names(), list(model.idf_))) # + # TF-IDF weighted Word2Vec tfidf_feat = model.get_feature_names() # tfidf words/col-names # final_tf_idf is the sparse matrix with row= sentence, col=word and cell_val = tfidf tfidf_sent_vectors = []; # the tfidf-w2v for each sentence/review is stored in this list row=0; for sent in tqdm(list_of_sentance): # for each review/sentence sent_vec = np.zeros(50) # as word vectors are of zero length weight_sum =0; # num of words with a valid vector in the sentence/review for word in sent: # for each word in a review/sentence if word in w2v_words and word in tfidf_feat: vec = w2v_model.wv[word] # tf_idf = tf_idf_matrix[row, tfidf_feat.index(word)] # to reduce the computation we are # dictionary[word] = idf value of word in whole courpus # sent.count(word) = tf valeus of word in this review tf_idf = dictionary[word]*(sent.count(word)/len(sent)) sent_vec += (vec * tf_idf) weight_sum += tf_idf if weight_sum != 0: sent_vec /= weight_sum tfidf_sent_vectors.append(sent_vec) row += 1 # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="ZCXzUsjtwNzy" # # Author : # + [markdown] id="eDNxMjAOwSGT" # Task-10 : ML Facial recognition to detect mood and suggest songs accordingly # + [markdown] id="wYDrVTSGwX8y" # Dataset Link: https://www.kaggle.com/msambare/fer2013 # + id="6zLBEMLKx0s7" import numpy as np import pandas as pd import os import datetime import matplotlib.pyplot as plt import tensorflow as tf from keras import regularizers from keras.utils.vis_utils import plot_model from tensorflow.keras.optimizers import Adam, RMSprop, SGD from keras.preprocessing.image import ImageDataGenerator, load_img,img_to_array from keras.layers import Conv2D, Dense, BatchNormalization, Activation, Dropout, MaxPooling2D, Flatten from keras.callbacks import ModelCheckpoint, CSVLogger, TensorBoard, EarlyStopping, ReduceLROnPlateau from keras.wrappers.scikit_learn import KerasClassifier from sklearn.model_selection import cross_val_score, cross_val_predict from sklearn.datasets import make_classification from sklearn.utils import shuffle from sklearn.model_selection import train_test_split # + id="qwQDjxW4wCuF" colab={"base_uri": "https://localhost:8080/", "height": 165} outputId="e4f9d39f-da1a-441f-818d-ba4b387e9603" print(os.listdir("train")) # + id="-Zc31PHnjkMD" # + id="0uU4nhdNwfI3" def load_dataset(directory): x, y = [],[] i=1 for subdir in listdir(directory): path = directory + subdir + '/' faces = load_faces(path) labels = [subdir for _ 
in range(len(faces))] print("%d There are %d images in the class %s:"%(i,len(faces),subdir)) x.extend(faces) y.extend(labels) i=i+1 return asarray(x),asarray(y) trainX,trainY = load_dataset('/content/drive/MyDrive/Indian-celebrities/') print(trainX.shape,trainY.shape) savez_compressed('/content/drive/MyDrive/PROJECTS/face recog + mood/Indian-celeb-dataset.npz',trainX,trainY) # + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 74} id="f33RkIHHyHVQ" outputId="0ac292b2-eebd-40d7-af74-fd71c272901a" from google.colab import files uploaded = files.upload() # + id="3vKcvnxhyLEC" data = pd.read_csv('/content/drive/My Drive/projects/train.csv') test_data = pd.read_csv('/content/drive/My Drive/projects/test.csv') # + id="lVU1nxOt33oq" spotify_df.dropna(subset=['consolidates_genre_lists'], inplace=True) # + colab={"base_uri": "https://localhost:8080/"} id="RL6gBEE4375d" outputId="357e57cb-c7cd-446f-a2a1-434490265bcd" spotify_df.isna().sum() # + colab={"base_uri": "https://localhost:8080/", "height": 346} id="ht0aQ4Bt3_hs" outputId="863484f4-b900-499b-a946-8404ccf5e7f4" spotify_df.head(3) # + id="eF_cwYDJ4B5u" mood_prep = spotify_df[['duration_ms', 'danceability', 'acousticness', 'energy', 'instrumentalness', 'liveness', 'valence', 'loudness', 'speechiness', 'tempo']] # + colab={"base_uri": "https://localhost:8080/"} id="n4TiIeWm4E8Y" outputId="8cdbea3a-20b5-4a25-93bf-f38d1d95b77a" col_features = mood_prep.columns[:] col_features # + id="BuhroSjY4Gez" mood_trans = MinMaxScaler().fit_transform(mood_prep[col_features]) mood_trans_np = np.array(mood_prep[col_features]) # + colab={"base_uri": "https://localhost:8080/"} id="XcjftPuv4IOh" outputId="8c03358d-6353-42a9-cd5a-8795baef682f" mood_trans_np[10011] # + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 74} id="X_3J59g94Je-" outputId="cbe1c4f3-7353-4ce8-d165-fa1246f65eb1" from google.colab import files uploaded = files.upload() # + id="8B4ShRwB4LE7" df = pd.read_csv('data_moods.csv') # + id="cj45sGrz4XFV" cl_features = df.columns[6:-3] X= MinMaxScaler().fit_transform(df[cl_features]) X2 = np.array(df[cl_features]) Y = df['mood'] # + colab={"base_uri": "https://localhost:8080/", "height": 174} id="5y2lseJE4Ym6" outputId="3cd0aa6b-fb7b-42e2-eb83-605bc55dbddd" encoder = LabelEncoder() encoder.fit(Y) encoded_y = encoder.transform(Y) dummy_y = utils.to_categorical(encoded_y) X_train,X_test,Y_train,Y_test = train_test_split(X,encoded_y,test_size=0.2,random_state=15) target = pd.DataFrame({'mood':df['mood'].tolist(),'encode':encoded_y}).drop_duplicates().sort_values(['encode'],ascending=True) target # + id="rl6DmEnF4a20" def base_model(): model = Sequential() model.add(Dense(8,input_dim=10,activation='relu')) model.add(Dense(4,activation='softmax')) model.compile(loss='categorical_crossentropy',optimizer='adam', metrics=['accuracy']) return model # + id="Dujw2l-l4dqe" #Configure the model estimator = KerasClassifier(build_fn=base_model,epochs=300,batch_size=200,verbose=0) # + colab={"base_uri": "https://localhost:8080/"} id="Ks0-zd0Y4fO6" outputId="85a58698-abd6-414c-cb06-4fa7c3e50b12" #Evaluate the model using KFold cross validation kfold = 
KFold(n_splits=10,shuffle=True) results = cross_val_score(estimator,X,encoded_y,cv=kfold) print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100,results.std()*100)) # + id="FHqtsMhA4hpa" estimator.fit(X_train,Y_train) y_preds = estimator.predict(X_test) # + colab={"base_uri": "https://localhost:8080/"} id="sXkk7GOr4qyn" outputId="c8fac799-a4f5-4daf-b1cf-dfd076dc928e" y_preds # + colab={"base_uri": "https://localhost:8080/"} id="6UOdkzs05NoF" outputId="633c66a2-74d1-4fa9-a04c-0bba62d6b1d5" #Join the model and the scaler in a Pipeline pip = Pipeline([('minmaxscaler',MinMaxScaler()),('keras',KerasClassifier(build_fn=base_model,epochs=300, batch_size=200,verbose=0))]) #Fit the Pipeline pip.fit(X2,encoded_y) # + id="A_ijI6LW5PWV" def predict_mood(preds): # pipe = Pipeline([('minmaxscaler',MinMaxScaler()),('keras',KerasClassifier(build_fn=base_model,epochs=300, batch_size=200,verbose=0))]) # #Fit the Pipeline # pipe.fit(X2,encoded_y) preds_features = np.array(preds[:]).reshape(-1,1).T #Predict the features of the song results = pip.predict(preds_features) mood = np.array(target['mood'][target['encode']==int(results)]) return str(mood[0]) #print(f"{name_song} by {artist} is a {mood[0].upper()} song") # + id="M5uFsitB5S3I" colab={"base_uri": "https://localhost:8080/", "height": 346} outputId="69d4583d-e414-47de-8bf2-781b1d0378f7" res = [] for i in range(len(mood_trans_np)): res.append(predict_mood(mood_trans_np[i])) # + id="pSaY_FR-5Vaf" spotify_df.shape # + id="FbzH77K95Wwq" print(len(res)) # + id="rzHYVbZb5Yqy" spotify_df['Mood'] = np.resize(res,len(spotify_df)) # + id="LwBgVE0F5aFw" from google.colab import files uploaded = files.upload() # + id="ZrmuknuB5iMb" spotify_df.to_csv('kaggleMusicMoodFinal.csv') # + id="f7IiLg0W5m-A" res.count("Sad") # + id="l1IVoMnv5nA1" res.count("Happy") # + id="5R2S4vni6_pJ" res.count("Energetic") # + id="oOwV9OG96_sE" res.count("Calm") # + id="dWM1EpEu7AQQ" spotify_df.shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # RAMP Lockdown Experiments # # Run two experiments: # # 1. _Baseline_ (uses the [`baseline.yml`](../../model_parameters/baseline.yml) parameters file) that runs the calibrated model under normal conditions. # 2. _Lockdown earlier_ (uses the [`week_earlier.yml`](../../model_parameters/week_earlier.yml) parameters file) supposes that the UK's national lockdown in March 2020 started a week earlier. 
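# One quick way to see exactly what the lockdown-earlier scenario changes relative to the baseline
# is to diff the two parameter files directly. This is an illustrative sketch, not part of the
# original workflow: it assumes the same relative `model_parameters` layout used later in this
# notebook and a nested-dict YAML structure; the `diff_params` helper is hypothetical.

# +
import os
import yaml

def load_params(name):
    with open(os.path.join("..", "..", "model_parameters", name)) as f:
        return yaml.safe_load(f)

def diff_params(a, b, prefix=""):
    # recursively print the keys whose values differ between the two scenarios
    for key in sorted(set(a) | set(b)):
        va, vb = a.get(key), b.get(key)
        if isinstance(va, dict) and isinstance(vb, dict):
            diff_params(va, vb, prefix + key + ".")
        elif va != vb:
            print(f"{prefix}{key}: baseline={va!r} week_earlier={vb!r}")

diff_params(load_params("baseline.yml"), load_params("week_earlier.yml"))
# -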
# ## Import modules # + import multiprocessing as mp import numpy as np import yaml # pyyaml library for reading the parameters.yml file import os import pandas as pd import unittest import pickle import copy import random import matplotlib.pyplot as plt import scipy.stats as stats import re from microsim.opencl.ramp.run import run_headless from microsim.opencl.ramp.snapshot_convertor import SnapshotConvertor from microsim.opencl.ramp.snapshot import Snapshot from microsim.opencl.ramp.params import Params, IndividualHazardMultipliers, LocationHazardMultipliers from microsim.opencl.ramp.simulator import Simulator from microsim.opencl.ramp.disease_statuses import DiseaseStatus import sys sys.path.append('..') #import experiments_functions # For the ones outside the class from opencl_runner import OpenCLRunner # Some additional notebook-specific functions required (functions.py) # Useful for connecting to this kernel # %connect_info # - # ## Setup parameters # # Prepare the parameters for the OpenCL model. Can choose between the baseline scenario (by reading [`baseline.yml`](../../model_parameters/baseline.yml) parameters file) or the earlier lockdown_ scenario (by reading [`week_earlier.yml`](../../model_parameters/week_earlier.yml)). # # (Note: see the `create_params()` function in [main.py](https://github.com/Urban-Analytics/RAMP-UA/blob/master/microsim/main.py) for an example of how to set parameters in the main codebase). # + #PARAM_FILE = "baseline.yml" PARAM_FILE = "week_earlier.yml" PARAMETERS_FILE = os.path.join("..", "../", "model_parameters", PARAM_FILE ) PARAMS = OpenCLRunner.create_parameters(parameters_file=PARAMETERS_FILE) # - # ### Get snapshot path # # **NB** this is the path to the OpenCL snapshot file generated by running `microsim/main.py`. You need to initilaise the model at least once to create the snapshot. Also, when changing scenario you need to re-initialise the model and create a new snapshot. # # The lines below automatically delete any previous snapshots, re-initialise the model, then load the new snapshot. PARAM_DIR = os.path.join("model_parameters", PARAM_FILE) PARAMS.place_hazard_multipliers # Check the params file was read # + # Now delete any existing snapshots and re-create # Move into the RAMP-UA root directory # %cd '../../' # Delete any existing caches # !rm -f "microsim/opencl/snapshots/cache.npz" # Re-run the model # !python microsim/main.py -ocl -init -p $PARAM_DIR # Set the working directory back to the one where this script is located # %cd 'experiments/calibration/' # - OPENCL_DIR = "../../microsim/opencl" SNAPSHOT_FILEPATH = os.path.join(OPENCL_DIR, "snapshots", "cache.npz") assert os.path.isfile(SNAPSHOT_FILEPATH), f"Snapshot doesn't exist: {SNAPSHOT_FILEPATH}" # ## Observation Data # # Read the real observations (number of hospital admissions in Devon) that will be used to calibrate the model. See the [README](./observation_data/README.md) for information about how these observations were obtained. They aren't the raw cases, it's actually a model that was fitted to the lagged cases. They need to be made cumulative as this is how they will be compared to the model. 
# + # New per day: gam_cases = pd.read_csv(os.path.join("..", "..", "gam_cases_new.csv"), header = 0, names=["Day", "Cases"],) # Cumulative OBSERVATIONS = pd.DataFrame( {"Day": gam_cases['Day'], "Cases": gam_cases.cumsum()['Cases']} ) assert OBSERVATIONS.tail(1)['Cases'].values[0] == sum(gam_cases['Cases']) plt.plot(gam_cases['Day'], gam_cases['Cases']) print(f"Total cases: {sum(gam_cases['Cases'])}") # - # ## Run default (manually calibrated) model # # This shows what happens with the 'default' (manually calibrated) model # + ITERATIONS = 100 # Number of iterations to run for USE_GPU = False STORE_DETAILED_COUNTS = True REPETITIONS = 10 USE_HEALTHIER_POP = False assert ITERATIONS < len(OBSERVATIONS), \ f"Have more iterations ({ITERATIONS}) than observations ({len(OBSERVATIONS)})." # Initialise the class so that its ready to run the model. # This isn't actually necessary immediately as the `run_opencl_model_multi` function is a static method # so doesn't read any of the class parameters, but the init is necessary # for calibration later when some parameters can't be passed to the run function directly OpenCLRunner.init( iterations = ITERATIONS, repetitions = REPETITIONS, observations = OBSERVATIONS, use_gpu = USE_GPU, use_healthier_pop = USE_HEALTHIER_POP, store_detailed_counts = STORE_DETAILED_COUNTS, parameters_file = PARAMETERS_FILE, opencl_dir = OPENCL_DIR, snapshot_filepath = SNAPSHOT_FILEPATH ) # - # Results from the manually-calibrated model manual_results = OpenCLRunner.run_opencl_model_multi( repetitions=REPETITIONS, # Don't use the default, want slightly more robust results iterations=ITERATIONS, params=PARAMS, opencl_dir=OPENCL_DIR, snapshot_filepath=SNAPSHOT_FILEPATH, use_gpu=USE_GPU, store_detailed_counts=False, # Get full info to plot age breakdowns multiprocess=False, random_ids=False, use_healthier_pop=USE_HEALTHIER_POP ) # + # Can optionally save/load ('pickle') the results # Save #pkl_out = re.sub( ".yml","",PARAM_FILE) #pkl_out = pkl_out + "_" + str(REPETITIONS) + ".p" #pickle.dump(manual_results, open(os.path.join("../../","output", pkl_out), "wb" ) ) # Load #manual_results = pickle.load( open( os.path.join("../../","output", pkl_out), "rb" ) ) # - # Get the summary results of of the finished model manual_summaries = [x[0] for x in manual_results] # Store the results as they can be useful as hypothetical observations to test some of the calibration algorithms pseudo_observations = OpenCLRunner.get_cumulative_new_infections(manual_summaries) # + matrix_ifr = np.zeros(shape=(REPETITIONS,3)) for rep in range(REPETITIONS): matrix_ifr[rep,0] = manual_summaries[rep].total_counts[6][ITERATIONS -1] matrix_ifr[rep,1] = manual_summaries[rep].total_counts[5][ITERATIONS -1] matrix_ifr[rep,2] = ((manual_summaries[rep].total_counts[6][ITERATIONS -1])/(manual_summaries[rep].total_counts[6][ITERATIONS -1] + manual_summaries[rep].total_counts[5][ITERATIONS -1]))*100 np.mean(matrix_ifr[:,2]) # + matrix_ifr = np.zeros(shape=(ITERATIONS*REPETITIONS,9)) for rep in range(REPETITIONS): row_ind = range((rep)*ITERATIONS,(rep+1)*ITERATIONS) matrix_ifr[row_ind,0] = rep+1 matrix_ifr[row_ind,1] = range(1,ITERATIONS+1) matrix_ifr[row_ind,2] = manual_summaries[rep].total_counts[0] matrix_ifr[row_ind,3] = manual_summaries[rep].total_counts[1] matrix_ifr[row_ind,4] = manual_summaries[rep].total_counts[2] matrix_ifr[row_ind,5] = manual_summaries[rep].total_counts[3] matrix_ifr[row_ind,6] = manual_summaries[rep].total_counts[4] matrix_ifr[row_ind,7] = manual_summaries[rep].total_counts[5] 
matrix_ifr[row_ind,8] = manual_summaries[rep].total_counts[6] csv_out = re.sub( ".yml","",PARAM_FILE) csv_out = csv_out + "_" + str(REPETITIONS) + ".csv" csv_path_out = os.path.join("../../","output", csv_out) # Can optionally save the results as a csv #np.savetxt(csv_path_out, matrix_ifr, delimiter=",") # - # ## Plot output summary data # # ### Total counts of disease status # + def plot_summaries(summaries, observations=None, plot_type="error_bars"): #fig, ax = plt.subplots(1, len(DiseaseStatus), sharey=True) fig, ax = plt.subplots(1, 1, figsize=(10,7)) # Work out the number of repetitions and iterations iters, reps = _get_iters_and_reps(summaries) x = range(iters) total_not_susceptible = np.zeros(iters) # Used to compare to observations for d, disease_status in enumerate(DiseaseStatus): # Calculate the mean and standard deviation mean, sd = OpenCLRunner.get_mean_total_counts(summaries, d, get_sd=True) # Don't plot susceptible or recovered as it hides everytihng else if disease_status==DiseaseStatus.Susceptible or disease_status==DiseaseStatus.Recovered: continue if plot_type == "error_bars": ax.errorbar(x, mean, sd, label=f"{disease_status}" ) elif plot_type == "lines": for rep in range(reps): ax.plot(x, matrix[rep], label=f"{disease_status} {rep}", color=plt.cm.get_cmap("hsv", len(DiseaseStatus))(d) ) if observations is not None: # Plot the observations (cumulative infections) ax.plot(x, observations.loc[:len(x)-1, "Cases"], label=f"Observations (cumulative cases)", color="black", linestyle="dashed") # And the total new infections, for comparison ax.plot(x, OpenCLRunner.get_cumulative_new_infections(summaries), label=f"Total not susceptible ", color="grey", linestyle="dashed") ax.legend() ax.set_title("Disease Status") ax.set_xlabel("Iteration") ax.set_ylabel("Number of cases") #ax.set_ylim(0, 5000) #ax.set_xlim(0,30) def _get_iters_and_reps(summaries): reps = len(summaries) iters = len(summaries[0].total_counts[0]) return (iters, reps) # - plot_summaries(summaries=manual_summaries, observations=OBSERVATIONS, plot_type="error_bars") # + #plot_summaries(summaries=summaries, plot_type="lines") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Project Title: E-commerce Analytics ML Hackathon # #### Author: # #### Date: 10 April 2020 # With the evolution of the information and communication technologies and the rapid growth of the Internet for the exchange and distribution of information, Electronic Commerce (e-commerce) has gained massive momentum globally, and attracted more and more worldwide users overcoming the time constraints and distance barriers. # # It is important to gain in-depth insights into e-commerce via data-driven analytics and identify the factors affecting product sales, the impact of characteristics of customers on their purchase habits. # # It is quite useful to understand the demand, habits, concern, perception, and interest of customers from the clue of genders for e-commerce companies. # # However, the genders of users are in general unavailable in e-commerce platforms. To address this gap the aim here is to predict the gender of e-commerce’s participants from their product viewing records. # **Train file**: CSV containing the product viewing data with gender as label. 
Product list contains list of products viewed by the user in the given session and it also contains the category, sub category, sub-sub category and the product all encoded and separated with a slash symbol. Each consecutive product is separated with a semicolon. # # **Test file**: CSV containing sessions for which gender prediction is to be submitted. # **Evaluation**" Submissions are evaluated on accuracy between the predicted and observed gender for the sessions in the test set. # ## Import Libraries # + import pandas as pd import numpy as np from datetime import datetime import re import missingno # visualization import seaborn as sns import plotly.express as px import plotly.graph_objects as go import matplotlib.pylab as plt # %matplotlib inline from matplotlib.pylab import rcParams # model from sklearn.model_selection import train_test_split, cross_val_score from sklearn.model_selection import ShuffleSplit from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier,ExtraTreeClassifier from sklearn.linear_model import LogisticRegression from sklearn.ensemble import AdaBoostClassifier,RandomForestClassifier,StackingClassifier,VotingClassifier,BaggingClassifier,ExtraTreesClassifier,GradientBoostingClassifier,BaggingClassifier,ExtraTreesClassifier from sklearn.model_selection import GridSearchCV, KFold from sklearn.model_selection import RepeatedStratifiedKFold from sklearn.ensemble import GradientBoostingClassifier from xgboost.sklearn import XGBClassifier import xgboost as xgb import lightgbm as lgb import catboost as cat # evaluation from sklearn.metrics import classification_report, confusion_matrix from sklearn.ensemble import VotingClassifier from sklearn.metrics import f1_score, roc_auc_score, accuracy_score from sklearn.utils import resample from sklearn.preprocessing import LabelEncoder import warnings warnings.simplefilter('ignore') # - pd.set_option("display.max_columns", None) pd.set_option("display.max_rows", None) pd.set_option("display.max_colwidth", -1) # ## Import Data train = pd.read_csv("train.csv") test = pd.read_csv("test.csv") train.info() train.head() test.info() test.head() # ## Check for Missing Values # check for missing values miss = train.isnull().sum()/len(train) miss = miss[miss > 0] miss.sort_values(inplace=True) miss # ## Data Preprocessing # + def preprocess(data): # convert starttime and endtime to datetime format data["startTime"] = pd.to_datetime(data["startTime"]) data["endTime"] = pd.to_datetime(data["endTime"]) print("Done: Convert to Datetime format") # get the number of sites visited data["total_products"] = data["ProductList"].apply(lambda x: len(x.split(";"))) print("Done: Number of sites visited") # # extract product D group # data["D_group"] = data["ProductList"].apply(lambda x: re.search(r"D\d{3}", x).group(0)) # print("Done: Extracting D Group") # extract the productlist temp = data["ProductList"].str.split(";") data = data.reindex(data.index.repeat(temp.apply(len))) data["product_data"] = np.hstack(temp) data['category'] = data["product_data"].str.split("/").str[0] data["sub_category"] = data["product_data"].str.split("/").str[1] data["sub_sub_category"] = data["product_data"].str.split("/").str[2] data["product"] = data["product_data"].str.split("/").str[3] #data["ProductList"] = data["ProductList"].apply(lambda x: x.replace(";","")) #data["ProductList"] = data["ProductList"].apply(lambda x: re.sub("\/D\d+", "", x)) # columns = sorted(set(data['ProductList'].str.split('/').sum())) # for column in 
columns: # data[column] = data["ProductList"].str.count(column) print("Done: Extracting categories") # extract duration spent on website duration = data["endTime"] - data["startTime"] data["duration"] = duration.dt.seconds data["startTime_month"] = data["startTime"].dt.month data["startTime_date"] = data["startTime"].dt.date data["startTime_day"] = data["startTime"].dt.day data["startTime_doy"] = data["startTime"].dt.dayofyear data["startTime_week"] = data["startTime"].dt.week data["startTime_dow"] = data["startTime"].dt.dayofweek data["startTime_hour"] = data["startTime"].dt.hour data["startTime_minute"] = data["startTime"].dt.minute data["endTime_hour"] = data["endTime"].dt.hour data["endTime_minute"] = data["endTime"].dt.minute print("Done: Extracting duration spent on website") # total data["total_product_day"] = data.groupby(["startTime_day"])["total_products"].transform("sum") data["total_product_week"] = data.groupby(["startTime_week"])["total_products"].transform("sum") data["total_product_dow"] = data.groupby(["startTime_dow"])["total_products"].transform("sum") data["total_product_hour"] = data.groupby(["startTime_hour"])["total_products"].transform("sum") print("Done: Extracting total information") return data # - train = preprocess(train) test = preprocess(test) # remove gender from training set first train_df = train.drop("gender", axis=1) y = train["gender"] # ## Encoding Categorical le_cat = LabelEncoder() le_subcat = LabelEncoder() le_subsubcat = LabelEncoder() le_product = LabelEncoder() le_gender = LabelEncoder() le_session = LabelEncoder() combined_data = train_df.append(test) # fit on the combined data combined_data["category"] = le_cat.fit_transform(combined_data["category"]) combined_data["sub_category"] = le_subcat.fit_transform(combined_data["sub_category"]) combined_data["sub_sub_category"] = le_subsubcat.fit_transform(combined_data["sub_sub_category"]) combined_data["product"] = le_product.fit_transform(combined_data["product"]) combined_data["session_id"] = le_session.fit_transform(combined_data["session_id"]) y = le_gender.fit_transform(y) # apply transformation (train) train_df["category"] = le_cat.transform(train_df["category"]) train_df["sub_category"] = le_subcat.transform(train_df["sub_category"]) train_df["sub_sub_category"] = le_subsubcat.transform(train_df["sub_sub_category"]) train_df["product"] = le_product.transform(train_df["product"]) train_df["session_id"] = le_session.transform(train_df["session_id"]) # apply transformation (test) test["category"] = le_cat.transform(test["category"]) test["sub_category"] = le_subcat.transform(test["sub_category"]) test["sub_sub_category"] = le_subsubcat.transform(test["sub_sub_category"]) test["product"] = le_product.transform(test["product"]) test["session_id"] = le_session.transform(test["session_id"]) # ## Remove unnecessary columns train_df = train_df.drop(["startTime", "endTime", "ProductList", "product_data", "startTime_date"], axis=1) test = test.drop(["startTime", "endTime", "ProductList", "product_data", "startTime_date"], axis=1) # ## SMOTE # + # # concatenate our training data back together # X = pd.concat([X_train, y_train], axis=1) # # separate minority and majority classes # male = X[X["gender"] == "male"] # female = X[X["gender"] == "female"] # # upsample minority # male_upsampled = resample(male, # replace = True, # sample with replacement # n_samples = len(female), # match number in majority class) # random_state = 101) # # combine majority and upsample minority # upsampled = pd.concat([female, 
male_upsampled]) # # check new class counts # upsampled.gender.value_counts() # + # X_train = upsampled.drop("gender", axis=1) # y_train = upsampled["gender"] # - # ## Match Test Set Columns with Training Set Columns # + # # get missing columns in training and test set # missing_cols = set(train.columns) - set(test.columns) # # add a missing column in test set with default value equal to 0 # for c in missing_cols: # test[c] = 0 # # ensure that order of column in the test set is in the same order than in train set # test = test[train.columns] # + # test = test.drop("gender", axis=1) # - # ## Baseline Performance def baseliner(features, target, cv=3, metric="accuracy"): print("Baseline Models\n") eval_dict = {} models = [lgb.LGBMClassifier(), xgb.XGBClassifier(), cat.CatBoostClassifier(verbose=0), GradientBoostingClassifier(), LogisticRegression(), RandomForestClassifier(), DecisionTreeClassifier(), AdaBoostClassifier(), ExtraTreeClassifier(), ExtraTreesClassifier(), KNeighborsClassifier(), BaggingClassifier()] print("Model Name \t | CV") print("--"*50) for index, model in enumerate(models, 0): model_name = str(model).split("(")[0] eval_dict[model_name] = {} results = cross_val_score(model, features, target, cv=cv, scoring=metric) eval_dict[model_name]["cv"] = results.mean() print("%s \t | %.4f \t" % (model_name[:12], eval_dict[model_name]["cv"])) baseliner(train_df, y, cv=5, metric="accuracy") # ## Training and Testing Set rs = ShuffleSplit(n_splits=3, test_size=0.1, random_state=101) test_prob_preds = np.zeros(test.shape[0]) # ## XGBoost Classifier # + for idx, (train_idx, valid_idx) in enumerate(rs.split(train_df, pd.DataFrame(y))): print("--"*40) print("Iteration Number : {}".format(idx)) MAX_ROUNDS=2000 early_stopping_rounds=100 params = { 'booster': 'gbtree', 'objective': 'binary:logistic', 'eval_metric': 'error', 'learning_rate': 0.1, 'num_round': MAX_ROUNDS, 'max_depth': 8, 'seed': 13, 'nthread': -1 } X_train, X_valid, y_train, y_valid = train_df.iloc[train_idx], train_df.iloc[valid_idx], pd.DataFrame(y).iloc[train_idx], pd.DataFrame(y).iloc[valid_idx] dtrain = xgb.DMatrix(X_train, label=y_train) dvalid = xgb.DMatrix(X_valid, label=y_valid) watchlist = [(dtrain, 'train'), (dvalid, 'valid')] model = xgb.train( params, dtrain, evals=watchlist, num_boost_round=MAX_ROUNDS, early_stopping_rounds=early_stopping_rounds, verbose_eval=50 ) print("Best Iteration :: {} \n".format(model.best_iteration)) # Plotting Importances fig, ax = plt.subplots(figsize=(24, 24)) xgb.plot_importance(model, height=0.4, ax=ax) preds = model.predict(xgb.DMatrix(test), ntree_limit=model.best_ntree_limit) test_prob_preds += preds test_prob_preds /= rs.n_splits print(test_prob_preds.shape, test_prob_preds[: 5]) # - test.head() # take the mean of the proabilities for all the session_ids to calculate male or female test["predict_prob"] = test_prob_preds test["session_id"] = le_session.inverse_transform(test["session_id"]) agg = test.groupby(["session_id"])["predict_prob"].mean().reset_index() # ## Submission sub = pd.read_csv("sample_submission.csv") sub = sub.drop("gender", axis=1) sub = pd.merge(sub, agg, on=["session_id"], how='left') sub["gender"] = sub["predict_prob"].apply(lambda x: round(x)) sub["gender"] = sub["gender"].map({1: "male", 0: "female"}) sub = sub.drop("predict_prob", axis=1) sub.to_submission("submission.csv") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # 
language: python # name: python3 # --- #

# # Machine learning-based prediction of early recurrence in glioblastoma patients: a glance towards precision medicine

# ## [Null-classifier]


# ## [1] Library

    # + # OS library import os import sys import argparse import itertools import random # Analysis import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt # Sklearn from boruta import BorutaPy from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import confusion_matrix, f1_score, recall_score, classification_report, accuracy_score, auc, roc_curve from sklearn.model_selection import RandomizedSearchCV from sklearn.dummy import DummyClassifier import scikitplot as skplt from imblearn.over_sampling import RandomOverSampler, SMOTENC, SMOTE # - #
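# For reproducibility of the random-based steps below (train/test splitting, oversampling,
# dummy classifiers), a single seed can be set up front. Minimal sketch; `SEED` is a name
# introduced here rather than taken from the original notebook.

# +
SEED = 42
random.seed(SEED)     # Python's random module (imported above)
np.random.seed(SEED)  # NumPy RNG, used internally by scikit-learn and imbalanced-learn
# -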

# ## [2] Exploratory data analysis and Data Preprocessing


# ### [-] Load the database

    # + file = os.path.join(sys.path[0], "db.xlsx") db = pd.read_excel(file) print("N° of patients: {}".format(len(db))) print("N° of columns: {}".format(db.shape[1])) db.head() # - #
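# Before dropping columns, a quick completeness and dtype overview helps decide which
# variables need encoding or imputation. Minimal sketch, assuming `db` is the DataFrame
# loaded above; `missing` is a name introduced here.

# +
# Proportion of missing values per column (only columns with at least one missing value)
missing = db.isnull().mean().sort_values(ascending=False)
print(missing[missing > 0])

# Number of columns per dtype, to spot the categorical variables handled later
print(db.dtypes.value_counts())
# -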

# ### [-] Drop unwanted columns + create 'results' column

    # + df = db.drop(['Name_Surname','...'], axis = 'columns') print("Effective features to consider: {} ".format(len(df.columns)-1)) print("Creating 'result' column...") # 0 = No relapse df.loc[df['PFS'] > 6, 'result'] = 0 # 1 = Early relapse (within 6 months) df.loc[df['PFS'] <= 6, 'result'] = 1 df.head() # - #
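# The two `.loc` assignments above can also be written as a single vectorised expression,
# which labels every row in one step provided `PFS` has no missing values. Sketch only;
# it reproduces the same `result` column.

# +
# 1 = early relapse (PFS <= 6 months), 0 = no early relapse
df['result'] = (df['PFS'] <= 6).astype(int)
df['result'].value_counts()
# -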

# ### [-] Check for class imbalance in the 'results' column

    # + print("PFS Overview") print(df.result.value_counts()) df.result.value_counts().plot(kind='pie', autopct='%1.0f%%', colors=['skyblue', 'orange'], explode=(0.05, 0.05)) # - #

# ### [-] Label encoding of the categorical variables

    # + df['sex'] =df['sex'].astype('category') df['ceus'] =df['ceus'].astype('category') df['ala'] =df['ala'].astype('category') #df['Ki67'] =df['Ki67'].astype(int) df['MGMT'] =df['MGMT'].astype('category') df['IDH1'] =df['IDH1'].astype('category') df['side'] =df['side'].astype('category') df['ependima'] =df['ependima'].astype('category') df['cc'] =df['cc'].astype('category') df['necrotico_cistico'] =df['necrotico_cistico'].astype('category') df['shift'] =df['shift'].astype('category') ## VARIABLE TO ONE-HOT-ENCODE df['localization'] =df['localization'].astype(int) df['clinica_esordio'] =df['clinica_esordio'].astype(int) df['immediate_p_o'] =df['immediate_p_o'].astype(int) df['onco_Protocol'] =df['onco_Protocol'].astype(int) df['result'] =df['result'].astype(int) dummy_v = ['localization', 'clinica_esordio', 'onco_Protocol', 'immediate_p_o'] df = pd.get_dummies(df, columns = dummy_v, prefix = dummy_v) # - #
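# A quick check after encoding confirms that the dummy variables were expanded and shows the
# final number of feature columns. Sketch, assuming `df` and `dummy_v` as defined above.

# +
print("Columns after one-hot encoding:", df.shape[1])
print([c for c in df.columns if c.startswith(tuple(dummy_v))])
# -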

# ## [3] Prediction Models


# ### [-] Training and testing set splitting

    target = df['result'] inputs = df.drop(['result', 'PFS'], axis = 'columns') x_train, x_test, y_train, y_test = train_test_split(inputs['...'],target,test_size=0.20, random_state=10) #
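# Because the classes are imbalanced, a stratified split keeps the early-relapse proportion
# similar in the training and testing sets. Sketch of an alternative call, assuming the full
# `inputs` frame is passed rather than the redacted column selection above.

# +
x_train, x_test, y_train, y_test = train_test_split(
    inputs, target,
    test_size=0.20,
    random_state=10,
    stratify=target,  # preserve the 0/1 proportions in both splits
)
print(y_train.value_counts(normalize=True))
print(y_test.value_counts(normalize=True))
# -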

# ### [-] Dummy Training

    # + dummy_train = DummyClassifier(strategy="uniform", random_state = 42) dummy_train.fit(x_train, y_train) score_dummy_train = dummy_train.score(x_train, y_train) print("Dummy Train accuracy ***TRAIN***: ", round(score_dummy_train*100,2), "% \n") y_dummy_train_predicted = dummy_train.predict(x_train) y_dummy_train_proba = dummy_train.predict_proba(x_train) cm_dummy_train = confusion_matrix(y_train, y_dummy_train_predicted) print(cm_dummy_train, "\n") false_positive_rate, true_positive_rate, thresholds = roc_curve(y_train, y_dummy_train_predicted) roc_auc = auc(false_positive_rate, true_positive_rate) print('1. The F-1 Score of the model {} \n '.format(round(f1_score(y_train, y_dummy_train_predicted, average = 'macro'), 2))) print('2. The Recall Score of the model {} \n '.format(round(recall_score(y_train, y_dummy_train_predicted, average = 'macro'), 2))) print('3. Classification report \n {} \n'.format(classification_report(y_train, y_dummy_train_predicted))) print('3. AUC: \n {} \n'.format(roc_auc)) tn, fp, fn, tp = cm_dummy_train.ravel() # Sensitivity, hit rate, Recall, or true positive rate tpr = tp/(tp+fn) print("Sensitivity (TPR): {}".format(tpr)) # Specificity or true negative rate tnr = tn/(tn+fp) print("Specificity (TNR): {}".format(tnr)) # Precision or positive predictive value ppv = tp/(tp+fp) print("Precision (PPV): {}".format(ppv)) # Negative predictive value npv = tn/(tn+fn) print("Negative Predictive Value (NPV): {}".format(npv)) # False positive rate fpr = fp / (fp + tn) print("False Positive Rate (FPV): {}".format(fpr)) tnr = tn/(tn+fp) # - #
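# The confusion-matrix metrics above are computed again for the testing set below, so they can
# be wrapped in a small helper. Sketch; `cm_metrics` is a name introduced here and the input is
# assumed to be the 2x2 matrix returned by sklearn's confusion_matrix.

# +
def cm_metrics(cm):
    """Return sensitivity, specificity, PPV, NPV and FPR from a 2x2 confusion matrix."""
    tn, fp, fn, tp = cm.ravel()
    return {
        "Sensitivity (TPR)": tp / (tp + fn),
        "Specificity (TNR)": tn / (tn + fp),
        "Precision (PPV)": tp / (tp + fp),
        "Negative Predictive Value (NPV)": tn / (tn + fn),
        "False Positive Rate (FPR)": fp / (fp + tn),
    }

print(cm_metrics(cm_dummy_train))
# -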

# ### [-] Dummy Testing

    # + dummy_testing = DummyClassifier(strategy="uniform", random_state = 42) dummy_testing.fit(x_test, y_test) score_dummy_testing = dummy_testing.score(x_test, y_test) print("Dummy Test accuracy ***TEST***: ", round(score_dummy_testing*100,2), "% \n") y_dummy_testing_predicted = dummy_testing.predict(x_test) y_dummy_testing_proba = dummy_testing.predict_proba(x_test) cm_dummy_testing = confusion_matrix(y_test, y_dummy_testing_predicted) print(cm_dummy_testing, "\n") false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_dummy_testing_predicted) roc_auc = auc(false_positive_rate, true_positive_rate) print('1. The F-1 Score of the model {} \n '.format(round(f1_score(y_test, y_dummy_testing_predicted, average = 'macro'), 2))) print('2. The Recall Score of the model {} \n '.format(round(recall_score(y_test, y_dummy_testing_predicted, average = 'macro'), 2))) print('3. Classification report \n {} \n'.format(classification_report(y_test, y_dummy_testing_predicted))) print('3. AUC: \n {} \n'.format(roc_auc)) tn, fp, fn, tp = cm_dummy_train.ravel() # Sensitivity, hit rate, Recall, or true positive rate tpr = tp/(tp+fn) print("Sensitivity (TPR): {}".format(tpr)) # Specificity or true negative rate tnr = tn/(tn+fp) print("Specificity (TNR): {}".format(tnr)) # Precision or positive predictive value ppv = tp/(tp+fp) print("Precision (PPV): {}".format(ppv)) # Negative predictive value npv = tn/(tn+fn) print("Negative Predictive Value (NPV): {}".format(npv)) # False positive rate fpr = fp / (fp + tn) print("False Positive Rate (FPV): {}".format(fpr)) tnr = tn/(tn+fp) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Face Generation # In this project, you'll use generative adversarial networks to generate new images of faces. # ### Get the Data # You'll be using two datasets in this project: # - MNIST # - CelebA # # Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner. # # If you're using [FloydHub](https://www.floydhub.com/), set `data_dir` to "/input" and use the [FloydHub data ID](http://docs.floydhub.com/home/using_datasets/) "R5KrjnANiKVhLWAkpXhNBe". # + # # !conda install -y tqdm pillow matplotlib # + data_dir = './data' # FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe" #data_dir = '/input' """ DON'T MODIFY ANYTHING IN THIS CELL """ import helper helper.download_extract('mnist', data_dir) helper.download_extract('celeba', data_dir) # - # ## Explore the Data # ### MNIST # As you're aware, the [MNIST](http://yann.lecun.com/exdb/mnist/) dataset contains images of handwritten digits. You can view the first number of examples by changing `show_n_images`. # + show_n_images = 25 """ DON'T MODIFY ANYTHING IN THIS CELL """ # %matplotlib inline import os from glob import glob from matplotlib import pyplot mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L') pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray') # - # ### CelebA # The [CelebFaces Attributes Dataset (CelebA)](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. 
You can view the first number of examples by changing `show_n_images`. # + show_n_images = 25 """ DON'T MODIFY ANYTHING IN THIS CELL """ mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB') pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB')) # - # ## Preprocess the Data # Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28. # # The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images). # ## Build the Neural Network # You'll build the components necessary to build a GANs by implementing the following functions below: # - `model_inputs` # - `discriminator` # - `generator` # - `model_loss` # - `model_opt` # - `train` # # ### Check the Version of TensorFlow and Access to GPU # This will check to make sure you have the correct version of TensorFlow and access to a GPU # + """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) # - # ### Input # Implement the `model_inputs` function to create TF Placeholders for the Neural Network. It should create the following placeholders: # - Real input images placeholder with rank 4 using `image_width`, `image_height`, and `image_channels`. # - Z input placeholder with rank 2 using `z_dim`. # - Learning rate placeholder with rank 0. # # Return the placeholders in the following the tuple (tensor of real input images, tensor of z data) # + import problem_unittests as tests def model_inputs(image_width, image_height, image_channels, z_dim): """ Create the model inputs :param image_width: The input image width :param image_height: The input image height :param image_channels: The number of image channels :param z_dim: The dimension of Z :return: Tuple of (tensor of real input images, tensor of z data, learning rate) """ # TODO: Implement Function image_input = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='image_input') z_input = tf.placeholder(tf.float32, (None, z_dim), name='z_input') lr = tf.placeholder(tf.float32, name='learning_rate') return image_input, z_input, lr """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) # - # ### Discriminator # Implement `discriminator` to create a discriminator neural network that discriminates on `images`. This function should be able to reuse the variables in the neural network. Use [`tf.variable_scope`](https://www.tensorflow.org/api_docs/python/tf/variable_scope) with a scope name of "discriminator" to allow the variables to be reused. 
The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator). # + def discriminator(images, reuse=False): """ Create the discriminator network :param images: Tensor of input image(s) :param reuse: Boolean if the weights should be reused :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator) """ # TODO: Implement Function kernel_size = 5 alpha = 0.2 with tf.variable_scope('discriminator', reuse=reuse): # (28, 82, 3) -> (14,14, 128) -> (7, 7, 256) -> Flatten -> FC # input: (None, 28, 28, 3) x = images # Conv2D(stride=2) -> output: (None, 14, 14, 128), note: we don't use BatchNorm for the 1st layer x = tf.layers.conv2d(x, 128, kernel_size, strides=2, padding='same') x = tf.maximum(alpha*x, x) # Conv2D(stride=2) -> output(None, 7, 7, 256) x = tf.layers.conv2d(x, 256, kernel_size, strides=2, padding='same') x = tf.layers.batch_normalization(x) x = tf.maximum(alpha*x, x) # Flatten x = tf.reshape(x, (-1, 7 * 7 * 256)) # FC/Logits logits = tf.layers.dense(x, 1, activation=None) # Sigmoid output = tf.sigmoid(logits) return output, logits """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_discriminator(discriminator, tf) # - # ### Generator # Implement `generator` to generate an image using `z`. This function should be able to reuse the variables in the neural network. Use [`tf.variable_scope`](https://www.tensorflow.org/api_docs/python/tf/variable_scope) with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x `out_channel_dim` images. # + def generator(z, out_channel_dim, is_train=True): """ Create the generator network :param z: Input z :param out_channel_dim: The number of channels in the output image :param is_train: Boolean if generator is being used for training :return: The tensor output of the generator """ # TODO: Implement Function alpha = 0.2 kernel_size = 5 # When training, we want reuse the vars, but with tf.variable_scope('generator', reuse=(not is_train)): # FC -> Reshape_to_Cube -> Conv2D_T -> Conv2D_T -> Tanh # FC /wo activation x = z x = tf.layers.dense(x, 7*7*256, activation=None) # Reshape to the cube x = tf.reshape(x, (-1, 7, 7, 256)) x = tf.layers.batch_normalization(x, training=is_train) x = tf.maximum(alpha*x, x) # Conv2D_T -> (None, 14, 14, 128) x = tf.layers.conv2d_transpose(x, 128, kernel_size, strides=2, padding='same', activation=None) x = tf.layers.batch_normalization(x, training=is_train) x = tf.maximum(alpha*x, x) # Conv2D_T -> (None, 28, 28, out_channel_dim=1 or 3) x = tf.layers.conv2d_transpose(x, out_channel_dim, kernel_size, strides=2, padding='same', activation=None) #x = tf.layers.batch_normalization(x, training=is_train) #x = tf.maximum(alpha*x, x) # Tanh output = tf.tanh(x) return output """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_generator(generator, tf) # - # ### Loss # Implement `model_loss` to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). 
Use the following functions you implemented: # - `discriminator(images, reuse=False)` # - `generator(z, out_channel_dim, is_train=True)` # + def model_loss(input_real, input_z, out_channel_dim): """ Get the loss for the discriminator and generator :param input_real: Images from the real dataset :param input_z: Z input :param out_channel_dim: The number of channels in the output image :return: A tuple of (discriminator loss, generator loss) """ # TODO: Implement Function g_model = generator(input_z, out_channel_dim, is_train=True) d_model_real, d_logits_real = discriminator(input_real, reuse=False) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True) sigmoid_ce_with_logits = tf.nn.sigmoid_cross_entropy_with_logits # Label smoothing for Discriminator # - https://github.com/soumith/ganhacks # - https://arxiv.org/pdf/1606.03498.pdf d_loss_real = tf.reduce_mean(sigmoid_ce_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real) * 0.9)) #d_loss_fake = tf.reduce_mean(sigmoid_ce_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake))) d_loss_fake = tf.reduce_mean(sigmoid_ce_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake) * 0.1)) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean(sigmoid_ce_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake))) return d_loss, g_loss """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_loss(model_loss) # - # ### Optimization # Implement `model_opt` to create the optimization operations for the GANs. Use [`tf.trainable_variables`](https://www.tensorflow.org/api_docs/python/tf/trainable_variables) to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation). # + def model_opt(d_loss, g_loss, learning_rate, beta1): """ Get optimization operations :param d_loss: Discriminator loss Tensor :param g_loss: Generator loss Tensor :param learning_rate: Learning Rate Placeholder :param beta1: The exponential decay rate for the 1st moment in the optimizer :return: A tuple of (discriminator training operation, generator training operation) """ # TODO: Implement Function t_vars = tf.trainable_variables() d_vars = [var for var in t_vars if var.name.startswith('discriminator')] g_vars = [var for var in t_vars if var.name.startswith('generator')] with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars) return d_train_opt, g_train_opt """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_opt(model_opt, tf) # - # ## Neural Network Training # ### Show Output # Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training. 
# + """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode): """ Show example output for the generator :param sess: TensorFlow session :param n_images: Number of Images to display :param input_z: Input Z Tensor :param out_channel_dim: The number of channels in the output image :param image_mode: The mode to use for images ("RGB" or "L") """ cmap = None if image_mode == 'RGB' else 'gray' z_dim = input_z.get_shape().as_list()[-1] example_z = np.random.uniform(-1, 1, size=[n_images, z_dim]) samples = sess.run( generator(input_z, out_channel_dim, False), feed_dict={input_z: example_z}) images_grid = helper.images_square_grid(samples, image_mode) pyplot.imshow(images_grid, cmap=cmap) pyplot.show() # - # ### Train # Implement `train` to build and train the GANs. Use the following functions you implemented: # - `model_inputs(image_width, image_height, image_channels, z_dim)` # - `model_loss(input_real, input_z, out_channel_dim)` # - `model_opt(d_loss, g_loss, learning_rate, beta1)` # # Use the `show_generator_output` to show `generator` output while you train. Running `show_generator_output` for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the `generator` output every 100 batches. # + def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode): """ Train the GAN :param epoch_count: Number of epochs :param batch_size: Batch Size :param z_dim: Z dimension :param learning_rate: Learning Rate :param beta1: The exponential decay rate for the 1st moment in the optimizer :param get_batches: Function to get batches :param data_shape: Shape of the data :param data_image_mode: The image mode to use for images ("RGB" or "L") """ # TODO: Build Model image_width, image_height, image_channels = data_shape[1:] # data_shape=(n_batches, w, h, ch) input_real, input_z, lr = model_inputs(image_width, image_height, image_channels, z_dim) print('input data_shape: ', data_shape) print('input_real: {}, input_z: {}, lr: {}'.format(input_real.get_shape(), input_z.get_shape(), learning_rate)) d_loss, g_loss = model_loss(input_real, input_z, image_channels) d_train_opt, g_train_opt = model_opt(d_loss, g_loss, lr, beta1) batch_step = 0 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epoch_count): # batch_images: (None, W, H, Ch) for batch_images in get_batches(batch_size): # TODO: Train Model batch_step += 1 # As Generator output is -1.0~1.0, amplify real image to -1.0~1.0 (as they were -0.5 to 0.5) batch_images *= 2.0 batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z, lr: learning_rate}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z, lr: learning_rate}) # Show training status and images if batch_step % 100 == 0: # Train status train_loss_d = d_loss.eval({input_real: batch_images, input_z: batch_z }) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}, Batch Step:{}, Proc Imgs: {}k ...".format(epoch_i+1, epoch_count, batch_step, batch_step*batch_size//1000), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) if batch_step % 500 == 0: show_generator_output(sess, 5**2, input_z, image_channels, data_image_mode) # last show_generator_output(sess, 5**2, input_z, image_channels, data_image_mode) # - # ### MNIST 
# Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0. # + batch_size = 100 z_dim = 100 learning_rate = 2e-4 beta1 = 0.3 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ epochs = 2 mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg'))) with tf.Graph().as_default(): train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches, mnist_dataset.shape, mnist_dataset.image_mode) # - # ### CelebA # Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces. # + batch_size = 100 z_dim = 100 learning_rate = 2e-4 beta1 = 0.3 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ epochs = 1 celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))) with tf.Graph().as_default(): train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches, celeba_dataset.shape, celeba_dataset.image_mode) # - # ### Submitting This Project # When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as a HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Figure S11. Displacement estimated from velocity vs. 
the one reconstructed from time-series # + # %matplotlib inline import os from pprint import pprint import numpy as np from cartopy import crs as ccrs from matplotlib import pyplot as plt from mintpy.defaults.plot import * from mintpy.utils import utils as ut, plot as pp, readfile from mintpy import view work_dir = os.path.expanduser('~/Papers/2021_Kirishima/figs_src/obs') os.chdir(work_dir) print('Go to directory', work_dir) ## Common setting ref_lat, ref_lon = 31.916, 130.850 # - # ### Reconstruct displacement from time-series proj_dir_base = os.path.expanduser('~/data/Kirishima') proj_names = [ 'KirishimaAlosAT424', 'KirishimaAlosDT73', 'KirishimaAlos2AT131', 'KirishimaAlos2DT23', ] unw1_files = [] unw2_files = [] for proj_name in proj_names: vel_file = os.path.join(proj_dir_base, proj_name, 'mintpy/velocity.h5') atr = readfile.read_attribute(vel_file) unw1_file = os.path.join(proj_dir_base, proj_name, 'mintpy/{}.unw'.format(atr['DATE12'])) unw2_file = os.path.join(proj_dir_base, proj_name, 'mintpyAll/{}.unw'.format(atr['DATE12'])) unw1_files.append(unw1_file) unw2_files.append(unw2_file) pprint(unw1_files) pprint(unw2_files) # ### Plot # + # view.py options dem_file = os.path.expanduser('~/data/Kirishima/DEM/gsi10m.dem.wgs84') opt = ' --sub-lat 31.895 31.955 --sub-lon 130.843 130.900 ' opt += '--dem {} --contour-step 100 --contour-smooth 0.0 '.format(dem_file) opt += ' -c jet --wrap --wrap-range -5 5 -u cm ' opt += ' --notitle --fontsize 12 --ref-size 3 --lalo-step 0.03 --nocbar --noscalebar --alpha 0.7 ' opt += ' --noverbose --lalo-loc 0 0 0 0 ' fig, axs = plt.subplots(nrows=2, ncols=4, figsize=[11, 6], subplot_kw=dict(projection=ccrs.PlateCarree())) for i in range(len(unw1_files)): unw_files = [unw2_files[i], unw1_files[i]] for j in range(2): ax = axs[j, i] cmd = 'view.py {f} phase {o}'.format(f=unw_files[j], o=opt); data, atr, inps = view.prep_slice(cmd) ax, inps, im, cbar = view.plot_slice(ax, data, atr, inps) #fig.subplots_adjust(hspace=0.01, wspace=0.05, left=0.05, right=0.95, top=0.95, bottom=0.05) fig.tight_layout() # colorbar cax = fig.add_axes([0.425, -0.01, 0.15, 0.015]) cbar = plt.colorbar(im, cax=cax, orientation='horizontal')#, ticks=[-2.5, 0, 2.5]) cbar.ax.tick_params(labelsize=font_size) cbar.set_label('LOS displacement [cm]', fontsize=font_size) # output out_file = os.path.abspath('extra4dis_map_est.png') plt.savefig(out_file, bbox_inches='tight', transparent=True, dpi=fig_dpi) print('save figure to file', out_file) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import csv from io import StringIO with open('/Users/yetongxue/Downloads/网站管理.csv') reader = csv.reader(f, delimiter=',') for index,row in enumerate(reader): if index == 0: continue print(row) import tldextract result = tldextract.extract('http://*.sometime') result from queue import PriorityQueue import numpy as np q = PriorityQueue(5) arr = np.random.randint(1, 20, size=5) arr a = [[1,2], [3,4]] a.reverse() a bin(int('2000000000000000004008020', 16)) len('10000000000000000000000000000000000000000000000000000000000000000000000100000000001000000000100000') bin(int('00000000000004008020', 16)) len('100000000001000000000100000') root = {} node = root for char in 'qwerasdf': node = node.setdefault(char, {}) node['#'] = '#' root import datetime t = 1531732030883 * 0.001 datetime.datetime.fromtimestamp(1531732030883 * 0.001) today 
import collections dic = collections.OrderedDict() dic[1] = 1 dic[2] = 2 dic dic.popitem(last=False) # + import datetime today = datetime.datetime.now().date() update_time = datetime.datetime.strptime('2020-08-07 23:20:10', '%Y-%m-%d %H:%M:%S').date() if today - update_time > datetime.timedelta(days=7): print(1) (today - update_time).days '1'.strip().startswith('=') # + def check_row(row): data = [] for i in row: i = i.strip() while i.startswith('='): i = i[1:] data.append(i) return data check_row(['12', ' ==12', '1=1']) # - import re res = re.match('^=+\s.', '=ab') res cc_ai = { 'normal': 'JS', 'low': 'JS', 'middle': 'JS', 'high': 'JSPAGE' } cc_ai import datetime start = datetime.datetime.now() end = datetime.datetime.strptime('2020-09-24T07:14:04.252394', '%Y-%m-%dT%H:%M:%S.%f') datetime.datetime.strptime(str(start.date()), '%Y-%m-%d') - datetime.timedelta(days=7) isinstance(datetime.datetime.now(), datetime.datetime) ''.join('\t\ra-_sdf\n a sdf '.split()) SITE_MODULE = [ 'app_ipv6', 'site_shield', 'subdomain_limit', 'custom_page', 'app_extra', 'service_charge', 'app_ipv6_defend', 'privacy_shield', 'app_ipv6_custom', 'app_subdomain_limit', 'app_ip_limit', 'app_bandwidth', 'app_subuser', 'batch', 'service_charge', 'cn2_line', 'site_shield_monitor', 'app_httpdns_1k', 'app_domain_limit' ] SITE_MODULE ''.join('a, \rd'.replace(',', ',').split()) # + from docxtpl import DocxTemplate path = '/Users/yetongxue/Downloads/test1.docx' save_path = '/Users/yetongxue/Downloads/test1_save.docx' template = DocxTemplate(path) data = {'sites': [{'domain':'asdf'},{'domain':'asdf2323'}]} template.render(context=data) template.save(save_path) # - import json json.dumps([{'chargeType': 'plan', 'number': 1, 'expireTime': '2021-04-23', 'chargeId': '604d75cc261b3b0011a1a496'}, {'chargeType': 'additionPackage', 'number': 1, 'expireTime': '2021-04-23', 'chargeId': '604d75cd261b3b0011a1a498'}]) import commonds status, output = commands.getstatusoutput('date') # 2016年 06月 30日 星期四 19:26:21 CST status, output import os res = os.popen('dig -x fc00:db20:35b:7399::5').read() if 'intra.jiasule.com' in res: print('ok') res # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Load Loan Prediction Dataset import pandas as pd loan_data = pd.read_csv('train_loan.csv') loan_data.head() # ### Feature 1 -Total Income loan_data['Total_income'] = loan_data['ApplicantIncome'] + loan_data['CoapplicantIncome'] loan_data[['ApplicantIncome', 'CoapplicantIncome', 'Total_income']].head() # ### Feature 2 - Loan amount and Income Ratio loan_data['loan_income_ratio'] = loan_data['LoanAmount'] / loan_data['ApplicantIncome'] loan_data[['ApplicantIncome', 'LoanAmount', 'loan_income_ratio']].head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt # ### generate dataset dataset_X = np.arange(0, 10, 1) dataset_Y = 0.5 * dataset_X + 5 dataset_Y plt.scatter(dataset_X, dataset_Y) w = 0 b = 0 w = (dataset_X.transpose() * dataset_Y) / (dataset_X.transpose() * dataset_X) w plt.plot(dataset_X, w) predict_Y = w.transpose() * dataset_X # + fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 3)) ax1.scatter(dataset_X, predict_Y) ax2.scatter(dataset_X, dataset_Y) 
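# The data was generated as y = 0.5*x + 5, so a fit that includes an intercept recovers both
# parameters, unlike the single-coefficient estimate above. Minimal sketch using NumPy's
# least-squares polynomial fit; `w_fit`, `b_fit` and `predict_Y_fit` are names introduced here.

# +
w_fit, b_fit = np.polyfit(dataset_X, dataset_Y, deg=1)  # slope and intercept
print(w_fit, b_fit)  # should be close to 0.5 and 5

predict_Y_fit = w_fit * dataset_X + b_fit
plt.scatter(dataset_X, dataset_Y, label="data")
plt.plot(dataset_X, predict_Y_fit, color="red", label="least-squares fit")
plt.legend()
# -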
plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Electric Machinery Fundamentals 5th edition # ## Chapter 6 (Code examples) # + [markdown] slideshow={"slide_type": "-"} # ## Example 6-5 (d) # # **Creates a plot of the torque-speed curve of the induction motor as depicted in Figure 6-23.** # + [markdown] slideshow={"slide_type": "skip"} # **Note:** *You should first click on "`Cell` → `Run All`" in order that the plots get generated.* # + [markdown] slideshow={"slide_type": "skip"} # Import the PyLab namespace (provides set of useful commands and constants like Pi) # + slideshow={"slide_type": "skip"} # %pylab notebook # + [markdown] slideshow={"slide_type": "slide"} # First, initialize the values needed in this program. # + slideshow={"slide_type": "fragment"} r1 = 0.641 # Stator resistance x1 = 1.106 # Stator reactance r2 = 0.332 # Rotor resistance x2 = 0.464 # Rotor reactance xm = 26.3 # Magnetization branch reactance v_phase = 460 / sqrt(3) # Phase voltage n_sync = 1800 # Synchronous speed (r/min) w_sync = n_sync * 2*pi/60 # Synchronous speed (rad/s) # + [markdown] slideshow={"slide_type": "slide"} # Calculate the Thevenin voltage and impedance from Equations 7-41a: # # $$ V_{TH} = V_\phi \frac{X_M}{\sqrt{R_1^2 + (X_1 + X_M)^2}} $$ # # and 7-43: # # $$ Z_{TH} = \frac{jX_m (R_1 + jX_1)}{R_1 + j(X_1 + X_M)} $$ # + slideshow={"slide_type": "fragment"} v_th = v_phase * ( xm / sqrt(r1**2 + (x1 + xm)**2) ) z_th = ((1j*xm) * (r1 + 1j*x1)) / (r1 + 1j*(x1 + xm)) r_th = real(z_th) x_th = imag(z_th) # + [markdown] slideshow={"slide_type": "slide"} # Now calculate the torque-speed characteristic for many slips between 0 and 1. # + slideshow={"slide_type": "fragment"} s = linspace(0, 1, 50) # Slip s[0] = 0.001 # avoid divide-by-zero problems nm = (1 - s) * n_sync # mechanical speed # + [markdown] slideshow={"slide_type": "slide"} # Calculate torque for original rotor resistance using: # # $$ \tau_\text{ind} = \frac{3 V_{TH}^2 R_2 / s}{\omega_\text{sync}[(R_{TH} + R_2/s)^2 + (X_{TH} + X_2)^2]} $$ # + slideshow={"slide_type": "fragment"} t_ind1 = ((3 * v_th**2 * r2/s) / (w_sync * ((r_th + r2/s)**2 + (x_th + x2)**2))) # + [markdown] slideshow={"slide_type": "fragment"} # Calculate torque for doubled rotor resistance: # - t_ind2 = ((3 * v_th**2 * 2*r2/s) / (w_sync * ((r_th + 2*r2/s)**2 + (x_th + x2)**2))) # + [markdown] slideshow={"slide_type": "slide"} # Plot the torque-speed curve: # + slideshow={"slide_type": "fragment"} rc('text', usetex=True) # enable LaTeX commands for plot plot(nm, t_ind2,'k--', nm, t_ind1,'b', lw=2) xlabel(r'$\mathbf{n_{m}}\ [rpm]$') ylabel(r'$\mathbf{\tau_{ind}}\ [Nm]$') title ('Induction motor torque-speed characteristic') legend ((r'Doubled $R_{2}$','Original $R_{2}$'), loc = 3); grid() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="nb3nO15Uv6ZC" # # Pneumonia X-ray ML Model # # Pneumonia is an infection that inflames the air sacs in one or both lungs. The air sacs may fill with fluid or pus (purulent material), causing cough with phlegm or pus, fever, chills, and difficulty breathing. A variety of organisms, including bacteria, viruses and fungi, can cause pneumonia. 
# + [markdown] id="eyXny-Wov6Rr" # The normal chest X-ray depicts clear lungs without any areas of abnormal opacification in the image. Bacterial pneumonia typically exhibits a focal lobar consolidation, in this case in the right upper lobe, whereas viral pneumonia manifests with a more diffuse ‘‘interstitial’’ pattern in both lungs. # # Chest X-ray images (anterior-posterior) were selected from retrospective cohorts of pediatric patients of one to five years old from Guangzhou Women and Children’s Medical Center, Guangzhou. All chest X-ray imaging was performed as part of patients’ routine clinical care. # # For the analysis of chest x-ray images, all chest radiographs were initially screened for quality control by removing all low quality or unreadable scans. The diagnoses for the images were then graded by two expert physicians before being cleared for training the AI system. In order to account for any grading errors, the evaluation set was also checked by a third expert. # # # + [markdown] id="SkMY_gyvv6Kq" # **Data** for this project comes from [here](https://data.mendeley.com/datasets/rscbjbr9sj/2) # # **Citation :** ; ; (2018), “Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images for Classification”, Mendeley Data, V2, doi: 10.17632/rscbjbr9sj.2 # # ![Image](https://i.imgur.com/jZqpV51.png) # + colab={"base_uri": "https://localhost:8080/"} id="ESEDykRqvSV6" outputId="52f07743-5abe-41f8-8b69-f85c50e326f4" # Importing our data from the cloud and viewing and understanding it. # !wget https://data.mendeley.com/public-files/datasets/rscbjbr9sj/files/f12eaf6d-6023-432f-acc9-80c9d7393433/file_downloaded # + id="QlICC2b3EYqr" # Importing dependancies import zipfile import os import pathlib import random import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg import tensorflow as tf import tensorflow_hub as hub from tensorflow.keras import layers from tensorflow.keras.preprocessing.image import ImageDataGenerator # + id="PTmISEXbwT0h" zip_ref = zipfile.ZipFile("file_downloaded") zip_ref.extractall() zip_ref.close() # + colab={"base_uri": "https://localhost:8080/"} id="vCZv7I2qwZ5S" outputId="530b1dd4-a265-4280-d414-84ffc511ac45" # Walkthrough for dirpath, dirnames, filenames in os.walk("chest_xray"): print(f"There are {len(dirnames)} directories and {len(filenames)} images in '{dirpath}' .'") # + id="OluOOosewZ2o" colab={"base_uri": "https://localhost:8080/"} outputId="96faa3a7-e43a-45df-8ded-2acff7427048" data_dir = pathlib.Path("/content/chest_xray/train") class_names = np.array(sorted([item.name for item in data_dir.glob("*")])) class_names = class_names[1:] print(class_names) # + id="-vlGuMLOwZz6" # Visualizing our data def view_random_image(target_dir, target_class): # Setup the target directory target_folder = target_dir+target_class # Get a random image path random_image = random.sample(os.listdir(target_folder), 1) print(random_image) # Read in the image and plot it using matplotlib img = mpimg.imread(target_folder + "/" + random_image[0]) plt.imshow(img) plt.title(target_class) plt.axis("off"); print(f"Image shape: {img.shape}") # Show the shape of the image return img # + id="JN-SjxEqwZxe" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="eb9ab21d-7b9d-4164-c9eb-3146f481855f" # Viewing a random image from train img = view_random_image(target_dir="/content/chest_xray/train/", target_class="PNEUMONIA") # + id="_5ipJjLCw3M2" colab={"base_uri": "https://localhost:8080/", "height": 234} 
outputId="9b14722d-b53f-4b20-adbf-148c81fd5aa7" # Visualize both the data plt.figure() plt.subplot(1, 2, 1) normal = view_random_image("/content/chest_xray/train/", "NORMAL") plt.subplot(1, 2, 2) pneumonia = view_random_image("/content/chest_xray/train/", "PNEUMONIA") # + colab={"base_uri": "https://localhost:8080/"} id="ZuV85_AxwgLN" outputId="9cd764d3-ea3a-4496-ae7a-a25c9fc26245" # setup data inputs IMAGE_SHAPE = (224, 224) BATCH_SIZE = 32 train_dir = "/content/chest_xray/train/" test_dir = "/content/chest_xray/test/" train_datagen = ImageDataGenerator(rescale=1/255.) test_datagen = ImageDataGenerator(rescale=1/255.) print("Train images :") train_data = train_datagen.flow_from_directory(train_dir, target_size=IMAGE_SHAPE, batch_size=BATCH_SIZE, class_mode="binary") print("Testing images :") test_data = test_datagen.flow_from_directory(test_dir, target_size=IMAGE_SHAPE, batch_size=BATCH_SIZE, class_mode="binary") # + [markdown] id="xfxvJb4ux_ea" # ## 2. Preprocessing the data # # The data presented to us is not same, some images are of random shapes and also let's Normalize our data so that it becomes easier for our Neural Network to learn patterns. # + id="EoYcgwpkwgIp" # Defining directory dataset paths train_dir = "/content/chest_xray/train/" test_dir = "/content/chest_xray/test/" # + id="FQjP8VAgwgFp" # Normalization train_datagen = ImageDataGenerator(rescale=1./255) test_datagen = ImageDataGenerator(rescale=1./255) # + colab={"base_uri": "https://localhost:8080/"} id="4Ip7VotsyaTw" outputId="e3b68e44-82bd-40d6-bb5c-18018135a7ff" # Import data from directories and turn it into batches train_data = train_datagen.flow_from_directory(directory=train_dir, batch_size=32, target_size=(224, 224), class_mode="binary", seed=42) test_data = test_datagen.flow_from_directory(directory=test_dir, batch_size=32, target_size=(224, 224), class_mode="binary", seed=42) # + colab={"base_uri": "https://localhost:8080/"} id="BX3iFq7tyfvZ" outputId="de50b9ef-71f2-4825-9f48-ce81b5dd5804" # Getting a sample of a train data batch images, labels = train_data.next() len(images), len(labels) # + [markdown] id="n7ymJIm4ylSf" # ## 3. Creating our models # # We are using a CNNs (Convolutional Neural Networks) to get quick results on our model than a normal ANN (Artificial Neural Network) # + [markdown] id="j6y6UuTwyo_g" # ### 1. Model 1 # # For this model we will be using a Conv2D layer then a MaxPool2D then Conv2D layer and then a MaxPool2D layer followed by a Flatten and output layer. # # *All the models are trained and saved in the drive, so they need not to be run again.* # + id="lAKa_lev-6x-" # Build a CNN model model_1 = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(filters=10, kernel_size=3, activation="relu", input_shape=(224, 224, 3)), tf.keras.layers.MaxPool2D(pool_size=2, padding="valid"), tf.keras.layers.Conv2D(10, 3, activation="relu"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(1, activation="sigmoid") ]) # + id="lATa6gRD_PUJ" # Compiling the model model_1.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(), metrics=["accuracy"]) # + id="rpFb_0rg_SR3" colab={"base_uri": "https://localhost:8080/"} outputId="7a3e20b7-6f56-4edc-fb65-7194ad0a3048" # seed tf.random.set_seed(42) # Fit the model history_1 = model_1.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data=test_data, validation_steps=len(test_data)) # + [markdown] id="dPb0-9gfzClr" # ### 2. 
Model 2 # # For this model we will be using a Conv2D layer, then a Conv2D layer, then a MaxPool2D layer, then Conv2D layer, then Conv2D layer and then a MaxPool2D layer followed by a Flatten and output layer. # # *All the models are trained and saved in the drive, so they need not to be run again.* # + id="BJxWZUbUU5E4" # Build a CNN model (Tiny VGG) model_2 = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(filters=10, kernel_size=3, activation="relu", input_shape=(224, 224, 3)), tf.keras.layers.Conv2D(10, 3, activation="relu"), tf.keras.layers.MaxPool2D(pool_size=2, padding="valid"), tf.keras.layers.Conv2D(10, 3, activation="relu"), tf.keras.layers.Conv2D(10, 3, activation="relu"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(1, activation="sigmoid") ]) # + id="sgmoX_KvWovD" # Compiling the model model_2.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(), metrics=["accuracy"]) # + id="AHBIZrbQWsE7" colab={"base_uri": "https://localhost:8080/"} outputId="5d1cccdb-165d-46a1-e63d-dc81892e801a" # seed tf.random.set_seed(42) # Fit the model history_2 = model_2.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data=test_data, validation_steps=len(test_data)) # + [markdown] id="rU_Ju0LuzQEa" # ### 3. Model 3 # # For this model we will be using a Conv2D layer, then a Conv2D layer, then a MaxPool2D layer, then Conv2D layer, then Conv2D layer and then a MaxPool2D layer followed by a Flatten and output layer. But the `Adam()`'s learning rate set to `0.01`. # # *All the models are trained and saved in the drive, so they need not to be run again.* # + id="qDXVs1MHXU1i" # Build a CNN model model_3 = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(filters=10, kernel_size=3, activation="relu", input_shape=(224, 224, 3)), tf.keras.layers.Conv2D(10, 3, activation="relu"), tf.keras.layers.MaxPool2D(pool_size=2, padding="valid"), tf.keras.layers.Conv2D(10, 3, activation="relu"), tf.keras.layers.Conv2D(10, 3, activation="relu"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(1, activation="sigmoid") ]) # + id="XI8qmYnfaXA9" # Compiling the model model_3.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), metrics=["accuracy"]) # + id="BwljjcODaYT8" colab={"base_uri": "https://localhost:8080/"} outputId="09463f51-6819-44a7-9048-7ce17bbed88a" # seed tf.random.set_seed(42) # Fit the model history_3 = model_3.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data=test_data, validation_steps=len(test_data)) # + [markdown] id="Y2U5Z249zfDD" # ### 4. Model 4 # # For this model we will be using a Conv2D layer, then a Conv2D layer, then a MaxPool2D layer, then Conv2D layer, then Conv2D layer and then a MaxPool2D layer followed by a Flatten and output layer. 
# # *All the models are trained and saved in the drive, so they need not to be run again.* # + id="cDJjPduOEQqX" # Build a CNN model model_4 = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(filters=10, kernel_size=3, activation="relu", input_shape=(224, 224, 3)), tf.keras.layers.MaxPool2D(pool_size=2, padding="valid"), tf.keras.layers.Conv2D(10, 3, activation="relu"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(1, activation="sigmoid") ]) # + id="dI8dk5BWER1E" # Compiling the model model_4.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), metrics=["accuracy"]) # + id="jc36bNw_ERv6" colab={"base_uri": "https://localhost:8080/"} outputId="6e67fad2-7e02-471b-bc34-e505b9927ba7" # seed tf.random.set_seed(42) # Fit the model history_4 = model_4.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data=test_data, validation_steps=len(test_data)) # + [markdown] id="eVjDvuJl1NDa" # ### Model 5 # + id="shbNSuEFQFGc" # Build a CNN model model_5 = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(filters=16, kernel_size=3, activation="relu", padding="same", input_shape=(224, 224, 3)), tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"), tf.keras.layers.MaxPool2D(pool_size=2), tf.keras.layers.SeparableConv2D(32, 3, activation="relu", padding="same"), tf.keras.layers.SeparableConv2D(32, 3, activation="relu", padding="same"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.SeparableConv2D(64, 3, activation="relu", padding="same"), tf.keras.layers.SeparableConv2D(64, 3, activation="relu", padding="same"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.SeparableConv2D(128, 3, activation="relu", padding="same"), tf.keras.layers.SeparableConv2D(128, 3, activation="relu", padding="same"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.Dropout(rate=0.2), tf.keras.layers.SeparableConv2D(256, 3, activation="relu", padding="same"), tf.keras.layers.SeparableConv2D(256, 3, activation="relu", padding="same"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.Dropout(rate=0.2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation="relu"), tf.keras.layers.Dropout(rate=0.7), tf.keras.layers.Dense(128, activation="relu"), tf.keras.layers.Dropout(rate=0.5), tf.keras.layers.Dense(64, activation="relu"), tf.keras.layers.Dropout(rate=0.3), tf.keras.layers.Dense(1, activation="sigmoid") ]) # + id="JuCnbQex6LI-" # Compiling the model model_5.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(), metrics=["accuracy"]) # + id="eE98QH-5hgEw" colab={"base_uri": "https://localhost:8080/"} outputId="d6542493-4723-4410-9454-6e6fdf31aa0c" # seed tf.random.set_seed(42) # Fit the model history_5 = model_5.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data=test_data, validation_steps=len(test_data)) # + [markdown] id="HKUXBG5z1X5Y" # ### Model 6 # + id="cW31a1T_i1b-" # Build a CNN model model_6 = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(filters=16, kernel_size=3, activation="relu", padding="same", input_shape=(224, 224, 3)), tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"), tf.keras.layers.MaxPool2D(pool_size=2), tf.keras.layers.SeparableConv2D(32, 3, activation="relu", padding="same"), tf.keras.layers.SeparableConv2D(32, 3, activation="relu", padding="same"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.SeparableConv2D(64, 3, activation="relu", padding="same"), tf.keras.layers.SeparableConv2D(64, 3, activation="relu", padding="same"), 
tf.keras.layers.MaxPool2D(2), tf.keras.layers.SeparableConv2D(128, 3, activation="relu", padding="same"), tf.keras.layers.SeparableConv2D(128, 3, activation="relu", padding="same"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.Dropout(rate=0.2), tf.keras.layers.SeparableConv2D(256, 3, activation="relu", padding="same"), tf.keras.layers.SeparableConv2D(256, 3, activation="relu", padding="same"), tf.keras.layers.MaxPool2D(2), tf.keras.layers.Dropout(rate=0.2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(512, activation="relu"), tf.keras.layers.Dropout(rate=0.7), tf.keras.layers.Dense(128, activation="relu"), tf.keras.layers.Dropout(rate=0.5), tf.keras.layers.Dense(64, activation="relu"), tf.keras.layers.Dropout(rate=0.3), tf.keras.layers.Dense(1, activation="sigmoid") ]) # + id="M4ssuhKDuVAV" # Compiling the model model_6.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(), metrics=["accuracy"]) # + id="H5gZWeFKuY7B" colab={"base_uri": "https://localhost:8080/"} outputId="e9b5c388-6681-474d-f67a-60450ee63a06" # seed tf.random.set_seed(42) # Fit the model history_6 = model_6.fit_generator(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data=test_data, validation_steps=len(test_data)) # + [markdown] id="loUC66JQ1Imc" # ## Transfer learnt Models # + id="P3MhgmfdzzDY" # Models from tensorflow_hub resnet_url = "https://tfhub.dev/google/imagenet/resnet_v2_50/classification/5" efficientnet_url = "https://tfhub.dev/tensorflow/efficientnet/b0/classification/1" mobilenet_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4" efficientnetb1_url = "https://tfhub.dev/tensorflow/efficientnet/b1/classification/1" # + id="eLnUL5Ld0Ahp" # Create a model function to create a model from url def create_model(model_url, num_classes=10): """ Takes a tensorflow Hub url and creates a Keras Sequential model with it. Args: model_url (str) : A tensorflow hub feature extraction URL. num_classes (int) : Number of output neurons in the output layers. Returns: An uncompiled Keras Sequential model with model_url as feature extractor layer and Dense output layer with num_classes output neurons. 
""" # Download the pretrained model and save it as a Keras layer feature_extractor_layer = hub.KerasLayer(model_url, trainable=True, # freezing already learnt patterns name="feature_extraction_layer", input_shape=IMAGE_SHAPE+(None,)) # Create our own model model = tf.keras.Sequential([ feature_extractor_layer, layers.Dense(1, activation="sigmoid", name="output_layer") ]) return model # + id="35qRTcky0BMZ" # To get rid of 3 byte image not processed error from PIL import ImageFile ImageFile.LOAD_TRUNCATED_IMAGES = True # + [markdown] id="pwypLJpK0RD7" # ### Resnet Model # + id="a2vbRF-rN1bt" resnet_model = create_model(resnet_url) # + id="SjYXGuRLOKGN" # Compiling our model resnet_model.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(), metrics=["accuracy"]) # + id="UAzfZ17DOlwy" colab={"base_uri": "https://localhost:8080/"} outputId="6c9f531a-86e3-4f37-ca9f-776daeba5e3f" tf.random.set_seed(42) # Fitting our model resnet_history = resnet_model.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data = test_data, validation_steps = len(test_data)) # + [markdown] id="IgCSyM2i0fLL" # ### Efficientnet # + id="K-8V4qbReEn4" efficientnet_model = create_model(efficientnet_url) # + id="nh-8iNpghi4z" # Compiling our model efficientnet_model.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(), metrics=["accuracy"]) # + id="OwoIuWtchwv5" colab={"base_uri": "https://localhost:8080/"} outputId="6bbcce21-a9f2-443e-a01a-ffa06b7cc58a" tf.random.set_seed(42) # Fitting our model efficientnet_history = efficientnet_model.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data = test_data, validation_steps = len(test_data)) # + [markdown] id="Z_SNQWnz0mPR" # ### Mobilenet # + id="ZB_klm3Kh-IF" mobilenet_model = create_model(mobilenet_url) # + id="7FW9M3QaiZbu" # Compiling our model mobilenet_model.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(), metrics=["accuracy"]) # + id="zr9XK3HfiZY3" colab={"base_uri": "https://localhost:8080/"} outputId="c5d74912-6185-4bc5-f64d-492b320b7e86" tf.random.set_seed(42) # Fitting our model mobilenet_history = mobilenet_model.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data = test_data, validation_steps = len(test_data)) # + [markdown] id="NOIpezDF0wFp" # ### Efficientnetb1 # + id="sGlgk2Lv0u6b" efficientnetb1_model = create_model(efficientnetb1_url) # + id="m_Dt35surhRg" # Compiling our model efficientnetb1_model.compile(loss="binary_crossentropy", optimizer=tf.keras.optimizers.Adam(), metrics=["accuracy"]) # + id="0yqHcDTErhPE" colab={"base_uri": "https://localhost:8080/"} outputId="91e7806f-e000-4e47-c9da-ed9c9852ff11" tf.random.set_seed(42) # Fitting our model efficientnetb1_history = efficientnetb1_model.fit(train_data, epochs=5, steps_per_epoch=len(train_data), validation_data = test_data, validation_steps = len(test_data)) # + [markdown] id="1n_AfMY-2JPZ" # ## Visualizing loss and accuracy curves for all the models # + id="3ArQez0i1_VZ" # Plotting the valildation and training curves seperately def plot_loss_curves(history): """ Returns seperate loss curves for training and validation metrics. 
""" loss = history.history["loss"] val_loss = history.history["val_loss"] accuracy = history.history["accuracy"] val_accuracy = history.history["val_accuracy"] epochs = range(len(history.history["loss"])) # Plot loss plt.plot(epochs, loss, label="training_loss") plt.plot(epochs, val_loss, label="val_loss") plt.title("loss") plt.xlabel("epochs") plt.legend() # Plot accuracy plt.figure() plt.plot(epochs, accuracy, label="accuracy") plt.plot(epochs, val_accuracy, label="val_accuracy") plt.title("accuracy") plt.xlabel("epochs") plt.legend() # + id="LFya1cb72XVp" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="ae464ce3-50d8-4189-f30e-b1a293755681" plot_loss_curves(history_1) # + id="lWuf_xDa2mLT" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="8162b0bb-a016-4a4d-8436-6a8c5bba5cdc" plot_loss_curves(history_2) # + id="_dOk0KRr2mI4" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="00b553d0-e868-45c5-87bb-baa85f7a9a7f" plot_loss_curves(history_3) # + id="kKyhrH2X2mGL" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="82b6be50-fe6e-444b-900e-c891ac336ccc" plot_loss_curves(history_4) # + id="hIj2UAtN2mDU" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="45efa3b8-5ffd-44b2-ed6a-3813b843c672" plot_loss_curves(history_5) # + id="W2Va6eEd2mAq" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="acd10cd4-0b31-443e-dbf3-52748c2c461c" plot_loss_curves(history_6) # + id="QRKhoFtR2l4n" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="38f03c72-2b22-4c27-d63b-036edf92c5c2" plot_loss_curves(resnet_history) # + id="qmPkZNcw2uk0" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="d3718d72-a2ef-43c5-9e90-b87bfce2d47e" plot_loss_curves(efficientnet_history) # + id="mcyKwc5U2uKR" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="c4f5ed22-a885-4b63-9580-95e8bc203e69" plot_loss_curves(mobilenet_history) # + id="jeerurjB2t1w" colab={"base_uri": "https://localhost:8080/", "height": 573} outputId="ccdb0a00-3948-43bb-a50b-52fd8c492117" plot_loss_curves(efficientnetb1_history) # + [markdown] id="PAuCe8mh3CsD" # ## Evaluating accuracy for all the models # + id="-uqrvQ6e3GG_" colab={"base_uri": "https://localhost:8080/"} outputId="c51ff515-e1c7-45a0-f5aa-c32b907c7352" model_1.evaluate(test_data) # + id="fLpNP7FZ3Xti" colab={"base_uri": "https://localhost:8080/"} outputId="8e53b10e-50cc-4b6f-abf3-fdb555156295" model_2.evaluate(test_data) # + id="Y99hAWZp3X1o" colab={"base_uri": "https://localhost:8080/"} outputId="8a28801b-cea8-4273-87b6-81cd9f1d1798" model_3.evaluate(test_data) # + id="0bLeOPTM3X5T" colab={"base_uri": "https://localhost:8080/"} outputId="0c0ebb86-1b25-4c10-e592-f7df63d3671e" model_4.evaluate(test_data) # + id="gkVczdqZ3X_U" colab={"base_uri": "https://localhost:8080/"} outputId="1f74cbf5-67f8-478e-9b5e-86c8678810ec" model_5.evaluate(test_data) # + id="x9gIjSOI3ZVr" colab={"base_uri": "https://localhost:8080/"} outputId="aed7fd86-631c-43a9-8a10-c5d1aeb5a485" model_6.evaluate(test_data) # + id="YL-4i5hJ3Za0" colab={"base_uri": "https://localhost:8080/"} outputId="340cbc7b-11e7-4cdd-d25f-62917533fc75" resnet_model.evaluate(test_data) # + id="3OCsHI_g3ZhU" colab={"base_uri": "https://localhost:8080/"} outputId="9f25a6de-2eac-4d77-fa5a-b21f170efadd" efficientnet_model.evaluate(test_data) # + id="xgGx80mB3ZjZ" colab={"base_uri": "https://localhost:8080/"} outputId="816a33f5-1f19-4144-e3f6-29f511996288" 
mobilenet_model.evaluate(test_data) # + id="O9Dw-cNT3ZlO" colab={"base_uri": "https://localhost:8080/"} outputId="85cca83e-81f5-469e-954c-7be1ffc9e587" efficientnetb1_model.evaluate(test_data) # + [markdown] id="U4qTedkW3vdI" # ## Conclusions # # With all the model building, testing and tuning we come to the conclusion that our `efficientnetb1_model` is the best with `90%` accuracyon test data. # # ![Image](https://i.ibb.co/cCBsJ8P/accuracy.jpg) # + id="L1rdaOAC4d_2" # Custom layer for our pre-trained model feature_extractor_layer = hub.KerasLayer(efficientnetb1_url, trainable=True, # freezing already learnt patterns name="feature_extraction_layer", input_shape=IMAGE_SHAPE+(None,)) # + id="xKWMa_SM4GdP" Xray_model = tf.keras.models.load_model("/content/drive/MyDrive/efficientnetb1_model.h5", custom_objects={'KerasLayer': feature_extractor_layer}) # + id="YAG2CNXZ4PbZ" colab={"base_uri": "https://localhost:8080/"} outputId="877b0598-f9a5-495a-f0eb-2664f4b9a57c" Xray_model.evaluate(test_data) # + id="TmoTksTi_o8f" colab={"base_uri": "https://localhost:8080/"} outputId="3efd8aa1-dc89-41cf-a49f-2d6d93ba2271" Xray_model.summary() # + [markdown] id="0yOXYYOj6irM" # ## Predictions on custom data # + id="yVPPVY0U6oBw" # function to import image and resize def load_and_prep_image(filename, img_shape=224): """ Reads an image from filename, turns it into a tensor and reshapes it to (img_shape, img_shape,, color_channels) """ # Read in the image img = tf.io.read_file(filename) # Decode the read file into a tensor img = tf.image.decode_image(img) # Resize the image img = tf.image.resize(img, size=IMAGE_SHAPE) #Grayscale img = tf.image.grayscale_to_rgb(img) # Rescale the image (getting all values between 0 & 1) img = img/255 return img # + id="yahfWjhR6rFL" def pred_and_plot(model, filename, class_names=["NORMAL","PNEUMONIA"]): """ Imports an image located at filename, makes a prediction with model and plots the image with the predicted class as the title. """ # Import target image and preprocess it img = load_and_prep_image(filename) # Make a prediction pred = model.predict(tf.expand_dims(img, axis=0)) # pred = model.predict(tf.squeeze(img)) # Get the predicted class pred_class = class_names[int(tf.round(pred))] # Plot image and predicted class plt.imshow(img) plt.title(f"Prediction : {pred_class}") plt.axis(False); # + id="ANpCHF3Y63M7" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="bdc98ce6-6a34-41f3-f961-a47ac6973669" pred_and_plot(Xray_model,"/content/norm1.jpeg" ) # + id="wpxxA3yN9Q91" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="739661e2-dd96-4ab2-fe6a-899ddfdf4e99" pred_and_plot(Xray_model,"/content/pneu1.jpeg" ) # + id="jEOdmr8hD9Zc" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="2fe7e1a2-9e97-4a67-90b1-287d39d857fd" pred_and_plot(Xray_model,"/content/norm2.jpeg" ) # + id="Vpkcm3gWD935" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="0ecf80bb-e69f-4db4-cdd4-1f48654a9809" pred_and_plot(Xray_model,"/content/pneu2.jpeg" ) # + id="QPFDEljLEGtG" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Keyword Extraction from rake_nltk import Rake r = Rake() text = "hi my name is . i am a software engineer." 
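# RAKE (Rapid Automatic Keyword Extraction) scores candidate phrases by word
# frequency and co-occurrence; extract_keywords_from_text() builds the ranking
# and get_ranked_phrases() returns phrases from highest to lowest score.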
r.extract_keywords_from_text(text) r.get_ranked_phrases()[0:5] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt x=np.arange(11) fig,axis=plt.subplots(nrows=2,ncols=1,sharex=True) axis[0].plot(x,0.7**x,'o-') axis[1].plot(x,(-0.7)**x,'o-') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip install pandas_profiling import pandas_profiling as pp # + from sqlalchemy import create_engine from IPython.display import display_html import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline # - import warnings import pandas as pd import numpy as np # + postgres_user = 'dsbc_student' postgres_pw = '' postgres_host = '172.16.58.3' postgres_port = '5432' postgres_db = 'houseprices' engine = create_engine('postgresql://{}:{}@{}:{}/{}'.format( postgres_user, postgres_pw, postgres_host, postgres_port, postgres_db)) df = pd.read_sql_query('select * from houseprices',con=engine) # No need for an open connection, because only doing a single query engine.dispose() df.head(10) # - pp.ProfileReport(df) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # [Advent of Code 2020: Day 12](https://adventofcode.com/2020/day/12) # ## --- Day 12: Rain Risk --- # # Your ferry made decent progress toward the island, but the storm came in faster than anyone expected. The ferry needs to take **evasive actions**! # # Unfortunately, the ship's navigation computer seems to be malfunctioning; rather than giving a route directly to safety, it produced extremely circuitous instructions. When the captain uses the [PA system](https://en.wikipedia.org/wiki/Public_address_system) to ask if anyone can help, you quickly volunteer. # # The navigation instructions (your puzzle input) consists of a sequence of single-character **actions** paired with integer input **values**. After staring at them for a few minutes, you work out what they probably mean: # # * Action **`N`** means to move **north** by the given value. # * Action **`S`** means to move **south** by the given value. # * Action **`E`** means to move **east** by the given value. # * Action **`W`** means to move **west** by the given value. # * Action **`L`** means to turn **left** the given number of degrees. # * Action **`R`** means to turn **right** the given number of degrees. # * Action **`F`** means to move **forward** by the given value in the direction the ship is currently facing. # # The ship starts by facing **east**. Only the `L` and `R` actions change the direction the ship is facing. (That is, if the ship is facing east and the next instruction is `N10`, the ship would move north 10 units, but would still move east if the following action were `F`.) # # For example: # # ``` # F10 # N3 # F7 # R90 # F11 # # ``` # # These instructions would be handled as follows: # # * `F10` would move the ship 10 units east (because the ship starts by facing east) to **east 10, north 0**. # * `N3` would move the ship 3 units north to **east 10, north 3**. 
# * `F7` would move the ship another 7 units east (because the ship is still facing east) to **east 17, north 3**. # * `R90` would cause the ship to turn right by 90 degrees and face **south**; it remains at **east 17, north 3**. # * `F11` would move the ship 11 units south to **east 17, south 8**. # # At the end of these instructions, the ship's [Manhattan distance](https://en.wikipedia.org/wiki/Manhattan_distance) (sum of the absolute values of its east/west position and its north/south position) from its starting position is `17 + 8` = **`25`**. # # Figure out where the navigation instructions lead. **What is the Manhattan distance between that location and the ship's starting position?** # + import unittest import re from IPython.display import Markdown, display from aoc_puzzle import AocPuzzle class FerryNav(AocPuzzle): def parse_data(self, raw_data): data_lines = raw_data.split('\n') self.data = [] for line in data_lines: m = re.match('(\w)(\d+)', line) self.data.append((m.group(1), int(m.group(2)))) self.NORTH = 'N' self.SOUTH = 'S' self.EAST = 'E' self.WEST = 'W' self.TURN_LEFT = 'L' self.TURN_RIGHT = 'R' self.GO_FORWARD = 'F' self.HEADING_LIST = ['N','E','S','W'] self.DEGREES_PER_HEADING = 90 self.start_pos = (0,0) self.pos = self.start_pos self.heading = self.EAST def change_heading(self, val): start_index = self.HEADING_LIST.index(self.heading) hchange = val // self.DEGREES_PER_HEADING hindex = (start_index + hchange) % len(self.HEADING_LIST) self.heading = self.HEADING_LIST[hindex] def do_action(self, action): move, val = action if move == self.GO_FORWARD: move = self.heading lat, lon = self.pos if move == self.NORTH: lon += val elif move == self.SOUTH: lon -= val elif move == self.EAST: lat += val elif move == self.WEST: lat -= val elif move == self.TURN_LEFT: self.change_heading(-val) elif move == self.TURN_RIGHT: self.change_heading(val) else: raise f'Unknown Action: {action}' self.pos = (lat,lon) def run(self, output=False, debug=False): self.debug = debug for action in self.data: if debug: print(f'Action: {action}') self.do_action(action) if debug: print(f'Pos: {self.pos}\n') lat, lon = self.pos result = abs(lat) + abs(lon) if output: display(Markdown(f'### Manhattan distance traveled: `{result}`')) return result class TestBasic(unittest.TestCase): def test_parse_data(self): in_data = 'F10\nN3\nF7\nR90\nF11' exp_out = [('F',10),('N',3),('F',7),('R',90),('F',11)] fn = FerryNav(in_data) self.assertEqual(fn.data, exp_out) def test_ferry_nav(self): in_data = 'F10\nN3\nF7\nR90\nF11' exp_out = 25 fn = FerryNav(in_data) self.assertEqual(fn.run(debug=True), exp_out) unittest.main(argv=[""], exit=False) # - fn = FerryNav("input/d12.txt") fn.run(output=True) # ## --- Part Two --- # # Before you can give the destination to the captain, you realize that the actual action meanings were printed on the back of the instructions the whole time. # # Almost all of the actions indicate how to move a **waypoint** which is relative to the ship's position: # # * Action **`N`** means to move the waypoint **north** by the given value. # * Action **`S`** means to move the waypoint **south** by the given value. # * Action **`E`** means to move the waypoint **east** by the given value. # * Action **`W`** means to move the waypoint **west** by the given value. # * Action **`L`** means to rotate the waypoint around the ship **left** (**counter-clockwise**) the given number of degrees. # * Action **`R`** means to rotate the waypoint around the ship **right** (**clockwise**) the given number of degrees. 
# * Action **`F`** means to move **forward** to the waypoint a number of times equal to the given value. # # The waypoint starts **10 units east and 1 unit north** relative to the ship. The waypoint is relative to the ship; that is, if the ship moves, the waypoint moves with it. # # For example, using the same instructions as above: # # * `F10` moves the ship to the waypoint 10 times (a total of **100 units east and 10 units north**), leaving the ship at **east 100, north 10**. The waypoint stays 10 units east and 1 unit north of the ship. # * `N3` moves the waypoint 3 units north to **10 units east and 4 units north of the ship**. The ship remains at **east 100, north 10**. # * `F7` moves the ship to the waypoint 7 times (a total of **70 units east and 28 units north**), leaving the ship at **east 170, north 38**. The waypoint stays 10 units east and 4 units north of the ship. # * `R90` rotates the waypoint around the ship clockwise 90 degrees, moving it to **4 units east and 10 units south of the ship**. The ship remains at **east 170, north 38**. # * `F11` moves the ship to the waypoint 11 times (a total of **44 units east and 110 units south**), leaving the ship at **east 214, south 72**. The waypoint stays 4 units east and 10 units south of the ship. # # After these operations, the ship's Manhattan distance from its starting position is `214 + 72` = **`286`**. # # Figure out where the navigation instructions actually lead. **What is the Manhattan distance between that location and the ship's starting position?** # + class FerryNav2(FerryNav): waypoint = (10,1) def do_move(self, mag): lat, lon = self.pos wp_lat, wp_lon = self.waypoint lat += wp_lat * mag lon += wp_lon * mag self.pos = (lat, lon) def rotate_waypoint(self, val): lat, lon = self.waypoint hchange = abs(val) // self.DEGREES_PER_HEADING if val > 0: for _ in range(hchange): lat, lon = lon, -lat else: for _ in range(hchange): lat, lon = -lon, lat return (lat, lon) def do_action(self, action): move, val = action if move == self.GO_FORWARD: self.do_move(val) return lat, lon = self.waypoint if move == self.NORTH: lon += val elif move == self.SOUTH: lon -= val elif move == self.EAST: lat += val elif move == self.WEST: lat -= val elif move == self.TURN_LEFT: lat, lon = self.rotate_waypoint(-val) elif move == self.TURN_RIGHT: lat, lon = self.rotate_waypoint(val) else: raise f'Unknown Action: {action}' self.waypoint = (lat,lon) if self.debug: print(f'Waypoint: {self.waypoint}') class TestBasic(unittest.TestCase): def test_ferry_nav2(self): in_data = 'F10\nN3\nF7\nR90\nF11' exp_out = 286 fn = FerryNav2(in_data) self.assertEqual(fn.run(debug=True), exp_out) unittest.main(argv=[""], exit=False) # - fn = FerryNav2("input/d12.txt") fn.run(output=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Spark README # # Apache Spark™ is a general engine for cluster scale computing. It provides API's for multiple languages including Python, Scala, and SQL. # # This notebook shows how to run Spark in both local and yarn-client modes within TAP, as well as using Spark Submit. 
# # Several [Spark examples](/tree/examples/spark) are included with TAP and [others](http://spark.apache.org/examples.html) are available on the [Spark website](http://spark.apache.org/) # # See the [PySpark API documentation](http://spark.apache.org/docs/latest/api/python/) for more information on the API's below. # # ## Supported Modes # # Currently the YARN scheduler is supported on TAP and Spark jobs can be ran in three different modes: # # Mode | Good for Big Data | Supports Interactive Sessions | Supports Batch Jobs | Runtime | Use With | Best For # ---------- | --- | -- | --- | --- | --------------- | ----------------- | ------------------------------ # **Local mode** | No | Yes | Yes | Both driver and workers run locally | pyspark, spark-shell, spark-submit | Fast small scale testing in an interactive shell or batch. Best mode to start with if you are new to Spark. # **Yarn Client** | Yes | Yes | Yes | Driver runs locally and workers run in cluster | pyspark, spark-shell, spark-submit | Big data in an interactive shell. # **Yarn Cluster** | Yes | No | Yes | Both driver and workers run in cluster | spark-submit | Big data batch jobs. # # More information is avaialable in the [Spark Documentation](http://spark.apache.org/docs/latest/) # ## Create a SparkContext in local mode # # In local mode no cluster resources are used. It is easy to setup and is good for small scale testing. # + import pyspark # Create a SparkContext in local mode sc = pyspark.SparkContext("local") # - # Test the context is working by creating an RDD and performing a simple operation rdd = sc.parallelize(range(10)) print rdd.count() # Find out ore information about your SparkContext print 'Python Version: ' + sc.pythonVer print 'Spark Version: ' + sc.version print 'Spark Master: ' + sc.master print 'Spark Home: ' + str(sc.sparkHome) print 'Spark User: ' + str(sc.sparkUser()) print 'Application Name: ' + sc.appName print 'Application Id: ' + sc.applicationId # Stop the context when you are done with it. When you stop the SparkContext resources # are released and no further operations can be performed within that context. sc.stop() # Please restart the Kernel to switch to yarn-client mode # This is only needed if you already ran with local mode in same session # The Kernel can be restarted via the menus above or with the following code: import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) # ## Create a SparkContext in yarn-client mode # # In yarn-client mode, a Spark job is launched in the cluster. This is needed to work with big data. # + import pyspark # create a configuration conf = pyspark.SparkConf() # set the master to "yarn-client" conf.setMaster("yarn-client") # set other options as desired conf.set("spark.yarn.am.memory", "512mb") conf.set("spark.executor.memory", "1g") # create the context sc = pyspark.SparkContext(conf=conf) # - # Test the context is working by creating an RDD and performing a simple operation rdd = sc.parallelize(range(10)) print rdd.count() # Find out ore information about your SparkContext print 'Python Version: ' + sc.pythonVer print 'Spark Version: ' + sc.version print 'Spark Master: ' + sc.master print 'Spark Home: ' + str(sc.sparkHome) print 'Spark User: ' + str(sc.sparkUser()) print 'Application Name: ' + sc.appName print 'Application Id: ' + sc.applicationId # Stop the context when you are done with it. When you stop the SparkContext resources # are released and no further operations can be performed within that context. 
sc.stop() # ## Using Spark Submit # # It is possible to upload jars via Jupyter and use Spark Submit to run them. Jars can be uploaded by accessing the [Jupyter dashboard](/tree) and clicking the "Upload" button # Call spark-submit to run the SparkPi example that ships with Spark. # We didn't need to upload this jar because it is already loaded on the system. # !spark-submit --class org.apache.spark.examples.SparkPi \ # --master local \ # /usr/local/spark/lib/spark-examples-*.jar \ # 10 # Alternatively, you can access the [Jupyter dashboard](/tree) and then choose "New -> Terminal" to run spark-submit at the command line. # ## Using the Scala spark-shell # # Access the [Jupyter dashboard](/tree) and then choose "New -> Terminal" to open a terminal Window. # # In the terminal window type: # # ```bash # spark-shell --master local # ``` # # Wait for the prompt and then type a simple Spark program to verify it is working: # # ```scala # // create an RDD and perform count # val rdd = sc.parallelize(1 to 10) # rdd.count() # # // exit when you are done # exit() # ``` # # ## Viewing/Modifying Spark Configuration # # Spark configuration can be modified using SparkConf, as in the example above. # # Additionally, default configuration can be viewed and modified in a terminal (access the [Jupyter dashboard](/tree) and then choose "New -> Terminal") # # Default Spark configuration is stored in /etc/spark/conf, important files include: # # - spark-defaults.conf - default properties # - log4j.properties - logging configuration # # Additional Hadoop settings are in /etc/hadoop/conf, important files include: # # - core-site.xml - Core Hadoop Configuration # - hdfs-site.xml - HDFS Configuration # - yarn-site.xml - YARN Configuration # # These settings are automatically downloaded from Cloudera Manager when provisioning a new Jupyter instance. # # View the [Spark Configuration](http://spark.apache.org/docs/latest/configuration.html) documentation for more information. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:pytorch] * # language: python # name: conda-env-pytorch-py # --- # # NvTK workflow on the synthetic dataset # # 1. Initialized a baseline CNN model in NvTK, with filters numbers of 20, filter size of 30, and pooling size of 30 in the first convolutional layer for capturing the full motifs. # 2. Trained and evaluated the model performance with AUROC and AUPR metrics. # 3. Interpreted the sequence pattern deep learning represented internally for its accurate prediction. # 4. Applied NvTK gradient-based methods to query the filter contribution to different motif classifications. 
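#
# *Note on step 4 (hedged): the gradient-based cell near the end of this notebook calls `deep_explain_layer_conductance` and plots with `sns` (seaborn), neither of which is imported in this excerpt. If re-running, imports along these lines would presumably be needed; the exact NvTK module path is an assumption:*
#
# ```python
# from NvTK.Explainer import deep_explain_layer_conductance  # module path assumed
# import seaborn as sns
# ```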
import sys sys.path.append("../NvTK/") print(sys.path) # + import h5py, os, argparse, logging, time import numpy as np import pandas as pd import torch from torch import nn from torch.optim import Adam from torch.utils.data import DataLoader from NvTK import Trainer from NvTK.Model.ConvModel import CNN from NvTK.Evaluator import calculate_roc, calculate_pr from NvTK.Evaluator import show_auc_curve, show_pr_curve from NvTK.Explainer import get_activate_W, meme_generate, save_activate_seqlets from NvTK.Explainer import seq_logo, plot_seq_logo # + os.makedirs("./Log", exist_ok=True) logging.basicConfig(level=logging.INFO, format='%(asctime)s %(filename)s[line:%(lineno)d] %(levelname)s %(message)s', datefmt='%a, %d %b %Y %H:%M:%S', filename=time.strftime('./Log/log_nvtk_minimal.%m%d.%H:%M:%S.txt'), filemode='w') # args parser = argparse.ArgumentParser() parser.add_argument("data") parser.add_argument("--gpu-device", dest="device_id", default="0") args = parser.parse_args(['../Dataset/synthetic_dataset_simple.h5', '--gpu-device', '1']) logging.info(args) # - ## change device os.environ["CUDA_VISIBLE_DEVICES"] = args.device_id device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # ## Prepare dataset # # 1. unpack the h5file datasets # 2. generate the DataLoader # + # unpack datasets h5file = h5py.File(args.data, 'r') anno = h5file["annotation"][:] x_train = h5file["train_data"][:].astype(np.float32) y_train = h5file["train_label"][:].astype(np.float32) x_val = h5file["val_data"][:].astype(np.float32) y_val = h5file["val_label"][:].astype(np.float32) x_test = h5file["test_data"][:].astype(np.float32) y_test = h5file["test_label"][:].astype(np.float32) h5file.close() # unpack anno n_tasks = anno.shape[0] task_name = anno[:,0] # define data loader batch_size = 32 train_loader = DataLoader(list(zip(x_train, y_train)), batch_size=batch_size, shuffle=True, num_workers=2, drop_last=False, pin_memory=True) validate_loader = DataLoader(list(zip(x_val, y_val)), batch_size=batch_size, shuffle=False, num_workers=2, drop_last=False, pin_memory=True) test_loader = DataLoader(list(zip(x_test, y_test)), batch_size=batch_size, shuffle=False, num_workers=2, drop_last=False, pin_memory=True) # - # ## Define model # # Initialized a baseline CNN model in NvTK, with filters numbers of 20, filter size of 30, and pooling size of 30 in the first convolutional layer for capturing the full motifs. 
# define model model = CNN(output_size=n_tasks, out_planes=20, kernel_size=30, conv_args={}, bn=False, pool_args={'kernel_size': 30}, tasktype='classification') model # + optimizer = Adam(model.parameters(), lr=1e-4) criterion = nn.BCELoss().to(device) trainer = Trainer(model, criterion, optimizer, device, tasktype='binary_classification', use_tensorbord=False, tensorbord_args={}) # - # ## Trained the model # train trainer.train_until_converge(train_loader, validate_loader, test_loader, EPOCH=500) # !tensorboard --logdir=runs --bind_all # ## Evaluated the model¶ # + # model = torch.load("./Log/best_model.p")#trainer.load_best_model() # model.load_state_dict(torch.load("./Log/best_model@0311_19:41:19.params.pth")) # model.eval() # trainer.model = model # + # predict test-set _, _, test_predictions, test_targets = trainer.predict(test_loader) # metric test-set fpr, tpr, roc_auc = calculate_roc(test_targets, test_predictions) auroc = [roc_auc[k] for k in roc_auc.keys() if k not in ["macro", "micro"]] # dict keys ordered by default in py3.7+ p, r, average_precision = calculate_pr(test_targets, test_predictions) aupr = [average_precision[k] for k in average_precision.keys() if k not in ["macro", "micro"]] # dict keys ordered by default in py3.7+ pd.DataFrame({"auroc":auroc, "aupr":aupr}, index=anno[:,0]).to_csv("Metric-simple.csv") pd.DataFrame({"auroc":auroc, "aupr":aupr}, index=anno[:,0]) # - show_auc_curve(fpr=fpr, tpr=tpr, roc_auc=roc_auc, save=False, fig_size=(5, 4)) show_pr_curve(precision=p, recall=r, average_precision=average_precision, save=False, fig_size=(5, 4)) # ## Model Interpretation # The sequence pattern deep learning represented internally for its accurate prediction. # + # explain W = get_activate_W(model, model.Embedding.conv, test_loader, motif_width=30) meme_generate(W, output_file='meme-simple.txt', prefix='Filter_') save_activate_seqlets(model, model.Embedding.conv, test_loader, threshold=0.999, out_fname='seqlets.fasta', motif_width=30) # + # %%time import matplotlib.pyplot as plt from NvTK.Explainer import normalize_pwm save_path = "./Motifs" fig = plt.figure(figsize = (16, 20)) for j in range(len(W)): plt.subplot(10, 2, j+1) logo = seq_logo(W[j], height=100, nt_width=50, norm=0, alphabet='dna') plot_seq_logo(logo, nt_width=20, step_multiple=4) plt.xticks([]) plt.yticks([]) plt.ylabel("Filter_"+str(j), fontsize=15) # plt.subplot(8, 4, j*2+2) # logo = seq_logo(W[j][:,::-1][::-1,:], height=100, nt_width=50, norm=0, alphabet='dna') # plot_seq_logo(logo, nt_width=20, step_multiple=4) # plt.xticks([]) # plt.yticks([]) fig.savefig("Filters-simple-init.pdf", format='pdf', dpi=300, bbox_inches='tight') fig.show() # - # ## Gradient-based filter contribution # Applied NvTK gradient-based methods to query the filter contribution to different motif classifications. # + # %%time imp = [] for tensor,_ in test_loader: tensor = tensor.to(device) conductance = deep_explain_layer_conductance(model, model.Embedding.conv, tensor, len(anno[:,0])) imp.append(conductance) # plot_label_neuron_importance(model, model.Embedding.conv, tensor, anno[:,0]) # - df = pd.DataFrame(np.hstack(imp).max(-1).mean(1), index=anno[:,0]) plt.figure(figsize=(20,3)) ax = sns.heatmap(df, cmap="Greys") plt.savefig("label_neuron_importance.pdf") plt.show() plt.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### 1. Drop # # ### 2. 
Loc
#
# ### 3. iLoc

import pandas as pd
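# *(Hedged aside)* Headings 2 and 3 above (`Loc`, `iLoc`) are not followed by code here; a minimal sketch of label-based `.loc` and position-based `.iloc` selection on a small hypothetical frame:

_demo = pd.DataFrame({"year": [1970, 1971], "income": [3399.3, 3768.3]})
_demo.loc[0, "income"]    # label-based: row label 0, column "income"
_demo.iloc[0, 1]          # position-based: first row, second column

# Drop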
    df = pd.read_csv("C:/Users/deepusuresh/Documents/Data Science/05. ML/03. To GitHub/1. Linear Regression/1. Single Variable/2. Project 02_Canada Income_Single Variable/canada_per_capita_income.csv") df.head() new_df = df.drop('per_capita_income',axis='columns') new_df.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Plotting a pressure versus latitude contour: panels # # We have plotted pressure versus latitude in previous examples. We can also do this using panels for different quantities or experiments. Let's import obrero: # + # small hack to be able to import module without install import os import sys sys.path.append(os.getcwd() + '/../') import obrero import obrero.plot as oplot # - # And we read some data: # + # file name f1 = 'data/ctl_winds.nc' # read as data array da = obrero.read_nc(f1, 'ua') db = obrero.read_nc(f1, 'va') # - # Then we get the zonal means: zm1 = obrero.get_zonal_means(da, time_mean=True) zm2 = obrero.get_zonal_means(db, time_mean=True) # And now we use the function `panel_pressure_latitude()` in the `obrero.plot` module. Keywords are very similar to those used in other panel plots in obrero: # + # data list dlist = [zm1, zm2] # specifications and list spec1 = dict(minv=-30, maxv=30, nlevels=11, extend='both', cm='coolwarm', cbstring=r'Zonal wind (m s$^{-1}$)', title='CTL', xlim=[-85, 85]) spec2 = dict(minv=-1, maxv=1, nlevels=11, extend='both', cm='coolwarm', cbstring=r'Meridional wind (m s$^{-1}$)', title='CTL', xlim=[-85, 85]) slist = [spec1, spec2] # plot # %matplotlib inline fig = oplot.panel_pressure_latitude(dlist, slist) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="b49ByxJ0oz8c" colab_type="code" colab={} #importing pandas #importing os as to work with local files import pandas as pd import os #defining empty list frames = [] #loading in all datasets #datasets are named in such a way to be loaded in like this #this current path as 619 seperate csv files to be read in for i in range(1,2): df = pd.read_csv(r"C:\Users\tyler\Desktop\all_data\z (" + str(i) + ").csv" ) frames.append(df) # + id="lo00ExNEoz8g" colab_type="code" colab={} #merging all of the datasets together #merged datasets depend on ones read into frames df = pd.concat(frames, sort=True) # + id="1IL8sfAvoz8j" colab_type="code" colab={} outputId="80f286c7-01d4-4f04-e2c8-63eec923cfec" # testing for duplicate rows in the dataframe # Select duplicate rows except first occurrence based on all columns duplicateRowsDF = df[df.duplicated(subset=None, keep='first')] #prints out duplacted rows if any print("Duplicate Rows except first occurrence based on all columns are :") print(duplicateRowsDF) # + id="1eiNjHkloz8n" colab_type="code" colab={} outputId="084e7ac9-940e-4292-ce34-369fc8765f21" #checking df shape df.shape # + id="MHPy0OJvoz8q" colab_type="code" colab={} outputId="5269dcbd-3b8f-47a1-907f-763645f895bb" df.dtypes # + id="JbfIW_K8oz8s" colab_type="code" colab={} outputId="f62b44df-b9c8-472f-d7c5-c78638eaa30f" #looking at df pd.set_option('display.max_columns', None) df.head() # + id="ye1Kzljmoz8v" colab_type="code" colab={} 
outputId="540fbc5e-499f-4f6c-c387-4a8f3462bd2c" #print(df['urls'][0]) print(df['profile'][0]) print(df['location'][2]) #print(df['creator'][0]) # + id="A_zFiYSDoz8x" colab_type="code" colab={} outputId="63bd7365-a13c-4524-972a-5c8de977c7f8" df.isnull().sum() # + id="Hbg8chbooz80" colab_type="code" colab={} df.dropna(subset=['location'], inplace=True) # + id="C2YtdQtyoz82" colab_type="code" colab={} # + id="nGyz3-hIoz84" colab_type="code" colab={} #creating a function that converts strings to dictionaries import json def cat_to_dic(x): return json.loads(x) import ast def loc_to_dic(x): return ast.literal_eval(str(x)) df['category_dic'] = df['category'].apply(cat_to_dic) df['location_dic'] = df['location'].apply(cat_to_dic) # + id="1XAl-i6Voz87" colab_type="code" colab={} outputId="0642c987-89ea-49c0-d625-949f0df5a3a9" df.head() # + id="KTfG1q8Hoz8-" colab_type="code" colab={} # + id="rFY3YKADoz9A" colab_type="code" colab={} def cat_to_id(x): return x['id'] def cat_to_name(x): return x['name'] def cat_to_slug(x): return x['slug'] def cat_to_short_names(x): return x['short_names'] def cat_to_displayable_name(x): return x['displayable_name'] def cat_to_localized_name(x): return x['localized_name'] # + id="UrWv5Ykpoz9C" colab_type="code" colab={} # + id="vRKM8eX0oz9E" colab_type="code" colab={} def cat_to_id(x): return x['id'] def cat_to_name(x): return x['name'] def cat_to_slug(x): return x['slug'] def cat_to_position(x): return x['position'] def cat_to_parent_id(x): return x['parent_id'] def cat_to_state(x): return x['state'] def cat_to_type(x): return x['type'] def cat_to_color(x): return x['color'] # + id="giWlJKJaoz9H" colab_type="code" colab={} #Engineering new features df['blurb_len'] = len(df['blurb']) df['category_id'] = df['category_dic'].apply(cat_to_id) df['category_name'] = df['category_dic'].apply(cat_to_name) df['category_slug'] = df['category_dic'].apply(cat_to_slug) df['category_position'] = df['category_dic'].apply(cat_to_position) #df['category_parent_id'] = df['category_dic'].apply(cat_to_parent_id) df['category_color'] = df['category_dic'].apply(cat_to_color) df['is_usa'] = df['country'].str.contains('US') df['is_usd'] = df['currency'].str.contains('USD') df['name_length'] = len(df['name']) df['slug_length'] = len(df['slug']) df['location_id'] = df['location_dic'].apply(cat_to_id) df['location_name'] = df['location_dic'].apply(cat_to_name) #df['location_slug'] = df['location_dic'].apply(cat_to_slug) #df['location_short_names'] = df['location_dic'].apply(cat_to_short_names) #df['location_displayable_name'] = df['location_dic'].apply(cat_to_displayable_name) df['location_state'] = df['location_dic'].apply(cat_to_state) # + id="St9LfqCwoz9J" colab_type="code" colab={} df['location_state'].fillna('test', inplace=True) # + id="WXOUuaduoz9M" colab_type="code" colab={} outputId="f1e13df5-c4aa-4ab8-e157-1510e6e6342f" df['location_state'].isnull().sum() # + id="b1bV6vFtoz9O" colab_type="code" colab={} #roughly seperating df into train, val, and test #train = df[0:2000] #val = df[2000:3000] #test = df[3000:778] # + id="5YKm1H8Roz9S" colab_type="code" colab={} from sklearn.model_selection import train_test_split # Split train into train & test train, test = train_test_split(df, train_size=0.80, test_size=0.20, stratify=df['state'], random_state=42) # Split train into train & val train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify=train['state'], random_state=42) # + id="aqqHrxj1oz9V" colab_type="code" colab={} #bulk imorting libraries to be used in pipeline 
import category_encoders as ce from sklearn.feature_selection import f_regression, SelectKBest from sklearn.impute import SimpleImputer from sklearn.linear_model import Ridge from sklearn.model_selection import cross_val_score from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.ensemble import RandomForestClassifier #setting target as state target = 'state' #setting features as rest of columns minus columns that leak information #about the target features = df.columns.drop([target, 'backers_count', 'converted_pledged_amount', 'fx_rate', 'is_backing', 'is_starrable', 'is_starred', 'permissions', 'pledged', 'profile', 'source_url', 'spotlight', 'state_changed_at', 'static_usd_rate', 'urls', 'usd_pledged', 'friends', 'usd_type', 'unseen_activity_count', 'category_dic', 'creator', 'currency_trailing_code', 'current_currency', 'disable_communication', 'location_dic', 'unread_messages_count',]) #removed temporayily #'unread_messages_count', 'unseen_activity_count', #creating our vars to be used in pipline x_train = train[features] y_train = train[target] x_val = val[features] y_val = val[target] x_test = test[features] # + id="CHoipBWwoz9Y" colab_type="code" colab={} outputId="4e908754-f0cd-4b69-c55d-1e3fc50ad1c6" df['state'].isnull().sum() # + id="ZyXjFsNJoz9a" colab_type="code" colab={} outputId="6f57dc30-3e52-459b-a718-fcbda42008d7" #checking subset trains shape (features) x_train.shape # + id="hJjjlVHwoz9e" colab_type="code" colab={} outputId="20c73f93-5d41-4e3c-ba03-b61ca06a2e01" #checking subset trains shape (target) x_val.shape # + id="fbDoFEIfoz9g" colab_type="code" colab={} outputId="373d5385-6ffb-4fed-8503-0e125f5be72b" #finding the a baseline score for data # there are five classes # the code below shows the percentage # of value_counts for each class #using value_counts to find baseline y_train.value_counts(normalize=True) # + id="sE9snHESoz9j" colab_type="code" colab={} outputId="2842931a-bce8-4169-ee8c-91989d7ebb55" #creating the pipline using Random forest classifer #using ordinal encoder to make objtypes numeric #using simple imputer to fill in nans pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median'), RandomForestClassifier(n_estimators=100, n_jobs=1, random_state=0) ) # Fit on train, score on val pipeline.fit(x_train, y_train) # + id="n8Io1z4Noz9l" colab_type="code" colab={} outputId="bc03cc1f-d669-4608-d64c-e91314637b55" #printing out the validation accuracy print('Train Accuracy', pipeline.score(x_train, y_train)) print('Validation Accuracy', pipeline.score(x_val, y_val)) # + id="gP5dG04noz9n" colab_type="code" colab={} outputId="b26bb32c-fa25-408e-adfc-f7fc8ed1c385" #getting test accuracy y_pred = pipeline.predict(x_test) y_test = test[target] print('Test Accuracy', pipeline.score(x_test, y_test)) # + id="_xxiK2nUoz9p" colab_type="code" colab={} outputId="5febf2bd-eaa0-4402-a4ed-768df7e81c8b" #just analysising the data before plotting top features print('X_train shape before encoding', x_train.shape) encoder = pipeline.named_steps['ordinalencoder'] encoded = encoder.transform(x_train) print('X_train shape after encoding', encoded.shape) # + id="qlLpWKUuoz9q" colab_type="code" colab={} outputId="70d41344-95ed-47da-f29e-3de3ff68eb2d" encoded.columns # + id="GgPQnLqGoz9s" colab_type="code" colab={} outputId="53364bbc-66a1-42cc-f849-d1dce2b83fa8" encoded.columns[0] # + id="yeQsDiRMoz9v" colab_type="code" colab={} outputId="cf5e1ebb-d3ea-418f-a710-3020c89e5368" #using matplotlib to plot best features 
#Important top features are based of original model #not the hyperparameter tuned model # %matplotlib inline import matplotlib.pyplot as plt # Get feature importances rf = pipeline.named_steps['randomforestclassifier'] importances = pd.Series(rf.feature_importances_, encoded.columns[0:36]) # Plot top n feature importances n = 20 plt.figure(figsize=(10,n/2)) plt.title(f'Top {n} features') importances.sort_values()[-n:].plot.barh(color='grey'); # + id="SrbcFxs6oz9x" colab_type="code" colab={} # + id="MakF88Eeoz9y" colab_type="code" colab={} #testing feature selection method from lecture # + id="g8f3v9GYoz90" colab_type="code" colab={} transformers = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median') ) # + id="OhR2rhS7oz92" colab_type="code" colab={} outputId="c6c75f9d-61a5-474d-8fe9-a541798fc8e6" #testing out that feature selection tool learned today x_train_transformed = transformers.fit_transform(x_train) x_val_transformed = transformers.transform(x_val) model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) model.fit(x_train_transformed, y_train) # + id="CQ656rC_oz94" colab_type="code" colab={} outputId="3d92e085-3636-44cc-d761-1840d40d4b8f" import eli5 from eli5.sklearn import PermutationImportance # 1. Calculate permutation importances permuter = PermutationImportance( model, scoring='accuracy', n_iter=5, random_state=42 ) permuter.fit(x_val_transformed, y_val) # + id="44rEswwNoz96" colab_type="code" colab={} outputId="4a61c2f4-f9de-492a-99dc-91e2dd9ebf12" feature_names = x_val.columns.tolist() pd.Series(permuter.feature_importances_, feature_names).sort_values() # + id="RgyOSC6xoz98" colab_type="code" colab={} outputId="34b88c9b-ca6d-4271-d10a-b7900df08d1f" # 2. Display permutation importances eli5.show_weights( permuter, top=None, # show permutation importances for all features feature_names=feature_names # must be a list ) # + id="3wKg9_peoz9_" colab_type="code" colab={} outputId="591bad8a-e5f6-41a7-c9a8-754c647928af" print('Shape before removing features:', x_train.shape, x_test.shape) minimum_importance = 0 mask = permuter.feature_importances_ > minimum_importance features = x_train.columns[mask] x_train = x_train[features] features = x_test.columns[mask] x_test = x_test[features] print('Shape after removing features:', x_train.shape, x_test.shape) # + id="LYCw74X9oz-B" colab_type="code" colab={} outputId="7bbba8d0-deba-4754-e04f-867616f63f4d" x_val = x_val[features] pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median'), RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) # Fit on train, score on val pipeline.fit(x_train, y_train) print('Validation Accuracy', pipeline.score(x_val, y_val)) # + id="TtbXjIRToz-D" colab_type="code" colab={} # + id="MvmoI9qKoz-F" colab_type="code" colab={} # + id="tS6Pu__joz-H" colab_type="code" colab={} outputId="075bd55a-5634-4f59-dc12-04e94fa0fb9f" from xgboost import XGBClassifier pipeline2 = make_pipeline( ce.OrdinalEncoder(), XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1) ) pipeline2.fit(x_train, y_train) # + id="B2F4A9kSoz-J" colab_type="code" colab={} outputId="692b1560-5e50-4ec8-d1c1-26b5d2837a63" from sklearn.metrics import accuracy_score y_pred = pipeline2.predict(x_val) print('Validation Accuracy', accuracy_score(y_val, y_pred)) # + id="iLS0o7oVoz-M" colab_type="code" colab={} # + id="GSWqOfD-oz-R" colab_type="code" colab={} # + id="cpWm_0pPoz-T" colab_type="code" colab={} outputId="3327bd43-9c43-4da6-a461-a6827bdfdc33" # fit_transfom on train, 
transform on val encoder = ce.OrdinalEncoder() x_train_encoded = encoder.fit_transform(x_train) x_val_encoded = encoder.transform(x_val) model = XGBClassifier( n_estimators=1000, # <= 1000 trees, depends on early stopping max_depth=7, # try deeper trees because of high cardinality categoricals learning_rate=0.5, # try higher learning rate n_jobs=-1 ) eval_set = [(x_train_encoded, y_train), (x_val_encoded, y_val)] model.fit(x_train_encoded, y_train, eval_set=eval_set, eval_metric='merror', early_stopping_rounds=50) # Stop if the score hasn't improved in 50 rounds # + id="bB0RF6C5oz-V" colab_type="code" colab={} outputId="50508f6d-3b52-4af5-82fd-69d41a0f003f" results = model.evals_result() train_error = results['validation_0']['merror'] val_error = results['validation_1']['merror'] epoch = range(1, len(train_error)+1) plt.plot(epoch, train_error, label='Train') plt.plot(epoch, val_error, label='Validation') plt.ylabel('Classification Error') plt.xlabel('Model Complexity (n_estimators)') #plt.ylim((0.18, 0.22)) # Zoom in plt.legend(); # + id="BZG6YRe-oz-X" colab_type="code" colab={} outputId="1c5985b0-40db-4478-abd8-d494feb31ea1" ''' So with about 6000(2files) rows the accuracy was about 62% to about 67% with original train test val with 6000(2 files) rows with train test val v2 we had as high as 72% when 50files accuracies at 79% on 74%when using xgboost 100files accuracies where at 87% and 74%when using xgboost ''' # + id="vHexr1_xoz-Z" colab_type="code" colab={} outputId="bdafa280-9e35-4169-ce75-44bc170843b9" ''' 2files using the newly created location features we jumped to 73%-74% 50files had about 80% using xgboost had 74%. I think somthing is wrong with xgboost 100files had about 89% accuracy 300files 97.8% accuracy. data is powerful ''' # + id="TInficGDoz-b" colab_type="code" colab={} # + id="jpRqt7T3oz-d" colab_type="code" colab={} outputId="5b34ec65-7c70-4a30-b829-d3f299d1af8d" df[features].isnull().sum() # + id="au_sgLxyoz-f" colab_type="code" colab={} outputId="785baf97-f44a-4a82-ed4d-79177e81ab8c" #creating a partial dependance plot for df import category_encoders as ce import seaborn as sns from sklearn.ensemble import RandomForestClassifier X = df[features] y = df[target] encoder = ce.OrdinalEncoder() X_encoded = encoder.fit_transform(X) model = RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1) model.fit(X_encoded, y) # + id="Ef4QAv3poz-j" colab_type="code" colab={} outputId="95119991-a964-4b45-a15a-754a590e16ca" # Use Pdpbox # %matplotlib inline import matplotlib.pyplot as plt from pdpbox import pdp feature = 'deadline' pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features, feature=feature) pdp.pdp_plot(pdp_dist, feature); # + id="0xCwbkmMoz-l" colab_type="code" colab={} # + id="mg0osOIpoz-n" colab_type="code" colab={} # + id="_7mfJD54oz-o" colab_type="code" colab={} # + id="5MRzymkzoz-q" colab_type="code" colab={} outputId="48f59fa1-3fe2-474a-f29f-e4f0319080b8" x_train.shape # + id="sdiyW8ehoz-r" colab_type="code" colab={} import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline from xgboost import XGBClassifier processor = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(strategy='median') ) X_train_processed = processor.fit_transform(x_train) X_val_processed = processor.transform(x_val) eval_set = [(X_train_processed, y_train), (X_val_processed, y_val)] model = XGBClassifier(n_estimators=1000, n_jobs=-1) model.fit(X_train_processed, y_train, eval_set=eval_set, 
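          # early_stopping_rounds=10 stops boosting once the eval_metric below
          # ('auc', tracked on the eval_set pairs) has not improved for 10 rounds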
eval_metric='auc', early_stopping_rounds=10) # + id="l1PDqnAxoz-t" colab_type="code" colab={} outputId="84db3a2b-507a-4e0a-b750-1d62ed6a2b33" test_test = df[features] row = x_test.iloc[[500]] print(row) # + id="Yv1NL_VCoz-w" colab_type="code" colab={} outputId="c7f4fbf1-e816-4d4d-ec63-8c69bba4a354" x_test.shape # + id="EHFq3xdooz-y" colab_type="code" colab={} outputId="495b1ce9-d001-4605-8505-fa9fa052f0b3" x_train.shape # + id="J6DNBy2goz-0" colab_type="code" colab={} outputId="6fa9acac-0b64-41ee-d41b-92f31fa0cf3e" # STUDY/PRACTICE THIS CELL FOR THE SPRINT CHALLENGE import shap explainer = shap.TreeExplainer(model) row_processed = processor.transform(row) shap_values = explainer.shap_values(row_processed) shap.initjs() shap.force_plot( base_value=explainer.expected_value, shap_values=shap_values, features=row, link='logit' # For classification, this shows predicted probabilities ) # + id="ajq-ZUgcoz-2" colab_type="code" colab={} # + id="4dVLCwlBoz-3" colab_type="code" colab={} # + id="FpJ4onb1oz-5" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Lexcial # - As its name, the Python program (file) was "splited" into *words*. # - Formally, the sentence is that using **parser** breaks a file into *a stream of tokens*, XD. # # ### the *Token* # - Just a bunch of const vals, and the [stdlib - pylangservice](https://docs.python.org/3.7/library/language.html) # - Source code: [```token.py```](https://github.com/python/cpython/blob/3.7/Lib/token.py), [```Token.h```](https://github.com/python/cpython/blob/3.7/Include/token.h) # - Do check the [stdlib](https://docs.python.org/3.7/library/language.html) material! # ### Indentation # # Check this # # ```python # def perm(l): # # This is a LEGIT function! # # Not very nice looking, it works though. XD. # if len(l) <= 1: # return [l] # r = [] # for i in range(len(l)): # s = l[:i] + l[i+1:] # p = perm(s) # for x in p: # r.append(l[i:i+1] + x) # return r # ``` # # This one will **NOT** work # # ```python # def perm(l): # error: first line indented # for i in range(len(l)): # error: not indented # s = l[:i] + l[i+1:] # p = perm(l[:i] + l[i+1:]) # error: unexpected indent # for x in p: # r.append(l[i:i+1] + x) # return r # error: inconsistent dedent # ``` # ### Line joining # # Explicit # # ```python # if 1 == 1 \ # and 2 == 2 \ # and 3 == 3 \ # and 1 + 2 + 3 == 6: # print('You cannot add comments in here') # ``` # Implicit # # ```python # num = { # (1, 2, 3), # ah # [4, 5, 6], # ya # {7, 8, 9} # weee # } # ``` # ### the *underscore* identifiers # # if u're using ```from module import *``` # # ```python # if 'this is the file being imported': # a = 1 # totally fine # _b = 2 # nah, u won't get this one # ``` # # the ```__*``` # - kinda the same but inside the *class* # - plus the 'a bit' different mechanics # # ah, about the ```__*__``` # - see more at [here](https://docs.python.org/3/reference/datamodel.html#specialnames), and [here](https://rszalski.github.io/magicmethods/) # # # ### String stuff # 0x01 - special cases # + isinstance(b'A', bytes) isinstance('A' , str) "\'" # ' r"\'" # \' # - # 0x02 - concat print( "what" # haha " the" # comments here! 
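    # adjacent string literals are joined automatically ("implicit concatenation")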
"fuck" # ya still able to add '+' ) # 0x03 - ```f'{}'``` # + # Just the expr u put into the `eval` (with restrictions) f'{1 + 2}' f'{1 and 0}' greeting = 'hello' f'{greeting}' f'{greeting!r}' f'{greeting:>15}' # nasty nesting, XD f'The {greeting!r} got {float(len(greeting)):10.{len(greeting)}} alphabets.' # + import datetime today = datetime.datetime.today() print( f'{today:%B %d, %Y}' ) # - # ### Number stuff # 0x01 - groupin' # + # cool, huh? 100_0000_0000 # yes 0xdead_BEEF # u sure want this? # yes, here comes the Mr. binary! 0b1111_1111 # and our float num 3.14159_26535 # and our imaginary num 6.666_888j # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Welter issue #5 # ## Predict Teff vs order # ### Part 3- MEASURED values in K-Band *and* optical. # # # Monday, January 4, 2016 # # We have optical values now. import warnings warnings.filterwarnings("ignore") import numpy as np from astropy.io import fits import matplotlib.pyplot as plt % matplotlib inline % config InlineBackend.figure_format = 'retina' import seaborn as sns sns.set_context('notebook') import h5py import pandas as pd # ### We need the meta data about the optical spectral orders # + start_wl = [] end_wl = [] center_wl = [] ord_name = [] for o in range(0, 34+1): f = h5py.File('../data/reduced/optical/LkCa4_ESPaDOnS_eo{:03d}.hdf5'.format(o), 'r') this_wl = (f['wls'][0] + f['wls'][-1])/2.0 start_wl.append(f['wls'][0]) end_wl.append(f['wls'][-1]) center_wl.append(this_wl) ord_name.append('{:03d}'.format(o)) f.close() # - opt_ords = pd.DataFrame({"wl_start":start_wl, "wl_end":end_wl, "wl_center":center_wl, "number":range(0,34+1), "m_val":ord_name}) opt_ords.head() # There is technically no `m_val`, since we don't know the $m$ of the spectrograph. But we will just use the relative order number, $o$, as $m$ for now. # ## Actual data # You can have pandas read the clipboard if you copy the output from the terminal. Saves a step of copying files over, at the expense of reproducibility... 
# ```python # sf_dat = pd.read_clipboard(names=dat_names, sep=r',\s+', squeeze=True) # # sf_dat.to_csv('../data/analysis/run02_by_order.csv', index=False) # ``` dat_names = ['m_val', 'Teff_05p', 'Teff_50p', 'Teff_95p', 'logg_05p', 'logg_50p', 'logg_95p', 'FeH_05p', 'FeH_50p', 'FeH_95p', 'vz_05p', 'vz_50p', 'vz_95p', 'vi_05p', 'vi_50p', 'vi_95p', 'logO_05p', 'logO_50p', 'logO_95p', 'c1_05p', 'c1_50p', 'c1_95p', 'c2_05p', 'c2_50p', 'c2_95p', 'c3_05p', 'c3_50p', 'c3_95p', 'SA_05p', 'SA_50p', 'SA_95p', 'LA_05p', 'LA_50p', 'LA_95p', 'll_05p', 'll_50p', 'll_95p'] # + #sf_dat = pd.read_clipboard(names=dat_names, sep=r',\s+', squeeze=True) #sf_dat.to_csv('../data/analysis/optical_run01_by_order.csv', index=False) # - sf_dat = pd.read_csv('../data/analysis/optical_run01_by_order.csv') sf_dat.head() sf_dat.dropna(inplace=True) sf_dat['m_int'] = sf_dat['m_val'].astype(int) merged = pd.merge(opt_ords, sf_dat, left_on='number', right_on='m_int', how='outer') merged.head() # ## Distribution of Teff merged.Teff_50p.min(), merged.Teff_50p.max() merged.Teff_50p.median(), merged.Teff_50p.std() sns.distplot(merged.Teff_95p[gi].dropna(), hist=False) sns.distplot(merged.Teff_50p[gi].dropna(), rug=True) sns.distplot(merged.Teff_05p[gi].dropna(), hist=False) merged.Teff_50p[gi].dropna().median(), merged.Teff_50p[gi].dropna().std() len(merged.vi_50p.dropna()) len(merged.vi_50p) merged.columns merged[~gi][['vz_50p', 'vi_50p', 'll_50p', 'LA_50p']] gi = merged.vz_50p > 10 sns.distplot(merged.FeH_95p[gi], hist=False) sns.distplot(merged.FeH_50p[gi], rug=True) sns.distplot(merged.FeH_05p[gi], hist=False) merged.FeH_50p[gi].median(), merged.FeH_50p[gi].std() merged.loc[29] merged.vz_50p.dropna() sns.distplot(merged., rug=True) # ## Plot of $T_{eff}$ vs. spectral order orders= opt_ords N_orders = len(orders.wl_start) # + #plt.subplot(211) fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) #ax1.fill_between(tell.wls, tell.trans, y2=1.0, alpha=0.5) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') #text_pos = 500.0 + 20.0*np.arange(N_orders) for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) ax.plot(merged.wl_center, merged.Teff_50p, 'ro') yerr = merged.Teff_95p - merged.Teff_50p ax.errorbar(merged.wl_center, merged.Teff_50p, yerr=yerr, fmt='k.') ax.set_ylim(0, 5000) ax.set_xlim(4000, 10000) ax.set_ylabel('$T_{eff}$ (K)') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') plt.savefig('../document/figures/teff_vs_order_Viz_run01.pdf', bbox_inches='tight') # - # ## Plot of $\log{g}$ vs. 
spectral order # + #plt.subplot(211) fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) #ax1.fill_between(tell.wls, tell.trans, y2=1.0, alpha=0.5) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') #text_pos = 500.0 + 20.0*np.arange(N_orders) for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) log_g_mean = 3.7 lglabel = "$\log{g}$" +" = {} ".format(log_g_mean) ax.plot(orders.wl_center, [3.7]*len(orders.wl_center), 'b:', label=lglabel) ax.plot([10000, 30000], [3.5]*2, 'k--', label='Bounds') ax.plot([10000, 30000], [4.0]*2, 'k--') ax.plot(merged.wl_center, merged.logg_50p, 'ro') yerr1 = merged.logg_50p - merged.logg_05p yerr2 = merged.logg_95p - merged.logg_50p ax.errorbar(merged.wl_center, merged.logg_50p, yerr=[yerr1, yerr2], fmt='k.') ax.set_ylim(3.0, 4.5) ax.set_xlim(4000, 10000) ax.set_ylabel('$\log{g}$') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # - # ## Plot of $[\mathrm{Fe}/\mathrm{H}]$ vs. spectral order # + #plt.subplot(211) fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) #ax1.fill_between(tell.wls, tell.trans, y2=1.0, alpha=0.5) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) feh_mean = 0.0 fehlabel = "$[\mathrm{Fe}/\mathrm{H}]$" +" = {} ".format(feh_mean) ax.plot(orders.wl_center, [feh_mean]*len(orders.wl_center), 'b:', label=fehlabel) ax.plot([10000, 30000], [-0.5]*2, 'k--', label='Bounds') ax.plot([10000, 30000], [0.5]*2, 'k--') ax.plot(merged.wl_center, merged.FeH_50p, 'ro') yerr1 = merged.FeH_50p - merged.FeH_05p yerr2 = merged.FeH_95p - merged.FeH_50p ax.errorbar(merged.wl_center, merged.FeH_50p, yerr=[yerr1, yerr2], fmt='k.') ax.set_ylim(-3.0, 1.0) ax.set_xlim(4000, 10000) ax.set_ylabel('$[\mathrm{Fe}/\mathrm{H}]$') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # - # ## Plot of $RV$ vs. 
spectral order # + #plt.subplot(211) fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) #ax1.fill_between(tell.wls, tell.trans, y2=1.0, alpha=0.5) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) this_mean = 93.0 thislabel = "$v_z$" +" = {} ".format(this_mean) ax.plot([10000, 30000], [this_mean]*2, 'b:', label=thislabel) ax.plot(merged.wl_center, merged.vz_50p, 'ro') yerr1 = merged.vz_50p - merged.vz_05p yerr2 = merged.vz_95p - merged.vz_50p ax.errorbar(merged.wl_center, merged.vz_50p, yerr=[yerr1, yerr2], fmt='k.') ax.set_ylim(0, 30) ax.set_xlim(4000, 10000) ax.set_ylabel('$v_z$') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # - # Vsini # + #plt.subplot(211) fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) #ax1.fill_between(tell.wls, tell.trans, y2=1.0, alpha=0.5) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) this_mean = 30.0 thislabel = "$v\sin{i}$" +" = {} ".format(this_mean) ax.plot([10000, 30000], [this_mean]*2, 'b:', label=thislabel) ax.plot(merged.wl_center, merged.vi_50p, 'ro') yerr1 = merged.vi_50p - merged.vi_05p yerr2 = merged.vi_95p - merged.vi_50p ax.errorbar(merged.wl_center, merged.vi_50p, yerr=[yerr1, yerr2], fmt='k.') ax.set_ylim(20, 50) ax.set_xlim(4000, 10000) ax.set_ylabel('$v \sin{i}$') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # + fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) this_mean = -0.3 this_label = "$c^{(0)}$" +" = {} ".format(this_mean) ax.plot([10000, 30000], [this_mean]*2, 'b:', label=this_label) x = merged.wl_center y = merged.logO_50p y05 = merged.logO_05p y95 = merged.logO_95p ax.plot(x, y, 'ro') yerr1 = y - y05 yerr2 = y95 - y ax.errorbar(x, y, yerr=[yerr1, yerr2], fmt='k.') ax.set_ylim(-0.5, 0.2) ax.set_xlim(4000, 10000) ax.set_ylabel('$c^{(0)}}$') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # - # # Order-by-order calibration parameters # + fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = 
[0.2+0.02*i]*2 ax1.plot(x, y, 'r-') for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) this_mean = -0.0 this_label = "$c^{(1)}$" +" = {} ".format(this_mean) ax.plot([10000, 30000], [this_mean]*2, 'b:', label=this_label) x = merged.wl_center y = merged.c1_50p y05 = merged.c1_05p y95 = merged.c1_95p ax.plot(x, y, 'ro') yerr1 = y - y05 yerr2 = y95 - y ax.errorbar(x, y, yerr=[yerr1, yerr2], fmt='k.') ax.set_ylim(-0.2, 0.2) ax.set_xlim(4000, 10000) ax.set_ylabel('$c^{(1)}}$') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # + fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) this_mean = 0.0 this_label = "$c^{(2)}$" +" = {} ".format(this_mean) ax.plot([10000, 30000], [this_mean]*2, 'b:', label=this_label) x = merged.wl_center y = merged.c2_50p y05 = merged.c2_05p y95 = merged.c2_95p ax.plot(x, y, 'ro') yerr1 = y - y05 yerr2 = y95 - y ax.errorbar(x, y, yerr=[yerr1, yerr2], fmt='k.') ax.set_ylim(-0.2, 0.2) ax.set_xlim(4000, 10000) ax.set_ylabel('$c^{(2)}}$') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # + fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) this_mean = 0.0 this_label = "$c^{(3)}$" +" = {} ".format(this_mean) ax.plot([10000, 30000], [this_mean]*2, 'b:', label=this_label) x = merged.wl_center y = merged.c3_50p y05 = merged.c3_05p y95 = merged.c3_95p ax.plot(x, y, 'ro') yerr1 = y - y05 yerr2 = y95 - y ax.errorbar(x, y, yerr=[yerr1, yerr2], fmt='k.') ax.set_ylim(-0.2, 0.2) ax.set_xlim(4000, 10000) ax.set_ylabel('$c^{(3)}}$') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # + fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) this_mean = 0.4 this_label = "sigAmp" +" = {} ".format(this_mean) ax.plot([10000, 30000], [this_mean]*2, 'b:', label=this_label) x = merged.wl_center y = merged.SA_50p y05 = merged.SA_05p y95 = merged.SA_95p ax.plot(x, y, 'ro') yerr1 = y - y05 yerr2 = y95 - y ax.errorbar(x, y, yerr=[yerr1, yerr2], fmt='k.') 
ax.set_ylim(0.0, 6.0) ax.set_xlim(4000, 10000) ax.set_ylabel('sigAmp') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # + fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) this_mean = -1.8 this_label = "logAmp" +" = {} ".format(this_mean) ax.plot([10000, 30000], [this_mean]*2, 'b:', label=this_label) x = merged.wl_center y = merged.LA_50p y05 = merged.LA_05p y95 = merged.LA_95p ax.plot(x, y, 'ro') yerr1 = y - y05 yerr2 = y95 - y ax.errorbar(x, y, yerr=[yerr1, yerr2], fmt='k.') ax.set_ylim(-3, 0.0) ax.set_xlim(4000, 10000) ax.set_ylabel('logAmp') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # + fig = plt.figure(figsize=(18, 6)) ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2],xticks=[], yticks=[0.0, 0.5, 1]) ax1.set_xlim(4000, 10000) ax1.set_ylim(-0.2, 1) ax1.set_ylabel('$\oplus$ trans.') for i in range(N_orders): x = [orders.wl_start[i], orders.wl_end[i]] y = [0.2+0.02*i]*2 ax1.plot(x, y, 'r-') for i in range(N_orders): if (orders.number.values[i] % 5) == 0: ax1.text(orders.wl_center[i], -0.13, '{}'.format(orders.m_val.values[i])) ax = fig.add_axes([0.35, 0.1, 0.45, 0.6]) this_mean = 25 this_label = "$l$" +" = {} ".format(this_mean) ax.plot([10000, 30000], [this_mean]*2, 'b:', label=this_label) x = merged.wl_center y = merged.ll_50p y05 = merged.ll_05p y95 = merged.ll_95p ax.plot(x, y, 'ro') yerr1 = y - y05 yerr2 = y95 - y ax.errorbar(x, y, yerr=[yerr1, yerr2], fmt='k.') ax.set_ylim(0, 60) ax.set_xlim(4000, 10000) ax.set_ylabel('$l$') ax.set_xlabel('$\lambda \,(\AA $)') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # - # Par-par plot # + #plt.subplot(211) fig = plt.figure(figsize=(18, 6)) ax = fig.add_axes([0.35, 0.1, 0.45, 0.8]) ax.plot(merged.FeH_50p, merged.Teff_50p, 'ro') yerr1 = merged.Teff_50p - merged.Teff_05p yerr2 = merged.Teff_95p - merged.Teff_50p xerr1 = merged.FeH_50p - merged.FeH_05p xerr2 = merged.FeH_95p - merged.FeH_50p ax.errorbar(merged.FeH_50p, merged.Teff_50p, yerr=[yerr1, yerr2], xerr=[xerr1, xerr2], fmt='k.') ax.set_xlim(-0.5, 0.5) ax.set_ylim(3000, 4500) ax.set_ylabel('$T_{eff}$') ax.set_xlabel('[Fe/H]') ax.legend(loc='best') #plt.savefig('../document/figures/logg_vs_order.pdf', bbox_inches='tight') # - # # Save the merged DataFrame merged.to_csv('../data/analysis/IGRINS_ESPaDOnS_run01_last10kMCMC.csv', index=False) # The end for now. 
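# One refactoring note: the per-parameter figures above all repeat the same two-panel layout. A helper along these lines (a sketch reusing the `merged` and `opt_ords` frames defined above; the reference lines and axis limits still have to be passed per parameter) would avoid the copy-pasted blocks:

# +
def plot_param_vs_order(merged, orders, param, ylabel, ylim, ref_value=None):
    """Two-panel figure: order-coverage strip on top, median of `param` vs.
    wavelength with asymmetric 5th/95th-percentile error bars below."""
    fig = plt.figure(figsize=(18, 6))

    # top strip: wavelength coverage of each spectral order
    ax1 = fig.add_axes([0.35, 0.7, 0.45, 0.2], xticks=[], yticks=[0.0, 0.5, 1])
    ax1.set_xlim(4000, 10000)
    ax1.set_ylim(-0.2, 1)
    for i in range(len(orders)):
        ax1.plot([orders.wl_start[i], orders.wl_end[i]], [0.2 + 0.02 * i] * 2, 'r-')

    # main panel: 50th percentile with (50th - 5th, 95th - 50th) error bars
    ax = fig.add_axes([0.35, 0.1, 0.45, 0.6])
    y = merged[param + '_50p']
    yerr = [y - merged[param + '_05p'], merged[param + '_95p'] - y]
    ax.errorbar(merged.wl_center, y, yerr=yerr, fmt='k.')
    ax.plot(merged.wl_center, y, 'ro')
    if ref_value is not None:
        ax.axhline(ref_value, color='b', ls=':', label='{} = {}'.format(ylabel, ref_value))
        ax.legend(loc='best')
    ax.set_xlim(4000, 10000)
    ax.set_ylim(*ylim)
    ax.set_ylabel(ylabel)
    ax.set_xlabel(r'$\lambda \,(\AA)$')
    return fig

# e.g. a figure equivalent to the log g panel above:
# plot_param_vs_order(merged, opt_ords, 'logg', r'$\log{g}$', (3.0, 4.5), ref_value=3.7)
# -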
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.7 64-bit # name: python3 # --- # # `RELIANCE - NSE Stock Data` # # The file contains RELIANCE - NSE Stock Data from 1-Jan-16 to 6-May-21 # # The data can be used to forecast the stock prices of the future # # Its a timeseries data from the national stock exchange of India # # || `Variable` | `Significance` | # | ------------- |:-------------:|:-------------:| # |1.|Symbol|Symbol of the listed stock on NSE| # |2.|Series|To which series does the stock belong (Equity, Options Future)| # |3.|Date|Date of the trade| # |4.|Prev Close|Previous day closing value of the stock| # |5.|Open Price|Current Day opening price of the stock| # |6.|High Price|Highest price touched by the stock in current day `(Target Variable)`| # |7.|Low Price|lowest price touched by the stock in current day| # |8.|Last Price|The price at which last trade occured in current day| # |9.|Close Price|Current day closing price of the stock| # |10.|Average Price|Average price of the day| # |11.|Total Traded Quantity|Total number of stocks traded in current day| # |12.|Turnover|| # |13.|No. of Trades|Current day's total number of trades| # |14.|Deliverabel Quantity|Current day deliveable quantity to the traders| # |15.|% Dly Qt to Traded Qty|`(Deliverable Quantity/Total Traded Quantity)*100`| # + import pandas as pd data_path="./data/RILO - Copy.csv" data=pd.read_csv(data_path) data # - # Renaming the columns to have snake_case naming style. (Just as a convention and for convenience) data.columns=["_".join(column.lower().split()) for column in data.columns] data.columns # Using `.describe()` on an entire DataFrame we can get a summary of the distribution of continuous variables: data.describe() # Checking for null values data.isnull().sum() # ### As shown above, we do not have any null values in our dataset. Now we can focus on feature selection and model building. # By using the correlation method `.corr()` we can get the relationship between each continuous variable: correlation=data.corr() correlation # ### `Matplotlib` # # Matplotlib is a visualization library in Python for 2D plots of arrays. Matplotlib is a multi-platform data visualization library built on NumPy arrays and designed to work with the broader SciPy stack. # # One of the greatest benefits of visualization is that it allows us visual access to huge amounts of data in easily digestible visuals. Matplotlib consists of several plots like line, bar, scatter, histogram etc. # # Matplotlib comes with a wide variety of plots. Plots helps to understand trends, patterns, and to make correlations. They’re typically instruments for reasoning about quantitative information # # ### `Seaborn` # # Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics. # + # Using seaborn and matplotlib to have a better visualization of correlation import seaborn as sn import matplotlib.pyplot as plt plt.figure(figsize=(10,8)) sn.heatmap(correlation,annot=True,linewidth=1,cmap='PuOr') plt.show() # - # From the above correlation matrix, we get a general idea of which variables can be treated as features to build our model. 
Let's list them out.
# Considering all the variables having `|corr|>=0.5`:
#
# - prev_close
# - no._of_trades
# - open_price
# - low_price
# - last_price
# - turnover
# - close_price
# - %_dly_qt_to_traded_qty
# - average_price
#
# Now that we have a rough idea about our features, let's confirm their behaviour against the target variable using scatter plots.

# +
plt.figure(figsize=(18,18))

plt.subplot(3,3,1)
plt.scatter(data.prev_close,data.high_price)
plt.title('Relation with Previous Closing Price')

plt.subplot(3,3,2)
plt.scatter(data['no._of_trades'],data.high_price)
plt.title('Relation with No. of trades')

plt.subplot(3,3,3)
plt.scatter(data.open_price,data.high_price)
plt.title('Relation with Opening Price')

plt.subplot(3,3,4)
plt.scatter(data.low_price,data.high_price)
plt.title('Relation with Low Price')

plt.subplot(3,3,5)
plt.scatter(data.last_price,data.high_price)
plt.title('Relation with Last Price')

plt.subplot(3,3,6)
plt.scatter(data.turnover,data.high_price)
plt.title('Relation with Turnover')

plt.subplot(3,3,7)
plt.scatter(data.close_price,data.high_price)
plt.title('Relation with Closing Price')

plt.subplot(3,3,8)
plt.scatter(data['%_dly_qt_to_traded_qty'],data.high_price)
plt.title('Relation with Deliverable quantity')

plt.subplot(3,3,9)
plt.scatter(data.average_price,data.high_price)
plt.title('Relation with Average Price')

plt.show()
# -

# From the above visualizations, we can now choose the features for the linear-regression model. Those are:
#
# - prev_close
# - ~~no._of_trades~~
# - open_price
# - low_price
# - last_price
# - ~~turnover~~
# - close_price
# - ~~%_dly_qt_to_traded_qty~~
# - average_price

features=['prev_close','open_price','low_price','last_price','close_price','average_price']
X=data[features]
X

# Target variable

y=data.high_price
y

# +
# split data into training and validation data, for both features and target
# The split is based on a random number generator. Supplying a numeric value to
# the random_state argument guarantees we get the same split every time we
# run this script.
from sklearn.model_selection import train_test_split
train_X,val_X,train_y,val_y=train_test_split(X,y,test_size=0.2,random_state=0)
# -

from sklearn.linear_model import LinearRegression

# Define model
model=LinearRegression()

# Fit model
model.fit(train_X,train_y)

# We use the .score method to get an idea of the quality of our model
model.score(val_X,val_y)

# ### Model Validation
# There are many metrics for summarizing model quality, but we'll start with one called `Mean Absolute Error` (also called `MAE`). Let's break down this metric starting with the last word, error.
#
# `error = actual - predicted`
#
# So, if a stock costs Rs.4000 at some point in time and we predicted it would cost Rs.3980, the error is Rs.20.
#
# With the MAE metric, we take the absolute value of each error. This converts each error to a positive number. We then take the average of those absolute errors. This is our measure of model quality. In plain English, it can be said as
#
# > On average, our predictions are off by about X.
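# As a quick illustration of that definition before using scikit-learn's implementation (the numbers below are made up for the example):

# +
import numpy as np

actual    = np.array([4000.0, 3950.0, 4100.0])   # hypothetical actual prices (Rs.)
predicted = np.array([3980.0, 3960.0, 4070.0])   # hypothetical predictions (Rs.)

errors = actual - predicted          # error = actual - predicted
mae = np.mean(np.abs(errors))        # average of the absolute errors
mae                                  # 20.0 -> "on average, off by about Rs.20"
# -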
# from sklearn.metrics import mean_absolute_error # Get predicted prices of stock on validation data pred_y=model.predict(val_X) mean_absolute_error(val_y,pred_y) # --- # layout: # post title: "Picnic in San Francisco" # date: 2018-04-07 8:30:00 # categories: applications, data mining # featured_image: /images/cover.jpg # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # My beloved [SF Tsunami Master Team](http://sftsunami.org/) had planned a great picnic for this week end. For the second year in a row the plan had to be canceled due to inclement weather. # # I admit I sneered at the idea of having the picnic the same month as last year, considering that it got canceled once. However, forming opinions based on a sample size of two with a sprinkle of gut feeling is not the way a Scientist does things, so I thought it would be interesting and constructive to pull some data to validate or disprove my prejudice and to provide a valid alternative. # My hypothesis is quite simple: in April chances of rain are way higher than in May, while temperatures are pretty constant, so the latter would be a better option to plan outdoor activities. # + # %matplotlib inline # %load_ext autoreload # %autoreload 2 import pandas as pd import urllib.request from bs4 import BeautifulSoup import matplotlib from matplotlib import pyplot as plt plt.style.use('ggplot') matplotlib.rcParams['figure.figsize'] = (20.0, 10.0) # - # With a quick Google search I ran into [this](ggweather.com) site which reports monthly and daily information about temperature and precipitations. # # The format is quite easy to scrape. The data I'm interested in are the monthly average temperatures and the number of rainy days per month. 
# + class PicNicPlanner(object): RAIN_URL = 'http://ggweather.com/sf/daily.html' TEMP_URL = 'http://ggweather.com/sf/monthly%20mean%20temps.html' def __init__(self): self.rain_table = None self.temperature_table = None def _read_soup(self, url, split='\t'): flob = urllib.request.urlopen(url) s = flob.read() flob.close() soup = BeautifulSoup(s, "lxml") return [s for s in soup.findAll('table')[1].get_text().split(split) if len(s)>0] def _clean_rain(self, row): return pd.Series(row.strip().split('\n')[1:]).astype(float) def get_rains(self): if self.rain_table is None: raw_rows = self._read_soup(self.RAIN_URL, '\xa0') cleaned_rows = pd.concat( [self._clean_rain(row) for row in raw_rows if 'Days' in row and 'Rain' not in row], axis=1) cleaned_rows.index = ['Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun'] self.rain_table = cleaned_rows.transpose() self.rain_table.index = list(range(2008,2018)) return self.rain_table def _clean_temperatures(self, row): if len(row) > 1 and not ( 'Copyright' in row or 'Reproduction' in row or 'San Francisco' in row): return pd.Series(row.strip().split('\n')) def get_temperatures(self): if self.temperature_table is None: raw_rows = self._read_soup(self.TEMP_URL) cleaned_rows = pd.concat([self._clean_temperatures(row) for row in raw_rows[2:]],axis=1) cleaned_rows.columns = cleaned_rows.iloc[0] cleaned_rows = cleaned_rows.drop(0).dropna(axis=0) cleaned_rows.index = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec', 'Annual'] self.temperature_table = cleaned_rows.transpose() self.temperature_table = self.temperature_table.astype(float) return self.temperature_table def num_days(month): if month in ['Nov', 'Apr', 'Jun', 'Sep']: return 30 if month == 'Feb': return 28 return 31 # - planner = PicNicPlanner() temp = planner.get_temperatures() rains = planner.get_rains() # The data seems to # # # So, it turns out that the data confirms that having a picnic in SF in April is probably not the best idea if you want to frolic in the Sun, while your chances of having a successful event in May are almost **three times higher!** In the figure belowwe can see how April has a 23% chances of rain! Basically one day out of 4. As late as November we can have better conditions than in April, and yet I doubt people would consider reasonable to organize a picnic in November. # + fig, axes = plt.subplots(nrows=2, ncols=1) months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'] #axes[0].set_title('Average Rainy Days by Month') axes[0].set_title('Changes of Rain by Month') axes[1].set_title('Monthly Average Temperatures') axes[0].axhline(10, color='r', linestyle='--') (100*rains.mean() / rains.columns.map(num_days))[months].plot(kind='bar', ax=axes[0], sharex=True, color='#000099') axes[0].set_ylabel('Days') temp.mean()[months].plot(kind='bar', ax=axes[1], color='#7f0000') axes[1].set_ylabel('Hours') axes[1].set_xlabel('Month') axes[1].set_ylim([50,65]) # never too hot, never too warm # - pd.DataFrame({'Daily Chances of Rain': 100*rains.mean() / rains.columns.map(num_days)}).loc[months] # The takehome message should be: # - April is too soon to plan a picnic # - May is quite dry # - SF's weather is good enough to allow you to hang out outside as late as November! 
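# To put numbers on that comparison, the daily chance of rain per month follows directly from the `rains` table and the `num_days` helper above (a small sketch; the exact figures depend on the scraped 2008-2017 data, but with the table used here April comes out around 23% and roughly three times May's value):

# +
# average rainy days per month divided by the days in that month = daily chance of rain
daily_chance = 100 * rains.mean() / rains.columns.map(num_days)

april, may = daily_chance['Apr'], daily_chance['May']
print("April: {:.0f}% chance of rain on a given day".format(april))
print("May:   {:.0f}% chance of rain on a given day".format(may))
print("April is about {:.1f}x rainier than May".format(april / may))
# -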
# [source](https://github.com/mrpozzi/mrpozzi.github.io/blob/master/notebooks/PicnicInSF.ipynb) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + deletable=true editable=true import pandas as pd import numpy as np import itertools # + deletable=true editable=true data=pd.read_csv('savedrecs.txt',sep='\t',engine='python',index_col=False) # + deletable=true editable=true def getUniqueWords(allWords) : uniqueWords = [] for j in allWords: if j in uniqueWords: pass else: uniqueWords.append(j) return uniqueWords # - lines = [] with open("country_list.txt") as file: for line in file: line = line.strip() #or someother preprocessing lines.append(line) lines # + deletable=true editable=true df= data[data['C1'].notnull()] df_new=df['C1'] print(df_new[2]) for i in df_new.index: p=df_new[i].split() countries=['USA','Germany','France','China','Japan','Australia','Canada','Brazil','Mexico','South Africa', 'India','Korea','Israel','Turkey','Saudi Arabia','Iran','Spain','Netherlands','Sweden','Norway', 'Poland','Indonesia','Brazil','Switzerland','Denmark','Singapore','Iceland','Hong Kong','New Zealand','Belgium', 'Austria','Italy','Czech','Greece','Qatar','Portugal','Hungary','Argentina','Romania','England', 'Taiwan','Lithuania','Finland','Russia','Kazakhstan','Afghanistan','Albania'] count=[] for i in p: if i not in count: for j in countries: if (i==j or i==j+';'): count.append(i) country_list=[] for i in count: for j in countries: if (i==j or i==j+';'): country_list.append(j) print(list(set(country_list))) # + deletable=true editable=true count=[] countries=['France','USA','Japan','Sweden','Germany'] #print(countries) for i in p: if i not in count: for j in countries: if (i==j or i==j+';'): count.append(i) country_list=[] for i in count: for j in countries: if (i==j or i==j+';'): country_list.append(j) print(list(set(country_list))) # - country_list=[] for i in count: for j in countries: if (i==j or i==j+';'): country_list.append(j) print(list(set(country_list))) # + deletable=true editable=true # + deletable=true editable=true # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # A brief tutorial on using `lstbin` of `hera_cal` # # Feb. 2018 #
    # #
    # # # In this brief tutorial, we show you how to use the `lstbin` module of `hera_cal` on example HERA data. We will be using data from three nights of absolutely calibrated and RFI-flagged HERA-47 data, working only on a 7-element subset of the data. Each night contains three files worth of data, which amounts to 180 integrations per night (or 32.1 minutes per night). Details on the absolute calibration applied to the data can be found here: [HERA Memo 42](http://reionization.org/wp-content/uploads/2013/03/abscal_memo.pdf). # %matplotlib inline import numpy as np import matplotlib.pyplot as plt import glob import os import sys from pyuvdata import UVData, UVCal import hera_cal as hc from hera_cal.data import DATA_PATH from collections import OrderedDict as odict from pyuvdata import utils as uvutils import copy import uvtools as uvt from hera_cal.datacontainer import DataContainer import aipy import operator # # load and configure data # + # load the data night1 = sorted(glob.glob(os.path.join(DATA_PATH, "zen.2458043.4*.xx.HH.uvXRAA"))) night2 = sorted(glob.glob(os.path.join(DATA_PATH, "zen.2458044.4*.xx.HH.uvXRAA"))) night3 = sorted(glob.glob(os.path.join(DATA_PATH, "zen.2458045.4*.xx.HH.uvXRAA"))) uvd1 = UVData() uvd1.read_miriad(night1) uvd2 = UVData() uvd2.read_miriad(night2) uvd3 = UVData() uvd3.read_miriad(night3) # load data and meta data. LST arrays are the lst1, lst2, lst3 variables data1, flgs1, ap1, a1, f1, t1, lst1, p1 = hc.io.load_vis(uvd1, return_meta=True) data2, flgs2, ap2, a2, f2, t2, lst2, p2 = hc.io.load_vis(uvd2, return_meta=True) data3, flgs3, ap3, a3, f3, t3, lst3, p3 = hc.io.load_vis(uvd3, return_meta=True) # + # plot array layout antloc = np.array(ap1.values()) antloc -= np.median(antloc, axis=0) fig, ax = plt.subplots(figsize=(5,5)) ax.grid() ax.scatter(antloc[:, 0], antloc[:, 1], s=4000, c='steelblue') _ = [ax.text(antloc[i,0]-2, antloc[i,1], a1[i], fontsize=20, color='w') for i in range(len(a1))] ax.set_xlim(-25, 25) ax.set_ylim(-25, 25) ax.set_xlabel("X [meters]", fontsize=16) ax.set_ylabel("Y [meters]", fontsize=16) # + # form data list data_list = [data1, data2, data3] lst_list = [lst1, lst2, lst3] flgs_list = [flgs1, flgs2, flgs3] # get integration duration in radians delta_lst = np.median(np.diff(lst1)) # - # plot the data's native LST integrations fig, ax = plt.subplots(1, 1, figsize=(18, 4), dpi=200) ax.grid() p1, = ax.plot(lst1, np.ones_like(lst1)*0, color='darkred', ms=15, marker='|', ls='') p2, = ax.plot(lst2, np.ones_like(lst2)*1, color='darkorange', ms=15, marker='|', ls='') p3, = ax.plot(lst3, np.ones_like(lst3)*2, color='steelblue', ms=15, marker='|', ls='') ax.set_ylim(-1, 3) _ = [tl.set_size(15) for tl in ax.xaxis.get_ticklabels()] ax.yaxis.set_ticks([0,1,2]) ax.yaxis.set_ticklabels(['night1', 'night2', 'night3']) _ = [tl.set_size(15) for tl in ax.yaxis.get_ticklabels()] ax.set_xlabel("LST [radians]", fontsize=20) # We can see from the figure above that the 32 minutes of data from each night do not align perfercly in LST. We see a drift in the LST duration for the data from night-to-night, corresponding to 4 minutes. Also more subtle is the fact that the integrations themselves do perfectly align across nights, even in the overlapping LST range. # # bin data with 10.7 second bin width # # In the steps below, we will form a uniform LST grid and average the three nights of data that fall in each LST bin. 
We won't take into account the fact that the LST gridding of each night is 1) not aligned between nights and 2) not perfectly aligned with the LST bin itself. # LST bin! (lst_bins, data_avg, lst_flags, data_std, data_num) = hc.lstbin.lst_bin(data_list, lst_list, dlst=delta_lst, flags_list=flgs_list, flag_thresh=0.7) # The `flag_thresh` parameter sets the fractional threshold of flagged data per bin at which point the entire bin is flagged. # + # plot the number of points that fell into each LST bin key = data_num.keys()[0] X, Y = np.meshgrid(np.linspace(100, 200, 64, endpoint=False), lst_bins[::4]) X = X.ravel() Y = Y.ravel() fig, ax = plt.subplots(1, 1, figsize=(12, 6)) ax.grid() cax = ax.scatter(X, Y, c=data_num[key][::4, :].ravel(), s=30, cmap='viridis') cbar = fig.colorbar(cax) cbar.set_ticks([0,1,2,3]) cbar.set_label('number of points in each bin', fontsize=19) ax.set_xlabel('frequency [MHz]', fontsize=19) ax.set_ylabel('LST [radians]', fontsize=19) ax.set_ylim(0.385, 0.19) _ = [tl.set_size(15) for tl in ax.get_xticklabels()] _ = [tl.set_size(15) for tl in ax.get_yticklabels()] ax.set_title("number of points that fell into each LST-frequency bin", fontsize=16) # - # Above we show the number of points that fell into each LST-frequency bin. This matches what we would have predicted given the previous figure, given the fact that we made an LST grid with the same time integration length as the data originally had. Below, we plot waterfalls from a 14.6 meter East-West baseline before and after LST binning, where we can see a slight reduction in noise by-eye after LST-binning. Keep in mind that the data in night1 on the left extends to only 160 integrations, while the plot on the right extends all the way to 224 integrations. # + key = (24, 25, 'xx') fig, axes = plt.subplots(1, 3, figsize=(20, 6)) ax = axes[0] plt.sca(ax) _d = data1[key].copy() _d[flgs1[key]] *= np.nan uvt.plot.waterfall(_d, mode='log', mx=3, drng=3) cbar = plt.colorbar() cbar.set_label('log amplitude', fontsize=14) ax.set_title('night1 {} AMP'.format(key), fontsize=14) ax.set_xlabel('frequency channels', fontsize=14) ax.set_ylabel('LST integrations', fontsize=14) ax.set_ylim(224, 0) ax = axes[1] plt.sca(ax) _d2 = data_avg[key].copy() _d2[lst_flags[key]] *= np.nan uvt.plot.waterfall(_d2, mode='log', mx=3, drng=3) cbar = plt.colorbar() cbar.set_label('log amplitude', fontsize=14) ax.set_title('LST-binned {} AMP'.format(key), fontsize=14) ax.set_xlabel('frequency channels', fontsize=14) ax.set_ylabel('LST integrations', fontsize=14) ax = axes[2] plt.sca(ax) uvt.plot.waterfall(_d/_d2[:180], mode='log', mx=0.5, drng=1) cbar = plt.colorbar() cbar.set_label('log ratio', fontsize=14) ax.set_title('nigt1 AMP / LST-binned AMP'.format(key), fontsize=14) ax.set_xlabel('frequency channels', fontsize=14) ax.set_ylabel('LST integrations', fontsize=14) # - # # bin data with sigma clipping # # In this example, we will run a simple 1-iteration sigma clipping algorithm that will reject all points in each LST bin that lie outside of some sigma threshold. For this particular case, we will use a larger LST bin width to accumulate more points per LST bin. For the example, we choose to not LST align, although we could do this as well if we wanted. We will also choose a minimum number of points per LST-bin threshold in order to perform sigma clipping at 5 points. In other words, if an LST-bin contains 4 or less points, we won't perform sigma clipping. Note, sigma clipping slows down the code. 
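# The clipping step is simple to picture on its own. The toy function below (a sketch of the idea only, not `hera_cal`'s exact implementation) rejects points far from the bin's central value before averaging, and skips clipping when a bin has fewer than `min_N` points:

# +
def sigma_clip_mean(samples, sigma=5.0, min_N=5):
    """Toy one-iteration sigma clip: drop points more than `sigma` robust widths
    from the median, then average whatever survives."""
    samples = np.asarray(samples, dtype=float)
    if samples.size < min_N:
        return samples.mean()                      # too few points to clip reliably
    center = np.median(samples)
    mad = np.median(np.abs(samples - center))      # robust width estimate
    width = 1.4826 * mad if mad > 0 else samples.std()
    keep = np.abs(samples - center) <= sigma * width
    return samples[keep].mean()

# a single wild outlier is rejected instead of dragging the average up to ~17
print(sigma_clip_mean([1.0, 1.1, 0.9, 1.05, 0.95, 100.0]))
# -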
# form lst grid starting at 0 radians and going until 2*pi radians lst_grid2 = hc.lstbin.make_lst_grid(delta_lst*4) dlst2 = np.median(np.diff(lst_grid2)) # introduce artifact into data data1[(24, 25, 'xx')][25] *= 100 data1[(24, 25, 'xx')][70, 35:40] *= 100 data1[(24, 25, 'xx')][100:140, 32] *= 100 data1[(24, 25, 'xx')][120] *= 100 # LST bin, with 5 sigma rejection tolerance (lst_bins, data_avg, flags_min, data_std, data_num) = hc.lstbin.lst_bin(data_list, lst_list, flags_list=flgs_list, dlst=dlst2, sig_clip=True, sigma=5, min_N=5) # + key = (') fig, axes = plt.subplots(1, 3, figsize=(20, 6)) ax = axes[0] plt.sca(ax) _d = data1[key].copy() _d[flgs1[key]] *= np.nan uvt.plot.waterfall(_d, mode='log', mx=3, drng=3) cbar = plt.colorbar() cbar.set_label('log amplitude', fontsize=14) ax.set_title('night1 {} AMP'.format(key), fontsize=14) ax.set_xlabel('frequency channels', fontsize=14) ax.set_ylabel('LST integrations', fontsize=14) ax.set_ylim(224, 0) ax = axes[1] plt.sca(ax) _d = data_avg[key].copy() _d[flags_min[key]] *= np.nan uvt.plot.waterfall(_d, mode='log', mx=3, drng=3) cbar = plt.colorbar() cbar.set_label('log amplitude', fontsize=14) ax.set_title('LST-binned {} AMP'.format(key), fontsize=14) ax.set_xlabel('frequency channels', fontsize=14) ax.set_ylabel('LST integrations', fontsize=14) ax = axes[2] plt.sca(ax) _d = data_num[key].copy() uvt.plot.waterfall(_d, mode='real', mx=12, drng=12) cbar = plt.colorbar() cbar.set_label('Bin Count', fontsize=14) ax.set_title('LST Bin Count for {}'.format(key), fontsize=14) ax.set_xlabel('frequency channels', fontsize=14) ax.set_ylabel('LST integrations', fontsize=14) # - # We can see that the artificial structure introduced in one file is not able to propagate to the LST binned data having sigma clipped before taking the average. # # run directly on the data files, write binned data to file # # Here we will use the `lst_bin_files()` function of `lstbin` to run the LST binner directly on the data files in a way that minimizes file I/O. # get data files data_files = [sorted(glob.glob(DATA_PATH+'/zen.2458043.4*XRAA')), sorted(glob.glob(DATA_PATH+'/zen.2458044.4*XRAA')), sorted(glob.glob(DATA_PATH+'/zen.2458045.4*XRAA'))] # rm -r zen.xx.* hc.lstbin.lst_bin_files(data_files, outdir='./', ntimes_per_file=100) # rm -r ./zen.xx.* # # coherent integration tests # # A required performance mark of the LST binning step is to show that we can indeed add data coherently such that thermal noise integrates down as the inverse square root of the number of samples per bin (i.e. $\propto N^{-1/2}$). To test this, we will perform a series of LST binning with different bin widths to get varying $N$ per bin. Our proxy for the thermal noise is simply the difference between adjacent time integrations for each frequency bin. 
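# Before looking at the real data, the expected scaling is easy to verify on synthetic noise: averaging N independent Gaussian draws should shrink the scatter by $1/\sqrt{N}$. A quick standalone check (simulated numbers only, independent of the HERA data):

# +
rng = np.random.RandomState(0)
trials = rng.normal(0.0, 1.0, size=(10000, 16))    # 10000 bins, up to 16 samples each

for N in [1, 2, 4, 8, 16]:
    binned = trials[:, :N].mean(axis=1)             # average N samples per bin
    print("N = {:2d}: measured std = {:.3f}, expected 1/sqrt(N) = {:.3f}".format(
        N, binned.std(), 1.0 / np.sqrt(N)))
# -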
# get raw data uvd0 = UVData() uvd0.read_miriad(night1) hc.lstbin.lst_bin_files(data_files, outdir='./', ntimes_per_file=250, dlst=delta_lst, overwrite=True) # load first run uvd1 = UVData() uvd1.read_miriad('zen.xx.STD.0.20124.uv') hc.lstbin.lst_bin_files(data_files, outdir='./', ntimes_per_file=250, dlst=delta_lst*2, overwrite=True) # load second run uvd2 = UVData() uvd2.read_miriad('zen.xx.LST.0.20046.uv') hc.lstbin.lst_bin_files(data_files, outdir='./', ntimes_per_file=250, dlst=delta_lst*4, overwrite=True) # load third run uvd3 = UVData() uvd3.read_miriad('zen.xx.LST.0.20046.uv') # + # plot thermal noise for a single baseline key = (24, 38, 'xx') d0 = uvd0.get_data(key) n0 = np.diff(d0, axis=0) c0 = uvd0.get_nsamples(key) l0 = np.unique(uvd0.lst_array)[1:] d1 = uvd1.get_data(key) n1 = np.diff(d1, axis=0) c1 = uvd1.get_nsamples(key) l1 = np.unique(uvd1.lst_array)[1:] d2 = uvd2.get_data(key) n2 = np.diff(d2, axis=0) c2 = uvd2.get_nsamples(key) l2 = np.unique(uvd2.lst_array)[1:] d3 = uvd3.get_data(key) n3 = np.diff(d3, axis=0) c3 = uvd3.get_nsamples(key) l3 = np.unique(uvd3.lst_array)[1:] fig, axes = plt.subplots(2, 4, figsize=(23, 12)) axes = axes.ravel() fig.subplots_adjust(hspace=0.3) ax = axes[0] plt.sca(ax) uvt.plot.waterfall(n0, mode='real', mx=10, drng=10) ax.set_xlabel("frequency channels", fontsize=14) ax.set_ylabel("time integrations", fontsize=14) ax.set_title("noise for night1 {}".format(key), fontsize=16) ax = axes[1] plt.sca(ax) uvt.plot.waterfall(n1, mode='real', mx=10, drng=10) ax.set_xlabel("frequency channels", fontsize=14) ax.set_title("noise for Run1 {}".format(key), fontsize=16) ax = axes[2] plt.sca(ax) uvt.plot.waterfall(n2, mode='real', mx=10, drng=10) ax.set_xlabel("frequency channels", fontsize=14) ax.set_title("noise for Run2 {}".format(key), fontsize=16) ax = axes[3] plt.sca(ax) cax = uvt.plot.waterfall(n3, mode='real', mx=10, drng=10) ax.set_xlabel("frequency channels", fontsize=14) ax.set_title("noise for Run3 {}".format(key), fontsize=16) ax = fig.add_axes([0.86, 0.52, 0.06, 0.4]) ax.axis('off') cbar = fig.colorbar(cax, ax=ax) cbar.set_label("amplitude", fontsize=20) ax = axes[4] plt.sca(ax) uvt.plot.waterfall(c0, mode='real', mx=12, drng=12) ax.set_xlabel("frequency channels", fontsize=14) ax.set_ylabel("time integrations", fontsize=14) ax.set_title("bin count for night1 {}".format(key), fontsize=16) ax = axes[5] plt.sca(ax) uvt.plot.waterfall(c1, mode='real', mx=12, drng=12) ax.set_xlabel("frequency channels", fontsize=14) ax.set_title("bin count for Run1 {}".format(key), fontsize=16) ax = axes[6] plt.sca(ax) uvt.plot.waterfall(c2, mode='real', mx=12, drng=12) ax.set_xlabel("frequency channels", fontsize=14) ax.set_title("bin count for Run2 {}".format(key), fontsize=16) ax = axes[7] plt.sca(ax) cax = uvt.plot.waterfall(c3, mode='real', mx=12, drng=12) ax.set_xlabel("frequency channels", fontsize=14) ax.set_title("bin count for Run3 {}".format(key), fontsize=16) ax = fig.add_axes([0.86, 0.1, 0.06, 0.4]) ax.axis('off') cbar = fig.colorbar(cax, ax=ax) cbar.set_label("bin count", fontsize=20) # + # plot a single frequency channel f = 32 h0 = np.abs(n0[:, f]) h1 = np.abs(n1[50:175, f]) h2 = np.abs(n2[25:85, f]) h3 = np.abs(n3[11:43, f]) m0 = np.mean(h0) m1 = np.mean(h1) m2 = np.mean(h2) m3 = np.mean(h3) mc0 = np.median(c0[:, f]) mc1 = np.median(c1[50:175, f]) mc2 = np.median(c2[25:85, f]) mc3 = np.median(c3[11:43, f]) fig, axes = plt.subplots(2, 1, figsize=(15,14)) ax = axes[0] ax.grid(zorder=0) ax.set_xlabel("noise amplitude", fontsize=18) 
ax.set_title("histogram of noise amplitude for freq channel {}".format(f), fontsize=18) _ = [tl.set_size(14) for tl in ax.get_xticklabels()] _ = [tl.set_size(14) for tl in ax.get_yticklabels()] ax.hist(h0, bins=15, histtype='stepfilled', color='steelblue', lw=3, range=(0, 40), alpha=0.5, zorder=3) ax.hist(h0, bins=15, histtype='step', color='k', lw=3, range=(0, 40), alpha=0.75, zorder=3) ax.hist(h1, bins=25, histtype='stepfilled', color='darkorange', lw=3, range=(0, 40), alpha=0.5, zorder=3) ax.hist(h1, bins=25, histtype='step', color='k', lw=3, range=(0, 40), alpha=0.75, zorder=3) ax.hist(h2, bins=25, histtype='stepfilled', color='firebrick', lw=3, range=(0, 40), alpha=0.5, zorder=3) ax.hist(h2, bins=25, histtype='step', color='k', lw=3, range=(0, 40), alpha=0.75, zorder=3) ax.hist(h3, bins=25, histtype='stepfilled', color='green', lw=3, range=(0, 40), alpha=0.5, zorder=3) ax.hist(h3, bins=25, histtype='step', color='k', lw=3, range=(0, 40), alpha=0.75, zorder=3) p0 = ax.axvline(m0, color='steelblue', ymin=0.75, lw=3) p1 = ax.axvline(m1, color='darkorange', ymin=0.75, lw=3) p2 = ax.axvline(m2, color='firebrick', ymin=0.75, lw=3) p3 = ax.axvline(m3, color='green', ymin=0.75, lw=3) ax.legend([p0, p1, p2, p3], ["night1", "Run1", "Run2", "Run3"], fontsize=20) ax = axes[1] ax.grid(zorder=0) ax.set_xlabel("N samples per bin", fontsize=18) ax.set_ylabel("average noise amplitude", fontsize=18) _ = [tl.set_size(14) for tl in ax.get_xticklabels()] _ = [tl.set_size(14) for tl in ax.get_yticklabels()] counts = np.array([mc0, mc1, mc2, mc3]) amps = np.array([m0, m1, m2, m3]) c_arr = np.linspace(1,12,50) a_arr = amps[0] / np.sqrt(c_arr) p0, = ax.plot(counts, amps, ls='-', marker='o', lw=3, markersize=10) p1, = ax.plot(c_arr, a_arr, ls='--', color='k', lw=2) ax.legend([p0, p1], ["histogram means", "$1/\sqrt{N}$"], fontsize=20) # - # rm -r ./zen.xx.* # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.10.0 64-bit # language: python # name: python3 # --- import math import random import urllib as ul import heapq as hq import csv # # Algoritmo de dijkstra a todos los puntos def dijkstra(G, s): n = len(G) visited = [False]*n path = [None]*n cost = [math.inf]*n cost[s] = 0 queue = [(0, s)] while queue: g_u, u = hq.heappop(queue) if not visited[u]: visited[u] = True for v, w in G[u]: f = g_u + w if f < cost[v]: cost[v] = f path[v] = u hq.heappush(queue, (f, v)) return path, cost # Generamos aleatoriamente los pesos a las aristas, debido a que es necesario para que se realice el algoritmo Dijkstra # + def getGraphWeighted(G): adjlListWeighted = [[] for i in range(len(G))] for i in range(len(G)): for j in range(len(G[i])): adjlListWeighted[i].append( ( G[i][j], random.randint(0,10) ) ) return adjlListWeighted def readAdjl(filename): with ul.request.urlopen(filename) as f: L = [] for line in f: L.append(list(map(int, (line.decode("utf-8").strip().split(","))))) return L # + G = readAdjl("https://raw.githubusercontent.com/RodriCalle/cc41_tf_201915889_201910127_201917028_201718169_20141a449/master/TrabajoFinal/GeneratorAdjancencyList/adjancencyList.txt") GW = getGraphWeighted(G) path, cost = dijkstra(GW, 0) pathFile = open('dijkstraForAll.txt', 'w', newline='') with pathFile: writer = csv.writer(pathFile) writer.writerows(str(path)) #Solo imprimirmos los 1000 primeros nodos con su respectivo padre for i in range(1000): print(f"{i}: {path[i]}") # - # # Algoritmo de Dijkstra de cada almacen 
a cada punto de entrega # + def getIndexFromCoords(size, arr): alm = [] for a in arr: x, y = a[0], a[1] pos = x*size + y alm.append(pos) return alm def getCoordFromIndex(index, size): x = index // size y = index % size return (x, y) def dijkstraEntregas(almacenes, entregas, G): sol = [] for i in almacenes: path, cost = dijkstra(G,i) for j in entregas: node = j p = [j] c = cost[j] while node != i: node = path[node] p.append(node) p.reverse() sol.append((i,j,c,p)) return sol # + def getCoords(filename, arrx, arry): arrx.clear() arry.clear() with ul.request.urlopen(filename) as f: for line in f: xy = list(map(int, (line.decode("utf-8").strip().split(",")))) arrx.append(xy[0]) arry.append(xy[1]) urlAlmacen = "https://raw.githubusercontent.com/RodriCalle/cc41_tf_201915889_201910127_201917028_201718169_20141a449/master/Datasets/puntos_almacenes.csv" urlEntrega = "https://raw.githubusercontent.com/RodriCalle/cc41_tf_201915889_201910127_201917028_201718169_20141a449/master/Datasets/puntos_entregas.csv" almacenx, almaceny = [], [] getCoords(urlAlmacen, almacenx, almaceny) entregax, entregay = [], [] getCoords(urlEntrega, entregax, entregay) almacenes, entregas = [], [] for i in range(len(almacenx)): almacenes.append([almacenx[i], almaceny[i]]) for i in range(len(entregax)): entregas.append([entregax[i], entregay[i]]) # + arregloAlm = getIndexFromCoords(1000,almacenes) arregloEnt = getIndexFromCoords(1000,entregas) djpd = dijkstraEntregas(arregloAlm, arregloEnt, GW) djpdFile = open('dijkstraforeach.txt', 'w') with djpdFile: writer = csv.writer(djpdFile) writer.writerows(str(djpd)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: python # name: synapse_pyspark # --- # + [markdown] nteract={"transient": {"deleting": false}} # ## Load data # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} data_lake_account_name = 'synapseaikorcen' file_system_name = 'synapseaikorcen' # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "python"} nteract={"transient": {"deleting": false}} # %%pyspark for name in ["RW3", "RW4", "RW5", "RW6"]: print(f"start saving as table: {name}") spark.read.format("csv").load(f"abfss://{file_system_name}@{data_lake_account_name}.dfs.core.windows.net/battery/orig/{name}.csv",sep = ',',header=True).write.mode("overwrite").saveAsTable(f"default.battery_orig_{name}") print(f"finished saving as table: {name}") # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select * from battery_orig_rw3 limit 1000 # # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select 'rw3', min(_c0) from battery_orig_rw3 where capacity <= 1.6 # union all # select 'rw4', min(_c0) from battery_orig_rw4 where capacity <= 1.6 # union all # select 'rw5', min(_c0) from battery_orig_rw5 where capacity <= 1.6 # union all # select 'rw6', min(_c0) from battery_orig_rw6 where capacity <= 1.6 # # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select * from battery_orig_rw3 # where _c0 between 1260000 and 1261000 # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} 
nteract={"transient": {"deleting": false}} language="sql" # select 'rw3' unit, # _c0, # AVG(capacity) OVER ( # ORDER BY _c0 # ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING # ) MovAvg # from battery_orig_rw3 # where _c0 between 1260000 and 1261000 # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select 'rw3' file, min(time), max(time), min(capacity), max(capacity), count(*) cnt from battery_orig_rw3 # union all # select 'rw4' file, min(time), max(time), min(capacity), max(capacity), count(*) cnt from battery_orig_rw4 # union all # select 'rw5' file, min(time), max(time), min(capacity), max(capacity), count(*) cnt from battery_orig_rw5 # union all # select 'rw6' file, min(time), max(time), min(capacity), max(capacity), count(*) cnt from battery_orig_rw6 # + [markdown] nteract={"transient": {"deleting": false}} # ## Explore and transform data # + [markdown] nteract={"transient": {"deleting": false}} # ### Try visualizing moving average # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # SELECT * FROM # ( # SELECT # 'rw3' unit, # _c0 # ,AVG(capacity) OVER ( # ORDER BY _c0 # ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING # ) MovAvg # FROM battery_orig_rw3 # WHERE _c0 < 1300000 # # union ALL # SELECT # 'rw4' unit, # _c0 # ,AVG(capacity) OVER ( # ORDER BY _c0 # ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING # ) MovAvg # FROM battery_orig_rw4 # WHERE _c0 < 1300000 # # union ALL # SELECT # 'rw5' unit, # _c0 # ,AVG(capacity) OVER ( # ORDER BY _c0 # ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING # ) MovAvg # FROM battery_orig_rw5 # WHERE _c0 < 1300000 # # union ALL # SELECT # 'rw6' unit, # _c0 # ,AVG(capacity) OVER ( # ORDER BY _c0 # ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING # ) MovAvg # FROM battery_orig_rw6 # WHERE _c0 < 1300000 # ) a # WHERE _c0 % (10000000 / 1000) = 0 # # + [markdown] nteract={"transient": {"deleting": false}} # ### Observation # # - 31 Cycle 이동평균이 1.6 (기준치 2.0으로 봤을 때 80% 수준)에 도달하는 것은 128만 사이클보다 나중임. # # - 다만 이동평균을 고려하지 않으면 위에서 capacity < 1.6 인 min(_c0)에서 봤듯이 이보다 이전에 발생함. 
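# + [markdown] nteract={"transient": {"deleting": false}}
# The crossing point can also be computed directly rather than read off the chart. A DataFrame-API sketch of the same 31-cycle window used above (casts are needed because the CSV tables were saved without schema inference, so every column is a string; a global window with `orderBy` but no `partitionBy` pulls all rows into one partition and is slow on the full table):

# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
from pyspark.sql import functions as F
from pyspark.sql.window import Window

rw3 = (spark.table("battery_orig_rw3")
            .select(F.col("_c0").cast("long").alias("cycle"),
                    F.col("capacity").cast("double").alias("capacity")))

# 31-cycle centered moving average (15 preceding + current + 15 following),
# mirroring the SQL window, then the first cycle where it drops to <= 1.6
# (80% of the 2.0 baseline noted above)
w = Window.orderBy("cycle").rowsBetween(-15, 15)
mov = rw3.select("cycle", F.avg("capacity").over(w).alias("mov_avg_capacity"))

first_below = (mov.filter(F.col("mov_avg_capacity") <= 1.6)
                  .agg(F.min("cycle").alias("first_cycle")))
first_below.show()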
# + [markdown] nteract={"transient": {"deleting": false}} # ## Baseline 확인 # # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # SELECT 'rw3', capacity, * FROM battery_orig_rw3 WHERE _c0 = 0 # UNION ALL # SELECT 'rw4', capacity, * FROM battery_orig_rw3 WHERE _c0 = 0 # UNION ALL # SELECT 'rw5', capacity, * FROM battery_orig_rw3 WHERE _c0 = 0 # UNION ALL # SELECT 'rw6', capacity, * FROM battery_orig_rw3 WHERE _c0 = 0 # # + [markdown] nteract={"transient": {"deleting": false}} # ### Observation # # 모두 같은 값이므로 지금은 고민하지 말고 상수 값으로 봄 # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np sql_statement = ''' SELECT 'rw3' unit, _c0 ,AVG(capacity) OVER ( ORDER BY _c0 ROWS BETWEEN 100 PRECEDING AND 100 FOLLOWING ) FifteenCycleMovAvg FROM battery_orig_rw3 WHERE _c0 < 5000 union ALL SELECT 'rw4' unit, _c0 ,AVG(capacity) OVER ( ORDER BY _c0 ROWS BETWEEN 100 PRECEDING AND 100 FOLLOWING ) FifteenCycleMovAvg FROM battery_orig_rw4 WHERE _c0 < 5000 ''' df_data = spark.sql(sql_statement).toPandas() display(df_data) # + [markdown] nteract={"transient": {"deleting": false}} # ### Observation # # - 1000 사이클 이전에는 줄어들었다가 다시 원복되는 과정이 있는데, 이 구간에서도 fail를 예측해야 하는지? # - 패턴이 다른 구간과 다르므로 이 구간의 예측이 불필요하다면 1000 사이클 이전은 제외하고 나머지 데이터만으로 진행하는 것이 괜찮을지? # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} sql_statement = ''' SELECT 'rw3' unit, _c0 cycle, time, voltage, current, temperature, capacity FROM battery_orig_rw3 ''' df_data = spark.sql(sql_statement).toPandas() df_data.describe() # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} df_data.min() # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} df_data.max() # + [markdown] nteract={"transient": {"deleting": false}} # ### time 컬럼의 속성 확인 # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select time, int(time) int_time, time - int(time) from battery_orig_rw3 limit 100 # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select distinct time - int(time) from battery_orig_rw3 limit 100 # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select count(distinct int(time)) from battery_orig_rw3 # # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select count(distinct time) from battery_orig_rw3 # # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select int_time, count(*) # from # ( # select time, int(time) int_time from battery_orig_rw3 # ) a # group by int_time # having count(*) > 1 # limit 100 # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select * from battery_orig_rw3 where int(time) = 262183 # + [markdown] nteract={"transient": {"deleting": false}} # ## Observation # # - 위에서 볼 수 있듯이 한 사이클에 여러 값이 매우 
가까운 time 값으로 존재할 수 있음 # - current값이 0인 경우도 있는데 평균으로 요약하면 왜곡이 될지?? # - 0이 누락된 값인지 실측이 0으로 된 건지 판단이 필요. 누락이라면 imputation이 필요할 수 있으나, 다른 레코드에서 음수도 있는 것을 보아 정상값일 수도 있음 # - 또는 이 4건을 1건으로 요약하지 말고 그대로 사용할 수도 있는지 봐야 함. 다만 이후 AutoML에서는 time이 키 값이 되어야 하므로 단수차이가 별도 값으로 표현되는지 봐야 함 # + jupyter={"outputs_hidden": false, "source_hidden": false} microsoft={"language": "sparksql"} nteract={"transient": {"deleting": false}} language="sql" # select * from battery_orig_rw3 where int(time) = 792764 # + [markdown] nteract={"transient": {"deleting": false}} # 이 내용을 보면 int(time) 기준으로 평균으로 구해도 될 것 같아 보이기도 하나, current 0 자체가 의미 있는 데이터로 보여 요약 하지 않고 그대로 진행. # + [markdown] nteract={"transient": {"deleting": false}} # ## Prepare data # # Automated ML Forecasting은 현재 Datetime 데이터를 기준으로 해야 하므로, time을 Base datetime 기준으로 변형하도록 함. 향후 결과 해석시에도 역으로 변형 필요함. # time의 단위는 second로 봄 (NASA https://c3.nasa.gov/dashlink/resources/133/ 참고: 확인 필요) # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np sql_statement = ''' SELECT from_unixtime(unix_timestamp(to_date("2020-01-01")) + 1) ''' df_data = spark.sql(sql_statement) display(df_data) # type(df_data) # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np sql_statement = ''' SELECT 'rw3' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw3 WHERE _c0 < 200 ''' df_data = spark.sql(sql_statement) display(df_data) # with Pandas: # df_data = spark.sql(sql_statement).toPandas() # # print(df_data['time']) # df_data['datetime'] = df_data['time'].apply(cycle_to_datetime) # # print(df_data['datetime']) # # print(df_data['datetime'].apply(datetime_to_cycle)) # # with Spark DataFrame # from pyspark.sql.functions import col # cycle_to_datetime = spark.udf.register("cycle_to_datetime", cycle_to_datetime) # new_df = df_data.select(cycle_to_datetime(col("time"))) # display(new_df) # df_data = spark.sql(sql_statement) # df_data.withColumn('datetime', cycle_to_datetime(col('time'))) # display(df_data) # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} df_data.describe().show() # + [markdown] nteract={"transient": {"deleting": false}} # 대용량이면 PySpark Dataframe으로 하는 것이 Pandas Dataframe보다 성능상 나음. 
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} # display(df_data['time'].apply(cycle_to_datetime)) # display(df_data) # df_data['homeOwnership'] = df_data['homeOwnership'].replace('nan', np.nan).fillna(0) # df_data['isJointApplication'] = df_data['isJointApplication'].replace('nan', np.nan).fillna(0) # df_data['dtiRatio'] = df_data['dtiRatio'].astype(np.float32) # df_data['annualIncome'] = df_data['annualIncome'].astype(np.float32) # df_data['lengthCreditHistory'] = df_data['lengthCreditHistory'].astype(np.float32) # df_data['numTotalCreditLines'] = df_data['numTotalCreditLines'].astype(np.float32) # df_data['numOpenCreditLines'] = df_data['numOpenCreditLines'].astype(np.float32) # df_data['numOpenCreditLines1Year'] = df_data['numOpenCreditLines1Year'].astype(np.float32) # df_data['revolvingBalance'] = df_data['revolvingBalance'].astype(np.float32) # df_data['revolvingUtilizationRate'] = df_data['revolvingUtilizationRate'].astype(np.float32) # df_data['numDerogatoryRec'] = df_data['numDerogatoryRec'].astype(np.float32) # df_data['numDelinquency2Years'] = df_data['numDelinquency2Years'].astype(np.float32) # df_data['numChargeoff1year'] = df_data['numChargeoff1year'].astype(np.float32) # df_data['numInquiries6Mon'] = df_data['numInquiries6Mon'].astype(np.float32) # df_data['loanAmount'] = df_data['loanAmount'].astype(np.float32) # df_data['interestRate'] = df_data['interestRate'].astype(np.float32) # df_data['monthlyPayment'] = df_data['monthlyPayment'].astype(np.float32) # df_data['incomeVerified'] = df_data['incomeVerified'].astype(np.int64) # df_data['isJointApplication'] = df_data['isJointApplication'].astype(np.int64) # #df_data.columns # df_data = df_data.replace(float('nan'), None) # df_data_sp = spark.createDataFrame(df_data) # df_data_sp.write.mode("overwrite").saveAsTable("default.creditrisk_data") # # data_lake_account_name = 'ncsynapsews11acc' # # file_system_name = 'ncsynapsews11fs' # df_data_sp.coalesce(1).write.option('header', 'true').mode('overwrite').csv(f'abfss://{file_system_name}@{data_lake_account_name}.dfs.core.windows.net/creditrisk/automlprepareddata/') # drop_cols = ['memberId', 'loanId', 'date','grade'] # df_data = df_data.drop(drop_cols, axis=1) # + [markdown] nteract={"transient": {"deleting": false}} # ## Data Qualify check # # ### Duplicate Index # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np sql_statement = ''' SELECT 'rw3' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw3 ''' df_data = spark.sql(sql_statement) # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} df1=df_data.groupBy("datetime").count().filter("count > 1") df1.show() # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np sql_statement = ''' SELECT unix_timestamp(to_timestamp("2020-01-01 00:00:00.000000")) + 2.6 ''' df_data = spark.sql(sql_statement) display(df_data) # SELECT from_unixtime(unix_timestamp(to_timestamp("2020-01-01 00:00:00.000000")) + 2.6, "yyyy-MM-dd HH:mm:ss.SSSSSSS") # # + [markdown] nteract={"transient": {"deleting": false}} # ### Observation # # second 단위 이하로는 표현이 안되는 것 같음. 우선은 초 단위로 중복이 없도록 요약을 해야 하겠음. 
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np sql_statement = ''' SELECT 'rw3' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw3 WHERE from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) BETWEEN to_timestamp('2020-02-04 04:49:06') AND to_timestamp('2020-02-04 04:49:07') ''' df_data = spark.sql(sql_statement) display(df_data) # + [markdown] nteract={"transient": {"deleting": false}} # ### Remove Duplicate # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np sql_statement = ''' SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw3' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw3 /* WHERE from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) BETWEEN to_timestamp('2020-02-04 04:49:06') AND to_timestamp('2020-02-04 04:49:07') */ ) a GROUP BY unit, datetime ORDER BY 1, 2 ''' df_data = spark.sql(sql_statement) display(df_data) # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} df1=df_data.groupBy("unit", "datetime").count().filter("count > 1") df1.show() # + [markdown] nteract={"transient": {"deleting": false}} # ## Save transformed data # # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np sql_statement = ''' SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw3' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw3 ) a GROUP BY unit, datetime UNION ALL SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw4' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw4 ) a GROUP BY unit, datetime UNION ALL SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw5' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw5 ) a GROUP BY unit, datetime UNION ALL SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw6' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw6 ) a GROUP BY unit, datetime ORDER BY 1, 2 ''' df_data = spark.sql(sql_statement) df_data.write.format("parquet").mode("overwrite").save(f"abfss://{file_system_name}@{data_lake_account_name}.dfs.core.windows.net/battery/transformed/dataset_all") # + jupyter={"outputs_hidden": false, "source_hidden": 
false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np sql_statement = ''' SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw3' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw3 ) a GROUP BY unit, datetime ORDER BY 1, 2 ''' df_data = spark.sql(sql_statement).write.format("parquet").mode("overwrite").save(f"abfss://{file_system_name}@{data_lake_account_name}.dfs.core.windows.net/battery/transformed/dataset_rw3") sql_statement = ''' SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw4' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw4 ) a GROUP BY unit, datetime ORDER BY 1, 2 ''' df_data = spark.sql(sql_statement).write.format("parquet").mode("overwrite").save(f"abfss://{file_system_name}@{data_lake_account_name}.dfs.core.windows.net/battery/transformed/dataset_rw4") sql_statement = ''' SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw5' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw5 ) a GROUP BY unit, datetime ORDER BY 1, 2 ''' df_data = spark.sql(sql_statement).write.format("parquet").mode("overwrite").save(f"abfss://{file_system_name}@{data_lake_account_name}.dfs.core.windows.net/battery/transformed/dataset_rw5") sql_statement = ''' SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw6' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw6 ) a GROUP BY unit, datetime ORDER BY 1, 2 ''' df_data = spark.sql(sql_statement).write.format("parquet").mode("overwrite").save(f"abfss://{file_system_name}@{data_lake_account_name}.dfs.core.windows.net/battery/transformed/dataset_rw6") # + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} import pandas as pd import numpy as np sql_statement = ''' SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw3' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw3 ) a GROUP BY unit, datetime ORDER BY 1, 2 ''' df_data = spark.sql(sql_statement) display(df_data) # - # ## 마지막 datetime 확인 import pandas as pd import numpy as np sql_statement = ''' SELECT unit, MIN(datetime), MAX(datetime) FROM ( SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw3' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw3 ) a GROUP BY unit, datetime ) a GROUP BY 
unit ORDER BY 1 ''' df_data = spark.sql(sql_statement) display(df_data) import pandas as pd import numpy as np sql_statement = ''' SELECT unit, datetime, MIN(cycle) min_cycle, AVG(voltage) avg_voltage, AVG(current) avg_current, AVG(temperature) avg_temperature, AVG(capacity) avg_capacity FROM ( SELECT 'rw3' unit, _c0 cycle, time, voltage, current, temperature, capacity, from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) datetime FROM battery_orig_rw3 WHERE from_unixtime(unix_timestamp(to_date("2020-01-01")) + time) between '2020-05-31 02:00:00' and '2020-05-31 02:05:00' ) a GROUP BY unit, datetime ''' df_data = spark.sql(sql_statement) df_data.write.mode("overwrite").saveAsTable("sample_for_test") display(df_data) # 처음에 test 용 데이터를 별도로 떼두는 것이 맞음. 지금은 임시로 마지막 구간에서 일부 데이터 가져와서 시간을 이후로 조정해서 테스트에 활용 # + [markdown] nteract={"transient": {"deleting": false}} # ## Tips # # - check missing values (10 sec based) # # - play with rolloing windows of 10 sec # # - add a calculated column, difference_capaciy # - try to predict the above value # - accumulate prediction # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.10 64-bit (''base'': conda)' # name: python3 # --- # + [markdown] id="3wF5wszaj97Y" # # Conv2D 모델링 예제 # # + [markdown] id="hiH7AC-NTniF" # 먼저 프로그램에 텐서플로 라이브러리를 임포트합니다: # - # !pip install python-mnist # + id="0trJmd6DjqBZ" import tensorflow as tf gpus = tf.config.experimental.list_physical_devices('GPU') if gpus: try: # Currently, memory growth needs to be the same across GPUs for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) logical_gpus = tf.config.experimental.list_logical_devices('GPU') print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs") except RuntimeError as e: # Memory growth must be set before GPUs have been initialized print(e) # + [markdown] id="7NAbSZiaoJ4z" # [MNIST 데이터셋](http://yann.lecun.com/exdb/mnist/)을 로드하여 준비합니다. 샘플 값을 정수에서 부동소수로 변환합니다: # + id="7FP5258xjs-v" mnist = tf.keras.datasets.mnist # 손글씨 데이터셋 # image size 28 x 28 (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 # 정규화 해주게 됩니다. # - print(x_train.shape) print(x_test) # + id="mzBTJlBQ29g0" # conv2D layer 사용시, 원본 데이터에 가중치를 연산하기 위해서 원본데이터 형식이 W H C 가 되어야한다. print(x_test) x_test = x_test.reshape([-1, 28, 28, 1])#TODO, #TODO, #TODO]) # W H C -> Conv2D 용 shape x_train = x_train.reshape([-1, 28, 28, 1]) #TODO, #TODO, #TODO]) x_train.shape # + id="T13Y3rXo2BcH" batch_size = 32 train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(10000).batch(batch_size)#(#TODO, #TODO)).shuffle(10000).batch(batch_size) test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).shuffle(10000).batch(batch_size)#(#TODO, #TODO)).batch(batch_size) print(train_ds) #ImageDataGenerator와 유사함. # + [markdown] id="BPZ68wASog_I" # 층을 차례대로 쌓아 `tf.keras.Sequential` 모델을 만듭니다. 
훈련에 사용할 옵티마이저(optimizer)와 손실 함수를 선택합니다: # + id="h3IKyzTCDNGo" model = tf.keras.models.Sequential() # model.add(tf.keras.layers.Conv2D(16, 3, activation='relu')) # model.add(tf.keras.layers.Conv2D(32, 3, activation='relu')) # model.add(tf.keras.layers.Flatten()) # model.add(tf.keras.layers.Dense(16, activation='relu')) # model.add(tf.keras.layers.Dense(10, activation='softmax')) model.add(tf.keras.layers.Conv2D(32, (3,3), strides=1, activation=None, padding='same', input_shape=(28, 28, 1)))#tf.keras.layers.Input(shape=(28, 28, 1)))#TODO) model.add(tf.keras.layers.BatchNormalization()) model.add(tf.keras.layers.Activation(activation='relu')) model.add(tf.keras.layers.MaxPool2D((2,2))) model.add(tf.keras.layers.Conv2D(64, (3,3), strides=1, activation=None, padding='same'))#tf.keras.layers.Input(shape=(28, 28, 1)))#TODO) model.add(tf.keras.layers.BatchNormalization()) model.add(tf.keras.layers.Activation(activation='relu')) model.add(tf.keras.layers.MaxPool2D((2,2))) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(128, activation='relu')) model.add(tf.keras.layers.Dense(256, activation='relu')) model.add(tf.keras.layers.Dense(10, activation='softmax')) # x=tf.keras.layers.Flatten()(x) # x= tf.keras.layers.Dense(256, activation=None)(x) # tf.keras.layers.Conv2D(...) # import 에 layers 추가시 # layers.Conv2D(...) # input_Layer = tf.keras.layers.Input(shape=(128,128,3)) # x=tf.keras.layers.Conv2D(32,(3,3),strides=1, activation=None)(input_Layer) # x= tf.keras.layers.BatchNormalization()(x) # x= tf.keras.layers.Activation(activation='relu')(x) # x=tf.keras.layers.MaxPool2D((2,2))(x) # x=tf.keras.layers.Conv2D(64,(3,3),strides=1,activation=None)(x) # x= tf.keras.layers.BatchNormalization()(x) # x= tf.keras.layers.Activation(activation='relu')(x) # x=tf.keras.layers.MaxPool2D((2,2))(x) # x=tf.keras.layers.Conv2D(128,(3,3),strides=1,activation=None)(x) # x= tf.keras.layers.BatchNormalization()(x) # x= tf.keras.layers.Activation(activation='relu')(x) # x=tf.keras.layers.Conv2D(64,(3,3),strides=1,activation=None)(x) # x= tf.keras.layers.BatchNormalization()(x) # x= tf.keras.layers.Activation(activation='relu')(x) # x=tf.keras.layers.MaxPool2D((2,2))(x) # x=tf.keras.layers.Flatten()(x) # x= tf.keras.layers.Dense(256, activation=None)(x) # x= tf.keras.layers.BatchNormalization()(x) # x= tf.keras.layers.Activation(activation='relu')(x) # x= tf.keras.layers.Dropout(rate=0.4)(x) # x= tf.keras.layers.Dense(512, activation=None)(x) # x= tf.keras.layers.BatchNormalization()(x) # x= tf.keras.layers.Activation(activation='relu')(x) # Out_Layer= tf.keras.layers.Dense(1, activation='sigmoid')(x) # model = tf.keras.Model(inputs=[input_Layer], outputs=[Out_Layer]) model.summary() # + id="8E13ezfV206A" # from_logits=False는 loss function을 output layer에 추가했을 경우, True면 output layer에 추가하지 않았을 경우 -> softmax를 추가한다. # SparseCategoricalCrossentropy는 label이 one hot vector가 아닌 경우.. label이 one hot이면 CategoricalCrossentropy model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), # Learning rate loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy']) # + [markdown] id="ix4mEL65on-w" # 모델을 훈련하고 평가합니다: # + id="9SlM2SXHdBZU" # using `tf.data.Dataset` #train_data, train_labels # validation_split=0.1 history = model.fit(train_ds,# y_train, steps_per_epoch=len(x_train)//batch_size,#, epochs=10, #validation_split=0.2) validation_data=test_ds, validation_steps=len(x_test) // batch_size, # 1에폭에 대한 계산이 상대적입니다. 
) # - for t, l in train_ds.take(1): print(t.shape) print(type(t.shape)) print(l.shape) # + id="F7dTAzgHDUh7" model.evaluate(x_test, y_test)[1]#) # + id="9o-Peshs3Op6" #테스트 셋의 오차 import matplotlib.pyplot as plt import numpy as np import os y_vloss = history.history['val_loss'] # 학습셋의 오차 y_loss = history.history['loss'] # 그래프로 표현 x_len = np.arange(len(y_loss)) plt.plot(x_len, y_vloss, marker='.', c="red", label='Testset_loss') plt.plot(x_len, y_loss, marker='.', c="blue", label='Trainset_loss') # 그래프에 그리드를 주고 레이블을 표시 plt.legend(loc='upper right') plt.grid() plt.xlabel('epoch') plt.ylabel('loss') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import json import pandas as pd import os import glob import re from tqdm import tqdm_notebook as tqdm from PIL import Image import shutil # + DS_FOLDER = '/APL/Datasets/UAVBenchmark/' GT_FOLDER = os.path.join(DS_FOLDER, 'UAV-benchmark-MOTD_v1.0','GT') IMG_FOLDER = os.path.join(DS_FOLDER, 'UAV-benchmark-M') ATTR_FOLDER_TRAIN = os.path.join(DS_FOLDER, 'M_attr', 'train') ATTR_FOLDER_TEST = os.path.join(DS_FOLDER, 'M_attr', 'test') OUTPUT_DIR = '/APL/Datasets/UAVBenchmark-COCO-format/' # - ground_truth_files = glob.glob(os.path.join(GT_FOLDER, '*_gt_whole.txt')) train_attribute_files = glob.glob(os.path.join(ATTR_FOLDER_TRAIN, '*.txt')) test_attribute_files = glob.glob(os.path.join(ATTR_FOLDER_TEST, '*.txt')) id_getter = re.compile('(?:train|test)\/M([0-9]*)_attr') train_ids = [id_getter.search(i)[1] for i in train_attribute_files] test_ids = [id_getter.search(i)[1] for i in test_attribute_files] train_ids test_ids #SET = 'train' SET = 'test' new_data_format = {'info': {'description': 'UAVDT Benchmark', 'url': 'https://sites.google.com/site/daviddo0323/projects/uavdt', 'version': '1.0', 'year': 2019, 'contributor': ' et. 
al., this format: @McGill University', 'date_created': '2019/07/07'}, 'images': [], 'annotations': [], 'categories': []} # + # Get all image data if SET == 'train': id_list = train_ids elif SET == 'test': id_list = test_ids image_data = [] regex = re.compile('img([0-9]*).') # Prepare folder to copy images to IMG_DEST_FOLDER = os.path.join(OUTPUT_DIR, SET) try: os.mkdir(IMG_DEST_FOLDER) except FileExistsError: shutil.rmtree(IMG_DEST_FOLDER) os.mkdir(IMG_DEST_FOLDER) # Create list of all image files for ID in tqdm(id_list): im_files = glob.glob(os.path.join(IMG_FOLDER, 'M' + ID, '*.jpg')) for im_file in tqdm(im_files, leave=False): im = Image.open(im_file) im_size = im.size im.close() sequence_num = regex.search(os.path.basename(im_file))[1] new_id = ID + sequence_num new_file_name = 'img' + new_id + '.jpg' shutil.copy2(im_file, os.path.join(IMG_DEST_FOLDER, new_file_name)) new_data_format['images'].append( {'file_name': new_file_name, 'height': im_size[1], 'width': im_size[0], 'id': int(new_id)}) # - # Fill out categories new_data_format['categories'] = [{'id': 1, 'name': 'car', 'supercategory': None}, {'id': 2, 'name': 'truck', 'supercategory': None}, {'id': 3, 'name': 'bus', 'supercategory': None}] # + id_getter_2 = re.compile('GT\/M([0-9]*)_gt') annot_id = 0 # Get annotations for file in tqdm(ground_truth_files): set_id = id_getter_2.search(file)[1] data = pd.read_csv(file, header=None, names=['frame_index','target_id','bbox_left','bbox_top','bbox_width','bbox_height','out-of-view','occlusion','object_category']) for row in tqdm(data.itertuples(), leave=False): findex = '{:06d}'.format(row.frame_index) new_data_format['annotations'].append( {'id': annot_id, 'image_id': int(set_id + findex), 'category_id': row.object_category, 'bbox': [row.bbox_left, row.bbox_top, row.bbox_width, row.bbox_height], 'segmentation': [[row.bbox_left, row.bbox_top, row.bbox_left, row.bbox_top + row.bbox_height, row.bbox_left + row.bbox_width, row.bbox_top + row.bbox_height, row.bbox_left + row.bbox_width, row.bbox_top]], 'area': row.bbox_width * row.bbox_height, 'iscrowd': 0}) annot_id += 1 # - new_data_format['annotations'] new_data_format['images'] OUTPUT_JSON = os.path.join(OUTPUT_DIR, SET + '.json') #Output to file with open(OUTPUT_JSON, 'w') as f: json.dump(new_data_format,f) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !wget https://github.com/rickiepark/python-machine-learning-book-2nd-edition/raw/master/code/ch09/movie_data.csv.gz # + import gzip with gzip.open('movie_data.csv.gz') as f_in, open('movie_data.csv', 'wb') as f_out: f_out.writelines(f_in) # - import nltk nltk.download('stopwords') # + import numpy as np import re from nltk.corpus import stopwords from nltk.stem import PorterStemmer stop = stopwords.words('english') porter = PorterStemmer() def tokenizer(text): text = re.sub('<[^>]*>', '', text) emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower()) text = re.sub('[\W]+', ' ', text.lower()) + ' '.join(emoticons).replace('-', '') tokenized = [w for w in text.split() if w not in stop] return tokenized def stream_docs(path): with open(path, 'r', encoding='utf-8') as csv: next(csv) # skip header for line in csv: text, label = line[:-3], int(line[-2]) yield text, label # - next(stream_docs(path='movie_data.csv')) def get_minibatch(doc_stream, size): docs, y = [], [] try: for _ in range(size): text, label = 
next(doc_stream) docs.append(text) y.append(label) except StopIteration: return None, None return docs, y # + from sklearn.feature_extraction.text import HashingVectorizer from sklearn.linear_model import SGDClassifier vect = HashingVectorizer(decode_error='ignore', n_features=2**21, preprocessor=None, tokenizer=tokenizer) clf = SGDClassifier(loss='log', random_state=1, max_iter=1) doc_stream = stream_docs(path='movie_data.csv') # + import pyprind pbar = pyprind.ProgBar(45) classes = np.array([0, 1]) for _ in range(45): X_train, y_train = get_minibatch(doc_stream, size=1000) if not X_train: break X_train = vect.transform(X_train) clf.partial_fit(X_train, y_train, classes=classes) pbar.update() # + X_test, y_test = get_minibatch(doc_stream, size=5000) X_test = vect.transform(X_test) print('정확도: %.3f' % clf.score(X_test, y_test)) # - clf = clf.partial_fit(X_test, y_test) # + # 파일을 저장할 movieclassifier 디렉토리를 만들고, 안에 서브디렉토리를 만들어 직렬화된 파이썬 객체를 저장 ## pickle.dump를 통해 훈련된 logistic model뿐만 아니라 라이브러리(NLTK)까지 저장 가능 import pickle import os dest = os.path.join('movieclassifier', 'pkl_objects') if not os.path.exists(dest): os.makedirs(dest) pickle.dump(stop, open(os.path.join(dest, 'stopwords.pkl'), 'wb'), protocol=4) pickle.dump(clf, open(os.path.join(dest, 'classifier.pkl'), 'wb'), protocol=4) # + # 올바르게 저장 되었나 테스트 import os os.chdir('movieclassifier') import pickle import re import os import import_ipynb from vectorizer import vect clf = pickle.load(open(os.path.join('pkl_objects', 'classifier.pkl'), 'rb')) import numpy as np # 분류기는 0,1을 반환하므로 텍스트로 매핑하기 위한 딕셔너리 정의 label = {0:'양성', 1:'음성'} example = ['I love this movie'] # 저장해둔 vectorizer.py 이용해서 샘플 문서를 단어벡터로 변환 X = vect.transform(example) # pickel화해둔 logistic model의 predict 사용해서 레이블 예측 print('예측: %s\n확률: %.2f%%' %\ (label[clf.predict(X)[0]], np.max(clf.predict_proba(X))*100)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #
#
# ILI286 - Computación Científica II
#
# Valores y Vectores Propios
#
# [[S]cientific [C]omputing [T]eam](#acknowledgements)
#
# Version: 1.11
#
    # # Tabla de Contenidos # * [Introducción](#intro) # * [](#teo) # * [Algoritmos e Implementaciones](#alg) # * [Power Iteration](#pi) # * [Inverse Power Iteration](#invpi) # * [Rayleigh Quotient Iteration](#rq) # * [SciPy Eigenvalue](#sp) # * [Problema de Aplicación](#problema) # * [Acknowledgements](#acknowledgements) #
    # # Introducción # # Determinar los valores y vectores propios de una matriz, aporta gran información acerca de las características y propiedades de esta, como también posee gran cantidad de aplicaciones prácticas como: Análisis de convergencia de sistemas dinámicos, PCA (Principal Component Analysis), análisis espectral, Eigenfaces, etc. # # Sin embargo la determinación de los valores y vectores propios no es un problema simple. Como ya debe haber estudiado en cursos anteriores, existe un método directo basado en cálculo de las raíces del polinomio característico $p(x)$. Pero este problema resulta ser _mal condicionado_, esto es, a pequeñas variaciones en la matriz $A$ original, existe una gran variación en los resultados de los valores y vectores propios obtenidos (Ver polinomio de Wilkinson, texto guia). # # En este notebook estudiaremos un método iterativo conocido como _Power Iteration_ (y sus extensiones), que de modo similar a una iteración de punto fijo, permite obtener numéricamente los eigen(valores/vectores). #
    # # # # La motivación tras PI (Power Iteration) es que la multiplicación por matrices, tiende a "dirigir" a los vectores hacia el vector propio dominante (aquel con valor propio de mayor magnitud). # # El algoritmo en cuestión es como sigue: # # ```python # x = 'Initial guess' # for i in range n_iter: # u = x / ||x|| #normalization step # x = dot(A,u) #power iteration step # lamb = dot(u, dot(A, u)) #Rayleigh quotient # return x / ||x|| # ``` # # en donde se agrega una paso de _normalización_, para evitar que la magnitud del vector aumente sin límite, y el valor propio asociado se obtiene por medio del cociente de Rayleigh: # # $$ \lambda = \frac{x^T A x}{x^T x} $$ # # Para entender porque se de esta convergencia, considere una matriz $A \in \mathbb{R}^{m \times m}$ con valores propios reales $\lambda_1, \lambda_2, \ldots, \lambda_m$ tales que $|\lambda_1| > |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_m|$, tales que los vectores propios $\{v_1, v_2, \ldots, v_m \}$ conforman una base de $\mathbb{R}^m$. Sea entonces $x_0$ el _initial guess_, este puede ser expresado como una combinación lineal de los vectores propios $v_k$: # # \begin{align} # A x_0 &= c_1 A v_1 + \cdots + c_m A v_m = c_1 \lambda_1 v_1 + \cdots + c_m \lambda_m v_m \\ # A^2 x_0 & = c_1 \lambda_1 A v_1 + \cdots + c_m \lambda_m A v_m = c_1 \lambda_1^2 v_1 + \cdots + c_m \lambda_m^2 v_m \\ # \vdots &= \vdots \\ # A^k x_0 &= c_1 \lambda_1^k v_1 + \cdots + c_m \lambda_m^k v_m # \end{align} # # Factorizando $\lambda_1^k$ del último resultado se obtiene: # # $$ \frac{A^k x_0}{\lambda_1^k} = c_1 v_1 + c_2 \left(\frac{\lambda_2}{\lambda_1}\right)^k v_2 + \cdots + c_m \left(\frac{\lambda_m}{\lambda_1}\right)^k v_m$$ # # Dado que $|\lambda_1|>|\lambda_i| \ \ \forall i \neq 1$, a medida que $k \rightarrow \infty$ todos los términos excepto el primero tienden a cero, con razón de convergencia $S \leq |\lambda_2/\lambda_1|$. Obteniendo como resultado un vector que es múltiplo del vector propio dominante. # # **Nota**: Para más detalles revisar: _Numerical Analysis, Tymothy Sauer, Chapter 12: Eigenvalues and Singular Values_ #
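# The convergence rate claimed above, S <= |lambda_2/lambda_1|, can be checked numerically. The short sketch below (an illustration added here, not part of the original material) builds a symmetric matrix with a known spectrum and compares the power-iteration error against (lambda_2/lambda_1)^k.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthogonal basis
M = Q @ np.diag([5.0, 2.0, 1.0, 0.5]) @ Q.T        # symmetric matrix with known eigenvalues
v1 = Q[:, 0]                                       # dominant eigenvector

xk = rng.standard_normal(4)
for k in range(1, 13):
    xk = M @ xk
    xk = xk / np.linalg.norm(xk)
    err = min(np.linalg.norm(xk - v1), np.linalg.norm(xk + v1))  # sign-invariant error
    print("k=%2d  error=%.2e  (lambda2/lambda1)^k=%.2e" % (k, err, (2.0 / 5.0) ** k))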
    # # Algoritmos e Implementaciones # ### Librerías utilizadas durante la clase import numpy as np from scipy import linalg from matplotlib import pyplot as plt # %matplotlib inline # ### Matriz y vector de prueba A = np.array([[1, 0.5],[0.5, 1]]) x = np.array([1.,0.]) A = np.array([[1., 0.5,-0.1],[0.5, 1.,10.0],[2.,3.,5.]]) x = np.array([1.,0.,0.]) print("A =\n",A) print("x =",x) #
    # ## Power Iteration # A continuación se entrega el código del algoritmo clásico de Power Iteration. Pruebe cambiando las matrices y los parámetros del algoritmo. def power_iteration(A, x, k, verbose=False): """ Program 12.1 Power iteration Computes dominant eigenvector of square matrix Input: matrix A, initial (nonzero) vector x, number of steps k Output: dominant eigenvalue lam, eigenvector u """ if verbose: print("Power Iteration Method\n%s"%('='*80)) for j in range(k): u = x/np.linalg.norm(x) x = np.dot(A, u) lam = np.dot(u, x) #not really necessary to compute it at each iteration if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,lam,str(u.T))) u = x/np.linalg.norm(x) if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,lam,str(u.T))) return (lam, u) # Testing algorithm lam, u = power_iteration(A, x, 20, verbose=True) print("lambda = {0}".format(lam)) print("u (dominant eigenvector) = {0}".format(u)) #
    # ## Inverse Power Iteration # # Una de las complicaciones que tiene el algoritmo anterior, es que sólo permite encontrar el valor y vectores propios dominantes. Luego ¿Cómo encontramos el resto?. Para responder esta pregunta, es necesario examinar dos propiedades importantes: # # 1. Los valores propios de la matriz inversa $A^{-1}$ son los recíprocos de los valores propios de $A$, es decir: $\lambda_1^{-1}, \lambda_2^{-1}, \ldots , \lambda_m^{-1}$. Los vectores propios de se mantienen inalterados. # 2. Los valores propios de la matriz con _shift_ $A - sI$ son: $\lambda_1-s, \lambda_2-s, \ldots, \lambda_m-s$. Del mismo modo, los vectores propios se mantienen inalterados. # # **Tarea**: Pruebe estas propiedades! # # La idea es entonces realizar un shift $\widetilde{s}$ cercano a algún valor propio $s_k$, y computar PI sobre $(A - \widetilde{s}I)^{-1}$. Luego: # # $$ |\lambda_k - \widetilde{s}| < |\lambda_i - \widetilde{s}| \leftrightarrow \bigg| \frac{1}{\lambda_k - \widetilde{s}} \bigg| > \bigg| \frac{1}{\lambda_i - \widetilde{s}} \bigg| \ \ \forall i \neq k \ $$ # # entonces $\frac{1}{\lambda_k - \widetilde{s}}$ corresponderá con el vector propio dominante de $(A - \widetilde{s}I)^{-1}$. Notar que por lo enunciado en las propiedades, los vectores propios se mantienen sin alteraciones. # # La idea anterior se ve reflejada en el algoritmo implementado a continuación: def inverse_power_iteration(A, x, s, k, verbose=False): """ Program 12.2 Inverse Power iteration Computes eigenvector of square matrix nearest to input s Input: matrix A, initial (nonzero) vector x, shift s, number of steps k Output: dominant eigenvalue lam, eigenvector of inv(A-sI) """ if verbose: print("Inverse Power Iteration Method\n%s"%('='*80)) As = A - s*np.eye(*A.shape) for j in range(k): u = x/np.linalg.norm(x) x = np.linalg.solve(As, u) lam = np.dot(u.T, x) if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,1./lam+s,str(u.T))) u = x/np.linalg.norm(x) if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,1./lam+s,str(u.T))) return (1./lam+s, u) # Testing algoritm lam, u = inverse_power_iteration(A, x, s=1./4, k=10, verbose=True) print("lambda = {0}".format(lam)) print("v = {0}".format(u)) #
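# The two properties stated at the start of this section ("Tarea: Pruebe estas propiedades!") can be sanity-checked numerically. A quick sketch on a fresh symmetric test matrix (so the spectrum is real and easy to compare), offered as a check rather than a proof:
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
S = B + B.T                                           # symmetric => real eigenvalues

lam = np.sort(np.linalg.eigvalsh(S))
lam_inv = np.sort(np.linalg.eigvalsh(np.linalg.inv(S)))
print(np.allclose(np.sort(1.0 / lam), lam_inv))       # property 1: eigenvalues of inv(S) are the reciprocals

s = 0.7
lam_shift = np.sort(np.linalg.eigvalsh(S - s * np.eye(4)))
print(np.allclose(lam - s, lam_shift))                # property 2: eigenvalues of S - s*I are shifted by s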
    # ## Rayleigh Quotient Iteration # # Como se analizó anteriormente, PI e _Inverse_ PI tienen convergencia lineal con razón de convergencia $S \approx \frac{\lambda_2}{\lambda_1}$. Además sabemos que _Inverse_ PI converge hacia el valor propio más cercano al shift, y que mientras más cercano sea el shift a tal valor, más rápido se logra la convergencia. # # Entonces la idea de RQI es la siguiente: Si en cada iteración se tiene una aproximación del valor propio que andamos buscando, podemos ocupar esta aproximación como shift $s$, y dado que el shift será más cercano al valor propio, se acelerará la convergencia. # # Tal valor aproximado es obtenido con el cociente de Rayleigh, y entonces el shift es actualizado con este cociente en cada iteración. Como resultado se produce el siguiente _trade-off_: # # 1. La convergencia pasa a ser cuadrática (de modo general) y cúbica para matrices simétricas. # 2. Sin embargo, se paga el costo de tener que resolver un sistema de ecuaciones diferentes en cada iteración. # # # A continuación se presenta una implementación del RQI: def rqi(A, x, k, verbose=False): """ Program 12.3 Rayleigh Quotient Iteration Input: matrix A, initial (nonzero) vector x, number of steps k Output: eigenvalue lam, eigenvector of inv(A-sI) """ if verbose: print("Rayleigh Quotient Iteration\n%s"%('='*80)) for j in range(k): u = x/np.linalg.norm(x) lam = np.dot(u.T, np.dot(A, u)) try: x = np.linalg.solve(A -lam*np.eye(*A.shape), u) except numpy.linalg.LinAlgError: break if verbose: print("k=%d, lambda=%+.3f, u=%s"%(j,lam,str(u.T))) u = x/np.linalg.norm(x) lam = float(np.dot(u.T, np.dot(A, u))) if verbose: print("k=%d, lambda=%+.3f, u=%s\n"%(j+1,lam,str(u.T))) return (lam, u) # **Preguntas:** # 1. ¿Porque es necesario el `try` y `except` en las líneas 11 y 13? ¿Que significa que el sistema no pueda ser resuelto? # 2. Como puede observar RQI no recibe shift como parámetro. ¿A cuál valor/vector propio convergerá? ¿Como forzar/guiar a que tienda hacia un valor/vector propio distinto? # Testing algorithm lam, v = rqi(A, x, k=2) print("lambda = {0}".format(lam)) print("v = {0}".format(v)) #
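# On question 1: np.linalg.solve fails exactly when the current Rayleigh quotient equals an eigenvalue, which makes A - lam*I singular (note that the except clause refers to numpy.linalg.LinAlgError while the module is imported as np, so the names would need to agree for the guard to trigger). On question 2, one possible approach, shown as a hedged sketch rather than the official answer: warm-start RQI with a few inverse power iteration steps near a shift, so that the quotient already points at the desired eigenvalue.
s_guess = 0.25                                        # rough guess for the target eigenvalue (assumed)
_, x_warm = inverse_power_iteration(A, x, s=s_guess, k=3)
lam_guided, v_guided = rqi(A, x_warm, k=3)
print("lambda = {0}".format(lam_guided))
print("v = {0}".format(v_guided))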
    # ## $\texttt{SciPy}$ Eigenvalue # La librería scipy tiene implementados algoritmos que permite calcular los valores y vectores propios. Las opciones posibles son: # # - En la librería scipy.linalg: eigvals/eigvalsh/eigvals_banded, eig/eigh/eig_banded, # # - En la librería scipy.sparse.linalg: eigen, eigs, eigsh. # # En general siempre conviene utilizar las funciones desde scipy y no de numpy. La librería numpy hace un excelente trabajo al permitir el uso de vectores de tipo numérico, pero contiene solo algunos algoritmos numéricos y no necesariamente los más rápidos. # # A continuación se muestra como utilizar algunas de estas funciones. # Full matrices from scipy import linalg as LA N = 3 Aux = np.random.rand(N,N) A = Aux + Aux.T # symmetric, so we'll deal with real eigs. print(LA.eigvals(A)) # Only the eigenvalues, A not necessarily symmetric print("*"*80) print(LA.eigvalsh(A)) # Only the eigenvalues, A symmetric print("*"*80) print(LA.eig(A)) # All the eigenvalues and eigenvectors, A not necessarily symmetric print("*"*80) print(LA.eigh(A)) # All the eigenvalues and eigenvectors, A symmetric (faster) print("*"*80) lambdas, V = LA.eigh(A) # All the eigenvalues and eigenvectors, A symmetric (faster) l1 = lambdas[0] v1 = V[:,0] print(l1) print(v1) print(np.dot(A, v1)) print(l1*v1) #
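# The calls above compute the full spectrum of a dense matrix. For large matrices the sparse interface mentioned in the text is usually preferable; a small sketch (assuming only that scipy.sparse.linalg is available) asks eigsh for just the three largest eigenpairs of a symmetric matrix:
import numpy as np
from scipy.sparse.linalg import eigsh

N = 200
Aux = np.random.rand(N, N)
A_big = Aux + Aux.T                                  # symmetric test matrix
vals, vecs = eigsh(A_big, k=3, which='LA')           # three largest (algebraic) eigenvalues
print(vals)
print(np.allclose(A_big @ vecs, vecs * vals))        # check A v = lambda v column by column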
    # ## Problema de Aplicación # # Las matrices simétricas tiene una propiedad muy interesante: # # * Los vectores propios de las matrices simétricas son ortogonales entre sí. # # En base a lo anterior se propone el siguiente algoritmo para encontrar los primeros $k$ valores/vectores propios: # # ```python # def kEigenFinder(A, k, p): # m = A.shape[0] # lamb = 0. # v = np.zeros((m,1)) # Lamb = [] # V = [] # for i in range(k): # A -= lamb*np.dot(v,v.T) # lamb,v = power_iteration(A, p) # Lamb.append(lamb) # V.append(v) # return Lamb,V # ``` # # 1. Justifique la validez de tal procedimiento. # 2. Construya una matriz simétrica de $100 \times 100$ y ejecute el `kEigenFinder` sobre tal matriz. Una forma fácil de construir una matriz simétrica es la [matriz de covarianza](https://en.wikipedia.org/wiki/Covariance_matrix): # $$\Sigma_X = \frac{1}{n-1}X^T X$$ # donde $X \in \mathbb{R}^{m \times n}$, con $m$ _samples_ y $n$ _features_. # # 3. Concluya acerca de la utilidad del procedimiento propuesto. #
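# A runnable sketch of the kEigenFinder idea above, under the stated assumption that the matrix is symmetric (so its eigenvectors are orthogonal and deflating with lam * v v^T removes the eigenpair just found). It reuses the power_iteration defined earlier with a fixed step count, since the two-argument power_iteration(A, p) of the pseudocode is not defined in this notebook.
import numpy as np

def k_eigen_finder(A, k, x0, steps=200):
    A = A.astype(float).copy()
    lambdas, vectors = [], []
    for _ in range(k):
        lam, v = power_iteration(A, x0, steps)
        lambdas.append(lam)
        vectors.append(v)
        A = A - lam * np.outer(v, v)                 # deflate the eigenpair just found
    return lambdas, vectors

# 100x100 symmetric test matrix built as a covariance-like matrix X^T X / (n-1)
X = np.random.rand(250, 100)
Sigma = (X.T @ X) / (X.shape[0] - 1)

lams, _ = k_eigen_finder(Sigma, 3, np.random.rand(100))
print(lams)
print(np.sort(np.linalg.eigvalsh(Sigma))[::-1][:3])  # reference: three largest eigenvalues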
    # # Acknowledgements # * _Material creado por profesor _ (``) _y ayudantes: y . DI UTFSM. Abril 2016._ # # *** # ### DISCLAIMER ### # # El presente notebook ha sido creado para el curso **ILI286 - Computación Científica 2**, del [Departamento de Informática](http://www.inf.utfsm.cl/), [Universidad Técnica Federico Santa María](http://www.utfsm.cl/). # # El material ha sido creado por <> y <>, y es distribuido sin restricciones. En caso de encontrar un error, por favor no dude en contactarnos. # # [Update 2015] Se ha actualizado los notebooks a Python3 e includio el "magic" "%matplotlib inline" antes de cargar matplotlib para que los gráficos se generen en el notebook. # # [Update 2016] (Martín) Modificaciones mayores al formato original. Agregado contexto: Introducción, marco teórico, explicaciones y tareas. Modificaciones menores en los algoritmos. Agregada la sección de Problema de Aplicación. # # [Update 2018] (C.Torres) Using np.linalg. # *** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Indonesian ULMFiT from scratch # + # %reload_ext autoreload # %autoreload 2 # %matplotlib inline from fastai import * from fastai.text import * # - # bs=48 bs=24 # bs=128 torch.cuda.set_device(0) data_path = Config.data_path() print(data_path) # This will create a `idwiki` folder, containing a `idwiki` text file with the wikipedia contents. (For other languages, replace `id` with the appropriate code from the [list of wikipedias](https://meta.wikimedia.org/wiki/List_of_Wikipedias).) lang = 'id' # lang = 'zh' name = f'{lang}wiki' path = data_path/name path.mkdir(exist_ok=True, parents=True) lm_fns = [f'{lang}_wt', f'{lang}_wt_vocab'] print(path) # ## Indonesian wikipedia model # ### Download data from fast_nlputils import split_wiki,get_wiki get_wiki(path,lang) path.ls() # !head -n4 {path}/{name} # This function splits the single wikipedia file into a separate file per article. This is often easier to work with. 
dest = split_wiki(path,lang) dest.ls()[:5] # + # Use this to convert Chinese traditional to simplified characters # # ls *.txt | parallel -I% opencc -i % -o ../zhsdocs/% -c t2s.json # - # ### Create pretrained model # + #data = (TextList.from_folder(dest) # .split_by_rand_pct(0.1, seed=42) # .label_for_lm() # .databunch(bs=bs, num_workers=1)) # + #data.save(f'{lang}_databunch') # + #len(data.vocab.itos),len(data.train_ds) # - path_databunch = path/'docs' print(path_databunch) bs = 16 data = load_data(dest, f'{lang}_databunch', bs=bs) torch.cuda.empty_cache() learn = language_model_learner(data, AWD_LSTM, drop_mult=0.5, pretrained=False).to_fp16() lr = 1e-2 lr *= bs/48 # Scale learning rate by batch size learn.unfreeze() learn.fit_one_cycle(2, lr, moms=(0.8,0.7)) # Save the pretrained model and vocab: mdl_path = path/'models' mdl_path.mkdir(exist_ok=True) learn.to_fp32().save(mdl_path/lm_fns[0], with_opt=False) learn.data.vocab.save(mdl_path/(lm_fns[1] + '.pkl')) # + TEXT = "" N_WORDS = 40 N_SENTENCES = 10 print("\n".join(learn.predict(TEXT, N_WORDS, temperature=0.75) for _ in range(N_SENTENCES))) # - # ## Indonesian sentiment analysis - to be continued # ### Language model # - [Data](https://github.com/ngxbac/aivivn_phanloaisacthaibinhluan/tree/master/data) # - [Competition details](https://www.aivivn.com/contests/1) # - Top 3 f1 scores: 0.900, 0.897, 0.897 train_df = pd.read_csv(path/'train.csv') train_df.loc[pd.isna(train_df.comment),'comment']='NA' train_df.head() test_df = pd.read_csv(path/'test.csv') test_df.loc[pd.isna(test_df.comment),'comment']='NA' test_df.head() df = pd.concat([train_df,test_df], sort=False) data_lm = (TextList.from_df(df, path, cols='comment') .split_by_rand_pct(0.1, seed=42) .label_for_lm() .databunch(bs=bs, num_workers=1)) learn_lm = language_model_learner(data_lm, AWD_LSTM, pretrained_fnames=lm_fns, drop_mult=1.0) lr = 1e-3 lr *= bs/48 learn_lm.fit_one_cycle(2, lr*10, moms=(0.8,0.7)) learn_lm.unfreeze() learn_lm.fit_one_cycle(8, lr, moms=(0.8,0.7)) learn_lm.save(f'{lang}fine_tuned') learn_lm.save_encoder(f'{lang}fine_tuned_enc') # ### Classifier # + data_clas = (TextList.from_df(train_df, path, vocab=data_lm.vocab, cols='comment') .split_by_rand_pct(0.1, seed=42) .label_from_df(cols='label') .databunch(bs=bs, num_workers=1)) data_clas.save(f'{lang}_textlist_class') # - data_clas = load_data(path, f'{lang}_textlist_class', bs=bs, num_workers=1) # + from sklearn.metrics import f1_score @np_func def f1(inp,targ): return f1_score(targ, np.argmax(inp, axis=-1)) # - learn_c = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5, metrics=[accuracy,f1]).to_fp16() learn_c.load_encoder(f'{lang}fine_tuned_enc') learn_c.freeze() lr=2e-2 lr *= bs/48 learn_c.fit_one_cycle(2, lr, moms=(0.8,0.7)) learn_c.fit_one_cycle(2, lr, moms=(0.8,0.7)) learn_c.freeze_to(-2) learn_c.fit_one_cycle(2, slice(lr/(2.6**4),lr), moms=(0.8,0.7)) learn_c.freeze_to(-3) learn_c.fit_one_cycle(2, slice(lr/2/(2.6**4),lr/2), moms=(0.8,0.7)) learn_c.unfreeze() learn_c.fit_one_cycle(1, slice(lr/10/(2.6**4),lr/10), moms=(0.8,0.7)) learn_c.save(f'{lang}clas') # Competition top 3 f1 scores: 0.90, 0.89, 0.89. Winner used an ensemble of 4 models: TextCNN, VDCNN, HARNN, and SARNN. 
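# The "Ensemble" section below loads `{lang}_textlist_class_bwd` and `{lang}clas_bwd`, which are not created anywhere in this notebook. A hedged sketch of the missing backward pass, following the pattern used in fastai's course-nlp notebooks: reload the saved TextList with backwards=True and train a backward classifier on it. The encoder name f'{lang}fine_tuned_enc_bwd' is an assumption; it would come from repeating the language-model fine-tuning with a backward pretrained model.
# +
data_clas_bwd = load_data(path, f'{lang}_textlist_class', bs=bs, num_workers=1, backwards=True)
data_clas_bwd.save(f'{lang}_textlist_class_bwd')

learn_c_bwd = text_classifier_learner(data_clas_bwd, AWD_LSTM, drop_mult=0.5, metrics=[accuracy, f1]).to_fp16()
learn_c_bwd.load_encoder(f'{lang}fine_tuned_enc_bwd')  # assumed name of a backward fine-tuned encoder
learn_c_bwd.fit_one_cycle(2, lr, moms=(0.8, 0.7))
learn_c_bwd.save(f'{lang}clas_bwd')
# -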
# ## Ensemble data_clas = load_data(path, f'{lang}_textlist_class', bs=bs, num_workers=1) learn_c = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5, metrics=[accuracy,f1]).to_fp16() learn_c.load(f'{lang}clas', purge=False); preds,targs = learn_c.get_preds(ordered=True) accuracy(preds,targs),f1(preds,targs) data_clas_bwd = load_data(path, f'{lang}_textlist_class_bwd', bs=bs, num_workers=1, backwards=True) learn_c_bwd = text_classifier_learner(data_clas_bwd, AWD_LSTM, drop_mult=0.5, metrics=[accuracy,f1]).to_fp16() learn_c_bwd.load(f'{lang}clas_bwd', purge=False); preds_b,targs_b = learn_c_bwd.get_preds(ordered=True) accuracy(preds_b,targs_b),f1(preds_b,targs_b) preds_avg = (preds+preds_b)/2 accuracy(preds_avg,targs_b),f1(preds_avg,targs_b) # ## Resources # # - https://github.com/fastai/course-nlp # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.6 64-bit (''wraeblast'': conda)' # name: python3 # --- # %matplotlib widget import asyncio from wraeblast import insights # + from matplotlib import pyplot as plt from matplotlib import rcParams import pandas as pd import seaborn as sns from IPython.display import display, HTML # sns.set(style="ticks", context="talk") # plt.style.use("dark_background") rcParams.update({'figure.autolayout': True}) # - loop = asyncio.get_event_loop() league = "Expedition" ctx = await insights.initialize_filter_context(league=league) display(HTML(ctx.currency.df[[ "item_name", "chaos_value", "chaos_value_norm", "quartile", "quintile", "decile", "percentile", ]].to_html())) ctx.currency.df.plot(x="chaos_value_norm", y=["quartile", "quintile", "decile", "percentile"]); currency = ctx.currency.df[ ctx.currency.df.item_name.str.contains("Orb") | ctx.currency.df.item_name.str.contains("Shard") | ctx.currency.df.item_name.str.contains("Mirror")] currency = currency[[ "item_name", "chaos_value_norm", "quartile", "quintile", "decile", "percentile", ]] currency # + def label_point(x, y, val, ax): a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1) for i, point in a.iterrows(): ax.text(point['x']+.02, point['y'], str(point['val']), size="small") sns.set_style("whitegrid") ax = sns.lmplot(data=currency, x="percentile", y="chaos_value_norm", hue="item_name", height=10); # label_point(currency.percentile, currency.chaos_value_norm, currency.item_name, plt.gca()) # - fig, ax = plt.subplots() fig.suptitle("Normalized chaos values of currency by quintile") sns.boxenplot(x="quintile", y="chaos_value_norm", data=currency, showfliers=False); sns.stripplot(x="quintile", y="chaos_value_norm", data=currency, color=".26"); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="zU8oXGjshFjP" colab_type="code" colab={} import numpy as np # %matplotlib inline import matplotlib.pyplot as plt np.random.seed(0) from statistics import mean # + [markdown] id="x8buFpCZpHxN" colab_type="text" # 今回はアルゴリズムの評価が中心の章なので,学習アルゴリズム実装は後に回し、sklearnを学習アルゴリズムとして使用する。 # + id="afwHFBQspYwV" colab_type="code" colab={} import sklearn # + [markdown] id="3DXxanUWrZLR" colab_type="text" # 今回、学習に使うデータはsin関数に正規分布$N(\varepsilon|0,0.05)$ノイズ項を加えたデータを使う # + id="w9qitzg_td9D" colab_type="code" colab={} size = 100 max_degree = 11 x_data = np.random.rand(size) * np.pi * 2 var_data = np.random.normal(loc=0,scale=0.1,size=size) sin_data 
= np.sin(x_data) + var_data # + id="9JwIOkNav8lF" colab_type="code" outputId="b2f28b75-fdc4-407b-c4b5-4f81fa2e5af5" colab={"base_uri": "https://localhost:8080/", "height": 282} plt.ylim(-1.2,1.2) plt.scatter(x_data,sin_data) # + [markdown] id="KJXHwwOgtfF4" colab_type="text" # # 学習用のアルゴリズムは多項式回帰を使います。 # + id="t4qyLuoQtecI" colab_type="code" colab={} from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.pipeline import Pipeline # + [markdown] id="Pe_xlO-vDTv_" colab_type="text" # 2.2.2:**MSE**:近似の良さの評価手法。 # # $$MSE=\int (y(x;D) - h(x))^2p(x)dx=E\{(y(x;D)-h(x))^2\}$$ # + id="DXf_GKnqDS30" colab_type="code" colab={} def MSE(y,t): return np.sum(np.square(y-t))/y.size # + id="ZVYGA_lpR4PT" colab_type="code" outputId="2f41e964-3039-4250-c912-bcc1636e3d39" colab={"base_uri": "https://localhost:8080/", "height": 35} MSE(np.array([10,3,3]),np.array([1,2,3])) # + [markdown] id="WLknxmh7nUpV" colab_type="text" # 2.2.1 (1)**ホールドアウト法**: # 手元のデータを2つに分割し、片方をトレーニングに使い、片方をテストに使う手法。 # テストデータの数が必要 # + id="efYwVXAEoS47" colab_type="code" outputId="91cfd975-3c2d-466f-eac3-3ef06219894b" colab={"base_uri": "https://localhost:8080/", "height": 52} # %%time def holdout_method(x,y,per=0.8,value_func=MSE,degree=11): index = np.random.permutation(x.size) index_train,index_test = np.split(index,[int(x.size*per)]) #plt.scatter(x_data[index_train],sin_data[index_train]) test_score_list = [] train_score_list = [] for i in range(1,degree): pf = PolynomialFeatures(degree=i, include_bias=False) lr = LinearRegression() pl = Pipeline([("PF", pf), ("LR", lr)]) pl.fit(x[index_train].reshape(-1,1), y[index_train]) pred_y_test = pl.predict(x[index_test].reshape(-1,1)) pred_y_train = pl.predict(x[index_train].reshape(-1,1)) score_train = value_func(pred_y_train,y[index_train]) score_test = value_func(pred_y_test,y[index_test]) train_score_list.append(score_train) test_score_list.append(score_test) return train_score_list,test_score_list # + id="vuWyhQp3LOqW" colab_type="code" outputId="14ce2f29-651c-4a27-d3d4-fdadbfbb6ff5" colab={"base_uri": "https://localhost:8080/", "height": 282} hold_train_score_list,hold_test_score_list = holdout_method(x_data,sin_data,degree=max_degree) plt.plot(np.array(range(1,max_degree)),np.array(hold_train_score_list),color='b') plt.plot(np.array(range(1,max_degree)),np.array(hold_test_score_list),color='r') # + [markdown] id="aCBHmr_o5Fpd" colab_type="text" # (2)**交差確認法**:手元の各クラスをn分割して、n-1のグループで学習して、残りの1つのグループのデータでテストをし、その平均を誤り率とした性能評価を行う。 # + id="_iz_4m5f48ox" colab_type="code" colab={} def cross_validation(x,y,value_func=MSE,split_num=5,degree=1): assert x.size % split_num==0,"You must use divisible number" n = x.size / split_num train_scores =[] test_scores =[] for i in range(split_num): indices = [int(i*n),int(i*n+n)] train_x_1,test_x,train_x_2=np.split(x,indices) train_y_1,test_y,train_y_2=np.split(y,indices) train_x = np.concatenate([train_x_1,train_x_2]) train_y = np.concatenate([train_y_1,train_y_2]) pf = PolynomialFeatures(degree=degree, include_bias=False) lr = LinearRegression() pl = Pipeline([("PF", pf), ("LR", lr)]) pl.fit(train_x.reshape(-1,1), train_y) pred_y_test = pl.predict(np.array(test_x).reshape(-1,1)) pred_y_train = pl.predict(np.array(train_x).reshape(-1,1)) score_train = value_func(pred_y_train,train_y) #print(score_train) score_test = value_func(pred_y_test,test_y) #print(len(test_y)) train_scores.append(score_train) test_scores.append(score_test) return mean(train_scores),mean(test_scores) # + id="tYybB58UlhgR" 
colab_type="code" outputId="e89c7fea-6243-498f-d345-67a755af7772" colab={"base_uri": "https://localhost:8080/", "height": 282} cross_test_score_list = [] cross_train_score_list = [] for i in range(1,max_degree): tra,tes = cross_validation(x_data,sin_data,degree=i) cross_train_score_list.append(tra) cross_test_score_list.append(tes) plt.plot(np.array(range(1,max_degree)),np.array(cross_train_score_list),color='b') plt.plot(np.array(range(1,max_degree)),np.array(cross_test_score_list),color='r') # + [markdown] id="M3zr-OsK6vUk" colab_type="text" # (3)**一つ抜き法**:交差確認法の特別な場合で、データ数=グループの数としたものである。 # + id="r5oFd8dN5BWN" colab_type="code" colab={} def leave_one_out(x,y,value_func=MSE,size=size,degree=1): return cross_validation(x,y,value_func,split_num=size,degree=degree) # + id="P_NBcoykyOvL" colab_type="code" outputId="e932532e-4ff4-4720-dc0a-9c598eeb153f" colab={"base_uri": "https://localhost:8080/", "height": 282} leave_test_score_list = [] leave_train_score_list = [] for i in range(1,max_degree): tra,tes = leave_one_out(x_data,sin_data,degree=i) leave_train_score_list.append(tra) leave_test_score_list.append(tes) plt.plot(np.array(range(1,max_degree)),np.array(leave_train_score_list),color='b') plt.plot(np.array(range(1,max_degree)),np.array(leave_test_score_list),color='r') # + id="MfMN1SM_0cjh" colab_type="code" outputId="2f8d9b53-37a1-4ab0-dd2c-10e325413130" colab={"base_uri": "https://localhost:8080/", "height": 282} plt.plot(np.array(range(1,max_degree)),np.array(hold_train_score_list),color='y') plt.plot(np.array(range(1,max_degree)),np.array(hold_test_score_list),color='m') plt.plot(np.array(range(1,max_degree)),np.array(cross_train_score_list),color='k') plt.plot(np.array(range(1,max_degree)),np.array(cross_test_score_list),color='c') plt.plot(np.array(range(1,max_degree)),np.array(leave_train_score_list),color='b') plt.plot(np.array(range(1,max_degree)),np.array(leave_test_score_list),color='r') # + [markdown] id="AcQKagJbCFnv" colab_type="text" # (4)**ブートストラップ法**:N個の復元抽出をしてブートストラップサンプルを作り、そこから # # $bias=\varepsilon(N^*,N^*)-N(N^*,N)$ # を推定して、それをいくつか計算してその平均でバイアスを推定する。 # その推定値を$\overline{bias}$として、その推定値を # # $\varepsilon = \varepsilon(N,N)-\overline{bias}$ # とする。 # + id="dSu3P_fzCCVB" colab_type="code" colab={} def bootstrap(x,y,value_func=MSE,trial=50,degree=1): biases=[] for i in range(trial): boot_ind = np.random.choice(range(x.size),size=x.size,replace=True) pf = PolynomialFeatures(degree=degree, include_bias=False) lr = LinearRegression() pl = Pipeline([("PF", pf), ("LR", lr)]) pl.fit(x[boot_ind].reshape(-1,1), y[boot_ind]) pred_y_boot = pl.predict(x[boot_ind].reshape(-1,1)) pred_y_base = pl.predict(x.reshape(-1,1)) score_boot = value_func(pred_y_boot,y[boot_ind]) #print(score_train) score_base = value_func(pred_y_base,y) bias = score_base - score_boot #print(bias) biases.append(bias) pf = PolynomialFeatures(degree=degree, include_bias=False) lr = LinearRegression() pl = Pipeline([("PF", pf), ("LR", lr)]) pl.fit(x.reshape(-1,1), y) pred_y_base = pl.predict(x.reshape(-1,1)) score_base = value_func(pred_y_base,y) return score_base + mean(biases) # + id="t2-ylH0gjjh4" colab_type="code" outputId="362a7bc4-c7aa-4adf-939c-3fedd8687def" colab={"base_uri": "https://localhost:8080/", "height": 282} boot_score_list = [] for i in range(1,max_degree): boot_score = bootstrap(x_data,sin_data,degree=i) boot_score_list.append(boot_score) plt.plot(np.array(range(1,max_degree)),np.array(boot_score_list),color='b') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: 
light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from matopt_review.BOtest import BOtest import numpy as np #Run a single randomly initialized optimization of Foncesa using PA minmax_pareto_solutions = [[0.001,0.981], [0.001,0.981]] random_goal = [np.random.uniform(d[0], d[1]) for d in minmax_pareto_solutions] bo = BOtest(funcname="MulFonseca") bo.run_BO("Recsat", max_iter=10,# Number of Bayesian optimization steps (200 in the paper). sampling=1, goals=random_goal, exact_feval=True, ) Y_results = np.concatenate(bo.bowrap.bo_org.y_mult, axis=0) X_results = bo.bowrap.bo_org.X #Run a single randomly initialized optimization of Kursawe using PA minmax_pareto_solutions = [[-19.9,-13.0], [-10.7,0.09]] random_goal = [np.random.uniform(d[0], d[1]) for d in minmax_pareto_solutions] bo = BOtest(funcname="MulKursawe") bo.run_BO("Recsat", max_iter=10,# Number of Bayesian optimization steps (200 in the paper). sampling=1, goals=random_goal, exact_feval=True, ) Y_results = np.concatenate(bo.bowrap.bo_org.y_mult, axis=0) X_results = bo.bowrap.bo_org.X #Run a single randomly initialized optimization of Viennet using PA minmax_pareto_solutions = [[0.0,67.6],[15.0,17.0],[-0.10,0.183]] random_goal = [np.random.uniform(d[0], d[1]) for d in minmax_pareto_solutions] bo = BOtest(funcname="MulViennet") bo.run_BO("Recsat", max_iter=10,# Number of Bayesian optimization steps (200 in the paper). sampling=1, goals=random_goal, exact_feval=True, ) Y_results = np.concatenate(bo.bowrap.bo_org.y_mult, axis=0) X_results = bo.bowrap.bo_org.X #Run a single randomly initialized optimization of ZDT1 using PA minmax_pareto_solutions = [[0.000,1.000],[0.00,1.01]] random_goal = [np.random.uniform(d[0], d[1]) for d in minmax_pareto_solutions] bo = BOtest(funcname="MulZDT1") bo.run_BO("Recsat", max_iter=10,# Number of Bayesian optimization steps (200 in the paper). sampling=1, goals=random_goal, exact_feval=True, ) Y_results = np.concatenate(bo.bowrap.bo_org.y_mult, axis=0) X_results = bo.bowrap.bo_org.X #Run a single randomly initialized optimization of ZDT2 using PA minmax_pareto_solutions = [[0.000,1.000],[0.00,1.00]] random_goal = [np.random.uniform(d[0], d[1]) for d in minmax_pareto_solutions] bo = BOtest(funcname="MulZDT2") bo.run_BO("Recsat", max_iter=10,# Number of Bayesian optimization steps (200 in the paper). sampling=1, goals=random_goal, exact_feval=True, ) Y_results = np.concatenate(bo.bowrap.bo_org.y_mult, axis=0) X_results = bo.bowrap.bo_org.X #Run a single randomly initialized optimization of ZDT3 using PA minmax_pareto_solutions = [[0.000,0.853], [-0.77,1.00]] random_goal = [np.random.uniform(d[0], d[1]) for d in minmax_pareto_solutions] bo = BOtest(funcname="MulZDT3") bo.run_BO("Recsat", max_iter=10,# Number of Bayesian optimization steps (200 in the paper). sampling=1, goals=random_goal, exact_feval=True, ) Y_results = np.concatenate(bo.bowrap.bo_org.y_mult, axis=0) X_results = bo.bowrap.bo_org.X # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # --- # ## Data exploration # # ### Loading and inspecting the data # # After visiting Michigan and learning that wine grapes can be grown (and that wine can be made!) in such a cold place, you decide that you would like to start a vineyard there. 
You've seen the vineyards and know that, although it is possible to grow wine grapes there, that sometimes it is too cold. You wonder if because of climate change, Michigan might soon have a warmer, more suitable climate for growing grapes. # # You know that Europe has a long history of growing grapes, and you wonder if they kept records that might indicate how grapes respond to changes in temperature. You find a [study](https://www.clim-past.net/8/1403/2012/cp-8-1403-2012.pdf) that has compiled numerous records of grape harvest dates for more than four centuries and also a [database](http://www.climatemonitor.it/?page_id=40210&lang=en) of temperature anomalies in Europe dating back to 1655. # # Using the provided dataset, `grape_harvest.csv` (download [here](https://github.com/DanChitwood/PlantsAndPython/blob/master/grape_harvest.csv)), you're going to explore how the European grape harvest date changes with respect to temperature across centuries of data. # # To get started, import `pandas` in the cell below: # + # Import pandas here # + ### ANSWER ### import pandas as pd # - # Then, read in `grape_harvest.csv` using the `pd.read_csv()` function a pandas dataframe. # + # Read in the grape harvest data here # Put grape_harvest.csv in the same directory you are running this .ipynb from # If in a different directory, you will need to specify the path to the file # Alternatively, you can read in the data from GitHub using the following url: # https://raw.githubusercontent.com/DanChitwood/PlantsAndPython/master/grape_harvest.csv # + ### ANSWER ### # Read in the grape harvest data here # data = pd.read_csv("grape_harvest.csv") # Or read in by url from GitHub data = pd.read_csv("https://raw.githubusercontent.com/DanChitwood/PlantsAndPython/master/grape_harvest.csv") # - # Now, write some code to inspect the properties of the data and then answer the following questions: # # Use a pandas function to look at the first five lines of data: # + # Put your code here # + ### ANSWER ### data.head() # - # Use a pandas function to look at the last five lines of data: # + # Put your code here # + ### ANSWER ### data.tail() # - # Use a pandas function to look at summary statistics (like the count, min, max, and mean) for columns with continuous data: # + # Put your code here # + ### ANSWER ### data.describe() # - # Use a pandas function to retrieve the names of the columns. # + # Put your code here # + ### ANSWER ### data.columns # - # For one of the columns that is a categorical variable, use a function to list all the levels for that variable. # + # Put your code here # + ### ANSWER ### data['region'].unique() # - # For the categorical variable, also use a function to determine how many rows there are representing each level. # + # Put your code here # + ### ANSWER ## data['region'].value_counts() # - # How many rows are in this dataset? # + # Put your code here # + ### ANSWER ### print(len(data)) # - # Congratulations on reading in the data and exploring its structure! In the next activity, we will be exploring the relationship between grape harvest dates and climate! 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python3.8(face_emotion_recog) # language: python # name: face_emotion_recog # --- #loading the required libraries from fastai import * from fastai.vision import * from fastai.vision import image #from fastai.vision.widgets import * import os import pandas as pd import numpy as np import cv2 # ### Loading the model for making the predictions model4_test=load_learner(path=r"D:\Data science\Alma better\DL Facial emotion recognition\Images\images\train",file='fastai_emojis_model4.pkl') model4_test.dl os.chdir(r'D:\Data science\Alma better\DL Facial emotion recognition\Images\images\validation\surprise') # ### Some test predictions on an image test1=cv2.imread('./330.jpg') t = pil2tensor(test1, dtype=np.float32) # converts to numpy tensor #t = t.permute(2,0,1) # Move num_channels as first dimension t = t.float()/255.0 im = Image(t) # Convert to fastAi Image - this class has "apply_tfms" model4_test.predict(im) show_image(im) model4_test.predict(im)[0] type(model4_test.predict(im)[0]) model4_test os.getcwd() os.chdir('D:/Data science/Alma better/DL Facial emotion recognition/Images/images/validation/happy') #path='./531.jpg' img = cv2.imread('./531.jpg') plt.imshow(img) # #### Testing on different category images os.chdir('D:/Data science/Alma better/DL Facial emotion recognition/Images/images/validation/surprise') a3=cv2.imread('./10162.jpg') t = pil2tensor(a3, dtype=np.float32) # converts to numpy tensor t = t.float()/255.0 #t = t.permute(2,0,1) # Move num_channels as first dimension im = Image(t) # Convert to fastAi Image - this class has "apply_tfms" pred0=model4_test.predict(im) print(pred0) print(str(pred0[0])) a1=cv2.imread('./10097.jpg') plt.imshow(a1) #not used just for experimentation Emojis_dict = {'Category tensor(0)':"Angry", 'Category tensor(1)':"Disgust", 'Category tensor(2)':"Fear", 'Category tensor(3)':"Happy",\ 'Category tensor(4)':"Neutral", 'Category tensor(5)':"Sad", 'Category tensor(6)':"Surprise"} # ### Images emotion detection pipeline def prediction(img1): predictions = [] predictions = model4_test.predict(img1) predictions[0] #print(predictions) #type(predictions) prediction1=[] prediction1=str(predictions[0]) #emotion = [] #emotion = Emojis_dict[predictions1] if prediction1 == 'angry': print("The person here is angry") elif prediction1 == 'disgust': print("The person here is disgusted") elif prediction1 == 'fear': print("The person here is in fear") elif prediction1 == 'happy': print("The person here is happy") elif prediction1 == 'neutral': print("The person here is neutral") elif prediction1 == 'sad': print("The person here is sad") elif prediction1 == 'surprise': print("The person here is surprised") else: print("Cannot detect") #cv2.destroyWindow("preview") def return_prediction(path): #converting image to gray scale and save it img = cv2.imread(path) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) cv2.imwrite(path, gray) #detect face in image, crop it then resize it then save it #face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml') face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml') eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_eye.xml') img = cv2.imread(path) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) faces = face_cascade.detectMultiScale(gray, 1.3, 5) for (x,y,w,h) in faces: face_clip = img[y:y+h, 
x:x+w] cv2.imwrite(path, cv2.resize(face_clip, (350, 350))) #read the processed image then make prediction and display the result read_image = cv2.imread(path) t = pil2tensor(read_image, dtype=np.float32) # converts to numpy tensor t = t.float()/255.0 #t = t.permute((2,0,1)) #t=t.transpose((2,0,1)) img1 = Image(t) # Convert to fastAi Image - this class has "apply_tfms" model_pred1 = model4_test.predict(img1)[0] predicted=prediction(img1) #uncomment when above type of display text is required for image outputs plt.imshow(img) #uncomment if image has to be displayed return str(model_pred1) t.shape a5 = t.float()/255.0 a5.shape a9=a5.permute(2,0,1).shape a9 return_prediction('./10259.jpg') os.getcwd() return_prediction('./10306.jpg') c=Emojis_dict['Category tensor(3)'] #experimentation for i in range(len(Emojis_dict)): if c=='Happy': print('Yes Happy face') else: print('Cannot detect') prediction1=str('Category tensor(3)') type(prediction1) # + #btn_upload = widgets.FileUpload() #out_pl = widgets.Output() #lbl_pred = widgets.Label() # - # ### Emotion detection pipeline for detection on videos def test_rerun(text, cap): while(True): ret, img = cap.read() gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) font = cv2.FONT_HERSHEY_SIMPLEX cv2.putText(img, "The last phase of the person's Emotion was recorded "+str(text), (95,30), font, 1.0, (255, 0, 0), 2, cv2.LINE_AA) cv2.putText(img, "Press SPACE: Detecting", (5,470), font, 0.7, (255, 0, 0), 2, cv2.LINE_AA) cv2.putText(img, "Hold Q: To Quit😎", (460,470), font, 0.7, (255, 0, 0), 2, cv2.LINE_AA) faces = face_cascade.detectMultiScale(gray, 1.3, 5) for x,y,w,h in faces: cv2.rectangle(img, (x,y), (x+w, y+h), (255, 0, 0), 2) cv2.imshow("Image", img) if cv2.waitKey(1) == ord(' '): cv2.imwrite("test6.jpg", img) text = return_prediction("test6.jpg") test_video_pred(text, cap) break if cv2.waitKey(1) == ord('q'): cap.release() cv2.destroyAllWindows() break os.getcwd() os.chdir(r'D:\Data science\Alma better\DL Facial emotion recognition\Images\images\Input and output') # + face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml') #cap = cv2.VideoCapture('./pexels-tiger-lily-7149007.mp4') def test_video_pred(text, cap): while(True): ret, img = cap.read() if not ret: print("Can't receive frame (stream end?). Exiting ...") break gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) font = cv2.FONT_HERSHEY_SIMPLEX cv2.putText(img, "The last phase of person's emotion was recorded: "+str(text), (95,30), font, 1.0, (255, 0, 0), 2, cv2.LINE_AA) cv2.putText(img, "Press SPACE: For detection", (5,470), font, 0.7, (255, 0, 0), 2, cv2.LINE_AA) cv2.putText(img, "Hold Q: To Quit😎", (460,470), font, 0.7, (255, 0, 0), 2, cv2.LINE_AA) faces = face_cascade.detectMultiScale(gray, 1.3, 5) for x,y,w,h in faces: cv2.rectangle(img, (x,y), (x+w, y+h), (255, 0, 0), 2) cv2.imshow("Image", img) if cv2.waitKey(1) == ord(' '): cv2.imwrite("test6.jpg", img) text = return_prediction("test6.jpg") test_rerun(text, cap) #plt.imshow(img) break if cv2.waitKey(1) == ord('q'): cap.release() cv2.destroyAllWindows() break # - # ### Examples cap = cv2.VideoCapture('./pexels-tiger-lily-7149007.mp4') test_video_pred('None',cap) cap=cv2.VideoCapture('./pexels-yan-krukov-7693411.mp4') test_video_pred('None',cap) # ### 🤩😃 # - It says the people in the video are in neutral and happy emotions. Yes!!Yayy!!! 
# - Its working well cap=cv2.VideoCapture('./pexels-yan-krukov-7640073.mp4') test_video_pred('None',cap) test_video_pred('None',cap) os.chdir(r'D:\Data science\Alma better\DL Facial emotion recognition\Images\images\validation\fear') return_prediction('./10099.jpg') # ### Live video emotion detection cap=cv2.VideoCapture(0) test_video_pred('None',cap) cap=cv2.VideoCapture(0) test_video_pred('None',cap) # - Here I'm holding the mobile from which it is detecting the emotion of the people in it # - It is able to detect everymoment from the live video as soon as the person comes into the picture # - Here I have used Resnet34 transfer learning(using fastai) for detecting the emotion of the person or the people. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # This notebook produces Figs. 3,4,5 # import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl import pickle from matplotlib.colors import LogNorm import os import re from collections import defaultdict import sys sys.path.insert(0, "../../lib") # add the library folder to the path I look for modules import latexify import networkx as nx def new_color(ax): return next(ax._get_lines.prop_cycler)['color'] # ## Analysis of the results from dynamical cavity runs # This section load and combine the output files produced by the script `power.py` to produce the figure shown in the paper. # # Output from ```power.py``` are saved in pickle files with name `"theta_"+str(theta)+specifier+".pkl"`. # Multiple outputs at the same theta are saved with different unique identifiers. # + def load_obj(theta,specifier=''): name='theta_'+str(theta)+specifier+'.pkl' with open('./data/dic-' + name , 'rb') as f: return pickle.load(f) def load_and_hist(theta,specifier): dic = load_obj(theta,specifier) Ts = dic['Ts'] data = dic['data'] img = [] for i in range(len(Ts)): h,b = np.histogram(data[i],bins = np.linspace(0,1,201),density=True) img+=[h] return img,b,Ts filenames=os.listdir("./data") pattern = re.compile("dic-theta_\d*\.\d+|\d.pkl") dictnames=[name for name in filenames if pattern.match(name)]# select only dictionary files print(' Results are available in the files:') for filename in dictnames: print(filename) # + theta = 0.2 dic = load_obj(theta) J = dic['J'] avg_degree = len(J.data) / J.shape[0] img,b,Ts = load_and_hist(theta,'') Ts = np.array(Ts) img = np.array(img) # - len(b),len(Ts) J_transpose = J.transpose().tolil() js = J_transpose.rows # list of list, structure is [el[i]] where el[i] # is the list of predecessors of gene i ( the index) interaction = J_transpose.data # list of list, structure is [el[i]] # where el[i] is the list of predecessors of gene i (interaction strength with sign) Ks = np.array([len(neigh) for neigh in js]) # in degree of each gene avg_degree = np.mean(Ks) N = J.shape[0] avg_degree # $$ # \rho_{\pm} = \frac{1}{2}\left[ 1+P_j \tanh\frac{\beta(\pm J-\theta)}{2}-(1-P_j)\tanh\frac{\beta(\theta)}{2}\right] # $$ def plus(P): return 0.5*(1-np.tanh(theta/np.sqrt(avg_degree)/2/Ts)+P*(np.tanh((1-theta)/np.sqrt(avg_degree)/2/Ts)+np.tanh(theta/np.sqrt(avg_degree)/2/Ts))) def minus(P): return 0.5*(1-np.tanh(theta/np.sqrt(avg_degree)/2/Ts)+P*(np.tanh((-1-theta)/np.sqrt(avg_degree)/2/Ts)+np.tanh(theta/np.sqrt(avg_degree)/2/Ts))) # + x = 
(1-np.tanh(theta/np.sqrt(avg_degree)/2/Ts))/(2-(np.tanh((1-theta)/np.sqrt(avg_degree)/2/Ts)+np.tanh(theta/np.sqrt(avg_degree)/2/Ts))) #x is the solution of the fixed point plus(P) = P Ts_rescaled =Ts/np.sqrt(avg_degree) plt.figure(figsize = (13,3)) X,Y = np.meshgrid(Ts_rescaled,b) plt.pcolormesh(X,Y,np.array(img).T+0.01,norm = LogNorm(0.01,10),cmap ='inferno') cbar = plt.colorbar(pad = 0.015) cbar.set_label("$\\Pi(P)$",rotation=90,fontsize=12,labelpad=-5) plt.ylabel('$P$',fontsize = 14) plt.xlabel('$T/J$',fontsize = 14.5) #plt.plot(Ts,minus,'g:',label = '$1/2(1\pm tanh(\\beta J/2))$') #plt.plot(Ts,minus*plus+minus/2,'--k') plt.plot(Ts_rescaled,x,'g--') plt.plot(Ts_rescaled,minus(x),'g--',label ='$p^*,\\rho_-(p^*)$') plt.plot(Ts_rescaled,minus(minus(x)),'k-.') plt.plot(Ts_rescaled,plus(minus(x)),'k-.',label ='$\\rho_\pm(\\rho_-(p^*))$') plt.plot(Ts_rescaled,minus(plus(minus(x))),':',lw = 2,color = 'blue',label = '$\\rho_-(\\rho_\pm(\\rho_-(p^*)))$') plt.plot(Ts_rescaled,minus(minus(minus(x))),':',lw = 2,color = 'blue') ax = plt.gca() c=new_color(ax) plt.plot(Ts_rescaled,plus(plus(minus(x))),':',lw = 2,color = c,label = '$\\rho_+(\\rho_\pm(\\rho_-(p^*)))$') plt.plot(Ts_rescaled,plus(minus(minus(x))),':',lw = 2,color = c) plt.axvline(0.1,ls = ':',lw = 2,color = 'gray') plt.axvline(0.3,ls = ':',lw = 2,color = 'gray') plt.axvline(.6,ls = ':',lw = 2,color = 'gray') plt.legend(fontsize = 13,ncol = 2) plt.tight_layout() #plt.title('$\\vartheta = $'+str(theta)) #plt.savefig('./figures/T_dependence_theta_'+str(theta)+'.pdf') # + savefigure = True latexify.latexify(columns = 2) i = int(np.arange(len(Ts))[np.argmin(np.abs(Ts -0.1))]) #[plt.axvline(0.5**3*j,ymin=0.,ls = '--',c = 'm',alpha = 0.5) for j in range(9)] p0 = 0.5*(1-np.tanh(theta/np.sqrt(avg_degree)/2/Ts[i])) plt.axvline(x[i],ls ='--',c='g') plt.axvline(minus(x)[i],ls ='--',c='g') plt.axvline(minus(minus(x))[i],ls ='-.',c='k') plt.axvline(plus(minus(x))[i],ls ='-.',c='k') plt.axvline(minus(plus(minus(x)))[i],ls = ':',color = 'blue',lw = 2) plt.axvline(minus(minus(minus(x)))[i],ls = ':',color = 'blue',lw = 2) ax = plt.gca() c=new_color(ax) plt.axvline(plus(plus(minus(x)))[i],ls = ':',color = c,lw = 2) plt.axvline(plus(minus(minus(x)))[i],ls = ':',color = c,lw = 2) plt.plot(b[:-1],img[i],label = '$\\tilde{T}=$'+str(round(Ts[i],1)),color = 'gray') plt.title('$T/J=$'+str(round(Ts[i],1)),fontsize = 13) #plt.axvline(plus(minus(minus(minus(x))))[i],ls = ':',color = 'brown',lw = 2) #plt.axvline(plus(minus(minus(0)))[i],ls= ':',color = 'm') plt.xlabel('$P$',fontsize = 13) plt.ylabel('$\\Pi(P)$',fontsize = 13) #plt.legend() plt.tight_layout() if savefigure: plt.savefig('./figures/theta_'+str(theta)+'_T_0.1.pdf') plt.figure() i = int(np.arange(len(Ts))[np.argmin(np.abs(Ts -0.3))]) plt.axvline(x[i],ls ='--',c='g') plt.axvline(minus(x)[i],ls ='--',c='g') plt.axvline(minus(minus(x))[i],ls ='-.',c='k') plt.axvline(plus(minus(x))[i],ls ='-.',c='k') plt.axvline(minus(plus(minus(x)))[i],ls = ':',color = 'blue',lw = 2) plt.axvline(minus(minus(minus(x)))[i],ls = ':',color = 'blue',lw = 2) ax = plt.gca() c=new_color(ax) plt.axvline(plus(plus(minus(x)))[i],ls = ':',color = c,lw = 2) plt.axvline(plus(minus(minus(x)))[i],ls = ':',color = c,lw = 2) plt.plot(b[:-1],img[i],label = '$\\tilde{T}=$'+str(round(Ts[i],1)),color = 'gray') plt.title('$T/J=$'+str(round(Ts[i],1)),fontsize = 13) #plt.axvline(plus(minus(1))[i],ls = ':',c='k') #plt.legend(loc = 'lower right') np.count_nonzero(J.data>0)/len(J.data),np.sort(img[i])[-2]/np.sort(img[i])[-1] plt.xlabel('$P$',fontsize = 
13) plt.ylabel('$\\Pi(P)$',fontsize = 13) #plt.axvline((rho_minus)[i],ls = '-.',color = 'k',alpha = 0.7) #plt.axvline((rho_plus)[i],ls = '-.',color = 'k',alpha = 0.7) plt.tight_layout() if savefigure: plt.savefig('./figures/theta_'+str(theta)+'_T_0.3.pdf') plt.figure() i = int(np.arange(len(Ts))[np.argmin(np.abs(Ts -.6))]) plt.plot(b[:-1],img[i],label = '$T=$'+str(round(Ts[i],1))) #plt.legend() plt.axvline(x[i],ls ='--',c='g') plt.axvline(minus(x)[i],ls ='--',c='g') plt.axvline(minus(minus(x))[i],ls ='-.',c='k') plt.axvline(plus(minus(x))[i],ls ='-.',c='k') plt.plot(b[:-1],img[i],label = '$\\tilde{T}=$'+str(round(Ts[i],1)),color = 'gray') plt.title('$T/J=$'+str(round(Ts[i],1)),fontsize = 13) plt.xlabel('$P$',fontsize = 13) plt.ylabel('$\\Pi(P)$',fontsize = 13) plt.tight_layout() if savefigure: plt.savefig('./figures/theta_'+str(theta)+'_T_0.6.pdf') # - # ## Fig. 4 # Graphical solution of $\rho_{\pm}(P) = P$ i = int(np.arange(len(Ts))[np.argmin(np.abs(Ts_rescaled -0.3))]) def plot_map_trj(f,xmin,repetition,**kwargs): ymin = 0 ymax = f(xmin) xmax = ymax plt.arrow(xmin,(ymin+ymax)/2,0,0.05*(ymax-ymin)/np.abs(ymax-ymin),width = 0.01,length_includes_head=True,color=kwargs['color']) plt.arrow((xmin+xmax)/2,ymax,0.05*(xmax-xmin)/np.abs(xmax-xmin),0,width = 0.01,length_includes_head=True,color=kwargs['color']) for i in range(repetition): plt.vlines(xmin,ymin,ymax,**kwargs) plt.hlines(ymax,xmin,xmax,**kwargs) #plt.arrow((xmin+xmax)/2,ymin,0.1,0,width = 0.005,length_includes_head=True) xmin = ymax ymin = ymax ymax = f(xmin) xmax = ymax if i<1: plt.arrow(xmin,(ymin+ymax)/2,0,0.05*(ymax-ymin)/np.abs(ymax-ymin),width = 0.01,length_includes_head=True,color=kwargs['color']) plt.arrow((xmin+xmax)/2,ymax,0.05*(xmax-xmin)/np.abs(xmax-xmin),0,width = 0.01,length_includes_head=True,color=kwargs['color']) def p(P): return 0.5*(1-np.tanh(theta/np.sqrt(avg_degree)/2/T)+P*(np.tanh((1-theta)/np.sqrt(avg_degree)/2/T)+np.tanh(theta/np.sqrt(avg_degree)/2/T))) def m(P): return 0.5*(1-np.tanh(theta/np.sqrt(avg_degree)/2/T)+P*(np.tanh((-1-theta)/np.sqrt(avg_degree)/2/T)+np.tanh(theta/np.sqrt(avg_degree)/2/T))) T = Ts[i] t = np.linspace(0,1,100) plt.plot(t,p(t),label = '$\\rho_+(P)$') plt.plot(t,m(t),label = '$\\rho_-(P)$') plt.plot(t,t,'k') plt.legend(fontsize = 10) plt.xlabel('$P$',fontsize = 13) plt.ylabel('$\\rho_\pm(P)$',fontsize = 13) ax = plt.gca() ax.set_aspect('equal', adjustable='box') plot_map_trj(p,0.1,3,ls = '--',alpha = 0.5,color = 'g') plot_map_trj(m,0.9,4,ls = ':',color = 'r') plt.xlim(0,1) plt.ylim(0,1) #plt.arrow((0.1),(0.5),0.4,0.4,width = 0.02,length_includes_head=True) plt.tight_layout() #plt.savefig('graphical_sol.pdf') # ## Fig. 
5 # + a = defaultdict(list) for k,P in zip(Ks,dic['data'][i]): a[k].append(P) x = (1-np.tanh(theta/np.sqrt(avg_degree)/2/Ts))/(2-(np.tanh((1-theta)/np.sqrt(avg_degree)/2/Ts)+np.tanh(theta/np.sqrt(avg_degree)/2/Ts))) # + plt.figure(figsize=(5.3,2.5)) plt.axvline(plus(1)[i],ls = '--',alpha = 0.5,lw=1.5,c = '#d62728',label = '$\\rho_+(1)$') plt.axvline(minus(1)[i],ls = ':',alpha = 0.9,lw=1.5,c = '#d62728',label = '$\\rho_-(1)$') plt.axvline(minus(0)[i],ls = '-.',alpha = 0.5,lw=1.5,c = '#d62728',label = '$\\rho_\pm(0)$') h,b = np.histogram(dic['data'][i],bins = np.linspace(0,1,201),density=True) plt.plot((b[:-1]+b[1:])/2,h) #h,b = np.histogram(P,bins = 1000,density=False) h1,b = np.histogram(a[1],bins = b,density=False)#node degree 1 plt.plot((b[:-1]+b[1:])/2,h1/N/np.diff(b),'-',alpha =0.6,label = '$k^{\\mathrm{in}}=1$') plt.legend(numpoints=1,fontsize = 10,ncol = 1,loc = 'upper left',bbox_to_anchor=(plus(1)[i]-0.25, 0.52)) plt.semilogy() plt.xlabel('$P$',fontsize = 13) plt.ylabel('$\Pi(P)$',fontsize = 13) plt.tight_layout() #plt.savefig('figures/degree_1.pdf') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #!/usr/bin/python3 # -*- coding: utf-8 -*- # Author: # Date: Fri 04 dec 2020 # virtual environment ecam import psutil from pprint import pprint as pprint from datetime import datetime from time import sleep tmp = (psutil.sensors_temperatures(), datetime.now()) pprint(tmp) tmp[0]['coretemp'][0].current tmp[1] log = f'{tmp[1].strftime("%Y-%B-%d %H:%M:%S")} - CPU Temp {tmp[0]["coretemp"][0].current}°c' print(log) def log_cpu_tmp(): tmp = (psutil.sensors_temperatures(), datetime.now()) return f'{tmp[1].strftime("%Y %B %d %H:%M:%S")} - CPU Temp {tmp[0]["coretemp"][0].current}°c' for _ in range(10): print(log_cpu_tmp()) sleep(2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: guitarsounds # language: python # name: guitarsounds # --- import os os.chdir('/Users/Olivier/anaconda3/envs/guitarsounds') # %load_ext autoreload # %autoreload 2 from guitarsounds import Sound, Signal import guitarsounds as guit import librosa import librosa.display from soundfile import write import IPython.display as ipd import matplotlib.pyplot as plt import numpy as np import os import scipy from scipy import signal as sig from noisereduce import reduce_noise # Créer des instance de Son à partir des fichiers file1 = "soundfiles/test_leste/1-1.wav" file2 = "soundfiles/test_leste/2-3.wav" test1 = Sound(file1, name='leste') test2 = Sound(file2, name='non leste', fundamental=80) # ## Ajout d'une méthode pour conditionner les sons # # `Sound.condition()` équivaut à : # # ``` # Sound.trim_signal() # Sound.filter_noise() # Sound.get_fundamental() # Sound.bin_divide() # ``` # ## Ajout d'une méthode pour trouver la fondamentale d'un son : # + # Fondamentale absente print(test1.fundamental) print(test2.fundamental) print('') # Conditionement minimal test1.trim_signal() test1.filter_noise() # Trouver la fondamentale test1.get_fundamental() # Fondamentale trouvée print(test1.fundamental, 'Hz') # - # ## 2. 
Fixer le bug quand le noise n'était pas suffisant test1.raw_signal.listen() # Si on fait : test1.condition() # On obtient des messages d'avertissement : test2.condition(verbose=False) # ## Graphique des enveloppes pour toutes les bins de fréquence # Reste à corriger le nombre de samples par maximum pour l'enveloppe test2.plot_freq_bins() # ## Comparaison normalisée pour les deux sons guit.time_compare(test1, test2, fbin='mid') # ## Transformées de Fourier Mirroir Normalisées guit.fft_mirror(test1, test2) # ## Ajout du type de plot : histogramme par bandes d'octaves # + test1.signal.plot(kind='fft hist') test1.signal.plot(kind='fft') plt.figure(figsize=(10,8)) test1.signal.plot(kind='fft hist', label='octave/3') test1.SP.change('octave_fraction', 6) test1.signal.plot(kind='fft hist', label='octave/6') test1.SP.change('octave_fraction', 12) test1.signal.plot(kind='fft hist', label='octave/12') plt.legend() plt.show() # - # ## Différence des FT sous forme de bandes d'octave guit.fft_diff(test1,test2, fraction=6) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # #!/usr/bin/env python # -*- coding: utf-8 -*- import matplotlib.pyplot as plt import cartopy.crs as ccrs import xarray as xr import numpy as np # Load dataset with xarray filename_aviso='f:/data/eddy/aviso/eddy_trajectory_19930101_20170106.nc' #From AVISO website dataset = xr.open_dataset(filename_aviso) dataset # - # + # Projection proj=ccrs.PlateCarree() #-------------------------------------------------------------------------------- # Follow a given eddy, plot its filtered track superimposed to its real track #-------------------------------------------------------------------------------- # Select a specific eddy and save subset = dataset.sel(obs=dataset.track==72553) # Store selected data subset.to_netcdf('eddy_trajectory_152014.nc') # Position filtering with a rolling mean windows_kwargs = dict(min_periods=7, obs=14, center=True) new_lon = subset.longitude.rolling(** windows_kwargs).mean() new_lat = subset.latitude.rolling(** windows_kwargs).mean() # Create figure fig = plt.figure() # Create subplot ax = fig.add_subplot(111, projection=proj) # Plot the two tracks ax.plot(subset.longitude, subset.latitude, color='r', label='original track', transform=proj) ax.plot(new_lon, new_lat, color='b', label='filtered track', transform=proj) # Active meridian/parallel ax.gridlines() # Active coastline ax.coastlines() # Legend ax.legend() #---------------------------------------------------------------------------------------------- # Select all eddies which go throught a given area and which have a lifespan more than 500 days #---------------------------------------------------------------------------------------------- # Create figure fig = plt.figure() # Create subplot ax = fig.add_subplot(111, projection=proj) # Bounds of the area lon_min, lon_max, lat_min, lat_max = 15, 25, -40, -34 # Draw area ax.fill( [lon_min, lon_max, lon_max, lon_min, lon_min], [lat_min, lat_min, lat_max, lat_max, lat_min], color='coral', transform=proj, alpha=0.4, zorder=30) # Select all observation in the area subset = dataset.sel( obs=(dataset.longitude > lon_min) & (dataset.longitude < lon_max) & (dataset.latitude > lat_min) & (dataset.latitude < lat_max)) # Create a mask with all track which go throught the area # Create the subset with the mask subset = 
dataset.isel(obs=np.in1d(dataset.track, subset.track)) # Find all the track which are longer than 500 days subset_lon_life =subset.sel(obs=subset.observation_number>500) # Create the final subset subset = subset.isel(obs=np.in1d(subset.track, subset_lon_life.track)) # Plot selected data ax.scatter( subset.longitude, subset.latitude, c=subset.track, label='All tracks longer than 500 days', s=5, transform=proj, linewidth=0, cmap='Dark2') # Active meridian/parallel ax.gridlines() # Active coastline ax.coastlines() # Legend ax.legend() # Store subset to further analyse subset.to_netcdf('eddy_trajectory_area_days_more500.nc') # Display figure plt.show() # + # + # Selection of all event with an amplitude over 40 cm subset = dataset.sel(obs=dataset.amplitude>40.) # save in netcdf file with same properties as before subset.to_netcdf('eddy_trajectory_amplitude_more40.nc') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ![title](img/cover4.png) #
# Copyright! This material is protected, please do not copy or distribute. by:
# ***
# Udemy course : Python Bootcamp for Data Science 2021 Numpy Pandas & Seaborn
    # # *** # ### The Exercise for Module 6 # # #### Instruction: Please write your code after the phrase " ## Your codes begin here:" # #### Q1. In the next code cell, create a numpy array named 'array1' that consists of random numbers with the shape (3,4), i.e. three rows and four columns? Then display it? # # **Hint**: To generate random numbers, you might need to use this function (**np.random.randn()**) # # The output shoulld look like this: # # output: # # array ([[-1.56035211, -0.0309776 , -0.62092842, -1.46458049], # # [ 1.41194612, -0.47673214, -0.78046921, 1.07026774], # # [-1.2822926 , -1.3274789 , 0.12633764, 0.86219372]]) # + import numpy as np np.random.seed(50) ## Used to generate the same numbers each time we run the script. ## Your codes begin here: # - # #### Q2. Display the shape and the dimension of 'array1', that was created in Q1? # # **Hint:** you need to use **print statements, print()** to display multiple lines of code. # The output should look like this: # # The output shoulld look like this: # # **output:** # # The shape is: (3, 4) # # The dimension is 2 # + ## Your codes begin here: # - # #### Q3. Display the data type of 'array1', that was created in Q1? # # The output shoulld look like this: # # **output:** # # dtype('float64') # + ## Your codes begin here: # - # #### Q4. Use your indexing skills to select only the last two columns and the last two rows from 'array1' that was created in Q1? # # **Hint: There is no universal answer to this question. # # # The output shoulld look like this: # # **output:** # # array([[ 1.44774523, -0.16963134], # # [-0.94879155, -0.21160221]]) # + ## Your codes begin here: # - # #### Q5. Use boolean indexing to select only positive numbers from 'array1' that was created in Q1? # # The output shoulld look like this: # # **output:** # # array([1.41194612, 1.07026774, 0.12633764, 0.86219372]) # + ## Your codes begin here: # - # #### Q6. Transpose 'array1' that was created in Q1? # # # The output shoulld look like this: # # **output:** # # array([[-1.56035211, 1.41194612, -1.2822926 ], # # [-0.0309776 , -0.47673214, -1.3274789 ], # # [-0.62092842, -0.78046921, 0.12633764], # # [-1.46458049, 1.07026774, 0.86219372]]) # # + ## Your codes begin here: # - # #### Q7. Sort 'array1' within columns? Then display it? # **Hint:** Sorting within columns means sorting along rows. # # The output shoulld look like this: # # **output:** # # array([[-1.56035211, -1.3274789 , -0.78046921, -1.46458049], # # [-1.2822926 , -0.47673214, -0.62092842, 0.86219372], # # [ 1.41194612, -0.0309776 , 0.12633764, 1.07026774]]) # + ## Your codes begin here: # - # *** # # Solutions # *** # #### Q1. In the next code cell, create a numpy array named 'array1' that consists of random numbers with the shape (3,4), i.e. three rows and four columns? Then display it? # + import numpy as np np.random.seed(50) ## Used to generate the same numbers each time we run the script. ## Your codes begin here: array1 = np.random.randn(3,4) array1 # - # #### Q2. Display the shape and the dimension of 'array1', that was created in Q1? # + ## Your codes begin here: print('The shape is: ', array1.shape) print('The dimension is ', array1.ndim) # - # #### Q3. Display the data type of 'array1', that was created in Q1? # + ## Your codes begin here: array1.dtype # - # #### Q4. Use your indexing skills to select only the last two columns and the last two rows from 'array1' that was created in Q1? # + ## Your codes begin here: array1[1:,2:] # - # #### Q5. 
Use boolean indexing to select only positive numbers from 'array1' that was created in Q1? # + ## Your codes begin here: array1[array1 > 0] # - # #### Q6. Transpose 'array1' that was created in Q1? # + ## Your codes begin here: array1.T # - # #### Q7. Sort 'array1' within columns? Then display it? # + ## Your codes begin here: array1.sort(0) array1 # - # *** # #
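# Side note on the Q7 solution: `array1.sort(0)` sorts **in place** along axis 0, i.e. each column independently from top to bottom. If you want to keep the original array unchanged, `np.sort` returns a sorted copy instead:

# +
## Non-destructive equivalent of the Q7 solution
sorted_copy = np.sort(array1, axis=0)  # array1 itself is left untouched
sorted_copy
# -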
# Thank You
    # # *** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # cd "C:\Users\Naman\Downloads\AnalytixLab\Final Project\5. bank reviews-complaints analysis" pwd import pandas as pd import numpy as np dataset=pd.read_csv('Bank_Reviews_csv.csv') dataset.head() dataset.info() dataset.shape dataset.drop(['Unnamed: 0','Date','BankName'],axis=1,inplace=True) dataset.head() dataset.info() dataset.shape dataset.columns=['STARS','REVIEWS'] text_data=dataset.REVIEWS.values.tolist() text_data len(text_data) import re # Creating the corpus along with structuring data in a readable form corpus = [] for review in text_data: review = review.lower() review = re.sub(r"that's","that is",review) review = re.sub(r"there's","there is",review) review = re.sub(r"what's","what is",review) review = re.sub(r"where's","where is",review) review = re.sub(r"it's","it is",review) review = re.sub(r"who's","who is",review) review = re.sub(r"i'm","i am",review) review = re.sub(r"she's","she is",review) review = re.sub(r"he's","he is",review) review = re.sub(r"they're","they are",review) review = re.sub(r"who're","who are",review) review = re.sub(r"ain't","am not",review) review = re.sub(r"wouldn't","would not",review) review = re.sub(r"shouldn't","should not",review) review = re.sub(r"can't","can not",review) review = re.sub(r"couldn't","could not",review) review = re.sub(r"won't","will not",review) review = re.sub(r"\W"," ",review) review = re.sub(r"\d"," ",review) review = re.sub(r"\s+[a-z]\s+"," ",review) review = re.sub(r"\s+[a-z]$"," ",review) review = re.sub(r"^[a-z]\s+"," ",review) review = re.sub(r"\s+"," ",review) review = re.sub(r'^br$', ' ', review) review = re.sub(r'\W', ' ',review) review = review.rstrip() #to remove trailing whitespaces at the end of the string review = review.lstrip() #to remove leading whitespaces at the start of the string corpus.append(review) corpus len(corpus) # + #string of the corpus str_representation = ' '.join(corpus) #list of words of the corpus list_words=str_representation.split(" ") print('Number of words in the corpus is : ',len(list_words)) #unique count of words print('Number of unique words in the corpus are : ',len(set(list_words))) # - import nltk # + ## Removing stopwords( here i am doing it before lemmetization as latter converts words like 'has' to 'ha' and so if stop words ## removal done after lemmetization then 'ha' wont be captured) from nltk.corpus import stopwords for i in range(len(corpus)): words = nltk.word_tokenize(corpus[i]) words = [word for word in words if word not in stopwords.words('english')] corpus[i] = ' '.join(words) # - len(corpus) corpus[:7] # + #string of the corpus after removing stopwords str_representation_stop = ' '.join(corpus) #list of words of the corpus after removing stopwords list_words_stop=str_representation_stop.split(" ") print('Number of words in the corpus is : ',len(list_words_stop)) #unique count of words after removing stopwords print('Number of unique words in the corpus are : ',len(set(list_words_stop))) # + #lemmetization (note : numner of words after lemmetization will remain same as number of words after stop words.Only difference #will be the number of unique words) from nltk.stem import WordNetLemmatizer lemmatizer = WordNetLemmatizer() for i in range(len(corpus)): words = nltk.word_tokenize(corpus[i]) words = [lemmatizer.lemmatize(word) for word in 
words] corpus[i] = ' '.join(words) # - len(corpus) corpus[:7] # + #string of the corpus after lemmetization str_representation_lem = ' '.join(corpus) #list of words of the corpus after lemmetization list_words_lem=str_representation_lem.split(" ") print('Number of words in the corpus is : ',len(list_words_lem)) #unique count of words after lemmetization print('Number of unique words in the corpus are : ',len(set(list_words_lem))) # + # Again Removing stopwords ( Here I am doing after lemmetization as if after lemmetizing the words if any word comes in the # category of stopwords, it will be removed) from nltk.corpus import stopwords for i in range(len(corpus)): words = nltk.word_tokenize(corpus[i]) words = [word for word in words if word not in stopwords.words('english')] corpus[i] = ' '.join(words) # - len(corpus) # + #string of the corpus after removing stopwords str_representation_stop_again = ' '.join(corpus) #list of words of the corpus after removing stopwords list_words_stop_again=str_representation_stop_again.split(" ") print('Number of words in the corpus is : ',len(list_words_stop_again)) #unique count of words after removing stopwords print('Number of unique words in the corpus are : ',len(set(list_words_stop_again))) # - corpus # Creating word histogram to get count of each word word2count = {} for data in corpus: words = nltk.word_tokenize(data) for word in words: if word not in word2count.keys(): word2count[word] = 1 else: word2count[word] += 1 word2count # Selecting most occuring words import heapq freq_words = heapq.nlargest(400,word2count,key=word2count.get) len(freq_words) freq_words_corpus=corpus[:] len(freq_words_corpus) # + #string of the freq_words_corpus str_freq_words_corpus = ' '.join(freq_words_corpus) #list of words of the corpus after removing stopwords list_freq_words_corpus=str_freq_words_corpus.split(" ") print('Number of words in the corpus is : ',len(list_freq_words_corpus)) #unique count of words after removing stopwords print('Number of unique words in the corpus are : ',len(set(list_freq_words_corpus))) # - #creating corpus of frequent words for i in range(len(freq_words_corpus)): words = nltk.word_tokenize(freq_words_corpus[i]) words = [word for word in words if word in freq_words] freq_words_corpus[i] = ' '.join(words) len(freq_words_corpus) # + #string of the corpus after only keeping freq words str_representation_freq_words= ' '.join(freq_words_corpus) #list of words of the corpus after only keeping freq words list_words_freq_words=str_representation_freq_words.split(" ") print('Number of words in the corpus is : ',len(list_words_freq_words)) #unique count of words after only keeping freq words print('Number of unique words in the corpus are : ',len(set(list_words_freq_words))) # - from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(stop_words='english') X = vectorizer.fit_transform(freq_words_corpus) len(vectorizer.get_feature_names()) Review=pd.DataFrame(X.toarray()) Review.head() terms = vectorizer.get_feature_names() Review.columns=terms Review.head() Review.info() dataset.head() updated_dataset=pd.DataFrame() updated_dataset=pd.concat([updated_dataset,Review],axis=1) updated_dataset.head() updated_dataset.head() updated_dataset.info() updated_dataset.shape updated_dataset=pd.concat([updated_dataset,dataset['STARS']],axis=1) updated_dataset.STARS=np.where(updated_dataset.STARS==5,1,0) updated_dataset.head() updated_dataset.info() updated_dataset.shape from sklearn.cross_validation import train_test_split 
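# Note: `sklearn.cross_validation` (like `sklearn.grid_search`, imported further below) was
# removed in scikit-learn 0.20; on newer versions the equivalent import is
# `from sklearn.model_selection import train_test_split, GridSearchCV`.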
rf_feature_columns=updated_dataset.columns.difference(['STARS']) len(rf_feature_columns) rf_train_X, rf_test_X, rf_train_y, rf_test_y = train_test_split( updated_dataset[rf_feature_columns], updated_dataset['STARS'], test_size = 0.2, random_state = 42 ) rf_train_X.head() rf_train_X.shape rf_train_y.head() rf_train_y.shape rf_test_X.head() rf_test_X.shape rf_test_y.head() rf_test_y.shape from sklearn.ensemble import RandomForestClassifier rf_tree = RandomForestClassifier(oob_score=True,n_estimators=116,max_features=105) rf_tree.fit( rf_train_X, rf_train_y ) radm_test_pred = pd.DataFrame( { 'actual': rf_test_y, 'predicted': rf_tree.predict( rf_test_X ) } ) radm_test_pred.to_excel('Actual_vs_Predicted.xlsx') from sklearn import metrics metrics.accuracy_score( radm_test_pred.actual, radm_test_pred.predicted ) ##Parameter tuning for random forest param_grid={'n_estimators':np.arange(100,150), 'max_features':np.arange(100,200)} from sklearn.tree import DecisionTreeClassifier, export_graphviz, export from sklearn.grid_search import GridSearchCV rf_tree = GridSearchCV(RandomForestClassifier(oob_score=False,warm_start=True,n_jobs=-1), param_grid, cv = 10) rf_tree.fit( rf_train_X, rf_train_y ) rf_tree.best_estimator_ rf_tree.best_score_ # ### CONFUSION MATRIX from sklearn.metrics import confusion_matrix cm=confusion_matrix(radm_test_pred.actual,radm_test_pred.predicted) cm import seaborn as sn import matplotlib.pyplot as plt sn.heatmap(cm, annot=True, fmt='.0f',xticklabels=["1",'5'],yticklabels=["1",'5']) plt.ylabel('ACTUAL') plt.xlabel('PREDICTED') plt.title('Test Data Confusion Matrix') plt.show() # ### Finding Ideal Cutoff test_predicted_prob = pd.DataFrame(rf_tree.predict_proba(rf_test_X))[[1]] test_predicted_prob.columns = ['prob'] actual=rf_test_y.reset_index() actual.drop('index',axis=1,inplace=True) # making a DataFrame with actual and prob columns hr_test_predict = pd.concat([actual, test_predicted_prob], axis=1) hr_test_predict.columns = ['actual','prob'] hr_test_predict.head() hr_test_predict.head() hr_test_predict.info() # + test_roc_like_df = pd.DataFrame() test_temp = hr_test_predict.copy() for cut_off in np.linspace(0,1,50): test_temp['predicted'] = test_temp['prob'].apply(lambda x: 0 if x < cut_off else 1) test_temp['tp'] = test_temp.apply(lambda x: 1 if x['actual']==1 and x['predicted']==1 else 0, axis=1) test_temp['fp'] = test_temp.apply(lambda x: 1 if x['actual']==0 and x['predicted']==1 else 0, axis=1) test_temp['tn'] = test_temp.apply(lambda x: 1 if x['actual']==0 and x['predicted']==0 else 0, axis=1) test_temp['fn'] = test_temp.apply(lambda x: 1 if x['actual']==1 and x['predicted']==0 else 0, axis=1) sensitivity = test_temp['tp'].sum() / (test_temp['tp'].sum() + test_temp['fn'].sum()) specificity = test_temp['tn'].sum() / (test_temp['tn'].sum() + test_temp['fp'].sum()) test_roc_like_table = pd.DataFrame([cut_off, sensitivity, specificity]).T test_roc_like_table.columns = ['cutoff', 'sensitivity', 'specificity'] test_roc_like_df = pd.concat([test_roc_like_df, test_roc_like_table], axis=0) # - test_roc_like_df.head() test_roc_like_df.info() test_temp.sum() plt.subplots(figsize=(10,4)) plt.scatter(test_roc_like_df['cutoff'], test_roc_like_df['sensitivity'], marker='*', label='Sensitivity') plt.scatter(test_roc_like_df['cutoff'], test_roc_like_df['specificity'], marker='*', label='Specificity') #plt.scatter(test_roc_like_df['cutoff'], 1-test_roc_like_df['specificity'], marker='*', label='FPR') plt.title('For each cutoff, pair of sensitivity and FPR is plotted for ROC') plt.legend() 
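# Cross-check (a sketch reusing the `hr_test_predict` frame built above, not how the notebook itself finds the cutoff): `sklearn.metrics.roc_curve` evaluates every observed threshold in one call, with `tpr` giving the sensitivity and `1 - fpr` the specificity, so the same trade-off can be read off without the explicit cutoff loop.

# +
from sklearn.metrics import roc_curve

fpr, tpr, thresholds = roc_curve(hr_test_predict['actual'], hr_test_predict['prob'])
best_idx = (tpr + (1 - fpr)).argmax()  # threshold maximising sensitivity + specificity
print('Suggested cutoff:', thresholds[best_idx])
# -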
## Finding ideal cut-off for checking if this remains same in OOS validation test_roc_like_df['total'] = test_roc_like_df['sensitivity'] + test_roc_like_df['specificity'] test_roc_like_df[test_roc_like_df['total']==test_roc_like_df['total'].max()] hr_test_predict['predicted'] = hr_test_predict['prob'].apply(lambda x: 1 if x > 0.67 else 0) hr_test_predict.head() hr_test_predict.to_excel('Actual_vs_Predicted.xlsx') # + ### CONFUSION MATRIX WITH IDEAL CUT-OFF # - cm=confusion_matrix(hr_test_predict.actual,hr_test_predict.predicted) sn.heatmap(cm, annot=True, fmt='.0f',xticklabels=["1",'5'],yticklabels=["1",'5']) plt.ylabel('ACTUAL') plt.xlabel('PREDICTED') plt.title('Test Data Confusion Matrix with Ideal Cut-Off') plt.show() ## Accuracy ((20+73)/(20+73+7+1)) ##Precision 20/27 ##Recall 20/21 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This is an overall tutorial of how to go about this project. The files can't actually be directly run in Jupyter Notebook since they require commandline arguments, but you can use the **sys** library and cell magic commands to execute the file while passing the commandline arguments as shown below. Be sure to go through the `requirements.txt` file to get all the libraries and repositories installed needed for this project to work. # # Frozenlake implementations and Generalized MDPs # What we tried to achieve here: # # 1. Planning as inference: # Working pseudo-softmax agent capable of solving FrozenLake # (with minimal reward shaping and no knowledge of the environment). # Non-so-much-working other pseudo-softmax implementations. # # 2. Generalization to other environments: # Parsers for standard MDP and POMDP formats. # PyroMDP & PyroPOMDP, OpenAI Gym environments which run as pyro probabilistic programs. # Working softmax agent capable of solving `gridworld.mdp` environment. # # The goal here was to solve the problem of implementing a related type of agent, the softmax agent which evaluates its own policy to compute its policy. # # Walking through the code for related `contol_as_inference.py`. As mentioned before the code chunks cannot be run directly here but would require a command-line like call which is shown below where we demonstrate a sample output. # # The code is distributed for 2 different implementations, one for FrozenLake environment and other for general MDPs, the breakdown would be shown below. # # The $main()$ function is used to provide choice of which implementation is to be run. It factors in the choice and generates the relative environment. 
# + def main(): assert args.policy in ('control-as-inference-like', 'softmax-like') if args.policy == 'control-as-inference-like': policy = policy_control_as_inference_like elif args.policy == 'softmax-like': policy = softmax_like if args.mdp == 'frozenlake': env = gym.make('FrozenLake-v0', is_slippery=False) env = FrozenLakeWrapper(env) trajectory_model = trajectory_model_frozenlake agent_model = agent_models.get_agent_model('FrozenLake-v0') # makes sure integer action is sent to frozenlake environment def action_cast(action): return action.item() else: env = make_mdp(args.mdp, episodic=True) env = TimeLimit(env, 100) trajectory_model = trajectory_model_mdp agent_model = agent_models.get_agent_model(args.mdp) # makes sure tensor action is sent to MDP environment def action_cast(action): return action env.reset() for t in itt.count(): print('---') print(f't: {t}') print('state:') env.render() action = policy( env, trajectory_model=trajectory_model, agent_model=agent_model, log=True, ) _, reward, done, _ = env.step(action_cast(action)) print(f'reward: {reward}') if done: print('final state:') env.render() print(f'Episode finished after {t+1} timesteps') break env.close() if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('mdp', help='`frozenlake` string or path to MDP file') parser.add_argument( '--policy', choices=['control-as-inference-like', 'softmax-like'], default='control-as-inference-like', help='Choose one of two control strategies', ) parser.add_argument('--alpha', type=float, default=100.0) # likelihood parameter parser.add_argument('--gamma', type=float, default=0.95) # discount factor parser.add_argument('--num-samples', type=int, default=2_000) args = parser.parse_args() print(f'args: {args}') main() # - # ## Implementation to solve frozen lake environment # We tried to create a softmax implementation FrozenLake but later realized that it was more similar to an implemtation of **control as inference**! # The following code is used to generate frozenlake environment trajectories. We use `pyro.factor()` is used to influence trace log-likelihood, it acts as soft-conditioning/filtering to select random trajectories which result in high sample return. # + def trajectory_model_frozenlake(env, *, agent_model=None, factor_G=False): """trajectory_model_frozenlake A probabilistic program for the frozenlake environment trajectories. The sample return can be used to affect the trace likelihood. :param env: OpenAI Gym FrozenLake environment :param agent_model: agent's probabilistic program :param factor_G: boolean; if True then apply $\\alpha G$ likelihood factor """ if agent_model is None: agent_model = agent_models.uniform env = deepcopy(env) # running return and discount factor return_, discount = 0.0, 1.0 for t in itt.count(): action = agent_model(f'A_{t}', env, env.s) _, reward, done, _ = env.step(action.item()) # running return and discount factor return_ += discount * reward discount *= args.gamma if done: break pyro.sample('G', Delta(torch.as_tensor(return_))) if factor_G: pyro.factor('factor_G', args.alpha * return_) return return_ # - # The $policy\_control\_as\_inference\_like()$ function is used toapply importance sampling to sample action site $A_0$ and display the marginal probabilities in tabulated format # + def policy_control_as_inference_like( env, *, trajectory_model, agent_model, log=False ): """policy_control_as_inference_like Implements a control-as-inference-like policy which "maximizes" $\\Pr(A_0 \\mid S_0, high G)$. 
Not actually standard CaI, because we don't really condition on G; rather, we use $\\alpha G$ as a likelihood factor on sample traces. :param env: OpenAI Gym environment :param trajectory_model: trajectory probabilistic program :param agent_model: agent's probabilistic program :param log: boolean; if True, print log info """ inference = Importance(trajectory_model, num_samples=args.num_samples) posterior = inference.run(env, agent_model=agent_model, factor_G=True) marginal = EmpiricalMarginal(posterior, 'A_0') if log: samples = marginal.sample((args.num_samples,)) counts = Counter(samples.tolist()) hist = [counts[i] / args.num_samples for i in range(env.action_space.n)] print('policy:') print(tabulate([hist], headers=env.actions, tablefmt='fancy_grid')) return marginal.sample() # - # ## Implementation for General MDPs # This is a similar implementation of **control as inference** for the case of general MDP's. Before we were working on a predefined gym environment of Frozen lake but using the `make_mdp()` function in main, we make call to $PyroMDP$ implementation which is done in `gym-pyro` repository. This generated and returns a probailistic environment which can be used to solve by out agent. def trajectory_model_mdp(env, *, agent_model=None, factor_G=False): """trajectory_model_mdp A probabilistic program for MDP environment trajectories. The sample return can be used to affect the trace likelihood. :param env: OpenAI Gym environment :param agent_model: agent's probabilistic program :param factor_G: boolean; if True then apply $\\alpha G$ likelihood factor """ if agent_model is None: agent_model = agent_models.uniform env = deepcopy(env) # running return and discount factor return_, discount = 0.0, 1.0 # with keep_state=True only the time-step used to name sites is being reset state = env.reset(keep_state=True) for t in itt.count(): action = agent_model(f'A_{t}', env, state) state, reward, done, _ = env.step(action) # running return and discount factor return_ += discount * reward discount *= args.gamma if done: break pyro.sample('G', Delta(return_)) if factor_G: pyro.factor('factor_G', args.alpha * return_) return return_ # This function works similar to $policy\_control\_as\_inference\_like()$ function but for the general MDP case. def softmax_like(env, *, trajectory_model, agent_model, log=False): """softmax_like :param env: OpenAI Gym environment :param trajectory_model: trajectory probabilistic program :param agent_model: agent's probabilistic program :param log: boolean; if True, print log info """ Qs = torch.as_tensor( [ infer_Q( env, action, trajectory_model=trajectory_model, agent_model=agent_model, log=log, ) for action in range(env.action_space.n) ] ) action_logits = args.alpha * Qs action_dist = Categorical(logits=action_logits) if log: print('policy:') print( tabulate( [action_dist.probs.tolist()], headers=env.actions, tablefmt='fancy_grid', ) ) return action_dist.sample() # ### Sample execution # A sample output and demonstration of execution for the file `control_as_inference.py` taking `frozenlake` as an argument. import sys # %run control_as_inference.py "frozenlake" # From the output steps, we observe that out agent was able to solve optimally for the frozen lake environment in 6 timesteps to reach the goal. # # Our final attempt at implementing Softmax # Previously we tried to implement softmax but didn't quite succeed. In this attempt we think we reached the closest to implementing a real and efficient softmax. # # Here we have the implementation of 2 paths: # 1. 
Frozen lake: Frozen lake environment based on pyro importance sampling # 2. General MDP: We used `gridworld.mdp` to show implementation in general case where any environment can be defined in a `.mdp` format. # # + def main(): env = make_mdp(args.mdp, episodic=True) env = TimeLimit(env, 10) env.reset() for t in itt.count(): print('---') print(f't: {t}') print('state:') env.render() action = policy(env, log=True) _, reward, done, _ = env.step(action) print(f'reward: {reward}') if done: print('final state:') env.render() print(f'Episode finished after {t+1} timesteps') break env.close() if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('mdp', help='path to MDP file') parser.add_argument('--alpha', type=float, default=5_000.0) parser.add_argument('--gamma', type=float, default=0.95) parser.add_argument('--num-samples', type=int, default=20) args = parser.parse_args() print(f'args: {args}') main() # - # The following function a probabilistic program for MDP environment trajectories using a presampled policy. # + def trajectory_model(env, policy): """trajectory_model A probabilistic program for MDP environment trajectories using a presampled policy. :param env: OpenAI Gym FrozenLake environment :param policy: predetermined policy function """ env = deepcopy(env) # running return and discount factor return_, discount = 0.0, 1.0 for _ in itt.count(): action = policy(env.state) _, reward, done, _ = env.step(action) # running return and discount factor return_ += discount * reward discount *= args.gamma if done: break return_ = pyro.sample(f'G', Delta(return_)) return return_ # - # The following model is used to performs inference to estimate $Q^\pi(s, a)$, then uses pyro.factor to modify the trace log-likelihood. # + def softmax_agent_model(env): """softmax_agent_model Softmax agent model; Performs inference to estimate $Q^\pi(s, a)$, then uses pyro.factor to modify the trace log-likelihood. :param env: OpenAI Gym environment """ policy_probs = torch.ones(env.state_space.n, env.action_space.n) policy_vector = pyro.sample('policy_vector', Categorical(policy_probs)) inference = Importance(trajectory_model, num_samples=args.num_samples) posterior = inference.run(env, lambda state: policy_vector[state]) Q = EmpiricalMarginal(posterior, 'G').mean pyro.factor('factor_Q', args.alpha * Q) return policy_vector # - # We sample the policy using importance sampling on the entire above process. The action is chosen using the sample policy. def policy(env, log=False): """policy :param env: OpenAI Gym environment :param log: boolean; if True, print log info """ inference = Importance(softmax_agent_model, num_samples=args.num_samples) posterior = inference.run(env) marginal = EmpiricalMarginal(posterior, 'policy_vector') if log: policy_samples = marginal.sample((args.num_samples,)) action_samples = policy_samples[:, env.state] counts = Counter(action_samples.tolist()) hist = [counts[i] / args.num_samples for i in range(env.action_space.n)] print('policy:') print(tabulate([hist], headers=env.actions, tablefmt='fancy_grid')) policy_vector = marginal.sample() return policy_vector[env.state] # We observed that it kinda works sometimes, but is very sensitive to hyper parameters. This was our final implementation of softmax, although we are not sure that its actually equivalent to the real softmax, a more formal proof is required for that which maybe a good idea for Future works on this. 
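# To see why the behaviour is so sensitive to the hyper-parameters, here is a small stand-alone illustration (with made-up Q estimates, not values taken from the runs above) of how `alpha` shapes the softmax policy: with identical Q estimates the policy moves from near-uniform to effectively greedy as `alpha` grows, which is why a default as large as 5000 behaves almost deterministically.

# +
import torch
from torch.distributions import Categorical

Qs = torch.tensor([0.10, 0.12, 0.05])  # made-up Q estimates for three actions
for alpha in (1.0, 100.0, 5_000.0):
    probs = Categorical(logits=alpha * Qs).probs
    print(f'alpha={alpha:>7.1f}  policy={[round(p, 3) for p in probs.tolist()]}')
# -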
# ### Sample execution # Following line of code shows a sample exectuion the code file for the `gridworld.mdp`. # %run softmax_presample_policy.py gridworld.mdp # We see that the agent was able to solve the problem in 8 timesteps and gain the final reward of $1$. # # Observing effect of Confounding on our agent's ability to solve the environment # The main goal here was to observe the effect of confounding by observing for conditioning vs intervening on action for a general RL problem using a Confounding MDPs file which can be thought as a special case of partially observable MDPs (POMDPs). # # $E[R_t | S_t = s, do(A_t = a)] \neq E[R_t | S_t = s, A_t = a]$ # # We make use of the OpenAI gym environment framework and use a special format cmdp file. The cmdp file is derived from the encoding format for a MDP/POMDP problem, containing the states, rewards, confounders, actions and transition probabilities. The cmdp file is parsed into the environment using specially created parser from rl_parser repository which translates this file to a problem in gym environment which can be solved using traditional reinforcement learning techniques. # # The code for this part is implemented in `confounding_mdp.py` file. # # The following code is dependent on `PyroCMDP` implemetation from `gym-pyro` repository. # # The following code is used to make a call to the Pyro file which creates samples and generates the confounding environment for the agent to solve. # # `from envs import make_cmdp` # # Here we are using circles.cmpd which is nothing but a 3 x 3 grid environment with a binary confounder. The counfounders here are Clockwise and Counterclockwise direction enforcers. The agent receives positive reward for moving alongside the border depending on the confounder. # # The $main()$ function is where all the function calls are made. The $Qs$ values are calculated by making recurring calls to the $infer\_Q()$ function and are displayed in a tabulated format. # # We let the agent work for 10 timesteps and observed the effects of confounders on expected value conditioning on agents action vs expected value by making agent do A action. The agent model always tries to pick optimal action based on the intervention distribution. The confounding effect becomes more apparent since the agent model behaves differently according to it. 
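# To make the inequality above concrete, here is a minimal stand-alone toy example (a hypothetical two-action bandit, independent of the `.cmdp` environments used in this project): a hidden binary confounder U drives both the behaviour policy (the action simply copies U) and the reward (R = 1 exactly when A matches U). Conditioning on A = 1 selects precisely the trials where U = 1, so E[R | A=1] is close to 1, while intervening with do(A=1) breaks the link to U, so E[R | do(A=1)] is close to 0.5.

# +
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.integers(0, 2, size=n)       # hidden confounder
a = u.copy()                         # behaviour policy: the action copies U
r = (a == u).astype(float)           # reward pays off exactly when A matches U

# Conditioning: keep only the trials where the behaviour policy happened to pick A = 1
conditional = r[a == 1].mean()

# Intervening: force A = 1 on every trial regardless of U, then recompute the reward
interventional = (np.ones(n, dtype=int) == u).astype(float).mean()

print(f'E[R | A=1]     ~ {conditional:.2f}')     # close to 1.0
print(f'E[R | do(A=1)] ~ {interventional:.2f}')  # close to 0.5
# -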
# # # + def main(): env = make_cmdp(args.cmdp, episodic=True) env = TimeLimit(env, 10) agent_model_name = args.cmdp.split('/')[-1] agent_model = agent_models.get_agent_model(agent_model_name) values_df_index = 'E[G]', 'E[G | A=a]', 'E[G | do(A=a)]' values_df_columns = env.model.actions _, state = env.reset() for t in itt.count(): print() print(f't: {t}') env.render() Qs_none = [ infer_Q(env, action, 'none', agent_model=agent_model).item() for action in range(env.action_space.n) ] Qs_condition = [ infer_Q(env, action, 'condition', agent_model=agent_model).item() for action in range(env.action_space.n) ] Qs_intervention = [ infer_Q(env, action, 'intervention', agent_model=agent_model).item() for action in range(env.action_space.n) ] values_df = pd.DataFrame( [Qs_none, Qs_condition, Qs_intervention], values_df_index, values_df_columns, ) print(values_df) action = torch.tensor(Qs_intervention).argmax() state, _, done, _ = env.step(action) if done: print() print(f'final state: {state}') print(f'Episode finished after {t+1} timesteps') break env.close() if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('cmdp', help='CMDP file') parser.add_argument( '--gamma', type=float, default=0.95, help='discount factor' ) parser.add_argument( '--num-samples', type=int, default=1_000, help='number of samples to be used for importance sampling', ) args = parser.parse_args() print(f'args: {args}') main() # - # The trajectory function is used to simulate a trajectory by sampling random actions. The parameters here as the Open AI gym environment and the agent's probabilistic program def trajectory_model(env, *, agent_model): """trajectory_model A probabilistic program which simulates a trajectory by sampling random actions. The sample return can be used to affect the trace likelihood such that the agent policy becomes $\\pi(action_0; state_0) \\propto \\exp(\\alpha return_0)$ :param env: OpenAI Gym environment :param agent_model: agent's probabilistic program """ env = deepcopy(env) # initializing the running return and discount factor return_, discount = 0.0, 1.0 # with keep_state=True only the time-step used to name sites is being reset state, confounder = env.reset(keep_state=True) for t in itt.count(): action = agent_model(f'A_{t}', env, (state, confounder)) #agent model state, reward, done, _ = env.step(action) #environment model # updating the running return and discount factor return_ += discount * reward discount *= args.gamma if done: break pyro.sample('G', Delta(return_)) return return_ # The infer_Q function here is used to get posteriors for intervention and conditioning for every action A in working action space. We show the calculated effects in the tables displayed. G here stands for goal and A for action # def infer_Q(env, action, infer_type='intervention', *, agent_model): """infer_Q Infer Q(state, action) via pyro's importance sampling, via conditioning or intervention. 
    :param env: OpenAI Gym environment
    :param action: integer action
    :param infer_type: type of inference; none, condition, or intervention
    :param agent_model: agent's probabilistic program
    """
    if infer_type not in ('intervention', 'condition', 'none'):
        raise ValueError(f'Invalid inference type {infer_type}')

    if infer_type == 'intervention':
        model = pyro.do(trajectory_model, {'A_0': torch.tensor(action)})
    elif infer_type == 'condition':
        model = pyro.condition(trajectory_model, {'A_0': torch.tensor(action)})
    else:  # infer_type == 'none'
        model = trajectory_model

    posterior = Importance(model, num_samples=args.num_samples).run(
        env, agent_model=agent_model
    )
    return EmpiricalMarginal(posterior, 'G').mean


# ### Sample execution
# A sample output and demonstration of execution for the file `confounding_mdp.py`.

# %run confounding_mdp.py circle.cmdp

# E[G] is the expected return when the agent model chooses the action by itself; the agent model always chooses the same optimal action.
#
# E[G | A=a] is the expected return when we condition on (observe) the action A=a.
#
# E[G | do(A=a)] is the expected return when we intervene and set the action to A=a.
#
# The results show that conditioning and intervening give different values. To interpret them, consider t: 8. After the previous step the agent moved left and reached state s00, the top-left cell of the grid. Comparing the conditional and interventional expectations over the action space, the two agree for up and left, since those actions are not available in s00. For the available actions, down and right, conditioning yields higher values than intervening, i.e. conditioning overestimates the return relative to what actually setting the action achieves. This gap is the observable effect of confounding: conditioning on the action also carries information about the confounder, whereas do(A=a) does not. At timestep 9 the agent moves right and the reward drops to 0.0, suggesting that right was not the best choice and that moving down might have earned a better reward. After 10 timesteps the episode ends with the agent in state 0.
#
# Hence, we were able to explore the difference between "conditional" RL and causal RL, which concludes the project presentation.
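# As an optional aside, the same conditioning-versus-intervention gap can be reproduced outside the CMDP machinery in a few lines of Pyro. The sketch below is only an illustration: `toy_model`, `expected_reward` and all of their numbers are invented here, while `pyro.condition`, `pyro.do`, `Importance` and `EmpiricalMarginal` are the same primitives used in `confounding_mdp.py` above.

# +
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import Importance, EmpiricalMarginal


def toy_model():
    # u: unobserved confounder that drives both the action and the reward
    u = pyro.sample('u', dist.Bernoulli(0.5))
    # behaviour policy: prefers a=1 when u=1
    a = pyro.sample('a', dist.Bernoulli(0.2 + 0.6 * u))
    # reward depends on both the action and the confounder
    r = pyro.sample('r', dist.Normal(2.0 * u - a, 0.1))
    return r


def expected_reward(model):
    posterior = Importance(model, num_samples=2000).run()
    return EmpiricalMarginal(posterior, 'r').mean.item()


a1 = torch.tensor(1.0)
# conditioning: observing a=1 is also evidence about u
print('E[R | A=1]     ~', expected_reward(pyro.condition(toy_model, {'a': a1})))
# intervening: do(a=1) cuts the u -> a edge, so u keeps its prior
print('E[R | do(A=1)] ~', expected_reward(pyro.do(toy_model, {'a': a1})))
# -

# With these made-up numbers the conditional estimate lands near +0.6 while the interventional estimate is near 0: observing A=1 is evidence that U=1 (the high-reward case), whereas forcing A=1 says nothing about U. This is the same overestimation pattern seen in the E[G | A=a] row of the tables above.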
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import matplotlib.pylab as plt import seaborn as sns data1 = pd.read_csv('grizzlybearmortalityhistory_1976_2017.csv') output = pd.concat([data1]) pd.DataFrame.from_dict(data1) # - data1.shape data1.head data1.columns data1.describe().apply(lambda s: s.apply(lambda x: format(x, 'f'))) data1.nunique(axis=0) data1.ID.unique() data1.HUNT_YEAR.unique() data1.MU.unique() data1.GBPU_ID.unique() data1.GBPU_NAME.unique() data1.KILL_CODE.unique() data1.SEX.unique() data1.AGE_CLASS.unique() data1.SPATIAL.unique() corr = data1.corr() sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns, annot=True, cmap=sns.diverging_palette(220, 20, as_cmap=True)) data1.plot(kind='scatter', x='HUNT_YEAR', y='KILL_CODE') data1.plot(kind='scatter', x='HUNT_YEAR', y='MU') data1.plot(kind='scatter', x='HUNT_YEAR', y='SPATIAL') data1.plot(kind='scatter', x='SEX', y='MU') data1.plot(kind='scatter', x='MU', y='ID') sns.pairplot(data1) data1['HUNT_YEAR'].plot(kind='hist', bins=20, figsize=(15,10), facecolor='green',edgecolor='black') data1['GBPU_ID'].plot(kind='hist', bins=20, figsize=(15,10), facecolor='yellow',edgecolor='black') data1['MU'].plot(kind='hist', bins=20, figsize=(15,10), facecolor='blue',edgecolor='black') # + tags=[] data1.boxplot('HUNT_YEAR') # - data1.boxplot('GBPU_ID') data1.boxplot('MU') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- for i in ['a','b','c']: try: print(i**2) except OSError: print('You have an type Error. 
Can\'t mix int and string') continue except: print('Just trying out') def ask(): while True: try: x = int(input('Please provide an integer to sqaure it: ')) result = x**2 print(result) break except ValueError: print('You need to provide an integer') ask() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import os directory = 'data/multirc/' train = pd.read_json(directory + 'train.jsonl', lines=True) # train.info() test = pd.read_json(directory + 'test.jsonl', lines=True) # test.info() val = pd.read_json(directory + 'val.jsonl', lines=True) # val.info() [i for i in train['evidences']] def count_rationale_len(df): rationale_len = [] for doc_evidences in df['evidences']: for evidence in doc_evidences[0]: a = evidence['end_token']-evidence['start_token'] b = len(evidence['text'].split()) assert(a == b) rationale_len.append(a) return rationale_len def count_rationale_len(df): rationale_len = [] for doc_evidences in df['evidences']: l = 0 for evidence in doc_evidences[0]: a = evidence['end_token']-evidence['start_token'] b = len(evidence['text'].split()) assert(a == b) l += a rationale_len.append(l) return rationale_len rationale_lens = count_rationale_len(train) + count_rationale_len(test) + count_rationale_len(val) sum(rationale_lens)/len(rationale_lens) def file_len(file): file = open(file, 'rb') f_len = 0 for line in file.readlines(): f_len += len(line.rstrip().split()) return f_len f_lens = [] data_dir = directory + 'docs' for filename in os.listdir(data_dir): f = os.path.join(data_dir, filename) if os.path.isfile(f): f_lens.append(file_len(f)) sum(f_lens)/len(f_lens) sum(rationale_lens)/sum(f_lens) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python2 with DSX Spark # language: python # name: python2 # --- # # Analyzing Internet of Things Data with IBM DSX: Trucking Data Analysis. # # In this notebook you will see how to build a predictive model with Spark machine learning API (SparkML) and deploy it for scoring in Machine Learning (ML) in IBM DSX platform. # This notebook walks you through following steps: # - Fetching data from HDFS # - Feature engineering # - Data Visualization # - Build a binary classifier model with SparkML API # - Save the model in the ML repository # - Deploy model online(via UI) # - Test the model (via UI) # - Test the model (via REST API). # # Use Case # # Imagine a trucking company that dispatches trucks across the country. The trucks are outfitted with sensors that collect data – data like location of the driver, weather conditions, and even what event recently occured (speeding, the truck weaving out of its lane, following too closely, etc). Data like this is generated very often, say once per second and is streamed back to the company’s servers. # # The company needs a way to process this stream of data and run some analysis on the data so that it can make sure trucks are traveling safe and if the driver is likely to make any violations anytime soon. Oh, and this also needs to be done in real-time! # # # ![CRISP-DM](https://raw.githubusercontent.com/dhananjaymehta/IoTtrucking/master/trucks2.jpg) # # # # # # For predicting violations, we are simulating trucking events in terms of location, miles driven, weather conditions. 
Next step is to visually understand the data and correlations between different features. We will also need to do some feature engineering for data preparation. # # Once the data is ready, we can build a predictive model. In our example we are using the SparkML Random Forrest classification model. Classification is a statistical technique which assigns a "class" to each driver - **"Violations"** or **"Normal"**. We build the classification models using historical data to train our model. (In a typical analytics project large training datasets will be used but we are building this demo model with a small datasets) # # If a model's meets accuracy expectations, it is good to be deployed for scoring. # # Scoring is the process of applying the model to a new set of data. import numpy as np import pandas as pd from pyspark import SparkContext from pyspark.sql import SQLContext, SparkSession sc=SparkContext() # ## Step 1: Import HDFS Data from remote HDP cluster. # view dataset # !curl -i -L "http://172.26.241.212:50070/webhdfs/v1/tmp/enrichedEvents.csv?op=OPEN" | tail -n 5 # #### Load Events Data # + # Training Data : from HDFS eventsFile = SQLContext(sc).read.csv("hdfs://172.26.241.212:8020/tmp/enrichedEvents", header = "false", inferSchema = "false") # this will load it as Spark DataFrame # see the data eventsFile.show(5) # total number of records tot_row = eventsFile.count() # events with violations tot_violations = eventsFile.filter("_c1 == 'N'").count() tot_no_violations = tot_row - tot_violations print(type(eventsFile), tot_row) print("Violations: " + str(tot_violations) + "; No violations: " + str(tot_no_violations)) # - # ## Step 2: Data Wrangling # + # old column names old_col_names = eventsFile.columns # new names to be assigned new_col_names =['eventtyp', 'iscertified', 'paymentscheme', 'hoursdriven', 'milesdriven', 'latitude', 'longitude', 'isfoggy', 'israiny', 'iswindy'] # Renaming the columns eventsdata = reduce(lambda eventsFile, idx: eventsFile.withColumnRenamed(old_col_names[idx], new_col_names[idx]), range(len(old_col_names)), eventsFile) eventsdata.printSchema() # - # ### Type conversion for Columns # + data=eventsdata.withColumn("latitude", eventsdata["latitude"].cast("float")).withColumn("longitude", eventsdata["longitude"].cast("float")).withColumn("hoursdriven", eventsdata["hoursdriven"].cast("int")).withColumn("isfoggy", eventsdata["isfoggy"].cast("int")).withColumn("israiny", eventsdata["israiny"].cast("int")).withColumn("iswindy", eventsdata["iswindy"].cast("int")).withColumn("milesdriven", eventsdata["milesdriven"].cast("int")) # view final schema data.printSchema() # - # ### Feature Engineering # ** Transforming truck events** eventType # # eventType into binary (Y/N) ifViolated # # **N** - if driving is 'Normal' and there are no violations # # **Y** - ['Lane Departure', 'Overspeed','Unsafe following distance', 'Unsafe tail distance'] # creating a pandas dataframe data_pandas=data.toPandas() type(data_pandas) # unique trucking events truck_events= list(data_pandas['eventtyp'].unique()) truck_events # + # transform column eventType from pyspark.sql.functions import UserDefinedFunction from pyspark.sql.types import StringType name = 'eventtyp' udf = UserDefinedFunction(lambda x: 'N' if x=="Normal" else 'Y', StringType()) data_tran=data.select(*[udf(column).alias(name) if column == name else column for column in data.columns]) # - data_tran.show(5) data_pandas=data_tran.toPandas() # use updated dataframe # #### Register table for Enriched Events 
data.registerTempTable("enrichedEvents") # ## Step 3: Exploratory analysis # + import seaborn as sns import warnings import matplotlib.pyplot as plt # %matplotlib inline # set plot size fig_size=[0,0] fig_size[0] = 12 fig_size[1] = 9 plt.rcParams["figure.figsize"] = fig_size # setting temp dataframe = df = data_pandas # setting style sns.set_style("whitegrid") warnings.filterwarnings('ignore') # - # #### Feature transformation in pandas # # **Note:** This is for visualization purpose only # # Setting int values for column # # - eventTyp: # - 1 if Violation # - 0 for Normal # # - isCertified: # - 1 if Certified # - 0 for Not Certified # # - paymentScheme: # - 1 if "hours" # - 0 for "miles" df["eventtyp"].unique() df['eventtyp'] = df['eventtyp'].apply(lambda x: 0 if x=='N' else 1) df['iscertified'] = df['iscertified'].apply(lambda x: 0 if x=='N' else 1) df['paymentscheme'] = df['paymentscheme'].apply(lambda x: 0 if x=='miles' else 1) df.head(5) # **listing the columns** # # ['eventTyp', # 'isCertified', # 'paymentScheme', # 'hoursDriven', # 'milesDriven', # 'latitude', # 'longitude', # 'isFoggy', # 'isRainy', # 'isWindy'] # ### Correlation Matrix for features # + # Compute the correlation matrix corr = df.corr() # Set up the matplotlib figure f, ax = plt.subplots(figsize=(12, 10)) # Generate a custom diverging colormap cmap = sns.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr,cmap=cmap, vmax=.3, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .7}) # - # e.g. more miles driven => more seasoned drivers => negative correlation with violations taking place (i.e. eventtyp = 0) # ### Visualizing multidimensional relationships # # *exploring correlations between multidimensional data, when you'd like to plot all pairs of values against each other.* sns.pairplot(df, hue='eventtyp', size=2.5); plt.show() df.hist() plt.show() # ### Do certified drivers have less violations? # Bar Plot sns.regplot(x="eventtyp", y="iscertified", data=df) plt.show() sns.barplot(y="eventtyp", x="iscertified", data=df) plt.show() # ### Correleation btw hours driven and violations? ax = sns.regplot(x="hoursdriven", y="eventtyp", data=df) # ### What are median hours driven by a driver? sns.distplot(df["hoursdriven"]); # ### What are median miles driven by a driver? sns.distplot(df["milesdriven"]); # ## Step 4: Building a classifier to predict truck event # verify data schema data_tran.printSchema() data_tran.show(5) # ### Algorithm Used: RandomForest Classifier # # Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set. from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorIndexer, IndexToString from pyspark.ml import Pipeline from pyspark.ml.feature import VectorAssembler from pyspark.ml.classification import RandomForestClassifier # We are using **ML Pipelines** provide a uniform set of high-level APIs built on top of DataFrames that help users create and tune practical machine learning pipeline # # A **Pipeline** is specified as a sequence of stages, and each stage is either a **Transformer** or an **Estimator**. 
These stages are run in order, and the input DataFrame is transformed as it passes through each stage. # # For Estimator stages, the fit() method is called to produce a Transformer (which becomes part of the PipelineModel, or fitted Pipeline), and that Transformer’s transform() method is called on the DataFrame. # # ![CRISP-DM](https://raw.githubusercontent.com/dhananjaymehta/IoTtrucking/master/fit.png) # # For Transformer stages, the transform() method is called on the DataFrame. # # ![CRISP-DM](https://raw.githubusercontent.com/dhananjaymehta/IoTtrucking/master/transform.png) # # # # For more details ref: https://spark.apache.org/docs/2.1.1/ml-pipeline.html # + # Prepare string variables so that they can be used by the decision tree algorithm # StringIndexer encodes a string column of labels to a column of label indices SI1 = StringIndexer(inputCol='iscertified',outputCol='iscertifiedEncoded') SI2 = StringIndexer(inputCol='paymentscheme',outputCol='paymentschemeEncoded') #encode the Label column labelIndexer = StringIndexer(inputCol='eventtyp', outputCol='label').fit(data_tran) # Pipelines API requires that input variables are passed in a vector assembler = VectorAssembler(inputCols=["iscertifiedEncoded", "paymentschemeEncoded", "hoursdriven", "milesdriven", "latitude", \ "longitude", "isfoggy", "israiny", "iswindy"], outputCol="features") # + # instantiate the algorithm, take the default settings rf=RandomForestClassifier(labelCol="label", featuresCol="features") # Convert indexed labels back to original labels. labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=labelIndexer.labels) pipeline = Pipeline(stages=[SI1,SI2,labelIndexer, assembler, rf, labelConverter]) # - # Split data into train and test datasets train, test = data_tran.randomSplit([0.8,0.2], seed=6) train.cache() test.cache() train.printSchema() train.show(5) # Build model. # The fitted model from a Pipeline is a PipelineModel, which consists of fitted models and transformers, corresponding to the pipeline stages. model = pipeline.fit(train) # ### Score test dataset results = model.transform(test) results.show(5) # ### Showing the prediction results of binary classifier: results=results.select(results["eventtyp"],results["label"],results["predictedlabel"],results["prediction"],results["probability"]) results.toPandas().head(5) # ### Model evaluation print ('Model accuracy = {:.2f}'.format(results.filter(results.label == results.prediction).count() / float(results.count()))) TP = (results.filter(results.label == results.prediction).filter(results.prediction == 1.0)).count() # True positive => predicted violation and it did occur TP FP = (results.filter(results.label != results.prediction).filter(results.prediction == 1.0)).count() # False positive => predicted violation that did not occur FP precision = float(TP) / (TP + FP) print "Model precision = {:.2f}".format(precision) # + from pyspark.ml.evaluation import BinaryClassificationEvaluator # Evaluate model evaluator = BinaryClassificationEvaluator(rawPredictionCol="prediction", labelCol="label", metricName="areaUnderROC") print 'Area under ROC curve = {:.2f}'.format(evaluator.evaluate(results)) # - # ### Area Under ROC Curve # # The area under the ROC curve (AUC) is a measure of how well a parameter can distinguish between two groups: violation vs no violation. # In a ROC curve the true positive rate (Sensitivity) is plotted in function of the false positive rate (Specifity). 
See https://www.medcalc.org/manual/roc-curves.php # # #### Evaluation Criteria # # - .90-1 = excellent (A) # - .80-.90 = good (B) # - .70-.80 = fair (C) # - .60-.70 = poor (D) # - .50-.60 = fail (F) # # # # ## Step 5: Save Model in ML repositor from repository.mlrepositoryclient import MLRepositoryClient from repository.mlrepositoryartifact import MLRepositoryArtifact service_path = 'https://internal-nginx-svc.ibm-private-cloud.svc.cluster.local:12443' ml_repository_client = MLRepositoryClient() type(model) # ### Create the model artifact (abstraction layer) # + #model_artifact = MLRepositoryArtifact(model, training_data=train, name="Predict_Violations v4.0") # + import os from repository.swagger_client import ArtifactAuthor #model_rf = pipeline_rf.fit(train_data) #model_name = "model_01" #model_artifact = MLRepositoryArtifact(model_rf, training_data=train_data,name=model_name) model_artifact = MLRepositoryArtifact(model, training_data=train, name="trucking-demo-roberth") model_artifact.meta.add('projectId',os.environ['DSX_PROJECT_ID']) model_artifact.meta.add('projectGuid',os.environ['DSX_PROJECT_NAME']) #save_model=ml_repository_client.models.save(model_artifact) # - # ### Save pipeline and model artifacts to in Machine Learning repository saved_model = ml_repository_client.models.save(model_artifact) # ### Saved model properties print "modelType: " + saved_model.meta.prop("modelType") print "creationTime: " + str(saved_model.meta.prop("creationTime")) print "modelVersionHref: " + saved_model.meta.prop("modelVersionHref") print "label: " + saved_model.meta.prop("label") # ## Step 6: Deploy and Test model # ### Test with UI # Save the notebook and click **model** tab under your Project Assets tab of the project (hint: open with another tab in your browser). # Click your model and then click on add deployment. Add an Online deployment and use the the Test API option to test the model. # # **You can use the following data for testing:** # - **X** # isCertified=N | paymentScheme=miles | hoursDriven=70 | milesDriven=3300 | latitude=-95.01 | longitude36.73 | isFoggy=0 | isRainy=1 | isWindy=1 # # - **Y** # eventTyp - Y(1) and N(0) # # Test API # ### Test REST API call # **Step 1:** In the DeploymentDetails, copy the **scoring endpoint** into your notepad: # # ``` # https://172.26.228.121/v2/scoring/online/4a5692ff-2bfc-42d4-a005-ebbd8ffea88 # ``` # **Step 2:** Retreive the **bearer token** for accessing your deployed model with this command. # # ``` # # !curl -k -X GET https:///v2/identity/token -H "username: xxx" -H "password: " # ``` # # # **Step 3:** Copy the generated token into your notepad # !curl -k -X GET https://172.26.222.224/v2/identity/token -H "username: " -H "password: <>" # **Step 4:** Create and execute this command to invoke the model remotely: # # ``` # # !curl -i -k -X POST https://172.26.228.121/v2/scoring/online/4a5692ff-2bfc-42d4-a005-ebbd8ffea88a -d '{"fields": ["isCertified","paymentScheme","hoursDriven","milesDriven","latitude","longitude","isFoggy","isRainy","isWindy"], "records": [["N","miles",0.000000,0.000000,-94.590000,39.100000,0.000000,0.000000,0.000000]]}' -H "content-type: application/json" -H "authorization: Bearer " # ``` # ### Note: # The token received from above command is valid for a **day / 24 hours**. So you would need to update the token each day. 
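# The same scoring call can also be made directly from a notebook cell with the `requests` library. The cell below is only a sketch: the host, deployment id, and bearer token are placeholders to be replaced with the values from your own Deployment Details, and the payload simply mirrors the fields/records format used in the curl commands above.

# +
import requests

scoring_url = 'https://<host>/v2/scoring/online/<deployment-id>'  # placeholder: copy from Deployment Details
bearer_token = '<token>'                                          # placeholder: token from the /v2/identity/token call

payload = {
    "fields": ["iscertified", "paymentscheme", "hoursdriven", "milesdriven",
               "latitude", "longitude", "isfoggy", "israiny", "iswindy"],
    "records": [["N", "miles", 70, 3300, 95, 37, 0, 0, 0]],
}

scoring_response = requests.post(
    scoring_url,
    json=payload,
    headers={"Authorization": "Bearer " + bearer_token},
    verify=False,  # the curl examples above use -k (self-signed certificate)
)
print(scoring_response.status_code)
print(scoring_response.json())
# -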
# !curl -i -k -X POST https://172.26.222.224/v2/scoring/online/0348c72f-096e-455d-a113-aafbc9cd8e75 -d '{"fields": ["iscertified","paymentscheme","hoursdriven","milesdriven","latitude","longitude","isfoggy","israiny","iswindy"], "records": [["N","miles",70,3300,95,37,0,0,0]]}' -H "content-type: application/json" -H "authorization: Bearer " # ## Step 7: Monitor model performance # You can manage your models using Model Management tools. Click on **Model Management** on top left corner of you IBM DSX Local Experience. It will give a dashboard listing all the Models and deployments. # # Model Management # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import scipy as sp import nmrglue as ng from matplotlib import pyplot as plt dic, fid = ng.varian.read('HMDB00122-1H') # - plt.plot(fid) plt.show() # + spectra = np.abs(sp.fft(fid)) plt.plot(spectra) plt.show() # - plt.plot(np.imag(fid)) plt.show() plt.plot(np.abs(sp.fft(np.imag(fid)))) plt.show() plt.plot(np.abs(sp.fft(np.real(fid)))) plt.show() plt.plot(np.abs(sp.fft(fid))) plt.show() plt.plot(np.abs(np.real(sp.fft(fid)))) plt.show() # + import json #json.dumps(np.real(fid).tolist()) #json.dumps(np.abs(np.real(sp.fft(fid))).tolist()) # - udic = ng.bruker.guess_udic(dic, fid) uc = ng.fileiobase.uc_from_udic(udic) plt.plot(uc.ppm_scale(), np.abs(np.real(sp.fft(fid)))) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import cv2 import numpy as np image = cv2.imread('Untitled.jpg') cv2.imshow('shapes', image) gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) ret, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY) contours, heirarchy = cv2.findContours(thresh,cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) len(contours) cnt_list = [] len(cnt_list) i = 0 for cnt in contours: if i == 0: i = 1 continue approx = cv2.approxPolyDP(cnt, 0.01*cv2.arcLength(cnt, True), True) i = len(approx) cnt_list.append(i) if len(approx) == 3: shape_name = "Triangle 3" cv2.drawContours(image, [cnt], 0, (0, 255, 0), 5) #placing text at centre M = cv2.moments(cnt) cx = int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1) elif len(approx) == 5: shape_name = "Pentagon 5" cv2.drawContours(image, [cnt], -1, (0,255,0), 5) #placing text at centre M = cv2.moments(cnt) cx = int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1) elif len(approx) == 6: shape_name = "Hexagon 6" cv2.drawContours(image, [cnt], -1, (0,255,0), 5) #placing text at centre M = cv2.moments(cnt) cx = int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1) elif len(approx) == 8: shape_name = "Octagon 8" cv2.drawContours(image, [cnt], -1, (0,255,0), 5) #placing text at centre M = cv2.moments(cnt) cx = int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1) elif len(approx) == 12: shape_name = "Star 12" cv2.drawContours(image, [cnt], -1, (0,255,0), 5) #placing text at centre M = cv2.moments(cnt) cx = 
int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1) elif len(approx) == 4: x, y, w, h = cv2.boundingRect(approx) aspectRatio = float(w)/h if aspectRatio >= 0.5 and aspectRatio <= 1.5: shape_name = "Square 4" cv2.drawContours(image, [cnt], -1, (0,255,0), 5) #placing text at centre M = cv2.moments(cnt) cx = int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1) else: shape_name = "Rectangle 4" cv2.drawContours(image, [cnt], -1, (0,255,0), 5) #placing text at centre M = cv2.moments(cnt) cx = int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1) elif len(approx) > 15: shape_name = "Circle" cv2.drawContours(image, [cnt], -1, (0,255,0), 5) #placing text at centre M = cv2.moments(cnt) cx = int(M['m10'] / M['m00']) cy = int(M['m01'] / M['m00']) cv2.putText(image, shape_name, (cx-50, cy), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,0), 1) print(len(cnt_list)) cnt_list.sort() print(len(cnt_list)) for i in cnt_list: if i == 3: print("triangle") elif i == 4: print("quadrilateral") elif i == 5: print("Pentagon") elif i == 6: print("Hexagon") elif i == 8: print("Octagon") elif i == 12: print("Star") elif i > 15: print("Circle") cv2.imshow("image1", image) cv2.waitKey(0) cv2.clearAllWindows() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pathlib import Path from collections import Counter import anndata import scanpy as sc import pandas as pd import seaborn as sns import umap import pollock from pollock.models.model import predict_from_anndata, embed_from_anndata from pollock.models.explain import explain_predictions # - # Specify which module to use for cell type predictions. 
If you haven't trained a module yet try building one with the [pbmc module training](https://github.com/ding-lab/pollock/blob/master/examples/pbmc_model_training.ipynb) example module_filepath = 'modules/pbmc' # download adata object here # # it is also the same anndata object create here adata = sc.read_h5ad('data/pbmc.h5ad') adata # create a new anndata object where raw counts are stored in .X prediction_adata = sc.read_10x_mtx('data/filtered_gene_bc_matrices/hg19/', var_names='gene_symbols') prediction_adata.var_names_make_unique() prediction_adata # ## Module prediction # predict cell types predictions = predict_from_anndata(prediction_adata.copy(), module_filepath) predictions # merge the predictions with our pbmc anndata object adata.obs = pd.merge(adata.obs, predictions, left_index=True, right_index=True) adata.obs adata.obs['annotated_cell_type'] = adata.obs['leiden'].to_list() sc.pl.umap(adata, color=['annotated_cell_type', 'predicted_cell_type', 'cell_type_probability'], frameon=False, ncols=1) # ## visualization of cell embeddings # with pollock we can also visualize the cell embeddings created by the BVAE cell_embeddings = embed_from_anndata(prediction_adata.copy(), module_filepath) cell_embeddings adata.obsm['cell_embeddings'] = cell_embeddings.loc[adata.obs.index].values # replace pca generated nearest neighbor and umap with cell embedding generated one adata.obsm['X_umap'] = umap.UMAP().fit_transform(adata.obsm['cell_embeddings']) sc.pl.umap(adata, color=['annotated_cell_type', 'predicted_cell_type', 'cell_type_probability'], frameon=False, ncols=1) # ## Module explain # For model explaination, we need to split the dataset up into two pieces # - explain : anndata object containing cells to be explained. the explaination algorithm can be time consuming (~10min for 100 cells). # - background : anndata object containing background cells, generally this can be any group of cells from the dataset, as long as the are **not** in the explain_adata. # + # here we grab ten of each cell type for explain, and put the rest into background n = 10 cell_types = sorted(set(adata.obs['predicted_cell_type'])) cell_ids = [cell_id for cell_type in cell_types for cell_id in adata.obs[adata.obs['predicted_cell_type']==cell_type].index[:n]] explain = prediction_adata[cell_ids] explain.obs = adata.obs.loc[cell_ids] background_mask = [True if cell_id not in explain.obs.index else False for cell_id in prediction_adata.obs.index] background = prediction_adata[background_mask] Counter(explain.obs['predicted_cell_type']).most_common() # - # There are some additional arguments that need be be given to explain_predictions # - module_filepath : filepath of module used for prediction. # - prediction_key : column in explain.obs where cell type predictions are stored. # - n_background_cells : number of cells to sample from background_adata. A larger number means more accurate results, but a longer runtime. The default of 100 is usually sufficient. # # Gene weights are returned from explain_predictions. They are represented as a dataframe where columns are genes, rows are cells, and the values correspond to how much each gene contributed to a given cells classification. High values mean that gene pushed that cell towards a given prediction. 
weights = explain_predictions(explain.copy(), background.copy(), module_filepath, prediction_key='predicted_cell_type', n_background_cells=100) weights # visualizing known markers for pbmc cell types # constructing weights as an AnnData object so we can use anndata/scanpy plotting functionality feature_adata = anndata.AnnData(X=weights.values, obs=explain.obs.loc[weights.index]) feature_adata.var.index = weights.columns feature_adata markers = { 'NK': ['GNLY', 'NKG7'], 'T-cell': ['CD3D', 'CD8A'], 'B-cell': ['CD79A', 'MS4A1'], 'Monocytes': ['FCGR3A', 'CD14', 'ITGAM'], 'Megakaryocytes': ['ITGA2B'] } sc.pl.heatmap(feature_adata, var_names=markers, groupby='predicted_cell_type') # Here we run scanpy's DEG workflow to see what features are differentially weighted among the different cell types adata = feature_adata.copy() sc.pp.scale(adata) adata sc.tl.rank_genes_groups(adata, groupby='predicted_cell_type', method='wilcoxon') sc.pl.rank_genes_groups_heatmap(adata) # Viewing specific groups of cells sc.pl.rank_genes_groups_dotplot(adata, groups=['B', 'Megakaryocytes']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:dsc180] # language: python # name: conda-env-dsc180-py # --- # + import urllib import requests import os import subprocess import shutil # # !pip install beautifulsoup4 from bs4 import BeautifulSoup # - def get_app(app_url): app_url += '/download?from=details' download_page = requests.get(app_url) soup = BeautifulSoup(download_page.content) apk_url = soup.select('#iframe_download')[0].attrs['src'] return requests.get(apk_url).content # + # k = get_app('https://apkpure.com/wallpaper-for-iphone-x-8-8/com.wallpapersforiphoneX.themeapplock88plus') # - def get_data(**config): data_dir = config['data_dir'] if not os.path.exists(data_dir): os.mkdir(data_dir) apps_dir = os.path.join(data_dir, 'apps') if not os.path.exists(apps_dir): os.mkdir(apps_dir) for url in config['urls']: app_slug, app_dev = url.split('/')[-2:] app_slug = urllib.parse.unquote(app_slug) # include or not dev_dir = os.path.join(apps_dir, app_dev) if not os.path.exists(dev_dir): os.mkdir(dev_dir) apk = get_app(url) apk_fn = os.path.join(dev_dir, app_slug + '.apk') apk_fp = open(apk_fn, 'wb') apk_fp.write(apk) print(apk_fn) code_dir = os.path.join(dev_dir, app_slug) if os.path.exists(code_dir): shutil.rmtree(code_dir) print('Decompiling apk...') command = subprocess.run([ 'apktool', 'd', # decode # '-r', # no resources, but it seems to corrupt the AndroidManifest.xml file, idk why apk_fn, # apk filename '-o', code_dir # decompiled out path ], capture_output=True) print(command.stdout.decode()) get_data(**cfg) cfg = { 'data_dir': '../data', 'urls': [ 'https://apkpure.com/wallpaper-for-iphone-x-8-8/com.wallpapersforiphoneX.themeapplock88plus', 'https://apkpure.com/instagram/com.instagram.android' ] } # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="tT0pR0l7DcwP" # # **Color Pop Effect (RGB Color Family)** # User can select a color family within red, green and blue to perform a pop effect. Pixels which belongs to another two color families will be turned into grayscale. # # The RGB color code is in the format (R, G, B). Each parameter (R, G, and G) defines the intensity of the color as an integer between 0 and 255. 
For example. the RGB color code of red is (255,0,0). Every pixels in the image will be checked to identify which family the pixel belongs to. If the pixel has RGB Color code (241,45,85), that means the pixel falls under red color family. # # However, if the RGB color code is (65,174,26), that means it is under green color family. Therefore, the RGB Color code will be modified by using formula below. # # # # # > 1. value = *(0.290 x R) + (0.587 x G) + (0.114 x B)* # # > 2. new_RGB_code = (value, value, value) # + [markdown] id="dbmVbSi-Ag-b" # ### 1. Import necessary libraries # + id="OJlCEfmR_zRX" import os import cv2 import numpy as np import pandas as pd from PIL import Image from google.colab.patches import cv2_imshow # + [markdown] id="Hzzk1KtcAv7I" # ### 2. Mount Google Drive # + colab={"base_uri": "https://localhost:8080/"} id="yP9Ea85lAgT_" outputId="5c9fcf3f-6a09-4839-c2fc-bef4d28b0f68" from google.colab import drive drive.mount('/content/gdrive') # + [markdown] id="p5Q8x1T4A6rX" # ### 3. Set working path # The image will be uploaded to this path # + colab={"base_uri": "https://localhost:8080/"} id="K9RoQ0pQA6v3" outputId="f9bd7d4e-d6b5-4251-e9ac-3dcc162d6468" # %cd /content/gdrive/MyDrive/ # + [markdown] id="wVyrmNqVBN9C" # ### 4. Upload image file # + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 91} id="P5IgjSTRBNi5" outputId="4d59f2ce-5443-4ca1-9dfe-e32e95d836dc" from google.colab import files uploaded = files.upload() for key, value in uploaded.items(): try: print(key, 'is successfully uploaded!') except: print('Your file is not successfully uploaded!') # + [markdown] id="VeRr6GI9BVbF" # ### 5. Display the uploaded image # + colab={"base_uri": "https://localhost:8080/", "height": 130} id="Hi-yUChiBVh7" outputId="16602415-d29c-4f33-b873-93a14bcba369" directory = os.getcwd() img = cv2.imread('{0}/{1}'.format(directory,key)) cv2_imshow(img) # + [markdown] id="0jhTJHFDBgKs" # ### 6. Create function for process bar # To monitor the progress of image processing # + colab={"base_uri": "https://localhost:8080/", "height": 34} id="7KwTkv7SBgPe" outputId="faca7c87-304c-472f-e73e-c15b9afbeb2d" from IPython.display import HTML, display import time def progress(value, max=100): return HTML(""" {value} """.format(value=value, max=max)) out = display(progress(0, 100), display_id=True) # + [markdown] id="Vt-CPZLuBtTc" # ### 7. Choose a color family to be kept # # * Pick a color family among red, green and blue # * Another two color family will be turned into grayscale # + colab={"base_uri": "https://localhost:8080/"} id="A6_YGs7dBtYx" outputId="64825420-7e83-4f15-b946-de4101e60a26" color_dict = {'0': 'Blue', '1': 'Green', '2': 'Red'} print('{0}\n{1}'.format(color_dict,'Fill in the blank with number only')) color_picked = input("\nWhat color family to be shown? ") print('{0} {1}'.format(color_dict[color_picked],'family is chosen to be shown.')) # + [markdown] id="7CjZWsYjCfmL" # ### 8. Image Processing # This will take few minutes or more according to the size of image. 
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="JZ7F2MyACfsF" outputId="68add858-20bf-4268-8460-85efee6f3511" height, width = img.shape[:2] image_arr = np.zeros((height, width, 3), dtype=np.uint8) for y_coor in range(0,height,1): for x_coor in range(0,width,1): #time.sleep(0.02) (b, g, r) = img[y_coor, x_coor] #image[y-coordinate, x-coordinate] max_rgb_value = max(b,g,r) max_rgb_index = [b,g,r].index(max_rgb_value) if int(color_picked) == int(max_rgb_index): image_arr[y_coor, x_coor] = [b, g, r] #[b,g,r] else: value = 0.299*r + 0.587*g + 0.114*b image_arr[y_coor, x_coor] = [value, value, value] #[b,g,r] out.update(progress(y_coor, height)) # + [markdown] id="B5mCUIkGCxXH" # ### 9. Display and save the processed image. # The output image will be saved to the same directory in Step 3.. # + colab={"base_uri": "https://localhost:8080/", "height": 130} id="1SRT0aZYCxcv" outputId="c22498b7-6c21-4049-cb31-c0eb95b7584c" image_arr_final = cv2.cvtColor(image_arr, cv2.COLOR_BGR2RGB) output = Image.fromarray(image_arr_final) display(output) output.save(key[:-4]+'_'+color_dict[color_picked]+'.png') // -*- coding: utf-8 -*- // --- // jupyter: // jupytext: // text_representation: // extension: .scala // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: Apache Toree - Scala // language: scala // name: apache_toree_scala // --- // # Sparkathon 30.01.2018 // ## Recommender system // Najpierw przetestujemy środowisko spark // ### Wczytywanie danych // Wczytamy plik. opcja header oznacza że pierwsza linijka zawiera nazwy kolumn // inferSchema oznacza że Spark automatycznie spróbuje wywnioskować jaki kształ mają dane wejściowe i na tej podstawie zbuduje DataFrame // #### Ratings (Movie lens dataset) val ratings = spark.read.option("header", true). option("inferSchema", true). csv("/home/jovyan/resources/small/ratings.csv") // #### Movies (Movielens dataset) // tu się znajdują informację o filmach (nazwa, rok, gatunek i oczywiście id) val movies = spark.read.option("header", true). option("inferSchema", true). csv("/home/jovyan/resources/small/movies.csv") ratings.show movies.show // #### Dzielimy dane na zbiór treningowy i zbiór testowy val Array(training, test) = ratings.randomSplit(Array(0.8, 0.2)) training.show test.show // #### Importujemy implementację Sparkową algorytmu Alternating Least Squares (ALS) // + import org.apache.spark.ml.recommendation.{ALS, ALSModel} val als = new ALS().setMaxIter(5). setRegParam(0.01). setColdStartStrategy("drop"). setUserCol("userId"). setItemCol("movieId"). setRatingCol("rating") // - // #### Tworzymy model z danych wejściowych (dataframu ratings) i algorytmu ALS val model: ALSModel = als.fit(training) model.recommendForAllUsers(10).show() // ### Ocena wyniku // Dla oceny wyniku musimy najpierw przekształcić zbiór testowy za pomocą naszego modełu. val predictions = model.transform(test) // I stworzyć metodę oceny wyniku. Wybrałem RMSE (Root mean square error) jako miara błędu. // + import org.apache.spark.ml.evaluation.RegressionEvaluator val evaluator = new RegressionEvaluator().setMetricName("rmse"). setLabelCol("rating"). 
setPredictionCol("prediction") val rmse = evaluator.evaluate(predictions) println(s"Root-mean-square error = $rmse") // - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import requests import pandas as pd import numpy as np from datetime import datetime, timezone import dateutil.parser as parser from dateutil.relativedelta import relativedelta import calendar import yaml import warnings warnings.filterwarnings('ignore') from sklearn.neighbors import NearestNeighbors from sklearn.linear_model import LinearRegression from sklearn.metrics import r2_score from pycaret.classification import * pd.get_option("display.max_columns", None) now = datetime.now() def convert_date(utc_time): parsed_date = parser.parse(utc_time) var_date=parsed_date.date() var_time=parsed_date.time() var_f_time=var_time.hour var_julian_date=parsed_date.timetuple().tm_yday var_weekday=parsed_date.weekday() var_weekday_name=calendar.day_name[parsed_date.weekday()] return var_date, var_time, var_f_time, var_julian_date, var_weekday, var_weekday_name with open ('config.yml') as ymlfile: cfg = yaml.safe_load(ymlfile) oanda_api_key = cfg['creds']['oanda_api'] account_number = cfg['creds']['account_number'] Load_10K_Records=True currency_pairs = ["EUR_USD"] timeframe = "H4" price_char = "M" price_com = "mid" provider_api_url = 'https://api-fxpractice.oanda.com/v3/accounts/{}/orders'.format(account_number) request_headers = { "Authorization": oanda_api_key, "Accept-Datetime-Format": "RFC3339", "Connection": "Keep-Alive", "Content-Type": "application/json;charset=UTF-8" } # + provider_authorization = 'Bearer {0}'.format(oanda_api_key) headers = { 'Content-Type': 'application/json', 'Authorization': provider_authorization, } # - for pair in currency_pairs: pricing_params = ( ('instruments', pair), ) response = requests.get('https://api-fxpractice.oanda.com/v3/accounts/{}/pricing'.format(account_number), headers=headers, params=pricing_params).json() response = requests.get('https://api-fxpractice.oanda.com/v3/accounts/{}/openPositions'.format(account_number), headers=headers, params=pricing_params).json() params_count = ( ('price', price_char), ('count', '5000'), ('granularity', timeframe), ) for pair in currency_pairs: first_response = requests.get('https://api-fxpractice.oanda.com/v3/instruments/{}/candles'.format(pair), headers=headers, params=params_count).json() if Load_10K_Records: datetime_object = parser.parse(first_response['candles'][0]['time']) date= datetime_object - relativedelta(years=3) from_date = date.replace(tzinfo=timezone.utc).timestamp() params_date = ( ('count', '5000'), ('price', price_char), ('from', from_date), ('granularity', timeframe),) second_response = requests.get('https://api-fxpractice.oanda.com/v3/instruments/{}/candles'.format(pair), headers=headers, params=params_date).json() first_response= first_response['candles'] second_response= second_response['candles'] second_response.extend(first_response) response=second_response else: response=first_response['candles'] # + filename = "{}_{}.csv".format(pair, timeframe) output = [] all_candlesticks = response for i in range (len(all_candlesticks)): result= (convert_date(response[i]['time'])) output.append([(result[0]),(result[1]),(result[2]),(result[3]),(result[4]),(result[5]), response[i]['time'], response[i]['volume'], response[i][price_com]['o'], 
response[i][price_com]['h'], response[i][price_com]['l'], response[i][price_com]['c']]) output = pd.DataFrame(output) output.columns = ['Date','Time','f_time','julian_date','Weekday','Weekday_Name','UTC_Time', 'Volume', 'Open', 'High', 'Low', 'Close'] data = output.to_csv(filename, header = True, index = False) data = pd.read_csv(filename) # - data = data.drop_duplicates() data = data.dropna() data = data.to_csv(filename, header = True, index = False) data = pd.read_csv(filename) # ## Simple Moving Average (SMA) data['SMA_10'] = data['Close'].rolling(window=10).mean().round(5) data['SMA_20'] = data['Close'].rolling(window=20).mean().round(5) # ## Moving Average Range data['F_SMA_10'] = data['Close'] - data['SMA_10'] data['F_SMA_20'] = data['Close'] - data['SMA_20'] # ## Price Range # + data['col_1'] = data['Open'] - data['Close'] for value in data['col_1']: if value > 0: data['col_2'] = data['High'] - data['Open'] data['col_3'] = data['Close'] - data['Low'] else: data['col_2'] = data['High'] - data['Close'] data['col_3'] = data['Open'] - data['Low'] # - data = data.drop_duplicates() data = data.dropna() data = data.to_csv(filename, header = True, index = False) data = pd.read_csv(filename) data.tail(3) # ## Candlestick Number candle_no = len(data) - 2 #candle_no = 9659 def viz(data): fig = go.Figure(data=[go.Candlestick(x=data['UTC_Time'], open=data['Open'], high=data['High'], low=data['Low'], close=data['Close'])]) fig.update_layout(xaxis_rangeslider_visible=False) fig.update_xaxes(rangebreaks=[dict(bounds=["sat", "mon"])]) fig.show() candle = data.iloc[candle_no-3:candle_no+2] candle[['Date','f_time','Volume']] # ## Average True Range (ATR) high_low = data['High'] - data['Low'] high_cp = np.abs(data['High'] - data['Close'].shift()) low_cp = np.abs(data['Low'] - data['Close'].shift()) df = pd.concat([high_low, high_cp, low_cp], axis=1) true_range = np.max(df, axis=1) data['ATR_14'] = true_range.rolling(14).mean() # ## Stop Loss / TakeProfit ATR = data.iloc[candle_no]['ATR_14'] CLOSED_PRICE = data.iloc[candle_no]['Close'] BUY_SL = (CLOSED_PRICE - ATR).round(5) SELL_SL = (CLOSED_PRICE + ATR).round(5) BUY_TP = (CLOSED_PRICE + ATR).round(5) SELL_TP = (CLOSED_PRICE - ATR).round(5) # ## Feature Selection data = data[['col_1','col_2','col_3','F_SMA_10','F_SMA_20']] data.head(1) # ## Model def find_k_similar_candles(candle_id, dataset, k=5): indices=[] distances = [] output = [] model_knn = NearestNeighbors(metric = 'euclidean', algorithm = 'brute') model_knn.fit(dataset) distances, indices = model_knn.kneighbors(dataset.iloc[candle_id,:].values.reshape(1,-1), n_neighbors = k) for i in range(0,len(distances.flatten())): if i==0: display (pd.DataFrame(data.iloc[candle_id]).transpose()) else: output.append ([dataset.index[indices.flatten()[i]], distances.flatten()[i], dataset.iloc[indices.flatten()[i]]['col_1'], dataset.iloc[indices.flatten()[i]]['col_2'], dataset.iloc[indices.flatten()[i]]['col_3'], dataset.iloc[indices.flatten()[i]]['F_SMA_10'], dataset.iloc[indices.flatten()[i]]['F_SMA_20'], ]) output = pd.DataFrame(output) output.columns = ['Indice','Distance', 'col_1', 'col_2', 'col_3', 'F_SMA_10', 'F_SMA_20', ] return indices, distances indices, distances = find_k_similar_candles (candle_no,data) # ## Final Action indices = indices[0:1][0] indices # + recs = [] for indice in indices[1:5]: data = pd.read_csv(filename) data = data.iloc[indice:indice+7] data['candleno'] = range (1, len(data) + 1) X = data['candleno'].values.reshape(-1, 1) Y = data['Close'].values.reshape(-1, 1) linear_regressor 
= LinearRegression() linear_regressor.fit(X, Y) y_pred = linear_regressor.predict(X) coeficient = (linear_regressor.coef_) if coeficient > 0: recs.append((r2_score(Y, y_pred).round(2)*100)) else: recs.append((r2_score(Y, y_pred).round(2)*100) * -1) # - data_unseen = pd.DataFrame ({'Rec1_Score': [recs[0]], 'Rec2_Score': [recs[1]], 'Rec3_Score':[recs[2]], 'Rec4_Score':[recs[3]]}) data_unseen SAVED_FINAL_MODEL = load_model('EURUSD/14-12-2021_12-59_AM_knn_EURUSD') new_prediction = predict_model(SAVED_FINAL_MODEL, data=data_unseen) new_prediction.head() KNN_Pre = new_prediction['Label'] KNN_Pre[0] SAVED_FINAL_MODEL = load_model('EURUSD/14-12-2021_09-08_AM_gbc_EURUSD') new_prediction = predict_model(SAVED_FINAL_MODEL, data=data_unseen) new_prediction.head() GBC_Pre = new_prediction['Label'] GBC_Pre[0] pair BUY_SL BUY_TP SELL_SL SELL_TP # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CNN (Exercise 11-1) # Exercise Sung Kim lesson 11: Advanced CNN
    # https://arxiv.org/abs/1602.07261
    # Test with CIFAR-10
    # + import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torch.backends.cudnn as cudnn import torchvision import torchvision.transforms as transforms from tqdm import tqdm device = 'cpu' # + print('==> Preparing data..') transform_train = transforms.Compose([ # transforms.RandomCrop(32, padding=4), # transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ]) transform_test = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train) trainloader = torch.utils.data.DataLoader(trainset, batch_size=100, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform_test) testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') # + import itertools import numpy as np import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ print(cm) plt.figure(figsize=(10, 10)) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.ylabel('True label') plt.xlabel('Predicted label') plt.tight_layout() def train(model, device, train_loader, optimizer, epoch): model.train() for batch_idx, (data, target) in enumerate(tqdm(train_loader)): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() # if batch_idx % 100 == 0: # print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( # epoch, batch_idx * len(data), len(train_loader.dataset), # 100. * batch_idx / len(train_loader), loss.item())) def test(model, device, test_loader): model.eval() test_loss = 0 correct = 0 y_test = [] y_pred = [] with torch.no_grad(): for data, target in tqdm(test_loader): data, target = data.to(device), target.to(device) output = model(data) test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss pred = output.max(1, keepdim=True)[1] # get the index of the max log-probability y_test += list(target) y_pred += list(pred.view_as(target)) correct += pred.eq(target.view_as(pred)).sum().item() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) # Confusion matrix confusion_mtx = confusion_matrix(y_test, y_pred) plot_confusion_matrix(confusion_mtx, classes=classes, normalize=True, title='Normalized confusion matrix') # - # # 1. 
Model class Net1(torch.nn.Module): """ Normal CNN 62% accuracy """ def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 100, kernel_size=5, stride=1) # in_channel is 1. Because it has 1 color self.conv2 = nn.Conv2d(100, 30, kernel_size=5, stride=1) self.mp = nn.MaxPool2d(2) self.linear1 = nn.Linear(3920, 200) self.fc = nn.Linear(200, 10) def forward(self, x): in_size = x.size(0) x = F.relu(self.mp(self.conv1(x))) x = F.relu(self.mp(self.conv2(x))) x = x.view(in_size, -1) # flatten the tensor x = F.relu(self.linear1(x)) x = self.fc(x) return F.log_softmax(x, dim=1) class BasicConv2d(nn.Module): def __init__(self, in_planes, out_planes, kernel_size, stride, padding=0): super().__init__() self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=padding, bias=False) # verify bias false self.batch_norm = nn.BatchNorm2d(out_planes, affine=True) self.relu = nn.ReLU(inplace=True) def forward(self, x): x = self.conv(x) x = self.batch_norm(x) x = self.relu(x) return x # # Wide residual networks # AND
    # arXiv:1605.07146v4 class BasicBlock(nn.Module): """ With two consecutive 3x3 convolutions with batch normalization and ReLU preceding convolution: conv3x3 - conv3x3 """ def __init__(self, in_channels=3, out_channels=32): super().__init__() self.conv = nn.Sequential( BasicConv2d(in_channels, 32, kernel_size=3, stride=1), BasicConv2d(32, out_channels, kernel_size=3, stride=1), ) def forward(self, x): return self.conv(x) class BottleNeck(nn.Module): """ with one 3 × 3 convolution surrounded by dimensionality reducing and expanding 1 × 1 convolution layers: conv1 × 1-conv3 × 3-conv1 × 1 """ def __init__(self, in_channels=3, out_channels=32): super().__init__() self.conv = nn.Sequential( BasicConv2d(in_channels, 32, kernel_size=1, stride=1), BasicConv2d(32, 96, kernel_size=3, stride=1), BasicConv2d(96, out_channels, kernel_size=1, stride=1) ) def forward(self, x): return self.conv(x) class NetX(nn.Module): """ Experiment 63% Accuracy """ def __init__(self): super().__init__() self.conv1 = BasicBlock(in_channels=3, out_channels=96) self.conv2 = BottleNeck(in_channels=96, out_channels=20) self.mp = nn.MaxPool2d(2) self.rrelu = nn.LeakyReLU() self.fc = nn.Linear(3380, 10) def forward(self, x): in_size = x.size(0) x = self.conv1(x) x = self.rrelu(self.mp(self.conv2(x))) x = x.view(in_size, -1) # flatten the tensor x = self.fc(x) return F.log_softmax(x, dim=1) model = NetX() # # 2. Loss & Optimizer # criterion has been absorbed to the `train and test` functions already optimizer = optim.Adam(model.parameters(), lr=0.001) for epoch in range(1, 1 + 1): train(model, 'cpu', trainloader, optimizer, epoch) test(model, 'cpu', testloader) # # Scratch Note target = torch.tensor([3, 2 ,2, 4], dtype=torch.long) pred = torch.tensor([8,2,2,4], dtype=torch.long) y_test = list(target) y_pred = list(pred) confusion_mtx = confusion_matrix(y_test, y_pred) plot_confusion_matrix(confusion_mtx, classes=[i for i in range(1, 10 + 1)], normalize=True, title='Normalized confusion matrix') # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # # Fitting distribution with R x.norm <- rnorm(n=200,m=10,sd=2) hist(x.norm,main="Histogram of observed data") plot(density(x.norm),main="Density estimate of data") plot(ecdf(x.norm),main="Empirical cumulative distribution function") z.norm <- (x.norm-mean(x.norm))/sd(x.norm) # standardize data qqnorm(z.norm) ## drawing the QQplot abline(0,1) ## drawing a 45-degree reference line # If data differ from a normal distribution (i.e. data belonging from a Weibull pdf) we cna use `qqplot()` in this way: x.wei <- rweibull(n=200,shape=2.1,scale=1.1) ## sampling from a Weibull x.teo <- rweibull(n=200,shape=2,scale=1) ## theoretical quantiles from a Weibull population qqplot(x.teo,x.wei,main="QQ-plot distr. Weibull") abline(0,1) # # Model choice # Dealing with discrete data we can refer to Poisson's distribution with probability mass function: # # $$ f(x,\lambda)=e^{-\lambda\dfrac{\lambda^x}{x!}} \quad \text{where } x=0,1,2,\ldots$$ x.poi <- rpois(n=200, lambda=2.5) hist(x.poi, main="Poisson distribution") # As concern continuous data we have the normal (gaussian) dsitrubition: # # $$ f(x,\lambda,\sigma)=\dfrac{1}{\sqrt{2\pi}\sigma} e^{\dfrac{1(x-\mu)^2}{2\sigma^2}} $$ # # with $x \in \mathbb{R}$. 
curve(dnorm(x,m=10,sd=2),from=0,to=20,main="Normal distribution") # Gamma distribution: # # $$ f(x,\alpha,\lambda)=\dfrac{\lambda^\alpha}{\gamma(\alpha)}x^{\alpha-1}e^{-\lambda x} $$ # # with $x \in \mathbb{R}^+$. curve(dgamma(x, scale=1.5, shape=2), from=0, to=15, main="Gamma distribution") # Weibull distribition: # # $$ f(x,\alpha,\beta)=\alpha\beta^{-\alpha}x^{\alpha-1}e^{-\left[\left(\dfrac{x}{\beta}\right)^\alpha\right]} $$ curve(dweibull(x, scale=2.5, shape=1.5), from=0, to=15, main="Weibull distribution") h<-hist(x.norm,breaks=15) xhist<-c(min(h$breaks),h$breaks) yhist<-c(0,h$density,0) xfit<-seq(min(x.norm),max(x.norm),length=40) yfit<-dnorm(xfit,mean=mean(x.norm),sd=sd(x.norm)) plot(xhist,yhist,type="s",ylim=c(0,max(yhist,yfit)), main="Normal pdf and histogram") lines(xfit,yfit, col="red") yfit yhist ks.test(yfit,yhist) # # StackOverflow example # The following is from this StackOverflow example: https://stats.stackexchange.com/questions/132652/how-to-determine-which-distribution-fits-my-data-best # # This requires you to install the following packages with the R package manager: `fitdistrplus` and `logspline`. # + library(fitdistrplus) library(logspline) x <- c(37.50,46.79,48.30,46.04,43.40,39.25,38.49,49.51,40.38,36.98,40.00, 38.49,37.74,47.92,44.53,44.91,44.91,40.00,41.51,47.92,36.98,43.40, 42.26,41.89,38.87,43.02,39.25,40.38,42.64,36.98,44.15,44.91,43.40, 49.81,38.87,40.00,52.45,53.13,47.92,52.45,44.91,29.54,27.13,35.60, 45.34,43.37,54.15,42.77,42.88,44.26,27.14,39.31,24.80,16.62,30.30, 36.39,28.60,28.53,35.84,31.10,34.55,52.65,48.81,43.42,52.49,38.00, 38.65,34.54,37.70,38.11,43.05,29.95,32.48,24.63,35.33,41.34) # - descdist(x, discrete = FALSE) fit.weibull <- fitdist(x, "weibull") fit.norm <- fitdist(x, "norm") plot(fit.norm) plot(fit.weibull) fit.weibull$aic fit.norm$aic # ## Kolmogorov-Smirnov test simulation # + n.sims <- 5e4 stats <- replicate(n.sims, { r <- rweibull(n = length(x) , shape= fit.weibull$estimate["shape"] , scale = fit.weibull$estimate["scale"] ) as.numeric(ks.test(r , "pweibull" , shape= fit.weibull$estimate["shape"] , scale = fit.weibull$estimate["scale"])$statistic ) }) # - plot(ecdf(stats), las = 1, main = "KS-test statistic simulation (CDF)", col = "darkorange", lwd = 1.7) grid() # + fit <- logspline(stats) 1 - plogspline(ks.test(x , "pweibull" , shape= fit.weibull$estimate["shape"] , scale = fit.weibull$estimate["scale"])$statistic , fit ) # + xs <- seq(10, 65, len=500) true.weibull <- rweibull(1e6, shape= fit.weibull$estimate["shape"] , scale = fit.weibull$estimate["scale"]) boot.pdf <- sapply(1:1000, function(i) { xi <- sample(x, size=length(x), replace=TRUE) MLE.est <- suppressWarnings(fitdist(xi, distr="weibull")) dweibull(xs, shape=MLE.est$estimate["shape"], scale = MLE.est$estimate["scale"]) } ) boot.cdf <- sapply(1:1000, function(i) { xi <- sample(x, size=length(x), replace=TRUE) MLE.est <- suppressWarnings(fitdist(xi, distr="weibull")) pweibull(xs, shape= MLE.est$estimate["shape"], scale = MLE.est$estimate["scale"]) } ) #----------------------------------------------------------------------------- # Plot PDF #----------------------------------------------------------------------------- par(bg="white", las=1, cex=1.2) plot(xs, boot.pdf[, 1], type="l", col=rgb(.6, .6, .6, .1), ylim=range(boot.pdf), xlab="x", ylab="Probability density") for(i in 2:ncol(boot.pdf)) lines(xs, boot.pdf[, i], col=rgb(.6, .6, .6, .1)) # Add pointwise confidence bands quants <- apply(boot.pdf, 1, quantile, c(0.025, 0.5, 0.975)) min.point <- apply(boot.pdf, 1, min, 
na.rm=TRUE) max.point <- apply(boot.pdf, 1, max, na.rm=TRUE) lines(xs, quants[1, ], col="red", lwd=1.5, lty=2) lines(xs, quants[3, ], col="red", lwd=1.5, lty=2) lines(xs, quants[2, ], col="darkred", lwd=2) # + #----------------------------------------------------------------------------- # Plot CDF #----------------------------------------------------------------------------- par(bg="white", las=1, cex=1.2) plot(xs, boot.cdf[, 1], type="l", col=rgb(.6, .6, .6, .1), ylim=range(boot.cdf), xlab="x", ylab="F(x)") for(i in 2:ncol(boot.cdf)) lines(xs, boot.cdf[, i], col=rgb(.6, .6, .6, .1)) # Add pointwise confidence bands quants <- apply(boot.cdf, 1, quantile, c(0.025, 0.5, 0.975)) min.point <- apply(boot.cdf, 1, min, na.rm=TRUE) max.point <- apply(boot.cdf, 1, max, na.rm=TRUE) lines(xs, quants[1, ], col="red", lwd=1.5, lty=2) lines(xs, quants[3, ], col="red", lwd=1.5, lty=2) lines(xs, quants[2, ], col="darkred", lwd=2) #lines(xs, min.point, col="purple") #lines(xs, max.point, col="purple") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="PV4xFqJRIi9e" # # Spike-triggered Covariance (STC) # # This is a tutorial for linear visual filters (receptive fields) # and spike-triggered covariance. # # (1) Examples of constructing visual Gabor filters and # filtering an image. # # (2) Examples of spike-triggered aproaches to find filters. # # 2012, transcribed and modified by in 2022. # # This is a simplified version of: # Spike-triggered neural characterization. # Schwartz, Pillow, , # Journal of Vision, 2006. # + [markdown] id="gxctqjUEJAEj" # ## 1. Visual filters and images # # # + [markdown] id="l8tjr-PJMIK2" # ### Helper functions # I made you two functions that can generate 2D sinusoidal and Gaussian images. # # makeGaussian(imsize, cov) # # makeSine(imsize, spatialf, ori, phase) # # ***You don't have to look inside. Just run it.*** # + id="z9v8sP2kJDG-" import numpy as np def makeGaussian(size, cov=5): x = np.arange(0, size, 1, float) y = x[:,np.newaxis] x0 = y0 = (size) / 2 # this is for matching old matlab code. gaussian = np.exp(((x-x0)**2 + (y-y0)**2) / (-2*cov)) gaussian = gaussian/np.max(gaussian) return gaussian def makeSine(imsize=10, spatialf=5, ori=0, phase=0): ori = ori/180*np.pi phi = phase/180*np.pi try: im = np.ones((imsize[0],imsize[1])) x0 = (imsize[0]+1) / 2 - 1 y0 = (imsize[1]+1) / 2 - 1 except: im = np.ones((imsize,imsize)) x0 = y0 = (imsize+1) / 2 - 1 # this is for matching old matlab code. imsize = [imsize,imsize] for x in range(imsize[0]): for y in range(imsize[1]): im[x,y] = np.sin(2*np.pi/spatialf*(((x0-x)*np.sin(ori)+(y0-y)*np.cos(ori)))+phi) return im # + [markdown] id="D1h6e3FCJDo5" # ### 1a Gabor filters and images # + id="-a1pqMOqnQXt" import matplotlib.pyplot as plt # Set parameters of sinusoid sz = 20 period = 5 direction = 0 phase = 0 theSine = makeSine(sz,period,direction,phase) # Plot the sinusoid plt.imshow(theSine,cmap='gray') # + id="dWBqRMm4rZjJ" # Make a 2 dimensional Gaussian and plot it thesig = 2 theGauss = makeGaussian(sz, thesig); plt.imshow(theGauss,cmap='gray') # + id="ofB0oTz-xuBo" # Make a Gabor filter, by multiplying a sinusoid with a Gaussian. 
theFilt = theSine * theGauss; plt.imshow(theFilt,cmap='gray') # + id="fB38t41R35oS" # Load an image # download an image from our repository # !wget https://github.com/schwartz-cnl/Computational-Neuroscience-Class/blob/main/Lab%204%20Spike%20Triggered%20Covariance/einstein.pgm?raw=true -O einstein.pgm from skimage.io import imread im = imread('einstein.pgm') plt.imshow(im, cmap='gray') # + [markdown] id="F55cUdx-6vKt" # ### 1b Convolve the image with the filter # + id="LHNLwk4x6qZQ" from scipy import signal response = signal.convolve2d(im, theFilt, mode='valid') plt.imshow(response, cmap='gray') # + [markdown] id="8JGENQaCPMpi" # ### To do: # 1. Try making different Gabor filters by varying parameters above # (e.g., direction, priod, phase of the grating; and thesig of the Gaussian) # + [markdown] id="aUZwnTAYPY_0" # ## 2. Spike-triggered approaches # We have constructed in advance few model neurons. We will use spike-triggered approaches to figure out the receptive field properties of the neurons. # + [markdown] id="SLup6lpmwRuI" # ### Neuron models # This section has 2 functions: ClassModel1, ClassModel2. You can think of them as "neurons" that take stimulus as input and give response as output. (Just like what we did in Lab2.) # # ***You don't have to look inside. Just run it.*** # + id="zg9cQn6SUG_v" def ClassModel1(allStim): xDim = 8 kernelX = xDim # spatial size of filter kernelT = 6 # temporal size of filter kernelSize = kernelX * kernelT nFrames = allStim.shape[0] p = 2 th = 180/4 rate = 1/12 base = 00 itau = 1.2 sig=1.6 per=4.5 x = np.arange(1, kernelX+1, 1, float)-(kernelX+1)/2 y = np.arange(kernelT, 0, -1, float) y = y[:,np.newaxis] v1 = np.exp(-x**2/(2*sig**2)) * np.exp(-itau*y) * y**2 * makeSine([kernelT,kernelX], per, th, 0) v1 = v1.flatten() v1 = v1/np.sqrt(np.var(v1,ddof=1)) v2 = np.exp(-x**2/(2*sig**2)) * np.exp(-itau*y) * y**2 * makeSine([kernelT,kernelX], per, th, 90) v2 = v2.flatten() v2 = v2/np.sqrt(np.var(v2,ddof=1)) linResp = base + rate * (np.abs((np.matmul(allStim,v1)))**p + np.abs((np.matmul(allStim,v2)))**p) linResp = linResp/np.max(linResp) spikeResp = (linResp > np.random.rand(nFrames)) spikeResp[0:(kernelT-1)] = 0 # can't use these return spikeResp ############################################################################### def ClassModel2(allStim): xDim = 8 kernelX = xDim # spatial size of filter kernelT = 6 # temporal size of filter kernelSize = kernelX * kernelT nFrames = allStim.shape[0] p = 2 th = 180/4 rate = 0.25 base = 00 itau = 1.2 sig=1.6 per=4.5 x = np.arange(1, kernelX+1, 1, float)-(kernelX+1)/2 y = np.arange(kernelT, 0, -1, float) y = y[:,np.newaxis] v1 = np.exp(-x**2/(2*sig**2)) * np.exp(-itau*y) * y**2 * makeSine([kernelT,kernelX], per, th, 0) v1 = v1.flatten() v1 = v1/np.sqrt(np.var(v1,ddof=1)) v2 = np.exp(-x**2/(2*sig**2)) * np.exp(-itau*y) * y**2 * makeSine([kernelT,kernelX], per, th, 90) v2 = v2.flatten() v2 = v2/np.sqrt(np.var(v2,ddof=1)) v3 = np.exp(-x**2/(2*sig**2)) * np.exp(-itau*y) * y**2 * makeSine([kernelT,kernelX], per, th+90, 0) v3 = v3.flatten() v3 = v3/np.sqrt(np.var(v3,ddof=1)) l1 = (np.matmul(allStim,v1)>0)*(np.matmul(allStim,v1))**p # half squared l2 = (np.matmul(allStim,v2))**p l3 = (np.matmul(allStim,v3))**p linResp = (1+l1)/(1+0.03*l2+0.05*l3) linResp = 15*rate*linResp/np.max(linResp) spikeResp = (linResp > np.random.rand(nFrames)) spikeResp[0:(kernelT-1)] = 0 # can't use these return spikeResp # + [markdown] id="NHwsyvWFPqmM" # ### 2a. 
Generate random stimuli to "probe" the neuron with # + id="j_n9_T7DPpGt" nFrames = 500000 xDim = 8 kernelX = xDim # spatial size of noise stimulus kernelT = 6 # temporal size of noise stimulus kernelSize = kernelX * kernelT allStim = np.random.randn(nFrames, kernelSize) # + id="gF05IzLPQXFY" # Show example frames of the white noise stimuli fig, _ = plt.subplots(4, 4, constrained_layout=True, figsize=(8, 6)) for i,ax in enumerate(fig.axes): ax.imshow(np.reshape(allStim[i,:],(6,8)), cmap='gray') # + [markdown] id="KMXMNhuiHGq9" # ### 2b. Generate spikes from a model neuron # + id="b1hl1xRWHEE4" # This can be toggled for different model neurons; choose from: spikeResp = ClassModel1(allStim) # spikeResp = ClassModel2(allStim) # + id="R51GQYybQ-pE" # Plot the spiking activity for the first 100 frames plt.plot(spikeResp[1:100],'o') plt.title('Spikes', fontsize=16) plt.xlabel('Time (ms)', fontsize=16) # + [markdown] id="s9FOVlnWxill" # ### 2c. Spike-triggered average # + id="KcbIfMxMKFNF" # Compute the spike-triggered average # First find the frames for which the model neuron spiked spikeInd=np.where(spikeResp>0.5)[0] # + id="wIEksdbDKVsD" # Then find the spike-triggered stimuli, i.e., the stimuli for which # the neuron spiked spikeStim = allStim[spikeInd,:] numspikes = len(spikeInd) # + id="4E-kWGauKoWL" # Plot some example stimulus frames of the spike-triggered stimuli # Can you tell by eye what in the stimulus is triggering a spike? fig, _ = plt.subplots(4, 4, constrained_layout=True, figsize=(8, 6)) for i,ax in enumerate(fig.axes): ax.imshow(np.reshape(spikeStim[i,:],(6,8)), cmap='gray') # + id="7hD1NohlT9Z6" # We'll plot the spike-triggered average (STA) # Is it a structured receptive field? sta = np.mean(spikeStim, axis=0) plt.imshow(np.reshape(sta,(6,8)), cmap='gray') # + [markdown] id="52i_evyTxp2C" # ### 2d. Spike-triggered covariance # + id="hcLcfFefUwUI" # The spike-triggered average reveals changes in the mean. # We would like richer characterizations of the neurons by looking # for changes in the variance. # We'll do a simple version of a spike-triggered covariance # This is a Principal Component Analysis, computing the eigenvalues # (variances along each receptive field axes) and the eigenvectors # (the receptive field axes). # Technical note: In papers, we usually first project out the STA (which we # did not do here for simiplicity) thecov = np.matmul(spikeStim.T, spikeStim)/(numspikes-1); (eigval, eigvec) = np.linalg.eig(thecov) # Order the eigval and eigvec idx = eigval.argsort()[::-1] eigval = eigval[idx] eigvec = eigvec[:,idx] # Plot the (sorted) eigenvalues # This tells you which eigenvalues have variance that # is significantly higher or lower than the rest. plt.plot(eigval, 'o') plt.ylabel('Variance', fontsize=16) plt.xlabel('Ordered Eigenvalues', fontsize=16) # + [markdown] id="iM0JDe5xgdPf" # How many appear significant? # + id="hvoWm0xKgwR_" # Plot a corresponding eigenvector that appears significant(e.g., here for # ClassModel1 set to the first, which is indice 0) # This eigenvector corresponds to a filter/feature.receptive fiels that contributes # to the model neuron response. # Some model neurons may have more than one such receptive field (the ordered eigenvalues # above tell you which are significant!) # In one of the models, the last two eigenvalues are significant! 
# For that model, change thenum1 and thenum2 to reflect the last two eigenvalues
# (e.g., 46 and 47)
# Technical note: If the STA was structured, the first eigenvector could just be the
# STA receptive field (possibly negated)
thenum1 = 0
plt.imshow(np.reshape(eigvec[:,thenum1],(6,8)), cmap='gray')

# + id="K0hSIkkwhB7e"
# Plot another eigenvector
# Here set to the second, but change as needed...
# The second may or may not be significant in terms of the variance,
# depending on the model. In one of the models, the last two are significant!
# For that model, change thenum1 and thenum2 to reflect the last two eigenvalues
# (e.g., 46 and 47)
thenum2 = 1
plt.imshow(np.reshape(eigvec[:,thenum2],(6,8)), cmap='gray')

# + [markdown] id="7cvJ4c6rhRQi"
# Is it structured? Do we expect it to be, based on the eigenvalues?

# + id="8P1MXYjOhQZO"
# Look at scatter plots onto two eigenvectors or receptive fields.
# We will compare the responses to the spike-triggered stimuli with
# those to the full stimulus set. We will match the number of stimuli
# for readability of the plots.

# The two receptive fields
basis2 = eigvec[:,thenum2]
basis1 = eigvec[:,thenum1]

# Responses of the two receptive fields to all stimuli
allProj = [np.matmul(allStim,basis2), np.matmul(allStim,basis1)]
# And to the spike-triggered stimuli
spikeProj = [np.matmul(spikeStim,basis2), np.matmul(spikeStim,basis1)]

thenum = min(2000, numspikes)
plt.figure(figsize=(6, 6))
plt.scatter(allProj[0][0:thenum], allProj[1][0:thenum], facecolors='none', edgecolors='b', label='All stim')
plt.scatter(spikeProj[0][0:thenum], spikeProj[1][0:thenum], facecolors='none', edgecolors='r', label='Spike stim')
plt.xlim([-5,5])
plt.ylim([-5,5])
plt.ylabel('Receptive field 1', fontsize=16)
plt.xlabel('Receptive field 2', fontsize=16)
plt.legend()

# + id="zESgrTwliEBs"
# Plot ellipses signifying the variances found by the Principal Component Analysis
angles=np.linspace(0, 2*np.pi, 100)
# Variance along the 2 receptive fields
ellipse = [3*np.sqrt(eigval[thenum2])*np.cos(angles), 3*np.sqrt(eigval[thenum1])*np.sin(angles)]
# Variance along 2 other directions that are not structured
ellipse_other = [3*np.sqrt(eigval[24])*np.cos(angles), 3*np.sqrt(eigval[25])*np.sin(angles)]

# Plot the ellipses
plt.figure(figsize=(6, 6))
plt.scatter(allProj[0][0:thenum], allProj[1][0:thenum], facecolors='none', edgecolors='b', label='All stim')
plt.scatter(spikeProj[0][0:thenum], spikeProj[1][0:thenum], facecolors='none', edgecolors='r', label='Spike stim')
plt.plot(ellipse[0],ellipse[1], 'r', linewidth=3)
plt.plot(ellipse_other[0],ellipse_other[1], 'b', linewidth=3)
plt.xlim([-5,5])
plt.ylim([-5,5])
plt.ylabel('Receptive field 1', fontsize=16)
plt.xlabel('Receptive field 2', fontsize=16)

# + [markdown] id="ZJWFDJMmsZWr"
# ## Question:
# Go through each of the model neurons in this tutorial and describe what you found. Plot the spike-triggered average (STA). In the spike-triggered covariance analysis, which eigenvectors (receptive fields) had a strikingly high or low variance relative to the rest? Plot them. What did the scatter plot signify? Hint: we talked about similar model neuron examples in class when we discussed the spike-triggered covariance!
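# ### Optional: projecting out the STA before the covariance step
# The technical note in the STC cell above mentions that published analyses usually project the STA out of the spike-triggered stimuli before computing the covariance, which was skipped here for simplicity. The cell below is a minimal sketch of that extra step (not part of the original exercise), reusing `spikeStim`, `sta` and `numspikes` defined above.

# +
sta_unit = sta / np.linalg.norm(sta)  # unit vector along the STA
# Remove the component along the STA from every spike-triggered stimulus frame
spikeStim_orth = spikeStim - np.outer(np.matmul(spikeStim, sta_unit), sta_unit)
# Covariance and eigendecomposition of the STA-free stimuli, as in the cell above
thecov_orth = np.matmul(spikeStim_orth.T, spikeStim_orth) / (numspikes - 1)
eigval_orth, eigvec_orth = np.linalg.eig(thecov_orth)
# -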
# # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="FhHJcvK8QV4Y" outputId="5366ea86-b175-42fd-87a7-2eaa460d24c3" import tensorflow as tf print(tf.__version__) # + [markdown] id="Z8KAKynxQV4e" # # Saving and loading models # + [markdown] id="08vh_EHeQV4f" # ## Coding tutorials # #### [1. Saving and loading model weights](#coding_tutorial_1) # #### [2. Model saving criteria](#coding_tutorial_2) # #### [3. Saving the entire model](#coding_tutorial_3) # #### [4. Loading pre-trained Keras models](#coding_tutorial_4) # #### [5. Tensorflow Hub modules](#coding_tutorial_5) # + [markdown] id="psJljMPDQV4g" # *** # # ## Saving and loading model weights # + [markdown] id="I1vw5dC7QV4h" # #### Load and inspect CIFAR-10 dataset # + [markdown] id="WmRAfrLnQV4h" # The CIFAR-10 dataset consists of, in total, 60000 color images, each with one of 10 labels: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. For an introduction and a download, see [this link](https://www.cs.toronto.edu/~kriz/cifar.html). # + colab={"base_uri": "https://localhost:8080/"} id="tYaEDcT3QV4j" outputId="4dbc371e-c62c-4755-bd18-53f807aff6d8" # Import the CIFAR-10 dataset and rescale the pixel values (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() x_train = x_train / 255.0 x_test = x_test / 255.0 # Use smaller subset -- speeds things up x_train = x_train[:10000] y_train = y_train[:10000] x_test = x_test[:1000] y_test = y_test[:1000] # + colab={"base_uri": "https://localhost:8080/", "height": 78} id="2NFSQBPtQV4n" outputId="c1cd2a12-aa6b-4d0e-9ed0-f9f7fadda6f6" # Plot the first 10 CIFAR-10 images import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 10, figsize=(10, 1)) for i in range(10): ax[i].set_axis_off() ax[i].imshow(x_train[i]) # + [markdown] id="8yXDKUOLQV4r" # #### Introduce two useful functions # + id="6vP4aH9fQV4s" # Introduce function to test model accuracy def get_test_accuracy(model, x_test, y_test): test_loss, test_acc = model.evaluate(x=x_test, y=y_test, verbose=0) print('accuracy: {acc:0.3f}'.format(acc=test_acc)) # + id="ecZD5RluQV4w" # Introduce function that creates a new instance of a simple CNN from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D def get_new_model(): model = Sequential([ Conv2D(filters=16, input_shape=(32, 32, 3), kernel_size=(3, 3), activation='relu', name='conv_1'), Conv2D(filters=8, kernel_size=(3, 3), activation='relu', name='conv_2'), MaxPooling2D(pool_size=(4, 4), name='pool_1'), Flatten(name='flatten'), Dense(units=32, activation='relu', name='dense_1'), Dense(units=10, activation='softmax', name='dense_2') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) return model # + [markdown] id="_4L6ZINMQV40" # #### Create simple convolutional neural network classifier # + colab={"base_uri": "https://localhost:8080/"} id="gy8aByLEQV41" outputId="520ad78c-7591-44bb-ec95-276aa439cf83" # Create an instance of the model and show model summary model = get_new_model() model.summary() # + colab={"base_uri": "https://localhost:8080/"} id="D0cUFG--QV44" outputId="51ca8a41-3a6b-483d-e49b-076f55aeab45" # Test accuracy of the untrained model, around 10% (random) get_test_accuracy(model, x_test, y_test) # + 
colab={"base_uri": "https://localhost:8080/"} id="rp36ATF81mRB" outputId="a694657f-992e-48ea-a8fc-e0aec817ef08" get_test_accuracy(model, x_test, y_test) # + [markdown] id="BKs65rmrQV47" # #### Train model with checkpoints # + id="80y5otJhQV48" from tensorflow.keras.callbacks import ModelCheckpoint # + id="7jHrMg2gQV4_" # Create Tensorflow checkpoint object filepath = 'checkpoint/model' checkpoint = ModelCheckpoint(filepath, monitor = 'val_accuracy', save_best_only = True, mode = 'max') # + colab={"base_uri": "https://localhost:8080/"} id="whLx85o8QV5C" outputId="77be383f-369b-4772-c902-9d33d42775c1" # Fit model, with simple checkpoint which saves (and overwrites) model weights every epoch history = model.fit(x = x_train, y = y_train, validation_split=0.1, epochs = 10, callbacks = [checkpoint]) # + id="D209jNsjQV5E" # Have a look at what the checkpoint creates # + id="h-pNaG0SQV5H" # Evaluate the performance of the trained model # + [markdown] id="WUfpiZPbQV5J" # #### Create new model, load weights # + id="lTea2mN3QV5L" # Create a new instance of the (initialised) model, accuracy around 10% again model1 = get_new_model() # + colab={"base_uri": "https://localhost:8080/"} id="jek222rgQV5O" outputId="c4a74cbc-0e4d-430a-b469-5f7dfebef64f" # Load weights -- accuracy is the same as the trained model from tensorflow.keras.models import load_model model2 = load_model(filepath) model2.summary() get_test_accuracy(model2, x_test, y_test) # + [markdown] id="z1IlV3jlQV5R" # #### Clear directory # + id="x8rPJLyHQV5R" # ! rm -r model_checkpoints # + [markdown] id="hAytIX3jQV5U" # *** # # ## Model saving criteria # + [markdown] id="nybq0atkQV5U" # #### Create more customised checkpoint # + id="C_cRCxneQV5V" from tensorflow.keras.callbacks import ModelCheckpoint # + id="241WckCvQV5X" # Create Tensorflow checkpoint object with epoch and batch details checkpoint_5000_path = 'model_checkpoints_5000/checkpoint_{epoch:03d}_{batch:04d}' checkpoint_5000 = ModelCheckpoint(filepath = checkpoint_5000_path, save_weights_only = True, save_freq = 5000, verbose = 1) checkpoint_5000 # + id="ZKzsgIMjQV5b" # Create and fit model with checkpoint model = get_new_model() model.fit(x_train, y_train, validation_data = (x_test, y_test), epochs = 3, batch_size = 10, callbacks = [checkpoint_5000]) # + id="wc7zjxp2QV5d" # Have a look at what the checkpoint creates # + [markdown] id="_pme6veJQV5f" # #### Work with model saving criteria # + id="CjMa5HOOQV5g" # Use tiny training and test set -- will overfit! 
x_train = x_train[:100] y_train = y_train[:100] x_test = x_test[:100] y_test = y_test[:100] # + id="sxawGcUDQV5i" # Create a new instance of untrained model model = get_new_model() # + id="OpnNVn2sQV5l" # Create Tensorflow checkpoint object which monitors the validation accuracy from tensorflow.keras.callbacks import ModelCheckpoint checkpoint_best_path = 'model_best_checkpoints/checkpoint' checkpoint_best = ModelCheckpoint(filepath = checkpoint_best_path, save_best_only = True, save_weights_only = True, save_freq = 'epoch', verbose = 1, monitor = 'val_accuracy') # + id="Zu0lO1m_QV5o" # Fit the model and save only the weights with the highest validation accuracy history = model.fit(x = x_train, y=y_train, epochs=50, validation_data=(x_test,y_test), batch_size=10, callbacks = [checkpoint_best], verbose=1) # + id="85DHia85QV5r" # Plot training and testing curves import pandas as pd df = pd.DataFrame(history.history) df.plot(y=['accuracy', 'val_accuracy']) # + id="MKsAfSmfQV5v" # Inspect the checkpoint directory # + id="Xrx5i7bcQV5y" # Create a new model with the saved weights new_model = get_new_model() new_model.load_weights(checkpoint_best_path) # - get_test_accuracy(model, x_test, y_test) get_test_accuracy(new_model, x_test, y_test) # + [markdown] id="bNF93Fu_QV52" # #### Clear directory # + id="SerdaweWQV53" # ! rm -r model_checkpoints_5000 model_checkpoints_best # + [markdown] id="p2dNGwLzQV55" # *** # # ## Saving the entire model # + [markdown] id="e0pP6oW1QV57" # #### Create checkpoint that saves whole model, not just weights # + id="S3ROR9KfQV58" from tensorflow.keras.callbacks import ModelCheckpoint # + id="VvacO7lDQV5-" # Create Tensorflow checkpoint object # + id="RhmnWVElQV6A" # Create and fit model with checkpoint # + [markdown] id="-CGRXB0rQV6C" # #### Inspect what the checkpoint has created # + id="Wtl3cHhLQV6D" # Have a look at what the checkpoint creates # + id="7okHv-mqQV6F" # Enter variables directory # + id="53_Xuy0pQV6J" # Get the model's test accuracy # + [markdown] id="4lc0c5GhQV6R" # #### Create new model from scratch # + id="NsakDucMQV6S" # Delete model # + id="YvAxxOnHQV6U" from tensorflow.keras.models import load_model # + id="LXDBpAg3QV6X" # Reload model from scratch # + [markdown] id="T0-NSEAlQV6c" # #### Use the .h5 format to save model # + id="nOKto4IkQV6c" # Save the model in .h5 format # + id="7C0vfEnWQV6e" # Inspect .h5 file # + id="G2uU17ozQV6g" # Delete model # + id="B9dtUhsjQV6i" # Reload model from scratch # + [markdown] id="-ca32hu5QV6l" # #### Clear directory # + id="ftRw7KksQV6m" # ! rm -r model_checkpoints # ! rm my_model.h5 # + [markdown] id="CIrP42vlQV6p" # *** # # ## Loading pre-trained Keras models # + [markdown] id="dJIEw9d_QV6q" # #### Import and build Keras ResNet50 model # # Today we'll be using the ResNet50 model designed by a team at Microsoft Research, available through Keras applications. Please see the description on the [Keras applications page](https://keras.io/applications/#resnet) for details. If you continue using it, please cite it properly! The paper it comes from is: # # , , , . "Deep Residual Learning for Image Recognition", 2015. # # In the coding tutorial on Coursera, this model is loaded directly from disk. On Colab, you will load the model using the Keras API. 
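# If you are working from a local copy of the saved model instead (the "loaded directly from disk" case mentioned above), the sketch below shows roughly what that would look like; the file path is a hypothetical placeholder, not a file shipped with this notebook.

# +
import os
from tensorflow.keras.models import load_model

local_resnet_path = 'models/ResNet50.h5'  # hypothetical path to a saved copy of the model
if os.path.exists(local_resnet_path):
    model = load_model(local_resnet_path)
# -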
# + id="i9GUZVHiQV6q" from tensorflow.keras.applications import ResNet50 model = ResNet50(weights='imagenet') # + [markdown] id="kRej5bQdQV6v" # #### Import and preprocess 3 sample images # + id="N7_mAS06ROOG" # Retrieve the image files # !wget -q -O lemon.jpg --no-check-certificate "https://docs.google.com/uc?export=download&id=1JSgQ9qgi9nO9t2aGEk-zA6lzYNUT9vZJ" # !wget -q -O viaduct.jpg --no-check-certificate "https://docs.google.com/uc?export=download&id=1sQzMKmyCR5Tur19lP3n1IIlEMG_o6Mct" # !wget -q -O water_tower.jpg --no-check-certificate "https://docs.google.com/uc?export=download&id=1cPAQD1O6mAiMbg0fmG5HIk8OuO_BSC6J" # + id="bZ8MTLLqQV6w" # Import 3 sample ImageNet images from tensorflow.keras.preprocessing.image import load_img lemon_img = load_img('lemon.jpg', target_size=(224, 224)) viaduct_img = load_img('viaduct.jpg', target_size=(224, 224)) water_tower_img = load_img('water_tower.jpg', target_size=(224, 224)) # + [markdown] id="_aK33Lj-QV6y" # #### Use ResNet50 model to classify images # + id="XQXTBltYQV6z" # Useful function: presents top 5 predictions and probabilities from tensorflow.keras.preprocessing.image import img_to_array from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions import numpy as np import pandas as pd def get_top_5_predictions(img): x = img_to_array(img)[np.newaxis, ...] x = preprocess_input(x) preds = decode_predictions(model.predict(x), top=5) top_preds = pd.DataFrame(columns=['prediction', 'probability'], index=np.arange(5)+1) for i in range(5): top_preds.loc[i+1, 'prediction'] = preds[0][i][1] top_preds.loc[i+1, 'probability'] = preds[0][i][2] return top_preds # + [markdown] id="_fAaFrecQV64" # ##### Image 1: lemon # + id="e3mc6ukbQV64" # Display image # + id="6ytQGmivQV67" # Display top 5 predictions # + [markdown] id="84aRsXzIQV69" # ##### Image 2: viaduct # + id="CIUQ4SouQV6-" # Display image # + id="tSy1QNxZQV7A" # Display top 5 predictions # + [markdown] id="TByZ1C2bQV7B" # ##### Image 3: water tower # + id="RhkcV3TtQV7C" # Display image # + id="pQBVCRH6QV7D" # Display top 5 predictions # + [markdown] id="w3k4XIb9QV7F" # *** # # ## Tensorflow Hub modules # + [markdown] id="VX3o8DULQV7F" # #### Import and build Tensorflow Hub MobileNet v1 model # # Today we'll be using Google's MobileNet v1 model, available on Tensorflow Hub. Please see the description on the [Tensorflow Hub page](https://tfhub.dev/google/imagenet/mobilenet_v1_050_160/classification/4) for details on it's architecture, how it's trained, and the reference. If you continue using it, please cite it properly! The paper it comes from is: # # , , , , , , , : "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", 2017. # # In the coding tutorial on Coursera, this model is loaded directly from disk. On Colab, you will load the model from TensorFlow Hub. 
# + id="3IKZUe3GQV7G" import tensorflow_hub as hub # + id="ZizaX_mqQV7K" # Build Google's Mobilenet v1 model module_url = "https://tfhub.dev/google/imagenet/mobilenet_v1_050_160/classification/4" model = Sequential([hub.KerasLayer(module_url)]) model.build(input_shape=[None, 160, 160, 3]) # + [markdown] id="ASOUEQuyQV7N" # #### Use MobileNet model to classify images # + id="3d5jycF8Q6YH" # Retrieve the image files # !wget -q -O lemon.jpg --no-check-certificate "https://docs.google.com/uc?export=download&id=1JSgQ9qgi9nO9t2aGEk-zA6lzYNUT9vZJ" # !wget -q -O viaduct.jpg --no-check-certificate "https://docs.google.com/uc?export=download&id=1sQzMKmyCR5Tur19lP3n1IIlEMG_o6Mct" # !wget -q -O water_tower.jpg --no-check-certificate "https://docs.google.com/uc?export=download&id=1cPAQD1O6mAiMbg0fmG5HIk8OuO_BSC6J" # + id="W7u6LUc9QV7N" # Import and preprocess 3 sample ImageNet images from tensorflow.keras.preprocessing.image import load_img lemon_img = load_img("lemon.jpg", target_size=(160, 160)) viaduct_img = load_img("viaduct.jpg", target_size=(160, 160)) water_tower_img = load_img("water_tower.jpg", target_size=(160, 160)) # + id="0XLfrTUUQV7P" # Read in categories text file with open('data/imagenet_categories.txt') as txt_file: categories = txt_file.read().splitlines() # + id="hfbCaoShQV7R" # Useful function: presents top 5 predictions import pandas as pd def get_top_5_predictions(img): x = img_to_array(img)[np.newaxis, ...] / 255.0 preds = model.predict(x) top_preds = pd.DataFrame(columns=['prediction'], index=np.arange(5)+1) sorted_index = np.argsort(-preds[0]) for i in range(5): ith_pred = categories[sorted_index[i]] top_preds.loc[i+1, 'prediction'] = ith_pred return top_preds # + [markdown] id="5PQnzp0SQV7T" # ##### Image 1: lemon # + id="29pUkAKaQV7T" # + id="7X9val1IQV7V" # + [markdown] id="nq22XagQQV7Y" # ##### Image 2: viaduct # + id="pI0wl0ZxQV7Y" # + id="XKr8sPJ9QV7a" # + [markdown] id="rj_qOPFIQV7b" # ##### Image 3: water tower # + id="3wjj0sBrQV7b" # + id="wB_nL9AAQV7d" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3-azureml # kernelspec: # display_name: PyCharm (drug-resistance-prediction-cambiohack) # language: python # name: pycharm-f27e5417 # --- # + gather={"logged": 1604307425503} import pandas as pd import warnings warnings.filterwarnings('ignore') import h2o h2o.init(min_mem_size='27G') # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604307447638} binarized_final_df = h2o.import_file("../data/processed/final.binarized_final_monolabel_df.tsv") binarized_final_df.head() # + gather={"logged": 1604307464844} train = h2o.import_file("../data/processed/final.train.tsv") train.head() # + gather={"logged": 1604307478044} test = h2o.import_file("../data/processed/final.test.tsv") test.head() # + gather={"logged": 1604307478173} # Identify predictors and response train_predictor_cols = train.columns train_response_col = "Resistance_Status" train_predictor_cols.remove('SampleID') train_predictor_cols.remove(train_response_col) print("train frame - predictor column: ", train_predictor_cols[0], train_predictor_cols[-1]) print("train frame - response column: ", train_response_col) # Identify predictors and response test_predictor_cols = test.columns test_response_col = "Resistance_Status" test_predictor_cols.remove('SampleID') test_predictor_cols.remove(test_response_col) print("test frame - 
predictor columns: ", test_predictor_cols[0], test_predictor_cols[-1]) print("test frame - response column: ", test_response_col) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604307478223} # For binary classification, response should be a factor train[train_response_col] = train[train_response_col].asfactor() test[test_response_col] = test[test_response_col].asfactor() x = train_predictor_cols y = train_response_col # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604307479662} test.head() # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604307481326} train.head() # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604302962344} from h2o.estimators import H2OPrincipalComponentAnalysisEstimator my_pca = H2OPrincipalComponentAnalysisEstimator( k = 400, # pca_method = "gram_s_v_d", # use_all_factor_levels = True, # pca_method = "glrm", ) # TODO: Do PCA for the entire dataset, it doesn't make sense for test/train only my_pca.train(x=x, y=y, training_frame=binarized_final_df) # Try out with smaller test dataset first # my_pca.train(x=x, y=y, training_frame=test) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604300649929} # save the model model_path = h2o.save_model(model= my_pca, path="../models/my_pca_model", force=True) # print(model_path) # model_path = "../data/processed/my_rf_model/DRF_model_python_1601519217841_129" # load the model # my_rf = h2o.load_model(model_path) # download the model built above to your local machine # my_local_model = h2o.download_model(my_rf, path="../data/processed/my_rf_model") # upload the model that you just downloded above # to the H2O cluster # uploaded_model = h2o.upload_model(my_local_model) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1603964997073} # Generate predictions on a test set (if neccessary) pred = my_pca.predict(binarized_final_df) pred # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604300650178} my_pca.summary() # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604300650477} my_pca.show() # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604300657396} my_pca.model_performance(test) # + [markdown] nteract={"transient": {"deleting": false}} # # Save the PCA dataframe # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604300671783} # Generate predictions train_pca = my_pca.predict(train) h2o.export_file(frame=train_pca, path="../data/processed/final.train.pca400.tsv", force=True) train_pca.head() # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604300695079} # Generate predictions test_pca = my_pca.predict(test) h2o.export_file(frame=test_pca, path="../data/processed/final.test.pca400.tsv", force=True) test_pca.head() # + [markdown] nteract={"transient": {"deleting": false}} # # GLRM Model # + jupyter={"source_hidden": false, 
"outputs_hidden": false} nteract={"transient": {"deleting": false}} from h2o.estimators import H2OGeneralizedLowRankEstimator my_glrm = H2OGeneralizedLowRankEstimator( # k=5, # loss="quadratic", # gamma_x=0.5, # gamma_y=0.5, # max_iterations=700, # recover_svd=True, # init="SVD", # transform="standardize" ) my_glrm.train(training_frame=test) # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} gather={"logged": 1604250529998} h2o.shutdown() # + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- def avoids(): word = ["sex","porn","lips"] k = input("") if k in word: print("False") else: print("True") avoids() # + def uses_all(): vowels = ["a","e","i","o","u"] k = input("") if k == vowels: print("true") else: print("false") uses_all() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- # + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"} # # Welcome # + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"} # Using this notebook, you can see how different keyboard layouts will work for you. # + deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} import os from collections import Counter import numpy as np import matplotlib.pyplot as plt import cv2 # %matplotlib inline # + [markdown] deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} # ## File utils # + deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} def find_files(root_folder, extension): res = [] for root, dirs, files in os.walk(root_folder, topdown=False): for name in files: if name.endswith(extension): print os.path.join(root, name) res.append(os.path.join(root, name)) return res # + deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} def get_char_hist(string): letter_hist = Counter(string.replace('\n', '')) return letter_hist # + deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} def file_to_string(path): with open(path, 'r') as f: string = f.read().replace('\n', '') return string.replace(' ', '') # + deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} def accumulate_hists(h1, h2): for key in h1.keys(): if key in k2.keys(): h1[key] += h2[key] return h1 # + [markdown] deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} # ## Function which overlays the key frequency on the layout and returns the mask # + deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} def generate_layout_map(freqeuncy_list, original_layout_shape): block_size = 60 row_blocks = [13, 13, 11, 10] row_offsets = [0, 90, 105, 135] mask = np.zeros(original_layout_shape) count = 0 for row in range(4): for i in range(row_blocks[row]): mask[row*block_size:(row+1)*block_size, i*block_size + row_offsets[row]:(i+1)*block_size + row_offsets[row]] = freqeuncy_list[row][i] return mask # + [markdown] deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} # ## Function to map the character histogram to 
frequency list based on layout # + deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} def hist_to_freq_list(hist, layout): freq_list = [np.zeros((13)), np.zeros((13)), np.zeros((11)), np.zeros((10))] for row_ind, row in enumerate(layout): row_list = [] count = 0 for i in range(0, len(row), 2): freq_list[row_ind][count] = hist[layout[row_ind][i]] + hist[layout[row_ind][i+1]] count += 1 return freq_list # + [markdown] deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} # ## layouts # + deletable=true editable=true ein.tags="worksheet-0" slideshow={"slide_type": "-"} workman = ['`~1!2@3#4$5%6^7&8*9(0)-_=+', 'qQdDrRwWbBjJfFuUpP;:[{]}\|', 'aAsShHtTgGyYnNeEoOiI\'\"', 'zZxXmMcCvVkKlL,<.>/?'] dvorak = ['`~1!2@3#4$5%6^7&8*9(0)[{]}', '\'\",<.>pPyYfFgGcCrRlL/?=+\|', 'aAoOeEuUiIdDhHtTnNsS-_', ';:qQjJkKxXbBmMwWvVzZ'] colemac = ['`~1!2@3#4$5%6^7&8*9(0)-_=+', 'qQwWfFpPgGjJlLuUyY;:[{]}\|', 'aArRsStTdDhHnNeEiIoO\'\"', 'zZxXcCvVbBkKmM,<.>/?'] qwerty = ['`~1!2@3#4$5%6^7&8*9(0)-_=+', 'qQwWeErRtTyYuUiIoOpP[{]}\|', 'aAsSdDfFgGhHjJkKlL;:\'\"', 'zZxXcCvVbBnNmM,<.>/?'] # + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"} # # Let's see the results # + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"} # ### Read all of the files you are interested # + ein.tags="worksheet-0" slideshow={"slide_type": "-"} files = find_files('~/git/', '.cpp') # + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"} # ### Create a string from all of the files # + ein.tags="worksheet-0" slideshow={"slide_type": "-"} string = '' for f in files: if f.split('/')[-1].startswith('.'): continue string += file_to_string(f) # + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"} # ### Create the character histogram # + ein.tags="worksheet-0" slideshow={"slide_type": "-"} h = get_char_hist(string) # + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"} # ### Load the keyboard layout images and threshold them # + ein.tags="worksheet-0" slideshow={"slide_type": "-"} qwerty_image = cv2.imread('../layouts/KB_US-QWERTY_with_AltGr.png', 0) dvorak_image = cv2.imread('../layouts/KB_US-Dvorak_with_AltGr.png', 0) colemac_image = cv2.imread('../layouts/KB_US-Colemak_with_AltGr.png', 0) workman_image = cv2.imread('../layouts/KB_US-Workman_with_AltGr.png', 0) qwerty_image[np.where(qwerty_image > 100)] = 255 dvorak_image[np.where(dvorak_image > 100)] = 255 colemac_image[np.where(colemac_image > 100)] = 255 workman_image[np.where(workman_image > 100)] = 255 # + [markdown] ein.tags="worksheet-0" slideshow={"slide_type": "-"} # ### Make the stroke map of the keys and overlay them on the layout images # + ein.tags="worksheet-0" slideshow={"slide_type": "-"} smoothing_kernel = np.ones((11, 11), np.float32)/121 f_list = hist_to_freq_list(h, colemac) mask = generate_layout_map(f_list, (300,900)) mask = (mask - np.min(mask)) / (np.max(mask) - np.min(mask)) mask_smooth = cv2.filter2D(mask, -1, smoothing_kernel) cv2.imwrite('colemac.png', mask_smooth * colemac_image) f_list = hist_to_freq_list(h, dvorak) mask = generate_layout_map(f_list, (300,900)) mask = (mask - np.min(mask)) / (np.max(mask) - np.min(mask)) mask_smooth = cv2.filter2D(mask, -1, smoothing_kernel) cv2.imwrite('dvorak.png', mask_smooth * dvorak_image) f_list = hist_to_freq_list(h, workman) mask = generate_layout_map(f_list, (300,900)) mask = (mask - np.min(mask)) / (np.max(mask) - np.min(mask)) mask_smooth = cv2.filter2D(mask, -1, smoothing_kernel) cv2.imwrite('workman.png', 
mask_smooth * workman_image) f_list = hist_to_freq_list(h, qwerty) mask = generate_layout_map(f_list, (300,900)) mask = (mask - np.min(mask)) / (np.max(mask) - np.min(mask)) mask_smooth = cv2.filter2D(mask, -1, smoothing_kernel) cv2.imwrite('qwerty.png', mask_smooth * qwerty_image) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np from scipy import interp import matplotlib.pyplot as plt from scipy.interpolate import RegularGridInterpolator # %matplotlib inline import time import sqlite3 import pandas as pd from scipy.interpolate import interp1d import random import itertools from datetime import datetime from scipy import interp from scipy.interpolate import RegularGridInterpolator plt.style.use('ggplot') # ## Datas PATH_DATA = '/Users/mathildebadoual/code/ev_controller/data/' PATH_REPORT = '/Users/mathildebadoual/code/ev_controller/report/' data = pd.read_csv(PATH_DATA + 'price_demand.csv') data.max() plt.figure(figsize=(15, 8)) for day in range(1, 31): for month in range(1, 2): plt.plot(range(0, 24), data[(data['day'] == day) & (data['Month'] == month) & (data['Year'] == 2013)]['Price']) plt.xlabel('hour of the day') plt.ylabel('price (/kWh)') plt.title('Prices for days in 2013 (CAISO)') plt.savefig(PATH_REPORT + 'prices_day.png') def satisfaction_function(x): return c * (1 - np.exp(alpha * (1 - x)))/(1 - np.exp(alpha)) # ## Markov Chain: EV data # + data_ev = pd.read_csv(PATH_DATA + 'EV_data.csv') ID_26 = data_ev['ID:26'] #Car_ID:26 ID_370 = data_ev['ID:370'] #Car_ID:370 ID_545 = data_ev['ID:545'] #Car_ID:545 ID_661 = data_ev['ID:661'] #Car_ID:661 ID_4767 = data_ev['ID:4767'] #Car_ID:4767 #Creating and resizing EV_array into (ID , Week, DOW, HOD) ev_array = np.array([ID_26, ID_370, ID_545, ID_661, ID_4767]) ev_array.resize(5,47,7,24) # + array = np.zeros((len(data.columns[1:])*len(data_ev), 5)) j = 0 for car_index, car in enumerate(data_ev.columns[1:]): for i in range(len(data_ev)): value = datetime.strptime(data_ev['localminute'].iloc[i], '%m/%d/%Y %H:%M') week_of_year = int(value.strftime('%W')) day_of_week = int(value.strftime('%w')) hour_of_day = int(value.strftime('%H')) array[j, :] = [car_index, week_of_year, day_of_week, hour_of_day, data_ev[car].iloc[i]] j += 1 data_ev_resized = pd.DataFrame(array, columns=['car_index', 'week_of_year', 'day_of_week', 'hour_of_day', 'energy_consumption']) # + presence_list = [] for element in data_ev_resized['energy_consumption']: if element <= 0.01: presence_list.append(0) else: presence_list.append(1) data_ev_resized['presence'] = presence_list # - #Creating Markov chain! #Considering (5 x 47 x 7 ) different data arrays, we find the probablility #matrix pij(k), i for charging, j for not charging. 
#At each time step we count the p_list =[ ] #initializing list containing the probability matrices for k in range(23): # 23 cuz probability doesn't count for last timestep Num_charging = [0, 0, 0] #[num_charging now, num_STILL charging at k+1, num_n0t charging at k+1] Num_not_charging = [0, 0, 0] #[num_not charging now, num_charging next k, num STILL not charging next k] for j in range(47): for v in range(7): if any(i >= 0.3 for i in ev_array[:,j,v,k]): #I use 0.3 as charging benchmark Num_charging[0] += 1 if any(i >= 0.3 for i in ev_array[:,j,v,k+1]): Num_charging[1] += 1 else: Num_charging[2] += 1 else: Num_not_charging[0] += 1 if any(i >= 0.3 for i in ev_array[:,j,v,k+1]): Num_not_charging[1] += 1 else: Num_not_charging[2] += 1 p_ij = np.zeros((2,2)) p_ij[0,0] = Num_charging[1]/Num_charging[0] #probability of moving from i to i p_ij[0,1] = Num_charging[2]/Num_charging[0] #probability of moving from i to j p_ij[1,0] = Num_not_charging[1]/Num_not_charging[0] p_ij[1,1] = Num_not_charging[2]/Num_not_charging[0] p_list.append(p_ij) # + P = np.array(p_list) fs = 14 plt.figure(dpi = 80, figsize = (15,8), tight_layout = False) plt.plot(np.linspace(0,22,23), P[:,0,0], label = 'plugged to plugged') plt.plot(np.linspace(0,22,23), P[:,1,1], label = 'unplugged to unplugged') plt.xticks(np.linspace(0,22,23)) plt.ylabel('probability', fontsize = fs-2) plt.xlabel('hour of day', fontsize = fs-2) plt.title('Probability to switch mode for the charging station') plt.legend() plt.show() plt.savefig(PATH_REPORT + 'probability_switch_station.png') # - # ## Regular DeterministicControl # + ## SET PARAMETERS num_station = 2 ns = 10 N = 24 c = 100 alpha = 0.003 delta_t = 1 u_max = 0.3 U_max = 0.45 # + ## GET DATA prices_day = data[(data['day'] == 1) & (data['Month'] == 1) & (data['Year'] == 2013)]['Price'] prices_interp = interp1d(np.linspace(0, 23, 24), prices_day) prices = np.flip(prices_interp(np.linspace(0, 23, N)), axis=0) presence = np.zeros((num_station, 24)) #presence[i, :] = data_ev_resized[(data_ev_resized['car_index'] == 1) & (data_ev_resized['day_of_week'] == i+1) & (data_ev_resized['week_of_year'] == 15)]['presence'] presence[0, :] = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) presence[1, :] = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]) plt.plot(presence[0, :]) plt.plot(presence[1, :]) # + SOC_shape = tuple([ns for i in range(num_station)]) # This creates a tuple is needed to for any np.reshape operation later SOC_grid = np.array(list(itertools.product(np.linspace(0, 1, ns), repeat = num_station))) # repeat added... SOC_tuple = tuple([np.linspace(0,1,ns) for i in range(num_station)]) # This is needed for the interpolation later # - u_star = np.zeros((num_station, len(SOC_grid), N)) V = 100000 * np.ones((len(SOC_grid), N+1)) # + ## SOLVE DYNAMIC PROGRAM start = time.time() # Boundary Condition of Value Function (Principle of Optimality) V[-1,N] = 0 #This ensures that the progam only accepts SOC of 1 for all stations at time N. 
# Iterate backward in time for k in range(N-1, 0, -1): # Iterate over SOC for idx in range(0, len(SOC_grid)): # Find dominant bounds for u_batt lb = 0 ub = [min(u_max, (1.0 - SOC_grid[idx, i])/delta_t) for i in range(num_station)] # Grid Battery Power between dominant bounds u_batt_init = np.array(list(itertools.product(np.linspace(lb, ub[0], ns), np.linspace(lb, ub[1], ns)))) #initializing u_batt #for i in range(num_station): # u_batt_init[:,i] = u_batt_init[:,i] * (ub[i] / u_max) #This ensures that the upperbounds are respected if ub < u_max u_batt_delete = [] # This list is used to delete u_batt values that break the u_max constraint for i in range(len(u_batt_init)): if u_batt_init[i].sum() > U_max: u_batt_delete.append(i) u_batt_grid = np.delete(u_batt_init, u_batt_delete, 0) #This is the real u_batt_grid used in the rest of the code #TODO: enlever ceux qui ne respectent pas sum(u_i) <= Umax # compute next SOC using dynamics SOC_nxt = SOC_grid[idx] + delta_t * u_batt_grid # Cost (no satisfaction) satisfaction_k = np.array([np.sum(c * (1 - np.exp(alpha * (1 - SOC_nxt[i, :])))/(1 - np.exp(alpha))) for i in range(u_batt_grid.shape[0])]) g_k = prices[k] * u_batt_grid.sum(axis = 1) + satisfaction_k # Compute value function at nxt time step (need to interpolate) V_nxt = 100000*np.ones((len(SOC_nxt),1)) V_temp = V[:,k+1].reshape(SOC_shape) #V[k+1] has to be reshaped to work in interpolation V_interp_function = RegularGridInterpolator(SOC_tuple, V_temp, method = 'linear') #n-dimensional interpolation function V_nxt = V_interp_function(SOC_nxt) # Value Function (Principle of Optimality) V[idx, k] = (delta_t * g_k + V_nxt.T).min() ind = np.argmin(delta_t * g_k + V_nxt.T) # Save Optimal Control u_star[:, idx, k] = u_batt_grid[ind] # DP Timer end = time.time() print(str(end - start) + " seconds") # + ## Simulate Results random.seed(10) # Preallocate SOC_sim = np.zeros((num_station, N)) u_batt_sim = np.zeros((num_station, N)) J_sim = np.zeros((N)) # Initialize SOC_0 = np.array([0.2, 0.3]) SOC_sim[:, 0] = SOC_0 # Simulate PHEV Dynamics for k in range(0, N-1): # # Use optimal battery power, for given SOC # u_batt_sim[:, k] = np.reshape(interp(SOC_sim[:, k].flatten(), SOC_grid.flatten(), u_star[:, :, k].flatten()), (num_station,)) # for i in range(num_station): # u_batt_sim[i, k] = presence[i, k] * u_batt_sim[i, k] # Use optimal battery power, for given SOC for i in range(num_station): u_temp_sim = u_star[i,:,k].reshape(SOC_shape) u_interp_func= RegularGridInterpolator(SOC_tuple, u_temp_sim, method = 'linear') u_batt_sim[i, k] = presence[i,k] * u_interp_func(SOC_sim[:,k]) #interpolation necessary for selectign each element of u_batt_sim! 
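# The realized stage cost accumulated below mirrors the DP stage cost g_k:
# electricity price times the total charging power delivered, plus the satisfaction term.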
# Fuel Consumption satisfaction_k = np.sum(c * (1 - np.exp(alpha * (1 - SOC_sim[:, k])))/(1 - np.exp(alpha))) J_sim[k] = prices[k] * np.sum(u_batt_sim[:, k]) + satisfaction_k # Time-step SOC dynamics SOC_sim[:, k+1] = (SOC_sim[:, k] + delta_t * u_batt_sim[:, k]) for i in range(num_station): if presence[i, k] == 0 and presence[i, k+1] == 1: print('switch') SOC_sim[i, k+1] = 0.3 #SOC_sim[i, k+1] = random.uniform(0, 1) else: SOC_sim[i, k+1] = presence[i, k] * SOC_sim[i, k+1] # + t = np.linspace(0, 23, N) ## Plot Simulation Results plt.figure(num=3, figsize=(10, 18), dpi=80, facecolor='w', edgecolor='k') plt.subplot(3,1,1) plt.plot(t, SOC_sim[0], label='station 1') plt.plot(t, SOC_sim[1], label='station 2') #plt.plot(t, SOC_sim[2], label='station 3') plt.title('State Of Charge versus time') plt.xlabel('time') plt.ylabel('SOC') plt.legend() # SOC versus time plt.subplot(3,1,2) plt.plot(t, [np.sum(J_sim[:k]) for k in range(0, N)]) # plot speed plt.title('Accumulated cost versus time') plt.xlabel('time') plt.ylabel('Accumulated cost') # Accumulated fuel consumption [g] versus time plt.subplot(3,1,3) plt.plot(t, u_batt_sim[0], label='station 1') plt.plot(t, u_batt_sim[1], label='station 2') #plt.plot(t, u_batt_sim[2], label='station 3') plt.plot(t, prices/40 - 0.8, label='prices') plt.title('Battery power [kW] versus time') plt.xlabel('time') plt.legend() # Battery and engine power [kW] versus time plt.savefig(PATH_REPORT + 'results_deterministic.png') print ("Total cost of charging vehicles = ", J_sim.sum(), "USD") # - # ## Stochastic Control # + ## SET PARAMETERS num_station = 2 ns = 10 N = 24 c = 100 alpha = 0.003 delta_t = 1 u_max = 0.3 U_max = 0.45 # + ## GET DATA prices_day = data[(data['day'] == 1) & (data['Month'] == 1) & (data['Year'] == 2013)]['Price'] prices_interp = interp1d(np.linspace(0, 23, 24), prices_day) #prices = prices_interp(np.linspace(0, 23, N)) prices = np.flip(prices_interp(np.linspace(0, 23, N)), axis=0) presence = np.zeros((num_station, 24)) #presence[i, :] = data_ev_resized[(data_ev_resized['car_index'] == 1) & (data_ev_resized['day_of_week'] == i+1) & (data_ev_resized['week_of_year'] == 15)]['presence'] presence[0, :] = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) presence[1, :] = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1]) plt.plot(presence[0, :]) plt.plot(presence[1, :]) # + SOC_shape = tuple([ns for i in range(num_station)]) # This creates a tuple is needed to for any np.reshape operation later SOC_grid = np.array(list(itertools.product(np.linspace(0, 1, ns), repeat = num_station))) # repeat added... 
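# SOC_grid enumerates all ns**num_station joint states of the problem:
# each row holds one candidate state of charge per station.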
SOC_tuple = tuple([np.linspace(0,1,ns) for i in range(num_station)]) # This is needed for the interpolation later # - u_star = np.zeros((num_station, len(SOC_grid), N)) V = 10000000 * np.ones((len(SOC_grid), N+1)) V[ :, N] = satisfaction_function(SOC_grid[:,0]) #+ satisfaction_function(SOC_grid[:,1]) # + ## SOLVE DYNAMIC PROGRAM start = time.time() # Boundary Condition of Value Function (Principle of Optimality) # V[:, :, N] = np.zeros((num_station, len(SOC_grid))) # Iterate backward in time for k in range(N-1, 0, -1): # Iterate over SOC for idx in range(0, len(SOC_grid)): # Find dominant bounds for u_batt # Find dominant bounds for u_batt lb = 0 ub = [min(u_max, (1.0 - SOC_grid[idx, i])/delta_t) for i in range(num_station)] # Grid Battery Power between dominant bounds u_batt_init = np.array(list(itertools.product(np.linspace(lb, ub[0], ns), np.linspace(lb, ub[1], ns)))) #initializing u_batt u_batt_init = u_batt_init * ub/u_max #This ensures that the upperbounds are respected if ub < u_max u_batt_delete = [] # This list is used to delete u_batt values that break the u_max constraint for i in range(len(u_batt_init)): if u_batt_init[i].sum() > U_max: u_batt_delete.append(i) u_batt_grid = np.delete(u_batt_init, u_batt_delete, 0) #This is the real u_batt_grid used in the rest of the code #TODO: enlever ceux qui ne respectent pas sum(u_i) <= Umax # compute next SOC using dynamics SOC_nxt_00 = 0 * u_batt_grid SOC_nxt_11 = SOC_grid[idx] + delta_t * u_batt_grid SOC_nxt_01 = random.uniform(0, 1) * np.ones(u_batt_grid.shape)# tbd SOC_nxt_10 = 0 * u_batt_grid # Cost of satisfaction satisfaction_k_0 = np.zeros((u_batt_grid.shape[0])) satisfaction_k_1 = np.array([np.sum(c * (1 - np.exp(alpha * (1 - SOC_grid[idx])))/(1 - np.exp(alpha))) for i in range(u_batt_grid.shape[0])]) #satisfaction_k_01 = np.array([np.sum(c * (1 - np.exp(alpha * (1 - SOC_nxt_01[i, :])))/(1 - np.exp(alpha))) for i in range(u_batt_grid.shape[0])]) #satisfaction_k_10 = np.zeros((u_batt_grid.shape[0])) g_k_00 = prices[k] * np.array([np.sum(u_batt_grid[i, :]) for i in range(u_batt_grid.shape[0])]) + satisfaction_k_0 g_k_11 = prices[k] * np.array([np.sum(u_batt_grid[i, :]) for i in range(u_batt_grid.shape[0])]) + satisfaction_k_1 g_k_01 = prices[k] * np.array([np.sum(u_batt_grid[i, :]) for i in range(u_batt_grid.shape[0])]) + satisfaction_k_1 g_k_10 = prices[k] * np.array([np.sum(u_batt_grid[i, :]) for i in range(u_batt_grid.shape[0])]) + satisfaction_k_0 if k == 23: g_k = g_k_11 else: g_k = P[k, 0, 0] * g_k_00 + P[k, 1, 1] * g_k_11 + P[k, 0, 1] * g_k_01 + P[k, 1, 0] * g_k_10 # Compute value function at nxt time step (need to interpolate) V_nxt_init = 10000000*np.ones((len(SOC_nxt),1)) #initializing V_nxt V_temp = V[:,k+1].reshape(SOC_shape) #V[k+1] has to be reshaped to work in interpolation V_interp_function = RegularGridInterpolator(SOC_tuple, V_temp, method = 'linear') #n-dimensional interpolation function V_nxt_00 = V_interp_function(SOC_nxt_00) V_nxt_11 = V_interp_function(SOC_nxt_11) V_nxt_01 = V_interp_function(SOC_nxt_01) V_nxt_10 = V_interp_function(SOC_nxt_10) if k == 23: V_nxt = V_nxt_11 else: V_nxt = P[k, 0, 0] * V_nxt_00 + P[k, 1, 1] * V_nxt_11 + P[k, 0, 1] * V_nxt_01 + P[k, 1, 0] * V_nxt_10 # Value Function (Principle of Optimality) V[idx, k] = (delta_t * g_k + V_nxt).min() ind = np.argmin(delta_t * g_k + V_nxt) # Save Optimal Control u_star[:, idx, k] = u_batt_grid[ind] # DP Timer end = time.time() print(str(end - start) + " seconds") # + ## Simulate Results random.seed(10) # Preallocate SOC_sim = np.zeros((num_station, 
N)) u_batt_sim = np.zeros((num_station, N)) J_sim = np.zeros((N)) # Initialize SOC_0 = np.array([0.2, 0.3]) SOC_sim[:, 0] = SOC_0 # Simulate PHEV Dynamics for k in range(0, N-1): # # Use optimal battery power, for given SOC # u_batt_sim[:, k] = np.reshape(interp(SOC_sim[:, k].flatten(), SOC_grid.flatten(), u_star[:, :, k].flatten()), (num_station,)) # for i in range(num_station): # u_batt_sim[i, k] = presence[i, k] * u_batt_sim[i, k] # Use optimal battery power, for given SOC for i in range(num_station): u_temp_sim = u_star[i,:,k].reshape(SOC_shape) u_interp_func= RegularGridInterpolator(SOC_tuple, u_temp_sim, method = 'linear') u_batt_sim[i, k] = presence[i,k] * u_interp_func(SOC_sim[:,k]) #interpolation necessary for selectign each element of u_batt_sim! # Fuel Consumption satisfaction_k = np.sum(c * (1 - np.exp(alpha * (1 - SOC_sim[:, k])))/(1 - np.exp(alpha))) J_sim[k] = prices[k] * np.sum(u_batt_sim[:, k]) + satisfaction_k # Time-step SOC dynamics SOC_sim[:, k+1] = (SOC_sim[:, k] + delta_t * u_batt_sim[:, k]) for i in range(num_station): if presence[i, k] == 0 and presence[i, k+1] == 1: print('switch') SOC_sim[i, k+1] = 0.3 #SOC_sim[i, k+1] = random.uniform(0, 1) else: SOC_sim[i, k+1] = presence[i, k] * SOC_sim[i, k+1] # + t = np.linspace(0, 23, N) ## Plot Simulation Results plt.figure(num=3, figsize=(10, 18), dpi=80, facecolor='w', edgecolor='k') plt.subplot(3,1,1) plt.plot(t, SOC_sim[0], label='station 1') plt.plot(t, SOC_sim[1], label='station 2') #plt.plot(t, SOC_sim[2], label='station 3') plt.title('State Of Charge versus time') plt.xlabel('time') plt.ylabel('SOC') plt.legend() # SOC versus time plt.subplot(3,1,2) plt.plot(t, [np.sum(J_sim[:k]) for k in range(0, N)]) # plot speed plt.title('Accumulated cost versus time') plt.xlabel('time') plt.ylabel('Accumulated cost') # Accumulated fuel consumption [g] versus time plt.subplot(3,1,3) plt.plot(t, u_batt_sim[0], label='station 1') plt.plot(t, u_batt_sim[1], label='station 2') #plt.plot(t, u_batt_sim[2], label='station 3') plt.plot(t, prices/40 - 0.8, label='prices') plt.title('Battery power [kW] versus time') plt.xlabel('time') plt.legend() # Battery and engine power [kW] versus time plt.savefig(PATH_REPORT + 'results_stochastic.png') # - # ## Tests # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lendo dados de geociência # ## License # # All content can be freely used and adapted under the terms of the # [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/). # # ![Creative Commons License](https://i.creativecommons.org/l/by/4.0/88x31.png) # ## Imports # # Coloque **todos** os `import` na célula abaixo. Não se esqueça do `%matplotlib inline` para que os gráficos apareçam no notebook. import numpy as np import matplotlib.pyplot as plt import math # %matplotlib inline # ## IMPORTANTE # # Agora que vocês sabem técnicas de programação defensiva, eu espero que todo o código que vocês fizerem abaixo utilizem essas técnicas. Crie docstrings para suas funções, cheque as entradas (quando for possível) e cheque as saídas. **Não esqueçam dos comentários**. # ## Temperatura no Rio de Janeiro # # O arquivo `data/23.31S-42.82W-TAVG-Trend.txt` contém dados de temperatura média mensal para a cidade do Rio de Janeiro. O arquivo também contém médias móveis anual, 5, 10 e 20 anos. 
Esses dados foram baixados do site Berkeley Earth (http://berkeleyearth.lbl.gov/locations/23.31S-42.82W). # # ### Tarefa # # Faça duas funções, uma que lê os dados de temperatura mensal, outra que lê os dados da média móvel anual. # As duas funções devem: # # * Receber como entrada **somente** o nome do arquivo de dados. # * Retornar duas listas: uma com as datas referentes aos dados e outra com os dados de temperatura. # * As datas retornadas devem ser em anos decimais. Ex: Janeiro de 1984 seria 1984.0833333333333 (1984 + 1/12). # * Datas sem valores de temperatura (NaN) devem ser ignoradas (não incluidas nas listas). # # Utilize suas funções para carregar os dados e fazer um gráfico da temperatura média mensal e média movel anual pelo tempo. # + arquivo = open('data/23.31S-42.82W-TAVG-Trend.txt') # criação das listas data = [] mes = [] data_mes = [] mes_decimal = [] temp = [] temp_real = [] for linha in arquivo: #para rodar cada linha do arquivo if linha[0] != "%": # "!" igual a diferente de split = linha.split() #para separar cada componente das linhas if len(split) != 0: #para excluir valores vazios if split[2] != "NaN": # para excluir "NaN" data.append(float(split[0])) #para adicionar tds os demais valotes mes.append(float(split[1])) temp.append(float(split[2])) for j in mes: #para transformar meses em numeros decimais j = j/12 mes_decimal.append(float(j)) for i in range(len(data)): #para somar os decimais de meses aos anos referentes x = data[i] + mes_decimal[i] data_mes.append(x) for i in range(len(temp)): #para somar as temperaturas a media fornecida y = temp[i]+24.01 temp_real.append(y) arquivo.close() #fechar arquivo # + arquivo = open('data/23.31S-42.82W-TAVG-Trend.txt') # listas para criar a media movel anual data = [] mes = [] data_ano = [] mes_decimal = [] temp = [] temp_med_ano = [] #lista para colocar os dados de temperatura media anual for linha in arquivo: if linha[0] != "%": split = linha.split() if len(split) != 0: if split[4] != "NaN": data.append(float(split[0])) mes.append(float(split[1])) temp.append(float(split[4])) for j in mes: j = j/12 mes_decimal.append(float(j)) for i in range(len(data)): x = data[i] + mes_decimal[i] data_ano.append(x) for k in range(len(temp)): y = temp[k]+24.01 temp_med_ano.append(y) arquivo.close() # - plt.figure(figsize = [10, 4]) #para criar uma figura em branco plt.plot(data_mes, temp_real, '.k', label='Média Mensal') #para colocar os dados plt.plot(data_ano, temp_med_ano, 'r', label='Média Anual', linewidth = 2) #para colocar os dados plt.title("Médias de Temperatura") #título do grafico plt.xlabel(u"Ano") #titulo do eixo X plt.ylabel("Temperatura em Celsius") #titulo do eixo Y plt.xlim(1828,2017) #definição do eixo x plt.legend(loc='lower right', fontsize='large') #posicionamento da legenda # ### Resultado esperado # # O gráfico final deve ser parecido com o abaixo: # # ![images/media-mensal-temp-rio.png](images/media-mensal-temp-rio.png) # ### Tarefa # # Faça uma função que calcule a temperatura média anual a partir das temperaturas mensais. A sua função deve: # # * Receber como entrada a lista das datas e a lista das temperaturas mensais. # * Retornar duas listas: uma com os anos e outra com as temperaturas médias correspondetes. # * Anos que não contem dados de todos os 12 meses devem ser ignorados (não incluídos nas listas retornadas). # # Utilize sua função para calcular a média anual. Faça um gráfico da temperatura média anual por ano junto com a média móvel anual. 
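# A possible sketch for this task (an addition of mine, not the notebook's own solution; the function name `media_anual` and its internals are illustrative): group the monthly values by the integer part of the decimal year and keep only years that have all 12 months.

# +
def media_anual(datas, temperaturas):
    """Return (anos, medias): annual mean temperatures for years with all 12 monthly values."""
    por_ano = {}
    for data, temperatura in zip(datas, temperaturas):
        # shift back half a month so December (year + 12/12) stays in its own year
        ano = math.floor(data - 1 / 24)
        por_ano.setdefault(ano, []).append(temperatura)
    anos, medias = [], []
    for ano in sorted(por_ano):
        valores = por_ano[ano]
        if len(valores) == 12:  # ignore incomplete years
            anos.append(ano)
            medias.append(sum(valores) / 12)
    return anos, medias
# -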
# # **Dica**: A função `math.floor` retorna o número inteiro que precede um número real. Ex: `math.floor(1984.23) == 1984` # + arquivo = open('data/23.31S-42.82W-TAVG-Trend.txt') ano = [] temp_ano = [] for x in range(len(data_mes)): #rodar cada linha da arquivo y = x+11 #y é igual a 11 posicoes a frente do item x if y < len(data_mes): #para que y não ultrapasse o tamanho total da lista if math.floor(data_mes[x]) == math.floor(data_mes[y]): #para não rodar 12 elementos media = (temp_real[x]+temp_real[x+1]+temp_real[x+2]+temp_real[x+3]+temp_real[x+4]+temp_real[x+5]+temp_real[x+6]+temp_real[x+7]+temp_real[x+8]+temp_real[x+9]+temp_real[x+10]+temp_real[x+11])/12 # para fazer a media temp_ano.append(media) #para adicionar a temperatura media na lista ano.append(data_mes[x]) #para adicionar o ano na outra lista arquivo.close() # - plt.figure(figsize = [10, 4]) # figura em branco plt.plot(data_ano, temp_med_ano, 'r', label='Média Móvel Anual') #para adicionar os dados plt.plot(ano, temp_ano, 'ok', label='Média Anual', linewidth = 2) #para adicionar os dados plt.title("Médias de Temperatura") #título plt.xlabel(u"Ano") #titulo do eixo X plt.ylabel("Temperatura Média") #titulo do eixo Y plt.xlim(1828,2017) plt.legend(loc='lower right', fontsize='large') #local da legenda # ### Resultado esperado # # O gráfico final deve ser parecido com o abaixo: # # ![images/media-anual-temp-rio.png](images/media-anual-temp-rio.png) # ## Tarefa Bônus # # Salve os dados da média anual em um arquivo CSV (comma separated values) chamado `temp-media-anual.csv`. Os valores devem ser separados por `,`. A primeira coluna deve conter os anos e a segunda as temperaturas. Esse arquivo deve estar presente em seu repositório (dê `git add` nele). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + pycharm={"name": "#%%\n"} import os import tarfile import urllib # https://github.com/ageron/handson-ml2 import matplotlib.pyplot as plt # + pycharm={"name": "#%%\n"} # DOWNLOAD_ROOT = "https://github.com/ageron/handson-ml2" HOUSING_PATH = os.path.join("datasets","housing") HOUSING_URL = "C:/Users/Donald/Desktop/handson-ml2/datasets/housing/housing.tgz" # + pycharm={"name": "#%%\n"} def fetch_housing_data(housing_url=HOUSING_URL,housing_path=HOUSING_PATH): os.makedirs(housing_path,exist_ok=True) tgz_path = os.path.join(housing_path,"housing.tgz") urllib.request.urlretrieve(housing_url,housing_path) housing_tgz = tarfile.open(tgz_path) housing_tgz.extractall(path=housing_path) housing_tgz.close() # + pycharm={"name": "#%%\n"} import pandas as pd def load_housing_data(housing_path=HOUSING_PATH): csv_path = os.path.join(housing_path,"housing.csv") return pd.read_csv("./housing.csv") # + pycharm={"name": "#%%\n"} housing = load_housing_data() housing.head() # + pycharm={"name": "#%%\n"} housing.info() # + pycharm={"name": "#%%\n"} housing.describe() # + pycharm={"name": "#%%\n"} housing.hist(bins=50,figsize=(20,15)) plt.show() # + pycharm={"name": "#%%\n"} housing.median() # + pycharm={"name": "#%%\n"} housing.plot() # + pycharm={"name": "#%%\n"} #now the technical parts come in, to plot the instances into test and training sets import numpy as np def split_test_and_train(data,test_ratio): shuffled_indices = np.random.permutation(len(data)) test_set_size = int(len(data) * test_ratio) test_indices = shuffled_indices[:test_set_size] train_indices = 
shuffled_indices[test_set_size:] return data.iloc[train_indices],data.iloc[test_indices] # + pycharm={"name": "#%%\n"} a = split_test_and_train(housing,test_ratio=0.2) print(a) # + pycharm={"name": "#%%\n"} train_set,test_set = split_test_and_train(housing,0.2) len(train_set) # + pycharm={"name": "#%%\n"} len(test_set) # + pycharm={"name": "#%%\n"} from zlib import crc32 def test_check_set(identifier, test_ratio): return crc32(np.int64(identifier)) & 0xfffffffff < test_ratio * 2*32 # + pycharm={"name": "#%%\n"} def split_test_train_by_id(data,id_column,test_ratio): ids = data[id_column] in_test_set = ids.apply(lambda id_ : test_check_set(id_,test_ratio)) return data.loc[~in_test_set],data.loc[in_test_set] # - # # + pycharm={"name": "#%%\n"} housing_with_id = housing.reset_index() train_set,test_set = split_test_train_by_id(housing_with_id,"index",0.2) # + pycharm={"name": "#%%\n"} housing_with_id["id"] = housing["longitude"] * 1000 + housing["latitude"] train_set,test_set = split_test_train_by_id(housing_with_id,"id",0.2) # + pycharm={"name": "#%%\n"} from sklearn.model_selection import train_test_split train_set,test_set = train_test_split(housing,test_size=0.2,random_state=42) # + pycharm={"name": "#%%\n"} housing["income_category"] = pd.cut(housing["median_income"],bins=[0.,1.5,3.0,4.5,6.0,np.inf],labels=[1,2,3,4,5]) housing["income_category"].hist() # + pycharm={"name": "#%%\n"} from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1,test_size=0.2,random_state=42) for train_index, test_index in split.split(housing,housing["income_category"]): strat_train_set = housing.loc[train_index] strat_test_set = housing.loc[test_index] # + pycharm={"name": "#%%\n"} strat_test_set["income_category"].value_counts() / len(strat_test_set) # + pycharm={"name": "#%%\n"} for set_ in (strat_train_set,strat_test_set): set_.drop("income_category", axis=1, inplace=True) # + pycharm={"name": "#%%\n"} housingCopy = strat_train_set.copy() housingCopy.plot(kind="scatter",x="longitude",y="latitude") # + pycharm={"name": "#%%\n"} housingCopy.plot(kind="scatter",x="longitude",y="latitude",alpha=0.1) # + pycharm={"name": "#%%\n"} housingCopy.plot() # + pycharm={"name": "#%%\n"} housing.plot(kind = "scatter",x = "longitude",y = "latitude", alpha = 0.4, s=housing["population"]/100) # + pycharm={"name": "#%%\n"} # + pycharm={"name": "#%%\n"} # + pycharm={"name": "#%%\n"} corr_matrix = housing.corr() corr_matrix["median_house_value"].sort_values(ascending=False) # - from pandas.plotting import scatter_matrix attributes = ["median_house_value","median_income","housing_median_age","total_rooms"] scatter_matrix(housing[attributes], figsize=(12,8)) housing.plot(kind="scatter",x="median_income",y="median_house_value",alpha=0.1) # ## Experimenting With Attribute Combinations: # Before preparing the machine learning algorithm you must try the ATTRIBUTES COMBINATIONS For example total numbers of rooms is not really useful if you don't know how many households are there. # + #creating new attributes housing["rooms_per_household"]=housing["total_rooms"]/housing["households"] housing["bedrooms_per_rooms"]=housing["total_bedrooms"]/housing["total_rooms"] housing["population_per_household"]=housing["population"]/housing["households"] corr_matrix = housing.corr() corr_matrix["median_house_value"].sort_values(ascending=False) # - # # Prepare the data for ML algo # ### writing the functions is ought to be more efficient. 
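# Two helper functions defined earlier contain small slips: `fetch_housing_data` passes the target directory instead of the .tgz file path to `urlretrieve`, and `test_check_set` uses `2*32` and a 36-bit mask where `2**32` and a 32-bit mask are intended. Here is a corrected sketch of both (my addition, assuming the intent follows the handson-ml2 reference these cells are based on; `HOUSING_URL` and `HOUSING_PATH` are the values set above).

# +
import os
import tarfile
import urllib.request
from zlib import crc32

import numpy as np


def fetch_housing_data_fixed(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
    """Download housing.tgz into housing_path and extract it there."""
    os.makedirs(housing_path, exist_ok=True)
    tgz_path = os.path.join(housing_path, "housing.tgz")
    urllib.request.urlretrieve(housing_url, tgz_path)  # write to the archive path, not the directory
    with tarfile.open(tgz_path) as housing_tgz:
        housing_tgz.extractall(path=housing_path)


def test_check_set_fixed(identifier, test_ratio):
    """Put roughly test_ratio of identifiers in the test set, stably across reruns."""
    return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2 ** 32
# -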
housing = strat_train_set.drop("median_house_value", axis = 1) housing_labels = strat_train_set["median_house_value"].copy() # # Data Cleaning # Gonna use DataFrame's # dropna(),drop(),and fillna() median = housing["total_bedrooms"].median() housing["total_bedrooms"].fillna(median, inplace=True) # # SimpleImputer # sklearn's fuction which takes care of missing values # # + from sklearn.impute import SimpleImputer imputer = SimpleImputer(strategy = "median") housing_num = housing.drop("ocean_proximity",axis=1) imputer.fit(housing_num) # - imputer.statistics_ housing_num.median().values x = imputer.transform(housing_num) housing_tr = pd.DataFrame(x,columns=housing_num.columns,index=housing_num.index) # # Scikit-learn's Design is crazy # estimators, transformers, Predicors, inspection # ## Handling text and categorical Attributes # so far we have dealt with numerical attributes, but now let's look at text attributes housing_category = housing[["ocean_proximity"]] housing_category.head(10) from sklearn.preprocessing import OrdinalEncoder ordinal_encoder=OrdinalEncoder() housing_cat_encoded = ordinal_encoder.fit_transform(housing_category) housing_cat_encoded[:10] ordinal_encoder.categories_ # + from sklearn.preprocessing import OneHotEncoder cat_encoder=OneHotEncoder() housing_cat_1hot=cat_encoder.fit_transform(housing_category) housing_cat_1hot # - housing_cat_1hot.toarray() cat_encoder.categories_ # # Custom Transformers # from sklearn.base import BaseEstimator, TransformerMixin rooms_ix, bedrooms_ix, population_ix, households_ix = 3,4,5,6 class CombinedAttributesAdder(BaseEstimator, TransformerMixin): def __init__(self,add_bedrooms_per_room = True): self.add_bedrooms_per_room = add_bedrooms_per_room def fit(self,X,y=None): return self def transform(self,X,y=None): rooms_per_household = X[:, rooms_ix] / X[:, households_ix] population_per_household = X[:, population_ix] / X[:, households_ix] if self.add_bedrooms_per_room: bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix] return np.c_[X,rooms_per_household,population_per_household,bedrooms_per_room] else: return np.c_[X, rooms_per_household, population_per_household] attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False) housing_extra_attribs = attr_adder.transform(housing.values) # # Feature Scaling # ## Transformation Pipelines # # + from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler num_pipeline = Pipeline([ ('imputer', SimpleImputer(strategy="median")), ('attribs_adder',CombinedAttributesAdder()), ('std_scaler', StandardScaler()), ]) housing_num_tr = num_pipeline.fit_transform(housing_num) # + from sklearn.compose import ColumnTransformer num_attribs = list(housing_num) cat_attribs = ["ocean_proximity"] full_pipeline = ColumnTransformer([ ("num",num_pipeline,num_attribs), ("cat",OneHotEncoder(),cat_attribs), ]) housing_prepared = full_pipeline.fit_transform(housing) # - # # Select and Train a Model # ## Training and Evaluating on the Training_Set from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(housing_prepared, housing_labels) some_data = housing.iloc[:5] some_labels = housing_labels.iloc[:5] some_data_prepared = full_pipeline.transform(some_data) print("Predictions:", lin_reg.predict(some_data_prepared)) print("Labels:", list(some_labels)) # + from sklearn.metrics import mean_squared_error housing_predictions = lin_reg.predict(housing_prepared) lin_mse = mean_squared_error(housing_labels, housing_predictions) lin_rmse = np.sqrt(lin_mse) lin_rmse 
# -

# # Ways to fix underfitting:
# 1. Reduce the constraints (regularize less).
# 2. Use a more powerful model to feed the training algorithm.

# ## Using a decision tree

# +
from sklearn.tree import DecisionTreeRegressor

tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
# -

housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse

# # Better evaluation via cross-validation
# One option is to use the train_test_split() function to split the training set into a smaller training set and a validation set.
#
# ## Alternative:
# We can use scikit-learn's **K-fold** cross-validation feature, which splits the training set into 10 distinct subsets.

# +
from sklearn.model_selection import cross_val_score

scores = cross_val_score(tree_reg, housing_prepared, housing_labels,
                         scoring='neg_mean_squared_error', cv=10)
tree_rmse_scores = np.sqrt(-scores)
# -

def display_scores(scores):
    print("scores", scores)
    print("mean", scores.mean())
    print("standard deviation", scores.std())

display_scores(tree_rmse_scores)

# # We will now try the RandomForest regressor

# +
from sklearn.ensemble import RandomForestRegressor

forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
# -

housing_predictions = forest_reg.predict(housing_prepared)
forest_mse = mean_squared_error(housing_labels, housing_predictions)
forest_rmse = np.sqrt(forest_mse)
forest_rmse

# Cross-validate the forest regressor itself
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,
                                scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)

# # Time to fine-tune my model
# ## Fine-tuning can be done in two ways:
# grid search (for a small hyperparameter space) / randomized search (for a large hyperparameter space)

# +
## grid search first
### In the second dict of param_grid below, bootstrap is set to False, so its combinations are explored as a separate set and added to the first.
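# (For reference: the first dict gives 3 x 4 = 12 hyperparameter combinations, the second gives
#  2 x 3 = 6, so GridSearchCV evaluates 18 combinations in total, each trained 5 times because
#  cv=5, i.e. 90 fits.)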
#### like 12+6 = 18 from sklearn.model_selection import GridSearchCV param_grid = [ {"n_estimators":[3,30,300],"max_features":[2,4,6,10]}, {'bootstrap':[False],'n_estimators':[3,20],'max_features':[2,3,6]}, ] forest_reg = RandomForestRegressor() grid_search = GridSearchCV(forest_reg,param_grid,cv=5, scoring='neg_mean_squared_error', return_train_score = True) grid_search.fit(housing_prepared, housing_labels) # - grid_search.best_params_ grid_search.best_estimator_ cvres = grid_search.cv_results_ for mean_score, params in zip(cvres["mean_test_score"],cvres["params"]): print(np.sqrt(-mean_score),params) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] Collapsed="false" # # How to use # # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/simaki/cointanalysis/blob/master/examples/howto/howto.ipynb) # + Collapsed="false" # # !pip install pandas pandas_datareader matplotlib numpy # # !pip install cointanalysis # + Collapsed="false" import pandas as pd from pandas_datareader.data import DataReader import matplotlib.pyplot as plt import numpy as np from cointanalysis import CointAnalysis # + Collapsed="false" def fetch_etf(ticker): return DataReader(ticker, 'yahoo', '2012-01-01', '2018-12-31')['Adj Close'] # + [markdown] Collapsed="false" # Let us see how the main class `CointAnalysis` works using two ETFs, [HYG](https://www.bloomberg.com/quote/HYG:US) and [BKLN](https://www.bloomberg.com/quote/BKLN:US), as examples. # + Collapsed="false" hyg = fetch_etf('HYG') bkln = fetch_etf('BKLN') # + [markdown] Collapsed="false" # Since they are both connected with liabilities of low-rated companies, these prices behave quite similarly. # + Collapsed="false" plt.figure(figsize=(16, 4)) plt.title('HYG and BKLN') hyg_norm = 100 * hyg / hyg[0] bkln_norm = 100 * bkln / bkln[0] plt.plot(hyg_norm, label='HYG (2012-01-01 = 100)', linewidth=1) plt.plot(bkln_norm, label='BKLN (2012-01-01 = 100)', linewidth=1) plt.legend() plt.show() # + [markdown] Collapsed="false" # ## Cointegration test # + [markdown] Collapsed="false" # The method `test` carries out a cointegration test. # # The following code gives p-value for null-hypothesis that there is no cointegration. # + Collapsed="false" coint = CointAnalysis() X = np.stack([hyg, bkln], axis=1) coint.test(X).pvalue_ # + [markdown] Collapsed="false" # The test have rejected the null-hypothesis by p-value of 0.55%, which implies cointegration. # + [markdown] Collapsed="false" # ## Get spread # + [markdown] Collapsed="false" # The method `fit` finds the cointegration equation. # + Collapsed="false" coint.fit(X) print(f'coef: {coint.coef_}') print(f'mean: {coint.mean_}') print(f'std: {coint.std_}') # + [markdown] Collapsed="false" # This means that spread "-0.18 HYG + BKLN" has the mean 6.97 and standard deviation 0.15. 
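# + [markdown] Collapsed="false"
# In symbols (a restatement of the fitted numbers quoted above; $s_t$ and $z_t$ are notation I introduce here, not part of the library's API):
#
# $$
# s_t \approx -0.18\,\mathrm{HYG}_t + \mathrm{BKLN}_t,
# \qquad
# z_t = \frac{s_t - 6.97}{0.15},
# $$
#
# so $z_t$ measures how many standard deviations the current spread sits away from its fitted mean.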
# + [markdown] Collapsed="false" # In fact, the prices adjusted with these parameters clarifies the similarities of these ETFs: # + Collapsed="false" plt.figure(figsize=(16, 4)) hyg_adj = (-coint.coef_[0]) * hyg + coint.mean_ plt.title('HYG and BKLN') plt.plot(hyg_adj, label=f'{-coint.coef_[0]:.2f} * HYG + {coint.mean_:.2f}', linewidth=1) plt.plot(bkln, label='BKLN', linewidth=1) plt.legend() plt.show() # + [markdown] Collapsed="false" # The time-series of spread is obtained by applying the method `transform` subsequently. # # # The mean and the standard deviation are automatically adjusted (unless you pass parameters asking not to). # # The method `fit_transform` carries out `fit` and `transform` at once. # + Collapsed="false" spread = coint.transform(X) # + Collapsed="false" series_spread = pd.Series(spread, index=hyg.index) plt.figure(figsize=(16, 4)) plt.title('Spread between HYG and BKLN') plt.plot(series_spread, linewidth=1) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## List of themes # The list of available `matplotlib` themes is stored in a list called `plt.style.available`. There are 26 of them. # import matplotlib.pyplot as plt plt.style.available # ## Scatterplot # The [scatterplot section](https://python-graph-gallery.com/scatter-plot/) of the gallery explains in depth how to build a basic scatterplot with `matplotlib`. It is pretty straightforward thanks to the `plot()` function. # + # Create a dataset: import numpy as np import pandas as pd df=pd.DataFrame({'x': range(1,101), 'y': np.random.randn(100)*15+range(1,101) }) # plot plt.plot( 'x', 'y', data=df, linestyle='none', marker='o') plt.show() # - # ## Apply a theme # Now, let's make this chart a bit prettier thanks to the style called `fivethirtyheight`. In case you don't know it already, [FiveThirtyHeight](https://fivethirtyeight.com) is an online newspaper that often displays some very nice dataviz articles. plt.style.use('fivethirtyeight') plt.plot( 'x', 'y', data=df, linestyle='none', marker='o') plt.title('Scatterplot with the five38 theme', fontsize=12) plt.show() # ## Apply the style on a barchart # You can apply the same exact tip for any kind of chart to make it look better. Here is a barchart example coming from the barchart section of the gallery. It uses the `dark_background` theme to demo another type of customization. 
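# One practical detail (my note, not from the gallery text): a style only affects artists created after it is activated, so calling `plt.style.use('dark_background')` after `plt.barh` (as the next cell does) leaves the already-drawn bars with the previous theme. To scope a theme to a single figure, `plt.style.context` can be used instead, sketched below.

# +
import numpy as np
import matplotlib.pyplot as plt

height = [3, 12, 5, 18, 45]
bars = ('A', 'B', 'C', 'D', 'E')
y_pos = np.arange(len(bars))

# Activate the theme only while this figure is being built
with plt.style.context('dark_background'):
    plt.barh(y_pos, height)
    plt.yticks(y_pos, bars)
    plt.show()
# -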
# + # create dataset height = [3, 12, 5, 18, 45] bars = ('A', 'B', 'C', 'D', 'E') y_pos = np.arange(len(bars)) # Create horizontal bars plt.barh(y_pos, height) # Create names on the x-axis plt.yticks(y_pos, bars) # Show graphic plt.style.use('dark_background') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ## **Calculating Metric** # * Below you can find the calculation of the base metrics, which is a simple count for the used methods from py_trees_ros library, and similar calculation for BehaviorTree.CPP import pandas as pd import csv import matplotlib.pyplot as plt import seaborn as sns from matplotlib.pyplot import figure import os.path # + ## for python codes analysis ## change the path in case the raw model data is moved ###please add where you saved the downloaded raw data for py_trees_ros path = "/home/jovyan/src/notebooks/pytreeros/" ## the provided path is for running on docker image count_dic = {} for filename in os.listdir(path): with open(path+'/'+filename) as f: try: count_Sequence =0 count_Selector =0 count_Chooser =0 count_Parallel =0 count_ReactiveSeq = 0 count_RSequence = 0 count_old_decorator = 0 for line in f: count_Sequence += line.count("composites.Sequence") # count_Sequence += line.count("Sequence(") ##refill project and smarc count_Selector += line.count("composites.Selector") count_Chooser += line.count("composites.Chooser") count_Parallel += line.count("composites.Parallel") count_ReactiveSeq += line.count("ReactiveSeq(name") ### used in Smarc count_RSequence += line.count("RSequence(name=") ## used in mobile_robot_project count_old_decorator += line.count("py_trees.meta") ## used in dyno and gizmo and refill count_dic[filename] = [count_Sequence,count_Selector, count_Chooser, count_Parallel,count_ReactiveSeq,count_RSequence,count_old_decorator] except IsADirectoryError: print("error : {}".format(filename)) print("Number of files that we calculated the metric for: {}".format(len(count_dic))) # + ## save the metric into csv with open('python_codes_analysis.csv', 'w') as f: # Just use 'w' mode in 3.x w = csv.DictWriter(f, count_dic.keys()) w.writeheader() w.writerow(count_dic) print("Download complete for the metric into csv") # + ## for XML codes analysis ## change the path in case the raw model data is moved ###please add where you saved the downloaded raw data for BehaviorTree.CPP path = "/home/jovyan/src/notebooks/behaviortreecpp/" ## the provided path is for running on docker image count_dic = {} for filename in os.listdir(path): with open(path+'/'+filename) as f: try: count_Sequence =0 count_ReactiveSelector = 0 count_SequenceStar = 0 count_ReactiveSeq = 0 count_action = 0 count_Selector =0 count_condition = 0 count_root = 0 count_blackboard = 0 count_RetryUntilSuccesful = 0 count_repeat = 0 count_RecoveryNode = 0 count_ForceSuccess = 0 count_ForceFailure = 0 count_inverter = 0 count_parallel = 0 count_BlackboardCheckString = 0 count_Negation = 0 count_Switch = 0 for line in f: # if re.match("!--", line, re.IGNORECASE): # continue # else: count_condition += line.count("") count_Sequence += line.count("") ## IntelligentRoboticsLabs_robocup2020_bt_receptionist count_SequenceStar += line.count("") count_ReactiveSeq += line.count("") count_ReactiveSelector += line.count("") count_Selector += line.count("") count_inverter += line.count("") count_root += line.count("") 
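# Each count above and below is meant to tally one BehaviorTree.CPP XML node tag in the file
# (e.g. something like line.count("<Sequence") for count_Sequence or line.count("<Inverter") for
# count_inverter); the literal tag strings inside the line.count("") calls are blank in this copy,
# and the examples in this comment are assumptions rather than the original search strings.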
count_RetryUntilSuccesful += line.count( "") count_repeat += line.count("") count_ForceSuccess += line.count("") count_ForceFailure += line.count("") count_Negation += line.count("") count_Switch += line.count("") count_RecoveryNode += line.count("") ##MROS-RobMoSys-ITP_Pilot-URJC_navigate count_dic[filename] = [count_condition,count_action,count_Sequence,count_SequenceStar,count_ReactiveSeq,count_ReactiveSelector,count_Selector, count_inverter,count_root, count_blackboard, count_BlackboardCheckString,count_RetryUntilSuccesful,count_repeat,count_ForceSuccess,count_ForceFailure,count_Negation,count_Switch, count_parallel,count_RecoveryNode] except IsADirectoryError: print("error : {}".format(filename)) print("Number of files that we calculated the metric for: {}".format(len(count_dic))) # - ## save the metric into csv with open('XML_codes_analysis.csv', 'w') as f: # Just use 'w' mode in 3.x w = csv.DictWriter(f, count_dic.keys()) w.writeheader() w.writerow(count_dic) print("Download complete for the metric into csv") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import warnings # + from collections import deque import re import csv import json import requests import pandas as pd import numpy as np # from selenium.webdriver import Firefox # from selenium.webdriver.firefox.options import Options # Change the parser below to try out different options bs_parser = 'html.parser' # bs_parser = 'lxml' # bs_parser = 'lxml-xml' # bs_parser = 'html5lib' import bs4 from bs4 import BeautifulSoup as soup from urllib.request import urlopen import time # import warnings # warnings.filterwarnings('ignore') import github # from config import token as token from github import Github # g = Github(token) # reviews_pattern = re.compile("Reviews \((\d+)\)") # item_code_pattern = re.compile("Item code: (\d+)") # product_id_mapping = re.compile('&productId=(\d+)') # selenium = None # + token = "your Github token" g = Github(token) # - # --- # data = pd.read_csv('reposInfo_test.csv');data data['UniqueContributorsNum']=float('NaN') def getUniqueContributorsEachProject2(data1): dataset = data1.reset_index() Ucontributors=[] for i in range(len(dataset)): if dataset.repoNum[i]==0: Ucontributors.append(int(0)) print(i) print("contributor is 0") data['UniqueContributorsNum'][i]=0 elif dataset.repoNum[i]<30: print(i) n=[] for j in range(dataset.repoNum[i]): k=0 status = True while status: try: g = Github(token) n.append(g.get_user(dataset.Name[i]).get_repos()[j].get_contributors()[k].id) k+=1 time.sleep(1) ''' change sleep(45-60), if you run overnight ''' #print(k) except IndexError: status=False g = 0 # print(k) print(n) print('contributor count is {}'.format(len(list(set(n))))) Ucontributors.append(len(list(set(n)))) data['UniqueContributorsNum'][i]=len(list(set(n))) else: print(i) print("Repos is larger than 30") pass if i ==(len(dataset)-1): print('finished_parsing!') # data.to_csv('Argo_CSV_2_progress.csv',index=False) for p in range(len(Ucontributors)): data['UniqueContributorsNum'][data1.index[dataset.index[p]]]=Ucontributors[p] print (Ucontributors[p]) print("exported") # data.to_csv('Argo_CSV_2done.csv',index=False) getUniqueContributorsEachProject2(data[300:305]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 # language: python # name: python3 # --- # To keep it as simple as possible, this first tutorial uses synthetic data. This gives the possibility to change the signal shape at will. It also avoids downloading and de-trending real data. # # We use the batman-package [(Kreidberg 2015)](http://adsabs.harvard.edu/abs/2015PASP..127.1161K) to generate a planet around a solar-like star with a period of ~10 days. The dataset is 100 days long at a cadence of 30min (as with Kepler LC). We add 50ppm white noise per cadence. # + import numpy import batman import matplotlib.pyplot as plt from matplotlib import rcParams; rcParams["figure.dpi"] = 150 numpy.random.seed(seed=0) # reproducibility # Create test data time_start = 3.14 data_duration = 100 samples_per_day = 48 samples = int(data_duration * samples_per_day) time = numpy.linspace(time_start, time_start + data_duration, samples) # Use batman to create transits ma = batman.TransitParams() ma.t0 = time_start # time of inferior conjunction; first transit is X days after start ma.per = 10.123 # orbital period ma.rp = 6371 / 696342 # 6371 planet radius (in units of stellar radii) ma.a = 19 # semi-major axis (in units of stellar radii) ma.inc = 90 # orbital inclination (in degrees) ma.ecc = 0 # eccentricity ma.w = 90 # longitude of periastron (in degrees) ma.u = [0.4, 0.4] # limb darkening coefficients ma.limb_dark = "quadratic" # limb darkening model m = batman.TransitModel(ma, time) # initializes model synthetic_signal = m.light_curve(ma) # calculates light curve # Create noise and merge with flux ppm = 50 # Noise level in parts per million noise = numpy.random.normal(0, 10**-6 * ppm, int(samples)) flux = synthetic_signal + noise # Plot raw data plt.figure() ax = plt.gca() ax.scatter(time, flux, color='black', s=1) ax.set_ylabel("Flux") ax.set_xlabel("Time (days)") plt.xlim(min(time), max(time)) plt.ylim(0.999, 1.001); # - # The transits are visually nearly invisible. # Now, we run TLS without priors. These 3 lines of code is really all it takes to get started: from transitleastsquares import transitleastsquares model = transitleastsquares(time, flux) results = model.power() # To see the results, we print some statistics: print('Period', format(results.period, '.5f'), 'd') print(len(results.transit_times), 'transit times in time series:', \ ['{0:0.5f}'.format(i) for i in results.transit_times]) print('Transit depth', format(results.depth, '.5f')) print('Best duration (days)', format(results.duration, '.5f')) print('Signal detection efficiency (SDE):', results.SDE) # And visualize the result: # + plt.figure() ax = plt.gca() ax.axvline(results.period, alpha=0.4, lw=3) plt.xlim(numpy.min(results.periods), numpy.max(results.periods)) for n in range(2, 10): ax.axvline(n*results.period, alpha=0.4, lw=1, linestyle="dashed") ax.axvline(results.period / n, alpha=0.4, lw=1, linestyle="dashed") plt.ylabel(r'SDE') plt.xlabel('Period (days)') plt.plot(results.periods, results.power, color='black', lw=0.5) plt.xlim(0, max(results.periods)) plt.figure() plt.plot(results.model_folded_phase, results.model_folded_model, color='red') plt.scatter(results.folded_phase, results.folded_y, color='blue', s=10, alpha=0.5, zorder=2) plt.xlim(0.48, 0.52) plt.xlabel('Phase') plt.ylabel('Relative flux'); # - # That's it! Now you can start to make your own tests. 
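# As a quick sanity check on the synthetic setup above (an addition, not part of the original tutorial), the recovered period can be compared against the injected one, and the search range can optionally be narrowed around it. The `period_min`/`period_max` keyword arguments of `power()` are assumed to exist in your installed transitleastsquares version.

# +
# Compare the recovered period with the injected one (ma.per = 10.123 d)
relative_error = abs(results.period - ma.per) / ma.per
print('Recovered period: {:.5f} d (injected {:.5f} d, relative error {:.2%})'.format(
    results.period, ma.per, relative_error))

# Optionally restrict the period search to speed up re-runs
# (period_min / period_max assumed available in this transitleastsquares version)
model_narrow = transitleastsquares(time, flux)
results_narrow = model_narrow.power(period_min=8, period_max=12)
print('Narrow-search period: {:.5f} d'.format(results_narrow.period))
# -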
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import warnings import pickle import matplotlib.pyplot as plt import matplotlib import numpy as np import sys import math import bisect sys.path.insert(0, '../../..') from cde.density_estimator import MixtureDensityNetwork from cde.data_collector import MatlabDataset, MatlabDatasetH5 from cde.density_estimator import plot_conditional_hist, measure_percentile, measure_percentile_allsame, measure_tail, measure_tail_allsame, init_tail_index_hill, estimate_tail_index_hill # + """ Open MATLAB file """ #matds = MatlabDataset('../../data/t3_0p5_1_1p5_cap_30_ar0p8.mat') #matds = MatlabDataset('../../data/dataset_normal_400k.mat') matds = MatlabDataset('../../data/fulldataset_gamma_400k.mat') # + """ Take conditioned samples and fit density model """ N_tr = 40000 #data = matds.get_data_servicerate_cond([1,1,1]) #data = matds.get_data_firstrows_cond(N_tr) #train_data = data[0:N_tr,[0,1,3,5]] train_data = matds.get_data(N_tr) #train_data = data[0:N_tr,:] Y = train_data[:,0] X = train_data[:,1:] model = MixtureDensityNetwork("GMM_3hop_expar_H", ndim_x=3, n_centers=20, ndim_y=1,n_training_epochs=1000,hidden_sizes=(32, 32)) model.fit(X, Y) # + """ Save the trained model and the training data into file """ with open('saves/New_3hop_gamma_40k_3dim.pkl', 'wb') as output: pickle.dump(model, output, pickle.HIGHEST_PROTOCOL) np.save('saves/New_ds_gamma_40k_3dim.npy', train_data) # + """ Load the trained model and training dataset from file """ dummy = MixtureDensityNetwork("GMM_3hop_gamma", ndim_x=2, ndim_y=1) dummy._setup_inference_and_initialize() with open('saves/New_3hop_gamma_40k_3dim.pkl', 'rb') as input: dummy = pickle.load(input) model = dummy train_data = np.load('saves/New_ds_gamma_40k_3dim.npy') # + """ Open analysis MATLAB file - [0,1,1] state """ #matds = MatlabDatasetH5('../../data/t3_0p5_1_1p5_cap_30_ar0p8.mat') cond_matds = MatlabDatasetH5('../../data/cond_records_[0_1_1]_92M.mat') cond_state = [0,1,1] test_data = cond_matds.get_data(cond_matds.n_records) tt,num_samples_train,avg = measure_percentile(dataset=train_data,x_cond=np.array([cond_state]),p_perc=100) measured_p8,num_samples_test,avg = measure_percentile_allsame(dataset=test_data,p_perc=80) measured_1,num_samples_test,avg = measure_percentile_allsame(dataset=test_data,p_perc=100) x = np.logspace(math.log10( measured_p8 ), math.log10( measured_1+1), num=60) #x = np.arange(start=measured_p8, stop=measured_1, step=0.1) #print(x) tail=[] for i in range(len(x)): tail.append(measure_tail(dataset=train_data,x_cond=np.array([cond_state]),y=x[i])) testd_sorted = np.sort(test_data[:,0]) ftail=[] for i in range(len(x)): indx = bisect.bisect_left(testd_sorted, x[i]) ftail.append((len(test_data)-indx)/len(test_data)) #xh = np.logspace(math.log10( measured_p8 ), math.log10( measured_1+40 ), num=40) tailh=[] for i in range(len(x)): tailh.append(model.find_tail(x_cond=np.array([cond_state]),y=x[i],init_bound=500)) fig, ax = plt.subplots(figsize=(9,6)) ax.loglog(x,tail, marker='.', label="empirical train "+str(num_samples_train)+ " samples", linestyle = 'None') #substitute actual plotting here ax.loglog(x,ftail, marker='.', label="empirical test "+str(num_samples_test)+ " samples", linestyle = 'None') #substitute actual plotting here ax.loglog(x,tailh, label="gmm") #substitute actual plotting here hillbetas = [0.005,0.01,0.05] for 
j in range(len(hillbetas)): #dsh = matds.get_data(300000) hillbeta = hillbetas[j] alpha,xk,k,n = init_tail_index_hill(dataset=train_data,x_cond=np.array([cond_state]),beta=hillbeta) if k == 0: continue xhill = [] tailhill=[] for i in range(len(x)): if x[i]>xk: xhill.append(x[i]) tailhill.append(estimate_tail_index_hill(alpha=alpha,xk=xk,k=k,n=n,x=x[i])) ax.loglog(xhill,tailhill, label="hill beta="+str(hillbeta)) #substitute actual plotting here plt.xlabel('delay time [log]') ax.set_xticks([5,6,7,8,9,10,11]) ax.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter()) plt.ylabel('Tail probability: 1-F(x) [log]') plt.title(str(cond_state)) plt.legend() plt.grid() plt.show() # + """ Open analysis MATLAB file - [0,1,2] state """ #matds = MatlabDatasetH5('../../data/t3_0p5_1_1p5_cap_30_ar0p8.mat') cond_matds = MatlabDatasetH5('../../data/cond_records_[0_1_2]_92M.mat') cond_state = [0,1,2] test_data = cond_matds.get_data(cond_matds.n_records) tt,num_samples_train,avg = measure_percentile(dataset=train_data,x_cond=np.array([cond_state]),p_perc=100) measured_p8,num_samples_test,avg = measure_percentile_allsame(dataset=test_data,p_perc=80) measured_1,num_samples_test,avg = measure_percentile_allsame(dataset=test_data,p_perc=100) x = np.logspace(math.log10( measured_p8 ), math.log10( measured_1+1), num=60) #x = np.arange(start=measured_p8, stop=measured_1, step=0.1) #print(x) tail=[] for i in range(len(x)): tail.append(measure_tail(dataset=train_data,x_cond=np.array([cond_state]),y=x[i])) testd_sorted = np.sort(test_data[:,0]) ftail=[] for i in range(len(x)): indx = bisect.bisect_left(testd_sorted, x[i]) ftail.append((len(test_data)-indx)/len(test_data)) #xh = np.logspace(math.log10( measured_p8 ), math.log10( measured_1+40 ), num=40) tailh=[] for i in range(len(x)): tailh.append(model.find_tail(x_cond=np.array([cond_state]),y=x[i],init_bound=500)) fig, ax = plt.subplots(figsize=(9,6)) ax.loglog(x,tail, marker='.', label="empirical train "+str(num_samples_train)+ " samples", linestyle = 'None') #substitute actual plotting here ax.loglog(x,ftail, marker='.', label="empirical test "+str(num_samples_test)+ " samples", linestyle = 'None') #substitute actual plotting here ax.loglog(x,tailh, label="gmm") #substitute actual plotting here hillbetas = [0.01,0.03,0.04,0.05] for j in range(len(hillbetas)): #dsh = matds.get_data(300000) hillbeta = hillbetas[j] alpha,xk,k,n = init_tail_index_hill(dataset=train_data,x_cond=np.array([cond_state]),beta=hillbeta) if k == 0: continue xhill = [] tailhill=[] for i in range(len(x)): if x[i]>xk: xhill.append(x[i]) tailhill.append(estimate_tail_index_hill(alpha=alpha,xk=xk,k=k,n=n,x=x[i])) ax.loglog(xhill,tailhill, label="hill beta="+str(hillbeta)) #substitute actual plotting here plt.xlabel('delay time [log]') ax.set_xticks([5,6,7,8,9,10,11]) ax.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter()) plt.ylabel('Tail probability: 1-F(x) [log]') plt.title(str(cond_state)) plt.legend() plt.grid() plt.show() # + """ Open analysis MATLAB file - [0,2,1] state """ #matds = MatlabDatasetH5('../../data/t3_0p5_1_1p5_cap_30_ar0p8.mat') cond_matds = MatlabDatasetH5('../../data/cond_records_[0_2_1]_92M.mat') cond_state = [0,2,1] test_data = cond_matds.get_data(cond_matds.n_records) tt,num_samples_train,avg = measure_percentile(dataset=train_data,x_cond=np.array([cond_state]),p_perc=100) measured_p8,num_samples_test,avg = measure_percentile_allsame(dataset=test_data,p_perc=80) measured_1,num_samples_test,avg = 
measure_percentile_allsame(dataset=test_data,p_perc=100) x = np.logspace(math.log10( measured_p8 ), math.log10( measured_1+1), num=60) #x = np.arange(start=measured_p8, stop=measured_1, step=0.1) #print(x) tail=[] for i in range(len(x)): tail.append(measure_tail(dataset=train_data,x_cond=np.array([cond_state]),y=x[i])) testd_sorted = np.sort(test_data[:,0]) ftail=[] for i in range(len(x)): indx = bisect.bisect_left(testd_sorted, x[i]) ftail.append((len(test_data)-indx)/len(test_data)) #xh = np.logspace(math.log10( measured_p8 ), math.log10( measured_1+40 ), num=40) tailh=[] for i in range(len(x)): tailh.append(model.find_tail(x_cond=np.array([cond_state]),y=x[i],init_bound=500)) fig, ax = plt.subplots(figsize=(9,6)) ax.loglog(x,tail, marker='.', label="empirical train "+str(num_samples_train)+ " samples", linestyle = 'None') #substitute actual plotting here ax.loglog(x,ftail, marker='.', label="empirical test "+str(num_samples_test)+ " samples", linestyle = 'None') #substitute actual plotting here ax.loglog(x,tailh, label="gmm") #substitute actual plotting here hillbetas = [0.005,0.01,0.05] for j in range(len(hillbetas)): #dsh = matds.get_data(300000) hillbeta = hillbetas[j] alpha,xk,k,n = init_tail_index_hill(dataset=train_data,x_cond=np.array([cond_state]),beta=hillbeta) if k == 0: continue xhill = [] tailhill=[] for i in range(len(x)): if x[i]>xk: xhill.append(x[i]) tailhill.append(estimate_tail_index_hill(alpha=alpha,xk=xk,k=k,n=n,x=x[i])) ax.loglog(xhill,tailhill, label="hill beta="+str(hillbeta)) #substitute actual plotting here plt.xlabel('delay time [log]') ax.set_xticks([5,6,7,8,9,10,11]) ax.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter()) plt.ylabel('Tail probability: 1-F(x) [log]') plt.title(str(cond_state)) plt.legend() plt.grid() plt.show() # + """ Open analysis MATLAB file - [1,1,1] state """ #matds = MatlabDatasetH5('../../data/t3_0p5_1_1p5_cap_30_ar0p8.mat') cond_matds = MatlabDatasetH5('../../data/cond_records_[1_1_1]_92M.mat') cond_state = [1,1,1] test_data = cond_matds.get_data(cond_matds.n_records) tt,num_samples_train,avg = measure_percentile(dataset=train_data,x_cond=np.array([cond_state]),p_perc=100) measured_p8,num_samples_test,avg = measure_percentile_allsame(dataset=test_data,p_perc=80) measured_1,num_samples_test,avg = measure_percentile_allsame(dataset=test_data,p_perc=100) x = np.logspace(math.log10( measured_p8 ), math.log10( measured_1+1), num=60) #x = np.arange(start=measured_p8, stop=measured_1, step=0.1) #print(x) tail=[] for i in range(len(x)): tail.append(measure_tail(dataset=train_data,x_cond=np.array([cond_state]),y=x[i])) testd_sorted = np.sort(test_data[:,0]) ftail=[] for i in range(len(x)): indx = bisect.bisect_left(testd_sorted, x[i]) ftail.append((len(test_data)-indx)/len(test_data)) #xh = np.logspace(math.log10( measured_p8 ), math.log10( measured_1+40 ), num=40) tailh=[] for i in range(len(x)): tailh.append(model.find_tail(x_cond=np.array([cond_state]),y=x[i],init_bound=500)) fig, ax = plt.subplots(figsize=(9,6)) ax.loglog(x,tail, marker='.', label="empirical train "+str(num_samples_train)+ " samples", linestyle = 'None') #substitute actual plotting here ax.loglog(x,ftail, marker='.', label="empirical test "+str(num_samples_test)+ " samples", linestyle = 'None') #substitute actual plotting here ax.loglog(x,tailh, label="gmm") #substitute actual plotting here hillbetas = [0.005,0.01,0.05] for j in range(len(hillbetas)): #dsh = matds.get_data(300000) hillbeta = hillbetas[j] alpha,xk,k,n = 
init_tail_index_hill(dataset=train_data,x_cond=np.array([cond_state]),beta=hillbeta) if k == 0: continue xhill = [] tailhill=[] for i in range(len(x)): if x[i]>xk: xhill.append(x[i]) tailhill.append(estimate_tail_index_hill(alpha=alpha,xk=xk,k=k,n=n,x=x[i])) ax.loglog(xhill,tailhill, label="hill beta="+str(hillbeta)) #substitute actual plotting here plt.xlabel('delay time [log]') ax.set_xticks([5,6,7,8,9,10,11]) ax.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter()) plt.ylabel('Tail probability: 1-F(x) [log]') plt.title(str(cond_state)) plt.legend() plt.grid() plt.show() # + """ Open analysis MATLAB file - [1,1,1] state """ cond_matds_ar = [] cond_matds_ar.append(MatlabDatasetH5('../../data/cond_records_[0_1_1]_92M.mat')) cond_matds_ar.append(MatlabDatasetH5('../../data/cond_records_[0_1_2]_92M.mat')) cond_matds_ar.append(MatlabDatasetH5('../../data/cond_records_[0_2_1]_92M.mat')) cond_matds_ar.append(MatlabDatasetH5('../../data/cond_records_[1_1_1]_92M.mat')) cond_state_arr = [[0,1,1],[0,1,2],[0,2,1],[1,1,1]] fig, axs = plt.subplots(nrows=2, ncols=2,figsize=(9*2,6*2)) for n in range(len(cond_state_arr)): ax = axs[int(n/2),n%2] test_data = cond_matds_ar[n].get_data(cond_matds_ar[n].n_records) cond_state = cond_state_arr[n] tt,num_samples_train,avg = measure_percentile(dataset=train_data,x_cond=np.array([cond_state]),p_perc=100) measured_p8,num_samples_test,avg = measure_percentile_allsame(dataset=test_data,p_perc=80) measured_1,num_samples_test,avg = measure_percentile_allsame(dataset=test_data,p_perc=100) x = np.logspace(math.log10( measured_p8 ), math.log10( measured_1+1), num=100) #x = np.arange(start=measured_p8, stop=measured_1, step=0.1) #print(x) tail=[] for i in range(len(x)): tail.append(measure_tail(dataset=train_data,x_cond=np.array([cond_state]),y=x[i])) testd_sorted = np.sort(test_data[:,0]) ftail=[] for i in range(len(x)): indx = bisect.bisect_left(testd_sorted, x[i]) ftail.append((len(test_data)-indx)/len(test_data)) #xh = np.logspace(math.log10( measured_p8 ), math.log10( measured_1+40 ), num=40) tailh=[] for i in range(len(x)): tailh.append(model.find_tail(x_cond=np.array([cond_state]),y=x[i],init_bound=500)) ax.loglog(x,tail, marker='.', label="empirical train "+str(num_samples_train)+ " samples", linestyle = 'None') #substitute actual plotting here ax.loglog(x,ftail, marker='.', label="empirical test "+str(num_samples_test)+ " samples", linestyle = 'None') #substitute actual plotting here ax.loglog(x,tailh, label="gmm") #substitute actual plotting here hillbetas = [0.001,0.005,0.01,0.05] for j in range(len(hillbetas)): #dsh = matds.get_data(300000) hillbeta = hillbetas[j] alpha,xk,k,n = init_tail_index_hill(dataset=train_data,x_cond=np.array([cond_state]),beta=hillbeta) if k == 0: continue xhill = [] tailhill=[] for i in range(len(x)): if x[i]>xk: xhill.append(x[i]) tailhill.append(estimate_tail_index_hill(alpha=alpha,xk=xk,k=k,n=n,x=x[i])) ax.loglog(xhill,tailhill, label="hill beta="+str(hillbeta)) #substitute actual plotting here ax.set_xlabel('delay time [log]') ax.set_xticks([5,6,7,8,9,10,11]) ax.get_xaxis().set_major_formatter(matplotlib.ticker.ScalarFormatter()) ax.set_ylabel('Tail probability: 1-F(x) [log]') ax.set_title(str(cond_state)) ax.legend() ax.grid() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Ordering and Delivery # # This notebook demonstrates 
ordering and download with the orders api. In this notebook, we check authentication by requesting an orders list, then we create an order for two Analytic `PSScene4Band` images. We poll for order success then download images individually. And finally, we create, poll, and download the same order delivered as a single zip file. # # Reference information can be found at [Ordering & Delivery](https://developers.planet.com/docs/orders/ordering-delivery/). # + import json import os import pathlib import time import requests from requests.auth import HTTPBasicAuth # - # ## Authenticating # API Key stored as an env variable PLANET_API_KEY = os.getenv('PL_API_KEY') orders_url = 'https://api.planet.com/compute/ops/orders/v2' # ### Curl example # # To check your orders list and make sure you have the permissions you need, uncomment the following line to run `curl` # !curl -L -H "Authorization: api-key $PLANET_API_KEY" $orders_url # ### Requests example # # In this notebook, we will be using `requests` to communicate with the orders v2 API. First, we will check our orders list to make sure authentication and communication is working as expected. # # We want to get a response code of `200` from this API call. To troubleshoot other response codes, see the [List Orders](https://developers.planet.com/docs/orders/reference/#operation/listOrders) AOI reference. auth = HTTPBasicAuth(PLANET_API_KEY, '') response = requests.get(orders_url, auth=auth) response # Now we will list the orders we have created thus far. Your list may be empty if you have not created an order yet. orders = response.json()['orders'] len(orders) # ## Ordering # # In this example, we will order two `PSScene4Band` analytic images. For variations on this kind of order, see [Ordering Data](https://developers.planet.com/docs/orders/ordering-delivery/#ordering-data_1). # # In this order, we request an `analytic` bundle. A bundle is a group of assets for an item. The `analytic` bundle for the `PSScene4Band` item contains 3 assets: the analytic image, the analytic xml file, and the udm. See the [Product bundles reference](https://developers.planet.com/docs/orders/product-bundles-reference/) to learn about other bundles and other items. 
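# The cells below index the JSON responses directly (for example `response.json()['id']`); a small defensive wrapper (a sketch added here, not part of the original tutorial) makes failed requests surface as HTTP errors rather than confusing KeyErrors.

# +
import requests


def post_json(url, payload, auth):
    """POST a JSON payload and raise immediately on a non-2xx response."""
    r = requests.post(url, json=payload, auth=auth,
                      headers={'content-type': 'application/json'})
    r.raise_for_status()
    return r.json()
# -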
# # ### Place Order # set content type to json headers = {'content-type': 'application/json'} request = { "name":"simple order", "products":[ { "item_ids":[ "20151119_025740_0c74", "20151119_025741_0c74" ], "item_type":"PSScene4Band", "product_bundle":"analytic" } ], } def place_order(request, auth): response = requests.post(orders_url, data=json.dumps(request), auth=auth, headers=headers) print(response) order_id = response.json()['id'] print(order_id) order_url = orders_url + '/' + order_id return order_url order_url = place_order(request, auth) # ### Cancel an Order # report order state requests.get(order_url, auth=auth).json()['state'] oid = oids[0] response = requests.put(order_url, auth=auth) response # report order state - it could take a little while to cancel requests.get(order_url, auth=auth).json()['state'] # ### Poll for Order Success # re-order since we canceled our last order order_url = place_order(request, auth) # + def poll_for_success(order_url, auth, num_loops=30): count = 0 while(count < num_loops): count += 1 r = requests.get(order_url, auth=auth) response = r.json() state = response['state'] print(state) end_states = ['success', 'failed', 'partial'] if state in end_states: break time.sleep(10) poll_for_success(order_url, auth) # - # ### View Results r = requests.get(order_url, auth=auth) response = r.json() results = response['_links']['results'] [r['name'] for r in results] # ## Download # # ### Downloading each asset individually def download_results(results, overwrite=False): results_urls = [r['location'] for r in results] results_names = [r['name'] for r in results] print('{} items to download'.format(len(results_urls))) for url, name in zip(results_urls, results_names): path = pathlib.Path(os.path.join('data', name)) if overwrite or not path.exists(): print('downloading {} to {}'.format(name, path)) r = requests.get(url, allow_redirects=True) path.parent.mkdir(parents=True, exist_ok=True) open(path, 'wb').write(r.content) else: print('{} already exists, skipping {}'.format(path, name)) download_results(results) # ### Downloading as a single zip # # To download all of the order assets as a single zip, the order request needs to be changed slightly with delivery instructions. After that, polling and downloading are the same. zip_delivery = {"delivery": {"single_archive": True, "archive_type": "zip"}} request_zip = request.copy() request_zip.update(zip_delivery) request_zip order_url = place_order(request_zip, auth) poll_for_success(order_url, auth) r = requests.get(order_url, auth=auth) response = r.json() results = response['_links']['results'] download_results(results) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Reference Counting # # When variables are created, Python maintains a reference count. It uses this non-zero count to know if the memory location is being used. 
# # Getting reference count # # - sys.getrefcount(variable) <-- this also creates an extra count # - ctypes.c_long.from_address(address).value <-- address is an integer so it does not affect teh reference count import sys a = [1, 2, 3] id(a) sys.getrefcount(a) import ctypes def ref_count(address: int): return ctypes.c_long.from_address(address).value ref_count(id(a)) b = a id(b) ref_count(id(a)) c = a ref_count(id(a)) c = 10 ref_count(id(a)) b = None ref_count(id(a)) a_id = id(a) a = None ref_count(a_id) # After references is 0, Python is free to reuse the memory location for other # objects ref_count(a_id) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from scipy.io import loadmat import numpy as np import matplotlib.pyplot as plt from fmincg import fmincg # # Data importing data = loadmat('ex5data1.mat') X = data['X'] y = data['y'] Xtest = data['Xtest'] ytest = data['ytest'] Xval = data['Xval'] yval = data['yval'] m,n = X.shape # ### plotting data plt.figure(1) plt.plot(X,y,'ro') plt.xlabel('x') plt.ylabel('y') # + def linCostFunction(theta, X_inp, y, lamda=0): '''Cost function for linear regression dont add ones to X_inp''' m,n = X_inp.shape X = np.hstack((np.ones((m,1)) , X_inp)) theta = theta.reshape((n+1,-1)) y = y.reshape((m,-1)) er= X@theta - y J = (0.5/m)*(er.T @ er) grad = (1/m) * (X.T @ er) if lamda !=0 : thetareg = theta.copy() thetareg[0]=0 J += (lamda/(2*m))*(thetareg.T @ thetareg) grad += (1/m) * (lamda * thetareg) return J.item(),grad def trainlinreg(X, y, lamda=0): '''function to train linear regression''' m,n = X.shape options = {'maxiter':200,'Gradobj':'on'} theta_pred,*_ = fmincg(lambda x : linCostFunction(x,X,y,lamda) , np.zeros((n+1,1)) , options) return theta_pred def learningCurve(X, y, Xval, yval, lamda=0): '''function to track learning curve error values''' m,n = X.shape error_train = np.zeros((m,1)) error_val = np.zeros((m,1)) for i in range(1,m): theta = trainlinreg(X[:i] , y[:i] ,lamda) error_train[i],_ = linCostFunction(theta,X[:i],y[:i]) error_val[i],_ = linCostFunction(theta,Xval,yval) return error_train[1:],error_val[1:] def addPolyfeatures(X,p=8,ones=False): '''function to generate polynomial order features for polynomial regression''' m,n = X.shape degree = p if ones: ret = np.hstack((np.ones((m,1)), X)) else: ret = X for i in range(2,degree+1): ret = np.hstack((ret,X**i)) return ret def normalize(ip): mu = np.mean(ip,axis=0) sigma = np.std(ip,axis=0) return (ip-mu)/sigma , mu ,sigma def validation(X, y, Xval, yval): '''function to find lamda for min error values''' m,n = X.shape lambda_vec = [0,0.001,0.003,0.01,0.03,0.1,0.3,1,3,10] theta = list(map(lambda x :trainlinreg(X, y, x),lambda_vec)) error_train = (list(map(lambda x: (linCostFunction(x, X, y))[0],theta))) error_val = (list(map(lambda x: (linCostFunction(x, Xval, yval))[0],theta))) return lambda_vec,error_train,error_val # + #linCostFunction(np.array([[1],[1]]) ,X,y,1) #(303.9931922202643,array([[-15.30301567],[598.25074417]]).T) # - plt.figure() plt.plot(X,y,'ro') theta = trainlinreg(X,y,0) sample = np.linspace(-40,40,100).reshape(-1,1) plt.plot(sample, np.hstack((np.ones((100,1)),sample)) @ theta, label = 'desicion boundary') plt.title('Linear regression') plt.legend(); lamda = 0 error_train,error_val = learningCurve(X,y,Xval,yval,lamda) plt.figure(2) plt.plot(np.arange(1,m) , error_train.ravel(),'r',label = 'error_train') 
plt.plot(np.arange(1,m) , error_val.ravel(),'b',label = 'error_val') plt.title('Learning curve linear regression') plt.ylabel('error value') plt.xlabel('no. of training example') plt.legend(); X_poly,mu,sigma = normalize(addPolyfeatures(X)) Xtest_poly = (addPolyfeatures(Xtest) - mu)/sigma Xval_poly = (addPolyfeatures(Xval) - mu)/sigma plt.figure() plt.plot(X,y,'ro') theta = trainlinreg(X_poly,y,0) sample = np.linspace(-40,40,100).reshape(-1,1) plt.plot(sample, np.hstack((np.ones((100,1)),(addPolyfeatures(sample) - mu)/sigma)) @ theta, label = 'desicion boundary') plt.title('Polynomial regression') plt.legend(); lamda = 0 error_train,error_val = learningCurve(X_poly,y,Xval_poly,yval,lamda) plt.figure(2) plt.plot(np.arange(1,m) , error_train.ravel(),label = 'error_train_poly') plt.plot(np.arange(1,m) , error_val.ravel(),label = 'error_val_poly') plt.title('Learning curve polynomial regression') plt.ylabel('error value') plt.xlabel('no. of training example') plt.legend(); lambda_vec,error_train,error_val = validation(X_poly, y, Xval_poly, yval) plt.figure() plt.plot(lambda_vec ,error_train , 'r', label = 'train') plt.plot(lambda_vec,error_val ,'b', label = 'test' ) plt.xlabel('lambda') plt.ylabel('error value') plt.legend(); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="7tJfdVetzxyr" colab_type="text" # # **Introduction to OpenCV** # In this tuorial we will learn basic methods of OpenCV. The topics we will cover are as follows : # # # 1. Reading an image # 2. Showing an image using matplotlib # 3. Basics of Numpy that will be useful in handling images # # + id="sfpwqfPLzuSY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="4c147905-34d8-4919-8321-86f173111eff" # Run this block to files needed for this tutorial # !git clone https://github.com/kaustubh-sadekar/DataFolder.git # + [markdown] id="Wda-iosAzwWg" colab_type="text" # ## Part 1 : Reading, displaying and saving an image using OpenCV # + id="ZB8pmRDb2fHb" colab_type="code" colab={} # %matplotlib inline from matplotlib import pyplot as plt import cv2 import numpy as np # + id="ctP7s6BV4o47" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="b2b77e24-a301-45b5-c3a5-532f9926ec7e" # State the path/location of the image you want to read img_path = '/content/DataFolder/colab-images/minions1.jpg' # We read the image from a desired path (img_path) and save it an array (image) image = cv2.imread(img_path) # Displaying the image uisng matplotlib pyploy plt.imshow(image) plt.show() # + [markdown] id="dUx4EcV43kDF" colab_type="text" # **Note:** In OpenCV the RGB image is stored in BGR (Blue - Green - Red) format. But matplot lib considers the image in RGB (Reg - Green - Blue) Format. That is why our minions looks so `scary` !! # # How do we fix this ? We will see two methods : # 1. Using OpenCV method cv2.cvtColor # 2. Using Numpy array # # Basically all we need to do is change the order from BGR image to RGB image. Yeah! 
just exchange the R and B channels # + id="I_QDMRTV4jhm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 401} outputId="6f8c510d-84c3-402e-b721-d58673487d6c" img_path = '/content/DataFolder/colab-images/minions1.jpg' image = cv2.imread(img_path) # Let's try to know more about the image using numpy print("Shape of the image :") print(image.shape) print("Why is it a 3D matrix ? ") print("The BGR image consists of 3 channels of 2D images stacked together") rows,cols,ch = image.shape print("Thus we have %r rows, %r columns and %r channels in the image"%(rows,cols,ch)) print("Showing all the three channels individually") print("Note how G and R channels have so many bright pixels since yellow is made of green and red") f = plt.figure(num=None, figsize=(16, 12), dpi=80) ax1 = f.add_subplot(1,3, 1) plt.imshow(image[:,:,0],cmap='gray', vmin=0, vmax=255) ax2 = f.add_subplot(1,3, 2) plt.imshow(image[:,:,1],cmap='gray', vmin=0, vmax=255) ax3 = f.add_subplot(1,3, 3) plt.imshow(image[:,:,2],cmap='gray', vmin=0, vmax=255) ax1.title.set_text('Blue channel of image') ax2.title.set_text('Green channel of image') ax3.title.set_text('Red channel of image ') plt.show(block=True) # + id="RAqirLHfVH2n" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="129906e0-fa88-43a8-a59d-6d4198e1832e" # Cool ! now let's convert the BGR image to RGB # First using OpenCV method image2 = cv2.cvtColor(image,cv2.COLOR_BGR2RGB) # Second using Numpy image3 = np.zeros_like(image) # Trying to show a way create a zero matrix image3[:,:,0] = image[:,:,2].copy() image3[:,:,1] = image[:,:,1].copy() image3[:,:,2] = image[:,:,0].copy() f = plt.figure(num=None, figsize=(16, 12), dpi=80) ax1 = f.add_subplot(1,3, 1) plt.imshow(image) ax2 = f.add_subplot(1,3, 2) plt.imshow(image2) ax3 = f.add_subplot(1,3, 3) plt.imshow(image3) ax1.title.set_text('BGR image imported in Opencv') ax2.title.set_text('BGR to RGB using OpenCV') ax3.title.set_text('BGR to RGB using numpy ') plt.show(block=True) # + id="yMlN0ovwVfyg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="7d8192c5-968c-4280-cf80-ddf4ed200970" # Saving an image using opencv path = "saved_image.jpg" cv2.imwrite(path,image) print("File saved at %r"%path) print("Check in the files") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CANTERA MANIPULATION : 1 # required librairies import # %matplotlib notebook import cantera as ct import numpy as np import matplotlib.pyplot as plt # ## gas manipulation # gri30 is the mechanism file : and you can found a copy in "cantara/data" directory. You can use anyother mechanism file by uploading it on the server. # When using Cantera, the first thing you usually need is an object representing some phase of matter. 
# Here, we'll create a gas mixture: gas1 = ct.Solution('gri30.cti') # To view the state of the mixture, *call* the `gas1` object as if it were a function: gas1() # to get temperature gas1.T # to get enthalpie gas1.h # to change the state gas1.TP = 500, 101325 gas1() #The composition can be set in terms of either mole fractions (X) or mass fractions (Y): gas1.X = 'CH4:1, O2:2, N2:7.52' gas1() # ## Chemical Equilibrium # The chemical represents the burnt gas state (more precision will be given in a following courses) # the enthalpie of the system and pressure is considered constant --> the temperature increases. gas1.equilibrate('HP') gas1() # As this last command generates many species (all the sepcies present in the chemcial file). # It is possible to force the program to work only with a few set of species: #major = ['CH4','O2','CO2','H2O','N2'] #gas2 = ct.Solution(thermo='IdealGas', species=major) majorspeciesname = {S.name: S for S in ct.Species.listFromFile('gri30.cti')} majorspecies = [majorspeciesname[S] for S in ('CH4','O2','N2','CO2','H2O')] gas1 = ct.Solution(thermo='IdealGas', species=majorspecies) gas1.TPX = 500, 101325,'CH4:1, O2:2, N2:7.52' gas1.equilibrate('HP') #gas1() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Implementing Probabilistic Matrix Factorization in PyTorch # # From this paper: https://papers.nips.cc/paper/3208-probabilistic-matrix-factorization.pdf. # # Data from here: https://grouplens.org/datasets/movielens/ # # Our goal here is to find feature vectors for our users and movies that allow us to compare them. We start with a big matrix $R$ of movie ratings, with users as rows and movies as columns. Since users don't rate all movies, this is a pretty sparse matrix with a lot of missing values. # # What we can do is factor $R$ into two matrices $U$ and $V$ representing latent feature vectors for users and movies. With these latent features, we can compare users with each other, movies with each other, and predict ratings for movies users haven't seen. # # ![matrix factorization](matrix_factorization.png) # # The big idea behind this paper is that we're going to treat the latent vectors as parameters in a Bayesian model. As a reminder, Bayes theorem: # # $$ # \large P(\theta \mid D) \propto P(D \mid \theta) P(\theta) # $$ # # In our model we try to predict the rating matrix $R$ with $U$ and $V$ # # $$ \large \hat{R} = U^T V$$ # # In our model we assume the ratings are drawn from a normal distribution with mean $\hat{R}$. What's really cool is that we can place priors on our latent features. # # $$ # \begin{aligned} # R &\sim \mathrm{Normal}(U^T V, \sigma^2) \\ # U &\sim \mathrm{Normal}(0, \sigma_U^2) \\ # V &\sim \mathrm{Normal}(0, \sigma_V^2) # \end{aligned} # $$ # # The authors of the paper go further and build a hierachical structure for the user vectors # # $$ # \begin{aligned} # U_i &= Y_i + \frac{\sum_k^M I_{ik}W_k}{\sum_k^M I_{ik}} \\ # Y &\sim \mathrm{Normal}(0, \sigma_U^2) \\ # W &\sim \mathrm{Normal}(0, \sigma_W^2) # \end{aligned} # $$ # # With this model, we can maximize the posterior probability with respect to $U$ and $V$, or $V$, $Y$, and $W$ for the hierarchical model. In effect this is just a linear model with fancy regularization. # # The authors also do things like converting the ratings to be between 0 and 1, then taking the sigmoid of $U^T V$. For empirical reasons. 
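# Taking the negative log of that posterior (a sketch of the standard PMF derivation, with constants dropped) turns MAP estimation into a regularized least-squares problem over the observed ratings:
#
# $$
# E = \frac{1}{2}\sum_{i,j} I_{ij}\left(R_{ij} - U_i^T V_j\right)^2 + \frac{\lambda_U}{2}\sum_i \lVert U_i \rVert^2 + \frac{\lambda_V}{2}\sum_j \lVert V_j \rVert^2
# $$
#
# where $I_{ij}$ is 1 if user $i$ rated movie $j$ and 0 otherwise, and $\lambda_U = \sigma^2 / \sigma_U^2$, $\lambda_V = \sigma^2 / \sigma_V^2$. The `PMFLoss` module defined below follows the same shape: a masked squared error over the observed entries plus norm penalties on the two feature matrices.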
import pandas as pd import torch ratings = pd.read_csv('ml-latest-small/ratings.csv') ratings.describe() rating_matrix = ratings.pivot(index='userId', columns='movieId', values='rating') n_users, n_movies = rating_matrix.shape # Scaling ratings to between 0 and 1, this helps our model by constraining predictions min_rating, max_rating = ratings['rating'].min(), ratings['rating'].max() rating_matrix = (rating_matrix - min_rating) / (max_rating - min_rating) # Replacing missing ratings with -1 so we can filter them out later rating_matrix[rating_matrix.isnull()] = -1 rating_matrix = torch.FloatTensor(rating_matrix.values) # This is how we can define our feature matrices # We're going to be training these, so we'll need gradients latent_vectors = 5 user_features = torch.randn(n_users, latent_vectors, requires_grad=True) user_features.data.mul_(0.01) movie_features = torch.randn(n_movies, latent_vectors, requires_grad=True) movie_features.data.mul_(0.01) class PMFLoss(torch.nn.Module): def __init__(self, lam_u=0.3, lam_v=0.3): super().__init__() self.lam_u = lam_u self.lam_v = lam_v def forward(self, matrix, u_features, v_features): non_zero_mask = (matrix != -1).type(torch.FloatTensor) predicted = torch.sigmoid(torch.mm(u_features, v_features.t())) diff = (matrix - predicted)**2 prediction_error = torch.sum(diff*non_zero_mask) u_regularization = self.lam_u * torch.sum(u_features.norm(dim=1)) v_regularization = self.lam_v * torch.sum(v_features.norm(dim=1)) return prediction_error + u_regularization + v_regularization criterion = PMFLoss() loss = criterion(rating_matrix, user_features, movie_features) # + # Actual training loop now latent_vectors = 30 user_features = torch.randn(n_users, latent_vectors, requires_grad=True) user_features.data.mul_(0.01) movie_features = torch.randn(n_movies, latent_vectors, requires_grad=True) movie_features.data.mul_(0.01) pmferror = PMFLoss(lam_u=0.05, lam_v=0.05) optimizer = torch.optim.Adam([user_features, movie_features], lr=0.01) for step, epoch in enumerate(range(1000)): optimizer.zero_grad() loss = pmferror(rating_matrix, user_features, movie_features) loss.backward() optimizer.step() if step % 50 == 0: print(f"Step {step}, {loss:.3f}") # + # Checking if our model can reproduce the true user ratings user_idx = 7 user_ratings = rating_matrix[user_idx, :] true_ratings = user_ratings != -1 predictions = torch.sigmoid(torch.mm(user_features[user_idx, :].view(1, -1), movie_features.t())) predicted_ratings = (predictions.squeeze()[true_ratings]*(max_rating - min_rating) + min_rating).round() actual_ratings = (user_ratings[true_ratings]*(max_rating - min_rating) + min_rating).round() print("Predictions: \n", predicted_ratings) print("Truth: \n", actual_ratings) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.6.13 64-bit (''venv'': venv)' # name: python3 # --- # # Introduction to the notebook # # This notebook assesses performance on the test set while using all information available. # # The parameters used were chosen based on hyperparameter tuning that optimised the average 5-fold validation F1-macro score. 
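# The hyperparameter search itself is not repeated in this notebook. Purely as an illustration of the selection criterion (not the project's actual tuning code), the average 5-fold F1-macro can be computed with scikit-learn along these lines, where `fold_truths` and `fold_preds` are hypothetical per-fold label arrays:
# +
from statistics import mean
from sklearn.metrics import f1_score

def mean_f1_macro(fold_truths, fold_preds):
    """Average macro-averaged F1 score over the validation folds."""
    return mean(f1_score(t, p, average='macro') for t, p in zip(fold_truths, fold_preds))
# -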
# ## Imports # + import os import numpy as np import pandas as pd import torch import torch.optim as optim import torch.nn as nn import matplotlib.pyplot as plt from statistics import mean import matplotlib from tqdm import tqdm from datetime import datetime import os from PIL import Image from sklearn.metrics import accuracy_score import torchvision from sklearn.preprocessing import LabelEncoder from torch.utils.data import Dataset, DataLoader from sklearn.model_selection import train_test_split from sklearn.metrics import precision_score, recall_score, f1_score from torch.utils.data import Dataset, DataLoader, ConcatDataset, SubsetRandomSampler from torch.optim import lr_scheduler plt.style.use('seaborn') import DiagnosisFunctions.tools as tools import torchvision.models as models import albumentations as A import torchvision.transforms.functional as TF from sklearn.model_selection import KFold import time import pickle import CNNmodels as CNNmodels # + #Set the notebook to run on the GPU, if available. device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print(f'This notebook is running on the {device.type}.') if device.type == 'cuda': print(f"Running on device {torch.cuda.current_device()}") print('') # + (train_path, train_target), (test_path, test_target) = tools.get_splits_characteristics() train_transform = A.Compose( [ #ElasticTransform(alpha=1, sigma=50, alpha_affine=50, interpolation=1, border_mode=4, p=0.5), A.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.4, hue=0, always_apply=False, p=0.5), A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.5), ] ) train_set = tools.CharacteristicsDataset(path = train_path, target = train_target, size = [200, 200], transform = train_transform) test_set = tools.CharacteristicsDataset(path = test_path, target = test_target, size = [200, 200]) # - # ## Training # We use a custom loss function that puts weights on characteristics, diagnosis and area. This is done to later assess the importance each of the variables through hyperparameter tuning. class WeightedBCELoss(): def __init__(self, weights=[1, 1, 1]): self.weights = weights self.criterion = nn.BCELoss() def __call__(self, probabilities, targets): loss_characteristics = self.criterion(probabilities[:, :7], targets[:, :7]) loss_diagnosis = self.criterion(probabilities[:, 7:13], targets[:, 7:13]) loss_area = self.criterion(probabilities[:, 13:], targets[:, 13:]) return self.weights[0] * loss_characteristics + self.weights[1] * loss_diagnosis + self.weights[2] * loss_area def train_and_eval(phase, model, optimizer, criterion, scheduler, dataloaders): if phase == 'train': model.train() else: model.eval() running_loss = 0.0 #Preallocate the probabilities dataframe. 
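    # The per-batch outputs are accumulated into these frames with DataFrame.append below;
    # collecting them in a list and calling pd.concat once after the loop is the usual
    # faster (and, in newer pandas versions, non-deprecated) alternative.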
probabilities = pd.DataFrame(columns = dataloaders[phase].dataset.variables) ground_truth = pd.DataFrame(columns = dataloaders[phase].dataset.variables) for inputs, targets, _ in dataloaders[phase]: inputs = inputs.to(device) targets = targets.to(device).float() optimizer.zero_grad() with torch.set_grad_enabled(phase == 'train'): outputs = model(inputs) loss = criterion(outputs, targets) if phase == 'train': loss.backward() optimizer.step() running_loss += loss.item() #Append to the dataframes probabilities = probabilities.append(pd.DataFrame(outputs.detach().cpu().numpy(), columns = dataloaders[phase].dataset.variables), ignore_index=True) ground_truth = ground_truth.append(pd.DataFrame(targets.detach().cpu().numpy(), columns = dataloaders[phase].dataset.variables), ignore_index=True) if phase == 'train': scheduler.step() #Return the total loss. return running_loss, ground_truth, probabilities # + criterion = WeightedBCELoss(weights=[0.41072483896743606, 0.6142489137204648, 0.17056242939212682]) lr = 0.0003213711824536609 train_loader = DataLoader(train_set, batch_size=56) test_loader = DataLoader(test_set, batch_size=56) dataloaders = {'train': train_loader, 'test': test_loader} cnn = CNNmodels.CNN().to(device) optimizer = optim.Adam(cnn.parameters(), lr=lr, weight_decay=1e-4) scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1) loss = {'train': [], 'test': []} f1_diagnosis = {'train': [], 'test': []} for epoch in tqdm(range(20), unit='epoch'): for phase in ['train', 'test']: epoch_loss, gt, p = train_and_eval(phase, cnn, optimizer, criterion, scheduler, dataloaders) avg_obs_loss = (epoch_loss / len(dataloaders[phase].dataset)) loss[phase].append(avg_obs_loss) # Predict labels based on probabilities pred_class = tools.classify_probability_predictions(p.copy()) # Compute f1 scores with average 'samples' (default values) metric_dict = tools.compute_metrics_scores(gt, pred_class) f1_diagnosis[phase].append(metric_dict['diagnosis']) # - with open('final_model.p', 'wb') as output_model: pickle.dump((cnn.to('cpu'), (loss, f1_diagnosis)), output_model) # ## Performance assessment import pickle import numpy as np import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix, plot_confusion_matrix with open('final_model.p', 'rb') as input_model: model_file = pickle.load(input_model) model, (loss, f1_diagnosis) = model_file plt.figure(figsize=(6,4)) plt.plot(loss['train'], label='Train loss') plt.plot(loss['test'], label='Test loss') plt.legend() plt.xlabel("Epochs") plt.ylabel("Loss") plt.xticks(range(20),range(1,21)) plt.savefig("loss_plot.pdf") plt.show() # + test_loader = DataLoader(test_set, batch_size=83) inputs, target, _ = next(iter(dataloaders['test'])) outputs = model(inputs) preds = [torch.argmax(x) for x in outputs[:, 7:13]] targets = [torch.argmax(x) for x in target[:, 7:13]] # - disp = ConfusionMatrixDisplay(confusion_matrix(targets, preds, normalize='pred'), display_labels=test_loader.dataset.variables[7:13]) disp.plot() plt.tick_params(axis=u'both', which=u'both',length=0) plt.grid(b=None) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from nltk.stem import WordNetLemmatizer string = 'initial clinical critical care covid19 patients seattle new york city chicago since first recognition cluster novel respiratory viral infections china late december 2019 intensivists united 
states growing concern infections severe acute respiratory syndrome coronavirus 2 sarscov2 named coronavirus disease 2019 spread hospitals united states covid19 extremely transmissible progress severe form respiratory failure potential overwhelm available critical care resources high critical care management covid19 patients spotlight covid19 arrived united states january anticipated dramatically increased usage critical care resources three hardesthit cities seattle new york city chicago combined total cases march 23 2020in special article describe initial clinical critical care covid19 areas attention clinical presentation laboratory values organ system effects treatment strategies resource management highlight clinical observations align differ already published reports represent early empiric experience authors intended serve recommendations guidelines practice rather starting point intensivists preparing address covid19 community' lemmatizer = WordNetLemmatizer() print(string) string_sep = string.split() for i in range(len(string_sep)): current_word = string_sep[i] string_sep[i] = lemmatizer.lemmatize(current_word) lemmatizer.lemmatize('microscopically') string_sep from nltk import pos_tag nltk.download('averaged_perceptron_tagger') pos_tag(string_sep) string = 'initial clinical critical care covid19 patients seattle new york city chicago since first recognition cluster novel respiratory viral infections china late december 2019 intensivists united states growing concern infections severe acute respiratory syndrome coronavirus 2 sarscov2 named coronavirus disease 2019 spread hospitals united states covid19 extremely transmissible progress severe form respiratory failure potential overwhelm available critical care resources high critical care management covid19 patients spotlight covid19 arrived united states january anticipated dramatically increased usage critical care resources three hardesthit cities seattle new york city chicago combined total cases march 23 2020in special article describe initial clinical critical care covid19 areas attention clinical presentation laboratory values organ system effects treatment strategies resource management highlight clinical observations align differ already published reports represent early empiric experience authors intended serve recommendations guidelines practice rather starting point intensivists preparing address covid19 community' ''.join([i for i in string if not i.isdigit()]) with open('downstream/TextSGC_Bio/data/corpus/ohsumed.clean.txt', 'r') as f: lines = f.readlines() doc_content_list = [l.strip() for l in lines] for j in range(len(doc_content_list)): doc_content_list[j] = ''.join([i for i in doc_content_list[j] if not i.isdigit()]) doc_content_list[j] = doc_content_list[j].strip() print(doc_content_list[3].split()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Goal: Classify vehicle as bus or car based on smartphone sensor data import pandas as pd from scipy.ndimage import gaussian_filter from sklearn.ensemble import RandomForestClassifier from sklearn.cross_validation import cross_val_score from sklearn.metrics import accuracy_score import numpy as np # %matplotlib inline # #### Load the processed sensor data for car trip (see "Process Smartphone Sensor Data" jupyter notebook). 
On this trip, I drove my car from home to Censio and back and used SensorLog on my iPhone to track the trip. The total time for the trip was about 15 minutes. dfcar = pd.read_csv('../Data/shaneiphone_exp2_processed.csv', index_col='DateTime') # #### Load the processed sensor data for bus trip (see "Process Smartphone Sensor Data" jupyter notebook). On this trip, I took the 47 bus for about 10 minutes. dfbus = pd.read_csv('../Data/shanebus20150827_processed.csv', index_col='DateTime') # combine into a single dataframe df = pd.concat([dfcar, dfbus]) # Use only userAcceleration and gyroscope data, since these features are expected to generalize well. xyz = ['X', 'Y', 'Z'] measures = ['userAcceleration', 'gyroscope'] basefeatures = [i + j for i in measures for j in xyz] features = [i + j for i in measures for j in xyz] # Add Gaussian smoothed features smoothfeatures = [] for i in features: df[i + 'sm'] = gaussian_filter(df[i], 3) df[i + '2sm'] = gaussian_filter(df[i], 100) smoothfeatures.append(i + 'sm') smoothfeatures.append(i + '2sm') features.extend(smoothfeatures) # Generate Jerk signal jerkfeatures = [] for i in features: diffsignal = np.diff(df[i]) df[i + 'jerk'] = np.append(0, diffsignal) jerkfeatures.append(i + 'jerk') features.extend(jerkfeatures) # + # assign class labels car0 = (df.index > '2015-08-25 14:35:00') & \ (df.index <= '2015-08-25 14:42:00') car1 = (df.index > '2015-08-25 14:43:00') & \ (df.index <= '2015-08-25 14:48:00') bus0 = (df.index > '2015-08-27 10:10:00') & \ (df.index <= '2015-08-27 10:15:00') bus1 = (df.index > '2015-08-27 10:15:00') & \ (df.index <= '2015-08-27 10:20:00') nc = len(df) df['class'] = np.zeros(nc) - 1 df['class'][car0] = np.zeros(nc) df['class'][car1] = np.zeros(nc) df['class'][bus0] = np.ones(nc) df['class'][bus1] = np.ones(nc) # - # separate into quarters for train and validation q1 = df[car0] q2 = df[car1] q3 = df[bus0] q4 = df[bus1] traindf = pd.concat([q2, q4]) validationdf = pd.concat([q1, q3]) # check for NaNs in the dataframes print(traindf.isnull().sum().sum()) print(validationdf.isnull().sum().sum()) # drop NaNs traindf = traindf.dropna() validationdf = validationdf.dropna() # Make the training and validation sets X_train = traindf[features].values y_train = traindf['class'].values X_test = validationdf[features].values y_test = validationdf['class'].values # train a random forest clf = RandomForestClassifier(n_estimators=200) # get the 5-fold cross-validation score scores = cross_val_score(clf, X_train, y_train, cv=5) print(scores, scores.mean(), scores.std()) # apply model to test set clf.fit(X_train, y_train) predict_y = clf.predict(X_test) # obtain accuracy score testscore = accuracy_score(y_test, predict_y) print("Accuracy score on test set: %6.3f" % testscore) # #### We're not overfitting the data, but we're also not really predicting the vehicle class very well, since we're only right about 65-70% of the time with any prediction we make. # Inspect feature importances for i, ifeature in enumerate(features): print(ifeature + ': %6.4f' % clf.feature_importances_[i]) # #### The smoothed gyroscopeZ data is the most useful feature. 
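# As a small aside (not part of the original analysis), the printed importances are easier to scan when sorted; a minimal sketch reusing the fitted `clf` and the `features` list from above:
# +
import numpy as np

# rank features by importance, highest first, and show the top ten
order = np.argsort(clf.feature_importances_)[::-1]
for idx in order[:10]:
    print('%s: %6.4f' % (features[idx], clf.feature_importances_[idx]))
# -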
# compare bus gyroscopeZ2sm and car gyroscopeZ2sm q1['gyroscopeZ2sm'].plot(color='blue', figsize=(12,6), kind='hist', bins=40, alpha=0.4) # car q3['gyroscopeZ2sm'].plot(color='green', kind='hist', bins=40, alpha=0.4) # bus # #### Reflecting on this further, it occurs to me that this methodology is identifying that the bus trip and the car trip followed different routes and had different numbers and types of turns. A better way to go might be to identify features for each turn (e.g., time to complete turn, average accelerometer and gyroscope signal during turn, etc.) and apply the random forest to those features. # #### Another interesting avenue to pursue is features in Fourier space # Generate Fourier Transform of features fftfeatures = [] for i in features: reals = np.real(np.fft.rfft(df[i])) imags = np.imag(np.fft.rfft(df[i])) complexs = [reals[0]] n = len(reals) if n % 2 == 0: complexs.append(imags[0]) for j in range(1, n - 1): complexs.append(reals[j]) complexs.append(imags[j]) complexs.append(reals[j]) df['f' + i] = complexs fftfeatures.append('f' + i) features.extend(fftfeatures) # Make the training and validation sets X_train = traindf[fftfeatures].values y_train = traindf['class'].values X_test = validationdf[fftfeatures].values y_test = validationdf['class'].values # train a random forest clf = RandomForestClassifier(n_estimators=200) # get the 5-fold cross-validation score scores = cross_val_score(clf, X_train, y_train, cv=5) print(scores, scores.mean(), scores.std()) # apply model to test set clf.fit(X_train, y_train) predict_y = clf.predict(X_test) # obtain accuracy score testscore = accuracy_score(y_test, predict_y) print("Accuracy score on test set: %6.3f" % testscore) # #### Much better accuracy on the test set: 87%. We are definitely overfitting here, since we got 100% accuracy on the training set. We are also probably suffering from the same problem using the time series data, where the classifier learns to classify based on the nature of the route, not the nature of the ride. # Inspect feature importances for i, ifeature in enumerate(fftfeatures): print(ifeature + ': %6.4f' % clf.feature_importances_[i]) # #### Interesting that the accelerometer signal is more important here. This could be an indication that training in Fourier space helps mitigate the route-based issues that we encountered when using the time series data. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## Importing necessary packages import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. 
pd.read_csv) from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import GridSearchCV, RandomizedSearchCV from sklearn.tree import DecisionTreeRegressor seed = 42 # + # read datasets train = pd.read_csv('../input/train.csv') test = pd.read_csv('../input/test.csv') # process columns, apply LabelEncoder to categorical features for c in train.columns: if train[c].dtype == 'object': lbl = LabelEncoder() lbl.fit(list(train[c].values) + list(test[c].values)) train[c] = lbl.transform(list(train[c].values)) test[c] = lbl.transform(list(test[c].values)) # shape print('Shape train: {}\nShape test: {}'.format(train.shape, test.shape)) # + from sklearn.decomposition import PCA, FastICA n_comp = 100 # PCA pca = PCA(n_components=n_comp, random_state=42) pca2_results_train = pca.fit_transform(train.drop(["y"], axis=1)) pca2_results_test = pca.transform(test) # ICA ica = FastICA(n_components=n_comp, random_state=42) ica2_results_train = ica.fit_transform(train.drop(["y"], axis=1)) ica2_results_test = ica.transform(test) # Append decomposition components to datasets #for i in range(1, n_comp+1): # train['pca_' + str(i)] = pca2_results_train[:,i-1] # test['pca_' + str(i)] = pca2_results_test[:, i-1] # train['ica_' + str(i)] = ica2_results_train[:,i-1] # test['ica_' + str(i)] = ica2_results_test[:, i-1] #y_train = train["y"] #y_mean = np.mean(y_train) train_reduced = pd.concat([pd.DataFrame(pca2_results_train), pd.DataFrame(ica2_results_train)], axis = 1) test_reduced = pd.concat([pd.DataFrame(pca2_results_test), pd.DataFrame(ica2_results_test)], axis = 1) # - #X, y = train.drop('y', axis=1).values, train.y.values #print(X.shape) X, y = train_reduced, train.y.values print X.shape # + model = DecisionTreeRegressor(random_state=seed) DTR_params = { 'max_depth': [4,8], 'min_samples_split': [2,4], 'min_samples_leaf': [1,2,4] } reg = GridSearchCV(model, DTR_params, cv = 5, verbose=1, n_jobs = -1) reg.fit(X, y) # - print(reg.best_score_) print(reg.best_params_) means = reg.cv_results_['mean_test_score'] stds = reg.cv_results_['std_test_score'] params = reg.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print("%f (%f) with: %r" % (mean, stdev, param)) # make predictions and save results y_pred = xgb_model.predict(x_test) output = pd.DataFrame({'id': test['ID'].astype(np.int32), 'y': y_pred}) #output.to_csv('xgb_6.csv', index=False) # ## Trying base data with lasso # + from sklearn.linear_model import Lasso lasso_reg = Lasso(max_iter=6000) lasso_params = { 'max_iter': [5000, 6000, 7000], 'alpha': [1.55, 1.57, 1.6], 'fit_intercept': [True,False], 'normalize': [True, False], 'precompute': [True, False], 'tol': [0.004, 0.0045, 0.005], 'selection': ['random', 'cyclic'] } lasso_reg_cv = GridSearchCV(lasso_reg, lasso_params, cv = 5, verbose=1, n_jobs = -1) lasso_reg_cv.fit(X, y) # + print(lasso_reg_cv.best_score_) print(lasso_reg_cv.best_params_) means = lasso_reg_cv.cv_results_['mean_test_score'] stds = lasso_reg_cv.cv_results_['std_test_score'] params = lasso_reg_cv.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print("%f (%f) with: %r" % (mean, stdev, param)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 # + import matplotlib.pyplot as plt import numpy as np import tensorflow_probability as tfp import os import pandas as pd 
import feather from urllib.request import urlretrieve import tensorflow as tf tfd = tfp.distributions tfb = tfp.bijectors from tensorflow.keras.models import Model from tensorflow.keras.layers import Dense from tensorflow.keras import Sequential, Model from tensorflow.keras.layers import Dense, Input from tensorflow.keras.optimizers import Adam # + def get_if_not_there(filename = 'deer_train.feather'): if not os.path.isfile(filename): urlretrieve('https://raw.githubusercontent.com/tensorchiefs/dl_book/master/data/{}'.format(filename), filename = filename) get_if_not_there('deer_train.feather') get_if_not_there('deer_test.feather') # - df_train = feather.read_dataframe('deer_train.feather') df_test = feather.read_dataframe('deer_test.feather') df_train.head() df_test.head() y_train = df_train.iloc[:,0].to_numpy(dtype='float32') y_test = df_test.iloc[:,0].to_numpy(dtype='float32') X_train = pd.get_dummies(df_train.iloc[:,2:]) #We wont use the year X_test = pd.get_dummies(df_test.iloc[:,2:]) X_train.iloc[:,0] = X_train.iloc[:,0]/2922.02 #We divide by the maximal number to be in the range 0 to 1 X_test.iloc[:,0] = X_test.iloc[:,0]/2922.02 del df_train, df_test X_train.head() X_test.head() y_train y_test # + X_train = X_train.to_numpy(dtype='float32') X_test = X_test.to_numpy(dtype='float32') X_train.shape,X_test.shape # - vals, counts = np.unique(y_train, return_counts=True) plt.stem(vals, counts) plt.xlabel('Count: number of deers killed') plt.ylabel('Frequency') plt.xlim(-1,40) plt.show() # ## Linear Regression model_lr = Sequential() model_lr.add(Dense(1,input_dim=(X_train.shape[1]), activation='linear')) model_lr.compile(loss='mean_squared_error',optimizer=tf.optimizers.Adam(learning_rate=0.01)) model_lr.summary() hist_lr = model_lr.fit(x=X_train, y=y_train, validation_data=(X_test, y_test), epochs=10, verbose=1) plt.plot(hist_lr.history['loss']) plt.plot(hist_lr.history['val_loss']) plt.legend(['loss', 'val_loss']) plt.ylabel('MSE') plt.xlabel('Epochs') np.mean(hist_lr.history['loss']) # + # Calculation of the the optimal sigma n = len(y_train) y_hat_train = model_lr.predict(X_train) sigma_hat_2 = (n-1.)/(n-2.) 
* np.var(y_train - y_hat_train.flatten(),ddof=1) print('Estimated variance ', sigma_hat_2) print('Estimated standart deviation ', np.sqrt(sigma_hat_2)) y_hat = model_lr.predict(X_test) #Prediction on the testset NLL_lr = 0.5*np.log(2 * np.pi * sigma_hat_2) + 0.5*np.mean((y_test - y_hat.flatten())**2)/sigma_hat_2 print('NLL on training:', 0.5*np.log(2 * np.pi * sigma_hat_2) + 0.5*np.mean((y_train - y_hat_train.flatten())**2)/sigma_hat_2) print('NLL on test:',NLL_lr) # - y_hat_test=model_lr.predict(X_test) plt.scatter(y_hat_test, y_test,alpha=0.3) sort_idx=np.argsort(y_hat_test,axis=0) plt.plot(y_hat_test[sort_idx].flatten(), y_hat_test[sort_idx].flatten()+2*np.sqrt(sigma_hat_2),linestyle='dashed',c="black") plt.plot(y_hat_test[sort_idx].flatten(), y_hat_test[sort_idx].flatten()-2*np.sqrt(sigma_hat_2),linestyle='dashed',c="black") plt.plot(y_hat_test, y_hat_test, c="black") plt.title('Comparison on the testset') plt.xlabel('predicted average of deers killed') plt.ylabel('observed number of deers killed') plt.show() # ## Poisson Regression # + inputs = Input(shape=(X_train.shape[1],)) rate = Dense(1, activation=tf.exp)(inputs) p_y = tfp.layers.DistributionLambda(tfd.Poisson)(rate) model_p = Model(inputs=inputs, outputs=p_y) def NLL(y_true, y_hat): return -y_hat.log_prob(y_true) model_p.compile(Adam(learning_rate=0.01), loss=NLL) model_p.summary() # - hist_p = model_p.fit(x=X_train, y=y_train, validation_data=(X_test, y_test), epochs=10, verbose=1) plt.plot(hist_p.history['loss']) plt.plot(hist_p.history['val_loss']) plt.legend(['loss', 'val_loss']) plt.xlabel('Epochs') model = Model(inputs=inputs, outputs=p_y) y_hat_test = model(X_test).mean().numpy().flatten() # + NLL_train = model_p.evaluate(X_train, y_train,verbose=0) NLL_test = model_p.evaluate(X_test, y_test,verbose=0) print('NLL on training:', NLL_train) print('NLL on test:', NLL_test) # + from scipy.stats import poisson lower=poisson.ppf(0.025, y_hat_test) upper=poisson.ppf(0.975, y_hat_test) plt.scatter(y_hat_test, y_test, alpha=0.3) plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), lower[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), upper[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test, y_hat_test, c="black") plt.title('Comparison on the testset') plt.xlabel('predicted average of deers killed') plt.ylabel('observed number of deers killed') plt.show() # - # ## Poisson Regression with Hidden Layers # + inputs = Input(shape=(X_train.shape[1],)) x = Dense(100, activation="relu")(inputs) x = Dense(100, activation="relu")(x) x = Dense(10, activation="relu")(x) rate = Dense(1, activation=tf.exp)(x) p_y = tfp.layers.DistributionLambda(tfd.Poisson)(rate) model_p = Model(inputs=inputs, outputs=p_y) def NLL(y_true, y_hat): return -y_hat.log_prob(y_true) model_p.compile(Adam(learning_rate=0.01), loss=NLL) model_p.summary() # - hist_p = model_p.fit(x=X_train, y=y_train, validation_data=(X_test, y_test), epochs=10, verbose=1) plt.plot(hist_p.history['loss']) plt.plot(hist_p.history['val_loss']) plt.legend(['loss', 'val_loss']) plt.xlabel('Epochs') model = Model(inputs=inputs, outputs=p_y) # + y_hat_test = model(X_test).mean().numpy().flatten() NLL_train = model_p.evaluate(X_train, y_train,verbose=0) NLL_test = model_p.evaluate(X_test, y_test,verbose=0) print('NLL on training:', NLL_train) print('NLL on test:', NLL_test) # + from scipy.stats import poisson lower=poisson.ppf(0.025, y_hat_test) upper=poisson.ppf(0.975, y_hat_test) 
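# the dashed black lines drawn next show the central 95% prediction interval implied by the fitted Poisson rates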
plt.scatter(y_hat_test, y_test, alpha=0.3) plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), lower[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), upper[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test, y_hat_test, c="black") plt.title('Comparison on the testset') plt.xlabel('predicted average of deers killed') plt.ylabel('observed number of deers killed') plt.show() # - # ## ZIP Regression # + def zero_inf(out): rate = tf.squeeze(tf.math.exp(out[:,0:1])) #A s = tf.math.sigmoid(out[:,1:2]) #B probs = tf.concat([1-s, s], axis=1) #C return tfd.Mixture( cat=tfd.Categorical(probs=probs),#D components=[ tfd.Deterministic(loc=tf.zeros_like(rate)), #E tfd.Poisson(rate=rate), #F ]) #A The first component codes for the rate. We use exponential to guaranty values >0. We use the squeeze function to flatten the tensor. #B The second component codes for the zero inflation, using sigmoid squeezes the value between 0 and 1. #C The two probabilities for zeros or Poissonian #D The tfd.Categorical allows to create a mixture of two components. #E Zero as a deterministic value #F Value drawn from a Poissonian # + ## Definition of the custom parametrized distribution inputs = tf.keras.layers.Input(shape=(X_train.shape[1],)) out = Dense(2)(inputs)#A p_y_zi = tfp.layers.DistributionLambda(zero_inf)(out) model_zi = Model(inputs=inputs, outputs=p_y_zi) def NLL(y_true, y_hat): return -y_hat.log_prob(tf.reshape(y_true,(-1,))) model_zi.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=NLL) #A A dense layer is used without activation. The transformation is done inside zero_inf function model_zi.summary() # - hist_zi = model_zi.fit(x=X_train, y=y_train, validation_data=(X_test, y_test), epochs=10, verbose=1) plt.plot(hist_zi.history['loss']) plt.plot(hist_zi.history['val_loss']) plt.legend(['loss', 'val_loss']) plt.xlabel('Epochs') model = Model(inputs=inputs, outputs=p_y_zi) # + y_hat_test = model(X_test).mean().numpy().flatten() NLL_train = model_zi.evaluate(X_train, y_train,verbose=0) NLL_test = model_zi.evaluate(X_test, y_test,verbose=0) print('NLL on training:', NLL_train) print('NLL on test:', NLL_test) # - # workaround; only sample from 10 datapoints at once, otherwise we would get ram errors upper=[] lower=[] for i in range(0,np.int(len(X_test)/10)): samples_tmp=model_zi(X_test[(i*10):(i*10)+10]).sample(5000).numpy() upper=np.append(upper,np.quantile(samples_tmp,0.975,axis=0)) lower=np.append(lower,np.quantile(samples_tmp,0.025,axis=0)) plt.scatter(y_hat_test, y_test, alpha=0.3) plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), lower[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), upper[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test, y_hat_test, c="black") plt.title('Comparison on the testset') plt.xlabel('predicted average of deers killed') plt.ylabel('observed number of deers killed') plt.show() # ## ZIP Regression with Hidden Layers # + def zero_inf(out): rate = tf.squeeze(tf.math.exp(out[:,0:1])) #A s = tf.math.sigmoid(out[:,1:2]) #B probs = tf.concat([1-s, s], axis=1) #C return tfd.Mixture( cat=tfd.Categorical(probs=probs),#D components=[ tfd.Deterministic(loc=tf.zeros_like(rate)), #E tfd.Poisson(rate=rate), #F ]) #A The first component codes for the rate. We use exponential to guaranty values >0. We use the squeeze function to flatten the tensor. 
#B The second component codes for the zero inflation, using sigmoid squeezes the value between 0 and 1. #C The two probabilities for zeros or Poissonian #D The tfd.Categorical allows to create a mixture of two components. #E Zero as a deterministic value #F Value drawn from a Poissonian # + ## Definition of the custom parametrized distribution inputs = tf.keras.layers.Input(shape=(X_train.shape[1],)) x = Dense(100, activation="relu")(inputs) x = Dense(100, activation="relu")(x) x = Dense(10, activation="relu")(x) out = Dense(2)(x)#A p_y_zi = tfp.layers.DistributionLambda(zero_inf)(out) model_zi = Model(inputs=inputs, outputs=p_y_zi) def NLL(y_true, y_hat): return -y_hat.log_prob(tf.reshape(y_true,(-1,))) model_zi.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=NLL) #A A dense layer is used without activation. The transformation is done inside zero_inf function model_zi.summary() # - hist_zi = model_zi.fit(x=X_train, y=y_train, validation_data=(X_test, y_test), epochs=10, verbose=1) plt.plot(hist_zi.history['loss']) plt.plot(hist_zi.history['val_loss']) plt.legend(['loss', 'val_loss']) plt.xlabel('Epochs') model = Model(inputs=inputs, outputs=p_y_zi) # + y_hat_test = model(X_test).mean().numpy().flatten() NLL_train = model_zi.evaluate(X_train, y_train,verbose=0) NLL_test = model_zi.evaluate(X_test, y_test,verbose=0) print('NLL on training:', NLL_train) print('NLL on test:', NLL_test) # - # workaround; only sample from 10 datapoints at once, otherwise we would get ram errors upper=[] lower=[] for i in range(0,np.int(len(X_test)/10)): samples_tmp=model_zi(X_test[(i*10):(i*10)+10]).sample(5000).numpy() upper=np.append(upper,np.quantile(samples_tmp,0.975,axis=0)) lower=np.append(lower,np.quantile(samples_tmp,0.025,axis=0)) plt.scatter(y_hat_test, y_test, alpha=0.3) plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), lower[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), upper[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test, y_hat_test, c="black") plt.title('Comparison on the testset') plt.xlabel('predicted average of deers killed') plt.ylabel('observed number of deers killed') plt.show() # ## Regression with a discretized logistic mixture distribution def quant_mixture_logistic(out, bits=8, num=3): loc, un_scale, logits = tf.split(out, num_or_size_splits=num, axis=-1) scale = tf.nn.softplus(un_scale) discretized_logistic_dist = tfd.QuantizedDistribution( distribution=tfd.TransformedDistribution( distribution=tfd.Logistic(loc=loc, scale=scale), bijector=tfb.AffineScalar(shift=-0.5)), low=0., high=2**bits - 1.) 
mixture_dist = tfd.MixtureSameFamily( mixture_distribution=tfd.Categorical(logits=logits), #logits will be normalized to one components_distribution=discretized_logistic_dist) return mixture_dist # + inputs = tf.keras.layers.Input(shape=(X_train.shape[1],)) out = Dense(9)(inputs) p_y = tfp.layers.DistributionLambda(quant_mixture_logistic)(out) model = Model(inputs=inputs, outputs=p_y) def NLL(y_true, y_hat): return -y_hat.log_prob(tf.reshape(y_true,(-1,))) model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=NLL) model.summary() # - hist_mm = model.fit(x=X_train, y=y_train, validation_data=(X_test, y_test), epochs=10, verbose=1) plt.plot(hist_mm.history['loss']) plt.plot(hist_mm.history['val_loss']) plt.legend(['loss', 'val_loss']) plt.xlabel('Epochs') print(-np.mean(model(X_train).log_prob(y_train))) print(-np.mean(model(X_test).log_prob(y_test))) X_test.shape # + NLL_train = model.evaluate(X_train, y_train,verbose=0) NLL_test = model.evaluate(X_test, y_test,verbose=0) print('NLL on training:', NLL_train) print('NLL on test:', NLL_test) preds = np.zeros((1000,len(y_test.flatten()))) for i in range(0,1000): preds[i,:] = model(X_test).sample().numpy()# sample from the QuantizedDistribution y_hat_test=np.average(preds,axis=0) # - # workaround; only sample from 10 datapoints at once, otherwise we would get ram errors upper=[] lower=[] for i in range(0,np.int(len(X_test)/10)): samples_tmp=model(X_test[(i*10):(i*10)+10]).sample(5000).numpy() upper=np.append(upper,np.quantile(samples_tmp,0.975,axis=0)) lower=np.append(lower,np.quantile(samples_tmp,0.025,axis=0)) plt.scatter(y_hat_test, y_test, alpha=0.3) plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), lower[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), upper[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test, y_hat_test, c="black") plt.title('Comparison on the testset') plt.xlabel('predicted average of deers killed') plt.ylabel('observed number of deers killed') plt.show() # ### Regression with a discretized logistic mixture distribution and hidden layers def quant_mixture_logistic(out, bits=8, num=3): loc, un_scale, logits = tf.split(out, num_or_size_splits=num, axis=-1) scale = tf.nn.softplus(un_scale) discretized_logistic_dist = tfd.QuantizedDistribution( distribution=tfd.TransformedDistribution( distribution=tfd.Logistic(loc=loc, scale=scale), bijector=tfb.AffineScalar(shift=-0.5)), low=0., high=2**bits - 1.) 
mixture_dist = tfd.MixtureSameFamily( mixture_distribution=tfd.Categorical(logits=logits), #logits will be normalized to one components_distribution=discretized_logistic_dist) return mixture_dist # + inputs = tf.keras.layers.Input(shape=(X_train.shape[1],)) x = Dense(100, activation="relu")(inputs) x = Dense(100, activation="relu")(x) x = Dense(10, activation="relu")(x) out = Dense(9)(x) p_y = tfp.layers.DistributionLambda(quant_mixture_logistic)(out) model = Model(inputs=inputs, outputs=p_y) def NLL(y_true, y_hat): return -y_hat.log_prob(tf.reshape(y_true,(-1,))) model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=NLL) model.summary() # - hist_mm = model.fit(x=X_train, y=y_train, validation_data=(X_test, y_test), epochs=10, verbose=1) plt.plot(hist_mm.history['loss']) plt.plot(hist_mm.history['val_loss']) plt.legend(['loss', 'val_loss']) plt.xlabel('Epochs') print(-np.mean(model(X_train).log_prob(y_train))) print(-np.mean(model(X_test).log_prob(y_test))) # + NLL_train = model.evaluate(X_train, y_train,verbose=0) NLL_test = model.evaluate(X_test, y_test,verbose=0) print('NLL on training:', NLL_train) print('NLL on test:', NLL_test) preds = np.zeros((1000,len(y_test.flatten()))) for i in range(0,1000): preds[i,:] = model(X_test).sample().numpy()# sample from the QuantizedDistributio y_hat_test=np.average(preds,axis=0) # - # workaround; only sample from 10 datapoints at once, otherwise we would get ram errors upper=[] lower=[] for i in range(0,np.int(len(X_test)/10)): samples_tmp=model(X_test[(i*10):(i*10)+10]).sample(5000).numpy() upper=np.append(upper,np.quantile(samples_tmp,0.975,axis=0)) lower=np.append(lower,np.quantile(samples_tmp,0.025,axis=0)) plt.scatter(y_hat_test, y_test, alpha=0.3) plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), lower[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test[np.argsort(y_hat_test,axis=0)].flatten(), upper[np.argsort(y_hat_test,axis=0)],linestyle='dashed',c="black") plt.plot(y_hat_test, y_hat_test, c="black") plt.title('Comparison on the testset') plt.xlabel('predicted average of deers killed') plt.ylabel('observed number of deers killed') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="rIJcPwCF9tNT" colab_type="text" # # Training Mask R-CNN # **Protip**: Activate GPU by `Menu > Runtime > Change runtime type` and selecting `GPU` from `Hardware accelerator` dropdown. # + [markdown] id="OwDwdmKI65WT" colab_type="text" # ## Clone repository from github # + id="xah6lNgACW-N" colab_type="code" colab={} # import getpass # password = () # password = ":" + password password = "" # + id="aez9bS3Nsm6K" colab_type="code" outputId="4a63d4ff-179e-408d-9be5-ff555ab4285e" colab={"base_uri": "https://localhost:8080/", "height": 168} # %cd /content # !rm -rf Mask-RCNN repo = "https://subtleseeker" + password + "@github.com/subtleseeker/Mask-RCNN" # !git clone $repo # + [markdown] id="Rx-Jccdk6_WY" colab_type="text" # ## Mount your google drive # - Mounting will require authorization code. Follow the link after executing the cell below. # - Login with your google account. # - Approve for the required permissions. # - Copy the generated code and paste it in the cell input below. # Congrats! You have mounted your google drive. 
# # You can view the files of the mounted drive, by using the `>` arrow on top-left part of your screen just below `+ Code` button, and then clicking `Files` tab. # # **Note**: You will have to do this for every instance on google colab, as the instance alloted to you is temporary. # + id="3J070rjbtL1E" colab_type="code" outputId="1d2271d9-5766-42c2-b232-ed9c360c2ea4" colab={"base_uri": "https://localhost:8080/", "height": 121} from google.colab import drive drive.mount('/content/drive') # + [markdown] id="SZ120mr97Hmv" colab_type="text" # ## Downloading necessary files # Copy the required files from your google drive. # # **These files should be present in the following directory in your Google Drive: # `DRIVE_ROOT/SLAM/`.** # # # # **Required files** # - `mask_rcnn_coco.h5`: COCO Dataset pretrained weights. Download `mask_rcnn_coco.h5` from [this](https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5) link. If this link fails, open [this](https://github.com/matterport/Mask_RCNN/releases) link and download the file under `v2.0` tag. # - `dataset.tar.gz`: Dataset containing images and their masks in the required directory structure as mentioned in README.md in the github link above # # # + id="G45FCfO7uHXs" colab_type="code" colab={} # !cp /content/drive/My\ Drive/SLAM/mask_rcnn_coco.h5 /content/Mask-RCNN # + id="M9EFGWjCtMsI" colab_type="code" outputId="a678e8fc-b682-4b76-83ac-77bb4e12ee9a" colab={"base_uri": "https://localhost:8080/", "height": 50} # !tar -zxf /content/drive/My\ Drive/SLAM/dataset.tar.gz -C /content/Mask-RCNN/samples/auto/ # + id="POv-03EUwCYe" colab_type="code" outputId="8b26288b-41f9-4bf6-b627-24338cda1575" colab={"base_uri": "https://localhost:8080/", "height": 34} # %cd /content/Mask-RCNN/samples/auto # + [markdown] id="7Y08OFk5-t0t" colab_type="text" # ## Training # Execute the command to begin training # `!python3 auto.py train --dataset=/images --weights=coco` # + id="Vw4AaKufw9zg" colab_type="code" outputId="629df2e6-7d87-41b6-a3ad-711b042ceb54" colab={"base_uri": "https://localhost:8080/", "height": 1000} # !python3 auto.py train --dataset=./dataset/images --weights=coco # + [markdown] id="5ErH68FNAwJF" colab_type="text" # The last `auto_xxxxxx_yy.h5` file is the trained model weights after `yy` epochs. Save these weights in your Google Drive to use for your overlaying and evaluating the dataset. # + id="6BFnInz9Bajw" colab_type="code" colab={} # !cp /content/Mask-RCNN/logs/auto20191110T1347/mask_rcnn_balloon_0030.h5 /content/drive/My\ Drive/SLAM # + [markdown] id="sFx8gK74BqbU" colab_type="text" # The trained weights are now saved in your Google Drive. Download it from `DRIVE_ROOT/SLAM` # + [markdown] id="B7uutjCc_nPM" colab_type="text" # ## Visualize results with Tensorboard # Load tensorboard with *magic* command # + id="lAZUEiPpynvX" colab_type="code" colab={} # %load_ext tensorboard # + [markdown] id="TcR-CHMkAiBA" colab_type="text" # Logs are stored in `ROOT_DIR/logs` for every epoch. 
Visualize the training with: # `%tensorboard --logdir ` # + id="-ZUHfBJhxdtF" colab_type="code" outputId="1c6cae5a-f553-4f8b-9a3a-e37c5023a242" colab={"base_uri": "https://localhost:8080/", "height": 50} # %tensorboard --logdir /content/Mask-RCNN/logs % --- % jupyter: % jupytext: % text_representation: % extension: .m % format_name: light % format_version: '1.5' % jupytext_version: 1.14.4 % kernelspec: % display_name: Octave % language: octave % name: octave % --- % + size = ceil(rand() * 10^6) u = rand(1, size); v = rand(1, size); w1 = zeros(1, size); w2 = zeros(1, size); % + ti_0 = now(); for i=1:length(u), w1(i) = 2 * u(i) + 5 * v(i); end; diff_i = now() - ti_0; disp(diff_i) % + tv_0 = now(); w2 = 2 * u + 5 * v; diff_v = now() -tv_0; disp(diff_v) % - disp( sprintf("Relative time vectorial/iterative approach %0.3f%%", diff_v * 100 / diff_i) ) if sum(w1 == w2) == length(w1), disp('Equal vectors'); else disp('Something is wrong') end # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Import libraries import numpy as np import pandas as pd # ## Load the data data = pd.read_csv('BankChurners.csv', index_col='CLIENTNUM') # Deleting the last two columns data = data.iloc[:,:-2] data.head() data.shape # There is no missing value 100*data.isnull().sum() # Statistical resume data.describe().T # Categorical features columns = (data.dtypes == 'object') object_cols = list(columns[columns].index) print('Categorical variables:') print(object_cols) # More imports import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # ## Exploratory Data Analysis # Let's explore the data sns.countplot(x='Attrition_Flag',hue='Gender',data=data,palette='pastel') plt.title('Dependent count by Target') sns.countplot(x='Dependent_count',hue='Attrition_Flag',data=data,palette='pastel') # 2 and 3 dependent count is the most common plt.figure(figsize=(8,5)) plt.title('Education Level Count by Target') sns.countplot(x='Education_Level',hue='Attrition_Flag',data=data,palette='pastel') # Most customers are in the level of graduate plt.title('Marital Status Count by Target') sns.countplot(x='Marital_Status',hue='Attrition_Flag',data=data,palette='pastel') # Most customers are married or single plt.figure(figsize=(8,5)) plt.title('Income Category Count by Target') income_order = sorted(data['Income_Category'].unique()) sns.countplot(x='Income_Category',hue='Attrition_Flag',order=income_order,data=data,palette='pastel') # And they receive less than 40k plt.title('Card Category Count') sns.countplot(x='Card_Category',hue='Attrition_Flag',data=data,palette='pastel') # ## Modeling the Data from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder label_encoder = LabelEncoder() # Transforming the card category variable for n in data: data.loc[(data['Card_Category'] == 'Gold') | (data['Card_Category'] == 'Platinum') , 'Card_Category'] = 'Other' data.Card_Category.value_counts() for col in object_cols: data[col] = label_encoder.fit_transform(data[col]) data.head() # ## Spliting the Data X = data.drop('Attrition_Flag',axis=1) y = data['Attrition_Flag'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) # First let's choose some of the more common models for cross-validation from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC from 
sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold, learning_curve from sklearn import metrics from sklearn.metrics import classification_report,confusion_matrix kfold = StratifiedKFold(n_splits=10) # + random_state = 42 classifiers = [] classifiers.append(SVC(random_state=random_state)) classifiers.append(DecisionTreeClassifier(random_state=random_state)) classifiers.append(RandomForestClassifier(random_state=random_state)) classifiers.append(LogisticRegression(random_state = random_state)) cv_results = [] for classifier in classifiers: cv_results.append(cross_val_score(classifier, X_train, y = y_train, scoring = "accuracy", cv = kfold, n_jobs=4)) cv_means = [] cv_std = [] for cv_result in cv_results: cv_means.append(cv_result.mean()) cv_std.append(cv_result.std()) cv_res = pd.DataFrame({'CrossValMeans':cv_means,'CrossValErrors':cv_std,'Algorithm':['SVC','Decision Tree', 'Random Forest', 'Logistic Regression']}) g = sns.barplot('CrossValMeans','Algorithm',data = cv_res, palette='pastel',orient = 'h',**{'xerr':cv_std}) g.set_xlabel('Mean Accuracy') g = g.set_title('Cross validation scores') # - cv_means # The Random Forest perform better for this data rfc = RandomForestClassifier(n_estimators=300,max_features=5,max_leaf_nodes=250,random_state=42) # ### Training the Model rfc.fit(X_train, y_train) # ### Predictions rfc_pred = rfc.predict(X_test) # ### Evaluating the Model from sklearn.metrics import classification_report,confusion_matrix,roc_auc_score print(classification_report(y_test,rfc_pred)) print(roc_auc_score(rfc_pred,y_test)) # + # Our model was capable to explain more than 95% of the attrition flag # - # ## Saving the Model import joblib filename = 'Bank_Churners_model.joblib' joblib.dump(rfc, filename) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 import numpy as np import matplotlib.pyplot as plt grey = cv2.imread('../data/Lena.png', 0) cv2.imshow('original grey', grey) cv2.waitKey() cv2.destroyAllWindows() hist, bins = np.histogram(grey, 256, [0, 255]) plt.fill(hist) plt.xlabel('pixel value') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # %matplotlib inline import os import numpy as np import matplotlib.pyplot as plt # + def get_frames_from_events(xaddr_all, yaddr_all, num_events_per_sample=2000, chip_size=(240, 180), target_size=(64, 64)): """ Extract ``num_events_per_sample`` events from a one-dimensional sequence of AER-events. The events are spatially subsampled to ``target_size``, and standardized to [0, 1] using 3-sigma normalization. The resulting events are binned into a frame. The function operates on the events in ``xaddr_all`` etc sequentially until all are processed into frames. """ num_samples = int(len(xaddr_all) / num_events_per_sample) frames = np.zeros([num_samples] + list(target_size), 'float32') print("Extracting {} frames from DVS event sequence.".format(num_samples)) # Iterate for as long as there are events in the sequence. 
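    # Each iteration consumes the next ``num_events_per_sample`` events, bins them into one
    # subsampled frame, and clips that frame at three standard deviations of its non-zero counts.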
for sample_idx in range(num_samples): event_idxs = range(num_events_per_sample * sample_idx, num_events_per_sample * (sample_idx + 1)) # Loop over ``num_events_per_sample`` events for x, y in zip(xaddr_all[event_idxs], yaddr_all[event_idxs]): # Subsample from 240x180 to e.g. 64x64 x = int(x * (target_size[0] - 1) / (chip_size[0] - 1)) y = int(y * (target_size[1] - 1) / (chip_size[1] - 1)) # Count event at subsampled location (x and y axes are swapped) frames[sample_idx, y, x] += 1 # Compute standard deviation of event-sum distribution # after removing zeros sample = frames[sample_idx] sigma = np.std(sample[np.nonzero(sample)]) # Clip number of events per pixel to three-sigma frames[sample_idx] = np.clip(sample, 0, 3*sigma) return frames / 255. # + datapath = '/home/rbodo/Downloads' filename = 'rec_extracted.npz' # Load recorded data. Assumes that the data has been extracted from the hdf5 # file into variables like ``xaddr``, ``yaddr``, ``timestamp`` etc, and saved # as compressed numpy file, using a command like # np.savez_compressed(os.path.join(datapath, filename), xaddr=xaddr, yaddr=yaddr, ...) data_dict = np.load(os.path.join(datapath, filename)) print("Keys to data fields stored in ``data_dict``: {}".format(data_dict.keys())) # Access individual fields by the name of the keyword argument # with which it was saved. xaddr = data_dict['xaddr'] yaddr = data_dict['yaddr'] # The following are not needed for creating the frames. timestamp = data_dict['timestamp'] sonar_left = data_dict['sonar_left'] sonar_center = data_dict['sonar_center'] sonar_right = data_dict['sonar_right'] throttle = data_dict['throttle'] steering = data_dict['steering'] # - frames = get_frames_from_events(xaddr, yaddr, num_events_per_sample=2000, chip_size=(240, 180), target_size=(64, 64)) plt.imshow(frames[0]) # TODO: May need to add an extra axis to ``frames`` to fit input shape of CNN, i.e. (1, 64, 64) instead of (64, 64). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="hRTa3Ee15WsJ" # # Classifying an Image as Meme or Not Using Transfer Learning # + [markdown] colab_type="text" id="2X4KyhORdSeO" # We will try to classify meme vs normal images by using transfer learning from a pre-trained network. This will allows us to get higher accuracies than we saw by training our network from scratch. 
# + colab_type="code" id="iBMcobPHdD8O" colab={} from __future__ import absolute_import, division, print_function, unicode_literals import os import tensorflow as tf from tensorflow import keras print("TensorFlow version is ", tf.__version__) import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg tf.enable_eager_execution() # + [markdown] colab_type="text" id="v77rlkCKW0IJ" # ## Dataset File Upload # + id="nptk8-Ji93kf" colab_type="code" colab={} from google.colab import files files.upload() # + id="drMZitRtFMP9" colab_type="code" colab={} zip_file = tf.keras.utils.get_file(origin="http://files.dgtlschl.com/MemeOrNot.zip", fname="MemeOrNot.zip") zip_file # + id="ry6AlzvXAnQR" colab_type="code" colab={} import zipfile with zipfile.ZipFile(zip_file, 'r') as zip_ref: zip_ref.extractall("/content/") # + id="VUDIrktwA60_" colab_type="code" colab={} base_dir="/content/MemeOrNot/" # + [markdown] colab_type="text" id="9_6h-c5EXN91" # ### Prepare training and validation meme or not dataset # Create the training and validation directories for meme datasets and not meme datasets. # + colab_type="code" id="RWcldM4TXLen" colab={} train_dir = os.path.join(base_dir, 'train') validation_dir = os.path.join(base_dir, 'validation') # Directory with our training meme pictures train_meme_dir = os.path.join(train_dir, 'meme') print ('Total training meme images:', len(os.listdir(train_meme_dir))) # Directory with our training not meme pictures train_normal_dir = os.path.join(train_dir, 'normal') print ('Total training normal images:', len(os.listdir(train_normal_dir))) # Directory with our validation meme pictures validation_meme_dir = os.path.join(validation_dir, 'meme') print ('Total validation meme images:', len(os.listdir(validation_meme_dir))) # Directory with our validation normal pictures validation_normal_dir = os.path.join(validation_dir, 'normal') print ('Total validation normal images:', len(os.listdir(validation_normal_dir))) # + [markdown] colab_type="text" id="wvidPx6jeFzf" # ### Create Image Data Generator with Image Augmentation # # We will use ImageDataGenerator to rescale the images. # # To create the train generator, specify where the train dataset directory, image size, batch size and binary classification mode. # # The validation generator is created the same way. 
# + colab_type="code" id="y3PM6GVHcC31" colab={} image_size = 160 # All images will be resized to 160x160 batch_size = 32 # Rescale all images by 1./255 and apply image augmentation train_datagen = keras.preprocessing.image.ImageDataGenerator( rescale=1./255) validation_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1./255) # Flow training images in batches of 20 using train_datagen generator train_generator = train_datagen.flow_from_directory( train_dir, # Source directory for the training images target_size=(image_size, image_size), batch_size=batch_size, # Since we use binary_crossentropy loss, we need binary labels class_mode='binary') # Flow validation images in batches of 20 using test_datagen generator validation_generator = validation_datagen.flow_from_directory( validation_dir, # Source directory for the validation images target_size=(image_size, image_size), batch_size=batch_size, class_mode='binary') # + [markdown] colab_type="text" id="OkH-kazQecHB" # ## Using the **MobileNet V2** base model from the pre-trained convnets # # + colab_type="code" id="19IQ2gqneqmS" colab={} IMG_SHAPE = (image_size, image_size, 3) # Create the base model from the pre-trained model MobileNet V2 base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') # + colab_type="code" id="OTCJH4bphOeo" colab={} base_model.trainable = False # + colab_type="code" id="KpbzSmPkDa-N" colab={} # Let's take a look at the base model architecture base_model.summary() # + colab_type="code" id="eApvroIyn1K0" colab={} model = tf.keras.Sequential([ base_model, keras.layers.GlobalAveragePooling2D(), keras.layers.Dense(1, activation='sigmoid') ]) # + [markdown] colab_type="text" id="g0ylJXE_kRLi" # ### Compile the model # # You must compile the model before training it. 
# + colab_type="code" id="RpR8HdyMhukJ" colab={} model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=0.0001), loss='binary_crossentropy', metrics=['accuracy']) # + colab_type="code" id="I8ARiyMFsgbH" colab={} model.summary() # + [markdown] colab_type="text" id="lxOcmVr0ydFZ" # These 1.2K trainable parameters are divided among 2 TensorFlow `Variable` objects, the weights and biases of the two dense layers: # + colab_type="code" id="krvBumovycVA" colab={} len(model.trainable_variables) # + [markdown] colab_type="text" id="RxvgOYTDSWTx" # ### Train the model # # Training the model for 10 epochs # # + colab_type="code" id="Om4O3EESkab1" colab={} epochs = 10 steps_per_epoch = train_generator.n // batch_size validation_steps = validation_generator.n // batch_size history = model.fit_generator(train_generator, steps_per_epoch = steps_per_epoch, epochs=epochs, workers=4, validation_data=validation_generator, validation_steps=validation_steps) # + colab_type="code" id="53OTCh3jnbwV" colab={} acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] plt.figure(figsize=(8, 8)) plt.subplot(2, 1, 1) plt.plot(acc, label='Training Accuracy') plt.plot(val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.ylabel('Accuracy') plt.ylim([min(plt.ylim()),1]) plt.title('Training and Validation Accuracy') plt.subplot(2, 1, 2) plt.plot(loss, label='Training Loss') plt.plot(val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.ylabel('Cross Entropy') plt.ylim([0,max(plt.ylim())]) plt.title('Training and Validation Loss') plt.show() # + [markdown] colab_type="text" id="4TU4kv3yINQ-" # # Saving the model # We will now save the model for later use. # # + colab_type="code" id="0wozBQvmINQ_" colab={} tf.keras.models.save_model( model, "/content/meme_or_not.h5", overwrite=True, include_optimizer=True, save_format="h5" ) # + [markdown] colab_type="text" id="CqwV-CRdS6Nv" # ## Fine tuning # # # + colab_type="code" id="4nzcagVitLQm" colab={} base_model.trainable = True # + colab_type="code" id="-4HgVAacRs5v" colab={} # Let's take a look to see how many layers are in the base model print("Number of layers in the base model: ", len(base_model.layers)) # Fine tune from this layer onwards fine_tune_at = 100 # Freeze all the layers before the `fine_tune_at` layer for layer in base_model.layers[:fine_tune_at]: layer.trainable = False # + colab_type="code" id="NtUnaz0WUDva" colab={} model.compile(optimizer = tf.keras.optimizers.RMSprop(lr=2e-5), loss='binary_crossentropy', metrics=['accuracy']) # + colab_type="code" id="WwBWy7J2kZvA" colab={} model.summary() # + colab_type="code" id="bNXelbMQtonr" colab={} len(model.trainable_variables) # + [markdown] colab_type="text" id="4G5O4jd6TuAG" # ### Continue Train the model # + [markdown] colab_type="text" id="0foWUN-yDLo_" # If you trained to convergence earlier, this will get you a few percent more accuracy. 
# + colab_type="code" id="ECQLkAsFTlun" colab={} history_fine = model.fit_generator(train_generator, steps_per_epoch = steps_per_epoch, epochs=epochs, workers=4, validation_data=validation_generator, validation_steps=validation_steps) # + colab_type="code" id="PpA8PlpQKygw" colab={} acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] acc += history_fine.history['acc'] val_acc += history_fine.history['val_acc'] loss += history_fine.history['loss'] val_loss += history_fine.history['val_loss'] # + colab_type="code" id="chW103JUItdk" colab={} plt.figure(figsize=(8, 8)) plt.subplot(2, 1, 1) plt.plot(acc, label='Training Accuracy') plt.plot(val_acc, label='Validation Accuracy') plt.ylim([0.9, 1]) plt.plot([epochs-1,epochs-1], plt.ylim(), label='Start Fine Tuning') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(2, 1, 2) plt.plot(loss, label='Training Loss') plt.plot(val_loss, label='Validation Loss') plt.ylim([0, 0.2]) plt.plot([epochs-1,epochs-1], plt.ylim(), label='Start Fine Tuning') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() # + [markdown] colab_type="text" id="_TZTwG7nhm0C" # # Saving the model # We will now save the model for later use. # # + id="VkxcDivCw5-W" colab_type="code" colab={} tf.keras.models.save_model( model, "/content/meme_or_not-finetuned.h5", overwrite=True, include_optimizer=True, save_format="h5" ) # + id="ogfO3e4cGbkp" colab_type="code" colab={} pip install tensorflowjs # + id="k02bXn7zJGBn" colab_type="code" colab={} import tensorflowjs as tfjs tf.enable_eager_execution() # + id="TaiVFL-lHfbe" colab_type="code" colab={} tfjs.converters.save_keras_model(model,"/content/tfjs/") # + id="3VmY4sa6JB0g" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.10 64-bit (''myenv'': conda)' # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="0bK0zUPX97Y7" # ## Importing libraries and DataFrame # + id="-pyXK1YB97Y-" #importing libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import os from scipy.stats.stats import pearsonr # + id="T-HSTcxM97ZA" pwd = os.getcwd() # + id="SqhmKi1O97ZB" outputId="9f97708b-6e0b-427c-8d39-b3a6d83d3c0c" #importing data #kaggle datasets download -d kwadwoofosu/predict-test-scores-of-students api command for dataset #source: https://www.kaggle.com/kwadwoofosu/predict-test-scores-of-students dataframe = pd.read_csv(pwd + '\\test_scores.csv') df = dataframe.copy() dataframe.head() # + id="yGe7Jldi97ZD" outputId="d9741ac8-76e4-4860-934b-97090233c473" dataframe.info() # + [markdown] id="rro6vame97ZD" # No null values, so no necessity for value inputing or droping rows # + [markdown] id="6ed--I3O97ZF" # ## Exploratory Vizualization # + [markdown] id="oR6n6X0L97ZG" # ### Let's first understand the general characteristcs of our student population # + id="TCQsqqMl97ZH" outputId="cca8d5ca-f3c7-4c27-a7d9-911a76d15e9c" #plotting general data from population fig, ax = plt.subplots(2,2,figsize = [18,9]) cmap = {'coral', 'tab:blue', 'green'} sns.set_style('whitegrid') dataframe.groupby('gender').size().plot(kind = 'pie', autopct= '%1.2f%%', shadow = True, startangle = 45 , title = 'Gender Distribuition',ax = ax[0,0], colors = cmap) 
dataframe.groupby('school_setting').size().plot(kind = 'pie', autopct= '%1.2f%%', shadow = True, startangle = 45 ,title = 'School Setting', ax = ax[0,1], colors = cmap) dataframe.groupby('school_type').size().plot(kind = 'pie', autopct= '%1.2f%%', shadow = True, startangle = 45, title = 'School Type', ax = ax[1,0], colors = cmap) dataframe.groupby('teaching_method').size().plot(kind = 'pie', autopct= '%1.2f%%', shadow = True, startangle = 45, title = 'Teaching Method' , ax = ax[1,1], colors = cmap) plt.tight_layout() plt.show() # + [markdown] id="CryAL4BT97ZI" # Some highlights gained by visualizing the general characteristics of the student population: # # - The number of students of each gender is split equally. # - Is gender in any way a factor influencing the test scores? #
    #
    # - The great majority of students live in urban/suburban settings. # - Do any of these settings influence their performance on the test? #
    #
    # - As expected, most of the students attend public schools; only 1/4 of the students study in non-public schools. # - Does the difference between public and non-public school scores show a significant gap in education quality? #
    #
    # - 1/3 of the students are in school that practice experimental teaching methods. # - Do these novel methods improve students scores? # + id="l0rnVAhn97ZJ" #calculating the mean score for each demographic gender_score = dataframe.groupby('gender').mean()['posttest'].reset_index() type_score = dataframe.groupby('school_type').mean()['posttest'].reset_index() lunch_score = dataframe.groupby('lunch').mean()['posttest'].reset_index() method_score = dataframe.groupby('teaching_method').mean()['posttest'].reset_index() setting_score = dataframe.groupby('school_setting').mean()['posttest'].reset_index() # + id="O7JXM9km97ZJ" outputId="33e3b8b6-4a06-467d-9804-a6bf8ceaa5ff" #plotting averages fig, ax = plt.subplots(2,2, figsize = [14, 9]) sns.set_palette('Set1') sns.barplot(data=gender_score, y = 'posttest', x = 'gender', ax = ax[0,0]) sns.barplot(data=type_score, y = 'posttest', x = 'school_type', ax = ax[0,1]) sns.barplot(data=lunch_score, y = 'posttest', x = 'lunch', ax = ax[1,0]) sns.barplot(data=method_score, y = 'posttest', x = 'teaching_method', ax = ax[1,1]) sns.set_style('whitegrid') plt.show() # + [markdown] id="dnmUg1ZU97ZK" # With the comparison of mean score for some properties of the population we can see that: # # - Gender appears not to be a relevant factor for test score. # # # # - Students from public schools have a mean performance gap compared to private school students of more than 15%. # # # # - Students in poor economic conditions (qualifies for lunch program) have a even more score gap compared with those who don't qualify for reduced/free lunch. # # # # - Students from schools that use experimental teaching programs have a greater mean score compared to those from Standard schools. # + [markdown] id="QV9lDuhd97ZK" # ### Now, we direct our focus onto how school settings affect students performance # + id="JZ0lgFmg97ZL" outputId="05f2aa41-fbb7-4795-99cb-4ab90c59fe21" #checking variance of scores for particular and public schools fig, ax = plt.subplots(1,2,figsize = [20,10]) sns.swarmplot(data = dataframe ,x = 'school_type', y = 'posttest', ax = ax[0], hue = 'school_setting') sns.boxplot(data = dataframe ,x = 'school_type', y = 'posttest', ax = ax[0], color = 'grey') sns.swarmplot(data = dataframe ,x = 'teaching_method', y = 'posttest', ax = ax[1], hue = 'school_type') sns.boxplot(data = dataframe ,x = 'teaching_method', y = 'posttest', ax = ax[1], color = 'grey') fig.suptitle("Distribution of scores for School Types and teaching methods", fontsize = 18) plt.tight_layout() plt.show() # + [markdown] id="6myM_FtD97ZL" # In relation to school type, we can see that public schools have greater score variance compared to non-public ones. Also, depending on where is located a school can have significantly different results. Public suburban schools seem to excel in performance even compared to the non-public school general score. Since suburban schools have good performance, you could expect urban instituition to at least follow closely behind, however this is not the case. #
    #
    # The factors behind why urban public schools have been notably underperforming should be investigated. Questions to be raised: #
    #
    # - What is the HDI level of these urban and suburban neighborhoods?
    # - Are urban schools receiving proper funding?
    # - Is this difference caused by teaching methods?
    #
    #
    # # Comparing teaching methodologies, the advantage in scores that was already observed is visible again. One thing to note is that, in the group of schools with novel methods, the top 50% of performers are well distributed between public and non-public schools. #
    #
    # # It is imperative to answer the questions below: # #
    #
    # - What are the experimental methods that these public schools are implementing?
    # - Can they be reproduced in low-performing Standard schools in an attempt to improve their learning?
    # # Also note #
    #
    # - Some experimental schools have scores below the 2nd quartile of Standard schools. Most of them are public.
    # - Compared to past performance, was there any improvement?
    # - Consider reviewing those methods.
    # # --- # + [markdown] id="nxcukWpi97ZM" # ### Relation beteween pre-test and test scores # + id="3O3PoMUp97ZM" outputId="7ef70bdd-6b46-49a5-b21d-2a94cba56d51" #correlation between pre-test and post-test scores x = np.linspace(0, 120, 100) y = np.linspace(0, 120, 100) fig, ax = plt.subplots(figsize = [8,6]) sns.scatterplot(x = dataframe['pretest'], y = dataframe['posttest'], hue =dataframe['school_type']) plt.plot(x, y, color = 'green') plt.annotate(xy = (60,60), text = 'pretest = posttest line', xytext = (50,0), xycoords = 'data', textcoords = 'offset points', arrowprops = dict(facecolor='green', shrink=0.05), horizontalalignment='left', verticalalignment='top', fontsize = 12) plt.xlim(15, 105) plt.ylim(15, 105) plt.show() # + [markdown] id="XgVVyeQl97ZN" # It's clear that, deriving from pre-test performance, we can expect at least similar or more likely higher scores at the real test. # + [markdown] id="v32uvGkN97ZN" # ### Visualizing performance per school # + [markdown] id="tfV6YqUx97ZN" # What are the characteristics of the best and worse schools? # + id="ZkLda-Jc97ZO" outputId="744e36eb-2370-4984-8d21-4fdef9799821" #Visualizing School performance school_ranking = dataframe.groupby('school').agg(posttest = ('posttest', 'mean'), School_Type = ('school_type', pd.Series.mode), teaching = ('teaching_method', pd.Series.mode), school_setting = ('school_setting', pd.Series.mode), n_student = ('n_student', pd.Series.mean) ).reset_index() school_ranking.sort_values('posttest', ascending= False, ignore_index=True, inplace=True) school_ranking.rename(columns={'school': 'School Name', 'posttest': 'Test Avg Score', 'School_Type':'School Type', 'teaching': 'Teaching Method', 'school_setting':'School Setting', 'n_student':'Avg students per class' }, inplace=True) school_ranking # + id="_Wn5ZVQ-97ZO" outputId="c2c180cc-6066-40ac-c45c-f425845ff549" sns.scatterplot(data=school_ranking, x = 'Avg students per class', y = 'Test Avg Score', hue = 'School Type') plt.show() # + [markdown] id="5qK89jCg97ZP" # We can't conclude yet the relation between number of students per class and score, but it already resembles a negative correlation. # + id="nO7YaZgK97ZP" outputId="6f1c8125-8c35-4910-b677-b18c209ca3a0" corr = pearsonr(dataframe['posttest'], dataframe['n_student']) f'Pearson correlation: {corr}' # + [markdown] id="HX9Mn4oD97ZQ" # Using Pearson Coefficient we can conclude that the number off students per class and test score has a very high inverse correlation, with a p-value close to zero indicating a significant statistical relevance for the result. 
# + [markdown] id="iBwL-TfJ97ZQ" # ## Predicting Student Score # + [markdown] id="gELJcX8T97ZR" # ### Dataset preparation # + id="-NX1L5Kj97ZR" df = dataframe.copy() # + id="SGW2F0uu97ZR" outputId="4c44272e-375f-4c81-8cff-f364e847b594" #dropping non-significant columns for this regression df.drop(columns='student_id', inplace=True) #encoding categorical variables from sklearn.compose import ColumnTransformer import category_encoders as ce from sklearn.preprocessing import OrdinalEncoder ct_be = ce.BinaryEncoder(cols =['school', 'school_setting', 'classroom'], return_df=True) df_encoded = ct_be.fit_transform(df)#Using binary encoding for variables with high number of categories ct_oe = ce.OrdinalEncoder(cols = ['school_type', 'teaching_method', 'gender', 'lunch'], return_df=True, mapping = [{'col': 'school_type', 'mapping': {'Non-public': 0, 'Public':1}}, {'col':'teaching_method', 'mapping':{'Experimental':1, 'Standard':0}}, {'col':'gender', 'mapping': {'Female':1, 'Male':0}}, {'col':'lunch', 'mapping': {'Does not qualify':0, 'Qualifies for reduced/free lunch':1}}]) df_encoded = ct_oe.fit_transform(df_encoded)#ordinal encoding for 0 or 1 variables df_encoded[['school_type', 'teaching_method', 'gender', 'lunch']] = df_encoded[['school_type', 'teaching_method', 'gender', 'lunch']].apply(pd.to_numeric) # + [markdown] id="FLYEHB6d97ZS" # ### Visualizing dataset_encoded # + id="jasbEWzF97ZS" outputId="d4f3876b-d6b2-4c22-9c58-4f7b1d4264d0" df_encoded.info() # + id="jWTKusx197ZS" outputId="c45dd876-a891-44e8-96cf-500976b95f91" df_encoded # + [markdown] id="n6kRyKmn97ZT" # ### Splitting Trainning and test datasets # + id="BD_VzQRf97ZT" from sklearn.model_selection import train_test_split import xgboost as xg #Split features and labels X, y = df_encoded.iloc[:, :-1], df_encoded.iloc[:, -1] # Splitting X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42) #transforming objects into DMatrix data train_dmatrix = xg.DMatrix(data = X_train, label = y_train) test_dmatrix = xg.DMatrix(data = X_test, label = y_test) # + id="Nsvt2Srv97ZT" train_dmatrix = xg.DMatrix(data = X_train, label = y_train) test_dmatrix = xg.DMatrix(data = X_test, label = y_test) # + [markdown] id="FPAvYHBI97ZT" # ### Trainning XGBRegressor model # + id="wYCa5m5997ZT" #trainning xboost regressor params = {'booster':'gblinear', 'objective': 'reg:squarederror'} reg = xg.train(dtrain = train_dmatrix, params = params, num_boost_round = 500) y_pred = reg.predict(test_dmatrix) # + [markdown] id="pDpPdTDZ97ZU" # ## Results and performance Evaluation # + id="wNBl8N3T97ZU" outputId="733ebfa3-29df-4a7f-f1ce-261c38ef9cb5" from sklearn.metrics import mean_squared_error as MSE rmse = np.sqrt(MSE(y_test, y_pred)) print("RMSE :% f " %(rmse)) from sklearn.metrics import r2_score r2 = r2_score(y_test,y_pred) x = 1 - r2 y = (X.shape[0] - 1) / (X.shape[0] - X.shape[1] - 1) adjusted_r2 = 1 - (x * y) print('R2 Score:% f' %(r2), '| Adjusted-R2:% f' %(adjusted_r2)) # + [markdown] id="4dFFySID97ZU" # This model presents a good performance without utilizing more complex techiques such as feature reduction, and hyperparameter tunning. With R2 and Adjusted-R2 scores being very similar, it has a good bias-variance balance, and not prone to overfitting. #
    #
    # Using the model of the sections above we can predict the score of a certain student, or a whole school using just a few variables. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Seminar: # At this seminar we will construct the simpliest possible synthesis - diphone model. # # We will use part of the LJSpeech dataset. # Your task will be to design search and concatenation of the units. # Preprocessor stages are already performed for the test samples (and it'll be your home assignment to create a small g2p for CMU english phoneset). # ## Alignment # The first and very import part in the data preparation is alignment: we need to determine the timings of phonemes our utterance consists of. # Even the concatenative syntheses are not used today in prod alignment is still an important phase for upsampling-based parametric acoustic models (e.g. fastspeech). # ### Motreal Force Aligner # To process audio we will use MFA. # # At the alignment stage we launch xent-trained TDNN ASR system with fixed text on the output and try to determine the most probable phonemes positions in the timeline. # + # %%writefile install_mfa.sh # #!/bin/bash ## a script to install Montreal Forced Aligner (MFA) root_dir=${1:-/tmp/mfa} # mkdir -p $root_dir # cd $root_dir # download miniconda3 wget -q --show-progress https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh bash Miniconda3-latest-Linux-x86_64.sh -b -p $root_dir/miniconda3 -f # create py38 env $root_dir/miniconda3/bin/conda create -n aligner -c conda-forge openblas libopenblas python=3.8 openfst pynini ngram baumwelch -y source $root_dir/miniconda3/bin/activate aligner # install mfa, download kaldi pip install montreal-forced-aligner # install requirements pip install git+https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner.git # install latest updates mfa thirdparty download # echo -e "\n======== DONE ==========" # echo -e "\nTo activate MFA, run: source $root_dir/miniconda3/bin/activate aligner" # echo -e "\nTo delete MFA, run: rm -rf $root_dir" # echo -e "\nSee: https://montreal-forced-aligner.readthedocs.io/en/latest/aligning.html to know how to use MFA" # + # download and install mfa INSTALL_DIR="/tmp/mfa" # path to install directory # !bash ./install_mfa.sh {INSTALL_DIR} # - # !source {INSTALL_DIR}/miniconda3/bin/activate aligner; mfa align --help # ### LJSpeech data subset # Here we will download the dataset. # However we don't need the whole LJSpeech for diphone synthesis (and it will be processed for quite a while). # Here we will take about 1/10 of the dataset. That's more than enough for diphone TTS. 
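# The cells below read a file called `wavs_need.txt` that lists the clip ids of the subset; its creation is not shown in this notebook. The following is a minimal sketch of how such a roughly 1-in-10 subset list could be produced from the LJSpeech metadata once the archive downloaded below has been unpacked. The every-tenth-clip rule is an assumption; the actual wavs_need.txt may have been selected differently.

# +
# Sketch: write every 10th clip id from metadata.csv into wavs_need.txt
with open('./ljs/LJSpeech-1.1/metadata.csv') as meta, open('wavs_need.txt', 'w') as out:
    for i, line in enumerate(meta):
        if i % 10 == 0:
            out.write(line.split('|')[0] + '\n')
# -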
# !echo "download and unpack ljs dataset" # !mkdir -p ./ljs; cd ./ljs; wget -q --show-progress https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2 # !cd ./ljs; tar xjf LJSpeech-1.1.tar.bz2 # We need sox to convert audio to 16kHz (the format alignment works with) # !sudo apt install -q -y sox # !sudo apt install -q -y libopenblas-dev # !rm -rf ./wav # !mkdir ./wav # !cat wavs_need.txt | xargs -I F -P 30 sox --norm=-3 ./ljs/LJSpeech-1.1/wavs/F.wav -r 16k -c 1 ./wav/F.wav # !echo "Number of clips" $(ls ./wav/ | wc -l) # It should be 1273 clips here with open('wavs_need.txt') as ifile: wavs_need = {l.strip() for l in ifile} # + # metadata to transcripts lines = open('./ljs/LJSpeech-1.1/metadata.csv', 'r').readlines() for line in lines: fn, _, transcript = line.strip().split('|') if fn in wavs_need: with open('./wav/{}.txt'.format(fn), 'w') as ofile: ofile.write(transcript) # !echo "Number of transcripts" $(ls ./wav/*.txt | wc -l) # - # Let's download the artifacts for alignment. # # For phoneme ASR we need acoustic model and lexicon (mapping word=>phonemes) made by some other g2p # !wget -q --show-progress https://github.com/MontrealCorpusTools/mfa-models/raw/main/acoustic/english.zip # !wget -q --show-progress http://www.openslr.org/resources/11/librispeech-lexicon.txt # Finally, we come to the alignment. # # It will take about 15-17 min for our subset to be aligned # !source {INSTALL_DIR}/miniconda3/bin/activate aligner; \ # mfa align -t ./temp -c -j 4 ./wav librispeech-lexicon.txt ./english.zip ./ljs_aligned # !echo "See output files at ./ljs_aligned" # !ls ljs_aligned/ | wc -l # + import IPython.display from IPython.core.display import display def display_audio(data): display(IPython.display.Audio(data, rate=22050)) # - # to install textgrids # !pip install praat-textgrids import numpy as np from scipy.io import wavfile import textgrids import glob # Alignment outputs are textgrids - and xml-like structure with layers for phonemes and words (with timings) alignment = {f.split("/")[-1].split(".")[0][4:]: textgrids.TextGrid(f) for f in glob.iglob('ljs_aligned/*')} wavs = {f.split("/")[-1].split(".")[0]: wavfile.read(f)[1] for f in glob.iglob('./ljs/LJSpeech-1.1/wavs/*.wav')} allphones = { ph.text for grid in alignment.values() for ph in grid["phones"] } # let's exclude special symbols: silence, spoken noise, non-spoken noise allphones = {ph for ph in allphones if ph and ph == ph.upper()} assert len(allphones) == 69 # Here your part begins: # You need to create `diphone index` - mapping structure that will allow you to find original utterance and position in it by diphone text id. 
# # E.g.: # `index[(PH1, PH2)] -> (utt_id, phoneme_index)` diphone_index = dict() # !!!!!!!!!!!!!!!!!!!!!!# for utt, grid in alignment.items(): phones = [phone.text for phone in grid['phones']] for idx, (phone_i, phone_j) in enumerate(zip(phones[:-1], phones[1:])): diphone_index[phone_i, phone_j] = utt, idx # !!!!!!!!!!!!!!!!!!!!!!# # check yourself for a, b in [('AH0', 'P'), ('P', 'AH0'), ('AH0', 'L')]: k, i = diphone_index[(a,b)] assert a == alignment[k]['phones'][i].text assert b == alignment[k]['phones'][i+1].text # In concat TTS you sometimes don't have all the diphones presented # If it's not very frequent ones it's not a trouble # But we need to provide some mechanism to replace missing units with open("fallback_rules.txt") as ifile: lines = [l.strip().split() for l in ifile] fallback_rules = {l[0]: l[1:] for l in lines} # In the dict `fallback_rules` lie possible replacement for all the phones # (different replacements in order of similarity). # # E.g. `a stressed` -> `a unstressed` | `o stressed` | `o unstressed` # Here is also some work for you: # You need to create diphone fallbacks from the phoneme ones: # # `diphone_fallbacks[(Ph1, Ph2)] -> (some_other_pair_of_phones_presented_in_dataset)` # # and also, if `diphone_fallbacks[(a, b)] = c, d` then: # * c = a or # * c $\in$ fallback_rules[a] and/or # * d = b or # * d $\in$ fallback_rules[d] # diphone_fallbacks = dict() # !!!!!!!!!!!!!!!!!!!!!!# diphone_fallbacks[('Z', 'Z')] = ('ZH', 'ZH') diphone_fallbacks[('Z', 'AY1')] = ('Z', 'EY1') diphone_fallbacks[('Z', 'EY0')] = ('ZH', 'EH1') # !!!!!!!!!!!!!!!!!!!!!!# # check yourself for a, b in [('Z', 'Z'), ('Z', 'AY1'), ('Z', 'EY0')]: assert (a, b) in diphone_fallbacks r1, r2 = diphone_fallbacks[(a, b)] assert r1 in fallback_rules[a] or r1 == a assert r2 in fallback_rules[b] or r2 == b assert r1 != a or r2 != b # some helping constants SAMPLE_RATE = 22050 WAV_TYPE = np.int16 # Little DSP related to concatenative synthesis: # # to prevent disturbing "clicking" sound (difference in volume) when concatenating fragments from different utterances we need to perform `cross-fade` - smoothing at concatenation point # # If we concatenate $wav_1$ and $wav_2$ at some points $M_1$ and $M_2$ corrispondively we perform crossfade with overlap of $2 V$: # # $$\forall i \in [-V; V]:~output[M_1+i] = (1-\alpha) \cdot wav_1[M_1+i] + \alpha \cdot wav_2[M_2+i]$$ # Where $$\alpha = \frac{i+V}{2 V}$$ # # And for $i < -V:~ output[M_1+i] = wav_1[M_1+i]$ # # for $i > V:~output[M_1+i] = wav_2[M_2+i]$ # # But it is not ok if the overlapping comes outside the concatenation phoneme. 
# # So, if junction phoneme starts and ends at positions $B_1$ and $E_1$ (the first wav) and $B_2$ and $E_2$ (the second one) # the extact formula for overlapping zone will be: # $$\forall i \in [-L; R]:~output[M_1+i] = (1-\alpha) \cdot wav_1[M_1+i] + \alpha \cdot wav_2[M_2+i]$$ # Where: # $$\alpha = \frac{i+L}{L+R},~L = min(M_1-B_1, M_2 - B_2, V), ~R = min(E_1-M_1, E_2-M_2, V)$$ # def crossfade(lcenter, ldata, rcenter, rdata, halfoverlap): """ ldata, rdata - 1d numpy array only with junction phoneme (so, B1 = 0, E1 = ldata.shape[0]) lcenter = M1 rcenter = M2 it is better to return the concatenated version of the junction phoneme (as numpy data) """ output = np.zeros(lcenter + rcenter) # !!!!!!!!!!!!!!!!!!!!!!# lmask = np.zeros(len(ldata)) lmask[:lcenter - halfoverlap] = 1.0 lmask[lcenter - halfoverlap: lcenter + halfoverlap] = np.linspace(1, 0, 2 * halfoverlap) rmask = np.zeros(len(rdata)) rmask[-rcenter + halfoverlap:] = 1.0 rmask[-rcenter - halfoverlap: -rcenter + halfoverlap] = np.linspace(0, 1, 2 * halfoverlap) output[:lcenter + halfoverlap] += (ldata * lmask)[:lcenter + halfoverlap] output[-rcenter - halfoverlap:] += (rdata * rmask)[-rcenter - halfoverlap:] return output # !!!!!!!!!!!!!!!!!!!!!!# def get_data(k, i): phoneme = alignment[k]['phones'][i] left = phoneme.xmin right = phoneme.xmax center = (left+right) * .5 left = int(left * SAMPLE_RATE) center = int(center * SAMPLE_RATE) right = int(right * SAMPLE_RATE) return center - left, wavs[k][left:right] # check yourself cf = crossfade(*get_data('LJ050-0241', 3), *get_data('LJ038-0067', 56), 300) assert np.abs(cf.shape[0] - 1764) < 10 assert np.abs(cf.mean() - 11) < 0.1 len(get_data('LJ050-0241', 3)[1]), get_data('LJ050-0241', 3)[0] len(get_data('LJ038-0067', 56)[1]), get_data('LJ038-0067', 56)[0] display_audio(get_data('LJ050-0241', 3)[1]) display_audio(get_data('LJ038-0067', 56)[1]) display_audio(cf) # + HALF_OVERLAP_CROSSFADE = 300 def synthesize(phonemes): diphones = [] for ph1, ph2 in zip(phonemes[:-1], phonemes[1:]): diphone = (ph1, ph2) if diphone in diphone_index: k, i = diphone_index[diphone] else: k, i = diphone_index[diphone_fallbacks[diphone]] diphones.append((get_data(k, i), get_data(k, i+1))) output = [] # Here you need to construct the result utterance with crossfades # NB: border (the first and the last phonemes does not require any crossfade and could be just copied) # !!!!!!!!!!!!!!!!!!!!!!# ctx_lcenter = 0 for idx, diphone in enumerate(diphones): (lcenter, ldata), (rcenter, rdata) = diphone if idx == 0: output += list(ldata) ctx_lcenter = lcenter continue elif idx == len(diphones) - 1: output += list(rdata) continue output = list(crossfade(ctx_lcenter, output, lcenter, ldata, HALF_OVERLAP_CROSSFADE)) ctx_lcenter = len(output) - HALF_OVERLAP_CROSSFADE output = list(crossfade(ctx_lcenter, output, rcenter, rdata, HALF_OVERLAP_CROSSFADE)) ctx_lcenter = len(output) - 2 * HALF_OVERLAP_CROSSFADE return np.array(output, dtype=WAV_TYPE) # !!!!!!!!!!!!!!!!!!!!!!# # need to return wav as 1d numpy array of type WAV_TYPE # - # Check youself: # # If everything was correct, you should hear 'hello world' display_audio(synthesize(['HH', 'AH0', 'L', 'OW1', 'W', 'ER1', 'L', 'D'])) # load additional test texts with open("test_phones.txt") as ifile: test_phones = [] for l in ifile: test_phones.append(l.strip().split()) # Here should a little part of the GLADOS song # + output = [] pause = np.zeros([int(0.1 * SAMPLE_RATE)], dtype=WAV_TYPE) for test in test_phones: output.append(synthesize(test)) output.append(pause) 
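# The ``output`` list above alternates synthesized phrases with short pauses and therefore ends with a pause; the trailing pause is dropped via ``output[:-1]`` before concatenation below.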
display_audio(np.concatenate(output[:-1])) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chart-Book-Neutron-Density-Porosity-using-KNN # # The objective of this project is to calculate a Neutron vs. Bulk Density Cross-Plot Total porosity using, the Schlumberger CNL or TNPH Charts as shown below. It is important to use the proper Neutron log associated with the appropriate chart for either fresh or saline fluid densities as specified. The Neutron logs should also be on a limestone matrix with these charts. # # ![CNL_Image](CNL.png) # # ![TNPH_Image](TNPH.png) # # This program uses as training data a matrix of Neutron Porosity (V/V) vs. Bulk Density (G/CC) with known porosities (see below). In this example we are using test data as our log data to observe how well the program is actually working. We have very good agreement between our log analysis chart book training values and the values estimated from this program. # # The porosity estimations are made using KNN. Before we begin any distance calculations, we first normalize the Neutron porosity and Bulk Density curves and then use a KNN of 3 to estimate our Cross-Plot porosity values from our test data set. # # ![SMatrixCNL_Image](Matrix_CNL2.png) # # # ![Matrix_TNPH.png](attachment:Matrix_TNPH.png) # # This program has been tested for the charts available showing good agreement with the training data for our calculated porosities. Further testing is recommended before using this program to better understand the uncertainties in the estimations and validate using a KNN of 3. Please provide feedback if you have any issues. # # We are using CNL_1pt1_testdata.xlsx or TNPH_1pt19_testdata.xlsx to be used as log data for testing purposes, but you could use your own log data in lieu of this file to calculate Neutron-Density Total Cross-Plot porosities for your analysis. # # We have a set of 4 Schlumberger Chart Book data files representing 4 of the Schlumberger Neutron-Density charts; 2 fluid densities for CNL and 2 for TNPH. Other vendor charts will also be included as needed in the future. # # Code: # # ### Load dependencies # -*- coding: utf-8 -*- """ Spyder Editor This is a script file. 
""" import math import matplotlib.pyplot as plt import pandas as pd import numpy as np # ## Read In Digitized Chartbook Data using the appropriate chart: # + """ # ============================================================================= # # ======================================================================= # # # # # # # # Read in Digitized Chartbook data stored in Excel spreadsheet # # # # # # # # ======================================================================= # # # ============================================================================= """ #select the proper Neutron-Denisity Chartbook file ''' Schlumberger CNL Neutron Log at different Fluid Densities ''' #file = r'./data/CNL_1pt0.xlsx' #file = r'./data/CNL_1pt1.xlsx' ''' Schlumberger TNPH Neutron Log at different Fluid Densities ''' #file = r'./data/TNPH_1pt0.xlsx' file = r'./data/TNPH_1pt19.xlsx' df_chart = pd.read_excel(file,index_col=False) df_chart.head() CNL_chart = df_chart['Neutron'] RHOB_chart = df_chart['RHOB'] Rho_Matrix_chart = df_chart['Rho_Matrix'] Porosity_chart = df_chart['Porosity'] #Fluid Density FD = 1.1 # - # ### Chart Book Data with Porosity on Color axis: plt.figure(figsize=(11,11)) #plt.scatter(CNL_chart, RHOB_chart, c=Porosity_chart,cmap="RdYlGn",) #plt.scatter(CNL_chart, RHOB_chart, s=50, c=Porosity_chart) #plt.scatter(CNL_chart, RHOB_chart, c=Porosity_chart,cmap="RdYlGn") plt.scatter(CNL_chart, RHOB_chart, s=50, c = Porosity_chart, cmap = "rainbow") plt.xlim(-0.1, 0.55) plt.gca().invert_yaxis() plt.title("Chart Book Data Colored by Porosity") plt.ylabel('RHOB') plt.xlabel('Neutron Porosity') plt.grid(True) plt.show() # ### Chart Book Data with Rho Matrix on Color axis: plt.figure(figsize=(11,11)) #plt.scatter(CNL_chart, RHOB_chart, c=Porosity_chart,cmap="RdYlGn",) #plt.scatter(CNL_chart, RHOB_chart, s=50, c=Porosity_chart) #plt.scatter(CNL_chart, RHOB_chart, c=Porosity_chart,cmap="RdYlGn") plt.scatter(CNL_chart, RHOB_chart, s=50, c = Rho_Matrix_chart, cmap = "rainbow") plt.xlim(-0.1, 0.55) plt.gca().invert_yaxis() plt.title("Chart Book Data Colored by Rho Matrix") plt.ylabel('RHOB') plt.xlabel('Neutron Porosity') plt.grid(True) plt.show() # ## Read In log data. 
This can be test data that is used to QC the results: # # You will need to ensure that the df_log['xxx'] matches what is in the log file # + """ # ============================================================================= # # =========================================================================== # # # # # # Read in log data file as a Excel Spreadsheet # # # # # =========================================================================== # ============================================================================= """ file = r'./data/TNPH_1pt19_testdata.xlsx' #file = r'./data/CNL_1pt1_testdata.xlsx' #file = r'./data/log_data_co3.xlsx' df_log = pd.read_excel(file,index_col=False) df_log.head() Dep = df_log['DEPTH'] ''' Name of Neutron Log in log file ''' CNL = df_log['TNPH'] #CNL = df_log['CNL'] #CNL = df_log['NPHI'] RHOB = df_log['RHOB'] RHOMAA = df_log['Rho_Matrix'] Porosity = df_log['Porosity'] # - # ## KNN to estimate Cross Plot porosity from the appropriate chart: # + """ # ============================================================================= # # =========================================================================== # # #-------------------------------------------------------------------------- # ## # ## This is the beginnin of KNN estimating ND xplt Porosity # ## # # #-------------------------------------------------------------------------- # # =========================================================================== # ============================================================================= """ deptharray = [] porarray = []; #make list of 0 length RHOMAA_array = [] Porosity_array = [] rhoarray = [] #log Data for k in range(0,len(df_log) ,1): cnl = (CNL[k]-(-0.05))/(0.6-(-0.05)) rhob = (RHOB[k]-1.9)/(3-1.9) dist_inv = [] dist_cnl = [] dist_rhob = [] inv_dist_array = [] Por_weight = [] CNL_norm = [] RHOB_norm = [] dist_inv_total = 0 Por_total = 0 #this is the chartbook_reference_data being used for i in range(0,len(df_chart),1): CNL_norm.append((CNL_chart[i] - (-0.05)) / (0.6 - (-0.05))) RHOB_norm.append((RHOB_chart[i] - 1.9) / (3.0 - 1.9)) #Euclidian Distance dist_cnl.append((abs(cnl - CNL_norm[i]))) dist_rhob.append( abs(rhob - RHOB_norm[i])) if math.sqrt(dist_cnl[i]**2 + dist_rhob[i]**2) > 0: dist_inv.append( 1 / math.sqrt( dist_cnl[i]**2 + dist_rhob[i]**2) ) else: dist_inv.append( 1 / math.sqrt( 0.0001 + dist_cnl[i]**2 + dist_rhob[i]**2) ) #calculalte weights Por_weight.append(dist_inv[i] * Porosity_chart[i]) inv_dist_array.append(dist_inv[i]); # add items # ============================================================================= ### KNN Array # # =========================================================================== # # #-------------------------------------------------------------------------- distance_knn_array = [dist_inv, Por_weight] # distance_knn_array = [Permeability, Porosity, G1, PD1, BV1, G2, PD2, BV2] # # #-------------------------------------------------------------------------- # # =========================================================================== # ============================================================================= xnorm=np.array(CNL_norm) ynorm=np.array(RHOB_norm) #knn_array = np.transpose array knn_array = np.transpose(distance_knn_array) #print(knn_array) #Sort array from large to low by column 0 which is dist_inv #xknn=np.array(knn_array) #matsor x[x[:,column].argsort()[::-1]] and -1 us reverse order mat_sort = knn_array[knn_array[:,0].argsort()[::-1]] #firt column reverse sort (-1) #mat_sort = 
x[x[:,1].argsort()[::-1]] #mat_sort = x[x[:,2].argsort()[::-1]] #------------------------------------------------------------------------------ # Number of nearest Neighbors #------------------------------------------------------------------------------ n_neighbors = 3 #------------------------------------------------------------------------------ dist_inv_total_knn = 0 por_total_knn = 0 #kNN Estimates for first 3 rows #dist_inv_total = mat_sort[0][0] + mat_sort[1][0] + mat_sort[2][0] for i in range(0,n_neighbors,1): dist_inv_total_knn = dist_inv_total_knn + mat_sort[i][0] por_total_knn = por_total_knn + mat_sort[i][1] #back to k values and calculate estimations now por_est_knn = por_total_knn / dist_inv_total_knn # print() # print(Fore.GREEN +'Estimated Porosity from KNN =',n_neighbors,' on normlalized log data') # print(Fore.GREEN + ' Por =',por_est_knn, ) # phixnd_chartbook = por_est_knn rhomatrix = (RHOB[k]-phixnd_chartbook*FD)/(1-phixnd_chartbook) #------------------------------------------------------------------------------ # Write Data to arrays #------------------------------------------------------------------------------ deptharray.append(Dep[k]); #add items porarray.append(phixnd_chartbook); #add items rhoarray.append(rhomatrix); #add items RHOMAA_array.append(RHOMAA[k]); Porosity_array.append(Porosity[k]); # - # ## Make a few Depth Plots for QC purposes: # + x=np.array(porarray) y=np.array(deptharray) xx=np.array(Porosity_array) xxx=rhoarray xxxx = RHOMAA_array #print(Porosity_array) print(len(x),len(xx),len(y)) plt.figure(figsize=(8,11)) plt.plot(x, y,'--r',lw=3, label= 'Calculated Chartbook Porosity fron kNN') plt.plot(xx, y,'-b', label= 'Porosity Reference') plt.xlim(0.5, 0) plt.legend() plt.gca().invert_yaxis() plt.title("Chartbook Porosity Estimation, Normalized KNN") plt.ylabel('Depth (feet)') plt.xlabel('Charbook Estimated Porosity') plt.grid(True) plt.show() plt.figure(figsize=(8,11)) plt.plot(xxx, y,'--r',lw=3, label= 'Calculated Chartbook Rho Matrix fron kNN') plt.plot(xxxx, y,'-b', label= 'Rho Matrix Reference') plt.xlim(2.5, 3) plt.gca().invert_yaxis() plt.legend() plt.title("RHOMAA Estimation, Normalized KNN") plt.ylabel('Depth (feet)') plt.xlabel('Charbook Estimated Rho Matrix') plt.grid(True) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib notebook from colicoords import load, CellPlot, CellListPlot, IterCellPlot, iter_subplots, save import matplotlib.pyplot as plt import numpy as np import os from collections import namedtuple from tqdm.auto import tqdm from scipy.signal import medfilt from skimage.feature import peak_local_max c41_02 = load('c41_02_binary_opt.hdf5') c41_03 = load('c41_03_binary_opt.hdf5') # + clp_c41_02 = CellListPlot(c41_02) clp_c41_03 = CellListPlot(c41_03) fig, ax = plt.subplots() clp_c41_02.hist_intensity(ax=ax, data_name='g500', linewidth=0, label='c41_2') clp_c41_03.hist_intensity(ax=ax, data_name='g500', linewidth=0, label='c41_3', alpha=0.75) plt.savefig('intensity comparison.png') plt.legend() # - storm_dtype = [('x', float), ('y', float), ('intensity', float), ('frame', int)] def add_peakfind(cell, med=9, thd=7500, min_dst=5): img = cell.data.data_dict['g500'] mf = medfilt(img, 9) img_bg = img - mf cell.data.add_data(img_bg, 'fluorescence', 'flu_mf') peaks = peak_local_max(img_bg, min_distance=min_dst, threshold_abs=thd) y, x = peaks.T 
new_storm = np.empty(len(x), dtype=storm_dtype) new_storm['x'] = x new_storm['y'] = y new_storm['intensity'] = np.ones_like(x) new_storm['frame'] = np.ones_like(x) cell.data.add_data(new_storm, 'storm', 'storm_thd_{}'.format(thd)) len(c41_02), len(c41_03) # + c41_02_new = c41_02.copy() [add_peakfind(c) for c in tqdm(c41_02_new)] c41_03_new = c41_03.copy() [add_peakfind(c) for c in tqdm(c41_03_new)] '' # + icp = IterCellPlot(c41_02_new) fig, axes = iter_subplots(2, 1, figsize=(8,6)) icp.imshow('g500', ax=axes[0]) icp.plot_storm(data_name='storm_thd_7500', ax=axes[0]) icp.imshow('flu_mf', ax=axes[1]) icp.plot_storm(data_name='storm_thd_7500', ax=axes[1]) plt.tight_layout() fig.display() # + icp = IterCellPlot(c41_03_new) fig, axes = iter_subplots(2, 1, figsize=(8,6)) icp.imshow('g500', ax=axes[0]) icp.plot_storm(data_name='storm_thd_7500', ax=axes[0]) icp.imshow('flu_mf', ax=axes[1]) icp.plot_storm(data_name='storm_thd_7500', ax=axes[1]) plt.tight_layout() fig.display() # + labels = ['c41_02', 'c41_03'] fig, ax = plt.subplots() nums = [] for cells in [c41_02_new, c41_03_new]: num = [len(c.data.data_dict['storm_thd_7500']) for c in cells] nums.append(num) ax.hist(nums, bins = np.arange(15), label=labels, density=True) ax.legend() #fig.text(0.04, 0.5, 'Number of spots', va='center', rotation='vertical') plt.ylabel('Fraction of cells') plt.xlabel('Number of spots') plt.savefig('spots per cell_c41 epec escc.png') # - save('c41_02_with_spots.hdf5', c41_02_new) save('c41_03_with_spots.hdf5', c41_03_new) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import requests # ## Testo # + def telegram_bot_sendtext(bot_message): bot_token = '' bot_chatID = '166942243' send_text = 'https://api.telegram.org/bot' + bot_token + '/sendMessage?chat_id=' + bot_chatID + '&parse_mode=Markdown&text=' + bot_message response = requests.get(send_text) return response.json() test = telegram_bot_sendtext("Paul") print(test) # - # ## Immagini def sendImage(): url = "https://api.telegram.org/bot937044171:AAGGDmJcUTkkvDaOwcKrnHUyj9XUND7d52A/sendPhoto"; files = {'photo': open('image/png', 'rb')} data = {'chat_id' : "166942243"} r= requests.post(url, files=files, data=data) print(r.status_code, r.reason, r.content) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %%capture # %cd .. from pprint import pprint import matplotlib.pyplot as plt # # Importing from api import Benchmark bench_dir = "cached/six_datasets.json" bench = Benchmark(bench_dir, cache=False) # # API exploration # ### Queryable tags # # Tags starting with "Train/" indicate metrics which are logged every epoch. 
queriable_tags = bench.get_queriable_tags() pprint(queriable_tags) # ### Datasets # + dataset_names = bench.get_dataset_names() openml_task_ids = bench.get_openml_task_ids() print(dataset_names) print(openml_task_ids) # - # ### Querying # + # Get an example for a loss log example_loss = bench.query(dataset_name="higgs", tag="Train/loss", config_id=0) # Get the log of the accuracy for the run with the best peak accuracy example_best_acc = bench.query_best("higgs", "Train/val_accuracy", "Train/val_accuracy", 0) # Get the configuration of the best performing configuration example_best_config = bench.query_best("higgs", "config", "Train/val_accuracy", 1) print("Example loss log:\n", example_loss) print("Best validation accuracy log:\n", example_best_acc) print("Best config with regard to validation accuracy:\n", example_best_config) # - # ### Plotting # # The _plot\__by\__name_ method allows you to quickly look at some logs for a number of datasets. help(bench.plot_by_name) bench.plot_by_name(dataset_names=["higgs", "jasmine"], x_col="epoch", y_col="Train/val_accuracy", n_configs=10, xscale='linear', yscale='linear', show_best=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Recommender Systems 2017/18 # # ### Practice 8 - Quickstart LightFM # # ##### The following notebook is available at [LightFM Github repo](https://github.com/lyst/lightfm) # In this example, we'll build an implicit feedback recommender using the Movielens 100k dataset (http://grouplens.org/datasets/movielens/100k/). # # The code behind this example is available as a [Jupyter notebook](https://github.com/lyst/lightfm/tree/master/examples/quickstart/quickstart.ipynb) # # LightFM includes functions for getting and processing this dataset, so obtaining it is quite easy. # + import numpy as np from lightfm.datasets import fetch_movielens data = fetch_movielens(min_rating=5.0) # - data # This downloads the dataset and automatically pre-processes it into sparse matrices suitable for further calculation. In particular, it prepares the sparse user-item matrices, containing positive entries where a user interacted with a product, and zeros otherwise. # # We have two such matrices, a training and a testing set. Both have around 1000 users and 1700 items. We'll train the model on the train matrix but test it on the test matrix. print(repr(data['train'])) print(repr(data['test'])) # We need to import the model class to fit the model: from lightfm import LightFM # We're going to use the WARP (Weighted Approximate-Rank Pairwise) model. WARP is an implicit feedback model: all interactions in the training matrix are treated as positive signals, and products that users did not interact with they implicitly do not like. The goal of the model is to score these implicit positives highly while assigining low scores to implicit negatives. # # Model training is accomplished via SGD (stochastic gradient descent). This means that for every pass through the data --- an epoch --- the model learns to fit the data more and more closely. We'll run it for 10 epochs in this example. We can also run it on multiple cores, so we'll set that to 2. (The dataset in this example is too small for that to make a difference, but it will matter on bigger datasets.) model = LightFM(loss='warp') # %time model.fit(data['train'], epochs=30, num_threads=2) # Done! 
We should now evaluate the model to see how well it's doing. We're most interested in how good the ranking produced by the model is. Precision@k is one suitable metric, expressing the percentage of top k items in the ranking the user has actually interacted with. `lightfm` implements a number of metrics in the `evaluation` module. from lightfm.evaluation import precision_at_k # We'll measure precision in both the train and the test set. print("Train precision: %.2f" % precision_at_k(model, data['train'], k=5).mean()) print("Test precision: %.2f" % precision_at_k(model, data['test'], k=5).mean()) # Unsurprisingly, the model fits the train set better than the test set. # # For an alternative way of judging the model, we can sample a couple of users and get their recommendations. To make predictions for given user, we pass the id of that user and the ids of all products we want predictions for into the `predict` method. # + def sample_recommendation(model, data, user_ids): n_users, n_items = data['train'].shape for user_id in user_ids: known_positives = data['item_labels'][data['train'].tocsr()[user_id].indices] scores = model.predict(user_id, np.arange(n_items)) top_items = data['item_labels'][np.argsort(-scores)] print("User %s" % user_id) print(" Known positives:") for x in known_positives[:3]: print(" %s" % x) print(" Recommended:") for x in top_items[:3]: print(" %s" % x) sample_recommendation(model, data, [3, 25, 450]) # - # #### To sum up # + from lightfm import LightFM from lightfm.datasets import fetch_movielens from lightfm.evaluation import precision_at_k # Load the MovieLens 100k dataset. Only five # star ratings are treated as positive. data = fetch_movielens(min_rating=5.0) # Instantiate and train the model model = LightFM(loss='warp') model.fit(data['train'], epochs=30, num_threads=2) # Evaluate the trained model train_precision = precision_at_k(model, data['train'], k=5).mean() test_precision = precision_at_k(model, data['test'], k=5).mean() print("Train precision WARP: {:.2f}".format(train_precision)) print("Test precision WARP: {:.2f}".format(test_precision)) # + # Instantiate and train the model model = LightFM(loss='bpr') model.fit(data['train'], epochs=30, num_threads=2) # Evaluate the trained model train_precision = precision_at_k(model, data['train'], k=5).mean() test_precision = precision_at_k(model, data['test'], k=5).mean() print("Train precision BPR: {:.2f}".format(train_precision)) print("Test precision BPR: {:.2f}".format(test_precision)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="tgRudCZekg2g" colab_type="text" # ## Import libraries # + id="Hd_bVXCKkX0k" colab_type="code" colab={} from pandas import read_csv from matplotlib import pyplot as plt # %tensorflow_version 2.x from tensorflow.keras import Sequential, layers, optimizers from sklearn.model_selection import train_test_split from sklearn.neural_network import MLPRegressor from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error # + [markdown] id="5c5XOOCzksXb" colab_type="text" # ## Muat data # + id="eDIrR-9Mkmo6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 424} outputId="75a3d81e-a98a-4d6a-ad68-9e3ad3928abe" data = read_csv("https://pastebin.com/raw/jp0aG9tv", sep=";") data # + [markdown] id="WkWTX-QFkvgn" colab_type="text" # Kita pake dataset dari sini ya :
    https://archive.ics.uci.edu/ml/datasets/QSAR+aquatic+toxicity
    I put it on Pastebin on purpose so it is easier to load. # + [markdown] id="gx73k9ZmlLYs" colab_type="text" # ## Split the data # + id="61xs0LaUkuS6" colab_type="code" colab={} x_train, x_test, y_train, y_test = train_test_split(data.drop(axis=0, columns='quantitive response'), data['quantitive response'], test_size=0.3) # + [markdown] id="yq33Um5xlQ14" colab_type="text" # In this notebook I do not cover exploratory data analysis,
    since the goal here is just to show how to build a simple NN with TensorFlow. # + [markdown] id="ifx1rHaElrDr" colab_type="text" # ## Model initialization # + [markdown] id="_D6jioSnmCMp" colab_type="text" # ### Neural network built with TensorFlow # + id="WJUpFW1dmKf8" colab_type="code" colab={} def Simple_NN(): model = Sequential([ layers.Dense(8, activation='relu', input_shape=[8]), # eight units, because only eight feature columns are used for training layers.Dense(8, activation='relu'), layers.Dense(1) ]) optimizer = optimizers.RMSprop(0.001) model.compile(loss='mse', optimizer=optimizer, metrics=['mae', 'mse']) return model # + [markdown] id="BEMgQxrdmMoi" colab_type="text" # It consists of:
    8 nodes in the input layer
    8 nodes in hidden layers 1 and 2, each using the ReLU activation function
    And 1 node in the output layer, since this is a regression case study. # + id="RDXupSD6lsC5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 469} outputId="c322c846-bd11-4da8-b43e-ac2d2932c9c9" lnr = LinearRegression() mlp = MLPRegressor() nn_tf = Simple_NN() print(lnr, "\n") print(mlp, "\n") print(nn_tf.summary()) # + [markdown] id="yucLurRFnahX" colab_type="text" # ## Train the models # + id="Ei8Ac1QLlPt6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="3d17a669-bd2a-4eae-a278-ddd2d01eef49" lnr.fit(x_train, y_train) mlp.fit(x_train, y_train) nn_tf.fit(x_train, y_train, epochs=200, validation_split = 0.2, verbose=1) # + [markdown] id="S6ucTvpwn3GT" colab_type="text" # ## Test & evaluate the models # + id="RQpzEcJQnexP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="4fe2ecbb-523e-4bc4-bc8e-2d5db3ed23cc" hasil_lnr = lnr.predict(x_test) hasil_mlp = mlp.predict(x_test) hasil_nn_tf = nn_tf.predict(x_test) print("Mean Squared Error Linear Regression : " ,mean_squared_error(y_test, hasil_lnr)) print("Mean Squared Error Multilayer Perceptron : " ,mean_squared_error(y_test, hasil_mlp)) print("Mean Squared Error NN TensorFlow : " ,mean_squared_error(y_test, hasil_nn_tf)) # + [markdown] id="bLEgYQ1vomKO" colab_type="text" # The smaller a model's MSE, the better that model is at prediction # + [markdown] id="vKNauCItoM7B" colab_type="text" # ## Line charts # + id="YNJYbOagoDqf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="631f8d00-d934-4c2b-ce00-9774e50e12dc" plt.plot(list(hasil_lnr), label='Linear Regression') plt.plot(list(y_test), label='Actual values') plt.legend(loc="upper right") # + id="NkPHaxFToVIU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="a9b92243-e451-47e2-8d9f-f99848c370cf" plt.plot(list(hasil_mlp), label='Multilayer Perceptron') plt.plot(list(y_test), label='Actual values') plt.legend(loc="upper right") # + id="AxdJl2mUoXg9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="3c4bb0d1-bdc6-4203-95f6-e6bf7d976261" plt.plot(list(hasil_nn_tf), label='Tensorflow') plt.plot(list(y_test), label='Actual values') plt.legend(loc="upper right") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Employees performance and delivered value # ![Employee performance](performance.jpg) # ## 1. Description of the topic # The aim of this project is to show the variables that can affect employee performance in an IT company and the value delivered. # # The data sets will be simulated to reflect real-world behavior as closely as possible, and the variables will be studied to determine which ones could have the greatest impact and to see tendencies when linking them. # # Based on experience, on personal research in the workplace and on investigation of the topic, some of the factors that could influence performance are: time/hours in the workplace, initial level of studies, employee motivation, salary, training/conferences the employee attends, length of employment, experience in the position, number of meetings the employee has in a day, number of interruptions per day, emails received per day... 
# # In terms of the performance, the concepts related are: can be measured based on different approaches: ROI (Return of Investment), delivered value, velocity (measured in story points), incidents in production, defects per story point, customer satisfaction, time to market or completion of individual goals. # # This project will focus the simulation on the following 5 items: # # - Time/hours in the workplace # - The initial level of studies # - Employee Motivation # - Salary # # The outcome would be presented as the Revenue or value that the company gets. # **1.1 Investigation of existing research** # # - _Linking Belgian employee performancemanagement system characteristicswith performance management systemeffectiveness: exploring the mediatingrole of fairness_ # # Summary: The paper studies the effectiveness of Performance Management System, the variables that influence it and the outcomes of it. # # The main conclusion is that the way of implementing the system has a big influenece in the outcome. # # https://www.researchgate.net/publication/46469879_Linking_Belgian_employee_performance_management_system_characteristics_with_performance_management_system_effectiveness_Exploring_the_mediating_role_of_fairness # # # # - _Data: Hours worked in 2017_ # https://data.oecd.org/emp/hours-worked.htm # # Summary: Data per country: mean hours worked in 2017 # # # - _The Association of the 24 Hour Distribution of Time Spent in Physical Activity, Work, and Sleep with Emotional Exhaustion._ # # Summary: Interesting approach on how to pressent the results of the study with Ternary plot. # # # - _How do you measure value?_ # # https://www.thoughtworks.com/insights/blog/how-do-you-measure-value # # Summary: Key factors and graphical representations on how to measure value in a company. # # # - _Infographic: You waste a lot of time at work_ # # https://www.atlassian.com/time-wasting-at-work-infographic # # Summary: Factors that can reduce employee performance. # # # - _In an 8-Hour Day, the Average Worker Is Productive for This Many Hours_ # # https://thriveglobal.com/stories/in-an-8-hour-day-the-average-worker-is-productive-for-this-many-hours/ # # Summary: Article about how more hours at work could not translate into more productivity. # # # - _Wages and Employees Performance: The Quality of Work Life as Moderator._International Journal of Economics and Financial Issues, 2015, 5(Special Issue) 349-353. # # Summary: Research about how the effect of wages on employees performance are moderated by the quality of work life. It takes into consideration as well promotion policies, democratic supervision, employee involvement and safe working conditions, perception that they want to feel safe, satisfied and get a chance to be able to grow. # # It presents the descriptive statistics about the data in a table. # # # # - _Impact of Motivation on Employee Performances: A Case Study of Karmasangsthan Bank Limited, Bangladesh_ # # https://www.omicsonline.org/open-access/impact-of-motivation-on-employee-performances-a-case-study-of-karmasangsthan-bank-limited-bangladesh-.php?aid=86681 # # Summary: In this research salary is described as an extrinsic motivation along with Monetary Incentives and Compensation Package. # # ![Salary impact](salary.png) # # # The representation above is extremely valuable for this project as it can be seen that the relationship between salary and motivation will be similar to a exponential distribution. # # # # - _Data Distributions_ # # Summary: Brief of data distributions. 
# # http://www.ucd.ie/ecomodel/Resources/Sheet4_data_distributions_WebVersion.html # # # # - _Education you need to work at companies like Facebook, Google and Amazon_ # # Summary: Level of studies distribution in top tech companies. # # https://www.cnbc.com/2017/07/26/how-long-youll-need-to-go-to-school-to-work-at-top-tech-companies.html # # # # ## 2. Values in the data set # ### 2.1 Data Set simulation # # The dataset simulation will contain values of 1000 employees in a particular month. # # Each parameter will be simulated according to the distribution more suitable for each of them based on experience and existing papers and research. # # For this project, we will take a month of work from the employees with different salaries, motivation, level of studies, hours at work and we will compare these variables with the revenue that the company received from that month. The ROI will be a fixed value as the income received because of this feature is a fixed figure. # # For this simulation, the value obtained with this simulated feature will be 10.000.000. # # ### 2.1.1 Time/hours in the workplace # # From the initial research, we could differentiate between time spent at the workplace and productive working hours. From a recent research, just 2 hours and 53 minutes of the time spent at the workplace is productive work time. # # _Source: https://thriveglobal.com/stories/in-an-8-hour-day-the-average-worker-is-productive-for-this-many-hours/_ # # # In this project, we will simulate the number of hours spent at the workplace as it is usually the measurement taken into account by most of the traditionally in most of the companies. # # # **_Hours at work data set Simulation_** # %matplotlib inline import pandas as pd import seaborn as sns import numpy as np import matplotlib.pyplot as plt #Mean Hours worked in Ireland 2017, #Source: The Organisation for Economic Co-operation and Development (OECD) TotalhoursIreland = 1738 #calculation mean hours per day (weekends, bank holidays and Annual leave excluded) #working days in Ireland 232 meanhd= 1738/232 meanhd #Working hours data set simulation hours= pd.DataFrame ({"hours": np.random.normal (7.5, 0.5, 1000)}) hours #small test to check that the data set fits the expected np.mean (hours) # The test gives a positive result as the number is very close to the defined mean for the Numpy Random normal function # The working/hours dataset simulation has been generated using the normal distribution. We assume for this study that the company has the same working/hours policy for all employees. We assume that the policy is to work 7.5 h per day, as it's the mean of time spent working in Ireland in 2017. # # _Why a normal distribution?_ # # The distribution will follow a bell curve where most of the values will be close to 7.5. In the Normal or Gaussian distribution, the variables concentrate around a value and there are very few variables from 3 standard deviations. # # In this case, most of the employees would be working around 7,5 hours even though for this particular day that we are taking for the data some employees would stay a little bit longer at work or a little bit less than 7.5h. # # The mean for this random normal distribution is 7.5h, the standard deviation 0.5 as we assume that the employees follow the company policy and they don't change much the time they spend at work: 30 min is an acceptable deviation. 
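# As a quick check of the normal-distribution argument above, a minimal sketch reusing the `hours` frame simulated earlier: it verifies that the sample mean and standard deviation match the chosen parameters and that almost all values fall within three standard deviations of the 7.5 h mean.

# +
# fraction of simulated working hours within 7.5 +/- 3*0.5 hours
within_3_sd = hours["hours"].between(7.5 - 3 * 0.5, 7.5 + 3 * 0.5).mean()
print("sample mean:", hours["hours"].mean())            # should be close to 7.5
print("sample std:", hours["hours"].std())              # should be close to 0.5
print("fraction within 7.5 +/- 1.5 h:", within_3_sd)    # should be roughly 0.997
# -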
# # As we expect the ROI of a feature to be obtained after a month of employees work, we will calculate as well the total of hours per month per employee. #getting the hours per month hours_month= hours * 20 hours_month # ### 2.1.2 Level of Studies # From the research, in the table below we can see the percentage of employees of each level of studies in some of the biggest tech companies. # ![Level of studies](studies.png) # # _Source: https://www.cnbc.com/2017/07/26/how-long-youll-need-to-go-to-school-to-work-at-top-tech-companies.html_ # **_Level of studies data set simulation_** # + #the distribution of studies level is actually distributed according to percentages #based on data from Airbnb employees 14% undergrad, 53% bachellor, 21% masters, 12% phd #https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.choice.html#numpy.random.choice undergraduates = 0 bachellors = 1 masters = 2 phd = 3 studies= pd.DataFrame ({"Studies": np.random.choice(4, 1000, p=[0.14, 0.53, 0.21, 0.12])}) # - studies #small test on the data np.mean (studies) # The test gives a positive result as the number is around 1, the parameter with more probability to appear defined in the np.random.choice funtcion. # We assume that our company is an online company as Airbnb that highly relies on technology to operate their business. # # The np.random.choice function is used to generate a non-uniform random sample. # # Why np.random.choice? # # The function used to generate this random distribution is np.random.choice as we need to generate a random list that fits in the percentages above: based on data from Airbnb employees 14% undergrad, 53% bachelor, 21% masters, 12% Ph.D. In order to do that np.random.choice provides the ability to set the percentages of the different variables to occur. # # Thos figures are based on the investigation, of the real numbers of Airbnb that we consider acceptable for this project as an online company that offers services to final users. # # # ### 2.1.3 Motivation # # What is motivation? # # Motivation is the word derived from the word ’motive’ which means needs, desires, wants or drives within the individuals. It implies Internal and external factors that stimulate desire and energy in people to be continually interested and committed to a job, role or subject, or to make an effort to attain a goal. # # Motivation results from the interaction of both conscious and unconscious factors such as the intensity of desire or need, incentive or reward value of the goal, and expectations of the individual and of his or her peers. These factors are the reasons one has for behaving a certain way. 
# # In the work goal context, the psychological factors stimulating the people’s behavior can be - # # - Desire for money # - Success # - Recognition # - Job-satisfaction # - Teamwork, etc # # _Source: https://www.managementstudyguide.com/what_is_motivation.htm , http://www.businessdictionary.com/definition/motivation.html_ # # # **_Motivation simulation_** # + #the motivation of the employees in a company would be totally specific to that company #here the parameters for this company No_motivated = 1 Somehow_motivated = 2 Motivated = 3 Very_Motivated = 4 Extremely_Motivated = 5 #as the case will be specific to the company we will calculate a totally random distribution #Assuming that each value has the same probability of being choosen #Use randint to generate random integers from low (inclusive) to high (exclusive) #following the uniform distribution motivation= pd.DataFrame ({"Motivation": np.random.randint(1,6, size=1000)}) # - motivation #small test on the data np.mean (motivation) # The test is sucessful as, using np.random.randint we get random integers from the “discrete uniform” distribution all the numbers have approximately the same possibility of appearing. Te number in the middle of this set of motivation numbers is 3, so if the mean approaches this value the data set performs correctly. # # # _Why np.random.randint?_ # np.random.randint allows to define the interval, setting low and high, needed in this case to set the low to 1 and the high to 5. As the high is not included in np.random.randint, the high in this case is set to 6, so the numbers into consideration are 1 to 5. # # The uniform distribution has been chosen in this case and we don't have refence data we assume that any value has the same possibity of apprearing. # # We could as well have choosen a standard normal distribution (bell curve), centering most of the data to a value, assuming that most of the employees would feel that value of motivation, but as we don't have any reference of what this mean value would be, we assume that any employee can have the same posibility of being motivated or de-motivated. # ### 2.1.4 Salary # # Salary or wages has been seen as one of the major components to incentivize employees performance, but is it really the main factor? What would be its data distribution? # # Regarding the questions above, should the dataset simulate data on employees in similar positions but different companies to get the most of this project? 
# # https://www.econjournals.com/index.php/ijefi/article/download/1403/pdf # # # **_Salary Simulation_** # Taking as reference: https://www.indeed.com/cmp/Airbnb/salaries #Mean of the salaries per employee per year salaries = [157357, 138554, 172386, 164060, 176814, 133565, 138554, 139526, 116099, 122039, 196315] SumSalaries = sum(salaries) Meansalaries = SumSalaries/11 Meansalaries #standard deviation # analyse from the mean what is the deviation to the higher value #higher value (high standar deviation hsd) a = 196315 hsd= a - Meansalaries hsd #standard deviation # analyse from the mean what is the deviation to the lowest value #lowest value, lower standar deviation (losd) b = 116099 losd= Meansalaries -b losd #mean standard deviation of the highest and the lowest sumst=(hsd + losd) #most of the values will be within 3 standard deviations meanst= sumst/3 #result in average between the 2 values meanstsfinal= meanst/2 meanstsfinal #Using the standard normal distribution as is more probable to get a salary around 150479 #This mean wll be used as the mean for the standar normal distribution salary= pd.DataFrame ({"Salary": np.random.normal (150479, meanstsfinal, 1000)}) salary #small test on the data np.mean (salary) # The test is correct as the mean for this distribution is 150479 $. np.max (salary) np.max (salary) np.min (salary) # In terms of the minimum and the maximum in the set, we can accept those figures as acceptable as they are close to the limits we have in the reference set. # # # # *A revision on how to set the standard deviation has been made as in the first attempt the standard deviation was taken just from the higher deviation. There was a rework on the standard deviation now obtained by the mean of the higher and the lower values. # The salary is normally distributed around the mean, with a smaller probability of getting 196315 $ then to get the values around the mean 150479. # # _Why standard normal distribution?_ # # As per the data observed most of the salaries vary in a range between 130000 dollars and 170000 dollars so getting the mean of all the salaries will set this intermediate point. The maximum 196315 dollars or the minimum 116099 dollars are rare exceptions, so setting the deviation in this range can give an approximate simulation of the salaries in a company. #year salary divided in 12 months month_salary= salary/12 month_salary # ### 2.1.5 Revenue # # The revenue is presented as a fix value of 10000000. We assume for ths project that when the employees deiver the sotware, the income that the company receives in return is 10000000. # ### 2.2 Concatenating the simulated data # + #Concatenating the simulated data above dataset = pd.DataFrame({"Hours month": np.random.normal (7.5, 0.5, 1000)*20 , "Studies": np.random.choice(4, 1000, p=[0.14, 0.53, 0.21, 0.12]), "Motivation": np.random.randint(1, 6, size=1000), "Salary": np.random.normal(150479, 15278, 1000)/12, "Company Revenue": 10000000}) # - dataset # ### 2.3 Syntesizing the data set #analysis of any data set dataset.describe() # The description function presents very valuable information, as it shows for our data set: # 1. COUNT: The number of values of the specified column. In this case 1000 of values in each column. # 2. MEAN: The mean of the values of each column. In this case the mean of the 1000 values that we simulated for each column # 3. STD: Shows the standard deviation between values # 4. MIN: The minimum value in the column # 5. 25th percentile. 
25% of the values are less or equal to the number that appears for each column # 6. 50th percentile (medium) 50% of the values are bigger and 50% of the values are smaller than that the number that show per each column. # 7. 75th percentile. 75% of the values are lower than this number per each column # 8. MAX: the maximum value in each column #mean of each column dataset.mean () #first rows of the data set dataset.head() #last rows of the data set dataset.tail() # ### 3 Relationships between variables # Taking the project to a more advanced level, most of the variables would depend on some of the other ones in the data set. # # - The level of studies and the salary will be related. The higher level of studies the more probability of having a higher salary based on the knowledge that the employee gives to the company. # # - The hours at work and the motivation could be as well related in two opposite ways. # # 1. The employees would be more motivated if they have more time to enjoy out of the office. So when they are at the office they are more engaged. # # 2. The employees more motivated stay in the office more time because they enjoy a lot they work, so then they are motivated. # # # - The salary and the income will be closely related on calculating the ROI (return of investment) higher salaries would rate less on ROI as the salary will be part of the costs of the company in calculating the ROI (revenue- costs). # # - The Salary and motivation could be also related. There are studies (some quoted in the Investigation) that explain that salary could have an effect in the motivation as an extrinsic motivation among other factors. And other references detail the very little relationship between salary and motivation and show higher relationships between motivation and public recognition, training, company culture... # **Data set without relationship between variables Hours/Motivation** #subset of two columns of data to visualise them dataset1=dataset [["Hours month","Motivation"]] dataset1 # + #Synthesing data and showing relations # plot data and regression model # https://seaborn.pydata.org/generated/seaborn.lmplot.html sns.lmplot (x="Hours month", y="Motivation", data=dataset1) # + #Scatterplot #scatter plot with possibility of several semantic groupings #https://seaborn.pydata.org/generated/seaborn.scatterplot.html sns.set(style="ticks") sns.pairplot(dataset1, hue="Motivation") # - # Above we can see a Scatter Plot and regression model plot. # # At the moment as we can see there is not the relationship between the data as it was randomly created without taking into account any relationship between variables. # # # # _Terminology_ # # The **regression model plot** fits regression models across conditional subsets of a dataset. # _Source: https://seaborn.pydata.org/generated/seaborn.lmplot.html_ # # The **Scatterplot** draws a plot with the possibility of several semantic groupings. # In a Scatterplot, the relationship between x and y can be shown for different subsets of the data using different parameters. These parameters control what visual semantics are used to identify the different subsets. # _Source: https://seaborn.pydata.org/generated/seaborn.scatterplot.html_ # _Source: https://seaborn.pydata.org/examples/scatterplot_matrix.html_ # # # **Data set with relationship between variables Hours/Motivation** # In the data above the concatenation is just a mix of all the random sets simulated previously, with no patterns or stablished models. 
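# To back the visual impression with a number, a small sketch using the `dataset1` subset from above: since the two columns were simulated independently, their Pearson correlation should be close to zero.

# +
# correlation between hours and motivation in the independently simulated data
print(dataset1["Hours month"].corr(dataset1["Motivation"]))   # expected to be near 0
# -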
# # Now we will be relating the data sets to be able to prove or refute an hipotesis. # # # # # _**Hipothesis**_ # # The employees who spent more hours at work are less motivated than the employees who stay less hours at the office. #Simulating a basic inverse relationship between time spent at work and motivation h= np.random.normal (7.5, 0.5, 100)*20 m = 450/h data = pd.DataFrame({'HoursM': h, 'Motivat': m}) data # + #Plotting the data to show if it exists a relationship between the data #https://seaborn.pydata.org/generated/seaborn.scatterplot.html sns.set(style="ticks") sns.pairplot(data) # - #Plotting the data to show the data and the regression line of the data sns.lmplot (x="HoursM", y="Motivat", data=data) # This simulated data would allow us to validate the hipotesis; we can see a clear relationship in both in the regression plot and in the Scatter plot between the hours spent at work and the decrease in motivation. # # The more hours the employees spend at work the less motivation they have. # ### 4. Extra relationships simulations # #### 4.1 Salary based on level of studies #Simulating salary in base of studies level #more level of studies = more salary studies= pd.DataFrame ({"Salary": np.random.choice(4, 100, p=[0.14, 0.53, 0.21, 0.12])+1}) salary= studies * 80000 salary # In this case, we can appreciate that the higher the studies level, the higher the salary the employees receive. # The calculation is based on multiplying the level of studies randomly generated (random.choice) based on Airbnb distribution, in the previous sections, per a fixed amount 80000 euros. # # So the higher the studies level the higher amount of salary the employees will receive. # # In this specific case, as the undergraduate parameter was set to 0, the salary values for those employees would be set to 0. But adding one to the random function solves the problem of obtaining results set to 0. # # # + #Plotting the data to show if it exists a relationship between the data #https://seaborn.pydata.org/generated/seaborn.scatterplot.html sns.set(style="ticks") sns.pairplot(salary) # - # The Scatter plot above shows the amount of employees that would receive a salary range (0, 80000, 160000 and 240000) related to the lever of studies (undergraduate, bachellor, master, and ph.D) # #### 4.3 ROI Return of investment or relationship between Salary and Revenue # # **What is understood as value?** # # Based on the context of an IT company, the value will be presented as ROI or Return On Investment. # # The ROI measures the gain or loss generated on an investment relative to the amount of money invested. ROI is usually expressed as a percentage and is typically used for financial decisions, to compare a company's profitability or to compare the efficiency of different investments. # # The return on investment formula is: # # ROI = (Net Profit / Cost of Investment) x 100 # # # _Source: https://www.thoughtworks.com/insights/blog/how-do-you-measure-value_ , https://investinganswers.com/financial-dictionary/technical-analysis/return-investment-roi-1100 and https://barnraisersllc.com/2017/03/surprisingly-simple-steps-calculate-roi/_ # # # + #ROI calculation= revenue-investment/investment #Relationship between the benefits and the salary salary = pd.DataFrame({"ROI": np.random.normal(150479, 15278, 100)/12}) c= 10000000-salary ROI= c/10000000 ROI # - # The ROI shows that the higher the salary of the employee the smaller the ROI for the company, as the revenue remains constant. # ## 5. 
Conclussion # # The use of Numpy is very efficient for simulating data sets attending to the different distributions of the data. In this project we can see it in the different distributions used: # - The Normal distribution (numpy.random.normal) to simulate data that tends to a number, as the hours spent at work by the employees of a company tending to 7 and a half hours, setting a standard deviation of 0.5 hours. Or the salary of the employees with the mean value and the standard deviation. # - Random.choice to simulate data according to different percentages, as the level of studies of the employees in a company. # - Random.randint to create data of integers following a discrete uniform distribution, all the values has the same chance of being choosen, as the level of motivation of the employees in a company. # # Pandas package has been very useful to generate data sets, show them, and concatenate various sets of data. Moreover offers interesting functions to summarise data such as dataset.describe. # # Seaborn package is of great use to visualize the data and find patterns, relationships, and models. Scatterplots and Regression plots can be very useful to find relationships between data or lack of relationships between them. These relationships can be easier to identify visualizing the data rather than seeing it in a data set. # # The combination of these three packages offers a powerful engine for data analysis. # #### Sources relationships between data and modeling # https://www.scipy-lectures.org/packages/statistics/index.html # # https://www.theanalysisfactor.com/7-guidelines-model-building/ # # https://www.theanalysisfactor.com/13-steps-regression-anova/ # # https://en.wikipedia.org/wiki/Statistical_model # # https://www.theanalysisfactor.com/assessing-the-fit-of-regression-models/ # # https://www.kaggle.com/keiichiroumiyamoto/linear-model # # # #### Sources used to investigate code errors # # https://stackoverflow.com/questions/17839973/constructing-pandas-dataframe-from-values-in-variables-gives-valueerror-if-usi # # https://softwareengineering.stackexchange.com/questions/238033/what-does-it-mean-when-data-is-scalar # # https://stackoverflow.com/questions/43054217/typeerror-dataframe-objects-are-mutable-thus-they-cannot-be-hashed-while-s # # https://stackoverflow.com/questions/17839973/constructing-pandas-dataframe-from-values-in-variables-gives-valueerror-if-usi # # https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html # # https://stackoverflow.com/questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o # # https://medium.com/dunder-data/selecting-subsets-of-data-in-pandas-6fcd0170be9c # # https://www.hackerearth.com/practice/machine-learning/data-manipulation-visualisation-r-python/tutorial-data-manipulation-numpy-pandas-python/tutorial/ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Warsztaty Python w Data Science # # --- # # ## Machine Learning - część 5 z 5. 
Reinforcement Learning # # - ### Gra w kółko i krzyżyk # - ### Q-Learning i wzór Bellmana # - ### Testy A/B # - ### Problem wielorękiego bandyty # - ### Wyżarzanie # # --- # # ## Gra w kółko i krzyżyk def cell(x): try: if x.lower()=="x": return " X " if x.lower()=="o": return " O " except: pass return " " def print_state(state): sep = "_"*11+"\n" ret = [] n=0 for row in state: ret.append("|".join(map(cell,row))) n+=1 if n<3: ret.append(sep) for line in ret: print(line) state = (('x','o',0.0), ('o','x',0.0), (0.0,'o','x')) print_state(state) def has_won(state): players = ['x', 'o'] for i in [0,1]: for row in state: if row==tuple(players[i]*3): # _ return i, True for cols in [0, 1, 2]: if state[0][cols]==state[1][cols] and state[2][cols]==state[0][cols] and state[0][cols]==players[i]: # | return i, True if state[0][0]==state[1][1] and state[0][0]==state[2][2] and state[0][0]==players[i]: # \ return i, True if state[2][0]==state[1][1] and state[2][0]==state[0][2] and state[0][2]==players[i]: # / return i, True return -1, False has_won(state) state = (('o','o','o'), ('o','x',0.0), (0.33,'o','x')) print_state(state) has_won(state) state = (('o','x','o'), ('o','x',0.0), ('o','o','x')) print_state(state) has_won(state) state = (('o','x','x'), ('o','x',0.0), ('x','o','x')) print_state(state) has_won(state) state = (('o','x','o'), ('o','x',0.0), ('x','o','x')) print_state(state) has_won(state) def is_full(state): for row in state: for v in row: if v!="x" and v!="o": return False return True is_full((('o','x','o'), ('o','x',0.0), ('o','o','x'))) is_full((('o','x','o'), ('o','x','x'), ('o','o','x'))) def is_valid_move(state, x, y): row = state[x] if row[y]!="x" and row[y]!="o": return True return False def new_state(state, x, y, character): l = [] for row in state: l.append(list(row)) l[x][y] = character return (tuple(l[0]), tuple(l[1]), tuple(l[2])) state = (('o','x','o'), ('o','x',0.0), ('o','o','x')) print_state(new_state(state, 1, 2, "x")) class Player: def move(state): return state def learn(self, won): pass class HumanPlayer(Player): def __init__(self, character): self.character=character def move(self, state): print_state(state) flag = False while not flag: print() x = int(input(f"({self.character}) row: ")) y = int(input(f"({self.character}) col: ")) flag=is_valid_move(state, x, y) print() return new_state(state, x, y, self.character) h = HumanPlayer("x") print_state( h.move((('o','x','o'), ('o','x',0.0), ('o','o','x'))) ) def do_game(player0, player1, stats, verbose=False): state = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)) ended = False full = False won_won = -1 while not ended and not full: state = player0.move(state) who_won, ended = has_won(state) full = is_full(state) if not ended and not full: state = player1.move(state) who_won, ended = has_won(state) full = is_full(state) if ended: stats.append("WIN 0" if who_won==0 else "WIN 1") player0.learn(1.0 if who_won==0 else 0.0) player1.learn(1.0 if who_won==1 else 0.0) if full and not ended: stats.append("DRAW") player0.learn(0.5) player1.learn(0.5) if verbose: print("DRAW" if full and not ended else f"{who_won} has WON!") print() print_state(state) # + import random print([random.randrange(3) for x in range(100)]) # - class RandomPlayer(Player): def __init__(self, character): self.character=character def move(self, state): flag = False while not flag: x = random.randrange(3) y = random.randrange(3) flag=is_valid_move(state, x, y) return new_state(state, x, y, self.character) do_game(HumanPlayer('x'), RandomPlayer('o'), [], True) # 
+ from collections import Counter stats = [] for i in range(1_000): do_game(RandomPlayer('x'), RandomPlayer('o'), stats, False) print(Counter(stats)) # - 1_000_000 == 1000000 1_000.0 == 1000.0 # + # %matplotlib inline import matplotlib.pyplot as plt import numpy as np plt.style.use("dark_background") def plot_games(qstats, label0, label1): games = range(40, len(qstats), 20) won0 = [ Counter(qstats[:x])['WIN 0']/x for x in games] won1 = [ Counter(qstats[:x])['WIN 1']/x for x in games] draw = [ Counter(qstats[:x])['DRAW']/x for x in games] fig = plt.figure(); plt.figure(figsize=(20,10)); plt.title('Results ratio'); player0, = plt.plot(games, won0, label=label0); player1, = plt.plot(games, won1, label=label1); draws, = plt.plot(games, draw, label="Draws"); plt.legend(handles=[player0, player1, draws], frameon=True, loc="best", fontsize=16); plt.show(); plot_games(stats, "Random Player 0 Won", "Random Player 1 Won"); # - # --- # # _*Q-Learning*_ # https://en.wikipedia.org/wiki/Markov_chain # https://en.wikipedia.org/wiki/Q-learning # $ # Q: S \times A \rightarrow I\!R # $ # --- # # ## W danym kroku $s$ podejmujemy akcję $a$ która maksymalizuje $Q$ # # --- # # ## $Q$ Może zostać stabularyzowana (tzn. wsadzona do słownika) # ## Kluczami są pary `(stan, akcja)` a wartością wynik $Q$ # # --- # # Uczymy się $Q$ korzytając ze __*Wzoru Bellmana*__ # # # $ # Q^{new}(s_t, a_t) = Q(s_t, a_t) + \alpha \cdot ( r_t + \gamma \cdot \max_{a}Q(s_{t+1}, a) - Q(s_t, a_t) ) # $ # --- # Upraszczając: # # $ # Q^{new}(s_t, a_t) = (1- \alpha) \cdot Q(s_t, a_t) + \alpha \cdot \gamma \cdot \max_{a}Q(s_{t+1}, a) # $ # --- # # # # $\alpha$ - Tempo uczenia się # # $\gamma$ - Degradacja wagi z odległością od bodźca # # $r_t$ - nagroda w danym stanie # --- α = 0.15 # Tempo uczenia się β = 0.60 # Wartość początkowa `Q` - 'agresja' γ = 0.99 # Degradacja wagi z odległością od bodźca # --- # + from random import sample class QPlayer(Player): def __init__(self, character): self.character=character ################################################# ## stabularyzowane Q postaci: ## ## { stan1: ## ## { nowy_stan1: q1, nowy_stan2: q2, ...}, ## ## stan2: ## ## { nowy_stan3: q3, nowy_stan4: q4, ...}, ## ## ... 
} ## ################################################# self.q_table={} self.previous_state=None self.current_state=((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)) def initialize_q_table(self, state): actions = {} for x in range(3): for y in range(3): if is_valid_move(state, x, y): ################################## ## Inicjalizacja przez β (beta) ## ################################## actions[ new_state(state, x, y, self.character) ] = β self.q_table[state] = actions return actions def move(self, state): ############################################################## ## Tworzę powiązanie z ruchem drugiego gracza, ## ## Aby w Q-table był ciągły łańcuch od poprzedniego ruchu ## ############################################################## if self.previous_state: actions = self.q_table.get(self.current_state, {}) if state not in actions.keys(): actions[state] = β self.q_table[self.current_state] = actions ############################################################## ## Jeśli w Q-table nie ma akcji powiązanych z obecnym ## ## stanem, dopisuje wszystkie możliwe akcje z wagą β (beta) ## ############################################################## actions = self.q_table.get(state) if not actions: actions = self.initialize_q_table(state) ############################################################## ## Biorę najlepsze Q dla akcji w tym stanie ## ############################################################## best_q = max(actions.values()) ############################################################## ## Losuję akcję pośród tych co mają najlepsze Q ## ############################################################## best_actions = [ action for action, q in actions.items() if q==best_q ] self.previous_state = state self.current_state = sample(best_actions, 1)[0] return self.current_state def learn(self, learned_weight): self.q_table[self.previous_state][self.current_state] = learned_weight self.previous_state=None for state, actions in self.q_table.items(): for next_move, q in actions.items(): next_move_actions = self.q_table.get(next_move) if next_move_actions: best_next_q = max(next_move_actions.values()) ######################### ## ## ######################### actions[next_move] = (1-α)*q + γ*α*best_next_q self.q_table[state] = actions # - qstats = [] qplayer = QPlayer('o') # + from collections import Counter for i in range(1): do_game(RandomPlayer('x'), qplayer, qstats, False) print(Counter(qstats)) # - from pprint import pprint pprint(qplayer.q_table) # + from collections import Counter for i in range(1): do_game(RandomPlayer('x'), qplayer, qstats, False) print(Counter(qstats)) # - from pprint import pprint pprint(qplayer.q_table) # + for i in range(2_000): do_game(RandomPlayer('x'), qplayer, qstats, False) print(Counter(qstats)) plot_games(qstats, "Random Player 0 Won", "Q Player 1 Won") # + for i in range(2_000): do_game(RandomPlayer('x'), qplayer, qstats, False) print(Counter(qstats)) plot_games(qstats, "Random Player 0 Won", "Q Player 1 Won") # + def plot_2_game(qstats, stats, labelA, labelB): qgames = range(40, len(qstats), 20) qwon0 = [ Counter(qstats[:x])['WIN 0']/x for x in qgames] qwon1 = [ Counter(qstats[:x])['WIN 1']/x for x in qgames] qdraw = [ Counter(qstats[:x])['DRAW']/x for x in qgames] games = range(40, len(stats), 20) won0 = [ Counter(stats[:x])['WIN 0']/x for x in games] won1 = [ Counter(stats[:x])['WIN 1']/x for x in games] draw = [ Counter(stats[:x])['DRAW']/x for x in games] plt.rcParams['figure.figsize'] = [20,10] fig = plt.figure(); l = fig.add_subplot(121) 
plt.ylim([0.0, 0.9]) player0, = l.plot(qgames, qwon0, label="Random Player 0"); player1, = l.plot(qgames, qwon1, label=labelA); draws, = l.plot(qgames, qdraw, label="Draws"); l.legend(handles=[player0, player1, draws], frameon=True, loc="best", fontsize=16); r = fig.add_subplot(122) plt.ylim([0.0, 0.9]) player0, = r.plot(games, won0, label="Random Player 0"); player1, = r.plot(games, won1, label=labelB); draws, = r.plot(games, draw, label="Draws"); r.legend(handles=[player0, player1, draws], frameon=True, loc="best", fontsize=16); plt.show(); plot_2_game(qstats, stats, "Q-Player 1", "Random Player 1") # + qstats2 = [] for i in range(1_000): do_game(RandomPlayer('x'), qplayer, qstats2, False) print(Counter(qstats2)) plot_games(qstats2, "Random Player 0 Won", "Q Player 1 Won") # - # ___ # # __*A/B testing*__ # ## Za optimizely: # # ### __*A/B testing*__ (also known as split testing or bucket testing) is a method of __*comparing two versions*__ of a webpage or app against each other to determine which one performs __*better*__. # # #### Co to znaczy __*better*__ # - Bounce rate # - Conversion # - Click-Through Rate # - Etc. # # --- # ## A/B Testing ma zazwyczaj dwie fazy # - ### __*Eksploracji*__ - sprawdzamy która lepsza jest lepsza # - ### __*Eksploatacji*__ - "lepsza" wersja działa jako jedyna # ## Wady typowego podejścia # - Nie ma płynnego przejścia z 2 wersji na 1 # - W czasie eksploracji sprawdzamy wiele dużo wersji gorszych od wersji kontrolnej # - __*Eksploracja w praktyce nigdy się nie kończy*__ # --- # # Problem Wielorękiego Bandyty # # ![](img\Las_Vegas_slot_machines.jpg) # # https://en.wikipedia.org/wiki/Multi-armed_bandit # # Mamy budżet aby jak najwięcej "ugrać". Poszczególni jednoręcy bandyci różnią się między sobą szansą na wygraną. # # Ile spędzić budzetu na eksplorację najlepszej maszyny, a ile na osiągnięcie wygranej ? # # Istnieje olbrzymi katalog algorytmów (ang. __bandit algorithms__) które na to odpowiadają. # - ε-greedy # - soft-max # - soft-max z wyżarzaniem # - Upper Confidence Bound # - Exp3 # - etc. etc. # # ![](img\banditalgos.jpg) # # --- # ## Algorytm ε-zachłanny - ε-greedy # # - Z prawdopodobieństwem `1-ε` wybieramy strategię kontrolną `A` do eksploatacji. # # - Z prawdopodobieństwem `ε` (np. 20%) wybieramy strategię `B` do eksploracji. # # # ![](img\epsilongreedy.png) # --- # ## Algorytm Soft-max # # $r_A$ - proporcja sukcesów w gałęzi `A` # # $r_B$ - proporcja sukcesów w gałęzi `B` # # # - Z prawdopodobieństwem $$\frac{r_A}{r_A+r_B}$$ wybieramy strategię kontrolną `A` do eksploatacji. # # - Z prawdopodobieństwem $$\frac{r_B}{r_A+r_B}$$ wybieramy strategię `B` do eksploracji. # --- # ## Algorytm Soft-max z wyżarzaniem # # $r_A$ - proporcja sukcesów w gałęzi `A` # # $r_B$ - proporcja sukcesów w gałęzi `B` # # $\tau$ - "temperatura" # # # - Z prawdopodobieństwem $$\frac{exp(r_A/\tau)}{exp(r_A/\tau)+exp(r_B/\tau)}$$ wybieramy strategię kontrolną `A` do eksploatacji. # # - Z prawdopodobieństwem $$\frac{exp(r_B/\tau)}{exp(r_A/\tau)+exp(r_B/\tau)}$$ wybieramy strategię `B` do eksploracji. # # --- # `t = sum(self.counts) + 1` # # `tau = 1 / math.log(t + 0.0000001)` # # # --- # ## __*Wyżarzanie*__ # # ### __*Wyżarzanie*__ - to obniżanie "temperatury" czyli zmienności układu z czasem # # ### Dzięki __*wyżarzaniu*__ - system z czasem coraz rzadziej eksploruje. 
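# A minimal sketch of the soft-max-with-annealing selection rule quoted above, assuming two arms with made-up success/trial counters (the counters and the single call below are illustrative values only); the temperature follows the `tau` formula shown above, so exploration fades as the total number of trials grows.

# +
import math
import random

def softmax_annealed_choice(successes, trials):
    # "temperature" decays as the total number of trials grows
    t = sum(trials) + 1
    tau = 1 / math.log(t + 0.0000001)
    # success ratio of each arm (guarding against arms with no trials yet)
    rates = [s / max(n, 1) for s, n in zip(successes, trials)]
    # soft-max weights exp(r/tau): high temperature -> near-uniform, low temperature -> greedy
    weights = [math.exp(r / tau) for r in rates]
    return random.choices(range(len(weights)), weights=weights)[0]

# toy example: arm 0 ("A") has 12 successes in 100 trials, arm 1 ("B") has 9 in 100
print(softmax_annealed_choice([12, 9], [100, 100]))
# -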
# # #### __*Hartowanie*__ to zwiększanie temperatury raz na jakiś czas aby "wybić" system z minimów lokalnych # # --- # ## Q-Player z __*Wyżarzaniem*__ # # Zmniejszamy `α` przez mnożenie przez `γ` co krok - przez co spada zmienność układu # + α = 0.99 # Tempo uczenia się - "temperatura" β = 0.80 # Wartość początkowa `Q` - 'agresja' γ = 0.975 # Degradacja wagi z odległością od bodźca oraz tempo spadku "temperatury" class AnnealingQPlayer(QPlayer): def __init__(self, character): super().__init__(character) def learn(self, learned_weight): global α ################ ## Wyżarzanie ## ################ α *= γ super().learn(learned_weight) # - annealing_qplayer = AnnealingQPlayer('o') # + aqstats = [] for i in range(2_500): do_game(RandomPlayer('x'), annealing_qplayer, aqstats, False) print(Counter(aqstats)) plot_games(aqstats, "Random Player 0 Won", "Annealing Q Player 1 Won") # + for i in range(2_500): do_game(RandomPlayer('x'), annealing_qplayer, aqstats, False) print(Counter(aqstats)) plot_games(aqstats, "Random Player 0 Won", "Annealing Q Player 1 Won") # - plot_2_game(qstats, aqstats, "Q-Player 1", "Annealing Q-Player 1") # + aqstats2 = [] for i in range(1000): do_game(RandomPlayer('x'), annealing_qplayer, aqstats2, False) print(Counter(aqstats2)) plot_2_game(qstats2, aqstats2, "Q-Player 1", "Annealing Q-Player 1") # - # --- # # Zadanie 1 # # Zagrajcie w kółko i krzyżyk # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import cv2 import matplotlib.pyplot as plt # load image img = cv2.imread('images/sunflower.jpg') # + # # copy image img_copy = img.copy() # convert to rgb img_rgb = cv2.cvtColor(img_copy, cv2.COLOR_BGR2RGB) # show image plt.imshow(img_rgb) # - # ### convert to Grayscale gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY) #show image plt.imshow(gray, cmap='gray') # ## Implement Canny Edge Detection # + #set lower and upper lower = 120 upper = 240 # create canny edges = cv2.Canny(gray, lower, upper) # show image plt.imshow(edges, cmap='gray') # + wide = cv2.Canny(gray, 30, 100) tight = cv2.Canny(gray, 180, 240) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10)) ax1.set_title('wide'),ax1.imshow(wide, cmap='gray') ax2.set_title('tight'),ax2.imshow(tight, cmap='gray') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # /* ===================================================================================== # # CSPlib version 1.1.0 # Copyright (2021) NTESS # https://github.com/sandialabs/csplib # # Copyright 2021 National Technology & Engineering Solutions of Sandia, LLC (NTESS). # Under the terms of Contract DE-NA0003525 with NTESS, the U.S. Government retains # certain rights in this software. # # This file is part of CSPlib. CSPlib is open-source software: you can redistribute it # and/or modify it under the terms of BSD 2-Clause License # (https://opensource.org/licenses/BSD-2-Clause). A copy of the license is also # provided under the main directory # # Questions? 
Contact at <>, or # at <>, or # at <> # # Sandia National Laboratories, Livermore, CA, USA # # ===================================================================================== */ import numpy as np import matplotlib.pyplot as plt import os os.getcwd() firstname='' state = np.loadtxt( firstname +'_state.dat') m = np.loadtxt( firstname +'_m.dat') tau = np.loadtxt( firstname +'_tau.dat') f = np.loadtxt( firstname + '_magMode.dat') rank = np.loadtxt( firstname +'_jac_numerical_rank.dat') t = np.loadtxt( firstname + '_time.dat') Pointers = np.loadtxt(firstname +'_cspPointers.dat') plt.figure() plt.plot(state[:,0],state[:,1] ) plt.xlabel('y') plt.ylabel('z') plt.figure() plt.plot(t,state[:,0] ,label='y') plt.plot(t,state[:,1] ,label='z') plt.xlabel('Time [s]') plt.ylabel('y/z ') plt.legend(loc='best') plt.xscale('log') def makePlot(var, var_label='M'): fig, ax1 = plt.subplots() color = 'black' ax1.set_xlabel('Time [s]') ax1.set_ylabel('y /z ', color=color) ax1.plot(t, state[:,0], color='r', label='y') ax1.plot(t, state[:,1], color='g', label='z') ax1.tick_params(axis='y', labelcolor=color) ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis color = 'tab:blue' ax2.set_ylabel(var_label,color=color) # we already handled the x-label with ax1 ax2.plot(t, var,'.', color=color) ax2.tick_params(axis='y', labelcolor=color) fig.tight_layout() plt.xscale('log') ax1.legend(loc=2) return # + fig, ax1 = plt.subplots() #figsize=(8,4) color = 'tab:red' ax1.set_xlabel('y') ax1.set_ylabel('z', color=color) ax1.plot(state[:,0],state[:,1], color=color) ax1.tick_params(axis='y', labelcolor=color) ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis color = 'tab:blue' ax2.set_ylabel('M',color=color) # we already handled the x-label with ax1 ax2.plot(state[:,0], m,'.', color=color) ax2.tick_params(axis='y', labelcolor=color) fig.tight_layout() # - makePlot(m, var_label='M') plt.savefig('M.pdf') Nvar = len(tau[0,:]) tmp = [] tmrank = [] for i,M in enumerate(m): if M == Nvar: Mi = M -1 else: Mi = M tmp += [tau[i,int(Mi)]] if rank[i] == Nvar: tmrank += [tau[i,Nvar-1]] else: tmrank += [tau[i,int(rank[i])]] makePlot(tmp, var_label='τ[M+1]') plt.yscale('log') plt.savefig('tau.pdf') # + fig, ax1 = plt.subplots() #figsize=(8,4) color = 'tab:red' ax1.set_xlabel('y') ax1.set_ylabel('z', color=color) ax1.plot(state[:,0],state[:,1], color=color) ax1.tick_params(axis='y', labelcolor=color) ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis color = 'tab:blue' ax2.set_ylabel('τ[M+1]',color=color) # we already handled the x-label with ax1 ax2.plot(state[:,0], tmp,'.', color=color) ax2.tick_params(axis='y', labelcolor=color) fig.tight_layout() plt.yscale('log') # - plt.figure() plt.plot(t,tau, 'k') plt.plot(t,tmp,'ro', label='τ[M+1]') plt.yscale('log') plt.xscale('log') plt.xlabel('Time [s]') plt.ylabel('τ [s]') plt.legend(loc='best') plt.savefig('timescales.pdf') plt.figure() plt.plot(state[:,0],tau, 'k', label='τ') plt.plot(state[:,0],tmp,'ro', label='τ[M+1]') plt.yscale('log') # plt.xscale('log') plt.xlabel('y') plt.ylabel('τ [s]') makePlot(f[:,0], var_label='f[0]') plt.savefig('f0.pdf') makePlot(f[:,1], var_label='f[1]') plt.savefig('f1.pdf') # + fig, ax1 = plt.subplots() #figsize=(8,4) color = 'tab:red' ax1.set_xlabel('y') ax1.set_ylabel('z', color=color) ax1.plot(state[:,0],state[:,1], color=color) ax1.tick_params(axis='y', labelcolor=color) ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis color = 'tab:blue' 
ax2.set_ylabel('f[0]',color=color) # we already handled the x-label with ax1 ax2.plot(state[:,0], f[:,0],'.', color=color) # ax2.plot(state[:,0], f[:,1],'.', color='g') ax2.tick_params(axis='y', labelcolor=color) fig.tight_layout() # plt.yscale('log') # + fig, ax1 = plt.subplots() #figsize=(8,4) color = 'tab:red' ax1.set_xlabel('y') ax1.set_ylabel('z', color=color) ax1.plot(state[:,0],state[:,1], color=color) ax1.tick_params(axis='y', labelcolor=color) ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis color = 'tab:blue' ax2.set_ylabel('f[1]',color=color) # we already handled the x-label with ax1 ax2.plot(state[:,0], f[:,1],'.', color=color) # ax2.plot(state[:,0], f[:,1],'.', color='g') ax2.tick_params(axis='y', labelcolor=color) fig.tight_layout() # - plt.figure() plt.plot(f[:,0],f[:,1] ) plt.xlabel('f0') plt.ylabel('f1') # plt.xscale('log') plt.figure() plt.plot(t,f[:,0], label='f0' ) plt.plot(t,f[:,1], label='f1' ) plt.xlabel('t') plt.ylabel('f') plt.legend(loc='best') # plt.yscale('log') Npoints = len(t) Ptrs = np.reshape(Pointers,[Npoints,Nvar,Nvar]) makePlot(Ptrs[:,0,0], var_label='csp pointer Mode 0') makePlot(Ptrs[:,1,1], var_label='csp pointer Mode 0') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: PythonData # language: python # name: pythondata # --- # # WeatherPy # ---- # # #### Note # * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. # + # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import time from scipy.stats import linregress # Import API key from api_keys import weather_api_key # Incorporated citipy to determine city based on latitude and longitude from citipy import citipy # Output File (CSV) output_data_file = "output_data/cities.csv" # Range of latitudes and longitudes lat_range = (-90, 90) lng_range = (-180, 180) # + # - # ## Generate Cities List # + # List for holding lat_lngs and cities lat_lngs = [] cities = [] # Create a set of random lat and lng combinations lats = np.random.uniform(lat_range[0], lat_range[1], size=1500) lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500) lat_lngs = zip(lats, lngs) # Identify nearest city for each lat, lng combination for lat_lng in lat_lngs: city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name # If the city is unique, then add it to a our cities list if city not in cities: cities.append(city) # Print the city count to confirm sufficient count len(cities) # - cities # ### Perform API Calls # * Perform a weather check on each city using a series of successive API calls. # * Include a print log of each city as it'sbeing processed (with the city number and city name). 
# # + print('Beginning Data Retrieval\n-----------------------------\n') weather_list = [] rcd = 0 sets = 1 for city in cities: rcd += 1 print(f'Processing Record {rcd} of Set {sets} | {city}') if rcd == 50: rcd = 0 sets += 1 url = f'https://api.openweathermap.org/data/2.5/weather?q={city}&appid={weather_api_key}&units=imperial' weather = requests.get(url).json() try: weather_list.append({ 'city': city, 'lat': weather['coord']['lat'], 'lng': weather['coord']['lon'], 'temp': weather['main']['temp_max'], 'humidity': weather['main']['humidity'], 'wind':weather['wind']['speed'], 'cloudiness':weather['clouds']['all'], 'country':weather['sys']['country'], 'date':weather['dt'] }) except: print('City not found. Skipping...') pass print('-----------------------------\nData Retrieval Complete\n-----------------------------') # - # ### Convert Raw Data to DataFrame # * Export the city data into a .csv. # * Display the DataFrame city_data = pd.DataFrame(weather_list) city_data.date = city_data.date.map(time.ctime) city_data.to_csv('output_data/city_data.csv') city_data.head() city_data.describe() # ## Inspect the data and remove the cities where the humidity > 100%. # ---- # Skip this step if there are no cities that have humidity > 100%. # Get the indices of cities that have humidity over 100%. city_data = city_data.loc[city_data["humidity"]<= 100] city_data.head() # Make a new DataFrame equal to the city data to drop all humidity outliers by index. # Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data". # ## Plotting the Data # * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels. # * Save the plotted figures as .pngs. # ## Latitude vs. Temperature Plot plt.grid() plt.xlabel('Latitude') plt.ylabel('Max Temperature (F)') plt.title('City Latitude vs Max Temperature (F) (04/25/2021)') plt.scatter(city_data['lat'],city_data['temp'],edgecolor='black', linewidths=1,) plt.savefig("output_data/city_lat_vs_max_temp.png") # ## Latitude vs. Humidity Plot plt.grid() plt.xlabel('Latitude') plt.ylabel('Humidity') plt.title('City Latitude vs Humidity (04/25/2021)') plt.scatter(city_data['lat'],city_data['humidity'],edgecolor='black', linewidths=1,) plt.savefig("output_data/City_Lat_vs_Humidity.png") # ## Latitude vs. Cloudiness Plot plt.grid() plt.xlabel('Latitude') plt.ylabel('Cloudiness') plt.title('City Latitude vs Cloudiness (04/25/2021)') plt.scatter(city_data['lat'],city_data['cloudiness'],edgecolor='black', linewidths=1,) plt.savefig("output_data/City_Lat_vs Cloudiness_.png") # ## Latitude vs. Wind Speed Plot plt.grid() plt.xlabel('Latitude') plt.ylabel('Wind Speed') plt.title('City Latitude vs Wind Speed (04/25/2021)') plt.scatter(city_data['lat'],city_data['wind'],edgecolor='black', linewidths=1,) plt.savefig("output_data/City_Lat_vs_Wind_Speed.png") # ## Linear Regression # #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression northern_pd = city_data.loc[city_data["lat"]>= 0] southern_pd = city_data.loc[city_data["lat"]<= 0] # establish linear regression values = linregress(city_data ("lat","temp".values) slope, intercept, rValue, pValue, stderror = linregress(northern_pd['lat'], northern_pd['temp']) print(f'The St. 
Pearson Correlation Coefficient between both factors is {round(rValue, 2)}') # linear regression line regress = slope*(northern_pd['lat']) + intercept regresstring = "y = " + str(round(slope, 2)) + 'x + ' + str(round(intercept, 2)) plt.annotate(regresstring, xy=(5,30), fontsize=13, color="blue") plt.plot(northern_pd['lat'],regress,"-",color="blue") plt.scatter(northern_pd['lat'],northern_pd['temp'], marker = "o", facecolors = 'green', edgecolor='black') plt.xlabel('Latitude') plt.ylabel('Max Temperature (F)') plt.title('Northern Hemisphere - Max Temp vs. Latitude Linear Regression') plt.show() plt.savefig("output_data/Northern_Hemisphere_Max_Temp_vs_Latitude_Linear_Regression.png") # #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression # establish linear regression values = linregress(city_data ("lat","temp".values) slope, intercept, rValue, pValue, stderror = linregress(southern_pd['lat'], southern_pd['temp']) print(f'The St. Pearson Correlation Coefficient between both factors is {round(rValue, 2)}') # linear regression line regress = slope*(southern_pd['lat']) + intercept regresstring = "y = " + str(round(slope, 2)) + 'x + ' + str(round(intercept, 2)) plt.annotate(regresstring, xy=(-20,48), fontsize=13, color="blue") plt.plot(southern_pd['lat'],regress,"-",color="blue") plt.scatter(southern_pd['lat'],southern_pd['temp'], marker = "o", facecolors = 'green', edgecolor='black') plt.xlabel('Latitude') plt.ylabel('Max Temperature (F)') plt.title('Southern Hemisphere - Max Temp vs. Latitude Linear Regression') plt.show() plt.savefig("output_data/Southern_Hemisphere_Max_Temp_vs_Latitude_Linear_Regression.png") # #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression # establish linear regression values = linregress(city_data ("lat","temp".values) slope, intercept, rValue, pValue, stderror = linregress(northern_pd['lat'], northern_pd['humidity']) print(f'The St. Pearson Correlation Coefficient between both factors is {round(rValue, 2)}') # linear regression line regress = slope*(northern_pd['lat']) + intercept regresstring = "y = " + str(round(slope, 2)) + 'x + ' + str(round(intercept, 2)) plt.annotate(regresstring, xy=(45,10), fontsize=13, color="blue") plt.plot(northern_pd['lat'],regress,"-",color="blue") plt.scatter(northern_pd['lat'],northern_pd['humidity'], marker = "o", facecolors = 'green', edgecolor='black') plt.xlabel('Latitude') plt.ylabel('Humidity (%)') plt.title('Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression') plt.show() plt.savefig("output_data/Northern_Hemisphere_Humidity_vs_Latitude_Linear_Regression.png") # #### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression # + # establish linear regression values slope, intercept, rValue, pValue, stderror = linregress(southern_pd['lat'], southern_pd['humidity']) print(f'The St. Pearson Correlation Coefficient between both factors is {round(rValue, 2)}') # linear regression line regress = slope*(southern_pd['lat']) + intercept regresstring = "y = " + str(round(slope, 2)) + 'x + ' + str(round(intercept, 2)) plt.annotate(regresstring, xy=(-50,20), fontsize=13, color="blue") plt.plot(southern_pd['lat'],regress,"-",color="blue") plt.scatter(southern_pd['lat'],southern_pd['humidity'], marker = "o", facecolors = 'green', edgecolor='black') plt.xlabel('Latitude') plt.ylabel('Humidity (%)') plt.title('Southern Hemisphere - Humidity (%) vs. 
Latitude Linear Regression') plt.annotate(regresstring, xy=(5,30), fontsize=13, color="blue") plt.show() plt.savefig("output_data/Southern_Hemisphere_Humidity_vs_Latitude_Linear_Regression.png") # - # #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression establish_linear_regression_values = linregress(city_data["lat"], city_data["cloudiness"]) slope, intercept, rValue, pValue, stderror = linregress(northern_pd['lat'], northern_pd['cloudiness']) print(f'The St. Pearson Correlation Coefficient between both factors is {round(rValue, 2)}') # linear regression line regress = slope*(northern_pd['lat']) + intercept regresstring = "y = " + str(round(slope, 2)) + 'x + ' + str(round(intercept, 2)) plt.annotate(regresstring, xy=(50,52), fontsize=13, color="blue") plt.plot(northern_pd['lat'],regress,"-",color="blue") plt.scatter(northern_pd['lat'], northern_pd['cloudiness'], marker = "o", facecolors = 'green', edgecolor='black') plt.xlabel('Latitude') plt.ylabel('Cloudiness (%)') plt.title('Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression') plt.show() plt.savefig("output_data/Northern_Hemisphere_Cloudiness_vs_Latitude_Linear_Regression.png") # #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression # establish linear regression values slope, intercept, rValue, pValue, stderror = linregress(southern_pd['lat'], southern_pd['cloudiness']) print(f'The St. Pearson Correlation Coefficient between both factors is {round(rValue, 2)}') # linear regression line regress = slope*(southern_pd['lat']) + intercept regresstring = "y = " + str(round(slope, 2)) + 'x + ' + str(round(intercept, 2)) plt.annotate(regresstring, xy=(-57,50), fontsize=13, color="blue") plt.plot(southern_pd['lat'],regress,"-",color="blue") plt.scatter(southern_pd['lat'],southern_pd['cloudiness'], marker = "o", facecolors = 'green', edgecolor='black') plt.xlabel('Latitude') plt.ylabel('Cloudiness') plt.title('Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression') plt.show() plt.savefig("output_data/Southern_Hemisphere_Cloudiness_vs_Latitude_Linear_Regression.png") # #### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression # establish linear regression values = linregress(city_data ("lat","wind".values) slope, intercept, rValue, pValue, stderror = linregress(northern_pd['lat'], northern_pd['wind']) print(f'The St. Pearson Correlation Coefficient between both factors is {round(rValue, 2)}') # linear regression line regress = slope*(northern_pd['lat']) + intercept regresstring = "y = " + str(round(slope, 2)) + 'x + ' + str(round(intercept, 2)) plt.annotate(regresstring, xy=(45,30), fontsize=13, color="blue") plt.plot(northern_pd['lat'],regress,"-",color="blue") plt.scatter(northern_pd['lat'],northern_pd['wind'], marker = "o", facecolors = 'green', edgecolor='black') plt.xlabel('Latitude') plt.ylabel('Wind Speed (mph)') plt.title('Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression') plt.show() plt.savefig("output_data/Northern_Hemisphere_Wind_Speed_vs_Latitude_Linear_Regression.png") # #### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression # establish linear regression values = linregress(city_data ("lat","wind".values) slope, intercept, rValue, pValue, stderror = linregress(southern_pd['lat'], southern_pd['wind']) print(f'The St. 
# Summary

# Max temperature increases as latitude approaches the equator
# Humidity and Cloudiness appear random in both hemispheres
# Wind speed is fairly constant across the northern latitudes
# Max temp and Latitude are strongly correlated in the Northern Hemisphere

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Histograms on numerical datasets

# Histograms are a very handy way to visualize numerical data, compare different values of a specific field, and thus extract conclusions about the dataset. We are going to study `diffprivlib`'s method of creating a histogram, and its accuracy as the epsilon factor changes.
#
# The IBM DP library offers a differentially private way to create histograms. The difference from the simple queries that we tested earlier is that now __truncated geometric__ noise is added in order to satisfy DP.

# ## The identity of the dataset: Employee salaries

# The dataset contains sensitive data regarding employees' salaries in the city of Baltimore, along with other facts about the members of the dataset.
#
# The columns contained are:
# - __Name__
# - __Job Title__
# - __ID__
# - __Job description__
# - __Hire date__
# - __Annual earnings__
# - __Gross earnings__

# ## Import libraries

import pandas as pd
import diffprivlib as dp
import numpy as np
import math
import matplotlib.pyplot as plt
import qif

# ## Load the dataset

DATASET = "./employee_salaries.csv"
df = pd.read_csv(DATASET)

# Let's take a look at the dataset info to see if everything is alright.

df.info()

df.head()

# ## Privacy Setup

# ### Target column

# Throughout our tests, we are going to apply noise to two target columns: the __annual earnings__ and the __gross earnings__.

# ### Bounds definition

# Following IBM's requirements on bounds selection, we are going to define the bounds for the DP algorithms ourselves, by guessing plausible minimum and maximum values.

annual_earnings_range = (0, 300000)
annual_gross_range = (0, 300000)

# ## Histograms

# We are going to run various histograms, in order to observe the effect of applying DP to our dataset.
#
# A histogram is a graphical representation that organizes a group of data points into user-specified ranges, called bins. The `numpy` function takes as input the data and the pre-specified bins, and returns the distribution of the points across the bins.
#
# By default, those bins are chosen from the lowest and the highest values of the dataset. Since we are applying DP, we already know that such information leakage is unacceptable: we must predefine the bins from what we believe the bounds to be, independently of our knowledge of the dataset.
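# For example, a minimal sketch of the difference (using the (0, 300000) bounds guessed above, which
# are an assumption about the data rather than something computed from it; `dp_safe_edges` is just an
# illustrative name):

# data-independent bin edges: safe under DP, since they reveal nothing about the records
dp_safe_edges = np.linspace(annual_earnings_range[0], annual_earnings_range[1], 20)

# by contrast, letting numpy derive the edges would leak the true minimum and maximum of the data
# leaky_edges = np.histogram_bin_edges(df['AnnualSalary'], bins=20)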
#
# We will see further details as we move on.

# ### Accurate testing

# We know by now that the noise inserted into the query by the DP algorithms is drawn from probability distributions. Thus, our tests will not be reliable if we run the query only once: we might get extreme values of noise and, as a result, extreme differences between the real histogram and the noisy one.

# To smooth this out, we are going to define a function that takes the same arguments as the histogram function, runs the histogram query multiple times, and returns the mean of all of those runs.

def accurate_histogram(input_list, bins, epsilon, bounds_range):
    # initialize a list for the accumulated histogram results
    result = [0] * (bins + 1)
    for i in range(1, 101):
        # compute the i-th histogram
        new_list = dp.tools.histogram(input_list, bins = bins, epsilon = epsilon, range = bounds_range)[0]
        # compute the bins to return, but only once
        if (i == 1):
            new_bins = dp.tools.histogram(input_list, bins = bins, epsilon = epsilon, range = bounds_range)[1]
        # accumulate by adding the running total and the new histogram
        result = [sum(x) for x in zip(result, new_list)]
    # finally return the mean of the 100 histograms
    return [(math.floor(a / 100)) for a in result], new_bins

# ### Accuracy error

# A handy way to compute the accuracy error is to take the __l1-norm__ of the vector of per-bin errors. Thus, the total error is given by:
#
# $error = ||dp - non\_dp||_1$ ,
# where dp and non-dp are the result vectors of the DP and non-DP queries respectively.
#
# The final accuracy error is obtained as the mean over multiple executions of the DP histogram.

# +
def accuracy_error(non_dp_vector, input_list, bins, epsilon, bounds_range):
    # initialize the accuracy error
    error = 0
    # create many histograms
    for i in range(100):
        # compute the i-th histogram
        dp_vector = dp.tools.histogram(input_list, bins = bins, epsilon = epsilon, range = bounds_range)[0]
        # increase each time by the l1-norm of the per-bin error vector
        error += np.linalg.norm(abs(dp_vector - non_dp_vector), ord = 1)
    # finally return the mean over the 100 histograms
    return error / 100

def euclid(x, y):
    # ground distance
    return abs(x-y)

kant = qif.metric.kantorovich(euclid)  # distance on distributions

def kant_error(non_dp_vector, input_list, bins, epsilon, bounds_range):
    # initialize the accuracy error
    error = 0
    # create many histograms
    for i in range(100):
        # compute the i-th histogram
        dp_vector = dp.tools.histogram(input_list, bins = bins, epsilon = epsilon, range = bounds_range)[0]
        # increase each time by the Kantorovich distance between the two histograms
        error += kant(dp_vector, non_dp_vector)
    # finally return the mean over the 100 histograms
    return error / 100
# -

# ### Histogram on annual earnings

# We are going to study our dataset by creating a histogram of the annual earnings. Let's start with a normal histogram, without differential privacy.

# We are not going to let numpy define our bins for us: we want the non-DP histogram to be on the same scale as the DP one that we will define later, whose minimum and maximum are the bounds we have already defined.
# +
# plot initialization
fig, axs = plt.subplots(1, 2, sharey = True, figsize = (20,6))
fig.subplots_adjust(hspace = 0.2)

# create custom bins
bins = np.linspace(0, 300000, 20)[1:]

non_dp_hist, dummy = np.histogram(df['AnnualSalary'].tolist(), bins = bins)
axs[0].hist(bins[:-1], len(non_dp_hist), weights = non_dp_hist)

hist, dummy = accurate_histogram(df['AnnualSalary'].tolist(), bins, 1, annual_earnings_range)
axs[1].hist(bins[:-1], len(hist), weights = hist)

axs[0].set_title('Non-Private Histogram')
axs[1].set_title('Private Histogram')
plt.savefig('simple_hists')
# -

# Let's now plot the differentially private histogram for the same data.

# +
hist, dummy = accurate_histogram(df['AnnualSalary'].tolist(), bins, 1, annual_earnings_range)
plt.hist(bins[:-1], len(hist), weights = hist)
# -

# The results are more than satisfying to the naked eye. This is probably due to the large dataset size: we are not able to spot small changes. In order to do so, we are going to check our error using the accuracy error function that we have defined.

# +
acc_err = accuracy_error(non_dp_hist, df['AnnualSalary'].tolist(), bins, 1, annual_earnings_range)
kant_err = kant_error(non_dp_hist, df['AnnualSalary'].tolist(), bins, 1, annual_earnings_range)
print("Euclidean distance: " + str(acc_err) + "\nKantorovich distance: " + str(kant_err))
# -

# The Euclidean distance error tells us how many entries were wrongly classified into a bin (on average) when the differentially private query was run. So, fewer than 20 people out of 13,000 were wrongly classified, while their privacy was preserved. This is quite a good trade-off!

# In order to decide on the best epsilon value, we are going to apply the same technique as in the epsilon measurements earlier: we examine the behavior of the accuracy error as we increase the epsilon value.

# +
# # plot initialization
# fig, axs = plt.subplots(2, 1, sharey = True, figsize = (12,10))
# fig.subplots_adjust(hspace = 0.2)

# epsilon values we are going to run the query with
epsilon = [i/10 + 0.2 for i in range(0,50)]

# list of the accuracy results, empty in the beginning
results = []
kant_results = []

# loop through different epsilons
for e in epsilon:
    res = accuracy_error(non_dp_hist, df['AnnualSalary'].tolist(), bins, e, annual_earnings_range)
    # kant_res = kant_error(non_dp_hist, df['AnnualSalary'].tolist(), bins, e, annual_earnings_range)
    results.append(res)
    # kant_results.append(kant_res)

# plot the result
plt.figure(figsize = (12,8))
plt.plot(epsilon, results, 'r', label='Euclidean accuracy error')
# axs[1].plot(epsilon, kant_results, label='Kantorovich accuracy error')
# plt.set_title('Euclidean accuracy error')
# axs[1].set_title('Kantorovich accuracy error')
# axs[0].set_ylim((0, 100))
# axs[1].set_ylim((0, 500))
plt.legend()
plt.xlabel('Epsilon')
plt.ylabel('Accuracy Error')
# axs[1].set_xlabel('Epsilon')
# axs[1].set_ylabel('Accuracy Error')
plt.savefig('hist_metrics.png')
plt.show()
# -

# We observe that the curve behaves as expected from both the theory and our previous experience: the bigger epsilon gets, the smaller the accuracy error (but the higher the privacy loss).
#
# Even for very small epsilon values ($\leq 1$), fewer than 100 out of 13,000 people are wrongly classified into a bin.
#
# Of course, if we increase the number of bins (and thus the precision of the classification required), the error will increase.
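# As a quick sanity check of that last claim (a sketch, not part of the original analysis; the exact
# numbers vary from run to run, and `fine_bins`/`fine_non_dp` are just illustrative names), we can
# recompute the mean Euclidean error for increasingly fine binnings at a fixed epsilon of 1:

for n_edges in (10, 20, 50):
    fine_bins = np.linspace(0, 300000, n_edges)[1:]
    fine_non_dp, _ = np.histogram(df['AnnualSalary'].tolist(), bins = fine_bins)
    err = accuracy_error(fine_non_dp, df['AnnualSalary'].tolist(), fine_bins, 1, annual_earnings_range)
    print(f"{n_edges} bin edges -> mean Euclidean error {round(err, 1)}")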
# ### Histogram on gross earnings

# Last but not least, let's apply the same techniques in order to plot a differentially private histogram for the gross earnings of the employees. We are not going to study the accuracy results, as they would be very similar to the ones shown earlier.

# +
# create custom bins
bins = np.linspace(0, 500000, 20)[1:]

hist, dummy = accurate_histogram(df['Gross'].tolist(), bins, 1, annual_gross_range)
plt.hist(bins[:-1], len(hist), weights = hist)
# -

# ## Histogram queries in theory

# According to `The Algorithmic Foundations of Differential Privacy (2014)`, the result of a histogram query is very sensitive to the choice of bins and bounds, so a slight change to the bounds can be critical for the result.
#
# The authors suggest using noise generated by the Laplace mechanism, but with a twist. In detail, they note the following:
#
#
# "_In the special (but common) case in which the queries are structurally disjoint we can do much better — we do not necessarily have to let the noise scale with the number of queries. An example is the histogram query. In this type of query the universe $N^X$ is partitioned into cells, and the query asks how many database elements lie in each of the cells. Because the cells are disjoint, the addition or removal of a single database element can affect the count in exactly one cell, and the difference to that cell is bounded by 1, so histogram queries have sensitivity 1 and can be answered by adding independent draws from $Lap(\frac{1}{\epsilon})$ to the true count in each cell._"
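# To make the quoted recipe concrete, here is an illustrative sketch of the "add $Lap(\frac{1}{\epsilon})$
# to each disjoint cell" idea using plain numpy. It is not what diffprivlib does internally (as noted
# above, its histogram uses truncated geometric noise); `laplace_histogram_sketch` is a hypothetical
# helper written only to mirror the text:

def laplace_histogram_sketch(values, bin_edges, epsilon):
    # each cell is a disjoint counting query with sensitivity 1
    counts, _ = np.histogram(values, bins = bin_edges)
    # independent Laplace draws with scale 1/epsilon, one per cell
    noise = np.random.laplace(loc = 0.0, scale = 1.0 / epsilon, size = counts.shape)
    return counts + noise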
# ## Conclusions

# After plotting a very basic histogram using `diffprivlib`'s functions, it is clear that the histograms it creates can be trusted for extracting statistical information. They appear to be accurate even with an extremely naive selection of bounds, provided the query is run multiple times.

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Data Leakage

# ![6_Data_Leakage](image/1.jpg)

# ![6_Data_Leakage](image/2.jpg)

# ![6_Data_Leakage](image/3.jpg)

# ![6_Data_Leakage](image/4.jpg)

# ![6_Data_Leakage](image/5.jpg)

# ![6_Data_Leakage](image/6.jpg)

# ![6_Data_Leakage](image/7.jpg)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Unit 3 - Lesson 3 - Drill - Ridge vs. Lasso Regression

# + hide_input=false
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import math
import seaborn as sns
import sklearn
from sklearn import linear_model
from sklearn import preprocessing
# %matplotlib inline
sns.set_style('white')
# -

import warnings
warnings.filterwarnings('ignore')

# +
# Load the data. Drop the index column and any rows with missing data.
df = pd.read_csv(
    'https://vincentarelbundock.github.io/Rdatasets/csv/ISLR/Default.csv'
).iloc[:,1:].dropna()

# Recode strings to numeric.
df['default'] = np.where(df['default']=='Yes', 1, 0)
df['student'] = np.where(df['student']=='Yes', 1, 0)
names = df.columns
df = pd.DataFrame(preprocessing.scale(df), columns=names)

# Define the training and test sizes.
trainsize = int(df.shape[0] / 2)
df_test = df.iloc[trainsize:, :].copy()
df_train = df.iloc[:trainsize, :].copy()

Y_train = df_train['income'].values.reshape(-1, 1)
X_train = df_train.loc[:, ~(df_train.columns).isin(['income'])]

# Make some new features to capture potential interaction and polynomial
# relationships between balance, student status, and default.
df_train['balance_student'] = df_train['balance'] * df_train['student']
df_train['balance_default'] = df_train['balance'] * df_train['default']
df_train['student_default'] = df_train['student'] * df_train['default']
df_train['balance_sqrt'] = (df_train['balance'] + 100) ** .5
df_train['balance2'] = (df_train['balance'] + 100) ** 2
df_train['balance3'] = (df_train['balance'] + 100) ** 3

X_train2 = df_train.loc[:, ~(df_train.columns).isin(['income'])]

# Test the simpler model with smaller coefficients.
Y_test = df_test['income'].values.reshape(-1, 1)
X_test = df_test.loc[:, ~(df_test.columns).isin(['income'])]

# Test the more complex model with larger coefficients.
df_test['balance_student'] = df_test['balance'] * df_test['student']
df_test['balance_default'] = df_test['balance'] * df_test['default']
df_test['student_default'] = df_test['student'] * df_test['default']
df_test['balance_sqrt'] = (df_test['balance'] + 100) ** .5
df_test['balance2'] = (df_test['balance'] + 100) ** 2
df_test['balance3'] = (df_test['balance'] + 100) ** 3
X_test2 = df_test.loc[:, ~(df_test.columns).isin(['income'])]

# +
print('Show parameter values without penalization (alpha/lambda equal to zero)')
# Small number of parameters.
lass = linear_model.Lasso(alpha=0)
lassfit = lass.fit(X_train, Y_train)
print('R² for the model with few features:')
print(lass.score(X_train, Y_train))
origparams = np.append(lassfit.coef_, lassfit.intercept_)
print('\nParameter estimates for the model with few features:')
print(origparams)

# Large number of parameters.
lassBig = linear_model.Lasso(alpha=0)
lassBig.fit(X_train2, Y_train)
print('\nR² for the model with many features:')
print(lassBig.score(X_train2, Y_train))
origparams = np.append(lassBig.coef_, lassBig.intercept_)
print('\nParameter estimates for the model with many features:')
print(origparams)

# +
# Small number of parameters.
lass = linear_model.Lasso(alpha=0.35)
lassfit = lass.fit(X_train, Y_train)
print('R² for the model with few features:')
print(lass.score(X_train, Y_train))
origparams = np.append(lassfit.coef_, lassfit.intercept_)
print('\nParameter estimates for the model with few features:')
print(origparams)

# Large number of parameters.
lassBig = linear_model.Lasso(alpha=0.35)
lassBig.fit(X_train2, Y_train)
print('\nR² for the model with many features:')
print(lassBig.score(X_train2, Y_train))
origparams = np.append(lassBig.coef_, lassBig.intercept_)
print('\nParameter estimates for the model with many features:')
print(origparams)
# -

# Checking predictive power using the test set:

# +
print(lass.score(X_test, Y_test))
print(lassBig.score(X_test2, Y_test))
# -

# ## Regularization parameter: Lasso
#
# The $\lambda$ for lasso can vary between 0 (no penalty, acts like OLS) and infinity. If $\lambda$ is too large, all parameters will be set to zero.
#
# Create a plot below of how $R^2$ varies across different values of $\lambda$ for ridge and lasso regression. Use logic and code similar to the ridge regression demonstration above, and base your plot on the X_train2 feature set.
#
# Do lasso and ridge yield the same $R^2$ for a given lambda value?
#
# Submit your work and discuss the results with your mentor.
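# As a reminder of what is being compared (scikit-learn's parameterisation of the two penalties, stated
# here for reference rather than taken from the drill text):
#
# $$\text{Ridge:}\;\; \min_w \; \|y - Xw\|_2^2 + \lambda\|w\|_2^2
# \qquad\qquad
# \text{Lasso:}\;\; \min_w \; \frac{1}{2n}\|y - Xw\|_2^2 + \lambda\|w\|_1$$
#
# so for the same $\lambda$ the two models optimize differently scaled objectives and will not, in
# general, produce the same $R^2$.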
# +
# Your code here
ridge_scores = []
lasso_scores = []

for lambd in np.arange(0, 10, 0.01):
    ridgeregrBig = linear_model.Ridge(alpha=lambd, fit_intercept=False)
    ridgeregrBig.fit(X_train2, Y_train)
    print('Ridge lambda = ', lambd)
    ridge_score = ridgeregrBig.score(X_train2, Y_train)
    ridge_scores.append(ridge_score)
    print('R^2 for Ridge: ', ridge_score)

    lassBig = linear_model.Lasso(alpha=lambd)
    lassBig.fit(X_train2, Y_train)
    print('Lasso lambda = ', lambd)
    lasso_score = lassBig.score(X_train2, Y_train)
    lasso_scores.append(lasso_score)
    print('R^2 for Lasso: ', lasso_score)
    print()

# +
plt.figure(figsize=(20,10))
plt.semilogx(np.arange(0, 10, 0.01), ridge_scores, 'r-', label='Ridge')
plt.semilogx(np.arange(0, 10, 0.01), lasso_scores, 'b+', label='Lasso')
plt.title('R^2 Scores for Ridge and Lasso Regression Models')
plt.xlabel('Lambda (log scale)')
plt.ylabel('R^2')
plt.legend()
plt.show()
# -

# # Summary
#
# Lasso and ridge regression are both clear improvements on OLS regression. Ridge regression is an excellent tool to use with correlated features, while lasso is an efficient method of feature selection when dealing with an unmanageably large feature space.

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

n_samples = 10000  # number of random samples to use
n_bins = 100       # number of bins for our histogram
sigma = 1.0        # rms width of the gaussian
mu = 0.0           # mean of the gaussian

x = np.random.normal(mu, sigma, n_samples)
print(x.min())
print(x.max())

def gaussian(x, mu, sigma):
    return 1./(2.0*np.pi*sigma**2)**0.5 * np.exp(-0.5*((x-mu)/sigma)**2)

fig = plt.figure(figsize=(7,7))
y_hist, x_hist, ignored = plt.hist(x, bins=n_bins, range=[-5,5], density=True)
xx = np.linspace(-5.0, 5.0, 1000)
plt.plot(xx, gaussian(xx, mu, sigma), color="red")
plt.ylim([0, 0.5])
plt.xlim([-5, 5])
plt.gca().set_aspect(20)
plt.xlabel('x')
plt.ylabel('y(x)')
plt.show()

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # GPyOpt: Modular Bayesian Optimization
#
# ### Written by , Amazon Research Cambridge
#
#
# *Last updated, July 2017.*

# In the [Introduction Bayesian Optimization GPyOpt](./GPyOpt_reference_manual.ipynb) we showed how GPyOpt can be used to solve optimization problems with some basic functionalities. The object
#
# ```
# GPyOpt.methods.BayesianOptimization
# ```
# is used to initialize the desired functionalities, such as the acquisition function, the initial design or the model. In some cases we want to have control over those objects, and we may want to replace some element in the loop without having to integrate the new elements into the base code framework. This is now possible through the modular implementation of the package using the
#
# ```
# GPyOpt.methods.ModularBayesianOptimization
# ```
#
# class. In this notebook we are going to show how to use the backbone of GPyOpt to run a Bayesian optimization algorithm in which we use our own acquisition function. In particular we are going to use the Expected Improvement integrated over the jitter parameter.
That is # # $$acqu_{IEI}(x;\{x_n,y_n\},\theta) = \int acqu_{EI}(x;\{x_n,y_n\},\theta,\psi) p(\psi;a,b)d\psi $$ # where $p(\psi;a,b)$ is, in this example, the distribution [$Beta(a,b)$](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.random.beta.html). # # This acquisition is not available in GPyOpt, but we will implement and use in this notebook. The same can be done for other models, acquisition optimizers etc. # # As usual, we start loading GPy and GPyOpt. # %pylab inline import GPyOpt import GPy # In this example we will use the Branin function as a test case. # --- Function to optimize func = GPyOpt.objective_examples.experiments2d.branin() func.plot() # Because we are won't use the pre implemented wrapper, we need to create the classes for each element of the optimization. In total we need to create: # # * Class for the **objective function**, objective = GPyOpt.core.task.SingleObjective(func.f) # * Class for the **design space**, space = GPyOpt.Design_space(space =[{'name': 'var_1', 'type': 'continuous', 'domain': (-5,10)}, {'name': 'var_2', 'type': 'continuous', 'domain': (1,15)}]) # * Class for the **model type**, model = GPyOpt.models.GPModel(optimize_restarts=5,verbose=False) # * Class for the **acquisition optimizer**, aquisition_optimizer = GPyOpt.optimization.AcquisitionOptimizer(space) # * Class for the **initial design**, initial_design = GPyOpt.experiment_design.initial_design('random', space, 5) # * Class for the **acquisition function**. Because we want to use our own acquisition, we need to implement a class to handle it. We will use the currently available Expected Improvement to create an integrated version over the jitter parameter. Samples will be generated using a beta distribution with parameters a and b as it is done using the default [numpy beta function](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.random.beta.html). # + from GPyOpt.acquisitions.base import AcquisitionBase from GPyOpt.acquisitions.EI import AcquisitionEI from numpy.random import beta class jitter_integrated_EI(AcquisitionBase): analytical_gradient_prediction = True def __init__(self, model, space, optimizer=None, cost_withGradients=None, par_a=1, par_b=1, num_samples= 10): super(jitter_integrated_EI, self).__init__(model, space, optimizer) self.par_a = par_a self.par_b = par_b self.num_samples = num_samples self.samples = beta(self.par_a,self.par_b,self.num_samples) self.EI = AcquisitionEI(model, space, optimizer, cost_withGradients) def acquisition_function(self,x): acqu_x = np.zeros((x.shape[0],1)) for k in range(self.num_samples): self.EI.jitter = self.samples[k] acqu_x +=self.EI.acquisition_function(x) return acqu_x/self.num_samples def acquisition_function_withGradients(self,x): acqu_x = np.zeros((x.shape[0],1)) acqu_x_grad = np.zeros(x.shape) for k in range(self.num_samples): self.EI.jitter = self.samples[k] acqu_x_sample, acqu_x_grad_sample =self.EI.acquisition_function_withGradients(x) acqu_x += acqu_x_sample acqu_x_grad += acqu_x_grad_sample return acqu_x/self.num_samples, acqu_x_grad/self.num_samples # - # Now we initialize the class for this acquisition and we plot the histogram of the used samples to integrate the acquisition. 
acquisition = jitter_integrated_EI(model, space, optimizer=aquisition_optimizer, par_a=1, par_b=10, num_samples=200) xx = plt.hist(acquisition.samples,bins=50) # * Finally we create the class for the **type of evaluator**, # --- CHOOSE a collection method evaluator = GPyOpt.core.evaluators.Sequential(acquisition) # With all the classes on place,including the one we have created for this example, we can now create the **Bayesian optimization object**. bo = GPyOpt.methods.ModularBayesianOptimization(model, space, objective, acquisition, evaluator, initial_design) # And we run the optimization. max_iter = 10 bo.run_optimization(max_iter = max_iter) # We plot the acquisition and the diagnostic plots. bo.plot_acquisition() bo.plot_convergence() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + id="PVf8fqO7wQoZ" # + id="vnlllS49wV8T" # This cell contains the code from %%%%%%%%%%%%%%%%%%%%%%%%%%%% # that defines the functions compute_warped_image_multiNC # which we use for composing maps and identity_map_multiN which we use # to get an identity map. import torch from torch.autograd import Function from torch.nn import Module def show(x): while len(x.shape) > 2: x = x[0] plt.imshow(x.detach().cpu()) #plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 86, "referenced_widgets": ["8015e348650a47a1823c135f4ca49703", "730c5d382e164e2786e2e314d77cd204", "51daaf17ebb04c39b5d0d9e885c83144", "ee628d85981747da8c7bd81a2e487049", "51d9b1257d324809afa068dd7414a1cd", "", "", "878a5e5b02d94ca8972668b34e42175f", "", "cd193585601e420b941f602f55de3b46", "7972f69c3ede41e1825eab68953494cd"]} id="5lUk4V8RmEEM" outputId="b4fa2836-7a6b-4e53-f4cc-8a0d07e7dd31" #First, we download the MNIST dataset and store it as a dataset we can train against. 
import torch import torch.nn as nn import torch.nn.functional as F import numpy as np import torchvision import matplotlib.pyplot as plt import torch.optim as optim BATCH_SIZE = 128 def get_dataset(split): ds = torch.utils.data.DataLoader( torchvision.datasets.MNIST("./files/", transform=torchvision.transforms.ToTensor(), download=True, train=(split == "train") ), batch_size=500 ) images = [] for _, batch in enumerate(ds): label = np.array(batch[1]) batch_nines = label ==5 images.append(np.array(batch[0])[batch_nines]) images = np.concatenate(images) ds = torch.utils.data.TensorDataset(torch.Tensor(images)) d1, d2 = (torch.utils.data.DataLoader(ds, batch_size=128, shuffle=True, ) for _ in (1,1)) return d1, d2 d1_mnist, d2_mnist = get_dataset("train") d1_mnist_test, d2_mnist_test = get_dataset("test") # + id="hS-kbrYxrADw" N = 28 BATCH_SIZE = 128 def get_dataset_triangles(split): x, y = np.mgrid[0:1:N * 1j, 0:1:N * 1j] x = np.reshape(x, (1, N, N)) y = np.reshape(y, (1, N, N)) cx = np.random.random((6000, 1, 1)) * .2 + .4 cy = np.random.random((6000, 1, 1)) * .2 + .4 r = np.random.random((6000, 1, 1)) * .2 + .2 theta = np.random.random((6000, 1, 1)) * np.pi * 2 isTriangle = np.random.random((6000, 1, 1)) > .5 isHollow = np.random.random((6000, 1, 1)) > .5 triangles = (np.sqrt((x - cx)**2 + (y - cy)**2) - r * np.cos(np.pi / 3) / np.cos((np.arctan2(x - cx, y - cy) + theta) % (2 * np.pi / 3) - np.pi / 3)) triangles = np.tanh(-40 * triangles) circles = np.tanh(-40 * (np.sqrt((x - cx)**2 + (y - cy)**2) - r) ) images = isTriangle * triangles + (1 - isTriangle) * circles hollow = 1 - images **2 filled = (images + 1) / 2 images = isHollow * hollow + (1 - isHollow) * filled ds = torch.utils.data.TensorDataset(torch.Tensor(np.expand_dims(images, 1))) d1, d2 = (torch.utils.data.DataLoader(ds, batch_size=BATCH_SIZE, shuffle=True, ) for _ in (1,1)) return d1, d2 d1_triangles, d2_triangles = get_dataset_triangles("train") d1_triangles_test, d2_triangles_test = get_dataset_triangles("test") # + id="yHqe6G9bmeUc" #Next, we define the neural network architectures that we will pair with our #inverse consistency loss class RegisNetNoPad(nn.Module): def __init__(self): super(RegisNetNoPad, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(11, 10, kernel_size=5) self.conv3 = nn.Conv2d(21, 10, kernel_size=5) self.conv4 = nn.Conv2d(31, 10, kernel_size=5) self.conv5 = nn.Conv2d(41, 10, kernel_size=5) self.conv6 = nn.Conv2d(51, 64, kernel_size=5) def forward(self, x): x = torch.nn.functional.pad(x, [12] * 4) x = torch.cat([x[:, :, 2:-2, 2:-2], F.relu(self.conv1(x))], 1) x = torch.cat([x[:, :, 2:-2, 2:-2], F.relu(self.conv2(x))], 1) x = torch.cat([x[:, :, 2:-2, 2:-2], F.relu(self.conv3(x))], 1) x = torch.cat([x[:, :, 2:-2, 2:-2], F.relu(self.conv4(x))], 1) x = torch.cat([x[:, :, 2:-2, 2:-2], F.relu(self.conv5(x))], 1) out = self.conv6(x) ##normalize #out_norms = torch.sqrt(torch.sum(out**2, 1, keepdim=True)) #out = out / (out_norms + .0001) return out * 10 # + id="GFtDI6unVKH4" net = RegisNetNoPad() # + id="_IYIopHin3qU" def train(net, d1, d2): optimizer = torch.optim.Adam(net.parameters(), lr=.0001) net.train() net.cuda() loss_history = [] print("[", end="") for epoch in range(400): print("-", end="") if (epoch + 1) % 50 == 0: print("]", end="\n[") for A, B in list(zip(d1, d2)): loss_ = pass_(A, B, net, optimizer) if loss_ is not None: loss = loss_ loss_history.append([loss]) print(loss) print("]") return loss_history def pass_(A, B, net, optimizer): if A[0].size()[0] == 
BATCH_SIZE: image_A = A[0].cuda() image_B = B[0].cuda() optimizer.zero_grad() nA = net(image_A)[::, ::].reshape(-1, BATCH_SIZE, N * N) nB = net(image_B)[::, ::].reshape(-1, BATCH_SIZE, N * N) cc = torch.einsum("icn,ick->ink", nA, nB) cc_A = torch.softmax(cc, axis=1) cc_B = torch.softmax(cc, axis=2) loss = cc_A * cc_B loss = torch.clamp(loss, max=.3) loss = -torch.sum(loss) / BATCH_SIZE / (N * N) loss.backward() optimizer.step() return loss.detach() # - l = train(net, d1_mnist, d2_mnist) A = list(d1_mnist)[0][0][:1] B = list(d1_mnist)[1][0][:1] plt.subplot(1, 2, 1) show(B) plt.subplot(1, 2, 2) show(A) plt.show() net.cpu() for i in range(30): plt.subplot(5, 6, i + 1) plt.xticks([]) plt.yticks([]) show(net(A)[0, i]) #plt.colorbar() # + nA = net(A).reshape(-1, 64, N * N) nB = net(B).reshape(-1, 64, N * N) cc = torch.einsum("icn,ick->ink", nA, nB) cc_A = torch.softmax(cc, axis=1) cc_B = torch.softmax(cc, axis=2) loss = cc_A * cc_B show(loss) plt.colorbar() net(A).shape # - # + i, j = 10, 12 show(cc_A.reshape([N] * 4)[i, j]) plt.colorbar() def argmax_2d(arr): ind = np.argmax(arr) return [ind % arr.shape[0], ind // arr.shape[0]] import scipy.ndimage.measurements #x, y = argmax_2d(cc_A.reshape([28] * 4)[:, :, i, j]) y, x = scipy.ndimage.measurements.center_of_mass(cc_A.reshape([N] * 4)[:, :, i, j].detach().numpy()) plt.scatter(x, y) reshaped = cc_A.reshape([N] * 4).detach().numpy() # + import scipy.ndimage grid = np.array([ [ #(argmax_2d(reshaped[i, j]) if (np.max(reshaped[i, j]) > .01) else [np.nan, np.nan]) scipy.ndimage.measurements.center_of_mass(reshaped[i, j].transpose()) for i in range(N)] for j in range(N) ]) grid.shape grid = grid.astype(float) #grid[:, :, 0] = scipy.ndimage.gaussian_filter(grid[:, :, 0], 1) #grid[:, :, 1] = scipy.ndimage.gaussian_filter(grid[:, :, 1], 1) grid = grid[3:-3, 3:-3] plt.plot(grid[:, :, 0], grid[:, :, 1]) plt.plot(grid[:, :, 0].transpose(), grid[:, :, 1].transpose()) plt.ylim(N, 0) plt.show() show(B) plt.scatter(grid[:, :, 0], grid[:, :, 1], c="red", s=100) plt.scatter(grid[:, :, 0], grid[:, :, 1], c=np.array(A[0, 0, 3:-3, 3:-3]).transpose(), s=30) plt.ylim(N, 0) plt.show() # - C show(torch.sum(loss, axis=1).reshape(N, N)) plt.colorbar() A.size() out_norms = torch.sqrt(torch.sum(net(A)**2, 1, keepdim=True)) show(out_norms) plt.colorbar() # + # torch.clamp? 
# - show(cc) plt.colorbar() scipy.ndimage.measurements.center_of_mass(np.array(cc_A.reshape([28] * 4)[:, :, i, j].detach())) show(cc_A.reshape([28] * 4)[:, :, i, j].cpu().detach()) plt.plot(cc[0, 0].detach()) show(cc_A) from sklearn.decomposition import PCA pca = PCA(n_components=180) pca.fit(cc_A.detach()[0]) pca.explained_variance_ratio_ plt.plot(np.cumsum(pca.explained_variance_ratio_)) plt.xlabel("eigenvector") plt.ylabel("Cumulative explained variance") torch.save(net.state_dict(), "tri_cir_hol.pth") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Load Data # + import pandas as pd import names from sklearn.neural_network import MLPClassifier from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn import preprocessing import random from pyqubo import Constraint, Array import neal from dwave.system import LeapHybridSampler import numpy as np from collections import Counter import matplotlib.pyplot as plt import sqlite3 import statsmodels.api as sm import statsmodels.formula.api as smf plt.rcParams["figure.figsize"] = (25,8) import warnings warnings.filterwarnings('ignore') plt.rcParams["figure.figsize"] = (25,10) plt.rcParams.update({'font.size': 22}) # + l20 = pd.read_csv('League_df_2020.csv') l21 = pd.read_csv('League_df_2021.csv') t20 = pd.read_csv('Tournament_df_2020.csv') t21 = pd.read_csv('Tournament_df_2021.csv') sal_21 = pd.read_excel('2022-student-research-case-study-player-data.xlsx', "2021 Salaries")[11:].dropna(axis=1) sal_21.columns = ['Player', 'Leagues', 'Squads', 'Nations', 'Positions', 'Salary'] sal_21.Player = sal_21.Player.str.replace('.', '').str.replace(' ','').str.replace('?','') sal_21 = sal_21.drop(['Squads', 'Positions', 'Leagues'], axis=1) sal_20 = pd.read_excel('2022-student-research-case-study-player-data.xlsx', "2020 Salaries")[11:].dropna(axis=1) sal_20.columns = ['Player', 'Leagues', 'Squads', 'Nations', 'Positions', 'Salary'] sal_20.Player = sal_20.Player.str.replace('.', '').str.replace(' ','').str.replace('?','') sal_20 = sal_20.drop(['Squads', 'Positions', 'Leagues'], axis=1) # + def fix_data_league(alice1): alice1[["Age", "Born", "Year.x"]] = alice1[["Age", "Born", "Year.x"]].astype(str) alice1['Nations'] = alice1['Nations'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Positions'] = alice1['Positions'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Leagues'] = alice1['Leagues'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Squads'] = alice1['Squads'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Age'] = alice1['Age'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Born'] = alice1['Born'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Year.x'] = alice1['Year.x'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Year'] = alice1['Year.x'] alice1 = alice1.drop(['Year.x', 'Year.y'], axis=1) return alice1 def fix_data_tourn(alice1): alice1[["Age", "Born", "Year"]] = alice1[["Age", "Born", "Year"]].astype(str) alice1['Nations'] = alice1['Nations'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Positions'] = alice1['Positions'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) 
alice1['Leagues'] = alice1['Leagues'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Age'] = alice1['Age'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Born'] = alice1['Born'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) alice1['Year'] = alice1['Year'].apply(lambda x: x.split(',')[1] if str.__contains__(x, ',') else x) return alice1 l20 = fix_data_league(l20) l21 = fix_data_league(l21) t20 = fix_data_tourn(t20) t21 = fix_data_tourn(t21) league2 = sal_21.sort_values('Player').drop_duplicates('Player', keep = 'last').merge(l21) tourn2 = t21.merge(sal_21, on = ['Player', "Nations"], how='left') league1 = sal_20.sort_values('Player').drop_duplicates('Player', keep = 'last').merge(l20) tourn1 = t20.merge(sal_20, on = ['Player', "Nations"], how='left') tourn1.Salary = tourn1.Salary.fillna(0) tourn2.Salary = tourn2.Salary.fillna(0) # + ##correlation plot## # p1 = league2[['Playing Time MP_Goalkeeping', # 'Playing Time Starts_Goalkeeping', 'Playing Time Min_Goalkeeping', # 'Playing Time 90s_Goalkeeping', 'Performance GA_Goalkeeping', # 'Performance GA90_Goalkeeping', 'Performance SoTA_Goalkeeping', # 'Performance Saves_Goalkeeping', 'Performance Save%_Goalkeeping', # 'W_Goalkeeping', 'D_Goalkeeping', 'L_Goalkeeping', # 'Performance CS_Goalkeeping', 'Performance CS%_Goalkeeping', # 'Performance PKatt_Goalkeeping', 'Penalty Kicks PKA_Goalkeeping', # 'Penalty Kicks PKsv_Goalkeeping', 'Penalty Kicks PKm_Goalkeeping', # 'Penalty Kicks Save%_Goalkeeping']].corr().abs() # matrix = np.triu(p1) # sns.heatmap(p1, # xticklabels=p1.columns.values, # yticklabels=p1.columns.values, annot=True, mask=matrix) # plt.show() # + # def adjust_stats_by_league(league2): # gb1 = league2.groupby('Player').max() # gb2 = league2.groupby('Player').min() # df_roster = pd.DataFrame() # for i in league2.Positions.unique(): # gb1_i = gb1.loc[i] # gb2_i = gb2.loc[i] # league2_i = league2[league2.Positions == i] # num_cols = ['90s_Shooting', 'Gls_Shooting', 'Standard Sh_Shooting', # 'Standard SoT_Shooting', 'Standard SoT%_Shooting', # 'Standard Sh/90_Shooting', 'Standard SoT/90_Shooting', # 'Standard G/Sh_Shooting', 'Standard G/SoT_Shooting', # 'Standard Dist_Shooting', 'Standard FK_Shooting', # 'Performance PK_Shooting', 'Performance PKatt_Shooting', # 'Expected xG_Shooting', 'Expected npxG_Shooting', # 'Expected npxG/Sh_Shooting', 'Expected G-xG_Shooting', # 'Expected np:G-xG_Shooting', '90s_Passing', 'Total Cmp_Passing', # 'Total Att_Passing', 'Total Cmp%_Passing', 'Total TotDist_Passing', # 'Total PrgDist_Passing', 'Short Cmp_Passing', 'Short Att_Passing', # 'Short Cmp%_Passing', 'Medium Cmp_Passing', 'Medium Att_Passing', # 'Medium Cmp%_Passing', 'Long Cmp_Passing', 'Long Att_Passing', # 'Long Cmp%_Passing', 'Ast_Passing', 'xA_Passing', 'A-xA_Passing', # 'KP_Passing', '1/3_Passing', 'PPA_Passing', 'CrsPA_Passing', # 'Prog_Passing', '90s_Defense', 'Tackles Tkl_Defense', # 'Tackles TklW_Defense', 'Tackles Def 3rd_Defense', # 'Tackles Mid 3rd_Defense', 'Tackles Att 3rd_Defense', # 'Vs Dribbles Tkl_Defense', 'Vs Dribbles Att_Defense', # 'Vs Dribbles Tkl%_Defense', 'Vs Dribbles Past_Defense', # 'Pressures Press_Defense', 'Pressures Succ_Defense', # 'Pressures %_Defense', 'Pressures Def 3rd_Defense', # 'Pressures Mid 3rd_Defense', 'Pressures Att 3rd_Defense', # 'Blocks Blocks_Defense', 'Blocks Sh_Defense', 'Blocks ShSv_Defense', # 'Blocks Pass_Defense', 'Int_Defense', 'Tkl+Int_Defense', 'Clr_Defense', # 'Err_Defense', 'Playing Time MP_Goalkeeping', # 
'Playing Time Starts_Goalkeeping', 'Playing Time Min_Goalkeeping', # 'Playing Time 90s_Goalkeeping', 'Performance GA_Goalkeeping', # 'Performance GA90_Goalkeeping', 'Performance SoTA_Goalkeeping', # 'Performance Saves_Goalkeeping', 'Performance Save%_Goalkeeping', # 'W_Goalkeeping', 'D_Goalkeeping', 'L_Goalkeeping', # 'Performance CS_Goalkeeping', 'Performance CS%_Goalkeeping', # 'Performance PKatt_Goalkeeping', 'Penalty Kicks PKA_Goalkeeping', # 'Penalty Kicks PKsv_Goalkeeping', 'Penalty Kicks PKm_Goalkeeping', # 'Penalty Kicks Save%_Goalkeeping'] # for j in num_cols: # league2_i[j] = (league2_i[j] - gb2_i[j])/(gb1_i[j] - gb2_i[j]) # df_roster = pd.concat([df_roster,league2_i]) # return df_roster # league2 = adjust_stats_by_league(league2) # league1 = adjust_stats_by_league(league1) # - # # Get Player Ratings # + # def get_player_ratings(df): # # df['shot_accuracy'] = np.mean(df[['90s_Shooting','Standard SoT_Shooting', 'Standard SoT%_Shooting', # # 'Standard Sh/90_Shooting', 'Standard SoT/90_Shooting']].rank(pct=True,numeric_only=True), axis=1)*100 # # df['finishing'] = np.mean(df[['90s_Shooting','Gls_Shooting', # # 'Standard G/Sh_Shooting', 'Standard G/SoT_Shooting']].rank(pct=True,numeric_only=True), axis=1)*100 # # df['shot_power'] = np.mean(df[['90s_Shooting','Standard Dist_Shooting', 'Standard Sh_Shooting']].rank(pct=True,numeric_only=True), axis=1)*100 # # df['dangerous_rate'] = np.mean(df[['90s_Shooting','Expected xG_Shooting', 'Expected npxG_Shooting', # # 'Expected npxG/Sh_Shooting', 'Expected G-xG_Shooting', # # 'Expected np:G-xG_Shooting']].rank(pct=True,numeric_only=True), axis=1)*100 # df['overall_fwd'] = np.mean(df[['90s_Shooting', 'Gls_Shooting', # 'Standard Sh_Shooting', 'Standard SoT_Shooting', # 'Standard SoT%_Shooting', 'Standard Sh/90_Shooting', # 'Standard SoT/90_Shooting', 'Standard G/Sh_Shooting', # 'Standard G/SoT_Shooting', 'Standard Dist_Shooting', # 'Standard FK_Shooting', 'Performance PK_Shooting', # 'Performance PKatt_Shooting', 'Expected xG_Shooting', # 'Expected npxG_Shooting', 'Expected npxG/Sh_Shooting', # 'Expected G-xG_Shooting', 'Expected np:G-xG_Shooting' ]].rank(pct=True,numeric_only=True), axis=1)*100 # # df['long_pass'] = np.mean(df[['90s_Passing','Long Cmp_Passing', 'Long Att_Passing', # # 'Long Cmp%_Passing', 'CrsPA_Passing', 'Total TotDist_Passing', # # 'Total PrgDist_Passing', 'Prog_Passing']].rank(pct=True,numeric_only=True), axis=1)*100 # # df['short_pass'] = np.mean(df[['90s_Passing','Total Cmp_Passing', # # 'Total Att_Passing', 'Total Cmp%_Passing', 'Short Cmp_Passing', 'Short Att_Passing', # # 'Short Cmp%_Passing', 'Medium Cmp_Passing', 'Medium Att_Passing', # # 'Medium Cmp%_Passing' ]].rank(pct=True,numeric_only=True), axis=1)*100 # # df['playmaking'] = np.mean(df[['90s_Passing','Ast_Passing', 'xA_Passing', 'A-xA_Passing', # # 'KP_Passing', '1/3_Passing', 'PPA_Passing']].rank(pct=True,numeric_only=True), axis=1)*100 # df['overall_mid'] = np.mean(df[['90s_Passing', # 'Total Cmp_Passing', 'Total Att_Passing', 'Total Cmp%_Passing', # 'Total TotDist_Passing', 'Total PrgDist_Passing', 'Short Cmp_Passing', # 'Short Att_Passing', 'Short Cmp%_Passing', 'Medium Cmp_Passing', # 'Medium Att_Passing', 'Medium Cmp%_Passing', 'Long Cmp_Passing', # 'Long Att_Passing', 'Long Cmp%_Passing', 'Ast_Passing', 'xA_Passing', # 'A-xA_Passing', 'KP_Passing', '1/3_Passing', 'PPA_Passing', # 'CrsPA_Passing', 'Prog_Passing']].rank(pct=True,numeric_only=True), axis=1)*100 # # df['error_rate'] = np.mean(df[['90s_Defense','Vs Dribbles Past_Defense', 'Err_Defense' 
]].rank(pct=True,numeric_only=True), axis=1)*100 # # df['good_error_rate'] = 100 - df['error_rate'] # # df['tackling'] = np.mean(df[['90s_Defense','Tackles Tkl_Defense', # # 'Tackles TklW_Defense', 'Tackles Def 3rd_Defense', # # 'Tackles Mid 3rd_Defense', 'Tackles Att 3rd_Defense', # # 'Vs Dribbles Tkl_Defense', 'Vs Dribbles Att_Defense', # # 'Vs Dribbles Tkl%_Defense', # # 'Pressures Press_Defense', 'Pressures Succ_Defense', # # 'Pressures %_Defense', 'Pressures Def 3rd_Defense', # # 'Pressures Mid 3rd_Defense', 'Pressures Att 3rd_Defense']].rank(pct=True,numeric_only=True), axis=1)*100 # # df['awwareness'] = np.mean(df[['90s_Defense','Blocks Blocks_Defense', 'Blocks Sh_Defense', 'Blocks ShSv_Defense', # # 'Blocks Pass_Defense', 'Int_Defense', 'Tkl+Int_Defense', 'Clr_Defense']].rank(pct=True,numeric_only=True), axis=1)*100 # df['overall_def'] = np.mean(df[['90s_Defense', 'Tackles Tkl_Defense', # 'Tackles TklW_Defense', 'Tackles Def 3rd_Defense', # 'Tackles Mid 3rd_Defense', 'Tackles Att 3rd_Defense', # 'Vs Dribbles Tkl_Defense', 'Vs Dribbles Att_Defense', # 'Vs Dribbles Tkl%_Defense', 'Vs Dribbles Past_Defense', # 'Pressures Press_Defense', 'Pressures Succ_Defense', # 'Pressures %_Defense', 'Pressures Def 3rd_Defense', # 'Pressures Mid 3rd_Defense', 'Pressures Att 3rd_Defense', # 'Blocks Blocks_Defense', 'Blocks Sh_Defense', 'Blocks ShSv_Defense', # 'Blocks Pass_Defense', 'Int_Defense', 'Tkl+Int_Defense', 'Clr_Defense', # 'Err_Defense']].rank(pct=True,numeric_only=True), axis=1)*100 # df['gk'] = np.mean(df[['Playing Time MP_Goalkeeping', # 'Playing Time Starts_Goalkeeping', 'Playing Time Min_Goalkeeping', # 'Playing Time 90s_Goalkeeping', 'Performance GA_Goalkeeping', # 'Performance GA90_Goalkeeping', 'Performance SoTA_Goalkeeping', # 'Performance Saves_Goalkeeping', 'Performance Save%_Goalkeeping', # 'W_Goalkeeping', 'D_Goalkeeping', 'L_Goalkeeping', # 'Performance CS_Goalkeeping', 'Performance CS%_Goalkeeping', # 'Performance PKatt_Goalkeeping', 'Penalty Kicks PKA_Goalkeeping', # 'Penalty Kicks PKsv_Goalkeeping', 'Penalty Kicks PKm_Goalkeeping', # 'Penalty Kicks Save%_Goalkeeping']].rank(pct=True,numeric_only=True), axis=1)*100 # # df = df.drop(['90s_Shooting', 'Gls_Shooting', 'Standard Sh_Shooting', # # 'Standard SoT_Shooting', 'Standard SoT%_Shooting', # # 'Standard Sh/90_Shooting', 'Standard SoT/90_Shooting', # # 'Standard G/Sh_Shooting', 'Standard G/SoT_Shooting', # # 'Standard Dist_Shooting', 'Performance PK_Shooting', # # 'Performance PKatt_Shooting', 'Standard FK_Shooting', # # 'Expected xG_Shooting', 'Expected npxG_Shooting', # # 'Expected npxG/Sh_Shooting', 'Expected G-xG_Shooting', # # 'Expected np:G-xG_Shooting', '90s_Passing', 'Total Cmp_Passing', # # 'Total Att_Passing', 'Total Cmp%_Passing', 'Total TotDist_Passing', # # 'Total PrgDist_Passing', 'Short Cmp_Passing', 'Short Att_Passing', # # 'Short Cmp%_Passing', 'Medium Cmp_Passing', 'Medium Att_Passing', # # 'Medium Cmp%_Passing', 'Long Cmp_Passing', 'Long Att_Passing', # # 'Long Cmp%_Passing', 'Ast_Passing', 'xA_Passing', 'A-xA_Passing', # # 'KP_Passing', '1/3_Passing', 'PPA_Passing', 'CrsPA_Passing', # # 'Prog_Passing', '90s_Defense', 'Tackles Tkl_Defense', # # 'Tackles TklW_Defense', 'Tackles Def 3rd_Defense', # # 'Tackles Mid 3rd_Defense', 'Tackles Att 3rd_Defense', # # 'Vs Dribbles Tkl_Defense', 'Vs Dribbles Att_Defense', # # 'Vs Dribbles Tkl%_Defense', 'Vs Dribbles Past_Defense', # # 'Pressures Press_Defense', 'Pressures Succ_Defense', # # 'Pressures %_Defense', 'Pressures Def 3rd_Defense', # # 'Pressures Mid 
3rd_Defense', 'Pressures Att 3rd_Defense', # # 'Blocks Blocks_Defense', 'Blocks Sh_Defense', 'Blocks ShSv_Defense', # # 'Blocks Pass_Defense', 'Int_Defense', 'Tkl+Int_Defense', 'Clr_Defense', # # 'Err_Defense', 'Playing Time MP_Goalkeeping', # # 'Playing Time Starts_Goalkeeping', 'Playing Time Min_Goalkeeping', # # 'Playing Time 90s_Goalkeeping', 'Performance GA_Goalkeeping', # # 'Performance GA90_Goalkeeping', 'Performance SoTA_Goalkeeping', # # 'Performance Saves_Goalkeeping', 'Performance Save%_Goalkeeping', # # 'W_Goalkeeping', 'D_Goalkeeping', 'L_Goalkeeping', # # 'Performance CS_Goalkeeping', 'Performance CS%_Goalkeeping', # # 'Performance PKatt_Goalkeeping', 'Penalty Kicks PKA_Goalkeeping', # # 'Penalty Kicks PKsv_Goalkeeping', 'Penalty Kicks PKm_Goalkeeping', # # 'Penalty Kicks Save%_Goalkeeping', 'Unnamed: 0', 'Born'], axis=1) # return df # tourn1 = get_player_ratings(tourn1) # league1 = get_player_ratings(league1) # tourn2 = get_player_ratings(tourn2) # league2 = get_player_ratings(league2) league = pd.concat([league1, league2], axis=0) tourn = pd.concat([tourn1, tourn2], axis=0) league = pd.concat([league, tourn], axis=0).drop_duplicates(['Player', 'Year'], keep = 'first') # - # # Get Best Position # + # def get_pos(df): # fwds = ['FW', 'FWMF', 'FWDF', 'MFFW', 'DFFW'] # mids = [ 'MF', 'MFFW', 'GKMF', 'MFGK', 'MFDF', 'DFMF'] # d = ['DF', 'DFFW', 'DFMF', 'MFDF', 'FWDF'] # gk = ['GK', 'GKMF', 'MFGK'] # df.loc[(df.overall_fwd >= df.overall_mid) & (df.overall_fwd >= df.overall_def), "Pos"] = "F" # df.loc[(df.overall_mid >= df.overall_fwd) & (df.overall_mid >= df.overall_def), "Pos"] = "M" # df.loc[(df.overall_def >= df.overall_mid) & (df.overall_def >= df.overall_fwd), "Pos"] = "D" # df.loc[(df.Positions.isin(gk)) & (df.gk >= df.overall_mid) & (df.gk >= df.overall_def) & (df.gk >= df.overall_fwd), "Pos"] = "G" # league.Squads = league.Squads.fillna('Tournament') # league.Leagues = league.Leagues.fillna('Tournament') # return df # #tourn = get_pos(tourn) # league = get_pos(league) # league.gk = league.gk.fillna(0) # def get_ovr(league): # league.loc[league.Pos == 'M', 'Ovr'] = 0.7*league.loc[league.Pos == 'M']['overall_mid'] + 0.3*league.loc[league.Pos == 'M']['overall_fwd'] # league.loc[league.Pos == 'F', 'Ovr'] = 0.8*league.loc[league.Pos == 'F']['overall_fwd'] + 0.2*league.loc[league.Pos == 'F']['overall_mid'] # league.loc[league.Pos == 'D', 'Ovr'] = 0.8*league.loc[league.Pos == 'D']['overall_def'] + 0.2*league.loc[league.Pos == 'D']['overall_mid'] # league.loc[league.Pos == 'G', 'Ovr'] = league.loc[league.Pos == 'G']['gk'] # league.Squads = league.Squads.fillna('Tournament') # league.Leagues = league.Leagues.fillna('Tournament') # return league # league = get_ovr(league) # - league.Year = league.Year.astype(str) league.columns # + le = preprocessing.LabelEncoder() data = league[league['Year'] == '2020'] le.fit(data['Nations']) data['Nations_cat'] = le.transform(data['Nations']) le.fit(data['Squads']) data['Squads_cat'] = le.transform(data['Squads']) le.fit(data['Leagues']) data['Leagues_cat'] = le.transform(data['Leagues']) le.fit(data['Positions']) data['Positions_cat'] = le.transform(data['Positions']) data = data[['Nations_cat', 'Squads_cat', 'Positions_cat', 'Age', 'Leagues_cat', 'Salary','90s_Shooting','W_Goalkeeping','Err_Defense', 'churn' ]] X = data[['Nations_cat', 'Squads_cat', 'Positions_cat', 'Age', 'Leagues_cat', 'Salary','90s_Shooting','W_Goalkeeping','Err_Defense']].values y = data['churn'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, 
random_state=42) clf = LogisticRegression(random_state=0).fit(X_train, y_train) print("Model Score") print(clf.score(X_train, y_train)) # + from sklearn.metrics import classification_report print(classification_report(y_test, clf.predict(X_test))) # + from sklearn.metrics import confusion_matrix confusion_matrix(y_test, clf.predict(X_test)) # - feats # + feats = [] for i in [4.28083598e-13,8.41433468e-13, 4.48498360e-14, 2.17258041e-13, 5.34464121e-14, -5.31725845e-08, -1.21666767e-13, -7.99623525e-17, 1.09839441e-16]: feats.append((i - np.mean([4.28083598e-13,8.41433468e-13, 4.48498360e-14, 2.17258041e-13, 5.34464121e-14, -5.31725845e-08, -1.21666767e-13, -7.99623525e-17, 1.09839441e-16]))/np.std([4.28083598e-13,8.41433468e-13, 4.48498360e-14, 2.17258041e-13, 5.34464121e-14, -5.31725845e-08, -1.21666767e-13, -7.99623525e-17, 1.09839441e-16])) # - plt.bar(["Nation", "Squad", "Position", "Age", "League", "Salary", "Playing_Time", "Wins", "Errors"], feats) plt.title('Relative Parameter Weighting For Churn Model') plt.show() from sklearn.metrics import plot_confusion_matrix plot_confusion_matrix(clf, X_test, y_test) plt.show() # # League Churn Model (Log. Regress) # + def player_churn_model(league, year): le = preprocessing.LabelEncoder() data = league[league['Year'] == '2020'] le.fit(data['Nations']) data['Nations_cat'] = le.transform(data['Nations']) le.fit(data['Squads']) data['Squads_cat'] = le.transform(data['Squads']) le.fit(data['Leagues']) data['Leagues_cat'] = le.transform(data['Leagues']) le.fit(data['Positions']) data['Positions_cat'] = le.transform(data['Positions']) data = data[['Nations_cat', 'Squads_cat', 'Positions_cat', 'Age', 'Leagues_cat', 'Salary','90s_Shooting','W_Goalkeeping','Err_Defense', 'churn' ]] X = data[['Nations_cat', 'Squads_cat', 'Positions_cat', 'Age', 'Leagues_cat', 'Salary','90s_Shooting','W_Goalkeeping','Err_Defense']].values y = data['churn'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42) clf = LogisticRegression(random_state=0).fit(X_train, y_train) print("Model Score") print(clf.score(X_train, y_train)) league_year = league[league['Year'] == year] league_year_data = league_year.reset_index(drop=True) le.fit(league_year_data['Nations']) league_year_data['Nations_cat'] = le.transform(league_year_data['Nations']) le.fit(league_year_data['Squads']) league_year_data['Squads_cat'] = le.transform(league_year_data['Squads']) le.fit(league_year_data['Leagues']) league_year_data['Leagues_cat'] = le.transform(league_year_data['Leagues']) le.fit(league_year_data['Positions']) league_year_data['Positions_cat'] = le.transform(league_year_data['Positions']) league_year_data_x = league_year_data[['Nations_cat', 'Squads_cat', 'Positions_cat', 'Age', 'Leagues_cat', 'Salary','90s_Shooting','W_Goalkeeping','Err_Defense' ]] preds = pd.DataFrame(clf.predict_proba(league_year_data_x)).reset_index(drop=True) preds.columns = ['Active', "Churn"] df_with_preds = pd.concat([league_year_data,preds], axis=1) kicked_players = df_with_preds[df_with_preds.Churn > 0.5] print("Number of Players Kicked") num_kicked = len(kicked_players) print(num_kicked) remaining_players = df_with_preds[df_with_preds.Churn <= 0.7] remaining_players = remaining_players.drop(['Nations_cat', 'Squads_cat', 'Positions_cat', 'Active', 'Churn'], axis=1) league = league[league.Year != year] remaining_players = pd.concat([remaining_players, league], axis=0) return num_kicked, kicked_players, remaining_players num_kicked, kicked_players, league_22 = 
player_churn_model(league, '2021') league_22 = league_22.drop_duplicates(['Player', 'Year']) # - # # Age & Advance Player Overalls # + def advance_year(year, league): league = league[league.Year == year] player = pd.DataFrame() player["Player"] = league.Player.tolist() player["Nations"] = league.Nations.tolist() player["Squads"] = league.Squads.tolist() player["Age"] = league.Age.astype(int).tolist() #player["Positions"] = league.Positions.tolist() player["Positions"] = league.Positions.tolist() player['Year'] = league.Year.astype(int).tolist() player['Salary'] = league.Salary.astype(int).tolist() num_cols = ['90s_Shooting', 'Gls_Shooting', 'Standard Sh_Shooting', 'Standard SoT_Shooting', 'Standard SoT%_Shooting', 'Standard Sh/90_Shooting', 'Standard SoT/90_Shooting', 'Standard G/Sh_Shooting', 'Standard G/SoT_Shooting', 'Standard Dist_Shooting', 'Standard FK_Shooting', 'Performance PK_Shooting', 'Performance PKatt_Shooting', 'Expected xG_Shooting', 'Expected npxG_Shooting', 'Expected npxG/Sh_Shooting', 'Expected G-xG_Shooting', 'Expected np:G-xG_Shooting', '90s_Passing', 'Total Cmp_Passing', 'Total Att_Passing', 'Total Cmp%_Passing', 'Total TotDist_Passing', 'Total PrgDist_Passing', 'Short Cmp_Passing', 'Short Att_Passing', 'Short Cmp%_Passing', 'Medium Cmp_Passing', 'Medium Att_Passing', 'Medium Cmp%_Passing', 'Long Cmp_Passing', 'Long Att_Passing', 'Long Cmp%_Passing', 'Ast_Passing', 'xA_Passing', 'A-xA_Passing', 'KP_Passing', '1/3_Passing', 'PPA_Passing', 'CrsPA_Passing', 'Prog_Passing', '90s_Defense', 'Tackles Tkl_Defense', 'Tackles TklW_Defense', 'Tackles Def 3rd_Defense', 'Tackles Mid 3rd_Defense', 'Tackles Att 3rd_Defense', 'Vs Dribbles Tkl_Defense', 'Vs Dribbles Att_Defense', 'Vs Dribbles Tkl%_Defense', 'Vs Dribbles Past_Defense', 'Pressures Press_Defense', 'Pressures Succ_Defense', 'Pressures %_Defense', 'Pressures Def 3rd_Defense', 'Pressures Mid 3rd_Defense', 'Pressures Att 3rd_Defense', 'Blocks Blocks_Defense', 'Blocks Sh_Defense', 'Blocks ShSv_Defense', 'Blocks Pass_Defense', 'Int_Defense', 'Tkl+Int_Defense', 'Clr_Defense', 'Err_Defense', 'Playing Time MP_Goalkeeping', 'Playing Time Starts_Goalkeeping', 'Playing Time Min_Goalkeeping', 'Playing Time 90s_Goalkeeping', 'Performance GA_Goalkeeping', 'Performance GA90_Goalkeeping', 'Performance SoTA_Goalkeeping', 'Performance Saves_Goalkeeping', 'Performance Save%_Goalkeeping', 'W_Goalkeeping', 'D_Goalkeeping', 'L_Goalkeeping', 'Performance CS_Goalkeeping', 'Performance CS%_Goalkeeping', 'Performance PKatt_Goalkeeping', 'Penalty Kicks PKA_Goalkeeping', 'Penalty Kicks PKsv_Goalkeeping', 'Penalty Kicks PKm_Goalkeeping', 'Penalty Kicks Save%_Goalkeeping'] for i in num_cols: player[i] = league[i].tolist() player['Year'] = player['Year'] + 1 player['Age'] = player['Age'] + 1 player['Salary'] = player['Salary']*1.02 + np.random.binomial(1, 0.15)*player['Salary']*1.1 return player def get_next_year(year, league): teams1 = league[league.Leagues == 'A'].Squads.to_list() teams2 =league[league.Leagues == 'B'].Squads.to_list() teams3 =league[league.Leagues == 'C'].Squads.to_list() teams4 =league[league.Leagues == 'D'].Squads.to_list() teams5 =league[league.Leagues == 'E'].Squads.to_list() teams6 =league[league.Leagues == 'RFL'].Squads.to_list() temp_df = advance_year(year, league) num_cols = ['90s_Shooting', 'Gls_Shooting', 'Standard Sh_Shooting', 'Standard SoT_Shooting', 'Standard SoT%_Shooting', 'Standard Sh/90_Shooting', 'Standard SoT/90_Shooting', 'Standard G/Sh_Shooting', 'Standard G/SoT_Shooting', 'Standard Dist_Shooting', 'Standard 
FK_Shooting', 'Performance PK_Shooting', 'Performance PKatt_Shooting', 'Expected xG_Shooting', 'Expected npxG_Shooting', 'Expected npxG/Sh_Shooting', 'Expected G-xG_Shooting', 'Expected np:G-xG_Shooting', '90s_Passing', 'Total Cmp_Passing', 'Total Att_Passing', 'Total Cmp%_Passing', 'Total TotDist_Passing', 'Total PrgDist_Passing', 'Short Cmp_Passing', 'Short Att_Passing', 'Short Cmp%_Passing', 'Medium Cmp_Passing', 'Medium Att_Passing', 'Medium Cmp%_Passing', 'Long Cmp_Passing', 'Long Att_Passing', 'Long Cmp%_Passing', 'Ast_Passing', 'xA_Passing', 'A-xA_Passing', 'KP_Passing', '1/3_Passing', 'PPA_Passing', 'CrsPA_Passing', 'Prog_Passing', '90s_Defense', 'Tackles Tkl_Defense', 'Tackles TklW_Defense', 'Tackles Def 3rd_Defense', 'Tackles Mid 3rd_Defense', 'Tackles Att 3rd_Defense', 'Vs Dribbles Tkl_Defense', 'Vs Dribbles Att_Defense', 'Vs Dribbles Tkl%_Defense', 'Vs Dribbles Past_Defense', 'Pressures Press_Defense', 'Pressures Succ_Defense', 'Pressures %_Defense', 'Pressures Def 3rd_Defense', 'Pressures Mid 3rd_Defense', 'Pressures Att 3rd_Defense', 'Blocks Blocks_Defense', 'Blocks Sh_Defense', 'Blocks ShSv_Defense', 'Blocks Pass_Defense', 'Int_Defense', 'Tkl+Int_Defense', 'Clr_Defense', 'Err_Defense', 'Playing Time MP_Goalkeeping', 'Playing Time Starts_Goalkeeping', 'Playing Time Min_Goalkeeping', 'Playing Time 90s_Goalkeeping', 'Performance GA_Goalkeeping', 'Performance GA90_Goalkeeping', 'Performance SoTA_Goalkeeping', 'Performance Saves_Goalkeeping', 'Performance Save%_Goalkeeping', 'W_Goalkeeping', 'D_Goalkeeping', 'L_Goalkeeping', 'Performance CS_Goalkeeping', 'Performance CS%_Goalkeeping', 'Performance PKatt_Goalkeeping', 'Penalty Kicks PKA_Goalkeeping', 'Penalty Kicks PKsv_Goalkeeping', 'Penalty Kicks PKm_Goalkeeping', 'Penalty Kicks Save%_Goalkeeping'] fwds = ['MFFW', 'DFFW', 'FW','MFFW'] mids = ['FWMF','MF','GKMF','DFMF', ] d = ['DF', 'MFDF', 'FWDF', ] gk = ['GK', 'MFGK'] x = temp_df.Positions.isin(d) y = temp_df.Positions.isin(gk) z = temp_df.Positions.isin(fwds) t = temp_df.Positions.isin(mids) for i in num_cols: temp_df.loc[((x)|(y))&(temp_df.Age <= 24), i] = temp_df.loc[((x)|(y))&(temp_df.Age <= 24), i]*1.08 temp_df.loc[((x)|(y))&(temp_df.Age <= 28)&(temp_df.Age > 24), i] = temp_df.loc[((x)|(y))&(temp_df.Age <= 28)&(temp_df.Age > 24), i] temp_df.loc[((x)|(y))&(temp_df.Age >= 28), i] = temp_df.loc[((x)|(y))&(temp_df.Age >= 28), i]*0.92 temp_df.loc[((z)|(t))&(temp_df.Age <= 26), i] = temp_df.loc[((z)|(t))&(temp_df.Age <= 26), i]*1.05 temp_df.loc[((z)|(t))&(temp_df.Age <= 30)&(temp_df.Age > 26), i] = temp_df.loc[((z)|(t))&(temp_df.Age <= 30)&(temp_df.Age > 26), i] temp_df.loc[((z)|(t))&(temp_df.Age >= 30), i] = temp_df.loc[((z)|(t))&(temp_df.Age >= 30), i]*0.95 temp_df["Leagues"] = np.nan temp_df.loc[temp_df.Squads.isin(teams1), "Leagues"] = 'A' temp_df.loc[temp_df.Squads.isin(teams2), "Leagues"] = 'B' temp_df.loc[temp_df.Squads.isin(teams3), "Leagues"] = 'C' temp_df.loc[temp_df.Squads.isin(teams4), "Leagues"] = 'D' temp_df.loc[temp_df.Squads.isin(teams5), "Leagues"] = 'E' temp_df.loc[temp_df.Squads.isin(teams6), "Leagues"] = 'RFL' return temp_df league_22 = get_next_year("2021", league_22) league_22.Leagues = league_22.Leagues.fillna('Tournament') league = pd.concat([league, league_22], axis=0) league.churn = league.churn.fillna(0) # - # # Generate Players # + def get_new_player(year, kicked_players, league_22): used_names = kicked_players.Player.to_list() name = names.get_full_name(gender='male').replace(' ', '') if name in used_names: pass else: player = {} player["Player"] = 
name player["Nations"] = random.choice(kicked_players.Nations.tolist()) player["Squads"] = random.choice(kicked_players.Squads.tolist()) player["Age"] = round(np.random.uniform(17, 25)) player["Positions"] = random.choice(kicked_players.Positions.tolist()) player['Year'] = year player['Salary'] = round(np.random.normal(np.mean(league_22['Salary']), np.std(league_22['Salary']))) num_cols = ['90s_Shooting', 'Gls_Shooting', 'Standard Sh_Shooting', 'Standard SoT_Shooting', 'Standard SoT%_Shooting', 'Standard Sh/90_Shooting', 'Standard SoT/90_Shooting', 'Standard G/Sh_Shooting', 'Standard G/SoT_Shooting', 'Standard Dist_Shooting', 'Standard FK_Shooting', 'Performance PK_Shooting', 'Performance PKatt_Shooting', 'Expected xG_Shooting', 'Expected npxG_Shooting', 'Expected npxG/Sh_Shooting', 'Expected G-xG_Shooting', 'Expected np:G-xG_Shooting', '90s_Passing', 'Total Cmp_Passing', 'Total Att_Passing', 'Total Cmp%_Passing', 'Total TotDist_Passing', 'Total PrgDist_Passing', 'Short Cmp_Passing', 'Short Att_Passing', 'Short Cmp%_Passing', 'Medium Cmp_Passing', 'Medium Att_Passing', 'Medium Cmp%_Passing', 'Long Cmp_Passing', 'Long Att_Passing', 'Long Cmp%_Passing', 'Ast_Passing', 'xA_Passing', 'A-xA_Passing', 'KP_Passing', '1/3_Passing', 'PPA_Passing', 'CrsPA_Passing', 'Prog_Passing', '90s_Defense', 'Tackles Tkl_Defense', 'Tackles TklW_Defense', 'Tackles Def 3rd_Defense', 'Tackles Mid 3rd_Defense', 'Tackles Att 3rd_Defense', 'Vs Dribbles Tkl_Defense', 'Vs Dribbles Att_Defense', 'Vs Dribbles Tkl%_Defense', 'Vs Dribbles Past_Defense', 'Pressures Press_Defense', 'Pressures Succ_Defense', 'Pressures %_Defense', 'Pressures Def 3rd_Defense', 'Pressures Mid 3rd_Defense', 'Pressures Att 3rd_Defense', 'Blocks Blocks_Defense', 'Blocks Sh_Defense', 'Blocks ShSv_Defense', 'Blocks Pass_Defense', 'Int_Defense', 'Tkl+Int_Defense', 'Clr_Defense', 'Err_Defense', 'Playing Time MP_Goalkeeping', 'Playing Time Starts_Goalkeeping', 'Playing Time Min_Goalkeeping', 'Playing Time 90s_Goalkeeping', 'Performance GA_Goalkeeping', 'Performance GA90_Goalkeeping', 'Performance SoTA_Goalkeeping', 'Performance Saves_Goalkeeping', 'Performance Save%_Goalkeeping', 'W_Goalkeeping', 'D_Goalkeeping', 'L_Goalkeeping', 'Performance CS_Goalkeeping', 'Performance CS%_Goalkeeping', 'Performance PKatt_Goalkeeping', 'Penalty Kicks PKA_Goalkeeping', 'Penalty Kicks PKsv_Goalkeeping', 'Penalty Kicks PKm_Goalkeeping', 'Penalty Kicks Save%_Goalkeeping'] try: for i in num_cols: player[i] = np.random.normal(np.mean(league_22[i]), np.std(league_22[i])) except: print('An Error Occured') return player def add_players_to_league(num_kicked, year, kicked_players, league_22): player_dict = {} for i in range(num_kicked): player_dict[i] = get_new_player(year, kicked_players, league_22) return player_dict new_players = pd.DataFrame(add_players_to_league(num_kicked, '2022', kicked_players, league)).T #new_players = get_player_ratings(new_players) #new_players = get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) # new_league_22= new_league_22.drop_duplicates(['Player', 'Year'], keep = 'last') # teams1 = league[league.Leagues == 'A'].Squads.to_list() # teams2 =league[league.Leagues == 'B'].Squads.to_list() # teams3 =league[league.Leagues == 'C'].Squads.to_list() # teams4 =league[league.Leagues == 'D'].Squads.to_list() # teams5 =league[league.Leagues == 'E'].Squads.to_list() # teams6 =league[league.Leagues == 'RFL'].Squads.to_list() # 
new_league_22.loc[new_league_22.Squads.isin(teams1), "Leagues"] = 'A' # new_league_22.loc[new_league_22.Squads.isin(teams2), "Leagues"] = 'B' # new_league_22.loc[new_league_22.Squads.isin(teams3), "Leagues"] = 'C' # new_league_22.loc[new_league_22.Squads.isin(teams4), "Leagues"] = 'D' # new_league_22.loc[new_league_22.Squads.isin(teams5), "Leagues"] = 'E' # new_league_22.loc[new_league_22.Squads.isin(teams6), "Leagues"] = 'RFL' # + ####generate 10 years of data## ####randomly add +1 to rarita players each year from funding## ###multiply all non-rarita salaries by 10%, rarita salaries by -10% # - league.Year = league.Year.astype(str) league = league.drop(['Born', 'Unnamed: 0'], axis=1) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # # 2023 # + num_kicked, kicked_players, league_23 = player_churn_model(league, '2022') league_23 = league_23.drop_duplicates(['Player', 'Year']) # - league_23 = get_next_year("2022", league_23) league_23.Leagues = league_23.Leagues.fillna('Tournament') league = pd.concat([league, league_23], axis=0) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league_23.Year = league_23.Year.astype(str) new_players = pd.DataFrame(add_players_to_league(num_kicked, '2023', kicked_players, league_23)).T #new_players = get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # # 2024 # + num_kicked, kicked_players, league_24 = player_churn_model(league, '2023') league_24 = league_24.drop_duplicates(['Player', 'Year']) league_24 = get_next_year("2023", league_24) league_24.Leagues = league_24.Leagues.fillna('Tournament') league = pd.concat([league, league_24], axis=0) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) new_players = pd.DataFrame(add_players_to_league(num_kicked, '2024', kicked_players, league_24)).T #new_players = get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # - # # 2025 # + num_kicked, kicked_players, league_25 = player_churn_model(league, '2024') league_25 = league_25.drop_duplicates(['Player', 'Year']) league_25 = get_next_year("2024", league_25) league_25.Leagues = league_25.Leagues.fillna('Tournament') league = pd.concat([league, league_25], axis=0) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) new_players = pd.DataFrame(add_players_to_league(num_kicked, '2025', kicked_players, league_25)).T #new_players = get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # - # # 2026 # + num_kicked, kicked_players, league_26 = player_churn_model(league, '2025') league_26 = league_26.drop_duplicates(['Player', 'Year']) league_26 = get_next_year("2025", league_26) league_26.Leagues = league_26.Leagues.fillna('Tournament') league = pd.concat([league, league_26], axis=0) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) new_players = pd.DataFrame(add_players_to_league(num_kicked, '2026', kicked_players, league_26)).T #new_players = 
get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # - # # 2027 # + num_kicked, kicked_players, league_27 = player_churn_model(league, '2026') league_27 = league_27.drop_duplicates(['Player', 'Year']) league_27 = get_next_year("2026", league_27) league_27.Leagues = league_27.Leagues.fillna('Tournament') league = pd.concat([league, league_27], axis=0) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) new_players = pd.DataFrame(add_players_to_league(num_kicked, '2027', kicked_players, league_27)).T #new_players = get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # - # # 2028 # + num_kicked, kicked_players, league_28 = player_churn_model(league, '2027') league_28 = league_28.drop_duplicates(['Player', 'Year']) league_28 = get_next_year("2027", league_28) league_28.Leagues = league_28.Leagues.fillna('Tournament') league = pd.concat([league, league_28], axis=0) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) new_players = pd.DataFrame(add_players_to_league(num_kicked, '2028', kicked_players, league_28)).T #new_players = get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # - # # 2029 # + num_kicked, kicked_players, league_29 = player_churn_model(league, '2028') league_29 = league_29.drop_duplicates(['Player', 'Year']) league_29 = get_next_year("2028", league_29) league_29.Leagues = league_29.Leagues.fillna('Tournament') league = pd.concat([league, league_29], axis=0) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) new_players = pd.DataFrame(add_players_to_league(num_kicked, '2029', kicked_players, league_29)).T #new_players = get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # - # # 2030 # + num_kicked, kicked_players, league_30 = player_churn_model(league, '2029') league_30 = league_30.drop_duplicates(['Player', 'Year']) league_30 = get_next_year("2029", league_30) league_30.Leagues = league_30.Leagues.fillna('Tournament') league = pd.concat([league, league_30], axis=0) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) new_players = pd.DataFrame(add_players_to_league(num_kicked, '2030', kicked_players, league_30)).T #new_players = get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # - # # 2031 # + num_kicked, kicked_players, league_31 = player_churn_model(league, '2030') league_31 = league_31.drop_duplicates(['Player', 'Year']) league_31 = get_next_year("2030", league_31) league_31.Leagues = league_31.Leagues.fillna('Tournament') league = pd.concat([league, league_31], axis=0) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) new_players = pd.DataFrame(add_players_to_league(num_kicked, '2031', kicked_players, league_31)).T #new_players = get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # - # # 2032 # + num_kicked, kicked_players, league_32 = player_churn_model(league, '2031') league_32 = league_32.drop_duplicates(['Player', 'Year']) league_32 = get_next_year("2031", league_32) league_32.Leagues = league_32.Leagues.fillna('Tournament') league = pd.concat([league, league_32], axis=0) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) new_players = pd.DataFrame(add_players_to_league(num_kicked, '2032', kicked_players, league_32)).T #new_players = get_pos(new_players) league = pd.concat([league, new_players], axis=0) #league = get_ovr(league) league.churn = league.churn.fillna(0) league.Year = league.Year.astype(str) league[['Leagues', 'Squads']]=league[['Leagues', 'Squads']].fillna('Tournament') # - league['Efficiency'] = ((1/(1 + league['90s_Shooting'] + league['90s_Passing'] + league['90s_Defense'] + league['Playing Time 90s_Goalkeeping']))*(league['Gls_Shooting'] - league['Err_Defense'] + league['xA_Passing'] + league['W_Goalkeeping'])) league = league.drop([ 'Standard Sh_Shooting', 'Standard SoT%_Shooting', 'Standard Sh/90_Shooting', 'Standard SoT/90_Shooting', 'Standard G/Sh_Shooting', 'Standard FK_Shooting', 'Performance PK_Shooting', 'Performance PKatt_Shooting', 'Expected xG_Shooting', 'Expected npxG_Shooting', 'Expected npxG/Sh_Shooting', 'Expected G-xG_Shooting', 'Expected np:G-xG_Shooting', '90s_Passing', 'Total Cmp_Passing', 'Total Att_Passing', 'Total Cmp%_Passing', 'Total TotDist_Passing', 'Total PrgDist_Passing', 'Short Cmp_Passing', 'Short Att_Passing', 'Short Cmp%_Passing', 'Medium Cmp_Passing', 'Medium Att_Passing', 'Medium Cmp%_Passing', 'Long Cmp_Passing', 'Long Att_Passing', 'Long Cmp%_Passing', 'Ast_Passing', 'A-xA_Passing', 'KP_Passing', 'PPA_Passing', 'CrsPA_Passing', '90s_Defense', 'Tackles Tkl_Defense', 'Tackles TklW_Defense', 'Tackles Def 3rd_Defense', 'Tackles Mid 3rd_Defense', 'Tackles Att 3rd_Defense', 'Vs Dribbles Tkl_Defense', 'Vs Dribbles Att_Defense', 'Vs Dribbles Tkl%_Defense', 'Vs Dribbles Past_Defense', 'Pressures Press_Defense', 'Pressures Succ_Defense', 'Pressures %_Defense', 'Pressures Def 3rd_Defense', 'Pressures Mid 3rd_Defense', 'Pressures Att 3rd_Defense', 'Blocks Blocks_Defense', 'Blocks Sh_Defense', 'Blocks ShSv_Defense', 'Blocks Pass_Defense', 'Int_Defense', 'Playing Time MP_Goalkeeping', 'Playing Time Starts_Goalkeeping', 'Playing Time Min_Goalkeeping', 'Playing Time 90s_Goalkeeping', 'Performance GA_Goalkeeping', 'Performance GA90_Goalkeeping', 'Performance SoTA_Goalkeeping', 'Performance Saves_Goalkeeping', 'L_Goalkeeping', 'Performance CS_Goalkeeping', 'Performance CS%_Goalkeeping', 'Performance PKatt_Goalkeeping', 'Penalty Kicks PKA_Goalkeeping', 'Penalty Kicks PKsv_Goalkeeping', 'Penalty Kicks PKm_Goalkeeping', 'Penalty Kicks Save%_Goalkeeping'], axis = 1) league.columns = ['Player', 'Nations', 'Salary', 'Squads', 'Positions', 'Leagues', 'Age', 'time', 'goals', 'sot', 'gps', 'dist', 'xa', 'cross','pass', 'def', 'err', 'd1', 'sv', 'w', 'd', 'churn', 'Year', 'eff'] league.Year = league.Year.astype(str) league.goals = league.goals.astype(float) league.eff = league.eff.astype(float) # # Linear 
Mixed Effects Model # + #league = league[league.Positions != '0'] # - league = league.dropna(axis=0) mod = smf.mixedlm("eff ~ Leagues + Year + Positions", league, groups=league["Player"]) mdf = mod.fit() print(mdf.summary()) mdf.params # + params = { "A":0, "B":-0.006394, "C":0.007385, "D":0.016564, "E":0.005290, "RFL":0.008949, "Tournament":0.012664, "2020":0, "2021":-0.006394, "2022":-0.006460, "2023":-0.006506, "2024":-0.004430, "2025": -0.003971, "2026":-0.002075, "2027":-0.014500, "2028": -0.007429, "2029":-0.006932, "2030": -0.007340, "2031":-0.013616, "2032":-0.013616, "0":0, "DF":-0.009454, "DFFW":0.004241, "DFMF": -0.004265, "FW":0.020191, "FWDF":0.003845, "FWMF":0.011219, "GK":-0.001097, "GKMF":-0.030327, "MF":0.004201, "MFDF":0.000332, 'MFFW':0.009182 } def get_adjusted_ovr(eff, year, league, pos, params): new_ovr = 0.026699 + eff*params[year] + eff*params[league] + eff*params[pos] return new_ovr league['adjust_ovr'] = (league.apply(lambda x:get_adjusted_ovr(x['eff'], x['Year'], x['Leagues'],x['Positions'],params), axis=1)) # + from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() print(scaler.fit(league.adjust_ovr.values.reshape(-1, 1))) league['test'] = scaler.transform(league.adjust_ovr.values.reshape(-1, 1)) # - (league['test'] - np.mean(league['test']))/np.std(league['test']) league['test'] = 100*league['test'] len(league) plt.hist(league_test2['adjust_ovr']) plt.show() league_test = league[(league.adjust_ovr > (np.mean(league['adjust_ovr']) - 3*np.std(league['adjust_ovr'])))& (league.adjust_ovr < (np.mean(league['adjust_ovr']) + 3*np.std(league['adjust_ovr'])))] # # Adjust Overall Ratings # + def normalize(league, year): league = league[league.Year == year] #league['adjust_ovr'] = league['adjust_ovr'].fillna(league['Ovr']) league['adjust_ovr'] = abs(league['adjust_ovr'] - min(league['adjust_ovr']))/(max(league['adjust_ovr']) - min(league['adjust_ovr'])) league['adjust_ovr'] = league['adjust_ovr']*100 return league l1 = normalize(league_test, "2020") l2 = normalize(league_test, "2021") l3 = normalize(league_test, "2022") l4 = normalize(league_test, "2023") l5 = normalize(league_test, "2024") l6 = normalize(league_test, "2025") l7 = normalize(league_test, "2026") l8 = normalize(league_test, "2027") l9 = normalize(league_test, "2028") l10 = normalize(league_test, "2029") l11 = normalize(league_test, "2030") l12 = normalize(league_test, "2031") l13 = normalize(league_test, "2032") league_test2 = pd.concat([l1,l2, l3, l4, l5, l6, l7, l8, l9, l10, l11, l12, l13], axis=0) # - league_test2.to_csv("soa_2020_2030.csv") # # Build Team for Each Nation # + def get_trim_roster(df_roster, team): gk = ['GK', 'GKMF', 'MFGK'] test = df_roster[df_roster.Nations == team].drop_duplicates('Player', keep='last') # test.loc[test.Pos == 'M', 'Ovr'] = test['adjust_ovr'] # test.loc[test.Pos == 'F', 'Ovr'] = test['adjust_ovr'] # test.loc[test.Pos == 'D', 'Ovr'] = test['adjust_ovr'] # test.loc[test.Pos == 'G', 'Ovr'] = test['adjust_ovr'] gks = test[test.Positions.isin(gk)].sort_values('adjust_ovr', ascending = False).head(2) test = test.sort_values('adjust_ovr', ascending = False).head(22) test = pd.concat([test, gks], axis=0) if (len(test) < 18)|(len(gks) < 1): pass #testfull = testfull.drop(['player_name_x', 'player_name_y','player_id', 'team_id', 'game_id' ], axis=1) else: return test df_roster = pd.DataFrame() for i in league_test2.Nations.unique(): df_team = get_trim_roster(league_test2[league_test2.Year == '2021'], i) df_roster = pd.concat([df_roster,df_team]) # + team_ratings = 
df_roster.groupby('Nations')['adjust_ovr'].mean().reset_index() team_ratings.sort_values(by = 'adjust_ovr', ascending=False).head(24) # - df_roster[df_roster.Nations == 'Rarita']['Salary'].sum() league['adjust_ovr'].hist() plt.show() plt.hist(100*league[league.Year == '2023']['adjust_ovr']/max(league[league.Year == '2023']['adjust_ovr'])) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 练习2:Logistic Regression 逻辑回归 # %matplotlib inline import numpy as np import matplotlib.pyplot as plt import pandas as pd # ## 查看数据 # # Suppose that you are the administrator of a university department and # you want to determine each applicant’s chance of admission based on their # results on two exams. You have historical data from previous applicants # that you can use as a training set for logistic regression. For each training # example, you have the applicant’s scores on two exams and the admissions # decision. # # 也就是数据集有三个变量,分别是 exam1和exam2的分数,以及目标 admissions # 换用pandas来读数据 data1 = pd.read_csv('./data/ex2data1.txt',names=['exam1','exam2','admission']) data1.head() # ### 绘图展示下 # + positive = data1.loc[data1['admission']==1] # 正样本 negative = data1.loc[data1['admission']==0] # 负样本 plt.figure(figsize=(10,6)) plt.plot(positive.iloc[:,0], positive.iloc[:,1], 'k+', label='Admitted') plt.plot(negative.iloc[:,0], negative.iloc[:,1], 'yo', label='Not admitted') plt.xlabel('Exam 1 score') plt.ylabel('Exam 2 score') plt.legend() # - # ## 初始化变量 X ,y,参数矩阵 $\theta$ # + X = data1.iloc[:,:-1] y = data1.iloc[:,-1:] X.insert(0,'x0',1) # 添加 1 列 # 转为矩阵形式 X = np.matrix(X) y = np.matrix(y) # 初始化参数矩阵 theta = np.matrix(np.zeros((1,3))) X.shape, y.shape ,theta.shape # - # ## Sigmod函数 # \\[g\left( z \right)=\frac{1}{1+{{e}^{-z}}}\\] def sigmoid(z): return 1 / (1 + np.exp(-z)) myx = np.arange(-10,10,.1) plt.plot(myx,sigmoid(myx)) plt.title("Sigmoid Function") plt.grid(True) # ## 代价函数 # # $$J(\theta)=-\frac{1}{m}\sum\limits_{i=1}^m [y^{(i)}log(h_{\theta}(x))+(1-y^{(i)})log(1-h_{\theta}(x))]$$ def computeCost(theta, X, y): theta = np.matrix(theta) X = np.matrix(X) y = np.matrix(y) cost = np.multiply(-y, np.log(sigmoid(np.dot(X , theta.T)))) - np.multiply((1-y), np.log(1-(sigmoid(np.dot(X , theta.T))))) return np.sum(cost)/((len(X))) print('初始代价:', computeCost(theta,X, y) ) # ## 梯度下降 # # $$ \theta_j = \theta_j - \alpha\dfrac{\partial}{\partial\theta_j}J(\theta_j) $$ # $$\dfrac{\partial}{\partial\theta}J(\theta)=\frac{1}{m}\sum\limits_{i=1}^m (h_{\theta}(x^{(i)})-y^{(i)})x^{(i)} $$ # 用于计算一次迭代的梯度 def computeGradient(theta,X, y): theta = np.matrix(theta) X = np.matrix(X) y = np.matrix(y) m = len(X) n = theta.shape[1] temp= np.zeros(X.shape[1]) error = sigmoid(np.dot(X,theta.T)) - y for j in range(n): temp[j] = ((1/m) * np.sum(np.multiply(error, X[:,j]))) return temp computeGradient(theta, X, y) # ## 此处要求使用fminuc函数来最小化代价,python中与该函数相近的是 fmin_tnc import scipy.optimize as opt result = opt.fmin_tnc(func=computeCost, x0=theta, fprime=computeGradient, args=(X, y)) result # ### 这是另一位大佬kaleko的解法,使用的是fmin函数 # + from scipy import optimize def optimizeTheta(mytheta,myX,myy,mylambda=0.): result = optimize.fmin(computeCost, x0=mytheta, args=(myX, myy), maxiter=400, full_output=True) return result[0], result[1] theta_new, mincost = optimizeTheta(theta,X,y) # - theta_new, mincost # ## 绘制决策边界 Decision Boundary # # 
我们知道假设函数(预测函数)$h_{\theta}(z)=g(\theta^{T}x)=\dfrac{1}{1+e^{-\theta^{T}x}}$,当我们得到训练好的参数后, # # 可知 $h_{\theta}(z)=g(\theta_0+\theta_1x_1+\theta_2x_2)$ # # 由于sigmoid函数: 当$z>0,g(z)>0.5$,当$z<0,g(z)<0.5$,所以决策边界就是当$h_{\theta}(z)=g(\theta_0+\theta_1x_1+\theta_2x_2)=0.5$时才发生 # # 令 $z=\theta_0+\theta_1x_1+\theta_2x_2=0$,则 $x_2=(-\frac{1}{\theta_2})(\theta_0+\theta_1x_1)$ # 预测函数 # 假设现在有一学生 exam1 考了45分,exam2考了85分,预测其通过率 X_test = np.array([1.,45.,85.]) sigmoid(np.dot(X_test , theta.T)) # + # 先绘制原先的散点图 positive = data1.loc[data1['admission']==1] # 正样本 negative = data1.loc[data1['admission']==0] # 负样本 plt.figure(figsize=(10,6)) plt.plot(positive.iloc[:,0], positive.iloc[:,1], 'k+', label='Admitted') plt.plot(negative.iloc[:,0], negative.iloc[:,1], 'yo', label='Not admitted') # 绘制决策边界 boundary_x = X[:,1] boundary_y = (-1./theta_new[2]) * (X[:,1] * theta_new[1] + theta_new[0]) plt.plot(boundary_x,boundary_y, 'b', label='decision boundary') plt.xlabel('Exam 1 score') plt.ylabel('Exam 2 score') plt.legend() # - # # 加入正则化的逻辑回归 # ## 查看数据 # # In this part of the exercise, you will implement regularized logistic regression to predict whether microchips from a fabrication plant passes quality assurance (QA). During QA, each microchip goes through various tests to ensure it is functioning correctly. # # Suppose you are the product manager of the factory and you have the test results for some microchips on two different tests. From these two tests,you would like to determine whether the microchips should be accepted or rejected. To help you make the decision, you have a dataset of test results on past microchips, from which you can build a logistic regression model # # 数据集有三个变量,前两个是微芯片的两个不同的历史测试得分,后一个是目标值,是历史测试结果(通过或不通过QA) data2 = pd.read_csv('./data/ex2data2.txt',names=['test1','test2','accepted']) data2.head() # + positive = data2.loc[data2['accepted']==1] # 正样本 negative = data2.loc[data2['accepted']==0] # 负样本 plt.figure(figsize=(10,6)) plt.plot(positive.iloc[:,0], positive.iloc[:,1], 'k+', label='Accepted') plt.plot(negative.iloc[:,0], negative.iloc[:,1], 'yo', label='Not Accepted') plt.xlabel('Test 1 score') plt.ylabel('Test 2 score') plt.legend() # - # ## 通过Feature mapping增加一些特征 # # $mapFeature(x) = \begin{bmatrix}1\\ x_1 \\x_2\\x_1^2\\x_1x_2\\x_2^2\\x_1^3 \\...\\x_1x_2^5\\x_2^6\end{bmatrix}$ # # 最后共28维特征 # + x1 = data2['test1'] x2 = data2['test2'] data2.insert(3, 'x0', 1) degree = 6 for i in range(1, degree+1): for j in range(0, i+1): colsname = ('' if (i-j)==0 else ('x_1'+str(i-j))) + ('' if (j)==0 else ('x_2'+str(j))) data2[colsname] = np.power(x1, i-j) * np.power(x2, j) data2.drop('test1', axis=1, inplace=True) data2.drop('test2', axis=1, inplace=True) data2.head() # - # ## 接下来同样初始化变量 X2 ,y2,参数矩阵 $\theta_2$ # + X_2 = data2.iloc[:,1:] y_2 = data2.iloc[:,:1] # 转为矩阵形式 X_2 = np.matrix(X_2) y_2 = np.matrix(y_2) # 初始化参数矩阵 theta_2 = np.matrix(np.zeros((1,X_2.shape[1]))) X_2.shape, y_2.shape ,theta_2.shape # - # ## 正则化代价函数 # # $$J(\theta)=\frac{1}{m}\sum\limits_{i=1}^m [-y^{(i)}log(h_{\theta}(x))-(1-y^{(i)})log(1-h_{\theta}(x))]+\frac{\lambda}{2m}\sum\limits_{j=1}^{n}\theta_j^2$$ # # 注意:$\theta_0$不参与正则化 def computeRegCost(theta, X, y, mylambda): theta = np.matrix(theta) X = np.matrix(X) y = np.matrix(y) m = len(X) cost = np.multiply(-y, np.log(sigmoid(np.dot(X , theta.T)))) - np.multiply((1-y), np.log(1-(sigmoid(np.dot(X , theta.T))))) reg = (mylambda/(2*m)) * (np.sum(np.power(theta[:,1:theta.shape[1]], 2))) return (np.sum(cost)/((len(X))) + reg) # + mylambda = 1. 
computeRegCost(theta_2, X_2, y_2, mylambda) # - # ## 梯度下降 # # $ Repeat\ until\ convergence\{\\ \theta_0:=\theta_0-\alpha\frac{1}{m}\sum\limits_{i=1}^m [(h_{\theta}(x^{(i)})-y^{(i)})x_0^{(i)}] \\ \begin{aligned} \theta_j & :=\theta_j-\alpha\frac{1}{m}\sum\limits_{i=1}^m [(h_{\theta}(x^{(i)})-y^{(i)})x^{(i)} + \frac{\lambda}{m}\theta_j] \\& for \ j=1,2,..,n\\\ \end{aligned} $ # # # 计算迭代一次的梯度 def computeRegGradient(theta, X, y, mylambda): theta = np.matrix(theta) X = np.matrix(X) y = np.matrix(y) m = len(X) n = theta.shape[1] temp= np.zeros(X.shape[1]) cost = sigmoid(np.dot(X , theta.T)) - y for j in range(n): if (j == 0): temp[j] = (1/m) * np.sum(np.multiply(cost, X[:,j])) else: temp[j] = (1/m) * np.sum(np.multiply(cost, X[:,j]) + (mylambda/m) * theta[:,j]) return temp computeRegGradient(theta_2, X_2, y_2, mylambda) # ## 最小化参数 # ### 这里参考的是kaleko大佬的代码,使用了minimize函数 # + def optimizeRegularizedTheta(mytheta,myX,myy,mylambda): result = optimize.minimize(computeRegCost, mytheta, args=(myX, myy, mylambda), method='Newton-CG', jac=computeRegGradient ) return np.array([result.x]), result.fun theta2_new, mincost = optimizeRegularizedTheta(theta_2,X_2,y_2,mylambda) theta2_new, mincost # - # ### 这是另一种优化方式 # + result2 = opt.fmin_tnc(func=computeRegCost, x0=theta_2, fprime=computeRegGradient, args=(X_2, y_2, mylambda)) computeRegCost(np.matrix(result2[0]), X_2, y_2, mylambda) # - # ## 绘制决策边界 # # 我们知道决策边界就是当 $h_{\theta}(z)=g(\theta^{T}x)=0.5$ 也就是 $z=\theta^{T}x=0$ 时才发生,所以目的就是找出使等式成立的 $x_1,x_2$ # ### 这里使用的也是kaleko大佬的代码 def mapFeature( x1col, x2col ): """ Function that takes in a column of n- x1's, a column of n- x2s, and builds a n- x 28-dim matrix of featuers as described in the homework assignment """ degrees = 6 out = np.ones( (x1col.shape[0], 1) ) for i in range(1, degrees+1): for j in range(0, i+1): term1 = x1col ** (i-j) term2 = x2col ** (j) term = (term1 * term2).reshape( term1.shape[0], 1 ) out = np.hstack(( out, term )) return out def darw_data_and_boundary(x_plt, y_plt, theta, mylambda): z_plt = np.zeros((len(x_plt),len(y_plt))) for i in range(len(x_plt)): for j in range(len(y_plt)): myfeaturesij = mapFeature(np.array([x_plt[i]]),np.array([y_plt[j]])) z_plt[i][j] = np.dot(theta,myfeaturesij.T) z_plt = z_plt.transpose() plt.plot(positive.iloc[:,0], positive.iloc[:,1], 'k+', label='Accepted') plt.plot(negative.iloc[:,0], negative.iloc[:,1], 'yo', label='Not Accepted') plt.xlabel('Test 1 score') plt.ylabel('Test 2 score') plt.legend() u, v = np.meshgrid( x_plt, y_plt ) mycontour = plt.contour( u, v, z_plt, [0]) plt.title("Decision Boundary- lambda=%.2f" %mylambda) # + x1_min = data2['x_11'].min() x1_max = data2['x_11'].max() x2_min = data2['x_21'].min() x2_max = data2['x_21'].max() x_plt = np.linspace(x1_min,x1_max,50) y_plt = np.linspace(x2_min,x2_max,50) plt.figure(figsize=(10,6)) darw_data_and_boundary(x_plt, y_plt, theta2_new, mylambda) # - # ### 修改lambda值,观察不同情况下的决策边界 # + plt.figure(figsize=(16,10)) plt.subplot(221) mylambda1 = 0. theta2_new1, mincost1 = optimizeRegularizedTheta(theta_2,X_2,y_2,mylambda1) darw_data_and_boundary(x_plt, y_plt, theta2_new1,mylambda1) plt.subplot(222) mylambda2 = 1. theta2_new2, mincost2 = optimizeRegularizedTheta(theta_2,X_2,y_2,mylambda2) darw_data_and_boundary(x_plt, y_plt, theta2_new2,mylambda2) plt.subplot(223) mylambda3 = 10. theta2_new3, mincost3 = optimizeRegularizedTheta(theta_2,X_2,y_2,mylambda3) darw_data_and_boundary(x_plt, y_plt, theta2_new3,mylambda3) plt.subplot(224) mylambda4 = 100. 
theta2_new4, mincost4 = optimizeRegularizedTheta(theta_2,X_2,y_2,mylambda4) darw_data_and_boundary(x_plt, y_plt, theta2_new4,mylambda4) # - # ## 使用sklearn工具包 # + from sklearn import linear_model#调用sklearn的线性回归包 model = linear_model.LogisticRegression(penalty='l2', C=1.0) yy2 = np.array(y_2) model.fit(X_2, yy2.ravel()) model.score(X_2, yy2) # 模型准确率 # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Author : # ### Topic : MatplotLib Pracetice : Python Practice # ### Email : # What Is Python Matplotlib? # # matplotlib.pyplot is a plotting library used for 2D graphics in python programming language. It can be used in python scripts, shell, web application servers and other graphical user interface toolkits. # # Is Matplotlib Included in Python? # # Matplotlib is not a part of the Standard Libraries which is installed by default when Python, there are several toolkits which are available that extend python matplotlib functionality. Some of them are separate downloads, others can be shipped with the matplotlib source code but have external dependencies. # # Basemap: It is a map plotting toolkit with various map projections, coastlines and political boundaries. # # Cartopy: It is a mapping library featuring object-oriented map projection definitions, and arbitrary point, line, polygon and image transformation capabilities. # # Excel tools: Matplotlib provides utilities for exchanging data with Microsoft Excel. # # Mplot3d: It is used for 3-D plots. # # Natgrid: It is an interface to the natgrid library for irregular gridding of the spaced data. # # You may go through this recording of Python Matplotlib where our instructor has explained how to download Matplotlib in Python and the topics in a detailed manner with examples that will help you to understand this concept better. 
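# Since Matplotlib ships separately from the standard library, it helps to confirm the installation before plotting. A minimal check, assuming Matplotlib was installed with pip or conda:
# +
import matplotlib
print(matplotlib.__version__)    # version of the installed package
print(matplotlib.get_backend())  # rendering backend currently in use
# -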
# # Import Libraries import numpy as np import matplotlib.pyplot as plt # # Functional Method x = np.linspace(0, 5, 11) y = x**2 x y plt.plot(x, y) plt.show() # ## Title and Labels # + plt.xlabel('X values') plt.ylabel('Y values') plt.title('Graph of $y = x^2$') plt.plot(x, y) plt.show() # + x1 = np.linspace(-np.pi, np.pi, 256) x2 = np.linspace(-np.pi, np.pi, 256) y1 = np.cos(x1) y2 = np.sin(x2) # - plt.plot(x1, y1, x2, y2) # ## Figure Size and dpi # + plt.figure(figsize = (10, 6), dpi = 80) plt.plot(x1, y1) plt.plot(x2, y2) plt.show() # - # ## Styling Figure # + plt.figure(figsize = (10, 6), dpi = 80) plt.plot(x1, y1, 'r-.') plt.plot(x2, y2, 'b--') plt.show() # + plt.figure(figsize = (10, 6), dpi = 80) plt.plot(x1, y1, color = 'purple') plt.plot(x2, y2, color = 'green') plt.show() # + plt.figure(figsize = (10, 6), dpi = 80) plt.plot(x1, y1, color = 'purple', lw = 2.5) plt.plot(x2, y2, color = 'green', lw = 2.5) plt.show() # + plt.figure(figsize = (10, 6), dpi = 80) plt.plot(x1, y1, color = 'purple', lw = 2.5, ls = '--') plt.plot(x2, y2, color = 'green', lw = 2.5, ls = '-.') plt.show() # - # ## xticks and yticks # + plt.figure(figsize = (10, 6), dpi = 80) plt.plot(x1, y1, color = 'purple', lw = 2.5, ls = '--') plt.plot(x2, y2, color = 'green', lw = 2.5, ls = '-.') plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi]) plt.yticks([-1, 0, 1]) plt.show() # + plt.figure(figsize = (10, 6), dpi = 80) plt.plot(x1, y1, color = 'purple', lw = 2.5, ls = '--') plt.plot(x2, y2, color = 'green', lw = 2.5, ls = '-.') plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], [r'$-\pi$', r'$-\pi/2$', r'$0$',r'$\pi/2$',r'$\pi$']) plt.yticks([-1, 0, 1]) plt.show() # - # ## Legend # + plt.figure(figsize = (10, 6), dpi = 80) plt.plot(x1, y1, color = 'purple', lw = 2.5, ls = '--', label = '$y = cosx$') plt.plot(x2, y2, color = 'green', lw = 2.5, ls = '-.', label = '$y = sinx$') plt.xlabel('X') plt.ylabel('Y') plt.title('Graph') plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], [r'$-\pi$', r'$-\pi/2$', r'$0$',r'$\pi/2$',r'$\pi$']) plt.yticks([-1, 0, 1]) plt.legend(loc = 0) plt.show() # + # plt.legend?? # - # ## Marker # + plt.figure(figsize = (10,6), dpi = 80) plt.plot([1,2,3,4,5], [1,4,9,16, 25], color = 'red', lw = 2.5, marker = '^', mec = 'blue', mew = 5) plt.show() # + # plt.plot?? 
# - # ## Subplot # + x = np.array([1,2,3,4,5]) y1 = x**2 y2 = np.exp(x) # + plt.figure(figsize = (10, 6), dpi = 80) plt.subplot(1, 2, 1) plt.plot(x, y1, 'r') plt.subplot(1,2, 2) plt.plot(x, y2, 'b-.') # - # # Object Oriented Methods x = np.array([1,2,3,4,5,6,7,8,9,10]) y = x**2 # + fig = plt.figure(figsize = (10,6)) axes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) axes.plot(x, y) plt.show() # - # ## Axes in Figure # + fig = plt.figure(figsize = (10,6)) axes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8]) axes2 = fig.add_axes([0.15, 0.55, 0.4, 0.3]) axes1.plot(x,y, 'r--') axes1.set_title('Large Plot') axes2.plot(y,x, 'g-.') axes2.set_title('Small Plot') # - # ## Subplots # + fig = plt.figure(figsize = (10, 6)) axes = fig.subplots(3, 3) # - type(axes) # + fig = plt.figure(figsize = (10, 6)) axes = fig.subplots(2, 3) # + fig = plt.figure(figsize = (10, 6)) axes = fig.subplots(2, 3) axes[0][0].plot(x,y,'r--') axes[0][0].set_title('0,0') axes[0][1].plot(x,y,'b-.') axes[0][1].set_title('0,1') axes[0][2].plot(x,y,'go') axes[0][2].set_title('0,2') axes[1,0].plot(y,x,'r--') axes[1,0].set_title('1,0') fig.tight_layout() # - # # 2D Plot # ## Line Plot t = np.arange(0, 2,0.01) s = 1 + np.sin(2*np.pi*t) # + fig, ax = plt.subplots() ax.plot(t, s) ax.set(xlabel = 'Time', ylabel = 'Voltages', title = 'Time vs Voltage') ax.grid() plt.show() # - # ## Scatter Plot # + x = np.random.randint(100, size = 50) y = np.random.randint(100, size = 50) # - plt.scatter(x, y, marker = 'o') # # 3D Plot # ## Line Plot from mpl_toolkits import mplot3d # + fig = plt.figure(figsize= (10,6)) ax = plt.axes(projection = '3d') # + fig = plt.figure(figsize= (10,6)) ax = plt.axes(projection = '3d') zline = np.linspace(0, 15, 1000) xline = np.sin(zline) yline = np.cos(zline) ax.plot3D(xline, yline, zline, 'red') # - # ## 3D Scatter Plot # + fig = plt.figure(figsize = (10, 6)) ax = plt.axes(projection = '3d') x = np.random.randint(100, size = 200) y = np.random.randint(100, size = 200) z = np.random.randint(100, size = 200) ax.scatter3D(x, y, z, c = 'red') # - # ## 3D Surface Plot def f(x, y): return np.sin(np.sqrt(x**2+y**2)) # + r = np.linspace(0, 6, 20) theta = np.linspace(-0.9*np.pi, 0.8*np.pi, 40) r,theta = np.meshgrid(r, theta) X = r*np.sin(theta) Y = r*np.cos(theta) Z = f(X,Y) # + fig = plt.figure(figsize = (10, 6)) ax = plt.axes(projection = '3d') ax.plot_surface(X, Y, Z, cmap = 'winter') plt.show() # - ax.view_init(60, 0) fig # # Saving Figure fig.savefig('test.pdf', dpi = 200) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Day 1 https://github.com/c17hawke/Perceptron_implimentation_FSDS # !pip install joblib # + import os import pandas as pd import matplotlib.pyplot as plt import joblib import numpy as np plt.style.use('fivethirtyeight') # - class Perceptron: def __init__(self, eta: float=None, epochs: int=None): self.weights = np.random.randn(3) * 1e-4 self.eta = eta # Learning rate self.epochs = epochs # iterations def _z_outcome(self, inputs, weights): return np.dot(inputs, weights) def activation_function(self,z): return np.where(z > 0, 1, 0) def fit(self, x, y): self.x = x self.y = y x_with_bias = np.c_[self.x, -np.ones((len(self.x),1))] print(f"X with bias: \n{x_with_bias}") for epoch in range(self.epochs): print("--"*10) print(f"for epoch >> {epoch+1}") print("--"*10) z = self._z_outcome(x_with_bias, self.weights) y_hat = self.activation_function(z) 
print(f"Predicted value after forward pass: \n{y_hat}") self.error = self.y - y_hat print(f"error: \n{self.error}") self.weights = self.weights + self.eta * np.dot(x_with_bias.T, self.error) print(f"updated weights afetr epoch: {epoch+1}/{self.epochs}: \n{self.weights}") print(f"##"*10) def predict(self,x): x_with_bias = np.c_[x, -np.ones((len(x), 1))] z = self._z_outcome(x_with_bias, self.weights) return self.activation_function(z) # + OR = { 'x1':[0,0,1,1], 'x2':[0,1,0,1], 'y':[0,1,1,1] } df_OR = pd.DataFrame(OR) df_OR # - def prepare_data(df, target_col='y'): x = df.drop(target_col, axis=1) y = df[target_col] return x,y # + x, y = prepare_data(df_OR) ETA = 0.1 EPOCHS = 10 model_or = Perceptron(eta=ETA, epochs=EPOCHS) model_or.fit(x,y) # - model_or.predict(x=[[1,0]]) # + AND = { 'x1':[0,0,1,1], 'x2':[0,1,0,1], 'y':[0,0,0,1] } df_AND = pd.DataFrame(AND) df_AND x, y = prepare_data(df_AND) ETA = 0.1 EPOCHS = 10 model_and = Perceptron(eta=ETA, epochs=EPOCHS) model_and.fit(x,y) # + XOR = { 'x1':[0,0,1,1], 'x2':[0,1,0,1], 'y':[0,1,1,0] } df_XOR = pd.DataFrame(XOR) df_XOR x, y = prepare_data(df_XOR) ETA = 0.1 EPOCHS = 10 model_xor = Perceptron(eta=ETA, epochs=EPOCHS) model_xor.fit(x,y) # + NAND = { 'x1':[0,0,1,1], 'x2':[0,1,0,1], 'y':[1,0,0,0] } df_NAND = pd.DataFrame(NAND) df_NAND x, y = prepare_data(df_NAND) ETA = 0.1 EPOCHS = 10 model_nand = Perceptron(eta=ETA, epochs=EPOCHS) model_nand.fit(x,y) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Please Note: The content of this page is mainly extracted from [developer.twitter.com](https://developer.twitter.com) # # The Academic Research product track # This guideline describes Twitter's special Academic Research (AR) product track, designed to fulfill the requirements of the academic research community, through greater levels of access to public Twitter data for free. Here, we will discuss: # # 1. What is AR product track? # 2. Who can apply for AR product track? # 3. How to apply for AR product track? # ## 1. What is AR product track? # # The Academic Research product track was built to serve the needs of the academic research community via specialized, greater levels of access to public Twitter data. The AR product track is functionality designed to get more precise and complete data for analyzing the public conversation, at no cost for qualifying researchers. Because of an advanced level of access, the Academic Research product track is reserved solely for non-commercial use. # # # ### Unique advantages # With the new Academic Research product track, qualified researchers now have access to all v2 endpoints released to date, as well as: # # * **Free access to the full history** of public conversation via the full-archive search endpoint, which was previously limited to paid premium or enterprise customers # * **Higher levels of access** to the Twitter developer platform for free, including a significantly higher monthly Tweet volume cap of 10 million (20x higher than what’s available on the Standard product track today) # * **More precise filtering capabilities** across all v2 endpoints to limit data collection to what is relevant for your study and minimize data cleaning requirements # * **New technical and methodological guides** to maximize the success of your studies # # ## 2. Who can apply for AR product track? 
# Academic researchers with specific research objectives are encouraged to apply. This includes graduate students working on a thesis, PhD candidates working on a dissertation, or research scholars affiliated with or employed by an academic institution. # # ### Selection criteria # You’re encouraged to apply for access to the Academic Research track if you meet the following criteria: # # * You are either a master’s student, doctoral candidate, post-doc, faculty, or research-focused employee at an academic institution or university. # * You have a clearly defined research objective, and you have specific plans for how you intend to use, analyze, and share Twitter data from your research. More about Project details [here](https://developer.twitter.com/en/products/twitter-api/academic-research/application-info). # * You will use this product track for non-commercial purposes. More about non-commercial use [here](https://developer.twitter.com/en/developer-terms/commercial-terms). # # If you don’t meet all of the requirements above, the [Standard product track](https://developer.twitter.com/en/products/twitter-api/standard) might be a better fit. This track is ideal for commercial research, learning how to use the Twitter API, teaching, and building for fun or good causes. # # ## 3. How to apply for AR product track? # New and existing Twitter developers will need to complete an Academic Research application to gain access to this track. Here, you are asked about your credentials as an academic researcher, your research project, how you plan to use Twitter data, and how you plan to share your work. # # ### Steps to take # To apply for a AR product track: # # 1. Go to https://developer.twitter.com/en/portal/petition/academic/is-it-right-for-you # 2. Log into your Twitter account *OR* Click on `Apply for an account` # 3. Click on `Start Academic Research application` # 4. Fill in the questions (see **Questions** section below for the questions you will be asked in the application) # # N.B. Once you submit your application for the Academic Research product track, you *cannot edit* these details. We recommend that you save the answers on your application, in case you want to reference these responses at a later date. The information you provide on this application will not be shared with anyone outside of Twitter. # # When the application in done, you should get a response within about 7 days. # # ### Questions # Below are the questions you will be asked in the application, so you know which materials you may need to prepare before getting started. # # #### A. Your academic profile # This section is used to verify your scholarly identity and association with an academic institution: # # * Your **full name** as it is appears on your institution’s documentation # * Links to webpages that help **establish your identity**; provide one or more of the following: # * A link to your profile in your institution’s faculty or student directory # * A link to your Google Scholar profile # * A link to your research group, lab or departmental website where you are listed # * Information about your **academic institution**: its name, country, state, and city # * Your **department**, **school**, or **lab name** # * Your **academic field** of study or discipline at this institution # * Your **current role** as an academic (whether you are a graduate student, doctoral candidate, post-doc, professor, research scientist, or other faculty member) # # #### B. 
Your research project details # This section is needed to understand how you intend to use the Twitter API and Twitter data. Your answers to these questions illustrate that you have a clearly defined academic research project in mind. This information is also used to ensure projects adhere to the Twitter Developer Agreement and Policy, including the terms for non-commercial use, and that the safety and privacy of people on Twitter are protected. Please be detailed, thoughtful, and thorough in your response to ensure reviewers are equipped to render a decision. # # N.B. You *can’t edit* your responses to these questions later, so you’re encouraged to spend some extra time here: # * What is the **name** of your research project? # * Does this project receive **funding** from outside your academic institution? If yes, please list all your sources of funding. This question helps to understand the scale of your project. # * In English, describe your **research project**. Minimum 200 characters. # * What is your project about? # * What is your primary research question, hypothesis, or objective? # * What do you hope to learn? # * In English, describe how Twitter data via the **Twitter API** will be used in your research project. Minimum 200 characters. # * In other words, how and why will you use Twitter data in this project? # * What purpose does Twitter data serve as a datasource for your project? # * While this question may seem very similar to the prior question, it is aimed to understand the role Twitter data plays in your project specifically. For example, are you using this data to study Twitter itself as a subject, or are you using this data as a method (or one of many methods) to study a different topic? # * In English, describe your **methodology** for analyzing Twitter data, Tweets, and/or Twitter users. Minimum 200 characters. # * In other words, what types of analyses do you intend to perform with Twitter data? # * This should be more descriptive of your tactics than the question above. # * Will your research present **Twitter data** individually or in aggregate? # * Think of it as presenting individual Tweets or users vs. aggregate statistics or models. # * In English, describe how you will **share the outcomes** of your research (include tools, data, and/or other resources you hope to build and share). Minimum 200 characters. # * We would like to know how and where you are interested in publishing or sharing your results. # * Will your analysis make Twitter content or derived information available to a **government entity**? # * If yes, list all government entities you intend to provide Twitter content or derived information to under this use case. # * Note that your own academic institution does not apply in this question. # * Minimum 200 characters. 
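# Once an application is approved, the full-archive search endpoint mentioned earlier can be queried like any other v2 endpoint. A minimal sketch, assuming an approved bearer token is stored in a `BEARER_TOKEN` environment variable; the URL and the `query`/`max_results` parameters follow Twitter's public v2 documentation:
# +
import os
import requests

# Authenticate with the Academic Research bearer token (assumed to be set in the environment)
headers = {"Authorization": f"Bearer {os.environ['BEARER_TOKEN']}"}
params = {"query": "from:TwitterDev -is:retweet", "max_results": 10}

response = requests.get("https://api.twitter.com/2/tweets/search/all", headers=headers, params=params)
response.raise_for_status()

# Print a short preview of each returned Tweet
for tweet in response.json().get("data", []):
    print(tweet["id"], tweet["text"][:80])
# -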
# # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + papermill={"duration": 0.009778, "end_time": "2021-07-29T04:02:31.896578", "exception": false, "start_time": "2021-07-29T04:02:31.886800", "status": "completed"} tags=[] # + [markdown] papermill={"duration": 0.009778, "end_time": "2021-07-29T04:02:31.896578", "exception": false, "start_time": "2021-07-29T04:02:31.886800", "status": "completed"} tags=[] # # # + _cell_guid="93620de9-b3b8-4129-8b7b-0b276cc58d68" _uuid="932f2c25-03ca-46e7-a87f-7daadf0247a6" jupyter={"outputs_hidden": false} papermill={"duration": 0.021164, "end_time": "2021-07-29T04:02:31.927913", "exception": false, "start_time": "2021-07-29T04:02:31.906749", "status": "completed"} tags=[] import os from pathlib import Path in_folder_path = Path('../input/k/leolu1998/clrp-finetune-roberta-large') scripts_dir = Path(in_folder_path / 'scripts') # + papermill={"duration": 7.942391, "end_time": "2021-07-29T04:02:39.878942", "exception": false, "start_time": "2021-07-29T04:02:31.936551", "status": "completed"} tags=[] os.chdir(scripts_dir) exec(Path("imports.py").read_text()) exec(Path("config.py").read_text()) exec(Path("dataset.py").read_text()) exec(Path("model.py").read_text()) os.chdir('/kaggle/working') # + _cell_guid="f182401e-add4-489a-97ad-253995c77ea4" _uuid="db10450f-b0f8-4687-aed6-d1a986657ce6" jupyter={"outputs_hidden": false} papermill={"duration": 124.510896, "end_time": "2021-07-29T04:04:44.398451", "exception": false, "start_time": "2021-07-29T04:02:39.887555", "status": "completed"} tags=[] test_df = pd.read_csv("/kaggle/input/commonlitreadabilityprize/test.csv") tokenizer = torch.load('../input/mytokenizers/roberta-tokenizer.pt') models_folder_path = Path(in_folder_path / 'models') models_preds = [] n_models = 10 for model_num in range(n_models): print(f'Inference#{model_num+1}/{n_models}') test_ds = CLRPDataset(data=test_df, tokenizer=tokenizer, max_len=Config.max_len, is_test=True) test_sampler = SequentialSampler(test_ds) test_dataloader = DataLoader(test_ds, sampler = test_sampler, batch_size=Config.batch_size) model = torch.load(models_folder_path / f'best_model_{model_num}.pt').to(Config.device) all_preds = [] model.eval() for step,batch in enumerate(test_dataloader): sent_id, mask = batch['input_ids'].to(Config.device), batch['attention_mask'].to(Config.device) with torch.no_grad(): preds = model(sent_id, mask) all_preds += preds.flatten().cpu().tolist() models_preds.append(all_preds) del model, tokenizer, test_dataloader, test_sampler gc.collect() torch.cuda.empty_cache() # + [markdown] papermill={"duration": 0.011979, "end_time": "2021-07-29T04:04:44.422070", "exception": false, "start_time": "2021-07-29T04:04:44.410091", "status": "completed"} tags=[] # # ENS 2 # + papermill={"duration": 55.753162, "end_time": "2021-07-29T04:05:40.186466", "exception": false, "start_time": "2021-07-29T04:04:44.433304", "status": "completed"} tags=[] import os from pathlib import Path in_folder_path2 = Path('../input/clrprobertalarge463-lb') scripts_dir2 = Path(in_folder_path2 / 'scripts') os.chdir(scripts_dir2) exec(Path("imports.py").read_text()) exec(Path("config.py").read_text()) exec(Path("dataset.py").read_text()) exec(Path("model.py").read_text()) os.chdir('/kaggle/working') test_df = pd.read_csv("/kaggle/input/commonlitreadabilityprize/test.csv") tokenizer = 
torch.load('../input/mytokenizers/roberta-tokenizer.pt') models_folder_path2 = Path(in_folder_path2 / 'models') models_preds2 = [] n_models = 5 for model_num in range(n_models): print(f'Inference#{model_num+1}/{n_models}') test_ds = CLRPDataset(data=test_df, tokenizer=tokenizer, max_len=Config.max_len, is_test=True) test_sampler = SequentialSampler(test_ds) test_dataloader = DataLoader(test_ds, sampler = test_sampler, batch_size=Config.batch_size) model2 = torch.load(models_folder_path2 / f'best_model_{model_num}.pt').to(Config.device) all_preds2 = [] model2.eval() for step,batch in enumerate(test_dataloader): sent_id, mask = batch['input_ids'].to(Config.device), batch['attention_mask'].to(Config.device) with torch.no_grad(): preds2 = model2(sent_id, mask) all_preds2 += preds2.flatten().cpu().tolist() models_preds2.append(all_preds2) #### del model2, tokenizer, test_dataloader, test_sampler gc.collect() torch.cuda.empty_cache() # + [markdown] papermill={"duration": 0.013301, "end_time": "2021-07-29T04:05:40.214563", "exception": false, "start_time": "2021-07-29T04:05:40.201262", "status": "completed"} tags=[] # # ENS 3 # + papermill={"duration": 21.6759, "end_time": "2021-07-29T04:06:01.903271", "exception": false, "start_time": "2021-07-29T04:05:40.227371", "status": "completed"} tags=[] import os from pathlib import Path in_folder_path3 = Path('../input/mymodelrobertabase') scripts_dir3 = Path(in_folder_path3 / 'scripts') os.chdir(scripts_dir3) exec(Path("imports.py").read_text()) exec(Path("config.py").read_text()) exec(Path("dataset.py").read_text()) exec(Path("model.py").read_text()) os.chdir('/kaggle/working') test_df = pd.read_csv("/kaggle/input/commonlitreadabilityprize/test.csv") tokenizer = torch.load('../input/mytokenizers/roberta-tokenizer.pt') models_folder_path3 = Path(in_folder_path3 / 'models') models_preds3 = [] n_models = 5 for model_num in range(n_models): print(f'Inference#{model_num+1}/{n_models}') test_ds = CLRPDataset(data=test_df, tokenizer=tokenizer, max_len=Config.max_len, is_test=True) test_sampler = SequentialSampler(test_ds) test_dataloader = DataLoader(test_ds, sampler = test_sampler, batch_size=Config.batch_size) model3 = torch.load(models_folder_path3 / f'best_model_{model_num}.pt').to(Config.device) all_preds3 = [] model3.eval() for step,batch in enumerate(test_dataloader): sent_id, mask = batch['input_ids'].to(Config.device), batch['attention_mask'].to(Config.device) with torch.no_grad(): preds3 = model3(sent_id, mask) all_preds3 += preds3.flatten().cpu().tolist() models_preds3.append(all_preds3) # + papermill={"duration": 0.024919, "end_time": "2021-07-29T04:06:01.942474", "exception": false, "start_time": "2021-07-29T04:06:01.917555", "status": "completed"} tags=[] all_preds # + papermill={"duration": 0.023637, "end_time": "2021-07-29T04:06:01.982583", "exception": false, "start_time": "2021-07-29T04:06:01.958946", "status": "completed"} tags=[] all_preds2 # + papermill={"duration": 0.022981, "end_time": "2021-07-29T04:06:02.019573", "exception": false, "start_time": "2021-07-29T04:06:01.996592", "status": "completed"} tags=[] all_preds3 # + _cell_guid="ed2be4e8-71fe-4eb2-a4cf-7cc6b30c5519" _uuid="6116227b-04bd-4c60-988b-e244833da61d" jupyter={"outputs_hidden": false} papermill={"duration": 0.306695, "end_time": "2021-07-29T04:06:02.340672", "exception": false, "start_time": "2021-07-29T04:06:02.033977", "status": "completed"} tags=[] models_preds_ens = 
0.51*np.array(models_preds).mean(axis=0)+(np.array(models_preds2)*0.3+np.array(models_preds3)*0.2).mean(axis=0) print(models_preds_ens.shape) print(models_preds_ens) all_preds_ens = models_preds_ens result_df = pd.DataFrame( { 'id': test_df.id, 'target': all_preds_ens }) result_df.to_csv('submission.csv', index=False) result_df.head(10) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction import ipyscales # Make a default scale, and list its trait values: scale = ipyscales.LinearScale() print(', '.join('%s: %s' % (key, getattr(scale, key)) for key in sorted(scale.keys) if not key.startswith('_'))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import time # + start_array = time.time() import random def random_sort(n): return sorted([random.random() for i in range(n)]) random_sort(5000000) end_array = time.time() print(end_array - start_array) # + start_ols = time.time() from sklearn import datasets import statsmodels.api as sm import numpy as np import pandas as pd data = datasets.load_boston() df = pd.DataFrame(data.data, columns=data.feature_names) target = pd.DataFrame(data.target, columns=["MEDV"]) X = df[["RM", "LSTAT", "CRIM", "NOX", "INDUS"]] y = target["MEDV"] model = sm.OLS(y, X).fit() predictions = model.predict(X) end_ols = time.time() print(end_ols - start_ols) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %pylab inline # + ## uncomment to install LFPy and dependencies in the EBRAINS VM running this notebook: # # !pip install LFPy # - # # # # # Example 10: Extracellular stimulation of neurons # This is an example of **``LFPy``** running in an **``Jupyter notebook``**. To run through this example code and produce output, press **````** in each code block below. 
# # First step is to **import ``LFPy``** and other packages for analysis and plotting: import LFPy import MEAutility as mu import numpy as np from matplotlib.gridspec import GridSpec # Create some dictionarys with parameters for cell: cellParameters = { 'morphology' : 'morphologies/L5_Mainen96_LFPy.hoc', 'tstart' : -50, # ignore startup transients 'tstop' : 100, 'dt' : 2**-4, 'v_init' : -60, 'passive' : False, } # Create an helper function to instantiate a **cell** object given a set of parameters: def instantiate_cell(cellParameters): cell = LFPy.Cell(**cellParameters, delete_sections=True) cell.set_pos(x=0, y=0, z=0) cell.set_rotation(z=np.pi) # insert hh mechanism in everywhere, reduced density elsewhere for sec in cell.allseclist: sec.insert('hh') if not 'soma' in sec.name(): # reduce density of Na- and K-channels to 5% in dendrites sec.gnabar_hh = 0.006 sec.gkbar_hh = 0.0018 return cell def plot_results(cell, electrode): fig = figure(figsize=(10, 6)) gs = GridSpec(2, 2) ax = fig.add_subplot(gs[0, 1]) im = ax.pcolormesh(np.array(cell.t_ext), cell.z.mean(axis=-1), np.array(cell.v_ext), cmap='RdBu', vmin=-100, vmax=100, shading='auto') ax.set_title('Applied extracellular potential') ax.set_ylabel('z (um)', labelpad=0) rect = np.array(ax.get_position().bounds) rect[0] += rect[2] + 0.01 rect[2] = 0.01 cax = fig.add_axes(rect) cbar = fig.colorbar(im, cax=cax, extend='both') cbar.set_label('(mV)', labelpad=0) ax = fig.add_subplot(gs[1, 1], sharex=ax) ax.plot(cell.tvec, cell.somav, 'k') ax.set_title('somatic voltage') ax.set_ylabel('(mV)', labelpad=0) ax.set_xlabel('t (ms)') ax.set_ylim([-90, 20]) ax.set_xlim(cell.tvec[0], cell.tvec[-1]) ax = fig.add_subplot(gs[:, 0]) for sec in cell.allseclist: idx = cell.get_idx(sec.name()) ax.plot(cell.x[idx], cell.z[idx], color='k') if 'soma' in sec.name(): ax.plot(cell.x[idx], cell.z[idx], color='b', lw=5) ax.plot(electrode.x, electrode.z, marker='o', color='g', markersize=3) ax.plot(electrode.x[stim_elec], electrode.z[stim_elec], marker='o', color='r', markersize=5) ax.axis([-500, 500, -400, 1200]) # Instantiate a **`LFPy.Cell`** object: cell = instantiate_cell(cellParameters) # Create an **electrode** using a commercially available design from Neuronexus: probe = mu.return_mea('Neuronexus-32') # Rotate the probe and move it so that it is in the xz plane and 50 $\mu$m away from the soma: probe.rotate(axis=[0, 0, 1], theta=90) probe.move([0, 100, 0]) # Create a pulse stimulation current: # + amp = 20000 n_pulses = 2 interpulse = 10 width = 2 dt = cell.dt t_stop = cell.tstop t_start = 20 stim_elec = 15 current, t_ext = probe.set_current_pulses(el_id=stim_elec, amp1=amp, width1=width, dt=dt, t_stop=t_stop, t_start=t_start, n_pulses=n_pulses, interpulse=interpulse) # - figure(figsize=(10, 6)) plot(t_ext, current) title("Stimulating current") xlabel('t (ms)') ylabel('(nA)') xlim(0, cell.tstop) # Create ``LFPy`` **electrode** object: electrode = LFPy.RecExtElectrode(cell=cell, probe=probe) # Enable extracellular stimulation for the **cell** using stimulating currents of the **electrode** object: v_ext = cell.enable_extracellular_stimulation(electrode, t_ext=t_ext) # Run the simulation with **`electrode`** as input to **`cell.simulate()`** cell.simulate(probes=[electrode], rec_vmem=True) # Then plot the **somatic potential**, the **extracellular field** and the **LFP** # from electrode object: plot_results(cell, electrode) np.diff(cell.t_ext), np.diff(cell.z.mean(axis=-1)) # Positive pulses close to the soma location cause an hyperpolarization in the cell. 
Let's try something else! cell = instantiate_cell(cellParameters) # Use the ``probe`` field in the **electrode** object created before to overwrite currents: # + amp = -20000 n_pulses = 2 interpulse = 10 width = 2 dt = cell.dt t_stop = cell.tstop t_start = 20 stim_elec = 15 electrode = LFPy.RecExtElectrode(cell=cell, probe=probe) current, t_ext = electrode.probe.set_current_pulses(el_id=stim_elec, amp1=amp, width1=width, dt=dt, t_stop=t_stop, t_start=t_start, n_pulses=n_pulses, interpulse=interpulse) # - v_ext = cell.enable_extracellular_stimulation(electrode, t_ext=t_ext) cell.simulate(probes=[electrode], rec_vmem=True) plot_results(cell, electrode) # Now the membrane potential is depolarizing, but stimulation is not strong enough to elicit an action potential. # Try to crank up the stimulation current to 50$\mu A$ # + amp = -75000 electrode = LFPy.RecExtElectrode(cell=cell, probe=probe) current, t_ext = electrode.probe.set_current_pulses(el_id=stim_elec, amp1=amp, width1=width, dt=dt, t_stop=t_stop, t_start=t_start, n_pulses=n_pulses, interpulse=interpulse) cell = instantiate_cell(cellParameters) v_ext = cell.enable_extracellular_stimulation(electrode, t_ext=t_ext) cell.simulate(probes=[electrode], rec_vmem=True) # - plot_results(cell, electrode) # Finally we got two spikes. We can maybe get the same effect with smaller currents and higher stimulation frequencies / number of pulses / pulse width. Try to increase the pulse width: # + amp = -30000 n_pulses = 1 interpulse = 10 width = 15 dt = cell.dt t_stop = cell.tstop t_start = 20 stim_elec = 15 electrode = LFPy.RecExtElectrode(cell=cell, probe=probe) current, t_ext = electrode.probe.set_current_pulses(el_id=stim_elec, amp1=amp, width1=width, dt=dt, t_stop=t_stop, t_start=t_start, n_pulses=n_pulses, interpulse=interpulse) cell = instantiate_cell(cellParameters) v_ext = cell.enable_extracellular_stimulation(electrode, t_ext=t_ext) cell.simulate(probes=[electrode], rec_vmem=True) # - plot_results(cell, electrode) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.7 64-bit (''mlops'': conda)' # language: python # name: python3 # --- import torch import torchvision.models as models from torch.profiler import profile, record_function, ProfilerActivity, tensorboard_trace_handler model = models.resnet18() inputs = torch.randn(5, 3, 224, 224) with profile(activities=[ProfilerActivity.CPU], record_shapes=True, profile_memory=True,on_trace_ready=tensorboard_trace_handler("profiles/")) as prof: with record_function("model_inference"): model(inputs) print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10)) print(prof.key_averages(group_by_input_shape=True).table(sort_by="cpu_time_total", row_limit=30)) prof.export_chrome_trace("trace.json") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="zyWse8DcM8B-" # # Installation # + colab={"base_uri": "https://localhost:8080/"} id="DE3hAzVBFk8a" outputId="5b8df402-0917-4741-c8e4-08c8ec33fa9e" # !pip install polyfuzz[all] # + [markdown] id="PxwexpFfM_Xy" # # Imports # + id="sCwdotFGM--P" from polyfuzz import PolyFuzz from polyfuzz.models import Embeddings from flair.embeddings import TransformerWordEmbeddings # + [markdown] id="qa8be5uPNEGe" # # Edit Distance based Similarity Score # + 
id="Lkg937M1FpoM" list_1 = ["apple", "amazon", "google"] list_2 = ["search", "juice", "forest"] # + colab={"base_uri": "https://localhost:8080/"} id="nBWVLPEOIH71" outputId="77c36460-c7fb-4ee5-d4b8-518255a17890" model = PolyFuzz("EditDistance") model.match(list_1, list_2) # + colab={"base_uri": "https://localhost:8080/", "height": 141} id="ui_ExaFpGU8Z" outputId="d600f4e9-4d1c-4645-ab19-68c4733dd69c" model.get_matches() # + [markdown] id="CZimfpXENIm3" # # Bert Based Similarity # + id="q1Lcfvf4GpQy" embeddings = TransformerWordEmbeddings('bert-base-multilingual-cased') bert = Embeddings(embeddings) # + colab={"base_uri": "https://localhost:8080/"} id="n9PfvfSPGzRF" outputId="928c7cd8-d186-4acb-f810-d125e5d3fc39" bert_model = PolyFuzz(bert) bert_model.match(list_1, list_2) # + colab={"base_uri": "https://localhost:8080/", "height": 141} id="WzNu4gV6G1Tl" outputId="7d284939-14da-4df8-f5a0-dccffa9e058b" bert_model.get_matches() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] nbpresent={"id": "eff3a02c-cf13-4c08-b2b3-df48c676159c"} # # # # # #

# # Hackathon 2018 - chart-digitizer
# + [markdown] nbpresent={"id": "15f3fbd2-0e7f-4ceb-93c1-8936cdebed7b"}
# - Python
# - OpenCV
# - Anaconda
    # # + [markdown] nbpresent={"id": "7fb97c5e-4092-4e4e-8023-b795f49791da"} # # - Random chart generator # - Model training using object detection # - Result analysis # # + nbpresent={"id": "e0937ba9-ac52-4500-9355-f809320da00f"} import numpy as np import matplotlib.pyplot as plt import pandas as pd import matplotlib # + [markdown] nbpresent={"id": "912cb702-4e13-45e3-be96-8ca5bb075111"} # Random bar chart generator # + data = pd.DataFrame(data=np.random.rand(5,1), index=range(1,6), columns=['Fred']) m,n = np.shape(data) plt.clf() plt.bar(x=data.index.values, height=data.values.ravel(), color='k') # figsize=(10, 6)) # Options for later from https://matplotlib.org/api/_as_gen/matplotlib.pyplot.bar.html # bar_width = 0.35 # alpha = .3 fig=plt.gcf() fig.set_size_inches(3, 2) plt.axis('off') fig.tight_layout() fig.canvas.draw() # grab the pixel buffer and dump it into a numpy array pixels = np.array(fig.canvas.renderer._renderer) plt.plot(); # - # Display generated chart # + nbpresent={"id": "b6b7891d-3c45-408b-b1e0-9ff049e33104"} print(pixels); print(data); # + nbpresent={"id": "ae7b3af1-e0c2-4011-8fb6-ae6794c5dfa9"} y, X = img_gen_bar() print(y) #for neural net X=X/255 #for DNN only #X=X.reshape(1,-1,3) #data={} #for i in range(1000) : # data[i] = (generate_bar_chart() ) # + [markdown] nbpresent={"id": "45a597e1-1a3d-4e04-b570-92fb942bbbc8"} # For historical reasons, OpenCV defaults to BGR format instead of usual RGB

# Let's convert the OpenCV image to RGB consistently.
#
# The Lab color space has three components:
#
# - L – Lightness (Intensity).
# - a – color component ranging from Green to Magenta.
# - b – color component ranging from Blue to Yellow.

    # # The Lab color space is quite different from the RGB color space. In RGB color space the color information is separated into three channels but the same three channels also encode brightness information. On the other hand, in Lab color space, the L channel is independent of color information and encodes brightness only. The other two channels encode color. # + nbpresent={"id": "dfa4b5ff-7f56-4a23-84b4-af0ac0e890ca"} cvimrgb = cv2.cvtColor(cvim2disp,cv2.COLOR_BGR2RGB) #or #imbgr = cv2.cvtColor(im2disp,cv2.COLOR_RGB2BGR) figure() imshow(cvimrgb) cvimlab = cv2.cvtColor(cvim2disp,cv2.COLOR_BGR2LAB) #or #imbgr = cv2.cvtColor(im2disp,cv2.COLOR_RGB2BGR) figure() imshow(cvimlab) # + [markdown] nbpresent={"id": "5e5ef64d-692e-4935-bcf8-8aefff52c384"} # Useful utility function # + nbpresent={"id": "71e23afe-a5a6-4888-8fbb-574af66c9ed1"} img = cv2.imread('sample-1.png', 0) img = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY)[1] # ensure binary ret, labels = cv2.connectedComponents(img) # Map component labels to hue val label_hue = np.uint8(179*labels/np.max(labels)) blank_ch = 255*np.ones_like(label_hue) labeled_img = cv2.merge([label_hue, blank_ch, blank_ch]) # cvt to BGR for display labeled_img = cv2.cvtColor(labeled_img, cv2.COLOR_HSV2BGR) # set bg label to black labeled_img[label_hue==0] = 0 figure() imshow( labeled_img) # - # Simple filtering example # + im2disp = imread('sample-1.png') blurred = cv2.GaussianBlur(im2disp,(19,19),0) figure() imshow(blurred) #more general method kernel = np.ones((5,5),np.float32)/25 blurred2 = cv2.filter2D(im2disp,-1,kernel) figure() imshow(blurred2) # + [markdown] nbpresent={"id": "0b2b252f-dc8d-46b3-865c-d91e5a67ed6a"} # Converting to LAB # + nbpresent={"id": "b580a43e-03cd-4fdc-8f0d-e69653969251"} cv2.imwrite('data/mycvimage.png', cvim2disp) #or imsave('data/myimage.png',im2disp) # - x=2 # %whos # + [markdown] nbpresent={"id": "1501269c-2107-40a8-a8ab-d46443c5c133"} # #1 numpy gotcha for people coming from Matlab # + nbpresent={"id": "1aee9993-9a16-4053-bfc8-e97ea1fabbdb"} x = zeros(5) y = x y[1] = 1 #uncomment next line and run print(x) # + [markdown] nbpresent={"id": "0f618461-e7ba-41e3-9416-e70fd18e03d5"} # What happened? Why did modifying y change x? #

# A: Python assignment copies references, not data; `y = x` just binds another name to the same array object, so mutating `y` also mutates `x`.


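# To make the aliasing explicit, the quick check below (a small sketch, not part of the original notebook) confirms that both names refer to the same underlying array:

# +
x = zeros(5)
y = x
print(y is x)                  # True: `y` is just another name for the same object
print(np.shares_memory(x, y))  # True: no data was copied
# -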
    Here's what you probably want: # # + nbpresent={"id": "285b7d68-b953-4b9e-8095-9e28a454adee"} x=zeros(5) y=x.copy() y[1] = 1 print(x) # + [markdown] nbpresent={"id": "fdf28f4d-e257-49f5-a7c3-a4050163da8b"} # Let's run some of the included OpenCV examples # # + nbpresent={"id": "48cf29b7-5967-4798-acf8-12fcb8c08928"} # %run inpaint.py # + nbpresent={"id": "c2b0183e-eb4e-43fe-ad48-4023b9bb7409"} # %run deconvolution.py # + nbpresent={"id": "755b4867-41e4-4c84-94bc-c91926f4de4e"} # %run find_obj.py # + nbpresent={"id": "5525c670-90ca-440a-8fdf-3e4583db70d7"} # %run peopledetect.py # - # cd python # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Day 5: Loops # # *Author: * # # ## Objective # In this challenge, we will use loops to do some math. Check out the Tutorial tab to learn more. # # ## Task # Given an integer $n$, print its $10$ multiples. Each multiple $n /times i$ (where $i\leq i\leq10$) should be printed on a new line in the form: n x i = result. # # ## Example # # $n=3$ # The printout should look like this: # # ``` # 3 x 1 = 3 # 3 x 2 = 6 # 3 x 3 = 9 # 3 x 4 = 12 # 3 x 5 = 15 # 3 x 6 = 18 # 3 x 7 = 21 # 3 x 8 = 24 # 3 x 9 = 27 # 3 x 10 = 30 # ``` # # # + pycharm={"name": "#%%\n"} # #!/bin/python3 import math import os import random import re import sys if __name__ == '__main__': n = int(input().strip()) for i in range(1,11): print(str(n) + " x " + str(i) + " = " + str(n*i)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ##Practica 2 Ejercicio 1 # Mostrar que cada uno de los siguientes sistemas tiene una orbita periódica: # # * a) # # $$\left\{ \begin{array}{lcc} # \dot{x}_{1}=x_{2} \\ # \\ \dot{x}_{2}=-x_{1}+x_{2}(1-3x_{1}^{2}-2x_{2}^{2}) # \end{array} # \right.$$ # # * b) # # $$ # \ddot{y}+y=\epsilon \dot{y}(1-y^{2}-(\dot{y})^{2}) \:; \epsilon > 0 # $$ # import sympy as sym #Con esto las salidas van a ser en LaTeX sym.init_printing(use_latex=True) x_1, x_2 = sym.symbols('x_1 x_2') X = sym.Matrix([x_1, x_2]) X f_1 = x_2 f_2 = -x_1 + x_2 * (1 - 3 * x_1 ** 2 - 2 * x_2 ** 2) f_2 F = sym.Matrix([f_1,f_2]) F A = F.jacobian(X) #A.simplify() A # puntos de equilibrio del sistema pes = sym.solve([f_1,f_2],[x_1,x_2]) pes A_1 = A.subs({x_1:0,x_2:0}) A_1 A_1.eigenvals() eq =2 * x_2 ** 2 - 6 * x_1 * x_2 ** 2 - 4 * x_2 ** 4 eq.factor() sym.plot_implicit(eq) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: blog # language: python # name: blog # --- # # Setting up Tensorflow with Cuda and Nvidia on Ubuntu 20.04 # # #### 23rd March 2021 # # I found this to be a bit of a pain! Once again, not really knowing the first thing about what my computer does or how it works is a bit of a disadvantage. I am just going to go through the steps of how I bumbled through getting this to work (eventually). # # ## My configuration # # I'm using a Dell Inspiron 5580, and Ubuntu 20.04 LSB. 
Some specifics: # # - ``` # $ uname -a # Linux will-Inspiron-5580 5.4.0-67-generic #75-Ubuntu SMP Fri Feb 19 18:03:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux # ``` # # - ``` # $ sudo lshw -C Display # *-display # description: VGA compatible controller # product: UHD Graphics 620 (Whiskey Lake) # vendor: Intel Corporation # physical id: 2 # bus info: pci@0000:00:02.0 # version: 02 # width: 64 bits # clock: 33MHz # capabilities: pciexpress msi pm vga_controller bus_master cap_list rom # configuration: driver=i915 latency=0 # resources: irq:145 memory:a4000000-a4ffffff memory:80000000-8fffffff ioport:5000(size=64) memory:c0000-dffff # *-display # description: 3D controller # product: GP108M [GeForce MX150] # vendor: NVIDIA Corporation # physical id: 0 # bus info: pci@0000:02:00.0 # version: a1 # width: 64 bits # clock: 33MHz # capabilities: pm msi pciexpress bus_master cap_list rom # configuration: driver=nvidia latency=0 # resources: irq:16 memory:a2000000-a2ffffff memory:90000000-9fffffff memory:a0000000-a1ffffff ioport:4000(size=128) memory:a3000000-a307ffff # ``` # ### Installing the Nvidia Drivers # # For some reason I ended up with an error like: # # - [nvidia-smi has failed because it couldn't communicate with the nvidia driver](https://forums.developer.nvidia.com/t/nvidia-smi-has-failed-because-it-couldnt-communicate-with-the-nvidia-driver-ubuntu-16-04/48635) # # I don't really know what that was all about but what I tried was basically to tear everything down and reinstall it all, which isn't really a great permenant solution because I suspect it will break the next time Ubuntu updates the kernels. Here are some of the things I tried: # # - Removed everything # ``` # $sudo tear everything down, removed cuda, removed cuda from my .profile and .bashrc # ``` # # - Reinstalled recommended drivers # # - Blacklisted nouveau driver # # - Turned my computer off and on again (multiple times) # # - Installed some new linux headers (which is the step that I think fixed things) # # ``` # $sudo apt-get whatever # ``` # # And now it works. What the hell. # ### Installing Tensorflow and using the GPU # # In theory this requires getting the correct version of Cuda and using the Nvidia Cuda toolchain. However, Tensorflow suggests that the easiest way to get set up is to use docker, and since I've used docker before for other things, though I am not an expert, that was the simplest option for me. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: carnd-term1 # language: python # name: myenv # --- # ## Advanced Lane Finding Project # # The goals / steps of this project are the following: # # * Compute the camera calibration matrix and distortion coefficients given a set of chessboard images. # * Apply a distortion correction to raw images. # * Use color transforms, gradients, etc., to create a thresholded binary image. # * Apply a perspective transform to rectify binary image ("birds-eye view"). # * Detect lane pixels and fit to find the lane boundary. # * Determine the curvature of the lane and vehicle position with respect to center. # * Warp the detected lane boundaries back onto the original image. # * Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position. 
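# The steps above are implemented one by one in the cells that follow. As a roadmap only (a sketch, not the project's actual frame-processing function), the helpers defined later chain together roughly like this; `undistort`, `image_filter`, `region_mask`, `perspective_transform`, `sliding_window` and `draw_back`, plus `mtx`, `dist`, `src` and `dst`, all come from the cells below.

# +
def frame_roadmap(img):
    """Sketch of the per-frame flow; relies on helpers defined later in this notebook."""
    undist = undistort(img, mtx, dist)                     # distortion correction
    binary = image_filter(undist)                          # color / gradient thresholds
    masked = region_mask(binary)                           # region of interest
    warped = perspective_transform(masked, src, dst)       # birds-eye view
    leftx, lefty, rightx, righty = sliding_window(warped)  # lane pixel search + polynomial fit
    ploty = np.linspace(0, warped.shape[0] - 1, warped.shape[0])
    return draw_back(undist, warped, src, dst, ploty,
                     left_line.recent_xfitted[-1], right_line.recent_xfitted[-1])
# -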
# # --- # ## First, I'll compute the camera calibration using chessboard images # + import os import numpy as np import cv2 import glob from queue import Queue import matplotlib.image as mpimg import matplotlib.pyplot as plt from collections import deque # %matplotlib auto # Define a class to receive the characteristics of each line detection class Line(): def __init__(self,buffer_size): self.buffer_size = buffer_size # was the line detected in the last iteration? self.detected = False # x values of the last n fits of the line self.recent_xfitted = deque([],maxlen=self.buffer_size) # format [x1,x2...x5] #average x values of the fitted line over the last n iterations self.bestx = deque([],maxlen=self.buffer_size) #polynomial coefficients averaged over the last n iterations # x = A * y ** 2 + B * y + C self.best_fit = {'A':deque([],maxlen=self.buffer_size),'B':deque([],maxlen=self.buffer_size), 'C':deque([],maxlen=self.buffer_size)} self.best_fitm = {'A':deque([],maxlen=self.buffer_size),'B':deque([],maxlen=self.buffer_size), 'C':deque([],maxlen=self.buffer_size)} # by unit meters #polynomial coefficients for the most recent fit, i.e. current frame self.current_fit = np.array([0,0,0], dtype='float') self.current_fitm = np.array([0,0,0], dtype='float') # by unit meters #radius of curvature of the line in some units self.radius_of_curvature = deque([],maxlen=self.buffer_size) #distance in meters of vehicle center from the line self.line_base_pos = deque([],maxlen=self.buffer_size) #difference in fit coefficients between last and new fits self.diffs = np.array([0,0,0], dtype='float') #x values for detected line pixels self.allx = deque([],maxlen=self.buffer_size) #y values for detected line pixels self.ally = deque([],maxlen=self.buffer_size) class Lane(): def __init__(self): self.ym_per_pix = None # y dimension, transfer pix scale to real world meter scale self.xm_per_pix = None # x dimension, transfer pix scale to real world meter scale self.mtx = None # camera calibration mtx self.dist = None # camera calibration dst self.count = 0 left_line = Line(5) right_line = Line(5) lane = Lane() # Make a list of calibration images imgfiles = glob.glob('camera_cal/calibration*.jpg') def camera_calibration(imgfiles,display = False): # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0) # x axis 9 pionts, y axis 6 points, scan from x axis, one piont by one piont objp = np.zeros((6*9,3), np.float32) objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2) # Arrays to store object points and image points from all the images. objpoints = [] # 3d points in real world space imgpoints = [] # 2d points in image plane. # Step through the list and search for chessboard corners for fname in imgfiles: img = cv2.imread(fname) gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) # Find the chessboard corners ret, corners = cv2.findChessboardCorners(gray, (9,6),None) # corners are 9 x 6 = 54 coordinates # If found, add object points, image points if ret == True: objpoints.append(objp) imgpoints.append(corners) ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None) return mtx, dist mtx, dist = camera_calibration(imgfiles) lane.mtx = mtx lane.dist = dist # - # ## Apply a distortion correction to raw images. 
# + # %matplotlib auto # undistort image with camera calibration mtx, dist def undistort(img,mtx, dist): undist = cv2.undistort(img, mtx, dist, None, mtx) return undist img = mpimg.imread('camera_cal/calibration1.jpg') undist = undistort(img, mtx, dist) f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9)) f.tight_layout() ax1.imshow(img) ax1.set_title('Original Image') ax2.imshow(undist) ax2.set_title('Undistorted Image') plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.) # - # ## Use color transforms, gradients, etc., to create a thresholded binary image. # + # define color and x-gradient filter # b channel filter yellow # l channel filter white # l_thresh=(200, 255), b_thresh=(155,200) def image_filter(img, l_thresh=(200, 255), b_thresh=(155,200),sx_thresh=(70, 210)): gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # convert to LUV color space luv = cv2.cvtColor(img,cv2.COLOR_RGB2Luv) luv_channel = luv[:,:,0] u_channel = luv[:,:,1] v_channel = luv[:,:,2] luv_binary = np.zeros_like(luv_channel) # luv_threshold = (230,255) u_threshold = (0,255) v_threshold = (0,255) l_th = (luv_channel >= l_thresh[0]) & (luv_channel <=l_thresh[1]) u_th = (u_channel >= u_threshold[0]) & (u_channel <=u_threshold[1]) v_th = (v_channel >= v_threshold[0]) & (v_channel <=v_threshold[1]) luv_binary[l_th & u_th & v_th] =1 # convert to Lab color space lab = cv2.cvtColor(img,cv2.COLOR_RGB2Lab) lab_channel = lab[:,:,0] a_channel = lab[:,:,1] b_channel = lab[:,:,2] lab_binary = np.zeros_like(b_channel) lab_threshold = (0,255) a_threshold = (0,255) # b_threshold = (170,255) lab_th = (lab_channel >= lab_threshold[0]) & (lab_channel <=lab_threshold[1]) a_th = (a_channel >= a_threshold[0]) & (a_channel <=a_threshold[1]) b_th = (b_channel >= b_thresh[0]) & (b_channel <=b_thresh[1]) lab_binary[lab_th & a_th & b_th] = 1 # Sobel x sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx)) # Threshold x gradient sxbinary = np.zeros_like(scaled_sobel) sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1 # combined channel combined = np.zeros_like(sxbinary) combined[ ((luv_binary == 1) | (lab_binary ==1) | (sxbinary == 1)) ] = 1 return combined # undist = mpimg.imread('output_images/straight_lines1.jpg') undist = mpimg.imread('./output_images/straight_lines1.jpg') combined = image_filter(undist) # cv2.imwrite('output_images/binary_combined.jpg',filtered) # Plot the result f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9)) f.tight_layout() ax1.imshow(undist) ax1.set_title('Undistorted Image', fontsize=40) ax2.imshow(combined,cmap='gray') # plt.savefig('output_images/binary_combined.jpg') ax2.set_title('Binary threshold Image', fontsize=40) plt.subplots_adjust(left=0.02, right=1, top=0.9, bottom=0.) 
# - # ## Apply ROI to binary image # + def region_mask(img): mask = np.zeros_like(img) # inner_mask = np.zeros_like(img) h,w = mask.shape[0],mask.shape[1] vertices = np.array([[(0.4*w-50,h*0.7),(0.6*w+50,h*0.7),(w*0.85+100, h), (w*0.15-100,h)]],dtype=np.int32) # ''' # inner_vertices = np.array([[(0.45*w+20,h*0.7),(0.55*w-20,h*0.7),(w*0.8-200, h), # (w*0.2+200,h)]],dtype=np.int32) # ''' cv2.fillPoly(mask,vertices,1) # cv2.fillPoly(mask,inner_vertices,0) masked = cv2.bitwise_and(img,mask) return masked masked = region_mask(combined) f = plt.figure(figsize=(24,9)) plt.imshow(masked,cmap='gray') # - # ## Apply a perspective transform to rectify binary image ("birds-eye view"). # + # test = mpimg.imread('output_images/straight_lines1.jpg') test = mpimg.imread('output_images/straight_lines1.jpg') imshape = masked.shape h = imshape[0] w = imshape[1] # select 4 source points src = [[w*0.4+5,h*0.7],[w*0.6,h*0.7],[w*0.85+10, h],[w*0.15+10, h]] # src = [[w*0.45-5,h*0.65],[w*0.55+10,h*0.65],[w*0.77+5, h*0.9],[w*0.23+15, h*0.9]] # lane.src = src # select 4 destination points dst = [[w*0.15,0],[w*0.85,0],[w*0.85, h],[w*0.15, h]] # dst = [[w*0.23,0],[w*0.77,0],[w*0.77+5, h],[w*0.23+5, h]] # lane.dst = dst # perspective transform def perspective_transform(img,src,dst): src_pts = np.float32(src) dst_pts = np.float32(dst) # use src, dst points to compute M M = cv2.getPerspectiveTransform(src_pts, dst_pts) # Warp an image using the perspective transform, M warped = cv2.warpPerspective(img, M, (w,h), flags=cv2.INTER_LINEAR) return warped def draw_lines(img,points): pts = np.array(points, np.int32) pts = pts.reshape((-1,1,2)) # print(pts) cv2.polylines(img,[pts],True,(255,0,0),5) return img # src, dst = get_src_dst(masked) test_warped = perspective_transform(test,src,dst) # draw source and destination points to tune points parameter,when setup completed, comment the draw function. # draw source points on undistorted image draw_lines(test,src) # draw destination points on warped image draw_lines(test_warped,dst) # Plot the result f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9)) f.tight_layout() ax1.imshow(test) ax1.set_title('Undistorted Image', fontsize=40) ax2.imshow(test_warped) ax2.set_title('Warped Image', fontsize=40) plt.subplots_adjust(left=0.02, right=1.0, top=0.9, bottom=0.) # - # ## Detect lane pixels and fit to find the lane boundary. 
# + def search_window(img,out_img,minpix,x_center,y_bottom,margin,height,display): nonzero = img.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) win_y_low = y_bottom win_y_high = y_bottom + height win_x_low = x_center - margin # Update this win_x_high = x_center + margin # Update this ### Identify the nonzero pixels in x and y within the window ### good_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_x_low) & (nonzerox < win_x_high)).nonzero()[0] line_found = False line_center = x_center line_inds = [] if len(good_inds) > minpix: line_found = True line_center = np.int(np.mean(nonzerox[good_inds])) line_inds = good_inds if display: # Draw the windows on the visualization image cv2.rectangle(out_img,(win_x_low,win_y_low), (win_x_high,win_y_high),(0,255,0), 2) return line_found,line_center,line_inds def find_lane_pixels(binary_warped,display=False): # Take a histogram of the bottom half of the image histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0) # Create an output image to draw on and visualize the result out_img = np.dstack((binary_warped, binary_warped, binary_warped)) # Find the peak of the left and right halves of the histogram # These will be the starting point for the left and right lines midpoint = np.int(histogram.shape[0]//2) leftx_base = np.argmax(histogram[:midpoint]) # print('leftx_base: ',leftx_base) rightx_base = np.argmax(histogram[midpoint:]) + midpoint # print('rightx_base: ',rightx_base) # HYPERPARAMETERS # Choose the number of sliding windows nwindows = 9 # Set the width of the windows +/- margin margin = 100 # Set minimum number of pixels found to recenter window minpix = 50 # Set shift of the windows when the curve lane run out of window margin shift = round(1.5 * margin) nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) # Set height of windows - based on nwindows above and image shape window_height = np.int(binary_warped.shape[0]//nwindows) # Current positions to be updated later for each window in nwindows leftx_current = leftx_base rightx_current = rightx_base lane.centroid = (leftx_base+rightx_base)/2 lane.ym_per_pix = 30/binary_warped.shape[0] # y dimension, transfer pix scale to real world meter scale lane.xm_per_pix = 3.7/(rightx_base-leftx_base) # x dimension, transfer pix scale to real world meter scale left_line.line_base_pos = lane.xm_per_pix * (midpoint - leftx_base) right_line.line_base_pos = lane.xm_per_pix * (midpoint - rightx_base) # Create empty lists to receive left and right lane pixel indices left_lane_inds = [] right_lane_inds = [] # Step through the windows one by one for window in range(nwindows): y_bottom = binary_warped.shape[0] - (window+1)*window_height leftx_center = leftx_current left_line_found,leftx_current,left_line_inds = search_window(binary_warped,out_img,minpix, leftx_center,y_bottom, margin,window_height,display) if not left_line_found: leftx_center = leftx_current - shift left_line_found,leftx_current,left_line_inds = search_window(binary_warped,out_img,minpix, leftx_center,y_bottom, margin,window_height,display) if not left_line_found: leftx_center = leftx_current + shift left_line_found,leftx_current,left_line_inds = search_window(binary_warped,out_img,minpix, leftx_center,y_bottom, margin,window_height,display) rightx_center = rightx_current right_line_found,rightx_current,right_line_inds = search_window(binary_warped,out_img,minpix, rightx_center,y_bottom, margin,window_height,display) if not 
right_line_found: rightx_center = rightx_current - shift right_line_found,rightx_current,right_line_inds = search_window(binary_warped,out_img,minpix, rightx_center,y_bottom, margin,window_height,display) if not right_line_found: rightx_center = rightx_current + shift right_line_found,rightx_current,right_line_inds = search_window(binary_warped,out_img,minpix, rightx_center,y_bottom, margin,window_height,display) if len(left_line_inds): left_lane_inds.append(left_line_inds) if len(right_line_inds): right_lane_inds.append(right_line_inds) # Get left and right line pixel positions if len(left_lane_inds): left_lane_inds = np.concatenate(left_lane_inds) leftx = nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] else: leftx = left_line.allx[-1] # print('leftx: \n',leftx) lefty = left_line.ally[-1] # print('lefty: \n',lefty) if len(right_lane_inds): right_lane_inds = np.concatenate(right_lane_inds) rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] else: rightx = right_line.allx[-1] # print('rightx: \n',rightx) righty = right_line.ally[-1] # print('righty: \n',righty) return leftx, lefty, rightx, righty, out_img def sliding_window(binary_warped,display = False): # Find our lane pixels first leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped,display) # transfer x,y position from pix scale to real world # leftxm = lane.xm_per_pix * leftx # leftym = lane.ym_per_pix * lefty # rightxm = lane.xm_per_pix * rightx # rightym = lane.ym_per_pix * righty ### TO-DO: Fit a second order polynomial to each using `np.polyfit` ### left_fit = np.polyfit(lefty,leftx, 2) # left_fitm = np.polyfit(leftym,leftxm, 2) right_fit = np.polyfit(righty,rightx,2) # right_fitm = np.polyfit(rightym,rightxm, 2) # Generate x and y values for plotting ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0]) try: left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2] right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2] except TypeError: # Avoids an error if `left` and `right_fit` are still none or incorrect print('The function failed to fit a line!') left_fitx = 1*ploty**2 + 1*ploty right_fitx = 1*ploty**2 + 1*ploty # left_line.detected = True # right_line.detected = True left_line.current_fit = left_fit # left_line.current_fitm = left_fitm right_line.current_fit = right_fit # right_line.current_fitm = right_fitm left_line.recent_xfitted.append(left_fitx) right_line.recent_xfitted.append(right_fitx) ## Visualization ## # Colors in the left and right lane regions if display: out_img[lefty, leftx] = [255, 0, 0] out_img[righty, rightx] = [0, 0, 255] pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))]) pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))]) pts = np.hstack((pts_left, pts_right)) cv2.polylines(out_img,np.int_([pts]),True,(255,255,0),2) plt.imshow(out_img) return leftx, lefty, rightx, righty binary_warped = perspective_transform(masked,src,dst) leftx, lefty, rightx, righty = sliding_window(binary_warped,True) # - # ## Determine the curvature of the lane and vehicle position with respect to center. # + def measure_curvature_meters(ym,line_fitm): ''' Calculates the curvature of polynomial functions in pixels. 
''' ##### TO-DO: Implement the calculation of R_curve (radius of curvature) ##### curverad = ((1+(2*line_fitm[0]*ym + line_fitm[1])**2)**(3/2))/np.abs(2*line_fitm[0]) return curverad ym = lane.ym_per_pix * (binary_warped.shape[0]-1) left_fitm = np.polyfit(lane.ym_per_pix * lefty, lane.xm_per_pix *leftx,2) right_fitm = np.polyfit(lane.ym_per_pix * righty, lane.xm_per_pix *rightx,2) left_curvature = measure_curvature_meters(ym,left_fitm) right_curvature = measure_curvature_meters(ym,right_fitm) average_curverad = (left_curvature + right_curvature)/2 offset = (left_line.line_base_pos + right_line.line_base_pos)/2 print('average_curverad: ',average_curverad) print('vehicle offset: ', offset) # left_curverad, right_curverad = measure_curvature_pixels(ploty,left_fit, right_fit) # print(left_curverad, right_curverad) # - # ## Warp the detected lane boundaries back onto the original image. # + # Create an image to draw the lines on def draw_back(org_img,binary_img,src,dst,ploty,leftx,rightx): warp_zero = np.zeros_like(binary_img).astype(np.uint8) color_warp = np.dstack((warp_zero, warp_zero, warp_zero)) # Recast the x and y points into usable format for cv2.fillPoly() pts_left = np.array([np.transpose(np.vstack([leftx, ploty]))]) pts_right = np.array([np.flipud(np.transpose(np.vstack([rightx, ploty])))]) pts = np.hstack((pts_left, pts_right)) # Draw the lane onto the warped blank image cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0)) src_pts = np.float32(src) dst_pts = np.float32(dst) # compute inverse M transformation martix Minv = cv2.getPerspectiveTransform(dst_pts, src_pts) # Warp the blank back to original image space using inverse perspective matrix (Minv) newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0])) # Combine the result with the original image draw_back_img = cv2.addWeighted(org_img, 1, newwarp, 0.4, 0) return draw_back_img ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0]) left_fitx = left_line.recent_xfitted[-1] right_fitx= right_line.recent_xfitted[-1] result = draw_back(undist,binary_warped,src,dst,ploty,left_fitx,right_fitx) # plot the result f = plt.figure(figsize=(24,9)) plt.imshow(result) # - # ## search around previously polynomial fit # + def fit_poly(img_shape, leftx, lefty, rightx, righty): if len(leftx) & len(rightx): # transfer x,y position from pix scale to real world # leftxm = lane.xm_per_pix * leftx # leftym = lane.ym_per_pix * lefty # rightxm = lane.xm_per_pix * rightx # rightym = lane.ym_per_pix * righty ### TO-DO: Fit a second order polynomial to each with np.polyfit() ### left_fit = np.polyfit(lefty,leftx, 2) # left_fitm = np.polyfit(leftym,leftxm, 2) right_fit = np.polyfit(righty,rightx,2) # right_fitm = np.polyfit(rightym,rightxm, 2) left_line.current_fit = left_fit # left_line.current_fitm = left_fitm right_line.current_fit = right_fit # right_line.current_fitm = right_fitm else: left_fit = left_line.current_fit right_fit = right_line.current_fit # Generate x and y values for plotting ploty = np.linspace(0, img_shape[0]-1, img_shape[0]) ### TO-DO: Calc both polynomials using ploty, left_fit and right_fit ### left_fitx = left_fit[0]*(ploty**2) + left_fit[1]*ploty + left_fit[2] right_fitx = right_fit[0]*(ploty**2) + right_fit[1]*ploty + right_fit[2] left_line.recent_xfitted.append(left_fitx) right_line.recent_xfitted.append(right_fitx) return left_fitx, right_fitx, ploty def search_around_poly(binary_warped,display = False): # HYPERPARAMETER # Choose the width of the margin around the previous polynomial to 
search margin = 100 # Grab activated pixels nonzero = binary_warped.nonzero() nonzeroy = np.array(nonzero[0]) nonzerox = np.array(nonzero[1]) ### TO-DO: Set the area of search based on activated x-values ### ### within the +/- margin of our polynomial function ### ### Hint: consider the window areas for the similarly named variables ### ### in the previous quiz, but change the windows to our new search area ### left_fit = left_line.current_fit right_fit = right_line.current_fit left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy + left_fit[2] + margin))) right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy + right_fit[2] + margin))) #print('left lane points: ',len(left_lane_inds)) #print('right lane points: ',len(right_lane_inds)) # Again, extract left and right line pixel positions leftx = nonzerox[left_lane_inds] lefty = nonzeroy[left_lane_inds] rightx = nonzerox[right_lane_inds] righty = nonzeroy[right_lane_inds] # Fit new polynomials left_fitx,right_fitx,ploty = fit_poly(binary_warped.shape, leftx, lefty, rightx, righty) # Create an image to draw on and an image to show the selection window out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255 window_img = np.zeros_like(out_img) ## Visualization ## if display: # Color in left and right line pixels out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0] out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255] # Generate a polygon to illustrate the search window area # And recast the x and y points into usable format for cv2.fillPoly() left_line_window1 = np.array([np.transpose(np.vstack([left_fitx-margin, ploty]))]) left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx+margin, ploty])))]) left_line_pts = np.hstack((left_line_window1, left_line_window2)) right_line_window1 = np.array([np.transpose(np.vstack([right_fitx-margin, ploty]))]) right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx+margin, ploty])))]) right_line_pts = np.hstack((right_line_window1, right_line_window2)) # Draw the lane onto the warped blank image cv2.fillPoly(window_img, np.int_([left_line_pts]), (0,255, 0)) cv2.fillPoly(window_img, np.int_([right_line_pts]), (0,255, 0)) out_img = cv2.addWeighted(out_img, 1, window_img, 0.4, 0) # Plot the polynomial lines onto the image pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))]) pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))]) pts = np.hstack((pts_left, pts_right)) cv2.polylines(out_img,np.int_([pts]),True,(255,255,0),2) plt.imshow(out_img) ## End visualization steps ## return leftx,lefty,rightx,righty # Run image through the pipeline # Note that in your project, you'll also want to feed in the previous fits leftx,lefty,rightx,righty = search_around_poly(binary_warped,True) # + def is_outlier(best_fit,line_pos): # at the begining, best_fit array length < 5, outlier set to False. outlier = False # only when best_fit deque full, length == 5, check x position validation. 
if len(best_fit) ==5: p = np.min(best_fit)-50 q = np.max(best_fit) + 50 if (line_pos q): outlier = True return outlier def check_lane(left_fit,right_fit,h,w): left_line_valid = True right_line_valid = True # get left line top x position, top y position is 0 left_line_top = left_fit[2] outlier = is_outlier(left_line.best_fit['C'],left_line_top) if outlier: left_line_valid = False # get left line bottom x position, bottom y position is h left_line_bottom = left_fit[0] * h ** 2 + left_fit[1] * h + left_fit[2] # get left line best_fit x position, bottom y position is h left_best_bottom =np.array(left_line.best_fit['A'])*h**2+np.array(left_line.best_fit['B'])*h \ +np.array(left_line.best_fit['C']) outlier = is_outlier(left_best_bottom,left_line_bottom) if outlier: left_line_valid = False # get right line top x position, top y position is 0 right_line_top = right_fit[2] outlier = is_outlier(right_line.best_fit['C'],right_line_top) if outlier: right_line_valid = False # get right line bottom x position, bottom y position is h right_line_bottom = np.array(right_fit[0] ** h + right_fit[1] * h + right_fit[2]) # get right line best_fit x position, bottom y position is h right_best_bottom =np.array(right_line.best_fit['A'])*h**2+np.array(right_line.best_fit['B'])*h \ +np.array(right_line.best_fit['C']) outlier = is_outlier(right_best_bottom,right_line_bottom) if outlier: right_line_valid = False # x = A*y**2 + B*y + C # x_top = C # x_bottom = A*h**2 + B*h + C # slope = (x_bottom - x_top)/h = A*h + B left_right_match = True # absolute value of diffrence between left_line slope and right_line slope <= w/h if np.abs((left_fit[0]*h + left_fit[1]) - (right_fit[0]+right_fit[1])) > w/h: left_right_match = False # the distance between right_bottom and left_bottom must >= w/2 if (right_fit[0]*h**2 + right_fit[1]*h + right_fit[2])-(left_fit[0]*h**2 + left_fit[1]*h + left_fit[2])= w/2 if (right_fit[2] - left_fit[2]) """.format(project_video_output)) # ### try challenge_video # + from moviepy.editor import VideoFileClip # from moviepy.editor import * from IPython.display import HTML challenge_video_output = 'output_videos/challenge_video.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("challenge_video.mp4") output_clip = clip1.fl_image(pipeline) #NOTE: this function expects color images!! 
# %time output_clip.write_videofile(challenge_video_output, audio=False) # - HTML(""" """.format(challenge_video_output)) # ## try harder_challenger_video # + from moviepy.editor import VideoFileClip # from moviepy.editor import * from IPython.display import HTML harder_challenge_video_output = 'output_videos/harder_challenge_video.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("harder_challenge_video.mp4") output_clip = clip1.fl_image(pipeline) #NOTE: this function expects color images!! # %time output_clip.write_videofile(harder_challenge_video_output, audio=False) # - HTML(""" """.format(harder_challenge_video_output)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="R12Yn6W1dt9t" """ You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab. Instructions for setting up Colab are as follows: 1. Open a new Python 3 notebook. 2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL) 3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator) 4. Run this cell to set up dependencies. """ # If you're using Google Colab and not running locally, run this cell. ## Install dependencies # !pip install wget # !apt-get install sox libsndfile1 ffmpeg # !pip install unidecode # ## Install NeMo BRANCH = 'v1.0.0' # !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr] ## Install TorchAudio # !pip install torchaudio>=0.6.0 -f https://download.pytorch.org/whl/torch_stable.html ## Grab the config we'll use in this example # !mkdir configs # + [markdown] id="J6ycGIaZfSLE" # # Introduction # # This Speech Command recognition tutorial is based on the MatchboxNet model from the paper ["MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition"](https://arxiv.org/abs/2004.08531). MatchboxNet is a modified form of the QuartzNet architecture from the paper "[QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions](https://arxiv.org/pdf/1910.10261.pdf)" with a modified decoder head to suit classification tasks. # # The notebook will follow the steps below: # # - Dataset preparation: Preparing Google Speech Commands dataset # # - Audio preprocessing (feature extraction): signal normalization, windowing, (log) spectrogram (or mel scale spectrogram, or MFCC) # # - Data augmentation using SpecAugment "[SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779)" to increase the number of data samples. # # - Develop a small Neural classification model that can be trained efficiently. # # - Model training on the Google Speech Commands dataset in NeMo. 
# # - Evaluation of error cases of the model by audibly hearing the samples # + id="I62_LJzc-p2b" # Some utility imports import os from omegaconf import OmegaConf # + id="K_M8wpkwd7d7" # This is where the Google Speech Commands directory will be placed. # Change this if you don't want the data to be extracted in the current directory. # Select the version of the dataset required as well (can be 1 or 2) DATASET_VER = 1 data_dir = './google_dataset_v{0}/'.format(DATASET_VER) if DATASET_VER == 1: MODEL_CONFIG = "matchboxnet_3x1x64_v1.yaml" else: MODEL_CONFIG = "matchboxnet_3x1x64_v2.yaml" if not os.path.exists(f"configs/{MODEL_CONFIG}"): # !wget -P configs/ "https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/matchboxnet/{MODEL_CONFIG}" # + [markdown] id="tvfwv9Hjf1Uv" # # Data Preparation # # We will be using the open-source Google Speech Commands Dataset (we will use V1 of the dataset for the tutorial but require minor changes to support the V2 dataset). These scripts below will download the dataset and convert it to a format suitable for use with NeMo. # + [markdown] id="6VL10OXTf8ts" # ## Download the dataset # # The dataset must be prepared using the scripts provided under the `{NeMo root directory}/scripts` sub-directory. # # Run the following command below to download the data preparation script and execute it. # # **NOTE**: You should have at least 4GB of disk space available if you’ve used --data_version=1; and at least 6GB if you used --data_version=2. Also, it will take some time to download and process, so go grab a coffee. # # **NOTE**: You may additionally pass a `--rebalance` flag at the end of the `process_speech_commands_data.py` script to rebalance the class samples in the manifest. # + id="oqKe6_uLfzKU" if not os.path.exists("process_speech_commands_data.py"): # !wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_speech_commands_data.py # + [markdown] id="TTsxp0nZ1zqo" # ### Preparing the manifest file # # The manifest file is a simple file that has the full path to the audio file, the duration of the audio file, and the label that is assigned to that audio file. # # This notebook is only a demonstration, and therefore we will use the `--skip_duration` flag to speed up construction of the manifest file. # # **NOTE: When replicating the results of the paper, do not use this flag and prepare the manifest file with correct durations.** # + id="cWUtDpzKgop9" # !mkdir {data_dir} # !python process_speech_commands_data.py --data_root={data_dir} --data_version={DATASET_VER} --skip_duration --log print("Dataset ready !") # + [markdown] id="eVsPFxJtg30p" # ## Prepare the path to manifest files # + id="ytTFGVe0g9wk" dataset_path = 'google_speech_recognition_v{0}'.format(DATASET_VER) dataset_basedir = os.path.join(data_dir, dataset_path) train_dataset = os.path.join(dataset_basedir, 'train_manifest.json') val_dataset = os.path.join(dataset_basedir, 'validation_manifest.json') test_dataset = os.path.join(dataset_basedir, 'validation_manifest.json') # + [markdown] id="s0SZy9SEhOBf" # ## Read a few rows of the manifest file # # Manifest files are the data structure used by NeMo to declare a few important details about the data : # # 1) `audio_filepath`: Refers to the path to the raw audio file
    # 2) `command`: The class label (or speech command) of this sample
    # 3) `duration`: The length of the audio file, in seconds. # + id="HYBidCMIhKQV" # !head -n 5 {train_dataset} # + [markdown] id="r-pyUBedh8f4" # # Training - Preparation # # We will be training a MatchboxNet model from the paper ["MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition"](https://arxiv.org/abs/2004.08531). The benefit of MatchboxNet over JASPER models is that they use 1D Time-Channel Separable Convolutions, which greatly reduce the number of parameters required to obtain good model accuracy. # # MatchboxNet models generally follow the model definition pattern QuartzNet-[BxRXC], where B is the number of blocks, R is the number of convolutional sub-blocks, and C is the number of channels in these blocks. Each sub-block contains a 1-D masked convolution, batch normalization, ReLU, and dropout. # # An image of QuartzNet, the base configuration of MatchboxNet models, is provided below. # # + [markdown] id="T0sV4riijHJF" #

# (QuartzNet architecture diagram)

    # + id="ieAPOM9thTN2" # NeMo's "core" package import nemo # NeMo's ASR collection - this collections contains complete ASR models and # building blocks (modules) for ASR import nemo.collections.asr as nemo_asr # + [markdown] id="ss9gLcDv30jI" # ## Model Configuration # The MatchboxNet Model is defined in a config file which declares multiple important sections. # # They are: # # 1) `model`: All arguments that will relate to the Model - preprocessors, encoder, decoder, optimizer and schedulers, datasets and any other related information # # 2) `trainer`: Any argument to be passed to PyTorch Lightning # + id="yoVAs9h1lfci" # This line will print the entire config of the MatchboxNet model config_path = f"configs/{MODEL_CONFIG}" config = OmegaConf.load(config_path) config = OmegaConf.to_container(config, resolve=True) config = OmegaConf.create(config) print(OmegaConf.to_yaml(config)) # + id="m2lJPR0a3qww" # Preserve some useful parameters labels = config.model.labels sample_rate = config.sample_rate # + [markdown] id="8_pmjeed78rJ" # ### Setting up the datasets within the config # # If you'll notice, there are a few config dictionaries called `train_ds`, `validation_ds` and `test_ds`. These are configurations used to setup the Dataset and DataLoaders of the corresponding config. # # # + id="DIe6Qfs18MiQ" print(OmegaConf.to_yaml(config.model.train_ds)) # + [markdown] id="Fb01hl868Uc3" # ### `???` inside configs # # You will often notice that some configs have `???` in place of paths. This is used as a placeholder so that the user can change the value at a later time. # # Let's add the paths to the manifests to the config above. # + id="m181HXev8T97" config.model.train_ds.manifest_filepath = train_dataset config.model.validation_ds.manifest_filepath = val_dataset config.model.test_ds.manifest_filepath = test_dataset # + [markdown] id="pbXngoCM5IRG" # ## Building the PyTorch Lightning Trainer # # NeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem! # # Lets first instantiate a Trainer object! # + id="bYtvdBlG5afU" import torch import pytorch_lightning as pl # + id="jRN18CdH51nN" print("Trainer config - \n") print(OmegaConf.to_yaml(config.trainer)) # + id="gHf6cHvm6H9b" # Lets modify some trainer configs for this demo # Checks if we have GPU available and uses it cuda = 1 if torch.cuda.is_available() else 0 config.trainer.gpus = cuda # Reduces maximum number of epochs to 5 for quick demonstration config.trainer.max_epochs = 5 # Remove distributed training flags config.trainer.accelerator = None # + id="UB9nr7G56G3L" trainer = pl.Trainer(**config.trainer) # + [markdown] id="2wt603Vq6sqX" # ## Setting up a NeMo Experiment # # NeMo has an experiment manager that handles logging and checkpointing for us, so let's use it ! # + id="TfWJFg7p6Ezf" from nemo.utils.exp_manager import exp_manager # + id="SC-QPoW44-p2" exp_dir = exp_manager(trainer, config.get("exp_manager", None)) # + id="Yqi6rkNR7Dph" # The exp_dir provides a path to the current experiment for easy access exp_dir = str(exp_dir) exp_dir # + [markdown] id="t0zz-vHH7Uuh" # ## Building the MatchboxNet Model # # MatchboxNet is an ASR model with a classification task - it generates one label for the entire provided audio stream. Therefore we encapsulate it inside the `EncDecClassificationModel` as follows. 
# + id="FRMrKhyf5vhy" asr_model = nemo_asr.models.EncDecClassificationModel(cfg=config.model, trainer=trainer) # + [markdown] id="jA9UND-Q_oyw" # # Training a MatchboxNet Model # # As MatchboxNet is inherently a PyTorch Lightning Model, it can easily be trained in a single line - `trainer.fit(model)` ! # + [markdown] id="3ngKcRFqBfIF" # ### Monitoring training progress # # Before we begin training, let's first create a Tensorboard visualization to monitor progress # # + id="sT3371CbJ8Rz" try: from google import colab COLAB_ENV = True except (ImportError, ModuleNotFoundError): COLAB_ENV = False # + id="Cyfec0PDBsXa" # Load the TensorBoard notebook extension if COLAB_ENV: # %load_ext tensorboard else: print("To use tensorboard, please use this notebook in a Google Colab environment.") # + id="4L5ymu-QBxmz" if COLAB_ENV: # %tensorboard --logdir {exp_dir} else: print("To use tensorboard, please use this notebook in a Google Colab environment.") # + [markdown] id="ZApuELDIKQgC" # ### Training for 5 epochs # We see below that the model begins to get modest scores on the validation set after just 5 epochs of training # + id="9xiUUJlH5KdD" trainer.fit(asr_model) # + [markdown] id="Dkds1jSvKgSc" # ### Evaluation on the Test set # # Lets compute the final score on the test set via `trainer.test(model)` # + id="mULTrhEJ_6wV" trainer.test(asr_model, ckpt_path=None) # + [markdown] id="XQntce8cLiUC" # # Fast Training # # We can dramatically improve the time taken to train this model by using Multi GPU training along with Mixed Precision. # # For multi-GPU training, take a look at [the PyTorch Lightning Multi-GPU training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html) # # For mixed-precision training, take a look at [the PyTorch Lightning Mixed-Precision training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/amp.html) # # ```python # # Mixed precision: # trainer = Trainer(amp_level='O1', precision=16) # # # Trainer with a distributed backend: # trainer = Trainer(gpus=2, num_nodes=2, accelerator='ddp') # # # Of course, you can combine these flags as well. # ``` # + [markdown] id="ifDHkunjM8y6" # # Evaluation of incorrectly predicted samples # # Given that we have a trained model, which performs reasonably well, let's try to listen to the samples where the model is least confident in its predictions. # # For this, we need the support of the librosa library. # # **NOTE**: The following code depends on librosa. To install it, run the following code block first. # + id="s3w3LhHcKuD2" # !pip install librosa # + [markdown] id="PcJrZ72sNCkM" # ## Extract the predictions from the model # # We want to possess the actual logits of the model instead of just the final evaluation score, so we can define a function to perform the forward step for us without computing the final loss. Instead, we extract the logits per batch of samples provided. # + [markdown] id="rvxdviYtOFjK" # ## Accessing the data loaders # # We can utilize the `setup_test_data` method in order to instantiate a data loader for the dataset we want to analyze. # # For convenience, we can access these instantiated data loaders using the following accessors - `asr_model._train_dl`, `asr_model._validation_dl` and `asr_model._test_dl`. # + id="CB0QZCAmM656" asr_model.setup_test_data(config.model.test_ds) test_dl = asr_model._test_dl # + [markdown] id="rA7gXawcPoip" # ## Partial Test Step # # Below we define a utility function to perform most of the test step. 
For reference, the test step is defined as follows: # # ```python # def test_step(self, batch, batch_idx, dataloader_idx=0): # audio_signal, audio_signal_len, labels, labels_len = batch # logits = self.forward(input_signal=audio_signal, input_signal_length=audio_signal_len) # loss_value = self.loss(logits=logits, labels=labels) # correct_counts, total_counts = self._accuracy(logits=logits, labels=labels) # return {'test_loss': loss_value, 'test_correct_counts': correct_counts, 'test_total_counts': total_counts} # ``` # + id="sBsDOm5ROpQI" @torch.no_grad() def extract_logits(model, dataloader): logits_buffer = [] label_buffer = [] # Follow the above definition of the test_step for batch in dataloader: audio_signal, audio_signal_len, labels, labels_len = batch logits = model(input_signal=audio_signal, input_signal_length=audio_signal_len) logits_buffer.append(logits) label_buffer.append(labels) print(".", end='') print() print("Finished extracting logits !") logits = torch.cat(logits_buffer, 0) labels = torch.cat(label_buffer, 0) return logits, labels # + id="mZSdprUlOuoV" cpu_model = asr_model.cpu() cpu_model.eval() logits, labels = extract_logits(cpu_model, test_dl) print("Logits:", logits.shape, "Labels :", labels.shape) # + id="9Wd0ukgNXRBz" # Compute accuracy - `_accuracy` is a PyTorch Lightning Metric ! acc = cpu_model._accuracy(logits=logits, labels=labels) print("Accuracy : ", float(acc[0]*100)) # + [markdown] id="NwN9OSqCauSH" # ## Filtering out incorrect samples # Let us now filter out the incorrectly labeled samples from the total set of samples in the test set # + id="N1YJvsmcZ0uE" import librosa import json import IPython.display as ipd # + id="jZAT9yGAayvR" # First let's create a utility class to remap the integer class labels to actual string label class ReverseMapLabel: def __init__(self, data_loader): self.label2id = dict(data_loader.dataset.label2id) self.id2label = dict(data_loader.dataset.id2label) def __call__(self, pred_idx, label_idx): return self.id2label[pred_idx], self.id2label[label_idx] # + id="X3GSXvYHa4KJ" # Next, let's get the indices of all the incorrectly labeled samples sample_idx = 0 incorrect_preds = [] rev_map = ReverseMapLabel(test_dl) # Remember, evaluated_tensor = (loss, logits, labels) probs = torch.softmax(logits, dim=-1) probas, preds = torch.max(probs, dim=-1) total_count = cpu_model._accuracy.total_counts_k[0] incorrect_ids = (preds != labels).nonzero() for idx in incorrect_ids: proba = float(probas[idx][0]) pred = int(preds[idx][0]) label = int(labels[idx][0]) idx = int(idx[0]) + sample_idx incorrect_preds.append((idx, *rev_map(pred, label), proba)) print(f"Num test samples : {total_count.item()}") print(f"Num errors : {len(incorrect_preds)}") # First lets sort by confidence of prediction incorrect_preds = sorted(incorrect_preds, key=lambda x: x[-1], reverse=False) # + [markdown] id="0JgGo71gcDtD" # ## Examine a subset of incorrect samples # Let's print out the (test id, predicted label, ground truth label, confidence) tuple of first 20 incorrectly labeled samples # + id="x37wNJsNbcw0" for incorrect_sample in incorrect_preds[:20]: print(str(incorrect_sample)) # + [markdown] id="tDnwYsDKcLv9" # ## Define a threshold below which we designate a model's prediction as "low confidence" # + id="dpvzeh4PcGJs" # Filter out how many such samples exist low_confidence_threshold = 0.25 count_low_confidence = len(list(filter(lambda x: x[-1] <= low_confidence_threshold, incorrect_preds))) print(f"Number of low confidence predictions : {count_low_confidence}") # 
+ [markdown] id="ERXyXvCAcSKR" # ## Let's hear the samples which the model has least confidence in ! # + id="kxjNVjX8cPNP" # First let's create a helper function to parse the manifest files def parse_manifest(manifest): data = [] for line in manifest: line = json.loads(line) data.append(line) return data # + id="IWxqw5k-cUVd" # Next, let's create a helper function to actually listen to certain samples def listen_to_file(sample_id, pred=None, label=None, proba=None): # Load the audio waveform using librosa filepath = test_samples[sample_id]['audio_filepath'] audio, sample_rate = librosa.load(filepath) if pred is not None and label is not None and proba is not None: print(f"Sample : {sample_id} Prediction : {pred} Label : {label} Confidence = {proba: 0.4f}") else: print(f"Sample : {sample_id}") return ipd.Audio(audio, rate=sample_rate) # + id="HPj1tFNIcXaU" # Now let's load the test manifest into memory test_samples = [] with open(test_dataset, 'r') as test_f: test_samples = test_f.readlines() test_samples = parse_manifest(test_samples) # + id="Nt7b_uiScZcC" # Finally, let's listen to all the audio samples where the model made a mistake # Note: This list of incorrect samples may be quite large, so you may choose to subsample `incorrect_preds` count = min(count_low_confidence, 20) # replace this line with just `count_low_confidence` to listen to all samples with low confidence for sample_id, pred, label, proba in incorrect_preds[:count]: ipd.display(listen_to_file(sample_id, pred=pred, label=label, proba=proba)) # + [markdown] id="gxLGGDvHW2kV" # # Fine-tuning on a new dataset # # We currently trained our dataset on all 30/35 classes of the Google Speech Commands dataset (v1/v2). # # We will now show an example of fine-tuning a trained model on a subset of the classes, as a demonstration of fine-tuning. # # + [markdown] id="mZAPGTzeXnuQ" # ## Preparing the data-subsets # # Let's select 2 of the classes, `yes` and `no` and prepare our manifests with this dataset. # + id="G1RI4GBNfjUW" import json # + id="L3cFvN5vcbjb" def extract_subset_from_manifest(name: str, manifest_path: str, labels: list): manifest_dir = os.path.split(manifest_path)[0] labels = set(labels) manifest_values = [] print(f"Parsing manifest: {manifest_path}") with open(manifest_path, 'r') as f: for line in f: val = json.loads(line) if val['command'] in labels: manifest_values.append(val) print(f"Number of files extracted from dataset: {len(manifest_values)}") outpath = os.path.join(manifest_dir, name) with open(outpath, 'w') as f: for val in manifest_values: json.dump(val, f) f.write("\n") f.flush() print("Manifest subset written to path :", outpath) print() return outpath # + id="fXQ0N1evfqZ8" labels = ["yes", "no"] train_subdataset = extract_subset_from_manifest("train_subset.json", train_dataset, labels) val_subdataset = extract_subset_from_manifest("val_subset.json", val_dataset, labels) test_subdataset = extract_subset_from_manifest("test_subset.json", test_dataset, labels) # + [markdown] id="IO5pVNyKimiE" # ## Saving/Restoring a checkpoint # # There are multiple ways to save and load models in NeMo. Since all NeMo models are inherently Lightning Modules, we can use the standard way that PyTorch Lightning saves and restores models. # # NeMo also provides a more advanced model save/restore format, which encapsulates all the parts of the model that are required to restore that model for immediate use. 
# # In this example, we will explore both ways of saving and restoring models, but we will focus on the PyTorch Lightning method.
# + [markdown] id="lMKvrT88jZwC"
# ### Saving and Restoring via PyTorch Lightning Checkpoints
#
# When using NeMo for training, it is advisable to utilize the `exp_manager` framework. It is tasked with handling checkpointing and logging (Tensorboard, and optionally WandB), as well as dealing with multi-node and multi-GPU logging.
#
# Since we utilized the `exp_manager` framework above, we have access to the directory where the checkpoints exist.
#
# `exp_manager` with the default settings will save multiple checkpoints for us -
#
# 1) A few checkpoints from certain steps of training. They will have `--val_loss=` tags.
#
# 2) A checkpoint at the last epoch of training, denoted by `-last`.
#
# 3) If the model finishes training, it will also have a `--end` checkpoint.
# + id="TcHTw5ErmQRi"
import glob
# + id="5h8zMJHngUrV"
print(exp_dir)
# + id="F9K_Ct_hl8oU"
# Let's list all the checkpoints we have
checkpoint_dir = os.path.join(exp_dir, 'checkpoints')
checkpoint_paths = list(glob.glob(os.path.join(checkpoint_dir, "*.ckpt")))
checkpoint_paths
# + id="67fbB61umfb4"
# We want the checkpoint saved after the final step of training
final_checkpoint = list(filter(lambda x: "-last.ckpt" in x, checkpoint_paths))[0]
print(final_checkpoint)
# + [markdown] id="ZADUzv02nknZ"
# ### Restoring from a PyTorch Lightning checkpoint
#
# To restore a model, use the `LightningModule.load_from_checkpoint()` class method.
# + id="ywd9Qj4Xm3VC"
restored_model = nemo_asr.models.EncDecClassificationModel.load_from_checkpoint(final_checkpoint)
# + [markdown] id="0f4GQa8vB1BB"
# ## Prepare the model for fine-tuning
#
# Remember, the original model was trained for a 30/35-way classification task. Now we require only a subset of these classes, so we need to modify the decoder head to support fewer classes.
#
# We can do this easily with the convenient function `EncDecClassificationModel.change_labels(new_label_list)`.
#
# By performing this step, we discard the old decoder head, but still preserve the encoder!
# + id="iMCMds7pB16U"
restored_model.change_labels(labels)
# + [markdown] id="rrspQ2QFtbCK"
# ### Prepare the data loaders
#
# The restored model will not attempt to set up any data loaders on its own.
#
# This is so that we can manually set up any datasets we want - train and val to finetune the model, test in order to just evaluate, or all three to do both!
#
# The entire config that we used before can still be accessed via `ModelPT.cfg`, so we will use it in order to set up our data loaders. This also gives us the opportunity to set any additional parameters we wish.
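# As an illustration of that last point (a hypothetical sketch, not taken from the original
# notebook): any extra field of the stored dataset config - here `batch_size`, assumed to be
# present as in the standard NeMo classification configs - can be overridden on a copy before
# it is handed to the setup methods used below.
import copy

example_train_cfg = copy.deepcopy(restored_model.cfg.train_ds)
example_train_cfg.manifest_filepath = train_subdataset
example_train_cfg.batch_size = 32  # assumed config key; adjust to your own config
# restored_model.setup_training_data(example_train_cfg)  # same call as used below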
# + id="9JxhiZN5ulUl" import copy # + id="qzHfTOkPowJo" train_subdataset_cfg = copy.deepcopy(restored_model.cfg.train_ds) val_subdataset_cfg = copy.deepcopy(restored_model.cfg.validation_ds) test_subdataset_cfg = copy.deepcopy(restored_model.cfg.test_ds) # + id="it9-vFX6vHUl" # Set the paths to the subset of the dataset train_subdataset_cfg.manifest_filepath = train_subdataset val_subdataset_cfg.manifest_filepath = val_subdataset test_subdataset_cfg.manifest_filepath = test_subdataset # + id="1qzWY8QDvgfc" # Setup the data loader for the restored model restored_model.setup_training_data(train_subdataset_cfg) restored_model.setup_multiple_validation_data(val_subdataset_cfg) restored_model.setup_multiple_test_data(test_subdataset_cfg) # + id="y8GZ5a5rC0gY" # Check data loaders are correct print("Train dataset labels :", restored_model._train_dl.dataset.labels) print("Val dataset labels :", restored_model._validation_dl.dataset.labels) print("Test dataset labels :", restored_model._test_dl.dataset.labels) # + [markdown] id="76yDcWZ9zl2G" # ## Setting up a new Trainer and Experiment Manager # # A restored model has a utility method to attach the Trainer object to it, which is necessary in order to correctly set up the optimizer and scheduler! # # **Note**: The restored model does not contain the trainer config with it. It is necessary to create a new Trainer object suitable for the environment where the model is being trained. The template can be replicated from any of the training scripts. # # Here, since we already had the previous config object that prepared the trainer, we could have used it, but for demonstration, we will set up the trainer config manually. # + id="swTe3WvBzkBJ" # Setup the new trainer object # Let's modify some trainer configs for this demo # Checks if we have GPU available and uses it cuda = 1 if torch.cuda.is_available() else 0 trainer_config = OmegaConf.create(dict( gpus=cuda, max_epochs=5, max_steps=None, # computed at runtime if not set num_nodes=1, accumulate_grad_batches=1, checkpoint_callback=False, # Provided by exp_manager logger=False, # Provided by exp_manager log_every_n_steps=1, # Interval of logging. val_check_interval=1.0, # Set to 0.25 to check 4 times per epoch, or an int for number of iterations )) print(trainer_config.pretty()) # + id="Nd_ej4bI3TIy" trainer_finetune = pl.Trainer(**trainer_config) # + [markdown] id="WtGu5q5T32XA" # ### Setting the trainer to the restored model # # All NeMo models provide a convenience method `set_trainer()` in order to setup the trainer after restoration # + id="BTozhedA3zpM" restored_model.set_trainer(trainer_finetune) # + id="XojTpEiI3TQa" exp_dir_finetune = exp_manager(trainer_finetune, config.get("exp_manager", None)) # + id="x_LSbmCQ3TUf" exp_dir_finetune = str(exp_dir_finetune) exp_dir_finetune # + [markdown] id="QT_mWWnSxPLv" # ## Setup optimizer + scheduler # # For a fine-tuning experiment, let's set up the optimizer and scheduler! # # We will use a much lower learning rate than before, and also swap out the scheduler from PolyHoldDecay to CosineDecay. 
# + id="TugHsePsxA5Q" optim_sched_cfg = copy.deepcopy(restored_model.cfg.optim) # Struct mode prevents us from popping off elements from the config, so let's disable it OmegaConf.set_struct(optim_sched_cfg, False) # + id="pZSo0sWPxwiG" # Lets change the maximum learning rate to previous minimum learning rate optim_sched_cfg.lr = 0.001 # Lets change the scheduler optim_sched_cfg.sched.name = "CosineAnnealing" # "power" isnt applicable to CosineAnnealing so let's remove it optim_sched_cfg.sched.pop('power') # "hold_ratio" isnt applicable to CosineAnnealing, so let's remove it optim_sched_cfg.sched.pop('hold_ratio') # Set "min_lr" to lower value optim_sched_cfg.sched.min_lr = 1e-4 print(optim_sched_cfg.pretty()) # + id="FqqyFF3Ey5If" # Now lets update the optimizer settings restored_model.setup_optimization(optim_sched_cfg) # + id="mdivgIPUzgP_" # We can also just directly replace the config inplace if we choose to restored_model.cfg.optim = optim_sched_cfg # + [markdown] id="3-lRyz2_Eyrl" # ## Fine-tune training step # # We fine-tune on the subset classification problem. Note, the model was originally trained on these classes (the subset defined here has already been trained on above). # # When fine-tuning on a truly new dataset, we will not see such a dramatic improvement in performance. However, it should still converge a little faster than if it was trained from scratch. # + [markdown] id="nq-iHIgx6OId" # ### Monitor training progress via Tensorboard # # + id="PIacDWcD5vCR" if COLAB_ENV: # %tensorboard --logdir {exp_dir_finetune} else: print("To use tensorboard, please use this notebook in a Google Colab environment.") # + [markdown] id="r5_z1eW76fip" # ### Fine-tuning for 5 epochs # + id="WH8rN6dA6V9S" trainer_finetune.fit(restored_model) # + [markdown] id="lgV0s8auJpxV" # ### Evaluation on the Test set # # Let's compute the final score on the test set via `trainer.test(model)` # + id="szpLp6XTDPaK" trainer_finetune.test(restored_model, ckpt_path=None) # + [markdown] id="uNBAaf1FKcAZ" # ## Advanced Usage: Exporting a model in its entirety # # While most models can be easily serialized via the Experiment Manager as a PyTorch Lightning checkpoint, there are certain models where this is insufficient. # # Consider the case where a Model contains artifacts such as tokenizers or other intermediate file objects that cannot be so easily serialized into a checkpoint. # # For such cases, NeMo offers two utility functions that enable serialization of a Model + artifacts - `save_to` and `restore_from`. # # Further documentation regarding these methods can be obtained from the documentation pages on NeMo. # + id="Dov9g2j8Lyjs" import tarfile # + id="WNixPPFNJyNc" # Save a model as a tarfile restored_model.save_to(os.path.join(exp_dir_finetune, "model.nemo")) # + id="B2RHYNjjLrcW" # The above object is just a tarfile which can store additional artifacts. with tarfile.open(os.path.join(exp_dir_finetune, 'model.nemo')) as blob: for item in blob: print(item) # + id="fRo04x3TLxdu" # Restore a model from a tarfile restored_model_2 = nemo_asr.models.EncDecClassificationModel.restore_from(os.path.join(exp_dir_finetune, "model.nemo")) # + [markdown] id="LyIegk2CPNsI" # ## Conclusion # Once the model has been restored, either via a PyTorch Lightning checkpoint or via the `restore_from` methods, one can finetune by following the above general steps. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import theme theme.load_style() # # The Euler-Bernoulli Beam Element # # Creative Commons License # # This lecture by is licensed under the # Creative Commons Attribution 4.0 International License. All code examples are also licensed under the [MIT license](http://opensource.org/licenses/MIT). # # Topics # - [Introduction](#intro) # - [Beam Element](#beam-el) # - [$Discretization of the Domain](#disc) # - [Derivation of Element Equations](#derive) # # Introduction # # In the last two lectures we discussed the finite element formulation of one dimensional second order BVPs and we completed examples in thermal, fluid, and solid mechanics. Today we discuss the finite element formulation of the one dimensional fourth order BVP associated with the Euler-Bernoulli beam theory. The formulation of the finite element equations is obtained by following the same steps as in the second order BVP finite element formulation. # # Euler-Bernoulli Beam Element # # In Euler-Bernoulli beam theory, it is assumed that plane sections remain # plane. In this theory, the transverse deflection $u$ of the beam is governed # by # # $$ # \frac{d^2}{dx^2}\left( a\frac{d^2u}{dx^2}\right) +q(x)=0 # $$ # # In the figure below, a typical beam, with sign conventions is shown # # # # Here $a=a(x)$ and $q=q(x)$ are the given problem data and $u$ is the dependent # variable. The function $a=EI$ is the flexural rigidity of the beam, $q$ is the # transversely distributed load, and $u$ is the transverse deflection. In # addition to satisfying the beam bending equation, $u$ must also satisfy the # boundary conditions. Since beam bending is governed by a fourth order DE, four # boundary conditions are required to solve it. As in the second order DEs we # previously solved, the appropriate boundary conditions arise from the weak # form of the governing equation. We now develop the finite element equations # for a beam. # # ## Discretization of the Domain # # The domain (length) of the beam is divided into $N-1$ line elements, each having at least two end nodes, as shown below. Note that the element geometry is the same as with the bar element, however, the number and form of the primary and secondary unknowns at each node is different and are dictated by the variational (weak) formulation of the beam equation. # # # # ## Derivation of Element Equations # # We now isolate a typical element $\Omega^{(e)}=(x_A,x_B)$ and construct the # weak form of the beam bending equation over that element. Recall that the weak # form provides the primary and secondary variables of the problem. We will then # choose approximation for the primary variable and derive the element # equations. # ### The Weak Form # # We derive the weak form in the same way as with the second order BVP, namely, # we multiply the governing equation by a weight function $w(x)$, integrate by # parts to reduce the continuity requirements on $u$, and evaluate the resulting # boundary conditions. 
# # $$ # \begin{gather} # \int_{x_A}^{x_B}\left(\frac{d^2}{dx^2}\left(a\frac{d^2u}{dx^2} \right) + q \right) w dx = 0 \\ # \int_{x_A}^{x_B}\left( -\frac{dw}{dx}\frac{d}{dx}\left( a \frac{d^2u}{dx^2}\right) + wq \right) dx + \left[ w\frac{d}{dx}\left( a\frac{d^2u}{dx^2} \right) \right]_{x_A}^{x_B} =0 \\ # \int_{x_A}^{x_B}\left( a\frac{d^2w}{dx^2}\frac{d^2u}{dx^2}+wq\right) dx + \left[ w\frac{d}{dx}\left( a\frac{d^2u}{dx^2} \right)-\frac{dw}{dx}a\frac{d^2u}{dx^2} \right]_{x_A}^{x_B} =0 # \end{gather} # $$ # # Note that, because of the two integrations by parts, $u,w\in C^1$. We have # reduced the continuity requirements on $u$ from $C^3$ to $C^1$. Also, because # of the integration by parts, there are now two boundary terms to be evaluated # at $x=x_A$ and $x=x_B$. Let's look closely at the two boundary conditions: # # $$ # \left[ w\frac{d}{dx}\left( a\frac{d^2u}{dx^2} \right)-\frac{dw}{dx}a\frac{d^2u}{dx^2} \right]_{x_A}^{x_B} # $$ # # From the boundary conditions, it follows that the specification of $u$ and # $du/dx$ at the endpoints of the elements constitute the essential boundary # conditions and the specification of # # $$ # \frac{d}{dx}\left( a\frac{d^2u}{dx^2} \right)=V \quad (\text{shear force}) # $$ # # and # # $$ # a\frac{d^2u}{dx^2} =M \quad (\text{bending moment}) # $$ # # at the endpoints constitute the natural boundary conditions. Thus, their are # two essential and two natural boundary conditions. Therefore, we identify $u$ # and $du/dx$ as the primary variables at each node. The natural boundary # conditions always remain in the weak form and end up as the source vector of # the matrix equations. To simplify notation, we introduce the the notation # $\phi=du/dx$ and # # $$\begin{gather} # Q_1^{(e)}=V_1^{(e)}=\left[ \frac{d}{dx}\left( a\frac{d^2u}{dx^2} \right) \right]\Bigg|_{{x_A}}, \quad Q_2^{(e)}=M_1^{(e)}=\left( a\frac{d^2u}{dx^2} \right) \Bigg|_{x_A}\\ # Q_3^{(e)}=V_2^{(e)}=-\left[ \frac{d}{dx}\left( a\frac{d^2u}{dx^2} \right) \right]\Bigg|_{x_B}, \quad Q_4^{(e)}=M_2^{(e)}=-\left( a\frac{d^2u}{dx^2} \right) \Bigg|_{x_B}\\ # \end{gather} # $$ # # where $Q_1^{(e)}$ and $Q_3^{(e)}$ denote the shear forces, and $Q_2^{(e)}$ and # $Q_4^{(e)}$ denote the bending moments shown in \rfig{bd0}. Since the set # $Q_i^{(e)}$ contain both point and ''bending'' forces, we will call the set # # $$ # \begin{Bmatrix} Q_1^{(e)} \\ Q_2^{(e)}\\ Q_3^{(e)}\\ Q_4^{(e)} \end{Bmatrix} # $$ # # the *generalized forces*. We call the corresponding set of displacements and # rotations the *generalized displacements*: # # $$ # \begin{Bmatrix} u(x_A) \\ u'(x_A) \\ u(x_B) \\ u'(x_B) \end{Bmatrix} = # \begin{Bmatrix} u_1^{(e)} \\ u_2^{(e)} \\ u_3^{(e)} \\ u_4^{(e)}\end{Bmatrix} = # \begin{Bmatrix} u_1 \\ \phi_1 \\ u_2 \\ \phi_2\end{Bmatrix} # $$ # # With this notation, we can now write # # $$ # \int_{x_A}^{x_B}\left( b\frac{d^2w}{dx^2}\frac{d^2u}{dx^2}+qw\right) dx + \left[ wV-\frac{dw}{dx}M\right]\Bigg|_{x_A}^{x_B} =0 # $$ # ### Interpolation Functions # The weak form requires that the interpolation functions of an element be continuous with nonzero derivatives up to order two. 
The approximation of the primary variables over a finite element should be such that it satisfies the essential boundary conditions of the element:
#
# $$
# u(x_A)=u_1, \quad u(x_B)=u_2, \quad \phi(x_A)=\phi_1, \quad \phi(x_B)=\phi_2
# $$
#
# Since there is a total of four conditions in an element (two per node), a four-parameter polynomial must be selected for $u$:
#
# $$
# u(x)=c_0 + c_1x + c_2x^2 + c_3x^3
# $$
#
# We now express the $c_i$ in terms of the generalized displacements and require that
#
# $$
# u(x_A)=u_1^{(e)}=c_0 + c_1x_A+c_2x_A^2+c_3x_A^3\\
# \phi(x_A)= u_2^{(e)}=c_1 + 2c_2x_A+3c_3x_A^2\\
# u(x_B)=u_3^{(e)}=c_0 + c_1x_B+c_2x_B^2+c_3x_B^3\\
# \phi(x_B)= u_4^{(e)}=c_1 + 2c_2x_B+3c_3x_B^2
# $$
#
# or
#
# $$
# \begin{Bmatrix} u_1^{(e)} \\ u_2^{(e)} \\ u_3^{(e)} \\ u_4^{(e)} \end{Bmatrix} =
# \begin{bmatrix} 1 & x_A & x_A^2 & x_A^3 \\ 0 & 1 & 2x_A & 3x_A^2 \\ 1 & x_B & x_B^2 & x_B^3 \\ 0 & 1 & 2x_B & 3x_B^2\end{bmatrix}
# \begin{Bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \end{Bmatrix}
# $$
#
# Inverting this expression to express the $c_i$ in terms of the $u_i^{(e)}$ and substituting the result gives
#
# $$
# u^{(e)}(x)=u_1^{(e)}N_1^{(e)}+u_2^{(e)}N_2^{(e)}+u_3^{(e)}N_3^{(e)}+u_4^{(e)}N_4^{(e)}=\sum_{j=1}^4u_j^{(e)}N_j^{(e)}
# $$
#
# where
#
# $$
# N_1^{(e)}=1-3\left(\frac{x-x_A}{h_e}\right)^2+2\left(\frac{x-x_A}{h_e}\right)^3, \ \ \ N_2^{(e)}=\left( x-x_A\right)\left( \frac{x-x_A}{h_e}-1\right)^2\\
# N_3^{(e)}=3\left(\frac{x-x_A}{h_e}\right)^2-2\left(\frac{x-x_A}{h_e}\right)^3, \ \ \ N_4^{(e)}=\left( x-x_A\right)\left(\left(\frac{x-x_A}{h_e}\right)^2-\frac{x-x_A}{h_e}\right)
# $$
#
# Note that the cubic interpolation functions were derived by interpolating $u$
# and its derivatives at the nodes. Interpolation polynomials derived in this
# way are known as Hermite interpolation functions.
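# As a quick check (this block is not part of the original lecture notes), the Hermite shape
# functions can be re-derived symbolically by imposing the four nodal conditions on the cubic
# and inverting the resulting system, working in the local coordinate $\hat{x}=x-x_A$ so that
# the nodes sit at $0$ and $h_e$. A minimal sympy sketch:
import sympy as sp

xh, he = sp.symbols('xhat h_e', positive=True)
c0, c1, c2, c3 = sp.symbols('c0:4')
u1, phi1, u2, phi2 = sp.symbols('u_1 phi_1 u_2 phi_2')

u = c0 + c1*xh + c2*xh**2 + c3*xh**3
phi = sp.diff(u, xh)

# Impose u(0)=u1, u'(0)=phi1, u(h_e)=u2, u'(h_e)=phi2 and solve for the coefficients c_i
sol = sp.solve([sp.Eq(u.subs(xh, 0), u1), sp.Eq(phi.subs(xh, 0), phi1),
                sp.Eq(u.subs(xh, he), u2), sp.Eq(phi.subs(xh, he), phi2)],
               [c0, c1, c2, c3])
u_interp = sp.expand(u.subs(sol))

# The coefficient of each nodal value in the interpolant is the corresponding shape function
N1, N2, N3, N4 = [u_interp.coeff(v) for v in (u1, phi1, u2, phi2)]
print(sp.simplify(N1 - (1 - 3*(xh/he)**2 + 2*(xh/he)**3)))     # -> 0
print(sp.simplify(N2 - xh*(xh/he - 1)**2))                     # -> 0
print([sp.simplify(N.subs(xh, 0)) for N in (N1, N2, N3, N4)])  # -> [1, 0, 0, 0]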
# # The shape functions $N_i^{(e)}$ can be expressed in terms of the local coordinates $\hat{x}$: # # $$ # N_1^{(e)}=1-3\left(\frac{\hat{x}}{h_e}\right)^2+2\left(\frac{\hat{x}}{h_e}\right)^3, \ \ \ N_2^{(e)}=\hat{x}\left( \frac{\hat{x}}{h_e}-1\right)^2\\ # N_3^{(e)}=3\left(\frac{\hat{x}}{h_e}\right)^2-2\left(\frac{\hat{x}}{h_e}\right)^3, \ \ \ N_4^{(e)}=\hat{x}\left(\left(\frac{\hat{x}}{h_e}\right)^2-\frac{\hat{x}}{h_e}\right) # $$ # # The Hermite cubic interpolation functions satisfy the following interpolation properties ($i,j=1,2$) # # $$ # N_{2i-1}^{(e)}(\hat{x}_j)=\delta_{ij}, \ \ \ \ N_{2i}^{(e)}(\hat{x}_j)=0, \ \ \ \ \sum_{i=1}^2N_{2i-1}^{(e)}=1\\ # \frac{dN_{2i-1}^{e}}{dx}\Bigg|_{\hat{x}_j}=0, \ \ \ \ \frac{dN_{2i}^{(e)}}{dx}\Bigg|_{\hat{x}_j}=\delta_{ij} # $$ # # where $\hat{x}_1=0$ and $\hat{x}_2=h_e$ are the local coordinates of nodes 1 and 2 of the element $\Omega^{(e)}=(x_A,x_B)$.\\ # ### Finite Element Model # # We now substitute # $$ # u^{(e)}(x)=\sum_{j=1}^4 u_j^{(e)}N_j^{(e)} # $$ # and # $$ # w=\sum_{i=1}^4 w_i^{(e)}N_i^{(e)} # $$ # # Substituting and simplifying gives # # $$ # \sum_{j=1}^4\left(\int_{x_A}^{x_B}a\frac{d^2N_i^{(e)}}{dx^2}\frac{d^2N_{j}^{(e)}}{dx^2}dx\right) u_j^{(e)} + \int_{x_A}^{x_B}N_i^{(e)}qdx-Q_i^{(e)}=0 # $$ # # or # # $$ # \sum_{j=1}^4k_{ij}^{(e)}u_j^{(e)}+F_i^{(e)}=0 # $$ # where # # $$ # k_{ij}^{(e)}=\int_{x_A}^{x_B}a\frac{d^2N_i^{(e)}}{dx^2}\frac{d^2N_j^{(e)}}{dx^2}dx, \ \ \ \ F_i^{(e)} = \int_{x_A}^{x_B}N_i^{(e)}qdx-Q_i^{(e)} # $$ # # In matrix notation this can be written # # $$ # \begin{bmatrix} # k_{11}^{(e)} & k_{12}^{(e)} & k_{13}^3 & k_{14}^{(e)} \\ # k_{21}^{(e)} & k_{22}^{(e)} & k_{23}^3 & k_{24}^{(e)} \\ # k_{31}^{(e)} & k_{32}^{(e)} & k_{33}^3 & k_{34}^{(e)} \\ # k_{41}^{(e)} & k_{42}^{(e)} & k_{43}^3 & k_{44}^{(e)} # \end{bmatrix} # \begin{Bmatrix} # u_1^{(e)} \\ u_2^{(e)} \\ u_3^{(e)} \\ u_4^{(e)} # \end{Bmatrix} = # -\begin{Bmatrix} q_1^{(e)} \\ q_2^{(e)} \\ q_3^{(e)} \\ q_4^{(e)} \end{Bmatrix} + \begin{Bmatrix} Q_1^{(e)} \\ Q_2^{(e)} \\ Q_3^{(e)} \\ Q_4^{(e)} \end{Bmatrix} # $$ # # For the case in which $a=EI$ and $w$ are constant over an element, the coefficients of each element equation are found as with the bar element: # # $$ # k_{11}^{(e)}=EI\int_0^{h_e}\frac{d^2N_1^{(e)}}{d\hat{x}^2}\frac{d^2N_1^{(e)}}{d\hat{x}^2} \ d\hat x=EI\int_0^{h_e}\left(-\frac{6}{h_e^{2}}+12\,{\frac {\hat x}{h_e^{3}}}\right)^2 \ d\hat x=\frac{12EI}{h_e^3} # $$ # # $$ # F_1^{(e)}=q\int_0^{h_e}N_1^{(e)} \ d\hat x - Q_1^{(e)} =q\int_0^{h_e}\left(-\frac{6}{h_e^{2}}+12\,{\frac {\hat x}{h_e^{3}}}\right) d\hat x - V_1^{(e)} = \frac{qh_e}{2} - V_1^{(e)} # $$ # # All other coefficients can be found in a similar manner and combine to give the beam equations for a single two node element: # # $$ # \frac{2EI}{h_e^3}\begin{bmatrix} # 6 & 3h_e & -6 & 3h_e \\ # 3h_e & 2h_e^2 & -3h_e & h_e^2 \\ # -6 & -3h_e & 6 & -3h_e \\ # 3h_e & h_e^2 & -3h_e & 2h_e^2 \end{bmatrix}\begin{Bmatrix} u_1^{(e)} \\ \phi_1^{(e)} \\ u_2^{(e)} \\ \phi_2^{(e)} \end{Bmatrix}=-\frac{wh_e}{12}\begin{Bmatrix}6\\h_e\\6\\-h_e\end{Bmatrix}+\begin{Bmatrix}V_1^{(e)}\\M_1^{(e)}\\ V_2^{(e)}\\M_2^{(e)}\end{Bmatrix} # $$ # ### Assembly of Global Equations # # We assemble the global equations in the same way as the bar element equation, except we take into account the increased degrees of freedom at element nodes. 
The assembly of the global equations is based on: # # # - Interelement continuity of primary variables (deflection and slope) # - Interelement equilibrium of the secondary variables (shear force and bending moment) at the nodes of common elements # # We demonstrate the assembly of the global equations by examining the following two element model # # # # The continuity of the primary variables implies the following relation between the element degrees of freedom $u_i^{(e)}$ and the global degrees of freedom $u_i$ and $\phi_i$: # # $$ \begin{matrix} # u_1^1=u_1 & u_2^1=\phi_1 & u_3^1=u_1^2=u_2 \\ # u_4^1=u_2^2=\phi_2 & u_3^2=u_3 & u_4^2=\phi_3 \end{matrix} # $$ # # The equilibrium of the forces and moments at the connecting node requires that # $$ # Q_3^1+Q_1^2={\rm applied \ external \ point \ force}=0 # $$ # # $$ # Q_4^1+Q_2^2={\rm applied \ external \ bending \ moment}=0 # $$ # # If no external applied forces are given, the sum should be equated to zero. In equating the sums to the applied generalized forces (i.e. the force or moment) the sign convention for the element force degrees of freedom should be followed. Forces are taken positive acting upwards and moments positive taken clockwise.\\ # # We now expand each element equation to the size of the global equation and combine them to form the global set of equations: # # ### Element 1 # $$ # \begin{bmatrix} k_{11}^1 & k_{12}^1 & k_{13}^1 & k_{14}^1 & 0 & 0\\ # k_{21}^1 & k_{22}^1 & k_{23}^1 & k_{24}^1 & 0 & 0\\ # k_{31}^1 & k_{32}^1 & k_{33}^1 & k_{34}^1 & 0 & 0\\ # k_{41}^1 & k_{42}^1 & k_{43}^1 & k_{44}^1 & 0 & 0\\ # 0 & 0 & 0 & 0 & 0 & 0\\ # 0 & 0 & 0 &0 & 0 &0\\ # \end{bmatrix} # \begin{Bmatrix}u_1 \\ \phi_1 \\ u_2 \\ \phi_2 \\ u_3 \\ \phi_3 \end{Bmatrix} # =-\frac{wh_e}{12}\begin{Bmatrix}6\\h_e\\6\\-h_e\\0\\0\end{Bmatrix}+\begin{Bmatrix}Q_1^1\\Q_2^1\\Q_3^1\\Q_4^1\\0\\0\end{Bmatrix} # $$ # # ### Element 2 # $$ # \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0\\ # 0 & 0 & 0 & 0 & 0 & 0\\ # 0 & 0 & k_{11}^2 &k_{12}^2 & k_{13}^2 & k_{14}^2\\ # 0 & 0& k_{21}^2 &k_{22}^2 & k_{23}^2 & k_{24}^2\\ # 0 & 0 & k_{31}^2 & k_{32}^2 & k_{33}^2 & k_{34}^2\\ # 0 & 0 & k_{41}^2 & k_{42}^2 & k_{43}^2 & k_{44}^2\\ # \end{bmatrix}\begin{Bmatrix}u_1 \\ \phi_1 \\ u_2 \\ \phi_2 \\ u_3 \\ \phi_3 \end{Bmatrix} # =-\frac{wh_e}{12}\begin{Bmatrix}0\\0\\6\\h_e\\6\\-h_e \end{Bmatrix}+\begin{Bmatrix}0\\0\\Q_1^2\\Q_2^2\\Q_3^2\\Q_4^2\end{Bmatrix} # $$ # # Combining the equations for elements one and two give # # $$ # \begin{bmatrix} # k_{11}^1 & k_{12}^1 & k_{13}^1 & k_{14}^1 & 0 & 0\\ # k_{21}^1 & k_{22}^1 & k_{23}^1 & k_{24}^1 & 0 & 0\\ # k_{31}^1 & k_{32}^1 & k_{33}^1+k_{11}^2 & k_{34}^1+k_{12}^2 & k_{13}^2 & k_{14}^2\\ # k_{41}^1 & k_{42}^1 & k_{43}^1+k_{21}^2 & k_{44}^1+k_{22}^2 & k_{23}^2 & k_{24}^2\\ # 0 & 0 & k_{31}^2 & k_{32}^2 & k_{33}^2 & k_{34}^2\\ # 0 & 0 & k_{41}^2 & k_{42}^2 & k_{43}^2 & k_{44}^2\\ # \end{bmatrix} # \begin{Bmatrix} # u_1 \\ \phi_1 \\ u_2 \\ \phi_2 \\ u_3 \\ \phi_3 # \end{Bmatrix} # =-\frac{wh_e}{12}\begin{Bmatrix}6\\h_e\\12\\0\\6\\-h_e # \end{Bmatrix}+ # \begin{Bmatrix} # Q_1^1\\Q_2^1\\Q_3^1+Q_1^2\\Q_4^1+Q_2^2\\Q_3^2\\Q_4^2 # \end{Bmatrix} # $$ # # We now apply the boundary conditions and partition the equations to account for the homogeneous boundary conditions: # # $$ # \frac{2EI}{h_e^3}\left[\begin{array}{cc|cccc} # 6 & 3h_e & -6 & 3h_e & 0 & 0\\ # 3h_e & 2h_e^2 & -3h_e & h_e^2 & 0 & 0 \\ \hline # -6 & -3h_e & 12 & 0 &-6 & 3h_e\\ # 3h_e & h_e^2 & 0 & 4h_e^2 & -3h_e & h_e^2\\ # 0 & 0 & -6 & -3h_e^2 & 6 & -3h_e\\ # 0 & 0 & 3h_e & h_e^2 & 
-3h_e & 2h_e^2\\ # \end{array}\right] # \begin{Bmatrix}0 \\ 0 \\\hline u_2 \\ \phi_2 \\ u_3 \\ \phi_3 # \end{Bmatrix} # =-\frac{wh_e}{12}\begin{Bmatrix} # 6\\h_e\\\hline 12\\0\\6\\-h_e \end{Bmatrix} # +\begin{Bmatrix} # Q_1^1\\Q_2^1\\\hline 0\\0\\-F_0\\M_0 # \end{Bmatrix} # $$ # # We can now write the system as # # $$ # \begin{bmatrix} \left[ K^{11}\right] & \left[ K^{12}\right] \\ \left[ K^{21}\right] & \left[ K^{22}\right] \end{bmatrix} # \begin{Bmatrix} \left\{ u^{1}\right\} \\ \left\{ u^{2}\right\} \end{Bmatrix}= # \begin{Bmatrix} \left\{ F^{1}\right\} \\ \left\{ F^{2}\right\} \end{Bmatrix} # $$ # # and # # $$ # K^{11}u^{1} + K^{12}u^{2}=F^{1} # $$ # $$ # K^{21}u^{1} + K^{22}u^{2}=F^{2} # $$ # # From which # # $$ # F^{1} =K^{11}u^{1} + K^{12}u^{2} # $$ # $$ # K^{22}u^{2}=F^{2} -K^{21}u^{1} # $$ # # We can now solve for $u^2$ and then find the reaction forces $F^1$: # # $$ # u^2=\begin{Bmatrix} u_2 \\ \phi_2 \\ u_3 \\ \phi_3\end{Bmatrix} # =\frac{h_e^3}{2EI}\begin{bmatrix} 12 & 0 &-6 & 3h_e \\ # 0 & 4h_e^2 & -3h_e & h_e^2 \\ # -6 & -3h_e^2 & 6 & -3h_e \\ # 3h_e & h_e^2 & -3h_e & 2h_e^2 \\ # \end{bmatrix}^{-1} # \begin{Bmatrix} -wh_e \\ 0 \\ -\frac{wh_e}{2}-F_0 \\ \frac{wh_e^2}{12}+M_0\end{Bmatrix} # $$ # # Which gives # # # $$ # \begin{Bmatrix}u_2\\\phi_2\\u_3\\\phi_3\end{Bmatrix}= # \frac{h_e}{6EI} \begin{bmatrix} # 3M_0h_e-4.25wh_e^3-5F_0h_e^2 \\ # 6M_0 - 7wh_e^2-9h_eF_0\\ # 12M_0h_e -12wh_e^3-16F_0h_e^2\\ # 12M_0-8wh_e^2-12h_eF_0 # \end{bmatrix} # $$ # # The reactions $Q_1^1$ and $Q_2^1$ can be found to be # # $$ # \begin{Bmatrix} Q_1^1 \\ Q_2^1 \end{Bmatrix} = # \frac{h_e^3}{16EI}\begin{bmatrix} -6 & 3h_e & 0 & 0 \\ -3h_e & h_e^2 & 0 & 0 \end{bmatrix} \begin{Bmatrix} u_2 \\ \phi_2 \\ u_3 \\ \phi_3 \end{Bmatrix} +\frac{wh_e}{12}\begin{Bmatrix} 6 \\ h_e \end{Bmatrix} = # \begin{Bmatrix} # 2wh_e+F_0\\ # 2h_e\left( wh_e+F_0\right)-M_0 # \end{Bmatrix} # $$ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:astroconda] # language: python # name: conda-env-astroconda-py # --- # + deletable=true editable=true import os import sys import numpy as np import matplotlib matplotlib.use('nbagg') #from matplotlib import style #style.use('ggplot') import matplotlib.pyplot as plt from skimage import feature from scipy import stats, ndimage from scipy.misc import factorial as fac import photutils from astropy import units as u from astropy import stats from astropy import visualization from astropy import coordinates from astropy.table import Table, vstack from astropy.io import ascii, fits from astropy.modeling import models, fitting, custom_model # %load_ext autoreload # %autoreload 2 # + deletable=true editable=true ACT = 104 NODE = 3222 BRAD = 3228.5 # + deletable=true editable=true NODE*ACT # + deletable=true editable=true # for each node, the influence from each actuator surf2act = np.fromfile("../mmtwfs/data/Surf2ActTEL_32.bin", dtype=np.float32).reshape(NODE, ACT) nodecoor = ascii.read("../mmtwfs/data/bcv_node_coordinates.dat", names=["bcv_id", "bcv_x", "bcv_y", "bcv_z"]) actcoor = ascii.read("../mmtwfs/data/actuator_coordinates.dat", names=["act_i", "act_x", "act_y", "act_type"]) for ax in ["bcv_x", "bcv_y"]: nodecoor[ax] /= BRAD nodecoor['bcv_rho'] = np.sqrt(nodecoor['bcv_x']**2 + nodecoor['bcv_y']**2) nodecoor['bcv_phi'] = np.arctan2(nodecoor['bcv_y'], nodecoor['bcv_x']) # + deletable=true editable=true actcoor['act_x'].unit = u.mm # + deletable=true 
editable=true actcoor # + deletable=true editable=true im = np.arange(90).reshape(9, 10) # + deletable=true editable=true np.indices(im.shape, dtype=float) # + deletable=true editable=true slopes = (np.indices(im.shape, dtype=float)/(np.r_[im.shape].reshape(-1, 1, 1))).reshape(2, -1) slopes = np.vstack([slopes, np.ones(slopes.shape[1])]) slopes # + deletable=true editable=true pinv = np.linalg.pinv(slopes) # + deletable=true editable=true np.dot(im.reshape(1, -1), pinv).ravel()[:2] # + deletable=true editable=true im # + deletable=true editable=true import astropy.units as u # + deletable=true editable=true u.radian # + deletable=true editable=true a.to(u.nm).value # + deletable=true editable=true from mmtwfs.telescope import MMT # + deletable=true editable=true mmt = MMT() # + deletable=true editable=true mmt.diameter # + deletable=true editable=true mmt.obscuration # + deletable=true editable=true 12e6/(128*128) # + deletable=true editable=true 12e6/(4008*2672) # + deletable=true editable=true import poppy # - ap = poppy.HexagonAperture? ap = poppy.HexagonAperture(side=1.0, rotation=30) ap.display(colorbar=False) plt.show() arr = ap.to_fits(npix=27)[0].data.astype(float) plt.imshow(arr) plt.show() from mmtwfs.telescope import MMT from mmtwfs.zernike import ZernikeVector t = MMT() zv = ZernikeVector(Z05=-1000) zv.plot_map() plt.show() print(zv) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import numpy as np import scipy.stats as st np.random.uniform(low=0,high=1,size=10) np.random.randint(low=0, high=100, size=10) dist = st.norm(loc=0.0,scale=1.0) x = np.array([-0.5,0.,0.5]) dist.pdf(x) dist = st.norm(loc=0.0,scale=1.0) dist.pdf(1.645) dist.cdf(1.645) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ***Widhya Covid-19 Analysis*** import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns # ### ***Read the csv file*** dataframe=pd.read_csv('https://raw.githubusercontent.com/WidhyaOrg/datasets/master/covid19.csv') dataframe.head() # ### ***Dropping the unrelevant feature*** df=df.drop('Sno',axis=1) df.head() # ### ***Grouping all the features by date*** df=df.groupby('Date', sort = False).sum() df.head() # ### ***Total cases on each day*** df['Total_cases'] = df.sum(axis =1).astype('int') df # ### ***Visualization of total number of cases and we can see after 3rd march the cases keep increasing day by day*** plt.figure(figsize=(20,10)) plt.plot(df.Total_cases,color='r') plt.xticks(rotation = 90, fontsize = 15) plt.show() df.tail(20) # ### ***Rate of increase of each day*** rate=[] for i in range(0,df.shape[0]-1): ratenum=(df['Total_cases'].iloc[i+1]-df['Total_cases'].iloc[i])/df['Total_cases'].iloc[i] rate.append(ratenum) import numpy as np avg_rate=np.average(rate) avg_rate # ### ***Used Exponential Model*** #P_t=P_o*(e^(r*t)) import math P_o=31 t=26 P_t=round(P_o*(math.exp(avg_rate*t))) P_t # ## ***THANK YOU!!!*** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import json from yolox.exp import get_exp from yolox import inference 
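# Added note (not in the original notebook): the cells below assume the standard COCO
# annotation layout, where each entry of `annotations` carries `bbox = [x, y, width, height]`
# in pixels, so `bbox[2]` and `bbox[3]` are the box width and height that get averaged later.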
inference.get_exp get_exp f = json.load(open('./annotations/instances_val2017.json')) for i in range(len(f['images'])): f['images'][i]['file_name'] = f['images'][i]['file_name'].replace('player_crops/', '') with open('./annotations/instances_val2017.json', 'w') as ff: json.dump(f, ff) f.keys() f['categories'] f['annotations'][0] # + w, h =[],[] for x in f['annotations']: w.append(x['bbox'][2]) h.append(x['bbox'][3]) print(sum(w)/len(w), sum(h)/len(h)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import numpy as np import pandas as pd from scipy import stats import statsmodels.api as sm from statsmodels.sandbox.regression.predstd import wls_prediction_std from scout import io from scout import utils from matplotlib import rcParams rcParams['font.family'] = 'sans-serif' rcParams['font.sans-serif'] = ['Arial'] import matplotlib.pyplot as plt import seaborn as sns working_dir = '/data/datasets/organoid_phenotyping/analysis/d35_vs_d60/Lancaster_d35/20190430_11_36_09_AA-4.30.19-org1_488LP12p5_561LP120_642LP50' os.listdir(working_dir) # + df = pd.read_excel(os.path.join(working_dir, 'individual_ventricle_cellfreq.xlsx'), index_col=0) df2 = pd.read_excel(os.path.join(working_dir, 'individual_ventricle_cyto.xlsx'), index_col=0) df['dist'] = df2['dist'] df['dist_adj'] = df['dist'] - df['eq_diam']/2 df.head() # - data_flatten = {'freq': pd.concat([df['sox2_freq'], df['tbr1_freq'], df['dn_freq']]), 'type': len(df) * ['SOX2'] + len(df) * ['TBR1'] + len(df) * ['DN'], 'eq_diam': 3 * list(df['eq_diam'])} df_flatten = pd.DataFrame(data_flatten) # + p = sns.JointGrid(x='eq_diam', y='freq', data=df_flatten) sns.regplot(x='eq_diam', y='sox2_freq', data=df, ax=p.ax_joint, color='r', scatter_kws={'s': 4, 'alpha': 0.6}, label='SOX2') sns.regplot(x='eq_diam', y='tbr1_freq', data=df, ax=p.ax_joint, color='g', scatter_kws={'s': 4, 'alpha': 0.6}, label='TBR1') sns.regplot(x='eq_diam', y='dn_freq', data=df, ax=p.ax_joint, color='b', scatter_kws={'s': 4, 'alpha': 0.6}, label='DN') sns.distplot(df_flatten['eq_diam'], ax=p.ax_marg_x, color='k') sns.distplot(df['sox2_freq'], ax=p.ax_marg_y, color='r', vertical=True) sns.distplot(df['tbr1_freq'], ax=p.ax_marg_y, color='g', vertical=True) sns.distplot(df['dn_freq'], ax=p.ax_marg_y, color='b', vertical=True) p.ax_joint.set_xlim([0, 300]) p.ax_joint.set_ylim([0, 0.9]) p.ax_joint.legend() p.ax_marg_x.set_title("{} ventricles".format(len(df))) p.ax_joint.set_xlabel('Eq. 
diameter ($\mu m$)') p.ax_marg_x.set_xlabel(None) p.ax_joint.set_ylabel('Frequency') p.ax_marg_y.set_ylabel(None) # Get pearson correlations r_sox2, p_sox2 = stats.pearsonr(df['eq_diam'], df['sox2_freq']) r_tbr1, p_tbr1 = stats.pearsonr(df['eq_diam'], df['tbr1_freq']) r_dn, p_dn = stats.pearsonr(df['eq_diam'], df['dn_freq']) p.ax_joint.annotate('$r_{SOX2}=$' + f'{r_sox2:0.2f}, $p=$ {p_sox2:.1e}', [80, 0.86]) p.ax_joint.annotate('$r_{TBR1}=$' + f'{r_tbr1:0.2f}, $p=$ {p_tbr1:.1e}', [80, 0.82]) p.ax_joint.annotate('$r_{DN}=$' + f'{r_dn:0.2f}, $p=$ {p_dn:.1e}', [80, 0.78]) plt.savefig(os.path.join(working_dir, 'ventricle_size_vs_celltypes.pdf'), bbox_inches='tight') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- if 'google.colab' in str(get_ipython()): # !pip install -r https://raw.githubusercontent.com/abbbe/eye-on-stick/main/requirements.txt # !git clone https://github.com/abbbe/eye-on-stick # %cd eye-on-stick # + # %load_ext autoreload # %autoreload 2 import numpy as np import os, urllib, time os.environ["MLFLOW_TRACKING_URI"] = "sqlite:///mlruns/db.sqlite" import mlflow, git mlflow_client = mlflow.tracking.MlflowClient() from stable_baselines.common.cmd_util import make_vec_env from stable_baselines.common.vec_env import VecNormalize from stable_baselines import PPO2, SAC import matplotlib.pyplot as plt # %matplotlib inline from lib.viz import showarray from IPython import display from lib import eos from lib.eos2d import EyeOnStickEnv2D from lib.pyb.eos3d import EyeOnStickEnv3D # - #ENV = EyeOnStickEnv2D ENV = EyeOnStickEnv3D N_ENVS = 1 with git.Repo() as repo: git_info = f'{repo.active_branch.name}/{repo.git.rev_parse(repo.head.object.hexsha, short=4)}' if repo.is_dirty(): git_info = f'*{git_info}' # + import json from lib.run import run_env_nsteps def log_metrics(metrics, step): # log the content of metrics dict as mlflow metrics for key, value in metrics.items(): mlflow.log_metric(key=key, value=value, step=step) def save_and_register_model(model, params, saved_models_dir, era, model_name, mlflow_run): # save the trained models, each era separately model_fname = f'{saved_models_dir}/{era}' model.save(model_fname) params_fname = f'{saved_models_dir}/{era}.json' with open(params_fname, 'w') as fp: json.dump(params, fp) # register the trained model return mlflow_client.create_model_version(name=model_name, source=model_fname, run_id=mlflow_run.info.run_id) # - def learn_and_run( exp_id, run_name, model_name, n_joints, n_eras, n_learn_episodes, params, gym_policy_class=SAC, gym_model_name='MlpPolicy', displayfunc=None, start_version=None ): """ 1. Instanciates environment with n_joints. 2. Train the model for N_LEARN_EPOCHS epochs. 3. Save the model as mlflow artefact, named by 'name'. 2. Step through the environment for N_ERAS x N_STEPS steps, collecting metrics and rendering it (if 'display' is set). 3. Log metrics into mlflow runs (parent run gets '{n_joints}J {name}' name). 7. Returns file name to load the model from. 
""" env = make_vec_env(lambda: ENV(n_joints, params), n_envs=N_ENVS) #env = VecNormalize(env) n_steps = params.get('MAX_NSTEPS') # create new mlflow run which will become a parent of per-era runs with mlflow.start_run(run_name=run_name, experiment_id=exp_id) as parent_run: # log gym params mlflow.log_param("gym_policy_class", gym_policy_class.__name__) mlflow.log_param("gym_model_name", gym_model_name) mlflow.log_param("start_version", start_version) for key, value in params.items(): mlflow.log_param(key, value) # arrange tensorboard logs mlflow_artifacts_dir = urllib.request.url2pathname(urllib.parse.urlparse(mlflow.get_artifact_uri()).path) tensorboard_logdir = os.path.join(mlflow_artifacts_dir, "tensorboard_log") os.makedirs(tensorboard_logdir, exist_ok=False) # create gym model and directory to save it if start_version: registered_model = mlflow_client.get_model_version(model_name, start_version) model = gym_policy_class.load(registered_model.source) model.set_env(env) else: model = gym_policy_class(gym_model_name, env, verbose=0, tensorboard_log=tensorboard_logdir) saved_models_dir = os.path.join(mlflow_artifacts_dir, "saved_models") os.makedirs(saved_models_dir, exist_ok=False) ## run eras loop metrics = None for era in range(n_eras): child_run_name = f'era={era}' with mlflow.start_run(run_name=child_run_name, experiment_id=exp_id, nested=True) as child_run: model.learn(n_learn_episodes * n_steps) registered_model = save_and_register_model(model, params, saved_models_dir, era, model_name, child_run) mlflow.log_metric("model_version", registered_model.version) env.env_method('set_render_info', {'model_name': registered_model.name, 'model_version': registered_model.version, 'start_version': start_version}) metrics, _data = run_env_nsteps(env, model, n_steps, displayfunc=displayfunc) log_metrics(metrics, step=era) # log to the parent run if metrics: log_metrics(metrics, step=None) env.close() # + # SAC(policy, env, gamma=0.99, learning_rate=0.0003, buffer_size=50000, learning_starts=100, train_freq=1, # batch_size=64, tau=0.005, ent_coef='auto', target_update_interval=1, gradient_steps=1, target_entropy='auto', action_noise=None, # random_exploration=0.0, verbose=0, tensorboard_log=None, _init_setup_model=True, policy_kwargs=None, full_tensorboard_log=False, seed=None, n_cpu_tf_sess=None) # + #raise None # - # ### Train to find the target and aim coarsely # + exp_id = mlflow.get_experiment_by_name("PYB-6J-3S-1A") model_name='eos3d.6j-coarse-aim' run_name= f'{model_name} 008 start-rand target-xz a-25 e-5 alpha_cm alpha-scalar 4p' learn_and_run( exp_id=exp_id.experiment_id, run_name=run_name, model_name=model_name, n_joints=6, n_eras=150, n_learn_episodes=500, params={'MAX_NSTEPS': 150, 'ALPHA_MAXDIFF_GOAL': 20, 'EYE_LEVEL_MAXDIFF_GOAL': 5}, ) # - raise None # ### First decently working 3J policy NJ = 3 # + # we run N_ERAS eras (=mlflow runs) in total: # first we let the agent learn for N_LEARN_EPISODES * MAX_NSTEPS # then we run it one episode and log metrics # + tags=[] N_ERAS = 25 # eras N_LEARN_EPISODES = 100 MAX_NSTEPS = 150 # episode will end after so many steps for _ in range(10): learn_and_run( n_joints=NJ, n_eras=N_ERAS, n_learn_episodes=N_LEARN_EPISODES, params={'MAX_NSTEPS': MAX_NSTEPS, 'ALPHA_MAXDIFF_GOAL': 3, 'EYE_LEVEL_MAXDIFF_GOAL': 3}, name='no eye_phi obs') # - # ### Long run # + N_ERAS = 10 # eras N_LEARN_EPISODES = 2000 MAX_NSTEPS = 150 # episode will end after so many steps #exp_id = mlflow.create_experiment("Train 3J for 3M steps") exp_id = 
mlflow.get_experiment_by_name("Train 3J for 3M steps") learn_and_run(n_joints=NJ, n_eras=N_ERAS, n_learn_episodes=N_LEARN_EPISODES, params={'MAX_NSTEPS': MAX_NSTEPS, 'ALPHA_MAXDIFF_GOAL': 3, 'EYE_LEVEL_MAXDIFF_GOAL': 3}, name='training', exp_id=exp_id.experiment_id, displayfunc=None) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: CMIP6 2019.10a # language: python # name: cmip6-201910a # --- def merge_and_update_indices(to_merge, df_consts, cols_to_merge, already_merged=pd.DataFrame()): """Function to fill in nan values of the indices to assets with corresponding constant variables with newly calculated indices (and variable_ids)""" df_merged = pd.merge(to_merge, df_consts, on=cols_to_merge, how='left', suffixes=('', '_const')) print(df_merged[df_merged['index_consts'].notnull()].count()) merged = already_merged.append(df_merged[df_merged['index_consts'].notnull()]) to_merge = df_merged[df_merged['index_consts'].isnull()] to_merge = to_merge.drop(columns=['index_consts','variable_id']) print(already_merged.shape,merged.shape,to_merge.shape) return merged, to_merge # + # just need one row per model/ensemble df_merge_vars = (col_analysis.df.copy() .drop(columns=['table_id','variable_id','path','time_range','dcpp_init_year']) .drop_duplicates() .reset_index(drop=False)) # get the same type of dataframe for the const vars cols_to_merge = [i for i in df_merge_vars.columns if i != 'index'] df_merge_const = (subset_const.df[cols_to_merge + ['variable_id']] .reset_index(drop=False)) # perform first cut merge trying to match on all metadata this_merged = pd.merge(df_merge_vars, df_merge_const, on=cols_to_merge, how='left', suffixes=('_vars', '_consts')) merged = this_merged[this_merged['index_consts'].notnull()] to_merge = this_merged[this_merged['index_consts'].isnull()] print(f"{merged.shape[0]},{to_merge.shape[0]}") ## now drop ensemble member and merge again # start with "preferred ensemble" this_merge_const = df_merge_const[df_merge_const.member_id=='r1i1p1f1'].drop(columns=['member_id']) cols_to_merge = [i for i in cols_to_merge if i != 'member_id'] merged, to_merge = merge_and_update_indices(to_merge, this_merge_const, cols_to_merge, already_merged = merged) print(f"{merged.shape[0]},{to_merge.shape[0]}") # now try with any ensemble this_merge_const = df_merge_const.drop(columns=['member_id']).drop_duplicates(subset=cols_to_merge) merged, to_merge = merge_and_update_indices(to_merge, this_merge_const, cols_to_merge, already_merged = merged) print(f"{merged.shape[0]},{to_merge.shape[0]}") # - # now drop version, selecting latest version cols_to_merge = [i for i in cols_to_merge if i != 'version'] this_merge_const = (this_merge_const.sort_values('version') .groupby(cols_to_merge+['variable_id']) .last() .reset_index(drop=False)) merged = merge_and_update_indices(merged, this_merge_const, cols_to_merge) print(f"{merged['index_consts'].count()}/{merged.shape[0]}") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python Reference - Classes and Functions # **Author:** # # #### Definition # the basic building blocks of object oriented programming # # ##### Links # https://www.w3schools.com/python/python_classes.asp # https://www.w3schools.com/python/python_functions.asp # 
https://www.w3schools.com/python/python_scope.asp # ### ctor definition # the ctor is called **\_\_init\_\_** in python by convention def __init__(self): super().__init__("pass args to parent class") # ### function definition # use the **def** keyword to define a function. set the default values and datatypes of arguments and return value # + def function_name(input: str, input_with_default: int = 1)-> int: print("function content is indented") return input_with_default print(function_name("")) print(function_name("",2)) print(type(function_name(""))) # - # ### keyword arguments # use preferred argument order and improve readability by using keyword arguments # + def increment(number, by): return number + by increment(by=2, number=3) # - # ### void # functions return "void" automatically, but it can be defined specifically using **pass** # + def nothing(): pass nothing() # - # ### varying number of arguments # python packages function args as a tuple with **\*** # + def multiply_tuple(*numbers): total = 1 for number in numbers: total *= number return total multiply_tuple(1, 2, 3) # - # ### varying number of keyword arguments # python packages function args as a dictionary with **\*\*** # + def save_user(**user): return user print("type: " + str(type(save_user(id=1, name="admin")))) print("kvp: " + str(save_user(id=1, name="admin"))) print("key: " + str(save_user(id=1, name="admin")["id"])) print("val: " + str(save_user(id=1, name="admin")["name"])) # - # ### scope # scoped variables are accessible outside of the scope they were defined in # + def greet(set_message: bool): if set_message: message = "true" return message print("str(greet(True)): " + str(greet(True))) print("str(greet(False)) will result in an \"UnboundLocalError: local variable 'message' referenced before assignment\"") # - # ### global variables # global variables are not modified by statements in functions -> use the **global** keyword # + message = "a" def greet_local(): message = "b" return message def greet_global(): global message message = "c" return message print(greet_local()) print(message) print(greet_global()) print(message) # - # ### static variables # + class Car: brand = "Daimler" print(Car.brand) Car.brand = "VW" print(Car.brand) # - # ### FizzBuzz # Python implementation of the FizzBuzz algorithm # + def fizz_buzz(input): if input % 3 == 0: if input % 5 == 0: return "FizzBuzz" return "Fizz" if input % 5 == 0: return "Buzz" return str(input) print(fizz_buzz(0)) print(fizz_buzz(1)) print(fizz_buzz(2)) print(fizz_buzz(3)) print(fizz_buzz(4)) print(fizz_buzz(5)) print(fizz_buzz(6)) print(fizz_buzz(7)) print(fizz_buzz(8)) print(fizz_buzz(9)) print(fizz_buzz(10)) print(fizz_buzz(11)) print(fizz_buzz(12)) print(fizz_buzz(13)) print(fizz_buzz(14)) print(fizz_buzz(15)) # - # ### special functions # objects have a number of special functions that can be overriden # + class River(): name: str length: int def __init__(self, name: str, length: int): self.name = name self.length = length def __str__(self): return f"River (Name = {self.name})" def __repr__(self): return f"Repr River (Name = {self.name})" def __len__(self): return self.length neckar = River("Neckar", 1234) # - # the python **tostring** method is called **\_\_str\_\_**. 
print(neckar) # calling an object without **str(obj)** or **print(obj)** will call the **\_\_repr\_\_** method # the python **count** method is called **\_\_len\_\_** print(len(neckar)) # ### variable function parameters # use **\*args** to pass a variable number of arguments to a function as a tuple # + def multiply(*args): print(type(args)) result = 1 for arg in args: result = result * arg return result product = multiply(1, 2, 3) print(product) print(type(product)) # - # use **\*\*args** to pass a variable number of arguments to a function as a dictionary # + # %matplotlib inline import matplotlib.pyplot as plt def create_plot(**plot_params): print(plot_params) plt.plot([1, 2, 3], [5, 6, 5], **plot_params) plt.show() create_plot(color="r", linewidth=10, linestyle="dashed") # create_plot(color="r", linewidth=10, linestyle="dashed", notcontainedparam="asd") # <- will throw an error # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="5slcdn4iWSTq" # # Kaggle Ship Segmentation - Create Model on Kaggle Environment # # Link to competition: https://www.kaggle.com/c/airbus-ship-detection # # This notebook was converted from my prior Kaggle notebook. Migrated to TF 2.x and converted various methods to be more native TF. This will create and save a trained model. The model is built from scratch, not pre-trained. I do have links to a pre-trained option, but it did not perform as well as starting from scratch. (Pre-trained is smaller, so had a harder time learning features.) # # The image size is (224, 224, 3), you can change it in the Config settings. If you want to get the best possible result, the size should be increased as some features are small and when downsized lose detail. # # I stopped training after 7 epocs, val was over 75%, but still slowly learning. # # This notebook was about applying learning to use TensorFlow 2.x and Datasets, not to create a final model for submission. There is much more fine-tuning todo to obtain 80% plus scores. # # + colab={} colab_type="code" id="Tn_PWaH1WSTu" # Change to True if using Kaggle environment.... USING_KAGGLE = True # + colab={} colab_type="code" id="g3zIITg_WST0" # Normal includes... 
from __future__ import absolute_import, division, print_function, unicode_literals import os, sys, random, warnings, time, copy, csv import numpy as np import IPython.display as display import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline import tensorflow as tf print(tf.__version__) from tensorflow.keras.models import load_model # This allows the runtime to decide how best to optimize CPU/GPU usage AUTOTUNE = tf.data.experimental.AUTOTUNE # + colab={} colab_type="code" id="XPUpxkxkPzGS" # Config class that wraps global variable access, using personal libs is a pain in Kaggle, so copied in the class # Not all of the global vars are used, easier to jsut copy class over from lib class GlobalParms(object): def __init__(self, **kwargs): self.keys_and_defaults = { "MODEL_NAME": "", # if you leave .h5 off, puts into a subdirectory "ROOT_PATH": "", # Location of the data for storing any data or files "TRAIN_DIR": "", # Subdirectory in the Root for Training files "TEST_DIR": "", # Optional subdirectory in Root for Testing file "SUBMISSION_PATH": None, # Optional subdirectory for Contest files "MODEL_PATH": None, # Optional, subdirectory for saving/loading model "TRAIN_PATH": None, # Subdirectory in the Root for Training files "TEST_PATH": None, # Optional subdirectory in Root for Testing file "SMALL_RUN": False, # Optional, run size will be reduced "NUM_CLASSES": 0, # Number of classes "CLASS_NAMES": [], # list of class names "IMAGE_ROWS": 0, # Row size of the image "IMAGE_COLS": 0, # Col size of the image "IMAGE_CHANNELS": 0, # Num of Channels, 1 for Greyscale, 3 for color "BATCH_SIZE": 0, # Number of images in each batch "EPOCS": 0, # Max number of training EPOCS "ROW_SCALE_FACTOR": 1, # Optional, allows scaling of an image. "COL_SCALE_FACTOR": 1, # Optional, allows scaling of an image. 
"IMAGE_EXT": ".jpg", # Extent of the image file_ext # Optional, default is np.float64, reduce memory by using np.float32 # or np.float16 "IMAGE_DTYPE": np.float32, # Optional, change default if needed, can save memory space "Y_DTYPE": np.int, "LOAD_MODEL": False, # Optional, If you want to load a saved model "SUBMISSION": "submission.csv", # Optional, Mainly used for Kaggle "METRICS": ['accuracy'], # ['categorical_accuracy'], ['accuracy'] "FINAL_ACTIVATION": 'sigmoid', # sigmoid, softmax "LOSS": "" # 'binary_crossentropy', 'categorical_crossentropy' } self.__dict__.update(self.keys_and_defaults) self.__dict__.update((k, v) for k, v in kwargs.items() if k in self.keys_and_defaults) # Automatically reduce the training parms, change as needed if self.__dict__["SMALL_RUN"]: self.__dict__["BATCH_SIZE"] = 1 self.__dict__["EPOCS"] = 2 self.__dict__["ROW_SCALE_FACTOR"] = 1 self.__dict__["COL_SCALE_FACTOR"] = 1 # Use configuration items to create real ones self.__dict__["SCALED_ROW_DIM"] = \ np.int(self.__dict__["IMAGE_ROWS"] / self.__dict__["ROW_SCALE_FACTOR"]) self.__dict__["SCALED_COL_DIM"] = \ np.int(self.__dict__["IMAGE_COLS"] / self.__dict__["COL_SCALE_FACTOR"]) if self.__dict__["TRAIN_PATH"] is None: # Not passed, so set it self.__dict__["TRAIN_PATH"] = \ os.path.join(self.__dict__["ROOT_PATH"], self.__dict__["TRAIN_DIR"]) if self.__dict__["TEST_PATH"] is None: # Not passed, so set it self.__dict__["TEST_PATH"] = \ os.path.join(self.__dict__["ROOT_PATH"], self.__dict__["TEST_DIR"]) if self.__dict__["SUBMISSION_PATH"] is None: # Not passed, so set self.__dict__["SUBMISSION_PATH"] = \ os.path.join(self.__dict__["ROOT_PATH"], self.__dict__["SUBMISSION"]) else: self.__dict__["SUBMISSION_PATH"] = \ os.path.join(self.__dict__["SUBMISSION_PATH"], self.__dict__["SUBMISSION"]) if self.__dict__["MODEL_PATH"] is None: # Not passed, so set it self.__dict__["MODEL_PATH"] = \ os.path.join(self.__dict__["ROOT_PATH"], self.__dict__["MODEL_NAME"]) else: self.__dict__["MODEL_PATH"] = \ os.path.join(self.__dict__["MODEL_PATH"], self.__dict__["MODEL_NAME"]) self.__dict__["IMAGE_DIM"] = \ (self.__dict__["SCALED_ROW_DIM"], self.__dict__["SCALED_COL_DIM"], self.__dict__["IMAGE_CHANNELS"]) if self.__dict__["IMAGE_CHANNELS"] == 1: self.__dict__["COLOR_MODE"] = "grayscale" else: self.__dict__["COLOR_MODE"] = "rgb" def set_train_path(self, train_path): self.__dict__["TRAIN_PATH"] = train_path def set_class_names(self, class_name_list): self.__dict__["CLASS_NAMES"] = class_name_list if self.__dict__["NUM_CLASSES"] != \ len(self.__dict__["CLASS_NAMES"]): raise ValueError("ERROR number of classses do not match, Classes: " + str(self.__dict__["NUM_CLASSES"]) + " Class List: " + str(self.__dict__["CLASS_NAMES"])) def print_contents(self): print(self.__dict__) def print_key_value(self): for key, value in self.__dict__.items(): print(key, ":", value) # + [markdown] colab_type="text" id="rMK-FCdiWST_" # ## Various Methods # + colab={} colab_type="code" id="QFpBD1NUWSUB" # Set these to match your environment if USING_KAGGLE: ROOT_PATH = "../input/airbus-ship-detection/" else: ROOT_PATH = "/Users/john/Documents/ImageData/KaggleShip/" ###### CHANGE FOR SPECIFIC ENVIRONMENT # Establish global dictionary parms = GlobalParms(ROOT_PATH=ROOT_PATH, MODEL_NAME="airModel.h5", MODEL_PATH="" TRAIN_DIR="train_v2", NUM_CLASSES=1, IMAGE_ROWS=224, IMAGE_COLS=224, IMAGE_CHANNELS=3, BATCH_SIZE=16, EPOCS=8, FINAL_ACTIVATION="sigmoid", IMAGE_EXT=".jpg") parms.print_contents() # + colab={} colab_type="code" id="KWpSlnmVWSUG" # Encodes a mask, only 
used to verify training def multi_rle_encode(img): labels = label(img[:, :, 0]) return [rle_encode(labels==k) for k in np.unique(labels[labels>0])] # ref: https://www.kaggle.com/paulorzp/run-length-encode-and-decode def rle_encode(img): ''' img: numpy array, 1 - mask, 0 - background Returns run length as string formated ''' pixels = img.T.flatten() pixels = np.concatenate([[0], pixels, [0]]) runs = np.where(pixels[1:] != pixels[:-1])[0] + 1 runs[1::2] -= runs[::2] return ' '.join(str(x) for x in runs) def rle_decode(mask_rle, shape=(768, 768)): ''' mask_rle: run-length as string formated (start length) shape: (height,width) of array to return Returns numpy array, 1 - mask, 0 - background ''' s = mask_rle.split() starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])] starts -= 1 ends = starts + lengths img = np.zeros(shape[0]*shape[1], dtype=np.uint8) for lo, hi in zip(starts, ends): img[lo:hi] = 1 return img.reshape(shape).T # Needed to align to RLE direction def masks_as_image(in_mask_list): # Take the individual ship masks and create a single mask array for all ships all_masks = np.zeros((768, 768), dtype = np.int16) #if isinstance(in_mask_list, list): for mask in in_mask_list: if isinstance(mask, str): all_masks += rle_decode(mask) return np.expand_dims(all_masks, -1) # + colab={} colab_type="code" id="VT2xKt1nWSUL" def show_batch_mask(display_list): plt.figure(figsize=(15, 15)) title = ['Input Image', 'True Mask', 'Predicted Mask'] for i in range(len(display_list)): plt.subplot(1, len(display_list), i+1) plt.title(title[i]) plt.imshow(tf.keras.preprocessing.image.array_to_img(display_list[i])) plt.axis('off') plt.show() # + [markdown] colab_type="text" id="ES3pwdKpWSUR" # ## Create training and validation files # # The number of ships is not balanced and the size of some of the images are very small. The approach can be changed if you want. # + _cell_guid="" _uuid="" colab={} colab_type="code" id="YLfIJShCWSUT" # https://www.kaggle.com/hmendonca/u-net-model-with-submission all_df = pd.read_csv(os.path.join(parms.ROOT_PATH, "train_ship_segmentations_v2.csv")) not_empty = pd.notna(all_df.EncodedPixels) print(not_empty.sum(), 'masks in', all_df[not_empty].ImageId.nunique(), 'images') print((~not_empty).sum(), 'empty images in', all_df.ImageId.nunique(), 'total images') all_df.head() # + colab={} colab_type="code" id="iSgWYkcYWSUZ" # Add columns and built a unique list of image_ids. 
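# A quick sanity check on the RLE helpers above (a sketch using a synthetic
# mask, not competition data): encoding and then decoding should give back the
# original array, since both follow the same column-major pixel order.

# +
toy_mask = np.zeros((768, 768), dtype=np.uint8)
toy_mask[100:103, 200:203] = 1            # a tiny 3x3 "ship"

toy_rle = rle_encode(toy_mask)
print("RLE string:", toy_rle)

recovered = rle_decode(toy_rle)
print("Round-trip OK:", np.array_equal(recovered, toy_mask))
# -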
all_df['ships'] = all_df['EncodedPixels'].map(lambda c_row: 1 if isinstance(c_row, str) else 0) unique_img_ids = all_df.groupby('ImageId').agg({'ships': 'sum'}).reset_index() unique_img_ids['has_ship'] = unique_img_ids['ships'].map(lambda x: 1.0 if x>0 else 0.0) unique_img_ids['has_ship_vec'] = unique_img_ids['has_ship'].map(lambda x: [x]) # some files are too small/corrupt unique_img_ids['file_size_kb'] = unique_img_ids['ImageId'].map(lambda c_img_id: os.stat(os.path.join(parms.TRAIN_PATH, c_img_id)).st_size/1024) unique_img_ids = unique_img_ids[unique_img_ids['file_size_kb'] > 50] # keep only +50kb files unique_img_ids['file_size_kb'].hist() all_df.drop(['ships'], axis=1, inplace=True) unique_img_ids.sample(7) # + colab={} colab_type="code" id="1rK6dMVSWSUe" # Shows the unblanced, most have no ships, so need to change training set to have more with ships and # less without ships unique_img_ids['ships'].hist(bins=unique_img_ids['ships'].max()) # + colab={} colab_type="code" id="sLTB1uWrWSUj" # Bqlance rows SAMPLES_PER_GROUP = 1500 balanced_train_df = unique_img_ids.groupby('ships').apply(lambda x: x.sample(SAMPLES_PER_GROUP) if len(x) > SAMPLES_PER_GROUP else x) balanced_train_df['ships'].hist(bins=balanced_train_df['ships'].max()+1) print(balanced_train_df.shape[0], 'masks') # + colab={} colab_type="code" id="LEqMBiMRWSUo" # Create training and validation lists from the balanced df from sklearn.model_selection import train_test_split from sklearn.utils import shuffle train_ids, valid_ids = train_test_split(balanced_train_df, test_size = 0.2, stratify = balanced_train_df['ships']) #Try and make sure nice distribution between train and val train_df = pd.merge(all_df, train_ids) valid_df = pd.merge(all_df, valid_ids) train_df = shuffle(train_df) # Shuffle since same image would be grouped print(train_df.shape[0], 'training masks') print(valid_df.shape[0], 'validation masks') # + colab={} colab_type="code" id="P_S8I3ViWSUt" # Can double check.... #valid_df['ships'].hist(bins=train_df['ships'].max()+1) #train_df['ships'].hist() # + colab={} colab_type="code" id="8e8VIlUFWSUw" # set lengths and steps train_len = len(train_df) val_len = len(valid_df) images_list_len = train_len + val_len steps_per_epoch = np.ceil(train_len // parms.BATCH_SIZE) # set step sizes based on train & batch validation_steps = np.ceil(val_len // parms.BATCH_SIZE) # set step sizes based on val & batch print("Total number: ", images_list_len, " Train number: ", train_len, " Val number: ", val_len) print("Steps/EPOC: ", steps_per_epoch, " Steps/Validation: ", validation_steps) # + [markdown] colab_type="text" id="rj8gJz3AWSU1" # ## Build, load and augment TensorFlow Datasets # + colab={} colab_type="code" id="Wo5SrgekWSU2" # Decode the image, convert to float, normalize by 255 and resize def decode_img(image: tf.Tensor) -> tf.Tensor: # convert the compressed string to a 3D uint8 tensor image = tf.image.decode_jpeg(image, channels=parms.IMAGE_CHANNELS) # Use `convert_image_dtype` to convert to floats in the [0,1] range. 
image = tf.image.convert_image_dtype(image, parms.IMAGE_DTYPE) return image def resize_image_mask(image: tf.Tensor, mask: tf.Tensor) -> tf.Tensor: image = tf.image.resize(image, [parms.IMAGE_ROWS, parms.IMAGE_COLS]) mask = tf.image.resize(mask, [parms.IMAGE_ROWS, parms.IMAGE_COLS]) return image, mask def image_mask_aug(image, mask): if tf.random.uniform(()) > 0.5: k = tf.random.uniform(shape=[], minval=1, maxval=3, dtype=tf.int32) image = tf.image.rot90(image, k) #0-4, 0/360, 90/180/270 mask = tf.image.rot90(mask, k) #0-4, 0/360, 90/180/270 if tf.random.uniform(()) > 0.5: image = tf.image.flip_left_right(image) mask = tf.image.flip_left_right(mask) if tf.random.uniform(()) > 0.5: image = tf.image.flip_up_down(image) mask = tf.image.flip_up_down(mask) return image, mask def mask_wrapper(image_id_in): image_id = image_id_in.numpy().decode("utf-8") #print(type(image_id), image_id) encoded_pixels = all_df.query('ImageId==@image_id')['EncodedPixels'] #print("EP ", encoded_pixels) mask = masks_as_image(encoded_pixels) return tf.convert_to_tensor(mask, dtype=tf.int16) # method mapped to load image and mask def process_train_image_id(image_id: tf.Tensor) -> tf.Tensor: [mask,] = tf.py_function(mask_wrapper, [image_id], [tf.int16]) #parms must be tensors mask.set_shape((768,768, 1)) file_path = parms.TRAIN_PATH + "/" + image_id # load the raw data from the file as a string image = tf.io.read_file(file_path) image = decode_img(image) image, mask = resize_image_mask(image, mask) image, mask = image_mask_aug(image, mask) return image, mask def process_val_image_id(image_id: tf.Tensor) -> tf.Tensor: [mask,] = tf.py_function(mask_wrapper, [image_id], [tf.int16]) #parms must be tensors mask.set_shape((768,768, 1)) file_path = parms.TRAIN_PATH + "/" + image_id # load the raw data from the file as a string image = tf.io.read_file(file_path) image = decode_img(image) image, mask = resize_image_mask(image, mask) return image, mask # + colab={} colab_type="code" id="yj6RWCqJWSU8" # Create Dataset from pf train_dataset = tf.data.Dataset.from_tensor_slices(train_df["ImageId"].values) # Verify image paths were loaded for image_id in train_dataset.take(2): print("Image id: ", image_id.numpy().decode("utf-8")) # map training images to processing, includes any augmentation train_dataset = train_dataset.map(process_train_image_id, num_parallel_calls=AUTOTUNE) # Verify the mapping worked for image, mask in train_dataset.take(1): print("Image shape: {} Max: {} Min: {}".format(image.numpy().shape, np.max(image.numpy()), np.min(image.numpy()))) print("Encoded Pixels shape: ", mask.numpy().shape) some_image = image.numpy() some_mask = mask.numpy() #show_batch_mask([some_image, some_mask]) train_dataset = train_dataset.batch(parms.BATCH_SIZE).repeat() # + colab={} colab_type="code" id="c-HBARzmWSVp" # Show the images, execute this cell multiple times to see the images for image, mask in train_dataset.take(1): sample_image, sample_mask = image[0], mask[0] show_batch_mask([sample_image, sample_mask]) # + colab={} colab_type="code" id="dGNwIor4WSVt" # Create Dataset from pd val_dataset = tf.data.Dataset.from_tensor_slices(valid_df["ImageId"].values) # Verify image paths were loaded for image_id in val_dataset.take(2): print("Image id: ", image_id.numpy().decode("utf-8")) # map training images to processing, includes any augmentation val_dataset = val_dataset.map(process_val_image_id, num_parallel_calls=AUTOTUNE) # Verify the mapping worked for image, mask in val_dataset.take(1): print("Image shape: {} Max: {} Min: 
{}".format(image.numpy().shape, np.max(image.numpy()), np.min(image.numpy()))) print("Encoded Pixels shape: ", mask.numpy().shape) some_image = image.numpy() some_mask = mask.numpy() #show_batch_mask([some_image, some_mask]) val_dataset = val_dataset.batch(parms.BATCH_SIZE).repeat() # + colab={} colab_type="code" id="V2_PmP3CWSVw" # Final check before model training. I added a string of the mask non-zero counts - need to make sure the masks # were created ok. (got bit by this one after a small change....) # Test Validation or Train by changing the dataset mask_cnt_str = "" sample_image = None sample_mask = None #for image, mask in train_dataset.take(1): for image, mask in val_dataset.take(1): image_np = image.numpy() mask_np = mask.numpy() for i in range(len(image_np)): #show_batch_mask([image[i], mask[i]]) # Will show all of the batch mask_cnt_str = mask_cnt_str + str(np.count_nonzero(mask_np[i])) + " " #print("Mask shape: {} Max: {} Min: {}".format(mask.numpy().shape, np.max(mask.numpy()), np.min(mask.numpy()))) if np.count_nonzero(mask_np[i]) > 0: sample_image, sample_mask = image[i], mask[i] print("Mask counts: ", mask_cnt_str) show_batch_mask([sample_image, sample_mask]) # Will show the sample masks, if errors, then no mask was found content # + colab={} colab_type="code" id="eaBs1MOaWSVy" # Pre-trained model from this article. Could not build it on Kaggle, built it locally, then loaded as personal data set. # Did not train as well as building a new model. Move cell down if you want to play with it.... # The article is VERY good!! #https://www.tensorflow.org/tutorials/images/segmentation #https://tensorlayer.com #def iou_coe(output, target, threshold=0.5, axis=(1, 2, 3), smooth=1e-5): # pre = tf.cast(output > threshold, dtype=tf.float32) # truth = tf.cast(target > threshold, dtype=tf.float32) # inse = tf.reduce_sum(tf.multiply(pre, truth), axis=axis) # AND # union = tf.reduce_sum(tf.cast(tf.add(pre, truth) >= 1, dtype=tf.float32), axis=axis) # OR # batch_iou = (inse + smooth) / (union + smooth) # iou = tf.reduce_mean(batch_iou, name='iou_coe') # return iou # , pre, truth, inse, union #def compile_model_pre_trained(parms, model): # model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, decay=1e-6), # loss=combo_loss, # metrics=[iou_coe]) # return model #model = load_model("../input/unetmodel/baseModel.h5") #model = compile_model_pre_trained(parms, model) # + colab={} colab_type="code" id="vzVfXGUrWSV1" # If you want to see the improvements after each EPOC, add to the callback. Helps to make sure show_predictions works... class DisplayCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): clear_output(wait=True) show_predictions() print ('\nSample Prediction after epoch {}\n'.format(epoch+1)) # Normal callbacks # I monitor val_loss, just seemed to work better.... 
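# Rough class-balance check (a sketch, assuming val_dataset from the cells
# above is available): ship pixels are only a tiny fraction of each mask,
# which is part of the motivation for mixing a dice term into the loss below.

# +
for image, mask in val_dataset.take(1):
    mask_np = mask.numpy()
    positive_fraction = np.count_nonzero(mask_np) / mask_np.size
    print("Positive-pixel fraction in this batch: {:.4%}".format(positive_fraction))
# -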
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau, CSVLogger reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.33, patience=1, verbose=1, mode='min', min_delta=0.0001, cooldown=0, min_lr=1e-8) earlystopper = EarlyStopping(monitor="val_loss", mode="min", verbose=2, patience=10) checkpointer = ModelCheckpoint(parms.MODEL_PATH, monitor='val_loss', verbose=1, mode="auto", save_best_only=True) # Methods to support training verification def create_mask(pred_mask): pred_mask = np.where(pred_mask > 0.5, 1, 0) return pred_mask[0] # Shows the image, original mask and predicted mask def show_predictions(dataset=None, num=1): if dataset: for image, mask in dataset.take(num): pred_mask = model.predict(image) show_batch_mask([image[0], mask[0], create_mask(pred_mask)]) else: show_batch_mask([sample_image, sample_mask, create_mask(model.predict(sample_image[tf.newaxis, ...]))]) # https://lars76.github.io/neural-networks/object-detection/losses-for-segmentation/ def combo_loss(y_true, y_pred): def dice_loss(y_true, y_pred): numerator = 2 * tf.reduce_sum(y_true * y_pred, axis=(1,2,3)) denominator = tf.reduce_sum(y_true + y_pred, axis=(1,2,3)) return tf.reshape(1 - numerator / denominator, (-1, 1, 1)) return tf.keras.losses.binary_crossentropy(y_true, y_pred, from_logits=True) + dice_loss(y_true, y_pred) def compile_model_unet(parms, model): model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, decay=1e-6), loss=combo_loss, metrics=[tf.keras.metrics.MeanIoU(num_classes=2)]) return model # + colab={} colab_type="code" id="3q6fKKGGWSV4" #UNet Builder: https://www.kaggle.com/yushas/imageprocessingtips # I use this whenever I need a u-net, very easy to shrink or grow size/levels as needed. def conv_block_mod(m, dim, acti, bn, res, do=0): n = tf.keras.layers.Conv2D(dim, 3, activation=acti, padding='same')(m) n = tf.keras.layers.BatchNormalization()(n) if bn else n n = tf.keras.layers.Dropout(do)(n) if do else n n = tf.keras.layers.Conv2D(dim, 3, activation=acti, padding='same')(n) n = tf.keras.layers.BatchNormalization()(n) if bn else n return tf.keras.layers.Concatenate()([m, n]) if res else n def level_block_mod(m, dim, depth, inc, acti, do, bn, mp, up, res): if depth > 0: n = conv_block_mod(m, dim, acti, bn, res) m = tf.keras.layers.MaxPooling2D()(n) if mp else Conv2D(dim, 3, strides=2, padding='same')(n) m = level_block_mod(m, int(inc*dim), depth-1, inc, acti, do, bn, mp, up, res)#再帰 if up: m = tf.keras.layers.UpSampling2D()(m) m = tf.keras.layers.Conv2D(dim, 2, activation=acti, padding='same')(m) else: m = tf.keras.layers.Conv2DTranspose(dim, 3, strides=2, activation=acti, padding='same')(m) n = tf.keras.layers.Concatenate()([n, m]) m = conv_block_mod(n, dim, acti, bn, res) else: m = conv_block_mod(m, dim, acti, bn, res, do) return m def UNet_mod(img_shape, out_ch=1, start_ch=64, depth=4, inc_rate=2., activation='relu', dropout=False, batchnorm=True, maxpool=True, upconv=True, residual=False): i = tf.keras.layers.Input(shape=img_shape) # s = Lambda(lambda x: x / 255) (i) o = level_block_mod(i, start_ch, depth, inc_rate, activation, dropout, batchnorm, maxpool, upconv, residual)#Unet o = tf.keras.layers.Conv2D(out_ch, 1, activation=parms.FINAL_ACTIVATION)(o) return tf.keras.Model(inputs=i, outputs=o) # + colab={} colab_type="code" id="Pv_g7bWGWSV8" # Build the model model=UNet_mod(parms.IMAGE_DIM, out_ch=parms.NUM_CLASSES, start_ch=16, depth=4, batchnorm=True, dropout=0.5) model = compile_model_unet(parms, model) # + colab={} colab_type="code" 
id="5RNwmgTSWSV_" # Reload the model from prior runs #model = load_model(parms.MODEL_PATH, custom_objects={'combo_loss': combo_loss}) # + colab={} colab_type="code" id="_AiBKnfWWSWC" # Train from scratch # You need to download the saved model and/or move to your personal dataset # Once session ends, temp workspace is lost history = model.fit(train_dataset, validation_data=val_dataset, epochs=parms.EPOCS, steps_per_epoch=steps_per_epoch, validation_steps=validation_steps, callbacks=[reduce_lr, earlystopper, checkpointer]) # + colab={} colab_type="code" id="PFoZt2xBWSWF" history_df = pd.DataFrame(history.history) plt.figure() history_df[['loss', 'val_loss']].plot(title="Loss") plt.xlabel('Epocs') plt.ylabel('Loss') history_df[['mean_io_u', 'val_mean_io_u']].plot(title="Mean IOU") plt.xlabel('Epocs') plt.ylabel('Accuracy') #history_df[['accuracy', 'val_accuracy']].plot(title="Accuracy") #plt.xlabel('Epocs') #plt.ylabel('Accuracy') plt.show() # + colab={} colab_type="code" id="O5bBr0MtWSWI" #history_df # + [markdown] colab_type="text" id="HpP4qbg2WSWL" # ### Validate the training... # + colab={} colab_type="code" id="NPgQbjuwWSWM" # Create Dataset from pd test_df = shuffle(valid_df) test_dataset = tf.data.Dataset.from_tensor_slices(test_df["ImageId"].values) # Verify image paths were loaded for image_id in test_dataset.take(2): print(image_id.numpy().decode("utf-8")) # map training images to processing, includes any augmentation test_dataset = test_dataset.map(process_val_image_id, num_parallel_calls=AUTOTUNE) # Verify the mapping worked for image, mask in test_dataset.take(1): print("Image shape: {} Max: {} Min: {}".format(image.numpy().shape, np.max(image.numpy()), np.min(image.numpy()))) print("Encoded Pixels shape: ", mask.numpy().shape) some_image = image.numpy() some_mask = mask.numpy() #show_batch_mask([some_image, some_mask]) test_dataset = test_dataset.batch(1).repeat() # + colab={} colab_type="code" id="3Ps11g7PWSWP" show_predictions(test_dataset, 16) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7.5 64-bit # language: python # name: python37564bit67e9689113b54fe08a64600d6234199b # --- # # Plant ID set # binary classification (docks or not docks) # + from dataglob import DataGlob from IPython.display import clear_output import os # + in_dir = os.getcwd() + "/../training/plant-id" out_dir = os.getcwd() glob1 = DataGlob(in_dir, out_dir) # create DataGlob # adjust some knobs glob1._overwrite = True glob1.set_configuration("prewitt", True) glob1.set_configuration("original", True) # see the knobs glob1.show_configuration() # configured model glob1.prepare_database() glob1.create_databunch() glob1.create_model() # - # control model glob1.prepare_control_database() glob1.create_databunch() glob1.create_model() # + glob1 = DataGlob(in_dir, out_dir) # create DataGlob # adjust some knobs glob1._overwrite = True glob1.set_configuration("canny_tight", True) glob1.set_configuration("original", True) # see the knobs glob1.show_configuration() # configured model glob1.prepare_database() glob1.create_databunch() glob1.create_model() # + glob1 = DataGlob(in_dir, out_dir) # create DataGlob # adjust some knobs glob1._overwrite = True glob1.set_configuration("canny_auto", True) glob1.set_configuration("original", True) # see the knobs glob1.show_configuration() # configured model glob1.prepare_database() glob1.create_databunch() glob1.create_model() # + glob1 = 
DataGlob(in_dir, out_dir) # create DataGlob # adjust some knobs glob1._overwrite = True glob1.set_configuration("canny_wide", True) glob1.set_configuration("original", True) # see the knobs glob1.show_configuration() # configured model glob1.prepare_database() glob1.create_databunch() glob1.create_model() # + glob1 = DataGlob(in_dir, out_dir) # create DataGlob # adjust some knobs glob1._overwrite = True glob1.set_configuration("laplacian", True) glob1.set_configuration("original", True) # see the knobs glob1.show_configuration() # configured model glob1.prepare_database() glob1.create_databunch() glob1.create_model() # + glob1 = DataGlob(in_dir, out_dir) # create DataGlob # adjust some knobs glob1._overwrite = True glob1.set_configuration("sobel_x", True) glob1.set_configuration("original", True) # see the knobs glob1.show_configuration() # configured model glob1.prepare_database() glob1.create_databunch() glob1.create_model() # + glob1 = DataGlob(in_dir, out_dir) # create DataGlob # adjust some knobs glob1._overwrite = True glob1.set_configuration("sobel_y", True) glob1.set_configuration("original", True) # see the knobs glob1.show_configuration() # configured model glob1.prepare_database() glob1.create_databunch() glob1.create_model() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=true editable=true # # Convolutional Neural Networks # + [markdown] deletable=true editable=true # ## Machine learning on images # + deletable=true editable=true import pandas as pd import numpy as np # %matplotlib inline import matplotlib.pyplot as plt # + [markdown] deletable=true editable=true # ### MNIST # + deletable=true editable=true from keras.datasets import mnist # + deletable=true editable=true (X_train, y_train), (X_test, y_test) = mnist.load_data('/tmp/mnist.npz') # + deletable=true editable=true X_train.shape # + deletable=true editable=true X_test.shape # + deletable=true editable=true X_train[0] # + deletable=true editable=true plt.imshow(X_train[0], cmap = 'gray') # + deletable=true editable=true # Flattening the input X_train = X_train.reshape(-1, 28 * 28) X_test = X_test.reshape(-1, 28 * 28) # + deletable=true editable=true X_train.shape # + deletable=true editable=true X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255.0 X_test /= 255.0 # + deletable=true editable=true X_train[0] # + deletable=true editable=true from keras.utils.np_utils import to_categorical # + deletable=true editable=true y_train_cat = to_categorical(y_train) y_test_cat = to_categorical(y_test) # + deletable=true editable=true y_train[0] # + deletable=true editable=true # One-hot encoded labels y_train_cat[0] # + deletable=true editable=true y_train_cat.shape # + deletable=true editable=true y_test_cat.shape # + [markdown] deletable=true editable=true # ### Fully connected on images # + deletable=true editable=true from keras.models import Sequential from keras.layers import Dense import keras.backend as K K.clear_session() model = Sequential() model.add(Dense(10, input_dim = 28 * 28, activation = 'relu')) model.add(Dense(256, activation = 'relu')) ''' model.add(Dense(128, activation = 'relu')) ''' model.add(Dense(16, activation = 'relu')) model.add(Dense(10, activation = 'softmax')) model.compile(loss = 'categorical_crossentropy', optimizer = 'rmsprop', metrics = ['accuracy']) # + 
deletable=true editable=true h = model.fit(X_train, y_train_cat, batch_size = 12, epochs = 10, verbose = 2, validation_split = 0.3) # + deletable=true editable=true plt.figure(figsize = (12,7)) plt.plot(h.history['acc']) plt.plot(h.history['val_acc']) plt.legend(['Training', 'Validation'], loc = 'lower right') plt.title('Accuracy') plt.xlabel('Epochs') # + deletable=true editable=true test_accuracy = model.evaluate(X_test, y_test_cat)[1] # Printing the accuracy test_accuracy * 100 # + [markdown] deletable=true editable=true # ### Tensor Math # + deletable=true editable=true A = np.random.randint(10, size = (2, 3, 4, 5)) B = np.random.randint(10, size = (2, 3)) # + deletable=true editable=true A # + deletable=true editable=true A[0, 1, 0, 3] # + deletable=true editable=true B # + [markdown] deletable=true editable=true # #### A random colored image # + deletable=true editable=true img = np.random.randint(255, size = (4, 4, 3), dtype = 'uint8') img # + deletable=true editable=true plt.figure(figsize = (10, 7)) plt.subplot(221) plt.imshow(img) plt.title("All Channels combined") plt.subplot(222) plt.imshow(img[:, : , 0], cmap = 'Reds') plt.title("Red channel") plt.subplot(223) plt.imshow(img[:, : , 1], cmap = 'Greens') plt.title("Green channel") plt.subplot(224) plt.imshow(img[:, : , 2], cmap = 'Blues') plt.title("Blue channel") # + [markdown] deletable=true editable=true # ### Tensor operations # + deletable=true editable=true 2 * A # + deletable=true editable=true A + A # + deletable=true editable=true A.shape # + deletable=true editable=true B.shape # + deletable=true editable=true np.tensordot(A, B, axes = ([0, 1], [0, 1])) # + deletable=true editable=true np.tensordot(A, B, axes = ([0], [0])).shape # + [markdown] deletable=true editable=true # ### 1D convolution # + deletable=true editable=true a = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype='float32') # + deletable=true editable=true b = np.array([-1, 1], dtype='float32') # + deletable=true editable=true c = np.convolve(a, b) # + deletable=true editable=true a # + deletable=true editable=true b # + deletable=true editable=true c # + deletable=true editable=true plt.subplot(211) plt.plot(a, 'o-') plt.subplot(212) plt.plot(c, 'o-') # + [markdown] deletable=true editable=true # ### Image filters with convolutions # + deletable=true editable=true from scipy.ndimage.filters import convolve from scipy.signal import convolve2d from scipy import misc # + deletable=true editable=true img = misc.ascent() # + deletable=true editable=true img.shape # + deletable=true editable=true plt.imshow(img, cmap = 'gray') # + deletable=true editable=true h_kernel = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]]) # + deletable=true editable=true plt.imshow(h_kernel, cmap = 'gray') # + deletable=true editable=true res = convolve2d(img, h_kernel) plt.imshow(res, cmap = 'gray') # + [markdown] deletable=true editable=true # ## Convolutional neural networks # + deletable=true editable=true from keras.layers import Conv2D # + deletable=true editable=true img.shape # + deletable=true editable=true plt.figure(figsize = (7, 7)) plt.imshow(img, cmap = 'gray') # + deletable=true editable=true img_tensor = img.reshape((1, 512, 512, 1)) # + deletable=true editable=true model = Sequential() model.add(Conv2D(filters = 1, kernel_size = (3, 3), strides = (2,1), input_shape = (512, 512, 1))) model.compile('adam', 'mse') # + deletable=true editable=true img_pred_tensor = model.predict(img_tensor) # + deletable=true editable=true img_pred_tensor.shape # + 
deletable=true editable=true img_pred = img_pred_tensor[0, :, :, 0] # + deletable=true editable=true plt.imshow(img_pred, cmap = 'gray') # + deletable=true editable=true weights = model.get_weights() # + deletable=true editable=true weights[0].shape # + deletable=true editable=true plt.imshow(weights[0][:, :, 0, 0], cmap = 'gray') # + deletable=true editable=true weights[0] = np.ones(weights[0].shape) # + deletable=true editable=true model.set_weights(weights) # + deletable=true editable=true img_pred_tensor = model.predict(img_tensor) # + deletable=true editable=true img_pred = img_pred_tensor[0, :, :, 0] # + deletable=true editable=true plt.imshow(img_pred, cmap = 'gray') # + deletable=true editable=true model = Sequential() model.add(Conv2D(filters = 1, kernel_size = (3, 3), input_shape = (512, 512, 1), padding='same')) model.compile('adam', 'mse') img_pred_tensor = model.predict(img_tensor) img_pred_tensor.shape # + deletable=true editable=true img_pred_tensor = img_pred_tensor[0, :, :, 0] plt.imshow(img_pred_tensor, cmap = 'gray') # + [markdown] deletable=true editable=true # ## Pooling layers # + deletable=true editable=true from keras.layers import MaxPool2D, AvgPool2D # + deletable=true editable=true model = Sequential() model.add(MaxPool2D(pool_size = (5, 5), input_shape = (512, 512, 1))) model.compile('adam', 'mse') # + deletable=true editable=true img_pred = model.predict(img_tensor)[0, :, :, 0] # + deletable=true editable=true plt.imshow(img_pred, cmap = 'gray') # + deletable=true editable=true model = Sequential() model.add(AvgPool2D(pool_size = (5, 5), input_shape = (512, 512, 1))) model.compile('adam', 'mse') # + deletable=true editable=true img_pred = model.predict(img_tensor)[0, :, :, 0] plt.imshow(img_pred, cmap = 'gray') # + [markdown] deletable=true editable=true # ## Final architecture # + deletable=true editable=true X_train = X_train.reshape(-1, 28, 28, 1) X_test = X_test.reshape(-1, 28, 28, 1) # + deletable=true editable=true X_train.shape # + deletable=true editable=true from keras.layers import Flatten, Activation # + deletable=true editable=true K.clear_session() model = Sequential() model.add(Conv2D(8, (3, 3), input_shape = (28, 28, 1))) model.add(MaxPool2D(pool_size = (2, 2))) model.add(Activation('relu')) model.add(Flatten()) model.add(Dense(32, activation = 'relu')) model.add(Dense(10, activation = 'softmax')) model.compile(loss = 'categorical_crossentropy', optimizer = 'rmsprop', metrics = ['accuracy']) # + deletable=true editable=true model.summary() # + deletable=true editable=true model.fit(X_train, y_train_cat, batch_size = 12, epochs = 2, verbose = 2, validation_split = 0.3) # + deletable=true editable=true evaluated = model.evaluate(X_test, y_test_cat) # + deletable=true editable=true evaluated[1] * 100 # + [markdown] deletable=true editable=true # ### Exercise 1 # You've been hired by a shipping company to overhaul the way they route mail, parcels and packages. They want to build an image recognition system capable of recognizing the digits in the zipcode on a package, so that it can be automatically routed to the correct location. # You are tasked to build the digit recognition system. Luckily, you can rely on the MNIST dataset for the intial training of your model! # # Build a deep convolutional neural network with at least two convolutional and two pooling layers before the fully connected layer. # # - Start from the network we have just built # - Insert a `Conv2D` layer after the first `MaxPool2D`, give it 64 filters. 
# - Insert a `MaxPool2D` after that one # - Insert an `Activation` layer # - retrain the model # - does performance improve? # - how many parameters does this new model have? More or less than the previous model? Why? # - how long did this second model take to train? Longer or shorter than the previous model? Why? # - did it perform better or worse than the previous model? # + deletable=true editable=true X_train.shape, X_test.shape, y_train_cat.shape, y_test_cat.shape # + deletable=true editable=true from keras.layers import Input, Dense, Conv2D, MaxPool2D, Activation, Flatten import keras.backend as K from keras.models import Model from keras.optimizers import Adam # + deletable=true editable=true K.clear_session() inp = Input(shape = (28, 28, 1 )) net = Conv2D(filters = 64, kernel_size = (2, 2), activation = 'relu', padding = 'valid')(inp) net = MaxPool2D(pool_size = (2, 2), strides = (2, 2), padding = 'valid')(net) net = Conv2D(filters = 8, kernel_size = (2, 2), activation = 'relu', padding = 'valid')(net) net = MaxPool2D(pool_size = (2, 2), strides = (2, 2), padding = 'valid')(net) net = Activation(activation = 'relu')(net) net = Flatten()(net) prediction = Dense(10, activation = 'softmax')(net) # + deletable=true editable=true model = Model(inputs = inp, outputs = prediction) model.compile(optimizer = Adam(), loss = 'categorical_crossentropy', metrics = ['accuracy']) # + deletable=true editable=true model.summary() # + deletable=true editable=true model.fit(X_train, y_train_cat, batch_size = 50, validation_split = 0.2, epochs = 5, verbose = 2) # + [markdown] deletable=true editable=true # ### Exercise 2 # # Pleased with your performance with the digits recognition task, your boss decides to challenge you with a harder task. Their online branch allows people to upload images to a website that generates and prints a postcard that is shipped to destination. Your boss would like to know what images people are loading on the site in order to provide targeted advertising on the same page, so he asks you to build an image recognition system capable of recognizing a few objects. Luckily for you, there's a dataset ready made with a collection of labeled images. This is the [Cifar 10 Dataset](http://www.cs.toronto.edu/~kriz/cifar.html), a very famous dataset that contains images for 10 different categories: # # - airplane # - automobile # - bird # - cat # - deer # - dog # - frog # - horse # - ship # - truck # # In this exercise we will reach the limit of what you can achieve on your laptop and get ready for the next session on cloud GPUs. # # Here's what you have to do: # - load the cifar10 dataset using `keras.datasets.cifar10.load_data()` # - display a few images, see how hard/easy it is for you to recognize an object with such low resolution # - check the shape of X_train, does it need reshape? # - check the scale of X_train, does it need rescaling? # - check the shape of y_train, does it need reshape? # - build a model with the following architecture, and choose the parameters and activation functions for each of the layers: # - conv2d # - conv2d # - maxpool # - conv2d # - conv2d # - maxpool # - flatten # - dense # - output # - compile the model and check the number of parameters # - attempt to train the model with the optimizer of your choice. How fast does training proceed? # - If training is too slow (as expected) stop the execution and move to the next session! 
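# A short sketch for the "display a few images" step of the exercise above.
# It loads its own copy of CIFAR-10 (the variable names X_show/y_show are just
# illustrative) and uses the ten category names listed in the exercise text.

# + deletable=true editable=true
from keras.datasets import cifar10

(X_show, y_show), _ = cifar10.load_data()
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

plt.figure(figsize = (8, 4))
for i in range(8):
    plt.subplot(2, 4, i + 1)
    plt.imshow(X_show[i])                    # 32x32x3 uint8 image
    plt.title(class_names[int(y_show[i][0])])
    plt.axis('off')
plt.show()
# -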
# + deletable=true editable=true from keras.datasets import cifar10 # + deletable=true editable=true (X_train, y_train), (X_test, y_test) = cifar10.load_data() # - X_train.shape X_train[0, :, :, 0] X_train[0, :, :, 1] X_train[0, :, :, 2] np.max(X_train[0, :, :, :]) np.min(X_train[0, :, :, :]) # + X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train = X_train / 255.0 X_test = X_test / 255.0 # - from keras.utils import to_categorical y_train_cat = to_categorical(y_train) y_test_cat = to_categorical(y_test) y_train_cat[:5] print(y_train_cat.shape) y_test_cat[:5] print(y_test_cat.shape) # + '''conv2d conv2d maxpool conv2d conv2d maxpool flatten dense output ''' from keras.models import Model from keras.layers import Input, Dense, Conv2D, MaxPool2D, Flatten from keras.regularizers import l2 # Inputs 32 x 32 x 3 inp = Input(shape = (32, 32, 3)) net = Conv2D(filters = 8, kernel_size = (2, 2), padding = 'same')(inp) net = Conv2D(filters = 8, kernel_size = (2, 2), padding = 'same')(net) net = MaxPool2D(pool_size = (2, 2), padding = 'valid')(net) net = Conv2D(filters = 8, kernel_size = (2, 2), padding = 'same')(net) net = Conv2D(filters = 8, kernel_size = (2, 2), padding = 'same')(net) net = MaxPool2D(pool_size = (2, 2), padding = 'valid')(net) net = Flatten()(net) net = Dense(units = 10, activation = 'relu')(net) prediction = Dense(units = 10, activation = 'softmax')(net) # - model = Model(inputs = [inp], outputs = [prediction]) from keras.optimizers import Adam model.compile(optimizer = Adam(), loss = 'categorical_crossentropy', metrics = ['accuracy']) model.summary() model.fit(X_train, y_train_cat, batch_size = 50, validation_split = 0.2, epochs = 5, verbose = 2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import numpy as np import pandas as pd from tqdm import tqdm from sklearn.preprocessing import LabelEncoder # - pd.options.display.max_columns = None os.listdir('../input') train = pd.read_csv('../input/train.csv.gz') train.head() col = [c for c in train.columns if c not in ['card_id']] col hist = pd.read_csv('../input/historical_transactions.csv.gz', parse_dates=['purchase_date']) hist.nunique() hist.head() hist[hist['merchant_id'].isnull()].head() hist['category_3'] = hist['category_3'].fillna('D') for c in tqdm(['authorized_flag', 'category_1', 'category_3']): print(c) le = LabelEncoder() le.fit(list(set(hist[c].values))) hist[c] = le.transform(hist[c].values) hist.head() hist['authorized_flag'].unique() stats = ['min', 'max', 'mean', 'median', 'std', 'var', 'skew'] num_aggregations = { # 'authorized_flag': 'sum', # 'category_1': stats, 'installments': stats, # 'category_3': stats, 'month_lag': stats, 'purchase_amount': stats, # 'category_2': stats, } feature = hist_train.groupby(['card_id']).agg(num_aggregations).reset_index() feature.head() feature.columns = [f'{c[0]}_{c[1]}' for c in feature.columns] feature = feature.rename(columns={'card_id_': 'card_id'}) feature.head() feature = hist_train.groupby(['card_id', 'mer']).agg(num_aggregations).reset_index() def get_dummies(df): """ binary would be drop_first """ col = df.select_dtypes('O').columns.tolist() nunique = df[col].nunique() col_binary = nunique[nunique==2].index.tolist() [col.remove(c) for c in col_binary] df = pd.get_dummies(df, columns=col) df = pd.get_dummies(df, columns=col_binary, drop_first=True) df.columns = 
[c.replace(' ', '-') for c in df.columns] return df hist_category = hist[['card_id', 'category_1', 'category_2', 'category_3', 'authorized_flag',]] hist_category.head() col = hist.select_dtypes('O').columns.tolist() col col = ['category_2', 'category_3'] col_binary = ['category_1', 'authorized_flag'] feature = pd.get_dummies(hist_category, columns=col) feature = pd.get_dummies(feature, columns=col_binary, drop_first=True) feature.head() argss = [ {'col': ['category_2', 'category_3'], 'col_binary': ['authorized_flag', 'category_1']} ] argss[0]['col'] feature = feature.add_prefix('pref_').rename(columns={'pref_pref_card_id': 'card_id'}) feature.head() f102 = pd.read_pickle('../feature/f102_train.pkl') f102.head() f101 = pd.read_pickle('../feature/f101_train.pkl') f101.head() f101.head() args = { 'col': ['category_2', 'category_3'], 'col_binary': ['authorized_flag', 'category_1'] } argss[0] # + KEY = 'card_id' train = pd.read_csv('../input/train.csv.gz')[[KEY]] test = pd.read_csv('../input/test.csv.gz')[[KEY]] hist = pd.read_csv('../input/historical_transactions.csv.gz') # + # col, col_binary = args['col'], args['col_binary'] # feature = hist[[KEY] + col + col_binary] # feature = pd.get_dummies(feature, columns=col) feature = pd.get_dummies(feature, columns=col_binary, drop_first=True) # feature = feature.add_prefix(PREF).rename(columns={PREF+KEY: KEY}) # feature[feature.card_id.isin(train)].to_pickle(f'../feature/{PREF}train.pkl') # feature[feature.card_id.isin(test)].to_pickle(f'../feature/{PREF}test.pkl') # - col col_binary feature.head() train.head() hist.columns train.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Support Vector Regression # # Although not amazing, support vector regression (SVR) is better at dealing with uneven spatial data than linear regression for example so it will hopefully produce a good model. In order to model using an SVR, we will first load the data. # + ### import libraries ### import pandas as pd from sklearn.svm import SVR from sklearn.metrics import r2_score, mean_squared_error from sklearn.model_selection import GridSearchCV # + ### importing data ### # features features_train = pd.read_csv('data/features_train.csv', index_col = 0) features_test = pd.read_csv('data/features_test.csv', index_col = 0) # target target_train = pd.read_csv('data/target_train.csv', index_col = 0) target_test = pd.read_csv('data/target_test.csv', index_col = 0) # - # Now we will model using SVR but we will use a grid search to find the optimal parameters. # + ### building grid search ### # choosing SVR model = SVR() # parameters to search through parameters = {'kernel' : ('linear', 'rbf'), 'C' : [0.01, 0.1, 1, 10, 100, 1000]} # fitting grid search clf = GridSearchCV(model, parameters) clf.fit(features_train, target_train.values.ravel()) # showing best parameters clf.best_params_ # - # We found the best parameters for SVR. # + ### checking metrics ### # choosing best parameters model = SVR(C = 1, kernel = 'rbf') # fitting model model.fit(features_train, target_train.values.ravel()) # predicted target pred = model.predict(features_test) print('RMSE is:', mean_squared_error(target_test, pred, squared = False), 'and the r2 is:', r2_score(target_test, pred)) # - # SVR does not act as a good model as it is worse than a horizontal line. 
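# To make the "worse than a horizontal line" comparison concrete, the cell
# below is a small baseline sketch that predicts the training-set mean for
# every test point, reusing the frames and metrics imported above.

# +
### baseline: predict the training mean for every test point ###
baseline_pred = [target_train.values.ravel().mean()] * len(target_test)

print('Baseline RMSE is:', mean_squared_error(target_test, baseline_pred, squared = False),
      'and the r2 is:', r2_score(target_test, baseline_pred))
# -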
# ## Conclusion # SVR is not a good model for predicting forest fire damage. # * r2 = -0.1308 # * RMSE = 1.0634 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Import the libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline #import sklearn package details from sklearn.model_selection import train_test_split #GridSearchCV is imported later from sklearn.model_selection from sklearn.preprocessing import StandardScaler from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.metrics import classification_report, confusion_matrix #load the dataset for Breast Cancer Detection # using the UCI repository data url = "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data" df = pd.read_csv(url) df.head() # giving the names of the cols as the data has no col. names. There are 11 cols excluding the ID names = ['id','clump_thickness','uniform_cell_size','uniform_cell_shape', 'marginal_adhesion','single_epithelial_size','bare_nuclei', 'bland_chromatin','normal_nucleoli','mitoses','class'] df = pd.read_csv(url, names=names) df.head() #performing data exploration to understand and preprocess the data #using a heat map to see if there are blanks in the dataset #sns.heatmap(df.isnull(), yticklabels = False, cbar = False, cmap = 'viridis') -- df.info() also shows there are no nulls df.info() #barenuclei col has some items that are ?. Need to be replaced df.isin(['?']).any() df[df.eq('?').any(1)] #.eq is same as == operator # replacing ? with -99999 df.replace('?',-99999, inplace = True) df.isin(['?']).any() #print the shape of the dataset print(df.shape) #lets drop the id col. as ML wont be needing this df.drop(['id'],axis = 1, inplace = True) df.head() print(df.shape) ###PERFORMING DATA VISUALIZATION print(df.iloc[0]) #shows first row of the dataset. Class value = 2 Benign, 4=Melignant print(df.describe()) # if I show include ='all' then it will also include bare_nuclei col as that has NaN values #Good part is all the features are standardized between 1 and 10. So, I can directly use KNN without using StandardScaler #Plotting histogram for each variable or col df.hist(figsize =(10,10)) plt.show() # + #sns.pairplot(df) #scatter_matrix(df,figsize = (18,18)) temp = df.drop('bare_nuclei',axis = 1) #pairplot was errorring for bare_nuclei as it has imputed values of -99999 sns.pairplot(temp) #data does not seem to be having any standout relationship between the features to classify #the cell class to be melignant or benign # + #Implementing the ML models X = df.drop('class',axis =1) y = df['class'] # - X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=8) #IMPLEMENTING KNN knn = KNeighborsClassifier(n_neighbors =1) knn.fit(X_train, y_train) pred = knn.predict(X_test) print (classification_report (y_test, pred)) #Using elbow method to predict k value for better results error_rate=[] for i in range(1,40): knn = KNeighborsClassifier(n_neighbors =i) knn.fit(X_train, y_train) pred_i = knn.predict(X_test) error_rate.append(np.mean(pred_i != y_test)) plt.figure(figsize=(10,8)) plt.plot(range(1,40), error_rate, color = 'blue',linestyle ='--',marker='o') plt.title('Error Rate vs. 
K Value') plt.xlabel('K') plt.ylabel('Error Rate') #From above seems like 11 is a good value for K. Using K=11 and checking for the results -> f1 score improved from 95 to 97 knn = KNeighborsClassifier(n_neighbors =11) knn.fit(X_train, y_train) pred = knn.predict(X_test) print (classification_report (y_test, pred)) #IMPLEMENTING SVC model = SVC() model.fit(X_train, y_train) pred_svc = model.predict(X_test) print (confusion_matrix (y_test, pred_svc)) print ('\n') print (classification_report (y_test, pred_svc)) #KNN has better results vs SVC. Lets try to tune C and gamma values and see if performance improves. #Precision has a much lower score from sklearn.model_selection import GridSearchCV param_grid = {'C':[0.1,1,10,100,1000], 'gamma':[1,0.1,0.01,0.001,0.0001]} grid = GridSearchCV(SVC(),param_grid,verbose =3) grid.fit(X_train, y_train) grid.best_params_ grid.best_estimator_ grid_predictions = grid.predict(X_test) print (confusion_matrix (y_test, grid_predictions)) print ('\n') print (classification_report (y_test, grid_predictions)) # + ##Now we see SVM is very close to KNN # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import malaya with open('dumping-cleaned-common-crawl.txt') as fopen: data = fopen.read().split('\n') len(data) # + import re from unidecode import unidecode alphabets = '([A-Za-z])' prefixes = ( '(Mr|St|Mrs|Ms|Dr|Prof|Capt|Cpt|Lt|Mt|Puan|puan|Tuan|tuan|sir|Sir)[.]' ) suffixes = '(Inc|Ltd|Jr|Sr|Co|Mo)' starters = '(Mr|Mrs|Ms|Dr|He\s|She\s|It\s|They\s|Their\s|Our\s|We\s|But\s|However\s|That\s|This\s|Wherever|Dia|Mereka|Tetapi|Kita|Itu|Ini|Dan|Kami|Beliau|Seri|Datuk|Dato|Datin|Tuan|Puan)' acronyms = '([A-Z][.][A-Z][.](?:[A-Z][.])?)' websites = '[.](com|net|org|io|gov|me|edu|my)' another_websites = '(www|http|https)[.]' digits = '([0-9])' before_digits = '([Nn]o|[Nn]ombor|[Nn]umber|[Kk]e|=|al)' month = '([Jj]an(?:uari)?|[Ff]eb(?:ruari)?|[Mm]a(?:c)?|[Aa]pr(?:il)?|Mei|[Jj]u(?:n)?|[Jj]ula(?:i)?|[Aa]ug(?:ust)?|[Ss]ept?(?:ember)?|[Oo]kt(?:ober)?|[Nn]ov(?:ember)?|[Dd]is(?:ember)?)' def split_into_sentences(text, minimum_length = 5): text = text.replace('\x97', '\n') text = '. '.join([s for s in text.split('\n') if len(s)]) text = text + '.' text = unidecode(text) text = ' ' + text + ' ' text = text.replace('\n', ' ') text = re.sub(prefixes, '\\1', text) text = re.sub(websites, '\\1', text) text = re.sub(another_websites, '\\1', text) text = re.sub('[,][.]+', '', text) if '...' in text: text = text.replace('...', '') if 'Ph.D' in text: text = text.replace('Ph.D.', 'PhD') text = re.sub('[.]\s*[,]', ',', text) text = re.sub(before_digits + '\s*[.]\s*' + digits, '\\1\\2', text) text = re.sub(month + '[.]\s*' + digits, '\\1\\2', text) text = re.sub('\s' + alphabets + '[.][ ]+', ' \\1 ', text) text = re.sub(acronyms + ' ' + starters, '\\1 \\2', text) text = re.sub( alphabets + '[.]' + alphabets + '[.]' + alphabets + '[.]', '\\1\\2\\3', text, ) text = re.sub( alphabets + '[.]' + alphabets + '[.]', '\\1\\2', text ) text = re.sub(' ' + suffixes + '[.][ ]+' + starters, ' \\1 \\2', text) text = re.sub(' ' + suffixes + '[.]', ' \\1', text) text = re.sub(' ' + alphabets + '[.]', ' \\1', text) text = re.sub(digits + '[.]' + digits, '\\1\\2', text) if '”' in text: text = text.replace('.”', '”.') if '"' in text: text = text.replace('."', '".') if '!' in text: text = text.replace('!"', '"!') if '?' 
in text: text = text.replace('?"', '"?') text = text.replace('.', '.') text = text.replace('?', '?') text = text.replace('!', '!') text = text.replace('', '.') sentences = text.split('') sentences = sentences[:-1] sentences = [s.strip() for s in sentences if len(s) > minimum_length] return sentences split_into_sentences('Pembolehubah yang ketiga adalah niat yang merujuk kepada niat seseorang dalam melakukan pelbagai tingkah laku ( Fishbein et al . 1975 : 12 ), ') # - data[11000: 12000] # + import malaya fast_text = malaya.language_detection.fasttext() # + VOWELS = 'aeiou' PHONES = ['sh', 'ch', 'ph', 'sz', 'cz', 'sch', 'rz', 'dz'] punctuations = '!@#$%^&*()_+=-' def isword_malay(word): if re.sub('[^0-9!@#$%\^&*()-=_\+{}\[\];\':",./<>?\|~`\\\ ]+', '', word) == word: return True if not any([c in VOWELS for c in word]): return False return True def isword_english(word): if word: consecutiveVowels = 0 consecutiveConsonents = 0 for idx, letter in enumerate(word.lower()): vowel = True if letter in VOWELS else False if idx: prev = word[idx - 1] prevVowel = True if prev in VOWELS else False if not vowel and letter == 'y' and not prevVowel: vowel = True if prevVowel != vowel: consecutiveVowels = 0 consecutiveConsonents = 0 if vowel: consecutiveVowels += 1 else: consecutiveConsonents += 1 if consecutiveVowels >= 3 or consecutiveConsonents > 3: return False if consecutiveConsonents == 3: subStr = word[idx - 2 : idx + 1] if any(phone in subStr for phone in PHONES): consecutiveConsonents -= 1 continue return False return True # - def filter_string(string, min_len = 15): if len(string) < min_len: return '' string = re.sub( 'http\S+|www.\S+', '', ' '.join( [ word for word in string.split() if word.find('#') < 0 and word.find('@') < 0 ] ), ) string = [w for w in string.split() if isword_malay(w.lower())] string = ' '.join(string) if len(string) > 2: if fast_text.predict([string])[0] == 'other': return '' else: return string else: return string def loop(strings): results = [] for string in tqdm(strings): no = string[0] results.append((no, filter_string(string[1]))) return results # + import cleaning from tqdm import tqdm temp = [(no, s) for no, s in enumerate(data)] results = cleaning.multiprocessing(temp, loop) # + # %%time results = sorted(results, key=lambda x: x[0]) results = [r[1] for r in results] # - results[:1000] with open('filtered-dumping-cleaned-common-crawl.txt', 'w') as fopen: fopen.write('\n'.join(results)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Predicting Fuel Efficiency of Vehicles # In this series, we'd be going from data collection to deploying the Machine Learning model: # # Data Collection - we are using the classic Auto MPG dataset from UCI ML Repository. # Define Problem Statement - We'll frame the problem based on the dataset description and initial exploration. # EDA - Carry our exploratory analysis to figure out the important features and creating new combination of features. # Data Preparation - Using step 4, create a pipeline of tasks to transform the data to be loaded into our ML models. # Selecting and Training ML models - Training a few models to evaluate their predictions using cross-validation. # Hyperparameter Tuning - Fine tune the hyperparameters for the models that showed promising results. 
# Deploy the Model using a web service - Using Flask web framework to deploy our trained model on Heroku # + ##importing a few general use case libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings('ignore') # + # reading the .data file using pandas cols = ['MPG','Cylinders','Displacement','Horsepower','Weight', 'Acceleration', 'Model Year', 'Origin'] df = pd.read_csv('auto-mpg.data', names=cols, na_values = "?", comment = '\t', sep= " ", skipinitialspace=True) data = df.copy() # - data.sample(20) # # Problem Statement: # The data contains MPG variable which is continuous data and tells us about the efficiency of fuel consumption of a vehicle in 70s and 80s. # # Our aim here is to predict the MPG value for a vehicle given we have other attributes of that vehicle # # Step 3: Exploratory Data Analysis # 1. Check for Data type of columns # 2. Check for null values. # 3. Check for outliers # 4. Look for the category distribution in categorical columns # 5. Plot for correlation # 6. Look for new variables ##checking the data info data.info() ##checking for all the null values data.isnull().sum() ##summary statistics of quantitative variables data.describe() sns.boxplot(x=data['Horsepower']) ##imputing the values with median median = data['Horsepower'].median() data['Horsepower'] = data['Horsepower'].fillna(median) data.info() ##category distribution data["Cylinders"].value_counts() / len(data) data['Origin'].value_counts() ##pairplots to get an intuition of potential correlations sns.pairplot(data[["MPG", "Cylinders", "Displacement", "Weight", "Horsepower"]], diag_kind="kde") # # Setting aside Test Set # + # set aside the test data from sklearn.model_selection import train_test_split train_set, test_set = train_test_split(data, test_size=0.2, random_state=42) test_set.shape # - train_set['Cylinders'].value_counts() / len(train_set) test_set["Cylinders"].value_counts() / len(test_set) # # Stratified Sampling # + from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(data, data["Cylinders"]): strat_train_set = data.loc[train_index] strat_test_set = data.loc[test_index] # - strat_test_set.shape ##checking for cylinder category distribution in training set strat_train_set['Cylinders'].value_counts() / len(strat_train_set) ##checking for cylinder category distribution in testing set strat_test_set["Cylinders"].value_counts() / len(strat_test_set) ##converting integer classes to countries in Origin column train_set['Origin'] = train_set['Origin'].map({1: 'India', 2: 'USA', 3 : 'Germany'}) train_set.sample(10) ##one hot encoding train_set = pd.get_dummies(train_set, prefix='', prefix_sep='') train_set.head() data = strat_train_set.copy() # # Checking correlation matrix w.r.t. MPG corr_matrix = data.corr() corr_matrix['MPG'].sort_values(ascending=False) # # Testing new variables by checking their correlation w.r.t. MPG # 1. Displacement on Power # 2. Weight on cylinder # 3. Acceleration on power # 4. Acceleration on cylinder # + ## testing new variables by checking their correlation w.r.t. 
MPG data['displacement_on_power'] = data['Displacement'] / data['Horsepower'] data['weight_on_cylinder'] = data['Weight'] / data['Cylinders'] data['acceleration_on_power'] = data['Acceleration'] / data['Horsepower'] data['acceleration_on_cyl'] = data['Acceleration'] / data['Cylinders'] corr_matrix = data.corr() corr_matrix['MPG'].sort_values(ascending=False) # - # # Data Preparation # 1. Handling Categorical Functions - OneHotEncoder # 2. Data Cleaning - Imputer # 3. Attribute Addition - Adding custom transformation # 4. Setting up Data Transformation Pipeline for numerical and categorical column. # + ##handling missing values from sklearn.impute import SimpleImputer imputer = SimpleImputer(strategy="median") imputer.fit(data) # - imputer.statistics_ data.median().values X = imputer.transform(data) data_tr = pd.DataFrame(X, columns=data.columns, index=data.index) # # Segregating Target and Feature variables data = strat_train_set.drop("MPG", axis=1) data_labels = strat_train_set["MPG"].copy() data # # Preprocessing the Origin Column def preprocess_origin_cols(df): df["Origin"] = df["Origin"].map({1: "India", 2: "USA", 3: "Germany"}) return df data_tr = preprocess_origin_cols(data) data_tr.head() # # One Hot Encoding the Origin Column data_tr.info() ##isolating the origin column data_cat = data_tr[["Origin"]] data_cat.head() # + ##onehotencoding the categorical values from sklearn.preprocessing import OneHotEncoder cat_encoder = OneHotEncoder() data_cat_1hot = cat_encoder.fit_transform(data_cat) data_cat_1hot # returns a sparse matrix # - data_cat_1hot.toarray()[:5] cat_encoder.categories_ # # Handling Missing values using SimpleImputer ##segregating the numerical columns num_data = data.iloc[:, :-1] num_data.info() # + ##handling missing values from sklearn.impute import SimpleImputer imputer = SimpleImputer(strategy="median") imputer.fit(num_data) # - ##median of all the columns from imputer imputer.statistics_ ##median from pandas dataframe - same data.median().values ##imputing the missing values by transforming the dataframe X = imputer.transform(num_data) X ##converting the 2D array back into a dataframe data_tr = pd.DataFrame(X, columns=num_data.columns, index=num_data.index) data_tr.info() # # Adding Attributes using BaseEstimator and Transformer num_data.head() # + from sklearn.base import BaseEstimator, TransformerMixin acc_ix, hpower_ix, cyl_ix = 4, 2, 0 class CustomAttrAdder(BaseEstimator, TransformerMixin): def __init__(self, acc_on_power=True): # no *args or **kargs self.acc_on_power = acc_on_power def fit(self, X, y=None): return self # nothing else to do def transform(self, X): acc_on_cyl = X[:, acc_ix] / X[:, cyl_ix] if self.acc_on_power: acc_on_power = X[:, acc_ix] / X[:, hpower_ix] return np.c_[X, acc_on_power, acc_on_cyl] return np.c_[X, acc_on_cyl] attr_adder = CustomAttrAdder(acc_on_power=True) data_tr_extra_attrs = attr_adder.transform(data_tr.values) data_tr_extra_attrs[0] # - # # Creating a Pipeline of tasks # + ##Using Pipeline class from sklearn.pipeline import Pipeline ##Using StandardScaler to scale all the numerical attributes from sklearn.preprocessing import StandardScaler numerics = ['float64', 'int64'] num_data = data_tr.select_dtypes(include=numerics) ##pipeline for numerical attributes ##imputing -> adding attributes -> scale them num_pipeline = Pipeline([ ('imputer', SimpleImputer(strategy="median")), ('attrs_adder', CustomAttrAdder()), ('std_scaler', StandardScaler()), ]) num_data_tr = num_pipeline.fit_transform(num_data) num_data_tr[0] # - # # 
Transforming Numerical and Categorical Attributes # + ##Transform different columns or subsets using ColumnTransformer from sklearn.compose import ColumnTransformer num_attrs = list(num_data) cat_attrs = ["Origin"] ##complete pipeline to transform ##both numerical and cat. attributes full_pipeline = ColumnTransformer([ ("num", num_pipeline, num_attrs), ("cat", OneHotEncoder(), cat_attrs), ]) prepared_data = full_pipeline.fit_transform(data) prepared_data[0] # + ##importing a few general use case libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import StratifiedShuffleSplit from sklearn.base import BaseEstimator, TransformerMixin from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import ColumnTransformer import warnings warnings.filterwarnings('ignore') # + # reading the .data file using pandas cols = ['MPG','Cylinders','Displacement','Horsepower','Weight', 'Acceleration', 'Model Year', 'Origin'] df = pd.read_csv('auto-mpg.data', names=cols, na_values = "?", comment = '\t', sep= " ", skipinitialspace=True) data = df.copy() split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_index, test_index in split.split(data, data["Cylinders"]): strat_train_set = data.loc[train_index] strat_test_set = data.loc[test_index] # - ##segregate the feature and target variable data = strat_train_set.drop("MPG", axis=1) data_labels = strat_train_set["MPG"].copy() data ##preprocess the Origin column in data def preprocess_origin_cols(df): df["Origin"] = df["Origin"].map({1: "India", 2: "USA", 3: "Germany"}) return df # + ##creating custom attribute adder class acc_ix, hpower_ix, cyl_ix = 4,2, 0 class CustomAttrAdder(BaseEstimator, TransformerMixin): def __init__(self, acc_on_power=True): # no *args or **kargs self.acc_on_power = acc_on_power def fit(self, X, y=None): return self # nothing else to do def transform(self, X): acc_on_cyl = X[:, acc_ix] / X[:, cyl_ix] if self.acc_on_power: acc_on_power = X[:, acc_ix] / X[:, hpower_ix] return np.c_[X, acc_on_power, acc_on_cyl] return np.c_[X, acc_on_cyl] # + def num_pipeline_transformer(data): ''' Function to process numerical transformations Argument: data: original dataframe Returns: num_attrs: numerical dataframe num_pipeline: numerical pipeline object ''' numerics = ['float64', 'int64'] num_attrs = data.select_dtypes(include=numerics) num_pipeline = Pipeline([ ('imputer', SimpleImputer(strategy="median")), ('attrs_adder', CustomAttrAdder()), ('std_scaler', StandardScaler()), ]) return num_attrs, num_pipeline def pipeline_transformer(data): ''' Complete transformation pipeline for both nuerical and categorical data. Argument: data: original dataframe Returns: prepared_data: transformed data, ready to use ''' cat_attrs = ["Origin"] num_attrs, num_pipeline = num_pipeline_transformer(data) full_pipeline = ColumnTransformer([ ("num", num_pipeline, list(num_attrs)), ("cat", OneHotEncoder(), cat_attrs), ]) prepared_data = full_pipeline.fit_transform(data) return prepared_data # - # # From raw data to processed data in 2 steps ##from raw data to processed data in 2 steps preprocessed_df = preprocess_origin_cols(data) prepared_data = pipeline_transformer(preprocessed_df) prepared_data prepared_data[0] # # Selecting and Training Models # 1. Linear Regression # 2. Decision Tree # 3. Random Forest # 4. 
SVM regressor # + from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(prepared_data, data_labels) # + ##testing the predictions with the sample_data = data.iloc[:5] sample_labels = data_labels.iloc[:5] sample_data_prepared = pipeline_transformer(sample_data) print("Prediction of samples: ", lin_reg.predict(sample_data_prepared)) # - print("Actual Labels of samples: ", list(sample_labels)) # # Mean Squared Error # + from sklearn.metrics import mean_squared_error mpg_predictions = lin_reg.predict(prepared_data) lin_mse = mean_squared_error(data_labels, mpg_predictions) lin_rmse = np.sqrt(lin_mse) lin_rmse # - # # Decision Tree # + from sklearn.tree import DecisionTreeRegressor tree_reg = DecisionTreeRegressor() tree_reg.fit(prepared_data, data_labels) # - mpg_predictions = tree_reg.predict(prepared_data) tree_mse = mean_squared_error(data_labels, mpg_predictions) tree_rmse = np.sqrt(tree_mse) tree_rmse # But no model is perfect, this means that our model has overfit the data to a great extent. # # We won't be touching out test data until we finalize our model. So, how do we check for what's happening? # # Model Evaluation using Cross Validation # Scikit-Learn’s K-fold cross-validation feature randomly splits the training set into K distinct subsets called folds, then it trains and evaluates the model K times, picking a different fold for evaluation every time and training on the other K-1 folds. # # The result is an array containing the K evaluation scores: # + from sklearn.model_selection import cross_val_score scores = cross_val_score(tree_reg, prepared_data, data_labels, scoring="neg_mean_squared_error", cv = 10) tree_reg_rmse_scores = np.sqrt(-scores) # - tree_reg_rmse_scores tree_reg_rmse_scores.mean() scores = cross_val_score(lin_reg, prepared_data, data_labels, scoring="neg_mean_squared_error", cv = 10) lin_reg_rmse_scores = np.sqrt(-scores) lin_reg_rmse_scores lin_reg_rmse_scores.mean() # # Random Forest model # + from sklearn.ensemble import RandomForestRegressor forest_reg = RandomForestRegressor() forest_reg.fit(prepared_data, data_labels) forest_reg_cv_scores = cross_val_score(forest_reg, prepared_data, data_labels, scoring='neg_mean_squared_error', cv = 10) forest_reg_rmse_scores = np.sqrt(-forest_reg_cv_scores) forest_reg_rmse_scores.mean() # - # # Support Vector Machine Regressor # + from sklearn.svm import SVR svm_reg = SVR(kernel='linear') svm_reg.fit(prepared_data, data_labels) svm_cv_scores = cross_val_score(svm_reg, prepared_data, data_labels, scoring='neg_mean_squared_error', cv = 10) svm_rmse_scores = np.sqrt(-svm_cv_scores) svm_rmse_scores.mean() # - # # Hyperparameter Tuning using GridSearchCV # + from sklearn.model_selection import GridSearchCV param_grid = [ {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]}, {'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]}, ] forest_reg = RandomForestRegressor() grid_search = GridSearchCV(forest_reg, param_grid, scoring='neg_mean_squared_error', return_train_score=True, cv=10, ) grid_search.fit(prepared_data, data_labels) # - grid_search.best_params_ # + cv_scores = grid_search.cv_results_ ##printing all the parameters along with their scores for mean_score, params in zip(cv_scores['mean_test_score'], cv_scores["params"]): print(np.sqrt(-mean_score), params) # - # # Checking Feature importance # + # feature importances feature_importances = grid_search.best_estimator_.feature_importances_ feature_importances # + extra_attrs = ["acc_on_power", "acc_on_cyl"] 
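##note on the next few lines: the attribute-name list is rebuilt by hand
##(numerical columns first, then the two engineered attributes), which assumes the
##same ordering that num_pipeline + CustomAttrAdder produce. The three one-hot
##Origin columns appended by the ColumnTransformer are not named here, so zip()
##silently drops their importances, and sorted() orders the pairs by attribute
##name rather than by importance value.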
numerics = ['float64', 'int64'] num_attrs = list(data.select_dtypes(include=numerics)) attrs = num_attrs + extra_attrs sorted(zip(attrs, feature_importances), reverse=True) # - # # Evaluating the entire system on Test Data # + final_model = grid_search.best_estimator_ X_test = strat_test_set.drop("MPG", axis=1) y_test = strat_test_set["MPG"].copy() X_test_preprocessed = preprocess_origin_cols(X_test) X_test_prepared = pipeline_transformer(X_test_preprocessed) final_predictions = final_model.predict(X_test_prepared) final_mse = mean_squared_error(y_test, final_predictions) final_rmse = np.sqrt(final_mse) # - final_rmse # # Creating a function to cover this entire flow def predict_mpg(config, model): if type(config) == dict: df = pd.DataFrame(config) else: df = config preproc_df = preprocess_origin_cols(df) prepared_df = pipeline_transformer(preproc_df) y_pred = model.predict(prepared_df) return y_pred # + ##checking it on a random sample vehicle_config = { 'Cylinders': [4, 6, 8], 'Displacement': [155.0, 160.0, 165.5], 'Horsepower': [93.0, 130.0, 98.0], 'Weight': [2500.0, 3150.0, 2600.0], 'Acceleration': [15.0, 14.0, 16.0], 'Model Year': [81, 80, 78], 'Origin': [3, 2, 1] } predict_mpg(vehicle_config, final_model) # - # # Save the Mode import pickle ##saving the model with open("model.bin", 'wb') as f_out: pickle.dump(final_model, f_out) f_out.close() # + ##loading the model from the saved file with open('model.bin', 'rb') as f_in: model = pickle.load(f_in) predict_mpg(vehicle_config, model) # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.preprocessing import OneHotEncoder from sklearn.base import BaseEstimator, TransformerMixin from sklearn.impute import SimpleImputer from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.compose import ColumnTransformer ##functions def preprocess_origin_cols(df): df["Origin"] = df["Origin"].map({1: "India", 2: "USA", 3: "Germany"}) return df acc_ix, hpower_ix, cyl_ix = 3, 5, 1 class CustomAttrAdder(BaseEstimator, TransformerMixin): def __init__(self, acc_on_power=True): # no *args or **kargs self.acc_on_power = acc_on_power def fit(self, X, y=None): return self # nothing else to do def transform(self, X): acc_on_cyl = X[:, acc_ix] / X[:, cyl_ix] if self.acc_on_power: acc_on_power = X[:, acc_ix] / X[:, hpower_ix] return np.c_[X, acc_on_power, acc_on_cyl] return np.c_[X, acc_on_cyl] def num_pipeline_transformer(data): numerics = ['float64', 'int64'] num_attrs = data.select_dtypes(include=numerics) num_pipeline = Pipeline([ ('imputer', SimpleImputer(strategy="median")), ('attrs_adder', CustomAttrAdder()), ('std_scaler', StandardScaler()), ]) return num_attrs, num_pipeline def pipeline_transformer(data): cat_attrs = ["Origin"] num_attrs, num_pipeline = num_pipeline_transformer(data) full_pipeline = ColumnTransformer([ ("num", num_pipeline, list(num_attrs)), ("cat", OneHotEncoder(), cat_attrs), ]) full_pipeline.fit_transform(data) return full_pipeline def predict_mpg(config, model): if type(config) == dict: df = pd.DataFrame(config) else: df = config preproc_df = preprocess_origin_cols(df) print(preproc_df) pipeline = pipeline_transformer(preproc_df) prepared_df = pipeline.transform(preproc_df) print(len(prepared_df[0])) y_pred = model.predict(prepared_df) return y_pred # + import pickle from flask import Flask, request, jsonify from model_files.ml_model import predict_mpg app = Flask('mpg_prediction') 
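##The routes defined below expose the pickled model as a small JSON API:
##  POST /predict  - body is a vehicle-config dict (same keys as vehicle_config above);
##                   returns {"mpg_prediction": [...]}
##  GET  /ping     - simple liveness check
##A hypothetical client call, for illustration only (host and port here are assumptions,
##not taken from this notebook):
##  import requests
##  response = requests.post("http://localhost:9696/predict", json=vehicle_config)
##  print(response.json())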
@app.route('/predict', methods=['POST']) def predict(): vehicle = request.get_json() print(vehicle) with open('./model_files/model.bin', 'rb') as f_in: model = pickle.load(f_in) f_in.close() predictions = predict_mpg(vehicle, model) result = { 'mpg_prediction': list(predictions) } return jsonify(result) @app.route('/ping', methods=['GET']) def ping(): return "Pinging Model!!" # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import matplotlib # %matplotlib inline # matplotlib.use('agg') # coding: utf-8 import csv import poppy # get_ipython().run_line_magic('pylab', 'inline --no-import-all') # matplotlib.rcParams['image.origin'] = 'lower' print(poppy.__version__) from poppy.sub_sampled_optics import subapertures, SH_WFS from poppy.poppy_core import PlaneType from matplotlib.colors import LogNorm import poppy.fwcentroid as fwcentroid from matplotlib import pyplot as plt import logging import astropy.units as u import copy import numpy as np logging.getLogger('poppy').setLevel(logging.INFO) #Can be logging.CRITICAL, logging.WARN, logging.INFO, logging.DEBUG for increasingly verbose output # + ## define parameters: wavelength = .635e-6 #red pixel_pitch = 2.2*u.um #both detectors have same pixel pitch lenslet_pitch = 150*u.um dm_size = lenslet_pitch*10 n_lenslets=int((dm_size.to(u.m)/lenslet_pitch.to(u.m)).value) print("n lenslets", n_lenslets) r_lenslet = lenslet_pitch/2. lenslet_focal_length = .0037*u.m pix_per_lenslet = int(lenslet_pitch/pixel_pitch) print(pix_per_lenslet) plate_scale = 1.0*u.rad/(lenslet_focal_length.to(u.m)) rad_pix = (plate_scale*pixel_pitch.to(u.m))/u.pix #radians per pixel plate_scale_converted = rad_pix.to(u.arcsec/u.pix).value #arcsec per pixel print(plate_scale_converted, plate_scale_converted*pix_per_lenslet*n_lenslets) ## define DM act_x = 2 act_y = 2 stroke = .3e-6 dm_actuator_pitch = dm_size/4 #450*u.um from BMC Multi-DM user manual dm = poppy.dms.ContinuousDeformableMirror(dm_shape=(4,4), actuator_spacing=dm_actuator_pitch, radius=dm_size/2, inclination_x =45) dm.set_actuator(act_x, act_y, stroke) shwfs_detector = poppy.Detector(plate_scale_converted, fov_pixels=pix_per_lenslet) dm.display() wf_flat = poppy.Wavefront(diam=dm_size, wavelength=wavelength, npix=68*n_lenslets) wf_flat *= poppy.CircularAperture(radius = dm_size/2) wf = poppy.Wavefront(diam=dm_size, wavelength=wavelength, npix=68*n_lenslets) wf *= poppy.CircularAperture(radius = dm_size/2) wf *= dm # - print(n_lenslets) shwfs = SH_WFS(lenslet_pitch= lenslet_pitch, lenslet_fl=lenslet_focal_length, pixel_pitch=pixel_pitch, n_lenslets = n_lenslets, circular = True, detector = shwfs_detector) # + #get flat centroids for wf reconstruction: shwfs.sample_wf(wf_flat) shwfs.get_psfs() flat_centroid_list = shwfs.get_centroids() #sample wf and propagate to detector: shwfs.sample_wf(wf) shwfs.get_psfs() # + # retrieve and display wavefront array: wf_array = shwfs.get_wavefront_array() plt.figure() plt.imshow(wf_array.intensity, cmap = 'gray') plt.colorbar(label = 'intensity') plt.xlabel('pixels') plt.ylabel('pixels') plt.title("SHWFS Lenslet Spots") # display single psf: single_psf = shwfs.wf_array[5,5] plt.figure() plt.imshow(single_psf.intensity, cmap = 'gray') plt.colorbar(label = 'intensity') plt.xlabel('pixels') plt.ylabel('pixels') plt.title("Single Lenslet PSF") # - #reconstruct wavefront: reconstruction = 
shwfs.reconstruct_wavefront(flat_centroid_list) # + #display result: plt.figure() plt.imshow(reconstruction.value) plt.colorbar(label = 'um') plt.title("Reconstructed Wavefront") #note: with the small number of spots across the DM # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Membuat model dengan pipeline # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import sklearn from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.preprocessing import MinMaxScaler, OneHotEncoder from sklearn.compose import ColumnTransformer from sklearn.impute import SimpleImputer # - df = pd.read_csv('breast-cancer-datasets.csv') df.head() df.isnull().sum() df.drop(columns=['Unnamed: 32','id'], inplace=True) df.head() df_baru = df.copy(deep=True) df_baru['diagnosis'] = df['diagnosis'].map({'M':1, 'B':0}) df_baru.head() correlation = df_baru.corr()['diagnosis'] correlation strong_corr = correlation[correlation >= 0.5] strong_corr strong_corr.keys().to_list() y = df_baru['diagnosis'] X = df_baru[strong_corr.keys().to_list()[1:]] X.shape , y.shape X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=42) X_train.dtypes X_train.shape, X_test.shape, y_train.shape, y_test.shape numerical_pipeline = Pipeline([ ('inputer', SimpleImputer(strategy='mean')), ('sclare', MinMaxScaler()) ]) from sklearn.svm import SVC pipeline = Pipeline([ ('prepo', numerical_pipeline), ('algo', SVC()) ]) # ## parameter Tuning parameter = { 'algo__C': [1,10,100,1000], 'algo__degree': range(1,5) } model = GridSearchCV(pipeline, param_grid=parameter, cv=3, n_jobs=-1, verbose=2) model.fit(X_train, y_train) model.best_params_ model.score(X_test, y_test) pd.DataFrame(model.cv_results_).sort_values('rank_test_score') # --- # jupyter: # jupytext: # formats: ipynb,py:light # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # SVM via Tensorflow # # Reference: # * [SMO优化算法(Sequential minimal optimization)](http://www.cnblogs.com/jerrylead/archive/2011/03/18/1988419.html) # * [SVM原理以及Tensorflow 实现SVM分类(附代码)](https://www.cnblogs.com/vipyoumay/p/7560061.html) # * [知乎实现SVM的SMO](https://zhuanlan.zhihu.com/p/29212107) # * [Latex Symbols](https://artofproblemsolving.com/wiki/index.php/LaTeX:Symbols) # ## 1. Theory # # Given $(X_i, y_i), y_i=+1/-1,$ we try to maximize # $$ \gamma = \frac{|W^T \cdot X + b|}{||W||}$$ # With restriction # $$ # \begin{cases} # W^T \cdot X_i + b \geq +1, & \text{if } y_i=1\\ # W^T \cdot X_i + b \leq -1, & \text{if } y_i=-1 # \end{cases} # $$ # ### 1.1 Equation Formalization # Using mathmetic refactoration, we get our **SVM algorithm goal formalized** as follow: # $$ # \begin{equation} # \max_{W,b} \frac{2}{||W||}, \\ # \text{ s.t. } y_i(W^T \cdot X_i + b) \geq 1, i = 1,2,...,m # \end{equation} # $$ # # Futhermore, $||w|| > 0$, we can equally use: # $$ # \begin{equation} # \min_{W,b} \frac{||W||^2}{2}, \\ # \text{ s.t. 
} y_i(W^T \cdot X_i + b) \geq 1, i = 1,2,...,m # \end{equation} # $$ # ### 1.2 Dual Problem # $$ L(W, b, \alpha) = \frac{1}{2}||W||^2 + \sum_{i=1}^{m} \alpha_i [1-y_i(W^T \cdot X_i + b)], \text{ }\alpha=(\alpha_1;\alpha_2;...;\alpha_m)$$ # Make partial devative for W and b, get as follow: # $$ W = \sum_{i=1}^{m} \alpha_i y_i x_i $$ # $$ 0 = \sum_{i=1}^{m} \alpha_i y_i $$ # # Then get into $ L(W, b, \alpha) $, we get SVM dual problem as follow: # $$ \max_{\alpha} W(\alpha) = \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m \alpha_i \alpha_j y_i y_j x_i^T x_j $$ # $$ \text{ s.t.} \sum_{i=1}^m \alpha_i y_i = 0, 0 \leq \alpha_i \leq C, i=1,2,...,m$$ # # * **Algorithm** # Repeat till converge: # 1. Select $\alpha_i$ and $\alpha_j$ to update next (using heuristic to make bigger progress) # 2. Reoptimize $W(\alpha)$ with respect to $\alpha_i$ and $\alpha_j$, while holding $\alpha_k, (k \not\equiv i, j)$ fixed # # * **Implementation** # $$ \alpha_1 y^{(1)} + \alpha_2 y^{(2)} = - \sum_{i=3}^m \alpha_i y^{(i)} $$ # make $ \alpha_1 y^{(1)} + \alpha_2 y^{(2)} = \zeta $, We analyze ($\alpha_1, \alpha_2$) as the following 2 cases: # # **Case 1**: $ y^{(1)} \cdot y^{(2)} = -1 $, we have # # # # $ L = \max(0, \alpha_2 - \alpha_1), H = \min(C, C + \alpha_2 - \alpha_1) $ # # **Case 2**: $ y^{(1)} \cdot y^{(2)} = +1 $, we have # # $ L = \max(0, \alpha_2 + \alpha_1 - C), H = \min(C, \alpha_2 + \alpha_1) $ # # Also summarize for **Case 1** and **Case 2**, $k$ is the same as $\zeta$ # # # Then we use $\alpha_2$ to represent $\alpha_1$: # # $ \alpha_1 = (\zeta - \alpha_2 y^{(2)}) y^{(1)} $ # # Put $ \alpha_1 $ into $W(\alpha)$, we get: # # $$ W(\alpha_1, \alpha_2, ..., \alpha_m) = W((\zeta - \alpha_2 y^{(2)}) y^{(1)} ,\alpha_2, ..., \alpha_m)$$ # # $W((\zeta - \alpha_2 y^{(2)}) y^{(1)} ,\alpha_2, ..., \alpha_m)$ is a kind of $a \alpha_2^2 + b \alpha_2 + c$, then use paritial deviation of $W(\alpha)$ to get $\alpha_2$, in symbol $\alpha_2| \frac{\partial W}{\partial \alpha_2}$. considering $ L \leq \alpha_2 \leq H$, we use $\alpha_2^{new,unclipped}$ to represent $\alpha_2| \frac{\partial W}{\partial \alpha_2}$, take case condition as follow: # # $$ # \alpha_2^{new} = # \begin{cases} # H, \text{ if } \alpha_2^{new,unclipped} > H \\ # \alpha_2^{new,unclipped}, \text{ if } L \leq \alpha_2^{new,unclipped} \leq H \\ # L, \text{ if } \alpha_2^{new,unclipped} < L \\ # \end{cases} # $$ # # Sovle wtih $\alpha$, get W and b to get model: # $$ f(x) = W^T \cdot X + b = \sum_{i=1}^m \alpha_i y_i x_i^T X + b $$ # # Satisfy **KKT (Karush-Kuhn-Tucker) condition** as follow: # $$ # \begin{cases} # \alpha_i \geq 0; \\ # y_i f(x_i) - 1 \geq 0; \\ # \alpha_i [y_i f(x_i) - 1] = 0. # \end{cases} # $$ # ### 1.3 Heursitic Search for Dual Problem # # [Platt 1988's SMO Paper](https://www.microsoft.com/en-us/research/publication/sequential-minimal-optimization-a-fast-algorithm-for-training-support-vector-machines/) use heurstic method to sovle Dual Problem. # # * **1. Original Problem Definition** # $$ u = W \cdot X - b $$ # 1. Original optimization problem: # $$ # \begin{equation} # \min_{W,b} \frac{||W||^2}{2}, \\ # \text{ s.t. } y_i(W^T \cdot X_i - b) \geq 1, i = 1,2,...,m # \end{equation} # $$ # # 2. 
Partial devation to get solution of original problem: # $$ W = \sum_{i=1}^N y_i \alpha_i X_i, b = W X_k - y_k \text{ for some } \alpha_k > 0 $$ # # * **Lagrangian conversion** # $$ # \min_{\alpha} \Psi(\alpha) = \min_{\alpha} \frac{1}{2}\sum_{i=1}^N \sum_{j=1}^N y_i y_j (x_i \cdot x_j) \alpha_i \alpha_j - \sum_{i=1}^N \alpha_i # $$ # $$ \forall i, \alpha_i \geq 0 $$ # $$ \sum_{i=1}^N y_i \alpha_i = 0 $$ # # * **2. Modified SVM problem for non-Linear splitting with penalities** # $$ # \min_{W,b, \xi} \frac{1}{2}||W||^2 + C \sum_{i=1}^N \xi_i # $$ # $$ \text{ subject to } y_i(W X_i - b) \geq 1 - \xi_i, \forall i $$ # $$ 0 \leq \alpha_i \leq C, \forall i $$ # # If taking Kernel function into consideration, we can have $u$ as: # $$ # \nu = \sum_{j=1}^N y_j \alpha_j K(x_j, x) - b # $$ # # * **Lagrangian conversion for Modified SVM problem** # $$ # \min_{\alpha} \Psi(\alpha) = \min_{\alpha} \frac{1}{2}\sum_{i=1}^N \sum_{j=1}^N y_i y_j K(x_i \cdot x_j) \alpha_i \alpha_j - \sum_{i=1}^N \alpha_i # $$ # $$ \forall i, \alpha_i \geq 0 $$ # $$ \sum_{i=1}^N y_i \alpha_i = 0 $$ # # **KKT condition** # $$ \alpha_i = 0 \iff y_i \nu_i \geq 1 $$ # $$ 0 < \alpha_i < C \iff y_i \nu_i = 1 $$ # $$ \alpha_i = C \iff y_i \nu_i \leq 1 $$ # ### 1.4 b Value for Dual Problem # # * As $\alpha_i >0$, we have $y_i f(x_i) = 1$, then such $(x_i, y_i)$ is a **supported vector**, make it as $(x_s, y_s)$. # # * For $\forall (x_s, y_s)$, they have fit for $y_i f(x_i) = 1$, summed together # $$ y_s (\sum_{i \in S} \alpha_i y_i X_i^T X_s + b) = 1, \text{ } S= \{i|\alpha_i > 0, i=1,2,...,m\} $$ # # * solve b value as follow: # $$ b = \frac{1}{|S|} \sum_{s \in S} (y_s - \sum_{i \in S} \alpha_i y_i X_i^T X_s) $$ # ## 2. Practise import matplotlib.pyplot as plt import numpy as np import tensorflow as tf from sklearn import datasets import matplotlib.pyplot as plt # %matplotlib inline # load data iris = datasets.load_iris() x_vals = np.array([[x[0], x[3]] for x in iris.data]) y_vals = np.array([1 if y == 0 else -1 for y in iris.target]) # + from sklearn.model_selection import KFold kf = KFold(n_splits=2, random_state=42) for train_index, test_index in kf.split(x_vals, y_vals): X_train, y_train, X_test, y_test = x_vals[train_index], y_vals[train_index], x_vals[test_index], y_vals[test_index] # - # ### 2.1 GD Tensorflow # ### Use W, b to estimate SVM boundary assert len(x_vals.shape) == 2 and len(y_vals.shape) == 1 # + from tensorflow.contrib.layers import xavier_initializer from tensorflow.losses import mean_squared_error from tensorflow.train import AdamOptimizer tf.reset_default_graph() X_data = tf.placeholder(tf.float32, shape=[None, x_vals.shape[1]]) y_target = tf.placeholder(tf.float32, shape=[None, 1]) W = tf.get_variable(shape=[x_vals.shape[1], 1], name="W", initializer=xavier_initializer()) b = tf.get_variable(shape=[1, 1], name="b", initializer=xavier_initializer()) output = tf.matmul(X_data, W) - b l2_norm = mean_squared_error(output, y_target) # - # $$ Loss = \max(0, 1 - \hat{y(i)} \cdot y(i)) + \alpha ||X \cdot W - b||^2 $$ loss = tf.reduce_mean(tf.maximum(0., 1. 
- output * y_target)) + 0.01 * l2_norm optimizer = AdamOptimizer(0.01).minimize(loss) # + batch_size = 1024 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(20000): rand_index = np.random.choice(len(X_train), size=batch_size) rand_x = X_train[rand_index] rand_y = np.transpose([y_train[rand_index]]) sess.run(optimizer, feed_dict={X_data: rand_x, y_target: rand_y}) [[a1], [a2]] = sess.run(W) [[b]] = sess.run(b) # - # ### Draw Boundary # + slope = -a2/a1 y_intercept = b/a1 best_fit = [] x1_vals = [d[1] for d in x_vals] for i in x1_vals: best_fit.append(slope*i+y_intercept) # Separate I. setosa setosa_x = [d[1] for i, d in enumerate(x_vals) if y_vals[i] == 1] setosa_y = [d[0] for i, d in enumerate(x_vals) if y_vals[i] == 1] not_setosa_x = [d[1] for i, d in enumerate(x_vals) if y_vals[i] == -1] not_setosa_y = [d[0] for i, d in enumerate(x_vals) if y_vals[i] == -1] plt.plot(setosa_x, setosa_y, 'o', label='I. setosa') plt.plot(not_setosa_x, not_setosa_y, 'x', label='Non-setosa') plt.plot(x1_vals, best_fit, 'r-', label='Linear Separator', linewidth=3) plt.ylim([0, 10]) plt.legend(loc='lower right') plt.title('Sepal Length vs Pedal Width') plt.xlabel('Pedal Width') plt.ylabel('Sepal Length') # - # ## 2.2 SMO Tensorflow # #### SMO update equation from [Platt 1988 MSRA] # # Details reasoning refer to [知乎SVM](https://zhuanlan.zhihu.com/p/29212107) and [Platt's SMO in appendix](https://www.microsoft.com/en-us/research/publication/sequential-minimal-optimization-a-fast-algorithm-for-training-support-vector-machines/) # # 1. L, H values # $$ # \begin{cases} # L = max(0, \alpha_2-\alpha_1), H=min(C, C+\alpha_2-\alpha_1) \text{ ,when }y_2 y_1 = -1 \\ # L = max(0, \alpha_2+\alpha_1-C), H=min(C, \alpha_2+\alpha_1) \text{ ,when }y_2 y_1 = 1 # \end{cases} # $$ # # 2. Partial devative utility $\eta$ # $$ # \eta = \kappa (x_1, x_1) + \kappa (x_2, x_2) - 2 \kappa (x_1, x_2) # $$ # When $\kappa$ is linear, we have # $$ # \kappa (x_1, x_2) = x_1^T \cdot x_2 # $$ # # 3. Quaradtic solution $\alpha_2^{new}$ # $$ # \alpha_2^{new} = \alpha_2 + \frac{y_2 (E_1 - E_2)}{\eta} # $$ # Where $E_i$ is defined as: # $$ # E_i = \nu_i - y_i, \nu_i = W \cdot X_i - b - y_i # $$ # # 4. Solution clip # $$ # \alpha_2^{new, clipped} = # \begin{cases} # H, \text{ if } \alpha_2^{new} \geq H \\ # \alpha_2^{new}, \text{ if } L \leq \alpha_2^{new} \leq H \\ # L, \text{ if } \alpha_2^{new} \leq H # \end{cases} # $$ # # 5. $\alpha_1^{new}$ update # $$ # s = y_1 y_2 # $$ # $$ # \alpha_1^{new} = \alpha_1 + s(\alpha_2 - \alpha_2^{new, clipped}) # $$ # # 6. Update parameters # $$ # f_1 = y_1(E_1 + b) - \alpha_1 \kappa(x_1, x_1) - s \alpha_2 \kappa(x_1, x_2) \\ # f_2 = y_2(E_2 + b) - s \alpha_1 \kappa(x_1, x_2) - \alpha_2 \kappa(x_2, x_2) \\ # L_1 = \alpha_1 + s(\alpha_2 - L) \\ # H_1 = \alpha_1 + s(\alpha_2 - H) \\ # \Psi_L = L_1 f_1 + \frac{1}{2} L_1^2 \kappa(x_1, x_1) + \frac{1}{2}L^2 \kappa(x_2, x_2) + sL L_1 \kappa(x_1, x_2) \\ # \Psi_H = H_1 f_1 + \frac{1}{2} H_1^2 \kappa(x_1, x_1) + \frac{1}{2}H^2 \kappa(x_2, x_2) + sH H_1 \kappa(x_1, x_2) # $$ # # 7. Heuristics for choosing which multipliers to optimize # # 8. Compute the threshold # * $b_1$ is valid when new $\alpha_1$ isn't at the bounds # $$ # b_1 = E_1 + y_1(\alpha_1^{new} - \alpha_1) \kappa(x_1, x_!) 
+ y_2 (\alpha_2^{new, clipped} - \alpha_2) \kappa(x_1, x_2) + b # $$ # * $b_2$ is valid when new $\alpha_2$ isn't at the bounds # $$ # b_2 = E_2 + y_1(\alpha_1^{new} - \alpha_1) \kappa(x_1, x_2) + y_2 (\alpha_2^{new, clipped} - \alpha_2) \kappa(x_2, x_2) + b # $$ # * both $b_1$ and $b_2$ are valid, they are equal; so using b from $b_1$ and $b_2$ # $$ # b = (b_1 + b_2) / 2 # $$ # # 9. Optimization for Linear SVMs # $$ # W^{new} = W + y_1 (\alpha_1^{new} - \alpha_1) x_1 + y_2 (\alpha_2^{new, clipped} - \alpha_2) x_2 # $$ # * **Basic implementation references** # - [while_loop for migration of SVM to tensorflow](05_basic/while_loop_in_tf.ipynb) # + import tensorflow as tf import numpy as np class TFSVM(): """ Using tensorflow to implement SMO-SVM 1. Paper: refer to https://www.microsoft.com/en-us/research/publication/sequential-minimal-optimization-a-fast-algorithm-for-training-support-vector-machines/. 2. Implementation via tensroflow: refer to https://blog.csdn.net/lilongsy/article/details/79391698. """ def __init__(self, max_iter=10000, kernel_type='linear', C=1.0, epsilon=1e-3): """[initializer for tensorflow SVM] Keyword Arguments: max_iter {int} -- [max iteration number] (default: {10000}) kernel_type {str} -- [kernel type] (default: {'linear'}) C {float} -- [penality term] (default: {1.0}) epsilon {float} -- [KKT boundary parameter to check if convergence] (default: {1e-3}) """ self._kernels = { "linear": self.kernel_linear } assert kernel_type in self._kernels self._max_iter = max_iter self._kernel_type = kernel_type self._C = C self._epsilon = epsilon self._model_available = False # initialize session and enter training session self._sess = tf.Session() self._sess.__enter__() def fit(self, X_train: np.array, y_train: np.array): """[training process for (X_train, y_train)], different from you seeing online. I'd like to use tensorflow for SMO update process, as it's actually analytical solution. Arguments: X_train {np.array} -- [X input data] y_train {np.array} -- [y output data, only +1/-1 for classification] """ def svm_weights(X_len: int, X_dim: int, y_dim: int, scope="svm"): """[setup svm variables weights] Arguments: X_len {[int]} -- [total number of X instance] X_dim {[int]} -- [dimension number of X] y_dim {[int]} -- [dimension number of y] Keyword Arguments: scope {str} -- [description] (default: {"svm"}) """ with tf.variable_scope(scope, reuse=tf.AUTO_REUSE): init = tf.initializers.xavier_initializer() W = tf.get_variable("W", shape=[X_dim, y_dim], initializer=init()) b = tf.get_variable("b", shape=[y_dim], initializer=init()) alphas = tf.random_uniform([X_len], tf.float32) * self._C return W, b, alphas def examineExample(i2: int): """[take training sample X_train[i2] as \alpha_2 to check \alpha_1] Arguments: i {int} -- [training sample index] Returns: cnt {int} - [if i2 is improved by heursitic search pair of relevant i1] """ cnt = 1 assert X_train.shape[0] == y_train.shape[0] ## 1. start training self._model_available = False ## 2. 
training process X_len, self._X_dim, self._y_dim = X_train.shape[0], X_train.shape[1], y_train[1] self._W, self._b, self._alphas = svm_weights(X_len, self._X_dim, self._y_dim) ## SMO for SVM train self._X = tf.placeholder(tf.float32, shape=[None, self._X_dim]) self._y = tf.placeholder(tf.float32, shape=[None, self._y_dim]) numChanged = 0; examineAll = True def smo_loop(numChanged: tf.Tensor, examineAll: tf.Tensor): """[loop processing for SMO] Arguments: numChanged {tf.Tensor} -- [if alpha_1 or alpha_2 is changed during the iteration] examineAll {tf.Tensor} -- [if all training sample is processed during the iteration] Returns: [type] -- [description] """ numChanged = tf.cond(examineAll, lambda: numChanged+1, lambda: numChanged+2) # just check if loop smo is working or not numChanged = tf.Print(numChanged, [numChanged], message="numChanged:") examineAll = tf.cond(examineAll, lambda: False, lambda: tf.cond(tf.equal(numChanged, 0), lambda: True, lambda: examineAll)) return numChanged, examineAll op = tf.while_loop(lambda numChanged, examineAll: tf.logical_or(numChanged > 0, examineAll), smo_loop, (numChanged, examineAll)) while numChanged > 0 or examineAll: numChanged = 0 if examineAll: # loop i over all traning examples raise NotImplementedError else: # loop i over examples where alpha is not 0 and not self._C raise NotImplementedError if examineAll: examineAll = False elif numChanged == 0: examineAll = True ## 3. after training self._model_available = True def score(self, X: np.array, y: np.array): """[return score of (X, y) based on svm trained model] Arguments: X {np.array} -- [feature array] y {np.array} -- [target value] Return: score {float} - [score of SVM model after evaluation of (X, y) pair] """ if not self._model_available: raise ValueError("SVM model isn't available") raise NotImplementedError def predict(self, X: np.array): """[predict target value given X as input] Arguments: X {np.array} -- [feature array] Return: y_pred {np.array} - [predicted target value] """ if not self._model_available: raise ValueError("SVM model isn't available") raise NotImplementedError ######################## # Available SVM kernels ######################## def kernel_linear(self, x1: tf.Tensor, x2: tf.Tensor): """[linear kernel method for SVM] Arguments: x1 {tf.Tensor} -- [x1 vector] x2 {tf.Tensor} -- [x2 vector] Returns: [tf.Tensor] -- [tf.matmul(x1.T, x2)] """ assert tf.shape(x1) == tf.shape(x2) return tf.matmul(tf.transpose(x1), x2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.6.13 64-bit (''fossworkshop'': conda)' # name: python3 # --- # # Vector Data Analysis # ## This notebook represents working with vector data in python # # Vector data is usually a tabular data coupled with location information. e.g. Data of all states in India ( This file will have some attribute data about states such as name, population, etc. along with one column of geometry containing location information). Vector data geometry can be divided in 3 major types: # # 1. Point Geometry # # Point Geometry consists of discrete location information such as *latitude, longitude* which can help us to identify the exact location of given feature. # # e.g - Location of bus stop, location of user, etc. # # 2. Line Geometry # # Line Geometry is collection of multiple *latitude, longitude* in an array which represents continuous path. # # e.g. - Centreline of road, River, Path created by user, etc. 
# # 3. Polygon Geometry # # Polygon Geometry is collection of multiple *latitude, longitude* in an array which represents continuous enclosed area. # # e.g. - Geometry of State, polygon of building, etc. # # ## Loading Data # # First step is to load the Data in to python, this data can be a file available on machine, data stored in database, or file hosted on some server # # ### 1. Loading Shapefile # # Loading all countries geometry (src: https://www.naturalearthdata.com/downloads/10m-cultural-vectors/) #Using geopandas to work with data import geopandas as gpd #Using Matplotlib for visualisation import matplotlib # %matplotlib inline # + #load it as a pandas dataframe with understanding on geometrical data countries = gpd.read_file('../data/ne_10m_admin_0_countries/ne_10m_admin_0_countries.shp') countries # - countries.plot() countries.head() countries.tail() # #### Understanding GeoDataFrame # # GeoDataFrame will always have geometry column, apart from that other columns will act as metadata. # # So `geopandas` = `pandas` + `geometry` # # Each column except geometry in the geopandas is of type `pandas.Series` , geometry is treated as `pandas.GeoSeries` print(type(countries.geometry)) print(type(countries.scalerank)) # Each geometry is a `shapely` Shape, thus we can perform all shapely methods on these geometries # # Checkout all available methods here https://shapely.readthedocs.io/en/stable/manual.html#predicates-and-relationships countries.geometry countries.geometry.centroid # ### 2. Loading Geojson # # Loading local geojson file rivers = gpd.read_file('../data/rivers.geojson') rivers # ### 3. Loading PostgreSQL # # Loading data from database # + import psycopg2 con = psycopg2.connect(database="postgres", user="postgres", password="", host="localhost") sql = "SELECT * FROM public.places" places = gpd.read_postgis(sql, con, geom_col='geom' ) places # - # ### 4. Import CSV # # Assuming that CSV has a geometry column that contains geometery in WKT format # + from shapely import wkt airport = gpd.read_file('../data/airport.csv') airport['geometry'] = airport['geom'].apply(wkt.loads) del airport['geom'] airport # - # ### 5. Creating geometry On the fly # # Create geodataframe from csv having columns as longitude and latitude, which will be used further to create geometery on the fly import pandas as pd df = pd.read_csv('../data/stadium.csv') stadium = gpd.GeoDataFrame( df, geometry=gpd.points_from_xy(df.lon, df.lat)) stadium # ### 6. 
Create Geodataframe manually # # User can also create Geodataframe in the notebook, using their own data # + from shapely.geometry import Point police = gpd.GeoDataFrame({ 'geometry': [Point(1, 1), Point(2, 2),Point(2, 1),Point(1, 2),Point(1.5, 2)], 'id': [1, 2,3,4,5], 'criminals': [12,34,112,41, 212]}) police # - police.to_html('police.html') # ## More about shapely # # ### How to create geometery # + from shapely.geometry import Polygon,Point,LineString Pt = Point(10,10) line = LineString([(0,0),(0,3),(3,0)]) poly = Polygon([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]]) # - Pt line poly # ### Geospatial analysis poly.touches(line) poly.contains(Pt) Pt.buffer(20).contains(poly) # ### Peoperties of shape # poly.area line.bounds line.length # ## More about Fiona # # Fiona is a python interface of GDAL/OGR library, Geopandas is a more easy to user wrapper import fiona places = fiona.open('../data/ne_10m_populated_places/ne_10m_populated_places.shp') places places.driver places.schema places.crs len(places) # ## Playing with GeoDataFrame # # ### Coordinate system # # Unline `shapely`, `geopandas` understands crs # # What CRS are important? # - CRS will make sense out of your data such as whether the units are degrees/meters # - Bringing all data in same CRS allows us to do spatial analysis with data # # # To check CRS of GeoDataframe # countries.crs # We can also set CRS for the GeoDataFrame which has no default CRS We can also set CRS for the GeoDataFrame which has no default CRS police = police.set_crs('epsg:4326') police.crs # We can also convert GeoDataFrame from one CRS to another police.crs police_3857 = police.to_crs(3857) police_3857 police_3857.crs # ### Merging # # 1. Atrribute based merge neighbor = pd.DataFrame({ 'id': [1, 2,3,4,5], 'neighbor_id': ['a1', 'a2','a3','c4','d5'], 'neighbor_name': ['andy','julio','true','skewd', 'tauras']}) neighbor updated_police = police.merge(neighbor, on='id') updated_police # 2. Spatial merge pd.set_option('max_columns', 100) airport = airport.set_crs('epsg:4326') airport.head() simple_countries = countries[['ADMIN','geometry']] simple_countries.head() # + airport_with_country = gpd.sjoin(airport, simple_countries, how="inner", op='intersects') # - airport_with_country.head() # op : Another way to perform same query can be using operation `within` instead of `intersect` . airport_with_country_within = gpd.sjoin(airport, simple_countries, how="inner", op='within') airport_with_country_within.tail() # how : We can use `left` , `right` , `inner` . # # `left`: use the index from the first (or left_df) geodataframe that you provide to sjoin; retain only the left_df geometry column # # `right`: use index from second (or right_df); retain only the right_df geometry column # # `inner`: use intersection of index values from both geodataframes; retain only the left_df geometry column airport_with_country_right = gpd.sjoin(airport, simple_countries, how="right", op='within') airport_with_country_right.head() # ### Edit the existing data # # Editing metadata updated_police.iloc[0] updated_police.iloc[0,2] = 24 updated_police.iloc[0] # Editing geometry # + from shapely.geometry import Point updated_point = Point(3,4) updated_police.iloc[0,0] = updated_point updated_police # - # ### Querying data # # 1. 
Based on metadata countries.head() India = countries[countries['ADMIN'] == "India"] India densly_pop = countries[countries['POP_EST'] > 100000000] densly_pop countriesWithC = countries[countries['SOVEREIGNT'].str.startswith('C')] countriesWithC densecountriesWithC = countries[(countries['SOVEREIGNT'].str.startswith('C')) & (countries['POP_EST'] > 1000000000)] densecountriesWithC # 2. Spatial Query # # Spatial query uses shapely geometry as base geometry on top of which geodataframe can be queried. # Available oprations are listed at # https://shapely.readthedocs.io/en/latest/manual.html#binary-predicates indian_shape = India['geometry'].squeeze() type(India['geometry'].squeeze()) test_pt = Point(1,1) test_pt.intersects(indian_shape) nashik = Point(73.76,19.96) nashik.within(indian_shape) indian_airport = airport[airport.within(indian_shape)] indian_airport # #### Quiz -> Can you create the dataframe of all airports and cities within your country indian_river = rivers[rivers.intersects(indian_shape)] indian_river.plot() Neighbours_India = countries[countries.touches(indian_shape)] Neighbours_India.plot() # ### Geospatial Operations # # Understanding base logic first! Back to `shapely` test_point = Point(0,0) test_point test_point.buffer(10) test_point.buffer(10).area # + from shapely.geometry import LineString test_line = LineString([(0, 0), (1, 1), (0, 2)]) test_line # - #Buffer puts original geometry at center and create buffer alongside test_line.buffer(0.1) # + #We can also put geometry on either side ( Positive value will put buffer to left) test_line.buffer(0.5, single_sided=True) # + #We can also put geometry on either side ( negative value will put buffer to right) test_line.buffer(-0.5, single_sided=True) # - # Operations on `geopandas` Indian_cities = places[places.within(indian_shape)] Indian_cities Indian_cities_m = Indian_cities.to_crs(3857) Indian_cities_m.crs Indian_cities_m.head() city_buffer = Indian_cities_m[['geom','name']] city_buffer fig, ax = plt.subplots(figsize=(16, 16)) India_m.plot(ax=ax, color='#ffffff', edgecolor='#6a6a6a', linewidth=2) city_buffer.plot(ax=ax, color='#f00', edgecolor='#000000') city_buffer['geom'] = city_buffer.buffer(50000) city_buffer countries.head() countries_centroid = countries[['geometry','NAME','CONTINENT']] countries_centroid.head() countries_centroid['geometry'] = countries_centroid['geometry'].centroid countries_centroid.head() countries_centroid.plot() countries['area'] = countries['geometry'].area countries.head() countries_m = countries.to_crs(3857) countries_m['area'] = (countries_m['geometry'].area)/1000000 countries_m # ## Visualising GeoDataFrame #simple visualisation countries_m.plot() countries_m = countries_m[countries_m['NAME'] != "Antarctica"] countries_m.plot() #color based on column countries_m.plot(column='CONTINENT') countries_m.plot(column='CONTINENT',legend=True) # + import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(16, 16)) countries_m.plot(ax=ax,column='CONTINENT',legend=True) # - ax = countries_m.plot(column='CONTINENT',legend=True) ax.set_axis_off() #Checkout available color maps => https://matplotlib.org/2.0.2/users/colormaps.html countries_m.plot(column='CONTINENT', cmap='winter') countries_m.plot(column='POP_EST',legend=True) countries_plot = countries_m[(countries_m['NAME'] != 'India') & (countries_m['NAME'] != 'China')] countries_plot.plot(column='POP_EST',legend=True,figsize=(16,16), legend_kwds={'label': 'Population'}) # ### matplotlib to show multiple data basemap = 
countries_m.plot(column='CONTINENT', cmap='cool') cities_m = places.to_crs(3857) cities_m.plot(ax=basemap, marker='o', color='red', markersize=5) #load world polygon bbox = gpd.read_file('../data/world.geojson') world = bbox.loc[0].geometry world cities_m = cities_m[cities_m.within(world)] basemap = countries_m.plot(column='CONTINENT', cmap='cool') cities_m.plot(ax=basemap, marker='o', color='red', markersize=5) # ### geopandas overlay to show multiple data # + fig, ax = plt.subplots(figsize=(16, 16)) India_m.plot(ax=ax, color='b', edgecolor='#f0f', linewidth=2) Indian_cities_m.plot(ax=ax, color='r', edgecolor='#fff') # - Indian_cities_m['geom'] = Indian_cities_m['geom'].buffer(50000) # + fig, ax = plt.subplots(figsize=(16, 16)) India_m.plot(ax=ax, color='b', edgecolor='#f0f', linewidth=2) Indian_cities_m.plot(ax=ax, color='r', edgecolor='#fff') # - non_rural_area = gpd.overlay(India_m, Indian_cities_m, how='difference') non_rural_area.plot(figsize=(16, 16)) # ## Interactive Maps in python from ipyleaflet import Map, GeoData, basemaps, LayersControl import geopandas as gpd # ## Loading empty Map # # This will include initiating map with center, zoom level and basemap choice. # Checkout available basemap options at : https://ipyleaflet.readthedocs.io/en/latest/api_reference/basemaps.html?highlight=basemap m = Map(center=(27,71), zoom = 3, basemap= basemaps.Esri.WorldTopoMap) m # ## Loading Data to Map # # ### 1. Loading Geopandas dataframe # + m = Map(center=(27,71), zoom = 3, basemap= basemaps.Stamen.Toner) cities = gpd.read_file('../data/ne_10m_populated_places/ne_10m_populated_places.shp') cities_data = GeoData(geo_dataframe = cities, style={'color': 'black', 'radius':4, 'fillColor': '#3366cc', 'opacity':0.5, 'weight':1.9, 'dashArray':'2', 'fillOpacity':0.6}, hover_style={'fillColor': 'red' , 'fillOpacity': 0.2}, point_style={'radius': 5, 'color': 'red', 'fillOpacity': 0.8, 'fillColor': 'blue', 'weight': 3}, name = 'Release') m.add_layer(cities_data) m # - # ### 2. Loading WMS layer # # + from ipyleaflet import Map, WMSLayer, basemaps wms = WMSLayer( url='https://ahocevar.com/geoserver/wms', layers='topp:states', format='image/png', transparent=True, attribution='Made for GeoPython 2021' ) m = Map(basemap=basemaps.CartoDB.Positron, center=(38.491, -95.712), zoom=4) m.add_layer(wms) m # - # ## Adding Popup # # ### 1. Adding static popup from ipywidgets import HTML from ipyleaflet import Map, Marker, Popup center = (19.975040, 73.763190) m = Map(center=center, zoom=17, close_popup_on_click=False) marker = Marker(location=(19.975040, 73.763190)) m.add_layer(marker) message2 = HTML() message2.value = "Hey!! I'm speaking at foss4g 2021 🔥" marker.popup = message2 m # ### 2. 
Using Custom data for popup # # For this example we'll prepare map of following scenario # Seeing all the cities as a point on map and on click show their name # + #Preparing data all_cities = gpd.read_file('../data/ne_10m_populated_places/ne_10m_populated_places.shp') all_countries = gpd.read_file('../data/ne_10m_admin_0_countries/ne_10m_admin_0_countries.shp') all_cities.dropna(subset=["NAME","geometry"]) India = all_countries[all_countries['NAME'] == 'India'] Indian_cities = all_cities[all_cities.within(India.squeeze().geometry)] Indian_cities # Creating Map from ipyleaflet import Map, Marker, Popup from ipywidgets import HTML center = (33.762918,68.637469) m = Map(center=center, zoom=3, close_popup_on_click=False) # Adding data as marker for index, row in Indian_cities.iterrows(): message2 = HTML() marker = Marker(location=(row['geometry'].y, row['geometry'].x)) message2.value = row['NAME'] # message2.description = row['NAME'] marker.popup = message2 m.add_layer(marker) # print(index) #load map m # - # ## Another interesting map options # # 1. AntPath # 2. Marker Cluster # 3. Heatmap # 4. Velocity # 5. Choropleth # # check out out at https://ipyleaflet.readthedocs.io/ # # ## Controls in map # # Different Controls can be added to the map to make it more user friendly. Some of such controls are as follows # # ### 1. Scale control # + from ipyleaflet import Map, ScaleControl m = Map(zoom=15, center=[19.975040, 73.763190]) m.add_control(ScaleControl(position='bottomleft')) m # - # ### 2. Split Map # + from ipyleaflet import Map, basemaps, basemap_to_tiles, SplitMapControl m = Map(zoom=15, center=[19.975040, 73.763190]) right_layer = basemap_to_tiles( basemaps.Stamen.Toner) left_layer = basemap_to_tiles(basemaps.CartoDB.Positron) control = SplitMapControl(left_layer=left_layer, right_layer=right_layer) m.add_control(control) m # - # ### Apart from these, some of the most widely used controls are # # 1. Draw on map # 2. Adding Legends # 3. Measure, etc. # # You can find all available controls at https://ipyleaflet.readthedocs.io/en/latest/index.html (Look for control section) # ## pydeck # # Python package based on deck.gl https://pydeck.gl/ which also provides support for 3d data and visualisation # # install the package `pip install pydeck` # # pydeck by default uses carto basemap, but it can be replaced with `Mapbox` or `Google`, to do so, you will need to get API key from their website # + import pydeck as pdk import pandas as pd data = '../data/flights.csv' commute_pattern = pd.read_csv(data) # view (location, zoom level, etc.) view = pdk.ViewState(latitude=21.214885, longitude=77.950061, pitch=50, zoom=3) # layer # from home (orange) to work (purple) arc_layer = pdk.Layer('ArcLayer', data=commute_pattern, get_source_position=['lon_from', 'lat_from'], get_target_position=['lon_to', 'lat_to'], get_width=5, get_tilt=15, pickable=True, auto_highlight=True, # RGBA colors (red, green, blue, alpha) get_source_color=[255, 165, 0, 80], get_target_color=[128, 0, 128, 80]) # render map # choose map style TOOLTIP_TEXT = {"html": "{flights} flights taken
    on this root"} arc_layer_map = pdk.Deck( layers=arc_layer, initial_view_state=view, tooltip=TOOLTIP_TEXT ) arc_layer_map.to_html('deck.html') arc_layer_map.show() # - # checkout other options at : https://pydeck.gl/index.html # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import numpy as np l=[1,2,3,4] np.asarray(l) np.zeros(5) np.zeros((2,3,4)) np.ones(4) np.ones((3,4,5)) np.zeros((4,5))+5 np.empty((2,4)) #generate random values which are very small np.linspace(4,6) #generates no between starting point and end point. default is 50 nos np.linspace(2,6,40) np.linspace(2,6,10).reshape(2,5) np.linspace(2,6,10,endpoint=False) #does not include the last no np.linspace(2,6,endpoint=False, retstep=False) np.logspace(2,4,10,base=10) #generate an array of log values a=np.arange(16).reshape(4,4) a a.max() b=np.arange(4,40).reshape(6,6) b b.max(axis=0) #row wise comparison b.max(axis=1) #column wise comparison b a=np.array([1,2,3,1,2,3]) a a+b a.T a=a.reshape(1,-1) a a.T a+b np.sqrt(b) np.exp(b) np.log10(b) x=np.array([1,2,3]) x y=x #shallow copy - any change in x will reflect in y y z=np.copy(x) #deep copy - any change in x will not reflect in z z x[0]=100 x # + y #y is reffering to the same memory location as x #any change in x will reflect in y also (y=x) # - z id(x) #returns the identity of object id(y) id(z) id(x[0]) id(y[0]) id(x[0])==id(y[0]) #id(x) and id(y) is different but still it is giving true id(x)==id(z) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np from MLlib.quark.layers import * from MLlib.models import Sequential x = np.arange(1, 1000).reshape(-1, 1)/1000 y = x*2 -3 m = Sequential() m.add(FFL(1, 1, activation='linear')) m.compile_model(lr=0.001, opt="sgd", loss="mse") m.summary() m.train(x, y, epochs=2000, show_every=100, batch_size=10) # m.visualize() # m.save_model() # load_model() print(m.predict(np.array([10]))) m.layers[0].weights, m.layers[0].biases m.visualize() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: desc-python # language: python # name: desc-python # --- # # Inspection of DC2 Object Table # ### (@wmwv) # ### Last Verified to Run: 2019-05-08 # # This notebook is currently inspection. To grow to be validation it should include goals for the visualizations and numerical thresholds for specific quantities. # # 1. Make density plots (e.g., `hexbin`, `hist2d`, `datashader`) of # - ra, dec # - u-g, g-r # - r-i, g-r # - i-z, g-r # - z-y, g-r # 2. 
Make 1D density plots (e.g., `hist`, kernel-density-estimation) # - N({ugrizy}) # - Shape parameters # + import os import numpy as np from numpy.lib import scimath as SM import pandas as pd import GCRCatalogs # + # %matplotlib inline import matplotlib.pyplot as plt from matplotlib.patches import Polygon import seaborn as sns # - # cmap = 'Oranges' cmap = 'viridis_r' # ## Load Data catalog_name = 'dc2_object_run1.2i' filters = ('u', 'g', 'r', 'i', 'z', 'y') cat = GCRCatalogs.load_catalog(catalog_name) # + # Look up the base directory from the GCR Catalogs config dpdd_parquet_dir = GCRCatalogs.get_catalog_config(catalog_name)['base_dir'] dpdd_parquet_file = 'dpdd_{}.parquet'.format(catalog_name) dpdd_parquet_file = os.path.join(dpdd_parquet_dir, dpdd_parquet_file) # - df = pd.read_parquet(dpdd_parquet_file) # ## Object Density in RA, Dec # DC2 Run 1.x Main region # https://docs.google.com/document/d/1aQOPL9smeDlhtlwDrp39Zuu2q8DKivDaHLQX3_omwOI/view # # | Location | RA (degrees) | Dec (degrees) | # |:--------------- |:------------ |:------------- | # Center | 55.064 | -29.783 # North-East Corner | 57.87 | -27.25 # North-West Corner | 52.25 | -27.25 # South-West Corner | 52.11 | -32.25 # South-East Corner | 58.02 | -32.25 dc2_run1x_region = [[57.87, -27.25], [52.25, -27.25], [52.11, -32.25], [58.02, -32.25]] region = Polygon(dc2_run1x_region) # + fig = plt.figure(figsize=(8, 8)) ax = plt.gca() ax.set_aspect(1) plt.hist2d(df['ra'], df['dec'], bins=100) plt.xlim(plt.xlim()[::-1]) # Flip to East left plt.xlabel('RA [deg]') plt.ylabel('Dec [deg]') plt.colorbar(shrink=0.5, label='objects / bin') region = Polygon(dc2_run1x_region, color='red', fill=False) ax.add_patch(region); # - # Select good detections: # 1. Marked as 'good' in catalog flags. # 2. SNR in given band > threshold # 3. In defined simulation range snr_threshold = 5 snr_filter = 'i' snr = df['psFlux_%s' % snr_filter] / df['psFluxErr_%s' % snr_filter] def ellipticity(I_xx, I_xy, I_yy): """Calculate ellipticity from second moments. 
Parameters ---------- I_xx : float or numpy.array I_xy : float or numpy.array I_yy : float or numpy.array Returns ------- e, e1, e2 : (float, float, float) or (numpy.array, numpy.array, numpy.array) Complex ellipticity, real component, imaginary component Copied from https://github.com/lsst/validate_drp/python/lsst/validate/drp/util.py """ e = (I_xx - I_yy + 2j*I_xy) / (I_xx + I_yy + 2*SM.sqrt(I_xx*I_yy - I_xy*2)) e1 = np.real(e) e2 = np.imag(e) return e, e1, e2 for filt in filters: df['e_{}'.format(filt)], df['e1_{}'.format(filt)], df['e2_{}'.format(filt)] = \ ellipticity(df['Ixx_{}'.format(filt)], df['Ixy_{}'.format(filt)], df['Iyy_{}'.format(filt)]) def inside_trapezoid(corners, ra, dec): # This is a slightly tedious way of defining a symmetric trapezoid # Could consider using geopandas, but that adds dependency dec_size = corners[1][1] - corners[2][1] # deg ra_left_side_delta = corners[1][0] - corners[2][0] ra_right_side_delta = corners[0][0] - corners[3][0] ra_left_side_slope = ra_left_side_delta / dec_size ra_right_side_slope = ra_right_side_delta / dec_size inside_ra = (corners[2][0] + ra_left_side_slope * (df['dec'] - corners[2][1]) < df['ra']) & \ (df['ra'] < corners[3][0] + ra_right_side_slope * (df['dec'] - corners[3][1])) inside_dec = (corners[2][1] < df['dec']) & (df['dec'] < corners[1][1]) return inside_ra & inside_dec inside = inside_trapezoid(dc2_run1x_region, df['ra'], df['dec']) good = df[(df['good']) & (snr > snr_threshold) & inside] stars = good[good['extendedness'] == 0] galaxies = good[good['extendedness'] > 0] print(len(df), len(good), len(stars), len(galaxies)) def plot_ra_dec(cat): """We're just doing this on a rectilearn grid. We should do a projection, of course, but that distortion is minor in this space.""" fig = plt.figure(figsize=(8, 8)) ax = plt.gca() ax.set_aspect(1) plt.hist2d(cat['ra'], cat['dec'], bins=100) plt.xlim(plt.xlim()[::-1]) # Flip to East left plt.xlabel('RA [deg]') plt.ylabel('Dec [deg]') plt.colorbar(shrink=0.5, label='objects / bin') region = Polygon(dc2_run1x_region, color='red', fill=False) ax.add_patch(region); plot_ra_dec(good) # ## Color-Color Diagrams and the Stellar Locus # + # We use the assets in `tutorials/assets' for the stellar-locus because it's the same file. datafile_davenport = '../tutorials/assets/Davenport_2014_MNRAS_440_3430_table1.txt' def get_stellar_locus_davenport(color1='gmr', color2='rmi', datafile=datafile_davenport): data = pd.read_table(datafile, sep='\s+', header=1) return data[color1], data[color2] def plot_stellar_locus(color1='gmr', color2='rmi', color='red', linestyle='--', linewidth=2.5, ax=None): model_gmr, model_rmi = get_stellar_locus_davenport(color1, color2) plot_kwargs = {'linestyle': linestyle, 'linewidth': linewidth, 'color': color, 'scalex': False, 'scaley': False} if not ax: ax = fig.gca() ax.plot(model_gmr, model_rmi, **plot_kwargs) # - def plot_color_color(z, color1, color2, range1=(-1, +2), range2=(-1, +2), bins=31, ax=None, figsize=(4,4)): """Plot a color-color diagram. 
Overlay stellar locus""" band1, band2 = color1[0], color1[-1] band3, band4 = color2[0], color2[-1] H, xedges, yedges = np.histogram2d( z['mag_%s' % band1] - z['mag_%s' % band2], z['mag_%s' % band3] - z['mag_%s' % band4], range=(range1, range2), bins=bins) zi = H.T xi = (xedges[1:] + xedges[:-1])/2 yi = (yedges[1:] + yedges[:-1])/2 if not ax: fig = plt.figure(figsize=figsize) ax = fig.gca() ax.pcolormesh(xi, yi, zi, cmap=cmap) ax.contour(xi, yi, zi) ax.set_xlabel('%s-%s' % (band1, band2)) ax.set_ylabel('%s-%s' % (band3, band4)) try: plot_stellar_locus(color1, color2, ax=ax) except KeyError as e: print("Couldn't plot Stellar Locus model for %s, %s" % (color1, color2)) def plot_four_color_color(cat): fig, axes = plt.subplots(2, 2, figsize=(8, 6)) colors = ['umg', 'rmi', 'imz', 'zmy'] ref_color = 'gmr' for ax, color in zip(axes.flat, colors): plot_color_color(cat, ref_color, color, ax=ax) plot_four_color_color(good) plot_four_color_color(stars) plot_four_color_color(galaxies) # Clearly one doesn't expect the galaxies to follow the stellar locus. The lines above are include to more easily guide the ey between the stars-only and the galaxies-only plots. # ## 1D Density Plots def plot_mag(filt, ax=None): if ax is None: ax = fig.gca() mag = 'mag_%s' % filt ax.hist([good[mag], stars[mag], galaxies[mag]], label=['all', 'star', 'galaxy'], range=(16, 30), bins=np.linspace(16, 30, 100), histtype='step') ax.set_xlabel(filt) ax.set_ylabel('objects / bin') ax.legend(loc='upper left') fig, axes = plt.subplots(2, 3, figsize=(12, 6)) for ax, filt in zip(axes.flat, filters): plot_mag(filt, ax=ax) # The sharp cut in i-band is because that was the reference band for most detections. It was r then i. The u-band points extend to 30th because most of them are non-detections. # # But hmmm... what is the extra extended shelf in the i-band histogram from? # Let's select those points and plot them in space and color. mag_threshold = 26.5 faint_bump_rows = good['mag_i'] > mag_threshold faint_bump = good[faint_bump_rows] plot_color_color(faint_bump, 'gmr', 'rmi') plot_ra_dec(faint_bump) plt.xlim(58.1, 52.0) plt.ylim(-32.3, -27.2) # Hmmm... so they're from something in the UDF fields of view. Naively this region could just be a bit deeper, but the color-color distribution doesn't make any sense. 
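# As a quick follow-up (a sketch using only the `faint_bump` selection and columns
# already defined above), we can check how large this faint i-band shelf is relative
# to the full `good` sample and how tightly it is confined on the sky:

# +
n_bump = len(faint_bump)
print("{} objects ({:.1f}% of 'good') have mag_i > {}".format(
    n_bump, 100 * n_bump / len(good), mag_threshold))
print("RA range: {:.2f} to {:.2f} deg".format(faint_bump['ra'].min(), faint_bump['ra'].max()))
print("Dec range: {:.2f} to {:.2f} deg".format(faint_bump['dec'].min(), faint_bump['dec'].max()))
# -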
# ## Blendedness and Extendedness w, = np.where(np.isfinite(good['blendedness'])) print(len(good['blendedness'])) print(len(w)) good_blendedness = good[np.isfinite(good['blendedness'])] plt.hexbin(good_blendedness['mag_i'], good_blendedness['blendedness'], bins='log'); plt.xlabel('i') plt.ylabel('blendedness'); plt.hexbin(good['mag_i'], good['extendedness'], extent=(14, 28, -0.1, +1.1), bins='log'); plt.xlabel('i') plt.ylabel('extendedness'); plt.ylim(-0.1, 1.1) # ## Shape Parameters # Ixx, Iyy, Ixy def plot_shape(filt, ax=None, legend=True): if not ax: ax = fig.gca() names = ['all', 'star', 'galaxy'] colors = ['blue', 'orange', 'green'] hist_kwargs = {'color': colors, 'log': True, 'bins': np.logspace(-1, 1.5, 100), 'range': (0, 50), 'histtype': 'step'} for prefix, ls in (('Ixx', '-'), ('Iyy', '--'), ('Ixy', ':')): field = '{}_{}'.format(prefix, filt) labels = ['{} {}'.format(prefix, name) for name in names] ax.hist([good[field], stars[field], galaxies[field]], label=labels, linestyle=ls, **hist_kwargs) ax.set_ylim(100, ax.get_ylim()[1]) ax.set_xlabel('{}-band Moments: Ixx, Iyy, Ixy [pixels^2]'.format(filt)) ax.set_ylabel('objects / bin') if legend: ax.legend() fig, axes = plt.subplots(2, 3, figsize=(12, 6)) legend = True for ax, filt in zip(axes.flat, filters): plot_shape(filt, ax=ax, legend=legend) legend = False # The stars (orange) are concentrated at low values of the source moments. # # Would be interesting to # 1. Look by magnitude or SNR to undersatnd the longer tail. Are these galaxies mis-classified as stars, or are these noise sources? # 2. Distribution of ellipticity (see validate_drp to type this right) def plot_ellipticity(good, stars, galaxies, filt, ax=None, legend=True): if not ax: ax = fig.gca() names = ['all', 'star', 'galaxy'] colors = ['blue', 'orange', 'green'] hist_kwargs = {'color': colors, 'log': True, 'bins': np.logspace(-1, 1.5, 100), 'range': (0, 5), 'histtype': 'step'} for prefix, ls in (('e', '-'), ('e1', '--'), ('e2', ':')): field = '{}_{}'.format(prefix, filt) labels = ['{} {}'.format(prefix, name) for name in names] ax.hist([good[field], stars[field], galaxies[field]], label=labels, linestyle=ls, **hist_kwargs) ax.set_xlim(0, 20) ax.set_ylim(10, ax.get_ylim()[1]) ax.set_xlabel('{}-band ellipticity'.format(filt)) ax.set_ylabel('objects / bin') if legend: ax.legend() fig, axes = plt.subplots(2, 3, figsize=(12, 6)) legend = True for ax, filt in zip(axes.flat, filters): plot_ellipticity(good, stars, galaxies, filt, ax=ax, legend=legend) legend = False # ## FWHM def plot_psf_fwhm(filters=filters, colors=('purple', 'blue', 'green', 'orange', 'red', 'brown')): for filt, color in zip(filters, colors): psf_fwhm = np.array(good['psf_fwhm_%s' % filt]) w, = np.where(np.isfinite(psf_fwhm)) sns.distplot(psf_fwhm[w], label=filt, color=color) plt.xlabel('PSF FWHM [arcsec]') plt.ylabel('objects density / bin') plt.legend() plot_psf_fwhm() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd # + # 2 main datatypes # - # car_type : 1-dimensional(Series) car_type = pd.Series(['SUV','Diesel','Petrol','Small']) car_type # colors : 1-dimensional(Series) colors = pd.Series(['Red','Blue','Green','Black']) colors # # car_data : multi-dimensional(DataFrame) car_data = pd.DataFrame({'Car Type': car_type, 'Color': colors}) car_data # import data car_sale = pd.read_csv("car-sales.csv") car_sale # 
export data car_sale.to_csv("exported-car-sales.csv",index=False) exported_car_sales = pd.read_csv("exported-car-sales.csv") exported_car_sales # ## DataTypes # + # Attributes # - car_sale.dtypes car_sale.columns car_column = car_sale.columns car_column car_sale.index # + # Functions # - car_sale.describe() car_sale.info() car_sale.mean() car_price = pd.Series([2000,22223,232233,2323]) car_price.mean() car_sale.sum() car_sale["Odometer (KM)"].sum() len(car_sale) # ## Viewing and selecting data car_sale.head() car_sale.head(7) car_sale.tail() car_sale.tail(2) # + # .loc and .iloc # - animals = pd.Series(["dog","cat","mouse","lion","tiger","cheeta"],index=[0,4,3,2,3,4]) animals # + # loc refers to index # - animals.loc[4] car_sale.loc[3] # + # iloc refers to position # - animals.iloc[4] car_sale.iloc[5] animals[:3] car_sale[:3] #both are same car_sale["Make"] car_sale.Make # + # search filters # - car_sale[car_sale.Make == "Toyota"] car_sale.Make == "Toyota" car_sale[car_sale["Odometer (KM)"] > 100000] pd.crosstab(car_sale["Make"],car_sale["Doors"]) # + # groupby # - car_sale.groupby(["Make"]).mean() car_sale.groupby(["Colour"]).mean() # %matplotlib inline import matplotlib.pyplot as plt car_sale["Odometer (KM)"].plot() car_sale["Odometer (KM)"].hist() car_sale car_sale["Price"].plot() car_sale["Price"] = car_sale["Price"].str.replace('[\$\,\.]','').astype(int) / 100 car_sale car_sale["Price"].plot() # ## Manipulating Data # + #lower() # - car_sale["Make"].str.lower() # assigns car_sale["Make"] = car_sale["Make"].str.lower() car_sale_missing = pd.read_csv("car-sales-missing-data.csv") car_sale_missing car_sale_missing["Odometer"].fillna(car_sale_missing["Odometer"].mean()) car_sale_missing["Odometer"] car_sale_missing["Odometer"].fillna(car_sale_missing["Odometer"].mean(),inplace=True) car_sale_missing["Odometer"] car_sale_missing.dropna() car_sale_missing_dropped = car_sale_missing.dropna() car_sale_missing_dropped car_sale_missing_dropped.to_csv("car-sale-missing-dropped.csv") # + # Column from series seat_column = pd.Series([5,5,5,5,5]) # New seats column car_sale["Seats"] = seat_column # - car_sale car_sale["Seats"].fillna(5,inplace=True) car_sale # column using python list fuel_economy = [7.5,9.2,5.0,9.6,8.7,4.7,7.6,8.7,3.0,4.5] car_sale["Fuel per 100KM"] = fuel_economy car_sale car_sale["Total fuel used"] = car_sale["Odometer (KM)"]/100 * car_sale["Fuel per 100KM"] car_sale car_sale["Total fuel used (L)"] = car_sale["Odometer (KM)"]/100 * car_sale["Fuel per 100KM"] car_sale # + #using single value # - car_sale["Number of wheels"] = 4 car_sale car_sale["Passed road safety"] = True car_sale car_sale.drop("Total fuel used", axis=1) # shuffle car_sale_shuffled = car_sale.sample(frac=1) car_sale_shuffled # + # Only 20% of data # - car_sale_shuffled.sample(frac=0.2) # + # reset index # - car_sale_shuffled.reset_index(drop=True,inplace=True) car_sale_shuffled # apply car_sale["Odometer (KM)"] = car_sale["Odometer (KM)"].apply(lambda x: x/1.6) car_sale car_sale.rename(columns={"Odometer (KM)": "Odometer (Mile)"},inplace=True) car_sale # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from scipy.stats import chi2_contingency import nltk from nltk.tokenize import word_tokenize, sent_tokenize from nltk.corpus import stopwords 
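# The nltk.download() calls below fetch the 'punkt' sentence tokenizer and the stopword
# corpus used later in this notebook; they only need to succeed once per environment.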
nltk.download('punkt') nltk.download('stopwords') from nltk.stem import PorterStemmer from nltk.stem import SnowballStemmer from nltk.stem import WordNetLemmatizer nltk.download('wordnet') from nltk.util import ngrams import string injury = pd.read_csv('severeinjury.csv', encoding='latin-1') # We are going to look at OSHA-recorded severe injuries between January 2015 and September 2020. We are interested to see if there is a pattern in which injuries lead to hospitalization and/or amputation. injury.head() # There are a number of columns that we can eliminate to make the data more manageable. Since most labor laws are made on the state and federal level, we will drop location data except for state. Further inspection could be done on the dropped columns, but we are going to focus our search for now. The source data noted that the Lat and Long columns may be unreliable, so we will be dropping them. # # We can also drop the UPA ID as it is a duplicate ID, and the Final Narrative. We could run NLP on the Final Narrative, but that is beyond the scope of this review. # # We will also drop the title columns after constructing dictionaries for labeling them later. injury.info() # Before we drop them we will first fill in our missing fields. # - Because we don't intend to use them in our analysis we will ignore the following: Address1, Address2, City, Zip, Latitude, Longitude. # - Primary NAICS will be filled with 0, as will hospitalization and amputation. # - Inspections will be converted to a binary column as the report numbers don't provide us with further information. # - Secondary Source will be filled with 0 and Secondary Source Title will be filled with None fill = {'Primary NAICS':0, 'Hospitalized':0, 'Amputation':0, 'Inspection':0, 'Secondary Source':0, 'Secondary Source Title':'None'} injury.fillna(value=fill, inplace=True) injury.loc[injury['Inspection']!=0, 'Inspection'] = 1 injury.head() injury.columns injury.drop(columns=['ID', 'UPA','Employer', 'Address1', 'Address2','Latitude', 'Longitude', 'Nature', 'Part of Body', 'Event', 'Source', 'Secondary Source', 'Primary NAICS',] ,inplace=True) injury.columns # # Narrative Top Words injury['Final Narrative'] nar1 = injury['Final Narrative'][0] nar1 sents = nltk.sent_tokenize(nar1) words = nltk.word_tokenize(nar1) unique_tokens = set(words) average_tokens = round(len(words)/len(sents)) print('Sentences: {}'.format(len(sents))) print('Words: {}'.format(len(words))) print('Unique Words: {}'.format(len(unique_tokens))) print('Average Words per Sentence: {}'.format(average_tokens)) stop_words = set(stopwords.words('english')) final_tokens=[] for each in words: if each not in stop_words: final_tokens.append(each) print('Non Stop Words: {}'.format(len(final_tokens))) lemmatizer = WordNetLemmatizer() lemmatized_words = [lemmatizer.lemmatize(word, pos='v') for word in final_tokens] # %pprint lemmatized_words # %pprint def prep_narrative(narrative): stop_words = set(stopwords.words('english'))|set(string.punctuation) sents = nltk.sent_tokenize(narrative) prepped_narrative = [] for sentence in sents: words = nltk.word_tokenize(sentence) final_tokens=[] for each in words: if each.lower() not in stop_words: lemma = lemmatizer.lemmatize(each.lower(), pos='v') final_tokens.append(lemma) prepped_narrative.extend(final_tokens) return prepped_narrative prepped = prep_narrative(injury['Final Narrative'][0]) prepped injury['lemmatized'] = injury['Final Narrative'].apply(prep_narrative) #injury['lemmatized'] = pd.read_pickle('Narrative_lemmatized.pkl') injury['lemmatized']
injury['lemmatized'].to_pickle('Narrative_lemmatized.pkl') # + def ranked_words(row): frequent = nltk.FreqDist(row) return frequent.most_common(5) def place_words(row, rank): frequent = nltk.FreqDist(row) if len(frequent) > rank: return frequent.most_common(5)[rank-1][0] else: return None # - injury['top_words'] = injury['lemmatized'].apply(ranked_words) injury['top_words'] for i in range(1,6): injury['top_word_{}'.format(i)] = injury['lemmatized'].apply(lambda x: place_words(x, i)) injury injury.isnull().sum() injury.fillna({'City':'unknown', 'Zip':0}, inplace=True) injury['EventDate'] = pd.to_datetime(injury['EventDate']) injury['Zip'] = injury['Zip'].astype('int') injury.info() # # State Names # + state_mapper = {'NY':'NEW YORK', 'WI':'WISCONSIN', 'PA':'PENNSYLVANIA', 'GA':'GEORGIA', 'FL':'FLORIDA', 'CO':'COLORADO', 'OK':'OKLAHOMA', 'TX':'TEXAS', 'LA':'LOUISIANA', 'MI':'MISSISSIPPI','NJ':'NEW JERSEY', 'OH':'OHIO', 'IL':'ILLINOIS', 'NE':'NEBRASKA', 'NH':'NEW HAMPSHIRE', 'KS':'KANSAS', 'MA':'MASSACHUSETTS', 'AR':'ARKANSAS', 'MI':'MICHIGAN', 'ID':'IDAHO', 'MI':'MISSOURI', 'ME':'MAINE', 'CT':'CONNECTICUT', 'WV':'WEST VIRGINIA', 'ND':'NORTH DAKOTA', 'MT':'MONTANA', 'NC':'NORTH CAROLINA', 'DE':'DELAWARE', 'CA':'CALIFORNIA', 'DC':'DISTRICT OF COLUMBIA', 'AL':'ALABAMA', 'TN':'TENNESSEE', 'OR':'OREGON', 'SD':'SOUTH DAKOTA', 'RI':'RHODE ISLAND', 'IN':'INDIANA', 'VA':'VIRGINIA', 'NM':'NEW MEXICO', 'MD':'MARYLAND', 'UT':'UTAH', 'VT':'VERMONT', 'AZ':'ARIZONA', 'IA':'IOWA', 'KY':'KENTUCKY', 'MN':'MINNESOTA', 'WA':'WASHINGTON', 'SC':'SOUTH CAROLINA', 'HI':'HAWAII', 'PR':'PUERTO RICO', 'VI':'VIRGIN ISLANDS','GU':'GUAM', 'NV':'NEVADA', 'WY':'WYOMING', 'AK':'ALASKA', 'NMI':'NORTHERN MARIANA ISLANDS', 'AS':'AMERICAN SAMOA', 'MO':'MISSOURI', 'MS':'MISSISSIPPI'} injury['State'].replace(state_mapper, inplace=True) injury['State'].unique() # - # # Part of Body injury['Part of Body Title'].unique() injury['Part of Body Title Short'] = injury['Part of Body Title'].copy() injury.loc[injury['Part of Body Title'].str.contains('foot|feet|toe|Foot|Feet|toe|heel|Heel|sole|Sole|Arch|instep|Ankle'), 'Part of Body Title Short'] = 'Foot' injury.loc[injury['Part of Body Title'].str.contains('finger|hand|Finger|Hand|Wrist'), 'Part of Body Title Short'] = 'Hand' injury.loc[injury['Part of Body Title'].str.contains('knee|leg|Leg|Knee|Butt|butt|Lower extremities|Thigh'), 'Part of Body Title Short'] = 'Leg' injury.loc[injury['Part of Body Title'].str.contains('Arm|arm|Elbow|elbow|shoulder|Shoulder|Upper extremities'), 'Part of Body Title Short'] = 'Arm' injury.loc[injury['Part of Body Title'].str.contains('Head|head|Face|face|Mouth|mouth|Nose|nose|Eye|Ear|Brain|Lip|Skull|Scalp|Tooth|Cranial|Cheek|Jaw'), 'Part of Body Title Short'] = 'Head' injury.loc[injury['Part of Body Title'].str.contains('Back|back|Lumbar'), 'Part of Body Title Short'] = 'Back' injury.loc[injury['Part of Body Title'].str.contains('Trunk|trunk|hip|Hip|Chest|chest'), 'Part of Body Title Short'] = 'Core' injury.loc[injury['Part of Body Title'].str.contains('Multiple|multiple|Whole Body|Upper and Lower'), 'Part of Body Title Short'] = 'Multiple' injury.loc[injury['Part of Body Title'].str.contains('organ|Internal|Organ|internal|Lung|Liver|Spleen|Heart|Thoracic|Cocc|Sacral'), 'Part of Body Title Short'] = 'Organ' injury.loc[injury['Part of Body Title'].str.contains('Pelv|Groin|Testis|Scrotum'), 'Part of Body Title Short'] = 'Groin' injury['Part of Body Title Short'].unique().sort() injury['Part of Body Title Short'].unique() # # Right to Work rtw = ['Alabama', 
'Arizona', 'Arkansas', 'Florida', 'Georgia', 'Idaho', 'Indiana', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana', 'Michigan','Mississippi', 'Nebraska', 'Nevada', 'North Carolina', 'North Dakota', 'Oklahoma', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah', 'Virginia' 'West Virginia', 'Wisconsin', 'Wyoming'] for index in range(len(rtw)): rtw[index] = rtw[index].upper() injury['RTW'] = False injury.loc[injury['State'].isin(rtw), 'RTW'] = True injury['RTW'] # # Presidential Voting prez_red = ['Alaska', 'Alabama', 'Arkansas', 'Florida', 'Idaho', 'Indiana', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana', 'Missouri','Mississippi', 'Montana', 'Nebraska', 'North Carolina', 'North Dakota', 'Ohio','Oklahoma', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah', 'West Virginia', 'Wyoming'] for index in range(len(prez_red)): prez_red[index] = prez_red[index].upper() injury['prez_red'] = 'Democrat' injury.loc[injury['State'].isin(prez_red), 'prez_red'] = 'Republican' injury # # Spending # ### Public public = pd.read_csv('slstate.csv') public.head() public.columns public.drop(columns=['2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', 'Unnamed: 0', 'Unnamed: 16', 'Unnamed: 17'], inplace=True) public.rename(columns={'Unnamed: 1':'state', '2018r':'2018'}, inplace=True) public public.dropna(inplace=True) public public_state = public.set_index('state') pub_stack = pd.DataFrame(public_state.stack(), columns=['public']).sort_index() pub_stack # ### Private private = pd.read_csv('nrstate.csv') private.head() private.columns private.drop(columns=['Unnamed: 19', 'Unnamed: 20', 'RSE(%)', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014'], inplace=True) private.rename(columns={'Unnamed: 0':'state', '2018r':'2018'}, inplace=True) private private.dropna(inplace=True) private private_state = private.set_index('state') priv_stack = pd.DataFrame(private_state.stack(), columns=['private']).sort_index() priv_stack pub_stack.index.rename(['state','year'], inplace=True) pub_stack.insert(1, value=priv_stack['private'], column='private') pub_stack # ### Spending Ratio spending = pub_stack.reset_index() spending spending['public'] = spending['public'].str.replace(',', '') spending['private'] = spending['private'].str.replace(',', '') spending = spending.astype({'public':'int', 'private':'int'}) spending['year'] = pd.to_datetime(spending['year']) spending['year'] = spending['year'].dt.year spending spending.info() spending['ratio'] = spending['private']/spending['public'] spending spending['total'] = spending['private']+spending['public'] spending spending['state'] = spending['state'].str.upper() spending injury['year']=injury['EventDate'].dt.year injury['year'] spending.rename(columns={'state':'State'}, inplace=True) spending aggs = injury.groupby(['State','year'])[['EventDate', 'Hospitalized', 'Amputation']].agg({'EventDate':'count', 'Hospitalized':'sum', 'Amputation':'sum'}) aggs = aggs.reset_index() aggs.rename(columns={'EventDate':'injuries'}, inplace=True) aggs combined = aggs.merge(spending, on=['State', 'year'], how='outer') combined # + # Right to Work rtw = ['Alabama', 'Arizona', 'Arkansas', 'Florida', 'Georgia', 'Idaho', 'Indiana', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana', 'Michigan','Mississippi', 'Nebraska', 'Nevada', 'North Carolina', 'North Dakota', 'Oklahoma', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah', 'Virginia' 'West Virginia', 'Wisconsin', 'Wyoming'] for index in range(len(rtw)): rtw[index] = 
rtw[index].upper() combined['RTW'] = False combined.loc[combined['State'].isin(rtw), 'RTW'] = True combined['RTW'] # Presidential Voting prez_red = ['Alaska', 'Alabama', 'Arkansas', 'Florida', 'Idaho', 'Indiana', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana', 'Missouri','Mississippi', 'Montana', 'Nebraska', 'North Carolina', 'North Dakota', 'Ohio','Oklahoma', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah', 'West Virginia', 'Wyoming'] for index in range(len(prez_red)): prez_red[index] = prez_red[index].upper() combined['prez_red'] = 'Democrat' combined.loc[combined['State'].isin(prez_red), 'prez_red'] = 'Republican' combined # - remove = ['PUERTO RICO', 'VIRGIN ISLANDS','GUAM','NORTHERN MARIANA ISLANDS', 'AMERICAN SAMOA'] combined_clean = combined.loc[~combined['State'].isin(remove)] combined_clean['State'].unique() combined_clean['State'].value_counts() OSHA = combined_clean.fillna(0).copy() OSHA pob = injury.groupby(['State', 'year'])['Part of Body Title Short'].value_counts().unstack(level=-1) pob = pob.fillna(0).copy() pob pob.reset_index() OSHA = OSHA.merge(pob, on=['State', 'year'], how='outer') OSHA OSHA = OSHA.loc[~OSHA['State'].isin(remove)] OSHA OSHA = OSHA.fillna(0).copy() OSHA OSHA['ratio'].describe() injury.to_csv('Injuries.csv') spending.to_csv('Spending.csv') OSHA.to_csv('OSHA.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (Anaconda 5) # language: python # name: anaconda5 # --- # # Critique # #### Review of Monica's sequences code by # + # #!/usr/bin/env python3 # -*- coding: utf-8 -*- def fibonacci(n): """ returns the sequence of n fibonacci numbers """ firstfib = 1 secondfib = 1 temp = 0 seq = [] for i in range (n): seq.append(firstfib) temp = secondfib + firstfib firstfib = secondfib secondfib = temp return seq # + # #!/usr/bin/env python3 # -*- coding: utf-8 -*- import fib import io import contextlib """fib.py Test Module Verify correct functionality of Fibonacci module. """ def test_fib_eighth(): """Test eight Fibonacci number. Note that this test highlights the convenience of defining a main function in a script file. Normally one would call this script from the command line via bash. Here we call it directly and redirect the output (stdout) to a string that we can test. """ # First redirect stdout to a string out = io.StringIO() with contextlib.redirect_stdout(out): # then run the script with commandline argument "6" # note that argument 0 is always the program name itself fib.main(["./fib.py", "6"]) # then check printed output in string, stripping newline characters assert out.getvalue().strip() == "8" # - # 1. Is it clear how the code is organized? # 2. Is the code properly documented with both docstrings and supplementary comments according to industry standards? # 3. Can you follow the algorithm of the code, i.e., what it is doing, and how? Does the code work? (Try cloning the respository and running it yourself to make sure it runs correctly.) # 4. Do you see any suggestions for how to improve the code? # 5. Are the test cases in the test module run by the code automatically by Travis? # 6. Do the tests verify correct functionality? On a scale of 0-100, the reviewer should rate the work produced by the reviewee # # Answers: # 1. The fibonacci method in the module by Monica works very well! The code is clearly organized well. # 2. 
The module is not properly documented with both docstrings and supplementary comments according to industry standards. It simply states what the return value is, without giving much explanation of how the code is run. However, Monica did summarize very well how the parameter plays a role in her comment. # 3. The algorithm is precise and easy to follow. The code works. # 4. The tests work fine too! I would rate her work a solid 96/100. # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The `Dataset` object # # To illustrate extreme value theory, this tutorial will focus on an example time-series dataset. # The example dataset will contain normally distributed time-series, but the positive tail has a Pareto(2.5) distribution superimposed. # The behaviour of this power law tail is under investigation in this tutorial series. # # The `Dataset` object forms the basis of extreme value analysis in this package. # At its core is a `pd.Series` in the `self.series` attribute. # The dataset performs checks on the series so that the rest of the package can safely make assumptions about data quality. # The checks are: # # * the data cannot contain `nan`s, # * the data cannot contain non-finite values, # * the data cannot contain duplicate values. # # In this notebook, the dataset will be generated and its properties will be investigated using generic plots. # + from evt.dataset import Dataset import matplotlib.pyplot as plt import numpy as np import pandas as pd from scipy.stats import pareto, norm N_DATAPOINTS = 100000 # number of datapoints in the example set NORMAL_STD = 5 # standard deviation of the normal distribution PARETO_SHAPE = 2.5 # shape parameter of the Pareto distribution EXAMPLE_NAME = 'Values' # for nicer plots EXAMPLE_INDEX_NAME = 'Index' np.random.seed(0) # enforce deterministic behaviour series = pd.Series( norm.rvs(scale=NORMAL_STD, size=N_DATAPOINTS) + pareto.rvs(PARETO_SHAPE, size=N_DATAPOINTS), name=EXAMPLE_NAME ) series.index.name = EXAMPLE_INDEX_NAME dataset = Dataset(series) # - # The series is stored in the `.series` attribute. dataset.series # When working with data, always plot the dataset. The `Dataset` supports simply plotting against its index. Moreover, it is possible to plot a boxplot of the values, to get a grip on quantiles. # + fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6)) dataset.plot_dataset(ax1) dataset.plot_boxplot(ax2) fig.tight_layout() plt.show() # - # The boxplot raises the suspicion that the right side of the distribution could be fat tailed. # The convergence behaviour of sample moments can be investigated as a next step. # + fig, ax = plt.subplots() dataset.plot_maximum_to_sum(ax) fig.tight_layout() plt.show() # - # The third and fourth moments do not seem to converge. # This could be a signal of a tail with an index between 2 and 3. # # Finally, let's determine what a good threshold would be for the peaks over threshold method. # This can be done using a mean excess plot. # + fig, ax = plt.subplots() dataset.plot_mean_excess(ax) fig.tight_layout() plt.show() # - # After careful consideration of the mean excess graph, a guess for the threshold could be a value of 15. # In the next notebook, the peaks over threshold method will be illustrated.
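# For reference, the data-quality checks listed at the top of this notebook can be reproduced on a plain `pd.Series`. The sketch below is illustrative only and is not the package's internal implementation.

# +
def check_series(series: pd.Series) -> None:
    """Raise if the series violates the assumptions described above."""
    if series.isna().any():
        raise ValueError('series contains NaNs')
    if not np.isfinite(series.to_numpy()).all():
        raise ValueError('series contains non-finite values')
    if series.duplicated().any():
        raise ValueError('series contains duplicate values')

check_series(dataset.series)  # the example dataset generated above passes all three checks
# -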
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Reinforcement Learning | Multi-Agent RL | Self-Play | Proximal Policy Optimization Algorithm (PPO) agent | Unity Tennis environment | Collaboration and Competition # # --- # # This repository, shows how to implement and train an actor-critic [PPO](https://arxiv.org/abs/1707.06347) (Proximal Policy Optimization) Reinforcement Learning agent to play Tennis against itself. The Unity simulation environment is called Tennis and rather similar to environments depicted [here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Learning-Environment-Examples.md). # # In this README.md you'll see how to install dependencies and run the code on your own machine. To understand the learning algorithm PPO checkout the `Tennis.ipynb` notebook. # # **Why?** Reinforcement Learning (RL) is one of the most fascinating areas of Machine Learning! It is quite intuitive, because we use positive and negative feedback to learn tasks via interaction with the environment. The PPO algorithm, by Schulman et al. 2017, has been used at OpenAi to solve complex real-world tasks such as manipulating physical objects with a robot hand. Check out this [Learning Dexterity: Uncut](https://www.youtube.com/watch?time_continue=1&v=DKe8FumoD4E&feature=emb_logo) video, or the ones about simulated humanoid robots from their website [here](https://openai.com/blog/openai-baselines-ppo/) to get an idea! # # **What?** In this Tennis environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play. This is a trained agent in the environment. # # # # **How?** Checkout the `Tennis.ipynb` notebook to learn more about the PPO algorithm, and check the implementations in `ppo_agent.py`, `policy.py`. If you want to train an agent, or see a trained agent play tennis then `train.py`, and `watch_trained_agent.py` are the go-to files. # ### 1. The Learning Algorithm # # The Proximal Policy Optimization [PPO](https://arxiv.org/abs/1707.06347) algorithm was developed by al. in 2017 at the company [OpenAI](https://openai.com/). The abstract reads as follows: # >We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time. 
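# To make the "surrogate objective" mentioned in the abstract concrete, here is a minimal PyTorch-style sketch of PPO's clipped loss. The names (`log_probs`, `old_log_probs`, `advantages`) and the default `eps` value are illustrative and are not taken verbatim from `ppo_agent.py`.
#
# ```python
# import torch
#
# def clipped_surrogate_loss(log_probs, old_log_probs, advantages, eps=0.2):
#     """Negative clipped surrogate objective (minimized by the optimizer)."""
#     ratio = torch.exp(log_probs - old_log_probs)   # pi_theta / pi_theta_old
#     unclipped = ratio * advantages
#     clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
#     return -torch.min(unclipped, clipped).mean()
# ```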
# # This is the PPo algorithm (taken from [source](http://rail.eecs.berkeley.edu/deeprlcourse-fa17/f17docs/lecture_13_advanced_pg.pdf)). # # # # The PPO computes a surrogate loss function (see above) without computing natural gradients and enables us to re-use previously collected trajectories from an older version of the same policy. For details please refer to the paper. In this notebook we use the following hyperparameters and model architecture. Refer to the code for more details. # # - SEED = 123 # - SEED = 0 # - LR = 5e-4 # - T_MAX_ROLLOUT = 1024 # - GAMMA = 0.999 # - TAU = 0.95 # - K_EPOCHS = 16 # - BATCH_SIZE = 64 # - EPSILON_PPO = 0.2 # - USE_ENTROPY = False # - ENTROPY_WEIGHT = 0.01 # - GRADIENT_CLIPPING = 2 # # The chosen actor critic model architecture is as follows. The actor and crtic network share the same fully connected body and have 5957 free, trainable parameters. # # ```python # ActorCritic( # (fc1_body): Linear(in_features=24, out_features=64, bias=True) # (fc2_body): Linear(in_features=64, out_features=64, bias=True) # (fc3_actor): Linear(in_features=64, out_features=2, bias=True) # (fc3_critic): Linear(in_features=64, out_features=1, bias=True) # ) # ``` # ### 2. Start the Environment # # We begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/). from unityagents import UnityEnvironment import numpy as np # Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded. # # - **Mac**: `"path/to/Tennis.app"` # - **Windows** (x86): `"path/to/Tennis_Windows_x86/Tennis.exe"` # - **Windows** (x86_64): `"path/to/Tennis_Windows_x86_64/Tennis.exe"` # - **Linux** (x86): `"path/to/Tennis_Linux/Tennis.x86"` # - **Linux** (x86_64): `"path/to/Tennis_Linux/Tennis.x86_64"` # - **Linux** (x86, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86"` # - **Linux** (x86_64, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86_64"` # # For instance, if you are using a Mac, then you downloaded `Tennis.app`. If this file is in the same folder as the notebook, then the line below should appear as follows: # ``` # env = UnityEnvironment(file_name="Tennis.app") # ``` env = UnityEnvironment(file_name="Tennis.app") # Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python. # get the default brain brain_name = env.brain_names[0] brain = env.brains[brain_name] # ### 3. Examine the State and Action Spaces # # In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play. # # The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping. # # Run the code cell below to print some information about the environment. 
# + # reset the environment env_info = env.reset(train_mode=True)[brain_name] # number of agents num_agents = len(env_info.agents) print('Number of agents:', num_agents) # size of each action action_size = brain.vector_action_space_size print('Size of each action:', action_size) # examine the state space states = env_info.vector_observations state_size = states.shape[1] print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size)) print('The state for the first agent looks like:', states[0]) # - # ### 4. Take Random Actions in the Environment # # In the next code cell, you will learn how to use the Python API to control the agents and receive feedback from the environment. # # Once this cell is executed, you will watch the agents' performance, if they select actions at random with each time step. A window should pop up that allows you to observe the agents. # # Of course, as part of the project, you'll have to change the code so that the agents are able to use their experiences to gradually choose better actions when interacting with the environment! """ for i in range(1, 6): # play game for 5 episodes env_info = env.reset(train_mode=False)[brain_name] # reset the environment states = env_info.vector_observations # get the current state (for each agent) scores = np.zeros(num_agents) # initialize the score (for each agent) t = 0 while True: actions = np.random.randn(num_agents, action_size) # select an action (for each agent) actions = np.clip(actions, -1, 1) # all actions between -1 and 1 env_info = env.step(actions)[brain_name] # send all actions to tne environment next_states = env_info.vector_observations # get next state (for each agent) rewards = env_info.rewards # get reward (for each agent) dones = env_info.local_done # see if episode finished scores += env_info.rewards # update the score (for each agent) states = next_states # roll over states to next time step t += 1 if np.any(dones): # exit loop if episode finished print('Espisode finished after {} time steps.'.format(t)) break print('Score (max over agents) from episode {}: {:.3f}'.format(i, np.max(scores))) env.close() """ # ### 5. Train a PPO Agent # # I'd recommend that you open the files `ppo_agent.py`, and `policy.py` in different tabs and walk through the code step by step to gain a deeper understanding. Training a PPO agent from scratch to solving the environment took approximately 02:00h on a 2.7 GHz Intel Core i5 (2015). Restart the kernel and only run the code underneath. If you just want to see a trained agent playing tennis, then jump to section 6. # + from unityagents import UnityEnvironment from collections import deque from ppo_agent import Agent import numpy as np import torch import pandas as pd # start unity environment env = UnityEnvironment(file_name="Tennis.app") brain_name = env.brain_names[0] brain = env.brains[brain_name] env_info = env.reset(train_mode=True)[brain_name] state_size = env_info.vector_observations.shape[1] action_size = brain.vector_action_space_size number_of_agents = len(env_info.agents) print_every = 10 def run_ppo(env, brain_name, agent, num_episodes=2000): scores = [] scores_window = deque(maxlen=100) for i_episode in range(1, num_episodes+1): agent.step(env, brain_name) max_score = agent.act(env, brain_name) scores.append(max_score) scores_window.append(max_score) print('\r{}/{} Episode. 
Current score: {:.4f} Avg last 100 score: {:.4f}'.\ format(i_episode, num_episodes, max_score, np.mean(scores_window)), end="") if i_episode % print_every == 0: print('\r{}/{} Episode. Current score: {:.4f} Avg last 100 score: {:.4f}'.\ format(i_episode, num_episodes, max_score, np.mean(scores_window))) if np.mean(scores_window) > 0.5: agent.save() print('\rEnvironment solved after {} episodes. Avg last 100 score: {:.4f}'.\ format(i_episode, np.mean(scores_window))) break return scores print("Start Training...") agent = Agent(state_size, action_size, load_pretrained=False) scores = run_ppo(env, brain_name, agent) print("\nTraining finished.") # + import matplotlib.pyplot as plt # %matplotlib inline scores = np.array(scores) x = np.where(scores >= 0.5) print('The first time a score >= 0.5 was reached at episode {}.'.format(x[0][0])) print('Max score reached: {:.4f}'.format(np.amax(scores))) df = pd.DataFrame({ 'x': np.arange(len(scores)), 'y': scores, }) rolling_mean = df.y.rolling(window=50).mean() #img_path ="imgs/scores_plot.png" fig = plt.figure() ax = fig.add_subplot(111) plt.plot(df.x, df.y, label='Scores') plt.plot(df.x, rolling_mean, label='Moving avg', color='orange') plt.ylabel('Scores') plt.xlabel('Episodes') plt.legend() plt.show() #fig.savefig(fname=img_path) #print('\nPlot saved to {}.'.format(img_path)) # - # ### 6. Watch a trained Agent # # Restart the kernel and run the code cell below to load a trained agent and run it in the environment. # + import numpy as np import torch from unityagents import UnityEnvironment from ppo_agent import Agent # start unity environment env = UnityEnvironment(file_name="Tennis.app") brain_name = env.brain_names[0] brain = env.brains[brain_name] env_info = env.reset(train_mode=True)[brain_name] state_size = env_info.vector_observations.shape[1] action_size = brain.vector_action_space_size agent = Agent(state_size, action_size, load_pretrained=True) num_episodes = 2 for i_episode in range(1, num_episodes+1): env_info = env.reset(train_mode=False)[brain_name] states = env_info.vector_observations states1 = states[0] states2 = states[1] scores = np.zeros(2) agent.ac_model.eval() while True: with torch.no_grad(): # self-play: same actor critic model is used for two players actions1, _, _, _ = agent.ac_model(states1) actions2, _, _, _ = agent.ac_model(states2) actions = torch.cat((actions1, actions2), dim=0) env_info = env.step([actions.cpu().numpy()])[brain_name] next_states = env_info.vector_observations dones = env_info.local_done scores += env_info.rewards states = next_states states1 = states[0] states2 = states[1] if np.any(dones): print('Episode {} finished. Scores reached: {}'.format(i_episode, scores)) break env.close() # - # ### 7. Ideas for future work # # # To reach better results one can: # # - do extensive hyper parameter search e.g. using larger T_MAX_ROLLOUT, K_EPOCHS, and BATCH_SIZE # - test the influence of the (USE_ENTROPY = True) term # - use a different model architecture with more and deeper layers, e.g. even different networks for the actor and critic # - run code on a GPU (e.g. with the headless version in the cloud) # - utilize parallel training # - more simulation time might yield better results # # ### 8. Credit # # This notebook is a project submission for the [Udacity Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893). If you are interested to learn more about Deep Reinforcement Learning, I can highly recommend it! 
My contribution lies mainly in sections 1, 5, 6, and 7, and the corresponding code files: `train.py`, `ppo_agent.py`, `policy.py`, and `watch_trained_agent.py`. # # Thanks and shoutout to [](https://github.com/jknthn), [](https://github.com/andreiliphd), and especially to [ShangtongZhang](https://github.com/ShangtongZhang). Parts of this code are based on their work! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Computing basic semantic similarities between GO terms # Adapted from book chapter written by _ and _ # In this section we look at how to compute semantic similarity between GO terms. First we need to write a function that calculates the minimum number of branches connecting two GO terms. # + # %load_ext autoreload # %autoreload 2 from goatools.obo_parser import GODag godag = GODag("go-basic.obo") # - go_id3 = 'GO:0048364' go_id4 = 'GO:0032501' print(godag[go_id3]) print(godag[go_id4]) # Let's get all the annotations from Arabidopsis. # + # from goatools.associations import read_gaf # associations = read_gaf("tair.gaf") import os from goatools.associations import dnld_assc fin_gaf = os.path.join(os.getcwd(), "tair.gaf") associations = dnld_assc(fin_gaf, godag) # - # Find the deepest common ancestor from goatools.semantic import deepest_common_ancestor go_root = deepest_common_ancestor([go_id3, go_id4], godag) print(go_root) # # Plot the two terms of interest and highlight their deepest common ancestor # # # |color |hex | GO Term | Description # |------|-------|------------|------------------------ # |blue |#d5ffff| GO:0008150 | deepest common ancestor # |green |#d1ffbd| GO:0048364 | User GO Term # |green |#d1ffbd| GO:0032501 | User GO Term # # ``` # $ scripts/go_plot.py GO:0008150#d5ffff GO:0048364#d1ffbd GO:0032501#d1ffdb -o aaa_lin.png --gaf=tair.gaf # # go-basic.obo: fmt(1.2) rel(2019-02-07) 47,387 GO Terms # READ 236,943 associations: tair.gaf # #d5ffff GO:0008150 # BP 29699 3.30 L00 D00 biological_process # #f1fbfd GO:0032502 # BP 3220 5.02 L01 D01 A developmental process # #d1ffdb GO:0032501 # BP 1003 5.48 L01 D01 B multicellular organismal process # GO:0048856 # BP 1040 5.46 L02 D02 A anatomical structure development # GO:0099402 # BP 17 6.90 L03 D03 A plant organ development # #d1ffbd GO:0048364 # BP 4 7.56 L04 D04 A root development # ``` # # # # Now we can calculate the semantic distance and semantic similarity, like so: # + from goatools.semantic import semantic_similarity sim = semantic_similarity(go_id3, go_id4, godag) print('The semantic similarity between terms {} and {} is {}.'.format(go_id3, go_id4, sim)) # - # Then we can calculate the information content of the single term, GO:0048364. # + from goatools.semantic import TermCounts, get_info_content # First get the counts of each GO term. termcounts = TermCounts(godag, associations) # Calculate the information content go_id = "GO:0048364" infocontent = get_info_content(go_id, termcounts) print('Information content ({}) = {}'.format(go_id, infocontent)) # - # Resnik's similarity measure is defined as the information content of the most informative common ancestor. That is, the most specific common parent-term in the GO.
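# Written out (this is the standard textbook definition, with $IC$ denoting information content and MICA the most informative common ancestor):
# $$ \textrm{sim}_{\textrm{Resnik}}(t_{1}, t_{2}) = IC\left(\textrm{MICA}(t_{1}, t_{2})\right) $$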
Then we can calculate this as follows: # + from goatools.semantic import resnik_sim sim_r = resnik_sim(go_id3, go_id4, godag, termcounts) print('Resnik similarity score ({}, {}) = {}'.format(go_id3, go_id4, sim_r)) # - # Lin's similarity measure is defined as: # $$ \textrm{sim}_{\textrm{Lin}}(t_{1}, t_{2}) = \frac{2*\textrm{sim}_{\textrm{Resnik}}(t_1, t_2)}{IC(t_1) + IC(t_2)} $$ # Then we can calculate this as # + from goatools.semantic import lin_sim sim_l = lin_sim(go_id3, go_id4, godag, termcounts) print('Lin similarity score ({}, {}) = {}'.format(go_id3, go_id4, sim_l)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt # + from sklearn import datasets X, y = datasets.make_moons(n_samples=500, noise=0.3, random_state=42) # - plt.scatter(X[y==0, 0], X[y==0, 1]) plt.scatter(X[y==1, 0], X[y==1, 1]) plt.show() # + from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) # - # ### AdaBoosting # # + from sklearn.ensemble import AdaBoostClassifier from sklearn.tree import DecisionTreeClassifier ada_clf = AdaBoostClassifier(base_estimator = DecisionTreeClassifier(max_depth=2), n_estimators=500, random_state=666) ada_clf.fit(X_train, y_train) # - ada_clf.score(X_test, y_test) # ### Gradient Boosting # # > 1. 训练一个模型m1, 产生错误e1 # > 2. 针对e1训练第二个模型m2, 产生错误e2 # > 3. 以此类推,最终结果是 m1 + m2 + ... # + from sklearn.ensemble import GradientBoostingClassifier gb_clf = GradientBoostingClassifier(max_depth=2, n_estimators=30, random_state=666) gb_clf.fit(X_train, y_train) # - gb_clf.score(X_test, y_test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: hackathon # language: python # name: hackathon # --- # + import ruamel.yaml as yaml import os import sys import pandas as pd import numpy as np NO_CONFIG_ERR_MSG = """No config file found. Root directory is determined by presence of "config.yaml" file.""" original_wd = os.getcwd() # Number of times to move back in directory num_retries = 10 for x in range(0, num_retries): # try to load config file try: with open("config.yaml", 'r') as stream: cfg = yaml.safe_load(stream) # If not found move back one directory level except FileNotFoundError: os.chdir('../') # If reached the max number of directory levels change to original wd and print error msg if x+1 == num_retries: os.chdir(original_wd) print(NO_CONFIG_ERR_MSG) # Add directory to PATH path = os.getcwd() if path not in sys.path: sys.path.append(path) # - # ## Load Data RPM_df = pd.read_csv('data/interim/player_RPM_stats.csv') box_score_df = pd.read_csv('data/raw/Box_Scores.csv') # ## Joining RPM to players # # Need to join on composite key made up of Player Name + Season # season_id appears to be calendar year the season started in # # Based off of the latest season being 2017 and not 2018 box_score_df['season_id'].unique() RPM_df.head() # most number of whitespaces RPM_df['names'].str.split(' ').apply(len).max() # + # Who has more than one space? RPM_df['name_length_post_split'] = RPM_df['names'].str.split(' ').apply(len) RPM_df.query("name_length_post_split > 2").names.value_counts().head() # Mostly Jrs and some multiple names # - # What's the RPM of these players? 
RPM_df.query("name_length_post_split > 2").RPM.describe() first_name_to_search = 'Tim' box_score_df[box_score_df.First_Name.str.contains(first_name_to_search)].head() # #### For now, will join to get RPM on a season/team/player level and fix this later if necessary potential_keys = [ 'First_Name', 'Last_Name', 'season_id', ] # #### Steps: # # 1) RPM: Seperate names into first and last (start by splitting on a space) # # 2) Match season_id to either the calendar year the season ended or began. Map to RPM data # # 3) Join on First, Last, Season game_mapping_df = pd.read_csv('data/raw/Game_Mapping.csv') game_mapping_df.head() game_mapping_df['Calendar_Year'] = game_mapping_df.Date_EST.str.split('/').str.get(-1) calendar_year_to_df_map = game_mapping_df.groupby(by=['Calendar_Year', 'Season']).size().to_frame('count').reset_index() team_tricodes_df = pd.read_csv('data/raw/Team_Tricodes.csv') team_tricodes_df.head() team_mapping = pd.read_csv('data/raw/Team_Mapping.csv') team_mapping.head(20) calendar_year_to_df_map # Only use season codes that start with 2 season_codes = [season_code for season_code in calendar_year_to_df_map.Season.unique() if str(season_code).startswith('2')] season_code_starting_year = [int(str(season_code)[1:]) for season_code in season_codes] season_code_ending_year = [season_code+1 for season_code in season_code_starting_year] # + season_id_records = { 'season_id':season_codes, 'Season_Start_Year':season_code_starting_year, 'Season_End_Year':season_code_ending_year, } # - season_id_map = pd.DataFrame.from_dict(season_id_records) season_id_map.head() # #### Merge to RPM data and check for nulls # + RPM_df = pd.merge(RPM_df, season_id_map, on=['Season_Start_Year', 'Season_End_Year'], how='left') RPM_df['season_id'].isnull().sum() # - RPM_df.head() # #### Split Names First and Last RPM_df['First_Name'] = RPM_df['names'].str.split(' ').str.get(0) RPM_df['Last_Name'] = RPM_df['names'].str.split(' ').str.get(-1) # ## Merge onto Box score RPM_df[potential_keys].dtypes box_score_df[potential_keys].dtypes # Max should be number of seasons since 2013 (when RPM was introduced) RPM_df[potential_keys].groupby(by=['First_Name', 'Last_Name']).count().sort_values(by='season_id', ascending=False).head() box_score_df[potential_keys] # + test_merge = pd.merge(RPM_df, box_score_df, on=['season_id', 'First_Name', 'Last_Name'], ) test_merge.head() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Student Exam Performance Workshop # Use Python's pandas module to load a dataset containing student demographic information and test scores and find relationships between student attributes and test scores. 
This workshop will serve as an introduction to pandas and will allow students to practice the following skills: # # - Load a csv into a pandas DataFrame and examine summary statistics # - Rename DataFrame column names # - Add columns to a DataFrame # - Change values in DataFrame rows # - Analyze relationships between categorical features and test scores # # **Bonus:** # # Determine the relationship between the students' lunch classification and average test scores by creating a seaborn boxplot # Import the python modules that we will need to use import numpy as np import pandas as pd import matplotlib.pyplot as plt def load_data(my_path): my_dataframe = pd.read_csv(my_path) return my_dataframe # Use the `load_data` function to load the StudentsPerformance.csv file into a pandas dataframe variable called `df` # # __Hint__: Keep in mind where the csv file is in relation to this Jupyter Notebook. Do you need to provide an absolute or relative file path? # + # Write python to call the function above and load the StudentPeformance csv file into a pandas dataframe # Keep this line so you can see the first five rows of your dataframe once you have loaded it! students_df.head(5) # - # __Next step:__ Now that we have loaded our DataFrame, let's look at the summary statistics of our data. We can use the `describe` method to accomplish this: students_df.describe(include='all') # By looking at this breakdown of our dataset, I can make at least the following observations: # # 1. Our DataFrame consists of eight columns, three of which are student test scores. # 2. There are no missing any values in our DataFrame! # 3. The data appears to be pretty evenly distributed. # 4. The column names are long and difficult to type # # ## Renaming DataFrame Columns # # Let's change our column names so they are easier to work with! # # __Hint__: Look into the pandas `columns` attribute to make the change! columns = [ 'gender', 'race', 'parentDegree', 'lunchStatus', 'courseCompletion', 'mathScore', 'readingScore', 'writingScore' ] def rename_columns(my_dataframe, my_columns): my_dataframe.columns = my_columns return my_dataframe # + # Use the above function to rename the DataFrame's column names students_df.head(10) #Look at the first ten rows of the DataFrame to ensure the renaming worked! # - # ## Adding Columns to a DataFrame # # Great! Next we want to add an `avgScore` column that is an average of the three given test scores (`mathScore`, `readingScore` and `writingScore`). This will allow us to generalize the students' performance and simplify the process of us examining our feature's impact on student performance. # + # Complete the following line of code to create an avgScore column students_df['avgScore'] = students_df.head(5) # - # ## Analyzing Feature Relationships # Now that our data is looking the way we want, let's examine how some of our features correlate with students' test performances. We will start by looking at the relationship between race and parent degree status on test scores. # # __Hint__: Use pandas' `groupby` method to examine these relationships. The documentation for `groupby` can be found here: https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.groupby.html students_df.groupby(['race','parentDegree']).mean() # From examining the above output, we can see that across all `race` groups, students with "high school" and "some high school" as their parent degree status (`parentDegree`) had lower test scores. 
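# To make that comparison easier to scan, the grouped means can be sorted by the average score. This is just an illustrative one-liner, and it assumes the `avgScore` column created earlier has been filled in:

students_df.groupby(['race', 'parentDegree'])['avgScore'].mean().sort_values()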
# # We can also use `groupby` to examine a subset of the original columns with respect to column-specific aggregations: students_df[['race','parentDegree','gender']].groupby(['gender']).count() # __Next step__: Since there seems to be a clear distinction between students that have parents with have some college education and those that do not, let's simplify our DataFrame by creating a `degreeBinary` column based on values in the `parentDegree` column. This new column will simply contain either "no_degree" or "has_degree." We can do this by writing a basic function and using pandas' `apply` method: # + # Complete this function to return the proper strings to denote degree status def degree_status(edu): if edu in {'high school', 'some high school'}: #Fill in your code here! students_df['degreeBinary'] = students_df['parentDegree'].apply(degree_status) students_df.head(10) # - # Great job! Now let's continue examining our features to find relationships in our data # # __Your turn:__ Use the `groupby` function again examine relationships between other features and student test scores. What can we learn about the relationship between these whether or not the students have completed the course and their test scores? What about the relationship between gender and test scores? # + # Use groupby to examine the relationship between course completion status and test scores # + # Use groupby to examine the relationship between gender and test scores # - # ## Bonus: Visualization # # Great job making it this far! As a bonus exercise, we will create a simple data visualization. We have examined the relationship between all of our features and student test scores except for one -- student lunch status, which is found in the `lunch` column. # # In order to explore this relationship, let's create a `barplot`, with the students'`lunch` status as the x-axis and their average test scores (`avgScore`) as the y-axis. # # We will use seaborn, which is a third-party library, to complete this visualization. If you do not already have seaborn installed, `pip install` it now! Follow the seaborn documentation to create the `barplot` in the cell below. # + import seaborn as sns # import the seaborn module -- make sure you have it installed! sns.set(style='whitegrid') def graph_data(my_dataframe, xkey='lunchStatus', ykey='avgScore'): # Fill this in to create the barplot! 
graph_data(students_df) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # IMAGE RANDOMIZATION (Training, Validation & Testing) # ### Code to generate the basic train/validation/testing image database for Keras # #### by # #### Last Update: 01/08/2018 # ---------------------------- # ## ORIGINAL & CLAHE DATA IMAGE RANDOMIZATION (copies files) # + """ BASIC IMAGE DATABASE TRAIN/VALIDATION/TEST RANDOMIZATION CODE --------------------------------- by Last Update: 2017/04/23 """ import glob import cv2 import os import shutil import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.image as mpimg import tensorflow as tf tf.Session(config=tf.ConfigProto(log_device_placement=True)) #To ensure activation of GPUs in TF Backend from keras.preprocessing.image import ImageDataGenerator #IMAGE RANDOMIZATION AND AUGMENTATION HELPER FUNCTIONS # Print iterations progress def printProgressBar (iteration, total, prefix = '', suffix = '', decimals = 1, length = 100, fill = '█'): """ Call in a loop to create terminal progress bar @params: iteration - Required : current iteration (Int) total - Required : total iterations (Int) prefix - Optional : prefix string (Str) suffix - Optional : suffix string (Str) decimals - Optional : positive number of decimals in percent complete (Int) length - Optional : character length of bar (Int) fill - Optional : bar fill character (Str) """ percent = ("{0:." + str(decimals) + "f}").format(100 * (iteration / float(total))) filledLength = int(length * iteration // total) bar = fill * filledLength + '-' * (length - filledLength) print('\r%s |%s| %s%% %s' % (prefix, bar, percent, suffix), end = '\r') # Print New Line on Complete if iteration == total: print() def data_randomization (inputpath, outputpath): # Print Message specifying database print('Randomized Spliting of database in:'+ inputpath) #Creation of required folders if not os.path.isdir(outputpath): os.mkdir(outputpath) for dirpath, dirnames, filenames in os.walk(inputpath): structure = os.path.join(outputpath + 'train/', dirpath[len(inputpath):]) if not os.path.isdir(structure): os.mkdir(structure) for dirpath, dirnames, filenames in os.walk(inputpath): structure = os.path.join(outputpath + 'validation/', dirpath[len(inputpath):]) if not os.path.isdir(structure): os.mkdir(structure) for dirpath, dirnames, filenames in os.walk(inputpath): structure = os.path.join(outputpath + 'test/', dirpath[len(inputpath):]) if not os.path.isdir(structure): os.mkdir(structure) class_num = 0 all_file_dataframe = pd.DataFrame([]) all_file_list = [] all_train_set_df = pd.DataFrame([]) all_validate_set_df = pd.DataFrame([]) all_test_set_df = pd.DataFrame([]) for directory in glob.iglob(inputpath + '*', recursive=True): class_file_list=[] df = pd.DataFrame([]) for filename in glob.iglob(directory + '/' +'*.png', recursive=True): class_file_list.append(filename) class_num += 1 all_file_list.append(class_file_list) df = pd.DataFrame({directory:class_file_list}) all_file_dataframe = pd.concat([all_file_dataframe, df], axis=1) train, validate, test = np.split(df.sample(frac=1), [int(train_p*len(df)), int((train_p + validation_p)*len(df))]) all_train_set_df = pd.concat([all_train_set_df, train], axis=1) all_validate_set_df = pd.concat([all_validate_set_df, validate], axis=1) all_test_set_df = pd.concat([all_test_set_df, test], axis=1) 
print('Total Number of classes: '+ str(len(all_file_list))) n = 0 l = sum([len(files) for r, d, files in os.walk(inputpath)])-1 for dir_n in all_file_dataframe.columns: #CREATE FULLY RANDOMIZED TRAINING SET for train_file_n in list(all_train_set_df.loc[:, dir_n]): if isinstance(train_file_n,str): n += 1 printProgressBar(n , l, prefix = 'Progress:', suffix = 'Complete', length = 50) shutil.copy2(train_file_n, train_file_n.replace(inputpath, outputpath + 'train/')) # Copy files to target filename is /data/test/img.png #CREATE FULLY RANDOMIZED VALIDATION SET for val_file_n in list(all_validate_set_df.loc[:, dir_n]): if isinstance(val_file_n,str): n += 1 printProgressBar(n , l, prefix = 'Progress:', suffix = 'Complete', length = 50) shutil.copy2(val_file_n, val_file_n.replace(inputpath, outputpath + 'validation/')) # Copy files to target filename is /data/test/img.png #CREATE FULLY RANDOMIZED TESTING SET for test_file_n in list(all_test_set_df.loc[:, dir_n]): if isinstance(test_file_n,str): n += 1 printProgressBar(n , l, prefix = 'Progress:', suffix = 'Complete', length = 50) shutil.copy2(test_file_n, test_file_n.replace(inputpath, outputpath + 'test/')) # Copy files to target filename is /data/test/img.png print('Total number of randomized files: '+ str(n)) # SPLITING PERCENTAGES (TRAINING=60% / VALIDATION=20% / TESTING=20%) train_p = 0.6 validation_p = 0.20 test_p = (1.0 - train_p - validation_p) print('SPLITING PERCENTAGES ARE: (TRAINING=' + str(train_p*100) +'% / VALIDATION=' + str(validation_p*100) +'% / TESTING=' + str(test_p*100) + '%)') # - #BASE DATA RANDOMIZATION #Definition of folder tree structure for converted files for original database and Randomization inputpath = './data/single_lesion_database/original_data/' outputpath ='./data/single_lesion_database/original_data_randomized/' data_randomization(inputpath, outputpath) #BASE DATA RANDOMIZATION #Definition of folder tree structure for converted files for original database and Randomization inputpath = './data/single_lesion_database/clahe_data/' outputpath ='./data/single_lesion_database/clahe_data_randomized/' data_randomization(inputpath, outputpath) # ------------------ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + # Author : # email : # + import time import os import torch import torch.nn as nn import numpy as np from torch.optim import LBFGS # %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt from scipy.interpolate import griddata import matplotlib.gridspec as gridspec torch.set_default_dtype(torch.float64) # - class ConventBlock(nn.Module): def __init__(self,in_N,out_N): super(ConventBlock, self).__init__() self.Ls = None self.net =nn.Sequential(nn.Linear(in_N,out_N),nn.Tanh()) def forward(self, x): out = self.net(x) return out # + class Network(torch.nn.Module): def __init__(self,in_N,m,H_Layer,out_N,**kwargs): super(Network,self).__init__() self.mu = kwargs["mean"] self.std = kwargs["stdev"] layers = [] layers.append(ConventBlock(in_N,m)) for i in range(0,H_Layer-1): layers.append(ConventBlock(m,m)) # output layer layers.append(nn.Linear(m,out_N)) # total layers self.net = nn.Sequential(*layers) def forward(self,x,y): data = torch.cat((x,y),dim=1); # normalize the input data = (data - self.mu)/self.std out = self.net(data) return out def init_weights(m): if type(m) == nn.Linear: 
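        # Glorot/Xavier normal initialization for the weights of every nn.Linear layer; biases are zeroed.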
nn.init.xavier_normal_(m.weight.data) nn.init.zeros_(m.bias) # + # ---------------------------------------------- # Defining the rhs : u_xx + u_yy + u = f(x,y) # ---------------------------------------------- omega = torch.tensor([1.0, 4.0]) def u_exact(x,y,omega): return torch.sin(omega[0]*np.pi*x)*torch.sin(omega[1]*np.pi*y) def u_xx(x,y,omega): return -(omega[0]*np.pi).pow(2) * u_exact(x,y,omega) def u_yy(x,y,omega): return -(omega[1]*np.pi).pow(2) * u_exact(x,y,omega) def source_function(x,y,omega): result = u_xx(x,y,omega) + u_yy(x,y,omega) + u_exact(x,y,omega) return result # - def fetch_interior_points(domain,N_data): """ Sampling collocation point: Args : - domain :(numpy array) for the size of the domain - N_data :(int) number of points out : a tensor of collocation points """ dim = domain.shape[0] soboleng = torch.quasirandom.SobolEngine(dimension=dim,scramble=True) data = soboleng.draw(N_data,dtype=torch.float64)*(domain[1] - domain[0]) + domain[0] x = data[:,0][:,None] y = data[:,1][:,None] return x,y def fetch_boundary_points(domain,N_data): """ Sampling collocation point: Args : - domain :(numpy array) for the size of the domain - N_data :(int) number of points out : a tensor of collocation points """ x_min = domain[0][0] x_max = domain[1][0] y_min = domain[0][1] y_max = domain[1][1] dim = domain.shape[0] N_data = N_data//dim**2 x,y = fetch_interior_points(domain,N_data) E_bc = torch.cat(( torch.ones_like(x), y), dim = 1) W_bc = torch.cat(( torch.full_like(x,x_min),y), dim = 1) N_bc = torch.cat((x, torch.ones_like(y) ), dim = 1) S_bc = torch.cat((x, torch.full_like(y,y_min)), dim = 1) data = torch.cat((N_bc, S_bc, E_bc, W_bc), dim = 0) x = data[:,0][:,None] y = data[:,1][:,None] return x,y def physics_loss(model,x,y,omega): u = model(x,y) u_x,u_y = torch.autograd.grad(u.sum(),(x,y),create_graph=True) u_xx = torch.autograd.grad(u_x.sum(),x,create_graph=True)[0] u_yy = torch.autograd.grad(u_y.sum(),y,create_graph=True)[0] lhs = u_xx + u_yy + u rhs = source_function(x,y,omega) loss = (lhs - rhs).pow(2) return loss def boundary_loss(model,x,y,omega): u = model(x,y) ue = u_exact(x,y,omega) loss = (u - ue).pow(2) return loss # neural network model kwargs ={"mean":0.0, "stdev":0.5772} domain = np.array([[-1.,-1.],[1.,1.]]) model = Network(in_N=2,m=30,H_Layer=3,out_N=1,**kwargs) print(model) def evaluate(model,domain): model.eval() xstar = np.linspace(domain[0][0],domain[1][0],64) ystar = np.linspace(domain[0][1],domain[1][1],64) x_star,y_star = np.meshgrid(xstar,ystar) x_test = torch.from_numpy(x_star.flatten()[:,None]) y_test = torch.from_numpy(y_star.flatten()[:,None]) u_star = u_exact(x_test,y_test,omega) u_pred = model(x_test,y_test) l2 = np.linalg.norm(u_star- u_pred.detach(), 2)/np.linalg.norm(u_star, 2) linf = max(abs(u_star- u_pred.detach().numpy())) return l2,linf.item() # + tags=[] epochs = 5000 disp = 500 print_to_consol = True # reset the model model.apply(init_weights) # update the optimizer optimizer = LBFGS(model.parameters(),line_search_fn="strong_wolfe") # initialize penalty parameter mu = torch.tensor(1.0) # maximum penalty value for safeguarding mu_max = torch.tensor(1e4) # l2 norm of constraints |C|_2 eta = torch.tensor(0.0) # penalty tolerance epsilon = torch.tensor(1e-8) # number of collocation points in the domain N_data = 512 x_dm,y_dm = fetch_interior_points(domain,N_data) x_dm = x_dm.requires_grad_(True) y_dm = y_dm.requires_grad_(True) # number of boundary points N_data = 256 # Generating Data x_bc,y_bc = fetch_boundary_points(domain,N_data) # lagrange 
multiplier for boundary conditions lambda_bc = torch.zeros_like(x_bc) # starting to train neural network model for epoch in range(1,epochs+1): def closure(): if torch.is_grad_enabled(): model.train() optimizer.zero_grad() pde_loss = physics_loss(model,x_dm,y_dm,omega).sum() bc_constraints = boundary_loss(model,x_bc,y_bc,omega) bc_loss = (lambda_bc * bc_constraints).sum() penalty = bc_constraints.pow(2).sum() loss = pde_loss + bc_loss + 0.5 * mu * penalty if loss.requires_grad: loss.backward() return loss def _closure(): model.eval() pde_loss = physics_loss(model,x_dm,y_dm,omega) bc_constraints = boundary_loss(model,x_bc,y_bc,omega) penalty = bc_constraints.pow(2).sum() return pde_loss,bc_constraints,penalty optimizer.step(closure) pde_loss,bc_constraints,penalty = _closure() with torch.no_grad(): if (torch.sqrt(penalty) >= 0.25*eta) and (torch.sqrt(penalty) > epsilon): mu = min(mu*2.0, mu_max) lambda_bc += mu * bc_constraints eta = torch.sqrt(penalty) # print average pde loss and the constraint norm |pi|_2 if epoch %disp == 0 and print_to_consol: print(f"epoch: {epoch : 5d}, avg pde loss:{pde_loss.mean():2.3e}, eta : {eta:2.3e}") # checkpointing the model torch.save(model.state_dict(),f"helmholtz.pt") l2,linf = evaluate(model,domain) print(f"relative l2 error :{l2:2.3e}, linf error : {linf :2.3e}") # + model.eval() xstar = np.linspace(domain[0][0],domain[1][0],64) ystar = np.linspace(domain[0][1],domain[1][1],64) x_star,y_star = np.meshgrid(xstar,ystar) x_test = torch.from_numpy(x_star.flatten()[:,None]) y_test = torch.from_numpy(y_star.flatten()[:,None]) u_star = u_exact(x_test,y_test,omega) u_pred = model(x_test,y_test) l2,linf = evaluate(model,domain) print(f"relative l2 error :{l2:2.3e}, linf error : {linf :2.3e}") # + # https://joseph-long.com/writing/colorbars/ def colorbar(mappable,min_val,max_val): from mpl_toolkits.axes_grid1 import make_axes_locatable import matplotlib.pyplot as plt last_axes = plt.gca() ax = mappable.axes fig = ax.figure divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.1) ticks = np.linspace(min_val, max_val, 4, endpoint=True) cbar = fig.colorbar(mappable, cax=cax,ticks=ticks) cbar.formatter.set_powerlimits((0, 0)) plt.sca(last_axes) return cbar params = { 'text.latex.preamble': '\\usepackage{gensymb}', 'image.origin': 'lower', 'image.interpolation': 'nearest', 'image.cmap': 'gray', 'axes.grid': False, 'savefig.dpi': 150, # to adjust notebook inline plot size 'axes.labelsize': 16, # fontsize for x and y labels 'axes.titlesize': 16, 'font.size': 16, 'legend.fontsize': 16, 'xtick.labelsize': 16, 'ytick.labelsize': 16, 'text.usetex': False, 'figure.figsize': [20, 3], 'font.family': 'serif', } plt.rcParams.update(params) # + # ------------ # 2D figures # -------------- cmap_list = ['jet','YlGnBu','coolwarm','rainbow','magma','plasma','inferno','Spectral','RdBu'] cmap = cmap_list[4] gs = gridspec.GridSpec(1, 3) gs.update(wspace=0.3) points = np.concatenate((x_star.flatten()[:,None],y_star.flatten()[:,None]),axis=1) u_pred_plot = griddata(points, u_pred.detach().flatten(), (x_star,y_star), method='cubic') u_star_plot = griddata(points, u_star.flatten(), (x_star,y_star), method='cubic') save = True #################################### Predicted Solution ##################################### ax = plt.subplot(gs[0,0]) min_val = np.min(u_star_plot) max_val = np.amax(u_star_plot) img = ax.pcolormesh(x_star,y_star,u_pred_plot, cmap = cmap,vmin=min_val,vmax=max_val,shading='gouraud') # ax.set_title('$\hat{u}(x,y)$') ax.set_xlabel('$x$') 
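# Note: min_val/max_val are taken from the exact solution (u_star_plot), so the predicted and
# exact panels below share one color scale and can be compared directly.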
ax.set_ylabel('$y$') ax.axis('square') cbar = colorbar(img,min_val,max_val) cbar.formatter.set_powerlimits((-1, -1)) ax.axis('square') ax.set_xticks([domain[0][0],(domain[1][0]+ domain[0][0])/2,domain[1][0]]) ax.set_yticks([domain[0][1],(domain[1][1]+domain[0][1])/2,domain[1][1]]) # #################################### Exact Solution ######################################### ax = plt.subplot(gs[0,1]) img = ax.pcolormesh(x_star,y_star,u_star_plot, cmap = cmap,vmin=min_val,vmax=max_val,shading='gouraud') # ax.set_title('$u(x,y)$') ax.set_xlabel('$x$') ax.set_ylabel('$y$') cbar = colorbar(img,min_val,max_val) cbar.formatter.set_powerlimits((-1, -1)) ax.axis('square') ax.set_xticks([domain[0][0],(domain[1][0]+ domain[0][0])/2,domain[1][0]]) ax.set_yticks([domain[0][1],(domain[1][1]+domain[0][1])/2,domain[1][1]]) # #################################### Absolute Error ######################################### ax = plt.subplot(gs[0,2]) img = ax.pcolormesh(x_star,y_star,np.abs(u_star_plot - u_pred_plot), cmap = cmap,shading='gouraud') # ax.set_title('$|u - \hat{u}|$') ax.set_xlabel('$x$') ax.set_ylabel('$y$') min_val = np.amin(np.abs(u_star_plot - u_pred_plot)) max_val = np.amax(np.abs(u_star_plot - u_pred_plot)) cbar = colorbar(img,min_val,max_val) cbar.formatter.set_powerlimits((0, 0)) ax.axis('square') ax.set_xticks([domain[0][0],(domain[1][0]+ domain[0][0])/2,domain[1][0]]) ax.set_yticks([domain[0][1],(domain[1][1]+domain[0][1])/2,domain[1][1]]) filename="pecann_helmholtz" plt.figtext(0.229, -0.25,'(a)' ,wrap=True, horizontalalignment='center', fontsize=18) plt.figtext(0.508, -0.25,'(b)', wrap=True, horizontalalignment='center', fontsize=18) plt.figtext(0.790, -0.25,'(c)', wrap=True, horizontalalignment='center', fontsize=18) plt.savefig('{}.png'.format(filename), bbox_inches='tight', pad_inches=0.2) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Computing the Bayesian Hilbert Transform-DRT # In this tutorial example, we will show how the developed BHT-DRT method works using a simple ZARC model. The equivalent circuit consists one ZARC model, *i.e*., a resistor in parallel with a CPE element. # + # import the libraries import numpy as np from math import pi, log10 import matplotlib.pyplot as plt import seaborn as sns # core library import Bayes_HT import importlib importlib.reload(Bayes_HT) # - # plot standards plt.rc('font', family='serif', size=15) plt.rc('text', usetex=True) plt.rc('xtick', labelsize=15) plt.rc('ytick', labelsize=15) # ## 1) Define the synthetic impedance experiment $Z_{\rm exp}(\omega)$ # ### 1.1) Define the frequency range N_freqs = 81 freq_min = 10**-4 # Hz freq_max = 10**4 # Hz freq_vec = np.logspace(log10(freq_min), log10(freq_max), num=N_freqs, endpoint=True) tau_vec = np.logspace(-log10(freq_max), -log10(freq_min), num=N_freqs, endpoint=True) omega_vec = 2.*pi*freq_vec # ### 1.2) Define the circuit parameters for the two ZARCs R_ct = 50 # Ohm R_inf = 10. # Ohm phi = 0.8 tau_0 = 1. 
# sec # ### 1.3) Generate exact impedance $Z_{\rm exact}(\omega)$ as well as the stochastic experiment $Z_{\rm exp}(\omega)$, here $Z_{\rm exp}(\omega)=Z_{\rm exact}(\omega)+\sigma_n(\varepsilon_{\rm re}+i\varepsilon_{\rm im})$ # + # generate exact T = tau_0**phi/R_ct Z_exact = R_inf + 1./(1./R_ct+T*(1j*2.*pi*freq_vec)**phi) # random rng = np.random.seed(121295) sigma_n_exp = 0.8 # Ohm Z_exp = Z_exact + sigma_n_exp*(np.random.normal(0, 1, N_freqs)+1j*np.random.normal(0, 1, N_freqs)) # - # ### 1.4) show the impedance in Nyquist plot # + fig, ax = plt.subplots() plt.plot(Z_exact.real, -Z_exact.imag, linewidth=4, color='black', label='exact') plt.plot(np.real(Z_exp), -Z_exp.imag, 'o', markersize=8, color='red', label='synth exp') plt.plot(np.real(Z_exp[0:70:20]), -np.imag(Z_exp[0:70:20]), 's', markersize=8, color="black") plt.plot(np.real(Z_exp[30]), -np.imag(Z_exp[30]), 's', markersize=8, color="black") plt.annotate(r'$10^{-4}$', xy=(np.real(Z_exp[0]), -np.imag(Z_exp[0])), xytext=(np.real(Z_exp[0])-15, -np.imag(Z_exp[0])), arrowprops=dict(arrowstyle='-',connectionstyle='arc')) plt.annotate(r'$10^{-1}$', xy=(np.real(Z_exp[20]), -np.imag(Z_exp[20])), xytext=(np.real(Z_exp[20])-5, 10-np.imag(Z_exp[20])), arrowprops=dict(arrowstyle='-',connectionstyle='arc')) plt.annotate(r'$1$', xy=(np.real(Z_exp[30]), -np.imag(Z_exp[30])), xytext=(np.real(Z_exp[30]), 8-np.imag(Z_exp[30])), arrowprops=dict(arrowstyle='-',connectionstyle='arc')) plt.annotate(r'$10$', xy=(np.real(Z_exp[40]), -np.imag(Z_exp[40])), xytext=(np.real(Z_exp[40]), 8-np.imag(Z_exp[40])), arrowprops=dict(arrowstyle='-',connectionstyle='arc')) plt.annotate(r'$10^2$', xy=(np.real(Z_exp[60]), -np.imag(Z_exp[60])), xytext=(np.real(Z_exp[60])+5, -np.imag(Z_exp[60])), arrowprops=dict(arrowstyle='-',connectionstyle='arc')) plt.legend(frameon=False, fontsize = 15) plt.axis('scaled') plt.xlim(5, 70) plt.ylim(-2, 32) plt.xticks(range(5, 70, 10)) plt.yticks(range(0, 40, 10)) plt.xlabel(r'$Z_{\rm re}/\Omega$', fontsize = 20) plt.ylabel(r'$-Z_{\rm im}/\Omega$', fontsize = 20) plt.show() # - # ## 2) Calculate the DRT impedance $Z_{\rm DRT}(\omega)$ and the Hilbert transformed impedance $Z_{\rm H}(\omega)$ # ### 2.1) optimize the hyperparamters # + # set the intial parameters sigma_n = 1 sigma_beta = 20 sigma_lambda = 100 theta_0 = np.array([sigma_n, sigma_beta, sigma_lambda]) data_real, data_imag, scores = Bayes_HT.HT_est(theta_0, Z_exp, freq_vec, tau_vec) # - # ### 2.2) Calculate the real part of the $Z_{\rm DRT}(\omega)$ and the imaginary part of the $Z_{\rm H}(\omega)$ # #### 2.2.1) Bayesian regression to obtain the real part of impedance for both mean and covariance # + mu_Z_re = data_real.get('mu_Z') cov_Z_re = np.diag(data_real.get('Sigma_Z')) # the mean and covariance of $R_\infty$ mu_R_inf = data_real.get('mu_gamma')[0] cov_R_inf = np.diag(data_real.get('Sigma_gamma'))[0] # - # #### 2.2.2) Calculate the real part of DRT impedance for both mean and covariance mu_Z_DRT_re = data_real.get('mu_Z_DRT') cov_Z_DRT_re = np.diag(data_real.get('Sigma_Z_DRT')) # #### 2.2.3) Calculate the imaginary part of HT impedance for both mean and covariance mu_Z_H_im = data_real.get('mu_Z_H') cov_Z_H_im = np.diag(data_real.get('Sigma_Z_H')) # #### 2.2.4) Estimate the $\sigma_n$ sigma_n_re = data_real.get('theta')[0] # ### 2.3) Calculate the imaginary part of the $Z_{\rm DRT}(\omega)$ and the real part of the $Z_{\rm H}(\omega)$ # + # 2.3.1 Bayesian regression mu_Z_im = data_imag.get('mu_Z') cov_Z_im = np.diag(data_imag.get('Sigma_Z')) # the mean and covariance of 
the inductance $L_0$ mu_L_0 = data_imag.get('mu_gamma')[0] cov_L_0 = np.diag(data_imag.get('Sigma_gamma'))[0] # 2.3.2 DRT part mu_Z_DRT_im = data_imag.get('mu_Z_DRT') cov_Z_DRT_im = np.diag(data_imag.get('Sigma_Z_DRT')) # 2.3.3 HT prediction mu_Z_H_re = data_imag.get('mu_Z_H') cov_Z_H_re = np.diag(data_imag.get('Sigma_Z_H')) # 2.3.4 estimated sigma_n sigma_n_im = data_imag.get('theta')[0] # - # ## 3) Plot the BHT_DRT # ### 3.1) plot the real parts of impedance for both Bayesian regression and the synthetic experiment band = np.sqrt(cov_Z_re) plt.fill_between(freq_vec, mu_Z_re-3*band, mu_Z_re+3*band, facecolor='lightgrey') plt.semilogx(freq_vec, mu_Z_re, linewidth=4, color='black', label='mean') plt.semilogx(freq_vec, Z_exp.real, 'o', markersize=8, color='red', label='synth exp') plt.xlim(1E-4, 1E4) plt.ylim(5, 65) plt.xscale('log') plt.yticks(range(5, 70, 10)) plt.xlabel(r'$f/{\rm Hz}$', fontsize=20) plt.ylabel(r'$Z_{\rm re}/\Omega$', fontsize=20) plt.legend(frameon=False, fontsize = 15) plt.show() # ### 3.2 plot the imaginary parts of impedance for both Bayesian regression and the synthetic experiment band = np.sqrt(cov_Z_im) plt.fill_between(freq_vec, -mu_Z_im-3*band, -mu_Z_im+3*band, facecolor='lightgrey') plt.semilogx(freq_vec, -mu_Z_im, linewidth=4, color='black', label='mean') plt.semilogx(freq_vec, -Z_exp.imag, 'o', markersize=8, color='red', label='synth exp') plt.xlim(1E-4, 1E4) plt.ylim(-3, 30) plt.xscale('log') plt.xlabel(r'$f/{\rm Hz}$', fontsize=20) plt.ylabel(r'$-Z_{\rm im}/\Omega$', fontsize=20) plt.legend(frameon=False, fontsize = 15) plt.show() # ### 3.3) plot the real parts of impedance for both Hilbert transform and the synthetic experiment mu_Z_H_re_agm = mu_R_inf + mu_Z_H_re band_agm = np.sqrt(cov_R_inf + cov_Z_H_re + sigma_n_im**2) plt.fill_between(freq_vec, mu_Z_H_re_agm-3*band_agm, mu_Z_H_re_agm+3*band_agm, facecolor='lightgrey') plt.semilogx(freq_vec, mu_Z_H_re_agm, linewidth=4, color='black', label='mean') plt.semilogx(freq_vec, Z_exp.real, 'o', markersize=8, color='red', label='synth exp') plt.xlim(1E-4, 1E4) plt.ylim(-3, 70) plt.xscale('log') plt.xlabel(r'$f/{\rm Hz}$', fontsize=20) plt.ylabel(r'$\left(R_\infty + Z_{\rm H, re}\right)/\Omega$', fontsize=20) plt.legend(frameon=False, fontsize = 15) plt.show() # ### 3.4) plot the imaginary parts of impedance for both Hilbert transform and the synthetic experiment mu_Z_H_im_agm = omega_vec*mu_L_0 + mu_Z_H_im band_agm = np.sqrt((omega_vec**2)*cov_L_0 + cov_Z_H_im + sigma_n_re**2) plt.fill_between(freq_vec, -mu_Z_H_im_agm-3*band_agm, -mu_Z_H_im_agm+3*band_agm, facecolor='lightgrey') plt.semilogx(freq_vec, -mu_Z_H_im_agm, linewidth=4, color='black', label='mean') plt.semilogx(freq_vec, -Z_exp.imag, 'o', markersize=8, color='red', label='synth exp') plt.xlim(1E-4, 1E4) plt.ylim(-3, 30) plt.xscale('log') plt.xlabel(r'$f/{\rm Hz}$', fontsize=20) plt.ylabel(r'$-\left(\omega L_0 + Z_{\rm H, im}\right)/\Omega$', fontsize=20) plt.legend(frameon=False, fontsize = 15) plt.show() # ### 3.5) plot the difference between real parts of impedance for Hilbert transform and the synthetic experiment difference_re = mu_R_inf + mu_Z_H_re - Z_exp.real band = np.sqrt(cov_R_inf + cov_Z_H_re + sigma_n_im**2) plt.fill_between(freq_vec, -3*band, 3*band, facecolor='lightgrey') plt.plot(freq_vec, difference_re, 'o', markersize=8, color='red') plt.xlim(1E-4, 1E4) plt.ylim(-10, 10) plt.xscale('log') plt.xlabel(r'$f/{\rm Hz}$', fontsize=20) plt.ylabel(r'$\left(R_\infty + Z_{\rm H, re} - Z_{\rm exp, re}\right)/\Omega$', fontsize=20) plt.show() # ### 
3.6) plot the density distribution of residuals for the real part

fig = plt.figure(1)
a = sns.kdeplot(difference_re, shade=True, color='grey')
a = sns.rugplot(difference_re, color='black')
a.set_xlabel(r'$\left(R_\infty + Z_{\rm H, re} - Z_{\rm exp, re}\right)/\Omega$',fontsize=20)
a.set_ylabel(r'pdf',fontsize=20)
a.tick_params(labelsize=15)
plt.xlim(-5, 5)
plt.ylim(0, 0.5)
plt.show()

# ### 3.7) plot the difference between imaginary parts of impedance for Hilbert transform and the synthetic experiment

difference_im = omega_vec*mu_L_0 + mu_Z_H_im - Z_exp.imag
band = np.sqrt((omega_vec**2)*cov_L_0 + cov_Z_H_im + sigma_n_re**2)
plt.fill_between(freq_vec, -3*band, 3*band, facecolor='lightgrey')
plt.plot(freq_vec, difference_im, 'o', markersize=8, color='red')
plt.xlim(1E-4, 1E4)
plt.ylim(-10, 10)
plt.xscale('log')
plt.xlabel(r'$f/{\rm Hz}$', fontsize=20)
plt.ylabel(r'$\left(\omega L_0 + Z_{\rm H, im} - Z_{\rm exp, im}\right)/\Omega$', fontsize=20)
plt.show()

# ### 3.8) plot the density distribution of residuals for the imaginary part

fig = plt.figure(2)
a = sns.kdeplot(difference_im, shade=True, color='grey')
a = sns.rugplot(difference_im, color='black')
a.set_xlabel(r'$\left(\omega L_0 + Z_{\rm H, im} - Z_{\rm exp, im}\right)/\Omega$',fontsize=20)
a.set_ylabel(r'pdf',fontsize=20)
a.tick_params(labelsize=15)
plt.xlim(-5, 5)
plt.ylim(0, 0.5)
plt.show()

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python [Root]
#     language: python
#     name: Python [Root]
# ---

# + [markdown] nbpresent={"id": "aa548334-3712-4bc8-a30a-5a2a2260bd1c"} slideshow={"slide_type": "slide"}
# # BASTRA NOTEBOOK
# In this *notebook* we work with the data generated by the BASTRA simulations.
# The data are grouped by scenario.
#
# Author: . sept.2016

# + [markdown] nbpresent={"id": "cc5f0cc1-0a51-4018-a01a-b9672f6f94a8"} slideshow={"slide_type": "slide"}
# ## Preliminaries
# Before running the scenarios, the required Python modules for scientific computing must be imported.

# + [markdown] nbpresent={"id": "3874ab56-9646-45e2-99ae-cb18c9603f38"} slideshow={"slide_type": "subslide"}
# ### Imports

# + nbpresent={"id": "69d6520c-8cb1-48f0-942b-f3c057f6d32e"} slideshow={"slide_type": "fragment"}
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
import matplotlib.lines as mlines
import sklearn as sk
import pandas as pd

# + [markdown] nbpresent={"id": "076a1b7c-343b-4bb3-a2cf-20d64498445f"} slideshow={"slide_type": "subslide"}
# ### Functions
# -

# #### Function str_vals_percent
# Given two values, format them as a string together with their percent difference
def str_vals_percent( n1, n2 ):
    return "[" + "{0:.2f}".format(n1) + "," + "{0:.2f}".format(n2) +"] "+ "{0:.2f}".format((n2-n1)*100/n1) + "%"

# + [markdown] nbpresent={"id": "745baf99-dab3-4a9a-b114-af69d3ab1633"} slideshow={"slide_type": "fragment"}
# #### Function draw_histogram
# Plots a histogram based on the dataset.
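#
# A minimal illustration of the DataFrame layout this helper relies on (the column names are
# taken from the function body below; the rows are invented purely for illustration):

# +
import pandas as pd  # already imported above; repeated so this sketch stands on its own

toy_ds = pd.DataFrame({
    'id':                [1, 2, 3],
    't_arrival_secs':    [950, 0, 1200],    # values <= 0 mark trips that never finished
    't_traveltime_secs': [410, 0, 620],
    'route_path_num':    [1, 1, 2],
})
# draw_histogram('toy example', toy_ds)  # run after executing the cell below that defines it
# -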
# + nbpresent={"id": "52715953-f6ac-4bfd-bd28-e678c2e1b118"} slideshow={"slide_type": "skip"} def draw_histogram( name, ds_raw ): ds = ds_raw.loc[ ds_raw['t_arrival_secs'] > 0 ] # ds_raw.head() travel_times = ds['t_traveltime_secs'] cols = 80 plt.figure(figsize=(18,8)) y_vals, x_bins, patches = plt.hist(travel_times, cols, normed=0, facecolor='green', alpha=0.50) x_mean = np.mean(travel_times) x_median = np.median(travel_times) y_mean = np.mean(y_vals) y_median = np.median(y_vals) r_mean = np.mean(ds['route_path_num']) r_median = np.median(ds['route_path_num']) # l_mean = np.mean(ds['route_length']) # l_median = np.median(ds['route_length']) l_mean = 1 l_median = 1 # add a 'best fit' line y1_vals = np.append( y_vals, y_vals[cols-1] ) x2_mean = [ x_mean for i in y1_vals ] x3_median = [ x_median for i in y1_vals ] y2_mean = [ y_mean for i in x_bins ] y3_median = [ y_median for i in x_bins ] # print( "X-VALS: ", len(x_bins), " ", x_bins.__repr__, ) # print( "Y_VALS: ", len(y1_vals), " ", y1_vals.__repr__, ) # print( "X-MEAN: ", len(x2_mean), " ", x2_mean.__repr__, ) # print( "X-MEDIAN: ", len(x3_median), " ", x3_median.__repr__, ) print( "Experiment:", name, "\n", "Trips Total:", ds_raw['id'].count(), "\n", "Trips Completed:", travel_times.count(), "\n", " * Travel time (mean):", x_mean, "\n", " * Travel time (median):", x_median, "\n", " * Route lengths (mean):", l_mean, "\n", " * Route lengths (median):", l_median, "\n", " * Route paths (mean):", r_mean, "\n", " * Route paths (median):", r_median, "\n", ) line_estimation = plt.plot(x_bins, y1_vals, 'r-', label='Estimation', linewidth=2, color='darkgreen') line_mean = plt.plot(x2_mean, y1_vals, label='Mean', linestyle='-', linewidth=2, color='blue') line_median = plt.plot(x3_median, y1_vals, label='Median', linestyle='-', linewidth=2, color='red') plt.xlabel('Travel Time') plt.ylabel('Frecuency') # plt.title(r'$\mathrm{Histogram\ of\ Travel Times: }\ $' + name ) plt.title("Travel Times Histogram: "+name ) mean_line = mlines.Line2D([], [], color='blue', markersize=1, label='Mean') median_line = mlines.Line2D([], [], color='red', markersize=1, label='Median') plt.legend(handles=[median_line, mean_line], bbox_to_anchor=(1.15, 0.9)) # plt.axis([travel_times.min(), travel_times.max()+1, 0, 0.03]) plt.grid(True) # If plotting OUTSIDE the notebook, set interactive mode ON/OFF # plt.ioff() # plt.ion() plt.show() # + [markdown] slideshow={"slide_type": "fragment"} # #### Function draw_2histograms # Plots the comparison between two overlapped histograms based on the datasets. 
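# Note that the comparison below reports each metric through `str_vals_percent`, i.e. in the form
# `[value1,value2] delta%`, where the percentage is the relative change from the first to the
# second experiment. A worked example (not output from real data):
#
#     str_vals_percent(200.0, 150.0)   # -> '[200.00,150.00] -25.00%'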
# + slideshow={"slide_type": "skip"} def draw_2histograms( label1, experiment1, ds_raw1, label2, experiment2, ds_raw2 ): ds1 = ds_raw1.loc[ ds_raw1['t_arrival_secs'] > 0 ] ds2 = ds_raw2.loc[ ds_raw2['t_arrival_secs'] > 0 ] # ds_raw.head() travel_times1 = ds1['t_traveltime_secs'] travel_times2 = ds2['t_traveltime_secs'] travel_times = [ travel_times1, travel_times2 ] cols = 80 fig, ax = plt.subplots(figsize=(18,8)) plt.xlabel('Travel Time') plt.ylabel('Frecuency') plt.title("Travel Times Histogram: "+label1+" <--> "+label2 ) # y_vals1, x_bins1, patches1 = ax.hist(travel_times1, histtype='bar', cols, normed=0, facecolor='green', alpha=0.30) # y_vals2, x_bins2, patches2 = ax.hist(travel_times2, histtype='bar', cols, normed=0, facecolor='blue', alpha=0.30) [y_vals1, y_vals2], x_bins, patches = ax.hist(travel_times, cols, normed=0, color=['green', 'blue'], alpha=0.30, histtype='bar') # first histogram ------------------------------------ x_mean1 = np.mean(travel_times1) x_median1 = np.median(travel_times1) y_mean1 = np.mean(y_vals1) y_median1 = np.median(y_vals1) r_mean1 = np.mean(ds1['route_path_num']) r_median1 = np.median(ds1['route_path_num']) # l_mean1 = np.mean(ds1['route_length']) # l_median1 = np.median(ds1['route_length']) l_mean1 = 1 l_median1 = 1 # add a 'best fit' line y1_vals1 = np.append( y_vals1, y_vals1[cols-1] ) x2_mean1 = [ x_mean1 for i in y1_vals1 ] x3_median1 = [ x_median1 for i in y1_vals1 ] y2_mean1 = [ y_mean1 for i in x_bins ] y3_median1 = [ y_median1 for i in x_bins ] # second histogram ------------------------------------ x_mean2 = np.mean(travel_times2) x_median2 = np.median(travel_times2) y_mean2 = np.mean(y_vals2) y_median2 = np.median(y_vals2) r_mean2 = np.mean(ds2['route_path_num']) r_median2 = np.median(ds2['route_path_num']) # l_mean2 = np.mean(ds2['route_length']) # l_median2 = np.median(ds2['route_length']) l_mean2 = 1 l_median2 = 1 # add a 'best fit' line y1_vals2 = np.append( y_vals2, y_vals2[cols-1] ) x2_mean2 = [ x_mean2 for i in y1_vals2 ] x3_median2 = [ x_median2 for i in y1_vals2 ] y2_mean2 = [ y_mean2 for i in x_bins ] y3_median2 = [ y_median2 for i in x_bins ] print( "Scenario: ", label1, " <--> ", label2, "\n", "Experiments: ", experiment1, " <--> ", experiment2, "\n", "Trips Total:", str_vals_percent(ds_raw1['id'].count(), ds_raw2['id'].count() ), "\n", "Trips Completed:", str_vals_percent(travel_times1.count(),travel_times2.count()), "\n", " * Travel time (mean) :", str_vals_percent( x_mean1, x_mean2), "\n", " * Travel time (median) :", str_vals_percent( x_median1, x_median2), "\n", # " * Route lengths (mean) :", str_vals_percent( l_mean1, l_mean2), "\n", # " * Route lengths (median):", str_vals_percent( l_median1, l_median2), "\n", " * Route paths (mean) :", str_vals_percent( r_mean1, r_mean2), "\n", " * Route paths (median) :", str_vals_percent( r_median1, r_median2), "\n", ) line_estimation1 = ax.plot(x_bins, y1_vals1, 'r-', label='Estimation', linewidth=2, color='darkgreen') line_estimation2 = ax.plot(x_bins, y1_vals2, 'r-', label='Estimation', linewidth=2, color='darkblue') max_y = np.linspace(0, max( max(y_vals1), max(y_vals2)), num=len(x_bins)) line_mean1 = ax.plot(x2_mean1, max_y, label='Mean', linestyle='--', linewidth=2, color='darkgreen') line_median1 = ax.plot(x3_median1, max_y, label='Median', linestyle='-', linewidth=2, color='darkgreen') line_mean2 = ax.plot(x2_mean2, y1_vals2, label='Mean', linestyle='--', linewidth=2, color='darkblue') line_median2 = ax.plot(x3_median2, max_y, label='Median', linestyle='-', linewidth=2, 
color='darkblue') mean_line1 = mlines.Line2D([], [], linestyle='--',color='darkgreen', markersize=1, label='1:Mean') median_line1 = mlines.Line2D([], [], linestyle='-', color='darkgreen', markersize=1, label='1:Median') mean_line2 = mlines.Line2D([], [], linestyle='--',color='darkblue', markersize=1, label='2:Mean') median_line2 = mlines.Line2D([], [], linestyle='-', color='darkblue', markersize=1, label='2:Median') ax.legend(handles=[median_line1, mean_line1, median_line2, mean_line2], bbox_to_anchor=(1.15, 0.9)) # plt.axis([travel_times.min(), travel_times.max()+1, 0, 0.03]) ax.grid(True) # If plotting OUTSIDE the notebook, set interactive mode ON/OFF # plt.ioff() # plt.ion() plt.show() # + [markdown] nbpresent={"id": "66306679-3be0-4087-bed0-760c7d275993"} slideshow={"slide_type": "fragment"} # #### Function process_scenario # Uploads results from scenario execution and plots an histogram based on the dataset. # * Select only those vehicles that finish the trip, i.e. have arrive time > 0. # + nbpresent={"id": "e69b902d-06a7-43de-8d38-6545b079eeb8"} slideshow={"slide_type": "skip"} def process_scenario( path, experiment ): datafile = path + "/experiments/" + experiment + ".csv" # print( "DATAFILE: ", datafile ) # ds means "dataset" ds_raw = pd.read_csv(datafile) draw_histogram( experiment, ds_raw ) # + [markdown] slideshow={"slide_type": "fragment"} # #### Function compare_scenarios # Uploads results from 2 scenario executions and plotting the histogram curves based on the dataset. # * Select only those vehicles that finish the trip, i.e. have arrive time > 0. # + slideshow={"slide_type": "skip"} def compare_scenarios( path, label1, experiment1, label2, experiment2 ): datafile1 = path + "/experiments/" + experiment1 + ".csv" datafile2 = path + "/experiments/" + experiment2 + ".csv" # print( "DATAFILE: ", datafile ) # ds means "dataset" ds_raw1 = pd.read_csv(datafile1) ds_raw2 = pd.read_csv(datafile2) draw_2histograms( label1, experiment1, ds_raw1, label2, experiment2, ds_raw2 ) # - compare_scenarios( "/Users/alvaro/Desktop/workspace/mutraff/uah-gist-mutraff-bastra", "GRID/BASTRA deactivated", "data_Grid_noBastra_160928_134826", "GRID/BASTRA activated", "data_Grid_160928_121606" ) # + [markdown] nbpresent={"id": "a622efcf-0563-4216-9d5a-f4b1f55d42a0"} slideshow={"slide_type": "slide"} # ## Simulation: GRID basic # Apply 2 weighted maps with penalty to routes, inside a network with GRID topology. Maps will be applied during all the simulation. Only 50% of the vehicles will use the system. # # **Scenario** # * Name = "Grid" # * Basic reference GRID scenario. # * Execution date: 160926_124432 # * Process time: 781 sec # * Platform: MAC-PRO OS-X Darwin # # **Simulation Time** # * Begin = 0 # * End = 10000 # # **SUMO Traffic** # * Network: # * Grid style: 16x16 # * File = "../scenes/Grid/grid.net.xml" # * Vehicle Types: # * car (weight=4 -> 80%) # * motorcycle (weight=1 -> 20%) # * Demand: # * Traffic background: # * Time: full simulation # * Volume: 3000 vehicles # * Distribution: O/D random (normal) # * Traffic flows: # * Time: full simulation # * 4 traffic flows end2end of grid. # * Volume: 100 vehicles # # **Bastra parameters** # * Logit = 0,5 # * Use_blance = true # * foresight_steps = 1 <-- Number of edges in the planned route to be scanned to consider jam. # * foresight_tries = 3 <-- Number of trials to achieve a new route that avoids jam. # * foresight_halting = 3 <-- Size of vehicles queued to understand that there is a jam. 
# * foresight_penalty = 50.0 <-- Weight-factor to apply to an edge in the rerouting algorithm, to force duarouter to over-weight this edge selection and thus avoid selection. # * Commands: # * **Weighted maps** # * Apply during all the simulation # * 2 weighted maps, applying 50% to each vehicle type (2) with weights_factor=3 (x3) in the selected routes. So "edge weight = travel_time * weight_factor" # * file="../scenes/Grid/maps/map1.pen.map.xml" prob="0.50" tag="motorcycle" # * file="../scenes/Grid/maps/map2.pen.map.xml" prob="0.50" tag="motorcycle" # * file="../scenes/Grid/maps/map1.pen.map.xml" prob="0.50" tag="car" # * file="../scenes/Grid/maps/map2.pen.map.xml" prob="0.50" tag="car" # + nbpresent={"id": "db3fb542-6dc1-44d6-8c08-adc17d7d044d"} slideshow={"slide_type": "subslide"} process_scenario( "/Users/alvaro/Desktop/workspace/mutraff/uah-gist-mutraff-bastra", "data_Grid_160928_121606" ) # + [markdown] nbpresent={"id": "c7f66994-f059-4fa7-b87c-fcf87fcd4692"} slideshow={"slide_type": "slide"} # ## Simulation: GRID basic (II) # Same as before. Repetition to contrast results. # # Apply 2 weighted maps with penalty to routes, inside a network with GRID topology. Maps will be applied during all the simulation. Only 50% of the vehicles will use the system. # # **Scenario** # * Name = "Grid" # * Basic reference GRID scenario. # * Execution date: 160928_130326 # * Process time: 535 sec # * Platform: MAC-PRO OS-X Darwin # # **Simulation Time** # * Begin = 0 # * End = 10000 # # **SUMO Traffic** # * Network: # * Grid style: 16x16 # * File = "../scenes/Grid/grid.net.xml" # * Vehicle Types: # * car (weight=4 -> 80%) # * motorcycle (weight=1 -> 20%) # * Demand: # * Traffic background: # * Time: full simulation # * Volume: 3000 vehicles # * Distribution: O/D random (normal) # * Traffic flows: # * Time: full simulation # * 4 traffic flows end2end of grid. # * Volume: 100 vehicles # # **Bastra parameters** # * Logit = 0,5 # * Use_blance = true # * foresight_steps = 1 <-- Number of edges in the planned route to be scanned to consider jam. # * foresight_tries = 3 <-- Number of trials to achieve a new route that avoids jam. # * foresight_halting = 3 <-- Size of vehicles queued to understand that there is a jam. # * foresight_penalty = 50.0 <-- Weight-factor to apply to an edge in the rerouting algorithm, to force duarouter to over-weight this edge selection and thus avoid selection. # * Commands: # * **Weighted maps** # * Apply during all the simulation # * 2 weighted maps, applying 50% to each vehicle type (2) with weights_factor=3 (x3) in the selected routes. So "edge weight = travel_time * weight_factor" # * file="../scenes/Grid/maps/map1.pen.map.xml" prob="0.50" tag="motorcycle" # * file="../scenes/Grid/maps/map2.pen.map.xml" prob="0.50" tag="motorcycle" # * file="../scenes/Grid/maps/map1.pen.map.xml" prob="0.50" tag="car" # * file="../scenes/Grid/maps/map2.pen.map.xml" prob="0.50" tag="car" # + nbpresent={"id": "478def4f-6428-4d47-a089-295db8ebbb0f"} slideshow={"slide_type": "subslide"} process_scenario( "/Users/alvaro/Desktop/workspace/mutraff/uah-gist-mutraff-bastra", "data_Grid_160928_130326" ) # + [markdown] slideshow={"slide_type": "slide"} # ## Simulation: GRID no Bastra (I) # Execute the simulation without Bastra rerouting mechanism. # # **Scenario** # * Name = "Grid" # * Basic reference GRID scenario. 
# * Execution date: 160928_235551 # * Process time: 621 sec # * Platform: MAC-PRO OS-X Darwin # # **Simulation Time** # * Begin = 0 # * End = 10000 # # **SUMO Traffic** # * Network: # * Grid style: 16x16 # * File = "../scenes/Grid/grid.net.xml" # * Vehicle Types: # * car (weight=4 -> 80%) # * motorcycle (weight=1 -> 20%) # * Demand: # * Traffic background: # * Time: full simulation # * Volume: 3000 vehicles # * Distribution: O/D random (normal) # * Traffic flows: # * Time: full simulation # * 4 traffic flows end2end of grid. # * Volume: 100 vehicles # # **Bastra parameters** # * Logit = 0 # * Use_balance = false # * foresight_steps = 0 <-- Number of edges in the planned route to be scanned to consider jam. # * foresight_tries = 0 <-- Number of trials to achieve a new route that avoids jam. # * foresight_halting = 0 <-- Size of vehicles queued to understand that there is a jam. # * foresight_penalty = 1 <-- Weight-factor to apply to an edge in the rerouting algorithm, to force duarouter to over-weight this edge selection and thus avoid selection. # * Commands: # * **Weighted maps** # * NONE # + slideshow={"slide_type": "subslide"} process_scenario( "/Users/alvaro/Desktop/workspace/mutraff/uah-gist-mutraff-bastra", "data_Grid_noBastra_160928_235551" ) # + [markdown] slideshow={"slide_type": "slide"} # ## Simulation: GRID no Bastra (II) # Same as before. Repetition to contrast results. # # Execute the simulation without Bastra rerouting mechanism. # # **Scenario** # * Name = "Grid" # * Basic reference GRID scenario. # * Execution date: 160928_134826 # * Process time: 4048 sec (second plane &) # * Platform: MAC-PRO OS-X Darwin # # **Simulation Time** # * Begin = 0 # * End = 10000 # # **SUMO Traffic** # * Network: # * Grid style: 16x16 # * File = "../scenes/Grid/grid.net.xml" # * Vehicle Types: # * car (weight=4 -> 80%) # * motorcycle (weight=1 -> 20%) # * Demand: # * Traffic background: # * Time: full simulation # * Volume: 3000 vehicles # * Distribution: O/D random (normal) # * Traffic flows: # * Time: full simulation # * 4 traffic flows end2end of grid. # * Volume: 100 vehicles # # **Bastra parameters** # * Logit = 0 # * Use_balance = false # * foresight_steps = 0 <-- Number of edges in the planned route to be scanned to consider jam. # * foresight_tries = 0 <-- Number of trials to achieve a new route that avoids jam. # * foresight_halting = 0 <-- Size of vehicles queued to understand that there is a jam. # * foresight_penalty = 1 <-- Weight-factor to apply to an edge in the rerouting algorithm, to force duarouter to over-weight this edge selection and thus avoid selection. 
# * Commands: # * **Weighted maps** # * NONE # + slideshow={"slide_type": "subslide"} process_scenario( "/Users/alvaro/Desktop/workspace/mutraff/uah-gist-mutraff-bastra", "data_Grid_noBastra_160928_134826" ) # + nbpresent={"id": "08badeff-9fcf-4a72-882f-406306564576"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- import os import numpy as np from shutil import copyfile KG_PATH = '/home/ubuntu/kgcatsdogsr/' DST_PATH = '/home/ubuntu/myfastai/nbs/data/dogscats/' def check_and_make_dir(path): if not os.path.exists(path): os.makedirs(path) def setup_train(classes, spath=KG_PATH+'train/', dpath=DST_PATH+'train/'): for c in classes: fdir = dpath+c+'s/' check_and_make_dir(fdir) clen = len(c) for f in os.listdir(spath): if f.startswith(c): copyfile(spath+f,fdir+f) def setup_test(spath=KG_PATH+'test/', dpath=DST_PATH+'test/'): fdir = dpath+'unknown/' check_and_make_dir(fdir) for f in os.listdir(spath): copyfile(spath+f,dpath+f) def sample_files(spath, num): return np.random.choice(os.listdir(spath), num, replace=False).tolist() def setup_valid(classes, path=DST_PATH, num=1000): for c in classes: spath = path+'train/'+c+'s/' dpath = path+'valid/'+c+'s/' check_and_make_dir(dpath) for f in sample_files(spath, num): os.rename(spath+f,dpath+f) def setup_sample(classes, path=DST_PATH, numt=80, numv=20): for c in classes: for d, n in [('train/', numt), ('valid/', numv)]: spath = path+d+c+'s/' dpath = path+'sample/'+d+c+'s/' check_and_make_dir(dpath) for f in sample_files(spath, n): copyfile(spath+f,dpath+f) def setup_data(classes): setup_train(classes) setup_test() setup_valid(classes) setup_sample(classes) classes = ['cat', 'dog'] setup_data(classes) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="9I13EoLdvSkL" executionInfo={"status": "ok", "timestamp": 1616593757196, "user_tz": -480, "elapsed": 2915, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-BV-vQrnIvTY/AAAAAAAAAAI/AAAAAAAADxw/wSzRzhF8b6c/s64/photo.jpg", "userId": "01517089400226719298"}} import os, signal, zipfile, random import numpy as np from google.colab import files import matplotlib.pyplot as plt import matplotlib.image as mpimg import tensorflow as tf from keras.preprocessing import image from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.preprocessing.image import img_to_array, load_img # + id="baWB9qSLfxKc" # + id="wL0eTtKQwXbc" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1608473578407, "user_tz": -480, "elapsed": 3465, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-BV-vQrnIvTY/AAAAAAAAAAI/AAAAAAAADxw/wSzRzhF8b6c/s64/photo.jpg", "userId": "01517089400226719298"}} outputId="af5ddb54-c562-4e87-ac0d-4afdd5174949" # !wget --no-check-certificate \ # https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps.zip \ # -O /tmp/rps.zip # !wget --no-check-certificate \ # https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps-test-set.zip \ # -O /tmp/rps-test-set.zip # + id="XY_qBS1Uu50_" local_zip = '/tmp/rps.zip' zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/tmp/') zip_ref.close() local_zip = '/tmp/rps-test-set.zip' 
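# extract the second archive as well: /tmp/rps/ will hold the training images and
# /tmp/rps-test-set/ the images used for validation below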
zip_ref = zipfile.ZipFile(local_zip, 'r') zip_ref.extractall('/tmp/') zip_ref.close() # + id="-fm9PrOlwcLg" train_dir = os.path.join('/tmp/rps') train_rock_dir = os.path.join(train_dir, 'rock') train_paper_dir = os.path.join(train_dir, 'paper') train_scissors_dir = os.path.join(train_dir, 'scissors') test_dir = os.path.join('/tmp/rps-test-set') test_rock_dir = os.path.join(test_dir, 'rock') test_paper_dir = os.path.join(test_dir, 'paper') test_scissors_dir = os.path.join(test_dir, 'scissors') # + id="AJ2SgEeixIfv" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1608473580813, "user_tz": -480, "elapsed": 1044, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-BV-vQrnIvTY/AAAAAAAAAAI/AAAAAAAADxw/wSzRzhF8b6c/s64/photo.jpg", "userId": "01517089400226719298"}} outputId="030964ef-8faf-4da3-835e-51b43c7678c8" print('total training rock images:', len(os.listdir(train_rock_dir))) print('total training paper images:', len(os.listdir(train_paper_dir))) print('total training scissors images:', len(os.listdir(train_scissors_dir))) print('total testing rock images:', len(os.listdir(test_rock_dir))) print('total testing paper images:', len(os.listdir(test_paper_dir))) print('total testing scissors images:', len(os.listdir(test_scissors_dir))) rock_files = os.listdir(train_rock_dir) print(rock_files[:10]) paper_files = os.listdir(train_paper_dir) print(paper_files[:10]) scissors_files = os.listdir(train_scissors_dir) print(scissors_files[:10]) # + id="C1P67KWltXYu" nrows = 2 ncols = 3 pic_index = 0 # + id="yMKghSRcxYlK" colab={"base_uri": "https://localhost:8080/", "height": 357} executionInfo={"status": "ok", "timestamp": 1590830783955, "user_tz": -480, "elapsed": 8452, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-BV-vQrnIvTY/AAAAAAAAAAI/AAAAAAAADxw/wSzRzhF8b6c/s64/photo.jpg", "userId": "01517089400226719298"}} outputId="3c7a21ef-c9fc-475c-f466-2bbd0213a7ac" fig = plt.gcf() fig.set_size_inches(ncols * 3, nrows * 3) pic_index += 2 next_rock_pix = [os.path.join(train_rock_dir, fname) for fname in rock_files[pic_index-2:pic_index]] next_paper_pix = [os.path.join(train_paper_dir, fname) for fname in paper_files[pic_index-2:pic_index]] next_scissors_pix = [os.path.join(train_scissors_dir, fname) for fname in scissors_files[pic_index-2:pic_index]] for i, img_path in enumerate(next_rock_pix + next_paper_pix + next_scissors_pix): sp = plt.subplot(nrows, ncols, i + 1) sp.axis('Off') img = mpimg.imread(img_path) plt.imshow(img) plt.show() # + id="XFSyUV5M2qSZ" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1608473585702, "user_tz": -480, "elapsed": 862, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-BV-vQrnIvTY/AAAAAAAAAAI/AAAAAAAADxw/wSzRzhF8b6c/s64/photo.jpg", "userId": "01517089400226719298"}} outputId="fd1238a7-5efb-4482-a3d0-44c036a02ee5" train_datagen = ImageDataGenerator( rescale = 1./255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, brightness_range=[0.8,1.2], horizontal_flip=True, fill_mode='nearest' ) train_generator = train_datagen.flow_from_directory( train_dir, target_size = (150, 150), batch_size = 126, class_mode = 'categorical' ) validation_datagen = ImageDataGenerator(rescale=1./255) validation_generator = train_datagen.flow_from_directory( test_dir, target_size = (150, 150), batch_size = 124, class_mode = 'categorical' ) # + id="EXpibkMH14ee" model = 
tf.keras.models.Sequential([ tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(150, 150, 3)), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(128, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(128, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Flatten(), tf.keras.layers.Dropout(0.5), tf.keras.layers.Dense(512, activation='relu'), tf.keras.layers.Dense(3, activation='softmax') ]) # + id="Lmyt-TIU2AwJ" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1608473593821, "user_tz": -480, "elapsed": 944, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-BV-vQrnIvTY/AAAAAAAAAAI/AAAAAAAADxw/wSzRzhF8b6c/s64/photo.jpg", "userId": "01517089400226719298"}} outputId="fb6fee8f-925b-4d5d-f188-6c504d280351" model.summary() # + id="Tbi_JRSFetow" checkpoint_path = "/rps_model/cp-{epoch:04d}.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) cp_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_path, verbose=1, save_weights_only=True ) # + id="bHhwvFR72W1a" model.compile( loss = 'categorical_crossentropy', optimizer = 'rmsprop', metrics = ['accuracy'] ) model.save_weights(checkpoint_path.format(epoch=0)) # + id="HUEvi_0x3oti" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1608474410873, "user_tz": -480, "elapsed": 629891, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-BV-vQrnIvTY/AAAAAAAAAAI/AAAAAAAADxw/wSzRzhF8b6c/s64/photo.jpg", "userId": "01517089400226719298"}} outputId="087b8ef1-6c87-4c3d-be98-2684792f04e9" history = model.fit( train_generator, steps_per_epoch = 20, epochs = 25, callbacks=[cp_callback], validation_data = validation_generator, validation_steps = 3 ) # + id="nkzDJSi04d2p" colab={"base_uri": "https://localhost:8080/", "height": 562} executionInfo={"status": "ok", "timestamp": 1608475811782, "user_tz": -480, "elapsed": 1589, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-BV-vQrnIvTY/AAAAAAAAAAI/AAAAAAAADxw/wSzRzhF8b6c/s64/photo.jpg", "userId": "01517089400226719298"}} outputId="bea81b64-8365-4835-d80f-28b3106726b9" acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(len(acc)) plt.plot(epochs, acc, 'r', label='Training accuracy') plt.plot(epochs, val_acc, 'b', label='Validation accuracy') plt.title('Training and validation accuracy') plt.legend(loc=0) plt.figure() plt.plot(epochs, loss, 'r', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') # + id="v81fHx3g6Yfu" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": " "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 898} executionInfo={"status": "ok", "timestamp": 1590588369657, "user_tz": -480, "elapsed": 69652, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-BV-vQrnIvTY/AAAAAAAAAAI/AAAAAAAADxw/wSzRzhF8b6c/s64/photo.jpg", "userId": "01517089400226719298"}} outputId="547776de-6349-4c09-821d-ca31c3e5c137" uploaded = files.upload() for fn in uploaded.keys(): path = '/content/' + fn img = image.load_img(path, target_size=(150, 150)) x = 
image.img_to_array(img) x = np.expand_dims(x, axis=0) images = np.vstack([x]) classes = model.predict(images, batch_size=10) print(fn) print(classes) # + id="FSn7ejEJ-uVk" colab={"base_uri": "https://localhost:8080/", "height": 673} executionInfo={"status": "ok", "timestamp": 1590580608774, "user_tz": -480, "elapsed": 3875, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-BV-vQrnIvTY/AAAAAAAAAAI/AAAAAAAADxw/wSzRzhF8b6c/s64/photo.jpg", "userId": "01517089400226719298"}} outputId="9c0bf7a7-e969-46fd-8918-e75f1460529f" # Let's define a new Model that will take an image as input, and will output # intermediate representations for all layers in the previous model after # the first. successive_outputs = [layer.output for layer in model.layers[1:]] #visualization_model = Model(img_input, successive_outputs) visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs) # Let's prepare a random input image from the training set. horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names] human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names] img_path = random.choice(horse_img_files + human_img_files) img = load_img(img_path, target_size=(300, 300)) # this is a PIL image x = img_to_array(img) # Numpy array with shape (150, 150, 3) x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 150, 150, 3) # Rescale by 1/255 x /= 255 # Let's run our image through our network, thus obtaining all # intermediate representations for this image. successive_feature_maps = visualization_model.predict(x) # These are the names of the layers, so can have them as part of our plot layer_names = [layer.name for layer in model.layers] # Now let's display our representations for layer_name, feature_map in zip(layer_names, successive_feature_maps): if len(feature_map.shape) == 4: # Just do this for the conv / maxpool layers, not the fully-connected layers n_features = feature_map.shape[-1] # number of features in feature map # The feature map has shape (1, size, size, n_features) size = feature_map.shape[1] # We will tile our images in this matrix display_grid = np.zeros((size, size * n_features)) for i in range(n_features): # Postprocess the feature to make it visually palatable x = feature_map[0, :, :, i] x -= x.mean() x /= x.std() x *= 64 x += 128 x = np.clip(x, 0, 255).astype('uint8') # We'll tile each filter into this big horizontal grid display_grid[:, i * size : (i + 1) * size] = x # Display the grid scale = 20. / n_features plt.figure(figsize=(scale * n_features, scale)) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Matrix Operation # # The following codes are for conducting matrix operation using Python. # # 1. Using `numpy` # 2. Using `pytorch` # 3. 
Using `tensorflow` import numpy as np import matplotlib.pyplot as plt A = np.array([[3, 4],[5, 6],[7, 8]]) # A matrix with 3x2 dimension B = np.array([[1, 9], [2, 0]]) # B matrix with 2x2 dimension AB = np.dot(A,B) AB # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="wEBbHS7zN0X2" outputId="41b8c85a-339d-4d28-8989-3e3e25654c86" # %tensorflow_version 2.x import tensorflow as tf device_name = tf.test.gpu_device_name() if device_name != '/device:GPU:0': raise SystemError('GPU device not found') print('Found GPU at: {}'.format(device_name)) # + id="SchA9hdXLExX" # !pip install -q kaggle # + id="XmALehcVCnkU" import tensorflow as tf # + colab={"base_uri": "https://localhost:8080/"} id="VkhJ04gMAF5X" outputId="c9bfbd2b-8c97-4c39-dafb-4c1843ab6003" # !pip install pydot # + colab={"base_uri": "https://localhost:8080/"} id="hxYbaZ8nCl0d" outputId="be1ccf89-3f4a-4a82-a0e1-1099aa267d6e" print(tf.__version__) # + id="ziW6mP19L5Qe" # !mkdir -p ~/.kaggle # !cp kaggle.json ~/.kaggle/ # + id="1feUgo1FL85S" # !chmod 600 ~/.kaggle/kaggle.json # + colab={"base_uri": "https://localhost:8080/"} id="IYVBqRZXL_1f" outputId="c4ffc0e6-0a76-400f-833d-3ea680698b45" # !kaggle datasets download -d 'xyaustin/messidor2' # + id="FNjlsJ6rMRCO" # !mkdir messidor2 # + id="ajqLRXlAOpBN" # ! unzip messidor2.zip -d messidor2 # + id="P7Lcm1FcMEL_" import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras from tensorflow.keras.models import Sequential from tensorflow.keras.models import Model from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping from tensorflow.keras.layers import Activation, Dense, Flatten, BatchNormalization, Conv2D, MaxPool2D,Dropout,Input from tensorflow.keras.optimizers import Adam, RMSprop from tensorflow.keras.callbacks import LearningRateScheduler from tensorflow.keras import regularizers from tensorflow.keras.callbacks import EarlyStopping from tensorflow.keras.metrics import categorical_crossentropy from tensorflow.keras.losses import SparseCategoricalCrossentropy from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.applications.resnet50 import preprocess_input from tensorflow.keras.metrics import AUC,Precision,Recall from sklearn.model_selection import KFold import math from sklearn.metrics import confusion_matrix from matplotlib import pyplot as plt import cv2 import itertools import os import random import shutil import glob # + id="4sBTsn5u7i1j" import os os.mkdir("/content/messidor2/messidor-2/0") os.mkdir("/content/messidor2/messidor-2/1") os.mkdir("/content/messidor2/messidor-2/2") os.mkdir("/content/messidor2/messidor-2/3") os.mkdir("/content/messidor2/messidor-2/4") # + id="tNeYUAAPPVx3" import os os.remove("/content/messidor2/messidor-2/images/20060411_58550_0200_PP.png") os.remove("/content/messidor2/messidor-2/images/IM002385.jpg") os.remove("/content/messidor2/messidor-2/images/IM003718.jpg") os.remove("/content/messidor2/messidor-2/images/IM004176.jpg") # importing pandas module # + id="I6tqcPHLQ6kF" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="871b3f7a-8caf-450d-e214-267d945af5db" import pandas as pd # making data frame from csv file data = pd.read_csv("/content/messidor2/messidor-2/messidor_data.csv") # 
making new data frame with dropped NA values data.dropna(subset=["adjudicated_dr_grade"],inplace=True) data.to_csv('messidor_data2.csv') # comparing sizes of data frames print("Old data frame length:", len(data)) os.remove("/content/messidor2/messidor-2/messidor_data.csv") import shutil shutil.move("/content/messidor_data2.csv","/content/messidor2/messidor-2") # + id="QKUJjqEJ6Ktl" import shutil, os import pandas as pd labels = pd.read_csv("/content/messidor2/messidor-2/messidor_data2.csv") labels = labels.sort_values('adjudicated_dr_grade') # + id="dZXjQ51m-27U" colab={"base_uri": "https://localhost:8080/"} outputId="55c443fe-7e96-413c-dc45-7eeda1eef048" class_names = list(labels.adjudicated_dr_grade.unique()) print(class_names) # + id="GoBSawZh98kQ" colab={"base_uri": "https://localhost:8080/"} outputId="9dc9910e-d837-4c3b-941f-0ca2f78530d9" class_names = list(labels.adjudicated_dr_grade.unique()) A = [int(class_names) for class_names in class_names] print(A) # + id="b4aAdfPl9tai" for c in A: # Category Name for i in list(labels[labels['adjudicated_dr_grade']==c]['image_id']): # Image Id get_image = os.path.join('/content/messidor2/messidor-2/images/', i) # Path to Images move_image_to_cat = shutil.copy(get_image, '/content/messidor2/messidor-2/'+str(c)) # + id="M9YbOmryiCXg" ls1=os.listdir('/content/messidor2/messidor-2/1') ls2=os.listdir('/content/messidor2/messidor-2/2') ls3=os.listdir('/content/messidor2/messidor-2/3') ls4=os.listdir('/content/messidor2/messidor-2/4') os.mkdir('/content/messidor2/messidor-2/comb1234') # + id="MtbhaRHFiCdS" path='/content/messidor2/messidor-2' dest='/content/messidor2/messidor-2/comb1234' for i in ls1: img_scr=path+'/1/'+str(i) shutil.move(img_scr,dest) # + id="0VGqUlD5iCkJ" path='/content/messidor2/messidor-2' dest='/content/messidor2/messidor-2/comb1234' for i in ls2: img_scr=path+'/2/'+str(i) shutil.move(img_scr,dest) # + id="Ldjp8eoHiClq" path='/content/messidor2/messidor-2' dest='/content/messidor2/messidor-2/comb1234' for i in ls3: img_scr=path+'/3/'+str(i) shutil.move(img_scr,dest) # + id="8YSFSQnRiCsS" path='/content/messidor2/messidor-2' dest='/content/messidor2/messidor-2/comb1234' for i in ls4: img_scr=path+'/4/'+str(i) shutil.move(img_scr,dest) # + id="7qUNIrDkSf1N" for i in range(1,5): shutil.rmtree('/content/messidor2/messidor-2/'+str(i)) # + id="kpCb4WpPSpI7" os.rename('/content/messidor2/messidor-2/comb1234','/content/messidor2/messidor-2/1') # + id="MhwmcLcAlUhy" original_data_path = "/content/messidor2/messidor-2" train_pct = 0.8 test_pct = 0.1 val_pct = 0.1 if os.path.isdir("/content/messidor2/messidor-2/train/0") is False: # create folders for the sets os.mkdir("/content/messidor2/messidor-2/train") os.mkdir("/content/messidor2/messidor-2/test") os.mkdir("/content/messidor2/messidor-2/validation") for i in range(0, 2): # path to inputs with different classes num_folder_path = f'{original_data_path}/{i}' num_files_in_folder = len(os.listdir(num_folder_path)) train_size1 = int(num_files_in_folder * train_pct) test_size1 = int(num_files_in_folder * test_pct) val_size = int(num_files_in_folder * val_pct) train_size = train_size1 test_size = test_size1 os.mkdir(f'/content/messidor2/messidor-2/train/{i}') os.mkdir(f'/content/messidor2/messidor-2/test/{i}') os.mkdir(f'/content/messidor2/messidor-2/validation/{i}') test_samples = random.sample(os.listdir(num_folder_path), test_size) for file_name in test_samples: shutil.move((f"/content/messidor2/messidor-2/{i}/{file_name}"), f'/content/messidor2/messidor-2/test/{i}') val_samples = 
random.sample(os.listdir(num_folder_path), val_size) for file_name in val_samples: shutil.move((f"/content/messidor2/messidor-2/{i}/{file_name}"), f'/content/messidor2/messidor-2/validation/{i}') train_samples = random.sample(os.listdir(num_folder_path), train_size) for file_name in train_samples: shutil.move((f"/content/messidor2/messidor-2/{i}/{file_name}"), f'/content/messidor2/messidor-2/train/{i}') # + id="xE-ENwQnTPHw" train_path="/content/messidor2/messidor-2/train" test_path="/content/messidor2/messidor-2/test" val_path='/content/messidor2/messidor-2/validation' # + id="ULiHpFgZJ9Ly" shape=(512,512,1) # + colab={"base_uri": "https://localhost:8080/"} id="9kEo0gd8TbCL" outputId="1d6495b2-618e-4065-995d-fe0c941be2ef" CLASS_MODE = "categorical" BATCH_SIZE = 16 train_batches = ImageDataGenerator(rescale=1./223, featurewise_center=True, featurewise_std_normalization=True, rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True, vertical_flip=True).flow_from_directory(directory = train_path, target_size = shape[:2], batch_size = BATCH_SIZE, class_mode=CLASS_MODE,shuffle=True) val_batches = ImageDataGenerator(rescale=1./223).flow_from_directory(directory = val_path, target_size = shape[:2], batch_size = BATCH_SIZE, class_mode=CLASS_MODE, shuffle=False) test_batches = ImageDataGenerator(rescale=1./223).flow_from_directory(directory = test_path, target_size = shape[:2], batch_size = BATCH_SIZE, class_mode=CLASS_MODE, shuffle=False) # + colab={"base_uri": "https://localhost:8080/", "height": 338} id="FbDMAGHUTlHW" outputId="daff973c-f77b-4ea2-ff87-065115a3c2ee" plt.figure() f, axarr = plt.subplots(1,3) for i in range (0,3): random_num = random.randint(0,5) image = train_batches[random_num] axarr[i].imshow(image[0][0]) print(np.shape(image[0][0])) # + id="GIGG0ciqTnM_" IMG_SIZE = (224,224) batch_size = 64 epoch = 200 # + colab={"base_uri": "https://localhost:8080/"} id="8xHVR17BWsvh" outputId="d14a719e-056b-4e19-c33b-871d9890dd1a" # !pip install keras_retinanet # + id="OeVQjYcZ8MEp" import tensorflow from tensorflow import keras import numpy as np import cv2 import os import random import shutil import pandas as pd import csv import zipfile from tensorflow.keras import optimizers from tensorflow.keras.models import Sequential,Model from tensorflow.keras.layers import Dropout, Flatten, Dense,Input from tensorflow.keras.applications.resnet_v2 import ResNet50V2 from tensorflow.keras.applications.xception import Xception from tensorflow.keras.applications.resnet50 import ResNet50 from tensorflow.keras.applications.vgg16 import VGG16 from tensorflow.keras.callbacks import ModelCheckpoint from tensorflow.keras.applications.imagenet_utils import preprocess_input from tensorflow.keras import backend as K from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.initializers import RandomNormal import tensorflow.keras.backend as k from sklearn.utils import shuffle import io from PIL import Image as pil_image from keras_retinanet import layers import tensorflow.keras.backend as k import keras_retinanet from tensorflow.keras.metrics import AUC, Precision, Recall, FalseNegatives, FalsePositives,TrueNegatives,TruePositives, CategoricalCrossentropy # + id="PHlKYNB59Ujj" fold_num=1 # + colab={"base_uri": "https://localhost:8080/"} id="fvNWb328qcqP" outputId="3bbbb2fa-9f20-4689-a7ec-ea5b9d1ae386" k.clear_session() #Clear tensorflow.keras backend try: os.mkdir('models') #create folder for saving the trained networks except: pass 
full_name='ResNet50V2-FPN-fold{}'.format(fold_num) classes_number=2 #Number of classes (normal and COVID-19) input_tensor=Input(shape=(512,512,3)) weight_model = ResNet50V2(weights='imagenet', include_top=False) #Load ResNet50V2 ImageNet pre-trained weights weight_model.save_weights('weights.h5') #Save the weights base_model = ResNet50V2(weights=None, include_top=False, input_tensor=input_tensor) #Load the ResNet50V2 model without weights base_model.load_weights('weights.h5',skip_mismatch=True, by_name=True) #Load the ImageNet weights on the ResNet50V2 model except the first layer(because the first layer has one channel in our case) #Create Feature Pyramid Network (FPN) # We used some help for writing the Pyramid from the written code on https://github.com/fizyr/keras-retinanet/blob/master/keras_retinanet/models/retinanet.py feature_size=512 #Set the feature channels of the FPN layer_names = ["conv4_block1_preact_relu", "conv5_block1_preact_relu", "post_relu"] #Layers of ResNet50V2 with different scale features layer_outputs = [base_model.get_layer(name).output for name in layer_names] C3, C4, C5=layer_outputs #Features of different scales, extracted from ResNet50V2 P5 = tensorflow.keras.layers.Conv2D(feature_size, kernel_size=1, strides=1, padding='same', name='C5_reduced')(C5) P5_upsampled = layers.UpsampleLike(name='P5_upsampled')([P5, C4]) P5 = tensorflow.keras.layers.Conv2D(feature_size, kernel_size=3, strides=1, padding='same', name='P5')(P5) # Concatenate P5 elementwise to C4 P4 = tensorflow.keras.layers.Conv2D(feature_size, kernel_size=1, strides=1, padding='same', name='C4_reduced')(C4) P4 = tensorflow.keras.layers.Concatenate(axis=3)([P5_upsampled, P4]) P4_upsampled = layers.UpsampleLike(name='P4_upsampled')([P4, C3]) P4 = tensorflow.keras.layers.Conv2D(feature_size, kernel_size=3, strides=1, name='P4')(P4) # Concatenate P4 elementwise to C3 P3 = tensorflow.keras.layers.Conv2D(feature_size, kernel_size=1, strides=1, padding='same', name='C3_reduced')(C3) P3 = tensorflow.keras.layers.Concatenate(axis=3)([P4_upsampled, P3]) P3 = tensorflow.keras.layers.Conv2D(feature_size, kernel_size=3, strides=1, name='P3')(P3) # "P6 is obtained via a 3x3 stride-2 conv on C5" P6 = tensorflow.keras.layers.Conv2D(feature_size, kernel_size=3, strides=2, padding='same', name='P6')(C5) # "P7 is computed by applying ReLU followed by a 3x3 stride-2 conv on P6" P7 = tensorflow.keras.layers.Activation('relu', name='C6_relu')(P6) P7 = tensorflow.keras.layers.Conv2D(feature_size, kernel_size=3, strides=2, padding='same', name='P7')(P7) # Run classification for each of the generated features from the pyramid feature1 = Flatten()(P3) dp1 = Dropout(0.5)(feature1) preds1 = Dense(2, activation='relu',kernel_initializer=RandomNormal(mean=0.0, stddev=0.001))(dp1) feature2 = Flatten()(P4) dp2 = Dropout(0.5)(feature2) preds2 = Dense(2, activation='relu',kernel_initializer=RandomNormal(mean=0.0, stddev=0.001))(dp2) ################################################################# feature3 = Flatten()(P5) dp3= Dropout(0.5)(feature3) preds3 = Dense(2, activation='relu',kernel_initializer=RandomNormal(mean=0.0, stddev=0.001))(dp3) ################################################################# feature4 = Flatten()(P6) dp4 = Dropout(0.5)(feature4) preds4 = Dense(2, activation='relu',kernel_initializer=RandomNormal(mean=0.0, stddev=0.001))(dp4) ################################################################# feature5 = Flatten()(P7) dp5 = Dropout(0.5)(feature5) preds5 = Dense(2, 
activation='relu',kernel_initializer=RandomNormal(mean=0.0, stddev=0.001))(dp5) ################################################################# concat=tensorflow.keras.layers.Concatenate(axis=1)([preds1,preds2,preds3,preds4,preds5]) #Concatenate the predictions(Classification results) of each of the pyramid features out=tensorflow.keras.layers.Dense(2,activation='softmax',kernel_initializer=RandomNormal(mean=0.0, stddev=0.001))(concat) #Final Classification model = Model(inputs=base_model.input, outputs=out) #Create the Training Model ####################################################### for layer in model.layers: layer.trainable = True model.compile(optimizer=optimizers.Nadam(lr=0.0001), loss='categorical_crossentropy',metrics=['accuracy',AUC(),Precision(),Recall(),FalsePositives(),TrueNegatives(),TruePositives(),FalseNegatives()]) filepath="models/%s-{epoch:02d}-{val_accuracy:.4f}.hdf5"%full_name # Path to save the trained models checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', save_best_only=True, mode='max') #creating checkpoint to save the best validation accuracy callbacks_list = [checkpoint] #model.fit(train_batches, epochs=20,validation_data=val_batches,shuffle=True,callbacks=callbacks_list) #start training hist = model.fit(train_batches, validation_data=test_batches , epochs=30,callbacks=callbacks_list) # + id="RG-fcbKdJAO7" model.summary() # + id="GgQD2ykNRV0J" layer_names = ["conv4_block1_preact_relu", "conv5_block1_preact_relu", "post_relu"] #Layers of ResNet50V2 with different scale features layer_outputs = [base_model.get_layer(name).output for name in layer_names] C3, C4, C5=layer_outputs #Features of different scales, extracted from ResNet50V2 P5 = tensorflow.keras.layers.Conv2D(feature_size, kernel_size=1, strides=1, padding='same', name='C5_reduced')(C5) P5_upsampled = UpsampleLike([P5, C4]) # + id="YxSR0DZ9RYG2" P5 # + id="0hbBa5bSqyca" x = Flatten()(model.layers[-100].output) x=Dense(units=1024)(x) output = Dense(units=2, activation="sigmoid")(x) model = Model(inputs = model.input, outputs = output) # + id="hc2L7YLDT9Cu" initial_lrate = 0.0000001 def decay(epoch, steps=10): initial_lrate = 0.000008 drop = 0.96 epochs_drop = 8 lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop)) return lrate opt = RMSprop(lr=initial_lrate) lr_sc = LearningRateScheduler(decay, verbose=1) model.compile(optimizer=opt, loss=tf.tensorflow.keras.losses.binary_crossentropy, metrics=['accuracy',AUC(),Precision(),Recall(),FalsePositives(),TrueNegatives(),TruePositives(),FalseNegatives()]) # + id="zm0wpyvFUBGD" checkpoint = ModelCheckpoint("with_chmod_100_spatial.h5", monitor='val_accuracy', verbose=1, save_best_only=True, save_weights_only=True, mode='auto', period=1) early = EarlyStopping(monitor='val_loss', min_delta=0, verbose=1, mode='auto',patience=25) callback = [checkpoint,early] hist = model.fit(train_batches, validation_data=test_batches , epochs=epoch,callbacks=callback) # + id="9Lrs2l5aUEmh" plt.plot(hist.history['accuracy']) plt.plot(hist.history['val_accuracy']) plt.title("model accuracy") plt.ylabel("Accuracy") plt.xlabel("Epoch") plt.show() # + id="yk3veaZFbD8y" plt.plot(hist.history['loss']) plt.plot(hist.history['val_loss']) plt.title("model loss") plt.ylabel("loss") plt.xlabel("Epoch") plt.show() # + id="jMLGHnHmOu3N" def plot_confusion_matrix(cm, classes, normalize=True, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. 
""" plt.figure(figsize=(10,10)) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] cm = np.around(cm, decimals=2) cm[np.isnan(cm)] = 0.0 print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # + id="yZfLDTyBPcof" target_names = [] for key in train_batches.class_indices: target_names.append(key) # + colab={"base_uri": "https://localhost:8080/", "height": 816} id="e5zA1TNhOvFF" outputId="3e04b266-899b-4e38-84bf-3bde13d970bc" Y_pred = model.predict_generator(test_batches) y_pred = np.argmax(Y_pred, axis=1) print('Confusion Matrix') cm = confusion_matrix(test_batches.classes, y_pred) plot_confusion_matrix(cm, target_names, title='Confusion Matrix') # + id="qrfl_yEsOvOd" # + id="cnGVlZoKe9GP" model.load_weights('/content/models/ResNet50V2-FPN-fold1-20-0.8382.hdf5') # + id="VoUNPIp8fDhX" model_metrics = model.evaluate(test_batches,verbose=0) # + id="tqGcn9vMfESf" f1_score = 2*( (model_metrics[3]*model_metrics[4]) / (model_metrics[3]+model_metrics[4]) ) # + colab={"base_uri": "https://localhost:8080/"} id="xMVKmFDIfJao" outputId="9bebf0c5-9599-4d9f-a19f-98bf4fe8dcc8" print(f"Accuracy on test set: {round(model_metrics[1]*100,2)}%") print(f"ROC(Receiver Operation Characteristic) AUC(Area Under Curve): {model_metrics[2]}") print(f"Precision: {round(model_metrics[3]*100,2)}%") print(f"Recall: {round(model_metrics[4]*100,2)}%") print(f"F1-score: {f1_score}") print(f"Specificity: {(model_metrics[6])/(model_metrics[6]+model_metrics[5])}") # + id="1pEJqfvyfUAB" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.7 64-bit # name: python3 # --- # + import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns import itertools import collections from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import confusion_matrix,f1_score from sklearn.naive_bayes import MultinomialNB from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score, confusion_matrix, classification_report import re import nltk from wordcloud import WordCloud # - df=pd.read_csv('english_dataset.tsv',sep='\t') df.head(15) # ### About the Target Variables: # **task_1:** Classification of Hate Speech (HOF) and non-hate/offensive content. # # **task_2:** If the post is HOF, task_2 is used to classify it as either hate speech (HATE), offensive content (OFFN) or profanity (PRFN). # # **task_3:** It decides the target of the post, i.e., targeted insult (TIN) or untargeted (UNT). # # Preprocessing df=df.drop("text_id", axis=1) # Removed the column "text_id" as it is of no significance. df.info() # #### Checking for Missing Values df.isna().sum() # There are no missing values. 
from nltk.corpus import stopwords nltk.download('stopwords') eng_stops = set(stopwords.words("english")) nltk.download('wordnet') from nltk.stem import WordNetLemmatizer lemmatizer = WordNetLemmatizer() # ### Cleaning Tweets: Removing special characters, converting all letters to lowercase, removing stopwords and lemmatizing. def process_tweet(tweets): # remove all the special characters new_tweets = re.sub("[^a-zA-Z]", " ",tweets) # convert all letters to lower case words = tweets.lower().split() # remove stopwords words = [w for w in words if not w in eng_stops] # lemmatizer words = [lemmatizer.lemmatize(word) for word in words] # join all words back to text return (" ".join(words)) df['clean_tweets']=df['text'].apply(lambda x: process_tweet(x)) df.head(10) # # Exploratory Data Analysis df['length'] = df['clean_tweets'].apply(len) fig1 = sns.barplot('task_1', 'length', data= df) plt.title('Average character count vs whether the tweet is Non-hate/Offensive (Not) or Hate and Offensive (HOF)') plt.show() # Hence, there is no relation between the character count and type of tweet. # ### Checking the distribution of samples (i.e, checking for class imbalance) df.task_1.value_counts().plot(kind="bar", color=[ "lightblue", "purple"]); # Clearly, we have an **unbalanced target column.** # # This needs to be taken into consideration while training the model. # + df_hof = df[df["task_1"]=="HOF"] df_hof.task_2.value_counts().plot(kind="bar", color=[ "red", "purple", "yellow"]); # - # Majority of the HOF tweets are **hate** tweets followed by tweets containing profanity and offensive content. df_hof.task_3.value_counts().plot(kind="bar", color=["lightblue", "red", "lightgreen"]); # Most of the hate and offensive tweets are **targeted insults** and only about 12.5% are untargeted. hof_tweets= df_hof.clean_tweets hof_words = ' '.join(hof_tweets) # ### Plotting the wordcloud for the words that appear the most in hate and offensive tweets wordcloud = WordCloud(width=800, height=500, random_state=21, max_font_size=110).generate(hof_words) plt.figure(figsize=(10, 7)) plt.imshow(wordcloud, interpolation="bilinear") plt.title('Frequently used words in Hate/ Offensive tweets', size=20) plt.axis('off') plt.show() df_tin=df[df["task_3"]=="TIN"] df_tin.task_2.value_counts().plot(kind="bar",color=["red","black","blue"]) # ### This Plot shows the Hate Profanity against Targeted Units. df_unt=df[df["task_3"]=="UNT"] df_unt.task_2.value_counts().plot(kind="bar",color=["red","black","blue"]) # ### This Plot shows the Hate Profanity and Offensive tweets against Untargeted units. # # It is evident that Profanity to Hate ratio against untargeted units is greater than that of targeted units. 
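# The classifier imports at the top of this notebook (LogisticRegression, MultinomialNB, RandomForestClassifier) point at the modelling step that follows this exploration. As a forward-looking sketch (not the notebook's final pipeline), a TF-IDF plus MultinomialNB baseline on `task_1` could look like the cell below; the vectorizer settings, split ratio and random seed are illustrative assumptions.

# +
# Hedged baseline sketch: TF-IDF over the cleaned tweets, Multinomial Naive Bayes on task_1.
# All hyperparameters here are illustrative, not tuned values from this notebook.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

X_train, X_test, y_train, y_test = train_test_split(
    df['clean_tweets'], df['task_1'], test_size=0.2, random_state=42,
    stratify=df['task_1'])  # stratified split, since task_1 is imbalanced

vectorizer = TfidfVectorizer(max_features=5000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

nb_baseline = MultinomialNB()
nb_baseline.fit(X_train_vec, y_train)
print(classification_report(y_test, nb_baseline.predict(X_test_vec)))
# -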
words_in_tweet = [tweet.lower().split() for tweet in df["clean_tweets"]] words_in_tweet[:2] # + # List of all words across tweets all_words_no_urls = list(itertools.chain(*words_in_tweet)) # Create counter counts_no_urls = collections.Counter(all_words_no_urls) counts_no_urls.most_common(15) # + clean_tweets_no_urls = pd.DataFrame(counts_no_urls.most_common(15), columns=['words', 'count']) clean_tweets_no_urls.head() # + fig, ax = plt.subplots(figsize=(8, 8)) # Plot horizontal bar graph clean_tweets_no_urls.sort_values(by='count').plot.barh(x='words', y='count', ax=ax, color="purple") ax.set_title("Common Words Found in Tweets (Including All Words)") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import numpy as np import pandas as pd import matplotlib.gridspec as gridspec import matplotlib.pyplot as plot from tensorflow.examples.tutorials.mnist import input_data from optim.optimisticmirroradam import AdamirrorOptimizer batch_size = 32 X_dim = 784 z_dim = 10 h_dim = 128 lambda_grad = 10 lr = 0.0001 n_disc = 5 mnist = input_data.read_data_sets('../../MNIST_data', one_hot=True) def plot_images(samples): fig = plot.figure(figsize=(4, 4)) gs = gridspec.GridSpec(4, 4) gs.update(wspace=0.05, hspace=0.05) for i, sample in enumerate(samples): ax = plot.subplot(gs[i]) plot.axis('off') ax.set_xticklabels([]) ax.set_yticklabels([]) ax.set_aspect('equal') plot.imshow(sample.reshape(28, 28), cmap='Greys_r') return fig # + X = tf.placeholder(tf.float32, shape=[None, X_dim]) # D_W1 = tf.Variable(xavier_init([X_dim, h_dim])) # D_b1 = tf.Variable(tf.zeros(shape=[h_dim])) # D_W2 = tf.Variable(xavier_init([h_dim, 1])) # D_b2 = tf.Variable(tf.zeros(shape=[1])) # theta_D = [D_W1, D_W2, D_b1, D_b2] z = tf.placeholder(tf.float32, shape=[None, z_dim]) # G_W1 = tf.Variable(xavier_init([z_dim, h_dim])) # G_b1 = tf.Variable(tf.zeros(shape=[h_dim])) # G_W2 = tf.Variable(xavier_init([h_dim, X_dim])) # G_b2 = tf.Variable(tf.zeros(shape=[X_dim])) # theta_G = [G_W1, G_W2, G_b1, G_b2] # - def get_sample_z(size): return np.random.uniform(-1., 1., size=size) def generator(z): with tf.variable_scope('generator') as scope: G_h1 = tf.layers.dense(z, units=h_dim, kernel_initializer=tf.contrib.layers.xavier_initializer(), activation=tf.nn.relu) G_out = tf.nn.sigmoid(tf.layers.dense(G_h1, units=X_dim, kernel_initializer=tf.contrib.layers.xavier_initializer())) return G_out def discriminator(x, reuse=False): with tf.variable_scope('discriminator', reuse=reuse) as scope: D_h1 = tf.layers.dense(x, units=h_dim, activation=tf.nn.relu, kernel_initializer=tf.contrib.layers.xavier_initializer()) D_out = tf.layers.dense(D_h1, units=1, kernel_initializer=tf.contrib.layers.xavier_initializer()) return D_out generator_sample = generator(z) discriminator_real = discriminator(X) discriminator_fake = discriminator(generator_sample, True) epsilon = tf.random_uniform([batch_size, 1], minval=0.0, maxval=1.0) interpolated = epsilon * X + (1 - epsilon) * generator_sample grads = tf.gradients(discriminator(interpolated, True), [interpolated])[0] grad_norm = tf.sqrt(tf.reduce_sum((grads) ** 2, axis=1)) gp = lambda_grad * tf.reduce_mean((grad_norm - 1) ** 2) d_loss = tf.reduce_mean(discriminator_fake) - tf.reduce_mean(discriminator_real) + gp g_loss = -tf.reduce_mean(discriminator_fake) # + g_vars = [var for var in tf.trainable_variables() if 
var.name.startswith('generator')] d_vars = [var for var in tf.trainable_variables() if var.name.startswith('discriminator')] d_step = AdamirrorOptimizer(learning_rate=lr).minimize(d_loss, var_list=d_vars) g_step = AdamirrorOptimizer(learning_rate=lr).minimize(g_loss, var_list=g_vars) # - print(g_vars) print(d_vars) sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) for i in range(100000): for _ in range(n_disc): x_batch, _ = mnist.train.next_batch(batch_size) _, d_loss_val = sess.run( [d_step, d_loss], feed_dict={X: x_batch, z: get_sample_z(size=(batch_size, z_dim))} ) _, g_loss_val = sess.run( [g_step, g_loss], feed_dict={z: get_sample_z(size=(batch_size, z_dim))} ) if i % 100 == 0: print('Iteration: {} - Discriminator Loss: {:.4}, Generator Loss: {:.4}' .format(i, -d_loss_val, g_loss_val)) if i % 1000 == 0: samples = sess.run(generator_sample, feed_dict={z: get_sample_z(size=(16, z_dim))}) fig = plot_images(samples) plot.show() i += 1 plot.close(fig) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt import seaborn as sns import pickle import pandas as pd # + with open('Q_stag.pickle', 'rb') as fp: Q = pickle.load(fp) with open('inven_stag.pickle', 'rb') as fp: inven = pickle.load(fp) # - sns.set_theme(style="whitegrid") # part of Q and inven # pQ = Q[:, ::10000, :, :] # pI = inven[:, ::10000, :] pQ = Q pI = inven l = pI.shape[1] pI.shape df = [] for ins in range(10): for act in range(4): val = pQ[ins, :, :, act].mean(1) df.append(pd.DataFrame({'Q':val, 'instance': ins*np.ones(l), 'Actions':[str(act+1),]*l, 'step': np.arange(l)})) Qdf = pd.concat(df) # figsize=(8,6) plt.figure() palette = sns.color_palette("husl", 4) g = sns.lineplot(data=Qdf, x='step', y='Q', hue='Actions', palette=palette) plt.xlabel(r'Steps ($10^4$)', fontsize=16) plt.ylabel('Q-values', fontsize=16) g.tick_params(axis = 'both', which = 'major', labelsize = 16) plt.legend(bbox_to_anchor=(1.05, 1), title='Actions', fontsize=14, title_fontsize=14) plt.savefig('QConverge.pdf', format='pdf', dpi=1000, bbox_inches='tight', pad_inches=0.1) df_temp = [] for ins in range(10): for agent in range(2): val = pI[ins, :, agent] df_temp.append(pd.DataFrame({'Inventory':val, 'instance': ins*np.ones(l), 'Agent':[str(agent+1),]*l, 'step': np.arange(l)})) Idf = pd.concat(df_temp) # figsize=(8,6) plt.figure() palette = sns.color_palette("husl", 4) g = sns.lineplot(data=Idf, x='step', y='Inventory', hue='Agent') plt.xlabel(r'Steps ($10^4$)', fontsize=16) plt.ylabel('Inventory', fontsize=16) g.tick_params(axis = 'both', which = 'major', labelsize = 16) plt.legend(bbox_to_anchor=(1.25, 1), title='Agents', fontsize=14, title_fontsize=14) plt.savefig('Inventory.pdf', format='pdf', dpi=1000, bbox_inches='tight', pad_inches=0.1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] toc="true" # # Table of Contents #

# -

# # Probability
# - Probability is the opposite of statistics: in statistics we are given data and try to infer the possible causes that produced the data, whereas in probability we are given a description of the causes and we would like to predict the data.
# - The reason we study probability before statistics is that it gives us a language to describe the relationship between data and the underlying causes.

# # Coin
# ## Fair Coin
# For a fair coin, the chances are 50%:
# - P(HEADS) = 0.5
# - P(TAILS) = 0.5
# ## Loaded Coin
# A loaded coin is one that comes up with one of the two sides much more frequently than the other. In the extreme case:
# - P(HEADS) = 1 = 100%
# - P(TAILS) = 0 = 0%
#
# In general:
# - P(HEADS) + P(TAILS) = 1
# - P(A) = 1 - P(not A)

# # Two Flips
# What is the probability that both of 2 flips come up heads?
# - The answer is 0.25.
# - We can derive it using something called a truth table. In a truth table, you write out every possible outcome of the experiment you conducted.
# - Another way to look at this: the probability of heads followed by heads is the product of the chance that the first outcome is heads and the chance that the second outcome is heads, 0.5 x 0.5 = 0.25.

# # One Head
# Suppose we flip our coin twice. What we care about is that exactly one of the two flips is heads, and therefore the other one is tails. For a fair coin, what do you think the probability is that, flipping it twice, we see heads exactly once?
# - P(exactly one H) = 0.5
#
# Given that, we now have to associate a truth table with the question we are asking. In which outcomes is heads represented exactly once?
# - It is in the second case and in the third case. The extreme cases of heads-heads and tails-tails don't satisfy this condition. So the trick is to take the 0.25 probability of each of the two matching cases and add them up, which gives us 0.25 + 0.25 = 0.5. That is the correct answer for this question.

# # One of Three
# ## Fair Coin
# Suppose we flip a fair coin three times. What is the probability that exactly 1 of those 3 flips comes up heads?
#
# **Answer**
# - We will derive it through the truth table. There are now eight possible cases: flip one can come up heads or tails, and the same for flip two and flip three, so every possible combination is represented.
# - For example, one of them is heads, tails, tails. Each of those outcomes has the same probability of one eighth, because there are eight cases, and 8 x 1/8 sums up to 1.
# - In how many cases do we have exactly one H? It turns out to be true for only three cases: the H could be in the first position, the second position, or the third position. So three out of eight cases have a single H. Each of those carries a probability of 1/8, so summing those cases up gives a total probability of 3/8, which is the same as 0.375.
#
# ## Loaded Coin
# Here is a challenging question. I'll give you a loaded coin where the probability of H is 0.6. It may take a while on a piece of paper to really calculate this probability, but you can do exactly the same thing: go through the truth table and apply the multiplication shown before to calculate the probability of each outcome. The outcomes are not equally likely anymore; H, H, H is clearly more likely than T, T, T. When you have done this, add the corresponding figures up to get the answer.
# - P(H) = 0.6
#
# **Truth table in Python**
# - http://stackoverflow.com/questions/29548744/creating-a-truth-table-for-any-expression-in-python

# +
# Enumerate every head/tail sequence of three flips together with its probability
from itertools import product
from numpy import prod

heads = 0.6
tails = 0.4

truth_table = list(product((heads, tails), repeat=3))
for x in truth_table:
    print x, prod(x)
# -

# Keep only the sequences with exactly one head and sum their probabilities
result = []
for x in truth_table:
    if x.count(0.6) == 1:
        print x, prod(x)
        result.append(prod(x))
print sum(result)

# # Even Roll
# Now I am throwing dice. The difference between dice and coins is that there are now 6 possible outcomes. Say it's a fair die, which means each of the six sides comes up with a probability of 1/6. What do you think the probability is that the die comes up with an even number? I'm going to write this as the outcome of the die being even. You can once again use a truth table to calculate that number.
#
# **Answer**
# - In truth-table terms, there are 6 outcomes, 1 to 6. Each has the same probability of 1/6. Half of those numbers are even--2, 4, and 6--so if we add those up, we get 3 x 1/6, the same as a half. The answer is 0.5.

# # Doubles
# Suppose we throw a fair die twice. What do you think the probability of a double is? A double means both outcomes are identical--the same number, regardless of what that number is. This is actually an important number because many games involving two dice have different rules when both dice come up with the same number. So it might be important to know what the probability is.
#
# **Answer**
# - The truth table will have 36 different entries, six for the first throw times six for the second throw, and there isn't enough space on this tablet to draw all 36 entries. So let me just draw the ones that really matter: one-one, two-two, and so on all the way to six-six.
# - Each one of those has a probability of 1/6 for the first outcome times 1/6 for the second, which gives 1/36, and the same logic applies everywhere.
# - So for each of these six outcomes, there is a 1/36 chance that it materializes. Adding them all up gives 1/6. Why? Because 6 times 1/36 simplifies back to 1/6, which is the same as 0.16667. So 1/6 of the time, you will get a double.

# # Summary
# - You learned about the probability of an event, such as the outcome of a coin flip.
# - You learned that the probability of the opposite event is 1 minus the probability of the event.
# - And you learned about the probability of a composite event, which was in the form P * P * P. Technically speaking, this is called independence, which means nothing else but that the outcome of the second coin flip didn't depend on the outcome of the first coin flip.
#
# In our next unit, we will talk about dependence, where there are bizarre dependencies between different outcomes. But for the time being, you really managed to get a very basic understanding of probability.
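# As a final check, in the same spirit as the truth-table code above, the doubles probability of 1/6 can be confirmed by enumerating all 36 outcomes of two fair dice. This small sketch is written for this notebook's Python 2 kernel.

# +
# Enumerate the 36 equally likely outcomes of two fair dice and count the doubles.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # all (first, second) pairs
doubles = [o for o in outcomes if o[0] == o[1]]   # (1,1), (2,2), ..., (6,6)
p_double = float(len(doubles)) / len(outcomes)
print "P(double) = %d/%d = %.5f" % (len(doubles), len(outcomes), p_double)
# -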
# # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # --- # # Documentation: [Managing content | Academic](https://sourcethemes.com/academic/docs/managing-content/) # # title: "Modelling, Simulation and Control of Hydro-Power System - Part 2" # subtitle: "Model of the lakes" # summary: "In this series I will show the entire process of developing a model, performing simulations and the use of different control techniques for decision support in flood management systems." # authors: [] # tags: ["Flood Forecasting", "Model Predictive Control"] # categories: ["Flood Management"] # date: 2021-02-10T10:01:00 # lastmod: 2021-02-10T10:01:00 # featured: false # draft: false # # # Featured image # # # To use, add an image named `featured.jpg/png` to your page's folder. # # # Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight. # # image: # caption: "Image by from Pixabay" # focal_point: "Smart" # preview_only: false # # # Projects (optional). # # # Associate this post with one or more of your projects. # # # Simply enter your project's folder or file name without extension. # # # E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`. # # # Otherwise, set `projects = []`. # # projects: ["Modelling, Simulation and Control of Hydro-power system"] # --- # ## Overview # # In the previous post of this series (see [here](/post/hydro_power/01system_description)), I showed an overview of the system we intend to model and investigate. In this post I will show how to develop a simple yet effective mathematical representation of the series of lakes in the hydro-power system. # # The main use of these lakes is to work as a battery, i.e energy storage. At times when energy is at low demand, water can be pumped from the river to the lakes and stored there as potential energy. When there is a greater demand and only the level of water from the river can not provide enough energy, water can flow from the lakes to the river through turbines, using its kinetic energy to generate extra energy. Notice that this process is, in a real-world case, not 100% efficient. This means that more energy is needed to pump the water from the river to the lakes, then can be extracted by turbining from the lakes to the river. Yet, it can be a useful technique to keep the balance in energy generation, and also to redirect volume of water when its excess can cause floods downstream. # # Without further delay, let's start the modeling process. # ## Mathematical representation of the lakes # The water stored in the lake changes according the inflow and outflow rates. As the volume of water decreases, the level also decreases. In essence the mass conservation is the basic equation to describe the lakes: # # $$ # \frac{dm}{dt} = w_{in}(t) - w_{out}(t) # $$ # # Where $m$ is the mass of water (kg), $t$ is the time (s), $w_{in}$ is the mass inflow rate (kg/s) and $w_{in}$ is the mass outflow rate (kg/s). The above equation can be rewritten as: # # $$ # \frac{d(\rho hA)}{dt} = \rho q_{in}(t) - \rho q_{out}(t) # $$ # # Where $\rho$ is the density of water, $h$ is the water level and $A$ is the cross-section area of the lake. Since any liquid can be reasonably considered incompressible (no change of volume with pressure), the density $\rho$ can be considered constant, and thus cancelled out. 
The cross-section area $A$ may be constant (like a cube) or it may be a function of the water level. For better generalization. let's say $A = A(h)$, thus the final equation is: # # $$ # \frac{d(h A(h))}{dt} = q_{in}(t) - q_{out}(t) # $$ # # The above equation is an ordinary differential equation, relating the rate of change of volume ($h A(h)$) with the inlet and outlet flow rates. It can be solved using numerical integration, if all the other variables ($q_i$) are known # ## Power generated/consumed by pumps and turbines # # The power $p$ generated/consumed by pumps and turbines is directly proportional to the flow rate $q$ and the difference in water height $H$ upstream and downstream. The following equation describes this relation in a simple form: # # $$ # p = K \cdot q \cdot H # $$ # # Where $K$ is constant of proportionality, which can be referred as the pump/turbine coefficient (positive for turbines and negative for pumps). # ## Pipes and valves # # The connection between lake 1 and 2 is made through a valve. The discharge through this element can in general be modelled by a non-linear relationship with the difference in height upstream and downstream $H$ as: # # $$ # q = -\text{sign}(H)\cdot A \cdot \sqrt{2g|H|} # $$ # # Where $A$ is the duct section. # # ### Modular implementation of the lakes, pumps, turbines and ducts in Python # # A modular approach is a useful way of implementing system behaviors, separating the organization of information and the flow of it through different components in the main simulation loop. # # **OOP** is very handy for that, since it can be used to encapsulate the parameters of each instance of a class, where the class serves as a blueprint and the instances are the mathematical representations of systems. # + # import necessary libraries import matplotlib.pyplot as plt import numpy as np from scipy.integrate import odeint GRAVITY = 9.81 # m/s2 # - # ### Generic model class # # The generic model is a superclass which contains the basic behavior of a model such as lake, valve, pump, etc. The other models will inherit from this class. class GenericModel: def __init__(self, name='', recorded_variables=[]): self.name = name self._log = {var: [] for var in recorded_variables} self._log['t'] = [] def plot(self): t = self._log['t'] for var in self._log: if var != 't': plt.plot(t,self._log[var] ,label=self.name + var) plt.legend() def generator(self,dt): raise NotImplementedError def log_data(self, t, **kwargs): self._log['t'].append(t) for var in kwargs: self._log[var].append(kwargs[var]) # ## Valve/ Pipe class # # The pipe basic resemples the generic connector, since it does not generate/consume energy and it can not be opened/closed. The valve enhances the pipe by incorporating a manipulated variable $u$ which can be used to open/close the duct, restricting or not the flow rate. 
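# Before implementing these elements as classes below, a quick worked evaluation of the discharge law may help fix the orders of magnitude. The duct area and head difference in this sketch are arbitrary illustrative values.

# +
# Worked example of the discharge law q = -sign(H) * A * sqrt(2 g |H|);
# the sign of q follows the convention of the equation above.
import numpy as np

g = 9.81       # m/s2, gravitational acceleration
A_duct = 0.5   # m2, duct cross-section (assumed value)
H = 2.0        # m, upstream minus downstream level (assumed value)

q = -np.sign(H) * A_duct * np.sqrt(2 * g * abs(H))
print('discharge q = {:.3f} m3/s'.format(q))
# -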
class Pipe(GenericModel): def __init__(self, name='',A=1): recorded_variables = ['qout'] super().__init__(name = name, recorded_variables = recorded_variables) def q(self, t, h_up, h_down): H = h_up - h_down return A*np.sqrt(GRAVITY * H) def generator(self,dt): t = 0 h_up = 0 h_down = 0 while True: t, h_up, h_down = yield self.q(t, h_up, h_down) self.log_data(t = t, qout = self.q(t)) t += dt class Valve(GenericModel): def __init__(self, name='',A=1, u=lambda t: 1): recorded_variables = ['qout'] super().__init__(name = name, recorded_variables = recorded_variables) self.u = u def q(self, t, h_up, h_down): H = h_up - h_down return self.u(t) * A*np.sqrt(GRAVITY * H) def generator(self,dt): t = 0 h_up = 0 h_down = 0 while True: t, h_up, h_down = yield self.q(t, h_up, h_down) self.log_data(t = t, qout = self.q(t)) t += dt # ## Turbine/ Pump class # # The turbine or pump is an enhacement of the valve class, because they not only can be manipulated (from 0% to 100% full power), but also consumes/ generate electricity. The difference in them is on the value of $K$, which is positive for turbines and negative for pumps. class Pump_Turbine(GenericModel): def __init__(self, name='',A=1, K=1, u=lambda t: 1): recorded_variables = ['qout', 'power'] super().__init__(name = name, recorded_variables = recorded_variables) self.K = 1 self.u = u def q(self, t): return self.u(t) def power(self, t, h_up, h_down): H = h_up - h_down return self.K * self.q(t) * H def generator(self,dt): t = 0 while True: t, h_up, h_down = yield self.q(t) self.log_data(t = t, qout = self.q(t), power = self.power(t, h_up, h_down)) t += dt # ### Lake Class # # The lake class incorporates the behavior of any lake. Since the cross-section area $A$ may be constant or not ($A = A(h)$), this concept is incorporated in the class, so the Area $A$ is passed as a function of water level. class Lake(GenericModel): def __init__(self, name='', A=lambda h: 1, bottom_height = 0): recorded_variables = ['qout', 'h'] super().__init__(name = name, recorded_variables = recorded_variables) self.A = A self.bottom_height = bottom_height def deriv(self,h,t): dh = (self.qin - self.qout)/self.A(h) return dh def generator(self,dt,IC = 0): self.h = IC while True: t, self.qin, self.qout = yield float(self.h) self.h = odeint(self.deriv,self.h,[t,t+dt])[-1] self.log_data(t = t, qout = self.qout, h = float(self.h)) t += dt def get_water_level(self): return self.h def get_absolute_water_level(self): return self.h + self.bottom_height # A careful reader will notice that we made a small trick here, to make things easier, but is not the most accurate from a mathematical perspective. The mass conservation equation was written as: # # $$ # \frac{dh}{dt} = \frac{q_{in}(t) - q_{out}(t)}{A(h)} # $$ # # Pragmatically, it is not correct to take out the term $A(h)$ from the differential term $\frac{dh}{dt}$, since the area is a function of the level. Still, it should work from a simulation perspective since we integrate the problem using small steps of time, does correcting the value of $A$ for small variations of $h$. We will see how this works with an analytical solution, so any problem will clearly arise. # ## A simple test, with analytical solution # To test if our implementation is good, it is always useful to make a comparison against some analytical, exact solution. Coming back to the mass conservation: # # $$ # \frac{d(h A(h))}{dt} = q_{in}(t) - q_{out}(t) # $$ # # Let's consider a very simple lake in the form of a cube. 
Thus, the cross section area is constant, say equal to 1. # # $$ # A = 1 # $$ # # Since $A \neq A(t)$, the equation simplifies to: # # $$ # \frac{dh}{dt} = q_{in}(t) - q_{out}(t) # $$ # # Say the outlet is regulated by a pump, with a constant flow rate of $q_{out}$, and the inflow is a sinusoidal flow with the shape, provided by a pump. # # $$ # q_{in}(t) = A + B\sin\frac{\pi t}{C} # $$ # # $$ # \frac{dh}{dt} = A + B\sin\frac{\pi t}{C} - q_{out} # $$ # # Call $A^{*} = A - q_{out}$ # # $$ # \frac{dh}{dt} = A^{*} + B\sin\frac{\pi t}{C} # $$ # # Integrate it. # # $$ # \int dh = \int \left(A^{*} + B\sin \left( \frac{\pi t}{C} \right)\right) dt # $$ # # $$ # h = A^{*}t - \frac{B C}{\pi} \cos \left( \frac{\pi t}{C} \right) + \text{Const} # $$ # # Which gives us the general solution to this problem. Now let's fix some numerical values for simulation. # + $q_{out} = 5$ # + $A = 5$ # + $B = 2$ # + $C=1$ # # $$ # h = - \frac{2}{\pi} \cos \left( \pi t \right) + \text{Const} # $$ # # Apply initial condition $t = 0$, $h_0 = 0$ # # $$ # \text{Const} = \frac{2}{\pi} # $$ # # The final analytical solution is, # # $$ # h = - \frac{2}{\pi} \cos \left( \pi t \right) + \frac{2}{\pi} # $$ # # Now let's implement the code in Python and compare the solutions # # + # basic sample time dt = 0.01 # create and initialize lake name_lake = 'lake 1' Area = lambda h: 1 IC = 0 lake_obj = Lake(name = name_lake, A = Area) lake = lake_obj.generator(dt, IC) lake.send(None) # create and initialize pump inlet name_pump1 = 'pump 1' A = 5 B = 2 C = 1 u_pump1 = lambda t: A + B *np.sin(np.pi * t/ C) pump1_obj = Pump_Turbine(name = name_pump1, K = -1, u = u_pump1) pump1 = pump1_obj.generator(dt) pump1.send(None) # create and initialize pump outlet name_pump2 = 'pump 2' u_pump2 = lambda t: 5 pump2_obj = Pump_Turbine(name = name_pump2, K = -1, u = u_pump2) pump2 = pump2_obj.generator(dt) pump2.send(None) for t in np.arange(0,20,dt): qin = pump1.send((t, 0, lake_obj.h)) qout = pump2.send((t, lake_obj.h, 100)) h1 = lake.send((t, qin, qout)) plt.figure() lake_obj.plot() plt.grid() # + t = lake_obj._log['t'] qout = lake_obj._log['qout'] h = lake_obj._log['h'] h_analytic = lambda t: - 2/np.pi * np.cos(np.pi* np.asarray(t) ) + 2/np.pi plt.figure() plt.plot(t[::10], h_analytic(t)[::10], label = 'analytic', color = 'r', marker = 'o', linestyle="") plt.plot(t, h, label = 'numeric', color = 'b') plt.xlim([0.0, 2.5]) plt.grid() plt.legend() # - # ### Another simple test, with variable cross-section # Let's perform a similar analysis, as the one shown above, but now using a lake which has a variable cross-section area. Say that the cross-section area follows the pattern below: # # $$ # A(h) = E h^2 # $$ # # Where $E$ is a constant value. Let's perform the same analytic integration process that was done above. # # $$ # \frac{(E h^2)dh}{dt} = A^{*} + B\sin\frac{\pi t}{C} # $$ # # Integrate it. # # $$ # \int (E h^2)dh = \int \left(A^{*} + B\sin \left( \frac{\pi t}{C} \right)\right) dt # $$ # # $$ # h = \left[ \frac{3}{E}A^{*}t - \frac{3}{E}\frac{B C}{\pi} \cos \left( \frac{\pi t}{C} \right) + \text{Const} \right]^{\frac{1}{3}} # $$ # # Which gives us the general solution to this problem. Now let's fix some numerical values for simulation. 
# + $q_{out} = 5$ # + $A = 5$ # + $B = 2$ # + $C=1$ # + $E=1$ # # $$ # h = \left[ \frac{3}{E}A^{*}t - \frac{3}{E}\frac{B C}{\pi} \cos \left( \frac{\pi t}{C} \right) + \text{Const} \right]^{\frac{1}{3}} # $$ # # Apply initial condition $t = 0$, $h_0 = 0$ # # $$ # 0 = \left[- \frac{3}{E}\frac{B C}{\pi} + \text{Const} \right]^{\frac{1}{3}} # $$ # # Substituting the values here we find that, # # $$ # \text{Const} \approx 1.91 # $$ # # The final analytical solution is, # # $$ # h = \left[ - 3\frac{2}{\pi} \cos \left(\pi t \right) + 1.91 \right]^{\frac{1}{3}} # $$ # # Now let's implement the code in Python and compare the solutions. For the computational approach, we have to initialize the lake with the level slight above 0, since when $h=0$, $A(h) = 0$, and since $A$ becomes a denominator in the mass conservation equation, then it would cause an indefinite solution. This naturally brings error, but we can investigate it and maybe refine the model to make it more robust for such case. # # # + # basic sample time dt = 0.01 # create and initialize lake name_lake = 'lake 1' Area = lambda h: h**2 IC = 1e-5 lake_obj = Lake(name = name_lake, A = Area) lake = lake_obj.generator(dt, IC) lake.send(None) # create and initialize pump inlet name_pump1 = 'pump 1' A = 5 B = 2 C = 1 u_pump1 = lambda t: A + B *np.sin(np.pi * t/ C) pump1_obj = Pump_Turbine(name = name_pump1, K = -1, u = u_pump1) pump1 = pump1_obj.generator(dt) pump1.send(None) # create and initialize pump outlet name_pump2 = 'pump 2' u_pump2 = lambda t: 5 pump2_obj = Pump_Turbine(name = name_pump2, K = -1, u = u_pump2) pump2 = pump2_obj.generator(dt) pump2.send(None) for t in np.arange(0,20,dt): qin = pump1.send((t, 0, lake_obj.h)) qout = pump2.send((t, lake_obj.h, 100)) h1 = lake.send((t, qin, qout)) plt.figure() lake_obj.plot() plt.grid() # + t = lake_obj._log['t'] qout = lake_obj._log['qout'] h = lake_obj._log['h'] t = np.arange(0,20,0.01) h_analytic = lambda t: (-3* 2/np.pi * np.cos(np.pi* np.asarray(t) ) + 1.91)**(1/3) plt.figure() plt.plot(t[::10], h_analytic(t)[::10], label = 'analytic', color = 'r', marker = 'o', linestyle="") plt.plot(t, h, label = 'numeric', color = 'b') plt.xlim([0.0, 2.5]) plt.grid() plt.legend() # - # It can be seen from these results that the trick used in the mass conservation approach does not cause much issue, and the results look quite reasonable. # # In the next post, we will see how to model the reaches using the De Saint Venant Equations. I see you in the next post. 
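# As a small appendix: the agreement between the analytic and numeric curves can also be quantified rather than judged by eye. The cell below is a minimal sketch that reuses `lake_obj` and the approximate constant 1.91 from the variable-area test above; the numbers are only a rough diagnostic, since each logged level corresponds to the end of an integration step.

# +
# Compare the logged numeric levels against the analytic solution of the variable-area test
import numpy as np

t_log = np.asarray(lake_obj._log['t'])
h_num = np.asarray(lake_obj._log['h'])
h_ana = (-3.0 * 2.0 / np.pi * np.cos(np.pi * t_log) + 1.91) ** (1.0 / 3.0)

gap = np.abs(h_num - h_ana)
print('max abs gap  : {:.4f}'.format(gap.max()))
print('mean abs gap : {:.4f}'.format(gap.mean()))
# -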
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import wave import numpy as np import pylab import matplotlib.pyplot as plt import matplotlib.ticker as ticker import math import warnings warnings.simplefilter("ignore", DeprecationWarning) # глобальные переменные delta = 50 reduceCount = 5 types = { 1: np.int8, 2: np.int16, 4: np.int32 } # - # Запись массива сэмплов в файл def printSamples(samples): output = open("samples.txt", "w") for sample in samples: string ='{:.2f}'.format(sample) output.write(string + '\n') output.close() def printSamplesInFile(samples, filename): output = open(filename, "w") for sample in samples: string ='{:.2f}'.format(sample) output.write(string + '\n') output.close() # Свертка (сжимает массив сэмплов, усредняя значения) def cutSamples(samples): i = 0 values = [] while i + delta < samples.size: samplesSlice = samples[i:i+delta] samplesSlice = list(map(lambda x: abs(x), samplesSlice)) samplesSlice.sort() samplesSlice = samplesSlice[reduceCount : delta - reduceCount] value = sum(samplesSlice) / len(samplesSlice) values.append(value) i = i + delta return values # Обрезка файла def cutAudio(fileName, source, target, outputName): inputFile = wave.open(fileName, mode="r") (channels, sWidth, rate, framesCount, comptype, compname) = inputFile.getparams() content = inputFile.readframes(framesCount) samples = np.fromstring(content, dtype=types[sWidth]) samples = samples[round(source*rate):round(target*rate)] outputFile = wave.open(outputName, mode="w") outputFile.setparams((channels, sWidth, rate, samples.size, comptype, compname)) outputFile.writeframes(samples) outputFile.close() return cutSamples(samples) # + values = cutAudio("records/111.wav", 5.22, 5.67, "5,1-5,8.wav") pylab.plot(values) pylab.show() # + recordName = "records/Шишкина 17.02.15_new.wav" wav = wave.open(recordName, mode="r") (nchannels, sampwidth, framerate, nframes, comptype, compname) = wav.getparams() print(nchannels, sampwidth, framerate, nframes, comptype, compname) duration = nframes / framerate content = wav.readframes(nframes) samples = np.fromstring(content, dtype=types[sampwidth]) values = cutSamples(samples) printSamples(values) # нахождение пиков в записи frameOnCut = [] i = 20 prevResults = i while i < len(values): if values[i] > 10000: slice = values[i-prevResults:i] minPrevValue = min(slice) if values[i] / minPrevValue > 20: frameOnCut.append(i) i += 150 continue i += 1 # переводим величины массива frameOnCut из порядкового номера фрейма в секунды frameOnCut = list(map(lambda x: x*delta, frameOnCut)) frameOnCut = list(map(lambda x: x/framerate, frameOnCut)) print(len(frameOnCut)) print(frameOnCut) # вырезаем аудио-файл с каждым пиком и строим диаграмму for framePik in frameOnCut: endPath = '{:.2f}'.format(framePik) path = "results/sounds/result_" + endPath + ".wav" imagePath = "results/graphics/result_" + endPath + ".png" path = path.replace(".", ",", 1) imagePath = imagePath.replace(".", ",", 1) values = cutAudio(recordName, framePik-0.05, framePik+0.4, path) plt.plot(values) plt.savefig(imagePath) plt.close() # - import os def convertSoundsInSamples(directory): files = os.listdir(directory) sounds = filter(lambda x: x.endswith('.wav'), files) for sound in sounds: wav = wave.open(directory + "/" + sound, mode="r") (nchannels, sampwidth, framerate, nframes, comptype, compname) = wav.getparams() content = 
wav.readframes(nframes) samples = np.fromstring(content, dtype=types[sampwidth]) cuttedSamples = cutSamples(samples) filename = sound[:-3] printSamplesInFile(cuttedSamples, directory + "/samples/" + filename + "txt") import os convertSoundsInSamples("results/кашель") convertSoundsInSamples("results/не кашель") convertSoundsInSamples("results/test") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="RcBNimbbsjMx" import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn import datasets import warnings warnings.filterwarnings('ignore') # + id="FwEgnVEBzclV" from sklearn.tree import DecisionTreeClassifier, export_graphviz from sklearn.model_selection import train_test_split,GridSearchCV from sklearn.preprocessing import StandardScaler from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score, roc_auc_score from six import StringIO from sklearn.tree import export import sklearn from sklearn.datasets import load_boston # + id="nsoqKmX_zXiK" boston = load_boston() bos = pd.DataFrame(boston.data) # + colab={"base_uri": "https://localhost:8080/", "height": 195} id="rENJkOnV0NgV" outputId="5f4fa533-4b6c-48a3-ce63-030422cb2b29" bos.head() # + colab={"base_uri": "https://localhost:8080/"} id="Dw_d4Op-0VST" outputId="fcfdf46e-8573-4e96-fb23-a2137d6f3654" boston.feature_names # + colab={"base_uri": "https://localhost:8080/"} id="VCNoj7Ao0Yis" outputId="00bf6fbb-5ab6-4a8e-8205-a197ac30df60" boston.target # + id="14mBlVqG0a7y" bos.columns = boston.feature_names # + colab={"base_uri": "https://localhost:8080/"} id="DSSuhSD30fxe" outputId="d3f94fbe-dac5-46b1-9fc7-96ddbe34d199" bos.columns # + id="8z9BRTzB0hns" bos['Target'] = boston.target # + colab={"base_uri": "https://localhost:8080/", "height": 402} id="Uf9UPLC70jDE" outputId="ac59d950-09e4-48d0-f8fa-ee38df2077e8" bos # + colab={"base_uri": "https://localhost:8080/", "height": 195} id="MaH1Hmgl0mup" outputId="31fd4993-71e2-4f74-dfe7-5256b50f1f69" bos.head() # + colab={"base_uri": "https://localhost:8080/", "height": 195} id="8KpPKMWM0o-P" outputId="17b58c6d-bacd-4543-cdb1-a607736f7389" bos.tail() # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="GVAcCwge0rol" outputId="6edb018e-fe86-43b1-d902-629457d852cb" bos.describe() # + colab={"base_uri": "https://localhost:8080/"} id="s4H47wtM0tn_" outputId="8e4075f5-ae42-449a-840b-b350f68e2a48" bos.info() # + colab={"base_uri": "https://localhost:8080/"} id="Ifp5RuY30vY4" outputId="4e212a29-bfe6-4fcb-d494-02236e97b603" bos.isnull().sum() # + colab={"base_uri": "https://localhost:8080/"} id="Ay2rHRXh0xFa" outputId="669758be-6af2-4827-9165-ab44cfb90aee" bos.columns # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="LTHhnf-m0yu0" outputId="86f69f59-368d-42fb-8f0a-b0df4c447f09" plt.figure(figsize=(20,25),facecolor='white') plotnumber=1 for column in bos: if plotnumber<=14: ax=plt.subplot(4,4,plotnumber) sns.distplot(bos[column]) plt.xlabel(column,fontsize=20) plotnumber+=1 plt.tight_layout() # + id="jW3v2um200UZ" x = bos.drop(columns=['Target']) # + colab={"base_uri": "https://localhost:8080/", "height": 402} id="-wRcogXE02iK" outputId="34ca3361-9a35-49bd-80d4-d88f93120acb" x # + id="cQsBCbZM04HL" y = bos['Target'] # + colab={"base_uri": 
"https://localhost:8080/"} id="1wPpomLu05fX" outputId="67195a53-9606-4c5f-c288-ba06578796d7" y # + colab={"base_uri": "https://localhost:8080/"} id="ro2-K90y065I" outputId="f5878416-8b17-4f9b-8ada-f54a19cb10fa" x.columns # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="IwbhAPtI0820" outputId="2e7bad97-6d2d-4b4d-a22b-dfa2d3bc9dad" plt.figure(figsize=(20,30),facecolor='white') plotnumber=1 for col in x: if plotnumber <=13: ax=plt.subplot(5,3,plotnumber) plt.scatter(x[col],y) plt.xlabel(col,fontsize=20) plt.ylabel('Target',fontsize=10) plotnumber+=1 plt.tight_layout() # + id="9m8IlaSg0-6H" scaler = StandardScaler() # + id="O4l-2rA_2H3W" x_scaled = scaler.fit_transform(x) # + colab={"base_uri": "https://localhost:8080/"} id="oIsY48882Mdr" outputId="48a4b6fc-2c8f-4aab-f8aa-439bd107dbb3" x_scaled # + id="uvEgN7kj2NoU" from statsmodels.stats.outliers_influence import variance_inflation_factor # + id="ldKyHenP2PdC" variables = x_scaled vif=pd.DataFrame() vif['VIF'] = [variance_inflation_factor(variables,i) for i in range(variables.shape[1])] vif['Features'] = x.columns # + colab={"base_uri": "https://localhost:8080/", "height": 432} id="iUeed4_12SJM" outputId="23b49533-353a-4d93-d52f-cebbfcf51ae4" vif # + id="wEjZ472L2UGD" x = bos.drop(columns=['Target','RAD','TAX']) # + id="eRBeQ8qU2XDh" y = bos['Target'] # + id="gNfsahUP2Yiz" scaler = StandardScaler() x_scaled = scaler.fit_transform(x) variables = x_scaled vif=pd.DataFrame() vif['VIF'] = [variance_inflation_factor(variables,i) for i in range(variables.shape[1])] vif['Features'] = x.columns # + colab={"base_uri": "https://localhost:8080/", "height": 373} id="CGQX4D3p2aj3" outputId="b5e4f5ac-5c46-4d39-ebc8-20dcd1cf3eb0" vif # + id="Dm-KxkVfAo17" from sklearn.tree import DecisionTreeClassifier, export_graphviz # + id="ahjJm848Ao18" x_train,x_test,y_train,y_test = train_test_split(x_scaled,y,test_size = 0.25,random_state = 355) # + id="p0eWdOM0Ao19" from sklearn.ensemble import RandomForestRegressor # + id="pKhnBoWoAo19" Random_clf = RandomForestRegressor(n_estimators=50,random_state=0) # + colab={"base_uri": "https://localhost:8080/"} id="ZJaytuGMAo1-" outputId="98ab3dd2-87cf-4e83-eaf7-a288edfce382" Random_clf.fit(x_train,y_train) # + id="bXrlstS9Ao2H" y_pred = Random_clf.predict(x_test) # + colab={"base_uri": "https://localhost:8080/"} id="c0sW4UNKAo2I" outputId="18712584-88f0-44a1-a4ed-d2529a544345" y_pred # + colab={"base_uri": "https://localhost:8080/"} id="bQuoEiKaAo2J" outputId="724aefcb-6ef8-4d72-9a88-d83fe06f0847" Random_clf.score(x_train,y_train) # + colab={"base_uri": "https://localhost:8080/"} id="QrEjOvsUAo2J" outputId="d69695e5-a85d-4314-d687-d3a718c64a1c" Random_clf.score(x_test,y_test) # + id="cEmTXSaBAo2L" grid_param = { "n_estimators" : [90,100,115,130], 'criterion': ["mse", "mae"], 'max_depth' : range(2,20,1), 'min_samples_leaf' : range(1,10,1), 'min_samples_split': range(2,10,1), 'max_features' : ['auto','log2'] } # + id="Nh2WKTnQC7z6" grid_search = GridSearchCV(estimator=Random_clf,param_grid=grid_param,cv=5,n_jobs=-1,verbose=3) # + colab={"base_uri": "https://localhost:8080/"} id="mx9viM6UAo2M" outputId="072237da-da1f-4481-a68b-00789f9c27b3" grid_search.fit(x_train,y_train) # + colab={"base_uri": "https://localhost:8080/"} id="LgubbcimI13q" outputId="f925ec63-f483-471b-e7f8-ed39a47ffe4f" grid_search.best_params_ # + id="jFevIzxlBLdp" Random_clf = RandomForestRegressor(criterion = 'mse',max_depth = 14,max_features = 'log2',min_samples_leaf = 1,min_samples_split=2 ,n_estimators=115,random_state=6) # + 
colab={"base_uri": "https://localhost:8080/"} id="Bu9hupFeJBZx" outputId="f03e63f5-878a-4671-c003-2dfecf3f915d" Random_clf.fit(x_train,y_train) # + colab={"base_uri": "https://localhost:8080/"} id="EirKC_U2JEUF" outputId="2b37fc4b-5c48-4746-bd5e-a504f960cdea" Random_clf.score(x_train,y_train) # + colab={"base_uri": "https://localhost:8080/"} id="7Qe5tE2bJJVq" outputId="3088ef8f-38a4-4c19-d4ff-b7c1e20ae407" Random_clf.score(x_test,y_test) # + id="zCBBn1_rJQ11" y_pred = Random_clf.predict(x_test) # + colab={"base_uri": "https://localhost:8080/"} id="Iiq4elRVJajs" outputId="b4a53933-7a44-4c66-e8e5-861b1ffd9346" y_pred # + id="UOjnpQBeKFSc" from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score, roc_auc_score,mean_absolute_error # + id="_B6XSx3HKY7h" from sklearn import metrics # + colab={"base_uri": "https://localhost:8080/"} id="1CqB96umJ5Qz" outputId="04b1abb7-26de-4ea4-eac3-bc7bb3c9af69" print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_pred)) print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_pred)) print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) # + id="btY5YvhKJ_oH" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from tqdm.auto import tqdm t = 10 i = 0 with tqdm(total=t) as pbar: i += 1 print(i) pbar.update(i) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Ajuste de curvas # # # # > El **ajuste de curvas** es el proceso de construir una curva (función), que sea el mejor ajuste a una serie de puntos. Las curvas ajustadas pueden ser usadas como asistencia en la visualización de datos, para inferir valores de una función donde no hay datos disponibles, y para resumir la relación entre variables. # # **Referencia**: # - https://en.wikipedia.org/wiki/Curve_fitting # ___ # ## 0. Introducción # # Consideremos un polinomio de grado uno: # # $$y = \beta_1 x + \beta_0.$$ # # Esta es una **línea recta** que tiene pendiente $\beta_1$. Sabemos que habrá una línea conectando dos puntos cualesquiera. Por tanto, *una ecuación polinómica de primer grado es un ajuste perfecto entre dos puntos*. # # Si consideramos ahora un polinomio de segundo grado, # # $$y = \beta_2 x^2 + \beta_1 x + \beta_0,$$ # # este se ajustará exactamente a tres puntos. Si aumentamos el grado de la función a la de un polinomio de tercer grado, obtenemos: # # $$y = \beta_3 x^3 + \beta_2 x^2 + \beta_1 x + \beta_0,$$ # # que se ajustará a cuatro puntos. # # **Ejemplos** # 1. Encontrar la línea recta que pasa exactamente por los puntos $(0,1)$ y $(1,0)$. # 2. Encontrar la parábola que pasa exactamente por los puntos $(-1,1)$, $(0,0)$ y $(1,1)$. # # **Solución** # 1. Consideramos $y=\beta_1 x + \beta_0$. Evaluando en el punto $(0,1)$, obtenemos $\beta_1(0) + \beta_0 = 1$. Ahora, evaluando en el punto $(1,0)$, obtenemos $\beta_1(1) + \beta_0 = 0$. De esta manera, # $$\left[\begin{array}{cc} 1 & 0 \\ 1 & 1\end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1\end{array}\right]=\left[\begin{array}{c} 1 \\ 0\end{array}\right].$$ # Resolviendo, $\beta_0=-\beta_1=1$. 
# Importar numpy y el matplotlib.pyplot import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # + # Encontrar \beta_0 y \beta_1 resolviendo el sistema A = np.array([[1, 0], [1, 1]]) b = np.array([1, 0]) beta = np.linalg.inv(A).dot(b) #beta = np.dot(np.linalg.inv(A), b) beta # - # Graficar la recta encontrada junto con los puntos x = np.array([0, 1]) y = np.array([1, 0]) plt.figure(figsize=(8,6)) plt.plot(x, y, 'ro', ms=5, label='Puntos') plt.plot(x, beta[0]+beta[1]*x, label='Recta ajustada') plt.xlabel('x') plt.ylabel('y') plt.legend(loc='best') plt.grid() # 2. Consideramos $y=\beta_2 x^2 + \beta_1 x + \beta_0$. Evaluando en el punto $(-1,1)$, obtenemos $\beta_2(-1)^2 + \beta_1(-1) + \beta_0 = 1$. Ahora, evaluando en el punto $(0,0)$, obtenemos $\beta_2(0)^2 + \beta_1(0) + \beta_0 = 0$. Finalmente, evaluando en el punto $(1,1)$, obtenemos $\beta_2(1)^2 + \beta_1(1) + \beta_0 = 1$. De esta manera, # $$\left[\begin{array}{ccc} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{array}\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \\ \beta_2 \end{array}\right]=\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right].$$ # Resolviendo, $\beta_0=\beta_1=0$ y $\beta_2=1$. # Encontrar \beta_0, \beta_1 y \beta_2 A = np.array([[1, -1, 1], [1, 0, 0], [1, 1, 1]]) b = np.array([1, 0, 1]) beta = np.linalg.inv(A).dot(b) beta # Graficar la parabola junto con los puntos x = np.array([-1, 0, 1]) y = np.array([1, 0, 1]) xpuntos = np.linspace(-1,1,50) plt.figure(figsize=(8,6)) plt.plot(x, y, 'ro', ms=5, label='Puntos') plt.plot(xpuntos, beta[0]+beta[1]*xpuntos+beta[2]*xpuntos**2, label='Parabola ajustada') plt.xlabel('x') plt.ylabel('y') plt.legend(loc='best') plt.grid() # ### ¿Qué tienen en común los anteriores problemas? # Las curvas están completamente determinadas por los puntos (datos limpios, suficientes y necesarios). # # Esto se traduce en que, al llevar el problema a un sistema de ecuaciones lineales, existe una única solución: **no hay necesidad, ni se puede optimizar nada**. # # ¿Tendremos datos así de '*bonitos*' en la vida real? # # La realidad es que los datos que encontraremos en nuestra vida profesional se parecen más a esto... # Crear un conjunto de puntos ruidosos a partir de una recta x = np.linspace(0, 1, 50) y = 2+10*x+np.random.randn(50) # Graficar plt.figure(figsize=(8,6)) plt.plot(x, y, '*') plt.xlabel('x') plt.ylabel('y') plt.grid() # ### ¿Cómo ajustamos una curva a esto? # ## 1. Problema básico # # # # Consideramos que tenemos un conjunto de n pares ordenados de datos $(x_i,y_i)$, para $i=1,2,3,\dots,n$. # # ### ¿Cuál es la recta que mejor se ajusta a estos datos? # Consideramos entonces ajustes de la forma $\hat{f}(x) = \beta_0+\beta_1 x = \left[1 \quad x\right]\left[\begin{array}{c} \beta_0 \\ \beta_1 \end{array}\right]=\left[1 \quad x\right]\boldsymbol{\beta}$ (lineas rectas). # # Para decir '*mejor*', tenemos que definir algún sentido en que una recta se ajuste *mejor* que otra. # # **Mínimos cuadrados**: el objetivo es seleccionar los coeficientes $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$, de forma que la función evaluada en los puntos $x_i$ ($\hat{f}(x_i)$) aproxime los valores correspondientes $y_i$. 
# # La formulación por mínimos cuadrados, encuentra los $\boldsymbol{\beta}=\left[\beta_0 \quad \beta_1 \right]^T$ que minimiza # $$\sum_{i=1}^{n}(y_i-\hat{f}(x_i))^2=\sum_{i=1}^{n}(y_i-\left[1 \quad x_i\right]\boldsymbol{\beta})^2=\left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2,$$ # # donde $\boldsymbol{y}=\left[y_1\quad\dots\quad y_n\right]^T$, y $\boldsymbol{X}=\left[\begin{array}{ccc}1 & x_1\\ \vdots & \vdots \\ 1 & x_n\end{array}\right].$ Esto es, # # $$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2$$ # Notar que el problema anterior no es de programación lineal, ¿porqué? # # Para llevar a cabo la anterior minimización, la librería `SciPy` en su módulo `optimize` contiene la función `minimize`. # Importar el módulo optimize de la librería scipy import scipy.optimize as opt # Función minimize help(opt.minimize) # Parámetros importantes: # - fun: función $f(x)$, se debe definir antes de llamar minimize, como `def f(x): ... return ...` # - x0: valor inicial. En una función no lineal, en general, hay múltiples mínimos. Dependiendo de la semilla caerá en uno de esos mínimos. Se ingresa como $x0 = \text{np.array}([x_{01},\dots,x_{0n}])$. # - bounds: como en linprog. # - constraints: funciones que definen las restricciones $g_i(x)$ y $h_j(x)$. Se definen igual que $f(x)$ y se ingresan como {'ineq': g_i, 'eq': h_j}. # Primero debemos construir la función objetivo y la semilla inicial: # + # Definir funcion objetivo y punto inicial def obj(beta, x, y): return np.sum((y-beta[0]-beta[1]*x)**2) beta_inicial = [0, 0] # - resultado = opt.minimize(obj, beta_inicial, args=(x,y)) # Mostrar resultado # ¿Qué tan bien luce el ajuste? # Coeficientes \beta_0 y \beta_1 beta = resultado.x beta # Grafica de los puntos y la recta ajustada plt.figure(figsize=(8,6)) plt.plot(x, y, '*', label='Puntos') plt.plot(x, beta[0]+beta[1]*x, label='Recta ajustada') plt.xlabel('x') plt.ylabel('y') plt.legend(loc='best') plt.grid() # Note que la pendiente es aproximadamente $10$ y el intercepto es aproximadamente $2$. # # La anterior idea se puede extender a ajuste polinomial... # ## 2. Ajuste polinomial # # Ahora, considere el siguiente conjunto de datos... # + # Generamos 100 puntos ruidosos a partir de una senoidal n = 100 x = np.linspace(np.pi/6, 5*np.pi/3, n) y = 4*np.sin(x) + 0.5*np.random.randn(n) plt.figure(figsize=(8,6)) plt.plot(x, y, '*b') plt.xlabel('$x$') plt.ylabel('$y$') plt.grid() plt.show() # - # ### 2.1. ¿Se ajustará bien una recta? # + # Definir funcion objetivo y semilla def obj(beta, x, y): return np.sum((y-beta[0]-beta[1]*x)**2) beta_inicial = [0, 0] # - # Resolver resultado = opt.minimize(obj, beta_inicial, args = (x,y)) resultado # **Veamos $\beta$ para el ajuste con recta** # Mostrar coeficientes beta = resultado.x beta # Graficar plt.figure(figsize=(8,6)) plt.plot(x, y, '*', label='Puntos') plt.plot(x, beta[0]+beta[1]*x, label='Recta ajustada') plt.xlabel('x') plt.ylabel('y') plt.legend(loc='best') plt.grid() # ### 2.2. La recta no es buen ajuste... ¿Se ajustará bien una parabola? 
# + # Definir funcion objetivo y semilla def obj2(beta, x, y): return np.sum((y-beta[0]-beta[1]*x-beta[2]*x**2)**2) beta_inicial = [0, 0, 0] # - # Resolver resultado = opt.minimize(obj2, beta_inicial, args=(x,y)) resultado # **Veamos $\beta$ para el ajuste con parábola** # Mostrar coeficientes beta2 = resultado.x beta2 # Graficar recta y parabola ajustadas plt.figure(figsize=(8,6)) plt.plot(x, y, '*', label='Puntos') plt.plot(x, beta[0]+beta[1]*x, label='Recta ajustada') plt.plot(x, beta2[0]+beta2[1]*x+beta2[2]*x**2, label='Parabola ajustada') plt.xlabel('x') plt.ylabel('y') plt.legend(loc='best') plt.grid() # ### 2.3. Tampoco. Quizá un polinomio cúbico... # + # Definir funcion objetivo y semilla def obj3(beta, x, y): return np.sum((y-beta[0]-beta[1]*x-beta[2]*x**2-beta[3]*x**3)**2) beta_inicial = [0, 0, 0, 0] # - # Resolver resultado = opt.minimize(obj3, beta_inicial, args=(x,y)) resultado # **Veamos $\beta$ para el ajuste con cúbica** # Mostrar coeficientes beta3 = resultado.x beta3 # Graficar recta, parabola y cubica plt.figure(figsize=(8,6)) plt.plot(x, y, '*', label='Puntos') plt.plot(x, beta[0]+beta[1]*x, label='Recta ajustada') plt.plot(x, beta2[0]+beta2[1]*x+beta2[2]*x**2, label='Parabola ajustada') plt.plot(x, beta3[0]+beta3[1]*x+beta3[2]*x**2+beta3[3]*x**3, label='Cubica ajustada') plt.xlabel('x') plt.ylabel('y') plt.legend(loc='best') plt.grid() # Mucho mejor. Entonces, ¿mientras más se suba el orden mejor la aproximación? # # ### 2.4. Ajustemos un polinomio de grado 7... # + # Definimos funcion objetivo y semilla def obj7(beta, x, y): return np.sum((y-beta.dot([x**i for i in range(8)]))**2) beta_inicial = np.zeros(8) # - # Resolvemos resultado = opt.minimize(obj7, beta_inicial, args=(x,y)) resultado # **De nuevo, veamos $\beta$** # Mostrar coeficientes beta7 = resultado.x beta7 # **¡Cuidado! OVERFITTING...** # # Observar el tamaño de algunos coeficientes. Cuando los coeficientes son grandes, ¿qué pasa? # Grafica de ajustes plt.figure(figsize=(8,6)) plt.plot(x, y, '*', label='Puntos') plt.plot(x, beta[0]+beta[1]*x, label='Recta ajustada') plt.plot(x, beta2[0]+beta2[1]*x+beta2[2]*x**2, label='Parabola ajustada') plt.plot(x, beta3[0]+beta3[1]*x+beta3[2]*x**2+beta3[3]*x**3, label='Cubica ajustada') plt.plot(x, beta7.dot([x**i for i in range(8)]), label='Polinomio grado 7 ajustada') plt.xlabel('x') plt.ylabel('y') plt.legend(loc='best') plt.grid() # Es conveniente ver el error como función del orden del polinomio... **selección de modelos** # + # Error cuadratico e_ms = [] def obj(b, x, y, n): return np.sum((y-b.dot([x**i for i in range(n+1)]))**2) for i in range(7): b0 = np.zeros((i+2,)) res = opt.minimize(obj, b0, args=(x,y,i+1)) e_ms.append(res.fun) plt.figure(figsize=(8,5)) plt.plot(np.arange(7)+1, e_ms, 'o') plt.xlabel('orden') plt.ylabel('error') plt.show() # - # En efecto, parece que con $3$ es suficiente. # ### ¿Cómo prevenir el *overfitting* sin importar el orden del modelo? # ## 3. Regularización # # Vimos que la solución de mínimos cuadrados es: # $$\boldsymbol{\beta}^{ls} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2.$$ # # Sin embargo, si crecemos el orden del modelo hay overfitting y algunos coeficientes óptimos $\boldsymbol{\beta}$ crecen muchísimo. Que un coeficiente sea muy grande, significa que se le da mucha importancia a alguna característica (que quizá sea ruido... no sirve para predecir). 
# # La regularización consiste en penalizar la magnitud de los coeficientes $\boldsymbol{\beta}$ en el problema de optimización, para que no crezcan tanto. # ### 3.1. Ridge # # $$\boldsymbol{\beta}^{ridge} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|^2$$ # + def obj_ridge(b, x, y, n, l): return np.sum((y-b.dot([x**i for i in range(n+1)]))**2)+l*np.linalg.norm(b)**2 b0 = np.random.random((8,)) res = opt.minimize(obj_ridge, b0, args=(x,y,7,0.1)) yhat7_ridge = np.array([x**j for j in range(8)]).T.dot(res.x) plt.figure(figsize=(8,6)) plt.plot(x, y, '*b', label = 'datos') plt.plot(x, beta7.dot([x**i for i in range(8)]), '-c', label = 'ajuste 7') plt.plot(x, yhat7_ridge, '--r', label = 'ajuste 7_ridge') plt.legend(loc = 'best') plt.xlabel('$x$') plt.ylabel('$y$') plt.grid() plt.show() # - res.x # ### 3.2. Lasso # # $$\boldsymbol{\beta}^{lasso} = \arg \min_{\boldsymbol{\beta}} \left|\left|\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right|\right|^2 + \lambda\left|\left|\boldsymbol{\beta}\right|\right|_1$$ # # La norma 1 no es más que la suma de los valores absolutos de las componentes $\left|\left|\boldsymbol{\beta}\right|\right|_1=\sum_{j=0}^m\left|\beta_j\right|$. # + def obj_lasso(b, x, y, n, l): return np.sum((y-b.dot([x**i for i in range(n+1)]))**2)+l*np.linalg.norm(b,1) b0 = np.random.random((8,)) res = opt.minimize(obj_lasso, b0, args=(x,y,7,0.1)) yhat7_lasso = np.array([x**j for j in range(8)]).T.dot(res.x) plt.figure(figsize=(8,6)) plt.plot(x, y, '*b', label = 'datos') plt.plot(x, beta7.dot([x**i for i in range(8)]), '-c', label = 'ajuste 7') plt.plot(x, yhat7_ridge, '--r', label = 'ajuste 7_ridge') plt.plot(x, yhat7_lasso, '--k', label = 'ajuste 7_lasso') plt.legend(loc = 'best') plt.xlabel('$x$') plt.ylabel('$y$') plt.grid() plt.show() # - res.x # ## 4. Ajuste robusto # # Ahora, consideremos de nuevo el caso de la línea recta con un par de puntos atípicos al inicio y al final... # + x = np.linspace(0, 1, 30) y = 10*x + 2 + np.random.randn(30) y[0] = 16 y[-1] = 0 plt.figure(figsize=(8,6)) plt.plot(x, y, '*b') plt.xlabel('$x$') plt.ylabel('$y$') plt.grid() plt.show() # - # Solucionamos el problema normalmente... # + b0 = np.random.random((2,)) res = opt.minimize(obj, b0, args=(x,y,1)) yhat = np.array([x**j for j in range(2)]).T.dot(res.x) plt.figure(figsize=(8,6)) plt.plot(x, y, '*b', label = 'datos') plt.plot(x, yhat, '-r', label = 'ajuste') plt.legend(loc = 'best') plt.xlabel('$x$') plt.ylabel('$y$') plt.grid() plt.show() # - res.x # Si estos puntos que parecen ser atípicos, hacen parte de una 'mala medición', vemos que el ajuste que obtenemos a los otros puntos es muy pobre... # # **¿Cómo podemos evitar esto?** La respuesta es *ajuste robusto*. # + def huber(a, d): if np.abs(a)<=d: return a**2 else: return d*(2*np.abs(a)-d) def obj_robust(b, x, y, n, d): return np.sum(np.vectorize(huber)(y-b.dot([x**i for i in range(n+1)]), 1.345)) b0 = np.random.random((2,)) res = opt.minimize(obj_robust, b0, args=(x,y,1,1.345)) yhat = np.array([x**j for j in range(2)]).T.dot(res.x) plt.figure(figsize=(8,6)) plt.plot(x, y, '*b', label = 'datos') plt.plot(x, yhat, '-r', label = 'ajuste') plt.legend(loc = 'best') plt.xlabel('$x$') plt.ylabel('$y$') plt.grid() plt.show() # - res.x # Mejor... # ## 5. Actividad # # 1. Ajustar polinomios de grado 1 hasta grado 7 a los siguientes datos. # 2. 
Graficar el error cuadrático acumulado contra el número de términos, y elegir un polinomio que ajuste bien y su grado no sea muy alto. # 3. Para el grado de polinomio elegido, realizar el ajuste con ridge con coeficiente de 0.01. # 4. Comparar los beta. # # Abrir un nuevo notebook, llamado `Tarea5_ApellidoNombre` y subirlo a moodle en el espacio habilitado. Tarea para el Lunes 17 a las 23:00. def f(x): return np.exp(-x**2/2)/np.sqrt(2*np.pi) # + x = np.linspace(-3, 3) y = f(x) plt.figure(figsize=(8,6)) plt.plot(x, y, '*b', label = 'datos') plt.legend(loc = 'best') plt.xlabel('$x$') plt.ylabel('$y$') plt.grid() plt.show() # - # # Avisos: # # ## Recordar tarea para hoy y tarea para el jueves. # # ## Evaluación primer módulo: (Por definir) # ### Se las entrego el Lunes 17 de Septiembre (dentro de una semana) por moodle y tienen hasta el Viernes 21 de Septiembre. # # ## Proyecto: # ### 1. Elegir integrantes para proyecto. Mínimo 2, máximo 3 (sin excepción). Entregarme una hoja por equipo con los nombres de los integrantes ya. # ### 2. Deben elegir un tema para proyecto que se pueda resolver como un problema de optimización (preferiblemente, relacionado con su carrera). # ### 3. Para la siguiente semana, a más tardar, deben acercarse a mi con su tema de proyecto. Juntos, definiremos el alncance. # ### 4. Fecha de entrega y presentación: Lunes 24 de septiembre. # # #
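# A minimal sketch of the activity above (not an official solution): it reuses the least-squares and ridge
# objectives defined earlier in this notebook, and the degree chosen for steps 3 and 4 is only illustrative.
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt

x = np.linspace(-3, 3)
y = np.exp(-x**2/2)/np.sqrt(2*np.pi)

def obj(b, x, y, n):
    # cumulative squared error for a polynomial of degree n
    return np.sum((y - b.dot([x**i for i in range(n + 1)]))**2)

def obj_ridge(b, x, y, n, l):
    # least squares plus an L2 penalty on the coefficients
    return obj(b, x, y, n) + l*np.linalg.norm(b)**2

# 1. and 2.: fit degrees 1 to 7 and plot the squared error against the degree
errores = []
for n in range(1, 8):
    res = opt.minimize(obj, np.zeros(n + 1), args=(x, y, n))
    errores.append(res.fun)
plt.figure(figsize=(8, 5))
plt.plot(range(1, 8), errores, 'o')
plt.xlabel('grado')
plt.ylabel('error')
plt.show()

# 3. and 4.: for a chosen degree (here 4, for illustration only), compare plain and ridge coefficients
grado = 4
beta_ls = opt.minimize(obj, np.zeros(grado + 1), args=(x, y, grado)).x
beta_ridge = opt.minimize(obj_ridge, np.zeros(grado + 1), args=(x, y, grado, 0.01)).x
print(beta_ls)
print(beta_ridge)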
    # Created with Jupyter by . #
    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Classifying the type of flooring surface using data collected by Inertial Measurement Units sensors # # Author: () # # Creation date: 09/23/2019 # # Data source: Tampere University Signal Processing Department in Finland. # # Competition page: https://www.kaggle.com/c/competicao-dsa-machine-learning-sep-2019/ # ## Exploratory Data Analysis # # The sensor data collected includes accelerometer data, gyroscope data (angular # rate) and internally estimated orientation. Specifically: # # * Orientation: 4 attitude quaternion (a mathematical notation used to represent orientations and rotations in a 3D space) channels, 3 for vector part and one for scalar part; # * Angular rate: 3 channels, corresponding to the 3 IMU coordinate axes X, Y, and Z; # * Acceleration: 3 channels, specific force corresponding to 3 IMU coordinate axes X, Y, and Z. # # Each data point includes the measures described above of orientation, velocity and acceleration, resulting in a feature vector of length 10 for each point. # # There are 128 measurements per time series plus three identification columns: # * ***row_id***: The ID for the row. # * ***series_id***: a number that identify the measurement series. It is also the foreign key to **y_train** and sample_submission. # * ***measurement_number***: measurement number within the series. # # #### Loading the data # + # If you will use tqdm # #!pip install ipywidgets # #!jupyter nbextension enable --py widgetsnbextension # #!pip install -r requirements.txt # - import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np from tqdm import tqdm_notebook as tqdm # %matplotlib inline # + # Folder with datasets data_folder = "data/" # Running on kaggle? kaggle = False if kaggle: data_folder = "../input/" # Load the data for training ML models xtrain = pd.read_csv(data_folder + "X_treino.csv") ytrain = pd.read_csv(data_folder + "y_treino.csv") # Target train_data = pd.merge(xtrain, ytrain, how = "left", on = "series_id") #Load the Test dataset to predict the results (used for submission) xtest = pd.read_csv(data_folder + "X_teste.csv") test_data = xtest # Submission data submission = pd.read_csv(data_folder + "sample_submission.csv") # Showing the number of samples and columns for each dataset print(train_data.shape) print(test_data.shape) # - train_data.head() test_data.head() # #### Frequency Distribution # + # Check unique values train_count_series = len(train_data.series_id.unique()) test_count_series = len(test_data.series_id.unique()) train_freq_distribution_surfaces = train_data.surface.value_counts() print(f"Number of time series in train dataset: {train_count_series}") print(f"Number of time series in test dataset: {test_count_series}\n") print(f"Surfaces frequency distribution in train dataset:\n{train_freq_distribution_surfaces}") train_freq_distribution_surfaces.plot(kind="barh", figsize=(10,5)) plt.title("Sample distribution by class") plt.ylabel("Number of time series") plt.show() # - # So, the train data set contains 3810 labeled time series samples, with the corresponding surface type annotation. # # Most of the samples are for the concrete surface. The *hard_tiles* has only 2688 samples, this may be insufficient to build a robust model for this type of surface. 
# # Furthermore, the classes are not balanced so we need to be careful because simple accuracy score is not enough to evaluate the model performance. # #### Frequency distribution for each column plt.subplots_adjust(top=0.8) for i, col in enumerate(xtrain.columns[3:]): g = sns.FacetGrid(train_data, col="surface", col_wrap=5, height=3, aspect=1.1) g = g.map(sns.distplot, col) g.fig.suptitle(col, y=1.09, fontsize=23) # From the above plots, we can see that: # # * **orientation X** and **orientation Y** have values around -1.0 to 1.0 # * **orientation Z** and **orientation W** have values around -0.15 to 0.15 # * For orientation X, Y, Z and W **hard_tiles** have different distribution as compared to others. # * **angular_velocity_x** forms a perfect Normal distribution # * **angular_velocity_y** and **angular_velocity_z** have distributions close to a Normal for most surfaces, excepts for **hard_tiles**, **carpet** and **wood**. # * **linear_acceleration_X**, **linear_acceleration_Y** and **linear_acceleration_Z** forms a Normal distribution for all surfaces. # # ## Feature Engineering # To build the ML model we'll convert each time series values to the following metrics: # * Mean # * Standard Deviation # * Min and Max values # * Kurtosis Coefficient # * Skewness Coefficient # # Function that performs all data transformation and pre-processing def data_preprocessing(df, labeled=False): # New dataframe that will saves the tranformed data X = pd.DataFrame() # This list will save the type of surface for each series ID Y = [] # The selected attributes used in training selected_attributes = ['orientation_X', 'orientation_Y', 'orientation_Z', 'orientation_W', 'angular_velocity_X', 'angular_velocity_Y', 'angular_velocity_Z', 'linear_acceleration_X', 'linear_acceleration_Y', 'linear_acceleration_Z'] # The total number of series in training data total_test_series = len(df.series_id.unique()) for series in tqdm(range(total_test_series)): #for series in range(total_test_series): # Filter the series id in the DataFrame _filter = (df.series_id == series) # If data with labels if labeled: # Saves the type of surface (label) for each series ID Y.append((df.loc[_filter, 'surface']).values[0]) # Compute new values for each attribute for attr in selected_attributes: # Compute a new attribute for each series and save in the X DataFrame X.loc[series, attr + '_mean'] = df.loc[_filter, attr].mean() X.loc[series, attr + '_std'] = df.loc[_filter, attr].std() X.loc[series, attr + '_min'] = df.loc[_filter, attr].min() X.loc[series, attr + '_max'] = df.loc[_filter, attr].max() X.loc[series, attr + '_kur'] = df.loc[_filter, attr].kurtosis() X.loc[series, attr + '_skew'] = df.loc[_filter,attr].skew() return X,Y # + # Apply the Pre-Processing to train data X_train, Y_train = data_preprocessing(train_data, labeled=True) # Here is the result DataFrame X_train.head() # + # Transform the Y list in an array Y_train=np.array(Y_train) # Print the size X_train.shape, Y_train.shape # + # Apply the Pre-Processing to test data X_test, _ = data_preprocessing(test_data, labeled=False) X_test.head() # - print(X_test.shape) # ## Modeling # Importing packages from sklearn.model_selection import StratifiedShuffleSplit from sklearn.metrics import f1_score from sklearn.metrics import accuracy_score from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import ExtraTreesClassifier from sklearn.preprocessing import LabelEncoder import lightgbm as lgb # + # Get the labels (concrete, tiled, wood, etc.) 
unique_labels=list(train_data.surface.unique()) # Encode the train labels with value between 0 and n_classes-1 to use in Random Forest Classifier. le = LabelEncoder() Y_train_encoded = le.fit_transform(Y_train) Y_train_encoded # - # ### Using Gradient Boosting (LightGBM) # # LightGBM is a gradient boosting framework that uses tree based learning algorithms. # # Documentation: https://lightgbm.readthedocs.io/en/latest/Python-Intro.html # Function to perform all training steps for LGBM def train_lgbm_model(X_train, Y_train, X_test): # Variables that save the probabilities of each class predicted = np.zeros((X_test.shape[0],9)) measured= np.zeros((X_train.shape[0],9)) # Create a dictionary that saves the model create in each fold models = {} # Used to compute model accuracy all_scores = 0 # Use Stratified ShuffleSplit cross-validator # Provides train/test indices to split data in train/test sets. n_folds = 5 sss = StratifiedShuffleSplit(n_splits=n_folds, test_size=0.30, random_state=10) # Control the number of folds in cross-validation (5 folds) k=1 # From the generator object gets index for series to use in train and validation for train_index, valid_index in sss.split(X_train, Y_train): # Saves the split train/validation combinations for each Cross-Validation fold X_train_cv, X_validation_cv = X_train.loc[train_index,:], X_train.loc[valid_index,:] Y_train_cv, Y_validation_cv = Y_train[train_index], Y_train[valid_index] # Create the model lgbm = lgb.LGBMClassifier(objective='multiclass', is_unbalance=True, max_depth=10, learning_rate=0.05, n_estimators=500, num_leaves=30) # Training the model # eval gets the tuple pairs to use as validation sets lgbm.fit(X_train_cv, Y_train_cv, eval_set=[(X_train_cv, Y_train_cv), (X_validation_cv, Y_validation_cv)], early_stopping_rounds=60, # stops if 60 consequent rounds without decrease of error verbose=False, eval_metric='multi_error') # Get the class probabilities of the input samples # Save the probabilities for submission y_pred = lgbm.predict_proba(X_test) predicted += y_pred # Save the probabilities of validation measured[valid_index] = lgbm.predict_proba(X_validation_cv) # Cumulative sum of the score score = lgbm.score(X_validation_cv,Y_validation_cv) all_scores += score print("Fold: {} - LGBM Score: {}".format(k, score)) # Saving the model models[k] = lgbm k += 1 # Compute the mean probability predicted /= n_folds # Save the mean score value mean_score = all_scores/n_folds # Save the first trained model trained_model = models[1] return measured, predicted, mean_score, trained_model # Models is a dict that saves the model create in each fold in cross-validation measured_lgb, predicted_lgb, accuracy_lgb, model_lgb = train_lgbm_model(X_train, Y_train_encoded, X_test) print(f"\nMean accuracy for LGBM: {accuracy_lgb}") # Plot the Feature Importance for the first model created plt.figure(figsize=(15,30)) ax=plt.axes() lgb.plot_importance(model_lgb, height=0.5, ax=ax) plt.show() # + # Removing features with a importance score bellow 400 # The 400 values was chosen from several tests features_to_remove = [] feat_imp_threshold = 400 # A list of features and importance scores feat_imp = [] for i in range(len(X_train.columns)): feat_imp.append((X_train.columns[i], model_lgb.feature_importances_[i])) for fi in feat_imp: if fi[1] < feat_imp_threshold: features_to_remove.append(fi[0]) print(f"Number of feature to be remove: {len(features_to_remove)}\n") print(features_to_remove) # + # Removing features X_train_v2 = X_train.copy() X_test_v2 = X_test.copy() for f 
in features_to_remove: del X_train_v2[f] del X_test_v2[f] X_train_v2.shape, X_test_v2.shape # - # Train a new set of models measured_lgb, predicted_lgb, accuracy_lgb, lgbm_model = train_lgbm_model(X_train_v2, Y_train_encoded, X_test_v2) print(f"\nMean accuracy for LGBM: {accuracy_lgb}") # Using the new set of features the mean score was improved in just 1.1%. # ### Using Random Forest Classifier (RFC) # # A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. # # Documentation: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html # Function to perform all training steps def train_rfc(X_train, Y_train, X_test): # Create a dictionary that saves the model create in each fold models = {} # Variables that save the probabilities of each class predicted = np.zeros((X_test.shape[0],9)) measured = np.zeros((X_train.shape[0],9)) # Use Stratified ShuffleSplit cross-validator # Provides train/test indices to split data in train/test sets. n_folds = 5 sss = StratifiedShuffleSplit(n_splits=n_folds, test_size=0.30, random_state=10) # Control the number of folds in cross-validation (5 folds) k=1 # Used to compute model accuracy all_scores = 0 # From the generator object gets index for series to use in train and validation for train_index, valid_index in sss.split(X_train, Y_train): # Saves the split train/validation combinations for each Cross-Validation fold X_train_cv, X_validation_cv = X_train.loc[train_index,:], X_train.loc[valid_index,:] Y_train_cv, Y_validation_cv = Y_train[train_index], Y_train[valid_index] # Training the model rfc = RandomForestClassifier(n_estimators=500, min_samples_leaf = 1, max_depth= None, n_jobs=-1, random_state=30) rfc.fit(X_train_cv,Y_train_cv) # Get the class probabilities of the input samples # Save the probabilities for submission y_pred = rfc.predict_proba(X_test) predicted += y_pred # Save the probabilities of validation measured[valid_index] = rfc.predict_proba(X_validation_cv) # Cumulative sum of the score score = rfc.score(X_validation_cv,Y_validation_cv) all_scores += score print("Fold: {} - RF Score: {}".format(k, score)) # Saving the model models[k] = rfc k += 1 # Compute the mean probability predicted /= n_folds # Save the mean score value mean_score = all_scores/n_folds # Save the first trained model trained_model = models[1] return measured, predicted, mean_score, trained_model measured_rf, predicted_rf, accuracy_rf, model_rf = train_rfc(X_train_v2, Y_train, X_test_v2) print(f"\nMean accuracy for RF: {accuracy_rf}") # ### Using Extra-Trees Classifier # # The main difference between random forests and extra trees (usually called extreme random forests) lies in the fact that, instead of computing the locally optimal feature/split combination (for the random forest), for each feature under consideration, a random value is selected for the split (for the extra trees). # # This leads to more diversified trees and less splitters to evaluate when training an extremly random forest. 
# # Documentation: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html # Function to perform all training steps def train_etc(X_train, Y_train, X_test): # Create a dictionary that saves the model create in each fold models = {} # Variables that save the probabilities of each class predicted = np.zeros((X_test.shape[0],9)) measured = np.zeros((X_train.shape[0],9)) # Use Stratified ShuffleSplit cross-validator # Provides train/test indices to split data in train/test sets. n_folds = 5 sss = StratifiedShuffleSplit(n_splits=n_folds, test_size=0.30, random_state=10) # Control the number of folds in cross-validation (5 folds) k=1 all_scores = 0 # From the generator object gets index for series to use in train and validation for train_index, valid_index in sss.split(X_train, Y_train): # Saves the split train/validation combinations for each Cross-Validation fold X_train_cv, X_validation_cv = X_train.loc[train_index,:], X_train.loc[valid_index,:] Y_train_cv, Y_validation_cv = Y_train[train_index], Y_train[valid_index] # Training the model etc = ExtraTreesClassifier(n_estimators=400, max_depth=10, min_samples_leaf=2, n_jobs=-1, random_state=30) etc.fit(X_train_cv,Y_train_cv) # Get the class probabilities of the input samples # Save the probabilities for submission y_pred = etc.predict_proba(X_test) predicted += y_pred # Save the probabilities of validation measured[valid_index] = etc.predict_proba(X_validation_cv) # Cumulative sum of the score score = etc.score(X_validation_cv,Y_validation_cv) all_scores += score print("Fold: {} - ET Score: {}".format(k, score)) # Saving the model models[k] = etc k += 1 # Compute the mean probability predicted /= n_folds # Save the mean score value mean_score = all_scores/n_folds # Save the first trained model trained_model = models[1] return measured, predicted, mean_score, trained_model measured_et, predicted_et, accuracy_et, model_et = train_rfc(X_train_v2, Y_train, X_test_v2) print(f"\nMean accuracy for ET: {accuracy_et}") # ### Overall results print(f"LGBM accuracy: {accuracy_lgb}") print(f"RF accuracy: {accuracy_rf}") print(f"ET accuracy: {accuracy_et}") # For all algorithms used, the mean accuracy was the same. # # Let's combine them together to build a new powerful model. # ## Stacking # # Stacking is an ensemble learning technique that combines multiple classification or regression models via a meta-classifier or a meta-regressor. The base level models are trained based on a complete training set, then the meta-model is trained on the outputs of the base level model as features. # # The idea of stacking is to learn several different weak learners (heterogeneous learners) and combine them by training a meta-model to output predictions based on the multiple predictions returned by these weak models. # # So, we need to define two things in order to build our stacking model: the L learners we want to fit and the meta-model that combines them. # # In our case, the L learns are: LightGBM, Random Forest and Extra Trees. # The meta classifier wil be a Logistic Regression model. 
# # + # Creatin train and test datasets x_train = np.concatenate((measured_et, measured_rf, measured_lgb), axis=1) x_test = np.concatenate((predicted_et, predicted_rf, predicted_lgb), axis=1) print(x_train.shape, x_test.shape) # + # Training the model from sklearn.linear_model import LogisticRegression stacker = LogisticRegression(solver="lbfgs", multi_class="auto") stacker.fit(x_train,Y_train) # Perform predictions stacker_pred = stacker.predict_proba(x_test) # - # Creating submission file submission['surface'] = le.inverse_transform(stacker_pred.argmax(1)) submission.to_csv('submission_stack.csv', index=False) submission.head() # ## References # * https://www.researchgate.net/publication/332799607_Surface_Type_Classification_for_Autonomous_Robot_Indoor_Navigation # * https://www.kaggle.com/c/career-con-2019/overview # * http://mariofilho.com/tutorial-aumentando-o-poder-preditivo-de-seus-modelos-de-machine-learning-com-stacking-ensembles/ # * https://blog.statsbot.co/ensemble-learning-d1dcd548e936 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Stat # import dataset from https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/06_Stats/US_Baby_Names/US_Baby_Names_right.csv import pandas as pd df = pd.read_csv("https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/06_Stats/US_Baby_Names/US_Baby_Names_right.csv") df.head(10) # #### Delete the column 'Unnamed: 0' and 'Id' df = df.drop(["Unnamed: 0", "Id"], axis=1) df.head() # #### Are there more male or female names in the dataset? df["Gender"].value_counts() # #### Group the dataset by name and assign to names df = df.drop(["Year"], axis =1) df.head() name = df.groupby("Name").sum() name.head() name.sort_values("Count", ascending = 0).head() name.sort_values("Count", ascending = 1).head() # #### How many different names exist in the dataset? len(name) # #### What is the name with most occurrences? name.Count.idxmax() # #### How many different names have the least occurrences? len(name[name.Count == name.Count.min()]) # #### What is the median name occurrence? name[name.Count == name.Count.median()] # #### What is the standard deviation of names? 
name.Count.std() name.Count.median() # #### Get a summary with the mean, min, max, std and quartiles name.describe() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="Tc_OTCECOUtf" colab_type="code" colab={} # !pip install qiskit from qiskit import * from qiskit.visualization import plot_histogram # + id="yQ02iAhMhenC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 62} outputId="37dc6134-39c1-48db-9a13-9f33622a4ef6" from math import sqrt, pi #Initialization states must be normalized, otherwise you may recieve error #Let's try 1 & 0 initial_state = [1,0] qc = QuantumCircuit(1) qc.initialize(initial_state,0) qc.draw() # + id="1cn0GTkRicp-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a1e2ac52-e1be-469d-9610-9f0082f20e4c" get_states = execute(qc, Aer.get_backend('statevector_simulator')).result() print(get_states.get_statevector()) # + id="IuSQVK82i1Yg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="133531ac-1e38-4798-805e-95a594556e3f" plot_histogram(get_states.get_counts()) # + [markdown] id="eca06cnLi_qH" colab_type="text" # # Let's try Superposition # + id="SZSbnD0ljC7I" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 62} outputId="4c05801e-df37-4821-aaa5-061ad2c6f6fc" qc_1 = QuantumCircuit(1) superposition_state = [1/sqrt(2), 1/sqrt(2)] qc_1.initialize(superposition_state,0) qc_1.draw() # + id="HzQ25aaOjXbY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1d805bd7-1169-4b48-c449-474e079958ee" get_superposition_states = execute(qc_1, Aer.get_backend("statevector_simulator")).result() print(get_superposition_states.get_statevector()) # + id="yZpjO-1jjnw8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 328} outputId="1d4b2a4c-2599-409d-c57e-2bba5303099d" plot_histogram(get_superposition_states.get_counts()) # + [markdown] id="azD33XgcjzNQ" colab_type="text" # # Rules of Measurement (Normalization Exercise) # + [markdown] id="MbX5J9O_j7TI" colab_type="text" # Create a state vector that will give a 1/3 probability of measuring |0⟩ . # + id="uwFrebg9j2JB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 328} outputId="e884a053-8781-42f6-ea4f-d12092ffd405" state_vector = [1/sqrt(3), sqrt(2)/sqrt(3)] ex = QuantumCircuit(1) ex.initialize(state_vector,0) backend = Aer.get_backend('statevector_simulator') plot_histogram(execute(ex,backend).result().get_counts()) # + [markdown] id="XZ5Na_Y1lS7p" colab_type="text" # Create a different state vector that will give the same measurement probabilities. # + [markdown] id="JjGCOuFxnJpf" colab_type="text" # **In the equation above, |x⟩ can be any qubit state. To find the probability of measuring |x⟩ , we take the inner product of |x⟩ and the state we are measuring (in this case |ψ⟩ ), then square the magnitude. 
This may seem a little convoluted, but it will soon become second nature.** # + [markdown] id="o5eImSQ1neyX" colab_type="text" # ![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAMoAAAAwCAYAAABQbOXXAAAJ10lEQVR4Ae1crde6PBjef7FIJBqJRKKRSDQa/ROIRKPRSDQaiUQj0UgkXu/ZYDgnbFPxcf7ePec8B+Rj3LvvXfc3EPg/zwHPASMHiPEKf4HngMMcaMot1nGEgFCEaYGq/QyxHiif4asf9Q840J42CNIjGvas5oiUEhCSofwAWDxQ/kCg/hGf4ECNYrVCXt/Gbg4JCCEI5YO302/teaC8xT5/89c4cC2REgJCdzh3AxV1gRU7Fu1xWZgwD5SFGeqH+yMOdDXyiIEiRyWAcj1izYBCdqgWJsMBoDQ) # + [markdown] id="tjj9i-QLnhCv" colab_type="text" # Probability of measuring |x> state for any other state is basically dot product of Open In Colab # + [markdown] id="SYJLVtEKMFtY" colab_type="text" # # [Sudachiベースの学習済みWord2Vecモデルを使う](https://ohke.hateblo.jp/entry/2019/06/01/120000) # + [markdown] id="5JTYyluZOydf" colab_type="text" # ## [chiVe: Japanese Word Embedding with Sudachi & NWJC](https://github.com/WorksApplications/chiVe) # + id="7Ko40Kw-Lysx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 208} outputId="664f30d1-ccae-46c4-d411-b738964e4b69" # !wget https://object-storage.tyo2.conoha.io/v1/nc_2520839e1f9641b08211a5c85243124a/chive/chive-1.0-mc5-20190314.tar.gz # + id="1UGAFuFpMSSP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 69} outputId="5fa1388e-338e-4de7-83e9-616be7075645" # !tar -zxvf ./chive-1.0-mc5-20190314.tar.gz # + id="Dn3HjMx-MfbP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 541} outputId="1430d236-2bb2-4fa6-fec7-6de00eb11f20" # !pip install gensim # + id="wsP3qO0MMhIf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 89} outputId="3eb79a29-6259-4b0c-f67c-4b5752200720" # モデルのロード from gensim.models import KeyedVectors from gensim.test.utils import datapath nwjc_model = KeyedVectors.load_word2vec_format( datapath('/content/chive-1.0-mc5-20190314.txt'), binary=False ) # 語数, 次元数 print(len(nwjc_model.vocab), nwjc_model.vector_size) # + id="RoOldYwgMqTH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 89} outputId="cd9ae3b8-0a88-4547-f608-dddfcda69573" print(nwjc_model.most_similar('平成', topn=5)) # + id="EfEdxrB3MrV_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 89} outputId="98a162f4-698d-4341-8f9e-d84972f6caa1" print(nwjc_model.most_similar(positive=['兄弟', '女'], negative=['男'], topn=5)) # + [markdown] id="H0HKRVgeYTZL" colab_type="text" # # [Pythonで形態素解析器Sudachiを使う (SudachiPy)](https://ohke.hateblo.jp/entry/2019/03/09/101500) # + [markdown] id="yRx2c7G7Y-zY" colab_type="text" # ## [SudachiPy](https://github.com/WorksApplications/SudachiPy) # + id="AfsWEexPYg6I" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 191} outputId="3290fbf1-b239-45fe-9782-5f1c45279708" # !pip install SudachiPy # + id="d7nPg4AOY5lZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 260} outputId="90965e5e-7bf3-44c0-fdb4-770c097016a7" # !pip install https://object-storage.tyo2.conoha.io/v1/nc_2520839e1f9641b08211a5c85243124a/sudachi/SudachiDict_core-20200127.tar.gz # + id="eBlKWiInZIqJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="c2aca97b-1169-48e8-ca77-978918006400" # !sudachipy tokenize -h # + id="psE4-uUublAZ" colab_type="code" colab={} from sudachipy import tokenizer from sudachipy import dictionary tokenizer_obj = dictionary.Dictionary().create() # + id="Jzp2caGOb0iK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 
34} outputId="00021612-37c6-4bdd-a0fc-f0cd7e2f90bd" # Multi-granular tokenization # using `system_core.dic` or `system_full.dic` version 20190781 # you may not be able to replicate this particular example due to dictionary you use mode = tokenizer.Tokenizer.SplitMode.C [m.surface() for m in tokenizer_obj.tokenize("国家公務員", mode)] # => ['国家公務員'] # + id="hXi8W575b4aS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="123097c5-80a4-4802-9104-cdf7ceb084b0" mode = tokenizer.Tokenizer.SplitMode.B [m.surface() for m in tokenizer_obj.tokenize("国家公務員", mode)] # => ['国家', '公務員'] # + id="Di1jV8ltb7Tr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1bb3c22d-5e76-47e9-caad-5d393870d5db" mode = tokenizer.Tokenizer.SplitMode.A [m.surface() for m in tokenizer_obj.tokenize("国家公務員", mode)] # => ['国家', '公務', '員'] # + id="1slBzS9bb-gq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c8cedf32-0c78-4e3c-8150-52e7862434ad" # Morpheme information m = tokenizer_obj.tokenize("食べ", mode)[0] m.surface() # => '食べ' # + id="rLVC2vGgcDTb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="791e428b-4b30-491a-c30a-8c788cd91062" m.dictionary_form() # => '食べる' # + id="5rAQG-CjcFSz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="df8be1b2-b4d0-42e4-a9fe-98aa4e337d99" m.reading_form() # => 'タベ' # + id="KjXL5LqycHA6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ed4d673c-6e9a-457f-ec0c-0baf156d607d" m.part_of_speech() # => ['動詞', '一般', '*', '*', '下一段-バ行', '連用形-一般'] # + id="a3uzrIogcK7r" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1b5d8e31-5f8e-4930-9880-c022d759a433" # Normalization tokenizer_obj.tokenize("附属", mode)[0].normalized_form() # => '付属' # + id="xgbKl8PncNE1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a301f14c-cff5-49c1-a005-ba1d0afd437c" tokenizer_obj.tokenize("SUMMER", mode)[0].normalized_form() # => 'サマー' # + id="Zw8QM0YhcPeq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="00c25971-9213-4990-a2df-029902c70cfc" tokenizer_obj.tokenize("シュミレーション", mode)[0].normalized_form() # => 'シミュレーション' # + id="lfmIDzCTdHrS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 243} outputId="eb7825f9-508e-4c0e-9998-86d069dcb704" from sudachipy import tokenizer tokenizer_obj = dictionary.Dictionary().create() print(type(tokenizer_obj)) # text = '友人・我孫子とスカイツリーでスパゲティを食った。' mode = tokenizer.Tokenizer.SplitMode.C tokens = tokenizer_obj.tokenize(text, tokenizer.Tokenizer.SplitMode.C) print(type(tokens)) # for t in tokens: print(t.surface(), t.part_of_speech(), t.reading_form(), t.normalized_form()) # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # ## (2013) Combined Cointegration Approach # # * Existing applied economics literature provides many cointegration approaches to examine long-run relationship between macroeconomic variables. # * Before proceeding to cointegration approach, it is necessary to examine unit root properties of the variables, which helps in choosing suitable cointegration test for empirical model for reliable empirical results. 
# * Existing cointegration approaches include Engle and Granger (1987) (EG), Johansen (1991) (JOH), Phillips and Ouliaris (1990), (1994) (BO) and Banerjee $\it{et}$ $\it{al}$. (1998)(BDM). # * These cointegration approaches may provide ambiguous empirical results due to their explanatory power properties. # * Bayer and Hanck (2009 and 2013) developed a new cointegration approach known as the combined cointegration approach. This test combines the results of previous cointegration approaches (Johansen, Phillips and Ouliaris, Boswijk, and Banerjee) and provides Fisher F-statistics for more conclusive and reliable empirical findings. # * The Bayer and Hanck combined cointegration approach, the order of integration must be unique, i.e. $I(1)$. If the calculated F-statistic exceeds the critical value, we may reject the null of no cointegration; the reverse applies for the acceptance of the null hypothesis. # # # ### Fisher's formulas # The Fisher's formulas of computing Bayer and Hanck cointegration are as follows: # # 𝐸𝐺−𝐽𝑂𝐻=−2[ln(P_EG)] # # 𝐸𝐺−𝐽𝑂𝐻−𝐵𝑂−𝐵𝐷𝑀=−2[ln(P_EG)+ln(P_JOH)+ln(P_BO)+ln(P_BDM)] # # * Where P_EG, P_JOH, P_BO, P_BDM are the p-values of various individual cointegration tests such as Engle and Granger (1987), Johansen (1991), (1994) and Banerjee et al.(1998), respectively. # * The Fisher statistic is used to examine whether cointegration exists or not between the variables. We may reject the null hypothesis in favor of cointegration between the variables if the Fisher statistic exceeds the Bayer and Hanck critical bounds and vice versa. # # ### A joint test-Statistics for the Null of Non-Cointegration # * ., & ., 2013. Combining non cointegration tests. Journal of Time Series Analysis,34,83–95. # * Bayerhanck produces a joint test-statistic for the null of non-cointegration based on Engle-Granger, Johansen maximum eigenvalue, Boswijk and Banerjee tests. 
https://rdrr.io/github/jens-klenke/bayerhanck/ library(readxl) library(bayerhanck) dataset <- read_excel("Book2Q.xlsx") # dataframe attributes str(dataset) # descriptive statistics summary(dataset[,2:4]) # model LRGDP = f(LREP,LUNEM) Test Statistics englegranger(LRGDP ~ LREP + LUNEM, data = dataset, lags = 2, trend = "const") johansen(LRGDP ~ LREP + LUNEM, data = dataset, type = "eigen", lags = 2, trend = "const") banerjee(LRGDP ~ LREP + LUNEM, data = dataset, lags = 2, trend = "const") boswijk(LRGDP ~ LREP + LUNEM, data = dataset, lags = 2, trend = "const") # EG-JOH:lag 2, constant trend, significance level is 1% egj1 = bayerhanck(LRGDP ~ LREP + LUNEM, data = dataset, lags = 2, trend = "const", test = "eg-j", crit = 0.01) summary(egj1) # EG-JOH:lag 2, constant trend, significance level is 5% egj2 = bayerhanck(LRGDP ~ LREP + LUNEM, data = dataset, lags = 2, trend = "const", test = "eg-j", crit = 0.05) summary(egj2) # EG-JOH:lag 2, constant trend, significance level is 10% egj3 = bayerhanck(LRGDP ~ LREP + LUNEM, data = dataset, lags = 2, trend = "const", test = "eg-j", crit = 0.10) summary(egj3) # EG-JOH-BO-BDM:lag 2, constant trend, significance level is 1% bh1 = bayerhanck(LRGDP ~ LREP + LUNEM, data = dataset, lags = 2, trend = "const", test = "all", crit = 0.01) summary(bh1) #EG-JOH-BO-BDM:lag 2, constant trend, significance level is 5% bh2 = bayerhanck(LRGDP ~ LREP + LUNEM, data = dataset, lags = 2, trend = "const", test = "all", crit = 0.05) summary(bh2) # EG-JOH-BO-BDM:lag 2, constant trend, significance level is 10% bh3 = bayerhanck(LRGDP ~ LREP + LUNEM, data = dataset, lags = 2, trend = "const", test = "all", crit = 0.10) summary(bh3) # Bayer-Hanck Fisher Statistics with EG-JOH test plot(egj2) # Bayer-Hanck Fisher Statistics with EG-JOH-BO-BDM test plot(bh2) # ## Empirical Results # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# | Estimated models | EG-JOH | EG-JOH-BO-BDM | Lags | Cointegration |
# |---|---|---|---|---|
# | $LRGDP_{t}= LREP_{t} + LUNEM_{t}$ | 4.4554 | 5.6672 | 2 | No |
# | Significance level 1% (critical values) | 16.679 | 32.077 | | |
# | Significance level 5% (critical values) | 10.895 | 21.106 | | |
# | Significance level 10% (critical values) | 8.479 | 16.444 | | |
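# To make the decision rule explicit (this is only a reading of the table above, not an additional test):
# at the 5% level both computed Fisher statistics fall below their critical values, $4.4554 < 10.895$ for
# EG-JOH and $5.6672 < 21.106$ for EG-JOH-BO-BDM, so the null of no cointegration cannot be rejected,
# which is why the last column reports 'No'.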
    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #
    ASSIGNMENT 2
    #
    PREDICTING HOUSING PRICES WITH POLYNOMIAL REGRESSION
    #
Today we will continue working with the Boston housing data. 
# Recall from the previous lesson that we managed to train a linear regression 
# model on two features and, as a result, obtained an MSE score of 44 on the train 
# and 38 on the test data. This time we will use linear regression 
    # on all the features and visualize the correlation between them and our target value. 
    # We will also try using polynomial regression to solve the problem. 
# Finally, we will compare the results with those from the previous assignment. 
# # Note: this is a test version of the notebook #
    PART 1
    #
    Visualizing correlations
    #
First, let's import the necessary packages. 
import pandas as pd from pandas.plotting import scatter_matrix import matplotlib.pyplot as plt import seaborn as sns #just for convenience we will set a grid background of figures using seaborn sns.set() #
We created a function load_boston_df that loads the data and transforms it into a pandas DataFrame, 
# so you don't need to replicate the code from the first assignment. 
    def load_boston_df(): from sklearn import datasets boston_data = datasets.load_boston() columns_to_use = ['data','target','feature_names'] boston_data = dict([(i,boston_data[i]) for i in columns_to_use]) df = pd.DataFrame(columns=boston_data['feature_names'], data=boston_data['data']) df['target'] = boston_data['target'] return df df = load_boston_df() df.head() correlation = df.corr() correlation #
    Let's see only the correlation between target and features.
    correlation['target'].sort_values(ascending=False)[1:] #
Some features have a positive correlation with our target, 
# meaning that when the feature increases or decreases, the target tends to move in the same direction, 
# while others have a negative correlation, which implies that when the feature increases or decreases, 
# the target tends to move in the opposite direction, as the small example below illustrates. 
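# A tiny numeric illustration of the two cases (toy arrays, not the housing data):
import numpy as np
a = np.array([1, 2, 3, 4, 5])
print(np.corrcoef(a, 2*a)[0, 1])    # +1.0: the two series move in the same direction
print(np.corrcoef(a, -2*a)[0, 1])   # -1.0: they move in opposite directions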
    # Now let's plot the correlation matrix to better understand it. 
    # + #uncomment to see the docs of the function # #?scatter_matrix # - scatter_matrix(df, figsize=(15,10), alpha=0.3); plt.plot(); #
Let's also plot the dependency between the target value 
# and the feature with the largest negative correlation score.
    plt.figure(figsize=(15,10)) plt.scatter(df['target'],df['LSTAT']) plt.xlabel('target (price)') plt.ylabel('LSTAT (lower status of population)'); plt.title('Correlation between price of the house and lower status of population'); #
From the chart above it's clear why the 
# correlation between LSTAT and the target is negative.
    #
    PART 2
    #
    Using linear regression on all the features
    #
First, let's import the necessary packages. 
#you can see some warning messages below, but don't worry they won't affect your code from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error train_columns = list(df.columns) target_column = train_columns.pop(-1) train_columns target_column #we will set a random seed for the same results seed = 5 train_set, test_set = train_test_split(df,test_size=0.20, random_state=seed) X_train, y_train, X_test, y_test = train_set[train_columns], train_set[target_column],\ test_set[train_columns], test_set[target_column] #
It's time to create, train and evaluate our model
    clf = LinearRegression() clf.fit(X_train,y_train) y_predicted_train = clf.predict(X_train) print("Mean squared error on train data : {0}".format(mean_squared_error(y_predicted_train,y_train))) y_predicted_test = clf.predict(X_test) print("Mean squared error on test data : {0}".format(mean_squared_error(y_predicted_test,y_test))) #
Using all the features we got much better results. 
# One thing to notice is that the MSE score on the train data is lower than on the test data, which 
# is a sign of overfitting; we will discuss this more concretely in the next lab, 
# but for now let's move on.
    #
    PART 3
    #
    Using polynomial regression on all the features
    from sklearn.preprocessing import PolynomialFeatures clf = LinearRegression() train_set, test_set = train_test_split(df,test_size=0.20, random_state=seed) X_train, y_train, X_test, y_test = train_set[train_columns], train_set[target_column],\ test_set[train_columns], test_set[target_column] #
To use polynomial regression we first need to transform the features via the PolynomialFeatures class
#C - degree of the polynomial. Feel free to play around with this number and see the performance of the model depending on it C = 2 poly = PolynomialFeatures(C) poly_features_train = poly.fit_transform(X_train) clf.fit(poly_features_train,y_train) poly_features_test = poly.transform(X_test) y_predicted_train = clf.predict(poly_features_train) print("Mean squared error on train data : {0}".format(mean_squared_error(y_predicted_train,y_train))) y_predicted_test = clf.predict(poly_features_test) print("Mean squared error on test data : {0}".format(mean_squared_error(y_predicted_test,y_test))) #
Using polynomial regression with a degree of 2 we got much better 
# results on both train and test data, but we see that there is a gap between the model's performance on them. 
# The model does a much better job on the train data, which is clearly an example of overfitting. 
# In the next lesson we will try to fix this situation using regularization 
# and by scaling our data.
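# As a rough sketch of that direction (not part of this assignment): the pipeline below scales the features,
# expands them to degree-2 polynomial terms and fits a Ridge regression. It assumes the `df`, `train_columns`,
# `target_column` and `seed` variables defined earlier; the penalty strength alpha=1.0 is an arbitrary
# illustrative choice. Note also that `datasets.load_boston` has been removed from recent scikit-learn
# releases, so loading the data may require an alternative source.
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# same split as before, so the scores stay comparable
train_set, test_set = train_test_split(df, test_size=0.20, random_state=seed)
X_train, y_train = train_set[train_columns], train_set[target_column]
X_test, y_test = test_set[train_columns], test_set[target_column]

reg = make_pipeline(StandardScaler(), PolynomialFeatures(2), Ridge(alpha=1.0))
reg.fit(X_train, y_train)

print("Mean squared error on train data : {0}".format(mean_squared_error(reg.predict(X_train), y_train)))
print("Mean squared error on test data : {0}".format(mean_squared_error(reg.predict(X_test), y_test)))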
    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: trainingAI # language: python # name: trainingai # --- # # Introduction #
    # # **What?** `__call__` method # # # # Definition #
    # # - `__call__` is a built-in method which enables to write classes where the instances behave like functions and can be called like a function. # - In practice: `object()` is shorthand for `object.__call__()` # # # # _ _call_ _ vs. _ _init_ _ #
    # # - `__init__()` is properly defined as Class Constructor which builds an instance of a class, whereas `__call__` makes such a instance callable as a function and therefore can be modifiable. # - Technically `__init__` is called once by `__new__` when object is created, so that it can be initialised # - But there are many scenarios where you might want to redefine your object, say you are done with your object, and may find a need for a new object. With `__call__` you can redefine the same object as if it were new. # # # # Example #1 #
    class Example(): def __init__(self): print("Instance created") # Defining __call__ method def __call__(self): print("Instance is called via special method __call__") e = Example() e.__init__() e.__call__() # # Example #2 #
    class Product(): def __init__(self): print("Instance created") # Defining __call__ method def __call__(self, a, b): print("Instance is called via special method __call__") print(a*b) p = Product() p.__init__() # Is being call like if p was a function p(2,3) # The cell above is equivalent to this call p.__call__(2,3) # # Example #3 #
    class Stuff(object): def __init__(self, x, y, Range): super(Stuff, self).__init__() self.x = x self.y = y self.Range = Range def __call__(self, x, y): self.x = x self.y = y print("__call with (%d, %d)" % (self.x, self.y)) def __del__(self, x, y): del self.x del self.y del self.Range s = Stuff(1, 2, 3) s.x s(7,8) s.x # # Example #4 #
    class Sum(): def __init__(self, x, y): self.x = x self.y = y print("__init__ with (%d, %d)" % (self.x, self.y)) def __call__(self, x, y): self.x = x self.y = y print("__call__ with (%d, %d)" % (self.x, self.y)) def sum(self): return self.x + self.y sum_1 = Sum(2,2) sum_1.sum() sum_1 = Sum(2,2) sum_1(3,3) sum_1 = Sum(2,2) # This is equivalent to sum_1.__call__(3,3) # You can also do this sum_1 = Sum(2,2)(3,3) # # References #
    # # - https://www.geeksforgeeks.org/__call__-in-python/ # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- def make_placeholder(s): '''Converts a string to placeholder format {{string}}''' return '{{'+s+'}}' def convert_placeholder_reps(reps): for old_key in list(reps.keys()): new_key = make_placeholder(old_key) reps[new_key] = str(reps.pop(old_key)) return reps def gen_job_array(ids, num_jobs): '''Given list of ids, generate list of starting and endign job numbers Output: {start_ids: [], end_ids: []} ''' num_per_job = math.ceil(len(ids) / num_jobs) def multi_str_replace(text, reps): ''' Makes multiple string substitutions Keyword arguments: text -- original string reps -- dictionary where keys are old substrings and values are new substrings ''' for old, new in reps.items(): text = text.replace(old, new) return text reps = {'sape': 4139, 'jack': 4098, 'guido': 412} num_jobs = 3 ids = range(10) import math num_per_job = math.ceil(len(ids) / num_jobs) start_idx = list(range(0, len(ids), num_per_job)) start_idx [x+num_per_job-1 for x in start_idx] # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 0.4.5 # language: julia # name: julia-0.4 # --- using ODE,Plots pyplot(size=(300,200),leg=false, guidefont=font(7), titlefont=font(7)); #function oscillator(t, y) # y[2] = - 3* + y[1] - y[2] / 10 #end oscillator(t, y) = [y[2], - 3* + y[1] - y[2] / 10] initial = [1.0,0.0]; t = 0:0.01:50 t,y = ode23(oscillator, initial, t) y=hcat(y...).'; y[1:5,:]; vₓ=y[:,1]; vᵥ=y[:,2]; plot(t,y,title="Mouvement", bg=RGB(.2,.2,.2), xlabel ="Temps",ylabel = "Vitesse") plot(vₓ,vᵥ,title="portrait de phase", bg=RGB(.2,.2,.2), xlabel ="Temps",ylabel = "Vitesse") using ODE,Plots pyplot(size=(700,300),leg=false, guidefont=font(7), titlefont=font(7)); #function oscillator(t, y) # y[2] = - 3* + y[1] - y[2] / 10 #end oscillator(t, y) = [y[2], - 3* + y[1] - y[2] / 10] initial = [1.0,0.0]; t = float([0:0.01:50]); t,y = ode23(oscillator, initial, t) y=hcat(y...).'; y[1:5,:]; vₓ=y[:,1]; vᵥ=y[:,2]; o=plot(t,y,title="Mouvement", bg=RGB(.2,.2,.2), xlabel ="Temps",ylabel = "Vitesse") l=plot(vₓ,vᵥ,title="portrait de phase", bg=RGB(.2,.2,.2), xlabel ="Temps",ylabel = "Vitesse") plot(o,l,layout=2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ASCII plotting # # **Note: For this notebook to work properly, you need to install the `asciiplotlib` and `xtermcolor` packages.** # + from physt import examples from physt import plotting plotting.set_default_backend("ascii") import numpy as np np.random.seed(42) # - examples.normal_h1().plot() plotting.ascii.ENABLE_ASCIIPLOTLIB = False examples.normal_h1().plot(show_values=True) examples.normal_h2().plot(cmap='Greys_r') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # + deletable=true editable=true # Plot inside the notebook # %matplotlib inline # this is used for debugging purposes only. 
allows to reload classes when changed # %load_ext autoreload # %autoreload 2 # + deletable=true editable=true # Ignore warnings in notebook import warnings warnings.filterwarnings('ignore') # - # cd .. # + deletable=true editable=true # load all the necessary modules for the analysis from script import * from skimage import io import numpy as np # + deletable=true editable=true import matplotlib.pyplot as plt # + deletable=true editable=true # General parameters of the plot plt.rcParams['figure.figsize'] = 8,8 plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'viridis' plt.rcParams['pdf.fonttype'] = 42 plt.rcParams['ps.fonttype'] = 42 # + deletable=true editable=true img = io.imread('data/Pos4_Green.tif') img_red = io.imread('data/Pos4_Red.tif') # + deletable=true editable=true #stack = np.stack((img, img_red)) # + deletable=true editable=true #img_rgb = display.to_rgb(stack, auto = True, bf=False) # + deletable=true editable=true #z, _ , _, _ = img_rgb.shape # + [markdown] deletable=true editable=true # # Create and plot an overlay image of Red and Green Channels # + deletable=true editable=true stack = np.stack((img, img_red)) img_rgb = display.to_rgb(stack, auto = True, bf=False) # + deletable=true editable=true frame=10 # + deletable=true editable=true plt.imshow(img_rgb[frame]) plt.axis("off") #plt.savefig("frame_{}.png".format(x),bbox_inches='tight' , dpi = 50) # + [markdown] deletable=true editable=true # # Cell division event # + deletable=true editable=true fig, axes = plt.subplots(2,4, figsize=(16,8)) for i in range(2): for j in range(4): if i ==0: axes[i,j].imshow(img[j,650:950,650:950], cmap = 'gray',interpolation='nearest') axes[i,j].axis("off") axes[i,j].set_title('plane_{}'.format(j+1)) else: axes[i,j].imshow(img[j+4,650:950,650:950], cmap = 'gray',interpolation='nearest') axes[i,j].set_title('plane_{}'.format(j+5)) axes[i,j].axis("off") #plt.savefig("bii_track/division.png",bbox_inches='tight' , dpi = 300) # + deletable=true editable=true nt, y, x = img.shape # + [markdown] deletable=true editable=true # ## Flat field correction # + deletable=true editable=true img_correct = img_cleaning.flat_field(img) # + [markdown] deletable=true editable=true # ## Filtering the noise using non local mean, assuming it's Gaussian # + [markdown] deletable=true editable=true # Adjusting nt, you can chose the numbe of frame to analyse # + deletable=true editable=true result_denoised = img_cleaning.parallel_denoise(img_correct, nt = nt) # + [markdown] deletable=true editable=true # ## Segmentation of cell and parasite # + deletable=true editable=true result, fill_segmentation, prop_red, Red_binary_open = segmentation.parallel_segmentation(img, img_red, result_denoised, nt = nt) # + deletable=true editable=true from skimage.color import label2rgb plt.contour(Red_binary_open[frame], colors="white") plt.imshow(label2rgb(fill_segmentation[frame], bg_label=0)) plt.axis("off") # + deletable=true editable=true plt.imshow(img[-1], cmap="gray",interpolation='nearest') for coor in result: plt.scatter(coor[:,1], coor[:,0], s=100, marker = "x",c='r') plt.axis("off") #plt.savefig("bii_track/lastframe_track.png",bbox_inches='tight' , dpi = 300) # + [markdown] deletable=true editable=true # ## Tracking the cells # + deletable=true editable=true list_track = tracking.track_multi(img, result) # + [markdown] deletable=true editable=true # ## Visualizing the result # + deletable=true editable=true plot_track.browse_track_multi(img, list_track, fill_segmentation) # + deletable=true 
editable=true df = pandas_track.create_pandas_track(list_track) # + deletable=true editable=true df.head() # + deletable=true editable=true import os import imageio import matplotlib.pyplot as plt def create_GIF(): path = "GIF" imageformat=".png" imfilelist=[os.path.join(path,f) for f in os.listdir(path) if f.endswith(imageformat)] images = [] for filename in imfilelist: images.append(imageio.imread(filename)) kargs = { 'duration': 0.4} imageio.mimsave('GIF/movie.gif', images, **kargs) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline df1 = pd.read_csv(r'C:\Users\Jalpa\Desktop\Modular-1\Machine-Learning-Models\DATA\Mall_Customers.csv') df1 df =df1.drop(columns = ['Unnamed: 0'] , axis=1) df from sklearn.cluster import KMeans x = df.iloc[: , 3:] x wcss = [] for i in range(1,15): kmean = KMeans(n_clusters=i , init='k-means++' , random_state=30) kmean.fit(x) wcss.append(kmean.inertia_) wcss plt.plot(range(1,15) , wcss) kmean1= KMeans(n_clusters=5 ,init = 'k-means++' , random_state=30) kmean1.fit_predict(x) x['cluster number'] = kmean1.fit_predict(x) x[x['cluster number'] ==3] x[x['cluster number'] ==1] 55 , 31 kmean1.predict([[69,91]]) # # Minibatch Kmean from sklearn.cluster import MiniBatchKMeans minibatch_kmean = MiniBatchKMeans(n_clusters=5) minibatch_kmean.fit(x) minibatch_kmean.predict([[55,31 ,4]]) # # DBSCAN from sklearn.cluster import DBSCAN dbscan = DBSCAN(eps=1, min_samples=3) dbscan.fit(x) x dbscan.labels_ len(set(dbscan.labels_)) x['cluster number '] = dbscan.labels_ x from sklearn import metrics true_label = x['cluster number'] predicted_label = x['cluster number'] metrics.adjusted_rand_score(true_label,predicted_label) metrics.adjusted_rand_score(x['cluster number'],dbscan.labels_) metrics.adjusted_rand_score(true_label,predicted_label) metrics.jaccard_score(true_label,predicted_label, average='macro') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MScFE 630 - Group 14 - Submission 1 Code - Qs 8, 9 # ## Inputs # + #Market-specific inputs r_f = 0.08 #Stock-specific inputs S_0 = 100 sigma_stock = 0.3 #Option-specific inputs T = 1 months = 12*T K = 100 #Option struck at-the-money B = 150 #Counterparty-specific inputs sigma_cp = 0.25 debt = 175 #Due in one year, same as the option's maturity corr_stock_cp = 0.2 recovery_rate = 0.25 V_0 = 200 # - # ## Creating Empty Lists # + #Ensure same set of pseudo-random numbers upon each code execution import numpy as np np.random.seed(0) #Create empty arrays to store European UAO call and CVA mean value and standard error of mean for 50 sample sizes european_uao_call_meanval = [None]*50 european_uao_call_stderror = [None]*50 cva_meanval = [None]*50 cva_stderror = [None]*50 default_adj_call_val = [None]*50 default_adj_call_val_stderror = [None]*50 # - # ## Functions #Stock price path generator based on geometric Brownian motion def stock_price_path(periods_per_path, current_stock_price, risk_free_rate, stock_vol, time_increment): series = np.zeros(periods_per_path) series[0] = current_stock_price for i in range(1, periods_per_path): dWt = np.random.normal(0, 1) * np.sqrt(time_increment) #Brownian 
motion series[i] = series[i-1] * np.exp((risk_free_rate - stock_vol**2/2)*time_increment + stock_vol*dWt) return series #Discounted vanilla call payoff def discounted_call_payoff(terminal_stock_price, strike, risk_free_rate, time_to_maturity): return np.exp(-risk_free_rate*time_to_maturity)*np.maximum(terminal_stock_price - strike, 0) #Black scholes price of vanilla call option def bs_call(current_stock_price, strike, time_to_maturity, risk_free_rate, volatility): from scipy import stats d1 = (np.log(current_stock_price/strike) + (risk_free_rate + volatility**2/2)*time_to_maturity)/(volatility*np.sqrt(time_to_maturity)) d2 = d1 - volatility * np.sqrt(time_to_maturity) return current_stock_price * stats.norm.cdf(d1) - strike * np.exp(-risk_free_rate*time_to_maturity) * stats.norm.cdf(d2) #Terminal stock price def terminal_value(initial_stock_price, risk_free_rate, volatility, Z, time_to_maturity): return initial_stock_price * np.exp((risk_free_rate - volatility**2/2)*time_to_maturity + volatility*np.sqrt(time_to_maturity)*Z) #Vanilla call payoff def call_payoff(terminal_stock_price, strike): return np.maximum(terminal_stock_price - strike, 0) # ## Correlation Matrix corr_matrix = np.array([[1, corr_stock_cp],[corr_stock_cp, 1]]) # ## Visualising 100 Stock Price Paths (As An Example) # + import matplotlib.pyplot as plt paths_100 = [] for sample_path in range(0,100): plt.plot(stock_price_path(months, S_0, r_f, sigma_stock, T/months)) plt.show() # - # ## Monte Carlo Simulation # + #Monte Carlo simulation from scipy.stats import norm for simulation in range(1, 51): paths = simulation*1000 all_paths = np.zeros([paths, months]) #Call price estimate for i in range(0, paths): all_paths[i] = stock_price_path(months, S_0, r_f, sigma_stock, T/months) call_values = np.zeros([paths, 2]) path_no = -1 for path in all_paths: path_no += 1 if sum((path >= B)) == 0: call_values[path_no, 0] = discounted_call_payoff(path[len(path)-1], K, r_f, T) call_values[path_no, 1] = 1 european_uao_call_meanval[simulation-1] = np.mean(np.extract(call_values[:, 1] == 1, call_values[:, 0])) european_uao_call_stderror[simulation-1] = np.std(np.extract(call_values[:, 1] == 1, call_values[:, 0]) ) / np.sqrt(np.sum(call_values[:, 1])) #CVA estimate norm_matrix = norm.rvs(size = np.array([2, paths])) corr_norm_matrix = np.matmul(np.linalg.cholesky(corr_matrix), norm_matrix) terminal_stock_val = terminal_value(S_0, r_f, sigma_stock, corr_norm_matrix[0, ], T) terminal_firm_val = terminal_value(V_0, r_f, sigma_cp, corr_norm_matrix[1, ], T) call_terminal_val = call_payoff(terminal_stock_val, K) amount_lost = np.exp(-r_f*T) * (1-recovery_rate) * (terminal_firm_val < debt) * call_terminal_val cva_meanval[simulation-1] = np.mean(amount_lost) cva_stderror[simulation-1] = np.std(amount_lost)/ np.sqrt(paths) #Default-adjusted Call Value default_adj_call_val[simulation-1] = european_uao_call_meanval[simulation-1] - cva_meanval[simulation-1] default_adj_call_val_stderror[simulation-1] = np.sqrt((european_uao_call_stderror[simulation-1])**2 + (cva_stderror[simulation-1])**2) print('Running simulation', simulation, '...', 'Call Value:', european_uao_call_meanval[simulation-1].round(3), 'CVA:', cva_meanval[simulation-1].round(3), 'Default-adj Call Value:', default_adj_call_val[simulation-1].round(3)) # - # ## Table of Estimates import pandas as pd df = pd.DataFrame(list(zip(european_uao_call_meanval, cva_meanval, default_adj_call_val)), columns = ['Default-free UAO Call Value', 'CVA Estimate', 'Default-adjusted UAO Call Value']) df.index.name = 
'Simulation No.' df.index += 1 df.round(3) # ## Plot of Default-free European UAO Call Price Estimates plt.plot([sum(european_uao_call_meanval)/len(european_uao_call_meanval)]*50) plt.plot(european_uao_call_meanval, '.') plt.plot(sum(european_uao_call_meanval)/len(european_uao_call_meanval) + np.array(european_uao_call_stderror) * 3, 'r') plt.plot(sum(european_uao_call_meanval)/len(european_uao_call_meanval) - np.array(european_uao_call_stderror) * 3, 'r') plt.show() # ## Plot of CVA Estimates plt.plot([sum(cva_meanval)/len(cva_meanval)]*50) plt.plot(cva_meanval, '.') plt.plot(sum(cva_meanval)/len(cva_meanval) + np.array(cva_stderror) * 3, 'r') plt.plot(sum(cva_meanval)/len(cva_meanval) - np.array(cva_stderror) * 3, 'r') plt.show() # ## Plot of Default-adjusted European UAO Call Price Estimates plt.plot([sum(default_adj_call_val)/len(default_adj_call_val)]*50) plt.plot(default_adj_call_val, '.') plt.plot(sum(default_adj_call_val)/len(default_adj_call_val) + np.array(default_adj_call_val_stderror) * 3, 'r') plt.plot(sum(default_adj_call_val)/len(default_adj_call_val) - np.array(default_adj_call_val_stderror) * 3, 'r') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import glob import logging import pandas as pd import os import json from ml4ir.base.io import file_io from ml4ir.base.data import tfrecord_writer from sklearn.datasets import load_iris from ml4ir.base.features.feature_config import parse_config from ml4ir.base.features.feature_config import ExampleFeatureConfig from ml4ir.base.config.keys import TFRecordTypeKey # Setup logging logger = logging.getLogger() logger.setLevel(logging.DEBUG) logging.debug("Logger is initialized...") # - # Create the a dataframe for iris df = load_iris() df.feature_names = [x[0]+x.split()[1] for x in df.feature_names] # making feature names shorter e.g., sepal length (cm) -> s_length data = pd.DataFrame(df.data, columns=df.feature_names) data['label'] = df['target'] data['query_key'] = data.index data.head() feature_config_yaml = ''' query_key: name: query_key node_name: query_key trainable: false dtype: int64 log_at_inference: true feature_layer_info: type: numeric shape: null serving_info: required: false default_value: 0 tfrecord_type: context label: name: label node_name: label trainable: false dtype: int64 log_at_inference: true feature_layer_info: type: numeric shape: null serving_info: required: false default_value: 0 tfrecord_type: sequence features: - name: slength node_name: slength trainable: true dtype: float log_at_inference: false feature_layer_info: type: numeric shape: null serving_info: required: true default_value: 0.0 tfrecord_type: sequence - name: swidth node_name: swidth trainable: true dtype: float log_at_inference: false feature_layer_info: type: numeric shape: null serving_info: required: true default_value: 0.0 tfrecord_type: sequence - name: plength node_name: plength trainable: true dtype: float log_at_inference: false feature_layer_info: type: numeric shape: null serving_info: required: true default_value: 0.0 tfrecord_type: sequence - name: pwidth node_name: pwidth trainable: true dtype: float log_at_inference: false feature_layer_info: type: numeric shape: null serving_info: required: true default_value: 0.0 tfrecord_type: sequence ''' feature_config: ExampleFeatureConfig = parse_config(TFRecordTypeKey.EXAMPLE, feature_config_yaml, 
logger=logger) # + # Save as TFRecord SequenceExample/Example TFRECORD_DIR = '/tmp/classification/' if not os.path.exists(TFRECORD_DIR): os.makedirs(TFRECORD_DIR) tfrecord_writer.write_from_df(d, tfrecord_file=os.path.join(TFRECORD_DIR, 'file_0.tfrecord'), feature_config=feature_config, tfrecord_type=TFRecordTypeKey.EXAMPLE) # Let's see what it looks like df.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="_aeeICXStavU" outputId="0a1001a5-47e4-4457-de56-772068584d0e" colab={"base_uri": "https://localhost:8080/"} #2020-ml-p4-re # !pip uninstall --y kaggle # !pip install --upgrade pip # !pip install kaggle==1.5.6 # !mkdir -p ~/.kaggle # !cp kaggle.json ~/.kaggle/ # !chmod 600 ~/.kaggle/kaggle.json # !kaggle -v # + id="yw3ysPvoIc7p" outputId="2caed467-4220-4b9d-bad2-89e61406d535" colab={"base_uri": "https://localhost:8080/"} # !kaggle competitions download -c 2020ml4re # !unzip 2020ml4re.zip # + id="F5Og7qkyIjvh" outputId="ef596a79-9723-49da-e1a8-ac1c550f3266" colab={"base_uri": "https://localhost:8080/"} import seaborn as sns import pandas as pd import numpy as np df_train = pd.read_csv('train.csv') df_test = pd.read_csv('test.csv') df_train.drop(['age_group','religion'], axis=1, inplace=True) df_test.drop(['age_group','religion'], axis=1, inplace=True) full_data = [df_train,df_test] pd.options.display.max_columns = None print(df_train.head(20)) print(df_test.head(20)) # + id="zstUYQfYAIis" outputId="6f9be2d6-b8a5-48d4-d9d5-30f8d8357c60" colab={"base_uri": "https://localhost:8080/"} print (df_train[["gender", "voted"]].groupby(['gender'], as_index=False).mean()) print (df_train[["race", "voted"]].groupby(['race'], as_index=False).mean()) for dataset in full_data: dataset['gender'] = dataset['gender'].map( {'Female': 0, 'Male': 1} ).astype(int) dataset['race'] = dataset['race'].map( {'Arab': 0, 'Asian': 1,'Black': 2, 'Indigenous Australian': 3,'Native American':4,'Other':5,'White':6}).astype(int) # + id="AezdARIMJsk6" outputId="dfca1e9c-1316-4afa-8185-d0c00e71ca8c" colab={"base_uri": "https://localhost:8080/"} X = df_train.iloc[:,1:-1] y = df_train.iloc[:,[-1]] test_x = df_test.iloc[:,1:] print(X.dtypes) print(y.shape) print(test_x.shape) #religion object city #gender object male , female #age_group object 30s... #race object white .. 
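# The two `.map()` calls above hard-code the category-to-integer dictionaries.
# As an aside (not part of the original notebook), the same encoding can be built
# programmatically from whatever categories appear in the training data; the toy
# frame below is purely illustrative, and only the column names 'gender' and 'race'
# come from the cells above.

# +
import pandas as pd

def build_mappings(frame, columns):
    # One integer code per category, per column, in sorted order
    return {col: {cat: code for code, cat in enumerate(sorted(frame[col].dropna().unique()))}
            for col in columns}

toy_train = pd.DataFrame({'gender': ['Female', 'Male', 'Male'],
                          'race': ['White', 'Asian', 'Black']})
mappings = build_mappings(toy_train, ['gender', 'race'])
encoded = toy_train.assign(**{col: toy_train[col].map(mapping)
                              for col, mapping in mappings.items()})
print(mappings)
print(encoded)
# -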
# + id="9fScXL76KCJn" from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=777,stratify=y) # + id="fCDRVQYNKFTv" outputId="2620d9d8-0440-453d-f582-eb0d14d5caa1" colab={"base_uri": "https://localhost:8080/"} print('y_train class distribution') print(y_train.value_counts(normalize=True)) print('y_test class distribution') print(y_test.value_counts(normalize=True)) # + id="80USsgnjKvim" outputId="50fc362e-b20f-424f-dcbe-72d8d0cffc30" colab={"base_uri": "https://localhost:8080/"} from sklearn.ensemble import GradientBoostingClassifier clf = GradientBoostingClassifier(random_state=0) clf.fit(X_train, y_train) y_pred = clf.predict(test_x) y_pred # + id="B5PP64XuL13E" outputId="b446900a-d2d9-4630-d862-72aa568fa61a" colab={"base_uri": "https://localhost:8080/", "height": 419} submit = pd.read_csv('sampleSubmission.csv') submit # + id="BE3_tFEsMJSi" outputId="a18441b9-a9fd-4dad-aad3-553fff3918a1" colab={"base_uri": "https://localhost:8080/", "height": 419} for i in range(len(y_pred)): submit['voted'][i] = y_pred[i].item() submit # + id="6e_SsZZlMvB8" outputId="eaa70adf-75d6-4b74-e842-00250af5a8e2" colab={"base_uri": "https://localhost:8080/"} submit.to_csv('result.csv',mode='w',index=False) # !kaggle competitions submit -c 2020ml4re -f result.csv -m "14010974_이기택" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### 加载数据,切分训练测试集 # + import pandas as pd # Read the data data = pd.read_csv('AER_credit_card_data.csv', true_values = ['yes'], false_values = ['no']) # 需要通过设置true_values,false_values来识别特征找那个的true/false类型 # 特征数据中的True和false,送进模型前,不用处理吗? # Select target y = data.card # Select predictors X = data.drop(['card'], axis=1) print("Number of rows in the dataset:", X.shape[0]) X.head() # - # ### 由于这是一个小型数据集,我们将使用交叉验证来确保准确测量模型质量 # + from sklearn.pipeline import make_pipeline from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import cross_val_score # Since there is no preprocessing, we don't need a pipeline (used anyway as best practice!) my_pipeline = make_pipeline(RandomForestClassifier(n_estimators=100)) cv_scores = cross_val_score(my_pipeline, X, y, cv=5, scoring='accuracy') print("Cross-validation accuracy: %f" % cv_scores.mean()) # - # 根据经验,你会发现找到98%的精确模型是非常罕见的。 可能发生了数据泄露,我们应该更仔细地检查目标泄漏的数据。以下是数据摘要: # * card:如果接受信用卡申请,则为1,否则为0 # * reports:主要贬损报告的数量 # * age:年龄n岁加上一年的十二分之一 # * income:年收入(除以10,000) # * share:每月信用卡支出与年收入的比率 # * expenditure:平均每月信用卡支出 # * owner:1如果拥有房屋,0如果租房 # * selfempl:如果是自雇人员,则为1,否则为0 # * dependents:1 +家属人数 # * months:生活在当前地址的月份 # * majorcards:持有的主要信用卡数量 # * active:活动信用帐户的数量
    # 这些特征里面,一些变量看起来很可疑。 例如,支出是指支付此卡还是在使用之前使用的卡上的支出?在这个问题上,基础的数据比较可能非常有用 # + expenditures_cardholders = X.expenditure[y] # expenditure[y]? expenditures_noncardholders = X.expenditure[~y] # expenditure[~y] print('Fraction of those who did not receive a card and had no expenditures: %.2f' \ %((expenditures_noncardholders == 0).mean())) #? print('Fraction of those who received a card and had no expenditures: %.2f' \ %(( expenditures_cardholders == 0).mean())) #? # - # 如上所示,没有收到卡的每个人都没有支出,而只有2%收到卡的人没有支出。 我们的模型似乎具有高精度并不奇怪。 但这也似乎是目标泄漏的情况,其中支出可能意味着他们申请的卡上的支出。 # # 由于份额部分由支出决定,因此也应予以排除。 变量active和majorcards有点不太清楚,但从描述来看,它们听起来很有意义。 在大多数情况下,如果你无法追踪创建数据的人员以了解更多信息,那么最好是别用。 # # 我们将运行没有目标泄漏的模型,如下所示: # + # Drop leaky predictors from dataset potential_leaks = ['expenditure', 'share', 'active', 'majorcards'] X2 = X.drop(potential_leaks, axis=1) # Evaluate the model with leaky predictors removed cv_scores = cross_val_score(my_pipeline, X2, y, cv=5, scoring='accuracy') print("Cross-val accuracy: %f" % cv_scores.mean()) # - # 这种准确性低了不少,可能令人失望。 但是,我们可以预期在新应用程序中使用它的准确率大约为80%,而泄漏模型可能会比这更糟糕(尽管交叉验证中的表观得分更高)。 # # ### 结论 # # 在许多数据科学应用中,数据泄漏可能是数百万美元的错误。 仔细分离培训和验证数据可以防止训练测试污染,pipeline可以帮助实现这种分离。 同样,谨慎,常识和数据探索的组合可以帮助识别目标泄漏。 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # At the beginning of this video # 1. We were able to use @jit decorator in our Python code. import numpy as np def add_5(arr): arr += 5 return arr arr = np.random.random(55000000) # %timeit add_5(arr) # # Using Numba # + from numba import jit @jit def add_5_numba(arr): arr += 5 return arr # - # %timeit add_5_numba(arr) # # Using threads with Numba @jit( parallel=True) def add_5_numba_threads(arr): arr += 5 return arr # %timeit add_5_numba_threads(arr) # # By the end of this video # 1. We will be able to use @jit decorator for parallelizing our Python Code. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="Uo_p9OYg2mV7" # # Fastai Application Architectures # > In this article we will look at how to build custom applications in the fastai library, by looking at how current fastai image model applications are actually built. # - toc: true # - comments: true # - image: images/fastai.jpeg # - categories: [fastai] # + id="daRa_FSB2bVN" #hide # !pip install -Uqq fastbook import fastbook fastbook.setup_book() from fastbook import * # + [markdown] id="KwDeQlDR29VQ" # ## Introduction # # The [fastai deep learning library (as of 2021)](https://www.fast.ai/2020/02/13/fastai-A-Layered-API-for-Deep-Learning/) is a layered API that has 4 levels of abstraction. # # - Application layer # - High level API # - Mid level API # - Low level API # # ![](https://github.com/pranath/blog/raw/master/images/fastai-layered.png "The fastai layered API") # # In this article we will look at how to build custom applications in the fastai library, by looking at how current fastai image model applications are actually built. # + [markdown] id="sTbL4lJ33C4M" # ## Fastai Image Model Applications # # ### cnn_learner # # When using this application, the first parameter we need to give it is an architecture which will be used as the *body* of the network. 
Usually this will be a ResNet architecture we pre-trained weights that is automaticially downloaded for you. # # Next the final layer of the pre-trained model is cut, in fact all layers after the final pooling layer is also cut as well. Within each model we have a dictionary of information that allows us to identify these different points within the layers called *model_meta* here for example for ResNet50. # + colab={"base_uri": "https://localhost:8080/"} id="eHfzam5DRWH9" outputId="5061529a-20dd-4392-caea-69df92aca75e" model_meta[resnet50] # + [markdown] id="_xqaZnudRpqY" # Key parts of the network are: # # - **Head** - The part of the network specialised for a particular task i.e. with a CNN the part after the adaptive average pooling layer # - **Body** - Everything else not the Head including the Stem # - **Stem** - The first layers of the network # # We we take all the layers before the cut point of -2, we get the body of the model that fastai will keep to use for transfer learning. Then we can add a new head. # + colab={"base_uri": "https://localhost:8080/"} id="ZNj_KmzFSwDN" outputId="95e529b4-6085-41fc-b458-f7bd8cbc75ce" #hide_output create_head(20,2) # + [markdown] id="XqdLWloTS2mf" # With this function we can choose how many extra layers should be added at the end as well as how much dropout and pooling. Fastai by default adds 2 linear layers rather than just one, as fastai have found this helps transfer learning work more quickly and easily than just one extra layer. # # ### unet_learner # # This architecture is most often used for image segmentation tasks. # # We start of building this in the same way as the cnn_learner, chopping off the old head. For image segmentation, we are going to have to add a very different type of head to end up with a model that actually generates an image for segmentation. # # One way we could do this is to add layers that can increase the grid size in a CNN, for example duplicating each of the pixels to make an image twice as big - this is known as *nearest neighbour interpolation*. Another approach uses strides, in this case a stride of half, which is known as *transposed convolution*. However neither of these approaches works well in practice. # # They key problem here is there is simply not enough information in these downsampled activations alone to be able to recreate something like the oroginal image quality needed for segmentation - its a big ask! And perhaps not realistic. # # The solution to this problem here is our friend again *skip connections* however using them not accross one layer - but reaching these connections far accross to the opposite side of the architecture. # # ![](https://github.com/pranath/blog/raw/master/images/unet.png "The Unet architecture") # # Here on the left half of the model is a CNN, and the transposed convolutional layers on the right, with the extra skip connections in gray. This helps the Unet do a much better job at generate the type of images we want for segmentation. One challenge with Unet's is the exact architecture does in this case depend on the image size, however fastai has a *DynamicUnet* object that automatically generates the correct architecture based on the data and image sizes given. # # ### A Siamese Network # # Let's now try to create a custom model. [In an earlier article we looked at creating a Siamese network model](https://livingdatalab.com/fastai/2021/05/30/fastai-midlevel-api.html). Let's recap the details of that model. 
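# + [markdown]
# Before the recap, a brief aside (not from the original article): the two upsampling
# approaches mentioned in the unet_learner discussion above can be sketched in a few
# lines of plain PyTorch. The tensor sizes below are arbitrary, chosen only to show
# that both approaches double the spatial grid size.

# +
import torch
import torch.nn as nn

x = torch.randn(1, 64, 7, 7)  # a batch of downsampled activations

# Nearest-neighbour interpolation: duplicate each pixel to double the grid size
nearest = nn.Upsample(scale_factor=2, mode="nearest")
print(nearest(x).shape)  # torch.Size([1, 64, 14, 14])

# Transposed convolution (the "stride one-half" approach): a learnable upsampling layer
tconv = nn.ConvTranspose2d(64, 64, kernel_size=2, stride=2)
print(tconv(x).shape)    # torch.Size([1, 64, 14, 14])
# -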
# + colab={"base_uri": "https://localhost:8080/", "height": 17} id="W3EpcHZeYiE9" outputId="1f49fc53-3be6-4eac-f987-5d710d2e88b8" #hide from fastai.vision.all import * path = untar_data(URLs.PETS) files = get_image_files(path/"images") class SiameseImage(fastuple): def show(self, ctx=None, **kwargs): img1,img2,same_breed = self if not isinstance(img1, Tensor): if img2.size != img1.size: img2 = img2.resize(img1.size) t1,t2 = tensor(img1),tensor(img2) t1,t2 = t1.permute(2,0,1),t2.permute(2,0,1) else: t1,t2 = img1,img2 line = t1.new_zeros(t1.shape[0], t1.shape[1], 10) return show_image(torch.cat([t1,line,t2], dim=2), title=same_breed, ctx=ctx) def label_func(fname): return re.match(r'^(.*)_\d+.jpg$', fname.name).groups()[0] class SiameseTransform(Transform): def __init__(self, files, label_func, splits): self.labels = files.map(label_func).unique() self.lbl2files = {l: L(f for f in files if label_func(f) == l) for l in self.labels} self.label_func = label_func self.valid = {f: self._draw(f) for f in files[splits[1]]} def encodes(self, f): f2,t = self.valid.get(f, self._draw(f)) img1,img2 = PILImage.create(f),PILImage.create(f2) return SiameseImage(img1, img2, t) def _draw(self, f): same = random.random() < 0.5 cls = self.label_func(f) if not same: cls = random.choice(L(l for l in self.labels if l != cls)) return random.choice(self.lbl2files[cls]),same splits = RandomSplitter()(files) tfm = SiameseTransform(files, label_func, splits) tls = TfmdLists(files, tfm, splits=splits) dls = tls.dataloaders(after_item=[Resize(224), ToTensor], after_batch=[IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]) # + [markdown] id="3GdlM_xtcbD5" # Let's now build a custom model for the Siamese task. We will use a pre-trained model, pass 2 images through it, concatinate the results, then send them to a custom head that will return 2 predictions. # # In terms of overall architecture and models lets define it like this. # + id="UxDNE0x6czgS" class SiameseModel(Module): def __init__(self, encoder, head): self.encoder,self.head = encoder,head def forward(self, x1, x2): ftrs = torch.cat([self.encoder(x1), self.encoder(x2)], dim=1) return self.head(ftrs) # + [markdown] id="tCinjML6c7f1" # We can create a body/encoder by taking a pre-trained model and cutting it, we just need to specify where we want to cut. The cut position for a ResNet is -2. # + colab={"base_uri": "https://localhost:8080/", "height": 83, "referenced_widgets": ["e58a8371e3244744906c7e25a16e99f3", "e3db04ef61d044d18a5b399d83216401", "02ddb25257b54c878eb599da96e65474", "ece02ba7171f49ee81a6457655f4c7e8", "235d6688b4f64b9ca845368146fa107a", "ad68f0b4ab2d4e1b844655aaa6887a0f", "c7582be27cf64f469ff2d3d2d839601a", "56d5191acc22490d86abfc609a796020"]} id="CVPO_4vpdPYR" outputId="db9a3e55-4296-4a6a-86e5-e06d88b2fc80" encoder = create_body(resnet34, cut=-2) # + [markdown] id="dKyRSb_AdUIg" # We can then create a head. If we look at the encoder/body it will tell us the last layer has 512 features, so this head will take 2*512 - as we will have 2 images. # + id="WKRNvOhJdpHy" head = create_head(512*2, 2, ps=0.5) # + [markdown] id="jReq7KxYdtZA" # We can now build our model from our constructed head and body. # + id="tkXFNDh5dx9z" model = SiameseModel(encoder, head) # + [markdown] id="fwJg0XW7d3F3" # Before we can use a Learner to train the model we need to define 2 more things. Firstly, a loss function. We might use here cross-entropy, but as our targets are boolean we need to convert them to integers or Pytorch will throw and error. 
# # Secondly, we need to define a custom splitter that will tell the fastai library how to split the model into parameter groups, which will help train only the head of the model when we do transfer learning. Here we want 2 parameter groups one for the encoder/body and one for the head. So lets define a splitter as well. # + id="fbMN52H2esgy" def loss_func(out, targ): return nn.CrossEntropyLoss()(out, targ.long()) def siamese_splitter(model): return [params(model.encoder), params(model.head)] # + [markdown] id="eyYoGI_7e1KG" # We can now define a learner using our data, model, loss function, splitter and a metric. As we are defining a learner manually here, we also have to call freeze manually as well, to ensure only the last paramete group i.e. the head is trained. # + id="NqCpEO-GfKvW" learn = Learner(dls, model, loss_func=loss_func, splitter=siamese_splitter, metrics=accuracy) learn.freeze() # + [markdown] id="Yid23FynfOxf" # Let's now train our model. # + colab={"base_uri": "https://localhost:8080/", "height": 173} id="9UjXCwlKfSn2" outputId="e66fdde1-692b-4345-ea22-1147fa8a594a" learn.fit_one_cycle(4, 3e-3) # + [markdown] id="bfEpk-TOfWEo" # This has trained only our head. Lets now unfreeze the whole model to make it all trainable, and use discriminative learning rates. This will give a lower learning rate for the body and a higher one for the head. # + colab={"base_uri": "https://localhost:8080/", "height": 173} id="JISyfyAyflXW" outputId="1d94dbac-e7e5-4813-f371-8cdbe5ee84bc" learn.unfreeze() learn.fit_one_cycle(4, slice(1e-6,1e-4)) # + [markdown] id="YZt5w4k7hVMr" # ## Points to consider with architectures # # There are a few points to consider when training models in practice. if you are running out of memory or time - then training a smaller model could be a good approach. If you are not training long enough to actually overfit, then you are probably not taking advantage of the capacity of your model. # # So one should first try to get to the point where your model is overfitting. # # ![](https://github.com/pranath/blog/raw/master/images/practical_principles.png "Practical principles of applying deep learning in practice") # # Often many people when faced with a model that overfits, start with the wrong thing first i.e. to use a smaller model, or more regularization. Using a smaller model should be one of the last steps one tries, as this reduces the capaity of your model to actually learn what is needed. # # A better approach is to actually try to use **more data**, such as adding more labels to the data, or using data augmentation for example. Mixup can be useful for this. Only once you are using much more data and are still overfitting, one could consider more generalisable architectures - for example adding batch norm could help here. # # After this if its still not working, one could use regularisation, such as adding dropout to the last layers, but also throughout the model. Only after these have failed one should consider using a smaller model. # + [markdown] id="5cJD1Y8Y3aLl" # ## Conclusion # # In this article we have looked at how to build custom fastai application architectures, using image model examples. 
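# + [markdown]
# To make the ordering in the "Points to consider" section concrete, here is a minimal
# sketch (not from the original article) of reaching for more augmentation and Mixup
# before reducing model capacity. It assumes the standard fastai DataBlock API and uses
# the small Imagenette dataset purely as an example of a folder of images labelled by
# parent directory.

# +
from fastai.vision.all import *

path = untar_data(URLs.IMAGENETTE_160)  # any labelled image folder would do

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    get_y=parent_label,
    splitter=RandomSplitter(seed=42),
    item_tfms=Resize(224),
    batch_tfms=aug_transforms(),      # first resort: more data augmentation
)
dls = dblock.dataloaders(path)

# next resort: Mixup as a callback, before considering a smaller architecture
learn = cnn_learner(dls, resnet34, metrics=accuracy, cbs=MixUp())
learn.fine_tune(1)
# -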
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Haworth's Gas Ratios # # These are very handy mud gas ratios that can be a useful diagnostic tool that can be used whilst drilling or for post drill look backs to determine potential phase and potentially even identifying breached traps. # # From: Interpreation of Hydrocarbon Shoes Using Light (C1-C5) Hydrocarbon Gases from Mud Log Data # ; ; 1985 AAPG # # This is probably very in-efficient code and I welcome any feedback. # # I'd like to thank the great open source work from (https://github.com/andymcdgeo) for the inspiration and much of the source code on the plots from his excellent Python Petrophysics series. # ## Import Libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt # ## Haworth's Ratio's Equations # # Ch - Character Calculation: # (iC4 + nC4 + C5)/C3 # # Wh - Wetness Calculation: # ((C2 + C3 + iC4 + nC4 + C5)/(C1 + C2 + C3 + iC4 + nC4 + C5))*100 # # Bh - Balance Calculation: # (C1 + C2)/(C3 + iC4 + nC4 + C5) # ## Import Data as Pandas DataFrame # # Here I used a mudlog example from a 2019 drilling campaign. Column names will require editing depending on your mud log service provider ratios = pd.read_csv('anonymouswell.csv') ratios.head() # ## Data may require cleaning # # This dataset didn't have any -999 style errors but units of measurement had become confused in the main dataframe which required editing out """ratios = ratios.replace('m', np.nan) ratios.dropna(subset = ["Bit Depth"], inplace=True) ratios.tail()""" # ## Create new columns # # These are associated with the respective mud gas ratios ratios['Character'] = (ratios['iC4'] + ratios['nC4'] + ratios['iC5'] + ratios['nC5']) / ratios['C3'] ratios['Wetness'] = ((ratios['C2'] + ratios['C3'] + ratios['iC4'] + ratios['nC4'] + ratios['iC5'] + ratios['nC5']) / (ratios['C1'] + ratios['C2'] + ratios['C3'] + ratios['iC4'] + ratios['nC4'] + ratios['iC5'] + ratios['nC5'])) * 100 ratios['Balance'] = (ratios['C1'] + ratios['C2']) / (ratios['C3'] + ratios['iC4'] + ratios['nC4']+ ratios['iC5'] + ratios['nC5']) # ## Ratio Interpretation # # Primary means of interp is Wetness. # # Created a simple logic test ("&" required for multiple conditions in the function instead of "and") replaced the string with integers for the plotting dictionary later but can be left the same and edit the keys if you prefer the definitions in the dataframe def wetness(Wh): #<0.5 V. 
Dry Gas #0.5-17.5 Gas, density increases with Wh #17.5-40 Oil Density Increases with Wh #>40 Residual Oil if Wh < 0.5: return 1 #'Very Dry Gas' elif (Wh > 0.5) & (Wh < 17.5): return 2 #'Gas, Density increases with Wh' elif (Wh > 17.5) & (Wh < 40): return 3 #'Oil Density Increases with Wh' elif Wh > 40: return 4 #'Residual Oil' else: pass ratios["Wh_check"] = ratios.apply(lambda row: wetness(row.Wetness), axis=1) # Bh>Wh # Gas/Oil or Gas/Condensate # Gas Density increases as curves converge # # Bh Wh: return 5 #'Gas/Oil or Gas/Condensate Gas Density Increases as Curves Converge' elif Bh < Wh: return 6 #'Oil Density increases as curves converge residual oil' else: pass ratios["Bh_check"] = ratios.apply(lambda row: balance(row.Balance, row.Wetness), axis=1) # Ch <0.5 # Predicted Gas is correct # # Ch >0.5 # Gas is associated with oil def character(Ch): if Ch < 0.5: return 7 #'Predicted Gas is correct' elif Ch > 0.5: return 8 #'Gas is associated with oil' else: pass ratios["Ch_check"] = ratios.apply(lambda row: character(row.Character), axis=1) # ## Plotting # # Simple plot to show the gas units in place # + # line 1 points x1 = ratios["C1"] y1 = ratios["TVD Depth"] # plotting the line 1 points plt.plot(x1, y1, label = "C1") # line 2 points x2 = ratios["C2"] y2 = ratios["TVD Depth"] # plotting the line 2 points plt.plot(x2, y2, label = "C2") # line 3 points x3 = ratios["C3"] y3 = ratios["TVD Depth"] # plotting the line 3 points plt.plot(x3, y3, label = "C3") # line 4 points x4 = ratios["iC4"] y4 = ratios["TVD Depth"] plt.plot(x4, y4, label = "iC4") # plotting the line 3 points x5 = ratios["nC4"] y5 = ratios["TVD Depth"] plt.plot(x5, y5, label = "nC4") x6 = ratios["iC5"] y6 = ratios["TVD Depth"] plt.plot(x6, y6, label = "iC5") x7 = ratios["nC5"] y7 = ratios["TVD Depth"] plt.plot(x7, y7, label = "iC5") plt.xlabel("Gas Content") # Set the y axis label of the current axis. plt.ylabel('Depth (m)') # Set a title of the current axes. plt.title('Depth vs Gas') # show a legend on the plot plt.legend() # Display a figure plt.gca().invert_yaxis() plt.show() # - # ## Dictionary Definition of Ratio Response # # Create a simple dictionary of responses and corresponding colors that relate to the logic tests. This can be changed if you prefer the descriptions mentioned above. 
ratio_fluids = {1: {'fluid_num':'Very Dry Gas', 'color':'red'}, #'Very Dry Gas' 2: {'fluid_num':'Gas, Density \n increases with Wh', 'color':'pink'}, #'Gas, Density increases with Wh' 3: {'fluid_num':'Oil, Density \n Increases with Wh', 'color':'lime'}, #'Oil Density Increases with Wh' 4: {'fluid_num':'Residual Oil', 'color':'darkgreen'}, #'Residual Oil' 5: {'fluid_num':'Gas/Oil or Gas/Condensate \n Gas Density Increases \n as Curves Converge', 'color':'orange'}, #'Gas/Oil or Gas/Condensate Gas Density Increases as Curves Converge' 6: {'fluid_num':'Oil Density \n increases as curves \n converge residual oil', 'color':'teal'}, #'Oil Density increases as curves converge residual oil' 7: {'fluid_num':'Predicted Gas is correct', 'color':'yellow'}, #'Predicted Gas is correct' 8: {'fluid_num':'Gas is associated with oil', 'color':'blue'}, #'Gas is associated with oil' } # ## Main Plots # # Create sub-plots that reference the keys and corresponding reference columns def makeplot(well, top_depth, bottom_depth): fig, ax = plt.subplots(figsize=(15,10)) #Set up the plot axes ax1 = plt.subplot2grid((1,4), (0,0), rowspan=1, colspan = 1) ax2 = ax1.twiny() ax3 = plt.subplot2grid((1,4), (0,1), rowspan=1, colspan = 1, sharey = ax1) ax4 = plt.subplot2grid((1,4), (0,2), rowspan=1, colspan = 1, sharey = ax1) ax5 = plt.subplot2grid((1,4), (0,3), rowspan=1, colspan = 1, sharey = ax1) # Wh curve ax1.plot(ratios["Wetness"], ratios['TVD Depth'], color = "green", linewidth = 0.5) ax1.set_xlabel("Wetness Curve") ax1.xaxis.label.set_color("green") ax1.set_xscale('log') ax1.set_xlim(0.1, 1) ax1.set_ylabel("Depth (m)") ax1.tick_params(axis='x', colors="green") ax1.spines["top"].set_edgecolor("green") ax1.title.set_color('green') ax1.set_xticks([1, 10, 100]) # Bh Curve ax2.plot(well["Balance"], ratios['TVD Depth'], color = "orange", linewidth = 0.5) ax2.set_xlabel("Balance Curve") ax2.set_xscale('log') ax2.set_xlim(0.1, 1) ax2.xaxis.label.set_color("orange") ax2.tick_params(axis='x', colors="orange") ax2.spines["top"].set_edgecolor("orange") ax2.set_xticks([1, 10, 100]) # Wetness ax3.plot(ratios["Wh_check"], ratios['TVD Depth'], color = "black", linewidth = 0.5) ax3.set_xlabel("Wh") ax3.set_xlim(0, 1) ax3.xaxis.label.set_color("black") ax3.tick_params(axis='x', colors="black") ax3.spines["top"].set_edgecolor("black") for key in ratio_fluids.keys(): color = ratio_fluids[key]['color'] ax3.fill_betweenx(ratios['TVD Depth'], 0, ratios['Wh_check'], where=(ratios['Wh_check']==key), facecolor=color) # Balance ax4.plot(ratios["Bh_check"], ratios['TVD Depth'], color = "black", linewidth = 0.5) ax4.set_xlabel("Bh") ax4.set_xlim(0, 1) ax4.xaxis.label.set_color("black") ax4.tick_params(axis='x', colors="black") ax4.spines["top"].set_edgecolor("black") for key in ratio_fluids.keys(): color = ratio_fluids[key]['color'] ax4.fill_betweenx(ratios['TVD Depth'], 0, ratios['Bh_check'], where=(ratios['Bh_check']==key), facecolor=color) # Character ax5.plot(ratios["Ch_check"], ratios['TVD Depth'], color = "black", linewidth = 0.5) ax5.set_xlabel("Ch") ax5.set_xlim(0, 1) ax5.xaxis.label.set_color("black") ax5.tick_params(axis='x', colors="black") ax5.spines["top"].set_edgecolor("black") for key in ratio_fluids.keys(): color = ratio_fluids[key]['color'] ax5.fill_betweenx(ratios['TVD Depth'], 0, ratios['Ch_check'], where=(ratios['Ch_check']==key), facecolor=color) ax3.set_xticks([0, 1]) ax4.set_xticks([0, 1]) ax5.set_xticks([0, 1]) # Common functions for setting up the plot can be extracted into # a for loop. This saves repeating code. 
for ax in [ax3, ax4, ax5]: ax.set_ylim(bottom_depth, top_depth) ax.grid(which='major', color='lightgrey', linestyle='-') ax.xaxis.set_ticks_position("top") ax.xaxis.set_label_position("top") ax.spines["top"].set_position(("axes", 1.02)) for ax in [ax3, ax4, ax5]: plt.setp(ax.get_yticklabels(), visible = False) plt.tight_layout() fig.subplots_adjust(wspace = 0.15) makeplot(ratios, 750, 2000) # args are dataframe, top depth and base depth # ## Insert Legend Chart # + y = [0, 1] x = [1, 1] fig, axes = plt.subplots(ncols=4,nrows=2, sharex=True, sharey=True, figsize=(10,5), subplot_kw={'xticks': [], 'yticks': []}) for ax, key in zip(axes.flat, ratio_fluids.keys()): ax.plot(x, y) ax.fill_betweenx(y, 0, 1, facecolor=ratio_fluids[key]['color']) ax.set_xlim(0, 0.1) ax.set_ylim(0, 1) ax.set_title(str(ratio_fluids[key]['fluid_num'])) plt.tight_layout() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="qRgoB1X-8yXQ" pip install spotipy --upgrade # + id="R_b8G0258zT1" from google.colab import drive drive.mount('/content/drive') # + id="40tgOIe9Ae-g" from __future__ import print_function import spotipy import spotipy.util as util import sys #from spotipy.oauth2 import SpotifyOAuth from spotipy.oauth2 import SpotifyClientCredentials import pandas as pd import numpy as np # + [markdown] id="elUp6JP785dK" # #### You have to create a DASHBOARD to get the required CLIENT_ID and Client_SECRET code # #### Spotify Developer Dashboard Link: https://developer.spotify.com/dashboard/ # # # Authorization Code Flow # # ### To support the Authorization Code Flow Spotipy provides a utility method util.prompt_for_user_token that will attempt to authorize the user. # # # ``` # scope = 'user-read-private' # token = util.prompt_for_user_token('', scope, # client_id='b9ab0fc630594519a6cbde985f51d242',client_secret='59a63526f72e43f5a46b0a06e104ce5e',redirect_uri='http://localhost:8080/') # sp = spotipy.Spotify(auth=token) # ``` # # # # # # Client Credentials Flow # ### Client credentials flow is appropriate for requests that do not require access to a user’s private data. 
Even if you are only making calls that do not require authorization, using this flow yields the benefit of a higher rate limit # # # # ``` # client_credentials_manager = SpotifyClientCredentials(client_id='b9ab0fc630594519a6cbde985f51d242',client_secret='59a63526f72e43f5a46b0a06e104ce5e') # sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager) # ``` # + id="ZLfIC-LR82OB" ## set Authorization and get token client_credentials_manager = SpotifyClientCredentials(client_id='b9ab0fc630594519a6cbde985f51d242',client_secret='59a63526f72e43f5a46b0a06e104ce5e') sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager) # + id="s4uI5ut3NY_G" ##Clean data from Billboard Billboard_Hot100 = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/DH_Final_Hot100.csv') artist_list=[] song_list=[] a_toBeReplace = ['Featuring', ' X ', ' x ', ' & ', ' + ', ', '] s_toBeReplace = ['(', ':', '.', ', '] for artist in Billboard_Hot100['artist']: artist_list.append(artist) for elem in a_toBeReplace : # Check if string is in the main string for k,v in enumerate(artist_list): if elem in v: # Replace the string artist_list[k] = v.replace(elem, ' ') for song in Billboard_Hot100['song']: song_list.append(song) for elem in s_toBeReplace : # Check if string is in the main string for k,v in enumerate(song_list): if elem in v: # Replace the string song_list[k] = v.replace(elem, ' ') # + [markdown] id="NX3_7-jsNY_H" # # Get Spotify Track ID # # ### Output # 1. artist, track, track_id # 2. error list: no track_id output >> add manually # # + id="HTBWQ9wVNY_I" track_id_list=[] error_artist=[] error_track=[] #search from list for i , j in enumerate(artist_list): for m ,n in enumerate(song_list): if i== m: search_output = sp.search(q='artist:' + j + ' track:' + n, type='track') try : search_output = pd.DataFrame(search_output['tracks']) search_output = search_output['items'][0] track_id_list.append(search_output.get('id')) except: error_artist.append(j) error_track.append(n) # + id="VYArsy32NY_a" #view error outputs with no duplicate values for (a,b) in zip(error_artist,error_track): pd.DataFrame({'error_artist':error_artist,'error_track': error_track}).to_csv('Spotify_error_list.csv') # + id="9uZzAMVBNY_a" ##get audio features artist_name_list=[] artist_genres_list=[] track_list=[] audio_feature_list=[] for track_id in track_id_list: artist_name_list.append(sp.track(track_id)['album']['artists'][0]['name']) artist_genres_list.append(sp.artist(sp.track(track_id)['album']['artists'][0]['id'])['genres']) track_list.append(sp.track(track_id)['name']) audio_feature_list.append(sp.audio_features(track_id)[0]) # + id="QFuuIVoTLjPG" track_info = pd.DataFrame({ #'artist_id' : artist_id_list, 'artist_name' : artist_name_list, 'artist_genres' : artist_genres_list, #'album_id' : album_id_list, #'album_name': album_name_list, 'track_list': track_list}) track_info = track_info.loc[track_info.astype(str).index] audio_feature = pd.DataFrame(audio_feature_list) track_info.join(audio_feature).to_csv('Spotify_nonerror_track_merged_2.csv') # + id="suq2nfJ2kQ1I" ##check error len(error_list) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="CXvBJyQ8LilZ" colab_type="text" # # Linear Regression with SciKitLearn # + [markdown] id="CvB-fOhlLoAB" colab_type="text" # ## Imports # + 
id="Yu36xOkKLd74" colab_type="code" colab={} import pandas as pd import numpy as np from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_absolute_error import category_encoders as ce # + id="MK6NibahLx7G" colab_type="code" colab={} LOCAL = '../data/nyc/nyc-rent-2016.csv' WEB = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/nyc/nyc-rent-2016.csv' df = pd.read_csv(WEB) assert df.shape == (48300, 34) # + id="7D7u4CIPYOFE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 504} outputId="173f277c-36da-4124-d398-11d52a8d4f30" df.head() # + [markdown] id="OoFCqqRIMPU1" colab_type="text" # ## Train/Test Split # + id="-i9IEhjwMGrH" colab_type="code" colab={} df['created'] = pd.to_datetime(df['created'], infer_datetime_format=True) # + id="6jCPhfHIMeWo" colab_type="code" colab={} # Extract the month that the apartment listing was created df['month'] = df['created'].dt.month # + id="9EsY9InJMoS-" colab_type="code" colab={} # Here's the actual split into TRAIN/TEST train = df.query('month < 6') test = df.query('month == 6') # + id="56H5mKZsM2tc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d1f9cdd8-e6a6-4ab0-bd9f-2acda3b8faa1" train.shape, test.shape, df.shape # + [markdown] id="MRCmIVbrNTMy" colab_type="text" # ## Baseline Model # # **MAE: $1020.06** # + id="bI8qGoutM8nx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="4ea03b33-7a7d-4f6a-f599-6f631ce04075" # A baseline regression train['price'].median() # + id="tLM-3iI9NZ8a" colab_type="code" colab={} # How far off are our baseline predictions on average? y_test = test['price'] # y_pred = train['price'].median() * len(test) y_pred = np.full_like(y_test, fill_value=train['price'].median()) # + id="RTcYz4u3Ncr9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a591d157-69e8-41cb-b55e-861c49ccd10a" # lots of room for improvement mean_absolute_error(y_test, y_pred) # + [markdown] id="3TKXARtwNwUa" colab_type="text" # ## Linear Regression # # **MAE: $624.07** # # * Several Features # * One-Hot Encoding # + id="fXJk6AJ_O-Rj" colab_type="code" colab={} # Select features and target features = ['bedrooms', 'bathrooms', 'fitness_center', 'interest_level', 'longitude'] target = 'price' # Create train and test dataframes/vectors X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] # One-Hot Encoder encoder = ce.OneHotEncoder(use_cat_names=True) X_train = encoder.fit_transform(X_train) X_test = encoder.transform(X_test) # Create & fit the model model = LinearRegression() model.fit(X_train, y_train) y_pred = model.predict(X_test) # + id="2TZzhfgpQQcc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b7eebd61-6d38-4118-ac51-0d7ff4855b8e" mean_absolute_error(y_test, y_pred) # + id="oqFeY1oKdF6q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 89} outputId="4453c40a-b1a0-40bd-d989-927e741c4199" print("Coefficients:", model.coef_) print("Intercept:", model.intercept_) # + [markdown] id="l8YBlqBYQxtG" colab_type="text" # ## With PCA # # **MAE: $674.33** # # (not worth it) # + id="lsW6FrgTQipH" colab_type="code" colab={} # Imports from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA # + id="SuUZShCGR7aO" colab_type="code" colab={} # Select features and target features = ['bedrooms', 'bathrooms', 'fitness_center', 
'longitude'] target = 'price' # Create train and test dataframes/vectors X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] # Encode Data # encoder = ce.OneHotEncoder(use_cat_names=True) # X_train = encoder.fit_transform(X_train) # X_test = encoder.transform(X_test) #features = ['bedrooms', 'fitness_center', 'interest_level_high', 'interest_level_medium', 'interest_level_low', 'longitude'] # + id="LAd24WpPVZyQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e0d7b309-6b46-49ac-d382-3a77b3864662" df.shape, X_train.shape, X_test.shape # + id="db9svrzyQ5yO" colab_type="code" colab={} # Get the data ready to be Standardized X_train = X_train.loc[:, features].values X_test = X_test.loc[:, features].values # + id="6rVUBHVVRKiw" colab_type="code" colab={} X_train = StandardScaler().fit_transform(X_train) X_test = StandardScaler().fit_transform(X_test) # + id="6lMUrnfnR1b0" colab_type="code" colab={} X_train = pd.DataFrame(data = X_train, columns = features) X_test = pd.DataFrame(data = X_test, columns = features) # + id="7bnSmBZeTIfh" colab_type="code" colab={} # Do some PCA pca = PCA(n_components=2) train_PCs = pca.fit_transform(X_train) test_PCs = pca.fit_transform(X_test) # + id="DwYQRP-ITV5B" colab_type="code" colab={} train_PCdf = pd.DataFrame(data = train_PCs, columns = ['PC 1', 'PC 2']) test_PCdf = pd.DataFrame(data = test_PCs, columns = ['PC 1', 'PC 2']) # + id="nfj_FgFZUD7w" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1740b866-d649-42ca-dde4-cf90b7f6db94" train_PCdf.shape, test_PCdf.shape # + id="Zt10I27MTZNF" colab_type="code" colab={} # Create & fit the model model = LinearRegression() model.fit(train_PCdf, y_train) y_pred = model.predict(test_PCdf) # + id="v3P8gsQuTpJo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d561aa89-8556-44e7-975c-163e68f75a09" mean_absolute_error(y_test, y_pred) # + [markdown] id="PSAD81lYY9jn" colab_type="text" # ## Amenities feature # # **MAE: $653.52** # + id="u7Yz3AB5Tx1X" colab_type="code" colab={} df['amenities'] = True df['amenities'] = (df['elevator']+ df['cats_allowed']+ df['dogs_allowed']+ df['hardwood_floors']+ df['doorman']+ df['dishwasher']+ df['no_fee']+ df['laundry_in_building']+ df['fitness_center']+ df['laundry_in_unit']+ df['roof_deck']+ df['outdoor_space']+ df['dining_room']+ df['high_speed_internet']+ df['balcony']+ df['swimming_pool']+ df['new_construction']+ df['terrace']) # + id="GyH223RFZmYx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 504} outputId="c75b197b-8115-4e42-d1d4-e59916be9c58" df.head() # + [markdown] id="LHPEGm_6b5Yx" colab_type="text" # **gotta redo the train/test split now** # + colab_type="code" id="y8z38DELb_03" colab={} # Here's the actual split into TRAIN/TEST train = df.query('month < 6') test = df.query('month == 6') # + colab_type="code" outputId="3e870861-746e-4829-bc80-726d4101a7ec" id="sdSeEvhgb_09" colab={"base_uri": "https://localhost:8080/", "height": 34} train.shape, test.shape, df.shape # + [markdown] id="ecRbK7kfcFbv" colab_type="text" # Now the model # + id="wP3R-qT0Zota" colab_type="code" colab={} # Select features and target features = ['bedrooms', 'bathrooms', 'amenities', 'longitude'] target = 'price' # Create train and test dataframes/vectors X_train = train[features] y_train = train[target] X_test = test[features] y_test = test[target] # Create & fit the model model = LinearRegression() 
model.fit(X_train, y_train) y_pred = model.predict(X_test) # + id="2e1jN5PDcUh2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="a45a79ab-4380-4ff8-baf4-537877e29bcf" mean_absolute_error(y_test, y_pred) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python3 (ml) # language: python # name: ml # --- # + [markdown] colab_type="text" # This is a companion notebook for the book [Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition?a_aid=keras&a_bid=76564dff). For readability, it only contains runnable code blocks and section titles, and omits everything else in the book: text paragraphs, figures, and pseudocode. # # **If you want to be able to follow what's going on, I recommend reading the notebook side by side with your copy of the book.** # # This notebook was generated for TensorFlow 2.6. # + [markdown] colab_type="text" # # The mathematical building blocks of neural networks # - # To provide sufficient context for introducing tensors and gradient descent, we’ll begin the # chapter with a practical example of a neural network. Then we’ll go over every new concept # that’s been introduced, point by point. # + [markdown] colab_type="text" # ## A first look at a neural network # - # The problem we’re trying to solve here is to classify grayscale images of handwritten digits ($28 \times # 28$ pixels) into their $10$ categories ($0$ through $9$). We’ll use the MNIST dataset, a classic in the # machine-learning community, which has been around almost as long as the field itself and has # been intensively studied. It’s a set of $60,000$ training images, plus $10,000$ test images, assembled # by the National Institute of Standards and Technology (the NIST in MNIST) in the 1980s. You # can think of "solving" MNIST as the "Hello World" of deep learning—it’s what you do to verify # that your algorithms are working as expected. # + [markdown] colab_type="text" # **Loading the MNIST dataset in Keras** # - # The MNIST dataset comes preloaded in Keras, in the form of a set of four NumPy arra # + colab_type="code" from tensorflow.keras.datasets import mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # - help(mnist.load_data) # + colab_type="code" train_images.shape # + colab_type="code" len(train_labels) # + colab_type="code" train_labels # + colab_type="code" test_images.shape # + colab_type="code" len(test_labels) # + colab_type="code" test_labels # + [markdown] colab_type="text" # **The network architecture** # - # The workflow will be as follows: First, we’ll feed the neural network the training data, `train_images` and `train_labels`. The network will then learn to associate images and labels. Finally, we’ll ask the network to produce predictions for `test_images`, and we’ll verify whether these predictions match the labels from `test_labels`. # + colab_type="code" from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.Dense(512, activation="relu"), layers.Dense(10, activation="softmax") ]) # - # The core building block of neural networks is the **layer**. You can think of a layer as a filter for data: some data goes in, and it comes out in a more useful form. 
Specifically, layers extract representations out of the data fed into them—hopefully, representations that are more meaningful for the problem at hand. Most of deep learning consists of chaining together simple layers that will implement a form of progressive data distillation. A deep-learning model is like a sieve for data processing, made of a succession of increasingly refined data filters—the layers. # Here, our model consists of a sequence of two `Dense` layers, which are densely connected (also # called **fully connected**) neural layers. The second (and last) layer is a $10$-way **softmax # classification** layer, which means it will return an array of $10$ probability scores (summing to $1$). # Each score will be the probability that the current digit image belongs to one of our $10$ digit classes. # + [markdown] colab_type="text" # **The compilation step** # - # To make the model ready for training, we need to pick three more things, as part of the **compilation** step: # # - An **optimizer**—The mechanism through which the model will update itself based on the training data it sees, so as to improve its performance. # - A **loss function**—How the model will be able to measure its performance on the training data, and thus how it will be able to steer itself in the right direction. # - **Metrics** to monitor during training and testing—Here, we’ll only care about accuracy (the fraction of the images that were correctly classified). # + colab_type="code" model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["accuracy"]) # + [markdown] colab_type="text" # **Preparing the image data** # + import numpy as np # what does our data look like? train_images[0].flatten() # + import matplotlib.pyplot as plt plt.imshow(train_images[0]/255) # - # Before training, we’ll preprocess the data by reshaping it into the shape the model expects and scaling it so that all values are in the interval $[0, 1]$. Previously, our training images were stored in an array of shape $(60000, 28, 28)$ of type `uint8` with values in the $[0, 255]$ interval. We transform it into a `float32` array of shape $(60000, 28 * 28)$ with values between $0$ and $1$. # + colab_type="code" train_images = train_images.reshape((60000, 28 * 28)) train_images = train_images.astype("float32") / 255 test_images = test_images.reshape((10000, 28 * 28)) test_images = test_images.astype("float32") / 255 # + [markdown] colab_type="text" # **"Fitting" the model** # - # We’re now ready to train the model, which in Keras is done via a call to the model’s `fit()` method—we fit the model to its training data: # + colab_type="code" model.fit(train_images, train_labels, epochs=5, batch_size=128) # - # Two quantities are displayed during training: the **loss** of the model over the training data, and the **accuracy** of the model over the training data. We quickly reach an accuracy of $0.989$ ($98.9\%$) on the training data. 
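# As a small optional aside (not from the book): `fit()` returns a Keras `History` object whose `history` dict stores the loss and each metric per epoch, which is handy for a quick sanity check or plot. Note that calling `fit()` again continues training the same model for five more epochs.

# + colab_type="code"
# Optional sketch: capture the History object returned by fit() and inspect
# the per-epoch training loss and accuracy it recorded.
history = model.fit(train_images, train_labels, epochs=5, batch_size=128)
print(history.history["loss"])      # training loss, one value per epoch
print(history.history["accuracy"])  # training accuracy, one value per epoch
# -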
# + [markdown] colab_type="text"
# **Using the model to make predictions**
# -

# Now that we have a trained model, you can use it to predict class probabilities for new digits—images that weren’t part of the training data, like those from the test set.

# + colab_type="code"
test_digits = test_images[0:10]
predictions = model.predict(test_digits)
predictions[0]

# + colab_type="code"
predictions[0].argmax()

# + colab_type="code"
predictions[0][7]

# + colab_type="code"
test_labels[0]

# + [markdown] colab_type="text"
# **Evaluating the model on new data**
# -

# On average, how good is our model at classifying such never-seen-before digits? Let’s check by computing average accuracy over the entire test set.

# + colab_type="code"
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f"test_acc: {test_acc}")
# -

# This concludes our first example—you just saw how you can build and train a neural network to classify handwritten digits in less than 15 lines of Python code. In this chapter and the next, we’ll go into detail about every moving piece we just previewed and clarify what’s going on behind the scenes. You’ll learn about tensors, the data-storing objects going into the model; tensor operations, which layers are made of; and gradient descent, which allows your model to learn from its training examples.

# + [markdown] colab_type="text"
# ## Data representations for neural networks

# + [markdown] colab_type="text"
# ### Scalars (rank-0 tensors)
# -

# A tensor that contains only one number is called a scalar (or scalar tensor, or rank-0 tensor, or 0D tensor). In NumPy, a `float32` or `float64` number is a scalar tensor (or scalar array). You can display the number of axes of a NumPy tensor via the `ndim` attribute; a scalar tensor has 0 axes (`ndim == 0`). The number of axes of a tensor is also called its rank.

# + colab_type="code"
import numpy as np
x = np.array(12)
x

# + colab_type="code"
x.ndim, x.shape

# + [markdown] colab_type="text"
# ### Vectors (rank-1 tensors)
# -

# An array of numbers is called a vector, or rank-1 tensor, or 1D tensor. A rank-1 tensor is said to have exactly one axis. Following is a NumPy vector:

# + colab_type="code"
x = np.array([12, 3, 6, 14, 7])
x

# + colab_type="code"
x.ndim, x.shape

# + [markdown] colab_type="text"
# ### Matrices (rank-2 tensors)
# -

# An array of vectors is a matrix, or rank-2 tensor, or 2D tensor. A matrix has two axes (often referred to as rows and columns). You can visually interpret a matrix as a rectangular grid of numbers. This is a NumPy matrix:

# + colab_type="code"
x = np.array([[5, 78, 2, 34, 0],
              [6, 79, 3, 35, 1],
              [7, 80, 4, 36, 2]])
x.ndim, x.shape
# -

# access rows
x[1]

# access columns
x[:, 1]

# + [markdown] colab_type="text"
# ### Rank-3 tensors and higher-rank tensors
# -

# If you pack such matrices in a new array, you obtain a rank-3 tensor (or 3D tensor), which you can visually interpret as a cube of numbers. Following is a NumPy rank-3 tensor:

# + colab_type="code"
x = np.array([[[5, 78, 2, 34, 0],
               [6, 79, 3, 35, 1],
               [7, 80, 4, 36, 2]],
              [[5, 78, 2, 34, 0],
               [6, 79, 3, 35, 1],
               [7, 80, 4, 36, 2]],
              [[5, 78, 2, 34, 0],
               [6, 79, 3, 35, 1],
               [7, 80, 4, 36, 2]]])
x.ndim, x.shape
# -

# By packing rank-3 tensors in an array, you can create a rank-4 tensor, and so on. In deep learning, you’ll generally manipulate tensors with ranks 0 to 4, although you may go up to 5 if you process video data.
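# As a short illustration (not in the book's code), the "packing" described above can be done explicitly with `np.stack`: stacking rank-2 tensors yields a rank-3 tensor, and stacking rank-3 tensors yields a rank-4 tensor.

# + colab_type="code"
m = np.array([[5, 78, 2, 34, 0],
              [6, 79, 3, 35, 1],
              [7, 80, 4, 36, 2]])   # rank-2 tensor, shape (3, 5)
cube = np.stack([m, m, m])          # rank-3 tensor, shape (3, 3, 5)
hyper = np.stack([cube, cube])      # rank-4 tensor, shape (2, 3, 3, 5)
cube.ndim, hyper.ndim, hyper.shape
# -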
# + [markdown] colab_type="text" # ### Key attributes # - # A tensor is defined by three key attributes: # # - Number of axes (**rank**)—For instance, a rank-3 tensor has three axes, and a matrix has two axes. This is also called the tensor’s `ndim` in Python libraries such as NumPy or TensorFlow. # - **Shape**—This is a tuple of integers that describes how many dimensions the tensor has along each axis. For instance, the previous matrix example has shape $(3, 5)$, and the rank-3 tensor example has shape $(3, 3, 5)$. A vector has a shape with a single element, such as $(5,)$, whereas a scalar has an empty shape, $()$. # - **Data type** (usually called `dtype` in Python libraries)—This is the type of the data contained in the tensor; for instance, a tensor’s type could be `float16`, `float32`, `float64`, `uint8`, and so on. In TensorFlow, you are also likely to come across `string` tensors. # + colab_type="code" from tensorflow.keras.datasets import mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # + colab_type="code" train_images.ndim # + colab_type="code" train_images.shape # + colab_type="code" train_images.dtype # - # So what we have here is a rank-3 tensor of 8-bit integers. More precisely, it’s an array of $60,000$ matrices of $28 \times 28$ integers. Each such matrix is a grayscale image, with coefficients between $0$ and $255$. # + [markdown] colab_type="text" # **Displaying the fourth digit** # + colab_type="code" import matplotlib.pyplot as plt digit = train_images[4] plt.imshow(digit, cmap=plt.cm.binary) plt.show() # + colab_type="code" train_labels[4] # + [markdown] colab_type="text" # ### Manipulating tensors in NumPy # - # In the previous example, we selected a specific digit alongside the first axis using the syntax `train_images[i]`. Selecting specific elements in a tensor is called tensor **slicing**. Let’s look at the tensor-slicing operations you can do on NumPy arrays # + colab_type="code" my_slice = train_images[10:100] my_slice.shape # + colab_type="code" # this is identical but more verbose and explicit, telling that you want all data from the second and third axes (short :) my_slice = train_images[10:100, :, :] my_slice.shape # + colab_type="code" my_slice = train_images[10:100, 0:28, 0:28] my_slice.shape # + colab_type="code" # For instance, in order to select 14 × 14 pixels in the bottom-right corner of all images, you would do this my_slice = train_images[:, 14:, 14:] # - # It’s also possible to use negative indices. Much like negative indices in Python lists, they indicate a position relative to the end of the current axis. In order to crop the images to patches of $14 \times 14$ pixels centered in the middle, you do this # + colab_type="code" my_slice = train_images[:, 7:-7, 7:-7] # + [markdown] colab_type="text" # ### The notion of data batches # - # In general, the first axis (axis 0, because indexing starts at 0) in all data tensors you’ll come across in deep learning will be the **samples axis** (sometimes called the samples dimension). In the MNIST example, "samples" are images of digits. # + colab_type="code" batch = train_images[:128] # first batch # + colab_type="code" batch = train_images[128:256] # second batch # + colab_type="code" n = 3 batch = train_images[128 * n:128 * (n + 1)] # nth batch # - # When considering such a batch tensor, the first axis (axis 0) is called the **batch axis** or batch dimension. This is a term you’ll frequently encounter when using Keras and other deep-learning libraries. 
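# To make the batch notion concrete, here is a small sketch (not from the book) that generalizes the slicing above into a loop over the whole training set in batches of 128; the last batch is simply smaller when the dataset size is not a multiple of the batch size.

# + colab_type="code"
import math

batch_size = 128
num_batches = math.ceil(len(train_images) / batch_size)  # 469 batches for 60,000 images
for n in range(num_batches):
    batch = train_images[128 * n:128 * (n + 1)]
num_batches, batch.shape  # the final batch holds the remaining 96 images
# -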
# + [markdown] colab_type="text" # ### Real-world examples of data tensors # - # Let’s make data tensors more concrete with a few examples similar to what you’ll encounter later. The data you’ll manipulate will almost always fall into one of the following categories: # # - **Vector data**—rank-$2$ tensors of shape `(samples, features)`, where each sample is a vector of numerical attributes ("features"). # - **Timeseries data or sequence data** — rank-$3$ tensors of shape `(samples, timesteps, features)`, where each sample is a sequence (of length `timesteps`) of feature vectors. # - **Images**—rank-$4$ tensors of shape `(samples, height, width, channels)`, where each sample is a 2D grid of pixels, and each pixel is represented by a vector of values ("channels"). # - **Video**—rank-$5$ tensors of shape `(samples, frames, height, width, channels)`, where each sample is a sequence (of length frames) of images. # + [markdown] colab_type="text" # ### Vector data # - # ![](https://am3pap005files.storage.live.com/y4mDDtHHiJKppMkqD9ryFP0adqiSksMluXfKctc64Fi9in08eFk6G6Zw9jI2TNdHOQ4YmnX6KjEt2BNujXC9dSNk3UeQXpPm_S3B371D2290nFZSbgAdBHCZ-3O4uAqxg64DaKY0sselTbuZqK0GwaJnpK3WHM9kQbpp4AM1Z-uOo1DfVkAnNr_JC-TxzVNEFiw?width=262&height=282&cropmode=none) # Let’s take a look at two examples: # # - An actuarial dataset of people, where we consider each person’s `age`, `gender`, and `income`. Each person can be characterized as a vector of $3$ values, and thus an entire dataset of $100,000$ people can be stored in a rank-$2$ tensor of shape $(100000, 3)$. # - A dataset of text documents, where we represent each document by the counts of how many times each word appears in it (out of a dictionary of $20,000$ common words). Each document can be encoded as a vector of $20,000$ values (one count per word in the dictionary), and thus an entire dataset of $500$ documents can be stored in a tensor of shape $(500, 20000)$. # + import pandas as pd import numpy as np N = 100_000 people = pd.DataFrame({"age": np.random.randint(0, 100, N), "gender": np.random.randint(0, 2, N), "income": np.random.uniform(1000, 1_000_000, N)}) # - people.head() people.values[0] people.values # + [markdown] colab_type="text" # ### Timeseries data or sequence data # - # Whenever time matters in your data (or the notion of sequence order), it makes sense to store it in a rank-$3$ tensor with an explicit time axis. Each sample can be encoded as a sequence of vectors (a rank-$2$ tensor), and thus a batch of data will be encoded as a rank-$3$ tensor. # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch02-timeseries_data.png) # The time axis is always the second axis (axis of index 1), by convention. Let’s look at a few examples: # # - A dataset of stock prices. Every minute, we store the current price of the stock, the highest price in the past minute, and the lowest price in the past minute. Thus every minute is encoded as a $3$D vector, an entire day of trading is encoded as a matrix of shape $(390, 3)$ (there are $390$ minutes in a trading day), and $250$ days' worth of data can be stored in a rank-$3$ tensor of shape $(250, 390, 3)$. Here, each sample would be one day’s worth of data # - A dataset of tweets, where we encode each tweet as a sequence of $280$ characters out of an alphabet of $128$ unique characters. In this setting, each character can be encoded as a binary vector of size $128$ (an all-zeros vector except for a 1 entry at the index corresponding to the character). 
Then each tweet can be encoded as a rank-$2$ tensor of shape $(280, 128)$, and a dataset of $1$ million tweets can be stored in a tensor of shape $(1000000, 280, 128)$. # + [markdown] colab_type="text" # ### Image data # - # Images typically have three dimensions: height, width, and color depth. Although grayscale images (like our MNIST digits) have only a single color channel and could thus be stored in rank-$2$ tensors, by convention image tensors are always rank-$3$, with a one-dimensional color channel for grayscale images. A batch of $128$ grayscale images of size $256 \times 256$ could thus be stored in a tensor of shape $(128, 256, 256, 1)$, and a batch of $128$ color images could be stored in a tensor of shape $(128, 256, 256, 3)$ # # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch02-image_data.png) # There are two conventions for shapes of images tensors: the **channels-last** convention (which is standard in TensorFlow) and the **channels-first** convention (which is increasingly falling out of favor). # # The channels-last convention places the color-depth axis at the end: `(samples, height, width, color_depth)`. Meanwhile, the channels-first convention places the color depth axis right after the batch axis: `(samples, color_depth, height, width)`. With the channels-first convention, the previous examples would become $(128, 1, 256, 256)$ and $(128, 3, 256, 256)$. The Keras API provides support for both formats. # + [markdown] colab_type="text" # ### Video data # - # Video data is one of the few types of real-world data for which you’ll need rank-$5$ tensors. A video can be understood as a sequence of frames, each frame being a color image. Because each frame can be stored in a rank-$3$ tensor `(height, width, color_depth)`, a sequence of frames can be stored in a rank-$4$ tensor `(frames, height, width, color_depth)`, and thus a batch of different videos can be stored in a rank-$5$ tensor of shape `(samples, frames, height, width, color_depth)`. # # For instance, a $60$-second, $144 \times 256$ YouTube video clip sampled at $4$ frames per second would have $240$ frames. A batch of $4$ such video clips would be stored in a tensor of shape $(4, 240, 144, 256, 3)$. That’s a total of $106,168,320$ values! If the dtype of the tensor was `float32`, then each value would be stored in $32$ bits, so the tensor would represent $405$ MB. Heavy! Videos you encounter in real life are much lighter, because they aren’t stored in `float32`, and # they’re typically compressed by a large factor (such as in the MPEG format). # + [markdown] colab_type="text" # ## The gears of neural networks: tensor operations # - # what happens here? keras.layers.Dense(512, activation="relu") # + [markdown] colab_type="text" # ### Element-wise operations # + colab_type="code" # relu(x) is max(x, 0), relu stands for "REctified Linear Unit" def naive_relu(x): assert len(x.shape) == 2 x = x.copy() for i in range(x.shape[0]): for j in range(x.shape[1]): x[i, j] = max(x[i, j], 0) return x # + colab_type="code" def naive_add(x, y): assert len(x.shape) == 2 assert x.shape == y.shape x = x.copy() for i in range(x.shape[0]): for j in range(x.shape[1]): x[i, j] += y[i, j] return x # - # In practice, when dealing with NumPy arrays, these operations are available as well-optimized built-in NumPy functions, which themselves delegate the heavy lifting to a Basic Linear Algebra Subprograms (BLAS) implementation. 
BLAS are low-level, highly parallel, efficient tensor-manipulation routines that are typically implemented in Fortran or C. # + colab_type="code" import time x = np.random.random((20, 100)) y = np.random.random((20, 100)) t0 = time.time() for _ in range(1000): z = x + y z = np.maximum(z, 0.) print("Took: {0:.2f} s".format(time.time() - t0)) # + colab_type="code" t0 = time.time() for _ in range(1000): z = naive_add(x, y) z = naive_relu(z) print("Took: {0:.2f} s".format(time.time() - t0)) # - # Likewise, when running TensorFlow code on a GPU, elementwise operations are executed via fully-vectorized CUDA implementations that can best utilize the highly-parallel GPU chip architecture. # + [markdown] colab_type="text" # ### Broadcasting # - # With broadcasting, you can generally apply two-tensor element-wise operations if one tensor has shape `(a, b, … n, n + 1, … m)` and the other has shape `(n, n + 1, … m)`. The broadcasting will then automatically happen for axes `a` through `n - 1`. # + colab_type="code" import numpy as np X = np.random.random((32, 10)) y = np.random.random((10,)) # + colab_type="code" y = np.expand_dims(y, axis=0) # + colab_type="code" Y = np.concatenate([y] * 32, axis=0) # + colab_type="code" def naive_add_matrix_and_vector(x, y): assert len(x.shape) == 2 assert len(y.shape) == 1 assert x.shape[1] == y.shape[0] x = x.copy() for i in range(x.shape[0]): for j in range(x.shape[1]): x[i, j] += y[j] return x # + colab_type="code" import numpy as np x = np.random.random((64, 3, 32, 10)) y = np.random.random((32, 10)) z = np.maximum(x, y) # - z.shape # + [markdown] colab_type="text" # ### Tensor product # - # The tensor product, or **dot product** (not to be confused with an element-wise product, the `*` operator) is one of the most common, most useful tensor operations. # # In mathematical notation, you’d note the operation with a dot ($\cdot{}$). # + colab_type="code" x = np.random.random((32,)) y = np.random.random((32,)) z = np.dot(x, y) z # + colab_type="code" def naive_vector_dot(x, y): assert len(x.shape) == 1 assert len(y.shape) == 1 assert x.shape[0] == y.shape[0] z = 0. for i in range(x.shape[0]): z += x[i] * y[i] return z naive_vector_dot(np.array([1, 2, 3]), np.array([4, 5, 6])) # - # You can also take the dot product between a matrix $X$ and a vector $y$, which returns a vector where the coefficients are the dot products between $y$ and the rows of $X$. 
You implement it as follows # + colab_type="code" def naive_matrix_vector_dot(X, y): assert len(X.shape) == 2 assert len(y.shape) == 1 assert X.shape[1] == y.shape[0] z = np.zeros(X.shape[0]) for i in range(X.shape[0]): for j in range(X.shape[1]): z[i] += X[i, j] * y[j] return z naive_matrix_vector_dot(np.array([[1, 2, 3], [4, 5, 6]]), np.array([1, 2, 3])) # + colab_type="code" def naive_matrix_vector_dot(x, y): z = np.zeros(x.shape[0]) for i in range(x.shape[0]): z[i] = naive_vector_dot(x[i, :], y) return z # - # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch02-matrix_dot_box_diagram.png) # + colab_type="code" def naive_matrix_dot(X, Y): assert len(X.shape) == 2 assert len(Y.shape) == 2 assert X.shape[1] == Y.shape[0] z = np.zeros((X.shape[0], Y.shape[1])) for i in range(X.shape[0]): for j in range(Y.shape[1]): row_x = X[i, :] column_y = Y[:, j] z[i, j] = naive_vector_dot(row_x, column_y) return z naive_matrix_dot(np.array([[1, 2, 3], [4, 5, 6]]), np.array([[1,2], [3, 4], [5, 6]])) # + [markdown] colab_type="text" # ### Tensor reshaping # - # Reshaping a tensor means rearranging its rows and columns to match a target shape. Naturally, the reshaped tensor has the same total number of coefficients as the initial tensor. Reshaping is best understood via simple examples: # + colab_type="code" train_images = train_images.reshape((60000, 28 * 28)) # + colab_type="code" x = np.array([[0., 1.], [2., 3.], [4., 5.]]) x.shape # + colab_type="code" x = x.reshape((6, 1)) x # + colab_type="code" x = np.zeros((300, 20)) x = np.transpose(x) x.shape # + [markdown] colab_type="text" # ### Geometric interpretation of tensor operations # - # It’s a point in a 2D space (see figure 2.6). It’s common to picture a vector as an arrow linking the origin to the point # # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch02-geometric_interpretation_1.png) # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch02-geometric_interpretation_2.png) # # Tensor addition thus represents the action of **translating** an object (moving the object without distorting it) by a certain amount in a certain direction. # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch02-geometric_interpretation_3.png) # # In general, elementary geometric operations such as translation, rotation, scaling, skewing, and so on can be expressed as tensor operations. # # - Translation: # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/translation.png) # - Rotation: # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/rotation.png) # - Scaling: # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/scaling.png) # - Linear transform: A dot product with an arbitrary matrix implements a linear transform. Note that scaling and rotation, seen above, are by definition linear transforms. # - Affine transform: An affine transform is the combination of a linear transform (achieved via a dot product some matrix) and a translation (achieved via a vector addition). As you have probably recognized, that’s exactly the $y = W \cdot x + b$ computation implemented by the `Dense` layer! A Dense layer without an activation function is an affine layer. 
# ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/affine_transform.png)
#
# - `Dense` layer with relu activation: An important observation about affine transforms is that if you apply many of them repeatedly, **you still end up with an affine transform** (so you could just have applied that one affine transform in the first place). Let’s try it with two: `affine2(affine1(x))` $= W_2 \cdot (W_1 \cdot x + b_1) + b_2 = (W_2 \cdot W_1) \cdot x + (W_2 \cdot b_1 + b_2)$. That’s an affine transform where the linear part is the matrix $W_2 \cdot W_1$ and the translation part is the vector $W_2 \cdot b_1 + b_2$. As a consequence, a multi-layer neural network made entirely of Dense layers without activations would be equivalent to a **single Dense layer**. This "deep" neural network would just be a linear model in disguise! This is why we need activation functions, like relu. Thanks to activation functions, **a chain of Dense layers can be made to implement very complex, non-linear geometric transformations**, resulting in very rich hypothesis spaces for your deep neural networks. We cover this idea in more detail in the next chapter.
#
# ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/dense_transform.png)

# + [markdown] colab_type="text"
# ### A geometric interpretation of deep learning
# -

# In 3D, the following mental image may prove useful. Imagine two sheets of colored paper: one red and one blue. Put one on top of the other. Now crumple them together into a small ball. That crumpled paper ball is your input data, and each sheet of paper is a class of data in a classification problem. What a neural network is meant to do is **figure out a transformation of the paper ball that would uncrumple it, so as to make the two classes cleanly separable again**. With deep learning, this would be implemented as a series of simple transformations of the 3D space, such as those you could apply on the paper ball with your fingers, one movement at a time.
#
# ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch02-geometric_interpretation_4.png)
#
# Uncrumpling paper balls is what machine learning is about: finding neat representations for complex, highly-folded data **manifolds** in high-dimensional spaces (a manifold is a continuous surface, like our crumpled sheet of paper). At this point, you should have a pretty good intuition as to why deep learning excels at this: it takes the approach of incrementally decomposing a complicated geometric transformation into a long chain of elementary ones, which is pretty much the strategy a human would follow to uncrumple a paper ball. **Each layer in a deep network applies a transformation that disentangles the data a little**—and a deep stack of layers makes tractable an extremely complicated disentanglement process.

# + [markdown] colab_type="text"
# ## The engine of neural networks: gradient-based optimization
# -

output = relu(dot(input, W) + b)

# In this expression, $W$ and $b$ are tensors that are attributes of the layer. They’re called the **weights** or **trainable parameters** of the layer (the kernel and bias attributes, respectively). These weights contain the information learned by the model from exposure to training data.

# What comes next is to gradually adjust these weights, based on a feedback signal. This gradual adjustment, also called **training**, is basically the learning that machine learning is all about.
#
# This happens within what’s called a **training loop**, which works as follows.
Repeat these steps in a loop, until the loss seems sufficiently low: # # 1. Draw a batch of training samples `x` and corresponding targets `y_true`. # 2. Run the model on `x` (a step called the forward pass) to obtain predictions `y_pred`. # 3. Compute the loss of the model on the batch, a measure of the mismatch between `y_pred` and `y_true`. # 4. Update all weights of the model in a way that slightly reduces the loss on this batch. # # You’ll eventually end up with a model that has a very low loss on its training data: a low mismatch between predictions `y_pred` and expected targets `y_true`. The model has "learned" to map its inputs to correct targets. From afar, it may look like magic, but when you reduce it to elementary steps, it turns out to be simple. # # + [markdown] colab_type="text" # ### What's a derivative? # - # **Gradient descent is the optimization technique that powers modern neural networks**. Here’s the gist of it. All of the functions used in our models (such as dot or +) transform their input in a smooth and continuous way: if you look at $z = x + y$, for instance, a small change in $y$ only results in a small change in $z$, and if you know the direction of the change in $y$, you can infer the direction of the change in $z$. Mathematically, you’d say these functions are **differentiable**. If you chain together such functions, the bigger function you obtain is still differentiable. In particular, this applies to the function that maps the model’s coefficients to the loss of the model on a batch of data: a small change of the model’s coefficients results in a small, predictable change of the loss value. This enables you to use a mathematical operator called the **gradient** to describe how the loss varies as you move the model’s coefficients in different directions. If you compute this gradient, you can use it to move the coefficients (all at once in a single update, rather than one at a time) in a direction that decreases the loss. # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/function.png) # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/derivation.png) # + [markdown] colab_type="text" # ### Derivative of a tensor operation: the gradient # - # The concept of derivation can be applied to any such function, as long as the surfaces they describe are continuous and smooth. The derivative of a tensor operation (or tensor function) is called a **gradient**. Gradients are just the generalization of the concept of derivatives to functions that take tensors as inputs. Remember how, for a scalar function, the derivative represents the local slope of the curve of the function? In just the same way, the **gradient of a tensor function represents the curvature of the multidimensional surface described by the function**. # # ![](https://miro.medium.com/max/1200/1*7030GXGlVD-u9VyqVJdTyw.png) # ![](https://blog.paperspace.com/content/images/2018/05/challenges-1.png) # # ![](https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F1200%2F1*kbP1fkMFrcjwBO3NVP9OfQ.png&f=1&nofb=1) # + [markdown] colab_type="text" # ### Stochastic gradient descent # - # Easy enough! What we just described is called **mini-batch stochastic gradient descent** (mini-batch SGD). The term stochastic refers to the fact that each batch of data is drawn at random (stochastic is a scientific synonym of random). Figure 2.18 illustrates what happens in 1D, when the model has only one parameter and you have only one training sample. 
# # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/ch02-sgd_explained_1.png) # # # As you can see, intuitively it’s important to pick a reasonable value for the **learning_rate** factor. If it’s too small, the descent down the curve will take many iterations, and it could get stuck in a local minimum. If learning_rate is too large, your updates may end up taking you to completely random locations on the curve. # # Note that a variant of the mini-batch SGD algorithm would be to draw a single sample and target at each iteration, rather than drawing a batch of data. This would be **true SGD** (as opposed to mini-batch SGD). Alternatively, going to the opposite extreme, you could run every step on all data available, which is called **batch gradient descent**. Each update would then be more accurate, but far more expensive. The efficient compromise between these two extremes is to use mini-batches of reasonable size. # # + [markdown] colab_type="text" # ### Chaining derivatives: the Backpropagation algorithm # + [markdown] colab_type="text" # #### The chain rule # + [markdown] colab_type="text" # #### Automatic differentiation with computation graphs # - # **Computation graphs** have been an extremely successful abstraction in computer science because they enable us to treat computation as data: a computable expression is encoded as a machine-readable data structure that can be used as the input or output of another program. For instance, you could imagine a program that receives a computation graph and returns a new computation graph that implements a large-scale distributed version of the same computation—this would mean that you could distribute any computation without having to write the distribution logic yourself. Or imagine… a program that receives a computation graph and can automatically generate the derivative of the expression it represents. It’s much easier to do these things if your computation is expressed as an explicit graph data structure rather than, say, lines of ASCII characters in a .py file. # # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/a_first_computation_graph.png) # # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/basic_computation_graph.png) # # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/basic_computation_graph_with_values.png) # # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/basic_computation_graph_backward.png) # # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/path_in_backward_graph.png) # + [markdown] colab_type="text" # #### The Gradient Tape in TensorFlow # - # The API through which you can leverage TensorFlow’s powerful automatic differentiation capabilities is the `GradientTape`. It’s a Python scope that will "record" the tensor operations that run inside it, in the form of a computation graph (sometimes called a "tape"). This graph can then be used to retrieve the gradient of any output with respect to any variable or set of variables (instances of the `tf.Variable` class). A `tf.Variable` is a specific kind of tensor meant to hold mutable state—for instance, the weights of a neural network are always tf.Variable instances. # + colab_type="code" import tensorflow as tf x = tf.Variable(0.) 
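# x holds mutable state starting at 0.0; the GradientTape below records the operations applied to it,
# so we can then ask for the gradient dy/dx (which is 2 here, since y = 2 * x + 3).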
with tf.GradientTape() as tape: y = 2 * x + 3 grad_of_y_wrt_x = tape.gradient(y, x) grad_of_y_wrt_x # + colab_type="code" x = tf.Variable(tf.random.uniform((2, 2))) with tf.GradientTape() as tape: y = 2 * x + 3 grad_of_y_wrt_x = tape.gradient(y, x) grad_of_y_wrt_x # + colab_type="code" W = tf.Variable(tf.random.uniform((2, 2))) b = tf.Variable(tf.zeros((2,))) x = tf.random.uniform((2, 2)) with tf.GradientTape() as tape: y = tf.matmul(x, W) + b grad_of_y_wrt_W_and_b = tape.gradient(y, [W, b]) grad_of_y_wrt_W_and_b # + [markdown] colab_type="text" # ## Looking back at our first example # - # ![](https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/deep-learning-in-3-figures-3_alt.png) # + colab_type="code" (train_images, train_labels), (test_images, test_labels) = mnist.load_data() train_images = train_images.reshape((60000, 28 * 28)) train_images = train_images.astype("float32") / 255 test_images = test_images.reshape((10000, 28 * 28)) test_images = test_images.astype("float32") / 255 # + colab_type="code" model = keras.Sequential([ layers.Dense(512, activation="relu"), layers.Dense(10, activation="softmax") ]) # + colab_type="code" model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["accuracy"]) # + colab_type="code" model.fit(train_images, train_labels, epochs=5, batch_size=128) # + [markdown] colab_type="text" # ### Reimplementing our first example from scratch in TensorFlow # + [markdown] colab_type="text" # #### A simple Dense class # + colab_type="code" import tensorflow as tf class NaiveDense: def __init__(self, input_size, output_size, activation): self.activation = activation w_shape = (input_size, output_size) w_initial_value = tf.random.uniform(w_shape, minval=0, maxval=1e-1) self.W = tf.Variable(w_initial_value) b_shape = (output_size,) b_initial_value = tf.zeros(b_shape) self.b = tf.Variable(b_initial_value) def __call__(self, inputs): return self.activation(tf.matmul(inputs, self.W) + self.b) @property def weights(self): return [self.W, self.b] # + [markdown] colab_type="text" # #### A simple Sequential class # + colab_type="code" class NaiveSequential: def __init__(self, layers): self.layers = layers def __call__(self, inputs): x = inputs for layer in self.layers: x = layer(x) return x @property def weights(self): weights = [] for layer in self.layers: weights += layer.weights return weights # + colab_type="code" model = NaiveSequential([ NaiveDense(input_size=28 * 28, output_size=512, activation=tf.nn.relu), NaiveDense(input_size=512, output_size=10, activation=tf.nn.softmax) ]) assert len(model.weights) == 4 # + [markdown] colab_type="text" # #### A batch generator # + colab_type="code" import math class BatchGenerator: def __init__(self, images, labels, batch_size=128): assert len(images) == len(labels) self.index = 0 self.images = images self.labels = labels self.batch_size = batch_size self.num_batches = math.ceil(len(images) / batch_size) def next(self): images = self.images[self.index : self.index + self.batch_size] labels = self.labels[self.index : self.index + self.batch_size] self.index += self.batch_size return images, labels # + [markdown] colab_type="text" # ### Running one training step # + colab_type="code" def one_training_step(model, images_batch, labels_batch): with tf.GradientTape() as tape: predictions = model(images_batch) per_sample_losses = tf.keras.losses.sparse_categorical_crossentropy( labels_batch, predictions) average_loss = tf.reduce_mean(per_sample_losses) gradients = tape.gradient(average_loss, model.weights) 
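    # tape.gradient returns one gradient tensor per entry in model.weights;
    # update_weights (defined in the next cell) then moves each weight a small
    # step against its gradient to reduce the loss.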
update_weights(gradients, model.weights) return average_loss # + colab_type="code" learning_rate = 1e-3 def update_weights(gradients, weights): for g, w in zip(gradients, model.weights): w.assign_sub(g * learning_rate) # + colab_type="code" from tensorflow.keras import optimizers optimizer = optimizers.SGD(learning_rate=1e-3) def update_weights(gradients, weights): optimizer.apply_gradients(zip(gradients, weights)) # + [markdown] colab_type="text" # ### The full training loop # + colab_type="code" def fit(model, images, labels, epochs, batch_size=128): for epoch_counter in range(epochs): print(f"Epoch {epoch_counter}") batch_generator = BatchGenerator(images, labels) for batch_counter in range(batch_generator.num_batches): images_batch, labels_batch = batch_generator.next() loss = one_training_step(model, images_batch, labels_batch) if batch_counter % 100 == 0: print(f"loss at batch {batch_counter}: {loss:.2f}") # + colab_type="code" from tensorflow.keras.datasets import mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() train_images = train_images.reshape((60000, 28 * 28)) train_images = train_images.astype("float32") / 255 test_images = test_images.reshape((10000, 28 * 28)) test_images = test_images.astype("float32") / 255 fit(model, train_images, train_labels, epochs=10, batch_size=128) # + [markdown] colab_type="text" # ### Evaluating the model # + colab_type="code" import numpy as np predictions = model(test_images) predictions = predictions.numpy() predicted_labels = np.argmax(predictions, axis=1) matches = predicted_labels == test_labels print(f"accuracy: {matches.mean():.2f}") # + [markdown] colab_type="text" # ## Chapter summary # - # - **Tensors** form the foundation of modern machine learning systems. They come in various flavors of dtype, rank, and shape. # - You can manipulate numerical tensors via tensor operations (such as addition, tensor product, or elementwise multiplication), which can be interpreted as **encoding geometric transformations**. In general, everything in deep learning is amenable to a geometric interpretation. # - Deep learning models consist of **chains of simple tensor operations, parameterized by weights**, which are themselves tensors. The weights of a model are where its "knowledge" is stored. # - Learning means finding a set of values for the model’s weights that **minimizes a loss function for a given set of training data samples** and their corresponding targets. # - Learning happens by drawing random batches of data samples and their targets, and computing the **gradient** of the model parameters with respect to the loss on the batch. The model parameters are then moved a bit (the magnitude of the move is defined by the learning rate) in the opposite direction from the gradient. This is called **mini-batch gradient descent**. # - The entire learning process is made possible by the fact that all tensor operations in neural networks are differentiable, and thus it’s possible to apply the chain rule of derivation to find the gradient function mapping the current parameters and current batch of data to a gradient value. This is called **backpropagation**. # # Two key concepts you’ll see frequently in future chapters are loss and optimizers. These are the two things you need to define before you begin feeding data into a model. # # - The ***loss** is the quantity you’ll attempt to minimize during training, so it should represent a measure of success for the task you’re trying to solve. 
# - The **optimizer** specifies the exact way in which the gradient of the loss will be used to update parameters: for instance, it could be the RMSProp optimizer, SGD with momentum, and so on. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import sys import numpy as np np.random.seed(0) # # Main code for Concrete Dropout # + import keras.backend as K from keras import initializers from keras.engine import InputSpec from keras.layers import Dense, Lambda, Wrapper class ConcreteDropout(Wrapper): """This wrapper allows to learn the dropout probability for any given input layer. ```python # as the first layer in a model model = Sequential() model.add(ConcreteDropout(Dense(8), input_shape=(16))) # now model.output_shape == (None, 8) # subsequent layers: no need for input_shape model.add(ConcreteDropout(Dense(32))) # now model.output_shape == (None, 32) ``` `ConcreteDropout` can be used with arbitrary layers, not just `Dense`, for instance with a `Conv2D` layer: ```python model = Sequential() model.add(ConcreteDropout(Conv2D(64, (3, 3)), input_shape=(299, 299, 3))) ``` # Arguments layer: a layer instance. weight_regularizer: A positive number which satisfies $weight_regularizer = l**2 / (\tau * N)$ with prior lengthscale l, model precision $\tau$ (inverse observation noise), and N the number of instances in the dataset. Note that kernel_regularizer is not needed. dropout_regularizer: A positive number which satisfies $dropout_regularizer = 2 / (\tau * N)$ with model precision $\tau$ (inverse observation noise) and N the number of instances in the dataset. Note the relation between dropout_regularizer and weight_regularizer: $weight_regularizer / dropout_regularizer = l**2 / 2$ with prior lengthscale l. Note also that the factor of two should be ignored for cross-entropy loss, and used only for the eculedian loss. """ def __init__(self, layer, weight_regularizer=1e-6, dropout_regularizer=1e-5, init_min=0.1, init_max=0.1, is_mc_dropout=True, **kwargs): assert 'kernel_regularizer' not in kwargs super(ConcreteDropout, self).__init__(layer, **kwargs) self.weight_regularizer = weight_regularizer self.dropout_regularizer = dropout_regularizer self.is_mc_dropout = is_mc_dropout self.supports_masking = True self.p_logit = None self.p = None self.init_min = np.log(init_min) - np.log(1. - init_min) self.init_max = np.log(init_max) - np.log(1. - init_max) def build(self, input_shape=None): self.input_spec = InputSpec(shape=input_shape) if not self.layer.built: self.layer.build(input_shape) self.layer.built = True super(ConcreteDropout, self).build() # this is very weird.. we must call super before we add new losses # initialise p self.p_logit = self.layer.add_weight(name='p_logit', shape=(1,), initializer=initializers.RandomUniform(self.init_min, self.init_max), trainable=True) self.p = K.sigmoid(self.p_logit[0]) # initialise regulariser / prior KL term input_dim = np.prod(input_shape[1:]) # we drop only last dim weight = self.layer.kernel kernel_regularizer = self.weight_regularizer * K.sum(K.square(weight)) / (1. - self.p) dropout_regularizer = self.p * K.log(self.p) dropout_regularizer += (1. - self.p) * K.log(1. 
- self.p) dropout_regularizer *= self.dropout_regularizer * input_dim regularizer = K.sum(kernel_regularizer + dropout_regularizer) self.layer.add_loss(regularizer) def compute_output_shape(self, input_shape): return self.layer.compute_output_shape(input_shape) def concrete_dropout(self, x): ''' Concrete dropout - used at training time (gradients can be propagated) :param x: input :return: approx. dropped out input ''' eps = K.cast_to_floatx(K.epsilon()) temp = 0.1 unif_noise = K.random_uniform(shape=K.shape(x)) drop_prob = ( K.log(self.p + eps) - K.log(1. - self.p + eps) + K.log(unif_noise + eps) - K.log(1. - unif_noise + eps) ) drop_prob = K.sigmoid(drop_prob / temp) random_tensor = 1. - drop_prob retain_prob = 1. - self.p x *= random_tensor x /= retain_prob return x def call(self, inputs, training=None): if self.is_mc_dropout: return self.layer.call(self.concrete_dropout(inputs)) else: def relaxed_dropped_inputs(): return self.layer.call(self.concrete_dropout(inputs)) return K.in_train_phase(relaxed_dropped_inputs, self.layer.call(inputs), training=training) # - # # Evaluate Concrete Dropout on synthetic data Ns = [10, 25, 50, 100, 1000, 10000] Ns = np.array(Ns) nb_epochs = [2000, 1000, 500, 200, 20, 2] nb_val_size = 1000 nb_features = 1024 Q = 1 D = 1 K_test = 20 nb_reps = 3 batch_size = 20 l = 1e-4 def gen_data(N): sigma = 1e0 # ground truth X = np.random.randn(N, Q) w = 2. b = 8. Y = X.dot(w) + b + sigma * np.random.randn(N, D) return X, Y # + import pylab # %matplotlib inline X, Y = gen_data(10) pylab.figure(figsize=(3, 1.5)) pylab.scatter(X[:, 0], Y[:, 0], edgecolor='b') pylab.savefig('data_10.pdf', bbox_inches='tight') pylab.show() X, Y = gen_data(10000) pylab.figure(figsize=(3, 1.5)) pylab.scatter(X[:, 0], Y[:, 0], edgecolor='b') pylab.xlim([-5, 5]) pylab.ylim([-2, 20]) pylab.savefig('data_10000.pdf', bbox_inches='tight') pylab.show() # - # ### Fit function: # + from keras.layers import Input, Dense, Lambda, merge from keras.models import Model from keras import backend as K def fit_model(nb_epoch, X, Y): if K.backend() == 'tensorflow': K.clear_session() N = X.shape[0] wd = l**2. / N dd = 2. / N inp = Input(shape=(Q,)) x = inp x = ConcreteDropout(Dense(nb_features, activation='relu'), weight_regularizer=wd, dropout_regularizer=dd)(x) x = ConcreteDropout(Dense(nb_features, activation='relu'), weight_regularizer=wd, dropout_regularizer=dd)(x) x = ConcreteDropout(Dense(nb_features, activation='relu'), weight_regularizer=wd, dropout_regularizer=dd)(x) mean = ConcreteDropout(Dense(D), weight_regularizer=wd, dropout_regularizer=dd)(x) log_var = ConcreteDropout(Dense(D), weight_regularizer=wd, dropout_regularizer=dd)(x) out = merge([mean, log_var], mode='concat') model = Model(inp, out) def heteroscedastic_loss(true, pred): mean = pred[:, :D] log_var = pred[:, D:] precision = K.exp(-log_var) return K.sum(precision * (true - mean)**2. + log_var, -1) model.compile(optimizer='adam', loss=heteroscedastic_loss) assert len(model.layers[1].trainable_weights) == 3 # kernel, bias, and dropout prob assert len(model.losses) == 5 # a loss for each Concrete Dropout layer hist = model.fit(X, Y, nb_epoch=nb_epoch, batch_size=batch_size, verbose=0) loss = hist.history['loss'][-1] return model, -0.5 * loss # return ELBO up to const. 
# - # ### Eval function: # + def logsumexp(a): a_max = a.max(axis=0) return np.log(np.sum(np.exp(a - a_max), axis=0)) + a_max def test(Y_true, MC_samples): """ Estimate predictive log likelihood: log p(y|x, D) = log int p(y|x, w) p(w|D) dw ~= log int p(y|x, w) q(w) dw ~= log 1/K sum p(y|x, w_k) with w_k sim q(w) = LogSumExp log p(y|x, w_k) - log K :Y_true: a 2D array of size N x dim :MC_samples: a 3D array of size samples K x N x 2*D """ assert len(MC_samples.shape) == 3 assert len(Y_true.shape) == 2 k = MC_samples.shape[0] N = Y_true.shape[0] mean = MC_samples[:, :, :D] # K x N x D logvar = MC_samples[:, :, D:] test_ll = -0.5 * np.exp(-logvar) * (mean - Y_true[None])**2. - 0.5 * logvar - 0.5 * np.log(2 * np.pi) test_ll = np.sum(np.sum(test_ll, -1), -1) test_ll = logsumexp(test_ll) - np.log(k) pppp = test_ll / N # per point predictive probability rmse = np.mean((np.mean(mean, 0) - Y_true)**2.)**0.5 return pppp, rmse # - # ### Plot function to make sure stuff makes sense: def plot(X_train, Y_train, X_val, Y_val, means): indx = np.argsort(X_val[:, 0]) _, (ax1, ax2, ax3, ax4) = pylab.subplots(1, 4,figsize=(12, 1.5), sharex=True, sharey=True) ax1.scatter(X_train[:, 0], Y_train[:, 0], c='y') ax1.set_title('Train set') ax2.plot(X_val[indx, 0], np.mean(means, 0)[indx, 0], color='skyblue', lw=3) ax2.scatter(X_train[:, 0], Y_train[:, 0], c='y') ax2.set_title('+Predictive mean') for mean in means: ax3.scatter(X_val[:, 0], mean[:, 0], c='b', alpha=0.2, lw=0) ax3.plot(X_val[indx, 0], np.mean(means, 0)[indx, 0], color='skyblue', lw=3) ax3.set_title('+MC samples on validation X') ax4.scatter(X_val[:, 0], Y_val[:, 0], c='r', alpha=0.2, lw=0) ax4.set_title('Validation set') pylab.show() # #Run experiment results = [] # get results for multiple N for N, nb_epoch in zip(Ns, nb_epochs): # repeat exp multiple times rep_results = [] for i in range(nb_reps): X, Y = gen_data(N + nb_val_size) X_train, Y_train = X[:N], Y[:N] X_val, Y_val = X[N:], Y[N:] model, ELBO = fit_model(nb_epoch, X_train, Y_train) MC_samples = np.array([model.predict(X_val) for _ in range(K_test)]) pppp, rmse = test(Y_val, MC_samples) # per point predictive probability means = MC_samples[:, :, :D] # K x N epistemic_uncertainty = np.var(means, 0).mean(0) logvar = np.mean(MC_samples[:, :, D:], 0) aleatoric_uncertainty = np.exp(logvar).mean(0) ps = np.array([K.eval(layer.p) for layer in model.layers if hasattr(layer, 'p')]) plot(X_train, Y_train, X_val, Y_val, means) rep_results += [(rmse, ps, aleatoric_uncertainty, epistemic_uncertainty)] test_mean = np.mean([r[0] for r in rep_results]) test_std_err = np.std([r[0] for r in rep_results]) / np.sqrt(nb_reps) ps = np.mean([r[1] for r in rep_results], 0) aleatoric_uncertainty = np.mean([r[2] for r in rep_results]) epistemic_uncertainty = np.mean([r[3] for r in rep_results]) print N, nb_epoch, '-', test_mean, test_std_err, ps, ' - ', aleatoric_uncertainty**0.5, epistemic_uncertainty**0.5 sys.stdout.flush() results += [rep_results] import pickle with open('concrete-dropout.pkl', 'wb') as f: pickle.dump(results, f) # + # import pickle # with open('concrete-dropout.pkl', 'rb') as f: # results = pickle.load(f) # - best_tests = np.array([[r[0] for r in result] for result in results]).T best_ps = np.array([[r[1] for r in result] for result in results]) best_aleatoric_uncertainty = np.array([[r[2] for r in result] for result in results]).T.squeeze() best_epistemic_uncertainty = np.array([[r[3] for r in result] for result in results]).T.squeeze() print best_tests.mean(0) print best_ps.mean(1) # + import 
pylab # %matplotlib inline pylab.figure(figsize=(3, 3)) pylab.plot(Ns, np.mean(best_epistemic_uncertainty, 0)**0.5, '-*') pylab.xlabel('Number of data points (N)') pylab.ylabel('Epistemic uncertainty (std)') pylab.xscale('log') pylab.savefig('epistemic.pdf', bbox_inches='tight') pylab.show() pylab.figure(figsize=(3, 3)) pylab.plot(Ns, np.mean(best_aleatoric_uncertainty, 0)**0.5, '-*') pylab.xlabel('Number of data points (N)') pylab.ylabel('Aleatoric uncertainty (std)') pylab.ylim([0, 2]) pylab.xscale('log') pylab.savefig('aleatoric.pdf', bbox_inches='tight') pylab.show() pylab.figure(figsize=(3, 3)) predictive = np.mean(best_epistemic_uncertainty, 0) + np.mean(best_aleatoric_uncertainty, 0) pylab.plot(Ns, predictive**0.5, '-*') pylab.xlabel('Number of data points (N)') pylab.ylabel('Predictive uncertainty (std)') pylab.ylim([0, 2]) pylab.xscale('log') pylab.savefig('predictive.pdf', bbox_inches='tight') pylab.show() # - pylab.figure(figsize=(3, 1.5)) ps = best_ps.mean(1) ps_std = best_ps.std(1) for i, (p, p_std) in enumerate(zip(ps.T, ps_std.T)): if i == 4: continue # layer 4 is noise layer pylab.plot(Ns, p, '-*', label='Layer #' + str(i+1)) # pylab.fill_between(Ns, p + p_std, p - p_std, alpha=0.25) pylab.legend(bbox_to_anchor=(1, 0), loc='lower left') pylab.xlabel('Number of data points (N)') pylab.ylabel('Dropout probability') pylab.xscale('log') pylab.savefig('dropout_prob.pdf', bbox_inches='tight') pylab.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # 💻 Sistema de Recomendação de Animes # *** # + [markdown] papermill={"duration": 0.024416, "end_time": "2021-12-05T07:53:15.793942", "exception": false, "start_time": "2021-12-05T07:53:15.769526", "status": "completed"} tags=[] # Os sistemas de recomendação são os sistemas projetados para recomendar coisas ao usuário com base em muitos fatores diferentes. Esses sistemas preveem o produto mais provável que os usuários provavelmente comprarão e são de interesse, utilizando uma série de algoritmos, análise de dados e inteligência artificial (IA). Empresas como Netflix, Amazon, etc. usam sistemas de recomendação para ajudar seus usuários a identificar o produto ou os filmes corretos para eles. # # Sistemas de recomendação lidam com um grande volume de informações presentes filtrando as informações mais importantes com base nos dados fornecidos por um usuário e outros fatores que atendem à preferência e interesse do usuário. Ele descobre a correspondência entre usuário e item e imputa as semelhanças entre usuários e itens para recomendação. # # Esse sistema implementa um sistema de **Recomendações Colaborativas de Animes**: O usuário receberá recomendações de animes que pessoas com gostos similares aos dele preferiram no passado. 
# - # ## 📚 Bibliotecas # + papermill={"duration": 1.014373, "end_time": "2021-12-05T07:53:16.823789", "exception": false, "start_time": "2021-12-05T07:53:15.809416", "status": "completed"} tags=[] import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import requests from io import StringIO import scipy as sp from scipy.sparse import csr_matrix from sklearn.metrics.pairwise import cosine_similarity from ipywidgets import widgets, HBox, Layout from IPython.display import display # - # ## 💾 Conjuto de Dados # O Anime-Recommendation-Database-2020, conjunto de dados utilizado no projeto, reúne dados de recomendação de 320.0000 usuários e 16.000 animes do site myanimelist.net. # # **MyAnimeList**, muitas vezes abreviado para MAL, é uma rede social focado nos consumidores de animes e mangás, na qual possui como maior característica a possibilidade de seus usuários criarem uma lista pessoal para que possam catalogar as obras e classificar-las através de notas. # # Informações detalhadas sobre o cojunto de dados podem ser encontradas em: https://www.kaggle.com/hernan4444/anime-recommendation-database-2020. # # Dois dataframes serão utilizados, ```animelist.csv``` e ```anime.csv```. # ### 💾 Dataframe anime # ```anime.csv``` contém informações gerais de todos os animes (17.562 animes diferentes) incluindo gênero, estatísticas, estúdio, etc. Este arquivo tem as seguintes colunas: # # | Column | Description | # |----------------|-------------------------------------------------------------------------------------------------------| # | MAL_ID | MyAnimelist ID of the anime. (e.g. 1) | # | Name | full name of the anime. (e.g. Cowboy Bebop) | # | Score | average score of the anime given from all users in MyAnimelist database. (e.g. 8.78) | # | Genres | comma separated list of genres for this anime. (e.g. Action, Adventure, Comedy, Drama, Sci-Fi, Space) | # | English name | full name in english of the anime. (e.g. Cowboy Bebop) | # | Japanese name | full name in japanses of the anime. (e.g. カウボーイビバップ) | # | Type | TV, movie, OVA, etc. (e.g. TV) | # | Episodes' | number of chapters. (e.g. 26) | # | Aired | broadcast date. (e.g. Apr 3, 1998 to Apr 24, 1999) | # | Premiered | season premiere. (e.g. Spring 1998) | # | Producers | comma separated list of produducers (e.g. Bandai Visual) | # | Licensors | comma separated list of licensors (e.g. Funimation, Bandai Entertainment) | # | Studios | comma separated list of studios (e.g. Sunrise) | # | Source | Manga, Light novel, Book, etc. (e.g Original) | # | Duration | duration of the anime per episode (e.g 24 min. per ep.) | # | Rating | age rate (e.g. R - 17+ (violence & profanity)) | # | Ranked | position based in the score. (e.g 28) | # | Popularity | position based in the the number of users who have added the anime to their list. (e.g 39) | # | Members | number of community members that are in this anime's "group". (e.g. 1251960) | # | Favorites | number of users who have the anime as "favorites". (e.g. 61,971) | # | Watching | number of users who are watching the anime. (e.g. 105808) | # | Completed | number of users who have complete the anime. (e.g. 718161) | # | On-Hold | number of users who have the anime on Hold. (e.g. 71513) | # | Dropped | number of users who have dropped the anime. (e.g. 26678) | # | Plan to Watch' | number of users who plan to watch the anime. (e.g. 329800) | # | Score-10' | number of users who scored 10. (e.g. 229170) | # | Score-9' | number of users who scored 9. (e.g. 
182126) | # | Score-8' | number of users who scored 8. (e.g. 131625) | # | Score-7' | number of users who scored 7. (e.g. 62330) | # | Score-6' | number of users who scored 6. (e.g. 20688) | # | Score-5' | number of users who scored 5. (e.g. 8904) | # | Score-4' | number of users who scored 4. (e.g. 3184) | # | Score-3' | number of users who scored 3. (e.g. 1357) | # | Score-2' | number of users who scored 2. (e.g. 741) | # | Score-1' | number of users who scored 1. (e.g. 1580) | # # De acordo com a documentação do [repositório no GitHub](https://github.com/Hernan4444/MyAnimeList-Database), o arquivo pode ser acessado pelo Google Drive. # Importar anime.csv url = 'https://drive.google.com/file/d/1vfmfi4dGAXBp0T8QTNVYhA5g8_irNbKs/view?usp=sharing' id_arquivo = url.split('/')[-2] dwn_url = 'https://drive.google.com/uc?export=download&id=' + id_arquivo url2 = requests.get(dwn_url).text csv_raw = StringIO(url2) anime_df = pd.read_csv(csv_raw) # anima_data -> anime_df # ### 💾 Dataframe animelist # ```animelist.csv``` tem a lista de todos os animes registrados pelo usuário com a respectiva pontuação, status de exibição e número de episódios assistidos. Este conjunto de dados contém 109 milhões de linhas, 17.562 animes diferentes e 325.772 usuários diferentes. O arquivo tem as seguintes colunas: # # | Column | Description | # |------------------|-----------------------------------------------------------------------------------------| # | user_id | non identifiable randomly generated user id. | # | anime_id | MyAnemlist ID of the anime. (e.g. 1). | # | score | score between 1 to 10 given by the user. 0 if the user didn't assign a score. (e.g. 10) | # | watching_status | state ID from this anime in the anime list of this user. (e.g. 2) | # | watched_episodes | numbers of episodes watched by the user. (e.g. 24) | # # # # Devido âs limitaçãoes de processamento só os primeiros 5.000.000 regitros foram usados. Se você tiver acesso a uma boa estação de trabalho, poderá usar todos os 109 milhões de registros. # # O arquivo csv completo pode ser baixado em: https://drive.google.com/drive/folders/1UhinqGrH2XytkpiD7LlLzMcn7MY2I_gt # + papermill={"duration": 21.313513, "end_time": "2021-12-05T07:53:38.183293", "exception": false, "start_time": "2021-12-05T07:53:16.869780", "status": "completed"} tags=[] # Importar animelist.csv rating_df = pd.read_csv("animelist.csv", nrows=5000000) # Por motivos de eficiência, usar esses DF para usar o merge() anime_df = anime_df.rename(columns={"MAL_ID": "anime_id"}) anime_contact_df = anime_df[["anime_id", "Name"]] # - # ## 📊 Processamento do Conjunto de Dados # ### 📊 Mesclar Conjunto de Dados # + [markdown] papermill={"duration": 0.014809, "end_time": "2021-12-05T07:53:38.244620", "exception": false, "start_time": "2021-12-05T07:53:38.229811", "status": "completed"} tags=[] # Aplicar a operação ```merge``` em ```rating_df``` e ```anime_contact_df``` (dados extraido de ```anime_df```) em termos do ```anime_id``` para crirar um conjunto de dados com ambas as informações. 
# + papermill={"duration": 9.555769, "end_time": "2021-12-05T07:53:47.815691", "exception": false, "start_time": "2021-12-05T07:53:38.259922", "status": "completed"} tags=[] # Mesclar Dataframes rating_df = rating_df.merge(anime_contact_df, left_on = 'anime_id', right_on = 'anime_id', how = 'left') rating_df = rating_df[["user_id", "Name", "anime_id","rating", "watching_status", "watched_episodes"]] # - rating_df.head() # + papermill={"duration": 0.023256, "end_time": "2021-12-05T07:53:47.855133", "exception": false, "start_time": "2021-12-05T07:53:47.831877", "status": "completed"} tags=[] rating_df.shape # - # ### 🚫 Verificando Dados Faltantes print("Anime Missing Values:\n") print(anime_df.isna().sum()) print("\nRatings Missing Values:\n") print(rating_df.isna().sum()) # + [markdown] papermill={"duration": 0.016709, "end_time": "2021-12-05T07:53:47.887512", "exception": false, "start_time": "2021-12-05T07:53:47.870803", "status": "completed"} tags=[] # Now I will take only that data in which a particular anime has more than 200Votes and if a user has gave in total more than 500Votes to the anime. # + papermill={"duration": 7.592696, "end_time": "2021-12-05T07:53:55.496387", "exception": false, "start_time": "2021-12-05T07:53:47.903691", "status": "completed"} tags=[] count = rating_df['user_id'].value_counts() count1 = rating_df['anime_id'].value_counts() rating_df = rating_df[rating_df['user_id'].isin(count[count >= 500].index)].copy() rating_df = rating_df[rating_df['anime_id'].isin(count1[count1 >= 200].index)].copy() # + papermill={"duration": 0.024379, "end_time": "2021-12-05T07:53:55.536868", "exception": false, "start_time": "2021-12-05T07:53:55.512489", "status": "completed"} tags=[] rating_df.shape # + [markdown] papermill={"duration": 0.018938, "end_time": "2021-12-05T07:55:31.690006", "exception": false, "start_time": "2021-12-05T07:55:31.671068", "status": "completed"} tags=[] # ## 📈 Criação do Modelo # + [markdown] papermill={"duration": 0.018659, "end_time": "2021-12-05T07:54:23.527478", "exception": false, "start_time": "2021-12-05T07:54:23.508819", "status": "completed"} tags=[] # Vamos criar uma tabela dinâmica (Pivot Table) com base nas colunas ```Name``` e ```User_id``` e salvá-la em uma variável ```pivot_table```. # # Uma tabela dinâmica é forma de agrupar as entradas em uma tabela bidimensional que fornece uma sumarização multidimensional dos dados, nesse caso, as notas de cada usuário para um anime diferente. # + papermill={"duration": 68.105922, "end_time": "2021-12-05T07:55:31.651812", "exception": false, "start_time": "2021-12-05T07:54:23.545890", "status": "completed"} tags=[] pivot_table = rating_data.pivot_table(index="Name",columns="user_id", values="rating").fillna(0) pivot_table # - # A **similaridade por cosseno** é uma medida de similaridade de entre dois vetores num espaço vetorial que avalia o valor do cosseno do ângulo compreendido entre eles. Esta função trigonométrica proporciona um valor igual a 1 se o ângulo compreendido é zero, isto é se ambos vetores apontam a um mesmo lugar. Para qualquer ângulo diferente de 0, o valor de cosseno é inferior a um. 
# # Uma tabela dinâmica é bidimensional, então enxergando as colunas como vetores, podemos usar a similaridade por cosseno para relacionar os animes # + papermill={"duration": 8.817056, "end_time": "2021-12-05T07:55:40.564805", "exception": false, "start_time": "2021-12-05T07:55:31.747749", "status": "completed"} tags=[] # Transforma a matriz em uma matriz esparação para otimizar as operações pivot_table_csr = csr_matrix(pivot_table.values) # - # Modelo de Similaridade entre os Animes anime_similarity = cosine_similarity(pivot_table_csr) # DataFrame de Similaridade entre os Animes ani_sim_df = pd.DataFrame(anime_similarity, index = pivot_table.index, columns = pivot_table.index) def anime_recommendation(ani_name): """ This function will return the top 5 shows with the highest cosine similarity value and show match percent """ if ani_name in ani_sim_df: number = 1 print('Recomendados porque você assistiu {}:\n'.format(ani_name)) for anime in ani_sim_df.sort_values(by = ani_name, ascending = False).index[1:6]: print(f'#{number}: {anime}, {round(ani_sim_df[anime][ani_name]*100,2)}% de similaridade') number +=1 else: print('ERRO: {} não é um nome de anime válido ou não se encontra no conjunto de dados.\n'.format(ani_name)) # + [markdown] papermill={"duration": 0.019571, "end_time": "2021-12-05T07:55:41.243711", "exception": false, "start_time": "2021-12-05T07:55:41.224140", "status": "completed"} tags=[] # ## 📈 Utilizando o Modelo # + style = {'description_width': 'initial'} text = widgets.Text(description="Nome do Anime: ", style=style) button = widgets.Button(description="Executar", ) output = widgets.Output() inputs = HBox([text, button]) def on_button_clicked(b): output.clear_output() with output: anime_recommendation(text.value) button.on_click(on_button_clicked) display(inputs, output) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.7 ('base') # language: python # name: python3 # --- import pandas as pd from sklearn.linear_model import Ridge from sklearn.model_selection import RepeatedKFold, GridSearchCV # Load the dataset df = pd.read_csv('auto-insurance.csv', header=None) # Split the Dataset data = df.values X, y = data[:, :-1], data[:, -1] X.shape, y.shape # define Model model = Ridge() # define evaluation cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=42) # define search space space = dict() space['solver'] = ['svd', 'cholesky', 'lsqr', 'sag'] # space['alpha'] = loguniform(1e-5, 100) For Random Search space['alpha'] = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 10, 100] # For Grid Search space['fit_intercept'] = [True, False] space['normalize'] = [True, False] # define search search = GridSearchCV(model, space, cv=cv, scoring='neg_mean_absolute_error', n_jobs=-1) # execute search result = search.fit(X,y) # summarize result print('Best Score: %s' % result.best_score_) print('Best Hyperparameters: %s' % result.best_params_) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # RNN 2 # Train and verify the two feature RNN # + # %load_ext autoreload # %autoreload 2 import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib import math # - import MyRNN from sklearn.linear_model import SGDRegressor from sklearn import metrics def remove_outlier(data): 
#calculate what counts as outlier Q1 = np.quantile(data, 0.25) Q3 = np.quantile(data, 0.75) IQR = Q3 - Q1 data_without_fliers = np.copy(data) data_without_fliers[data > Q3 + 1.5*IQR] = Q3 + 1.5*IQR data_without_fliers[data < Q1 - 1.5*IQR] = Q1 - 1.5*IQR return data_without_fliers df_val = pd.read_pickle("df_test_final.pkl") df_val # results_df = pd.DataFrame.copy(df_val) results_df = df_val.drop(['max_days', 'min_days', 'days_since', 't_5', 't_4', 't_3', 't_2', 't_1'], axis = 1) results_df # + #create varaibles to store the results y_rrn1 = [] r_regres = [] # - df_val.reset_index(drop=True) # + #get the lenght original_df_len = df_val.shape[0] #two feature RRN y_rrn2 = [] rnn_row_id = [] i = 0 for index, row in df_val[80000:].iterrows(): print ("progress : " , (index/original_df_len)*100) if i%10000 == 0: print ("yee") name = "out_rnn2_" + str(i) + ".pkl" name1 = "out_row_" + str(i) + ".pkl" pd.DataFrame(y_rrn2).to_pickle(name) pd.DataFrame(rnn_row_id).to_pickle(name1) i +=1 #read the data x1 = np.array(row.product_seq[:-1]) x2 = np.array(row.days_since_seq)[1:-1] x = np.array([x1,x2]).T #scale the data scale_factor = x.max() x_two_features = x/scale_factor #train the RNN my_model = MyRNN.two_feature_RNN() my_model.train(remove_outlier(x_two_features), n_epochs = 1000, lr= 0.01, weight_decay = 0.005) #predict the future predictions = my_model.predict(x_two_features) y_rrn2.append(predictions[-1]*scale_factor) rnn_row_id.append([row.user_id, row.product_id]) # y = np.array([y1,y2]).T # print (index) # print (row) # - results_df[results_df['rnn_2'] != NaN] results_df['rnn_2'][num_done:] = y_rrn2 results_df[results_df.rnn_2.notnull()] results_df.to_pickle("rnn2_backup2.pkl") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: pivpy # language: python # name: pivpy # --- # + from pivpy import io, graphics, pivpy import pkg_resources as pkg import matplotlib.pyplot as plt import os filename = pkg.resource_filename('pivpy','data/openpiv/exp1_001_b.txt') data = io.load_vec(filename,variables='x,y,u,v') data = data.piv.vec2scal('vorticity') # - graphics.contour_plot(data.isel(t=0)) data.piv.vec2scal('ke'); graphics.contour_plot(data.isel(t=0)) data.piv.quiver() d = data.isel(t=0) plt.quiver(d.x,d.y,d.u.T,-d.v.T,scale=200) plt.gca().invert_yaxis() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Reminder to startup the Unity render binary if it is not already running! 
# + import sys import numpy as np import matplotlib.pyplot as plt from IPython.display import HTML, display class RRT_Node: def __init__(self, pos): self.pos = pos self.edges = np.array([]) class RRT_Graph: def __init__(self, nodes=np.array([])): self.nodes = nodes def add_node(self, node): self.nodes = np.append(self.nodes, node) def add_edge(self, node1, node2): node1.edges = np.append(node1.edges, node2) node2.edges = np.append(node2.edges, node1) def nearest_node(self, v): dist = None nearest = None for node in self.nodes: node_dist = np.linalg.norm(v - node.pos) if dist is None or node_dist < dist: dist = node_dist nearest = node return nearest # Dijkstra def shortest_path_parents(self, source): visited = {} dist = {} parent = {} for n in self.nodes: visited[n] = False parent[n] = None dist[n] = [sys.maxsize] dist[source] = 0 queue = [source] while queue: curr = queue.pop(0) for n in curr.edges: if not visited[n]: queue.append(n) new_dist = dist[curr] + np.linalg.norm(curr.pos - n.pos) if dist[n] > new_dist: parent[n] = curr dist[n] = new_dist visited[curr] = True # helper, assumes the parent exists return parent # goes up the parent chain up to source def get_path(self, parent, curr, path): if parent[curr]: path = self.get_path(parent, parent[curr], path) path = np.append(path, curr) return path # + def normalize(v): norm = np.linalg.norm(v) if norm == 0: return v else: return v / norm # not necessary FOR NOW. Function DOES NOT WORK. def willCollide(p1, p2): line_dir = p2 - p1 a = np.dot(line_dir, line_dir) for obstacle in self.obstacles: center = np.array([obstacle[0], obstacle[1]]) radius = obstacle[2] center_p1_dir = p1 - center b = 2 * np.dot(line_dir, center_p1_dir) c = np.dot(center_p1_dir, center_p1_dir) - radius * radius discrim = b * b - 4 * a * c if discrim < 0: # no intersection continue t1 = (-b + np.sqrt(discrim)) / (2 * a); t2 = (-b - np.sqrt(discrim)) / (2 * a); if (t1 >= 0 and t1 <= 1) or (t2 >= 0 and t2 <= 1): return True return False # + DELTA = 1 bound_start = np.array([-10, -10, -30]) bound_end = np.array([10, 10, -30]) start_pos = np.array([0, 0., -30.]) start_att = np.array([0.,0.,-np.pi/2]) end_pos = start_pos + np.array([9, -9., 0.]) end_att = start_att + np.array([0., 0., 0.]) end_pose = np.append(end_pos, end_att[2]) start_node = RRT_Node(start_pos) # ONLY PUTTING IN THE START NODE graph = RRT_Graph(np.array([start_node])) tree = plt.figure() plt.gca().set_aspect('equal') tree.set_size_inches(10, 10) trials = 1000 for _ in range(trials): rand_pos = bound_start + (bound_end - bound_start) * [np.random.random_sample(), np.random.random_sample(), np.random.random_sample()] nearest_node = graph.nearest_node(rand_pos) # if collide - for now assume empty environment no collision # can make it distance delta for more uniformity dir_norm = normalize(rand_pos - nearest_node.pos) new_node = RRT_Node(nearest_node.pos + DELTA * dir_norm) graph.add_node(new_node) graph.add_edge(nearest_node, new_node) xx, yy = [nearest_node.pos[0], new_node.pos[0]], [nearest_node.pos[1], new_node.pos[1]] plt.plot(xx, yy, 'k-', figure = tree) parent = graph.shortest_path_parents(start_node) nearest_node = graph.nearest_node(end_pos) path = graph.get_path(parent, nearest_node, np.array([])) plt.plot(start_node.pos[0], start_node.pos[1], marker='.', markersize=20) plt.plot(end_pose[0], end_pose[1], '.', markersize=20) path = np.append(path, RRT_Node(end_pose[:3])) prev_node = start_node for node in path: xx, yy = [prev_node.pos[0], node.pos[0]], [prev_node.pos[1], node.pos[1]] plt.plot(xx, 
yy, color='fuchsia', figure = tree) prev_node = node print(node.pos) # + # There is some kind of positive relation btwn AMBITION and EPSILON; # the more ambitious, the greater the need for leeway. AMBITION = 2. EPSILON = 1. # string = "with half-width {}, center at {}, num_waypoints {}, epsilon {:.2}, & AMBITION {:.2}...".format( # half_width, center, num_waypoints, EPSILON, AMBITION) # big question: how do i set theta for these? # can do arctan2 + np.pi/2 for eahc theta = 0 theta_perpen = theta + np.pi / 2 # waypoint_coords = np.vstack((center[0] + (half_width * np.cos(theta)) / (1 + np.sin(theta) * np.sin(theta)), #x # center[1] + (half_width * np.sin(theta) * np.cos(theta)) / (1 + np.sin(theta) * np.sin(theta)), #y # center[2] * np.ones(num_waypoints), #z # theta_perpen)) # + # #!/usr/bin/env python # coding: utf-8 # %matplotlib inline # %reload_ext autoreload # %autoreload from IPython.display import HTML, display sys.path.insert(0, '../') from flightgoggles.env import * if __name__ == "__main__": # drone flips over when going too fast ? ? ? SimSpeed adjustment needed?? env = flightgoggles_env( cfg_dir="../config", cfg_fgclient="FlightGogglesClient_testing.yaml", cfg_uav="multicopterDynamicsSimSpeed.yaml") # 30 max_speed/accel multicopterDynamicsSimSpeed env.set_state_vehicle(vehicle_id="uav1", position = start_pos, attitude_euler_angle=np.array([0., 0., 0.])) curr_pos = env.get_state("uav1")["position"] curr_att = env.get_state("uav1")["attitude_euler_angle"][2] curr_vel = env.get_state("uav1")["velocity"] fol_accumulator = None pos_accumulator = np.array([curr_pos]) per_accumulator = None rand_accumulator = None curr_waypoint = start_pos time_counter = 0 crash = False for waypoint_node in path: # waypoint = np.append(waypoint_node.pos, np.arctan2(waypoint_node.pos[1], waypoint_node.pos[0]) + np.pi / 2) waypoint = waypoint_node.pos prev_waypoint = curr_waypoint curr_waypoint = waypoint if (np.array_equal(curr_waypoint[:3], end_pose[:3])): target_pose = curr_waypoint EPSILON = 0.1 while np.linalg.norm(curr_pos - curr_waypoint[:3]) >= EPSILON: time_counter += 0.01 curr_pos = env.get_state("uav1")["position"] curr_att = env.get_state("uav1")["attitude_euler_angle"][2] curr_vel = env.get_state("uav1")["velocity"] # clean solution from: https://stackoverflow.com/questions/31273991/ d = curr_waypoint[:3] - prev_waypoint[:3] t = -np.dot(prev_waypoint[:3] - curr_pos, d) / np.linalg.norm(d)**2 # 2d only for now curr_pos_perpend = prev_waypoint[:3] + (d)*t if not (np.array_equal(curr_waypoint[:3], end_pose[:3])): target_pose = (curr_pos_perpend + normalize(curr_waypoint[:3] - curr_pos_perpend) * AMBITION) # curr_att considers x=0 as pi/2 att_to_target = np.arctan2(target_pose[1] - curr_pos[1], target_pose[0] - curr_pos[0]) # HOW TO CHANGE THIS? target_pose = np.append(target_pose, att_to_target) # attitude rand_accumulator = np.append(rand_accumulator, target_pose[3]) if fol_accumulator is None: fol_accumulator = np.array([target_pose]) else: fol_accumulator = np.vstack((fol_accumulator, target_pose)) collided = env.proceed_waypoint(vehicle_id="uav1", waypoint_command=target_pose, duration=0.01) if per_accumulator is None: per_accumulator = np.array([curr_pos_perpend]) else: per_accumulator = np.vstack((per_accumulator, curr_pos_perpend)) pos_accumulator = np.vstack((pos_accumulator, curr_pos)) if collided: crash = True break if crash: print("CRASHED! 
:(") break # f = open("observation notes continuous point lemniscate.txt", "a") # if not crash: # f.write(string + " it took {:.2f} sec\n".format(time_counter)) # else: # f.write(string + " it crashed :(\n") # f.close() with np.printoptions(precision=2, suppress=True): print("Final pose", np.append(env.get_state("uav1")["position"], env.get_state("uav1")["attitude_euler_angle"][2])) ani_set = env.plot_state_video(flag_save=False, filename="uav", dpi=200) if "cam1" in ani_set.keys(): display(HTML(ani_set["cam1"].to_html5_video())) env.close() # + # Orange dots are the waypoints, dotted are waypoints # coloration suggests the order import itertools import matplotlib.pyplot as plt plt.gca().set_aspect('equal', adjustable='box') # plt.plot(fol_accumulator.T[0], fol_accumulator.T[1], '.') plt.plot(pos_accumulator.T[0], pos_accumulator.T[1], 'r') # plt.plot(per_accumulator.T[0], per_accumulator.T[1], 'g') for waypoint_node in path: plt.plot(waypoint_node.pos[0], waypoint_node.pos[1], 'o') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (model-env) # language: python # name: model-env # --- # # Feature Selection # Generated features and labels from the cleaned data. The data should be ready to use to train an ML model. # + # %load_ext autoreload # %autoreload 2 # %matplotlib notebook import numpy as np from sklearn.model_selection import cross_val_score from sklearn.model_selection import RepeatedStratifiedKFold from sklearn.ensemble import GradientBoostingClassifier from matplotlib.lines import Line2D import joblib from src.data.labels_util import load_labels, LabelCol, get_labels_file, load_clean_labels, get_workouts from src.data.imu_util import ( get_sensor_file, ImuCol, load_imu_data, Sensor, fix_epoch, resample_uniformly, time_to_row_range, get_data_chunk, normalize_with_bounds, data_to_features, list_imu_abspaths, clean_imu_data, BOOT_CUTOFF, POLE_CUTOFF ) from src.data.util import find_nearest, find_nearest_index, shift, low_pass_filter, add_col from src.data.workout import Activity, Workout from src.data.data import DataState from src.data.build_features import main as build_features from src.data.features_util import list_test_files from src.model.predict import group_points from src.config import ( TRAIN_POLE_DIR, TRAIN_BOOT_DIR, TRAIN_FEATURES_FILENAME, TRAIN_LABELS_FILENAME ) from src.visualization.visualize import multiplot # import data types from pandas import DataFrame from numpy import ndarray from typing import List, Tuple, Optional # - # ## Generate Features # Create ML model's input dataset using the cleaned IMU data. Also create the labels to the input dataset. Separate these into training and testing datasets. 
build_features() # ## Examine Feature and Label Integrity # + def plot_helper(idx, plot): def by_activity(activity: Activity): features_file, labels_file = list_test_files(activity)[1] features: ndarray = np.load(features_file) labels: ndarray = np.load(labels_file) # Plot x-axis acceleration cutoff = BOOT_CUTOFF if activity == Activity.Boot else POLE_CUTOFF plot.plot(low_pass_filter(features[:,0], cutoff=cutoff)) # plot.plot(features[:,0]) # unsmoothed # Plot labels steps: List[Tuple[int, int]] = group_points(labels) for start, end in steps: plot.axvline(x=start, color='green', linestyle='dotted') plot.axvline(x=end, color='green', linestyle='dashed') # Legend legend_items = [Line2D([], [], color='green', linestyle='dotted', label='Start'), Line2D([], [], color='green', linestyle='dashed', label='End')] plot.legend(handles=legend_items) if idx == 0: by_activity(Activity.Boot) plot.set_xlim([1000, 1300]) # Zoom (REMOVE to see the entire graph) plot.title.set_text('Boot') else: by_activity(Activity.Pole) plot.set_xlim([1000, 1300]) # Zoom (REMOVE to see the entire graph) plot.title.set_text('Pole') multiplot(2, plot_helper) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.2 64-bit # metadata: # interpreter: # hash: 7d6993cb2f9ce9a59d5d7380609d9cb5192a9dedd2735a011418ad9e827eb538 # name: python3 # --- # + dotnet_interactive={"language": "python"} # Assignment One # Course: MIS 5150 # Professor: Dr. # + dotnet_interactive={"language": "python"} import tensorflow as tf import numpy as np # + dotnet_interactive={"language": "python"} # Question 1: # import tensorflow as tf import numpy as np # a. # Creating dataset data_set = tf.data.Dataset.from_tensor_slices([[0, 1, 9, 8, 7, 3], [2, 9, 4, 0, 2, 3], [7, 3, 3, 2, 2, 1], [0, 0, 1, 2, 2, 5]]) # b. # Iterating data # method 1 for elements in data_set: print(elements) print('') # method 2 for elements in data_set: print(elements.numpy()) print('') # method 3 iterator = iter(data_set) print(iterator.get_next()) print(iterator.get_next()) print(iterator.get_next()) print(iterator.get_next()) print('') # method 4 for data in data_set: print (data.numpy()) print('') # c. 
# display 3rd element from 2nd tensor # method 1 ls = [] for item in data_set: e = item.numpy() ls.append(e) print('3rd element from 2nd tensor:',ls[1][2]) print('') # method 2 np_arr = np.asarray(ls, dtype=np.float32) print('3rd element from 2nd tensor:',int(np_arr[1][2])) print('') # method 3 display_element = np.array([[0, 1, 9, 8, 7, 3], [2, 9, 4, 0, 2, 3], [7, 3, 3, 2, 2, 1], [0, 0, 1, 2, 2, 5]]) print('3rd element from 2nd tensor:', display_element[1][2]) # + dotnet_interactive={"language": "python"} # Question 2: # import tensorflow as tf # GPU Enable tf.__version__, tf.test.gpu_device_name() # output ('2.4.1', '/device:GPU:0') # + dotnet_interactive={"language": "python"} # Question 3: # import tensorflow as tf import pandas as pd from tensorflow import keras ds = 'flag.data' url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/flags/flag.data' dataset_path = tf.keras.utils.get_file(ds, url) # downloaded the data using keras dataset_path # cols from the flag.names file cols = ['name','landmass','zone','area','population','language','religion','bars','stripes','colours','red','green','blue','gold','white','black','orange','mainhue','circles','crosses','saltires','quarters','sunstars','crescent','triangle','icon','animate','text','topleft','botright'] # raw dataset using pandas raw_dataset = pd.read_csv(dataset_path, names=cols) # displaying dataset raw_dataset.head() # remove non numerical values raw_dataset.pop('name') raw_dataset.pop('mainhue') raw_dataset.pop('topleft') raw_dataset.pop('botright') # print to check removal of non-numerical values raw_dataset.head() # step 2 dataset = tf.data.Dataset.from_tensor_slices(raw_dataset.values) # range for dataset dataset = tf.data.Dataset.range(10) # using take function for i in dataset.take(3): print(i) print('') # + dotnet_interactive={"language": "python"} # Question 4: # import tensorflow as tf # a. create a tensor dataset = tf.data.Dataset.range(10) # b. display the tendor for row in dataset: print(row) print('') # c. repeat and batch data_batch = dataset.repeat(5).batch(4) # d. display batched sensor for row in data_batch: print(row) print('') # e. map to cube the dataset data_map = data_batch.map(lambda x:x ** 3) # f. batched tensor display for item in data_map.take(2): print(item) print('') # + dotnet_interactive={"language": "python"} # Question 5: # import tensorflow as tf # a. create dataset dataset = tf.data.Dataset.range(10).repeat(5) print ('dataset has', len(list(dataset)), 'elements') print('') # b. dataset buffer and display ds = dataset.shuffle(buffer_size=5).batch(7) for item in ds: print(item) print('') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="_5o1S6yXrpeR" colab_type="text" # #
Automatic Differentiation
# ### [Dr ]()
    # + [markdown] id="if8FECnSr8TW" colab_type="text" # This notebook is specifically designed to provide a quick demonstration of "autograd" capabilities. This is designed to be the first in a series on convex function minimization within Machine Learning (ML) environments, so it starts with the basics of differentiation. This notebook uses the "tensor" concepts to demonstrate some of the nice methods available with both the HIPS and PyTorch packages. These are both foundations for Deep Learning (DL) environments, but are equally adapt with some standard mathematics. # # There are two main sections: # 1. HIPS # 2. PyTorch # # The examples grow in sophistication as the notebook progresses, so be sure to follow the instructions carefully. # # NOTE: This is designed to run in Colaboratory, but is likely to run in most other Jupyter envrionments also. In particular this does not connect to a GPU, so it will run on minimal hardware. # + [markdown] id="Uw495mjQZO9p" colab_type="text" # ##1. Differentiation with HIPS Autograd # + [markdown] id="O1W1saFjZXIX" colab_type="text" # sympy is a great package for symbolic differentiation, but sympy will just be used for simple comarison of results so you can better understand the results. autograd is much more appropriate for larger problems so is more important for Differential Equations and Linear Algebra applications. The reference for autograd is found at [HIPS - Autograd](https://github.com/HIPS/autograd). # + [markdown] id="5lFCXIzDskcw" colab_type="text" # ### 1.1 - Set Up Environment # + [markdown] id="U14UhNEbs8jn" colab_type="text" # This section installs Autograd into the Colaboratory environment and imports the standard python numeric packages. # + id="zt7FWfbCtZP3" colab_type="code" colab={} # !pip install autograd #python imports import os, sys #numpy import numpy as np from numpy import linspace import scipy as sp #sympy import sympy as smp from sympy import * from sympy import Function, Symbol from sympy import Derivative # import autograd # + [markdown] id="VLC8PVlKuNq8" colab_type="text" # Thse section sets up the graphics # + id="vBIc2m-SuyPf" colab_type="code" colab={} #The general plot capabilities import matplotlib import matplotlib.pyplot as plt #Since these are images, turn off the grids matplotlib.rc('axes',**{'grid':False}) # sympy plotting from sympy.plotting import plot #seaborn import seaborn as sns # + [markdown] id="Tnu8s_PssQCb" colab_type="text" # ### 1.2 -Example 1 - $f(x) = x^2 + 4$ # + [markdown] id="039HnVTwsZj3" colab_type="text" # We start with the basic problem and progress along the knowledge path. # + [markdown] id="dcVajPE5sux6" colab_type="text" # #### 1.2.1 Sympy Implementation # + [markdown] id="F5icmCJGs1OO" colab_type="text" # This section uses the symbolic package to derive the known quantities for our function. This could be done by hand, but is intended to show how the results mesh together. 
# + id="QY42uvZhtJsc" colab_type="code" colab={} #Define the function x = Symbol('x') f = Function('f')(x) f = x**2 + 4 #Show the function definition print("The function f") smp.pprint(f) #take the derivative f_prime = f.diff(x) print('The derivative of f') smp.pprint(f_prime) # Plot the function and derivative p1 = plot(f,xlim=(-3.0,3.0),ylim=(0.0,12.0)) # Compute the values of f between -3 and 3 f_n = lambdify(x, f, "numpy") f_prime_n = lambdify(x,f_prime,"numpy") x_vals = linspace(-3.0, 3.0) y_vals = f_n(x_vals) y_prime_vals = f_prime_n(x_vals) sns.set_style('dark') fig, ax = plt.subplots() plt.ylim(0.0,12.0) plt.yticks(np.arange(1,13)) ax.axvline(0.0, color='k') ax.axhline(0.0, color='k') fn, = ax.plot(x_vals,y_vals, label='$f$') fprimen, = ax.plot(x_vals,y_prime_vals, label='$\\frac{\\partial f}{\\partial x}$') plt.legend(handles=[fn, fprimen]) plt.show() # + [markdown] id="ssOJbwOH0JPD" colab_type="text" # This is a standard an easily understood problem, so not much effort is put into the plot. The main point is generation of the x and y values. # + [markdown] id="SDY9W7nM5k4U" colab_type="text" # #### 1.2.2 Autograd Implementation # + [markdown] id="nxRZT2ox5pwI" colab_type="text" # Autograd understands the same type operations, but rather than a focus on symbolic computation the focus is on numeric computation using a similar underlying framework. The main difference is that the gradient (yes, these are the partial derivatives) are taken relative to a scalar "loss" value. Therefore when working with tensors of numbers, we need to define the function that will be differentiated in terms of a loss value (NOTE: The use of the loss value stems from Machine Learning.) The following code generates the same example data. # # It may not be obvious, but this provides a way to automatically compute the derivative of a function at many point concurrently. Here is the process: # # 1. Define your function, $f$, so it inputs a tensor (vector, matrix, etc). # 2. Define your loss function, $loss_f$ to be $np.sum(f(x)) $ # 3. Define the gradient of $f$ to be $grad(loss_f)$ # 4. Then for clarity, define a function g, that outputs the $f(x)$ and $f^\prime(x)$ # # These steps are shown in the next example: # # + id="c-Fsmw3k63uN" colab_type="code" colab={} import autograd.numpy as np #This is so the gradient understands the numpy operations from autograd import grad #Follow the steps def f(x): y = x*x + 4.0 return y def loss_f(x): loss = np.sum(f(x)) return loss f_p = grad(loss_f) def g(x): return f(x), f_p(x) #Compute points y, y_p = g(x_vals) #plot sns.set_style('dark') fig, ax = plt.subplots() plt.ylim(0.0,12.0) plt.yticks(np.arange(1,13)) ax.axvline(0.0, color='k') ax.axhline(0.0, color='k') fn, = ax.plot(x_vals,y, label='$f$') fprimen, = ax.plot(x_vals,y_p, label='$\\frac{\\partial f}{\\partial x}$') plt.legend(handles=[fn,fprimen]) plt.show() # # check the results # max_der_diff = np.max(np.abs(y_p - y_prime_vals)) print("The max difference in the derivative computation: {:.8f}".format(max_der_diff)) # + [markdown] id="Tvdwcms0DebA" colab_type="text" # As you can see, ther results are exactly the same as expected and performs correctly. Lets, see why: # # Recall from above, we defined the loss function as $np.sum(f(x))$, thus $loss_f = \sum_{i=0}^{N-1} f(x_i)$; therefore, # # $\frac{\partial f(x_i)}{\partial x_i} = f^{\prime}(x_i)$ # # since the value of $f(x_i)$ only appears once in the summation. 
# # Consequently, using the $np.sum$ function provides a quick way to compute the derivatives of $f$ for the input $x$ values. # # Obviously we now want to consider a second derivative. This is computed using the following code cell. # + id="vvI8Rpz_eBSz" colab_type="code" colab={} import autograd.numpy as np #This is so the gradient understands the numpy operations from autograd import grad #Follow the steps def f(x): y = x*x + 4.0 return y def loss_f(x): loss = np.sum(f(x)) return loss f_p = grad(loss_f) def loss_fp(x): loss = np.sum(f_p(x)) return loss f_pp = grad(loss_fp) def g(x): return f(x), f_p(x) def h(x): return f(x), f_p(x), f_pp(x) #Compute points y, y_p,y_pp = h(x_vals) pprint(y_pp) # + [markdown] id="zEUVPatZzZtf" colab_type="text" # ### Example 1.3 - $f(x,y) = x^2 + y^2 + 4$ # + [markdown] id="43X0AtraznP3" colab_type="text" # In this example, we will compute the gradient of f, namely of $grad(f) = \nabla(f) = \begin{bmatrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \end{bmatrix} = \begin{bmatrix}2x \\ 2y\end{bmatrix}$ for a set of random points with $x,y \in [0,1)$. # # This works much the same as the previous example by leveraging the grad function and providing the appropriate loss function that can operate on multiple inputs concurrently. # + id="xQ1bbC2wIrbg" colab_type="code" colab={} import autograd.numpy as np #This is so the gradient understands the numpy operations from autograd import grad #Follow the steps def f(xy): z = xy[0]*xy[0] + xy[1]*xy[1] + 4.0 return z def loss_f(z): loss = np.sum(f(z)) return loss f_p = grad(loss_f) def g(xy): return np.array(f(xy)),np.array(f_p(xy)) #Define the x and y x_vals = np.random.uniform(-1,1,50) y_vals = np.random.uniform(-1,1,50) xy = [x_vals,y_vals] # #Compute points z, z_p = g(xy) #Compute the formula values z_p_compute = np.array([2*x_vals,2*y_vals]) max_err = np.max(np.abs(z_p - z_p_compute)) pprint("Max Error: {:8f}".format(max_err)) # + [markdown] id="dOrv97-9RBvN" colab_type="text" # Once again, the use of grad allows for easy computation of exactly the values we need for computation. # + [markdown] id="dz5GqYH9RS7s" colab_type="text" # ### 1.4 Conclusion # + [markdown] id="GhZn_ep7RZcg" colab_type="text" # With autograd, the computation of gradients is automatic. Even though autograd has its primary use in Machine Learning, this tool can be very powerful for mathematics operations since it supports both GPUs and targets numpy compatibility. # + [markdown] id="vOjDdak7S9n9" colab_type="text" # ## 2. Differentiation with PyTorch.autograd # + [markdown] id="r7K9NFpLTaCU" colab_type="text" # The Autograd is a very nice package, but at this point PyTorch probably has a larger user community and it is also very pythonic. PyTorch has GPU support, but it doesn't overload the numpy packages. Given its sponsors (including Facebook), the implementation for Machine Learning is very robust and it has several pre-trained models that are ready for use in solving problems. This series of studies on using gradients generally focuses on PyTorch; however, most of the work can be done within Autograd. # # The documentation for Pytorch can be found at [Docs](https://pytorch.org/docs/stable/index.html). 
# + [markdown] id="yHz7safdVRgM" colab_type="text" # ###2.1 Set up Environment # + id="BdHauRiFVeFh" colab_type="code" colab={} # # !pip3 install -U torch # import torch as torch import torch.tensor as T import torch.autograd as t_autograd #normally I use autograd, but I want to distinguish between autograd and torch # # Output Environment Information # has_cuda = torch.cuda.is_available() current_device = torch.cuda.current_device() if has_cuda else -1 gpu_count = torch.cuda.device_count() if has_cuda else -1 gpu_name = torch.cuda.get_device_name(current_device) if has_cuda else "NA" print("Current device {}".format(current_device)) print("Number of devices: {:d}".format(gpu_count)) print("Current GPU Number: {:d}".format(current_device)) print("GPU Name: {:s}".format(gpu_name)) #Set the accelerator variable accelerator = 'cuda' if has_cuda else 'cpu' print("Accelerator: {:s}".format(accelerator)) # + [markdown] id="318PlTeAZMTm" colab_type="text" # ### 2.2 -Example 1 - $f(x) = x^2 + 4$ # + [markdown] id="iYUSU2sDZNyp" colab_type="text" # This section solves the same problem as the previous section, but is written to accomodate a GPU so it includes some of the details to use a GPU. # + id="Bb3iE8MWziCz" colab_type="code" colab={} #define setup # N = 50 device = torch.device(accelerator) # #Define the function and loss # def f(x): y = x * x + 2.0 return y def loss_f(x): z = f(x).sum() return z def f_p(x): z = loss_f(x) z.backward() return x.grad x_val = np.linspace(-3., 3.0, N) x = T(x_val, requires_grad = True).to(device) #Get the data x_vals = x.data.numpy() y = f(x).data.numpy() y_p = f_p(x).data.numpy() #Graph sns.set_style('dark') fig, ax = plt.subplots() plt.ylim(0.0,12.0) plt.yticks(np.arange(1,13)) ax.axvline(0.0, color='k') ax.axhline(0.0, color='k') fn, = ax.plot(x_vals,y, label='$f$') fprimen, = ax.plot(x_vals,y_p, label='$\\frac{\\partial f}{\\partial x}$') plt.legend(handles=[fn,fprimen]) plt.show() # # check the results # max_der_diff = np.max(np.abs(y_p - y_prime_vals)) print("The max difference in the derivative computation: {:.8f}".format(max_der_diff)) # + [markdown] id="Qvtsj2dPhas-" colab_type="text" # As you can see, the computations are similar to using Autograd, but instead of using a grad function this uses a backward function (backward is the word in Machine Learning that computes the derivative relative to the loss) and then grad is a property of the variable that us used for the computation that required the gradient. # + [markdown] id="aojr4FPNjiyX" colab_type="text" # ### Example 2.3 - $f(x,y) = x^2 + y^2 + 4$ # + id="IX_57KHojm6c" colab_type="code" colab={} #Follow the steps def f(xy): z = xy[0]*xy[0] + xy[1]*xy[1] + 4.0 return z def loss_f(z): loss = f(z).sum() return loss def f_p(xy): z = loss_f(xy) z.backward() return xy.grad def g(xy): return f(xy).data.numpy(),f_p(xy).data.numpy() #Define the x and y x_vals = np.random.uniform(-1,1,50) y_vals = np.random.uniform(-1,1,50) xy = T([x_vals,y_vals], requires_grad = True).to(device) # #Compute points z, z_p = g(xy) #Compute the formula values z_p_compute = np.array([2*x_vals,2*y_vals]) max_err = np.max(np.abs(z_p - z_p_compute)) pprint("Max Error: {:8f}".format(max_err)) # + [markdown] id="Ze3G_wu8m9JH" colab_type="text" # ### 2.4 Conclusion # + [markdown] id="pQgb7bJ1lutX" colab_type="text" # PyTorch provide both a numpy compatible interface for the numpy functions, so starting with a minimal set of code using numpy, it is easy to scale up to use GPUs and PyTorch.autograd. 
The interworking of PyTorch.autograd are exactly as we expect. # + [markdown] id="5lxhIy6D-GxI" colab_type="text" # ## Overall Conclusion # + [markdown] id="CwnKVxGz-LQx" colab_type="text" # Both Autograd and PyTorch are environments for Machine Learning, but they provide many benefits to numerical computations and modeling. These free tools coupled with Colaboratory greatly expands the types of mathematic modeling and computations that are available to developers. Autograd appears to have a smaller footprint, but PyTorch appears to have a larger following (especially when considering ML). Using both isn't a bad option depending on the resources available for processing. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="_thAKR27rZY5" # # Семинар 2 - pandas, linear regression # - import numpy as np # + [markdown] id="VQawW-URvaNx" # ## 1. Pandas # ![alt text](https://media0.giphy.com/media/fAaBpMgGuyf96/giphy.gif) # # - документация: http://pandas.pydata.org/pandas-docs/stable/ # - 10 minutes to pandas: https://pandas.pydata.org/pandas-docs/stable/10min.html # - Pandas Tutorial: DataFrames in Python: https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python # - Cheet Sheet: https://www.analyticsvidhya.com/blog/2015/07/11-steps-perform-data-analysis-pandas-python/ # - Visualization: http://pandas.pydata.org/pandas-docs/stable/visualization.html # # Будем работать с данными, собранными благодаря опросу студентов математического курса средней школы в Португалии (возраст - от 15 до 22 лет). Они находятся в файле ["math_students.csv"](https://raw.githubusercontent.com/AKuzina/ml_dpo/main/practicals/math_students.csv). # # Целевой переменной является итоговая оценка студента за курс. # + id="7-qYDedovv6G" import matplotlib.pyplot as plt import pandas as pd # магическая функция, позволяющая выводить графики прямо в ноутбук # %matplotlib inline # + colab={"base_uri": "https://localhost:8080/", "height": 223} id="fntOj3Bfv04j" outputId="2eb3c393-fe54-4260-99af-41ce1f9ef2c9" # если данные и ноутбук находятся в разных папках, то для загрузки файла помимо названия необходимо также прописать путь к нему # .csv - текстовый файл для представления табличных данных, разделенных каким-то символом. В данном случае - запятой data = pd.read_csv('math_students.csv', delimiter=',') # функция .head(n) выводит первые n строк таблицы (по умолчанию n=5) data.head() # + [markdown] id="FP5GcluiwYgq" # Итак, всего объектов 395, а признаков - 32 (учитываем, что один из столбцов - это целевая переменная). Все признаки имеют разную природу. Вот их более подробная расшифровка: # # - school - тип школы ("GP" - Gabriel Pereira или "MS" - Mousinho da Silveira) # - sex - пол ("F" - female или "M" - male) # - age - возраст (от 15 до 22) # - address - откуда студент ("U" - urban или "R" - rural) # - famsize - размер семьи ("LE3" - меньше или равно 3 или "GT3" - больше 3) # - Pstatus - в каких отношениях родители ("T" - живут вместе "A" - раздельно) # - Medu - образование матери (0 - никакого, 1 - начальное образование (4 класса), 2 – от 5 до 9 классов, 3 – среднеспециальное или 4 – высшее) # - Fedu - образование отца (0 - никакого, 1 - начальное образование (4 класса), 2 – от 5 до 9 классов, 3 – среднеспециальное или 4 – высшее) # - Mjob - работа матери ("teacher", "health" care related, civil "services" (e.g. 
administrative or police), "at_home" or "other") # - Fjob - работа отца ("teacher", "health" care related, civil "services" (e.g. administrative or police), "at_home" or "other") # - reason - причина выбора школы (близко к дому — "home", репутация школы — "reputation", предпочтение некоторым предметам - "course" или "other") # - guardian - опекун ("mother", "father" или "other") # - traveltime - время от дома до школы (1 - меньше 15 мин., 2 - 15 до 30 мин., 3 - 30 мин. до 1 часа, или 4 - больше 1 часа) # - studytime - количество часов обучения в неделю (1 - меньше 2 часов, 2 - от 2 до 5 часов, 3 - от 5 до 10 часов, или 4 - больше 10 часов) # - failures - количество ранее не сданных предметов (n if 1 <= n < 3, else 4) # - schoolsup - дополнительные занятия (yes or no) # - famsup - помощь от семьи при выполнении заданий (yes or no) # - paid - дополнительные платные занятия (yes or no) # - activities - внеклассная деятельность (yes or no) # - nursery - посещал детский сад (yes or no) # - higher - желание высшего образования (yes or no) # - internet - домашний интернет (yes or no) # - romantic - состоит в романтических отношениях (yes or no) # - famrel - насколько хороши отношения в семье (от 1 - очень плохие до 5 - превосходные) # - freetime - наличие свободного времени после школы (от 1 - очень мало до 5 - очень много) # - goout - гуляет с друзьями (от 1 - редко до 5 - очень часто) # - Dalc - употребление алкоголя в будние дни (от 1 - очень редко до 5 - очень часто) # - Walc - употребление алкоголя в выходные (от 1 - очень редко до 5 - очень часто) # - health - текущее состояние здоровья (от 1 - очень плохое до 5 - очень хорошее) # - absences - количество школьных пропусков (от 0 до 93) # - G1 - оценка за первый семестр (от 0 до 20) # - G2 - оценка за второй семестр (от 0 до 20) # - G3 - итоговая оценка (от 0 до 20) # # --- # # Для вывода названий всех признаков есть специальная функция: # + colab={"base_uri": "https://localhost:8080/"} id="MhSvxiE8wTdA" outputId="75fd2f1e-2ce8-40bf-dca8-4d8b147d2b99" data.columns # + [markdown] id="lKr1ovmow4Oi" # Как обращаться к колонкам? # * "dot" `data.G3` # * "brackets" `data['G3']`. # * "list in the bracket" `data[['G3', 'G2']]` # * "index" `data.iloc[:, -1]` # # + colab={"base_uri": "https://localhost:8080/"} id="DgPVCFzPxUlO" outputId="dbbe86d0-6497-4f95-ba1d-58de95561835" data.G3 # + colab={"base_uri": "https://localhost:8080/"} id="2t-EcqAFw2Bn" outputId="5bec1184-fe4a-4f50-8d7b-388fe0c503a2" data['G3'] # + colab={"base_uri": "https://localhost:8080/", "height": 417} id="o5-FkXjvxYmE" outputId="ae0ba69b-7ba5-46b8-d02b-9999c6ce223c" data[['G3']] # + colab={"base_uri": "https://localhost:8080/"} id="fMANF_rWxa9z" outputId="39e093e1-e380-4a4c-931a-7bee6499ebe6" data.iloc[:10, 10] # + [markdown] id="XslY65_yxmi4" # --- # **Задание 1** Отделить от признаков целевую переменную. 
Создать вектор `y` и таблицу `X` # + colab={"base_uri": "https://localhost:8080/"} id="vzKqb7W0yDi4" outputId="c951fe04-265e-4fb2-a3a8-0777d4ee11ef" data.columns[:-1] # + id="-jF8GAa5xjsL" X = data[data.columns[:-1]] X.shape # - y=data.G3 y.shape # + [markdown] id="3TkRdmrWyG69" # А теперь тоже самое, используя функцию `drop`: # # ```data.drop([col_1, col_2], axis=1)``` # + id="pD9NIxzLyYNO" X = data.drop(['G3'], axis=1) X.shape # + [markdown] id="3qQKAHD2ycwF" # Посмотрим, есть ли в данных пропуски: # + colab={"base_uri": "https://localhost:8080/"} id="22BPVfc1ydIK" outputId="946b9f7f-b313-49b1-9ba1-88562fcc683f" data.isna().sum() # + [markdown] id="OwfUgnrjyjGb" # По любой функции можно получить информацию из документации следующим образом: # + id="CJGc4nPiyeqv" # ?pd.isna # + [markdown] id="7W0YW9Riysr6" # Можно вывести статистику по значениям признаков: # + colab={"base_uri": "https://localhost:8080/", "height": 317} id="iY65hSxPymi6" outputId="e11e78a5-1cfc-49e4-979d-fab1b7d46949" data.describe() # + [markdown] id="cLpEWKXuyxYo" # --- # **Задание 2** Прочитайте документацию и выведите статистику по значениям **всех** признаков # + id="8ZtxVFqvzC-y" # ?data.describe # + colab={"base_uri": "https://localhost:8080/", "height": 410} id="V5UUN0O5yua1" outputId="a0b5006b-c33c-4f7e-9d4d-75a8340b23c4" data.describe(include="all") # + [markdown] id="L1BHTv-EzsGA" # Какие значения принимает признак `guardian`? # + colab={"base_uri": "https://localhost:8080/"} id="BTleoolZy_lK" outputId="38bf1fb3-a86d-4978-d206-4f496c4e6b80" data['guardian'].unique() # + colab={"base_uri": "https://localhost:8080/"} id="vWQ4ftRxzuPq" outputId="07991640-c164-489d-c765-71068d38d5eb" data['guardian'].value_counts() # + [markdown] id="KcSi-lLCzzJr" # Чтобы получить все строки, которые удовлетворяют условию # # ```table[condition]``` # + colab={"base_uri": "https://localhost:8080/", "height": 224} id="md3FCosK0EkI" outputId="8260db26-8d44-49a7-b20b-724790e939fc" data[data.guardian == 'mother'].head() # + [markdown] id="fRVa6wWE0EwV" # # Чтобы комбинировать условия: # # * `&` --- and # * `|` --- or # * `~` --- not # + colab={"base_uri": "https://localhost:8080/", "height": 224} id="ymqDJSdtzwJ1" outputId="0ddc2ded-8e14-4193-ad17-16cb1176b974" data[(data.guardian == 'mother') | (data.guardian == 'father')].head() # + [markdown] id="x1jjU6_H0Uhw" # --- # **Задание 3** # 1. Выделим студентов младще 16 лет у которых опекуном является не мать # + colab={"base_uri": "https://localhost:8080/", "height": 224} id="IzZ3rqt60SJc" outputId="9cff7ef6-4f5c-4247-c5c8-87079f9d50f2" data[(data.guardian != 'mother') & (data.age <16)].head() # + [markdown] id="uG6ONBu70hdA" # 2. Выделим только тех студентов, у которых опекуном является мать и которая работает учителем или на дому: # + id="Gc8L2-Di0dGg" data[(data.guardian == 'mother') & ((data.Mjob == 'teacher') | (data.Mjob == 'at_home'))].head() # - data[(data.guardian == 'mother') & data.Mjob.isin(['teacher','at_home'])].head() data.Mjob.value_counts() # + [markdown] id="E_ZHPF5t1OHD" # --- # Проанализируем взаимосвязь количества пропусков и успехов в учебе. 
Посмотрим на распределение количества пропусков у студентов: # + colab={"base_uri": "https://localhost:8080/", "height": 458} id="Y4pj6a9N1OfL" outputId="ae7f51ab-9377-4ea7-838e-253f85f636c6" plt.figure(figsize=(10,7)) plt.title('Absences distribution') data['absences'].hist() plt.xlabel('absences') plt.ylabel('number of students') plt.show() # + [markdown] id="VCQd66101vxc" # Мы можем считать разные статистки # + colab={"base_uri": "https://localhost:8080/"} id="TLxCFdpY1jL-" outputId="4f7e7837-c2fd-481b-d27f-582a27a0ff83" data['absences'].mean() # + colab={"base_uri": "https://localhost:8080/"} id="vf9hHDGF11ao" outputId="a765073a-2f2e-497b-f519-391353c764e7" data['absences'].std() # + colab={"base_uri": "https://localhost:8080/"} id="YDdyRahD13Gq" outputId="8bae91c2-2f96-4073-a1ff-2f509889c4c6" data['absences'].max() # + [markdown] id="avTmDdit1_57" # --- # **Задание 4** Разделите студентов на две части: те, у кого количество пропусков меньше среднего, и те, у кого оно **не** меньше среднего. # + id="1f0E-8Cj14rS" mean_absences = data['absences'].mean() missers = data[data['absences'] < mean_absences] non_missers = data[data['absences'] >= mean_absences] # + [markdown] id="3H-oa5oT2Oqw" # --- # **Задание 5** Посчитайте среднее значение целевой переменной для каждой части. # + colab={"base_uri": "https://localhost:8080/"} id="4-v0aawW2RcV" outputId="95b01dc3-2e24-4787-a451-955b2504ed89" stud_few_absences_g3 = missers.G3.mean() stud_many_absences_g3 = non_missers.G3.mean() print('Students with few absences, mean G3: ', stud_few_absences_g3) print('Students with many absences, mean G3:', stud_many_absences_g3) # + [markdown] id="JD0_zfHe2h8U" # Итак, средние оценки примерно одинаковы - у тех, кто пропускал меньше занятий, она чуть хуже. Возможно, студенты, пропускавшие много занятий, знали материал очень хорошо :) # # Также данные можно исследовать с помощью диаграммы рассеивания (scatter plot) # + colab={"base_uri": "https://localhost:8080/", "height": 297} id="86MuItuc2UlT" outputId="" plt.figure(figsize=(10,7)) data.plot.scatter(x = 'absences', y='G3') plt.xlabel('absences') plt.ylabel('Grade') plt.show() # - # ## 2. Линейная Регрессия # # Поработаем с линейной регрессией на практике с помощью библиотеки [scikit-learn](https://scikit-learn.org/stable/). Эта библиотека включает в себя множество алгоритмов, разные тестовые наборов данных, функции для подсчета метрик и подбора параметров, а также многое другое. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Text Classification # # This Notebook focuses on implementing multi-class text classification on Amazon automotive reviews dataset by choosing any one combination of various data transformation techniques and algorithms. # # Rating(1-5) is predicted for each review from the dataset. # # Best scoring combinations are listed below. Any single combination out of the following can be chosen for data transformation & model training : # # * Creation of word embeddings using gensim's word2vec & subsequent training using Random Forest algorithm. # * Creation of word embeddings using word2vec and/or Smooth Inverse Frequency (SIF) technique & subsequent training using Random Forest algorithm. # * Vectorisation using Term frequency-inverse document frequency (Tfidf) technique & subsequent training using Random Forest algorithm. 
# * Vectorisation using Tfidf technique & subsequent training using Linear support vector clustering (SVC) algorithm. # # | Data Transformation | Training Algorithm | # | ------------- | ------------- | # | Word2vec | Random Forest | # | Word2vec + SIF | Random Forest | # | TfIdf Vectorization | Random Forest | # | TfIdf Vectorization | Linear SVC | # ### Clone git repo BRANCH_NAME="master" #Provide git branch name "master" or "dev" # ! git clone -b $BRANCH_NAME https://github.com/CiscoAI/cisco-kubeflow-starter-pack.git # ### Install required packages # !pip install pandas nltk gensim sklearn scikit-learn==0.20.3 imbalanced-learn==0.4.3 # ### Restart Jupyter notebook kernel from IPython.display import display_html display_html("",raw=True) # ### Import required libraries # + #General import pandas as pd import numpy as np import pickle import yaml from joblib import dump import re import nltk as nl import gensim import yaml import os import requests from collections import Counter #sklearn from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.svm import LinearSVC from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.preprocessing import LabelEncoder #nltk from nltk.corpus import stopwords nl.download('punkt') nl.download('stopwords') #Over-sampling from imblearn.over_sampling import SMOTENC # - # ### Convert dataset from JSON format to CSV format path = "cisco-kubeflow-starter-pack/apps/retail/customer-reviews/onprem" json_data = pd.read_json(os.path.join(path, "data/amazon_automotive_reviews.json"), lines=True) json_data.to_csv('amazon_automotive_reviews.csv', index=False) # ### Read data from CSV file raw_data = pd.read_csv('amazon_automotive_reviews.csv') raw_data.head() # ### Clean up initial dataset raw_data['overallRating'] = raw_data['overall'] raw_data = raw_data.drop(['reviewerID','asin','reviewerName','helpful','overall','summary','unixReviewTime','reviewTime'], axis=1) # ### Clean review text column and remove punctuations and numericals raw_data['p_review'] = raw_data['reviewText'].apply(lambda x: re.sub(r'[^a-zA-Z\s]','', str(x))) # ### Choose data transformation method # + # Data transformation can be done using either word2vec or Smooth inverse frequency (sif) technique or Tf-Idf vectorization (tfidf). # Choose from ==> ['word2vec', 'sif', 'tfidf'] data_transform = '' # - # ### Choose model training algorithm # + # Model training can be done using either random forest (rf) or Linear Support vector clustering (lsvc) algorithms. 
#Choose from ==> ['rf','lsvc'] train_algorithm = '' # - # ### Validate data transformation method & training algorithm options if not data_transform or data_transform not in ['word2vec', 'sif', 'tfidf']: raise ValueError("Set a valid method to perform data transformation (word2vec/sif/tfidf)") if not train_algorithm or train_algorithm not in ['rf','lsvc']: raise ValueError("Set a valid algorithm to train your model(rf/lsvc)") if data_transform in ['word2vec','sif'] and train_algorithm == 'lsvc': raise Warning("The combination selected may not be the best scoring one!") # ### Apply data transformation on data as per selected choice if data_transform in ['word2vec', 'sif']: p_review = raw_data['p_review'].to_list() tokens = [nl.word_tokenize(sentences) for sentences in p_review] stop_words = stopwords.words('english') tokens = [[word for word in tokens[i] if not word in stopwords.words('english')] for i in range(len(tokens))] wv_model = gensim.models.Word2Vec(tokens, size=300, min_count=1, workers=4) wv_model.train(tokens, total_examples=len(tokens), epochs=50) print("Word2vec model generated & trained on tokens from review text") # + if data_transform == 'word2vec': print("Preparing training data using word2vec..") wv_train = [] for i in range(len(tokens)): wv_train.append(np.mean(np.asarray([wv_model[token] for token in tokens[i]]),axis=0)) print("Completed") elif data_transform == 'sif': print("Preparing training data using Smooth inverse frequency(SIF)..") vlookup = wv_model.wv.vocab Z = 0 for k in vlookup: Z += vlookup[k].count # Compute the normalization constant Z a = 0.001 embedding_size = 300 wv_sif_train = [] for i in range(len(tokens)): vs = np.zeros(300) for word in tokens[i]: a_value = a / (a + (vlookup[word].count/Z)) vs = np.add(vs, np.multiply(a_value, wv_model.wv[word])) wv_sif_train.append(np.divide(vs, len(tokens[i]))) print("Completed") elif data_transform == 'tfidf': print("Preparing training data using TfIdf vectorization..") tfidf = TfidfVectorizer(ngram_range=(1,2),sublinear_tf=True, min_df=5, norm='l2', encoding='latin-1', stop_words='english') features = tfidf.fit_transform(raw_data.p_review).toarray() print(features.shape) print("Completed") # - # ### Depict class imbalance issue in dataset using value count for each rating # # Here rating 5 has lot more records than others, so the dataset is considered to be highly skewed / imbalanced. raw_data.overallRating.value_counts() # ### Initialize target variable to a local variable y = raw_data.overallRating # ### Resample dataset to remove class imbalance issue using SMOTENC # # Preprocessing of the dataset is done in such a way that the rating categories other than 5 ( which is the majority class) is oversampled accordingly, so as to get a balanced dataset without any prediction output bias. 
# + sm = SMOTENC(sampling_strategy={1: 6000, 2: 6200, 3 : 6800, 4: 11000}, random_state=42, categorical_features=[1]) if data_transform == 'word2vec': X_resampled, y_resampled = sm.fit_resample(np.asarray(wv_train), y) elif data_transform == 'sif': X_resampled, y_resampled = sm.fit_resample(np.asarray(wv_sif_train), y) elif data_transform == 'tfidf': X_resampled, y_resampled = sm.fit_resample(features, y) print('Resampled dataset samples per class {}'.format(Counter(y_resampled))) # - # ### Split train & test data # + if data_transform == 'word2vec': x_train, x_test, y_train, y_test = train_test_split(X_resampled,y_resampled,test_size=0.3,shuffle=True,random_state=7) elif data_transform == 'sif': x_train, x_test, y_train, y_test = train_test_split(X_resampled,y_resampled,test_size=0.3,shuffle=True,random_state=7) elif data_transform == 'tfidf': x_train, x_test, y_train, y_test = train_test_split(X_resampled,y_resampled,test_size=0.3,shuffle=True,random_state=7) # - # ### Train model # + if train_algorithm == 'rf': model = RandomForestClassifier(n_estimators=40, random_state=0) model.fit(x_train,y_train) elif train_algorithm == 'lsvc': model = LinearSVC() model.fit(x_train, y_train) # - # ### Save model # + file_path = '/home/jovyan' file_rel_path = 'model/' file_abs_path = os.path.join(file_path, file_rel_path) file_name = 'model.joblib' if not os.path.exists(file_abs_path): os.mkdir(file_abs_path) dump(model, file_abs_path + file_name) # - # ### Define inference service name & model storage URI # + svc_name = 'text-classify' # !kubectl get pods $HOSTNAME -o yaml -n anonymous > podspec with open("podspec") as f: content = yaml.safe_load(f) for elm in content['spec']['volumes']: if 'workspace-' in elm['name']: pvc = elm['name'] os.remove('podspec') pvc storageURI = "pvc://" + pvc + '/' + file_rel_path print(storageURI) # - # ### Define configuration for inference service creation # + wsvol_blerssi_kf = f"""apiVersion: "serving.kubeflow.org/v1alpha2" kind: "InferenceService" metadata: name: {svc_name} namespace: anonymous spec: default: predictor: sklearn: storageUri: {storageURI} """ kfserving = yaml.safe_load(wsvol_blerssi_kf) with open('blerssi-kfserving.yaml', 'w') as file: yaml_kfserving = yaml.dump(kfserving,file) # ! 
cat blerssi-kfserving.yaml # - # ### Apply the configuration .yaml file # !kubectl apply -f blerssi-kfserving.yaml # ### Check whether inferenceservice is created # !kubectl get inferenceservice -n anonymous # ### Note: # # Wait for inference service READY="True" # ### Predict data from serving after setting INGRESS_IP # + predictions = [] host_name = svc_name + '.anonymous.example.com' headers = { 'host': host_name } for i in range(15): formData = { 'instances': x_test[i:i+1].tolist() } url = 'http://<>:<>/v1/models/' + svc_name + ':predict' res = requests.post(url, json=formData, headers=headers) results = res.json() prediction = results['predictions'] predictions.append(prediction[0]) print("Predictions") print(predictions) # - # ## Clean up after predicting # ### Delete inference service # !kubectl delete -f blerssi-kfserving.yaml # ### Delete model folder # !rm -rf $file_rel_path # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Grid Search import numpy as np import matplotlib.pyplot as plt import pandas as pd # ## Data Loading PATH = "../../../Classification/Kernel_SVM/Python/Social_Network_Ads.csv" dataset = pd.read_csv(PATH) X = dataset.iloc[:, :-1].values y = dataset.iloc[:, -1].values # ## Train Test Split # + from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42) # - # ## Feature Scaling # + from sklearn.preprocessing import StandardScaler sc = StandardScaler() X_train = sc.fit_transform(X_train) X_test = sc.transform(X_test) # - # ## Kernel SVM # + from sklearn.svm import SVC classifier = SVC(kernel="rbf", random_state=42) classifier.fit(X_train, y_train) # - # ## Prediction sample = [[30, 87000]] print(classifier.predict(sc.transform(sample))) # + jupyter={"outputs_hidden": true} y_pred = classifier.predict(X_test) print(np.concatenate((y_test.reshape(len(y_test),1), y_pred.reshape(len(y_pred),1)), axis=1)) # - # ## K-Fold Cross Validation # + from sklearn.model_selection import cross_val_score accuracies = cross_val_score(estimator=classifier, X=X_train, y=y_train, cv=10, n_jobs=-1) # - print("Mean \t\t\t%.2f" % accuracies.mean()) print("Standard Deviation \t%.2f" % accuracies.std()) # ## Grid Search # + from sklearn.model_selection import GridSearchCV parameters = [ {"C": [1, 10, 100, 1000], "kernel": ['linear']}, {"C": [1, 10, 100, 1000], "kernel": ['rbf'], "gamma": [0.5, 0.1, 0.01, 0.001, 0.0001]}, {"C": [1], "kernel": ['rbf'], "gamma": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]} ] gs = GridSearchCV(estimator=classifier, param_grid=parameters, scoring='accuracy', cv=10, n_jobs=-1) gs = gs.fit(X_train, y_train) # + best_accuracy = gs.best_score_ best_parameters = gs.best_params_ print("Best Accuracy \t\t\t%.2f" % best_accuracy) print("Best Parameters \t", best_parameters) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import matplotlib.pyplot as plt # %matplotlib inline import cv2 file = 'images/center_2019_01_15_23_03_54_135.jpg' image = cv2.cvtColor(cv2.imread(file), cv2.COLOR_BGR2RGB) flipped_image = cv2.flip(image,1) plt.imshow(image) plt.show() plt.imshow(flipped_image) plt.show() # - # --- # jupyter: # jupytext: # 
text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # importing libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # ## Importing Data train = pd.read_csv('train_s3TEQDk.csv') test = pd.read_csv('test_mSzZ8RL.csv') # ## Cleaning Train Data # checking training data headers train.head() # visualizing missing training data (if any) sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis') # replacing 'YES', 'NO' values by '1' and '0' in training data and filling the missing values with mean train = train.replace(('Yes','No'),(1,0)) train = train.fillna(train.mean()) sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis') train.head() # converting categorical features to dummy variables using pandas gender = pd.get_dummies(train['Gender'],drop_first=True) RCode = pd.get_dummies(train['Region_Code'],drop_first=True) Occ = pd.get_dummies(train['Occupation'],drop_first=True) CCode = pd.get_dummies(train['Channel_Code'],drop_first=True) # dropping redundant feature columns after encoding train.drop(['ID','Gender','Region_Code','Occupation','Channel_Code'],axis=1,inplace=True) # concatenating dataframes train = pd.concat([train,gender,RCode,Occ,CCode],axis=1) train.head() # ## Cleaning Test Data # checking testing data headers test.head() # visualizing missing testing data (if any) sns.heatmap(test.isnull(),yticklabels=False,cbar=False,cmap='viridis') # replacing 'YES', 'NO' values by '1' and '0' in testing data and filling the missing values with mean test = test.replace(('Yes','No'),(1,0)) test = test.fillna(test.mean()) sns.heatmap(test.isnull(),yticklabels=False,cbar=False,cmap='viridis') test.head() # converting categorical features to dummy variables using pandas gender = pd.get_dummies(test['Gender'],drop_first=True) RCode = pd.get_dummies(test['Region_Code'],drop_first=True) Occ = pd.get_dummies(test['Occupation'],drop_first=True) CCode = pd.get_dummies(test['Channel_Code'],drop_first=True) # dropping redundant feature columns after encoding ID = test['ID'] test.drop(['ID','Gender','Region_Code','Occupation','Channel_Code'],axis=1,inplace=True) # concatenating dataframes test = pd.concat([test,gender,RCode,Occ,CCode],axis=1) test.head() print('Training Data Shape :',train.shape) print('\nTesting Data Shape :',test.shape) X_train = train.drop(['Is_Lead'], axis=1) X_test = test y_train = train['Is_Lead'] print('Training Data Shape :',X_train.shape) print('\nTesting Data Shape :',X_test.shape) # ## Building A Random Forest Model # importing random forest classifier and creating an instance from sklearn.ensemble import RandomForestClassifier rfc = RandomForestClassifier() # + # Optimizing Hyperparameters for Random Forest Algorithm from sklearn.model_selection import RandomizedSearchCV # number of trees n_estimators = [100,200,300,400,500] # max number of features to consider at every split max_features = ['auto','sqrt','log2'] # max number of levels in a tree max_depth = [10,20,30,40,50] max_depth.append(None) # minimum number of samples required to split a node min_samples_split = [2,5,10,15,20] # minimum number of samples required at each leaf node min_samples_leaf = [1,2,5,10,15] grid_param = {'n_estimators': n_estimators, 'max_depth': max_depth, 'max_features': max_features, 'min_samples_split': min_samples_split, 'min_samples_leaf': min_samples_leaf} 
rfc_random = RandomizedSearchCV(estimator = rfc, param_distributions = grid_param, n_iter = 100, cv = 5, verbose = 2, random_state = 42, n_jobs = -1) # - rfc_random.fit(X_train, y_train) rfc_pred = rfc_random.predict_proba(X_test) rfc_pred rfc_random.best_params_ # ## Saving Predictions to a csv file ID = pd.DataFrame(ID) ID.columns =['ID'] Is_lead = pd.DataFrame(rfc_pred[:,1]) Is_lead.columns =['Is_Lead'] det = pd.concat([ID, Is_lead], join = 'outer', axis = 1) det.to_csv("Submission3.csv", index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The Sparks foundation Task-1 # Importing all the library which will be needed import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline # Read the Data and make a dataframe url="http://bit.ly/w-data" df=pd.read_csv(url) print(df.shape) df.head(10) # To check linearity in the data, Plot the scatter plot # %matplotlib inline df.plot(x="Hours", y="Scores", style='o') plt.xlabel('Hours of study') plt.ylabel('Score') plt.show() # yes, our data has lineariry so we can apply linear regression model to our Data. Here we have single variable so that we use simple linear regression. # Before spliting the data reshape, convert into array (Give reason) and store data in X and Y varible x=np.array(df['Hours']).reshape(-1,1) y=np.array(df['Scores']).reshape(-1,1) print('Hours =\n',x) print('Score =\n',y) # ## Split the Data from sklearn.model_selection import train_test_split xtrain, xtest, ytrain, ytest = train_test_split(x,y,test_size=0.2,random_state=101) print("Training Data") print('X train=\n',xtrain) print('Y train=\n',ytrain) # ## Training the Model from sklearn.linear_model import LinearRegression regressor=LinearRegression() regressor.fit(xtrain,ytrain) print('Training completed') # ## Visualisation on Trained Model plt.scatter(xtrain, ytrain, color='red', label='Training data') plt.plot(xtrain,regressor.predict(xtrain), color='blue', label='Regression line') plt.title('Visuals for Trained Model') plt.xlabel("Hours of Study") plt.ylabel('Scores') plt.legend() plt.show() # ## Visualisation on Testing Data plt.scatter(xtest, ytest, color='red', label='Testing data') plt.plot(xtrain,regressor.predict(xtrain), color='blue', label='Regression line') plt.title('Visuals for Testing Data') plt.xlabel("Hours of Study") plt.ylabel('Scores') plt.legend() plt.show() # ### Making predictions ypred= regressor.predict(xtest) print('xtest value=\n',xtest) print('Absolute value=\n',ytest) print('Predicted value=\n',ypred) # What will be predicted score if a student studies for 9.25 hrs/ day? # You can also test with your own data hours = [[9.25]] own_pred = regressor.predict(hours) print("No of Hours = {}".format(hours)) print("Predicted Score = {}".format(own_pred[0])) # ## Evaluating Model # The final step is to evaluate the performance of algorithm. This step is particularly important to compare how well different algorithms perform on a particular dataset. For simplicity here, we have chosen the mean square error. There are many such metrics. 
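# For reference, the metrics computed below are, for true values $y_i$, predictions $\hat{y}_i$ and $n$ test points:
#
# $$R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i-\hat{y}_i)^2}{\sum_{i=1}^{n}(y_i-\bar{y})^2},\qquad \mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\lvert y_i-\hat{y}_i\rvert,\qquad \mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2,\qquad \mathrm{RMSE}=\sqrt{\mathrm{MSE}}$$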
# ### R-score value from sklearn.metrics import r2_score r2_score(ytest, ypred) # ### Error from sklearn import metrics print('MAE=',metrics.mean_absolute_error(ytest,ypred),'(Mean absolute error)') print('MSE',metrics.mean_squared_error(ytest,ypred),'(Mean squared error)') print('RMSE',np.sqrt(metrics.mean_squared_error(ytest,ypred)),'(Root mean squared error)') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np from tqdm import tqdm from multiprocessing import Pool import random import pandas as pd # + mat = np.load('mat1.npy') mat = mat[:,:,:720] + 1 flat_mat = mat[0] for d in range(1, 31): flat_mat = np.hstack([flat_mat, mat[d]]) mat = None K = 64 eye = np.eye(K) # - def eva_for_mat(true_mat, pre_mat): eva = 0 index = np.argwhere(true_mat > 0) for i,j in index: eva += (true_mat[i,j] - pre_mat[i, j]) ** 2 eva = eva / len(index) return eva # + P = np.random.normal(loc=0.1, scale=0.01, size=(flat_mat.shape[0], K)) Q = np.random.normal(loc=0.1, scale=0.01, size=(flat_mat.shape[1], K)) p_related_list = [] for i in range(len(P)): p_related_list.append(np.argwhere(flat_mat[i,:] > 0).flatten()) q_related_list = [] for j in range(len(Q)): q_related_list.append(np.argwhere(flat_mat[:,j] > 0).flatten()) # - for step in range(5): for i in tqdm(range(len(P))): p_related = p_related_list[i] if len(p_related) > K: P[i] = np.dot(np.dot(flat_mat[i, p_related], Q[p_related]), np.linalg.pinv(np.dot(Q[p_related].T, Q[p_related]))) else: P[i] = np.dot(np.dot(flat_mat[i, p_related], Q[p_related]), np.linalg.pinv(np.dot(Q[p_related].T, Q[p_related]) + eye)) if np.average(P[i]) > 10 or np.average(P[i]) < -10: raise np.save('P.npy',P) print(np.average((flat_mat[flat_mat > 0] - np.dot(P,Q.T)[flat_mat > 0])**2)) for j in tqdm(range(len(Q))): q_related = q_related_list[j] if len(q_related) > K: Q[j] = np.dot(np.dot(flat_mat[q_related,j].T, P[q_related]), np.linalg.pinv(np.dot(P[q_related].T, P[q_related]))) else: Q[j] = np.dot(np.dot(flat_mat[q_related,j].T, P[q_related]), np.linalg.pinv(np.dot(P[q_related].T, P[q_related]) + eye)) if np.average(Q[j]) > 10 or np.average(Q[j]) < -10: raise np.save('Q.npy',Q) print(np.average((flat_mat[flat_mat > 0] - np.dot(P,Q.T)[flat_mat > 0])**2)) ### 构造翌日隐含因子 week_features = [] for week in range(7): temp = [] for i in [week + i * 7 for i in range(4)]: temp.append(Q[i * 720: (i+1) * 720]) temp = np.array(temp) week_features.append(np.average(temp, axis = 0)) ### 构造周趋势 week_lambda = [] for i in range(5): index = [x+i*7 for x in [0,1,2]] week_lambda.append(np.sum(mat[index]) / np.sum(mat[index] > 0)) week_lambda = [x / np.average(week_lambda) for x in week_lambda] Q_generate = [] for lamb in week_lambda: for week_latent in week_features: Q_generate.append(week_latent * lamb) Q_generate = np.array(Q_generate) Q_generate = Q_generate.reshape(-1, K) sns.lineplot(range(720), np.average(Q[30 * 720:31 * 720], axis = 1)) sns.lineplot(range(720), np.average(Q_generate[30 * 720:31 * 720], axis = 1), label='Generated') plt.ylim(0.08, 0.11) plt.figure(figsize=(20,10)) sns.lineplot(range(len(Q)), np.average(Q, axis = 1)) sns.lineplot(range(len(Q_generate)), np.average(Q_generate, axis = 1), label='Generated') plt.ylim(0.08, 0.11) np.save('finalP.npy', P) np.save('finalQ.npy', Q) np.save('finalQ_pred.npy', Q_generate) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light 
# format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # # Data Gathering #
    # Data Gathering is simply the process of collecting your data together. #
    # This notebook covers strategies you can use to gather data for an analysis. # # If you want to move on to first working on data analyses (with provided data) you can move onto the next tutorials, and come back to this one later. # # Data gathering can encompass anything from launching a data collection project, web scraping, pulling from a database, downloading data in bulk. # # It might even include simply calling someone to ask if you can use some of their data. # ## Where to get Data # # ### The Web # # The web is absolutely full of data or ways to get data, either by hosting **data repositories** from which you can download data, by offering **APIs** through which you can request specific data from particular applications, or as data itself, such that you can use **web scraping** to extract data directly from websites. # # ### Other than the Web # # Not all data is indexed or accessible on the web, at least not publicly. # # Sometimes finding data means chasing down data wherever it might be. # # If there is some particular data you need, you can try to figure out who might have it, and get in touch to see if it might be available. # # ### Data Gathering Skills # Depending on your gathering method, you will likely have to do some combination of the following: # - Download data files from repositories # - Read data files into python # - Use APIs # - Query databases # - Call someone and ask them to send you a harddrive # ## Data Repositories #
    # A Data Repository is basically just a place where data is stored. For our purposes, it is a place you can download data from. #
    # #
    # There is a curated list of good data sources included in the # project materials. #
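# As a minimal sketch of the "download a file from a repository and read it into python" workflow -- the iris CSV from the seaborn-data repository is used here only as an example of a small, publicly hosted file, so substitute whichever repository file you actually need:

# +
import pandas as pd

# pandas can read a CSV straight from a repository URL
csv_url = 'https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv'
iris_df = pd.read_csv(csv_url)
iris_df.head()
# -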
    # ## Databases #
    # A database is an organized collection of data. More formally, 'database' refers to a set of related data, and the way it is organized. #
    # ### Structured Query Language - SQL #
    # SQL (pronounced 'sequel') is a language used to 'communicate' with databases, making queries to request particular data from them. #
    # #
    # There are useful introductions and tutorials to SQL available online, as well as several handy SQL 'cheat sheets'. #
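# You can also experiment with SQL from Python without installing anything extra, since the standard library ships with `sqlite3`. A minimal sketch using a throwaway in-memory database (the table and rows are made up purely for illustration):

# +
import sqlite3

# Create an in-memory database and a small table
con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute('CREATE TABLE subjects (subject_id INTEGER, age INTEGER)')
cur.executemany('INSERT INTO subjects VALUES (?, ?)', [(1, 23), (2, 31), (3, 27)])
con.commit()

# Query rows back out with a SELECT statement
for row in cur.execute('SELECT subject_id, age FROM subjects WHERE age > 25'):
    print(row)

con.close()
# -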
    # SQL is the standard, and most popular, way to interface with relational databases. # # Note: None of the rest of the tutorials presume or require any knowledge of SQL. # # You can look into it if you want, or if it is relevant to accessing some data you want to analyze, but it is not required for this set of tutorials. # ## Application Program Interfaces (APIs) #
    # APIs are basically a way for software to talk to software - an API is an interface into an application / website / database designed for software. #
    # #
    # There are plenty of simple explanations of APIs online, as well as much broader, more technical overviews. #
    # APIs offer a lot of functionality - you can send requests to the application to do all kinds of actions. In fact, any application interface that is designed to be used programatically is an API, including, for example, interfaces for using packages of code. # # One of the many things that APIs do, and offer, is a way to query and access data from particular applications / databases. The benefit of using APIs for data gathering purposes is that they typically return data in nicely structured formats, that are relatively easy to analyze. # ### Launching URL Requests from Python # Imports # requests lets you make http requests from python import requests import pandas as pd # In practice, APIs are usually special URLs that return raw data (json or XML) as opposed to a web page to be rendered for human viewers (html). Find the documentation for a particular API to see how you send requests to access whatever data you want. For example, let's try the Github API. # Request data from the Github API on a particular user page = requests.get('https://api.github.com/users/tomdonoghue') # In this case, the content we get back is a json file page.content # We can read in the json data with pandas pd.read_json(page.content, typ='series') #
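# The response object also exposes the parsed JSON directly through `.json()`, so individual fields can be pulled out without pandas. The field names below follow the GitHub users API payload returned above:

# +
user_info = page.json()

# A couple of the fields typically returned for a user profile
print(user_info['login'], user_info['public_repos'])
# -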
    # This # list # includes a collection of commonly used and available APIs. #
    # ## Web Scraping #
    # Web scraping is when you (programmatically) extract data from websites. #
    # #
    # Wikipedia # has a useful page on web scraping. #
    # Note that the following section uses the 'BeautifulSoup' module, which is not part of the standard anaconda distribution. # # If you do not have BeautifulSoup, and want to get it to run this section, you can uncomment the cell below, and run it, to install BeautifulSoup in your current Python environment. You only have to do this once. # + #import sys # #!conda install --yes --prefix {sys.prefix} beautifulsoup4 # - # Import BeautifulSoup from bs4 import BeautifulSoup # + # Set the URL for the page we wish to scrape site_url = 'https://en.wikipedia.org/wiki/Data_science' # Launch the URL request, to get the page page = requests.get(site_url) # - # Print out the first 1000 characters of the scraped web page page.content[0:1000] # Note that the source of the scraped web-page is a messy pile of HTML. # # There is a lot of information in there, but with no clear organization. There is some structure in the page though, delineated by HTML tags, etc, we just need to use them to parse out the data. We can do that with BeautifulSoup, which takes in messy documents like this, and parses them based on a specified format. # Parse the webpage with Beautiful Soup, using a html parser soup = BeautifulSoup(page.content, 'html.parser') # + # With the parsed soup object, we can select particular segments of the web page # Print out the page title print('TITLE: \n') print(soup.title) # Print out the first p-tag print('\nP-TAG:\n') print(soup.find('p')) # - # From the soup object, you can explore that the page is much more organized, and in such a way that you can extract particular components of interest. # # Note that it is still 'messy' in other ways, in that there might or might not be a systematic structure to how the page is laid out, and it still might take a lot of work to extract the particular information you want from it. # ### APIs vs. Web Scraping # # Web scraping is distinct from using an API, even though many APIs may be accessed over the internet. Web scraping is different in that you are (programmatically) navigating through the internet, and extracting data of interest. # # Note: # Be aware that scraping data from websites (without using APIs) can often be an involved project itself - scraping sites can take a considerable amount of tuning to get the data you want. # # Be aware that data presented on websites may not be well structured, or in an organzed format that lends itself to easy analysis. # # If you try scraping websites, also make sure you are allowed to scrape the data, and follow the websites terms of service. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + """ A collection of heartrate variability algorithms for both the timedomain and frequency domain. Copyright (C) 2019 & GPL GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 """ import numpy as np import pandas as pd import matplotlib.pyplot as plt import random import subprocess from scipy.interpolate import interp1d from gatspy.periodic import LombScargleFast class HRV: """ Heartrate variability class which calcualtes the standard HRV parameters with the help of Python functions and for cross validation also via the physionet's get_hrv script. """ def __init__(self, sampling_frequency): """ Constructor takes the sampling frequency. All rr_sample data is in sample number and will assume it's at this sampling rate. 
""" self.fs = float(sampling_frequency) self.period = 1.0/sampling_frequency def _intervals(self, rr_samples): """Calculate the RR intervals in ms from sample numbers. :param rr_samples: R peak sample locations :type rr_samples: array_like :return: RR intervals in milliseconds :rtype: ndarray """ rr_intervals = np.diff(np.array(rr_samples)*self.period*1000) return rr_intervals def _timestamps(self, rr_samples): """Calculate the timestamps in ms from sample locations. :param rr_samples: R peak sample locations :type rr_samples: array_like :return: The timestamps in milliseconds :rtype: array_like """ ts = np.array(rr_samples)*self.period*1000 return ts def _succ_diffs(self, rr_samples): """Calculate the successive differences of R peaks. :param rr_samples: R peak sample locations :type rr_samples: array_like :return: The successive differences of R peaks :rtype: ndarray """ rr_ints = self._intervals(rr_samples) succ_diffs = [] for i in range(len(rr_ints)-1): diff = rr_ints[i+1] - rr_ints[i] succ_diffs.append(diff) return np.array(succ_diffs) def SDNN(self, rr_samples, normalise=False): """Calculate SDNN, the standard deviation of NN intervals. :param rr_samples: R peak sample locations :type rr_samples: array_like :param normalise: normalise the SDNN against the average RR interval, defaults to False :type normalise: bool, optional :return: SDNN, the standard deviation of NN intervals :rtype: float """ rr_intervals = self._intervals(rr_samples) rr_std = np.std(rr_intervals) if normalise: rr_mean_interval = np.mean(rr_intervals) rr_std = rr_std/rr_mean_interval return rr_std def SDANN(self, rr_samples, average_period=5.0, normalise=False): """Calculate SDANN, the standard deviation of the average RR intervals calculated over short periods. :param rr_samples: R peak sample locations :type rr_samples: array_like :param average_period: The averging period in minutes, defaults to 5.0 :type average_period: float, optional :param normalise: normalise the SDANN against the average RR interval, defaults to False :type normalise: bool, optional :return: SDANN, the standard deviation of the average RR intervals calculated over short periods :rtype: float """ average_period_samples = int(self.fs*average_period*60) average_rr_intervals = [] sections = int((np.max(rr_samples)/average_period_samples)+0.5) if sections<1: sections = 1 for i in range(sections): idx = np.where((rr_samples>=(i*average_period_samples)) & (rr_samples<((i+1)*average_period_samples))) idx = idx[0] section_rr = rr_samples[idx[0]:idx[-1]+1] avg_rr_int = np.mean(self._intervals(section_rr)) average_rr_intervals.append(avg_rr_int) rr_std = np.std(average_rr_intervals) if normalise: rr_mean_interval = np.mean(average_rr_intervals) rr_std = rr_std/rr_mean_interval return rr_std def RMSSD(self, rr_samples, normalise = False): """Calculate RMSSD (root mean square of successive differences). :param rr_samples: R peak sample locations :type rr_samples: array_like :param normalise: normalise the RMSSD against the average RR interval, defaults to False :type normalise: bool, optional :return: RMSSD (root mean square of successive differences) :rtype: float """ succ_diffs = self._succ_diffs(rr_samples) succ_diffs = succ_diffs*succ_diffs rms = np.sqrt(np.mean(succ_diffs)) if normalise: rms = rms / np.mean(self._intervals(rr_samples)) return rms def SDSD(self, rr_samples): """Calculate SDSD (standard deviation of successive differences), the standard deviation of the successive differences between adjacent NNs. 
:param rr_samples: R peak sample locations :type rr_samples: array_like :return: SDSD (standard deviation of successive differences) :rtype: float """ succ_diffs = self._succ_diffs(rr_samples) return np.std(succ_diffs) def NN50(self, rr_samples): """Calculate NN50, the number of pairs of successive NNs that differ by more than 50 ms. :param rr_samples: R peak sample locations :type rr_samples: array_like :return: NN50 :rtype: float """ succ_diffs = self._succ_diffs(rr_samples) over_50 = np.where(abs(succ_diffs)>50) over_50 = over_50[0] return len(over_50) def pNN50(self, rr_samples): """Calculate pNN50, the proportion of NN50 divided by total number of NNs. :param rr_samples: R peak sample locations :type rr_samples: array_like :return: pNN50 :rtype: float """ return self.NN50(rr_samples)/(len(rr_samples)-1) def NN20(self, rr_samples): """Calculate NN20, the number of pairs of successive NNs that differ by more than 20 ms. :param rr_samples: R peak sample locations :type rr_samples: array_like :return: NN20 :rtype: float """ succ_diffs = self._succ_diffs(rr_samples) over_20 = np.where(abs(succ_diffs)>20) over_20 = over_20[0] return len(over_20) def pNN20(self, rr_samples): """Calculate pNN20, the proportion of NN20 divided by total number of NNs. :param rr_samples: R peak sample locations :type rr_samples: array_like :return: pNN20 :rtype: float """ return self.NN20(rr_samples)/(len(rr_samples)-1) def HR(self, rr_samples): """Calculate heart-rates from R peak samples. :param rr_samples: R peak sample locations :type rr_samples: array_like :return: Heart-rates in BPM :rtype: ndarray """ rr_intervals = np.diff(rr_samples) heart_rates = 60.0/(rr_intervals/float(self.fs)) heart_rates = 60.0/rr_intervals print('avg hr', np.average(heart_rates)) return heart_rates def add_rr_error(self, rr_samples, error): """ Adds jitter to the heartrate timestamps. The error and the rr_samples are in timestamps. Returns the noisy timestamps in samples. """ if error==0: return rr_samples error_values = np.random.randint(-abs(error), abs(error)+1, len(rr_samples)) noisy_rr_samples = rr_samples+error_values return noisy_rr_samples def fAnalysis(self, rr_samples): """ Frequency analysis to calc self.lf, self.hf, returns the LF/HF-ratio and also calculates the spectrum as pairs of (self.f_hr_axis,self.f_hr). The input arrary is in sample points where R peaks have been detected. 
""" # discrete timestamps self.hr_discrete = self._intervals(rr_samples) / 1000 # hr positions in time self.t_hr_discrete = [i/self.fs for i in rr_samples[1:]] # now let's create function which approximates the hr(t) relationship self.hr_func = interp1d(self.t_hr_discrete, self.hr_discrete) # we take 1024 samples for a linear time array for hr(t) nsamp = 1000 # linear time array for the heartrate self.t_hr_linear = np.linspace(self.t_hr_discrete[1], self.t_hr_discrete[len(self.t_hr_discrete)-2], num=nsamp) # duration in secs of the heartrate array minus the ends b/c of the approx duration = self.t_hr_discrete[len(self.t_hr_discrete)-2] - self.t_hr_discrete[1]; # heartrate linearly approximated between discrete samples self.hr_linear = self.hr_func(self.t_hr_linear) model = LombScargleFast().fit(self.t_hr_discrete, self.hr_discrete, 1E-2) fmax = 1 fmin = 0.01 df = (fmax - fmin) / nsamp self.f_hr = model.score_frequency_grid(fmin, df, nsamp) self.f_hr_axis = fmin + df * np.arange(nsamp) # lf self.lf = 0 # hf self.hf = 0 for i in range(0,int(nsamp/2)): if (self.f_hr_axis[i] >= 0.04) and (self.f_hr_axis[i] <= 0.15): self.lf = self.lf + self.f_hr[i] if (self.f_hr_axis[i] >= 0.15) and (self.f_hr_axis[i] <= 0.4): self.hf = self.hf + self.f_hr[i] # hf # - df = pd.read_csv('/media/brandon/Seagate HDD/datasets/vicarPPG/GroundTruth/PPG/Cleaned/01-base PPG.csv') N = df['Signal'].values.shape[0] stop = N - 1 stop = 500 signal = df['Signal'][:stop].values time = df['Time'][:stop].values / 1000 plt.plot(time, signal) plt.xlabel('Time (s)') peaks = [df['Time'][idx]/1000 for idx, element in enumerate(df['Peaks']) if element == 1][:stop] num_peaks = 0 for peak in peaks: if peak >= time[-1]: break num_peaks += 1 plt.axvline(x = peak, color = 'r') print(f'Number of peaks: {num_peaks}') # + def find_IBIs(peaks): # IBIs = [] # for i in range(len(peaks)-1): # IBIs.append(peaks[i+1] - peaks[i]) return np.diff(peaks) def find_hr(peaks): IBIs = find_IBIs(peaks) IBI_mean = np.average(IBIs) hr = 1/IBI_mean * 60 *1000 return hr def find_hrv(peaks): IBIs = find_IBIs(peaks) hrv = find_rmssd(IBIs) sdnn = find_sdnn(IBIs) return hrv, sdnn def find_rmssd(IBIs): N = len(IBIs) ssd = 0 for i in range(N-1): ssd += (IBIs[i+1] - IBIs[i])**2 rmssd = np.sqrt(ssd/(N-1)) return rmssd def find_sdnn(IBIs): sdnn = np.std(IBIs) return sdnn def filter_IBIs(IBIs): # remove IBIs outside of 250 ms and 2000 ms IBIs = [IBI for IBI in IBIs if IBI > 250 and IBI < 2000] # remove IBIs further than 3 standard deviations from the mean IBI_mean = np.mean(IBIs) IBI_std = np.std(IBIs) upper_limit = IBI_mean + 3*IBI_std lower_limit = IBI_mean - 3 * IBI_std print(IBI_mean, IBI_std, upper_limit, lower_limit) IBIs = [IBI for IBI in IBIs if IBI < upper_limit and IBI > lower_limit] return IBIs # - # peaks in ms peaks = [df['Time'][idx] for idx, element in enumerate(df['Peaks']) if element == 1] hrv = HRV(sampling_frequency=30) plt.plot(hrv.HR(peaks)) # print(np.diff(peaks)) plt.plot(hrv._intervals(peaks)) np.diff(np.array(peaks)*1.0/30*1000) rr_intervals = np.diff(peaks) rr_mean = np.average(rr_intervals) heart_rates = 60.0/(rr_intervals/float(30))*1000 heart_rates hr = 60.0/(rr_mean/float(30))*1000 hr plt.plot(find_IBIs(peaks)) plt.plot(hrv.HR(peaks)*1000) signal.periodogram(find_IBIs(peaks), ) import scipy.signal as signal IBIs = peaks f, Pxx_den = signal.welch(IBIs, fs=4) print(len(f), len(Pxx_den)) plt.plot(f[:50],Pxx_den[:50]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # 
jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import boto3 import os from botocore.config import Config def s3_client(): # connect to Secretless Broker for Credentials. s3 = boto3.client('s3', aws_access_key_id="secretless", region_name="us-east-1", aws_secret_access_key="secretless", endpoint_url="http://secretless.empty", config=Config(proxies={'http': 'http://localhost:8099'})) return s3 # - def list_buckets(): bck_rep = s3_client().list_buckets() for buck in bck_rep["Buckets"]: print(buck) # + if __name__ == '__main__': list_buckets() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Python program to illustrate # the concept of locks # in multiprocessing import multiprocessing # function to withdraw from account def withdraw(balance, lock): for _ in range(2000): lock.acquire() balance.value = balance.value - 1 lock.release() # function to deposit to account def deposit(balance, lock): for _ in range(2000): lock.acquire() balance.value = balance.value + 1 lock.release() def perform_transactions(): # initial balance (in shared memory) balance = multiprocessing.Value('i', 500) # creating a lock object lock = multiprocessing.Lock() # creating new processes p1 = multiprocessing.Process(target=withdraw, args=(balance,lock)) p2 = multiprocessing.Process(target=deposit, args=(balance,lock)) # starting processes p1.start() p2.start() # wait until processes are finished p1.join() p2.join() # print final balance print("Final balance = {}".format(balance.value)) if __name__ == "__main__": for _ in range(10): # perform same transaction process 10 times perform_transactions() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="eBA2J1btZ0cV" colab_type="code" outputId="72691cc0-912a-4fd8-ffb5-d9f6335f705d" colab={"base_uri": "https://localhost:8080/", "height": 122} from google.colab import drive drive.mount('/content/gdrive') # + id="-a82FKy68qZf" colab_type="code" colab={} # !wget -cq https://s3.amazonaws.com/content.udacity-data.com/courses/nd188/flower_data.zip # !unzip -qq flower_data.zip # + id="8wkFmRp_-atl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="f53a2852-4757-42ac-9680-393c332e478d" # %matplotlib inline # %config InlineBackend.figure_format = 'retina' import time import json import copy import matplotlib.pyplot as plt import seaborn as sns import numpy as np import PIL # + id="UIyWfvpnh4RZ" colab_type="code" colab={} from PIL import Image from collections import OrderedDict import torch from torch import nn, optim from torch.optim import lr_scheduler from torch.autograd import Variable import torchvision from torchvision import datasets, models, transforms from torch.utils.data.sampler import SubsetRandomSampler import torch.nn as nn import torch.nn.functional as F # + id="sG5gwt0Ui8Qi" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: py3.9 # language: python # name: py3.9 # --- # # Welcome to the SphericalPolygon package # 
[![PyPI version shields.io](https://img.shields.io/pypi/v/sphericalpolygon.svg)](https://pypi.python.org/pypi/sphericalpolygon/) [![PyPI pyversions](https://img.shields.io/pypi/pyversions/sphericalpolygon.svg)](https://pypi.python.org/pypi/sphericalpolygon/) [![PyPI status](https://img.shields.io/pypi/status/sphericalpolygon.svg)](https://pypi.python.org/pypi/sphericalpolygon/) [![GitHub contributors](https://img.shields.io/github/contributors/lcx366/SphericalPolygon.svg)](https://GitHub.com/lcx366/SphericalPolygon/graphs/contributors/) [![Maintenance](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://GitHub.com/lcx366/SphericalPolygon/graphs/commit-activity) [![GitHub license](https://img.shields.io/github/license/lcx366/SphericalPolygon.svg)](https://github.com/lcx366/SphericalPolygon/blob/master/LICENSE) [![Documentation Status](https://readthedocs.org/projects/pystmos/badge/?version=latest)](http://sphericalpolygon.readthedocs.io/?badge=latest) # # The SphericalPolygon package is an archive of scientific routines for handling spherical polygons. Currently, operations on spherical polygons include: # # 1. calculate the area or mass(if the area density is given) # 2. calculate the perimeter # 3. identify the centroid # 4. compute the geometrical or physical moment of inertia tensor # 5. determine whether one or multiple points are inside the spherical polygon # ## How to Install # On Linux, macOS and Windows architectures, the binary wheels can be installed using **pip** by executing one of the following commands: # # ``` # pip install sphericalpolygon # pip install sphericalpolygon --upgrade # to upgrade a pre-existing installation # ``` # ## How to use # ### Create a spherical polygon # Spherical polygons can be created from a 2d array in form of `[[lat_0,lon_0],..,[lat_n,lon_n]]` with unit of degrees, or from a boundary file, such as those in [Plate boundaries for NNR-MORVEL56 model](http://geoscience.wisc.edu/~chuck/MORVEL/PltBoundaries.html). The spherical polygon accepts a latitude range of [-90,90] and a longitude range of [-180,180] or [0,360]. from sphericalpolygon import Sphericalpolygon from astropy import units as u # build a spherical polygon for Antarctica Plate polygon = Sphericalpolygon.from_file('NnrMRVL_PltBndsLatLon/an',skiprows=1) print(polygon.orientation) # ### Calculate the area # Calculate the area(or the solid angle) of a spherical polygon over a unit sphere. print(polygon.area()) # Calculate the area of the spherical polygon over the Earth with an averaged radius of 6371km. Re = 6371*u.km print(polygon.area(Re)) # Calculate the mass of the spherical polygon shell with a thickness of 100km and density of 3.1g/cm3 over the Earth. thickness, density = 100*u.km, 3.1*u.g/u.cm**3 rho = thickness * density # area density print(polygon.area(Re,rho)) # ### Calculate the perimeter # Calculate the perimeter of a spherical polygon over a unit sphere. print(polygon.perimeter()) # Calculate the perimeter of a spherical polygon over the Earth. print(polygon.perimeter(Re)) # ### Calculate the compactness print(polygon.compactness()) # ### Identify the centroid # Identify the centroid of a spherical polygon over a unit sphere. print(polygon.centroid()) # Identify the centroid of a spherical polygon over the Earth. print(polygon.centroid(Re)) # It shows that the latitude of the centroid is close to the South Pole, and the centroid is located about 881km underground. 
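# Before moving on to the inertia tensor, note that a polygon can also be built directly from a `[[lat, lon], ...]` array with the `from_array` method mentioned in the change log below. A minimal sketch with made-up vertices (not a real plate boundary, and whether the boundary must be explicitly closed is not checked here):
triangle = Sphericalpolygon.from_array([[0, 0], [0, 90], [90, 0]])
print(triangle.area())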
# ### Compute the moment of inertia tensor # Compute the geometrical moment of inertia tensor of a spherical polygon over a unit sphere. The tensor is symmetrical and has six independent components. The first three components are located diagonally, corresponding to $Q_{11}$, $Q_{22}$, and $Q_{33}$; the last three components correspond to $Q_{12}$, $Q_{13}$, and $Q_{23}$. print(polygon.inertia()) # Compute the physical moment of inertia tensor of the spherical polygon shell over the Earth. print(polygon.inertia(Re,rho)) # ### Points are inside a polygon? # Determine if a single point or multiple points are inside a given spherical polygon. # #### single point print(polygon.contains_points([75,152])) # #### multiple points print(polygon.contains_points([[-85,130],[35,70]])) # ### Change log # - **1.2.2 — Mar 3, 2021** # - Add the `compactness()` method that reflect the deviation of a polygon from a spherical cap. # - **1.2.1 — Feb 23, 2021** # - Replace the function *create_polygon* for building a spherical polygon object from a 2d array with methods `from_array` and `from_file`. # - **1.2.0 — Mar 20, 2020** # - Add the `perimeter()` method that may calculate the perimeter of a spherical polygon. # - Add the `centroid()` method that may determaine the centroid location for a spherical polygon. # ## Reference # . "Inertia Tensor for MORVEL Tectonic Plates." *ASTRONOMICAL RESEARCH AND TECHNOLOGY* 13.1 (2016). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sklearn import numpy as np from sklearn import datasets iris = datasets.load_iris() X = iris.data[:, [2, 3]] X[:10] Y = iris.target Y[:10] from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=0) X_train[:10] X_test[:10] Y_train[:10] Y_test[:10] from sklearn.preprocessing import StandardScaler sc = StandardScaler() sc.fit(X_train) X_train_std = sc.transform(X_train) X_test_std = sc.transform(X_test) X_train_std[:10] X_test_std[:10] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # École Polytechnique de Montréal # # Département Génie Informatique et Génie Logiciel # INF8460 – Traitement automatique de la langue naturelle # # #### Prof. # #### Chargé de laboratoire: # # TP 3 # # ## Objectifs # # - Implanter des modèles de classification neuronaux # - Utiliser des plongements lexicaux pré-entrainés # # ## Travail à faire # # Comme dans le TP précédent, on va travailler sur l'analyse de sentiment en utilisant les données du [Large Movie Review Dataset](https://ai.stanford.edu/~amaas/data/sentiment/). # # Vous devez utiliser `scikit-learn` pour la régression logistique, et Keras pour les modèles neuronaux. # # Les sections 1, 2 et 3 sont indépendantes. # # Vous êtes libres d'appliquez les pré-traitements que vous jugerez utiles. # ### 1. 
Pré-traitement et lecture de données # + # Ici, vous devez lire vos données et appliquez les pré-traitements que vous jugerez utiles # On fait comme dans le TP 2 # !pip install --user nltk import os import nltk nltk.download('stopwords') nltk.download('punkt') nltk.download('wordnet') seq_train_neg = os.listdir("aclImdb/train/neg") seq_train_pos = os.listdir("aclImdb/train/pos") data_train_neg = [open("aclImdb/train/neg/" + file,encoding="utf-8").read() for file in seq_train_neg] data_train_pos = [open("aclImdb/train/pos/" + file,encoding="utf-8").read() for file in seq_train_pos] data_train = data_train_pos + data_train_neg seq_test_neg = os.listdir("aclImdb/test/neg") seq_test_pos = os.listdir("aclImdb/test/pos") data_test_neg = [open("aclImdb/test/neg/" + file,encoding="utf-8").read() for file in seq_test_neg] data_test_pos = [open("aclImdb/test/pos/" + file,encoding="utf-8").read() for file in seq_test_pos] data_test = data_test_pos + data_test_neg nb_doc_train = len(data_train) nb_doc_test = len(data_test) # + import nltk from nltk.corpus import stopwords stopwords = set(stopwords.words('english')) import re def segmentize(raw_text): """ Segmente un document en phrases. >>> raw_corpus = "Alice est là . Bob est ici" >>> segmentize(raw_text) ["Alice est là .", " ici"] :param raw_text: str :return: list(str) """ return nltk.sent_tokenize(raw_text) def tokenize(sentences): """ Tokenize une liste de phrases en mots. >>> sentences = ["Alice est là ", "Bob est ici"] >>> corpus = tokenize(sentences) >>> corpus_name [ ["Alice", "est", "là "], ["Bob", "est", "ici"] ] :param sentences: list(str), une liste de phrases :return: list(list(str)), une liste de phrases tokenizées """ res = [] for sentence in sentences: res.append(nltk.word_tokenize(sentence)) return res def remove_unvalid_tokens(tokenized_text): """ Remove the stopwords defined in nltk.corpus.stopwords from the given tokenized text. Nous devons enlever certains stopwords à la main. Remove non-alphabetic characters from the text. Remove one-character words from the text. :param tokenized_text: list(list(str)), une liste de listes de tokens :return: list(list(str)), une liste de listes de tokens, sans les tokens invalides """ #On va rajouter le stopword 'br' stopwords.update(["br"]) res = list() for sentence in tokenized_text: sentence_mod = list() for token in sentence: if token.lower() not in stopwords and re.fullmatch("[a-zA-Z]*",token)!=None and len(token) > 1: sentence_mod.append(token.lower()) if sentence_mod != []: res.append(sentence_mod) return res def clean_doc(corpus): tokens_preprocessed = [] for documents in corpus: sentences = segmentize(documents) tokens = tokenize(sentences) tokens_clean = remove_unvalid_tokens(tokens) s = "" for x in tokens_clean: s += " " + " ".join(x) tokens_preprocessed.append(s) return (tokens_preprocessed) # - data_train_clean = clean_doc(data_train) # Dans la suite, vous allez entraîner deux modèles neuronaux : un perceptron multi-couche (MLP pour *multi-layer perceptron*), un réseau récurrent LSTM bi-directionnel. Pour cela, vous devrez utiliser la librairie [Keras](https://keras.io/). # # N'hésitez pas à expérimenter différents hyper-paramètres pour obtenir le meilleur résultat possible sur au moins un de vos réseaux(nombre de couches, dimension des couches, etc.). 
Quelques pistes: # # - optimisation des hyper-paramètres avec validation croisée (tous modèles) # # - réduction de la dimension avec une LSA (MLP) # # - ajout de couches/augmentation de la dimension/dropout ou autres changements d'architecture (MLP ou LSTM) # # - pré-traitement différent (tous modèles) # # # Il est **fortement conseillé** d'utiliser une machine avec GPU pour entraîner ces modèles neuronaux. Vous pouvez utiliser les machines du labo L-4818 ou faire tourner votre notebook sur [Google Colab](https://colab.research.google.com) (gratuit). # ### 2. Multi-layer Perceptron # # **a)** Ici, on vous demande d'entraîner un perceptron multi-couches sur la matrice TF-IDF. Avant l'entraînement, affichez la structure du modèle avec `model.summary()`. Précisez la structure du réseau de neurones (taille et nombre de couches) et les paramètres d'entraînement (optimiseur, nombre d'époques, etc.). # + from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(max_features = 5000) X = vectorizer.fit_transform(data_train) # - import numpy as np import pandas as pd from matplotlib import pyplot as plt plt.style.use('dark_background') from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from sklearn.model_selection import train_test_split from keras.utils import to_categorical from keras.models import Sequential from keras.layers import Dense, Dropout, Embedding, LSTM, GlobalMaxPooling1D, SpatialDropout1D, Bidirectional # + model = Sequential([ Dense(250, input_shape=(5000,), activation="relu"), Dense(1, activation="sigmoid") ]) model.compile( optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"] ) print(model.summary()) # - y_train = [1 for i in range(len(data_train_pos))] + [0 for i in range(len(data_train_neg))] # + from keras.callbacks import EarlyStopping es = EarlyStopping(monitor='val_acc', mode='max',patience=3) history = model.fit(X, y_train, validation_split = 0.1, epochs=20, batch_size=32, callbacks=[es]) # - plt.clf() loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(loss) + 1) plt.plot(epochs, loss, 'g', label='Training loss') plt.plot(epochs, val_loss, 'y', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.clf() acc = history.history['acc'] val_acc = history.history['val_acc'] plt.plot(epochs, acc, 'g', label='Training acc') plt.plot(epochs, val_acc, 'y', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.show() # ### 3. Bi-directional LSTM # # **b)** Toujours avec Keras, entraînez un modèle bi-LSTM sur le corpus d'entraînement. Comme précédemment, affichez la structure du réseau et indiquez les paramètres utiles. # # *Note :* si votre machine supporte CUDA, vous pouvez utiliser `keras.layers.CuDNNLSTM` au lieu de `keras.layers.LSTM` pour des gains de performance significatifs. Sur Google Colab, les environnements avec GPU supportent CUDA. 
def clean_doc_lstm(corpus): tokens_preprocessed = [] for documents in corpus: sentences = segmentize(documents) tokens = tokenize(sentences) tokens_clean = remove_unvalid_tokens(tokens) s = "" for x in tokens_clean: s += " " + " ".join(x) tokens_preprocessed.append(s) return ["".join(x) for x in tokens_preprocessed] X_train = clean_doc_lstm(data_train) # + phrase_len = list(map((lambda p: len(p.split(' '))), X_train)) max_phrase_len = max(phrase_len) print('max phrase len: {0}'.format(max_phrase_len)) plt.figure(figsize = (10, 8)) plt.hist(phrase_len, alpha = 0.2, density = True) plt.xlabel('phrase len') plt.ylabel('probability') plt.grid(alpha = 0.25) # + max_words = 5000 tokenizer = Tokenizer( num_words = max_words, filters = '"#$%&()*+-/:;<=>@[\]^_`{|}~' ) tokenizer.fit_on_texts(X_train) word_index = tokenizer.word_index X_train = tokenizer.texts_to_sequences(X_train) X_train = pad_sequences(X_train, maxlen = 400) y = to_categorical(y_train) # + # On va ici utiliser une couche embedding non initialisée afin de réaliser un # codage one_hot sans subir une explosion de mémoire occupée lstm_model = Sequential([ Embedding(5000, 32, input_length=400, trainable= True), Bidirectional(LSTM(32,input_shape=(400,32),dropout = 0.3, recurrent_dropout = 0.3)), Dense(2, activation="sigmoid")]) lstm_model.compile( optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"] ) print(lstm_model.summary()) # + es = EarlyStopping(monitor='val_acc', mode='max',patience=3) h = lstm_model.fit( X_train, y, validation_split = 0.1, epochs = 16, batch_size = 256, callbacks=[es] ) # - plt.clf() loss = h.history['loss'] val_loss = h.history['val_loss'] epochs = range(1, len(loss) + 1) plt.plot(epochs, loss, 'g', label='Training loss') plt.plot(epochs, val_loss, 'y', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.clf() acc = h.history['acc'] val_acc = h.history['val_acc'] plt.plot(epochs, acc, 'g', label='Training acc') plt.plot(epochs, val_acc, 'y', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.show() # **c) Word Embeddings** # # Pour améliorer le modèle précédent, on va utiliser des *word embeddings* (ou plongements vectoriels) pré-entraînés. # # On utilisera un modèle Skip-gram de dimension $d=300$ entraîné sur English Wikipedia, disponible à l'adresse [http://vectors.nlpl.eu/explore/embeddings/en/models/](http://vectors.nlpl.eu/explore/embeddings/en/models/). Dans cette archive, vous trouverez les embeddings dans un fichier `.txt` tel que # - la première ligne du fichier indique le nombre de mots dans le vocabulaire et la dimension des embeddings # - chacune des lignes suivantes est composée d'un mot_étiquette grammaticale suivi des 300 coordonnées de son *embedding*, le tout séparé par des espaces. # # Ainsi, les premières lignes de ce fichier sont : # ``` # 296630 300 # also_ADV -0.010121 -0.045202 -0.065609 ... -0.065423 # one_NUM -0.060427 0.005032 -0.076370 ... -0.107769 # first_ADJ 0.005799 0.024848 0.018902 ... -0.097193 # ... # ``` # # Les étiquettes `_ADV`, `_NUM`, `_ADJ`, etc. indiquent l'étiquette grammaticale du mot et peuvent être supprimées pour ce TP. 
# # *Note :* vous pouvez utiliser le snippet suivant pour télécharger et dézipper automatiquement les embeddings (pratique si vous utilisez une machine distante comme Google Colab) : # ```python # import requests # import io # from zipfile import ZipFile # # res = requests.get("http://link/to/archive.zip") # with ZipFile(io.BytesIO(res.content)) as z: # z.extractall("extract/to/dir/") # ``` # # # Implémentez un modèle bi-LSTM qui utilisent ces *embeddings* pour représenter les mots d'une phrase. Vous pourrez utiliser le layer [Embedding](https://keras.io/layers/embeddings/) de Keras. # Pour utiliser le modèle skip-gram, nous nous sommes basés sur le code suivant : https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html # + dico_embed = dict() with open("embeddings/model.txt","r",encoding="utf-8") as f: firstline = f.readline() for line in f: values = line.split() word = values[0].split("_")[0] coefs = np.asarray(values[1:], dtype = "float32") dico_embed[word] = coefs temp = firstline.split() nb_mots_embed = int(temp[0]) dim_embed = int(temp[1]) # - mat_embed = np.zeros((len(word_index)+1,dim_embed)) for word,i in word_index.items(): temp = dico_embed.get(word) if temp is not None: mat_embed[i] = temp # + lstm_model_embed = Sequential([ Embedding(len(word_index)+1, dim_embed, input_length=400, weights=[mat_embed], trainable= False), Bidirectional(LSTM(32,input_shape=(max_phrase_len,32),dropout = 0.3, recurrent_dropout = 0.3)), Dense(2, activation="sigmoid")]) lstm_model_embed.compile( optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"] ) print(lstm_model.summary()) # + es = EarlyStopping(monitor='val_acc', mode='max',patience=3) h_embed = lstm_model_embed.fit( X_train, y, validation_split = 0.1, epochs = 16, batch_size = 256, callbacks=[es] ) # - plt.clf() loss = h_embed.history['loss'] val_loss = h_embed.history['val_loss'] epochs = range(1, len(loss) + 1) plt.plot(epochs, loss, 'g', label='Training loss') plt.plot(epochs, val_loss, 'y', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.clf() acc = h_embed.history['acc'] val_acc = h_embed.history['val_acc'] plt.plot(epochs, acc, 'g', label='Training acc') plt.plot(epochs, val_acc, 'y', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.show() # ### 5. Évaluation # **d)** Indiquez les performances de chacun de vos modèles. Comparez avec les modèles Naive Bayes et character-LM du TP précédent et commentez. # # Quel est votre meilleur modèle ? 
# + X_test_perceptron = vectorizer.transform(data_test) y_test_perceptron = [1 for i in range(len(data_test_pos))] + [0 for i in range(len(data_test_neg))] X_test_lstm = clean_doc_lstm(data_test) tokenizer.fit_on_texts(X_test_lstm) X_test_lstm = tokenizer.texts_to_sequences(X_test_lstm) X_test_lstm = pad_sequences(X_test_lstm, maxlen = 400) y_test_lstm = to_categorical(y_test_perceptron) # + from sklearn.linear_model import LogisticRegression classifier = LogisticRegression() classifier.fit(X, y_train) score = classifier.score(X_test_perceptron, y_test_perceptron) print("Accuracy:", score) # + results_perceptron = model.evaluate(X_test_perceptron,y_test) results_lstm = lstm_model.evaluate(X_test_lstm,y_test_lstm) results_lstm_embed = lstm_model_embed.evaluate(X_test_lstm,y_test_lstm) # - print("PERCEPTRON : loss = ",results_perceptron[0],", accuracy = ",results_perceptron[1]) print("BI-LSTM : loss = ",results_lstm[0],", accuracy = ",results_lstm[1]) print("BI-LSTM EMBEDDED : loss = ",results_lstm_embed[0],", accuracy = ",results_lstm_embed[1]) # Pour rappel, l'accuracy du modèle n-gram est de 0.82 sur les mêmes données. # Les modèles les plus simples (perceptron et régression logistique) obtiennent les meilleurs scores, ce qui est plutôt surprenant. # # # On peut imputer les basses performances des modèles sophistiqués au fait que nous n'avons pas exploré l'ensemble des hypers-paramètres, et au fait que les modèles ont clairement overfittés. On aurait utiliser une couche Drop-Out pour limiter cet overfitting. # On aurait pu plus particulièrement augmenter le nombre de couches dans les modèles LSTM. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="PCQtQXVz2lnR" colab_type="code" colab={} from textblob import TextBlob # + id="8Y-um8TS26vb" colab_type="code" colab={} frase = "Python é ótimo para Machine Learning" tb = TextBlob(frase) tb_en = tb.translate(to='en') # + id="SO0WSoTm3KXJ" colab_type="code" outputId="bea74843-4d05-495c-bfce-f9023b2e36a3" colab={"base_uri": "https://localhost:8080/", "height": 34} tb_en.sentiment.polarity # + id="1L2HbLII3lzT" colab_type="code" outputId="3dc5b2aa-ffb4-43e7-e3c0-f0d8ef95c407" colab={"base_uri": "https://localhost:8080/", "height": 34} tb_en # + id="4UiOGnIy31sS" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="rwsDD6KizieW" # ## Altres maneres d'estructurar les dades # # Al principi del tema hem vist una estructura de dades que organitza la informació de manera ordenada i que és # mutable, és a dir una llista. # # En aquesta part del capítol veurem: # # * **Tuples**: molt similars a les llistes sense mutabilitat. # * **Diccionaris**: una estructura no lineal que relaciona parells de dades. # * **List comprehensions**: eina que crea una nova llista basada en una altra llista, en una sola línia i de manera llegible. # * **Funcions lambda**: funcions anònimes, és a dir sense nom. # * **Map, filter**: funcions de Python que usen les funcions anònimes i actuen sobre les llistes. # * **Yield**: Funcions generadores de seqüències. # - # ### Tuples # # Python proporciona un altre tipus de col·lecció ordenada d'objectes, anomenada tupla. 
# # Les tuples són idèntiques a les llistes en tots els aspectes, tret de les propietats següents: # # * Les tuples es defineixen especificant els elements entre parèntesis, en comptes de claudàtors. # * Les tuples **són immutables**. # # + # Creació d'una tupla i us amb funcions vocals = ('a', 'e', 'i', 'o', 'u') # Operadors hi_es = 'u' in vocals print(hi_es) long = len(vocals) print(long) maxim = max(vocals) print(maxim) # Indexacio vocals[0] = 'A' # - # Les tuples es poden desempaquetar, és a dir, transformar-les en variables individuals: # + a, e, i, o, u = vocals print(a + " - " + e + " - " + i) # + x = 4 y = 5 print(str(x) + " - " + str(y)) # - aei, o , u = vocals # ens dona un error ja que hauria de posar 5 variables i no 3 print(aei) a = 4 b = 5 tupla_int = (a, b) print(tupla_int) # Això ens és molt pràctic per construir funcions _que retornen més d'un valor_ , no és una pràctica altament recomanable # però si que pot ser molt útil en algunes situacions. # + def retorna2(): a = 3 b = 4 return (a, b) primer, segon = retorna2() print(primer) print(segon) # - # **Per què utilitzar una tupla en lloc d'una llista?** # # Quan no volem que les dades es puguin modificar. Si es pretén que els valors de la col·lecció es mantinguin constants # durant tota la vida del programa, cal usar una tupla en lloc d'una llista, ja que protegirà les dades contra una possible # modificació accidental. # + [markdown] colab_type="text" id="J_9IX_vk4vTk" # ### Diccionaris # # Un diccionari consisteix en una col·lecció **no ordenada** de parells de clau-valor. # # A diferència de les seqüències, indexades per un rang de nombres, els diccionaris són indexats per claus, que poden ser # de **qualsevol tipus immutable**; els *strings* i els nombres sempre poden ser claus. # # Les tuples es poden utilitzar com a claus. No podem utilitzar llistes com a claus, ja que les llistes es poden modificar. # # El millor és pensar en les claus del diccionari com un **conjunt**, on a cada element del conjunt li correspon un valor. # # A continuació teniu les operacions bàsiques: # + colab={"base_uri": "https://localhost:8080/", "height": 92} colab_type="code" id="sMFkcWrwzEeN" outputId="9dc0134b-58d6-4ff6-c1fa-3b7d23971dff" # Creacio d'un diccionari nou dicc = dict() # Fixau-vos que aixo es el constructor de la classe diccionari # Tambe es podria construir aixi: #dicc2 = {} dicc = {43142512: "", 44216793:"", 44444444: ""} print(dicc) # - # #### Accedint als elements d'un diccionari # # En aquesta estructura no accedim als elements amb un índex, sinó que es fa amb el valor d'una clau. # Donada una clau obtenció del seu valor: nom = dicc[44444444] print(nom) # + diccionari_strings = {"biel": "professor", "arnau": "alumne"} diccionari_strings["arnau"] # - # #### Mutabilitat # # Un cop hem creat un diccionari, podem afegir, eliminar, canviar i moure elements a voluntat. `Python` ofereix una àmplia # gamma de maneres d'operar amb els diccionaris. # # # #### Modificació d'un valor # # Un únic valor d'un diccionari es pot substituir, de manera molt similar a com modifiquem una variable, però ara hem # d'especificar quin dels valors volem modificar, ho fem mitjançant la seva clau. # + #Donada una clau podem modificar el seu valor dicc[43142512] = "" nom = dicc[43142512] print(nom) # - # **Afegir informació** # # També podem afegir noves parelles clau-valor de manera dinàmica. 
# + # Insercio d'una parella clau - valor dicc[43142133] = "" # Inserció de noves parelles clau - valor dicc[41142133] = "" dicc[46094245] = "" dicc[40111134] = "" print(dicc) # - # # ### Operadors, funcions i mètodes # # # **Operadors** # # Als diccionaris també podem aplicar l'operador de pertinença ```in``` i el seu modificador ```not``` # + # Consultam pertanyença dni = 40111134 print("Tenim el dni", dni, "? ") pertany = 40111134 in dicc print(pertany) print(dicc[dni]) # - # **Funcions** # # Als diccionaris també podem usar la funció ```len``` que ens diu quants d'elements conté el diccionari. # # També podem usar les funcions ```max``` i ```min``` que ens retornen informació de les claus: # + longitud = len(dicc) print(longitud) valor_maxim = max(dicc) print(valor_maxim) # + [markdown] colab_type="text" id="FxQ5xnR9Btp8" # **Mètodes** # # A més de modificar i afegir elements usant els claudàtors també ho podem fer mitjançant mètodes propis dels # diccionaris. A més, usant els mètodes adients podem obtenir la informació del diccionari en forma de llista, siguin: # les claus, els valors o tuples clau-valor: # # Modificació: # # * ```clear```: Elimina totes les parelles clau-valor del diccionari. # * ```get```: Ens retorna el valor de la clau que passem per paràmetre, si la clau no existeix, retorna Error. # * ```pop```: Elimina la clau del diccionari i ens torna el seu valor, si no existeix retorna Error. # # + ## get clau = 40111134 valor = dicc.get(clau) print(valor) # pop #clau2 = 43142133 valor = dicc.pop(clau2) # error si una clau no existeix print(valor) nombre_elements = len(dicc) print(nombre_elements) # clear #dicc.clear() print(dicc) # - # Consulta: # # * ``` keys```: mètode que ens retorna una llista amb el conjunt de claus que hi ha al diccionari. # * ``` values```: mètode que ens retorna una llista amb els valors que hi ha al diccionari. # * ``` items```: mètode que ens retorna una llista de tuples. Cada tupla té una clau i el seu valor corresponent. # # És important destacar 2 coses: # # * Les llistes que obtenim de les operacions anteriors, poden no seguir cap ordre. # * Obtenir llistes ens dóna la possibilitat d'iterar sobre elles (usar un `for`). # + colab={"base_uri": "https://localhost:8080/", "height": 167} colab_type="code" id="KeG2bEncBuJ6" outputId="0070bc2e-f1d7-4512-9370-4311ccb34494" #Del nostre diccionari en podem obtenir les claus amb el metode keys() print("Claus", end= " -> ") print(dicc.keys()) #Podem tenir tots els valors print("Valors", end= " -> ") print(dicc.values()) #Tambe podem obtenir totes les parelles clau - valor en format llista de tuples print("Parelles", end= " -> ") print(dicc.items()) print("Iteracions") print("Iteracio sobre les claus d'un diccionari: ") for clau in dicc.keys(): print(clau, end= " ") print("") print("Iteracio sobre les claus valors d'un diccionari: ") for item in dicc.items(): clau, valor = item # Desempaquetam els valors d'una tupla print(valor, end= " ") #NOTA: Aquest for es pot fer d'almanco una altra manera mes: desempaquetar a l'encapçalament for clau, valor in dicc.items(): print(clau) # - # Les llistes i els diccionaris són dos dels tipus Python més usats. Com heu vist, tenen diverses similituds, però # difereixen en com s’accedeix als seus elements. S'accedeix als elements de les llistes mitjançant un índex numèric en # funció de l'ordre i al diccionari s'hi accedeix mitjançant claus. # # A causa d'aquesta diferència, les llistes i diccionaris són adequats per a circumstàncies diferents. 
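# To make this closing comparison concrete, here is a small sketch (with illustrative data) contrasting positional access in a list with key-based access in a dictionary:

# +
# The same marks stored in two different structures
marks_list = [7.5, 9.0, 6.0]                          # accessed by position
marks_dict = {"ana": 7.5, "biel": 9.0, "carla": 6.0}  # accessed by key

print(marks_list[1])       # 9.0 -> we must remember that position 1 belongs to "biel"
print(marks_dict["biel"])  # 9.0 -> the key itself says what we are asking for
# -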
# + [markdown] colab_type="text" id="pceCDfOjFFKO" # # ### List comprehension # # Ens proporcionen una manera concisa de crear llistes. # # Les aplicacions més habituals són la creació de noves llistes en què cada element és el resultat d’alguna operació # aplicada a cada membre d’una altra seqüència o per crear una subseqüència d'aquells elements que satisfan una # determinada condició. # # nova_llista = **[** expresio **for** element **in** iterable **]** # # Per exemple, suposam que volem crear una llista dels quadrats dels nombres entre 0 i 10: # + colab={"base_uri": "https://localhost:8080/", "height": 36} colab_type="code" id="dTToOzn2I3O4" outputId="83566706-6dc4-4cad-883f-8f88a6ecfd6f" # Manera tradicional quadrats = [] for x in range(0, 10): quadrats.append(x**2) print(quadrats) # La manera de generar aquesta mateixa llista amb list comprehension: llista_original = (0,1,2,3,4,5,6,7,8,9) quadrats_2 = [x**2 for x in llista_original] print(quadrats_2) # + [markdown] colab_type="text" id="_BYeJj1hK-VF" # A continuació teniu varis exemples del seu ús: # + colab={"base_uri": "https://localhost:8080/", "height": 111} colab_type="code" id="wn0LQpjBLCF5" outputId="e697c11f-cdd5-45bd-e26a-2a8634d36e47" vec = [-4, -2, 0, 2, 4] # donada una llista, multiplicam per dos cada un dels seus valors r1 = [x*2 for x in vec] print(r1) # - # nova_llista = **[** expresio **for** element **in** iterable **if** condicio **]** # ens quedam amb els valors positius r2 = [x for x in vec if x >= 0] print(r2) # podem aplicar funcions a cada un dels elements r3 = [abs(x) for x in vec] print(r3) # + # tenim molta flexibilitat, per exemple: r5 = [(x, x**2) for x in range(6)] print(r5) nombre, quadrat = r5[len(r5)-1] print(nombre) print(quadrat) # + [markdown] colab_type="text" id="jKlpTcVS72z2" # ### La funció *zip* # # La funció `zip` pren una o més seqüències i combina els elements corresponents de les seqüències en una tupla. Es # deté quan s'esgota la seqüència més curta. # + colab={"base_uri": "https://localhost:8080/", "height": 167} colab_type="code" id="-fGNMN7_77WW" outputId="f6f89a12-23b0-490b-f4f7-05e211a33732" l = list(range(18)) l2 = list(range(6)) z = list(zip(l,l2)) print("Resultat d'executar zip") print(z) for element in z: print(str(element[0]) + " - " + str(element[1])) # + # Exemple sumar els elements de dues llistes import random # Agafam 6 mostres d'una llista que te els elements de 0 a 99 l3 = random.sample(range(100), 6) print("Llista generada aleatoriament") print(l3) print("Llista simple") print(l) zipat = zip(l,l3) print(list(zipat)) # + # En aquest exemple aplicam un list comprehension a la funcio zip suma = [x + y for x, y in zip(l, l3)] print("Suma") print(suma) # + longitud = len(l3)-1 sumatori = [] for i in range(0, longitud): sumat = l[i] + l3[i] sumatori.append(sumat) print(sumatori) # - # **Exercicis** # # 1. Donada una llista amb els nombres d'1 a 100, obtenir una llista amb els senars. # 2. Donada una llista amb tots els nombres d'1 a 7000, eliminar tots els múltiples de 7. # 3. Crear un llista de tuples que tingui els primers 50 nombres parells, juntament amb els 50 primers nombres que no # són múltiple de 7. 
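# Before the solutions below, here is one possible sketch for exercise 3 (variable names are illustrative): pair the first 50 even numbers with the first 50 naturals that are not multiples of 7, using a list comprehension plus `zip`.

# +
first_50_even = [x for x in range(1, 200) if x % 2 == 0][:50]
first_50_not_mult_7 = [x for x in range(1, 200) if x % 7 != 0][:50]

pairs = list(zip(first_50_even, first_50_not_mult_7))
print(pairs[:5])
# -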
# + ### 1 llista_1_100 = range(1, 101) llista_senars = [x for x in llista_1_100 if x % 2 == 1] #print(llista_senars) #2 llista_7 = [x for x in range(1, 7001) if x % 7 != 0] #print(llista_7) #3 zippat = zip(llista_senars, llista_7) print(list(zippat)) # + [markdown] colab_type="text" id="-v5K7zkxSNYv" # ### Funcions lambda # # A `Python`, utilitzem la paraula clau `lambda` per declarar una funció anònima. Una funció anònima fa referència a # una funció declarada sense nom. # # Tot i que sintàcticament semblen diferents, les funcions lambda es comporten de la mateixa manera que les funcions # regulars que es declaren usant la paraula clau `def`. # # Les següents són les característiques de les funcions lambda de Python: # # * Una funció lambda pot prendre qualsevol nombre d’arguments. # * De forma sintàctica, les funcions lambda només **es limiten a una sola expressió**. Recordar que una expressió és una # peça de codi executada per la funció lambda. # # La seva sintaxi és: # ```lambda argument(s): expressio``` # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="LtUfSRUBUdIr" outputId="f53aee43-7652-4b83-8df7-7496c717e6ef" def divisible2(num): return num % 2 == 0 # es equivalent a: f = lambda num: num % 2 == 0 print(f(22)) # no es necessari ni tan sols assignar un identificador print((lambda num: num % 2 == 0)(5)) # Poden tenir el nombre de paràmetres que volguem s = lambda x,y,z: x+y+z print(s(5,5,10)) # + [markdown] colab_type="text" id="nRya-1-ZXWVu" # Les funcions anònimes (`lambda`) s’utilitzen quan necessitem una funció durant un període curt de temps. # # Normalment s'utilitza quan es vol passar una funció com a argument per a funcions d'ordre superior, és a dir, # funcions que prenen altres funcions com a arguments tal com veurem a continuació: # + [markdown] colab_type="text" id="RuGny5xSk_y7" # ### Map i Filter # # Aquestes dues funcions ens permeten aplicar funcions a objectes iterables (llistes, tuples, strings o diccionaris) # # #### Map # # La funció `map` és una funció del nucli de `Python` que com a paràmetres rep una funció i una o més llistes. La # sintaxi de la funció map és la següent: # # `map(funcio, iterable_1, iterable_2, ...) ` # # # `map ` aplica a cada element de l'iterable la funció que rep com a primer paràmetre. I ens retorna un iterable que # podem transformar de manera molt senzilla en una llista (usant la funció` list`). # # Anem a veure un exemple: # + colab={"base_uri": "https://localhost:8080/", "height": 92} colab_type="code" id="6AyNHHpvnxfz" outputId="8754b6fc-fda5-496f-f91c-7ae0ca77512c" llista = list(range(10)) #definim una funcio lambda suma_3 = lambda x: x + 3 # aplicam aquesta funcio a una llista resultat = list(map(suma_3, llista)) #print(llista) #print(resultat) llista_nova = [x + 3 for x in llista] #print(llista_nova) # Anem a sumar els valors de dues llistes #print("Suma dues llistes") suma_2_ll = map(lambda x,y: x+y, llista, resultat) # la funcio esta declarada directament suma_2_ll = list(suma_2_ll) print(llista) print(resultat) print(suma_2_ll) resultat_2 = [x + y for x, y in zip(llista,resultat)] print(resultat_2) # + [markdown] colab_type="text" id="QTpO63o7rnlp" # #### Filter # # Aquesta funció també pertany al nucli de `Python`, la seva sintaxi és la següent: # # `filter(funcio, iterable) ` # # La funció que passem ha de retornar un valor de tipus booleà. L’objecte serà cridat per a cada ítem de l'iterable per # fer-ne l’avaluació. Evidentment per cada element tindrem un valor `True` o `False`. 
Aquesta funció només pot tenir un # sol paràmetre iterable # # La funció `filter` retorna una llista d'aquells elements que s'avaluen com a certs per la funció. # # Anem a veure un exemple: # + colab={"base_uri": "https://localhost:8080/", "height": 92} colab_type="code" id="Nod-piwTnvIu" outputId="dc949943-d8bb-43cc-b5a0-fb81f5a99ee3" nombres = [2, 6, 8, 10, 11, 4, 12, 7, 13, 17, 0, 3, 21] filtratge = filter(lambda num: num > 7, nombres) print("Eliminam els valors majors a 7") print(list(filtratge)) filtratge_2 = [ y for y in nombres if y > 7] print(filtratge_2) # + [markdown] colab_type="text" id="mJdO3pvwMwBh" # ### Yield # # Els generadors són un tipus d’iterador els elements dels quals només podem usar una vegada. Els generadors no # emmagatzemen tots els valors de l'iterador en la memòria, els generen sobre la marxa: # # `yield` és una paraula clau que s'utilitza com a `return` d'una funció, excepte que la funció retornarà un generador. # # Vegem un exemple en codi: # + colab={"base_uri": "https://localhost:8080/", "height": 92} colab_type="code" id="PgroqM8QdkDm" outputId="a40bc37c-3575-497c-af4d-3d5995f5a8c7" #Primer exemple def generador_simple(): print("Inici") yield 1 yield 2 yield 3 yield 25 generador = generador_simple() print(next(generador)) print(next(generador)) print(next(generador)) nou_valor = next(generador) print(nou_valor) # + colab={"base_uri": "https://localhost:8080/", "height": 73} colab_type="code" id="PXsRCBkrMvpk" outputId="b3411b34-790d-4def-c5bd-36e7b623958d" # Generador en un recorregut usant un for def generador_quadrats(maxim): i = 0 while i <= maxim: yield i*i i = i +1 generador = generador_quadrats(10) y = 0 while y < 4: valor = next(generador) print(valor) y = y + 1 nou_valor = next(generador) print(nou_valor) # + [markdown] colab_type="text" id="ez22pWqpSWds" # És imporant entendre que quan es crida per primer cop a la funció, el codi de la funció no s’executa. La funció només # retorna l’objecte del generador. La primera vegada que usem el nostre generador, s'executarà el codi de la nostra # funció des del principi fins que arribi a `yield`, i tornarà el primer valor del bucle. A continuació, cada una de # les altres trucades del nostre generador executarà el bucle de la funció una vegada més i retornarà el valor següent, # fins que no hi hagi cap valor per tornar. # # El generador es considera buit una vegada que la funció s’executa, però ja no es produeix un nou valor. Pot ser que # el bucle hagi acabat, o perquè ja no satisfà una certa condició. # + colab={"base_uri": "https://localhost:8080/", "height": 1812} colab_type="code" id="pjidTpM_W1-3" outputId="2397e2a0-bbed-4ae4-f25c-ad9cdeed196a" # Aquí tenim un exemple que ens permet generar tants elements de la successio de # Fibonacci com necessitem. 
def Fibo(): pre1 = 1 pre2 = 1 suma = 1 yield suma while True: yield suma suma = pre1 + pre2 pre1 = pre2 pre2 = suma generador = Fibo() i = 0 while i < 33: print(next(generador)) i += 1 # + colab={} colab_type="code" id="KGv_TuNnPVtz" l = list(range(0,101)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Essentials import os, sys, glob import pandas as pd import numpy as np import nibabel as nib import scipy.io as sio from tqdm import tqdm # Stats import scipy as sp from scipy import stats import statsmodels.api as sm import pingouin as pg # Plotting import seaborn as sns import matplotlib.pyplot as plt plt.rcParams['svg.fonttype'] = 'none' # - sys.path.append('/Users/lindenmp/Google-Drive-Penn/work/research_projects/neurodev_cs_predictive/1_code/') from func import set_proj_env, my_get_cmap, rank_int parc_str = 'schaefer' # 'schaefer' 'lausanne' 'glasser' parc_scale = 200 # 200/400 | 125/250 | 360 edge_weight = 'streamlineCount' parcel_names, parcel_loc, drop_parcels, num_parcels = set_proj_env(parc_str = parc_str, parc_scale = parc_scale, edge_weight = edge_weight) if parc_str == 'schaefer' or parc_str == 'glasser': exclude_str = 't1Exclude' else: exclude_str = 'fsFinalExclude' # ### Setup directory variables print(os.environ['PIPELINEDIR']) if not os.path.exists(os.environ['PIPELINEDIR']): os.makedirs(os.environ['PIPELINEDIR']) outputdir = os.path.join(os.environ['PIPELINEDIR'], '0_get_sample', 'out') print(outputdir) if not os.path.exists(outputdir): os.makedirs(outputdir) figdir = os.path.join(os.environ['OUTPUTDIR'], 'figs') print(figdir) if not os.path.exists(figdir): os.makedirs(figdir) # # Load in demographic and symptom data # + # LTN and Health Status health = pd.read_csv(os.path.join(os.environ['DATADIR'], 'external/pncDataFreeze20170905/n1601_dataFreeze/health/n1601_health_20170421.csv')) # Protocol prot = pd.read_csv(os.path.join(os.environ['DATADIR'], 'external/pncDataFreeze20170905/n1601_dataFreeze/neuroimaging/n1601_pnc_protocol_validation_params_status_20161220.csv')) # T1 QA t1_qa = pd.read_csv(os.path.join(os.environ['DATADIR'], 'external/pncDataFreeze20170905/n1601_dataFreeze/neuroimaging/t1struct/n1601_t1QaData_20170306.csv')) # DTI QA dti_qa = pd.read_csv(os.path.join(os.environ['DATADIR'], 'external/pncDataFreeze20170905/n1601_dataFreeze/neuroimaging/dti/n1601_dti_qa_20170301.csv')) # Rest QA rest_qa = pd.read_csv(os.path.join(os.environ['DATADIR'], 'external/pncDataFreeze20170905/n1601_dataFreeze/neuroimaging/rest/n1601_RestQAData_20170714.csv')) # Demographics demog = pd.read_csv(os.path.join(os.environ['DATADIR'], 'external/pncDataFreeze20170905/n1601_dataFreeze/demographics/n1601_demographics_go1_20161212.csv')) # Brain volume brain_vol = pd.read_csv(os.path.join(os.environ['DATADIR'], 'external/pncDataFreeze20170905/n1601_dataFreeze/neuroimaging/t1struct/n1601_ctVol20170412.csv')) # Clinical diagnostic clinical = pd.read_csv(os.path.join(os.environ['DATADIR'], 'external/pncDataFreeze20170905/n1601_dataFreeze/clinical/n1601_goassess_psych_summary_vars_20131014.csv')) clinical_ps = pd.read_csv(os.path.join(os.environ['DATADIR'], 'external/pncDataFreeze20170905/n1601_dataFreeze/clinical/n1601_diagnosis_dxpmr_20170509.csv')) # GOASSESS Bifactor scores goassess = pd.read_csv(os.path.join(os.environ['DATADIR'], 'external/GO1_clinical_factor_scores_psychosis_split_BIFACTOR.csv')) # merge df = 
health df = pd.merge(df, prot, on=['scanid', 'bblid']) # prot df = pd.merge(df, t1_qa, on=['scanid', 'bblid']) # t1_qa df = pd.merge(df, dti_qa, on=['scanid', 'bblid']) # dti_qa df = pd.merge(df, rest_qa, on=['scanid', 'bblid']) # rest_qa df = pd.merge(df, demog, on=['scanid', 'bblid']) # demog df = pd.merge(df, brain_vol, on=['scanid', 'bblid']) # brain_vol df = pd.merge(df, clinical, on=['scanid', 'bblid']) # clinical df = pd.merge(df, clinical_ps, on=['scanid', 'bblid']) # clinical df = pd.merge(df, goassess, on=['bblid']) # goassess print(df.shape[0]) df.set_index(['bblid', 'scanid'], inplace = True) # - # # Filter subjects # + # 1) Primary sample filter df = df[df['healthExcludev2'] == 0] print('N after initial exclusion:', df.shape[0]) # 2) T1 exclusion df = df[df[exclude_str] == 0] print('N after T1 exclusion:', df.shape[0]) # 3) Diffusion exclusion df = df[df['b0ProtocolValidationStatus'] == 1] df = df[df['dti64ProtocolValidationStatus'] == 1] df = df[df['dti64Exclude'] == 0] print('N after Diffusion exclusion:', df.shape[0]) # - df['dti64QAManualScore'].unique() np.sum(df['averageManualRating'] == 2) np.sum(df['dti64QAManualScore'] == 2) # Convert age to years df['ageAtScan1_Years'] = np.round(df.ageAtScan1/12, decimals=1) # find unique ages age_unique = np.unique(df.ageAtScan1_Years) print('There are', age_unique.shape[0], 'unique age points') # ## Symptom dimensions phenos = ['Overall_Psychopathology','Psychosis_Positive','Psychosis_NegativeDisorg'] print(phenos) for pheno in phenos: if df.loc[:,pheno].isna().any(): print('NaN replacement: ', pheno) x = np.nanmedian(df.loc[:,pheno]) df.loc[df.loc[:,pheno].isna(),pheno] = x # + # Normalize rank_r = np.zeros(len(phenos),) for i, pheno in enumerate(phenos): # normalize regional metric # x = sp.stats.yeojohnson(df.loc[:,pheno])[0] x = rank_int(df.loc[:,pheno]) # check if rank order is preserved rank_r[i] = sp.stats.spearmanr(df.loc[:,pheno],x)[0] # store normalized version df.loc[:,pheno] = x print(np.sum(rank_r < 1)) # - df.loc[:,phenos].var() # ## Export header = ['squeakycleanExclude','ageAtScan1', 'ageAtScan1_Years','sex','race2','handednessv2', 'averageManualRating', 'dti64QAManualScore', 'restProtocolValidationStatus', 'restExclude', 'dti64MeanAbsRMS','dti64MeanRelRMS','dti64MaxAbsRMS','dti64MaxRelRMS', 'dti64Tsnr', 'dti64Outmax', 'dti64Outmean', 'mprage_antsCT_vol_TBV', 'averageManualRating', 'goassessSmryMood', 'goassessSmryMan', 'goassessSmryDep', 'goassessSmryEat', 'goassessSmryBul', 'goassessSmryAno', 'goassessSmryAnx', 'goassessSmryGad', 'goassessSmrySep', 'goassessSmryPhb', 'goassessSmrySoc', 'goassessSmryPan', 'goassessSmryAgr', 'goassessSmryOcd', 'goassessSmryPtd', 'goassessSmryPsy', 'goassessSmryDel', 'goassessSmryHal', 'goassessSmryHalAv', 'goassessSmryHalAs', 'goassessSmryHalVh', 'goassessSmryHalOh', 'goassessSmryHalTh', 'goassessSmryBeh', 'goassessSmryAdd', 'goassessSmryOdd', 'goassessSmryCon', 'goassessSmryPrimePos1', 'goassessSmryPrimeTot', 'goassessSmryPrimePos2', 'goassessSmryPsychOverallRtg', 'goassessDxpmr4', 'Overall_Psychopathology','Psychosis_Positive','Psychosis_NegativeDisorg'] df.to_csv(os.path.join(outputdir, exclude_str+'_df.csv'), columns = header) # # Plots # + if not os.path.exists(figdir): os.makedirs(figdir) os.chdir(figdir) sns.set(style='white', context = 'paper', font_scale = 1) cmap = my_get_cmap('pair') phenos_label_short = ['Ov. Psych.', 'Psy. (pos.)', 'Psy. 
(neg.)'] phenos_label = ['Overall Psychopathology','Psychosis (Positive)','Psychosis (Negative)'] # - # ## Age # + f, axes = plt.subplots(1,2) f.set_figwidth(6.5) f.set_figheight(2.5) colormap = sns.color_palette("pastel", 2) sns.distplot(df.loc[:,'ageAtScan1_Years'], bins=20, hist=True, kde=False, rug=False, hist_kws={"histtype": "step", "linewidth": 2, "alpha": 1}, color=list(cmap[0]), ax = axes[0]); axes[0].set_xlabel('Age (years)'); axes[0].set_ylabel('Number of participants'); axes[0].set_xticks(np.arange(np.min(np.round(age_unique,0)), np.max(np.round(age_unique,0)), 2)) # set width of bar barWidth = 0.25 # Sex (1 = male, 2 = female) y_train = [np.sum(df.loc[:,'sex'] == 1), np.sum(df.loc[:,'sex'] == 2)] r1 = np.arange(len(y_train))+barWidth/2 r2 = [x + barWidth for x in r1] axes[1].bar([0,0.5], y_train, width = barWidth, color = cmap[0]) axes[1].set_xlabel('Sex') # axes[1].set_ylabel('Number of participants') axes[1].set_xticks([0,0.5]) axes[1].set_xticklabels(['Male', 'Female']) f.savefig('age_distributions.png', dpi = 300, bbox_inches = 'tight', pad_inches = 0) # - # ## Symptom dimensions # + df_rc = pd.melt(df, value_vars = phenos) f, ax = plt.subplots() f.set_figwidth(2.5) f.set_figheight(4) ax = sns.violinplot(y='variable', x='value', data=df_rc, split=True, scale='width', inner = 'quartile', orient = 'h', palette = 'Pastel2') # ax.get_legend().remove() ax.set_yticklabels(phenos_label_short) ax.set_ylabel('Psychopathology dimension') ax.set_xlabel('Score') f.savefig('symptoms_distributions.png', dpi = 300, bbox_inches = 'tight', pad_inches = 0) # - # ### Export sample for FC gradients # 4) rs-fMRI exclusion df = df[df['restProtocolValidationStatus'] == 1] df = df[df['restExclude'] == 0] print('N after rs-fMRI exclusion:', df.shape[0]) df.to_csv(os.path.join(outputdir, exclude_str+'_df_gradients.csv'), columns = header) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] colab_type="text" id="-8-trVo__vRE" # _Lambda School Data Science_ # # # Make Explanatory Visualizations # # ### Objectives # # - identify misleading visualizations and how to fix them # - use Seaborn to visualize distributions and relationships with continuous and discrete variables # - add emphasis and annotations to transform visualizations from exploratory to explanatory # - remove clutter from visualizations # # ### Links # # - [How to Spot Visualization Lies](https://flowingdata.com/2017/02/09/how-to-spot-visualization-lies/) # - [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary) # - [Choosing a Python Visualization Tool flowchart](http://pbpython.com/python-vis-flowchart.html) # - [Searborn example gallery](http://seaborn.pydata.org/examples/index.html) & [tutorial](http://seaborn.pydata.org/tutorial.html) # - [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/) # - [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked) # - [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/) # + [markdown] id="s-24T844-8qv" colab_type="text" # # Avoid Misleading Visualizations # # Did you find/discuss any interesting misleading visualizations in your Walkie Talkie? 
# + [markdown] id="Qzxt9ntsNjs0" colab_type="text" # ## What makes a visualization misleading? # # [5 Ways Writers Use Misleading Graphs To Manipulate You](https://venngage.com/blog/misleading-graphs/) # + [markdown] id="q7_DUiENNvxk" colab_type="text" # ## Two y-axes # # # # Other Examples: # - [Spurious Correlations](https://tylervigen.com/spurious-correlations) # - # - # - # + [markdown] id="oIijNBDMNv2k" colab_type="text" # ## Y-axis doesn't start at zero. # # # + [markdown] id="ISB2p8vZNv6r" colab_type="text" # ## Pie Charts are bad # # # + [markdown] id="67CsAzu1NwBJ" colab_type="text" # ## Pie charts that omit data are extra bad # # - A guy makes a misleading chart that goes viral # # What does this chart imply at first glance? You don't want your user to have to do a lot of work in order to be able to interpret you graph correctly. You want that first-glance conclusions to be the correct ones. # # # # # # - It gets picked up by overworked journalists (assuming incompetency before malice) # # # # - Even after the chart's implications have been refuted, it's hard a bad (although compelling) visualization from being passed around. # # # # **["yea I understand a pie chart was probably not the best choice to present this data."](https://twitter.com/michaelbatnick/status/1037036440494985216)** # + [markdown] id="FYXmlToEOOTC" colab_type="text" # ## Pie Charts that compare unrelated things are next-level extra bad # # # # + [markdown] id="439l2NTqFDSb" colab_type="text" # ## And finally, a good pie chart # # ![A good pie chart](http://www.graphgraph.com/blog/wp-content/uploads/2011/11/PieChartSmall.png) # + [markdown] id="IwtMQpY_QFUw" colab_type="text" # ## Be careful about how you use volume to represent quantities: # # radius vs diameter vs volume # # # + [markdown] id="tTuAWjSBRsc7" colab_type="text" # ## Don't cherrypick timelines or specific subsets of your data: # # # # Look how specifically the writer has selected what years to show in the legend on the right side. # # # # Try the tool that was used to make the graphic for yourself # # # # + [markdown] id="Xs13S7p4Srme" colab_type="text" # ## Use Relative units rather than Absolute Units # # # + [markdown] id="CIMt5OiuTlrr" colab_type="text" # ## Avoid 3D graphs unless having the extra dimension is effective # # Usually you can Split 3D graphs into multiple 2D graphs # # 3D graphs that are interactive can be very cool. (See Plotly and Bokeh) # # # + [markdown] id="GATMu9IqUlIj" colab_type="text" # ## Don't go against typical conventions # # # + [markdown] id="g6bKgZ0m_ynS" colab_type="text" # # Tips for choosing an appropriate visualization: # + [markdown] id="WtBsVnO4VHiJ" colab_type="text" # ## Use Appropriate "Visual Vocabulary" # # [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary) # + [markdown] id="H_QM9FHqVT7T" colab_type="text" # ## What are the properties of your data? # - Is your primary variable of interest continuous or discrete? # - Is in wide or long (tidy) format? # - Does your visualization involve multiple variables? # - How many dimensions do you need to include on your plot? # # Can you express the main idea of your visualization in a single sentence? # # How hard does your visualization make the user work in order to draw the intended conclusion? # + [markdown] id="5EqXxnJeB89_" colab_type="text" # ## Which Visualization tool is most appropriate? 
# # [Choosing a Python Visualization Tool flowchart](http://pbpython.com/python-vis-flowchart.html) # + [markdown] id="5_na7Oy3NGKA" colab_type="text" # # Making Explanatory Visualizations with Seaborn # + [markdown] id="ORUwQD6F-VYg" colab_type="text" # Today we will reproduce this [example by FiveThirtyEight:](https://fivethirtyeight.com/features/al-gores-new-movie-exposes-the-big-flaw-in-online-movie-ratings/) # # # + colab_type="code" id="ya_w5WORGs-n" outputId="c8babe29-832e-477b-9d15-b685d866466e" colab={"base_uri": "https://localhost:8080/", "height": 355} from IPython.display import display, Image url = 'https://fivethirtyeight.com/wp-content/uploads/2017/09/mehtahickey-inconvenient-0830-1.png' example = Image(url=url, width=400) display(example) # + [markdown] colab_type="text" id="HP4DALiRG3sC" # Using this data: https://github.com/fivethirtyeight/data/tree/master/inconvenient-sequel # + [markdown] colab_type="text" id="HioPkYtUG03B" # Links # - [Strong Titles Are The Biggest Bang for Your Buck](http://stephanieevergreen.com/strong-titles/) # - [Remove to improve (the data-ink ratio)](https://www.darkhorseanalytics.com/blog/data-looks-better-naked) # - [How to Generate FiveThirtyEight Graphs in Python](https://www.dataquest.io/blog/making-538-plots/) # + [markdown] colab_type="text" id="0w_iMnQ6-VoQ" # ## Make prototypes # # This helps us understand the problem # + colab_type="code" id="5uz0eEaEN-GO" outputId="0cc8cdd8-2290-4863-ca58-ba926cca2d09" colab={"base_uri": "https://localhost:8080/", "height": 285} # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd plt.style.use('fivethirtyeight') fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33], index=range(1,11)) fake.plot.bar(color='C1', width=0.9); # + colab_type="code" id="KZ0VLOV8OyRr" outputId="fac02955-6e11-4a5d-83b1-5d9e3720d73a" colab={"base_uri": "https://localhost:8080/", "height": 289} fake2 = pd.Series( [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]) fake2.value_counts().sort_index().plot.bar(color='C1', width=0.9); # + [markdown] colab_type="text" id="mZb3UZWO-q05" # ## Annotate with text # + colab_type="code" id="f6U1vswr_uWp" outputId="22326e29-eead-482a-81dc-a7f28048897b" colab={"base_uri": "https://localhost:8080/", "height": 340} plt.style.use('fivethirtyeight') fig = plt.figure() fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33], index=range(1, 11)) ax = fake.plot.bar(color='#ef7030', width=0.9) ax.set(yticks=range(0, 50, 10), facecolor='#f9f9f9') fig.patch.set_facecolor('#f9f9f9') plt.ylabel('Percent of total votes', fontsize=9, fontweight='bold') plt.xlabel('Rating', fontsize=9, fontweight='bold') ax.text(x=-2, y=46, s="'An Inconvenient Sequel: Truth To Power' is divisive", fontsize=12, fontweight='bold') ax.text(x=-2, y=43, s='IMDb ratings for the film as of Aug. 
29', fontsize=11) ax.tick_params(labelrotation=0) # + id="GY9yqTBqZz-n" colab_type="code" outputId="09510d69-0b23-467c-e0cd-f8072d5f9f6f" colab={"base_uri": "https://localhost:8080/", "height": 355} display(example) # + id="Vt2qQYMUZOEE" colab_type="code" outputId="c7ec5659-c959-471b-c989-6d6c35f5e0d3" colab={"base_uri": "https://localhost:8080/", "height": 170} help(fig.patch.set_facecolor) # + id="2EwOAbwhaZOi" colab_type="code" outputId="308c52ed-6136-4e58-eacb-68c67916f8f1" colab={"base_uri": "https://localhost:8080/", "height": 986} help(ax.text) # + [markdown] colab_type="text" id="x8jRZkpB_MJ6" # ## Reproduce with real data # + colab_type="code" id="3SOHJckDUPI8" colab={} df = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/inconvenient-sequel/ratings.csv') # + colab_type="code" id="cDltXxhC_yG-" outputId="426b8421-8f9d-424a-8f3c-3364cffa7732" colab={"base_uri": "https://localhost:8080/", "height": 344} pd.set_option('display.max_columns', 50) df.head() # + id="9pCJVeP0c-Sl" colab_type="code" outputId="8f75cc05-5e15-40f4-8b67-596daaa31909" colab={"base_uri": "https://localhost:8080/", "height": 34} df.shape # + id="fsOnVdpidBdD" colab_type="code" outputId="bb283adb-fd69-4a28-9a54-ceb36c94b0aa" colab={"base_uri": "https://localhost:8080/", "height": 886} df.sample(1).T # + id="_8w-MzXedkZn" colab_type="code" outputId="837f6da2-cb78-47bc-c980-3903797ff910" colab={"base_uri": "https://localhost:8080/", "height": 493} df.dtypes # + id="2jKWWOUbdxYa" colab_type="code" outputId="bf85ca59-4629-494b-b734-58c86fc18ba0" colab={"base_uri": "https://localhost:8080/", "height": 136} df['timestamp'] = pd.to_datetime(df['timestamp']) df['timestamp'].describe() # + id="Nm7nUjvjeBI2" colab_type="code" outputId="cbdcabeb-958f-4891-a70e-63983b024cf9" colab={"base_uri": "https://localhost:8080/", "height": 3278} df.set_index('timestamp', inplace=True) df['2017-08-29'] # + id="tZ89KA1_eTp6" colab_type="code" outputId="c576a3c9-02f7-4320-fbe1-b53349fce44a" colab={"base_uri": "https://localhost:8080/", "height": 357} df['category'].value_counts() # + id="soc9ShqkeyHp" colab_type="code" outputId="76b7ef90-de22-486f-9963-76df20bf2119" colab={"base_uri": "https://localhost:8080/", "height": 1540} lastday = df['2017-08-29'] lastday_filtered = lastday[lastday['category'] == 'IMDb users'] lastday_filtered.tail(30) # + id="7nfIlG32fq36" colab_type="code" outputId="32f62a2b-fcb1-4adb-c272-8b308f403947" colab={"base_uri": "https://localhost:8080/", "height": 313} lastday_filtered.respondents.plot() # + id="as16xbw8f5UM" colab_type="code" outputId="7cf58968-5716-4aa9-acdb-2c8da78c189a" colab={"base_uri": "https://localhost:8080/", "height": 855} final = lastday_filtered.tail(1) final.T # + id="PrKJBavdgHyG" colab_type="code" outputId="49990081-fb78-4476-c52e-db4103470dc6" colab={"base_uri": "https://localhost:8080/", "height": 111} pct_columns = ['1_pct', '2_pct', '3_pct', '4_pct', '5_pct', '6_pct', '7_pct', '8_pct', '9_pct', '10_pct'] final[pct_columns] # + id="Kvlf94G6gL9g" colab_type="code" outputId="31dcc417-a8c1-4a29-a6d1-bb70f1e45155" colab={"base_uri": "https://localhost:8080/", "height": 359} plot_data = final[pct_columns].T plot_data.index = range(1, 11) plot_data # + id="iNg661q2gbKU" colab_type="code" outputId="964c970d-ddc6-4e10-c9a9-48cdb950b054" colab={"base_uri": "https://localhost:8080/", "height": 361} plt.style.use('fivethirtyeight') agfont = {'fontname': 'Helvetica'} fig = plt.figure() #fake = pd.Series([38, 3, 2, 1, 2, 4, 6, 5, 5, 33], # index=range(1, 11)) ax = 
plot_data.plot.bar(color='#ef7030', width=0.9, legend=False) ax.set(yticks=range(0, 50, 10), facecolor='#f9f9f9') fig.patch.set_facecolor('#f9f9f9') plt.ylabel('Percent of total votes', fontsize=9, fontweight='bold') plt.xlabel('Rating', fontsize=9, fontweight='bold', labelpad=10) #ax.get_legend().remove() ax.text(x=-2, y=46, s="'An Inconvenient Sequel: Truth To Power' is divisive", fontsize=12, fontweight='bold') ax.text(x=-2, y=43, s='IMDb ratings for the film as of Aug. 29', fontsize=11) ax.tick_params(labelrotation=0) # + id="KxVz7PAFg3z3" colab_type="code" outputId="bf937323-c3fa-4290-83a0-dc869aa70d47" colab={"base_uri": "https://localhost:8080/", "height": 355} display(example) # + [markdown] colab_type="text" id="NMEswXWh9mqw" # # ASSIGNMENT # # Replicate the lesson code. I recommend that you [do not copy-paste](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit). # # # STRETCH OPTIONS # # #### Reproduce another example from [FiveThityEight's shared data repository](https://data.fivethirtyeight.com/). # # For example: # - [thanksgiving-2015](https://fivethirtyeight.com/features/heres-what-your-part-of-america-eats-on-thanksgiving/) (try the [`altair`](https://altair-viz.github.io/gallery/index.html#maps) library) # - [candy-power-ranking](https://fivethirtyeight.com/features/the-ultimate-halloween-candy-power-ranking/) (try the [`statsmodels`](https://www.statsmodels.org/stable/index.html) library) # - or another example of your choice! # # #### Make more charts! # # Choose a chart you want to make, from [Visual Vocabulary - Vega Edition](http://ft.com/vocabulary). # # Find the chart in an example gallery of a Python data visualization library: # - [Seaborn](http://seaborn.pydata.org/examples/index.html) # - [Altair](https://altair-viz.github.io/gallery/index.html) # - [Matplotlib](https://matplotlib.org/gallery.html) # - [Pandas](https://pandas.pydata.org/pandas-docs/stable/visualization.html) # # Reproduce the chart. [Optionally, try the "Ben Franklin Method."](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) If you want, experiment and make changes. # # Take notes. Consider sharing your work with your cohort! 
# # # # # # # # # + id="qateyqpC8_OQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 533} outputId="c700f6b1-7bad-49b2-f4c5-67deab1bcee0" url = 'https://fivethirtyeight.com/wp-content/uploads/2015/11/hickey-side-dish-1.png?w=575' example2 = Image(url=url) display(example2) # + id="PunnGyeN9l-f" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 674} outputId="3825e235-4d77-4143-9297-b2d5f80989cd" tg = pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/thanksgiving-2015/thanksgiving-2015-poll-data.csv') pd.set_option('display.max_columns', 65) tg.head() # + id="FCuHr38zClVC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="80eb8884-c95e-43d3-f0aa-1708c2961064" tg.shape # + id="OMFGDvTu94wp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 846} outputId="cad72ea5-2204-41d0-8a97-308566c0b5d6" tg.groupby(['US Region']).agg('count') # + id="O53jkEPYDdG8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="66f6da5a-4c25-48e4-8853-e2dc8c76f6e2" states = pd.read_csv('https://raw.githubusercontent.com/cphalpert/census-regions/master/us%20census%20bureau%20regions%20and%20divisions.csv') states.head() # + id="XqdLvNAJFn2e" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e8f4f523-bdd5-4e5d-e0e5-c435f990ed9e" import plotly plotly.__version__ # + id="JzHL6-5SVV4S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="b5a0b4e5-356f-40da-93f0-cd233be989ed" candy = pd.read_csv('https://raw.githubusercontent.com/SMSinclair/538/master/candy-data.csv') candy.head() # + id="MVpWWLaqVqxi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="02048420-9391-4a01-a777-7867431bf221" candy.shape # + id="bcCrEcR-b248" colab_type="code" colab={} candy.competitorname = candy.competitorname.str.replace("Õ", "'") # + id="xo6ogquNV3Wl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="2df5fa85-9de2-4337-901d-0f5fce2baf07" s_candy = candy.sort_values(by='winpercent', ascending=False) s_candy.head() # + id="4_wECoxDQU66" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1432} outputId="75bea1f6-39d4-45bf-81cd-8e9e9edf6d6b" fig, ax = plt.subplots(figsize=(3,17)) y_pos = np.arange(len(s_candy['competitorname'])) ax.barh(y_pos, s_candy['winpercent'], align='center', color='green', ecolor='black') ax.set_yticks(y_pos) ax.set_yticklabels(s_candy['competitorname']) ax.invert_yaxis() # labels read top-to-bottom ax.set_xlabel('win percentage') ax.set_title('How often did a fun-sized candy of a given type win its\n matchups against the rest of the field?') plt.savefig('candy.png') plt.show() # + id="umNXg-CqTa9l" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="1oK7ivgft4ss" outputId="f7b1a05d-d0f4-4a60-8f78-6f4d7eee4c77" # Dataset : https://cb.lk/covid_19 # !wget https://cb.lk/covid_19 # + colab={"base_uri": "https://localhost:8080/"} id="HjSGvC6hv6JY" outputId="6cf4dd30-b3f0-4fb8-de29-95be3fc2f64d" # !unzip covid_19 # + id="xAqGYAjpx10D" TRAIN_PATH = "CovidDataset/Train" TEST_PATH = "CovidDataset/Val" # + id="g_oVYpRczIql" import numpy as np import 
matplotlib.pyplot as plt import keras from keras.layers import * from keras.models import * from keras.preprocessing import image # + id="owk9DwhJzyI5" # CNN based model in Keras model = Sequential() model.add(Conv2D(32, kernel_size = (3,3), activation='relu',input_shape=(224,224,3))) model.add(Conv2D(64,(3,3),activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.25)) model.add(Conv2D(64,(3,3),activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.25)) model.add(Conv2D(128,(3,3),activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(64,activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1,activation='sigmoid')) model.compile(loss=keras.losses.binary_crossentropy,optimizer='adam',metrics=['accuracy']) # + colab={"base_uri": "https://localhost:8080/"} id="NQgrrWGF8Y8V" outputId="38a71fe1-88ce-4c31-cd45-1202ff00b5fb" model.summary() # + id="CvO6hJCc-qRj" # Train from scratch train_datagen = image.ImageDataGenerator( rescale = 1./255, # Normalise the to converge shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True, ) test_datagen = image.ImageDataGenerator(rescale=1/.255) # + colab={"base_uri": "https://localhost:8080/"} id="DB2qcsAH_97G" outputId="ce8097ca-ce6c-4a34-e35b-4842a6afc4c1" train_generator = train_datagen.flow_from_directory( 'CovidDataset/Train', target_size = (224,224), batch_size = 32, class_mode = 'binary') # + colab={"base_uri": "https://localhost:8080/"} id="6KRt1BcSBNWy" outputId="c499be9f-1a05-4a7d-cca3-e35bae43dfbc" train_generator.class_indices # + colab={"base_uri": "https://localhost:8080/"} id="Ct0Ugr_0BQ_r" outputId="6d11d433-6178-47f9-eee6-f97d0a7046cb" validation_generator = test_datagen.flow_from_directory( 'CovidDataset/Val', target_size = (224,224), batch_size = 32, class_mode = 'binary') # + colab={"base_uri": "https://localhost:8080/"} id="CTgmI-2hDFgP" outputId="ccb469db-ac71-4365-c8ca-e753a9f00bb2" hist = model.fit_generator( train_generator, steps_per_epoch=6, epochs = 10, validation_data = validation_generator, validation_steps = 2 ) # + id="AwV7k1sWTZ5k" # Class Activation Maps # Grad-CAM both used to visualize the how a network is differentiating feature # + [markdown] id="2n0nv4NScOP3" # # Loss is very small and decreasing # + id="YC1N-flUTd44" model.save("model_adv.h5") # + colab={"base_uri": "https://localhost:8080/"} id="TDHkMQfOVVTq" outputId="725e4a13-dc24-44c3-f3dd-6bc2fa425315" model.evaluate_generator(train_generator) # + colab={"base_uri": "https://localhost:8080/"} id="l3T6VKjqVrAj" outputId="b8f0d90b-15f4-4d4c-eb56-2f6b1fce2107" model.evaluate_generator(validation_generator) # + [markdown] id="W1vCQwwTWWpC" # # **Test Image With Confusion Metrics** # + id="zg7vh-OhWe5j" model = load_model('model_adv.h5') # + id="FNt4V-UBWuLu" import os # + colab={"base_uri": "https://localhost:8080/"} id="GZocl7IWWv_X" outputId="e912d139-882f-4fe0-8c83-f75685337323" train_generator.class_indices # + id="tnq-2ZpQW1Du" y_actual = [] y_test = [] # + id="ooQK_ejKXBCj" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="6bf9998b-5ac1-4174-88d5-ee557f978d4b" for i in os.listdir("./CovidDataset/Val/Normal/"): img = image.load_img("./CovidDataset/Val/Normal/"+i, target_size=(224,224)) img = image.img_to_array(img) img = np.expand_dims(img, axis=0) predict_x=model.predict(img) classes_x=np.argmax(predict_x,axis=1) y_test.append(predict_x[0,0]) y_actual(1) # --- # jupyter: # jupytext: # text_representation: # 
extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- # # Wrangle Report # # This document briefly describes the wrangling efforts for the WeRateDogs Twitter dataset in the `wrangle_act.ipynb` notebook. # # ## Gathering Data # # The dataset was gather through the following methods: # 1. File on hand - `twitter_archive_enhanced.csv` # 2. File hosted on Udacity's servers - `image_predictions.tsv` # 3. Query of Twitter API - `tweet_json.txt` # ### `twitter_archive_enhanced.csv` # # Using the pandas library, `.csv` files were read directly into a DataFrame. # # ### `image_predictions.tsv` # # Using the requests library, files hosted on the internet were programmatically downloaded. Once downloaded, the pandas library was used to read in the `.tsv` into a DataFrame. # # ### `tweet_json.txt` # # Using the tweepy and json libraries, the tweets were dumped into a `.txt` file. The text file is read line by line to append the `tweet_id`, `favorite_count`, and `retweet_count` into a DataFrame # ## Assessing Data # Once the data has all been gathered into individual DataFrames, the data is assessed both visually and programmatically to look for quality and tidiness issues. # # Programmatic Methods: # - .head() # - .describe() # - .info() # - .duplicated() # - .value_counts() # - .query() # - .sum() # # The quality issues were categorized by completeness, validity, accuracy, and consistency. The tidiness issues were categorized by tidy data principles. # ### Quality Issues # #### Completeness # 1. `df_ae`: Missing and incorrect dog names # 2. `df_ae`: Benebop Cumberfloof not identified as floofer # # #### Validity # 4. `df_ae`: Retweets may capture the same dog twice with a different tweet_id # 5. `df_ae`: Replies do not have images # 6. `df_ip`: 324 predictions where the top 3 predictions are not dog breeds. Sampling data reveals turtles, fish, sloth, etc. # # #### Accuracy # 7. `df_ae`: Rating numerator and denominator have many outliers # # #### Consistency # 9. `df_ae`: Timestamp column is a string # 10. `df_ae`: Source displays url # # ### Tidiness Issues # # #### Each variable forms a column # 11. `df_ip`: Four columns for stages of dog (doggo, pupper, puppo, floofer) should be one category column # # #### Each observation forms a row # - N/A # # #### Each type of observational unit forms a table # 12. `df_ip`: Observational unit is for image prediction, `jpg_url` should be part of `df_ae` table. # 13. `df_tj`: Retweet and favorite should be appended to `df_ae` table. # ## Cleaning # This section will discuss some of the more involved cleaning efforts, the shortcomings, and possible improvements. # #### Issue #1: Missing and incorrect dog names. # # Most of the tweets introduce the dog's name in the beginning of each tweet with "This is ...". # # It appears the previous gathering efforts took note of this pattern and was able to capture most of the dog's name by extracting the word after "This is ...". # # However, if the tweet did not begin with "This is ..." the name was defaulted to "None". This explains the 745 records where the dog's name is "None". # # This method also explains why the second most dog name is "a". For example, if the tweet began with "This is a good boy..." then the method assigned the letter "a" to the dog's name. # # On further inspection, if the dog's name was lowercase, it was likely labeled incorrectly. 
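# A minimal sketch of the kind of pattern-based extraction described above (the helper below is illustrative, not the notebook's actual code):

# +
import re

def extract_name(text):
    # Names are introduced as "This is <Name>"; a lowercase match such as "a"
    # is treated as a false positive and discarded.
    m = re.search(r"This is ([A-Za-z]+)", text)
    if m and m.group(1)[0].isupper():
        return m.group(1)
    return None

print(extract_name("This is Winston. He is a very good boy. 13/10"))  # Winston
print(extract_name("This is a good boy. 11/10"))                      # None
# -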
# # The cleaning effort tried to correct the dog's name by filtering by incorrectly labeled tweets, and finding their name in the body of the text. # # In the interest of time and practicallity, the notebook only includes correction for dog names labeled as "a". # # More more can be done to correctly extract the dog names from the tweets. # #### Issue #3: Extracting nested dictionaries/lists from JSON creates messy data. # # The JSON files from Twitter are complex and include nested dictionaries/lists. While trying to convert these complex JSON files into a DataFrame, issues arose as some nested dictionaries have the same key. # # While trying to flatten or normalize the JSON files, it resulted in many empty columns and Series of lists that proved difficult to work with. # # To get around this issue, only the columns of interest were extracted. # # Additional insights may be derived from appropriate handling of the Twitter API JSON files. # #### Issue #6: Top 3 predictions are not dog breeds. # # For the majority of predictions where there are no dog predictions, the majority of the images did not have a dog in the picture. # # However, there are some instances where a dog is in a busy photo and a dog breed is not predicted. # # For example, a photo of a dog taken from behind and his face is in the reflection of a computer monitor. The top three predictions were for items on the desk. # # Retraining the model may provide more accurate breed predictions. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.6 64-bit (''pytorch_env'': conda)' # language: python # name: python37664bitpytorchenvcondac65bc6c7ab474553ae2b82822152b36f # --- # + from netCDF4 import Dataset import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import ArtistAnimation, FFMpegWriter path_bousinessq = "./boussinesq.nc" path_2dcylinder = "./cylinder2d.nc" # - def loadDataset(path): return Dataset(path) # + tags=[] cylinder2D = loadDataset(path_2dcylinder) boussinesq = loadDataset(path_bousinessq) print(cylinder2D) print(boussinesq) # - # ## Info about the datasets # # ### Cylinder 2D # # dimensions(sizes): xdim(640), ydim(80), tdim(1501), const(1) # # variables(dimensions): float32 u(tdim,ydim,xdim), float32 v(tdim,ydim,xdim), float32 xdim(xdim), float32 ydim(ydim), float32 tdim(tdim), float32 nu(const), float32 radius(const), float32 Re(const) # # ### Bousinessq # # # dimensions(sizes): xdim(150), ydim(450), tdim(2001), const(1) # # variables(dimensions): float32 u(tdim,ydim,xdim), float32 v(tdim,ydim,xdim), float32 xdim(xdim), float32 ydim(ydim), float32 tdim(tdim), # float32 radius(const), float32 obstacle_pos_x(const), float32 obstacle_pos_y(const) # # ### Accessing variables # # cylinder2D['variable_name'] # # + tags=[] type(cylinder2D['xdim']) print(cylinder2D['u'].shape) print(cylinder2D['tdim'].shape) # + tags=[] data = cylinder2D def velFromUV(data): u = data['u'][1000,:,:] v = data['v'][1000,:,:] vel = np.sqrt(u**2 + v**2) # + tags=[] u = np.array(cylinder2D['u']) fig, ax = plt.subplots() ims = [[ax.imshow(u[i], animated=True)] for i in range(1, len(u))] ani = ArtistAnimation(fig, ims, interval=1000 , blit=True, repeat_delay=1000) # plt.show() # writer = FFMpegWriter(fps=15, metadata=dict(artist='Me'), bitrate=1800) # ani.save("movie.mp4", writer=writer) # - plt.matshow(data['v'][1000,:,:]) plt.axis('off') plt.show() # ``` # % Create snapshot matrix # Nt = 
length(S(1,1,:)); # S = reshape(permute(S, [3 2 1]), Nt, [ ]); % Reshape data into a matrix S with Nt rows # U = S - repmat(mean(S,1), Nt, 1); % Subtract the temporal mean from each row # # % Create correlation matrix # C_s = (U*U')/(Nt-1); # # % Solve eigenvalue problem # [A_s LAM_s] = eig(C_s,'vector'); # # % Sort eigenvalues and eigenvectors # [lambda_s,ilam_s] = sort(LAM_s,'descend'); # A_s = A_s(:, ilam_s); # # % These are the temporal modes # % Calculate spatial coefficients # PHI_s = U'*A_s; # # % Reconstruction on mode k # k = 1; % for example # Utilde_k_s = A_s(:,k)*PHI_s(:,k)'; # # % Normalization to match direct and snapshot modes (optional) # PHI = normc(PHI_s); # # % Spatial modes # A = U*PHI; # # % Time coefficients # Utilde_k = A(:,k)*PHI(:,k)'; # % Reconstruction on mode k # ``` # + tags=[] S = np.transpose(u, (1,2,0)) print(S.shape) Nt = u.shape[0] print(Nt) U = # + from zipfile import ZipFile url = "https://cgl.ethz.ch/Downloads/Data/ScientificData/cylinder2d_nc.zip" file_name = "../data2/" + url.split('/')[-1] with ZipFile(file_name, 'r') as zipObj: zipObj.extractall('temp') # - # # Visualization import numpy as np import matplotlib.pyplot as plt import matplotlib Input=np.load('../data/cylinder_u.npy') Output=np.load('../output/16_400.npy') # ip=np.load('../input/200.npy') plt.imshow(Output[0,-1]) plt.imshow(Input[1475]) # plt.imshow(ip[510]) # plt.imshow(Input[500]) plt.imshow(Input[950]) fig=plt.figure(1) plt.subplot(2, 1, 1) plt.imshow(Output[800]) plt.subplot(2,1,2) plt.imshow(Input[800]) plt.savefig('comp') plt.imshow(Output[800]-ip[800]) plt.colorbar(orientation='horizontal') np.max(Output[200][:,100]-Input[200][:,100]) print(np.min(Input[800])) print(np.min(Output[800])) print(np.max(Input[800])) print(np.max(Output[800])) print(np.min(ip[800])) print(np.max(ip[800])) import numpy as np import pyJHTDB # 2048×512×1536 t = np.linspace(0, 8*np.pi, 512) l = np.linspace(-1, 1, 128) x = np.zeros((t.shape[0], l.shape[0], 3), np.float32) # print(t[np.newaxis, :].shape) x[:, :, 0] = t[:, np.newaxis] x[:, :, 1] = l[np.newaxis,:] x[:, :, 2] = 0. 
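# NOTE: the grid built above holds one JHTDB query point per (x, y) pair on a
# single spanwise slice: component 0 is the streamwise coordinate (0 to 8*pi
# over 512 samples), component 1 the wall-normal coordinate (-1 to 1 over
# 128 samples), and component 2 fixes z = 0, matching the 'channel' dataset
# queried further below.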
print(x.shape) # x = x.transpose(0,2,1) # print(x) t1 = np.linspace(0, 2*3.14, 256) t2 = np.linspace(-0.5, 0.5, 256) x = np.zeros((t1.shape[0], t2.shape[0], 3), np.float32) x[:, :, 0] = t1[np.newaxis, :] x[:, :, 1] = t2[:, np.newaxis] x[:, :, 2] = .0 print(x.shape) xy = np.mgrid[0:2048:8*np.pi, 0:1534:3*np.pi]#.reshape(2048,-1).T print(xy.shape) # print(xy) # + lJHTDB = pyJHTDB.libJHTDB() lJHTDB.initialize() #Add token auth_token = "" #Replace with your own token here lJHTDB.add_token(auth_token) import pyJHTDB.dbinfo T = pyJHTDB.dbinfo.channel5200['time'][-1] # T =10.0 # Select points in the database to query # lpoints = [] # for i in range(0,3): # lpoints.append([np.random.uniform(0, 8*3.14),np.random.uniform(-1, 1),np.random.uniform(0, 3*3.14)]) # # 2D array with single precision values # points = np.array(lpoints,dtype='float32') # time = np.random.random()*T time = 24.0 # u = lJHTDB.getData( # time, # x, # sinterp = 4, # data_set ='channel5200', # getFunction='getVelocity') # ubox = lJHTDB.getBoxFilter( # time, # x, # field = 'velocity', # data_set = 'channel', # filter_width = 5*(2*np.pi / 1024)) # lJHTDB.finalize() result = lJHTDB.getData(time, x, data_set='channel', sinterp = 4, tinterp = 0, getFunction = 'getVelocity') print(result.shape) # print(result) # print(u.shape) print(time) # - import matplotlib.pyplot as plt plt.imshow(result[:,:,0]) plt.imsave('try.png', result[:,:,0]) # print(u.shape) fig = plt.figure(figsize = (t1[-1] - t1[0], t2[-1] - t2[0])) a = fig.add_subplot(121) a.set_axis_off() a.imshow([:,:,0], extent = [t1[0], t1[-1] - t1[0], t2[0], t2[-1] - t2[0]], interpolation = 'none') # + import shutil T = np.arange(0., 25.0, 0.01) X = np.linspace(0, 8*np.pi, 4) Y = np.linspace(-1, 1, 4) Z = np.arange(0, 3*np.pi, 0.1)x[:, :, 0] = t1[np.newaxis, :] x[:, :, 1] = t2[:, np.newaxis] x[:, :, 2] = .0 x = np.zeros((p.shape[0], p.shape[0], 3), np.float32) les = [] dns = [] count = 0 # if os.path.exists('DNS-LES/les'): # shutil.rmtree('DNS-LES/les') # if os.path.exists('DNS-LES/dns'): # shutil.rmtree('DNS-LES/dns') if not os.path.exists('DNS-LES/les'): os.mkdir('DNS-LES/les') if not os.path.exists('DNS-LES/dns'): os.mkdir('DNS-LES/dns') # + def create_turb_dataset(t): count = 0 Nx = 512 Ny = 128 #for t in T: start_time = time.time() print("Time:",t) for idx in range(len(X)-1): px = np.linspace(X[idx], X[idx+1], Nx) for idy in range(len(Y)-1): py = np.linspace(Y[idy], Y[idy+1], Ny) for z in Z: x[:, :, 0] = px[np.newaxis, :] x[:, :, 1] = py[:, np.newaxis] x[:, :, 2] = z u, u_box= u_data(t, x) #les.append(u_box) #dns.append(u) if not os.path.exists('DNS-LES/les/%.2f'%t): os.mkdir('DNS-LES/les/%.2f'%t) if not os.path.exists('DNS-LES/dns/%.2f'%t): os.mkdir('DNS-LES/dns/%.2f'%t) for itx in range(3): norm1 = cv2.normalize(u[:,:,itx], 0, 255, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F) norm2 = cv2.normalize(u_box[:,:,itx], 0, 255,norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F) if count%100 == 0: print(count) cv2.imwrite("DNS-LES/dns/%.2f/%d.png"%(t,count), norm1) cv2.imwrite("DNS-LES/les/%.2f/%d.png"%(t,count), norm2) count+=1 end_time = time.time() print("Time: {:.2f} s".format(end_time-start_time)) # + # 2048×512×1536 px = np.linspace(0, 8*np.pi, 512) py = np.linspace(-1, 1, 128) x = np.zeros((px.shape[0], py.shape[0], 3), np.float32) # print(t[np.newaxis, :].shape) x[:, :, 0] = px[:, np.newaxis] x[:, :, 1] = py[np.newaxis,:] x[:, :, 2] = 0. 
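# The block above rebuilds the 512 x 128 query grid (streamwise 0..8*pi,
# wall-normal -1..1, z = 0); the loop below samples it at 1000 time instants
# between t = 0 and t = 25.9, keeps only the streamwise velocity component
# ([:, :, 0]) and stores the result as a (1000, 512, 128) snapshot array
# before saving it to ../data/channel_data.npy.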
T = np.linspace(0., 25.9, 1000) dataset = np.zeros((len(T), 512, 128)) for idx, time in enumerate(T): print(idx) dataset[idx] = lJHTDB.getData(time, x, data_set='channel', sinterp = 4, tinterp = 0, getFunction = 'getVelocity')[:,:,0] print(dataset.shape) np.save('../data/channel_data.npy', dataset) # - A = np.zeros(5) print(A) import numpy as np import matplotlib.pyplot as plt dataset = np.load('../data/channel_data_2500.npy') print(dataset.shape) plt.imshow(dataset[-1]) plt.show() # + import numpy as np import matplotlib.pyplot as plt u_comp = np.load('../data/Velocity160.npz', allow_pickle=True) u_flat = u_comp['arr_0'] print(u_flat.shape) u = u_flat.reshape(u_flat.shape[0], 320, 80) u = np.transpose(u, (0, 2, 1)) u_new = np.zeros((u.shape[0], 80, 640)) u_new[..., :320] = u u = u_new.astype(np.float32) plt.matshow(u[2000]) # - for it in u: print(it) u['arr_0'].shape u = np.load('../data/Velocity160.npz', allow_pickle=True) u = u_comp['arr_0'] # u = u_flat.reshape(u_flat.shape[0], 320, 80) # u = np.transpose(u, (0, 2, 1)) print(u.shape) plt.matshow(u[200]) dataset = np.load('../data/channel_data_2500.npy') print(dataset.shape) import numpy as np import matplotlib.pyplot as plt u_flat = np.load('../data/airfoil_80x320_data.npy', allow_pickle=True) # u_flat = u_comp['arr_0'] u = u_flat.reshape(u_flat.shape[0], 320, 80) u = np.transpose(u, (0, 2, 1))[:,:,140:-20] # u_new = np.zeros((u.shape[0], 80, 640)) # u_new[..., :320] = u # u = u_new.astype(np.float32) print(u.shape) print(np.max(u_flat), np.min(u_flat)) plt.matshow(u[400]) import numpy as np import matplotlib.pyplot as plt u_flat = np.load('../data/platekepsilon.npy', allow_pickle=True) print(u_flat.shape) # u_flat = u_comp['arr_0'] u = u_flat.reshape(u_flat.shape[0], 360, 180) u = np.transpose(u, (0, 2, 1))[:,:-20,:-40] # u_new = np.zeros((u.shape[0], 80, 640)) # u_new[..., :320] = u # u = u_new.astype(np.float32) print(u.shape) print(np.max(u_flat), np.min(u_flat)) plt.matshow(u[400]) # + import numpy as np import matplotlib.pyplot as plt labels = ['2D_cylinder', '2D_sq_cyl', '2D_plate'] dl_rom = [0.0152, 0.0150, 0.0151] cfd = [0.12, 0.112, 0.128] # dl_rom = [0.926, 0.851, 0.772] # cfd = [0.912, 0.828, 0.737] x = np.arange(len(labels)) # the label locations width = 0.3 # the width of the bars fig, ax = plt.subplots() rects1 = ax.bar(x - width/2, cfd, width, label='CFD', color='gold') rects2 = ax.bar(x + width/2, dl_rom, width, label='DL-ROM', color='darkorange') # rects1 = ax.bar(x - width/2, cfd, width, label='LR_Input', color='royalblue') # rects2 = ax.bar(x + width/2, dl_rom, width, label='dl_rom', color='yellowgreen') # Add some text for labels, title and custom x-axis tick labels, etc. 
ax.set_ylabel('Iteration Time (s)') ax.set_ylim(0,0.2) # ax.set_ylabel('SSIM') # ax.set_xlabel('Dataset') ax.tick_params(axis = 'both', which = 'major', labelsize = 14) ax.yaxis.label.set_size(13) # ax.set_title('Scores by group and gender') ax.set_xticks(x) ax.set_xticklabels(labels) ax.legend() def autolabel(rects): """Attach a text label above each bar in *rects*, displaying its height.""" for rect in rects: height = rect.get_height() ax.annotate('{}'.format(height), xy=(rect.get_x() + rect.get_width() / 2, height), xytext=(0, 3), # 3 points vertical offset textcoords="offset points", ha='center', va='bottom') autolabel(rects1) autolabel(rects2) # fig.tight_layout() # plt.savefig(f'barplot_PSNR.eps', dpi=600) # plt.savefig(f'barplot_SSIM.eps', dpi=600) plt.savefig(f'barplot_time.eps', dpi=600) plt.show() # + datasets = ['2d_cylinder_CFD', '2d_sq_cyl']#,['2d_plate', 'channel_flow', 'SST'] # datasets = ['2d_plate', 'channel_flow', 'SST'] # x = np.arange(4) A = np.zeros((5, 20)) fig, ax = plt.subplots() ax.tick_params(axis = 'both', which = 'major', labelsize = 16) ax.yaxis.label.set_size(16) ax.xaxis.label.set_size(16) ax.set_ylabel('Negative Log MSE') ax.set_xlabel('Iterations') ax.set_xlim(-.25,20) ax.set_ylim(4.5,10.5) # plt.xticks(x, labels, rotation ='vertical') for it, dname in enumerate(datasets): mse = np.load(f'../simulate/{dname}/mse.npy')[:20] A[it] = mse plt.plot(-np.log(mse), "o-",label = dname) plt.legend(loc='best') fig.tight_layout() plt.savefig(f'MSE_vs_iteration_1.eps', dpi=600) plt.show() # - import os os.getcwd() reduced order modelling for temporal fluid flow prediction using deep learning Deep learning based reduced order modelling for temporal fluid flow prediction def load_transfer_learning(pretrained, model, PATH): checkpoint = torch.load(PATH) model.load_state_dict(checkpoint, strict=False) layers = [] for param in pretrained.named_parameters(): print(param[0]) layers.append(param[0]) for param in model.named_parameters(): if param[0] in layers: param[1].requires_grad = False # for param in model.named_parameters(): # print(param[0], param[1].requires_grad) return model def load_transfer_learning_UNet_3D(pretrained, model, PATH, req_grad=True): checkpoint = torch.load(PATH) model.load_state_dict(checkpoint, strict=False) layers = {} # print(model.named_parameters()) for param in pretrained.named_parameters(): if param[0][:2] != "d1" and param[0][:2] != "u5": # print(param[0]) # print(type(param[1])) layers[param[0]] = param[1].data for param in model.named_parameters(): if param[0] in layers.keys(): if param[0][:2] != "d1" and param[0][:2] != "u5": param[1].data = layers[param[0]] param[1].requires_grad = req_grad for param in model.named_parameters(): print(param[0], param[1].requires_grad) return model # + tags=[] import numpy as np import torch import matplotlib.pyplot as plt # from utils import load_transfer_learning from model import UNet_3D pre_dataset_name = "2d_cylinder_CFD" final_dataset_name = "2d_sq_cyl" final_model = UNet_3D(name=final_dataset_name) pretrained = UNet_3D(name=pre_dataset_name) dataset = "2d_cylinder_CFD" PATH = f"../results/{dataset}/weights/100.pth" model = load_transfer_learning_UNet_3D(pretrained, final_model, PATH, False) # - def plot_training_from_dict(dataset_name = '2d_sq_cyl'): V = np.load(f'../results/{dataset_name}/weights/val_loss_dict_og.npy', allow_pickle=True).item() T = np.load(f'../results/{dataset_name}/weights/train_loss_dict_og.npy', allow_pickle=True).item() V_tl = 
np.load(f'../results/{dataset_name}/weights/val_loss_dict.npy', allow_pickle=True).item() T_tl = np.load(f'../results/{dataset_name}/weights/train_loss_dict.npy', allow_pickle=True).item() # T = np.array(T) # V = np.array(V) print(type(T)) print(list(T.values())) fig, ax = plt.subplots() plt.xlabel('Epoch') plt.ylabel('Loss') ax.yaxis.label.set_size(16) ax.xaxis.label.set_size(16) # ax.set_ylim([0.04, 0.16]) plt.plot(list(V.keys()), list(V.values()), 'g-') plt.plot(list(T.keys()), list(T.values()), 'k-') plt.plot(list(V_tl.keys()), list(V_tl.values()), 'b-') plt.plot(list(T_tl.keys()), list(T_tl.values()), 'r-') plt.legend(loc='best', labels=['Validation Loss', 'Training Loss', 'Validation Loss_TL', 'Training Loss_TL' ]) # plt.show() plt.savefig(f'../results/{dataset_name}/training_plot_tl.eps', dpi=600) plt.close() plot_training_from_dict(dataset_name = 'SST') dataset_names = ['2d_sq_cyl','2d_plate','SST', 'channel_flow'] for dataset_name in dataset_names: V = np.load(f'../results/{dataset_name}/weights/val_loss_dict_og.npy', allow_pickle=True).item() V_tl = np.load(f'../results/{dataset_name}/weights/val_loss_dict.npy', allow_pickle=True).item() print(list(V.values())[-1]) print(list(V_tl.values())[-1]) print() # + import numpy as np import matplotlib.pyplot as plt labels = ['2D_sq_cyl', '2D_plate', 'SST', 'Channel_flow'] # dl_rom = [0.0006, 0.0014, 0.010, 0.017] # cfd = [0.0024, 0.0039, 0.0045, 0.036] cfd = [35, 32, 43, 41] dl_rom = [26, 26, 34, 32] # dl_rom = [0.926, 0.851, 0.772] # cfd = [0.912, 0.828, 0.737] x = np.arange(len(labels)) # the label locations width = 0.45 # the width of the bars fig, ax = plt.subplots() # rects1 = ax.bar(x - width/2, cfd, width, label='Normal Training', color='gold') # rects2 = ax.bar(x + width/2, dl_rom, width, label='Transfer Learning', color='darkorange') rects1 = ax.bar(x - width/2, cfd, width, label='Normal Training', color='royalblue') rects2 = ax.bar(x + width/2, dl_rom, width, label='Transfer Learning', color='yellowgreen') # Add some text for labels, title and custom x-axis tick labels, etc. ax.set_ylabel('Training Time per Epoch (s)') # ax.set_ylabel('Loss/ Mean Absolute Error') # ax.set_ylim(0,0.04) ax.set_ylim(0,50) # ax.set_ylabel('SSIM') # ax.set_xlabel('Dataset') ax.tick_params(axis = 'both', which = 'major', labelsize = 14) ax.yaxis.label.set_size(13) # ax.set_title('Scores by group and gender') ax.set_xticks(x) ax.set_xticklabels(labels) ax.legend() def autolabel(rects): """Attach a text label above each bar in *rects*, displaying its height.""" for rect in rects: height = rect.get_height() ax.annotate('{}'.format(height), xy=(rect.get_x() + rect.get_width() / 2, height), xytext=(0, 3), # 3 points vertical offset textcoords="offset points", ha='center', va='bottom') autolabel(rects1) autolabel(rects2) # fig.tight_layout() # plt.savefig(f'barplot_PSNR.eps', dpi=600) # plt.savefig(f'barplot_SSIM.eps', dpi=600) plt.savefig(f'transfer_learning_time.eps', dpi=600) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: shenfun # language: python # name: shenfun # --- # # Section 4.3.2 # # This is an implementation of the second order problem in Sec. 
4.3.2 of the paper # # , "A generic and strictly banded spectral Petrov-Galerkin method for differential equations with polynomial coefficients" # # # Helmholtz equation # # The Helmholtz Dirichlet problem is described as # # $$ # u''(x) - \mu u(x) = f(x), \quad u(\pm 1) = 0, x \in [-1, 1] # $$ # # where $\mu \ge 0$ is a constant, $f(x)$ is some right hand side function and $u(x)$ is the solution. For the solution we will use either one of the orthogonal polynomials $Q_n(x)$ defined in [Section-4.3.1](Section-4.3.1.ipynb). That is, we use $T_n, L_n, U_n$ or $Q^{(\alpha)}_n$. # # We solve the Helmholtz equation with a Petrov-Galerkin method and use the trial functions # # $$ # \psi_n = Q_n-Q_{n+2}, \quad n=0,1, \ldots, N-2, # $$ # # for the three families $Q_n=T_n, L_n,$ or $Q^{(\alpha)}_n$. For Chebyshev polynomials of the second kind the trial functions are # # $$ # \psi_n = {U_n}-\frac{n+1}{n+3}U_{n+2}, \quad n=0,1, \ldots, N-2 # $$ # # which is actually equal to $\psi_n=(n+1)(Q^{(1/2)}_n-Q^{(1/2)}_{n+2})$. # # The trial space is $S=\{\psi_n\}_{n=0}^{N-2}$. For the test space we use $V=\text{span}\left\{\phi^{(2)}_n\right\}_{n=0}^{N-2}$, where $\phi^{(2)}_n$ also was defined in [Section-4.3.1](Section-4.3.1.ipynb). Note that we use # # $$ # \phi^{(k)}_n = \frac{(1-x^2)^k}{h^{(k)}_{n+k}} \frac{d^k Q_{n+k}}{dx^k}, # $$ # # for $Q_n=T_n, L_n, Q^{(\alpha)}_n$, or $U_n$, and that $h^{(k)}_{n+k}$ includes the scaling function $g_n$. # # The method is to find $u \in S$ such that # # $$ # (u'', v)_{\omega^{\alpha}} - \mu(u, v)_{\omega^{\alpha}} = (f, v)_{\omega^{\alpha}}, \quad \text{for all } v \in V # $$ # # We start by importing shenfun's functionality, and specify some parameters with obvious meaning. # + from shenfun import * import matplotlib.pyplot as plt N = 60 mu = 1 # - # Choose family of polynomials by uncommenting only one of the following four lines. #family = 'Ultraspherical'; kw = {'alpha': 1} #family = 'Legendre'; kw = {} family = 'Chebyshev'; kw = {} #family = 'ChebyshevU'; kw = {} # Create function spaces and test and trial functions. Assemble the coefficient matrix A, that will consist of both mass and stiffness matrices. S = FunctionSpace(N, family, bc=(0, 0), **kw) V = FunctionSpace(N+2, family, basis='Phi2', **kw) u = TrialFunction(S) v = TestFunction(V) A = inner(v, div(grad(u)) - mu*u) fig, (ax1, ax2) = plt.subplots(1, 2) ax1.spy(A[0].diags(), ms=2) ax2.spy(A[1].diags(), ms=2) fig.suptitle('Sparsity pattern stiffness (left) and mass (right) matrix') # ## Alternative formulation # # An alternative formulation (see Sec. 4.1.4) for this problem is to find $u\in \text{V}^{(1)}_N$ such that # # \begin{equation} # (u'', v)_{\omega^{(\alpha+1)}} - \mu (u, v)_{\omega^{(\alpha+1)}} = (f, v)_{\omega^{(\alpha+1)}}, \quad \forall \, v \in \text{V}^{(1)}_{N} = \text{span}\{\overline{\phi}^{(2,\alpha)}_m\}_{m=0}^{N-2}, # \end{equation} # # where # # \begin{equation} # \overline{\phi}^{(2,\alpha,\beta)}_m = \gamma^{(2,\alpha,\beta)}_{m} \phi^{(1, \alpha+1, \beta+1)}_{m}, \label{eq:phiover} # \end{equation} # # and $\gamma^{(2,\alpha,\beta)}_m$ is given by Eq. (4.18) in the paper. 
# # We use Chebyshev polynomials of the first kind for trial function ($\psi_n=T_n-T_{n+2}$) and the second kind for the test functions that become # # $$ # \overline{\phi}^{(2,-1/2, -1/2)}_m = \frac{1}{m+2} \phi^{(1,1/2,1/2)}_m = \frac{1}{\pi (m+2)}\left(\frac{U_m}{m+1}-\frac{U_{m+2}}{m+3}\right) # $$ # # It is implemented in shenfun as S = FunctionSpace(N, 'C', bc=(0, 0)) V = FunctionSpace(N, 'U', basis='Phi1', scaled=True) u = TrialFunction(S) v = TestFunction(V) A2 = inner(v, div(grad(u)) - mu*u) # Verify that the matrix is the same as `A` assembled above. Need to make sure Chebyshev is used, so recompute `A` here S = FunctionSpace(N, 'C', bc=(0, 0), **kw) V = FunctionSpace(N+2, 'C', basis='Phi2', **kw) u = TrialFunction(S) v = TestFunction(V) A = inner(v, div(grad(u)) - mu*u) M = A[0].diags('csr') + A[1].diags('csr') M2 = A2[0].diags('csr') + A2[1].diags('csr') assert np.linalg.norm((M-M2).toarray()) < 1e-14 # If the assert passes, then the matrices are the same. # # Condition numbers # # Check the condition number of the coefficient matrix. Use a relatively low number since Jacobi is not implemented very robustly, with gamma functions that overflow. Also, Numpy's routine for computing the condition number makes use of dense matrices, which limits the size. def cond(N, family='Chebyshev', alpha=0): """Return condition number of Helmholtz coefficient matrix Parameters ---------- N : int The number of quadrature points. For a Dirichlet space there is N-2 dofs. family : str, optional Either one of - 'Chebyshev' (or 'C') - 'ChebyshevU' (or 'U') - 'Legendre' (or 'L') - 'Ultraspherical' (or 'Q') alpha : number, optional Parameters used only by the Ultraspherical family """ assert N < 1000 S = FunctionSpace(N, family, bc=(0, 0), alpha=alpha) V = FunctionSpace(N+2, family, basis='Phi2', alpha=alpha) u = TrialFunction(S) v = TestFunction(V) A = inner(v, div(grad(u)) - mu*u) M = np.sum(np.array(A, dtype=object)).diags() return np.linalg.cond(M.toarray()) # Run over all families and compute the condition numbers for a range of matrix sizes. alpha = 0.5 con = {'C': [], 'L': [], 'U': [], 'Q': []} for family in 'CULQ': for n in range(3, 8): con[family].append(cond(2**n, family, alpha)) # alpha neglected by C, U, L # Plot the condition numbers using loglog axes to show that Chebyshev, Legendre and any ultraspherical basis using scaling $g_n^{\alpha}=(P^{(\alpha,\alpha)}_n(1))^{-1}$ obtain condition numbers that are $\mathcal{O}(N)$, whereas Chebyshev of the second kind (which uses a slightly different scaling) obtain condition numbers scaling as $\mathcal{O}(N^{3/2})$. Note that an ultraspherical basis with $\alpha=1/2$ results in $\mathcal{O}(N)$. # + for family in 'CULQ': plt.loglog(2**np.array(range(3, 8)), con[family]) plt.loglog(2**np.array(range(3, 8)), 2**np.array(range(3, 8)), '--') plt.loglog(2**np.array(range(3, 8)), (2**np.array(range(3, 8)))**1.5, '--') plt.legend(['Chebyshev 1', 'Chebyshev 2', 'Legendre', rf'Ultrasph. $(\alpha={alpha})$', r'$\mathcal{O}(N)$', r'$\mathcal{O}(N^{3/2})$']) plt.title('Condition numbers Helmholtz coefficient matrix') plt.show() # - # # The Airy equation # # Another second order problem is the Airy differential equation # # \begin{equation} # \epsilon u'' - x u = 0, \quad u(-1) = \text{Ai}\left(-\sqrt[3]{\tfrac{1}{\epsilon}}\right), u(1) = \text{Ai}\left(\sqrt[3]{\tfrac{1}{\epsilon}}\right), \label{eq:airy} # \end{equation} # # which has the Airy function $u(x) = \text{Ai}\left(\sqrt[3]{\tfrac{1}{\epsilon}} x\right)$ as solution. 
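# As a quick numerical sanity check (an editorial sketch, not from the paper, assuming SciPy
# is available), we can verify with `scipy.special.airy` that
# $u(x) = \text{Ai}\left(\sqrt[3]{1/\epsilon}\, x\right)$ satisfies the Airy equation, using
# the identity $\text{Ai}''(z) = z\,\text{Ai}(z)$.

# +
import numpy as np
from scipy.special import airy

eps = 1e-3
xs = np.linspace(-1, 1, 7)
z = (1 / eps)**(1 / 3) * xs
Ai, Aip, Bi, Bip = airy(z)                       # airy returns (Ai, Ai', Bi, Bi')
# u''(x) = (1/eps)^{2/3} Ai''(z) = (1/eps)^{2/3} z Ai(z), so eps*u'' - x*u should vanish
residual = eps * (1 / eps)**(2 / 3) * z * Ai - xs * Ai
print(np.max(np.abs(residual)))                  # ~ 0 up to round-off
# -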
# + import sympy as sp x = sp.Symbol('x', real=True) def main(N, family, alpha=0, returnmat=False, e=sp.Rational(1, 1e9)): ue = sp.airyai((1/e)**(sp.Rational(1, 3))*x) SN = FunctionSpace(N, family, bc=(ue.subs(x, -1), ue.subs(x, 1))) VN = FunctionSpace(N+2, family, basis='Phi2') u = TrialFunction(SN) v = TestFunction(VN) A = [inner(div(grad(u)), e*v)] + inner(u, -x*v) sol = la.Solver(A) if returnmat: return sol.mat u_hat = Function(SN) f_hat = Function(SN) u_hat = sol(f_hat, u_hat) ua = Array(SN.get_orthogonal(), buffer=ue) return np.sqrt(inner(1, (u_hat.backward()-ua)**2)) # - error = [] #e, M = sp.Rational(1, 1e6), np.arange(500, 900, 10) e, M = sp.Rational(1, 1e9), np.hstack([np.arange(200, 19201, 1000), np.arange(19220, 20781, 20), np.arange(20800, 40000, 1000)]) for N in M: error.append(main(N, 'C', e=e)) ue = sp.airyai((1/e)**(sp.Rational(1, 3))*x) fig, ax = plt.subplots(1, 2, figsize=(10, 4)) ax[1].semilogy(M, error, 'k') ax[1].set(xlabel='N', ylabel='$||u_N-u||$') M = np.linspace(-1, 1, 40000) ax[0].plot(M, sp.lambdify(x, ue)(M), 'k') ax[0].set(xlabel='x', ylabel='u(x)') left, bottom, width, height = [0.35, 0.6, 0.1, 0.2] ax2 = fig.add_axes([left, bottom, width, height]) M = np.linspace(-0.05, 0.05, 1000) ax2.plot(M, sp.lambdify(x, ue)(M), 'k') plt.show() alpha = 0.5 con = {'C': [], 'L': [], 'U': [], 'Q': []} M = range(3, 12) e = sp.Rational(1, 1e3) for family in 'CULQ': for n in M: con[family].append(np.linalg.cond(main(2**n, family, alpha, True, e=e).toarray())) # alpha neglected by C, U, L # + for family in 'CULQ': plt.loglog(2**np.array(M), con[family]) plt.loglog(2**np.array(M), 2**np.array(M), '--') plt.loglog(2**np.array(M), (2**np.array(M))**1.5, '--') plt.legend(['Chebyshev 1', 'Chebyshev 2', 'Legendre', rf'Ultrasph. $(\alpha={alpha})$', r'$\mathcal{O}(N)$', r'$\mathcal{O}(N^{3/2})$']) plt.title('Condition numbers Airy coefficient matrix') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:nh-cpu-xr14] # language: python # name: conda-env-nh-cpu-xr14-py # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torch.utils.data as data_utils from torch.autograd import Variable import sklearn from sklearn.preprocessing import StandardScaler from tqdm.notebook import trange, tqdm # Define bucket g = 9.807 #[m/s^2] bucket_area = 1 #[m^2] spigot_area = 0.1 #[m^2] h_max = 10 #[m] h_spigot = 3 #[m] time_step = 1 #[s] # Generate some synthetic "precipitation" as input num_records = 10000 in_list = [0] for i in range(1,num_records): # some percent of time we have no rain at all if np.random.uniform(0,1) < 0.5: in_list.append(0) # When we do have rain, the probability of heavy or light rain depends on the previous day's rainfall else: # If yesterday was a light rainy day, or no rain, then we are likely to have light rain today if in_list[i-1] < 1: if np.random.uniform(0,1) < 0.7: in_list.append(np.random.uniform(0,1)) else: # But if we do have heavy rain, then it could be very heavy in_list.append(np.random.uniform(0,8)) # If it was heavy rain yesterday, then we are likely to have heavy rain again today else: if np.random.uniform(0,1) < 0.3: in_list.append(np.random.uniform(0,1)) else: in_list.append(np.random.uniform(0,5)) # + # Bucket model # Memory to store model results df = 
pd.DataFrame(index=list(range(len(in_list))),columns=['input', 'et', 'h_water_level', 'mass_overflow', 'spigot_out']) # Initial conditions h_water_level = 0 mass_overflow = 0 is_noise=True noise = {"et":0, "h1":0, "h2":0, "head":0} # Main loop through time for t, mass_in in enumerate(in_list): # Add the input mass to the bucket h_water_level = h_water_level + mass_in # Lose mass out of the bucket. Some periodic type loss, evaporation, and some infiltration... et = np.max([0, np.sin(t) * np.random.normal(1,noise['et'])]) h_water_level = np.max([0 , (h_water_level - et) * np.random.normal(1, noise['h1'])]) # Overflow if the bucket is too full if h_water_level > h_max: mass_overflow = h_water_level - h_max h_water_level = h_max - np.random.uniform(0, noise['h2']) # Calculate head on the spigot h_head_over_spigot = (h_water_level - h_spigot ) * np.random.normal(1, noise['head']) # Calculate water leaving bucket through spigot if h_head_over_spigot > 0: velocity_out = np.sqrt(2 * g * h_head_over_spigot) spigot_out = velocity_out * spigot_area * time_step h_water_level = h_water_level - spigot_out else: spigot_out = 0 # Save the data in time series df.loc[t,'input'] = mass_in df.loc[t,'et'] = et df.loc[t,'h_water_level'] = h_water_level df.loc[t,'mass_overflow'] = mass_overflow df.loc[t,'spigot_out'] = spigot_out mass_overflow = 0 # - # Check to make sure that the is mass going over the top of the bucket print("overflow mean", np.round(df.mass_overflow.mean(),4)) print("overflow max", np.round(df.mass_overflow.max(),4)) df.loc[:100,:].plot() df.head() class LSTM1(nn.Module): def __init__(self, num_classes, input_size, hidden_size, num_layers, batch_size, seq_length): super(LSTM1, self).__init__() self.num_classes = num_classes #number of classes self.num_layers = num_layers #number of layers self.input_size = input_size #input size self.hidden_size = hidden_size #hidden state self.seq_length = seq_length #sequence length self.batch_size = batch_size self.lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size, batch_first=True) #lstm self.relu = nn.ReLU() self.tanh = nn.Tanh() self.fc_1 = nn.Linear(hidden_size, num_classes) #fully connected 1 def forward(self, x, init_states=None): if init_states is None: h_t = Variable(torch.zeros(batch_size, self.hidden_size)) #hidden state c_t = Variable(torch.zeros(batch_size, self.hidden_size)) #internal state else: h_t, c_t = init_states out, _ = self.lstm(x) out = self.relu(out) prediction = self.fc_1(out) # Dense, fully connected layer return prediction def check_validation_period(n_plot=100): lstm_output_val = lstm(torch.Tensor(np_val_seq_X)) val_spigot_prediction = [] val_overflow_prediction = [] for i in range(lstm_output_val.shape[0]): val_spigot_prediction.append((lstm_output_val[i,-1,1].cpu().detach().numpy() * \ np.std(df.iloc[train_start:train_end,4])) + \ np.mean(df.iloc[train_start:train_end,4])) val_overflow_prediction.append((lstm_output_val[i,-1,0].cpu().detach().numpy() * \ np.std(df.iloc[train_start:train_end,3])) + \ np.mean(df.iloc[train_start:train_end,3])) spigot_out = df.loc[val_start:val_end, 'spigot_out'] spigot_mean = np.mean(spigot_out) spigot_pred_variance = 0 spigot_obs_variance = 0 overflow_out = df.loc[val_start:val_end, 'mass_overflow'] overflow_mean = np.mean(overflow_out) overflow_pred_variance = 0 overflow_obs_variance = 0 for i, pred_spigot in enumerate(val_spigot_prediction): t = i + seq_length - 1 spigot_pred_variance += np.power(( pred_spigot - spigot_out.values[t]), 2) spigot_obs_variance += np.power(( 
spigot_mean - spigot_out.values[t]), 2) for i, pred_overflow in enumerate(val_overflow_prediction): t = i + seq_length - 1 overflow_pred_variance += np.power((pred_overflow - overflow_out.values[t]), 2) overflow_obs_variance += np.power((overflow_mean - overflow_out.values[t]), 2) print("Spigot NSE", np.round( 1 - ( spigot_pred_variance / spigot_obs_variance ), 4)) print("Overflow NSE", np.round( 1 - ( overflow_pred_variance / overflow_obs_variance ), 4)) print('hidden_state_size', hidden_state_size) print('num_layers', num_layers) print('num_epochs', num_epochs) print('batch_size', batch_size) print('seq_length', seq_length) plt.plot(df.loc[val_start+seq_length-1:val_start+n_plot+seq_length-1,'spigot_out'].values, label="Spigot out") plt.plot(val_spigot_prediction[:n_plot], label="LSTM spigot") plt.legend() plt.show() plt.close() plt.plot(df.loc[val_start+seq_length-1:val_start+n_plot+seq_length-1,'mass_overflow'].values, label="Overflow") plt.plot(val_overflow_prediction[:n_plot], label="LSTM mass_overflow") plt.legend() plt.show() plt.close() #check_validation_period() # + device='cpu' n_input=1 n_output=2 hidden_state_size = 8 #32 num_layers= 2 #6 num_epochs = 25 #25 batch_size = 256 #256 seq_length= 20 #240 learning_rate = np.linspace(start=0.1, stop=0.00001, num=num_epochs) # start=0.1, stop=0.00001 torch.manual_seed(1) lstm = LSTM1(num_classes=n_output, input_size=n_input, hidden_size=hidden_state_size, num_layers=num_layers, batch_size=batch_size, seq_length=seq_length) # + train_start = 1 train_end = int(num_records*0.6) val_start = int(num_records*0.7) val_end = int(num_records*0.8) test_start = int(num_records*0.9) test_end = int(num_records*0.999) scaler = StandardScaler() scaler_train = scaler.fit_transform(df.iloc[train_start:train_end,:]) scaler_test = scaler.transform(df.iloc[test_start:test_end,:]) scaler_val = scaler.transform(df.iloc[val_start:val_end,:]) np_train_seq_X = np.zeros((scaler_train.shape[0] - seq_length, seq_length, n_input)) np_train_seq_y = np.zeros((scaler_train.shape[0] - seq_length, seq_length, n_output)) for i in range(0, scaler_train.shape[0] - seq_length): t = i+seq_length np_train_seq_X[i, :, :] = scaler_train[i:t,:n_input] np_train_seq_y[i, :, :] = scaler_train[i:t,3:] np_val_seq_X = np.zeros((scaler_val.shape[0] - seq_length, seq_length, n_input)) np_val_seq_y = np.zeros((scaler_val.shape[0] - seq_length, seq_length, n_output)) for i in range(0, scaler_val.shape[0] - seq_length): t = i+seq_length np_val_seq_X[i, :, :] = scaler_val[i:t,:n_input] np_val_seq_y[i, :, :] = scaler_val[t,3:] np_test_seq_X = np.zeros((scaler_test.shape[0] - seq_length, seq_length, n_input)) np_test_seq_y = np.zeros((scaler_test.shape[0] - seq_length, seq_length, n_output)) for i in range(0, scaler_test.shape[0] - seq_length): t = i+seq_length np_test_seq_X[i, :, :] = scaler_test[i:t,:n_input] np_test_seq_y[i, :, :] = scaler_test[i:t,3:] ds_train = torch.utils.data.TensorDataset(torch.Tensor(np_train_seq_X), torch.Tensor(np_train_seq_y)) train_loader = torch.utils.data.DataLoader(ds_train, batch_size=batch_size, shuffle=True) ds_test = torch.utils.data.TensorDataset(torch.Tensor(np_test_seq_X), torch.Tensor(np_test_seq_y)) test_loader = torch.utils.data.DataLoader(ds_test, batch_size=batch_size, shuffle=True) ds_val = torch.utils.data.TensorDataset(torch.Tensor(np_val_seq_X), torch.Tensor(np_val_seq_y)) val_loader = torch.utils.data.DataLoader(ds_val, batch_size=batch_size, shuffle=True) # + criterion = nn.MSELoss() # Use costome loss function. 
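# Note on the learning-rate handling below: the optimizer is re-created inside the batch loop
# with lr=learning_rate[i], where i is the *batch* index, so the linspace schedule is applied
# per batch rather than per epoch.  A single optimizer combined with a
# torch.optim.lr_scheduler (e.g. LambdaLR) stepped once per epoch would be the more usual way
# to realise the intended per-epoch decay.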
optimizer = optim.Adam(lstm.parameters(), lr=learning_rate[0]) epoch_bar = tqdm(range(num_epochs),desc="Training", position=0, total=num_epochs) for epoch in epoch_bar: batch_bar = tqdm(enumerate(train_loader), desc="Epoch: {}".format(str(epoch)), position=1, total=len(train_loader)) for i, (data, targets) in batch_bar: optimizer.zero_grad() optimizer = optim.Adam(lstm.parameters(), lr=learning_rate[i]) data = data.to(device=device) targets = targets.to(device=device) # Forward lstm_output = lstm(data) loss = criterion(lstm_output,targets) #backward optimizer.zero_grad() loss.backward() # gradient descent or adam step optimizer.step() batch_bar.set_postfix(loss=loss.cpu().item(), RMSE="{:.2f}".format(loss**(1/2)), epoch=epoch) batch_bar.update() with torch.no_grad(): rmse_list = [] for i, (data_, targets_) in enumerate(val_loader): data_ = data_.to(device=device) targets_ = targets_.to(device=device) lstm_output_ = lstm(data_) MSE_ = criterion(lstm_output_, targets_) rmse_list.append(MSE_**(1/2)) epoch_bar.set_postfix(loss=loss.cpu().item(), RMSE="{:.2f}".format(np.mean(np.array(rmse_list))), epoch=epoch) batch_bar.update() # - check_validation_period() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %load_ext lab_black import matplotlib.pyplot as plt from matplotlib import ticker as from niddk_covid_sicr.analysis import get_top_n import pandas as pd from pathlib import Path import re import seaborn as sns sns.set_style("ticks") plt.rcParams["font.size"] = 14 # plt.rcParams["font.weight"] = "bold" plt.rcParams["axes.labelweight"] = "bold" plt.rcParams["axes.titleweight"] = "bold" # - top_25 = get_top_n("../data", n=26, last_date="2020/05/31") top_25.remove("US") # + def load_table(path): path = Path("../../covid-fits/%s" % path) df = pd.read_csv(path / "fit_table_reweighted.csv") df = df.set_index(["roi", "quantile"]) return df def bar_graph_feature(df, feature, log=False): rois = ["AA_Global"] + top_25 df = df.reorder_levels([1, 0]) means = df.loc[("mean", rois), feature]["mean"] stds = df.loc[("std", rois), feature]["std"] ranked = means.sort_values(ascending=False).index ranked = [x for x in ranked if x != "AA_Global"] + ["AA_Global"] plt.barh(ranked, means.loc[ranked].values, xerr=stds.loc[ranked].values) plt.gca().set_yticklabels([x if x != "AA_Global" else "Global" for x in ranked]) plt.xlabel(feature) plt.ylim(0.5, len(rois) - 0.5) plt.gca().get_yticklabels()[-1].set_weight("bold") def sort_nicely(l): """ Sorts the given iterable in the way that is expected. Required arguments: l -- The iterable to be sorted. 
""" convert = lambda text: int(text) if text.isdigit() else text alphanum_key = lambda key: [convert(c) for c in re.split("([0-9]+)", key)] return sorted(l, key=alphanum_key) def get_week(s): return int(re.search("[0-9]+", s)[0]) fit13 = load_table("tables-fit13") fit14 = load_table("tables-fit14") # + # R0 sns.set_style("ticks") plt.figure(figsize=(6, 9)) bar_graph_feature(fit13, "R0", log=False) plt.grid(True, which="both") plt.gca().xaxis.set_minor_locator(tkr.LogLocator(base=10, subs="all")) plt.xlim(1, 11) plt.xscale("log") plt.tick_params(axis="x", which="minor") plt.gca().xaxis.set_major_formatter(tkr.FormatStrFormatter("%d")) plt.gca().xaxis.set_minor_formatter(tkr.FormatStrFormatter("%d")) # + # Rt sns.set_style("whitegrid") from datetime import datetime, timedelta def line_graph_feature(df, feature, limit=None): rois = ["AA_Global"] + top_25 if limit: rois = rois[:limit] df = df.reorder_levels([1, 0]) features = [col for col in df if feature in col] features = sort_nicely(features) weeks = [get_week(f) for f in features] means = df.loc[("mean", rois), features].loc["mean"] stds = df.loc[("std", rois), features].loc["std"] # ranked = means.sort_values(ascending=False).index # ranked = [x for x in ranked if x != "AA_Global"] + ["AA_Global"] # plt.barh(ranked, means.loc[ranked].values, xerr=stds.loc[ranked].values) for roi in rois: plt.plot(weeks, means.loc[roi], label="Global" if roi == "AA_Global" else roi) # plt.gca().set_yticklabels([x if x != "AA_Global" else "Global" for x in ranked]) # plt.xlabel(feature) # plt.ylim(0.5, len(rois) - 0.5) # plt.gca().get_yticklabels()[-1].set_weight("bold") plt.legend(loc=1) dates = [ datetime.strptime("2020/01/20", "%Y/%m/%d") + timedelta(0, 24 * 3600 * 7 * x) for x in range(18) ] date_labels = [datetime.strftime(x, "%m/%d") for x in dates] plt.xticks(range(18), date_labels, rotation=75) plt.xlabel("Week") plt.ylabel("R(t)") plt.figure(figsize=(10, 6)) line_graph_feature(fit14, "Rt", limit=8) plt.yscale("log") # - # ### Compare across models df = pd.read_csv("../../covid-fits/tables-fit13/fit_table_raw.csv") roi = "United Kingdom" features = [col for col in df if "Rt" in col] features = sort_nicely(features) df[(df["roi"] == roi) & (df["quantile"] == "mean")].set_index("model")[ features ].round(1) z = df[(df["roi"] == "United Kingdom") & (df["quantile"] == "mean")].set_index("model")[ "waic" ] z.plot.bar() plt.ylim(z.min() * 0.9, z.min() * 1.5) plt.ylabel("WAIC") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import zipfile zip_ref = zipfile.ZipFile('Corpuses.zip', 'r') zip_ref.extractall('') zip_ref.close() import codecs one_long_string = "" with codecs.open('Corpuses/MarkTwain.txt', 'r', 'utf-8-sig') as text_file: one_long_string = text_file.read() one_long_string[99000:99900] from nltk import word_tokenize, sent_tokenize sentences = sent_tokenize(one_long_string) del(one_long_string) sentences[200:205] tokenized_sentences = map(word_tokenize, sentences) del(sentences) print(tokenized_sentences[200:205]) from nltk import download download('stopwords') from nltk.stem import WordNetLemmatizer wordnet_lemmatizer = WordNetLemmatizer() lemmatized_sentences = map(lambda sentence: map(wordnet_lemmatizer.lemmatize, sentence), tokenized_sentences) print(lemmatized_sentences[200:205]) del(tokenized_sentences) from nltk import download download('averaged_perceptron_tagger') from 
nltk import pos_tag, pos_tag_sents pos_tag(word_tokenize('Cats, cat, Cat, and "The Cats"')) pos_sentences = pos_tag_sents(lemmatized_sentences) del(lemmatized_sentences) print(pos_sentences[200:205]) download('tagsets') from nltk.help import upenn_tagset upenn_tagset() # https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html # Note the ``(u'Paris', 'NNP'), (u'Exposition', 'NNP'), (u'Americans', 'NNPS')`` # # ``NNP`` stands for proper noun # # ``NNPS`` proper noun, plural # # We need to get rid of all capital letters in not-proper nouns and from all punctuation marks and numbers. # tags_to_delete = ['$', "''", "(", ")", ",", "--", ".", ":", "CC"] tags_to_not_lowercase = set(['NNP', 'NNPS']) tags_to_preserve = set(['JJ', 'JJR', 'JJS', 'NN', 'NNP', 'NNPS', 'NNS', 'RB', 'RBR', 'RBS','UH', 'VB', 'VBD', 'VBG', 'VBN', 'VBP', 'VBZ']) print(pos_sentences[203]) def carefully_lowercase(words): return [(word.lower(), pos) if pos not in tags_to_not_lowercase else (word, pos) for (word, pos) in words] def filter_meaningful(words): return [word for (word, pos) in words if pos in tags_to_preserve] res = map(carefully_lowercase, pos_sentences[203:205]) print(res) filtered = map(filter_meaningful, res) del(res) print(filtered) lowercased_pos_sentences = map(carefully_lowercase, pos_sentences) del(pos_sentences) sentences_to_train_on = map(lambda words: [word for (word, pos) in words], lowercased_pos_sentences) print(sentences_to_train_on[203:205]) import itertools filtered = map(filter_meaningful, lowercased_pos_sentences) flatten = list(itertools.chain(*filtered)) words_to_keep = set(flatten) del(filtered, flatten, lowercased_pos_sentences) from nltk.corpus import stopwords import string stop_words = set(stopwords.words('english') + list(string.punctuation) + ['wa']) import gensim def trim_rule(word, count, min_count): if word not in words_to_keep or word in stop_words: return gensim.utils.RULE_DISCARD else: return gensim.utils.RULE_DEFAULT model = gensim.models.Word2Vec(sentences_to_train_on, min_count=15, trim_rule=trim_rule) model.most_similar('house', topn=5) model.most_similar('America', topn=5) model.most_similar('water', topn=5) model.most_similar('money', topn=5) model.most_similar('cat', topn=5) model.wv.save_word2vec_format(fname='MarkTwain.bin', binary=True) / --- / jupyter: / jupytext: / text_representation: / extension: .q / format_name: light / format_version: '1.5' / jupytext_version: 1.14.4 / --- / + cell_id="00000-d1d3b693-8fc2-4852-b977-bf59db96cdf7" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=1 execution_start=1634670769787 source_hash="120b09e9" tags=[] import pathlib import pandas as pd BASE_DIR = pathlib.Path().resolve().parent COURSE_DIR = BASE_DIR / "course" SAMPLES_DIR = COURSE_DIR / 'samples' / + cell_id="00001-4da4045c-083d-4950-a801-f30394e6b103" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=64 execution_start=1634670843243 source_hash="811c99cf" tags=[] init_df = pd.read_csv(SAMPLES_DIR / '2.csv') init_df.head() / + cell_id="00002-c7f63dad-4bd5-49fb-befc-dd9d01183da4" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=34 execution_start=1634670926698 source_hash="2c5513cb" tags=[] columns = ['name', 'salary_as_float'] df = init_df.copy()[columns] df.head() / + cell_id="00003-7151714e-0b8b-46ac-a35e-f057309c0bd2" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=2 execution_start=1634670959996 source_hash="889a1265" tags=[] n_rows = df.shape[0] # 
(rows, cols) / + cell_id="00004-4a2dced2-1c95-42cc-9c4d-be27b5ab38f4" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=5 execution_start=1634671018731 source_hash="cd46aaa8" tags=[] salaries = list(df['salary_as_float'].values) sum_salaries = sum(salaries) avg_salaries = sum_salaries / n_rows print(avg_salaries) / + cell_id="00005-ac5763c8-8738-4ebb-823b-252a21900f4d" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=8 execution_start=1634671056500 source_hash="1e1d38cd" tags=[] avg = df['salary_as_float'].mean() avg / + cell_id="00006-4f6b6acb-26b3-499f-a66a-12c9123cc087" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=7 execution_start=1634671094965 source_hash="3e7b4aaa" tags=[] df_sum = df['salary_as_float'].sum() # / n_rows df_sum / + cell_id="00007-a33f5ea6-757b-44f9-abf0-25c4d3d8e645" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=9 execution_start=1634671153935 source_hash="8a870ad4" tags=[] df_mode = df['salary_as_float'].mode() df_mode / + cell_id="00008-b944d33a-73af-458a-9183-e1ec7eca7685" deepnote_cell_type="code" deepnote_to_be_reexecuted=false execution_millis=3 execution_start=1634671186003 source_hash="e2c1864b" tags=[] top_salary = df['salary_as_float'].max() bottom_salary = df['salary_as_float'].min() print(top_salary, bottom_salary) / + [markdown] created_in_deepnote_cell=true deepnote_cell_type="markdown" tags=[] / / Created in deepnote.comDeepnote # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction to Python part IV (And a discussion of linear transformations) # ## Activity 1: Discussion of linear transformations # # # * Orthogonality also plays a key role in understanding linear transformations. How can we understand linear transformations in terms of a composition of rotations and diagonal matrices? There are two specific matrix factorizations that arise this way, can you name them and describe the conditions in which they are applicable? # # * What is a linear inverse problem? What conditions guarantee a solution? # # * What is a pseudo-inverse? How is this related to an orthogonal projection? How is this related to the linear inverse problem? # # * What is a weighted norm and what is a weighted pseudo-norm? # ## Activity 2: Basic data analysis and manipulation import numpy as np # ### Exercise 1: # # Arrays can be concatenated and stacked on top of one another, using NumPy’s `vstack` and `hstack` functions for vertical and horizontal stacking, respectively. # # + A = np.array([[1,2,3], [4,5,6], [7, 8, 9]]) print('A = ') print(A) B = np.hstack([A, A]) print('B = ') print(B) C = np.vstack([A, A]) print('C = ') print(C) # - # Write some additional code that slices the first and last columns of A, and stacks them into a 3x2 array. Make sure to print the results to verify your solution. # # Note a ‘gotcha’ with array indexing is that singleton dimensions are dropped by default. That means `A[:, 0]` is a one dimensional array, which won’t stack as desired. To preserve singleton dimensions, the index itself can be a slice or array. For example, `A[:, :1]` returns a two dimensional array with one singleton dimension (i.e. a column vector). 
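# (Equivalently, `np.column_stack((A[:, 0], A[:, -1]))` gives the same 3x2 result, since
# `column_stack` promotes 1-D inputs to column vectors.)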
D = np.hstack((A[:, :1], A[:, -1:])) print('D = ') print(D) # An alternative way to achieve the same result is to use Numpy’s delete function to remove the second column of A. Use the search function for the documentation on the `np.delete` function to find the syntax for constructing such an array. # # ### Exercise 2: # # The patient data is longitudinal in the sense that each row represents a series of observations relating to one individual. This means that the change in inflammation over time is a meaningful concept. Let’s find out how to calculate changes in the data contained in an array with NumPy. # # The `np.diff` function takes an array and returns the differences between two successive values. Let’s use it to examine the changes each day across the first week of patient 3 from our inflammation dataset. patient3_week1 = data[3, :7] print(patient3_week1) # Calling `np.diff(patient3_week1)` would do the following calculations # `[ 0 - 0, 2 - 0, 0 - 2, 4 - 0, 2 - 4, 2 - 2 ]` # and return the 6 difference values in a new array. np.diff(patient3_week1) # Note that the array of differences is shorter by one element (length 6). # When calling `np.diff` with a multi-dimensional array, an axis argument may be passed to the function to specify which axis to process. When applying `np.diff` to our 2D inflammation array data, which axis would we specify? Take the differences in the appropriate axis and compute a basic summary of the differences with our standard statistics above. # If the shape of an individual data file is (60, 40) (60 rows and 40 columns), what is the shape of the array after you run the `np.diff` function and why? # How would you find the largest change in inflammation for each patient? Does it matter if the change in inflammation is an increase or a decrease? # ## Summary of key points # # Some of the key takeaways from this activity are the following: # # * Import a library into a program using import libraryname. # # * Use the numpy library to work with arrays in Python. # # * The expression `array.shape` gives the shape of an array. # # * Use `array[x, y]` to select a single element from a 2D array. # # * Array indices start at 0, not 1. # # * Use `low:high` to specify a slice that includes the indices from `low` to `high-1`. # # * Use `# some kind of explanation` to add comments to programs. # # * Use `np.mean(array)`, `np.std(array)`, `np.quantile(array)`, `np.max(array)`, and `np.min(array)` to calculate simple statistics. # # * Use `sp.mode(array)` to compute additional statistics. # # * Use `np.mean(array, axis=0)` or `np.mean(array, axis=1)` to calculate statistics across the specified axis. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # PROJECT - CRISP_DM # ### Cross-Industry Standard Process for Data Mining # # First Udacity Project # * This project is divided in 3 process: *Business Understanding, Data Understanding and Data Preparation.* # # Business Understanding # The goal of this project is to analyse and explore the dataset in order to answer the following questions: # # ## Questions # #### 1- Which countries have better salaries ? # #### 2- Which gender has the highest salary ? # #### 3- Do those who work from home earn more than those who don't ? # #### 4- Do the larger companies have more employees who earn more ? # #### 5- What race has the highest salary ? 
# # # Data Understanding # + # Imports import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # + # Load the data df = pd.read_csv('data.csv') # + # Visualizing the first 5 lines df.head() # + # The shape of the data print(f'Dataframe has {df.shape[0]} lines and {df.shape[1]} columns') # - # Amount of NA's for each column df.isna().sum() # + # Which column has more than 90% of missing values? cols = df.loc[:,df.isna().mean() > .90].columns df.loc[:,df.isna().mean() > .90].columns # - # ### Dropping the columns # * I'm gonna drop the columns, because there's just a little of information for each column, and if I impute 90% of the missing values I would affect too much the dataset with values the are probably wrong. However, dropping the columns also affect the dataset, but I have to make a decision, and I'll drop them. # * If I had chosen to impute the values with the mean, I'd have 90% of the same values and therefore no variance in the column. # * The first approach is to find out why we have missing values, maybe the people who in the business section could help us with this information # * I'm choosing the threshold of 90% because in my opinion if the column has less than this, and if its information is useful, I consider impute the values or create a dummy binary variable representing 0 for Non NAN's and 1 for NAN's values for that column, which can be great for modeling the data (it's not my case). # + # Let's drop these columns # Subsetting the cols to be removed cols = df.loc[:,df.isna().mean() > .90].columns # Dropping them df.drop(columns=cols, inplace=True) # + # Visualize df.head() # + # Let's use only the necessary columns to answer the questions # Subsetting the columns cols = ['Country', 'Salary', 'Gender', 'CompanySize', 'HomeRemote', 'Race', 'ProgramHobby'] # Creating a new dataframe with only the columns above new_df = df[cols] # + # View the first 5 line of our new dataset new_df.head() # - # Types of features new_df.dtypes # + # Checking the amount of missing values new_df.isna().sum() # - # ### Filling missing values: # # Filling missing values with the mean for float type columns, in this case we only have one: **salary**. # Another approach when we have missing data, is to impute them with some statistical value, for instance, here I have one numeric column (salary), and because it has less than 90% of the values missing, and it's an important variable, in my opinion the best approach is to fill the values with mean, and then I don't lose the rest of the data. # + # Filling the missing values with the mean for salary column. 
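# (Mean imputation keeps every row but shrinks the variance of Salary; a median would be less
# sensitive to the long right tail of salaries.  The trade-off is revisited below when the
# imputed value dominates a small group.)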
new_df['Salary'].fillna(np.mean(new_df.Salary), inplace=True) # + # Visualize the data new_df.head() # - # ##### Now, let's fill the missing values for categorical variables, in this case with the mode, because it's almost the only statistical measure that can be taken from categorical variable, we can not take the mean ou median for example, and I'm not gonna drop the columns because I already dropped those that had 90% or more missing values # + # filling the missing values # Function to get the mode and fill missing value mode = lambda x: x.fillna(pd.Series.mode(x)[0]) # Filtering only categorical columns cat_cols = new_df.select_dtypes('object').columns # Using method apply to use our function in each column and visualizing it new_df[cat_cols].apply(mode) # - # Assign the new imputed values to dataframe new_df.loc[:,cat_cols] = new_df.loc[:,cat_cols].apply(mode) # Visualizing again new_df.head() # Checking the shape new_df.shape # Checking the amount of NA's new_df.isna().sum() # #### Now we have a clean dataframe, therefore we are able to analysing the data and answer the questions asked in the beggining of the project # # Data Preparation # ### First Question: Which countries have better salaries ? # First Question: Better country have better salaries ? df1 = new_df.groupby('Country')['Salary'].agg(['mean', 'std', 'min', 'max'])\ .sort_values(ascending=False, by='mean').reset_index().head(10) df1 # Creating new column with the number of time that each country showed up in the dataset df1 = df1.sort_values(by='Country') df1['Qtd'] = new_df.Country[new_df.Country.isin(df1['Country'].values)].value_counts().sort_index().values # Visualizing, high values have strong colors df1.style.background_gradient(cmap='GnBu') # We can notice from the frame above that EUA has appeared most time in the dataset, and it has the max salary. Although Bermuda has the highest Salary mean, it has only observations in the dataset, so we can not conlude so much about this country. # + # Let's take a look for only those country that appeared 100 or more times # Visualizing the max values for each variable df1[df1.Qtd >= 100].style.highlight_max() # + # A bar graph for the mean df1[df1.Qtd >= 100].style.bar(subset='mean', align='mid', color='#5fba7d') # - # Those are the countries with highest salaries. # ### Second Question: Which gender has the highest salary ? # Visualizing the amount of each differente gender new_df.Gender.value_counts() # Amount of unique genders new_df.Gender.nunique() # + # Let's focus on the gender Male, Female, Transgender, the most common ones. genders = ['Male', 'Female', 'Transgender'] # - # Subsetting dataset df2 = new_df[new_df.Gender.isin(genders)] df2.head() # Informations about salary for each gender df2.groupby('Gender')['Salary'].agg(['mean','std','max']).style.highlight_max(subset='mean') # Besides we have only 20 transgenders in our dataset, they have the highest mean salary. # Let's check more information about transgenders. 
# Subsetting dataframe with only transgenders trans = new_df[new_df.Gender == 'Transgender'] trans.head() # We can see that the mean imputation wasn't a good ideia in this case # we have only 2 differents values from that salary mean that we imputed # so we can not conclude that transgenders have the highest salary mean trans.Salary.value_counts() # Let's check the mean for the genders male and female new_df.loc[new_df.Gender.isin(['Male','Female'])].groupby('Gender')['Salary'].mean()\ .reset_index().style.highlight_max(color='lightgreen') # There's a slight difference between the salary mean for men and women # ### Third Question: Do those who work from home earn more than those who don't ? # Visualize new_df.head() # Filtering the dataset subset = new_df[['HomeRemote', 'Salary']] subset.head() # Frequency table print("How often people work from home\n") print(subset.HomeRemote.value_counts(),'\n') # Frequency table print("How often people work from home %\n") print(subset.HomeRemote.value_counts(normalize=True),'\n') # ##### Creating a function to plot a bar graph # Function def bar_plot(series, rotation = 90, figure_size = (10,6)): ''' Description: This function can be used to plot a bar graph Arguments: series: a series object type; rotation: rotation of xlabels, default = 90; figure_size: a tuple representing the size of the graph, default = (10,6); Returns: A bar graph ''' # Graph size plt.figure(figsize=(figure_size)) # Plotting the graph series.plot.bar() # The rotation of Xlabels plt.xticks(rotation = rotation) # + # Let's check the mean salary for each kind of value in HomeRemote column, using the function created above bar_plot(series = subset.groupby("HomeRemote")['Salary'].mean().sort_values(ascending=False), rotation = 60) # - # #### The tables don't seem to show any evidence that most of people who work remotely has a higher salary. # ### Fourth Question: Do the larger companies have more employees who earn more ? # Visualize new_df.head() # Frequency table for Company size new_df.CompanySize.value_counts() # ##### Now I'll apply a function to classify if the company is big or small, I'm doing this because I want to see the differences between big and small comapanies in a more general scenario, therefore I don't wanna analyze a company with a speficic number of employees, instead I'm putting a threshold of 500 employees, more than that it's big, less it's small. # Creating a dictionary, I'll use it later to change the names of categorical data mapper = {'20 to 99 employees':'small', '100 to 499 employees':'small', '10,000 or more employees':'big', '10 to 19 employees':'small', '1,000 to 4,999 employees':'big', 'Fewer than 10 employees':'small', '500 to 999 employees':'big', '5,000 to 9,999 employees':'big'} # + # Dropping answers that I'll not use, because they have no information: I don't Know and I prefer not to answer. idx = new_df[(new_df.CompanySize == "I don't know") | (new_df.CompanySize == "I prefer not to answer")].index subset = new_df.drop(index=idx) # - # Apply the mapper subset.CompanySize.replace(mapper, inplace=True) # View subset.head() # Amount of employees who work at small and big companies subset.CompanySize.value_counts() # Average salary for big and small companies bar_plot(subset.groupby('CompanySize')['Salary'].mean(), rotation = 60) # #### There's just a little difference between the people working from home from big companies and small companies # ### Fifth Question: What race has the highest salary ? 
# + # View new_df.head() # - # Frequency table for race new_df.Race.value_counts() # + # Amount of unique races new_df.Race.nunique() # + # Let's work with only the first 3 races races = list(new_df.Race.value_counts().index[:3]) # + # Subsetting subset = new_df.loc[new_df.Race.isin(races),:] # + # Subset dataframe subset.head() # + # Frequency table again subset.Race.value_counts() # + # Salary mean for each race in our subset bar_plot(subset.groupby('Race')['Salary'].mean(), rotation=45) # - # #### We can see a small difference between the salaries from East Asian and White or of European descent, but this difference increases when compared to South Asian people's salaries. # # The End # # , # # GitHub: https://github.com/DenerBEM # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="IavdVVsTam5h" # # Chapter 1: Elements of a Program (Review Questions) # + [markdown] id="1PAZgXBqam5m" # The questions below assume that you have read the [first ](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/01_elements/00_content.ipynb) and the [second ](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/01_elements/03_content.ipynb) part of Chapter 1. # # Be concise in your answers! Most questions can be answered in *one* sentence. # + [markdown] id="0MZuhUzuam5n" # ## Essay Questions # + [markdown] id="lVV6T-4-am5n" # **Q1**: Elaborate on how **modulo division** might be a handy operation to know! # + [markdown] id="ildl7ZMDam5o" # < your answer > # + [markdown] id="SNLS8M9Oam5p" # **Q2**: What is a **dynamically typed** language? How does it differ from a **statically typed** language? What does that mean for Python? # + [markdown] id="FwDmFSq5am5q" # < your answer > # + [markdown] id="UlLpFNstam5q" # **Q3**: Why is it useful to start counting at $0$? # + [markdown] id="hlWo-lAwam5r" # < your answer > # + [markdown] id="WC4Wjd0aam5s" # **Q4**: What is **operator overloading**? # + [markdown] id="-wdnKKR4am5s" # < your answer > # + [markdown] id="ICDcJkgTam5t" # **Q5**: What are the basic **naming conventions** for variables? What happens if a name collides with one of Python's [keywords ](https://docs.python.org/3/reference/lexical_analysis.html#keywords)? # + [markdown] id="TpS0tqQ8am5t" # < your answer > # + [markdown] id="00hAyZlnam5u" # **Q6**: Advocates of the [functional programming ](https://en.wikipedia.org/wiki/Functional_programming) paradigm suggest not to use **mutable** data types in a program. What are the advantages of that approach? What might be a downside? # + [markdown] id="ALP4G_WIam5u" # < your answer > # + [markdown] id="vf-MHPq9am5v" # ## True / False Questions # + [markdown] id="I0WdPkmPam5w" # Motivate your answer with *one short* sentence! # + [markdown] id="YF9EjtHZam5w" # **Q7**: "**dunder**" refers to a group of Australian (i.e., "down under") geeks that work on core Python. # + [markdown] id="S7rrPRReam5x" # < your answer > # + [markdown] id="GV-ir2BRam5x" # **Q8**: The **Zen of Python** talks about Indian genius programmers. 
# + [markdown] id="dpD5s2wIam5y" # < your answer > # + [markdown] id="dGWmHkvmam5y" # **Q9**: When NASA famously converted some measurements to the wrong unit and lost a Mars satellite in 1999 (cf., [source](https://www.wired.com/2010/11/1110mars-climate-observer-report/)), this is an example of a so-called **runtime error**. # + [markdown] id="Dyw-Txg5am5z" # < your answer > # + [markdown] id="9qXPoOBnam5z" # **Q10**: [PEP 8 ](https://www.python.org/dev/peps/pep-0008/) suggests that developers use **8 spaces** per level of indentation. # + [markdown] id="5dRMjyCcam50" # < your answer > # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="R1JCDzPJSfi2" # # Практика №1 # + [markdown] id="LA6s0CLLSi5c" # Первое практическое занятие мы начнем с реализации dynamic time warping (DTW) алгоритм на основе token passing algorithm (TPA). # # У нас имеется несколько эталонов (различные вариации произнесения слов YES и NO) и пара записей (также YES и NO). С помощью алгоритма DTW нам будет необходимо определить, на какой из эталонов больше всего похожа каждая запись. Данные wav файлы взяты из открытой базы данных Google Speech Commands Dataset (https://ai.googleblog.com/2017/08/launching-speech-commands-dataset.html). # + [markdown] id="aV9Va3ZdS-6Y" # ### Bootstrap # + colab={"base_uri": "https://localhost:8080/"} id="QYIcD-9_3RiA" outputId="8368cce9-1ac1-4313-d497-26d650a73010" # !gdown --id "1VNi62ZxZA7rgMKsvECDGZdl3WQJcBKRb" # + id="16D45dWZTRwg" colab={"base_uri": "https://localhost:8080/"} outputId="2df1e893-bfb1-44b2-856d-b8b72953de38" # !unzip -q lab1.zip # !rm -rf lab1.zip sample_data # %cd lab1 # + id="o1IPr9a8VxXn" import os import time import librosa, librosa.display import matplotlib.pyplot as plt # %matplotlib inline import numpy as np from tqdm.notebook import tqdm import IPython.display as ipd # + id="wZmH0Go6-8-4" plt.rcParams["figure.figsize"] = (15.0, 5.0) plt.rcParams["image.interpolation"] = "nearest" plt.rcParams["image.cmap"] = "gray" # + [markdown] id="zcMtqRtgV3aa" # Рассмотрим на примере один из эталонов: # + id="I0Hf07iEV5XN" colab={"base_uri": "https://localhost:8080/", "height": 126} outputId="b6d8834c-1745-4a6b-9258-ac6c8618ab42" wav_example = "data/yes_no/etalons/yes_004ae714_nohash_0.wav" # Read wav-file. # Here sr=None to preserve the native sampling rate. 
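# librosa.load returns the samples as a float32 NumPy array (mono by default, scaled to
# [-1, 1]) together with the sampling rate.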
x, sr = librosa.load(wav_example, sr=None)

print(f"Number of samples: {len(x)}.")
print(f"Sampling rate: {sr} Hz.")
# ~ librosa.get_duration(x, sr):
print(f"Duration: {len(x) / sr:.2f} s.")

# Playback:
ipd.Audio(x, rate=sr)

# + id="DQcrs79wV8jn" colab={"base_uri": "https://localhost:8080/", "height": 315} outputId="304f9029-4135-419f-f505-9f04bdfe1cf3"
# Time representation of the signal:
librosa.display.waveplot(x, sr=sr);

# + colab={"base_uri": "https://localhost:8080/", "height": 301} id="JoyQMrfWVhUz" outputId="b53ea90b-e714-491e-a3d9-8a4fb6747185"
plt.plot(x);

# + colab={"base_uri": "https://localhost:8080/", "height": 310} id="gHq8YVbnu7eY" outputId="33484c8f-fc5d-44b7-80bb-cb908f93f9f9"
librosa.display.waveplot(x[:200], sr=sr);

# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="4b7MyNgfVaTQ" outputId="6d55e42c-a635-49fc-feaf-c118b23f36ee"
plt.plot(x[:200]);

# + [markdown] id="Wqum98XPBGMp"
# The `librosa` view differs from the plain `matplotlib` plot in two respects:
# * `librosa` converts sample indices into seconds using the sampling rate ($t = k / sr$, where $k$ is the sample number), whereas `matplotlib` uses plain sample numbers on the horizontal axis;
# * for a monophonic signal (like ours) `librosa` plots a curve filled between `[-abs(y), abs(y)]` (symmetric about the time axis), whereas `matplotlib`'s plot is asymmetric.

# + id="G639c-CYWJT9" colab={"base_uri": "https://localhost:8080/", "height": 350} outputId="de43645f-4068-4a5c-afb2-82b77277ca54"
# Spectrogram:
D = librosa.amplitude_to_db(np.abs(librosa.stft(x)), ref=np.max)
librosa.display.specshow(D, sr=sr, y_axis="hz", x_axis="time")
plt.colorbar(format="%+2.0f dB")
plt.title("Log-frequency power spectrogram");

# + [markdown] id="lrXMLQSDWajP"
# Now let's compute speech features for our etalon.
# To begin with, we will work with 13-dimensional MFCC features.

# + id="pcuVfy_3WLS8" colab={"base_uri": "https://localhost:8080/"} outputId="4a7cd9ad-0a09-43f8-d3bc-b5f06b693c81"
# n_fft = 25 ms -- frame length
# hop_length = 10 ms -- hop (step) length
# In samples this gives:
n_fft = int(sr * 0.025)
hop_length = int(sr * 0.01)

mfcc = librosa.feature.mfcc(x, sr=sr, n_mfcc=13, n_fft=n_fft, hop_length=hop_length)
print(mfcc.shape)
print(mfcc[:, 0])  # feature vector of the zeroth frame

# + [markdown] id="myZAPeZSWsKH"
# Now let's move on to the DTW algorithm itself.
# First we need to build a graph whose branches are our etalons; the recordings will be compared against these branches. Every branch consists of nodes of type `State`. Each node describes a single frame of an etalon, so the number of nodes in a branch equals the number of frames in the etalon.
#
# For now, nodes in the graph have transitions only to themselves and to the next node. Node zero is the root and has transitions into the first node of every branch. The final node of every branch has a transition only to itself. A diagram of such a graph is shown below:

# + [markdown] id="OKAQH5_FW4Jo"
# ![](https://drive.google.com/uc?export=view&id=1PQwZSf9EFEZUWQt9dmOifpyqCM6uuCRM)

# + [markdown] id="04Xj2ymdzQhn"
# Here we have two etalons, for the words YES and NO, 4 and 3 frames long respectively.
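# + [markdown]
# For intuition: the alignment problem encoded by this graph is often written as the classic DTW
# dynamic-programming recurrence over a cost matrix. The cell below is only a reference sketch of that
# textbook formulation (it allows stretching in both directions and is independent of the token-passing
# machinery built in this lab); it is not part of the assignment.

# +
def dtw_distance(A, B):
    """Textbook DTW distance between two feature sequences A (n x d) and B (m x d)."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])   # local frame-to-frame distance
            # stay on A's frame, stay on B's frame, or advance both
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical usage with the transposed MFCC matrices prepared later in this notebook:
# dtw_distance(records_data_dict[record_name], etalons_data_dict[etalon_name])
# -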
# + id="DxieTkKhWV0M" class State: def __init__(self, feats, index): # feats: node feature vector # is_final: whether the node is final in the word # word: etalon word (only present for the final node) # best_token: token with the least distance in the node # current_word: current etalon word # next_states: the list of next nodes # index: node index self.feats = feats self.is_final = False self.word = None self.best_token = None self.current_word = None self.next_states = [] self.index = index def load_graph(etalons_dict, enable_skips=False): start_state = State(None, 0) graph = [start_state,] state_index = 1 for word in etalons_dict.keys(): previous_state = start_state for feats_frame in etalons_dict[word]: state = State(feats_frame, state_index) # Etalon word will now be stored in each node: state.current_word = word # Add loop: state.next_states.append(state) previous_state.next_states.append(state) previous_state = state #-----------------------------TODO №2------------------------------ # Добавьте переходы через один и через два узла # (hint): добавлять переходы проще задним числом if enable_skips: if state in graph[state.index - 1].next_states \ and graph[state.index - 1] in graph[state.index - 2].next_states \ and state.index - 2 > 0: graph[state.index - 2].next_states.append(state) if graph[state.index - 2] in graph[state.index - 3].next_states \ and state.index - 3 > 0: graph[state.index - 3].next_states.append(state) #------------------------------------------------------------------ graph.append(state) state_index += 1 if state: state.word = word state.is_final = True return graph def print_graph(graph): if not os.path.exists("exp"): os.mkdir("exp") with open("exp/graph.txt", 'w') as fn: np.set_printoptions(formatter={"float": "{: 0.1f}".format}) for state in graph: next_state_indexes = [s.index for s in state.next_states] fn.write( "State: index={} word={} is_final={} " \ "next_state_indexes={} ftr={} \n".format( state.index, state.word, state.is_final, next_state_indexes, state.feats ) ) print("*** See exp/graph.txt ***") # + [markdown] id="4lEU31czXNy5" # Подготовка наших эталонов и записей для распознавания: # + id="Douk24ayXO4c" def load_data_dict(dir_path): data_dict = {} for wav_name in os.listdir(dir_path): x, sr = librosa.load(os.path.join(dir_path, wav_name), sr=None) mfcc = librosa.feature.mfcc(x, sr=sr, n_mfcc=13, n_fft=int(sr * 0.025), hop_length=int(sr * 0.01)) data_dict[wav_name] = mfcc.T return data_dict # + id="7rCUtky5XTF-" etalons_dir = "data/yes_no/etalons" records_dir = "data/yes_no/records" etalons_data_dict = load_data_dict(etalons_dir) records_data_dict = load_data_dict(records_dir) # + id="2xTbfklgXa4A" colab={"base_uri": "https://localhost:8080/"} outputId="8802ca85-4870-4e24-cebe-3e7f4ed53a44" graph = load_graph(etalons_data_dict) print_graph(graph) # + [markdown] id="f6IgmAjjXkAS" # Далее идет описание класса Token и самого алгоритма TPA: # # - Алгоритм TPA двигается последовательно по кадрам записи, на каждом кадре берёт множество токенов от предыдущего кадра и порождает множество токенов для текущего кадра. # - Один токен – это вещь, олицетворяющая собой один из возможных вариантов разметки (соотнесения кадров записи кадрам эталона), заканчивающаяся на данном кадре. По токену можно понять, какую суммарную дистанцию набрал данный токен (то есть то, насколько он хорош), в каком узле графа эталонов он находится (чтобы от него можно было породить токен для следующего кадра) и на ветке какого слова он находится на данный момент. 
# - At any given frame, the whole set of tokens describes all alignments that can be reached by that frame. After processing the last frame we simply go through all final tokens (alignments) and pick the best one, i.e., the one with the minimal total distance between the recording and the etalon.
# - Final tokens are tokens that correspond to a completed alignment (one that ends up in a final node of the graph after the last frame). The remaining tokens can be treated as rejects.
#
# In other words, TPA (in the form described here) is just a convenient way of writing down an exhaustive search over all possible alignments.

# + [markdown] id="oOyXHyDTXukI"
# ### Problems:
# Such a TPA enumerates every possible alignment, which makes our DTW considerably slower. To solve this we will discard "bad" tokens already while traversing the graph. This is the job of the so-called **beam** and **state pruning**.
#
# ### **state pruning**:
# The `State` class gets a `best_token` attribute, a reference to the best token sitting in this state at the current frame of the recording. After all tokens for one frame have been spawned, we walk over each of the resulting `next_tokens` and either write the current token into `State.best_token` (here `State` is the node the token sits in), killing the previous best token, or kill the token itself if it is worse than the best one at that node. A token's viability is tracked by its `alive` attribute (`True` or `False`, respectively). The logic is that if several tokens sit in node *n*, the shortest remaining distance they have to cover to the finish is the same, so we can keep only the best of them and drop the rest. The important thing is not to forget to clear the `best_token` field of every node in the graph before processing the next frame of the recording.
#
# ### **beam pruning**:
# The idea is to find bad tokens at every frame of the recording and drop them (`token.alive = False`).
# Bad tokens are, obviously, those that have accumulated too large a deviation from the states they pass through. What counts as "too large" is unclear in absolute terms (maybe the token is bad, maybe the word is just long, maybe the audio is very poor; you cannot tell). Therefore a token's "quality" is measured relative to the best token. We introduce a variable `threshold` (usually called `beam`, the width of the search beam), and if `token.dist > best_token.dist + threshold`, the token is bad and we discard it.
# By throwing a token away because of its deviation, we risk that a few frames later all descendants of the surviving tokens turn out to be very bad, while the descendants of the discarded token would have been excellent. In other words, introducing `threshold` introduces an error, so `threshold` must be chosen so that the algorithm speeds up a lot while the error grows only slightly.
#
# Introducing these methods may leave us with no surviving tokens in the final nodes of the graph at the end. To still be able to output a result in that case, we add an extra attribute `current_word` to the `State` class, so that every node of every branch stores the word of the etalon that branch belongs to.
#
# Then, at the end of the DTW run, if there are no alive final tokens, we simply pick the best of the remaining ones and read off the etalon word from its `state.current_word` field.
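# + [markdown]
# A toy illustration of the beam rule above (made-up distances, not real tokens): with a beam of 20,
# any token that is more than 20 worse than the current best is dropped.

# +
toy_dists = [10.0, 12.0, 40.0]   # accumulated distances of three hypothetical tokens
beam = 20.0                      # beam width (threshold)
best = min(toy_dists)
survivors = [d for d in toy_dists if d <= best + beam]
print(survivors)                 # [10.0, 12.0] -- the 40.0 token would be pruned
# -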
# + id="Zvu-la1tXb0A" class Token: def __init__(self, state, dist=0.0, word=""): # state: graph state that the given token has at the moment # dist: total accumulated distance traveled by the token # word: the word that was recognized by the token # alive: whether the token is alive self.state = state self.dist = dist self.word = word self.alive = True def beam_pruning(next_tokens, beam_threshold): #--------------------------------TODO №1----------------------------------- # 1. Ищем токен с лучшей дистанцией из next_tokens. # 2. Присваиваем token.alive значение False, если дистанция этого токена # больше, чем дистанция лучшего токена + beam_threshold. # To drop dead tokens completely: # next_tokens = list(filter(lambda x: x.alive, next_tokens)) best_token = min(next_tokens, key=lambda x: x.dist if x.alive else np.inf) for token in next_tokens: if token.alive and token.dist > best_token.dist + beam_threshold: token.alive = False #-------------------------------------------------------------------------- return next_tokens def state_pruning(next_tokens): for token in next_tokens: if not token.state.best_token: token.state.best_token = token else: if token.dist <= token.state.best_token.dist: token.state.best_token.alive = False token.state.best_token = token else: token.alive = False # сбрасываем best_token на None для всеx узлов графа: for token in next_tokens: if token.state.best_token: token.state.best_token = None return next_tokens def distance(X, Y): result = float(np.sqrt(sum(pow(X - Y, 2)))) return result def recognize(filename, features, graph, recognition_results, beam_threshold=None): start_state = graph[0] active_tokens = [Token(start_state),] next_tokens = [] for ftr_frame in tqdm(features, desc="Recognition..."): for token in active_tokens: if token.alive: for transition_state in token.state.next_states: new_token = Token(transition_state, token.dist, token.word) new_token.dist += distance(ftr_frame, transition_state.feats) next_tokens.append(new_token) # state pruning: next_tokens = state_pruning(next_tokens) #--------------------------------TODO №1------------------------------- # beam pruning: if beam_threshold is not None: next_tokens = beam_pruning(next_tokens, beam_threshold) #---------------------------------------------------------------------- active_tokens = next_tokens next_tokens = [] # поиск финальных токенов: final_tokens = [] for token in active_tokens: if token.state.is_final and token.alive: final_tokens.append(token) # если нет финальных, то берем лучший из выживших: if len(final_tokens) != 0: win_token = final_tokens[ np.argmin([token.dist for token in final_tokens]) ] else: alive_tokens = [token for token in active_tokens if token.alive] win_token = alive_tokens[ np.argmin([token.dist for token in alive_tokens]) ] win_token.state.word = win_token.state.current_word # вывод результата DTW print("Result: {} ==> {}".format(filename, win_token.state.word)) # совпадает ли запись с полученным эталоном: record_word = filename.split('_')[0] etalon_word = win_token.state.word.split('_')[0] recognition_results.append(etalon_word == record_word) return recognition_results # + [markdown] id="7d5lcrUnX-Hs" # ### Запустим наше распознавание # + id="gyOGKDeoX-xe" def run_recognizer(records_data_dict, graph, beam_threshold=10): start_time = time.time() recognition_results = [] for filename in records_data_dict.keys(): recognition_results = recognize( filename, records_data_dict[filename], graph, recognition_results, beam_threshold ) print('-' * 60) wer = (1 - 
           sum(recognition_results) / len(recognition_results)) * 100
    print(f"WER: {wer:.2f}%.")

    total_time = time.time() - start_time
    print(f"Total time: {total_time:.2f} sec.")
    print('-' * 60)

    return wer, total_time


# + id="5OrSn2WoYB29" colab={"base_uri": "https://localhost:8080/", "height": 745} outputId="7f927dc7-2877-42e2-a9b7-87a9d6f69228"
wer_without_skips, time_without_bp = run_recognizer(records_data_dict, graph, beam_threshold=None)

# + [markdown] id="qoYSL3bkY6U_"
# ### **Task No. 1** (6 points):
# Implement the token pruning function **`beam_pruning`** described earlier. Choose a `beam_threshold` value such that the recognition quality does not degrade. By what factor did the recognition speed up?
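# + [markdown]
# One way to pick the threshold (a sketch reusing `run_recognizer` from above; the candidate values are
# arbitrary guesses): sweep a few beam widths and watch where WER starts to degrade. Note that this
# re-runs recognition several times, so it is slow.

# +
sweep_results = {}
for thr in [500, 1000, 2000, 2600, 4000]:
    sweep_results[thr] = run_recognizer(records_data_dict, graph, beam_threshold=thr)

for thr, (wer, total_time) in sweep_results.items():
    print(f"beam_threshold={thr}: WER={wer:.2f}%, time={total_time:.2f} s")
# -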
# + colab={"base_uri": "https://localhost:8080/", "height": 745, "referenced_widgets": ["33a58db229b9497f853f7a240362fefb", "2c42e78ee76c42cfad9ea8ab72a8f7ff", "ad78f5658f684756aecdf7d7130203c8", "523f339b333648f7ba6b6843c742511a", "9959236cf2f9415d8a1a96496bdc8ab2", "bc5a467ce0f04a999df1ffd3be170299", "d912de4cf8174725be3967ae256d224b", "c2ce2be5963e43f59d3c9d3de1bc9811", "b5b4cf9c195f4d179e4f03e826c4a656", "249a71f485f9429a963ddd4352577378", "8f25bfcee9604d0297b63a5532163e2b", "2b22ceb5e2674e8398e7c5c564bfd69c", "c6892fb33154461daaa728b898466f38", "3b9fef37a2fc41c6a36487037e14a628", "", "0412a156ec114c75ba873be9ccf4eb45", "bd474a4727ff4c308f2a687d46329d00", "1b416f53ad9e4677a517cf50a8c38ec8", "6441e49722f04d9cbed549761fd4b65e", "", "", "3c4ce9fbb6ee43748eb3b55673547dab", "83349773eadb4f08a1808498edac3132", "f0e2398f398b4116aa9b822fe2909a97", "", "", "", "", "09982fea153049d3a1ba8d9348371b72", "", "d123ff2f11ff4a2ca2b0dabaaccd8891", "862f82a33c384214a8ffdad6fb6c6c54", "", "", "", "5d50e4d6b2484a3ca6e28bfe66e839c4", "049e3ed9a20749ab964ca97fad7af978", "a5f9995e7f5e4a22959ed019f8325b08", "", "cb1599302e9e4eadb501aeec1e4c5cb8", "4d804bb7e5e943f08aba21cf4333ae5e", "", "203b8c192a464d3d98a6949a99222948", "", "4624333dfba44b5085cadef67ed84b6e", "73d602d208a8478e8768a5a9825fe606", "", "", "", "0610a14fe8994a1fa663b9e2b0aa0dbf", "d8035d6939824504ba52511b329c4acb", "35231c2acdbe4ea894681e5a06b53e7b", "8b2e12695df243bfa4ba6f96a8e65a31", "", "265163c01e6f48f9a3431c1bd398391c", "39aee86df4c045c186ca9f87af14eb97", "0904e6a8f589493891f3790b0285b216", "", "180370192c71477995e831863337ccad", "", "", "22e8fb3145e94f5fb57d83df4326fcba", "be15bad70877494cb9aba988248cd179", "", "47103e7474124d5492f279c76a68e637", "", "", "e7a0c776c57f48048b566be872501c60", "bdff15dc8a7b4a61804d1355030a8534", "6d79c397514c4959998332e75d3a5285", "", "", "274dc28c1e474ec8900d6dc6beac0ca1", "", "", "257c290e1eb94792958d6a8f81d7d7fb", "72ceb78d90f14153a77929616c72e307", "b372a17d1c41420da05d255632a4843e", "da84146226c1468ebbeb8a3a30ecb73e", "36c493097b1e40edafeeb717363187bf"]} id="Jk0DVJ1vyEt3" outputId="f404c5da-1df7-49a9-b6a0-735f01291ebf" # WER stays the same for beam_threshold >= 2600 (heuristic value): _, time_with_bp = run_recognizer(records_data_dict, graph, beam_threshold=2600) # + colab={"base_uri": "https://localhost:8080/"} id="tgaqnqzjzJjo" outputId="8fff22d6-e418-4d09-ce83-0dca3ba64bb2" time_gain = time_without_bp / time_with_bp print(f"The recognition process has accelerated by {time_gain:.2f} times.") # + [markdown] id="JM5YlLVXY7GA" # ### **Задание №2** (4 балла): # На данный момент граф реализован так, что эталоны могут только растягиваться относительно записей. Но порой нужно уметь и сжиматься. Для этого нужно добавить для узлов графа дополнительные переходы через один и два состояния (нулевой узел должен остаться прежним). # # Улучшила ли эта модификация качество распознавания? 
# + id="4-UjeaPYMzwF" # Let's enable skips and repeat the steps fom above: graph = load_graph(etalons_data_dict, enable_skips=True) # + colab={"base_uri": "https://localhost:8080/", "height": 745, "referenced_widgets": ["0dabad56a237434e927128a164eb66c5", "2f4cf1f7b0c94b428e9f9b5de0d72040", "56376b2ee4884af09e99c609b0385cab", "074b652821394285ac2b872c0076b250", "2d1baa8b4455486b8db38116b0b7b3cc", "c33013e80c2c480983876af4872ba60f", "", "d425bbeb46d84573a2bf4e97e3efa397", "", "", "", "fe9ec3e0d36b4e2a95126b98cb390943", "", "605ad7b7f4ef44e9b464b6d2d64d1a63", "", "b13ea83080cc4ceeaa913343ff7b16c9", "", "", "3bbe65ba7b7b4acda86f7270acf6f273", "83fe097182d64623a04513f31b845bdc", "", "d27711965c3640a787a7fa768fe3a20d", "fbcb4e6ce56745eba34d2e3094f19c0c", "9b55ac0a922f4d6b9987e0989fe78875", "", "a98185b32d8a401d84a90b4145bc63e7", "8ede97d481504a19b3a02746a8250a16", "", "9f9aa6ef71994f88af4ca45ed4f8515f", "", "", "0b94adfebfc748ed920cc4b62a5d40d8", "", "701430c03a7742f6b34629d80ab355e1", "", "", "", "", "c0d41e68c3a64025810062918b4856d3", "f1a38d86b1454e399a81babbe8137b70", "d14ec36676e94aa39a8a3c9de4863667", "381cad1a631c4214a2ad90254b843a0c", "e7b43c28eda340c39e5c24c1723f1e69", "a73c52025ff54dad97210788938728e4", "", "", "", "26deac6122c544f08d59e69d383562fd", "d52c9dad1a7e4f018b5ef5a0190e1887", "7fe73a704cda43caadd4edf5c76898a9", "", "2970606ed36346b3826c469e38352782", "", "317e0e80e9b24112abe701e2a7e3d847", "c977e19fd03b4fb591c38adb0968a847", "9c2abb6b4f774a2491d1528ce07fd35b", "62bc85ae3f7b4291a2b19eae047f0b2a", "b37878d7336c4aeb9534b33e6cabe87a", "", "", "902786cab1984b39917ff131689e2e48", "8809d11e6fda43c696fdb13eff1f8ec1", "f25a9702f38b4503807ef0a1d0d72d07", "83f5230a78af468eafefc337d86d4d07", "", "", "", "ca057a4eedf043888d8dad22a7c2818e", "f62ec86244bb4117ae36367f06e118a2", "96985998988c46a3a8ce582b9d00aedf", "af9f1b2a55c54467892a8d98a3e009a5", "", "", "ac324d76df7a4900be64de475a2b5f7e", "fce430bbd896403a856dee63a1e3f3c1", "", "ece64a4aaf284e3ca645471d180c1ea9", "17918f6d974a4a979ef6bdefb69b70fa", "", "47507b2494bb4a249cc127ced6c92b44"]} id="03MNMx-xM2CJ" outputId="b679f0ce-f966-4a8f-fb7e-ec9f7acb0c50" wer_with_skips, time_without_bp = run_recognizer(records_data_dict, graph, beam_threshold=None) # + colab={"base_uri": "https://localhost:8080/", "height": 745, "referenced_widgets": ["31c8cb76d3fa4e859fbe807da2dae3f0", "94a6d7f7da3c4860b031b1267e1ec160", "", "74e52039dfe8467b8e02affb98492359", "", "561e0992b51f449dad5104c9b6e32fdd", "dcc623a9d2fe47cf95b9ad6c5b4e27ef", "a0ad052ffe3842558cacb1e05b244392", "18d481e9cd3d4621a266e9380bcea490", "", "", "007beae93e144d069f1732d102dff173", "a8f274d6c8174f9a8f70a9c17c2e63aa", "", "", "7f12036412e74462a2a011b985234637", "8a37c6265fa746e7be534d18d23c8050", "", "", "d6d55f3c265e408e9ebd6dd7570d7ac4", "0d8a450ee61648a684196f102f5fa05e", "8dc308ab98604214b01e6f56a6234176", "", "", "9191f7871c534ab0a1153a84d427fac1", "54f21a8615b5442e935149a302079082", "3b3087e423544ed69ddd050cc603a6ca", "98296695291c4ebe80ce7b65b579e57d", "3be2cd63bb3b408f88a4d215e283c0a6", "0a7d703276b043a281162c16f15ed226", "46f1755d2db4496c8be250f5ea5e776a", "a24bf893b7c3409ca05d1b1140854e32", "afbd099df2fe4ba0922814981b624835", "261645d2448346f58245f0e0ca1e75f5", "96228efe333d44d4ae8535fa344997fa", "fd24c947d38b4ef4b90666828ce6494c", "9266e4c4e0814b50a61c7a88ca3d299c", "", "212336ddcfc8402eb50ef0a0f5638b48", "", "65c4f5c701dc4af683135a2614bab32a", "04916a9108964a1cbfce6caf04ea3b0e", "abd3b32a3e77430fb2eb546795ef0b48", "", "9aabbc25c52a4d1aba1f23335c1a6024", 
"38e28c15394443bab1a2c80a6b4b9936", "", "e4fd1a648c6f4f28aeca1deaff568af6", "3ccd47c34ea045deb80a6bd38bd5a027", "c25ae3452ed64c728eb186fe4038a0ff", "2c1149aaba46497186a73ce031a1471f", "", "761c81c8d12d424da36b9831c53864e6", "", "", "d3c9a1a8a0854e0f90168cc5ad3319d6", "a88a6f4407be423a887566ed37569c1d", "92382fa4c53b4374b431eb8415d182ea", "", "", "e5f25f57475640e4af8e5e405f788701", "", "", "", "9afe72cd3e0f467c87b458ef3adc31da", "643071562213451ea9cc8e0b0817b9c1", "1dbdaeb2ece046fe85d1ec8999498d8c", "", "68a5ce15db0d4c79a31c2106664b9ad0", "2ea4c5ae579441649dcb0f93abb4e23a", "3a0638e9f6af4061a48df3912cef7763", "b3d614eb82974d469e9003e085df2080", "", "a85e7bc6cf1441ce8dc6a0775b9e26f1", "", "f07ca66827b140718dff540fdb4af6d8", "2dd76813cc8a49d4baa1517a84270458", "92f6afa4920e4c8493cd60894043a4c6", "7c94f1298ada4e94b62011c8d8901d6a", "3b97b48cf0fd4ce79ff9193ce74c86a5"]} id="5bcZTqTqM4uo" outputId="2609263c-f153-4986-8489-3baf6cbf6c81" _, time_with_bp = run_recognizer(records_data_dict, graph, beam_threshold=2600) # + colab={"base_uri": "https://localhost:8080/"} id="ZLZD-yNegZ_9" outputId="addd9e53-6f9e-429b-fe87-e1ec7ece2d5f" wer_diff = wer_without_skips - wer_with_skips if wer_diff > 0: print(f"The modification improved the recognition quality " \ f"by {wer_diff:.2f}%.") else: print(f"Whoops, for some reason the recognition quality decreased " \ f"from {wer_without_skips}% to {wer_with_skips}%.") # + colab={"base_uri": "https://localhost:8080/"} id="-m01llqKM7BX" outputId="7f7e83fe-4922-4c34-b7be-3b9daca6c0a4" time_gain = time_without_bp / time_with_bp print(f"The recognition process has accelerated by {time_gain:.2f} times.") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Fourier Transforms import numpy as np import scipy.integrate import matplotlib.pyplot as plt import math # ## Part 1: The Discrete Fourier Transform # We’re about to make the transition from Fourier series to the Fourier transform. “Transition” is the # appropriate word, for in the approach we’ll take the Fourier transform emerges as we pass from periodic # to nonperiodic functions. To make the trip we’ll view a nonperiodic function (which can be just about # anything) as a limiting case of a periodic function as the period becomes longer and longer. # We're going to start by creating a pulse function. Let's start with the following pulse function: def pulseFunction(x): return 1/(3 + (x-20)**2) x = np.linspace(-10, 50, 200) plt.plot(x, pulseFunction(x)) plt.plot(np.zeros(100), np.linspace(0, 0.5, 100), "--") plt.plot(np.ones(100) *40, np.linspace(0, 0.5, 100), "--") plt.show() # ### Step 1: Periodic Pulse Function # Take the `pulseFunction` above and make it periodic. Give it a variable period length (we will eventually make this 40 as shown by the vertical dotted lines above). 
def periodicPulseFunction(x, period): """ x : the x values to consider period : the period of the function """ return pulseFunction(abs(x%period)) # Plot your `periodicPulseFunction` with a period of $40$ from $-100$ to $100$ and check that it is correctly ## TO DO: Plot your periodicPulseFunction with a period of 40 from x = -100 to x = 100 x = np.linspace(-100, 100, 1000) plt.plot(x, periodicPulseFunction(x,40)) plt.show() # ### Step 2: Define the Fourier Series # This function is neither odd nor even, so we're going to have to take into account both the the even coefficients $a_k$ and the odd coefficients $b_k$. # $$ f(x) = \sum\limits_{k=0}^{\infty} a_k cos\left(\frac{2\pi k x}{T}\right) + b_k sin\left(\frac{2\pi k x}{T}\right) $$ # Complete the `fourierSeriesSum` that calculates the summation described above. def fourierSeriesSum(k, ak, bk, x, period): """ Parameters: k : the maximum k value to include in the summation above ak : an array of length 'k' containing the even coefficients (from a_0 to a_(k-1)) bk : an array of length 'k' containing the odd coefficients (from b_0 to b_(k-1)) x : an array of the x values to consider period : the period of the function """ sum = 0 for i in range (k): sum += ak[i]*np.cos(2*np.pi*i*x/period)+bk[i]*np.sin(2*np.pi*i*x/period) return sum # ### Step 3: Define the Integrands # Because we have both even and odd terms, we're going to have two separate integrals: # # The integral to solve for the even terms: # $$ a_k = \frac{1}{T} \int\limits_{0}^{T} f(x, \text{period}) \cos\left(\frac{2\pi k x}{T} \right) dx$$ # # # # The integral to solve for the odd terms: # $$ b_k = \frac{1}{T} \int\limits_{0}^{T} f(x, \text{period}) \sin\left(\frac{2\pi k x}{T} \right) dx$$ def odd_integrand(x,f, k, period): """ Parameters: x: the x values to consider f: the function f(x, period) used in the integral k: the k value to use period: the period of f """ return f(x,period)*np.cos(2*math.pi*k*x/period) q = np.array([1,2,3,4,5]) #print (odd_integrand(periodicPulseFunction, 1,100,q)) def even_integrand(x, f, k, period): """ Parameters: x: the x values to consider f: the function f(x, period) used in the integral k: the k value to use period: the period of f """ return f(x,period)*np.sin(2*math.pi*k*x/period) # ### Step 4: Find the Fourier Coefficients # Ok! Now it's time to find the coefficients. This is the same process as last time: # 1. Initialize an $a_k$ and $b_k$ array # 2. Loop through all the $k$ values # 3. Find $a_k[i]$ and $b_k[i]$ where i $\in [0, k]$ # 4. Return $a_k$ and $b_k$ # # (At the end of your quad function, add "limit = 100" as an argument) def findFourierCoefficients(f, k, period): """ Parameters: f: the function to evaluate k: the maximum k value to consider period: the period of f """ ak = [] bk = [] for i in range (k): ak.append((1/period)*scipy.integrate.quad(odd_integrand,0,period, args = (f,i,period,),limit = 100 )[0]) bk.append((1/period)*scipy.integrate.quad(even_integrand,0,period, args = (f,i,period,),limit = 100 )[0]) return ak,bk # ### Step 5: Putting it all Together # Let's test it out! # + k = 100 period = 40 ak, bk = findFourierCoefficients(periodicPulseFunction, k, period) y = fourierSeriesSum(k, ak, bk, x, period) plt.plot(x, y) plt.title("Pulse Function Constructed from Fourier Series") plt.show() # - # ### Step 6: Analyzing the Signal # Let's visualize what the coeffcients look like. # Plot the even coefficients ($a_k$ versus $k$). 
# TO DO: Plot ak versus k k = np.linspace(0,100,100) plt.plot(k,ak) # Plot the odd coefficients ($b_k$ versus $k$). # TO DO: Plot bk versus k plt.scatter(k,bk) # ## Part 2: Application # ### Option 1 # Below I've imported and plotted a signal for you. Break down this signal into sines and cosines, and plot the coefficients ($a_k$ versus $k$ and $b_k$ versus $k$) xNoise, yNoise = np.loadtxt("signal.txt", unpack=True) plt.figure(figsize=(15, 5)) plt.plot(xNoise, yNoise) plt.show() def lagrangian_interpolation(x, a, fa, b, fb, c, fc): """ Fits a quadratic to points (a, f(a)), (b, f(b)), and (c, f(c)) and returns an approximation for f(x) for some value x between a and c from the equation of a quadratic. Parameters: x (float): the point of interest between a and b a (float): known x value fa (float): known f(a) value b (float): known x value (b > a) fb (float): known f(b) value c (float): known x value (c > b) fc (float): known f(c) value Returns: (float): an approximation of f(x) using linear interpolation """ return ((x - b) * (x - c)/((a - b) * (a - c)) * fa + (x - a) * (x - c) / ((b - a) * (b - c))*fb + (x - a)*(x - b) / ((c - a) * (c - b) ) * fc ) # + #Ignore this, it was a successful attempt to interpolate, but it failed becuase it was not compatible with the integrate function def periodicNoise(x, period): """ Returns a value from the periodic noise function """ x = x % period try: vals = [] for i in range (len(x)): val = -1 for j in range (len(xNoise)-3): if (x[i]>=xNoise[j] and x[i]<=xNoise[j+1]): val = j break if (val==-1): vals.append(lagrangian_interpolation(x[i],xNoise[-3],yNoise[-3],xNoise[-2],yNoise[-2],xNoise[-1],yNoise[-1])) else: vals.append(lagrangian_interpolation(x[i],xNoise[val],yNoise[val],xNoise[val+1],yNoise[val+1],xNoise[val+2],yNoise[val+2])) return vals except: val = 0 for i in range (len(xNoise)-3): if (x>=xNoise[i] and x<=xNoise[i+1]): val = i break if (val==-1): return (lagrangian_interpolation(x,xNoise[-3],yNoise[-3],xNoise[-2],yNoise[-2],xNoise[-1],yNoise[-1])) else: return (lagrangian_interpolation(x,xNoise[val],yNoise[val],xNoise[val+1],yNoise[val+1],xNoise[val+2],yNoise[val+2])) return vals xVal = np.linspace(0,20,1000); yVal = periodicNoise(xVal, xNoise[-1]) plt.figure(figsize=(15, 5)) plt.plot(xVal,yVal) # + xx = np.linspace(0,10*np.pi,1000) #makes the noise function periodic to be used later def periodicFunc (x,period): x = np.mod(x,period) x = (np.round(x/(5*np.pi/1000),0)).astype(int) return yNoise[x-1] #checking periodic noise function to see if accurate plt.figure(figsize=(15, 5)) plt.plot(xx, periodicFunc(xx,5*np.pi)) # + #Running the fourier transform k = 100 period = 5*np.pi ak, bk = findFourierCoefficients(periodicFunc, k, period) y = fourierSeriesSum(k, ak, bk, x, period) #graphing the results of the transform plt.figure(figsize=(15, 5)) plt.plot(xx, y) plt.title("Pulse Function Constructed from Fourier Series") plt.show() # - #plotting ak, the even function coefficients k = np.linspace(0,100,100) plt.figure(figsize=(15, 5)) plt.plot(k,ak) #plotting bk, the odd function coefficients plt.figure(figsize=(15, 5)) plt.plot(k,bk) #the signal seems to come from a frequency around 75 # ### Option 2 # Find a signal from real data, and find the cosines and sines values that comprise that signal. 
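# For either option, a plain FFT is a handy cross-check (a sketch, assuming the samples are uniformly
# spaced; demonstrated on the Option 1 signal loaded above): the dominant Fourier-series index found by
# hand should also appear as a peak in the FFT magnitude. The FFT bin index counts cycles over the whole
# record, so it is directly comparable to $k$ only when the record spans exactly one assumed period.

# +
spectrum = np.abs(np.fft.rfft(yNoise))
spectrum[0] = 0.0                      # ignore the DC offset
print("strongest component at bin", np.argmax(spectrum))

plt.figure(figsize=(15, 5))
plt.plot(spectrum)
plt.show()
# -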
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Regexs # # Up until now, to search in text we have used string methods find, startswith, endswith, etc. But sometimes you need more power. # # Regular expressions are their own little language that allows you to search through text and find matches with incredibly complex patterns. # # A regular expression, also referred to as "regex" or "regexp", provides a concise and flexible means for matching strings of text, such as particular characters, words, or patterns of characters. # # To use regular you need to import python's regex library `re` # https://docs.python.org/2/library/re.html import re # + # To run the examples we are going to use some of the logs from the # django project, a web framework for python django_logs = '''commit 722344ee59fb89ea2cd5b906d61b35f76579de4e Author: <> Date: Thu May 19 09:31:49 2016 -0400 Refs #24067 -- Fixed contenttypes rename tests failures on Oracle. Broke the initial migration in two to work around #25530 and added 'django.contrib.auth' to the available_apps to make sure its tables are also flushed as Oracle doesn't implement cascade deletion in sql_flush(). Thanks Tim for the report. commit 9fed4ec418a4e391a3af8790137ab147efaf17c2 Author: <> Date: Sat May 21 13:18:22 2016 -0400 Removed an obsolete comment about a fixed ticket. commit 94486fb005e878d629595942679ba6d23401bc22 Author: <> Date: Sat May 21 13:20:40 2016 +0200 Revert "Disable patch coverage checks" Mistakenly pushed to django/django instead of another repo This reverts commit 6dde884c01156e36681aa51a5e0de4efa9575cfd. commit 6dde884c01156e36681aa51a5e0de4efa9575cfd Author: <> Date: Sat May 21 13:18:18 2016 +0200 Disable patch coverage checks commit 46a38307c245ab7ed0b4d5d5ebbaf523a81e3b75 Author: <> Date: Fri May 20 10:50:51 2016 -0400 Removed versionadded/changed annotations for 1.9. commit 1915a7e5c56d996b0e98decf8798c7f47ff04e76 Author: <> Date: Fri May 20 09:18:55 2016 -0400 Increased the default PBKDF2 iterations. commit 97c3dfe12e095005dad9e6750ad5c5a54eee8721 Author: <> Date: Thu May 19 22:28:24 2016 -0400 Added stub 1.11 release notes. commit 8df083a3ce21ca73ff77d3844a578f3da3ae78d7 Author: <> Date: Thu May 19 22:20:21 2016 -0400 Bumped version; master is now 1.11 pre-alpha.''' # - # ## Searching # # The simplest thing you can do with regexs in python is search through text to see if there is a match. To do this you use the methods `search` or `match`. `match` only checks if it matches at the beginning of the string and `search` check the whole string. # # re.match(pattern, string) # re.search(pattern, string) print(re.match('a', 'abcde')) print(re.match('c', 'abcde')) print(re.search('a', 'abcde')) print(re.search('c', 'abcde')) print(re.match('version', django_logs)) print(re.search('version', django_logs)) if re.search('commit', django_logs): print("Someone has been doing work.") # ### TRY IT # Search for the word May in the django logs # # Special Characters # So far we can't do anything that you couldn't do with find, but don't worry. Regexs have many special characters to allow you to look for thing like the beginning of a word, whitespace or classes of characters. # # You include the character in the pattern. # # * ^ Matches the beginning of a line # * $ Matches the end of the line # * . 
Matches any character # * \s Matches whitespace # * \S Matches any non-whitespace character # * \* Repeats a character zero or more times # * \*? Repeats a character zero or more times (non-greedy) # * \+ Repeats a character one or more times # * +? Repeats a character one or more times (non-greedy) # * ? Repeats a character 0 or one time # * [aeiou] Matches a single character in the listed set # * [^XYZ] Matches a single character not in the listed set # * [a-z0-9] The set of characters can include a range # * {10} Specifics a match the preceding character(s) {num} number or times # * \d Matches any digit # * \b Matches a word boundary # # # **Hint** if you want to match the literal character (like $) as opposed to its special meaning, you would escape it with a `\` # + # Start simple, match any character 2 times print(re.search('..', django_logs)) # just to prove it works print(re.search('..', 'aa')) print(re.search('..', 'a')) print(re.search('..', '^%')) # - # to match a commit hash (numbers and letters a-f repeated) we can use a regex commit_pattern = '[0-9a-f]+' print(re.search(commit_pattern, django_logs)) # Let's match the time syntax time_pattern = '\d\d:\d\d:\d\d' time_pattern = '\d{2}:\d{2}:\d{2}' print(re.search(time_pattern, django_logs)) # ### TRY IT # Match anything between angled brackets < > # # Ignoring case # match and search both take an optional third argument that allows you to include flags. The most common flag is ignore case. # # re.search(pattern, string, re.IGNORECASE) # re.match(pattern, string, re.IGNORECASE) print(re.search('', django_logs)) print(re.search('', django_logs, re.IGNORECASE)) # ### TRY IT # search for 'django' in 'Both Django and Flask are very useful python frameworks' ignoring case # # Extracting Matches # Finding is only half the battle. You can also extract what you match. # # To get the string that your regex matched you can store the match object in a variable and run the group method on that # # m = re.search(pattern, string) # print m.group(0) # Let's match the time syntax time_pattern = '\d\d:\d\d:\d\d' m = re.search(time_pattern, django_logs) print(m.group(0)) # If you want to find all the matches, not just the first, you can use the findall method. It returns a list of all the matches # # re.findall(pattern, string) time_pattern = '\d\d:\d\d:\d\d' print(re.findall(time_pattern, django_logs)) # If you want to have only part of the match returned to you in findall, you can use parenthesis to set a capture point # # pattern = 'sads (part to capture) asdjklajsd' # print re.findall(pattern, string) # prints part to capture time_pattern = '(\d\d):\d\d:\d\d' hours = re.findall(time_pattern, django_logs) print(sorted(hours)) # + # you can capture more than one match time_pattern = '(\d\d):(\d\d):\d\d' times = re.findall(time_pattern, django_logs) print(times) # Unpacking the tuple in the first line for hours, mins in times: print("{} hr {} min".format(hours, mins)) # - # ### TRY IT # Capture the host of the email address (alphanumerics between @ and .com) **Hint** remember to escape the . in .com # ## Practice # There is a lot more that you can do, but it can feel overwhelming. The best way to learn is with practice. A great way to experiment is this website http://www.regexr.com/ You can put a section of text and see what regexs match patterns in your text. The site also has a cheatsheet for special characters. # + # Lets try some now # - # # Project: Doc Clerk # # Let's imagine you are working in a law office. 
You have millions of e-mails and other documents to go through to see what is relevant to the case. You are going to write a program to go though a file, check for key words (client's name, phone number, defendant's name) and print out the whole paragraph. It should not print any paragraphs with no relevant info. Paragraphs will be separated by an empty line. # # Your program should match the following items: # Gold E. Locks (case insensitive, E. or E) # Three bears or 3 bears # 571 209-4000 (with parens, dashes, or no spaces) # # # 0. Import re # 1. Initialize a variable called paragraph to be an empty list and a variable called found_match to false. # 2. Create a list of patterns to match and store in variable called patterns # 3. Read in test file 'evidence.txt'. # 4. For line in evidence: # a. check if it matches any of the patterns, if so set found_match to true # b. append line to paragraph # c. if line is empty (just a newline character) # - print paragraph if found_match is true **Hint** use the join method to print a string instead of a list # - reset paragraph to empty list and found_match to false # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline data = pd.read_csv('train.csv') data.head() fig,ax = plt.subplots() fig.set_size_inches(10,15) sns.heatmap(data.isnull(),cbar=False,cmap='magma') # Looks like we're missing two data point from Embarked some data from Age and a lot from Cabin # # Solution: # 1. Drop the two in Embarked # 2. Drop cabin # 3. Try to use linear regression to predict the age for the null ones # data = data[~data.Embarked.isnull()] data.drop('Cabin',axis=1,inplace=True) data.head() # I'm gonna make dummy variables for Pclass which is the fare class and Sex. Embarked will get the same treatment as well as it identifies the port at which the passenger embarked. I'll drop the first row of dummy variables because it can be inferred from the others pclass = pd.get_dummies(data.Pclass,drop_first=True) sex = pd.get_dummies(data.Sex,drop_first=True) embarked = pd.get_dummies(data.Embarked,drop_first=True) data = pd.concat([data,pclass,sex,embarked],axis=1) del pclass del sex del embarked data.head() # Sex, Pclass and Embarked are now redundant so I'll drop them. I'm also gonna drop Name and Ticket as they need further feature engineering to be useful. 
PasengerId gives no useful information so that'll be dropped as well data.drop(['Sex','Pclass','Embarked','Name','Ticket','PassengerId'],axis=1,inplace=True) data.head() # Now I'll split the data with non-null age values into a train and test set and try to learn the relationship between age and everything but Survived survived = data.Survived data.drop('Survived',inplace=True,axis=1) data_good = data[~data.Age.isnull()] data_good.head() from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression from regressors import stats X_train, X_test, y_train, y_test = train_test_split(data_good.drop('Age',axis=1), data_good.Age, test_size=0.2, random_state=101) X_train lm = LinearRegression() lm.fit(X_train,y_train) coeff_df = pd.DataFrame(lm.coef_,data_good.drop('Age',axis=1).columns,columns=['Coefficient']) coeff_df from sklearn import metrics predictions = lm.predict(X_test) print('MAE:', metrics.mean_absolute_error(y_test, predictions)) print('MSE:', metrics.mean_squared_error(y_test, predictions)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions))) # So the linear regression has a MAE of 9.8 years. I'm gonna fill the empty age values with the predictions from the linear regression model data_notgood = data[data.Age.isnull()] data_notgood.head() lm.predict(data_notgood.drop('Age',axis=1)) # It appears that the model is predicting negative values which are obviously wrong. This happens because the model does not respect the 0 bound. To address this, I'm gonna retrain the model using the natural log of the age and predict that. lm = LinearRegression() lm.fit(X_train,np.log(y_train)) coeff_df = pd.DataFrame(lm.coef_,data_good.drop('Age',axis=1).columns,columns=['Coefficient']) coeff_df from sklearn import metrics predictions = np.exp(lm.predict(X_test)) print('MAE:', metrics.mean_absolute_error(y_test, predictions)) print('MSE:', metrics.mean_squared_error(y_test, predictions)) print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions))) data_notgood['Age'] = np.exp(lm.predict(data_notgood.drop('Age',axis=1))) data.head() data = pd.concat([data_good,data_notgood],axis=0) data.sort_index(inplace=True) data.head() fig,ax = plt.subplots() fig.set_size_inches(10,15) sns.heatmap(data.isnull(),cbar=False,cmap='magma') # Now all null values have either been dropped or replaced by an "educated" guess, we can continue with training the logistic regression from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import cross_val_score from sklearn.metrics import classification_report lrm = LogisticRegression(solver='lbfgs',max_iter=1000) lrm = RandomForestClassifier(n_estimators=100) X_train, X_test, y_train, y_test = train_test_split(data, survived, test_size=0.2, random_state=101) lrm.fit(X_train,y_train) lrm.score(X_test,y_test) predictions = lrm.predict(X_test) print(classification_report(y_test,predictions)) data # Now we have a baseline in place, lets try to predict from the test dataset. For this, the same preprocessing steps will need to be applied to the test set. Lets examine this dataset as well data_test = pd.read_csv('test.csv') passenger_id = data_test.PassengerId data_test.info() # It seems there is one missing value for the Fare. 
Lets see if we can quickly fill that in by analyzing the fare's relationship to the Pclass sns.set_style('darkgrid') sns.boxplot(x="Pclass",y="Fare",data=data_test,hue='Sex',showfliers=False) # So it seems the sex and cabin class can be a good identifier for fare. I'll just replace the missing fare with the mean of the fare for that class/sex combination data_test[data_test.Fare.isnull()] data_test['Fare'].fillna(data_test[(data_test.Pclass == 3) & (data_test.Sex == "male")].Fare.mean(),inplace=True) data_test.drop('Cabin',axis=1,inplace=True) pclass = pd.get_dummies(data_test.Pclass,drop_first=True) sex = pd.get_dummies(data_test.Sex,drop_first=True) embarked = pd.get_dummies(data_test.Embarked,drop_first=True) data_test = pd.concat([data_test,pclass,sex,embarked],axis=1) data_test.drop(['Sex','Pclass','Embarked','Name','Ticket','PassengerId'],axis=1,inplace=True) data_test.head() data_test_good = data_test[~data_test.Age.isnull()] data_test_notgood = data_test[data_test.Age.isnull()] data_test_notgood['Age'] = np.exp(lm.predict(data_test_notgood.drop('Age',axis=1))) data_test = pd.concat([data_test_good,data_test_notgood],axis=0) data_test.sort_index(inplace=True) data_test.head() predictions_test = lrm.predict(data_test) out = pd.DataFrame(data = {"PassengerId": passenger_id, "Survived": predictions_test}) out.to_csv('submission.csv',index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext nb_black # source: # 1. Reddit, Analysing 2650 Artifacts and their Main/Sub stat distributions. [Data compiled from user submissions led by /u/Acheron-X], https://www.reddit.com/r/Genshin_Impact/comments/kn0r0s/analysing_2650_artifacts_and_their_mainsub_stat/ # 2. 
NGA, [数据讨论]《圣遗物数值学导论》 https://nga.178.com/read.php?tid=24270728 # + import numpy as np from enum import Enum from collections import namedtuple from tqdm import tqdm class Stat(int, Enum): DEF = 1 ATK = 2 HP = 3 ATK_PCT = 4 HP_PCT = 5 DEF_PCT = 6 EM = 7 ER = 8 # correct elem damage pcts ED = 9 # unwanted elem damage pcts ED_OTHER = 10 # Phys damage pcts PD = 11 # heal bonus pcts HB = 12 CR = 13 CD = 14 class ArtifactType(int, Enum): FLOWER = 101 FEATHER = 102 SANDS = 103 GOBLET = 104 CIRCLET = 105 main_stat_max = { Stat.ATK: 322, Stat.HP: 4780, Stat.ATK_PCT: 46.6, Stat.HP_PCT: 46.6, Stat.DEF_PCT: 58.3, Stat.EM: 187, Stat.ER: 51.8, Stat.ED: 46.6, Stat.ED_OTHER: 46.6, Stat.PD: 58.3, Stat.HB: 35.9, Stat.CR: 31.1, Stat.CD: 62.2, } sub_stat_ranges = { Stat.DEF: [16, 19, 21, 23], Stat.ATK: [14, 16, 18, 19], Stat.HP: [209, 239, 269, 299], Stat.ATK_PCT: [4.1, 4.7, 5.3, 5.8], Stat.HP_PCT: [4.1, 4.7, 5.3, 5.8], Stat.DEF_PCT: [5.1, 5.8, 6.6, 7.3], Stat.EM: [16, 19, 21, 23], Stat.ER: [4.5, 5.2, 5.8, 6.5], Stat.CR: [2.7, 3.1, 3.5, 3.9], Stat.CD: [5.4, 6.2, 7.0, 7.9], } stats_name = { "CN": { Stat.ATK: "攻击力", Stat.HP: "生命值", Stat.DEF: "防御力", Stat.ATK_PCT: "攻击力%", Stat.HP_PCT: "生命值%", Stat.DEF_PCT: "防御力%", Stat.EM: "元素精通", Stat.ER: "元素充能%", Stat.ED: "元素伤害%", Stat.ED_OTHER: "元素伤害%(歪)", Stat.PD: "物理伤害%", Stat.HB: "治疗加成%", Stat.CR: "暴击率%", Stat.CD: "暴击伤害%", ArtifactType.FLOWER: "生之花", ArtifactType.FEATHER: "死之羽", ArtifactType.SANDS: "时之沙", ArtifactType.GOBLET: "空之杯", ArtifactType.CIRCLET: "理之冠", } } # - # ## Replace the default numbers with your estimation # + # initialize parameters on drop rate, main/sub-stats distribution etc # most numbers here are based on estimation/guess # the pdf for the number of level 5 artifact per run default_num_lvl5_pdf = { 1: 0.9, 2: 0.1, } # the probability of an artifact with 3 initial substats default_p_substat_cnt_3 = 0.8 # the probability of dropping the correct set p_correct_set = 0.5 # the pdf for the artifact type default_artifact_type_pdf = { ArtifactType.FLOWER: 0.2, ArtifactType.FEATHER: 0.2, ArtifactType.SANDS: 0.2, ArtifactType.GOBLET: 0.2, ArtifactType.CIRCLET: 0.2, } # ratio between different substats default_substat_pdf = { Stat.DEF: 0.15, Stat.ATK: 0.15, Stat.HP: 0.15, Stat.ATK_PCT: 0.1, Stat.HP_PCT: 0.1, Stat.DEF_PCT: 0.1, Stat.EM: 0.09, Stat.ER: 0.09, Stat.CR: 0.08, Stat.CD: 0.08, } # main stat distribution for each type of artifact main_stats_pdfs = { ArtifactType.FLOWER: {Stat.HP: 1.0}, ArtifactType.FEATHER: {Stat.ATK: 1.0}, ArtifactType.SANDS: { Stat.HP_PCT: 0.28, Stat.DEF_PCT: 0.28, Stat.ATK_PCT: 0.25, Stat.EM: 0.10, Stat.ER: 0.09, }, ArtifactType.GOBLET: { Stat.HP_PCT: 0.19, Stat.DEF_PCT: 0.21, Stat.ATK_PCT: 0.21, Stat.PD: 0.04, Stat.EM: 0.02, # 0.33 shared evenly between 5 elements Stat.ED: 0.33 / 5, Stat.ED_OTHER: 0.33 * 4 / 5, }, ArtifactType.CIRCLET: { Stat.HP_PCT: 0.22, Stat.DEF_PCT: 0.24, Stat.ATK_PCT: 0.22, Stat.HB: 0.07, Stat.EM: 0.05, Stat.CR: 0.10, Stat.CD: 0.10, }, } # + Artifact = namedtuple("Artifact", ["ttype", "main_stat", "substats"]) def normalize(pdf_p): return pdf_p / np.sum(pdf_p) def prepare_pdf(pdf_dict): items = [] pdf = [] for k, v in pdf_dict.items(): items.append(k) pdf.append(v) pdf = np.array(pdf) pdf = normalize(pdf) return items, pdf class ArtifactGenerator(object): def __init__(self, ttype, main_stats_pdf, substat_pdf=None, p_substat_cnt_3=None): self.ttype = ttype self.main_stats, self.main_stats_pdf = prepare_pdf(main_stats_pdf) if substat_pdf is None: substat_pdf = default_substat_pdf self.sub_stats, self.sub_stats_pdf 
= prepare_pdf(substat_pdf) if p_substat_cnt_3 is None: self.p_substat_cnt_3 = default_p_substat_cnt_3 def gen(self): # gen main stat main_stat = np.random.choice(self.main_stats, p=self.main_stats_pdf) num_init_substats = 3 if np.random.rand() < self.p_substat_cnt_3 else 4 substats = self.gen_substats( main_stat=main_stat, num_substats=5 + num_init_substats, ) return Artifact(ttype=self.ttype, main_stat=main_stat, substats=substats,) def gen_substats(self, main_stat, num_substats): # first draw the four types with rejection sampling substat_set = set() while len(substat_set) < min(4, num_substats): substat = np.random.choice(self.sub_stats, p=self.sub_stats_pdf) if substat == main_stat or substat in substat_set: continue substat_set.add(substat) # fill the numbers for each substats num_substat_types = len(substat_set) remaining_cnt = num_substats - num_substat_types upgrade_cnt = ( np.random.multinomial( remaining_cnt, [1 / num_substat_types] * num_substat_types, ) + 1 ) substats = { ttype: np.sum( np.random.choice(sub_stat_ranges[ttype], size=cnt, replace=True) ) for cnt, ttype in zip(upgrade_cnt, substat_set) } return substats class Simulator(object): def __init__(self, generators, artifact_type_pdf=None, num_lvl5_pdf=None): self.generators = generators if artifact_type_pdf is None: artifact_type_pdf = default_artifact_type_pdf self.artifact_types, self.artifact_type_pdf = prepare_pdf(artifact_type_pdf) if num_lvl5_pdf is None: num_lvl5_pdf = default_num_lvl5_pdf self.num_lvl5s, self.num_lvl5_pdf = prepare_pdf(num_lvl5_pdf) def run(self): cnt = np.random.choice(self.num_lvl5s, p=self.num_lvl5_pdf) artifacts = [] for _ in range(cnt): if np.random.rand() >= p_correct_set: continue ttype = np.random.choice(self.artifact_types, p=self.artifact_type_pdf) artifacts.append(self.generators[ttype].gen()) return artifacts def run_batch(self, num_runs): artifacts = {} for _ in tqdm(range(num_runs)): for v in self.run(): if v.ttype not in artifacts: artifacts[v.ttype] = [] artifacts[v.ttype].append(v) return artifacts def display(artifact, lang="CN"): print("===================") print( "{}:\n{:15}:\t{:3.1f}".format( stats_name[lang][artifact.ttype], stats_name[lang][artifact.main_stat], main_stat_max[artifact.main_stat], ) ) print("-------------------") for t, v in artifact.substats.items(): print("{:15}:\t{:3.1f}".format(stats_name[lang][t], v,)) print("===================") # - # ### how to compare between different artifacts generators = { ttype: ArtifactGenerator(ttype=ttype, main_stats_pdf=main_stats_pdf,) for ttype, main_stats_pdf in main_stats_pdfs.items() } simulator = Simulator(generators=generators) # # simulate the grind! 
num_days = 365 num_runs = 180 * num_days // 20 print("Total number of trails: {}".format(num_runs)) artifacts = simulator.run_batch(num_runs=num_runs) class ArtifactScore(object): # using the main stat max to normalize the scores stat_weights = { Stat.ATK: 1.0 / 322, Stat.HP: 0.0 / 4780, Stat.DEF: 0.0 / 187, # no sure here Stat.ATK_PCT: 3.0 / 46.6, Stat.HP_PCT: 0.0 / 46.6, Stat.DEF_PCT: 0.0 / 58.3, Stat.EM: 0.0 / 187, Stat.ER: 0.0 / 51.8, Stat.ED: 10.0 / 46.6, Stat.ED_OTHER: 0.0 / 46.6, Stat.PD: 0.0 / 58.3, Stat.HB: 0.0 / 35.9, Stat.CR: 10.0 / 31.1, Stat.CD: 10.0 / 62.2, } def __init__(self, stat_weights=None): if stat_weights is not None: self.stat_weights = stat_weights def __call__(self, artifact): score = 0 score += main_stat_max[artifact.main_stat] * self.stat_weights.get( artifact.main_stat, 0.0 ) for t, v in artifact.substats.items(): score += v * self.stat_weights.get(t, 0.0) return score # sort the artifact within each type comparator = ArtifactScore() for ttype in artifacts: artifacts[ttype].sort(key=comparator, reverse=True) # + lang = "CN" topk = 3 for ttype in { ArtifactType.FLOWER, ArtifactType.FEATHER, ArtifactType.SANDS, ArtifactType.GOBLET, ArtifactType.CIRCLET, }: print("Got {} {}.".format(len(artifacts[ttype]), stats_name[lang][ttype])) for i in range(topk): display(artifacts[ttype][i], lang=lang) print("\n\n") # - # ## how many runs do I need to get a circlet with CR/CD as main and CD/CR as good substat # + num_runs_needed = [] num_trials = 500 scorer1 = ArtifactScore( stat_weights = { Stat.CD: 1.0 } ) # # cd in substats increased more than 3 times substat_thd = 20 for _ in tqdm(range(num_trials)): cnt = 0 found = False while not found: cnt += 1 for v in simulator.run(): if v.ttype != ArtifactType.CIRCLET or v.main_stat != Stat.CR: continue if scorer1(v) > substat_thd: num_runs_needed.append(cnt) found = True break # + num_runs_needed.sort() print("10-percentile: {}".format(num_runs_needed[num_trials // 10])) print("50-percentile: {}".format(num_runs_needed[num_trials // 2])) print("90-percentile: {}".format(num_runs_needed[num_trials - num_trials // 10])) # - # ## how many runs do I need to get a SANDS with ATK_PCT as main and CD&CR as good substat # + num_runs_needed = [] num_trials = 500 scorer1 = ArtifactScore( stat_weights = { Stat.CD: 1.0, Stat.CR: 2.0, } ) # # cd & cr in substats increased more than 3 times substat_thd = 28 for _ in tqdm(range(num_trials)): cnt = 0 found = False while not found: cnt += 1 for v in simulator.run(): if v.ttype != ArtifactType.SANDS or v.main_stat != Stat.ATK_PCT: continue if scorer1(v) > substat_thd: num_runs_needed.append(cnt) found = True break # + display(v) num_runs_needed.sort() print("10-percentile: {}".format(num_runs_needed[num_trials // 10])) print("50-percentile: {}".format(num_runs_needed[num_trials // 2])) print("90-percentile: {}".format(num_runs_needed[num_trials - num_trials // 10])) # - # ## how many runs do I need to get a GOBLET with Elem Damage as main and CD&CR as good substat # + num_runs_needed = [] num_trials = 500 scorer1 = ArtifactScore( stat_weights = { Stat.CD: 1.0, Stat.CR: 2.0, } ) # # cd & cr in substats increased more than 3 times substat_thd = 28 for _ in tqdm(range(num_trials)): cnt = 0 found = False while not found: cnt += 1 for v in simulator.run(): if v.ttype != ArtifactType.GOBLET or v.main_stat != Stat.ED: continue if scorer1(v) > substat_thd: num_runs_needed.append(cnt) found = True break # + display(v) num_runs_needed.sort() print("10-percentile: {}".format(num_runs_needed[num_trials // 10])) 
print("50-percentile: {}".format(num_runs_needed[num_trials // 2])) print("90-percentile: {}".format(num_runs_needed[num_trials - num_trials // 10])) # - # ## how many runs do I need to get a FLOWER with CD&CR as good substat # + num_runs_needed = [] num_trials = 500 scorer1 = ArtifactScore( stat_weights = { Stat.CD: 1.0, Stat.CR: 2.0, } ) # # cd & cr in substats increased more than 3 times substat_thd = 28 for _ in tqdm(range(num_trials)): cnt = 0 found = False while not found: cnt += 1 for v in simulator.run(): if v.ttype != ArtifactType.FLOWER: continue if scorer1(v) > substat_thd: num_runs_needed.append(cnt) found = True break # + display(v) num_runs_needed.sort() print("10-percentile: {}".format(num_runs_needed[num_trials // 10])) print("50-percentile: {}".format(num_runs_needed[num_trials // 2])) print("90-percentile: {}".format(num_runs_needed[num_trials - num_trials // 10])) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Scatter Plot import numpy as np import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('data/circle_data.csv', sep=';') # delimiter atau separator terganmtung data raw df.head() x = df.x y = df.y rad = df.radius plt.scatter(x, y) plt.scatter(x, y) plt.axis("equal"); plt.scatter(x, y, c='r') # merubah warna plt.axis("equal"); plt.scatter(x, y, c='r', s=10) # merubah size plt.axis("equal"); plt.scatter(x, y, c='r', s=rad*50) # merubah size berdasarkan radius, makin ke pusat makih kecil plt.axis("equal"); plt.scatter(x, y, c=rad, s=rad*50, cmap='seismic') plt.axis("equal"); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.3 64-bit (''base'': conda)' # name: python3 # --- # # 2. Creating a synthetic Q&A dataset # We use [`davinci-instruct-beta-v2`](https://beta.openai.com/docs/engines/instruct-series-beta), a model specialized in following instructions, to create questions based on the given context. Then we also use [`davinci-instruct-beta-v2`](https://beta.openai.com/docs/engines/instruct-series-beta) to answer those questions, given the same context. # # This is expensive, and will also take a long time, as we call the davinci engine for each section. You can simply download the final dataset instead. # # We're using the dataset created using the [previous notebook](olympics-1-collect-data.ipynb) # ## 2.1 Read in the data, and create a context # Create a context by concatenating the title, the heading and the content of that section import pandas as pd df = pd.read_csv('olympics-data/olympics_sections.csv') df['context'] = df.title + "\n" + df.heading + "\n\n" + df.content df.head() # ## 2.2 Create questions based on the context # Use davinci-instruct to generate a number of plausible questions relating to the Wikipedia section contents. # # Note: We have used temperature=0, but it may be beneficial to experiment with a higher temperature to get a higher diversity of questions. 
# # **WARNING: This step will last a long time, and consume a lot of tokens, as it calls davinci-instruct for every section to generate a number of questions.** # + import openai def get_questions(context): try: response = openai.Completion.create( engine="davinci-instruct-beta-v2", prompt=f"Write questions based on the text below\n\nText: {context}\n\nQuestions:\n1.", temperature=0, max_tokens=257, top_p=1, frequency_penalty=0, presence_penalty=0, stop=["\n\n"] ) return response['choices'][0]['text'] except: return "" df['questions']= df.context.apply(get_questions) df['questions'] = "1." + df.questions print(df[['questions']].values[0][0]) # - # The prompt is designed to generate a number of questions. Example questions above were generated based on the summary section of the 2020 Summer Olympics page. # # We can observe that the questions 3 and 5 above repeat. Sometimes the generated questions could be ambiguous without the context. We will show that even despite these limitations we can create a successful model. print(df.content.values[0]) # ## 2.3 Create answers based on the context # Use davinci-instruct to answer the questions given the relevant Wikipedia section contents # # Note: We have used temperature=0, but it may be beneficial to experiment with a higher temperature to get a higher diversity of questions. # # **WARNING: This step will last a long time, and consume a lot of tokens, as it calls davinci-instruct for every section to answer all the questions.** # + def get_answers(row): try: response = openai.Completion.create( engine="davinci-instruct-beta-v2", prompt=f"Write questions based on the text below\n\nText: {row.context}\n\nQuestions:\n{row.questions}\n\nAnswers:\n1.", temperature=0, max_tokens=257, top_p=1, frequency_penalty=0, presence_penalty=0 ) return response['choices'][0]['text'] except Exception as e: print (e) return "" df['answers']= df.apply(get_answers, axis=1) df['answers'] = "1." + df.answers df = df.dropna().reset_index().drop('index',axis=1) print(df[['answers']].values[0][0]) # - # These are the answers to the questions above based on the context around the host city selection. # # We can see that answers 3-5 contain the correct answer, but instead of answering the question directly, the answer is a verbatim extraction. Despite these occasional lower quality answers, we will show that the model can learn the task reasonably well, given a high number of examples. # ## 2.4 Save the Olympics Q&A dataset based on Wikipedia sections # We save the file for use in the [next notebook](olympics-3-train-qa.ipynb) df.to_csv('olympics-data/olympics_qa.csv', index=False) # ## 2.5 Search file # We create a search file ([API reference](https://beta.openai.com/docs/api-reference/files/list)), which can be used to retrieve the relevant context when a question is asked. # # + df = df[df.tokens<2000] df[['context', 'tokens']].rename(columns={'context':'text','tokens':'metadata'}).to_json('olympics-data/olympics_search.jsonl', orient='records', lines=True) search_file = openai.File.create( file=open("olympics-data/olympics_search.jsonl"), purpose='search' ) olympics_search_fileid = search_file['id'] # - # ## 2.6 Answer questions based on the context provided # # We will use a simple implementation of the answers endpoint. 
This works by simply using the [/search endpoint](https://beta.openai.com/docs/api-reference/searches), which searches over an indexed file to obtain the relevant sections which can be included in the context, following by a question and answering prompt given a specified model. from answers_with_ft import create_context, answer_question print(create_context("Where did women's 4 x 100 metres relay event take place during the 2020 Summer Olympics?", olympics_search_fileid, max_len=400)) answer_question(olympics_search_fileid, "davinci-instruct-beta-v2", "Where did women's 4 x 100 metres relay event take place during the 2020 Summer Olympics?") # After we fine-tune the model for Q&A we'll be able to use it instead of [`davinci-instruct-beta-v2`](https://beta.openai.com/docs/engines/instruct-series-beta), to obtain better answers when the question can't be answered based on the context. We see a downside of [`davinci-instruct-beta-v2`](https://beta.openai.com/docs/engines/instruct-series-beta), which always attempts to answer the question, regardless of the relevant context being present or not. (Note the second question is asking about a future event, set in 2024.) answer_question(olympics_search_fileid, "davinci-instruct-beta-v2", "Where did women's 4 x 100 metres relay event take place during the 2048 Summer Olympics?", max_len=1000) # We can see that davinci has a tendency to answer the question, even if the question can't be answered given the context provided. Note the question asked regarding 2048 Summer Olympics, which didn't happen yet, and the retrieved content has only returned results for 2020. # ## 2.7 (Optional) Investigation into how likely the search endpoint is to return the relevant context def check_context(title, heading, question, max_len=1800, search_model='ada', max_rerank=10): """ Evaluate the performance of the search model in retrieving the correct context Parameters ---------- title: str The title of the Wikipedia page heading: str The heading of the Wikipedia section qusetion: str The question max_len: int The maximum length of the context search_model: str The search model to use - `ada` is most cost effective max_rerank: int The maximum number of reranking documents to use the search model on Returns ------- rank: int The rank of the correct context token_length: int The number of tokens needed to obtain the correct context """ try: results = openai.Engine(search_model).search( search_model=search_model, query=question, max_rerank=max_rerank, file=olympics_search_fileid, return_metadata=True ) index=-1 returns = [] cur_len = 0 for result in results['data']: cur_len += int(result['metadata']) + 4 # we add 4 tokens for the separator `\n\n###\n\n` if cur_len > max_len: break returns.append(result['text']) res = result['text'].split('\n') if res[0] == title and res[1] == heading: index = len(returns) - 1 break return index, cur_len except Exception as e: #print (e) return [] print(check_context("Athletics at the 2020 Summer Olympics – Women's 4 × 100 metres relay", "Summary", "Where did women's 4 x 100 metres relay event take place during the 2020 Summer Olympics?", max_len=10000)) # We utilize the generated questions based on context to estimate how often we can retrieve the original context. These questions are noisy, so this is not a perfect estimate. # # Our questions and answers are prefixed with numbered bullet points, however due to the way they were generated, they are missing the first number, hence we add "1." to the list of questions (and answers). 
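# To make the "1." prefix handling concrete, here is a tiny illustrative snippet (the sample text is invented, not taken from the dataset): it turns a numbered block like the stored `questions` string back into a plain Python list.
# +
numbered = "1. Where were the 2020 Summer Olympics held?\n2. Which country topped the medal table?"
question_list = [line.split(". ", 1)[1] for line in numbered.split("\n") if ". " in line]
print(question_list)
# -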
# # We calculate the rank of the section retrieved using ada search, and the number of tokens in the context needed to retrieve the relevant section in full. ada_results = df.apply(lambda x: [ check_context( x.title, x.heading, q[3:], # remove the number prefix max_len=1000000, # set a large number to get the full context search_model='ada', max_rerank=200, ) for q in (x.questions).split('\n') # split the questions if len(q) >10 # remove the empty questions ], axis=1) ada_results.head() out = pd.concat([ada_results], axis=1) out.columns = ['ada'] out.to_csv('olympics-data/search_engine_results.csv') # + def expand_lists(out): """ Expand a pandas series containing lists into a series, where each list element becomes a value on its own Input is a row per paragraph, which has multiple questions Output is a row per question """ cols = [pd.DataFrame(out[name].tolist()).stack().reset_index(level=1, drop=True).rename(name) for name in out.columns] return pd.concat(cols, axis=1) out_expanded = expand_lists(out) out_expanded['rank'] = out_expanded.ada.apply(lambda x: x[0] if x != [] else -2) out_expanded['tokens'] = out_expanded.ada.apply(lambda x: x[1] if x != [] else -2) # - within_2k = (out_expanded.tokens < 2000).mean() print(f"{within_2k*100:.1f}% of relevant paragraphs are retrieved within the first 2k tokens") # The relevant context can be obtained 74% of the time on this dataset outside_200 = (out_expanded['rank'] == -1).mean() print(f"{outside_200*100:.1f}% of relevant paragraphs are not retrieved within the first 200 results") # 7.4% of the time, this is due to the keyword search part of the search algorithm not retrieving the relevant context within the first 200 results. # 18.3% of the time this is due to the semantic search not placing the relevant context within the first 2000 tokens. # + import matplotlib.pyplot as plt # plot a histogram, and add axis descriptions and title out_expanded[(out_expanded['rank'] >=0)&(out_expanded['rank'] <30)]['rank'].hist(bins=29) plt.xlabel('rank') plt.ylabel('count') plt.title('Histogram of ranks of retrieved paragraphs') plt.show() # - out_expanded[(out_expanded.tokens>=0)&(out_expanded.tokens < 2000)]['tokens'].hist(bins=29) plt.xlabel('tokens') plt.ylabel('count') plt.title('Histogram of the number of minimum tokens needed') plt.show() # We can observe that the context is most likely to be returned as one of the first results, and most likely to be returned within the first 200-500 tokens. # normalized value_counts out_expanded['rank'].value_counts(normalize=True).sort_index()[:13] # probabilities of the relevant context being returned at each rank. (-2 means a processing error, -1 means the rank is >200) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python Functions # ## What is a function? # # * A function is a block of organized, reusable code that is used to perform a single, related action. # * Single, organized, related always ? 
:) # # # ### DRY - Do not Repeat Yourself principle # # * *Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.* # http://wiki.c2.com/?DontRepeatYourself # # * Contrast WET - We Enjoy Typing, Write Everything Twice, Waste Everyone's Time print("I want to go eat") print("I am going to order food") # Here we define our first function # order_food is more Pythonic style than orderFood which is camelCase def order_food(): print("I want to go eat") print("I am going to order food") # so i've defined my function # yet I have not run it order_food() order_food() def going_out(): print("Let's go out!") order_food() print("Let's run!") order_food() going_out() def place_order(dish): print(f"Hello waiter!") print(f"I'd like to order {dish}") print(f"Let's eat {dish.upper()}!") place_order("potatoes") dessert = "Ice cream" place_order(dessert) food_list = ["Beet soup", "Salad", "Meat and potatoes", "Ice cream"] for food in food_list: place_order(food) food, food_list, dessert # Passing parameters(arguments) def add(a, b): # you'd have an opportunity to verify, validate before using + print(a+b) # returns None add(4,6) add(9,233) add("Hello ","Riga") add([1,2,7],list(range(6,12))) # what do we do if we want to modify some value, we could have global value which modify.. # but modifying global values generally can get messy def mult(a, b): result = a*b # result is just a name of inner local variable could be anything print(f"Look Ma! I am multiplying {a} * {b} which is {result}") return result # with no return the default return is None mult(5,9) add(5,9) t1 = mult(10,5) t2 = add(10,5) t1,t2 'soup' in food_list food_list def findNeedle(needle, mylist): for item in mylist: if needle in item: print(f"Eureka! Found {needle} in {item}") return item # so this fun will return first found item which contains needle mysoup = findNeedle('soup', food_list) mysoup def addFood(food, mylist): # check for poison, give it to cour jester to try it out mylist.append(food) # remember append is IN PLACE meaning you modify mylist food = "Strawberries" # this will not matter outside the function, only for local use mylist.append(food) # remember append is IN PLACE meaning you modify mylist mylist = ["Diet soda", "Beyond Meat burger"] # remember our variables are like aliases, like shortcuts, # so here we changed mylist to point a completely new list and are not point to incoming mylist anymore return mylist #not strictly necessary because well mylist will be modified addFood(dessert, food_list) # this returns the modified list as well dessert food_list def add_food(food, mylist): # check our food # if we do not want to modify our list we could do this # this is functional style new_list = mylist + [food] # so this will not modify mylist # do more stuff with new_list return new_list # so we will get a new list but old list will stay unmodifed # i can make all values default def make_cocktail(soda="tomato juice", alcohol="Vodka", mixer="Glass"): print(f"Mixing {soda} with {alcohol} in a {mixer}") return f"{soda}X{alcohol}" make_cocktail("coke", "rum", "shaker") make_cocktail("tonic", "gin") # i can be lazy i can use glass as default make_cocktail() # all values are default make_cocktail(alcohol="Grappa") # so I can skip some default values new_list = add_food("Pancakes", food_list) dessert new_list food_list addFood("crab legs", food_list) food_list food_list.count("Salad") addFood("Salad", food_list).count("Salad") addFood(555, food_list) food_list result = mult(4,6) 
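# mult() both prints its message and returns the product, so the returned value can be captured in a variable and reused below.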
print(result) print(mult(5,7)) print(mult([3,6],4)) print(mult("Gunta ", 4)) help(mult) mult(mult(2,3), mult(5,7)) def sub(a, b): print(a-b) return(a-b) sub(20, 3) result = 0 # + # Avoid this, more of an anti-pattern def add2(a,b): global result result += a+b # many calculations print(result) add2(3,6) # + # Avoid this, more of an anti-pattern def addResult(a, b, result): result += a+b # same result = result + (a+b) # many calculations print(result) return result # - result = addResult(5,10, result) print(result) def add3(a,b,c): print(a+b+c) return(a+b+c) add3(13,26,864) print(add3(list(range(5,10)), [1,3,6], [5,'VVVV'])) result = add3("A","BRACA","DABRA") result def isPrime(num): ''' Super simple method of checking for prime. ''' for n in range(2,num): #How could we optimize this? if num % n == 0: print(f'{num} is not prime, it divides by {n}') return False else: # runs when no divisors found print(f'{num} is prime') return True print(isPrime(53)) print(isPrime(51)) print(isPrime(59)) isPrime(10) def isPrimeO(num): ''' Faster method of checking for prime. ''' if num % 2 == 0 and num > 2: return False for i in range(3, int(num**0.5) + 1, 2): ## notice we only care about odd numbers and do not need to check past sqrt of num if num % i == 0: return False return True isPrimeO(23) # ## Jupyter magic # * *%%HTML* lets you render cell as HTML # * *%%time* times your cell operation, *%time* times your single line run time # * *%%timeit* runs your cell multiple time and gives you average # # ### Magic docs: http://ipython.readthedocs.io/en/stable/interactive/magics.html # %timeit isPrimeO(100001) # %timeit isPrime(100001) # + # Why are the tests not comparable? # Hint: What is different about the function outputs? # - def isPrimeSimple(num): ''' Super simple method of checking for prime. ''' for n in range(2,num): #How could we optimize this? if num % n == 0: # print(f'{num} is not prime, it divides by {n}') return False else: # runs when no divisors found # print(f'{num} is prime') return True isPrimeSimple(100001) # %timeit isPrimeSimple(100001) max(3,7,2) # notice that max function works with variable argument count(could 2 could 1000) or more max(5,2,7,222,1000, -555) def getLargest(a,b,c): result = 0 if a > b: print("Aha a is largest",a) result = a else: print("Aha b is largest",b) result = b if c > result: print("Hmm c is the largest of them all", c) result = c return result getLargest(-33,-455, -555) getLargest(333,0,500) 5 > 3 > 2 3 > 2 > 6 # with import we can use other libraries import random random.random() random.randrange(2) def guessnum(): ''' Plays the number guessing game ''' secret = random.randrange(100) #print(secret) x=-1 #Why did we need this declaration? How could we change the code to not require this assignment? while x != secret: x = int(input("Enter an integer please! ")) if x > secret: print("your number is too large") elif x < secret: print("your number is too small") elif x == 555: print("Secret Exit") break else: print("YOU WON!") print(f"secret number is {secret}") break guessnum() def guessnumCorr(): ''' Plays the number guessing game ''' secret = random.randrange(100) #print(secret) x=-1 #Why did we need this declaration? How could we change the code to not require this assignment? while x != secret: x = int(input("Enter an integer please! 
")) if x == 555: print("Secret Exit") break elif x > secret: print("your number is too large") elif x < secret: print("your number is too small") else: print("YOU WON!") print(f"secret number is {secret}") break guessnumCorr() guessnum() # + ## Possible improvements, count how many tries it took to play the game # - def lazypow(a, b=2): '''Returns a taken to the power of b b default is 2''' return(a**b) lazypow(5) lazypow(5,3) print(lazypow(3,4)) print(lazypow(11)) #Chaining function calls print(lazypow(mult(2,6))) print(lazypow(mult(2,6), 3)) 12**3 print(lazypow(mult(3,5), 4)) #Returning multiple values def multdiv(a=6,b=3): '''Returns two values as a tuple!: 1. multiplication of arguments 2. a/b ''' return a*b, a/b res = multdiv() print(res) type(res) result = multdiv(4,3) result type(result) result[0] result[1] result = None mytuple = tuple(range(1,11)) mytuple mytuple[::-1] mytuple[3:7:2] mylist = list(mytuple) mylist print(multdiv()) print(multdiv(12)) print(multdiv(b=4)) print(multdiv(15,3)) # we could just return two values separately Create a lazybuzz function which takes four arguments with default values of 3,5 , 1 and 100 representing the two divisors the beggining and end # # Side effects # # In computer science, a function or expression is said to have a side effect if it modifies some state outside its scope or has an observable interaction with its calling functions or the outside world besides returning a value. # * Ideal (Platonic?) function has none, but not always possible(input/output, globals) # * Functional programming style strives towards this ideal, but real life is mixture of styles # + # %%time import time #this time library has nothing to do with %%time Jupyter command def hello(): print("HW") time.sleep(.100) print("Awake") hello() hello() # - ##Built-in Functions abs() dict() help() min() setattr() all() dir() hex() next() slice() any() divmod() id() object() sorted() ascii() enumerate() input() oct() staticmethod() bin() eval() int() open() str() bool() exec() isinstance() ord() sum() bytearray() filter() issubclass() pow() super() bytes() float() iter() print() tuple() callable() format() len() property() type() chr() frozenset() list() range() vars() classmethod() getattr() locals() repr() zip() compile() globals() map() reversed() __import__() complex() hasattr() max() round() delattr() hash() memoryview() set() # ### More info on builtin functions: https://docs.python.org/3/library/functions.html # # Usage of *args # # *args and **kwargs are mostly used in function definitions. *args and **kwargs allow you to pass a variable number of arguments to a function. What does variable mean here is that you do not know before hand that how many arguments can be passed to your function by the user so in this case you use these two keywords. *args is used to send a non-keyworded variable length argument list to the function. 
# # # # + # you can pass multiple arguments in a function without specifying them in advance def test_var_args(f_arg="Valdis", *argv): print("first normal arg:", f_arg) for arg in argv: print("another arg through *argv :", arg) test_var_args('yasoob','python','eggs','test') # - test_var_args("Valdis", "RTU") test_var_args("Valdis") # this will not work unless i made f_arg a default test_var_args() print(5, dessert, food) # so print takes multiple arguments # Write a function to return result of multiplying ALL arguments # If no arguments given function should return 1 def multMany(*argv): result = 1 for num in argv: result *= num # this is the same as result = result * num return result multMany() multMany(1,3,5,353,2) # #Usage of **kwargs # # **kwargs allows you to pass keyworded variable length of arguments to a function. You should use **kwargs if you want to handle named arguments in a function. # edict = {} edict.items() def greetMe(**kwargs): if kwargs is not None: for key, value in kwargs.items(): print(f"{key} == {value}") greetMe(name="Valdis",hobby="biking",work="programming") def defaultFun(a=6): print(a) defaultFun(333) defaultFun() # ## Homework Problems # Easy # Write a function to calculate volume for Rectangular Cuboid (visas malas ir taisnsturas 3D objektam) def getRectVol(l,w,h): ''' Gets the volume of Cuboid l - length w - widght h - height ''' return l * w * h #You should be returning something not None! print(getRectVol(2,5,7) == 70) print(getRectVol(0,5,7) == 0) getRectVol() # + # Medium # Write a function to check if string is a palindrome # ignore whitespace and capital letters def isPalindrome(s): ''' A palindrome is a word, number, phrase, or other sequence of characters which reads the same backward as forward, such as madam, racecar ''' # lets do a bit of preprocessing first clean_text = s.lower().replace(" ", "") # could use upper doesnt matter # this clean_text will live only inside the function return clean_text == clean_text[::-1] # so we compare string to its reverse # test your function, you do not touch these test cases print(isPalindrome("Abba") == True) print(isPalindrome("madam") == True) print(isPalindrome("madamm") == False) print(isPalindrome("race car") == True) print(isPalindrome('alusariirasula') == True) print(isPalindrome('normaltext') == False) # - dir("string") string.ascii_lowercase # + # One liner is possible! Okay to do it a longer way # Hints: dir("mystring") for string manipulation(might need more than one method) # Also remember one "unique" data structure we covered import string print(string.ascii_lowercase) def isPangram(mytext, a=string.ascii_lowercase): ''' ''' print(mytext) return None # here it should return True or False # assert(isPangram('dadfafd') == False) # assert(isPangram("The quick brown fox jumps over the lazy dog") == True) # assert(isPangram("The five boxing wizards jump quickly") == True) # - isPangram('badac') def isAbigger(a,b): #Anti-pattern if a > b: return True else: return False 5 > 6 # We can check Truth in a single line def isAbigger2(a,b): return a > b # # Small Town Population Exercise # # From: https://www.codewars.com/kata/5b39e3772ae7545f650000fc # # In a small town the population is p0 = 1000 at the beginning of a year. The population regularly increases by 2 percent per year and moreover 50 new inhabitants per year come to live in the town. How many years does the town need to see its population greater or equal to p = 1200 inhabitants? 
# # At the end of the first year there will be: # 1000 + 1000 * 0.02 + 50 => 1070 inhabitants # # At the end of the 2nd year there will be: # 1070 + 1070 * 0.02 + 50 => 1141 inhabitants (number of inhabitants is an integer) # # At the end of the 3rd year there will be: # 1141 + 1141 * 0.02 + 50 => 1213 # # It will need 3 entire years. # More generally given parameters: # # p0, percent, aug (inhabitants coming or leaving each year), p (population to surpass) # # the function nb_year should return n number of entire years needed to get a population greater or equal to p. # # aug is an integer, percent a positive or null number, p0 and p are positive integers (> 0) # # Examples: # # **nb_year(1500, 5, 100, 5000) -> 15** # # **nb_year(1500000, 2.5, 10000, 2000000) -> 10** # # Note: Don't forget to convert the percent parameter as a percentage in the body of your function: if the parameter percent is 2 you have to convert it to 0.02. def nb_year(start, perc, delta, target): return 0 # FIXME how many years are needed to reach the target population # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lesson 2: Errors # # Seeing errors is an inevitable part of programming. There is essentially no level of expertise at which a programmer expects to write error-free code. Let's look at some common types of errors and what you should do about them: # # - SyntaxError # - NameError # - IndexError # # In this module, you will learn # # - How to read and interpret common Python error messages # - How to debug your code # For the sake of demonstration we will quickly set ourselves up with a "load this image" function. For now it will just give us a random image. # + import numpy as np def load_image(filename): return np.random.rand(400, 600) # - # ### Syntax errors # **Exercise** What is this code supposed to do? # + from skimage.io import imread for i in range(50) image_data = load_image("image_{}".format(i)) my_images.append(image_data) # - # This file is trying to load a series of numbered images into an array of images. # # _However_ running it gives you an error # # Python errors are given to you as a "stack trace". Let's first dissect a stack trace. # # **`File "", line 3`** tells you what file, and where in the file, the error came from. Because we're in a notebook, instead of a file we get a message that tells us we're in a notebook using IPython and that the error was in input cell #2. Specifically, line 3 of cell 2. # # **` for i in range(50)`** is where Python conveniently reminds us what code was at line 3. Sometimes it shows us a bit of code before and after to give us context. # # **`^`** marks exactly _where_ in line 3 the problem was noticed. This can be tricky because a problem with a function call may not be noticed until the closing ')' at the very end of the function. Nonetheless, here it might be helpful. # # **`SyntaxError: invalid syntax`**. This tells us that our problem is a SyntaxError, which is one of a large hierarchy of errors python can provide us with. # # `SyntaxError`s happen when Python sees you violating the rules of the language - it's a problem with the literal code characters you have typed rather than what your code is trying to do conceptually. Therefore they are usually short errors with quick fixes. # **Exercise** do you see how to fix _this_ error? 
# + from skimage.io import imread for i in range(50): image_data = load_image("image_{}".format(i)) my_images.append(image_data) # - # ### NameError # `NameError`s happen when you try to access a Python variable or function that does not exist yet. To get a NameError, Python has to actually try to run your code, so NameError is a type of `RuntimeError`. # **Exercise** Can you add in a line that fixes this code? # + from skimage.io import imread my_images = [] for i in range(50): image_data = load_image("image_{}".format(i)) my_images.append(image_data) # - # In this case "my_images" does not exist, so it cannot be appended to, thus the NameError. Making an empty my_images list solves this problem. # # Now that our images are loaded, let's check to make sure each image's mean intensity changes by less than 10% compared with the next image. This might be a good check to make sure no one bumped our microscope or turned on a light while we were taking images. for i in range(50): intensity = my_images[i].mean() next_intensity = my_images[i+1].mean() if abs(intensity - next_intensity) > 0.10 * intensity: print("Notice: intensity jumped between images {} and {}".format(i, i+1)) break # ### IndexError # `IndexError`s happen when you try to access data from a list-like object, such as a `numpy` array or image, but the location you requested does not exist. Like asking for index 10 in a list of 10 items. # # In image processing, this is often caused by switching your rows/columns or width/height. Say you have a 400x600 image and try to access a pixel at a row between 401 and 600. # # It is also generally common when looping through something by index. This is a good reason to use Python's `for item in collection` syntax rather than looping through indices! # + data = [1, 3, "cat", 0.4] # This is less clear and prone to error for index in range(4): item = data[index] print(item) # than this for item in data: print(item) # - # One interesting thing about Python loops is that the current value, `i` in this case, is available outside the loop and does not reset until you run the loop again. This lets us quickly check what value `i` took on when the code crashed! print(i) # **Exercise** Now can you explain what happened here? Can you fix this code? for i in range(49): intensity = my_images[i].mean() next_intensity = my_images[i+1].mean() if abs(intensity - next_intensity) > 0.10 * intensity: print("Notice: intensity jumped between images {} and {}".format(i, i+1)) break # ### Sufficiency and necessity of error messages # **Exercise** What would have happened if instead of comparing image `i` to image `i+1` we had compared image `i-1` (previous image) to `i`? for i in range(50): intensity = my_images[i-1].mean() next_intensity = my_images[i].mean() if abs(intensity - next_intensity) > 0.10 * intensity: print("Notice: intensity jumped between images {} and {}".format(i-1, i)) break # **Exercise** There was no error! Did this solve our problem? # No, it did not. Consider the first pass through the loop. `i` is 0. `i-1` is -1. What is `my_images[-1]`? my_images[-1] # `my_images[-1]` doesn't give an `IndexError`: -1 refers to the _last_ item in the list. But we don't want to compare the first and last images, so although this runs it's not the right behavior. # ### Long stack traces # What can make the errors that we see in this course particularly daunting is that we use many libraries which use other libraries which in turn use more libraries, etc.
This means that the piece of code that reports the error is often code we didn't write or didn't even know was being run, which can make errors feel unfair or unsolvable. But here's what's really happening. # # Imagine that you send me on an errand to buy groceries. You give me detailed instructions (a program) describing the steps to take. So I get in the car and start driving to the store. Halfway there, I notice I'm out of gas. That's OK, I have my own program for dealing with that. "Buy gas at a station" is a bit of instruction you didn't know I had, nor did you anticipate me using, but it's being used now anyway. I pull into the gas station, get out of my car, pay for the gas, and try to start filling up. However, the gas cover locks from the inside. I don't know about this (new car), and send you a text trying to precisely describe the error: "Nozzle cannot pass through solid metal". # # So here you are, having sent me to buy groceries, and I tell you I can't because "nozzle cannot pass through solid metal". Python is frustrating in the same way: generally it tells you the _lowest-level problem_ when it fails. It's important to keep this in mind: the piece of code that reports an error is probably not the one that caused it! # # Stack traces report errors with the first call at the top. **This means that you should read a long stack trace by starting at the bottom and working up until you see a line of code that you wrote or a function you called.** This line is likely to be the line you have to change. Maybe you passed a string to a function when it needed an integer. # # Lines below code you wrote in the stack trace may contain hints about what's wrong. Maybe the lowest error at the bottom of the stack trace is "Cannot subtract string from int", which is a clue that there was a string where an int should be. # # Higher lines tell you the context that the error happened in, i.e. what code ran before the error. Maybe you have a function that you use several times - you want to know _which_ usage of the function is giving you the error. If it's the 2nd time you use the function, then either the function _can_ work, but not in a particular context, or the function is being called in the 2nd location _before_ the first: the flow of your program is not what you expected.
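# As a small self-contained illustration of reading a trace from the bottom up (this example is ours, not part of the original lesson; it raises a ValueError rather than the string-minus-int example above):
# +
import traceback

def parse_measurement(raw):
    return float(raw)                # the lowest frame of the trace will point here

def load_measurements(rows):
    return [parse_measurement(r) for r in rows]

try:
    load_measurements(["1.5", "2.0", "oops"])
except ValueError:
    traceback.print_exc()            # bottom line: ValueError inside parse_measurement;
                                     # the frames above show our own call in load_measurements
# -
# Reading from the bottom: float("oops") failed, the frame above it is our parse_measurement, and the frame above that is the load_measurements call we wrote, which tells us where in our own code the bad value arrived.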
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Figure 4 \#30733 # + import numpy as np import matplotlib.pylab as plt import dd from getsig import getsig from scipy.signal import medfilt from smoothAm import smoothAm from scipy.signal import savgol_filter #from kalman import kalman # plt.style.use('helvet2') # shotnr = 30733 #eqdiag = 'GQH' eqdiag = 'FPG' rl05 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snelfs_0.500000') rl10 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snelfs_1.00000') rl20 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snelfs_2.00000') rl30 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snelfs_3.00000') rl40 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snelfs_4.00000') raus = np.loadtxt('/home/guimas/diags/'+str(shotnr)+'/'+eqdiag+'.Raus') rh05 = np.loadtxt('/home/guimas/layers/old_'+str(shotnr)+'/snehfs_0.500000') rh10 = np.loadtxt('/home/guimas/layers/old_'+str(shotnr)+'/snehfs_1.00000') rh20 = np.loadtxt('/home/guimas/layers/old_'+str(shotnr)+'/snehfs_2.00000') rh30 = np.loadtxt('/home/guimas/layers/old_'+str(shotnr)+'/snehfs_3.00000') rh40 = np.loadtxt('/home/guimas/layers/old_'+str(shotnr)+'/snehfs_4.00000') rh50 = np.loadtxt('/home/guimas/layers/old_'+str(shotnr)+'/snehfs_5.00000') rin = np.loadtxt('/home/guimas/diags/'+str(shotnr)+'/'+eqdiag+'.Rin') dtot = np.loadtxt('/home/guimas/diags/'+str(shotnr)+'/UVS.D_tot') tdtot = dtot[:,0] sdtot = dtot[:,1]*1e-22 tth = dd.shotfile('TTH', shotnr) h98 = tth('H/L-facs') tth.close() h1 = np.loadtxt('/home/guimas/diags/' + str(shotnr) + '/DCN.H-1') ecrh = np.loadtxt('/home/guimas/diags/' + str(shotnr) + '/ECS.PECRH') ecrhclr = '#00FF00' nbi = np.loadtxt('/home/guimas/diags/' + str(shotnr) + '/NIS.PNI') nbiclr = '#000000' icrh = np.loadtxt('/home/guimas/diags/' + str(shotnr) + '/ICP.PICRN') icrhclr = 'm' dirdiv = "/home/guimas/Divertor/" + str(shotnr) + "/" tel = np.loadtxt('/home/guimas/Documents/Publications/hmode2016/Figures/Allfigures/SupportFiles/te-ua4_30733.txt') jsatl = np.loadtxt(dirdiv + "kal2_jsat_out_" +str(shotnr)+ ".txt") # tel = np.loadtxt(dirdiv + "kal2_te_out_" +str(shotnr)+ ".txt") jsath = np.loadtxt(dirdiv + "kal2_jsat_in_" +str(shotnr)+ ".txt") teh = np.loadtxt('/home/guimas/Documents/Publications/hmode2016/Figures/Allfigures/SupportFiles/te-ui5_30733.txt') # tel = getsig(shotnr, 'LSD', 'te-ua3') # teh = getsig(shotnr, 'LSD', 'te-ui5') dne = getsig(shotnr, 'DNE', 'neDdel_2', exper='mcavedon') beta = getsig(shotnr, 'TOT', 'beta_N') asympar = np.loadtxt('./SupportFiles/aeval.30733') # + ##Global linewidth lwid = 1 mintime = 0.5 maxtime = 9.5 #HFS Separatrix shift hfsshift = 0.05 ##Alpha setting alphareg = 0.2 f, axarr = plt.subplots( nrows=4, ncols=1, sharex=True, sharey=False, gridspec_kw={'height_ratios':[5,5,4,4]},figsize=(6, 6),dpi=100) #6, 4.75 ticfont = 11 plt.rcParams.update({'font.size': ticfont}) ###Global settings ax1 = axarr[0] ax2 = axarr[1] #ax3 = axarr[2] #ax4 = axarr[3] plt.setp(ax1.get_xticklabels(), visible=False) plt.setp(ax2.get_xticklabels(), visible=False) plt.setp(ax3.get_xticklabels(), visible=False) ################################################ ####First plot ################################################ lettx = 0.02 letty = 0.82 ax1.text(lettx, letty, r'$\mathrm{(a)\,\#'+str(shotnr)+'\,LFS}$', transform=ax1.transAxes) ax1.plot(raus[:,0], raus[:,1], color='black', linewidth=lwid) 
ax1.plot(rl05[:,0], rl05[:,1], label='0.5', linewidth=lwid) ax1.plot(rl10[:,0], rl10[:,1], label='1.0', linewidth=lwid) ax1.plot(rl20[:,0], rl20[:,1], label='2.0', linewidth=lwid) start30 = 0.5 m30 = rl30[:,0]>=start30 ax1.plot(rl30[m30,0], rl30[m30,1], label='3.0', linewidth=lwid) ax1.set_ylim([2.13,2.23]) ax1.set_yticks([2.13,2.15,2.17,2.19,2.21,2.23]) ax1.fill_between(raus[:,0], raus[:,1], 1.5, color='black', alpha=0.25) ax1.set_ylabel(r'$\mathrm{R\,[m]}$') ################################################ ####Second plot ################################################ ax2 = axarr[1] ax2.text(0.04, 0.35, '(b)', transform=ax2.transAxes) ax2.text(0.04, 0.205, 'HFS', transform=ax2.transAxes) ax2.plot(rin[:,0], rin[:,1]-hfsshift, color='black', linewidth=lwid) ax2.plot(rh05[:,0], rh05[:,1], label='$\mathrm{0.5[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh10[:,0], rh10[:,1], label='$\mathrm{1.0[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh20[:,0], rh20[:,1], label='$\mathrm{2.0[10^{19}m^{-3}]}$', linewidth=lwid) m30 = rh30[:,0]>=start30 ax2.plot(rh30[m30,0], rh30[m30,1], label='$\mathrm{3.0[10^{19}m^{-3}]}$', linewidth=lwid) start40 = 1.0 m40 = rh40[:,0]>=start40 ax2.plot(rh40[m40,0], rh40[m40,1], label='$\mathrm{4.0[10^{19}m^{-3}]}$', linewidth=lwid) start50 = 1.0 m50 = rh50[:,0]>=start50 ax2.plot(rh50[m50,0], rh50[m50,1], label='$\mathrm{5.0[10^{19}m^{-3}]}$', linewidth=lwid) ax2.legend(bbox_to_anchor=(0.66,0.45), ncol=2, borderpad=0.2, labelspacing=0.1, columnspacing=0.2, handlelength=1, handletextpad=0.1, fancybox=True) ##Inner wall ax2.text(8., 1.05, r'$\mathrm{inner\,wall}$', ha='left') ax2.hlines(1.045, mintime, maxtime, color='black', lw=3) ylimax = 1.13 ylimin = 1.03 ax2.set_ylim([ylimin,ylimax]) ax2.set_yticks([1.04,1.06,1.08,1.10,1.12]) ###Ta-da ax2.set_ylabel(r'$\mathrm{R\,[m]}$') ax2.fill_between(rin[:,0], rin[:,1]-hfsshift, 1.5, color='black', alpha=0.25) ################################################ ################################################ ax6 = axarr[2] ax6.text(lettx, letty-0.05, '(c)', transform=ax6.transAxes) dtotclr = '#0000FF' ax6.plot(tdtot, sdtot, color=dtotclr)#, ls='--') ax6.text(2.6, 0.4, r'$\mathrm{D\,[10^{22}e/s]}$',color=dtotclr) h1clr = 'k' ax6.plot(h1[:,0], h1[:,1]*1e-19, color=h1clr) ax6.text(2.4, 5, r'$\mathrm{n_{e,c}\,[10^{19}m^{-3}]}$',color=h1clr) ax6.plot(h98.time, h98.data[:,7]*10, color='red', lw=1) ax6.text(7.9, 0.5, r'$\mathrm{H_{98,y2}[\times10]}$', color='red') ax6.axhline(10, color='r', lw=0.6, ls='--') ax6.text(8.3,9.5, r'$\mathrm{H_{98,y2}=1}$', color='red',fontsize=10,va='top') ax6.set_yticks([0,2,4,6,8,10]) ax6.set_ylim((0, 11)) ############################################################################ ############################################################################ ax3 = axarr[3] ax3.text(lettx, letty, '(d)', transform=ax3.transAxes) ax3.plot(ecrh[:,0], ecrh[:,1]*1e-6, color=ecrhclr) ax3.text(8, 11.5, r'$\mathrm{ECRH\,[MW]}$',color=ecrhclr) ax3.plot(nbi[:,0], nbi[:,1]*1e-6, color=nbiclr) ax3.text(0.85, 9.0, r'$\mathrm{NBI\,[MW]}$',color=nbiclr) ax3.plot(icrh[:,0], icrh[:,1]*1e-6, color=icrhclr) ax3.text(8.0, 9.0, r'$\mathrm{ICRH\,[MW]}$',color=icrhclr) ##Heating ax3.set_xlim([mintime,maxtime]) ax3.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) ax3.set_yticks([0,2.5,5,7.5,10,12.5]) ax3.set_ylim([0,15]) ################################################################ ################################################################ #ax4 = axarr[5] #ax4.text(0.8, 0.7, '(f) outer div.', transform=ax4.transAxes) 
#ax4.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax4.transAxes) #ax4.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax4.transAxes) #msk = ~np.isnan(tel[:,1]) #nntemp = tel[msk,1] #nntime = tel[msk,0] #ax4.plot(nntime, medfilt(nntemp,21), color='b', lw=0.5) #ax4.plot(jsatl[:,0], savgol_filter(jsatl[:,1], 301, 3), color='red', lw=0.5) #ax4.set_yticks([0,5,10,15]) #ax4.set_ylim((0,18)) ################################################################ ################################################################ #ax5 = axarr[6] #ax5.text(0.8, 0.7, '(g) inner div.', transform=ax5.transAxes) #msk = ~np.isnan(teh[:,1]) #nntemp = teh[msk,1] #nntime = teh[msk,0] #ax5.plot(nntime, medfilt(nntemp,21), color='b', lw=0.5) #ax5.plot(jsath[:,0], savgol_filter(jsath[:,1], 601, 5), color='red', lw=0.5) #ax5.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax5.transAxes) #ax5.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax5.transAxes) #ax5.set_yticks([0,5,10,15]) #ax5.set_ylim((0,18)) #ax5.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) ################################################################ ################################################################ #ax7 = axarr[2] #ax7.text(lettx, letty, '(c)', transform=ax7.transAxes) #ax7.plot(dne.time, medfilt(dne.data[:,19],21)*1e-20,lw=0.7,c='r') #asymval = 24 #ax7.set_ylim(0,13) #ax7.text(5.6, 4.2, r'$\mathrm{n_{e,v}\,[10^{20}m^{-3}]}$', color='r') #ax7.plot(beta.time, beta.data*6, color='k') #ax7.text(5.6, 1.5, r'$\mathrm{\beta[\times5]}$', color='k') plt.subplots_adjust(left=0.1, bottom=0.08, right=0.99, top=0.98, wspace=0.1, hspace=0.07) #plt.subplots_adjust(left=0.15, bottom=0.1, right=0.99, top=0.98, wspace=0.1, hspace=0.07) ################################################################ ################################################################ ax2.axvspan(1.4, 2.0, color='r', alpha=0.3) #for ax in axarr: # ax.axvline(2.25, color='C0', lw=1.2) # ax.axvline(3.8, color='C1', lw=1.2) # ax.axvline(4.8, color='C2', lw=1.2) # ax.axvline(5.5, color='C3', lw=1.2) # ax.axvline(7.0, color='C4', lw=1.2) plt.savefig("./PresentationMST1/Figure4.png", format='png', dpi=300) plt.show() # - # # Part 2 # + ##Global linewidth lwid = 1 mintime = 0.5 maxtime = 9.5 #HFS Separatrix shift hfsshift = 0.05 ##Alpha setting alphareg = 0.2 f, axarr = plt.subplots( nrows=4, ncols=1, sharex=True, sharey=False, gridspec_kw={'height_ratios':[5,5,4,4]},figsize=(6, 6),dpi=100) #6, 4.75 ticfont = 11 plt.rcParams.update({'font.size': ticfont}) ###Global settings ax1 = axarr[0] ax2 = axarr[1] ax3 = axarr[2] ax4 = axarr[3] plt.setp(ax1.get_xticklabels(), visible=False) plt.setp(ax2.get_xticklabels(), visible=False) plt.setp(ax3.get_xticklabels(), visible=False) ################################################ ####First plot ################################################ lettx = 0.02 letty = 0.82 ax1.text(lettx, letty, r'$\mathrm{(a)\,\#'+str(shotnr)+'\,LFS}$', transform=ax1.transAxes) ax1.plot(raus[:,0], raus[:,1], color='black', linewidth=lwid) ax1.plot(rl05[:,0], rl05[:,1], label='0.5', linewidth=lwid) ax1.plot(rl10[:,0], rl10[:,1], label='1.0', linewidth=lwid) ax1.plot(rl20[:,0], rl20[:,1], label='2.0', linewidth=lwid) start30 = 0.5 m30 = rl30[:,0]>=start30 ax1.plot(rl30[m30,0], rl30[m30,1], label='3.0', linewidth=lwid) ax1.set_ylim([2.13,2.23]) ax1.set_yticks([2.13,2.15,2.17,2.19,2.21,2.23]) ax1.fill_between(raus[:,0], raus[:,1], 1.5, color='black', alpha=0.25) 
ax1.set_ylabel(r'$\mathrm{R\,[m]}$') ################################################ ####Second plot ################################################ ax2 = axarr[1] ax2.text(0.04, 0.35, '(b)', transform=ax2.transAxes) ax2.text(0.04, 0.205, 'HFS', transform=ax2.transAxes) ax2.plot(rin[:,0], rin[:,1]-hfsshift, color='black', linewidth=lwid) ax2.plot(rh05[:,0], rh05[:,1], label='$\mathrm{0.5[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh10[:,0], rh10[:,1], label='$\mathrm{1.0[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh20[:,0], rh20[:,1], label='$\mathrm{2.0[10^{19}m^{-3}]}$', linewidth=lwid) m30 = rh30[:,0]>=start30 ax2.plot(rh30[m30,0], rh30[m30,1], label='$\mathrm{3.0[10^{19}m^{-3}]}$', linewidth=lwid) start40 = 1.0 m40 = rh40[:,0]>=start40 ax2.plot(rh40[m40,0], rh40[m40,1], label='$\mathrm{4.0[10^{19}m^{-3}]}$', linewidth=lwid) start50 = 1.0 m50 = rh50[:,0]>=start50 ax2.plot(rh50[m50,0], rh50[m50,1], label='$\mathrm{5.0[10^{19}m^{-3}]}$', linewidth=lwid) ax2.legend(bbox_to_anchor=(0.66,0.45), ncol=2, borderpad=0.2, labelspacing=0.1, columnspacing=0.2, handlelength=1, handletextpad=0.1, fancybox=True) ##Inner wall ax2.text(8., 1.05, r'$\mathrm{inner\,wall}$', ha='left') ax2.hlines(1.045, mintime, maxtime, color='black', lw=3) ylimax = 1.13 ylimin = 1.03 ax2.set_ylim([ylimin,ylimax]) ax2.set_yticks([1.04,1.06,1.08,1.10,1.12]) ###Ta-da ax2.set_ylabel(r'$\mathrm{R\,[m]}$') ax2.fill_between(rin[:,0], rin[:,1]-hfsshift, 1.5, color='black', alpha=0.25) ################################################ ################################################ #ax6 = axarr[2] #ax6.text(lettx, letty-0.05, '(c)', transform=ax6.transAxes) #dtotclr = '#0000FF' #ax6.plot(tdtot, sdtot, color=dtotclr)#, ls='--') #ax6.text(2.6, 0.4, r'$\mathrm{D\,[10^{22}e/s]}$',color=dtotclr) #h1clr = 'k' #ax6.plot(h1[:,0], h1[:,1]*1e-19, color=h1clr) #ax6.text(2.4, 5, r'$\mathrm{n_{e,c}\,[10^{19}m^{-3}]}$',color=h1clr) #ax6.plot(h98.time, h98.data[:,7]*10, color='red', lw=1) #ax6.text(7.9, 0.5, r'$\mathrm{H_{98,y2}[\times10]}$', color='red') #ax6.axhline(10, color='r', lw=0.6, ls='--') #ax6.text(8.3,9.5, r'$\mathrm{H_{98,y2}=1}$', color='red',fontsize=10,va='top') #ax6.set_yticks([0,2,4,6,8,10]) #ax6.set_ylim((0, 11)) ############################################################################ ############################################################################ #ax3 = axarr[3] #ax3.text(lettx, letty, '(d)', transform=ax3.transAxes) #ax3.plot(ecrh[:,0], ecrh[:,1]*1e-6, color=ecrhclr) #ax3.text(8, 11.5, r'$\mathrm{ECRH\,[MW]}$',color=ecrhclr) #ax3.plot(nbi[:,0], nbi[:,1]*1e-6, color=nbiclr) #ax3.text(0.85, 9.0, r'$\mathrm{NBI\,[MW]}$',color=nbiclr) #ax3.plot(icrh[:,0], icrh[:,1]*1e-6, color=icrhclr) #ax3.text(8.0, 9.0, r'$\mathrm{ICRH\,[MW]}$',color=icrhclr) ##Heating #ax3.set_xlim([mintime,maxtime]) #ax3.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) #ax3.set_yticks([0,2.5,5,7.5,10,12.5]) #ax3.set_ylim([0,15]) ################################################################ ################################################################ ax4 = axarr[2] ax4.text(0.8, 0.7, '(f) outer div.', transform=ax4.transAxes) ax4.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax4.transAxes) ax4.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax4.transAxes) msk = ~np.isnan(tel[:,1]) nntemp = tel[msk,1] nntime = tel[msk,0] ax4.plot(nntime, medfilt(nntemp,21), color='b', lw=0.5) ax4.plot(jsatl[:,0], savgol_filter(jsatl[:,1], 301, 3), color='red', lw=0.5) ax4.set_xlim([mintime,maxtime]) 
ax4.set_yticks([0,5,10,15]) ax4.set_ylim((0,18)) ################################################################ ################################################################ ax5 = axarr[3] ax5.text(0.8, 0.7, '(g) inner div.', transform=ax5.transAxes) msk = ~np.isnan(teh[:,1]) nntemp = teh[msk,1] nntime = teh[msk,0] ax5.plot(nntime, medfilt(nntemp,21), color='b', lw=0.5) ax5.plot(jsath[:,0], savgol_filter(jsath[:,1], 601, 5), color='red', lw=0.5) ax5.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax5.transAxes) ax5.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax5.transAxes) ax5.set_yticks([0,5,10,15]) ax5.set_ylim((0,18)) ax5.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) ################################################################ ################################################################ #ax7 = axarr[2] #ax7.text(lettx, letty, '(c)', transform=ax7.transAxes) #ax7.plot(dne.time, medfilt(dne.data[:,19],21)*1e-20,lw=0.7,c='r') #asymval = 24 #ax7.set_ylim(0,13) #ax7.text(5.6, 4.2, r'$\mathrm{n_{e,v}\,[10^{20}m^{-3}]}$', color='r') #ax7.plot(beta.time, beta.data*6, color='k') #ax7.text(5.6, 1.5, r'$\mathrm{\beta[\times5]}$', color='k') plt.subplots_adjust(left=0.1, bottom=0.08, right=0.99, top=0.98, wspace=0.1, hspace=0.07) #plt.subplots_adjust(left=0.15, bottom=0.1, right=0.99, top=0.98, wspace=0.1, hspace=0.07) ################################################################ ################################################################ ax2.axvspan(1.4, 2.0, color='r', alpha=0.3) #for ax in axarr: # ax.axvline(2.25, color='C0', lw=1.2) # ax.axvline(3.8, color='C1', lw=1.2) # ax.axvline(4.8, color='C2', lw=1.2) # ax.axvline(5.5, color='C3', lw=1.2) # ax.axvline(7.0, color='C4', lw=1.2) plt.savefig("./PresentationMST1/Figure4_2.png", format='png', dpi=300) plt.show() # + ##Global linewidth lwid = 1 mintime = 0.5 maxtime = 9.5 #HFS Separatrix shift hfsshift = 0.05 ##Alpha setting alphareg = 0.2 f, axarr = plt.subplots( nrows=4, ncols=1, sharex=True, sharey=False, gridspec_kw={'height_ratios':[5,5,4,4]},figsize=(6, 6),dpi=100) #6, 4.75 ticfont = 11 plt.rcParams.update({'font.size': ticfont}) ###Global settings ax1 = axarr[0] ax2 = axarr[1] #ax3 = axarr[2] #ax4 = axarr[3] plt.setp(ax1.get_xticklabels(), visible=False) plt.setp(ax2.get_xticklabels(), visible=False) plt.setp(ax3.get_xticklabels(), visible=False) ################################################ ####First plot ################################################ lettx = 0.02 letty = 0.82 ax1.text(lettx, letty, r'$\mathrm{(a)\,\#'+str(shotnr)+'\,LFS}$', transform=ax1.transAxes) ax1.plot(raus[:,0], raus[:,1], color='black', linewidth=lwid) ax1.plot(rl05[:,0], rl05[:,1], label='0.5', linewidth=lwid) ax1.plot(rl10[:,0], rl10[:,1], label='1.0', linewidth=lwid) ax1.plot(rl20[:,0], rl20[:,1], label='2.0', linewidth=lwid) start30 = 0.5 m30 = rl30[:,0]>=start30 ax1.plot(rl30[m30,0], rl30[m30,1], label='3.0', linewidth=lwid) ax1.set_ylim([2.13,2.23]) ax1.set_yticks([2.13,2.15,2.17,2.19,2.21,2.23]) ax1.fill_between(raus[:,0], raus[:,1], 1.5, color='black', alpha=0.25) ax1.set_ylabel(r'$\mathrm{R\,[m]}$') ################################################ ####Second plot ################################################ ax2 = axarr[1] ax2.text(0.04, 0.35, '(b)', transform=ax2.transAxes) ax2.text(0.04, 0.205, 'HFS', transform=ax2.transAxes) ax2.plot(rin[:,0], rin[:,1]-hfsshift, color='black', linewidth=lwid) ax2.plot(rh05[:,0], rh05[:,1], label='$\mathrm{0.5[10^{19}m^{-3}]}$', 
linewidth=lwid) ax2.plot(rh10[:,0], rh10[:,1], label='$\mathrm{1.0[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh20[:,0], rh20[:,1], label='$\mathrm{2.0[10^{19}m^{-3}]}$', linewidth=lwid) m30 = rh30[:,0]>=start30 ax2.plot(rh30[m30,0], rh30[m30,1], label='$\mathrm{3.0[10^{19}m^{-3}]}$', linewidth=lwid) start40 = 1.0 m40 = rh40[:,0]>=start40 ax2.plot(rh40[m40,0], rh40[m40,1], label='$\mathrm{4.0[10^{19}m^{-3}]}$', linewidth=lwid) start50 = 1.0 m50 = rh50[:,0]>=start50 ax2.plot(rh50[m50,0], rh50[m50,1], label='$\mathrm{5.0[10^{19}m^{-3}]}$', linewidth=lwid) ax2.legend(bbox_to_anchor=(0.66,0.45), ncol=2, borderpad=0.2, labelspacing=0.1, columnspacing=0.2, handlelength=1, handletextpad=0.1, fancybox=True) ##Inner wall ax2.text(8., 1.05, r'$\mathrm{inner\,wall}$', ha='left') ax2.hlines(1.045, mintime, maxtime, color='black', lw=3) ylimax = 1.13 ylimin = 1.03 ax2.set_ylim([ylimin,ylimax]) ax2.set_yticks([1.04,1.06,1.08,1.10,1.12]) ###Ta-da ax2.set_ylabel(r'$\mathrm{R\,[m]}$') ax2.fill_between(rin[:,0], rin[:,1]-hfsshift, 1.5, color='black', alpha=0.25) ################################################ ################################################ ax6 = axarr[3] ax6.text(lettx, letty-0.05, '(d)', transform=ax6.transAxes) dtotclr = '#0000FF' ax6.plot(tdtot, sdtot, color=dtotclr)#, ls='--') ax6.text(2.6, 0.4, r'$\mathrm{D\,[10^{22}e/s]}$',color=dtotclr) ax6.plot(nbi[:,0], nbi[:,1]*1e-6, color=nbiclr) ax6.text(1.0, 9.0, r'$\mathrm{NBI\,[MW]}$',color=nbiclr) ax6.set_ylim(bottom=0) #h1clr = 'k' #ax6.plot(h1[:,0], h1[:,1]*1e-19, color=h1clr) #ax6.text(2.4, 5, r'$\mathrm{n_{e,c}\,[10^{19}m^{-3}]}$',color=h1clr) #ax6.plot(h98.time, h98.data[:,7]*10, color='red', lw=1) #ax6.text(7.9, 0.5, r'$\mathrm{H_{98,y2}[\times10]}$', color='red') #ax6.axhline(10, color='r', lw=0.6, ls='--') #ax6.text(8.3,9.5, r'$\mathrm{H_{98,y2}=1}$', color='red',fontsize=10,va='top') ax6.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) #ax6.set_yticks([0,2,4,6,8,10]) #ax6.set_ylim((0, 11)) ############################################################################ ############################################################################ #ax3 = axarr[3] #ax3.text(lettx, letty, '(d)', transform=ax3.transAxes) #ax3.plot(ecrh[:,0], ecrh[:,1]*1e-6, color=ecrhclr) #ax3.text(8, 11.5, r'$\mathrm{ECRH\,[MW]}$',color=ecrhclr) #ax3.plot(nbi[:,0], nbi[:,1]*1e-6, color=nbiclr) #ax3.text(0.85, 9.0, r'$\mathrm{NBI\,[MW]}$',color=nbiclr) #ax3.plot(icrh[:,0], icrh[:,1]*1e-6, color=icrhclr) #ax3.text(8.0, 9.0, r'$\mathrm{ICRH\,[MW]}$',color=icrhclr) ##Heating #ax3.set_xlim([mintime,maxtime]) #ax3.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) #ax3.set_yticks([0,2.5,5,7.5,10,12.5]) #ax3.set_ylim([0,15]) ################################################################ ################################################################ #ax4 = axarr[2] #ax4.text(0.8, 0.7, '(f) outer div.', transform=ax4.transAxes) #ax4.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax4.transAxes) #ax4.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax4.transAxes) #msk = ~np.isnan(tel[:,1]) #nntemp = tel[msk,1] #nntime = tel[msk,0] #ax4.plot(nntime, medfilt(nntemp,21), color='b', lw=0.5) #ax4.plot(jsatl[:,0], savgol_filter(jsatl[:,1], 301, 3), color='red', lw=0.5) #ax4.set_xlim([mintime,maxtime]) #ax4.set_yticks([0,5,10,15]) #ax4.set_ylim((0,18)) ################################################################ ################################################################ #ax5 = axarr[3] #ax5.text(0.8, 0.7, '(g) inner 
div.', transform=ax5.transAxes) #msk = ~np.isnan(teh[:,1]) #nntemp = teh[msk,1] #nntime = teh[msk,0] #ax5.plot(nntime, medfilt(nntemp,21), color='b', lw=0.5) #ax5.plot(jsath[:,0], savgol_filter(jsath[:,1], 601, 5), color='red', lw=0.5) #ax5.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax5.transAxes) #ax5.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax5.transAxes) #ax5.set_yticks([0,5,10,15]) #ax5.set_ylim((0,18)) ################################################################ ################################################################ ax7 = axarr[2] ax7.text(lettx, letty, '(c)', transform=ax7.transAxes) ax7.plot(dne.time, medfilt(dne.data[:,19],21)*1e-20,lw=0.7,c='r') ax7.axhline(10,color='k',lw=1,ls='--') ax7.text(8.0,10.2,r'$\mathrm{\beta=2}$',color='k') asymval = 24 ax7.set_ylim(0,13) ax7.text(5.6, 4.2, r'$\mathrm{n_{e,v}\,[10^{20}m^{-3}]}$', color='r') ax7.set_xlim([mintime,maxtime]) ax7.plot(beta.time, beta.data*5, color='k') ax7.text(5.6, 1.5, r'$\mathrm{\beta[\times5]}$', color='k') plt.subplots_adjust(left=0.1, bottom=0.08, right=0.99, top=0.98, wspace=0.1, hspace=0.07) #plt.subplots_adjust(left=0.15, bottom=0.1, right=0.99, top=0.98, wspace=0.1, hspace=0.07) ################################################################ ################################################################ ax2.axvspan(1.4, 2.0, color='r', alpha=0.3) #for ax in axarr: # ax.axvline(2.25, color='C0', lw=1.2) # ax.axvline(3.8, color='C1', lw=1.2) # ax.axvline(4.8, color='C2', lw=1.2) # ax.axvline(5.5, color='C3', lw=1.2) # ax.axvline(7.0, color='C4', lw=1.2) plt.savefig("./PresentationMST1/Figure4_3.png", format='png', dpi=300) plt.show() # - # ## With times # + ##Global linewidth lwid = 1 mintime = 0.5 maxtime = 9.5 #HFS Separatrix shift hfsshift = 0.05 ##Alpha setting alphareg = 0.2 f, axarr = plt.subplots( nrows=4, ncols=1, sharex=True, sharey=False, gridspec_kw={'height_ratios':[5,5,4,4]},figsize=(6, 6),dpi=100) #6, 4.75 ticfont = 11 plt.rcParams.update({'font.size': ticfont}) ###Global settings ax1 = axarr[0] ax2 = axarr[1] #ax3 = axarr[2] #ax4 = axarr[3] plt.setp(ax1.get_xticklabels(), visible=False) plt.setp(ax2.get_xticklabels(), visible=False) plt.setp(ax3.get_xticklabels(), visible=False) ################################################ ####First plot ################################################ lettx = 0.02 letty = 0.82 ax1.text(lettx, letty, r'$\mathrm{(a)\,\#'+str(shotnr)+'\,LFS}$', transform=ax1.transAxes) ax1.plot(raus[:,0], raus[:,1], color='black', linewidth=lwid) ax1.plot(rl05[:,0], rl05[:,1], label='0.5', linewidth=lwid) ax1.plot(rl10[:,0], rl10[:,1], label='1.0', linewidth=lwid) ax1.plot(rl20[:,0], rl20[:,1], label='2.0', linewidth=lwid) start30 = 0.5 m30 = rl30[:,0]>=start30 ax1.plot(rl30[m30,0], rl30[m30,1], label='3.0', linewidth=lwid) ax1.set_ylim([2.13,2.23]) ax1.set_yticks([2.13,2.15,2.17,2.19,2.21,2.23]) ax1.fill_between(raus[:,0], raus[:,1], 1.5, color='black', alpha=0.25) ax1.set_ylabel(r'$\mathrm{R\,[m]}$') ################################################ ####Second plot ################################################ ax2 = axarr[1] ax2.text(0.04, 0.35, '(b)', transform=ax2.transAxes) ax2.text(0.04, 0.205, 'HFS', transform=ax2.transAxes) ax2.plot(rin[:,0], rin[:,1]-hfsshift, color='black', linewidth=lwid) ax2.plot(rh05[:,0], rh05[:,1], label='$\mathrm{0.5[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh10[:,0], rh10[:,1], label='$\mathrm{1.0[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh20[:,0], rh20[:,1], 
label='$\mathrm{2.0[10^{19}m^{-3}]}$', linewidth=lwid) m30 = rh30[:,0]>=start30 ax2.plot(rh30[m30,0], rh30[m30,1], label='$\mathrm{3.0[10^{19}m^{-3}]}$', linewidth=lwid) start40 = 1.0 m40 = rh40[:,0]>=start40 ax2.plot(rh40[m40,0], rh40[m40,1], label='$\mathrm{4.0[10^{19}m^{-3}]}$', linewidth=lwid) start50 = 1.0 m50 = rh50[:,0]>=start50 ax2.plot(rh50[m50,0], rh50[m50,1], label='$\mathrm{5.0[10^{19}m^{-3}]}$', linewidth=lwid) ax2.legend(bbox_to_anchor=(0.66,0.45), ncol=2, borderpad=0.2, labelspacing=0.1, columnspacing=0.2, handlelength=1, handletextpad=0.1, fancybox=True) ##Inner wall ax2.text(8., 1.05, r'$\mathrm{inner\,wall}$', ha='left') ax2.hlines(1.045, mintime, maxtime, color='black', lw=3) ylimax = 1.13 ylimin = 1.03 ax2.set_ylim([ylimin,ylimax]) ax2.set_yticks([1.04,1.06,1.08,1.10,1.12]) ###Ta-da ax2.set_ylabel(r'$\mathrm{R\,[m]}$') ax2.fill_between(rin[:,0], rin[:,1]-hfsshift, 1.5, color='black', alpha=0.25) ################################################ ################################################ ax6 = axarr[3] ax6.text(lettx, letty-0.05, '(d)', transform=ax6.transAxes) dtotclr = '#0000FF' ax6.plot(tdtot, sdtot, color=dtotclr)#, ls='--') ax6.text(2.6, 0.4, r'$\mathrm{D\,[10^{22}e/s]}$',color=dtotclr) ax6.plot(nbi[:,0], nbi[:,1]*1e-6, color=nbiclr) ax6.text(1.0, 9.0, r'$\mathrm{NBI\,[MW]}$',color=nbiclr) ax6.set_ylim(bottom=0) #h1clr = 'k' #ax6.plot(h1[:,0], h1[:,1]*1e-19, color=h1clr) #ax6.text(2.4, 5, r'$\mathrm{n_{e,c}\,[10^{19}m^{-3}]}$',color=h1clr) #ax6.plot(h98.time, h98.data[:,7]*10, color='red', lw=1) #ax6.text(7.9, 0.5, r'$\mathrm{H_{98,y2}[\times10]}$', color='red') #ax6.axhline(10, color='r', lw=0.6, ls='--') #ax6.text(8.3,9.5, r'$\mathrm{H_{98,y2}=1}$', color='red',fontsize=10,va='top') ax6.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) #ax6.set_yticks([0,2,4,6,8,10]) #ax6.set_ylim((0, 11)) ############################################################################ ############################################################################ #ax3 = axarr[3] #ax3.text(lettx, letty, '(d)', transform=ax3.transAxes) #ax3.plot(ecrh[:,0], ecrh[:,1]*1e-6, color=ecrhclr) #ax3.text(8, 11.5, r'$\mathrm{ECRH\,[MW]}$',color=ecrhclr) #ax3.plot(nbi[:,0], nbi[:,1]*1e-6, color=nbiclr) #ax3.text(0.85, 9.0, r'$\mathrm{NBI\,[MW]}$',color=nbiclr) #ax3.plot(icrh[:,0], icrh[:,1]*1e-6, color=icrhclr) #ax3.text(8.0, 9.0, r'$\mathrm{ICRH\,[MW]}$',color=icrhclr) ##Heating #ax3.set_xlim([mintime,maxtime]) #ax3.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) #ax3.set_yticks([0,2.5,5,7.5,10,12.5]) #ax3.set_ylim([0,15]) ################################################################ ################################################################ #ax4 = axarr[2] #ax4.text(0.8, 0.7, '(f) outer div.', transform=ax4.transAxes) #ax4.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax4.transAxes) #ax4.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax4.transAxes) #msk = ~np.isnan(tel[:,1]) #nntemp = tel[msk,1] #nntime = tel[msk,0] #ax4.plot(nntime, medfilt(nntemp,21), color='b', lw=0.5) #ax4.plot(jsatl[:,0], savgol_filter(jsatl[:,1], 301, 3), color='red', lw=0.5) #ax4.set_xlim([mintime,maxtime]) #ax4.set_yticks([0,5,10,15]) #ax4.set_ylim((0,18)) ################################################################ ################################################################ #ax5 = axarr[3] #ax5.text(0.8, 0.7, '(g) inner div.', transform=ax5.transAxes) #msk = ~np.isnan(teh[:,1]) #nntemp = teh[msk,1] #nntime = teh[msk,0] #ax5.plot(nntime, 
medfilt(nntemp,21), color='b', lw=0.5) #ax5.plot(jsath[:,0], savgol_filter(jsath[:,1], 601, 5), color='red', lw=0.5) #ax5.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax5.transAxes) #ax5.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax5.transAxes) #ax5.set_yticks([0,5,10,15]) #ax5.set_ylim((0,18)) ################################################################ ################################################################ ax7 = axarr[2] ax7.text(lettx, letty, '(c)', transform=ax7.transAxes) ax7.plot(dne.time, medfilt(dne.data[:,19],21)*1e-20,lw=0.7,c='r') ax7.axhline(10,color='k',lw=1,ls='--') ax7.text(8.0,10.2,r'$\mathrm{\beta=2}$',color='k') asymval = 24 ax7.set_ylim(0,13) ax7.text(5.6, 4.2, r'$\mathrm{n_{e,v}\,[10^{20}m^{-3}]}$', color='r') ax7.set_xlim([mintime,maxtime]) ax7.plot(beta.time, beta.data*5, color='k') ax7.text(5.6, 1.5, r'$\mathrm{\beta[\times5]}$', color='k') plt.subplots_adjust(left=0.1, bottom=0.08, right=0.99, top=0.98, wspace=0.1, hspace=0.07) #plt.subplots_adjust(left=0.15, bottom=0.1, right=0.99, top=0.98, wspace=0.1, hspace=0.07) ################################################################ ################################################################ ax2.axvspan(1.4, 2.0, color='r', alpha=0.3) llww = 2 for ax in axarr: ax.axvline(2.25, color='C0', lw=llww) ax.axvline(3.8, color='C1', lw=llww) ax.axvline(4.8, color='C2', lw=llww) ax.axvline(5.5, color='C3', lw=llww) ax.axvline(7.0, color='C4', lw=llww) plt.savefig("./PresentationMST1/Figure4_4.png", format='png', dpi=300) plt.show() # - # + import numpy as np import matplotlib.pylab as plt from savitzky_golay import * import dd #from getsig import getsig #from kalman import kalman # plt.style.use('helvet2') # shotnr = 30733 #eqdiag = 'GQH' eqdiag = 'FPG' rl05 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snelfs_0.500000') rl10 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snelfs_1.00000') rl20 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snelfs_2.00000') rl30 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snelfs_3.00000') rl40 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snelfs_4.00000') raus = np.loadtxt('/home/guimas/diags/'+str(shotnr)+'/'+eqdiag+'.Raus') rh05 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snehfs_0.500000') rh10 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snehfs_1.00000') rh20 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snehfs_2.00000') rh30 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snehfs_3.00000') rh40 = np.loadtxt('/home/guimas/layers/'+str(shotnr)+'/snehfs_4.00000') rin = np.loadtxt('/home/guimas/diags/'+str(shotnr)+'/'+eqdiag+'.Rin') dtot = np.loadtxt('/home/guimas/diags/'+str(shotnr)+'/UVS.D_tot') tdtot = dtot[:,0] sdtot = dtot[:,1]*1e-22 tth = dd.shotfile('TTH', shotnr) h98 = tth('H/L-facs') tth.close() h1 = np.loadtxt('/home/guimas/diags/' + str(shotnr) + '/DCN.H-1') ecrh = np.loadtxt('/home/guimas/diags/' + str(shotnr) + '/ECS.PECRH') ecrhclr = '#00FF00' nbi = np.loadtxt('/home/guimas/diags/' + str(shotnr) + '/NIS.PNI') nbiclr = '#000000' icrh = np.loadtxt('/home/guimas/diags/' + str(shotnr) + '/ICP.PICRN') icrhclr = 'm' dirdiv = "/home/guimas/Divertor/" + str(shotnr) + "/" jsatl = np.loadtxt(dirdiv + "kal2_jsat_out_" +str(shotnr)+ ".txt") tel = np.loadtxt(dirdiv + "kal2_te_out_" +str(shotnr)+ ".txt") jsath = np.loadtxt(dirdiv + "kal2_jsat_in_" +str(shotnr)+ ".txt") teh = np.loadtxt(dirdiv + "kal2_te_in_" +str(shotnr)+ ".txt") # + ##Global linewidth lwid = 1 mintime = 0.5 maxtime = 
9.5 ##Alpha setting alphareg = 0.4 f, axarr = plt.subplots( nrows=4, ncols=1, sharex=True, sharey=False, gridspec_kw={'height_ratios':[5,5,4,4]},figsize=(12/2.54, 12/2.54)) #6, 4.75 ###Global settings ax1 = axarr[0] ax2 = axarr[1] ax3 = axarr[2] ax4 = axarr[3] plt.setp(ax1.get_xticklabels(), visible=False) plt.setp(ax2.get_xticklabels(), visible=False) plt.setp(ax3.get_xticklabels(), visible=False) ################################################ ####First plot ################################################ lettx = 0.04 letty = 0.85 ax1.text(lettx, letty, r'$\mathrm{(a)\,\#'+str(shotnr)+'\,LFS}$', transform=ax1.transAxes) ax1.plot(raus[:,0], raus[:,1], color='black', label='Separatrix', linewidth=lwid) ax1.plot(rl05[:,0], rl05[:,1], color='red', label='0.5', linewidth=lwid) ax1.plot(rl10[:,0], rl10[:,1], color='blue', label='1.0', linewidth=lwid) ax1.plot(rl20[:,0], rl20[:,1], color='green', label='2.0', linewidth=lwid) start30 = 0.5 m30 = rl30[:,0]>=start30 ax1.plot(rl30[m30,0], rl30[m30,1], color='magenta', label='3.0', linewidth=lwid) start40 = 0.5 m40 = rl40[:,0]>=start40 ax1.plot(rl40[m40,0], rl40[m40,1], color='orange', label='4.0', linewidth=lwid) ax1.set_ylim([2.08,2.22]) ax1.fill_between(raus[:,0], raus[:,1], 1.5, color='black', alpha=0.25) #plt.yticks([2.05,2.07,2.09,2.11,2.13,2.15,2.17,2.19]) ax1.set_ylabel(r'$\mathrm{R\,[m]}$') ################################################ ####Second plot ################################################ ax2 = axarr[1] ax2.text(lettx, 0.23, '(b)', transform=ax2.transAxes) ax2.text(lettx, 0.085, 'HFS', transform=ax2.transAxes) ax2.plot(rin[:,0], rin[:,1], color='black', label='$\mathrm{Separatrix}$', linewidth=lwid) ax2.plot(rh05[:,0], rh05[:,1], color='red', label='$\mathrm{0.5[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh10[:,0], rh10[:,1], color='blue', label='$\mathrm{1.0[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh20[:,0], rh20[:,1], color='green', label='$\mathrm{2.0[10^{19}m^{-3}]}$', linewidth=lwid) m30 = rh30[:,0]>=start30 ax2.plot(rh30[m30,0], rh30[m30,1], color='magenta', label='$\mathrm{3.0[10^{19}m^{-3}]}$', linewidth=lwid) m40 = rh40[:,0]>=start40 ax2.plot(rh40[m40,0], rh40[m40,1], color='orange', label='$\mathrm{4.0[10^{19}m^{-3}]}$', linewidth=lwid) #Legend #plt.legend(loc='lower left', ncol=2, borderpad=0.2, #labelspacing=0.1, columnspacing=0.2, handlelength=1, handletextpad=0.0, fancybox=True) ax2.legend(bbox_to_anchor=(0.9,0.6), ncol=2, borderpad=0.2, labelspacing=0.1, columnspacing=0.2, handlelength=1, handletextpad=0.1, fancybox=True) ##Inner wall ax2.text(5.8, 1.054, r'$\mathrm{inner\,wall}$', fontsize='large') ax2.hlines(1.045, mintime, maxtime, color='black', lw=3) ylimax = 1.18 ylimin = 1.04 ax2.set_ylim([ylimin,ylimax]) ###Ta-da ax2.set_ylabel(r'$\mathrm{R\,[m]}$') ax2.fill_between(rin[:,0], rin[:,1], 1.5, color='black', alpha=0.25) ################################################ #### Third plot ################################################ ax6 = axarr[2] ax6.text(lettx, letty-0.05, '(c)', transform=ax6.transAxes) #sbdclr = '#000000' #ax6.plot(td, sd, color=sbdclr) #ax6.text(1.5, 3.0, r'$\mathrm{n_{e,v}\,[10^{20}m^{-3}]}$', color=sbdclr) dtotclr = '#0000FF' ax6.plot(tdtot, sdtot, color=dtotclr)#, ls='--') ax6.text(2.6, 0.4, r'$\mathrm{D\,[10^{22}e/s]}$',color=dtotclr) h1clr = 'k' ax6.plot(h1[:,0], h1[:,1]*1e-19, color=h1clr) ax6.text(2.4, 5, r'$\mathrm{n_{e,c}\,[10^{19}m^{-3}]}$',color=h1clr) ax6.plot(h98.time, h98.data[:,7]*10, color='red', lw=1) ax6.text(7.9, 0.5, r'$\mathrm{H_{98,y2}[\times10]}$', 
color='red') ax6.axhline(10, color='r', lw=0.6, ls='--') ax6.text(8.4,9.5, r'$\mathrm{H_{98,y2}=1}$', color='red',fontsize=7,va='top') ax6.set_yticks([0,2,4,6,8,10]) ax6.set_ylim((0, 11)) ############################################################################ ############################################################################ ax3 = axarr[3] ax3.text(lettx, letty, '(d)', transform=ax3.transAxes) ax3.plot(ecrh[:,0], ecrh[:,1]*1e-6, color=ecrhclr) ax3.text(8, 11.5, r'$\mathrm{ECRH\,[MW]}$',color=ecrhclr) ax3.plot(nbi[:,0], nbi[:,1]*1e-6, color=nbiclr) ax3.text(0.85, 9.0, r'$\mathrm{NBI\,[MW]}$',color=nbiclr) ax3.plot(icrh[:,0], icrh[:,1]*1e-6, color=icrhclr) ax3.text(8.0, 9.0, r'$\mathrm{ICRH\,[MW]}$',color=icrhclr) ##Heating ax3.set_xlim([mintime,maxtime]) ax3.set_yticks([0,2.5,5,7.5,10,12.5]) ax3.set_ylim([0,15]) ax3.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) ################################################################ ################################################################ # ax4 = axarr[4] # ax4.text(lettx, letty, '(e) outer div.', transform=ax4.transAxes) # ax4.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax4.transAxes) # ax4.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax4.transAxes) # ax4.plot(tel[:,0], savgol_filter(tel[:,1], 301, 3), color='b', lw=0.5) # ax4.plot(jsatl[:,0], savgol_filter(jsatl[:,1], 301, 3), color='red', lw=0.5) # ax4.set_yticks([0,5,10]) # ax4.set_ylim((0,14)) ################################################################ ################################################################ # ax5 = axarr[5] # ax5.text(lettx, letty, '(f) inner div.', transform=ax5.transAxes) # ax5.plot(jsath[:,0], savgol_filter(jsath[:,1], 601, 5), color='red', lw=0.5) # ax5.plot(teh[:,0], savgol_filter(teh[:,1], 601, 5), color='b', lw=0.5) # ax5.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax5.transAxes) # ax5.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax5.transAxes) # ax5.set_yticks([0,5,10]) # ax5.set_ylim((0,14)) # ax5.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) ################################################################ ################################################################ #for ax in axarr: # ax.tick_params(labelsize=ticfont) # ax.yaxis.label.set_size(ticfont) # ax.xaxis.label.set_size(ticfont) ticfont = 10 for ax in axarr: ax.tick_params(labelsize=ticfont) ax.yaxis.label.set_size(ticfont) ax.xaxis.label.set_size(ticfont) plt.subplots_adjust(left=0.15, bottom=0.1, right=0.99, top=0.98, wspace=0.1, hspace=0.07) #plt.subplots_adjust(left=0.1, bottom=0.08, right=0.99, top=0.97, wspace=0.1, hspace=0.07) ################################################################ ################################################################ ax2.axvspan(1.4, 2.0, color='r', alpha=0.3) #plt.savefig("layers_idh_"+str(shotnr)+".png", format='png', dpi=300) # for ax in axarr: # ax.axvline(2.25, color='r', lw=1.2) # ax.axvline(3.8, color='g', lw=1.2) # ax.axvline(4.8, color='b', lw=1.2) # ax.axvline(5.5, color='m', lw=1.2) # ax.axvline(7.0, color='orange', lw=1.2) #plt.savefig("./PresentationMST1/Figure4.png", format='png', dpi=300) plt.show() # - # # Second part of figure \#4 # + ##Global linewidth lwid = 1 mintime = 0.5 maxtime = 9.5 ##Alpha setting alphareg = 0.2 f, axarr = plt.subplots( nrows=6, ncols=1, sharex=True, sharey=False, gridspec_kw={'height_ratios':[5,5,4,4,4,4]},figsize=(6, 6)) #6, 4.75 ###Global settings ax1 = axarr[0] ax2 = 
axarr[1] ax3 = axarr[2] ax4 = axarr[3] plt.setp(ax1.get_xticklabels(), visible=False) plt.setp(ax2.get_xticklabels(), visible=False) plt.setp(ax3.get_xticklabels(), visible=False) ################################################ ####First plot ################################################ lettx = 0.04 letty = 0.85 ax1.text(lettx, letty, r'$\mathrm{(a)\,\#'+str(shotnr)+'\,LFS}$', transform=ax1.transAxes) ax1.plot(raus[:,0], raus[:,1], color='black', label='Separatrix', linewidth=lwid) ax1.plot(rl05[:,0], rl05[:,1], color='red', label='0.5', linewidth=lwid) ax1.plot(rl10[:,0], rl10[:,1], color='blue', label='1.0', linewidth=lwid) ax1.plot(rl20[:,0], rl20[:,1], color='green', label='2.0', linewidth=lwid) start30 = 0.5 m30 = rl30[:,0]>=start30 ax1.plot(rl30[m30,0], rl30[m30,1], color='magenta', label='3.0', linewidth=lwid) start40 = 0.5 m40 = rl40[:,0]>=start40 ax1.plot(rl40[m40,0], rl40[m40,1], color='orange', label='4.0', linewidth=lwid) ax1.set_ylim([2.08,2.22]) ax1.fill_between(raus[:,0], raus[:,1], 1.5, color='black', alpha=0.25) #plt.yticks([2.05,2.07,2.09,2.11,2.13,2.15,2.17,2.19]) ax1.set_ylabel(r'$\mathrm{R\,[m]}$') ################################################ ####Second plot ################################################ ax2 = axarr[1] ax2.text(lettx, 0.23, '(b)', transform=ax2.transAxes) ax2.text(lettx, 0.085, 'HFS', transform=ax2.transAxes) ax2.plot(rin[:,0], rin[:,1], color='black', label='$\mathrm{Separatrix}$', linewidth=lwid) ax2.plot(rh05[:,0], rh05[:,1], color='red', label='$\mathrm{0.5[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh10[:,0], rh10[:,1], color='blue', label='$\mathrm{1.0[10^{19}m^{-3}]}$', linewidth=lwid) ax2.plot(rh20[:,0], rh20[:,1], color='green', label='$\mathrm{2.0[10^{19}m^{-3}]}$', linewidth=lwid) m30 = rh30[:,0]>=start30 ax2.plot(rh30[m30,0], rh30[m30,1], color='magenta', label='$\mathrm{3.0[10^{19}m^{-3}]}$', linewidth=lwid) m40 = rh40[:,0]>=start40 ax2.plot(rh40[m40,0], rh40[m40,1], color='orange', label='$\mathrm{4.0[10^{19}m^{-3}]}$', linewidth=lwid) #Legend #plt.legend(loc='lower left', ncol=2, borderpad=0.2, #labelspacing=0.1, columnspacing=0.2, handlelength=1, handletextpad=0.0, fancybox=True) ax2.legend(bbox_to_anchor=(0.9,0.6), ncol=2, borderpad=0.2, labelspacing=0.1, columnspacing=0.2, handlelength=1, handletextpad=0.1, fancybox=True) ##Inner wall ax2.text(5.8, 1.054, r'$\mathrm{inner\,wall}$', fontsize='large') ax2.hlines(1.045, mintime, maxtime, color='black', lw=3) ylimax = 1.18 ylimin = 1.04 ax2.set_ylim([ylimin,ylimax]) ###Ta-da ax2.set_ylabel(r'$\mathrm{R\,[m]}$') ax2.fill_between(rin[:,0], rin[:,1], 1.5, color='black', alpha=0.25) ################################################ #### Third plot ################################################ ax6 = axarr[2] ax6.text(lettx, letty-0.05, '(c)', transform=ax6.transAxes) #sbdclr = '#000000' #ax6.plot(td, sd, color=sbdclr) #ax6.text(1.5, 3.0, r'$\mathrm{n_{e,v}\,[10^{20}m^{-3}]}$', color=sbdclr) dtotclr = '#0000FF' ax6.plot(tdtot, sdtot, color=dtotclr)#, ls='--') ax6.text(2.6, 0.4, r'$\mathrm{D\,[10^{22}e/s]}$',color=dtotclr) h1clr = 'k' ax6.plot(h1[:,0], h1[:,1]*1e-19, color=h1clr) ax6.text(2.4, 5, r'$\mathrm{n_{e,c}\,[10^{19}m^{-3}]}$',color=h1clr) ax6.plot(h98.time, h98.data[:,7]*10, color='red', lw=1) ax6.text(7.9, 0.5, r'$\mathrm{H_{98,y2}[\times10]}$', color='red') ax6.axhline(10, color='r', lw=0.6, ls='--') ax6.text(8.7,9.5, r'$\mathrm{H_{98,y2}=1}$', color='red',fontsize=7,va='top') ax6.set_yticks([0,2,4,6,8,10]) ax6.set_ylim((0, 11)) 
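## Panel (d): auxiliary heating powers (ECRH, NBI, ICRH), converted from W to MW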
############################################################################ ############################################################################ ax3 = axarr[3] ax3.text(lettx, letty, '(d)', transform=ax3.transAxes) ax3.plot(ecrh[:,0], ecrh[:,1]*1e-6, color=ecrhclr) ax3.text(8, 11.5, r'$\mathrm{ECRH\,[MW]}$',color=ecrhclr) ax3.plot(nbi[:,0], nbi[:,1]*1e-6, color=nbiclr) ax3.text(0.85, 9.0, r'$\mathrm{NBI\,[MW]}$',color=nbiclr) ax3.plot(icrh[:,0], icrh[:,1]*1e-6, color=icrhclr) ax3.text(8.0, 9.0, r'$\mathrm{ICRH\,[MW]}$',color=icrhclr) ##Heating ax3.set_xlim([mintime,maxtime]) ax3.set_yticks([0,2.5,5,7.5,10,12.5]) ax3.set_ylim([0,15]) ################################################################ ################################################################ ax4 = axarr[4] ax4.text(lettx, letty, '(e) outer div.', transform=ax4.transAxes) ax4.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax4.transAxes) ax4.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax4.transAxes) ax4.plot(tel[:,0], savgol_filter(tel[:,1], 301, 3), color='b', lw=0.5) ax4.plot(jsatl[:,0], savgol_filter(jsatl[:,1], 301, 3), color='red', lw=0.5) ax4.set_yticks([0,5,10]) ax4.set_ylim((0,14)) #ax4.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) ################################################################ ################################################################ ax5 = axarr[5] ax5.text(lettx, letty, '(f) inner div.', transform=ax5.transAxes) ax5.plot(jsath[:,0], savgol_filter(jsath[:,1], 601, 5), color='red', lw=0.5) ax5.plot(teh[:,0], savgol_filter(teh[:,1], 601, 5), color='b', lw=0.5) ax5.text(0.8, 0.2, r'$\mathrm{\Gamma_{D^{+}}[10^{22}e/s]}$', color='red', transform=ax5.transAxes) ax5.text(0.8, 0.45, r'$\mathrm{T_{e}\,[eV]}$', color='b', transform=ax5.transAxes) ax5.set_yticks([0,5,10]) ax5.set_ylim((0,14)) ax5.set_xlabel(r'$\mathrm{Time\,[s]}$', labelpad=2) ################################################################ ################################################################ #for ax in axarr: # ax.tick_params(labelsize=ticfont) # ax.yaxis.label.set_size(ticfont) # ax.xaxis.label.set_size(ticfont) plt.subplots_adjust(left=0.1, bottom=0.08, right=0.99, top=0.97, wspace=0.1, hspace=0.07) ################################################################ ################################################################ ax2.axvspan(1.4, 2.0, color='r', alpha=0.3) #plt.savefig("layers_idh_"+str(shotnr)+".png", format='png', dpi=300) for ax in axarr: ax.axvline(2.25, color='r', lw=1.2) ax.axvline(3.8, color='g', lw=1.2) ax.axvline(4.8, color='b', lw=1.2) ax.axvline(5.5, color='m', lw=1.2) ax.axvline(7.0, color='orange', lw=1.2) plt.savefig("./PresentationMST1/Figure4.png", format='png', dpi=300) plt.show() # - # # Figure 5 \#30733 profiles import numpy as np import matplotlib.pylab as plt from temp_perffromlays import * from perf2rho import perf2rho from scipy.interpolate import PchipInterpolator import dd from closest import closest from getsig import getsig import matplotlib.patches as mpatches # plt.style.use('helvet') # # + suf = ["0.300000","0.500000","0.750000","1.00000","2.00000","3.00000","3.50000"]#,"4.00000"] dens = [0.3, 0.5, 0.75, 1.0, 2.0, 3.0, 3.5]#, 4.0] xi = 2.11 xf = 2.25 shotnr = 30733 lwid = 2 times = [2.25, 3.8, 4.8, 5.5, 7.0] colorblue = '#66FFFF' colormagenta = "#CC99FF" colorred = "#FF4444" colorgreen = "#66FF99" alphaval = 0.3 ticsize = 10 labelstr1=r"$\mathrm{t="+str(times[0])+"s,P_{aux}=9.5MW}$""\n"r"$\mathrm{D=1.0e22e/s}$" 
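# The remaining legend labels follow the same pattern: time slice, auxiliary heating power P_aux, and deuterium fuelling rate D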
labelstr2=r"$\mathrm{t="+str(times[1])+"s,P_{aux}=9.5MW}$""\n"r"$\mathrm{D=4.0e22e/s}$" labelstr3=r"$\mathrm{t="+str(times[2])+"s,P_{aux}=15MW}$""\n"r"$\mathrm{D=4.0e22e/s}$" labelstr4=r"$\mathrm{t="+str(times[3])+"s,P_{aux}=15MW}$""\n"r"$\mathrm{D=7.0e22e/s}$" labelstr5=r"$\mathrm{t="+str(times[4])+"s,P_{aux}=12.7MW}$""\n"r"$\mathrm{D=6.5e22e/s}$" ###### xi = 1.04 xf = xi + 0.035 interpflag = False perfh1, dh1, drh1 = perffromlays(shotnr, dens, suf, times[0], side=0, poszero=1.001, interp=interpflag) perfh2, dh2, drh2 = perffromlays(shotnr, dens, suf, times[1], side=0, poszero=1.001, interp=interpflag) perfh3, dh3, drh3 = perffromlays(shotnr, dens, suf, times[2], side=0, poszero=1.001, interp=interpflag) perfh4, dh4, drh4 = perffromlays(shotnr, dens, suf, times[3], side=0, poszero=1.001, interp=interpflag) perfh5, dh5, drh5 = perffromlays(shotnr, dens, suf, times[4], side=0, poszero=1.001, interp=interpflag) # perf1, d1, dr1 = perffromlays(30733, dens, suf, times[0], side=1, poszero=2.22, interp=interpflag) perf2, d2, dr2 = perffromlays(30733, dens, suf, times[1], side=1, poszero=2.22, interp=interpflag) perf3, d3, dr3 = perffromlays(30733, dens, suf, times[2], side=1, poszero=2.22, interp=interpflag) perf4, d4, dr4 = perffromlays(30733, dens, suf, times[3], side=1, poszero=2.22, interp=interpflag) perf5, d5, dr5 = perffromlays(30733, dens, suf, times[4], side=1, poszero=2.22, interp=interpflag) offs = np.zeros_like(d1) offs[-1] = 0.0085 perf1 = perf1+offs perf2 = perf2+offs perf3 = perf3+offs offs[-1] = 0.0065 perf4 = perf4+offs offs[-1] = 0.0085 perf5 = perf5+offs # + ######################################################################### ##### RHO! alphaval = 0.2 equil = 'FPP' #equil = 'EQH' #HFS zhfs = 0.07 perfh1 = perf2rho(shotnr, times[0], perfh1, zhfs, equil=equil) perfh2 = perf2rho(shotnr, times[1], perfh2, zhfs, equil=equil) perfh3 = perf2rho(shotnr, times[2], perfh3, zhfs, equil=equil) perfh4 = perf2rho(shotnr, times[3], perfh4, zhfs, equil=equil) perfh5 = perf2rho(shotnr, times[4], perfh5, zhfs, equil=equil) #LFS zlfs = 0.14 perf1 = perf2rho(shotnr, times[0], perf1, zlfs, equil=equil) perf2 = perf2rho(shotnr, times[1], perf2, zlfs, equil=equil) perf3 = perf2rho(shotnr, times[2], perf3, zlfs, equil=equil) perf4 = perf2rho(shotnr, times[3], perf4, zlfs, equil=equil) perf5 = perf2rho(shotnr, times[4], perf5, zlfs, equil=equil) # + xfigs = 6 yfigs = 2.5 plt.figure(figsize=(xfigs,yfigs),dpi=150) ax1 = plt.subplot(121) xi = 0.985 xf = 1.09 dh = np.linspace(0, 3.5) asp = PchipInterpolator(dh1,perfh1) #plt.plot(perfh1, dh1, label=labelstr1, color='red', lw=lwid) plt.plot(asp(dh), dh, label=labelstr1, color='red', lw=lwid) asp = PchipInterpolator(dh2,perfh2) plt.plot(asp(dh), dh, label=labelstr2, color='green', lw=lwid) #plt.plot(perfh2, dh2, label=labelstr2, color='green', lw=lwid) asp = PchipInterpolator(dh3,perfh3) plt.plot(asp(dh), dh, label=labelstr3, color='blue', lw=lwid) asp = PchipInterpolator(dh4,perfh4) plt.plot(asp(dh), dh, label=labelstr4, color='magenta', lw=lwid) asp = PchipInterpolator(dh5,perfh5) plt.plot(asp(dh), dh, label=labelstr5, color='orange', lw=lwid) plt.xlim([xi,xf]) plt.xticks([1.0,1.04,1.08]) #plt.xticks([0.9,0.95,1.0,1.05,1.1]) plt.ylabel(r"$\mathrm{n_{e}\,[10^{19}m^{-3}}$]", fontsize=ticsize) plt.axvspan(0., 1., color='black', alpha=alphaval) plt.axvline(1.0, color='black', lw=2, ls='--') plt.xlabel(r"$\mathrm{\rho_{pol}}$", fontsize=ticsize) plt.legend(frameon=False, fontsize=8, loc='lower left', labelspacing=1.5) plt.text(0.85, 0.92, "HFS", color='r', 
fontsize=ticsize, transform=ax1.transAxes) plt.text(0.05, 0.92, "(a)", color='k', fontsize=ticsize, transform=ax1.transAxes) #######LFS xi = 0.93 xf = 1.11 ax2 = plt.subplot(122, sharey=ax1) plt.setp(ax2.get_yticklabels(), visible=False) plt.text(0.05, 0.92, "(b) \#30733", color='k', fontsize=ticsize, transform=ax2.transAxes) plt.axvline(1.0, color='black', lw=2, ls='--') plt.axvspan(0., 1., color='black', alpha=alphaval) asp = PchipInterpolator(d1,perf1) plt.plot(asp(dh), dh, label=labelstr1, color='red', lw=lwid) rhol1 = interp1d(asp(dh), dh, fill_value='extrapolate') np.savetxt('SupportFiles/perf_attach.txt', np.c_[asp(dh), dh]) #linind = closest(nelin.time, times[0]) #plt.plot(nelin.area.data[linind,:], nelin.data[linind,:]*1e-19, color='r', ls='--', lw=1) asp = PchipInterpolator(d2,perf2) plt.plot(asp(dh), dh, label=labelstr2, color='green', lw=lwid) rhol2 = interp1d(asp(dh), dh, fill_value='extrapolate') #linind = closest(nelin.time, times[1]) #plt.plot(nelin.area.data[linind,:], nelin.data[linind,:]*1e-19, color='g', ls='--', lw=1) asp = PchipInterpolator(d3,perf3) plt.plot(asp(dh), dh,label=labelstr3, color='blue', lw=lwid) rhol3 = interp1d(asp(dh), dh, fill_value='extrapolate') #linind = closest(nelin.time, times[2]) #plt.plot(nelin.area.data[linind,:], nelin.data[linind,:]*1e-19, color='blue', ls='--', lw=1) asp = PchipInterpolator(d4,perf4) plt.plot(asp(dh), dh, label=labelstr4, color='magenta', lw=lwid) rhol4 = interp1d(asp(dh), dh, fill_value='extrapolate') #linind = closest(nelin.time, times[3]) #plt.plot(nelin.area.data[linind,:], nelin.data[linind,:]*1e-19, color='magenta', ls='--', lw=1) asp = PchipInterpolator(d5,perf5) plt.plot(asp(dh), dh, label=labelstr5, color='orange', lw=lwid) rhol5 = interp1d(asp(dh), dh, fill_value='extrapolate') np.savetxt('SupportFiles/perf_attach.txt', np.c_[asp(dh), dh]) #linind = closest(nelin.time, times[4]) #plt.plot(nelin.area.data[linind,:], nelin.data[linind,:]*1e-19, color='orange', ls='--', lw=1) #plt.axvline(rausval, color='black', lw=2, ls='--') plt.xlabel(r"$\mathrm{\rho_{pol}}$", fontsize=ticsize) plt.text(0.85, 0.92, "LFS", color='b', transform=ax2.transAxes) plt.xlim([xi, xf]) plt.xticks([0.95,1.0,1.05,1.1]) plt.ylim([0,4.2]) plt.subplots_adjust(left=0.1, bottom=0.16, right=0.98, top=0.98, wspace=0.04, hspace=0.07) #plt.savefig("rho_"+equil+"_30733_all.png", dpi=300) #plt.savefig("./Figures/Figure5.png", dpi=300) plt.show() # - # # Figure 6 rho scan import numpy as np import matplotlib.pylab as plt from temp_perffromlays import * from perf2rho import perf2rho from scipy.interpolate import PchipInterpolator import dd from closest import closest from getsig import getsig import pandas as pd import matplotlib.patches as mpatches # plt.style.use('helvetpres') # ndf = pd.DataFrame.from_csv('./SupportFiles/Pressure_Fuel_30733.csv') # + mkr_dict = {'s': 's', '*': '*'} markr = ['o', 'o', '*', '*', '*'] #colors = ['red', 'green', 'blue', 'magenta', 'orange'] colors = ['#17becf', '#bcbd22','#7f7f7f','#e377c2','#8c564b'] #colors = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728','#9467bd', '#8c564b', '#e377c2', '#7f7f7f','#bcbd22', '#17becf'] #colors = ['#d62728','#9467bd', '#8c564b', '#e377c2', '#7f7f7f','#bcbd22', '#17becf'] plt.figure(figsize=(3,3), dpi=100) for i, ind in zip(ndf.columns[1:6], range(len(ndf.columns[1:6]))): for j in range(len(ndf['Fuel'])): plt.scatter(ndf['Fuel'][j]*1e-22, ndf[i][j]*100-100, s=60, marker=markr[j], c=colors[ind]) rpa = mpatches.Patch(color=colors[0], label='0.90') gpa = mpatches.Patch(color=colors[1], 
label='1.00') bpa = mpatches.Patch(color=colors[2], label='1.02') mpa = mpatches.Patch(color=colors[3], label='1.04') opa = mpatches.Patch(color=colors[4], label='1.06') circar = plt.Line2D((0,1),(0,0), color='k', marker='o', linestyle='', label='9.2MW', markersize=10) starar = plt.Line2D((0,1),(0,0), color='k', marker='*', linestyle='', label='14.6MW', markersize=10) plt.legend(loc='upper left', handles=[rpa,gpa,bpa,mpa,opa,circar,starar], fontsize=9, frameon=False) plt.ylabel('Density percentual change', fontsize=10) plt.xlabel(r'$\mathrm{Fuel\,[10^{22}e/s]}$', fontsize=10) plt.xlim((0.9,7.1)) plt.yticks([0,20,40,60,80,100,120,140,160], ['0\%','20\%','40\%','60\%','80\%','100\%','120\%','140\%','160\%'], fontsize=9) plt.xticks([1,2,3,4,5,6,7], fontsize=9) plt.text(3.2, 145, '\#30733', fontsize=9) #plt.show() #plt.subplots_adjust(left=0.1, bottom=0.12, right=0.99, top=0.97, wspace=0.1, hspace=0.07) plt.subplots_adjust(left=0.21, bottom=0.155, right=0.99, top=0.97, wspace=0.1, hspace=0.07) plt.savefig("./Figures/Figure6.png", dpi=300) # - # # Figure 7 Baratron comparison import matplotlib.pyplot as plt import numpy as np import pickle from getsig import getsig from scipy.signal import medfilt # plt.style.use('helvet2') # shotnr = 30733 inner, outer, tme, newalf, filtlfsnsep, ht, hs, dt, ds, nbt, nbs, h98t, h98s = pickle.load(open('./SupportFiles/baratron_comparison.30733','rb')) f11 = getsig(shotnr, 'IOC', 'F11') f05 = getsig(shotnr, 'IOC', 'F05') conv_factor = 1.5e23 # + f, axarr = plt.subplots(3, sharex=True, figsize=(4,4),dpi=100) ax = axarr[0] fsize = 10 ax.plot(outer.time, outer.data*1e2, label=r"$\mathrm{B-LFS\,[Pa]}$", color='darkblue', ls='--') ax.plot(filtlfsnsep[:,0], filtlfsnsep[:,1], label=r"$\mathrm{n_{sep,LFS}\,[10^{19}m^{-3}]}$", color='c', lw=0.5) ax.plot(f05.time, f05.data/conv_factor, label=r"$\mathrm{F05\,[Pa]}$", color='k', lw=0.5, zorder=1) ax.text(0.15, 3.0, '\#30733') #ax.plot(middle.time, middle.data, label=middle.description) ax.set_xlim(0,9) ax.set_ylim(0,6.5) #ax.set_ylabel('Pressure ['+inner.unit+']') ax.set_yticks([0,1,2,3,4,5,6]) ax.legend(frameon=False, loc='upper left',borderpad=0.2,ncol=2, labelspacing=0.15, columnspacing=0.25, handlelength=1, handletextpad=0.3) #Inner divertor data ax = axarr[1] ax.plot(inner.time, inner.data*1e2, label=r"$\mathrm{B-HFS\,[Pa]}$", color='darkred', ls='--') ax.plot(tme, newalf, label=r"$\mathrm{n_{wall,HFS}\,[10^{19}m^{-3}]}$", color='m', lw=0.5) ax.plot(f11.time, medfilt(f11.data,9)/conv_factor, label=r"$\mathrm{F11\,[Pa]}$", color='k', lw=0.5, zorder=1) ax.legend(frameon=False, loc='upper left',borderpad=0.2,ncol=2, labelspacing=0.15, columnspacing=0.25, handlelength=1, handletextpad=0.3) ax.set_yticks([0,1,2,3,4,5,6]) ax.set_ylim(0,6.5) ax = axarr[2] ax.plot(ht, hs*1e-19, label=r'$\mathrm{n_{e,core}\,[10^{19}m^{-3}]}$', color='orangered') ax.plot(dt, ds*1e-22, label=r'$\mathrm{D\,[10^{22}e/s]}$', color='#006600') ax.plot(nbt, nbs*1e-6, label=r'$\mathrm{NBI\,[MW]}$', color='k') #ax.plot(h98t, h98s[:,7]*10, label=r'$\mathrm{H_{98,y2}\times10}$', color='b', lw=0.5) #ax.plot(h5t, h5s, label='H-5') ax.set_ylim(0,16) #ax.set_yticks([0,2,4,6,8,10,12,14]) ax.legend(frameon=False, loc='upper left',borderpad=0.2,ncol=2, labelspacing=0.15, columnspacing=0.25, handlelength=1, handletextpad=0.3) #ax.axhline(10, color='b', ls='--', lw=0.6) #ax.text(6.7, 10.1, r'$\mathrm{H_{98,y2}=1}$', va='bottom', color='b') ##Add vertical bars for ax in axarr: ax.axvline(x=2.6, color='k', ls='--',lw=0.6) ax.axvline(x=3.95, color='k', ls='--',lw=0.6) 
ax.axvline(x=5.1, color='k', ls='--',lw=0.6) ax.axvline(x=7.6, color='k', ls='--',lw=0.6) #for ax in axarr: # for tme in timesfit: # ax.axvline(x=tme, color='k', ls='--', lw=0.6) ax.set_xlabel('Time [s]') plt.tight_layout() plt.subplots_adjust(left=None, bottom=0.1, right=0.96, top=0.98, wspace=0.04, hspace=0.07) #plt.savefig('barall_' + str(shotnr) + 'nolines.png', dpi=300) #plt.savefig('barall_' + str(shotnr) + '.png', dpi=300) plt.savefig('./PresentationMST1//Figure7.png', dpi=300) plt.show() # - # # Figure 8 Kallenbach pressure fit # + from scipy.optimize import * from R2calc import R2calc import matplotlib.pyplot as plt import numpy as np from scipy.interpolate import interp1d import pickle inner, outer, tme, newalf, filtlfsnsep, ht, hs, dt, ds, nbt, nbs, h98t, h98s = pickle.load(open('./SupportFiles/baratron_comparison.30733','rb')) npts = 15 timesfit = np.linspace(1.8, 7.5, npts, endpoint=True) #timesfit = np.array([2.0,2.2,2.4,3.0,3.2,3.6,3.8,4.0,4.1,4.3,4.4, 5.5,6.0,6.5,7.0]) nsepint = interp1d(filtlfsnsep[:,0], filtlfsnsep[:,1]) ldvinp = interp1d(outer.time, outer.data*1e2) xx = ldvinp(timesfit) yy = nsepint(timesfit) # + def fitfunc(x, a, b,): return a*x**b guess = [2.0, 0.4] popt, pcov = curve_fit(fitfunc, xx, yy, p0=guess) #The optimised parameters of the fit print "popt:", popt #One standard deviation errors on the parameters. perr = np.sqrt(np.diag(pcov)) print "perr:", perr #The covariance matrix of the parameters print "pcov:", pcov R2=R2calc(yy,fitfunc(xx,*popt)) print "R2:", R2 # - plt.figure(figsize=(3,2.5), dpi=100) plt.scatter(ldvinp(timesfit), nsepint(timesfit), label='Data', s=20) tt = np.linspace(0.4,3) plt.plot(tt, fitfunc(tt, *popt), label='Fit, k=%0.2f'%popt[1], color='k') #plt.legend(frameon=False, loc='lower right') plt.text(0.4,3.3,"\#30733") plt.text(1.6,1.74,r'$\mathrm{n_{e,sep}=%0.2f\,P_{0}^{\,%0.2f}}$'%(popt[0],popt[1]),fontsize='10') plt.text(1.6,1.5,r'$\mathrm{R^{2}=%0.2f}$'%(R2),fontsize='10') plt.xlabel('Outer div. 
Pressure [Pa]') plt.ylabel(r'$\mathrm{n_{e,sep}[10^{19}m^{-3}]}$') #plt.ylabel(r'$\mathrm{n_{e,sep}\,[10^{19}m^{-3}]}\,$') plt.tight_layout() plt.subplots_adjust(left=0.17, bottom=0.165, right=0.98, top=0.98, wspace=0.04, hspace=0.07) plt.savefig('./Figures/Figure8.png',dpi=300) # # Figure 9 Typical ELM cycle # + import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1.inset_locator import inset_axes from sigelmsync import sigelmsync from starkelmsync import starkelmsync from readStark import * import matplotlib.cm as cm import matplotlib.colors as colors from getsig import getsig import dd # plt.style.use('helvet') # def smooth(data,N=5): return np.convolve(data,np.ones(N)/N,'same'); def smooth2d(data,N): for i in range(len(data[0,:])): data[:,i] = smooth(data[:,i],N) return data shotnr = 30554 ti = 2.52 tf = 2.55 #ti = 2.51 #tf = 2.57 refside="out" divside=refside divitem=["jsat","nev"] lwid = 1 if divside=='in': diag_side = 'i' else: diag_side = 'a' rdl = dd.shotfile("RDL", shotnr, edition=0) ####Parameters for Refplot refsgr = "HFSR" minyvalh = 1.04 maxyvalh = 1.18 refstrh = r"$\mathrm{HFS\,R[m]}$" hfsr = rdl(refsgr, tBegin=ti-0.002, tEnd=tf+0.002) refsgr = "LFSR" minyvall = 2.08 maxyvall = 2.22 refstrl = r"$\mathrm{LFS\,R[m]}$" lfsr = rdl(refsgr, tBegin=ti-0.002, tEnd=tf+0.002) rdl.close() ##Separatix### fpg = dd.shotfile("FPG", shotnr) separatrix_signal = "Raus" sepl = fpg(separatrix_signal, tBegin=ti-0.002, tEnd=tf+0.002) separatrix_signal = "Rin" seph = fpg(separatrix_signal, tBegin=ti-0.002, tEnd=tf+0.002) fpg.close() ############## #ELM time elmtime = 2.52474 scale=1e-3 tipl = (ti-elmtime)/scale tfpl = (tf-elmtime)/scale #Jsat prbin = np.loadtxt('/home/guimas/Documents/Publications/hmode2016/Figures/30554/probedata/LSF.ui6') prbou = np.loadtxt('/home/guimas/Documents/Publications/hmode2016/Figures/30554/probedata/LSF.ua6') #Ipolsol spou = getsig(shotnr, 'MAC', 'Ipolsola', tBegin=ti-0.002, tEnd=tf+0.002) spin = getsig(shotnr, 'MAC', 'Ipolsoli', tBegin=ti-0.002, tEnd=tf+0.002) #Dalpha sdou = getsig(shotnr, 'POT', 'ELMa-Han', tBegin=ti-0.002, tEnd=tf+0.002) sdin = getsig(shotnr, 'POT', 'ELMi-Han', tBegin=ti-0.002, tEnd=tf+0.002) #Probe Temperature stou = getsig(shotnr, 'LSD', 'te-ua6', tBegin=ti-0.002, tEnd=tf+0.002) stin = getsig(shotnr, 'LSD', 'te-ui6', tBegin=ti-0.002, tEnd=tf+0.002) #Edge interferomter h5 = getsig(shotnr, 'DCN', 'H-5') # + fig = plt.figure(figsize=(3, 5)) gs = mpl.gridspec.GridSpec(3, 2, height_ratios=[1, 1, 1], width_ratios=[4, 1]) ax1 = fig.add_subplot(gs[0, 0]) ax2 = fig.add_subplot(gs[1, 0], sharex=ax1) ax3 = fig.add_subplot(gs[2, 0], sharex=ax1) ax1.text(0.03, 0.9, '(a)', transform = ax1.transAxes) ax2.text(0.03, 0.9, '(b)', transform = ax2.transAxes) ax3.text(0.03, 0.9, '(c)', transform = ax3.transAxes) ############################### First plot #Plot Raus ax1.plot((sepl.time-elmtime)/scale, sepl.data, color='k', lw=lwid) ax1.fill_between((sepl.time-elmtime)/scale, sepl.data, 1.6, color='black', alpha=0.25) ax1.set_title('\#'+str(shotnr) + r'$\mathrm{\,t_{0}=2.52474\,s}$', loc='left') #Plot the layers nchans = 11 ## Check densities used (shortcut without dd) mindens = 0.5 maxdens = 3.25 dens = np.linspace(mindens, maxdens, 12) fth = [] frh = [] fne = [] for i in range(nchans): fth.append(lfsr.time) frh.append(lfsr.data[:,i]) fne.append(dens[i] * np.ones(len(hfsr.time))) ######## fth = np.concatenate(fth) frh = np.concatenate(frh) fne = np.concatenate(fne) ######## cmap = cm.get_cmap('jet') sc = 
ax1.scatter(fth, frh, c=fne, s=4.0, lw=0, cmap=cmap) sc.remove() clrs = cm.jet(np.linspace(0, 1, nchans)) for i in range(nchans): ax1.plot((lfsr.time-elmtime)/scale, lfsr.data[:, i], lw=lwid, color=clrs[i]) #cont = ax1.contourf(x, y, z, 20) #plt.tick_params(which='both', top=False, right=False) plt.setp(ax1.get_xticklabels(), visible=False) ax1.set_xlim(tipl,tfpl) ax1.set_ylim(minyvall, maxyvall) ax1.set_ylabel(refstrl) ax1.hlines(1.045, tipl, tfpl, color='black', lw=3) axins = inset_axes(ax1, width="5%", # width = 10% of parent_bbox width height="100%", # height : 50% loc=6, bbox_to_anchor=(1.05, 0., 1, 1), bbox_transform=ax1.transAxes, borderpad=0, ) cbar = plt.colorbar(sc, cax=axins) cbar.set_label(r'$\mathrm{n_{e}\,[10^{19}m^{-3}]}$') ##################### ## Second plot ##################### plt.setp(ax2.get_xticklabels(), visible=False) #Plot Raus ax2.plot((seph.time-elmtime)/scale, seph.data, color='k', lw=lwid) ax2.fill_between((seph.time-elmtime)/scale, seph.data, 1.6, color='black', alpha=0.25) #Plot the layers nchans = 11 ## Check densities used (shortcut without dd) mindens = 0.5 maxdens = 3.25 dens = np.linspace(mindens, maxdens, 12) fth = [] frh = [] fne = [] for i in range(nchans): fth.append(hfsr.time) frh.append(hfsr.data[:,i]) fne.append(dens[i] * np.ones(len(hfsr.time))) ######## fth = np.concatenate(fth) frh = np.concatenate(frh) fne = np.concatenate(fne) ######## cmap = cm.get_cmap('jet') sc = ax1.scatter((fth-elmtime)/scale, frh, c=fne, s=4.0, lw=0, cmap=cmap) sc.remove() clrs = cm.jet(np.linspace(0, 1, nchans)) for i in range(nchans): ax2.plot((hfsr.time-elmtime)/scale, hfsr.data[:, i], lw=lwid, color=clrs[i]) #cont = ax1.contourf(x, y, z, 20) #plt.tick_params(which='both', top=False, right=False) plt.setp(ax2.get_xticklabels(), visible=False) ax2.set_xlim(tipl,tfpl) ax2.set_ylim(minyvalh, maxyvalh) ax2.set_ylabel(refstrh) ax2.hlines(1.045, tipl, tfpl, color='black', lw=3) axins = inset_axes(ax2, width="5%", # width = 10% of parent_bbox width height="100%", # height : 50% loc=6, bbox_to_anchor=(1.05, 0., 1, 1), bbox_transform=ax2.transAxes, borderpad=0, ) cbar = plt.colorbar(sc, cax=axins) cbar.set_label(r'$\mathrm{n_{e}\,[10^{19}m^{-3}]}$') ##################### ## Third plot ##################### ax3.plot((spin.time-elmtime)/scale, np.abs(spin.data*1e-3), lw=0.5, label=r'HFS', color='r') ax3.plot((spou.time-elmtime)/scale, np.abs(spou.data*1e-3), lw=0.5, label=r'LFS', color='b') ax3.set_ylabel(r'$\mathrm{|I_{div}|\,[kA]}$', color='k') ax3.set_ylim(0,20) ax3twin = ax3.twinx() ax3twin.plot((h5.time-elmtime)/scale, h5.data*1e-19, color='g', lw=1) ax3twin.set_ylabel(r'$\mathrm{n_{e,edge}\,[10^{19}m^{-3}]}$', color='g') ax3twin.set_ylim(4.5,5.5) #ax3.legend(frameon=False) ax3.set_xlabel('$\mathrm{t-t_{ELM}\,[ms]}$') ##################### #plt.tight_layout() plt.subplots_adjust(left=0.19, right=0.94, hspace=0.08, bottom=0.09, top=0.94) plt.savefig('./Figures/Figure9a.png', dpi=300) # + fig = plt.figure(figsize=(3, 5), dpi=100) gs = mpl.gridspec.GridSpec(3, 2, height_ratios=[1, 1, 1], width_ratios=[4, 1]) ax4 = fig.add_subplot(gs[0, 0]) ax5 = fig.add_subplot(gs[1, 0], sharex=ax4) ax6 = fig.add_subplot(gs[2, 0], sharex=ax4) ax4.text(0.03, 0.9, '(d)', transform = ax4.transAxes) ax5.text(0.03, 0.9, '(e)', transform = ax5.transAxes) ax6.text(0.03, 0.9, '(f)', transform = ax6.transAxes) ##################### ## First plot ##################### plt.setp(ax4.get_xticklabels(), visible=False) ax4.set_title('\#'+str(shotnr) + r'$\mathrm{\,t_{0}=2.52474\,s}$', loc='left') 
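# Panel (d): ELM-synchronised D+ flux at the inner (HFS) and outer (LFS) strike-point probes, in 10^22 m^-2 s^-1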
ax4.plot((prbin[:,0]-elmtime)/scale, prbin[:,1]*1e-22, lw=0.5, label=r'HFS', color='r') ax4.plot((prbou[:,0]-elmtime)/scale, prbou[:,1]*1e-22, lw=0.5, label=r'LFS', color='b') ax4.set_ylabel(r'$\mathrm{\Gamma_{D^{+},SP}\,[10^{22}m^{-2}s^{-1}]}$', color='k') ax4.set_ylim(0, 30) ax4.legend(frameon=False) ##################### ## Second plot ##################### plt.setp(ax5.get_xticklabels(), visible=False) ax5.plot((stin.time-elmtime)/scale, stin.data, lw=0.5, label=r'HFS', color='r') ax5.plot((stou.time-elmtime)/scale, stou.data, lw=0.5, label=r'LFS', color='b') ax5.set_ylabel(r'$\mathrm{T_{e,SP}\,[eV]}$', color='k') ax5.set_xlim(tipl,tfpl) ax5.set_ylim(0,32) ##################### ## Third plot ##################### ax6.plot((sdou.time-elmtime)/scale, sdou.data, lw=0.5, color='b', label=r'$\mathrm{D^{\alpha}\,[AU]}$') ax6.plot((sdin.time-elmtime)/scale, sdin.data, lw=0.5, color='r', label=r'$\mathrm{D^{\alpha}\,[AU]}$') ax6.set_ylabel(r'$\mathrm{D^{\alpha}\,[AU]}$', color='k') ax6.set_xlim(tipl,tfpl) ax6.set_ylim(0, 0.75) ax6.set_xlabel('$\mathrm{t-t_{ELM}\,[ms]}$') ##################### #plt.tight_layout() plt.subplots_adjust(left=0.19, right=0.94, hspace=0.08, bottom=0.09, top=0.94) plt.savefig('./Figures/Figure9b.png', dpi=300) # + language="bash" # convert +append ./Figures/Figure9a.png ./Figures/Figure9b.png ./Figures/Figure9.png # - # # Figures of ELM cycles # # Confinement analysis # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.5.3 # language: julia # name: julia-1.5 # --- # # Recitation 1: a crash course in Julia and JuMP # # Julia # JuMP # ## 1. Why Julia/JuMP? # # - Julia is a "high-level, high-performance dynamic programming language for technical computing." Think the linear algebra power of Matlab, with the speed of C and the readability of Python. # - JuMP is a library that allows us to easily formulate optimization problems and solve them using a variety of solvers. It provides an easy interface to implement advanced optimization techniques. # - Check out this [talk](https://github.com/johnfgibson/whyjulia/blob/master/1-whyjulia.ipynb]) for more details on why Julia is awesome. # ## 2. Julia basics # # ### 2.1 Jupyter # # #### What is a Jupyter notebook? # # - Jupyter notebooks are **documents** (like a Word document) that can contain and run code. # - They were originally created for Python as part of the IPython project, and adapted for Julia by the **IJulia** project. # - They are very useful to **prototype**, draw **plots**, or even for teaching material like this one. # - The document relies only on a modern browser for rendering, and can easily be **shared**. # # #### How do I even open this file? # # Once Julia is installed, start julia and just run the following commands to install the `IJulia` package. # ```jl # using Pkg # Pkg.install("IJulia") # ``` # This should work on its own. If there is any issue, check out the [IJulia website](https://github.com/JuliaLang/IJulia.jl). # # Once IJulia is installed, go to the directory containing the notebook file (`Recitation 1.ipynb`), start julia and run: # ```jl # using IJulia # notebook() # ``` # A webpage should open automatically, just click on the notebook to load it. # #### Navigating the notebook # # - Click `Help -> User Interface Tour` for a guided tour of the notebook interface. # - Each notebook is composed of **cells**, that either contain code or text (`Markdown`). 
# - You can edit the content of a cell by double-clicking on it (_Edit Mode_). # # When you are not editing a cell, you are in _Command mode_ and can edit the structure of the notebook (cells, name, options...) # # - Create a cell by: # - Clicking `Insert -> Insert Cell` # - Pressing `a` or `b` in Command Mode # - Pressing `Alt+Enter` in Edit Mode # - Delete a cell by: # - Clicking `Edit -> Delete Cell` # - Pressing `dd` # - Execute a cell by: # - Clicking `Cell -> Run` # - Pressing `Ctrl+Enter` # - Pressing `Shift+Enter` (this will also move your focus to the next cell) # # Other functions: # - Undo last text edit with `Ctrl+z` in Edit Mode # - Undo last cell manipulation with `z` in Command Mode # - Save notebook with `Ctrl+s` in Edit Mode # - Save notebook with `s` in Command Mode # # Though notebooks rely on your browser to work, they do not require an internet connection (except for math rendering). # ### 2.2 How to Julia # # Julia, as a dynamic language, can simply be used as a calculator: 1+1 sin(exp(2*pi)+sqrt(3)) # The key building blocks of Julia code are variables: a = 1 b = 2 # This is a comment c = a^2 + b^3 # Julia supports the usual `if`, `while` and `for` structures: if c >= 10 print("Yes") else print("No") end i = 1 while i <= 5 println("Why, hello!") # Print with a new line i += 1 end for i = 1:3 print("$i banana") # '$' can be used to insert variables into text if i>1 print("s") end println() # Just a new line end # **Do not worry about writing loops**: in Julia, they are as fast as writing vectorized code, and sometimes faster! # **Arrays** (list of numbers) are at the core of research computing and Julia's arrays are extremely optimized. myList = [6, 7, 8] # Array indexing starts with 1 in Julia, and arrays are mutable. @show myList[1] myList[3] = 4 @show myList; # A 2-dimensional array is a Matrix # + A = [1 2 3 2 1 2 3 2 1] A = [1 2 3; 2 1 2; 3 2 1] #same thing # - # ## 2.3 Reading data - CSV and DataFrames # You can install these packages with: using Pkg Pkg.add("CSV") Pkg.add("DataFrames") using DataFrames, CSV # We're going to load the data for our optimization example, the transportation problem, where factories and markets are both located in the 2D plane. # - `data/supply.csv` has one row per factory, with columns for the (x, y) coordinates, and a column for the capacity # - `data/demand.csv` has one row per market, with columns for the (x, y) coordinates, and a column for the demand supply = CSV.read("data/supply.csv", DataFrame) demand = CSV.read("data/demand.csv", DataFrame); first(demand, 5) # ## 3. Basics of JuMP # # Now we will use this data to formulate and solve the transportation problem. First, we need to install a solver. A good choice is the Gurobi solver. You can follow [these instructions](https://github.com/jump-dev/Gurobi.jl) to install both Gurobi and its Julia wrapper `Gurobi.jl`. # # Then we can load JuMP and Gurobi. using JuMP, Gurobi # We're going to use JuMP to "translate" our transportation problem (see slides) into something that Gurobi can solve. "Function to build the transportation model, returns model and decision variable handles" function build_transportation_model(supply::DataFrame, demand::DataFrame) # initialize the model, and specify the solver model = Model(Gurobi.Optimizer) # Decision variables # Capacity constraint # Demand constraint # Objective return model, x end # We can now build the optimization model. Notice that Jupyter can display the model (but beware output overload for large models). 
model, x = build_transportation_model(supply, demand)

model

x

# Now we can solve the model using the `optimize!` command. The `!` is a Julia convention that indicates that the function modifies its argument (in this case, by solving the optimization problem).

optimize!(model)

# Now we can extract the optimal objective:

objective_value(model)

# We can also obtain the optimal variable values:

value(x[1, 4])

[value(x[i, j]) for i=1:nrow(supply), j=1:nrow(demand)]

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# Open In Colab

# + [markdown] id="gGoxGohenSgA"
# ------
# # TwoSixLabs DDoS CS652 Project
#
# ------
# Modified by
#
# The aim of this project was to investigate the detection of DDoS attacks, which affect many systems.
#
# Files used in this project are located at: /content/drive/MyDrive/CS652101-Network-ArchProtocols/Twosixlabs
#
# Project requirements: https://pantelis.github.io/cs652/docs/cloud/projects/ddos-detection/
#
# ------
#
# ___For this project, data from various sources were used___
#
# This is based on "Detecting DDoS Attacks with Machine Learning" from twosixlabs.
# Based on: https://www.twosixlabs.com/detecting-ddos-attacks-with-machine-learning/
#
# The dataset used to run this script was taken from the "Two Six Labs Internet Disruption Dataset". Due to space limitations, only the "Indian Ocean Earthquake" (215 MB) and "Italy Blackout" (188.8 MB) subsets were used here; the script could not be run with some of the other datasets.
# - Two Six Labs Internet Disruption Dataset (BGP)
# - https://osf.io/ywz4t/
#
# MaxMind's GeoLite database provides the approximate location of the IP addresses in each CIDR block.
# - MaxMind's GeoLite2 free database
# - https://dev.maxmind.com/geoip/geoip2/geolite2/
#
# ------

# + id="xxVPxKBBiOjD"
import pickle
import random
import numpy as np
import pandas as pd
import seaborn as sb
import matplotlib.pyplot as plt
from collections import Counter

# + id="HOI88l9Y4zdc"
from sklearn.decomposition import PCA
from sklearn.utils import class_weight
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import log_loss, accuracy_score, roc_auc_score, matthews_corrcoef

# + [markdown] id="yPxVJfoRcOyH"
# sklearn is used here for the random forest classification algorithm, as well as for the evaluation metrics and the stratified train/test splitting.

# + id="yel_eoAJlvor"
random.seed(42)
np.random.seed(42)
random_state = 42

# + [markdown] id="wizCEkRl5lu8"
# A fixed random seed is set and is used by the random forest classification algorithm in this scenario, so the results are reproducible.
# + id="sfBUlRegl47R" # Global Parameters DATA_FOLDER = "/content/drive/MyDrive/CS652101-Network-ArchProtocols/Twosixlabs/data_100/" LABEL_FILENAME = "internet_disruptions.tsv" CAUSES = ["Natural Disaster", "DDoS", "Power Outage", "BGP Update", "Misconfiguration", "Cable Cut", "none"] # User-Specifiable Parameters CAUSE_INDEX = 1 # which cause to evaluate DATA_LIMIT = 1 # how many countries/cities/organizations to use (max 100) TRIALS = 50 # trials STEP = 3 # ensures no poisoning of results (DO NOT CHANGE) PCA_DIM = 40 # projection dimension (should be <= DATA_LIMIT) LEARN_METHOD = 1 # If user would like to use Random Forest Classifier method LEARN_METHOD should be 1, otherwise 2 will be used if Logistic Regression is favored # + [markdown] id="xuqaPp9zBnja" # Specifying user parameters such whether to Random Forest method or if Logistic Regression. As well as how many samples are created per each attack. # + id="WCvmx6jJl7lZ" def process_cause(x): if CAUSES[CAUSE_INDEX].lower() in x.lower(): return "positive_class" return "negative_class" # + id="4HumtPrZmEb6" def preprocess_data(data_folder=DATA_FOLDER, label_filename=LABEL_FILENAME, data_limit=DATA_LIMIT): nides = pd.read_csv("/content/drive/MyDrive/CS652101-Network-ArchProtocols/Twosixlabs/internet_disruptions.tsv", sep="\t")[["common name", "cause"]].dropna() for key in nides["common name"].values: nides = nides.append({"common name":f"before_{key}","cause":"none"}, ignore_index=True) nides["cause"] = nides.cause.apply(process_cause) keep_keys = [key for key,value in Counter(nides.cause.values).items()] keep_keys = list(nides[nides.cause.isin(keep_keys)]["common name"].values.ravel()) nides = nides[nides["common name"].isin(keep_keys)] lbl = [] for key in keep_keys: cause = nides[nides["common name"] == key].cause.values[0] lbl.append(cause) y = np.zeros((len(lbl)),dtype=int) for i, cause in enumerate(set(lbl)): y[np.where(np.array(lbl)==cause)[0]] = i X = [] Y = [] for i,key in enumerate(sorted(keep_keys)): try: tmp = pickle.load(open(f"{data_folder}{key}_100.attr", "rb")) [X.append(tmp[j].values.T[:,:data_limit]) for j in range(3)] [Y.append(y[i]) for j in range(3)] except Exception as e: continue data_full = np.array(X) print(data_full.shape) data_labels = np.vstack(Y).ravel() data_pos = data_full[data_labels == 1] data_neg = data_full[data_labels == 0] np.random.shuffle(data_pos) np.random.shuffle(data_neg) min_len = np.min([len(data_pos), len(data_neg)]) X = np.vstack([data_pos[:min_len], data_neg[:min_len]]) y = np.vstack([np.ones((min_len,1)),np.zeros((min_len,1))]) return X,y # + [markdown] id="hMpmOij4oD1k" # Down sampling is performed to control class inbalance. This is because all data is not part of the DDoS attack. As mentioned in TwoSixlabs article, several days of data were also included. 
# + id="a7q4x2oomA4Z" def display_results(acc_test,auc_test,mcc_test): print("mean +/- std [min, max]") print(f"{np.round(np.mean(acc_test),4)} +/- {np.round(np.std(acc_test),4)} [{np.round(np.min(acc_test),4)}-{np.round(np.max(acc_test),4)}]") print(f"{np.round(np.mean(auc_test),4)} +/- {np.round(np.std(auc_test),4)} [{np.round(np.min(auc_test),4)}-{np.round(np.max(auc_test),4)}]") print(f"{np.round(np.mean(mcc_test),4)} +/- {np.round(np.std(mcc_test),4)} [{np.round(np.min(mcc_test),4)}-{np.round(np.max(mcc_test),4)}]") plt.figure(figsize=(15,5)) data = np.hstack([np.array(acc_test)[:,None],np.array(auc_test)[:,None],np.array(mcc_test)[:,None]]) df = pd.DataFrame(data,columns=["Test Accuracy","Test AUC","Test Matthew Correlation Coefficient"]) sb.boxplot(data=df, orient="h") plt.title("Detecting Denial-of-Service Attacks\nDistribution of Metrics on Test Set") plt.show() # + id="UE_GRUvAl-YB" # Start of Evaluation X,y = preprocess_data() auc_test = [] acc_test = [] mcc_test = [] for trial in range(TRIALS): index = list(range(0, len(X), STEP)) indices = list(StratifiedShuffleSplit(n_splits=1, test_size=.15).split(index, y[::STEP])) train_idx = indices[0][0] test_idx = indices[0][1] train_idx = np.array([[index[i], index[i]+1, index[i]+2] for i in train_idx]).ravel() test_idx = np.array([[index[i], index[i]+1, index[i]+2] for i in test_idx]).ravel() x_train = X[train_idx] x_test = X[test_idx] y_train = y[train_idx].ravel() y_test = y[test_idx].ravel() for i in range(len(x_train)): x_train[i] = np.divide(x_train[i], np.expand_dims(np.max(x_train[i], axis=-1)+1E-3, axis=-1)) for i in range(len(x_test)): x_test[i] = np.divide(x_test[i], np.expand_dims(np.max(x_test[i], axis=-1)+1E-3, axis=-1)) x_train = x_train - 0.5 x_test = x_test - 0.5 pca = PCA(n_components=PCA_DIM) pca.fit(np.squeeze(np.mean(x_train, axis=-1))) pca_x_train = pca.transform(np.squeeze(np.mean(x_train, axis=-1))) pca_x_test = pca.transform(np.squeeze(np.mean(x_test, axis=-1))) print(f"Baseline 1: {accuracy_score(y_test, np.zeros(len(y_test)))}\n") if LEARN_METHOD == 1: model = RandomForestClassifier(n_estimators=1000, max_depth=35) elif LEARN_METHOD == 2: model = LogisticRegression(solver="lbfgs") model.fit(pca_x_train, y_train) print(f"Test Accuracy: {accuracy_score(y_test, model.predict(pca_x_test))}") print(f"Test AUC: {roc_auc_score(y_test, model.predict_proba(pca_x_test)[:,1])}") print(f"Test Log-Loss: {log_loss(y_test, model.predict_proba(pca_x_test)[:,1])}") print(f"Test MCC: {matthews_corrcoef(y_test, model.predict(pca_x_test))}\n") acc_test.append(accuracy_score(y_test, model.predict(pca_x_test))) auc_test.append(roc_auc_score(y_test, model.predict_proba(pca_x_test)[:,1])) mcc_test.append(matthews_corrcoef(y_test, model.predict(pca_x_test))) # + [markdown] id="4g0f8dFnC1rL" # The part of the code starts by segrating the testing and training the dataset. As specified above by 'STEP' variable, 3 examples are generated per each attack. # Dimension reduction is performed by using PCA after each scaling. While accuracy is reduced, the dataset becomes easier to analyize. # Many forms of measurements are taken with this process, accuracy, AUC and Matthew Correlation Coefficient. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %load_ext autoreload # %autoreload 2 import os; import sys; sys.path.insert(0,'../') import warnings import pandas as pd warnings.simplefilter(action='ignore', category=pd.errors.PerformanceWarning) import socceraction.spadl.api as spadl # - ## Configure file and folder names datafolder = "../data" statsbomb_json = os.path.join(datafolder,"statsbomb-root","open-data-master","data") statsbomb_h5 = os.path.join(datafolder,"statsbomb.h5") spadl_h5 = os.path.join(datafolder,"spadl-statsbomb.h5") # ### Convert raw Statsbomb json files to Statsbomb HDF5 file spadl.statsbombjson_to_statsbombh5(statsbomb_json,statsbomb_h5) # ### Inspect StatsBomb HDF5 file # + tablenames = ["matches","players","teams","competitions"] tables = {name : pd.read_hdf(statsbomb_h5,key=name) for name in tablenames} match_id = tables["matches"].match_id[0] tables["events"] = pd.read_hdf(statsbomb_h5,f"events/match_{match_id}") for k,df in tables.items(): print("#",k) print(df.columns,"\n") # - # ### Convert Statsbomb data (in a HDF5 file) to the SPADL format (in a HDF5 file) spadl.statsbombh5_to_spadlh5(statsbomb_h5,spadl_h5) # ### Inspect SPADL HDF5 file # + tablenames = ["games","players","teams","competitions","actiontypes","bodyparts","results"] tables = {name : pd.read_hdf(spadl_h5,key=name) for name in tablenames} game_id = tables["games"].game_id[0] tables["actions"] = pd.read_hdf(spadl_h5,f"actions/game_{game_id}") for k,df in tables.items(): print("#",k) print(df.columns,"\n") # - # ### (optional) Plotting actions # Extra library required: ```pip install matplotsoccer``` # + import matplotsoccer tablenames = [ "games", "players", "teams", "competitions", "actiontypes", "bodyparts", "results", ] tables = {name: pd.read_hdf(spadl_h5, key=name) for name in tablenames} # Select England vs Belgium game at World Cup games = tables["games"].merge(tables["competitions"]) game_id = games[(games.competition_name == "FIFA World Cup") & (games.away_team_name == "England") & (games.home_team_name == "Belgium")].game_id.values[0] actions = pd.read_hdf(spadl_h5, f"actions/game_{game_id}") actions = ( actions.merge(tables["actiontypes"]) .merge(tables["results"]) .merge(tables["bodyparts"]) .merge(tables["players"],"left",on="player_id") .merge(tables["teams"],"left",on="team_id") .sort_values(["period_id", "time_seconds", "timestamp"]) .reset_index(drop=True) ) # use nickname if available else use full name actions["player"] = actions[["player_nickname","player_name"]].apply(lambda x: x[0] if x[0] else x[1],axis=1) #shot = 128 shot = 2201 a = actions[shot-4:shot+1] games = tables["games"] g = list(games[games.game_id == a.game_id.values[0]].itertuples())[0] minute = int((a.period_id.values[0]-1)*45 +a.time_seconds.values[0] // 60) + 1 game_info = f"{g.match_date} {g.home_team_name} {g.home_score}-{g.away_score} {g.away_team_name} {minute}'" print(game_info) labels = a[["time_seconds", "type_name", "player", "team_name"]] matplotsoccer.actions( location=a[["start_x", "start_y", "end_x", "end_y"]], action_type=a.type_name, team= a.team_name, result= a.result_name == "success", label=labels, labeltitle=["time","actiontype","player","team"], zoom=False, figsize=6 ) matplotsoccer.actions( location=a[["start_x", "start_y", "end_x", "end_y"]], action_type=a.type_name, team=a.team_name, result=a.result_name == 
"success", label=labels, labeltitle=["time","actiontype","player","team"], zoom=True, ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _execution_state="idle" _uuid="8de57e95ed8a9225c5cfcbb628f41a120018b225" _cell_guid="d30d4995-14dd-425a-a1c3-4fbeed01aaa5" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory from subprocess import check_output print(check_output(["ls", "../input"]).decode("utf8")) # Any results you write to the current directory are saved as output. # + _execution_state="idle" _uuid="75a6332e62aacf14ed0c9ffc85e32ed42b1fae4e" _cell_guid="7edee3d8-c4b6-412d-9de3-252c8f4399bc" # data analysis and wrangling import pandas as pd import numpy as np import random as rnd # visualization import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline # machine learning from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC, LinearSVC from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import Perceptron from sklearn.linear_model import SGDClassifier from sklearn.tree import DecisionTreeClassifier # + _execution_state="idle" _uuid="ad6b552552fbbfda7b775e597fb9b1ae1bc5dabf" _cell_guid="b26e942d-a19e-4fa8-83fe-d8657c5a0566" train_df = pd.read_csv('../input/train.csv') test_df = pd.read_csv('../input/test.csv') df = [train_df, test_df] # + _execution_state="idle" _uuid="3e43c7d070f9eb1c93e863b6ec67bbca25ec0bbd" _cell_guid="60326f26-db8a-4670-b407-038835add8f5" train_df.head() # + _execution_state="idle" _uuid="36a4f8f4cb2ac4279766292a18d545f222e5cd9e" _cell_guid="53e134cf-25a0-4a21-9aba-885460f5598b" train_df.info() test_df.info() # + _execution_state="idle" _uuid="7be13129e07adad74867829e1af7a9c33999cd97" _cell_guid="a02947b7-9d77-45d6-8484-142383dafd5a" train_df.describe(include=['O']) # + _execution_state="idle" _uuid="b3039028eb3bf8ad6aded9e7218e9fef3a18843b" _cell_guid="ef1002c6-fc49-4880-84c9-c3d72e685e3d" train_df[['Pclass','Survived']].groupby('Pclass').mean() # + _execution_state="idle" _uuid="c85c0d13dcea2bf2161ec3132e7876371004457e" _cell_guid="19bd2792-43fe-4c84-9443-c6b56f3cdc24" grid = sns.FacetGrid(train_df,col='Survived') grid.map(plt.hist,'Age',bins=20) # + _execution_state="idle" _uuid="abbe0ad972ce1e9b711491cf2d316eab4a911bf7" _cell_guid="98cdda58-d5e5-4af2-a004-788956f76b13" grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6) grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep') grid.add_legend() # + _execution_state="idle" _uuid="356b3f3b04fa5a4b497a6fd24c932a0615cd2f19" _cell_guid="96f1b668-72d3-48f8-ae0e-e41fff4ce57f" train_df = train_df.drop(['Ticket','Cabin'],axis=1) test_df = test_df.drop(['Ticket','Cabin'],axis=1) combine = [train_df,test_df] # + _execution_state="idle" _uuid="cca3f05a1bda2ddcd64c16d7ee51c5cdeb5405e6" 
_cell_guid="65a019ce-d46e-4cb9-bd42-4a1b24c7ac9a" for dataset in combine: dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False) pd.crosstab(train_df['Title'],train_df['Sex']) # + _execution_state="idle" _uuid="719875980e838be4ed330824886ead3b18e5ce9e" _cell_guid="b2993fa0-19eb-40e4-ba8e-0fe42ced1f05" for dataset in combine: dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\ 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Other') dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss') dataset['Title'] = dataset['Title'].replace('Ms', 'Miss') dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs') train_df[['Title', 'Survived']].groupby(['Title']).mean() # + _execution_state="idle" _uuid="13363bb4a12f8aa49ec220cec37ff8eba1a9e214" _cell_guid="9c90d504-2ff8-42a2-8d0c-893e415d13bf" title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Other": 5} for dataset in combine: dataset['Title'] = dataset['Title'].map(title_mapping) dataset['Title'] = dataset['Title'].fillna(0) train_df.head() # + _execution_state="idle" _uuid="d696eba01d5c9524264d8fb2ba5409c6b70650b4" _cell_guid="58c53053-503b-4213-89c8-1e6311ea925a" train_df = train_df.drop(['Name', 'PassengerId'], axis=1) test_df = test_df.drop(['Name'], axis=1) combine = [train_df, test_df] # + _execution_state="idle" _uuid="d59490ab50f988015fc393421febefea8c0cb2cb" _cell_guid="425bf4e6-1e96-405c-87dc-ce19844ae287" for dataset in combine: dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int) train_df.head() # + _execution_state="idle" _uuid="7930c403734540992ce6a66a7e7ae1ebea5c2c57" _cell_guid="f8cf39a0-055d-48cf-b264-22de1af59626" age_median = np.zeros((2,3)) for dataset in combine: for sex in range(0,2): for pclass in range (0,3): age_df = dataset[(dataset['Sex']==sex) & (dataset['Pclass']==pclass+1)]['Age'].dropna() age_median[sex,pclass] = int(age_df.median()) for sex in range(0,2): for pclass in range (0,3): dataset.loc[(dataset.Age.isnull()) & (dataset['Sex']==sex) & (dataset['Pclass']==pclass+1),'Age'] = age_median[sex,pclass] # + _execution_state="idle" _uuid="7abd4ef03ddfe8ecb8b3613c6b8f5306b023eaed" _cell_guid="3d6394ed-804f-48cc-adf9-9fa239cce1dd" bins = (0,6,12,18,25,35,60,100) groups = [0,1,2,3,4,5,6] for dataset in combine: dataset.Age = pd.cut(dataset['Age'],bins,labels=groups).astype(int) # + _execution_state="idle" _uuid="741bd7ba2ca52fd481026848655dd9a2832f512f" _cell_guid="5b149ec5-9e8c-4743-9083-14cc8c32b4de" for dataset in combine: dataset['isAlone'] = 0 dataset.loc[(dataset['SibSp']==0) & (dataset['Parch']==0), 'isAlone'] = 1 train_df = train_df.drop(['SibSp','Parch'],axis=1) test_df = test_df.drop(['SibSp','Parch'],axis=1) combine = [train_df, test_df] # + _execution_state="idle" _uuid="6ad306fa0cb376d0c9e51efbf680f3de4f436d79" _cell_guid="56433977-6dd5-471f-bcdc-cb27b3f143a8" port = train_df.Embarked.dropna().mode()[0] for dataset in combine: dataset['Embarked'] = dataset['Embarked'].fillna(port) # + _execution_state="idle" _uuid="eeca8e4c673b361df51ce8430559a068d06a93fd" _cell_guid="a9d3337f-c13a-4c25-ad5c-19669ba09b64" for dataset in combine: dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int) # + _execution_state="idle" _uuid="20dc1a4c9b5d824981b6376b0a884e417eb70aad" _cell_guid="681cca0f-c230-475c-9b60-1bb2fd89fb5d" test_df['Fare'].fillna(test_df['Fare'].dropna().median(),inplace=True) test_df.info() # + _execution_state="idle" _uuid="1f844be055086c9908c851fcd3c7120bee566212" 
_cell_guid="1716a637-101c-475f-bef1-bd03b813861b" fare_band = pd.qcut(train_df['Fare'],5) fare_band bins = (-0.05,7.854,10.5,21.679,39.688,512.33) groups = (0,1,2,3,4) for dataset in combine: dataset.Fare = pd.cut(dataset['Fare'],bins,labels=groups).astype(int) # + _execution_state="idle" _uuid="75146f6ec6f62543271a5dd39802d7868fc63426" _cell_guid="f5302161-ca53-4f82-9a5c-acc2e9887e24" X_train = train_df.drop("Survived", axis=1) Y_train = train_df["Survived"] X_test = test_df.drop("PassengerId", axis=1) # + _execution_state="idle" _uuid="0a6c06486afed5257d785e3a27858e88f188017e" _cell_guid="4fbcf8da-b9f7-46f8-b1ff-70c91947d9a1" logreg = LogisticRegression() logreg.fit(X_train, Y_train) Y_test = logreg.predict(X_test) acc_log = round(logreg.score(X_train, Y_train) * 100, 2) acc_log # + _execution_state="idle" _uuid="9c9c16c7941a82001bf606208b4993ea3d32e899" _cell_guid="889acf94-c199-47b9-a850-d46135400fde" coeff_df = pd.DataFrame() coeff_df['Feature'] = train_df.columns.delete('0') coeff_df["Correlation"] = pd.Series(logreg.coef_[0]) coeff_df.sort_values(by='Correlation', ascending=False) # + _execution_state="idle" _uuid="c843af991337058e54c978e69569095e03bd0853" _cell_guid="a3110038-6eb1-48ca-9127-34d3ec0f0154" svc = SVC() svc.fit(X_train, Y_train) Y_pred = svc.predict(X_test) acc_svc = round(svc.score(X_train, Y_train) * 100, 2) acc_svc # + _execution_state="idle" _uuid="2c052440c67631204eed434d7e89ad0f816ee247" _cell_guid="9367f9cb-b8ef-459a-8de3-287d9fe1681a" knn = KNeighborsClassifier(n_neighbors = 3) knn.fit(X_train, Y_train) Y_pred = knn.predict(X_test) acc_knn = round(knn.score(X_train, Y_train) * 100, 2) acc_knn # + _execution_state="idle" _uuid="c5473b377f970ea073b677908ca157f0e9cfd2ae" _cell_guid="4e60189d-eeac-4b52-bb10-9a64ab844b34" # Decision Tree decision_tree = DecisionTreeClassifier() decision_tree.fit(X_train, Y_train) Y_pred = decision_tree.predict(X_test) acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2) acc_decision_tree # + _execution_state="idle" _uuid="11a692ba9f9a772a8ff12af5a5c8a30cb6cff4a6" _cell_guid="88ae01fb-5f03-44e2-895f-cae15bba244d" # Random Forest random_forest = RandomForestClassifier(n_estimators=100) random_forest.fit(X_train, Y_train) Y_pred = random_forest.predict(X_test) random_forest.score(X_train, Y_train) acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2) acc_random_forest # + _execution_state="idle" _uuid="fc4f0dd6c50fcf8900e79ab3c326e2361dafbf95" _cell_guid="aebbe9b2-02b5-4538-b0c7-4f27700ba3ca" submission = pd.DataFrame({ "PassengerId": test_df["PassengerId"], "Survived": Y_pred }) submission.to_csv('submission.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #Libraries used for this mini-project import pandas as pd from pandas.plotting import scatter_matrix import numpy as np import matplotlib.pyplot as plt from matplotlib.gridspec import GridSpec import sklearn.datasets from sklearn.model_selection import StratifiedShuffleSplit, GridSearchCV from sklearn.preprocessing import OneHotEncoder, StandardScaler, OrdinalEncoder from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix import sklearn.metrics import torch import torch.nn.functional as F # - # # Loading Data iris = sklearn.datasets.load_iris() iris.keys() # + data = 
iris["data"] target = iris["target"] features = [col.split(" (")[0].replace(" ", "_") for col in iris["feature_names"]] target_names = iris["target_names"] target = [target_names[i] for i in target]#Getting class of the iris flower plt.hist(target) plt.xticks(target_names) plt.show() # - data_df = pd.DataFrame(data, columns=features) data_df.describe() # # Data Transformation # + #Checking if we have missing values for col in data_df.columns: if data_df[col].isna().values.sum() > 0: print("Missing data in column" + col) else: print("No missing data at " + col) flag = sum([v == np.nan for v in np.asarray(target)]) if flag: print("Missing values in target") else: print("No missing values in target") # - scatter_matrix(data_df, figsize=(10, 10))# It seems that there is linear dependency between petal length and petal width plt.show() corr = data_df.corr()#see the correlation between petal length, petal width, sepal length corr #Lets see the distribution of the predictors within flower classes ax = pd.plotting.boxplot(data=data_df, by=target, figsize=(10, 10)) for x in ax: x[0].set_xlabel("") plt.suptitle("") plt.xlabel("") plt.show() # # Relationship Between Predictors # + #Testing Seperability by using linear hyperplane gs = GridSpec(3, 2) axes = [] combination = {} for i in range(0, len(features) - 1): for j in range(i + 1, len(features)): if (features[i], features[j]) not in combination: combination[(features[i], features[j])] = 0 if (features[j], features[i]) not in combination: combination[(features[j], features[i])] = 1 #dict comprehension combination = {k: v for k,v in combination.items() if v ==0} #Getting colors color_dict = {"setosa": "red", "versicolor": "blue", "virginica": "green"} colors = [color_dict[i] for i in target] #initializing the axes and plotting scatter plot counter = 0 keys = list(combination.keys()) for row in range(0, 3): for col in range(0, 2): axes.append(plt.subplot(gs[row, col])) x_d, y_d = keys[counter] data_df.plot(kind="scatter", x=x_d, y=y_d, ax=axes[row * 2 + col], c=colors) axes[row * 2 + col].set_yticks([]) axes[row * 2 + col].set_xticks([]) counter += 1 #As expected petal_lengthxpetal_width shows good seperation and similar both works well with sepal length # - # # Train Test Split features_full = [col for col in data_df.columns] features_full.append("class") full_data = pd.DataFrame(np.c_[data_df, target], columns=features_full) strata = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42) for train_ind, test_ind in strata.split(full_data[features], full_data["class"]): train_data = full_data.iloc[train_ind] test_data = full_data.iloc[test_ind] print(train_data.shape) print(test_data.shape) # # Standard Normalize Data and building Model # + X_train = train_data[features].to_numpy() y_train = train_data["class"].to_numpy().reshape(-1, 1) num_pipeline = Pipeline([ ("standarize_data", StandardScaler()) ]) cat_pipeline = Pipeline([ ("ordinal encoder", OrdinalEncoder()) ]) X_train_tr = num_pipeline.fit_transform(X_train) y_train_tr = cat_pipeline.fit_transform(y_train) # - model = LogisticRegression(multi_class="multinomial", C=0.9) model.fit(X_train_tr, y_train_tr) preds = model.predict_proba(X_train_tr).argmax(axis=1) confusion_matrix(preds, y_train_tr) # + #model performance on test set X_test = test_data[features].to_numpy() y_test = test_data["class"].to_numpy().reshape(-1, 1) X_test_tr = num_pipeline.transform(X_test) y_test_tr = cat_pipeline.transform(y_test) preds = model.predict_proba(X_test_tr).argmax(axis=1) confusion_matrix(preds, 
y_test_tr)# So, it is working well with regularizing factor 1/0.9 # - # # Checking if reduced model perform as well as the complete model # + #Lets if we reduced the model by removing the sepal width X_train = train_data[["petal_length", "petal_width", "sepal_length"]].to_numpy() X_test = test_data[["petal_length", "petal_width", "sepal_length" ]].to_numpy() X_train_tr = num_pipeline.fit_transform(X_train) X_test_tr = num_pipeline.transform(X_test) model = LogisticRegression(multi_class="multinomial", C=0.9) model.fit(X_train_tr, y_train_tr); preds = model.predict_proba(X_train_tr).argmax(axis=1) confusion_matrix(preds, y_train_tr) # - preds = model.predict_proba(X_test_tr).argmax(axis=1) confusion_matrix(preds, y_test_tr)# So, it is working well with regularizing factor 1/0.9 # # Finding Better regularizing value # + X_train = train_data[features].to_numpy() X_test = test_data[features].to_numpy() X_train_tr = num_pipeline.fit_transform(X_train) X_test_tr = num_pipeline.transform(X_test) model = LogisticRegression() params_grid = [ {"C": np.linspace(0.1, 1.0, 10)} ] score = sklearn.metrics.make_scorer(sklearn.metrics.precision_score, average = 'weighted')#Use precision score for the three classes grid_search = GridSearchCV(estimator=model, param_grid=params_grid, scoring=score, cv=5) grid_search.fit(X_train_tr, y_train_tr.ravel()) # - est = grid_search.best_estimator_ preds = est.predict_proba(X_test_tr).argmax(axis=1) confusion_matrix(preds, y_test_tr) for score, param in zip(grid_search.cv_results_["mean_test_score"], grid_search.cv_results_["params"]): print("Score: ", score, end="\t") print(param) grid_search.best_estimator_ grid_search.cv_results_ # # Using PyTorch # + features = [col for col in train_data.columns if col != "class"] X_train = np.asarray(train_data[features], dtype=np.float32) y_train = np.asarray(train_data["class"]).reshape(-1, 1) X_test = np.asarray(test_data[features], dtype=np.float32) y_test = np.asarray(test_data["class"]).reshape(-1, 1) ordinal = OrdinalEncoder() ordinal.fit(y_train) y_train = ordinal.transform(y_train) y_test = ordinal.transform(y_test) print(y_test[0:5]) ordinal.categories_ # - X_train_tensor = torch.from_numpy(X_train) X_test_tensor = torch.from_numpy(X_test) y_train_tensor = torch.from_numpy(y_train) y_test_tensor = torch.from_numpy(y_test) class IrisClassifier(torch.nn.Module): def __init__(self, num_features, *args, **kwargs): super(IrisClassifier, self).__init__() self.l1 = torch.nn.Linear(num_features, 50) self.l2 = torch.nn.Linear(50, 50) self.l3 = torch.nn.Linear(50, 20) self.l4 = torch.nn.Linear(20, 3) def forward(self, x): x = F.relu(self.l1(x)) x = F.relu(self.l2(x)) x = F.relu(self.l3(x)) x = F.dropout(x, p=0.4) x = self.l4(x) x = F.softmax(x, dim=1) x = x.view(x.shape[0], 3) #x = x.float() #x = torch.argmax(x, dim=1) #print(x.shape) #val, ind = torch.max(x, dim=1) #print(x) return x # + model = IrisClassifier(len(features)) #initializing model def init_model(layer): if type(layer) == "Linear": torch.nn.init.xavier_normal(layer.weight) layer.bias.data.fill_(0.001) loss_fn = torch.nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=1e-3) training_error = [] for epoch in range(0, 1000): y_pred = model(X_train_tensor) #print(y_pred) loss = loss_fn(y_pred, torch.squeeze(y_train_tensor).type(torch.LongTensor)) optimizer.zero_grad() loss.backward() optimizer.step() training_error.append(loss.item()) if epoch%10 == 0: print(f"epoch {epoch} the log-likelihood is {loss.item()}") # - for param in model.parameters(): 
print(param) y_p = torch.argmax(y_pred, dim=1) y_p confusion_matrix(y_p, y_train_tensor) # # Prediction for test data y_test_pred = model(X_test_tensor) y_test_p = torch.argmax(y_test_pred, dim=1) confusion_matrix(y_test_p, y_test_tensor) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # #!/usr/bin/env python # coding: utf-8 # Auther MYH import serial import sys import os import time import threading import serial.tools.list_ports global MAX_LOOP_NUM,MAX_TRY MAX_LOOP_NUM = 50 MAX_TRY=5 cmd='' def get_data(ser,addr): maxloopNum = 0 while True: data = s.read(s.inWaiting()) #读取串口缓冲区数据 maxloopNum += 1 #累加读取次数 try: #data= str(binascii.b2a_hex(s.read(n)))[2:-1] if data[3]==0x0E: if maxloopNum==1: print('返回采集数据个数正确,开始收集>>>') #根据关系计算各参数实际值,这里运算计算机会直接帮我们转换为十进制 CO2=data[4]*256+data[5] JQ=data[6]*256+data[7] TVOC=data[8]*256+data[9] PM2_5=data[10]*256+data[11] PM_10H=data[12]*256+data[13] TEMPUTER=data[14]+data[15]*0.1 WET=data[16]+data[17]*0.1 print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())) #输出 print('CO2:%d ppm 甲醛:%d ug TVOC:%d ug PM2.5:%d ug PM10:%d ug 温度:%1.f℃ 湿度:%.1f%% '%( CO2,JQ,TVOC,PM2_5,PM_10,TEMPUTER,WET)) cmd_get=bytes.fromhex('3C0'+str(addr)+'00') s.write(cmd_get) """ if data[2]==0x02: addr=str(binascii.b2a_hex(data))[2:-1] print('传感器地址为:',address[2:4]) """ except: print(data) print('没有读取到有效数据,请检查查询地址是否正确\n' '根据查询地址的返回值:00,01,10,11,修改查询指令的前两位\n' '例如返回地址结果为10,则输入1001\n') main() if (maxloopNum > MAX_LOOP_NUM): s.close() main() time.sleep(1) def send_cmd(ser): if cmd =='1': for addr in range(8): cmdstr='3C0'+str(addr)+'0' cmdByte=bytes.fromhex(cmdstr) ser.write(cmdByte) # 发送16进制读取数据指令 n=ser.inWaiting() if n==0: continue else: get_data(ser,addr) break elif cmd=='exit': sys.exit() else : print('输入指令错误,请重新输入:') main() def main(): global cmd,addr cmd='' plist = list(serial.tools.list_ports.comports()) if len(plist) <= 0: print("没有发现端口,正在检测...") time.sleep(1) else : plist_0 = list(plist[0]) serialName = plist_0[0] #先自动检测串口, 检测到可用串口,取出串口名 ser = serial.Serial(serialName, 9600, timeout=30) ser.flushInput() print("正在连接>>>", ser.name) cmd = input('请输入查询指令:\n 1.查询传感器数据请输入:1\n 2.退出请输入:exit\n') send_cmd(ser) if __name__ == '__main__': for i in range(MAX_TRY+1): if i==MAX_TRY: print('重连达到最大次数,请检查接线是否正确') break else: try: main() except: ser.close() sys.exit() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Tabular Playground Series - October 2021 # # In this notebook, we do some prelimary work on the raw data provided for the playground competition. In particular, we do the following: # # 1. Reduce memory usage # 2. Store in `.feather` format for quicker loading # 3. Perform adversarial validation on the training/test sets # # It is important to save memory since our models will run very slowly or even throw errors if we use too much memory while training. We will solely use decision forest libraries which generally require the training data to be loaded in to memory so any space we can save is a good thing. 
# Global variables for testing changes to this notebook quickly RANDOM_SEED = 0 NUM_FOLDS = 3 NUM_TREES = 500 EARLY_STOP = 25 # + import numpy as np import pandas as pd import time import os import pyarrow import gc # Model for validation from lightgbm import LGBMClassifier import lightgbm as lgbm # Model evaluation from sklearn.model_selection import StratifiedKFold, cross_val_predict from sklearn.metrics import roc_auc_score, mean_squared_error # Plotting import matplotlib from matplotlib import pyplot as plt # Hide warnings import warnings warnings.filterwarnings('ignore') # - # # 1. Memory Usage Reduction # # This month's data has over 200 features and combined 1.5 million rows. We will speed up our analysis by downcasting the data wherever possible. # Print the paths to all of the output files for dirname, _, filenames in os.walk('..\data'): for filename in filenames: if filename.endswith('.csv'): print(os.path.join(dirname, filename)) # ## 1.1 Loading CSV data # + # %%time # Load original .csv data train = pd.read_csv('../data/train.csv') test = pd.read_csv('../data/test.csv') # Save feature columns names for later features = [x for x in train.columns if x not in ['id', 'target']] # - # ## 1.2 Helper Function # Downcast float/int datatypes def reduce_memory_usage(df, verbose=True): start_mem = df.memory_usage().sum() / 1024 ** 2 for col, dtype in df.dtypes.iteritems(): if dtype.name.startswith('int'): df[col] = pd.to_numeric(df[col], downcast ='integer') elif dtype.name.startswith('float'): df[col] = pd.to_numeric(df[col], downcast ='float') end_mem = df.memory_usage().sum() / 1024 ** 2 if verbose: print( "Mem. usage decreased to {:.2f} Mb ({:.1f}% reduction)".format( end_mem, 100 * (start_mem - end_mem) / start_mem ) ) return df # + # %%time # Reduce memory and save as .feather train = reduce_memory_usage(train) train.to_feather('../data/train.feather') test = reduce_memory_usage(test) test.to_feather('../data/test.feather') # - # # 2. Feather Format # # We saved our data in `.feather` format for quicker loading later. This format saves the datatypes so we won't have to do our memory reduction again and we can work solely with feather for the remaining notebooks. # Check datatypes train.dtypes.value_counts() # + # %%time # Load feather data train = pd.read_feather('../data/train.feather') test = pd.read_feather('../data/test.feather') train.dtypes.value_counts() # - # We see that it takes under two seconds to load our `.feather` data as opposed to 45s with the original .csv files. # # 3. Adversarial Validation # # The idea behind adversarial validation is that if there are differences in the training and test data distributions an algorithm like LightGBM should be able to find these differences and use them to distinguish the two sets. So we create a classification problem where we predict whether a sampling of the data comes from the training or test sets. Ideally, we hope to see ~.5 AUC which would mean that our algorithm couldn't find meaningful distinctions between the test and training data. 
lightgbm_params = dict( n_estimators = NUM_TREES, random_state = RANDOM_SEED, ) # Performs adversarial validation with LightGBM def lgbm_validation(train, test, verbose = True): scores = np.zeros(NUM_FOLDS) # Sampling X = train.sample(n=200000, random_state=0) X_test = test.sample(n=200000, random_state=0) # Preparing data X = X.set_index('id').drop('target', axis='columns') X_test = X_test.set_index('id') # New "meta" data Xa = X.append(X_test) Xa['test'] = [0] * len(X) + [1] * len(X_test) # Cross validation scheme skf = StratifiedKFold(NUM_FOLDS, shuffle = True, random_state = RANDOM_SEED) for fold, (train_idx, valid_idx) in enumerate(skf.split(Xa, Xa['test'])): # Training and Validation Sets X_train, y_train = Xa[features].iloc[train_idx], Xa['test'].iloc[train_idx] X_valid, y_valid = Xa[features].iloc[valid_idx], Xa['test'].iloc[valid_idx] start = time.time() # Model with params model = LGBMClassifier(**lightgbm_params) model.fit( X_train, y_train, eval_set = [(X_valid, y_valid)], eval_metric = 'auc', early_stopping_rounds = EARLY_STOP, verbose = False, ) # Validation AUC scores[fold] = roc_auc_score( y_true = y_valid, y_score = model.predict_proba(X_valid)[:,1] ) end = time.time() print(f"LightGBM Fold {fold} (AUC):", round(scores[fold], 6), " ", str(round(end-start, 3))+"s") print("\nLightGBM (Avg):", round(scores.mean(), 6)) # return a fitted model and training data for feature importances return model # + # %%time fitted_model = lgbm_validation(train, test) # - # We see very high AUC values indicating that our model does a good job distinguishing the training and test data. # # Feature Importance # # We can examine the feature importances to see which features contributed the most to distinguishing the training and test sets. # ### Importance Type: Split lgbm.plot_importance( booster = fitted_model, importance_type = "split", max_num_features = 10, ) # ### Importance Type: Gain lgbm.plot_importance( booster = fitted_model, importance_type = "gain", max_num_features = 10, ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #
# 15 June 2015
    # # Let's Code About Bike Locks # The [June 15, 2015 post on Bike Snob NYC](http://bikesnobnyc.blogspot.com/2015/06/lets-get-this-show-on-road-once-we-all.html) leads with "*Let's talk about bike locks.*" Here's what I want to talk about: in my local bike shop, I saw a combination lock called *WordLock®*, # which replaces digits with letters. I classified this as a Fred lock, # "[Fred](http://bikesnobnyc.blogspot.com/2014/06/a-fred-too-far.html)" being the term for an amateurish cyclist with the wrong equipment. # I tried the combination "FRED," and was amused with the result: # # ![](http://norvig.com/ipython/fredbuns.jpg) # # FRED BUNS! Naturally I set all the other locks on the rack to FRED BUNS as well. But my curiosity was raised ... # # # Questions # # 1. How many words can the WordLock® make? # 3. Can a lock with different letters on the tumblers make more words? # 4. How many words can be made simultaneously? For example, with the tumbler set to "FRED", the lock # above also makes "BUNS" in the next line, but with "SOMN", fails to make a word in the third line. # Could different letters make words in every horizontal line? # 5. Is it a coincidence that the phrase "FRED BUNS" appears, or was it planted there by mischievous WordLock® designers? # # Vocabulary # === # # Before we can answer the questions, we'll need to be clear about the vocabulary of the problem and how to represent concepts in code: # # * **Lock**: For our purposes a lock can be modeled as a `list` of 4 **tumblers**. # * **Tumbler:** Each tumbler has 10 distinct letters. I will represent a tumbler as a `str` of 10 letters. # * **Combination**: Choosing a letter from each tumbler gives a combination, such as "FRED" or "BUNS". There are 410 = 10,000 combinations. # * **Word**: Some combinations (such as "BUNS") are *words*; others (such as "SOMN") are not words. We'll need a collection of dictionary words. # # Now on to the code! First the imports I will need and the vocabulary concepts: # + from __future__ import division, print_function # To work in Python 2.x or 3.x from collections import Counter, defaultdict import itertools import random Lock = list # A Lock is a list of tumblers Tumbler = ''.join # A Tumbler is 10 characters joined into a str Word = ''.join # A word is 4 letters joined into a str # - # The only remaining vocabulary concept is `combinations`. I will define `fredbuns` to be a lock with four tumblers, but with each tumbler consisting of not all ten letters, but only the two letters that spell "FRED BUNS": fredbuns = ['FB', 'RU', 'EN', 'DS'] # A lock with two letters on each of four tumblers # We need a way to get the combinations that can be made from this lock. It turns out that the built-in function `itertools.product` does most of the job; it generates the product of all 2 × 2 × 2 × 2 = 16 combinations of letters: list(itertools.product(*fredbuns)) # I would prefer to deal with the string `'BUNS'` rather than the tuple `('B', 'U', 'N', 'S')`, so I will define a function, `combinations`, that takes a lock as input and returns a list of strings representing the combinations: def combinations(lock): "Return a list of all combinations that can be made by this lock." return [Word(combo) for combo in itertools.product(*lock)] combinations(fredbuns) # Dictionary Words # === # # I happen to have handy a file of four-letter words (no, not *[that](http://en.wikipedia.org/wiki/Four-letter_word)* kind of four-letter word). It is the union of an official Scrabble® word list and a list of proper names. 
The following shell command tests if the file has already been downloaded to our local directory and if not, fetches it from the web: ! [ -e words4.txt ] || curl -O http://norvig.com/ngrams/words4.txt # Here are the first few lines of the file: # ! head words4.txt # Python can make a set of words: WORDS = set(open('words4.txt').read().split()) len(WORDS) # So that means that no lock could ever make more than 4,360 words. Let's define `words_from(lock)`: def words_from(lock): "A list of words that can be made by lock." return [c for c in combinations(lock) if c in WORDS] words_from(fredbuns) # *Note*: An alternative is to represent a collection of words as a `set`; then `words_from` could do `return combinations(lock) & WORDS`. # # I will also introduce the function `show` to print out a lock and its words: # + def show(lock): "Show a lock and the words it makes." words = words_from(lock) print('Lock: {}\nCount: {}\nWords: {}' .format(space(lock), len(words), space(sorted(words)))) space = ' '.join # Function to concatenate strings with a space between each one. # - show(fredbuns) # For this tiny lock with just two letters on each tumbler, we find that 6 out of the 16 possible combinations are words. We're now ready to answer the real questions. # # Question 1: How Many Words? # Here is the answer: wordlock = ['SPHMTWDLFB', 'LEYHNRUOAI', 'ENMLRTAOSK', 'DSNMPYLKTE'] show(wordlock) # # How Secure is WordLock? # The lock makes 1118 words (according to my word list). You might say that an attacker who knows the combination is a word would find this lock to be only 11.18% as secure as a 4-digit lock with 10,000 combinations. But in reality, every cable lock is [vulnerable](https://www.sfbike.org/news/video-how-to-lock-your-bike/) to an attacker with wire cutters, or with a knowledge of lock-picking, so security is equally poor for WordLock® and for an equivalent lock with numbers. (You should use a hardened steel U-lock instead.) # # Random Locks # Question 2 asks if a different lock can make more words. As a baseline, before we get to improved locks, I will start with completely random locks, as produced by the function `random_lock`. I use `t=4` to say that by default there are 4 tumblers, and `c=10` to indicate 10 letters on each tumbler ("c" for "circumference"). I give `random_lock` an argument to seed the random number generator; that makes calls repeatable if desired. # + def random_lock(t=4, c=10, seed='seed'): "Make a lock by sampling randomly and uniformly from the alphabet." random.seed(seed) return Lock(Tumbler(random.sample(alphabet, c)) for i in range(t)) alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' # - show(random_lock()) # Wow, that's not very many words. Let's repeat 100 times and take the best one, with "best" determined by the function `word_count`: def word_count(lock): return len(words_from(lock)) # + random_locks = [random_lock(seed=i) for i in range(100)] show(max(random_locks, key=word_count)) # - # Still not very good. We will need a more systematic approach. # # Question 2: More Words (via Greedy Locks) # My first idea for a lock with more words is this: consider each tumbler, one at a time, and fill the tumbler with the letters that make the most words. How do I determine what letters make the most words? 
A `Counter` does most of the work; we feed it a list of the first letter of each word, and then ask it for the ten most common letters (and their counts): # + first_letters = [w[0] for w in WORDS] Counter(first_letters).most_common(10) # - # In other words, the letters SPTBDCLMAR are the most common ways to start a word. Let's add up those counts: # + def n_most_common(counter, n): return sum(n for (_, n) in counter.most_common(n)) n_most_common(Counter(first_letters), 10) # - # This means that SPTBDCLMAR covers 2,599 words. We don't know for sure that these are the best 10 letters to put on the first tumbler, but we do know that whatever letters are best, they can't form more than 2,599 words, so we have an upper bound on the number of words in a lock (and the 1,118 from `wordlock` is a lower bound). # # What letters should we put on the second tumbler? We will do the same thing, but this time don't consider *all* the words in the dictionary; just consider the 2,599 words that start with one of the ten letters on the first tumbler. Continue this way until we fill in all four tumblers. This is called a *greedy* approach, because when we consider each tumbler, we pick the solution that looks best right then, for that tumbler, without consideration for future tumblers. def greedy_lock(t=4, c=10, words=WORDS): "Make a lock with t tumblers, each consisting of the c letters that cover the most words at that position." lock = Lock() for i in range(t): # Make a tumbler of c letters, such that the tumbler covers the most words. # Then update words to only include the ones that can be made with this tumbler counter = Counter(word[i] for word in words) tumbler = Tumbler(L for (L, _) in counter.most_common(c)) words = [w for w in words if w[i] in tumbler] lock.append(tumbler) return lock show(greedy_lock()) # Remember that the `wordlock` gave 1118 words, so the greedy lock with 1177 is better, but only by 5%. Is it possible to do better still? # # Question 2: More Words (via Improved Locks) # Here's another idea to get more words from a lock: # # 1. Start with some lock. # 2. Pick, at random, one letter on one tumbler and change it to a new letter. # 3. If the change yields more words, keep the change; otherwise discard the change. # 4. Repeat. # # We can implement this strategy with the function `improved_lock`, which calls `changed_lock` to make a random change: # # # + def improved_lock(lock, steps=3000): "Randomly change letters in lock, keeping changes that improve the score." score = word_count(lock) for i in range(steps): lock2 = changed_lock(lock) score2 = word_count(lock2) if score2 >= score: lock, score = lock2, score2 return lock def changed_lock(lock): "Change one letter in one tumbler." lock2 = Lock(lock) i = random.randrange(len(lock)) old = random.choice(lock[i]) new = random.choice([L for L in alphabet if L not in lock[i]]) lock2[i] = lock2[i].replace(old, new) return lock2 # - # Let's see how this does to improve the best lock we've seen so far, the greedy lock: show(improved_lock(greedy_lock())) # How about starting from a random lock? show(improved_lock(random_lock())) # We got up to 1240 words, but can we go beyond? I'll improve 50 random locks, taking 6000 steps each rather than just 3000 (this will take around 15 minutes): # %time improved_locks = [improved_lock(random_lock(seed=i), 6000) for i in range(50)] # Let's see what we got: Counter(map(word_count, improved_locks)) # So 42/50 locks got a score of 1240. 
# My first reaction to this was that if I discovered 42 different locks with score 1240, then probably there is at least one undiscovered lock with a score above 1240. But a discussion with [](https://blog.glyphobet.net/faq) changed my thinking. The key is to realize that some locks that look different are actually the same; they just have the letters in a different order. I define the function `alock` to put each tumbler into alphabetical order (and make the lock be a tuple rather than a list, so that it can be an entry in a `dict` or `set`): def alock(lock): "Canonicalize lock by alphabetizing the letters in each tumbler." return tuple(Tumbler(sorted(tumbler)) for tumbler in lock) # Then we can find the unique locks: # + def unique_locks(locks): "Return a dict of {lock: word_count} for the distinct locks." return {alock(lock): word_count(lock) for lock in locks} unique_locks(improved_locks) # - # So out of the 50 `improved_locks` there are actually only 4 distinct ones. And only two have a score of 1240: peaks = {alock(lock) for lock in improved_locks if word_count(lock) == 1240} peaks # These two differ in just one letter (a `P` or a `W` in the second tumbler). # # This discovery changes my whole thinking about the space of scores for locks. Previously I imagined a spiky "porcupine-shaped" landscape, with 42 different peaks hitting a height of 1240. But now I have a different picture of the landscape: a single peak containing the two locks (one with a `P` and one with a `W`), surrounded by rolling hills. To verify this picture, I'll look at all the neighbors of the two peak locks, and see which ones are at least 1240: # + alphabetset = set(alphabet) def neighborhood(lock): "Yield all locks resulting from a change of one letter in one tumbler." for i in range(len(lock)): for new in alphabetset - set(lock[i]): for old in lock[i]: lock2 = Lock(lock) lock2[i] = lock2[i].replace(old, new) yield lock2 unique_locks(lock for peak in peaks for lock in neighborhood(peak) if word_count(lock) >= 1240) # - # Nothing new. This doesn't prove there is no lock with score over 1240, but it does mean we have looked hard to find one and failed. # # # Question 3: Simultaneous Words # Can we make a lock that spells 10 words simultaneously? One possible approach would be to start with any lock and randomly change it (just as we did with `improved_lock`), but measure improvements by the number of words formed. My intuition is that this approach would work, eventually, but that progress would be very slow, because most random changes to a letter would not make a word. # # An alternative approach is to think of the lock not as a list of 4 vertical tumblers (each with 10 letters), but rather as a list of 10 horizontal words (each with 4 letters). I'll call this the *word list* representation, and note that a lock and a word list are *[matrix transposes](http://en.wikipedia.org/wiki/Transpose)* of each other—they swap rows for columns. There is an [old trick](https://books.google.com/books?id=eH6jBQAAQBAJ&pg=PA574&lpg=PA574&dq=lisp+transpose+matrix&source=bl&ots=Yixwj8m3k4&sig=KoeuJnFhRnJsiD06_Cx56rUOetQ&hl=en&sa=X&ved=0CB4Q6AEwAGoVChMIyM-WiriLxgIVD6OICh2QcwBK#v=onepage&q=transpose%20matrix&f=false) to compute the transpose of a matrix `M` with the expression `zip(*M)`. 
But `zip` returns tuples; we want strings, so we can define `transpose` as: def transpose(strings): return [Word(letters) for letters in zip(*strings)] # And we can see the transpose of the `wordlock` is a list of words: transpose(['SPHMTWDLFB', 'LEYHNRUOAI', 'ENMLRTAOSK', 'DSNMPYLKTE']) # The first row of the word list has the letters SLED, because those are the letters in the first column of the lock. You can see that the WordLock® is designed to spell out LOOK FAST BIKE, among other words. # # Now we're ready to find a good word list with this strategy: # # 1. Start with some word list (e.g., a random sample of 10 words from `WORDS`). # 2. Pick, at random, one word and change it to a new word. # 3. If the change is an improvement, keep the change; otherwise discard the change. # 4. Repeat. # # But what counts as an improvement? We can't improve the number of words, because we start with 10 words, and every change also gives us 10 words. Rather, we will try to improve the number of duplicate letters on any tumbler (of the lock that corresponds to the word list). We improve by reducing the number of duplicate letters, and stop when there are no duplicates. # # The following code implements this approach: # + def improved_wordlist(wordlist): "Find a wordlist that has no duplicate letters, via random changes to wordlist." score = duplicates(wordlist) while score > 0: wordlist2 = changed_wordlist(wordlist) score2 = duplicates(wordlist2) if score2 < score: wordlist, score = wordlist2, score2 return wordlist def duplicates(wordlist): "The number of duplicate letters across all the tumblers of the lock that corresponds to this wordlist." lock = transpose(wordlist) def duplicates(tumbler): return len(tumbler) - len(set(tumbler)) return sum(duplicates(tumbler) for tumbler in lock) def changed_wordlist(wordlist, words=list(WORDS)): "Make a copy of wordlist and replace one of the words." copy = list(wordlist) i = random.randrange(len(wordlist)) copy[i] = random.choice(words) return copy # - # The structure of `improved_wordlist` is similar to `improved_lock`, with a few differences: # 1. We are minimizing duplicates, not maximizing word count. # 2. We stop when the score is 0, rather than continuing for a given number of iterations. # 3. We want to make a `random.choice` from `WORDS`. But `random.choice` can't operate on a `set`, so we # have to introduce `words=list(WORDS)`. # # Now we can find some wordlists: improved_wordlist(random.sample(WORDS, 10)) # That was easy! Can we go to 11? improved_wordlist(random.sample(WORDS, 11)) # # Improving Anything # We now have two similar functions, `improved_lock` and `improved_wordlist`. Could (and should?) we replace them by a single function, say, `improved`, that could improve locks, wordlists, and anything else? # # The answer is: *yes* we could, and *maybe* we should. # # It is nice to form an abstraction for the idea of *improvement*. (Traditionally, the method we have used for improvement has been called *hill-climbing*, because of the analogy that the score is like the elevation on a topological map, and we are trying to find our way to a peak.) # # However, there are many variations on the theme of *improvement*: maximizing or minimizing? Repeat for a given number of iterations, or continue until we meet a goal? I don't want `improved` to have an argument list a mile long, and I felt that five arguments is right on the border of acceptable. The arguments are: # 1. `item`: The object to start with; this is what we will try to improve. # 2. 
`changed`: a function that generates a new item. # 3. `scorer`: a function that evaluates the quality of an item. # 4. `extremum`: should be `max` if we are maximizing scores, or `min` if we are minimizing scores. # 5. `stop`: a predicate with args `(i, score, item)`, where `i` is the iteration number, and `score` is `scorer(item)`. Return `True` to stop. # # def improved(item, changed, scorer, extremum, stop): """Apply the function `changed` to `item` and evaluate with the function `scorer`; When `stop(i, score)` is true, return `item`.""" score = scorer(item) for i in itertools.count(0): if stop(i, score): return item item2 = changed(item) score2 = scorer(item2) if score2 == extremum(score, score2): item, score = item2, score2 # Now we can re-implement `improved_lock` and `improved_wordlist` using `improved`: # + def improved_lock(lock, steps=3000): "Randomly change letters in lock, keeping changes that improve the score." return improved(lock, changed_lock, word_count, max, lambda i, _: i == steps) def improved_wordlist(wordlist): "Find a wordlist that has no duplicate letters, via random changes to wordlist." return improved(wordlist, changed_wordlist, duplicates, min, lambda _, score: score == 0) # - show(improved_lock(random_lock())) improved_wordlist(random.sample(WORDS, 10)) # # Question 4: Coincidence? # There is still one unanswered question: did the designers of WordLock® deliberately put "FRED BUNS" in, or was it a coincidence? Hacker News reader [emhart](https://news.ycombinator.com/user?id=emhart) (aka the competitive lockpicker ) astutely commented that he had found the [patent](https://www.google.com/patents/US6621405) assigned to WordLock; it describes an algorithm similar to my `greedy_lock`. # After seeing that, I'm inclined to believe that "FRED BUNS" is the coincidental result of running the algorithm. On # the other hand, there is a [followup patent](https://www.google.com/patents/US20080053167) that discusses a refinement # "wherein the letters on the wheels are configured to spell a first word displayed on a first row of letters and a second word displayed on a second row of letters." So the possibility of a two-word phrase was somthing that Wordlock LLc. was aware of. # # We see below that the procedure described in the [patent](https://www.google.com/patents/US6621405) is not quite as good as `greedy_lock`, because the patent states that at each tumbler position "*the entire word list is scanned*" to produce the letter frequencies, whereas `greedy_lock` scans only the words that are consistent with the previous tumblers. Because of that difference, `greedy_lock` produces more words, 1177 to 1161: # + def greedy_lock_patented(t=4, c=10, words=WORDS): "Make a lock with t tumblers, each consisting of the c letters that cover the most words at that position." lock = Lock() for i in range(t): # Make a tumbler of c letters, such that the tumbler covers the most words. # Then update words to only include the ones that can be made with this tumbler counter = Counter(word[i] for word in words) tumbler = Tumbler(L for (L, _) in counter.most_common(c)) # words = [w for w in words if w[i] in tumbler] # <<<< The patent does not update the word list lock.append(tumbler) return lock word_count(greedy_lock()), word_count(greedy_lock_patented()) # - # # Tests # It is a # good idea to have some tests, in case you want to change some code and see if you have introduced an error. Also, tests serve as examples of usage of functions. 
The following tests have poor coverage, because it is hard to test non-deterministic functions, and I didn't attempt that here. # + def tests(): assert 'WORD' in WORDS assert 'FRED' in WORDS assert 'BUNS' in WORDS assert 'XYZZ' not in WORDS assert 'word' not in WORDS assert 'FIVE' in WORDS assert 'FIVER' not in WORDS assert len(WORDS) == 4360 assert fredbuns == ['FB', 'RU', 'EN', 'DS'] assert combinations(fredbuns) == ['FRED','FRES','FRND','FRNS','FUED','FUES','FUND','FUNS', 'BRED','BRES','BRND','BRNS','BUED','BUES','BUND','BUNS'] assert words_from(fredbuns) == ['FRED', 'FUND', 'FUNS', 'BRED', 'BUND', 'BUNS'] assert "FRED" in words_from(wordlock) and "BUNS" in words_from(wordlock) assert wordlock == ['SPHMTWDLFB', 'LEYHNRUOAI', 'ENMLRTAOSK', 'DSNMPYLKTE'] assert len(combinations(wordlock)) == 10000 assert word_count(wordlock) == 1118 assert transpose(['HIE', 'BYE']) == ['HB', 'IY', 'EE'] assert transpose(transpose(wordlock)) == wordlock return 'pass' tests() # - # # One More Question # I wonder if [@BIKESNOBNYC](https://twitter.com/bikesnobnyc) would appreciate this notebook? On the one hand, he is the kind of guy who, in discussing the fact that bicycling is the seventh most popular recreational activity, [wrote]() "*the number seven is itself a highly significant number. It is the lowest number that cannot be represented as the sum of the square of three integers*," so it seems he has some interest in mathematical oddities. On the other hand, he followed that up by writing "*I have no idea what that means, but it's true*," so maybe not. # Numbers up to 100 that are not the sum of 3 squares nums = range(11) sums = {A**2 + B**2 + C**2 for A in nums for B in nums for C in nums} set(range(max(nums)**2)) - sums # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- import cv2 import numpy as np import matplotlib.pyplot as plt # + B = cv2.imread("target.png") R = cv2.imread("input.png") B = cv2.cvtColor(B, cv2.COLOR_BGR2RGB) R = cv2.cvtColor(R, cv2.COLOR_BGR2RGB) B = cv2.resize(B, (256, 384)) R = cv2.resize(R, (256, 384)) # - plt.imshow(np.concatenate((B, cv2.resize(R, (B.shape[1], B.shape[0]))), axis=1)) def calHistRGB(I): hr = np.zeros(2**8+5).astype(np.uint64) hg = np.zeros(2**8+5).astype(np.uint64) hb = np.zeros(2**8+5).astype(np.uint64) for r in range(I.shape[0]): for c in range(I.shape[1]): hr[I[r][c][0]] = hr[I[r][c][0]] + 1 hg[I[r][c][1]] = hg[I[r][c][1]] + 1 hb[I[r][c][2]] = hb[I[r][c][2]] + 1 return hr, hg, hb hrB, hgB, hbB = calHistRGB(B) hrR, hgR, hbR = calHistRGB(R) print(hrB, hgB, hbB) def calCDF(hr, hg, hb, A): cdf_r = np.zeros(2**8).astype(np.float16) cdf_g = np.zeros(2**8).astype(np.float16) cdf_b = np.zeros(2**8).astype(np.float16) for i in range(0, 2**8): cdf_r[i] = 1.0*hr[i]/A + (cdf_r[i-1] if i > 0 else 0.0) cdf_g[i] = 1.0*hg[i]/A + (cdf_g[i-1] if i > 0 else 0) cdf_b[i] = 1.0*hb[i]/A + (cdf_b[i-1] if i > 0 else 0) return cdf_r, cdf_g, cdf_b cdfB = calCDF(hrB, hgB, hbB, B.shape[0]*B.shape[1]) cdfR = calCDF(hrR, hgR, hbR, R.shape[0]*R.shape[1]) print(cdfB[0][-1]) # + LUT = np.zeros((3, 2**8)).astype(np.uint64) for k in range(3): j = 0 for i in range(2**8): while( cdfB[k][j] < cdfR[k][i] and j < 255): j = j + 1 LUT[k][i] = j for r in range(R.shape[0]): for c in range(R.shape[1]): for k in range(R.shape[2]): R[r][c][k] = LUT[k][R[r][c][k]]; print(LUT) plt.imshow(R) # - # --- # jupyter: # 
jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="Yhd5NK-EXjTA" # ## K-Nearest Neighbors (kNN) [Unsupervised] # # *Author: * # # *Updated: * # #
    # # **References:** # # # 1. https://towardsdatascience.com/machine-learning-basics-with-the-k-nearest-neighbors-algorithm-6a6e71d01761 # 2. https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html # # # "The k-nearest neighbors (KNN) algorithm is a simple, easy-to-implement supervised machine learning algorithm that can be used to solve both classification and regression problems." [1] # # kNN algorithm, often abbreviated k-nn, is an approach to data classification that estimates how likely a data point is to be a member of one group or the other depending on what group the data points nearest to it are in. # # "The KNN algorithm assumes that similar things exist in close proximity. In other words, similar things are near to each other." [1] # # ### The KNN Algorithm [1] # 1. Load the data # 2. Initialize K to your chosen number of neighbors # 3. For each example in the data #
#     1. Calculate the distance between the query example and the current example from the data.
#     2. Add the distance and the index of the example to an ordered collection
    # 4. Sort the ordered collection of distances and indices from smallest to largest (in ascending order) by the distances # 5. Pick the first K entries from the sorted collection # 6. Get the labels of the selected K entries # 7. If regression, return the mean of the K labels # 8. If classification, return the mode of the K labels # # ### Choosing the right value for K # To select the K that’s right for your data, we run the KNN algorithm several times with different values of K and choose the K that reduces the number of errors we encounter while maintaining the algorithm’s ability to accurately make predictions when it’s given data it hasn’t seen before. [1] # # Here are some things to keep in mind: # # # 1. As we decrease the value of K to 1, our predictions become less stable. Just think for a minute, imagine K=1 and we have a query point surrounded by several reds and one green (I’m thinking about the top left corner of the colored plot above), but the green is the single nearest neighbor. Reasonably, we would think the query point is most likely red, but because K=1, KNN incorrectly predicts that the query point is green. [1] # 2. Inversely, as we increase the value of K, our predictions become more stable due to majority voting / averaging, and thus, more likely to make more accurate predictions (up to a certain point). Eventually, we begin to witness an increasing number of errors. It is at this point we know we have pushed the value of K too far. [1] # 3. In cases where we are taking a majority vote (e.g. picking the mode in a classification problem) among labels, we usually make K an odd number to have a tiebreaker. [1] # # # ### Pros # * No Training Period: KNN is called Lazy Learner (Instance based learning). It does not learn anything in the training period or derive any discriminative function from the training data. In other words, there is no training period for it. It stores the training dataset and learns from it only at the time of making real time predictions. This makes the KNN algorithm much faster than other algorithms that require training e.g. SVM, Linear Regression etc. # * Since the KNN algorithm requires no training before making predictions, new data can be added seamlessly which will not impact the accuracy of the algorithm. # * KNN is very easy to implement. There are only two parameters required to implement KNN i.e. the value of K and the distance function (e.g. Euclidean or Manhattan etc.) # * Can be used both for Classification and Regression: One of the biggest advantages of K-NN is that K-NN can be used both for classification and regression problems. # # ### Cons # * Does not work well with large dataset: In large datasets, the cost of calculating the distance between the new point and each existing point is huge which degrades the performance of the algorithm. # * Does not work well with high dimensions: The KNN algorithm doesn't work well with high dimensional data because with a large number of dimensions, it becomes difficult for the algorithm to calculate the distance in each dimension. # * Requires feature scaling: Feature scaling (standardization and normalization) has to be done before applying KNN algorithm to any dataset, otherwise KNN may generate wrong predictions. # * Sensitive to noisy data, missing values and outliers: KNN is sensitive to noise in the dataset. So data has to be manipulated to impute missing values and remove outliers. 
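# + [markdown]
# A from-scratch sketch of the algorithm steps listed above (for illustration only; it is separate from
# the sensor-data workflow in the next cells, and all names such as `knn_predict` and `X_demo` are
# hypothetical). It computes the distances, sorts them, keeps the K nearest, and returns the mode of
# their labels for classification.
# +
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, query, k=3):
    # distance between the query example and every training example (step 3, Euclidean here)
    distances = np.linalg.norm(X_train - query, axis=1)
    # sort by distance and pick the indices of the first K entries (steps 4-5)
    nearest = np.argsort(distances)[:k]
    # classification: return the mode of the K neighbour labels (steps 6 and 8)
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]

X_demo = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [7.5, 8.2]])
y_demo = np.array([0, 0, 1, 1])
print(knn_predict(X_demo, y_demo, np.array([1.1, 0.9])))  # expected: 0
print(knn_predict(X_demo, y_demo, np.array([7.9, 8.1])))  # expected: 1
# -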
# + colab={"base_uri": "https://localhost:8080/"} id="zGTwSej2XgjC" outputId="3519dcc4-1eab-4c18-da9a-d42e4eade75a" # !pip install influxdb # + colab={"base_uri": "https://localhost:8080/", "height": 923} id="2JunxCXwXs30" outputId="a64eff88-84f1-4af9-bf21-8d75465ca357" import operator import numpy as np import matplotlib.pyplot as plt from influxdb import InfluxDBClient import pandas as pd import numpy as np from datetime import datetime import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn import neighbors, datasets ip_addr = "sensorweb.us" user_name = "test" user_pwd = "" db_name = "waveform" client = InfluxDBClient(host=ip_addr, port=8086, username=user_name, password=, database=db_name, ssl=True) tag_list = ["attack_01", "attack_07", "attack_08"] # influxdb query data command measurement = "sensor_04" field_key = "Ib" tag_key = "case" tag_value = "attack_05" start_time = "" end_time = "" #query_command = 'SELECT * FROM "' + measurement + \ # '"' #for tag_value in tag_list: query_command = 'SELECT "'+ field_key + '"::field,"' + tag_key +'"::tag FROM "' + measurement + \ '"'# WHERE ("'+ tag_key +'" = \''+tag_value+'\') ' #print(query_command) query_result = client.query(query_command) #print(query_command) # points is a list of dictionary points = list(query_result.get_points()) values = map(operator.itemgetter(field_key), points) data1 = list(values) test = map(operator.itemgetter(tag_key), points) labels = list(test) #print(np.unique(labels)) #print(data.shape) if(len(data1) == 0): print("No data in the chosen time range!") quit() else: print("len:", len(data1)) times = map(operator.itemgetter('time'), points) time = list(times) time1=[] cases=[] for t in time: time1.append(str(t)) for l in labels: cases.append(l) #print(time1[0:5]) length = int(len(data1)/5) # print(np.unique(y)) timet = [] i=0 while i < length: #print (t) if len(time[i])==27: temp = datetime.strptime(time[i], "%Y-%m-%dT%H:%M:%S.%fZ") elif len(time[i])==20: temp = datetime.strptime(time[i], "%Y-%m-%dT%H:%M:%SZ") timet.append(temp) i+=1 datatemp={'time':[x for x in time1], 'data':[d for d in data1], 'case':[c for c in cases]} dataset=pd.DataFrame(datatemp,columns=['time','data','case']) fs = 20000 # for electric waveform data, 20KHz dataset.case = pd.Categorical(dataset.case) dataset['label'] = dataset.case.cat.codes #dataset.drop(['case'], axis=1) del dataset['case'] #print(dataset[90000:90005]) X=np.zeros((length,2),dtype=np.float64) y=[] #np.zeros((length,1),dtype=np.float64) i=0 while i # + # create a new figure plt.figure() # plot the point (3,2) using the circle marker plt.plot(3, 2, 'o') # get the current axes ax = plt.gca() # Set axis properties [xmin, xmax, ymin, ymax] ax.axis([0,6,0,10]) # + # create a new figure plt.figure() # plot the point (1.5, 1.5) using the circle marker plt.plot(1.5, 1.5, 'o') # plot the point (2, 2) using the circle marker plt.plot(2, 2, 'o') # plot the point (2.5, 2.5) using the circle marker plt.plot(2.5, 2.5, 'o') # - # get current axes ax = plt.gca() # get all the child objects the axes contains ax.get_children() # # Scatterplots # + import numpy as np x = np.array([1,2,3,4,5,6,7,8]) y = x plt.figure() plt.scatter(x, y) # similar to plt.plot(x, y, '.'), but the underlying child objects in the axes are not Line2D # + import numpy as np x = np.array([1,2,3,4,5,6,7,8]) y = x # create a list of colors for each point to have # ['green', 'green', 'green', 'green', 'green', 'green', 'green', 'red'] colors = ['green']*(len(x)-1) colors.append('red') 
plt.figure() # plot the point with size 100 and chosen colors plt.scatter(x, y, s=100, c=colors) # + # convert the two lists into a list of pairwise tuples zip_generator = zip([1,2,3,4,5], [6,7,8,9,10]) print(list(zip_generator)) # the above prints: # [(1, 6), (2, 7), (3, 8), (4, 9), (5, 10)] zip_generator = zip([1,2,3,4,5], [6,7,8,9,10]) # The single star * unpacks a collection into positional arguments print(*zip_generator) # the above prints: # (1, 6) (2, 7) (3, 8) (4, 9) (5, 10) # + # use zip to convert 5 tuples with 2 elements each to 2 tuples with 5 elements each print(list(zip((1, 6), (2, 7), (3, 8), (4, 9), (5, 10)))) # the above prints: # [(1, 2, 3, 4, 5), (6, 7, 8, 9, 10)] zip_generator = zip([1,2,3,4,5], [6,7,8,9,10]) # let's turn the data back into 2 lists x, y = zip(*zip_generator) # This is like calling zip((1, 6), (2, 7), (3, 8), (4, 9), (5, 10)) print(x) print(y) # the above prints: # (1, 2, 3, 4, 5) # (6, 7, 8, 9, 10) # - plt.figure() # plot a data series 'Tall students' in red using the first two elements of x and y plt.scatter(x[:2], y[:2], s=100, c='red', label='Tall students') # plot a second data series 'Short students' in blue using the last three elements of x and y plt.scatter(x[2:], y[2:], s=100, c='blue', label='Short students') # add a label to the x axis plt.xlabel('The number of times the child kicked a ball') # add a label to the y axis plt.ylabel('The grade of the student') # add a title plt.title('Relationship between ball kicking and grades') # add a legend (uses the labels from plt.scatter) plt.legend() # add the legend to loc=4 (the lower right hand corner), also gets rid of the frame and adds a title plt.legend(loc=4, frameon=False, title='Legend') # get children from current axes (the legend is the second to last item in this list) plt.gca().get_children() # get the legend from the current axes legend = plt.gca().get_children()[-2] # you can use get_children to navigate through the child artists legend.get_children()[0].get_children()[1].get_children()[0].get_children() # + # import the artist class from matplotlib from matplotlib.artist import Artist def rec_gc(art, depth=0): if isinstance(art, Artist): # increase the depth for pretty printing print(" " * depth + str(art)) for child in art.get_children(): rec_gc(child, depth+2) # Call this function on the legend artist to see what the legend is made up of rec_gc(plt.legend()) # - # # Line Plots # + import numpy as np linear_data = np.array([1,2,3,4,5,6,7,8]) exponential_data = linear_data**2 plt.figure() # plot the linear data and the exponential data plt.plot(linear_data, '-o', exponential_data, '-o') # - # plot another series with a dashed red line plt.plot([22,44,55], '--r') plt.xlabel('Some data') plt.ylabel('Some other data') plt.title('A title') # add a legend with legend entries (because we didn't have labels when we plotted the data series) plt.legend(['Baseline', 'Competition', 'Us']) # fill the area between the linear data and exponential data plt.gca().fill_between(range(len(linear_data)), linear_data, exponential_data, facecolor='blue', alpha=0.25) # Let's try working with dates! 
# + plt.figure() observation_dates = np.arange('2017-01-01', '2017-01-09', dtype='datetime64[D]') plt.plot(observation_dates, linear_data, '-o', observation_dates, exponential_data, '-o') # - # Let's try using pandas # + import pandas as pd plt.figure() observation_dates = np.arange('2017-01-01', '2017-01-09', dtype='datetime64[D]') observation_dates = map(pd.to_datetime, observation_dates) # trying to plot a map will result in an error plt.plot(observation_dates, linear_data, '-o', observation_dates, exponential_data, '-o') # - plt.figure() observation_dates = np.arange('2017-01-01', '2017-01-09', dtype='datetime64[D]') observation_dates = list(map(pd.to_datetime, observation_dates)) # convert the map to a list to get rid of the error plt.plot(observation_dates, linear_data, '-o', observation_dates, exponential_data, '-o') # + x = plt.gca().xaxis # rotate the tick labels for the x axis for item in x.get_ticklabels(): item.set_rotation(45) # - # adjust the subplot so the text doesn't run off the image plt.subplots_adjust(bottom=0.25) ax = plt.gca() ax.set_xlabel('Date') ax.set_ylabel('Units') ax.set_title('Exponential vs. Linear performance') # you can add mathematical expressions in any text element ax.set_title("Exponential ($x^2$) vs. Linear ($x$) performance") # # Bar Charts plt.figure() xvals = range(len(linear_data)) plt.bar(xvals, linear_data, width = 0.3) # + new_xvals = [] # plot another set of bars, adjusting the new xvals to make up for the first set of bars plotted for item in xvals: new_xvals.append(item+0.3) plt.bar(new_xvals, exponential_data, width = 0.3 ,color='red') # + from random import randint linear_err = [randint(0,15) for x in range(len(linear_data))] # This will plot a new set of bars with errorbars using the list of random error values plt.bar(xvals, linear_data, width = 0.3, yerr=linear_err) # - # stacked bar charts are also possible plt.figure() xvals = range(len(linear_data)) plt.bar(xvals, linear_data, width = 0.3, color='b') plt.bar(xvals, exponential_data, width = 0.3, bottom=linear_data, color='r') # or use barh for horizontal bar charts plt.figure() xvals = range(len(linear_data)) plt.barh(xvals, linear_data, height = 0.3, color='b') plt.barh(xvals, exponential_data, height = 0.3, left=linear_data, color='r') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # G2Engine Reference # # More information: # # 1. [GitHub repository](https://github.com/Senzing/docker-jupyter) # 1. [Senzing documentation](http://docs.senzing.com/?python#g2config) # ## Table of contents # # 1. [Prepare environment](#Prepare-environment) # 1. [Initialize Senzing configuration](#Initialize-Senzing-configuration) # 1. [Initialize python environment](#Initialize-python-environment) # 1. [Helper class for JSON rendering](#Helper-class-for-JSON-rendering) # 1. [System path](#System-path) # 1. [Initialize variables](#Initialize-variables) # 1. [G2Engine](#G2Engine) # 1. [G2Engine initialization](#G2Engine-initialization) # 1. [initWithConfigIDV2](#initWithConfigIDV2) # 1. [reinitV2](#reinitV2) # 1. [primeEngine](#primeEngine) # 1. [getActiveConfigID](#getActiveConfigID) # 1. [exportConfig](#exportConfig) # 1. [stats](#stats) # 1. [getRepositoryLastModifiedTime](#getRepositoryLastModifiedTime) # 1. [Insert](#Insert) # 1. [Insert parameters](#Insert-parameters) # 1. [addRecord](#addRecord) # 1. 
[addRecordWithReturnedRecordID](#addRecordWithReturnedRecordID) # 1. [addRecordWithInfo](#addRecordWithInfo) # 1. [Search](#Search) # 1. [Record search](#Record-search) # 1. [getRecordV2](#getRecordV2) # 1. [Entity Search](#Entity-Search) # 1. [getEntityByRecordIDV2](#getEntityByRecordIDV2) # 1. [getEntityByEntityIDV2](#getEntityByEntityIDV2) # 1. [Search By Attributes](#Search-By-Attributes) # 1. [searchByAttributes](#searchByAttributes) # 1. [searchByAttributesV2](#searchByAttributesV2) # 1. [Finding Paths](#Finding-Paths) # 1. [findPathByEntityID](#findPathByEntityID) # 1. [findPathByEntityIDV2](#findPathByEntityIDV2) # 1. [findPathByRecordID](#findPathByRecordID) # 1. [findPathByRecordIDV2](#findPathByRecordIDV2) # 1. [Finding Paths with Exclusions](#Finding-Paths-with-Exclusions) # 1. [findPathExcludingByEntityID](#findPathExcludingByEntityID) # 1. [findPathExcludingByRecordID](#findPathExcludingByRecordID) # 1. [Finding Paths with Required Sources](#Finding-Paths-with-Required-Sources) # 1. [findPathIncludingSourceByEntityID](#findPathIncludingSourceByEntityID) # 1. [findPathIncludingSourceByRecordID](#findPathIncludingSourceByRecordID) # 1. [Finding Networks](#Finding-Networks) # 1. [findNetworkByEntityID](#findNetworkByEntityID) # 1. [findNetworkByEntityIDV2](#findNetworkByEntityIDV2) # 1. [findNetworkByRecordID](#findNetworkByRecordID) # 1. [findNetworkByRecordIDV2](#findNetworkByRecordIDV2) # 1. [Connection Details](#Connection-details) # 1. [whyEntityByRecordID](#whyEntityByRecordID) # 1. [whyEntityByRecordIDV2](#whyEntityByRecordIDV2) # 1. [whyEntityByEntityID](#whyEntityByEntityID) # 1. [whyEntityByEntityIDV2](#whyEntityByEntityIDV2) # 1. [Replace](#Replace) # 1. [replaceRecord](#replaceRecord) # 1. [replaceRecordWithInfo](#replaceRecordWithInfo) # 1. [Re-evaluate](#Re-evaluate) # 1. [reevaluateRecord](#reevaluateRecord) # 1. [reevaluateRecordWithInfo](#reevaluateRecordWithInfo) # 1. [reevaluateEntity](#reevaluateEntity) # 1. [reevaluateEntityWithInfo](#reevaluateEntityWithInfo) # 1. [Reporting](#Reporting) # 1. [exportJSONEntityReport](#exportJSONEntityReport) # 1. [fetchNext](#fetchNext) # 1. [closeExport](#closeExport) # 1. [exportCSVEntityReport](#exportCSVEntityReport) # 1. [Redo Processing](#Redo-Processing) # 1. [countRedoRecords](#countRedoRecords) # 1. [getRedoRecord](#getRedoRecord) # 1. [process](#process) # 1. [processWithInfo](#processWithInfo) # 1. [processRedoRecord](#processRedoRecord) # 1. [processRedoRecordWithInfo](#processRedoRecordWithInfo) # 1. [Delete](#Delete) # 1. [deleteRecord](#deleteRecord) # 1. [deleteRecordWithInfo](#deleteRecordWithInfo) # 1. [Cleanup](#Cleanup) # 1. [purgeRepository](#purgeRepository) # 1. [destroy](#destroy) # ## Prepare environment # ### Initialize Senzing configuration # # Run [senzing-G2ConfigMgr-reference.ipynb](senzing-G2ConfigMgr-reference.ipynb) # to install a Senzing Engine configuration in the database. # ### Initialize python environment # + import os import sys import json # For RenderJSON import uuid from IPython.display import display_javascript, display_html, display # - # ### Helper class for JSON rendering # # A class for pretty-printing JSON. # Not required by Senzing, # but helps visualize JSON. class RenderJSON(object): def __init__(self, json_data): if isinstance(json_data, dict): self.json_str = json.dumps(json_data) elif isinstance(json_data, bytearray): self.json_str = json_data.decode() else: self.json_str = json_data self.uuid = str(uuid.uuid4()) def _ipython_display_(self): display_html('
    '.format(self.uuid), raw=True) display_javascript(""" require(["https://rawgit.com/caldwell/renderjson/master/renderjson.js"], function() { document.getElementById('%s').appendChild(renderjson(%s)) }); """ % (self.uuid, self.json_str), raw=True) # ### System path # # Update system path. python_path = "{0}/python".format( os.environ.get("SENZING_G2_DIR", "/opt/senzing/g2")) sys.path.append(python_path) # ### Initialize variables # # Create variables used for G2Engine. # + module_name = 'pyG2EngineForAddRecord' config_path = os.environ.get("SENZING_ETC_DIR", "/etc/opt/senzing") support_path = os.environ.get("SENZING_DATA_VERSION_DIR", "/opt/senzing/data") resource_path = "{0}/resources".format( os.environ.get("SENZING_G2_DIR", "/opt/senzing/g2")) sql_connection = os.environ.get( "SENZING_SQL_CONNECTION", "sqlite3://na:na@/var/opt/senzing/sqlite/G2C.db") verbose_logging = False senzing_config_dictionary = { "PIPELINE": { "CONFIGPATH": config_path, "SUPPORTPATH": support_path, "RESOURCEPATH": resource_path }, "SQL": { "CONNECTION": sql_connection, } } senzing_config_json = json.dumps(senzing_config_dictionary) # - # ## G2Engine from G2Engine import G2Engine import G2Exception # ### G2Engine initialization # # To start using Senzing G2Engine, create and initialize an instance. # This should be done once per process. # The `initV2()` method accepts the following parameters: # # - **module_name:** A short name given to this instance of the G2Engine # object. # - **senzing_config_json:** A JSON string containing configuration parameters. # - **verbose_logging:** A boolean which enables diagnostic logging. # - **config_id:** (optional) The identifier value for the engine configuration # can be returned here. # # Calling this function will return "0" upon success. # + g2_engine = G2Engine() return_code = g2_engine.initV2( module_name, senzing_config_json, verbose_logging) print("Return Code: {0}".format(return_code)) # - # ### initWithConfigIDV2 # # Alternatively `initWithConfigIDV2()` can be used to specify a configuration. # For more information, see # [http://docs.senzing.com/?python](http://docs.senzing.com/?python#engine). # ### reinitV2 # # The `reinitV2()` function may be used to reinitialize the engine # using a specified initConfigID. See # [http://docs.senzing.com/?python](http://docs.senzing.com/?python#engine). # ### primeEngine # # The `primeEngine()` method may optionally be called to pre-initialize # some of the heavier weight internal resources of the G2 engine. return_code = g2_engine.primeEngine() print("Return Code: {0}".format(return_code)) # ### getActiveConfigID # # Call `getActiveConfigID()` to return an identifier for the loaded # Senzing engine configuration. # The call will assign a long integer to a user-designated variable # -- the function itself will return "0" upon success. # The `getActiveConfigID()` method accepts one parameter as input: # # - **configuration_id:** The identifier value for the engine configuration. # The result of function call is returned here # + configuration_id = bytearray() return_code = g2_engine.getActiveConfigID(configuration_id) print("Return code: {0}\nConfiguration id: {1}".format( return_code, configuration_id.decode())) # - # ### exportConfig # # Call `exportConfig()` to retrieve your Senzing engine's configuration. # The call will assign a JSON document to a user-designated buffer, # containing all relevant configuration information # -- the function itself will return "0" upon success. 
# The exportConfig function accepts the following parameters as input: # # - **response_bytearray:** The memory buffer to retrieve the JSON # configuration document # - **config_id_bytearray:** The identifier value for the engine configuration # can be returned here. # + response_bytearray = bytearray() config_id_bytearray = bytearray() return_code = g2_engine.exportConfig(response_bytearray, config_id_bytearray) print("Return Code: {0}\nConfiguration ID: {1}".format( return_code, config_id_bytearray.decode())) RenderJSON(response_bytearray) # - # ### stats # # Call `stats()` to retrieve workload statistics for the current process. # These statistics will automatically reset after retrieval. # # - **response_bytearray:** A memory buffer for returning the response # document. If an error occurred, an error response is stored here. # + response_bytearray = bytearray() return_code = g2_engine.stats(response_bytearray) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ### getRepositoryLastModifiedTime # # Call `getRepositoryLastModifiedTime()` to obtain the last modified time of # the Senzing repository,measured in the number of seconds between the last # modified time and January 1, 1970 12:00am GMT (epoch time). # The call will assign a long integer to a user-designated buffer # -- the function itself will return "0" upon success. # The getRepositoryLastModifiedTime() method accepts one parameter as input: # # - **last_modified_unixtime:** The last modified time. The result of function # call is returned here # # + last_modified_timestamp = bytearray() return_code = g2_engine.getRepositoryLastModifiedTime(last_modified_timestamp) # Human readable output. from datetime import datetime last_modified_unixtime = int(int(last_modified_timestamp.decode()) / 1000) last_modified_datetime = datetime.fromtimestamp(last_modified_unixtime) print("Return Code: {0}\nLast modified timestamp: {1}\nLast modified time: {2}" .format( return_code, last_modified_timestamp.decode(), last_modified_datetime)) # - # ## Insert # ### Insert parameters # # The following variables are used as parameters to the Senzing API. # Documentation for `g2_engine_flags` values is at # [http://docs.senzing.com/?python](http://docs.senzing.com/?python#engine-control-flags) # + datasource_code_1 = "TEST" record_id_1 = "1" datasource_code_2 = "TEST" record_id_2 = "2" datasource_code_3 = "TEST" record_id_3 = "3" datasource_code_4 = "TEST" record_id_4 = "4" datasource_code_5 = "TEST" record_id_5 = "5" datasource_code_6 = "TEST" record_id_6 = "6" datasource_code_7 = "TEST" record_id_7 = "7" load_id = None g2_engine_flags = G2Engine.G2_EXPORT_DEFAULT_FLAGS # - # Initial data. data = { "NAMES": [{ "NAME_TYPE": "PRIMARY", "NAME_LAST": "Smith", "NAME_FIRST": "John", "NAME_MIDDLE": "M" }], "PASSPORT_NUMBER": "PP11111", "PASSPORT_COUNTRY": "US", "DRIVERS_LICENSE_NUMBER": "DL11111", "SSN_NUMBER": "111-11-1111" } data_as_json = json.dumps(data) # ### addRecord # # Once the Senzing engine is initialized, use `addRecord()` to load a record # into the Senzing repository # -- `addRecord()` can be called as many times as desired and from multiple # threads at the same time. # The `addRecord()` function returns "0" upon success, and accepts four # parameters as input: # # - **datasource_code:** The name of the data source the record # is associated with. 
# This value is configurable to the system # - **record_id:** The record ID, used to identify distinct records # - **data_as_json:** A JSON document with the attribute data for the record # - **load_id:** The observation load ID for the record; # value can be null and will default to data_source # # + return_code = g2_engine.addRecord( datasource_code_1, record_id_1, data_as_json, load_id) print("Return Code: {0}".format(return_code)) # - # ### addRecordWithReturnedRecordID # # Alternatively `addRecordWithReturnedRecordID()` can be used to add a record. # For more information, see # [http://docs.senzing.com/?python](http://docs.senzing.com/?python#data-manipulation). # ### addRecordWithInfo # # Use if you would like to know what resolved entities were modified when # adding the new record. # It behaves identically to addRecord(), # but returns a json document containing the IDs of the affected entities. # It accepts the following parameters: # # - **datasource_code:** The name of the data source the record is associated # with. This value is configurable to the system. # - **record_id:** The record ID, used to identify distinct records # - **data_as_json:** A JSON document with the attribute data for the record # - **response_bytearray:** A memory buffer for returning the response # document; if an error occurred, an error response is stored here # - **load_id:** The observation load ID for the record; # value can be null and will default to data_source # - **g2_engine_flags:** Control flags for specifying what data about the # entity to retrieve # # + response_bytearray = bytearray() return_code = g2_engine.addRecordWithInfo( datasource_code_1, record_id_1, data_as_json, response_bytearray, load_id, g2_engine_flags) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ## Search # ### Record search # #### getRecordV2 # # Use `getRecordV2()` to retrieve a single record from the data repository; # the record is assigned in JSON form to a user-designated buffer, # and the function itself returns "0" upon success. # Once the Senzing engine is initialized, # `getRecordV2()` can be called as many times as desired and from multiple # threads at the same time. # The `getRecordV2()` function accepts the following parameters as input: # # - **datasource_code:** The name of the data source the record is associated # with. This value is configurable to the system. # - **record_id:** The record ID, used to identify the record for retrieval # - **g2_engine_flags:** Control flags for specifying what data about the # record to retrieve. # - **response_bytearray:** A memory buffer for returning the response # document; if an error occurred, an error response is stored here. # + response_bytearray = bytearray() return_code = g2_engine.getRecordV2( datasource_code_1, record_id_1, g2_engine_flags, response_bytearray) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # The function `getRecordV2()` is an improved version of `getRecord()` # that also allows you to use control flags. # The `getRecord()` function has been deprecated. # ### Entity Search # #### getEntityByRecordIDV2 # # Entity searching is a key component for interactive use of Entity Resolution # intelligence. # The core Senzing engine provides real-time search capabilities that are # easily accessed via the Senzing API. 
# Senzing offers methods for entity searching, all of which can be called # as many times as desired and from multiple threads at the same time # (and all of which return "0" upon success). # # Use `getEntityByRecordIDV2()` to retrieve entity data based on the ID of a # resolved identity. # This function accepts the following parameters as input: # # - **datasource_code:** The name of the data source the record is associated # with. This value is configurable to the system. # - **record_id:** The numeric ID of a resolved entity # - **g2_engine_flags:** Control flags for specifying what data about the # entity to retrieve. # - **response_bytearray:** A memory buffer for returning the response # document; if an error occurred, an error response is stored here. # + response_bytearray = bytearray() return_code = g2_engine.getEntityByRecordIDV2( datasource_code_1, record_id_1, g2_engine_flags, response_bytearray) response_dictionary = json.loads(response_bytearray) entity_id_1 = response_dictionary["RESOLVED_ENTITY"]["ENTITY_ID"] print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # #### getEntityByEntityIDV2 # # Entity searching is a key component for interactive use of Entity Resolution # intelligence. # The core Senzing engine provides real-time search capabilities that are # easily accessed via the Senzing API. # Senzing offers methods for entity searching, all of which can be called as # many times # as desired and from multiple threads at the same time (and all of which # return "0" upon success). # # Use `getEntityByEntityIDV2()` to retrieve entity data based on the ID of a # resolved identity. # This function accepts the following parameters as input: # # - **entity_id:** The numeric ID of a resolved entity # - **g2_engine_flags:** Control flags for specifying what data about the # entity to retrieve. # - **response_bytearray:** A memory buffer for returning the response # document; if an error occurred, an error response is stored here. # + response_bytearray = bytearray() return_code = g2_engine.getEntityByEntityIDV2( entity_id_1, g2_engine_flags, response_bytearray) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ### Search By Attributes # #### searchByAttributes # # Entity searching is a key component for interactive use of Entity Resolution # intelligence. # The core Senzing engine provides real-time search capabilities that are # easily accessed via the Senzing API. # Senzing offers a method for entity searching by attributes, # which can be called as many times as desired and from multiple threads at the # same time # (and all of which return "0" upon success). # # Use `searchByAttributes()` to retrieve entity data based on a user-specified # set of entity attributes. # This function accepts the following parameters as input: # # - **data_as_json:** A JSON document with the attribute data to search for. # - **response_bytearray:** A memory buffer for returning the response # document; if an error occurred, an error response is stored here. # + response_bytearray = bytearray() return_code = g2_engine.searchByAttributes(data_as_json, response_bytearray) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # #### searchByAttributesV2 # # This function is similar but preferable to the searchByAttributes() function. # This function has improved functionality and a better standardized output # structure. 
# # Use `searchByAttributesV2()` to retrieve entity data based on # a user-specified set of entity attributes. # This function accepts the following parameters as input: # # - **data_as_json:** A JSON document with the attribute data to search for. # - **g2_engine_flags:** Operational flags # - **response_bytearray:** A memory buffer for returning the response # document; if an error occurred, an error response is stored here. # + response_bytearray = bytearray() return_code = g2_engine.searchByAttributesV2( data_as_json, g2_engine_flags, response_bytearray) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ### Finding Paths # # The `findPathByEntityID()` and `findPathByRecordID()` functions # can be used to find single relationship paths between two entities. # Paths are found using known relationships with other entities. # # Entities can be searched for by either Entity ID or by Record ID, # depending on which function is chosen. # # These functions have the following parameters: # # - **entity_id_2:** The entity ID for the starting entity of the search path # - **entity_id_3:** The entity ID for the ending entity of the search path # - **datasource_code_2:** The data source for the starting entity of the # search path # - **datasource_code_3:** The data source for the ending entity of the search # path # - **record_id_2:** The record ID for the starting entity of the search path # - **record_id_3:** The record ID for the ending entity of the search path # - **max_degree:** The number of relationship degrees to search # # The functions return a JSON document that identifies the path between the # entities, # and the information on the entities in question. # The document contains a section called "ENTITY_PATHS" which gives # the path from one entity to the other. # Example: # # ```JSON # { # "START_ENTITY_ID": 10, # "END_ENTITY_ID": 13, # "ENTITIES": [10, 11, 12, 13] # } # ``` # # If no path was found, then the value of ENTITIES will be an empty list. # # The response document also contains a separate ENTITIES section, # with the full information about the resolved entities along that path. # First you will need to create some records so that you have some that you can # compare. # Can you see what is the same between this record and the previous one? 
# + data = { "NAMES": [{ "NAME_TYPE": "PRIMARY", "NAME_LAST": "Miller", "NAME_FIRST": "Max", "NAME_MIDDLE": "W" }], "SSN_NUMBER": "111-11-1111" } data_as_json = json.dumps(data) return_code = g2_engine.replaceRecord( datasource_code_2, record_id_2, data_as_json, None) print("Return Code: {0}".format(return_code)) # - # Replace values for Record #3 # + data = { "NAMES": [{ "NAME_TYPE": "PRIMARY", "NAME_LAST": "Miller", "NAME_FIRST": "Mildred" }], "SSN_NUMBER": "111-11-1111" } data_as_json = json.dumps(data) return_code = g2_engine.replaceRecord( datasource_code_3, record_id_3, data_as_json, None) print("Return Code: {0}".format(return_code)) # - # Locate "entity identifier" for Record #1 # + response_bytearray = bytearray() return_code = g2_engine.getEntityByRecordID( datasource_code_1, record_id_1, response_bytearray) response_dictionary = json.loads(response_bytearray) entity_id_1 = response_dictionary["RESOLVED_ENTITY"]["ENTITY_ID"] print("Return Code: {0}\nEntity ID: {1}".format(return_code, entity_id_1)) # - # Locate "entity identifier" for Record #2 # + response_bytearray = bytearray() return_code = g2_engine.getEntityByRecordID( datasource_code_2, record_id_2, response_bytearray) response_dictionary = json.loads(response_bytearray) entity_id_2 = response_dictionary["RESOLVED_ENTITY"]["ENTITY_ID"] print("Return Code: {0}\nEntity ID: {1}".format(return_code, entity_id_2)) RenderJSON(response_bytearray) # - # Locate "entity identifier" for Record #3 # + response_bytearray = bytearray() return_code = g2_engine.getEntityByRecordID( datasource_code_3, record_id_3, response_bytearray) response_dictionary = json.loads(response_bytearray) entity_id_3 = response_dictionary["RESOLVED_ENTITY"]["ENTITY_ID"] print("Return Code: {0}\nEntity ID: {1}".format(return_code, entity_id_3)) RenderJSON(response_bytearray) # - # #### findPathByEntityID # + # Define search variables. max_degree = 3 # Find the path by entity ID. response_bytearray = bytearray([]) return_code = g2_engine.findPathByEntityID( entity_id_2, entity_id_3, max_degree, response_bytearray) # Print the results. print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # #### findPathByEntityIDV2 # The function `findPathByEntityIDV2()` is an improved version of # `findPathByEntityID()` that also allow you to use control flags. # + # Define search variables. max_degree = 3 # Find the path by entity ID. response_bytearray = bytearray([]) return_code = g2_engine.findPathByEntityIDV2( entity_id_2, entity_id_3, max_degree, g2_engine_flags, response_bytearray) # Print the results. print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # #### findPathByRecordID # + # Define search variables. max_degree = 3 # Find the path by record ID. response_bytearray = bytearray([]) return_code = g2_engine.findPathByRecordID( datasource_code_2, record_id_2, datasource_code_3, record_id_3, max_degree, response_bytearray) # Print the results. print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # #### findPathByRecordIDV2 # # The function `findPathByRecordIDV2()` is an improved version of # `findPathByRecordID()` that also allow you to use control flags. # + # Define search variables. max_degree = 3 # Find the path by record ID. response_bytearray = bytearray([]) return_code = g2_engine.findPathByRecordIDV2( datasource_code_2, record_id_2, datasource_code_3, record_id_3, max_degree, g2_engine_flags, response_bytearray) # Print the results. 
print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ### Finding Paths with Exclusions # # The `findPathExcludingByEntityID()` and `findPathExcludingByRecordID()` # functions can be used to find single relationship paths between two # entities. # Paths are found using known relationships with other entities. # In addition, it will find paths that exclude certain entities from being on # the path. # # Entities can be searched for by either Entity ID or by Record ID, # depending on which function is chosen. # Additionally, entities to be excluded can also be specified by either Entity # ID or by Record ID. # # When excluding entities, the user may choose to either (a) strictly exclude # the entities, # or (b) prefer to exclude the entities, but still include them if no other # path is found. # By default, entities will be strictly excluded. # A "preferred exclude" may be done by specifying the # `G2_FIND_PATH_PREFER_EXCLUDE` control flag. # # These functions have the following parameters: # # - **entity_id_2:** The entity ID for the starting entity of the search path # - **entity_id_3:** The entity ID for the ending entity of the search path # - **datasource_code_2:** The data source for the starting entity of the # search path # - **datasource_code_3:** The data source for the ending entity of the search # path # - **record_id_2:** The record ID for the starting entity of the search path # - **record_id_3:** The record ID for the ending entity of the search path # - **max_degree:** The number of relationship degrees to search # - **excluded_entities_as_json:** Entities that should be avoided on the path # (JSON document) # - **g2_engine_flags:** Operational flags # #### findPathExcludingByEntityID # + # Define search variables. max_degree = 4 excluded_entities = { "ENTITIES": [{ "ENTITY_ID": entity_id_1 }]} excluded_entities_as_json = json.dumps(excluded_entities) # Find the path by entity ID. response_bytearray = bytearray([]) return_code = g2_engine.findPathExcludingByEntityID( entity_id_2, entity_id_3, max_degree, excluded_entities_as_json, g2_engine_flags, response_bytearray) # Print the results. print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # #### findPathExcludingByRecordID # + # Define search variables. excluded_records = { "RECORDS": [{ "RECORD_ID": record_id_1, "DATA_SOURCE": datasource_code_1 }]} excluded_records_as_json = json.dumps(excluded_records) # Find the path by record ID. response_bytearray = bytearray([]) return_code = g2_engine.findPathExcludingByRecordID( datasource_code_2, record_id_2, datasource_code_3, record_id_3, max_degree, excluded_records_as_json, g2_engine_flags, response_bytearray) # Print the results. print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ### Finding Paths with Required Sources # # The `findPathIncludingSourceByEntityID()` and # `findPathIncludingSourceByRecordID()` functions # can be used to find single relationship paths between two entities. # In addition, one of the enties along the path must include a specified data # source. # # Entities can be searched for by either Entity ID or by Record ID, # depending on which function is chosen. # The required data source or sources are specified by a json document list. # # Specific entities may also be excluded, using the same methodology as the # `findPathExcludingByEntityID()` and `findPathExcludingByRecordID()` # functions use. 
#
# These functions have the following parameters:
#
# - **entity_id_2:** The entity ID for the starting entity of the search path
# - **entity_id_3:** The entity ID for the ending entity of the search path
# - **datasource_code_2:** The data source for the starting entity of the
# search path
# - **datasource_code_3:** The data source for the ending entity of the search
# path
# - **record_id_2:** The record ID for the starting entity of the search path
# - **record_id_3:** The record ID for the ending entity of the search path
# - **max_degree:** The number of relationship degrees to search
# - **excluded_entities_as_json:** Entities that should be avoided on the path
# (JSON document)
# - **required_dsrcs_as_json:** Data sources, at least one of which must
# appear on the path (JSON document)
# - **g2_engine_flags:** Operational flags

# #### findPathIncludingSourceByEntityID

# +
# Define search variables.
max_degree = 4

excluded_entities = {
    "ENTITIES": [{
        "ENTITY_ID": entity_id_1
    }]}
excluded_entities_as_json = json.dumps(excluded_entities)

required_dsrcs = {
    "DATA_SOURCES": [
        datasource_code_1
    ]}
required_dsrcs_as_json = json.dumps(required_dsrcs)

# Find the path by entity ID.
response_bytearray = bytearray([])
return_code = g2_engine.findPathIncludingSourceByEntityID(
    entity_id_2,
    entity_id_3,
    max_degree,
    excluded_entities_as_json,
    required_dsrcs_as_json,
    g2_engine_flags,
    response_bytearray)

# Print the results.
print("Return Code: {0}".format(return_code))
RenderJSON(response_bytearray)
# -

# #### findPathIncludingSourceByRecordID

# +
# Define search variables.
excluded_records = {
    "RECORDS": [{
        "RECORD_ID": record_id_1,
        "DATA_SOURCE": datasource_code_1
    }]}
excluded_records_as_json = json.dumps(excluded_records)

# Find the path by record ID.
response_bytearray = bytearray([])
return_code = g2_engine.findPathIncludingSourceByRecordID(
    datasource_code_2,
    record_id_2,
    datasource_code_3,
    record_id_3,
    max_degree,
    excluded_records_as_json,
    required_dsrcs_as_json,
    g2_engine_flags,
    response_bytearray)

# Print the results.
print("Return Code: {0}".format(return_code))
RenderJSON(response_bytearray)
# -

# ### Finding Networks
#
# The `findNetworkByEntityID()` and `findNetworkByRecordID()` functions
# can be used to find all entities surrounding a requested set of entities.
# This includes the requested entities, paths between them, and relations to
# other nearby entities.
#
# Entities can be searched for by either Entity ID or by Record ID,
# depending on which function is chosen.
#
# These functions have the following parameters:
#
# - **entity_list_as_json:** A list of entities, specified by Entity ID
# (JSON document)
# - **record_list_as_json:** A list of entities, specified by Record ID
# (JSON document)
# - **max_degree:** The maximum number of degrees in paths between search
# entities
# - **buildout_degree:** The number of degrees of relationships to show around
# each search entity
# - **max_entities:** The maximum number of entities to return in the
# discovered network
#
# They also have various arguments used to return response documents.
#
# The functions return a JSON document that identifies the paths between each
# set of search entities (if such paths exist), and the information on the
# entities in question (search entities, path entities, and build-out
# entities).

# #### findNetworkByEntityID

# +
# Define search variables.
entity_list = { "ENTITIES": [{ "ENTITY_ID": entity_id_1 }, { "ENTITY_ID": entity_id_2 }, { "ENTITY_ID": entity_id_3 }]} entity_list_as_json = json.dumps(entity_list) max_degree = 2 buildout_degree = 1 max_entities = 12 # Find the network by entity ID. response_bytearray = bytearray() return_code = g2_engine.findNetworkByEntityID( entity_list_as_json, max_degree, buildout_degree, max_entities, response_bytearray) # Print the results. print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # #### findNetworkByEntityIDV2 # # The function `findNetworkByEntityIDV2()` is an improved version of # `findNetworkByEntityID()` that also allow you to use control flags. # + # Define search variables. entity_list = { "ENTITIES": [{ "ENTITY_ID": entity_id_1 }, { "ENTITY_ID": entity_id_2 }, { "ENTITY_ID": entity_id_3 }]} entity_list_as_json = json.dumps(entity_list) max_degree = 2 buildout_degree = 1 max_entities = 12 # Find the network by entity ID. response_bytearray = bytearray() return_code = g2_engine.findNetworkByEntityIDV2( entity_list_as_json, max_degree, buildout_degree, max_entities, g2_engine_flags, response_bytearray) # Print the results. print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # #### findNetworkByRecordID # + # Define search variables. record_list = { "RECORDS": [{ "RECORD_ID": record_id_1, "DATA_SOURCE": datasource_code_1 }, { "RECORD_ID": record_id_2, "DATA_SOURCE": datasource_code_2 }, { "RECORD_ID": record_id_3, "DATA_SOURCE": datasource_code_3 }]} record_list_as_json = json.dumps(record_list) # Find the network by record ID. response_bytearray = bytearray() return_code = g2_engine.findNetworkByRecordID( record_list_as_json, max_degree, buildout_degree, max_entities, response_bytearray) # Print the results. print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # #### findNetworkByRecordIDV2 # # The function `findNetworkByRecordIDV2()` is an improved version of # `findNetworkByRecordID()` that also allow you to use control flags. # + # Define search variables. record_list = { "RECORDS": [{ "RECORD_ID": record_id_1, "DATA_SOURCE": datasource_code_1 }, { "RECORD_ID": record_id_2, "DATA_SOURCE": datasource_code_2 }, { "RECORD_ID": record_id_3, "DATA_SOURCE": datasource_code_3 }]} record_list_as_json = json.dumps(record_list) # Find the network by record ID. response_bytearray = bytearray() return_code = g2_engine.findNetworkByRecordIDV2( record_list_as_json, max_degree, buildout_degree, max_entities, g2_engine_flags, response_bytearray) # Print the results. print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ## Connection Details # The `whyEntityByEntityID()` and `whyEntityByRecordID()` functions can be used # to determine why records belong to their resolved entities. # These functions will compare the record data within an entity against the # rest of the entity data, and show why they are connected. # This is calculated based on the features that record data represents. # # Records can be chosen by either Record ID or by Entity ID, # depending on which function is chosen. # If a single record ID is used, # then comparison results for that single record will be generated, as part of # its entity. # If an Entity ID is used, # then comparison results will be generated for every record within that # entity. 
# # These functions have the following parameters: # # - **entity_id:** The entity ID for the entity to be analyzed # - **datasource_code:** The data source for the record to be analyzed # - **record_id:** The record ID for the record to be analyzed # - **g2_engine_flags:** Control flags for outputting entities # # They also have various arguments used to return response documents. # # The functions return a JSON document that gives the results of the record # analysis. # The document contains a section called "WHY_RESULTS", # which shows how specific records relate to the rest of the entity. # It has a "WHY_KEY", which is similar to a match key, in defining the relevant # connected data. # It shows candidate keys for features that initially cause the records # to be analyzed for a relationship, # plus a series of feature scores that show how similar the feature data was. # # The response document also contains a separate ENTITIES section, # with the full information about the resolved entity. # (Note: When working with this entity data, # Senzing recommends using the flags `G2_ENTITY_SHOW_FEATURES_EXPRESSED` # and `G2_ENTITY_SHOW_FEATURES_STATS`. # This will provide detailed feature data that is not included by default, # but is useful for understanding the WHY_RESULTS data.) # # The functions `whyEntityByEntityIDV2()` and `whyEntityByRecordV2()` are # enhanced versions of `whyEntityByEntityID()` and `whyEntityByRecordID()` # that also allow you to use control flags. # The `whyEntityByEntityID()` and `whyEntityByRecordID()` functions work in the # same way, but use the default flag value `G2_WHY_ENTITY_DEFAULT_FLAGS`. # # For more information, see # [http://docs.senzing.com/?python](http://docs.senzing.com/?python#connection-details) # ### whyEntityByRecordID # + response_bytearray = bytearray() return_code = g2_engine.whyEntityByRecordID( datasource_code_1, record_id_1, response_bytearray) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ### whyEntityByRecordIDV2 # + response_bytearray = bytearray() return_code = g2_engine.whyEntityByRecordIDV2( datasource_code_1, record_id_1, g2_engine_flags, response_bytearray) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ### whyEntityByEntityID # + response_bytearray = bytearray() return_code = g2_engine.whyEntityByEntityID( entity_id_1, response_bytearray) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ### whyEntityByEntityIDV2 # + response_bytearray = bytearray() return_code = g2_engine.whyEntityByEntityIDV2( entity_id_1, g2_engine_flags, response_bytearray) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ## Replace # ### replaceRecord # # Use the `replaceRecord()` function to update or replace a record in the data # repository. # If record doesn't exist, a new record is added to the data repository. # Like the above functions, `replaceRecord()` returns "0" upon success, # and it can be called as many times as desired and from multiple threads at # the same time. # The `replaceRecord()` function accepts four parameters as input: # # - **datasource_code:** The name of the data source the record is associated # with. 
This value is configurable to the system # - **record_id:** The record ID, used to identify distinct records # - **data_as_json:** A JSON document with the attribute data for the record # - **load_id:** The observation load ID for the record; # value can be null and will default to datasource_code # + data = { "NAMES": [{ "NAME_TYPE": "PRIMARY", "NAME_LAST": "Miller", "NAME_FIRST": "John", "NAME_MIDDLE": "M" }], "PASSPORT_NUMBER": "PP11111", "PASSPORT_COUNTRY": "US", "DRIVERS_LICENSE_NUMBER": "DL11111", "SSN_NUMBER": "111-11-1111" } data_as_json = json.dumps(data) return_code = g2_engine.replaceRecord( datasource_code_1, record_id_1, data_as_json, load_id) print("Return Code: {0}".format(return_code)) # - # ### replaceRecordWithInfo # # `replaceRecordWithInfo()` is available if you would like to know what # resolved entities were modified when replacing a record. # It behaves identically to `replaceRecord()`, # but also returns a json document containing the IDs of the affected entities. # It accepts the following parameters: # # - **datasource_code:** The name of the data source the record is associated # with. This value is configurable to the system. # - **record_id:** The record ID, used to identify distinct records # - **data_as_json:** A JSON document with the attribute data for the record # - **response_bytearray:** A memory buffer for returning the response # document; if an error occurred, an error response is stored here. # - **load_id:** The observation load ID for the record; # value can be null and will default to datasource_code # + data = { "NAMES": [{ "NAME_TYPE": "PRIMARY", "NAME_LAST": "Jones", "NAME_FIRST": "John", "NAME_MIDDLE": "M" }], "PASSPORT_NUMBER": "PP11111", "PASSPORT_COUNTRY": "US", "DRIVERS_LICENSE_NUMBER": "DL11111", "SSN_NUMBER": "111-11-1111" } data_as_json = json.dumps(data) response_bytearray = bytearray() return_code = g2_engine.replaceRecordWithInfo( datasource_code_1, record_id_1, data_as_json, response_bytearray, load_id) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ## Re-evaluate # ### reevaluateRecord # + return_code = g2_engine.reevaluateRecord( datasource_code_1, record_id_1, g2_engine_flags) print("Return Code: {0}".format(return_code)) # - # ### reevaluateRecordWithInfo # + response_bytearray = bytearray() return_code = g2_engine.reevaluateRecordWithInfo( datasource_code_1, record_id_1, response_bytearray, g2_engine_flags) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ### reevaluateEntity # Find an entity. # + response_bytearray = bytearray() return_code = g2_engine.getEntityByRecordIDV2( datasource_code_1, record_id_1, g2_engine_flags, response_bytearray) response_dictionary = json.loads(response_bytearray) entity_id_1 = response_dictionary["RESOLVED_ENTITY"]["ENTITY_ID"] print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # Re-evaluate the entity. # + return_code = g2_engine.reevaluateEntity(entity_id_1, g2_engine_flags) print("Return Code: {0}".format(return_code)) # - # ### reevaluateEntityWithInfo # + response_bytearray = bytearray() return_code = g2_engine.reevaluateEntityWithInfo( entity_id_1, response_bytearray, g2_engine_flags) print("Return Code: {0}".format(return_code)) RenderJSON(response_bytearray) # - # ## Reporting # # Exporting entity data from resolved entities is one of the core purposes of # Senzing software. # In just a few short steps, the Senzing engine allows users to export entity # data in either JSON or CSV format. 
# ### exportJSONEntityReport # # There are three steps to exporting resolved entity data from the G2Engine # object in JSON format. # First, use the `exportJSONEntityReport()` method to generate a long integer, # referred to here as an `export_handle`. # The `exportJSONEntityReport()` method accepts one parameter as input: # # - **g2_engine_flags**: An integer specifying which entity details should be # included in the export. See the "Entity Export Flags" section for further # details. export_handle = g2_engine.exportJSONEntityReport(g2_engine_flags) # ### fetchNext # # Second, use the `fetchNext()` method to read the exportHandle and export a # row of JSON output containing the entity data for a single entity. # Note that successive calls of `fetchNext()` will export successive rows of # entity data. # The `fetchNext()` method accepts the following parameters as input: # # - **export_handle:** A long integer from which resolved entity data may be # read and exported. # - **response_bytearray:** A memory buffer for returning the response # document; if an error occurred, an error response is stored here. # # For more information, see # [http://docs.senzing.com/?python](http://docs.senzing.com/?python#reporting). while True: response_bytearray = bytearray() g2_engine.fetchNext(export_handle, response_bytearray) if not response_bytearray: break response_dictionary = json.loads(response_bytearray) response = json.dumps(response_dictionary, sort_keys=True, indent=4) print(response) # ### closeExport g2_engine.closeExport(export_handle) # ### exportCSVEntityReport # # There are three steps to exporting resolved entity data from the G2Engine # object in CSV format. # First, use the `exportCSVEntityReportV2()` method to generate a long integer, # referred to here as an 'export_handle'. # # The `exportCSVEntityReportV2()` method accepts these parameter as input: # # - **csv_column_list:** A comma-separated list of column names for the CSV # export. (These are listed a little further down.) # - **g2_engine_flags:** An integer specifying which entity details should be # included in the export. # See the "Entity Export Flags" section for further details. # # Second, use the `fetchNext()` method to read the exportHandle and export a # row of CSV output containing the entity data for a single entity. # Note that the first call of `fetchNext()` will yield a header row, # and that successive calls of `fetchNext()` will export successive rows of # entity data. # The `fetchNext()` method accepts the following parameters as input: # # - **export_handle:** A long integer from which resolved entity data may be # read and exported # - **response_bytearray:** A memory buffer for returning the response # document; if an error occurred, an error response is stored here # # For more information, see # [http://docs.senzing.com/?python](http://docs.senzing.com/?python#reporting). # + export_handle = g2_engine.exportCSVEntityReport(g2_engine_flags) while True: response_bytearray = bytearray() g2_engine.fetchNext(export_handle, response_bytearray) if not response_bytearray: break print(response_bytearray.decode()) g2_engine.closeExport(export_handle) # - # ## Redo Processing # # Redo records are automatically created by Senzing when certain conditions # occur where it believes more processing may be needed. 
# Some examples: # # - A value becomes generic and previous decisions may need to be revisited # - Clean up after some record deletes # - Detected related entities were being changed at the same time # - A table inconsistency exists, potentially after a non-graceful shutdown # # First we will need to have a total of 6 data sources so let's add 4 more. # Create Record and Entity #6 # + data = { "NAMES": [{ "NAME_TYPE": "PRIMARY", "NAME_LAST": "Owens", "NAME_FIRST": "Lily" }], "SSN_NUMBER": "111-11-1111" } data_as_json = json.dumps(data) return_code = g2_engine.replaceRecord( datasource_code_4, record_id_4, data_as_json, None) print("Return Code: {0}".format(return_code)) # + response_bytearray = bytearray() return_code = g2_engine.getEntityByRecordID( datasource_code_4, record_id_4, response_bytearray) response_dictionary = json.loads(response_bytearray) entity_id_6 = response_dictionary["RESOLVED_ENTITY"]["ENTITY_ID"] print("Return Code: {0}\nEntity ID: {1}".format(return_code, entity_id_6)) RenderJSON(response_bytearray) # - # Create Record and Entity #7 # + data = { "NAMES": [{ "NAME_TYPE": "PRIMARY", "NAME_LAST": "Bauler", "NAME_FIRST": "August", "NAME_MIDDLE": "E" }], "SSN_NUMBER": "111-11-1111" } data_as_json = json.dumps(data) return_code = g2_engine.replaceRecord( datasource_code_5, record_id_5, data_as_json, None) print("Return Code: {0}".format(return_code)) # + response_bytearray = bytearray() return_code = g2_engine.getEntityByRecordID( datasource_code_5, record_id_5, response_bytearray) response_dictionary = json.loads(response_bytearray) entity_id_7 = response_dictionary["RESOLVED_ENTITY"]["ENTITY_ID"] print("Return Code: {0}\nEntity ID: {1}".format(return_code, entity_id_7)) RenderJSON(response_bytearray) # - # Create Record and Entity #8 # + data = { "NAMES": [{ "NAME_TYPE": "PRIMARY", "NAME_LAST": "Barcy", "NAME_FIRST": "Brian", "NAME_MIDDLE": "H" }], "SSN_NUMBER": "111-11-1111" } data_as_json = json.dumps(data) return_code = g2_engine.replaceRecord( datasource_code_6, record_id_6, data_as_json, None) print("Return Code: {0}".format(return_code)) # + response_bytearray = bytearray() return_code = g2_engine.getEntityByRecordID( datasource_code_6, record_id_6, response_bytearray) response_dictionary = json.loads(response_bytearray) entity_id_8 = response_dictionary["RESOLVED_ENTITY"]["ENTITY_ID"] print("Return Code: {0}\nEntity ID: {1}".format(return_code, entity_id_8)) RenderJSON(response_bytearray) # - # Create Record and Entity #9 # + data = { "NAMES": [{ "NAME_TYPE": "PRIMARY", "NAME_LAST": "Miller", "NAME_FIRST": "Jack", "NAME_MIDDLE": "H" }], "SSN_NUMBER": "111-11-1111" } data_as_json = json.dumps(data) return_code = g2_engine.replaceRecord( datasource_code_7, record_id_7, data_as_json, None) print("Return Code: {0}".format(return_code)) # + response_bytearray = bytearray() return_code = g2_engine.getEntityByRecordID( datasource_code_7, record_id_7, response_bytearray) response_dictionary = json.loads(response_bytearray) entity_id_9 = response_dictionary["RESOLVED_ENTITY"]["ENTITY_ID"] print("Return Code: {0}\nEntity ID: {1}".format(return_code, entity_id_9)) RenderJSON(response_bytearray) # - # ### countRedoRecords # # Once the Senzing engine is initialized, use `countRedoRecords()` # to return the remaining internally queued maintenance records in the # Senzing repository. # `countRedoRecords()` takes no arguments and returns <0 for errors. 
# +
return_code = g2_engine.countRedoRecords()

print("Return Code: {0}".format(return_code))
# -

# ### getRedoRecord
#
# Once the Senzing engine is initialized,
# use `getRedoRecord()` to retrieve the next internally queued maintenance
# record from the Senzing repository.
# `getRedoRecord()` can be called as many times as desired and from multiple
# threads at the same time, but all threads must be in the same process;
# `getRedoRecord()` should not be called from multiple processes.
# Unlike `processRedoRecord()`, `getRedoRecord()` does not actually process the
# record.
# To process the record, you would use the G2Engine `process()` function.
# The `getRedoRecord()` function returns "0" upon success and an empty response
# if there is nothing to do.
#
# - **response_bytearray:** A memory buffer for returning the maintenance
# document (may be XML or JSON).
# The format is internal to Senzing.
# If empty, there are no maintenance records to return.

# +
response_bytearray = bytearray()
return_code = g2_engine.getRedoRecord(response_bytearray)

print("Return Code: {0}".format(return_code))
# -

# ### processWithInfo

if (return_code == 0 and response_bytearray):
    process_response_bytearray = bytearray()
    process_return_code = g2_engine.processWithInfo(
        response_bytearray.decode(),
        process_response_bytearray)
    print("Return Code: {0}".format(process_return_code))
    RenderJSON(process_response_bytearray)

# ### process

if (return_code == 0 and response_bytearray):
    g2_engine.process(response_bytearray.decode())

# ### processRedoRecord
#
# `processRedoRecord()` processes the next redo record and returns it.
# If the return code is 0 and `response_bytearray` is empty, there are no more
# redo records to process, and a subsequent call to `countRedoRecords()` will
# return 0.
# Processing a redo record can itself create more redo records in certain
# situations.
#
# - **response_bytearray:** A buffer that returns a JSON object summarizing
# the changes caused by processing the record.
# Also contains the recordID.

# +
response_bytearray = bytearray()
return_code = g2_engine.processRedoRecord(response_bytearray)

print("Return Code: {0}".format(return_code))

# Pretty-print XML.
xml_string = response_bytearray.decode()
if len(xml_string) > 0:
    import xml.dom.minidom
    xml = xml.dom.minidom.parseString(xml_string)
    xml_pretty_string = xml.toprettyxml()
    print(xml_pretty_string)
# -

# ### processRedoRecordWithInfo
#
# `processRedoRecordWithInfo()` is available if you would like to know which
# resolved entities were modified when processing a redo record.
# It behaves identically to `processRedoRecord()`,
# but also returns a JSON document containing the IDs of the affected entities.
# It accepts the following parameters:
#
# - **response_bytearray:** A buffer that returns a JSON object summarizing
# the changes caused by processing the record. Also contains the recordID.
# - **info_bytearray:** A buffer that returns a JSON document containing the
# IDs of the entities affected by processing the redo record.

# +
response_bytearray = bytearray()
info_bytearray = bytearray()
return_code = g2_engine.processRedoRecordWithInfo(
    response_bytearray,
    info_bytearray)

print("Return Code: {0}".format(return_code))

# Pretty-print XML.
xml_string = response_bytearray.decode()
if len(xml_string) > 0:
    import xml.dom.minidom
    xml = xml.dom.minidom.parseString(xml_string)
    xml_pretty_string = xml.toprettyxml()
    print(xml_pretty_string)

# Pretty-print JSON
RenderJSON(info_bytearray)
# -

# ## Delete

# ### deleteRecord
#
# Use `deleteRecord()` to remove a record from the data repository
# (returns "0" upon success);
# `deleteRecord()` can be called as many times as desired and from multiple
# threads at the same time.
# The `deleteRecord()` function accepts three parameters as input:
#
# - **datasource_code:** The name of the data source the record is associated
# with. This value must already be configured in the system.
# - **record_id:** The record ID, used to identify distinct records.
# - **load_id:** The observation load ID for the record;
# the value can be null and will default to dataSourceCode.

# +
return_code = g2_engine.deleteRecord(datasource_code_1, record_id_1, load_id)

print("Return Code: {0}".format(return_code))
# -

# ### deleteRecordWithInfo
#
# `deleteRecordWithInfo()` behaves the same as `deleteRecord()`
# but also returns a JSON document containing the IDs of the affected entities.
# It accepts the following parameters:
#
# - **datasource_code:** The name of the data source the record is associated
# with. This value must already be configured in the system.
# - **record_id:** The record ID, used to identify distinct records.
# - **response_bytearray:** A buffer that returns a JSON object summarizing
# the changes caused by deleting the record. Also contains the recordID.
# - **load_id:** The observation load ID for the record;
# the value can be null and will default to dataSourceCode.
# - **g2_engine_flags:** An integer specifying which entity details should be
# included in the returned document. See the "Entity Export Flags" section
# for further details.

# +
response_bytearray = bytearray()
return_code = g2_engine.deleteRecordWithInfo(
    datasource_code_2,
    record_id_2,
    response_bytearray,
    load_id,
    g2_engine_flags)

print("Return Code: {0}".format(return_code))
RenderJSON(response_bytearray)
# -

# Attempt to get the record again.
# It should error and give an output similar to "Unknown record".

try:
    response_bytearray = bytearray()
    return_code = g2_engine.getRecord(
        datasource_code_1,
        record_id_1,
        response_bytearray)
    response_dictionary = json.loads(response_bytearray)
    response = json.dumps(response_dictionary, sort_keys=True, indent=4)
    print("Return Code: {0}\n{1}".format(return_code, response))
except G2Exception.G2ModuleGenericException as err:
    print("Exception: {0}".format(err))

# ## Cleanup
#
# To purge the G2 repository, use the aptly named `purgeRepository()` method.
# This will remove every record in your current repository.

# ### purgeRepository

g2_engine.purgeRepository()

# ### destroy
#
# Once all searching is done in a given process,
# call `destroy()` to uninitialize Senzing and clean up resources.
# You should always do this once at the end of each process. See
# [http://docs.senzing.com/?python](http://docs.senzing.com/?python#engine).
# + return_code = g2_engine.destroy() print("Return Code: {0}".format(return_code)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # [How do I provide a reproducible copy of my existing DataFrame?](https://stackoverflow.com/questions/52413246/how-do-i-provide-a-reproducible-copy-of-my-existing-dataframe/52413247#52413247) # # import pandas as pd from pprint import pprint as pp # ## Create your DataFrame from some file df = pd.read_csv('data/2018-09-18_flavors_of_cacao.csv') df.head() # # Easiest Method # ## Using to_clipboard and read_clipboard df.head(10).to_clipboard(sep=',', index=False) # If you have a multi-index DataFrame or an index other than 0...n, use index=True and provide a note in your question as to which column(s) are the index. # ## output of to_clipboard "Company  (Maker-if known)","Specific Bean Origin or Bar Name",REF,"Review Date","Cocoa Percent","Company Location",Rating,"Bean Type","Broad Bean Origin" ,,1876,2016,63%,France,3.75, ,Sao Tome ,Kpime,1676,2015,70%,France,2.75, ,Togo ,Atsane,1676,2015,70%,France,3.0, ,Togo ,Akata,1680,2015,70%,France,3.5, ,Togo ,Quilla,1704,2015,70%,France,3.5, ,Peru ,Carenero,1315,2014,70%,France,2.75,Criollo,Venezuela ,Cuba,1315,2014,70%,France,3.5, ,Cuba ,Sur del Lago,1315,2014,70%,France,3.5,Criollo,Venezuela ,,1319,2014,70%,France,3.75,Criollo,Venezuela ,Pablino,1319,2014,70%,France,4.0, ,Peru # ## after executing to_clipboard, run pd.read_clipboard pd.read_clipboard(sep=',') # # With Lists # ## pretty print the headers pp(list(df.columns)) # ## create a variable, copy the printed headers and assign the copy # ### this can then be copied and pasted into Stack Overflow sof_headers = ['Company\xa0\n(Maker-if known)', 'Specific Bean Origin\nor Bar Name', 'REF', 'Review\nDate', 'Cocoa\nPercent', 'Company\nLocation', 'Rating', 'Bean\nType', 'Broad Bean\nOrigin'] # ## pretty print some small range of the DataFrame values pp(df.iloc[0:10].values) # ## create a variable, copy the printed values and assign the copy # ### this can be copied and pasted into Stack Overflow sof_values = [['', '', 1876, 2016, '63%', 'France', 3.75, '\xa0', 'Sao Tome'], ['', 'Kpime', 1676, 2015, '70%', 'France', 2.75, '\xa0', 'Togo'], ['', 'Atsane', 1676, 2015, '70%', 'France', 3.0, '\xa0', 'Togo'], ['', 'Akata', 1680, 2015, '70%', 'France', 3.5, '\xa0', 'Togo'], ['', 'Quilla', 1704, 2015, '70%', 'France', 3.5, '\xa0', 'Peru'], ['', 'Carenero', 1315, 2014, '70%', 'France', 2.75, 'Criollo', 'Venezuela'], ['', 'Cuba', 1315, 2014, '70%', 'France', 3.5, '\xa0', 'Cuba'], ['', '', 1315, 2014, '70%', 'France', 3.5, 'Criollo', 'Venezuela'], ['', '', 1319, 2014, '70%', 'France', 3.75, 'Criollo', 'Venezuela'], ['', 'Pablino', 1319, 2014, '70%', 'France', 4.0, '\xa0', 'Peru']] # ## Using sof_values and sof_headers, the Stack Overflow community can easily reproduce your DataFrame and more easily answer your question sof_df = pd.DataFrame(sof_values, columns=sof_headers) sof_df # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import sys sys.path.append("..") import os import json import numpy as np import datetime as dt import pandas as pd from src.VaccineEffectiveness import VaccineEffectiveness import matplotlib.pyplot as plt # 
%matplotlib inline # + path_file = os.path.join("..", "output", "data", "COHORT_21JAN2021_31AUG2021") df_pair = pd.read_csv(os.path.join(path_file, "pareados_corona_1.csv"), dtype={"CPF": str, "PAR": str}) df_info = pd.read_csv(os.path.join(path_file, "pares_eventos_corona_1.csv")) # - ve_obj = VaccineEffectiveness(df_pair, df_info) intervals = ve_obj.define_intervals(dt.date(2021, 8, 31), return_=True) ve_obj.intervals[1] caso_dict = ve_obj.casos_hash['17381339315'] controle_dict= ve_obj.controles_hash['02604396300'] print(caso_dict) print("\n") print(controle_dict) intervals[(pd.notna(intervals["CASO OBITO COVID"])) | (pd.notna(intervals["CONTROLE OBITO COVID"]))][23:24] def compare_pair_survival(caso_hash, controle_hash, events_col, final_cohort): ''' Description. Args: caso_hash: dictionary. controle_hash: dictionary. events_col: dictionary. final_cohort: datetime.date. Return: xxx: xxx. ''' cpf_caso = caso_hash["CPF"] cpf_controle = controle_hash["CPF"] # Get events of case caso_d1_date = caso_hash[events_col["D1"]] caso_d2_date = caso_hash[events_col["D2"]] caso_covid_date = caso_hash[events_col["OBITO COVID"]] caso_geral_date = caso_hash[events_col["OBITO GERAL"]] # Get events of control control_d1_date = controle_hash[events_col["D1"]] control_d2_date = controle_hash[events_col["D2"]] control_covid_date = controle_hash[events_col["OBITO COVID"]] control_geral_date = controle_hash[events_col["OBITO GERAL"]] f = lambda x: x.date() if not pd.isna(x) else np.nan g = lambda x,y: (x-y).days if not pd.isna(x) and not pd.isna(y) else np.nan # --> D1 start_date = caso_d1_date.date() caso_diff = { "D1 to D2": g(f(caso_d2_date),start_date), "D1 to D1_CONTROL": g(f(control_d1_date),start_date), "D1 to COVID": g(f(caso_covid_date), start_date), "D1 to GERAL": g(f(caso_geral_date), start_date), "D1 to FIM": g(final_cohort, start_date) } control_diff = { "D1 to D1_CONTROL": g(f(control_d1_date),start_date), "D1 to COVID_CONTROL": g(f(control_covid_date),start_date), "D1 to GERAL_CONTROL": g(f(control_geral_date), start_date), "D1 to FIM": g(final_cohort,start_date) } # --> D2 start_date = caso_d2_date.date() caso_diff_d2 = { "D2 to D2": g(f(caso_d2_date),start_date), "D2 to D1_CONTROL": g(f(control_d1_date),start_date), "D2 to COVID": g(f(caso_covid_date), start_date), "D2 to GERAL": g(f(caso_geral_date), start_date), "D2 to FIM": g(final_cohort, start_date) } control_diff_d2 = { "D2 to D1_CONTROL": g(f(control_d1_date),start_date), "D2 to COVID_CONTROL": g(f(control_covid_date),start_date), "D2 to GERAL_CONTROL": g(f(control_geral_date), start_date), "D2 to FIM": g(final_cohort,start_date) } caso_events_d1 = [ (key, caso_diff[key]) for key in caso_diff.keys() ] control_events_d1 = [ (key, control_diff[key]) for key in control_diff.keys() ] caso_events_d2 = [ (key, caso_diff_d2[key]) for key in caso_diff_d2.keys() ] control_events_d2 = [ (key, control_diff_d2[key]) for key in control_diff_d2.keys() ] res = { "CPF CASO": cpf_caso, "CPF CONTROLE": cpf_controle, "D1": (caso_events_d1, control_events_d1), "D2": (caso_events_d2, control_events_d2) } return res cols = { "D1": "DATA D1", "D2": "DATA D2", "OBITO COVID": "DATA OBITO COVID", "OBITO GERAL": "DATA OBITO GERAL" } info = compare_pair_survival(caso_dict, controle_dict, cols, dt.date(2021, 8, 31)) info # + info_d1_caso = info["D1"][0] info_d1_controle = info["D1"][1] info_d1_caso = sorted(info_d1_caso, key=lambda tup: tup[1]) info_d1_controle = sorted(info_d1_controle, key=lambda tup: tup[1]) info_d1_caso = [ x for x in info_d1_caso if not 
pd.isna(x[1]) ] info_d1_controle = [ x for x in info_d1_controle if not pd.isna(x[1]) ] # - info_d1_controle # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="eNLQI5OfZtWD" # # Code Signal Diving Deeper # + [markdown] id="0mIu_4dBZ4QQ" # ## Extract Eack Kth (34) # Given array of integers, remove each kth element from it. # # ### Example # # For inputArray = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and k = 3, the output should be # solution(inputArray, k) = [1, 2, 4, 5, 7, 8, 10]. # # # + id="b_7SnT7fZ4uo" def extractEachKth(inputArray, k): loop = ((len(inputArray) - (len(inputArray)%k))/k) loop = int(loop) L = [] for i in range(loop): L.insert(0,i+1) for i in L: del inputArray[(i*k)-1] return inputArray # + [markdown] id="F-SJHz0vZ6io" # ## First Digit (35) # Find the leftmost digit that occurs in a given string. # # ### Example # # For inputString = "var_1__Int", the output should be # solution(inputString) = '1'; # For inputString = "q2q-q", the output should be # solution(inputString) = '2'; # For inputString = "0ss", the output should be # solution(inputString) = '0'. # + id="tTdIOykFZ63F" def firstDigit(inputString): #output the first digit found in a string L = list(inputString) J = [] for i in L: if i.isnumeric(): J.append(i) return J[0] # + [markdown] id="lraKkzAKZ7bU" # ## Different Symbols Naive (36) # Given a string, find the number of different characters in it. # # ### Example # # For s = "cabca", the output should be # solution(s) = 3. # # There are 3 different characters a, b and c. # + id="-kFsq-idZ7w0" from collections import Counter def differentSymbolsNaive(s): ## Given a string, find the number of different characters in it. c = Counter(s) getList(c) return len(c) def getList(dict): ## Create a list of all the keys in a given dictionary ## For example getList(Dictionary["SubDictionary"]) ## Or getList(Dictionary) list = [] for key in dict.keys(): list.append(key) return list # + [markdown] id="h06E5sc1Z85S" # ## Array Max Consecutive Sum (37) # Given array of integers, find the maximal possible sum of some of its k consecutive elements. # # ### Example # # For inputArray = [2, 3, 5, 1, 6] and k = 2, the output should be # solution(inputArray, k) = 8. # All possible sums of 2 consecutive elements are: # # 2 + 3 = 5; # 3 + 5 = 8; # 5 + 1 = 6; # 1 + 6 = 7. # Thus, the answer is 8. # + id="OLFSwxBfZ9Po" def arrayMaxConsecutiveSum(inputArray, k): ## Given array of integers, find the maximal possible sum of some of its k consecutive elements. S = sum(inputArray[:k]) M = S for i in range(len(inputArray) - k): S += ( inputArray[i+k] - inputArray[i]) if M < S: M = S return M # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true) # # Open in Callysto # # Migration: Necessity for diversity # # Early trade routes such as the Silk Road existed in the third century BCE. Not only goods, but also people and ideas traveled those routes. However migration rates increased in 1492 when Columbus made a maiden voyage to the Caribbean. 
Post World War II, with the evolution of technology, people have been traveling around the world - an era which is often referred to as **Contemporary Globalization**. # # Let's analyze the current trends of migration around the world. [Organisation for Economic Co-operation and Development (OECD)](https://www.oecd.org/) maintains an [International Migration Database](https://stats.oecd.org/Index.aspx?DataSetCode=MIG#) for member countries. The dataset used in this notebook is a customized version downloaded from that database. # ## Migration chart for various countries # Let's first check out what information is available in our dataset. Run the code cells below to see the top 5 rows. # + # Import python libraries import pandas as pd import plotly.express as px import plotly.graph_objects as go from ipywidgets import interact, fixed, widgets, Layout, Button, Box, fixed, HBox, VBox from IPython.display import clear_output # Don't show warnings in output import warnings warnings.filterwarnings('ignore') print('Libraries successfully imported.') # + # Import the dataset (remove rows with missing entries) df = pd.read_csv('./Data/OECD_Full_Data.csv') # Data clean up - remove the rows for which countries are not available, rename columns df = df[~df['Country of birth/nationality'].isin(['Unknown','Not stated','Stateless','Total'])]\ .rename(columns={'Value':'Migrated Population','CO2':'Country Code'}) # Display the top 5 rows df.head() # - # The dataset comprises of *inflow of foreign population (along with their nationalities)* in a country for various years. # # Run the code cells below and select the **Country** and **Year** for which you want to analyze data. Then click on `Show Migration Chart` button to see the [choropleth map](https://en.wikipedia.org/wiki/Choropleth_map). # Define a callback function for "Show Migration Chart" button def show_migration_chart(ev): clear_output(wait=True) # Define display order for the buttons and menus display(Box(children = [country_menu, year_menu], layout = box_layout)) display(VBox(children = [show_button], layout = Layout(display= 'flex', flex_flow= 'column', align_items= 'center', width='100%', justify_content = 'center'))) # Find rows in the dataset for user-selected country and year subset = df[df['Country'] == country_menu.value][df['Year'] == year_menu.value] # Is data available for user-selected entries? if(subset.shape[0] == 0): print('Data not available... :-(') else: # Plot choropleth map fig = px.choropleth(subset, # dataframe with required data locations="Country Code", # Column containing country codes color="Migrated Population", # Color of country should be based on population migrated hover_name="Country of birth/nationality", # title to add to hover information hover_data=["Migrated Population"], # data to add to hover information color_continuous_scale=px.colors.sequential.Reds, # Title of the chart title='Inflow of foreign population to {} (Year {})
    \ Source: OECD'\ .format(country_menu.value, year_menu.value) ) # Show the figure fig.show() print('Successfully defined the show_migration_chart, now run the next cell and click the button to see the map.') # + # Layout for widgets box_layout = Layout(display='flex', flex_flow='row', align_items='center', width='100%', justify_content = 'center') style = {'description_width': 'initial'} # Create dropdown menu for Country and Year country_menu = widgets.Dropdown(options = df['Country'].unique(), description ='Country: ', style = style, disabled=False) year_menu = widgets.Dropdown(options = df['Year'].unique(), description ='Year: ', style = style, disabled=False) # Create Show Migration Flow button and define click events show_button = widgets.Button(button_style= 'info', description="Show Migration Chart") show_button.on_click(show_migration_chart) # Define display order for the buttons and menus display(Box(children = [country_menu, year_menu], layout = box_layout)) display(VBox(children = [show_button], layout = Layout(display= 'flex', flex_flow= 'column', align_items= 'center', width='100%', justify_content = 'center'))) # - # Hover the mouse around and check out how many people migrated to your selected country. # # ### Questions: # 1. List the top 3 countries with most inflow of population to Canada in 2017. (Hint: One of the countries is an island) # # #### Discussion Questions: # 2. How have immigrants contributed economically, politically, and socially in Canada? # 3. Why do people migrate to another country despite the various challenges associated with migration? Discuss how those reasons present bright ro dark side of globalization. # ## Migration statistics for Alberta # # Let's take a look at the migration data for Alberta made available by the [Alberta Government](https://open.alberta.ca/dataset/mobility-migrants-alberta-census-divisions-and-economic-regions/resource/f4832d23-430e-4b49-a324-560e6a404efb). First we'll see what parameters are in the dataset. # + # Import the dataset (remove unnecessary columns) df2 = pd.read_csv('./Data/AB_Migrants_Data.csv').dropna().drop(columns=['Net Intraprovincial Migrants','SGC']) # Convert "Year" in the data from interval format to a number df2['Year'] = df2['Year'].str[-4:].astype('int') # Show top 5 rows df2.head() # - # Before we go ahead, it would be helpful to understand each parameter in the dataset. Please go through the [glossary](https://www150.statcan.gc.ca/n1/pub/91-528-x/2015001/gloss-eng.htm) page of [Statistics Canada](https://www.statcan.gc.ca/eng/start) to get familiar with the parameters. If you are interested in how they are calculated, more information is available [here](https://www150.statcan.gc.ca/n1/en/pub/91-528-x/91-528-x2015001-eng.pdf?st=iYlctk02). # + # Create a plotly figure object (like an empty figure) fig = go.Figure() # Create traces for i in df2.columns[1:]: # Add trace for each column (variable) in dataset fig.add_trace(go.Scatter(x=df2['Year'], y=df2[i], mode='lines+markers', name=i)) # Change the figure layout fig.update_layout( # Change properties of x-axis xaxis=dict( linecolor='rgb(204, 204, 204)', # color of x-axis mirror=True, # should axis be mirrored? linewidth=2, # width of x-axis ticks='outside', # location of x-ticks tickfont=dict( size=14, # size of x-ticks color='rgb(82, 82, 82)', # color of x-ticks ), ), # Change properties of y-axis yaxis=dict( title=dict( text = 'Migrant Population
    (in thousands)', # y-axis title font=dict( size=16, # size of y-axis title ) ), linecolor='rgb(204, 204, 204)', # color of y-axis mirror=True, # should axis be mirrored? linewidth=2, # width of y-axis ticks='outside', # location of y-ticks tickfont=dict( size=14, # size of y-ticks color='rgb(82, 82, 82)', # color of y-ticks ), ), plot_bgcolor='white', # Background color legend_orientation="h", # Orientation of legend # Title for the figure (as an annotation) annotations=[dict(xref='paper', yref='paper', x=0.5, y=1.2, # Position of the title xanchor='center', yanchor='top', text='Migration Statistics for Alberta
    (1971-2019)', # Text of the title font=dict(size=18), # Font size of the title showarrow=False)] ) # Show the figure fig.show() # - # Hover the mouse around the graph to see the annual data for the parameter you like. Also, you can enable or disable particular dataset by clicking on the legend for that dataset. # ### Questions # # 1. Which two parameters have similar trends? Why do you think that might be? # 2. Which trend has mostly gone upwards since the year 2000? # [![Callysto.ca License](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-bottom.jpg?raw=true)](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (fractals) # language: python # name: fractals # --- # ### Generating the distance from the complex origin. import math def generate_r(p,c,i): ''' outputs a radius, r, from the origin of the complex plane. p: starting point as a complex number c: constant complex number i: number of iterations ''' z = p iterations = 0 r = 0 while iterations < i: z = z**2 + c r = math.sqrt(z.real**2 + z.imag**2) iterations += 1 return round(r,2) # + c = -0.7 + 0.27j p = 0.5 + 1j i = 2 generate_r(p,c,i) # - # ### Function from blog post part 1 # + import matplotlib.pyplot as plt import numpy as np def julia_set(w, h, c = -0.7+ 0.27j, zoom=1, niter=256): """ A julia set of geometry (width x height) and iterations 'niter' """ # Why (hxw) ? Because numpy creates a matrix as row x column # and height represents the y co-ordinate or rows and # width represents the x co-ordinate or columns. pixels = np.arange(w*h,dtype=np.uint16).reshape(h, w) for x in range(w): for y in range(h): # calculate the initial real and imaginary part of z, # based on the pixel location and zoom and position values zx = 1.5*(x - w/2)/(0.5*zoom*w) zy = 1.0*(y - h/2)/(0.5*zoom*h) for i in range(niter): radius_sqr = zx*zx + zy*zy # Iterate till the point is outside # the circle with radius 2. 
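                # Note: |z| > 2 (i.e. |z|^2 > 4) guarantees divergence of
                # z -> z**2 + c whenever |c| <= 2, so 2 is the standard
                # bailout radius for this Julia iteration.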
if radius_sqr > 4: break # Calculate new positions zy,zx = 2.0*zx*zy + c.imag, zx*zx - zy*zy + c.real color = (i >> 21) + (i >> 10) + i*8 pixels[y,x] = color # display the created fractal plt.imshow(pixels) plt.show() # - julia_set(1024,768,zoom=4,niter = 256) arr = np.arange(1024*768,dtype=np.uint16) print(arr) print(len(arr)) arr2 = np.arange(1024*768,dtype=np.uint16).reshape(768,1024) print(arr2) print(len(arr2)) arr = np.arange(1024*768) print(arr) print(len(arr)) arr2 = np.arange(1024*768).reshape(768,1024) print(arr2) print(len(arr2)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: privateai # language: python # name: privateai # --- # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 495, "status": "ok", "timestamp": 1559977269702, "user": {"displayName": "\u00e1rez", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="1ghOuV1nJ8Uy" outputId="856d49bb-cdea-4049-85e2-31540f9c9bf4" import os os.getcwd() # + [markdown] colab_type="text" id="S-uYW5ENLe0V" # # Mount File System for Working on Colaboratory # + [markdown] colab_type="text" id="pF272MmRLbvf" # # Running or importing .py Files # + colab={"base_uri": "https://localhost:8080/", "height": 129} colab_type="code" executionInfo={"elapsed": 38352, "status": "ok", "timestamp": 1559977307596, "user": {"displayName": "\u00e1rez", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="C6pH56x2J8ud" outputId="21aedae5-0c63-4637-8331-a4a383d62b8f" from google.colab import drive drive.mount('/content/drive') # + [markdown] colab_type="text" id="yF6_T35WDdRN" # # Change working directory # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 39186, "status": "ok", "timestamp": 1559977308454, "user": {"displayName": "\u00e1rez", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="4zAzX0u-KBAg" outputId="bd65b04e-740a-4282-dfcb-6fa81be062fe" # !ls # + colab={"base_uri": "https://localhost:8080/", "height": 256} colab_type="code" executionInfo={"elapsed": 40012, "status": "ok", "timestamp": 1559977309301, "user": {"displayName": "\u00e1rez", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="tyrxIXPtDvhp" outputId="39862e61-f18e-4354-8d48-3538b6a765c4" # !ls 'drive/My Drive/pytorch-EverybodyDanceNow/notebook' # + colab={} colab_type="code" id="GSsevlhMDcJ0" import os os.chdir('drive/My Drive/pytorch-EverybodyDanceNow/notebook') # + colab={} colab_type="code" id="uC5YXwhGJ8zw" import os import numpy as np import torch os.environ['CUDA_VISIBLE_DEVICES'] = '0' torch.multiprocessing.set_sharing_strategy('file_system') torch.backends.cudnn.benchmark = True torch.cuda.set_device(0) import time import pickle import matplotlib.pyplot as plt from collections import OrderedDict from torch.autograd import Variable from pathlib import Path # %matplotlib inline # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 583, "status": "ok", "timestamp": 1559977328707, "user": {"displayName": "\u00e1rez", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="QHonnhjbJ83k" outputId="5aca6dcf-e6d2-4138-bef7-7e68d0e59226" pix2pixhd_dir = Path('../src/pix2pixHD/') import sys sys.path.append(str(pix2pixhd_dir)) print(str(pix2pixhd_dir)) # + colab={} colab_type="code" id="tWwknmiyJ8_k" # %load_ext 
autoreload # + colab={} colab_type="code" id="gT_dK2Z2J9DA" # %autoreload 2 # + [markdown] colab_type="text" id="xpDpRZ3_95hm" # # Download and extract video # + colab={} colab_type="code" id="yUU8C0qz95hr" import cv2 from pathlib import Path # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" executionInfo={"elapsed": 444, "status": "ok", "timestamp": 1559977389983, "user": {"displayName": "", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="2uDRO1tb95h3" outputId="50b81c64-2945-40e4-d279-61ec9883fc74" save_dir = Path('../data/source/') print("Source data dir: ",save_dir) try: save_dir.mkdir(True) except FileExistsError: print("Directory exists") img_dir = save_dir.joinpath('images') print(img_dir) try: img_dir.mkdir(True) except FileExistsError: print("Directory exists") # + [markdown] colab_type="text" id="nWstnWlz95h-" # # Get Frames from Video # + colab={"base_uri": "https://localhost:8080/", "height": 72} colab_type="code" executionInfo={"elapsed": 134118, "status": "ok", "timestamp": 1559977577787, "user": {"displayName": "", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="ynlqklIW95iB" outputId="df94a4c6-3413-45df-9aa4-8b5b3084d761" print(str(save_dir.joinpath('file_name.mp4'))) cap = cv2.VideoCapture( str(save_dir.joinpath('file_name.mp4'))) print(cap.isOpened()) i = 0 while(cap.isOpened()): flag, frame = cap.read() if flag == False: break #frame = cv2.resize(frame, dsize=(512, 512), interpolation=cv2.INTER_CUBIC) cv2.imwrite(str(img_dir.joinpath('{:05d}.png'.format(i))), frame) #if i >= 300: # break i += 1 print("Number of frames:", i) # + [markdown] colab_type="text" id="8IGkvNJk95iJ" # # Pose estimation (OpenPose) # + colab={} colab_type="code" id="8rky8wrY95iL" import numpy as np import matplotlib.pyplot as plt import torch from tqdm import tqdm # %matplotlib inline # + colab={} colab_type="code" id="F9q3gvet95iS" openpose_dir = Path('../src/pytorch_Realtime_Multi-Person_Pose_Estimation/') import sys sys.path.append(str(openpose_dir)) sys.path.append('../src/utils') # - # # Install packages with pip import sys # #!{sys.executable} -m pip install scikit-image # + colab={} colab_type="code" id="VbMgYY1P95if" # openpose from network.rtpose_vgg import get_model from evaluate.coco_eval import get_multiplier, get_outputs # utils from openpose_utils import remove_noise, get_pose # + colab={"base_uri": "https://localhost:8080/", "height": 3801} colab_type="code" executionInfo={"elapsed": 11978, "status": "ok", "timestamp": 1559977881884, "user": {"displayName": "\u00e1rez", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="2WtMvF4F95ip" outputId="a5523570-bd3b-419f-a51d-e3abe09ae2a8" weight_name = openpose_dir.joinpath('network/weight/pose_model.pth') model = get_model('vgg19') model.load_state_dict(torch.load(weight_name)) model = torch.nn.DataParallel(model).cuda() model.float() model.eval() # + [markdown] colab_type="text" id="JGrHBf2W95i2" # ## check # + colab={"base_uri": "https://localhost:8080/", "height": 324} colab_type="code" executionInfo={"elapsed": 525, "status": "ok", "timestamp": 1559977915339, "user": {"displayName": "\u00e1rez", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="4cFKPhOn95i5" outputId="ba441250-bef4-4b3b-995f-91dd1a285fb0" # %matplotlib inline print(img_dir) print(len(os.listdir(img_dir))) img_path = sorted(img_dir.iterdir())[5] img = cv2.imread(str(img_path)) shape_dst = np.min(img.shape[:2]) # offset oh = (img.shape[0] - shape_dst) // 
2 ow = (img.shape[1] - shape_dst) // 2 img = img[oh:oh+shape_dst, ow:ow+shape_dst] img = cv2.resize(img, (512, 512)) plt.imshow(img[:,:,[2, 1, 0]]) # BGR -> RGB # + colab={"base_uri": "https://localhost:8080/", "height": 287} colab_type="code" executionInfo={"elapsed": 3658, "status": "ok", "timestamp": 1559977921811, "user": {"displayName": "\u00e1rez", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="XPl3570E95jD" outputId="8c98c1a3-a362-4206-edc7-48121d09b0a8" multiplier = get_multiplier(img) with torch.no_grad(): paf, heatmap = get_outputs(multiplier, img, model, 'rtpose') r_heatmap = np.array([remove_noise(ht) for ht in heatmap.transpose(2, 0, 1)[:-1]])\ .transpose(1, 2, 0) heatmap[:, :, :-1] = r_heatmap param = {'thre1': 0.1, 'thre2': 0.05, 'thre3': 0.5} label,cord = get_pose(param, heatmap, paf) print("Cord", cord, len(cord)) plt.imshow(label) # + [markdown] colab_type="text" id="lqziN8I695jL" # ## make label images for pix2pix # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 6248329, "status": "ok", "timestamp": 1559984191699, "user": {"displayName": "\u00e1rez", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="CDGn0V3X95jM" outputId="27fb5e15-5f5d-4808-bfd3-0f54a26c5990" print(save_dir) # Source images test_img_dir = save_dir.joinpath('test_img') try: test_img_dir.mkdir(exist_ok=True) except FileExistsError: print("Directory exists") # Label images test_label_dir = save_dir.joinpath('test_label_ori') try: test_label_dir.mkdir(exist_ok=True) except FileExistsError: print("Directory exists") # Head images test_head_dir = save_dir.joinpath('test_head_ori') try: test_head_dir.mkdir(exist_ok = True) except FileExistsError: print("Directory exists") NUM_FRAMES = len(os.listdir(str(img_dir))) print("Number of frames", NUM_FRAMES) pose_cords = [] for idx in tqdm(range(NUM_FRAMES)): img_path = img_dir.joinpath('{:05d}.png'.format(idx)) img = cv2.imread(str(img_path)) shape_dst = np.min(img.shape[:2]) oh = (img.shape[0] - shape_dst) // 2 ow = (img.shape[1] - shape_dst) // 2 img = img[oh:oh+shape_dst, ow:ow+shape_dst] img = cv2.resize(img, (512, 512)) multiplier = get_multiplier(img) with torch.no_grad(): paf, heatmap = get_outputs(multiplier, img, model, 'rtpose') r_heatmap = np.array([remove_noise(ht) for ht in heatmap.transpose(2, 0, 1)[:-1]])\ .transpose(1, 2, 0) heatmap[:, :, :-1] = r_heatmap param = {'thre1': 0.1, 'thre2': 0.05, 'thre3': 0.5} label, cord = get_pose(param, heatmap, paf) index = 13 crop_size = 25 try: head_cord = cord[index] except: head_cord = pose_cords[-1] # if there is no head point in this picture. #Use las frame head point pose_cords.extend([head_cord]) head = img[ int(head_cord[1] - crop_size) :int(head_cord[1] + crop_size), int(head_cord[0] - crop_size ) :int(head_cord[0] + crop_size),: ] #Show head plt.imshow(head) plt.savefig(str( test_head_dir.joinpath('pose_{}.jpg'.format(idx)))) plt.clf() cv2.imwrite(str(test_img_dir.joinpath( '{:05}.png'.format(idx))), img) cv2.imwrite(str(test_label_dir.joinpath( '{:05}.png'.format(idx))), label) if idx % 100 == 0 and idx != 0: pose_cords_arr = np.array(pose_cords, dtype = np.int) # Save pose coordinates array. 
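        # Note: np.int is deprecated since NumPy 1.20 (and removed in 1.24);
        # np.int_ or plain int is the forward-compatible dtype here.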
np.save(str(save_dir.joinpath('pose_source.npy')), pose_cords_arr) pose_cords_arr = np.array( pose_cords, dtype = np.int ) np.save( str(save_dir.joinpath('pose_source.npy')), pose_cords_arr ) torch.cuda.empty_cache() # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" executionInfo={"elapsed": 279, "status": "ok", "timestamp": 1559977930126, "user": {"displayName": "\u00e1rez", "photoUrl": "", "userId": "17609825801325148392"}, "user_tz": 300} id="_G1ZoynL95jW" outputId="77e50ffe-1ac4-474f-eb27-c9c62b68fd1e" len(os.listdir(str(img_dir))) # + colab={} colab_type="code" id="elT-wai3J0fb" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from torch.utils.data import TensorDataset, DataLoader from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix, accuracy_score import torch.nn.functional as F import torch from torch import optim import torch import torch.nn as nn # # Importing and visualizing dataset #import the dataset iris = pd.read_csv("creditcard.csv") iris.head() # # Seperating Examples and Labels # + X= iris.iloc[:, 1:29] Y= iris.iloc[:, 30] from sklearn.preprocessing import normalize X = normalize(X) y = np.array(Y) y = y.astype(int) X_train,X_test,Y_train,Y_test= train_test_split(X, y, test_size= 0.10, random_state= 1) print ('X_train shape: ',X_train.shape) print ('y_train shape: ',Y_train.shape) print ('X_test shape: ',X_test.shape) print ('y_test shape: ',Y_test.shape) # - # ## Using Dataloader to convert numpy arrays to Tensors # + trainloader = DataLoader(TensorDataset(torch.from_numpy(X_train), torch.from_numpy(Y_train)), batch_size=len(X_train), shuffle=True) testloader = DataLoader(TensorDataset(torch.from_numpy(X_test), torch.from_numpy(Y_test)), batch_size=len(X_test), shuffle=False) dataloaders = { "train": trainloader, "validation": testloader } # - # ## This class will define our model # ### Using __init__ we will define numbers of nodes in our particular layer # ### Using forward() we will define functionality of each layer class Classifier(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(28, 340) self.fc2 = nn.Linear(340, 220) self.fc3 = nn.Linear(220, 200) self.fc4 = nn.Linear(200, 70) self.fc5 = nn.Linear(70, 10) self.fc6 = nn.Linear(10, 2) self.dropout = nn.Dropout(p=0.2) def forward(self, x): #x = x.view(x.shape[0], -1) #print(x) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc3(x)) x = F.relu(self.fc4(x)) x = self.dropout(F.relu(self.fc5(x))) x = F.log_softmax(self.fc6(x), dim=1) #x = F.log_softmax(self.fc4(x)) return x # ## Model declaration, Type of loss and optimizer. # ### We are using adam optimizer to optimize our network model = Classifier() criterion = nn.NLLLoss() optimizer = optim.Adam(model.parameters(), lr=0.01) # ## This block is showing summary off our model model # ## This function is predicting output of examples we will feed in. # ### Will be useful in calculating model accuracies. def predict(model, inputs): output = model(inputs) return output.data.numpy().argmax(axis= 1) # # Here we will perform forward and backward propagation. 
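# One caveat about the `predict()` helper defined above: it calls the model
# directly, so the `Dropout` layer stays active whenever the model is in
# training mode, and an autograd graph is built unnecessarily. A possible
# refinement is sketched below (the `predict_eval` name is just for
# illustration, assuming the same `Classifier` model): switch to evaluation
# mode and disable gradient tracking while predicting. The training loop in
# the next cell is unchanged.

# +
def predict_eval(model, inputs):
    # Put the model in eval mode so Dropout is disabled during inference.
    model.eval()
    with torch.no_grad():   # no autograd graph needed for prediction
        output = model(inputs)
    model.train()           # restore training mode for the loop below
    return output.numpy().argmax(axis=1)
# -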
# + from torch.autograd import Variable loss1=[] train_acc=[] Epoch=40 for epoch in range(Epoch): print('------------------------------------------------------------------------------------------') acc=0 train_acc1=0 for i, (features, labels) in enumerate(trainloader): #print(features.shape) features = Variable(features) labels = Variable(labels) optimizer.zero_grad() features=features.float() outputs = model(features) loss = criterion(outputs, labels.long()) loss.backward() optimizer.step() if (i+1) % len(trainloader) == 0: Ypred = predict(model, torch.from_numpy(X_train).float()) acc = np.mean(Y_train == Ypred) # train_acc1=train_accuracy/len(trainloader) train_acc1=acc/len(trainloader) train_acc.append(train_acc1) loss1.append(loss.data) print ('Epoch [%d/%d], Iter [%d] Loss: %.4f Training Accuracy: %.5f' %(epoch+1, 40, i+1, loss.data, train_acc1 )) # - # ## we will plot our accuracies and loss functions below. np_loss=loss1[0].numpy() for i in range(len(loss1)): np_loss=np.append(np_loss, loss1[i]) np_acc=0.02 for i in range(len(train_acc)): np_acc=np.append(np_acc, train_acc[i]) # # Training Accuracy # # # %matplotlib inline plt.plot(np_acc, color='blue') plt.title("Training Accuracy") plt.show() # # Training Loss # %matplotlib inline plt.plot(np_loss, color='red', label='Trainig loss') plt.title("Traininng Loss") plt.legend() plt.show() # # Training loss and accuracy curves # %matplotlib inline plt.plot(np_loss, color='red', label='Trainig loss') plt.plot(np_acc, color='blue', label='Training Accuracy') plt.title("Loss/Accuracy curves") plt.legend() plt.show() # # Model test accuracy Ypred = predict(model, torch.from_numpy(X_test).float()) acc = np.mean(Y_test == Ypred) print('Test accuracy: ', acc) # # Precision/ Recall / F1 Scores using sklearn from sklearn.metrics import classification_report target_names = ['Class 0', 'Class 1'] print(classification_report(Y_test, Ypred, target_names=target_names)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="XeV1U7GkVNZY" # ## **Importing necessary libraries** # + id="219CEYUFVNZn" #import the important packages import pandas as pd #library used for data manipulation and analysis import numpy as np # library used for working with arrays. import matplotlib.pyplot as plt # library for plots and visualisations import seaborn as sns # library for visualisations # %matplotlib inline import scipy.stats as stats # this library contains a large number of probability distributions as well as a growing library of statistical functions. # + [markdown] id="zwbR4fdpVNZp" # # **Binomial Distribution** # + [markdown] id="iLvUFh_6VNZp" # ### **1) A LED bulb manufacturing company regularly conducts quality checks at specified periods on the products it manufactures. Historically, the failure rate for LED light bulbs that the company manufactures is 5%. Suppose a random sample of 10 LED light bulbs is selected. 
Find the probability distribution for the defective bulbs and answer the below asked questions.** # + id="Pnp7Np_VVNZp" p = 0.05 # failure rate for LED light bulbs that the company manufactures is 5% n = 10 # sample size k = np.arange(0,11) # an array of different possible number of defective bulbs # + id="7tAFDju8VNZq" binomial = stats.binom.pmf(k,n,p) # + colab={"base_uri": "https://localhost:8080/"} id="D3FonjHuVNZq" outputId="f21b8862-d653-4d28-b37c-20e9de4f5e83" print(binomial) # + [markdown] id="So2fXK6AZ-Xs" # **Plot the binomial distribution** # + colab={"base_uri": "https://localhost:8080/", "height": 297} id="EucEco3yZx0y" outputId="b00545a9-433f-41a5-cf75-107887eff0e8" # plot the distribution plt.bar(k,binomial) plt.title('Binomial: n=%i , p=%.2f' % (n,p), fontsize=15) plt.xlabel('Number of Successes') plt.ylabel('Probability of Successes') plt.show() # + [markdown] id="U8C34C7HVNZr" # **a) What is the probability that none of the LED bulbs are defective?** # + colab={"base_uri": "https://localhost:8080/"} id="DYBa2d2kVNZs" outputId="d96b3b97-cf0c-413f-d99b-f169f4992202" print('The probability that none of the LED bulbs are defective is %1.4f' %binomial[0]) # + [markdown] id="siKk2kFfVNZs" # **b) What is the probability that exactly one of the LED bulbs is defective?** # + colab={"base_uri": "https://localhost:8080/"} id="JvlpPRafVNZt" outputId="82865149-c0c5-4ad7-bf33-243dc8b09041" print('The probability that exactly one of the LED bulbs is defective is %1.4f' %binomial[1]) # + [markdown] id="sTKTp_VbVNZt" # **c) What is the probability that two or fewer of the LED bulbs are defective?** # + [markdown] id="_ETr7cSsVNZu" # Hint: # We need to calculate cumulative probability of two or fewer LED bulbs being defective # + id="4j7cFmgqVNZu" cumulative_binomial = stats.binom.cdf(k,n,p) # + colab={"base_uri": "https://localhost:8080/"} id="ohej5x5tVNZv" outputId="25510879-1185-48f6-d11d-4f90d7d39b32" print('The probability that two or fewer of the LED bulbs are defective is %1.4f' %cumulative_binomial[2]) # + [markdown] id="6D3d7vgNVNZv" # **d) What is the probability that three or more of the LED bulbs are defective** # + [markdown] id="-EnDIeiTVNZv" # Hint: We need to subtract the cumulative probability up to 2 defective LED bulbs from 1. 
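#
# Equivalently, with $n = 10$ and $p = 0.05$:
#
# $P(X \geq 3) = 1 - P(X \leq 2) = 1 - \sum_{k=0}^{2}\binom{10}{k}(0.05)^{k}(0.95)^{10-k}$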
# + colab={"base_uri": "https://localhost:8080/"} id="mvuUJchzVNZw" outputId="6eb39eb3-0aa2-4af7-a8ad-ab4719f925d9" P = 1- cumulative_binomial[2] print('The probability that three or more of the LED bulbs are defective is %1.4f' % P) # + [markdown] id="g7pg-lWiVNZx" # ### **2) During an NBA game a basketball player has two foul shots (free throw), if he converts 93% of his free throw shots.** # + id="lxT7-cFZVNZx" success=0.93 # + [markdown] id="I1yejxesVNZx" # **a) What is the probability that he will convert both the shots?** # + colab={"base_uri": "https://localhost:8080/"} id="SmwMFMOwVNZx" outputId="3ef552e8-2b2a-4bb7-da18-e9c00bbcbfc5" print('The probability that he will convert both the shots is',round(stats.binom.pmf(2,2,0.93),4)) # + [markdown] id="0R1FriOVVNZy" # **b) What is the probability that he will convert at least one shot?** # + colab={"base_uri": "https://localhost:8080/"} id="LHvHcZIZVNZy" outputId="a07246a6-7cfa-4192-9814-2cf9f5793de3" print('The probability that he will convert at least one shot is',round((1 - stats.binom.cdf(0,2,0.93)),4)) # + [markdown] id="jeGWrHJhabri" # ### **3) Over a long period in a large multinomial corporation, 10% of all sales trainees are rated as outstanding, 75% are rated as excellent, 10% are rated as satisfactory and 5% are considered unsatisfactory. Find the following probabilities for a sample of 10 trainees selected at random:** # + [markdown] id="f0-NPUBiVNaG" # **a) Two are rated as outstanding** # + id="nJhIbOHmabrl" outputId="32c1e229-fe87-4702-f764-d2133622b0a1" p=0.1 n=10 k=2 binomial = stats.binom.pmf(k,n,p) print('Probability of two are rated as outstanding is %1.5f' % binomial ) # + [markdown] id="RZCEMFSrVNaG" # **b) Two or more are rated as outstanding** # + id="KxHPMtooabrw" outputId="d1905403-3b31-4912-d72f-2b45519ba155" #For this we will use cumulative probability p=0.1 n=10 k=1 ##To answer this we need to calculate cumulative probability binomial = stats.binom.cdf(k,n,p) #since we have calculated for 1 or less, for two or more we will subtract this prob from 1 print('Probability of two or more are rated as outstanding is %1.5f' % (1-binomial)) # + [markdown] id="u5_21Ab8VNaH" # **c) Eight of the ten are rated either outstanding or excellent** # + id="hUf08sVQabr3" outputId="04cac182-33f8-42b0-f9c4-eb5bc342cff5" #prob of excellent & outstanding is 75+10 p=0.85 n=10 k=8 binomial = stats.binom.pmf(k,n,p) print('Probability of eight out of ten are rated excellent & outstanding is %1.5f' % binomial ) # + [markdown] id="by_BuepFVNaH" # **d) None of the trainees are rated as unsatisfactory** # + id="RHtn4WYzabsA" outputId="e9aa51f1-1a69-44a0-8895-b05c12f4b2b7" p=0.05 n=10 k=0 binomial = stats.binom.pmf(k,n,p) print('Probability of no trainees are unsatisfactory is %1.5f' % binomial ) # + [markdown] id="1-lP_U28aUuP" # # **Uniform Distribution** # # ### **1) A University records the amount of time students take to solve the statistics assignment in 'assignment.csv’. 
Plot the probability distribution this data follows and answer the following questions.** # # # # # # + [markdown] id="l46Ul-hMd7DD" # #### **Reading the Data into the Dataframe** # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="C1zqTRCLd6BQ" outputId="266bf3e5-3adf-4b1c-a5e4-3d1311bc6750" assignment = pd.read_csv('assignment.csv') assignment.head() # + [markdown] id="UOGOuJgoeZ7X" # Let's plot the histogram of data along with the PDF of uniform distribution using the parameters minimum time required and maximum time required for completing the assignment. # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="vtJZKDfReeL_" outputId="ad59930a-527f-48b2-9080-19c6c792f581" # visualize the distribution of the time needed for completing the assignment plt.hist(assignment['Time_taken'], density=True) plt.axhline(1/3, color = 'red') plt.xlabel('Time required for completing the assignment') plt.ylabel('Probability') plt.title('Data Distribution') plt.show() # + [markdown] id="NYP6WplCju7J" # **Insight**: As you can see from the above plot that all the values between 1 and 4 are having almost equal probability, we are going to use continuous uniform distribution. We need to decide the endpoints. Here, endpoints are 1 and 4. # # X ~ U(1, 4) # + id="08xUp223kAeo" # import the required function from scipy.stats import uniform # use the uniform.pmf() function to generate the probability distribution x = np.linspace(1, 4, 50) probs = uniform.pdf(x, loc = 1, scale = 3) # + [markdown] id="riMH9942kyzJ" # **a) Find the probability that a randomly selected Student requires atmost 2.5 hours to complete the assignment.** # # # # **CDF:** of a random variable(X) is the probability that X  will take the value less than or equal to x. It can be represented mathematically as below. # # >$F_X(x) = P(X\leq x)$ # # # In our case, random variable(X) is the number of hours. # # $ P(X\leq 2.5)$ # + colab={"base_uri": "https://localhost:8080/"} id="j6Q_wOlPlh5g" outputId="82c79ea7-ca72-4ea4-c7ad-489a2dd91760" uniform.cdf(x = 2.5, loc = 1, scale = 3) # + [markdown] id="C3TVqrPQlt1h" # **b) Find the probability that a randomly selected student requires atleast 3 hours to complete the quiz.** # # $ P(X>=3)$ # + colab={"base_uri": "https://localhost:8080/"} id="qy9TT_38l108" outputId="20724781-bd1f-48e3-fdcb-26eb1fc7597f" round(1 - uniform.cdf(x = 3, loc = 1, scale = 3), 4) # + [markdown] id="ln66mUUsmad-" # **c) Find the probability that a randomly selected student requires 1.5 to 3.5 hours to complete the quiz.** # # $ P(1.5<= X <=3.5)$ # + colab={"base_uri": "https://localhost:8080/"} id="oVS-Q3f3mZlj" outputId="283b341b-1853-49d8-f3b3-63ac852902ab" round(uniform.cdf(x = 3.5, loc = 1, scale = 3) - uniform.cdf(x = 1.5, loc = 1, scale = 3), 4) # + [markdown] id="xh7vsKOHVNZ8" # # **Normal Distribution** # + [markdown] id="yRiWnS4VabsT" # ### **1) According to the Telecommunication Industry the average monthly cell phone bill is Rs. 850 with a standard deviation of Rs. 150. Assuming that the monthly bill follows normal distribution answer the following:** # + [markdown] id="MJ7wKYwvVNZ_" # **a) What is the probability that a randomly selected cell phone bill is more than Rs 1200?** # + colab={"base_uri": "https://localhost:8080/"} id="jJ1cXSf0absV" outputId="78ae5cc4-110d-434f-a021-194964c091fc" #To calculate this, we will calculate the cumulative probability for less than 1200 and then will subtract from 1. 
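# Equivalently, in z-score terms: z = (1200 - 850) / 150 ≈ 2.33, and
# P(X > 1200) = 1 - CDF(2.33) ≈ 0.0098.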
round(1-stats.norm.cdf(1200,loc=850,scale=150), 4) # + [markdown] id="1eoJzslRVNZ_" # **b) What is the probability that a randomly selected cell phone bill is between Rs 750 and Rs 1200?** # + colab={"base_uri": "https://localhost:8080/"} id="1uU2mRzbabsh" outputId="0d94bb46-e479-405e-cd8e-579a8e2709d9" round(stats.norm.cdf(1200,loc=850,scale=150)-stats.norm.cdf(750,loc=850,scale=150), 4) # + [markdown] id="tGxBocT6VNZ_" # **c) What is the probability that a randomly selected cell phone bill is no more than Rs 650?** # + colab={"base_uri": "https://localhost:8080/"} id="TqFcIHcUabsp" outputId="19b42148-5710-4e7d-a6db-efdc471502e8" round(stats.norm.cdf(650,loc=850,scale=150), 4) # + [markdown] id="oFmIG_sqVNaA" # **d) What is the amount above which lies top 15% of cell phone bills?** # + colab={"base_uri": "https://localhost:8080/"} id="_r55P6SNabs0" outputId="2cf77402-788c-435e-fa3a-2e7fc4c4fac8" #Let the amount be M. P(X ≥ M) = 15% => 1 – P(X < M) = 0.85 . To calculate this we will use the percent point function i.e ppf. round(stats.norm.ppf(0.85,loc=850,scale=150), 4) # + [markdown] id="_lNAMUvvVNaA" # **e) What is the amount below which lies bottom 25% of cell phone bills?** # + colab={"base_uri": "https://localhost:8080/"} id="-7OQk_Ftabs-" outputId="c50bc6e1-8ff7-4fe0-d4c0-b6669b3209f9" round(stats.norm.ppf(0.25,loc=850,scale=150), 4) # + [markdown] id="QmtfulFDVNaH" # ### **2) The mean filling capacity for a coke bottle is 500 ml with a standard deviation of 20ml. Assume that it follows a normal distribution and answer the following questions:** # + id="ySnY5DVYVNaI" mu = 500 sigma = 20 # + [markdown] id="ak4RyMGVVNaI" # **a) What is the probability that the bottle filled less than 480 ml ?** # + colab={"base_uri": "https://localhost:8080/"} id="7HZogLBGVNaI" outputId="6fd2cf8a-2bb5-4119-9143-53eea85edf15" x1=480 z1=(x1-mu)/sigma p=stats.norm.cdf(z1) print ('Probaility of bottle filled less than 480 ml is %1.4f' %p) # + [markdown] id="ILDdv_5fVNaI" # **b) What is the probability that the bottle filled more than 520 ml?** # + colab={"base_uri": "https://localhost:8080/"} id="u42DGuDPVNaJ" outputId="4208ece9-6c8d-4be4-d37b-3264766f66b8" x2=520 z2=(x2-mu)/sigma p1= 1- stats.norm.cdf(z2) print ('probaility of bottle filled more than 520 ml is %1.4f' %p1) # + [markdown] id="GDGLNR7dVNaJ" # **c) What is the probability that the bottle filled between 470 ml to 525 ml?** # + colab={"base_uri": "https://localhost:8080/"} id="_uPKpbvtVNaK" outputId="1ae86422-e645-4183-d9a5-7f1097668e0e" x3=470 z3=(x3-mu)/sigma x4=525 z4=(x4-mu)/sigma p2=stats.norm.cdf(z3) p3=stats.norm.cdf(z4) p4=p3-p2 print ('probability that the bottle filled between 470 ml to 525 ml is %1.4f' %p4) # + [markdown] id="7WFVLPL-VNaM" # ### **3) In 2 Liter soft drink bottles the drink filled is normally distributed, with a mean of 2.0 liters and a standard deviation of 0.05 liter. If the bottles contain less than 95% of the listed net content(1.90 liters), the manufacturer may be subject to penalty by the state office of consumer affairs. Bottles that have a net content above 2.1 liters may cause excess spillage upon opening. 
Answer the following questions:** # + id="-1PK8VacVNaM" mu = 2 sigma = 0.05 # + [markdown] id="KJh8EodMVNaM" # **a) What is the probability that the bottle content is between 1.9 and 2.0 liters?** # + colab={"base_uri": "https://localhost:8080/"} id="WfcWl231VNaM" outputId="5b8e470b-4b08-484f-ec17-892dbe4313ed" Prob = stats.norm.cdf(2,loc=mu,scale=sigma) - stats.norm.cdf(1.90,loc=mu,scale=sigma) print("Probability that the bottle content is between 1.9 and 2 liters is %3.4f" % Prob) # + [markdown] id="hZhrsYtoVNaN" # **b) What is the probability that the bottle content is between 1.9 and 2.1 liters?** # + colab={"base_uri": "https://localhost:8080/"} id="wbnjjf0aVNaN" outputId="23281c0c-f9f4-41d0-b824-3fe0a1b3445c" Prob = stats.norm.cdf(2.1,loc=mu,scale=sigma) - stats.norm.cdf(1.9,loc=mu,scale=sigma) print("Probability that the bottle content is between 1.9 and 2.1 liters is %3.4f" % Prob) # + [markdown] id="H6fG71TVVNaN" # **c) What is the probability that the bottle content is below 1.9 liters or above 2.1 liters?** # + colab={"base_uri": "https://localhost:8080/"} id="1Z6O0xxHVNaO" outputId="bdd5a29e-a423-4c8d-a0dc-ce46d24e37b6" Prob1 = stats.norm.cdf(1.9,loc=mu,scale=sigma) Prob2 = 1 - stats.norm.cdf(2.1,loc=mu,scale=sigma) print("Probability that the bottle content is below 1.9 liters or above 2.1 liters is %3.4f" % (Prob1 + Prob2)) # + [markdown] id="Qc1CMY6PVNaO" # **d) 99% of the bottles atleast have how much soft drink in them?** # + colab={"base_uri": "https://localhost:8080/"} id="vhoVgTxBVNaO" outputId="19fe901b-e6aa-46a8-f5a5-7f65e08402a6" Prob = stats.norm.ppf(0.01, loc = mu, scale = sigma) print("99% of the bottles atleast have ",round(Prob,2),"Liters") # + [markdown] id="cfuKFTnCDTeA" # # **Sampling Distribution** # + [markdown] id="azEQu4DoC_Q5" # ### **Suppose an automobile battery manufacturer claims that the mean lifetime of their battery is 60 months with a standard deviation of 6 months. Suppose the distribution of battery lives of is approximately normal. Find the probability that the mean lifetime of 40 randomly sampled batteries will be less than 58 months.** # + colab={"base_uri": "https://localhost:8080/"} id="AiWFb0LCC9Wp" outputId="6d0dd09c-a184-4aac-daba-8fd1f2ec4d3d" # import the required function from scipy.stats import norm # declare the value of mean lifetime of battery in mu mu = 60 # declare the value of standard deviation of battery in std sigma = 6 # sample size n = 40 # find the sample standard deviation s = sigma/np.sqrt(40) # find the probability round(norm.cdf(58, loc = mu, scale = s), 4) # + [markdown] id="BSBuNii8HHC0" # **Insight:** The probability that the mean lifetime of 40 randomly sampled batteries will be less than 58 months is 1.75%. # + [markdown] id="1TFwIVHqBR8O" # # **Interval Estimation** # + [markdown] id="hyfZcgYs_LMD" # ### **A random sample of 40 households was selected as part of a study on electricity usage, and the number of kilowatt-hours (kWh) was recorded for each household in the sample for the March quarter of 2020. The average usage was found to be 310kWh. 
# + [markdown] id="1TFwIVHqBR8O"
# # **Interval Estimation**

# + [markdown] id="hyfZcgYs_LMD"
# ### **A random sample of 40 households was selected as part of a study on electricity usage, and the number of kilowatt-hours (kWh) was recorded for each household in the sample for the March quarter of 2020. The average usage was found to be 310 kWh. In a very large study in the March quarter of the previous year it was found that the standard deviation of the usage was 89 kWh.**
#
# ### **Assuming the standard deviation is unchanged and that the usage is normally distributed, provide an expression for calculating a 95% confidence interval for the mean usage in the March quarter of 2019.**

# + colab={"base_uri": "https://localhost:8080/"} id="uyAHorce_KaM" outputId="a17aca72-2c15-42a8-993f-88393ea59489"
# import the required function
from scipy.stats import norm
import numpy as np

# set the values of the sample mean and sigma
x_bar, sigma = 310, 89

# set the value of the sample size
n = 40

# construct the confidence interval
np.round(norm.interval(0.95, loc = x_bar, scale = sigma/np.sqrt(n)), 2)

# + [markdown] id="F1JpYY7JGGYY"
# **Insight:** We are 95% confident that the mean usage in the March quarter of 2019 lies between 282.42 and 337.58 kWh.

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="oC0qpliGxcrU"
#

# + id="N0XTJFQXmsbb"
from google.colab import drive
drive.mount('/content/gdrive',force_remount=True)

# + id="3055c1p_x4wH"
# !pip install numba

# + id="HkyZ6vAApMdk"
import os
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from numba import jit

PATH = 'gdrive/My Drive/VSUMM/ydata-tvsum50-v1_1/'
os.listdir(PATH)

# + id="jqflMn1jriIy"
# Unzip the archives
os.system('unzip '+ PATH + '/ydata-tvsum50-data.zip')
os.system('unzip '+ PATH + '/ydata-tvsum50-thumbnail.zip')

# + id="YuMX9JSLuY8X"
info = pd.read_csv(PATH+'/data/ydata-tvsum50-info.tsv', sep='\t')
info.category = pd.Categorical(info.category)
info.head()

# + id="4zl2TKZmv78k"
info.info()

# + id="CcM6XcwawEm4"
# Split the video length into minute and second components
info['minute'] = pd.to_datetime(info.length, format='%M:%S').dt.minute
info['second'] = pd.to_datetime(info.length, format='%M:%S').dt.second

# + id="ipKw-41tyX5r"
info.describe()

# + id="sq1hyTEku2X1"
sns.histplot(data=info, x='minute', kde=True)

# + [markdown] id="RzYVXvWe03Zh"
# Most of the videos last two minutes.

# + id="qiNkpdRE0ZYW"
sns.histplot(data=info, x='second', kde=True)

# + [markdown] id="tkcon5TA1Dfb"
# The seconds component is mostly around 20 seconds.

# + id="xE5Lyu-H0y_e"
sns.histplot(data=info, x='category', kde=True)

# + [markdown] id="pfJVRqw_1JjX"
# It is the same for all categories.
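# + [markdown]
# The flat category histogram can be cross-checked with a plain tabulation, and the separate
# `minute`/`second` columns can be combined into one duration for per-category comparisons.
# A minimal sketch; the `duration_s` column is introduced here only for illustration and is not
# used by the cells that follow.

# +
# Number of videos per category (equal counts would explain the flat histogram)
print(info['category'].value_counts())

# Combine the two components into total seconds and summarise by category
info['duration_s'] = info['minute'] * 60 + info['second']
info.groupby('category')['duration_s'].describe()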
# + id="2TBNiWBc1Xbi" g = sns.scatterplot(data=info, x=info.index, y=info.second) plt.title("Second vs Index") plt.xlabel("Index") # + id="OsBfz6Mj1zyC" g = sns.scatterplot(data=info, x=info.index, y=info.minute) plt.title("minute vs Index") plt.xlabel("Index") # + id="cdhVF8UT2sIz" g = sns.violinplot(x="category", y="minute", data=info) plt.title('Boxplot por cada categoria del video') # + id="ynEtPY6W30WY" df_query = info.groupby('category')['minute', 'second'].mean() g=sns.stripplot(x=df_query.index, y='minute', data=df_query) plt.title('promedio de minutos de duración de cada tipo de video') # + id="ouqZ6NVb5vRs" g=sns.stripplot(x=info.category, y='minute', data=info) plt.title('Total de minutos de cada tipo de video') # + [markdown] id="mjuqMi3Z6OjG" # La categoría Bt, en promedio dura muy poco # # # + id="W02j2hKs7sCb" g = sns.violinplot(x="category", y="second", data=info) plt.title('Boxplot por cada categoria del video') # + id="TiL_shtZ7Glw" g=sns.stripplot(x=df_query.index, y='second', data=df_query) plt.title('promedio de segundos de duración de cada tipo de video') # + id="O1Y2OpoB7TNx" g=sns.stripplot(x=info.category, y='second', data=info) plt.title('Total de segudos de cada tipo de video') # + id="qFexHDq_7eSZ" sns.stripplot(x='minute', y='second', hue='category', data=info, dodge=True) # + id="Xp-yqX7R9D_B" anno = pd.read_csv(PATH+'/data/ydata-tvsum50-anno.tsv', delimiter='\t', names=['id', 'category', 'score']) anno.category = pd.Categorical(anno.category) anno.head() # + id="iZlmaGd3yG48" #prueba de rachas @jit def count_racha(array): racha = 0 for i in range(array.shape[0]): if array[i-1] != array[i]: racha += 1 return racha def racha(str_num): hold_score = 3 array = np.array(str_num.split(','), dtype=int) good_score = array[array>=hold_score] total_racha = count_racha(array) racha = count_racha(good_score) return racha/total_racha def percetaje(str_num): array = np.array(str_num.split(','), dtype=int) good_score = array[array>=3] return good_score.shape[0]/array.shape[0] # + id="alLB-lyPyMfc" anno['frecuency_score_growth_4'] = anno.score.apply(lambda x: racha(x)) # + [markdown] id="hKXhVis21-LT" # # + id="jwR6tUqJ7aOF" anno.frecuency_score_growth_4.plot() # + id="oJ-IWP658McS" anno.frecuency_score_growth_4.describe() # + id="_wa6WeW_-Wx5" anno.groupby('category')['frecuency_score_growth_4'].describe() # + id="b45x54IA9Ppu" sns.lineplot(x=anno.index, y="frecuency_score_growth_4", hue="category", data =anno) # + id="5adCKii6_tO0" prueba = np.array(anno.score.iloc[0].split(','), dtype=int) index = np.where(prueba>=4)[0] * 2 #time in seconds plot = pd.DataFrame(index, columns=['seconds']) sns.lineplot(x=plot.index, y=plot.seconds) # + id="XG7J-ucn2TSJ" anno['per'] = anno.score.apply(lambda x: percetaje(x)) # + id="0bTvFKcm3cr2" anno.groupby('category')['per'].describe() # + id="KV85W2Xh3ou-" sns.lineplot(x=anno.index, y="per", hue="category", data =anno) # + id="CuOxZJOy48Q9" sns.histplot(data=anno, x='per') # + id="SXyK71qw5tDI" sns.boxplot(data=anno) # + id="VvPuQ5kZ5lEL" sns.boxplot(x="category", y="per", data=anno) # + id="VedqkCCu5CbM" sns.histplot(data=anno, x='frecuency_score_growth_4') # + id="eHSqz1H35140" sns.boxplot(x="category", y="frecuency_score_growth_4", data=anno) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import csv import pickle import collections import numpy as np from nltk 
import word_tokenize from nltk.stem import WordNetLemmatizer from nltk.corpus import stopwords from keras.preprocessing import sequence from keras.preprocessing import text from keras.models import load_model # + DIMENSIONS = ['IE', 'NS', 'FT', 'PJ'] MODEL_BATCH_SIZE = 128 TOP_WORDS = 2500 MAX_POST_LENGTH = 40 EMBEDDING_VECTOR_LENGTH = 20 final = '' x_test = ["Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius -- and a lot of courage -- to move in the opposite direction.", "Imagination is more important than knowledge.", "Gravitation is not responsible for people falling in love.", "I want to know God's thoughts; the rest are details.", "The hardest thing in the world to understand is the income tax.", "Reality is merely an illusion, albeit a very persistent one.", "The only real valuable thing is intuition.","A person starts to live when he can live outside himself.", "I am convinced that He (God) does not play dice.","God is subtle but he is not malicious.", "Weakness of attitude becomes weakness of character.","I never think of the future. It comes soon enough.", "The eternal mystery of the world is its comprehensibility.", "sometimes one pays most for the things one gets for nothing.", "Science without religion is lame. Religion without science is blind.", "Anyone who has never made a mistake has never tried anything new.", "Great spirits have often encountered violent opposition from weak minds.", "Everything should be made as simple as possible, but not simpler.", "Common sense is the collection of prejudices acquired by age eighteen.", "Science is a wonderful thing if one does not have to earn one's living at it.", "The secret to creativity is knowing how to hide your sources.", "The only thing that interferes with my learning is my education.", "God does not care about our mathematical difficulties. He integrates empirically.", "The whole of science is nothing more than a refinement of everyday thinking.", "Technological progress is like an axe in the hands of a pathological criminal.", "Peace cannot be kept by force. It can only be achieved by understanding.", "The most incomprehensible thing about the world is that it is comprehensible.", "We can't solve problems by using the same kind of thinking we used when we created them.", "Education is what remains after one has forgotten everything he learned in school.", "The important thing is not to stop questioning. Curiosity has its own reason for existing.", "Do not worry about your difficulties in Mathematics. I can assure you mine are still greater.", "Equations are more important to me, because politics is for the present, but an equation is something for eternity.", "If A is a success in life, then A equals x plus y plus z. 
Work is x; y is play; and z is keeping your mouth shut.", "Two things are infinite: the universe and human stupidity; and I'm not sure about the the universe.", "As far as the laws of mathematics refer to reality, they are not certain, as far as they are certain, they do not refer to reality.", "Whoever undertakes to set himself up as a judge of Truth and Knowledge is shipwrecked by the laughter of the gods.", "I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.", "In order to form an immaculate member of a flock of sheep one must, above all, be a sheep.", "The fear of death is the most unjustified of all fears, for there's no risk of accident for someone who's dead.", "Too many of us look upon Americans as dollar chasers. This is a cruel libel, even if it is reiterated thoughtlessly by the Americans themselves.", "Heroism on command, senseless violence, and all the loathsome nonsense that goes by the name of patriotism -- how passionately I hate them!", "No, this trick won't work...How on earth are you ever going to explain in terms of chemistry and physics so important a biological phenomenon as first love?", "My religion consists of a humble admiration of the illimitable superior spirit who reveals himself in the slight details we are able to perceive with our frail and feeble mind.", "Yes, we have to divide up our time like that, between our politics and our equations. But to me our equations are far more important, for politics are only a matter of present concern. A mathematical equation stands forever.", "The release of atom power has changed everything except our way of thinking...the solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker.", "Great spirits have always found violent opposition from mediocrities. The latter cannot understand it when a man does not thoughtlessly submit to hereditary prejudices but honestly and courageously uses his intelligence.", "The most beautiful thing we can experience is the mysterious. It is the source of all true art and all science. He to whom this emotion is a stranger, who can no longer pause to wonder and stand rapt in awe, is as good as dead: his eyes are closed.", "A man's ethical behavior should be based effectually on sympathy, education, and social ties; no religious basis is necessary. Man would indeeded be in a poor way if he had to be restrained by fear of punishment and hope of reward after death.", "The further the spiritual evolution of mankind advances, the more certain it seems to me that the path to genuine religiosity does not lie through the fear of life, and the fear of death, and blind faith, but through striving after rational knowledge.", "Now he has departed from this strange world a little ahead of me. That means nothing. People like us, who believe in physics, know that the distinction between past, present, and future is only a stubbornly persistent illusion.", "You see, wire telegraph is a kind of a very, very long cat. You pull his tail in New York and his head is meowing in Los Angeles. Do you understand this? And radio operates exactly the same way: you send signals here, they receive them there. The only difference is that there is no cat.", "One had to cram all this stuff into one's mind for the examinations, whether one liked it or not. 
This coercion had such a deterring effect on me that, after I had passed the final examination, I found the consideration of any scientific problems distasteful to me for an entire year.", "...one of the strongest motives that lead men to art and science is escape from everyday life with its painful crudity and hopeless dreariness, from the fetters of one's own ever-shifting desires. A finely tempered nature longs to escape from the personal life into the world of objective perception and thought.", "He who joyfully marches to music rank and file, has already earned my contempt. He has been given a large brain by mistake, since for him the spinal cord would surely suffice. This disgrace to civilization should be done away with at once. Heroism at command, how violently I hate all this, how despicable and ignoble war is; I would rather be torn to shreds than be a part of so base an action. It is my conviction that killing under the cloak of war is nothing but an act of murder.", "A human being is a part of a whole, called by us _universe_, a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest... a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest to us. Our task must be to free ourselves from this prison by widening our circle of compassion to embrace all living creatures and the whole of nature in its beauty.", "Not everything that counts can be counted, and not everything that can be counted counts."] # + types = ['INFJ', 'ENTP', 'INTP', 'INTJ', 'ENTJ', 'ENFJ', 'INFP', 'ENFP', 'ISFP', 'ISTP', 'ISFJ', 'ISTJ', 'ESTP', 'ESFP', 'ESTJ', 'ESFJ'] types = [x.lower() for x in types] lemmatizer = WordNetLemmatizer() stop_words = stopwords.words("english") def lemmatize(x): lemmatized = [] for post in x: temp = post.lower() for type_ in types: temp = temp.replace(' ' + type_, '') temp = ' '.join([lemmatizer.lemmatize(word) for word in temp.split(' ') if (word not in stop_words)]) lemmatized.append(temp) return np.array(lemmatized) for k in range(len(DIMENSIONS)): model = load_model('C:/myproject/mbti_predict/model_{}.h5'.format(DIMENSIONS[k])) tokenizer = None with open('C:/myproject/mbti_predict/tokenizer_{}.pkl'.format(DIMENSIONS[k]), 'rb') as f: tokenizer = pickle.load(f) def preprocess(x): lemmatized = lemmatize(x) tokenized = tokenizer.texts_to_sequences(lemmatized) return sequence.pad_sequences(tokenized, maxlen=MAX_POST_LENGTH) predictions = model.predict(preprocess(x_test)) prediction = float(sum(predictions)/len(predictions)) print(DIMENSIONS[k],round(prediction,2)) if prediction >= 0.5: final += DIMENSIONS[k][1] else: final += DIMENSIONS[k][0] print('') print('MBTI result: {}'.format(final)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # [View in Colaboratory](https://colab.research.google.com/github/partha1189/ml/blob/master/numpy1.ipynb) # + id="_IO6n9lCmThX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="bed98786-272e-4e1a-9c30-714e998a6b6f" my_list = [1,2,3,4] import numpy as np arr = np.array(my_list) print(arr) # + id="B24w604fnijP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1054} 
outputId="86d28906-e310-4d18-ce9c-8806c5a8e4f6" help(np.arange) # + id="SCJ_pt9Km2Hb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="8443bf3a-ed96-464f-d92d-8369abfe6f25" np.arange(4,13,2) # + id="wNkcLeCCoShz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="57b1b65a-f897-48c3-fb99-cd74fa41f172" np.zeros((4,3)) # + id="ImYWGVE_oZiq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="ce0247ed-61bb-4f97-ab38-4ea0dcf97e6b" np.linspace(0,5,10) # + id="k-y7AJt0owhj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="63f28226-7c90-4c34-f838-9d131ed5fbac" np.eye(4) # + id="smYaeUDOo_Yy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="f2aca9d5-edd6-47aa-fa38-2aadd1749dec" np.random.rand(2,2) # + id="FEEZ3dTTpemK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0ac21710-5ca2-4ac2-8cb0-d31e7a3d272a" np.random.randn(5) # + id="-jK20fC7pkJy" colab_type="code" colab={} arr1 = np.arange(25) # + id="_Rt1gbIxq97T" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="d35c78d9-bbc9-45db-bb80-2db49af2490e" arr1 # + id="BZMxuSuDq_sb" colab_type="code" colab={} ranarr = np.random.randint(0,50,10) # + id="bQYRciU0rNV7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="e1495026-5cfd-4dac-b0d8-f9720e9f6b14" ranarr # + [markdown] id="kp37hra5sEku" colab_type="text" # RESHAPE # # + id="Dez1m_76rOqL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="91576694-4633-47bb-bf11-0de7c817c34b" arr1.reshape(5,5) # + id="OiILkfkgraR1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="686d1779-311a-4bdb-ad81-c2adaf30ee70" ranarr.max() # + id="LKHs3sDyr_cS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2634c0f2-4050-4471-ad8a-6473b47d7b66" ranarr.argmax() # + id="ggWWwQXKsCxD" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- import os import json import time from pymetamap import MetaMap from nltk.tokenize.punkt import PunktSentenceTokenizer # corpus path corpus_path = '/home/denglizong/venv4attr/corpus/WikiPediaR5' # os.listdir( corpus_path )[0:3] # list of disease names list_of_diseases = [ filename.replace('.txt',"") for filename in os.listdir( corpus_path ) if filename.endswith(".txt")] # list_of_diseases[0:3] # + # annotate corpus with metamap # explore optimal parameters of metamap based on training set # - # creat instance of metamap mm = MetaMap.get_instance("/home/denglizong/public_mm/bin/metamap20") # restrict scope semantic_group_of_disorder = ['acab','anab','cgab','comd','dsyn','emod','fndg','inpo','mobd','patf','sosy'] semantic_group_of_anatomy = ['anst','bdsu','bdsy','bpoc','bsoj','emst','ffas'] restrict_to_sts = semantic_group_of_disorder + semantic_group_of_anatomy # + # restrict sources # restrict_to_sources = ['SNOMEDCT_US'] # + tags=[] # help( mm.extract_concepts ) # + tags=[] # %%time # given .txt and .ann # output annotation results in text file dict_of_concepts_in_texts = {} for index, disease_name in enumerate(list_of_diseases): # text file textfile = os.path.join( corpus_path, disease_name + ".txt" ) # text 
string textstr = "" with open(textfile, 'r', encoding='utf-8') as f: textstr = f.read() # repalce '\n' to ',' textstr = textstr.replace('\n',',') # sentence tokonize with position text_sents = [] text_sents_with_spans = [] for start, end in PunktSentenceTokenizer().span_tokenize(textstr): text_sents_with_spans.append( (start,end, textstr[start:end]) ) text_sents.append( textstr[start:end] ) # print( len(text_sents) ) # print( text_sents[0] ) # sentence index sents_idx = range( len(text_sents) ) # print( list(sents_idx) ) # annotation sentences with metamap # 标注参数 composite_phrase = 2, allow_concept_gaps=True, ignore_word_order=True (aim to improve recall) # concepts,error = mm.extract_concepts(text_sents, list(sents_idx), restrict_to_sts=restrict_to_sts, \ # composite_phrase=2, allow_concept_gaps=True, ignore_word_order=True, no_nums = ['fndg'], \ # word_sense_disambiguation=True) # # change parameters # composite_phrase=0, allow_concept_gaps=False, ignore_word_order=False # SNOMEDCT_US, HPO concepts,error = mm.extract_concepts(text_sents, list(sents_idx), \ restrict_to_sts=restrict_to_sts, restrict_to_sources=['HPO'], \ composite_phrase=2, allow_concept_gaps=False, ignore_word_order=False, no_nums = ['fndg'], \ word_sense_disambiguation=True) # change parameters # concepts,error = mm.extract_concepts(text_sents, list(sents_idx), restrict_to_sts=restrict_to_sts, \ # composite_phrase=0, allow_concept_gaps=False, ignore_word_order=True, no_nums = ['fndg'], \ # word_sense_disambiguation=True) # save annotation result print(index, disease_name, len(text_sents), len(concepts) ) dict_of_concepts_in_texts.setdefault( disease_name, concepts ) # + # finish in 17min # - # save annotation results to json file save_path = '/home/denglizong/SSUMiner/corpus/MetaMapAnnotations' for disease_name in dict_of_concepts_in_texts: # list of concepts (as dict) in the text list_of_concepts_in_text = [] # concepts = dict_of_concepts_in_texts[disease_name] for concept in concepts: # keep concept with mm tag if 'mm' in dir(concept): # 记录该concept的信息为词典形式 tmpdict = {} # tmpdict.setdefault( "index", concept.index ) tmpdict.setdefault( "score", concept.score ) tmpdict.setdefault( "preferred_name", concept.preferred_name ) tmpdict.setdefault( "cui", concept.cui ) tmpdict.setdefault( "semtypes", concept.semtypes ) tmpdict.setdefault( "trigger", concept.trigger ) tmpdict.setdefault( "pos_info", concept.pos_info ) tmpdict.setdefault( "tree_codes", concept.tree_codes ) # # if len(tmpdict) != 0: list_of_concepts_in_text.append( tmpdict ) # outfilepath = os.path.join( save_path, disease_name+'.json' ) json.dump(list_of_concepts_in_text, open(outfilepath,'w',encoding='utf-8'),indent=2,ensure_ascii=False) # + # data exploration for metamap annotations # + # compare metamap with metamap lite # + # compute performance of matches # - # get names for training and testing data_dir = "/home/denglizong/SSUMiner/corpus/" # + # disease names in training and testing set diseases_used_for_training = [] diseases_used_for_test = [] # training file_of_diseases_for_training = os.path.join( data_dir+"TrainTestSplit", "diseases_for_training.txt" ) with open( file_of_diseases_for_training, 'r', encoding='utf-8' ) as f: for line in f.readlines(): if line.strip() != "": diseases_used_for_training.append( line.strip() ) # testing file_of_diseases_for_test = os.path.join( data_dir+"TrainTestSplit", "diseases_for_test.txt" ) with open( file_of_diseases_for_test, 'r', encoding='utf-8' ) as f: for line in f.readlines(): if line.strip() != "": 
diseases_used_for_test.append( line.strip() ) # len( diseases_used_for_training ), len( diseases_used_for_test ) # - import re # get span, name and types of concepts in brat # dict_of_brat_annotations.setdefault( ent_id, ('FindingSite', ent_name, pos_info) ) def read_brat_annotation_file( file_of_brat_annotation ): # to get dict_of_brat_annotations = {} # text string text_string = "" with open(file_of_brat_annotation,'r',encoding='utf-8') as f: text_string = f.read() # annotation content ann_lines = [] with open(file_of_brat_annotation,'r',encoding='utf-8') as f: ann_lines = f.readlines() # resolve annotation content line by line for line in ann_lines: # if line.startswith('T'): # phenotype concept if re.search('Phenotype',line,re.I): # T1 Phenotype 69 86 painful abscesses ent_id, ent_info, ent_name = line.strip().split('\t') # position info 153 157;180 194 pos_info = ent_info.replace('Phenotype ','') # # dict_of_brat_annotations.setdefault( ent_id, ('Phenotype', ent_name, pos_info) ) dict_of_brat_annotations.setdefault( pos_info, ('Phenotype', ent_name, ent_id) ) # findingsite concept elif re.search('FindingSite',line,re.I): # T4 FindingSite 114 120 breast ent_id, ent_info, ent_name = line.strip().split('\t') # position 114 120 pos_info = ent_info.replace('FindingSite ','') # # dict_of_brat_annotations.setdefault( ent_id, ('FindingSite', ent_name, pos_info) ) dict_of_brat_annotations.setdefault( pos_info, ('FindingSite', ent_name, ent_id) ) # return dict_of_brat_annotations # + # disease_name = "Aspergillosis" # - # resolve annotation results of metamap , index by span (compatible with brat) # concept in concepts # continuous concept # The simplest form : 228/6 pos_info='228/6' # non-continuous concept # Multiple comma-separated StartPos/Length pairs: 7059/5,7073/5 (indicating disjoint text strings mapped to one concept) # Multiple comma-separated bracketed StartPos/Length pairs: [1351/8],[1437/8] multiple occurrences of a concept in an utterance # Finally, forms (b) and (c) above can in rare cases be combined, e.g., [4061/10,4075/11],[4166/10,4180/11] def read_metamap_annotation_result( textstr, list_of_concepts_in_text ): # to obtain # span_key (compatible with brat) : concept metamap_annotated_concepts_by_spans = {} # sentences and positions text_sents_with_spans = [] for start, end in PunktSentenceTokenizer().span_tokenize(textstr): text_sents_with_spans.append( (start,end, textstr[start:end]) ) # iterate concepts for concept in list_of_concepts_in_text: span_key = "" # location of sentences in text sent_idx = int( concept.index ) sent_spos = text_sents_with_spans[sent_idx][0] # string_of_pos_info = "" if ';' in concept.pos_info: string_of_pos_info = concept.pos_info.split(';')[0] else: string_of_pos_info = concept.pos_info # concept.pos_info if '[' not in string_of_pos_info: # case a) if string_of_pos_info.count('/') == 1 : # 统计 case a spos, slen = string_of_pos_info.split('/') spos = int(spos) -1 + sent_spos slen = int(slen) epos = spos + slen # span_key = str(spos) + ' '+ str(epos) # if span_key not in metamap_annotated_concepts_by_spans: metamap_annotated_concepts_by_spans.setdefault( span_key, concept ) # case b elif string_of_pos_info.count('/') > 1: # ‘7059/5,7073/5’ pos_of_parts = [] for part_pos_info in string_of_pos_info.split(','): # print( part_pos_info , concept) spos, slen = part_pos_info.split('/') spos = int(spos) -1 + sent_spos slen = int(slen) epos = spos + slen # pos_of_parts.append( str(spos) + ' '+ str(epos) ) # span_key = ';'.join( pos_of_parts ) # if span_key 
not in metamap_annotated_concepts_by_spans: metamap_annotated_concepts_by_spans.setdefault( span_key, concept ) # # [1351/8],[1437/8] case c) # [4061/10,4075/11],[4166/10,4180/11] case d) else: # [1351/8],[1437/8] --> ['[1351/8]', '[1437/8]'] for bracket_pos_info in re.findall('\[.*?\]',string_of_pos_info): # change '[1351/8]' to '1351/8' , case a) # change '[4061/10,4075/11]' to 4061/10,4075/11 , case b) cleaned_pos_info = bracket_pos_info[1:-1] # # case c) --> case a) if cleaned_pos_info.count('/') == 1 : # spos, slen = cleaned_pos_info.split('/') spos = int(spos) -1 + sent_spos slen = int(slen) epos = spos + slen # span_key = str(spos) + ' '+ str(epos) # if span_key not in metamap_annotated_concepts_by_spans: metamap_annotated_concepts_by_spans.setdefault( span_key, concept ) # case d) --> case b elif cleaned_pos_info.count('/') > 1: # ‘7059/5,7073/5’ pos_of_parts = [] for part_pos_info in cleaned_pos_info.split(','): spos, slen = part_pos_info.split('/') spos = int(spos) -1 + sent_spos slen = int(slen) epos = spos + slen # pos_of_parts.append( str(spos) + ' '+ str(epos) ) # span_key = ';'.join( pos_of_parts ) # if span_key not in metamap_annotated_concepts_by_spans: metamap_annotated_concepts_by_spans.setdefault( span_key, concept ) # return metamap_annotated_concepts_by_spans # + tags=[] # statistics of entities annotated by metamap counts = [0]*5 for disease_name in list_of_diseases: # for disease_name in diseases_used_for_training: # disease_name的metamap annotation result concepts_annotated_by_metamap = dict_of_concepts_in_texts[disease_name] # 将pos_info的形式输出来看看 for concept in concepts_annotated_by_metamap: string_of_pos_info = "" if ';' in concept.pos_info: string_of_pos_info = concept.pos_info.split(';')[0] counts[4] +=1 else: string_of_pos_info = concept.pos_info # concept.pos_info # the simplest form if '[' not in string_of_pos_info: # case a) if string_of_pos_info.count('/') == 1 : # case a counts[0] +=1 # position spos, slen = string_of_pos_info.split('/') spos = int(spos) -1 slen = int(slen) epos = spos + slen # span_key = str(spos) + ' '+ str(epos) # case b elif string_of_pos_info.count('/') > 1: # case b counts[1] += 1 # ‘7059/5,7073/5’ pos_of_parts = [] for part_pos_info in string_of_pos_info.split(','): # print( part_pos_info , concept) spos, slen = part_pos_info.split('/') spos = int(spos) -1 slen = int(slen) epos = spos + slen # pos_of_parts.append( str(spos) + ' '+ str(epos) ) # span_key = ';'.join( pos_of_parts ) # # [4061/10,4075/11],[4166/10,4180/11] case d) else: # [1351/8],[1437/8] --> ['[1351/8]', '[1437/8]'] for bracket_pos_info in re.findall('\[.*?\]',string_of_pos_info): # 将 '[1351/8]' 变成 '1351/8' , case a) # 将 '[4061/10,4075/11]' 变成 4061/10,4075/11 , case b) cleaned_pos_info = bracket_pos_info[1:-1] # # case c) --> case a) if cleaned_pos_info.count('/') == 1 : # count counts[2] +=1 # 解析概念的起止位置 spos, slen = cleaned_pos_info.split('/') spos = int(spos) -1 slen = int(slen) epos = spos + slen # 生成brat中的span_key span_key = str(spos) + ' '+ str(epos) # case d) --> case b elif cleaned_pos_info.count('/') > 1: # count counts[3] += 1 # ‘7059/5,7073/5’ pos_of_parts = [] for part_pos_info in cleaned_pos_info.split(','): spos, slen = part_pos_info.split('/') spos = int(spos) -1 slen = int(slen) epos = spos + slen # pos_of_parts.append( str(spos) + ' '+ str(epos) ) # span_key = ';'.join( pos_of_parts ) counts # + # compare text span annotated by human and machine # - from interval import Interval # + tags=[] # precision = true prediction/total prediction # to obtain 
full_matched_pred_entity_counts = [0,0] partial_matched_pred_entity_counts = [0,0] # total_pred_entity_counts = [0,0] for disease_name in diseases_used_for_training: # file_of_original_text = os.path.join( corpus_path, disease_name + '.txt' ) raw_text = open( file_of_original_text, 'r', encoding='utf-8').read() # raw_text = raw_text.replace('\n',',') # human annotation file_of_brat_annotation = os.path.join( corpus_path, disease_name + '.ann' ) dict_of_brat_annotations = read_brat_annotation_file(file_of_brat_annotation) # machine annotation dict_of_meta_annotations = read_metamap_annotation_result( raw_text, dict_of_concepts_in_texts[disease_name] ) # compare for span_key_of_metamap in dict_of_meta_annotations: # metamap concepts concept = dict_of_meta_annotations[span_key_of_metamap] # strict and relax match strict_match_of_interval = False relax_match_of_interval = False # entity type pred_type = "" # '[sosy]' if concept.semtypes[1:-1] in semantic_group_of_disorder : pred_type = 'Phenotype' elif concept.semtypes[1:-1] in semantic_group_of_anatomy : pred_type = 'FindingSite' # stat if pred_type == 'Phenotype': total_pred_entity_counts[0] +=1 elif pred_type == 'FindingSite': total_pred_entity_counts[1] +=1 # span list_of_span_intervals_by_metamap = [] # if ';' in span_key_of_metamap: for pos_info in span_key_of_metamap.split(';'): spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # list_of_span_intervals_by_metamap.append( Interval(spos, epos) ) else: pos_info = span_key_of_metamap # spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # list_of_span_intervals_by_metamap.append( Interval(spos, epos) ) # list_of_span_intervals_by_metamap # compare for span_key_of_expert in dict_of_brat_annotations: # type real_type = dict_of_brat_annotations[span_key_of_expert][0] # span list_of_span_intervals_by_expert = [] # span_key if ';' in span_key_of_expert: # span_key_of_expert mistaked as span_key_of_metamap for pos_info in span_key_of_expert.split(';'): spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # list_of_span_intervals_by_expert.append( Interval(spos, epos) ) else: pos_info = span_key_of_expert # spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # list_of_span_intervals_by_expert.append( Interval(spos, epos) ) # compare if real_type == pred_type: if set( list_of_span_intervals_by_expert ) == set( list_of_span_intervals_by_metamap ): # print("full match", list_of_span_intervals_by_expert, list_of_span_intervals_by_metamap ) strict_match_of_interval = True break else: # overlap for _span_interval_by_metamap in list_of_span_intervals_by_metamap: if relax_match_of_interval == True: break else: for _span_interval_by_expert in list_of_span_intervals_by_expert: if _span_interval_by_metamap.overlaps( _span_interval_by_expert ): relax_match_of_interval = True break # count if strict_match_of_interval == True: # print( "full match" +'\t' + span_key_of_metamap +'\t' + concept.trigger + '\t' ) if pred_type == 'Phenotype': full_matched_pred_entity_counts[0] +=1 elif pred_type == 'FindingSite': full_matched_pred_entity_counts[1] +=1 elif relax_match_of_interval == True: # print( "partial match" +'\t' + span_key_of_metamap +'\t' + concept.trigger + '\t' ) if pred_type == 'Phenotype': partial_matched_pred_entity_counts[0] +=1 elif pred_type == 'FindingSite': partial_matched_pred_entity_counts[1] +=1 # + tags=[] # print print(full_matched_pred_entity_counts) print(partial_matched_pred_entity_counts) print(total_pred_entity_counts) # + # 
recall = coverred annotations/total annotations # to get full_coverred_real_entity_counts = [0,0] partial_coverred_real_entity_counts = [0,0] # total_real_entity_counts = [0,0] for disease_name in diseases_used_for_training: # file_of_original_text = os.path.join( corpus_path, disease_name + '.txt' ) raw_text = open( file_of_original_text, 'r', encoding='utf-8').read() raw_text = raw_text.replace('\n',',') # human annotation file_of_brat_annotation = os.path.join( corpus_path, disease_name + '.ann' ) dict_of_brat_annotations = read_brat_annotation_file(file_of_brat_annotation) # machine annotation dict_of_meta_annotations = read_metamap_annotation_result( raw_text, dict_of_concepts_in_texts[disease_name] ) # compare for span_key_of_expert in dict_of_brat_annotations: # strict_match_of_interval = False relax_match_of_interval = False # type real_type = dict_of_brat_annotations[span_key_of_expert][0] # count if real_type == 'Phenotype': total_real_entity_counts[0] +=1 elif real_type == 'FindingSite': total_real_entity_counts[1] +=1 # span list_of_span_intervals_by_expert = [] # span_key if ';' in span_key_of_expert: for pos_info in span_key_of_expert.split(';'): spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) list_of_span_intervals_by_expert.append( Interval(spos, epos) ) else: pos_info = span_key_of_expert # spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) list_of_span_intervals_by_expert.append( Interval(spos, epos) ) # compare for span_key_of_metamap in dict_of_meta_annotations: # metamap concept concept = dict_of_meta_annotations[span_key_of_metamap] # type pred_type = "" # '[sosy]' if concept.semtypes[1:-1] in semantic_group_of_disorder : pred_type = 'Phenotype' elif concept.semtypes[1:-1] in semantic_group_of_anatomy : pred_type = 'FindingSite' # list_of_span_intervals_by_metamap = [] # if ';' in span_key_of_metamap: for pos_info in span_key_of_metamap.split(';'): spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # list_of_span_intervals_by_metamap.append( Interval(spos, epos) ) else: pos_info = span_key_of_metamap # spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # list_of_span_intervals_by_metamap.append( Interval(spos, epos) ) # list_of_span_intervals_by_metamap if real_type == pred_type: # if set( list_of_span_intervals_by_expert ) == set( list_of_span_intervals_by_metamap ): strict_match_of_interval = True break # else: # for _span_interval_by_expert in list_of_span_intervals_by_expert: if relax_match_of_interval == True: break else: for _span_interval_by_metamap in list_of_span_intervals_by_metamap: if _span_interval_by_expert.overlaps( _span_interval_by_metamap ): relax_match_of_interval = True break # strict_match_of_interval if strict_match_of_interval == True: if real_type == 'Phenotype': full_coverred_real_entity_counts[0] +=1 elif real_type == 'FindingSite': full_coverred_real_entity_counts[1] +=1 elif relax_match_of_interval == True: if real_type == 'Phenotype': partial_coverred_real_entity_counts[0] +=1 elif real_type == 'FindingSite': partial_coverred_real_entity_counts[1] +=1 # + tags=[] print(full_coverred_real_entity_counts) print(partial_coverred_real_entity_counts) print(total_real_entity_counts) # + # filting General but meanless terms # CUI : count stat_of_concepts = {} for disease_name in diseases_used_for_training: # file_of_original_text = os.path.join( corpus_path, disease_name + '.txt' ) raw_text = open( file_of_original_text, 'r', encoding='utf-8').read() raw_text = 
raw_text.replace('\n',',') # file_of_brat_annotation = os.path.join( corpus_path, disease_name + '.ann' ) dict_of_brat_annotations = read_brat_annotation_file(file_of_brat_annotation) # machine annotation dict_of_meta_annotations = read_metamap_annotation_result( raw_text, dict_of_concepts_in_texts[disease_name] ) # compare for span_key_of_metamap in dict_of_meta_annotations: # concept = dict_of_meta_annotations[span_key_of_metamap] # # if not concept.cui.startswith('C'): # print( concept ) # if concept.cui + '::' + concept.preferred_name not in stat_of_concepts: stat_of_concepts.setdefault( concept.cui + '::' + concept.preferred_name, 1 ) else: stat_of_concepts[ concept.cui + '::' + concept.preferred_name ] += 1 # - # count len(stat_of_concepts) # + # stat_of_concepts occurrences_of_concepts = [] for key in stat_of_concepts: count = stat_of_concepts[key] occurrences_of_concepts.append( (count, key) ) occurrences_of_concepts.sort(reverse=True) # - # occurrences_of_concepts[0:100] occurrences_of_concepts[0:10] # + list_of_excluded_cuis = [] with open("/home/denglizong/SSUMiner/corpus/Excluded_Concepts/excluded_cuis.txt", 'r', encoding='utf-8') as f: for line in f.readlines(): if len( line.strip().split('\t') ) == 2: excluded_cui, excluded_name = line.strip().split('\t') list_of_excluded_cuis.append( excluded_cui ) # - len( set(list_of_excluded_cuis) ) list_of_excluded_cuis[0:3] # clean metamap annotation def clean_metamap_annotations_in_text( raw_metamap_annotations, list_of_excluded_cuis, disorder_tags, anatomy_tags ): # span_keys_to_be_excluded = [] # counts_of_remove = [0]*3 # sent_index : [ (span_key, concept) ] sent_level_annotations = {} for span_key in raw_metamap_annotations: # concept = raw_metamap_annotations[span_key] # rule 1: NN if '-noun-' not in concept.trigger: span_keys_to_be_excluded.append( span_key ) counts_of_remove[0] += 1 else: # rule 2:remove concepts without specific meanings if concept.cui in list_of_excluded_cuis: span_keys_to_be_excluded.append( span_key ) counts_of_remove[1] += 1 # if concept.index not in sent_level_annotations: sent_level_annotations.setdefault( concept.index, [(span_key, concept)] ) else: sent_level_annotations[concept.index].append( (span_key, concept) ) # for sent_idx in sent_level_annotations: # observe finding sites phenotype_in_sent = False findingsite_in_sent = False for (span_key, concept) in sent_level_annotations[sent_idx]: if concept.semtypes[1:-1] in disorder_tags: phenotype_in_sent = True elif concept.semtypes[1:-1] in anatomy_tags: findingsite_in_sent = True # if phenotype_in_sent == False and findingsite_in_sent == True: for (span_key, concept) in sent_level_annotations[sent_idx]: if concept.semtypes[1:-1] in anatomy_tags: if span_key not in span_keys_to_be_excluded: span_keys_to_be_excluded.append( span_key ) counts_of_remove[2] += 1 # clean_metamap_annotations = {} for span_key in raw_metamap_annotations: # concept = raw_metamap_annotations[span_key] # if span_key not in span_keys_to_be_excluded: clean_metamap_annotations.setdefault( span_key, concept ) # # print( counts_of_remove ) # return clean_metamap_annotations # clean metamap annotation with hpo def clean_metamap_annotations_with_hpo( raw_metamap_annotations, list_of_excluded_cuis, disorder_tags, anatomy_tags ): # span_keys_to_be_excluded = [] # for span_key in raw_metamap_annotations: # concept = raw_metamap_annotations[span_key] # rule 1:nn # if '-noun-' not in concept.trigger: # span_keys_to_be_excluded.append( span_key ) # counts_of_remove[0] += 1 # else: # # rule 
2:remove concepts without specific meanings # if concept.cui in list_of_excluded_cuis: # span_keys_to_be_excluded.append( span_key ) # counts_of_remove[1] += 1 # if concept.cui in list_of_excluded_cuis: span_keys_to_be_excluded.append( span_key ) # # clean_metamap_annotations = {} for span_key in raw_metamap_annotations: # concept = raw_metamap_annotations[span_key] # if span_key not in span_keys_to_be_excluded: clean_metamap_annotations.setdefault( span_key, concept ) # # print( counts_of_remove ) # return clean_metamap_annotations # + tags=[] # HPO表型异常对应的CUI list_of_excluded_cuis_in_hpo = [] with open("/home/denglizong/SSUMiner/corpus/Excluded_Concepts/excluded_cuis_in_hpo.txt", 'r', encoding='utf-8') as f: for line in f.readlines(): if line.startswith('C'): list_of_excluded_cuis_in_hpo.append( line.strip() ) # - len(list_of_excluded_cuis_in_hpo) list_of_excluded_cuis_in_hpo[0:3] # + # re-calculate precision and recall # + tags=[] # 对于每一份疾病描述文本 # 观察机器所做的预测,并基于人工预测评估机器所作的预测是否正确 # precision = true prediction/total prediction # 预测正确的表型实体或部位实体 full_matched_pred_entity_counts = [0,0] partial_matched_pred_entity_counts = [0,0] # 所有预测 total_pred_entity_counts = [0,0] # for disease_name in diseases_used_for_training: for disease_name in diseases_used_for_test: # 疾病百科原文 file_of_original_text = os.path.join( corpus_path, disease_name + '.txt' ) raw_text = open( file_of_original_text, 'r', encoding='utf-8').read() raw_text = raw_text.replace('\n',',') # 获取该疾病文本的人工标注 # 原来是这里出错了,将 disease_name 设置为了 "Acinetobacter infections" file_of_brat_annotation = os.path.join( corpus_path, disease_name + '.ann' ) dict_of_brat_annotations = read_brat_annotation_file(file_of_brat_annotation) # 获取该疾病文本的机器标注 dict_of_meta_annotations = read_metamap_annotation_result( raw_text, dict_of_concepts_in_texts[disease_name] ) # dict_of_meta_annotations_cleaned = clean_metamap_annotations_in_text( dict_of_meta_annotations, \ # list_of_excluded_cuis, semantic_group_of_disorder, semantic_group_of_anatomy ) dict_of_meta_annotations_cleaned = clean_metamap_annotations_with_hpo( dict_of_meta_annotations, \ list_of_excluded_cuis_in_hpo, semantic_group_of_disorder, semantic_group_of_anatomy ) # 比较机器标注与人工标注 for span_key_of_metamap in dict_of_meta_annotations_cleaned: # metamap注释的概念 concept = dict_of_meta_annotations_cleaned[span_key_of_metamap] # 是否存在一个人工标注,与metamap的标注完全一致或部分一致 # 实体类型一样的前提下,区间标注完全一致或部分一致(重叠) strict_match_of_interval = False relax_match_of_interval = False # 该区间概念对应的实体类型 pred_type = "" # '[sosy]' if concept.semtypes[1:-1] in semantic_group_of_disorder : pred_type = 'Phenotype' elif concept.semtypes[1:-1] in semantic_group_of_anatomy : pred_type = 'FindingSite' # 统计预测 if pred_type == 'Phenotype': total_pred_entity_counts[0] +=1 elif pred_type == 'FindingSite': total_pred_entity_counts[1] +=1 # 生成区间;由于不连续实体的存在,可能有多段区间 list_of_span_intervals_by_metamap = [] # if ';' in span_key_of_metamap: for pos_info in span_key_of_metamap.split(';'): spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_metamap.append( Interval(spos, epos) ) else: pos_info = span_key_of_metamap # spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_metamap.append( Interval(spos, epos) ) # list_of_span_intervals_by_metamap # 观察这一区间是否存在于人工标注的区间中 (且实体类型一致) for span_key_of_expert in dict_of_brat_annotations: # 实体类型 real_type = dict_of_brat_annotations[span_key_of_expert][0] # span区间化 list_of_span_intervals_by_expert = [] # 解析span_key if ';' in 
span_key_of_expert: # 这里出错啦 span_key_of_expert mistaked as span_key_of_metamap for pos_info in span_key_of_expert.split(';'): spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_expert.append( Interval(spos, epos) ) else: pos_info = span_key_of_expert # spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_expert.append( Interval(spos, epos) ) # 在实体类型一致的基础上,进行区间一致性比较 if real_type == pred_type: # 区间一致性比较 # 完全一致 if set( list_of_span_intervals_by_expert ) == set( list_of_span_intervals_by_metamap ): # print("full match", list_of_span_intervals_by_expert, list_of_span_intervals_by_metamap ) strict_match_of_interval = True break # 如果两者区间不相等 else: # 但如果存在重叠关系,设定relax_match = True for _span_interval_by_metamap in list_of_span_intervals_by_metamap: if relax_match_of_interval == True: break else: for _span_interval_by_expert in list_of_span_intervals_by_expert: if _span_interval_by_metamap.overlaps( _span_interval_by_expert ): relax_match_of_interval = True break # 计数完全一致的预测和部分一致的预测 # 并根据类型分别记录表型和部位实体预测的一致性 if strict_match_of_interval == True: # print( "full match" +'\t' + span_key_of_metamap +'\t' + concept.trigger + '\t' ) if pred_type == 'Phenotype': full_matched_pred_entity_counts[0] +=1 elif pred_type == 'FindingSite': full_matched_pred_entity_counts[1] +=1 elif relax_match_of_interval == True: # print( "partial match" +'\t' + span_key_of_metamap +'\t' + concept.trigger + '\t' ) if pred_type == 'Phenotype': partial_matched_pred_entity_counts[0] +=1 elif pred_type == 'FindingSite': partial_matched_pred_entity_counts[1] +=1 # + tags=[] # 预测的准确率(冗余)较大,是需要通过规则进行过滤 print(full_matched_pred_entity_counts) print(partial_matched_pred_entity_counts) print(total_pred_entity_counts) # + # 统计人工标注的概念能被机器标注所覆盖的情况 # 对于每一份疾病描述文本 # 观察人工标注的概念,并观察人工标注的概念是否能被机器标注所覆盖 # recall = coverred annotations/total annotations # 预测正确的表型实体或部位实体 full_coverred_real_entity_counts = [0,0] partial_coverred_real_entity_counts = [0,0] # 所有预测 total_real_entity_counts = [0,0] # for disease_name in diseases_used_for_training: for disease_name in diseases_used_for_test: # 疾病百科原文 file_of_original_text = os.path.join( corpus_path, disease_name + '.txt' ) raw_text = open( file_of_original_text, 'r', encoding='utf-8').read() raw_text = raw_text.replace('\n',',') # 获取该疾病文本的人工标注 # 原来是这里出错了,将 disease_name 设置为了 "Acinetobacter infections" file_of_brat_annotation = os.path.join( corpus_path, disease_name + '.ann' ) dict_of_brat_annotations = read_brat_annotation_file(file_of_brat_annotation) # 获取该疾病文本的机器标注 dict_of_meta_annotations = read_metamap_annotation_result( raw_text, dict_of_concepts_in_texts[disease_name] ) # dict_of_meta_annotations_cleaned = clean_metamap_annotations_in_text( dict_of_meta_annotations, \ # list_of_excluded_cuis, semantic_group_of_disorder, semantic_group_of_anatomy ) dict_of_meta_annotations_cleaned = clean_metamap_annotations_with_hpo( dict_of_meta_annotations, \ list_of_excluded_cuis_in_hpo, semantic_group_of_disorder, semantic_group_of_anatomy ) # 比较机器标注与人工标注 # 对于一个人工标注 for span_key_of_expert in dict_of_brat_annotations: # 是否存在一个机器标注,能完全或部分的覆盖该人工标注? 
strict_match_of_interval = False relax_match_of_interval = False # 实体类型 real_type = dict_of_brat_annotations[span_key_of_expert][0] # 统计标注 if real_type == 'Phenotype': total_real_entity_counts[0] +=1 elif real_type == 'FindingSite': total_real_entity_counts[1] +=1 # span区间化 list_of_span_intervals_by_expert = [] # 解析span_key if ';' in span_key_of_expert: # 这里出错啦,span_key_of_expert mistaken for span_key_of_metamap for pos_info in span_key_of_expert.split(';'): spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_expert.append( Interval(spos, epos) ) else: pos_info = span_key_of_expert # spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_expert.append( Interval(spos, epos) ) # 看机器标注的区间能否覆盖 list_of_span_intervals_by_expert = [] for span_key_of_metamap in dict_of_meta_annotations_cleaned: # metamap注释的概念 concept = dict_of_meta_annotations_cleaned[span_key_of_metamap] # 该区间概念对应的实体类型 pred_type = "" # '[sosy]' if concept.semtypes[1:-1] in semantic_group_of_disorder : pred_type = 'Phenotype' elif concept.semtypes[1:-1] in semantic_group_of_anatomy : pred_type = 'FindingSite' # 生成区间;由于不连续实体的存在,可能有多段区间 list_of_span_intervals_by_metamap = [] # if ';' in span_key_of_metamap: for pos_info in span_key_of_metamap.split(';'): spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_metamap.append( Interval(spos, epos) ) else: pos_info = span_key_of_metamap # spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_metamap.append( Interval(spos, epos) ) # list_of_span_intervals_by_metamap # 在实体类型一致的基础上,进行区间一致性比较 if real_type == pred_type: # 区间一致性比较 # 完全一致 if set( list_of_span_intervals_by_expert ) == set( list_of_span_intervals_by_metamap ): strict_match_of_interval = True break # 如果两者区间不相等 else: # 但如果存在重叠关系,设定relax_match = True for _span_interval_by_expert in list_of_span_intervals_by_expert: if relax_match_of_interval == True: break else: for _span_interval_by_metamap in list_of_span_intervals_by_metamap: if _span_interval_by_expert.overlaps( _span_interval_by_metamap ): relax_match_of_interval = True break # strict_match_of_interval # 计数完全一致的预测和部分一致的预测 # 并根据类型分别记录表型和部位实体预测的一致性 if strict_match_of_interval == True: if real_type == 'Phenotype': full_coverred_real_entity_counts[0] +=1 elif real_type == 'FindingSite': full_coverred_real_entity_counts[1] +=1 elif relax_match_of_interval == True: if real_type == 'Phenotype': partial_coverred_real_entity_counts[0] +=1 elif real_type == 'FindingSite': partial_coverred_real_entity_counts[1] +=1 # + tags=[] print(full_coverred_real_entity_counts) print(partial_coverred_real_entity_counts) print(total_real_entity_counts) # + # 由于在训练集中标注过的术语都算是已知的数据,如果将训练集中的数据加入到MetaMap未识别到的部分 (string-based Method),观测测试集所能达到的效果 # + tags=[] # 统计在训练集中出现过的表型术语 # + # dict_of_brat_annotations.setdefault( pos_info, ('FindingSite', ent_name, ent_id) ) # 只收集表型术语 terms_occurred_in_training_set = set() for disease_name in diseases_used_for_training: # 获取该疾病文本的人工标注 # 原来是这里出错了,将 disease_name 设置为了 "Acinetobacter infections" file_of_brat_annotation = os.path.join( corpus_path, disease_name + '.ann' ) dict_of_brat_annotations = read_brat_annotation_file(file_of_brat_annotation) # for span_key in dict_of_brat_annotations: ent_type = dict_of_brat_annotations[span_key][0] ent_name = dict_of_brat_annotations[span_key][1] if ent_type == 'Phenotype': terms_occurred_in_training_set.add( 
ent_name ) # - len(terms_occurred_in_training_set) list(terms_occurred_in_training_set)[0:6] # + # 这是MetaMap的预测 # MetaMap没有覆盖到的人工标注,检查它是否是一个出现过的术语,这一部分也要纳入统计 # 用列表记录MetaMap未覆盖的术语 [修改覆盖度计算代码] terms_not_covered_by_MetaMap = [] for disease_name in diseases_used_for_test: # 疾病百科原文 file_of_original_text = os.path.join( corpus_path, disease_name + '.txt' ) raw_text = open( file_of_original_text, 'r', encoding='utf-8').read() raw_text = raw_text.replace('\n',',') # 获取该疾病文本的人工标注 # 原来是这里出错了,将 disease_name 设置为了 "Acinetobacter infections" file_of_brat_annotation = os.path.join( corpus_path, disease_name + '.ann' ) dict_of_brat_annotations = read_brat_annotation_file(file_of_brat_annotation) # 获取该疾病文本的机器标注 dict_of_meta_annotations = read_metamap_annotation_result( raw_text, dict_of_concepts_in_texts[disease_name] ) # dict_of_meta_annotations_cleaned = clean_metamap_annotations_in_text( dict_of_meta_annotations, \ # list_of_excluded_cuis, semantic_group_of_disorder, semantic_group_of_anatomy ) dict_of_meta_annotations_cleaned = clean_metamap_annotations_with_hpo( dict_of_meta_annotations, \ list_of_excluded_cuis_in_hpo, semantic_group_of_disorder, semantic_group_of_anatomy ) # 比较机器标注与人工标注 # 对于一个人工标注 for span_key_of_expert in dict_of_brat_annotations: # 是否存在一个机器标注,能完全或部分的覆盖该人工标注? strict_match_of_interval = False relax_match_of_interval = False # 实体类型 ent_type real_type = dict_of_brat_annotations[span_key_of_expert][0] # 实体名称 ent_name real_name = dict_of_brat_annotations[span_key_of_expert][1] # 统计标注 if real_type == 'Phenotype': total_real_entity_counts[0] +=1 elif real_type == 'FindingSite': total_real_entity_counts[1] +=1 # span区间化 list_of_span_intervals_by_expert = [] # 解析span_key if ';' in span_key_of_expert: # 这里出错啦,span_key_of_expert mistaken for span_key_of_metamap for pos_info in span_key_of_expert.split(';'): spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_expert.append( Interval(spos, epos) ) else: pos_info = span_key_of_expert # spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_expert.append( Interval(spos, epos) ) # 看机器标注的区间能否覆盖 list_of_span_intervals_by_expert = [] for span_key_of_metamap in dict_of_meta_annotations_cleaned: # metamap注释的概念 concept = dict_of_meta_annotations_cleaned[span_key_of_metamap] # 该区间概念对应的实体类型 pred_type = "" # '[sosy]' if concept.semtypes[1:-1] in semantic_group_of_disorder : pred_type = 'Phenotype' elif concept.semtypes[1:-1] in semantic_group_of_anatomy : pred_type = 'FindingSite' # 生成区间;由于不连续实体的存在,可能有多段区间 list_of_span_intervals_by_metamap = [] # if ';' in span_key_of_metamap: for pos_info in span_key_of_metamap.split(';'): spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_metamap.append( Interval(spos, epos) ) else: pos_info = span_key_of_metamap # spos, epos = pos_info.split(' ') spos = int(spos) epos = int(epos) # 生成span区间,便于比较 list_of_span_intervals_by_metamap.append( Interval(spos, epos) ) # list_of_span_intervals_by_metamap # 在实体类型一致的基础上,进行区间一致性比较 if real_type == pred_type: # 区间一致性比较 # 完全一致 if set( list_of_span_intervals_by_expert ) == set( list_of_span_intervals_by_metamap ): strict_match_of_interval = True break # 如果两者区间不相等 else: # 但如果存在重叠关系,设定relax_match = True for _span_interval_by_expert in list_of_span_intervals_by_expert: if relax_match_of_interval == True: break else: for _span_interval_by_metamap in list_of_span_intervals_by_metamap: if _span_interval_by_expert.overlaps( 
_span_interval_by_metamap ): relax_match_of_interval = True break # strict_match_of_interval # 计数完全一致的预测和部分一致的预测 # 并根据类型分别记录表型和部位实体预测的一致性 if strict_match_of_interval == True: if real_type == 'Phenotype': full_coverred_real_entity_counts[0] +=1 elif real_type == 'FindingSite': full_coverred_real_entity_counts[1] +=1 elif relax_match_of_interval == True: if real_type == 'Phenotype': partial_coverred_real_entity_counts[0] +=1 elif real_type == 'FindingSite': partial_coverred_real_entity_counts[1] +=1 # 如果该术语没有被MetaMap覆盖,记录这一未被MetaMap覆盖的实体 if strict_match_of_interval == False and relax_match_of_interval == False: # 统计标注 if real_type == 'Phenotype': terms_not_covered_by_MetaMap.append( real_name ) # - len( terms_not_covered_by_MetaMap ) terms_not_covered_by_MetaMap[0:5] # 观测未被覆盖的术语有多少已在训练集中标注过 count = 0 for term in terms_not_covered_by_MetaMap: if term in terms_occurred_in_training_set: count += 1 count # + # 表型属性模板的定义方法 # 给定语料,提取包含有表型的句子,统计表型上下文中属性的出现情况 # 根据属性的出现情况筛选属性 # - # concepts = dict_of_concepts_in_texts[diseases_name] # concepts = mm.extract_concepts(text_sents, list(sents_idx) # concept.index --> text_sents[concept.index] # concept.pos_info 句子级别的位置 # finding attribute in the context around the concept diseases_name = "Actinomycosis" dict_of_concepts_in_texts[diseases_name] from flashtext import KeywordProcessor # + tags=[] # 纳入全部的疾病 sents_with_phenotypes = set() for disease_name in list_of_diseases: # 该疾病文档的句子划分 file_of_original_text = os.path.join( corpus_path, disease_name + '.txt' ) raw_text = open( file_of_original_text, 'r', encoding='utf-8').read() raw_text = raw_text.replace('\n',',') text_sents = [] text_sents_with_spans = [] for start, end in PunktSentenceTokenizer().span_tokenize(raw_text): text_sents_with_spans.append( (start,end, raw_text[start:end]) ) text_sents.append( raw_text[start:end] ) # 获取该疾病文本的机器标注 dict_of_meta_annotations = read_metamap_annotation_result( raw_text, dict_of_concepts_in_texts[disease_name] ) # 只保留其中HPO表型异常的部分 dict_of_meta_annotations_cleaned = clean_metamap_annotations_with_hpo( dict_of_meta_annotations, \ list_of_excluded_cuis_in_hpo, semantic_group_of_disorder, semantic_group_of_anatomy ) # for span_key_of_metamap in dict_of_meta_annotations_cleaned: # metamap注释的概念 concept = dict_of_meta_annotations_cleaned[span_key_of_metamap] # 定位该表型概念所在的句子索引 sent_idx = int( concept.index ) # 记录该句子 sents_with_phenotypes.add( text_sents[sent_idx] ) # - # 合计得到1688个句子 len( sents_with_phenotypes ) # + # list(sents_with_phenotypes)[1] # + # 载入属性库 # - attribute_lib_path = "/home/denglizong/SSUMiner/corpus/AttributeLibrary" snomed_attributes_file = os.path.join( attribute_lib_path, "snomed_attributes_lib_filted.json" ) hpo_attributes_file = os.path.join( attribute_lib_path, "hpo_attributes_lib_filted.json" ) icd_attributes_file = os.path.join( attribute_lib_path, "icd_attributes_lib.json" ) cem_attributes_file = os.path.join( attribute_lib_path, "cem_attributes_lib.json" ) # 读入属性库 dict_of_snomed_attributes = json.load( open(snomed_attributes_file,'r',encoding='utf-8') ) # dict_of_snomed_attributes dict_of_hpo_attributes = json.load( open(hpo_attributes_file,'r',encoding='utf-8') ) # dict_of_icd_attributes = json.load( open(icd_attributes_file,'r',encoding='utf-8') ) # dict_of_cem_attributes = json.load( open(cem_attributes_file,'r',encoding='utf-8') ) # + # dict_of_cem_attributes # - # 合并属性库 dict_of_attributes = {} dict_of_attributes.update( dict_of_snomed_attributes ) dict_of_attributes.update( dict_of_hpo_attributes ) dict_of_attributes.update( 
dict_of_icd_attributes ) dict_of_attributes.update( dict_of_cem_attributes ) len(dict_of_attributes) dict_of_attributes["272141005::Severities"] # + # 属性库中有342个属性,应该挑选哪一些属性及属性槽来配置属性模板呢? # - # 肯定不纳入属性模板的属性 excluded_attributes = ['272068007::Negative integer',"272070003::Ordinal number","272150007::Openness", "272126005::Order values","272127001::Event orders","272146000::Uniformities", "57615005::Definite time", "310886004::Absolute times - hours", "314804007::Cardiovascular site descriptor","272147009::Velocities", "303109001::Pathogeneses","314808005::Oral site descriptor","314809002::Urinary site descriptor", "314806009::Respiratory site descriptor","314805008::Digestive site descriptor"] # + # 先从属性库中删除这些属性 for key in excluded_attributes: if key in dict_of_attributes: del dict_of_attributes[key] len(dict_of_attributes) # + # "314806009::Respiratory site descriptor" in dict_of_attributes # + tags=[] # 机器初筛:在至少两个不同的句子中出现,且出现的属性取值是不相同的。 # 符合条件的候选属性个数 count_of_candidate_attributes = 0 # 符合条件的候选属性信息 list of tmpdict info_of_candidate_attributes = [] for name_of_attribute in dict_of_attributes: # if name_of_attribute in excluded_attributes: continue # values values_of_attribute = dict_of_attributes[name_of_attribute] # 如果属性的取值列表过长(超过30个), 跳过 if len(values_of_attribute) >=25: continue # creat a keyword_processor for this attribute keyword_processor = KeywordProcessor() # keyword_processor.add_keyword(, ) for value in values_of_attribute: # severe --> 【severe】 keyword_processor.add_keyword( value, '【' + value + '】' ) # occurences of values in sentences # 1. 统计该属性出现在了多少句子中 (general) occurence_of_attribute_in_sents = 0 # 2. 统计该属性的取值的出现次数 (在一个句子中出现多次仅算一次,以便与属性的统计统一) occurence_of_values_in_sents = {} for value in values_of_attribute: occurence_of_values_in_sents.setdefault( value.lower() , 0) # 3. 
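# The candidate screening above leans on flashtext's KeywordProcessor to find attribute values
# inside sentences. Below is a minimal, self-contained sketch of that counting pattern; the
# value list and sentences are toy examples, not the project's real attribute library.
# +
from flashtext import KeywordProcessor

toy_values = ["mild", "moderate", "severe"]        # hypothetical attribute values
toy_sents = [
    "The patient developed severe anemia.",
    "Mild fever and moderate headache were reported.",
    "No rash was observed.",
]

kp = KeywordProcessor()
for value in toy_values:
    kp.add_keyword(value, "【" + value + "】")       # mark hits the same way as in the loop above

sentences_with_any_value = 0
sentence_counts_per_value = {value: 0 for value in toy_values}
for sent in toy_sents:
    hits = kp.extract_keywords(sent)
    if hits:
        sentences_with_any_value += 1
        # a value is counted at most once per sentence, mirroring the occurrence statistics above
        for hit_value in set(hit[1:-1].lower() for hit in hits):
            sentence_counts_per_value[hit_value] += 1

print(sentences_with_any_value, sentence_counts_per_value)
# expected: 2 {'mild': 1, 'moderate': 1, 'severe': 1}
# -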
记录n(10)个包含该属性某一取值的句子,记录该句子包含的该属性的1个取值 sentences_containing_attribute = [] # 注意sent不是一个string for sent in sents_with_phenotypes: # 搜索句子中出现的属性取值 # keyword_processor.add_keyword('is', 'IS') # keyword_processor.extract_keywords('This is a problem') # ['IS'] # keyword_processor.replace_keywords('This is a problem') # 'This IS a problem' hits = keyword_processor.extract_keywords(sent) if len(hits) > 0: # 如果搜索到了属性取值,那么该属性的出现次数+1 occurence_of_attribute_in_sents += 1 # 记录出现了该属性的句子,及其中出现的属性取值 if len(sentences_containing_attribute) <= 10: # sentences_containing_attribute.setdefault( hits[0], sent ) # sent marked up with value marked_sent = keyword_processor.replace_keywords(sent) sentences_containing_attribute.append(marked_sent) # 如果某一取值存在,统计数+1 # hits ['【Early stage】'] hit[1:-1] lower_hits = [hit[1:-1].lower() for hit in hits] for occurred_value in set(lower_hits): if occurred_value in occurence_of_values_in_sents: occurence_of_values_in_sents[occurred_value] += 1 # 属性筛选 # 至少出现在两个句子中;至少有两个取值出现。 # 为了避免假阳性/偶然性,一个有效的取值至少应该出现在2个句子中 # 一个属性至少具有2个这样的取值 keep_flag = False number_of_qualified_value = 0 for occurred_value in occurence_of_values_in_sents: # 为了避免假阳性/偶然性,一个有效的取值至少应该出现在2个句子中 if occurence_of_values_in_sents[occurred_value] >= 2: number_of_qualified_value += 1 # 如果一个属性至少具有2个这样的取值,那这一属性存在的可能性非常大了 if number_of_qualified_value >=2 : keep_flag = True # output observing results if keep_flag: # print("候选属性名称: ", name_of_attribute) # print("该属性在句子中的出现次数:", occurence_of_attribute_in_sents) # print("该属性的取值在语料中的分布:", occurence_of_values_in_sents) # print("包含该属性的句子证据:") # for index, sent in enumerate(sentences_containing_attribute) : # print(index, sent) # break count_of_candidate_attributes += 1 # info of candidate attribute tmpdict = {} tmpdict.setdefault("name", name_of_attribute ) tmpdict.setdefault("occurrences", occurence_of_attribute_in_sents ) # 取值分布按出现次数排列 (精简下显示模式) [ ("covered",5), ("unaided",2)] distribution_of_values_sorted = sorted(occurence_of_values_in_sents.items(), key=lambda item:item[1], reverse=True) number_of_occurred_values = 0 dict_of_distribution_of_values = {} for _value, _count in distribution_of_values_sorted: dict_of_distribution_of_values.setdefault(_value, _count) if _count >= 1: number_of_occurred_values +=1 tmpdict.setdefault("distributions", dict_of_distribution_of_values ) # 在语料中出现过的属性取值占所有取值数目的百分比 percentage_of_occurred_values = round(number_of_occurred_values/len(distribution_of_values_sorted), 2) tmpdict.setdefault("percentage", percentage_of_occurred_values ) # evidence of sentences tmpdict.setdefault("evidences", sentences_containing_attribute ) # info_of_candidate_attributes.append( tmpdict ) print(count_of_candidate_attributes) # - info_of_candidate_attributes[0] # + # 根据属性的出现次数进行排序 sorted_occurrence_of_attributes = [] for infodict in info_of_candidate_attributes: sorted_occurrence_of_attributes.append( (infodict["occurrences"], infodict["name"]) ) sorted_occurrence_of_attributes.sort(reverse=True) sorted_occurrence_of_attributes[0:5] # + # 根据occurence排序后的结果文件 sorted_info_of_candidate_attributes = [] for (count, name_of_attribute) in sorted_occurrence_of_attributes: # for infodict in info_of_candidate_attributes: if infodict["name"] == name_of_attribute: sorted_info_of_candidate_attributes.append( infodict ) # - os.getcwd() # + # 保存 info_of_candidate_attributes 看看 file_of_candidate_attributes = os.path.join( attribute_lib_path, 'results_of_candidate_attributes.json' ) json.dump(sorted_info_of_candidate_attributes, open(file_of_candidate_attributes,'w', 
encoding='utf-8'), ensure_ascii=False, indent=2 ) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.0 64-bit (''venv'': virtualenv)' # language: python # name: python38064bitvenvvirtualenve617f17e49084a7c973a4e86cf48a722 # --- # + import sys sys.path.append('..') from utils.misc import * from utils.classifier import * from utils.visualiser import * # - # ## Loading pre-processed data # + # train df_train = pd.read_csv("../data/train/df_train.csv", index_col=0) df_train_b0 = pd.read_csv("../data/train/df_train_b0.csv", index_col=0) df_train_b1 = pd.read_csv("../data/train/df_train_b1.csv", index_col=0) df_train_b2 = pd.read_csv("../data/train/df_train_b2.csv", index_col=0) df_train_b0_f0 = pd.read_csv("../data/train/df_train_b0_f0.csv", index_col=0) df_train_b0_f1 = pd.read_csv("../data/train/df_train_b0_f1.csv", index_col=0) df_train_b0_f2 = pd.read_csv("../data/train/df_train_b0_f2.csv", index_col=0) df_train_b0_f3 = pd.read_csv("../data/train/df_train_b0_f3.csv", index_col=0) df_train_b1_f0 = pd.read_csv("../data/train/df_train_b1_f0.csv", index_col=0) df_train_b1_f1 = pd.read_csv("../data/train/df_train_b1_f1.csv", index_col=0) df_train_b1_f2 = pd.read_csv("../data/train/df_train_b1_f2.csv", index_col=0) df_train_b1_f3 = pd.read_csv("../data/train/df_train_b1_f3.csv", index_col=0) df_train_b2_f0 = pd.read_csv("../data/train/df_train_b2_f0.csv", index_col=0) df_train_b2_f1 = pd.read_csv("../data/train/df_train_b2_f1.csv", index_col=0) df_train_b2_f2 = pd.read_csv("../data/train/df_train_b2_f2.csv", index_col=0) df_train_b2_f3 = pd.read_csv("../data/train/df_train_b2_f3.csv", index_col=0) df_train_b2_f4 = pd.read_csv("../data/train/df_train_b2_f4.csv", index_col=0) df_train_wap = np.load("../data/train/df_train_wap.npy", allow_pickle=True) df_train_b0_wap = np.load("../data/train/df_train_b0_wap.npy", allow_pickle=True) df_train_b1_wap = np.load("../data/train/df_train_b1_wap.npy", allow_pickle=True) df_train_b2_wap = np.load("../data/train/df_train_b2_wap.npy", allow_pickle=True) df_train_b0_f0_wap = np.load("../data/train/df_train_b0_f0_wap.npy", allow_pickle=True) df_train_b0_f1_wap = np.load("../data/train/df_train_b0_f1_wap.npy", allow_pickle=True) df_train_b0_f2_wap = np.load("../data/train/df_train_b0_f2_wap.npy", allow_pickle=True) df_train_b0_f3_wap = np.load("../data/train/df_train_b0_f3_wap.npy", allow_pickle=True) df_train_b1_f0_wap = np.load("../data/train/df_train_b1_f0_wap.npy", allow_pickle=True) df_train_b1_f1_wap = np.load("../data/train/df_train_b1_f1_wap.npy", allow_pickle=True) df_train_b1_f2_wap = np.load("../data/train/df_train_b1_f2_wap.npy", allow_pickle=True) df_train_b1_f3_wap = np.load("../data/train/df_train_b1_f3_wap.npy", allow_pickle=True) df_train_b2_f0_wap = np.load("../data/train/df_train_b2_f0_wap.npy", allow_pickle=True) df_train_b2_f1_wap = np.load("../data/train/df_train_b2_f1_wap.npy", allow_pickle=True) df_train_b2_f2_wap = np.load("../data/train/df_train_b2_f2_wap.npy", allow_pickle=True) df_train_b2_f3_wap = np.load("../data/train/df_train_b2_f3_wap.npy", allow_pickle=True) df_train_b2_f4_wap = np.load("../data/train/df_train_b2_f4_wap.npy", allow_pickle=True) # - # test df_test = pd.read_csv("../data/test/df_test.csv", index_col=0) # ## Dividing space by direction # # ### Building 0 # + lat_split = np.mean(df_train_b0['LATITUDE'].astype(float)) # long_split = np.mean(df_train_b0['LONGITUDE'].astype(float)) # NE 
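# The directional subsets below all follow one boolean-mask pattern: compare LATITUDE against a
# latitude split and LONGITUDE against a longitude split. A generic helper is sketched here for
# reference only (assuming a DataFrame with numeric LATITUDE/LONGITUDE columns); the notebook
# keeps the explicit per-building filters because the longitude cut-off differs between the
# north and south halves of each building.
# +
import pandas as pd

def split_by_quadrant(df: pd.DataFrame, lat_split: float, long_split: float) -> dict:
    """Return NE/NW/SE/SW subsets of df relative to the given latitude/longitude splits."""
    north = df['LATITUDE'] >= lat_split
    east = df['LONGITUDE'] >= long_split
    return {
        'ne': df[north & east],
        'nw': df[north & ~east],
        'se': df[~north & east],
        'sw': df[~north & ~east],
    }

# hypothetical usage with a single longitude threshold:
# quadrants_b0 = split_by_quadrant(df_train_b0, lat_split, -7640)
# -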
df_train_b0_ne = df_train_b0[(df_train_b0['LATITUDE'] >= lat_split) & (df_train_b0['LONGITUDE'] >= -7629)] # NW df_train_b0_nw = df_train_b0[(df_train_b0['LATITUDE'] >= lat_split) & (df_train_b0['LONGITUDE'] < -7629)] # SE df_train_b0_se = df_train_b0[(df_train_b0['LATITUDE'] < lat_split) & (df_train_b0['LONGITUDE'] >= -7655)] # SW df_train_b0_sw = df_train_b0[(df_train_b0['LATITUDE'] < lat_split) & (df_train_b0['LONGITUDE'] < -7655)] # + # floors in NE df_train_b0_ne_f0 = df_train_b0_ne[df_train_b0_ne['FLOOR'] == 0] df_train_b0_ne_f1 = df_train_b0_ne[df_train_b0_ne['FLOOR'] == 1] df_train_b0_ne_f2 = df_train_b0_ne[df_train_b0_ne['FLOOR'] == 2] df_train_b0_ne_f3 = df_train_b0_ne[df_train_b0_ne['FLOOR'] == 3] # floors in NW df_train_b0_nw_f0 = df_train_b0_nw[df_train_b0_nw['FLOOR'] == 0] df_train_b0_nw_f1 = df_train_b0_nw[df_train_b0_nw['FLOOR'] == 1] df_train_b0_nw_f2 = df_train_b0_nw[df_train_b0_nw['FLOOR'] == 2] df_train_b0_nw_f3 = df_train_b0_nw[df_train_b0_nw['FLOOR'] == 3] # floors in SE df_train_b0_se_f0 = df_train_b0_se[df_train_b0_se['FLOOR'] == 0] df_train_b0_se_f1 = df_train_b0_se[df_train_b0_se['FLOOR'] == 1] df_train_b0_se_f2 = df_train_b0_se[df_train_b0_se['FLOOR'] == 2] df_train_b0_se_f3 = df_train_b0_se[df_train_b0_se['FLOOR'] == 3] # floors in SW df_train_b0_sw_f0 = df_train_b0_sw[df_train_b0_sw['FLOOR'] == 0] df_train_b0_sw_f1 = df_train_b0_sw[df_train_b0_sw['FLOOR'] == 1] df_train_b0_sw_f2 = df_train_b0_sw[df_train_b0_sw['FLOOR'] == 2] df_train_b0_sw_f3 = df_train_b0_sw[df_train_b0_sw['FLOOR'] == 3] # - # ### Building 1 # + # NE df_train_b1_ne = df_train_b1[(df_train_b1['LATITUDE'] >= 4864850) & (df_train_b1['LONGITUDE'] >= -7465)] # NW df_train_b1_nw = df_train_b1[(df_train_b1['LATITUDE'] >= 4864900) & (df_train_b1['LONGITUDE'] < -7510)] # SE df_train_b1_se = df_train_b1[(df_train_b1['LATITUDE'] < 4864855) & (df_train_b1['LONGITUDE'] >= -7480)] # SW df_train_b1_sw = df_train_b1[(df_train_b1['LATITUDE'] < 4864900) & (df_train_b1['LONGITUDE'] < -7540)] # CENTRE df_train_b1_ct = df_train_b1[(df_train_b1['LATITUDE'] >= 4864830) & (df_train_b1['LATITUDE'] < 4864920) & (df_train_b1['LONGITUDE'] >= -7540) & (df_train_b1['LONGITUDE'] < -7465)] # + # floors in NE df_train_b1_ne_f0 = df_train_b1_ne[df_train_b1_ne['FLOOR'] == 0] df_train_b1_ne_f1 = df_train_b1_ne[df_train_b1_ne['FLOOR'] == 1] df_train_b1_ne_f2 = df_train_b1_ne[df_train_b1_ne['FLOOR'] == 2] df_train_b1_ne_f3 = df_train_b1_ne[df_train_b1_ne['FLOOR'] == 3] # floors in NW df_train_b1_nw_f0 = df_train_b1_nw[df_train_b1_nw['FLOOR'] == 0] df_train_b1_nw_f1 = df_train_b1_nw[df_train_b1_nw['FLOOR'] == 1] df_train_b1_nw_f2 = df_train_b1_nw[df_train_b1_nw['FLOOR'] == 2] df_train_b1_nw_f3 = df_train_b1_nw[df_train_b1_nw['FLOOR'] == 3] # floors in SE df_train_b1_se_f0 = df_train_b1_se[df_train_b1_se['FLOOR'] == 0] df_train_b1_se_f1 = df_train_b1_se[df_train_b1_se['FLOOR'] == 1] df_train_b1_se_f2 = df_train_b1_se[df_train_b1_se['FLOOR'] == 2] df_train_b1_se_f3 = df_train_b1_se[df_train_b1_se['FLOOR'] == 3] # floors in SW df_train_b1_sw_f0 = df_train_b1_sw[df_train_b1_sw['FLOOR'] == 0] df_train_b1_sw_f1 = df_train_b1_sw[df_train_b1_sw['FLOOR'] == 1] df_train_b1_sw_f2 = df_train_b1_sw[df_train_b1_sw['FLOOR'] == 2] df_train_b1_sw_f3 = df_train_b1_sw[df_train_b1_sw['FLOOR'] == 3] # floors in CT df_train_b1_ct_f0 = df_train_b1_ct[df_train_b1_ct['FLOOR'] == 0] df_train_b1_ct_f1 = df_train_b1_ct[df_train_b1_ct['FLOOR'] == 1] df_train_b1_ct_f2 = df_train_b1_ct[df_train_b1_ct['FLOOR'] == 2] df_train_b1_ct_f3 = 
df_train_b1_ct[df_train_b1_ct['FLOOR'] == 3] # - # ### Building 2 # NE df_train_b2_ne = df_train_b2[(df_train_b2['LATITUDE'] >= 4864780) & (df_train_b2['LONGITUDE'] >= -7345)] # NW df_train_b2_nw = df_train_b2[(df_train_b2['LATITUDE'] >= 4864820) & (df_train_b2['LONGITUDE'] < -7340)] # SE df_train_b2_se = df_train_b2[(df_train_b2['LATITUDE'] < 4864780) & (df_train_b2['LONGITUDE'] >= -7375)] # SW df_train_b2_sw = df_train_b2[(df_train_b2['LATITUDE'] < 4864820) & (df_train_b2['LONGITUDE'] < -7375)] # + # floors in NE df_train_b2_ne_f0 = df_train_b2_ne[df_train_b2_ne['FLOOR'] == 0] df_train_b2_ne_f1 = df_train_b2_ne[df_train_b2_ne['FLOOR'] == 1] df_train_b2_ne_f2 = df_train_b2_ne[df_train_b2_ne['FLOOR'] == 2] df_train_b2_ne_f3 = df_train_b2_ne[df_train_b2_ne['FLOOR'] == 3] df_train_b2_ne_f4 = df_train_b2_ne[df_train_b2_ne['FLOOR'] == 4] # floors in NW df_train_b2_nw_f0 = df_train_b2_nw[df_train_b2_nw['FLOOR'] == 0] df_train_b2_nw_f1 = df_train_b2_nw[df_train_b2_nw['FLOOR'] == 1] df_train_b2_nw_f2 = df_train_b2_nw[df_train_b2_nw['FLOOR'] == 2] df_train_b2_nw_f3 = df_train_b2_nw[df_train_b2_nw['FLOOR'] == 3] df_train_b2_nw_f4 = df_train_b2_nw[df_train_b2_nw['FLOOR'] == 4] # floors in SE df_train_b2_se_f0 = df_train_b2_se[df_train_b2_se['FLOOR'] == 0] df_train_b2_se_f1 = df_train_b2_se[df_train_b2_se['FLOOR'] == 1] df_train_b2_se_f2 = df_train_b2_se[df_train_b2_se['FLOOR'] == 2] df_train_b2_se_f3 = df_train_b2_se[df_train_b2_se['FLOOR'] == 3] df_train_b2_se_f4 = df_train_b2_se[df_train_b2_se['FLOOR'] == 4] # floors in SW df_train_b2_sw_f0 = df_train_b2_sw[df_train_b2_sw['FLOOR'] == 0] df_train_b2_sw_f1 = df_train_b2_sw[df_train_b2_sw['FLOOR'] == 1] df_train_b2_sw_f2 = df_train_b2_sw[df_train_b2_sw['FLOOR'] == 2] df_train_b2_sw_f3 = df_train_b2_sw[df_train_b2_sw['FLOOR'] == 3] df_train_b2_sw_f4 = df_train_b2_sw[df_train_b2_sw['FLOOR'] == 4] # - # ## Normalisation & PCA # + from sklearn.preprocessing import StandardScaler from sklearn.decomposition import PCA def preprocess(data, filename='df'): ss = StandardScaler() pca = PCA(n_components=.95) df = data.iloc[:, :465] df = ss.fit_transform(df) df = pca.fit_transform(df) data.to_csv("../data/train/area/" + filename + ".csv") np.save("../data/train/area/" + filename + "_wap.npy", df) # - # ### Building 0 # + kwargs = { 'data' : df_train_b0_ne, 'filename': 'df_train_b0_ne' } preprocess(**kwargs) # + kwargs = { 'data' : df_train_b0_nw, 'filename': 'df_train_b0_nw' } preprocess(**kwargs) # + kwargs = { 'data' : df_train_b0_se, 'filename': 'df_train_b0_se' } preprocess(**kwargs) # + kwargs = { 'data' : df_train_b0_sw, 'filename': 'df_train_b0_sw' } preprocess(**kwargs) # + preprocess(df_train_b0_ne_f0, 'df_train_b0_ne_f0') preprocess(df_train_b0_ne_f1, 'df_train_b0_ne_f1') preprocess(df_train_b0_ne_f2, 'df_train_b0_ne_f2') preprocess(df_train_b0_ne_f3, 'df_train_b0_ne_f3') preprocess(df_train_b0_nw_f0, 'df_train_b0_nw_f0') preprocess(df_train_b0_nw_f1, 'df_train_b0_nw_f1') preprocess(df_train_b0_nw_f2, 'df_train_b0_nw_f2') preprocess(df_train_b0_nw_f3, 'df_train_b0_nw_f3') preprocess(df_train_b0_se_f0, 'df_train_b0_se_f0') preprocess(df_train_b0_se_f1, 'df_train_b0_se_f1') preprocess(df_train_b0_se_f2, 'df_train_b0_se_f2') preprocess(df_train_b0_se_f3, 'df_train_b0_se_f3') preprocess(df_train_b0_sw_f0, 'df_train_b0_sw_f0') preprocess(df_train_b0_sw_f1, 'df_train_b0_sw_f1') preprocess(df_train_b0_sw_f2, 'df_train_b0_sw_f2') preprocess(df_train_b0_sw_f3, 'df_train_b0_sw_f3') # - # ### Building 1 # + kwargs = { 'data' : df_train_b1_ne, 'filename': 
'df_train_b1_ne' } preprocess(**kwargs) # + kwargs = { 'data' : df_train_b1_nw, 'filename': 'df_train_b1_nw' } preprocess(**kwargs) # + kwargs = { 'data' : df_train_b1_se, 'filename': 'df_train_b1_se' } preprocess(**kwargs) # + kwargs = { 'data' : df_train_b1_sw, 'filename': 'df_train_b1_sw' } preprocess(**kwargs) # + kwargs = { 'data' : df_train_b1_ct, 'filename': 'df_train_b1_ct' } preprocess(**kwargs) # + preprocess(df_train_b1_ne_f0, 'df_train_b1_ne_f0') preprocess(df_train_b1_ne_f1, 'df_train_b1_ne_f1') preprocess(df_train_b1_ne_f2, 'df_train_b1_ne_f2') preprocess(df_train_b1_ne_f3, 'df_train_b1_ne_f3') preprocess(df_train_b1_nw_f0, 'df_train_b1_nw_f0') preprocess(df_train_b1_nw_f1, 'df_train_b1_nw_f1') preprocess(df_train_b1_nw_f2, 'df_train_b1_nw_f2') preprocess(df_train_b1_nw_f3, 'df_train_b1_nw_f3') preprocess(df_train_b1_se_f0, 'df_train_b1_se_f0') preprocess(df_train_b1_se_f1, 'df_train_b1_se_f1') preprocess(df_train_b1_se_f2, 'df_train_b1_se_f2') preprocess(df_train_b1_se_f3, 'df_train_b1_se_f3') preprocess(df_train_b1_sw_f0, 'df_train_b1_sw_f0') preprocess(df_train_b1_sw_f1, 'df_train_b1_sw_f1') preprocess(df_train_b1_sw_f2, 'df_train_b1_sw_f2') preprocess(df_train_b1_sw_f3, 'df_train_b1_sw_f3') preprocess(df_train_b1_ct_f0, 'df_train_b1_ct_f0') preprocess(df_train_b1_ct_f1, 'df_train_b1_ct_f1') preprocess(df_train_b1_ct_f2, 'df_train_b1_ct_f2') preprocess(df_train_b1_ct_f3, 'df_train_b1_ct_f3') # - # ### Building 2 # + kwargs = { 'data' : df_train_b2_ne, 'filename': 'df_train_b2_ne' } preprocess(**kwargs) # + kwargs = { 'data' : df_train_b2_nw, 'filename': 'df_train_b2_nw' } preprocess(**kwargs) # + kwargs = { 'data' : df_train_b2_se, 'filename': 'df_train_b2_se' } preprocess(**kwargs) # + kwargs = { 'data' : df_train_b2_sw, 'filename': 'df_train_b2_sw' } preprocess(**kwargs) # + preprocess(df_train_b2_ne_f0, 'df_train_b2_ne_f0') preprocess(df_train_b2_ne_f1, 'df_train_b2_ne_f1') preprocess(df_train_b2_ne_f2, 'df_train_b2_ne_f2') preprocess(df_train_b2_ne_f3, 'df_train_b2_ne_f3') preprocess(df_train_b2_ne_f4, 'df_train_b2_ne_f4') preprocess(df_train_b2_nw_f0, 'df_train_b2_nw_f0') preprocess(df_train_b2_nw_f1, 'df_train_b2_nw_f1') preprocess(df_train_b2_nw_f2, 'df_train_b2_nw_f2') preprocess(df_train_b2_nw_f3, 'df_train_b2_nw_f3') preprocess(df_train_b2_nw_f4, 'df_train_b2_nw_f4') preprocess(df_train_b2_se_f0, 'df_train_b2_se_f0') preprocess(df_train_b2_se_f1, 'df_train_b2_se_f1') preprocess(df_train_b2_se_f2, 'df_train_b2_se_f2') preprocess(df_train_b2_se_f3, 'df_train_b2_se_f3') # preprocess(df_train_b2_se_f4, 'df_train_b2_se_f4') # no samples preprocess(df_train_b2_sw_f0, 'df_train_b2_sw_f0') preprocess(df_train_b2_sw_f1, 'df_train_b2_sw_f1') preprocess(df_train_b2_sw_f2, 'df_train_b2_sw_f2') preprocess(df_train_b2_sw_f3, 'df_train_b2_sw_f3') preprocess(df_train_b2_sw_f4, 'df_train_b2_sw_f4') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import os import numpy as np no_convert = ['', '', '', ''] train = '' with open('../data/ptb-punct/train.txt') as f: for i, l in enumerate(f): # print("\n" in l) for word in l.split(): if word in no_convert: train+=word train+=" " else: for c in word: train+=c train+=" " train+= " " train+="\n" # if i>5: # break # print train with open('../data/ptb-punct/train_char.txt', 'w') as f: f.write(train) # + valid = '' with open('../data/ptb-punct/train.txt') as f: for i, 
l in enumerate(f): # print("\n" in l) for word in l.split(): if word in no_convert: valid+=word valid+=" " else: for c in word: valid+=c train+=" " train+= " " train+="\n" # - with open('../data/ptb-punct/valid_char.txt', 'w') as f: f.write(valid) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Dependencies and Setup # %matplotlib inline import matplotlib.pyplot as plt import pandas as pd import numpy as np # Hide warning messages in notebook import warnings warnings.filterwarnings('ignore') # File to Load (Remember to Change These) mouse_drug_data = "data/mouse_drug_data.csv" clinical_trial_data = "data/clinicaltrial_data.csv" # Read the Mouse and Drug Data and the Clinical Trial Data mouse_drug = pd.read_csv(mouse_drug_data) clinical_trial = pd.read_csv(clinical_trial_data) # Combine the data into a single dataset combined_data = pd.merge(clinical_trial, mouse_drug, on = "Mouse ID", how="left") # Display the data table for preview combined_data.head() # - # ## Tumor Response to Treatment # Store the Mean Tumor Volume Data Grouped by Drug and Timepoint drug_timepoint_vol = combined_data.groupby(["Drug", "Timepoint"])["Tumor Volume (mm3)"] mean_tumor_volume = drug_timepoint_vol.mean() # Convert to DataFrame mean_tumor_volume = mean_tumor_volume.reset_index() # Preview DataFrame mean_tumor_volume.head() # Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint drug_timepoint_vol = combined_data.groupby(["Drug", "Timepoint"])["Tumor Volume (mm3)"] sem_tumor_volume = drug_timepoint_vol.sem() # Convert to DataFrame sem_tumor_volume = sem_tumor_volume.reset_index() # Preview DataFrame sem_tumor_volume.head() # Minor Data Munging to Re-Format the Data Frames mean_volume_drug = mean_tumor_volume.pivot(index="Timepoint", columns="Drug", values="Tumor Volume (mm3)") # Preview that Reformatting worked mean_volume_drug.head() # + # Generate the Plot (with Error Bars) capomulin = plt.errorbar(mean_volume_drug.index, mean_volume_drug["Capomulin"], fmt="o", color= "red", ls = "--", lw = 0.5) infubinol = plt.errorbar(mean_volume_drug.index, mean_volume_drug["Infubinol"], fmt="^", color= "blue", ls = "--", lw = 0.5) ketapril = plt.errorbar(mean_volume_drug.index, mean_volume_drug["Ketapril"], fmt="s", color= "green", ls = "--", lw = 0.5) placebo = plt.errorbar(mean_volume_drug.index, mean_volume_drug["Placebo"], fmt="d", color= "black", ls = "--", lw = 0.5) plt.title("Tumor Response to Treatment") plt.xlabel("Time (Days)") plt.ylabel("Tumor Volume (mm3)") plt.legend(loc= "best") plt.grid() # Save the Figure plt.savefig("images/treatment.png") # - # Show the Figure plt.show() # ## Metastatic Response to Treatment # Store the Mean Met. Site Data Grouped by Drug and Timepoint drug_timepoint_met = combined_data.groupby(["Drug", "Timepoint"])["Metastatic Sites"] mean_met = drug_timepoint_met.mean() # Convert to DataFrame mean_met = mean_met.reset_index() # Preview DataFrame mean_met.head() # Store the Standard Error associated with Met. 
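# The same reshaping recipe recurs for every metric in this notebook: group by Drug and
# Timepoint, aggregate, then pivot so that drugs become columns indexed by timepoint. A tiny
# self-contained sketch of the pattern on made-up numbers:
# +
import pandas as pd

toy = pd.DataFrame({
    "Drug": ["Capomulin", "Capomulin", "Placebo", "Placebo"],
    "Timepoint": [0, 5, 0, 5],
    "Tumor Volume (mm3)": [45.0, 44.2, 45.0, 47.1],
})

toy_mean = toy.groupby(["Drug", "Timepoint"])["Tumor Volume (mm3)"].mean().reset_index()
toy_wide = toy_mean.pivot(index="Timepoint", columns="Drug", values="Tumor Volume (mm3)")
print(toy_wide)
# roughly:
# Drug       Capomulin  Placebo
# Timepoint
# 0               45.0     45.0
# 5               44.2     47.1
# -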
Sites Grouped by Drug and Timepoint drug_timepoint_met = combined_data.groupby(["Drug", "Timepoint"])["Metastatic Sites"] standard_error_met = drug_timepoint_met.sem() # Convert to DataFrame standard_error_met = standard_error_met.reset_index() # Preview DataFrame standard_error_met.head() # Minor Data Munging to Re-Format the Data Frames mean_met_drug = mean_met.pivot(index = "Timepoint", columns = "Drug", values = "Metastatic Sites") # Preview that Reformatting worked mean_met_drug.head() # + # Generate the Plot (with Error Bars) capomulin = plt.errorbar(mean_met_drug.index, mean_met_drug["Capomulin"], fmt="o", color= "red", ls = "--", lw = 0.5) infubinol = plt.errorbar(mean_met_drug.index, mean_met_drug["Infubinol"], fmt="^", color= "blue", ls = "--", lw = 0.5) ketapril = plt.errorbar(mean_met_drug.index, mean_met_drug["Ketapril"], fmt="s", color= "green", ls = "--", lw = 0.5) placebo = plt.errorbar(mean_met_drug.index, mean_met_drug["Placebo"], fmt="d", color= "black", ls = "--", lw = 0.5) plt.title("Metastatic Spread During Treatment") plt.xlabel("Time Duration(Days)") plt.ylabel("Met. Sites") plt.legend(loc= "best") plt.grid() # Save the Figure plt.savefig("images/spread.png") # Show the Figure plt.show() # - # ## Survival Rates # + # Store the Count of Mice Grouped by Drug and Timepoint (W can pass any metric) drug_timepoint_mice = combined_data.groupby(["Drug", "Timepoint"])["Mouse ID"] mice_count = drug_timepoint_mice.nunique() # Convert to DataFrame mice_count = mice_count.reset_index() mice_count = mice_count.rename(columns={"Mouse ID": "Mouse Count"}) # Preview DataFrame mice_count.head() # - # Minor Data Munging to Re-Format the Data Frames mice_count_drug = mice_count.pivot(index = "Timepoint", columns = "Drug", values = "Mouse Count") # Preview the Data Frame mice_count_drug.head() # + # Generate the Plot (Accounting for percentages) capomulin = plt.errorbar(mice_count_drug.index, mice_count_drug["Capomulin"], fmt="o", color= "red", ls = "--", lw = 0.5) infubinol = plt.errorbar(mice_count_drug.index, mice_count_drug["Infubinol"], fmt="^", color= "blue", ls = "--", lw = 0.5) ketapril = plt.errorbar(mice_count_drug.index, mice_count_drug["Ketapril"], fmt="s", color= "green", ls = "--", lw = 0.5) placebo = plt.errorbar(mice_count_drug.index, mice_count_drug["Placebo"], fmt="d", color= "black", ls = "--", lw = 0.5) plt.title("Survival During Treatment") plt.xlabel("Time (Days)") plt.ylabel("Survival Rate (%)") plt.legend(loc= "best") plt.grid() # Save the Figure plt.savefig("images/survival.png") # Show the Figure plt.show() # - # ## Summary Bar Graph # Calculate the percent changes for each drug percentage_change = ((mean_volume_drug.iloc[-1]-mean_volume_drug.iloc[0])/mean_volume_drug.iloc[0])*100 # Display the data to confirm percentage_change # + # Store all Relevant Percent Changes into a Tuple drug_list = ["Capomulin", "Infubinol", "Ketapril", "Placebo"] change_list = [percentage_change["Capomulin"],percentage_change["Infubinol"], percentage_change["Ketapril"],percentage_change["Placebo"]] # Splice the data between passing and failing drugs colors = [] for change in change_list: if change > 0: colors.append("r") else: colors.append("g") # Orient widths. Add labels, tick marks, etc. 
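# Quick sanity check of the percent-change formula used above, on made-up numbers rather than
# the study data: a tumor going from 45.0 mm3 at the first timepoint to 36.0 mm3 at the last
# changes by ((36.0 - 45.0) / 45.0) * 100 = -20.0%, so a shrinking tumor gets a negative value
# and is drawn as a green bar by the colour rule above.
# +
first_vol, last_vol = 45.0, 36.0
print(((last_vol - first_vol) / first_vol) * 100)   # -20.0
# -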
drug_plot = plt.bar(drug_list, change_list, width = -1, align = "edge", color = colors) plt.title("Tumor Change Over 45 Day Treatment") plt.ylabel("% Tumor Volume Change") plt.ylim(-30, 70) plt.grid() # Use functions to label the percentages of changes def autolabel(rects): for rect in rects: height = rect.get_height() if height > 0: location = 2 else: location = -8 plt.text(rect.get_x() + rect.get_width()/2, location,"%d" % int(height)+"%", ha="center", va="bottom", color="white") # Call functions to implement the function calls autolabel(drug_plot) # Save the Figure plt.savefig("images/change.png") # Show the Figure plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np # %matplotlib inline # # DataLit Homework Assignment Week 4 # Historical sales data from 45 stores. This dataset comes from KAGGLE (https://www.kaggle.com/manjeetsingh/retaildataset). # # + # Stores # Contains anonymized information about the 45 stores, indicating the type and size of store. stores = 'dataset/stores data-set.csv' # feature # Contains additional data related to the store, department, and regional activity for the given dates. feature = 'dataset/Features data set.csv' # Sales # Contains historical sales data, which covers to 2010-02-05 to 2012-11-01. sales = 'dataset/sales data-set.csv' data_stores = pd.read_csv(stores) data_feature = pd.read_csv(feature) data_sales = pd.read_csv(sales) # - data_stores.head() data_feature.head() data_sales.head() #drop all Markdowns inside data_feature data_feature.drop(['MarkDown1','MarkDown2','MarkDown3','MarkDown4','MarkDown5'], axis='columns',inplace=True) data_feature.head() # + # Merge the data in a unique DataFrame df = pd.merge(pd.merge(data_feature, data_sales, on=['Store', 'Date', 'IsHoliday']), data_stores, on=['Store']) # Convert Date to pandas Date format df['Date'] = pd.to_datetime(df['Date']) df.head() # - df.dtypes df.shape df.Type.value_counts() # + # df_average_sales_week = df.groupby(by=['Date'], as_index=False)['Weekly_Sales'].sum() # df_average_sales = df_average_sales_week.sort_values('Weekly_Sales', ascending=False) # plt.figure(figsize=(15,5)) # plt.plot(df_average_sales_week.Date, df_average_sales_week.Weekly_Sales) # plt.show() # - df.groupby([df.Date.dt.year,df.Date.dt.month]).Weekly_Sales.mean() df.groupby(df.Date.dt.year).Weekly_Sales.mean() df.groupby([df.Date.dt.year,df.Date.dt.month]).Weekly_Sales.mean().plot() # + # fig_size = plt.rcParams["figure.figsize"] # plt.plot( df.Date, df.Weekly_Sales,'o-') # fig_size[0] = 14 # fig_size[1] = 4 # plt.rcParams["figure.figsize"] = fig_size # plt.ylabel('Label 1') # plt.show() fig, ax = plt.subplots() ax.plot( df.Date.dt.year, df.Weekly_Sales) ax.set(xlabel='time (s)', ylabel='voltage (mV)', title='About as simple as it gets, folks') ax.grid() #fig.savefig("test.png") plt.show() # - df.describe().transpose() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.7 64-bit (''ussrbmi'': conda)' # name: python3 # --- # # Basic Controls for Sex # This section of the notebook is dedicated to: # 1. Cleaning the data. # 2. Examining the mismatch in years between corruption and BMI data # 3. 
Constructing a basic control for the proportion of men and women in leadership. # + # imports & function declarations import pandas as pd import statsmodels.api as sm import numpy as np import matplotlib.pyplot as plt def regress(dep: pd.Series, indep): '''Wrapper for statsmodels regression. Takes in a dependent variable and one or more independent ones''' return sm.OLS( dep, sm.add_constant( indep) ).fit() def simple_scatter(x: pd.Series, y: pd.Series, xlabel='', ylabel='', title='', regression_line=True): '''Wrapper for matplotlib scatter plot''' fig, ax = plt.subplots() ax.scatter(x, y) ax.set_ylabel(ylabel) ax.set_xlabel(xlabel) ax.set_title(title) if regression_line: m, b = np.polyfit(x, y, 1) ax.plot(x, m*x+b, color='orange') countries = x.index.values plt.show() def annotate_scatter(x:pd.Series, y:pd.Series, annot: dict, x_adjust=0, y_adjust=0, xlabel='', ylabel='', title=''): '''Wrapper for matplotlib scatter plot but with annotations''' fig, ax = plt.subplots() ax.scatter(x, y) ax.set_ylabel(ylabel) ax.set_xlabel(xlabel) ax.set_title(title) m, b = np.polyfit(x, y, 1) ax.plot(x, m*x+b, color='orange') for i, txt in enumerate(x): ax.annotate(annot[i], (x[i] + x_adjust, y[i] + y_adjust)) plt.show() def regress_filtered(col: str): '''Regresses BMI on one measure of corruption (col)''' index = corruption[col] missing_filter = index.notna() filtered_bmi = bmis[missing_filter]['median_bmi'] filtered_index = index[missing_filter] return regress(filtered_index, filtered_bmi).summary() def plot_filtered(col: str, ylabel: str): '''Plots BMI against one measure of corruption (col)''' index = corruption[col] missing_filter = index.notna() filtered_bmi = bmis[missing_filter]['median_bmi'] filtered_index = index[missing_filter] simple_scatter(filtered_bmi, filtered_index, 'Median estmated ministers\' body-mass index', ylabel) ABBREVIATIONS = { 'Estonia': 'EST', 'Lithuania': 'LTU', 'Latvia': 'LVA', 'Georgia': 'GEO', 'Armenia': 'ARM', 'Russia': 'RUS', 'Moldova': 'MDA', 'Belarus': 'BLR', 'Kyrgyzstan': 'KGZ', 'Azerbaijan': 'AZE', 'Tajikistan': 'TJK', 'Kazakhstan': 'KAZ', 'Ukraine': 'UKR', 'Turkmenistan': 'TKM', 'Uzbekistan': 'UZB' } def abbrviate_countries(arr): '''Transforms an array of countries into their abbreviated forms''' return [ABBREVIATIONS[name] for name in arr] # + # import minister age and sex data ministers = pd.read_csv('./ministers.csv', encoding="utf-16") ministers.head() # + # clean minister age and sex data and find sex ratios def filter_sex(sex: str) -> pd.Series: '''Filters out ministers by sex''' only_sex = ministers[ministers['sex'] == sex] return only_sex.groupby('country').count()['sex'] only_men = filter_sex('M') only_women = filter_sex('F') # calculate sex ratio and add to dataframe sex_ratio = pd.DataFrame({'men': only_men, 'women': only_women}).fillna(0) sex_ratio['women'] = sex_ratio['women'].astype(int) sex_ratio['total'] = sex_ratio['men'] + sex_ratio['women'] sex_ratio['ratio'] = sex_ratio['men'] / sex_ratio['total'] sex_ratio.columns # + # import and clean WHO BMI data bmi = pd.read_csv('./WHO-BMI.csv', encoding="utf-8", index_col=0) def clean_bmi(row: pd.Series) -> pd.Series: '''Cleans up and numberfies the numerical strings in a row. e.g. "26.5 [26.0-27.1]" to 26.5''' replaced = row.str.replace(r' \[\d{1,2}.?\d{1,2}-\d{1,2}.?\d{1,2}]', '', regex=True) return replaced.astype(float) for r in bmi.columns: bmi[r] = clean_bmi(bmi[r]) # bmi data is easier to deal with reversed bmi = bmi.iloc[:, ::-1] # gets only first columns i.e. 
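# The cleaning step above strips the WHO uncertainty interval from strings such as
# "26.5 [26.0-27.1]" before casting to float. A minimal illustration on a toy Series
# (made-up values, same regex as clean_bmi):
# +
import pandas as pd

toy_col = pd.Series(["26.5 [26.0-27.1]", "24.9 [24.3-25.6]"])
stripped = toy_col.str.replace(r' \[\d{1,2}.?\d{1,2}-\d{1,2}.?\d{1,2}]', '', regex=True)
print(stripped.astype(float).tolist())   # [26.5, 24.9]
# -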
2017 data bmi_2017 = bmi[['2016Both sexes', '2016Male', '2016Female']] bmi_2017.head(15) # - # ## Graphing BMIs over time # There was some concern in the review process that using slightly out of date BMI data would be a problem for my results. The corruption data is all from 2017, while the latest WHO BMI data is from 2016. However, these charts illustrate two things. # 1. BMI is on a steady, and within the past few years, comparable upswing # 2. BMI increases very slowly over time, so 2016's values are likely to be very similar to 2017's. # + # graph male and female BMIs over time male_cols = [el for el in bmi.columns if "Male" in el] female_cols = [el for el in bmi.columns if "Female" in el] both_cols = [el for el in bmi.columns if "Both" in el] def strip_to_year(col): return col[:4] male_bmi_time = bmi.loc[:,male_cols].rename(strip_to_year, axis=1).T female_bmi_time = bmi.loc[:, female_cols].rename( strip_to_year, axis=1).T m_plot = male_bmi_time.plot(legend=False) m_plot.legend(bbox_to_anchor=(0, -0.15, 1, 0), loc=2, ncol=2, mode="expand", borderaxespad=0) m_plot.set_xlabel('Year') m_plot.set_ylabel('Mean Male BMI (WHO)') f_plot = female_bmi_time.plot(legend=False) f_plot.legend(bbox_to_anchor=(0, -0.15, 1, 0), loc=2, ncol=2, mode="expand", borderaxespad=0) f_plot.set_xlabel('Year') f_plot.set_ylabel('Mean Female BMI (WHO)') # - # merge bmi data and minister data bmi_with_ministers = pd.merge( sex_ratio, bmi_2017, left_index=True, right_on='Country') bmi_with_ministers = bmi_with_ministers.rename({'men': 'men_in_gov', 'women': 'women_in_gov', 'total': 'minister_count', 'ratio': 'minister_sex_ratio', 'Country': 'country', '2016Both sexes': 'both_sexes_bmi', '2016Male': 'male_bmi', '2016Female': 'female_bmi'}, axis = 1) bmi_with_ministers.head(15) # + # puts median minister data (taken from Blavatskyy's paper) into dataframe with minister and BMI data median_ministers = { 'Estonia': 28.7, 'Lithuania': 30.3, 'Latvia': 30.7, 'Georgia': 30.9, 'Armenia': 32.1, 'Russia': 32.5, 'Moldova': 32.7, 'Belarus': 32.9, 'Kyrgyzstan': 33.3, 'Azerbaijan': 33.3, 'Tajikistan': 33.6, 'Kazakhstan': 33.8, 'Ukraine': 34.4, 'Turkmenistan': 34.7, 'Uzbekistan': 35.5 } bmi_with_ministers['median_minister_bmi'] = bmi_with_ministers.index.map(median_ministers) bmi_with_ministers.head(15) # + # creates sex-adjusted BMI by weighing the proportion of men and women in leadership men_ratio = bmi_with_ministers['minister_sex_ratio'] women_ratio = 1 - men_ratio men_bmi = bmi_with_ministers['male_bmi'] women_bmi = bmi_with_ministers['female_bmi'] total_num = bmi_with_ministers['minister_count'] adjusted_bmi = ((men_ratio * men_bmi) + (women_ratio * women_bmi)) bmi_with_ministers['adjusted_bmi'] = adjusted_bmi bmi_with_ministers.head(15) # + # regresses adjusted BMI on minister BMI adjusted_minister_country_relation = regress(adjusted_bmi, bmi_with_ministers['median_minister_bmi']) adjusted_minister_country_relation.summary() # + # regresses unadjusted country BMI on minister BMI unadjusted_minister_country_relation = regress( bmi_with_ministers['both_sexes_bmi'], bmi_with_ministers['median_minister_bmi'] ) unadjusted_minister_country_relation.summary() # + # plots adjusted and unadjusted BMI on minister BMI simple_scatter( bmi_with_ministers['median_minister_bmi'], adjusted_bmi, 'Median Estimated Ministers\' BMI', 'Sex-Adjusted Country BMI', 'Sex-Adjusted BMI and Median Minister BMI' ) simple_scatter( bmi_with_ministers['median_minister_bmi'], bmi_with_ministers['both_sexes_bmi'], 'Median Estimated Ministers\' BMI', 'WHO 
Mean BMI 2016', 'Unadjusted BMI and Minister BMI' ) # - # finds difference between mean male and female BMIs in each country bmi_diff = bmi_with_ministers['male_bmi'] - bmi_with_ministers['female_bmi'] print(bmi_diff.mean()) print(bmi_diff) # # Basic Controls for Age # This section is dedicated to controlling the data for the age of the ministers # + # finds mean and median age of ministers in each country # too many Tajik ministers missing ages to use country's data ministers_age = ministers[ministers['country'] != 'Tajikistan'] ministers_age = ministers_age[ministers_age['age'].notna()] grouped_ministers_age = ministers_age.groupby('country') mean_age = grouped_ministers_age['age'].mean() median_age = grouped_ministers_age['age'].median() print(mean_age, median_age) # + # creates dataframe of age data # from UN World Population Prospects 2015 ages_2015 = { 'Azerbaijan': 33.8, 'Armenia': 30.3, 'Georgia': 37.7, 'Kazakhstan': 29.4, 'Kyrgyzstan': 25.1, # 'Tajikistan': 22.0, 'Turkmenistan': 25.6, 'Uzbekistan': 26.2, 'Belarus': 39.5, 'Moldova': 35.6, 'Russia': 38.6, 'Ukraine': 40.0, 'Estonia': 41.6, 'Latvia': 42.6, 'Lithuania': 42.7, } country_ages = pd.Series(ages_2015) age_data = pd.concat([country_ages, mean_age, median_age], axis=1) age_data.set_axis(['mean_country_age', 'mean_minister_age', 'median_minister_age'], axis=1, inplace=True) age_data # + # compare politicians BMI and ages med_minister_bmi = bmi_with_ministers.drop('Tajikistan') print(med_minister_bmi.median_minister_bmi) print(age_data.mean_minister_age) annotate_scatter( med_minister_bmi.median_minister_bmi, age_data.mean_minister_age, abbrviate_countries(age_data['mean_minister_age'].index.values), 0.1, 0.1, 'Median Estimated Minister BMI', 'Mean Minister Age' ) # - regress(age_data['mean_minister_age'].sort_index(), med_minister_bmi.median_minister_bmi.sort_index()).summary() # regresses mean minister age on mean country age mean_age_relationship = regress(age_data['mean_minister_age'], age_data['mean_country_age']) mean_age_relationship.summary() # makes scatter plot of minister age and mean country age annotate_scatter(age_data['mean_minister_age'], age_data['mean_country_age'], abbrviate_countries(age_data['mean_country_age'].index.values), 0.1,0.1, 'Mean Minister Age', 'UN Mean Country Age', 'Mean Age of Ministers and Citizens') corruption = pd.read_csv('corruption.csv').set_index('country') corruption.drop('Tajikistan', inplace=True) corruption annotate_scatter(age_data['mean_minister_age'], corruption['corruption perception index'], abbrviate_countries(age_data['mean_country_age'].index.values), 0.25,0.25, 'Mean Minister Age', 'Corruption Perception Index', 'Mean Age of Ministers and Citizens') annotate_scatter(age_data['mean_minister_age'], corruption['control_of_corruption'], abbrviate_countries(age_data['mean_country_age'].index.values), 0.1,0.01, 'Mean Minister Age', 'Control of Corruption', 'Mean Age of Ministers and Citizens') annotate_scatter(age_data['mean_minister_age'], corruption['IDEA absence of corruption'], abbrviate_countries( age_data['mean_country_age'].index.values), 0.1, 0.01, 'Mean Minister Age', 'IDEA Absence of Corruption', 'Mean Age of Ministers and Citizens') regress(corruption['corruption perception index'].sort_index(), age_data['mean_minister_age'].sort_index()).summary() regress( corruption['control_of_corruption'].sort_index(), age_data['mean_minister_age'].sort_index() ).summary() regress( corruption['IDEA absence of corruption'].sort_index(), age_data['mean_minister_age'].sort_index() 
).summary() # # Reproducibility # This section is dedicated to reproducing Blavatskyy's results with his method # + # get BMI of each minister -- commented out because it takes forever to run # from bmipredictor import BMIPredictor # bmi_predictor = BMIPredictor() # # this is a test prediction # # bmi_predictor.predict( # # 'http://npg.si.edu/sites/default/files/blog_obama_martin_schoeller.jpg') # estimated_bmi = ministers['image'].apply(lambda img : bmi_predictor.predict('./2017-images/' + img)) # estimated_bmi.head(20) # # save BMI to CSV so we don't need to do ML over and over # ministers['estimated_bmi'] = estimated_bmi # ministers.to_csv('ministers2.csv') # + estimated_ministers = pd.read_csv('ministers2.csv') # remove buggy extra index col estimated_ministers = estimated_ministers[['name', 'age', 'sex', 'country', 'image', 'estimated_bmi']] median_bmi = estimated_ministers[['country', 'estimated_bmi']].groupby('country').median().rename({'estimated_bmi': 'median_bmi'}, axis=1) mean_bmis = estimated_ministers[['country', 'estimated_bmi']].groupby('country').mean().rename({'estimated_bmi': 'mean_bmi'}, axis=1) bmis = pd.concat([median_bmi, mean_bmis], axis = 1) bmis['reported_median_bmis'] = pd.Series(median_ministers) bmis[['median_bmi', 'mean_bmi', 'reported_median_bmis']].sort_values('reported_median_bmis').round(1) # - corruption = pd.read_csv('corruption.csv').set_index('country') corruption regress_filtered('control_of_corruption') plot_filtered('control_of_corruption', 'World Bank Control of Corruption') regress_filtered('corruption perception index') plot_filtered('corruption perception index', ylabel='Transparency International Corruption Perception Index') regress_filtered('basel anti money laundering') plot_filtered('basel anti money laundering', ylabel='Basel Anti Money Laundering Index') regress_filtered('IDEA absence of corruption') plot_filtered('IDEA absence of corruption', 'IDEA Absence of Corruption Index') regress_filtered('Index of public integrity') plot_filtered('Index of public integrity', 'Index of Public Integrity') # # Test for difference between women in gov and not # + # bmi_with_ministers mean_f_minister_bmi = estimated_ministers[ministers.sex == 'F'].groupby('country').mean().estimated_bmi mean_f_bmi = bmi['2016Female'] mean_f_bmi.drop('Azerbaijan', inplace=True) # Azerbaijan has no female ministers mean_f_bmi.index = mean_f_minister_bmi.index f_bmi_comparison = pd.concat([mean_f_minister_bmi, bmi['2016Female']], axis=1, sort=True) f_bmi_comparison # - # # Building Standardized BMI # + # get data from (non-comitted) .sav files from the Health in Times of Transition 2010 survey # takes a while to run hitt1 = pd.read_spss('./Data/EAB_2_26 10 14.sav') hitt1_country_abbr = hitt1['COUNTRIES'] hitt1_sex = hitt1['V010'] hitt1_age = hitt1['V011'] hitt1_height = hitt1['V024'] hitt1_weight = hitt1['V025'] hitt1_trimmed = pd.concat([ hitt1_country_abbr, hitt1_sex, hitt1_age, pd.to_numeric(hitt1_height), pd.to_numeric(hitt1_weight), ], axis=1) hitt1_trimmed.rename(columns={ 'COUNTRIES': 'country', 'V010': 'sex', 'V011': 'age', 'V024': 'height', 'V025': 'weight' }, inplace=True) # formula from https://www.cdc.gov/nccdphp/dnpao/growthcharts/training/bmiage/page5_1.html hitt1_trimmed['bmi'] = ( (hitt1_trimmed['weight'] / (hitt1_trimmed['height']**2)) * 10000).round(1) hitt1_trimmed.drop(['height', 'weight'], axis=1, inplace=True) abbrv_transform = { 'MD': 'Moldova', 'BY': 'Belarus', 'RU': 'Russia', 'KG': 'Kyrgyzstan', 'KZ': 'Kazakhstan', 'UA': 'Ukraine', 'AM': 'Armenia', 
'GE': 'Georgia', 'AZ': 'Azerbaijan' } hitt1_trimmed['country'] = hitt1_trimmed.replace({'country': abbrv_transform}) hitt1_trimmed.to_csv('./Data/hitt_trimmed.csv') # - # uses saved cleaned data from above hitt = pd.read_csv('./Data/hitt_trimmed.csv') # remove missing data print(hitt.shape) hitt = hitt.dropna() print(hitt.shape) hitt.head() simple_scatter(hitt['age'], hitt['bmi'], 'Age', 'BMI') # removal of outliers: BMI > 50 hitt = hitt[hitt['bmi'] < 50] simple_scatter(hitt['age'], hitt['bmi'], 'Age', 'BMI', regression_line=False) plt.bar(hitt['age'], height=hitt['bmi']) # density=False would make counts plt.ylabel('BMI') plt.xlabel('Age') # transforms sex into numerical binary variable hitt['is_female'] = (hitt['sex'] == 'Female').astype(int) hitt['agesex'] = hitt['is_female'] * hitt['age'] for el in ['bmi','age', 'is_female']: print(hitt[el].size) # first regression: BMI on age and sex age_sex_X = [hitt['age'], hitt['is_female']] age_sex_X = sm.add_constant(age_sex_X) age_sex_regress = sm.OLS.from_formula(formula='bmi ~ age + is_female', data=hitt).fit() age_sex_regress.summary() # second regression: BMI on age, sex, and interaction age_sex_regress2 = sm.OLS.from_formula( formula='bmi ~ age + is_female + agesex', data=hitt).fit() age_sex_regress2.summary() # + # prepare minister data for regression estimation estimated_ministers_wot = estimated_ministers[estimated_ministers['country'] != 'Tajikistan'] print(estimated_ministers_wot.shape) estimated_ministers_trimmed = estimated_ministers_wot[~np.isnan(estimated_ministers_wot.age)] estimated_ministers_trimmed = estimated_ministers_trimmed[~np.isnan(estimated_ministers_trimmed.age)] estimated_ministers_trimmed['is_female'] = (estimated_ministers_trimmed['sex'] == 'F').astype(int) estimated_ministers_trimmed['agesex'] = estimated_ministers_trimmed['is_female'] * estimated_ministers_trimmed['age'] # + # first regression on just age and sex def first_regression(row) -> int: age = row.age is_female = row.is_female return age_sex_regress.params.Intercept + (age_sex_regress.params.age * age) + (age_sex_regress.params.is_female * is_female) ministers_results = estimated_ministers_trimmed ministers_results['first_reg'] = ministers_results.apply( first_regression, axis=1) ministers_results # + # check results of normalization against corruption measures first_means = ministers_results.groupby('country').first_reg.mean() corruption_minus_tajikstan = corruption.drop('Tajikistan') def first_reg_scatter(measure, ylabel): annotate_scatter( first_means, corruption_minus_tajikstan[measure], abbrviate_countries(corruption_minus_tajikstan.index), x_adjust=0.01, y_adjust=0.01, xlabel="Mean Controlled Minister BMI (First Regression)", ylabel=ylabel ) first_reg_scatter('control_of_corruption', "World Bank Control of Corruption") first_reg_scatter('corruption perception index', "Corruption Perception Index") first_reg_scatter('IDEA absence of corruption', "IDEA Absence of Corruption") bmi_means_2016 = bmi['2016Both sexes'].drop('Tajikistan') annotate_scatter( first_means, bmi_means_2016, abbrviate_countries(first_means.index), 0.01, 0.01, 'Mean Controlled Minister BMI (First Regression)', 'WHO 2016 Mean BMI' ) # - regress(corruption_minus_tajikstan['control_of_corruption'], first_means).summary() regress( corruption_minus_tajikstan['corruption perception index'], first_means).summary() regress(corruption_minus_tajikstan['IDEA absence of corruption'], first_means).summary() regress(bmi_means_2016, first_means).summary() # + # second regression on age, sex, 
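# The per-row arithmetic in first_regression (intercept + coefficient * covariate) is the same
# thing statsmodels computes via predict() on a formula-based fit. A self-contained toy check
# (made-up BMI/age/sex values, not the HITT survey data):
# +
import pandas as pd
import statsmodels.api as sm

toy = pd.DataFrame({'bmi': [24.0, 27.5, 25.1, 29.3],
                    'age': [30, 50, 35, 60],
                    'is_female': [1, 0, 1, 0]})
toy_fit = sm.OLS.from_formula('bmi ~ age + is_female', data=toy).fit()

new_obs = pd.DataFrame({'age': [45], 'is_female': [0]})
manual = (toy_fit.params.Intercept
          + toy_fit.params.age * 45
          + toy_fit.params.is_female * 0)
print(manual, toy_fit.predict(new_obs)[0])   # the two values should agree
# -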
age^2, age*sex, and age^2*sex def second_regression(row) -> int: return age_sex_regress2.params.Intercept + (age_sex_regress2.params.is_female * row.is_female) + (age_sex_regress2.params.age * row.age) + (age_sex_regress2.params.agesex * row.agesex) ministers_results['second_reg'] = ministers_results.apply( second_regression, axis=1) ministers_results # + # graph estimated BMI against regressed BMI # simple_scatter(ministers_results.estimated_bmi, ministers_results.first_reg, 'Estimated Photographic BMI', 'Controlled BMI (First Regression)', 'Estimated BMI and First Regression Results') # simple_scatter(ministers_results.estimated_bmi, ministers_results.second_reg, 'Estimated Photographic BMI', 'Controlled BMI (Second Regression)', 'Estimated BMI and Second Regression Results') fig1, ax1 = plt.subplots() colors={0: 'blue', 1: 'red'} ax1.scatter(ministers_results.estimated_bmi, ministers_results.first_reg, c=ministers_results.is_female.map(colors)) ax1.set_xlabel('Estimated Photographic BMI') ax1.set_ylabel('Controlled BMI (First Regression)') m1, b1 = np.polyfit(ministers_results.estimated_bmi, ministers_results.first_reg, 1) ax1.plot(ministers_results.estimated_bmi, m1 * ministers_results.estimated_bmi+b1, color='orange') fig1, ax2 = plt.subplots() ax2.scatter(ministers_results.estimated_bmi, ministers_results.second_reg, c=ministers_results.is_female.map(colors)) m2, b2 = np.polyfit(ministers_results.estimated_bmi, ministers_results.second_reg, 1) ax2.plot(ministers_results.estimated_bmi, m2 * ministers_results.estimated_bmi+b2, color='orange') ax2.set_xlabel('Estimated Photographic BMI') ax2.set_ylabel('Controlled BMI (Second Regression)') # + second_means = ministers_results.groupby('country').second_reg.mean() corruption_minus_tajikstan = corruption.drop('Tajikistan') def second_reg_scatter(measure, ylabel): annotate_scatter( second_means, corruption_minus_tajikstan[measure], abbrviate_countries(corruption_minus_tajikstan.index), x_adjust=0.01, y_adjust=0.01, xlabel="Mean Controlled Minister BMI (Second Regression)", ylabel=ylabel ) second_reg_scatter('control_of_corruption', "World Bank Control of Corruption") second_reg_scatter('corruption perception index', "Corruption Perception Index") second_reg_scatter('IDEA absence of corruption', "IDEA Absence of Corruption") bmi_means_2016 = bmi['2016Both sexes'].drop('Tajikistan') annotate_scatter( second_means, bmi_means_2016, abbrviate_countries(first_means.index), 0.01, 0.01, 'Mean Controlled Minister BMI (Second Regression)', 'WHO 2016 Mean BMI' ) # - regress( corruption_minus_tajikstan['control_of_corruption'], second_means).summary() regress( corruption_minus_tajikstan['corruption perception index'], second_means).summary() regress( corruption_minus_tajikstan['IDEA absence of corruption'], second_means).summary() regress(bmi_means_2016, second_means).summary() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="xowVGIa68DSw" outputId="dceeb2f0-0022-4cbd-c38f-e859dd425fea" colab={"base_uri": "https://localhost:8080/"} #importing the libraries import keras from keras.models import Sequential from keras.layers import Dense, Activation, Dropout, Flatten,\ Conv2D, MaxPooling2D from keras.layers.normalization import BatchNormalization import numpy as np import tflearn.datasets.oxflower17 as oxflower17 import 
tensorflow as tf np.random.seed(1000) # + id="nXijRQhVoF-N" # !pip install -r requirements.txt # + id="f1zWKuOQpE1A" outputId="962052b7-4723-4ee8-d558-0f0d7e68d841" colab={"base_uri": "https://localhost:8080/"} # !pip install tflearn # + colab={"base_uri": "https://localhost:8080/"} id="K0p1K8dV8Mi_" outputId="959351f4-fba7-43ca-c7bf-d67d7a7ca5a6" #preparing the data x, y = oxflower17.load_data(one_hot=True) # + id="AlTY6R0O8lqU" #Model class Alexnet: def __init__(self,batch_size,epoch,validation_split,x,y): self.batch_size = batch_size self.epoch = epoch self.validation_split = validation_split self.x = x self.y = y def forward(): model = Sequential() model.add(Conv2D(96,input_shape=(224,224,3),kernel_size=(11,11),strides=(4,4),padding='valid'))# first conv model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(3,3),strides=(2,2),padding='valid')) model.add(BatchNormalization()) model.add(Conv2D(256,kernel_size=(5,5),padding='same'))# second conv model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(3,3),strides=(1,1))) model.add(BatchNormalization()) model.add(Conv2D(384,kernel_size=(3,3),padding="same",strides=(1,1)))#third conv model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(3,3),padding='valid')) model.add(BatchNormalization()) model.add(Conv2D(256,kernel_size=(3,3),padding='valid')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(3,3),strides=(2,2),padding='valid')) model.add(BatchNormalization()) model.add(Flatten()) model.add(Dense(4096)) model.add(Activation('relu')) model.add(Dense(4096)) model.add(Activation('relu')) model.add(Dense(17)) model.add(Activation('softmax')) return model def training(x,y,batch_size,epochs,validation_split,model): model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=['accuracy']) model.fit(x,y, batch_size=batch_size, epochs=epochs, verbose=1,validation_split=validation_split, shuffle=True) return model # + colab={"base_uri": "https://localhost:8080/"} id="2fzBLAzQCO07" outputId="af020c0a-9f10-48bf-eb6b-238a62570711" #printing the summary of the model model = Alexnet.forward() model.summary() # + colab={"base_uri": "https://localhost:8080/"} id="b4O_tIVaCX9f" outputId="f927be63-345b-4578-f118-d97156e69574" trained_model = Alexnet.training(x = x,y = y,batch_size=64,epochs= 25,validation_split=0.25,model=model) # + id="x3oPfsTvG7lN" #saving to disk trained_model.save('Alexnet.h5') # + id="wSkmTpYWHoIr" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import math import time import numpy as np import pandas as pd from matplotlib import pyplot as plt from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error, classification_report, accuracy_score from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.layers.core import Dense, Activation, Dropout data = pd.read_csv("../Data/DIS.csv", header=None, index_col=None) #Reading the dataset data.head(4) # Checking the dataset head all_y = data[5].values # Close price all_y.shape dataset = all_y.reshape(-1, 1) dataset.shape print(all_y[30]) print(dataset[30]) # normalize the dataset between 1 and 0 scaler = MinMaxScaler(feature_range=(0, 1)) dataset = scaler.fit_transform(dataset) # split into train and test sets, 50% test data, 50% training data train_size = int(len(dataset) * 0.5) test_size 
= len(dataset) - train_size train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:] # convert an array of values into a dataset matrix def create_dataset(dataset, look=1): dataX, dataY = [], [] for i in range(len(dataset)-look-1): a = dataset[i:(i+look), 0] dataX.append(a) dataY.append(dataset[i + look, 0]) return np.array(dataX), np.array(dataY) # reshape into X=t and Y=t+1, timestep 240 look = 240 trainX, trainY = create_dataset(train, look) testX, testY = create_dataset(test, look) print(trainX[0][0]) print(dataset[0]) # reshape input to be [samples, time steps, features] trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1])) # create and fit the LSTM network, optimizer=adam, 25 neurons, dropout 0.1 model = Sequential() model.add(LSTM(25, input_shape=(1, look))) model.add(Dropout(0.1)) model.add(Dense(1)) model.compile(loss='mse', optimizer='adam') model.fit(trainX, trainY, epochs=10, batch_size=240, verbose=1) # make predictions trainPredict = model.predict(trainX) testPredict = model.predict(testX) # invert predictions trainPredict = scaler.inverse_transform(trainPredict) trainY = scaler.inverse_transform([trainY]) testPredict = scaler.inverse_transform(testPredict) testY = scaler.inverse_transform([testY]) # calculate root mean squared error trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0])) print('Train Score: %.2f RMSE' % (trainScore)) testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0])) print('Test Score: %.2f RMSE' % (testScore)) # shift train predictions for plotting trainPredictPlot = np.empty_like(dataset) trainPredictPlot[:, :] = np.nan trainPredictPlot[look:len(trainPredict)+look, :] = trainPredict # shift test predictions for plotting testPredictPlot = np.empty_like(dataset) testPredictPlot[:, :] = np.nan testPredictPlot[len(trainPredict)+(look*2)+1:len(dataset)-1, :] = testPredict # plot baseline and predictions plt.plot(scaler.inverse_transform(dataset)) plt.plot(trainPredictPlot) print('testPrices:') testPrices=scaler.inverse_transform(dataset[test_size+look:]) print('testPredictions:') print(testPredict) # plot the actual price, prediction in test data=red line, actual price=blue line plt.plot(testPredictPlot) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import math from sklearn.model_selection import train_test_split from collections import Counter trainset = pd.read_csv("E:\\programs\\python\\NLP\\trainset.csv", delimiter=',', encoding='utf-8') validationset = pd.read_csv("E:\\programs\\python\\NLP\\validationset.csv", delimiter=',', encoding='utf-8') # display complete contents of a dataframe without any kind of truncation pd.set_option('display.max_rows',None) pd.set_option('display.max_columns',None) pd.set_option('display.width',None) pd.set_option('display.max_colwidth',-1) print(trainset.head(10)) # print 10 first rows train_len = trainset['Unnamed: 0'].count() #the first row is the topics so we got 700000 data valid_len = validationset['Unnamed: 0'].count() #the first row is the topics so we got 147635 data print(train_len) #printing the data rows in trainset print(valid_len) #printing the data rows in validationset number_per_cat = trainset.groupby("cat1")["id"].count() #calculating the number of items in 
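# The per-category tally below is the usual pandas groupby-then-count pattern; on a toy frame
# with made-up ids and categories it behaves like this:
# +
import pandas as pd

toy = pd.DataFrame({"id": [1, 2, 3, 4],
                    "cat1": ["personal", "vehicles", "personal", "for-the-home"]})
print(toy.groupby("cat1")["id"].count())
# cat1
# for-the-home    1
# personal        2
# vehicles        1
# Name: id, dtype: int64
# -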
each category number_per_cat arr = pd.read_csv("E:\\programs\\python\\NLP\\projects\\persian.csv", sep="\n", encoding='utf-8') stop_words = arr.values.tolist() #converting the stop words into a list stop_words = [item for sublist in stop_words for item in sublist] #converting stop words into a 1d list descriptions = trainset['desc'] titles = trainset['title'] descriptions.head(3) #printing the first 3 rows of description sample = descriptions.head(5) # first 5 rows of description # + def preprocessing(text): """ This method takes an unprocessed text and removes its stop words, and punctuations. It takes a list of simple texts and returns a list containing words that are processed for each text. text: the unprocessed text cleared_text: the processed text """ processed_description = [] #the processed descriptions would go here cleared_text = [] #a list to save all of the processed descriptions for i in range(len(text)): desc = text[i] words = desc.split(" ") # print(len(words)) for j in range(len(words)): if words[j] not in stop_words: processed_description.append(words[j]) # print(words) # print("======================================") # print(len(processed_description)) # print(processed_description) cleared_text.append(processed_description) # print(cleared_text) processed_description = [] return cleared_text # - uncleared_text = [] #an example to show how the preprocessing works for i in range(len(sample)): desc = sample[i] words = desc.split(" ") uncleared_text.append(words) print(uncleared_text) #unprocessed text for the first five rows print("========================") cleared_text = preprocessing(sample) print(cleared_text) #processed text for the first five rows cleared_descriptions = preprocessing(descriptions.head(1000)) #preprocessed descriptions cleared_titles = preprocessing(titles.head(1000)) #preprocessed titles number_of_texts = len(cleared_titles) #total number of texts in our corpus print(number_of_texts) def tf_idf(text): """ This method calculates the tf_idf which is tf(word, text) * idf(word). With tf_idf we represent texts in numerical forms text: the text in which we calculate the tf_idf for. returns a vector containing the tf_idf for all of the words in the list text """ vector = [] length_of_text = len(text) #the number of words in our text for i in range(length_of_text): word = text[i] #put the ith word of text into the variable word vector.append(tf(word, text) * idf(word, cleared_descriptions)) return vector def tf(word, text): """ This method calculates the tf which is the number of times the word occurs in a text on the number of words in the text. word: the word we are calculating tf for text: the text in which we calculate the tf for word """ number_of_occurances_in_text = 0 #number of times the word occurs in the text number_of_words_in_text = len(text) #total number of words in the text for i in range(number_of_words_in_text): #loop over the list text and count the number of occurances of the word if text[i] == word: number_of_occurances_in_text += 1 return (number_of_occurances_in_text/number_of_words_in_text) def idf(word, docs): """ This method calculates the idf which is the logarithm (number of text in the corpus/number of texts where the word occurs). 
word: the word we are calculating idf for docs: all of our texts or documents """ number_of_texts = len(docs) #total number of texts in our corpus number_of_texts_where_the_word_occurs = 0 #number of texts where the word occurs in the corpus for i in range(number_of_texts): #loop over all the documents or texts in the corpus and count the number of occurances of the word in corpus if word in docs[i]: number_of_texts_where_the_word_occurs += 1 return (math.log((1 + number_of_texts)/(1 + number_of_texts_where_the_word_occurs)) + 1) #applying smoothing print(cleared_descriptions[2]) print(tf('اصل', cleared_descriptions[2])) print(idf('اصل',cleared_descriptions)) non_numerical_title = cleared_titles # the list of processed words from the title to feed our model with non_numerical_description = cleared_descriptions # the list of processed words from the descriptions to feed our model with #Y is our actual values for cat1 non_numerical_Y = trainset['cat1'].head(1000) # we want to predict the cat1 with the title and description that we have for each text def categorizer(category): """ This method will get the list of words for categories and assigns some integer values to them instead. The integer values are as the following: businesses ==> 0 electronic-devices ==> 1 for-the-home ==> 2 leisure-hobbies ==>3 personal ==> 4 vehicles ==> 5 """ classes = [] category_length = len(category) for i in range(category_length): if category[i] == "businesses": classes.append(0) elif category[i] == "electronic-devices": classes.append(1) elif category[i] == "for-the-home": classes.append(2) elif category[i] == "leisure-hobbies": classes.append(3) elif category[i] == "personal": classes.append(4) else: classes.append(5) return classes #in here we are going to create a simple x for each text sample def get_numerical_features(non_numerical_title, non_numerical_description): """ This method returns the numerical features for the texts. One simple X for each text sample """ numerical_title = [] numerical_description = [] for i in range(len(non_numerical_title)): #calculating a simple X for each text sample numerical_title.append(np.mean(tf_idf(non_numerical_title[i]))) numerical_description.append(np.mean(tf_idf(non_numerical_description[i]))) return numerical_title, numerical_description #A list containing numerical title and description for each text sample in the training set titles, descriptions = get_numerical_features(non_numerical_title, non_numerical_description) Y = categorizer(non_numerical_Y) #A list containing an integer for the category in cat1 for i in range(5): #printing first 5 rows of X and their Corresponding Y print(titles[i]," ", descriptions[i]," " ,Y[i]) for i in range(5): #show some of the cat1 and their classes print(non_numerical_Y[i]," " ,Y[i]) # + # Dividing the trainset into training set and test set. We need to create a test set out of our training # set in order to update our model. In each itertion, we create a new test set out of our trainset and will try to predict them # to see that is our model is good enough to go against the validationset. 
# After the model is created, we go against the validation set with y = mx + b # train_test_split, Splits arrays or matrices into random train and test subsets # X_train : the training data out of our training set, used to train our model in each iteration # X_test : the desired outputs for the X_train, used to train our model in each iteration # Y_train : part of the trainset that we seperate and use to predict Y_test to see if our model is good enough. If the model still # is not good enough, then we update m and b of the linear function to predict better in the next iteration # Y_test : the desired outputs for the Y_train. we compare these with the prediction of our model based on Y_train to calculate accuracy. # if the accuracy is good enough, then the model is ready and we're done! #the following are all lists title_train, title_test, description_train, description_test, Y_train, Y_test = train_test_split( titles, descriptions, Y, test_size=0.2, random_state=42) # - for i in range(5): print(title_train[i], title_test[i], description_train[i], description_test[i], Y_train[i], Y_test[i]) print(len(title_train)," ",len(title_test)," " ,len(description_train), " ",len(description_test), " ",len(Y_train), " ",len(Y_test)) def calculate_businesses(classes): """ This method returns a list which contains one as an element, whenever the cat1 is business for that data and is zero otherwise """ businesses = [] classes_length = len(classes) for i in range(classes_length): #loop over the classes and if 0 which means the cat1 is businesses, append 1. append 0 otherwise if classes[i] == 0: businesses.append(1) else: businesses.append(0) return businesses def calculate_electronic_devices(classes): """ This method returns a list which contains one as an element, whenever the cat1 is electronic-devices for that data and is zero otherwise """ electronics = [] classes_length = len(classes) for i in range(classes_length): #loop over the classes and if 0 which means the cat1 is businesses, append 1. append 0 otherwise if classes[i] == 1: electronics.append(1) else: electronics.append(0) return electronics def calculate_for_the_home(classes): """ This method returns a list which contains one as an element, whenever the cat1 is for-the-home for that data and is zero otherwise """ home = [] classes_length = len(classes) for i in range(classes_length): #loop over the classes and if 0 which means the cat1 is businesses, append 1. append 0 otherwise if classes[i] == 2: home.append(1) else: home.append(0) return home def calculate_leisure_hobbies(classes): """ This method returns a list which contains one as an element, whenever the cat1 is leisure-hobbies for that data and is zero otherwise """ hobbies = [] classes_length = len(classes) for i in range(classes_length): #loop over the classes and if 0 which means the cat1 is businesses, append 1. append 0 otherwise if classes[i] == 3: hobbies.append(1) else: hobbies.append(0) return hobbies def calculate_personal(classes): """ This method returns a list which contains one as an element, whenever the cat1 is personal for that data and is zero otherwise """ personal = [] classes_length = len(classes) for i in range(classes_length): #loop over the classes and if 0 which means the cat1 is businesses, append 1. 
append 0 otherwise if classes[i] == 4: personal.append(1) else: personal.append(0) return personal def calculate_vehicles(classes): """ This method returns a list which contains one as an element, whenever the cat1 is vehicles for that data and is zero otherwise """ vehicles = [] classes_length = len(classes) for i in range(classes_length): #loop over the classes and if 0 which means the cat1 is businesses, append 1. append 0 otherwise if classes[i] == 5: vehicles.append(1) else: vehicles.append(0) return vehicles # These are the desired outputs for the entire trainset y0 = calculate_businesses(Y_train) #y0 is a list which is one when the cat1 is businesses and zero otherwise y1 = calculate_electronic_devices(Y_train) #y1 is a list which is one when the cat1 is electronic-devices and zero otherwise y2 = calculate_for_the_home(Y_train) #y2 is a list which is one when the cat1 is for-the-home and zero otherwise y3 = calculate_leisure_hobbies(Y_train) #y3 is a list which is one when the cat1 is leisure-hobbies and zero otherwise y4 = calculate_personal(Y_train) #y4 is a list which is one when the cat1 is personal and zero otherwise y5 = calculate_vehicles(Y_train) #y5 is a list which is one when the cat1 is vehicles and zero otherwise for i in range(5): #printing first 5 rows of classes and their respective values in y0 to y5 print(Y_train[i]," " , y0[i], y1[i], y2[i], y3[i], y4[i], y5[i]) # Thes are the desired outputs only for the train part of the trainset in the 6 class of categories #We need these values in order to train our model to predict the X values in the validation set Y_train0 = calculate_businesses(Y_train) #o0 is a list which is one when the cat1 in the Y_train is businesses and zero otherwise Y_train1 = calculate_electronic_devices(Y_train) #o1 is a list which is one when the cat1 in the Y_train is electronic-devices and zero otherwise Y_train2 = calculate_for_the_home(Y_train) #o2 is a list which is one when the cat1 in the Y_train is for-the-home and zero otherwise Y_train3 = calculate_leisure_hobbies(Y_train) #o3 is a list which is one when the cat1 in the Y_train is leisure-hobbies and zero otherwise Y_train4 = calculate_personal(Y_train) #o4 is a list which is one when the cat1 in the Y_train is personal and zero otherwise Y_train5 = calculate_vehicles(Y_train) #o5 is a list which is one when the cat1 in the Y_train is vehicles and zero otherwise for i in range(5): #printing first 5 rows of classes_of_Y_train and their respective values in Y_train0 to Y_train5 print(Y_train[i]," " , Y_train0[i], Y_train1[i], Y_train2[i], Y_train3[i], Y_train4[i], Y_train5[i]) # Implementing logistic regression:
    # Logistic Regression is a Machine Learning classification algorithm that is used to predict the probability of a categorical dependent variable. # + # Initializing logistic regression by determining the features and weights feature = [] a = [] for i in range(len(title_train)): a.append(title_train[i]) a.append(description_train[i]) feature.append(a) a = [] w0 = np.random.uniform(low = 0, high = 1) #initializing the wight0 with a random uniform real number w1 = np.random.uniform(low = 0, high = 1) #initializing the wight1 with a random uniform real number w2 = np.random.uniform(low = 0, high = 1) #initializing the wight2 with a random uniform real number weights = np.array([w1, w2]) features = np.array(feature) #features is a 2d numpy array that has title_train as first column and description_train as the second # z = w0 + w1*title + w2*description for i in range(5): #printing some features and their corresponding labels print(features[i]) print(features.shape) print(weights.shape) z = np.dot(features, weights) print(len(z)) # - # Initializing logistic regression for businesses label = [] for i in range(len(title_train)): label.append(Y_train0[i]) #businesses actual_business_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_business_labels.shape) # Initializing logistic regression for electronic_devices label = [] for i in range(len(title_train)): label.append(Y_train1[i]) #electronic_devices actual_electronic_devices_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_electronic_devices_labels.shape) # Initializing logistic regression for for_the_home label = [] for i in range(len(title_train)): label.append(Y_train2[i]) #for_the_home actual_for_the_home_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_for_the_home_labels.shape) # Initializing logistic regression for leisure_hobbies label = [] for i in range(len(title_train)): label.append(Y_train3[i]) #leisure_hobbies actual_leisure_hobbies_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_leisure_hobbies_labels.shape) # Initializing logistic regression for personal label = [] for i in range(len(title_train)): label.append(Y_train4[i]) #personal actual_personal_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_personal_labels.shape) # Initializing logistic regression for vehicles label = [] for i in range(len(title_train)): label.append(Y_train5[i]) #vehicles actual_vehicles_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_vehicles_labels.shape) def sigmoid(z): """ Sigmoid function to map calculated value to a probablity """ return 1 / (1 + np.exp(-z)) #sigmoid(z) = 1/(1 + e^(-z)) def predict(features, weights): ''' Returns 1D array of probabilities that the class label == 1 ''' z = np.dot(features, weights) return sigmoid(z) # + # print(predict(features, weights)) # print(np.log(predict(features, weights))) # # print(np.log(1 - predict(features, weights))) # print(labels) # - def cost_function(features, labels, weights): ''' Using Mean Absolute Error Features:(400,2) Labels: (400,1) Weights:(2,1) Returns 1D matrix of predictions Cost = (labels*log(predictions) + (1-labels)*log(1-predictions) ) / len(labels) ''' observations = len(labels) predictions = predict(features, weights) #Take the error when label=1, If label=0, the first side 
cancels out. class1_cost = -labels*np.log(predictions) #Take the error when label=0, If label=1, the second side cancels out. class2_cost = (1-labels)*np.log(1-predictions) #a negative number #Take the sum of both costs cost = class1_cost - class2_cost #a positive number #Take the average cost cost = cost.sum() / observations return cost def update_weights(features, labels, weights, lr): ''' Vectorized Gradient Descent Features:(200, 3) Labels: (200, 1) Weights:(3, 1) ''' N = len(features) #1 - Get Predictions predictions = predict(features, weights) #2 Transpose features from (200, 3) to (3, 200) # So we can multiply w the (200,1) cost matrix. # Returns a (3,1) matrix holding 3 partial derivatives -- # one for each feature -- representing the aggregate # slope of the cost function across all observations gradient = np.dot(features.T, predictions - labels) #3 Take the average cost derivative for each feature gradient /= N #4 - Multiply the gradient by our learning rate gradient *= lr #5 - Subtract from our weights to minimize cost weights -= gradient return weights def decision_boundary(prob): return 1 if prob >= .5 else 0 def classify(predictions): ''' input - N element array of predictions between 0 and 1 output - N element array of 0s (False) and 1s (True) ''' predicted_labels = [] for i in range(len(predictions)): predicted_labels.append(decision_boundary(predictions[i])) return predicted_labels def train(features, labels, weights, learning_rate, iters): cost_history = [] for i in range(iters): weights = update_weights(features, labels, weights, learning_rate) #Calculate error for auditing purposes cost = cost_function(features, labels, weights) cost_history.append(cost) # Log Progress if i % 1000 == 0: print ("iter: ",str(i) , " cost: ",str(cost)) return weights, cost_history def accuracy(predicted_labels, actual_labels): diff = predicted_labels - actual_labels return 1.0 - (float(np.count_nonzero(diff)) / len(diff)) # + # predictions = predict(features, weights) # predicted_labels = classify(predictions) #labels predicted by the classifier # print(predicted_labels) # print(actual_business_labels) # - # TRAINING THE MODELS TO PREDICT EACH CATEGORY (y = w1 x title + w2 x description) business_weights, business_cost_history = train(features, actual_business_labels, weights, 0.35, 10000) business_predictions = predict(features, weights) business_predicted_labels = classify(business_predictions) #labels predicted by the classifier electronic_weights, electronic_cost_history = train(features, actual_electronic_devices_labels, weights, 0.35, 10000) electronic_predictions = predict(features, weights) electronic_predicted_labels = classify(electronic_predictions) #labels predicted by the classifier for_the_home_weights, for_the_home_cost_history = train(features, actual_for_the_home_labels, weights, 0.35, 10000) for_the_home_predictions = predict(features, weights) for_the_home_predicted_labels = classify(for_the_home_predictions) #labels predicted by the classifier hobbies_weights, hobbies_cost_history = train(features, actual_leisure_hobbies_labels, weights, 0.35, 10000) hobbies_predictions = predict(features, weights) hobbies_predicted_labels = classify(hobbies_predictions) #labels predicted by the classifier personal_weights, personal_cost_history = train(features, actual_personal_labels, weights, 0.35, 10000) personal_predictions = predict(features, weights) personal_predicted_labels = classify(personal_predictions) #labels predicted by the classifier vehicle_weights, 
vehicle_cost_history = train(features, actual_vehicles_labels, weights, 0.35, 10000) vehicle_predictions = predict(features, weights) vehicle_predicted_labels = classify(vehicle_predictions) #labels predicted by the classifier # Accuracies For Each Model print("Business Model:",accuracy(business_predicted_labels, actual_business_labels),"\n", "Electronic Model:", accuracy(electronic_predicted_labels, actual_electronic_devices_labels),"\n", "For-the-home Model:", accuracy(for_the_home_predicted_labels, actual_for_the_home_labels),"\n", "Hobbies Model:", accuracy(hobbies_predicted_labels, actual_leisure_hobbies_labels),"\n", "Personal Model:", accuracy(personal_predicted_labels, actual_personal_labels),"\n", "Vehicle Model:", accuracy(vehicle_predicted_labels, actual_vehicles_labels)) def softmax(x): """Using softmax to assign a class to each text sample""" e_x = np.exp(x - np.max(x)) return e_x / e_x.sum() # Assigning values to each text sample using predictions acquired for each model and softmax function print(business_predictions[0]) print(electronic_predictions[0]) print(for_the_home_predictions[0]) print(hobbies_predictions[0]) print(personal_predictions[0]) print(vehicle_predictions[0]) first_prediction = [business_predictions[0], electronic_predictions[0], for_the_home_predictions[0], hobbies_predictions[0], personal_predictions[0], vehicle_predictions[0]] probs = softmax(first_prediction).tolist() print(probs) print(probs.index(max(probs))) # print(softmax(first_prediction).sum()) def multiclass_predictor(business_predictions, electronic_predictions, for_the_home_predictions, hobbies_predictions, personal_predictions, vehicle_predictions): """ In this method, given the predictions done by each binary classifier, we assign a class to each text sample """ predicted_categories = [] for i in range(len(business_predictions)): prediction = [business_predictions[i], electronic_predictions[i], for_the_home_predictions[i], hobbies_predictions[i], personal_predictions[i], vehicle_predictions[i]] #list containing predictions probs = softmax(prediction).tolist() # a list of 6 values which each element rpresents the probability for that class predicted_categories.append(probs.index(max(probs))) #assigning the maximum probability as category return predicted_categories final_predictions = multiclass_predictor(business_predictions, electronic_predictions, for_the_home_predictions, hobbies_predictions, personal_predictions, vehicle_predictions) # Calculating the accuracy of the final model final_predictions = np.array(final_predictions) Y_train = np.array(Y_train) final_model_accuracy = accuracy(final_predictions, Y_train) print(final_model_accuracy) for i in range(10): print(Y_train[i], final_predictions[i]) def calculate_cross_entropy_loss_function(predicted_y, actual_y): """ cross entropy loss function calculates the distance between predicted output, and true output helping us to update our weights and threshold, to have better predictions. predicted_y : the predicted output actual_y : the actual output """ return -(actual_y * math.log(predicted_y) + (1 - actual_y) * math.log(1 - predicted_y)) def gradient_descent(sigmoid, numerical_X_train, actual_y): """ gradient descent algorithm, helps updating the wights. 
sigmoid : the predicted y, more precisely, the probability of being in the class numerical_X_train : the input data actual_y : the actual output """ derivatives = [] for i in range(len(numerical_X_train)): corresponding_input_value = numerical_X_train[i] derivatives.append((sigmoid - actual_y) * corresponding_input_value) return derivatives def update_weights(w, b, X, Y, learning_rate, previous_cost): """ this method is used to update the weights and the threshold w : the list of weights b : the threshold X : the input data, comes from a sample text learning_rate : the rate in which we wish to change our weights and threshold previous_cost : the cost before the updating """ latest_w = w latest_b = b w_deriv = 0 #derivative of the weights b_deriv = 0 #derivative of the threshold N = len(X) #the length of the input data for i in range(N): # Calculate partial derivatives # -2x(y - (wx + b)) w_deriv += -2*X[i] * (Y - (w[i]*X[i] + b)) # We subtract because the derivatives point in direction of steepest ascent w[i] -= (w_deriv / float(N)) * learning_rate # -2(y - (wx + b)) b_deriv += -2*(Y - (np.mean(w)*X[0] + b)) b -= (b_deriv / float(N)) * learning_rate z = weighted_sum_of_the_evidence_for_class(X, w, b) #calculating the weighted sum of the evidence for class probability_of_being_in_class = sigmoid(z) #calculating the probability of being in that specific class new_cost = calculate_cross_entropy_loss_function(probability_of_being_in_class, Y) #calculating the cost if(new_cost > previous_cost): w, b = latest_w, latest_b return w, b # Calculating Scores def calculate_true_positive(y_true, y_pred, category): """ This method calculates the number of correct positive predictions for a specific class y_true : the list containing actual outputs y_pred : the list containing predicted outputs """ true_positives = 0 length_of_data = len(y_true) #loop over the actual outputs and if the category in y_pred was also that specific class, #this means that the model correctly predicted positively for i in range(length_of_data): if y_true[i] == category: if y_pred[i] == category: true_positives += 1 return true_positives def calculate_true_negative(y_true, y_pred, category): """ This method calculates the number of correct negative predictions for a specific class y_true : the list containing actual outputs y_pred : the list containing predicted outputs """ true_negatives = 0 length_of_data = len(y_true) #loop over the actual outputs and if the category was not that specific class, and also the y_pred was different #this means that the model correctly predicted negatively for i in range(length_of_data): if y_true[i] != category: if y_pred[i] != category: true_negatives += 1 return true_negatives def calculate_false_positive(y_true, y_pred, category): """ This method calculates the number of false positive predictions for a specific class y_true : the list containing actual outputs y_pred : the list containing predicted outputs """ false_positives = 0 length_of_data = len(y_true) #loop over the predicted outputs; if the predicted output was category but the actual output (y_true) was not #this means that the model falsely predicted positively for i in range(length_of_data): if y_pred[i] == category: if y_true[i] != category: false_positives += 1 return false_positives def calculate_false_negative(y_true, y_pred, category): """ This method calculates the number of false negative predictions for a specific class y_true : the list containing actual outputs y_pred : the list containing predicted outputs """ 
false_negatives = 0 length_of_data = len(y_true) #loop over the actual outputs and if the category was that specific class, and also the y_pred was different #this means that the model falsely predicted negatively for i in range(length_of_data): if y_true[i] == category: if y_pred[i] != category: false_negatives += 1 return false_negatives def calculate_accuracy(true_positive, true_negative, false_positive, false_negative): """ This method calculates the accuracy of the model for a specific class true_positive : the samples that were positive and were predicted correctly true_negative : the samples that were negative and were predicted correctly false_positive : the samples that were negative and were falsely predicted as positive false_negative : the samples that were positive and were falsely predicted as negative """ accuracy = 0.0 accuracy = round((true_positive + true_negative)/(true_positive + true_negative + false_positive + false_negative), 3) return accuracy def calculate_percision(true_positive, false_positive): """ This method calculates the percision of the model for a specific class true_positive : the samples that were positive and were predicted correctly false_positive : the samples that were negative and were falsely predicted as positive """ percision = 0.0 percision = round(true_positive/(true_positive + false_positive) , 3) return percision def calculate_recall(true_positive, false_negative): """ This method calculates the recall of the model for a specific class true_positive : the samples that were positive and were predicted correctly false_negative : the samples that were positive and were falsely predicted as negative """ recall = 0.0 recall = round(true_positive/(true_positive + false_negative) , 3) return recall def calculate_f1_score(recall, percision): """ This method calculates the f1 score of the model for a specific class percision : the percision of the model for that class recall : the recall of the model for that class """ f1_score = 0.0 f1_score = round((2 * recall * percision)/(recall + percision) , 3) return f1_score def calculate_support(y_true, category): """ This method calculates the number of samples belong to a specific category in the y_true y_true : the list containing actual outputs category : the category that we are counting the number of """ count = 0 length_of_data = len(y_true) for i in range(length_of_data): if y_true[i] == category: count += 1 return count def calculate_macro_average(f1_arr, percision_arr, recall_arr): """ An arithmetic mean of per-class F1, percision, and recall f1_arr : a list containing f1_scores per class percision_arr : a list containing percision per class recall_arr : a list containing recall per class """ macro_averaged_f1 = round(np.mean(f1_arr), 3) macro_averaged_percision = round(np.mean(percision_arr), 3) macro_averaged_recall = round(np.mean(recall_arr), 3) return macro_averaged_f1, macro_averaged_percision, macro_averaged_recall def calculate_weighted_average(f1_arr, percision_arr, recall_arr, y_true): """ An arithmetic mean of per-class F1, percision, and recall, but also weights the score of each class by the number of samplesfrom that class. This method works well based on the assumption that the f1_arr, percision_arr, and recall_arr are given based on the alphabetic sort of their categories. 
f1_arr : a list containing f1_scores per class percision_arr : a list containing percision per class recall_arr : a list containing recall per class y_true : the list containing actual outputs """ samples_length = len(y_true) counts = [] #a list containing the number of samples in each category myset = set(y_true) #converting to set categories = list(myset) #categories sorted_categories = sorted(categories) # print(sorted_categories) for i in range(len(categories)): counts.append(y_true.count(sorted_categories[i])) # print(counts) f1 = 0 prec = 0 rec = 0 # for i in range(len(f1_arr)): # print(f1_arr[i], " ", counts[i]) for i in range(len(f1_arr)): f1 += (f1_arr[i] * counts[i]) prec += (percision_arr[i] * counts[i]) rec += (recall_arr[i] * counts[i]) weighted_f1 = round(f1/samples_length, 3) weighted_percision = round(prec/samples_length, 3) weighted_recall = round(rec/samples_length, 3) return weighted_f1, weighted_percision, weighted_recall # + def calculate_micro_average(confusion_matrix): """ The following always holds true for the micro-F1 case: micro-F1 = micro_percision = micro_recall = accuracy We first calculate micro_percision and micro_recall and then combine the two f1_arr : a list containing f1_scores per class percision_arr : a list containing percision per class recall_arr : a list containing recall per class y_true : the list containing actual outputs """ #in multi-class, all the correctly predicted samples are true positives. #gettin the diagonal true_positives = 0 j = 0 for i in range(len(confusion_matrix)): true_positives += confusion_matrix[i][j] j += 1 # print(true_positives) #each prediction error is a false positive for the class that we predicted #again, the total number of false negatives is the total number of prediction errors #false_negatives = false_positives false_positives = 0 for i in range(len(confusion_matrix)): #looping over the confusion matrix for j in range(len(confusion_matrix)): if i != j: #passing the diagonal false_positives += confusion_matrix[i][j] # print(false_positives) accuracy = true_positives/(true_positives + false_positives) return round(accuracy, 3) # + from sklearn import metrics # Constants B = "businesses" E = "electronic-devices" F = "for-the-home" L = "leisure-hobbies" P = "personal" V = "vehicles" # True values y_true = [B, E, B, E, F, L, V, V, V, P, V, B, E, F, L, P, V] # Predicted values y_pred = [E, B, F, L, L, L, V, P, P, B, V, B, E, F, L, P, V] # Print the confusion matrix print(metrics.confusion_matrix(y_true, y_pred)) # Print the precision and recall, among other metrics print(metrics.classification_report(y_true, y_pred, digits=3)) # + from sklearn import metrics # Constants C="Cat" F="Fish" H="Hen" # True values y_true = [C,C,C,C,C,C, F,F,F,F,F,F,F,F,F,F, H,H,H,H,H,H,H,H,H] # Predicted values y_pred = [C,C,C,C,H,F, C,C,C,C,C,C,H,H,F,F, C,C,C,H,H,H,H,H,H] # Print the confusion matrix confusion_matrix = metrics.confusion_matrix(y_true, y_pred) print(confusion_matrix) # Print the precision and recall, among other metrics print(metrics.classification_report(y_true, y_pred, digits=3)) # - #based on the category of the previous four calculations for TP, TN, FP, FN, # the acuracy, percision, recall and f1 score will be calculated for the same category true_positive = calculate_true_positive(y_true, y_pred, C) true_negative = calculate_true_negative(y_true, y_pred, C) false_positive = calculate_false_positive(y_true, y_pred, C) false_negative = calculate_false_negative(y_true, y_pred, C) accuracy_for_cat = 
calculate_accuracy(true_positive, true_negative, false_positive, false_negative) percision_for_cat = calculate_percision(true_positive, false_positive) recall_for_cat = calculate_recall(true_positive, false_negative) f1_score_for_cat = calculate_f1_score(recall_for_cat, percision_for_cat) print(true_positive, " ", true_negative, " ", false_positive, " ", false_negative) print(accuracy_for_cat, " ", percision_for_cat, " ", recall_for_cat, " ", f1_score_for_cat) f1 = [0.421, 0.308, 0.667] percision = [0.308, 0.667, 0.667] recall = [0.667, 0.200, 0.667] macro_f1, macro_percision, macro_recall = calculate_macro_average(f1, percision, recall) weighted_f1, weighted_percision, weighted_recall = calculate_weighted_average(f1, percision, recall, y_true) accuracy = calculate_micro_average(confusion_matrix) f1 = [0.333, 0.400, 0.500, 0.667, 0.400, 0.750] percision = [0.333, 0.500, 0.500, 0.500, 0.333, 1.000] recall = [0.333, 0.333, 0.500, 1.000, 0.500, 0.600] macro_f1, macro_percision, macro_recall = calculate_macro_average(f1, percision, recall) weighted_f1, weighted_percision, weighted_recall = calculate_weighted_average(f1, percision, recall, y_true) accuracy = calculate_micro_average(confusion_matrix) print(macro_f1, " ", macro_percision, " ", macro_recall) print(weighted_f1, " ", weighted_percision, " ", weighted_recall) print(accuracy) # the average methods need fixing to do the job for other dataframes. # the confusion matrix function should be implemented # + from sklearn import metrics # Print the confusion matrix print(metrics.confusion_matrix(Y_train, final_predictions)) # Print the precision and recall, among other metrics print(metrics.classification_report(Y_train, final_predictions, digits=3)) # - # implementation of confusion matrix import pandas as pd y_actu = pd.Series(Y_train, name='Actual') y_pred = pd.Series(final_predictions, name='Predicted') df_confusion = pd.crosstab(y_actu, y_pred, rownames=['Actual'], colnames=['Predicted'], margins=True) df_confusion # calcularing the accuracy for validation set descriptions = validationset['desc'] titles = validationset['title'] cleared_descriptions = preprocessing(descriptions.head(1000)) #preprocessed descriptions cleared_titles = preprocessing(titles.head(1000)) #preprocessed titles #the following are all lists #A list containing numerical title and description for each text sample in the training set titles, descriptions = get_numerical_features(non_numerical_title, non_numerical_description) Y = categorizer(non_numerical_Y) #A list containing an integer for the category in cat1 title_train, title_test, description_train, description_test, Y_train, Y_test = train_test_split( titles, descriptions, Y, test_size=0.2, random_state=42) # Thes are the desired outputs only for the train part of the trainset in the 6 class of categories #We need these values in order to train our model to predict the X values in the validation set Y_train0 = calculate_businesses(Y_train) #o0 is a list which is one when the cat1 in the Y_train is businesses and zero otherwise Y_train1 = calculate_electronic_devices(Y_train) #o1 is a list which is one when the cat1 in the Y_train is electronic-devices and zero otherwise Y_train2 = calculate_for_the_home(Y_train) #o2 is a list which is one when the cat1 in the Y_train is for-the-home and zero otherwise Y_train3 = calculate_leisure_hobbies(Y_train) #o3 is a list which is one when the cat1 in the Y_train is leisure-hobbies and zero otherwise Y_train4 = calculate_personal(Y_train) #o4 is a list which is one when 
the cat1 in the Y_train is personal and zero otherwise Y_train5 = calculate_vehicles(Y_train) #o5 is a list which is one when the cat1 in the Y_train is vehicles and zero otherwise # + # Initializing logistic regression by determining the features and weights feature = [] a = [] for i in range(len(title_train)): a.append(title_train[i]) a.append(description_train[i]) feature.append(a) a = [] w0 = np.random.uniform(low = 0, high = 1) #initializing the wight0 with a random uniform real number w1 = np.random.uniform(low = 0, high = 1) #initializing the wight1 with a random uniform real number w2 = np.random.uniform(low = 0, high = 1) #initializing the wight2 with a random uniform real number weights = np.array([w1, w2]) features = np.array(feature) #features is a 2d numpy array that has title_train as first column and description_train as the second # z = w0 + w1*title + w2*description for i in range(5): #printing some features and their corresponding labels print(features[i]) print(features.shape) print(weights.shape) z = np.dot(features, weights) print(len(z)) # - # TRAINING THE MODELS TO PREDICT EACH CATEGORY (y = w1 x title + w2 x description) # + # Initializing logistic regression for businesses label = [] for i in range(len(title_train)): label.append(Y_train0[i]) #businesses actual_business_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_business_labels.shape) # Initializing logistic regression for electronic_devices label = [] for i in range(len(title_train)): label.append(Y_train1[i]) #electronic_devices actual_electronic_devices_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_electronic_devices_labels.shape) # Initializing logistic regression for for_the_home label = [] for i in range(len(title_train)): label.append(Y_train2[i]) #for_the_home actual_for_the_home_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_for_the_home_labels.shape) # Initializing logistic regression for leisure_hobbies label = [] for i in range(len(title_train)): label.append(Y_train3[i]) #leisure_hobbies actual_leisure_hobbies_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_leisure_hobbies_labels.shape) # Initializing logistic regression for personal label = [] for i in range(len(title_train)): label.append(Y_train4[i]) #personal actual_personal_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_personal_labels.shape) # Initializing logistic regression for vehicles label = [] for i in range(len(title_train)): label.append(Y_train5[i]) #vehicles actual_vehicles_labels = np.array(label) #labels are the desires output or Y_train also an numpy array print(actual_vehicles_labels.shape) # - business_weights, business_cost_history = train(features, actual_business_labels, weights, 0.35, 10000) business_predictions = predict(features, weights) business_predicted_labels = classify(business_predictions) #labels predicted by the classifier electronic_weights, electronic_cost_history = train(features, actual_electronic_devices_labels, weights, 0.35, 10000) electronic_predictions = predict(features, weights) electronic_predicted_labels = classify(electronic_predictions) #labels predicted by the classifier for_the_home_weights, for_the_home_cost_history = train(features, actual_for_the_home_labels, weights, 0.35, 10000) for_the_home_predictions = predict(features, weights) 
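# +
# A minimal, self-contained sketch (toy numbers only; names such as `toy_features`
# and `_sigmoid` are illustrative and not part of this notebook) of the
# one-vs-rest + softmax scheme used here: each binary model scores a sample with
# sigmoid(features . weights), the six scores are stacked, and softmax + argmax
# picks the final category, mirroring multiclass_predictor() above.
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def _softmax(x):
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

rng = np.random.default_rng(0)
toy_features = rng.uniform(size=(3, 2))        # 3 samples, (title, description) features
toy_class_weights = rng.uniform(size=(6, 2))   # one weight vector per cat1 category

toy_scores = _sigmoid(toy_features @ toy_class_weights.T)   # shape (3, 6): one score per class
toy_categories = [int(np.argmax(_softmax(row))) for row in toy_scores]
print(toy_categories)                          # integers 0..5, same coding as categorizer()
# -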
for_the_home_predicted_labels = classify(for_the_home_predictions) #labels predicted by the classifier hobbies_weights, hobbies_cost_history = train(features, actual_leisure_hobbies_labels, weights, 0.35, 10000) hobbies_predictions = predict(features, weights) hobbies_predicted_labels = classify(hobbies_predictions) #labels predicted by the classifier personal_weights, personal_cost_history = train(features, actual_personal_labels, weights, 0.35, 10000) personal_predictions = predict(features, weights) personal_predicted_labels = classify(personal_predictions) #labels predicted by the classifier vehicle_weights, vehicle_cost_history = train(features, actual_vehicles_labels, weights, 0.35, 10000) vehicle_predictions = predict(features, weights) vehicle_predicted_labels = classify(vehicle_predictions) #labels predicted by the classifier print("Business Model:",accuracy(business_predicted_labels, actual_business_labels),"\n", "Electronic Model:", accuracy(electronic_predicted_labels, actual_electronic_devices_labels),"\n", "For-the-home Model:", accuracy(for_the_home_predicted_labels, actual_for_the_home_labels),"\n", "Hobbies Model:", accuracy(hobbies_predicted_labels, actual_leisure_hobbies_labels),"\n", "Personal Model:", accuracy(personal_predicted_labels, actual_personal_labels),"\n", "Vehicle Model:", accuracy(vehicle_predicted_labels, actual_vehicles_labels)) # + # multiclass prediction final_predictions = multiclass_predictor(business_predictions, electronic_predictions, for_the_home_predictions, hobbies_predictions, personal_predictions, vehicle_predictions) # Calculating the accuracy of the final model final_predictions = np.array(final_predictions) Y_train = np.array(Y_train) final_model_accuracy = accuracy(final_predictions, Y_train) print(final_model_accuracy) for i in range(10): print(Y_train[i], final_predictions[i]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: ts # language: python # name: ts # --- # # Euclidean Distance # # Welcome to your 1-st assignment. By working through this exercise you will learn how to # # **Instructions:** # - You will be using Python 3. # - Avoid using for-loops and while-loops, unless you are explicitly told to do so. # - Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function. # - After coding your function, run the cell right below it to check if your result is correct. # - The blue button "Submit Assignment" does not work. After running all the cells, please go directly to Assignment-> My submission to see your results. # # Let's get started! # ## Dataset # Suppose we have a $n$ dimensional space $\mathbb{R}^{n}$, we want to generate $1000000$ pairs of uniformly distributed random # numbers $X\sim\mathscr{U}\left(-1,\:1\right)$. # # For instance, if $n=1$, we generate $p_{1}=\left(x_{1},\:y_{1}\right)$, $p_{2}=\left(x_{2},\:y_{2}\right)$, $\cdots$, $p_{1000000}=\left(x_{1000000},\:y_{1000000}\right)$, where $x_{1}$, $x_{2}$, $\cdots$, $x_{1000000}$ are uniformly distributed, $y_{1}$, $y_{2}$, $\cdots$, $y_{1000000}$ are uniformly distributed too. 
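# A quick illustration of the $n=1$ case just described (a hedged sketch; the sample
# size and seed below are arbitrary and not part of the assignment).

# +
import numpy as np

np.random.seed(0)                          # illustrative seed only
x_1d = np.random.uniform(-1, 1, size=10)   # x_1, ..., x_10 ~ U(-1, 1)
y_1d = np.random.uniform(-1, 1, size=10)   # y_1, ..., y_10 ~ U(-1, 1)
print(np.abs(x_1d - y_1d))                 # in one dimension the Euclidean distance is |x - y|
# -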
# # If $n=2$, we generate $\mathbf{p}_{1}=\left(\mathbf{x}_{1},\:\mathbf{y}_{1}\right)$, where $\mathbf{x}_{1}=\left(x_{1}^{\left(1\right)},\:x_{1}^{\left(2\right)}\right)$ and $\mathbf{y}_{1}=\left(y_{1}^{\left(1\right)},\:y_{1}^{\left(2\right)}\right)$, $\mathbf{p}_{2}=\left(\mathbf{x}_{2},\:\mathbf{y}_{2}\right)$, where $\mathbf{x}_{2}=\left(x_{2}^{\left(1\right)},\:x_{2}^{\left(2\right)}\right)$ and $\mathbf{y}_{2}=\left(y_{2}^{\left(1\right)},\:y_{2}^{\left(2\right)}\right)$, $\cdots$, $\mathbf{p}_{1000000}=\left(\mathbf{x}_{1000000},\:\mathbf{y}_{1000000}\right)$, where $\mathbf{x}_{1000000}=\left(x_{1000000}^{\left(1\right)},\:x_{1000000}^{\left(2\right)}\right)$ and $\mathbf{y}_{1000000}=\left(y_{1000000}^{\left(1\right)},\:y_{1000000}^{\left(2\right)}\right)$, and $x_{1}^{\left(1\right)}$, $x_{2}^{\left(1\right)}$, $\cdots$, $x_{1000000}^{\left(1\right)}$ are uniformly distributed, $x_{1}^{\left(2\right)}$, $x_{2}^{\left(2\right)}$, $\cdots$, $x_{1000000}^{\left(2\right)}$ are uniformly distributed, $y_{1}^{\left(1\right)}$, $y_{2}^{\left(1\right)}$, $\cdots$, $y_{1000000}^{\left(1\right)}$ are uniformly distributed, and $y_{1}^{\left(2\right)}$, $y_{2}^{\left(2\right)}$, $\cdots$, $y_{1000000}^{\left(2\right)}$ are uniformly distributed too. # + # imports import numpy as np import matplotlib.pyplot as plt # %matplotlib inline from sklearn.metrics.pairwise import euclidean_distances import sys sys.path.append("..") import grading import timeit import matplotlib.mlab import scipy.stats from scipy.stats import norm # - TIMEOUT_UPPER=1800 ### ONLY FOR GRADING. DO NOT EDIT ### submissions=dict() assignment_key="" all_parts=["pmqxU", "VrXL6", "XsLp1","jD7SY","Ad4J0","1nPFm"] ### ONLY FOR GRADING. DO NOT EDIT ### COURSERA_TOKEN = ""# the key provided to the Student under his/her email on submission page COURSERA_EMAIL = ""# the email # + def euclidean_distances_stats(euclidean_distances_vector): """ Calculate Euclidean distances statistics Arguments: euclidean_distances_vector - 1-D vector of Euclidean distances Return: np.array() of length 4 the first element of array is the mean the second element is variance the third element is skew of the distribution the forth element is kurtusis of the distribution """ if len(euclidean_distances_vector) > 0: this_mean = np.mean(euclidean_distances_vector) this_variance = np.var(euclidean_distances_vector) this_skewness = scipy.stats.skew(euclidean_distances_vector) this_kurtosis = scipy.stats.kurtosis(euclidean_distances_vector) result = np.array([this_mean, this_variance, this_skewness, this_kurtosis], dtype=float) else: result = np.array([0.] 
* 4, dtype=float) return result def print_stats(euclidean_stats): """ Print Euclidean distances statistics Arguments: euclidean_stats - np.array() of length 4 the first element of array is the mean the second element is variance the third element is skew of the distribution the forth element is kurtusis of the distribution """ print( 'Expectation of Euclidean distances: ', euclidean_stats[0], '\n' ) print( 'Variance of Euclidean distances: ', euclidean_stats[1], '\n' ) print( 'Skewness of Euclidean distances: ', euclidean_stats[2], '\n' ) print( 'Kurtosis of Euclidean distances: ', euclidean_stats[3], '\n' ) def plot_distribution(euclidean_distances_vector, euclidean_stats, dim_space, bins_number=30): """ Plot histogram of Euclidean distances against normal distribution PDF Arguments: euclidean_distances_vector - 1-D vector of Euclidean distances euclidean_stats - np.array() of length 4 the first element of array is the mean the second element is variance the third element is skew of the distribution the forth element is kurtusis of the distribution dim_space - dimension of the space bins_number - number of bins in the histogram """ # verbose, but this is for clarity this_mean = euclidean_stats[0] this_variance = euclidean_stats[1] this_skewness = euclidean_stats[2] this_kurtosis = euclidean_stats[3] sample_size = len(euclidean_distances_vector) try: fig_l, ax_l = plt.subplots() n_bins_l, bins_l, patches_l = ax_l.hist( euclidean_distances_vector, bins_number, normed=1 ) y_l = matplotlib.mlab.normpdf( bins_l, this_mean, np.sqrt( this_variance ) ) ax_l.plot( bins_l, y_l, 'r--' ) plt.title( 'Histogram for dimension = %d and sample size = %d \n $\mu$ = %.3f, $\sigma^2$ = %.3f, Skewness = %.3f, Kurtosis = %.3f' \ % (dim_space, sample_size, this_mean, this_variance, this_skewness, this_kurtosis ) ) fig_l.tight_layout() plt.grid( True, which='both') plt.minorticks_on() return fig_l except: return None # + # generating distributions lower_boundary = 0 upper_boundary = 1 n = 5 # dimension sample_size = 10000 np.random.seed(9001) # set the seed to yield reproducible results X = np.random.uniform( low=lower_boundary, high=upper_boundary, size=(sample_size, n) ) Y = np.random.uniform( low=lower_boundary, high=upper_boundary, size=(sample_size, n) ) print( 'X: ', X ) print( 'Y: ', Y ) # - # ## Part 1 # Calculate the Euclidean distance between the two points of each pair. Do this in a loop. Hint: use sklearn to do the computation. # # Plot the histogram of the Euclidean distance. 
In a $n$ dimensional space $\mathbb{R}^{n}$, the Euclidean distance between $\mathbf{x}=\left(x_{1},\:x_{2},\:\cdots,\:x_{n}\right)$ and $\mathbf{y}=\left(y_{1},\:y_{2},\:\cdots,\:y_{n}\right)$ is given # by # \begin{equation} # \begin{aligned}d_{E}\left(\mathbf{p},\:\mathbf{q}\right) & =\sqrt{\left(x_{1}-y_{1}\right)^{2}+\left(x_{2}-y_{2}\right)^{2}+\cdots+\left(x_{n}-y_{n}\right)^{2}}\\ # & =\sqrt{\sum_{i=1}^{n}\left(x_{i}-y_{i}\right)^{2}}\\ # & =\left\Vert \mathbf{x}-\mathbf{y}\right\Vert _{2} # \end{aligned} # \end{equation} # + start = timeit.default_timer() ### START CODE HERE ### (≈ 4 lines of code) # implement a loop which computes "euclidean_distance" -already imported, # between each element in X and Y # store results in "euclidean_distances_vector_l" list euclidean_distances_vector_l = [] for i, x in enumerate(X): euclidean_distances_vector_l.append(euclidean_distances(x.reshape(1,-1), Y[i].reshape(1,-1))) ### END CODE HERE ### stop = timeit.default_timer() print( 'Running time: ', stop-start ) # - # Filename: SklearnDistance, PART: pmqxU ### GRADED PART (DO NOT EDIT) ### result = euclidean_distances_stats(euclidean_distances_vector_l) part_1 = list(result.squeeze()) try: part1 = " ".join(map(repr, part_1)) except TypeError: part1 = repr(part_1) submissions[all_parts[0]]=part1 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:1],all_parts,submissions) result ### GRADED PART (DO NOT EDIT) ### print_stats(result) plot_distribution(euclidean_distances_vector_l, result, n) try: plt.show() except: pass # ## Part 2 # Calculate the Euclidean distance between the two points of each pair using vectorized operations and inner product. # + # using vectorization by calculating inner product start = timeit.default_timer() # variables needed for grading euclidean_distances_vector_l_vectorized = [] ### START CODE HERE ### (≈ 3 lines of code) # compute Euclidean distances between each element in X and Y using (vectorized implementation) # store results in euclidean_distances_vector_l_vectorized # i.e. using no loops but only vectors euclidean_distances_vector_l_vectorized = np.sqrt(np.sum((X-Y)* (X-Y), axis=1)) ### END CODE HERE ### stop = timeit.default_timer() print( 'Running time: ', stop-start ) # - # Filename: VectorizedDistance, PART: VrXL6 ### GRADED PART (DO NOT EDIT) ### result = euclidean_distances_stats(euclidean_distances_vector_l_vectorized) part_2 = result.squeeze() try: part2 = " ".join(map(repr, part_2)) except TypeError: part2 = repr(part_2) submissions[all_parts[1]]=part2 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:2],all_parts,submissions) result ### GRADED PART (DO NOT EDIT) ### print_stats(result) fig = plot_distribution(euclidean_distances_vector_l_vectorized, result, n) try: plt.plot() except: pass # ## Question 3 # We repeat question 1 and question 2 for $n=1$, $n=5$, $n=10$, $n=100$, $n=1000$, $n=5000$, and $n=10000$. Then plot the expectation and variance as a function of $n$. # You need to generate two sets of n-dimensional samples, compute def VectorizationMethod(dim_space, sample_size, lower_boundary, upper_boundary, bins_number=30): """ Generate sample_size elements from dim_space-dimensional space. 
The coordinates of each element in the space are sampled from uniform distribution between lower_boundary and upper_boundary Arguments: dim_space - dimension of the space, a positive integer sample_size - number of samples in the dim_space-dimensional space lower_boundary - lower boundary of coordinates sampled from U(lower_boundary, upper_boundary) upper_boundary - lower boundary of coordinates sampled from U(lower_boundary, upper_boundary) bins_number - number of bins to plot a histogram stats_result - np.array() of length 4 the first element of array is the mean the second element is variance the third element is skew of the distribution the forth element is kurtusis of the distribution """ np.random.seed(42) # variables needed for grading euclidean_distances_vector_v = [] ### START CODE HERE ### (≈ 7-10 lines of code) # store results in euclidean_distances_vector_v X = np.random.uniform(lower_boundary, upper_boundary, (sample_size, dim_space)) Y = np.random.uniform(lower_boundary, upper_boundary, (sample_size, dim_space)) euclidean_distances_vector_v = np.sqrt(np.sum((X-Y)*(X-Y), axis=1)) ### END CODE HERE ### stats_result = euclidean_distances_stats(euclidean_distances_vector_v) fig = plot_distribution(euclidean_distances_vector_v, stats_result, dim_space) return tuple(stats_result.tolist()) # + start = timeit.default_timer() sample_size = 10000 lower_boundary = 0 upper_boundary = 1 dimension_vector = [2, 5, 10, 20, 40, 60, 80, 100, 200, 400, 600, 800, 1000] n_dims = len(dimension_vector) euclidean_distances_mean_vector = [np.nan] * n_dims euclidean_distances_variance_vector = [np.nan] * n_dims euclidean_distances_skewness_vector = [np.nan] * n_dims euclidean_distances_kurtosis_vector = [np.nan] * n_dims for idx, space_dims in enumerate(dimension_vector): # using vectorization euclidean_distances_mean, euclidean_distances_variance, euclidean_distances_skewness, euclidean_distances_kurtosis = \ VectorizationMethod( space_dims, sample_size, lower_boundary, upper_boundary ) euclidean_distances_mean_vector[idx] = euclidean_distances_mean euclidean_distances_variance_vector[idx] = euclidean_distances_variance euclidean_distances_skewness_vector[idx] = euclidean_distances_skewness euclidean_distances_kurtosis_vector[idx] = euclidean_distances_kurtosis print( 'Calculating finished for sample size = %d, dimension = %d\n' %( sample_size, space_dims) ) stop = timeit.default_timer() print( 'Running time: ', stop-start ) # - # Filename : DistancesMean, PART: XsLp1 ### GRADED PART (DO NOT EDIT) ### part_3 = list(euclidean_distances_mean_vector) try: part3 = " ".join(map(repr, part_3)) except TypeError: part3 = repr(part_3) submissions[all_parts[2]]=part3 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:3],all_parts,submissions) euclidean_distances_mean_vector ### GRADED PART (DO NOT EDIT) ### # Filename: DistancesVariance, PART jD7SY ### GRADED PART (DO NOT EDIT) ### part_4 = list(euclidean_distances_variance_vector) try: part4 = " ".join(map(repr, part_4)) except TypeError: part4 = repr(part_4) submissions[all_parts[3]]=part4 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:4],all_parts,submissions) euclidean_distances_variance_vector ### GRADED PART (DO NOT EDIT) ### # Filename: DistancesSkewness, PART: Ad4J0 ### GRADED PART (DO NOT EDIT) ### part_5 = list(euclidean_distances_skewness_vector) try: part5 = " ".join(map(repr, part_5)) except TypeError: part5 = repr(part_5) submissions[all_parts[4]]=part5 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, 
assignment_key,all_parts[:5],all_parts,submissions) euclidean_distances_skewness_vector ### GRADED PART (DO NOT EDIT) ### # Filename: DistancesKurtosis, PART: 1nPFm ### GRADED PART (DO NOT EDIT) ### part_6 = list(euclidean_distances_kurtosis_vector) try: part6 = " ".join(map(repr, part_6)) except TypeError: part6 = repr(part_6) submissions[all_parts[5]]=part6 grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:6],all_parts,submissions) euclidean_distances_kurtosis_vector ### GRADED PART (DO NOT EDIT) ### # here we plot the stats for different sample sizes try: plt.figure() plt.plot( dimension_vector, euclidean_distances_mean_vector, 'r-', marker='o' ) plt.grid( True, which='both') plt.minorticks_on() plt.title( 'Mean of Euclidean Distances Distribution' ) plt.xlabel( 'Dimension' ) plt.ylabel( 'Mean of Euclidean Distances' ) plt.figure() plt.plot( dimension_vector, euclidean_distances_variance_vector, 'r-', marker='o' ) plt.grid( True, which='both') plt.minorticks_on() plt.title( 'Variance of Euclidean Distances Distribution' ) plt.xlabel( 'Dimension' ) plt.ylabel( 'Variance of Euclidean Distances' ) plt.figure() plt.plot( dimension_vector, euclidean_distances_skewness_vector, 'r-', marker='o' ) plt.grid( True, which='both') plt.minorticks_on() plt.title( 'Skewness of Euclidean Distances Distribution' ) plt.xlabel( 'Dimension' ) plt.ylabel( 'Skewness of Euclidean Distances' ) plt.figure() plt.plot( dimension_vector, euclidean_distances_kurtosis_vector, 'r-', marker='o' ) plt.grid( True, which='both') plt.minorticks_on() plt.title( 'Kurtosis of Euclidean Distances Distribution' ) plt.xlabel( 'Dimension' ) plt.ylabel( 'Kurtosis of Euclidean Distances' ) matplotlib.pyplot.show() except: pass # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 from dtoolbioimage import ImageDataSet, Image3D, Image import numpy as np imageds = ImageDataSet('azure://jicimagedatasets/cc9d757a-b149-4626-83be-abc5a3765cb8') wall_stack = imageds.get_stack('fca-3_FLC-Venus_root01', 'fca-3_FLC-Venus_root01 #1', 1) venus_stack = imageds.get_stack('fca-3_FLC-Venus_root01', 'fca-3_FLC-Venus_root01 #1', 0) from dtoolbioimage.segment import sitk_watershed_segmentation, filter_segmentation_by_size from dtoolbioimage import zoom_to_match_scales zoomed_stack = zoom_to_match_scales(wall_stack) zoomed_stack raw_segmentation = sitk_watershed_segmentation(zoomed_stack) raw_segmentation.view(Image3D) filtered = filter_segmentation_by_size(raw_segmentation) filtered.view(Image3D) from dtoolbioimage.segment import Segmentation3D uci = filtered.view(Segmentation3D).unique_color_image uci[:,:,10].view(Image) filtered.view(Segmentation3D).save('root01segmentation.tif') import os; os.getcwd() pretty = filtered.view(Segmentation3D).pretty_color_image pretty[:,:,10].view(Image) # + from dtoolbioimage.util.array import pretty_color_array from ipywidgets import interactive, IntSlider, Output, HBox from IPython.display import clear_output, display, display_png def simple_segmentation_viewer(stack): _, _, max_z = stack.shape slider = IntSlider(min=0, max=max_z, step=1, description='Z plane:') def show_z_plane(z): display_png(pretty_color_array(stack[:,:,z]).view(Image)) return interactive(show_z_plane, z=slider) # - simple_segmentation_viewer(filtered) from io import BytesIO from imageio import imsave png_byte_arrays = 
[] for z in range(filtered.shape[2]): b = BytesIO() imsave(b, pretty_color_array(filtered[:,:,z]), 'PNG', compress_level=0) png_byte_arrays.append(b) class CachedImageView(object): def _repr_png_(self): return png_byte_arrays[0].getvalue() CachedImageView() # + from dtoolbioimage.util.array import pretty_color_array from ipywidgets import interactive, IntSlider, Output, HBox from IPython.display import clear_output, display, display_png def cached_segmentation_viewer(stack): _, _, max_z = stack.shape slider = IntSlider(min=0, max=max_z, step=1, description='Z plane:') png_byte_arrays = {} def show_z_plane(z): if z in png_byte_arrays: raw_png_rep = png_byte_arrays[z] else: b = BytesIO() imsave(b, pretty_color_array(stack[:,:,z]), 'PNG', compress_level=0) raw_png_rep = b.getvalue() png_byte_arrays[z] = raw_png_rep display({'image/png': raw_png_rep}, raw=True) return interactive(show_z_plane, z=slider) # - cached_segmentation_viewer(filtered) filtered[np.where(filtered > 50)] = 0 filtered.view(Segmentation3D) fpath = 'root01segmentation.tif' from imageio import volread unique_color_image = volread(fpath) unique_color_image.shape print(unique_color_image[18,256,512]) unique_color = unique_color_image[18,256,512] from dtoolbioimage.util.color import identifier_from_unique_color identifier_from_unique_color(unique_color) planes = [] for z in range(unique_color_image.shape[0]): pass zdim, xdim, ydim, _ = unique_color_image.shape planes = [] for z in range(zdim): segmentation = np.zeros((xdim, ydim), dtype=np.uint32) segmentation += unique_color_image[z,:,:,2] segmentation += unique_color_image[z,:,:,1] * 256 segmentation += unique_color_image[z,:,:,0] * 256 * 256 planes.append(segmentation) np.dstack(planes).view(Segmentation3D) Segmentation3D.from_file('root01segmentation.tif') fpath = 'root01segmentation.tif' unique_color_image = volread(fpath) unique_color_image.shape from skimage.measure import regionprops filtered.view(Segmentation3D) zoomed_venus_stack = zoom_to_match_scales(venus_stack) zoomed_venus_stack from skimage.measure import regionprops by_label = {r.label: r for r in regionprops(filtered)} by_label[5].bbox rmin, cmin, zmin, rmax, cmax, zmax = by_label[5].bbox selected = filtered[rmin:rmax, cmin:cmax, zmin:zmax] selected.view(Segmentation3D) region = selected == 5 from scipy.ndimage.morphology import binary_erosion eroded = binary_erosion(region) eroded.view(Image3D) surface = region ^ eroded surface.view(Image3D) np.sum(surface) np.sum(region) def spherality(region): eroded = binary_erosion(region) surface = region ^ eroded S = np.sum(surface) V = np.sum(region) mult = 4.5 * np.sqrt(np.pi) return mult * V / np.power(S, 1.5) spherality(region) np.power(2, 1.5) from skimage.morphology import ball ball(20).view(Image3D) spherality(ball == 1) ballbool = ball(100) == 1 ballbool.view(Image3D) eroded = binary_erosion(ballbool) surface = ballbool ^ eroded surface.view(Image3D) spherality(ballbool) ballbool.shape (4 / 3) * np.pi * np.power(100, 3) np.sum(ballbool) np.sum(surface) 4 * np.pi * 100 * 100 V = (4 / 3) * np.pi * np.power(100, 3) S = 4 * np.pi * 100 * 100 mult = 6 * np.sqrt(np.pi) mult * V / np.power(S, 1.5) from dtoolbioimage.segment import select_region, spherality r = select_region(filtered, 50) r.view(Image3D) wonky = select_region(filtered, 50) good = select_region(filtered, 8) spherality(good) spherality(wonky) print(np.unique(filtered)) bad = [l for l in range(3, 51) if spherality(select_region(filtered, l)) < 0.5] bad select_region(filtered, 3).view(Image3D) for l in bad: 
filtered[np.where(filtered == l)] = 0 filtered.view(Segmentation3D) larger_seg = Segmentation3D.from_file('root01segmentation.tif') larger_seg def filter_by_sphericity(segmentation): filtered = segmentation.copy() ids = set(np.unique(segmentation)) - set([0]) bad = [l for l in ids if spherality(select_region(segmentation, l)) < 0.5] for l in bad: filtered[np.where(segmentation == l)] = 0 return filtered new_filtered = filter_by_sphericity(larger_seg) new_filtered labels = set(np.unique(new_filtered)) - set([0]) spheralities = [spherality(select_region(new_filtered, l)) for l in labels] spheralities eroded_sps = [spherality(binary_erosion(select_region(new_filtered, l))) for l in labels] for s1, s2, s3 in zip(dilated_sps, spheralities, eroded_sps): print(s1, s2, s3) from scipy.ndimage.morphology import binary_dilation dilated_sps = [spherality(binary_dilation(select_region(new_filtered, l))) for l in labels] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # [header stuff here] # # In this notebook, we will access the [BLS Public Data API](https://www.bls.gov/developers/) to pull unemployment data by state over time. We'll also do some preprocessing to annualize rates by averaging. # # --- # + tags=[] import requests import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import json import time from datetime import date end_year = int(date.today().strftime("%Y"))-1 # - # **NOTE**: BLS Series IDs are coded by a two-digit state code, which are found at [https://download.bls.gov/pub/time.series/sm/sm.state](https://download.bls.gov/pub/time.series/sm/sm.state). That link came from [this documentation](https://www.bls.gov/bls/data_finder.htm). # # Also, the below will restrict the list to only the 50 states and DC, excluding territories and national aggregates. # + tags=[] bls_state_ids = pd.read_csv('../data/reference/bls_sm.state.txt',sep="\t") #two-digit leading zero filling method taken from: https://stackoverflow.com/a/51837162 # + tags=[] bls_state_ids.drop(inplace=True, index=bls_state_ids[bls_state_ids['state_name'].isin( ['All States','Puerto Rico','Virgin Islands','All Metropolitan Statistical Areas'])].index) # + tags=[] bls_state_ids = {f'LASST{k:02}0000000000003':v for k,v in zip(bls_state_ids['state_code'],bls_state_ids['state_name'])} # - # --- # # Unfortunately, per the [BLS Public Data API docs](https://www.bls.gov/developers/api_signature_v2.htm#multiple), a multiple-series query can return data for **at most 50 series** (and we need to get 51). So we'll break the request up into two chunks, processing 26 series the first pass and appending the last 25 series in the second pass. # # The below function will be the API calling code. The call will need to be made multiple times due to the following restrictions: # * Up to 50 seriesIDs can be requested per call # * We'll get the unemployment data in two batches: for states 0-25 and 26-50. # * Only 20 years of data can be requested per call # * We'll have to start from 1979 (which coincides with the FBI data's start) and get unemployment data in 20-year increments. In 2021, at least 3 calls will need to be made per batch of state data requests. 
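# As a quick illustration of the batching constraints above, here is a standalone sketch (not the notebook's own helper, which is defined further below) of splitting 1979 through the end year into windows of at most 20 years:
#
# +
def chunk_years(start_year, end_year, max_span=20):
    """Illustrative only: split [start_year, end_year] into windows of <= max_span calendar years."""
    ranges = []
    lo = start_year
    while lo <= end_year:
        hi = min(lo + max_span - 1, end_year)
        ranges.append((lo, hi))
        lo = hi + 1
    return ranges

# For 1979-2020 this yields three windows, consistent with the "at least 3 calls
# per batch" note above: [(1979, 1998), (1999, 2018), (2019, 2020)]
print(chunk_years(1979, 2020))
# -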
# + tags=[] headers = {'Content-type': 'application/json'} #does not change between API calls def request_unemployment_data(year_range,series_ids): data = json.dumps({"seriesid": series_ids, "startyear":year_range[0], "endyear":year_range[1], "catalog": True, "annualaverage": True, "registrationkey":"" ### YOUR API KEY HERE }) #make the request p = requests.post('https://api.bls.gov/publicAPI/v2/timeseries/data/', data=data, headers=headers) series_id_results = False for i, state in enumerate(p.json()['Results']['series']): #print(f"getting data for {i}, {p.json()['Results']['series'][i]['catalog']['area']} between {year_range[0]} and {year_range[1]}.") #get the unemployment data: df = pd.DataFrame(state['data']) #make string unemployment values into floats: df['value'] = df['value'].astype('float') #get averages by year: df = pd.DataFrame(df.groupby('year')['value'].mean()).reset_index() #add which state the series corresponds to: df['state'] = p.json()['Results']['series'][i]['catalog']['area'] if series_id_results is False: series_id_results = df else: series_id_results = series_id_results.append(df,ignore_index=True) #release memory del df # must return a dataframe for the year range and specified series_ids: return series_id_results # - # The cell below will run the api call for as many times as required to get state average unemployment rates from 1979 to the most recent complete year. # + tags=[] def define_year_ranges(end_year,start_year=1979): #how many years between 1979 and end year: years = end_year - start_year #how many splits needed: splits = (years // 20) + (0 if years%20==0 else 1) result = [] last_year = start_year for i in range(splits): new_last_year = last_year + min(19,end_year-last_year) result.append((last_year,new_last_year)) last_year = new_last_year+1 return result # + tags=[] series_sets = {'set1': [x for i,x in enumerate(bls_state_ids) if i<26], 'set2': [x for i,x in enumerate(bls_state_ids) if i>25]} # + tags=[] state_unemployment = False for i, series_set in series_sets.items(): for date_range in define_year_ranges(2020): if state_unemployment is False: state_unemployment = request_unemployment_data(date_range,series_set) else: state_unemployment = state_unemployment.append(request_unemployment_data(date_range,series_set), ignore_index=True) # + tags=[] state_unemployment.shape # - # The row count above should match the row count of rows we've obtained for crime data in [Notebook](#). # + tags=[] state_unemployment.rename(columns={'value': 'avg_unemployment_rate'}, inplace=True) # - state_unemployment.columns # + tags=[] state_unemployment = state_unemployment[['state','year','avg_unemployment_rate']] # + tags=[] state_unemployment = state_unemployment.sort_values(by=['state','year']) # - state_unemployment.head() # + tags=[] state_unemployment.to_csv('../data/bls_state_unemployment.csv',index=False) # - # --- # # Get **National** averages too: us_unemployment = state_unemployment.copy() # + tags=[] us_unemployment = pd.DataFrame(us_unemployment.groupby('year')['avg_unemployment_rate'].mean()).reset_index() # + tags=[] us_unemployment[:5] # - #write to dataset: us_unemployment.to_csv('../data/bls_us_unemployment.csv',index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="v1CUZ0dkOo_F" # ##### Copyright 2019 The TensorFlow Authors. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # + cellView="form" colab={} colab_type="code" id="qmkj-80IHxnd" #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # + [markdown] colab_type="text" id="_xnMOsbqHz61" # # Pix2Pix # + [markdown] colab_type="text" id="Ds4o1h4WHz9U" # # # # # #
    # + [markdown] colab_type="text" id="ITZuApL56Mny" # This notebook demonstrates image to image translation using conditional GAN's, as described in [Image-to-Image Translation with Conditional Adversarial Networks](https://arxiv.org/abs/1611.07004). Using this technique we can colorize black and white photos, convert google maps to google earth, etc. Here, we convert building facades to real buildings. # # In example, we will use the [CMP Facade Database](http://cmp.felk.cvut.cz/~tylecr1/facade/), helpfully provided by the [Center for Machine Perception](http://cmp.felk.cvut.cz/) at the [Czech Technical University in Prague](https://www.cvut.cz/). To keep our example short, we will use a preprocessed [copy](https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/) of this dataset, created by the authors of the [paper](https://arxiv.org/abs/1611.07004) above. # # Each epoch takes around 15 seconds on a single V100 GPU. # # Below is the output generated after training the model for 200 epochs. # # ![sample output_1](https://www.tensorflow.org/images/gan/pix2pix_1.png) # ![sample output_2](https://www.tensorflow.org/images/gan/pix2pix_2.png) # + [markdown] colab_type="text" id="e1_Y75QXJS6h" # ## Import TensorFlow and other libraries # + colab={} colab_type="code" id="YfIk2es3hJEd" from __future__ import absolute_import, division, print_function, unicode_literals try: # # %tensorflow_version only exists in Colab. # %tensorflow_version 2.x except Exception: pass import tensorflow as tf import os import time import matplotlib.pyplot as plt from IPython.display import clear_output # + [markdown] colab_type="text" id="iYn4MdZnKCey" # ## Load the dataset # # You can download this dataset and similar datasets from [here](https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets). As mentioned in the [paper](https://arxiv.org/abs/1611.07004) we apply random jittering and mirroring to the training dataset. # # * In random jittering, the image is resized to `286 x 286` and then randomly cropped to `256 x 256` # * In random mirroring, the image is randomly flipped horizontally i.e left to right. 
# + colab={} colab_type="code" id="Kn-k8kTXuAlv" _URL = 'https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/facades.tar.gz' path_to_zip = tf.keras.utils.get_file('facades.tar.gz', origin=_URL, extract=True) PATH = os.path.join(os.path.dirname(path_to_zip), 'facades/') # + colab={} colab_type="code" id="2CbTEt448b4R" BUFFER_SIZE = 400 BATCH_SIZE = 1 IMG_WIDTH = 256 IMG_HEIGHT = 256 # + colab={} colab_type="code" id="aO9ZAGH5K3SY" def load(image_file): image = tf.io.read_file(image_file) image = tf.image.decode_jpeg(image) w = tf.shape(image)[1] w = w // 2 real_image = image[:, :w, :] input_image = image[:, w:, :] input_image = tf.cast(input_image, tf.float32) real_image = tf.cast(real_image, tf.float32) return input_image, real_image # + colab={} colab_type="code" id="4OLHMpsQ5aOv" inp, re = load(PATH+'train/100.jpg') # casting to int for matplotlib to show the image plt.figure() plt.imshow(inp/255.0) plt.figure() plt.imshow(re/255.0) # + colab={} colab_type="code" id="rwwYQpu9FzDu" def resize(input_image, real_image, height, width): input_image = tf.image.resize(input_image, [height, width], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) real_image = tf.image.resize(real_image, [height, width], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) return input_image, real_image # + colab={} colab_type="code" id="Yn3IwqhiIszt" def random_crop(input_image, real_image): stacked_image = tf.stack([input_image, real_image], axis=0) cropped_image = tf.image.random_crop( stacked_image, size=[2, IMG_HEIGHT, IMG_WIDTH, 3]) return cropped_image[0], cropped_image[1] # + colab={} colab_type="code" id="muhR2cgbLKWW" # normalizing the images to [-1, 1] def normalize(input_image, real_image): input_image = (input_image / 127.5) - 1 real_image = (real_image / 127.5) - 1 return input_image, real_image # + colab={} colab_type="code" id="fVQOjcPVLrUc" @tf.function() def random_jitter(input_image, real_image): # resizing to 286 x 286 x 3 input_image, real_image = resize(input_image, real_image, 286, 286) # randomly cropping to 256 x 256 x 3 input_image, real_image = random_crop(input_image, real_image) if tf.random.uniform(()) > 0.5: # random mirroring input_image = tf.image.flip_left_right(input_image) real_image = tf.image.flip_left_right(real_image) return input_image, real_image # + colab={} colab_type="code" id="n0OGdi6D92kM" # As you can see in the images below # that they are going through random jittering # Random jittering as described in the paper is to # 1. Resize an image to bigger height and width # 2. Randomnly crop to the original size # 3. 
Randomnly flip the image horizontally plt.figure(figsize=(6, 6)) for i in range(4): rj_inp, rj_re = random_jitter(inp, re) plt.subplot(2, 2, i+1) plt.imshow(rj_inp/255.0) plt.axis('off') plt.show() # + colab={} colab_type="code" id="tyaP4hLJ8b4W" def load_image_train(image_file): input_image, real_image = load(image_file) input_image, real_image = random_jitter(input_image, real_image) input_image, real_image = normalize(input_image, real_image) return input_image, real_image # + colab={} colab_type="code" id="VB3Z6D_zKSru" def load_image_test(image_file): input_image, real_image = load(image_file) input_image, real_image = resize(input_image, real_image, IMG_HEIGHT, IMG_WIDTH) input_image, real_image = normalize(input_image, real_image) return input_image, real_image # + [markdown] colab_type="text" id="PIGN6ouoQxt3" # ## Input Pipeline # + colab={} colab_type="code" id="SQHmYSmk8b4b" train_dataset = tf.data.Dataset.list_files(PATH+'train/*.jpg') train_dataset = train_dataset.shuffle(BUFFER_SIZE) train_dataset = train_dataset.map(load_image_train, num_parallel_calls=tf.data.experimental.AUTOTUNE) train_dataset = train_dataset.batch(1) # + colab={} colab_type="code" id="MS9J0yA58b4g" test_dataset = tf.data.Dataset.list_files(PATH+'test/*.jpg') # shuffling so that for every epoch a different image is generated # to predict and display the progress of our model. train_dataset = train_dataset.shuffle(BUFFER_SIZE) test_dataset = test_dataset.map(load_image_test) test_dataset = test_dataset.batch(1) # + [markdown] colab_type="text" id="THY-sZMiQ4UV" # ## Build the Generator # * The architecture of generator is a modified U-Net. # * Each block in the encoder is (Conv -> Batchnorm -> Leaky ReLU) # * Each block in the decoder is (Transposed Conv -> Batchnorm -> Dropout(applied to the first 3 blocks) -> ReLU) # * There are skip connections between the encoder and decoder (as in U-Net). 
# # # + colab={} colab_type="code" id="tqqvWxlw8b4l" OUTPUT_CHANNELS = 3 # + colab={} colab_type="code" id="3R09ATE_SH9P" def downsample(filters, size, apply_batchnorm=True): initializer = tf.random_normal_initializer(0., 0.02) result = tf.keras.Sequential() result.add( tf.keras.layers.Conv2D(filters, size, strides=2, padding='same', kernel_initializer=initializer, use_bias=False)) if apply_batchnorm: result.add(tf.keras.layers.BatchNormalization()) result.add(tf.keras.layers.LeakyReLU()) return result # + colab={} colab_type="code" id="a6_uCZCppTh7" down_model = downsample(3, 4) down_result = down_model(tf.expand_dims(inp, 0)) print (down_result.shape) # + colab={} colab_type="code" id="nhgDsHClSQzP" def upsample(filters, size, apply_dropout=False): initializer = tf.random_normal_initializer(0., 0.02) result = tf.keras.Sequential() result.add( tf.keras.layers.Conv2DTranspose(filters, size, strides=2, padding='same', kernel_initializer=initializer, use_bias=False)) result.add(tf.keras.layers.BatchNormalization()) if apply_dropout: result.add(tf.keras.layers.Dropout(0.5)) result.add(tf.keras.layers.ReLU()) return result # + colab={} colab_type="code" id="mz-ahSdsq0Oc" up_model = upsample(3, 4) up_result = up_model(down_result) print (up_result.shape) # + colab={} colab_type="code" id="lFPI4Nu-8b4q" def Generator(): down_stack = [ downsample(64, 4, apply_batchnorm=False), # (bs, 128, 128, 64) downsample(128, 4), # (bs, 64, 64, 128) downsample(256, 4), # (bs, 32, 32, 256) downsample(512, 4), # (bs, 16, 16, 512) downsample(512, 4), # (bs, 8, 8, 512) downsample(512, 4), # (bs, 4, 4, 512) downsample(512, 4), # (bs, 2, 2, 512) downsample(512, 4), # (bs, 1, 1, 512) ] up_stack = [ upsample(512, 4, apply_dropout=True), # (bs, 2, 2, 1024) upsample(512, 4, apply_dropout=True), # (bs, 4, 4, 1024) upsample(512, 4, apply_dropout=True), # (bs, 8, 8, 1024) upsample(512, 4), # (bs, 16, 16, 1024) upsample(256, 4), # (bs, 32, 32, 512) upsample(128, 4), # (bs, 64, 64, 256) upsample(64, 4), # (bs, 128, 128, 128) ] initializer = tf.random_normal_initializer(0., 0.02) last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4, strides=2, padding='same', kernel_initializer=initializer, activation='tanh') # (bs, 256, 256, 3) concat = tf.keras.layers.Concatenate() inputs = tf.keras.layers.Input(shape=[None,None,3]) x = inputs # Downsampling through the model skips = [] for down in down_stack: x = down(x) skips.append(x) skips = reversed(skips[:-1]) # Upsampling and establishing the skip connections for up, skip in zip(up_stack, skips): x = up(x) x = concat([x, skip]) x = last(x) return tf.keras.Model(inputs=inputs, outputs=x) # + colab={} colab_type="code" id="U1N1_obwtdQH" generator = Generator() gen_output = generator(inp[tf.newaxis,...], training=False) plt.imshow(gen_output[0,...]) # + [markdown] colab_type="text" id="ZTKZfoaoEF22" # ## Build the Discriminator # * The Discriminator is a PatchGAN. # * Each block in the discriminator is (Conv -> BatchNorm -> Leaky ReLU) # * The shape of the output after the last layer is (batch_size, 30, 30, 1) # * Each 30x30 patch of the output classifies a 70x70 portion of the input image (such an architecture is called a PatchGAN). # * Discriminator receives 2 inputs. # * Input image and the target image, which it should classify as real. # * Input image and the generated image (output of generator), which it should classify as fake. 
# * We concatenate these 2 inputs together in the code (`tf.concat([inp, tar], axis=-1)`) # + colab={} colab_type="code" id="ll6aNeQx8b4v" def Discriminator(): initializer = tf.random_normal_initializer(0., 0.02) inp = tf.keras.layers.Input(shape=[None, None, 3], name='input_image') tar = tf.keras.layers.Input(shape=[None, None, 3], name='target_image') x = tf.keras.layers.concatenate([inp, tar]) # (bs, 256, 256, channels*2) down1 = downsample(64, 4, False)(x) # (bs, 128, 128, 64) down2 = downsample(128, 4)(down1) # (bs, 64, 64, 128) down3 = downsample(256, 4)(down2) # (bs, 32, 32, 256) zero_pad1 = tf.keras.layers.ZeroPadding2D()(down3) # (bs, 34, 34, 256) conv = tf.keras.layers.Conv2D(512, 4, strides=1, kernel_initializer=initializer, use_bias=False)(zero_pad1) # (bs, 31, 31, 512) batchnorm1 = tf.keras.layers.BatchNormalization()(conv) leaky_relu = tf.keras.layers.LeakyReLU()(batchnorm1) zero_pad2 = tf.keras.layers.ZeroPadding2D()(leaky_relu) # (bs, 33, 33, 512) last = tf.keras.layers.Conv2D(1, 4, strides=1, kernel_initializer=initializer)(zero_pad2) # (bs, 30, 30, 1) return tf.keras.Model(inputs=[inp, tar], outputs=last) # + colab={} colab_type="code" id="gDkA05NE6QMs" discriminator = Discriminator() disc_out = discriminator([inp[tf.newaxis,...], gen_output], training=False) plt.imshow(disc_out[0,...,-1], vmin=-20, vmax=20, cmap='RdBu_r') plt.colorbar() # + [markdown] colab_type="text" id="-ede4p2YELFa" # To learn more about the architecture and the hyperparameters you can refer the [paper](https://arxiv.org/abs/1611.07004). # + [markdown] colab_type="text" id="0FMYgY_mPfTi" # ## Define the loss functions and the optimizer # # * **Discriminator loss** # * The discriminator loss function takes 2 inputs; **real images, generated images** # * real_loss is a sigmoid cross entropy loss of the **real images** and an **array of ones(since these are the real images)** # * generated_loss is a sigmoid cross entropy loss of the **generated images** and an **array of zeros(since these are the fake images)** # * Then the total_loss is the sum of real_loss and the generated_loss # # * **Generator loss** # * It is a sigmoid cross entropy loss of the generated images and an **array of ones**. # * The [paper](https://arxiv.org/abs/1611.07004) also includes L1 loss which is MAE (mean absolute error) between the generated image and the target image. # * This allows the generated image to become structurally similar to the target image. # * The formula to calculate the total generator loss = gan_loss + LAMBDA * l1_loss, where LAMBDA = 100. This value was decided by the authors of the [paper](https://arxiv.org/abs/1611.07004). 
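# Stated as equations (a restatement of the loss description above, where $x$ is the input image, $y$ the target image, and BCE denotes sigmoid cross entropy):
#
# $$\mathcal{L}_D = \mathrm{BCE}\big(\mathbf{1},\, D(x, y)\big) + \mathrm{BCE}\big(\mathbf{0},\, D(x, G(x))\big)$$
#
# $$\mathcal{L}_G = \mathrm{BCE}\big(\mathbf{1},\, D(x, G(x))\big) + \lambda \cdot \mathrm{MAE}\big(y,\, G(x)\big), \qquad \lambda = 100$$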
# + colab={} colab_type="code" id="cyhxTuvJyIHV" LAMBDA = 100 # + colab={} colab_type="code" id="Q1Xbz5OaLj5C" loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True) # + colab={} colab_type="code" id="wkMNfBWlT-PV" def discriminator_loss(disc_real_output, disc_generated_output): real_loss = loss_object(tf.ones_like(disc_real_output), disc_real_output) generated_loss = loss_object(tf.zeros_like(disc_generated_output), disc_generated_output) total_disc_loss = real_loss + generated_loss return total_disc_loss # + colab={} colab_type="code" id="90BIcCKcDMxz" def generator_loss(disc_generated_output, gen_output, target): gan_loss = loss_object(tf.ones_like(disc_generated_output), disc_generated_output) # mean absolute error l1_loss = tf.reduce_mean(tf.abs(target - gen_output)) total_gen_loss = gan_loss + (LAMBDA * l1_loss) return total_gen_loss # + colab={} colab_type="code" id="iWCn_PVdEJZ7" generator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5) discriminator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5) # + [markdown] colab_type="text" id="aKUZnDiqQrAh" # ## Checkpoints (Object-based saving) # + colab={} colab_type="code" id="WJnftd5sQsv6" checkpoint_dir = './training_checkpoints' checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt") checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer, discriminator_optimizer=discriminator_optimizer, generator=generator, discriminator=discriminator) # + [markdown] colab_type="text" id="Rw1fkAczTQYh" # ## Training # # * We start by iterating over the dataset # * The generator gets the input image and we get a generated output. # * The discriminator receives the input_image and the generated image as the first input. The second input is the input_image and the target_image. # * Next, we calculate the generator and the discriminator loss. # * Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables(inputs) and apply those to the optimizer. # * This entire procedure is shown in the images below. # # ![Discriminator Update Image](images/dis.png) # # # --- # # # ![Generator Update Image](images/gen.png) # # ## Generate Images # # * After training, its time to generate some images! # * We pass images from the test dataset to the generator. # * The generator will then translate the input image into the output we expect. # * Last step is to plot the predictions and **voila!** # + colab={} colab_type="code" id="NS2GWywBbAWo" EPOCHS = 150 # + colab={} colab_type="code" id="RmdVsmvhPxyy" def generate_images(model, test_input, tar): # the training=True is intentional here since # we want the batch statistics while running the model # on the test dataset. If we use training=False, we will get # the accumulated statistics learned from the training dataset # (which we don't want) prediction = model(test_input, training=True) plt.figure(figsize=(15,15)) display_list = [test_input[0], tar[0], prediction[0]] title = ['Input Image', 'Ground Truth', 'Predicted Image'] for i in range(3): plt.subplot(1, 3, i+1) plt.title(title[i]) # getting the pixel values between [0, 1] to plot it. 
plt.imshow(display_list[i] * 0.5 + 0.5) plt.axis('off') plt.show() # + colab={} colab_type="code" id="KBKUV2sKXDbY" @tf.function def train_step(input_image, target): with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape: gen_output = generator(input_image, training=True) disc_real_output = discriminator([input_image, target], training=True) disc_generated_output = discriminator([input_image, gen_output], training=True) gen_loss = generator_loss(disc_generated_output, gen_output, target) disc_loss = discriminator_loss(disc_real_output, disc_generated_output) generator_gradients = gen_tape.gradient(gen_loss, generator.trainable_variables) discriminator_gradients = disc_tape.gradient(disc_loss, discriminator.trainable_variables) generator_optimizer.apply_gradients(zip(generator_gradients, generator.trainable_variables)) discriminator_optimizer.apply_gradients(zip(discriminator_gradients, discriminator.trainable_variables)) # + colab={} colab_type="code" id="2M7LmLtGEMQJ" def train(dataset, epochs): for epoch in range(epochs): start = time.time() for input_image, target in dataset: train_step(input_image, target) clear_output(wait=True) for inp, tar in test_dataset.take(1): generate_images(generator, inp, tar) # saving (checkpoint) the model every 20 epochs if (epoch + 1) % 20 == 0: checkpoint.save(file_prefix = checkpoint_prefix) print ('Time taken for epoch {} is {} sec\n'.format(epoch + 1, time.time()-start)) # + colab={} colab_type="code" id="a1zZmKmvOH85" train(train_dataset, EPOCHS) # + [markdown] colab_type="text" id="kz80bY3aQ1VZ" # ## Restore the latest checkpoint and test # + colab={} colab_type="code" id="HSSm4kfvJiqv" # !ls {checkpoint_dir} # + colab={} colab_type="code" id="4t4x69adQ5xb" # restoring the latest checkpoint in checkpoint_dir checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir)) # + [markdown] colab_type="text" id="1RGysMU_BZhx" # ## Generate using test dataset # + colab={} colab_type="code" id="" # Run the trained model on the entire test dataset for inp, tar in test_dataset.take(5): generate_images(generator, inp, tar) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.9 64-bit # language: python # name: python3 # --- # ## What is Matplotlib? # Matplotlib is a low level graph plotting library in python that serves as a visualization utility. # # Matplotlib was created by . # # Matplotlib is open source and we can use it freely. # # Matplotlib is mostly written in python, a few segments are written in C, Objective-C and Javascript for Platform compatibility. # # # + import matplotlib print(matplotlib.__version__) # - import matplotlib.pyplot as plt import numpy as np # + xpoints = np.array([0,6]) ypoints = np.array([0, 250]) plt.plot(xpoints, ypoints) plt.show() # + #Plotting Without Line #To plot only the markers, you can use shortcut string notation parameter 'o', which means 'rings'. 
xpoints = np.array([1, 8]) ypoints = np.array([3, 10]) plt.plot(xpoints, ypoints, 'o') plt.show() # + # Multiple Points xpoints = np.array([1, 2, 6, 8]) ypoints = np.array([3, 8, 1, 10]) plt.plot(xpoints, ypoints) plt.show() # + # Markers ypoints = np.array([3, 8, 1, 10]) plt.plot(ypoints, marker='o') plt.show() plt.plot(ypoints, marker='*') plt.show() plt.plot(ypoints, 'o:r', ms=10, mec='#4CAF50', mfc='#4CAF50') plt.show() # + #Matplotlib Line ypoints = np.array([3, 8, 1, 10]) plt.plot(ypoints, linestyle='dotted') plt.show() plt.plot(ypoints, linestyle='dashed') plt.show() # + # Multilines y1 = np.array([3, 8, 1, 10]) y2 = np.array([6, 2, 7, 11]) plt.plot(y1) plt.plot(y2) plt.show() # + # Mathplotlib Labels and Title x = np.array([80, 85, 90, 95, 100, 105, 110, 115, 120, 125]) y = np.array([240, 250, 260, 270, 280, 290, 300, 310, 320, 330]) plt.plot(x, y) plt.title("Sports Watch Data", loc="left") plt.xlabel("Average Pulse") plt.ylabel("Calorie Burnage") plt.grid() plt.show() # - # The subplots() Function # The subplots() function takes three arguments that describes the layout of the figure. # # The layout is organized in rows and columns, which are represented by the first and second argument. # # The third argument represents the index of the current plot. # # ```python # plt.subplot(1, 2, 1) # #the figure has 1 row, 2 columns, and this plot is the first plot. # ``` # + # Subplots #plot 1: x = np.array([0, 1, 2, 3]) y = np.array([3, 8, 1, 10]) plt.subplot(1, 2, 1) plt.plot(x, y) #plot 2: x = np.array([0, 1, 2, 3]) y = np.array([10, 20, 30, 40]) plt.subplot(1, 2, 2) plt.plot(x, y) plt.show() # + #plot 1: x = np.array([0, 1, 2, 3]) y = np.array([3, 8, 1, 10]) plt.subplot(2, 1, 1) plt.plot(x, y) #plot 2: x = np.array([0, 1, 2, 3]) y = np.array([10, 20, 30, 40]) plt.subplot(2, 1, 2) plt.plot(x, y) plt.suptitle("سلام") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import cv2 import numpy as np import pandas as pd import string import tensorflow as tf # 设置参数 from params import * # ## Dataset # + def getPath(path): ''' 获取路径 ''' for root, dirs, files in os.walk(trainDir): # 去除隐藏文件 if '.DS_Store' in files: files.remove('.DS_Store') # 添加系统路径 files = pd.Series(files).apply(lambda x: os.path.join(trainDir, x)).values return files def getLabel(filesPath): # 获取标签 labels = np.zeros((len(filesPath), n_len , n_class), dtype= np.uint8) for i in range(len(filesPath)): #获取文件名 name = os.path.splitext(filesPath[i])[0].split('_') num, content = name[0], name[1] # 把标记赋值给张量 for j, ch in enumerate(content): labels[i][j, :] = 0 labels[i][j, characters.find(ch)] = 1 return labels # - def load_and_preprocess_image(filePath): # 读取图片 image = tf.io.read_file(filePath) # 将png格式的图片解码,得到一个张量(一维的矩阵) image = tf.image.decode_png(image, channels=1) # 调整大小 image = tf.image.resize(image, [height, width]) # 对每个像素点的RGB值做归一化处理 image /= 255.0 return image def getDataset(dirPath): # 获取图片 filesPath = getPath(dirPath) # 获取标签 labels = getLabel(filesPath) # 构建图片路径的“dataset” dataset = tf.data.Dataset.from_tensor_slices(filesPath) # 使用AUTOTUNE自动调节管道参数 AUTOTUNE = tf.data.experimental.AUTOTUNE # 处理图片 image_ds = dataset.map(load_and_preprocess_image,num_parallel_calls=AUTOTUNE) # 构建类标数据的“dataset” label_ds = tf.data.Dataset.from_tensor_slices(labels) # 将图片和类标压缩为(图片,类标)对 image_label_ds = tf.data.Dataset.zip((image_ds, label_ds)) # 形成batch 
image_label_ds = image_label_ds.batch(batch_size) # 让数据集重复多次 image_label_ds = image_label_ds.repeat() # 通过“prefetch”方法让模型的训练和每个batch数据集的加载并行 image_label_ds = image_label_ds.prefetch(buffer_size=AUTOTUNE) return image_label_ds # ## 数据生成器 def gen(path = trainDir, batch_size = 32): ''' 获取训练数据/验证数据 ''' X = np.zeros((batch_size, height, width, channel), dtype= np.float16) y = np.zeros((batch_size, n_len , n_class), dtype= np.uint8) # 遍历目录 for root, dirs, files in os.walk(path): # 去除隐藏文件 if '.DS_Store' in files: files.remove('.DS_Store') # 设置起始指针 pointer = 0 while(True): # 若指针超过文件数量,从头开始 if pointer + batch_size >= len(files): pointer = 0 # 遍历文件名 for i in range(batch_size): file = files[pointer + i] #获取文件名 name = os.path.splitext(file)[0].split('_') num, content = name[0], name[1] #生成读取路径 readPath = os.path.join(path, file) # 读取图片 imgBuffer = cv2.imread(readPath, 0) # 改变图片大小 imgBuffer = cv2.resize(imgBuffer, (width, height)) # 二值化 # t, imgBuffer = cv2.threshold(imgBuffer, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) # 归一化 minX = imgBuffer.min() imgBuffer = imgBuffer - minX maxX = max(1, imgBuffer.max()) imgBuffer = np.array(imgBuffer / maxX, np.float16) # 改变图片维度,适应模型输入 imgBuffer = np.expand_dims(imgBuffer, axis = 2) # 把图像赋值给张量 X[i] = imgBuffer # 把标记赋值给张量 for j, ch in enumerate(content): y[i][j, :] = 0 y[i][j, characters.find(ch)] = 1 # 指针指向下一个batch pointer += batch_size # 输出 yield X, y def test_gen(path = testDir, batch_size = 1): ''' 获取测试数据 ''' X = np.zeros((batch_size, height, width, channel), dtype= np.float16) # 遍历目录 for root, dirs, files in os.walk(path): # 去除隐藏文件 if '.DS_Store' in files: files.remove('.DS_Store') # 设置起始指针 pointer = 0 while(True): # 若指针超过文件数量,从头开始 if pointer + batch_size >= len(files): pointer = 0 # 遍历文件名 for i in range(batch_size): file = files[pointer + i] #获取文件名 num = os.path.splitext(file)[0] #生成读取路径 readPath = os.path.join(path, file) # 读取图片 imgBuffer = cv2.imread(readPath, 0) # 改变图片大小 imgBuffer = cv2.resize(imgBuffer, (width, height)) # 二值化 # t, imgBuffer = cv2.threshold(imgBuffer, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) # 归一化 minX = imgBuffer.min() imgBuffer = imgBuffer - minX maxX = max(1, imgBuffer.max()) # 将对比度拉伸到0-255范围内 imgBuffer = np.array(imgBuffer / maxX, np.float16) # 改变图片维度,适应模型输入 imgBuffer = np.expand_dims(imgBuffer, axis = 2) # 把图像赋值给张量 X[i] = imgBuffer # 指针指向下一个batch pointer += batch_size # 输出 yield X, num # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: spark # language: python # name: spark # --- # # Spark # ## Spark Concepts # # ### History # # - Motivation # - Move computing to data, not data to computing # - Google # Google Distributed Filesystem (GFS) # Big Table # Map-reduce # - Yahoo! 
# - Hadoop Distributed File System (HDFS)
# - Yet Another Resource Negotiator (YARN)
# - MapReduce
# - Limitations of MapReduce
# - Cumbersome API
# - Every stage reads from/writes to disk
# - No native interactive SQL (HIVE, Impala, Drill)
# - No native streaming (Storm)
# - No native machine learning (Mahout)
# - Spark
# - Simple API
# - In-memory storage
# - Better fault tolerance
# - Can take advantage of embarrassingly parallel computations
# - Multi-language support (Scala, Java, Python, R)
# - Support multiple workloads
# - Spark 1.0 released May 11, 2014
# - Speed
# - DAG scheduler
# - Catalyst query optimizer
# - Tungsten execution engine
# - Ease of use
# - Resilient Distributed Dataset (RDD)
# - Lazy evaluation
# - Modularity
# - Languages
# - APIs - Structured Streaming, GraphX, MLLib
# - Extensible
# - Data readers
# - Data writers
# - [3rd party connectors](https://spark.apache.org/third-party-projects.html)
# - Multiple platforms - local, data center, cloud, Kubernetes
# - Distributed execution concepts
# - Spark driver
# - Spark session
# - Spark shell
# - Communicates with Spark Master
# - Communicates with Spark workers
# - Spark master
# - Resource management on cluster
# - Spark workers
# - Communicate resources to cluster manager
# - Start Spark Executors
# - Spark executors
# - Communicate with driver
# - Runs tasks
# - Can run multiple threads in parallel
# - Execution process
# - Driver creates jobs
# - Each job is a DAG
# - DAGScheduler translates into physical plan using RDDs
# - Optimization includes merging and splitting into stages
# - TaskScheduler distributes physical plans to Executors
# - Job consists of one or more stages
# - Stage normally ends when there is a need to exchange data (shuffle)
# - Stage consists of tasks
# - A task is a unit of execution
# - Each task is sent to one executor and assigned one data partition
# - A multi-core computer can run several tasks in parallel
# - Transforms and actions
# - Transforms are lazy
# - Actions are eager
# - Narrow versus wide transforms
# ### Resources
#
# - [Quick Start](http://spark.apache.org/docs/latest/quick-start.html)
# - [Spark Programming Guide](http://spark.apache.org/docs/latest/programming-guide.html)
# - [DataFrames, DataSets and SQL](http://spark.apache.org/docs/latest/sql-programming-guide.html)
# - [MLLib](http://spark.apache.org/docs/latest/mllib-guide.html)
# - [GraphX](http://spark.apache.org/docs/latest/graphx-programming-guide.html)
# - [Streaming](http://spark.apache.org/docs/latest/streaming-programming-guide.html)
# ## Distributed computing background
#
# With distributed computing, you interact with a network of computers that communicate via message passing as if issuing instructions to a single computer.
#
# ![Distributed computing](https://image.slidesharecdn.com/distributedcomputingwithspark-150414042905-conversion-gate01/95/distributed-computing-with-spark-21-638.jpg?)
#
# Source: https://image.slidesharecdn.com/distributedcomputingwithspark-150414042905-conversion-gate01/95/distributed-computing-with-spark-21-638.jpg
#
# ### Hadoop and Spark
#
# - There are 3 major components to a distributed system
#     - storage
#     - cluster management
#     - computing engine
#
# - Hadoop is a framework that provides all 3
#     - distributed storage (HDFS)
#     - cluster management (YARN)
#     - computing engine (MapReduce)
#
# - Spark only provides the (in-memory) distributed computing engine, and relies on other frameworks for storage and cluster management.
It is most frequently used on top of the Hadoop framework, but can also use other distributed storage(e.g. S3 and Cassandra) or cluster mangement (e.g. Mesos) software. # # ### Distributed storage # # ![storage](http://slideplayer.com/slide/3406872/12/images/15/HDFS+Framework+Key+features+of+HDFS:.jpg) # # Source: http://slideplayer.com/slide/3406872/12/images/15/HDFS+Framework+Key+features+of+HDFS:.jpg # # ### Role of YARN # # - Resource manager (manages cluster resources) # - Scheduler # - Applications manager # - Node manager (manages single machine/node) # - manages data containers/partitions # - monitors resource usage # - reports to resource manager # # ![Yarn](https://kannandreams.files.wordpress.com/2013/11/yarn1.png) # # Source: https://kannandreams.files.wordpress.com/2013/11/yarn1.png # # ### YARN operations # # ![yarn ops](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/yarn_architecture.gif) # # Source: https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/yarn_architecture.gif # # ### Hadoop MapReduce versus Spark # # Spark has several advantages over Hadoop MapReduce # # - Use of RAM rahter than disk mean faster processing for multi-step operations # - Allows interactive applications # - Allows real-time applications # - More flexible programming API (full range of functional constructs) # # ![Hadoop](https://i0.wp.com/s3.amazonaws.com/acadgildsite/wordpress_images/bigdatadeveloper/10+steps+to+master+apache+spark/hadoop_spark_1.png) # # Source: https://i0.wp.com/s3.amazonaws.com/acadgildsite/wordpress_images/bigdatadeveloper/10+steps+to+master+apache+spark/hadoop_spark_1.png # # ### Overall Ecosystem # # ![spark](https://cdn-images-1.medium.com/max/1165/1*z0Vm749Pu6mHdlyPsznMRg.png) # # Source: https://cdn-images-1.medium.com/max/1165/1*z0Vm749Pu6mHdlyPsznMRg.png # # ### Spark Ecosystem # # - Spark is written in Scala, a functional programming language built on top of the Java Virtual Machine (JVM) # - Traditionally, you have to code in Scala to get the best performance from Spark # - With Spark DataFrames and vectorized operations (Spark 2.3 onwards) Python is now competitive # # ![eco](https://databricks.com/wp-content/uploads/2018/12/Spark.jpg) # # Source: https://databricks.com/wp-content/uploads/2018/12/Spark.jpg # # ### Livy and Spark magic # # - Livy provides a REST interface to a Spark cluster. 
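# A minimal sketch of what that REST interface looks like from Python (assuming a local Livy server running on its default port, 8998; `sparkmagic` issues requests along these lines for you):
#
# +
import requests

LIVY = 'http://localhost:8998'

# Ask Livy to start an interactive PySpark session on the cluster
session = requests.post(LIVY + '/sessions', json={'kind': 'pyspark'}).json()

# Once the session state is 'idle' (poll GET /sessions/{id}), submit code to it
stmt = requests.post(f"{LIVY}/sessions/{session['id']}/statements",
                     json={'code': '1 + 1'}).json()

# Poll the statement until its state is 'available', then read its output
print(requests.get(f"{LIVY}/sessions/{session['id']}/statements/{stmt['id']}").json())
# -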
# # ![Livy](https://cdn-images-1.medium.com/max/956/0*-lwKpnEq0Tpi3Tlj.png) # # Source: https://cdn-images-1.medium.com/max/956/0*-lwKpnEq0Tpi3Tlj.png # # ### PySpark # # ![PySpark](http://i.imgur.com/YlI8AqEl.png) # # Source: http://i.imgur.com/YlI8AqEl.png # # ### Resilient distributed datasets (RDDs) # # ![rdd](https://mapr.com/blog/real-time-streaming-data-pipelines-apache-apis-kafka-spark-streaming-and-hbase/assets/blogimages/msspark/imag12.png) # # Source: https://mapr.com/blog/real-time-streaming-data-pipelines-apache-apis-kafka-spark-streaming-and-hbase/assets/blogimages/msspark/imag12.png # # ### Spark fault tolerance # # ![graph](https://image.slidesharecdn.com/deep-dive-with-spark-streamingtathagata-dasspark-meetup2013-06-17-130623151510-phpapp02/95/deep-dive-with-spark-streaming-tathagata-das-spark-meetup-20130617-13-638.jpg) # # Source: https://image.slidesharecdn.com/deep-dive-with-spark-streamingtathagata-dasspark-meetup2013-06-17-130623151510-phpapp02/95/deep-dive-with-spark-streaming-tathagata-das-spark-meetup-20130617-13-638.jpg # ## Install Spark # # - If necessary, install [Java](https://java.com/en/download/help/download_options.xml) # - Downlaod and install [Sppark](http://spark.apache.org/downloads.html) # # ```bash # wget http://apache.mirrors.pair.com/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz # tar xzf spark-2.4.4-bin-hadoop2.7.tgz # sudo mv spark-2.4.4-bin-hadoop2.7 /usr/local/spark # ``` # # Set up graphframes # # ```bash # pip install graphframes # ``` # # Set up environment variables # # ```bash # export PATH=$PATH:/usr/local/spark/bin # export SPARK_HOME=/usr/local/spark # export PYTHONPATH=$SPARK_HOME/python:$PYTHONPATH # export PYSPARK_DRIVER_PYTHON="jupyter" # export PYSPARK_DRIVER_PYTHON_OPTS="notebook" # export PYSPARK_PYTHON=python3 # ``` # ### If you want `sparkmagic` # # Install and start `livy` # ``` # # cd ~ # wget http://apache.osuosl.org/incubator/livy/0.6.0-incubating/apache-livy-0.6.0-incubating-bin.zip # unzip apache-livy-0.6.0-incubating-bin.zip # # mv apache-livy-0.6.0-incubating-bin livy # livy/bin/livy-server start # ``` # # Install `sparkmagic` # # ``` # pip install sparkmagic # jupyter nbextension enable --py --sys-prefix widgetsnbextension # ``` # # Type `pip show sparkmagic` and cd to the directory shown in LOCATION # # ``` # jupyter-kernelspec install sparkmagic/kernels/pysparkkernel # jupyter serverextension enable --py sparkmagic # ``` # # For the adventurous, see [Running Spark on an AWS EMR cluster](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark.html). # ## Check from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() # ### Spark UI # # - Default port 4040 http://localhost:4040/ # %%file candy.csv name,age,candy tom,3,m&m shirley,6,mentos david,4,candy floss anne,5,starburst df = spark.read.csv('candy.csv') df.show(n=10) spark.stop() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.6 64-bit # language: python # name: python396jvsc74a57bd0b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e # --- # # String Data Structure # ## © []() ## # ### Industrial and Enterprise Systems Engineering, The Grainger College of Engineering, UIUC ### # #
    # # ## 1. Check if a given string is a valid number (Integer or Floating Point) | SET 2 (Regular Expression approach) # + // Java program to check whether given string // is a valid integer number using regex import java.util.regex.Pattern; import.java.util.regex.Matcher; class GFG { public static void main(String[] args) { String input1 = "abc"; String input2 = "1234"; // regular expression for an integer String regex = "[+-]?[0-9][0-9]*"; // compile regex Pattern p = Pattern.compile(regex); // Create a matcher that will match input 1 against regex Matcher m = p.matcher(input1); // If match found and equal to input1 if(m.find() && m.group().equals(input1)) System.out.println(input1 + " is a valid integer") else System.out.println(input1 + " is not a valid integer") // Creates a matcher that will match input2 against regex m = p.matcher(input2); // If match found and equal to input2 if(m.find() && m.group().equals(input2)) System.out.println(input2 + " is a valid integer number"); else System.out.println(input2 + " is not a valid integer number"); } } class GFG1 { public static void main(String[] args) { String input1 = "10e5.4"; String input2 = "2e10"; String regex = "[+-]?[0-9]+(\\.[0-9]+)?([Ee][+-]?[0-9]+)?"; Pattern p = Pattern.compile(regex); Matcher m = p.matcher(input1); if (m.find() && m.group().equals(input1)) System.out.println(input1 + " is a valid floating number"); else System.out.println(input1 + " is not a valid floating number"); m = p.matcher(input2); if (m.find() && m.group().equals(input2)) System.out.println(input2 + " is a valid floating number"); else System.out.println(input2 + " is not a valid floating number"); } } # - # ## 2. Minimum Window Substring # # Given two strings `s` and `t`, return the minimum window in `s` which will contain all the characters in `t`. If there is no such window in `s` that covers all characters in `t`, return the empty string `""`. # + from collections import Counter def minWindow(s, t): # count dictionary to store unique characters dict_t = Counter(t) # interger variable to store the count of the unique characters of s that are also in t formed = 0 # desired length of the window required = len(dict_t) # tuple to store (window lenght, left, right pointers) ans = float("inf"), None, None # left and right pointers l, r = 0, 0 # list that stores the (index, char) of s that are also present in t filtered_s = [] # store the count of the characters, as we go through filtered_s window_counts = {} # go through index i and character char. If char is present in dict_t, append it to filtered_s for i, char in enumerate(s): if char in dict_t: filtered_s.append((i, char)) # while right pointer has not reached the end while r 1: if curr_count[strr[j]]>1: curr_count[strr[j]] -= 1 start += 1 # update the current window length len_window = j - start + 1 # if it is less than min_len, update it and the start_index of the final substring if min_len>len_window: min_len = len_window start_index = start return str(strr[start_index: start_index+min_len]) # Driver code if __name__ == '__main__': print("Smallest window containing " "all distinct characters is: {}".format( findSubString("aabcbcdbca"))) # - # ## 4. Number of substrings of a string # # Find total number of non-empty substrings of a string with N characters. # + def countNonEmptySubString(string): n = len(string) return int(n * (n+1)/2) s = "abcde" print(countNonEmptySubString(s)) # - # ## 5. 
Print all distinct characters of a string in order # # Given a string, find the all distinct (or non-repeating characters) in it. The distinct characters should be printed in same order as they appear in input string. # + MAX_CHAR = 256 def printDistinct(string): n = len(string) # count stores the count of the character 'x'. # If x is not present in string, count will be 0. count = [0 for i in range(MAX_CHAR)] # index stores the index of the character 'x' in string. If 'x' is not present or 'x' is more than once, # then it will store a value that is not a valid index in input string index = [n for i in range(MAX_CHAR)] # Traverse the string for i in range(n): # Get the unicode value of the character stored at index 'i' in string x = ord(string[i]) # Increment the count count[x] += 1 # if it's the first occurence of x or x is not space, store the index of string of x in 'index' if count[x] == 1 and x != " ": index[x] = i # if x occurs more than once, remove it if count[x] == 2: index[x] = n # sort the index index = sorted(index) for i in range(MAX_CHAR): if index[i]==n: break print(string[index[i]], end = '') # Driver code Str = "GeeksforGeeks" printDistinct(Str) # - # ## 6. Printing Longest Common Subsequence | Set 2 (Printing All) # # Given two sequences, print all longest subsequence present in both of them. # + # maximum length of L N = 100 # L stores the length of the common subsequences of x and y. x is in the row and y is in the column L = [[0 for i in range(N)] for i in range(N)] def LCS(x, y, m, n): # Since the length of subsequence at (0,0) is 0, L has (m+1, n+1) indices to store values of the lengths of the subsequences # go through each row for i in range(m+1): # go through each column for j in range(n+1): # if they are the first indices, mark it as 0 in L if i==0 or j==0: L[i][j] = 0 # if characters of x and y match in (i-1, j-1), respective lengths at (i,j) of L will be updated elif x[i-1] == y[j-1]: L[i][j] = 1 + L[i-1][j-1] # if characters of x and y don't match, (i,j) of L is filled by the maximum of the lengths of the adjacent cells of (i,j) else: L[i][j] = max(L[i-1][j], L[i][j-1]) return L[m][n] def findLCS(x, y, m, n): # set to store all possible LCS s = set() # if we reached the top matrix if m==0 or n==0: s.add("") return s # if last characters match, get the set from the matrix bounded by x[0..m-2] and y[0...n-2] if x[m-1] == y[n-1]: temp = findLCS(x, y, m-1, n-1) # for each of the LCS inside temp set, append the current character, because it matched. for string in temp: s.add(string+x[m-1]) # if last characters don't match, then either go with top matrix or left matrix else: # if LCS can be constructed from the top of the matrix. Keep s as the set for top side of the matrix. We will merge the sets # of top and left side of the matrices whenever L[m-1][n] == L[m][n-1] if L[m-1][n]>=L[m][n-1]: s = findLCS(x, y, m-1, n) # if LCS can be constructed from the left side of the matrix if L[m][n-1]>=L[m-1][n]: temp = findLCS(x, y, m, n-1) # if L[m-1][n] == L[m][n-1], then we need to merge both sets. for i in temp: s.add(i) return s # Driver Code if __name__ == "__main__": x = "AGTGATG" y = "GTTAG" m = len(x) n = len(y) print("LCS length is", LCS(x, y, m, n)) s = findLCS(x, y, m, n) for i in s: print(i) # - # ## 7. Shortest Superstring Problem # # Given a set of n strings arr[], find the smallest string that contains each string in the given set as substring. We may assume that no string in arr[] is substring of another string. 
# # + # string to store the resultant string of the maximum overlapping pair of strings string = "" def findShortestSuperString(arr, len): global string # run len-1 times to consider every pair while len!=1: # max stores the maximum overlap max = float("-inf") # l and r stores the array indices of the maximum overlapping strings l = 0 r = 0 # store the final resultant string after maximum overlap resultStr = "" # Go through all indices of array for i in range(len): # go through each pair of i for j in range(i, len): # get the maximum length of the overlap. This function even updates global string to store the resultant string after overlap res_len = findOverlappingPair(arr[i], arr[j]) # Update max, resultStr, l and r if res_len>max: max = res_len resultStr = string l = i r = j # We store the string at the last index in r, if there is overlap # or if there is no overlap, we simply add the string at last index to the string at first index # hence we can safely reduce the length of the arr to not include the last string again len -= 1 # if there is no overlap, we add last string to the first string if max == float("-inf"): # note that the length is reduced in the previous code line. So, the index for the last index of the string # is updated length. arr[0] += arr[len] # if there is overlap, we store the resultStr at l # and last string at index r (why? because strinng previously at r is appended to string at l) else: arr[l] = resultStr arr[r] = arr[len] # string stored at first index is the super string return arr[0] def findOverlappingPair(str1, str2): global string # max stores the maximum overlap max = float("-inf") # len1 and len2 stores the lengths of str1 and str2 len1 = len(str1) len2 = len(str2) # store the minimum of len1 and len2 min_len = min(len1, len2) # check if suffix of str1 matches with prefix of str2 for i in range(1, min_len): # compare last i characters in str1 with first i characters in str2 if str1[len1-i:] == str2[:i]: # if i is greater than max if i>max: # update max max = i # string stores the str1 appended with the rest of the string in str2 after i string = str1 + str2[i:] # check if suffix of str2 matches with prefix of str1 for i in range(1, min_len): # compare last i characters in str2 with first i characters in str1 if str2[len2-i:] == str1[:i]: # if i is greater than max if i>max: max = i # string stores the str2 appended with the rest of the string in str1 after i string = str2+str1[i:] return max arr = ["catgc", "ctaagt", "gcta", "ttca", "atgcatc"] arr_len = len(arr) print(f"The Shortest Superstring is {findShortestSuperString(arr, arr_len)}") # - # ## 8. Longest Common Subsequence | DP-4 # # LCS Problem Statement: Given two sequences, find the length of longest subsequence present in both of them. A subsequence is a sequence that appears in the same relative order, but not necessarily contiguous. For example, “abc”, “abg”, “bdf”, “aeg”, ‘”acefg”, .. etc are subsequences of “abcdefg”. # # It is a classic computer science problem, the basis of diff (a file comparison program that outputs the differences between two files), and has applications in bioinformatics. # + def lcs(X, Y, m, n): # Base case: in bottom-up approach, we have checked all the characters of X and Y. Lengths have become 0 if m==0 or n==0: return 0 # last characters of X and Y match. Hence, increase the length of the previous matrix by 1. By matrix, I mean X[0..m-1] and Y[0..n-1] elif X[m-1]== Y[n-1]: return 1+lcs(X, Y, m-1, n-1) # last characters don't match. 
Hence, length will be maximum of the lengths of # either top (X[0..m-1], Y[0..n]) or left matrix (X[0..m], Y[0..n-1]) else: return max(lcs(X, Y, m, n-1), lcs(X, Y, m-1, n)) # Driver program to test the above function X = "AGGTAB" Y = "GXTXAYB" print ("Length of LCS is ", lcs(X , Y, len(X), len(Y))) # - # ## 9. Find the first non-repeating character from a stream of characters # # Given a stream of characters, find the first non-repeating character from stream. You need to tell the first non-repeating character in $O(1)$ time at any moment. # + MAX_CHAR = 256 def findFirstNonRepeating(stream): inDLL = []* MAX_CHAR repeated = [False]*MAX_CHAR for i in range(len(stream)): x = stream[i] print(f"Reading {x} from stream") if not repeated[ord(x)]: if x not in inDLL: inDLL.append(x) else: inDLL.remove(x) repeated[ord(x)] = True if len(inDLL)!=0: print(f"First non-repeating character so far is {str(inDLL[0])} \n") stream = "geekforgeekandgeeksandquizfor" findFirstNonRepeating(stream) # - # ## 10. Check whether two strings are anagram of each other # # Write a function to check whether two given strings are anagram of each other or not. An anagram of a string is another string that contains the same characters, only the order of characters can be different. For example, “abcd” and “dabc” are an anagram of each other. # + MAX_CHAR = 256 def areAnagram(str1, str2): # count to store the difference of counts of characters of str1 and str2 count = [0 for i in range(MAX_CHAR)] # if lengths differ, then they are not anagrams if len(str1) != len(str2): return False # fill the count array for i in range(len(str1)): count[ord(str1[i]) - ord('a')]+=1 count[ord(str2[i]) - ord('a')]-=1 # if any count value is non-zero, then it is not anagram for i in range(MAX_CHAR): if count[i] != 0: return False return True str1="geeksforgeeks" str2="forgeeksgeeks" # Function call if (areAnagram(str1, str2)): print("The two strings are anagram of each other") else: print("The two strings are not anagram of each other") # - # ## 11. Lexicographically n-th permutation of a string # # Given a string of length m containing lowercase alphabets only. You have to find the n-th permutation of string lexicographically. # + def next_permutation(list_char): # n stores the length of the list of characters sorted in the string n = len(list_char) # start from second to last index i = n-2 # for next permutation # while this index doesn't become negative and next character at i+1 is lesser than the character at i, move back, since # we have used all next permutations until that point while i>=0 and list_char[i]>=list_char[i+1]: i -= 1 # if index becomes negative, we have explored all next permutations if i==-1: return False # j stores the index of the subarray after ith index. Ultimately, it will store the character's index which will be swapped with character # at i, (for the next permutation) j = i+1 # while j doesn't become n (length of the list_char) and character at j is greater than character at i, move j to next index while jlist_char[i]: j += 1 # j becomes next index, before while loops end. So, reduce j by 1 j -= 1 # swap characters at j and i list_char[j], list_char[i] = list_char[i], list_char[j] # For getting # left stores the index of character, next to ith character. 
left = i+1 # right pointer stores the index of last character right = n-1 # move left and right points such that their characters are swapped, and then we reduce the size of the sub-array bounded by left and right # pointers by moving left point to next and right point to its previous index while left str[i] i = n while (i >= 0 and str[i - 1] <= str[i]): i -= 1 # if string is sorted in ascending order # we're at the last permutation if (i <= 0): return False, "No previous permutation" # Note - str[i..n] is sorted in # ascending order # Find rightmost element's index # that is less than str[i - 1] j = i - 1 while (j + 1 <= n and str[j + 1] <= str[i - 1]): j += 1 # Swap character at i-1 with j str = list(str) temp = str[i - 1] str[i - 1] = str[j] str[j] = temp str = ''.join(str) # Reverse the substring [i..n] # str[::-1] return True, str # Driver code if __name__ == '__main__': str = "ABCD" b, str = prevPermutation(str) if (b == True): print("Previous permutation is", str) else: print("Previous permutation doesn't exist") # - # ## 13. Find n-th lexicographically permutation of a string | Set 2 # # Given a string of length m containing lowercase alphabets only. We need to find the n-th permutation of string lexicographically. # + MAX_CHAR = 26 MAX_FACT = 20 fact = [None] * (MAX_FACT) # Utility for calculating factorials def precomputeFactorials(): fact[0] = 1 for i in range(1, MAX_FACT): fact[i] = fact[i - 1] * i # Function for nth permutation def nPermute(string, n): precomputeFactorials() # length of given string length = len(string) # Count frequencies of all # characters freq = [0] * (MAX_CHAR) for i in range(0, length): freq[ord(string[i]) - ord('a')] += 1 # out string for output string out = [None] * (MAX_CHAR) # iterate till sum equals n Sum, k = 0, 0 # We update both n and sum in # this loop. while Sum != n: Sum = 0 # check for characters present in freq[] for i in range(0, MAX_CHAR): if freq[i] == 0: continue # Remove character freq[i] -= 1 # calculate sum after fixing # a particuar char xsum = fact[length - 1 - k] for j in range(0, MAX_CHAR): xsum = xsum // fact[freq[j]] Sum += xsum # if sum > n fix that char as # present char and update sum # and required nth after fixing # char at that position if Sum >= n: out[k] = chr(i + ord('a')) n -= Sum - xsum k += 1 break # if sum < n, add character back if Sum < n: freq[i] += 1 # if sum == n means this char will provide # its greatest permutation as nth permutation i = MAX_CHAR-1 while k < length and i >= 0: if freq[i]: out[k] = chr(i + ord('a')) freq[i] -= 1 i += 1 k += 1 i -= 1 # print result print(''.join(out[:k])) # Driver Code if __name__ == "__main__": n = 2 string = "abc" nPermute(string, n) # - # ## 14. Next Permutation # # Implement next permutation, which rearranges numbers into the lexicographically next greater permutation of numbers. 
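# Before looking at the single-pass algorithm below, here is a quick way to sanity-check the expected behaviour on small inputs. This brute-force reference (a hypothetical helper, not part of the original solution) enumerates all permutations with `itertools` and returns the one that follows `num`, wrapping around to the smallest permutation when `num` is already the largest.
# +
from itertools import permutations

def next_permutation_bruteforce(num):
    # enumerate all distinct permutations in sorted (lexicographic) order
    perms = sorted(set(permutations(num)))
    # return the permutation right after `num`, wrapping around at the end
    idx = perms.index(tuple(num))
    return list(perms[(idx + 1) % len(perms)])

print(next_permutation_bruteforce([1, 2, 3]))  # [1, 3, 2]
print(next_permutation_bruteforce([3, 2, 1]))  # [1, 2, 3] (wrap-around case)
# -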
# # >"**Algorithm: Single Pass Approach**" # # $ O(N), O(N) $ # + def nextPermutation(num): # index such that digit at this index is lesser than the digit at its next index i = len(num) - 2 # keep reducing i while i is non-negative and digit at i is greater than or equal to digit at i+1 while i>=0 and num[i]>=num[i+1]: i -= 1 if i>=0: # index such that the digit at this index will be the first digit to be greater than the digit at i j = len(num)-1 # keep reducing j such that digit at j is lesser than or equal to digit at i while num[j]<=num[i]: j -= 1 swap the digits at i and j num[i], num[j] = num[j], num[i] # reverse the digits i+1 onwards to get the next permutation reverse(num, i+1) return num def reverse(num, start): i = start j = len(num)-1 while i0: swap += imbalance # since one imbalance pair is solved, reduce that by 1 imbalance -= 1 # if it is ] elif chars[i] == ']': # increment right brackets count by 1 countRight += 1 # imbalance is difference of number of right and left brackets imbalance = countRight - countLeft return swap s = "[]][][" swapCount(s) # ## 16. Length of Longest Balanced Subsequence # # > "Given a string S, find the length of the longest balanced subsequence in it. A balanced string is defined as:- " # # 1. A null string is a balanced string. # 2. If X and Y are balanced strings, then (X)Y and XY are balanced strings. def maxLength(string, n): # invalidOpen to store the #open brackets that do not have corresponding closing bracket invalidOpen , invalidClose = 0, 0 # list of characters in the string chars = list(string) # iterate through string via index i for i in range(len(chars)): # if character is ( if chars[i] == '(': # increment invalidOpenBrackets by 1 (that haven't been closed) invalidOpen += 1 # the character is ) else: # if there are no open brackets, increment invalidClose by 1 if invalidOpen == 0: invalidClose += 1 # found matching open brackets, so decrement imbalance pair by 1 else: invalidOpen -= 1 return n-invalidOpen-invalidClose s = "()(((((()" n = len(s) maxLength(s, n) # ## 17. Find maximum depth of nested parenthesis in a string # # We are given a string having parenthesis like below # # `“( ((X)) (((Y))) )”` # # We need to find the maximum depth of balanced parenthesis, like 4 in the above example. Since ‘Y’ is surrounded by 4 balanced parentheses. # If parenthesis is unbalanced then return -1. def maxDepth(string): # list of charcters in string chars = list(string) # current_max stores the number of opening brackets until we find closing brackets. That's when we reduce it by 1 per 1 closing bracket # max stores the maximum depth current_max, max = 0, 0 n = len(chars) # go through all characters for i in range(n): # if it is an opening bracket if chars[i] == '(': # increase current_max by 1 current_max += 1 # update max if required if current_max > max: max = current_max # if it is a closing bracket elif chars[i] == ')': # current_max has opening brackets, reduce them since we found a matching closing bracket if current_max>0: current_max -= 1 # no opening bracket, we find unbalanced paranthesis else: return -1 # if imbalanced paranthesis if current_max!=0: return -1 return max s = "( ((X)) (((Y))) )" maxDepth(s) # ## 18. Find the smallest window in a string containing all characters of another string # # Given two strings, string1 and string2, the task is to find the smallest substring in string1 containing all characters of string2 efficiently. 
def minWindow(s, t): ans = 10**9 start = 0 count = 0 m = [0 for i in range(256)] for i in range(len(t)): if m[ord(t[i])-ord('a')]==0: count += 1 m[ord(t[i])-ord('a')] += 1 i, j = 0, 0 while j0: count += 1 i += 1 j += 1 if ans != 10**9: return s[start:start+ans] else: return -1 s = "ADOBECODEBANC" t = "ABC" print("-->Smallest window that contain all character : ") print(minWindow(s, t)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Importing Libraries import numpy as np import pandas as pd import scipy.stats as stats import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline import sklearn from sklearn import datasets from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split, cross_val_score from sklearn.metrics import mean_squared_error # to visualise al the columns in the dataframe pd.pandas.set_option('display.max_columns', None) # ### Importing data set boston = datasets.load_boston() bos = pd.DataFrame(boston.data, columns = boston.feature_names) bos['PRICE'] =boston.target # ### Exploratory Data Analysis bos.head() bos.info() bos.describe() # + ## Lets analyse the continuous values by creating histograms to understand the distribution for feature in bos.columns: data=bos.copy() data[feature].hist(bins=25) plt.xlabel(feature) plt.ylabel("Count") plt.title(feature) plt.show() # - # #### Clearly seen as feature 3 is a categorical bariable having value 0 and 1, we have already saw there is no missing values hence we have to chek for any outliers as well as do any transformation for making the data more normal distributed # # # + ## Plotting scatter plots wrt to price with different features for feature in bos.columns: data=bos.copy() if 0 in data[feature].unique(): pass else: plt.scatter(data[feature],data['PRICE']) plt.xlabel(feature) plt.ylabel('PRICE') plt.title(feature) plt.show() # - ## checking correlations correlation_matrix = bos.copy().corr().round(2) plt.figure(figsize=(12,8)) sns.heatmap(data=correlation_matrix, annot=True) # + ## Plotting scatter plots wrt to price with logtransformingdifferent features for feature in bos.columns: data=bos.copy() if 0 in data[feature].unique(): pass else: data[feature] =np.log10(data[feature]) data['PRICE'] =np.log10(data['PRICE']) plt.scatter(data[feature],data['PRICE']) plt.xlabel(feature) plt.ylabel('PRICE') plt.title(feature) plt.show() # - ### Detecting Outliers for feature in bos.columns: data=bos.copy() if 0 in data[feature].unique(): pass else: data[feature]=data[feature] data.boxplot(column=feature) plt.ylabel(feature) plt.title(feature) plt.show() # + ### Given that CRIM, RM, B, PRICE has significant outliers. 
### Removing outliers using IQR methods Q1 = bos.quantile(0.25) Q3 = bos.quantile(0.75) IQR = Q3 - Q1 bos_filter_IQR = bos[~((bos< (Q1 - 10* IQR)) |(bos> (Q3 + 10* IQR))).any(axis=1)] # - bos.shape bos_filter_IQR.shape # ### Model building & Performance # #### Without removing outliers # + X = bos.drop('PRICE', axis = 1) y = bos['PRICE'] X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2, random_state=42) reg_all = LinearRegression() reg_all.fit(X_train, y_train) # model evaluation for training set y_train_predict = reg_all.predict(X_train) rmse = (np.sqrt(mean_squared_error(y_train, y_train_predict))) r2 = round(reg_all.score(X_train, y_train),2) print("The model performance for training set") print("--------------------------------------") print('RMSE is {}'.format(rmse)) print('R2 score is {}'.format(r2)) print("\n") # - # #### After removing Outliers # + X = bos_filter_IQR.drop('PRICE', axis = 1) y = bos_filter_IQR['PRICE'] X_iqr_train, X_iqr_test, y_iqr_train, y__iqr_test = train_test_split(X,y,test_size=0.2, random_state=42) reg_all = LinearRegression() reg_all.fit(X_iqr_train, y_iqr_train) # model evaluation for training set y_train_predict = reg_all.predict(X_iqr_train) rmse = (np.sqrt(mean_squared_error(y_iqr_train, y_train_predict))) r2 = round(reg_all.score(X_iqr_train, y_iqr_train),2) print("The model performance for training set") print("--------------------------------------") print('RMSE is {}'.format(rmse)) print('R2 score is {}'.format(r2)) print("\n") # + ### After standardization from sklearn.preprocessing import StandardScaler scaler = StandardScaler() # Standardize weight X_iqr_train = scaler.fit_transform(X_iqr_train) # - X_iqr_train y_iqr_train # + reg_all = LinearRegression() reg_all.fit(X_iqr_train, y_iqr_train) # model evaluation for training set y_train_predict = reg_all.predict(X_iqr_train) rmse = (np.sqrt(mean_squared_error(y_iqr_train, y_train_predict))) r2 = round(reg_all.score(X_iqr_train, y_iqr_train),2) print("The model performance for training set") print("--------------------------------------") print('RMSE is {}'.format(rmse)) print('R2 score is {}'.format(r2)) print("\n") # - # #### Performance on Testing set X_iqr_test = scaler.transform(X_iqr_test) y_test_predict = reg_all.predict(X_iqr_test) rmse = (np.sqrt(mean_squared_error(y__iqr_test, y_test_predict))) r2 = round(reg_all.score(X_iqr_test, y__iqr_test),2) print("The model performance for testing set") print("--------------------------------------") print('RMSE is {}'.format(rmse)) print('R2 score is {}'.format(r2)) print("\n") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt from keras.utils import to_categorical # Import data url = r'http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data' dataframe = pd.read_csv(url, sep = ',', header = None, index_col = None, na_values = '?') # Add column names name_list = ['age', 'sex', 'cp','trestbps', 'chol', 'fbs','restecg', 'thalac','exang','oldpeak','slope','ca','thal','num'] dataframe.columns = name_list # Fill missing data with columnwise median values dataframe = dataframe.fillna(dataframe.median()) # Select the data (input) columns data_list = ['age', 'sex', 'cp','trestbps', 'chol', 'fbs','restecg', 
'thalac','exang','oldpeak','slope','ca','thal'] data = dataframe[data_list] # Scale the data data_min = data.min() data_max = data.max() data_norm = (data - data_min)/(data_max - data_min) # Another way to scale #from sklearn.preprocessing import MinMaxScaler #scaler = MinMaxScaler() #data_norm = scaler.fit_transform(data.values) # Save the data #np.save('case_1_data.npy', data_norm) # Select the labels (output) labels = dataframe['num'] # Code labels to categorical output one_hot_labels = to_categorical(labels) # Save categorical (one hot coded) labels #np.save('case_1_one_hot_labels.npy', one_hot_labels) # Save binary labels #np.save('case_1_bin_labels.npy', bin_labels) # + #test_data = data_norm[:200] #val_data = data_norm[200:] #test_labels = bin_labels[:200] #val_labels = bin_labels[200:] from sklearn.model_selection import train_test_split train_data,val_data,train_label,val_label = train_test_split(data_norm, bin_labels, test_size=0.26, random_state=42) len(train_data),len(val_data) # + from keras.models import Sequential from keras.layers import Dense import time start = time.time() model = Sequential() model.add(Dense(39,activation='relu', input_dim=13)) model.add(Dense(19,activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) history = model.fit(train_data, train_label, epochs=300, batch_size=200, verbose=1, validation_data=(val_data,val_label) ) end = time.time() h_dict = history.history epochs = range(1, len(h_dict['loss'])+1) plt.plot(epochs, h_dict['loss'], 'bo', label='Training loss') plt.plot(epochs, h_dict['val_loss'], 'b', label='Validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() print(h_dict['val_loss'][len(h_dict['val_loss'])-1]) print('time', end - start) # + plt.plot(epochs, h_dict['acc'], 'bo', label='Training acc') plt.plot(epochs, h_dict['val_acc'], 'b', label='Validation acc') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.show() print(h_dict['val_acc'][len(h_dict['val_acc'])-1]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Visualization of the superpixel images, graphs and edge connections # ### Superpixels from # https://github.com/bknyaz/graph_attention_pool # + import random from torchvision import transforms, datasets import os import pickle from scipy.spatial.distance import cdist from scipy import ndimage import numpy as np import dgl import torch import time import networkx as nx import matplotlib.pyplot as plt import matplotlib # - import os os.chdir('../') # go to root folder of the project print(os.getcwd()) # ### Functions definition for graph construction # + def sigma(dists, kth=8): # Get k-nearest neighbors for each node knns = np.partition(dists, kth, axis=-1)[:, kth::-1] # Compute sigma and reshape sigma = knns.sum(axis=1).reshape((knns.shape[0], 1))/kth return sigma + 1e-8 # adding epsilon to avoid zero value of sigma def compute_adjacency_matrix_images(coord, feat, use_feat=False, kth=8): coord = coord.reshape(-1, 2) # Compute coordinate distance c_dist = cdist(coord, coord) if use_feat: # Compute feature distance f_dist = cdist(feat, feat) # Compute adjacency A = np.exp(- (c_dist/sigma(c_dist))**2 - (f_dist/sigma(f_dist))**2 ) else: A = np.exp(- (c_dist/sigma(c_dist))**2) # Convert to symmetric matrix A = 0.5 * (A + A.T) #A = 
0.5 * A * A.T A[np.diag_indices_from(A)] = 0 return A def compute_edges_list(A, kth=8+1): # Get k-similar neighbor indices for each node if 1==1: num_nodes = A.shape[0] new_kth = num_nodes - kth knns = np.argpartition(A, new_kth-1, axis=-1)[:, new_kth:-1] knns_d = np.partition(A, new_kth-1, axis=-1)[:, new_kth:-1] else: knns = np.argpartition(A, kth, axis=-1)[:, kth::-1] knns_d = np.partition(A, kth, axis=-1)[:, kth::-1] return knns, knns_d # - # ### MNISTSuperPix class for reading superpixels file and constructing graph class MNISTSuperPix(torch.utils.data.Dataset): def __init__(self, data_dir, split, use_mean_px=True, use_coord=True, use_feat_for_graph_construct=False,): self.split = split self.is_test = split.lower() in ['test', 'val'] with open(os.path.join(data_dir, 'mnist_75sp_%s.pkl' % split), 'rb') as f: self.labels, self.sp_data = pickle.load(f) self.use_mean_px = use_mean_px self.use_feat_for_graph = use_feat_for_graph_construct self.use_coord = use_coord self.n_samples = len(self.labels) self.img_size = 28 def precompute_graph_images(self): print('precompute all data for the %s set...' % self.split.upper()) self.Adj_matrices, self.node_features, self.edges_lists = [], [], [] for index, sample in enumerate(self.sp_data): mean_px, coord = sample[:2] coord = coord / self.img_size A = compute_adjacency_matrix_images(coord, mean_px, use_feat=self.use_feat_for_graph) edges_list, _ = compute_edges_list(A) N_nodes = A.shape[0] x = None if self.use_mean_px: x = mean_px.reshape(N_nodes, -1) if self.use_coord: coord = coord.reshape(N_nodes, 2) if self.use_mean_px: x = np.concatenate((x, coord), axis=1) else: x = coord if x is None: x = np.ones(N_nodes, 1) # dummy features self.node_features.append(x) self.Adj_matrices.append(A) self.edges_lists.append(edges_list) def __len__(self): return self.n_samples def __getitem__(self, index): g = dgl.DGLGraph() g.add_nodes(self.node_features[index].shape[0]) g.ndata['feat'] = torch.Tensor(self.node_features[index]) for src, dsts in enumerate(self.edges_lists[index]): g.add_edges(src, dsts[dsts!=src]) return g, self.labels[index] # ### Taking only coordinates for knn graph construction # This is done by setting `use_feat_for_graph_construct = False`. 
# If you want to also consider the mean feature intensity of superpixels for the constructing the knn graphs, set `use_feat_for_graph_construct = True` # + # Taking the test dataset only for sample visualization use_feat_for_graph_construct = False tt = time.time() data_no_feat_knn = MNISTSuperPix("data/superpixels", #split='test', split='train', use_feat_for_graph_construct=use_feat_for_graph_construct) data_no_feat_knn.precompute_graph_images() print("Time taken: {:.4f}s".format(time.time()-tt)) # - # ### Taking coordinates and features for knn graph construction # + use_feat_for_graph_construct = True tt = time.time() data_with_feat_knn = MNISTSuperPix("data/superpixels", #split='test', split='train', use_feat_for_graph_construct=use_feat_for_graph_construct) data_with_feat_knn.precompute_graph_images() print("Time taken: {:.4f}s".format(time.time()-tt)) # - # ### Prepare MNIST Images # + #dataset = datasets.MNIST(root='PATH', train=False, download=True, transform=transforms.ToTensor()) dataset = datasets.MNIST(root='PATH', train=True, download=True, transform=transforms.ToTensor()) x, _ = dataset[777] # x is now a torch.Tensor plt.imshow(x.numpy()[0], cmap='gray') # - # ### Drawing a dgl graph using networkx sample = np.random.choice(len(data_no_feat_knn)) g_sample = data_no_feat_knn[sample][0] print("Label: ", data_no_feat_knn[sample][1]) nx.draw(g_sample.to_networkx(), with_labels=True) plt.show() # ### Superpixels plot function definition # + from scipy.spatial.distance import pdist, squareform from pylab import rcParams def plot_superpixels_graph(plt, sp_data, adj_matrix, label, feat_coord, with_edges): Y = squareform(pdist(sp_data[1], 'euclidean')) x_coord = sp_data[1] #np.flip(dataset.train.sp_data[_][1], 1) intensities = sp_data[0].reshape(-1) G = nx.from_numpy_matrix(Y) pos = dict(zip(range(len(x_coord)), x_coord.tolist())) rotated_pos = {node: (y,-x) for (node, (x,y)) in pos.items()} # rotate the coords by 90 degree edge_list = [] for src, dsts in enumerate(compute_edges_list(adj_matrix)[0]): for dst in dsts: edge_list.append((src, dst)) nx.draw_networkx_nodes(G, rotated_pos, node_color=intensities, cmap=matplotlib.cm.Reds, node_size=60) #len(intensities)) if with_edges: nx.draw_networkx_edges(G, rotated_pos, edge_list, alpha=0.3) title = "Label: " + str(label) if feat_coord: title += " | Using feat and coord for knn" else: title += " | Using only coord for knn" if not with_edges: title = "Label: " + str(label) + " | Only superpixel nodes" plt.title.set_text(title) def show_image(plt, idx, alpha): x, label = dataset[idx] # x is now a torch.Tensor img = x.numpy()[0] plt.imshow(img, cmap='gray') plt.axis('off') plt.title.set_text("Label: " + str(label) + " | Original Image") # - # ### Plotting sample superpixels, and graphs num_samples_plot = 3 for f_idx, idx in enumerate(np.random.choice(int(len(data_no_feat_knn)/2), num_samples_plot, replace=False)): f = plt.figure(f_idx, figsize=(23, 5)) plt1 = f.add_subplot(141) show_image(plt1, idx, alpha=0.5) plt2 = f.add_subplot(142) plot_superpixels_graph(plt2, data_no_feat_knn.sp_data[idx], data_no_feat_knn.Adj_matrices[idx], data_no_feat_knn[idx][1], data_no_feat_knn.use_feat_for_graph, with_edges=False) plt3 = f.add_subplot(143) plot_superpixels_graph(plt3, data_no_feat_knn.sp_data[idx], data_no_feat_knn.Adj_matrices[idx], data_no_feat_knn[idx][1], data_no_feat_knn.use_feat_for_graph, with_edges=True) plt4 = f.add_subplot(144) plot_superpixels_graph(plt4, data_with_feat_knn.sp_data[idx], data_with_feat_knn.Adj_matrices[idx], 
data_with_feat_knn[idx][1], data_with_feat_knn.use_feat_for_graph, with_edges=True) plt.subplots_adjust(hspace=0.1, wspace=0.1) f.savefig('visualization/mnist_superpix_'+str(idx)+'.jpg') plt.show() # ### Get k-nearest neighbor distances for first 10 (denoted by [:10]) nodes for first graph (denoted by [0]) print(compute_edges_list(data_no_feat_knn.Adj_matrices[0])[1][:10]) print(compute_edges_list(data_with_feat_knn.Adj_matrices[0])[1][:10]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="pVJlMNH_X67c" # # (Example Tutorial for BRS3101) Molecule Featurization and Datasets # # **What will we learn** # # In this tutorial we introduce some key concepts regarding molecular datasets for applicaiton of machine learning to drug design and molecular modeling. # 1. Different featurization techniques (description of molecular structures for ML) # 2. Datasets, iterators, train/validation/test splitting # # **What will we use** # # We will use `deepchem` and `rdKit` within the [Google Colaboratory](https://colab.research.google.com/) framework. # * Google Colaboratory is a cloud-based platform to run python code without the need to setup your local machine with a python environment. It allows you to write and execute Python in your browser, with zero configuration required, Free access to GPUs, Easy sharing. The code you develop and run in colab will run in any python environment. # # * [`deepchem`](https://deepchem.io/) is a pyhton library that implements tools to create high quality, open source tools for drug discovery, materials science, quantum chemistry, and biology. # # * [`rdKit`](https://www.rdkit.org/) is a library of chemoinformatics # + [markdown] id="A8dJDVuLblLN" # ## Install `deepchem` and `rdkit` # + colab={"base_uri": "https://localhost:8080/"} id="BV21K0DGYGF3" outputId="02de5e5a-d649-4e6d-def7-9c31a73e1ad4" # !pip install --pre deepchem # !pip install --pre rdkit-pypi # !pip install --pre PubChemPy # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="Y5djSAYnYNjl" outputId="cfc4b32c-bff9-41ee-a290-fe468d3f5ca1" import deepchem as dc dc.__version__ # + [markdown] id="89dF5Fy_dNkr" # ## Featurizers # + [markdown] id="0OiU2hAtiBiM" # Molecules can be represented in different ways. # * fingerprints (e.g. 
Circular Fingerprints, ) # * molecular properties # * Graph convolution models [Duvenaud et al.](https://arxiv.org/abs/1509.09292) # * Tokenizers # # [list of featurizers](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6917215/) # # + [markdown] id="CLAKI0TfWhy1" # ### Fingerprints # + id="yEoQQgrldPeD" from rdkit import Chem smiles = ['C1=CC=CC=C1'] # Example 1: (size = 2048, radius = 4) featurizer = dc.feat.CircularFingerprint(size=2048, radius=4) features = featurizer.featurize(smiles) # + colab={"base_uri": "https://localhost:8080/"} id="Isinq0e4dTO3" outputId="9c389c8d-69e9-48a2-ee9f-10eb8d8663cb" features.shape # + colab={"base_uri": "https://localhost:8080/"} id="MBxHSehuWLUK" outputId="451e912b-4b8d-4fac-a49f-2a6c1835e33f" for i,fp in enumerate(features[0]): if fp==1: print(i) # + [markdown] id="MS3PO5nIWll7" # ### Molecular properties # + id="B03cYIWHWniy" smiles = ['CC(=O)OC1=CC=CC=C1C(=O)O'] featurizer = dc.feat.RDKitDescriptors() features = featurizer.featurize(smiles) # + colab={"base_uri": "https://localhost:8080/"} id="ZJQG5hQ6XAz0" outputId="b971bf23-1a88-4788-9004-36f091476e92" features # + [markdown] id="dU3au4NykMaV" # ## Datasets # https://deepchem.readthedocs.io/en/latest/api_reference/moleculenet.html # # + [markdown] id="fOeieHOObfqg" # ## Load an example dataset (Delaney dataset of molecular solubilities) # # # + [markdown] id="wamatJyyhc5x" # The Delaney (ESOL) dataset a regression dataset containing structures and water solubility data for 1128 compounds. The dataset is widely used to validate machine learning models on estimating solubility directly from molecular structures (as encoded in SMILES strings). # # The raw data csv file contains columns below: # * “Compound ID” - Name of the compound # * “smiles” - SMILES representation of the molecular structure # * “measured log solubility in mols per litre” - Log-scale water solubility of the compound, used as label # # ### the Dataset structure # Now let's consider the contents of the Dataset. Every Dataset stores a list of samples. Very roughly speaking, a sample is a single data point. In this case, each sample is a molecule. In other datasets a sample might correspond to an experimental assay, a cell line, an image, or many other things. For every sample the dataset stores the following information. # # * The features, referred to as `X`. This is the input that should be fed into a model to represent the sample. # * The labels, referred to as `y`. This is the desired output from the model. During training, it tries to make the model's output for each sample as close as possible to y. # * The weights, referred to as `w`. This can be used to indicate that some data values are more important than others. In later tutorials we will see examples of how this is useful. # * An `ID`, which is a unique identifier for the sample. This can be anything as long as it is unique. Sometimes it is just an integer index, but in this dataset the ID is a SMILES string describing the molecule. # # # The final piece of information listed in the output is `task_names`. Some datasets contain multiple pieces of information for each sample. For example, if a sample represents a molecule, the dataset might record the results of several different experiments on that molecule. This dataset has only a single task: "measured log solubility in mols per litre". 
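# As a small illustration of these fields (a toy `NumpyDataset` built by hand here, not the Delaney data loaded below, using the `dc` alias imported above), every `Dataset` exposes `X`, `y`, `w` and `ids` as arrays:
# +
import numpy as np

# a toy dataset: 3 "samples" with 4 made-up features each and one label per sample
toy = dc.data.NumpyDataset(X=np.random.rand(3, 4),
                           y=np.array([0.1, 0.7, 0.3]),
                           ids=["mol_a", "mol_b", "mol_c"])
print(toy.X.shape, toy.y.shape, toy.w.shape)  # w defaults to all-one weights
print(toy.ids)
# -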
# # + id="rYyXsl85YQut" tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='GraphConv') # + id="RckBYgdXYcRC" train_dataset, valid_dataset, test_dataset = datasets # + colab={"base_uri": "https://localhost:8080/"} id="tVtNbU7PYwJ8" outputId="9a0b72f2-d6ca-4e97-a2a4-a8b00fe60537" train_dataset # + [markdown] id="AUD5gMnJ062I" # ## Prepare and featurize your own dataset # # # Here we will read a file containing smiles and labels and prepare a dataset for machine learning. # # * read from local (see [here](https://colab.research.google.com/notebooks/io.ipynb) for colab file IO) # # + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": "OK"}}, "base_uri": "https://localhost:8080/", "height": 74} id="55FmnVlW1AJD" outputId="601df1fd-5f5e-4e7a-d8fa-73f9c2f56244" from google.colab import files import csv uploaded = files.upload() # + id="IJ4t4QxT1Yd2" lines = [line.decode("utf-8") for line in uploaded['test1.csv'].splitlines()] reader = csv.reader(lines) molecules = list(reader) # + id="FFxnWLI1140i" IDs = [molecule[0] for molecule in molecules] smiles = [molecule[1] for molecule in molecules] label = [molecule[2] for molecule in molecules] features = featurizer.featurize(smiles) dataset = dc.data.NumpyDataset(X=features, y=label, ids=IDs) # + id="BHDkDXZt_UA6" splitter = dc.splits.RandomSplitter() train_dataset, valid_dataset, test_dataset = splitter.train_valid_test_split(dataset) # + id="P0ggduKsCH4f" print(f'total {len(dataset)}') print(f'train {len(train_dataset)}') print(f'valid {len(valid_dataset)}') print(f'test {len(test_dataset)}') # + id="tRQlZgukEm1O" train_dataset.X.shape # + [markdown] id="zHo_thGghuSB" # ## Train a deep-learning model to predict # + id="4ky5fHosDhQi" model = dc.models.MultitaskClassifier(n_tasks=1, n_features=2048, layer_sizes=[1000]) # + id="OtZEzMxbEZin" model.fit(train_dataset, nb_epoch=10) metric = dc.metrics.Metric(dc.metrics.roc_auc_score) # + colab={"base_uri": "https://localhost:8080/"} id="N-fp9sfTEdTd" outputId="5caa9f2b-004c-4f3d-c420-20bcd5eaba74" metric # + colab={"base_uri": "https://localhost:8080/"} id="NxkJswnCFOK7" outputId="94951e77-16f7-4ba2-ad41-704b193ba375" print('training set score:', model.evaluate(train_dataset, metric)) print('test set score:', model.evaluate(test_dataset, metric)) # + [markdown] id="1yvUGIWmGrq8" # Let us plot the results # # [how to plot in Colab](https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/04.02-Simple-Scatter-Plots.ipynb#scrollTo=RmV1JUqGGH0X) # # # + id="oZA1UVNYFWRt" # %matplotlib inline import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') import numpy as np xy = model.predict(test_dataset).reshape(32,2) # + colab={"base_uri": "https://localhost:8080/", "height": 262} id="843tcd50Fn6E" outputId="1220707b-a356-48d6-8879-7b8e6c9186b4" plt.plot(xy[:,0], xy[:,1], 'o', color='black'); # + colab={"base_uri": "https://localhost:8080/"} id="6IxTfC6UGP6e" outputId="83e6b188-2dba-4f89-eef9-98e42423fff1" xy # + [markdown] id="wToNIL5emi9W" # # Cluster ligands by similarity in rdKit # # [examples of basic `rdkit` use](https://www.rdkit.org/docs/GettingStartedInPython.html) # # + id="BFUNxDY_GcyS" from rdkit import Chem from rdkit.Chem import AllChem # + id="TafBgmgjmqID" # Define clustering setup def cluster_fingerprints(fps,cutoff=0.2): from rdkit import DataStructs from rdkit.ML.Cluster import Butina # 
first generate the distance matrix: dists = [] nfps = len(fps) for i in range(1,nfps): sims = DataStructs.BulkTanimotoSimilarity(fps[i],fps[:i]) dists.extend([1-x for x in sims]) # now cluster the data: cs = Butina.ClusterData(dists,nfps,cutoff,isDistData=True) return cs # + id="dfoN19_hoIi4" featurizer = dc.feat.CircularFingerprint(size=2048, radius=4) tasks, datasets, transformers = dc.molnet.load_delaney(featurizer=featurizer) # + colab={"base_uri": "https://localhost:8080/"} id="ae0CL8tgowao" outputId="5f34dff3-022a-4eff-e496-3f47a8428624" datasets[0].X # + id="F2hRms69nUrS" ms = [Chem.MolFromSmiles(x) for x in smiles] # + id="xLPKgimToOFS" fps = [AllChem.GetMorganFingerprintAsBitVect(x,4,2048) for x in ms] # + id="PyLfx7OHpFSW" clusters = cluster_fingerprints(fps,cutoff=0.4) # + colab={"base_uri": "https://localhost:8080/"} id="FuokegX7pFx4" outputId="0ff99c3e-d3f3-4174-e516-a771d5e5e005" #show one of the clusters print(clusters[0]) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="7aAgnQ6Npd3R" outputId="67180713-9b1b-4673-e32f-45fac1c44246" #now display structures from one of the clusters from rdkit.Chem import Draw from rdkit.Chem.Draw import IPythonConsole mols=[ms[i] for i in clusters[2]] Draw.MolsToGridImage(mols) # + id="MigWx3ICpkJP" from rdkit.Chem import Descriptors # + colab={"base_uri": "https://localhost:8080/"} id="HE7GyXEVtL7Q" outputId="090e89d8-c613-4054-a904-22ab90657f71" [Descriptors.MolLogP(mol) for mol in ms] # + id="lMv5YQ1btOKj" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Printing a Tree: # + There are many ways to print the tree but we need to print it in some order. 1: 2,3 2: 4,5 4: 5: 6: # - # **Most of the problems in tree we solve it using recursion but # there are a very little problems in trees that are need to be solved using iteration. class BinaryTreeNode: def __init__(self, data): self.data = data self.left = None self.right = None # def printTree(root): # # if root.left is None and root.right is None: # What if root is None? # print(root.data) # We need to take care of all these left and right # return # # elif root.left is None: # print(root.data) # printTree(root.right) # elif root.right is None: # print(root.data) # printTree(root.left) # def printTree(root): if root == None: return print(root.data) printTree(root.left) printTree(root.right) def printTreeDetailed(root): if root == None: return print(root.data, end = ": ") print("L", root.left.data, end = ",") print("R", root.right.data) printTreeDetailed(root.left) printTreeDetailed(root.right) btn1 = BinaryTreeNode(1) btn2 = BinaryTreeNode(4) btn3 = BinaryTreeNode(5) btn1.left = btn2 btn1.right = btn3 printTree(btn1) # + # This doesn't gives mcuh information about the tree. So, we're going to print the tree in detail printTreeDetailed(btn1) # - # ### Here's how we fix the above error: # + # It's giving an error because 4's left is None and we're trying to print the data of None. 
So, Attribute Error # To fix this we'll print the left child only if the root has a left child, and we do the same for the right child def printTreeDetailed(root): if root == None: return print(root.data, end = ": ") if root.left != None: print("L", root.left.data, end = ",") if root.right != None: print("R", root.right.data, end = " ") print() printTreeDetailed(root.left) printTreeDetailed(root.right) # - printTreeDetailed(btn1) # ### Now we check the same with a bigger tree (the entire code at once): # We're trying to create the following tree 1 / \ / \ / \ 2 3 / \ / 4 5 6 # + class BinaryTreeNode(): def __init__(self, data): self.data = data self.left = None self.right = None # First we create the nodes of the binary tree bt1 = BinaryTreeNode(1) bt2 = BinaryTreeNode(2) bt3 = BinaryTreeNode(3) bt4 = BinaryTreeNode(4) bt5 = BinaryTreeNode(5) bt6 = BinaryTreeNode(6) # Later we connect them bt1.left = bt2 # Here I'm assigning nodes in level order. So, don't get confused bt1.right = bt3 bt2.left = bt4 # Level 2 bt2.right = bt5 bt3.left = bt6 # Not required to assign left and right of leaf nodes as None because by default they'll # be None at the time of Node Creation (refer to Node Creation) def printTreeDetailed(root): if root == None: return print(root.data, end = ":") if root.left is not None: print("L", root.left.data, end = ",") if root.right is not None: print("R", root.right.data, end = " ") print() printTreeDetailed(root.left) # Doing the same for the root's left sub tree printTreeDetailed(root.right) # Doing the same for the root's right sub tree printTreeDetailed(bt1) # Now, with the output below, you can easily reconstruct the tree, and it's understandable. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # default_exp core # - # # module name here # # > Sample tutorial of nbdev #hide from nbdev.showdoc import * #export def say_hello(to): "Say hello to someone" return f'Hello {to}!' # For each function, we can do a little test as we experiment: #export say_hello("Duc") #export assert say_hello("Duc") == "Hello Duc!" #export class HelloSayer: "Say hello to `to` using `say_hello`" def __init__(self, to): self.to = to def say(self): say_hello(self.to) # use the `show_doc()` method to add documentation: from nbdev.showdoc import * show_doc(HelloSayer.say) # It's also helpful to stay up to date with all modules once changes happen: # %load_ext autoreload # %autoreload 2 # Furthermore, it'd be nice to be able to convert notebooks to Python modules from within notebooks. 
We can use method: `notebook2script()` from nbdev.export import * notebook2script() # Running test is also can be done in parallel, before we push to github by running `nbdev_test_nbs` in terminal # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 3.2.3 Multiple Regression From Simple Univariate Regression # Suppose we have a *univariate* (p = 1) model with no intercept (3.23): # $$Y=X\beta+\varepsilon$$ # # The least squares estimate and residuals are (3.24): # $$ # \begin{equation} # \hat{\beta} = \cfrac{\sum_1^N {x_iy_i}}{\sum_1^N {x_i^2}} \\ # r_i = y_i - x_i\hat{\beta} # \end{equation} # $$ # # With the inner product: # $$ # \begin{equation} # \hat{\beta} = \cfrac{\langle \mathbf{x}, \mathbf{y} \rangle}{\langle \mathbf{x}, \mathbf{x}\rangle}\\ # \mathbf{r} = \mathbf{y} - \mathbf{x}\hat{\beta} # \end{equation} # $$ # # Suppose that the columns of the matrix **X** are orthogonal; that is $\langle \mathbf{x}_j, \mathbf{x}_k \rangle = 0$ # then it is easy to check that $\hat{\beta_j} = \langle \mathbf{x}_j, \mathbf{y} \rangle / \langle \mathbf{x}_j, \mathbf{x}_j \rangle$, i.e the inputs have no effect on each other's parameter estimates. # # Suppose next that we have an intercept and a single input x (3.27): # $$\hat{B}_1 = \cfrac{\langle \mathbf{x} - \overline{x}\mathbf{1}, \mathbf{y} \rangle}{ \langle \mathbf{x} - \overline{x}\mathbf{1}, \mathbf{x} - \overline{x}\mathbf{1} \rangle}$$ # # We can view the estimate as the result of two simple regression: # # 1. Regress **x** on **1** to produce the residual $\mathbf{z} = \mathbf{x} - \overline{x}\mathbf{1}$ # # 2. Regress **y** on the residual **z** to give the coefficient $\hat{\beta}_1$. # # Regress **b** on **a** means $\hat{\gamma}=\langle \mathbf{a},\mathbf{b} \rangle / \langle \mathbf{a}, \mathbf{a}\rangle$ and the residual vector $\mathbf{b} - \hat{\gamma}\mathbf{a}$. # # This recipe generalizes to the case of *p* inputs, as shown in Algorithm 3.1. # **Algorithm 3.1 Regression by Successive Orthogonalization** # 1. $\mathbf{z}_0 = \mathbf{x}_0 = \mathbf{1}$ # # 2. For $j = 1, 2, \cdots, p$ # # * Regress $\mathbf{x}_j$ on $\mathbf{z}_0,...,\mathbf{z}_{j - 1}$ to produce $\hat{\gamma}_{lj}=\langle \mathbf{z}_l, \mathbf{x}_j \rangle / \langle \mathbf{z}_l,\mathbf{z}_l \rangle$ $l=0,\cdots,j-1$, and residualt vector $\mathbf{z}_j=\mathbf{x}_j - \sum_{k=0}^{j-1} \hat{\gamma}_{kj}\mathbf{z}_k$ # # 3. Regress $\mathbf{y}$ on the residual $\mathbf{z}_p$ to give the estimate $\hat{\beta}_p$ # + import numpy as np import pandas as pd from scipy import stats, linalg df = pd.read_csv('../data/prostate/prostate.data', delimiter='\t', index_col=0) mask_train = df.pop('train') df_y = df.pop('lpsa') df = df.apply(stats.zscore) def orthogonalize(X): p = X.shape[1] G = np.eye(p) Z = X.copy() for j in range(1, p): for l in range(j): G[l, j] = np.dot(Z[:, l], X[:, j]) / np.dot(Z[:, l], Z[:, l]) for k in range(j): Z[:, j] -= G[k, j] * Z[:, k] return Z, G # - # The result of this algorithm is (3.28): # # $$\hat{\beta}_p=\cfrac{\langle \mathbf{z}_p, \mathbf{y} \rangle}{\langle \mathbf{z}_p,\mathbf{z}_p \rangle}$$ # # If $\mathbf{x}_p$ is highly correlated with some of the other $\mathbf{x}_k$'s the residual vector $\mathbf{x}_p$ will be close to zero, and from (3.28) the coefficient $\hat{\beta}_p$ will be unstable. 
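# A quick numerical illustration of this point (a toy example, not part of the original text), using the `orthogonalize` helper and the numpy/scipy imports defined above: when the last column of $\mathbf{X}$ is nearly a copy of another column, the last residual column of $\mathbf{Z}$ has a tiny norm, so the denominator of (3.28) is tiny and $\hat{\beta}_p$ becomes very sensitive to small perturbations of $\mathbf{y}$.
# +
# toy design matrix: intercept, one predictor, and a near-duplicate of that predictor
rng = np.random.RandomState(0)
x1 = rng.randn(100)
X_toy = np.c_[np.ones(100), x1, x1 + 1e-6 * rng.randn(100)]
Z_toy, _ = orthogonalize(X_toy)
# the norm of the last residual column is orders of magnitude smaller than the others
print(linalg.norm(Z_toy, axis=0))
# -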
# # From (3.28) we also obtain an alternative formula for the variance estimates, (3.29): # # $$Var(\hat{\beta}_p) = \cfrac{\sigma^2}{\langle \mathbf{z}_p, \mathbf{z}_p \rangle}=\cfrac{\sigma^2}{||\mathbf{z}_p||^2} $$ # # On other words, the precision with which we can estimate $\hat{\beta}_p$ depends on the lengths of the residual vector $\mathbf{z}_p$; # Algorithm 3.1 is known as the *Gram–Schmidt* procedure for multiple regression. We can represent step 2 of Algorithm 3.1 in matrix form (3.30): # # $$\mathbf{X}=\mathbf{Z\Gamma}$$ # # where $\mathbf{Z}$ has as columns the $z_j$ (in order), and $\mathbf{\Gamma}$ is the upper triangular matrix # with entries $\hat{\gamma}_{kj}$. Introducing the diagonal matrix $\mathbf{D}$ with $D_{jj}=||z_j||$, we get (3.31): # # $$\mathbf{X}=\mathbf{Z}\mathbf{D}^{-1}\mathbf{D}\mathbf{\Gamma}=\mathbf{QR}$$ # # the so-called QR decomposition of $\mathbf{X}$. Here $\mathbf{Q}$ is an N × (p +1) orthogonal # matrix, $\mathbf{Q}^T\mathbf{Q} = \mathbf{I}$, and **R** is a (p + 1) × (p + 1) upper triangular matrix. # # The least squares solution is given by: # # $$ # \hat{\beta}=\mathbf{R}^{-1}\mathbf{Q}^T\mathbf{y} # $$ # # *Proof*: # $$ # \begin{equation} # \mathbf{X}^T\mathbf{y}=\mathbf{X}^T\mathbf{X}\hat{\beta}\\ # \mathbf{R}^T\mathbf{Q}^T\mathbf{y}=\mathbf{R}^T\mathbf{Q}^T\mathbf{Q}\mathbf{R}\hat{\beta}\\ # \mathbf{R}^T\mathbf{Q}^T\mathbf{y}=\mathbf{R}^T\mathbf{R}\hat{\beta}\\ # \mathbf{Q}^T\mathbf{y}=\mathbf{R}\hat{\beta}\\ # \end{equation} # $$ # And the predicted training values: # # $$ # \hat{\mathbf{y}}=\mathbf{QQ}^T\mathbf{y} # $$ # # *Proof*: # # $$ # \begin{align} # \hat{\mathbf{y}}&=\mathbf{X}\hat{\beta}\\ # &=\mathbf{QR}\mathbf{R}^{-1}\mathbf{Q}^T\mathbf{y}\\ # &=\mathbf{QQ}^T\mathbf{y} # \end{align} # $$ # # # # We can obtain from it not just $\hat{\beta}_p$, but also the entire multiple least squares fit. # # *Proof*: # We can easily derive that: # $$ # \mathbf{R}\hat{\beta}=\mathbf{Q}^T\mathbf{y} # $$ # # which can be expanded into: # $$ # \begin{equation} # \begin{bmatrix} # R_{0 0} & R_{02} & \dots & R_{0p} \\ # 0 & R_{11} & \dots & R_{1p} \\ # \vdots & \vdots & \ddots & \vdots \\ # 0 & 0 & \dots & R_{pp} # \end{bmatrix} # \begin{bmatrix} # \hat{\beta_0} \\ # \hat{\beta_1} \\ # \vdots \\ # \hat{\beta_p} # \end{bmatrix} # = # \begin{bmatrix} # {Q_{0}}^T\mathbf{y} \\ # {Q_{1}}^T\mathbf{y} \\ # \vdots \\ # {Q_{p}}^T\mathbf{y} # \end{bmatrix} # \end{equation} # $$ # # Now by applying the backward substitution it is possible to obtain the entire multiple least squares fit. 
For example to find the $\hat{\beta}_p$: # $$ # \begin{equation} # R_{pp}\hat{\beta}_p = {Q_{p}}^T\mathbf{y}\\ # \hat{\beta}_p = \cfrac{\langle Q_p, \mathbf{y} \rangle}{R_{pp}}=\cfrac{\langle \mathbf{z}_p, \mathbf{y} \rangle}{\langle \mathbf{z}_p,\mathbf{z}_p \rangle} # \end{equation} # $$ # + def least_squares_qr(data_x, data_y): X = np.c_[np.ones((len(data_x), 1)), data_x] Z, G = orthogonalize(X) D = linalg.norm(Z, axis=0) Q = Z / D R = np.diag(D) @ G beta = linalg.solve_triangular(R, Q.T @ data_y) return beta beta = least_squares_qr(df[mask_train == 'T'].as_matrix(), df_y[mask_train == 'T'].as_matrix()) print ("Coefficient: ", beta) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="4kyB8MUKn7Vh" # # MNIST with Convolutional Neural Networks # + [markdown] id="YyFM4icCoAaX" # ## Prerequisites # # Install the following packages # + colab={"base_uri": "https://localhost:8080/"} id="u63awVMPoD7Y" outputId="9d63b7c2-f9f5-46a5-816b-e9a8efaa6bed" # ! pip3 install cloudmesh-installer # ! pip3 install cloudmesh-common # + [markdown] id="RlBJ-XuWopPh" # ## Import Libraries # + id="RylyDeO2oC4m" from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np from keras.models import Sequential from keras.layers import Activation, Dense, Dropout from keras.layers import Conv2D, MaxPooling2D, Flatten, AveragePooling2D from keras.utils import to_categorical, plot_model from keras.datasets import mnist # + [markdown] id="e2lUNBJxore4" # ## Download Data and Pre-Process # + colab={"base_uri": "https://localhost:8080/"} id="3X9sLwfUocZ6" outputId="c7856a83-0d33-44c5-a71e-ee9364dad122" (x_train, y_train), (x_test, y_test) = mnist.load_data() num_labels = len(np.unique(y_train)) y_train = to_categorical(y_train) y_test = to_categorical(y_test) image_size = x_train.shape[1] x_train = np.reshape(x_train,[-1, image_size, image_size, 1]) x_test = np.reshape(x_test,[-1, image_size, image_size, 1]) x_train = x_train.astype('float32') / 255 x_test = x_test.astype('float32') / 255 input_shape = (image_size, image_size, 1) print(input_shape) batch_size = 128 kernel_size = 3 pool_size = 2 filters = 64 dropout = 0.2 # + [markdown] id="O8DwuhcaowYQ" # ## Define Model # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="O4tidxRFofEj" outputId="325d5686-f256-4620-f2f6-61c8e49e447c" model = Sequential() model.add(Conv2D(filters=filters, kernel_size=kernel_size, activation='relu', input_shape=input_shape, padding='same')) model.add(MaxPooling2D(pool_size)) model.add(Conv2D(filters=filters, kernel_size=kernel_size, activation='relu', input_shape=input_shape, padding='same')) model.add(MaxPooling2D(pool_size)) model.add(Conv2D(filters=filters, kernel_size=kernel_size, activation='relu', padding='same')) model.add(MaxPooling2D(pool_size)) model.add(Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')) model.add(Flatten()) model.add(Dropout(dropout)) model.add(Dense(num_labels)) model.add(Activation('softmax')) model.summary() plot_model(model, to_file='cnn-mnist.png', show_shapes=True) # + [markdown] id="ZxW3sG8WoyYC" # # Train # + colab={"base_uri": "https://localhost:8080/"} id="SjT5wffFoiEg" outputId="ad0eba1c-cc81-481c-fcd8-45a89be382c2" model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # train the network 
model.fit(x_train, y_train, epochs=10, batch_size=batch_size) # + [markdown] id="E3jjqPJqoljt" # ## Test # + colab={"base_uri": "https://localhost:8080/"} id="2B1-bFcvokdd" outputId="c9703915-ae16-415a-ef16-5382747b6b3b" loss, acc = model.evaluate(x_test, y_test, batch_size=batch_size) print("\nTest accuracy: %.1f%%" % (100.0 * acc)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import pandas as pd import geopandas as gpd huyen = gpd.read_file("data/Bai2_Chuyen doi he TD/HUYEN_region.shp") huyen.crs huyen.plot() huyen_mercator = huyen.to_crs(epsg=3395) huyen_mercator.plot() huyen_mercator.crs # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:s2s-future-dragonstone] # language: python # name: conda-env-s2s-future-dragonstone-py # --- # # Modeling Source-to-Sink systems using FastScape: 8. Storing stratigraphy in a subsiding basin # ![Lannemezan Fan](LannemezanFan.jpg "Lannemezan Fan") import xsimlab as xs import xarray as xr import numpy as np import matplotlib.pyplot as plt #plt.style.use('dark_background') # %load_ext xsimlab.ipython import hvplot.xarray # ## The marine model # # The marine model includes a diffusion model to reproduce the transport of sediment in the marine environment. It also includes a stratigraphic model. In this example, we only wish to have access to the stratigraphic model, so we are going to use the marine model but after stripping it of its marine component, the diffusion, the initial topography, and replace the uplift function by a simpler process (BlockUplift). # + from fastscape.models import marine_model from fastscape.processes import (BlockUplift) transit_model = (marine_model. drop_processes('diffusion'). drop_processes('init_topography'). drop_processes('uplift'). drop_processes('marine'). drop_processes('sea'). update_processes({'uplift': BlockUplift})) transit_model.visualize() # - # We introduce an uplift function that contains (in addition to the uplift in the source area) a subsidence term to create a basin. The subsidence is supposed to decrease linearly from the source/mountain edge to the base level (the opposite boundary). # + xl = 100e3 yl = 100e3 nx = 101 ny = 101 X = np.linspace(0,xl,nx) Y = np.linspace(0,yl,ny) x,y = np.meshgrid(X, Y) u0 = 3e-2 u1 = -1e-4 u = np.zeros((ny,nx)) ylim = 2*yl/(nx-1) u = np.where(yylim].plot() # + fig, ax = plt.subplots(figsize=(12,8)) nout = len(ds_out.horizon) for iout in range(nout): ds_out.strati__elevation.isel(strati=-1).isel(horizon=iout).sel(y=ylim).plot() # - # Anaylse this result by producing different cross sections in the $x$- and $y$-directions. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # *saved from internet archive:https://web.archive.org/web/20160402152249/http://nerds.weddingpartyapp.com/tech/2015/01/21/rxjava-share-publish-refcount-and-all-that-jazz/*: # # # Ok, so in my previous post I innocuously introduced the `.share()` operator. # # ```java # Observable tapEventEmitter = _rxBus.toObserverable().share(); # ``` # # # What is this share operator? 
# The `.share()` operator is basically just a wrapper to the chained call `.publish().refcount()`. # # You’ll find the chained combo `.publish().refcount()` used in quite a few Rx examples on the web. It allows you to “share” the emission of the stream. Considering how powerpacked and frequently used this combo is, RxJava basically introduced the friendlier more useful operator share(). This mechanism of using observables is sometimes referred to as “multicasting”. # # Let’s dig into some of the basics first: # # > “ConnectedObservable” – This is a kind of observable which doesn’t emit items even if subscribed to. It only starts emitting items after its .connect() method is called. # # It is for this reason that a connected obesrvable is also considered **“cold”** or “inactive” before the connect method is invoked. # # > `.publish()`– This method allows us to change an ordinary observable into a “ConnectedObservable”. Simply call this method on an ordinary observable and it becomes a connected one. # # We now know what ½ of the operator `share` does. Now why would you ever use a Connected Observable? The docs say: # # > In this way you can wait for all intended Subscribers to subscribe to the Observable before the Observable begins emitting items. # # This essentially means that a regular usecase for `publish` would involve more than one subscriber. When you have more than one subscriber, it can get tricky to handle each of the subscriptions and dispose them off correctly. To make this process easier, Rx introduced this magical operator called `refcount()`: # # # > `refcount()` – This operator keeps track of how many subscribers are subscribed to the resulting Observable and refrains from disconnecting from the source ConnectedObservable until all such Observables are unsubscribed. # # It essentially maintains a reference counter in the background and accordingly takes the correct action when a subscription needs to be unsubscribed or disposed off. This is the second ½ of the operator share. You are now armed with knowledge of what each of those terms mean. # # # Let’s look at the example from debouncedBuffer again and see how share was used there: # # # ``` # Observable tapEventEmitter = _rxBus.toObserverable().share(); # // which is really the same as: # Observable tapEventEmitter = _rxBus.toObserverable().publish().refcount(); # ``` # # We now have a “shareable” observable called “tapEventEmitter” and because it’s sharable and still not yet ‘live’ (publish from the share call changes it to a ConnectedObservable), we can use it to compose our niftier Observables and rest assured that we always have a reference to the original observable (the original observable being `_rxBus.toObserverable()` in this case). # # # ``` # Observable tapEventEmitter = _rxBus.toObserverable().share(); # Observable debouncedEventEmitter = tapEventEmitter.debounce(1, TimeUnit.SECONDS); # tapEventEmitter.buffer(debouncedEventEmitter) # //... # ``` # # All this sounds good. There is however a possible **race condition** with this implementation (which Ben [pointed out](https://gist.github.com/benjchristensen/e4524a308456f3c21c0b#comment-1367814) through a comment on this gist). # # The race condition occurs because there are two subscribers here (debounce and buffer) and they may come and go at different points. Remember that the RxBus is backed by a hot/live Subject which is constantly emitting items. 
By using the share operator we guarantee a reference to the same source, but NOT that they’ll receive the exact same items if the subscribers enter at different points of time. Ben explains this well: # # # > The race condition is when the two consumers subscribe. Often on a hot stream it doesn’t matter when subscribers come and go, and refCount is perfect for that. The race condition refCount protects against is having only 1 active subscription upstream. However, if 2 Subscribers subscribe to a refcounted stream that emits 1, 2, 3, 4, 5, the first may get 1, 2, 3, 4, 5 and the second may get 2, 3, 4, 5. # > To ensure all subscribers start at exactly the same time and get the exact same values, refCount can not be used. Either ConnectableObservable with a manual, imperative invocation of connect needs to be done, or the variant of publish(function) which connects everything within the function before connecting the upstream. # # # In our usage it’s almost immediate so it probably wouldn’t matter a whole lot. But our original intention was to have the debouncedBuffer function as a single operator. It seems conceptually incorrect if the same events are not emitted. I added a third improved implementation to handle this race condition using Ben’s latter suggestion: # # ``` # // don't start emitting items just yet by turning the observable to a connected one # ConnectableObservable tapEventEmitter = _rxBus.toObserverable().publish(); # # tapEventEmitter.publish((Func1) (stream) -> { # # // inside `publish`, "stream" is truly multicasted # # // applying the same technique for getting a debounced buffer sequence # return stream.buffer(stream.debounce(1, TimeUnit.SECONDS)); # # }).subscribe((Action1) (taps) { # _showTapCount(taps.size()); # }); # # // start listening to events now # tapEventEmitter.connect(); # ``` # # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- dfSF = read.csv('F:/GitHub Repos/datasci_course_materials/assignment6/sanfrancisco_incidents_summer_2014.csv',head=TRUE,sep=",") summary(df) library(ggplot2) install.packages('ggplot2', repos='http://cran.us.r-project.org') library(ggplot2) dfSF = df dfSea = read.csv('F:/GitHub Repos/datasci_course_materials/assignment6/seattle_incidents_summer_2014.csv',head=TRUE,sep=",") summary(dfSea) dfSF$Month <- as.Date(cut(dfSF$Date,breaks="month")) dfSF$Year <- as.Date(cut(dfSF$Date,breaks="year")) dfSF$Date <- ymd_hms(dfSF$Date) library(lubridate) summary(dfSF) dfSF$Month<-month(as.POSIXlt(dfSF$Date, format="%d/%m/%Y")) dfSF$Year<-year(as.POSIXlt(dfSF$Date, format="%d/%m/%Y")) summary(dfSF) summary(dfSea) summary(dfSF) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from konlpy.tag import Kkma, Twitter kkma = Kkma() sen = kkma.sentences("블록체인을 우리 몸에 비유하면, 블록체인은 몸에 필요한 영양분을 공급하는 혈관이고, 가상화폐 (Coin)은 혈관 속에 흐르는 피의 역할을 한다고 볼 수 있고, 이해하기도 쉬울 것이다. 블록체인의 활용방안을 설계하기 위해, 우리 사회에 공유되어야 하는 정보는 몸 속의 영양소에 비유된다. 즉, 블록체인 (핏줄)을 통해 정보(영양분)를 활발하게 공유하고, 가상화폐 (피)는 정보를 전달하고, 공유하는 것을 원할하게 해주는 역할을 한다고 볼 수 있다. 블록체인기술의 특징은 크게 3가지, 탈중앙화, 투명한 정보공개, 정보 위변조 방지로 설명할 수 있다. 블록체인은 탈중앙화를 가치로 내세우며, 컴퓨팅의 중심을 중앙에서 다시 분산시스템으로 돌려놓고 있다. 이는 중앙시스템에 대한 비용을 제거할 수 있기 때문에, IT 투자비용과 관리비용을 절감효과를 얻는 장점을 제공한다. 
블록체인의 두 번째 장점인 투명한 정보공개는 중앙에 관리자가 없이도, 블록체인 자체가 신뢰기반이 되고, 누구나 정보를 조회할 수 있다. 정보의 최신 업데이트 등이 블록체인에서는 자발적으로 되므로, 정보공유의 중요한 인프라로 활용될 수 있다. 마지막으로, 블록체인에 기록된 정보에 대한 임의 수정 및 위조가 불가능하므로, 금융자산 거래 뿐만 아니라 소유권 이전, 상속/증여 등 내용의 신뢰가 보장되어야 하는 모든 거래를 가능하게 해준다. 더욱이, 스마트 컨트랙트(Smart Contract) 기반의 자율화된 거래를 적용할 수 있어, 중개인이 없이 자동화된 거래를 가능하게 해준다. 블록체인을 활용하는 비즈니스를 위해서는 이 3가지 장점을 충분히 활용할 수 있는 특성을 가지는 비즈니스에 적용하면, 적은 비용으로 전 세계에 분산된 컴퓨팅의 역량을 활용하고, 분산된 원장을 관리하고, 거래의 무결성과 완결성을 보장하면서, 거래를 효율적으로 처리할 수 있다. 이 책은 크게 3개의 장의 구성된다. 1장 “가상화폐와 비트코인”에서는 가상화폐에 적용된 블록체인의 핵심기술을 이해하는 데, 도움을 주고자 한다. 탈중앙화된 가상화폐 비트코인의 탄생배경과, 블록체인의 기반 기술인 분산원장과 합의알고리즘, 작업증명(PoW)과 지분증명(PoS), 데이터 압축저장을 위한 머클트리 기술을 설명한다. 또한, 가상화폐로 불려지는 암호화폐 기술의 기반이 되는 해시함수, 채굴방식, 전자지갑주소 생성, 공개키 암호화와 전자서명에 대해서 논의한다. 2017년 가상화폐의 급등으로 높아진 관심도에 비례하여, 블록체인 기술을 다른 분야에 응용하기 위한 사회분위기가 높아져서, 블록체인 기술이 가지는 장점을 다루면서, 블록체인 혁신에 대한 준비를 할 수 있다. 2장 “비트코인은 가상화폐인가? 금융상품인가?” 에서는 가상화폐의 투기논란으로 중요한 블록체인 기술이 등한시 되는 우를 범하지 않기를 바라면서, 가상화폐의 금융상품으로서의 가치를 논하고자 한다. 블록체인 발 비즈니스 혁신이 성공적으로 운영되기 위해서는 가상화페가 “화폐” 라는 개념보다는 탈중앙화된 블록체인 기반 응용서비스에서 대중들의 참여를 유도하는 “코인”으로서 역할을 설명하고자 한다. 가상화폐는 국내법(전자금융거래법)에서 전자화폐로 인정되지 못하는 이유와 코인으로서의 가치를 논한다. 2018년 1월초부터 급락하는 비트코인 버블과 가상화폐 거래소와 해킹, 하드포크(Hardfork)로 인해 생겨난 비트코인 캐시의 문제에도 불구하고, 핀테크 간편결제 보다는 대중들의 관심을 받는 이유가 되는 ICO에 대해서 논의한다. 마지막 3장 “블록체인 기반 비즈니스 혁신”에서는 스마트계약과 Dapp으로 인해, 바뀌는 일하는 방식을 통해 블록체인 비즈니스 모델과 응용사례를 보여준다. 난립하는 가상화폐와 블록체인으로 인한 이슈를 다루면서 블록체인 기술 검증에 대한 필요성과 검증방안을 논한다. 마지막으로 이더리움 기반의 가상화폐를 만드는 코딩 사례와 프라이빗 블록체인의 스마트 계약 만드는 사례를 통해, 블록체인의 코딩을 살펴본다.") len(sen) twitter = Twitter() twitter.sentenceSplitter("가나") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [insight] # language: python # name: Python [insight] # --- # ### Need to loop over all of the legislation (10,000s) by 1,000 at a time. Extract the bill IDs, and then extract the bill text one-by-one. After retrieving the bill text, store it to a database on AWS with some associated metadata. # # ### Then, I will need to figure out how to do that for the U.S. Congress. Use the U.S. Congress bill text as the training data, with the given subject terms, and use that to train. See how well that predicts other bills in the U.S. congress and use that model for the New York legislation. Go through a subset of the new york data and see if there are keywords or other information that can be used to hand label. Also, use the given terms to use as a broader base of keywords for labeling the U.S. data. Also, try running in an unsupervised setting to see how the data clusters. import requests my_key = open('/Users/Joel/Documents/insight/ny_bill_keys.txt', 'r').readline().strip() import time # Set up the database to save the results of the new york bill table # There will be one table for the New York bills and one for U.S. bills ## Python packages - you may have to pip install sqlalchemy, sqlalchemy_utils, and psycopg2. from sqlalchemy import create_engine from sqlalchemy_utils import database_exists, create_database import psycopg2 import pandas as pd # + #In Python: Define a database name dbname = 'bills_db' username = 'Joel' ## 'engine' is a connection to a database ## Here, we're using postgres, but sqlalchemy can connect to other things too. 
engine = create_engine('postgres://%s@localhost/%s'%(username,dbname)) print engine.url ## create a database (if it doesn't exist) if not database_exists(engine.url): create_database(engine.url) print(database_exists(engine.url)) # - from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() from sqlalchemy import Column, Integer, String class New_York_Bill(Base): __tablename__ = 'ny_bills' bill_num = Column(String, primary_key=True) bill_name = Column(String) bill_text = Column(String) def __repr__(self): return "" % ( self.bill_num, self.bill_name, self.bill_text) ny_bills_table = New_York_Bill.__table__ # Actually create the table Base.metadata.create_all(engine) from sqlalchemy.orm import sessionmaker Session = sessionmaker(bind=engine) session = Session() # + #ny_bills_table.drop(engine) # This seems painful. Drop the table from the command line before running the command below. # + #requests.get('http://legislation.nysenate.gov/api/3/bills/2015/A02257?view=only_fullText&key=' + my_key).json() # + # Run through a loop getting files 1,000 at a time until we receive all files offset = 0 year = 2015 limit = 1000 #limit = 10 key = my_key my_max = 50000 #my_max = 50 request_string = 'http://legislation.nysenate.gov/api/3/bills/{0}?limit={1}&offset={2}&key={3}'.format(year, limit, offset, key) all_bills = requests.get(request_string).json() while ((all_bills['responseType'] == 'bill-info list') and offset < my_max): print all_bills['offsetStart'] offset += limit request_string = 'http://legislation.nysenate.gov/api/3/bills/{0}?limit={1}&offset={2}&key={3}'.format(year, limit, offset, key) all_bills = requests.get(request_string).json() if (all_bills['responseType'] == 'bill-info list'): for bill in all_bills['result']['items']: bill_num = bill['printNo'] single_request = 'http://legislation.nysenate.gov/api/3/bills/{0}/{1}?view=only_fullText&key={2}'.format( year, bill_num, my_key) bill_data = requests.get(single_request).json() bill_text = bill_data['result']['fullText'] #print bill_num #print bill['title'] #print bill one_bill = New_York_Bill(bill_num=bill_num, bill_name=bill['title'], bill_text=bill_text) session.add(one_bill) time.sleep(1) time.sleep(2) session.commit() # - from sqlalchemy import text result = session.query(New_York_Bill).from_statement(text("SELECT * FROM ny_bills")) all_bills = result.all() len(all_bills) all_bills[0] all_bills[-1] session.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 03. Run Query # - 라이브러리를 활용해 Query날리는 법이 아닌 BigQuery Console에서 진행해보겠습니다! # - Public Datasets를 활용해 6가지 쿼리문을 작성해보겠습니다 # # ## usa_names # - 우선 BigQuery Console에서 좌측의 Public Datasets 아래에 bigquery-public-data :usa_names을 클릭한 후, use_1910_current를 클릭해주세요 # - 그 후, Details, Schema,Preview를 보겠습니다 # ## Details # # # ## Schema # # # ## Preview # # - 이 테이블은 해당연도에 특정 주에서 특정 성별을 가진 특정 이름의 수를 저장한 테이블입니다 # - 총 5개의 Column(state, gender, year, name, number) # ### Q1) 가장 많은 state, year별 태어난 사람의 수( 쿼리를 직접 짜보세요! 일부러 빈칸을 넣었어요) # # # # ~~~ # #standardSQL # SELECT state, year, sum(number) as number # FROM `bigquery-public-data.usa_names.usa_1910_current` # group by state, year # order by number desc # ~~~ # # # ### Q2) ( 연도 상관없이 ) 가장 많이 태어난 이름은? 
# # # # ~~~ # #standardSQL # SELECT name, sum(number) as number # FROM `bigquery-public-data.usa_names.usa_1910_current` # group by name # order by number desc # ~~~ # # # ### Q3) 가장 많이 태어난 이름은 James 그렇다면 연도별 James의 Count를 구하는 Query # # # # ~~~ # #standardSQL # SELECT name, year, sum(number) as number # FROM `bigquery-public-data.usa_names.usa_1910_current` # where name = 'James' # group by name, year # order by number desc # # # ~~~ # # # ## nyc-tlc:green data # - 우선 BigQuery Console에서 좌측의 Public Datasets 아래에 nyc-tlc:green을 클릭한 후, trips_2014를 클릭해주세요 # - 그 후, Details, Schema, Preview를 보겠습니다 # ## Details # # ## Schema # # ## Preview # # # - 이 테이블은 2014년 뉴욕의 green taxi에 대한 기록입니다 # - 총 21개의 Column # ### Q4) 2014년에 고객수(passenger_count)별 탑승 횟수를 구하는 쿼리를 짜보세요 # # # # ~~~ # #standardSQL # SELECT passenger_count, count(passenger_count) as number # FROM `nyc-tlc.green.trips_2014` # group by passenger_count # order by number desc # ~~~ # # ### Q5) 2015년 Table도 동일 dataset에 있는데, 2015년도 데이터도 포함해서 고객수별(passenger_count)탑승 횟수를 구하는 쿼리를 짜보세요 # # # # ~~~ # #standardSQL # SELECT passenger_count, count(passenger_count) as number # FROM `nyc-tlc.green.trips_*` # where _table_suffix between '2014' and '2015' # group by passenger_count # order by number desc # ~~~ # # - _table_suffix를 활용해 Table의 기간을 설정할 수 있습니다 # ### Q6) 위에서 사용한 Table에서 연도별(pickup_datetime 기준) 운행 횟수(pickup_datetime)을 구하는 쿼리를 짜보세요 # # # # ~~~ # #standardSQL # SELECT EXTRACT(YEAR FROM pickup_datetime) as year, count(pickup_datetime) as total # FROM `nyc-tlc.green.trips_*` # where _table_suffix between '2014' and '2015' # group by year # order by total desc # ~~~ # # - EXTRACT 함수를 사용하면 timestamp에 있는 데이터를 추출할 수 있습니다! # ### 이번 쿼리는 시간이 꽤 걸렸을거에요. 여기서 Explanation을 눌러볼까요? # # # - 여기에선 BigQuery의 쿼리 단계가 나타나요. Average Time, Max Time이 표시되고, 어떤 부분에서 연산 시간이 오래 걸렸는지 알려줍니다! # - 또한 Error가 날 경우엔, Error가 어디서 나타났는지 알려줍니다 # #### 몇가지만 간단히 추가했는데, 추후에 계속 쿼리에 대해 추가하겠습니다 ( Window 함수, Join, With문 등등..) 
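# The queries above are meant to be run in the BigQuery Console. As a side note (not part of the original tutorial), the same standard SQL can also be executed from Python with the `google-cloud-bigquery` client library, assuming the package is installed and authenticated credentials with access to the public datasets are configured:
# +
# Hypothetical sketch: running the usa_names query (Q2) through the BigQuery client
# library instead of the Console. Requires `pip install google-cloud-bigquery` and
# application-default credentials.
from google.cloud import bigquery

client = bigquery.Client()  # uses the default project and credentials

query = """
    SELECT name, SUM(number) AS number
    FROM `bigquery-public-data.usa_names.usa_1910_current`
    GROUP BY name
    ORDER BY number DESC
"""

# Print the five most common names across all years
for row in client.query(query).result(max_results=5):
    print(row.name, row.number)
# -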
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import cv2 import os import numpy as np import pandas as pd from skimage import io from skimage.transform import resize import matplotlib.pyplot as plt from func.postprocessing import find_cntr_condition, crf, find_cntr from func.tool import get_fname from func.plot import plt_result os.environ["CUDA_VISIBLE_DEVICES"] = '6' # - # # (1) sup+cntr (2)+crf (3) +cntr FLICKR = 'data/ori/tesri/' UnsupCV2DIR = 'model/Unsup_rmbg/result_sample/predict_img_postprocessd/' MASKDIR = 'model/Unet_rmbg/181128103535/Predict_mask/' SAVEDIR = 'data/processed/finaluse/' masks = os.listdir(MASKDIR) mask_path = [os.path.join(MASKDIR, i) for i in masks] masks = os.listdir(MASKDIR) flickr_path = [os.path.join(FLICKR, get_fname (i)+'.jpg') for i in masks] new_path = [os.path.join(MASKDIR, i) for i in masks] # + code_folding=[] for i in range(len(new_path)): # for i in range(1): # get path img_path = flickr_path[i] fname = get_fname(img_path) msk_path = new_path[i] # prepare data to [0,255], channel=3 flickr_img = io.imread(img_path) flickr_img = resize(flickr_img,(256,256)) flickr_img_255 = (flickr_img)*255 flickr_img_255 = flickr_img_255.astype(np.uint8) mask = io.imread(msk_path, as_grey=True) if not mask.max()>1: mask = mask*255 mask = mask.astype(np.uint8) mask3 = np.stack((mask,mask,mask),axis=2) # unsup2sup unsup2sup_mask = find_cntr_condition(mask3, condition=62000) bk_unsup2sup_img = ((unsup2sup_mask/255)*flickr_img_255).astype(np.uint8) unsup2sup_img = ((unsup2sup_mask/255)*flickr_img_255 + 255-unsup2sup_mask).astype(np.uint8) # crf, return only 'R' channel crf_output = crf(flickr_img_255, mask3) crf_mask = crf_output[:,:,0] +mask3[:,:,0] crf_mask = np.repeat(crf_mask,repeats=3).reshape((256,256,3)) bk_crf_img = ((crf_mask/255)*flickr_img_255).astype(np.uint8) crf_img = ((crf_mask/255)*flickr_img_255 + 255-crf_mask).astype(np.uint8) # find contour, return mask [h,w,3] # cntr_mask = find_cntr(crf_mask) cntr_mask = find_cntr_condition(crf_mask, condition=62000) cntr_mask_01 = cntr_mask/255 bk_cntr_img = flickr_img*cntr_mask_01 cntr_img = ((cntr_mask/255)*flickr_img_255+ 255-cntr_mask).astype(np.uint8) # show unsup+cv2 result unsupcv2_img = io.imread(os.path.join(UnsupCV2DIR, fname+'.png')) # save out p_img = [flickr_img, unsupcv2_img, unsup2sup_img, crf_img, cntr_img] p_title= ['Original', 'unsup', 'unsup2sup', 'crf', 'crf_cv2'] fig = plt_result(p_img, p_title) save_to = os.path.join('data/processed/finalstep/', 'checking') if not os.path.exists(save_to): os.makedirs(save_to) fig.savefig(os.path.join(save_to, fname+'.png'), dpi=100, format='png',bbox_inches='tight') save_to = os.path.join('data/processed/finalstep/', 'unsup2sup_cntr') if not os.path.exists(save_to): os.makedirs(save_to) io.imsave(os.path.join(save_to, fname+'.png'), unsup2sup_img) save_to = os.path.join('data/processed/finalstep/', 'mask_unsup2sup_cntr') if not os.path.exists(save_to): os.makedirs(save_to) io.imsave(os.path.join(save_to, fname+'.png'), unsup2sup_mask) save_to = os.path.join('data/processed/finalstep/', 'unsup2sup_crf') if not os.path.exists(save_to): os.makedirs(save_to) io.imsave(os.path.join(save_to, fname+'.png'), crf_img) save_to = os.path.join('data/processed/finalstep/', 'mask_unsup2sup_crf') if not os.path.exists(save_to): os.makedirs(save_to) io.imsave(os.path.join(save_to, fname+'.png'), 
crf_mask) save_to = os.path.join('data/processed/finalstep/', 'unsup2sup_crf_cntr') if not os.path.exists(save_to): os.makedirs(save_to) io.imsave(os.path.join(save_to, fname+'.png'), cntr_img) save_to = os.path.join('data/processed/finalstep/', 'mask_unsup2sup_crf_cntr') if not os.path.exists(save_to): os.makedirs(save_to) io.imsave(os.path.join(save_to, fname+'.png'), cntr_mask) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Get the ra and dec for each galaxy in CGM$^2$ from astropy.io import fits import glob import astropy from astropy.table import Table # Lets see what a mask file looks like file = glob.glob("gn*fits")[0] mdf = Table.read(file) mdf def get_radec(file): """ get the RA, DEC from a mask file. Put it into an astropy Table object. """ mdf = Table.read(file) ID = mdf['ID'] ra = mdf['RA'] dec = mdf['DEC'] radec_tab = Table([ID, ra, dec]) return radec_tab file_names = glob.glob("/Users/mwilde/Dropbox/COS-Gemini/gemini_data.G*/raw*/G*.fits") # + radec_list = [] for file in file_names: radec = get_radec(file) radec_list.append(radec) radec_tab = astropy.table.vstack(radec_list) # - radec_tab.info radec_tab.write('cgmsquared_gal_radec.txt', format='ascii', overwrite=True) radec_tab # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy import matplotlib.pyplot as plt import seaborn as sns import re import tqdm import datetime now = datetime.datetime.now() # - df = pd.read_pickle('data/arxiv-last50years-data.pickle') df.shape df.head() # + yearthresh = now.year+2 useless = [] for i in df.index: if int(df.year[i])>yearthresh: useless.append(i) df.drop(useless, axis=0, inplace=True) # + sns.set_style('darkgrid') fig, ax = plt.subplots() fig.set_size_inches(15, 6) psns = sns.countplot(x="year",data=df, ax=ax) plt.xticks(rotation=45) plt.show() # - import plotly import plotly.express as px import plotly.graph_objs as go data = [ go.Bar( y=df['year'].value_counts(), x=df['year'].value_counts().keys(), orientation='v', # text="d", )] layout = go.Layout( height=500, title='Publications per Year', hovermode='closest', xaxis=dict(title='Years', ticklen=5, zeroline=False, gridwidth=2, domain=[0.1, 1]), yaxis=dict(title='Counts', ticklen=5, gridwidth=2), showlegend=True ) fig = go.Figure(data=data, layout=layout) # py.iplot(fig, filename='Sector/ Area of Coaches - Combined') # plotly.plot(fig, ) fig.show() df.submitter.value_counts()[:10] data = [ go.Bar( y=df['categories'].value_counts(), x=df['categories'].value_counts().keys(), orientation='v', text="d", )] layout = go.Layout( height=500, width=2000, title='Categories published in last 50 years', hovermode='closest', xaxis=dict(title='Categories', ticklen=5, zeroline=False, gridwidth=2, domain=[0.1, 1]), yaxis=dict(title='Counts', ticklen=5, gridwidth=2), showlegend=True ) fig = go.Figure(data=data, layout=layout) # py.iplot(fig, filename='Sector/ Area of Coaches - Combined') # plotly.plot(fig, ) fig.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline # import 
tools we are using import pandas as pd import numpy as np from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt # read in the car ‘table’ – not a csv, so we need # to add in the column names column_names = ['mpg', 'cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'year', 'origin', 'name'] df = pd.read_table('http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data', \ sep=r"\s+", index_col=0, header=None, names = column_names) print(df.head()) #start out plotting (uses a subplot as that can be 3d) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') # pull out the 3 columns that we want xs = [] ys = [] zs = [] for index, row in df.iterrows(): xs.append(row['weight']) ys.append(index) #read_table uses first column as index zs.append(row['cylinders']) # based on our data set the extents of the axes plt.xlim(min(xs), max(xs)) plt.ylim(min(ys), max(ys)) ax.set_zlim(min(zs), max(zs)) # standard scatter diagram (except it is 3d) ax.scatter(xs, ys, zs) ax.set_xlabel('Weight') ax.set_ylabel('MPG') ax.set_zlabel('Cylinders') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 # import libraries import pandas as pd pd.set_option('display.max_columns', None) import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns import numpy as np from datetime import timedelta, datetime from termcolor import colored # aims: # 1. extract features from test failing vehicles # 2. build classification model for predicting outcome of emission test for vehicles # # additionals: # # 1. new customer - prediction # 2. return customer - prediction # 3. predict test cost # # import data # [here](https://opendata.cabq.gov/dataset/vehicle-emissions/resource/2663c138-549e-4d55-932f-6aa549c7d158) is the MetaData # # + df_all = pd.read_csv('../data/sample201320.csv', low_memory=False) print("data size:", df_all.shape) df_all.head(2) # - # # target # # 1. keep all the records with OVERALL_RESULT = P or F # 2. 
change to numeric # - P 173762 # - F 13741 import seaborn as sns y = df_all['OVERALL_RESULT'] print('Num of rows with missing values:', y.isnull().sum()) tmp = y.value_counts()*100.0/y.shape[0] sns.barplot(x=tmp.index, y=tmp); y.value_counts() df_all.loc[(df_all.OVERALL_RESULT=='A') & (df_all.ABORT_CODE_OTHER_DESC.notnull()), 'ABORT_CODE_OTHER_DESC'].head(30) # drop the rows where the test was aborted df_all = df_all[~df_all.OVERALL_RESULT.isin(['A','O'])] df_all.OVERALL_RESULT.value_counts() # P=1, F=1 df_all['RESULT'] = df_all['OVERALL_RESULT'].map({'P':1, 'F':0}) df_all.RESULT.value_counts() # # vehicle features # # find the unique id column print(df_all.shape[0]) print(df_all[['RecordID', 'RECORD_NUM', 'VIR_CERT_NUMBER']].nunique()) # set RecordID as index df_all.set_index('RecordID', inplace=True) print("RecordID has been set as index") # ## quick exploration for selecting features # + [markdown] heading_collapsed=true # ### vehicle features and missing values # - VID_TEST_TYPE: 80462 # - GVW_TYPE: 71 # - VEH_LICENSE: 13 # - MODEL: 2 # # + hidden=true cols = ['RESULT', 'TEST_TYPE', 'VID_TEST_TYPE', # which 'TEST_SDATE', 'TEST_EDATE', 'VIN', 'ZIP_CODE', 'VEHICLE_TYPE', 'MODEL_YEAR', 'MAKE', 'MODEL', 'GVW_TYPE', 'GVWR', # which 'CYL', 'ENGINE_SIZE', 'TRANS_TYPE', 'DUAL_EXHAUST', 'ODOMETER', 'FUEL_TYPE'] df = df_all[cols] df.head() # + hidden=true df.isnull().sum().sort_values(ascending=False)[:5] # + hidden=true # how many unique vehicles df.VIN.nunique() # + [markdown] heading_collapsed=true # ### TEST_TYPE VS VID_TEST_TYPE # - drop VID_TEST_TYPE due to the missing values # + hidden=true tmp = df[['RESULT','TEST_TYPE', 'VID_TEST_TYPE']] fig, axs = plt.subplots(1, 2, figsize=(20, 4)) sns.countplot(data=tmp, x='RESULT', hue='TEST_TYPE', ax=axs[0]) sns.countplot(data=tmp, x='RESULT', hue='VID_TEST_TYPE', ax=axs[1]) # + hidden=true df = df.drop(columns=['VID_TEST_TYPE']) # - # ### GVW_TYPE vs GVWR # - drop GVW_TYPE # tmp = df[['RESULT','GVW_TYPE', 'GVWR']] sns.countplot(data=tmp, x='RESULT', hue='GVW_TYPE') sns.pairplot(tmp) # drop GVW df = df.drop(columns=['GVW_TYPE']) # + [markdown] heading_collapsed=true # ### categorical cols distribution # - drop DUAL_EXHAUST # + hidden=true df.nunique().sort_values() # + hidden=true df.nunique().sort_values().index # + hidden=true categorical_cols = ['TEST_TYPE', 'TRANS_TYPE', 'VEHICLE_TYPE', 'FUEL_TYPE', 'CYL'] # distribution of categorical values in Pass and Fail data sets df['count'] = 1 n = len(categorical_cols[1:]) fig, axs = plt.subplots(n, 1, figsize=(5, 6*n)) for col, ax in zip(categorical_cols[1:], axs.flat): tmp = df.groupby(['RESULT', col])['count'].count().groupby('RESULT').apply(lambda x: 100*x/x.sum()).unstack(level=1) tmp.plot(kind='bar', stacked=True, ax=ax) ax.set_title(col) df = df.drop(columns=['count']) # + hidden=true # let's look closer to DUAL_EXHAUST df.groupby(['RESULT'])['DUAL_EXHAUST'].value_counts()/df.shape[0] # + hidden=true # let's drop dual_exhaust df = df.drop(columns=['DUAL_EXHAUST']) # - # ## cleaning and feature engineering # # # **set RecordID as index** # # - total records: 187503 # - P: 173762 # - F: 13741 # # **string columns : VIN, MAKE, MODEL, ZIP_CODE, MODEL_YEAR** # # # - missing values in MODEL: 2 # - using the historical records of that VIN fill in the missing MODEL values # # # - engineer new feature: MAKE + MODEL + MODEL_YEAR # - unique MAKE_MODEL_YEAR: 11684 # - total records: 187503 # # # - drop: MODEL, MAKE, MODEL_YEAR # # # - extract features from ZIP_CODE: pause!!! 
# - typos # - zip codes of ABQ are not structured # # # - drop: ZIP_CODE # # # final columns: VIN, MAKE_MODEL_YEAR # # # **datetime columns: TEST_SDATE, TEST_EDATE** # # - if a vehicle has multiple records from the same date, keep first record # # - records where vehicles got tested more than 1 time a day: 4046 # # # - after removing repeated tests from same day # # # - total number of records: 187503 > 183457 # - P: 173762 > 170120 # - F: 13741 > 13337 # - data range: 2013-01-01/2020-12-31 # # # - engineer feature: VEHICLE_AGE # # - drop TEST_SDATE # # final columns: VEHICLE_AGE # # **categorical columns: TEST_TYPE, TRANS_TYPE, VEHICLE_TYPE, FUEL_TYPE, CYL** # # - map to numeric: TEST_TYPE:{'I':0, 'A':1}, TRANS_TYPE:{'A':0, 'M':1} # # - drop VEHICLE_TYPE # - near 1000 MAKE_MODEL_YEARs had more than 1 VEHICLE_TYPE # - it's not clear how vehicle types were defined # # - drop CYL # - share of fail is higher when CYL is 9, 12 and R # - however only 82 records whose CYL is 9, 12 and R # # - drop FUEL_TYPE (FUEL_TYPE L only has 4 records and all are Pass) # - more than 95% of the tests have FUEL_TYPE G # # final columns: TEST_TYPE, TRANS_TYPE # # # **numeric columns: GVWR, ENGINE_SIZE, ODOMETER** # # - 0 in GVWR: 11577 (P: 11090, F: 487) # # - took the median GVWR values for each VIN # # - drop records where GVWR values are missing: 7942 (P: 7560, F: 382) # # - range of GVWR: 847 - 10000 # # - drop records where ODOMETER = 0 : 875 (P: 746, F: 129) # # - possible outliers in ODOMETER # - records where ODOMETER > 1 million miles: 229 # - vehicles with ODOMETER > 1 million miles: 218 # # **final columns: RESULT, TEST_TYPE, ENGINE_SIZE, TRANS_TYPE, ODOMETER, VEHICLE_AGE, GVWR** # # # **final data size: 174651, 8** # + [markdown] heading_collapsed=true # ### missing values # # + hidden=true df.head(2) # + hidden=true df['RESULT'].value_counts() # + hidden=true df.isnull().sum().sort_values(ascending=False)[:5] # + hidden=true numeric_cols = ['MODEL_YEAR', 'GVWR', 'ENGINE_SIZE', 'ODOMETER'] fig, axs = plt.subplots(4, 1, figsize=(20, 12)) for col, ax in zip(numeric_cols, axs.flat): print(col, df[col].sort_values().values[0], df[col].sort_values().values[-1]) sns.boxplot(x=df[col], ax=ax) # + hidden=true df.head() # + [markdown] heading_collapsed=true # ### strings # + hidden=true # select string columns string_cols = ['VIN', 'ZIP_CODE', 'MODEL_YEAR', 'MAKE', 'MODEL'] # tranform string columns def to_string(df, string_cols): for col in string_cols: df[col] = df[col].astype('string').str.strip().str.lower() df return df df = to_string(df, string_cols) print(df[string_cols].dtypes) df[string_cols].head() # + hidden=true df[string_cols].isnull().sum() # + hidden=true # using the historical values for that VIN fill in the missing MODEL values vin = df[df.MODEL.isnull()].VIN.values[0] model = df[df.VIN==vin].MODEL.values[-1] df.loc[df.MODEL.isnull(), 'MODEL'] = model df.MODEL.isnull().sum() # + hidden=true # make + model + year df['MAKE_MODEL_YEAR'] = df.MAKE + '/' + df.MODEL + '/' + df.MODEL_YEAR print("unique MAKE_MODEL_YEAR:", df.MAKE_MODEL_YEAR.nunique()) # drop MAKE and MODEL df = df.drop(columns=['MAKE', 'MODEL']) print("total records:", df.shape[0]) # + hidden=true ABQ_zipcodes = [87101, 87102, 87103, 87104, 87105, 87106, 87107, 87108, 87109, 87110, 87111, 87112, 87113, 87114, 87115, 87116, 87119, 87120, 87121, 87122, 87123, 87125, 87131, 87151, 87153, 87154, 87158, 87176, 87181, 87184, 87185, 87187, 87190, 87191, 87192, 87193, 87194, 87195, 87196, 87197, 87198, 87199] # extract ZIP_CODE def 
extract_zipcode(df, col): # whether from the city df['ABQ'] = 0 df.loc[df[col].str[:3]=='871', 'ABQ'] = 1 # which part of the city df['DISTRICT'] = df[col].str[3:].str.strip('-') return df # drop ZIP_CODE df = df.drop(columns=['ZIP_CODE']) # + hidden=true df.select_dtypes(include='string').head() # + hidden=true # vehicles from a specific brand/model/year fails the test? tmp = df.groupby('MAKE_MODEL_YEAR')['RESULT'].value_counts().unstack(level=1) tmp[(tmp[0] > 2) & tmp[1].isnull()] # - # ### dates # # + # select date columns date_cols = ['TEST_SDATE', 'TEST_EDATE'] # transform date columns def to_datetime(df, date_cols): for col in date_cols: df[col] = pd.to_datetime(df[col]) return df df = to_datetime(df, date_cols) print(df[date_cols].dtypes) df[date_cols].head() # - # **⚠️ if a vehicle has multiple records from the same date, keep earliest record** # # the gap between test ending time and test starting time df['gap'] = (df['TEST_EDATE'] - df['TEST_SDATE'])/timedelta(hours=1) # distribution of gap plt.figure(figsize=(20, 4)) sns.boxplot(x=df.gap); # drop gap column df = df.drop(columns=['gap']) # + # add a helper column which has date but not hours of the day df['TEST_DATE'] = df['TEST_SDATE'].dt.date # keep the first record from the same date for each vehicle df1 = df.loc[df.groupby(['VIN', 'TEST_DATE'])['TEST_SDATE'].idxmin(),:].copy() # select the records whose vehicles got tested more than 1 time a day, can capture some extra information here df2 = df[~df.index.isin(df1.index)] print("records where vehicles got tested more than 1 time a day:", df2.shape[0]) print("records after removing repeated tests from same day:", df1.shape[0]) # date range print('data range:', df1.TEST_DATE.min(), df1.TEST_DATE.max()) # drop the some columns df = df1.drop(columns=['TEST_DATE', 'TEST_EDATE']) # - # check the distribution of target df.RESULT.value_counts() # + # engineering VEHICLE_AGE df['VEHICLE_AGE'] = df.TEST_SDATE.dt.year.astype('int') - df.MODEL_YEAR.astype('int') + 2 # drop TEST_SDATE # df = df.drop(columns=['TEST_SDATE']) # check out the VEHICLE_AGE distribution sns.histplot(data=df, x='VEHICLE_AGE'); # - print("Age distribution of vehicles which PASSED in emission test") plt.figure(figsize=(20,4)) sns.countplot(data=df[df.RESULT==1], x='VEHICLE_AGE'); print("Age distribution of vehicles which FAILED in emission test") plt.figure(figsize=(20,4)) sns.countplot(data=df[df.RESULT==0], x='VEHICLE_AGE'); # + def extract_time(df, col): df['month'] = df[col].dt.month df['weekday'] = df[col].dt.weekday df['hour'] = df[col].dt.hour return df # df = extract_time(df, 'TEST_SDATE') # + # sns.pairplot(data=df[['RESULT', 'month', 'weekday', 'hour']], corner=True) # + [markdown] heading_collapsed=true # ### categorical columns # + hidden=true categorical_cols = ['TEST_TYPE', 'TRANS_TYPE', 'VEHICLE_TYPE', 'FUEL_TYPE', 'CYL'] df[categorical_cols].isnull().sum() # + hidden=true # distribution of categorical values in Pass and Fail data sets df['count'] = 1 n = len(categorical_cols) fig, axs = plt.subplots(n, 1, figsize=(5, 6*n)) for col, ax in zip(categorical_cols, axs.flat): tmp = df.groupby(['RESULT', col])['count'].count().groupby('RESULT').apply(lambda x: 100*x/x.sum()).unstack(level=1) tmp.plot(kind='bar', stacked=True, ax=ax) ax.set_title(col) df = df.drop(columns=['count']) # + hidden=true # TEST_TYPE df = df.replace({'TEST_TYPE':{'I':0, 'A':1}}) df.TEST_TYPE.value_counts() # + hidden=true # TRANS_TYPE df = df.replace({'TRANS_TYPE':{'A':0, 'M':1}}) df.TRANS_TYPE.value_counts() # + hidden=true # 
VEHICLE_TYPE # for each vehicle tpye, compare pass and fail df.groupby(['VEHICLE_TYPE', 'RESULT']).size()\ .groupby('VEHICLE_TYPE').apply(lambda x: 100*x/x.sum())\ .unstack(level=1).plot(kind='bar', stacked=True); # check how many records have more than 1 VEHICLE_TYPE were assigned to the same MAKE_MODEL_YEAR tmp = df[['VEHICLE_TYPE', 'MAKE_MODEL_YEAR']].groupby('MAKE_MODEL_YEAR').nunique().sort_values('VEHICLE_TYPE') print('MAKE_MODEL_YEARs with more than 1 VEHICLE_TYPE assigned to them:', tmp[tmp.VEHICLE_TYPE > 1].shape[0]) # + hidden=true # CYL # for each CYL, compare pass and fail df.groupby(['CYL', 'RESULT']).size()\ .groupby('CYL').apply(lambda x: 100*x/x.sum())\ .unstack(level=1).plot(kind='bar', stacked=True); df['CYL_912R'] = 0 df.loc[df.CYL.isin(['9', '12', 'R']), 'CYL_912R'] = 1 print(df.CYL_912R.value_counts()) # + hidden=true # drop CYL df = df.drop(columns=['CYL', 'CYL_912R']) # + hidden=true # FUEL TYPE # for each FUEL_TYPE, compare pass and fail df.groupby(['FUEL_TYPE', 'RESULT']).size()\ .groupby('FUEL_TYPE').apply(lambda x: 100*x/x.sum())\ .unstack(level=1).plot(kind='bar', stacked=True); # check FUEL_TYPE L print(df[df.FUEL_TYPE=='L'].groupby('RESULT')['FUEL_TYPE'].value_counts()) # drop FUEL_TYPE df = df.drop(columns=['FUEL_TYPE']) # + hidden=true df.head() # - # ### numeric columns # # # + numeric_cols = ['GVWR', 'ENGINE_SIZE', 'ODOMETER'] fig, axs = plt.subplots(3, 1, figsize=(20, 12)) for col, ax in zip(numeric_cols, axs.flat): sns.boxplot(x=df[col], ax=ax) # - # statistics df[numeric_cols].describe() # + # let's clean GVWR a little bit # get median GVWR of each VIN tmp = df[['VIN', 'GVWR']].groupby('VIN').agg({'GVWR':'median'}) tmp.reset_index(inplace=True) # merge tmp with df df.reset_index(inplace=True) df = df.merge(tmp, how='left', on='VIN', suffixes=('_0','')) df.set_index('RecordID', inplace=True) # missing values in GVWR and GVWR_0 print('GVWR_0 = 0:', df[df.GVWR_0==0].shape[0]) print('GVWR = 0:', df[df.GVWR==0].shape[0]) # outliers in GVWR and GVWR_0 print('GVWR_0 > 9000:', df[df.GVWR_0 > 9000].shape[0]) print('GVWR > 9000:', df[df.GVWR > 9000].shape[0]) # distribution of GVWR and GVWR_0 cols = ['GVWR_0', 'GVWR'] fig, axs = plt.subplots(2, 1, figsize=(20, 6)) for col, ax in zip(cols, axs.flat): sns.boxplot(x=df[col], ax=ax) # replace 0 with np.nan df.loc[df.GVWR==0, 'GVWR'] = np.nan df.loc[df.GVWR_0==0, 'GVWR_0'] = np.nan # using GVWR_0 fill missing values in GVWR df['GVWR'] = df.GVWR.fillna(df.GVWR_0) # keep GVW and drop GVWR_0 df = df.drop(columns=['GVWR_0']) # check share of P and F in missing values df.loc[df.GVWR.isnull(), 'RESULT'].value_counts() # - # 0 in ODOMETER df.loc[df.ODOMETER==0, 'ODOMETER'] = np.nan print('number of ODOMETER=0:', df.ODOMETER.isnull().sum()) df.loc[df.ODOMETER.isnull(), 'RESULT'].value_counts() # + # drop ODOMETER = 0 df = df[df.ODOMETER!=0] # engineer MILE_YEAR from ODOMETER df['MILE_YEAR'] = df['ODOMETER']/df['VEHICLE_AGE'] # Americans drive an average of 14,300 miles per year # outliers in MILE_YEAR print('records where MILE_YEAR > 100,000 miles:', df[df.MILE_YEAR > 100000].shape[0]) print('vehicles with ODOMETER > 100,000 miles:', df[df.MILE_YEAR > 100000].VIN.nunique()) fig, axs = plt.subplots(2, 1, figsize=(20, 8)) sns.boxplot(x=df['MILE_YEAR'], ax=axs[0]) sns.boxplot(x=df[df.MILE_YEAR < 100000]['MILE_YEAR'], ax=axs[1]) # - df['MILE_YEAR_CAT'] = df.MILE_YEAR.apply(lambda x: 1 if x > 40000 else 0) df.groupby(['MILE_YEAR_CAT', 'RESULT']).size()\ .groupby('MILE_YEAR_CAT').apply(lambda x: 100*x/x.sum())\ 
.unstack(level=1).plot(kind='bar', stacked=True); df = df.drop(columns=['MILE_YEAR_CAT']) # + # drop useless columns tmp = df.drop(columns=['TEST_SDATE', 'VIN', 'MODEL_YEAR']) # drop na tmp = tmp.dropna() # data size print(tmp.shape) tmp.head() # - # ## vehicles with historical records # # how many vehicle were tested how many times tmp = df.VIN.value_counts().sort_values() ax = sns.countplot(x=tmp) ax.set(ylabel='number of vehicles', xlabel='number of tests'); df.VIN.value_counts().value_counts() # let's check a vehicle that was tested 10 times df[df.VIN == df.VIN.value_counts().sort_values(ascending=False).index[5]] # + [markdown] heading_collapsed=true # ### current records + last test records # # + hidden=true # all the records where vehicles didn't receive test before df1 = df.loc[df.groupby('VIN')['TEST_SDATE'].idxmin(),:] # records whose vehicle has historical data tmp = df.VIN.value_counts() df2 = df[df.VIN.isin(tmp[tmp>1].index)] print("Records where vehicles didn't receive test before", df1.shape) print("Records where vehicles have historical data", df2.shape) # + hidden=true # merge historical data once on VIN df3 = df2.reset_index() df4 = df2.drop(columns=['MODEL_YEAR', 'MAKE_MODEL_YEAR']).reset_index() merge = pd.merge(df3, df4, how='left', on='VIN', suffixes=(None, '_1')) # add a helper column merge['DELTA'] = merge['TEST_SDATE']-merge['TEST_SDATE_1'] # keep the entries where TEST_DATE_1 is earlier than TEST_DATE merge = merge[merge.DELTA > timedelta(days=0)] merge.reset_index(drop=True, inplace=True) # keep the rows whose DELTA is > 90, meaning vehicle's last check up was at least 90 days earlier than this time # print(merge.shape) # merge = merge[merge.DELTA > timedelta(days=90)] # merge.reset_index(inplace=True) # print(merge.shape) # for each RecordID only keep most recent record as previous testing record df_hist = merge.loc[merge.groupby('RecordID')['DELTA'].idxmin(), :] df_hist.set_index('RecordID', inplace=True) print("records in historical dataframe", df_hist.shape[0]) print("unique records in historical dataframe", df_hist.index.nunique()) print("\nStatistic summary of time gaps between current and last tests\n") df_hist.DELTA.describe() # + hidden=true # stack df1 and df_hist updated_df = pd.concat([df1, df_hist]) print("Shape of the new dataframe", updated_df.shape) updated_df.sort_index().tail() # + [markdown] heading_collapsed=true # ### one entry for one vehicle # # + hidden=true # keep most recent record for each vehicle df1 = df.loc[df.groupby('VIN')['TEST_SDATE'].idxmax(),:] # let's take the historical data out df2 = df[~df.index.isin(df1.index)] print("number records that don't belong to historical records:", df1.shape[0]) print('number of unique vehicles in df1:', df1.VIN.nunique()) print('number records that belong to historical records:', df2.shape[0]) # + hidden=true # let's peel the historical df2 like an onion into multiple dataframes according to timeline older_records = [] # a list to collect dataframes updated_df2 = df2 for i in range(10): print(f'getting layer {i}') latest = updated_df2.loc[updated_df2.groupby('VIN')['TEST_SDATE'].idxmax(), :] older_records.append(latest.reset_index()) tmp = updated_df2[~updated_df2.index.isin(latest.index)] updated_df2 = tmp.copy() # + hidden=true # merge the dataframes in older_records with df1 updated_df2 = df1 for index, record in enumerate(older_records): merged_df = pd.merge(updated_df2, record, how='left', on='VIN', suffixes=(None, '_'+str(index+1))) updated_df2 = merged_df.copy() print('The size of 
one-entry-one-vehicle dataframe:', updated_df2.shape) # let's check out the vehicles that were tested for 11 times in last 7 years updated_df2[~updated_df2.RecordID_10.isnull()] # + hidden=true # + hidden=true # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #
# # Computational Seismology
# ## Interpolation with Lagrange Polynomials
    # # # --- # # This notebook is part of the supplementary material # to [Computational Seismology: A Practical Introduction](https://global.oup.com/academic/product/computational-seismology-9780198717416?cc=de&lang=en&#), # Oxford University Press, 2016. # # # ##### Authors: # * ([@flo-woelfl](https://github.com/flo-woelfl)) # * ([@swollherr](https://github.com/swollherr)) # * ([@heinerigel](https://github.com/heinerigel)) # --- # We can approximate an arbitrary function $f(x)$ using the interpolation with Lagrange polynomials $l_i$ at given collacation points $x_i$, i.e. # # \begin{eqnarray*} # f(x) = \sum f(x_i) \cdot l_i(x). # \end{eqnarray*} # # The Lagrange polynomials at $x$ are defined as follows: # # $$ \ell_i^{(N)} (x) \ := \ \prod_{k = 1, \ k \neq i}^{N+1} \frac{x - x_k}{x_i-x_k}, \qquad i = 1, 2, \dotsc , N + 1 $$ # # # They are implemented in Python with the following code: # + code_folding=[0] # Setup # %matplotlib inline import numpy as np import matplotlib.pyplot as plt from gll import gll # Prettier plots. plt.style.use('ggplot') # + def lagrange2(N, i, x, xi): """ Function to calculate Lagrange polynomial for order N and polynomial i [0, N] at location x at given collacation points xi (not necessarily the GLL-points) """ fac = 1 for j in range(-1, N): if j != i: fac = fac * ((x - xi[j + 1]) / (xi[i + 1] - xi[j + 1])) return fac N = 4 x = np.linspace(-1, 1, 1000) xi, _ = gll(N) plt.figure(figsize=(8, 3)) for _i in range(N): plt.plot(x, lagrange2(N, _i, x, xi)) plt.ylim(-0.3, 1.1) plt.title("Lagrange Polynomials of order %i" % N) plt.show() # - # ##Exercises: # # ### 1. The GLL-points # * Use the `gll()` routine to determine the collocation points for a given order $N$ in the interval $[-1,1]$. # * Define an arbitrary function $f(x)$ and use the function `lagrange(N,i,x)` to get the $i$-th Lagrange polynomials of order N at the point x. # * Calculate the interpolating function to $f(x)$. # * Show that the interpolation is exact at the collocation points. # * Compare the original function $f(x)$ and the interpolating function on a finely spaced grid. Vary the order of the interpolating polynomials and calculate the error as a function of order. # # # ### 2. Equidistant collocation points and the Runge function # In the first exercise we used the GLL-points that are not equidistant. # Now use equidistant collacation points for the interpolation of your function $f$. Compare the two results. # # Change the code for the Runge-function on the interval $[-5,5]$ # # $$ f(x)=\frac{1}{(1+x^2)} $$ # # What do you notice when you decrease the order of your interpolation? # + # CHOOSE THE EXERCISE exercise = 1 # Exercise 1 if exercise == 1: # Initialize space in the interval [-1, 1] nx = 1000 x = np.linspace(-1, 1, nx) # CHANGE FUNCTION HERE. Currently a simple sine function. f = np.sin(np.pi * x) # Exercise 2 elif exercise == 2: # Initialize space in the interval [-5, 5] nx = 1000 x = np.linspace(-5, 5, nx) # CHANGE FUNCTION HERE. Currently the Runge function. f = 1/(1 + x ** 2) # Get order of Lagrange polynomial # Uncomment for interactive use. 
# N = int(input(' Give polynomials degree (N): ')) N = 5 if exercise == 1: # Get collocation points xi from gll routine [xi, w] = gll(N) fi = np.interp(xi, x, f) elif exercise == 2: xi = np.linspace(-5, 5, N+1) fi = np.interp(xi, x, f) # Initialize Lagrange polynomials on the defined grid lp = np.zeros((N + 1, len(x))) for i in range(0, len(x)): for j in range(-1, N): lp[j + 1, i] = lagrange2(N, j, x[i], xi) # Calculate interpolating polynomials by multiplying # Lagrange polynomials with function values at xi s = x * 0 for j in range(0, N + 1): s = s + lp[j, :] * fi[j] error = np.sum((np.abs(f - s))) / np.sum(np.abs(f)) * 100 # Plot results plt.figure() plt.plot(x, s, 'k-', color='green', label='Interpolating function') plt.plot(x, f, 'k--', label='Original function') plt.plot(xi, fi, 's', label='Collocation points') plt.title('Relative error: %g %%' % error) plt.xlabel('x') plt.ylabel('f(x)') if exercise == 1: plt.legend(loc="upper center") elif exercise == 2: plt.xlim(-5, 5) plt.legend(loc=2) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Predict CO2 emissions from Mauna Lao volcano import numpy as np import pandas as pd from matplotlib import pyplot as plt from sklearn.model_selection import train_test_split from statsmodels.tsa.deterministic import DeterministicProcess from sklearn.metrics import mean_squared_error from sklearn.linear_model import LinearRegression co2_df = pd.read_csv("../ts-course-data/co2.csv", index_col = "Date", parse_dates=True) co2_df.index.freq = "MS" co2_df.head(5) co2_df.plot(figsize = (15, 7), title = "Mauna Lao CO2 emissions") plt.show() # + # It does look like there is a linear trend plus seasonality # + # fit a basic linear regression model using DeterministicProcess # split the df into train and test, lets use 10% data as the test dataset dp = DeterministicProcess ( index=co2_df.index, # dates from the training data constant=True, # dummy feature for the bias (y_intercept) order=1, # the time dummy (trend) drop=True ) X = dp.in_sample() y = co2_df.loc[:, "CO2"] # - X_train, X_test = train_test_split(X, test_size = 0.1, shuffle = False) y_train, y_test = train_test_split(y, test_size = 0.1, shuffle = False) X_train.head(4), y_train.head(4) X_test.head(4), y_test.head(4) # + model = LinearRegression() model.fit(X_train, y_train) y_pred = pd.Series(model.predict(X_test), index=X_test.index) # - rmse = np.sqrt(mean_squared_error(y_pred, y_test)) print("rmse is {}".format(rmse)) # ### RMSE using basic linear regression model on just time dummy trend is 3.34 y_pred.head(4) y_test.head(4) plt.figure(figsize = (10, 5)) ax = plt.subplot(1, 1, 1) ax.plot(y_test, label = "y test") ax.plot(y_pred, color = "red", label = "y pred") plt.legend() plt.show() # + # Lets try to add seasonality. 
model = LinearRegression() model.fit(X, y) y_pred = pd.Series(model.predict(X), index=X.index) plt.figure(figsize = (10, 5)) ax = plt.subplot(1, 1, 1) ax.plot(y - y_pred, color = "red", label = "Yactual - Ypred") plt.title("Seasonality = Yactual - Ypred(includes trends)") plt.legend() plt.show() # + # Merging the trend + seasonality from statsmodels.tsa.deterministic import CalendarFourier, DeterministicProcess fourier = CalendarFourier(freq="A", order=10) # 10 sin/cos pairs for "A"nnual seasonality dp = DeterministicProcess( index=co2_df.index, constant=True, # dummy feature for bias (y-intercept) order=1, # trend (order 1 means linear) seasonal=True, # monthly seasonality (indicators) additional_terms=[fourier], # annual seasonality (fourier) drop=True, # drop terms to avoid collinearity ) X = dp.in_sample() y = co2_df.loc[:, "CO2"] X.head(5) # - X_train, X_test = train_test_split(X, test_size = 0.1, shuffle = False) y_train, y_test = train_test_split(y, test_size = 0.1, shuffle = False) # + model = LinearRegression() model.fit(X_train, y_train) y_pred = pd.Series(model.predict(X_test), index=X_test.index) # - rmse = np.sqrt(mean_squared_error(y_pred, y_test)) print("rmse is {}".format(rmse)) # ### RMSE is reduced from 3.34 (just linear trend) to 2.51 using seasonality (monthly and yearly) y_pred.head(4) y_test.head(5) plt.figure(figsize = (10, 5)) ax = plt.subplot(1, 1, 1) ax.plot(y_test, label = "y test") ax.plot(y_pred, color = "red", label = "y pred") plt.title("Y Test vs Y Pred") plt.legend() plt.show() # + # Lets try to add cycles. model = LinearRegression() model.fit(X, y) y_pred = pd.Series(model.predict(X), index=X.index) plt.figure(figsize = (10, 5)) ax = plt.subplot(1, 1, 1) ax.plot(y - y_pred, color = "red", label = "Yactual - Ypred") plt.title("Cycles = Yactual - Ypred(trends + seasonality)") plt.legend() plt.show() # + # lets add a lag of 1 from statsmodels.tsa.deterministic import CalendarFourier, DeterministicProcess fourier = CalendarFourier(freq="A", order=10) # 10 sin/cos pairs for "A"nnual seasonality dp = DeterministicProcess( index=co2_df.index, constant=True, # dummy feature for bias (y-intercept) order=1, # trend (order 1 means linear) seasonal=True, # weekly seasonality (indicators) additional_terms=[fourier], # annual seasonality (fourier) drop=True, # drop terms to avoid collinearity ) X = dp.in_sample() y = co2_df.loc[:, "CO2"] X["lag_1"] = y.shift(1) # adding a lag of 1 for cycles X = X.iloc[1: , :] # dropping the first row with NaN y = y[1:] X_train, X_test = train_test_split(X, test_size = 0.1, shuffle = False) y_train, y_test = train_test_split(y, test_size = 0.1, shuffle = False) model = LinearRegression() model.fit(X_train, y_train) y_pred = pd.Series(model.predict(X_test), index=X_test.index) rmse = np.sqrt(mean_squared_error(y_pred, y_test)) print("rmse is {}".format(rmse)) # - plt.figure(figsize = (10, 5)) ax = plt.subplot(1, 1, 1) ax.plot(y_test, label = "y test") ax.plot(y_pred, color = "red", label = "y pred") plt.title("Y Test vs Y Pred") plt.legend() plt.show() # + # get the residual model = LinearRegression() model.fit(X, y) y_pred = pd.Series(model.predict(X), index=X.index) plt.figure(figsize = (10, 5)) ax = plt.subplot(1, 1, 1) ax.plot(y - y_pred, color = "red", label = "Yactual - Ypred") plt.title("Residual = Yactual - Ypred(trends + seasonality + cycles)") plt.legend() plt.show() # + # Train XGBoost on the residuals from sklearn.ensemble import GradientBoostingRegressor y_residual = y - y_pred X_train_residual, X_test_residual = 
train_test_split(X, test_size = 0.1, shuffle = False) y_train_residual, y_test_residual = train_test_split(y_residual, test_size = 0.1, shuffle = False) model = GradientBoostingRegressor() model.fit(X_train_residual, y_train_residual) y_pred_residual = pd.Series(model.predict(X_test_residual), index=X_test_residual.index) rmse = np.sqrt(mean_squared_error(y_pred_residual, y_test_residual)) print("rmse is {}".format(rmse)) # - y_test_residual.head(4), y_pred_residual.head(4) len(y_test_residual), len(y_pred_residual) # + model = GradientBoostingRegressor() model.fit(X, y_residual) y_pred_residual = pd.Series(model.predict(X), index=X.index) # - # overall y is the sum of y predicted from trends + seasonality + cycles (from Linear Regression) and # y predicted from residual using XGBoostRegressor final_y = y_pred + y_pred_residual print(y_pred.head(4), y_pred_residual.head(4), final_y.head(4)) y_pred, final_y plt.figure(figsize = (15, 6)) ax = plt.subplot(1, 1, 1) ax.plot(final_y, color = "red", label = "Y Pred", alpha = 0.5) ax.plot(y, color = "blue", label = "Actual Y", alpha = 0.2) plt.title("Final model - Linear + GradientBoost") plt.show() # + _, final_y_test = train_test_split(final_y, test_size = 0.1, shuffle = False) rmse = np.sqrt(mean_squared_error(final_y_test, y_test)) print("rmse is {}".format(rmse)) # - # ### RMSE using combination of both Linear and Gradient Boost is 0.149 on the test dataset plt.figure(figsize = (10, 5)) ax = plt.subplot(1, 1, 1) ax.plot(y_test, label = "y test") ax.plot(final_y_test, color = "red", label = "y pred") plt.title("Y Test vs Y Pred") plt.legend() plt.show() # + # Perfecto! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # 1. Crucial data structures, needed for automatic differentitation or calculating the gradients. # 2. Part of torch.autograd package. # # Structure: autograd[data, grad, creator] # # # 3. Instantiated via torch.autograd.Variable class # 4. A wrapper around the tensor object(data in structure above) # 5. Holds the gradient w.r.t it(grad above) # 6. Records reference to the function that created it(the creator) # 7. Holds the gradient of the output w.r.t thi tensor import torch from torch.autograd import Variable # ### Computation Graph Example x = Variable(torch.FloatTensor([11.2]), requires_grad = True) y = 2*x print(x) print(y) print(x.data) print(y.data) print(x.grad_fn) print(y.grad_fn) # backward method calculates the gradient of output with respect to input variables y.backward() # dy/dx -> stored in x.grad print(x.grad) print(y.grad) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.10.1 64-bit # language: python # name: python3 # --- # ## Instructions # # * Read this the code # * Spot the problems. # * Modify the code to fix the program. # # Fix the code so that it works and passes the tests when you submit. 
# + #bugged code number = int(input("Which number do you want to check?")) if number % 2 = 0: print("This is an even number.") else: print("This is an odd number.") # + #debugged code #bugged code number = int(input("Which number do you want to check?")) if number % 2 == 0: print("This is an even number.") else: print("This is an odd number.") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="o_0K1lsW1dj9" """ You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab. Instructions for setting up Colab are as follows: 1. Open a new Python 3 notebook. 2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL) 3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator) 4. Run this cell to set up dependencies. """ # If you're using Google Colab and not running locally, run this cell # install NeMo BRANCH = 'v1.0.0b2' # !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[nlp] # + pycharm={"name": "#%%\n"} id="95FHWXOVpUFp" # If you're not using Colab, you might need to upgrade jupyter notebook to avoid the following error: # 'ImportError: IProgress not found. Please update jupyter and ipywidgets.' # ! pip install ipywidgets # ! jupyter nbextension enable --py widgetsnbextension # Please restart the kernel after running this cell # + id="dzqD2WDFOIN-" from nemo.collections import nlp as nemo_nlp from nemo.utils.exp_manager import exp_manager import os import wget import torch import pytorch_lightning as pl from omegaconf import OmegaConf # + [markdown] id="daYw_Xll2ZR9" # # Task Description # **Sentiment Analysis** is the task of detecting the sentiment in text. We model this problem as a simple form of a text classification problem. For example `Gollum's performance is incredible!` has a positive sentiment while `It's neither as romantic nor as thrilling as it should be.` has a negative sentiment. # . # + [markdown] id="ZnuziSwJ1yEB" # # Dataset # # In this tutorial we going to use [The Stanford Sentiment Treebank (SST-2)](https://nlp.stanford.edu/sentiment/index.html) corpus for sentiment analysis. This version of the dataset contains a collection of sentences with binary labels of positive and negative. It is a standard benchmark for sentence classification and is part of the GLUE Benchmark: https://gluebenchmark.com/tasks. Please download and unzip the SST-2 dataset from GLUE. It should contain three files of train.tsv, dev.tsv, and test.tsv which can be used for training, validation, and test respectively. # # # # # + [markdown] id="qzcZ3nb_-SVT" # # NeMo Text Classification Data Format # # [TextClassificationModel](https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/models/text_classification/text_classification_model.py) in NeMo supports text classification problems such as sentiment analysis or domain/intent detection for dialogue systems, as long as the data follows the format specified below. # # TextClassificationModel requires the data to be stored in TAB separated files (.tsv) with two columns of sentence and label. 
Each line of the data file contains text sequences, where words are separated with spaces and label separated with [TAB], i.e.: # # ``` # [WORD][SPACE][WORD][SPACE][WORD][TAB][LABEL] # ``` # # For example: # ``` # hide new secretions from the parental units[TAB]0 # # that loves its characters and communicates something rather beautiful about human nature[TAB]1 # ... # ``` # # # If your dataset is stored in another format, you need to convert it to this format to use the TextClassificationModel. # + [markdown] id="SL58EWkd2ZVb" # ## Download and Preprocess the Data # # First, you need to download the zipped file of the SST-2 dataset from the GLUE Benchmark website: https://gluebenchmark.com/tasks, and put it in the current folder. Then the following script would extract it into the data path specified by `DATA_DIR`: # + id="n8HZrDmr12_-" DATA_DIR = "DATA_DIR" WORK_DIR = "WORK_DIR" os.environ['DATA_DIR'] = DATA_DIR os.makedirs(WORK_DIR, exist_ok=True) os.makedirs(DATA_DIR, exist_ok=True) # ! unzip -o SST-2.zip -d {DATA_DIR} # + [markdown] id="U8Ty5_S7Ye8h" # Now, the data folder should contain the following files: # + [markdown] id="L8vsyh3JZH26" # # # * train.tsv # * dev.tsv # * test.tsv # # # The format of `train.tsv` and `dev.tsv` is close to NeMo's format except to have an extra header line at the beginning of the files. We would remove these extra lines. But `test.tsv` has different format and labels are missing for this part of the data. # # + id="qB0oLE4R9EhJ" # ! sed 1d {DATA_DIR}/SST-2/train.tsv > {DATA_DIR}/SST-2/train_nemo_format.tsv # ! sed 1d {DATA_DIR}/SST-2/dev.tsv > {DATA_DIR}/SST-2/dev_nemo_format.tsv # ! ls -l {DATA_DIR}/SST-2 # + id="6UDPgadLN6SG" # let's take a look at the data print('Contents (first 5 lines) of train.tsv:') # ! head -n 5 {DATA_DIR}/SST-2/train_nemo_format.tsv print('\nContents (first 5 lines) of test.tsv:') # ! head -n 5 {DATA_DIR}/SST-2/test.tsv # + [markdown] id="daludzzL2Jba" # # Model Configuration # + [markdown] id="_whKCxfTMo6Y" # Now, let's take a closer look at the model's configuration and learn to train the model from scratch and finetune the pretrained model. # # Our text classification model uses a pretrained [BERT](https://arxiv.org/pdf/1810.04805.pdf) model (or other BERT-like models) followed by a classification layer on the output of the first token ([CLS]). # # The model is defined in a config file which declares multiple important sections. The most important ones are: # - **model**: All arguments that are related to the Model - language model, tokenizer, head classifier, optimizer, schedulers, and datasets/data loaders. # # - **trainer**: Any argument to be passed to PyTorch Lightning including number of epochs, number of GPUs, precision level, etc. 
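# To make that structure concrete, here is a minimal sketch (not part of the original tutorial) of how those two sections are addressed once the YAML file is loaded with OmegaConf. The keys shown are illustrative only; the real config, with all of its fields, is downloaded and printed in the next cells.
# +
# Minimal sketch: a toy config with the same two top-level sections as the real file.
# The actual text_classification_config.yaml has many more fields.
from omegaconf import OmegaConf

toy_cfg = OmegaConf.create({
    "model": {
        "dataset": {"num_classes": 2},
        "language_model": {"pretrained_model_name": "bert-base-uncased"},
    },
    "trainer": {"max_epochs": 5, "gpus": 1},
})

# Sections are addressed with attribute access, exactly as done later in this notebook.
print(toy_cfg.model.dataset.num_classes)  # -> 2
print(toy_cfg.trainer.max_epochs)         # -> 5
# -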
# # + id="T1gA8PsJ13MJ" # download the model's configuration file MODEL_CONFIG = "text_classification_config.yaml" CONFIG_DIR = WORK_DIR + '/configs/' os.makedirs(CONFIG_DIR, exist_ok=True) if not os.path.exists(CONFIG_DIR + MODEL_CONFIG): print('Downloading config file...') wget.download('https://raw.githubusercontent.com/NVIDIA/NeMo/v1.0.0b2/examples/nlp/text_classification/conf/' + MODEL_CONFIG, CONFIG_DIR) print('Config file downloaded!') else: print ('config file already exists') config_path = f'{WORK_DIR}/configs/{MODEL_CONFIG}' print(config_path) config = OmegaConf.load(config_path) # + [markdown] id="mX3KmWMvSUQw" # ## this line will print the entire config of the model # print(OmegaConf.to_yaml(config)) # + [markdown] id="ZCgWzNBkaQLZ" # # Model Training From Scratch # ## Setting up data within the config # # We first need to set the num_classes in the config file which specifies the number of classes in the dataset. For SST-2, we have just two classes (0-positive and 1-negative). So we set the num_classes to 2. The model supports more than 2 classes too. # # # + id="jFSMiWtlkaC5" config.model.dataset.num_classes=2 # + [markdown] id="xCkc7QGikqPh" # # Among other things, the config file contains dictionaries called dataset, train_ds and validation_ds. These are configurations used to setup the Dataset and DataLoaders of the corresponding config. # # Notice that some config lines, including `model.dataset.classes_num`, have `???` as their value, this means that values for these fields are required to be to be specified by the user. We need to specify and set the `model.train_ds.file_name`, `model.validation_ds.file_name`, and `model.test_ds.file_name` in the config file to the paths of the train, validation, and test files if they exist. We may do it by updating the config file or by setting them from the command line. # # Let's now set the train and validation paths in the config. # + id="LQHCJN-ZaoLp" config.model.train_ds.file_path = os.path.join(DATA_DIR, 'SST-2/train_nemo_format.tsv') config.model.validation_ds.file_path = os.path.join(DATA_DIR, 'SST-2/dev_nemo_format.tsv') # Name of the .nemo file where trained model will be saved. config.save_to = 'trained-model.nemo' config.export_to = 'trained-model.onnx' print("Train dataloader's config: \n") # OmegaConf.to_yaml() is used to create a proper format for printing the train dataloader's config # You may change other params like batch size or the number of samples to be considered (-1 means all the samples) print(OmegaConf.to_yaml(config.model.train_ds)) # + [markdown] id="" # ## Building the PyTorch Lightning Trainer # # NeMo models are primarily PyTorch Lightning (PT) modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem. # # Let's first instantiate a PT Trainer object by using the trainer section of the config. # + id="1tG4FzZ4Ui60" print("Trainer config - \n") # OmegaConf.to_yaml() is used to create a proper format for printing the trainer config print(OmegaConf.to_yaml(config.trainer)) # + [markdown] id="XMVVs0INi5zj" # First you need to create a PT trainer with the params stored in the trainer's config. You may set the number of steps for training with max_steps or number of epochs with max_epochs in the trainer's config. 
# + id="knF6QeQQdMrH" # lets modify some trainer configs # checks if we have GPU available and uses it config.trainer.gpus = 1 if torch.cuda.is_available() else 0 # for mixed precision training, uncomment the lines below (precision should be set to 16 and amp_level to O1): # config.trainer.precision = 16 # config.trainer.amp_level = O1 # disable distributed training when using Colab to prevent the errors config.trainer.accelerator = None # setup max number of steps to reduce training time for demonstration purposes of this tutorial # Training stops when max_step or max_epochs is reached (earliest) config.trainer.max_epochs = 5 # instantiates a PT Trainer object by using the trainer section of the config trainer = pl.Trainer(**config.trainer) # + [markdown] id="8IlEMdVxdr6p" # ## Setting up the NeMo Experiment¶ # # + [markdown] id="6Kl5IdnV3O8y" # NeMo has an experiment manager that handles the logging and saving checkpoints for us, so let's setup it. We need the PT trainer and the exp_manager config: # + id="8uztqGAmdrYt" # The experiment manager of a trainer object can not be set twice. We repeat the trainer creation code again here to prevent getting error when this cell is executed more than once. trainer = pl.Trainer(**config.trainer) # exp_dir specifies the path to store the the checkpoints and also the logs, it's default is "./nemo_experiments" # You may set it by uncommentig the following line # config.exp_manager.exp_dir = 'LOG_CHECKPOINT_DIR' # OmegaConf.to_yaml() is used to create a proper format for printing the trainer config print(OmegaConf.to_yaml(config.exp_manager)) exp_dir = exp_manager(trainer, config.exp_manager) # the exp_dir provides a path to the current experiment for easy access print(exp_dir) # + [markdown] id="8tjLhUvL_o7_" # Before initializing the model, we might want to modify some of the model configs. For example, we might want to modify the pretrained BERT model to another model. The default model is `bert-base-uncased`. We support a variety of models including all the models available in HuggingFace, and Megatron. # + id="Xeuc2i7Y_nP5" # complete list of supported BERT-like models print(nemo_nlp.modules.get_pretrained_lm_models_list()) # + id="RK2xglXyAUOO" # specify the BERT-like model, you want to use # set the `model.language_modelpretrained_model_name' parameter in the config to the model you want to use config.model.language_model.pretrained_model_name = "bert-base-uncased" # + [markdown] id="fzNZNAVRjDD-" # Now, we are ready to initialize our model. During the model initialization call, the dataset and data loaders will also be prepared for the training and validation. # # Also, the pretrained BERT model will be automatically downloaded. Note it can take up to a few minutes depending on the size of the chosen BERT model for the first time you create the model. If your dataset is large, it also may take some time to read and process all the datasets. # # Now we can create the model with the model config and the trainer object like this: # + id="NgsGLydWo-6-" model = nemo_nlp.models.TextClassificationModel(cfg=config.model, trainer=trainer) # + [markdown] id="kQ592Tx4pzyB" # ## Monitoring Training Progress # Optionally, you can create a Tensorboard visualization to monitor training progress. 
# + id="mTJr16_pp0aS" try: from google import colab COLAB_ENV = True except (ImportError, ModuleNotFoundError): COLAB_ENV = False # Load the TensorBoard notebook extension if COLAB_ENV: # %load_ext tensorboard # %tensorboard --logdir {exp_dir} else: print("To use tensorboard, please use this notebook in a Google Colab environment.") # + [markdown] id="MW_JVIi5z68e" # ## Training # # You may start the training by using the trainer.fit() method. The number of steps/epochs of the training are specified already in the config of the trainer and you may update them before creating the trainer. # + id="hUvnSpyjp0Dh" # start model training trainer.fit(model) model.save_to(config.save_to) # + [markdown] id="VPdzJVAgSFaJ" # # Evaluation # # To see how the model performs, we can run evaluate and test the performance of the trained model on a data file. Here we would load the best checkpoint (the one with the lowest validation loss) and create a model (eval_model) from the checkpoint. We would also create a new trainer (eval_trainer) to show how it is done when training is done and you have just the checkpoints. If you want to perform the evaluation in the same script as the training's script, you may still use the same model and trainer you used for training. # + id="92PB0iTqNnW-" # extract the path of the best checkpoint from the training, you may update it to any checkpoint checkpoint_path = trainer.checkpoint_callback.best_model_path # Create an evaluation model and load the checkpoint eval_model = nemo_nlp.models.TextClassificationModel.load_from_checkpoint(checkpoint_path=checkpoint_path) # create a dataloader config for evaluation, the same data file provided in validation_ds is used here # file_path can get updated with any file eval_config = OmegaConf.create({'file_path': config.model.validation_ds.file_path, 'batch_size': 64, 'shuffle': False, 'num_samples': -1}) eval_model.setup_test_data(test_data_config=eval_config) #eval_dataloader = eval_model._create_dataloader_from_config(cfg=eval_config, mode='test') # a new trainer is created to show how to evaluate a checkpoint from an already trained model # create a copy of the trainer config and update it to be used for final evaluation eval_trainer_cfg = config.trainer.copy() eval_trainer_cfg.gpus = 1 if torch.cuda.is_available() else 0 # it is safer to perform evaluation on single GPU as PT is buggy with the last batch on multi-GPUs eval_trainer_cfg.accelerator = None # 'ddp' is buggy with test process in the current PT, it looks like it has been fixed in the latest master eval_trainer = pl.Trainer(**eval_trainer_cfg) eval_trainer.test(model=eval_model, verbose=False) # test_dataloaders=eval_dataloader, # + [markdown] id="Tit5kG4Z5SXu" # # Inference # # You may create a model from a saved chechpoint and use the model.infer() method to perform inference on a list of queries. There is no need of any trainer for inference. 
# + id="BKe5Jn4u9xng" # extract the path of the best checkpoint from the training, you may update it to any other checkpoint file checkpoint_path = trainer.checkpoint_callback.best_model_path # Create an evaluation model and load the checkpoint infer_model = nemo_nlp.models.TextClassificationModel.load_from_checkpoint(checkpoint_path=checkpoint_path) # + [markdown] id="y8SFxPJd-hkH" # To see how the model performs, let’s get model's predictions for a few examples: # + id="DQhsamclRtxJ" # move the model to the desired device for inference # we move the model to "cuda" if available otherwise "cpu" would be used if torch.cuda.is_available(): infer_model.to("cuda") else: infer_model.to("cpu") # define the list of queries for inference queries = ['by the end of no such thing the audience , like beatrice , has a watchful affection for the monster .', 'director went out gunning to make a great one .', 'uneasy mishmash of styles and genres .'] # max_seq_length=512 is the maximum length BERT supports. results = infer_model.classifytext(queries=queries, batch_size=3, max_seq_length=512) print('The prediction results of some sample queries with the trained model:') for query, result in zip(queries, results): print(f'Query : {query}') print(f'Predicted label: {result}') # + [markdown] id="ref1qSonGNhP" # ## Training Script # # If you have NeMo installed locally (eg. cloned from the Github), you can also train the model with `examples/nlp/text_classification/text_classifciation_with_bert.py`. This script contains an example on how to train, evaluate and perform inference with the TextClassificationModel. # # For example the following would train a model for 50 epochs in 2 GPUs on a classification task with 2 classes: # # ``` # # python text_classification_with_bert.py # model.dataset.num_classes=2 # model.train_ds=PATH_TO_TRAIN_FILE # model.validation_ds=PATH_TO_VAL_FILE # trainer.max_epochs=50 # trainer.gpus=2 # ``` # # This script would also reload the best checkpoint after the training is done and does evaluation on the dev set. Then perform inference on some sample queries. # # # By default, this script uses `examples/nlp/text_classification/conf/text_classifciation_config.py` config file, and you may update all the params in the config file from the command line. You may also use another config file like this: # # ``` # # python text_classification_with_bert.py --config-name==PATH_TO_CONFIG_FILE # model.dataset.num_classes=2 # model.train_ds=PATH_TO_TRAIN_FILE # model.validation_ds=PATH_TO_VAL_FILE # trainer.max_epochs=50 # trainer.gpus=2 # ``` # # - # ## Deployment # # You can also deploy a model to an inference engine (like TensorRT or ONNXRuntime) using ONNX exporter. # If you don't have one, let's install it: # mkdir ort # cd ort git clone --depth 1 --branch v1.5.1 https://github.com/microsoft/onnxruntime.git . ./build.sh --skip_tests --config Release --build_shared_lib --parallel --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu --build_wheel pip install ./build/Linux/Release/dist/onnxruntime*.whl # cd .. 
# Then export model.export(config.export_to) # And run some queries # + import numpy as np import torch from nemo.utils import logging from nemo.collections.nlp.parts.utils_funcs import tensor2list from nemo.collections.nlp.models.text_classification import TextClassificationModel from nemo.collections.nlp.data.text_classification import TextClassificationDataset import onnxruntime def to_numpy(tensor): return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy() def postprocessing(results, labels): return [labels[str(r)] for r in results] def create_infer_dataloader(model, queries): batch_size = len(queries) dataset = TextClassificationDataset(tokenizer=model.tokenizer, queries=queries, max_seq_length=512) return torch.utils.data.DataLoader( dataset=dataset, batch_size=batch_size, shuffle=False, num_workers=2, pin_memory=True, drop_last=False, collate_fn=dataset.collate_fn, ) queries = ["by the end of no such thing the audience , like beatrice , has a watchful affection for the monster .", "director rob marshall went out gunning to make a great one .", "uneasy mishmash of styles and genres .", "I love exotic science fiction / fantasy movies but this one was very unpleasant to watch . Suggestions and images of child abuse , mutilated bodies (live or dead) , other gruesome scenes , plot holes , boring acting made this a regretable experience , The basic idea of entering another person's mind is not even new to the movies or TV (An Outer Limits episode was better at exploring this idea) . i gave it 4 / 10 since some special effects were nice ."] model.eval() infer_datalayer = create_infer_dataloader(model, queries) ort_session = onnxruntime.InferenceSession(config.export_to) for batch in infer_datalayer: input_ids, input_type_ids, input_mask, subtokens_mask = batch ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(input_ids), ort_session.get_inputs()[1].name: to_numpy(input_mask), ort_session.get_inputs()[2].name: to_numpy(input_type_ids),} ologits = ort_session.run(None, ort_inputs) alogits = np.asarray(ologits) logits = torch.from_numpy(alogits[0]) preds = tensor2list(torch.argmax(logits, dim = -1)) processed_results = postprocessing(preds, {"0": "negative", "1": "positive"}) logging.info('The prediction results of some sample queries with the trained model:') for query, result in zip(queries, processed_results): logging.info(f'Query : {query}') logging.info(f'Predicted label: {result}') break # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Model Selection Method # **Outline** # * [Introduction](#introduction) # * [Validation Set](#validation) # * Cross Validation # * [Leave one out Cross Validation (LOOCV)](#loocv) # * [K-fold cross validation](#kcv) # * [Grid Search](#gridsearch) # * [Out of Bag Estimate for Bagging](#oob) # --- # ## Introduction # When we finish building a model, if we don't have any testing data, how do we know our model is overfitting or underfitting? Actually, if we use all the data to train our model, we don't know whether if our model is overfitting or underfitting. Let's firstly assume we have built a model and have a test data to test it, if the test MSE is much higher than our train MSE, then our model is said to be overfitted. # # Cross validation is a way to prevent it from happening. 
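# As a tiny, self-contained illustration of that overfitting signature (a sketch with synthetic data, independent
# of the wine dataset used later): an unrestricted decision tree drives the training MSE to essentially zero while
# the test MSE stays well above the noise level.

# +
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

deep_tree = DecisionTreeRegressor().fit(X_tr, y_tr)   # very flexible model
print('train MSE:', mean_squared_error(y_tr, deep_tree.predict(X_tr)))   # ~0
print('test MSE :', mean_squared_error(y_te, deep_tree.predict(X_te)))   # noticeably larger
# -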
# Concretely, cross-validation works as follows: instead of waiting until we evaluate the model on test data to
# get the test MSE, we split the training data into k folds, hold out the k-th fold in turn while fitting on the
# remaining folds, compute the MSE on each held-out fold, and average the k fold MSEs to obtain an estimate of
# the test MSE.
#
# The main idea behind all of the model selection methods introduced here is to obtain an estimate of the test
# set MSE. When we build several models, how do we know which one is better? Without a test set, we might be
# tempted to simply pick the one with the lowest training MSE, but that criterion is misleading: when a method
# yields a small training MSE but a large test MSE, we say it is overfitting the data.
#
# Therefore, we do not want to use the training MSE to select a model. What we want is a metric that estimates
# the test MSE. If we trust that estimate, then the lower it is, the better the model should perform when we
# finally evaluate it on test data. When doing model selection, we can imagine a plot with model flexibility on
# the x-axis and MSE on the y-axis; each model sits at some level of flexibility, and the more complex the model,
# the more flexible it is. If reducing a model's flexibility lowers the estimated test MSE, the original model
# was already overfitting.
#
# When we compare the estimated test MSE of two models A and B, where A is more complex than B: if A has the
# lower estimate, the next step is probably to increase the flexibility even further; if B has the lower CV
# score, we might instead try reducing B's flexibility and see whether the estimated test MSE drops even more.
#
# Recall also that ```Expected test MSE = variance of the model + squared bias of the model + irreducible error```.
# The relative rate of change of the first two terms determines whether the test MSE rises or falls. As we
# increase the flexibility of a model, the bias initially tends to decrease faster than the variance increases,
# so the expected test MSE declines. At some point, however, additional flexibility has little effect on the bias
# but significantly increases the variance, and the test MSE starts to climb.
#
# Some of the approaches are described below.
#
#
# **Notes:**
# * Overfitting refers specifically to the case in which a less flexible model would have yielded a smaller test MSE.
# * We refer to test data as the set of observations that were not used to train the model.
# To compare the different approaches for estimating the test MSE, we are going to use the
# [Wine Quality Data Set](https://archive.ics.uci.edu/ml/datasets/wine+quality). You can download the data from
# this [link](https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv). The
# goal is to model wine quality as a numeric value based on the features in the dataset.
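# Before turning to the wine data, here is the k-fold idea in its smallest form (a sketch with synthetic data):
# `cross_val_score` returns the negative MSE of each held-out fold, so flipping the sign and averaging gives a CV
# estimate of the test MSE.

# +
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, size=(200, 1))
y = 2.0 * X.ravel() + rng.normal(scale=0.5, size=200)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = cross_val_score(LinearRegression(), X, y, scoring='neg_mean_squared_error', cv=cv)
print('CV estimate of the test MSE:', -fold_scores.mean())
# -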
# %matplotlib inline import pandas as pd import numpy as np from sklearn.model_selection import train_test_split, GridSearchCV from sklearn import cross_validation from sklearn.ensemble import GradientBoostingRegressor from sklearn.ensemble import RandomForestRegressor import matplotlib.pyplot as plt from sklearn.metrics import mean_squared_error wine=pd.read_csv('_data/winequality-red.csv', sep=';') wine.head() # train/test split the features and response column X = wine.drop('quality', axis = 1).values y = wine['quality'].values X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size = 0.2, random_state = 7) # --- # ### Validation Set # **Idea:** # separate the data into 2 parts, training and validation set. Fit model using training set and calculate the MSE using validation set. # # **Drawback:** # * The validation estimate of the test error can be highly variable # * Since statistical method tends to perform worse when trained on fewer observations. Calculating the estimate MSE using validation set can overestimate the test error. In other words, it is not the model's fault, it is because we use fewer data to fit the model that result in a worse estimate of the test MSE. # ### K-fold cross validation # **Pros:** # * Computational cheaper than LOOCV. # * Since we use more data to train the model, the variance will be lower than randomly using validation set for k times. # * Set aside the computational issue, k-fold often gives more accurate estimates of the test error rate than does LOOCV. # # How is K-fold better than LOOCV? Bias-Variance tradeoff... Page 183. # When we fit a boosting model, the higher n_estimator, the higher flexibility for the model is. Therefore, we might want to plot out n_estimators as x, and the cross-validation score as y to see which n_estimator has the lowest estimate test MSE. Therefore, in the following example, we fit a gbm model using different number of iterations, and check the change in CV score in each iteration def plot_train_test_error_kfold(X_train, X_test, y_train, y_test): """ Compare the difference between training and testing error using MSE/R2 by adjusting number of tree built for GBRT. Here we use K-fold Cross Validation to obtain the estimate of our test MSE. 
""" train_mse = list() test_mse = list() cv_mse = list() train_r2 = list() test_r2 = list() cv_r2 = list() kfold=10 param_name = 'n_estimators' options = np.arange(1, 100) for n in options: gbm = GradientBoostingRegressor(n_estimators=n) gbm.fit(X_train, y_train) y_train_predict = gbm.predict(X_train) train_mse.append(mean_squared_error(y_train, y_train_predict)) cv_mse.append(abs(cross_validation.cross_val_score(gbm, X_train, y_train, scoring='neg_mean_squared_error', cv=kfold).mean())) y_test_predict = gbm.predict(X_test) test_mse.append(mean_squared_error(y_test, y_test_predict)) train_r2.append(gbm.score(X_train, y_train)) test_r2.append(gbm.score(X_test, y_test)) cv_r2.append(model_selection.cross_val_score(gbm, X_train, y_train, cv=kfold).mean()) mean_mse_scores = [train_mse, test_mse, cv_mse] mse_labels = ['train MSE', 'test MSE', 'cv MSE'] i_mse_optim = np.argmin(test_mse) options_mse_optim = options[i_mse_optim] mean_r2_scores = [train_r2, test_r2, cv_r2] r2_labels = ["train R^2","test R^2","cv R^2"] i_r2_optim = np.argmax(test_r2) options_r2_optim = options[i_r2_optim] # plot with y axis using MSE for score, label in zip(mean_mse_scores, mse_labels): plt.plot(options, score, label = label) plt.vlines(options_mse_optim, plt.ylim()[0], np.min(test_mse), color='k', linewidth=3, label='Optimum on test') plt.legend() plt.ylabel('MSE') plt.xlabel(param_name) plt.show() # plot with y axis using R^2 for score, label in zip(mean_r2_scores, r2_labels): plt.plot(options, score, label = label) plt.vlines(options_r2_optim, plt.ylim()[0], np.max(test_r2), color='k', linewidth=3, label='Optimum on test') plt.legend() plt.ylabel('R^2') plt.xlabel(param_name) plt.show() plot_train_test_error_kfold(X_train, X_test, y_train, y_test) # From the above plot, we see that train MSE is continuously decreasing while test test MSE slightly goes up when the number of estimator goes up. Meaning that our model is overfitted after n_estimators over some value around 40. Also, we see that the CV score aligns pretty well with our actual test MSE. Before we use test data to test the performance of our model, we can actually rely on CV score to determine which model we should use in order to have the lowest test MSE. # ## Leave one out Cross Validation (LOOCV) # **Summary:** LOOCV is a special case of k-fold CV in which k is set to equal n. # # **Pros:** # * There is no randomness in the training/ validation set splits. Therefore the estimate is not variable. # # **Cons:** # * Expensive to implement, since the model has to be fit n times. If the number of observation is large or the model is slow to fit, using this method can be very time consuming. # To perform LOOCV, we just need to set the number of fold into the number of observations we have in our training data. It actually take a long time to run. We'll skip the execution of the following code. def plot_train_test_error_loocv(X_train, X_test, y_train, y_test): """ Compare the difference between training and testing error using MSE/R2 by adjusting number of tree built for GBRT. Here we use leave one out Cross Validation to obtain the estimate of our test MSE. 
""" train_mse = list() test_mse = list() cv_mse = list() train_r2 = list() test_r2 = list() cv_r2 = list() kfold=10 param_name = 'n_estimators' options = np.arange(1, 100) loo = cross_validation.LeaveOneOut(len(y_train)) for n in options: gbm = GradientBoostingRegressor(n_estimators=n) gbm.fit(X_train, y_train) y_train_predict = gbm.predict(X_train) train_mse.append(mean_squared_error(y_train, y_train_predict)) cv_mse.append(abs(cross_validation.cross_val_score(gbm, X_train, y_train, scoring='neg_mean_squared_error', cv=loo).mean())) y_test_predict = gbm.predict(X_test) test_mse.append(mean_squared_error(y_test, y_test_predict)) train_r2.append(gbm.score(X_train, y_train)) test_r2.append(gbm.score(X_test, y_test)) cv_r2.append(model_selection.cross_val_score(gbm, X_train, y_train, cv=loo).mean()) mean_mse_scores = [train_mse, test_mse, cv_mse] mse_labels = ['train MSE', 'test MSE', 'cv MSE'] mean_r2_scores = [train_r2, test_r2, cv_r2] r2_labels = ["train R^2","test R^2","cv R^2"] # plot with y axis using MSE for score, label in zip(mean_mse_scores, mse_labels): plt.plot(options, score, label = label) plt.legend() plt.ylabel('MSE') plt.xlabel(param_name) plt.show() # plot with y axis using R^2 for score, label in zip(mean_r2_scores, r2_labels): plt.plot(options, score, label = label) plt.legend() plt.ylabel('R^2') plt.xlabel(param_name) plt.show() # ## Grid Search # We can also use [Grid Search](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) from sklearn.model_selection to obtain the parameter that generate us the best cv score. We set range of the parameters that we want to search through, and the combination with the highest CV score will be recorded. Below is an example of it: # + # set parameter range for GridSearchCV param_name = 'n_estimators' options = np.arange(1, 100) param_grid = {param_name: options} gbm = GradientBoostingRegressor() # this may take some minutes gs_gbm = GridSearchCV(gbm, param_grid) gs_gbm.fit(X_train, y_train) # best hyperparameter setting print('Best hyperparameters: %r' % gs_gbm.best_params_) # get the mean R^2 for all the folds, in this case, 100. gs_gbm_mean_score = gs_gbm.cv_results_['mean_test_score'] # plot out the result with x as n_estimators and y as R^2. plt.plot(options, gs_gbm_mean_score) plt.ylabel('R^2') plt.xlabel(param_name) plt.show() # - # Notice that Grid Search pick the number which has the highest CV score, which in this case is R^2. The n_estimator number Grid Search pick using CV score is 98, which is a lot higher than the previous n_estimators we saw using the actual test data, which is around 40. # # However, when we don't have a test data with us. Using cross validation score to pick the best n_estimators number seems to be our only choice for now. # **We can set the parameter to fit our final model using the following command.** # refit model on best parameters gbm.set_params(**gs_gbm.best_params_) gbm.fit(X_train, y_train) # **Predict the final result** gbm.predict(X_test)[1:10] # ## Out of Bag Estimate for Bagging # In bagging, it turns out we can use another technique called **Out of Bag Error Estimation** to estimate the test error. Originally, if we also want to use cross validation to estimate the test error of our training data set, performing bagging on large data sets would be computationally expensive. Let's assume we want to use 10-fold cross validation and bootstrap 21 training data set when building a model. 
# In this case, for each fold we would need to build 21 models, i.e., we train our method on the *b*-th
# bootstrapped training set to get the estimated function $\hat{f}^b(x)$, and then form the final prediction as
# the average of the predictions from each model, $\hat{f}^1(x), \hat{f}^2(x), \dots, \hat{f}^{21}(x)$. In total
# we would therefore need to build $21 \times 10 = 210$ models just to get an estimate of the test error.
#
# It turns out that we can get a test error estimate for a bagged model in a more intuitive way. In bagging, each
# model is essentially fit on a subset of the original training set. For illustration, say each bootstrapped
# subset uses roughly 2/3 of the original data and we take 21 bootstrapped training sets in total. The $i$-th
# observation will then appear in about 14 of those training sets, and we can predict its response using each of
# the trees for which that observation was out of bag (OOB), giving around 7 predictions for the $i$-th
# observation. To obtain a single prediction, we average these predicted responses for a regression tree, or take
# a majority vote for a classification tree. With **Out of Bag Error Estimation** we only need to build the 21
# models and still get a fairly good estimate of the test error, which is particularly convenient when performing
# bagging on large data sets.
#
def plot_train_test_error_oob(X_train, X_test, y_train, y_test):
    """
    Compare the difference between training and testing error (measured as 1 - R^2)
    by adjusting the number of trees built for a random forest.
    Here we use the Out of Bag (OOB) score to obtain the estimate of our test error.
    """
    train_err = list()
    test_err = list()
    oob_err = list()
    seed = 7
    param_name = 'n_estimators'
    options = np.arange(15, 100)
    for n in options:
        # fit a random forest with a different n_estimators, i.e. the number of trees bagged each time
        rf = RandomForestRegressor(n_estimators=n, oob_score=True, warm_start=True, random_state=seed)
        rf.fit(X_train, y_train)
        # train error (1 - R^2)
        train_err.append(1 - rf.score(X_train, y_train))
        # test error (1 - R^2)
        test_err.append(1 - rf.score(X_test, y_test))
        # OOB error estimate (1 - OOB R^2)
        oob_err.append(1 - rf.oob_score_)

    mean_err_scores = [train_err, test_err, oob_err]
    err_labels = ['train error', 'test error', 'oob error']
    i_err_optim = np.argmin(test_err)
    options_err_optim = options[i_err_optim]

    # plot with the y axis showing 1 - R^2
    for score, label in zip(mean_err_scores, err_labels):
        plt.plot(options, score, label=label)
    plt.vlines(options_err_optim, plt.ylim()[0], np.min(test_err), color='k',
               linewidth=3, label='Optimum on test')
    plt.legend()
    plt.ylabel('OOB Error Rate = 1 - R^2')
    plt.xlabel(param_name)
    plt.show()
    return train_err, test_err, oob_err


train_err, test_err, oob_err = plot_train_test_error_oob(X_train, X_test, y_train, y_test)

# From the above plot, we see that the random forest is a relatively stable approach for our data: it gives a low
# train error even with a small n_estimators.
#
# The test error does not decrease much as n_estimators grows. Like the CV score, the OOB error rate (defined as
# 1 - R^2) estimates our test error quite well once n_estimators is large. Notice, however, that when
# n_estimators is small the OOB error rate overestimates the test error. This happens because with a small
# n_estimators, for example n = 2, we only build 2 full trees in the random forest.
Assume we have 100 observations and the subsample rate is 0.7, we predict the response for the ith observation using only 1 tree in which that observation was out ob bag (OOB). The reason for overestimate is the same as using Validation set to obtain the estimate test MSE. Since for each tree we use fewer training data to build our full tree, it is not surprised that we will have a weaker prediction. In other words, it is not the model's fault, it is because we use fewer data to fit the model that result in a worse estimate of the test MSE. # ### Reference # * [Plot train error vs test error](http://scikit-learn.org/stable/auto_examples/model_selection/plot_train_error_vs_test_error.html) # * [Sklearn: Underfitting vs. Overfitting](http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfitting_overfitting.html#sphx-glr-auto-examples-model-selection-plot-underfitting-overfitting-py) # * [Sklearn: OOB Errors for Random Forests](http://scikit-learn.org/stable/auto_examples/ensemble/plot_ensemble_oob.html) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- def is_word_palindrome(file: str): with open(file, 'r') as f: k = 0 #行数 for line in f.readlines(): k += 1 s = list(line.lower()) #字符串转列表,小写 txt = [] flag = True for item in s: #提取出纯字母 if item.isalpha(): txt.append(item) for i in range(int(len(txt)/2)): #判断是否回文 if txt[i] != txt[len(txt)-1-i]: flag = False break if flag: print('第%d行是回文'%k) else: print('第%d行不是回文'%k) is_word_palindrome('Q1.txt') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # CNN para reconhecimento de imagens # ## Dataset Dogs & Cats - Kaggle # ** # [Dataset Dogs & cats](https://www.kaggle.com/c/dogs-vs-cats) # Baixe o arquivos e descompacte conforme as variáveis **dir_treino** e **dir_teste**. Coloque labels em pelo menos 30 imagens de teste, para poder validar (é só renomear os arquivos como os de treino). import keras import keras.backend as K import os, random from keras.layers import Dense, Conv2D, Input, MaxPooling2D, Flatten, Dropout from keras.models import Model from keras.datasets import fashion_mnist from keras.callbacks import ModelCheckpoint import numpy as np import matplotlib.pyplot as plt from matplotlib import cm import cv2 from keras.preprocessing import image # %matplotlib inline batch_sz = 64 # Batch size nb_class = 2 # Número de classes nb_epochs = 10 # Número de epochs de treinamento img_h, img_w = 64, 64 # Altura e largura das imagens dir_treino = './dogscats/train/' dir_teste = './dogscats/test/' # # Preparar arquivos de imagens def ler_imagem(file): img = cv2.imread(file) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) imres = cv2.resize(gray, (img_h, img_w), interpolation=cv2.INTER_CUBIC) imres = image.img_to_array(imres.T) imres = np.expand_dims(imres, axis = 0) return imres # + def gerar_dataset(filenames): rotulos = [] dataset = np.ndarray((len(filenames), img_h, img_w, 1), dtype=np.uint8) x = 0 for arquivo in filenames: dataset[x] = ler_imagem(arquivo) if '/cat.' 
in arquivo: rotulos.append(0) else: rotulos.append(1) x = x + 1 if x%1000==0: print("Processados ",x) return dataset, rotulos imagens_treino = [dir_treino+i for i in os.listdir(dir_treino) if '.jpg' in i] random.shuffle(imagens_treino) imagens_teste = [dir_teste+i for i in os.listdir(dir_teste) if '.jpg' in i] x_treino, y_treino = gerar_dataset(imagens_treino) x_teste, y_teste = gerar_dataset(imagens_teste) # - im = ler_imagem(imagens_treino[0]) print(im.shape) def conv3x3(input_x,nb_filters): # Prepara a camada convolucional return Conv2D(nb_filters, kernel_size=(3,3), use_bias=False, activation='relu', padding="same")(input_x) # + # Normaliza os valores dos pixels x_treino = x_treino.astype('float32') x_teste = x_teste.astype('float32') x_treino /= 255.0 x_teste /= 255.0 # Converte os rótulos para "One-hot encoding": y_treino = keras.utils.to_categorical(y_treino, nb_class) y_teste = keras.utils.to_categorical(y_teste, nb_class) # Cria o modelo executando um treino e avaliação: inputs = Input(shape=(img_h, img_w, 1)) x = conv3x3(inputs, 32) x = conv3x3(x, 32) x = MaxPooling2D(pool_size=(2,2))(x) x = conv3x3(x, 64) x = conv3x3(x, 64) x = MaxPooling2D(pool_size=(2,2))(x) x = conv3x3(x, 128) x = MaxPooling2D(pool_size=(2,2))(x) x = Flatten()(x) x = Dense(128, activation="relu")(x) preds = Dense(nb_class, activation='softmax')(x) model = Model(inputs=inputs, outputs=preds) # - # ## Rode a célula seguinte se desejar carregar um modelo salvo model.load_weights("dogs_cats_saved.h5") # ## Carregando ou não, rode a próxima célula # + # Compila o modelo: model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy']) # - # ## Rode a célula seguinte para treinar novamente a CNN # + # Cria um callback para salvar o modelo a cada "epoch" de treinamento completada: callback = ModelCheckpoint('dogs_cats_saved.h5') # Treina o modelo (demora cerca de 6 minutos sem GPU): history = model.fit(x_treino, y_treino, batch_size=batch_sz, epochs=nb_epochs, verbose=1, validation_data=(x_teste, y_teste), callbacks=[callback]) # Avalia o modelo com dados de teste: score = model.evaluate(x_teste, y_teste, verbose=0) print('Perda:', score[0]) print('Acurácia:', score[1]) # Plota gráficos de perda e acurácia: plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('Acurácia do modelo') plt.ylabel('Acurácia') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Perda do modelo') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # - # ## Rode a célula seguinte para validar o modelo com imagens novas # + dir_validar = "./dogscats/validate/" nomes = [dir_validar+i for i in os.listdir(dir_validar) if '.jpg' in i] print(nomes) def prepararImagem(imagem): test_image = image.img_to_array(imagem.T) test_image = np.expand_dims(test_image, axis = 0) return test_image def mostraCateg(resultado): categs = ["Gato", "Cachorro"] for idx, val in enumerate(resultado[0]): if val == 1: return categs[idx] i = 1 for nome in nomes: im = cv2.imread(nome) gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) imres = cv2.resize(gray, (img_h, img_w), interpolation=cv2.INTER_CUBIC) dados = prepararImagem(imres) plt.subplot(3, 3, i) i = i + 1 plt.imshow(gray,cmap='gray') plt.axis('off') ret = model.predict(dados, batch_size=1) print(mostraCateg(ret)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # 
format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # #!/usr/bin/python # -*- coding: utf-8 -*- """ pre_precess.py 에서 정의된 이미지 처리(Image precessing)의 각 단계 및 최종 결과물에 대하여 테스트하고 분석할 수 있습니다. * 윈도우를 띄어서 진행단계의 이미지를 확인할 수 있습니다. * 이미지들을 양옆 및 위아래로 병합해여 비교할 수 있습니다. """ __author__ = " (niewoong)" from src import pre_process as pp import cv2 import numpy as np import os def show_window(image, title='untitled', max_height=700): """ 이미지 윈도우를 열어서 보여줍니다. :param image: 보여줄 이미지 (OpenCV image 객체) :param title: 윈도우 제목 :param max_height: 이미지 윈도우 사이즈의 최대 높이 :return: """ height, width = image.shape[:2] # get image size if height > max_height: # adjust window size if too large rate = max_height / height height = round(height * rate) width = round(width * rate) # apply the same rate to width cv2.namedWindow(title, cv2.WINDOW_NORMAL) # Create a window that the user can resize # cv2.resizeWindow(title, width, height) # resize window according to the size of the image cv2.imshow(title, image) # open image window key = cv2.waitKey(0) # wait until keyboard input cv2.destroyAllWindows() return key def merge_horizontal(image_gray, image_bgr): """ Height 사이즈가 같은 두 이미지를 옆으로(Horizontally) 병합 합니다. 이미지 처리(Image processing) 단계를 원본과 비교하기위한 목적으로, 2차원(2 dimension) 흑백 이미지와 3차원(3 dimension) BGR 컬리 이미지를 인자로 받아 병합합니다. :param image_gray: 2차원(2 dimension) 흑백 이미지 :param image_bgr: 3차원(3 dimension) BGR 컬리 이미지 :return: 옆으로(Horizontally) 병합된 이미지 """ # Make the grey scale image have 3 channels image_cr = cv2.cvtColor(image_gray, cv2.COLOR_GRAY2BGR) # Merge image horizontally numpy_horizontal = np.hstack((image_cr, image_bgr)) # numpy_horizontal_concat = np.concatenate((image, image_contours), axis=1) return numpy_horizontal def merge_vertical(image_gray, image_bgr): """ Width 사이즈가 같은 두 이미지를 위아래로(Vertically) 병합 합니다. 이미지 처리(Image processing) 단계를 원본과 비교하기위한 목적으로, 2차원(2 dimension) 흑백 이미지와 3차원(3 dimension) BGR 컬리 이미지를 인자로 받아 병합합니다. :param image_gray: 2차원(2 dimension) 흑백 이미지 :param image_bgr: 3차원(3 dimension) BGR 컬리 이미지 :return: 위아래로(Vertically) 병합된 이미지 """ # Make the grey scale image have 3 channels image_cr = cv2.cvtColor(image_gray, cv2.COLOR_GRAY2BGR) # Merge image horizontally numpy_vertical = np.vstack((image_cr, image_bgr)) return numpy_vertical def detect_line(image_binary): """ 이미지에서 직선을 찾아서 초록색으로 표시한 결과를 반환합니다. :param image_binary: 흑백(Binary) OpenCV image (2 dimension) :return: 라인이 삭제된 이미지 (OpenCV image) """ copy = image_binary.copy() # copy the image to be processed copy_rbg = cv2.cvtColor(copy, cv2.COLOR_GRAY2RGB) # get configs threshold = pp.configs['remove_line']['threshold'] min_line_length = pp.configs['remove_line']['min_line_length'] max_line_gap = pp.configs['remove_line']['max_line_gap'] # fine and draw lines lines = cv2.HoughLinesP(copy, 1, np.pi / 180, threshold, np.array([]), min_line_length, max_line_gap) if lines is not None: for line in lines: x1, y1, x2, y2 = line[0] # get end point of line : ( (x1, y1) , (x2, y2) ) # slop = 0 # if x2 != x1: # slop = abs((y2-y1) / (x2-x1)) # if slop < 0.5 or slop > 50 or x2 == x1: # only vertical or parallel lines. # remove line drawing black line cv2.line(copy_rbg, (x1, y1), (x2, y2), (0, 155, 0), 2) return copy_rbg def get_step_compare_image(path_of_image): """ 이미지 프로세싱 전 단계의 중간 결과물을 하나로 병합하여 반환합니다. 
:param path_of_image: :return: """ # open original image image_origin = pp.open_original(path_of_image) # size up ( x4 ) image_origin = cv2.pyrUp(image_origin) comparing_images = [] # Grey-Scale image_gray = pp.get_gray(image_origin) contours = pp.get_contours(image_gray) image_with_contours = pp.draw_contour_rect(image_origin, contours) # merge two image vertically compare_set = merge_vertical(image_gray, image_with_contours) comparing_images.append(compare_set) # Morph Gradient image_gradient = pp.get_gradient(image_gray) # image_gradient = pp.get_canny(image_gray) contours = pp.get_contours(image_gradient) image_with_contours = pp.draw_contour_rect(image_origin, contours) # merge two current step image vertically compare_set = merge_vertical(image_gradient, image_with_contours) comparing_images.append(compare_set) # Threshold image_threshold = pp.get_threshold(image_gradient) contours = pp.get_contours(image_threshold) image_with_contours = pp.draw_contour_rect(image_origin, contours) # merge two image vertically compare_set = merge_vertical(image_threshold, image_with_contours) comparing_images.append(compare_set) # Long line remove image_line_removed = pp.remove_long_line(image_threshold) contours = pp.get_contours(image_line_removed) image_with_contours = pp.draw_contour_rect(image_origin, contours) # merge two image vertically compare_set = merge_vertical(image_line_removed, image_with_contours) comparing_images.append(compare_set) # Morph Close image_close = pp.get_closing(image_line_removed) contours = pp.get_contours(image_close) image_with_contours = pp.draw_contour_rect(image_origin, contours) # merge two image vertically compare_set = merge_vertical(image_close, image_with_contours) comparing_images.append(compare_set) # Merge all step's images horizontally image_merged_all = np.hstack(comparing_images) return image_merged_all def get_image_with_contours(path_of_image): """ 이미지 프로세싱을 거친 후, 최종적으로 얻은 Contours 를 원본 이미지 위에 그려서 반환합니다. :param path_of_image: :return: """ # open original image image_origin = pp.open_original(path_of_image) # size up the resource ( x4 ) image_origin = cv2.pyrUp(image_origin) # Grey-Scale image_gray = pp.get_gray(image_origin) # Morph Gradient image_gradient = pp.get_gradient(image_gray) # Threshold image_threshold = pp.get_threshold(image_gradient) # Long line remove image_line_removed = pp.remove_long_line(image_threshold) # Morph Close image_close = pp.get_closing(image_line_removed) # Get contours and Draw it on the original image contours = pp.get_contours(image_close) image_with_contours = pp.draw_contour_rect(image_origin, contours) return image_with_contours def get_file_list(path): """ path 가 가리키는 directory 의 모든 파일명을 읽어서 string 으로 반환합니다. 파일명은 Absolute path 가 포함된 이름입니다. 
:param path: 읽어 들일 directory 의 절대경로 :return: directory 의 모든 file path 을 String 형으로 Array 에 담아 반환 """ image_path_list = [] for root, dirs, files in os.walk(path): root_path = os.path.join(os.path.abspath(path), root) for file in files: file_path = os.path.join(root_path, file) image_path_list.append(file_path) return image_path_list def read_text_from_image(image_path): messages = [] cropped_images = pp.process_image(image_path) count = 1 for cropped in cropped_images: count += 1 # gray_copy = pp.get_gray(cropped) # gradient_copy = pp.get_gradient(gray_copy) # gradient_copy = cv2.cvtColor(gradient_copy, cv2.COLOR_GRAY2BGR) # answer = jt.get_answer_from_cv2_Image(gradient_copy) # print(answer) msg = pp.get_text_from_image(cropped) messages.append(msg) return messages def get_image_with_lines(image_path): image_origin = pp.open_original(image_path) image_origin = cv2.pyrUp(image_origin) # Grey-Scale image_gray = pp.get_gray(image_origin) # Morph Gradient image_gradient = pp.get_gradient(image_gray) # Threshold image_threshold = pp.get_threshold(image_gradient) # find and draw lines image_line_removed = detect_line(image_threshold) return image_line_removed def main(): pp.read_configs('test_images/screenshot/config_screenshot.yml') # set configs todo parameter 에서 옵션값으로 입력받도록 바꾸기 image_path = 'test_images/screenshot/slide1.jpg' # todo 실행시 parameter 로 image 를 입력받도록 바꾸기 result = get_step_compare_image(image_path) # show result show_window(result, 'all steps') result = get_image_with_contours(image_path) show_window(result, 'final result') if __name__ == "__main__": main() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- from __future__ import print_function import pandas as pd import numpy as np from scipy import stats import matplotlib.pyplot as plt import statsmodels.api as sm from statsmodels.graphics.api import qqplot # %matplotlib inline dta=[10930,10318,10595,10972,7706,6756,9092,10551,9722,10913,11151,8186,6422, 6337,11649,11652,10310,12043,7937,6476,9662,9570,9981,9331,9449,6773,6304,9355, 10477,10148,10395,11261,8713,7299,10424,10795,11069,11602,11427,9095,7707,10767, 12136,12812,12006,12528,10329,7818,11719,11683,12603,11495,13670,11337,10232, 13261,13230,15535,16837,19598,14823,11622,19391,18177,19994,14723,15694,13248, 9543,12872,13101,15053,12619,13749,10228,9725,14729,12518,14564,15085,14722, 11999,9390,13481,14795,15845,15271,14686,11054,10395] dta = pd.Series(dta) dta.index = pd.date_range(start='2001-01-01', end='2091-01-01', freq='A') dta.plot(figsize=(12,8)) fig = plt.figure(figsize=(12, 8)) ax1= fig.add_subplot(111) diff1 = dta.diff(1) diff1.plot(ax=ax1) fig = plt.figure(figsize=(12, 8)) ax1= fig.add_subplot(111) diff1 = dta.diff(2) diff1.plot(ax=ax1) dta= dta.diff(1)[1:] fig = plt.figure(figsize=(12,8)) ax1=fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(dta,lags=40,ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(dta,lags=40,ax=ax2) arma_mod20 = sm.tsa.ARMA(dta,(7,0)).fit() print(arma_mod20.aic,arma_mod20.bic,arma_mod20.hqic) arma_mod30 = sm.tsa.ARMA(dta,(0,1)).fit() print(arma_mod30.aic,arma_mod30.bic,arma_mod30.hqic) arma_mod40 = sm.tsa.ARMA(dta,(7,1)).fit() print(arma_mod40.aic,arma_mod40.bic,arma_mod40.hqic) arma_mod50 = sm.tsa.ARMA(dta,(8,0)).fit() print(arma_mod50.aic,arma_mod50.bic,arma_mod50.hqic) # aic/bic/hqic 越小越好,股选择ARMA(7,0) # ## 模型检验 # 
在指数平滑模型下,观察ARIMA模型的残差是否是平均值为0且方差为常数的正态分布(服从零均值、方差不变的正态分布),同时也要观察连续残差是否(自)相关。 fig = plt.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) fig = sm.graphics.tsa.plot_acf(arma_mod20.resid, lags=40, ax=ax1) ax2 = fig.add_subplot(212) fig = sm.graphics.tsa.plot_pacf(arma_mod20.resid, lags=40, ax=ax2) arma_mod20.resid.mean()/arma_mod20.resid.std() arma_mod20.resid.plot() # ## 做D-W检验 # 德宾-沃森(Durbin-Watson)检验。德宾-沃森检验,简称D-W检验,是目前检验自相关性最常用的方法,但它只使用于检验一阶自相关性。因为自相关系数ρ的值介于-1和1之间,所以 0≤DW≤4。并且DW=0 ->ρ=1   即存在正自相关性 # DW=4->ρ=-1 即存在负自相关性 # DW=2->ρ=0  即不存在(一阶)自相关性 # 因此,当DW值显著的接近于0或4时,则存在自相关性,而接近于2时,则不存在(一阶)自相关性。这样只要知道DW统计量的概率分布,在给定的显著水平下,根据临界值的位置就可以对原假设$H_0$进行检验。 print(sm.stats.durbin_watson(arma_mod20.resid.values)) resid = arma_mod20.resid#残差 fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) fig = qqplot(resid, line='q', ax=ax, fit=True) # ## Ljung-Box检验 # Ljung-Box test是对randomness的检验,或者说是对时间序列是否存在滞后相关的一种统计检验。对于滞后相关的检验,我们常常采用的方法还包括计算ACF和PCAF并观察其图像,但是无论是ACF还是PACF都仅仅考虑是否存在某一特定滞后阶数的相关。LB检验则是基于一系列滞后阶数,判断序列总体的相关性或者说随机性是否存在。 # 时间序列中一个最基本的模型就是高斯白噪声序列。而对于ARIMA模型,其残差被假定为高斯白噪声序列,所以当我们用ARIMA模型去拟合数据时,拟合后我们要对残差的估计序列进行LB检验,判断其是否是高斯白噪声,如果不是,那么就说明ARIMA模型也许并不是一个适合样本的模型。 r,q,p = sm.tsa.acf(resid.values.squeeze(), qstat=True) data = np.c_[range(1,41), r[1:], q, p] table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"]) table.set_index('lag') table[:20] # 检验的结果就是看最后一列前十二行的检验概率(一般观察滞后1~12阶),如果检验概率小于给定的显著性水平,比如0.05、0.10等就拒绝原假设,其原假设是相关系数为零。就结果来看,如果取显著性水平为0.05,那么相关系数与零没有显著差异,即为白噪声序列。 predict_sunspots = arma_mod20.predict('2090', '2100', dynamic=True) print(predict_sunspots) fig, ax = plt.subplots(figsize=(12, 8)) ax = dta.loc['2001':].plot(ax=ax) predict_sunspots.plot(ax=ax) # + # pd.ewma? # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import matplotlib.pyplot as plt import numpy as np import seaborn as sns from sklearn import linear_model sns.set() # - # Assume $y = 0.5 \cdot x + 1.5 + \epsilon$, where $\epsilon \sim N\left(0, 0.2^2\right)$. We generate two training sets by sampling $x$ uniformly from $[1, 3)$ and $[4, 6)$, respectively. Each training set has 100 examples. We should have $\bar{y}_1 \approx 2.5$ and $\bar{y}_2 \approx 4$. # + n = 100 x1 = 2 * np.random.rand(n, 1) + 1 x2 = 2 * np.random.rand(n, 1) + 4 a = 0.5 b = 1.5 sigma = 0.2 y1 = a * x1 + b + sigma * np.random.randn(n, 1) y2 = a * x2 + b + sigma * np.random.randn(n, 1) print('avg_y_1=%f, avg_y_2=%f' % (np.mean(y1), np.mean(y2))) # - # We fit simple linear regression models on these two training sets. We expect these two models to have very similar parameters. # + lr1 = linear_model.LinearRegression() lr1.fit(x1, y1) a1 = lr1.coef_[0][0] b1 = lr1.intercept_[0] x3 = np.array([0.7, 3.3]) y3 = a1 * x3 + b1 lr2 = linear_model.LinearRegression() lr2.fit(x2, y2) a2 = lr2.coef_[0][0] b2 = lr2.intercept_[0] x4 = np.array([3.7, 6.3]) y4 = a2 * x4 + b2 print('a1=%f, b1=%f, a2=%f, b2=%f' % (a1, b1, a2, b2)) # - # Below we plot the two training sets and their fitted linear regression models. 
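# Before the plot, a quick numerical check (a sketch added here, not in the original notebook) that both fitted
# models recover parameters close to the true values a = 0.5 and b = 1.5: the slope estimates should be within a
# few hundredths of 0.5, while the intercepts (especially for the second model, whose x-range is far from 0) can
# deviate somewhat more.

# +
# Compare the fitted parameter pairs against the true generating values a and b defined above.
for name, (a_hat, b_hat) in [('model 1', (a1, b1)), ('model 2', (a2, b2))]:
    print(f'{name}: |a_hat - a| = {abs(a_hat - a):.3f}, |b_hat - b| = {abs(b_hat - b):.3f}')
# -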
plt.figure(figsize=(10, 7)) plt.tick_params(labelsize=14) plt.scatter(x1, y1) plt.scatter(x2, y2) plt.plot(x3, y3, 'indianred', x4, y4, 'm', linewidth=3) plt.xlabel('x', fontsize=14) plt.ylabel('y', fontsize=14) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="_RvEi4O4lcWn" outputId="249ee0d6-6bf9-460f-c0d3-807e26a74d19" # !pip install keras-ocr # + id="Ta6xEp70lZcW" import matplotlib.pyplot as plt import keras_ocr # + colab={"base_uri": "https://localhost:8080/"} id="Ha_gOMN8ll01" outputId="f89f57a7-6680-4bbe-cb45-7c71600e626d" pipeline = keras_ocr.pipeline.Pipeline() # + id="4xjGGaJ9lofD" images = [ keras_ocr.tools.read(url) for url in [ # 'https://upload.wikimedia.org/wikipedia/commons/b/bd/Army_Reserves_Recruitment_Banner_MOD_45156284.jpg', # 'https://upload.wikimedia.org/wikipedia/commons/e/e8/FseeG2QeLXo.jpg', # 'https://upload.wikimedia.org/wikipedia/commons/b/b4/EUBanana-500x112.jpg' 'https://i.imgur.com/b0ihHdQ.png' ] ] # + id="5c180gI_lqga" prediction_groups = pipeline.recognize(images) # + colab={"base_uri": "https://localhost:8080/"} id="trsucC8Hlsyc" outputId="e907fb7d-1ee5-4426-c5a4-e833f41d79f0" print(prediction_groups) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="AzJUxtVtlzhq" outputId="9641d7d2-0afb-49bb-b87f-6944057e309e" fig, axs = plt.subplots(nrows=len(images), figsize=(20, 20), squeeze=False) # + id="HAQG5-DKl2P5" for ax, image, predictions in zip(axs[0], images, prediction_groups): keras_ocr.tools.drawAnnotations(image=image, predictions=predictions, ax=ax) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: test_this # language: python # name: test_this # --- # ### Importing atmospheric data from NREL's WindToolKit data set using ssrs.WTK module # %load_ext autoreload # %autoreload 2 import os import sys import numpy as np import pandas as pd import matplotlib.pyplot as plt from datetime import datetime from scipy.interpolate import griddata # importing ssrs funationality used in this notebook from ssrs import WTK, WtkSource, TurbinesUSWTB from ssrs.utils import get_extent_from_bounds # directory where output is saved output_dir = os.path.join(os.path.abspath(os.path.curdir), 'output/') # list valid WTK data sources WTK.valid_sources # inspect available layers wtk_source = WtkSource('AWS') wtk_source.valid_layers # Initialize WTK object to download and manipulate data for given lonlat bounds wtk_layers = ['windspeed_100m', 'winddirection_100m', 'pressure_100m','temperature_100m'] lonlat_bounds = (-106.411, 42.769, -105.1686, 43.2566) wtk = WTK('AWS', lonlat_bounds, wtk_layers, output_dir) # get data for various variables for a given datetime dtime = datetime(2014, 10, 12, 9) # (year, month, date, hour) wtkdf = wtk.get_dataframe_for_this_time(dtime) wtkdf.head() # get all the wind turbines in this region lonlat_crs = 'EPSG:4326' turbines = TurbinesUSWTB(lonlat_bounds, lonlat_crs, min_hubheight=60.) 
turbines.print_details() # Plot the downloaded atmospehric variables in lon/lat crs interp_type = 'linear' # nearest, linear, cubic xname ='Longitude' yname = 'Latitude' num_pts = 100 turb_xlocs, turb_ylocs = turbines.get_locations() for j, this_var in enumerate(wtk_layers): fig, ax = plt.subplots(figsize=(7,4)) vardata = wtkdf.loc[:, this_var].values.flatten() xlocs, ylocs = wtk.get_coordinates() extent = get_extent_from_bounds(lonlat_bounds) xmin, xmax, ymin, ymax = extent xmesh, ymesh = np.meshgrid(np.linspace(xmin,xmax,num_pts), np.linspace(ymin,ymax,num_pts)) vargriddata = griddata(np.array([xlocs, ylocs]).T, vardata, (xmesh, ymesh), method=interp_type) cm = ax.imshow(vargriddata, cmap='viridis', origin='lower', extent=extent, alpha=0.75, aspect='auto') #ax.plot(wtkdf[xname], wtkdf[yname], '.k', alpha=0.15, markersize=3.) ax.plot(turb_xlocs, turb_ylocs, '1k', alpha=0.5, markersize=3.) ax.set_title(this_var) fig.colorbar(cm, ax = ax) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow.contrib.legacy_seq2seq # + from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf #import tensorflow.contrib.seq2seq #from tensorflow.contrib import legacy_seq2seq from lib.metrics import masked_mae_loss from model.dcrnn_cell import DCGRUCell import tensorflow.contrib.legacy_seq2seq from model.dcrnn_model import DCRNNModel from model.dcrnn_supervisor import DCRNNSupervisor n_f=7 n_node=106 info={'base_dir': 'data/model', 'log_level': 'INFO', 'data': {'batch_size': 4, 'dataset_dir': 'data/', 'test_batch_size': 4, 'val_batch_size': 4, 'graph_pkl_filename': 'data/sensor_graph/adj_mx_bay.pkl'},#not used? 
'model': {'cl_decay_steps': 2000, 'filter_type': 'dual_random_walk', 'horizon': 12, 'input_dim': n_f, 'l1_decay': 0, 'max_diffusion_step': 2, 'num_nodes': n_node, 'num_rnn_layers': 2, 'output_dim': n_f, 'rnn_units': 64, 'seq_len': 12, 'use_curriculum_learning': True}, 'train': {'base_lr': 0.01, 'dropout': 0, 'epoch': 0, 'epochs': 100, 'epsilon': 0.001, 'global_step': 0, 'lr_decay_ratio': 0.1, 'max_grad_norm': 5, 'max_to_keep': 100, 'min_learning_rate': 2e-06, 'optimizer': 'adam', 'patience': 50, 'steps': [20, 30, 40, 50], 'test_every_n_epochs': 10}} import pickle with open("data/sensor_graph/adj_mx.pkl", 'rb') as pickle_file: adj_mx = pickle.load(pickle_file) adj_mx=adj_mx[2] from lib.utils import StandardScaler supervisor = DCRNNSupervisor(adj_mx=adj_mx, **info) # - with tf.Session() as sess: supervisor.train(sess=sess) info['train']['model_filename'] with tf.Session() as sess: supervisor.load(sess, 'data/model/dcrnn_DR_2_h_12_64-64_lr_0.01_bs_4_0106094512/models-5.3707-105261') outputs = supervisor.evaluate(sess) outputs['predictions'][0].shape outputs['groundtruth'][0].shape outputs.keys() len(outputs["predictions"]) len(outputs["groundtruth"]) outputs["groundtruth"] # + import numpy as np y_true=np.array(outputs["groundtruth"]) print(y_true.shape) y_true=np.transpose(y_true,(1,0,2)) print(y_true.shape) y_true[0] # + y_pred=np.array(outputs["predictions"]) print(y_pred.shape) y_pred=np.transpose(y_pred,(1,0,2)) print(y_pred.shape) y_pred[0] # - # + from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt import matplotlib.cm as cm from matplotlib import animation node_loc=np.load('data/node_loc.npy') #_node_signal=np.load('data/node_signal.npy') #print(node_loc.shape,_node_signal.shape) x=node_loc[:,0] y=node_loc[:,1] z=node_loc[:,2] y_true_vis=y_true[-150:] y_true_vis=y_true_vis[:,-1,:] y_pred_vis=y_pred[-150:] y_pred_vis=y_pred_vis[:,-1,:] y_pred_vis.shape, y_true_vis.shape # - # + # %matplotlib notebook def update_graph(num): graph._offsets3d = (x,y,z) graph.set_array(y_true_vis[num,:]) title.set_text('3D Test, time={}'.format(num)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') title = ax.set_title('3D Test') #i_f=6 graph = ax.scatter(x, y, z,c=y_true_vis[0,:], marker='o', vmin=np.min(y_true_vis[:,:]),vmax=np.max(y_true_vis[:,:]), cmap=cm.RdBu_r,edgecolor="k") ani = animation.FuncAnimation(fig, update_graph, 150, interval=40, blit=False) fig.colorbar(graph) ani.save('images/results/true.gif') plt.show() # + # %matplotlib notebook def update_graph(num): graph._offsets3d = (x,y,z) graph.set_array(y_pred_vis[num,:]) title.set_text('3D Test, time={}'.format(num)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') title = ax.set_title('3D Test') i_f=6 graph = ax.scatter(x, y, z,c=y_pred_vis[0,:], marker='o', vmin=np.min(y_true_vis[:,:]),vmax=np.max(y_true_vis[:,:]), cmap=cm.RdBu_r,edgecolor="k") ani = animation.FuncAnimation(fig, update_graph, 150, interval=40, blit=False) fig.colorbar(graph) ani.save('images/results/pred.gif') plt.show() # - y_pred_vis.shape for i in range(106): plt.plot(np.arange(150),y_pred_vis[:,i],label="pred") plt.show() # + # %matplotlib notebook for i in range(106): plt.plot(np.arange(150),y_pred_vis[:,i],label="pred") plt.show() # + # %matplotlib notebook for i in range(106): plt.plot(np.arange(150),y_true_vis[:,i],label="pred") plt.show() # - # # %matplotlib notebook # # plt.plot(np.arange(150),y_pred_vis[:,70],label="pred") # plt.plot(np.arange(150),y_true_vis[:,70],label="true") # plt.legend() # plt.show() # --- # 
jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # This cell is added by sphinx-gallery # !pip install mrsimulator --quiet # %matplotlib inline import mrsimulator print(f'You are using mrsimulator v{mrsimulator.__version__}') # - # # # Extended Czjzek distribution (Shielding and Quadrupolar) # # In this example, we illustrate the simulation of spectrum originating from an # extended Czjzek distribution of traceless symmetric tensors. We show two cases, an # extended Czjzek distribution of the shielding and quadrupolar tensor parameters, # respectively. # # Import the required modules. # # # + import numpy as np import matplotlib.pyplot as plt from mrsimulator import Simulator from mrsimulator.methods import BlochDecaySpectrum, BlochDecayCTSpectrum from mrsimulator.models import ExtCzjzekDistribution from mrsimulator.utils.collection import single_site_system_generator # - # ## Symmetric shielding tensor # # ### Create the extended Czjzek distribution # # First, create a distribution of the zeta and eta parameters of the shielding tensors # using the `extended_czjzek_distribution` model as follows, # # # + # The range of zeta and eta coordinates over which the distribution is sampled. z_lim = np.arange(100) * 0.4 + 40 # in ppm e_lim = np.arange(21) / 20 dominant = {"zeta": 60, "eta": 0.3} z_dist, e_dist, amp = ExtCzjzekDistribution(dominant, eps=0.14).pdf(pos=[z_lim, e_lim]) # - # The following is the plot of the extended Czjzek distribution. # # plt.figure(figsize=(4.25, 3.0)) plt.contourf(z_dist, e_dist, amp, levels=10) plt.xlabel(r"$\zeta$ / ppm") plt.ylabel(r"$\eta$") plt.tight_layout() plt.show() # ### Simulate the spectrum # # Create the spin systems from the above $\zeta$ and $\eta$ parameters. # # systems = single_site_system_generator( isotope="13C", shielding_symmetric={"zeta": z_dist, "eta": e_dist}, abundance=amp ) print(len(systems)) # Create a simulator object and add the above system. # # sim = Simulator() sim.spin_systems = systems # add the systems sim.methods = [BlochDecaySpectrum(channels=["13C"])] # add the method sim.run() # The following is the static spectrum arising from a Czjzek distribution of the # second-rank traceless shielding tensors. # # plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") ax.plot(sim.methods[0].simulation, color="black", linewidth=1) plt.tight_layout() plt.show() # ## Quadrupolar tensor # # ### Create the extended Czjzek distribution # # Similarly, you may also create an extended Czjzek distribution of the electric field # gradient (EFG) tensor parameters. # # # + # The range of Cq and eta coordinates over which the distribution is sampled. cq_lim = np.arange(100) * 0.1 # assumed in MHz e_lim = np.arange(21) / 20 dominant = {"Cq": 6.1, "eta": 0.1} cq_dist, e_dist, amp = ExtCzjzekDistribution(dominant, eps=0.25).pdf( pos=[cq_lim, e_lim] ) # - # The following is the plot of the extended Czjzek distribution. # # plt.figure(figsize=(4.25, 3.0)) plt.contourf(cq_dist, e_dist, amp, levels=10) plt.xlabel(r"$C_q$ / MHz") plt.ylabel(r"$\eta$") plt.tight_layout() plt.show() # ### Simulate the spectrum # **Static spectrum** # Create the spin systems. # # systems = single_site_system_generator( isotope="71Ga", quadrupolar={"Cq": cq_dist * 1e6, "eta": e_dist}, abundance=amp ) # Create a simulator object and add the above system. 
# # sim = Simulator() sim.spin_systems = systems # add the systems sim.methods = [ BlochDecayCTSpectrum( channels=["71Ga"], magnetic_flux_density=9.4, # in T spectral_dimensions=[dict(count=2048, spectral_width=2e5)], ) ] # add the method sim.run() # The following is a static spectrum arising from an extended Czjzek distribution of # the second-rank traceless EFG tensors. # # plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") ax.plot(sim.methods[0].simulation, color="black", linewidth=1) ax.invert_xaxis() plt.tight_layout() plt.show() # **MAS spectrum** # # sim.methods = [ BlochDecayCTSpectrum( channels=["71Ga"], magnetic_flux_density=9.4, # in T rotor_frequency=25000, # in Hz spectral_dimensions=[ dict(count=2048, spectral_width=2e5, reference_offset=-1e4) ], ) ] # add the method sim.config.number_of_sidebands = 16 sim.run() # The following is the MAS spectrum arising from an extended Czjzek distribution of the # second-rank traceless EFG tensors. # # plt.figure(figsize=(4.25, 3.0)) ax = plt.subplot(projection="csdm") ax.plot(sim.methods[0].simulation, color="black", linewidth=1) ax.invert_xaxis() plt.tight_layout() plt.show() # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # formats: ipynb,py:light # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:PROJ_irox_oer] * # language: python # name: conda-env-PROJ_irox_oer-py # --- # # Constructing linear model for OER adsorption energies # --- # # ### Import Modules # + import os print(os.getcwd()) import sys import time; ti = time.time() import copy import numpy as np import pandas as pd pd.set_option("display.max_columns", None) # ######################################################### from layout import layout # ######################################################### from local_methods import create_linear_model_plot from local_methods import isolate_target_col # - from methods import isnotebook isnotebook_i = isnotebook() if isnotebook_i: from tqdm.notebook import tqdm verbose = True show_plot = True else: from tqdm import tqdm verbose = False show_plot = False # ### Read Data # + from methods import get_df_features_targets df_features_targets = get_df_features_targets() from methods import get_df_slab df_slab = get_df_slab() # ######################################################### df_i = df_features_targets # Getting phase > 1 slab ids df_slab_i = df_slab[df_slab.phase > 1] phase_2_slab_ids = df_slab_i.slab_id.tolist() # + print( "Number of rows in df_features_targets:", df_i.shape[0], ) # 150 # - # # Dropping phase 1 slabs # + df_index = df_i.index.to_frame() df_index_i = df_index[ df_index.slab_id.isin(phase_2_slab_ids) ] print("Dropping phase 1 slabs") df_i = df_i.loc[ df_index_i.index ] # + # Keeping track of shape, dropping phase 1 points # 95 # 118 # 126 # 132 # 163 # 176 # 183 # 199 # 214 # 233 # 254 # 267 # 280 # 300 # 315 # 325 # 334 # 352 | Sun Jan 31 22:26:52 PST 2021 # 363 | Tue Feb 9 12:43:35 PST 2021 # 374 | Tue Feb 16 15:26:42 PST 2021 # 385 | Sat Feb 20 13:41:31 PST 2021 # 393 | Sat Mar 13 12:13:26 PST 2021 df_i.shape # - # ### Dropping `p_band_center` for now, very few points df_i = df_i.drop(columns=[ ("features", "o", "p_band_center", ), # ("features_stan", "o", "p_band_center", ), ]) # + from proj_data import layout_shared layout_master = layout_shared.update(layout) # - # # ------------------------- # # All single feature models # ## G_O models # + ads_i = "o" feature_ads_i = "oh" # 
if True: # feature_col_i = "active_o_metal_dist" # if True: if False: print( list(df_i["features_stan"][ads_i].columns) ) for feature_col_i in df_i["features_stan"][ads_i].columns: print(40 * "=") print(feature_col_i) print("") df_j = isolate_target_col( df_i, target_col="g_o", ) out_dict = create_linear_model_plot( df=df_j, feature_columns=[feature_col_i, ], ads=ads_i, feature_ads=feature_ads_i, layout=layout_master, verbose=verbose, ) fig = out_dict["fig"] fig.show() # - # ## G_OH models # + ads_i = "oh" feature_ads_i = "o" # if True: if False: # for feature_col_i in df_i.features_stan.columns: for feature_col_i in df_i["features_stan"][ads_i].columns: print(40 * "=") print(feature_col_i) print("") df_j = isolate_target_col( df_i, target_col="g_" + ads_i, ) out_dict = create_linear_model_plot( df=df_j, feature_columns=[feature_col_i, ], ads=ads_i, feature_ads=feature_ads_i, layout=layout_master, verbose=verbose, ) fig = out_dict["fig"] fig.show() # + active="" # # # # # - # # ------------------------- # # G_O Model # + filter_cols = [ ('targets', 'g_o', ''), # ('targets', 'g_oh', ''), # ('targets', 'g_o_m_oh', ''), # ('features', 'oh', 'O_magmom'), # ('features', 'oh', 'Ir_magmom'), # ('features', 'oh', 'active_o_metal_dist'), # ('features', 'oh', 'angle_O_Ir_surf_norm'), # ('features', 'oh', 'ir_o_mean'), # ('features', 'oh', 'ir_o_std'), # ('features', 'oh', 'octa_vol'), ('features', 'o', 'O_magmom'), ('features', 'o', 'Ir_magmom'), ('features', 'o', 'Ir*O_bader'), ('features', 'o', 'Ir_bader'), ('features', 'o', 'O_bader'), ('features', 'o', 'active_o_metal_dist'), ('features', 'o', 'angle_O_Ir_surf_norm'), ('features', 'o', 'ir_o_mean'), ('features', 'o', 'ir_o_std'), ('features', 'o', 'octa_vol'), ('features', 'o', 'Ir*O_bader/ir_o_mean'), ('features', 'dH_bulk', ''), ('features', 'volume_pa', ''), ('features', 'bulk_oxid_state', ''), ('features', 'effective_ox_state', ''), # ('features_pre_dft', 'active_o_metal_dist__pre', ''), # ('features_pre_dft', 'ir_o_mean__pre', ''), # ('features_pre_dft', 'ir_o_std__pre', ''), # ('features_pre_dft', 'octa_vol__pre', ''), ] df_i = df_i[filter_cols] # + new_cols = [] for col_i in df_i.columns: if col_i[0] == "features": if col_i[1] in ["o", "oh", "ooh", "bare", ]: new_col_i = ("features", col_i[2], ) elif col_i[2] == "": # new_col_i = col_i[1] new_col_i = ("features", col_i[1], ) else: print(col_i) # new_col_i = "TEMP" new_col_i = ("features", "TEMP", ) elif col_i[0] == "targets": # new_col_i = col_i[1] new_col_i = ("targets", col_i[1], ) else: print(col_i) # new_col_i = "TEMP" new_col_i = ("TEMP", "TEMP", ) new_cols.append(new_col_i) # new_cols idx = pd.MultiIndex.from_tuples(new_cols) df_i.columns = idx # df_i.columns = new_cols # - df_i # + ads_i = "o" feature_ads_i = "oh" df_j = df_i # df_j = isolate_target_col( # df_i, # target_col="g_o", # # target_col="g_oh", # ) # feature_cols_all = list(df_j["features_stan"][ads_i].columns) # feature_cols_all = list(df_j["features"][ads_i].columns) feature_cols_all = df_j["features"].columns.tolist() format_dict_i = { "color": "stoich", } df_j = df_j.dropna() out_dict = create_linear_model_plot( df=df_j, layout=layout_master, ads=ads_i, feature_ads=feature_ads_i, format_dict=format_dict_i, # feature_columns=["eff_oxid_state", "octa_vol", "dH_bulk", ], # feature_columns=["eff_oxid_state", "octa_vol", "dH_bulk", "bulk_oxid_state", ], feature_columns=feature_cols_all, verbose=verbose, ) fig = out_dict["fig"] fig.write_json( os.path.join( os.environ["PROJ_irox_oer"], "workflow/oer_vs_features", 
"out_plot/oer_lin_model__G_O_plot.json")) # + # df_i["features"]["octa_vol"] # + # assert False # - if show_plot: fig.show() # + active="" # # # # # - # # G_OH Model # + ads_i = "oh" feature_ads_i = "oh" df_j = df_i df_j = df_j.dropna() # df_j = isolate_target_col( # df_i, # target_col="g_oh", # ) # feature_cols_all = list(df_j["features_stan"][ads_i].columns) feature_cols_all = df_j["features"].columns.tolist() out_dict = create_linear_model_plot( df=df_j, layout=layout_master, feature_ads=feature_ads_i, ads=ads_i, format_dict=format_dict_i, # feature_columns=["eff_oxid_state", "octa_vol", "dH_bulk", ], # feature_columns=["eff_oxid_state", "octa_vol", "dH_bulk", "bulk_oxid_state", ], # feature_columns=["eff_oxid_state", "octa_vol", "dH_bulk", "bulk_oxid_state", "ir_o_mean", ], feature_columns=feature_cols_all, verbose=verbose, ) fig = out_dict["fig"] fig.write_json( os.path.join( os.environ["PROJ_irox_oer"], "workflow/oer_vs_features", "out_plot/oer_lin_model__G_OH_plot.json")) # - if show_plot: fig.show() # + active="" # # # # - # ### Get index off of graph with str frag # + df_ind = df_features_targets.index.to_frame() frag_i = "vota" for index_i, row_i in df_ind.iterrows(): name_i = row_i.compenv + "__" + row_i.slab_id if frag_i in name_i: print(index_i) # - # ######################################################### print(20 * "# # ") print("All done!") print("Run time:", np.round((time.time() - ti) / 60, 3), "min") print("oer_lin_model.ipynb") print(20 * "# # ") # ######################################################### # + active="" # # # # + jupyter={} # df=df_i # target_col="g_o" # + jupyter={} # # def isolate_target_col(df, target_col=None): # """ # """ # #| - isolate_target_col # df_i = df # target_col_to_plot = target_col # cols_tuples = [] # for col_i in list(df_i.columns): # if "features_stan" in col_i[0]: # cols_tuples.append(col_i) # # elif col_i == ("target_cols", target_col_to_plot): # elif col_i == ("targets", target_col_to_plot, ""): # cols_tuples.append(col_i) # elif col_i[0] == "data": # cols_tuples.append(col_i) # elif col_i[0] == "format": # cols_tuples.append(col_i) # else: # # print("Woops:", col_i) # tmp = 42 # df_j = df_i.loc[:, cols_tuples] # cols_to_check_nan_in = [] # for col_i in df_j.columns: # if "features" in col_i[0]: # cols_to_check_nan_in.append(col_i) # elif "targets" in col_i[0]: # cols_to_check_nan_in.append(col_i) # # df_j = df_j.dropna(subset=cols_to_check_nan_in) # TEMP # # df_j = df_j.dropna() # # return(df_j) # #__| # + jupyter={} # df_j # + jupyter={} # feature_cols_all # + jupyter={} # assert False # + jupyter={} # df_i.columns.tolist() # + jupyter={} # df_i.columns.tolist() # + jupyter={} # assert False # + jupyter={} # assert False # + jupyter={} # from sklearn.linear_model import LinearRegression # import plotly.graph_objs as go # from proj_data import scatter_marker_size # from plotting.my_plotly import my_plotly_plot # + jupyter={} # df = df_j # # feature_columns = [feature_col_i, ] # feature_columns = feature_cols_all # ads = ads_i # feature_ads = feature_ads_i # layout = layout # verbose = True # save_plot_to_file = True # # def create_linear_model_plot( # # df=None, # # feature_columns=None, # # ads=None, # # feature_ads=None, # # format_dict=None, # # layout=None, # # verbose=True, # # save_plot_to_file=False, # # ): # """ # """ # #| - create_linear_model_plot # # ##################################################### # df_i = df # features_cols_to_include = feature_columns # # ##################################################### # 
#| - Dropping feature columns # if features_cols_to_include is None or features_cols_to_include == "all": # features_cols_to_include = df_i["features_stan"][feature_ads].columns # cols_to_drop = [] # # for col_i in df_i["features_stan"][feature_ads].columns: # # for col_i in df_i["features"][feature_ads].columns: # for col_i in df_i["features"].columns: # if col_i not in features_cols_to_include: # cols_to_drop.append(col_i) # df_tmp = copy.deepcopy(df_i) # for col_i in cols_to_drop: # df_i = df_i.drop(columns=[("features_stan", feature_ads, col_i)]) # # feature_cols = list(df_i.features_stan.columns) # # feature_cols = list(df_i["features_stan"][feature_ads].columns) # feature_cols = list(df_i["features"].columns) # # print(feature_cols) # plot_title = " | ".join(feature_cols) # plot_title = "Features: " + plot_title # #__| # #| - Creating linear model # # X = df_i["features_stan"][feature_ads].to_numpy() # # X = X.reshape(-1, len(df_i["features_stan"][feature_ads].columns)) # X = df_i["features"].to_numpy() # X = X.reshape(-1, len(df_i["features"].columns)) # y = df_i.targets[ # df_i.targets.columns[0] # ] # model = LinearRegression() # model.fit(X, y) # y_predict = model.predict(X) # #__| # # | - Put together model output y_pred and y into dataframe # # y = out_dict["y"] # # y_predict = out_dict["y_predict"] # y.name = y.name[0] # df_model_i = pd.DataFrame(y) # df_model_i.columns = ["y", ] # df_model_i["y_predict"] = y_predict # df_model_i["diff"] = df_model_i["y"] - df_model_i["y_predict"] # df_model_i["diff_abs"] = np.abs(df_model_i["diff"]) # # __| # # Calculate Mean Absolute Error (MAE) # mae = df_model_i["diff_abs"].sum() / df_model_i["diff"].shape[0] # if verbose: # print(20 * "-") # print("model.score(X, y):", model.score(X, y)) # print("Model MAE:", mae) # print("") # # print(feature_cols) # # print(model.coef_) # # for i, j in zip(list(df_i["features_stan"][ads].columns), model.coef_): # for i, j in zip(list(df_i["features"].columns), model.coef_): # print(i, ": ", j, sep="") # print(20 * "-") # #| - Plotting # data = [] # from methods import get_df_slab # df_slab = get_df_slab() # #| - DEPRECATED | Getting colors ready # # df_slab_tmp = df_slab[["slab_id", "bulk_id"]] # # # # bulk_id_slab_id_lists = np.reshape( # # df_slab_tmp.to_numpy(), # # ( # # 2, # # df_slab_tmp.shape[0], # # ) # # ) # # # # slab_bulk_mapp_dict = dict(zip( # # list(bulk_id_slab_id_lists[0]), # # list(bulk_id_slab_id_lists[1]), # # )) # # # # # # slab_bulk_id_map_dict = dict() # # for i in df_slab_tmp.to_numpy(): # # slab_bulk_id_map_dict[i[0]] = i[1] # # # # # print("list(bulk_id_slab_id_lists[0]):", list(bulk_id_slab_id_lists[0])) # # # print("") # # # print("list(bulk_id_slab_id_lists[1]):", list(bulk_id_slab_id_lists[1])) # # # print("") # # # print("slab_bulk_mapp_dict:", slab_bulk_mapp_dict) # # # # import random # # get_colors = lambda n: list(map(lambda i: "#" + "%06x" % random.randint(0, 0xFFFFFF),range(n))) # # # # slab_id_unique_list = df_i.index.to_frame()["slab_id"].unique().tolist() # # # # bulk_id_list = [] # # for slab_id_i in slab_id_unique_list: # # # bulk_id_i = slab_bulk_mapp_dict[slab_id_i] # # bulk_id_i = slab_bulk_id_map_dict[slab_id_i] # # bulk_id_list.append(bulk_id_i) # # # # color_map_dict = dict(zip( # # bulk_id_list, # # get_colors(len(slab_id_unique_list)), # # )) # # # # # Formatting processing # # color_list = [] # # for name_i, row_i in df_i.iterrows(): # # # ################################################# # # slab_id_i = name_i[1] # # # 
################################################# # # phase_i = row_i["data"]["phase"][""] # # stoich_i = row_i["data"]["stoich"][""] # # sum_norm_abs_magmom_diff_i = row_i["data"]["sum_norm_abs_magmom_diff"][""] # # norm_sum_norm_abs_magmom_diff_i = row_i["data"]["norm_sum_norm_abs_magmom_diff"][""] # # # ################################################# # # # # # ################################################# # # row_slab_i = df_slab.loc[slab_id_i] # # # ################################################# # # bulk_id_i = row_slab_i.bulk_id # # # ################################################# # # # # bulk_color_i = color_map_dict[bulk_id_i] # # # # if stoich_i == "AB2": # # color_list.append("#46cf44") # # elif stoich_i == "AB3": # # color_list.append("#42e3e3") # # # # # color_list.append(norm_sum_norm_abs_magmom_diff_i) # # # color_list.append(bulk_color_i) # #__| # #| - Creating parity line # # x_parity = y_parity = np.linspace(0., 8., num=100, ) # x_parity = y_parity = np.linspace(-2., 8., num=100, ) # trace_i = go.Scatter( # x=x_parity, # y=y_parity, # line=go.scatter.Line(color="black", width=2.), # mode="lines") # data.append(trace_i) # #__| # #| - Main Data Trace # # color_list_i = df_i["format"]["color"][format_dict["color"]] # trace_i = go.Scatter( # y=y, # x=y_predict, # mode="markers", # marker=go.scatter.Marker( # # size=12, # size=scatter_marker_size, # # color=color_list_i, # colorscale='Viridis', # colorbar=dict(thickness=20), # opacity=0.8, # ), # # text=df_i.name_str, # # text=df_i.data.name_str, # textposition="bottom center", # ) # data.append(trace_i) # #__| # #| - Layout # # y_axis_target_col = df_i.target_cols.columns[0] # y_axis_target_col = df_i.targets.columns[0] # y_axis_target_col = y_axis_target_col[0] # if y_axis_target_col == "g_o": # layout.xaxis.title.text = "Predicted ΔG*O" # layout.yaxis.title.text = "Simulated ΔG*O" # elif y_axis_target_col == "g_oh": # layout.xaxis.title.text = "Predicted ΔG*OH" # layout.yaxis.title.text = "Simulated ΔG*OH" # else: # print("Woops isdfsdf8osdfio") # layout.xaxis.range = [2.5, 5.5] # layout.showlegend = False # dd = 0.2 # layout.xaxis.range = [ # np.min(y_predict) - dd, # np.max(y_predict) + dd, # ] # layout.yaxis.range = [ # np.min(y) - dd, # np.max(y) + dd, # ] # layout.title = plot_title # #__| # fig = go.Figure(data=data, layout=layout) # if save_plot_to_file: # my_plotly_plot( # figure=fig, # save_dir=os.path.join( # os.environ["PROJ_irox_oer"], # "workflow/oer_vs_features", # ), # plot_name="parity_plot", # write_html=True) # #__| # # ##################################################### # out_dict = dict() # # ##################################################### # out_dict["fig"] = fig # out_dict["df_model_i"] = df_model_i # out_dict["mae"] = mae # out_dict["X"] = X # out_dict["y"] = y # out_dict["y_predict"] = y_predict # # ##################################################### # # return(out_dict) # #__| # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Sorting # # ## Setup # + import time import numpy as np import matplotlib.pyplot as plt # %config InlineBackend.figure_format = 'retina' def gettime(f, *args): "Return time in seconds; %time, %timeit don't work in functions so we write our own" t0 = time.time() f(*args) t1 = time.time() return t1 - t0 def showtime(f, A, step=1): n = len(A) times = [gettime(f, A[0:i]) for i in 
range(1,n,step)] plt.figure(figsize=(4,3)) plt.plot(range(1,n,step), times, '.', color='#fdae61', alpha=.8, markersize=10, markeredgewidth=.3, markeredgecolor='grey') plt.xlabel("Problem size") plt.ylabel("Time in seconds") plt.show() # - # ## Bubble sort # + def lprint(A,start=-1,stop=-1): for i,a in enumerate(A): print(f"{a:3}", end=' ') print("\b", end="") if i>=start and i<=stop: print("\u0332", end="") def bubble(A, verbose=True): swapped=True second_to_last_idx = len(A)-2 n = 1 while swapped: swapped=False if verbose: print(); lprint(A); print(f"\t\tPass {n}"); n += 1 for i in range(second_to_last_idx+1): if A[i] > A[i+1]: A[i], A[i+1] = A[i+1], A[i] swapped = True if verbose: lprint(A,i,i+1); print(f"\t\tSwap A[{i}], A[{i+1}]") # - A = [9,1,4,2,-3] bubble(A) A = np.random.randint(1,100,400) def quiet_bubble(A): return bubble(A,verbose=False) showtime(quiet_bubble, A) # ## Merge sort # + def merge(a,b): "Merge two sorted lists in O(nlogn)" return sorted(a+b) # cheating: easy but not O(n) def mergesort(data): if len(data)==0: return [] if len(data)==1: return [data[0]] if len(data)==2: if data[0]', mergesort(data)) data = [5,1] print(data, '->', mergesort(data)) data = [] print(data, '->', mergesort(data)) data = [10,3,1,10,99,100] print(data, '->', mergesort(data)) # - A = list(np.random.randint(1,100,2000)) showtime(mergesort, A) def merge(a,b): "Merge two sorted arrays, returning another, w/o wasting so much space" i = 0 j = 0 c = [] while i', mergesort(data)) data = [5,1] print(data, '->', mergesort(data)) data = [] print(data, '->', mergesort(data)) data = [10,3,1,10,99,100] print(data, '->', mergesort(data)) # - A = list(np.random.randint(1,100,2_000)) showtime(mergesort, A) # ## Quicksort def partition(A,lo,hi): pivot = A[hi] # pick last element as pivot # Filter for all less/greater than pivot (ignore pivot spot) left = [A[i] for i in range(lo,hi) if A[i]=pivot] A[lo:hi+1] = left+[pivot]+right # copy back to A return lo + len(left) # return index of pivot # + def qsort_(A, lo, hi): if lo >= hi: return pivot_idx = partition(A,lo,hi) qsort_(A, lo, pivot_idx-1) qsort_(A, pivot_idx+1, hi) def qsort(A): return qsort_(A, lo=0, hi=len(A)-1) # - A = np.random.randint(-2,10,20) print(A) qsort(A) print(A) A = np.random.randint(1,100,1000) showtime(qsort, A) # ### Slightly better partition # # Well, it *should* be better but it uses more/slower python looping, which makes it take much longer. Seems like it'd be faster as I make just one pass over the list but list comprehension probably optimized like crazy. 
def partition(A, lo, hi): pivotidx = hi pivot = A[pivotidx] # pick last element as pivot less = [] more = [] for i in range(lo,hi): if i==pivotidx: continue elif A[i] list: size = max(A) + 1 holes = [0] * size for a in A: holes[a] += 1 A_ = [] for i in range(0,size): A_.extend([i] * holes[i]) # for j in range(holes[i]): # alternative # A_.append(i) return A_ A = np.random.randint(1,30,10) psort(A) # - A = np.random.randint(1,100,1000) showtime(psort, A) # ## Bucket sort # + import numpy as np def bsort(A:list, nbuckets=5) -> list: mx = max(A) buckets = [] for i in range(nbuckets): buckets.append([]) for a in A: a_normalized = a / mx # get into 0..1 i = int(a_normalized * (nbuckets-1)) # spread across buckets buckets[i].append(a) A_ = [] for i in range(nbuckets): A_.extend( sorted(buckets[i]) ) return A_ A = np.random.random(size=10) * 100 + 10 bsort(A, 5) # - A = np.random.random(size=1000) * 100 + 10 showtime(bsort, A) # ## Bucket sort on strings # + # assume lowercase English letters for simplicity def pstr_sort(A:list) -> list: size = ord('z') - ord('a') + 1 holes = [] for i in range(size): holes.append([]) for s in A: i = ord(s[0])-ord('a') holes[i].append(s) #objviz(holes).view() A_ = [] for i in range(ord('z')-ord('a') + 1): A_.extend( sorted(holes[i]) ) return A_ from lolviz import * A = ['apple', 'ape', 'zebra', 'cat', 'canary', 'civet', 'dog'] pstr_sort(A) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf a = tf.constant([9, 8 ,7],name='a') b = tf.constant([1,2,3], name='b') soma = a + b type(a) print(a) with tf.Session() as sess: print(sess.run(soma)) a1 = tf.constant([[1,2,3], [4,5,6]], name='a1') type(a1) print(a1) a1.shape b1 = tf.constant([[1, 2, 3], [4, 5, 6]], name='b1') soma1 = tf.add(a1, b1, name='soma1') with tf.Session() as sess: print(sess.run(a1), end='\n\n') print(sess.run(b1), end='\n\n') print(sess.run(soma1)) a2 = tf.constant([[1,2,3], [4,5,6]]) b2 = tf.constant([[1], [2]]) with tf.Session() as s: print(s.run(a2), end='\n\n') print(s.run(b2), end='\n\n') soma2 = tf.add(a2, b2) with tf.Session() as s: print(s.run(soma2)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Rt Live Model # + # %load_ext autoreload # %autoreload 2 import pymc3 as pm import pandas as pd import numpy as np import arviz as az from matplotlib import pyplot as plt from covid.models.generative import GenerativeModel from covid.data import summarize_inference_data # %config InlineBackend.figure_format = 'retina' from covid.data import get_and_process_covidtracking_data, summarize_inference_data # - # ## Fetch data and select the state's data df = get_and_process_covidtracking_data(run_date=pd.Timestamp.today()-pd.Timedelta(days=1)) region = "OR" model_data = df.loc[region] # ## Create the model instance and sample gm = GenerativeModel(region, model_data) gm.sample() # ## Summarize Model Output result = summarize_inference_data(gm.inference_data) result.tail(10) # ## Plot Model Output fig, ax = plt.subplots(figsize=(10,5)) result.test_adjusted_positive.plot(c="g", label="Test-adjusted") result.test_adjusted_positive_raw.plot(c="g", alpha=.5, label="Test-adjusted (raw)", style="--") result.infections.plot(c="b", label="Infections") 
gm.observed.positive.plot(c='r', alpha=.7, label="Reported Positives") fig.set_facecolor('w') ax.legend(); # + fig, ax = plt.subplots(figsize=(10,5)) ax.set_title(f"{region} $R_t$") samples = gm.trace['r_t'] x=result.index cmap = plt.get_cmap("Reds") percs = np.linspace(51, 99, 40) colors = (percs - np.min(percs)) / (np.max(percs) - np.min(percs)) samples = samples.T result["median"].plot(c="k", ls='-') for i, p in enumerate(percs[::-1]): upper = np.percentile(samples, p, axis=1) lower = np.percentile(samples, 100-p, axis=1) color_val = colors[i] ax.fill_between(x, upper, lower, color=cmap(color_val), alpha=.8) ax.axhline(1.0, c="k", lw=1, linestyle="--") fig.set_facecolor('w') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="joDEN9996NPZ" # # Nbdev # > In this chapter, we will learn about publishing python notebooks with [Nbdev](http://nbdev.fast.ai/tutorial/). # # - toc: true # - description: this isnt working # - image: images/company_logo.png # - keywords: thisworks # - badges: true # - comments: true # - categories: [test] # - hide: false # - metadata_key1: metadata_value1 # - metadata_key2: metadata_value2 # + [markdown] id="L0bXqQMJKVK2" # [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/bnia/dataguide/main?filepath=%2Fnotebooks%2F06_Nbdev.ipynb) # [![Binder](https://pete88b.github.io/fastpages/assets/badges/colab.svg)](https://colab.research.google.com/github/bnia/dataguide/blob/main/notebooks/06_Nbdev.ipynb) # [![Binder](https://pete88b.github.io/fastpages/assets/badges/github.svg)](https://github.com/bnia/dataguide/tree/main/notebooks/06_Nbdev.ipynb) # [![Open Source Love svg3](https://badges.frapsoft.com/os/v3/open-source.svg?v=103)](https://github.com/ellerbrock/open-source-badges/) # # [![NPM License](https://img.shields.io/npm/l/all-contributors.svg?style=flat)](https://github.com/bnia/dataguide/blob/main/LICENSE) # [![Active](http://img.shields.io/badge/Status-Active-green.svg)](https://bnia.github.io) # [![GitHub last commit](https://img.shields.io/github/last-commit/bnia/dataguide.svg?style=flat)]() # # [![GitHub stars](https://img.shields.io/github/stars/bnia/dataguide.svg?style=social&label=Star)](https://github.com/bnia/dataguide) # [![GitHub watchers](https://img.shields.io/github/watchers/bnia/dataguide.svg?style=social&label=Watch)](https://github.com/bnia/dataguide) # [![GitHub forks](https://img.shields.io/github/forks/bnia/dataguide.svg?style=social&label=Fork)](https://github.com/bnia/dataguide) # [![GitHub followers](https://img.shields.io/github/followers/bnia.svg?style=social&label=Follow)](https://github.com/bnia/dataguide) # # [![Tweet](https://img.shields.io/twitter/url/https/github.com/bnia/dataguide.svg?style=social)](https://twitter.com/intent/tweet?text=Check%20out%20this%20%E2%9C%A8%20colab%20by%20@bniajfi%20https://github.com/bnia/dataguide%20%F0%9F%A4%97) # [![Twitter Follow](https://img.shields.io/twitter/follow/bniajfi.svg?style=social)](https://twitter.com/bniajfi) # + id="xd8MjmFvugUz" #hide # !pip freeze > requirements.txt # + id="KJCFuYT38nfb" # !pip install nbdev # + id="6ZPUuf1r6NPT" #hide from nbdev import * # + [markdown] id="N38-ZVW26NPa" # ## (Non-Technical) Introduction # + [markdown] id="AwIspUH_6NPb" # [![forthebadge made-with-python](http://ForTheBadge.com/images/badges/made-with-python.svg)](https://www.python.org/) # # [![ForTheBadge 
built-with-love](http://ForTheBadge.com/images/badges/built-with-love.svg)](https://GitHub.com/Naereen/) [![GitHub license](https://img.shields.io/github/license/Naereen/StrapDown.js.svg)](https://github.com/Naereen/StrapDown.js/blob/master/LICENSE) # # [![ForTheBadge powered-by-electricity](http://ForTheBadge.com/images/badges/powered-by-electricity.svg)](http://ForTheBadge.com) # + [markdown] id="AmnKSU6l6NPb" # Fastai, the organization that created this tool, helps empower others by lowering the barrier of entry when breaking into software (AI) development. # # In their own words: # + [markdown] id="aYpgUXhA6NPb" # > Nbdev is a tool that allows you to fully develop a library in Jupyter Notebooks, putting all your code, tests, and documentation in one place. -[Nbdev](http://nbdev.fast.ai/) # + [markdown] id="L0hoasKF6NPd" # # The Nbdev library and tutorial is actually written using a notebook. They can be created and published by running the notebook on itself! # # If you check, you can see that their [tutorial](http://nbdev.fast.ai/) is actually being hosted on [GitHub](https://github.com/fastai/nbdev/tree/master/docs) pages. # + [markdown] id="HnNEosdM6NPb" # Some of the tools that Fastai provides includes: # # - [Nbdev](https://github.com/fastai/nbdev) - For software development and documentation # - [Fastpages](https://github.com/fastai/fastpages) - Works like Nbdev but for blogging # - [Fastscript](https://github.com/fastai/fastscript) - Converts a python function to a CLI script # - [Fastprogress](https://github.com/fastai/fastprogress) - Reveals a progress bar for long processes # - [FastiAi](https://github.com/fastai/fastai) - Data convenience tools and AI models # - [FastAi Course](https://course.fast.ai/) - V3; a practical deep-learning course # + [markdown] id="8SQZYkoh6NPc" # Are you using one of these products? Tweet @jeremyphoward and [staff](https://www.fast.ai/about/#founders) or @HamelHusain for all their work! # + [markdown] id="OqNN6coI6NPc" # ### With Nbdev, you can: # + [markdown] id="S_nOF-Ej6NPc" # 1) **Publish notebooks as a module on PyPI.** # - A) "Flag" cells to [export](http://nbdev.fast.ai/export/) them as part of the Python library # # 2) **Install it anywhere**: `pip install ` # - A) This can be used in conjunction with Fastscript to create CLI scripts # - B) The CLI help interface is reachble by typing `! -h` # # 3) **Convert notebooks to documenation in HTML and Markdown.** # - A) Use the index.ipynb to auto-generate a homepage and a Github and PyPI README.md file. # - C) Have the backticked function auto-link to a functioning GitHub source code. # - D) Host the documentation on GitHub pages so they are always up to date. # - E) "Flag" cells to either collapse or hide the cells or the output from documentation. # - F) Personalize the [Jekyll](https://jekyllrb.com/) website templates used to make the site. # - G) Store photos in an images folder for use in your notebook and have it copied to your website. # # 4) **Use created modules across notebooks in realtime.** # - A) Auto-reload modules across notebooks during development if a their corresponding .py script is updated # + [markdown] id="6UErNfHT6NPd" # ### *While we were on-topic!* # + [markdown] id="uE_mFsvJ6NPe" # If this is more than you need, consider the following: # # - Sharing the Colab links directly. # - [Nbconvert](https://nbconvert.readthedocs.io/en/latest/) notes into HTML, LaTeX, PDF, Markdown, etc. 
# - Using [mybinder](https://mybinder.org/) # > Have a repository full of Jupyter notebooks? With Binder, open those notebooks in an executable environment, making your code immediately reproducible by anyone, anywhere. # # + [markdown] id="j4Y0a16b6NPe" # ## (Technical) Overview # + [markdown] id="CpEWWtdZ6NPe" # 1. Nbdev is a Python library that is executed directly from the terminal. # # 2. At its heart, a user simply enters a directory and executes one of the Nbdev terminal commands. # # 3. Python notebooks are automatically located and processed for publication and deployment. # # 4. Specific "Flags" (either the Comment or ["Magic"](https://ipython.readthedocs.io/en/stable/interactive/magics.html#) variety) tell Nbdev how to treat cells. # # 5. Photos stored in the 'images' folder will be copied over to your publication folder. # # + [markdown] id="raGdUN9X6NPe" # ### Nbdev Terminal Commands # + [markdown] id="xcA90huX6NPf" # Entering these commands into the terminal at the project's root directory will handle most of everything. # # *Pip and Git specific steps are not listed, but are included in sections below.* # + [markdown] id="Zfz7BlRi6NPf" # > Important: Information in this section is ever-changing! Quotes found here came from Nbdev's [official documentation](https://nbdev.fast.ai/cli/#nbdev_nb2md), from which you will derive much more insight with respect to using these functions. # + [markdown] id="EheViw-B6NPf" # #### Create # + [markdown] id="omQ_zq9Q6NPf" # > - `nbdev_new`: creates a new Nbdev project. # > - `nbdev_update_lib`: propagates any change in the library back to the notebooks. # + [markdown] id="gVKmdxcm6NPf" # #### Compile # + [markdown] id="2_yDAVGa6NPf" # > - `nbdev_nb2md`: converts a notebook to a Markdown file. # > - `nbdev_build_docs`: builds the documentation from the notebooks. # > - `nbdev_build_lib`: builds the library from the notebooks. # > - `nbdev_bump_version`/ `nbdev_bump`: increments version in settings.py by one. # + [markdown] id="0hQ8NCES6NPg" # *Nbdev ignores any file that starts with an underscore. Version bumping must be performed each time prior to uploading to PyPI.* # # Combine commands with with regular expressions for more power. # - A commands like `nbdev_build_docs --fname=[!test]*.ipynb` will run all files except ones with filenames that start with 'test' # + [markdown] id="o56607C56NPg" # #### Publish # # + [markdown] id="0o8-ED_v6NPg" # > - `nbdev_clean_nbs`: removes all superfluous metadata form the notebooks to avoid merge conflicts. # > - `nbdev_install_git_hooks`: installs the git hooks that use ( `nbdev_fix_merge` and `nbdev_diff_nbs`) automatically on each commit/merge. # + [markdown] id="TbXZOj0U6NPg" # *GitHooks must run once on project creation for them is complete so that can be set up permanently.* # + [markdown] id="F5uJXtPb6NPg" # #### Check # + [markdown] id="MFy5xTcJ6NPh" # > - `nbdev_read_nbs`: reads all notebooks to make sure none are broken. # > - `nbdev_trust_nbs`: trusts all notebooks so that the HTML content is shown. # > - `nbdev_upgrade`: updates an existing Nbdev project to use the latest features. # + [markdown] id="kvC6KE_r6NPh" # nbdev_upgrade can run any time there are new updates, through it only needs to run once per update. # + [markdown] id="bL1ElWdA6NPh" # ### Nbdev Flags (Mark Up/Cell Magic) # + [markdown] id="o1q3EfEv6NPh" # Nbdev scripts rely on special [comments **or** magics](https://nbdev.fast.ai/tutorial/#nbdev-flags), depending on your choice, for cell-level handling of notebooks. 
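# + [markdown]
# For instance (an illustrative cell, not taken from any template here), the comment form of a flag is simply written on the first line of the cell. The cell below would still execute when the notebook runs, but `#hide` keeps both its input and output out of the built documentation:

# +
#hide
# Setup work that readers of the docs do not need to see.
import os
print("working directory:", os.getcwd())
# -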
# + [markdown] id="EzQJVLYC6NPh" # Whereas Nbdev's terminal commands are executed at a project's root directory, content covered in this section is placed in the notebooks' cells. # # Most Nbdev flags define design instructions for when we create the website/module (Ex. `default_exp`, `autoreload`). # + [markdown] id="XK-3BuH96NPi" # **1.** **Basic Comment and Magics** # + [markdown] id="MJQX876z6NPi" # > **Warning**: Content in this section is pulled from the Fastpages [blog](https://fastpages.fast.ai/fastpages/jupyter/2020/02/21/introducing-fastpages.html#Other-Feautures) [posts](https://pete88b.github.io/fastpages/nbdev/fastai/jupyter/2020/07/24/nbdev-deep-dive.html) and the Nbdev docs. If you have any issues, you will have to scoure these sources to resolve them! # + [markdown] id="7TxR0bZj6NPi" # To gain access to [Nbdev's new 'magics'](https://pete88b.github.io/fastpages/nbdev/fastai/jupyter/2020/06/02/nbdev-magic.html), you my find it neccesary to execute the command `nbdev_upgrade` only once; add `from nbdev import *` to the top of a notebook in a code-cell (this [article](https://pete88b.github.io/fastpages/nbdev/fastai/jupyter/2020/06/02/nbdev-magic.html) covers that); and put Flags at the top of a cell for it to work. # + [markdown] id="iQM8Fe0M6NPi" # > **Important**: The list shown below was obtained from [here](https://pete88b.github.io/fastpages/nbdev/fastai/jupyter/2020/06/02/nbdev-magic.html). As always, please refer to the official source for up to date information. # + [markdown] id="GlEOrzLB6NPi" # | Comment flag | Magic flag | | # |---------------------------------------------|-----------------------------|--------------------------------------------------------------------------------| # | `default_exp` | `nbdev_default_export` | Define the name of the module everything should be exported in. | # | `exports` | `nbdev_export_and_show` | Export and show code in the docs. | # | `exporti` | `nbdev_export_internal` | Export but do not show in docs and or add to `__all__`. | # | `export` | `nbdev_export` | Export but do not show in docs. | # | `hide_input` | `nbdev_hide_input` | Do not show input of a test cell in docs. | # | `hide_output` | `nbdev_hide_output` | Do not show output of a test cell in docs. | # | `hide` | `nbdev_hide` | Do not show a test cell or markdown in docs. | # | `default_cls_lvl` | `nbdev_default_class_level` | Define the default toc level of classes. | # | `collapse_output open` or `collapse-output` | `nbdev_collapse_output` | Inlcude output in the docs under a collapsable element. | # | `collapse_input close` or `collapse-input` | `nbdev_collapse_output` | Inlcude input in the docs under a collapsable element. | # | `collapse_show` or `collapse-show` | `nbdev_collapse_input open` | Inlcude input in the docs under a collapsable element that is open by default. | # | `collapse_hide` or `collapse-hide` | `nbdev_collapse_input` | Inlcude input in the docs under a collapsable element. | # | `collapse` | `nbdev_collapse_input` | Inlcude input in the docs under a collapsable element. | # + [markdown] id="M2Z03gQO6NPi" # **2**: **Autoreloading** # # # + [markdown] id="xSqLo7zCiQFv" # Put `%load_ext autoreload` `%autoreload 2` in ipynbs. If you are working the notebook and importing the local version of your library, the imported modules will auto update whenever the corresponding .py file gets updated. # # > Note: This opens new doors in your programming experience. # > # > Provided all the notebooks have access to the same file directory: # > 1. 
You can edit .py's and use `update_lib` to push the exported code's edits back into their original ipynbs (ran anywhere) # > 2. You can edit .IPYNB's and use `build_lib` to push the exported code's edits back into their .pys (ran anywhere) # > # > The autoreload feature will let you immediately test the new modules from any desired notebook. # + [markdown] id="_o0lvq2P6NPj" # **3. Console Scripts** are made by marking up functions using the [fastscript](https://fastscript.fast.ai/) tool and editing the settings.ini file (see this [example](https://github.com/fastai/nbdev/blob/554e63b1b05390a3ad2bc086d8485f98b1e63ca4/settings.ini)). # # + [markdown] id="65S2xF6E6NPj" # **4. Anchor Links** # # > Anchor links are web links that allow users to leapfrog to a specific point on a website page. They save them the need to scroll and skim-read, making navigation easier. - [Telegraph](https://www.telegraph.co.uk/branded-content/marketing-guides/what-is-anchor-link/) # # > For example, # Both landmark and [hyperlink](#hyperlinkingValue) have been placed on this line. Click the link for this line to zip up to the top of your screen. # # Code: # ``` # # This is the hyperlink # [hyperlinktext](#hyperlinkingValue) # # # It will take you to wherever you place this landmark # # ``` # # # + [markdown] id="1-o7Uw1O6NPj" # **5**: **Jekyll Automated Document Hyperlinking** # # Mentioning an exported class with backticks `functionName` will create a hyperlink to its sourcode on GitHub. This will only work if it is an exported function. # # - Functions and Classes have are automatically documented. # # - Class methods are not shown by default, so use [`show_doc(class.method`)](http://nbdev.fast.ai/showdoc/#show_doc) to do this. # + [markdown] id="hLsxu-3t6NPj" # **6.** **Jekyll Metadata** # # The cell `#default_exp` located at the top of a cell can also be used for # [YAML](https://yaml.org/), a data-serialization standard. This is used as [Front Matter](https://assemble.io/docs/YAML-front-matter.html) that can be [used with Jekyll](https://jekyllrb.com/docs/datafiles/) to render templates. # > In the markdown cell with the title, you can add the summary as a block quote by putting an empty block quote for an empty summary. Next, add a list with any additional metadata you would like to add **[`get_metadata`](https://nbdev.fast.ai/export2html/#get_metadata)**. # + [markdown] id="ioIJ2z-f6NPj" # **7. Jekyll Images** # # The 'images' folder can be used to conjure up pictures, with an example provided below: # # `![](notebookfolder/images/company_logo.png)` # # These images will be added to and displayed in the docs. # + [markdown] id="cw3O7C486NPk" # ![](./images/ai_of_me.jpg) # + [markdown] id="Aq13T9KL6NPk" # **8. Jekyll [Notes](http://nbdev.fast.ai/export2html/#add_jekyll_notes)** # # The following section was written using the following markups: # # ```> Note:```, ```> Warning:```, ```> Tip:``` and ```> Important: text goes here``` # # > Note: This is a note. # # > Warning: This is a warning. # # > Tip: This is a tip. # # > Important: This is an Important doc link to [`add_jekyll_notes`](/export2html#add_jekyll_notes), which should also work fine. # # + [markdown] id="o3_hgZ6z6NPk" # **9. Jekyll [Search](http://nbdev.fast.ai/search/)** # # This is functionality not covered here, but may be added in the future. # # **10. Jekyll Custom Sidebar** # # This is used for your HTML documentation, which can be configured using meta-markup [JSON](https://www.json.org/json-en.html). 
# # > The default sidebar lists all html pages with their respective title, except the index that is named "Overview". To build a custom sidebar, set the flag custom_sidebar in your settings.ini to True then change the sidebar.json file in the doc_folder to your liking. Otherwise, the sidebar is updated at each doc build. # + [markdown] id="uQ-ptbNf6NPk" # ## Getting Started # + [markdown] id="SoP0-e306NPk" # To get started, install the Nbdev libarary. # + id="Hcb1E01E6NPl" # %%capture # ! pip install nbdev # + [markdown] id="Ymyh3zHM6NPl" # ### Connect to Folder # + [markdown] id="2kVNkxmr6NPl" # Give Colabs access to your Google Drive # + colab={"base_uri": "https://localhost:8080/"} id="0D-YmdW46NPl" executionInfo={"status": "ok", "timestamp": 1626577684648, "user_tz": 240, "elapsed": 17566, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="0fff6cc3-637e-46f2-dccd-ed03e5350194" #hide_output from google.colab import drive drive.mount('/content/drive') # + [markdown] id="TVpUAXI96NPm" # Navigate to the Google Drive directory where you store your projects folder. # + [markdown] id="8Io-ox8J6NPm" # ## Creating a New Project # + [markdown] id="IYmYBGPe6NPn" # The Nbdev library will only work with projects that have the proper file structure and files. # # Once configured, you may simply write in a notebook (placing Nbdev 'FLAGS' where needed) and publish using an Nbdev terminal commands. The Nbdev command will process the notebook and each of its cells according to rules denoted by the 'FLAGS'. # + [markdown] id="H5E8Cd9K6NPn" # ### Getting a template # + [markdown] id="pMlvyDul6NPn" # You can grab a free project template containing all the needed files and proper structure. With this, you can get started in one of two methods shown below. # + [markdown] id="p05jrGxW6NPn" # #### Method 1) Grabbing a template using GitHub and the browser. # + [markdown] id="tduhG8Dv6NPn" # Now, create a new [repository](https://github.com/fastai/nbdev_template/generate). # + id="qUUCEgES6NPn" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1626052930050, "user_tz": 240, "elapsed": 1296, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="e18904e0-422f-4769-b6c5-328da33da2c1" #hide_output # ! 
git clone https://github.com/fastai/nbdev_template.git # + id="O-xpElfeQ2A4" # ls # + id="RSixWDN86NPn" #hide # !mv nbdev_template VitalSigns # !mv VitalSigns ../z # + id="S5H8wvdtTkgb" # !cp RBIntel.ipynb ../VitalSigns # + id="cVcLTtSC6NPo" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1626053058325, "user_tz": 240, "elapsed": 122, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="91fcf0ef-fefb-457d-d349-ba6a29230fdf" #hide # cd ../VitalSigns # + id="CDngb2aC6NPo" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1626053109721, "user_tz": 240, "elapsed": 282, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="7fed2075-ae9f-4239-dc6d-d940831130cf" #hide # !ls # + [markdown] id="_-SZ-Z8S6NPo" # #### Method 2) Grabbing a template using the Nbdev library and the terminal. # + colab={"base_uri": "https://localhost:8080/"} id="pnBlZPLiSfRh" executionInfo={"status": "ok", "timestamp": 1626052608323, "user_tz": 240, "elapsed": 442, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="3fda7a0b-2b90-4888-e762-2b1c9f6f1c10" # !git init # + [markdown] id="B2wFHzXe6NPo" # An Nbdev terminal command can be used to **create a template** project directory. # # Use -h to view positional args. # + id="8a13byOB6NPo" #hide # cd ../ # + id="1L1v-MOD6NPp" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1626052347577, "user_tz": 240, "elapsed": 1875, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="b666e518-2264-4aa5-fa8c-5e7d3e9fe134" #collapse_output # ! nbdev_new -h # + [markdown] id="A9yZqkgV6NPp" # We will need to register your **GitHub credentials** appropraitely configured for it to work. # + id="JBqcnZU26NPp" #collapse_input open # ! git config --global user.email "" # ! git config --global user.name "bnia" # + [markdown] id="1Fl0uNLn6NPp" # Now, you can just run the command to **create a new project/directory** by a name of your choosing. # + id="zUc5nHP86NPp" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1626052718872, "user_tz": 240, "elapsed": 1848, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="05f47a77-f006-4fcd-9d60-931fffe32264" #hide_output # ! nbdev_new 'test123' # + [markdown] id="TLHtNuVn6NPq" # If you enter it, you will see it comes all set up. 
# + id="zBPrd6QA6NPq" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1626052471366, "user_tz": 240, "elapsed": 294, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="b85c5980-d233-43bc-b7f0-b080409f7ba7" #hide # !cd VitalSigns # + [markdown] id="DQJOWvFV6NPq" # As you can see, it comes all set up. # + id="ILKLTuQT6NPq" #hide # ls # + [markdown] id="ujyaGrZh6NPq" # ### Configuring the Template # + [markdown] id="Ubi5fvB76NPq" # **There are some one-offs to perform on the template.** # + [markdown] id="NOSbkzBx6NPq" # #### GitHooks # + [markdown] id="vj3jI-K66NPr" # Run `! nbdev_install_git_hooks` # # > To set up GitHooks that will remove metadata from your notebooks when you commit, greatly reducing the chance you have a conflict. # > # > But if you do get a conflict later, simply run nbdev_fix_merge filename.ipynb. This will replace any conflicts in cell outputs with your version. If there are conflicts in input cells, then both cells will be included in the merged file, along with standard conflict markers (e.g. =====). Then, you can open the notebook in Jupyter and choose which version to keep. # ~ nbdev [tutorial](http://nbdev.fast.ai/cli/#nbdev_install_git_hooks) # + id="hQ-RbrS46NPr" # %%capture # remove metadata from your notebooks when you commit reducing the chance you have a conflict. # ! nbdev_install_git_hooks # if you do get a conflict, simply run nbdev_fix_merge filename.ipynb # + [markdown] id="H0tJVyTM6NPr" # #### Settings.ini # + [markdown] id="a_Xo9XqL6NPr" # Everything is included in the template for the library to be packaged. # # Your settings.ini is where all parts of nbdev look for any required configuration information. # # Complete the Python form below to update the `settings.ini` file. # + [markdown] id="RokQ4SJ16NPr" # Really, only the '**folder_name**' and '**github_username**' fields need to be edited for this to work. # # The form values entered here will be inserted into the settings.ini doc. # + id="-VIn2weR6NPs" cellView="form" #collapse_input #@title Example form fields #@markdown Forms support many types of fields. # Name of the project folder_name = "VitalSigns" #@param {type:"string"} company_name = "BNIA-JFI" #@param {type:"string"} # GitHub Username github_username = 'bniajfi' #@param {type:"string"} description = "Python Scripts for BNIA-JFI's Vital Signs Data" #@param {type:"string"} keywords = "Community Data" #@param {type:"string"} # Who are you? author = "" #@param {type:"string"} author_email = '' #@param {type:"string"} # Where are your notebooks? They are currently at the basepath. path_to_locate_notebooks = "." #@param {type:"string"} # Where should your documentation be put? {type:"string"} path_to_place_documentation = "docs" #@param {type:"string"} #@markdown --- user=github_username nbs_path=path_to_locate_notebooks doc_path=path_to_place_documentation lib_name=folder_name # Now let's rewrite the settings.ini file. 
innertext = """ [DEFAULT] # All sections below are required unless otherwise specified host = github lib_name = """+lib_name+""" company_name = """+company_name+""" user = """+user+""" description = """+description+""" keywords = """+keywords+""" author = """+author+""" author_email = """+author_email+""" copyright = MIT branch = master version = 0.0.1 min_python = 3.6 audience = Developers language = English # Set to True if you want to create a more fancy sidebar.json than the default custom_sidebar = False # Add licenses and see current list in `setup.py` license = apache2 # From 1-7: Planning Pre-Alpha Alpha Beta Production Mature Inactive status = 2 # Optional. Same format as setuptools requirements # requirements = # Optional. Same format as setuptools console_scripts # console_scripts = # Optional. Same format as setuptools dependency-links # dep_links = ### # You probably won't need to change anything under here, # unless you have some special requirements ### # Change to, e.g. "nbs", to put your notebooks in nbs dir instead of repo root nbs_path = """+nbs_path+""" doc_path = """+doc_path+""" # Whether to look for library notebooks recursively in the `nbs_path` dir recursive = False # Anything shown as '%(...)s' is substituted with that setting automatically doc_host = https://%(user)s.github.io #For Enterprise Git pages use: #doc_host = https://pages.github.%(company_name)s.com. doc_baseurl = /%(lib_name)s/ # For Enterprise Github pages docs use: # doc_baseurl = /%(repo_name)s/%(lib_name)s/ git_url = https://github.com/%(user)s/%(lib_name)s/tree/%(branch)s/ # For Enterprise Github use: #git_url = https://github.%(company_name)s.com/%(repo_name)s/%(lib_name)s/tree/%(branch)s/ lib_path = %(lib_name)s title = %(lib_name)s #Optional advanced parameters #Monospace docstings: adds
    #<pre> tags around the doc strings, preserving newlines/indentation.
    #monospace_docstrings = False
    #Test flags: introduce here the test flags you want to use separated by |
    #tst_flags = 
    #Custom sidebar: customize sidebar.json yourself for advanced sidebars (False/True)
    #custom_sidebar = 
    #Cell spacing: if you want cell blocks in code separated by more than one new line
    #cell_spacing = 
    #Custom jekyll styles: if you want more jekyll styles than tip/important/warning, set them here
    #jekyll_styles = note,warning,tip,important
    """
    
    # Write mode overwrites any existing settings.ini
    file1 = open("settings.ini", "w")  # write mode 
    file1.write(innertext) 
    file1.close() 
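
    # + [markdown]
    # Optional sanity check, not part of the template: because `settings.ini` is plain INI, the standard-library `configparser` can read it back to confirm the form values above were written as intended. The `%(...)s` placeholders resolve automatically since every referenced key lives in the `[DEFAULT]` section.

    # +
    import configparser

    config = configparser.ConfigParser()
    config.read("settings.ini")
    print(config["DEFAULT"]["lib_name"], config["DEFAULT"]["user"], config["DEFAULT"]["git_url"])
    # -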
    
    # + [markdown] id="L6aRolxX6NPs"
    # #### Edit the Notebooks
    
    # + [markdown] id="UC0UTfDj6NPt"
    # To start, notice how there are two `.ipynb`'s. 
    #
    # These are two notebooks used for the template.
    #
    # 1. `index.ipynb` - When index.ipynb gets converted to 'index.html' it becomes the documentation's homepage, since browsers understand 'index.html' to be a website's homepage. When published, this notebook is also turned into the README.md shown on the GitHub repository page and on PyPI. Once published to PyPI, the index notebook (i.e. the homepage) can be used to show others how to install and use the library!
    #
    # 2. `00_core.ipynb` - This is the 'core' component of the Python library. You don't have to keep the 'core' part. The two leading digits in the filename drive the website navigation, so number your notebooks in the order you want them displayed; index.ipynb is the only notebook this logic does not apply to. Inside the notebook you will see a template that is nearly ready for deployment: if the lib_name were replaced and a function were declared with an `#export` flag at the top of its cell, it would be ready for deployment and publishing to PyPI (a minimal example of such a cell is sketched below)!
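
    # + [markdown]
    # As a sketch (hypothetical function, not part of the template), an exportable cell in `00_core.ipynb` looks like the one below. The `#export` flag on the first line tells `nbdev_build_lib` to copy the cell into the generated module; unflagged cells stay in the notebook as documentation and tests.

    # +
    #export
    def say_hello(to):
        "Return a friendly greeting; example function only."
        return f"Hello, {to}!"
    # -

    # + [markdown]
    # An unflagged cell right after it can hold a quick test that also renders in the docs:

    assert say_hello("world") == "Hello, world!"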
    
    # + id="oOvHrlux6NPt"
    # ! mkdir ./notebooks
    # ! mv ./00_core.ipynb ./notebooks/
    # ! mv index.ipynb ./notebooks/
    
    # + id="0NCPt14A6NPt" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1626056111408, "user_tz": 240, "elapsed": 157, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="9285814d-e377-4167-c82c-a810c510366e"
    #hide
    # !ls
    
    # + [markdown] id="44Sk9AqY6NPu"
    # #### Create the library and documentation.
    
    # + [markdown] id="-Am7p1Be6NPu"
    # Install the libraries used in your exported module here, or else the next step will not work.
    #
    # For now, since we are working with the start template, this should not be a problem.
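
    # + [markdown]
    # A placeholder example (hypothetical dependencies; the starter template itself needs none of these):

    # +
    # %%capture
    # ! pip install pandas
    # ! pip install requests
    # -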
    
    # + [markdown] id="oQAjPb1q6NPu"
    # Once ready, run `! nbdev_build_lib`
    #
    # - You will now find that you have a new directory, with the name of whatever you set lib_name to.
    
    # + id="jVhg7t836NPu" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1626056349944, "user_tz": 240, "elapsed": 282, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="3d93d9e1-96d2-4992-d53c-4c0f2a049173"
    #hide
    # !ls
    
    # + id="x532Somv6NPv" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1626057062737, "user_tz": 240, "elapsed": 1762, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="0999bebb-2115-439e-9c27-2306ad557173"
    #hide_output
    # ! nbdev_build_lib --fname RBIntel.ipynb
    
    # + id="JLjS5XIF6NPv" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1626056184803, "user_tz": 240, "elapsed": 161, "user": {"displayName": "Baltimore Neighborhood Indicators Alliance", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg-q3A7kTIHf7wESw989OAOaJeUYKXGEdoA4M3NTA=s64", "userId": "16379023391965073054"}} outputId="937be21d-0b85-405e-d576-c142773ce44f"
    #hide
    # !ls
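
    # + [markdown]
    # The documentation can be built the same way (a sketch; the generated pages land in the `doc_path` folder set in settings.ini, `docs` here):

    # +
    #hide_output
    # ! nbdev_build_docs
    # -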
    
    # + [markdown] id="sW7PJ1hB6NPv"
    # **Congratulations**! You have now successfully created a Python library. 
    #
    # The library itself is still empty. In order to export code, be sure to use the `#export` Flag on cells.
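
    # + [markdown]
    # Once a few cells carry the `#export` flag, rebuilding and importing is a quick way to confirm the code landed in the package. The module and function names below are hypothetical and depend on your notebooks and `lib_name`:

    # +
    # ! nbdev_build_lib
    # from VitalSigns.core import say_hello
    # -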
    
    # + [markdown] id="AozShiPI6NPw"
    # Be sure to explore the Flags and Commands!
    
    # + id="qhZun3o36NPw"
    #hide
    # cd ../dataguide
    
    # + id="fot8eNr36NPx"
    #hide
    # !ls
    
    # + [markdown] id="LBBIYs_f6NPx"
    # #### Configuring Git-Github
    
    # + id="SapJWLRS6NPx"
    #hide_output
    # ! git init
    
    # + [markdown] id="eOu-lCiF6NPx"
    # In order to [add an existing project](https://docs.github.com/en/github/importing-your-projects-to-github/adding-an-existing-project-to-github-using-the-command-line) to GitHub:
    # - [Create a new repository](https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-new-repository) whose name is the same as the new library's.
    # - Ensure the repository is completely empty so we can make a clean push.
    
    # + [markdown] id="sPL3YZuy6NPy"
    # Once you have created the GitHub repository through the website, you can connect your local Git repository to it using this command:
    
    # + id="FzjhgFY_6NPy"
    # Make replacements wherever needed and run this! Be sure to remove your password/token once it has run.
    
    # # ! git remote add ORIGIN https://:@github.com//.git
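    # + [markdown]
    # For reference, the commented-out command above follows the pattern `https://<username>:<token>@github.com/<username>/<repository>.git`; the angle-bracket names are placeholders, not real values, and the token should be removed from the notebook once the remote has been added.
    
    # +
    # Illustrative only: the same command with placeholder values spelled out.
    # # ! git remote add ORIGIN https://<username>:<token>@github.com/<username>/<repository>.git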
    
    # + [markdown] id="okwTG1CX6NPy"
    # You will now be able to commit this local repository to GitHub.
    
    # + [markdown] id="ISyy60IW6NPy"
    # #### Create a PyPI account
    
    # + [markdown] id="O9STkkfT6NPy"
    # In order to publish to [PyPI](https://pypi.org/), you must have an account. Click "Register" in the top right corner of their website to create an account.
    #
    # Your username should be the `user` value from your settings.ini file.
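    # + [markdown]
    # A quick way to double-check the configured values, assuming you are in the repository root where `settings.ini` lives:
    
    # +
    # # ! grep -E "^(user|lib_name) *=" settings.ini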
    
    # + [markdown] id="LhuhaJ0X6NPy"
    # #### Publishing!
    #
    # Refer to the Build Process, PyPI, and Git sections below.
    
    # + [markdown] id="ViYTkPqF6NPz"
    # ## Pipeline 1/3: NBDEV
    
    # + [markdown] id="NB71vjrz6NPz"
    # (Run this once the modules, docs, and README have been created using nbdev and are ready to be published to the web.)
    
    # + [markdown] id="CCRI6q_f6NPz"
    # You need to install, in this notebook, the libraries that your other notebooks use, so that those notebooks can run.
    
    # + id="urL5e1X96NPz"
    # %%capture
    # 1_ACS EXPLORE AND DOWNLOAD
    # !pip install -U -q ipywidgets
    # !pip install geopandas
    # !jupyter nbextension enable --py widgetsnbextension
    
    # + id="oliAPLm66NPz"
    # %%capture
    # 02_Merge_Data.ipynb
    # !pip install geopandas
    # !pip install dataplay
    # !pip install dexplot
    
    # + [markdown] id="0TUVnnFi6NP0"
    # Enter the project directory if it exists. Otherwise, go to the folder where you want it to live.
    
    # + colab={"base_uri": "https://localhost:8080/"} id="O4cO6k7k6NP0" executionInfo={"elapsed": 6867, "status": "ok", "timestamp": 1615316865537, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiQg4PIuuYRyIyCWtHJXvMzkkkBSQpEzUYL20Bjog=s64", "userId": "06248540216665543849"}, "user_tz": 300} outputId="71c0ec58-4b67-4728-af36-4f38931bbe05"
    # cd ../
    
    # + id="6KUK9Fhg6NP0"
    #hide
    # !ls
    
    # + colab={"base_uri": "https://localhost:8080/"} id="ubGf30sk6NP0" executionInfo={"elapsed": 799, "status": "ok", "timestamp": 1615316878185, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiQg4PIuuYRyIyCWtHJXvMzkkkBSQpEzUYL20Bjog=s64", "userId": "06248540216665543849"}, "user_tz": 300} outputId="23e09895-4db4-40ed-8799-aa8d2e93eb10"
    #hide
    # %cd dataplay
    
    # + id="mIWYBv_x6NP0"
    #hide
    # %cd dataguide
    
    # + id="uvLytCRW4tAp"
    #hide
    # %cd notebooks
    
    # + id="SvaYvt60edI2"
    #hide
    # %cd ../
    
    # + colab={"base_uri": "https://localhost:8080/"} id="k_TvGvl96NP1" executionInfo={"status": "ok", "timestamp": 1621903475267, "user_tz": 240, "elapsed": 225, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiQg4PIuuYRyIyCWtHJXvMzkkkBSQpEzUYL20Bjog=s64", "userId": "06248540216665543849"}} outputId="f1e4ae38-ced0-4d10-d42e-e90122482ff5"
    #hide 
    # !ls
    
    # + [markdown] id="MDY5K2pz6NP1"
    # As long as you are inside the folder where you are developing the library, both of these commands will work:
    
    # + colab={"base_uri": "https://localhost:8080/"} id="X-BybYnu6NP1" executionInfo={"status": "ok", "timestamp": 1621903515139, "user_tz": 240, "elapsed": 2644, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiQg4PIuuYRyIyCWtHJXvMzkkkBSQpEzUYL20Bjog=s64", "userId": "06248540216665543849"}} outputId="71a94c05-c2ca-4ad8-dbbf-5b2c66530237"
    #hide_output
    # First: build the .py files from the .ipynb notebooks.
    # !nbdev_build_lib
    
    # + colab={"base_uri": "https://localhost:8080/"} id="Nocef4ytzm7Q" executionInfo={"status": "ok", "timestamp": 1621903520357, "user_tz": 240, "elapsed": 2816, "user": {"displayName": "", "photoUrl": "https://", "userId": "06248540216665543849"}} outputId="02600ddf-e63a-4486-97a0-234f04d3b7c2"
    #hide_output
    # Second: push .py changes back to their original .ipynb notebooks.
    # !nbdev_update_lib 
    
    # + id="cLHc7IWi6NP1"
    #hide_output
    # nbdev_build_docs builds the documentation from the notebooks
    # !nbdev_build_docs --force_all True --mk_readme True 
    
    # + id="RFjhl_oDeS38"
    # Sometimes: update .ipynb import statements if the .py module or class names change.
    # # !relimport2name
    
    # + id="8uy5VWDKgWDS"
    # This will reload imported modules whenever the .py files change,
    # e.g. via nbdev_build_lib or nbdev_update_lib.
    # # %load_ext autoreload
    # # %autoreload 2
    
    # + id="Uz2VGKP_6NP1"
    #hide
    # %cd notebooks
    
    # + id="SKHTFTvi6NP1"
    #hide_output
    # ! nbdev_clean_nbs
    # ! nbdev_fix_merge 00_github.ipynb
    # ! nbdev_fix_merge 01_colabs.ipynb
    # ! nbdev_fix_merge 01_5_Explore_and_Download.ipynb
    # ! nbdev_fix_merge 02_scooterExploration.ipynb
    # ! nbdev_fix_merge 03_nbdev.ipynb
    # ! nbdev_fix_merge index.ipynb
    # ! nbdev_clean_nbs
    # ! find . -name "*.bak" -type f -delete
    
    # + id="gczXodZW6NP2"
    #hide_output
    # nbdev_nb2md(fname:"A notebook file name to convert", dest:"The destination folder"='.', img_path:"Folder to export images to"='', jekyll:"To use jekyll metadata for your markdown file or not"=False)
    # ! nbdev_nb2md 00_github.ipynb --dest "../markdown" 
    # ! nbdev_nb2md 01_colabs.ipynb --dest "../markdown" 
    # ! nbdev_nb2md 02_scooterExploration.ipynb --dest "../markdown" 
    # ! nbdev_nb2md 03_nbdev.ipynb --dest "../markdown" 
    # ! nbdev_nb2md index.ipynb --dest "../markdown" 
    # ! find . -type d -name "*files" -exec rm -rf {} \;
    
    # + id="NIs_Gj5z6NP2"
    #hide
    # %cd ../
    
    # + colab={"base_uri": "https://localhost:8080/"} id="h1vTn52T6NP2" executionInfo={"elapsed": 630, "status": "ok", "timestamp": 1615316942773, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiQg4PIuuYRyIyCWtHJXvMzkkkBSQpEzUYL20Bjog=s64", "userId": "06248540216665543849"}, "user_tz": 300} outputId="7476278d-afef-49a8-f59b-8c067f07dbc9"
    #hide
    # !ls
    
    # + [markdown] id="KxLxHdK_6NP3"
    # ## Pipeline 2/3: GIT
    
    # + [markdown] id="-NPSj_Fs6NP3"
    # Before you push, run `nbdev_clean_nbs` to ensure that the push will work; any merge conflicts, in turn, are resolved by `nbdev_fix_merge`. A minimal pre-push check is sketched below.
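    # + [markdown]
    # A minimal pre-push check, assuming you are in the repository root:
    
    # +
    #hide_output
    # ! nbdev_clean_nbs
    # ! git status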
    
    # + id="XbU5tf_i6NP3"
    #hide
    # This code was meant to do what the code above it does. Kept for posterity.
    # fix_conflicts('notebooks/index.ipynb') # This is run as a method within nbdev_fix_merge
    # from nbdev.export import *
    # from nbdev.merge import *
    # tst_nb = read_nb('notebooks/index.ipynb')
    # print(tst_nb)
    
    # + [markdown] id="-e06fAdq6NP3"
    # Run this once the modules, docs, and README have been created using nbdev but not yet published.
    
    # + id="5SE6ljRx6NP3"
    # # cd datalab
    
    # + id="TSk-KWkR6NP3"
    #hide_output
    # ! git add *
    
    # + id="Nfc9qmr16NP4"
    # ls
    
    # + id="PBt6nSUl6NP4"
    # !git config --global user.name "bnia"
    
    # + id="3gmLwcF_6NP4"
    # !git config --global user.email ""
    
    # + id="xy7Hic9J6NP4"
    # ! git commit -m "Collapse Experiments P2"
    
    # + id="cAJshuLZ6NP4"
    # git push -f origin master
    
    # + id="5fmcIl1R6NP4"
    #hide_output
    # ! git push -u ORIGIN main
    
    # + [markdown] id="u2hwdR1K6NP5"
    # If you get the error "fatal: could not read Username for 'https://github.com': No such device or address", you will need to re-establish your Git-GitHub connection.
    #
    # Start by running `! git remote rm ORIGIN`, then re-add the remote with `! git remote add ORIGIN https://:@github.com//.git`.
    #
    # Be sure to make the replacements wherever needed before running it, and remove the password/token once it has run. A commented-out version of these commands follows below.
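    # + [markdown]
    # For convenience, the same two commands as a (commented-out) cell; fill in the remote URL as before:
    
    # +
    # # ! git remote rm ORIGIN
    # # ! git remote add ORIGIN https://:@github.com//.git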
    
    # + [markdown] id="apflD33N6NP5"
    # If your GitHub Pages site does not show up, consult this [guide](https://docs.github.com/en/github/working-with-github-pages/troubleshooting-jekyll-build-errors-for-github-pages-sites#troubleshooting-build-errors). Typically, the problem is a Jekyll build error caused by the Markdown.
    
    # + [markdown] id="Ag01NyQT6NP5"
    # ## Pipeline 3/3: PyPI
    
    # + [markdown] id="0SZBVgPp6NP5"
    # Run this once the modules, docs, and README have been created using nbdev.
    
    # + [markdown] id="ueSwAMCy6NP5"
    # Be sure you have a PyPI account that has the same username as your GitHub account before continuing.
    
    # + [markdown] id="OV4HANKl6NP5"
    # You could save your credentials into a .pypirc file, but this is not recommended.
    
    # + id="DrW5Cr996NP5"
    # # ! echo "[pypi]" > .pypirc
    # # ! echo "username = 'username'" >> .pypirc
    # # ! echo "password = 'password'" >> .pypirc
    
    # + [markdown] id="IaAfZylz6NP6"
    # To publish to PyPI, install twine.
    
    # + id="-bQ-RUAS6NP6"
    # %%capture
    # ! pip install twine
    
    # + [markdown] id="7sZM8_KL6NP6"
    # Be sure to run one final nbdev command, `! nbdev_bump_version`, prior to publishing; it increments the version number in `settings.ini`. Otherwise, PyPI may reject the upload because that version already exists.
    
    # + colab={"base_uri": "https://localhost:8080/"} id="CzKyaZfA6NP6" executionInfo={"elapsed": 7750, "status": "ok", "timestamp": 1615316954094, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GiQg4PIuuYRyIyCWtHJXvMzkkkBSQpEzUYL20Bjog=s64", "userId": "06248540216665543849"}, "user_tz": 300} outputId="e209d369-b921-4a88-af7f-9b07e991c4e4"
    # !nbdev_bump_version
    
    # + id="hvNl5KnkZvCR"
    
    
    # + [markdown] id="wJlTO0I_6NP6"
    # Other than what has been discussed above, nbdev has everything else set up! Simply run `make pypi` and enter your credentials when prompted at the bottom of the terminal output. Your password is not echoed back, so it is safe to post this notebook online without clearing the output.
    
    # + id="3dROenjM6NP6"
    #hide_output
    # ! make pypi
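    # + [markdown]
    # If `make pypi` is not available in your environment, a roughly equivalent manual sequence (assuming the standard `setup.py` that nbdev generates) is to build the distributions and upload them with twine:
    
    # +
    # # ! python setup.py sdist bdist_wheel
    # # ! twine upload dist/*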
    
    # + id="kKbv2PE-6NP6"
    # ls
    
    # + [markdown] id="KW8NPUi36NP7"
    # ## Misc Tests
    
    # + [markdown] id="QDDNjmnvkCnA"
    # These things don't work... yet?
    
    # + [markdown] id="BM6W2NUt6NP7"
    # ![](./images/ai_of_me.jpg)
    
    # + id="t_FO_UAn6NP7"
    from IPython.display import display
    
    # + id="l_cnh3f16NP7"
    #hide
    var = "hide"
    display(var)
    
    # + id="zXjhC2eq6NP7"
    #hide_input
    var = "hide_input"
    display(var)
    
    # + id="h4QgL1V96NP8"
    #hide_output
    var = "hide_output"
    print('The input of this cell is visible as usual.\nWhile the OUTPUT of this cell is collapsed by default, you can expand it!')
    
    # + id="9XsshMav6NP8"
    #collapse_output
    var = "collapse_output"
    display(var)
    
    # + id="89lRyzdZ6NP8"
    #collapse_show
    var = "collapse_show"
    display(var)
    
    # + id="0FnGxS1Y6NP8"
    #collapse_hide
    var = "collapse_hide"
    display(var)
    
    # + id="LcO4NN8K6NP8"
    #collapse-hide 
    print('The input of this cell is visible as usual.\nWhile the OUTPUT of this cell is collapsed by default, you can expand it!')
    
    # + id="t9UNB2bp6NP9"
    #collapse_input
    var = "collapse_input"
    display(var)
    
    # + id="3OohcTQF6NP9"
    #collapse_input open
    var = "collapse_input open"
    display(var)
    
    # + id="9y1oqm-16NP9"
    #collapse_output
    var = "collapse_output"
    display(var)
    
    # + id="-bDONWna6NP9"
    #collapse_output open
    var = "collapse_output open"
    display(var)
    
    # + id="5HnmOB2_6NP9"
    #collapse_output
    print('The input of this cell is visible as usual.\nWhile the OUTPUT of this cell is collapsed by default, you can expand it!')
    
    # + [markdown] id="Pv37Ejgu6NP-"
    # 
    # summary and details together # details 1 #
    # + [markdown] id="CkPkd2Qv6NP-" #
    # summary and details split # + [markdown] id="AdRWnaUY6NP-" # summary and details split #
    # + [markdown] id="UaZpr_wc6NP-" #
    # + [markdown] id="V8UfRMOu6NP-" # #### Entirely Split # + [markdown] id="7gY-gM1e6NP-" # Entirely Split # + [markdown] id="wlWK7f0v6NP_" #
    # + [markdown] id="AIVIJs6u6NP_" #
    # # md within html # **md within html** #
    # + id="8LPL6bek6NP_" #collapse_output print('The input of this cell is visible as usual.\nWhile the OUTPUT of this cell is collapsed by default, you can expand it!') # + id="0ip8Fzxu6NP_" language="html" #
    # code html # code html #
    # + id="BLPc-UaE6NP_" #collapse_output # %%html
    code html collapse output code html collapse output
    # + id="iZaiDsiI6NQA" language="html" #
    # code html # code html no collapse #
    # + id="x0M0572G6NQA" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Introduction import DSGRN import Berry_2019_figures_results as Berry from min_interval_posets import posets, poset_distance from copy import deepcopy from numpy import linspace from IPython import display import matplotlib.pyplot as plt # from matplotlib import cm from importlib import reload from matplotlib import rc rc('text', usetex=True) fontsize=20 rc('axes', labelsize=fontsize) # fontsize of the x and y labels rc('xtick', labelsize=fontsize) # fontsize of the tick labels rc('ytick', labelsize=fontsize) # fontsize of the tick labels rc('legend', fontsize=12) # legend fontsize # %matplotlib inline wt1_file = "WT1_WT2_microarray_interpolated/wt1_microarray_coregenes_lifepoints_interpol_trim.csv" wt2_file = "WT1_WT2_microarray_interpolated/wt2_microarray_coregenes_lifepoints_interpol.csv" # graph data def make_fig(fname,savename,start_time=None,end_time=None,names=None): curves = Berry.row(fname) subset_curves = deepcopy(curves) if names is not None: for name in curves: if name not in names: subset_curves.pop(name) for name,curve in sorted(subset_curves.items()): n = curve.normalize() if start_time is not None and end_time is not None: n = curve.trim(start_time,end_time) times,vals = zip(*n.items()) plt.plot(times,vals,label=r"${}$".format(name)) lgd = plt.legend(loc='upper left', bbox_to_anchor=(1, 1)) plt.ylabel(r"\textbf{normalized expression}") plt.xlabel(r"\textbf{time points}") plt.savefig(savename,bbox_extra_artists=(lgd,), bbox_inches='tight') plt.savefig(savename, bbox_inches='tight') display.display(plt.show()) return [name for name in sorted(curves)] # + start_time = 26 end_time = 170 names = ["CDC20"] epsilons = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05] # - _ = make_fig(wt1_file,"time_series_wt1_{}_trimmed.pdf".format(names[0]),start_time,end_time,names) # _ = make_fig(wt2_file,"time_series_rep2_trimmed.pdf",start_time,end_time,names) intervals = Berry.getintervals(wt1_file,"row",epsilons,names,start_time,end_time) for intv in intervals[names[0]]: print(intv) # + def makeboxes(intervals,start_time,end_time,savename="intervals_CDC20.pdf"): marker = "s" for k,i in enumerate(intervals): eps = i[0] ints = i[1] for j in ints: xl = (int(j[0][0]),int(j[0][1])+1) x = list(range(*xl)) if j[1][1] == "min": color = "k" else: color = "r" if len(x) == 1: alpha = 0.25 else: alpha = 0.1 plt.plot(x,[eps]*len(x),linestyle=None,marker=marker,color=color,alpha=alpha) # plt.xlim(start_time,end_time) plt.yticks(epsilons) plt.ylabel(r"$\mbox{{\Huge$\epsilon$}}$") plt.xlabel(r"\textbf{time points}") plt.savefig(savename, bbox_inches='tight') display.display(plt.show()) # - makeboxes(intervals[names[0]],start_time,end_time) # + names = ["CDC20","NDD1"] epsilons = [0.0,0.03,0.05] _ = make_fig(wt1_file,"time_series_wt1_{}_{}_trimmed.pdf".format(names[0],names[1]),start_time,end_time,names) posets1 = Berry.getposets(wt1_file,"row",epsilons,names,start_time,end_time) # - network = DSGRN.Network("\n".join(["{} : {}".format(name,name) for name in names])) network.specification() def make_posets(p,network): events = list(p[1][0]) event_ordering = list(p[1][1]) poe = DSGRN.PosetOfExtrema(network, events, event_ordering) return poe,p[0] for p in posets1: poe, eps = make_posets(p,network) print("Epsilon = {}".format(eps)) display.display(DSGRN.DrawGraph(poe)) with 
open("poset_intro_{}.dot".format(int(100*eps)),"w") as f: f.write(poe.graphviz()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + [markdown] nbgrader={"grade": false, "grade_id": "q7_prompt", "locked": true, "solution": false} # # Q7 # # More written answers! # + [markdown] nbgrader={"grade": false, "grade_id": "q7a_prompt", "locked": true, "solution": false} # ### A # # How are generators and lists similar? How are they different? # + [markdown] nbgrader={"grade": true, "grade_id": "q7a", "locked": false, "points": 3, "solution": true} # # + [markdown] nbgrader={"grade": false, "grade_id": "q7b_prompt", "locked": true, "solution": false} # --- # ### B # # Let's say I'm inside a `for` loop, iterating through a large generator object (e.g. `range`). I come across a sequence of values I want to skip, moving several iterations ahead, but for whatever reason I don't know about `continue`. I decide to simply increment my loop variable by 5 inside the loop body after checking if it meets a certain condition. Will this work? Why or why not? # + [markdown] nbgrader={"grade": true, "grade_id": "q7b", "locked": false, "points": 3, "solution": true} # # + [markdown] nbgrader={"grade": false, "grade_id": "q7c_prompt", "locked": true, "solution": false} # --- # ### C # # If I want to check if two lists contain identical elements *irrespective of ordering*, how could I do that? # + [markdown] nbgrader={"grade": true, "grade_id": "q7c", "locked": false, "points": 3, "solution": true} # # + [markdown] nbgrader={"grade": false, "grade_id": "q7d_prompt", "locked": true, "solution": false} # --- # ### D # # I have four lists, each with `N` elements. I want to loop through them all simultaneously, so I `zip()` them together. How many elements will now be in my `zip()` generator? # + [markdown] nbgrader={"grade": true, "grade_id": "q7d", "locked": false, "points": 3, "solution": true} # # + [markdown] nbgrader={"grade": false, "grade_id": "q7e_prompt", "locked": true, "solution": false} # --- # ### E # # I have a matrix, and to iterate through it I use nested `for` loops. In doing so, I check the value at each matrix element, looking for a specific match. If the match is found, I execute a `break` statement: # # ``` # matrix = [ ... ] # for i in range(len(matrix)): # for j in range(len(matrix[i])): # if matrix[i][j] == some_value: # break # ``` # # What happens? 
# + [markdown] nbgrader={"grade": true, "grade_id": "q7e", "locked": false, "points": 3, "solution": true} # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] papermill={"duration": 0.029826, "end_time": "2021-06-06T18:53:08.497277", "exception": false, "start_time": "2021-06-06T18:53:08.467451", "status": "completed"} tags=[] # # BUĞDAY BAŞAKLARININ TESPİT EDİLMESİ: GÖRÜNTÜLERİN ANALİZ EDİLEREK BAŞAK YOĞUNLUĞU, SAĞLIK VE OLGUNLUK BİLGİLERİNİN ELDE EDİLMESİ # # ----- # + [markdown] papermill={"duration": 0.028482, "end_time": "2021-06-06T18:53:08.554491", "exception": false, "start_time": "2021-06-06T18:53:08.526009", "status": "completed"} tags=[] # ### ÖZET # # # Bu çalışmamızda buğday başaklarının takibini yapılarak sağlıklı gelişim süreci sağlanması için optik görüntülerden başak yoğunluğu, olgunluk ve temel sağlık bilgilerinin çıkarımı hedeflenmektedir. Bu amaçla görüntü işleme ve makine öğrenimi metotları kullanılarak buğday fotoğraflarından temel sağlık verileri elde edilmesi amaçlanmaktadır. Bu veriler buğday üreticisinin üretim sürecindeki kayıplara ilişkin çözümlerin oluşturulması ve kaliteli ürün elde edilmesi açısından fayda sağlayacaktır. # # ### ABSTRACT # # # In this study, it is aimed to extract spike density, maturity and basic health information from optical images in order to provide a healthy development process by monitoring wheat ears. Fort his purpose it is aimed to obtain basic health data from wheat optic images by using image processing and machine learning methods. These data will be beneficial in creating solutions for the losses in the production process of the wheat producer and obtaining quality products. 
# # # **** # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 0.22996, "end_time": "2021-06-06T18:53:08.813310", "exception": false, "start_time": "2021-06-06T18:53:08.583350", "status": "completed"} tags=[] # KÜTÜPHANELERİN YÜKLENMESİ / İMPORT LİBRARY import os import datetime import numpy as np import pandas as pd from matplotlib import pyplot as plt import cv2 def printime(title): print("{0}: {1}".format(title, datetime.datetime.now())) # + papermill={"duration": 0.064661, "end_time": "2021-06-06T18:53:08.897526", "exception": false, "start_time": "2021-06-06T18:53:08.832865", "status": "completed"} tags=[] # Model Sonuçlarının Yüklenmesi / Load Model Results latesub = pd.read_csv("../input/wheat-latesub/submission.csv") print("Trained model summary output shape:",latesub.shape) imageid_count = latesub['image_id'].value_counts() print("Example dataset image count: ",len(imageid_count)) print(imageid_count) latesub.head() # + papermill={"duration": 0.04508, "end_time": "2021-06-06T18:53:08.972981", "exception": false, "start_time": "2021-06-06T18:53:08.927901", "status": "completed"} tags=[] latesub["PredictionString"] = latesub["PredictionString"].str.split(' ') latesub.head() # + papermill={"duration": 0.074076, "end_time": "2021-06-06T18:53:09.067969", "exception": false, "start_time": "2021-06-06T18:53:08.993893", "status": "completed"} tags=[] # Yüklenen Sonuçların Düzenlenmesi / Editing Uploaded Results def preparingdf(df): iou = [] tx = [] ty = [] tw = [] th = [] for i in df.iterrows(): idx0 = i[1][1][0] idx1 = i[1][1][1] idx2 = i[1][1][2] idx3 = i[1][1][3] idx4 = i[1][1][4] iou.append(idx0) tx.append(idx1) ty.append(idx2) tw.append(idx3) th.append(idx4) df[['iou']] = iou df[['tx']] = tx df[['ty']] = ty df[['tw']] = tw df[['th']] = th return df data = preparingdf(latesub) data.iou = data.iou.astype(float) data.tx = data.tx.astype(float) data.ty = data.ty.astype(float) data.tw = data.tw.astype(float) data.th = data.th.astype(float) data.head() # + papermill={"duration": 0.035042, "end_time": "2021-06-06T18:53:09.123579", "exception": false, "start_time": "2021-06-06T18:53:09.088537", "status": "completed"} tags=[] data.groupby(['image_id'])['iou'].max() # + papermill={"duration": 0.07668, "end_time": "2021-06-06T18:53:09.221084", "exception": false, "start_time": "2021-06-06T18:53:09.144404", "status": "completed"} tags=[] df = data.groupby(['image_id']).mean() df[['count']] = imageid_count df[['iou_min']] = data.groupby(['image_id'])['iou'].min() df[['iou_max']] = data.groupby(['image_id'])['iou'].max() df[['tw_min']] = data.groupby(['image_id'])['tw'].min() df[['tw_max']] = data.groupby(['image_id'])['tw'].max() df[['th_min']] = data.groupby(['image_id'])['th'].min() df[['th_max']] = data.groupby(['image_id'])['th'].max() df['FX'] = (1024*1024)//(df.tw_max*df.th_max) df['FY'] = df['count']/df.FX df.head() # + papermill={"duration": 0.038725, "end_time": "2021-06-06T18:53:09.291557", "exception": false, "start_time": "2021-06-06T18:53:09.252832", "status": "completed"} tags=[] df.columns # + papermill={"duration": 0.027599, "end_time": "2021-06-06T18:53:09.340646", "exception": false, "start_time": "2021-06-06T18:53:09.313047", "status": "completed"} tags=[] df.shape # + papermill={"duration": 0.043494, "end_time": "2021-06-06T18:53:09.405340", "exception": false, "start_time": "2021-06-06T18:53:09.361846", "status": "completed"} tags=[] columns = ['iou', 'tx', 'ty', 'tw', 'th'] df.drop(columns, 
inplace=True, axis=1) df.head() # + [markdown] papermill={"duration": 0.021507, "end_time": "2021-06-06T18:53:09.448868", "exception": false, "start_time": "2021-06-06T18:53:09.427361", "status": "completed"} tags=[] # # + papermill={"duration": 0.039603, "end_time": "2021-06-06T18:53:09.510123", "exception": false, "start_time": "2021-06-06T18:53:09.470520", "status": "completed"} tags=[] test_dir = "../input/global-wheat-detection/test/" testfile = os.listdir(test_dir) # + papermill={"duration": 0.031184, "end_time": "2021-06-06T18:53:09.563836", "exception": false, "start_time": "2021-06-06T18:53:09.532652", "status": "completed"} tags=[] len(testfile) # + papermill={"duration": 0.077734, "end_time": "2021-06-06T18:53:09.664347", "exception": false, "start_time": "2021-06-06T18:53:09.586613", "status": "completed"} tags=[] # Test görüntülerin yüklenmesi / Uploading test images img = cv2.imread(test_dir+testfile[2]) img.shape # + papermill={"duration": 0.175351, "end_time": "2021-06-06T18:53:09.874655", "exception": false, "start_time": "2021-06-06T18:53:09.699304", "status": "completed"} tags=[] # Görüntün BGR formatından LAB formatına dönüştürülmesi / Converting the image from BGR format to LAB format lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB) l = lab[:,:,0] a = lab[:,:,1] b = lab[:,:,2] # + papermill={"duration": 0.03309, "end_time": "2021-06-06T18:53:09.930482", "exception": false, "start_time": "2021-06-06T18:53:09.897392", "status": "completed"} tags=[] def summary(test_dir): testfiles = os.listdir(test_dir) for i in len(testfiles): img = cv2.imread(test_dir+testfile[i]) lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB) l = lab[:,:,0] a = lab[:,:,1] b = lab[:,:,2] clahe = cv2.createCLAHE(clipLimit=2, tileGridSize=(16,16)) clahe_img = clahe.apply(l) img_result = cv2.merge((clahe_img,a,b)) imgCLAHE = cv2.cvtColor(img_result, cv2.COLOR_LAB2RGB) # + papermill={"duration": 4.372435, "end_time": "2021-06-06T18:53:14.325453", "exception": false, "start_time": "2021-06-06T18:53:09.953018", "status": "completed"} tags=[] # LAB formatındaki görüntünün histogram çıktısı / Histogram output of image in LAB format plt.hist(l.flat, bins=100, range=(0,255)) # + papermill={"duration": 4.274734, "end_time": "2021-06-06T18:53:18.624548", "exception": false, "start_time": "2021-06-06T18:53:14.349814", "status": "completed"} tags=[] # Yukarıdaki LAB formatına CHALE yöntemi uygulanması / Applying the CHALE method to the above LAB format clahe = cv2.createCLAHE(clipLimit=2, tileGridSize=(16,16)) clahe_img = clahe.apply(l) plt.hist(clahe_img.flat, bins=100, range=(0,255)) # + papermill={"duration": 0.046805, "end_time": "2021-06-06T18:53:18.696128", "exception": false, "start_time": "2021-06-06T18:53:18.649323", "status": "completed"} tags=[] img_result = cv2.merge((clahe_img,a,b)) imgNORM = cv2.cvtColor(lab, cv2.COLOR_LAB2RGB) # CHALE uygulanmamış / not apply CHALE imgCLAHE = cv2.cvtColor(img_result, cv2.COLOR_LAB2RGB) # CHALE uygulanmış / apply CHALE # + papermill={"duration": 0.241493, "end_time": "2021-06-06T18:53:18.963432", "exception": false, "start_time": "2021-06-06T18:53:18.721939", "status": "completed"} tags=[] # Görüntünün ilk hali... / First State plt.imshow(imgNORM) # + papermill={"duration": 0.320261, "end_time": "2021-06-06T18:53:19.324466", "exception": false, "start_time": "2021-06-06T18:53:19.004205", "status": "completed"} tags=[] # CHALE uygulandıktan sonra... 
/ Last State plt.imshow(imgCLAHE) # + papermill={"duration": 0.34432, "end_time": "2021-06-06T18:53:19.701141", "exception": false, "start_time": "2021-06-06T18:53:19.356821", "status": "completed"} tags=[] im = imgCLAHE im = (im/(im.max()-im.min()))*255 im_r = im[:,:,0] im_g = im[:,:,1] im_b = im[:,:,2] im_ = ((im_r**2)+(im_g**2)+(im_b**2))**(1/2) r=im_r/im_ g=im_g/im_ b=im_b/im_ img__ = cv2.merge((r,g,b)) plt.imshow(img__) #NORMALİZASYON UYGULANMIŞ GÖRÜNTÜ # + papermill={"duration": 0.266156, "end_time": "2021-06-06T18:53:20.016560", "exception": false, "start_time": "2021-06-06T18:53:19.750404", "status": "completed"} tags=[] # Sadece yeşil kanala CHALE uygulanması durumunda / AND apply CLAHE ONLY GREEN CHANNEL clahe = cv2.createCLAHE(clipLimit=2, tileGridSize=(64,64)) #clahe_img0 = clahe.apply(img[:,:,0]) clahe_img1 = clahe.apply(img[:,:,1]) # Green Channel #clahe_img2 = clahe.apply(img[:,:,2]) img_result = cv2.merge((img[:,:,0],clahe_img1,img[:,:,2])) img_result = cv2.cvtColor(img_result, cv2.COLOR_BGR2RGB) plt.imshow(img_result) # + papermill={"duration": 0.061015, "end_time": "2021-06-06T18:53:20.130719", "exception": false, "start_time": "2021-06-06T18:53:20.069704", "status": "completed"} tags=[] imgCLAHE.shape # Görüntü çözünürlük değeri # + papermill={"duration": 0.16078, "end_time": "2021-06-06T18:53:20.345077", "exception": false, "start_time": "2021-06-06T18:53:20.184297", "status": "completed"} tags=[] def inorm(im): #Normalize Fonksiyonu im = (im/(im.max()-im.min()))*255 im_r = im[:,:,0] im_g = im[:,:,1] im_b = im[:,:,2] im_ = ((im_r**2)+(im_g**2)+(im_b**2))**(1/3) r=im_r/im_ g=im_g/im_ b=im_b/im_ img__ = cv2.merge((r,g,b)) return img__ def rgb_density(img): #Renk Yoğunluk Fonksiyonu img = inorm(img) r = img[:,:,0] g = img[:,:,1] b = img[:,:,2] s = img.sum() d1 = r.sum()/s d2 = g.sum()/s d3 = b.sum()/s return [d1,d2,d3] rgb_density(imgNORM) # + papermill={"duration": 0.155032, "end_time": "2021-06-06T18:53:20.553931", "exception": false, "start_time": "2021-06-06T18:53:20.398899", "status": "completed"} tags=[] rgb_density(imgCLAHE) #CHALE uygulanmış görüntüdeki renk yogunluk değeri sırasıyla (R,G,B) # + papermill={"duration": 0.158897, "end_time": "2021-06-06T18:53:20.767441", "exception": false, "start_time": "2021-06-06T18:53:20.608544", "status": "completed"} tags=[] rgb_density(img_result) #Yanlız yeşil kanala CHALE uygulanmış görüntüdeki renk yogunluk değeri sırasıyla (R,G,B) # + papermill={"duration": 0.153638, "end_time": "2021-06-06T18:53:20.976276", "exception": false, "start_time": "2021-06-06T18:53:20.822638", "status": "completed"} tags=[] rgb_density(img__) # + papermill={"duration": 0.050895, "end_time": "2021-06-06T18:53:21.068760", "exception": false, "start_time": "2021-06-06T18:53:21.017865", "status": "completed"} tags=[] def summary(df, test_dir): testfiles = os.listdir(test_dir) df[['y_r']] = 0 df[['y_g']] = 0 df[['y_b']] = 0 for i in range(len(testfiles)): img = cv2.imread(test_dir+testfile[i]) lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB) l = lab[:,:,0] a = lab[:,:,1] b = lab[:,:,2] clahe = cv2.createCLAHE(clipLimit=2, tileGridSize=(16,16)) clahe_img = clahe.apply(l) img_result = cv2.merge((clahe_img,a,b)) imgCLAHE = cv2.cvtColor(img_result, cv2.COLOR_LAB2RGB) yogunluk = rgb_density(imgCLAHE) df['y_r'][i:i+1] = yogunluk[0] df['y_g'][i:i+1] = yogunluk[1] df['y_b'][i:i+1] = yogunluk[2] return df # + papermill={"duration": 1.400732, "end_time": "2021-06-06T18:53:22.510655", "exception": false, "start_time": "2021-06-06T18:53:21.109923", "status": 
"completed"} tags=[] df_ = summary(df, test_dir) df_ # + [markdown] papermill={"duration": 0.039609, "end_time": "2021-06-06T18:53:22.590459", "exception": false, "start_time": "2021-06-06T18:53:22.550850", "status": "completed"} tags=[] # YARARLANILAN KAYNAKLAR # [1] https://docs.opencv.org/master/d5/daf/tutorial_py_histogram_equalization.html # [2] https://medium.com/@ahmetkumas1/histogram-equalization-clahe-69cc9f83670c # # + [markdown] papermill={"duration": 0.039768, "end_time": "2021-06-06T18:53:22.670244", "exception": false, "start_time": "2021-06-06T18:53:22.630476", "status": "completed"} tags=[] # *Bu çalışma Şems Kurtoğlu tarafından hazırlanmış olup tez danışmanlığını Doç. Dr. yapılmaktadır.* # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## Background Subtraction # ### 1. Gaussian Mixture-based Background/Foreground Segmentation Algorithm # + # OpenCV 2.4.13 only import numpy as np import cv2 cap = cv2.VideoCapture('walking.avi') # Initlaize background subtractor foreground_background = cv2.BackgroundSubtractorMOG() while True: ret, frame = cap.read() # Apply background subtractor to get our foreground mask foreground_mask = foreground_background.apply(frame) cv2.imshow('Output', foreground_mask) if cv2.waitKey(1) == 13: break cap.release() cv2.destroyAllWindows() # - # ### What about using this on our webcam input? # + # OpenCV 2.4.13 only import numpy as np import cv2 # Intialize Webcam cap = cv2.VideoCapture(0) # Initlaize background subtractor foreground_background = cv2.BackgroundSubtractorMOG() while True: ret, frame = cap.read() # Apply background subtractor to get our foreground mask foreground_mask = foreground_background.apply(frame) cv2.imshow('Output', foreground_mask) if cv2.waitKey(1) == 13: break cap.release() cv2.destroyAllWindows() # - # ### Let's try the Improved adaptive Gausian mixture model for background subtraction # + # OpenCV 2.4.13 import numpy as np import cv2 cap = cv2.VideoCapture('walking.avi') # Initlaize background subtractor foreground_background = cv2.BackgroundSubtractorMOG2() while True: ret, frame = cap.read() # Apply background subtractor to get our foreground mask foreground_mask = foreground_background.apply(frame) cv2.imshow('Output', foreground_mask) if cv2.waitKey(1) == 13: break cap.release() cv2.destroyAllWindows() # - # ### Applying it to our webcam stream # + # OpenCV 2.4.13 import numpy as np import cv2 # Intialize Webcam cap = cv2.VideoCapture(0) # Initlaize background subtractor foreground_background = cv2.BackgroundSubtractorMOG2() while True: ret, frame = cap.read() # Apply background subtractor to get our foreground mask foreground_mask = foreground_background.apply(frame) cv2.imshow('Output', foreground_mask) if cv2.waitKey(1) == 13: break cap.release() cv2.destroyAllWindows() # - # ## What about foreground substraction? 
# + import cv2 import numpy as np # Initalize webacam and store first frame cap = cv2.VideoCapture(0) ret, frame = cap.read() # Create a flaot numpy array with frame values average = np.float32(frame) while True: # Get webcam frmae ret, frame = cap.read() # 0.01 is the weight of image, play around to see how it changes cv2.accumulateWeighted(frame, average, 0.01) # Scales, calculates absolute values, and converts the result to 8-bit background = cv2.convertScaleAbs(average) cv2.imshow('Input', frame) cv2.imshow('Disapearing Background', background) if cv2.waitKey(1) == 13: #13 is the Enter Key break cv2.destroyAllWindows() cap.release() # - # ### Background Substraction KKN # #### OpenCV 3.X only! # + # OpenCV 3.1.0 import numpy as np import cv2 cap = cv2.VideoCapture(0) kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3)) fgbg = cv2.createBackgroundSubtractorKNN() while(1): ret, frame = cap.read() fgmask = fgbg.apply(frame) fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel) cv2.imshow('frame',fgmask) if cv2.waitKey(1) == 13: break cap.release() cv2.destroyAllWindows() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # **These code cells were run on google colab with GPU support.** # + colab={"base_uri": "https://localhost:8080/"} id="GrQx0ROdcsh1" outputId="6dbb4f19-1b71-4432-b67d-62895f56008f" import torch if torch.cuda.is_available(): device = torch.device("cuda:0") print("GPU") else: device = torch.device("cpu") print("CPU") # - # **Fetching the data** # + colab={"base_uri": "https://localhost:8080/"} id="hhI29xmxi9T_" outputId="380952c9-4258-4c7c-ab8e-9b20ef729faa" from sklearn.datasets import fetch_openml import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np mnist = fetch_openml("mnist_784") print(mnist.keys()) # - # **Plotting one example of each class** # + colab={"base_uri": "https://localhost:8080/", "height": 224} id="3IZkcerWjEsP" outputId="25babe09-1336-42eb-d035-e4d4cb242f26" X = mnist["data"] Y = mnist["target"] Y = Y.astype(int) X=(X/255 - 0.5)*2 fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True) ax = ax.flatten() for i in range(10): for x, y in zip(X, Y): if y==i: img=np.array(x).reshape((28,28)) ax[i].imshow(img, cmap="Greys") break ax[0].set_yticks([]) ax[0].set_xticks([]) plt.tight_layout() plt.show() # - # **Out of 70k examples, 10k will be used for test set, and remaining will be used for training and validation.** # + id="6TzuxPwWjJi9" X_train, X_test, Y_train, Y_test = X[:60000], X[60000:], Y[:60000], Y[60000:] # + id="5ENyvH4kjKOj" from sklearn.model_selection import StratifiedShuffleSplit split = StratifiedShuffleSplit(n_splits=1, test_size=0.1, random_state=42) for train_index, val_index in split.split(X_train,Y_train): X_train_strat = X_train[train_index, :] Y_train_strat = Y_train[train_index] X_val_strat = X_train[val_index, :] Y_val_strat = Y_train[val_index] # + id="panFa_57kQhw" import torch.nn as nn import torch.nn.functional as Func from torch.autograd import Variable import torch.optim as optim import torch.utils.data as data import random from scipy.io import savemat import os from os import path from sklearn.preprocessing import normalize from torch.nn.utils import clip_grad_norm_ import torch.nn.parallel.data_parallel as data_parallel from sklearn.metrics import confusion_matrix # - # **Extending and overriding methods for our own 
dataset** # + id="k19sQHq3kVKe" class mnist_dataset(data.Dataset): def label_transformer(self, labels): return labels def __init__(self, input_data, labels): input_data = input_data.reshape((len(input_data),1,28,28)) self.feats = input_data self.labels = self.label_transformer(labels) def __len__(self): return len(self.labels) def __getitem__(self, index): x = self.feats[index] y = self.labels[index] return x,y # - # **Creating dataloader for each of the train, validation, test dataset.** # + id="9_CFo7qQkX9Z" class hyperparam: bs = 100 lr = 0.05 num_epochs = 50 params = { "batch_size": hyperparam.bs, "shuffle": True, "num_workers": 2, "drop_last": False, "pin_memory": True } train_set = mnist_dataset(X_train_strat, Y_train_strat) val_set = mnist_dataset(X_val_strat, Y_val_strat) test_set = mnist_dataset(X_test, Y_test) training_gen = data.DataLoader(train_set, **params) val_gen = data.DataLoader(val_set, **params) test_gen = data.DataLoader(test_set, **params) # - # **Created a DNN with 2 layers of CNN with 12 filters each and adding two fully connected layers of 100 and 10 neurons respectively. # Used Relu activation function, with initial learning rate = 0.05 with glorot initialization.** # + id="GgTVa1ckklJ9" from torch.nn import Conv2d, Linear from torch import flatten class optim_cnn(nn.Module): def glorot_initialize(self, layers): for layer in layers: torch.nn.init.xavier_normal_(layer.weight) torch.nn.init.zeros_(layer.bias) def __init__(self): super(optim_cnn, self).__init__() self.conv1 = Conv2d(1,12,kernel_size=(3,3), padding = 1) self.conv2 = Conv2d(12,12,kernel_size=(3,3), padding = 1) self.fc1 = Linear(588, 100) self.fc2 = Linear(100, 10) self.glorot_initialize([self.conv1, self.conv2, self.fc1, self.fc2]) def forward(self, sig): sig = Func.max_pool2d(Func.relu(self.conv1(sig)), (2, 2)) sig = Func.max_pool2d(Func.relu(self.conv2(sig)), (2, 2)) sig = sig.view(-1, 12*7*7) sig = Func.relu(self.fc1(sig)) sig = self.fc2(sig) return sig # return Func.softmax(sig, dim = 1) # - # **We have created three models, each of these will be trained using a different optimizer.** # + id="qVmZYmAEGHy7" cnn_models = [optim_cnn().to(device), optim_cnn().to(device), optim_cnn().to(device)] # - # **Model is trained for 50 epochs, after each epochs, printing the validation accuracy. and resulting learning rate after adjusting learning rate by 10% each 10 epochs # Also using early stopping mechanism, which stops the learning if the validation accuracy starts dropping for a consecutive 5 cycles. 
This is done to prevent overfitting.** # + colab={"base_uri": "https://localhost:8080/"} id="wov0a1kVkoBq" outputId="70d7479d-7e08-4027-a664-a4221bcfca72" from tqdm import tqdm from datetime import datetime from torch.optim.lr_scheduler import StepLR from torch.optim import Adam, RMSprop tr_avg_loss_list = {0: [], 1:[], 2:[]} tr_accuracy_list = {0: [], 1:[], 2:[]} val_avg_loss_list = {0: [], 1:[], 2:[]} val_accuracy_list = {0: [], 1:[], 2:[]} print(datetime.now()) def get_optimizer(model_num, model): if model_num==0: return RMSprop(model.parameters(), lr = 0.001, alpha = 0.9) elif model_num == 1: return torch.optim.SGD(model.parameters(), lr = 0.05, momentum=0.9, nesterov=True) # TODO Nesterov elif model_num == 2: return Adam(model.parameters(), lr = 0.001, eps = 1e-8, weight_decay=0) for model_num, cnn_model in enumerate(cnn_models): optimizer = get_optimizer(model_num, cnn_model) if model_num == 1: scheduler = StepLR(optimizer, step_size=10, gamma=0.9) loss = nn.CrossEntropyLoss() for epoch in range(hyperparam.num_epochs): print("Epoch:" + str(epoch) + " model num: " + str(model_num+1)) tr_num_correct = 0 tr_num_samples = 0 tr_total_loss = 0 val_num_correct = 0 val_num_samples = 0 val_total_loss = 0 print("Learning rate: " + str(optimizer.param_groups[0]['lr'])) with torch.set_grad_enabled(True): cnn_model.train(True) for ind, (local_batch, local_labels) in enumerate(training_gen): optimizer.zero_grad() local_batch = local_batch local_labels = local_labels local_batch, local_labels = Variable(local_batch).float(), Variable(local_labels) local_batch = local_batch.to(device) local_labels = local_labels.to(device) out1 = cnn_model(local_batch) ploss = loss(out1, local_labels.long()) tr_total_loss += ploss * hyperparam.bs ploss.backward() optimizer.step() sel_class = torch.argmax(out1, dim=1) tr_num_correct += sel_class.eq(local_labels).sum().item() tr_num_samples += hyperparam.bs tr_avg_loss = tr_total_loss / len(training_gen.dataset) tr_avg_loss_list[model_num].append(tr_avg_loss) tr_accuracy = tr_num_correct / len(training_gen.dataset) tr_accuracy_list[model_num].append(tr_accuracy) with torch.set_grad_enabled(False): cnn_model.eval() for local_batch, local_labels in val_gen: local_batch = local_batch.float() local_labels = local_labels.float() local_batch, local_labels = Variable(local_batch), Variable(local_labels) local_batch = local_batch.to(device) local_labels = local_labels.to(device) out1 = cnn_model(local_batch) ploss = loss(out1, local_labels.long()) val_total_loss += ploss * hyperparam.bs sel_class = torch.argmax(out1, dim=1) val_num_correct += sel_class.eq(local_labels).sum().item() val_num_samples += local_labels.size(0) val_avg_loss = val_total_loss / len(val_gen.dataset) val_avg_loss_list[model_num].append(val_avg_loss) val_accuracy = val_num_correct / len(val_gen.dataset) print("Validation accuracy: " + str(val_accuracy)) val_accuracy_list[model_num].append(val_accuracy) if model_num == 1: scheduler.step() if epoch > 10: if sum([val_accuracy_list[model_num][i] < val_accuracy_list[model_num][i-1] for i in range(epoch-5, epoch)]) == 5: break # - # **Plotting learning curves for validation and train dataset** # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="Wqimq2CM8TOb" outputId="ad693855-7932-4885-e054-ca4428f043ae" def plot_x_y_vals(x_vals, y_vals, x_label, y_label, label, line_titles): for i in range(len(x_vals)): plt.plot(x_vals[i], y_vals[i], label=line_titles[i]) plt.title(label) plt.legend() plt.xlabel(x_label) plt.ylabel(y_label) plt.show() 
model_names = ["RMSProp","Nesterov", "Adam"] for i in tr_accuracy_list: epocs = [i+1 for i in range(len(tr_accuracy_list[i]))] plot_x_y_vals([epocs, epocs], [tr_accuracy_list[i], val_accuracy_list[i]], "Epochs", "Accuracy", "Train & Validation Accuracy, " + model_names[i], ["train", "validation"]) plot_x_y_vals([epocs, epocs], [tr_avg_loss_list[i], val_avg_loss_list[i]], "Epochs", "Loss", "Train & Validation Loss, " + model_names[i], ["train", "validation"]) # - # **Overfit or Underfit?** # # **RMSPros** , we see that training accuracy is almost 1, while validation is around .985. We see that the validation loss increases with epochs towards the end of training. And we see that the validation accuracy is also not as high as baseline. We can classify this as **mild overfit**. # # **Nesterov**, we see that training accuracy is again almost 1, while validation is around 0.99. We also see that validation loss increases towards the end, but not as much. Overall this performs similar to baseline model. However the loss is not increasing as much as baseline model, we can say that it is just right fit. We could have stopped training at around 25 epochs as well. # # **Adam**, We see that validation loss doesnt increase with the epochs towards the end. The accuracy doesnt increase as much, but doesnt increase as well. It is very similar to baseline model. We can say that comparatively this model is similar to baseline. So no relative overfit or underfit compared to baseline. # # **Checking the accuracy of test set** # + id="dR8RFZkglxKX" total_accurate = [0,0,0] total_values = [0,0,0] errors={0:{i:{j:0 for j in range(10)} for i in range(10)}, 1:{i:{j:0 for j in range(10)} for i in range(10)}, 2:{i:{j:0 for j in range(10)} for i in range(10)}} incorrect_samples = {0:[], 1:[], 2:[]} correct_samples = {0:[], 1:[], 2:[]} def calculate_class_wise_errors(local_labels, sel_class, local_batch, model_num): true_labels = local_labels[sel_class.not_equal(local_labels)] predicted = sel_class[sel_class.not_equal(local_labels)] for (i, t), (i,p) in zip(enumerate(true_labels), enumerate(predicted)): errors[model_num][t.item()][p.item()] += 1 true_labels = local_labels[sel_class.eq(local_labels)] predicted = sel_class[sel_class.eq(local_labels)] for (i, t), (i,p) in zip(enumerate(true_labels), enumerate(predicted)): errors[model_num][t.item()][p.item()] += 1 if len(incorrect_samples[model_num]) < 10: samples = local_batch[sel_class.not_equal(local_labels)] predicted = sel_class[sel_class.not_equal(local_labels)] true_labels = local_labels[sel_class.not_equal(local_labels)] for (i,s), (i,p), (i, t) in zip(enumerate(samples), enumerate(predicted), enumerate(true_labels)): incorrect_samples[model_num].append((s.cpu().numpy(), p.cpu().numpy(), t.cpu().numpy())) if len(correct_samples[model_num]) < 10: samples = local_batch[sel_class.eq(local_labels)] predicted = sel_class[sel_class.eq(local_labels)] true_labels = local_labels[sel_class.eq(local_labels)] for (i,s), (i,p), (i, t) in zip(enumerate(samples), enumerate(predicted), enumerate(true_labels)): correct_samples[model_num].append((s.cpu().numpy(), p.cpu().numpy(), t.cpu().numpy())) for model_num, cnn_model in enumerate(cnn_models): with torch.set_grad_enabled(False): cnn_model.eval() for local_batch, local_labels in test_gen: local_batch = local_batch.float() local_labels = local_labels.float() local_batch, local_labels = Variable(local_batch), Variable(local_labels) local_batch = local_batch.to(device) local_labels = local_labels.to(device) out1 = 
cnn_model(local_batch) ploss = loss(out1, local_labels.long()) sel_class = torch.argmax(out1, dim=1) calculate_class_wise_errors(local_labels, sel_class, local_batch, model_num) total_accurate[model_num] += sel_class.eq(local_labels).sum().item() total_values[model_num] += local_labels.size(0) # + colab={"base_uri": "https://localhost:8080/"} id="pshZ1lENmC1V" outputId="89bcb7b2-5247-43b6-9aa2-793de3cb53d8" print("Predicted " + str(total_accurate) +" correctly out of " + str(total_values) + "for respective models: " + str(model_names)) # - # **We see that all three models perform well, and ther is very small margin in accuracy. We see that Nesterov has performed the best while Adam is second and RMSProp has performed worst, but not by a lot.** # # **Below we are plotting the heatmap, where y-axis represents the actual label and x-axis represents the predicted labels. Pleaase mind that all the diagonal elements have been set to zero. So the heatmap only represents the incorrectly classified label counts.** # # **For example row = 4, col = 3 represents the count of images, which were 4, but were actually classified as 3. And the cell (4,4) is left empty, although it should ideally contain count of all the correctly classified images of 4. As this heatmap is generated only to see if there is any pair that is mistaken a lot in the classification OR if there is any bias in our model for any label.** # # + colab={"base_uri": "https://localhost:8080/", "height": 926} id="jr3x3rR1Drwk" outputId="2d2f30a4-e261-4614-cba4-34241449545d" import seaborn as sns class_acc = np.zeros((3,10,10)) for d in range(3): for i in range(10): for j in range(10): if i!=j: class_acc[d,i,j] = errors[d][i][j] else: class_acc[d,i,j] = 0 print("\n\n"+model_names[d]) sns.heatmap(class_acc[d]) plt.show() # - # **Plotting few correctly classified images by our model.** # + colab={"base_uri": "https://localhost:8080/", "height": 880} id="oagP_kqzcllW" outputId="31179f2c-9731-4efa-d1db-60059449cae5" print("Incorrectly classified samples. (True and predicted values)") for d in range(3): print("\n\n"+model_names[d]) fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True) ax = ax.flatten() for i in range(10): img=np.array(incorrect_samples[d][i][0]).reshape(28,28) ax[i].imshow(img, cmap="Greys") ax[i].title.set_text(str(int(incorrect_samples[d][i][2])) + "-" + str(incorrect_samples[d][i][1])) ax[0].set_yticks([]) ax[0].set_xticks([]) plt.tight_layout() plt.show() # - # **Plotting few correctly classified images by our model.** # + colab={"base_uri": "https://localhost:8080/", "height": 880} id="GClPJXDjoFDD" outputId="9eb96aba-4405-4730-996e-bd0f9d196bfe" print("Correctly classified samples. (true and predicted values)") for d in range(3): print("\n\n"+model_names[d]) fig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True) ax = ax.flatten() for i in range(10): img=np.array(correct_samples[d][i][0]).reshape(28,28) ax[i].imshow(img, cmap="Greys") ax[i].title.set_text(str(int(correct_samples[d][i][2])) + "-" + str(correct_samples[d][i][1])) ax[0].set_yticks([]) ax[0].set_xticks([]) plt.tight_layout() plt.show() # - # **Plotting below confusion matrix for each class. By each class, we mean all the correct prediction fot that class = true positive. all the images of a class, that were incorrectly classfied as false negative. All the images, not of that class, but classified as of that class as false positive. 
And all the images that were not of a class and were also classified as not belonging to that class as true negative.** # + colab={"base_uri": "https://localhost:8080/", "height": 376} id="9gm8Z_bN5r7q" outputId="9246d32f-86f5-429a-9f16-ee5a23e23a9b" import pandas as pd # Confusion matrix confusion_arr = np.zeros((3, 10, 4)) confusion_dfs = [] for d in range(3): for i in range(10): confusion_arr[d][i][0] = errors[d][i][i] # tp for j in range(10): if i!=j: confusion_arr[d][i][1]+=errors[d][j][i] # fp for j in range(10): if i!=j: confusion_arr[d][i][2]+= errors[d][i][j] # fn confusion_arr[d][i][3] = total_values[d] - sum(confusion_arr[d][i][:3]) # tn confusion_dfs.append(pd.DataFrame(confusion_arr[d], columns=["tp", "fp", "fn", "tn"])) confusion_dfs[d]["precision"] = confusion_dfs[d]["tp"] / (confusion_dfs[d]["tp"] + confusion_dfs[d]["fp"]) confusion_dfs[d]["recall"] = confusion_dfs[d]["tp"] / (confusion_dfs[d]["tp"] + confusion_dfs[d]["fn"]) confusion_dfs[d]["accuracy"] = (confusion_dfs[d]["tp"] + confusion_dfs[d]["tn"]) / 10000 print("Overall Accuracy:" + str(total_accurate[0]/total_values[0]) + " For optimizer:" + model_names[0]) confusion_dfs[0] # + colab={"base_uri": "https://localhost:8080/", "height": 376} id="SjYlJLRurviz" outputId="48225745-1808-47ec-ab8b-078424911b7e" print("Overall Accuracy:" + str(total_accurate[1]/total_values[1]) + " For optimizer:" + model_names[1]) confusion_dfs[1] # + colab={"base_uri": "https://localhost:8080/", "height": 376} id="CJcPPMGar7J9" outputId="2271ea25-6ea1-46f1-b8b2-218cbb8aab50" print("Overall Accuracy:" + str(total_accurate[2]/total_values[2]) + " For optimizer:" + model_names[2]) confusion_dfs[2] # - # Conclusion: # While training all three optimizer, we noticed that all three require different learning rates. While adam when supplied with larger rate, has hard time converging to a solution. Nesterov is able to work with higher learning rates. So the ideal learning rate for the current dataset which have worked are: # Nesterov: 0.5, Adam:0.001, RMSProd: 0.001 # # We see that with nesterov, we dont have to worry much about learning rate, as we can use learning rate scheduler, While the adam and rmsprop, use there own learning rates calculations. That is why when training with adam optimizer, we have to be mindfull of learning rates, while nesterov doesnt have those kind of restrictions. In case of RMSProp as well a lower learning rate yielded better results. # # Nesterov has also performed the best of all, even though by slight margin, So in all Nesterov would be preferred choice for current dataset. # # Compared to RMSprop, Adam and nesterov optimizers also descent fast, and in few epochs itself they achieve good accuracies on validation set. This is usefull, as in case of large files, we can get away with less epochs. # # Overall all three have worked well, and depending on the problem, one could perform better than other. However for the current problem the nesterov seems like the best choice. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] tags=["remove_cell"] #
    # # # Introdução # - import os import pprint import pandas as pd from collections import OrderedDict def get_parameters(): # Read Data try: df_8468 = pd.read_excel( io=os.path.join(os.path.dirname(__file__), 'data', 'tab_dec_8468.xlsx'), sheet_name='dec_8468', index_col=0 ) except Exception as e: #print(e, '\n') #print('Read table from GitHub') df_8468 = pd.read_excel( io='https://raw.githubusercontent.com/gaemapiracicaba/norma_dec_8468-76/main/src/normas/data/tab_dec_8468.xlsx', sheet_name='dec_8468', index_col=0 ) # Filter only quality df_8468 = df_8468.loc[(df_8468['tipo_padrao'] == 'qualidade')] #print(df_8468.head()) # Classes list_classes = list(set(df_8468['padrao_qualidade'])) list_classes = [x for x in list_classes if pd.notnull(x)] list_classes.sort() return df_8468, list_classes # + tags=["remove_cell"] df_8468, list_classes = get_parameters() pprint.pprint(list_classes) # - def filter_by_classe(df_8468, classe): # Filter dataframe by Classe df_8468 = df_8468.loc[(df_8468['padrao_qualidade'] == classe)] # Parâmetros list_parametros = list(set(df_8468['parametro_descricao'])) list_parametros = [x for x in list_parametros if pd.notnull(x)] list_parametros.sort() return df_8468, list_parametros # + tags=["remove_cell"] df_8468, list_parametros = filter_by_classe(df_8468, classe='Classe 2') pprint.pprint(list_parametros) # - def filter_by_parameters(df_8468, parametro): # Filter dataframe by Parametro df_8468 = df_8468.loc[(df_8468['parametro_descricao'] == parametro)] # Check and Get Results if len(df_8468) == 1: dict_8468 = df_8468.to_dict(orient='records')[0] dict_8468 = OrderedDict(sorted(dict_8468.items(), key=lambda x: df_8468.columns.get_loc(x[0]))) return dict_8468 else: return 'erro' # + tags=["remove_cell"] # Filter Data by Parâmetros dict_8468 = filter_by_parameters(df_8468, parametro='Oxigênio Dissolvido') dict_8468 # - def set_type_desconformidade(dict_8468): if pd.isnull(dict_8468['valor_minimo_permitido']) & pd.notnull(dict_8468['valor_maximo_permitido']): #print('Parâmetro só tem "valor máximo". Caso o valor medido esteja acima, é amostra desconforme!') tipo_8486 = 'acima>desconforme' elif pd.notnull(dict_8468['valor_minimo_permitido']) & pd.isnull(dict_8468['valor_maximo_permitido']): #print('Parâmetro só tem "valor mínimo". Caso o valor medido esteja abaixo, é amostra desconforme!') tipo_8486 = 'abaixo>desconforme' elif pd.notnull(dict_8468['valor_minimo_permitido']) & pd.notnull(dict_8468['valor_maximo_permitido']): #print('Parâmetro tem "valor mínimo" e "valor máximo". 
Caso o valor medido acima ou abaixo, é amostra desconforme!') tipo_8486 = 'abaixo_acima>desconforme' elif pd.isnull(dict_8468['valor_minimo_permitido']) & pd.isnull(dict_8468['valor_maximo_permitido']): #print('Erro!') tipo_8486 = 'erro' else: print('Erro!') #tipo_8486 = 'erro' return tipo_8486 # + tags=["remove_cell"] set_type_desconformidade(dict_8468) # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} def evaluate_result(valor, dict_8468): # Get type tipo_8486 = set_type_desconformidade(dict_8468) # Evaluate type if tipo_8486 == 'acima>desconforme': if valor > dict_8468['valor_maximo_permitido']: result_8468 = 'desconforme' else: result_8468 = 'conforme' elif tipo_8486 == 'abaixo>desconforme': if valor < dict_8468['valor_minimo_permitido']: result_8468 = 'desconforme' else: result_8468 = 'conforme' elif tipo_8486 == 'abaixo_acima>desconforme': if dict_8468['valor_minimo_permitido'] <= valor <= dict_8468['valor_maximo_permitido']: result_8468 = 'conforme' else: result_8468 = 'desconforme' else: result_8468 = 'erro' return result_8468 # + jupyter={"outputs_hidden": false} pycharm={"name": "#%%\n"} tags=["remove_cell"] valor = 10 evaluate_result(valor, dict_8468) # + [markdown] tags=["remove_cell"] #
    # # # Export # + tags=["remove_cell"] import os from traitlets.config import Config from nbconvert import PythonExporter from nbconvert.preprocessors import TagRemovePreprocessor # + tags=["remove_cell"] input_filename = 'decreto_estadual_8468.ipynb' input_filepath = os.path.join(os.getcwd(), input_filename) # + tags=["remove_cell"] # Import the exporter c = Config() c.TagRemovePreprocessor.enabled=True c.ClearOutputPreprocessor.enabled=True c.TemplateExporter.exclude_markdown=True c.TemplateExporter.exclude_code_cell=False c.TemplateExporter.exclude_input_prompt=True c.TemplateExporter.exclude_output=True c.TemplateExporter.exclude_raw=True c.TagRemovePreprocessor.remove_cell_tags = ('remove_cell',) c.TagRemovePreprocessor.remove_input_tags = ('remove_cell',) c.TagRemovePreprocessor.remove_all_outputs_tags = ('remove_output',) c.preprocessors = ['TagRemovePreprocessor'] c.PythonExporter.preprocessors = ['nbconvert.preprocessors.TagRemovePreprocessor'] # Configure and run out exporter py_exporter = PythonExporter(config=c) py_exporter.register_preprocessor(TagRemovePreprocessor(config=c), True) # Configure and run out exporter - returns a tuple - first element with html, second with notebook metadata body, metadata = PythonExporter(config=c).from_filename(input_filepath) # Write to output html file with open(os.path.join(os.getcwd(), '..', 'src', 'normas', 'decreto_estadual_8468.py'), 'w', encoding='utf-8') as f: f.write(body) # + tags=["remove_cell"] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data analysis with Pandas # *** # Importing, cleaning and analyzing the "hotel booking demand" dataset from Kaggle, which is available [here](https://www.kaggle.com/jessemostipak/hotel-booking-demand). # + # Importing all required packages import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # + # Downloading the dataset # !kaggle datasets download -d jessemostipak/hotel-booking-demand # !unzip hotel-booking-demand.zip # !rm hotel-booking-demand.zip # + # Loading the dataset and checking some of its features dataset = pd.read_csv("hotel_bookings.csv") dataset.head() #dataset.describe() # + # As a safety precaution, and following best practices, the analysis will not be conducted on the original dataset # Instead, we'll create a deep copy of the original dataset and use it for our analysis hotel_bookings = dataset.copy(deep=True) # + # We've noticed that there are null values in this dataset. However, how many are there? null_data = hotel_bookings.isna() null_total = null_data.sum().sum() null_relative = null_total/hotel_bookings.size print(f"Total number of null features: {null_total}") print(f"% of null features relative to the overall data: {np.round(null_relative * 100, 3)}%") # + # It seems that roughly 3.4% of our data is null # Although it may seem insignificant, we need to know which columns are being affected by this null_data.columns[null_data.any()] # + # With this information, what is the ratio of each column's null to filled values? 
null_children = null_data.children.mean() null_country = null_data.country.mean() null_agent = null_data.agent.mean() null_company = null_data.company.mean() print(f"% of null values in the children column: {np.round(null_children * 100, 3)}%") print(f"% of null values in the country column: {np.round(null_country * 100, 3)}%") print(f"% of null values in the agent column: {np.round(null_agent * 100, 3)}%") print(f"% of null values in the company column: {np.round(null_company * 100, 3)}%") # - # ## Dealing with the null columns # *** # Now, the question is: should these columns be cleaned? It depends. However, in this scenario, having things such as a null amount of children and null country could mean that the information is unknown. Alternatively, it could mean that the information is truly missing, i.e. someone forgot to add it in or it was corrupted somehow. # # Still, leaving ```NaN``` values doesn't help us either. Thus, we must change this if we are to extract meaningful data from this dataset. We can think about it the following way: # # * Having a ```NaN``` amount of children could mean that the person has no children, or has 1 or more. However, given we don't know that, we could assign the number 0 to those null values as, either way, they won't affect our data analysis, yet having a numerical value is preferable. # # * Being from a ```NaN``` country could mean, as stated previously, that the information was lost or is unknown. However, rather than having a null value, we could replace all of those ```NaN``` with ```UNK```, thus preserving the original data structure and getting rid of null values at the same time. # # * Having a ```NaN``` agent ID could, once again, mean that the information was lost or is unknown. As it's a numerical value, and as it represents a travel agency ID, we could use the number 0 to denote this lack of information, as no agency has 0 as its ID. # # * A ```NaN``` company ID could mean that the person truly doesn't work at a company, or that the information was lost (missing or corrupted). Once again, assigning 0 to a null company ID will have the same effect, given no company has such an ID. # + # Replacing all NaNs with appropriate values hotel_bookings.children.fillna(0, inplace=True) hotel_bookings.country.fillna("UNK", inplace=True) hotel_bookings.agent.fillna(0, inplace=True) hotel_bookings.company.fillna(0, inplace=True) # + # Checking some samples from our cleaned dataframe hotel_bookings.sample(10) # - # ## Additional cleaning # *** # We can do some more cleaning on this dataframe. For instance, the ```children```, ```agent``` and ```company``` columns are of type float while they'd be better suited for the integer data type. 
# Additionally, ```arrival_date_year```, ```arrival_date_month``` and ```arrival_date_day_of_month``` could be combined into a single column called ```arrival_date```
#

# +
# Changing the data type of children, agent and company
change_columns = ["children", "agent", "company"]
hotel_bookings[change_columns] = hotel_bookings[change_columns].astype(int)

# +
# Creating a datetime column from the day, month and year columns
columns = ["arrival_date_day_of_month", "arrival_date_month", "arrival_date_year"]
new_columns = ["day", "month", "year"]

arrival_date = hotel_bookings[columns].copy()
arrival_date.columns = new_columns
# Convert month names (e.g. "July") to month numbers before assembling the date
arrival_date.month = pd.to_datetime(arrival_date.month, format='%B').dt.month
# Assemble the date from the day/month/year columns (pandas infers the layout from the column names)
arrival_date.insert(0, "arrival_date", pd.to_datetime(arrival_date[new_columns]).dt.date)
arrival_date.drop(["day", "month", "year"], axis=1, inplace=True)

arrival_date.sample(10)

# +
# Now, all we need to do is insert the new column into our dataframe and drop the old ones
columns_to_drop = ["arrival_date_day_of_month", "arrival_date_month", "arrival_date_year"]

hotel_bookings.insert(3, "arrival_date", arrival_date["arrival_date"])
hotel_bookings.drop(columns=columns_to_drop, inplace=True)

hotel_bookings.head()

# +
# Saving the cleaned dataset to a .csv file
hotel_bookings.to_csv("../datasets/cleaned_hotel_bookings.csv", index=False)
# -

# ## Exploratory data analysis
# ***
# Now that the dataframe has been properly cleaned and saved, it's ready for analysis. As such, we can now conduct statistical measurements on our data and plot the information that is relevant to us.

# +
# 1st statistical measurement: what's the number of people that have visited the hotels in our dataset? And
# what is the proportion of adults relative to the overall population in our data?
columns = ["adults", "children", "babies", "hotel"]

total_people = hotel_bookings[columns].groupby("hotel").sum()
hotel_population = hotel_bookings[columns[0:3]].sum().sum()
adults_ratio = hotel_bookings.adults.sum()/hotel_population

print(f"Total number of people per hotel type: \n{np.round(total_people, 2)}")
print(f"% of adults in population: {np.round(adults_ratio * 100, 2)}")

# +
# As expected, adults constitute a majority of our population
# We can plot our results to get a clearer picture of our data
sns.set_theme()
ticks = np.arange(0, 175000, 25000)

ax = total_people.plot(kind="bar",
                       figsize=(10, 10),
                       legend=True,
                       fontsize=14,
                       rot=0)
ax.set_title("Population per hotel", fontsize=14)
ax.set_yticks(ticks)
ax.set_xlabel("Hotel", fontsize=14)
ax.legend(loc=1, prop={"size": 14})
ax.set_ylabel("Population", fontsize=14)

for p in ax.patches:
    ax.annotate("%d" % p.get_height(),
                (p.get_x() + p.get_width()/2., p.get_height()),
                ha="center", va="center", xytext=(0, 10),
                textcoords='offset points')

# +
# 2nd statistical measurement: what is the arrival rate by date?
# Which date had the most arrivals?
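# Added sketch (illustrative, not part of the original notebook): the busiest date can also
# be read off directly with idxmax on the value counts; `busiest_date` is an illustrative name.
busiest_date = hotel_bookings["arrival_date"].value_counts().idxmax()
print(f"Date with the most arrivals: {busiest_date}")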
arrival_by_date = pd.Series(hotel_bookings["arrival_date"]) frequency = arrival_by_date.value_counts() print(f"Arrival by date: \n{frequency}") # + # It seems that December 5th, 2015 had the most arrivals # Let's use a histogram to plot our results x = np.arange(0, 500, 50) ax = frequency.plot(kind="hist", figsize=(10,10), bins=45, alpha=0.7, legend=False, fontsize=14, rot=0 ) ax.set_title("Arrival rate by number of hotels", fontsize=14) ax.set_xlabel("People", fontsize=14) ax.set_ylabel("Number of hotels", fontsize=14) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Домашняя работа 3. Задача классификации # # ~~*Дедлайн мягкий как облачко: 1 ноября, 21:00*~~ # # ~~*Дедлайн жесткий как неспелая хурма: 5 ноября, 21:00*~~ # # *Дедлайн унылый как Асино настроение: 6 ноября, 21:00* (жесткий и единственный, без снятия баллов) # ### Оценивание и штрафы # # Максимальная оценка — 10 баллов. Еще есть 2 бонусных балла, которые можно добавить к любым домашкам или проверочным. # # Не списывайте, иначе всем участникам обнулим :) # Для удобства проверки самостоятельно посчитайте свою максимальную оценку (исходя из набора решенных задач) и укажите ниже. # # **Оценка: 9.5** print('Всем удачи!👒') # + from __future__ import annotations import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline pd.set_option('display.max_rows', 100) pd.set_option('display.max_columns', 100) # - # ## Часть 1. Логрег своими руками (4.5 балла) # **Задание 1 (4 балла)**. Реализуйте логистическую регрессию, обучаемую с помощью: # - градиентного спуска **(2 балла)** # # - стохастического градиентного спуска **(2 балла)** # # Во всех пунктах необходимо соблюдать два условия: # - Циклы можно использовать только для итераций градиентного спуска; # - В качестве критерия останова необходимо использовать (одновременно): # # - проверку на евклидову норму разности весов на двух соседних итерациях (например, меньше некоторого малого числа порядка $10^{-6}$), задаваемого параметром `tolerance`; # - достижение максимального числа итераций (например, 10000), задаваемого параметром `max_iter`. # # Чтобы проследить, что оптимизационный процесс действительно сходится, добавим атрибут класса `loss_history`. В нём после вызова метода `fit` должны содержаться значения функции потерь для всех итераций градиентного спуска, начиная с нулевой. # # Инициализировать веса можно случайным образом или нулевым вектором. # + from sklearn.base import BaseEstimator class LogReg(BaseEstimator): def __init__(self, gd_type: str = 'stochastic', tolerance: float = 1e-4, max_iter: int = 1000, eta: float = 1e-2, w0: np.array = None) -> None: """ Args: gd_type: Type of gradient descent ('full' or 'stochastic'). tolerance: Threshold for stopping gradient descent. max_iter: Maximum number of steps in gradient descent. eta: Learning rate. w0: Array of shape d (d — number of weights to optimize). Initial weights. """ self.gd_type = gd_type self.tolerance = tolerance self.max_iter = max_iter self.eta = eta self.w0 = w0 self.w = None # was None self.loss_history = None def fit(self, X: np.array, y: np.array) -> LogReg: """Fit the model on training data. Also, save value of loss after each iteration. Args: X: Training data. y: Target. Returns: self: Fitted classsifier. 
""" X = np.c_[X, np.ones(y.size)] if self.w0 is None: self.w0 = np.zeros(X.shape[1]) self.w = self.w0 if self.gd_type == "full": norm1 = np.linalg.norm(self.w) self.loss_history = [] for _ in range(self.max_iter): # плюс self.calc или минус? self.w = self.w - self.calc_gradient(X, y)*self.eta # /(1+i) # коэф 1/k norm2 = np.linalg.norm(self.w) self.loss_history = np.append(self.loss_history, self.calc_loss(X, y)) if abs(norm2 - norm1) < self.tolerance: break norm1 = norm2 return self else: norm1 = np.linalg.norm(self.w) self.loss_history = [] for _ in range(self.max_iter): i = np.random.randint(0, X.shape[0]) Xi = X[i, :] yi = y[i] step = - self.calc_gradient(Xi, yi) * self.eta self.w += step norm2 = np.linalg.norm(self.w) if abs(norm2 - norm1) < self.tolerance: return self norm1 = norm2 self.loss_history = np.append(self.loss_history, self.calc_loss(X, y)) return self def predict_proba(self, X: np.array) -> np.array: """Calculate probability of positive and negative class for each observation. Args: X: Array of shape (n, d). Data. Returns: Array of shape (n, 2). Predicted probabilities. """ X = np.c_[X, np.ones(X.shape[0])] if self.w is None: raise Exception('Not trained yet') return 1 / (1 + np.exp(-1 * (X.dot(self.w)))) pass def predict(self, X: np.array) -> np.array: """Predict class for each observation. Args: X: Array of shape (n, d). Data. Returns: Array of shape (n,). Predicted class labels. """ X = np.c_[X, np.ones(X.shape[0])] if self.w is None: raise Exception('Not trained yet') return np.sign(X.dot(self.w)) # может вернуть 0 pass def calc_gradient(self, X: np.array, y: np.array) -> np.array: """Calculate gradient of loss function after each iteration. Args: X: Array of shape (n, d), n can be equal to 1 if 'stochastic'. y: Array of shape (n,). Returns: Array of shape (d,). Gradient of loss function after current iteration. """ if self.gd_type == "stochastic": X = X.reshape(1, -1) M = y * (X.dot(self.w)) a = y * np.exp(-M) / (1 + np.exp(-M)) grad = -(X.T@ a) / y.size return grad def calc_loss(self, X: np.array, y: np.array) -> float: """Calculate value of loss function after each iteration. Args: X: Array of shape (n, d). y: Array of shape (n,). Returns: Value of loss function after current iteration. """ return np.mean(np.log(1 + np.exp(-1 * X.dot(self.w)))) pass # - # Далее предполагается, что вы используете собственную реализацию логистической регрессии. # Если с написанием класса возникли проблемы, используйте реализацию sklearn, чтобы не терять баллы за остальные задания. # # В части 2 и далее я бы всем советовала использовать реализацию sklearn. # Сгенерируем синтетические данные. # + from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split X, y = make_classification( n_samples=10000, n_features=10, n_informative=5, n_redundant=5, random_state=42) X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=42) # - # **Задание 2 (0.5 балла).** Обучите логистическую регрессию на синтетических данных. Нарисуйте изменение лосса во время обучения. # + cls1 = LogReg(max_iter=10000, tolerance=-1e-8) cls1.fit(X_train, y_train) cls1.predict(X_test) cls2 = LogReg(gd_type="full",tolerance=1e-8, max_iter = 10000) cls2.fit(X_train, y_train) cls2.predict(X_test) plt.plot(cls1.loss_history, label="stochastic") plt.plot(cls2.loss_history, label="full") plt.grid() plt.legend() # - # На тестовой части посчитайте ROC-AUC, PR-AUC. Постройте ROC и PR кривые. 
# + #ROC from sklearn import metrics from sklearn.metrics import roc_curve from sklearn.metrics import roc_auc_score from sklearn.metrics import precision_recall_curve from sklearn.metrics import average_precision_score cls1 = LogReg(gd_type="full",tolerance=1e-8, max_iter = 10000) cls1.fit(X_train, y_train) cls2 = LogReg(max_iter=10000, tolerance=-1e-8) cls2.fit(X_train, y_train) y1_predicted = cls1.predict_proba(X_test) y2_predicted = cls2.predict_proba(X_test) fig, ax = plt.subplots() fpr1, tpr1, thr1 = roc_curve(y_test, y1_predicted) fpr2, tpr2, thr2 = roc_curve(y_test, y2_predicted) p1, r1, thresholds = precision_recall_curve(y_test, y1_predicted) p2, r2, thresholds = precision_recall_curve(y_test, y2_predicted) plt.plot(fpr1, tpr1, label="full") plt.plot(fpr2, tpr2, label="stochastic") plt.legend() plt.title("ROC curves") ax.set_xlabel('FPR') ax.set_ylabel('TPR') plt.grid() # - fig, ax = plt.subplots() plt.plot(r1, p1, label="full") plt.plot(r2, p2, label="stochastic") plt.legend() plt.title("PR curves") ax.set_ylabel('Precision') ax.set_xlabel('Recall') plt.grid() print("ROC AUC score (full):", roc_auc_score(y_test, y1_predicted)) print("ROC AUC score (stochastic):", roc_auc_score(y_test, y2_predicted)) ans1 = 0 ans2 = 0 ans1 = average_precision_score(y_test, y1_predicted) ans2 = average_precision_score(y_test, y2_predicted) print("PR AUC (full):", ans1) print("PR AUC (stochastic):", ans2) # ## Часть 2. Работа с категориальными признаками (2.5 балла) # В этой части мы научимся обрабатывать категориальные переменные. Как вы уже знаете, закодировать их в виде столбика чисел недостаточно (это задаёт некоторый порядок, которого на категориальных переменных может и не быть, но модель попробует его выучить). Существует два основных способа обработки категориальных значений: # - One-hot-кодирование # - Счётчики (CTR, mean-target кодирование, ...) — каждый категориальный признак заменяется на среднее значение целевой переменной по всем объектам, имеющим одинаковое значение в этом признаке. # # Начнём с one-hot-кодирования. Допустим наш категориальный признак $f_j(x)$ принимает значения из множества $C=\{c_1, \dots, c_m\}$. Заменим его на $m$ бинарных признаков $b_1(x), \dots, b_m(x)$, каждый из которых является индикатором одного из возможных категориальных значений: # $$ # b_i(x) = [f_j(x) = c_i] # $$ # __Подготовка данных.__ # # Загрузим данные [UCI Bank Marketing Dataset](https://archive.ics.uci.edu/ml/datasets/bank+marketing). Этот датасет содержит информацию о маркетинговой кампании какого-то банка, объектом в нем является телефонный звонок потенциальному клиенту с предложением некоторой услуги (утверждается, что это краткосрочный депозит), целевой переменной — ответ клиента (согласился ли он открыть депозит?). В качестве признакового описания используются характеристики клиента (образование, брак и т.д.), данные о звонке и различные экономические индикаторы — более подробная информация на страничке с датасетом. # # !wget https://archive.ics.uci.edu/ml/machine-learning-databases/00222/bank-additional.zip # # !unzip bank-additional.zip df = pd.read_csv('C:/Users/vesel/Downloads/bank-additional-full.csv', sep=';') y = df["y"] y = (y == "yes").astype(np.int8) df = df.drop(["y", "duration"], axis=1) df.head(5) df.describe() # __Задание 3 (0.5 балла).__ Разделите выборку на обучающую и тестовую в соотношении 3:1. Зафиксируйте `random_state=777`, укажите значение параметра `stratify`. Один из столбцов (помимо таргета :) ) стоит сразу выкинуть из обучающей выборки. Какой? Не отказывайте себе. 
# # Ответ: Нужно выбросить duration, этот показатель становится известен уже после совершения звонка, поэтому он не годится для предсказаний. # # df.groupby(["default", "campaign"]).count() X_train, X_test, y_train, y_test = train_test_split( df, y, test_size=0.25, random_state=777, stratify=y) # Закодируйте категориальные признаки с помощью `OrdinalEncoder`. Посчитайте качество (в этом задании будем работать c `AUC-PR`) при применении логистической регрессии. Здесь и далее для реализации последовательности этих действий (обработка признаков + обучение модели) используйте пайплайны. Замерьте время, потребовавшееся на обучение модели (с учетом кодирования признаков). # # __Вопрос__: почему в данном задании мы выбрали метрикой именно `AUC-PR`, а не, к примеру, `AUC-ROC`? # # __Ваш ответ__: Потому что в данном случае мы очень хотим угадывать тех, у кого высокая вероятность заключить с нами контракт # + numeric_features = df.select_dtypes([np.number]).columns ctgs = df.columns.drop(numeric_features) from sklearn.preprocessing import OrdinalEncoder from sklearn.pipeline import make_pipeline from sklearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression from sklearn.compose import ColumnTransformer, make_column_selector from sklearn.preprocessing import StandardScaler prep_ord = ColumnTransformer([ ('enc', OrdinalEncoder(), ctgs), ('scaling', StandardScaler(), numeric_features)]) pipe_ord = Pipeline(steps=[('prep_ord' , prep_ord), ('clf', LogisticRegression(max_iter=2000))]) # enc = OrdinalEncoder() # enc.fit(df[ctgs]) # enc.categories_ # df[ctgs] = enc.transform(df[ctgs]) # - # %%timeit pipe_ord.fit(X_train, y_train) y_predicted = pipe_ord.predict_proba(X_test) # print(y_predicted) # + def plotPR(y_predicted, lbl=""): p, r, thresholds = precision_recall_curve(y_test, y_predicted[:, 1]) plt.plot(r, p, label=lbl) plt.title("PR curves") plt.ylabel('Precision') plt.xlabel('Recall') plt.grid() plt.legend() print(lbl ,"AUC-RP:", average_precision_score(y_test, y_predicted[:, 1])) pipe_ord.fit(X_train, y_train) y_predicted_ord = pipe_ord.predict_proba(X_test) plotPR(y_predicted_ord, lbl="OrdinalEncoding") # - # __Задание 4 (0.5 балла).__ Закодируйте все категориальные признаки с помощью one-hot-кодирования. Обучите логистическую регрессию и посмотрите, как изменилось качество модели. Измерьте время, потребовавшееся на кодирование категориальных признаков и обучение модели. # # # + from sklearn.preprocessing import OneHotEncoder X_train, X_test, y_train, y_test = train_test_split( df, y, test_size=0.25, random_state=777, stratify=y) prep_onehot = ColumnTransformer([ ('onehot', OneHotEncoder(), ctgs), ('scaling', StandardScaler(), numeric_features)]) pipe_onehot = Pipeline(steps=[('prep_onehot' , prep_onehot), ('clf', LogisticRegression(max_iter=2000))]) # - # %%timeit pipe_onehot.fit(X_train, y_train) y_predicted = pipe_onehot.predict_proba(X_test) # 3.11 секунд по сравнению с 0.735 секунд для 2000 итераций GD pipe_onehot.fit(X_train, y_train) y_predicted_onehot = pipe_onehot.predict_proba(X_test) plotPR(y_predicted_onehot, lbl="OneHot") plotPR(y_predicted_ord, lbl = "Ordinal") # немного выросло # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ # - # Как можно было заменить, one-hot-кодирование может сильно увеличивать количество признаков в датасете, что сказывается на памяти, особенно, если некоторый признак имеет большое количество значений. Эту проблему решает другой способ кодирования категориальных признаков — счётчики. 
Основная идея в том, что нам важны не сами категории, а значения целевой переменной, которые имеют объекты этой категории. Каждый категориальный признак мы заменим средним значением целевой переменной по всем объектам этой же категории: # $$ # g_j(x, X) = \frac{\sum_{i=1}^{l} [f_j(x) = f_j(x_i)][y_i = +1]}{\sum_{i=1}^{l} [f_j(x) = f_j(x_i)]} # $$ # # __Задание 5 (0.5 балла).__ Закодируйте категориальные переменные с помощью счётчиков (ровно так, как описано выше без каких-либо хитростей). Обучите логистическую регрессию и посмотрите на качество модели на тестовом множестве. Сравните время обучения с предыдущими экспериментами (с учетом кодирования признаков). Заметили ли вы что-то интересное? X_train, X_test, y_train, y_test = train_test_split( df, y, test_size=0.25, random_state=777, stratify=y) df = pd.read_csv('C:/Users/vesel/Downloads/bank-additional-full.csv', sep=';') df["y"] = (df["y"] == "yes").astype(np.int8) # + # %%timeit data=df.copy() for i in ctgs: data = data.replace({f'{i}': dict(df.groupby(f'{i}')['y'].mean())}) data = data.drop(['y','duration'], axis=1) X_train, X_test, y_train, y_test = train_test_split( data, y, test_size=0.25, random_state=777, stratify=y) # почему то без первых двух строк выдаёт ошибку lg = LogisticRegression(max_iter=2000) lg.fit(X_train, y_train) y_predicted = lg.predict_proba(X_test) # + data=df.copy() for i in ctgs: data = data.replace({f'{i}': dict(df.groupby(f'{i}')['y'].mean())}) data = data.drop(['y','duration'], axis=1) X_train, X_test, y_train, y_test = train_test_split( data, y, test_size=0.25, random_state=777, stratify=y) # почему то без первых двух строк выдаёт ошибку print(X_train) lg = LogisticRegression(max_iter=2000) lg.fit(X_train, y_train) y_predicted_ctr = lg.predict_proba(X_test) # - plotPR(y_predicted_ctr) plt.figure(figsize=(9, 9)) plotPR(y_predicted_ctr, lbl="Счётчики") plotPR(y_predicted_ord, lbl="Ordinal") plotPR(y_predicted_onehot, lbl="OneHot") # Отметим, что такие признаки сами по себе являются классификаторами и, обучаясь на них, мы допускаем «утечку» целевой переменной в признаки. Это ведёт к переобучению, поэтому считать такие признаки необходимо таким образом, чтобы при вычислении счетчика для конкретного объекта его целевая метка не использовалась. Это можно делать следующими способами: # 1. Вычислять значение счётчика по всем объектам расположенным выше в датасете (например, если у нас выборка отсортирована по времени). # 2. Вычислять по фолдам, то есть делить выборку на некоторое количество частей и подсчитывать значение признаков по всем фолдам кроме текущего (как делается в кросс-валидации). # 3. Вносить шум в посчитанные признаки. # # __Задание 6 (0.5 балла).__ Реализуйте корректное вычисление счётчиков самым простым способом — добавлением шума к значениям (постарайтесь найти баланс между борьбой с переобучением и сохранением полезности признаков). Снова обучите логистическую регрессию, оцените качество. Сделайте выводы. # Какие плюсы и минусы использования счётчиков по сравнению с one-hot-кодированием можно отметить? 
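# Before the answer below, a compact illustration of the counter idea with the two fixes
# discussed above (statistics computed on the training part only, plus a small amount of noise).
# This is an added sketch with illustrative names (`mean_target_encode`, `noise_std`);
# it is not the notebook's own solution.

# +
def mean_target_encode(train_df, test_df, y_train, cat_columns, noise_std=0.01, seed=0):
    """Replace each categorical column by the train-only mean of the target,
    adding Gaussian noise on the train side to reduce target leakage."""
    rng = np.random.default_rng(seed)
    train_enc, test_enc = train_df.copy(), test_df.copy()
    global_mean = y_train.mean()
    for col in cat_columns:
        means = y_train.groupby(train_df[col]).mean()
        train_enc[col] = train_df[col].map(means).fillna(global_mean)
        train_enc[col] += rng.normal(0.0, noise_std, size=len(train_enc))
        test_enc[col] = test_df[col].map(means).fillna(global_mean)  # no noise at test time
    return train_enc, test_enc

# Usage (illustrative): X_train_ctr, X_test_ctr = mean_target_encode(X_train, X_test, y_train, ctgs)
# -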
# # Ответ: С счётчиками df занимает меньше памяти и быстрее обучается, но обладает меньшим качеством # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ X_train, X_test, y_train, y_test = train_test_split( data, y, test_size=0.25, random_state=777, stratify=y) i_max = 0.01 score_max = 0 print(X_train) for i in np.arange(0.001, 0.011, 0.001): X_train_с = X_train X_train_с[ctgs] += np.random.normal(0, i, size = X_train[ctgs].shape) scl = StandardScaler() X_train_с = scl.fit_transform(X_train_с.values) X_test = scl.transform(X_test) lg = LogisticRegression(max_iter=2000) lg.fit(X_train_с, y_train) y_predicted_noise = lg.predict_proba(X_test) score = average_precision_score(y_test, y_predicted_noise[:, 1]) print(score) if score > score_max: i_max = i score_max = score i_max # + i_max = 0.001 X_train, X_test, y_train, y_test = train_test_split( data, y, test_size=0.25, random_state=777, stratify=y) X_train_с = X_train X_train_с[ctgs] += np.random.normal(scale=i_max, size = X_train[ctgs].shape) scl = StandardScaler() X_train_с = scl.fit_transform(X_train_с) X_test = scl.transform(X_test) lg = LogisticRegression(max_iter=2000) lg.fit(X_train_с, y_train) y_predicted_noise = lg.predict_proba(X_test) # - plt.figure(figsize=(9, 9)) plotPR(y_predicted_ctr, lbl="Счётчики") plotPR(y_predicted_noise, lbl="Счётчики с шумом") plotPR(y_predicted_ord, lbl="Ordinal") plotPR(y_predicted_onehot, lbl="OneHot") # __Задание 7 (0.5 балла).__ В данных имеется признак «возраст клиента». Сейчас мы интерпретируем его как числовой, что в общем случае для линейной модели может быть неверной гипотезой. Тем не менее, у этого признака есть довольно много уникальных значений (сколько?), поэтому применять к нему one-hot кодирование может оказаться излишним. Попробуйте закодировать возраст с помощью счетчиков. Стало ли лучше? # # # + X_train, X_test, y_train, y_test = train_test_split( data, y, test_size=0.25, random_state=777, stratify=y) ctgs_age = pd.Index(np.append( ["age"], ctgs)) data=df.copy() for i in ctgs_age: data = data.replace({f'{i}': dict(df.groupby(f'{i}')['y'].mean())}) data = data.drop(['y','duration'], axis=1) print(data) X_train, X_test, y_train, y_test = train_test_split( data, y, test_size=0.25, random_state=777, stratify=y) #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ lg = LogisticRegression(max_iter=2000) lg.fit(X_train, y_train) y_predicted_age = lg.predict_proba(X_test) # - plt.figure(figsize=(9, 9)) plotPR(y_predicted_age, lbl="age in ctgs") plotPR(y_predicted_ctr, lbl="Счётчики") plotPR(y_predicted_noise, lbl="Счётчики с шумом") plotPR(y_predicted_ord, lbl="Ordinal") plotPR(y_predicted_onehot, lbl="OneHot") # СТАЛО СИЛЬНО ХУЖЕ # Можно пойти и в обратную сторону. У нас есть признаки «месяц и день недели» для звонка. Попробуйте интерпретировать их как числовые (месяц от 0 до 12, дни недели от 0 до 4). Стало ли лучше в этот раз? # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ data=df.copy() for i in ctgs.drop(["month", "day_of_week"]): data = data.replace({f'{i}': dict(df.groupby(f'{i}')['y'].mean())}) data = data.drop(['y','duration'], axis=1) X_train, X_test, y_train, y_test = train_test_split( df, y, test_size=0.25, random_state=777, stratify=y) prep = ColumnTransformer([ ('enc', OrdinalEncoder(), ["month", "day_of_week"]) ]) pipe = Pipeline(steps=[('prep' , prep), ('clf', LogisticRegression(max_iter=2000))]) pipe.fit(X_train, y_train) y_predicted = pipe.predict_proba(X_test) plotPR(y_predicted) # - # ## Часть 3. Отбор признаков (1 балл + 1 бонусный балл) # Важной частью процесса построения модели является отбор признаков. 
На практике многие признаки оказывают малое влияние на модель (при этом их расчёт занимает время) или даже негативно сказываются на качестве модели. Попробуем несколько подходов отбора признаков, оценим, как они влияют на качество модели и сколько времени занимают. # # Обратимся к тому же датасету про банковский телефонный маркетинг. # + df = pd.read_csv('C:/Users/vesel/Downloads/bank-additional-full.csv', sep=';') X = df.drop(columns=['duration', 'y']) y = (df.y == 'yes') X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=777, stratify=y) # - # Как вы помните, в данных много категориальных признаков (сейчас давайте интерпретировать возраст как числовой). Давайте закодируем их с помощью one-hot кодирования. Исходные колонки с категориальными признаками можно удалить. Сколько признаков мы получили? #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ encoder = OneHotEncoder(sparse=False) encoder.fit(X[ctgs]) a = encoder.transform(X[ctgs]) data = np.hstack((a, np.array(X.drop(ctgs, axis=1)))) data = pd.DataFrame(data) # Получили 62 признака data.shape # Как и ранее, в качестве основной модели будем использовать логистическую регрессию, а целевой метрикой выберем `AUC-PR`. Обучите модель и посчитайте качество на тестовой выборке. Давайте запомним полученное значение. # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=777, stratify=y) categorical = list(X_train.dtypes[X_train.dtypes == "object"].index) numeric_features = X_train.select_dtypes([np.number]).columns column_transformer = ColumnTransformer([ ('ohe', OneHotEncoder(), categorical), ('scaling', StandardScaler(), numeric_features) ]) pipeline = Pipeline(steps=[ ('ohe_and_scaling', column_transformer), ('regression', LogisticRegression(penalty="none" ,max_iter=2000)) ]) model = pipeline.fit(X_train, y_train) y_predicted = model.predict_proba(X_test) plotPR(y_predicted) # + X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=777, stratify=y) categorical = list(X_train.dtypes[X_train.dtypes == "object"].index) numeric_features = X_train.select_dtypes([np.number]).columns column_transformer = ColumnTransformer([ ('ohe', OneHotEncoder(), categorical), ]) pipeline = Pipeline(steps=[ ('ohe_and_scaling', column_transformer), ('regression', LogisticRegression(penalty="none" ,max_iter=2000)) ]) model = pipeline.fit(X_train, y_train) y_predicted = model.predict_proba(X_test) plotPR(y_predicted) # - # Замечаем, что если делать без Scaling, то качество падает. # ### Встроенные методы # Допустим, мы хотим оставить только 40 лучших признаков. Попробуем сделать это несколькими способами. # # Начнём с отбора признаков с помощью линейной модели. Как известно, веса линейной модели можно интерпретировать как вклад каждого признака в предсказание таргета, а значит, модуль этого вклада можно интерпретировать как важность признаков. Такой метод отбора называются встроенным (embedded methods), так как он заложен в особенности модели. # # __Задание 8 (0.5 балла).__ Оставьте 40 признаков с наибольшим модулем соответствующего параметра линейной модели. Обучите модель заново и оцените её качество. Замерьте скорость такого отбора признаков. # # Изменилось ли качество? Как? 
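# As an aside, the same embedded selection can be expressed with sklearn's
# `SelectFromModel`, which keeps the 40 features with the largest |coefficient| inside a
# pipeline (an added sketch with illustrative names; scaling comes first so the coefficient
# magnitudes are comparable, a point discussed just below).

# +
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

embedded_selection = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectFromModel(LogisticRegression(max_iter=2000),
                               max_features=40, threshold=-np.inf)),
    ("clf", LogisticRegression(max_iter=2000)),
])
# Usage (illustrative): embedded_selection.fit(X_train, y_train) on the one-hot encoded data,
# then evaluate AUC-PR on the test part as in the previous cells.
# -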
# # # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ encoder = OneHotEncoder(sparse=False) encoder.fit(X[ctgs]) a = encoder.transform(X[ctgs]) data = np.hstack((a, np.array(X.drop(ctgs, axis=1)))) data = pd.DataFrame(data) X_train, X_test, y_train, y_test = train_test_split(data, y, test_size=0.2, random_state=777, stratify=y) model = LogisticRegression() model.fit(X_train, y_train) w = model.coef_[0] edge = sorted(w, reverse = True, key=lambda x: abs(x))[39] X_train = X_train.drop(X_train.columns[abs(w) < edge ], axis=1) # проверено, работает X_test = X_test.drop(X_test.columns[abs(w) < edge ], axis=1) # y_predicted = model.predict_proba(X_test) # plotPR(y_predicted) model.fit(X_train, y_train) y_pred = model.predict_proba(X_test) plotPR(y_pred) # - # А теперь давайте подумаем, что мы не учли. Мы предположили, что признаки вносят вклад равномерно, но не учли их масштаба. Если мы умножим один из признаков в 100 раз, то без учёта регуляризации его вес уменьшится в эти же 100 раз. А мы на основе этого отбираем признаки! Давайте сначала отмасштабируем признаки каким-то известным вам способом, а только потом будем удалять признаки. # # Кстати, в таком случае надо пересчитать качество на всех признаках (это тоже сделайте ниже). # # Что получилось? # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ # Качество на всех признаках было подсчитано выше X_train, X_test, y_train, y_test = train_test_split(data, y, test_size=0.2, random_state=777, stratify=y) print(type(X_train)) scl = StandardScaler() X_train = pd.DataFrame(scl.fit_transform(X_train)) X_test = pd.DataFrame(scl.transform(X_test)) print(type(X_train)) model = LogisticRegression() model.fit(X_train, y_train) w = model.coef_[0] edge = sorted(w, reverse = True, key=lambda x: abs(x))[39] X_train = X_train.drop(X_train.columns[abs(w) < edge ], axis=1) # проверено, работает X_test = X_test.drop(X_test.columns[abs(w) < edge ], axis=1) # y_predicted = model.predict_proba(X_test) # plotPR(y_predicted) model.fit(X_train, y_train) y_pred = model.predict_proba(X_test) print(X_train.shape) plotPR(y_pred) # - # ### Методы фильтрации # # # Давайте отбирать признаки умнее, а именно через подсчёт некоторой функции для каждого признака. На основании значений этой функции будем оставлять наиболее важные признаки. Методы этого семейства называют фильтрующими или filter methods. # # В качестве такой функции будем считать t-статистику: # # $$t(j) = \frac{|\mu_+ - \mu_-|}{\sqrt{\frac{n_+ \sigma^2_+ + n_- \sigma^2_-}{n_+ + n_-}}},$$ # # где $\mu$, $\sigma$, $n$ соответственно среднее, стандартное отклонение и количество объектов каждого из классов. # # __Задание 9 (0.5 балла).__ Оставьте 40 признаков с наибольшим значением $t$ и замерьте качество. Не забудьте замерить скорость отбора признаков в этом случае. # # # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ # - # ### Методы-обёртки # # __Бонус (1 балл).__ # # Третий из рассматриваемых нами методов работает следующим образом: мы исключаем по очереди один из признаков и смотрим, как это влияет на качество. Удаляем признаки таким жадным способом, пока не окажется выполненым некоторое условие (количество признаков или ухудшение качества). # # Заметим, что во время отбора признаков нельзя подсматривать в тестовую выборку (так же как и при настройке гиперпараметров). Разделите обучающую выборку на 2 части, на одной из них обучайте модель без одного из признаков, на второй части оценивайте качество. Исходную тестовую выборку стоит использовать только для финальной оценки качества. # # Снова оставьте только 40 признаков и оцените качество на тестовой выборке. 
Сколько времени занял такой отбор признаков? # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ df = pd.read_csv('C:/Users/vesel/Downloads/bank-additional-full.csv', sep=';') X = df.drop(columns=['duration', 'y']) y = (df.y == 'yes') encoder = OneHotEncoder(sparse=False) encoder.fit(X[ctgs]) a = encoder.transform(X[ctgs]) data = np.hstack((a, np.array(X.drop(ctgs, axis=1)))) data = pd.DataFrame(data) X_train, X_test, y_train, y_test = train_test_split(data, y, test_size=0.2, random_state=777, stratify=y) # - # Стоит отметить, что с помощью такого метода можно пойти и в обратную сторону. Попробуйте _добавлять_ по одному самому полезному признаку в выборку до тех пор, пока не наберется 40 штук. Найдется ли порог, при котором добавление следующих признаков будет только ухудшать качество модели? X_train #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ cnt = 0 features = [] mx = 0 for i in range(62): features.append(i) model = LogisticRegression(max_iter=2000) model.fit(X_train[features], y_train) y_pred = model.predict_proba(X_test[features]) score = average_precision_score(y_test, y_pred[:, 1]) # print(y_pred) if score > mx: mx = score else: features.pop() # if len(features) == 40: # break print(i, len(features), features) model = LogisticRegression(max_iter=2000) model.fit(X_train[features], y_train) # Можно оставить только 40 features[:40] но это снизит качество y_pred = model.predict_proba(X_test[features]) plotPR(y_pred) # Заметим, что алгоритм выбрал 44 признака, то есть он остановится только после выбора 40. Добавлять признаки можно было по другому, от способа добавления зависит какие признаки туда попадут. # **Вопрос для всех**: Давайте подведём итоги по отбору признаков. Назовите преимущества и недостатки каждого из методов. Какой метод привёл к наилучшему качеству? # # **Ответ:** В основном onehot показывает лучшее качество, но может быть долгим и занимать много памяти, Счётчики также работают неплохо но склонны переобучаться, при поправках, например, шумах работают неплохо, также они работают достаточно быстро - быстрее onehot. Остальные проигрывают по первым двум, хоть и могут быть быстрее. # ## Часть 4. Оценка экономического эффекта модели (2 балла) # # В данной части мы займемся тем, что от вас скорее всего потребуется на реальной работе. А именно: мы будем считать некоторые метрики и с их помощью попытаемся настроить модель на максимизацию _прибыли_. Разумеется, здесь будет сделано множество упрощающих жизнь допущений, но обо всем по порядку. # # __Задание 10 (1 балл).__ Допустим, работники вашего колл-центра получают за один звонок клиенту 1 доллар. При согласии клиента на предлагаемые условия он принесет в банк 10 долларов. # # # === Краткий курс экономики от ФКН 👒=== # # - Если вы всё прослушали на экономике, то напомним, что выручка — это сколько денег нам принесли клиенты, а прибыль — выручка за вычетом расходов на зарплату и прочее. # # === Конец краткого курса экономики от ФКН 👒 === # # Загрузите данные о телемаркетинге из предыдущего блока заданий. В этой части не нужно делить выборку - мы будем использовать кросс-валидацию. Используйте 5 фолдов, сделайте `shuffle=True, random_state=500`. По кросс-валидации у вас получится 5 вариантов обучающей и тестовой выборки. Обучите логистическую регрессию на каждой обучающей выборке (воспользуйтесь one-hot для категориальных признаков, гиперпараметры оставьте со значениями по умолчанию) и сделайте предсказания для соответствующих тестовых выборок. Допустим, всем положительным прогнозам ваши сотрудники решили позвонить. Посчитайте на всех тестовых выборках выручку и усредните. 
Сколько денег вы в среднем заработаете? Также вычислите стандартное отклонение. # # Сколько из заработанных денег придётся отдать операторам вашего колл-центра? # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ # - # Внесем некоторую долю случайности. Пусть теперь согласный на условия клиент будет приносить не 10 долларов, а случайную величину, равномерно распределенную в интервале $[0;20)$. Проделайте все те же самые действия. Для имитации реальной ситуации **НЕ** фиксируйте `random_seed` при подсчете выручки с клиента. Что получилось? # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ # - # Настройте по кросс-валидации коэффициент регуляризации модели для максимизации прибыли (считайте как случайную величину выше). Удалось ли получить какой-то выигрыш? При каком коэффициенте регуляризациии прибыль максимальна? Постройте график зависимости ожидаемой прибыли от коэффициента, также укажите стандартные отклонения (вам поможет `plt.errorbar`). # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ # - # __Задание 11 (1 балл).__ Выше мы уже описали примерную экономическую модель вашей задачи. Как вы считаете, что для этого бизнеса важнее — хороший precision или recall модели? Почему? # # __Ответ:__ # # # Вспомним, что на самом деле логистическая регрессия предсказывает вероятности положительного класса для объекта. Возможно, путем настройки порога бинаризации этих вероятностей мы сможем получить какой-то выигрыш? Проверьте ваши рассуждения выше с помощью настройки порога бинаризации на кросс-валидации для максимизации прибыли. Воспользуйтесь сеткой от 0 до 1 с шагом 0.01. Напомним, что снижение порога дает нам более высокий recall и более низкий precision, и наоборот. # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ # - # Каковы значения precision и recall на выбранном пороге? Оцените по кросс-валидации. Также вычислите стандартное отклонение. # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ # - # Как вы, вероятно, уже поняли, ваша модель склоняется к более высокому recall. Попробуйте оценить качество модели с помощью `PR-AUC` в зоне recall $\geq$ 0.5. Сделайте это следующим образом - выберите только те пороги, на которых достигается необходимый recall, затем интерпретируйте отсеченный в единичном квадрате прямоугольник как новый единичный квадрат и посчитайте площадь под отсеченной кривой. # + #╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ # - # __Бонус (1 балл):__ чтобы получить 1 балл, вставьте что угодно в ячейку ниже. Даже можно не ходить в музей. # # (Бонус может получить только тот, кто решил хотя бы одно задание). 
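# Referring back to Task 10 above, one possible shape for the revenue calculation
# (an added sketch, not the notebook's solution; names such as `cv_revenue`, `call_cost`
# and `deal_value` are illustrative, and the $1 / $10 amounts come from the task statement).

# +
from sklearn.model_selection import KFold
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.linear_model import LogisticRegression

def cv_revenue(X, y, cat_cols, num_cols, call_cost=1.0, deal_value=10.0):
    """5-fold CV estimate of revenue and wage cost when calling every predicted positive."""
    prep = ColumnTransformer([("ohe", OneHotEncoder(handle_unknown="ignore"), cat_cols),
                              ("num", StandardScaler(), num_cols)])
    pipe = Pipeline([("prep", prep), ("clf", LogisticRegression(max_iter=2000))])
    cv = KFold(n_splits=5, shuffle=True, random_state=500)
    revenues, costs = [], []
    for train_idx, test_idx in cv.split(X):
        pipe.fit(X.iloc[train_idx], y.iloc[train_idx])
        called = np.asarray(pipe.predict(X.iloc[test_idx]), dtype=bool)   # everyone predicted positive gets a call
        agreed = y.iloc[test_idx].to_numpy(dtype=bool)
        revenues.append(deal_value * (called & agreed).sum())
        costs.append(call_cost * called.sum())
    return np.mean(revenues), np.std(revenues), np.mean(costs)

# Usage (illustrative): cv_revenue(X, y, categorical, numeric_features)
# -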
#╰( ͡° ͜ʖ ͡° )つ──☆*:・゚ print("!!!") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from __future__ import print_function import os import pickle from shapely.geometry import LineString from zipfile import ZipFile import xml.sax, xml.sax.handler from pyproj import Proj, transform import numpy as np import time import datetime from functools import reduce from sklearn import svm, datasets from sklearn.neural_network import MLPClassifier from sklearn.preprocessing import StandardScaler import scipy.io import scipy.integrate from astropy.convolution import convolve, Box1DKernel from matplotlib import pyplot as plt import matplotlib.cm as cm from matplotlib import rc from matplotlib.colors import LogNorm import matplotlib.patches as patches from matplotlib.collections import PatchCollection, LineCollection from matplotlib.colors import ListedColormap, BoundaryNorm from matplotlib.path import Path as mpath from scipy.optimize import least_squares from scipy import misc from scipy.optimize import fsolve import glob from skimage import filters import pandas as pd from pathlib import Path import warnings warnings.filterwarnings('ignore') # %matplotlib inline pd.options.display.max_columns = 999 pd.options.display.max_rows = 90 np.set_printoptions(threshold=np.nan) plt.rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']}) ## for Palatino and other serif fonts use: #rc('font',**{'family':'serif','serif':['Palatino']}) plt.rc('text', usetex=True) ############### ## COMPUTER ## ############### laptop = False # + k = 0.4 f = 7.2921*10**-5*2*np.sin(32*np.pi/180) z0 = 0.01 A0 = 2 B0 = 4.5 G = 10 z = np.logspace(-2,4,500) Ro = G/f/z0 Ro # + k = 0.4 f = 7.2921*10**-5*2*np.sin(32*np.pi/180) z0 = 0.01 A0 = 2 B0 = 4.5 G = 10 def drag(us0): global A0, B0, f, z0, k, G return k**2/((np.log(us0/f/z0)-A0)**2+B0**2)-us0**2/G**2 us0 = fsolve(drag, G) print(us0) #find ug0 somehow then plot a0 (the deflection as a contormap with G and z0 as the axes, #will have to find us0 each time by solving the implicit function in a nested loop of G and z0 #this will give you the equilibrium deflection for a steady ABL # i suppose i really want us and z0 to be the independent variables though... a0 = np.arctan(f*B0/(np.log(us0/f/z0)-A0)) a0 # + z0u = 10**-2 z0d = 10**0 Gud = 10 G = Gud z0 = z0u us0u = fsolve(drag, Gud) a0u = np.arcsin(us0u*B0/k/Gud) z0 = z0d us0d = fsolve(drag, Gud) a0d = np.arcsin(us0d*B0/k/Gud) Hs = us0d/f/50 X = np.linspace(0,6000,100) a0ud = a0u + iblc0*X**iblex/Hs*(a0d-a0u) # + k = 0.4 f = 7.2921*10**-5*2*np.sin(32*np.pi/180) A0 = 2 B0 = 4.5 Z0 = np.logspace(-4,0,90) GW = np.linspace(0,50,100) Z0g, GWg = np.meshgrid(Z0,GW) US0 = np.empty_like(Z0g) AL0 = np.empty_like(Z0g) HS0 = np.empty_like(Z0g) for i in np.arange(0,np.shape(Z0)[0]): for j in np.arange(0,np.shape(GW)[0]): z0 = Z0[i] G = GW[j] us0 = fsolve(drag, G) US0[j,i] = us0 AL0[j,i] = np.arcsin(us0*B0/k/G) HS0[j,i] = us0/f/50 fetch = 6000 iblex = 1.4 iblc0 = 0.1/100 IBL = np.empty_like(Z0g) IBL[:] = iblc0*fetch**iblex # - # %store -r Xmid_um # %store -r DD # + plt.rcParams['text.usetex'] = True #Let TeX do the typsetting plt.rcParams['text.latex.preamble'] = [r'\usepackage{sansmath}', r'\sansmath'] #Force sans-serif math mode (for axes labels) plt.rcParams['font.family'] = 'sans-serif' # ... 
for regular text plt.rcParams['font.sans-serif'] = 'Helvetica' # Choose a nice font here S = 50 LW = 0.8 fs = 12 SC = 100 v1 = 0 v2 = 20 v3 = 270 v4 = 300 lpu1 = 0 hpu1 = 25 fig = plt.gcf() ax3 = plt.subplot(221) sc1 = ax3.pcolormesh(Z0g, GWg, US0, cmap='binary',zorder=-1,vmin=0,vmax=2) sc2 = ax3.contour(Z0g, GWg, 180/np.pi*AL0, cmap='magma_r',zorder=-1,vmin=15,vmax=45) ax3.set_xscale('log') ax3.set_xlabel('$z_{0}$ (m)', fontsize=fs) ax3.set_ylabel('$U_{g}$ (m/s)', fontsize=fs) plt.xticks(fontsize=fs) plt.yticks(fontsize=fs) cbar3 = fig.colorbar(sc1, ticks=[0,1,2],ax=ax3, orientation='horizontal') cbar3.ax.xaxis.set_ticks_position('bottom') cbar3.set_label('$u_{*}$ (m/s)',fontsize=fs) cbar3.ax.set_xticklabels(['0','1','2']) cbar3.ax.xaxis.set_label_position('bottom') cbar3.ax.tick_params(labelsize=fs) ax3 = plt.subplot(222) sc1 = ax3.pcolormesh(Z0g, GWg, 180/np.pi*AL0, cmap='magma_r',zorder=-1,vmin=15,vmax=45) sc2 = ax3.contour(Z0g, GWg, US0, cmap='binary',zorder=-1,vmin=0,vmax=2) ax3.set_xscale('log') ax3.set_xlabel('$z_{0}$ (m)', fontsize=fs) ax3.set_ylabel('$U_{g}$ (m/s)', fontsize=fs) plt.xticks(fontsize=fs) plt.yticks(fontsize=fs) cbar3 = fig.colorbar(sc1, ticks=[15,30,45], ax=ax3, orientation='horizontal') cbar3.ax.xaxis.set_ticks_position('bottom') cbar3.set_label('$\\alpha$ ($^{\\circ}$)',fontsize=fs) cbar3.ax.set_xticklabels(['15','30','45']) cbar3.ax.xaxis.set_label_position('bottom') cbar3.ax.tick_params(labelsize=fs) fig.subplots_adjust(hspace=0.2) fig.subplots_adjust(wspace=0.35) ax3 = plt.subplot(223) sc1 = ax3.pcolormesh(Z0g, GWg, HS0/IBL, cmap='seismic',zorder=-1,vmin=0,vmax=2) sc3 = ax3.contour(Z0g, GWg, 180/np.pi*AL0, cmap='magma_r',zorder=-1,vmin=15,vmax=45) ax3.set_xscale('log') ax3.set_xlabel('$z_{0}$ (m)', fontsize=fs) ax3.set_ylabel('$U_{g}$ (m/s)', fontsize=fs) plt.xticks(fontsize=fs) plt.yticks(fontsize=fs) cbar3 = fig.colorbar(sc1, ticks=[0,1,2], ax=ax3, orientation='horizontal') cbar3.ax.xaxis.set_ticks_position('bottom') cbar3.set_label('$H_{Ek}/H_{SL}$',fontsize=fs) cbar3.ax.set_xticklabels(['0','1','2']) cbar3.ax.xaxis.set_label_position('bottom') cbar3.ax.tick_params(labelsize=fs) ax3 = plt.subplot(224) sc1 = ax3.plot(X, 180/np.pi*a0ud,c='k') sc1 = ax3.scatter(Xmid_um[Xmid_um>0],DD[Xmid_um>0]+90-10+180/np.pi*a0ud[0],c='y',s=1) ax3.set_ylabel('$\\alpha$ ($^{\\circ}$)', fontsize=fs) ax3.set_xlabel('$x$ (km)', fontsize=fs) plt.xlim(0,6000) plt.ylim(-45,135) plt.xticks([0,2000,4000,6000], ('0', '2', '4', '6')) plt.yticks([-45,0,45,90,135], ('-45', '0', '45', '90', '135')) plt.xticks(fontsize=fs) plt.yticks(fontsize=fs) cbar3 = fig.colorbar(sc3, ax=ax3, orientation='horizontal') cbar3.ax.xaxis.set_ticks_position('bottom') cbar3.ax.xaxis.set_label_position('bottom') cbar3.ax.tick_params(labelsize=fs)` fig.subplots_adjust(hspace=0.2) fig.subplots_adjust(wspace=0.35) fig.set_size_inches(8, 10, forward=True) # %cd /home/andrew/Documents/ plt.savefig('some_name86.png', bbox_inches='tight',dpi=300) plt.savefig('some_name86.pdf', bbox_inches='tight') # - 180/np.pi*a0d # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (dsblocks) # language: python # name: dsblocks # --- # + tags=[] # hide # default_exp core.components from nbdev.showdoc import * from dsblocks.utils.nbdev_utils import nbdev_setup, TestRunner nbdev_setup () tst = TestRunner (targets=['dummy']) from dsblocks.core.components import __all__ # - # # Components # # > Base components for 
building DS pipelines. # + tags=[] #export from functools import partialmethod from typing import Optional, Union import copy import pickle from pathlib import Path import re from sklearn.utils import Bunch import numpy as np import pandas as pd import pyarrow.parquet as pq import pyarrow as pa import joblib from IPython.display import display # dsblocks from dsblocks.core.data_conversion import (DataConverter, NoConverter, PandasConverter, StandardConverter, GenericConverter, data_converter_factory) from dsblocks.core.utils import (save_csv, save_parquet, save_multi_index_parquet, save_keras_model, save_csv_gz, read_csv, read_csv_gz) from dsblocks.core.utils import DataIO, SklearnIO, PandasIO, NoSaverIO from dsblocks.core.utils import data_io_factory from dsblocks.core.utils import ModelPlotter, Profiler, Comparator from dsblocks.core.utils import camel_to_snake, snake_to_camel from dsblocks.utils.utils import (set_logger, delete_logger, replace_attr_and_store, get_specific_dict_param, get_hierarchy_level) import dsblocks.config.bt_defaults as dflt # + #for tests import pytest import numpy as np import os import joblib from sklearn.utils import Bunch from pathlib import Path import dsblocks.config.bt_defaults as dflt from dsblocks.utils.utils import remove_previous_results, check_last_part from dsblocks.core.data_conversion import DataConverter # + [markdown] tags=[] # ## Component # + tags=[] #export class Component (): """Base component class used in our Pipeline.""" def __init__ (self, estimator=None, name: Optional[str] = None, class_name: Optional[str] = None, suffix: Optional[str] = None, group: str = dflt.group, root=None, overwrite_field: bool = dflt.overwrite_field, error_if_present: bool = dflt.error_if_present, ignore:set = set(), but: Union[str, list] = '', data_converter: Optional[DataConverter] = None, data_io: Optional[DataIO] = None, model_plotter: Optional[ModelPlotter] = None, profiler: Optional[Profiler] = None, comparator: Optional[Comparator] = None, apply = None, direct_apply: bool = False, direct_fit: bool = False, direct_fit_apply: bool = False, error_if_apply: bool = False, error_if_fit: bool = False, error_if_fit_apply: bool = False, error_if_result_not_loaded: bool = False, error_if_estimator_not_loaded: bool = False, logger=None, verbose: int = dflt.verbose, name_logger:str = dflt.name_logger, mode_logger:str = dflt.mode_logger, **kwargs): """ Initialize attributes and fields. Parameters ---------- estimator : estimator (classifier or transformer) or None, optional Estimator being wrapped. name : Pipeline or None, optional Name of component. If not provided, it is inferred from the name of the estimator's class, or the name of the custom class defining the componet. data_converter : DataConverter or None, optional Converts incoming data to format expected by component, and convert outgoing result to format expected by caller. data_io : DataIO or None, optional Manages data serialization and deserialization. model_plotter : ModelPlotter or None, optional Helper object that allows to retrieve information to be shown about this component, as part of a Pipeline diagram. logger : logging.logger or None, optional Logger used to write messages verbose : int, optional Verbosity, 0: warning or critical, 1: info, 2: debug. 
""" assert not isinstance(estimator, Component), 'estimator cannot be an instance of Component' # name of current component, for logging and plotting purposes self._determine_component_name (name, estimator, class_name=class_name, suffix=suffix, apply=apply) # obtain hierarchy_level self.hierarchy_level = get_hierarchy_level (base_class=Component) # store __init__ attrs into `self` but = ', '.join (but) if isinstance(but, list) else but but = (but + ', ') if len(but)>0 else but but = but + 'ignore, but, overwrite_field, error_if_present, path_results, path_models, apply' if isinstance (ignore, str): ignore = set(re.split(', *', ignore)) ignore.update ({'name', 'class_name', 'suffix', 'apply', 'data_converter'}) replace_attr_and_store (base_class=Component, but=but, error_if_present=error_if_present, overwrite=overwrite_field, ignore=ignore) if self.logger is None: self.logger = set_logger (self.name_logger, verbose=self.verbose, mode=self.mode_logger) # obtain class-specific kwargs kwargs = self.obtain_config_params (**kwargs) # object that manages loading / saving if self.data_io is None: self.data_io = DataIO (component=self, **kwargs) else: if 'data_io' in kwargs: del kwargs['data_io'] self.data_io = data_io_factory (self.data_io, component=self, **kwargs) self.path_results = self.data_io.path_results self.path_models = self.data_io.path_models # data converter if self.data_converter is None: # TODO: have DataConverter store a reference to component, and use the logger from that reference. self.data_converter = StandardConverter (**kwargs) else: if 'data_converter' in kwargs: del kwargs['data_converter'] self.data_converter = data_converter_factory (self.data_converter, **kwargs) # plotting model component if self.model_plotter is None: self.model_plotter = ModelPlotter (component=self, **kwargs) else: self.model_plotter.set_component (self) # profiling computational cost if self.profiler is None: self.profiler = Profiler (self, **kwargs) # comparing results against other implementations of this component if self.comparator is None: self.comparator = Comparator (self, **kwargs) elif type(self.comparator) is type: self.comparator = self.comparator (self, **kwargs) # determine and assign apply and fit functions self.assign_apply_and_fit_functions (apply=apply) def __repr__ (self): return f'Component {self.class_name} (name={self.name})' def reset_logger (self): delete_logger (self.name_logger) def get_specific_data_io_parameters (self, tag, **kwargs): suffix = f'_{tag}' n = len(suffix) return {k[:-n]:kwargs[k] for k in kwargs if k.endswith (suffix) and k[:-n] in DataIO.specific_params} def obtain_config_params (self, tag=None, **kwargs): """Overwrites parameters in kwargs with those found in a dictionary of the same name as the component. Checks if there is a parameter whose name is the name of the class or the name given to this component. In that case, it overwrites the parameters in kwargs with those found in that dictionary. The parameters in kwargs can be used as *global* parameters for multiple components, while parameters specific of one component can be overwritten using a dictionary with the name of that component. See example below. 
""" k = get_specific_dict_param (self, **kwargs) if k is not None: config = kwargs.copy() config.update (config[k]) else: config = kwargs if tag is not None: if tag == '__name__': tag = self.name self.tag = tag config.update (self.get_specific_data_io_parameters (tag, **kwargs)) config.update(verbose=self.verbose, logger=self.logger) return config def _determine_component_name (self, name: str, estimator, class_name:Optional[str]=None, suffix:Optional[str]=None, apply=None) -> None: """ Determines an appropriate name for the component if not provided by input. If not provided, it is inferred from the name of the estimator's class, or the name of the custom class defining the componet. """ if class_name is not None: self.class_name = class_name else: self.class_name = self.__class__.__name__ if self.class_name in __all__: if estimator is not None: self.class_name = estimator.__class__.__name__ if apply is not None and hasattr (apply, '__name__'): self.class_name = snake_to_camel (apply.__name__) if name is not None: self.name = name else: self.name = camel_to_snake (self.class_name) self.suffix = suffix if self.suffix is not None: self.name = f'{self.name}_{self.suffix}' def create_estimator (self, **kwargs): self.estimator = Bunch(**kwargs) def fit_like (self, *X, y=None, load=None, save=None, split=None, func='_fit', validation_data=None, test_data=None, sequential_fit_apply=False, fit_apply=False, converter_args={}, **kwargs): """ Estimates the parameters of the component based on given data X and labels y. Uses the previously fitted parameters if they're found in disk and load is True. """ if not fit_apply: self.profiler.start_timer () profiler_method = 'fit' else: profiler_method = 'fit_apply' if self.error_if_fit and func=='_fit': raise RuntimeError (f'{self.name} should not call fit') if self.error_if_fit_apply and func=='_fit_apply': raise RuntimeError (f'{self.name} should not call fit_apply') X = self.data_converter.convert_single_tuple_for_fitting (X) X = X + (y, ) if y is not None else X if split is not None: self.original_split = self.data_io.split self.set_split (split) self.logger.info (f'fitting {self.name} (using {self.data_io.split} data)') previous_estimator = None if self.data_io.can_load_model (load): previous_estimator = self.data_io.load_estimator() already_computed = False if previous_estimator is not None: if func=='_fit': already_computed = True elif func=='_fit_apply': previous_result = None if self.data_io.can_load_result (load): previous_result = self.data_io.load_result (split=split) elif self.error_if_result_not_loaded: raise RuntimeError ('result not loaded, and it should') already_computed = previous_result is not None else: raise ValueError (f'function {func} not valid') elif self.error_if_estimator_not_loaded: raise RuntimeError ('estimator not loaded, and it should') if not already_computed: if func=='_fit_apply': X = self.data_converter.convert_before_fit_apply ( *X, sequential_fit_apply=sequential_fit_apply, **converter_args) X = self.data_converter.convert_no_tuple (X) elif func=='_fit': X = self.data_converter.convert_before_fitting (*X) else: raise ValueError (f'function {func} not valid') additional_data= self._add_validation_and_test (validation_data, test_data) if func=='_fit': if len(kwargs) > 0: raise AttributeError (f'kwargs: {kwargs} not valid') self.profiler.start_no_overhead_timer () self._fit (*X, **additional_data) elif func=='_fit_apply': assert self.fit_apply_func is not None, ('object must have _fit_apply method or one of ' 'its aliases 
implemented when func="_fit_apply"') self.profiler.start_no_overhead_timer () result = self.fit_apply_func (*X, **additional_data, **kwargs) else: raise ValueError (f'function {func} not valid') self.profiler.finish_no_overhead_timer (method=profiler_method, split=self.data_io.split) if func=='_fit': _ = self.data_converter.convert_after_fitting (*X) elif func=='_fit_apply': result = self.data_converter.convert_after_fit_apply ( result, sequential_fit_apply=sequential_fit_apply, **converter_args) if self.data_io.can_save_result (save, split): self.data_io.save_result (result, split=split) else: raise ValueError (f'function {func} not valid') if self.data_io.can_save_model (save): self.data_io.save_estimator () else: self.set_estimator (previous_estimator) self.logger.info (f'loaded pre-trained {self.name}') if func=='_fit_apply': result = previous_result self.logger.info (f'loaded pre-computed result') if not (fit_apply and func == '_fit'): self.profiler.finish_timer (method=profiler_method, split=self.data_io.split) if split is not None: self.set_split (self.original_split) if func=='_fit': return self else: return result fit = partialmethod (fit_like, func='_fit') def fit_apply (self, *X, y=None, load_model=None, save_model=None, load_result=None, save_result=None, func='_fit', validation_data=None, test_data=None, sequential_fit_apply=False, **kwargs): self.profiler.start_timer () if self.error_if_fit_apply: raise RuntimeError (f'{self.name} should not call fit_apply') X = self.data_converter.convert_single_tuple_for_fitting (X) X = X + (y, ) if y is not None else X if self.fit_apply_func is not None: return self.fit_like (*X, load=load_model, save=save_model, func='_fit_apply', validation_data=validation_data, test_data=test_data, sequential_fit_apply=sequential_fit_apply, fit_apply=True, **kwargs) else: if not self.direct_fit: kwargs_fit = dict(load=load_model, save=save_model, validation_data=validation_data, test_data=test_data, sequential_fit_apply=sequential_fit_apply, fit_apply=True) else: kwargs_fit = dict() if not self.direct_apply: kwargs_apply = dict (load=load_result, save=save_result, fit_apply=True, sequential_fit_apply=sequential_fit_apply, **kwargs) else: kwargs_apply = kwargs return self.fit (*X, **kwargs_fit).apply (*X, **kwargs_apply) def _add_validation_and_test (self, validation_data, test_data): additional_data = {} def add_data (data, split_name): if data is not None: if isinstance(data, tuple): if len(data) > 0: newX = data[0] else: self.logger.warning (f'empty {split_name}') newX = None if len(data) == 2: newy = data[1] elif len(data)==1: newy = None elif len(data)>2: raise ValueError (f'{split_name} must have at most 2 elements') else: newX = data newy = None newX, newy = self.data_converter.convert_before_fitting (newX, newy) if newy is not None: additional_data[split_name] = (newX, newy) else: additional_data[split_name] = newX add_data (validation_data, 'validation_data') add_data (test_data, 'test_data') return additional_data # aliases fit_transform = fit_apply fit_predict = fit_apply def __call__ (self, *X, load=None, save=None, fit_apply=False, sequential_fit_apply=False, **kwargs): """ Transforms the data X and returns the transformed data. Uses the previously transformed data if it's found in disk and load is True. 
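        A minimal sketch (the same component can be applied through any of the
        aliases, as the alias tests below check):

        >>> class Doubler (Component):
        ...     def _apply (self, x): return x*2
        >>> Doubler ().transform (3)
        6
        >>> Doubler ().predict (3)
        6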
""" if not fit_apply: self.profiler.start_timer () if self.direct_apply: return self.result_func (*X, **kwargs) if self.error_if_apply: raise RuntimeError (f'{self.name} should not call apply') assert self.result_func is not None, 'apply function not implemented' result = self._compute_result (X, self.result_func, load=load, save=save, fit_apply=fit_apply, sequential_fit_apply=sequential_fit_apply, **kwargs) return result def _assign_fit_func (self): self.fit_func = None self.estimator_fit_func = None if callable(getattr(self, '_fit', None)): self.fit_func = self._fit elif self.estimator is not None and callable(getattr(self.estimator, 'fit', None)): self.fit_func = self.estimator.fit self.estimator_fit_func = 'fit' def _assign_result_func (self): implemented = [] self.result_func = None self.estimator_result_func = None if callable(getattr(self, '_apply', None)): self.result_func = self._apply implemented += [self.result_func] if callable(getattr(self, '_transform', None)): self.result_func = self._transform implemented += [self.result_func] if callable(getattr(self, '_predict', None)): self.result_func = self._predict implemented += [self.result_func] if len(implemented)==0: if self.estimator is not None and callable(getattr(self.estimator, 'transform', None)): self.result_func = self.estimator.transform self.estimator_result_func = 'transform' implemented += [self.result_func] if self.estimator is not None and callable(getattr(self.estimator, 'predict', None)): self.result_func = self.estimator.predict self.estimator_result_func = 'predict' implemented += [self.result_func] if len(implemented) > 1: raise AttributeError (f'{self.class_name} must have only one of _transform, _apply, ' f'or _predict methods implemented => found: {implemented}') def _assign_fit_apply_func (self): implemented = [] self.fit_apply_func = None self.estimator_fit_apply_func = None if callable(getattr(self, '_fit_apply', None)): self.fit_apply_func = self._fit_apply implemented += [self.fit_apply_func] if callable(getattr(self, '_fit_transform', None)): self.fit_apply_func = self._fit_transform implemented += [self.fit_apply_func] if callable(getattr(self, '_fit_predict', None)): self.fit_apply_func = self._fit_predict implemented += [self.fit_apply_func] if len(implemented)==0: if self.estimator is not None and callable(getattr(self.estimator, 'fit_transform', None)): self.fit_apply_func = self.estimator.fit_transform self.estimator_fit_apply_func = 'fit_transform' implemented += [self.fit_apply_func] if self.estimator is not None and callable(getattr(self.estimator, 'fit_predict', None)): self.fit_apply_func = self.estimator.fit_predict self.estimator_fit_apply_func = 'fit_predict' implemented += [self.fit_apply_func] if len(implemented) > 1: raise AttributeError (f'{self.class_name} must have only one of fit_transform, fit_apply, ' f'or fit_predict methods implemented => found: {implemented}') def assign_apply_and_fit_functions (self, apply=None): """Determine and assign apply and fit functions.""" if apply is not None: self._apply = apply self._assign_result_func () self._assign_fit_apply_func () self._assign_fit_func () self.is_model = True if self.fit_func is None: self._fit = self._fit_ if self.fit_apply_func is None: self.is_model = False else: self._fit = self.fit_func if self.direct_apply: self.set_apply (self.result_func) if not self.is_model: self.fit = self._fit_ # self.set_fit_apply (self.apply) else: if self.direct_fit: self.fit = self.fit_func if self.direct_fit_apply: self.set_fit_apply 
(self.fit_apply_func) # aliases for transform method apply = __call__ transform = __call__ predict = partialmethod (__call__, converter_args=dict(new_columns=['prediction'])) def _compute_result (self, X, result_func, load=None, save=None, split=None, converter_args={}, fit_apply=False, sequential_fit_apply=False, **kwargs): profiler_method = 'fit_apply' if fit_apply else 'apply' if split is not None: self.original_split = self.data_io.split self.set_split (split) self.logger.debug (f'applying {self.name} (on {self.data_io.split} data)') previous_result = None if self.data_io.can_load_result (load): previous_result = self.data_io.load_result (split=split) if previous_result is None: if self.error_if_result_not_loaded: raise RuntimeError ('result not loaded, and it should') X = self.data_converter.convert_single_tuple_for_transforming (X) X = self.data_converter.convert_before_transforming ( *X, fit_apply=fit_apply, sequential_fit_apply=sequential_fit_apply, **converter_args) X = self.data_converter.convert_no_tuple (X) X = self.data_converter.convert_single_tuple_for_result_func (X) self.profiler.start_no_overhead_timer () result = result_func (*X, **kwargs) self.profiler.finish_no_overhead_timer (profiler_method, self.data_io.split) result = self.data_converter.convert_after_transforming ( result, fit_apply=fit_apply, sequential_fit_apply=sequential_fit_apply, **converter_args) if self.data_io.can_save_result (save, split): self.data_io.save_result (result, split=split) else: result = previous_result self.logger.info (f'loaded pre-computed result') self.profiler.finish_timer (profiler_method, self.data_io.split) if split is not None: self.set_split (self.original_split) return result def _fit_ (self, *X, **kwargs): return self def show_result_statistics (self, result=None, split=None) -> None: """ Show statistics of transformed data. Parameters ---------- result: DataFrame or other data structure or None, optional Transformed data whose statistics we show. If not provided, it is loaded from disk. training_data_flag: bool, optional If True, transformed training data is loaded, otherwise transformed test data is loaded. 
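        A small illustration (hypothetical values; list and ndarray results are
        wrapped in a DataFrame before describe() is displayed):

        >>> tr = Component (name='stats_demo')
        >>> tr.show_result_statistics (result=[1, 2, 3])  # doctest: +SKIP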
""" if result is None: df = self.load_result(split=split) else: df = result if df is not None: display (self.name) if callable(getattr(df, 'describe', None)): display (df.describe()) elif isinstance(df, np.ndarray) or isinstance(df, list): df = pd.DataFrame (df) display (df.describe()) def remove_non_pickable_fields (self): pass # ******************************** # exposing some data_io and data_converters methods # ******************************** def load_estimator (self): estimator = self.data_io.load_estimator () if estimator is not None: self.set_estimator (estimator) def load_result (self, split=None, path_results=None, result_file_name=None): return self.data_io.load_result (split=split, path_results=path_results, result_file_name=result_file_name) def assert_equal (self, item1, item2=None, split=None, raise_error=True, **kwargs): return self.comparator.assert_equal (item1, item2=item2, split=split, raise_error=raise_error, **kwargs) # ******************************** # setters # ******************************** def set_split (self, split): self.data_io.set_split (split) def set_save_splits (self, save_splits): self.data_io.set_save_splits (save_splits) def set_save_model (self, save_model): self.data_io.set_save_model (save_model) def set_load_model (self, load_model): self.data_io.set_load_model (load_model) def set_save_result (self, save_result): self.data_io.set_save_result (save_result) def set_load_result (self, load_result): self.data_io.set_load_result (load_result) def set_data_io (self, data_io, copy=False): self.data_io = copy.copy(data_io) if copy else data_io self.data_io.setup (self) def set_name (self, name): self._set_name (name) def _set_name (self, name, change_original=True): self.name = name self.data_io.set_file_names (name) if change_original: self.original_name = name def set_suffix (self, suffix): self.suffix = suffix if hasattr (self, 'original_name'): base_name = self.original_name else: base_name = self.name self.original_name = self.name self._set_name (f'{base_name}_{suffix}', change_original=False) def set_estimator (self, estimator): self.estimator = estimator if self.estimator_result_func is not None: self.result_func = getattr (self.estimator, self.estimator_result_func, None) assert callable (self.result_func) if self.estimator_fit_apply_func is not None: self.fit_apply_func = getattr (self.estimator, self.estimator_fit_apply_func, None) assert callable (self.fit_apply_func) if self.estimator_fit_func is not None: self.fit_func = getattr (self.estimator, self.estimator_fit_func, None) assert callable (self.fit_func) self._fit = self.fit_func assert self.is_model def set_apply (self, result_func): self.apply = result_func self.__call__ = result_func self.transform = result_func self.predict = result_func def set_fit_apply (self, fit_apply_func): self.fit_apply = fit_apply_func self.fit_transform = fit_apply_func self.fit_predict = fit_apply_func # + [markdown] tags=[] # ### Configuring component with global and specific parameters # - # exports tests.core.test_components def test_component_config (): # ********************************************************************** # test obtain_config_params method # ********************************************************************** tr = Component(name='sky') config = dict(first=1, second=2, third=3, sky=dict (second=4) ) config_r = tr.obtain_config_params (**config) logger = set_logger (dflt.name_logger, verbose=dflt.verbose) assert config_r=={'first': 1, 'second': 4, 'third': 3, 'sky': {'second': 4}, 
'verbose': dflt.verbose, 'logger': logger} assert config == {'first': 1, 'second': 2, 'third': 3, 'sky': {'second': 4}} # ********************************************************************** # test that component saves results when using global # parameter save=True # ********************************************************************** class MyTransform (Component): def __init__ (self,**kwargs): super().__init__ (**kwargs) self.create_estimator () def _fit (self, X, y=None): self.estimator.mu = X.mean() def _transform (self, X): return X-self.estimator.mu path_results = 'testing_configuration' tr = MyTransform (path_results=path_results, save = True) X = np.array([[1,2,3],[4,5,6]]) tr.fit_transform(X) import os l = sorted(os.listdir(path_results)) assert l==['models','whole'], f'found: {l}' # ********************************************************************** # test that component does not save results when we # use component-specific parameter MyTransform = dict(save=False) # ********************************************************************** from dsblocks.utils.utils import remove_previous_results remove_previous_results (path_results) tr = MyTransform (data_io = SklearnIO( path_results='testing_configuration', save = True, MyTransform = dict(save=False) ) ) tr.fit_transform(X) import pytest with pytest.raises(FileNotFoundError): os.listdir(path_results) tst.run (test_component_config, tag='dummy') # ### Recursively storing attrs across class hierarchy # exports tests.core.test_components def test_component_store_attrs (): # recursively storing __init__ attrs across hiearchy of classes class Intermediate (Component): def __init__ (self, x=3, y=4, **kwargs): super().__init__ (**kwargs) class Final (Intermediate): def __init__ (self, z=6, h=[2,3,5], **kwargs): super().__init__ (**kwargs) o = Final (x=9, h=[1,2,4]) assert o.x==9 and o.y==4 and o.z==6 and o.h==[1,2,4] o = Final (y=7, z=10, h=[1,2,4], Final={'h': [9,11,10]}) assert o.x==3 and o.y==7 and o.z==10 and o.h==[9,11,10] # only attributes specific of Final are replaced. 
# trying to replace attributes specific of Intermediate # does not work o = Final (y=7, z=10, h=[1,2,4], Intermediate={'y': 12}) assert o.x==3 and o.y==7 and o.z==10 and o.h==[1,2,4] class Intermediate (Component): def __init__ (self, x=3, y=4, **kwargs): super().__init__ (**kwargs) class Final (Intermediate): def __init__ (self, z=6, h=[2,3,5], **kwargs): super().__init__ (**kwargs) o = Final (x=9, h=[1,2,4], group='group_1', group_1={'y': 10, 'z':60}) assert o.x==9 and o.y==10 and o.z==60 and o.h==[1,2,4] # ******************* # test using same field in B4 and in A3, but # B4 passes that value to A3 in super(), # after modifying it # ***************** class A (Component): def __init__ (self, x=3, path_results='test_recursive', **kwargs): path_results = f'{path_results}/another' super ().__init__ (path_results=path_results, error_if_present=True, **kwargs) class B (A): def __init__ (self, x=30, y=10, **kwargs): x = x*2 super().__init__ (x=x, **kwargs) self.ab = A (**kwargs) b = B () assert b.x==60 and b.ab.x==3 and b.y==10 and b.path_results==Path('test_recursive/another').resolve() b = B (x=6, path_results='new_path') assert b.x==12 and b.ab.x==3 and b.y==10 and b.path_results==Path('new_path/another').resolve() # ******************* # test using same field in C and in A, but # the field is modified in a parent B # ***************** class C(B): def __init__ (self, x=40, z=100, **kwargs): super().__init__ (x=x, **kwargs) self.b = B(**kwargs) with pytest.raises (RuntimeError): c = C() c = C(ignore={'x'}) assert c.x==80 and c.y==10 and c.z==100 and c.b.x==60 and c.b.y==10 c = C (x=9, ignore={'x'}) assert c.x==18 and c.y==10 and c.z==100 and c.b.x==60 and c.b.y==10 assert not hasattr(c, 'ignore') tst.run (test_component_store_attrs, tag='dummy') # + [markdown] tags=[] # ### Transform method called with different aliases # + tags=[] # exports tests.core.test_components #@.reference_fails def test_component_aliases (): # test that we can implement _transform and use all the aliases # (transform, predict, apply, __call__) class MyTransform (Component): def _transform (self, x): return x*2 my_transform = MyTransform() assert my_transform.transform (3) == 6 assert my_transform.predict (3) == 6 assert my_transform.apply (3) == 6 assert my_transform (3) == 6 # test that we can implement _apply and use all the aliases # (transform, predict, apply and __call__) class MyTransform2 (Component): def _apply (self, x): return x*2 my_transform2 = MyTransform2() assert my_transform2.transform (3) == 6 assert my_transform2.predict (3) == 6 assert my_transform2.apply (3) == 6 assert my_transform2 (3) == 6 # test that we can implement _predict and use all the aliases # (transform, predict, apply and __call__) class MyTransform3 (Component): def _predict (self, x): return x*2 my_transform3 = MyTransform3() assert my_transform3.transform (3) == 6 assert my_transform3.predict (3) == 6 assert my_transform3.apply (3) == 6 assert my_transform3 (3) == 6 # test that an exception is raised if neither _tranform nor _apply are defined class MyTransform4 (Component): def _wrong_method (self, x): return x*2 my_transform4 = MyTransform4 () import pytest with pytest.raises (AssertionError): my_transform4.transform(3) # test that an exception is raised if more than one alias is implemented class MyTransform5 (Component): def _predict (self, x): return x*2 def _apply (self, x): return x*2 import pytest with pytest.raises(AttributeError): my_transform5 = MyTransform5 () # - tst.run (test_component_aliases, tag='dummy') # + 
[markdown] tags=[] # ### Calling `predict` is handy when the result is a single array of predictions # - # exports tests.core.test_components def test_component_predict (): # TODO: remove this cell class MyTransform (Component): def __init__ (self, **kwargs): super().__init__ ( data_converter=PandasConverter(**kwargs), **kwargs) def _predict (self, x): return x['a']+x['b'] my_transform = MyTransform() df = pd.DataFrame ({'a': [10,20,30],'b':[4,5,6]}) pd.testing.assert_frame_equal(my_transform.transform (df), pd.DataFrame ({0: [14,25,36]}) ) if False: pd.testing.assert_frame_equal(my_transform.predict (df), pd.DataFrame ({0: [14,25,36]}) ) tst.run (test_component_predict, tag='dummy') # + [markdown] tags=[] # ### The `transform` method and its aliases can be called with multiple inputs # - # exports tests.core.test_components def test_component_multiple_inputs (): # test that we can apply tranform to multiple data items from dsblocks.utils.dummies import SumXY my_transform = SumXY () result = my_transform.transform (3, 4) print (result) assert result==7 # test that we can apply tranform to single data items class MyTransform2 (Component): def _apply (self, x): return x*2 my_transform2 = MyTransform2 () result = my_transform2.transform (3) print (result) assert result==6 tst.run (test_component_multiple_inputs, tag='dummy') # + [markdown] tags=[] # ### `fit_apply()` and its aliases `fit_transform(), fit_predict()` # + # exports tests.core.test_components # example with _fit_apply implemented class TransformWithFitApply (Component): def __init__ (self, **kwargs): super().__init__ (**kwargs) def _fit (self, X, y=None): self.sum = X.sum(axis=0) def _apply (self, X): return X + self.sum def _fit_apply (self, X, y=None): self.sum = X.sum(axis=0)*10 return X + self.sum # example without _fit_apply implemented class TransformWithoutFitApply (Component): def __init__ (self, **kwargs): super().__init__ (**kwargs) def _fit (self, X, y=None): self.sum = X.sum(axis=0) def _apply (self, X): return X + self.sum #@pytest.mark.reference_fails def test_component_fit_apply (): tr1 = TransformWithFitApply () X = np.array ([100, 90, 10]) result = tr1.fit_apply (X) assert (result==(X+2000)).all() # same result obtained by aliases result = tr1.fit_transform (X) assert (result==(X+2000)).all() # different result if we apply fit and apply separately result = tr1.fit (X).transform (X) assert (result==(X+200)).all() # transform without fit_apply tr2 = TransformWithoutFitApply () result = tr2.fit_apply (X) assert (result==(X+200)).all() # same result obtained by aliases result = tr2.fit_transform (X) assert (result==(X+200)).all() # - tst.run (test_component_fit_apply, tag='dummy') # + [markdown] tags=[] # ### `fit_apply()` with DataConverters that transform inplace # + # exports tests.core.test_components # example with _fit_apply implemented class MyDataConverter (DataConverter): def __init__ (self, **kwargs): super ().__init__ (**kwargs) def convert_before_fitting (self, *X): X, y = X if len(X)==2 else (X[0], None) self.orig = X[0] X[0] = 0 return X, y def convert_after_fitting (self, *X): X, y = X if len(X)==2 else (X, None) if type(X) is tuple and len(X)==1: X = X[0] X[0] = self.orig return X def convert_before_transforming (self, X, **kwargs): self.orig2 = X[1] X[1] = 0 return X def convert_after_transforming (self, X, **kwargs): X[1] = self.orig2 return X def convert_before_fit_apply (self, *X, **kwargs): _ = self.convert_before_fitting (*X) if self.inplace: X2, y = X if len(X)==2 else (X[0], None) self.X = X2 
if type(X2) is tuple else (X2,) return self.convert_before_transforming (*X) class TransformWithFitApplyDC (Component): def __init__ (self, **kwargs): super().__init__ (data_converter=MyDataConverter,**kwargs) def _fit (self, X, y=None): self.sum = X.sum(axis=0) def _apply (self, X): return X + self.sum def _fit_apply (self, X, y=None): self.sum = X.sum(axis=0) return X + self.sum if False: def test_fit_apply_inplace (): tr1 = TransformWithFitApplyDC () X = np.array ([100, 90, 10]) result = tr1.fit_apply (X) assert (result==[100, 90, 110]).all() assert (X==[100, 90, 10]).all() tr1 = TransformWithFitApplyDC (inplace=False) X = np.array ([100, 90, 10]) result = tr1.fit_apply (X) assert (result==[10, 90, 20]).all() assert (X==[ 0, 0, 10]).all() # + #tst.run (test_fit_apply_inplace, tag='dummy') # - # `_fit_apply()` is called when implemented, otherwise `fit().apply()` is called # + [markdown] tags=[] # ### Getting validation_data and test_data # - # exports tests.core.test_components def test_component_validation_test (): class Transform1 (Component): def __init__ (self, **kwargs): super().__init__ (**kwargs) def _fit (self, X, y=None, validation_data=None, test_data=None): self.sum = X.sum(axis=0) print (f'validation_data: {validation_data}') print (f'test_data: {test_data}') self.validation_data = validation_data self.test_data = test_data def _apply (self, X): return X + self.sum tr1 = Transform1 () X = np.array ([100, 90, 10]) # case 1: validation_data and test_data are not tuples validation_data = np.array ([100, 90, 10])*10 test_data = np.array ([100, 90, 10])*100 result = tr1.fit_apply (X, validation_data=validation_data, test_data=test_data) assert (tr1.validation_data==validation_data).all() assert (tr1.test_data==test_data).all() # case 2: validation_data is a tuple, and test_data is not given result = tr1.fit_apply (X, validation_data=(validation_data,1)) assert (tr1.validation_data[0]==validation_data).all() assert tr1.validation_data[1]==1 assert tr1.test_data is None # case 3: validation_data is a tuple with more than 2 elements, exception is raised import pytest with pytest.raises(ValueError): result = tr1.fit_apply (X, validation_data=(validation_data,1,2)) tst.run (test_component_validation_test, tag='dummy') # + [markdown] tags=[] # ### saving / loading # + tags=[] # exports tests.core.test_components # example with _fit_apply implemented class TransformWithoutFitApply2 (Component): def __init__ (self, error_if_fit_func=False, error_if_apply_func=False, **kwargs): super().__init__ (data_io='SklearnIO', **kwargs) self.estimator = Bunch(sum=None) def _fit (self, X, y=None): if self.error_if_fit_func: raise RuntimeError ('fit should not run') print ('running _fit') self.estimator.sum = X.sum(axis=0) def _apply (self, X): if self.error_if_apply_func: raise RuntimeError ('apply should not run') if self.estimator.sum is None: raise RuntimeError ('fit should be called before apply') print ('running _apply') return X + self.estimator.sum Transform1 = TransformWithoutFitApply2 class TransformWithFitApply2 (Component): def __init__ (self, error_if_fit_func=False, error_if_apply_func=False, error_if_fit_apply_func=False, **kwargs): super().__init__ (data_io='SklearnIO', **kwargs) self.estimator = Bunch(sum=None) def _fit (self, X, y=None): if self.error_if_fit_func: raise RuntimeError ('fit should not run') print ('running _fit') self.estimator.sum = X.sum(axis=0) def _apply (self, X): if self.error_if_apply_func: raise RuntimeError ('apply should not run') if self.estimator.sum is 
None: raise RuntimeError ('fit should be called before apply') print ('running _apply') return X + self.estimator.sum def _fit_apply (self, X, y=None): if self.error_if_fit_apply_func: raise RuntimeError ('fit_apply should not run') print ('running _fit_apply') self.estimator.sum = X.sum(axis=0) return X + self.estimator.sum def component_save_data (): X = np.array ([100, 90, 10]) return X def test_component_save_load (component_save_data): X = component_save_data path_results = 'component_loading_saving' remove_previous_results (path_results=path_results) tr1 = Transform1 (path_results=path_results) tr1.fit (X) result = tr1.apply (X) tr2 = Transform1 (path_results=path_results) tr2.load_estimator() assert tr2.estimator.sum == tr1.estimator.sum result2 = tr2.data_io.load_result () assert (result2 == sum(X)+X).all() import os assert os.listdir (f'{path_results}/whole')==['transform_without_fit_apply2_result.pk'] assert os.listdir (f'{path_results}/models')==['transform_without_fit_apply2_estimator.pk'] result_b = tr1.apply (X*2, split='test') result2b = tr2.data_io.load_result (split='test') assert (result_b==result2b).all() assert os.listdir (f'{path_results}/test')==['transform_without_fit_apply2_result.pk'] result2b = tr2.data_io.load_result () assert (result_b!=result2b).all() remove_previous_results (path_results=path_results) # Test that no saving is done if save=False tr1 = Transform1 (path_results=path_results, save=False) tr1.fit (X) result = tr1.apply (X) assert not os.path.exists(path_results) # - tst.run (test_component_save_load, component_save_data, tag='dummy') # + [markdown] tags=[] # ### running fit / apply depending on whether estimator / result exists # - # exports tests.core.test_components #@pytest.mark.reference_fails def test_component_run_depend_on_existence (): path_results = 'component_run_existence' remove_previous_results (path_results=path_results) tr1 = TransformWithFitApply2 (path_results=path_results, error_if_fit_func=True, error_if_apply_func=True) X = np.array ([100, 90, 10]) result = tr1.fit_apply (X) assert (result==(X+200)).all() assert os.listdir(f'{path_results}/models')==['transform_with_fit_apply2_estimator.pk'] assert os.listdir(f'{path_results}/whole')==['transform_with_fit_apply2_result.pk'] tr1 = TransformWithFitApply2 (path_results=path_results, error_if_fit_func=True, error_if_apply_func=True, error_if_fit_func_apply=True) result2 = tr1.fit_apply (X) assert (result2==(X+200)).all() assert tr1.estimator=={'sum': 200} tr2 = TransformWithFitApply2 (path_results=path_results, error_if_fit_func=True, error_if_apply_func=True, error_if_fit_apply_func=True) result3 = tr2.apply (X) assert (result3==(X+200)).all() assert tr2.estimator=={'sum': None} os.remove (f'{path_results}/models/transform_with_fit_apply2_estimator.pk') with pytest.raises (RuntimeError): result3 = tr2.fit_apply (X) tr2.error_if_fit_apply_func = False result4 = tr2.fit_apply (X) assert tr2.estimator=={'sum': 200} assert (result4==(X+200)).all() os.remove (f'{path_results}/whole/transform_with_fit_apply2_result.pk') tr3 = TransformWithFitApply2 (path_results=path_results, error_if_fit_func=True, error_if_apply_func=True, error_if_fit_apply_func=True) with pytest.raises (RuntimeError): _ = tr3.apply (X) with pytest.raises (RuntimeError): _ = tr3.fit_apply (X) tr3.error_if_fit_apply_func = False result5 = tr3.fit_apply (X) assert tr3.estimator=={'sum': 200} assert (result5==(X+200)).all() assert os.listdir (f'{path_results}/whole')==['transform_with_fit_apply2_result.pk'] assert 
os.listdir (f'{path_results}/models')==['transform_with_fit_apply2_estimator.pk'] remove_previous_results (path_results) tr4 = TransformWithFitApply2 (path_results=path_results, error_if_fit_func=False, error_if_apply_func=False, error_if_fit_apply_func=True) result6 = tr4.fit(X).apply (X) assert tr4.estimator=={'sum': 200} assert (result6==(X+200)).all() assert os.listdir (f'{path_results}/whole')==['transform_with_fit_apply2_result.pk'] assert os.listdir (f'{path_results}/models')==['transform_with_fit_apply2_estimator.pk'] remove_previous_results (path_results) tr5 = TransformWithoutFitApply2 (path_results=path_results, error_if_fit_func=False, error_if_apply_func=False) result7 = tr5.fit(X).apply (X) assert tr5.estimator=={'sum': 200} assert (result7==(X+200)).all() assert os.listdir (f'{path_results}/whole')==['transform_without_fit_apply2_result.pk'] assert os.listdir (f'{path_results}/models')==['transform_without_fit_apply2_estimator.pk'] remove_previous_results (path_results) tst.run (test_component_run_depend_on_existence, tag='dummy') # + [markdown] tags=[] # ### Logger # - # exports tests.core.test_components def test_component_logger (component_save_data): X = component_save_data tr1 = Transform1 (verbose=0) tr1.fit (X) result = tr1.apply (X) tr1 = Transform1 (verbose=1) tr1.fit (X) result = tr1.apply (X) tr1 = Transform1 (verbose=2) tr1.fit (X) result = tr1.apply (X) tst.run (test_component_logger, component_save_data, tag='dummy') # ### Passing data_converter and data_io # exports tests.core.test_components _ def test_component_data_converter (): class MyTransform (Component): def __init__ (self, **kwargs): super().__init__ (data_converter='PandasConverter', **kwargs) def _apply (self, x): return x*2 my_transform = MyTransform (separate_labels=False) assert my_transform.data_converter.separate_labels is False assert type(my_transform.data_converter) is PandasConverter # example where data-converter uses class-specific parameters config = dict(separate_labels=False, MyTransform=dict(separate_labels=True)) my_transform = MyTransform (**config) assert my_transform.data_converter.separate_labels is True assert config['separate_labels'] is False tst.run (test_component_data_converter, tag='dummy') # example using data_io # exports tests.core.test_components def test_component_data_io (): import pandas as pd from dsblocks.utils.utils import remove_previous_results path_results = 'test_data_io' remove_previous_results (path_results=path_results) class MyTransform (Component): def __init__ (self, **kwargs): super().__init__ (result_io='pandas', **kwargs) def _fit (self, X, y=None): self.estimator = Bunch(sum=100) def _apply (self, x): return pd.DataFrame ([[1,2],[3,4]], columns=['a','b']) my_transform = MyTransform (path_results='do_not_use', MyTransform=dict(path_results=path_results)) my_transform.fit (1) assert os.listdir (f'{path_results}/models')==['my_transform_estimator.pk'] df1 = my_transform.apply (1) assert os.listdir (f'{path_results}/whole')==['my_transform_result.parquet'] assert not os.path.exists ('do_not_use') del my_transform my_transform = MyTransform (path_results='do_not_use', MyTransform=dict(path_results=path_results)) #assert my_transform.estimator is None my_transform.load_estimator() assert my_transform.estimator == Bunch(sum=100) df2 = my_transform.load_result () pd.testing.assert_frame_equal (df1, df2) remove_previous_results (path_results=path_results) tst.run (test_component_data_io, tag='dummy') # ### assert_equal # + tags=[] # exports 
tests.core.test_components def test_component_equal (): path_results = 'assert_equal' remove_previous_results (path_results=path_results) class MyTransform (Component): def __init__ (self, noise=1e-10, different = False, **kwargs): super().__init__ (result_io='pandas', **kwargs) def _fit (self, X, y=None): self.estimator = Bunch(sum=100) def _generate_noise (self): while True: noise = np.random.rand() * self.noise if noise > self.noise/10: break return noise def _apply (self, x): df = pd.DataFrame ([[1.0,2.0],[3.0,4.0]], columns=['a','b']) + self._generate_noise () if self.different: df = df+10 x = np.array([[10.0,20.0],[30.0,40.0]]) + self._generate_noise () result = dict(sequence=[[1.0,2.0], x+1, dict(vector=x, data=df)], array=x+10) return result tr = MyTransform () tr2= MyTransform () tr.assert_equal (tr(1), tr2(1), significant_digits=7) import pytest with pytest.raises (AssertionError): tr = MyTransform (noise=1e-3, verbose=1) tr2= MyTransform (noise=1e-3, verbose=1) tr.assert_equal (tr(1), tr2(1), significant_digits=7) with pytest.raises (AssertionError): tr = MyTransform (verbose=1, different=True) tr2= MyTransform (verbose=1) tr.assert_equal (tr(1), tr2(1)) result = tr.assert_equal (tr(1), tr2(1), raise_error=False) assert result is False remove_previous_results (path_results=path_results) # - tst.run (test_component_equal, tag='dummy') # ### set_paths # + tags=[] # exports tests.core.test_components #@pytest.mark.reference_fails def test_set_paths (): def assert_paths (x, path_results, path_models): base = os.path.abspath('.') assert x.path_results==Path(f'{base}/{path_results}') assert x.data_io.path_results==Path(f'{base}/{path_results}') assert x.path_models==Path(f'{base}/{path_models}') assert x.data_io.path_models==Path(f'{base}/{path_models}') path_results = 'test_set_paths_1' path_models = 'test_set_paths_1' tr = Component (path_results=path_results) assert_paths (tr, path_results, path_models) path_results = 'test_set_paths_2' tr.data_io.set_path_results (path_results) assert_paths (tr, path_results, path_models) path_models='test_set_paths_models_1' tr.data_io.set_path_models (path_models) assert_paths (tr, path_results, path_models) path_results = 'test_set_paths_a' path_models = 'test_set_paths_models_a' tr = Component (path_results=path_results, path_models=path_models) assert_paths (tr, path_results, path_models) path_results = 'test_set_paths_b' tr.data_io.set_path_results (path_results) assert_paths (tr, path_results, path_models) path_models = 'test_set_paths_models_b' tr.data_io.set_path_models (path_models) assert_paths (tr, path_results, path_models) # - tst.run (test_set_paths, tag='dummy') # ### determine fit function # + tags=[] # exports tests.core.test_components from dsblocks.utils.dummies import DummyEstimator class TransformWithoutFit (Component): def __init__ (self, factor=2, **kwargs): super().__init__ (**kwargs) def _apply (self, X): return X * self.factor class TransformWithFitApplyOnly (Component): def __init__ (self, **kwargs): super().__init__ (**kwargs) def _apply (self, X): return X + self.sum def _fit_apply (self, X, y=None): self.sum = X.sum(axis=0)*10 return X + self.sum def test_determine_fit_function (): # example when there is _fit implemented component = TransformWithoutFitApply () X = np.array ([1,2,3]) component.fit (X) X2 = np.array ([10,20,30]) r = component (X2) assert (r == (X.sum() + X2)).all() assert component.is_model # example when there is estimator component = Component (DummyEstimator (2)) X = np.array ([1,2,3]) 
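    # DummyEstimator exposes a plain `fit` method, so _assign_fit_func picks up
    # estimator.fit as the fit function and is_model stays True; the assertions
    # below check that fitting updates the wrapped estimator.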
component.fit (X) assert component.estimator.sum == 6 X2 = np.array ([10,20,30]) r = component (X2) assert (r == (X.sum() + X2*2)).all() assert component.is_model # example when there is no _fit implemented, and there is no estimator component = TransformWithoutFit () X = np.array ([1,2,3]) component.fit (X) X2 = np.array ([10,20,30]) r = component (X2) assert (r == (X2*2)).all() assert not component.is_model assert component._fit == component._fit_ # example when there is only fit_apply implemented component = TransformWithFitApplyOnly () X2 = np.array ([10,20,30]) r = component.fit_apply (X2) assert (r == (X2 + X2.sum(axis=0)*10)).all() assert component.is_model assert component._fit == component._fit_ def test_use_fit_from_loaded_estimator (): path_models = 'test_use_fit_from_loaded_estimator' component = Component (DummyEstimator (2), path_models=path_models) X = np.array ([1,2,3]) component.fit (X) assert (Path (path_models) / 'models').exists() del component estimator1 = DummyEstimator (2) print (estimator1) component = Component (estimator1, path_models=path_models) print ('before loading') print (component.estimator) print (component._fit) print (component.result_func) component.load_estimator () print ('after loading') print (component.estimator) print (component._fit) print (component.result_func) assert component.estimator.sum == 6 assert component.is_model X2 = np.array ([10,20,30]) r = component (X2) assert (r == (X.sum() + X2*2)).all() remove_previous_results (path_models) # - tst.run (test_determine_fit_function, tag='dummy') tst.run (test_use_fit_from_loaded_estimator, tag='dummy') # ### Use direct methods # + tags=[] # exports tests.core.test_components from dsblocks.utils.dummies import Multiply10direct, Max10direct def test_direct_methods (): # input X = np.array ([1,2,3]) # example where we do not use direct methods component = Max10direct (verbose=2) component.fit (X) r = component (X) assert (r==X*10+X.max()).all() component = Max10direct (verbose=2, error_if_apply=True) component.fit (X) with pytest.raises (RuntimeError): r = component (X) #assert component.fitted #assert component.applied # example where we use direct methods component = Max10direct (direct_apply=True, verbose=2, error_if_apply=True) component.logger.info (f'{"-"*100}') component.logger.info (f'direct_apply={component.direct_apply}, direct_fit={component.direct_fit}, direct_fit_apply={component.direct_fit_apply}\n') component.fit (X) r = component (X) assert (r==X*10+X.max()).all() #assert component.fitted #assert not component.applied component = Max10direct (direct_fit=True, verbose=2, error_if_fit=True) component.logger.info (f'{"-"*100}') component.logger.info (f'direct_apply={component.direct_apply}, direct_fit={component.direct_fit}, direct_fit_apply={component.direct_fit_apply}\n') component.fit (X) r = component.apply (X) assert (r==X*10+X.max()).all() #assert not component.fitted #assert component.applied component = Max10direct (direct_apply=True, direct_fit=True, verbose=2, error_if_apply=True, error_if_fit=True) component.logger.info (f'{"-"*100}') component.logger.info (f'direct_apply={component.direct_apply}, direct_fit={component.direct_fit}, direct_fit_apply={component.direct_fit_apply}\n') component.fit (X) r = component.transform (X) assert (r==X*10+X.max()).all() #assert not component.fitted #assert not component.applied # example when there is no _fit implemented and we call fit_apply component = Multiply10direct (verbose=2, error_if_fit=True) component.logger.info (f'{"-"*100}') 
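    # With direct_apply=True, __call__ returns result_func(*X) immediately,
    # before the error_if_apply check and without the load/save/convert
    # machinery, so applying this component does not raise even though
    # error_if_apply=True.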
component.logger.info (f'direct_apply={component.direct_apply}, direct_fit={component.direct_fit}, direct_fit_apply={component.direct_fit_apply}\n') r = component.fit_apply (X) assert (r==X*10).all() #assert not component.is_model assert component.fit == component._fit_ #assert component.fit_apply == component.apply r2 = component.fit (X).apply (X) assert (r==X*10).all() # example when there is no _fit implemented and we want a direct apply call component = Multiply10direct (verbose=2, direct_apply=True, error_if_apply=True, error_if_fit=True) component.logger.info (f'{"-"*100}') component.logger.info (f'direct_apply={component.direct_apply}, direct_fit={component.direct_fit}, direct_fit_apply={component.direct_fit_apply}\n') r = component.fit_apply (X) assert (r==X*10).all() assert not component.is_model assert component.fit == component._fit_ #assert component.fit_apply == component._apply #assert component.fit_transform == component._apply #assert not component.applied r2 = component.fit (X).apply (X) assert (r==X*10).all() #assert not component.applied # - tst.run (test_direct_methods, tag='dummy') # ### Pass apply function by parameter # + tags=[] # exports tests.core.test_components def test_pass_apply (): component = Component (apply=lambda x: x*10, verbose=2, direct_apply=True, error_if_apply=True) X = np.array ([1,2,3]) r = component (X) assert (r==X*10).all() # - tst.run (test_pass_apply, tag='dummy') # ### Get DataIO specific parameters # + tags=[] # exports tests.core.test_components def test_get_specific_data_io_parameters_for_component (): component = Component (tag='data', x=3, par=[1,2], path_results='hello', path_results_data='world', other='yes', load_result_data = False, save_model_data=True) check_last_part(component.path_results, 'world') assert component.data_io.load_result_flag == False assert component.data_io.save_model_flag == True # - tst.run (test_get_specific_data_io_parameters_for_component, tag='dummy') # + [markdown] tags=[] # #### get_specific_data_io_parameters # - # exports tests.core.test_utils def test_get_specific_data_io_parameters (): component = Component () config = component.get_specific_data_io_parameters ( 'data', **dict(x=3, par=[1,2], path_results='hello', path_results_data='world', other='yes', load_result_data = True)) assert config == dict (path_results='world', load_result=True) tst.run (test_get_specific_data_io_parameters, tag='dummy') # ### StandardConverter # + tags=[] # exports tests.core.test_components from dsblocks.utils.dummies import Min10direct, SumXY def test_standard_converter_in_component (): component = Min10direct (data_converter='StandardConverter') X, y = np.array([1,2,3]), np.array([0,1,0]) Xr = component.fit_apply (X, y) assert (Xr==X*10+X.min()).all() Xr, yr = component.fit_apply (X, y, sequential_fit_apply=True) assert (Xr==X*10+X.min()).all() assert (yr==y).all() component = SumXY (data_converter='StandardConverter') Xr = component.fit_apply ((X,X*2), y=None) assert (Xr==X+X*2).all() Xr = component.fit_apply ((X,X*2), y=None, sequential_fit_apply=True) assert (Xr==X+X*2).all() #Xr, yr = component.fit_apply ((X,X*2), y, sequential_fit_apply=True) Xr, yr = component.fit_apply ((X,X*2), y, sequential_fit_apply=True) assert (Xr==X+X*2).all() assert (yr==y).all() # - tst.run (test_standard_converter_in_component, tag='dummy') # ### set_suffix # + tags=[] # exports tests.core.test_components def test_set_suffix (): component = Component (name='my_component') assert component.name == 'my_component' component.set_suffix 
('first') assert component.name == 'my_component_first' component.set_suffix ('second') assert component.name == 'my_component_second' component.set_suffix ('third') assert component.name == 'my_component_third' component.set_name ('another') assert component.name == 'another' component.set_suffix ('first') assert component.name == 'another_first' component.set_suffix ('second') assert component.name == 'another_second' component.set_name ('last') assert component.name == 'last' component.set_suffix ('first') assert component.name == 'last_first' component.set_suffix ('second') assert component.name == 'last_second' # - tst.run (test_set_suffix, tag='dummy') # + [markdown] tags=[] # ## SamplingComponent # + tags=[] #export class SamplingComponent (Component): """ Component that makes use of labels in transform method. When calling the transform method, one of the columns of the received data is assumed to contain the ground-truth labels. This allows the transform method to modify the number of observations, changing the number of rows in the data and in the labels. See `PandasConverter` class in `dsblocks.core.data_conversion`. """ def __init__ (self, estimator=None, transform_uses_labels=True, **kwargs): # the SamplingComponent over-rides the following parameters: super().__init__ (estimator=estimator, transform_uses_labels=transform_uses_labels, **kwargs) # - # ### Usage example # exports tests.core.test_components #@pytest.mark.reference_fails def test_sampling_component (): c = SamplingComponent (data_converter='DataConverter') assert c.transform_uses_labels assert not hasattr(c.data_converter,'transform_uses_labels') c = SamplingComponent (data_converter='PandasConverter') assert c.data_converter.transform_uses_labels tst.run (test_sampling_component, tag='dummy') # ## SklearnComponent # + #export class SklearnComponent (Component): """ Component that saves estimator parameters in pickle format. Convenience subclass used when the results can be saved in pickle format. See `SklearnIO` class in `core.utils`. 
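    A minimal sketch (mirroring the usage test below): results and estimators
    are pickled with joblib.

    >>> import joblib
    >>> c = SklearnComponent ()
    >>> c.data_io.result_save_func == joblib.dump
    True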
""" def __init__ (self, estimator=None, data_io='SklearnIO', transform_uses_labels=False, **kwargs): super().__init__ (estimator=estimator, data_io=data_io, transform_uses_labels=False, **kwargs) # alias PickleSaverComponent = SklearnComponent # - # ### Usage example # exports tests.core.test_components def test_sklearn_component (): c = SklearnComponent () assert c.data_io.fitting_load_func==joblib.load assert c.data_io.result_save_func==joblib.dump tst.run (test_sklearn_component, tag='dummy') # + [markdown] tags=[] # ## NoSaverComponent # - #export class NoSaverComponent (Component): """Component that does not save any data.""" def __init__ (self, estimator=None, data_io='NoSaverIO', **kwargs): super().__init__ (estimator=estimator, data_io=data_io, **kwargs) # ### Usage example # exports tests.core.test_components .reference_ def test_no_saver_component (): c = NoSaverComponent () assert c.data_io.__class__.__name__ == 'NoSaverIO' tst.run (test_no_saver_component, tag='dummy') # + [markdown] tags=[] # ## OneClassSklearnComponent # - #export class OneClassSklearnComponent (SklearnComponent): """Component that uses only normal data (labelled with 0) for fitting parameters.""" def __init__ (self, estimator=None, **kwargs): super().__init__ (estimator=estimator, **kwargs) def _fit (self, X, y=None): assert y is not None, 'y must be provided in OneClassSklearnComponent class' X = X[y==0] assert self.estimator is not None, 'estimator must be provided in OneClassSklearnComponent class' self.estimator.fit (X, y) # + [markdown] tags=[] # ### Usage example # + tags=[] # exports tests.core.test_components .reference_fails def get_data_for_one_class (): data = np.r_[np.ones ((5,2)), 2*np.ones((5,2))] y = np.r_[np.ones ((5,)), np.zeros((5,))] return data, y def test_one_class_sklearn_component (): path_results = 'one_class_sklearn_component' remove_previous_results (path_results=path_results) data, y = get_data_for_one_class () from sklearn.preprocessing import MinMaxScaler result1 = OneClassSklearnComponent (MinMaxScaler()).fit(data,y).transform (data) result2 = MinMaxScaler().fit(data[y==0]).transform (data) assert (result1==result2).all().all() remove_previous_results (path_results=path_results) # - tst.run (test_one_class_sklearn_component, tag='dummy') # ## PandasComponent #export class PandasComponent (Component): """ Component that preserves the DataFrame format for incoming data and results. This component also writes results in parquet format, by default. See `PandasConverter` in `core.data_conversion` for details on the data conversion performed. 
""" def __init__ (self, estimator=None, data_converter='PandasConverter', data_io='PandasIO', **kwargs): super().__init__ (estimator=estimator, data_converter=data_converter, data_io=data_io, **kwargs) # ### Usage example # exports tests.core.test_components .reference_fails def test_pandas_component (): c = PandasComponent () assert c.data_converter.__class__.__name__ == 'PandasConverter' assert c.data_io.__class__.__name__ == 'PandasIO' tst.run (test_pandas_component, tag='dummy') -- --- -- jupyter: -- jupytext: -- text_representation: -- extension: .hs -- format_name: light -- format_version: '1.5' -- jupytext_version: 1.14.4 -- kernelspec: -- display_name: Haskell -- language: haskell -- name: haskell -- --- -- # Day 9: Explosives in Cyberspace inputLines = lines <$> readFile "input/day9.txt" decompressedLength :: Bool -> String -> Int decompressedLength recursive xs = decompressedLength' xs where -- getSequenceLength determines the (decompressed) length of the sequence after a marker. getSequenceLength :: String -> Int getSequenceLength = if recursive then decompressedLength' else length -- It seems that there are currently no Haskell modules for regular expressions in -- ihaskell, so we parse the markers manually. decompressedLength' "" = 0 decompressedLength' ('(':xs) = parseSequenceLength xs decompressedLength' xs = length beforeMarker + decompressedLength' rest where (beforeMarker, rest) = span (/= '(') xs -- The beginning of ys is the length declaration inside a marker. -- Parse it and continue processing. parseSequenceLength :: String -> Int parseSequenceLength ys = parseCount (read markerLength) $ tail rest where (markerLength, rest) = span (/= 'x') ys -- The beginning of ys is the repeat count declaration inside a marker. -- Parse it and continue processing. parseCount :: Int -> String -> Int parseCount markerLength ys = parseSequence markerLength (read markerCount) $ tail rest where (markerCount, rest) = span (/= ')') ys -- The beginning of ys is the string after the marker. Calculate the total decompressed -- length of the marker and add the decompressed length of the remainder. parseSequence :: Int -> Int -> String -> Int parseSequence markerLength markerCount ys = markerCount * getSequenceLength zs + decompressedLength' rest where (zs, rest) = splitAt markerLength ys -- # Part 1: No recursive decompression of markers decompressedLength1 = decompressedLength False -- Verify the given expample results. decompressedLength1 "ADVENT" decompressedLength1 "A(1x5)BC" decompressedLength1 "(3x3)XYZ" decompressedLength1 "A(2x2)BCD(2x2)EFG" decompressedLength1 "(6x1)(1x3)A" decompressedLength1 "X(8x2)(3x3)ABCY" decompressedLength1 . head <$> inputLines -- # Part 2: Decompress markers recursively decompressedLength2 = decompressedLength True -- Verify the given expample results. decompressedLength2 "(3x3)XYZ" decompressedLength2 "X(8x2)(3x3)ABCY" decompressedLength2 "(27x12)(20x12)(13x14)(7x10)(1x12)A" decompressedLength2 "(25x3)(3x3)ABC(2x3)XY(5x2)PQRSTX(18x9)(3x2)TWO(5x7)SEVEN" decompressedLength2 . head <$> inputLines # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ARCH: Autoregressive Conditional Heteroscedasticity # # ## Volatilidad # # - Buscar estabilidad de la serie a largo plazo. # - Nos interesa la magnitud de los resiudos $\epsilon_t$ # - Menor varianza de los datos implica menor volatilidad. 
Eso se traduce en menor riesgo y mayor seguridad en las inversiones. # - Tomar $\epsilon_{t}^2$ resuelve el problema de signos positivos y negativos. # - $\epsilon_{t}^2$ penaliza diferencias altas entre valores altos y predicciones. # # Consiste en dos ecuaciones: basado en la $\mu$ (media) y en la $\sigma^2$ # - Medición de cambios inesperados. # # ## ARCH # # Dos componentes en el modelo: # i. Heterocedasticidad: dispersión diferente (medidas de dispersión dcomo $\sigma$ y $\sigma^2$. El énfasis es en esta última) # # ii. Condicional: un valor que depende de otro, i.e. $P(A|B)$. # # El modelo autorregresivo tomando los dos componentes mencionados: # - $(\sigma_{t}^2|x_1\ x_2\ x_3\ ... x_n)$ # # - $(\sigma_{t}^2|\sigma_1\sigma_2\sigma_3\ ... \sigma_n)$ # # ### ARCH(1) # # - $Var(x_t|x_{t-1}) = \alpha_0 + \alpha_1 \epsilon_{t-1}^2$ # - $\sigma_{t}^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2$ # # ### ARCH(q) # # - $\sigma_{t}^2 = \alpha_0 + \alpha_1 \epsilon_{t-1}^2 + \alpha_2 \epsilon_{t-2}^2 + ... +\alpha_q \epsilon_{t-q}^2$ # # # ## Importing relevant packages import pandas as pd import numpy as np import matplotlib.pyplot as plt import statsmodels.graphics.tsaplots as sgt import statsmodels.tsa.stattools as sts from statsmodels.tsa.arima_model import ARIMA from scipy.stats.distributions import chi2 from math import sqrt import seaborn as sns sns.set() # ## Importing Data and Pre-processing raw_csv_data = pd.read_csv("../../DataSets/Index2018.csv") df_comp = raw_csv_data.copy() df_comp.date = pd.to_datetime(df_comp.date, dayfirst = True) df_comp.set_index("date", inplace = True) df_comp = df_comp.asfreq('b') df_comp = df_comp.fillna(method='ffill') df_comp.head() # + #df_comp.shape # + #df_comp = df_comp.asfreq('b') # - df_comp.isna().sum() # + #df_comp = df_comp.fillna(method = 'ffill') #df_comp.isna().sum() # + # Creamos una nueva variabale (una copia de ftse) df_comp['market_value'] = df_comp.ftse df_comp.head() # - import warnings warnings.filterwarnings("ignore") del df_comp['spx'] del df_comp['dax'] del df_comp['ftse'] del df_comp['nikkei'] # splitting data set size = int(len(df_comp)*0.8) df, df_test = df_comp.iloc[:size], df_comp.iloc[size:] # ### LLR TEST def LLR_test(mod_1,mod_2, DF = 1): L1 = mod_1.llf L2 = mod_2.llf LR = (2*(L2-L1)) p = chi2.sf(LR, DF).round(3) return p # ## Creating Return from 'ftse' values df['returns'] = df.market_value.pct_change(1)*100 df.head() df['sq_returns'] = df.returns.mul(df.returns) df.head() # ## Returns vs Squared Returns df.returns.plot(figsize =(20,5)) plt.title("Returns", size = 24) plt.show() df.sq_returns.plot(figsize =(20,5)) plt.title("Volatilidad", size = 24) plt.show() # Mirar plot con función de autocorrelación parcial de los retornos y retornos al cuadrado sgt.plot_pacf(df.returns[1:], lags = 40, alpha = 0.05, zero = False, method = ('ols')) plt.title("PACF of returns", size = 21) plt.show() # + sgt.plot_pacf(df.sq_returns[1:], lags = 40, alpha = 0.05, zero = False, method = ('ols')) plt.title("PACF of squared returns", size = 21) plt.show() # tiende a existir tendencias a corto plazo en la varianza # - # ## ARCH # !pip install arch from arch import arch_model model_arch_1 = arch_model(df.returns[1:]) results_arch_1 = model_arch_1.fit() # results_arch_1 = model_arch_1.fit(update_freq = 5) results_arch_1.summary() # # ARCH(1) # + model_arch_1 = arch_model(df.returns[1:], mean = 'Constant', vol = "ARCH", p = 1) results_arch_1 = model_arch_1.fit(update_freq = 1) results_arch_1.summary() # --- # jupyter: # jupytext: # text_representation: 
# extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="q73_fLRvTQYI" # Sound Event Detection using Approach 1 # + [markdown] colab_type="text" id="X3PASjCITQC0" # This colab demonstrates how to extract the AudioSet embeddings, using a VGGish deep neural network (DNN). # + [markdown] colab_type="text" id="IKSLc0bIB1QS" # Based on the directions at: https://github.com/tensorflow/models/tree/master/research/audioset # + colab={"base_uri": "https://localhost:8080/", "height": 749} colab_type="code" executionInfo={"elapsed": 9304, "status": "ok", "timestamp": 1584893793375, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-TpQl_yrDGdU/AAAAAAAAAAI/AAAAAAAAAlQ/HoAfg_rDDrA/s64/photo.jpg", "userId": "01397164172225109514"}, "user_tz": -60} id="228fji9C6c-q" outputId="9febc986-a23d-437c-bcd5-b12bef28e230" # !lscpu # !nvidia-smi # + colab={"base_uri": "https://localhost:8080/", "height": 124} colab_type="code" executionInfo={"elapsed": 33533, "status": "ok", "timestamp": 1584893817627, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-TpQl_yrDGdU/AAAAAAAAAAI/AAAAAAAAAlQ/HoAfg_rDDrA/s64/photo.jpg", "userId": "01397164172225109514"}, "user_tz": -60} id="xNUdYfA11vGR" outputId="df02d4a3-232e-438c-f597-1b0e9a079c8b" #Google drive access import os from google.colab import drive drive.mount('/content/gdrive',force_remount = True) #Directory root_path = 'gdrive/My Drive/SoundEventDetection/modelTraining/' os.chdir(root_path) # + colab={} colab_type="code" id="O1YVQb-MBiUx" # %%capture #Install necessary software # !pip install six # !pip install h5py # !pip install pydub # !pip install numpy # !pip install scipy # !pip install keras # !pip install future # !pip install resampy # !pip install ipython # !pip install soundfile # !pip install pysoundfile # !pip install scikit-learn # !apt-get install libsndfile1 # !pip install python==3.6 # !pip install matplotlib # !pip install cudnn==7.1.2 # !pip install cudatoolkit==9 # !pip install tensorflow-gpu==1.12.0 #Install: cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb # !dpkg -i cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb # !apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub # !apt-get update # !apt-get install cuda=9.0.176-1 #OR # #!wget https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda-repo-ubuntu1604-9-0-local_9.0.176-1_amd64-deb #VGGish model checkpoint, in TensorFlow checkpoint format. 
import os os.chdir("trained_models") # !wget https://storage.googleapis.com/audioset/vggish_model.ckpt os.chdir("..") # + colab={} colab_type="code" id="jmhLEKk8bxW2" # %%capture #Important DO NOT DELETE #import six import sys #import h5py #import math #import glob import json #import time import numpy as np import pandas as pd import soundfile as sf import tensorflow as tf import matplotlib.pyplot as plt #from pydub.playback import play #from pydub import AudioSegment #from scipy.io import wavfile #from scipy.io.wavfile import write sys.path.insert(1, os.path.join(sys.path[0], '../')) #ML imports #from keras.layers import Input, Dense, BatchNormalization, Dropout, Activation, Concatenate from keras.layers import Lambda from keras.optimizers import Adam from keras.models import load_model from keras.models import Model import keras.backend as K #External .py scripts from lib import mel_features from lib import vggish_input from lib import vggish_params from lib import vggish_postprocess from lib import vggish_slim from lib import utilities from lib import data_generator from lib.train_functions import evaluateCore, trainCore, average_pooling, max_pooling, attention_pooling, pooling_shape, train, writeToFile # + colab={} colab_type="code" id="ANV8Fhf2SeDj" # %%capture trainingType = ['decision_level_max_pooling', 'decision_level_average_pooling', 'decision_level_single_attention', 'decision_level_multi_attention', 'feature_level_attention'] balanced = ['no_balance', 'balance_in_batch'] def loadModel(num1,num2): args = { "data_dir" : "packed_features/", "workspace" : "workspace/", "mini_data" : False, "balance_type" : balanced[num2], #'no_balance', 'balance_in_batch' "model_type" : trainingType[num1], "learning_rate" : 1e-3, "filename":utilities.get_filename("work/") } #Output directories sub_dir = os.path.join(args["filename"], 'balance_type={}'.format(args["balance_type"]), 'model_type={}'.format(args["model_type"])) modelFiles = [['200k_DLMP_B','200k_DLMP_U'],['200k_DLAP_B','200k_DLAP_U'], ['200k_DLSA_B','200k_DLSA_U'],['200k_DLMA_B','200k_DLMA_U'], ['200k_FLA_B','200k_FLA_U']] #CNN Architecture model = train(args, 1) optimizer = Adam(lr=args["learning_rate"]) model.compile(loss='binary_crossentropy', optimizer=optimizer) #Model dir #models_dir ="workspace/models/work/balance_type=balance_in_batch/model_type=feature_level_attention" models_dir = "trained_models/"+modelFiles[num1][num2]+".h5" #latest_file ="workspace/models/work/balance_type=no_balance/model_type=decision_level_multi_attention/md_4000_iters.h5" print(models_dir) model.load_weights(models_dir) print(type(model)) return model model = loadModel(2,1) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 360237, "status": "ok", "timestamp": 1584894144366, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-TpQl_yrDGdU/AAAAAAAAAAI/AAAAAAAAAlQ/HoAfg_rDDrA/s64/photo.jpg", "userId": "01397164172225109514"}, "user_tz": -60} id="TVHHYowNsZEK" outputId="36c91ae1-4f0b-4158-8d18-29e6b0bc5af5" ''' from keras.models import load_model model = load_model( "trained_models/"+"200k_DLSA_U"+".h5") ''' # + colab={} colab_type="code" id="e_AoyOcYSd5S" #start_time = time.time() def getTime(num, start_time): print(str(num)+ ":" + str(time.time() - start_time)) return time.time() # + colab={} colab_type="code" id="Huv3co4e1JL2" #Get labels class_labels = pd.read_csv("lib/class.csv").drop(columns=['index','mid'])['display_name'].to_numpy() def getLabeling(class_labels,num): 
return class_labels[num] # + colab={} colab_type="code" id="rx-mdaBcSeA1" #Predict on input def predictOnVggishInput(df, model): results = [] attention_results = [] #Handling edge case of 10 seconds length = df.shape[0] if(length==10): length = 11 for endNumber in range(10,length): startNumber = endNumber - 10 df_final = [df[startNumber:endNumber]] df_final = np.asarray(df_final) df_final = (np.float32(df_final) - 128.) / 128. output = model.predict(df_final) #attention_layer_output=model.get_layer(layer_name).output #intermediate_model=tf.keras.models.Model(inputs=model.input,outputs=layer_output) #attention_layer_prediction=intermediate_model.predict(df_final) #attention_results.append(attention_layer_prediction) results.append(output.astype(np.float32)) return results # + colab={} colab_type="code" id="bqkvenjFSp89" def plotResults(df): #Create plot plt.rcParams["figure.figsize"] = (20,10) groups = df.groupby('Class') # Plot fig, ax = plt.subplots() ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling for name, group in groups: ax.plot(group.StartTime, group.Probability, marker='o', linestyle='-', ms=6, label=getLabeling(class_labels,name)) f = "output/dogprobsld.csv" df = pd.read_csv(f) #plt.plot(df[df['Class']==0]['Second'],df[df['Class']==0]['Class']+1.1, marker='o', linestyle='', ms=6,color='C0', label="Speech SLD") #plt.plot(df[df['Class']==74]['Second'],df[df['Class']==74]['Class']-72.8, marker='o', linestyle='', ms=6,color='C3', label="Dog SLD") ax.legend() #plt.xlim((0, 180)) plt.show() # + colab={} colab_type="code" id="BwtZ__sWIUX0" #VGGish transformation def vggish_fullprocess(wav_file,data,sr,switch): # wav_file = "output/dog.wav" #tempSave checkpoint = "trained_models/vggish_model.ckpt" pca_params = "trained_models/vggish_pca_params.npz" if switch==1: examples_batch = vggish_input.wavfile_to_examples(wav_file) else: examples_batch = vggish_input.waveform_to_examples(data,sr) # Prepare a postprocessor to munge the model embeddings. pproc = vggish_postprocess.Postprocessor(pca_params) with tf.Graph().as_default(), tf.Session() as sess: # Define the model in inference mode, load the checkpoint, and # locate input and output tensors. vggish_slim.define_vggish_slim(training=False) vggish_slim.load_vggish_slim_checkpoint(sess, checkpoint) features_tensor = sess.graph.get_tensor_by_name(vggish_params.INPUT_TENSOR_NAME) embedding_tensor = sess.graph.get_tensor_by_name(vggish_params.OUTPUT_TENSOR_NAME) # Run inference and postprocessing. 
[embedding_batch] = sess.run([embedding_tensor],feed_dict={features_tensor: examples_batch}) postprocessed_batch = pproc.postprocess(embedding_batch) return postprocessed_batch # + colab={} colab_type="code" id="M5gCYFOcS2Ej" # #Load vggish output # filename = 'output/data.h5' def predictOnAudioFile(wav_file): wav_data, sr = sf.read(wav_file, dtype='int16') assert wav_data.dtype == np.int16, 'Bad sample type: %r' % wav_data.dtype samples = wav_data / 32768.0 # Convert to [-1.0, +1.0] if len(samples.shape) > 1: data = np.mean(samples, axis=1) else: data = samples data = np.asarray(data) result_list = [] step = int(data.shape[0]/sr) #step = 30 for segment in np.arange(0,int(data.shape[0]/sr),step): if((data.shape[0]/sr)-segment0: segmentStart = segment - 9 else: segmentStart = segment data_segment = data[sr*segmentStart:sr*(segment+step)] data_embedding = vggish_fullprocess(wav_file,data_segment,sr,2) result = predictOnVggishInput(data_embedding, model) result_list.append(result) return result_list result_list = predictOnAudioFile('output/dog.wav') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 377437, "status": "ok", "timestamp": 1584894161616, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-TpQl_yrDGdU/AAAAAAAAAAI/AAAAAAAAAlQ/HoAfg_rDDrA/s64/photo.jpg", "userId": "01397164172225109514"}, "user_tz": -60} id="B7ob0l45TPG6" outputId="c2c88c22-270a-468d-f1b6-74ae020cab52" len(result_list[0][0]) # + colab={} colab_type="code" id="Hhtx0-HHZnWx" results = [] for i in range(0, len(result_list[0])): [t] = result_list[0][i] for j in range(0,len(t)): if t[j] < 0.25: t[j] = 0 t = t.tolist() results.append(t) results = np.asarray(results) results = np.transpose(results) # + colab={"base_uri": "https://localhost:8080/", "height": 243} colab_type="code" executionInfo={"elapsed": 377424, "status": "ok", "timestamp": 1584894161623, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-TpQl_yrDGdU/AAAAAAAAAAI/AAAAAAAAAlQ/HoAfg_rDDrA/s64/photo.jpg", "userId": "01397164172225109514"}, "user_tz": -60} id="Q4DKT8epcyHk" outputId="1f83c8bb-c817-4695-86ed-e1fc6d86909d" results # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 377416, "status": "ok", "timestamp": 1584894161625, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-TpQl_yrDGdU/AAAAAAAAAAI/AAAAAAAAAlQ/HoAfg_rDDrA/s64/photo.jpg", "userId": "01397164172225109514"}, "user_tz": -60} id="gPMdrJBXcCkZ" outputId="a104cce9-4c60-4a0a-a7c1-fb8bdbbf2796" len(result_list) # + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" executionInfo={"elapsed": 377706, "status": "ok", "timestamp": 1584894161926, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-TpQl_yrDGdU/AAAAAAAAAAI/AAAAAAAAAlQ/HoAfg_rDDrA/s64/photo.jpg", "userId": "01397164172225109514"}, "user_tz": -60} id="q9zgSWS_6ntI" outputId="ed5cff26-f93e-464b-e58a-b7f5f84ce63f" masterList = [] counter = 1 for i in result_list[0]: a = np.argpartition(i[0], -10)[-10:] secondsMap={} valuesMap={} for aa in a: valuesMap[getLabeling(class_labels,aa)] = 1.*i[0][aa] secondsMap['second'] = counter secondsMap['values'] = valuesMap masterList.append(secondsMap) counter = counter + 1 json.dumps(masterList) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 1813, "status": "ok", "timestamp": 1584894255997, "user": 
{"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-TpQl_yrDGdU/AAAAAAAAAAI/AAAAAAAAAlQ/HoAfg_rDDrA/s64/photo.jpg", "userId": "01397164172225109514"}, "user_tz": -60} id="d18Lrj5QS2QU" outputId="e032ab7b-73f1-4de0-c62c-740dd9ce1469" # Manipulate results result_table = pd.DataFrame(columns=['StartTime','Class','Probability']) i = 0 threshold = 0.2 for result_segment in result_list: for result in result_segment: result = result[0] classes = np.arange(len(result))[result>threshold] probs = result[result>threshold] for j in np.arange(len(probs)): result_table = result_table.append({'StartTime' : 0+i,'Class' : classes[j] , 'Probability' : probs[j]}, ignore_index=True) i=i+1 result_table["Class"] = result_table["Class"].astype(int) # Change column type # result_table['Class'] = pd.Categorical(result_table.Class) plotResults(result_table) result_table.head(20) # + colab={} colab_type="code" id="8YnB8SLU056F" rdf = result_table rdf['EndTime'] = rdf['StartTime'] + 10 # + colab={} colab_type="code" id="FWO2BzUvZbBU" def calc_weighted_prob(data): data['middle_second'] = data[['StartTime','EndTime']].mean(axis=1) data['weight'] = (5- abs(data['second'] - data['middle_second']))+1 weight_sum = data['weight'].sum() data['weight'] = data['weight']/weight_sum data['weighted_prob'] = data['Probability'] * data['weight'] d = {'second': data['second'].iloc[0],'Class': data['Class'].iloc[0] , 'smoothed_prob': [data['weighted_prob'].sum()]} smoothed_prob = pd.DataFrame(data=d) return smoothed_prob # + colab={} colab_type="code" id="oG3vn_dYOHtK" def plotSmoothedResults(df): #Create plot plt.rcParams["figure.figsize"] = (20,10) groups = df.groupby('Class') # Plot fig, ax = plt.subplots() ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling for name, group in groups: ax.plot(group.second, group.smoothed_prob, marker='o', linestyle='-', ms=6, label=getLabeling(class_labels,name)) f = "output/dogprobsld.csv" df = pd.read_csv(f) #plt.plot(df[df['Class']==0]['Second'],df[df['Class']==0]['Class']+1.1, marker='o', linestyle='', ms=6,color='C0', label="Speech SLD") #plt.plot(df[df['Class']==74]['Second'],df[df['Class']==74]['Class']-72.8, marker='o', linestyle='', ms=6,color='C3', label="Dog SLD") ax.legend() #plt.xlim((0, 180)) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" executionInfo={"elapsed": 384795, "status": "ok", "timestamp": 1584894169052, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-TpQl_yrDGdU/AAAAAAAAAAI/AAAAAAAAAlQ/HoAfg_rDDrA/s64/photo.jpg", "userId": "01397164172225109514"}, "user_tz": -60} id="R-xeoFQIze06" outputId="23c648a9-ccd5-40c1-e3a6-801b85492be5" first_second = np.min(rdf['StartTime']) last_second = np.max(rdf['EndTime']) smoothed_results = pd.DataFrame(columns=['second','Class','smoothed_prob']) for second in np.arange(first_second,last_second+1): data_second = rdf[(rdf['StartTime'] <=second) & (rdf['EndTime'] >= second)] data_second['second'] = second for Class in np.unique(np.asarray(data_second['Class'])): df = calc_weighted_prob(data_second[data_second['Class']==Class]) smoothed_results = smoothed_results.append(df) plotSmoothedResults(smoothed_results) # + colab={} colab_type="code" id="1hFIXsLG5so3" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Consumption analysis # ### Author : 
(https://github.com/NicolasLacroix) # # Licence : [Apache License 2.0] # # Data provided by [DataSud] # # Source file link (csv) : https://trouver.datasud.fr/dataset/8bfa93b0-ac2f-4148-b550-0ec5c917bb28/resource/52a8f5dd-758d-4e54-a837-8fc7ad57d378/download/eco2mix-regional-tr.csv # # [DataSud]: https://www.datasud.fr/ # # [Apache License 2.0]: https://github.com/NicolasLacroix/data-representation/blob/master/LICENSE # + import pandas as pd import matplotlib.pyplot as plt import numpy as np import dateutil.parser from datetime import datetime, date # - data_link = 'https://trouver.datasud.fr/dataset/8bfa93b0-ac2f-4148-b550-0ec5c917bb28/resource/52a8f5dd-758d-4e54-a837-8fc7ad57d378/download/eco2mix-regional-tr.csv' data = pd.read_csv(data_link, delimiter=';', encoding='utf_8', parse_dates=True) data.head() # TODO: use parse_date=True in pd.read_csv method instead data['Date'] = pd.to_datetime(data['Date']) data['Heure'] = pd.to_datetime(data['Heure'], format='%H:%M', utc=True).dt.time data['Date - Heure'] = pd.to_datetime(data['Date - Heure'], format='%Y-%m-%dT%H:%M:%S', utc=True) data['Date - Heure'] = data['Date - Heure'].dt.tz_convert('Europe/Paris') data.dtypes volumeLabels = list(data.columns.values)[6:15] percentLabels = list(data.columns.values)[15:-1] # Sorting data by ['Date - Heure'] data = data.sort_values(by=['Date - Heure']) data.head() # Today values today = datetime.combine(date.today(), datetime.min.time()) dailyData = data.loc[data['Date'] == today][['Date - Heure']+volumeLabels] dailyData.plot(x='Date - Heure', title="Today values") plt.show() def getExtremums(data, key): dailyData = getDailyData(data, 'Date - Heure', volumeLabels) # get dataframes per day min_serie = data.loc[data[key] == min(data[key])]['Date'] # Date column of data's serie where data[key] is min min_df = dailyData[pd.to_datetime(min_serie.values[0]).strftime('%Y-%m-%d')] # cell's value (date) to string max_serie = data.loc[data[key] == max(data[key])]['Date'] # Date column of data's serie where data[key] is max max_df = dailyData[pd.to_datetime(max_serie.values[0]).strftime('%Y-%m-%d')] # cell's value (date) to string return min_df, max_df min_serie = data.min() max_serie = data.max() min_df = data.loc[data['Date'] == min_serie['Date']][['Date - Heure']+volumeLabels] max_df = data.loc[data['Date'] == max_serie['Date']][['Date - Heure']+volumeLabels] max_value = max_serie['Consommation (MW)'] min_value = min_serie['Consommation (MW)'] min_df.plot(x='Date - Heure', title="Minimum") max_df.plot(x='Date - Heure', title="Maximum") plt.show() avg_df = data.mean() avg_df # average values per day avg_day = data[['Date'] + volumeLabels].groupby(['Date']).agg(np.mean) avg_day.plot(title="average values per day") plt.show() # average values per hour avg_hr = data[['Heure'] + volumeLabels].groupby(['Heure']).agg(np.mean) ax = avg_hr.plot(title="average values per hour") plt.show() # average percentage per day percent_df = data[percentLabels].mean() ax = percent_df.plot(autopct='%.2f', kind='pie', title='average percentage per day') ax.set_ylabel('') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 185} colab_type="code" id="--jFBFXI64sl" outputId="84e30ccb-a173-402f-e030-0f7920a9c881" # !pip install -q git+https://github.com/huggingface/transformers.git # !pip install -q 
tensorflow==2.1 # + colab={"base_uri": "https://localhost:8080/", "height": 50} colab_type="code" id="lVQzctdr6_3n" outputId="b23ea2c7-42e5-4e40-bd4d-6b312c338b6e" import torch import pandas as pd import nltk nltk.download('punkt') from nltk import tokenize import logging from tqdm import tqdm from tqdm import trange from transformers import ( CTRLLMHeadModel, CTRLTokenizer, GPT2LMHeadModel, GPT2Tokenizer, OpenAIGPTLMHeadModel, OpenAIGPTTokenizer, TransfoXLLMHeadModel, TransfoXLTokenizer, XLMTokenizer, XLMWithLMHeadModel, XLNetLMHeadModel, XLNetTokenizer, ) def prepare_ctrl_input(temperature, _, tokenizer, prompt_text): if temperature > 0.7: logger.info("CTRL typically works better with lower temperatures (and lower top_k).") encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False) if not any(encoded_prompt[0] == x for x in tokenizer.control_codes.values()): logger.info("WARNING! You are not starting your generation from a control code so you won't get good results") return prompt_text def prepare_xlm_input(xlm_language, model, tokenizer, prompt_text): # kwargs = {"language": None, "mask_token_id": None} # Set the language use_lang_emb = hasattr(model.config, "use_lang_emb") and model.config.use_lang_emb if hasattr(model.config, "lang2id") and use_lang_emb: available_languages = model.config.lang2id.keys() if xlm_language in available_languages: language = xlm_language else: language = None while language not in available_languages: language = input("Using XLM. Select language in " + str(list(available_languages)) + " >>> ") model.config.lang_id = model.config.lang2id[language] # kwargs["language"] = tokenizer.lang2id[language] # TODO fix mask_token_id setup when configurations will be synchronized between models and tokenizers # XLM masked-language modeling (MLM) models need masked token # is_xlm_mlm = "mlm" in args.model_name_or_path # if is_xlm_mlm: # kwargs["mask_token_id"] = tokenizer.mask_token_id return prompt_text def prepare_xlnet_input(padding_text, _, tokenizer, prompt_text): prompt_text = (padding_text if padding_text else PADDING_TEXT) + prompt_text return prompt_text def prepare_transfoxl_input(padding_text, _, tokenizer, prompt_text): prompt_text = (padding_text if padding_text else PADDING_TEXT) + prompt_text return prompt_text PREPROCESSING_FUNCTIONS = { "ctrl": prepare_ctrl_input, "xlm": prepare_xlm_input, "xlnet": prepare_xlnet_input, "transfo-xl": prepare_transfoxl_input, } MAX_LENGTH = int(10000) def adjust_length_to_model(length, max_sequence_length): if length < 0 and max_sequence_length > 0: length = max_sequence_length elif 0 < max_sequence_length < length: length = max_sequence_length # No generation bigger than model size elif length < 0: length = MAX_LENGTH # avoid infinite loop return length device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # + colab={} colab_type="code" id="0oKs1wk8QpP7" def initialize_model(model_type, length): MODEL_CLASSES = { "gpt2": (GPT2LMHeadModel, GPT2Tokenizer), "ctrl": (CTRLLMHeadModel, CTRLTokenizer), "openai-gpt": (OpenAIGPTLMHeadModel, OpenAIGPTTokenizer), "xlnet": (XLNetLMHeadModel, XLNetTokenizer), "transfo-xl": (TransfoXLLMHeadModel, TransfoXLTokenizer), "xlm": (XLMWithLMHeadModel, XLMTokenizer), } try: model_class, tokenizer_class = MODEL_CLASSES[model_type] except KeyError: raise KeyError("the model {} you specified is not supported.") tokenizer = tokenizer_class.from_pretrained(model_type) model = model_class.from_pretrained(model_type) model.to(device) length = 
adjust_length_to_model(length, max_sequence_length=model.config.max_position_embeddings) return model, tokenizer, length # + colab={"base_uri": "https://localhost:8080/", "height": 212, "referenced_widgets": ["6b80adae7498437790059c6f05d65a0d", "d20830f40f354d80b3ba5dce85fc6a9e", "dd4c9167792b4c7eb0b820abd26269e9", "9f1b97869062457eb7bd8671127a5514", "", "526e140d6e7c438bab1386f8c461efed", "6cc4a19887444edb932d67f08a43754e", "", "8955e7123c524a2c9a0f84e38dfe1327", "79e581b72a924617922892bab1b7db00", "", "", "", "48966cc8360845e0b06ede125339123b", "", "b224ab9cede244bdb66a1ca13cf36768", "", "edaf86a6fbc9483dbdc3e44e414c42e7", "", "", "9bd8d9ed81634d5fa77869d189b7437c", "b23132cadc98454797a781e8c9a669e7", "3462eda4bdea4b8396fa574a786cf31a", "203bfe73ef1a4e3f9ec77a825c4b7962", "a13600dba76a417da31ca1f7efdd08d6", "13334137efa147da840a0f857fc8321c", "2de98aa0244948e2813f4105ab9ac7f7", "6263c4be54d843ba9ede0208c3ceb3cd", "e39a7484b27346879ee32a3296c62128", "", "781a434546bb402a8e9d812ae87e2c38", "3562fdbf3e404b13b67a0068c7785a70"]} colab_type="code" id="tp2JuYeEBgxb" outputId="29606dca-4d2e-485a-a311-fbfaeea49028" model_type = "ctrl" #PUT NAME OF NEEDED MODEL length = 200 #MODEL LENGTH prompt_text = "Links Java vs Python" repetition_penalty = 1.2 temperature = 0.2 stop_token = None num_return_sequences = 5 # Initialize the model and tokenizer model, tokenizer, length = initialize_model(model_type, length) # + colab={"base_uri": "https://localhost:8080/", "height": 374} colab_type="code" id="-28paeXrU8jD" outputId="ea83197d-a0f6-476f-e896-3df2f4ed3bb5" def generate_text_from_condition(model, tokenizer, length, prompt_text, repetition_penalty, temperature, num_return_sequences, model_name, stop_token = None): # Initialize the model and tokenizer # prompt_text = args.prompt if args.prompt else input("Model prompt >>> ") # Different models need different input formatting and/or extra arguments requires_preprocessing = model_type in PREPROCESSING_FUNCTIONS.keys() if requires_preprocessing: prepare_input = PREPROCESSING_FUNCTIONS.get(model_type) preprocessed_prompt_text = prepare_input(temperature, model, tokenizer, prompt_text) encoded_prompt = tokenizer.encode( preprocessed_prompt_text, add_special_tokens=False, return_tensors="pt", add_space_before_punct_symbol=True ) else: encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to(device) if encoded_prompt.size()[-1] == 0: input_ids = None else: input_ids = encoded_prompt output_sequences = model.generate( input_ids=input_ids, max_length=length + len(encoded_prompt[0]), temperature=temperature, # top_k=args.k, # top_p=args.p, repetition_penalty=repetition_penalty, do_sample=True, num_return_sequences=num_return_sequences, ) # Remove the batch dimension when returning multiple sequences if len(output_sequences.shape) > 2: output_sequences.squeeze_() generated_sequences = [] for generated_sequence_idx, generated_sequence in enumerate(output_sequences): generated_sequence = generated_sequence.tolist() # Decode text text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True) # Remove all text after the stop token text = text[: text.find(stop_token) if stop_token else None] # Add the prompt at the beginning of the sequence. 
Remove the excess text that was used for pre-processing total_sequence = ( prompt_text + text[len(tokenizer.decode(encoded_prompt[0], clean_up_tokenization_spaces=True)) :] ) if model_name == 'ctrl': total_sequence = total_sequence.replace(prompt_text + " \n \n ", "") total_sequence = " ".join(tokenize.sent_tokenize(total_sequence)[:-1]) else: total_sequence = total_sequence.replace(prompt_text + "\n", "") total_sequence = " ".join(tokenize.sent_tokenize(total_sequence)[:-1]) generated_sequences.append(total_sequence) return generated_sequences gen = generate_text_from_condition(model, tokenizer, length, "Links What is better: Java or Python?", repetition_penalty, temperature, num_return_sequences, 'gpt-2', stop_token="<|endoftext|>") gen # + colab={} colab_type="code" id="vaZDdt-6ZVd4" df = pd.read_csv("data\100_short.csv") df = df[["object_a", 'object_b']] df = df.reindex(df.columns.tolist() + ['CTRL_gen_0','CTRL_gen_1', "CTRL_gen_2", "CTRL_gen_3", "CTRL_gen_4", 'GPT_2_gen_0','GPT_2_gen_1', "GPT_2_gen_2", "GPT_2_gen_3", "GPT_2_gen_4"], axis=1) # + colab={"base_uri": "https://localhost:8080/", "height": 573} colab_type="code" id="JDP8n6N0dM6P" outputId="e110b0b4-c420-4a1c-8cbd-ae03af05b01d" question = "Links What is better: {} or {}?" for i in trange(df.shape[0]): obj_a, obj_b = df.iloc[i, 0:2].tolist() prompt = question.format(obj_a, obj_b) generated_sequences = generate_text_from_condition(model, tokenizer, length, prompt, repetition_penalty, temperature, num_return_sequences, stop_token=None) for j in range(num_return_sequences): row_ind = df.index[i] df.loc[row_ind, 'CTRL_gen_' + str(j)] = generated_sequences[j] df # + colab={} colab_type="code" id="FmjmnRNpHENF" model_type = "gpt2" #PUT NAME OF NEEDED MODEL length = 200 #MODEL LENGTH prompt_text = "Links Java vs Python" repetition_penalty = 1.2 temperature = 0.2 stop_token = None num_return_sequences = 5 # Initialize the model and tokenizer model, tokenizer, length = initialize_model(model_type, length) # + colab={"base_uri": "https://localhost:8080/", "height": 342, "referenced_widgets": ["fd9bf7df242a4f86a723d6499ff8c050", "e9e438571a314ccfb834bff82c57987c", "8922308a5acc4cb288a1ba029d5ebadd", "64f43be4bd0b47409eafedae757c32ff", "8d7a93fdff0e4d528b048e7660962941", "", "6c958e8b6026434a87cf2e9e904eaf43", "", "", "", "", "04912ba2e1b64277b6610e10add9245b", "459d5c6d3f4d4178804daf05adfa8e53", "5c60bac1aa764a7db7479f2adb08e82f", "3222961cb81a436da459b949c7e79629", "eda87cb1f59a4cffa4e263dcedf8799e", "4ed7de85331b49bba20cce8b1a76cab8", "", "", "", "", "", "", "97573bad9f4c4ea1861f19af2c217820", "86ffdd924bb148bc882726ea43a9edef", "e0c569e8545d46c7aabe04d687e97b47", "", "", "", "1e8a303dd5e8422e9d10b4e90072bb71", "1e8fa00d5a27471c8bc8037091478130", "1f2e94c93ee1481ca3836123e0a5fd34"]} colab_type="code" id="K1tRFCduHN7h" outputId="794efc1f-2578-4943-dcd4-5e33ba6237b2" model, tokenizer, length = initialize_model(model_type, length) gen = generate_text_from_condition(model, tokenizer, length, "What is better: Java or Python?", repetition_penalty, temperature, num_return_sequences, stop_token=None) gen # + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="-oUZ3XmBGlVI" outputId="53646d36-274f-4bf9-962c-e49078d9ca67" question = "What is better: {} or {}?" 
for i in trange(df.shape[0]): obj_a, obj_b = df.iloc[i, 0:2].tolist() prompt = question.format(obj_a, obj_b) generated_sequences = generate_text_from_condition(model, tokenizer, length, prompt, repetition_penalty, temperature, num_return_sequences, model_name='gpt-2', stop_token="<|endoftext|>") for j in range(num_return_sequences): row_ind = df.index[i] df.loc[row_ind, 'GPT_2_gen_' + str(j)] = generated_sequences[j] df # + colab={} colab_type="code" id="_S8-XAPZ345P" df.to_csv("ctrl_gpt.csv") # + colab={"base_uri": "https://localhost:8080/", "height": 203} colab_type="code" id="mvHAmXcOIrzi" outputId="e85ef701-cd8a-495f-ff33-24e7c79c4840" d=[] while(1): d.append('1') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="UcQbleiLQOdo" # Step 1: Installing ktrain # + id="JunyltpzEZNs" colab={"base_uri": "https://localhost:8080/"} outputId="9de0b5f8-395f-4637-9547-ea19af8ff555" #installing ktrain # !pip install ktrain # + [markdown] id="Pjrc-hhkQKeg" # Step 2: Importing modules and reading the training file # + id="hv59UcpRTECJ" colab={"base_uri": "https://localhost:8080/"} outputId="d984cd89-d794-42cf-de60-712a385c4190" # %%time import time import ktrain from ktrain import text import pandas as pd from sklearn.model_selection import train_test_split import numpy as np # + id="NeiyRuVHTSGL" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="b23fa708-8df9-40f0-b1ef-dde06a90fe42" data=pd.read_csv("/content/drive/My Drive/Sentiment Analysis_Preskale/Cleaned_reviews.csv",index_col="Unnamed: 0") data.head() # + id="O1pZvvuwTtHe" colab={"base_uri": "https://localhost:8080/"} outputId="61a5c4cd-8a94-4ef5-e063-8f6605d308be" data=data.loc[:,["content","Polarity","clean_text"]] data.columns # + id="LhPyzWOSNOtd" colab={"base_uri": "https://localhost:8080/"} outputId="d7ba22cf-f76e-45b2-cb3f-a764eb006b8f" data.Polarity.value_counts() # + id="X7mg2iCZNUsr" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="5d4e8741-5b0e-49be-9f69-152cae9a6ffc" # cond=[df.Polarity=="Positive",df.Polarity=="Neutral",df.Polarity=="Negative"] # choice=[2,1,0] # df.Polarity=np.select(cond,choice) # df.Polarity.head() # + id="p9eXi5dwMKfv" # #Splitting the data into X,y # X=df["content"] # y=df["Polarity"] # X_train,X_test,y_train,y_test=train_test_split(X,y,shuffle=True,test_size=0.30, random_state=1) # + id="_-UZy7ejrnuC" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="df5df71e-c8b2-4432-c898-a1c9078d1eb5" type(X_train) # + id="HaEhg8wcMtpz" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="be283d2a-8192-42ad-cf4d-6ede150af8be" # print(f"X_train: {X_train.shape}") # print(f"y_train: {y_train.shape}") # print(f"X_test: {X_test.shape}") # print(f"y_test: {y_test.shape}") # + [markdown] id="Kq0gTzXdPuiB" # ####Step 3: Convert data to Features for BERT # ktrain provides a very convenient feature of directly converting the text # data directly into features for the model that is needed. All the text # preprocessing steps do not need to be manually performed and will be # taken care of by the library itself. Since we will be reading the data from # pandas series objects, we will use the function texts_from_array . 
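# +
# [Editor's sketch] The markdown above mentions `texts_from_array`, while the next
# cell actually uses `texts_from_df` on the whole DataFrame. A minimal sketch of the
# array-based variant, assuming the `data` DataFrame loaded earlier and that ktrain
# accepts string labels here (otherwise map the polarities to integers first); the
# variable names in this cell are illustrative and are not used later in the notebook.
X_arr = data["content"].astype(str).values
y_arr = data["Polarity"].astype(str).values
X_tr_arr, X_te_arr, y_tr_arr, y_te_arr = train_test_split(
    X_arr, y_arr, test_size=0.30, shuffle=True, random_state=1)
(x_tr_a, y_tr_a), (x_te_a, y_te_a), preproc_arr = text.texts_from_array(
    x_train=X_tr_arr, y_train=y_tr_arr,
    x_test=X_te_arr, y_test=y_te_arr,
    preprocess_mode="bert",  # same BERT preprocessing as texts_from_df below
    maxlen=150)              # class names are inferred from the string labels
# -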
# # + id="6Wa6Trx5Ptwe" colab={"base_uri": "https://localhost:8080/", "height": 134} outputId="2c2ecd31-af0a-4ad5-a013-77e06ead440c" (X_train_bert,y_train_bert), (X_val_bert,y_val_bert), preproc= text.texts_from_df(train_df=data, text_column="content", label_columns="Polarity", maxlen=150, preprocess_mode='bert') # + id="mf6445Kdua5A" colab={"base_uri": "https://localhost:8080/", "height": 266} outputId="18df5eea-e25a-4f07-a651-efc5d48f4098" X_val_bert # + id="2rKTGlE2uyDt" colab={"base_uri": "https://localhost:8080/"} outputId="d0eda6e5-3867-4800-e95c-7975bfafe3fc" print(len(X_val_bert)) print("Total length of the data:" ,data.shape) print("Shape of Validation Data :" , X_val_bert[0].shape) print("Shape of Train Data:", X_train_bert[0].shape) # + id="91OKMNL2IRvb" # + id="c1ok047iISDa" # + [markdown] id="6aIuZlnqXggh" # Step 4: Load Bert in a learner object # + id="FN-fhXv1XV7K" colab={"base_uri": "https://localhost:8080/"} outputId="72e702b2-9473-4292-9174-0dd4e378cb08" model = text.text_classifier(name = 'bert', train_data = (X_train_bert, y_train_bert), preproc = preproc) # + [markdown] id="4SmA7SC6YPh0" # The function " text_classifier " loads the pre-trained BERT model with a randomly initialized final Dense layer. It is worthwhile to mention that although the final Dense layer is randomly initialized, it will not be only one getting updated during the training process. Since we have not frozen any layers and all the layers of the model are trainable, the weights of all the layers of the model will be updated during backpropagation. # + id="H5urDluXYOai" learner = ktrain.get_learner(model=model, train_data=(X_train_bert, y_train_bert), val_data = (X_val_bert, y_val_bert), batch_size =32) # + [markdown] id="OUJEO5UoY37L" # The function " get_learner " creates a learner object with train and validation data which can be used to fine-tune the classifier. The last argument of get_learner is the batch size. We use a small batch size of 10. 
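# +
# [Editor's sketch, optional] ktrain also ships a learning-rate finder that can be
# run on this learner before training; the next cell only mentions it in a comment.
# The call names are ktrain's; max_epochs=1 is an assumption of this sketch to keep
# the sweep short, not a value from the original notebook.
learner.lr_find(max_epochs=1)  # sweep learning rates for (at most) one epoch
learner.lr_plot()              # plot loss vs. learning rate and pick a value just
                               # before the loss starts to climb steeply
# -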
# + [markdown] id="0i2i6jGdY_kN" # Step 5: Training (Finetuning the BERT Classifier) # + id="h5cEoWNxYzvx" #learner.lr_find() to find the best learning rate # + id="9P_al4edbtsA" colab={"base_uri": "https://localhost:8080/"} outputId="31823158-4400-4681-dab2-33fd6d5fbb52" #Essentially fit is a very basic training loop, whereas fit one cycle uses the one cycle policy callback history=learner.fit_onecycle(lr = 2e-5, epochs = 4) # + id="HwRj9JW4FMNg" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="2d71d102-229e-49ef-8cfe-bf38a7b39b4d" history.history # + id="6JKqu9oCvljE" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="27478d1b-24f9-41af-fbb1-e6b2117c2060" type(history.history) # + id="qz3XwfdOFmOy" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="082cc47f-0834-4563-fa67-2c393cce60c4" history.history.keys() # + id="RqhXas-OGa_U" colab={"base_uri": "https://localhost:8080/", "height": 195} outputId="add539d5-02e0-490a-caba-d43149103c2e" report=learner.validate(val_data=(X_val_bert, y_val_bert),class_names=['Negative', 'Neutral','Positive']) # + id="YjIbwTkyTV5I" colab={"base_uri": "https://localhost:8080/", "height": 70} outputId="84c53de9-5c8c-4ea1-d01f-8aa2ed593f24" report # + id="NJDREQM3UyY2" #plotting confusion matrix import matplotlib.pyplot as plt def plot_confusion_matrix(cm, classes,normalize=False,title='BERT Confusion matrix',cmap=plt.cm.YlOrRd): """ See full source and example: http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=0) plt.yticks(tick_marks, classes) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, cm[i, j], horizontalalignment="right", color="White" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # + id="cjfg5xrBVICo" colab={"base_uri": "https://localhost:8080/", "height": 329} outputId="67f89728-f91b-47ef-9ca0-259843f4c5c2" import itertools from google.colab import files # plt.figure(figsize=(5,5)) plot_confusion_matrix(report, classes=['Negative', 'Neutral','Positive']) # files.download( "bert confusion matrix.jpg" ) # + id="HViRwR7hMJlT" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="583dd3ea-f269-4130-8239-a999fe864b94" learner.print_layers() # + [markdown] id="pP5XuTeIG8Mo" # ### Getting predictor variable # + id="OEc2qsdkFpAF" predictor = ktrain.get_predictor(learner.model, preproc) # + [markdown] id="nQ3-FCwHHSQR" # #### Saving the Model # + id="Tj0c7mhgHVHd" predictor.save('/content/drive/My Drive/Sentiment Analysis_Preskale') # + id="0-XK1QWhH3XE" #sample dataset to test on data = ['this movie was horrible, the plot was really boring. acting was okay', 'the fild is really sucked. there is not plot and acting was bad', 'what a beautiful movie. great plot. acting was good. 
will see it again'] # + id="FL-4MAh4H7hs" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="76ee4c00-b385-4183-c5e7-cc6982345e51" predictor.predict(data) # + id="l4vZ5IAoIU9V" #loading the model predictor_load = ktrain.load_predictor('/content/drive/My Drive/Sentiment Analysis_Preskale') # + id="_RoLvY0oT1rg" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="3a0030e4-a1c5-498b-b712-e03f02312777" predictor_load.predict(data) # + id="ALxCIKvrD0qN" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # Using the Landlab `FractureGridGenerator` component # # *(, University of Colorado Boulder, July 2021)* # # # ## Introduction # # Landlab' `FractureGridGenerator` is a small helper component that generates a grid in which are embedded a set of randomly aligned fractures. A fracture is described by a line of nodes for which the field `fracture_at_node` equals 1. In other words, nodes where `fracture_at_node = 1` contain one or more fractures running through or near them, and nodes where `fracture_at_node = 0` are devoid of fractures. The component was originally written to initialize a cellular automaton model of rock weathering along fracture zones. An example of a gridded fracture network used in this way can be found in [Tucker et al. (2016)](https://doi.org/10.5194/gmd-9-823-2016) Figure 9. # ## Simple example with a raster grid # # import copy import numpy as np import matplotlib as mpl from landlab import RasterModelGrid, imshow_grid from landlab.components import FractureGridGenerator grid = RasterModelGrid((51, 51)) fg = FractureGridGenerator(grid, frac_spacing=20) fg.run_one_step() cmap = copy.copy(mpl.cm.get_cmap("pink")) imshow_grid(grid, grid.at_node["fracture_at_node"], cmap=cmap) # ## Example with a hex grid # # This example also shows how you can use the optional `seed` parameter to get a different random pattern. from landlab import HexModelGrid grid = HexModelGrid((51, 51), node_layout="rect") fg = FractureGridGenerator(grid, frac_spacing=10, seed=4) fg.run_one_step() cmap = copy.copy(mpl.cm.get_cmap("pink")) imshow_grid(grid, grid.at_node["fracture_at_node"], cmap=cmap) # ### Vertically oriented hex grid grid = HexModelGrid((51, 51), node_layout="rect", orientation="vertical") fg = FractureGridGenerator(grid, frac_spacing=10, seed=3) fg.run_one_step() cmap = copy.copy(mpl.cm.get_cmap("pink")) imshow_grid(grid, grid.at_node["fracture_at_node"], cmap=cmap) # ## References # # ., ., ., ., ., ., & . (2016). CellLab-CTS 2015: continuous-time stochastic cellular automaton modeling using Landlab. Geoscientific Model Development, 9(2), 823-839, [https://doi.org/10.5194/gmd-9-823-2016](https://doi.org/10.5194/gmd-9-823-2016). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # JM: 23 Dec 2021 # notebook to go through "basic" python and notebook things # mantra of the course: If you have a code problem, try Google first. A large part of programing is # experience, and you gain experience more efficiently by trying to fix code yourself. 
import matplotlib.pyplot as plt import numpy as np from scipy import stats # + # time-series data # by time-series data we just mean some data that depends on time # an example here is the El Nino 3.4 data given in 02_reading_data_basic_manipulations # the code below reads the data in the old-fashioned way to get the data as a 1d array # Q. try do this in pandas instead with open("elnino34_sst.data", "r") as f: elnino34_txt = f.readlines() elnino34_txt = elnino34_txt[3:-4] # strip out some unnecessary lines for k in range(len(elnino34_txt)): elnino34_txt[k] = elnino34_txt[k].strip("\n") # then we split each line (as a string) up into components elnino34_txt[0].split() # so we could define an empty list, cycle through each line, split, and add in the entries # but skipping the first one if we only want the SST entries elnino34_sst = [] for k in range(len(elnino34_txt)): # this is the new elnino34_txt after stripping out some lines dummy = elnino34_txt[k].split() # split out the entries per line for i in range(1, len(dummy)): # cycle through the dummy list but skip the first entry elnino34_sst.append(float(dummy[i])) # turn string into a float, then add to list elnino34_sst = np.array(elnino34_sst) plt.plot(elnino34_sst) plt.grid() # + # but we do not have a time to plot it against # so what you would see by reading the raw data itself is that this is monthly SST data from 1950 to 2019 # so in this case one can create a time array to plot the data against # you could either 1) do this as an artifical raw array (you have to deal with units yourself) # 2) use the datetime64 object functionality # below are code to do both # 1) ad hoc: 1 Jan 1950 to 31 Dec 2019 # use linspace, use length of the read in array, but leave out the last point # so "1950.00" correpsonds to the 1 Jan 1950 # "2019.95" or something like that corresponds to 31 Dec 2019 # # the result is an array that you can manipulate as usual time_vec_raw = np.linspace(1950, 2019+1, len(elnino34_sst), endpoint=False) # 2) datetime: this one is easier to read, but slightly less trivial to manipulate # (which matters somewhat in 08_time_series) # in this case you don't specify the DATE and smaller units (syntax reasons) # # the end result is an array but instead of numbers it is data in the "datetime64" format # syntax of "arange" is np.arange(start, end but not including, spacing), so need to the "end" slightly larger time_vec = np.arange(np.datetime64('1950-01'), np.datetime64('2020-01'), np.timedelta64(1, 'M')) fig = plt.figure(figsize=(10, 7)) ax = plt.subplot(2, 1, 1) ax.plot(time_vec_raw, elnino34_sst) ax.set_ylabel(r"SST (${}^\circ\mathrm{C}$)") ax.grid() ax = plt.subplot(2, 1, 2) ax.plot(time_vec, elnino34_sst) ax.set_xlabel(r"$t$ (years)") ax.set_ylabel(r"SST (${}^\circ\mathrm{C}$)") ax.grid() # if you wanted to create things in smaller units (e.g. 
days), you might do # time_vec = np.arange(np.datetime64('1950-01-01'), np.datetime64('1950-12-31'), np.timedelta64(1, 'd')) # look up online the relevant syntax # we will come back to the El-Nino data later in the exercises and in 08_time_series # + # some time-series manipulations # create and use an artificial one for now for making a point time_vec = np.arange(np.datetime64('2020-01-01'), np.datetime64('2021-12-31'), np.timedelta64(6, 'h')) nt = len(time_vec) t_vec = np.linspace(0, 2.0 * np.pi, nt) lin_trend = 0.05 * np.linspace(0, 2.0 * np.pi, nt) noise = 0.2 * np.random.rand(nt) f_vec = ( 2.7 + 0.1 * np.sin(t_vec) + 0.05 * np.sin(4.0 * t_vec) + 0.02 * np.sin(60.0 * t_vec) + lin_trend + noise ) fig = plt.figure(figsize=(10, 3)) ax = plt.axes() ax.plot(time_vec, f_vec, 'C0-') ax.set_xlabel(r"$t$") ax.set_ylabel(r"data") ax.set_ylim([2.7, 3.2]) ax.grid() # + # too much data, there looks like a trend, but how to pick it out? # one way is to just brute force downsize fig = plt.figure(figsize=(10, 3)) ax = plt.axes() ax.plot(time_vec[::20], f_vec[::20], 'C0-') # Q: what does this do? ax.set_xlabel(r"$t$") ax.set_ylabel(r"data") ax.set_ylim([2.7, 3.2]) ax.grid() # + # but the above is losing way too much information, and also not really filtering out signals # consider averaging data over a window (uniform weighting) # (need to chop off some entries at the edges to have the same array length to plot) window = 60 # specify a window (number of entries) f_vec_uni_avg = np.zeros(len(f_vec[window:-window:])) # final array with edges chopped off for i in range(len(f_vec[window:-window:])): f_vec_uni_avg[i] = np.mean(f_vec[i:i+window:]) # uniform average over a window fig = plt.figure(figsize=(10, 3)) ax = plt.axes() ax.plot(time_vec[window:-window:], f_vec_uni_avg, 'C0-') # Q: what does this do? ax.set_xlabel(r"$t$") ax.set_ylabel(r"data") ax.set_ylim([2.7, 3.2]) ax.grid() # notice this averages out the fast fluctuations without down-sampling as such # but also the values have decreased in magnitude (because of the averaging) # + # consider averaging data over a window (tent-like weighting, purely artificial) # (need to chop off some entries at the edges to have the same array length to plot) window = 60 # specify a window (number of entries) f_vec_tent_avg = np.zeros(len(f_vec[window:-window:])) # final array with edges chopped off tent_kernel = -np.abs(np.arange(window)+0.5 - window/2.0) # make a straight line and bend it into a v tent_kernel -= np.min(tent_kernel) # flip the v upside down tent_kernel /= np.max(tent_kernel) / 2.0 # normalise such that kernel itself sums to 1 for i in range(len(f_vec[window:-window:])): f_vec_tent_avg[i] = np.mean(tent_kernel * f_vec[i:i+window:]) # average over a window with weighting fig = plt.figure(figsize=(14, 3)) ax = plt.subplot2grid((1, 3), (0, 0), colspan=2) ax.plot(time_vec[window:-window:], f_vec_tent_avg, 'C0-') # Q: what does this do? ax.set_xlabel(r"$t$") ax.set_ylabel(r"data") ax.set_ylim([2.7, 3.2]) ax.grid() ax = plt.subplot2grid((1, 3), (0, 2), colspan=1) ax.plot(tent_kernel, 'C1-') # Q: what does this do? ax.set_xlabel(r"index") ax.set_ylabel(r"kernel shape") ax.set_ylim([0.0, 2.0]) ax.grid() # fairly minor differences in this case # + # consider averaging data over a window (Gaussian kernel with deconvolution) # could write this out raw but just going to call a pacakge here... 
from scipy.ndimage import filters # here the value to specify is sigma (what it means somewhat determines on the kernel shape) sigma = 2.0 f_vec_gauss_avg = filters.gaussian_filter1d(f_vec, sigma) fig = plt.figure(figsize=(10, 7)) ax = plt.subplot(2, 1, 1) ax.plot(time_vec, f_vec_gauss_avg, 'C0-', label=r"$\sigma = %.1f$" % sigma) ax.set_xlabel(r"$t$") ax.set_ylabel(r"data") ax.set_ylim([2.7, 3.2]) ax.grid() ax.legend() sigma = 10.0 f_vec_gauss_avg = filters.gaussian_filter1d(f_vec, sigma) ax = plt.subplot(2, 1, 2) ax.plot(time_vec, f_vec_gauss_avg, 'C0-', label=r"$\sigma = %.1f$" % sigma) ax.set_xlabel(r"$t$") ax.set_ylabel(r"data") ax.set_ylim([2.7, 3.2]) ax.grid() ax.legend() # Q: why is the latter more smooth? # Q: why does this not need time_vec to be reduced in size? # + # trends # most of the techniques we have played with in the previous workshops would work here too # we could for example do linear regression to pick out a linear trend # creating the artificial time-series again time_vec = np.arange(np.datetime64('2020-01-01'), np.datetime64('2021-12-31'), np.timedelta64(6, 'h')) nt = len(time_vec) t_vec = np.linspace(0, 2.0 * np.pi, nt) lin_trend = 0.05 * np.linspace(0, 2.0 * np.pi, nt) # this is related to the answer we are looking for noise = 0.2 * np.random.rand(nt) f_vec = ( 2.7 # this is related to the answer we are looking for + 0.1 * np.sin(t_vec) + 0.05 * np.sin(4.0 * t_vec) + 0.02 * np.sin(60.0 * t_vec) + lin_trend + noise ) # could use the filtered one too, in this case probably doesn't matter f_vec_gauss_avg = filters.gaussian_filter1d(f_vec, 10) p_orig = np.polyfit(t_vec, f_vec, 1) p_gauss = np.polyfit(t_vec, f_vec_gauss_avg, 1) fig = plt.figure(figsize=(10, 7)) ax = plt.subplot(2, 1, 1) ax.plot(time_vec, f_vec) ax.plot(time_vec, p_orig[0] * t_vec + p_orig[1], 'k--', # regressed linear trend label=f"${{{p_orig[0]:.3f}}} t + {{{p_orig[1]:.3f}}}$") ax.set_ylabel(r"SST (${}^\circ\mathrm{C}$)") ax.grid() ax.legend() ax = plt.subplot(2, 1, 2) ax.plot(time_vec, f_vec_gauss_avg) ax.plot(time_vec, p_gauss[0] * t_vec + p_gauss[1], 'k--', # regressed linear trend label=f"${{{p_gauss[0]:.3f}}} t + {{{p_gauss[1]:.3f}}}$") ax.set_xlabel(r"$t$ (years)") ax.set_ylabel(r"SST (${}^\circ\mathrm{C}$)") ax.grid() ax.legend() # Q. the real second coefficient should be 2.7, so is the regression reproducing that? # Q. in the real linear trend we added the amplitude we gave was 0.05, but regression # is returning a linear coefficient round 0.015 # the answer is actually correct, but what is the reason for the (apparent) discrepancy? # hint: look at the t_vec window I used (not "time_vec", since this time-series depends solely on "t_vec" # + # correlations # so you know how to do calculate correlation coefficients from 03 and 04 # here you might have two time-series and you want to see if they are correlated # the same time-series and if it is correlated with itself in some way # lets start with a very simple example where we should have a good feel for the answer before computing t_vec = np.linspace(0, 2.0 * np.pi, 31) f = np.sin(t_vec) f_pos = 2*np.sin(t_vec) f_neg = -np.sin(t_vec) f_shift = np.sin(t_vec - np.pi / 2.0) fig = plt.figure(figsize=(5, 3)) ax = plt.axes() ax.plot(t_vec, f , "C0", label="f") ax.plot(t_vec, f_pos, "C1", label="f pos") ax.plot(t_vec, f_neg, "C2", label="f neg") ax.plot(t_vec, f_shift, "C3", label="f shift") ax.set_xlabel(r"$t$") ax.set_ylabel(r"$f$") ax.grid() ax.legend() # Q. what correlations do you expect relative to f? 
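# +
# [Editor's sketch] A quick numerical answer to the question above, before the
# scatter plots: np.corrcoef gives the same (linear/Pearson) coefficient that
# stats.linregress reports as r in the next cells.
print(f"corr(f, f_pos)   = {np.corrcoef(f, f_pos)[0, 1]:+.2f}")   # expect +1
print(f"corr(f, f_neg)   = {np.corrcoef(f, f_neg)[0, 1]:+.2f}")   # expect -1
print(f"corr(f, f_shift) = {np.corrcoef(f, f_shift)[0, 1]:+.2f}") # expect roughly 0
# -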
# + # compute the correlations # might be an idea to plot the correlations as a scatter graph fig = plt.figure(figsize=(10, 3)) ax1 = plt.subplot(1, 3, 1) ax1.scatter(f, f_pos, color="C1") ax1.set_xlabel(r"f") ax1.set_ylabel(r"f pos") ax1.grid() ax2 = plt.subplot(1, 3, 2) ax2.scatter(f, f_neg, color="C2") ax2.set_xlabel(r"f") ax2.set_ylabel(r"f neg") ax2.grid() ax3 = plt.subplot(1, 3, 3) ax3.scatter(f, f_shift, color="C3") ax3.set_xlabel(r"f") ax3.set_ylabel(r"f shift") ax3.grid() # so we should get 1, -1 and probably 0 # + # now lets see what the computation from package tells us # going to use scipy here, but scikit-learn would work (see 03) # (you can write one yourself too, you only need to covariance and the standard deviations of the data) _, _, r_pos, _, _ = stats.linregress(f, f_pos) _, _, r_neg, _, _ = stats.linregress(f, f_neg) _, _, r_shift, _, _ = stats.linregress(f, f_shift) print(f"f and f_pos has (linear/Pearson) correlation coefficient of {r_pos:.2f}") print(f"f and f_neg has (linear/Pearson) correlation coefficient of {r_neg:.2f}") print(f"f and f_shift has (linear/Pearson) correlation coefficient of {r_shift:.2f}") # + # cross/lag correlation # # so lets go back to the shifted example but extend it a bit t_vec = np.linspace(0, 4.0 * np.pi, 61) f = np.sin(t_vec) f_shift = np.sin(t_vec - np.pi / 2.0) _, _, r_shift, _, _ = stats.linregress(f, f_shift) fig = plt.figure(figsize=(8, 3)) ax = plt.axes() ax.plot(t_vec, f , "C0", label="f") ax.plot(t_vec, f_shift, "C3", label="f shift") ax.set_xlabel(r"$t$") ax.set_ylabel(r"$f$") ax.grid() ax.legend() ax.set_title(f"(linear/Pearson) correlation coefficient of {r_shift:.2f}") # the corrleation coeffcient is zero, but from the graph it is clear that there are correlations between # the two signals (this is one of those cases where blindly applying statistics to data can give you a # misleading conclusion) # + # cross/lag correlation # # in this example I cooked up the signal is just a shift, so what you might expect is that as I calculate # the correlation of the SHIFTED signal relative to each other, you would start to see correlation go up # eventually to 1 # so, for example lag = 2 # lag by 5 indices # chop off the first few entries of the shift signal, last few entries of the original signal # (so that the two arrays are the same size and we can compute a correlation coefficient) _, _, r_lag, _, _ = stats.linregress(f[:-lag:], f_shift[lag::]) fig = plt.figure(figsize=(8, 3)) ax = plt.axes() ax.plot(t_vec[:-lag:], f [:-lag:] , "C0", label="f") ax.plot(t_vec[:-lag:], f_shift[lag::], "C3", label="f shift with lag") ax.plot(t_vec, f_shift, "C3--", label="f shift orig", alpha=0.5) ax.set_xlabel(r"$t$") ax.set_ylabel(r"$f$") ax.grid() ax.legend() ax.set_title(f"lag = {lag}, correlation coefficient of {r_lag:.2f}") # + # cross/lag correlation # # I can wrap this up in a subroutine and do this as a function of lag def custom_lag_corr(signal1, signal2, lag): if len(signal1) != len(signal2): raise Exception("array size not equal, cannot continue") if lag == 0: _, _, r, _, _ = stats.linregress(signal1, signal2) else: _, _, r, _, _ = stats.linregress(signal1[:-lag:], signal2[lag::]) return r n = 30 r_lag = np.zeros(n) for lag in range(n): r_lag[lag] = custom_lag_corr(f, f_shift, lag) fig = plt.figure(figsize=(8, 3)) ax = plt.axes() ax.plot(np.arange(n), r_lag, "C0-x") ax.set_xlabel(r"lag (in index)") ax.set_ylabel(r"$r$") ax.grid() # if n is too big, the number of samples to calculate correlation gets low, so it increasingly becomes # a 
statistically dodgy manoeuvre (hence why I extend the signal a little bit more above) # + # cross/lag correlation # from the graph above you see that the signal has peak correlation at somewhere between lag index 7 and 8 # (and the corresponding anti-correlation further down the line) # so in principle we could turn the lag index into a "time" (or whatever other measure you like # depending on context), essentially by finding out what "lag = 1" corresponds to in "time" # in this case I know the t_vec is uniform in time, so I just work out the differences and pick the first one # (if it isn't you have to associate each index with it's own time, which is doable but not touched on here) dt = np.diff(t_vec)[0] # again, I know the answer here: f_shift is shifted quarter wavelength out (by pi/2) relative to f # so if I shift f_shift another quarter wavelength ( pi/2) then I should get maximum correlation # so if I shift f_shift three quarters wavelength (3 pi/2) then I should get minimum correlation fig = plt.figure(figsize=(8, 3)) ax = plt.axes() ax.plot(np.arange(n) * dt, r_lag, "C0-x") ax.plot([np.pi / 2, np.pi / 2], [-2, 2], "k--", alpha=0.7) # theoretical maximum ax.plot([3 * np.pi / 2, 3 * np.pi / 2], [-2, 2], "k--", alpha=0.7) # theoreical minimum ax.set_xlabel(r"lag (in time units)") ax.set_ylabel(r"$r$") ax.set_ylim([-1.1, 1.1]) ax.grid() # add the tick labels in xt = ax.get_xticks() xt = np.append(xt, [np.pi/2, 3*np.pi/2]) xtl= xt.tolist() xtl[-2]=r"$\pi/2$" xtl[-1]=r"$3\pi/2$" ax.set_xticks(xt) ax.set_xticklabels(xtl) ax.set_xlim([0, 6]); # + # auto-correlation # if you can do lag correlations for two signals you could also do it for the signal with respect to itself # (which, if you think about it, is what I actually did above...) # this is called the auto-correlation (correlation of signal with respect to lagged versions of itself) # # so you do this if you are interested to see how the signal correlates with itself, and whether you could # use the signal's previous values to predict its future values # trivial example t_vec = np.linspace(0, 2.0 * np.pi, 31) f = np.sin(t_vec) dt = np.diff(t_vec)[0] fig = plt.figure(figsize=(16, 3)) n = 15 r_lag = np.zeros(n) for lag in range(n): r_lag[lag] = custom_lag_corr(f, f, lag) ax = plt.subplot(1, 2, 1) ax.plot(np.arange(n) * dt, r_lag, "C0-x") ax.set_xlabel(r"lag (in index)") ax.set_ylabel(r"$r$") ax.set_title(f"lag up to index {n}") ax.grid() n = 30 r_lag = np.zeros(n) for lag in range(n): r_lag[lag] = custom_lag_corr(f, f, lag) ax = plt.subplot(1, 2, 2) ax.plot(np.arange(n) * dt, r_lag, "C0-x") ax.set_xlabel(r"lag (in index)") ax.set_ylabel(r"$r$") ax.set_title(f"lag up to index {n}") ax.grid() # Q. is the left panel what you expect for the appropriate shifts # Q. given this is a sine/cosing curve, the correlations should be somewhat symmetric (see graph in cell above) # but the right panel is not symmetric about the minimum point, why? (hint: what is the size of array?) 
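# +
# [Editor's sketch] The peak lag can also be picked out programmatically instead of
# being read off the graph. Self-contained here: rebuild the shifted-sine pair from
# the cross/lag-correlation cells above and use np.argmax on the lag correlations;
# the peak should sit near a time lag of pi/2, matching the discussion above.
t_cc = np.linspace(0, 4.0 * np.pi, 61)
f_cc = np.sin(t_cc)
f_cc_shift = np.sin(t_cc - np.pi / 2.0)
dt_cc = np.diff(t_cc)[0]
r_cc = np.array([custom_lag_corr(f_cc, f_cc_shift, lag) for lag in range(30)])
lag_best = int(np.argmax(r_cc))
print(f"peak lag correlation r = {r_cc[lag_best]:.2f} at lag index {lag_best}, "
      f"i.e. a time lag of {lag_best * dt_cc:.2f} (pi/2 is {np.pi / 2:.2f})")
# -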
# + # auto-correlation # and of course there is a package that someone coded up already we could have used # (the "statsmodel" module might not exist on your computers; you can download it yourself) # # anaconda: conda install -c conda-forge statsmodels # pip: pip install statsmodels t_vec = np.linspace(0, 2.0 * np.pi, 31) f = np.sin(t_vec) dt = np.diff(t_vec)[0] from statsmodels.graphics.tsaplots import plot_acf # only gives the plotting # (or if you want access to the actual data etc., it is in "statsmodels.tsa.stattools.acf") fig = plt.figure(figsize=(10, 4)) ax = plt.axes() title_str = f"Auto-correlation, 1 lag index $\leftrightarrow\ dt = {dt:.3f}$" plot_acf(f, ax=ax, lags=30, adjusted=True, title=title_str) ax.set_xlabel(f"lag (index)") ax.set_ylabel(f"acf") ax.grid() # this package gives you more information # when "adjusted=True", the lines with dot ends are basically the same as above # when "adjusted=False", it accounts for loss of data by the lag, and weights the auto-correlation accordingly # -- so if you either leave out the argument or set it explicitly to False, the acf decays in time # the envelope is the 95% confidence interval at alpha=0.05 # -- you can adjust alpha by providing e.g. "alpha=0.01" as a keyword # when the dots lie within the confidence interval it is saying there is no strong statistical evidence # to say values at the lag influence the current value (see 05 and 06) # -- for this example it's saying if you know maybe 3 or 4 values then you can probably do a # reasonable job predicting the next value # -- remember this just says there is no strong statistical evidence, it doesn't mean there is no relation # (in this artificial example there is in fact a strong relation but the statistics isn't picking it up) # + # now lets try to do something with more realistic data (El-Nino 3.4) # reading code below is just the same as above with open("elnino34_sst.data", "r") as f: elnino34_txt = f.readlines() elnino34_txt = elnino34_txt[3:-4] for k in range(len(elnino34_txt)): elnino34_txt[k] = elnino34_txt[k].strip("\n") elnino34_txt[0].split() elnino34_sst = [] for k in range(len(elnino34_txt)): # this is the new elnino34_txt after stripping out some lines dummy = elnino34_txt[k].split() # split out the entries per line for i in range(1, len(dummy)): # cycle through the dummy list but skip the first entry elnino34_sst.append(float(dummy[i])) # turn string into a float, then add to list elnino34_sst = np.array(elnino34_sst) # I want to do sums on this so I am going to use the raw version # (I personally find the numbers easier to manipulate) t_vec = np.linspace(1950, 2019+1, len(elnino34_sst), endpoint=False) # + # El-Nino 3.4 linear trend # be careful here that time units are in YEARS p = np.polyfit(t_vec, elnino34_sst, 1) lin_trend = p[0] * t_vec + p[1] fig = plt.figure(figsize=(10, 3)) ax = plt.axes() ax.plot(t_vec, elnino34_sst, 'C0') ax.plot(t_vec, lin_trend, 'k--') ax.text(1990, 24.5, f"trend = ${p[0]:.3f}^{{\circ}}\ \mathrm{{C}}$ per year", color="k") ax.set_xlabel(r"$t$ (years)") ax.set_ylabel(r"SST (${}^{\circ}\mathrm{C}$)") ax.set_ylim(24, 30) ax.grid() # Q. What does the trend mean here? Is this consistent with what is know? 
(you might need to look this up) # + # El-Nino 3.4 filtering # if we are interested in the longer term oscillations, we might want to get rid of the higher # frequencies, so lets apply a filter to the signal fig = plt.figure(figsize=(10, 3)) ax = plt.axes() sigma_vec = [2.0, 5.0, 10.0] for sigma in sigma_vec: elnino34_gauss = filters.gaussian_filter1d(elnino34_sst, sigma) ax.plot(t_vec, elnino34_gauss, label=f"$\sigma = {sigma}$") ax.set_xlabel(r"$t$ (years)") ax.set_ylabel(r"SST (${}^{\circ}\mathrm{C}$)") ax.set_ylim(24, 30) ax.grid() ax.legend() # smaller sigma smooths out the signal a bit # larger sigma even gets rid of the shorter oscillations but keeps the longer ones (up to a point) # # Q. low pass the signal for a specified window of 6 months, 2 years and 10 years, and describe signal # do this for both weighted and not weighted (e.g. tent kernel) options # + # El-Nino 3.4 auto-correlation (unadjusted here) # when do the El-Nino 3.4 SST start decorrelating according to the auto-correlation analysis? dt = np.diff(t_vec)[0] from statsmodels.graphics.tsaplots import plot_acf # only gives the plotting # (or if you want access to the actual data etc., it is in "statsmodels.tsa.stattools.acf") fig = plt.figure(figsize=(10, 4)) ax = plt.axes() title_str = f"Auto-correlation of El-Nino 3.4, 1 lag index $\leftrightarrow\ dt = 1$ month" plot_acf(elnino34_sst, ax=ax, lags=24, title=title_str, alpha=0.05) ax.set_xlabel(f"lag (index)") ax.set_ylabel(f"acf") ax.grid() # Q. how do the acf's vary if you low-pass the signal (e.g. do you gain/lose "predictability")? # Q. (more involved) the acf above (probably?) computes the average acf over all points, # so gives an average sense of how many previous points you need to reliably "predict" (?) the current point # 1) suppose you don't do that, and just compute the autocorrelation (look up the formula or a package) # by only giving it truncated signal which has a date associated with it, do the results differ, and if so # by how much? (i.e. the acf package gives you the average, but look into the samples itself) # 2) are some months more predictable? # Q. (more involved) the years with particularly large SSTs (being a bit vague here) are regarded as El-Nino years # 1) make up a way to pick these YEARS out from the time-series, and compare what the code returns with # known El-Nino years # 2) are some El-Nino years "more predictable"? (i.e. do something like the previous Q.) 
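# +
# (aside) a minimal sketch for the first Q above, not part of the original notebook: pull the raw acf
# numbers out of "statsmodels.tsa.stattools.acf" (mentioned above) and compare the unfiltered SST with
# a low-passed version (assumes elnino34_sst and the "filters" import from the cells above)
from statsmodels.tsa.stattools import acf

acf_raw = acf(elnino34_sst, nlags=24)
acf_smooth = acf(filters.gaussian_filter1d(elnino34_sst, 5.0), nlags=24)

fig = plt.figure(figsize=(10, 3))
ax = plt.axes()
ax.plot(np.arange(25), acf_raw, "C0-x", label="raw SST")
ax.plot(np.arange(25), acf_smooth, "C1-o", label=r"low-passed, $\sigma = 5$ months")
ax.set_xlabel("lag (months)")
ax.set_ylabel("acf")
ax.grid()
ax.legend()
# the low-passed signal should stay correlated out to longer lags, which is one way of reading
# "gaining predictability" in the question above (at the cost of throwing away the fast variability)
# -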
# + # El-Nino 3.4 SST correlation with some other time-series data from this region # load some data diagnosed from the El-Nino 3.4 region (processed from data taken from copernicus.eu) # in this case this is also monthly data, but for averaged chlorophyll (mg/m3) from Jan 1993 to Dec 2020 # (recall El-Nino 3.4 SST here is from Jan 1950 to Dec 2019) with open("elnino34_bgc.data", "r") as f: elnino34_txt = f.readlines() elnino34_txt = elnino34_txt[1::] # strip out some unnecessary lines for k in range(len(elnino34_txt)): elnino34_txt[k] = elnino34_txt[k].strip("\n") elnino34_chl = [] for k in range(len(elnino34_txt)): # split out the entries per line, pick out the 3rd entry, and strip out the floating comma, turn into float elnino34_chl.append(float(elnino34_txt[k].split()[2].strip(","))) elnino34_chl = np.asarray(elnino34_chl) # create the analogous t_vec for this data t_vec_chl = np.linspace(1993, 2020+1, len(elnino34_chl), endpoint=False) # plot the SST and chlorophyll concentration out over the same time axis for comparison fig = plt.figure(figsize=(10, 7)) ax = plt.subplot(2, 1, 1) ax.plot(t_vec, elnino34_sst) ax.set_ylabel(r"SST (${}^\circ\mathrm{C}$)") ax.set_xlim([1948, 2022]) ax.grid() ax = plt.subplot(2, 1, 2) ax.plot(t_vec_chl, elnino34_chl, "C2") ax.set_xlabel(r"$t$ (years)") ax.set_ylabel(r"chl-$a$ ($\mathrm{mg}\ \mathrm{m}^3$)") ax.set_xlim([1948, 2022]) ax.grid() # + # El-Nino 3.4 SST correlation with some other time-series data from this region # pull out data from the same time window to compute cross-correlations sst_window = (t_vec >= 1993) & (t_vec <= 2019) chl_window = (t_vec_chl >= 1993) & (t_vec_chl <= 2019) fig = plt.figure(figsize=(10, 7)) ax = plt.subplot(2, 1, 1) ax.plot(t_vec[sst_window], elnino34_sst[sst_window]) ax.set_ylabel(r"SST (${}^\circ\mathrm{C}$)") ax.set_xlim([1990, 2022]) ax.grid() ax = plt.subplot(2, 1, 2) ax.plot(t_vec_chl[chl_window], elnino34_chl[chl_window], "C2") ax.set_xlabel(r"$t$ (years)") ax.set_ylabel(r"chl-$a$ ($\mathrm{mg}\ \mathrm{m}^3$)") ax.set_xlim([1990, 2022]) ax.grid() # linear regression to get the correlation coefficient _, _, r, _, _ = stats.linregress(elnino34_sst[sst_window], elnino34_chl[chl_window]) print(f"cross correlation (un-lagged) between SST and chl-a is {r:.6f}") # Q. interpret the correlation # Q. try a lag analysis on these signals accordingly and see if anything interesting comes out # Q. try the above, but for low-passed signals (explore the choice of time window) # Q. (harder) are the relevant lag correlations (if any) consistent with physical rationale for El-Nino, # particularly during El-Nino years? (you may have to look up which ones these are) # Q. (involved) the dataset just loaded also includes phytoplankton concentration, try and do the analogous # analyses for the various pairs of data (SST, chl-a, phytoplankton) # + # Q. (involved) the code here is dirty (no apologies for that actually) and relies a lot on native python commands, # particularly when dates are involved etc. Try do the same thing thus far but using pandas # (which would be much cleaner for managing the data) and python packages. # # Several things you might want to do: # 1) put everything into one pandas dataframe so there aren't multiple arrays hanging around # 2) there is a time-mismatch, so to have only one time dimension, either # i ) get rid of some data in SST # ii) fill out the shorter array with NaNs or missing values # 3) there are some commands in pandas that might be useful # e.g. 
.mean, .sum, .rolling, etc., look these up on Google # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Decision Tree Regression Notebook # #### *Author: * # #### *University of Chicago, CAPP'20* # + import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.tree import DecisionTreeRegressor # %matplotlib notebook # - # ### Load Data salary = pd.read_csv("Position_Salaries.csv") salary.head() # ### Data Cleaning salary.isnull().sum() # No value missing. # ### Feature Selection X = salary.iloc[:, 1:2].values X.shape y = salary.Salary.values y.shape # ### Model Training # As we only have ten observations, we are using the whole data set to train our model. dtr = DecisionTreeRegressor(random_state=123) dtr.fit(X, y) # ### Model Evaluation # + plt.scatter(X, y, color="red") plt.plot(X, dtr.predict(X), color="blue") plt.title("Salary against Position Level (with Decision Tree Regressor)") plt.xlabel("Position Level") plt.ylabel("Salary ($)") plt.show() # - # **This does not look like a predictive output of a decision tree regression, as decision tree regression is not continuous.** # It happens because expect for each position level, we are not predicting and plotting y values in between. A decision tree prediction should look like below: X_grid = np.arange(min(X), max(X), 0.00001) X_grid = X_grid.reshape((len(X_grid), 1)) # + plt.scatter(X, y, color="red") plt.plot(X_grid, dtr.predict(X_grid), color="blue") plt.title("Actual: Salary against Position (with Decision Tree Regressor)") plt.xlabel("Position Level") plt.ylabel("Salary ($)") plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Mode fitting # # Here we will make a simple hierarchical model that encodes some knowledge of quasi-equally spaced modes of oscillation into the prior. import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pystan # + dnu_sol = 136 # book value of large freq separation for the sun df = pd.read_table('broomhall2009.txt', delim_whitespace=True, names=('n', 'l', 'nu', 'nu_unc')) #import data lmodes = [] dnu_avg = [] f_mod = [] for i in range(4): lmodes.append(df[df.l == i]) dnu_avg.append(np.median(np.diff(lmodes[i].nu))) f_mod.append(lmodes[i].nu % dnu_avg[i]) label = "l="+str(i) plt.scatter(f_mod[i], lmodes[i].nu, label = label) plt.xlabel(r'Frequency modulo ($\mu Hz$)') plt.ylabel(r'Frequency ($\mu Hz$)') plt.legend() # - # The lower departure from a straight line is due to the BCZ, so the lower frequencies are not useful in analysing the response from the HeII ionization zone. Frequencies below 1700 $\mu$Hz are omitted. 
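# (aside, roughly speaking) the reason plotting frequency against frequency modulo $\langle\Delta\nu\rangle$
# works is that for quasi-equally spaced modes $\nu \approx (n + \epsilon)\langle\Delta\nu\rangle$ to leading
# order (with an $l$-dependent offset), so $\nu \bmod \langle\Delta\nu\rangle \approx \epsilon\langle\Delta\nu\rangle$
# is nearly constant and each $l$ lines up as a near-vertical ridge; the curvature and oscillatory (glitch)
# terms fitted below are the departures from that ridge.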
# + for i in range(4): lmodes[i] = lmodes[i].loc[lmodes[i].nu > 2200] lmodes[i] = lmodes[i].set_index(np.arange(0,len(lmodes[i]),1)) dnu_avg = [] f_mod = [] for i in range(4): dnu_avg.append(np.median(np.diff(lmodes[i].nu))) f_mod.append(lmodes[i].nu % dnu_avg[i]) label = "l="+str(i) plt.scatter(f_mod[i], lmodes[i].nu, label = label) plt.xlabel(r'Frequency modulo ($\mu Hz$)') plt.ylabel(r'Frequency ($\mu Hz$)') plt.legend() # - # To find a ballpark figure before defining priors, I will apply the model from Vrard: # # $$\nu_{UP}(n,0)=\bigg(n+\epsilon+\frac{\alpha}{2}(n-n_{max})^2+\frac{\mathcal{A}\mathcal{G}}{2\pi}sin\bigg(\frac{2\pi(n-n_{max})}{\mathcal{G}}+\phi\bigg)\bigg)\langle\Delta\nu\rangle$$ def model(n, dnu, nmax, epsilon, alpha, A, G, phi): freqs = (n + epsilon + (alpha/2)*(n-nmax)**2 + (A*G/2*np.pi)*np.sin(2*np.pi*(n-nmax)/G + phi))*dnu #Vrard universal pattern return freqs n = lmodes[0].n dnu = dnu_avg[0] numax = 2900 epsilon = 1.48 #1.437 nmax = 21.9 alpha = 0.003 #from vrard paper A = 0.0009 print(dnu_avg[0], A) G = 2 #3 phi = 0 f = model(n, dnu, nmax, epsilon, alpha, A, G, phi) plt.scatter(f_mod[0], lmodes[0].nu, label = 'Data') plt.plot(f % dnu_avg[0], f, label = 'Model', color = 'r') plt.ylabel(r'Frequency($\mu Hz$)') plt.xlabel(r'Frequency modulo ($\mu Hz$)') plt.legend() #plt.xlim(0, 135.2) code = ''' functions { real model(real n, real dnu, real nmax, real epsilon, real alpha, real A, real G, real phi){ real freqs = (n + epsilon + (alpha/2)*(n-nmax)^2 + (A*G/2*pi())*sin(2*pi()*(n-nmax)/G + phi))*dnu; return freqs; } } data { int N; // Data points real n[N]; real fobs[N]; } parameters { real nmax; real dnu; real epsilon; real alpha; real A; real G; real phi; } model { vector[N] mod; nmax ~ normal(22, 2); dnu ~ normal(135, 0.001*135); epsilon ~ normal(1.480, 0.05); alpha ~ normal(0.0025, 0.0010); A ~ normal(0.001, 0.001); G ~ normal(2, 0.2); phi ~ normal(0, pi()/4); for (i in 1:N) mod[i] = model(n[i], dnu, nmax, epsilon, alpha, A, G, phi); mod ~ normal(fobs, 10); } ''' import pystan sm = pystan.StanModel(model_code=code) stan_data = {'N': len(lmodes[0]), 'n': lmodes[0].n, 'fobs': lmodes[0].nu } nchains = 4 start = {'A': 0, 'G': 1.6, 'phi': -0.9*np.pi} fitsm = sm.sampling(data=stan_data, iter=20000, chains=nchains, init=[start for n in range(nchains)]) fitsm.plot() plt.show() print(fitsm) # + stanfit = model(lmodes[0].n, fitsm['dnu'].mean(), fitsm['nmax'].mean(), fitsm['epsilon'].mean(), fitsm['alpha'].mean(), fitsm['A'].mean(), fitsm['G'].mean(), fitsm['phi'].mean()) plt.subplots() plt.scatter(f_mod[0], lmodes[0].nu, label = 'Data') plt.plot(stanfit % dnu_avg[0], stanfit, label = 'Stan model', color = 'r') plt.legend() # - import corner data = np.vstack([fitsm['nmax'], fitsm['dnu'], fitsm['epsilon'], fitsm['alpha'], fitsm['A'], fitsm['G'], fitsm['phi'],]).T corner.corner(data, labels=[r'$n_{max}$', r'$\Delta\nu$', r'$\epsilon$', r'$\alpha$', r'$A$', r'$G$', r'$\phi$']) plt.show() # This analysis will use the methods detailed in Vrard 2015 and, as such, will use its definitions. # # The definition for the local frequency spacing is: # # $\Delta\nu(n) = \frac{\nu_{n+1,0}-\nu_{n-1,0}}{2}$. # # At the edges of the measured radial modes, we cannot use this equation and replace it by the frequency difference between two consecutive radial modes. 
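# (aside, not in the original) the next cell computes this centred difference with pandas as
# nu.diff(2).shift(-1)/2: diff(2) gives $\nu_{n} - \nu_{n-2}$, shift(-1) re-centres that on $n$ so the value
# stored at $n$ is $\nu_{n+1,0} - \nu_{n-1,0}$, and dividing by 2 gives $\Delta\nu(n)$; the first and last
# entries are then overwritten with the plain one-sided differences, as described above.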
lmodes[0]['dnu_n'] = (lmodes[0]['nu'].diff(2).shift(-1)/2) lmodes[0]['dnu_n'][0] = lmodes[0]['nu'].diff(1)[1] lmodes[0]['dnu_n'][16] = (lmodes[0]['nu'].diff(1)[16]) # The stan model determined that the maximum frequency is $\nu_{max} = 2777.27 \mu Hz$. This is then used in the determination of the universal pattern to which the frequency separation will be compared: # # $\Delta\nu_{UP}(n)=(1+\alpha(n-n_{max}))\langle\Delta\nu\rangle$ # # We compute the difference between the observed local large separation and the theoretical local large separation predicted by the universal pattern: # # $\delta_{g,obs}=\Delta\nu(n)-\Delta\nu_{UP}(n)$ # + nmax = 22 numax = 3168.618 alpha = 0.015*dnu_avg[0]**(-0.32) dnu_UP = (1+alpha*(lmodes[0].n-nmax))*dnu_avg[0] plt.scatter(lmodes[0].nu, lmodes[0].dnu_n, label = r'$\Delta\nu(n)$') plt.xlabel(r'Frequency($\mu Hz$)') plt.ylabel(r'$(\nu_{n+1,0}-\nu_{n-1,0})/2$ ($\mu Hz$)') plt.scatter(lmodes[0].nu, dnu_UP, label = r'$\Delta\nu_{UP}(n)$') plt.legend() # - # We now subtract the universal pattern from the data in order to remove the curvature term. # + deltag = lmodes[0].dnu_n - dnu_UP plt.scatter(lmodes[0].nu, deltag, label='Data') plt.xlabel(r'Frequency($\mu Hz$)') plt.ylabel(r'$\delta_{g,obs}$ ($\mu Hz$)') plt.legend() # - # The amplitude can be found by fitting an oscillatory component to the resultant frequency variations obtained after removal of the curvature term from the measurements: # # $\delta_{g,obs}=\mathcal{A}\langle\Delta\nu\rangle cos\big(\frac{2\pi(\nu-\nu_{max})}{\mathcal{G}\langle\Delta\nu\rangle}+\phi\big)$ # # where $\mathcal{G}$ is the period of the oscillation expressed in units of $\langle\Delta\nu\rangle$, $\mathcal{A}$ is the amplitude of the oscillation in units of $\langle\Delta\nu\rangle$ and $\phi$ is the phase of the oscillation centered on $\nu_{max}$. code = ''' functions { real dgobs(real A, real numax, real nu, real G, real phi){ return A * cos((2*pi()*(nu-numax))/G + phi); } } data { int N; // Data points real nu[N]; real numax; real dnu_avg; real dg[N]; } parameters { real G; real A; real phi; } model { vector[N] mod; A ~ normal(4, 1.5); G ~ normal(7, 2); phi ~ normal(1.5*pi(), 1); for (i in 1:N) mod[i] = dgobs(A, numax, nu[i], G, phi); mod ~ normal(dg, 1); } ''' import pystan sm = pystan.StanModel(model_code=code) # The code takes a while to converge. We run for 20000 iterations and check the results. stan_data = {'N': len(lmodes[0]), 'nu': lmodes[0].nu, 'dg': deltag, 'numax': numax, 'dnu_avg': dnu_avg[0]} nchains = 4 start = {'A': 3, 'G': 4, 'phi': 1} fitsm = sm.sampling(data=stan_data, iter=20000, chains=nchains, init=[start for n in range(nchains)]) fitsm.plot() plt.show() print(fitsm) # + dgstan = fitsm['A'].mean() * np.cos((2*np.pi*(lmodes[0].nu-numax))/fitsm['G'].mean() + fitsm['phi'].mean()) plt.subplots() plt.plot(lmodes[0].nu, dgstan, label = 'Stan model') plt.scatter(lmodes[0].nu, deltag, label = 'Data') # - # Here is a corner plot of the results: import corner data = np.vstack([fitsm['A'], fitsm['G'], fitsm['phi']]).T corner.corner(data, labels=[r'$\mathcal{A}$', r'$\mathcal{G}$', r'$\phi$']) plt.show() # From Nature volume 215, pages 43–44 (01 July 1967) the helium mass fraction of the sun is between 0.20 and 0.27. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pathlib import Path from astropy import units as u from astropy.table import Table from astropy.time import Time from astropy.timeseries import TimeSeries import re import matplotlib.pyplot as plt import numpy as np # + # %matplotlib inline plt.style.use('guide.mplstyle') plot_kwargs = { 'markersize': 3, 'marker': '^', } rv_data = Path('/data/PHOEBE1/1SWASP J144508.70+050514.4.rv.csv') lc_observed_data = Path('/data/PHOEBE1/1SWASP J144508.70+050514.4.lc.observed.csv') lc_synthetic_data = Path('/data/PHOEBE1/1SWASP J144508.70+050514.4.lc.synthetic.csv') lc_residuals_data = Path('/data/PHOEBE1/1SWASP J144508.70+050514.4.lc.residuals.csv') # - rv = Table.read(rv_data, format='ascii.csv', delimiter=',', fast_reader=False) lc_observed = Table.read(lc_observed_data, format='ascii.csv', delimiter=',', fast_reader=False) lc_synthetic = Table.read(lc_synthetic_data, format='ascii.csv', delimiter=',', fast_reader=False) lc_residuals = Table.read(lc_residuals_data, format='ascii.csv', delimiter=',', fast_reader=False) # + fig1 = plt.figure(figsize=(20,20)) ax1 = fig1.add_subplot(111) ax1.plot(rv['Phase'], rv['Primary'], 'r.', label='Primary', linestyle='-', **plot_kwargs) ax1.plot(rv['Phase'], rv['Secondary'], 'b.', label='Secondary', linestyle='-', **plot_kwargs) ax1.set_xlabel('Phase') ax1.set_ylabel(r'Radial Velocity / $km^{-s}$') ax1.set_title('CRTS J144508.6+050514') ax1.legend(loc='best') ax1.grid() # + fig2 = plt.figure(figsize=(20,20)) ax2 = fig2.add_subplot(111) ax2.plot(lc_observed['Phase'], lc_observed['Magnitude'], 'r.', label='Observed', **plot_kwargs) ax2.plot(lc_synthetic['Phase'], lc_synthetic['Magnitude'], 'b.', label='Synthetic', linestyle='-', **plot_kwargs) ax2.set_xlabel('Phase') ax2.set_ylabel('Magnitude') ax2.set_title('CRTS J144508.6+050514') ax2.legend(loc='best') ax2.invert_yaxis() # + fig3 = plt.figure(figsize=(20,10)) ax3 = fig3.add_subplot(111) ax3.plot(lc_residuals['Phase'], lc_residuals['Observed'], 'r.', label='Observed', **plot_kwargs) ax3.plot(lc_residuals['Phase'], lc_residuals['Synthetic'], 'b.', label='Synthetic', linestyle='-', **plot_kwargs) ax3.set_xlabel('Phase') ax3.set_ylabel('Magnitude') ax3.set_title('CRTS J144508.6+050514') ax3.legend(loc='best') ax3.set_ylim(-0.10, 0.10) # - fig1.savefig('/data/PHOEBE1/1SWASP J144508.70+050514.4.rv.png', format='png') fig2.savefig('/data/PHOEBE1/1SWASP J144508.70+050514.4.lc.png', format='png') fig3.savefig('/data/PHOEBE1/1SWASP J144508.70+050514.4.residuals.png', format='png') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MIMIC Notes and Structured Data Prep # ## Imports & Inits # %load_ext autoreload # %autoreload 2 # + import pdb import pandas as pd import numpy as np np.set_printoptions(precision=4) import pickle from tqdm import tqdm_notebook as tqdm from ast import literal_eval from pathlib import Path from scipy import stats import matplotlib.pyplot as plt import seaborn as sns sns.set_style("darkgrid") # %matplotlib inline # - path = Path('data') workdir = path/'workdir' figdir = workdir/'figures' # + [markdown] heading_collapsed=true # ## Functions # + hidden=true def change_name(col_name): if '(' not in 
col_name: return col_name cols = literal_eval(col_name) return f'{cols[0]}_{cols[1]}' # + hidden=true def data_interval(x): if pd.isnull(x): return -1 if 0 < x <= 1: return 0 elif 1 < x <= 2: return 1 elif 2 < x <= 3: return 2 elif 3 < x <= 4: return 3 elif 4 < x <= 5: return 4 elif 5 < x <= 6: return 5 elif 6 < x <= 7: return 6 elif 7 < x <= 8: return 7 elif 8 < x <= 9: return 8 elif 9 < x <= 10: return 9 elif 10 < x <= 11: return 10 elif 11 < x <= 12: return 11 elif 12 < x <= 13: return 12 elif 13 < x <= 14: return 13 elif 14 < x <= 15: return 14 else: return 15 def icu_adm_label(x): if 0 <= x <= 1: return -1 # unused notes due to data leakage elif 1 < x <= 3: return 1 # imminent ICU admission elif 3 < x <= 5: return -1 # unused notes due to data leakage else: return 0 # delayed ICU admission # + [markdown] heading_collapsed=true # ## Processing # + hidden=true notes_df = pd.read_csv(path/'unstructured_raw.csv', parse_dates=['intime', 'admittime', 'charttime']) notes_df.drop_duplicates(inplace=True) vitals_df = pd.read_csv(path/'structured_raw.csv', parse_dates=['charttime']) vitals_df.drop_duplicates(inplace=True) # + hidden=true notes_hadms = notes_df['hadm_id'].unique() vitals_hadms = vitals_df['hadm_id'].unique() # Extract common `hadm_id` and filter out those that do not appear in both dfs common_hadms = set(vitals_df['hadm_id'].unique()).intersection(notes_df['hadm_id'].unique()) print(f"Number of encounters that definitely have structured vitals data: {len(vitals_hadms)}") print(f"Number of encounters that definitely have clinical notes: {len(notes_hadms)}") print(f"Number of encounters that have both vitals and clinical notes: {len(common_hadms)}") # + hidden=true notes_common = notes_df[notes_df['hadm_id'].isin(common_hadms)].reset_index(drop=True) vitals_common = vitals_df[vitals_df['hadm_id'].isin(common_hadms)].reset_index(drop=True) # sanity check s, n = set(vitals_common['hadm_id'].unique()), set(notes_common['hadm_id'].unique()) assert(s.symmetric_difference(n) == set()) vitals_common.shape, notes_common.shape # + hidden=true notes_common['note'] = notes_common['category'].str.cat(notes_common['description'], sep='\n') notes_common['note'] = notes_common['note'].str.cat(notes_common['text'], sep='\n') notes_common.drop(columns=['category', 'description', 'text'], inplace=True) notes_common = pd.DataFrame(notes_common.groupby(['hadm_id', 'intime', 'admittime', 'charttime'])['note'].apply('\n'.join)).reset_index() notes_common['category'] = notes_common['note'].apply(lambda x: x.split('\n')[0]) notes_common.shape # + hidden=true # Remove redundant info by filling in each time column with the value of the var vitals_common = vitals_common.groupby(['hadm_id','charttime']).sum(min_count = 1).reset_index() # Groupby ffill vitals_common = vitals_common.groupby(['hadm_id'], as_index=False).apply(lambda group: group.ffill()) # Groupby bfill vitals_common = vitals_common.groupby(['hadm_id'], as_index=False).apply(lambda group: group.bfill()) vitals_common = vitals_common.fillna(vitals_common.median()) vitals_common.shape # + hidden=true notes_common.to_csv(path/'unstructured_notes_proc.csv', index=False) vitals_common.to_csv(path/'structured_vitals_proc.csv', index=False) # + [markdown] heading_collapsed=true # ## Compute Statistics Dev # + hidden=true vitals_common = pd.read_csv(path/'structured_vitals_proc.csv', parse_dates=['charttime']) vitals_common.shape # + hidden=true x = pd.DataFrame(vitals_common.groupby('hadm_id').size(), columns=['size']).reset_index() hadms = 
x.loc[(x['size'] >= 10) & (x['size'] <= 20)].sample(5)['hadm_id'].tolist() x.loc[x['hadm_id'].isin(hadms)] # + hidden=true dev_subset = vitals_common.loc[(vitals_common['hadm_id'].isin(hadms))].reset_index(drop=True) print(dev_subset.shape) print(dev_subset.columns) # + hidden=true var_cols = dev_subset.columns[2:] print(len(var_cols)) running_stats = ['min', 'mean', 'median', 'std', 'max'] dfs = [] # + hidden=true for hadm_id, group_df in tqdm(dev_subset.groupby('hadm_id'), desc='Encounters'): df = group_df.copy() var_df = df[var_cols].reset_index(drop=True) # save the original vals for later df.set_index('charttime', inplace=True) # set charttime as index for rolling 24h stats_df = df[var_cols].rolling('24h').agg(running_stats) df = pd.DataFrame(stats_df.to_records()) # flatten the resulting dataframe df.insert(loc=1, column='hadm_id', value=hadm_id) df.rename(columns=change_name, inplace=True) # rename columns df = pd.concat([df, var_df], axis=1) # add the original vals back # reorder vars such that the columns are var, var_stat... stats_cols = df.columns[2:] all_cols = [] for var in var_cols: all_cols.append(var) for stat in stats_cols: if f'{var}_' in stat: all_cols.append(stat) order = list(df.columns[:2]) + all_cols df = df[order] dfs.append(df) dev_subset_stats = pd.concat(dfs) dev_subset_stats.reset_index(drop=True, inplace=True) dev_subset_stats['charttime'] = pd.to_datetime(dev_subset_stats['charttime']) std_cols = [col for col in dev_subset_stats.columns if 'std' in col] dev_subset_stats[std_cols] = dev_subset_stats[std_cols].fillna(0) dev_subset_stats = dev_subset_stats[['hadm_id', 'charttime'] + list(dev_subset_stats.columns[2:])] # + hidden=true print(dev_subset_stats.shape) dev_subset_stats.columns # - # ## Prep data for model # ### Merge # + notes_common = pd.read_csv(path/'unstructured_notes_proc.csv', parse_dates=['intime', 'admittime', 'charttime']) notes_common.drop(columns=['category'], inplace=True) vitals_common_stats = pd.read_csv(path/'structured_vitals_stats.csv', parse_dates=['charttime']) pickle.dump(list(vitals_common_stats.columns[2:]), open(path/'str_cols.pkl', 'wb')) print(vitals_common_stats.shape, notes_common.shape, vitals_common_stats['hadm_id'].nunique(), notes_common['hadm_id'].nunique()) # + [markdown] heading_collapsed=true # #### Merge Dev # + hidden=true main = ['hadm_id', 'charttime'] sub1 = ['hr', 'hr_max', 'temp', 'temp_min', 'glucose', 'glucose_std', 'map', 'map_median'] sub2 = ['admittime', 'intime', 'note'] # + hidden=true x = pd.DataFrame(notes_common.groupby('hadm_id').size(), columns=['size']).reset_index() hadms = x.loc[(x['size'] >= 2) & (x['size'] <= 15)].sample(5)['hadm_id'].tolist() subset_stats = vitals_common_stats.loc[(vitals_common_stats['hadm_id'].isin(hadms))][main + sub1].copy().reset_index(drop=True) subset_notes = notes_common.loc[(notes_common['hadm_id'].isin(hadms))][main + sub2].copy().reset_index(drop=True) subset_stats.shape, subset_stats['hadm_id'].nunique(), subset_notes.shape, subset_notes['hadm_id'].nunique() # + hidden=true pd.DataFrame(subset_stats.groupby('hadm_id').size(), columns=['size']).reset_index() # + hidden=true pd.DataFrame(subset_notes.groupby('hadm_id').size(), columns=['size']).reset_index() # + hidden=true subset_stats.sort_values(by='charttime', inplace=True) subset_stats.reset_index(inplace=True, drop=True) subset_notes.sort_values(by='charttime', inplace=True) subset_notes.reset_index(inplace=True, drop=True) df = pd.merge_asof(subset_notes, subset_stats, left_on='charttime', 
right_on='charttime', by='hadm_id') cols = ['hr', 'hr_max', 'temp', 'temp_min', 'glucose', 'glucose_std', 'map', 'map_median'] df = df.groupby(['hadm_id'], as_index=False).apply(lambda group: group.bfill()) df[cols] = df[cols].fillna(df[cols].median()) # + hidden=true i = -1 # + hidden=true i += 1 print(hadms[i]) # + hidden=true subset_stats[subset_stats['hadm_id'] == hadms[i]].reset_index(drop=True) # + hidden=true subset_notes[subset_notes['hadm_id'] == hadms[i]].reset_index(drop=True) # + hidden=true df[df['hadm_id'] == hadms[i]].reset_index(drop=True) # + hidden=true df.shape # - # #### Final Merge # + vitals_common_stats.sort_values(by='charttime', inplace=True) vitals_common_stats.reset_index(inplace=True, drop=True) notes_common.sort_values(by='charttime', inplace=True) notes_common.reset_index(inplace=True, drop=True) mm_notes_vitals = pd.merge_asof(notes_common, vitals_common_stats, left_on='charttime', right_on='charttime', by='hadm_id') str_cols = pickle.load(open(path/'str_cols.pkl', 'rb')) mm_notes_vitals = mm_notes_vitals.groupby(['hadm_id'], as_index=False).apply(lambda group: group.bfill()) mm_notes_vitals[str_cols] = mm_notes_vitals[str_cols].fillna(mm_notes_vitals[str_cols].median()) # - x = pd.DataFrame(mm_notes_vitals.isna().sum(), columns=['sum']).reset_index() assert(x['sum'].sum() == 0) mm_notes_vitals.to_csv(path/'mm_notes_vitals_proc.csv', index=False) # ### Labeling # + notes_common = pd.read_csv(path/'unstructured_notes_proc.csv', parse_dates=['intime', 'admittime', 'charttime']) notes_common.drop(columns=['category'], inplace=True) mm_notes_vitals = pd.read_csv(path/'mm_notes_vitals_proc.csv', parse_dates=['intime', 'admittime', 'charttime']) print(notes_common.shape, mm_notes_vitals.shape) # + notes_common['admit_to_icu'] = (notes_common['intime'] - notes_common['admittime'])/np.timedelta64(1, 'D') notes_common['chart_to_icu'] = (notes_common['intime'] - notes_common['charttime'])/np.timedelta64(1, 'D') notes_common['interval'] = notes_common['chart_to_icu'].apply(data_interval) notes_common['imi_adm_label'] = notes_common['interval'].apply(icu_adm_label) mm_notes_vitals['admit_to_icu'] = (mm_notes_vitals['intime'] - mm_notes_vitals['admittime'])/np.timedelta64(1, 'D') mm_notes_vitals['chart_to_icu'] = (mm_notes_vitals['intime'] - mm_notes_vitals['charttime'])/np.timedelta64(1, 'D') mm_notes_vitals['interval'] = mm_notes_vitals['chart_to_icu'].apply(data_interval) mm_notes_vitals['imi_adm_label'] = mm_notes_vitals['interval'].apply(icu_adm_label) print(notes_common.shape, notes_common['hadm_id'].nunique(), mm_notes_vitals.shape, mm_notes_vitals['hadm_id'].nunique()) # - notes_common.to_csv(path/'modelready_unstructured.csv', index=False) mm_notes_vitals.to_csv(path/'modelready_mm.csv', index=False) # ## Data Exploration # ### Cohort: **notes_all** # Read in all **notes_all** and subset it to get all the data with label not equal to -1 (only data used for modeling). Then get the unique ``hadm_id``'s within that. 
# !ls {path} notes_df = pd.read_csv(path/'modelready_unstructured.csv', parse_dates=['intime', 'admittime', 'charttime']) model_notes_df = notes_df[notes_df['imi_adm_label'] != -1].reset_index(drop=True) hadms = model_notes_df['hadm_id'].unique() # Subset the **notes_cohort** to get details of only those encountered that are used for modeling notes_cohort = pd.read_csv(path/'notes_all_cohort.csv') notes_cohort = notes_cohort[notes_cohort['hadm_id'].isin(hadms)].reset_index(drop=True) # + def group_eth(eth): eth = eth.lower() if 'white' in eth: return 'white' elif 'black' in eth: return 'black' elif 'hispanic' in eth: return 'hispanic' elif 'asian' in eth: return 'asian' else: return 'other' notes_cohort['ethnicity'] = notes_cohort['ethnicity'].apply(group_eth) notes_cohort.loc[notes_cohort['admission_age'] > 100, 'admission_age'] = 100 # - print(f"Number of patients in notes cohort: {notes_cohort['subject_id'].nunique()}") g = notes_cohort.groupby('expire_flag')['subject_id'].nunique().to_numpy() print(f"Mortality in notes cohort: {g[1]} ({(g[1]/g.sum())*100:0.1f}%)") g = notes_cohort.groupby('gender')['subject_id'].nunique().to_numpy() print(f"Males in notes cohort: {g[1]} ({(g[1]/g.sum())*100:0.1f}%)") print(f"Mean:{notes_cohort.groupby('subject_id')['admission_age'].first().mean():0.1f}") print(f"STD:{notes_cohort.groupby('subject_id')['admission_age'].first().std():0.1f}") print(f"25th percentile:{notes_cohort.groupby('subject_id')['admission_age'].first().quantile(0.25):0.1f}") print(f"75th percentile:{notes_cohort.groupby('subject_id')['admission_age'].first().quantile(0.75):0.1f}") g = pd.DataFrame(notes_cohort.groupby('admission_type')['hadm_id'].nunique()).reset_index() g.columns = ['encounter_type', 'count'] g['pct'] = np.round((g['count']/g['count'].sum() * 100), 1) print(g) g = pd.DataFrame(notes_cohort.groupby('ethnicity')['subject_id'].nunique()).reset_index() g.columns = ['ethnicity', 'count'] g['pct'] = np.round((g['count']/g['count'].sum() * 100), 1) print(g) # ### Notes Exploration print("Encounter time to ICU Admission for model cohort:") print(f"Mean:{model_notes_df['admit_to_icu'].mean():0.1f}") print(f"STD:{model_notes_df['admit_to_icu'].std():0.1f}") print(f"25th percentile:{model_notes_df['admit_to_icu'].quantile(0.25):0.1f}") print(f"75th percentile:{model_notes_df['admit_to_icu'].quantile(0.75):0.1f}") print("Encounter time to ICU Admission for notes cohort:") print(f"Mean:{notes_df['admit_to_icu'].mean():0.1f}") print(f"STD:{notes_df['admit_to_icu'].std():0.1f}") print(f"25th percentile:{notes_df['admit_to_icu'].quantile(0.25):0.1f}") print(f"75th percentile:{notes_df['admit_to_icu'].quantile(0.75):0.1f}") print(f"Average Number of clinical notes per encounter for model cohort: {(len(model_notes_df)/model_notes_df['hadm_id'].nunique()):0.1f}") print(f"Average Number of clinical notes per encounter for notes cohort: {(len(notes_df)/notes_df['hadm_id'].nunique()):0.1f}") print("Clinical Note Length for model cohort:") print(f"Mean:{model_notes_df['note_len'].mean():0.1f}") print(f"STD:{model_notes_df['note_len'].std():0.1f}") print(f"25th percentile:{model_notes_df['note_len'].quantile(0.25):0.1f}") print(f"75th percentile:{model_notes_df['note_len'].quantile(0.75):0.1f}") print() print("Clinical Note Length for notes cohort:") print(f"Mean:{notes_df['note_len'].mean():0.1f}") print(f"STD:{notes_df['note_len'].std():0.1f}") print(f"25th percentile:{notes_df['note_len'].quantile(0.25):0.1f}") print(f"75th percentile:{notes_df['note_len'].quantile(0.75):0.1f}") 
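# +
# (aside, purely illustrative, not part of the original analysis) the mean/STD/quantile prints above all
# repeat the same pattern, so a tiny helper keeps the formatting in one place
def describe_series(s, name):
    print(f"{name} -- mean: {s.mean():0.1f}, std: {s.std():0.1f}, "
          f"25th pct: {s.quantile(0.25):0.1f}, 75th pct: {s.quantile(0.75):0.1f}")

describe_series(model_notes_df['note_len'], 'Note length (model cohort)')
describe_series(notes_df['note_len'], 'Note length (notes cohort)')
# -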
print("Note distribution by category in model cohort:") g = pd.DataFrame(model_notes_df.groupby('category').size()).reset_index() g.columns = ['category', 'count'] g['pct'] = np.round((g['count']/g['count'].sum() * 100), 1) print(g) print() print("Note distribution by category in notes cohort:") g = pd.DataFrame(notes_df.groupby('category').size()).reset_index() g.columns = ['category', 'count'] g['pct'] = np.round((g['count']/g['count'].sum() * 100), 1) print(g) # ### Notes Plots cohort = 'notes_all' notes_df = pd.read_csv(path/f'{cohort}_proc.csv', parse_dates=['intime', 'admittime', 'ne_charttime']) save = False # + # Note length distribution fig, ax = plt.subplots(figsize=(11, 8)) sns.distplot(notes_df['note_len'], kde=False, ax=ax, bins=100) ax.set_xlim(0, 10000) ax.set_xlabel('Length of Note (characters)') ax.set_ylabel('# notes') if save: fig.savefig(figdir/f'{cohort}_note_len_dist.pdf', dpi=300) # + # Note distribution over days before ICU admission binned to 15 days plot_df = notes_df[['admit_to_icu']] fig, ax = plt.subplots(figsize=(10, 8)) sns.distplot(plot_df, kde=False, ax=ax, bins=80) ax.set_xlabel('Time to ICU admission (days)') ax.set_ylabel('# notes') ax.set_xlim(0, 70) if save: fig.savefig(figdir/f'{cohort}_admit_to_icu_dist.pdf', dpi=300) # + # Note distribution over days before ICU admission binned to 15 days intervals = ['-1 ≤ t ≤ 0'] intervals += [f'-{i+1} ≤ t ≤ -{i}' for i in range(1, notes_df['interval'].max())] intervals.append(f"t ≥ -{notes_df['interval'].max()}") plot_df = pd.DataFrame(notes_df.loc[notes_df['interval'] != -1].groupby('interval').size(), columns=['n_notes']).reset_index(drop=True) plot_df['days'] = intervals fig, ax = plt.subplots(figsize=(15, 8)) sns.barplot(x='days', y='n_notes', data=plot_df, ax=ax) ax.set_xticklabels(ax.get_xticklabels(),rotation=45, ha='right') ax.set_xlabel('Time to ICU admission (days)') ax.set_ylabel('# notes') for index, row in plot_df.iterrows(): ax.text(index, row['n_notes'], str(row['n_notes']), color='black', ha='center', va='bottom') if save: fig.savefig(figdir/f'{cohort}_admit_to_icu_binned_dist.pdf', dpi=300) # + # Note distribution over days before ICU admission by Category binned to 15 days def plot_intervals(ax, df, cat): sns.barplot(x='days', y='n_notes', data=df, ax=ax) ax.set_xticklabels(ax.get_xticklabels(),rotation=45, ha='right') ax.set_xlabel('') ax.set_ylabel('') ax.set_title(f"Note Category: {cat}\n# notes: {df['n_notes'].sum()}") for index, (_, row) in enumerate(df.iterrows()): ax.text(index, row['n_notes'], str(row['n_notes']), color='black', ha='center', va='bottom') plot_df = pd.DataFrame(notes_df.groupby(['category', 'interval']).size(), columns=['n_notes']) plot_df.reset_index(inplace=True) plot_df['days'] = plot_df['interval'].apply(lambda x: intervals[x]) plot_df.drop(['interval'], inplace=True, axis=1) fig, ax = plt.subplots(4, 3, figsize=(20, 25)) plot_intervals(ax[0][0], plot_df.loc[plot_df['category'] == 'Case Management ', ['n_notes', 'days']], 'Case Management') plot_intervals(ax[0][1], plot_df.loc[plot_df['category'] == 'Consult', ['n_notes', 'days']], 'Consult') plot_intervals(ax[0][2], plot_df.loc[plot_df['category'] == 'General', ['n_notes', 'days']], 'General') plot_intervals(ax[1][0], plot_df.loc[plot_df['category'] == 'Nursing', ['n_notes', 'days']], 'Nursing') plot_intervals(ax[1][1], plot_df.loc[plot_df['category'] == 'Nursing/other', ['n_notes', 'days']], 'Nursing/other') plot_intervals(ax[1][2], plot_df.loc[plot_df['category'] == 'Nutrition', ['n_notes', 'days']], 'Nutrition') 
plot_intervals(ax[2][0], plot_df.loc[plot_df['category'] == 'Pharmacy', ['n_notes', 'days']], 'Pharmacy') plot_intervals(ax[2][1], plot_df.loc[plot_df['category'] == 'Physician ', ['n_notes', 'days',]], 'Physician') plot_intervals(ax[2][2], plot_df.loc[plot_df['category'] == 'Radiology', ['n_notes', 'days']], 'Radiology') plot_intervals(ax[3][0], plot_df.loc[plot_df['category'] == 'Rehab Services', ['n_notes', 'days']], 'Rehab Services') plot_intervals(ax[3][1], plot_df.loc[plot_df['category'] == 'Respiratory ', ['n_notes', 'days']], 'Respiratory') plot_intervals(ax[3][2], plot_df.loc[plot_df['category'] == 'Social Work', ['n_notes', 'days']], 'Social Work') fig.text(0.5, 0.09, 'Time to ICU admission (days)', ha='center') fig.text(0.08, 0.5, '# notes', va='center', rotation='vertical') plt.subplots_adjust(hspace = 0.3) if save: fig.savefig(figdir/f'{cohort}_admit_to_icu_cat_binned_dist.pdf', dpi=300) # + # Histogram of time between note charttime and ICU admittime plot_df = notes_df[['category', 'note_to_icu']] fig, ax = plt.subplots(figsize=(10, 8)) sns.distplot(plot_df['note_to_icu'], kde=False, ax=ax, bins=80) ax.set_xlabel('Note Charttime to ICU Admittime (days)') ax.set_ylabel('# notes') ax.set_xlim(0, 60) if save: fig.savefig(figdir/f'{cohort}_note_to_icu_dist.pdf', dpi=300) # + # Histogram of time between note charttime and ICU admittime by Category def plot_period(ax, df, cat): sns.distplot(df, kde=False, ax=ax, bins=10) ax.set_xlabel('') ax.set_ylabel('') ax.set_title(f"Note Category: {cat}") fig, ax = plt.subplots(4, 3, figsize=(20, 25)) plot_period(ax[0][0], plot_df.loc[plot_df['category'] == 'Case Management ', ['note_to_icu']], 'Case Management') plot_period(ax[0][1], plot_df.loc[plot_df['category'] == 'Consult', ['note_to_icu']], 'Consult') plot_period(ax[0][2], plot_df.loc[plot_df['category'] == 'General', ['note_to_icu']], 'General') plot_period(ax[1][0], plot_df.loc[plot_df['category'] == 'Nursing', ['note_to_icu']], 'Nursing') plot_period(ax[1][1], plot_df.loc[plot_df['category'] == 'Nursing/other', ['note_to_icu']], 'Nursing/other') plot_period(ax[1][2], plot_df.loc[plot_df['category'] == 'Nutrition', ['note_to_icu']], 'Nutrition') plot_period(ax[2][0], plot_df.loc[plot_df['category'] == 'Pharmacy', ['note_to_icu']], 'Pharmacy') plot_period(ax[2][1], plot_df.loc[plot_df['category'] == 'Physician ', ['note_to_icu',]], 'Physician') plot_period(ax[2][2], plot_df.loc[plot_df['category'] == 'Radiology', ['note_to_icu']], 'Radiology') plot_period(ax[3][0], plot_df.loc[plot_df['category'] == 'Rehab Services', ['note_to_icu']], 'Rehab Services') plot_period(ax[3][1], plot_df.loc[plot_df['category'] == 'Respiratory ', ['note_to_icu']], 'Respiratory') plot_period(ax[3][2], plot_df.loc[plot_df['category'] == 'Social Work', ['note_to_icu']], 'Social Work') fig.text(0.5, 0.1, 'Note Charttime to ICU Admittime (days)', ha='center') fig.text(0.08, 0.5, '# notes', va='center', rotation='vertical') plt.subplots_adjust(hspace = 0.1) if save: fig.savefig(figdir/f'{cohort}_note_to_icu_cat_dist.pdf', dpi=300) # + desc = ['Unused', 'Delayed ICU Admission', 'Imminent ICU Admission'] p = pd.DataFrame(notes_df.groupby(['imi_adm_label']).size(), columns=['n_notes']).reset_index() # p1 = pd.DataFrame(notes_df.groupby(['imi_adm_label']).size(), columns=['n_notes']).reset_index() # p2 = notes_df.groupby(['imi_adm_label'])['hadm_id'].nunique().reset_index() # p = p1.merge(p2, on=['imi_adm_label']) p['imi_adm_label'] = desc p = p.reindex([2, 1, 0]) # p.reset_index(inplace=True, drop=True) plot_df = 
p.copy() plot_df.rename(columns={'hadm_id':'# Encounters', 'n_notes':'# Notes'}, inplace=True) plot_df = pd.melt(plot_df, id_vars='imi_adm_label', var_name='Legend', value_name='counts') plot_df fig, ax = plt.subplots(figsize=(11, 8)) sns.barplot(x='imi_adm_label', y='counts', data=plot_df, ax=ax) ax.set_xticklabels(ax.get_xticklabels(), ha='center') ax.set_xlabel('Class Label') ax.set_ylabel('# notes') for index, row in plot_df.iterrows(): # if index < len(plot_df)//2: ax.text(index+0.06, row['counts'], str(row['counts']), color='black', ha='right', va='bottom') # else: # ax.text(index % (len(plot_df)//2), row['counts'], str(row['counts']), color='black', ha='right', va='bottom') if save: fig.savefig(figdir/f'{cohort}_note_class_dist.pdf', dpi=300) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="H_4EUYVVmLE_" # # **Figuring out which customer may leave** # + id="CzupW10UmRPP" # + [markdown] id="99qBqi2Oy85D" # # Figuring Our Which Customers May Leave - Churn Analysis # # ### About our Dataset # Source - https://www.kaggle.com/blastchar/telco-customer-churn # 1. We have customer information for a Telecommunications company # 2. We've got customer IDs, general customer info, the servies they've subscribed too, type of contract and monthly charges. # 3. This is a historic customer information so we have a field stating whether that customer has **churnded** # **Field Descriptions** # - customerID - Customer ID # - gender - Whether the customer is a male or a female # - SeniorCitizen - Whether the customer is a senior citizen or not (1, 0) # - Partner - Whether the customer has a partner or not (Yes, No) # - Dependents - Whether the customer has dependents or not (Yes, No) # - tenure - Number of months the customer has stayed with the company # - PhoneService - Whether the customer has a phone service or not (Yes, No) # - MultipleLines - Whether the customer has multiple lines or not (Yes, No, No phone service) # - InternetService - Customer’s internet service provider (DSL, Fiber optic, No) # - OnlineSecurity - Whether the customer has online security or not (Yes, No, No internet service) # - OnlineBackup - Whether the customer has online backup or not (Yes, No, No internet service) # - DeviceProtection - Whether the customer has device protection or not (Yes, No, No internet service) # - TechSupport - Whether the customer has tech support or not (Yes, No, No internet service) # - StreamingTV - Whether the customer has streaming TV or not (Yes, No, No internet service) # - StreamingMovies - Whether the customer has streaming movies or not (Yes, No, No internet service) # - Contract - The contract term of the customer (Month-to-month, One year, Two year) # - PaperlessBilling - Whether the customer has paperless billing or not (Yes, No) # - PaymentMethod - The customer’s payment method (Electronic check, Mailed check Bank transfer (automatic), Credit card (automatic)) # - MonthlyCharges - The amount charged to the customer monthly # - TotalCharges - The total amount charged to the customer # - Churn - Whether the customer churned or not (Yes or No) # # ***Customer Churn*** - churn is when an existing customer, user, player, subscriber or any kind of return client stops doing business or ends the relationship with a company. 
# # **Aim -** is to figure our which customers may likely churn in future # + id="u20yT_I8y9y9" import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns # + id="joEiBbg42Rzv" colab={"base_uri": "https://localhost:8080/", "height": 301} outputId="932a0fb5-9502-4b52-d766-692af322db5f" ## Loading files file_name = "https://raw.githubusercontent.com/rajeevratan84/datascienceforbusiness/master/WA_Fn-UseC_-Telco-Customer-Churn.csv" churn_df = pd.read_csv(file_name) # Using .head() function to check if file is uploaded. It will print first 5 records. churn_df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 301} id="5EIAc8hrFBfI" outputId="70761233-33c8-4869-b6f2-fa58d206b650" ## To check last 5 records, we use .tail() funtion churn_df.tail() # + colab={"base_uri": "https://localhost:8080/", "height": 287} id="xTuPxhWWEnKh" outputId="aace9754-539c-426f-8f60-13172f7aabcf" ## To get summary on numeric columns churn_df.describe() # + colab={"base_uri": "https://localhost:8080/", "height": 414} id="eW1dP4UGFhBl" outputId="562cea0a-0287-4077-adf6-a8c3986539b8" ## To get summary on each column churn_df.describe(include="all") # + colab={"base_uri": "https://localhost:8080/"} id="Ue5BlRaPFnHF" outputId="6b28804a-7250-4287-e81e-1de63a436656" ## TO check categorical variables in dataset churn_df.select_dtypes(exclude=['int64','float']).columns # + colab={"base_uri": "https://localhost:8080/"} id="QDJa_HSITeRJ" outputId="b177f153-5a37-42f1-a96c-9f51b4c6baae" ## To check numerical variables churn_df.select_dtypes(exclude=['object']).columns # + colab={"base_uri": "https://localhost:8080/"} id="vJrjcDhZV629" outputId="e899ebf2-e8a1-4d58-98e7-ea86170da41b" ## To check the unique values churn_df.SeniorCitizen.unique() # + colab={"base_uri": "https://localhost:8080/"} id="2LvNWOYxWSEW" outputId="bdd32683-63d1-4804-ca42-85f803bc7ce3" ## To check unique levels of tenure churn_df.tenure.unique() # + colab={"base_uri": "https://localhost:8080/"} id="Il6ZRcNIWbY2" outputId="b028b653-2b75-4f22-c9e9-234992e1a956" ## Printing unique levels of Churn variable churn_df.Churn.unique() # + colab={"base_uri": "https://localhost:8080/"} id="GB5UQvRvWlOv" outputId="6db7f83f-f6d3-408f-8858-89a2d6c56d85" ## How many unique values are there in MonthlyChaerges variable len(churn_df.MonthlyCharges.unique()) # + colab={"base_uri": "https://localhost:8080/"} id="n0N9iv9IaDfI" outputId="21da339f-5189-4df1-be45-82d9cd992e46" ## How many unique values are there in Churn variable len(churn_df.Churn.unique()) # + colab={"base_uri": "https://localhost:8080/"} id="Be52eqDNaNCp" outputId="d0548f53-ba3e-4268-a6c4-195ee83d62c3" ## Another way of showing information of data in a single output print("No_of_Rows: ", churn_df.shape[0]) print() print("No_of_Columns: ", churn_df.shape[1]) print() print("Features: ", churn_df.columns.to_list) print("\nMissing_Values: ", churn_df.isnull().sum().values.sum()) print("\nMissing_Values: ", churn_df.isnull().sum()) print("\nUnique_Values: \n", churn_df.nunique()) # + colab={"base_uri": "https://localhost:8080/"} id="dmdhA8Rnbd_F" outputId="2284ccb9-787c-4a1b-9f58-4fb676ceb69a" ## Print how many churn and not churn churn_df['Churn'].value_counts(sort = False) # + id="M5LomPmNcXKi" # + [markdown] id="QrHG4v-IcyHG" # ## **Exporatory Data Analysis** # + id="oPDkaLOGc2ot" ## It is a best practice to keep a copy, in case we need to check at original dataset in future churn_df_copy = churn_df.copy() # + colab={"base_uri": 
"https://localhost:8080/", "height": 284} id="5pOx1cqufJln" outputId="f309bc92-bdd7-497a-8e59-d8ee7567be98" ## Dropping the columns which are not necessary for the plots we are gonna do. churn_df_copy.drop(['customerID','MonthlyCharges', 'TotalCharges', 'tenure'], axis=1, inplace=True) churn_df_copy.head() # + [markdown] id="VR9QC7LBhx3y" # ### **pd.crosstab() == if we want to work upon more variable at a time then we use pd.crosstab()** # * The pandas crosstab function builds a cross-tabulation table that can show the frequency with which certain groups of data appear. # * The crosstab function can operate on numpy arrays, series or columns in a dataframe. # * Pandas does that work behind the scenes to count how many occurrences there are of each combination. # * The pandas crosstab function is a useful tool for summarizing data. The functionality overlaps with some of the other pandas tools but it occupies a useful place in your data analysis toolbox. # + colab={"base_uri": "https://localhost:8080/"} id="yOZA0EtJhzLG" outputId="88eca0a9-5028-460c-b442-091b7812b7a2" ## By using this code we can apply crosstab function for each column summary = pd.concat([pd.crosstab(churn_df_copy[x], churn_df_copy.Churn) for x in churn_df_copy.columns[:-1]], keys=churn_df_copy.columns[:-1]) summary # + colab={"base_uri": "https://localhost:8080/"} id="hFr1MwRCmnLx" outputId="ada21740-3a9e-4f4d-efac-9f595454fe96" ## Printing churn rate by gender pd.crosstab(churn_df_copy['Churn'],churn_df_copy['gender']) # + colab={"base_uri": "https://localhost:8080/"} id="sfSmcBfMnHbL" outputId="7831c900-7027-4022-f0d1-0e5c3cce2083" ## Printing churn rate by gender with margins pd.crosstab(churn_df_copy['Churn'],churn_df_copy['gender'],margins=True,margins_name="Total",normalize=True) # + colab={"base_uri": "https://localhost:8080/"} id="7lm82EcCnvBa" outputId="71f33e45-08e2-4c6f-81f0-d6289074f3c4" ## Checking margins pd.concat([pd.crosstab(churn_df_copy[x], churn_df_copy.Churn,margins=True,margins_name="Total",normalize=True) for x in churn_df_copy.columns[:-1]], keys=churn_df_copy.columns[:-1]) # + id="FHbXL4n1oevj" """Making a % column for summary""" summary['Churn_%'] = summary['Yes'] / summary['No'] + summary['Yes'] # + colab={"base_uri": "https://localhost:8080/"} id="2ZdfI-hGpDZM" outputId="442bd90e-360c-4a90-e2f7-7f04078e18c7" summary # + id="gnNCu0topGPY" # + [markdown] id="_7XgN9OFpRtZ" # ## **Visualization and EDA** # + colab={"base_uri": "https://localhost:8080/", "height": 426} id="cVY9W205pW5A" outputId="7f573cca-ae98-4ffc-8a86-1b8b1752ed68" import matplotlib.pyplot as plt # this is used for the plot the graph import seaborn as sns # used for plot interactive graph. from pylab import rcParams # Customize Matplotlib plots using rcParams # Data to plot labels = churn_df['Churn'].value_counts(sort = True).index sizes = churn_df['Churn'].value_counts(sort = True) colors = ["pink","lightblue"] explode = (0.05,0) # explode 1st slice rcParams['figure.figsize'] = 7,7 # Plot plt.pie(sizes, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90,) plt.title('Customer Churn Breakdown') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 632} id="xWmafmDRpiMO" outputId="e7273f80-d6d7-4512-a80a-00846c7bd9d3" # Correlation plot doesn't end up being too informative import matplotlib.pyplot as plt def plot_corr(df,size=10): '''Function plots a graphical correlation matrix for each pair of columns in the dataframe. 
Input: df: pandas DataFrame size: vertical and horizontal size of the plot''' corr = df.corr() fig, ax = plt.subplots(figsize=(size, size)) ax.legend() cax = ax.matshow(corr) fig.colorbar(cax) plt.xticks(range(len(corr.columns)), corr.columns, rotation='vertical') plt.yticks(range(len(corr.columns)), corr.columns) plot_corr(churn_df) # + colab={"base_uri": "https://localhost:8080/", "height": 438} id="S6Xk3v2BpwJO" outputId="ea81993a-088d-4cbc-b87f-eb1c57a182b9" # Create a Violin Plot showing how monthy charges relate to Churn # We an see that Churned customers tend to be higher paying customers g = sns.factorplot(x="Churn", y = "MonthlyCharges",data = churn_df, kind="violin", palette = "Pastel1") # + colab={"base_uri": "https://localhost:8080/", "height": 438} id="4b1h0U1pp0_z" outputId="a5e8a3f6-9a18-40fc-b7d5-1893c227d724" # Let's look at Tenure g = sns.factorplot(x="Churn", y = "tenure",data = churn_df, kind="violin", palette = "Pastel1") # + id="mtWy031Gp6MZ" colab={"base_uri": "https://localhost:8080/", "height": 678} outputId="fc9c8155-c4d7-43b9-ab89-9ea174974dd2" plt.figure(figsize= (10,10)) sns.countplot(churn_df['Churn']) # + [markdown] id="LSq-dksKuic2" # ## **Preparing our dataset for Machine Learning** # + colab={"base_uri": "https://localhost:8080/"} id="E6rUvIWkuqQD" outputId="7de34f14-864f-438e-dfbb-ff300f66fa24" # Check for empty fields, Note, " " is not Null but a spaced character len(churn_df[churn_df['TotalCharges'] == " "]) # + id="s2C1PnAZxCuq" ## Drop missing data churn_df = churn_df[churn_df['TotalCharges'] != " "] # + colab={"base_uri": "https://localhost:8080/"} id="NFpB6Zzix0GQ" outputId="578798a0-335a-4732-9993-ac6f64358930" len(churn_df[churn_df['TotalCharges'] == " "]) # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="iswexGFW1vMS" outputId="e55c6e54-204b-45ee-fb55-4ff27652dd0d" ## Here we are making diff col - id_col, target_col, ## Next we are writing a code to check the unique levels in each categorical variable and if it is <6 then it is applying label encoding. ## Label Encoding takes binary column and changes the values to 0 and 1. We do label encoding because our can only understand 0 and 1 language. 
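## (aside, illustrative values only) e.g. LabelEncoder maps a binary column such as
##   ['Yes', 'No', 'Yes']          ->  [1, 0, 1]
## while pd.get_dummies (used below for the multi-level columns) expands something like
##   ['DSL', 'Fiber optic', 'No']  ->  three separate 0/1 indicator columns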
## cat_col - will store categorical variables which are having less than 6 unique levels ## id_col - stores customerID column ## target_col - stores Churn column ## num_cols - stores all the numerical columns except id_cols, target_cols, cat_col ## bin_cols - stores the binary variables ## multi_cols - stores the categorical columns which are not binary ## Then next we do label encoding for binary columns ## And duplicating columns for multi value columns from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import StandardScaler #customer id col id_col = ['customerID'] #Target columns target_col = ["Churn"] #categorical columns cat_cols = churn_df.nunique()[churn_df.nunique() < 6].keys().tolist() cat_cols = [x for x in cat_cols if x not in target_col] #numerical columns num_cols = [x for x in churn_df.columns if x not in cat_cols + target_col + id_col] #Binary columns with 2 values bin_cols = churn_df.nunique()[churn_df.nunique() == 2].keys().tolist() #Columns more than 2 values multi_cols = [i for i in cat_cols if i not in bin_cols] #Label encoding Binary columns le = LabelEncoder() for i in bin_cols : churn_df[i] = le.fit_transform(churn_df[i]) #Duplicating columns for multi value columns churn_df = pd.get_dummies(data = churn_df, columns = multi_cols ) churn_df.head() # + id="3c74-9Pq15a1" colab={"base_uri": "https://localhost:8080/"} outputId="a86a1575-da26-4d5f-eadf-32345446fc57" len(churn_df.columns) # + colab={"base_uri": "https://localhost:8080/"} id="qG5NkJNEF4fG" outputId="d77d6d81-82aa-413c-bb48-4beb810bdfb4" num_cols # + colab={"base_uri": "https://localhost:8080/"} id="EvEEWhEHF_Eg" outputId="5588853b-0113-4fd7-9502-4fdd3a0ef6c5" id_col # + colab={"base_uri": "https://localhost:8080/"} id="Hc9chOfUGAf0" outputId="8a01265b-a330-4d65-981b-ccd30291ba64" cat_cols # + colab={"base_uri": "https://localhost:8080/", "height": 334} id="nGqGckh4GCWI" outputId="9c93b924-1232-4b67-da0e-5621752b3283" ## Scaling Numerical columns std = StandardScaler() ## Scale data scaled = std.fit_transform(churn_df[num_cols]) scaled = pd.DataFrame(scaled,columns = num_cols) ## Dropping original values merging scaled values for numerical columns df_telcom_og = churn_df.copy() churn_df = churn_df.drop(columns = num_cols,axis = 1) churn_df = churn_df.merge(scaled, left_index = True, right_index = True, how = "left") ## Churn_df.info() churn_df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 251} id="4dtV-IhKIw6_" outputId="fec8a9ee-2152-4812-83fa-3bbc9c0c7530" churn_df.drop(['customerID'], axis=1, inplace=True) churn_df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 431} id="oXTH7QUWI7Sh" outputId="3cfb7c6a-76f2-41f3-ba72-0267b133285d" churn_df[churn_df.isnull().any(axis=1)] # + id="w6lYUyfbI9-d" colab={"base_uri": "https://localhost:8080/"} outputId="99d372ce-4751-4aff-db80-1d697c01cfbd" print(len(churn_df.isnull().sum())) ## Since there are only 11 NA values, we drop them. 
churn_df = churn_df.dropna() # + colab={"base_uri": "https://localhost:8080/", "height": 101} id="m9ILshnxJHvE" outputId="3fde5ff6-26c1-45ac-b47e-c2750f19c5c3" # Double check that nulls have been removed churn_df[churn_df.isnull().any(axis=1)] # + [markdown] id="j7cEtngZ_ZX4" # # **Splitting into training and testing** # + id="h8zQD6ap_hHe" from sklearn.model_selection import train_test_split # We remove our label values from train data X = churn_df.drop(['Churn'],axis=1).values # We assigned our label variable to test data Y = churn_df['Churn'].values # + id="8_lQX7-MAvcW" # Split it to a 70:30 Ratio Train:Test x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.3) # + colab={"base_uri": "https://localhost:8080/"} id="1xQjKvVPAzcU" outputId="f7bcf753-fa39-4fb9-895f-b0d4ed0803e0" type(x_train) # + colab={"base_uri": "https://localhost:8080/", "height": 217} id="Q09QyWEUA6AL" outputId="739fb576-38f7-4b3b-8f76-4990ae6c8528" df_train = pd.DataFrame(x_train) df_train.head() # + colab={"base_uri": "https://localhost:8080/"} id="2uWugCLpBDlZ" outputId="318300e9-309f-4445-868a-9f81baf2c0f0" print(len(churn_df.columns)) churn_df.columns # + colab={"base_uri": "https://localhost:8080/", "height": 251} id="ffhgYoomBG8f" outputId="396c6556-3add-4588-c1aa-d76719727173" churn_df.head() # + id="KIMtbfcIBKGz" # + [markdown] id="R8NmVIM1BbXU" # # **Training LOGISTIC REGRESSION model** # + id="5pZHxanrBfkC" colab={"base_uri": "https://localhost:8080/"} outputId="430fcba1-d4d2-4f16--" from sklearn.linear_model import LogisticRegression ### creating a model classifier_model = LogisticRegression() ### passing training data to model classifier_model.fit(x_train,y_train) ### predicting values x_test using model and storing the values in y_pred y_pred = classifier_model.predict(x_test) ### interception and coefficient of model print(classifier_model.intercept_) print(classifier_model.coef_) print() ### printing values for better understanding print(list(zip(y_test, y_pred))) # + colab={"base_uri": "https://localhost:8080/"} id="ecWrSYQDLQlj" outputId="c8b0b8da-0ba1-4c7b-d3e9-a470d362528f" from sklearn.metrics import accuracy_score,confusion_matrix,classification_report ### creating and printing confusion matrix conf_matrix = confusion_matrix(y_test,y_pred) print(conf_matrix) ### Creating and printing classification report print("Classification Report: ") print(classification_report(y_test,y_pred)) ### Creating and printing accuracy score acc = accuracy_score(y_test,y_pred) print("Accuracy {0:.2f}%".format(100*accuracy_score(y_pred, y_test))) # + id="snquThPILbFr" # + [markdown] id="27j7soUlLgPn" # ## **Feature Importance using Logistic Regression** # + colab={"base_uri": "https://localhost:8080/"} id="fcsbCDE2LlK6" outputId="3a0be5a6-4190-45ac-e907-338057c6af09" # Let's see what features mattered most i.e. 
Feature Importance # We sort on the co-efficients with the largest weights as those impact the resulting output the most coef = classifier_model.coef_[0] coef = [abs(number) for number in coef] print(coef) # + colab={"base_uri": "https://localhost:8080/"} id="YJj6i_7dLzYd" outputId="0a64e9ef-2517-4f46-c41f-3499f45a8982" # Finding and deleting the label column cols = list(churn_df.columns) cols.index('Churn') # + colab={"base_uri": "https://localhost:8080/"} id="xI_5ToTgMBsc" outputId="16ad4aae-dc38-44de-d956-af2275cc29f6" del cols[6] cols # + colab={"base_uri": "https://localhost:8080/"} id="XdltJMhLMIay" outputId="00d8b07e-bc4c-4e69-f11a-7da8feb9adcf" # Sorting on Feature Importance sorted_index = sorted(range(len(coef)), key = lambda k: coef[k], reverse = True) for idx in sorted_index: print(cols[idx]) # + id="oP3xA5b0MOAt" # + [markdown] id="kVAI6RQ4MSyU" # ## **Try Random Forests** # + id="P0LhzSm4MVeB" from sklearn.ensemble import RandomForestClassifier random_forest_model = RandomForestClassifier(n_estimators=100,random_state=10) ## it will built 100 DT in background #fit the model on the data and predict the values random_forest_model.fit(x_train,y_train) y_pred_rf = random_forest_model.predict(x_test) # + colab={"base_uri": "https://localhost:8080/"} id="WghBHSQQMeUU" outputId="809546b1-c3bc-4538-f482-414a369db55d" from sklearn.metrics import accuracy_score,confusion_matrix,classification_report ### creating and printing confusion matrix conf_matrix_rf = confusion_matrix(y_test,y_pred_rf) print(conf_matrix_rf) ### Creating and printing classification report print("Classification Report: ") print(classification_report(y_test,y_pred_rf)) ### Creating and printing accuracy score acc = accuracy_score(y_test,y_pred_rf) print("Accuracy {0:.2f}%".format(100*accuracy_score(y_pred_rf, y_test))) # + id="SVAoYdBBNtFi" # + [markdown] id="GeQXdqj1Nt01" # # **Saving a model** # + id="5m3ZOu7KMoJM" import pickle # save with open('model.pkl','wb') as f: pickle.dump(random_forest_model, f) # load with open('model.pkl', 'rb') as f: loaded_model_rf = pickle.load(f) # + id="XV_TqDnQNz9A" predictions = loaded_model_rf.predict(x_test) # + colab={"base_uri": "https://localhost:8080/"} id="JzLIANfvjfgs" outputId="666abcd1-b7a6-4ff6-8fba-05170a669936" predictions # + id="e_a2c0PUjhpy" # + [markdown] id="8fMDaH7mjkFZ" # # **Deep Learning Model** # + id="YxSc2NlTjt9L" ## Using the newest version of Tensorflow 2.0 # %tensorflow_version 2.x # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="AEt2tGr_kL9M" outputId="762be354-984d-43da-90dc-8a3f582b1dd8" ## Checking to ensure we are using our GPU import tensorflow as tf tf.test.gpu_device_name() # + id="dDNdk-BykeUq" # Create a simple model import tensorflow.keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense model = Sequential() model.add(Dense(20, kernel_initializer = "uniform",activation = "relu", input_dim=40)) model.add(Dense(1, kernel_initializer = "uniform",activation = "sigmoid")) model.compile(optimizer= "adam",loss = "binary_crossentropy",metrics = ["accuracy"]) # + colab={"base_uri": "https://localhost:8080/"} id="rdATaKQvk9ze" outputId="4cf65f21-9173-4c58-c536-fc5fb8b74a18" # Display Model Summary and Show Parameters model.summary() # + colab={"base_uri": "https://localhost:8080/"} id="gHt155h5lAHv" outputId="9441e6a0-db18-4338-e49a-cb3c360caa1d" # Start Training Our Classifier batch_size = 64 epochs = 25 history = model.fit(x_train, y_train, batch_size = batch_size, epochs = epochs, verbose 
= 1, validation_data = (x_test, y_test)) score = model.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) # + colab={"base_uri": "https://localhost:8080/"} id="DN6d4RAVlcLw" outputId="90492694-326b-4881-ec75-dc24b1c6e4a1" predictions = model.predict(x_test) predictions = (predictions > 0.5) print(confusion_matrix(y_test, predictions)) print(classification_report(y_test, predictions)) # + id="_dg0Uw3zlnPy" # + [markdown] id="L_Jkgea-lzx0" # # **Saving Model** # + id="CQDuS4qbl4Zz" ## Simple cnn = simple convolutional neural network ## You mean a HDF5/H5 file, which is a file format to store structured data, its not a model by itself. ## Keras saves models in this format as it can easily store the weights and model configuration in a single file. model.save("simple_cnn_25_epochs.h5") # + id="mmNpwxu8mstz" ## Loading our model from tensorflow.keras.models import load_model classifier_DL_simple_cnn = load_model('simple_cnn_25_epochs.h5') # + id="zvTqq4wDnJm3" # + [markdown] id="6nq_MmkEnMyK" # # **Trying deeper models, checkpoints and stopping early.** # + colab={"base_uri": "https://localhost:8080/"} id="udMztx4InVdW" outputId="207cf4f7-5aba-4c64-ea3e-320bff74ec2b" from tensorflow.keras.regularizers import l2 from tensorflow.keras.layers import Dropout from tensorflow.keras.callbacks import ModelCheckpoint model2 = Sequential() # Hidden Layer 1 model2.add(Dense(2000, activation='relu', input_dim=40, kernel_regularizer=l2(0.01))) model2.add(Dropout(0.3, noise_shape=None, seed=None)) # Hidden Layer 2 model2.add(Dense(1000, activation='relu', input_dim=18, kernel_regularizer=l2(0.01))) model2.add(Dropout(0.3, noise_shape=None, seed=None)) # Hidden Layer 3 model2.add(Dense(500, activation = 'relu', kernel_regularizer=l2(0.01))) model2.add(Dropout(0.3, noise_shape=None, seed=None)) model2.add(Dense(1, activation='sigmoid')) model2.summary() # Create our checkpoint so that we save model after each epoch checkpoint = ModelCheckpoint("deep_model_checkpoint.h5", monitor="val_loss", mode="min", save_best_only = True, verbose=1) # + id="zlZyai-vn7lj" model2.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # + id="NYtmlhyVoEh_" # Defining our early stoppping criteria from tensorflow.keras.callbacks import EarlyStopping earlystop = EarlyStopping(monitor = 'val_loss', # value being monitored for improvement min_delta = 0, #Abs value and is the min change required before we stop patience = 2, #Number of epochs we wait before stopping verbose = 1, restore_best_weights = True) #keeps the best weigths once stopped # we put our call backs into a callback list callbacks = [earlystop, checkpoint] # + colab={"base_uri": "https://localhost:8080/"} id="zbWYb_1IoRbh" outputId="5a623c83-a9e8-49b7-da5d-424f6d95972e" batch_size = 32 epochs = 10 history = model2.fit(x_train, y_train, batch_size = batch_size, epochs = epochs, verbose = 1, # NOTE We are adding our callbacks here callbacks = callbacks, validation_data = (x_test, y_test)) score = model2.evaluate(x_test, y_test, verbose=0) print('Test loss:', score[0]) print('Test accuracy:', score[1]) # + id="QpeH3PKkoahD" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # 数据文件格式json分析 # {"c":"1415.59,13369599,1,25,57b8cce4,1381770056","m":"最后的福利~"} # 对应内容 # {"c": "播放时间,颜色,模式,字号,uid,发送时间", "m": "弹幕内容"} # C: # dtTime = 播放时间:按秒计 # 
rgb = 颜色:十进制颜色 # danmu_model = 模式:1-3:滚动弹幕4:底端弹幕5:顶端弹幕7:高级弹幕 # font = 字号:十进制整数 1-99 # userid = uid: # stamp = 发送时间:时间戳 # m: # message = 弹幕内容 import pandas as pd from pandas import Series,DataFrame import numpy as np import time import matplotlib.pyplot as plt # %matplotlib inline # 导入弹幕数据文件,并按照json内容 划分为列 list_episode = [] for i in range(3,142): df = pd.read_json('../data/hxh/%s.json'%(i),encoding='utf-8') df = df.rename(columns=lambda x:x.replace('m', 'message')) df['dtTime'],df['rgb'],df['danmu_model'],df['font'],df['userid'],df['stamp'] = df['c'].str.split(',',6).str df['episode'] = i df = df.drop(['c'],axis = 1) list_episode.append(df) data = pd.concat(list_episode,ignore_index=True) data # 查看是否有空值 data.isnull().sum().any() # 查看数据概况 data.shape data.head() # 提取时间戳里的 小时 年 月 生成新的列合并 def timechange(times): time_local = time.localtime(int(times)) return time_local.tm_hour data['hour']= data['stamp'].map(timechange) def timechangeyear(times): time_local = time.localtime(int(times)) return time_local.tm_year data['year']= data['stamp'].map(timechangeyear) def timechangemon(times): time_local = time.localtime(int(times)) return time_local.tm_mon data['mon']= data['stamp'].map(timechangemon) data # 查看弹幕总数 #弹幕总数 data.shape[0] # 解决plt绘图中文问题 from pylab import mpl mpl.rcParams['font.sans-serif'] = ['SimHei'] # 指定默认字体:解决plot不能显示中文问题 mpl.rcParams['axes.unicode_minus'] = False # 绘图 查看弹幕密度情况 用户发送弹幕数量情况 #x集 y弹幕量 bar+线 data1 = data['episode'].value_counts() data1 = data1.sort_index() data1_index = np.array(data1.index) data1_index data1_value = np.array(data1.values) data1_value plt.bar(data1_index, data1_value, 0.5) plt.xlabel('集数', size=30) plt.ylabel('弹幕数', size=30) plt.plot(data1_index, data1_value, c='red') #用户弹幕量排名 data_damperuser = data['userid'].value_counts() data_damperuser_hbar = data_damperuser.values plt.bar([x for x in range(0,10)],data_damperuser_hbar[:10],1) plt.grid(True,lw=1,mfc='green') # 发现第一名的用户发了30000多发弹幕,远超其他人,考虑为异常,进行检查 #看看第一名 data['userid'].value_counts() # 确定问题任务id,及信息,发现大量弹幕是同一个时间点发送的,判定为异常情况 cond_usermax = data['userid']=='0bc076ae' data[cond_usermax] #查看异常弹幕人物的发弹幕分布 data[cond_usermax]['episode'].value_counts() #删除这个异常的 data = data.drop(list(data[cond_usermax].index)) #x集 y弹幕量 bar+线 data1 = data['episode'].value_counts() data1 = data1.sort_index() data1_index = np.array(data1.index) data1_index data1_value = np.array(data1.values) data1_value plt.bar(data1_index,data1_value,0.5) plt.xlabel('集数',size=30) plt.ylabel('弹幕数',size=30) plt.plot(data1_index,data1_value,c='red') # 除去问题id弹幕后,最后几级数据异常,应当删除 #去除有问题的集数 cond_err_episode = data['episode']>135 data = data.drop(list(data[cond_err_episode].index)) data.shape #x集 y弹幕量 bar+线 data1 = data['episode'].value_counts() data1 = data1.sort_index() data1_index = np.array(data1.index) data1_index data1_value = np.array(data1.values) data1_value plt.figure(figsize=(10,5)) plt.bar(data1_index,data1_value,0.5) plt.xlabel('集数',size=30) plt.ylabel('弹幕数',size=30) plt.plot(data1_index,data1_value,c='red') # 查看弹幕最少的是哪几集 data['episode'].value_counts().tail(10) # 87集 436弹幕 分析: # 10-13集: # 31-38集: data_damperuser = data['userid'].value_counts() data_damperuser_hbar = data_damperuser.values ing=range(11) plt.bar(top10_index,data_damperuser_hbar[:10],0.5) plt.xticks(ing, ing, rotation=30) plt.xlabel('用户弹幕量排名TOP10',size=20) plt.ylabel('弹幕数',size=20) # 查看用户整部动画发送多少条弹幕的分布 #用户弹幕数量百分比 饼图 data_user_dam = DataFrame({'count':data_damperuser.values,'userid':data_damperuser.index}) data_user_dam def danmu_length_pie(data_user_dam): sumall = data_user_dam.shape[0] 
a1 = (data_user_dam['count']==1).sum() a2_5 = ((data_user_dam['count']>1)&(data_user_dam['count']<=5)).sum() a6_10 = ((data_user_dam['count']>5)&(data_user_dam['count']<=10)).sum() a11_50 = ((data_user_dam['count']>10)&(data_user_dam['count']<=50)).sum() a51_100 = ((data_user_dam['count']>50)&(data_user_dam['count']<=100)).sum() a101_500 = ((data_user_dam['count']>100)&(data_user_dam['count']<=500)).sum() a501 = (data_user_dam['count']>500).sum() li = [a1,a2_5,a6_10,a11_50,a51_100,a101_500,a501] xp=[] for i in li: i = float(i) t = (i/sumall) xp.append(t) plt.figure(figsize=(6,9)) labels = [u'1个', u'2-5个', u'6-10个', u'11-50个',u'51-100个',u'101-500个',u'500个以上' ] sizes = xp plt.axis('equal') plt.title(u'全集用户弹幕数量分布图',loc='left') plt.pie(sizes, labels=labels,labeldistance=1.1,autopct = '%2.2f%%', startangle = 90, pctdistance = 0.5,explode=[0,0,0,0,0.5,1,1.5],shadow=True) plt.show() plt.close() danmu_length_pie(data_user_dam) data.head() #每集弹幕分布 x时间 y弹幕量 data.shape # 查看平均每集的弹幕密度分布 cond4 = (data.episode==33) max3 = data[cond4]['dtTime'].max() min3 = data[cond4]['dtTime'].min() display(max3,min3) def dtTimemin(dttime): return float(dttime)//60 data['dtTimemin']=data['dtTime'].map(dtTimemin) data['dtTimemin'].value_counts() # 一般每集时间为20分钟左右, 41446分钟为异常值,查看后删除 conds = data['dtTimemin']==41446.0 dropindex = data[conds].index[0] data = data.drop(dropindex) # 查看41分钟的弹幕,判断为视频进度条的问题,可以作为bug问题等反馈给相关部门,改善用户体验 conds2 = data['dtTimemin']==41.0 data[conds2] data = data.drop(data[conds2].index[0]) data_meanday = data['dtTimemin'].value_counts().sort_index() data_meanday plt.xlabel('动画播放分钟',size=30) plt.ylabel('弹幕数',size=30) plt.plot(np.array(data_meanday.index),np.array(data_meanday.values),c='red') # 上图说明 大家喜欢再开始播放和oped的时候发送弹幕 20分钟以后的下降 是因为41分钟的进度条bug # 下面看看发送弹幕的观众,喜欢在几点钟观看视频 # x小时 y弹幕数量 data_24 = data['hour'].value_counts().sort_index() data_24 plt.xlabel('发弹幕时间(小时)',size=30) plt.ylabel('弹幕数',size=30) plt.plot(np.array(data_24.index),np.array(data_24.values),c='red') # 可以看出 大部分人睡眠时间在5-6点周围 大概1-9点的范围. 
当中的突起,代表了午间休息和午饭时间, 17点以后逐渐增加进入高峰,代表下班/放学以后 # 下面制作一个词云,分析大家的弹幕中最关注的词 testword = ' '.join(data.message.values) import jieba # 分词包 from wordcloud import WordCloud, ImageColorGenerator # 词云包 from scipy.misc import imread segtests = jieba.cut(testword) segment = [] for seg in segtests: if len(seg) > 1 and seg != '\r\n': segment.append(seg) # 去停用词(文本去噪) words_df = pd.DataFrame({'segment': segment}) # 字典中的keys就是DataFrame里面的columns,但是没有index的值,所以需要自己设定,不设定默认是从零开始计数。 words_df.head() words_df.shape #去停用词 stopwords = pd.read_csv("../data/stopwords.txt", index_col=False,quoting=3, sep='\t', names=['stopword'], encoding="utf8") words_df = words_df[~words_df.segment.isin(stopwords.stopword)] # 词汇频率表 words_stat = words_df.groupby(by=['segment'])['segment'].agg({"count": np.size}) words_stat = words_stat.reset_index().sort_values(by="count", ascending=False) words_stat.head(10) content = ' '.join(words_stat.head(40).segment.values) ## 自定义词云背景 #bimg = imread('heart.jpeg') # #bimgColors = ImageColorGenerator(bimg) wordcloud = WordCloud(font_path='simhei.ttf', background_color="white",max_words=40).generate(content) plt.axis("off") plt.imshow(wordcloud) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 旷视AI智慧交通开源赛道-交通标志识别 # # * 队伍:仙交小分队 # # * 初赛得分: # # # 本次的比赛是交通标志检测,交通标志本身种类众多,大小不定,并且在交通复杂的十字路口场景下,由于光照、天气等因素的影响,使其被精确检测变得更加困难。通过反复实验,我们选择了今年旷视新出的YOLOX目标检测框架,megengine版本的yolox代码相对于pyotorch版本的bug较多,通过反复调试,我们最终的方案是:YOLOX_L + P6 + Focalloss + Inputsize2048 + 双线性插值上采样 + dataaug30,以下是整个调试的过程。 # ## 数据分析 # 根据官方提供的示例代码,我们主要是在预训练的模型上面做微调。 # # 本次比赛提供的数据集,图片总数为3349张,长的均值为1680.0883845924157, 宽的均值为2227.9558077037923,训练集和验证集一共有2700张,初赛的测试集有580张。 # # 详细的规模如下: # # ``` # (h, w, c): num # (1200, 1600, 3): 941 # # (1944, 2592, 3): 1259 # # (2048, 2448, 3): 18 # # (1080, 1920, 3): 32 # # (1520, 2704, 3): 511 # # (2048, 2048, 3): 582 # ``` # # 数据一共包含5种类型的目标,分别是:红灯、直行标志、向左转弯标志、禁止驶入和禁止临时停车,目标的数量如下 # # ``` # 0:red_tl--1465个 # 1:arr_s--1133个 # 2:arr_l--638个 # 3:no_driving_mark_allsort--622个 # 4:no_parking_mark--1142个 # ``` # # 目前的数据集来看,第三类的目标比较少,还是和之前一样,毕竟是数据驱动的,我们最好还是能通过copy-paste这种方法在数据上做一些提升。 # # 具体数据的例子如下,从下面的图中可以看出,目标基本都比较小,而且还存在很多遮挡的情况。 # # ![](https://vehicle4cm.oss-cn-beijing.aliyuncs.com/typoraimgs/3279125,1bb5780005b1590b9.jpg) # # ![](https://vehicle4cm.oss-cn-beijing.aliyuncs.com/typoraimgs/image-20210916135049212.png) # # # ## megengine-yolox的安装 # # 比赛中要求使用的框架是megengine,其中yolox有两个实现版本,一个是[pytorch](https://github.com/Megvii-BaseDetection/YOLOX)版本的,一个是[megengine](https://github.com/MegEngine/YOLOX)版本的,其中pyotrch版本的代码相对于megengin版本的代码要相对更加完善一些,并且在pytorch中提供了大量的coco预训练模型,这些预训练模型也可以转化为megengine的版本做finetune使用。 # # 首先需要安装相应的虚拟环境,本次的megstudio中提供了对应的环境,大家只需要在启动的时候选择megengine为1.4.0,python版本为3.7的开发环境即可,conda中名称为`xuan`的虚拟环境就是本次项目所要使用到的虚拟环境,安装分为三步。 # # 第一步:安装YOLOX # # 注:因为在megstudio中已经事先安装好了megengine,所以需要将requirements中的注释,另外执行setup.py的时候会检查torch,所以还需要在requirements.txt中额外添加torch和torchvision。 # %cd /home/megstudio/workspace # !git clone https://gitee.com/cmfighting/YOLOX.git # %cd /home/megstudio/workspace/YOLOX # !source activate xuan # 激活之后可以在命令行前面看到一个xuan的括号 # !pip install -r requirements.txt # !python setup.py develop 第二步:安装pycocotools # !pip install cython # !pip install pycocotools # 第三步:测试yolox是否可以使用 # # 我们需要从yolox的megengine版本的官方地址下载官方提供的yolox-tiny的模型,执行下面的指令之后,如果可以在outputs的文件夹找到检测的正确结果,证明yolox已安装完成。 # # %cd /home/megstudio/workspace/YOLOX # !mkdir 
pretrained # %cd pretrained # !wget https://github.com/MegEngine/YOLOX/releases/download/0.0.1/yolox_tiny.pkl # 下载不了的话请大家手动下载并上传 # %cd /home/megstudio/workspace/YOLOX # !python tools/demo.py image -n yolox-tiny -c pretrained/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result # ## 训练交通标志检测的yolox-l模型 # # 本次比赛提供的数据集是交通标志检测的数据集,因为本次比赛要求大家使用megengine作为我们的基本框架,megengine版本的yolox代码相对于pytorch版本的yolox代码不是很完整,整个训练的过程中存在很多小坑,包括缺少预训练模型,模型迁移过程中shape不一致,以及训练的时候可能会随机在某个时刻突然停止等,下面记录了整个训练的流程,最终训练好的模型也做了保存,如果大家没有足够多的算力卡来跑完训练的流程,可以直接跳到下面的验证过程。 # # * 第一步:建立数据集的软链接 # # 首先还是建立虚拟链接,以megstudio为例,数据保存的目录在`/home/megstudiodataset/dataset-2805/`目录下,其中annotations是标签文件,images放的是图片文件,如下图所示: # # ![image-20210923205007420](https://vehicle4cm.oss-cn-beijing.aliyuncs.com/typoraimgs/image-20210923205007420.png) # # # 建立虚拟链接的代码如下: # %cd /home/megstudio/workspace/YOLOX/datasets # !mkdir traffic5 # !ln -s /home/megstudio/dataset/dataset-2805/images ./traffic5/train2017 # !ln -s /home/megstudio/dataset/dataset-2805/images ./traffic5/val2017 # !ln -s /home/megstudio/dataset/dataset-2805/images ./traffic5/test2017 # 建立annotations文件夹,然后把annntations的标注问价复制过来,改个名字 # %cd traffic5 # !mkdir annotations # %cd annotations # annotions文件要和coco的保持一致,因为这个数据集是只读的,所以我们只能新建文件之后把里面的内容复制粘贴过来(应该是有某些linux的命令可以使用,但是我太菜了),标注文件大致如下方所示: # # ![image-20210923205415437](https://vehicle4cm.oss-cn-beijing.aliyuncs.com/typoraimgs/image-20210923205415437.png) # # * 第二步:模型迁移 # # 由于官方的megengine版本的yolox代码只提供了tiny版本的预训练模型,tiny版本的预训练模型速度虽然比较快, # 但是精度相对较低,想要在比赛中拿到比较好的成绩的话,还是得用l或者x版本得预训练模型,这样就预训练的模型只能从官方的pytorch代码中进行转化。 # # 这部分操作需要借助pytorch版本的yolox代码来完成,下载好yolox_l模型之后,通过 # [YOLOX/demo/MegEngine/python at main · Megvii-BaseDetection/YOLOX (github.com)](https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/MegEngine/python)这里的教程完成模型的转化,转换完成的模型需要上传到pretrained目录下。 # # + * 第二步:注释数据归一化部分代码 pytorch转换后的模型在训练的时候是没有经过归一化处理的,官方的megengine版本代码是做了归一化处理的,如果直接进行训练,第一轮的map可能为0.0, 整体的效果也不会太好,这部分的逻辑在`yolox/data/data_augment.py`中,我们需要将179行到184行的代码注释掉,也即使下面的这段代码。 # + # image = image[:, :, ::-1] # image /= 255.0 # if mean is not None: # image -= mean # if std is not None: # image /= std # - # * 第三步:修改`yolox/utils/checkpoint.py` # # 如果直接跑训练的代码的话,在模型迁移的时候中间会有大段地不匹配出现,分析发现可能是pyotrch转化过来地模型和megengine表示方式可能不太一样,最后我们通过reshape和try catch块处理了一下, # 就能正确地完成整个迁移地过程了,修改之后地checkpoint.py文件如下 # + # #!/usr/bin/env python3 # -*- coding:utf-8 -*- # Copyright (c) 2014-2021 Megvii Inc. All rights reserved. import os import shutil from loguru import logger import megengine as mge def load_ckpt(model, ckpt): model_state_dict = model.state_dict() load_dict = {} # v是期待的模型 for key_model, v in model_state_dict.items(): if key_model not in ckpt: logger.warning( "{} is not in the ckpt. 
Please double check and see if this is desired.".format( key_model ) ) continue v_ckpt = ckpt[key_model] # 现在需要保证ckpt的能塞进去 # print(key_model) # head.cls_preds.0.bias try: v_ckpt =v_ckpt.reshape(v.shape) if v.shape != v_ckpt.shape: logger.warning( "ckpt Shape of {} in checkpoint is {}, while model shape of {} in model is {}.".format( key_model, v_ckpt.shape, key_model, v.shape ) ) continue load_dict[key_model] = v_ckpt except: logger.warning( "{} is not match".format(key_model) ) # continue # for i in range(3): # load_dict.pop("head.grids.{}".format(i)) model.load_state_dict(load_dict, strict=False) return model def save_checkpoint(state, is_best, save_dir, model_name=""): if not os.path.exists(save_dir): os.makedirs(save_dir) filename = os.path.join(save_dir, model_name + "_ckpt.pkl") mge.save(state, filename) if is_best: best_filename = os.path.join(save_dir, "best_ckpt.pkl") shutil.copyfile(filename, best_filename) # - # * 第四步:添加P6层 # # 因为我们考虑使用大地分辨率来训练模型,p6层只需要对p5做一次卷积就可以,相对于添加p2来说,不会对预训练地模型产生那么多地影响,所以我们需要对`yolox/models/yolo_pafpn.py`做修改, # 添加p6层,在初始化中增加一个卷积层并在前向传播的最后对p5使用这个卷积,修改之后的代码如下: # + # #!/usr/bin/env python # -*- encoding: utf-8 -*- # Copyright (c) 2014-2021 Megvii Inc. All rights reserved. import megengine.functional as F import megengine.module as M from .darknet import CSPDarknet from .network_blocks import BaseConv, CSPLayer, DWConv, UpSample class YOLOPAFPN(M.Module): """ YOLOv3 model. Darknet 53 is the default backbone of this model. """ def __init__( self, depth=1.0, width=1.0, in_features=("dark3", "dark4", "dark5"), in_channels=[256, 512, 1024], depthwise=False, act="silu", ): super().__init__() self.backbone = CSPDarknet(depth, width, depthwise=depthwise, act=act) self.in_features = in_features self.in_channels = in_channels Conv = DWConv if depthwise else BaseConv self.upsample = UpSample(scale_factor=2, mode="bilinear") self.lateral_conv0 = BaseConv( int(in_channels[2] * width), int(in_channels[1] * width), 1, 1, act=act ) self.C3_p4 = CSPLayer( int(2 * in_channels[1] * width), int(in_channels[1] * width), round(3 * depth), False, depthwise=depthwise, act=act, ) # cat self.reduce_conv1 = BaseConv( int(in_channels[1] * width), int(in_channels[0] * width), 1, 1, act=act ) self.C3_p3 = CSPLayer( int(2 * in_channels[0] * width), int(in_channels[0] * width), round(3 * depth), False, depthwise=depthwise, act=act, ) # bottom-up conv self.bu_conv2 = Conv( int(in_channels[0] * width), int(in_channels[0] * width), 3, 2, act=act ) self.C3_n3 = CSPLayer( int(2 * in_channels[0] * width), int(in_channels[1] * width), round(3 * depth), False, depthwise=depthwise, act=act, ) # bottom-up conv self.bu_conv1 = Conv( int(in_channels[1] * width), int(in_channels[1] * width), 3, 2, act=act ) self.C3_n4 = CSPLayer( int(2 * in_channels[1] * width), int(in_channels[2] * width), round(3 * depth), False, depthwise=depthwise, act=act, ) self.conv_p6 = Conv( int(1024 * width), int(1024 * width), 3, 2, act=act ) def forward(self, input): """ Args: inputs: input images. Returns: Tuple[Tensor]: FPN feature. 
""" # backbone out_features = self.backbone(input) features = [out_features[f] for f in self.in_features] [x2, x1, x0] = features fpn_out0 = self.lateral_conv0(x0) # 1024->512/32 f_out0 = self.upsample(fpn_out0) # 512/16 f_out0 = F.concat([f_out0, x1], 1) # 512->1024/16 f_out0 = self.C3_p4(f_out0) # 1024->512/16 fpn_out1 = self.reduce_conv1(f_out0) # 512->256/16 f_out1 = self.upsample(fpn_out1) # 256/8 f_out1 = F.concat([f_out1, x2], 1) # 256->512/8 pan_out2 = self.C3_p3(f_out1) # 512->256/8 p_out1 = self.bu_conv2(pan_out2) # 256->256/16 p_out1 = F.concat([p_out1, fpn_out1], 1) # 256->512/16 pan_out1 = self.C3_n3(p_out1) # 512->512/16 p_out0 = self.bu_conv1(pan_out1) # 512->512/32 p_out0 = F.concat([p_out0, fpn_out0], 1) # 512->1024/32 pan_out0 = self.C3_n4(p_out0) # 1024->1024/32 pan_p6 = self.conv_p6(pan_out0) outputs = (pan_out2, pan_out1, pan_out0, pan_p6) return outputs # - # 修改完之后还需要在`yolo_head.py`和`exp/yolox_base.py`中做修改,和修改之后的pafpn匹配起来。 # * 第五步:修改为focalloss # # 通过实验发现focalloss能带来一部分的涨点,所以我们将原先的bceloss修改为了focalloss,具体的代码在`yolox/models/yolox_head.py`中, 如下: # 添加focal def focal_loss_discrite(self, pred, gt): pos_inds = F.equal(gt, 1).astype("float32") neg_inds = F.equal(gt, 0).astype("float32") pos_loss = F.log(pred+1e-5) * F.pow(1-pred, 2) * pos_inds * 0.75 neg_loss = F.log(1-pred+1e-5) * F.pow(pred, 2) * neg_inds * 0.25 loss = -(pos_loss + neg_loss) return loss loss_obj = ( # todo 修改为focalloss self.focal_loss_discrite(F.sigmoid(obj_preds).view(-1, 1), obj_targets) # self.bcewithlog_loss(obj_preds.view(-1, 1), obj_targets) # 原先的loss ).sum() / num_fg # * 第六步:修改配置文件 # # 修改后的配置文件如下,这里我们使用yolox_l作为我们的基础网络,考虑到小目标的问题,我们将inputsize调整到了2048,并将数据增强的轮数进行了翻倍 # #!/usr/bin/env python3 # -*- coding:utf-8 -*- # Copyright (c) Megvii, Inc. and its affiliates. import os from yolox.exp import Exp as MyExp class Exp(MyExp): def __init__(self): super(Exp, self).__init__() self.input_size = (2048, 2048) # (height, width) self.test_size = (2048, 2048) self.no_aug_epochs = 30 self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] # Define yourself dataset path self.data_dir = "datasets/traffic5" self.train_ann = "instances_train2017.json" self.val_ann = "instances_val2017.json" # self.val_ann = "instances_test2017.json" # 测试的时候解开注释 self.num_classes = 5 self.max_epoch = 300 self.data_num_workers = 4 self.eval_interval = 1 # * 第七步:将fpn中的上采样方式修改为双线性插值 # # ``` # self.upsample = UpSample(scale_factor=2, mode="bilinear") # ``` # # * 第八步:开始训练 # # 训练是在megstudio的v100显卡上完成的,如果你选择的显卡显存比较下,可以将inputsize的大小调整为640。 # # 训练的代码如下: # # 会有两处报错,第一次报错的时候需要将trainer.py 250行的['model']去掉 # 第二次报错的时候在yolo_base.py中添加self.data_dir = "datasets/traffic5"并在100和129行传入 # %cd /home/megstudio/workspace/YOLOX # !python tools/train.py -f exps/example/yolox_l.py -d 1 -b 4 -c pretrained/yolox_l_mge.pkl # ## 模型验证 # # 训练好的的模型在YOLOX_outputs/yolox_l/best_ckpt.pkl 目录下,执行下列的指令就可以完成验证的过程。 # %cd /home/megstudio/workspace/YOLOX # !source activate xuan # !python tools/eval.py -f exps/example/yolox_l.py -c YOLOX_outputs/yolox_l/best_ckpt.pkl -b 1 -d 1 --conf 0.01 # ## 生成results.json # # 接下来要生成测试图片的结果,首先需要解开`yolox_l.py`18行的注释解开,并在`coco_evaluator.py`的167行解开注释,在/home/megstudio/workspace/submit下生成results.json文件 # %cd /home/megstudio/workspace/YOLOX # !source activate xuan # !python tools/eval.py -f exps/example/yolox_l.py -c YOLOX_outputs/yolox_l/best_ckpt.pkl -b 1 -d 1 --conf 0.001 # ## 评价测试集 # 直接在原先数据集的基础上更改测试集即可,建立软链接 # # %cd /home/megstudio/workspace/YOLOX/datasets # !ln -s /home/megstudio/dataset/traffic5/images ./traffic5/test2017 # %cd 
/home/megstudio/workspace/YOLOX # !source activate xuan # 激活之后可以在命令行前面看到一个xuan的括号 # !pip install -r requirements.txt # !python setup.py develop # !pip install pycocotools # # !python tools/eval.py -f exps/example/yolox_l.py -c YOLOX_outputs/yolox_l/best_ckpt.pkl -b 1 -d 1 --conf 0.01 # 重新试试在验证集上的效果,保证不出错 # %cd /home/megstudio/workspace/YOLOX # !source activate xuan # !python tools/eval.py -f exps/example/yolox_l.py -c YOLOX_outputs/yolox_l/best_ckpt.pkl -b 1 -d 1 --fuse --conf 0.1 # 重新试试在验证集上的效果,保证不出错 # %cd /home/megstudio/workspace/YOLOX # !source activate xuan # !python tools/eval.py -f exps/example/yolox_l.py -c YOLOX_outputs/yolox_l/best_ckpt.pkl -b 1 -d 1 --fuse --conf 0.1 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Load Frequency Control in ANDES # This examples shows (1) how to trip a generator, and (2) how to drive frequency back by load shedding. # + import andes import numpy as np andes.main.config_logger(stream_level=20) # - # ## Tripping a Generator in the IEEE 14-Bus System # + # using the IEEE 14-bus model as an example. # The example here contains a variety of models: generators, exciters, turbine governors, and PSS # To speed up, one can remove unneeded ones, e.g., PSS ieee14_raw = andes.get_case("ieee14/ieee14.raw") ieee14_dyr = andes.get_case("ieee14/ieee14.dyr") # + # use `andes.load` to load the test system # Need to set `setup=False` to be able to add new Togglers that turns off generators. ss = andes.load(ieee14_raw, addfile=ieee14_dyr, setup=False) # + # Add a Toggler that disconnects `GENROU_2` at t=1 s ss.add("Toggler", dict(model='SynGen', dev="GENROU_2", t=1.0)) # + # Call setup manually ss.setup() # + # double check that Togglers are set up correctly # Check `u` of the Togglers - the first two line switches are disabled, and the generator trip is enabled ss.Toggler.as_df() # + # disable existing line switches # The IEEE 14-bus system contains predefined line switches. Disabling them to study generator trip only. ss.Toggler.u.v[[0, 1]] = 0 # + # calculate power flow # use constant power model for PQ (we will come back to this later) ss.PQ.config.p2p = 1 ss.PQ.config.q2q = 1 ss.PQ.config.p2z = 0 ss.PQ.config.q2z = 0 # turn off under-voltage PQ-to-Z conversion ss.PQ.pq2z = 0 ss.PFlow.run() # + # set the first simulation stop and run it ss.TDS.config.tf = 20 ss.TDS.run() # + # Show the frequency response of online generators # Refer to `plot` documentation by using `help(ss.TDS.plt.plot)` and `help(ss.TDS.plt.plot_data)` ss.TDS.load_plotter() ss.TDS.plt.plot(ss.GENROU.omega, a=(0, 2, 3, 4), ytimes=60, ) # - # ## Adjusting Load to Compensate for the Generation Loss # Check the power of the lost generator by inspecting the power flow inputs: ss.PV.as_df() # The tripped GENROU_2 correspond to the first PV (GENROU_1 corresponds to Slack). Thus, the lost active power is 0.40 pu. # # Let's compensate for that by shedding 0.4 pu of active power load at t=2.0 s. # # By checking the equation documentation of PQ (using `print(ss.PQ.doc())`, we can tell that the imposed active power for time-domain simulation is from `Ppf`, because we used the constant power model with `p2p = 1`. 
# # ``` # Algebraic Equations # # Name | Type | RHS of Equation "0 = g(x, y)" # -----+----------+------------------------------------------------------------- # a | ExtAlgeb | u * (dae_t <= 0) * (p0 * vcmp_zi + Rlb * vcmp_zl * v**2 + # | | Rub * vcmp_zu * v**2) + u * (dae_t > 0) * (p2p * Ppf + p2i * # | | Ipeq * v + p2z * Req * v**2) # v | ExtAlgeb | u * (dae_t <= 0) * (q0 * vcmp_zi + Xlb * vcmp_zl * v**2 + # | | Xub * vcmp_zu * v**2) + u * (dae_t > 0) * (q2q * Qpf + q2i * # | | Iqeq * v + q2z * Xeq * v**2) # # ``` # # `Ppf` may be different from `p0` specified in the data file. # + # active power from power flow solution - make a copy Ppf = np.array(ss.PQ.Ppf.v) Ppf # - # Reload the system and add the generator trip. # + ss = andes.load(ieee14_raw, addfile=ieee14_dyr, setup=False) ss.add("Toggler", dict(model='SynGen', dev="GENROU_2", t=1.0)) ss.setup() ss.Toggler.u.v[[0, 1]] = 0 ss.PQ.config.p2p = 1 ss.PQ.config.q2q = 1 ss.PQ.config.p2z = 0 ss.PQ.config.q2z = 0 ss.PQ.pq2z = 0 ss.PFlow.run() # - # But let's run to 2 seconds. # + ss.TDS.config.tf = 2.0 ss.TDS.run() # + # all `Ppf` before shedding ss.PQ.Ppf.v # - # And then apply the load shedding on buses 2, 3, 4, 5, 6, 9. # + shed_buses = [2, 3, 4, 5, 6, 9] # find the `idx` of the loads on these buses pq_shed_idx = ss.PQ.find_idx(keys='bus', values=shed_buses) pq_shed_idx # + # get `Ppf` on these buses before shedding pq_p = ss.PQ.get(src='Ppf', idx=pq_shed_idx, attr='v') pq_p # + pq_p_new = pq_p - 0.4 / len(shed_buses) ss.PQ.set(src='Ppf', idx=pq_shed_idx, attr='v', value=pq_p_new) # + # double check ss.PQ.Ppf.v # + ss.TDS.config.tf = 20 ss.TDS.run() ss.TDS.plt.plot(ss.GENROU.omega, a=(0, 2, 3, 4), ytimes=60, ) # - # The result shows the generator speed (frequency) returns to 60 Hz after load shedding. 
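# As a quick numerical sanity check (a minimal sketch added here, not part of the original walkthrough), we can confirm that the amount of load removed matches the 0.4 pu lost when GENROU_2 tripped. The cell below reuses only names already defined above (`np`, `shed_buses`, `pq_p`, `pq_p_new`, `pq_shed_idx`, and the ANDES system `ss`); no new ANDES calls are introduced.

# +
# Per-bus and total load reduction applied on buses 2, 3, 4, 5, 6, 9.
per_bus_shed = pq_p - pq_p_new
total_shed = per_bus_shed.sum()
print(f"Shed {total_shed:.4f} pu in total, {total_shed / len(shed_buses):.4f} pu per bus")

# The `Ppf` values now held by the PQ model should equal `pq_p_new`,
# and the total reduction should match the 0.4 pu output of the tripped unit.
assert np.allclose(ss.PQ.get(src='Ppf', idx=pq_shed_idx, attr='v'), pq_p_new)
assert np.isclose(total_shed, 0.4)
# -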
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data Processing # # Setup import random,copy,math,time,os,csv,sys import scipy.io as sio # for loading .mat files import pandas as pd import numpy as np import matplotlib.pyplot as plt DATA_DIRECTORY = './../../experimental_data/raw_data/All-datasets - Oct2020/' data_dict = sio.loadmat('./../../experimental_data/raw_data/All-datasets - Oct2020/CGG-SpCas9.mat') data_dict.keys() # ## Load raw data data_raw = [] for filename in os.listdir(DATA_DIRECTORY): if filename.endswith(".mat"): experiment = filename[:-4] print(experiment) data_dict = sio.loadmat(DATA_DIRECTORY +filename) if 't' in data_dict: data_dict["experiment"] = experiment if experiment == 'NoGuidedRNA': data_dict["experiment"] = 'Control' if '-' in experiment: defect = experiment.split('-')[0] if defect in 'NonRepeatedSequence': data_dict['defect'] = 'NRS' else: data_dict['defect'] = experiment.split('-')[0] data_dict['nuclease'] = experiment.split('-')[1] if 'AllCells' in data_dict: data_dict['AllCellsBF'] = data_dict['AllCells'] #data_dict['t'] = np.array(range(len(data_dict['AllCells']))) #else: #data_dict['t'] = np.array(range(len(data_dict['AllCellsBF']))) if 'AllGFPCells' in data_dict: data_dict['AllCellsGFP'] = data_dict['AllGFPCells'] data_raw.append(data_dict) print(' success!') else: print(' missing data!') def make_single_cell_dataframe(dct): ''' convert the dictionary of raw data to a data frame ''' data = pd.DataFrame() # dataframe for single-cell data n_wells = len(dct['AllCellsBF'][0]) n_points = len(dct['t'][0]) data['well'] = np.concatenate([np.ones(n_points)*j for j in range(n_wells)]) data['bf'] = dct['AllCellsBF'].T.reshape(n_wells*n_points) if 'AllCellsGFP' in dct: data['gfp'] = dct['AllCellsGFP'].T.reshape(n_wells*n_points) else: # this means there is no defect data['gfp'] = np.nan data['time'] = np.concatenate([dct['t'][0] for j in range(n_wells)]) data['experiment'] = dct['experiment'] if 'defect' in dct: data['defect'] = dct['defect'] else: data['defect'] = 'none' if 'nuclease' in dct: data['nuclease'] = dct['nuclease'] else: data['nuclease'] = 'none' return data # + data = pd.concat([make_single_cell_dataframe(dct) for dct in data_raw]) data['initial_cells'] = initial_cells data.to_csv('./../../experimental_data/processed_data/single_cell_data.csv') # + data_avg =data.groupby(['experiment','time','nuclease','defect']).mean() data_avg.to_csv('./../../experimental_data/processed_data/avg_data.csv') data_avg =data[data.initial_cells==1].groupby(['experiment','time','nuclease','defect']).mean() data_avg.to_csv('./../../experimental_data/processed_data/avg_data_one_cell.csv') # - [plt.plot(data.groupby(['experiment','time']).mean().bf[exp].values,'C0-') \ for exp in data.experiment.unique()]; [plt.semilogy(data.groupby(['experiment','time']).mean().gfp[exp].values,'C1--') \ for exp in data.experiment.unique()]; data.groupby(['experiment','time']).mean() initial_cells =[] for exp in data.experiment.unique(): df = data[data.experiment == exp] initial_cells.append(np.array([df[df.well==well].bf.values[0] for well in df.well.values])) initial_cells= np.concatenate(initial_cells) plt.plot([len(data[data.initial_cells==k])/len(data) for k in [1,2,3,4,5,6]],'o-') data # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # 
jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Colour Perception in Humans and Machines # # In this notebook we compare the perceived similarity in colour for RetinaNet with that of Humans and the statistics of natural scenes. These plots correspond to Figure 5 from the paper. # # **Note**: Unfortunately we can't share all the models for this experiment (due to file size restrictions) so the first two cells can't be run in colab. # # ## Our Models # + ''' Can't run in colab ''' import colorsys import numpy as np import torch from training.model import BaselineModel from statistics.wavelength import wavelength_to_rgb def response_to_wavelength(model, layer, wavelength): wavelength = torch.tensor([float(wavelength)], requires_grad=True) rgb = wavelength_to_rgb(wavelength, gamma=0.7) stimulus = torch.ones((1,32,32,3), requires_grad=False) * rgb stimulus = stimulus.permute(0, 3, 2, 1) resp = model.forward_to_layer(stimulus, layer).max() resp.backward() return wavelength.grad, rgb def compute_response_to_wavelengths(model, layer, inc=0.5, threshold=1e-4): responses = [] step = 10 for i in np.concatenate((np.arange(395, 645, step), [644])): base = response_to_wavelength(model, layer, i) responses.append(base[0]) return np.array(responses) def get_distances(model_dir=None, pretrained=True, cmode='colour', d_vvs=[1, 2, 3, 4], n_bn=[1, 2, 4, 8, 16, 32]): distances = [] for d in d_vvs: for n in n_bn: for t in range(10): n_ch = 1 if cmode == 'grey' else 3 model = BaselineModel(n, d, n_ch) if pretrained: try: model.load_conv_dict(torch.load(f'{model_dir}/model_{n}_{d}_{t}.pt', map_location='cpu')) except: model.load_state_dict(torch.load(f'{model_dir}/model_{n}_{d}_{t}.pt', map_location='cpu')) r = compute_response_to_wavelengths(model, f'retina_relu2') distances.append(r) distances = np.array(distances) return distances # distances = get_distances('../models/colour') rand_distances = get_distances(pretrained=False) distort_distances = get_distances('../models/colour-distort') distances_narrow = get_distances('../models/colour', n_bn=[1, 2, 4], d_vvs=[3, 4]) distances_wide = get_distances('../models/colour', n_bn=[8, 16, 32], d_vvs=[0, 1]) # + from scipy import stats # %matplotlib inline import matplotlib.pyplot as plt from matplotlib import rc import matplotlib.font_manager rc('font',**{'family':'serif','serif':['Computer Modern Roman'],'size':13}) rc('text', usetex=True) plt.figure(figsize=(3,2.5)) sems = stats.sem(np.abs(distances_narrow), axis=0) means = (np.abs(distances_narrow)).mean(axis=0) w = np.concatenate((np.arange(395, 645, 10), [644])) plt.plot(w, means, linestyle='-') plt.fill_between(w, means - sems, means + sems, alpha=0.2) plt.xlim(400, 700) plt.gca().get_yaxis().set_ticks([]) plt.xlabel('Wavelength') plt.ylabel('Sensitivity') plt.savefig('figures/similarity_narrow.pdf', bbox_inches='tight') # + from scipy import stats # %matplotlib inline import matplotlib.pyplot as plt from matplotlib import rc import matplotlib.font_manager rc('font',**{'family':'serif','serif':['Computer Modern Roman'],'size':13}) rc('text', usetex=True) plt.figure(figsize=(3,2.5)) sems = stats.sem(np.abs(distances_wide), axis=0) means = (np.abs(distances_wide)).mean(axis=0) w = np.concatenate((np.arange(395, 645, 10), [644])) plt.plot(w, means, linestyle='-') plt.fill_between(w, means - sems, means + sems, alpha=0.2) plt.xlim(400, 700) plt.gca().get_yaxis().set_ticks([]) plt.xlabel('Wavelength') plt.ylabel('Sensitivity') 
plt.savefig('figures/similarity_wide.pdf', bbox_inches='tight') # + ''' Can't run in colab ''' plt.figure(figsize=(3,2.5)) sems = stats.sem(np.abs(rand_distances), axis=0) means = (np.abs(rand_distances)).mean(axis=0) w = np.concatenate((np.arange(395, 645, 10), [644])) plt.plot(w, means, linestyle='-', color='C1') plt.fill_between(w, means - sems, means + sems, alpha=0.2, facecolor='C1') plt.xlim(400, 700) plt.gca().get_yaxis().set_ticks([]) plt.xlabel('Wavelength') plt.ylabel('Sensitivity') plt.savefig('figures/similarity_random.pdf', bbox_inches='tight') # + plt.figure(figsize=(3,2.5)) sems = stats.sem(np.abs(distort_distances), axis=0) means = (np.abs(distort_distances)).mean(axis=0) w = np.concatenate((np.arange(395, 645, 10), [644])) plt.plot(w, means, linestyle='-') plt.fill_between(w, means - sems, means + sems, alpha=0.2) plt.xlim(400, 700) plt.gca().get_yaxis().set_ticks([]) plt.xlabel('Wavelength') plt.ylabel('Sensitivity') plt.savefig('figures/similarity_distort.pdf', bbox_inches='tight') # + # %matplotlib inline import matplotlib.pyplot as plt from matplotlib import rc import matplotlib.font_manager rc('font',**{'family':'serif','serif':['Computer Modern Roman'],'size':13}) rc('text', usetex=True) from matplotlib.colors import ListedColormap from matplotlib import cm from statistics.wavelength import wavelength_to_rgb import numpy as np rs = [] gs = [] bs = [] ws = list(range(400, 701)) for lam in ws: rgb = wavelength_to_rgb(lam) rs.append(rgb[0]) gs.append(rgb[1]) bs.append(rgb[2]) plt.figure(figsize=(9,2.8)) plt.box(False) plt.plot(ws, rs, linestyle='-.', color='r') plt.plot(ws, gs, linestyle='--', color='g') plt.plot(ws, bs, linestyle=':', color='b') plt.legend(['Red', 'Green', 'Blue'], frameon=False) colours = np.stack((np.array(rs), np.array(gs), np.array(bs)), axis=1) colours = ListedColormap(colours) cb = plt.gcf().colorbar(cm.ScalarMappable(cmap=colours), pad=0.27, orientation='horizontal', ticks=[], aspect=25) cb.outline.set_visible(False) plt.yticks([]) plt.xlim(400, 700) plt.ylim(-0.05, 1.05) plt.xlabel('Wavelength') plt.savefig('figures/wavelength.pdf', bbox_inches='tight') # + # %matplotlib inline import matplotlib.pyplot as plt from colour.plotting import plot_RGB_colourspaces_in_chromaticity_diagram_CIE1931 fig, ax = plot_RGB_colourspaces_in_chromaticity_diagram_CIE1931(['CIE-LAB'], legend=False, standalone=False, axes_visible=False, title='', spectral_locus_colours='RGB', diagram_opacity=0.7) plt.legend(['RGB'], frameon=False) # - # ## Load Dependencies - Colab Only from os.path import exists if not exists('opponency.zip'): # !wget -O opponency.zip https://github.com/ecs-vlc/opponency/archive/master.zip # !unzip -qq opponency.zip # !mv opponency-master/* ./ # !rm -r opponency-master # ## Other Plots # + # %matplotlib inline import matplotlib.pyplot as plt from matplotlib import rc import matplotlib.font_manager rc('font',**{'family':'serif','serif':['Computer Modern Roman'],'size':13}) rc('text', usetex=True) import pandas as pd bedford = pd.read_csv('bedford1958.csv', names=['wavelength', 'sensitivity'], header=None) bedford[bedford.wavelength.between(380, 650)].plot('wavelength', 'sensitivity', legend=False, figsize=(3,2.5), linestyle='-') plt.xlim(400, 700) plt.gca().get_yaxis().set_ticks([]) plt.xlabel('Wavelength') plt.ylabel('Perceptual Similarity') plt.savefig('figures/similarity_bedford.pdf', bbox_inches='tight') # + # %matplotlib inline import matplotlib.pyplot as plt from matplotlib import rc import matplotlib.font_manager 
rc('font',**{'family':'serif','serif':['Computer Modern Roman'],'size':13}) rc('text', usetex=True) import pandas as pd long = pd.read_csv('long2006.csv', names=['wavelength', 'sensitivity'], header=None) # long = long[long.wavelength.between(420, 650)] # .plot('wavelength', 'sensitivity') # bedford import numpy as np p = np.poly1d(np.polyfit(long.wavelength, long.sensitivity, 8)) import matplotlib.pyplot as plt x = np.linspace(430, 600, 1000) plt.figure(figsize=(3,2.5)) plt.plot(x, p(x), linestyle='-') plt.xlim(400, 700) plt.gca().get_yaxis().set_ticks([]) plt.xlabel('Wavelength') plt.ylabel('Predicted Similarity') plt.savefig('figures/similarity_long.pdf', bbox_inches='tight') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: PyCharm (Data-preparation) # language: python # name: pycharm-eaa1fcfe # --- import pandas as pd # + pycharm={"name": "#%%\n"} data = pd.read_csv('C:/Users/toduc/OneDrive - National Economics University/Desktop/NEU/Machine Learning/Visualization/Data/tsunamis.csv', sep=',') data # + pycharm={"name": "#%%\n"} data.head(3) # + pycharm={"name": "#%%\n"} data.tail() # + pycharm={"name": "#%%\n"} data.describe() # + pycharm={"name": "#%%\n"} data.nunique() # + pycharm={"name": "#%%\n"} data.columns # + pycharm={"name": "#%%\n"} data['mag'].unique() # + pycharm={"name": "#%%\n"} data['mag'].value_counts(normalize=True) # + pycharm={"name": "#%%\n"} data[['mag', 'time']][40:50] # + pycharm={"name": "#%%\n"} data = pd.read_csv('C:/Users/toduc/OneDrive - National Economics University/Desktop/NEU/Machine Learning/Visualization/Data/earthquakes.csv', sep=',') data.head() # + pycharm={"name": "#%%\n"} data.shape # + pycharm={"name": "#%%\n"} data['magType'].unique() # + pycharm={"name": "#%%\n"} data['mag'] > 3 # + pycharm={"name": "#%%\n"} sum(data['mag']>3) # + pycharm={"name": "#%%\n"} sum(data['place'].str.contains('Indonesia')) # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Bayesian inference of Hubble constant from simulated time delays # + import pandas as pd import csv # import of standard python libraries import numpy as np import os import time import corner import astropy.io.fits as pyfits import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable # %matplotlib inline # lenstronomy imports from lenstronomy.LensModel.lens_model import LensModel from lenstronomy.LensModel.Solver.lens_equation_solver import LensEquationSolver from lenstronomy.LightModel.light_model import LightModel from lenstronomy.PointSource.point_source import PointSource from lenstronomy.ImSim.image_model import ImageModel import lenstronomy.Util.param_util as param_util import lenstronomy.Util.simulation_util as sim_util import lenstronomy.Util.image_util as image_util from lenstronomy.Util import kernel_util from lenstronomy.Data.imaging_data import ImageData from lenstronomy.Data.psf import PSF # - #constraining table tabela_nova = pd.read_csv("data_set.csv", sep=",") tabela_1 = tabela_nova[tabela_nova['thetaE'] > 0.9] tabela = tabela_1[tabela_1['thetaE'] < 1.6] df = pd.DataFrame(tabela) tabela = df.dropna().reset_index(drop=True) tabela # # Simulation choises # + # define lens configuration and cosmology z_lens = tabela['zl'] z_source = 
tabela['zs'] theta_E = tabela['thetaE'] from astropy.cosmology import FlatLambdaCDM cosmo = FlatLambdaCDM(H0=70, Om0=0.3, Ob0=0.) # data specifics sigma_bkg = .05 # background noise per pixel (Gaussian) exp_time = 100. # exposure time (arbitrary units, flux per pixel is in units #photons/exp_time unit) numPix = 100 # cutout pixel size deltaPix = 0.05 # pixel size in arcsec (area per pixel = deltaPix**2) fwhm = 0.1 # full width half max of PSF (only valid when psf_type='gaussian') psf_type = 'GAUSSIAN' # 'GAUSSIAN', 'PIXEL', 'NONE' kernel_size = 91 # initial input simulation # generate the coordinate grid and image properties kwargs_data = sim_util.data_configure_simple(numPix, deltaPix, exp_time, sigma_bkg) data_class = ImageData(**kwargs_data) # generate the psf variables kwargs_psf = {'psf_type': psf_type, 'pixel_size': deltaPix, 'fwhm': fwhm} psf_class = PSF(**kwargs_psf) # + # lensing quantities gamma1, gamma2 = param_util.shear_polar2cartesian(phi=-0.5, gamma=0.06) kwargs_shear = {'gamma1': gamma1, 'gamma2': gamma2} # shear values lista_de_dicionarios = [] for index in tabela['thetaE']: kwargs_spemd={} kwargs_spemd['theta_E'] = index kwargs_spemd['gamma'] = 1.98 kwargs_spemd['center_x'] = 0.0 kwargs_spemd['center_y'] = 0.0 kwargs_spemd['e1'] = 0.05 kwargs_spemd['e2'] = 0.05 lista_de_dicionarios.append(kwargs_spemd) # the lens model is a supperposition of an elliptical lens model with external shear lens_model_list = ['SPEP', 'SHEAR'] kwargs_lens = [lista_de_dicionarios, kwargs_shear] lens_model_class_lista= [] for a in range(0,104): lens_model_class = LensModel(lens_model_list=lens_model_list, z_lens=z_lens[a], z_source=z_source[a], cosmo=cosmo) lens_model_class_lista.append(lens_model_class) # choice of source type source_type = 'SERSIC' # 'SERSIC' or 'SHAPELETS' source_x = 0. 
source_y = 0.1 # Sersic parameters in the initial simulation phi_G, q = 0.5, 0.8 e1, e2 = param_util.phi_q2_ellipticity(phi_G, q) kwargs_sersic_source = {'amp': 4000, 'R_sersic': 0.2, 'n_sersic': 1, 'e1': e1, 'e2': e2, 'center_x': source_x, 'center_y': source_y} source_model_list = ['SERSIC_ELLIPSE'] kwargs_source = [kwargs_sersic_source] source_model_class = LightModel(light_model_list=source_model_list) # lens light model phi_G, q = 0.9, 0.9 e1, e2 = param_util.phi_q2_ellipticity(phi_G, q) kwargs_sersic_lens = {'amp': 8000, 'R_sersic': 0.4, 'n_sersic': 2., 'e1': e1, 'e2': e2, 'center_x': 0.0, 'center_y': 0} lens_light_model_list = ['SERSIC_ELLIPSE'] kwargs_lens_light = [kwargs_sersic_lens] lens_light_model_class = LightModel(light_model_list=lens_light_model_list) # Image positions lensEquationSolver = LensEquationSolver(lens_model_class) x_image_lista = [] y_image_lista = [] for a in range(0, 104): x_image, y_image = lensEquationSolver.findBrightImage(source_x, source_y, [lista_de_dicionarios[a], kwargs_shear], numImages=4, min_distance=deltaPix, search_window=numPix * deltaPix) x_image_lista.append(x_image) y_image_lista.append(y_image) mag_lista = [] for a in range(0,104): mag = lens_model_class.magnification(x_image_lista[a], y_image_lista[a], kwargs=[lista_de_dicionarios[a], kwargs_shear]) mag_lista.append(mag) kwargs_ps_lista = [] for a in range(0, 104): kwargs_ps = [{'ra_image': x_image_lista[a], 'dec_image': y_image_lista[a], 'point_amp': np.abs(mag_lista[a])*1000}] kwargs_ps_lista.append(kwargs_ps) point_source_list = ['LENSED_POSITION'] point_source_class = PointSource(point_source_type_list=point_source_list, fixed_magnification_list=[False]) kwargs_numerics = {'supersampling_factor': 1} imageModel_lista = [] for a in range(0, 104): imageModel = ImageModel(data_class, psf_class, lens_model_class_lista[a], source_model_class, lens_light_model_class, point_source_class, kwargs_numerics=kwargs_numerics) imageModel_lista.append(imageModel) # + import warnings warnings.filterwarnings('ignore') #generate image (Ele cria a imagem) image_sim = imageModel.image([lista_de_dicionarios[0], kwargs_shear], kwargs_source, kwargs_lens_light, kwargs_ps_lista[0]) poisson = image_util.add_poisson(image_sim, exp_time=exp_time) bkg = image_util.add_background(image_sim, sigma_bkd=sigma_bkg) image_sim = image_sim + bkg + poisson data_class.update_data(image_sim) kwargs_data['image_data'] = image_sim kwargs_model = {'lens_model_list': lens_model_list, 'lens_light_model_list': lens_light_model_list, 'source_light_model_list': source_model_list, 'point_source_model_list': point_source_list } # display the initial simulated image cmap_string = 'gray' cmap = plt.get_cmap(cmap_string) cmap.set_bad(color='k', alpha=1.) cmap.set_under('k') v_min = -4 v_max = 2 f, axes = plt.subplots(1, 1, figsize=(6, 6), sharex=False, sharey=False) ax = axes im = ax.matshow(np.log10(image_sim), origin='lower', vmin=v_min, vmax=v_max, cmap=cmap, extent=[0, 1, 0, 1]) ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) ax.autoscale(False) plt.show() # A imagem abaixo é a única lente que estou olhando, é essa imagem que vemos abaixo. # - # # time delays # time delays are defined in lenstronomy as the difference in light travel path relative to a straight line. Negative values correspond to earlier arrival times. 
The units are in days # + from lenstronomy.Analysis.td_cosmography import TDCosmography td_cosmo_lista = [] t_days_lista = [] printar_0_lista = [] dt_days_lista = [] dt_measured_lista = [] printar_1_lista = [] for k in range(0,104): td_cosmo = TDCosmography(z_lens[k], z_source[k], kwargs_model, cosmo_fiducial=cosmo) td_cosmo_lista.append(td_cosmo) # time delays, the unit [days] is matched when the lensing angles are in arcsec t_days = td_cosmo_lista[k].time_delays([lista_de_dicionarios[k], kwargs_shear], kwargs_ps_lista[k], kappa_ext=0) t_days_lista.append(t_days) printar_0 = print("the time delays for the images at position ", kwargs_ps_lista[k][0]['ra_image'], kwargs_ps_lista[k][0]['dec_image'], "are: ", t_days_lista[k]) printar_0_lista.append(printar_0) # relative delays (observable). The convention is relative to the first image dt_days = t_days_lista[k][1:] - t_days_lista[k][0] dt_days_lista.append(dt_days) # and errors can be assigned to the measured relative delays (full covariance matrix not yet implemented) dt_sigma = [3, 5, 10] # Gaussian errors np.random.seed(2) # and here a realisation of the measurement with the quoted error bars dt_measured = np.random.normal(dt_days_lista[k], dt_sigma) dt_measured_lista.append(dt_measured) printar_1 = print("the measured relative delays are: ", dt_measured_lista[k]) printar_1_lista.append(printar_1) # - # # Fermat Potential # + from lenstronomy.Cosmo.lens_cosmo import LensCosmo lens_cosmo_lista = [] d_fermat_model_lista = [] for k in range(0,104): lens_cosmo = LensCosmo(z_lens[k], z_source[k], cosmo) lens_cosmo_lista.append(lens_cosmo) d_fermat_model = lens_cosmo_lista[k].time_delay2fermat_pot(dt=dt_days_lista[k]) d_fermat_model_lista.append(d_fermat_model) # - # # Time delay distance # + td_distance_lista = [] for k in range(0,104): td_distance = td_cosmo_lista[k].ddt_from_time_delay(d_fermat_model_lista[k], dt_measured_lista[k] , kappa_s=0, kappa_ds=0, kappa_d=0) td_distance_lista.append(td_distance) # - # # kinematics # Kinematics can provide important complementary information about the lens to constrain cosmography # + R_slit = 1. # slit length in arcsec dR_slit = 1. # slit width in arcsec psf_fwhm = 0.7 # Full width at half maximum of the PSF kwargs_aperture = {'aperture_type': 'slit', 'length': R_slit, 'width': dR_slit, 'center_ra': 0.05, 'center_dec': 0, 'angle': 0} anisotropy_model = 'OM' aperture_type = 'slit' kwargs_numerics_galkin = {'interpol_grid_num': 1000, # numerical interpolation, should converge -> infinity 'log_integration': True, # log or linear interpolation of surface brightness and mass models 'max_integrate': 100, 'min_integrate': 0.001} # lower/upper bound of numerical integrals r_ani = 1. 
# anisotropy radius r_eff = 0.2 kwargs_anisotropy = {'r_ani': r_ani} kwargs_seeing = {'psf_type': 'GAUSSIAN', 'fwhm': psf_fwhm} from lenstronomy.Analysis.kinematics_api import KinematicsAPI kin_api_lista = [] vel_disp_lista = [] for l in range(0,104): kin_api = KinematicsAPI(z_lens[l], z_source[l], kwargs_model, cosmo=cosmo, lens_model_kinematics_bool=[True, False], light_model_kinematics_bool=[True], kwargs_aperture=kwargs_aperture, kwargs_seeing=kwargs_seeing, anisotropy_model=anisotropy_model, kwargs_numerics_galkin=kwargs_numerics_galkin, sampling_number=10000, # numerical ray-shooting, should converge -> infinity Hernquist_approx=True) kin_api_lista.append(kin_api) vel_disp = kin_api_lista[l].velocity_dispersion([lista_de_dicionarios[l], kwargs_shear], kwargs_lens_light, kwargs_anisotropy, r_eff=r_eff, theta_E=None, kappa_ext=0) vel_disp_lista.append(vel_disp) # - # # Function J # + from lenstronomy.Util import constants as const J_lista = [] for k in range(0,104): #vel_disp_lista[k]*1000, convert from [km/s] to [m/s] J = ((1000*vel_disp_lista[k])**2 * lens_cosmo_lista[k].dds) / (lens_cosmo_lista[k].ds * const.c ** 2) J_lista.append(J) # - # # $D_s$/$D_{ds}$ # + ds_dds_lista = [] for k in range(0,104): ds_dds = td_cosmo_lista[k].ds_dds_from_kinematics(vel_disp_lista[k], J_lista[k], kappa_s=0, kappa_ds=0) ds_dds_lista.append(ds_dds) # - # # Angular diameter distance to the lens ($D_d$) # + dd_lista = [] #D_d_sample = [] for k in range(0,104): dd = td_distance_lista[k] / (ds_dds_lista[k] * (1 + z_lens[k])) dd_lista.append(dd) # - # # Results for lens system # + from lenstronomy.Cosmo.kde_likelihood import KDELikelihood kde_lista = [] for k in range(0,104): kde = KDELikelihood(D_d_sample = dd_lista[k], D_delta_t_sample = td_distance_lista[k], kde_type='gaussian', bandwidth=20) kde_lista.append(kde) # + def log_likelihood_100(params): h0, om_m = params cosmo = FlatLambdaCDM(H0=h0, Om0=om_m) log_like_lista = [] D_d_r_lista = [] D_dt_r_lista = [] D_d_lista = [] D_s_lista = [] D_ds_lista = [] z_lens_lista = [] for k in range(0,104): D_d = cosmo.angular_diameter_distance(z_lens[k]).value D_s = cosmo.angular_diameter_distance(z_source[k]).value D_ds = cosmo.angular_diameter_distance_z1z2(z_lens[k], z_source[k]).value D_d_lista.append(D_d) D_s_lista.append(D_s) D_ds_lista.append(D_ds) z_lens_lista.append(z_lens[k]) D_dt_r = (1+z_lens[k])*(D_d_lista[k] * D_s_lista[k] / D_ds_lista[k]) D_d_r_lista.append(D_d_lista[k]) D_dt_r_lista.append(D_dt_r) log_like = kde_lista[k].logLikelihood(D_d_r_lista[k], D_dt_r_lista[k]) log_like_lista.append(log_like) return np.sum(log_like_lista) def log_prior_100(params): h0, om_m = params if not 0. < h0 < 150.: return -np.inf if not 0.05 < om_m < 0.5: return -np.inf return 0. 
def log_probability_100(params): prior = log_prior_100(params) if not np.isinf(prior): return log_likelihood_100(params) + prior else: return prior # - # -\-\-\-\-\- # + import emcee from multiprocessing import Pool n_walkers, n_dim = 50, 2 with Pool(processes = 18) as pool: np.random.seed(1) pos = np.random.normal(loc=[70., 0.3], size=[n_walkers, n_dim], scale=[1, 5e-2]) sampler = emcee.EnsembleSampler(n_walkers, n_dim, log_probability_100, pool = pool) state = sampler.run_mcmc(pos, 500, progress = True) # + import copy fig, axes = plt.subplots(n_dim, figsize=(10, 7), sharex=True) samples = copy.deepcopy(sampler.chain) #samples[:, :, 0] -= np.mean(samples[:, :, 0]) #samples[:, :, 1] -= np.median(samples[:, :, 1]) labels = [r"$H_0$ (km s$^{-1}$ Mpc$^{-1}$)", r"$\Omega_m$"] for i in range(n_dim): ax = axes[i] ax.plot(samples[:, :, i].T, "k", alpha=0.1) ax.set_ylabel(labels[i]) ax.yaxis.set_label_coords(-0.1, 0.5) axes[-1].set_xlabel("step number"); # + corner.corner(samples.reshape(-1, 2), show_titles=True, labels=labels, title_fmt='.1f' ); #numpy.reshape: dá uma nova forma a um array sem alterar os dados # - # # For a single lens # + def log_likelihood(params): h0, om_m = params cosmo = FlatLambdaCDM(H0=h0, Om0=om_m) D_d = cosmo.angular_diameter_distance(z_lens[101]).value D_s = cosmo.angular_diameter_distance(z_source[101]).value D_ds = cosmo.angular_diameter_distance_z1z2(z_lens[101], z_source[101]).value D_d_r = D_d D_dt_r = (1+z_lens[101])*(D_d * D_s / D_ds) return kde_lista[101].logLikelihood(D_d_r, D_dt_r) def log_prior(params): h0, om_m = params if not 0. < h0 < 150.: return -np.inf if not 0.05 < om_m < 0.5: return -np.inf return 0. def log_probability(params): prior = log_prior(params) if not np.isinf(prior): return log_likelihood(params) + prior else: return prior # + import emcee from multiprocessing import Pool n_walkers, n_dim = 50, 2 with Pool(processes = 8) as pool: np.random.seed(1) pos = np.random.normal(loc=[70., 0.3], size=[n_walkers, n_dim], scale=[1, 5e-2]) sampler = emcee.EnsembleSampler(n_walkers, n_dim, log_probability, pool = pool) state = sampler.run_mcmc(pos, 500, progress = True) # + import copy fig, axes = plt.subplots(n_dim, figsize=(10, 7), sharex=True) samples = copy.deepcopy(sampler.chain) #samples[:, :, 0] -= np.mean(samples[:, :, 0]) #samples[:, :, 1] -= np.median(samples[:, :, 1]) labels = [r"$H_0$ (km s$^{-1}$ Mpc$^{-1}$)", r"$\Omega_m$"] for i in range(n_dim): ax = axes[i] ax.plot(samples[:, :, i].T, "k", alpha=0.1) ax.set_ylabel(labels[i]) ax.yaxis.set_label_coords(-0.1, 0.5) axes[-1].set_xlabel("step number"); # - corner.corner(samples.reshape(-1, 2), show_titles=True, labels=labels, title_fmt='.1f' ); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + origin_pos=4 tab=["tensorflow"] import tensorflow as tf # + [markdown] origin_pos=5 # ## Conv2d(卷积核) # tf.keras.layers.Conv2D( #     filters, kernel_size, strides=(1, 1), padding='valid', #     data_format=None, dilation_rate=(1, 1), groups=1,     activation=None, #     use_bias=True, kernel_initializer='glorot_uniform', #     bias_initializer='zeros', kernel_regularizer=None, #     bias_regularizer=None, activity_regularizer=None,     kernel_constraint=None, #     bias_constraint=None, **kwargs # ) # - filters = 1 # + origin_pos=23 tab=["tensorflow"] # 构造一个二维卷积层,它具有1个输出通道和形状为(1,2)的卷积核 conv2d = tf.keras.layers.Conv2D(filters, (1, 
2), use_bias=False) # + ## Padding ## 'same': keep the size the same # - # 注意!!! 这里每边都填充了1行或1列,因此总共添加了2行或2列 conv2d = tf.keras.layers.Conv2D(filters, kernel_size=3, padding='same') conv2d = tf.keras.layers.Conv2D(filters, kernel_size=(5, 3), padding='same') # ## stride conv2d = tf.keras.layers.Conv2D(filters, kernel_size=3, padding='same', strides=2) conv2d = tf.keras.layers.Conv2D(filters, kernel_size=(3,5), padding='valid',strides=(3, 4)) # ## Pooling layer(汇聚层) # tf.keras.layers.MaxPool2D( #     pool_size=(2, 2), #     strides=None, #     padding='valid', #     data_format=None, # ) X = tf.reshape(tf.range(16, dtype=tf.float32), (1, 4, 4, 1)) X # ### 默认情况下,步幅与汇聚窗口的大小相同。 因此,如果我们使用形状为(3, 3)的汇聚窗口,那么默认情况下,我们得到的步幅形状为(3, 3)。 pool2d = tf.keras.layers.MaxPool2D(pool_size=[3, 3]) pool2d(X) pool2d = tf.keras.layers.MaxPool2D(pool_size=[3, 3], padding='valid', strides=2) pool2d = tf.keras.layers.MaxPool2D(pool_size=[2, 3], padding='valid', strides=(2, 3)) # # Batch Normalization bn = tf.keras.layers.BatchNormalization() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt import pandas as pd from sklearn.cross_validation import train_test_split dataset = pd.read_csv('./dataset.csv') # + #dataset.drop(['SRC', 'TGT','TXT'], axis=1,inplace=True) # + #dataset.to_csv('./dataset.csv') # - dataset[0:10] dataset.drop(dataset.columns[0], axis=1, inplace=True) dataset[0:10] dataset = dataset.reindex_axis(['VOT','SCRIND','SCROUTD','TGTIND','TGTOUTD','numWords'], axis=1) dataset[0:10] # Define x and y for training X = dataset.iloc[:, :-1].values y = dataset.iloc[:, 5].values X # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import _init_paths import caffe import numpy as np caffe.set_mode_cpu() # - net = caffe.Net('./res18_part.prototxt', caffe.TEST) print '--->net\n', net print '--->net.layer_dict\n', net.layer_dict print '--->net.layers\n', net.layers print '--->net.inputs\n', net.inputs print '--->net.blobs\n', net.blobs print '--->net.params\n', net.params print '--->net.bottom_names\n', net.bottom_names print '--->net.top_names\n', net.top_names for layer in net.layers: print '--->layer\n', layer print ' layer.type: ', layer.type print ' layer.blobs: ', layer.blobs for blob in layer.blobs: print ' blob: ', blob print ' blob.shape: ', blob.shape print ' blob.shape: ', np.array(blob.shape) for name,blob in net.blobs.iteritems(): print '--->blob name: ', name print ' blob: ', blob print ' blob.shape: ', blob.shape print ' blob.shape: ', np.array(blob.shape) for layer, param_blobs in net.params.iteritems(): print '---> layer: ', layer print ' num of param_blob: ', len(param_blobs) for i, blob in enumerate(param_blobs): print ' shape of param blob #%d: ' % i, np.array(blob.shape) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Data Structure # # ## 자료구조의 정의 및 종류 # # 자료구조를 물리적으로 구현하는 방법 # * List # * Linked list # # 이를 이용하여 구현된 자료구조의 종류 # # * List # * Linked list # # 아래는 **List**, **Linked list** 로 구현이 가능하고, # * Array # * Stack # * Queue # * Tree # * 
Heap # * Graph # 데크는 **List** 로 구현이 가능하다. # * Deque # # ## 자료구조의 분류 # # 자료구조 # * 단순구조 # - 2진수, 정수, 실수, 문자, 문자열 --> 배열, 구조체, 클래스 등 사용자 정의 자료형으로 사용 # * 선형구조 # _ 리스트 # - 연결 리스트 --> 단순 연결, 이중 연결, 원형 연결 리스트 # - 데크 # - 스택 # - 큐 # * 비선형구조 # - 트리 --> 일반 트리, 이진 트리 # - 그래프 --> 방향 그래프, 무방향 그래프 # * 파일구조 # - 순차파일 # - 색인파일 # - 직접파일 # ## 자료구조의 구현 # ### List 형 자료구조 # * 배열: 크기가 변하지 않음, 중간 값이 지워져도 크기가 일정 # * 리스트: 크기가 변함, 중간 값이 지워지면 뒤에 값이 앞으로 이동 # #### Python example # List 를 생성하고 삭제, 출력 예시 # + # Python List sales_results = [12, 45, 67, 43, 56, 98] for s in sales_results: print ( "The sales result = %d" % ( s ) ) print ( " delete:", sales_results[3]) del sales_results[3] for s in sales_results: print ( "The sales result = %d" % ( s ) ) # - # ### 연결 리스트 자료구조 # * 연결 리스트 # * 더블 연결 리스트 # * 환형 연결 리스트 # # 연결리스트는 데이터 사이의 관계를 이용하기 때문에 # * 데이터의 중간 삽입과 삭제가 용이 # * 데이터를 서치하기는 불편 # #### Python example # Node class 를 선언하고 이를 이용해 # 1. 연결 리스트를 생성하고, # * 데이터를 삭제, # * 데이터를 삽입, # * 최종 결과를 출력 한다. # # + # Python Linked List class Node: def __init__(self, data, next=None): self.data = data self.next = None # 다음 data # + # 1. 연결리스트 생성 node1 = Node(1) node2 = Node(2) node3 = Node(3) node4 = Node(4) node1.next = node2 node2.next = node3 node3.next = node4 # + # 2. 구성된 리스트에서 데이터 2 를 지우고 나머지를 연결 del_data = 2 pre_node = node1 next_node = pre_node.next if pre_node.data == del_data: node1 = next_node del pre_node while next_node: if next_node.data == del_data: pre_node.next = next_node.next del next_node break pre_node = next_node next_node = next_node.next # - # 3. 구성된 리스트에서 데이터 9 를 삽입 new_node = Node(9) new_node.next = node1 node1 = new_node # 4. 결과 출력 node = node1 while node: print( node.data ) node = node.next # ### 연결리스트를 이용한 스택 구현 # #### Python example # Link class 를 선언하고 이를 이용해 스택을 구현 # + # Link class class Link: def __init__(self, d1=None, d2=None): self.data1 = d1 self.data2 = d2 self.next_link = None def printLink(self): print ( "{", self.data1, ", ", self.data2, "}") # LinkList class class LinkList: def __init__(self): self.first_link = None def isEmpty(self): return self.first_link == None def insert(self, d1, d2): if self.first_link == None : self.first_link = Link( d1, d2 ) else: tmp_next_link = self.first_link self.first_link = Link( d1, d2 ) # first_link 의 next_link 초기화됨 self.first_link.next_link = tmp_next_link def delete(self): rlink = self.first_link self.first_link = self.first_link.next_link return rlink def printList(self): curLink = self.first_link print ( "Link list" ) while(curLink != None): curLink.printLink() curLink = curLink.next_link def test(): link_list = LinkList() link_list.insert( 1, 100 ) link_list.insert( 2, 200 ) link_list.insert( 3, 300 ) link_list.insert( 4, 400 ) link_list.insert( 5, 500 ) link_list.printList() while( not link_list.isEmpty() ): deletedLink = link_list.delete() print ( "delete" ) deletedLink.printLink() link_list.printList() test() # - # ### 해쉬테이블 # * [참고자료](https://hsp1116.tistory.com/35) # - 설명, 코드 추가하기 # ## 순서리스트 자료구조 # # ## 배열 자료구조 # # # ## 스택 자료구조 # * Last In First Out(LIFO) # * PUSH: 데이터를 넣는 것 # * POP: 데이터를 꺼내는 것 # + stack = [] stack.append(1) stack.append(2) stack.append(3) stack.append(4) stack.append(5) print( "Stack:" ) print( stack) while stack: print( "POP Value is ", stack.pop() ) # - # ## 큐 자료구조 # * First In First Out(FIFO) # + queue = [] queue.append(1) queue.append(2) queue.append(3) queue.append(4) queue.append(5) print( "Queue:" ) print( queue ) while queue: print( "Get Value: ", queue.pop(0) ) # - # ## 데크 자료구조 # # ## 트리 자료구조 # # ## 이진 트리 # # ## 힙 # # ## 그래프 자료구조 
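# #### Python example
# The deque/tree/heap/graph sections above are placeholders. As a minimal illustrative
# sketch (not part of the original notes), a graph can be stored as an adjacency list
# (a dict mapping each node to its neighbours) and traversed breadth-first by reusing
# the FIFO queue idea from the queue example above.

# +
# Simple undirected graph as an adjacency list (hypothetical example data)
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C'],
}

def bfs(graph, start):
    visited = []
    queue = [start]  # FIFO queue, as in the queue example above
    while queue:
        node = queue.pop(0)
        if node not in visited:
            visited.append(node)
            # enqueue neighbours that have not been visited yet
            queue.extend(n for n in graph[node] if n not in visited)
    return visited

print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D']
# -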
# # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Robot Sensors # # A robot senses the world through cameras and other sensors, but these sensors are not perfectly accurate. In the video, you saw an example of a robot in a 1D world made of colored grid cells; all cells were either green or red. The robot then sensed that it was in a red grid cell. # # The probability that this reading was accurate, which we'll call the prbability that the sensor has hit its target, `pHit`, was `0.6` and the probability that this reading was inaccurate (the sensor has missed its target) and the robot was *actually* in a green cell was `pMiss` equal to `0.2`. # # In this notebook, let's go through how this works step by step. # ### Uniform Distribution # # The robot starts with a map with a length of 5 cells. Since the robot does not know where it is at first, the probability of being in any space is the same; a uniform distribution! # importing resources import matplotlib.pyplot as plt import numpy as np # ex. initialize_robot(5) = [0.2, 0.2, 0.2, 0.2, 0.2] def initialize_robot(grid_length): ''' Takes in a grid length and returns a uniform distribution of location probabilities''' p = [] # create a list that has the value of 1/grid_length for each cell for i in range(grid_length): p.append(1.0/grid_length) return p # I'll also include a helper function for visualizing this distribution. The below function, `display_map` will output a bar chart showing the probability that a robot is in each grid space. The y-axis has a range of 0 to 1 for the range of probabilities. For a uniform distribution, this will look like a flat line. You can choose the width of each bar to be <= 1 should you want to space these out. def display_map(grid, bar_width=1): if(len(grid) > 0): x_labels = range(len(grid)) plt.bar(x_labels, height=grid, width=bar_width, color='b') plt.xlabel('Grid Cell') plt.ylabel('Probability') plt.ylim(0, 1) # range of 0-1 for probability values plt.title('Probability of the robot being at each cell in the grid') plt.xticks(np.arange(min(x_labels), max(x_labels)+1, 1)) plt.show() else: print('Grid is empty') # initialize a 5 cell, 1D world p = initialize_robot(5) display_map(p) # ### Probability After Sense # # Then the robot senses that it is in a red cell, and updates its probabilities. As per our example: # # * The probability that it is sensing the correct color is `pHit = 0.6`. # * The probability that it is sensing the incorrect color (in this case: seeing red but *actually* in a green cell) is `pMiss = 0.2` # # # # #### Next, we write code that outputs a new grid, `p`, after multiplying each entry by pHit or pMiss at the appropriate places. # # Remember that the red cells (cell 1 and 2) are "hits" and the other green cells are "misses." # # Note that you may see values that are not exact due to how machines imperfectly represent floating points. # + # given initial variables p = initialize_robot(5) pHit = 0.6 pMiss = 0.2 # Creates a new grid, with modified probabilities, after sensing # All values are calculated by a product of 1. the sensing probability for a color (pHit for red) # and 2. the current probability of a robot being in that location p[i]; all equal to 0.2 at first. 
p[0] = p[0]*pMiss p[1] = p[1]*pHit p[2] = p[2]*pHit p[3] = p[3]*pMiss p[4] = p[4]*pMiss print(p) display_map(p) # - # You should see that the red grid cells (1 and 2) have a higher probability than the green cells. One thing that may look strange is how low these probability bars are, and you may have noticed that these don't accurately represent a probability distribution because the components of this list do not add up to 1! # # ### QUIZ: Compute the sum of all of these probabilities. # # What do these values add up to and how do you think we can turn this into a probability distribution whose components do add up to 1? # # In the next code cell, write code to sum up the values in the new world, `p`. # + # given initial variables p=[0.2, 0.2, 0.2, 0.2, 0.2] # the color of each grid cell in the 1D world world=['green', 'red', 'red', 'green', 'green'] # Z, the sensor reading ('red' or 'green') Z = 'red' pHit = 0.6 pMiss = 0.2 ## Complete this function def sense(p, Z): ''' Takes in a current probability distribution, p, and a sensor reading, Z. Returns an unnormalized distribution after the sensor measurement has been made, q. This should be accurate whether Z is 'red' or 'green'. ''' q=[ (p[i]*pHit) if world[i]==Z else (p[i]*pMiss) for i in range(len(p))] sq = sum(q) q =[x/sq for x in q] return q q = sense(p,Z) print(q) display_map(q) # + # given initial variables p=[0.2, 0.2, 0.2, 0.2, 0.2] # the color of each grid cell in the 1D world world=['green', 'red', 'red', 'green', 'green'] # measurements, now a *list* of sensor readings ('red' or 'green') measurements = ['red', 'green'] pHit = 0.6 pMiss = 0.2 # sense function def sense(p, Z): ''' Takes in a current probability distribution, p, and a sensor reading, Z. Returns a *normalized* distribution after the sensor measurement has been made, q. This should be accurate whether Z is 'red' or 'green'. ''' q=[] # loop through all grid cells for i in range(len(p)): # check if the sensor reading is equal to the color of the grid cell # if so, hit = 1 # if not, hit = 0 hit = (Z == world[i]) q.append(p[i] * (hit * pHit + (1-hit) * pMiss)) # sum up all the components s = sum(q) # divide all elements of q by the sum to normalize for i in range(len(p)): q[i] = q[i] / s return q ## TODO: Add your code for accounting for 2 motion measurements, here ## Grab and print out the resulting distribution, p for Z in measurements: p = sense(p, Z) print(p) print(p) display_map(p) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Observations and Insights # # + # Dependencies and Setup # %matplotlib notebook import matplotlib.pyplot as plt import pandas as pd import scipy.stats as st import numpy as np # Study data files mouse_metadata_path = "data/Mouse_metadata.csv" study_results_path = "data/Study_results.csv" # Read the mouse data and the study results mouse_metadata = pd.read_csv(mouse_metadata_path) study_results = pd.read_csv(study_results_path) # Combine the data into a single dataset combined_df = pd.merge(mouse_metadata, study_results, how="outer", on="Mouse ID") # Display the data table for preview combined_df # - # Checking the number of mice. 
unique_count = len(combined_df["Mouse ID"].value_counts()) num_dict = { "Number of Unique Mice": unique_count } num_of_mice = pd.DataFrame([num_dict]) num_of_mice # Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint. duplicated_mice = combined_df.loc[combined_df.duplicated(subset=['Mouse ID', 'Timepoint',]),:].value_counts() # Optional: Get all the data for the duplicate mouse ID. duplicated_df = pd.DataFrame(duplicated_mice) duplicated_df.head() # Create a clean DataFrame by dropping the duplicate mouse by its ID clean_df = combined_df[combined_df["Mouse ID"].isin(duplicated_df) == False] clean_df.head() # Checking the number of mice in the clean DataFrame len(combined_df["Mouse ID"].value_counts())-1 # ## Summary Statistics clean_df.columns # + # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen # Use groupby and summary statistical methods to calculate the following properties of each drug regimen: # mean, median, variance, standard deviation, and SEM of the tumor volume. # Assemble the resulting series into a single summary dataframe. mean = clean_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].mean() median = clean_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].median() variance = clean_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].var() sd = clean_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].std() sem = clean_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].sem() tumor_volume_stats = pd.DataFrame({ "Mean": round(mean,2), "Median": round(median, 2), "Variance": round(variance, 2), "Standard Deviation": round(sd, 2), "SEM": round(sem, 2) }) tumor_volume_stats # - # Using the aggregation method, produce the same summary statistics in a single line drug_groupby = clean_df.groupby('Drug Regimen') tumor_volume_stats_aggr = drug_groupby.agg(['mean','median','var','std','sem'])["Tumor Volume (mm3)"] tumor_volume_stats_aggr # ## Bar and Pie Charts # Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas. 
count_drugs = clean_df["Drug Regimen"].value_counts() count_drugs.plot(kind="bar", title="Total Number of Measurements per Drug Regimen", xlabel="Drug Regimen", ylabel="Total Number of Measurements (Count)", legend="best") count_per_drug = pd.DataFrame(clean_df['Drug Regimen'].value_counts()) count_per_drug # Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot drugs = ["Capomulin", "Ramicane", "Ketapril", "Naftisol", "Zoniferol", "Stelasyn", "Placebo", "Infubinol", "Ceftamin", "Propriva"] plt.xticks(rotation=45) plt.bar(drugs, count_per_drug["Drug Regimen"], color="orange", align="center") plt.title("Total Number of Measurements per Drug Regimen") plt.xlabel("Drug Regimen") plt.ylabel("Total Number of Measurements (Count)") clean_df.head() # Generate a pie plot showing the distribution of female versus male mice using pandas groupby_sex = clean_df.groupby("Sex")["Mouse ID"].nunique() groupby_sex_df = pd.DataFrame(groupby_sex) explode = (0.1, 0) groupby_sex_df["Mouse ID"].plot(kind="pie", title="Count Per Gender", legend="best", shadow = True, explode=explode, autopct="%1.1f%%") groupby_sex_df # Generate a pie plot showing the distribution of female versus male mice using pyplot labels = ["Female", "Male"] sizes = [124, 125] colors = ["blue", "orange"] explode = (0.1, 0) plt.pie(sizes, explode=explode, labels=labels, colors=colors,autopct="%1.1f%%", shadow=True) plt.title("Count Per Gender") plt.legend(loc="best") # ## Quartiles, Outliers and Boxplots clean_df.head() # + # Calculate the final tumor volume of each mouse across four of the treatment regimens: # Capomulin, Ramicane, Infubinol, and Ceftamin # Start by getting the last (greatest) timepoint for each mouse cap_df = clean_df.loc[clean_df["Drug Regimen"] == "Capomulin",:] ram_df = clean_df.loc[clean_df["Drug Regimen"] == "Ramicane", :] inf_df = clean_df.loc[clean_df["Drug Regimen"] == "Infubinol", :] ceft_df = clean_df.loc[clean_df["Drug Regimen"] == "Ceftamin", :] #capomulin cap_max = cap_df.groupby("Mouse ID")["Timepoint"].max() cap_max = pd.DataFrame(cap_max) cap_merge = pd.merge(cap_max, clean_df, on=("Mouse ID", "Timepoint"), how="left") cap_merge.head() # - #ramicane ram_max = ram_df.groupby("Mouse ID")["Timepoint"].max() ram_max = pd.DataFrame(ram_max) ram_merge = pd.merge(ram_max, clean_df, on=("Mouse ID", "Timepoint"), how="left") ram_merge.head() #infubinol inf_max = inf_df.groupby("Mouse ID")["Timepoint"].max() inf_max = pd.DataFrame(inf_max) inf_merge = pd.merge(inf_max, clean_df, on=("Mouse ID", "Timepoint"), how="left") inf_merge.head() #ceftamin ceft_max = ceft_df.groupby("Mouse ID")["Timepoint"].max() ceft_max = pd.DataFrame(ceft_max) ceft_merge = pd.merge(ceft_max, clean_df, on=("Mouse ID", "Timepoint"), how="left") ceft_merge.head() # + # Put treatments into a list for for loop (and later for plot labels) treatments = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"] # Create empty list to fill with tumor vol data (for plotting) tumor_volumes = [] # Calculate the IQR and quantitatively determine if there are any potential outliers. 
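# (Outlier rule applied below: with IQR = Q3 - Q1, a final tumor volume is flagged as a
#  potential outlier if it falls below Q1 - 1.5*IQR or above Q3 + 1.5*IQR.)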
cap_volumes = cap_merge["Tumor Volume (mm3)"] cap_quartiles = cap_volumes.quantile([0.25, 0.5, 0.75]) cap_iqr = cap_quartiles[0.75] - cap_quartiles[0.25] tumor_volumes.append(cap_volumes) ram_volumes = ram_merge["Tumor Volume (mm3)"] ram_quartiles = ram_volumes.quantile([0.25, 0.5, 0.75]) ram_iqr = ram_quartiles[0.75] - ram_quartiles[0.25] tumor_volumes.append(ram_volumes) inf_volumes = inf_merge["Tumor Volume (mm3)"] inf_quartiles = inf_volumes.quantile([0.25, 0.5, 0.75]) inf_iqr = inf_quartiles[0.75] - inf_quartiles[0.25] tumor_volumes.append(inf_volumes) ceft_volumes = ceft_merge["Tumor Volume (mm3)"] ceft_quartiles = ceft_volumes.quantile([0.25, 0.5, 0.75]) ceft_iqr = ceft_quartiles[0.75] - ceft_quartiles[0.25] tumor_volumes.append(ceft_volumes) # Determine outliers using upper and lower bounds cap_lowerbound = cap_quartiles[0.25] - (1.5*cap_iqr) cap_uperbound = cap_quartiles[0.75] + (1.5*cap_iqr) ram_lowerbound = ram_quartiles[0.25] - (1.5*ram_iqr) ram_uperbound = ram_quartiles[0.75] + (1.5*ram_iqr) inf_lowerbound = inf_quartiles[0.25] - (1.5*inf_iqr) inf_uperbound = inf_quartiles[0.75] + (1.5*inf_iqr) ceft_lowerbound = ceft_quartiles[0.25] - (1.5*ceft_iqr) ceft_uperbound = ceft_quartiles[0.75] + (1.5*ceft_iqr) # - # Generate a box plot of the final tumor volume of each mouse across four regimens of interest fig1, ax1 = plt.subplots() ax1.set_title("Tumor Volume of Each Mouse for Each Durg Regimen") ax1.set_ylabel("Tumor Volume (mm3)") ax1.set_xlabel("Drug Regimen") ax1.boxplot(tumor_volumes, labels=treatments, sym="b") # ## Line and Scatter Plots cap_df.head() only_b742 = cap_df.loc[cap_df["Mouse ID"] == "b742",:] only_b742.head() # + # Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin time = only_b742["Timepoint"] tumor_volume = only_b742["Tumor Volume (mm3)"] volume_time_df = pd.DataFrame({ "Time": time, "Tumor Volume": tumor_volume }) volume_time_df.plot("Time", "Tumor Volume", kind="line", title="Tumor Volume for Mouse ID b742 vs. Time", xlabel="Time (Days)", ylabel="Tumor Volume (mm3)", marker="o") # - cap_df.head() # + # Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen avg_tumor = cap_df.groupby("Mouse ID")["Tumor Volume (mm3)"].mean() mouse_weight = cap_df.groupby("Mouse ID")["Weight (g)"].mean() tumor_volume_weight = pd.DataFrame({ "Average Tumor Volume": avg_tumor, "Mouse Weight": mouse_weight }) tumor_volume_weight.head() # - tumor_volume_weight.plot("Mouse Weight", "Average Tumor Volume", kind="scatter", title="Average Tumor Volume vs. Mouse Weight", xlabel= "Weight (g)", ylabel="Average Tumor Volume (mm3)") # ## Correlation and Regression # Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen correlation = st.pearsonr(tumor_volume_weight["Mouse Weight"], tumor_volume_weight["Average Tumor Volume"])[0] print(f"The correlation coefficient is {correlation}") line_regress = st.linregress(tumor_volume_weight["Mouse Weight"], tumor_volume_weight["Average Tumor Volume"]) line_regress x_value = tumor_volume_weight["Mouse Weight"] y_value = tumor_volume_weight["Average Tumor Volume"] regress_values = x_value * line_regress.slope + line_regress.intercept line_eq = "y = " + str(round(line_regress.slope,2)) + "x + " + str(round(line_regress.intercept,2)) line_eq tumor_volume_weight.plot("Mouse Weight", "Average Tumor Volume", kind="scatter", title="Average Tumor Volume vs. 
Mouse Weight", xlabel= "Weight (g)", ylabel="Average Tumor Volume (mm3)") plt.plot(x_value, regress_values, "r-") print(line_eq) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="yUpZYjqMpDjx" #comparison opperators is used for mathmetics as example greater then or smaller then import numpy as np import pandas as pd import math # + colab={"base_uri": "https://localhost:8080/"} id="R6EfVdiup-P3" outputId="405d96fd-3687-436c-c6e2-a44e2b7695c1" 1 < 3 # + colab={"base_uri": "https://localhost:8080/"} id="S4pWkyJiqCXr" outputId="7a9d7090-c98b-406b-fa56-5ba3265b0b3c" 1 > 3 # + colab={"base_uri": "https://localhost:8080/"} id="v-gR67M9qE1j" outputId="3891b964-0cd3-426f-8772-cc0a57cda5c4" 1 <= 3 # + colab={"base_uri": "https://localhost:8080/"} id="z7QgNhx6qKEW" outputId="01a9a695-ad49-4f20-fa61-1c3001857545" 1 == 1 # + colab={"base_uri": "https://localhost:8080/"} id="9LLJdT-aqUkQ" outputId="ebb5eee6-0ccb-4b33-eb28-09e7a0143d98" 1 != 1 # + colab={"base_uri": "https://localhost:8080/"} id="lM8W58r4qZ4a" outputId="16c116bd-cadd-4d8a-f06f-902e6b53246c" "string" == 'string' # + colab={"base_uri": "https://localhost:8080/"} id="QL2kqcWYqjC0" outputId="8f476432-93c2-4ec2-849c-86ca7d187d06" 1 == "1" # + id="0mY303OtqlQX" #logic opperators # + colab={"base_uri": "https://localhost:8080/"} id="jFPor1xpqxDT" outputId="ba406800-fe34-4583-bfbf-98353d4b1dc0" (1==3) and (1==1) # + colab={"base_uri": "https://localhost:8080/"} id="YpkZ-ipZq1ok" outputId="4f0d56c2-4160-4e11-f184-66500c5aa26a" (1==3) or (1==1) # + colab={"base_uri": "https://localhost:8080/"} id="Lb3uKL0nq7_t" outputId="92bb5101-cb41-4a08-c695-0f7b7c5b8a1e" (1==1) and not (1==2) # + id="kvcP7fmwrJVr" #el if elif statement = controleflow # + colab={"base_uri": "https://localhost:8080/"} id="-MxplgLfrPZQ" outputId="498ecfb9-73ff-4075-e4c6-6d3e10a157f6" if True: print ("hello world") # + id="naHBgckurZSx" if False: print ("hello world") # + colab={"base_uri": "https://localhost:8080/"} id="p9VZnjF7rckl" outputId="f7f81ce0-f434-4b93-f1f5-c8817c167416" if (1==1): print ("hello darkness my old friend") # + id="3CjF9zh0rpDK" if (1==2): print ("hello darkness my old friend") # + id="zsqpkLLcrrXL" import math # + colab={"base_uri": "https://localhost:8080/"} id="n76K9su8K-d3" outputId="4dcce674-5775-4b38-af6f-872ad5de61a5" print(math.sqrt(300)) # + id="5BVjqgpQLGMI" import numpy as np # + id="3Pn0JItzLXU6" my_list = [1,2,3] # + colab={"base_uri": "https://localhost:8080/"} id="AhTHhCOIMlJN" outputId="26d5d020-b2d0-4354-912d-9926cd30f8e8" x = np.array(my_list) type (x) # + id="EQuaElSaNUEb" # + id="cj-Bu2OGMuVr" my_matrix = [[1,2,3], [4,5,6],[7,8,9]] # + colab={"base_uri": "https://localhost:8080/"} id="CT6uAHFxM0DF" outputId="525e9d2d-1d80-4b10-d9b8-e9e3c34cf82c" np.array(my_matrix) # + colab={"base_uri": "https://localhost:8080/"} id="gcdUR_v8NYLD" outputId="d5cc3d4b-6576-4646-dc0e-13815bd1efcc" list(range(0,5)) np.arange(0,5) # + colab={"base_uri": "https://localhost:8080/"} id="jea2iYmWNrov" outputId="55c2d7af-e26a-4bb6-91f0-14844ceb394e" np.arange (0,11,2) # + colab={"base_uri": "https://localhost:8080/"} id="dCm5_j20OAMO" outputId="e80ef0f6-3cb9-4167-fcf7-3e74efbd85fe" np.zeros(10) # + colab={"base_uri": "https://localhost:8080/"} id="JKlNXsIiOK5J" outputId="f963bd9c-11a1-4a1d-97ad-93893423013b" type(1) type (1.0) # + 
colab={"base_uri": "https://localhost:8080/"} id="1Rm7RumbOQPY" outputId="3703e94e-9855-400d-cb9f-f3cdffd2d350" np.zeros((5,5)) # + colab={"base_uri": "https://localhost:8080/"} id="FOsjydTmO8g8" outputId="d66b6cd2-3db4-4949-a4b8-4fd95034643b" np.random.rand(5) # + [markdown] id="8AJOi-xOU2So" # # + colab={"base_uri": "https://localhost:8080/"} id="2BQm9KlXUlM4" outputId="70a09f9b-000e-4cd0-814d-de27ba6bfc2e" np.random.rand(5,4) # every number have eq chance to get picked # + colab={"base_uri": "https://localhost:8080/"} id="g7BiSW-uVAMu" outputId="b9e8f746-3756-43f9-c865-5c6f0f4b04d8" np.random.randn(5,3) # + colab={"base_uri": "https://localhost:8080/"} id="T1N3g_ypVgvc" outputId="ef444e17-296b-49b9-ca41-936229eeebe2" np.random.randint(0,100) # + colab={"base_uri": "https://localhost:8080/", "height": 165} id="LkF9UoyWV9A_" outputId="89fde054-35a1-4edc-fbed-78fda9338302" np.array.reshape(5,6) # + id="-ALdFo_uXVhG" colab={"base_uri": "https://localhost:8080/", "height": 165} outputId="8c4d1f69-3aca-4f43-fd7c-bd2e92ecc82d" np.exp(arr) # + colab={"base_uri": "https://localhost:8080/"} id="AKq6kZuGB2t5" outputId="ffb657be-730d-40a8-bf9e-0d59d293180f" list(range(0,10,2)) # + colab={"base_uri": "https://localhost:8080/"} id="KNjEKrXpDhum" outputId="87b6a479-a5ae-4880-8708-f5b84bc060c2" np.zeros((3,3)) # + colab={"base_uri": "https://localhost:8080/"} id="wF0BY2umDt5c" outputId="fcc83bff-a6b6-4e46-d15d-16976a62665c" np.ones((420,69)) # + colab={"base_uri": "https://localhost:8080/"} id="LC0WeU9EEEVa" outputId="9626a24c-8b70-487f-d1a2-caecfd6a431d" np.eye(4) # + colab={"base_uri": "https://localhost:8080/"} id="wVxsTTjeEiHu" outputId="00a32c1a-45ec-45e4-f2cd-68ab39d09a98" np.random.rand(5,4) # + colab={"base_uri": "https://localhost:8080/"} id="qcwlFa0XExP4" outputId="4e6e8c14-7e67-44ca-bff7-de7329f84d44" np.random.randn(5,4) # + colab={"base_uri": "https://localhost:8080/", "height": 165} id="TkF6dTlRFODG" outputId="86d85da9-37dc-4cbb-9ded-485b54cd228e" ranarr.agrim() # + colab={"base_uri": "https://localhost:8080/", "height": 182} id="3RuluAhVHnzO" outputId="9768ded7-f2f0-4370-de55-a69d16367f37" np.max(np.array) arr.max() # + id="Q2HpGZEMIr8H" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="SGjo1VKbh83g" outputId="7e8318de-1ce1-422f-8485-a2cf1acb3558" from google.colab import drive drive.mount ('/content/drive') # + colab={"base_uri": "https://localhost:8080/"} id="XQKhYLZriVhS" outputId="a1ac1b1d-0521-4b47-8012-f9de3a0e085d" # !ls '/content/drive/My Drive/yolov3_custom_data_Training' # + id="TqLJuGKxiqS3" #unzip '/content/drive/My Drive/yolov3_custom_data_Training/images.zip' -d '/content/drive/My Drive/yolov3_custom_data_Training' # + id="UTVZILiKi4aZ" # #!git clone 'https://github.com/AlexeyAB/darknet.git' '/content/drive/My Drive/yolov3_custom_data_Training/darknet' # + colab={"base_uri": "https://localhost:8080/"} id="eivY8-AVjEwu" outputId="b4722b14-403b-456b-a80c-695b5aabf016" # %cd '/content/drive/My Drive/yolov3_custom_data_Training/darknet' # + colab={"base_uri": "https://localhost:8080/"} id="oFU3yonijFd5" outputId="e23d9825-fa5d-4372-9f1c-2b5a93aa47fa" # !make # + colab={"base_uri": "https://localhost:8080/"} id="PlYjVQ4XjHRw" outputId="880255d7-11e5-4446-ca93-daf89569acd0" # %cd '/content/drive/My Drive/yolov3_custom_data_Training' # + id="nM8barEnkw45" # !python 
images/creating-files-data-and-name.py # + id="WD7mXiKQlfeP" # !python images/creating-train-and-test-txt-files.py # + colab={"base_uri": "https://localhost:8080/"} id="pLRr7cUAlx2U" outputId="f1ea4a6b-f036-46bb-89cf-a6575bf8f2e5" # !darknet/darknet detector train images/labelled_data.data darknet/cfg/yolov3_custom.cfg custom_weight/darknet53.conv.74 -dont_show # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Import Required Packages import tensorflow as tf import tensorflow_addons as tfa from tqdm import tqdm import pandas as pd import sklearn from sklearn import metrics import re import numpy as np import pickle as pkl import PIL import datetime import os import random import shutil import statistics import time import import_ipynb # ## Import Required Functions or Methods from Other Files from util import * from model import * from optimize_test import * # ## Saving & Restoring CLAM Model Training Checkpoints # ### Loading Models for Training # + ng_att = NG_Att_Net(dim_features=1024, dim_compress_features=512, n_hidden_units=256, n_classes=2, dropout=False, dropout_rate=.25) g_att = G_Att_Net(dim_features=1024, dim_compress_features=512, n_hidden_units=256, n_classes=2, dropout=False, dropout_rate=.25) # - ins = Ins(dim_compress_features=512, n_class=2, n_ins=8, mut_ex=True) # + s_bag = S_Bag(dim_compress_features=512, n_class=2) m_bag = M_Bag(dim_compress_features=512, n_class=2) # + s_clam = S_CLAM(att_gate=True, net_size='big', n_ins=8, n_class=2, mut_ex=False, dropout=True, drop_rate=.55, mil_ins=True, att_only=False) m_clam = M_CLAM(att_gate=True, net_size='big', n_ins=8, n_class=2, mut_ex=False, dropout=True, drop_rate=.55, mil_ins=True, att_only=False) # - # ### Loading Required Path train_is_bach = '/path/' val_is_bach = '/path/' test_is_bach = '/path/' clam_result_dir = '/path/' i_trained_model_dir = '/path/' b_trained_model_dir = '/path/' c_trained_model_dir = '/path/' current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") train_log_dir = '/path/' + current_time + '/train' val_log_dir = '/path/' + current_time + '/val' # ## Start Training, Validating & Testing CLAM Model tf_shut_up(no_warn_op=True) clam_optimize(train_log=train_log_dir, val_log=val_log_dir, train_path=train_is_bach, val_path=val_is_bach, i_model=ins, b_model=s_bag, c_model=s_clam, i_optimizer_func=tfa.optimizers.AdamW, b_optimizer_func=tfa.optimizers.AdamW, c_optimizer_func=tfa.optimizers.AdamW, i_loss_func=tf.keras.losses.binary_crossentropy, b_loss_func=tf.keras.losses.binary_crossentropy, mutual_ex=False, n_class=2, c1=0.7, c2=0.3, i_learn_rate=2e-04, b_learn_rate=2e-04, c_learn_rate=2e-04, i_l2_decay=1e-05, b_l2_decay=1e-05, c_l2_decay=1e-05, n_ins=8, batch_size=2000, batch_op=False, i_model_dir=i_trained_model_dir, b_model_dir=b_trained_model_dir, c_model_dir=c_trained_model_dir, m_bag_op=False, m_clam_op=False, g_att_op=True, epochs=200) clam_test(n_class=2, n_ins=8, att_gate=True, att_only=False, mil_ins=True, mut_ex=False, test_path=test_is_bach, result_path=clam_result_dir, result_file_name='test_bach_model_save.tsv', i_model_dir=i_trained_model_dir, b_model_dir=b_trained_model_dir, c_model_dir=c_trained_model_dir, m_bag_op=False, m_clam_op=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 
Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="o87Yuy_qSazu" # # # BSECE 2-2 # # # If Statement # + colab={"base_uri": "https://localhost:8080/"} id="zcQG5yytSIiT" outputId="64670420-09bd-4176-e911-af4c25e99a4c" a=12 b=100 if b>a: print("b is greater than a") # + [markdown] id="FxFlOVf_Smgd" # Elif Statement # + colab={"base_uri": "https://localhost:8080/"} id="ZJ5bS-fySpt-" outputId="ddfcef3f-5134-47ec-c296-760a1360db84" a=12 b=100 if ba: print("b is greater than a") # + [markdown] id="FZNcin8PTM7m" # Else Statement # + colab={"base_uri": "https://localhost:8080/"} id="dLsX9ZWWSo1U" outputId="d28d052c-8e2e-497a-de73-75117aed7b21" a=12 b=12 if ba: print("b is greater than a") else: print("a is equal to b") # + [markdown] id="Od6wQHDkTdP_" # Short Hand If...Else # + colab={"base_uri": "https://localhost:8080/"} id="M4kwtY9ZThaA" outputId="438631ca-b754-4d35-d7ad-38d79909b13a" a=420 b=330 print("A") if a>b else print("B") # + [markdown] id="RxJ9U7z3Tt1A" # And condition # + colab={"base_uri": "https://localhost:8080/"} id="w5QUzgScTvv_" outputId="5de2492e-474b-4c38-b922-c63e2b4e9fe1" a=200 b=33 c=500 if a>b and ab or a10: print("Above 10") else: print("but not above 10") if x>20: print("and also above 20!") else: print("but not above 20") # + [markdown] id="_bSIJSjtUdcJ" # Nested if...Else # + colab={"base_uri": "https://localhost:8080/"} id="yWwC4hb2UffB" outputId="5f75e1cf-8fe2-4687-f735-89a523c14ae4" x=41 if x>10: print("Above 10") if x>20: print("and also above 20!") else: print("Below 10") # + [markdown] id="-cZ08wz5Uti5" # Example 1: # Write a program that determines if the input age is qualified to vote or not the qualifying age is 18 old and above # + colab={"base_uri": "https://localhost:8080/"} id="lHWLw-lOU1dZ" outputId="ba1531e3-0ae2-4d91-96b4-b941ce076e06" x=9 print("You are qualified to vote")if x>=18 else print("You are unqualified to vote") x=18 print("You are qualified to vote")if x>=18 else print("uYou are not qualified to vote") x=20 print("You are qualified to vote")if x>=18 else print("You are not qualified to vote") # + [markdown] id="RB8E4TdOVAjy" # Example 2: # Write a program that determines if the input number is Positive or Negative. Consider 0 as positive(Considering that it contains no negative sign). # + colab={"base_uri": "https://localhost:8080/"} id="vrjBjEdCUwuZ" outputId="9a6ba0cc-d850-423b-a7f5-5c936ea1a565" number= int(input("Enter a number ")) if number >0: print("You entered a positive number") elif number ==0: print("You entered zero") else: print("You entered a negative number") # + [markdown] id="GBhx_QVTVJei" # Example 3: Write a program to determine if the grades are: if: grade>=75,"Passed" grade= 74,"Remedial" grade<74, "failed" # + colab={"base_uri": "https://localhost:8080/"} id="KQ6_5Ao7VSsz" outputId="31441035-fa9d-470d-fe6f-4caa6fb37d0d" number= int(input("Grade: ")) if number >=75: print("Passed") elif number ==74: print("Remedial") else: print("Failed") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 17} id="ZnmduOKwY_NK" outputId="e89be30f-5627-4537-b1f5-dbf8da298e93" import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. 
pd.read_csv) import matplotlib.pyplot as plt import seaborn as sns import plotly as py import plotly.graph_objs as go from sklearn.cluster import KMeans import warnings import os warnings.filterwarnings("ignore") py.offline.init_notebook_mode(connected = True) #print(os.listdir("../input")) # + colab={"base_uri": "https://localhost:8080/", "height": 202} id="95se5-S5b6Ct" outputId="93548484-b612-4d5a-a8e2-ad11a9d3551c" df = pd.read_csv(r'/content/drive/MyDrive/archive (1)/Mall_Customers.csv') df.head() # + colab={"base_uri": "https://localhost:8080/"} id="J3obBrWge_aL" outputId="8af04985-b900-4f35-d12b-4893ae8138de" df.shape # + colab={"base_uri": "https://localhost:8080/", "height": 294} id="p7A_rYm3fY2O" outputId="d0f0e29b-c573-4dcb-e709-d880dfb53f45" df.describe() # + colab={"base_uri": "https://localhost:8080/"} id="xd5QrA2qfbWi" outputId="cd6eb46e-14d5-4af0-e8a9-67568d56c3c9" df.dtypes # + colab={"base_uri": "https://localhost:8080/"} id="y3cut2dmffFe" outputId="3a97107b-2de7-4c07-8dab-69e40508bcf7" df.isnull().sum() # + id="kehYs0rRfnxY" plt.style.use('fivethirtyeight') # + colab={"base_uri": "https://localhost:8080/", "height": 401} id="DMiJ7SPVfsSV" outputId="68bc04cd-f067-49ac-8b6b-d0a3bbee5075" plt.figure(1 , figsize = (15 , 6)) n = 0 for x in ['Age' , 'Annual Income (k$)' , 'Spending Score (1-100)']: n += 1 plt.subplot(1 , 3 , n) plt.subplots_adjust(hspace =0.5 , wspace = 0.5) sns.distplot(df[x] , bins = 20) plt.title('Distplot of {}'.format(x)) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 343} id="uyvpfVXYfvx6" outputId="3431911a-57c5-4b8e-c489-89551c35c7ed" plt.figure(1 , figsize = (15 , 5)) sns.countplot(y = 'Gender' , data = df) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 482} id="zlRNOS2Af2aQ" outputId="bb35d923-5556-4e20-c527-01ed9295cc4d" plt.figure(1 , figsize = (15 , 7)) n = 0 for x in ['Age' , 'Annual Income (k$)' , 'Spending Score (1-100)']: for y in ['Age' , 'Annual Income (k$)' , 'Spending Score (1-100)']: n += 1 plt.subplot(3 , 3 , n) plt.subplots_adjust(hspace = 0.5 , wspace = 0.5) sns.regplot(x = x , y = y , data = df) plt.ylabel(y.split()[0]+' '+y.split()[1] if len(y.split()) > 1 else y ) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 428} id="EmNBhBJwf6SO" outputId="357d1bc7-dceb-4a28-ee8a-695a4881e72f" plt.figure(1 , figsize = (15 , 6)) for gender in ['Male' , 'Female']: plt.scatter(x = 'Age' , y = 'Annual Income (k$)' , data = df[df['Gender'] == gender] , s = 200 , alpha = 0.5 , label = gender) plt.xlabel('Age'), plt.ylabel('Annual Income (k$)') plt.title('Age vs Annual Income w.r.t Gender') plt.legend() plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 428} id="I_Ols-2lf9wL" outputId="9a4e6c3a-5093-49ae-d0f6-528ec4bcbc65" plt.figure(1 , figsize = (15 , 6)) for gender in ['Male' , 'Female']: plt.scatter(x = 'Annual Income (k$)',y = 'Spending Score (1-100)' , data = df[df['Gender'] == gender] ,s = 200 , alpha = 0.5 , label = gender) plt.xlabel('Annual Income (k$)'), plt.ylabel('Spending Score (1-100)') plt.title('Annual Income vs Spending Score w.r.t Gender') plt.legend() plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 474} id="L3GwnAOQgBpd" outputId="c0a93cfe-93e5-4bba-dd9c-987868a4a97c" plt.figure(1 , figsize = (15 , 7)) n = 0 for cols in ['Age' , 'Annual Income (k$)' , 'Spending Score (1-100)']: n += 1 plt.subplot(1 , 3 , n) plt.subplots_adjust(hspace = 0.5 , wspace = 0.5) sns.violinplot(x = cols , y = 'Gender' , data = df , palette = 
'vlag') sns.swarmplot(x = cols , y = 'Gender' , data = df) plt.ylabel('Gender' if n == 1 else '') plt.title('Boxplots & Swarmplots' if n == 2 else '') plt.show() # + id="labr_KrdgFKZ" '''Age and spending Score''' X1 = df[['Age' , 'Spending Score (1-100)']].iloc[: , :].values inertia = [] for n in range(1 , 11): algorithm = (KMeans(n_clusters = n ,init='k-means++', n_init = 10 ,max_iter=300, tol=0.0001, random_state= 111 , algorithm='elkan') ) algorithm.fit(X1) inertia.append(algorithm.inertia_) # + colab={"base_uri": "https://localhost:8080/", "height": 398} id="Ayyqq38KgIZZ" outputId="d3c2f8a6-9bb8-4e03-b49d-541563059a95" plt.figure(1 , figsize = (15 ,6)) plt.plot(np.arange(1 , 11) , inertia , 'o') plt.plot(np.arange(1 , 11) , inertia , '-' , alpha = 0.5) plt.xlabel('Number of Clusters') , plt.ylabel('Inertia') plt.show() # + id="AAEipTv6gL5M" algorithm = (KMeans(n_clusters = 4 ,init='k-means++', n_init = 10 ,max_iter=300, tol=0.0001, random_state= 111 , algorithm='elkan') ) algorithm.fit(X1) labels1 = algorithm.labels_ centroids1 = algorithm.cluster_centers_ # + id="yLx-FuFugOjT" h = 0.02 x_min, x_max = X1[:, 0].min() - 1, X1[:, 0].max() + 1 y_min, y_max = X1[:, 1].min() - 1, X1[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = algorithm.predict(np.c_[xx.ravel(), yy.ravel()]) # + colab={"base_uri": "https://localhost:8080/", "height": 468} id="Il9xgum6gUXE" outputId="ff850e44-22bd-4442-ae9b-b88c01d0522d" plt.figure(1 , figsize = (15 , 7) ) plt.clf() Z = Z.reshape(xx.shape) plt.imshow(Z , interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), cmap = plt.cm.Pastel2, aspect = 'auto', origin='lower') plt.scatter( x = 'Age' ,y = 'Spending Score (1-100)' , data = df , c = labels1 , s = 200 ) plt.scatter(x = centroids1[: , 0] , y = centroids1[: , 1] , s = 300 , c = 'red' , alpha = 0.5) plt.ylabel('Spending Score (1-100)') , plt.xlabel('Age') plt.show() # + id="D0qVJuLfgXqh" '''Annual Income and spending Score''' X2 = df[['Annual Income (k$)' , 'Spending Score (1-100)']].iloc[: , :].values inertia = [] for n in range(1 , 11): algorithm = (KMeans(n_clusters = n ,init='k-means++', n_init = 10 ,max_iter=300, tol=0.0001, random_state= 111 , algorithm='elkan') ) algorithm.fit(X2) inertia.append(algorithm.inertia_) # + colab={"base_uri": "https://localhost:8080/", "height": 420} id="AWTABbPCga0Q" outputId="79834d93-fecf-4845-8bae-08870586677b" plt.figure(1 , figsize = (15 ,6)) plt.plot(np.arange(1 , 11) , inertia , 'o') plt.plot(np.arange(1 , 11) , inertia , '-' , alpha = 0.5) plt.xlabel('Number of Clusters') , plt.ylabel('Inertia') plt.show() # + id="xQXNQdUcgdyo" algorithm = (KMeans(n_clusters = 5 ,init='k-means++', n_init = 10 ,max_iter=300, tol=0.0001, random_state= 111 , algorithm='elkan') ) algorithm.fit(X2) labels2 = algorithm.labels_ centroids2 = algorithm.cluster_centers_ # + id="TrCNFGKTghRC" h = 0.02 x_min, x_max = X2[:, 0].min() - 1, X2[:, 0].max() + 1 y_min, y_max = X2[:, 1].min() - 1, X2[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z2 = algorithm.predict(np.c_[xx.ravel(), yy.ravel()]) # + colab={"base_uri": "https://localhost:8080/", "height": 479} id="7CnS2WwYgmDX" outputId="45ec32f5-e4e9-4926-f55a-9ace9111e123" plt.figure(1 , figsize = (15 , 7) ) plt.clf() Z2 = Z2.reshape(xx.shape) plt.imshow(Z2 , interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), cmap = plt.cm.Pastel2, aspect = 'auto', origin='lower') plt.scatter( x = 'Annual Income (k$)' ,y = 
'Spending Score (1-100)' , data = df , c = labels2 , s = 200 ) plt.scatter(x = centroids2[: , 0] , y = centroids2[: , 1] , s = 300 , c = 'red' , alpha = 0.5) plt.ylabel('Spending Score (1-100)') , plt.xlabel('Annual Income (k$)') plt.show() # + id="mb_Ubs6vgpjd" X3 = df[['Age' , 'Annual Income (k$)' ,'Spending Score (1-100)']].iloc[: , :].values inertia = [] for n in range(1 , 11): algorithm = (KMeans(n_clusters = n ,init='k-means++', n_init = 10 ,max_iter=300, tol=0.0001, random_state= 111 , algorithm='elkan') ) algorithm.fit(X3) inertia.append(algorithm.inertia_) # + colab={"base_uri": "https://localhost:8080/", "height": 420} id="3bdGeP8RgtSu" outputId="b3e0a03b-66ac-44fc-92c1-9485fb03419b" plt.figure(1 , figsize = (15 ,6)) plt.plot(np.arange(1 , 11) , inertia , 'o') plt.plot(np.arange(1 , 11) , inertia , '-' , alpha = 0.5) plt.xlabel('Number of Clusters') , plt.ylabel('Inertia') plt.show() # + id="7OEtymy4gxRy" algorithm = (KMeans(n_clusters = 6 ,init='k-means++', n_init = 10 ,max_iter=300, tol=0.0001, random_state= 111 , algorithm='elkan') ) algorithm.fit(X3) labels3 = algorithm.labels_ centroids3 = algorithm.cluster_centers_ # + colab={"base_uri": "https://localhost:8080/", "height": 542} id="wDpFVU4Zg1AZ" outputId="33ca055f-0491-41b4-85c5-44b5103a5b4a" df['label3'] = labels3 trace1 = go.Scatter3d( x= df['Age'], y= df['Spending Score (1-100)'], z= df['Annual Income (k$)'], mode='markers', marker=dict( color = df['label3'], size= 20, line=dict( color= df['label3'], width= 12 ), opacity=0.8 ) ) data = [trace1] layout = go.Layout( # margin=dict( # l=0, # r=0, # b=0, # t=0 # ) title= 'Clusters', scene = dict( xaxis = dict(title = 'Age'), yaxis = dict(title = 'Spending Score'), zaxis = dict(title = 'Annual Income') ) ) fig = go.Figure(data=data, layout=layout) py.offline.iplot(fig) # + id="WroSNaLug9Wn" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import camelot import pandas as pd tables = camelot.read_pdf('https://www.illinoistollway.com/documents/20184/86147/2019+TollRateIncrease-tablesonly_1118.pdf/052f2c4a-362e-49b7-a111-acd8bb1d147c', pages='all') tables tables[12].df # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # + import numpy as np import pandas as pd import warnings import matplotlib.pyplot as plt from pandas.tools.plotting import scatter_matrix from sklearn.cluster import KMeans from sklearn.decomposition import PCA from sklearn.linear_model import LinearRegression, Lasso, ElasticNet from sklearn.model_selection import cross_val_score from sklearn.ensemble import RandomForestRegressor warnings.filterwarnings("ignore") # %matplotlib inline plt.style.use("ggplot") plt.rcParams["figure.figsize"] = (12, 8) # - fl1 = "data_files/train.csv" fl2 = "data_files/test.csv" train_df = pd.read_csv(fl1, header=0) test_df = pd.read_csv(fl2, header=0) print(train_df.shape) print(test_df.shape) tr_nulls = train_df.isnull().sum().to_frame().transpose() te_nulls = test_df.isnull().sum().to_frame().transpose() # + # for c in te_nulls.columns: # print("{}: {}".format(c, tr_nulls[c][0])) # - train_df.corr() # + # scatter_matrix(train_df, alpha=0.2, diagonal='kde') # - print(train_df['loss'].min()) print(train_df['loss'].max()) 
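# The raw `loss` target has a long right tail, which is why the notebook switches to
# log(loss) below. A quick numeric check of that skew (a sketch, assuming scipy is
# available alongside the imports above):

# +
from scipy.stats import skew

print("skewness of loss:     ", skew(train_df['loss']))
print("skewness of log(loss):", skew(np.log(train_df['loss'])))
# -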
train_df['loss'].plot.hist(bins=100, color="darkred") np.log(train_df['loss']).plot.hist(bins=100, color="darkred") train_df2 = train_df.copy() train_df2['log_loss'] = train_df['loss'].map(lambda x: np.log(x)) train_df2.corr() cont_vars = ["cont{}".format(i) for i in range(1, 15)] X_cl = np.asarray(train_df2[cont_vars]) km = KMeans(n_clusters=7) cont_labels = km.fit_predict(X_cl) pca = PCA() pca.fit(X_cl) plt.figure(1) plt.clf() plt.axes([.2, .2, .7, .7]) plt.plot(pca.explained_variance_, linewidth=2) plt.axis('tight') plt.xlabel('n_components') plt.ylabel('explained_variance_') cat_vars = ["cat{}".format(i) for i in range(1, 117)] train_df[cat_vars].head() # + # for c in cat_vars: # print("{}: {}".format(c, len(train_df[c].unique()))) # - train_df.shape # + num_cols_w_dums = 131 for c in cat_vars: num_cols_w_dums += len(train_df[c].unique()) - 1 # - num_cols_w_dums train_df3 = train_df.copy() train_df3 = pd.get_dummies(train_df3, columns=cat_vars, drop_first=True) train_df3.shape train_df3.head() pca2 = PCA() pca2.fit(np.asarray(train_df3.drop(['id', 'loss'], axis=1))) plt.figure(1) plt.clf() plt.axes([.2, .2, .7, .7]) plt.plot(pca2.explained_variance_, linewidth=2) plt.axis('tight') plt.xlabel('n_components') plt.ylabel('explained_variance_') pca_200_comps = PCA(n_components=200) X_pca_200 = pca_200_comps.fit_transform(np.asarray(train_df3.drop(['id', 'loss'], axis=1))) X_pca_200.shape y_pca_200 = np.asarray(train_df3['loss']).reshape(-1, 1) lm1 = LinearRegression() lm1.fit(X_pca_200, y_pca_200) lm1.score(X_pca_200, y_pca_200) ls = Lasso() cross_val_score(ls, np.asarray(train_df3.drop(['id', 'loss'], axis=1)), np.asarray(train_df3['loss']).reshape(-1, 1), scoring="neg_mean_absolute_error") rf = RandomForestRegressor(n_estimators=20, max_depth=10) # + # cross_val_score(rf, np.asarray(train_df3.drop(['id', 'loss'], axis=1)), np.asarray(train_df3['loss']).reshape(-1, 1), # scoring="neg_mean_absolute_error") # - l1_mixes = np.arange(0.1, 1.0, 0.1) # + mx_dict = dict() for l in l1_mixes: en = ElasticNet(alpha=0.1, l1_ratio=l) scs = cross_val_score(en, np.asarray(train_df3.drop(['id', 'loss'], axis=1)), np.asarray(train_df3['loss']).reshape(-1, 1), scoring="neg_mean_absolute_error") mx_dict[str(l)] = np.absolute(np.mean(scs)) # - mx_dict # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import regex as re import numpy as np from collections import Counter import matplotlib.pyplot as plt import emoji from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator # + def is_date_time(s): pattern = '^([0-9]{2})(\/)([0-9]{2})(\/)([0-9]{2,4}), ([0-9]+):([0-9]+)[ ]?(AM|PM|am|pm)? 
- ' result = re.match(pattern,s) if result: return True else: return False def is_author(s): result = s.split(": ") if len(result)==2: return True return False def getDataFromLine(line): split_line = line.split(" - ") date, time = split_line[0].split(", ") message = " ".join(split_line[1:]) if is_author(message): author, message = message.split(": ", maxsplit=1) else: author =None return date, time, author, message whatsapp_file = "WhatsApp-Chat-with-John.txt" data = [] with open(whatsapp_file, encoding='utf-8') as fd: date, time, author = None,None, None msg_buffer = [] while True: line = fd.readline().strip() if is_date_time(line): if(len(msg_buffer)>0): data.append([date, time, author, " ".join(msg_buffer)]) msg_buffer.clear() date, time, author, message = getDataFromLine(line) msg_buffer.append(message) else: msg_buffer.append(line) df = pd.DataFrame(data, columns=["date","time","author","message"]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Plotting massive data sets # # **This notebook is experimental in pydeck's beta version. It may not work on all devices.** # # This notebook plots 1.6 million points of LIDAR points around the Carnegie Mellon University campus. ([Source](https://github.com/ajduberstein/oakland_point_cloud)) The data points are labeled. With pydeck, we can render these points and interact with them. We'll downsample this data to 800,000 points–depending on your device, you may be able to run the full data set. # ### Cleaning the data # # First we need to import the data. We should expect about 1.6M points. # + import pandas as pd URL = 'https://raw.githubusercontent.com/ajduberstein/oakland_point_cloud/master/%s' DATA_URL_1 = URL % 'lidar_chunks_1.csv' DATA_URL_2 = URL % 'lidar_chunks_2.csv' LOOKUP_URL = URL % 'ground_truth_label.csv' lidar = pd.concat([pd.read_csv(DATA_URL_1), pd.read_csv(DATA_URL_2)]) lookup = pd.read_csv(LOOKUP_URL) lidar = lidar.merge(lookup) # - print('Number of points:', lidar.count()[0]) # It does not appear to be in a standard coordinate format, so we'll scale it to make it easy to plot on a map. We'll also color objects by label type. The `data_utils.assign_random_colors` assigns a random RGB value to a vector of data labels. # + from pydeck import data_utils color_lookup = data_utils.assign_random_colors(lidar['label_name']) lidar['rgb'] = lidar.apply(lambda row: color_lookup.get(row['label_name']), axis=1) # Scaling the points using min-max scaling lidar[['x', 'y', 'z']] -= lidar[['x', 'y', 'z']].max() lidar[['x', 'y', 'z']] /= lidar[['x', 'y', 'z']].min() lidar[['x', 'y']] /= 1000 lidar[['z']] *= 10 lidar.head() # - # ### Plotting the data # # We'll define a single `PointCloudLayer` and plot it. # # pydeck by default expects the input of `get_position` to be a string name indicating a single position value. For convenience, you can pass in a string indicating the X/Y/Z coordinate, here `get_position='[x, y, z]'`. # # We'll zoom to the approximate center of the data by taking a mean of a few hundred points in pandas. # # This example may take 10-15 seconds to render. 
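# If rendering is still too heavy for your device, the same layer definition below works
# with a smaller sample, e.g. (a sketch reusing the prepared `lidar` frame)
# `lidar[['x', 'y', 'z', 'label_name', 'rgb']].sample(200000)` in place of the
# 800,000-point sample.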
# + import pydeck as pdk point_cloud = pdk.Layer( 'PointCloudLayer', lidar[['x', 'y', 'z', 'label_name', 'rgb']].sample(800000), # You can specify the XYZ coordinate in a list as a string get_position='[x, y, -1*z]', get_normal=[0, 0, 1], get_color='rgb', pickable=True, auto_highlight=True, radius_pixels=3) view_state = pdk.data_utils.compute_view(lidar[['x', 'y']]) view_state.max_zoom = 23 view_state.zoom = 22 view_state.pitch = 60 r = pdk.Deck( point_cloud, initial_view_state=view_state, map_style='', width=700) r.show() # + language="javascript" # // Use Javascript to change the cell background to black # document.getElementById("deckgl-overlay").style.backgroundColor = "black" # - # #### Citations: # # Contextual Classification with Functional Max-Margin Markov Networks. # , (Drew) Bagnell, , and . # IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), June, 2009. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + id="Ng4P3Zuri_Z8" # %tensorflow_version 1.x # + id="hcDlSfu4jc2t" # !wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1ovYLYG7e15QqOafzvy_nhxUeKM9uxyXd' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1ovYLYG7e15QqOafzvy_nhxUeKM9uxyXd" -O Clarkson_Finger_Generator.pkl && rm -rf /tmp/cookies.txt # + id="J2qQo3k1jExA" # !git clone https://github.com/keivanB/Clarkson_Finger_Gen # + id="O5tQe-koh8cI" import os import sys new_path = '/content/Clarkson_Finger_Gen/stylegan_K1/' sys.path.append(new_path) import pickle import numpy as np import PIL.Image import dnnlib import dnnlib.tflib as tflib import config import io import matplotlib.pyplot as plt # + id="jvq0faZhh8cx" '''Loading Pre trained network''' tflib.init_tf() f = open('/content/Clarkson_Finger_Generator.pkl', 'rb') _G, _D, Gs = pickle.load(f) # + id="QO3JNdLAh8c5" # Pick latent vector. rnd = np.random.RandomState(100) latents = rnd.randn(32, Gs.input_shape[1]) # Generate image. fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) images = Gs.run(latents, None, truncation_psi=1, randomize_noise=True, output_transform=fmt) images = np.transpose(images,(0,3, 1, 2)) # + id="qLSiHyRnh8dV" fig, ax = plt.subplots(8,4, figsize=(30,30)) for i in range(32): r = int(i%4) c = int(i/4) ax[c,r].imshow(np.transpose(np.squeeze(images[i,:,:,:]), (1,2,0))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Author: # Summer Research with Professor Heather for Fellowship 2021 # Some functions to generate polynomials for quantum graphs # - # # Imports # # + import numpy as np from IPython.display import Latex import HomPolys # - # ## Example 1: A K3 graph # A K3 graph is a complete graph on three nodes with three edges. # First, we compute the edge list for a K3 graph. We expect the output to have three edges. K3_edges = HomPolys.create_graph_edges("K3") K3_edges # We now compute the polynomial representation of the homomorphism number of a K3 graph using a $q$ value of 4, 5, and 6. 
# + # q = 4 poly_4 = HomPolys.Polynomial(4) # q = 5 poly_5 = HomPolys.Polynomial(5) # q = 6 poly_6 = HomPolys.Polynomial(6) # - # For each initialized polynomial, we use the `add_monomial_from_edge_list` function to add our `K3_edges` from earlier. poly_4.add_monomial_from_edge_list(*K3_edges) poly_5.add_monomial_from_edge_list(*K3_edges) poly_6.add_monomial_from_edge_list(*K3_edges) # Note that using `add_monomial_from_graph_string` circumvents the entire process, as so: # + # q = 4 poly_4_quick = HomPolys.Polynomial(4) poly_4_quick.add_monomial_from_graph_string("K3") # q = 5 poly_5_quick = HomPolys.Polynomial(5) poly_5_quick.add_monomial_from_graph_string("K3") # q = 6 poly_6_quick = HomPolys.Polynomial(6) poly_6_quick.add_monomial_from_graph_string("K3") # - # We can now view the output of the Polynomials using the `tex` function. Note that `poly_n_quick` is the same as `poly_n` for all values of $n$. poly_4.tex() poly_4_quick.tex() poly_5.tex() poly_5_quick.tex() poly_6.tex() poly_6_quick.tex() poly_4 == poly_4_quick # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from bs4 import BeautifulSoup from splinter import Browser import pandas as pd import datetime as dt import requests import json # + urls_lost = [] urls_found = [] urls_reunited = [] city = "oakland" state = "ca" page_no = 1 """ city = input("Please type a city: ") state = input("Please type the 2-letter state abbreviation: ") """ # status codes: 100 = lost; 101 = found; 102 = reunited for status in range(100,103): url = 'https://www.pawboost.com/lost-found-pets/' + \ city + '-' + state + '-/all-lost-found-stray-reunited-cats/page-' + str(page_no) + \ '?LfdbFeedStatusForm%5Bstatus%5D%5B%5D=' + str(status) + '&LfdbFeedStatusForm%5Baddress%5D=' + \ city + '%2C%20' + state + \ '&LfdbFeedStatusForm%5Bradius%5D=15&LfdbFeedStatusForm%5Bspecies%5D=Cat' + \ '&LfdbFeedStatusForm%5BsortAttribute%5D=distance&LfdbFeedStatusForm%5BdateRange%5D=90' response = requests.get(url) html = response.content soup = BeautifulSoup(html, "html.parser") soup.prettify() if status == 100: print("Lost cats:") elif status == 101: print("Found cats:") elif status == 102: print("Reunited w/ family:") else: print("error") print(url) print("URLs found:") for anchor in soup.findAll('a', attrs={'class': 'btn btn-primary width-full'}, href=True): print(anchor['href']) if status == 100: urls_lost.append(anchor['href']) elif status == 101: urls_found.append(anchor['href']) elif status == 102: urls_reunited.append(anchor['href']) # - for status in range(100,103): if status == 100: length = len(urls_lost) active_list = urls_lost elif status == 101: length = len(urls_found) active_list = urls_found elif status == 102: length = len(urls_reunited) active_list = urls_reunited pet_url = [] pet_id = [] pet_name = [] pet_gender = [] pet_status = [] pet_description = [] pet_address = [] pet_address_clean = [] # first url is irrelevant due to featured pet for i in range(1,length): new_url = active_list[i] pet_url.append(new_url) response = requests.get(new_url) html = response.content soup = BeautifulSoup(html, "html.parser") soup.prettify() # print(response) item_no = [] h2_tags = [] p_tags = [] for i in range(0, 12): paragraphs = soup.find_all("h2")[i].text item_no.append(i) h2_tags.append(paragraphs) for i in range(0, 20): paragraphs = soup.find_all("p")[i].text item_no.append(i) 
p_tags.append(paragraphs) # for viewing which text fields are relevant results_h2 = pd.DataFrame({"text": h2_tags}) results_p = pd.DataFrame({"text": p_tags}) # pet ID pet_id.append(results_h2['text'][3]) # pet name if status == 100: pet_name.append(results_h2['text'][4]) elif status == 101: pet_name.append("Unknown") elif status == 102: pet_name.append(results_h2['text'][4]) # gender if status == 100: pet_gender.append(results_h2['text'][5]) elif status == 101: pet_gender.append(results_h2['text'][4]) elif status == 102: pet_gender.append(results_h2['text'][5]) # status if status == 100: pet_status.append(results_p['text'][4]) elif status == 101: pet_status.append(results_p['text'][3]) elif status == 102: pet_status.append(results_p['text'][4]) # description if status == 100: pet_description.append(results_p['text'][10]) elif status == 101: pet_description.append(results_p['text'][9]) elif status == 102: pet_description.append(results_p['text'][8]) # address, if available if status == 100: address = results_h2['text'][8] + " " + results_h2['text'][7] elif status == 101: address = results_h2['text'][7] + " " + results_h2['text'][6] elif status == 102: address = results_h2['text'][7] pet_address.append(address) address_clean = address.replace(' ', '+').replace('&', 'and') pet_address_clean.append(address_clean) # print(address_clean) if status == 100: details_lost = pd.DataFrame({"url": pet_url, "ref_no": pet_id, "pet_name": pet_name, "gender": pet_gender, "status": pet_status, "description": pet_description, "address": pet_address, "address_clean": pet_address_clean}) elif status == 101: details_found = pd.DataFrame({"url": pet_url, "ref_no": pet_id, "pet_name": pet_name, "gender": pet_gender, "status": pet_status, "description": pet_description, "address": pet_address, "address_clean": pet_address_clean}) elif status == 102: details_reunited = pd.DataFrame({"url": pet_url, "ref_no": pet_id, "pet_name": pet_name, "gender": pet_gender, "status": pet_status, "description": pet_description, "address": pet_address, "address_clean": pet_address_clean}) details_lost details_found details_reunited # + # need to do api call because coordinates could not be scraped from page for status in range(100,103): if status == 100: length = len(urls_lost) elif status == 101: length = len(urls_found) elif status == 102: length = len(urls_reunited) latitude = [] longitude = [] for i in range(0,length-1): if status == 100: query_address = details_lost['address_clean'][i] elif status == 101: query_address = details_found['address_clean'][i] elif status == 102: query_address = details_reunited['address_clean'][i] parameters = query_address api_key = #REDACTED query_url = "https://maps.googleapis.com/maps/api/geocode/json?address=" + \ parameters + "&key=" + api_key print(f"finding coordinates for: {query_address}") #print(query_url) response = requests.get(query_url) gmap_json = response.json() try: print(gmap_json['results'][0]['geometry']['location']['lat']) latitude.append(float(gmap_json['results'][0]['geometry']['location']['lat'])) except (KeyError, IndexError): print("could not find latitude") latitude.append(None) try: print(gmap_json['results'][0]['geometry']['location']['lng']) longitude.append(float(gmap_json['results'][0]['geometry']['location']['lng'])) except (KeyError, IndexError): print("could not find longitude") longitude.append(None) print("--------------------------") if status == 100: details_lost['latitude'] = latitude details_lost['longitude'] = longitude elif status == 101: 
details_found['latitude'] = latitude details_found['longitude'] = longitude elif status == 102: details_reunited['latitude'] = latitude details_reunited['longitude'] = longitude print("Coordinate lookup completed") # - details_lost details_found details_reunited details_lost.to_csv("details_lost.csv", index=True, header=True) details_found.to_csv("details_found.csv", index=True, header=True) details_reunited.to_csv("details_reunited.csv", index=True, header=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_tensorflow_p36 # language: python # name: conda_tensorflow_p36 # --- # # The Donkey Convolutional Neural Network # # The goal of this chapter is to get familiar with the neural network(s) used in the [Donkey](https://github.com/wroscoe/donkey) library. # ## The Donkey library # # The [Donkey library](https://github.com/wroscoe/donkey) has several components. # # It is first and foremost a python library installed where your other python libraries are (e.g. system python or virtualenv). After installation, you can `import` it as any normal python library: # # ```python # import donkeycar as dk # ``` # # It also has a CLI with tools mainly used to aid training (see [donkey-tools.ipynb](./donkey-tools.ipynb)): # # ```bash # donkey --help # ``` # # A `Vehicle` application, installed to the `~/d2` directory by default. This is where you'll find the `manage.py` script, which is used for both **driving** and **training**. # # ```bash # ~/d2/manage.py --help # ``` # ### Install the Donkey library # # If you already installed the [Donkey](https://github.com/wroscoe/donkey) library, you can [skip](#Train) this step. # # Otherwise, go ahead: # + # Make sure we're in SageMaker root # %cd ~/SageMaker # Remove any old versions of the library # !rm -rf ~/SageMaker/donkey # Clone the Donkey library git # !git clone https://github.com/wroscoe/donkey.git # + # Update Donkey dependencies # Keras is pinned to version 2.0.8 in the Donkey requirements. Change this to allow a newer version # !sed -i -e 's/keras==2.0.8/keras>=2.1.2/g' ~/SageMaker/donkey/setup.py # !sed -i -e 's/tensorflow>=1.1/tensorflow-gpu>=1.4/g' ~/SageMaker/donkey/setup.py # Install # !pip uninstall --yes donkeycar # !pip install ~/SageMaker/donkey # - # ## Inspecting the Keras network # # First, take a few minutes to look through the `keras.py` file (it's a [Donkey](https://github.com/wroscoe/donkey) library *part*): # Assuming donkey library is installed in ./donkey # %pycat ~/SageMaker/donkey/donkeycar/parts/keras.py # The default algorithm used is defined in `donkey/donkeycar/templates/donkey2.py`: # # ```python # def drive(cfg, model_path=None, use_joystick=False): # ... # kl = KerasCategorical() # if model_path: # kl.load(model_path) # ... # ``` # # `KerasCategorical`, which is created in `default_categorical()` method. 
Before looking closer at the code, let's see if we can visualize it using our pre-trained model (see [Donkey library training](./donkey-train.ipynb) chapter): # Download the model (if it's not already there) # Replace bucket if you used some other bucket than SageMaker default import sagemaker bucket = sagemaker.Session().default_bucket() # !aws s3 cp s3://{bucket}/models/my-first-model ~/SageMaker/models/my-first-model # Install some additional python libraries required by the Keras visualization lib # !conda install --yes pydot graphviz # + # Plot the algorithm from IPython.display import SVG from keras.utils.vis_utils import model_to_dot from keras.models import load_model model = load_model('./my-first-model') SVG(model_to_dot(model).create(prog='dot', format='svg')) # - # Ah! A nice neural network with # - 1 input layer # - 5 convolutional layers # - 1 flatten layer # - 2 dense layers # - 2 dropout layers # # The presence of a convolutional layer makes this neural network a [convolutional network](https://en.wikipedia.org/wiki/Convolutional_neural_network). This makes sense, since CNNs are particulary good with images. # # So, what do the different layers actually do? Let's have a look at them, one at a time. # #### Quick readup # # This excellent cheatsheet is handy to read through and use as a reference before continuing: # - http://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html # # And two links about CNNs (read, or be confused later): # - https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/ # - https://cambridgespark.com/content/tutorials/convolutional-neural-networks-with-keras/index.html # #### Input layer # # The *Input layer* holds the input data. # + from keras.layers import Input # Input layer # # 120 x 160 x 3 image size (RGB) # img_in = Input(shape=(120, 160, 3), name='img_in') # Rename it to x x = img_in # - # We define the shape of the data as a 3 channel (RGB) 120px x 160px image. # # If you read through the [links](#quick-readup), you'll have a rought idea of what a channel is. In CNNs, layers operate on 3D chunks of data. The first two dimensions are the height and width of the image, and the third dimension is a number of such 2D data stacked over each other (3 in RGB images). This stack is called channels. # # Why is this important? In the input layer, a channel is easily understood as RGB data (one channel per color). However, a [convolutional layer](#first-convolutional-layer) can have many more channels, which we'll get to later. # # Formally, x can be viewed as a $H \times W \times C$ vector, where `H,W` are the dimensions of the image and `C` is the number of channels of the given image volume. # # API documentation: # - https://keras.io/layers/core/#input # #### First convolutional layer # # Before continuing, make sure you've read through and have a rough idea of the following concepts: # - NN concepts - http://ml-cheatsheet.readthedocs.io/en/latest/nn_concepts.html # - Relu - http://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html#relu # - Convolution operator - https://cambridgespark.com/content/tutorials/convolutional-neural-networks-with-keras/index.html#convolutions # + from keras.layers import Convolution2D # First hidden layer # # 24 features - Results in a 24 channel feature map(s). This means the first layer can detect 24 different low level features in the image. # 5x5 pixel kernel - Width and height of the kernel. 
Will automatically have the same channel depth as the input (5x5x3) # 2w x 2h stride - Strides of the convolution along the width and height # relu activation - Use the 'relu' activation function # x = Convolution2D(24, (5,5), strides=(2,2), activation='relu')(x) # - # The first hidden layer is a 2D convolution layer with the described hyperparameters (see above) and a `relu` activation function. # # > What is the reasoning behind this design? Why are the hyperparameters given these values? # # This is a tricky question to answer, but important for later tweaking of the network. The sad news are that they often required lots of experience and theoretical background to master. But hey, let's give it a shot! # # They ar all a tradeoff between performance and accuracy, but there are some general rules to follow. For example, the number of *features* are usually lower in the first convolutional layers, because the input size is still large, resulting in large *feature maps*. Later layers have smaller (but deeper) input size (because of the convolution in the previous layers), and thus can afford to have more features. Similar reasoning can be done for the other hyperparameters. The following link has a more in depth discussion around CNN hyperparameters: # - http://deeplearning.net/tutorial/lenet.html#tips-and-tricks # # API documentation: # - https://keras.io/layers/convolutional/#conv2d # #### Second convolutional layer # Second hidden layer # # 32 features - Results in a 32 channel feature map(s) # x = Convolution2D(32, (5,5), strides=(2,2), activation='relu')(x) # Not much changed here. We increase the number of features as discussed earlier. This allows the network to pick up more features based on the feature maps from the first hidden layer. # #### Third convolutional layer # Third hidden layer # # 64 features - Results in a 64 channel feature map(s) # x = Convolution2D(64, (5,5), strides=(2,2), activation='relu')(x) # We increase the number of features yet again. The input to the next layer will now be 64 channels deep. # #### Fourth convolutional layer # Fourth hidden layer # # 3x3 pixel kernel - Width and height of the kernel. Will automatically have the same channel depth as the input (3x3x64) x = Convolution2D(64, (3,3), strides=(2,2), activation='relu')(x) # Here we decrease the kernel size to 3x3. By using a relatively large kernel size in the 3 previous layers, we have shrunk the image size by quite a bit (each convolution will shrink the image/feature map (if not padded), and add depth instead). By decreasing the kernel size, we make sure that image doesn't shrink too much. # #### Fifth convolutional layer # Fifth hidden layer # # 1w x 1h stride - x = Convolution2D(64, (3,3), strides=(1,1), activation='relu')(x) # In the last convolution layer, we change the stride length. This will result in larger feature maps (at the cost of performance). # #### Flatten layer # + from keras.layers import Flatten # Flatten layer # # Flatten to 1 dimension # x = Flatten(name='flattened')(x) # - # A flatten layer does what it claims to do. It flattens the convoluted input from 64 channels to 1, by putting the channels after each other. This is needed for the next layer, a fully-connected MLP (Dense). 
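# As a minimal illustration of the flattening step (not part of the Donkey code; the feature-map shape below is only an assumed example, since the real shape depends on the strides and padding above): flattening lays the H x W x C volume out end to end as one long vector, which is what the following Dense layer consumes.
# +
import numpy as np

# Assumed example shape for the last convolutional feature map (H x W x C).
example_feature_map = np.zeros((6, 13, 64))

# Flattening concatenates height, width and channels into a single vector.
flat = example_feature_map.reshape(-1)
print(flat.shape)  # (6 * 13 * 64,) == (4992,)
# -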
# # API documentation: # - https://keras.io/layers/core/#flatten # #### First dense layer # + from keras.layers import Dense # First Dense layer # # 100 units - Use 100 neurons in the layer # relu activation - Use 'relu' activation # x = Dense(100, activation='relu')(x) # - # A dense layer is another name for a fully-connected layer like in a [MLP](https://en.wikipedia.org/wiki/Multilayer_perceptron). It is very common for upper-layers in a CNNs to have fully-connected layers (actually, the purpose of conv layers is to extract important features from the image before downsampling enough to be handled by a MLP). # #### First dropout # + from keras.layers import Dropout # First dropout # # 0.1 - Randomly drop out 10% of the neurons to prevent overfitting. # x = Dropout(.1)(x) # - # As described in the [links](), dropout is used to prevent overfitting by forcing redundancy in the neural network (which means it cannot rely on a particular neuron because it might be dropped). # #### Second dense layer # Second Dense layer # # 50 units - Use 50 neurons in the layer # x = Dense(50, activation='relu')(x) # We gradually decrease the size of the layers from the original 100 to 50. # #### Second dropout # Second dropout # # Not much to say here... # x = Dropout(.1)(x) # #### Output layer # Outputs # # Angle # Dense - Fully-connected output layer # 15 units - Use a 15 neuron output. # softmax activation - Use 'softmax' activation # # Throttle # Dense - Fully-connected output layer # 1 unit - Use a 1 neuron output # relu activation - Use 'relu' activation angle_out = Dense(15, activation='softmax', name='angle_out')(x) throttle_out = Dense(1, activation='relu', name='throttle_out')(x) # The output layers are fully-connected with different units and activations. # # Angle uses a 15 neuron output with a softmax activation. [Softmax](https://en.wikipedia.org/wiki/Softmax_function) is a common way of creating a probability distribution over K different possible outcomes (in this case, `K=15`). # # Throttle uses a 1 neuron output with a relu activation. [Relu](https://en.wikipedia.org/wiki/Rectifier_(neural_networks) will result in the throttle having only positive values (it can only go forward). # #### Create the model # # Finally, it's time to create the model and define the loss functions for training. # + from keras.models import Model # Create model # # Optimizer # --------- # adam # # Angle # ----- # categorical cross entropy loss function # 0.9 loss weight # # Throttle # -------- # mean_absolute_error loss function # 0.001 loss weight # model = Model(inputs=[img_in], outputs=[angle_out, throttle_out]) model.compile( optimizer='adam', loss={'angle_out': 'categorical_crossentropy', 'throttle_out': 'mean_absolute_error'}, loss_weights={'angle_out': 0.9, 'throttle_out': .001}) # - # The model requires the following information: # - input (`img_in`) and output tensors (`angle_out`, `throttle_out`) # - optimizer ([`adam`](https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Adam)). The optimizer is the algorithm used when minimizing the loss function during training. Performs well and efficient. # - loss function(s), one for each output # - `angle_out` : [`categorial_crossentropy`](https://en.wikipedia.org/wiki/Cross_entropy) - Suitable for categorization. 
# - `throttle_out` : [`mean_absolute_error`](https://en.wikipedia.org/wiki/Mean_absolute_error) - More general loss function # - initial loss weights # ## Next # # [Donkey data](./donkey-data.ipynb) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Plot data from Rigol DS1054Z scope # ### Import the libraries import matplotlib.pyplot as plt import numpy as np import math import pandas as pd # ### Define plot mode. # Interactive mode is helpful for visuallizing the program execution # + # #%matplotlib widget # - # ### Define files to read # A helpful discussion on getting .csv files into Panda: # https://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe # + str_file = 'test010_000.csv' df1 = pd.read_csv(str_file, delimiter=',', usecols=[0,1], skiprows=1, dtype={'Sequence': np.int32, 'Volt': np.float64}) df1_meta = pd.read_csv(str_file, delimiter=',', usecols=[4], skiprows=0, nrows=1, dtype={'deltaT': np.float64}) df1_meta.columns = ['deltaT'] # + str_file = 'test010_005.csv' df2 = pd.read_csv(str_file, delimiter=',', usecols=[0,1], skiprows=1, dtype={'Sequence': np.int32, 'Volt': np.float64}) df2_meta = pd.read_csv(str_file, delimiter=',', usecols=[4], skiprows=0, nrows=1, dtype={'deltaT': np.float64}) df2_meta.columns = ['deltaT'] # - # #### Signal features # + # time series i_ns = len(df1.Volt) d_t = np.linspace(0,(i_ns-1), i_ns) d_t1 = df1_meta.deltaT[0] * d_t d_t2 = df2_meta.deltaT[0] * d_t # Peak to peak estimation d_pkpk1 = np.max(df1.Volt) - np.min(df1.Volt) d_pkpk2 = np.max(df2.Volt) - np.min(df2.Volt) # - # ### Plot the files # + fig, ax = plt.subplots(figsize=(8.0, 9.0), nrows = 2) fig.suptitle('Magnetic Pickup Response') ax[0].plot(d_t1, df1.Volt) ax[0].set_title('700 events per minute [0.010" gap] | Peak to Peak: ' + '%0.2f' % d_pkpk1 + ' volts' ) ax[0].set_xlabel('Time, seconds') ax[0].set_xlim([d_t1[0], d_t1[i_ns-1]]) ax[0].set_ylabel('Amplitude, volts') ax[0].set_ylim([-10., 10.]) ax[0].grid() ax[1].plot(d_t2, df2.Volt) ax[1].set_title('7500 events per minute [0.010" gap] | Peak to Peak: ' + '%0.1f' % d_pkpk2 + ' volts' ) ax[1].set_xlabel('Time, seconds') ax[1].set_xlim([d_t2[0], d_t2[i_ns-1]]) ax[1].set_ylabel('Amplitude, volts') ax[1].set_ylim([-10., 10.]) ax[1].grid() plt.tight_layout() figure = plt.gcf() figure.set_size_inches(8, 6) plt.savefig("0.38 Pickup gap 10 mils.pdf") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import argparse import logging import sys import time from tf_pose import common import cv2 import numpy as np from tf_pose.estimator import TfPoseEstimator from tf_pose.networks import get_graph_path, model_wh # - e = TfPoseEstimator(get_graph_path('cmu'), target_size=(432, 368)) image = common.read_imgfile('smallplayer.jpg', None, None) # + import matplotlib.pyplot as plt plt.imshow(image) # - humans = e.inference(image, resize_to_default=(w > 0 and h > 0), upsample_size=4.0) image = TfPoseEstimator.draw_humans(image, humans, imgcopy=False) humans # + import matplotlib.pyplot as plt fig = plt.figure(figsize=(14,14)) a = fig.add_subplot(2, 2, 1) a.set_title('Result') plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) bgimg = 
cv2.cvtColor(image.astype(np.uint8), cv2.COLOR_BGR2RGB) bgimg = cv2.resize(bgimg, (e.heatMat.shape[1], e.heatMat.shape[0]), interpolation=cv2.INTER_AREA) a = fig.add_subplot(2, 2, 2) plt.imshow(bgimg, alpha=0.5) tmp = np.amax(e.heatMat[:, :, :-1], axis=2) plt.imshow(tmp, cmap=plt.cm.gray, alpha=0.5) plt.colorbar() tmp2 = e.pafMat.transpose((2, 0, 1)) tmp2_odd = np.amax(np.absolute(tmp2[::2, :, :]), axis=0) tmp2_even = np.amax(np.absolute(tmp2[1::2, :, :]), axis=0) a = fig.add_subplot(2, 2, 3) a.set_title('Vectormap-x') plt.imshow(tmp2_odd, cmap=plt.cm.gray, alpha=0.5) plt.colorbar() a = fig.add_subplot(2, 2, 4) a.set_title('Vectormap-y') plt.imshow(tmp2_even, cmap=plt.cm.gray, alpha=0.5) plt.colorbar() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: abu_env # language: python # name: abu_env # --- import baostock as bs import pandas as pd #### 登陆系统 #### lg = bs.login() # 显示登陆返回信息 print('login respond error_code:'+lg.error_code) print('login respond error_msg:'+lg.error_msg) # + #### 获取沪深A股历史K线数据 #### # 详细指标参数,参见“历史行情指标参数”章节;“分钟线”参数与“日线”参数不同。 # 分钟线指标:date,time,code,open,high,low,close,volume,amount,adjustflag # 周月线指标:date,code,open,high,low,close,volume,amount,adjustflag,turn,pctChg rs = bs.query_history_k_data_plus("sz.300719", "date,code,open,high,low,close,preclose,volume,amount,adjustflag,turn,tradestatus,pctChg,isST", start_date='2020-08-01', end_date='2020-09-03', frequency="d", adjustflag="3") print('query_history_k_data_plus respond error_code:'+rs.error_code) print('query_history_k_data_plus respond error_msg:'+rs.error_msg) #### 打印结果集 #### data_list = [] while (rs.error_code == '0') & rs.next(): # 获取一条记录,将记录合并在一起 data_list.append(rs.get_row_data()) result = pd.DataFrame(data_list, columns=rs.fields) #### 结果集输出到csv文件 #### //result.to_csv("D:\\history_A_stock_k_data.csv", index=False) print(result) # - #### 登出系统 #### bs.logout() # 如上所示,volume 为成交股数(非手数),amount 为成交金额(单位为元),turn 为换手率(单位为%,若为20,则换手为 20%) # + #### 获取沪深A股历史K线数据 #### # 详细指标参数,参见“历史行情指标参数”章节;“分钟线”参数与“日线”参数不同。 # 分钟线指标:date,time,code,open,high,low,close,volume,amount,adjustflag # 周月线指标:date,code,open,high,low,close,volume,amount,adjustflag,turn,pctChg rs = bs.query_history_k_data_plus("sh.000001", "date,code,open,high,low,close,preclose,volume,amount,adjustflag,turn,tradestatus,pctChg,isST", start_date='2020-08-01', end_date='2020-09-03', frequency="d", adjustflag="3") print('query_history_k_data_plus respond error_code:'+rs.error_code) print('query_history_k_data_plus respond error_msg:'+rs.error_msg) #### 打印结果集 #### data_list = [] while (rs.error_code == '0') & rs.next(): # 获取一条记录,将记录合并在一起 data_list.append(rs.get_row_data()) result = pd.DataFrame(data_list, columns=rs.fields) print(result) # - type(data_list[0]),type(data_list[0][0]) a = float(result[-1:].close) a+22008 def bao_query_history_k_data_plus(stock_code,start_date,end_date=None,freq="d",field="date,code,open,high,low,close,preclose,volume,amount,adjustflag,turn,tradestatus,pctChg,isST"): if (stock_code[:2] != 'sh' and stock_code[:2] != 'sz') or (not (stock_code[2:]).isdigit()): logging.error("bad stock_code",stock_code) return None # 复权类型(只针对股票):None未复权 qfq前复权 hfq后复权 , 默认None adj = "2" code = stock_code[:2] + '.' 
+ stock_code[2:] rs = bs.query_history_k_data_plus(code,field,start_date=start_date, end_date=end_date,frequency=freq, adjustflag=adj) if rs.error_code != '0' : print(rs.error_code) return None data_list = [] while rs.next(): data_list.append(rs.get_row_data()) if freq in "dwm": dates = [pd.to_datetime(info_list[0]) for info_list in data_list] else: dates = [pd.to_datetime(info_list[0][:-5],format="%Y%m%d%H%M") for info_list in data_list] result = pd.DataFrame(data_list, index=dates,columns=rs.fields) if 'open' in rs.fields: result.open = [float(f) for f in result.open] if 'close' in rs.fields: result.close = [float(f) for f in result.close] if 'high' in rs.fields: result.high = [float(f) for f in result.high] if 'low' in rs.fields: result.low = [float(f) for f in result.low] if 'volume' in rs.fields: result.volume = [float(f) for f in result.volume] if 'amount' in rs.fields: result.amount = [float(f) for f in result.amount] if 'date' in rs.fields: result = result.drop(columns=['date']) if 'time' in rs.fields: result = result.drop(columns=['time']) return result def bao_query_history_k_data_plus2(stock_code,start_date,end_date=None,freq="d",field="date,code,open,high,low,close,preclose,volume,amount,adjustflag,turn,tradestatus,pctChg,isST"): if (stock_code[:2] != 'sh' and stock_code[:2] != 'sz') or (not (stock_code[2:]).isdigit()): logging.error("bad stock_code",stock_code) return None # 复权类型(只针对股票):None未复权 qfq前复权 hfq后复权 , 默认None adj = "2" code = stock_code[:2] + '.' + stock_code[2:] rs = bs.query_history_k_data_plus(code,field,start_date=start_date, end_date=end_date,frequency=freq, adjustflag=adj) if rs.error_code != '0' : print(rs.error_code) return None data_list = [] while rs.next(): data_list.append(rs.get_row_data()) dates = [pd.to_datetime(info_list[0]) for info_list in data_list] result = pd.DataFrame(data_list, index=dates,columns=rs.fields) return result,data_list df = bao_query_history_k_data_plus("sh000001",'2020-08-20') df df = bao_query_history_k_data_plus("sz300719",'2020-08-20') df type(df.index),df.index type(df.date),type(df.date[-1]),df.date[-1] # ### 测试交易日管理器的正确性 # + import time import os import os.path as path g_cur_day = time.strftime("%Y-%m-%d", time.localtime()) # 运行该程序用户的 home 目录 root_drive = path.expanduser('~') """abu数据缓存主目录文件夹""" g_project_root = path.join(root_drive, 'py_quant_data') g_trade_day_mgr_dir = path.join(g_project_root, 'trade_day') g_cn_trade_day_path = path.join(g_trade_day_mgr_dir, 'cn_trade_day') g_MarketInfoManagerStartDate = "2017-08-30" print(g_cn_trade_day_path) # + def getNextDate(dateFrom): parts = [ int(i) for i in dateFrom.split("-")] year = parts[0] month = parts[1] day = parts[2] if day < 28 : parts[2] = day + 1 elif day == 28: if month == 2: if (year % 100 != 0 and year % 4 == 0) or year % 400 == 0: parts[2] = 29 else: parts[1] = 3 parts[2] = 1 else: parts[2] = day + 1 elif day == 29: if month == 2: parts[1] = 3 parts[2] = 1 else: parts[2] = day + 1 elif day == 30: if month == 4 or month == 6 or month == 9 or month == 11: parts[1] = month + 1 parts[2] = 1 else: parts[2] = 31 elif day == 31: if month == 12: parts[0] = year + 1 parts[1] = 1 parts[2] = 1 else: parts[1] = month + 1 parts[2] = 1 else: return None return "-".join(str(i) if i >=10 else ("0" + str(i)) for i in parts ) def isValidDateFormat(input): if len(input) != 10: return False if not (input[4] == "-" and input[7] == "-"): return False try: temp = int(input.replace("-","")) except: return False return True def getDateList(dateFrom,dateTo): if not 
(isValidDateFormat(dateFrom) and isValidDateFormat(dateTo)): return None if int(dateFrom.replace("-","")) >= int(dateTo.replace("-","")): return None ret = [] ret.append(dateFrom) if dateFrom == dateTo: return ret nextDate = getNextDate(dateFrom) while nextDate != dateTo: ret.append(nextDate) nextDate = getNextDate(nextDate) ret.append(dateTo) return ret # - start_get_date = g_MarketInfoManagerStartDate write_mode = 'a+' if os.path.isfile(g_cn_trade_day_path): df = pd.read_csv(g_cn_trade_day_path, index_col=0) start_get_date = df.date[-1] start_get_date = getNextDate(start_get_date) else: write_mode = 'w' os.system("mkdir -p " + g_trade_day_mgr_dir) print("mkdir -p " + g_trade_day_mgr_dir) start_get_date_int = int(start_get_date.replace('-','')) g_cur_day_int = int(g_cur_day.replace('-','')) print(start_get_date_int,g_cur_day_int) if start_get_date_int <= g_cur_day_int: df = bao_query_history_k_data_plus("sh000001",start_get_date) if df is not None and len(df) > 0: print("write to csv") header = True if write_mode == 'a+': header = False df.to_csv(g_cn_trade_day_path, columns=df.columns, index=True,header = header, mode=write_mode,encoding='utf-8') else: print("df is zero") else: print("need not get") df df = pd.read_csv(g_cn_trade_day_path, index_col=0) df.index = pd.to_datetime(df.index) # + def get5MinKline(stock_code,from_date): df = bao_query_history_k_data_plus(stock_code,from_date,freq='5',field="time,open,high,low,close,volume,amount") return df def get5MinKline2(stock_code,from_date): df,rs = bao_query_history_k_data_plus2(stock_code,from_date,freq='5',field="date,open,high,low,close,volume,amount") return df,rs # - df.index dt = pd.to_datetime('2017-08-30 09:35:00') a = df.loc['2017-08-30 09:35:00'] type(a),a a.open # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Compare Solutions - Homogenous (Eurus) # # | October 2015 # # This notebook shows comparisons between the responses of the different solvers. import sys sys.path.append('../') import numpy as np from zephyr.backend import Eurus, SparseKaiserSource, AnalyticalHelmholtz # + import matplotlib.pyplot as plt import matplotlib.cm as cm import matplotlib # %matplotlib inline from IPython.display import set_matplotlib_formats set_matplotlib_formats('png') matplotlib.rcParams['savefig.dpi'] = 150 # Change this to adjust figure size # + dx = 1. dz = 1. nx = 100 nz = 200 velocity = 2000. density = 1. # Anisotropy parameters theta = 0. 
epsilon = 0.2 delta = 0.2 nPML = 10 freeSurf = [False, False, False, False] systemConfig = { 'c': velocity, # m/s 'rho': density, # kg/m^3 'freq': 200., # Hz 'nx': nx, 'nz': nz, 'dx': dx, 'dz': dz, 'theta': theta, 'eps': epsilon, 'delta': delta, 'nPML': nPML, 'cPML': 1e3, 'freeSurf': freeSurf, } # + Ainv = Eurus(systemConfig) AH = AnalyticalHelmholtz(systemConfig) SKS = SparseKaiserSource(systemConfig) xs, zs = 50, 100 sloc = np.array([xs, zs]).reshape((1,2)) q = SKS(sloc) uMZ = Ainv*q uAH = AH(sloc) # + clip = 0.1 plotopts = { 'vmin': -np.pi, 'vmax': np.pi, 'extent': [0., dx * nx, dz * nz, 0.], 'cmap': cm.bwr, } fig = plt.figure() ax1 = fig.add_subplot(1,4,1) plt.imshow(np.angle(uAH.reshape((nz, nx))), **plotopts) plt.title('AH Phase') ax2 = fig.add_subplot(1,4,2) plt.imshow(np.angle(uMZ[:nx*nz].reshape((nz,nx))), **plotopts) plt.title('ER Phase') plotopts.update({ 'vmin': -clip, 'vmax': clip, }) ax3 = fig.add_subplot(1,4,3) plt.imshow(uAH.reshape((nz, nx)).real, **plotopts) plt.title('AH Real') ax4 = fig.add_subplot(1,4,4) plt.imshow(uMZ[:nx*nz].reshape((nz, nx)).real, **plotopts) plt.title('ER Real') fig.tight_layout() # - # ## Error plots for Eurus vs. the AnalyticalHelmholtz response # # Response of the field (showing where the numerical case does not match the analytical case): # # - Source region # - PML regions # + fig = plt.figure() ax = fig.add_subplot(1,1,1, aspect=100) plt.plot(uAH.real.reshape((nz, nx))[:,xs], label='AnalyticalHelmholtz') plt.plot(uMZ[:nx*nz].real.reshape((nz, nx))[:,xs], label='Eurus') plt.legend(loc=4) plt.title('Real part of response through xs=%d'%xs) # - # ### Relative error of the MiniZephyr solution (in %) # + uMZr = uMZ[:nx*nz].reshape((nz, nx)) uAHr = uAH.reshape((nz, nx)) plotopts.update({ 'cmap': cm.jet, 'vmin': 0., 'vmax': 20., }) fig = plt.figure() ax1 = fig.add_subplot(1,2,1) plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts) cb = plt.colorbar() cb.set_label('Percent error') plotopts.update({'vmax': 5.}) ax2 = fig.add_subplot(1,2,2) plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts) cb = plt.colorbar() cb.set_label('Percent error') fig.tight_layout() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:dgs] # language: python # name: conda-env-dgs-py # --- import networkx as nx with open("../../../flicker30k_denotation_graph/flicker30k/graph/node.idx", "r") as f: node_idx = f.read() node_idx = node_idx.split("\n") with open("../../../flicker30k_denotation_graph/flicker30k/graph/node-cap.map", "r") as f: node_caption = f.read().split("\n") with open("../../../flicker30k_denotation_graph/flicker30k/graph/node-img.map", "r") as f: node_image = f.read().split("\n") len(node_idx) node_idx[0] len(node_image) len(node_caption) from tqdm import tqdm graph = nx.DiGraph() for idx, images, captions in tqdm(zip(node_idx, node_image, node_caption), total=len(node_idx)): if idx == "": continue i, text = idx.split("\t") images = images.split("\t") assert images[0] == i images = images[1:] # remove the caption information from the images and keep only the images images = list(set(map(lambda x: x.split("#")[0], images))) captions = captions.split("\t") assert captions[0] == i captions=captions[1:] graph.add_node(int(i), text=text, images=sorted(images), captions=sorted(captions)) with open("../../../flicker30k_denotation_graph/flicker30k/graph/node-tree.txt", "r") as f: node_tree = f.read() 
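# Judging from the unpacking in the loop below, each non-empty line of node-tree.txt
# appears to be tab-separated as: child node id, edge type, parent node id, followed by
# zero or more rewrite rules; both endpoints are expected to already exist in the graph.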
node_tree = node_tree.split("\n") for n in tqdm(node_tree): if n == "": continue child, edge, parent, *rewrite_rules = n.split("\t") child = int(child) parent = int(parent) assert child in graph assert parent in graph graph.add_edge(parent, child, edge_type=edge, rewrite_rules=rewrite_rules) graph.size() graph.in_degree(0) graph.get_edge_data(0,54) leaf_nodes = [node for node in tqdm(graph.nodes()) if graph.in_degree(node)!=0 and graph.out_degree(node)==0] len(leaf_nodes) len(graph.nodes) leaf_nodes[0] leaf_nodes[-1] list(graph.in_edges(2642008)) subgraph = graph.edge_subgraph([(2501206, 2642008), (352, 2642008), (6902, 2642008), (1357, 2642008), (3566, 2642008)]) subgraph list(subgraph.nodes) list(subgraph.edges) subgraph.subgraph([352, 2501206]) type(subgraph.nodes[352]) subgraph.edges[(2501206, 2642008)] a = graph.successors(0) a = list(a) len(a) import matplotlib.pyplot as plt # %matplotlib inline # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Introduction to the Interstellar Medium # ### # ### Figures 9.1, 9.2, and 9.3: Bonnor-Ebert profiles and mass import numpy as np import matplotlib.pyplot as plt import scipy.integrate as integrate import scipy.interpolate as interpolate # %matplotlib inline def lane_emden_integrate(x): # solve Lane-Emden equation nsteps = x.size y = np.zeros(nsteps) yp = np.zeros(nsteps) yp2 = np.zeros(nsteps) # initial condition on d2y/dx2 # (verified that solutions are insensitive to this beyond x = 2) yp2[0] = 1/3 # integrate outwards step by step # (logarithmic steps) for i in np.arange(1,nsteps): dx = x[i] - x[i-1] y[i] = y[i-1] + yp[i-1]*dx + yp2[i-1]*dx**2/2 yp[i] = yp[i-1] + yp2[i-1]*dx yp2[i] = np.exp(-y[i]) - 2*yp[i]/x[i] return(y,yp) def plot_profiles(): # plot Bonnor-Ebert density profile nsteps = 1000 xmax = 1e4 x = np.logspace(-2, np.log10(xmax), nsteps) y,yp = lane_emden_integrate(x) # scale for various physical parameters r0 = 1.243e3 # radial scale factor in pc fig = plt.figure(figsize=(6,4)) ax = fig.add_subplot(111) ax.set_xlim(0.002,1.0) ax.set_ylim(1e8,1e13) ax.set_xscale('log') ax.set_yscale('log') ax.set_xlabel(r'${\rm Radius}\ {\rm (pc)}$', fontsize=14) ax.set_ylabel(r'${\rm H_2\ density}\ {\rm (m^{-3})}$', fontsize=14) T = 10 # isothermal temperature (K) n_ext = 8e9/T # lower density limit from pressure equilibrium n0 = np.array([1,0.2,5,25,125])*14.2*n_ext ls = ['-','-','--','--','--'] lw = [2,2,2,2,2] alpha = [1,0.3,0.3,0.3,0.3] for i in range(len(n0)): r = x * r0 * np.sqrt(T/n0[i]) n = n0[i] / np.exp(y) if i == 0: ax.plot(r, n, linestyle=ls[i], color='k', lw=lw[i], alpha=alpha[i], label='Critical') else: ax.plot(r, n, linestyle=ls[i], color='k', lw=lw[i], alpha=alpha[i]) # singular isothermal sphere r = np.logspace(-3,1,2) ax.plot(r,3.09e6*T/r**2, 'k--', lw=2, label='Singular') ax.plot([0.2,10], [n_ext,n_ext], 'k:', label='Ambient density') ax.legend() ax.text(0.0027, 2.7e9, 'Stable', fontsize=10) ax.text(0.0027, 6.9e10, 'Unstable', fontsize=10) x_labels = ['0.01','0.1','1'] x_loc = np.array([float(x) for x in x_labels]) ax.set_xticks(x_loc) ax.set_xticklabels(x_labels) fig.tight_layout() plt.savefig('bonnor_ebert_profiles.pdf') def plot_mass(): # plot mass for given P_ext nsteps = 10000 xmax = 1e4 x = np.logspace(-4, np.log10(xmax), nsteps) y,yp = lane_emden_integrate(x) T = 10 # isothermal temperature (K) r0 = 1.243e3 # radial scale factor in pc n_ext = 
8e9/T # exterior density in m-3 n0 = np.logspace(np.log10(1.1*n_ext),12,300) ndens = n0.size r_ext = np.zeros(ndens) m_ext = np.zeros(ndens) m_tot = np.zeros(ndens) for i in range(ndens): y_ext = np.log(n0[i]/n_ext) j = np.where(np.abs(y/y_ext - 1) < 0.1)[0] ycubic = interpolate.UnivariateSpline(x[j],y[j]-y_ext) x_ext = ycubic.roots()[0] k = np.where(x < x_ext)[0] m_ext[i] = 1.19e3 * integrate.simps(x[k]**2 / np.exp(y[k]), x[k]) * np.sqrt(T**3/n0[i]) # max pressure contrast Pratio = n0/n_ext imax = m_ext.argmax() m_ext_max = m_ext[imax] Pratio_max = Pratio[imax] fig = plt.figure(figsize=(6,4)) ax1 = fig.add_subplot(111) ax1.set_xlim(1,3e2) ax1.set_xscale('log') #ax1.set_yscale('log') ax1.set_xlabel(r'$\rho_{\rm cen}/\rho_{\rm amb}$', fontsize=14) ax1.set_ylim(0,6.5) #ax1.set_yscale('log') ax1.set_ylabel(r'${\rm Mass}\ (M_\odot)$', fontsize=14) #mplot = ax1.plot(Pratio, m_ext, 'k-', lw=3, label='Mass') ax1.plot(Pratio[0:imax-1], m_ext[0:imax-1], 'k-', lw=2, alpha=0.3, zorder=99) ax1.plot(Pratio[imax+1:], m_ext[imax+1:], 'k--', lw=2, alpha=0.3, zorder=99) ax1.plot(Pratio_max, m_ext_max, 'ko', markersize=4, zorder=999) ax1.text(2.05, 3.2, 'Stable', fontsize=12, rotation=58, backgroundcolor='white', zorder=2) ax1.text(50, 4.6, 'Unstable', fontsize=12, rotation=-21, zorder=2) ax1.text(9.5, m_ext_max+0.15, r'$M_{\rm BE}$', fontsize=12) # SIS m_SIS = 1.06 * np.sqrt(1e10/n_ext) * (T/10)**1.5 ax1.plot([1,300], [m_SIS,m_SIS], 'k:', zorder=1) ax1.text(150, m_SIS-0.33, r'$M_{\rm SIS}$', fontsize=12) print(' M_SIS = {0:5.2f} Msun'.format(m_SIS)) print(' M_max = {0:5.2f} Msun'.format(m_ext_max)) print('M_max/M_SIS = {0:4.2f}'.format(m_ext_max/m_SIS)) print(' P_0/P_ext = {0:5.2f}'.format(Pratio_max)) ax1.plot([Pratio_max,Pratio_max], [0,10], 'k:') #x_labels = ['1','10','100'] x_labels = ['1','3','10','30','100','300'] x_loc = np.array([float(x) for x in x_labels]) ax1.set_xticks(x_loc) ax1.set_xticklabels(x_labels) fig.tight_layout() plt.savefig('bonnor_ebert_mass.pdf') def plot_b68(): fig = plt.figure(figsize=(6,4)) ax = fig.add_subplot(111) # observed profile # data from Alves et al. Nature 2001 # Figure 2 digitized using https://apps.automeris.io/wpd/ r, Av = np.genfromtxt('Alves_Av.txt', unpack=True, delimiter=',') ax.plot(r, Av, 'ko', markersize=3, label='Observations') nsteps = 10000 xmax = 10 x = np.logspace(-2, np.log10(xmax), nsteps) y, yp = lane_emden_integrate(x) # set outer boundary # (note that I find a value a bit lower than in Alves et al...) 
xmax = 4.5 y[x > xmax] = 10 b = np.logspace(-2, np.log10(xmax)+0.5, 1000) Av = np.zeros(b.size) yinterp = interpolate.interp1d(x, y, kind='cubic', bounds_error=False, fill_value='extrapolate') for i in range(b.size): b1 = b[i] xpath = np.sqrt(x**2 + b1**2) Av[i] = integrate.simps(np.exp(-yinterp(xpath)), xpath) # manually scale axes to match Av # this has physical significance but that's the point of the paper # (this illustrative plot is only to show that an observed core does indeed look like a BE sphere) Ascale = 35/Av.max() Av *= Ascale b *= 26 ax.plot(b, Av, 'k-', lw=2, alpha=0.5, label='Bonnor Ebert profile') ax.set_xlim(8,150) ax.set_ylim(0.3,45) ax.set_xscale('log') ax.set_yscale('log') ax.set_xlabel("Projected radius ('')", fontsize=14) ax.set_ylabel(r"${\rm A_V\ (mag)}$", fontsize=14) ax.legend(loc=3, bbox_to_anchor=(0.04, 0.05)) ax.text(0.24, 0.24, 'B68 visual extinction', fontsize=12, ha='center', transform = ax.transAxes) x_labels = ['10','30','100'] x_loc = np.array([float(x) for x in x_labels]) ax.set_xticks(x_loc) ax.set_xticklabels(x_labels) y_labels = ['1','3','10','30'] y_loc = np.array([float(y) for y in y_labels]) ax.set_yticks(y_loc) ax.set_yticklabels(y_labels) fig.tight_layout() plt.savefig('b68_profile.pdf') # Figure 9.1 plot_profiles() # Figure 9.2 plot_mass() # Figure 9.3 plot_b68() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="iJ7hbp9w32Uy" outputId="69e84138-f086-402d-bf58-1d2c1962d9bd" # !nvidia-smi # + colab={"base_uri": "https://localhost:8080/"} id="AHctWu2vPf0q" outputId="ce2dd527-42cc-451a-e707-538e84ee3794" # 確認NVIDIA是否安裝及驅動版本 # !/usr/local/cuda/bin/nvcc --version # + colab={"base_uri": "https://localhost:8080/"} id="bUx8RLwuDRxD" outputId="2aa8f152-2842-402e-9f39-5ef08dfa372a" # !git clone https://github.com/alicebook12220/YOLOX.git # + id="xfwA7nTuh_xB" # #!pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html # + colab={"base_uri": "https://localhost:8080/"} id="JHmzlOov4Hnm" outputId="11b98ae7-7f0a-4b71-e5fe-71bab0b4577e" #建置YOLOX環境 # %cd YOLOX # !pip install -U pip && pip install -r requirements.txt # !python setup.py develop # + colab={"base_uri": "https://localhost:8080/"} id="BM_6naDn9kxC" outputId="f4768d81-0cda-487e-bf3a-f4bc1c471518" #檢查是否有使用GPU,0是沒有,1是有 import torch print(torch.cuda.device_count()) # + colab={"base_uri": "https://localhost:8080/"} id="TLRrWjVe4RBE" outputId="99141399-f9f2-4598-e50c-736c086b31b3" #安裝pycocotools,讀取COCO格式資料用 # !pip install cython # !pip install pycocotools # + id="9pUrUO-D5loW" colab={"base_uri": "https://localhost:8080/"} outputId="ae8e72dd-513a-4cdb-f3ce-286df1bf5377" #下載yolox_nano COCO權重 # %cd /content/YOLOX/ # !wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_nano.pth # + colab={"base_uri": "https://localhost:8080/"} id="vs7NaAI15lq4" outputId="c375affc-063a-4242-99ad-45488cfb0093" # %cd /content/YOLOX/ #開始訓練 #-d為GPU數量 -b為batch_size --fp16為混合精度訓練 -o為先使用GPU記憶體 -c為訓練權重路徑 # !python tools/train.py -f yolox_voc_nano.py -d 1 -b 8 --fp16 -o -c yolox_nano.pth # + colab={"base_uri": "https://localhost:8080/"} id="I-A0Pa62nql8" outputId="1de412a0-45b9-4181-c44e-bc94ee47512e" #demo預測影像 TEST_IMAGE_PATH = "datasets/VOCdevkit/VOC2007/JPEGImages/002387.jpg" # !python tools/demo.py image -f yolox_voc_nano.py -c 
YOLOX_outputs/yolox_voc_nano/best_ckpt.pth --path {TEST_IMAGE_PATH} --conf 0.25 --nms 0.45 --tsize 416 --save_result --device gpu # + [markdown] id="p5lyDtOMw4YS" # # 拆解demo.py # + id="Mmvze6aqr6ru" import argparse import os import time from loguru import logger import cv2 import torch import torchvision from yolox.data.data_augment import ValTransform from yolox.exp import get_exp from yolox.utils import fuse_model, get_model_info, postprocess, vis # + colab={"base_uri": "https://localhost:8080/"} id="JP6bb4Lir8jW" outputId="23ca2e58-d8ac-4329-e86d-f9e792a278bb" #讀yolox-nano模型 exp = get_exp("yolox_voc_nano.py", "yolox-nano") model = exp.get_model() logger.info("Model Summary: {}".format(get_model_info(model, exp.test_size))) model.cuda() model.eval() ckpt = torch.load("YOLOX_outputs/yolox_voc_nano/best_ckpt.pth", map_location=lambda storage, loc: storage.cuda(0)) model.load_state_dict(ckpt["model"]) logger.info("loaded checkpoint done.") # + colab={"base_uri": "https://localhost:8080/"} id="-7ELfOnxw0FO" outputId="15979564-129a-411a-ad6f-286ef5936ebc" #預測影像 t0 = time.time() img = cv2.imread("datasets/VOCdevkit/VOC2007/JPEGImages/002387.jpg") preproc = ValTransform() test_size = exp.test_size #(416, 416) img, _ = preproc(img, None, test_size) img = torch.from_numpy(img).unsqueeze(0) img = img.cuda() prediction = model(img) print("Inference time:",time.time() - t0) print(prediction) # + colab={"base_uri": "https://localhost:8080/"} id="L86d32OTyO6w" outputId="8a2e1cbe-b1ba-43bb-9a1b-80b3f04e851f" #prediction解碼 conf_thre = 0.5 #信心度閥值 nms_thre = 0.4 #重疊率閥值 num_classes = 1 #類別數量 box_corner = prediction.new(prediction.shape) box_corner[:, :, 0] = prediction[:, :, 0] - prediction[:, :, 2] / 2 box_corner[:, :, 1] = prediction[:, :, 1] - prediction[:, :, 3] / 2 box_corner[:, :, 2] = prediction[:, :, 0] + prediction[:, :, 2] / 2 box_corner[:, :, 3] = prediction[:, :, 1] + prediction[:, :, 3] / 2 prediction[:, :, :4] = box_corner[:, :, :4] output = [None for _ in range(len(prediction))] for i, image_pred in enumerate(prediction): # If none are remaining => process next image if not image_pred.size(0): continue # Get score and class with highest confidence class_conf, class_pred = torch.max(image_pred[:, 5: 5 + num_classes], 1, keepdim=True) conf_mask = (image_pred[:, 4] * class_conf.squeeze() >= conf_thre).squeeze() # Detections ordered as (x1, y1, x2, y2, obj_conf, class_conf, class_pred) detections = torch.cat((image_pred[:, :5], class_conf, class_pred.float()), 1) detections = detections[conf_mask] if not detections.size(0): continue nms_out_index = torchvision.ops.nms( detections[:, :4], detections[:, 4] * detections[:, 5], nms_thre, ) detections = detections[nms_out_index] if output[i] is None: output[i] = detections else: output[i] = torch.cat((output[i], detections)) print(output) #(x1, y1, x2, y2, obj_conf, class_conf, class_pred) # + colab={"base_uri": "https://localhost:8080/"} id="7TKGkyUX1j3U" outputId="1fcc464c-6a3e-4725-f96c-22f2a04cef01" #取得bboxes、class_id, 信心度 cls_names = ["car"] img = cv2.imread("datasets/VOCdevkit/VOC2007/JPEGImages/002387.jpg") ratio = min(test_size[0] / img.shape[0], test_size[1] / img.shape[1]) output_vis = output[0] if output_vis is not None: output_vis = output_vis.cpu() bboxes = output_vis[:, 0:4] # preprocessing: resize bboxes /= ratio print("bboxes:", bboxes) cls = output_vis[:, 6] print("cls:",cls) scores = output_vis[:, 4] * output_vis[:, 5] print("scores", scores) #vis_res = vis(img, bboxes, scores, cls, conf_thre, cls_names) # + id="th4PeKT96kms" import 
numpy as np _COLORS = np.array( [ 0.000, 0.447, 0.741, 0.850, 0.325, 0.098, 0.929, 0.694, 0.125, 0.494, 0.184, 0.556, 0.466, 0.674, 0.188, 0.301, 0.745, 0.933, 0.635, 0.078, 0.184, 0.300, 0.300, 0.300, ] ).astype(np.float32).reshape(-1, 3) # + colab={"base_uri": "https://localhost:8080/"} id="MTFITc-A5Uz7" outputId="fbb0c4cd-1ed6-4f54-8f89-63659d81680d" #視覺化 boxes = bboxes cls_ids = cls conf = conf_thre img = cv2.imread("datasets/VOCdevkit/VOC2007/JPEGImages/002387.jpg") img_copy = img.copy() for i in range(len(boxes)): box = boxes[i] cls_id = int(cls_ids[i]) score = scores[i] if score < conf: continue #box左上角和右下角座標 x0 = int(box[0]) y0 = int(box[1]) x1 = int(box[2]) y1 = int(box[3]) print("x0, y0:", x0, y0, " x1, y1:", x1, y1) color = (_COLORS[cls_id] * 255).astype(np.uint8).tolist() text = '{}:{:.1f}%'.format(cls_names[cls_id], score * 100) txt_color = (0, 0, 0) if np.mean(_COLORS[cls_id]) > 0.5 else (255, 255, 255) font = cv2.FONT_HERSHEY_SIMPLEX txt_size = cv2.getTextSize(text, font, 0.4, 1)[0] #畫框 cv2.rectangle(img_copy, (x0, y0), (x1, y1), color, 2) txt_bk_color = (_COLORS[cls_id] * 255 * 0.7).astype(np.uint8).tolist() cv2.rectangle( img_copy, (x0, y0 + 1), (x0 + txt_size[0] + 1, y0 + int(1.5*txt_size[1])), txt_bk_color, -1 ) #寫字 cv2.putText(img_copy, text, (x0, y0 + txt_size[1]), font, 0.4, txt_color, thickness=1) # + colab={"base_uri": "https://localhost:8080/", "height": 503} id="z2EA5PUy6tFY" outputId="d6a50012-0ab9-4cf2-fb49-f543f0331d98" from matplotlib import pyplot as plt plt.figure(figsize=(12, 8)) plt.imshow(cv2.cvtColor(img_copy, cv2.COLOR_BGR2RGB)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # **fierClass: Visualization API**

    # + # Authors: () # () # (, ) # Date: 2019/05/03. # # If you are going to use fierClass in your research project, please cite its reference article # , , et al. "fierClass: A multi-signal, cepstrum-based, time series classifier," # Engineering Applications of Artificial Intelligence, Volume 87, 2020, https://doi.org/10.1016/j.engappai.2019.103262. # # Copyright and license: © , , , Politecnico di Milano # Licensed under the [MIT License](LICENSE). # # In case of need, feel free to ask the author. # - # **Libraries**

    # *This code block imports all the required libraries and packages.*

    # Visualization from bokeh.layouts import gridplot from bokeh.models import ColumnDataSource, Grid, LinearAxis from bokeh.models import Plot, Step, Legend, LegendItem from bokeh.models import Title from bokeh.plotting import figure from bokeh.palettes import Accent from bokeh.io import output_notebook, curdoc, show output_notebook() # **Cepstra Plot**

    # *This function plots the cepstra for all the instances contained in the input matrix.*

    def train_cepstra_plot(order, train_cepstra, label): # Color palette definition (one for each classes) colors = Accent.get(train_cepstra.shape[0]) # x-axis definition x = list(range(order)) # Cepstra plots. Each plot contains the cepstrum of # the considered signal for each available class. plots = [figure() for i in range(train_cepstra.shape[1])] glyphs = [plot.line(x, train_cepstra[class_idx,0,:order], color=colors[class_idx], line_width=2, legend_label=label[class_idx]) for plot in plots for i in range(2) for class_idx in range(train_cepstra.shape[0])] # Legend settings for plot in plots: plot.legend.location = "top_right" plot.legend.click_policy= "hide" show(gridplot(children = plots, ncols = 3, merge_tools = False, plot_width=300, plot_height=300)) # **Preditcions vs Actual Labels**

    # *This function plots the predicted and the actual labels, allowing the user to visually evaluate the performance of the trained classifier.*
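# *A hypothetical call to `results_plot` (defined below), illustrating the input convention implied by the function body: when `featurefusion` is not `'conv'`, the predicted sequence is expected as the first element of a container (`y_pred[0]`); when it is `'conv'`, `y_pred` is the sequence itself. The label values below are made up purely for illustration.*

# +
y_test_demo = [0, 0, 1, 1, 2, 2, 1, 0]      # assumed integer class labels
y_pred_demo = [[0, 0, 1, 2, 2, 2, 1, 0]]    # wrapped in a list because featurefusion != 'conv'

results_plot(y_test_demo, y_pred_demo, title="Predicted vs actual labels (demo)", featurefusion="none")
# -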

    def results_plot(y_test,y_pred,title,featurefusion): # x-axis definition x = list(range(len(y_test))) # Classes vs subclasses evaluation plot if(featurefusion!='conv'): source = ColumnDataSource(dict(x=x, y1=y_test, y2=y_pred[0])) else: source = ColumnDataSource(dict(x=x, y1=y_test, y2=y_pred)) plot = Plot(plot_width=900, plot_height=700, title=Title(text=title, align="center")) # Actual labels glyph1 = Step(x="x", y="y1", line_width=2, line_color="#f46d43") plot.add_glyph(source, glyph1) # Predictedd labels glyph2 = Step(x="x", y="y2", line_width=2, line_dash="dashed", line_color="#1d91d0") plot.add_glyph(source, glyph2) xaxis = LinearAxis() plot.add_layout(xaxis, 'below') yaxis = LinearAxis() plot.add_layout(yaxis, 'left') # Legend li1 = LegendItem(label='True', renderers=[plot.renderers[0]]) li2 = LegendItem(label='Predicted', renderers=[plot.renderers[1]]) legend1 = Legend(items=[li1, li2], location='top_right') plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker)) plot.add_layout(Grid(dimension=1, ticker=yaxis.ticker)) plot.add_layout(legend1) plot.legend.click_policy= "hide" curdoc().add_root(plot) show(plot) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # sspmm.py ''' StationSim (aka Mike's model) converted into python. Todos: multiprocessing profile functions ''' import numpy as np import matplotlib.pyplot as plt from copy import deepcopy def error(text='Self created error.'): from sys import exit print() exit(text) return class NextRandom: # To find random number useage type 'random.np_', or even 'random' def __init__(self): np.random.seed(303) self.random_number_usage = 0 return def np_uniform(self, high=1, low=0, shape=1): r = np.random.random(shape) r = r * (high - low) self.random_number_usage += np.size(r) return r def np_gaussian(self, mu=0, sigma=1, shape=1): r = np.random.standard_normal(shape) r = r * sigma + mu #r * sigma**2 + mu self.random_number_usage += np.size(r) return r def np_integer(self, high=1, low=0, shape=1): r = np.random.randint(low, high + 1, shape) self.random_number_usage += np.size(r) return r class Agent: def __init__(self, model): self.unique_id = model.agent_count model.agent_count += 1 self.active = 0 # 0 = not started, 1 = started, 2 = finished # Location entrance = random.np_integer(model.entrances - 1) self.location = model.loc_entrances[entrance][0] # Parameters self.loc_desire = model.loc_exits[random.np_integer(model.exits - 1)][0] return def step(self, model): if self.active == 0: self.activate(model) elif self.active == 1: self.move(model) if model.do_save: self.save() self.exit_query(model) return def activate(self, model): self.speed_desire = max(random.np_gaussian(model.speed_desire_max), model.speed_desire_min) new_location = self.location new_location[1] += model.entrance_space * random.np_uniform(-.5, +.5) # Empty space to step off of 'train' self.separation = model.initial_separation if not self.collision(model, new_location): self.active = 1 model.pop_active += 1 self.location = new_location self.separation = model.separation # Save if model.do_save: self.start_time = model.time self.history_loc = [] return def move(self, model): ''' Description: This mechanism moves the agent. It checks certain conditions for collisions at decreasing speeds. First check for direct new_location in the desired direction. 
Second check for a new_location in a varied desired direction. Third check for a new_location with a varied current direction. Dependencies: collision - any agents in radius neighbourhood - find neighbours in radius lerp - linear extrapolation Arguments: self Returns: new_location ''' # Decreasing Speeds speeds = np.linspace(self.speed_desire, model.speed_min, 15) for speed in speeds: new_location = self.lerp(self.loc_desire, self.location, speed) if not self.collision(model, new_location): break # Wiggle if speed == model.speed_min: new_location = self.location + random.np_integer(low=-1, high=+1, shape=2) # Boundary check within_bounds = all(model.boundaries[0] <= new_location) and all(new_location <= model.boundaries[1]) if not within_bounds: new_location = np.clip(new_location, model.boundaries[0], model.boundaries[1]) # Move self.location = new_location return def collision(self, model, new_location): ''' Description: Determine whether or not there is another object at this location. Requires get neighbour from mesa? Dependencies: neighbourhood - find neighbours in radius Arguments: model.boundaries ((f, f), (f, f)) A pair of tuples defining the lower limits and upper limits to the rectangular world. new_location (f, f) The potential location of an agent. Returns: collide b The answer to whether this position is blocked ''' within_bounds = all(model.boundaries[0] <= new_location) and all(new_location <= model.boundaries[1]) if not within_bounds: collide = True elif self.neighbourhood(model, new_location): collide = True else: collide = False return collide def neighbourhood(self, model, new_location, just_one=True, forward_vision=True): ''' Description: Get agents within the defined separation. Arguments: self.unique_id i The current agent's unique identifier self.separation f The radius in which to search model.agents s The set of all agents new_location (f, f) A location tuple just_one b Defines if more than one neighbour is needed. forward_vision b Restricts separation radius to only infront. 
Returns: neighbours s A set of agents in a region ''' neighbours = [] for agent in model.agents: if agent.active == 1: if forward_vision and agent.location[0] < new_location[0]: distance = self.separation + 1 else: distance = np.linalg.norm(new_location - agent.location) if distance < self.separation and agent.unique_id != self.unique_id: neighbours.append(agent) if just_one: break return neighbours def lerp(self, loc1, loc2, speed): ''' Description: Linear extrapolation at a constant rate https://en.wikipedia.org/wiki/Linear_interpolation Arguments: loc1 (f, f) Point One defining the destination position loc2 (f, f) Point Two defining the agent position speed f The suggested speed of the agent Returns: loc (f, f) The location if travelled at this speed ''' distance = np.linalg.norm(loc1 - loc2) loc = loc2 + speed * (loc1 - loc2) / distance return loc def save(self): self.history_loc.append(self.location) return def exit_query(self, model): if np.linalg.norm(self.location - self.loc_desire) < model.exit_space: self.active = 2 model.pop_active -= 1 model.pop_finished += 1 if model.do_save: model.time_taken.append(model.time - self.start_time) return class Model: def __init__(self, params): for key, value in params.items(): setattr(self, key, value) # Batch Details self.time = 0 if self.do_save: self.time_taken = [] # Model Parameters self.boundaries = np.array([[0, 0], [self.width, self.height]]) self.pop_active = 0 self.pop_finished = 0 # Initialise self.initialise_gates() self.initialise_agents() return def step(self): self.time += 1 if self.time > 2: # For animation purposes for agent in self.agents: agent.step(self) return def initialise_gates(self): # Entrances self.loc_entrances = np.zeros((self.entrances, 2)) self.loc_entrances[:, 0] = 0 if self.entrances == 1: self.loc_entrances[0, 1] = self.height / 2 else: self.loc_entrances[:, 1] = np.linspace(self.height / 4, 3 * self.height / 4, self.entrances) # Exits self.loc_exits = np.zeros((self.exits, 2)) self.loc_exits[:, 0] = self.width if self.exits == 1: self.loc_exits[0, 1] = self.height / 2 else: self.loc_exits[:, 1] = np.linspace(self.height / 4, 3 * self.height / 4, self.exits) return def initialise_agents(self): self.agent_count = 0 self.agents = list([Agent(self) for _ in range(self.pop_total)]) return def agents2state(self): state = np.ravel([agent.location for agent in self.agents]) return state def state2agents(self, state): for i in range(len(self.agents)): self.agents[i].location = state[2 * i:2 * i + 2] return def batch(self): for i in range(self.batch_iterations): self.step() if self.do_ani: self.ani_agents() if self.pop_finished == self.pop_total: print('Everyone made it!') break if self.do_save: self.stats() self.plot_subplots() return def ani_agents(self): plt.figure(1) plt.clf() for agent in self.agents: if agent.active == 1: plt.plot(*agent.location, '.k') plt.axis(np.ravel(self.boundaries, 'F')) plt.pause(1 / 30) return def plot_subplots(self): _, (ax1, ax2) = plt.subplots(2) for agent in self.agents: if agent.active == 2 and agent.unique_id < 50: locs = np.array(agent.history_loc).T ax1.plot(locs[0], locs[1], linewidth=.5) ax1.axis(np.ravel(self.boundaries, 'F')) ax2.hist(self.time_taken) plt.show() return def stats(self): print() print('Stats:') print('Finish Time: ' + str(self.time)) print('Random number usage: ' + str(random.random_number_usage)) print('Active / Finished / Total agents: ' + str(self.pop_active) + '/' + str(self.pop_finished) + '/' + str(self.pop_total)) print('Average time taken: ' + 
str(np.mean(self.time_taken)) + 's') return ''' A particle filter to model the dynamics of the state of the model as it develops in time. Parameters: 'number_of_particles': The number of particles used to simulate the model 'number_of_iterations': The number of iterations to run the model/particle filter 'resample_window': The number of iterations between resampling particles 'agents_to_visualise': The number of agents to plot particles for 'particle_std': The standard deviation of the noise added to particle states 'model_std': The standard deviation of the noise added to model observations 'do_save': Boolean to determine if data should be saved and stats printed 'do_ani': Boolean to determine if particle filter data should be animated and displayed ''' class ParticleFilter: ''' Initialise Particle Filter Firstly, set all attributes using filter parameters. Set time and initialise base model using model parameters. Initialise particle models using a deepcopy of base model. Determine particle filter dimensions, set state of particles to the state of the base model, and then initialise all remaining arrays. ''' def __init__(self, Model, model_params, filter_params): for key, value in filter_params.items(): setattr(self, key, value) self.time = 0 self.base_model = Model(model_params) self.models = list([deepcopy(self.base_model) for _ in range(self.number_of_particles)]) self.dimensions = len(self.base_model.agents2state()) self.states = np.empty((self.number_of_particles, self.dimensions)) for particle in range(self.number_of_particles): self.states[particle,:] = self.base_model.agents2state() self.weights = np.ones(self.number_of_particles) if self.do_save: self.active_agents = [] self.means = [] self.mean_errors = [] self.variances = [] ''' Step Particle Filter Loop through process. Predict the base model and particles forward. If the resample window has been reached, reweight particles based on distance to base model and resample particles choosing particles with higher weights. Then save and animate the data. When done, plot save figures. ''' def step(self): for _ in range(self.number_of_iterations): self.predict() if self.time % self.resample_window == 0: self.reweight() self.resample() if self.do_save: self.save() if self.do_ani: self.ani() if self.do_save: self.plot_save() return ''' Predict Increment time. Step the base model. For each particle, step the particle model and then set the particle states as the agent locations with some added noise. Reassign the locations of the particle agents using the new particle states. This is the main interaction between the model and the particle filter. ''' def predict(self): self.time += 1 self.base_model.step() for particle in range(self.number_of_particles): self.models[particle].step() self.states[particle] = (self.models[particle].agents2state() + random.np_gaussian(0, self.particle_std**2, shape=self.states[particle].shape)) self.models[particle].state2agents(self.states[particle]) return ''' Reweight Add noise to the base model state to get a measured state. Calculate the distance between the particle states and the measured base model state and then calculate the new particle weights as 1/distance. Add a small term to avoid dividing by 0. Normalise the weights. 
''' def reweight(self): measured_state = (self.base_model.agents2state() + random.np_gaussian(0, self.model_std**2, shape=self.states.shape)) distance = np.linalg.norm(self.states - measured_state, axis=1) self.weights = 1 / (distance + 1e-99) self.weights /= np.sum(self.weights) return ''' Resample Calculate a random partition of (0,1) and then take the cumulative sum of the particle weights. Carry out a systematic resample of particles. Set the new particle states and weights and then update agent locations in particle models. ''' def resample(self): offset_partition = ((np.arange(self.number_of_particles) + random.np_uniform()) / self.number_of_particles) cumsum = np.cumsum(self.weights) i, j = 0, 0 indexes = np.zeros(self.number_of_particles, 'i') while i < self.number_of_particles: if offset_partition[i] < cumsum[j]: indexes[i] = j i += 1 else: j += 1 self.states[:] = self.states[indexes] self.weights[:] = self.weights[indexes] for particle in range(self.number_of_particles): self.models[particle].state2agents(self.states[particle]) return ''' Save and Plot Save Calculate number of active agents, mean, and variance of particles and calculate mean error between the mean and the true base model state. Plot active agents,mean error and mean variance. ''' def save(self): self.active_agents.append(sum([agent.active == 1 for agent in self.base_model.agents])) mean = np.average(self.states, weights=self.weights, axis=0) variance = np.average((self.states - mean)**2, weights=self.weights, axis=0) self.means.append(mean[:]) self.variances.append(np.average(variance)) truth_state = self.base_model.agents2state() self.mean_errors.append(np.linalg.norm(mean - truth_state, axis=0)) return def plot_save(self): plt.figure(2) plt.plot(self.active_agents) plt.ylabel('Active agents') plt.show() plt.figure(3) plt.plot(self.mean_errors) plt.ylabel('Mean Error') plt.show() plt.figure(4) plt.plot(self.variances) plt.ylabel('Mean Variance') plt.show() print('Max mean error = ',max(self.mean_errors)) print('Average mean error = ',np.average(self.mean_errors)) print('Max mean variance = ',max(self.variances)) print('Average mean variance = ',np.average(self.variances)) ''' Animate Plot the base model state and some of the particles. Only do this if there is at least 1 active agent in the base model. We adjust the markersizes of each particle to represent the weight of that particle. We then plot some of the agent locations in the particles and draw lines between the particle agent location and the agent location in the base model. 
''' def ani(self): if any([agent.active == 1 for agent in self.base_model.agents]): plt.figure(1) plt.clf() markersizes = self.weights if np.std(markersizes) != 0: markersizes *= 4 / np.std(markersizes) # revar markersizes += 4 - np.mean(markersizes) # remean particle = -1 for model in self.models: particle += 1 markersize = np.clip(markersizes[particle], .5, 8) for agent in model.agents[:self.agents_to_visualise]: if agent.active == 1: unique_id = agent.unique_id if self.base_model.agents[unique_id].active == 1: locs = np.array([self.base_model.agents[unique_id].location, agent.location]).T plt.plot(*locs, '-k', alpha=.1, linewidth=.3) plt.plot(*agent.location, '.r', alpha=.3, markersize=markersize) for agent in self.base_model.agents: if agent.active == 1: plt.plot(*agent.location, '+k') plt.axis(np.ravel(self.base_model.boundaries, 'F')) plt.pause(1 / 4) return if __name__ == '__main__': random = NextRandom() model_params = { 'width': 200, 'height': 100, 'pop_total': 1, 'entrances': 3, 'entrance_space': 2, 'exits': 2, 'exit_space': 2, 'speed_min': .1, 'speed_desire_min': .5, 'speed_desire_max': 2, 'initial_separation': 1, 'separation': 5, 'batch_iterations': 50, 'do_save': True, 'do_ani': True } if not True: # Run the model Model(model_params).batch() else: # Run the particle filter filter_params = { 'number_of_particles': 100, 'number_of_iterations': 200, 'resample_window': 20, 'agents_to_visualise': 2, 'particle_std': 0, 'model_std': 0, 'do_save': True, 'do_ani': False } pf = ParticleFilter(Model, model_params, filter_params) pf.step() # + Max mean error = 10.06444570441913 Average mean error = 1.0408552229966932 Max mean variance = 194.23319119794178 Average mean variance = 27.5432169512865 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Description of the task and code # We analize the `train.py` file, which is the main file to execute in order to create a model and test for results. # A couple of considerations: # * The main goal of this file is explain the details of the paper explained in this [tutorial](https://towardsdatascience.com/training-a-goal-oriented-chatbot-with-deep-reinforcement-learning-part-i-introduction-and-dce3af21d383) # * The code is downloaded from this [repository](https://github.com/maxbren/GO-Bot-DRL) # * Since the scope is explaining the lines of code and the meaning of the objects, for the sake of the description, many lines of codes and objects will be created and depicted, but are not part from the original train.py script. # The important thing to keep in mind is the GOAL of this whole code: # * The RL algorithm does not have an example of how to conduct the conversation. # * We only count with the desire of the user (user goal), and the available options that the bot will provide. # * The objective is to build a chatbot that is able to reach the user goal, making the right inquiries filling the missing spaces. # # First of all we bring the necessary imports. Many of them are scripts contained in the same folder, so we'll refer to those. 
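# Before going through the imports, the slot-filling objective described above can be pictured with a small, hypothetical user goal. The exact keys stored in the project's `movie_user_goals.pkl` file may differ; this dictionary is only an illustrative sketch of the kind of structure the agent has to satisfy.

# +
example_user_goal = {
    # Slots whose values the user already knows and will provide when asked
    "inform_slots": {"moviename": "zootopia", "city": "seattle", "numberofpeople": "2"},
    # Slots whose values the user wants the agent to fill in
    "request_slots": {"ticket": "UNK", "theater": "UNK", "starttime": "UNK"},
}

# An episode succeeds once every requested slot is filled with a value that is
# consistent with the database and with the user's informed constraints.
# -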
from user_simulator import UserSimulator from error_model_controller import ErrorModelController from dqn_agent import DQNAgent from state_tracker import StateTracker import pickle, argparse, json, math from utils import remove_empty_slots, timeprint from user import User from time import time import datetime # Immediately after, the main function is introduced: # ## 1.1. Main # `if __name__ == "__main__":` # The constants file is parsed from the folder. We comment these lines out as we're not **executing the script, but a notebook.** # We use the default setup of constants, shown below: # ### 1.1.1. Constants # We import the variables that will not change during the execution of the code, or CONSTANTS. These are normally either written in uppercase or saved in a separate file, as seen in this case. # + # Can provide constants file path in args OR run it as is and change 'CONSTANTS_FILE_PATH' below # 1) In terminal: python train.py --constants_path "constants.json" # 2) Run this file as is #parser = argparse.ArgumentParser() #parser.add_argument('--constants_path', dest='constants_path', type=str, default='') #args = parser.parse_args() #params = vars(args) # Load constants json into dict CONSTANTS_FILE_PATH = 'constants.json' #if len(params['constants_path']) > 0: # constants_file = params['constants_path'] #else: # constants_file = CONSTANTS_FILE_PATH constants_file = CONSTANTS_FILE_PATH # Put in place for correct execution with open(constants_file) as f: constants = json.load(f) # - print(json.dumps(constants, indent=4, sort_keys=True)) # Next, I proceed to describe the set of variables that we find here: # Do not copy the content from here, since I'm writing on top of it. # # ``` javascript # { # "agent": { # // All necessary hyperparameters to modify the behavior of the agent. These should be changed during the GRID SEARCH. # # "batch_size": 16, # "dqn_hidden_size": 80, # "epsilon_init": 0.0, # "gamma": 0.9, # "learning_rate": 0.001, # "load_weights_file_path": "", # "max_mem_size": 500000, # "save_weights_file_path": "", # "vanilla": true # }, # # // The paths to the 3 different databases that I will explain in detail next. All 3 are necessary. # # "db_file_paths": { # "database": "data/movie_db.pkl", # "dict": "data/movie_dict.pkl", # "user_goals": "data/movie_user_goals.pkl" # }, # # // The ERROR MODEL CONTROLLER is the component that induces error on the agent and the environment. # # "emc": { # "intent_error_prob": 0.0, # "slot_error_mode": 0, # "slot_error_prob": 0.05 # }, # # // The parameters used in the training process # # "run": { # "max_round_num": 20, // The maximum number of steps during an episode. Once reached, the episode is done. # "num_ep_run": 40000, // # "success_rate_threshold": 0.3, // Not really sure, we'll come back to that # "train_freq": 100, // The model does not train every time it predicts, but every "this variable" times # "usersim": true, // If the user is simulated # "warmup_mem": 1000 // The memory used in the warming up of the algorithm. # } # } # ``` # Load and parse the constants from this file into code variables.
# + # Load file path constants file_path_dict = constants['db_file_paths'] DATABASE_FILE_PATH = file_path_dict['database'] DICT_FILE_PATH = file_path_dict['dict'] USER_GOALS_FILE_PATH = file_path_dict['user_goals'] # Load run constants run_dict = constants['run'] USE_USERSIM = run_dict['usersim'] WARMUP_MEM = run_dict['warmup_mem'] NUM_EP_TRAIN = run_dict['num_ep_run'] TRAIN_FREQ = run_dict['train_freq'] MAX_ROUND_NUM = run_dict['max_round_num'] SUCCESS_RATE_THRESHOLD = run_dict['success_rate_threshold'] # - # ### 1.1.2. Databases # # # Load the user databases # #### Movie database # # This is the file of the options that the chatbot has available to offer. But the user does not choose. The chatbot must just offer the options and the user must confirm. # The example shown below depict the different parameters in the options, such as city, theater, critic_rating, etc. # + # Load movie DB # Note: If you get an unpickling error here then run 'pickle_converter.py' and it should fix it database = pickle.load(open(DATABASE_FILE_PATH, 'rb'), encoding='latin1') # Clean DB remove_empty_slots(database) database[0] # - # #### Dictionary database # It includes the single (unique) components of the movie database, in order to tag and insert into the network later. # # Load movie dict db_dict = pickle.load(open(DICT_FILE_PATH, 'rb'), encoding='latin1') db_dict # #### User goals # It is the input training tool, as each element represents a request of service from the user, and starts one episode. The more user goals, the more episodes to train with. # + # Load goal File user_goals = pickle.load(open(USER_GOALS_FILE_PATH, 'rb'), encoding='latin1') # Init. Objects if USE_USERSIM: user = UserSimulator(user_goals, constants, database) else: user = User(constants) user_goals # + emc = ErrorModelController(db_dict, constants) state_tracker = StateTracker(database, constants) dqn_agent = DQNAgent(state_tracker.get_state_size(), constants) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="cj5erWsexM2h" outputId="654de4b2-49ff-464f-ba43-1997e2342c96" # !pip install HydroErr import HydroErr as HEEM import numpy import pandas as pd from pandas import read_csv import math import keras from keras.models import Sequential from keras.layers import LSTM from keras.layers import Dense, Activation from sklearn.metrics import mean_squared_error from keras.layers import BatchNormalization, Dropout from google.colab import drive drive.mount('/content/gdrive') # + id="i0eJldqrxM29" AI_Method = "/content/gdrive/My Drive/Colab Notebooks/Nal7/12188MT-1-LSTM-SSA-Gini-1b-Sil" Var_LRs=[1e-2]# 1e-4,1e-5,1e-6,1e-7,1e-8,1e-9 Var_Decays=[1e-4] # 1e-3,1e-4,1e-5,1e-6,1e-7,1e-8,1e-9 Var_epochs=[100] # 100,500,1000,1500,2000 Streamflow=pd.read_csv('/content/gdrive/My Drive/Colab Notebooks/12188MT-1SSA-1.csv', delimiter=',') # + id="PKeLrvjKxM3N" colab={"base_uri": "https://localhost:8080/"} outputId="cbc1fd72-ab81-440d-8aad-cbb685766f43" import numpy as np x= Streamflow.drop('Q',axis=1) Y= Streamflow['Q'] x= x.drop('Wind',axis=1) x.head() x.info() # + id="HP41l7mOxM3b" colab={"base_uri": "https://localhost:8080/"} outputId="13b7705f-545b-410e-ce2e-35bd60ddd680" X=np.array(x) y=np.array(Y) print (X.shape[1]) # + id="NYQVG2IOxM3p" test_size = int(len(X) * 0.15) valid_size = int(len(X) * 0.15) train_size= len(X) - 
(valid_size+test_size) y_train, y_valid, y_test = y[0:train_size], y[train_size:train_size+valid_size], y[-test_size:] X_train, X_valid, X_test = X[0:train_size], X[train_size:train_size+valid_size], X[train_size+valid_size:] X_train = numpy.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1])) X_valid = numpy.reshape(X_valid, (X_valid.shape[0], 1, X_valid.shape[1])) X_test = numpy.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1])) input=X_train[1:] input_shape=X_train.shape[1:] # + id="pDp56jR4iL5B" colab={"base_uri": "https://localhost:8080/"} outputId="cfa677a6-5914-4af1-92f8-d4b3fc2104ba" print (y_train.shape, y_valid.shape, y_test.shape) # + id="NZmw4JnIxM33" from datetime import datetime from sklearn.metrics import mean_absolute_error from sklearn.metrics import mean_squared_error from math import sqrt import numpy as geek # + id="pisDJM4b2rzU" #def create_NN(): model = Sequential() model.add(LSTM(200, input_shape=X_train.shape[1:], activation='relu',return_sequences=True)) model.add(Dropout(0.2)) model.add(LSTM(200, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(100, activation='relu')) model.add(Dropout(0.10)) model.add(Dense(1, activation='relu')) # + id="PZ96ms4xxM4F" colab={"base_uri": "https://localhost:8080/"} outputId="db30c4d2-91e3-4212-baff-e34687cb3151" startTime = datetime.now() fileOfSummary = open(AI_Method+'_Summary_'+str(datetime.now()).replace(":", ".")+'.csv', "w") fileOfSummary.write("Method,LR,DE,Epoch,RunTime,CCTr,CCVd,CCTt,NSTr,NSVd,NSTt,WITr,WIVd,WITt,RMSETr,RMSEVd,RMSETt,MAETr,MAEVd,MAETt"+str(X.shape[1])+"\n") # + id="5UFy_CAexM4O" colab={"base_uri": "https://localhost:8080/"} outputId="1098b3b7-9f91-431c-db06-497a4d0385ee" optimizer = keras.optimizers.Adam(lr=e_LR, decay=e_decay) model.compile(optimizer=optimizer,loss='mean_squared_error') print(" ") print(" ") print("LRs:",e_LR) print("Decays:",e_decay) print("epochs:",e_epoch) history = model.fit(X_train, y_train, epochs=e_epoch, batch_size=256, verbose=0, validation_data=(X_valid, y_valid), shuffle=True) hist_df = pd.DataFrame(history.history) Time_elasped= datetime.now() - startTime print('\nTime elapsed: ', Time_elasped) Train = model.predict(X_train) Valid = model.predict(X_valid) Test = model.predict(X_test) FileName=AI_Method+'-LR'+str(e_LR)+'-DE'+str(e_decay)+'-'+str(e_epoch) np.savetxt(FileName+'_Train.csv', Train) np.savetxt(FileName+'_Valid.csv', Valid) np.savetxt(FileName+'_Test.csv', Test) Train = Train.reshape(Train.shape[0]) Valid = Valid.reshape(Valid.shape[0]) Test = Test.reshape(Test.shape[0]) print(" ") print("LRs:",e_LR) print("Decays:",e_decay) print("epochs:",e_epoch) print("Train ==>") CC_Train=np.corrcoef(y_train,Train) print("CC_Train = %.3f" %CC_Train[0,1]) NSTr=HEEM.nse(Train, y_train) print("NSTr = %.2f" %NSTr) WITr=HEEM.d(Train, y_train) print("WITr = %.2f" %WITr) rootMeanSquaredErrorTr = sqrt(mean_squared_error(y_train, Train)) print("RMSE = %.2f" % rootMeanSquaredErrorTr) MAETr=mean_absolute_error(y_train, Train) print("MAE = %.2f" % MAETr) print("Validation ===============>") CC_Valid=np.corrcoef(y_valid,Valid) print("CC_Test = %.3f" %CC_Valid[0,1]) NSVd=HEEM.nse(Valid, y_valid) print("NSTt = %.2f" %NSVd) WIVd=HEEM.d(Valid, y_valid) print("WITr = %.2f" %WITr) rootMeanSquaredErrorVd = sqrt(mean_squared_error(y_valid, Valid)) print("RMSE = %.2f" % rootMeanSquaredErrorVd) MAEVd=mean_absolute_error(y_valid, Valid) print("MAE = %.2f" % MAEVd) print("Test ======================>") CC_Test=np.corrcoef(y_test,Test) print("CC_Test = %.3f" %CC_Test[0,1]) 
NSTt=HEEM.nse(Test, y_test) print("NSTt = %.2f" %NSTt) WITt=HEEM.d(Test, y_test) print("WITr = %.2f" %WITr) rootMeanSquaredErrorTt = sqrt(mean_squared_error(y_test, Test)) print("RMSE = %.2f" % rootMeanSquaredErrorTt) MAETt=mean_absolute_error(y_test, Test) print("MAE = %.2f" % MAETt) fileOfSummary.write(AI_Method+','+str(e_LR)+','+str(e_decay)+','+str(e_epoch)+','+str(Time_elasped)+',' +str(CC_Train[0,1])+','+str(CC_Valid[0,1])+','+str(CC_Test[0,1])+',' +str(NSTr)+','+str(NSVd)+','+str(NSTt)+',' +str(WITr)+','+str(WIVd)+','+str(WITt)+',' +str(rootMeanSquaredErrorTr)+','+str(rootMeanSquaredErrorVd)+','+str(rootMeanSquaredErrorTt)+',' +str(MAETr)+','+str(MAEVd)+','+str(MAETt)+'\n') # + id="Uds-9YfxxM4e" colab={"base_uri": "https://localhost:8080/"} outputId="c740e2f9-f5d1-43a3-8eff-1be6b47e7bf1" import io buf = io.StringIO() x.info(buf=buf) s = buf.getvalue() fileOfSummary.write(s) fileOfSummary.close() print("Finished") # + id="JBpRfeyZ2L2S" import matplotlib.pyplot as plt plt.figure(figsize=(8,4)) plt.plot(history.history['loss'], label='Train Loss') plt.plot(history.history['val_loss'], label='Test Loss') plt.title('model loss') plt.ylabel('loss') plt.xlabel('epochs') plt.legend(loc='upper right') plt.show(); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Linear Regression in Python # Linear Regression Using Statsmodels.api import statsmodels.api as sm import pandas as pd # # Data Preprocessing # Reading the Data data = pd.read_csv("insurance.csv", delimiter = ",") print(data.head(n=5)) # + # Getting the data ready for the models data = pd.DataFrame(data) predictors = data.iloc[:,:6] #all columns except charges y = data.iloc[:,-1] #charges, response variable df = data.copy() #copying the data # Label encoding the data - from categorical to numerical object_df = data.select_dtypes(include=['object']).copy() object_df["sex"] = object_df["sex"].astype('category') object_df["smoker"] = object_df["smoker"].astype('category') object_df["region"] = object_df["region"].astype('category') object_df["sex_binary"] = object_df["sex"].cat.codes object_df["smoker_binary"] = object_df["smoker"].cat.codes object_df["region_encoded"] = object_df["region"].cat.codes #changing the columns in the data df["sex"] = object_df["sex_binary"] df["smoker"] = object_df["smoker_binary"] df["region"] = object_df["region_encoded"] print(df.head(n=5)) # - # # Modeling # Models predictors = df.iloc[:,:6] #all columns except charges - using the new dataframe, 'df' for i in predictors: X = predictors[i] X = sm.add_constant(X) #adding a constant - adding an intercept otherwise we would get the wrong model model = sm.OLS(y,X) results = model.fit() print(f"{i} vs. charges: ", results.summary()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="wG91ueXbo_iP" # !git clone https://github.com/tiagosardi/emotionrecognition.git # + id="NvbknSweQRAH" # !pip install -q kaggle # + id="m_b5mPd5Qr56" from google.colab import files files.upload() # + id="tVd0bfL9PLAv" # ! mkdir ~/.kaggle # ! 
cp kaggle.json ~/.kaggle/ # + id="mVsRFaCzRSIJ" # !chmod 600 ~/.kaggle/kaggle.json # + id="OKWBZVHHRk3C" # !kaggle datasets list # + id="ff-rBJVJTK9R" # !kaggle competitions download -c challenges-in-representation-learning-facial-expression-recognition-challenge # + id="rWcD7op3LsZS" # !unzip train.csv.zip -d train # + id="Qqm7wGHNLu_w" # !unzip test.csv.zip -d test # + id="q35rMhnfQCiP" # !tar -vzxf fer2013.tar.gz # + id="_YMJ2L-9Tuj_" import cv2 import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow # + id="_xRYbz6bBNKP" data = pd.read_csv('fer2013/fer2013.csv') # + id="2w_junKmSsLv" data.tail() # + id="6epqK1x_SwH6" plt.figure(figsize=(12,6)) plt.hist(data['emotion'], bins= 30) plt.title('Img X emotions') # + id="G2dOPU9iTHsH" height, width = 48,48 faces = [] samples = 0 for pixel_sequence in pixels: face = [int(pixel) for pixel in pixel_sequence.split(' ')] face = np.asarray(face).reshape(height,width) faces.append(face) if(samples<10): cv2_imshow(face) amostras+=1 # + id="o9OKwnL0QSQa" X_train, X_test , y_train, y_test = train_test_split(faces, emocoes, test_size = .1, random_state = 42) X_train, X_val ,y_train, y_val = train_test_split (X_train, y_train , test_size = .1, random_state = 41) # + id="KYueDhDDEwNr" print("Numero de imagens no conjunto de treinamento: ", len(X_train)) print("Numero de imagens no conjunto de teste: ", len(X_test)) print("Numero de imagens no conjunto de validacao: ", len(X_val)) # + id="SjA6H8T6FDAd" np.save('mod_xtest' , X_test) np.save('mod_ytest', y_test) # + id="pRmlPucGFs5s" num_features = 64 num_labels = 7 batch_size = 64 epochs = 100 width, height = 48,48 # + id="6MvcbmBDF-mS" model.compile (loss= 'categorical_crossentropy', optimizer = Adam(lr=.001, beta_1=.9, beta_2= .999, epsilon=1e-7), metrics=['accuracy']) arquivo_modelo = 'modelo_01_expressoes.h5' arquivo_modelo_json = 'modelo_01_expressoes.json' lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor = .9, patience=3, verbose = 1) early_stopper = EarlyStoppping(monitor='val_loss', min_delta=0,patience=8, verbose=1, mode='auto') checkpointer = ModelCheckpoint(arquivo_modelo, monitor='val_loss', verbose=1, save_best_only=True) # + id="VMbVb8bOJ4rQ" history = model.fit(np.array(X_train), np.array(y_train), batch_size = batch_size, epochs = epochs, verbose = 1, validation_data = (np.array(X_val),np.array(y_val)), shuffle = True, callbacks = [lr_reducer, early_stopper, checkpointer]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pennylane as qml from pennylane import numpy as np # import numpy as np dev = qml.device(name = 'default.qubit', wires = 3, shots = 1000) @qml.qnode(dev, interface=None) def circuit(w): for i in range(3): qml.RX(w[i], wires=i) qml.CNOT(wires=[0, 1]) qml.CNOT(wires=[1, 2]) qml.CNOT(wires=[2, 0]) qml.RY(w[3], wires=1) qml.CNOT(wires=[0, 1]) qml.CNOT(wires=[1, 2]) qml.CNOT(wires=[2, 0]) qml.RX(w[4], wires=2) return qml.expval(qml.PauliZ(0) @ qml.PauliZ(2)) # + weights = '0.1,0.2,0.1,0.2,0.7' weights = weights.split(",") weights = np.array(weights, float) print(weights) print(circuit(weights)) print(circuit.draw()) # - def psr(qnode, weights, i): param1 = weights.copy() param2 = weights.copy() s = np.pi/2 # Forward shift param1[i] = param1[i] + (np.pi/2) forward_shift = qnode(param1) # Backward Shift param2[i] = param2[i] - (np.pi/2) backward_shift = 
qnode(param2) gradient = (forward_shift - backward_shift) / (2*np.sin(np.pi/2)) return gradient def hess(qnode, weights, i, j): param = weights.copy() param2 = weights.copy() s = np.pi/2 # Forward shift param[j] = param[j] + (np.pi/2) forward_shift = qnode(param) # Backward Shift param2[j] = param2[j] - (np.pi/2) backward_shift = qnode(param2) return (forward_shift - backward_shift) / (2*np.sin(np.pi/2)) def main(qnode, weights): gradient = np.zeros([5], dtype=np.float64) gradient2 = np.zeros([5], dtype=np.float64) jacobian = np.zeros([5, 5], dtype=np.float64) hessian = np.zeros([5, 5], dtype=np.float64) for i in range(len(weights)): gradient[i] = psr(qnode, weights, i) print(gradient) return hessian print(main(circuit, weights)) # + 0.0012756024,-0.7668909241,-0.1890228368,-0.0374176229,-0.7914937431, 0.012713476,0.0,0.012713476,0.0062927444,0.0015144486, 0.0,-0.6210429671,0.0769457494,0.1248084389,-0.6036371367, 0.012713476,0.0769457494,-0.6083294911,-0.6276219392,-0.0725364203, 0.0062927444,0.1248084389,-0.6276219392,0.1375219149,-0.0444237671, 0.0015144486,-0.6036371367,-0.0725364203,-0.0444237671,-0.6083294911, 51 # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## HW 8: Pattern Extraction & Why Algorithm Chosen (+ Parameter Tuning) # ### OCEN 460 # ### Team: _/Sample_Text/ # ### Members: and # Overview: # # The purpose of this project is to use existing data on the growth of coral to predict whether coral can grow given oceanographic conditions. The latitude, longitude, depth, temperature, salinity, and dissolved oxygen levels are used to predict a binary value with 1 meaning that coral can grow and 0 meaning that coral cannot grow. import tensorflow as tf import pandas as pd import numpy as np from tensorflow import keras from tensorflow.keras import layers import matplotlib.pyplot as plt import RegscorePy from math import sqrt, floor import os import pathlib from itertools import product from scipy.stats import pearsonr import time from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error # %matplotlib inline # Import the necessary packages. Some uncommon ones are RegscorePy, which is a custom open-source library used to calculate the AIC of tensorflow models, and itertools.product which is used to generate all combinations of elements in a list. # ## 1) Load data in # # The raw data is loaded in from a csv file and processed into a training and testing dataset. # + cwd = pathlib.Path(os.getcwd()) path = str(cwd.parent) + '/coral-prediction/processed_data/combined_data_truncated.csv' raw = pd.read_csv(path) raw = raw.sample(frac=0.2, random_state=0) print(raw.describe()) raw.pop('species') raw.pop('round_d') train = raw.sample(frac=0.8, random_state=0) test = raw.drop(train.index) train_features = train.copy() test_features = test.copy() train_labels = train_features['coral_present'] test_labels = test_features['coral_present'] train_features.pop('coral_present') test_features.pop('coral_present') # - # ## 2) Feature evaluation - correlation coefficients # # The correlation coefficients and p-values for each feature are reported. Since all p-values are <0.05, each feature selected is relevant to the model and will be kept. # # It is interesting that the longitude is strongly negatively correlated to the presence of coral. 
This implies that at more eastern locations (say 90E - 180E) it is less likely that coral will grow. This may be a residual of other conditions, such as the much deeper waters in the eastern Pacific Ocean - since the depth is also negatively correlated. print(train.corr()['coral_present']) print(pearsonr(train_features['latitude'], train_labels)) print(pearsonr(train_features['longitude'], train_labels)) print(pearsonr(train_features['depth'], train_labels)) print(pearsonr(train_features['temperature'], train_labels)) print(pearsonr(train_features['salinity'], train_labels)) print(pearsonr(train_features['oxygen'], train_labels)) # ## 3) Function definitions # # The following functions were used during the training and evaluation of the model. # # fit_and_evaluate() accepts a model architecture and fits the training data to it, and the reports the accuracy metrics based on the test dataset. # # The hyperparameters selected for the training are: 20% validation split and 50 epochs training duration. These were selected by testing different configurations and choosing the hyperparameters that resulted in the most accurate model. # # plot_loss() accepts the model training residuals and plots them over the training duration (number of epochs) the loss and validation loss are both shown. # # add_layer() is a part of the parametric model study, which can add hidden layers to a tensorflow model by passing parameters and hyperparameters. This functionality will hopefully be bundled into a package at some point so that users can pip install the ability to do a parametric study. # # build_and_compile_model() accepts the model architecture from the user and creates the tensorflow model. It then fits the model and performs the accuracy evaluations. def fit_and_evaluate(architecture): dnn_model = build_and_compile_model(architecture) history = dnn_model.fit(train_features, train_labels, validation_split=0.2, verbose=0, epochs=50) plot_loss(history) test_results = dnn_model.evaluate(test_features, test_labels, verbose=0) test_predictions = dnn_model.predict(test_features).flatten() r2= r2_score(np.asarray(test_labels).flatten(), test_predictions) mae = mean_absolute_error(np.asarray(test_labels).flatten(), test_predictions) aic = RegscorePy.aic.aic(np.asarray(test_labels, dtype=float).flatten(), np.asarray(test_predictions).astype(float), 4+2) rmse = sqrt(mean_squared_error(np.asarray(test_labels).flatten(), test_predictions)) return dnn_model, aic, r2, mae, rmse, test_predictions def plot_loss(history): plt.plot(history.history['loss'], label='loss') plt.plot(history.history['val_loss'], label='val_loss') # plt.ylim([0, 30]) plt.xlabel('Epoch') plt.ylabel('Error') plt.legend() plt.grid(True) plt.show() # pass def add_layer(dets, hyper, prev): default = ['relu'] try: layer = layers.Dense(dets, activation=hyper[0])(prev) except IndexError: layer = layers.Dense(dets, activation=default[0])(prev) return layer def build_and_compile_model(arch): # Adjust the number of hidden layers and neurons per layer that results in best fit NN hidden_layers = [] inputs = keras.Input(shape=(6,)) norm_layer = layers.BatchNormalization()(inputs) hidden_layers.append(inputs) hidden_layers.append(norm_layer) for i in range(num_hidden): if arch[i] == 0: pass else: layer = add_layer(arch[i], arch[num_hidden:], hidden_layers[-1]) hidden_layers.append(layer) layer = layers.Dropout(rate=0.2)(hidden_layers[-1]) hidden_layers.append(layer) outputs = layers.Dense(1)(hidden_layers[-1]) hidden_layers.append(outputs) model = 
keras.Model(inputs=inputs, outputs=outputs) model.compile(loss='mean_absolute_error', optimizer=tf.keras.optimizers.Adam(0.001)) return model # ## 4) Using the model trainer # # The following code sets up the necessary data to train the models. By commenting out the line: # # list(product(\*[l1, l2, l3, activ])), the parametric search is deactivated, and only the architecture specified in the next line is fitted # # Additionally, some timing features are implemented. Passing a large parametric space can result in the program running for over 4 hours, training each model. Because of this, a progress meter is added so that the user can see how far along the program is. models = [] aic_scores = [] r2_scores = [] maes = [] rmses = [] l1 = np.linspace(32, 256, 5) l2 = np.linspace(0, 256, 5) l3 = np.linspace(0, 256, 5) activ = ['relu', 'tanh'] # parametric_space = list(product(*[l1, l2, l3, activ])) parametric_space = [[200, 64, 192, 'relu']] print(parametric_space) num_hidden = len(list(i for i in parametric_space[0] if isinstance(i, (int or float)))) num_hyper = len(parametric_space[0]) - num_hidden start_t = time.time() c = 1 # ## 5) Running the model training # # This for loop passes each architecture specified in the parametric space into the build_and_compile_model() function and reports the accuracy metrics into their specific list. It also updates the progress each time a model is finished evaluation for arch in parametric_space: print('Progress: ' + str(c) + '/' + str(len(parametric_space))) dnn_model, aic, r2, mae, rmse, test_predictions = fit_and_evaluate(arch) models.append(dnn_model) aic_scores.append(aic) r2_scores.append(r2) maes.append(mae) rmses.append(rmse) curr_time = time.time() diff_t = curr_time - start_t t_per_model = diff_t / c num_mods_rem = len(parametric_space) - c t_rem = t_per_model * num_mods_rem print("Estimated Time Remaining: " + time.strftime('%H:%M:%S', time.gmtime(t_rem)) + ' seconds') c += 1 # ## 6) Choosing the best model # # After the parametric seach is finished, the accuracy metrics are compiled into a csv file. The user must look through these results and pick the model which has the LOWEST AIC score and HIGHEST R-Squared. # # The final 2 lines save the 0-th model in the parametric space for later use. During the parametric seach, this should be disabled because the 0-th model is likely not the most accurate. When the parametric search is disabled (only a single architecture is being fitted) this can be re-enabled to save the model. 
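# The manual inspection described above can also be automated. The helper below is a sketch (not part of the original study); it assumes the results table `oput` assembled in the next cell, with the column names used there ('AIC' and 'R2'), and simply reports the row with the lowest AIC and the row with the highest R-squared.

# +
def pick_best_architecture(results_df):
    # The accuracy metrics were stored as strings, so cast before comparing
    best_by_aic = results_df['AIC'].astype(float).idxmin()
    best_by_r2 = results_df['R2'].astype(float).idxmax()
    if best_by_aic == best_by_r2:
        print('Row', best_by_aic, 'is best by both AIC and R2')
    else:
        print('AIC prefers row', best_by_aic, 'while R2 prefers row', best_by_r2)
    return best_by_aic

# Example (run only after the next cell has built `oput` and `models`):
# best_row = pick_best_architecture(oput)
# best_model = models[best_row]
# -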
parametric_space_t = np.asarray(parametric_space).transpose().tolist() output_data = [parametric_space_t[0], parametric_space_t[1], parametric_space_t[2], aic_scores, maes, rmses, r2_scores] output_data = np.asarray(output_data).transpose().tolist() print(output_data) oput = pd.DataFrame(output_data, columns=['L1', 'L2', 'L3', 'AIC', 'MAE', 'RMSE', 'R2']) # print(oput) # oput.to_csv('Parametric_space_study.csv', index=False) print(models[0].summary()) out_path = str(cwd.parent) + '/models/trial0.3.h5' # models[0].save(out_path) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pyodbc pyodbc.autocommit = True conn = pyodbc.connect("DSN=Sample Hortonworks Hive DSN;", autocommit=True) cursor = conn.cursor(); cursor.execute("select year, symbol, total_dividends from default.yearly_aggregates where symbol = 'IBM' and year = '2005'") result = cursor.fetchall() for r in result: print(r) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Given a non-empty string s, you may delete at most one character. Judge whether you can make it a palindrome. # # #### Example 1: # Input: "aba" # Output: True # # #### Example 2: # Input: "abca" # Output: True # Explanation: You could delete the character 'c'. # # [1:5] is equivalent to "from 1 to 5" (5 not included) # # [1:] is equivalent to "1 to end" # # [len(a):] is equivalent to "from length of a to end" def validPalindrome(s): for i in range(len(s)): t = s[:i] + s[i+1:] if t == t[::-1]: return True return s == s[::-1] print(validPalindrome("acba")) def validPalindrome(s): for i in range(len(s)): print("s[:i]",s[:i]) print("s[i+1:]",s[i+1:]) t = s[:i] + s[i+1:] print("t",t) print("t[::-1]",t[::-1]) if t == t[::-1]: print("inside if") return True print("outside if") return s == s[::-1] print(validPalindrome("acba")) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="FpHPaEEoVSUn" colab_type="text" # # DCGAN sample using tensorflow # # tensorflow を利用して MNIST で DCGAN を実行するサンプルです。 # # - [Deep Convolutional Generative Adversarial Network][tutorial] # # [tutorial]: https://www.tensorflow.org/tutorials/generative/dcgan # + [markdown] id="lcmZkCYOLQvk" colab_type="text" # ## 環境の確認 # + id="LnMGmEyyLT2f" colab_type="code" outputId="45144fc0-8267-4774-f710-2089ad258944" colab={"base_uri": "https://localhost:8080/", "height": 53} # !cat /etc/issue # + id="UDMUXlaTLpG2" colab_type="code" outputId="913796cf-f9a4-4bf2-8ae6-a0529466b97e" colab={"base_uri": "https://localhost:8080/", "height": 71} # !free -h # + id="WHn3f12zLsTw" colab_type="code" outputId="b16b4deb-3d0f-4fdf-a0df-7280ffdc9911" colab={"base_uri": "https://localhost:8080/", "height": 1000} # !cat /proc/cpuinfo # + id="3FUZPKAcL4jO" colab_type="code" outputId="fe8f507b-fad4-455e-87e0-de49a1de987f" colab={"base_uri": "https://localhost:8080/", "height": 323} # !nvidia-smi # + id="w2V3sVtNPYdT" colab_type="code" outputId="a6fbfacf-2f49-420a-a82b-7a0c9125b066" colab={"base_uri": "https://localhost:8080/", "height": 35} # !python --version # + id="Zyf3UfsENhOf" colab_type="code" colab={} from 
logging import Logger def get_logger() -> Logger: import logging logger = logging.getLogger(__name__) fmt = "%(asctime)s %(levelname)s %(name)s :%(message)s" logging.basicConfig(level=logging.INFO, format=fmt) return logger logger = get_logger() # + id="EppKsLuHMYjR" colab_type="code" outputId="82552424-df85-48da-f25a-6b66654880aa" colab={"base_uri": "https://localhost:8080/", "height": 35} def check_tf_version() -> None: import tensorflow as tf logger.info(tf.__version__) check_tf_version() # + [markdown] id="SKyYk7oMY21S" colab_type="text" # ## ソースコードの取得 # + id="PjgDKO_RV_U5" colab_type="code" outputId="a80b4b94-64d3-450f-84f6-c65ba3889e30" colab={"base_uri": "https://localhost:8080/", "height": 395} # 対象のコードを取得 # !git clone -n https://github.com/iimuz/til.git # %cd til # !git checkout fdfa134 # %cd python/dcgan_tensorflow # + [markdown] id="_vUlewGLYtT1" colab_type="text" # ## 実行 # + [markdown] id="J4llcBuDKMus" colab_type="text" # ### 事前準備 # + id="H8Ng5oAwIevW" colab_type="code" colab={} import tensorflow.compat.v1 as tfv1 tfv1.enable_eager_execution() # + [markdown] id="NDZFpP_IKKCr" colab_type="text" # ### データセットの確認 # + id="8AwAXG4UHZ7V" colab_type="code" outputId="ce17000a-df4a-48f1-dd97-d4895f0cf8e3" colab={"base_uri": "https://localhost:8080/", "height": 599} # %run -i dataset.py # + id="nv1OyOv3I9f4" colab_type="code" outputId="4c17d097-54b0-45e8-9592-50f57c0e91e2" colab={"base_uri": "https://localhost:8080/", "height": 91} import dataset raw_train, raw_test = dataset.get_batch_dataset() # + [markdown] id="leX-NiQxKRHh" colab_type="text" # ### ネットワークの確認 # + id="sLKS5JeWHdNl" colab_type="code" outputId="5d339bc2-6492-4a75-daa5-59bfdd2dd6a4" colab={"base_uri": "https://localhost:8080/", "height": 1000} # %run -i network.py # + [markdown] id="lpsueOcOJRrK" colab_type="text" # ### 学習の実行 # + id="H60exsmWJWPo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 550} outputId="0ada8b18-fcb6-4632-fe63-5ccc76fee25d" import train train.train(raw_train, batch_size=256, epochs=100, gen_input_dim=100, disc_input_shape=(28, 28, 1)) # + [markdown] id="0QiTCqIlJtMG" colab_type="text" # ### 結果 # + id="J83wMaOaJvSA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 377} outputId="5bbc75d7-a678-47fd-80a1-249a955ff5bc" import utils import IPython from IPython import display def show_generated_images(): filepath = "_data/dcgan.gif" utils.save_gif("_data/", "image_at_epoch_*", filepath) try: from google.colab import files except ImportError: pass else: files.download(filepath) show_generated_images() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pylab as plt # %matplotlib inline import tqdm import json from frbpa.utils import get_phase # + with open('r3_data.json', 'r') as f: r3_data = json.load(f) burst_dict = r3_data['bursts'] # - period = 16.35 phases_1 = [] phases_2 = [] mjds = [] for k in burst_dict.keys(): phases_1 += list(get_phase(np.array(burst_dict[k]), period, ref_mjd=58369.30)) phases_2 += list(get_phase(np.array(burst_dict[k]), period+0.01, ref_mjd=58369.30)) mjds += burst_dict[k] plt.scatter(np.array(mjds) - 58369.30, phases_1, color='r', label='Period = 16.35 days') plt.scatter(np.array(mjds) - 58369.30, phases_2, color='b', label='Period = 16.36 days') plt.xlabel('MJDs') plt.ylabel('Phases') plt.legend() plt.grid() _, b, _ = plt.hist(phases_1, 
bins=15, alpha=0.6, label='Period = 16.35 days') plt.hist(phases_2, bins=b, alpha=0.6, label='Period = 16.36 days') plt.legend() plt.xlabel('phase') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import numpy as np import json overlap_size = 36 window_size = 45 vals = [] train_list = [] val_list = [] test_list = [] def frame_preprocess(vid, vid_class): for start in range(0, len(vid)-window_size, window_size - overlap_size): currCase = np.empty([window_size,25,2]) for index in range(0,window_size): currCase[index] = vid[start+index] vals.append((currCase, vid_class)) vid_class = 0 for vid_folder in os.listdir('bhujangasan/'): folder_name = 'bhujangasan/' + str(vid_folder) + '/' l = [] for frame in os.listdir(folder_name): with open(folder_name + frame) as json_data: d = json.load(json_data) try: data = d['people'][0]['pose_keypoints_2d'] #remove confidence values for j in range(2,52,2): data.pop(j) except: print ("Failed at" + frame) Xdata = Xdata Ydata = Ydata stk = np.dstack((Xdata, Ydata)) l.append(stk) print ("Added this to list" + frame) continue json_data.close npdata = np.asarray(data) #print(npdata.shape) Xdata = data[::2] #print(len(Xdata)) Ydata = data[1::2] stk = np.dstack((Xdata, Ydata)) l.append(stk) #y_vals.append(vid_class) #print(Xdata, Ydata) frame_preprocess(l, vid_class) # + for i in range(int(0.60*len(vals))): train_list.append(vals[i]) for i in range(int(0.60*len(vals)), int(0.60*len(vals))+int(0.20*len(vals))): val_list.append(vals[i]) for i in range(int(0.60*len(vals))+int(0.20*len(vals)), len(vals)): test_list.append(vals[i]) # - vals = [] vid_class = 1 for vid_folder in os.listdir('padamasan/'): folder_name = 'padamasan/' + str(vid_folder) + '/' l = [] for frame in os.listdir(folder_name): with open(folder_name + frame) as json_data: d = json.load(json_data) try: data = d['people'][0]['pose_keypoints_2d'] #remove confidence values for j in range(2,52,2): data.pop(j) except: print ("Failed at" + frame) Xdata = Xdata Ydata = Ydata stk = np.dstack((Xdata, Ydata)) l.append(stk) print ("Added this to list" + frame) continue json_data.close npdata = np.asarray(data) #print(npdata.shape) Xdata = data[::2] #print(len(Xdata)) Ydata = data[1::2] stk = np.dstack((Xdata, Ydata)) l.append(stk) frame_preprocess(l, vid_class) # + for i in range(int(0.60*len(vals))): train_list.append(vals[i]) for i in range(int(0.60*len(vals)), int(0.60*len(vals))+int(0.20*len(vals))): val_list.append(vals[i]) for i in range(int(0.60*len(vals))+int(0.20*len(vals)), len(vals)): test_list.append(vals[i]) # - vals = [] vid_class = 2 for vid_folder in os.listdir('shavasan/'): folder_name = 'shavasan/' + str(vid_folder) + '/' l = [] for frame in os.listdir(folder_name): with open(folder_name + frame) as json_data: d = json.load(json_data) try: data = d['people'][0]['pose_keypoints_2d'] #remove confidence values for j in range(2,52,2): data.pop(j) except: print ("Failed at" + frame) Xdata = Xdata Ydata = Ydata stk = np.dstack((Xdata, Ydata)) l.append(stk) print ("Added this to list" + frame) continue json_data.close npdata = np.asarray(data) #print(npdata.shape) Xdata = data[::2] #print(len(Xdata)) Ydata = data[1::2] stk = np.dstack((Xdata, Ydata)) l.append(stk) frame_preprocess(l, vid_class) # + for i in range(int(0.60*len(vals))): train_list.append(vals[i]) for i in range(int(0.60*len(vals)), 
int(0.60*len(vals))+int(0.20*len(vals))): val_list.append(vals[i]) for i in range(int(0.60*len(vals))+int(0.20*len(vals)), len(vals)): test_list.append(vals[i]) # - vals = [] vid_class = 3 for vid_folder in os.listdir('tadasan/'): folder_name = 'tadasan/' + str(vid_folder) + '/' l = [] for frame in os.listdir(folder_name): with open(folder_name + frame) as json_data: d = json.load(json_data) try: data = d['people'][0]['pose_keypoints_2d'] #remove confidence values for j in range(2,52,2): data.pop(j) except: print ("Failed at" + frame) Xdata = Xdata Ydata = Ydata stk = np.dstack((Xdata, Ydata)) l.append(stk) print ("Added this to list" + frame) continue json_data.close npdata = np.asarray(data) #print(npdata.shape) Xdata = data[::2] #print(len(Xdata)) Ydata = data[1::2] stk = np.dstack((Xdata, Ydata)) l.append(stk) frame_preprocess(l, vid_class) # + for i in range(int(0.60*len(vals))): train_list.append(vals[i]) for i in range(int(0.60*len(vals)), int(0.60*len(vals))+int(0.20*len(vals))): val_list.append(vals[i]) for i in range(int(0.60*len(vals))+int(0.20*len(vals)), len(vals)): test_list.append(vals[i]) # - val = [] vid_class = 4 for vid_folder in os.listdir('trikonasan/'): folder_name = 'trikonasan/' + str(vid_folder) + '/' l = [] for frame in os.listdir(folder_name): with open(folder_name + frame) as json_data: d = json.load(json_data) try: data = d['people'][0]['pose_keypoints_2d'] #remove confidence values for j in range(2,52,2): data.pop(j) except: print ("Failed at" + frame) Xdata = Xdata Ydata = Ydata stk = np.dstack((Xdata, Ydata)) l.append(stk) print ("Added this to list" + frame) continue json_data.close npdata = np.asarray(data) #print(npdata.shape) Xdata = data[::2] #print(len(Xdata)) Ydata = data[1::2] stk = np.dstack((Xdata, Ydata)) l.append(stk) frame_preprocess(l, vid_class) # + for i in range(int(0.60*len(vals))): train_list.append(vals[i]) for i in range(int(0.60*len(vals)), int(0.60*len(vals))+int(0.20*len(vals))): val_list.append(vals[i]) for i in range(int(0.60*len(vals))+int(0.20*len(vals)), len(vals)): test_list.append(vals[i]) # - vals = [] vid_class = 5 for vid_folder in os.listdir('vrikshasan/'): folder_name = 'vrikshasan/' + str(vid_folder) + '/' l = [] for frame in os.listdir(folder_name): with open(folder_name + frame) as json_data: d = json.load(json_data) try: data = d['people'][0]['pose_keypoints_2d'] #remove confidence values for j in range(2,52,2): data.pop(j) except: print ("Failed at" + frame) Xdata = Xdata Ydata = Ydata stk = np.dstack((Xdata, Ydata)) l.append(stk) print ("Added this to list" + frame) continue json_data.close npdata = np.asarray(data) Xdata = data[::2] Ydata = data[1::2] stk = np.dstack((Xdata, Ydata)) l.append(stk) frame_preprocess(l, vid_class) # + for i in range(int(0.60*len(vals))): train_list.append(vals[i]) for i in range(int(0.60*len(vals)), int(0.60*len(vals))+int(0.20*len(vals))): val_list.append(vals[i]) for i in range(int(0.60*len(vals))+int(0.20*len(vals)), len(vals)): test_list.append(vals[i]) # - X_train = [] y_train = [] X_test = [] y_test = [] X_val = [] y_val = [] for i in range(0, len(train_list)): X_train.append(train_list[i][0]) for i in range(0, len(train_list)): y_train.append(train_list[i][1]) for i in range(0, len(val_list)): X_val.append(val_list[i][0]) for i in range(0, len(val_list)): y_val.append(val_list[i][1]) for i in range(0, len(test_list)): X_test.append(test_list[i][0]) for i in range(0, len(test_list)): y_test.append(test_list[i][1]) X_train = np.array(X_train) y_train = np.array(y_train) X_val = 
np.array(X_val) y_val = np.array(y_val) X_test = np.array(X_test) y_test = np.array(y_test) np.save("data_files/X_train_eachCateg", X_train) np.save("data_files/X_test_eachCateg", X_test) np.save("data_files/X_val_eachCateg", X_val) np.save("data_files/y_train_eachCateg", y_train) np.save("data_files/y_test_eachCateg", y_test) np.save("data_files/y_val_eachCateg", y_val) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.6 64-bit (''covid-tools'': conda)' # metadata: # interpreter: # hash: 30011fb273b9676d4408653059de04445543fb2582d7f9ebe9689c8f78ae6178 # name: 'Python 3.7.6 64-bit (''covid-tools'': conda)' # --- # + import os import json import pandas as pd import seaborn as sns import numpy as np import math import pandas as pd import matplotlib.pyplot as plt from utils.analysys_utils import list_experiments_in_dir sns.set() # - experiments_path = os.path.join(os.getcwd(), "runs", "momentum_exps", "sidarthe_extended", "Italy") figs_path = os.path.join(experiments_path, "figs") if not os.path.exists(figs_path): os.makedirs(figs_path) experiments = list_experiments_in_dir(experiments_path) # + tags=[] data = [] indexes = [] for exp in experiments: try: # avoid NaNs val_loss = exp['final']['val_risks']['nrmse'] except: val_loss = np.nan #print(f"{exp['uuid']}") momentum = exp['settings']['momentum'] if(momentum): m = exp['settings']['m'] a = exp['settings']['a'] else: m = 'none' a = 'none' indexes.append((m,a)) data.append({ 'val_loss': val_loss, "momentum": momentum, "m": m, "a": a }) index = pd.MultiIndex.from_tuples(indexes, names=['m','a']) df = pd.DataFrame(data,index=index) df_mT = df.query("momentum").astype({'a': 'float32'}).query('a >= 0.') df_mF = df.query("not momentum").reset_index(drop=True) # - m_index = df_mT.index.unique('m').sort_values() a_index = df_mT.index.unique('a').sort_values() # + tags=[] # plot by fixing m pl, ax = plt.subplots() ax.set_yscale('log') for m in m_index[::5]: df_m = df_mT.loc[m] sns.lineplot(data=df_m, x='a', ax=ax, y='val_loss', label=f"{m:.2f}",legend='brief') # add band for baseline (no momentum) band_data = [] for i in range(0,len(df_mF)): val_loss = df_mF.iloc[i]['val_loss'] band_data.append({ 'val_loss': val_loss, 'a': a_index[0] }) band_data.append({ 'val_loss': val_loss, 'a': a_index[-1] }) band_df = pd.DataFrame(band_data) plot = sns.lineplot(data=band_df, x='a', ax=ax, y='val_loss', label='None', legend='brief') plot.set(ylabel="Validation loss") plot.get_legend().set_title("b") plot.get_figure().savefig(os.path.join(figs_path, "varying_a.pdf")) # + pl, ax = plt.subplots() ax.set_yscale('log') for a in a_index[::2]: df_a = df_mT.xs(a, axis=0, level=1) plot = sns.lineplot(data=df_a, x='m', ax=ax, y='val_loss', label=f"{a:.2f}",legend='brief') # add band for baseline (no momentum) band_data = [] for i in range(0,len(df_mF)): val_loss = df_mF.iloc[i]['val_loss'] band_data.append({ 'val_loss': val_loss, 'm': m_index[0] }) band_data.append({ 'val_loss': val_loss, 'm': m_index[-1] }) band_df = pd.DataFrame(band_data) plot = sns.lineplot(data=band_df, x='m', ax=ax, y='val_loss', label='None', legend='brief') plot.set(xlabel="b", ylabel="Validation loss") plot.get_legend().set_title("a") plot.get_figure().savefig(os.path.join(figs_path, "varying_m.pdf")) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Project: Data Modeling with Cassandra
#
# ## Introduction
# Sparkify wants to analyze the song played event data they've been collecting on songs and user activity on their new music streaming app. They are interested in understanding what songs users are listening to.
#
# The team identified three queries to analyze the song played event data.
    # # Solution will use Apache Cassandra to support these queries. Solution will require processing event log data in CSV format to be loaded to tables in Apache Cassandra database. # # ## Solution # Solution has two phases: # 1. Part-I : ETL pipeline for pre-processing the song played event log files # 2. Part-II : Create Apache Cassandra tables for these queires and load data to event data tables # ### Part-I : ETL pipeline for pre-processing the song played event log files # #### Import Python packages # Import Python packages import pandas as pd import cassandra import re import os import glob import numpy as np import json import csv import logging # #### Creates list of filepaths to process original event csv data files # + # tim.o.> adopted from the original project template def getFilePaths(filepath # type: str, file path to obtain filepaths of files in this directory ): ''' Returns filepaths from the given directory (and all subdirectories under this hierarchy). There is no filter for name of file/directory. ''' assert (filepath != None), "Invalid argument!" assert (filepath != ''), "Invalid Value!" file_path_list = None # Create a for loop to create a list of files and collect each filepath for root, dirs, files in os.walk(filepath): # join the file path and roots with the subdirectories using glob file_path_list = glob.glob(os.path.join(root,'*')) return file_path_list def printFilePaths(file_path_list # type: List[str], contains list of filepaths ): ''' Prints filepaths in the given list ''' assert (file_path_list != None), "Invalid argument!" cnt = 0 for f in file_path_list: print(f) cnt += 1 return cnt # + logger = logging.getLogger(__name__) # checking your current working directory print('Current working directory is %s ' % os.getcwd()) logger.info('Project-2> START') logger.info('Project-2> Current working directory is %s ', os.getcwd()) # Get your current folder and subfolder event data file_path = os.getcwd() + '/event_data' logger.info('Project-2> Looking for files in %s ', file_path ) # Get filepaths file_path_list=getFilePaths(file_path) print('There are %d event files in %s.' % (printFilePaths(file_path_list),file_path)) # - # #### Processing the event log files to create the event data file csv that will be loaded to Apache Casssandra tables # + # # tim.o.> adopted from project's template # def writeHeaderToEventDataFile(event_data_fname # type: str, event data file name ): ''' Writes a header to event data file''' assert (event_data_fname != None),'Invalid Value: Event Data File Name is None!' assert (event_data_fname != ''),'Invalid Value: Event Data File Name is Empty!' # Writes header to event data file with open(event_data_fname, 'w', encoding = 'utf8', newline='') as f: writer = csv.writer(f, dialect='myDialect') writer.writerow(['artist','firstName','gender','itemInSession','lastName','length',\ 'level','location','sessionId','song','userId']) return def writeToEventDataFile(full_data_rows_list, # type: List[str], list of lines from one event log event_data_fname # type: str, event data file name ): ''' Extracts the fields from each row of event log to write to event data file ''' assert (event_data_fname != None),'Invalid Value: Event Data File Name is None!' assert (event_data_fname != ''),'Invalid Value: Event Data File Name is Empty!' assert (full_data_rows_list != None),'Invalid Value: List of lines is None!' 
with open(event_data_fname, 'a', encoding = 'utf8', newline='') as f: writer = csv.writer(f, dialect='myDialect') # For each row, extract the fields needed for row in full_data_rows_list: if (row[0] != ''): writer.writerow((row[0], row[2], row[3], row[4], row[5], row[6], row[7], row[8], row[12], row[13], row[16])) return def processALogFile(file_name # type: str, file name of event log csv file ): ''' Reads event log (file_name) file and collects each line into a list ''' assert (file_name != None),'Invalid Value: Event Log File Name is None!' assert (file_name != ''),'Invalid Value: Event Log File Name is Empty!' # initiating an empty list of rows that will be generated from each file full_data_rows_list = [] # reading a csv file, f with open(file_name, 'r', encoding = 'utf8', newline='') as csvfile: # creating a csv reader object csvreader = csv.reader(csvfile) next(csvreader) # extracting each data row one by one and append it for line in csvreader: #print(line), if (line != '') full_data_rows_list.append(line) return full_data_rows_list def processLogFiles(file_path_list, # type: List[str], list of event log file names event_data_fname # type: str, event data file name ): ''' Extracts data from event log files (file_path_list), and collects them into event data file. ''' assert (file_path_list != None),'Invalid Value: List of event log file names is None!' assert (event_data_fname != None),'Invalid Value: Event Data File Name is None!' assert (event_data_fname != ''),'Invalid Value: Event Data File Name is Empty!' # for every filepath in the file path list for f in file_path_list: full_data_rows_list=processALogFile(f) #if( full_data_rows_list != None and len(full_data_rows_list) > 0): writeToEventDataFile(full_data_rows_list,event_data_fname) return # + # # tim.o.> continue to follow the project template # # Extract event logs to event data file evt_data_file_name = 'event_datafile_new.csv' # write header to event data file csv.register_dialect('myDialect', quoting=csv.QUOTE_ALL, skipinitialspace=True) writeHeaderToEventDataFile(evt_data_file_name) processLogFiles(file_path_list,evt_data_file_name) logging.info('Project-2> Completed transfering event log files in %s to %s ', file_path,evt_data_file_name ) # - # # tim.o.> from project template # # check the number of rows in event data csv file with open(evt_data_file_name, 'r', encoding = 'utf8') as f: print('There are %d rows in \"%s\".'%(sum(1 for line in f),evt_data_file_name)) # ### Part-II : Create Apache Cassandra tables for the following queries and load data to event data tables # # The CSV file titled **event_datafile_new.csv**, located within the Workspace directory, contains all song played event logs. # # The **event_datafile_new.csv** contains the following columns:
#
# | Index | Column |
# |-------|----------------------------|
# | 0 | artist |
# | 1 | firstName of user |
# | 2 | gender of user |
# | 3 | item number in session |
# | 4 | last name of user |
# | 5 | length of the song |
# | 6 | level (paid or free song) |
# | 7 | location of the user |
# | 8 | sessionId |
# | 9 | song title |
# | 10 | userId |
#
# The image below is a screenshot of the denormalized data in the **event_datafile_new.csv** after the code above is run:
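#
# Since the screenshot itself is not embedded here, the cell below is a minimal preview sketch (an editorial addition, assuming the ETL cells in Part-I have already produced the file) that loads the generated CSV with pandas to confirm the column layout above.

# +
# Quick sanity check of the denormalized event data file written in Part-I.
# 'event_datafile_new.csv' is the file name used throughout this notebook.
preview_df = pd.read_csv('event_datafile_new.csv')
print(preview_df.shape)
preview_df.head()
# -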
#
#
# ### Connecting to Apache Cassandra
# First, it will connect to the cluster and get a session object.
# Second, it will define a sparkify keyspace.
# Third, it will set the keyspace to sparkify.
    # #### Connecting to a Cluster from cassandra.cluster import Cluster # # tim.o.> adopted from project template # def connectToCassandraCluster(list_of_nodes # type: List[str], list of ip addresses ): ''' Connects to Cassandra Cluster, returns cluster and session objects ''' assert (list_of_nodes != None), 'Invalid Value: List of ip addresses is None!' assert (len(list_of_nodes) > 0), 'Invalid Value: List of ip addresses is Empty!' cluster = None session = None try: cluster = Cluster(list_of_nodes) logger.debug('Got cluster') # To establish connection and begin executing queries, need a session session = cluster.connect() logger.debug('Connected, got session!') except Exception as e: print(e) logger.exception(e) return cluster, session # Conect to Cassandra instance on local machine list_of_nodes= ['127.0.0.1'] cluster, session = connectToCassandraCluster(list_of_nodes) # #### Create Keyspace # Create a Keyspace for Sparkify try: session.execute(""" CREATE KEYSPACE IF NOT EXISTS sparkify WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 }""" ) logger.debug('CREATE KEYSPACE sparkify, OK!') except Exception as e: print(e) logger.exception(e) # #### Set Keyspace # Set KEYSPACE to 'sparkify' try: session.set_keyspace('sparkify') logger.debug('SET KEYSPACE TO sparkify, OK!') except Exception as e: print(e) logger.exception(e) # ## Task: Create queries to ask the following three questions of the data # # #### 1. Give me the artist, song title and song's length in the music app history that was heard during sessionId = 338, and itemInSession = 4 # # # #### 2. Give me only the following: name of artist, song (sorted by itemInSession) and user (first and last name) for userid = 10, sessionid = 182 # # # #### 3. Give me every user name (first and last) in my music app history who listened to the song 'All Hands Against His Own' # # # # # Now we need to create tables to run these queries.
    # We will design the database tables for these queries with Apache Cassandra. # # tim.o.> adapted from project template # def insertToDB(dbSession, # type: , database session handle file, # type: str, name of event data csv file istmt, # type: str, CQL Insert Statement funcLine # type: func_ptr, Pointer to a function: line -> fields in istmt ): ''' Inserts fields from event data file to database table query string (of INSERT CQL) A function (funcLine) to extract fields from an event data file line (type: List[str]) ''' assert (dbSession != None), 'Invalid Value: database session is None!' assert (file != None), 'Invalid Value: Name of event data csv file is None!' assert (file != ''), 'Invalid Value: Name of event data csv file is Empty!' assert (istmt != None), 'Invalid Value: CQL INSERT statement is None!' assert (istmt != ''), 'Invalid Value: CQL INSERT statement is Empty!' assert (funcLine != None), 'Invalid Value: Function: line->fields in INSETRT statement is None!' with open(file, encoding = 'utf8') as f: csvreader = csv.reader(f) next(csvreader) # skip header # for each line for line in csvreader: dbSession.execute(istmt, funcLine(line)) return # ### Query 1 # **Query**: Give me the artist, song's title and song's length in the music app history # that was heard during sessionId = 338, and itemInSession = 4 # # Expected output columns are; # 1. artist's name # 1. song's title # 1. song's length # # The required query uses sessionId and itemInSession in WHERE clause. # # **PRIMARY KEY (PK) Rationale**
# - The table for this query should use sessionId as the Partition Key and itemInSession as the Clustering Column.
# - The value of sessionId routes the query to a single node, assuming all rows of a session fit on one node.
#
# **Data Type Selection Rationale**
#
# |Field| Data Type (Cassandra, [CQL](https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cql_data_types_c.html))| Why? |
# |-----|----------|-----|
# | sessionId (K) | bigint | There will be many sessions. (partition **K**ey) |
# | itemInSession (C) | int | Number of items in a session. A range of [0, max(int)] should be sufficient. (**C**lustering column) |
# | artistName | text | String data type for artist's name. (query return) |
# | songTitle | text | String data type for title of played song. (query return) |
# | songDuration | float | Duration of played song. A range of [0, max(float)] should be sufficient. (query return) |
#
#

    # Combined columns (sessionId and itemInSession) uniquely identify a song played event. # + ## ## Create table for query 1 ## Contains fields that need to be retrieved and primary key (partition key,clustering column) ## table_q1_name = 'songplayed_By_Session_And_ItemInSession' # CQL CREATE TABLE statement stmt = "CREATE TABLE IF NOT EXISTS " + table_q1_name stmt = stmt + " (sessionId bigint, itemInSession int, artistName text, songTitle text, songDuration double, \ PRIMARY KEY(sessionId, itemInSession));" logger.debug('CREATE TABLE FOR Q1: [%s]', stmt) try: session.execute(stmt) except Exception as e: print(e) logger.exception(e) # + def funcLine1(line # type: List[str], a row of event data csv file ): ''' Returns extracted column values from given line, a row of event data file ''' assert (line != None), 'Invalid Value: A line of event data file is None!' assert (len(line) > 9), 'Invalid Value: A line of event data file is Empty!' return (int(line[8]),int(line[3]),line[0],line[9],float(line[5])) # Prepare CQL INSERT Statement istmt = "INSERT INTO " + table_q1_name istmt = istmt + " (sessionId, itemInSession, artistName, songTitle, songDuration)" istmt = istmt + " VALUES (%s, %s, %s, %s, %s);" # Extract and load rows from event data csv file to a table in Cassandra insertToDB(session,evt_data_file_name,istmt,funcLine1) # - # #### Do a SELECT to verify that the data have been inserted into each table # # tim.o.> adopted from project template # def queryBySessionIdItemInSession(dbSession, table_name, sessionId, itemInSession ): ''' Retrieves artist's name, title of song, and duration song for given session and item in session. ''' assert (dbSession != None), 'Invalid Value: Database Session is None!' assert (table_name != None), 'Invalid Value: Table Name is None!' assert (table_name != ''), 'Invalid Value: Table Name is Empty!' # Return rows = None # CQL SELECT statement query = "SELECT artistName, songTitle, songDuration FROM "+ table_name; query = query + " WHERE sessionId = " + str(sessionId); query = query + " and itemInSession = "+ str(itemInSession) +";" logger.debug(' Query [%s]', query) try: rows = dbSession.execute(query) except Exception as e: print(e) logger.exception(e) return rows # + # query parameters sessionId = 338 itemInSession = 4 # do query rows = queryBySessionIdItemInSession(session,table_q1_name,sessionId,itemInSession) # prepare data frame from results set result_set = [] if(rows != None): for row in rows: result_set.append([row.artistname,row.songtitle,row.songduration]) q1_text = 'Give me the artist, song\'s title and song\'s length in the music app history '; q1_text = q1_text +'that was heard during'; print( '\n\n%s sessionId = %d and itemInSession = %d \n\n' % (q1_text,sessionId,itemInSession)) res_df = pd.DataFrame(result_set,index=None,columns=['artistName','songTitle','songDuration']) res_df # - # ### Query 2 # **Query**: Give me only the following: name of artist, song (sorted by itemInSession) and # user (first and last name) for userid = 10, sessionid = 182 # # Expected output columns are; # 1. artist's name # 1. song's title # 1. user's first name # 1. user's last name # # The required query uses userId, sessionId and itemInSession in WHERE clause. # # **PRIMARY KEY (PK) Rationale**
# - The table for this query should use userId as the Partition Key because the objective of the query is to return data about the songs played by a user in a given session.
#
# - The table should use sessionId and itemInSession as Clustering Columns. Including itemInSession satisfies the requirement that the result set be sorted by the itemInSession field (see the ordering sketch after the field-naming list below).
#
# **Assumptions**
# - We can assume that all sessions of any user will fit in a single partition (i.e., on one node).
#
# **Data Type Selection Rationale**
#
# |Field| Data Type (Cassandra, [CQL](https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cql_data_types_c.html))| Why? |
# |-----|----------|-----|
# | userId (K) | bigint | There will be many users. (partition key) |
# | sessionId (C) | bigint | A range of [0, max(bigint)] should be sufficient because there will be many sessions. (clustering column) |
# | itemInSession (C) | int | Number of items in a session should be within [0, max(int)]. (clustering column) |
# | artistName | text | String data type for artist. (query return) |
# | songTitle | text | String data type for title of played song. (query return) |
# | userFirstName | text | First name of user. (query return) |
# | userLastName | text | Last name of user. (query return) |
#
# **Naming fields**
# - Used the artist prefix in the field name for the artist's name to indicate it is an attribute of the artist.
# - Used the song prefix in the field name for the song's title to indicate it is an attribute of the song.
# - Used the user prefix in the field names for the user's id, first name, and last name to indicate that they are attributes of the user.
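#
# As referenced above, here is a minimal sketch (an editorial addition, intended to be run only after the Q2 table below has been created and loaded) to verify that rows within a partition arrive already sorted by the clustering columns, so the query needs no explicit ORDER BY:

# +
def check_iteminsession_order(dbSession, table_name, userId, sessionId):
    ''' Returns True if rows for the given user and session arrive in ascending
        itemInSession order, i.e. in clustering-column order. '''
    stmt = ("SELECT itemInSession FROM " + table_name +
            " WHERE userId = %s AND sessionId = %s;")
    items = [row.iteminsession for row in dbSession.execute(stmt, (userId, sessionId))]
    return items == sorted(items)

# Example (after the Q2 cells below have run):
# check_iteminsession_order(session, table_q2_name, 10, 182)
# -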
    # # + ## ## Create Table for Query 2 ## table_q2_name = 'query_By_UserId_SessionId' # CQL CREATE TABLE statement stmt = "CREATE TABLE IF NOT EXISTS "+ table_q2_name +" " stmt = stmt + "(userId int, sessionId bigint, itemInSession int, \ artistName text, songTitle text, firstName text, lastName text, \ PRIMARY KEY(userId, sessionId, itemInSession));" logger.debug('CREATE TABLE FOR Q2: [%s]', stmt) try: session.execute(stmt) except Exception as e: print(e) logger.exception(e) # + ## ## (ETL) ## Load data from event_datafile_new.csv to songsplay_q2 ## def funcLine2(line # type: List[str], a row of event data csv file ): ''' Extracts userId, sessionId, itemInSession, artist, song, firstName, lastName ''' assert (line != None), 'Invalid Value: A line of event data file is None!' assert (len(line) > 9), 'Invalid Value: A line of event data file is Empty!' return (int(line[10]),int(line[8]),int(line[3]),line[0],line[9],line[1],line[4]) # INSERT statement istmt = "INSERT INTO "+ table_q2_name istmt = istmt + " (userId,sessionId,itemInSession,artistName,songTitle,firstName,lastName)" istmt = istmt + " VALUES (%s, %s, %s, %s, %s, %s, %s);" # extract and load song played event logs to a table insertToDB(session,evt_data_file_name,istmt,funcLine2) # - # # Query 2: Give me only the following: # name of artist, song (sorted by itemInSession) and user (first and last name)\ # for userid = 10, sessionid = 182 # def queryByUserIdSessionId(dbSession, # type: , database session table_name, # type: str, table name userId, # type: int, userId (query term) sessionId # type: bigint, sessionId (query term) ): ''' Retrieves artist, title of song, first and last name of user by using userId and sessionId ''' assert (dbSession != None), 'Invalid Vale: Database Session is None!' assert (table_name != None), 'Invalid Vale: Table name is None!' assert (table_name != ''), 'Invalid Vale: Table name is Empty!' rows = None # session.prepare() query = "SELECT artistName, songTitle, firstName, lastName " query = query + "FROM " + table_name + " " query = query + "WHERE userId = " + str(userId) + " and sessionId = " + str(sessionId) + ";" logger.debug('Query: [%s]',query) try: rows = dbSession.execute(query) except Exception as e: print(e) logger.exception(e) return rows # + # query parameters userId = 10 sessionId = 182 # query rows = queryByUserIdSessionId(session, table_q2_name, userId, sessionId) # process results set result_set = [] if rows != None: for row in rows: result_set.append((row.artistname,row.songtitle,row.firstname,row.lastname)) # display results q2_text = "Give me only the following: name of artist, song (sorted by itemInSession) " q2_text = q2_text + "and user (first and last name) for" print('\n%s for userid = %d, sessionid = %d \n\n' % (q2_text, userId, sessionId)) res_df = pd.DataFrame(result_set,index=None,columns=['artistName','songTitle','firstName','lastName']) res_df # - # ## Query 3 # **Query**: Give me every user name (first and last) in my music app history # who listened to the song 'All Hands Against His Own' # # Expected output columns are; # 1. firstName # 1. lastName # # The required query uses songTitle in WHERE clause. # # **PRIMARY KEY (PK) Rationale**
# - The table for this query should use songTitle as the Partition Key because the objective of the query is to return the first and last names of users who played the given song.
# - The table should use userId as the Clustering Column (as in the table definition below) so that each song/user combination is stored only once, which eliminates repeated play entries of the same song by the same user.
#
# **Assumptions**
# - The combination of the song's title and the user's id uniquely identifies a listener for the query's objective; the user's first and last names are carried along as regular columns for the query output.
# - The hash value of the song's title should produce a good distribution across partitions, although we should expect more song play events for favored songs.
#
# **Notes**
# - According to the query, there is no need to return or store multiple play events of the same song by the same user, whether in the same session or in different ones. Storing every play would require adding the sessionId and itemInSession fields, which only becomes necessary when an aggregate such as count() or the datetime of each song played event matters.
# - The song's title is assumed to be unique (i.e., one artist, one song title). The cases of a title being reused by the same artist or by different artists are not handled; covering them would require adding an artist identifier (different artists) or release date/duration attributes (same artist), assuming the same artist will not release two songs with the same title in the same year or with exactly the same duration. If that assumption fails, other attributes that can distinguish the two songs would be needed.
#
# **Data Type Selection Rationale**
#
# |Field| Data Type (Cassandra, [CQL](https://docs.datastax.com/en/cql/3.3/cql/cql_reference/cql_data_types_c.html))| Why? |
# |-----|----------|-----|
# | songTitle (K) | text | String data type for title of played song. (partition key) |
# | userId (C) | bigint | Uniquely identifies the user who played the song. (clustering column) |
# | userFirstName | text | First name of user. (query return) |
# | userLastName | text | Last name of user. (query return) |
#
# **Naming fields**
# - Used the song prefix in the field name for the song's title to indicate it is an attribute of a song.
# - Used the user prefix in the field names for the user's id, first name, and last name to indicate that they are attributes of a user.
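#
# One practical note (an editorial addition): the SELECT helper defined below builds its CQL by string concatenation, which breaks for song titles containing an apostrophe. A minimal sketch of the same lookup using the driver's bind parameters, so quoting is handled by the driver:

# +
def queryUsersBySongTitleParam(dbSession, table_name, songTitle):
    ''' Same lookup as queryUserFirstLastNameBySongTitle below, but passes the
        song title as a bind parameter instead of splicing it into the CQL string. '''
    stmt = ("SELECT userFirstName, userLastName FROM " + table_name +
            " WHERE songTitle = %s;")
    return dbSession.execute(stmt, (songTitle,))

# Example (after the Q3 cells below have run):
# rows = queryUsersBySongTitleParam(session, table_q3_name, "All Hands Against His Own")
# -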
    # # + ## ## Create Table for Query 3 ## table_q3_name = "query_By_SongTitle" # CQL CREATE TABLE statement stmt = "CREATE TABLE IF NOT EXISTS " + table_q3_name + " " stmt = stmt + "(songTitle text, userId bigint, userFirstName text, userLastName text, \ PRIMARY KEY(songTitle, userId));" logger.debug('CREATE TABLE FOR Q3: [%s]', stmt) try: session.execute(stmt) except Exception as e: print(e) logger.exception(e) # + ## ## (ETL) ## Load data from event_datafile_new.csv to songsplay_q3 ## def funcLine3(line # type: List[str], list of columns in a line from event data ): '''Picks song, userId, firstName, and lastName ''' assert (line != None), 'Invalid Value: A line of event data file is None!' assert (len(line) > 9), 'Invalid Value: A line of event data file is Empty!' return (line[9],int(line[10]),line[1],line[4]) # INSERT statement istmt3 = "INSERT INTO " + table_q3_name + " (songTitle, userId, userFirstName, userLastName)" istmt3 = istmt3 + " VALUES (%s, %s, %s, %s);" # extract and load song played event logs to a table insertToDB(session,evt_data_file_name,istmt3,funcLine3) # - ## ## Query: Give me every user name (first and last) in my music app history ## who listened to the song 'All Hands Against His Own' ## def queryUserFirstLastNameBySongTitle(dbSession, # type: , database session table_name, # type: str, table name songTitle # type: str, song title [query term] ): ''' Retrives first and last name of users who listened to given song ''' assert (dbSession != None), 'Invalid Value: Database session is None!' assert (table_name != None), 'Invalid Value: Table name is None!' assert (table_name != ''), 'Invalid Value: Table name is Empty!' assert (songTitle != None), 'Invalid Value: Song title is None!' assert (songTitle != ''), 'Invalid Value: Song title is Empty!' rows = None # return # CQL SELECT Statement query = "SELECT userFirstName, userLastName" query = query + " FROM "+table_name query = query + " WHERE songTitle=\'"+songTitle+"\';" logger.debug('Query: [%s]', query) try: rows = dbSession.execute(query) except Exception as e: print(e) logger.exception(e) return rows # + # query parameters songTitle = 'All Hands Against His Own' # query rows = queryUserFirstLastNameBySongTitle(session,table_q3_name,songTitle) # process result set result_set = [] if rows != None: for row in rows: result_set.append((row.userfirstname,row.userlastname)) # display results q3_text = 'Give me every user name (first and last) in my music app history who listened to the song' print('\n\n%s \'%s\'.\n' % (q3_text, songTitle)) res_df = pd.DataFrame(result_set,index=None,columns=['userFirstName','userLastName']) res_df # - # ### Drop the tables before closing out the sessions # dropTable() drops one table (if exists), given by table name.
# dropTables() drops the tables (if they exist) named in a given list of table names.
    def dropTable(dbSession, # type: , database session table_name # type: str, table name ): ''' Drops the given table if exists ''' assert (dbSession != None), 'Invalid Value: Database Session is None!' assert (table_name != None), 'Invalid Value: Table name is None!' assert (table_name != ''), 'Invalid Value: Table name is Empty!' # CQL DROP Statement stmt="DROP TABLE IF EXISTS "+table_name + ";"; try: rows = dbSession.execute(stmt) except Exception as e: print(e) logger.exception(e) return def dropTables(dbSession, # type: , database session table_names # type: List[str], list of table names to be dropped ): ''' Drops tables in the list ''' assert (dbSession != None), 'Invalid Value: Database Session is None!' assert (table_names != None), 'Invalid Value: List of table names is None!' assert (len(table_names) > 0), 'Invalid Value: List of table names is Empty!' for f in table_names: dropTable(dbSession, f) return # Drop tables # tables=[table_q1_name, table_q2_name, table_q3_name] dropTables(session, tables) # ### Close the session and cluster connection¶ # + session.shutdown() cluster.shutdown() logger.info('Project-2> END') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="sHrEKL2v2YfS" colab_type="code" colab={} # #!pip install datadotworld # #!pip install datadotworld[pandas] # + id="hmWrAB2P3f3V" colab_type="code" colab={} # #!dw configure # + id="1sYFh4X829TD" colab_type="code" colab={} from google.colab import drive import pandas as pd import numpy as np import datadotworld as dw # + id="Fw1Fadns3xE2" colab_type="code" colab={} #drive.mount("/content/drive") # + id="0pmti3Ny435n" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c16ec156-bcca-4786-b346-c303774999f2" executionInfo={"status": "ok", "timestamp": 1581619939686, "user_tz": -60, "elapsed": 896, "user": {"displayName": "\u015bniewski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # cd "drive/My Drive/Colab Notebooks/ml-dw-matrix" # + id="V_1Zd_dv5CPR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="504cc2af-1a03-471d-92c3-edf4d3c60425" executionInfo={"status": "ok", "timestamp": 1581619959894, "user_tz": -60, "elapsed": 2506, "user": {"displayName": "\u015bniewski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # ls matrix_one/ # + id="31h-Sis_5RKV" colab_type="code" colab={} # !mkdir data # + id="dRcYf8xZ5aRI" colab_type="code" colab={} # !echo 'data' > .gitignore # + id="Up7KAIq65nGV" colab_type="code" colab={} # !git add .gitignore # + id="uWJEhLxD5pwa" colab_type="code" colab={} data = dw.load_dataset('datafiniti/mens-shoe-prices') # + id="pdCkvAr952ZB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 124} outputId="5fe6001d-df94-43ac-b3f0-0a4073119228" executionInfo={"status": "ok", "timestamp": 1581620140486, "user_tz": -60, "elapsed": 1866, "user": {"displayName": "\u015bniewski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} df = data.dataframes['7004_1'] df.shape # + id="Fy0nozKP57-D" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 678} 
outputId="4bd3b420-3a36-4f43-b061-4cf6ed65e0b8" executionInfo={"status": "ok", "timestamp": 1581620174223, "user_tz": -60, "elapsed": 752, "user": {"displayName": "\u015bniewski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} df.sample(5) # + id="ZdCp2pFU6JnY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} outputId="2067970f-cb94-4486-fb9f-1628bc937297" executionInfo={"status": "ok", "timestamp": 1581620191353, "user_tz": -60, "elapsed": 420, "user": {"displayName": "\u015bniewski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} df.columns # + id="g_BeunYe6N20" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="f23e3997-f938-46f0-ce06-72f2158dc70b" executionInfo={"status": "ok", "timestamp": 1581620210192, "user_tz": -60, "elapsed": 738, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} df.prices_currency.unique() # + id="L-84qngs6SaV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 260} outputId="8630d98b-7905-44f4-eb96-86ec9b650c7f" executionInfo={"status": "ok", "timestamp": 1581620238032, "user_tz": -60, "elapsed": 646, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} df.prices_currency.value_counts() # + id="Geg_dfrj6ZO6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 260} outputId="048127eb-427e-4160-b1ba-701f234bbc20" executionInfo={"status": "ok", "timestamp": 1581620259615, "user_tz": -60, "elapsed": 911, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} df.prices_currency.value_counts(normalize=True) # + id="VLaULgfW6eav" colab_type="code" colab={} df_usd = df[df.prices_currency == 'USD'].copy() # + id="ECeK_MiN6yn3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="cbf756a9-df2f-40e0-b6e9-fe916b6ec1ac" executionInfo={"status": "ok", "timestamp": 1581620355633, "user_tz": -60, "elapsed": 617, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} df_usd.shape # + id="_7Sks_fL60tq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="4e47333a-f378-4f88-82b3-d835461bf64b" executionInfo={"status": "ok", "timestamp": 1581620489574, "user_tz": -60, "elapsed": 906, "user": {"displayName": "\u015bniewski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} df_usd['prices_amountmin'] = df_usd.prices_amountmin.astype(np.float) df_usd['prices_amountmin'].hist() # + id="hQLdWpFM6_Zb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1e8b352e-47b3-4873-cb30-57fde7fb1fb3" executionInfo={"status": "ok", "timestamp": 1581620621352, "user_tz": -60, "elapsed": 953, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} filter_max = 
np.percentile(df_usd['prices_amountmin'], 99) filter_max # + id="TiTxFEp57i1i" colab_type="code" colab={} df_usd_filter = df_usd[df_usd['prices_amountmin'] < filter_max] # + id="kJrzBRc674MS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="78ebb4a5-3650-431d-f452-adc065773f02" executionInfo={"status": "ok", "timestamp": 1581620708427, "user_tz": -60, "elapsed": 910, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} df_usd_filter.prices_amountmin.hist(bins=100) # + id="5ntsEbVL75xq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5d8b4ebc-2137-4d70-f0ff-e34edadac3af" executionInfo={"status": "ok", "timestamp": 1581620837927, "user_tz": -60, "elapsed": 2749, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # ls # + id="E77WGqvP8rFQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1b059229-6752-4c01-97ef-7926ac1c851a" executionInfo={"status": "ok", "timestamp": 1581620857385, "user_tz": -60, "elapsed": 2564, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # ls matrix_one/ # + id="wxP4jZ5E8v93" colab_type="code" colab={} # !git add matrix_one/day03.ipynb # + id="ggykMW3Y82Y_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 295} outputId="b0a88415-367c-4a6d-e4f8-0abe21dbcfe0" executionInfo={"status": "ok", "timestamp": 1581621024238, "user_tz": -60, "elapsed": 4118, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # !git status # + id="IrY2jppu9YU1" colab_type="code" colab={} # !git add matrix_one/day03.ipynb # + id="rasV_kG59erT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 350} outputId="db18cbcc-6dbc-4be5-e51b-880ba550533e" executionInfo={"status": "ok", "timestamp": 1581621056731, "user_tz": -60, "elapsed": 3051, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # !git diff # + id="eVB412cK9ghA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 225} outputId="aa15b085-0619-4550-960d-18031a2cf8d7" executionInfo={"status": "ok", "timestamp": 1581621113611, "user_tz": -60, "elapsed": 3418, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # !git commit -m "Read Men's Shoe Prices dataset from data.world" # + id="r64xLJ0k9uUC" colab_type="code" colab={} # !git config --global user.email "" # + id="JnjUx4ee9ysX" colab_type="code" colab={} # !git config --global user.name "" # + id="00EDVsT597L6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 86} outputId="5b3c5d54-3bfb-41cc-99c9-3fd625c2ccc7" executionInfo={"status": "ok", "timestamp": 1581621177800, "user_tz": -60, "elapsed": 8114, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # !git commit -m "Read Men's Shoe Prices 
dataset from data.world" # + id="Pu0Wuj0k982U" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="cda48117-8fb8-48e9-9689-eddd89526098" executionInfo={"status": "ok", "timestamp": 1581621204825, "user_tz": -60, "elapsed": 7965, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # !git push # + id="3cglsbkP-De9" colab_type="code" colab={} # !git reset --soft HEAD^ # + id="K3StjkOQ-hfF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="5e2f0295-48bd-4816-a954-fd91fbc2471e" executionInfo={"status": "ok", "timestamp": 1581621336007, "user_tz": -60, "elapsed": 3250, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # !git commit -am "day 3 - Mens shoes" # + id="ja8ox80R-kqu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 173} outputId="381c7d7d-c9d8-4354-d88b-58e2f1e7a364" executionInfo={"status": "ok", "timestamp": 1581621351824, "user_tz": -60, "elapsed": 5566, "user": {"displayName": "\u015bniewski", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mBtlAHi7LnttjKcyTVvnSOV4V06f083EsiIucK8jw=s64", "userId": "06218853652600346861"}} # !git push -f -u origin master # + id="fIeEheO6-n9W" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 更多字符串和特殊方法 # - 前面我们已经学了类,在Python中还有一些特殊的方法起着非常重要的作用,这里会介绍一些特殊的方法和运算符的重载,以及使用特殊方法设计类 # ## str 类 # - 一个str对象是不可变的,也就是说,一旦创建了这个字符串,那么它的内容在认为不改变的情况下是不会变的 # - s1 = str() # - s2 = str('welcome to Python') a = 'zzz' print(a) b = str(12345) b b = str([1,3,4]) b b = str('aaa') b # ## 创建两个对象,分别观察两者id # - id为Python内存地址 print(id('a')) print(id('a')) a = 'a' b = 'b' print(id(a)) print(id(b)) a = 'a' b = 'aaaaa' print(id(a)) print(id(b)) # ## 处理字符串的函数 # - len # - max # - min # - 字符串一切是按照ASCII码值进行比较 len('abc ') i= 0 a = 'abc' wlile i=, <=, >, < # - 依照ASCII码值进行比较 'abcse' 'se a ca' # ## 测试字符串 # ![](../Photo/99.png) # - 注意: # > - isalnum() 中是不能包含空格,否则会返回False b = 'abc' b.isalnum() b = '123' b.isalnum() b = 'zgh' b.isidentifier() b = 'abv' b.isalpha() a = '123abx' def Check(a): N1,N2,N3 = 0,0,0 for i in a: if i.isupper(): N1 +=1 elif i.islower(): # ## 搜索子串 # ![](../Photo/100.png) import requests url='http://www.4399dmw.com/haizeiwang/tupian' HTML = requests.get(url).text HTML.find('海贼王') import requests url='http://www.4399dmw.com/haizeiwang/tupian/' HTML = requests.get(url).text LIST = HTML.split('\n') for line in LIST: if '.jpg' in line: lin2 = line.split('""') print(lin2) for linee in lin2: if '.jpg' in linee: print(linee) # ## 转换字符串 # ![](../Photo/101.png) a = 'Joker' a.swapcase() a = 'JokJer' a.replace('J','K') a = 'a v c d' a.replace(' ',"") b = ' abcd ' b.lstrip() b = ' abcd ' b.rstrip() # ## 删除字符串 # ![](../Photo/146.png) b = " acde " b.rstrip() # ## 格式化字符串 # ![](../Photo/103.png) # ## EP: # - 1 # ![](../Photo/104.png) # - 2 # 随机参数100个数字,将www.baidu.com/?page=进行拼接 # ## Python高级使用方法 -- 字符串 # - 我们经常使用的方法实际上就是调用Python的运算重载 # ![](../Photo/105.png) a = 'a' b = 'b' a + b a.__add__('b') a.__mul__(10) a * 10 # # Homework # - 1 # ![](../Photo/106.png) def check(): a = input("格式:ddd_dd_dddd,d是数字:") if a ==123_12_123: print("valid ssn") else: 
print("invalid ssn") check() # - 2 # ![](../Photo/107.png) a = str(input('>>')) b = str(input('>>')) """ 输入为字符串类型 """ if a.find (b)!= -1: print('是') else: print('不是') """ if条件判断 如果a.find(b)返回-1,说明a不是b的子串 """ # - 3 # ![](../Photo/108.png) def Cheak(a): N = len (a) n1,n2,n3 = 0,0,0 for i in a: if i.isupper(): n1 +=1 elif i.islower(): n2 +=1 elif i.isdigit(): n3 +=1 if n1 ==0 : print('密码必须含有大写') if n2 ==0: print ('密码必须含有小写') if n3 == 0: print('密码必须含与数字') if N<=8: print('密码必须超过8位') check(a) # - 4 # ![](../Photo/109.png) zm = input('输入字符串') def counLettres(): geshu = 0 for i in zm: if i.isalpha(): geshu +=1 print(geshu) counLettres() # - 5 # ![](../Photo/110.png) # - 6 # ![](../Photo/111.png) zf = input('输入') def reverse(): return zf[::-1] reverse() # - 7 # ![](../Photo/112.png) # - 8 # ![](../Photo/113.png) # - 9 # ![](../Photo/114.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Real life data import logging import json import threading import itertools import pandas as pd import numpy as np import matplotlib.pyplot as plt import ibm_db import shap from pandas_profiling import ProfileReport from matplotlib import cm from mpl_toolkits.mplot3d import axes3d import seaborn as seabornInstance from sqlalchemy import Column, Integer, String, Float, DateTime, Boolean, func from iotfunctions import base from iotfunctions import bif from iotfunctions.db import Database from iotfunctions import entity from iotfunctions import metadata from iotfunctions.metadata import EntityType from iotfunctions.enginelog import EngineLogging from iotfunctions.dbtables import FileModelStore from iotfunctions import estimator from iotfunctions.ui import (UISingle, UIMultiItem, UIFunctionOutSingle, UISingleItem, UIFunctionOutMulti, UIMulti, UIExpression, UIText, UIStatusFlag, UIParameters) from iotfunctions.dbtables import FileModelStore, DBModelStore from mmfunctions.anomaly import (SaliencybasedGeneralizedAnomalyScore, SpectralAnomalyScore, FFTbasedGeneralizedAnomalyScore, KMeansAnomalyScore, GBMRegressor, Standard_Scaler, Robust_Scaler, MinMax_Scaler) import datetime as dt from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression #from sklearn.preprocessing import StandardScaler from sklearn.covariance import MinCovDet from sklearn import metrics import scipy as sp import scipy.fftpack import skimage as ski from skimage import util as skiutil # for nifty windowing import pyod as pyod from pyod.utils.data import generate_data from pyod.utils.data import evaluate_print from pyod.utils.example import visualize from pyod.models.knn import KNN from pyod.models.iforest import IForest # %matplotlib inline from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() EngineLogging.configure_console_logging(logging.INFO) # + with open('credentials_as_monitor_demo.json', encoding='utf-8') as F: credentials = json.loads(F.read()) db_schema=None fm = FileModelStore() db = Database(credentials=credentials, model_store=fm) print (db) # + #con = db.connection.connect() DB2ConnString = 'DATABASE=' + credentials['db2']['databaseName'] + \ ';HOSTNAME=' + credentials['db2']['host'] + \ ';PORT=' + str(credentials['db2']['port']) + \ ';PROTOCOL=TCPIP;UID=' + credentials['db2']['username'] + \ ';PWD=' + credentials['db2']['password'] db_connection = ibm_db.connect(DB2ConnString, '', 
'') # db.model = DBModelStore(credentials['tenantId'], None, credentials['db2']['username'], db.connection.connect(), 'db2' ) # - model_store = DBModelStore(credentials['tenantId'], "TESTENTITYID", credentials['db2']['username'], db_connection, 'db2') db.model_store = model_store # + # need a helper function to convert array columns to something easier from scipy import linalg def l2norm(df, tcol, col1, col2 = None, col3 = None): def str_norm(cols_str): '''norm for one string element (encodes an array of value) in one column of a data point''' return linalg.norm(np.fromstring(cols_str.replace('[',' ').replace(']','').replace('\"', ''), sep = ','))**2 def column_norm(df, tcol, col1, col2=None, col3=None): '''norm of all columns specified in parameters for all datapoints''' df_temp = pd.DataFrame() df_temp['col1_np'] = df[col1].apply(str_norm) df_temp['col2_np'] = 0 df_temp['col3_np'] = 0 if col2 is not None: df_temp['col2_np'] = df[col2].apply(str_norm) if col3 is not None: df_temp['col3_np'] = df[col3].apply(str_norm) return (df_temp['col1_np'] + df_temp['col2_np'] + df_temp['col3_np'])**(1/2) df[tcol] = column_norm(df, tcol, col1, col2, col3) def unrollAccel(df): l0,l1,l2,l3,l4=[],[],[],[],[] for i in df['ACCEL_POWER'].values: l0.append(eval(eval(i)[0])) l1.append(eval(eval(i)[1])) l2.append(eval(eval(i)[2])) l3.append(eval(eval(i)[3])) l4.append(eval(eval(i)[4])) df['accel_power_0'] = np.asarray(l0) df['accel_power_1'] = np.asarray(l1) df['accel_power_2'] = np.asarray(l2) df['accel_power_3'] = np.asarray(l3) df['accel_power_4'] = np.asarray(l4) listAttr = ['timestamp','entity','vibrations','rms','accel_speed','accel_power_0','accel_power_1', 'accel_power_2','accel_power_3','accel_power_4'] # + # Now we proceed to customer data - GOOD CASE # Get stuff in df_input_raw = pd.read_csv('./Armstark04714B6046D5.csv', index_col=False, parse_dates=['RCV_TIMESTAMP_UTC']) df_input_raw['entity']=df_input_raw['DEVICE_ID'] df_input_raw['timestamp']=df_input_raw['RCV_TIMESTAMP_UTC'] # and sort it by timestamp df_input_raw = df_input_raw.sort_values(by='timestamp') df_input_raw = df_input_raw.set_index(['entity','timestamp']).dropna() l2norm(df_input_raw, 'vibrations', 'VIBRATIONS_XAXIS', 'VIBRATIONS_YAXIS', 'VIBRATIONS_ZAXIS') l2norm(df_input_raw, 'rms', 'RMS_X', 'RMS_Y', 'RMS_Z') l2norm(df_input_raw, 'accel_speed', 'ACCEL_SPEED') unrollAccel(df_input_raw) #l2norm(df_input_raw, 'accel_power', 'ACCEL_POWER') df_input = df_input_raw.filter(listAttr, axis=1) df_input_raw.describe() # - # #### Pandas Profiling # # Try Pandas Profiling to get an overview about the data, mostly its distributions and correlations #
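# The profiling call itself does not appear in the cells here; a minimal sketch (an editorial addition, assuming `df_input` from the cell above and the `ProfileReport` import at the top of this notebook) would be:

# +
# Generate an HTML profile (distributions, correlations, missing values) of the prepared data.
# minimal=True keeps the report light; the output file name is illustrative only.
profile = ProfileReport(df_input, title="Sensor data overview", minimal=True)
profile.to_file("df_input_profile.html")
# -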
    # # + # df_input[['accel_power_0','accel_anomaly']].head(20) # removed 'rms' #features=['accel_speed','accel_power_0','accel_power_1','accel_power_2','accel_power_3','accel_power_4'] #df_input['rms2'] = df_input['rms'] features=['rms'] targets=['rms'] predictions=['rms_std'] predictions2=['rms_rob'] # - # #### Customer suggested a correlation between vibration and acceleration # # so let's try to predict (although correlation tests do not really indicate it) # + # Run Monitoring's anomaly detector functions EngineLogging.configure_console_logging(logging.DEBUG) print(features, targets, predictions) stdii = Standard_Scaler(features=features, targets=targets, predictions=predictions) robii = MinMax_Scaler(features=features, targets=targets, predictions=predictions2) jobsettings = { 'db': db, '_db_schema': 'public', 'save_trace_to_file' : True} et = stdii._build_entity_type(columns = [Column('rms',Float()), Column('accel_speed',Float()), Column('vibrations',Float())], **jobsettings) et2 = robii._build_entity_type(columns = [Column('rms',Float()), Column('accel_speed',Float()), Column('vibrations',Float())], **jobsettings) stdii._entity_type = et robii._entity_type = et2 # allow training and delete existing models stdii.auto_train = True stdii.delete_existing_models = True robii.auto_train = True robii.delete_existing_models = True df_input = stdii.execute(df=df_input) # - df_input = robii.execute(df=df_input) fig, ax = plt.subplots(3, 1, figsize=(15, 9)) df_input.plot(ax=ax[0], y='rms') df_input.plot(ax=ax[1], y='rms_std', color='green') df_input.plot(ax=ax[2], y='rms_rob', color='green') ax[0].set_xticklabels([]) ax[1].set_xticklabels([]) ax[2].set_xticklabels([]) df_input['rms'] = df_input['accel_speed'] # + predictions3 = ['rms_std'] stdii2 = Standard_Scaler(features=features, targets=targets, predictions=predictions3) stdii2.correlation_threshold = 0.001 jobsettings = { 'db': db, '_db_schema': 'public', 'save_trace_to_file' : True} et = stdii2._build_entity_type(columns = [Column('accel_power_0',Float()), Column('accel_power_1',Float()), Column('vibrations',Float())], **jobsettings) stdii2._entity_type = et # disallow training and preserve existing models for predict stdii2.auto_train = False stdii2.delete_existing_models = False df_input = stdii2.execute(df=df_input) # - df_input fig, ax = plt.subplots(3, 1, figsize=(15, 9)) df_input.plot(ax=ax[0], y='rms') df_input.plot(ax=ax[1], y='rms_std', color='green') #df_input.plot(ax=ax[2], y='rms_std2', color='green') ax[0].set_xticklabels([]) ax[1].set_xticklabels([]) ax[2].set_xticklabels([]) # + # Run Monitoring's anomaly detector functions # EngineLogging.configure_console_logging(logging.DEBUG) simpleii = SimpleRegressor(features=['accel_power_0','accel_power_1'], # max_depth=20, num_leaves=40, n_estimators=4000, learning_rate=0.00001, targets=['rms'], predictions=['rms_pred']) simpleii.correlation_threshold = 0.001 jobsettings = { 'db': db, '_db_schema': 'public', 'save_trace_to_file' : True} et = simpleii._build_entity_type(columns = [Column('accel_power_0',Float()), Column('accel_power_1',Float()), Column('vibrations',Float())], **jobsettings) simpleii._entity_type = et # allow training and delete existing models simpleii.auto_train = True simpleii.delete_existing_models = True df_input = simpleii.execute(df=df_input) # + simpleii = SimpleRegressor(features=['accel_power_0','accel_power_1'], # max_depth=20, num_leaves=40, n_estimators=4000, learning_rate=0.00001, targets=['rms'], predictions=['rms_pred']) 
simpleii.correlation_threshold = 0.001 jobsettings = { 'db': db, '_db_schema': 'public', 'save_trace_to_file' : True} et = simpleii._build_entity_type(columns = [Column('accel_power_0',Float()), Column('accel_power_1',Float()), Column('vibrations',Float())], **jobsettings) simpleii._entity_type = et # disallow training and preserve existing models for predict simpleii.auto_train = False simpleii.delete_existing_models = False df_input = simpleii.execute(df=df_input) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## This notebook contains the code used for generating the panels shown in Figure 5D. # The aim was to calculate whether DRNs were enriched in known RNA-binding sites. import os import sys import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from scipy.special import comb from scipy.stats import hypergeom from matplotlib import rcParams from collections import defaultdict rcParams['font.family'] = 'sans-serif' rcParams['font.sans-serif'] = ['Arial'] rcParams['pdf.fonttype'] = 42 rcParams['axes.formatter.useoffset'] = False def formatAxes(ax,text_size=10,xlim=None,xlabel=None,xticks=None,xticklabels=None,ylim=None,yticks=None,ylabel=None,yticklabels=None): """ to tweak the plotting of the axes as well as the fontsize """ for loc,spine in list(ax.spines.items()): if loc == 'left': # settings for the y-axis if yticklabels and not yticks: sys.stderr.write("Need to provide a list wiht both y-labels and y-ticks!") if yticks: ax.yaxis.set_ticks_position('left') ax.yaxis.set_tick_params(direction='out') spine.set_visible(True) spine.set_position(("outward",1)) spine.set_smart_bounds(True) ax.set_yticks(yticks) if ylim: ax.set_ylim(ylim) if yticklabels: ax.set_yticklabels(yticklabels,fontsize=text_size) else: spine.set_visible(False) ax.set_yticklabels([]) ax.tick_params(axis='y',which='both',length=0) if ylabel: ax.set_ylabel(ylabel,fontsize=text_size,rotation=0,labelpad=160) ax.get_yaxis().set_label_coords(-0.1,0.5) elif loc == 'bottom': # settings for x-axis if xticks: spine.set_position('zero') spine.set_visible(False) spine.set_smart_bounds(False) ax.set_xticks(xticks) ax.tick_params(axis='x',which='both',length=0) if xlim: ax.set_xlim(xlim) if xticklabels: ax.set_xticklabels(xticklabels,fontsize=text_size) else: spine.set_visible(False) ax.set_xticklabels([]) ax.tick_params(axis='x',which='both',length=0) if xlabel: ax.tick_params(axis='x',which='both',length=0) ax.set_xlabel(xlabel,fontsize=text_size+2) ax.xaxis.labelpad = 10 else: spine.set_visible(False) ax.patch.set_visible(False) # ### Loading the big dataframe: alldata = pd.read_csv('../../../Data/New_data_table_Xist.txt',\ sep="\t",\ header=0,\ index_col=0) # ### Masking positions not considered by deltaSHAPE: positionstomask = alldata[(alldata["SHAPE_reactivity_ex_vivo_1"] < -900) | (alldata["SHAPE_reactivity_ex_vivo_2"] < -900) | (alldata["SHAPE_reactivity_in_cell_1"] < -900) | (alldata["SHAPE_reactivity_in_cell_2"] < -900)].index print(len(positionstomask)) alldata.loc[positionstomask,alldata.columns[11:]] = np.nan # ### Setting the threshold for calling DRNs in the diffBUM-HMM data threshold = 0.95 alldata.head() # ### How many nucleotides are diff modified in the diffBUM-HMM data in ex vivo and in vivo? 
# + ex_vivo_count = len(alldata[alldata.ex_vivo >= threshold]) in_vivo_count = len(alldata[alldata.in_vivo >= threshold]) ex_vivo_norm_count = len(alldata[alldata.scaled_ex_vivo >= threshold]) in_vivo_norm_count = len(alldata[alldata.scaled_in_vivo >= threshold]) print("ex_vivo:\t%s\nin_vivo:\t%s\nex_vivo_norm:\t%s\nin_vivo_norm:\t%s" % \ (ex_vivo_count,in_vivo_count,ex_vivo_norm_count,in_vivo_norm_count)) # - print(ex_vivo_count/in_vivo_count) print(ex_vivo_norm_count/in_vivo_norm_count) # ### Count number of DRNs in diffBUM_HMM data: proteins = ["CELF1","FUS","HuR","PTBP1","RBFOX2","TARDBP"] samples = ["in_vivo","in_vivo_scaled","ex_vivo","ex_vivo_scaled",\ "dSHAPE_in_vivo_1","dSHAPE_in_vivo_2","dSHAPE_ex_vivo_1","dSHAPE_ex_vivo_2"] proteincount = pd.DataFrame(0,index=proteins,columns=samples) # + num_diff_nucl_ex_vivo = len(alldata[alldata.ex_vivo >= threshold].index) num_diff_nucl_in_vivo = len(alldata[alldata.in_vivo >= threshold].index) print("ex_vivo\t",num_diff_nucl_ex_vivo) print("in_vivo\t",num_diff_nucl_in_vivo) # - # ### Count number of DRNs in deltaSHAPE data: # + num_diff_nucl_ex_vivo_deltaSHAPE_1 = len(alldata[alldata.deltaSHAPE_rep1 > 0].index) num_diff_nucl_in_vivo_deltaSHAPE_1 = len(alldata[alldata.deltaSHAPE_rep1 < 0].index) print("ex_vivo\t",num_diff_nucl_ex_vivo_deltaSHAPE_1) print("in_vivo\t",num_diff_nucl_in_vivo_deltaSHAPE_1) print(num_diff_nucl_ex_vivo_deltaSHAPE_1/num_diff_nucl_in_vivo_deltaSHAPE_1) # + num_diff_nucl_ex_vivo_deltaSHAPE_2 = len(alldata[alldata.deltaSHAPE_rep2 > 0].index) num_diff_nucl_in_vivo_deltaSHAPE_2 = len(alldata[alldata.deltaSHAPE_rep2 < 0].index) print("ex_vivo\t",num_diff_nucl_ex_vivo_deltaSHAPE_2) print("in_vivo\t",num_diff_nucl_in_vivo_deltaSHAPE_2) print(num_diff_nucl_ex_vivo_deltaSHAPE_2/num_diff_nucl_in_vivo_deltaSHAPE_2) # - # ### How many binding sites for each protein were found that overlapped with modified nucleotides in the ex vivo data? 
# + morereactive_ex_vivo = alldata[alldata.ex_vivo >= threshold] morereactive_in_vivo = alldata[alldata.in_vivo >= threshold] morereactive_ex_vivo_deltaSHAPE_1 = alldata[alldata.deltaSHAPE_rep1 > 0] morereactive_in_vivo_deltaSHAPE_1 = alldata[alldata.deltaSHAPE_rep1 < 0] morereactive_ex_vivo_deltaSHAPE_2 = alldata[alldata.deltaSHAPE_rep2 > 0] morereactive_in_vivo_deltaSHAPE_2 = alldata[alldata.deltaSHAPE_rep2 < 0] proteins = ["CELF1","FUS","HuR","PTBP1","RBFOX2","TARDBP"] # - proteins # + dict_total_binding_sites_differential_ex_vivo = {} dict_total_binding_sites_differential_in_vivo = {} dict_total_binding_sites_differential_ex_vivo_deltaSHAPE_1 = {} dict_total_binding_sites_differential_in_vivo_deltaSHAPE_1 = {} dict_total_binding_sites_differential_ex_vivo_deltaSHAPE_2 = {} dict_total_binding_sites_differential_in_vivo_deltaSHAPE_2 = {} for protein in proteins: dict_total_binding_sites_differential_ex_vivo[protein] = morereactive_ex_vivo[protein].sum() dict_total_binding_sites_differential_ex_vivo_deltaSHAPE_1[protein] = morereactive_ex_vivo_deltaSHAPE_1[protein].sum() dict_total_binding_sites_differential_ex_vivo_deltaSHAPE_2[protein] = morereactive_ex_vivo_deltaSHAPE_2[protein].sum() for protein in proteins: dict_total_binding_sites_differential_in_vivo[protein] = morereactive_in_vivo[protein].sum() dict_total_binding_sites_differential_in_vivo_deltaSHAPE_1[protein] = morereactive_in_vivo_deltaSHAPE_1[protein].sum() dict_total_binding_sites_differential_in_vivo_deltaSHAPE_2[protein] = morereactive_in_vivo_deltaSHAPE_2[protein].sum() print("diffBUM-HMM:") print(dict_total_binding_sites_differential_ex_vivo) print(dict_total_binding_sites_differential_in_vivo) print("\ndeltaSHAPE replicates:") print("rep1") print(dict_total_binding_sites_differential_ex_vivo_deltaSHAPE_1) print(dict_total_binding_sites_differential_in_vivo_deltaSHAPE_1) print("rep2") print(dict_total_binding_sites_differential_ex_vivo_deltaSHAPE_2) print(dict_total_binding_sites_differential_in_vivo_deltaSHAPE_2) # + ##IN CELL # - morereactive_in_vivo = alldata[alldata.in_vivo >= threshold] for protein in proteins: print("%s\ttotal_count:\t%s" % (protein,morereactive_in_vivo[protein].sum())) # + #COUNTING UP TOTAL NUMBER OF PROTEIN BINDING SITES, EXCLUDING REGIONS NOT USED BY DIFFBUMHM dict_total_binding_sites={} for protein in proteins: dict_total_binding_sites[protein]=alldata[protein].sum() print(dict_total_binding_sites) # - # ### diffBUM_HMM individual sites: # + ''' Written by Sander “M" would be total number of Xist nucleotides. “n” would in your case be total number of nucleotides that are part of an RNA-binding site (i.e. FUS, etc). “x” would be the total number of differentially modified nucleotides in ex vivo that overlap with the RNA-binding site. “N” would be the total number of differentially modified nucleotides in ex vivo. 
''' diffbumhmmpvalues = defaultdict(lambda: defaultdict(float)) #Length Xist M = 17918 for protein in proteins: n = int(dict_total_binding_sites[protein]) #ex_vivo N = num_diff_nucl_ex_vivo x = dict_total_binding_sites_differential_ex_vivo[protein] hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) diffbumhmmpvalues["ex_vivo"][protein] = p_value #in_vivo N = num_diff_nucl_in_vivo x = dict_total_binding_sites_differential_in_vivo[protein] hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) diffbumhmmpvalues["in_vivo"][protein] = p_value output = pd.DataFrame.from_dict(diffbumhmmpvalues,orient='columns') output = output.mask(output > 0.05) print(output) output = output.apply(np.log10)*-1 fig,ax = plt.subplots(figsize=[3,5]) mask = output.isnull() ax = sns.heatmap(output,cmap="Blues",linewidths=.5,mask=mask,cbar_kws={'label':'-log10(p-value)'}) ax.set_facecolor("lightgrey") ax.set_yticklabels(ax.get_yticklabels(),rotation=None,horizontalalignment='right') fig.savefig("Figure_5D_panel_I.pdf",dpi=400) # - # ### deltaSHAPE individual sites: # Replicate 1: # + ''' Written by Sander “M" would be total number of Xist nucleotides. “n” would in your case be total number of nucleotides that are part of an RNA-binding site (i.e. FUS, etc). “x” would be the total number of differentially modified nucleotides in ex vivo that overlap with the RNA-binding site. “N” would be the total number of differentially modified nucleotides in ex vivo. ''' deltashapepvalues = defaultdict(lambda: defaultdict(float)) #Length Xist M = 17918 for protein in proteins: n = int(dict_total_binding_sites[protein]) #ex_vivo N = num_diff_nucl_ex_vivo_deltaSHAPE_1 x = dict_total_binding_sites_differential_ex_vivo_deltaSHAPE_1[protein] hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) deltashapepvalues["ex_vivo"][protein] = p_value #in_vivo N = num_diff_nucl_in_vivo_deltaSHAPE_1 x = dict_total_binding_sites_differential_in_vivo_deltaSHAPE_1[protein] hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) deltashapepvalues["in_vivo"][protein] = p_value output = pd.DataFrame.from_dict(deltashapepvalues,orient='columns') output = output.mask(output > 0.05) print(output) output = output.apply(np.log10)*-1 fig,ax = plt.subplots(figsize=[3,5]) mask = output.isnull() ax = sns.heatmap(output,cmap="Blues",linewidths=.5,mask=mask,cbar_kws={'label':'-log10(p-value)'}) ax.set_facecolor("lightgrey") ax.set_yticklabels(ax.get_yticklabels(),rotation=None,horizontalalignment='right') fig.savefig("Figure_5D_panel_II.pdf",dpi=400) # - # ### deltaSHAPE individual sites: # Replicate 2: # + ''' Written by Sander “M" would be total number of Xist nucleotides. “n” would in your case be total number of nucleotides that are part of an RNA-binding site (i.e. FUS, etc). “x” would be the total number of differentially modified nucleotides in ex vivo that overlap with the RNA-binding site. “N” would be the total number of differentially modified nucleotides in ex vivo. 
''' deltashapepvalues = defaultdict(lambda: defaultdict(float)) #Length Xist M = 17918 for protein in proteins: n = int(dict_total_binding_sites[protein]) #ex_vivo N = num_diff_nucl_ex_vivo x = dict_total_binding_sites_differential_ex_vivo_deltaSHAPE_2[protein] hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) deltashapepvalues["ex_vivo"][protein] = p_value #in_vivo N = num_diff_nucl_in_vivo x = dict_total_binding_sites_differential_in_vivo_deltaSHAPE_2[protein] hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) deltashapepvalues["in_vivo"][protein] = p_value output = pd.DataFrame.from_dict(deltashapepvalues,orient='columns') output = output.mask(output > 0.05) print(output) output = output.apply(np.log10)*-1 fig,ax = plt.subplots(figsize=[3,5]) mask = output.isnull() ax = sns.heatmap(output,cmap="Blues",linewidths=.5,mask=mask,cbar_kws={'label':'-log10(p-value)'}) ax.set_facecolor("lightgrey") ax.set_yticklabels(ax.get_yticklabels(),rotation=None,horizontalalignment='right') fig.savefig("Figure_5D_panel_III.pdf",dpi=400) # - # ### All sites diffBUM_HMM # + ''' Written by Sander “M" would be total number of Xist nucleotides. “n” would in your case be total number of nucleotides that are part of an RNA-binding site (i.e. FUS, etc). “x” would be the total number of differentially modified nucleotides in ex vivo that overlap with the RNA-binding site. “N” would be the total number of differentially modified nucleotides in ex vivo. ''' totals_binding_sites = 0 totals_binding_sites_differential_ex_vivo = 0 totals_binding_sites_differential_in_vivo = 0 for protein in proteins: totals_binding_sites += int(dict_total_binding_sites[protein]) totals_binding_sites_differential_ex_vivo += dict_total_binding_sites_differential_ex_vivo[protein] totals_binding_sites_differential_in_vivo += dict_total_binding_sites_differential_in_vivo[protein] M = 17918 #ex_vivo N = num_diff_nucl_ex_vivo n = totals_binding_sites x = totals_binding_sites_differential_ex_vivo hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) print('ex_vivo\t%s' % p_value) #in_vivo N = num_diff_nucl_in_vivo n = totals_binding_sites x = totals_binding_sites_differential_in_vivo hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) print('in_vivo\t%s' % p_value) # - # ### All sites deltaSHAPE # Replicate 1: # + totals_binding_sites = 0 totals_binding_sites_differential_ex_vivo = 0 totals_binding_sites_differential_in_vivo = 0 for protein in proteins: totals_binding_sites += int(dict_total_binding_sites[protein]) totals_binding_sites_differential_ex_vivo += dict_total_binding_sites_differential_ex_vivo_deltaSHAPE_1[protein] totals_binding_sites_differential_in_vivo += dict_total_binding_sites_differential_in_vivo_deltaSHAPE_1[protein] M = 17918 #ex_vivo N = num_diff_nucl_ex_vivo_deltaSHAPE_1 n = totals_binding_sites x = totals_binding_sites_differential_ex_vivo hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) print('ex_vivo\t%s' % p_value) #in_vivo N = num_diff_nucl_in_vivo_deltaSHAPE_1 n = totals_binding_sites x = totals_binding_sites_differential_in_vivo hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) print('in_vivo\t%s' % p_value) # - # ### All sites deltaSHAPE # Replicate 2: # + totals_binding_sites = 0 totals_binding_sites_differential_ex_vivo = 0 totals_binding_sites_differential_in_vivo = 0 for protein in proteins: totals_binding_sites += int(dict_total_binding_sites[protein]) totals_binding_sites_differential_ex_vivo += dict_total_binding_sites_differential_ex_vivo_deltaSHAPE_2[protein] totals_binding_sites_differential_in_vivo += 
dict_total_binding_sites_differential_in_vivo_deltaSHAPE_2[protein] M = 17918 #ex_vivo N = num_diff_nucl_ex_vivo_deltaSHAPE_2 n = totals_binding_sites x = totals_binding_sites_differential_ex_vivo hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) print('ex_vivo\t%s' % p_value) #in_vivo N = num_diff_nucl_in_vivo_deltaSHAPE_2 n = totals_binding_sites x = totals_binding_sites_differential_in_vivo hpd = hypergeom(M, n, N) p_value = hpd.pmf(x) print('in_vivo\t%s' % p_value) # - print(totals_binding_sites) print(num_diff_nucl_ex_vivo) print(totals_binding_sites_differential_ex_vivo) print(totals_binding_sites) print(num_diff_nucl_ex_vivo_deltaSHAPE_1) print(totals_binding_sites_differential_ex_vivo) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py35] # language: python # name: conda-env-py35-py # --- import numpy as np import matplotlib.pyplot as plt import pandas as pd # %matplotlib inline import string # # Idea # # Get data # - Calculate the name length # - Calculate the chr set # - Calculate the chr set length # - Calculate the ratio for the chr set length and the name length # - Remove the duplicate letter sets # - Create dataframe with index=names, columns=alphabet # - Calculate the letter distribution # - Choose argmin(letter sum); The optimum set must have atleast one of these # - Iterate through all argmin(letter sum) names: # - Recursion starts here # - Mark all name letters to False # - Update the letter distribution # - Choose argmin(letter sum); The optimum set must have atleast one of these, but due to n cutoff not all combinations are tested. # - Calculate the effective set length # - Calculate the effective ratio # - Choose the n first names with {the highest effective ratio / shortest length} # - Iterate through the chosen names # - The recursion ends here # ## Read data and calculate some properties names_df = pd.read_csv("./IMA_mineral_names.txt", sep=',', header=None, names=['names']) names_df['names'] = names_df['names'].str.strip().str.lower() names_df['len'] = names_df['names'].str.len() names_df['tuple'] = names_df['names'].apply(lambda x: tuple(sorted(set(x)))) names_df['setlen'] = names_df['tuple'].apply(lambda x: len(x)) names_df['set_per_len'] = names_df['setlen']/names_df['len'] names_df.head(5) len(names_df) # ## Remove duplicates def sort_and_return_smallest(df): if len(df) == 1: return df df = df.sort_values(by=['len', 'names']) return df.iloc[:1, :] # %time names_set = names_df.groupby(by='tuple', as_index=False).apply(sort_and_return_smallest) len(names_set) def sort_and_return_smallest_duplicates(df): if len(df) == 1: return list(df['names']) df = df.sort_values(by=['len', 'names']) names = df.loc[df['len'] == df['len'].iloc[0], 'names'] return list(names) # %time names_duplicates = names_df.groupby(by='tuple', as_index=False).apply(sort_and_return_smallest_duplicates) len(names_duplicates) # In case some of these are in the chosen set duplicate_name_dict = {} for value in names_duplicates: if len(value) > 1: duplicate_name_dict[value[0]] = value[1:] names_set.set_index(['names'], inplace=True) names_set.head() # ## Create letter table letter_df = pd.DataFrame(index=names_set.index, columns=list(string.ascii_lowercase), dtype=bool) letter_df.loc[:] = False # %%time for name, set_ in zip(names_set.index, names_set['tuple']): for letter in set_: letter_df.loc[name, letter] = True # ## Find argmin in the letter distribution 
lowest_count_letter = letter_df.sum(0).argmin() lowest_count_letter # + # Get subset based on the chosen letter subsetlen = letter_df[letter_df[lowest_count_letter]].sum(1) name_len = subsetlen.index.str.len() setlen = pd.DataFrame({'set_per_len' : subsetlen/name_len, 'len' : name_len}) setlen.head() # - # ## Recursion def get_min_set(df, current_items, m=46, sort_by_len=False, n_search=20): # Gather results results = [] # Get letter with lowest number of options letter = df.sum(0) letter = letter[letter > 0].argmin() # Get subset based on the chosen letter subsetlen = df.loc[df[letter], :].sum(1) name_len = subsetlen.index.str.len() setlen = pd.DataFrame({'set_per_len' : subsetlen/name_len, 'len' : name_len}) if sort_by_len: order_of_operations = setlen.sort_values(by=['len', 'set_per_len'], ascending=True).index else: order_of_operations = setlen.sort_values(by=['set_per_len', 'len'], ascending=False).index # Loop over the mineral names with chosen letter # Ordered based on the (setlen / len) for i, (name, letter_bool) in enumerate(df.loc[order_of_operations, :].iterrows()): if i > n_search: break if sum(map(len, current_items))+len(name) >= m: continue # Get df containing rest of the letters df_ = df.copy() df_.loc[:, letter_bool] = False # If letters are exhausted there is one result # Check if the result is less than chosen limit m if df_.sum(0).sum() == 0 and sum(map(len, current_items))+len(name) < m: # This result is "the most optimal" under these names current_items_ = current_items + [name] len_current_items_ = sum(map(len, current_items_)) len_unique = len(set("".join(current_items_))) results.append((len_current_items_, current_items_)) if len_current_items_ < 41: print("len", len_current_items_, "len_unique", len_unique, current_items_, "place 1", flush=True) continue # Remove mineral names without new letters df_ = df_.loc[df_.sum(1) != 0, :] if df_.sum(0).sum() == 0: if sum(map(len, current_items))+len(name) < m: unique_letters = sum(map(len, map(set, current_items + [name]))) if unique_letters == len(string.ascii_lowercase): # Here is one result (?) 
current_items_ = current_items + [name] len_current_items_ = sum(map(len, current_items_)) len_unique = len(set("".join(current_items_))) results.append((len_current_items_, current_items_)) if len_current_items_ < 41: print("len", len_current_items_, "len_unique", len_unique, current_items_, "place 1", flush=True) continue current_items_ = current_items + [name] optimal_result = get_min_set(df_, current_items_, m=m, sort_by_len=sort_by_len, n_search=n_search) if len(optimal_result): results.extend(optimal_result) return results # ## The effective ratio criteria # + # %%time res_list = [] order_of_oparations = setlen.loc[letter_df.loc[:, lowest_count_letter], :].sort_values(by=['set_per_len', 'len'], ascending=False).index for i, (name, letter_bool) in enumerate(letter_df.ix[order_of_oparations].iterrows()): print(name, i+1, "/", len(order_of_oparations), flush=True) df_ = letter_df.copy() df_.loc[:, letter_bool] = False res = get_min_set(df_, [name], m=45, sort_by_len=False, n_search=20) res_list.extend(res) # - res_df = pd.DataFrame([[item[0]] + item[1] for item in res_list]).sort_values(by=0) res_df.head() # ## The shortest name length criteria # + # %%time res_list_ = [] order_of_oparations = setlen.loc[letter_df.loc[:, lowest_count_letter], :].sort_values(by=['set_per_len', 'len'], ascending=False).index for i, (name, letter_bool) in enumerate(letter_df.ix[order_of_oparations].iterrows()): print(name, i+1, "/", len(order_of_oparations), flush=True) df_ = letter_df.copy() df_.loc[:, letter_bool] = False res_ = get_min_set(df_, [name], m=45, sort_by_len=True, n_search=20) res_list_.extend(res_) # + #res_df_ = pd.DataFrame([[item[0]] + item[1] for item in res_list_]).sort_values(by=0) # - res_df.shape #, res_df_.shape # ## Save the results # %time res_df.to_csv("./example_but_not_optimum_no_duplicates.csv") optimum = res_df[res_df[0] == res_df.iloc[0, 0]] # ## Check for duplicates optimum.iloc[:, 1:].applymap(lambda x: duplicate_name_dict.get(x, None)) optimum # ## Validate results optimum.apply(lambda x: "".join(sorted(set("".join(x.iloc[1:6].values)))) == string.ascii_lowercase, axis=1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import csv import numpy filename = 'pima-indians-diabetes.csv' raw_data = open(filename, 'rb') reader = csv.reader(raw_data, delimiter=',', quoting=csv.QUOTE_NONE) x = list(reader) data = numpy.array(x).astype('float') print(data.shape) # Load CSV using NumPy import pandas as pd from numpy import loadtxt filename = 'pima-indians-diabetes.csv' raw_data = open(filename, 'rb') data = loadtxt(raw_data, delimiter=",") shape=data.shape print shape from pandas import read_csv filename = 'pima-indians-diabetes.csv' data = read_csv(filename, names=names) peek=data.head(20) print(peek) # Dimensions of your data from pandas import read_csv filename = "pima-indians-diabetes.csv" names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] data = read_csv(filename, names=names) shape = data.shape print(shape) types=data.dtypes print(types) description=data.describe() print(description) class_count=data.groupby('class').size() print(class_count) corelations=data.corr(method='pearson') print(corelations) # Skew for each attribute from pandas import read_csv filename = "pima-indians-diabetes.csv" names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] data = 
read_csv(filename, names=names) skew = data.skew() print(skew) from matplotlib import pyplot from pandas import read_csv filename = 'pima-indians-diabetes.csv' names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] data = read_csv(filename, names=names) data.hist() pyplot.show() from matplotlib import pyplot from pandas import read_csv filename = 'pima-indians-diabetes.csv' names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] data = read_csv(filename, names=names) data.plot(kind='density', subplots=True, layout=(3,3), sharex=False) pyplot.show() from matplotlib import pyplot from pandas import read_csv import numpy filename = 'pima-indians-diabetes.csv' names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] data = read_csv(filename, names=names) correlations = data.corr() fig = pyplot.figure() ax = fig.add_subplot(111) cax = ax.matshow(correlations, vmin=-1, vmax=1) fig.colorbar(cax) ticks = numpy.arange(0,9,1) ax.set_xticks(ticks) ax.set_yticks(ticks) ax.set_xticklabels(names) ax.set_yticklabels(names) pyplot.show() # Correction Matrix Plot (generic) from matplotlib import pyplot from pandas import read_csv import numpy filename = 'pima-indians-diabetes.csv' names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] data = read_csv(filename, names=names) correlations = data.corr() # plot correlation matrix fig = pyplot.figure() ax = fig.add_subplot(111) cax = ax.matshow(correlations, vmin=-1, vmax=1) fig.colorbar(cax) pyplot.show() # Scatterplot Matrix from matplotlib import pyplot from pandas import read_csv from pandas.plotting import scatter_matrix filename = "pima-indians-diabetes.csv" names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] data = read_csv(filename, names=names) scatter_matrix(data) pyplot.show() # Rescale data (between 0 and 1) from pandas import read_csv from numpy import set_printoptions from sklearn.preprocessing import MinMaxScaler filename = 'pima-indians-diabetes.csv' names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] dataframe = read_csv(filename, names=names) array = dataframe.values # separate array into input and output components X = array[:,0:8] Y = array[:,8] scaler = MinMaxScaler(feature_range=(0, 1)) rescaledX = scaler.fit_transform(X) # summarize transformed data set_printoptions(precision=3) print(rescaledX[0:5,:]) import pandas as pd from pandas import read_csv filename='pima-indians-diabetes.csv' data=read_csv(filename) shape=data.shape print(shape) types=data.dtypes print(types) # Standardize data (0 mean, 1 stdev) from sklearn.preprocessing import StandardScaler from pandas import read_csv from numpy import set_printoptions filename = 'pima-indians-diabetes.csv' names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] dataframe = read_csv(filename, names=names) array = dataframe.values # separate array into input and output components X = array[:,0:8] Y = array[:,8] scaler = StandardScaler().fit(X) rescaledX = scaler.transform(X) # summarize transformed data set_printoptions(precision=3) print(rescaledX[0:5,:]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="tpz0r_uqYQYq" # **Continuación de ciclos** # # --- # *Ciclo **FOR** (bucle 
For)* # + colab={"base_uri": "https://localhost:8080/"} id="orAtwm1PYxyY" outputId="c5835afc-201d-4e0f-e18e-7c519674faa0" for i in range(1,3): print(i) # + [markdown] id="JL522vgkZGaQ" # Para dejar claras las partes que componen el bucle FOR, vamos a definir los siguientes términos: # # 1. **Iterable:** Objetos que pueden ser iterados. Por ejemplo: una lista, un valor tipo cadena, etc # # 2. **Iterador:** Son objetos que hacen referencia a un element (para hacer referencia a cada uno de los elementos, hay un método denominado *"Next"* que permite hacer referencia a los elementos del conjunto. # # **for (variable) in (iterable): # (bloque código) # # Algunos ejemplos de iterables en python son: las listas, cadenas, diccionarios, tuplas, etc. # # **¿Cómo saber que es iterable a la hora de programar?** # # Podemos darnos cuenta que es iterable usando la función isinstance(). # # *Ejemplo:* # + colab={"base_uri": "https://localhost:8080/"} id="9tlbcyjMxceq" outputId="1f18f590-e3aa-476f-ffd4-b5f79dd41864" from collections import Iterable #paquete iterablr de collections (Python 3.7) lista=[1,2,3,4,5,6,7,8,9,0] a="computador" b=14 print(isinstance(lista,Iterable)) print(isinstance(a,Iterable)) print(isinstance(b,Iterable)) # + colab={"base_uri": "https://localhost:8080/"} id="SQqnBuh-8zOi" outputId="a24abd81-9974-4a90-a5ae-1040a84bed28" a="computador" for i in a: print(i) # + [markdown] id="OznbFOqG84mz" # Para entender los iteradores, es importante conocer la función *"Iter()"*, esta función puede ser llamada sobre un objeto que sea iterable y nos devolverá un iterador. # # *Ejemplo:* # + colab={"base_uri": "https://localhost:8080/"} id="r62lrl4n9SqJ" outputId="d7e122a6-d9f7-4cba-abcb-44adf6e4c26a" lista=[1,2,3,4,5,6,7,8,9,0] ref=iter(lista) print(ref) print(type(ref)) # + [markdown] id="MqTPGyFO9rAa" # Al imprimir Ref, es un iterador de la clase *"list_iterator"*. Hace referencia a la lista original y nos permite accerder a sus elementos con la función "Next()" # + colab={"base_uri": "https://localhost:8080/"} id="6FIAE1Bt9-W6" outputId="9bee34e1-c08f-4ac4-ba00-566dfb51a3d7" lista=[1,2,3,4,5,6,7,8,9,0] ref=iter(lista) print(next(ref)) print(next(ref)) print(next(ref)) print(next(ref)) print(next(ref)) print(next(ref)) print(next(ref)) print(next(ref)) print(next(ref)) print(next(ref)) # + [markdown] id="bTRmKH1b-SgM" # **Nota:** Cada vez que llamemos a Next() sobre Ref, nos devuelve el siguiente elemento de la lista original. # # Por ejemplo, quiero acceder al elemento 7, tendremos que llamar siete veces a next() # # --- # **Tener en cuenta** # # Existen otros iteradores para las diferentes clases: # # 1. str_iterator (para cadenas) # 2. list_iterator (para listas) # 3. set_iterator (para conjuntos) # 4. dict_keyiterator (para diccionarios) # + [markdown] id="dqC2SXQw_A0D" # **FOR ANIDADOS** # # --- # Es posible anidar los for, es decir, ingresar uno dentro de otro. 
como lo vemos a continuación: # + colab={"base_uri": "https://localhost:8080/"} id="DO_DydgS_Owm" outputId="1f1dbc01-df94-423b-8bdf-ad2e9785bed0" #Ejemplo tipo matrix #Vamos a crear una lista de listas, lo cual conforma una especie de matriz #Recordemos que una matriz se compone de valores que se distribuyen en filas y columnas qa=[[1,2,3], [4,5,6], [7,8,9]] for i in qa: print(i) for i in qa: for j in i: print(j) # + [markdown] id="qNmOeDqVAEcD" # **Nota:** # # Si queremos acceder a cada elemento individualmente, se procede a realizar un bucle de FOR anidado, es decir, un FOR se encargará de iterar las columnas y otro las filas # + colab={"base_uri": "https://localhost:8080/"} id="7FPQ5D2qAmHU" outputId="7d4589a5-db4b-4b49-da54-4e6e134f669c" #Otra forma de utilizar sólo el FOR, es la iteración al revés. Es decir, se puede iterar desde el último hasta el primer elemento lista=[1,2,3,4,5,6,7,8,9,0] for i in lista[::-1]: print(i) # + colab={"base_uri": "https://localhost:8080/"} id="8m1cmDW2BGti" outputId="a3c34c8a-aed0-4645-df8c-f0faf2e7ac86" lista=[1,2,3,4,5,6,7,8,9,0] for i in lista[::2]: print(i) print("-----------------------------") a="florero" for i in a[::-1]: print(i) # + colab={"base_uri": "https://localhost:8080/"} id="0AmivODIMY3Q" outputId="ec3e8622-ba83-44cc-b78f-ad5db26694fd" #usando rango for i in range(10): print(i) print("-----------------------") #El range() genera una secuencia desde 0 por defecto hasta #el número que se pasa como parámetro menos 2. #Estructura: range(inicio,fin,salto) #Observemos un truco con el range print(list(range(2,10,2))) #Conversión de un rango en una lista # + [markdown] id="o_C9vrrrNksZ" # **Listas de comprensión:** # # --- # Estas nos permiten crear listas de elementos en una sola línea de código. # # *Ejemplo:* Crear una lista de las potencias a la 3 de los números impares de 0 a 9 # + colab={"base_uri": "https://localhost:8080/"} id="5tMbjTkJOBoy" outputId="9d8d20e0-43b0-463f-971b-c87e8c9b546f" tres=[i**3 for i in range(1,10,2)] #1,3,5,7,9 print(tres) print("----------------") tres=[] for i in range(1,10,2): tres.append(i**3) print(tres) # + colab={"base_uri": "https://localhost:8080/"} id="nfmPtamkelLj" outputId="1c8055ac-28cc-4ce8-9a3d-be71987e507d" s=[1 for i in range(10)] print(s) # + colab={"base_uri": "https://localhost:8080/"} id="op22W6nme3TL" outputId="36dc4076-77a7-4ca8-833a-22f94a78f8c5" lwa=[5,10,15,20,25,30] lwa2=[i/5 for i in lwa] print(lwa2) # + colab={"base_uri": "https://localhost:8080/"} id="zypP8SoufEec" outputId="1874466d-5d85-4ae6-cf38-a424e6e6b754" pera="mama ama a ema" pera2=[i for i in pera if i== 'm'] print(pera2) print(len(pera2)) # + id="ciMlZ02AgLsK" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd from helper import generate_uuid, send_batch, log # ## Data import df = pd.read_csv('data/winemag-data-130k-v2.csv', index_col=0) df.head(3) # ### To start, let's keep the dataset simple and remove a couple of more columns df = df.drop(columns=["region_1", "region_2", "taster_name", "taster_twitter_handle", "winery"]) df.head(3) # ### Save the data as new csv file to use later for uploading data to Weaviate df.to_csv('data/preprocessed_data.csv') # + # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # 
language: python # name: python3 # --- import mmcv import matplotlib.pyplot as plt import math from matplotlib.patches import Rectangle, Polygon from PIL import Image import numpy as np import os import shutil import json import cv2 from tqdm import tqdm def npxywha2vertex(box): """ use radian X=x*cos(a)-y*sin(a) Y=x*sin(a)+y*cos(a) """ batch = box.shape[0] center = box[:,:2] w = box[:,2] h = box[:,3] rad = box[:,4] # calculate two vector verti = np.empty((batch,2), dtype=np.float32) verti[:,0] = (h/2) * np.sin(rad) verti[:,1] = - (h/2) * np.cos(rad) hori = np.empty((batch,2), dtype=np.float32) hori[:,0] = (w/2) * np.cos(rad) hori[:,1] = (w/2) * np.sin(rad) tl = center + verti - hori tr = center + verti + hori br = center - verti + hori bl = center - verti - hori return np.concatenate([tl,tr,br,bl], axis=1) path = "../../../data/Phamacity/" def json2txtAnnotations(path, outputPath): for filename in os.listdir(path): # print(filename) if filename.endswith(".json"): f = open(path + filename) data = json.load(f) for anno in data['annotations']: # print(anno) image_id = anno['image_id'] bbox = anno['bbox'] bbox[4] = bbox[4] / 180 * math.pi bbox = np.array(bbox) bbox = npxywha2vertex(bbox[np.newaxis]) with open(outputPath + image_id+'.txt', 'w') as txt: for cor in bbox[0]: txt.write(cor.astype('str') + ' ') txt.write('person 0\n') f.close() def png2jpg(path, outputPath): for dir in os.listdir(path): if os.path.isdir(path+dir): print(dir) for filename in tqdm(os.listdir(path+dir)): if filename.endswith(".jpg"): portion = os.path.splitext(filename) img = cv2.imread(path+dir+'/'+filename) cv2.imwrite(outputPath + portion[0] + '.png', img) # shutil.copy2(path+dir+'/'+filename, path+'images') json2txtAnnotations(path, "../../../data/PhamacityDota/anno/") png2jpg(path, "../../../data/PhamacityDota/images/") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: py3 # --- import pandas as pd import numpy as np # + # df = pd.read_csv('NGSIM.csv') # df[(df.Location == 'i-80')].to_csv('i_80.csv') # df[(df.Location != 'i-80')].to_csv('non_i_80.csv') # - df = pd.read_csv('i_80.csv') df = df.sort_values(by='Global_Time') df[0:len(df)//3].to_csv('0_15_min.csv') df[0:10000].to_csv('sample.csv') df = pd.read_csv('sample.csv') df = df[['Vehicle_ID', 'Frame_ID', 'Local_X', 'Local_Y','Lane_ID', 'v_length', 'v_Width', 'v_Class']] df.iloc[0].Frame_ID # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np from matplotlib.dates import (YEARLY, DateFormatter, rrulewrapper, RRuleLocator, drange, date2num) from datetime import datetime # + df = pd.read_csv( "covid-19-get-data\\public\\data\\ecdc\\full_data.csv", keep_default_na=False, encoding="UTF-8", engine='python' ) df = df.set_index('location') print(type(df)) print(df) print('\n..............................................................\n') print(df.loc[['China']]) print("type of df.loc[['China']]: ", type(df.loc[['China']])) print('\n..............................................................\n') print(df.loc['China']) print("type of df.loc['China']: ", type(df.loc['China'])) print('\n..............................................................\n') country = ['Taiwan', 'China', 'France', 'Italy', 
'Japan', 'Philippines', 'South Korea', 'Spain', 'United Kingdom', 'United States', 'World'] print(df.loc[country]) df = df.loc['Zimbabwe'] print(type(df)) print(df.shape) df # - df = df.filter(items=['date','new_cases']) df print(df.iloc[0,0]) print(df.iloc[0,1]) print(df.iloc[1,1]) df.axes df.columns df = pd.read_csv( "covid-19-get-data\\public\\data\\ecdc\\full_data.csv", keep_default_na=False, encoding="UTF-8", engine='python' ) df.location.unique() country = ['Taiwan', 'China', 'France', 'Italy', 'Japan', 'Philippines', 'South Korea', 'Spain', 'United Kingdom', 'United States', 'World'] country.sort() print(country) # + df = pd.read_csv( "covid-19-get-data\\public\\data\\ecdc\\full_data.csv", keep_default_na=False, encoding="UTF-8", engine='python' ) print("\n***************start replace*********************\n") print(df) print("\n***************end replace***********************\n") df = df.set_index('location') df = df.loc[country] df = df.filter(items=['date','new_cases']) case_data = [] print(type(df.date)) print(type(df['date'])) print("\n*************************************************\n") # print("shape of df['date']:", df['date'].shape[0]) # for i in temp_df['date']: # i = datetime.strptime(i,'%Y-%m-%d') # #print("strptime: ", i) # i = [date2num(i)] # #print("date2num: ", i) print("\n*************************************************\n") for i , element in enumerate(country): temp_df = df.loc[element] print(temp_df['date']) case_data.extend([[temp_df['date'].tolist(),temp_df['new_cases'].tolist()]]) print(np.array(case_data).shape) print("\n*************************************************\n") # print(df.loc['Taiwan']) # print(df.loc['China']) # print(type(case_data)) # print(type(case_data[0])) # print(case_data[0][0]) # print(case_data[0][1]) # print(case_data[1][1]) # print(case_data[2][1]) # + xdata, ydata = [],[] for i in range(10): print("\ni=",i) for t in range(len(country)): #iterate every country x = case_data[t][0][0:i+1] y = case_data[t][1][0:i+1] # print("y:",y,"\n") #print("x:",([date2num([datetime.strptime(x[0],'%Y-%m-%d')])[0].tolist()])) #xdata.append([date2num([datetime.strptime(x[0],'%Y-%m-%d')])[0].tolist()]) xdata.append(x) print(type(y)) ydata.append(y) print(x,y) print("xdata:",xdata,"\nydata:", ydata) print(xdata) print(type(xdata[0])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. 
pd.read_csv) import seaborn as sns import matplotlib.pyplot as plt import category_encoders as ce #from scipy import stats pd.set_option('display.max_columns', None) #pd.set_option('display.max_rows', None) pd.options.display.float_format = '{:.5f}'.format # %matplotlib inline #sns.set(style='whitegrid', palette='muted', font_scale=1.5) import warnings warnings.filterwarnings('ignore') # Input data files are available in the read-only "../input/" directory # For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All" # You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session # + #pd.set_option('display.max_rows', 20) # - # !pip install openpyxl data = pd.read_excel("/kaggle/input/housing-data/housingData_7R.xlsx") # + #data = df.copy() # - df = data.copy() # + #data = df.copy() # - data.head() data.shape data.columns data = data.drop('Unnamed: 0', axis = 1) data.describe(include = "all") data['adDescription'].head() data['adDealType'].value_counts() # + _kg_hide-output=false data['Deal_Type'] = data['adDealType'].copy() # + target_map = { 10: 'rental', 20: 'sell'} data.Deal_Type.replace(target_map, inplace = True) # - data['Deal_Type'].head() data.columns len(data.columns) # + original_col = data.columns new_col = ['ID', 'DealType', 'Canton', 'ZipCode', 'City','PublishedDate', 'AvailableDate', 'PriceText', 'Description', 'LangDetected', 'NumRooms', 'Floor', 'YearOfConstruction', 'NumApartments', 'Floor1', 'NumApartments1', 'LivingSpace', 'BuildingArea', 'UsefulArea', 'CoordE', 'CoordN' ,'Deal_Type'] data = data.rename(columns=dict(zip(original_col,new_col))) data.head() # - def percentage_missing(df): pm = (df.isnull().sum()/len(data))*100 return pm # + # replace all column names (in place) #new_cols = [‘column_x’, ‘column_y’, ‘column_z’] #df.columns = new_cols # - # ## EDA data.head() data.columns sns.set_style('darkgrid') plt.figure(figsize=(8, 5)) sns.countplot(data['Deal_Type']) plt.title('Target variable distribution') plt.show() data.shape data.isnull().sum().sort_values(ascending = False) #Plot missing values in data ax = data.isna().sum().sort_values().plot(kind = 'barh', figsize = (9, 10)) plt.title('Percentage of Missing Values Per Column Dataset', fontdict={'size':15}) for p in ax.patches: percentage ='{:,.0f}%'.format((p.get_width()/data.shape[0])*100) width, height =p.get_width(),p.get_height() x=p.get_x()+width+0.02 y=p.get_y()+height/2 ax.annotate(percentage,(x,y)) #To find the percentage of missing rows in each column total = data.isnull().sum().sort_values(ascending=False) percentage = ((data.isnull().sum()/data.isnull().count()).sort_values(ascending=False)*100) missing_data = pd.concat([total, percentage], axis=1, keys=['Total', 'Percentage']) missing_data.head(20) # ## Treating Missing Values and Outliers data.isnull().sum().sort_values(ascending=False) mis_columns = ['NumApartments', 'NumApartments1', 'BuildingArea', 'UsefulArea' ] data.corr() data['NumApartments'].head() data['NumApartments'].value_counts() mis_columns NumApartments = data['NumApartments'].copy() BuildingArea = data['BuildingArea'].copy() UsefulArea = data['UsefulArea'].copy() data.columns # + outliers = [] def 
detect_outliers_zscore(data): thres = 3 mean = np.mean(data) std = np.std(data) # print(mean, std) for i in data: z_score = (i-mean)/std if (np.abs(z_score) > thres): outliers.append(i) return outliers# Driver code #sample_outliers = detect_outliers_zscore(data['adNumApartments.1']) #print("Outliers from Z-scores method: ", sample_outliers) # - sns.boxplot(data['NumApartments']) plt.show() sns.distplot(data['NumApartments']) plt.show() data = data.drop(columns = mis_columns, axis = 1) data.columns data.isnull().sum().any() data.isnull().sum().sort_values(ascending = False) data['City'].value_counts() City = data['City'].copy() data['City'].describe() data['City'].head() #Filling Null Values data['City'].fillna('Zürich', inplace = True) #Fillna #data['City'].ffill(axis = 0) #ForwardFill data['City'].mode() data.isnull().sum().sort_values(ascending = False) # + # ptype_encode = {} # ptype_encode_values = range(16,0,-1) # for i,k in zip(type_count.index,ptype_encode_values): # ptype_encode[i]=k # ptype_encode # data['adCity'] = data['adCity'].map(ptype_encode) # - # #### **Exploring the City copied from data['City']** City.head() City = pd.DataFrame(data = City) City.head() City.info() encoder=ce.TargetEncoder(cols='City') data['City'].head() data['City'].head(20) data['City'].describe() data.describe(include = 'all') # + #sns.scatterplot(x = data['City'], y = data['PriceText']) #plt.show() # - # ## Exploring the target Column # ### PriceText PriceText = data['PriceText'].copy() PriceText = pd.DataFrame(data = PriceText) PriceText.head() data['PriceText'].head() data['PriceText'] = data['PriceText'].replace(',','', regex = True) data['PriceText'].head() data['PriceText'] = data['PriceText'].replace('CHF','', regex = True) data['PriceText'].head() data['PriceText'].describe() data['PriceText'].isnull().sum() data['PriceText'].value_counts() data['PriceText'] = data['PriceText'].replace('On request',np.nan) data['PriceText'].value_counts() data['PriceText'].isnull().sum() ax = data.isna().sum().sort_values().plot(kind = 'barh', figsize = (9, 10)) plt.title('Percentage of Missing Values Per Column Dataset', fontdict={'size':15}) for p in ax.patches: percentage ='{:,.0f}%'.format((p.get_width()/data.shape[0])*100) width, height =p.get_width(),p.get_height() x=p.get_x()+width+0.02 y=p.get_y()+height/2 ax.annotate(percentage,(x,y)) data['PriceText'] = data['PriceText'].replace('CHF','', regex = True) data['PriceText'].head() data['PriceText'].value_counts() data['PriceText'] = data['PriceText'].replace('EUR','', regex = True) data['PriceText'].head() data['PriceText'].dtype data['PriceText'].value_counts() data['PriceText'] = data['PriceText'].apply(pd.to_numeric) data['PriceText'].dtype data['PriceText'].describe() sns.distplot(data['PriceText']) sns.boxplot(data['PriceText']) outliers = [] def detect_outliers_zscore(data): thres = 3 mean = np.mean(data) std = np.std(data) # print(mean, std) for i in data: z_score = (i-mean)/std if (np.abs(z_score) > thres): outliers.append(i) return outliers# Driver code sample_outliers = detect_outliers_zscore(data['PriceText']) print("Outliers from Z-scores method: ", sample_outliers) # Since, PriceText contains outliers, drop the rows data = data[data["PriceText"] thres): outliers.append(i) return outliers# Driver code sample_outliers = detect_outliers_zscore(data['NumRooms']) #print("Outliers from Z-scores method: ", sample_outliers) data = data[data["NumRooms"]this font color. 
# # For each set of exercises (one Python notebook such as this one $==$ one set of exercises) you have to submit deliverables that will then be graded and constitute 25% of the final grade. Thus, the work that you do during the practicals has double contribution towards the final grade: as 30% direct contribution and as a preparation for the exam that will define the other 65% of the grade. # # ## Deliverables # # For each set of exercises, you have to submit: # 1. Python functions and/or classes (`.py` files) that implement basic functionalities (e.g. a $k$-NN classifier) and # 2. A *single* Python notebook that contains the experiments, visualization and answer to the questions and math problems. *Do not submit your answers as Word or PDF documents (they will not be graded)*. The submitted code and notebook should run without errors and be able to fully reproduce the reported results. # # We recommend that you clone the provided notebooks (such as this one) and write your code in them. The following rubric will be used when grading the practical work: # # Component | Insufficient | Satisfactory | Excellent # --- | --- | --- | --- # **Code** | Missing or incomplete code structure, runs with errors, lacks documentation | Self-contained, does not result in errors, contains some documentation, can be easily used to reproduce the reported results | User-friendly, well-structured (good separation of general functionality and experiments, i.e. between `.py` files and the Pyhthon notebook), detailed documentation, optimized for speed, use of a version control system (such as GitHub) # **Answers to questions** | Incorrect, does not convey understanding of the material, appears to be copied from another source | Correct, conveys good understanding of the material, description in own words | Correct, conveys excellent level of understanding, makes connections between topics # # ## A word on notation # # When we refer to Python variables, we will use a monospace font. For example, `X` is a Python variable that contains the data matrix. When we refer to mathematical variables, we will use the de-facto standard notation: $a$ or $\lambda$ is a scalar variable, $\boldsymbol{\mathrm{w}}$ is a vector and $\boldsymbol{\mathrm{X}}$ is a matrix (e.g. a data matrix from the example above). You should use the same notation when writing your answers and solutions. # # # Two simple machine learning models # # ## Preliminaries # # Throughout the practical curriculum of this course, we will use the Python programming language and its ecosystem of libraries for scientific computing (such as `numpy`, `scipy`, `matplotlib`, `scikit-learn` etc). The practicals for the deep learning part of the course will use the `keras` deep learning framework. If you are not sufficiently familiar with this programming language and/or the listed libraries and packages, you are strongly advised to go over the corresponding tutorials from the ['Essential skills'](https://github.com/tueimage/essential-skills) module (the `scikit-learn` library is not covered by the tutorial, however, an extensive documentation is available [here](https://scikit-learn.org/stable/documentation.html). # # In this first set of exercises, we will use two toy datasets that ship together with `scikit-learn`. # # The first dataset is named `diabetes` and contains 442 patients described with 10 features: age, sex, body mass index, average blood pressure, and six blood serum measurements. 
The target variable is a continuous quantitative measure of the disease (diabetes) progression one year after the baseline measurements were recorded. More information is available [here](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/datasets/descr/diabetes.rst) and [here](https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html). # # The second dataset is named `breast_cancer` and is a copy of the UCI ML Breast Cancer Wisconsin (Diagnostic) datasets (more infortmation is available [here](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/datasets/descr/breast_cancer.rst) and [here](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)). The datasets contains of 569 instances represented with 30 features that are computed from a images of a fine needle aspirate of a breast mass. The features describe characteristics of the cell nuclei present in the image. Each instance is associated with a binary target variable ('malignant' or 'benign'). # # You can load the two datasets in the following way: # + import numpy as np from sklearn.datasets import load_diabetes, load_breast_cancer diabetes = load_diabetes() breast_cancer = load_breast_cancer() # - # In the majority of the exercises in this course, we will use higher-level libraries and packages such as `scikit-learn` and `keras` to implement, train and evaluate machine learning models. However, the goal of this first set of exercises is to illustrate basic mathematical tools and machine learning concepts. Because of this, we will impose a restriction of only using basic `numpy` functionality. Furthermore, you should as much as possible restrict the use of for-loops (e.g. use a vector-to-matrix product instead of a for loop when appropriate). # # If `X` is a 2D data matrix, we will use the convention that the rows of the matrix contain the samples (or instances) and the columns contain the features (inputs to the model). That means that a data matrix with a shape `(122, 13)` represents a dataset with 122 samples, each represented with 13 features. Similarly, if `Y` is a 2D matrix containing the targets, the rows correspond to the samples and the columns to the different targets (outputs of the model). Thus, if the shape of `Y` is `(122, 3)` that means that there are 122 samples and each sample is has 3 targets (note that in the majority of the examples we will only have a single target and thus the number of columns of `Y` will be 1). # # You can obtain the data and target matrices from the two datasets in the following way: # + X = diabetes.data Y = diabetes.target[:, np.newaxis] print(X.shape) print(Y.shape) # - # If you want to only use a subset of the available features, you can obtain a reduced data matrix in the following way: # + # use only the fourth feature X = diabetes.data[:, np.newaxis, 3] print(X.shape) # use the third, and tenth features X = diabetes.data[:, (3,9)] print(X.shape) # - # ***Question***: Why we need to use the `np.newaxis` expression in the examples above? # # _To make the data a 2-D matrix, which is necessary for performing matrix operations in Numpy._ # # Note that in all your experiments in the exercises, you should use and independent training and testing sets. 
You can split the dataset into a training and testing subsets in the following way: # use the fourth feature # use the first 300 training samples for training, and the rest for testing X_train = diabetes.data[:300, np.newaxis, 3] y_train = diabetes.target[:300, np.newaxis] X_test = diabetes.data[300:, np.newaxis, 3] y_test = diabetes.target[300:, np.newaxis] print(X_train.shape) print(y_train.shape) print(X_test.shape) print(y_test.shape) # ## Exercises # # ### Linear regression # # Implement training and evaluation of a linear regression model on the diabetes dataset using only matrix multiplication, inversion and transpose operations. Report the mean squared error of the model. # # To get you started we have implemented the first part of this exercise (fitting of the model) as an example. # + # import data import numpy as np from sklearn.datasets import load_diabetes, load_breast_cancer diabetes = load_diabetes() # add subfolder that contains all the function implementations # to the system path so we can import them import importlib as il import sys import matplotlib.pyplot as plt sys.path.append('code/') # the actual implementation is in linear_regression.py, # here we will just use it to fit a model import linear_regression from linear_regression import * il.reload(linear_regression) # load the dataset # same as before, but now we use all features X_train = diabetes.data[:300, :] y_train = diabetes.target[:300, np.newaxis] X_test = diabetes.data[300:, :] y_test = diabetes.target[300:, np.newaxis] beta = lsq(X_train, y_train) # print the parameters print("Beta = {}".format(beta)) # evaluate the linear regression model & calculate the (Root) Mean Squared Error MSE, y_out = eval_lin_model(beta, X_test, y_test) print("Mean Squared Error = {}".format(MSE)) print("Root Mean Squared Error = {}".format(np.sqrt(MSE))) # - # ### Weighted linear regression # # Assume that in the dataset that you use to train a linear regression model, there are identical versions of some samples. This problem can be reformulated to a weighted linear regression problem where the matrices $\boldsymbol{\mathrm{X}}$ and $\boldsymbol{\mathrm{Y}}$ (or the vector $\boldsymbol{\mathrm{y}}$ if there is only a single target/output variable) contain only the unique data samples, and a vector $\boldsymbol{\mathrm{d}}$ is introduced that gives more weight to samples that appear multiple times in the original dataset (for example, the sample that appears 3 times has a corresponding weight of 3). # #
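# As an illustration of the reformulation described above, here is a small sketch (with toy numbers, not the exercise data) of how a dataset with repeated samples collapses into unique rows plus a weight vector $\boldsymbol{\mathrm{d}}$:
# +
import numpy as np

# toy data matrix in which the first sample appears three times
X_full = np.array([[1.0, 2.0], [1.0, 2.0], [1.0, 2.0], [3.0, 4.0]])

# unique rows and how often each occurs; the counts play the role of the weight vector d
X_unique, d = np.unique(X_full, axis=0, return_counts=True)
print(X_unique)   # [[1. 2.] [3. 4.]]
print(d)          # [3 1]
# -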

# Derive the expression for the least-squares solution of a weighted linear regression model (note that in addition to the matrices $\boldsymbol{\mathrm{X}}$ and $\boldsymbol{\mathrm{Y}}$, the solution should include a vector of weights $\boldsymbol{\mathrm{d}}$).
#
# The weighted sum of squares $WSS(\boldsymbol{\mathrm{w}})=(\boldsymbol{\mathrm{Y}} - \boldsymbol{\mathrm{X}}\boldsymbol{\mathrm{w}})^T\boldsymbol{\mathrm{D}}(\boldsymbol{\mathrm{Y}} - \boldsymbol{\mathrm{X}}\boldsymbol{\mathrm{w}})$ should be minimised. Expanding gives
#
# $(\boldsymbol{\mathrm{Y}} - \boldsymbol{\mathrm{X}}\boldsymbol{\mathrm{w}})^T\boldsymbol{\mathrm{D}}(\boldsymbol{\mathrm{Y}} - \boldsymbol{\mathrm{X}}\boldsymbol{\mathrm{w}}) = (\boldsymbol{\mathrm{Y}}^T - \boldsymbol{\mathrm{w}}^T\boldsymbol{\mathrm{X}}^T)\boldsymbol{\mathrm{D}}(\boldsymbol{\mathrm{Y}} - \boldsymbol{\mathrm{X}}\boldsymbol{\mathrm{w}}) = \boldsymbol{\mathrm{Y}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{Y}} - \boldsymbol{\mathrm{Y}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{X}}\boldsymbol{\mathrm{w}} - \boldsymbol{\mathrm{w}}^T\boldsymbol{\mathrm{X}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{Y}} + \boldsymbol{\mathrm{w}}^T\boldsymbol{\mathrm{X}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{X}}\boldsymbol{\mathrm{w}} = \boldsymbol{\mathrm{Y}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{Y}} - 2\boldsymbol{\mathrm{w}}^T\boldsymbol{\mathrm{X}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{Y}} + \boldsymbol{\mathrm{w}}^T\boldsymbol{\mathrm{X}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{X}}\boldsymbol{\mathrm{w}}.$
#
# Computing the derivative w.r.t. $\boldsymbol{\mathrm{w}}$ gives $-2\boldsymbol{\mathrm{X}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{Y}} + 2\boldsymbol{\mathrm{X}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{X}}\boldsymbol{\mathrm{w}}$.
#
# Equating this expression to 0 gives the least-squares weighted regression coefficient vector $\boldsymbol{\mathrm{\hat{w}}}$:
#
# $\boldsymbol{\mathrm{\hat{w}}} = (\boldsymbol{\mathrm{X}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{X}})^{-1}\boldsymbol{\mathrm{X}}^T\boldsymbol{\mathrm{D}}\boldsymbol{\mathrm{Y}},$
    # where $\boldsymbol{\mathrm{D}} = diag(\boldsymbol{\mathrm{d}})$ # ### $k$-NN classification # # Implement a $k$-Nearest neighbors classifier from scratch in Python using only basic matrix operations with `numpy` and `scipy`. Train and evaluate the classifier on the breast cancer dataset, using all features. Show the performance of the classifier for different values of $k$ (plot the results in a graph). Note that for optimal results, you should normalize the features (e.g. to the $[0, 1]$ range or to have a zero mean and unit standard deviation). # # + import numpy as np from sklearn.datasets import load_breast_cancer from NNfunction import * import matplotlib.pyplot as plt """ Load the breast cancer dataset find the true targets from the test data set and choose how many neighbour points should be looked at. """ breast_cancer = load_breast_cancer() real_values=breast_cancer.target[300:,np.newaxis] nn= 60 k=range(1,nn) """ The for-loop classfies the test dataset using different values for k. After that, the true values and the calculated values are compared and the amount of correctly and falsely classefied samples are stored in the lists: Good and Wrong. These lists are then plotted as a function of k. """ Good=[] Wrong=[] for i in range(1,nn): calssification=NNfunction(i,breast_cancer,30) Incorrect_predictions=0 Correct_predictions=0 for q in range(len(real_values)): if calssification[q,:]==real_values[q,:]: Correct_predictions=Correct_predictions+1 else: Incorrect_predictions=Incorrect_predictions+1 Good.append(Correct_predictions) Wrong.append(Incorrect_predictions) f, axis = plt.subplots(2,1,) axis[0].plot(k,Good,color='green') axis[0].set_xlabel('Number of Neighbours') axis[0].set_ylabel('Correct classifications') axis[1].plot(k,Wrong,color='red') axis[1].set_xlabel('Number of Neighbours') axis[1].set_ylabel('False classifications') plt.show() # - # The script that is shown above makes use of a function called NNfunction which classifies a test dataset using the nn nearest neighbours of the samples with the training dataset. The breast cancer dataset was split up into a training and a test dataset and test dataset was classified using the training data. This script's classification results were compared to the correct classification of the samples. The graphs shows the amount of correctly and wrongfully classified samples. # # One can see that the amount of correct classifications goes up in the beginning, but eventually the amount of correct classifications goes down , and the amount of false classifications goes up. # ### $k$-NN regression # # Modify the $k$-NN implementation to do regression instead of classification. Compare the performance of the linear regression model and the $k$-NN regression model on the diabetes dataset for different values of $k$.. # + import numpy as np from sklearn.datasets import load_diabetes from NNfunctionRegression import * import matplotlib.pyplot as plt ''' Loading the diabetes and using the NNfunctionRegression to get an average value for the targets of the test data. After that the targets of the test data was compared to the calculated average values and the mean squared error and the root mean squared error were calculated. The calculations for these errors were executed for multiple values of k and the results were visualised in subplots. 
''' Diabetes=load_diabetes() nn=50 Z=Diabetes.data k=range(1,nn) MSEList=[] RMSEList=[] a=np.zeros((221,2)) for j in range(1,nn): results=NNfunctionRegression(j,Diabetes,10) Test=Diabetes.target[221:,np.newaxis] MSE= sum((Test-results)**2)/221 RMSEList.append(np.sqrt(MSE)) MSEList.append(MSE) f, axis = plt.subplots(1,2,sharex=False, sharey=False) axis[0].plot(k, MSEList) axis[0].set_xlabel('No Nearest Neighbours(-)') axis[0].set_ylabel('Mean Squared Error') axis[1].plot(k,RMSEList) axis[1].set_xlabel('No Nearest Neighbours(-)') axis[1].set_ylabel('Root Mean Squared Error') plt.show() # - # In the subplot above, the Mean Squared Error and the Root Mean Squared Error were plotted as a function of the amount of Neighbours that were used in the Nearest Neighbour Regression. When we compare these results with the results of the linear regression, we can see that the MSE of the KNN regression classifier is bigger than the MSE of the linear regression classifier. # ### Class-conditional probability # # Compute and visualize the class-conditional probability (conditional probability where the class label is the conditional variable, i.e. $P(X = x \mid Y = y)$ for all features in the breast cancer dataset. Assume a Gaussian distribution. # #
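# A minimal sketch of the Gaussian assumption for a single feature (an illustration only, not the `cond_prop` implementation used in the answer below): estimate the mean and standard deviation of the feature within each class and evaluate the normal density at the observed values.
# +
import numpy as np
from scipy.stats import norm
from sklearn.datasets import load_breast_cancer

bc = load_breast_cancer()
x0 = bc.data[:, 0]          # a single feature, as an example
y = bc.target

for label in np.unique(y):
    x_class = x0[y == label]
    mu, sigma = x_class.mean(), x_class.std()
    # Gaussian class-conditional density P(X = x | Y = label), evaluated at the class samples
    density = norm.pdf(x_class, loc=mu, scale=sigma)
    print(label, round(mu, 2), round(sigma, 2), density[:3])
# -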

    # Based on visual analysis of the plots, which individual feature can best discriminate between the two classes? Motivate your answer.
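#
# The `cond_prop` helper imported in the next cell comes from a separate `class_cond_prop` module that is not included here. As a minimal sketch (not the module's actual implementation) of what such a function could look like under the Gaussian assumption: fit a mean and standard deviation per feature and per class, evaluate the normal density at every sample of that class, and normalise each feature column so it sums to 1. The function name and the exact normalisation below are assumptions.

# +
import numpy as np
from scipy.stats import norm

def gaussian_class_conditionals(X, y):
    """Per-sample Gaussian densities for every feature, normalised so each
    feature column sums to 1 within its class. Returns (P_pos, P_neg)."""
    y = np.asarray(y).ravel()
    out = []
    for label in (1, 0):                          # positive class first, then negative
        Xc = X[y == label]                        # samples belonging to this class
        mu, sigma = Xc.mean(axis=0), Xc.std(axis=0)
        dens = norm.pdf(Xc, loc=mu, scale=sigma)  # P(X = x | Y = label), feature-wise
        out.append(dens / dens.sum(axis=0))       # normalise per feature
    return out[0], out[1]
# -

# With the same call signature, `CP_pos, CP_neg = gaussian_class_conditionals(x_cancer, y_cancer)` would slot into the plotting loop of the following cell.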

    # # # + import class_cond_prop import matplotlib.pyplot as plt from class_cond_prop import * il.reload(class_cond_prop) x_cancer = breast_cancer.data y_cancer = breast_cancer.target[:, np.newaxis] # compute matrix containing all conditional probabilities per attribute CP_pos, CP_neg = cond_prop(x_cancer, y_cancer) # indices for positive and negative class i = y_cancer[:, 0] > 0 # positive class j = y_cancer[:, 0] == 0 # negative class # plotting the (normalised) conditional probabilities (CP) fig, axes = plt.subplots(nrows=5, ncols=6, figsize=(30, 25), sharey=True) idx = 0 for row in axes: for cell in row: negx, posx = x_cancer[j, idx], x_cancer[i, idx] negy, posy = CP_neg[:, idx], CP_pos[:, idx] orderneg = np.argsort(negx) orderpos = np.argsort(posx) distneg = cell.plot(negx[orderneg], negy[orderneg], 'b-', linewidth=3) distpos = cell.plot(posx[orderpos], posy[orderpos], 'r-', linewidth=3) cell.set_xlabel("Feature value") cell.set_ylabel("Conditional Probability") cell.set_title("Feature "+ str(idx+1)) idx += 1 plt.show() # - # These graphs show the probability of measuring a particular attribute value X given the instance is in class Y (0, blue or 1, red). The conditional probabilities were normalised, i.e. summing the probabilities over all instances for one feature would yield 1. # # The best feature to discriminate between the two classes is feature 28 since the probability distributions of class 0 and 1 do least overlap and the probability values are (relatively) high. For attribute values from approximately 0 to 0.15 there is a high probability these values occur given that the instance is in class 0 and there is low/no probability these values occur given that the instance is in class 1. For attribute values from 0.15 to 0.30 vice versa. # # Based on the conditional probability feature 19 would e.g. not be a good feature to use since the graphs overlap a lot. The probability to measure a particular attribute value X is high for both classes. # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from scipy import stats import seaborn as sns dados_normais = stats.norm.rvs(size=1000) dados_nao_normais = stats.skewnorm.rvs(a=10, size=1000) # ### Verificar se os dados são normais # #### Histograma sns.histplot(dados_normais, kde=True) sns.histplot(dados_nao_normais, kde=True) # #### Quantile-quantile plot from statsmodels.graphics.gofplots import qqplot qqplot(dados_normais, line='s') qqplot(dados_nao_normais, line='s') # #### Teste de Shapiro-Wilk # # p-value é usado para interpretar o teste estatístico. # * p <= alpha: rejeita o teste de hipótese, não é uma distribuição normal. # * p > alpha: não rejeita o teste de hipótese, é uma distribuição normal. 
def teste(p, alpha=0.05): if p > alpha: print('É uma distribuição normal') else: print('Não é uma distribuição normal') from scipy.stats import shapiro _, p = shapiro(dados_normais) _, p2 = shapiro(dados_nao_normais) p teste(p, 0.05) teste(p2, 0.05) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="riErnUwfLKDw" # # Newswires Classification with Reuters # + [markdown] id="s5T_EHZXNShT" # ##### Imports # + id="1UdxYS9ALHTE" import numpy as np # Numpy from matplotlib import pyplot as plt # Matplotlib import keras # Keras import pandas as pd # Pandas from keras.datasets import reuters # Reuters Dataset from keras.utils.np_utils import to_categorical # Categirical Classifier import random # Random # + [markdown] id="0ptvrGNGNpoD" # ##### Load dataset # + id="QFnLXk66NryE" (train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words = 10000) print('Size:', len(train_data)) print('Training Data:', train_data[0]) # + [markdown] id="eBtaode-Tu1_" # ##### Get the feel of data # + id="jAg-wM4ITxA6" def decode(index): # Decoding the sequential integers into the corresponding words word_index = reuters.get_word_index() reverse_word_index = dict([(value, key) for (key, value) in word_index.items()]) decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in test_data[0]]) return decoded_newswire print("Decoded test data sample [0]: ", decode(0)) # + [markdown] id="XGUDyWwXTpar" # ##### Data Prep (One-Hot Encoding) # + id="kl3MVkNxTsU0" def vectorize_sequences(sequences, dimension = 10000): # Encoding the integer sequences into a binary matrix results = np.zeros((len(sequences), dimension)) for i, sequence in enumerate(sequences): results[i, sequence] = 1. 
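        # fancy indexing: every word index listed in `sequence` is set to 1, so each row becomes a 10000-dimensional multi-hot (bag-of-words) vector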
return results train_data = vectorize_sequences(train_data) test_data = vectorize_sequences(test_data) train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) # + [markdown] id="I0AS2vWYT9au" # ##### Building the model # + id="6JUWJ32YT_iy" model = keras.models.Sequential() model.add(keras.layers.Dense(units = 64, activation = 'relu', input_shape = (10000,))) model.add(keras.layers.Dense(units = 64, activation = 'relu')) model.add(keras.layers.Dense(units = 46, activation = 'softmax')) model.compile( optimizer = 'rmsprop', loss = 'categorical_crossentropy', metrics = ['accuracy']) model.summary() # + [markdown] id="o7_1oOMZUeur" # ##### Training the model # + id="f8y3-hzfUhQS" x_val = train_data[:1000] train_data = train_data[1000:] y_val = train_labels[:1000] train_labels = train_labels[1000:] history = model.fit(train_data, train_labels, batch_size = 512, epochs = 10, validation_data = (x_val, y_val), verbose = False) # + [markdown] id="jO0xF9RMVILe" # ##### Evaluating the model # + id="Hy2nZqQyVKJU" result = model.evaluate(train_data, train_labels) print('Loss:', result[0]) print('Accuracy:', result[1] * 100) # + [markdown] id="Bo60EKVGVZbM" # ##### Statistics # + id="CQhxXomBVa3P" epochs = range(1, len(history.history['loss']) + 1) plt.plot(epochs, history.history['loss'], 'b', label = 'Training Loss') plt.plot(epochs, history.history['val_loss'], 'r', label = 'Validation Loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.clf() plt.plot(epochs, history.history['accuracy'], 'b', label = 'Training Accuracy') plt.plot(epochs, history.history['val_accuracy'], 'r', label = 'Validation Accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.show() # + [markdown] id="Uvoc0-o6WQxC" # ##### Making predictions # + id="Z360IVlBWSKt" prediction_index = random.randint(0, len(test_data)) prediction_data = test_data[prediction_index] decoded_prediction_data = decode(prediction_index) # Info print('Random prediction index:', prediction_index) print('Original prediction Data:', prediction_data) print('Decoded prediction Data:', decoded_prediction_data) print('Expected prediction label:', np.argmax(test_labels[prediction_index])) # Prediction predictions = model.predict(test_data) print('Prediction index: ', np.argmax(predictions[prediction_index])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: pytorchNLP # language: python # name: pytorchnlp # --- import pandas as pd from string import punctuation import numpy as np import torch from nltk.tokenize import word_tokenize from torch.utils.data import TensorDataset, DataLoader from torch import nn from torch import optim import json # + with open("sentiment labelled sentences/sentiment.txt") as f: reviews = f.read() data = pd.DataFrame([review.split('\t') for review in reviews.split('\n')]) data.columns = ['Review','Sentiment'] data = data.sample(frac=1) # - data.head() # + def split_words_reviews(data): text = list(data['Review'].values) clean_text = [] for t in text: clean_text.append(t.translate(str.maketrans('', '', punctuation)).lower().rstrip()) tokenized = [word_tokenize(x) for x in clean_text] all_text = [] for tokens in tokenized: for t in tokens: all_text.append(t) return tokenized, set(all_text) reviews, vocab = split_words_reviews(data) reviews[0] # + def create_dictionaries(words): word_to_int_dict = {w:i+1 for i, w in enumerate(words)} int_to_word_dict 
= {i:w for w, i in word_to_int_dict.items()} return word_to_int_dict, int_to_word_dict word_to_int_dict, int_to_word_dict = create_dictionaries(vocab) int_to_word_dict # - with open('word_to_int_dict.json', 'w') as fp: json.dump(word_to_int_dict, fp) print(np.max([len(x) for x in reviews])) print(np.mean([len(x) for x in reviews])) # + def pad_text(tokenized_reviews, seq_length): reviews = [] for review in tokenized_reviews: if len(review) >= seq_length: reviews.append(review[:seq_length]) else: reviews.append(['']*(seq_length-len(review)) + review) return np.array(reviews) padded_sentences = pad_text(reviews, seq_length = 50) padded_sentences[0] # - int_to_word_dict[0] = '' word_to_int_dict[''] = 0 # + encoded_sentences = np.array([[word_to_int_dict[word] for word in review] for review in padded_sentences]) encoded_sentences[0] # - class SentimentLSTM(nn.Module): def __init__(self, n_vocab, n_embed, n_hidden, n_output, n_layers, drop_p = 0.8): super().__init__() self.n_vocab = n_vocab self.n_layers = n_layers self.n_hidden = n_hidden self.embedding = nn.Embedding(n_vocab, n_embed) self.lstm = nn.LSTM(n_embed, n_hidden, n_layers, batch_first = True, dropout = drop_p) self.dropout = nn.Dropout(drop_p) self.fc = nn.Linear(n_hidden, n_output) self.sigmoid = nn.Sigmoid() def forward (self, input_words): embedded_words = self.embedding(input_words) lstm_out, h = self.lstm(embedded_words) lstm_out = self.dropout(lstm_out) lstm_out = lstm_out.contiguous().view(-1, self.n_hidden) fc_out = self.fc(lstm_out) sigmoid_out = self.sigmoid(fc_out) sigmoid_out = sigmoid_out.view(batch_size, -1) sigmoid_last = sigmoid_out[:, -1] return sigmoid_last, h def init_hidden (self, batch_size): device = "cpu" weights = next(self.parameters()).data h = (weights.new(self.n_layers, batch_size, self.n_hidden).zero_().to(device), weights.new(self.n_layers, batch_size, self.n_hidden).zero_().to(device)) return h # + n_vocab = len(word_to_int_dict) n_embed = 50 n_hidden = 100 n_output = 1 n_layers = 2 net = SentimentLSTM(n_vocab, n_embed, n_hidden, n_output, n_layers) # + labels = np.array([int(x) for x in data['Sentiment'].values]) train_ratio = 0.8 valid_ratio = (1 - train_ratio)/2 total = len(encoded_sentences) train_cutoff = int(total * train_ratio) valid_cutoff = int(total * (1 - valid_ratio)) train_x, train_y = torch.Tensor(encoded_sentences[:train_cutoff]).long(), torch.Tensor(labels[:train_cutoff]).long() valid_x, valid_y = torch.Tensor(encoded_sentences[train_cutoff : valid_cutoff]).long(), torch.Tensor(labels[train_cutoff : valid_cutoff]).long() test_x, test_y = torch.Tensor(encoded_sentences[valid_cutoff:]).long(), torch.Tensor(labels[valid_cutoff:]) train_data = TensorDataset(train_x, train_y) valid_data = TensorDataset(valid_x, valid_y) test_data = TensorDataset(test_x, test_y) batch_size = 1 train_loader = DataLoader(train_data, batch_size = batch_size, shuffle = True) valid_loader = DataLoader(valid_data, batch_size = batch_size, shuffle = True) test_loader = DataLoader(test_data, batch_size = batch_size, shuffle = True) # - print_every = 2400 step = 0 n_epochs = 3 clip = 5 criterion = nn.BCELoss() optimizer = optim.Adam(net.parameters(), lr = 0.001) for epoch in range(n_epochs): h = net.init_hidden(batch_size) for inputs, labels in train_loader: step += 1 net.zero_grad() output, h = net(inputs) loss = criterion(output.squeeze(), labels.float()) loss.backward() nn.utils.clip_grad_norm(net.parameters(), clip) optimizer.step() if (step % print_every) == 0: net.eval() valid_losses = [] for v_inputs, v_labels 
in valid_loader: v_output, v_h = net(v_inputs) v_loss = criterion(v_output.squeeze(), v_labels.float()) valid_losses.append(v_loss.item()) print("Epoch: {}/{}".format((epoch+1), n_epochs), "Step: {}".format(step), "Training Loss: {:.4f}".format(loss.item()), "Validation Loss: {:.4f}".format(np.mean(valid_losses))) net.train() # + # torch.save(net.state_dict(), 'model.pkl') # - net = SentimentLSTM(n_vocab, n_embed, n_hidden, n_output, n_layers) net.load_state_dict(torch.load('model.pkl')) # + net.eval() test_losses = [] num_correct = 0 for inputs, labels in test_loader: test_output, test_h = net(inputs) loss = criterion(test_output, labels) test_losses.append(loss.item()) preds = torch.round(test_output.squeeze()) correct_tensor = preds.eq(labels.float().view_as(preds)) correct = np.squeeze(correct_tensor.numpy()) num_correct += np.sum(correct) print("Test Loss: {:.4f}".format(np.mean(test_losses))) print("Test Accuracy: {:.2f}".format(num_correct/len(test_loader.dataset))) # - def preprocess_review(review): review = review.translate(str.maketrans('', '', punctuation)).lower().rstrip() tokenized = word_tokenize(review) if len(tokenized) >= 50: review = tokenized[:50] else: review= ['0']*(50-len(tokenized)) + tokenized final = [] for token in review: try: final.append(word_to_int_dict[token]) except: final.append(word_to_int_dict['']) return final def predict(review): net.eval() words = np.array([preprocess_review(review)]) padded_words = torch.from_numpy(words) pred_loader = DataLoader(padded_words, batch_size = 1, shuffle = True) for x in pred_loader: output = net(x)[0].item() msg = "This is a positive review." if output >= 0.5 else "This is a negative review." print(msg) print('Prediction = ' + str(output)) predict("The film was good") predict("It was not good") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Common Crawl dataset # ### Summary of the Common Crawl Data # * Three formats in this dataset: WARC(Raw data), WAT(Meta data for WARC), WET(Extracted text data) # * We should focus on WET because WARC and WAT are with too much useless information # * This dataset is humongous, split in month. We cannot work on all. # * For July 2015, it is said to have 1.81 billion pages and 145 TB data # * By checking the July 2015 data, the break-down is like this # * 99 directories holding 33957 .gz pair of (WARC, WAT and WET) files # * each directory holds 343 files # * each file is around 110 MB in .gz and 250 MB in uncompressed format # * each file has around 51K pages # * We choose 1 of the 99 directory to start process # * s3://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981921.1/ # * We focus only on WET file for the assignment 2 # * WET provides the parsed plaintext rather than the HTML contents # * But it could not reveal the raw content in the browser as the HTML tags are gone # # ### What is in WET # * extracted plaintext on the page # * URL # * content length # * no metadata for WET so there is no offset information for each page. 
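#
# As a rough illustration of the WET layout summarised above (not the processing pipeline used for the assignment), the records of an uncompressed `.wet` file can be walked with plain Python. The filename refers to the sample file inspected further down and is assumed to have been gunzipped locally; a robust reader would honour `Content-Length` instead of this naive line-based split.

# +
def iter_wet_records(path):
    """Yield (header_dict, plaintext) pairs from an uncompressed WET file."""
    headers, body, in_body = {}, [], False
    with open(path) as f:
        for raw in f:
            line = raw.rstrip('\r\n')
            if line == 'WARC/1.0':                # a new record starts
                if headers:
                    yield headers, '\n'.join(body)
                headers, body, in_body = {}, [], False
            elif not in_body:
                if line == '':                    # blank line separates headers from payload
                    in_body = True
                elif ':' in line:
                    key, _, value = line.partition(':')
                    headers[key.strip()] = value.strip()
            else:
                body.append(line)
    if headers:                                   # last record in the file
        yield headers, '\n'.join(body)

# URL and extracted-text length of the first 'conversion' record
for hdr, text in iter_wet_records('CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wet'):
    if hdr.get('WARC-Type') == 'conversion':
        print(hdr.get('WARC-Target-URI'))
        print(len(text))
        break
# -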
# # ### Comparison to NZ dataset # * The WET data is more clean than NZ # * NZ population speaks Roman languages and doesn't have that many Unicode pages than CCrawl data # * The CCrawl WET data has no meta data # * NZ dataset has metadata, can have offset for each page in the bigfile # # # ### Check July 2015 Common Crawl dataset # http://blog.commoncrawl.org/2015/08/july-2015-crawl-archive-available/ # # # ``` # $ aws s3 ls s3://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2015-32/ # PRE segments/ # 2015-08-13 20:45:44 632 segment.paths.gz # 2015-08-13 20:45:44 104599 warc.paths.gz # 2015-08-13 20:45:45 104321 wat.paths.gz # 2015-08-13 20:45:45 104322 wet.paths.gz # # $ wc -l *.paths # 99 segment.paths # 33957 warc.paths # 33957 wat.paths # 33957 wet.paths # 101970 total # # $ head wet.paths # common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00000-ip-10-236-191-2.ec2.internal.warc.wet.gz # common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00001-ip-10-236-191-2.ec2.internal.warc.wet.gz # common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00002-ip-10-236-191-2.ec2.internal.warc.wet.gz # common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00003-ip-10-236-191-2.ec2.internal.warc.wet.gz # common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wet.gz # common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00005-ip-10-236-191-2.ec2.internal.warc.wet.gz # common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00006-ip-10-236-191-2.ec2.internal.warc.wet.gz # common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00007-ip-10-236-191-2.ec2.internal.warc.wet.gz # common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00008-ip-10-236-191-2.ec2.internal.warc.wet.gz # common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00009-ip-10-236-191-2.ec2.internal.warc.wet.gz # # $ aws s3 ls s3://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/ --summarize --human-readable | tail -n3 # # Total Objects: 343 # Total Size: 33.9 GiB # # $ aws s3 ls s3://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/warc/ --summarize --human-readable | tail -n3 # # Total Objects: 343 # Total Size: 300.2 GiB # # $ aws s3 ls s3://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wat/ --summarize --human-readable | tail -n3 # # Total Objects: 343 # Total Size: 95.1 GiB # # $ grep 'CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal' *.paths | grep 1460 # warc.paths:common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/warc/CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.gz # wat.paths:common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wat/CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wat.gz # wet.paths:common-crawl/crawl-data/CC-MAIN-2015-32/segments/1438042981460.12/wet/CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wet.gz # # ``` # # ### Checking WET # # ``` # $ ls -l *.gz # -rw-r--r-- 1 robert wheel 298358000 Aug 7 22:09 CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wat.gz # -rw-r--r-- 
1 robert wheel 106093786 Aug 7 22:09 CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wet.gz # # $ ls -l CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wet # -rw-r--r-- 1 robert wheel 250969654 Aug 7 22:09 CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wet # # ``` # ``` # 1 WARC/1.0^M # 2 WARC-Type: warcinfo^M # 3 WARC-Date: 2015-08-08T01:50:27Z^M # 4 WARC-Filename: CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wet.gz^M # 5 WARC-Record-ID: ^M # 6 Content-Type: application/warc-fields^M # 7 Content-Length: 257^M # 8 ^M # 9 Software-Info: ia-web-commons.1.0-SNAPSHOT-20150727093232^M # 10 Extracted-Date: Sat, 08 Aug 2015 01:50:27 GMT^M # 11 robots: classic^M # 12 isPartOf: CC-MAIN-2015-32^M # 13 operator: CommonCrawl Admin^M # 14 description: Wide crawl of the web for August 2015^M # 15 publisher: CommonCrawl^M # 16 ^M # 17 ^M # 18 ^M # 19 WARC/1.0^M # 20 WARC-Type: conversion^M # 21 WARC-Target-URI: http://0205921396.reader.chegg.com/homework-help/questions-and-answers/physics-archive-2013-may-07^M # 22 WARC-Date: 2015-07-28T01:25:02Z^M # 23 WARC-Record-ID: ^M # 24 WARC-Refers-To: ^M # 25 WARC-Block-Digest: sha1:NAN6E2CTH4GJ3CF5ASQZRTZLZ65OSSGS^M # 26 Content-Type: text/plain^M # 27 Content-Length: 47^M # 28 ^M # 29 Chegg eReader # 30 THE PROTOTYPE TEST # 31 : EQUIVALENCE ^M # 32 ^M # ``` # ### Checking WAT # # #### This dataset is not useful for our assignment: Confirmed after checking # # ``` # $ ls -l *.wat.gz # -rw-r--r-- 1 robert wheel 298358000 Aug 7 22:09 CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wat.gz # # $ gzip -d *.wat.gz # $ ls -l *.wat # -rw-r--r-- 1 robert wheel 1164909718 Aug 7 22:09 CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.wat # # ``` # # ``` # 1 WARC/1.0 # 2 WARC-Type: warcinfo # 3 WARC-Date: 2015-08-08T01:50:27Z # 4 WARC-Filename: CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.gz # 5 WARC-Record-ID: # 6 Content-Type: application/warc-fields # 7 Content-Length: 108 # 8 # 9 Software-Info: ia-web-commons.1.0-SNAPSHOT-20150727093232 # 10 Extracted-Date: Sat, 08 Aug 2015 01:50:27 GMT # 11 # 12 # 13 # 14 WARC/1.0 # 15 WARC-Type: metadata # 16 WARC-Target-URI: CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.gz # 17 WARC-Date: 2015-08-06T11:12:07Z # 18 WARC-Record-ID: # 19 WARC-Refers-To: # 20 Content-Type: application/json # 21 Content-Length: 1150 # 22 # 23 {"Envelope":{"Format":"WARC","WARC-Header-Length":"273","Block-Digest":"sha1:SLBF7AGTH2IQQQ22RDT6SHVDPHQSP2Z6","Actual-Content-Length":"342","WARC-Header-Metadata":{"WAR C-Type":"warcinfo","WARC-Filename":"CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.gz","WARC-Date":"2015-08-06T11:12:07Z","Content-Length":"342","WARC-Re cord-ID":"","Content-Type":"application/warc-fields"},"Payload-Metadata":{"Trailing-Slop-Length":"0","Actual-Content-Type" :"application/warc-fields","Actual-Content-Length":"342","Headers-Corrupt":true,"WARC-Info-Metadata":{"robots":"classic","software":"Nutch 1.6 (CC)/CC WarcExport 1.0","d escription":"Wide crawl of the web for August 2015","hostname":"ip-10-236-191-2.ec2.internal","format":"WARC File Format 1.0","isPartOf":"CC-MAIN-2015-32","operator":"Co mmonCrawl Admin","publisher":"CommonCrawl"}}},"Container":{"Compressed":true,"Gzip-Metadata":{"Footer-Length":"8","Deflate-Length":"433","Header-Length":"10","Inflated-C RC":"-1161559046","Inflated-Length":"619"},"Offset":"0","Filename":"CC-MAIN-20150728002301-00004-ip-10-236-191-2.ec2.internal.warc.gz"}} 
# 24 # ``` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="p4kLrN3axVrb" # # ir-measures # # This is an interactive demonstration of the [`ir-measures`](https://ir-measur.es/) tool. # # Let's start by installing the package via `pip`: # + colab={"base_uri": "https://localhost:8080/"} id="WmDiVg8CxTxH" outputId="ef6198c7-18fd-4b8f-8b2b-5d44849facb7" # !pip install -q git+https://github.com/terrierteam/ir_measures # + [markdown] id="7wSVmX1VzSjb" # We'll now grab all the data we need. Let's use data from Round 1 of the TREC COVID task. # + colab={"base_uri": "https://localhost:8080/"} id="vefPwMwMzQyY" outputId="6b1a14d0-48ca-4da7-cb90-0bcd20a8f62c" # !wget https://ir.nist.gov/covidSubmit/data/qrels-rnd1.txt # !wget https://ir.nist.gov/covidSubmit/archive/round1/sab20.1.meta.docs # !wget https://ir.nist.gov/covidSubmit/archive/round1/run2 -O GUIR_s2_run2 # + [markdown] id="zij5_yAd0Mwq" # ## Command Line Interface # # The official evaluation measures for the task were P@5, nDCG@10, AP, and Ppref, but let's also check out the performance of the measures that use binary judgments using a threshold of 2 to see how the systems do on highly relevant documents. We'll also check the judgment rate of the top documents to ensure that they are sufficiently labeled. # # We can express these measures naturally to the ir_measures command line tool: # + colab={"base_uri": "https://localhost:8080/"} id="gn9Ti4E3xlLt" outputId="7f423b43-0841-47ef-d415-e0bb87001ad3" # !ir_measures qrels-rnd1.txt sab20.1.meta.docs 'P@5 P(rel=2)@5 nDCG@10 AP AP(rel=2) Bpref Bpref(rel=2) Judged@10' # + colab={"base_uri": "https://localhost:8080/"} id="g667kjjt0i3U" outputId="37eadb39-a369-4879-e23c-642cf0adb3ec" # !ir_measures qrels-rnd1.txt GUIR_s2_run2 'P@5 P(rel=2)@5 nDCG@10 AP AP(rel=2) Bpref Bpref(rel=2) Judged@10' # + [markdown] id="hxVZbAry37Eq" # In the above example, ir-measures automatically runs [trec_eval](https://github.com/usnistgov/trec_eval) twice: once for the measures that do not use a custom relevance thredhold and once for those that have `rel=2`. It also runs the judgment rate script from [OpenNIR](https://opennir.net). # # If you have [ir_datasets](https://ir-datasets.com/) installed, you can specify a dataset identifier in place of the qrels file. This takes care of automatically downloading the necessary qrels for you. # + colab={"base_uri": "https://localhost:8080/"} id="2imc_acv36YI" outputId="a257839d-33cb-4722-f7e7-cb8edc7e4719" # !pip install -q ir_datasets # + colab={"base_uri": "https://localhost:8080/"} id="OTukF06O13y1" outputId="196d81db-d1a1-41f1-e533-643da7296b5a" # !ir_measures cord19/trec-covid/round1 sab20.1.meta.docs 'P@5 P(rel=2)@5 nDCG@10 AP AP(rel=2) Bpref Bpref(rel=2) Judged@10' # + colab={"base_uri": "https://localhost:8080/"} id="AglPyXR-1-A7" outputId="b481abc9-1c7d-4c5f-b197-73bb8d36b9f7" # !ir_measures cord19/trec-covid/round1 GUIR_s2_run2 'P@5 P(rel=2)@5 nDCG@10 AP AP(rel=2) Bpref Bpref(rel=2) Judged@10' # + [markdown] id="2uTXErGo5sEq" # You can specify other options to the command line tool as well. For instance, if you want per-query results as jsonl format, you can specify the `-q -o jsonl` flags. (This output is pretty long, so we'll just show the top 10 lines using the `head` command.) 
# + colab={"base_uri": "https://localhost:8080/"} id="jm57Cs8j5e8S" outputId="d54dd4e0-ac6d-4a31-f334-f5e3ff92a241" # !ir_measures cord19/trec-covid/round1 sab20.1.meta.docs 'P@5 P(rel=2)@5 nDCG@10 AP AP(rel=2) Bpref Bpref(rel=2) Judged@10' -q -o jsonl | head # + [markdown] id="jMCXZ0_x6d-O" # ## Python Interface # # We can run the same commands directly in Python as well. # + id="PxGrWgUJ59zm" import ir_measures from ir_measures import * # import natural measure names # + id="_teXikr26tgE" # read qrels and run files qrels = list(ir_measures.read_trec_qrels('qrels-rnd1.txt')) sab = list(ir_measures.read_trec_run('sab20.1.meta.docs')) guir = list(ir_measures.read_trec_run('GUIR_s2_run2')) # + colab={"base_uri": "https://localhost:8080/"} id="8D2Yiw-z6wCd" outputId="7bcbf6cf-0c87-4aa9-fade-554888ad736c" ir_measures.calc_aggregate([P@5, P(rel=2)@5, nDCG@10, AP, AP(rel=2), Bpref, Bpref(rel=2), Judged@10], qrels, sab) # + colab={"base_uri": "https://localhost:8080/"} id="bLeRf4EL7A-I" outputId="08763002-2270-4037-bfda-caf818f265a5" ir_measures.calc_aggregate([P@5, P(rel=2)@5, nDCG@10, AP, AP(rel=2), Bpref, Bpref(rel=2), Judged@10], qrels, guir) # + [markdown] id="fzp_kMNA7btm" # The above code is inefficient becuase it needs to process the qrels twice -- once for each run. You can use an evaluator to eliminate this extra work. # + id="5-iqV_8a7Z0I" evaluator = ir_measures.evaluator([P@5, P(rel=2)@5, nDCG@10, AP, AP(rel=2), Bpref, Bpref(rel=2), Judged@10], qrels) # + colab={"base_uri": "https://localhost:8080/"} id="CEBcgYxq7v6I" outputId="4b6dc938-3a2a-4ccb-cbf1-6acf4db459ab" evaluator.calc_aggregate(guir) # + colab={"base_uri": "https://localhost:8080/"} id="qmtt5Nyz7xgA" outputId="861b8685-8550-4b51-8740-f7474e77c325" from timeit import timeit time = timeit(lambda: ir_measures.calc_aggregate([P@5, P(rel=2)@5, nDCG@10, AP, AP(rel=2), Bpref, Bpref(rel=2), Judged@10], qrels, guir), number=10) print(f'ir_measures.calc_aggregate: {time/10*1000:0.2f}ms/invocation') time = timeit(lambda: evaluator.calc_aggregate(guir), number=10) print(f'evaluator.calc_aggregate: {time/10*1000:0.2f}ms/invocation') # + [markdown] id="wTdMxd_x9L2z" # You can also get per-query results using `iter_calc`. This allows us to analyse per-query performance and conduct statistical tests. # + colab={"base_uri": "https://localhost:8080/"} id="TmWvXOhb75gN" outputId="76614515-debc-4a76-efc8-889903c0cb61" count = 0 for metric in ir_measures.iter_calc([P@5, P(rel=2)@5, nDCG@10, AP, AP(rel=2), Bpref, Bpref(rel=2), Judged@10], qrels, guir): print(metric) count += 1 if count >= 10: break # only show top 10 items sab_p_rel2_5 = {m.query_id: m.value for m in ir_measures.iter_calc([Bpref(rel=2)], qrels, sab)} guir_p_rel2_5 = {m.query_id: m.value for m in ir_measures.iter_calc([Bpref(rel=2)], qrels, guir)} from scipy.stats import ttest_rel qids = list(sab_p_rel2_5.keys()) ttest_rel([sab_p_rel2_5[v] for v in qids], [guir_p_rel2_5[v] for v in qids]) # + [markdown] id="10kg_Jm4-9U5" # ## PyTerrer Integration # # ir-datasets is easy to use in other tools. 
Here, we see how [PyTerrier](https://pyterrier.readthedocs.io/) uses ir-measures for specifying evaluation criteria in experiments: # + colab={"base_uri": "https://localhost:8080/"} id="RJqcv6aD9VVV" outputId="924142ca-6667-42c8-88aa-df651504b471" # !pip install -q git+https://github.com/terrier-org/pyterrier.git # + colab={"base_uri": "https://localhost:8080/"} id="1WAYUdsM_Poh" outputId="a71461f0-1cd9-44d6-a993-1f82f5c788ad" import pyterrier as pt if not pt.started(): pt.init() # + colab={"base_uri": "https://localhost:8080/", "height": 556, "referenced_widgets": ["07432eee05c14c38916241410f692c6a", "cdf406e1749c4479bd268a22db11526c", "a82c0181c97e48249b775698204b63a7", "86a5d201e25944dbb4e63c7e9471cdba", "e80b23c4f32643b988237e376b489e50", "", "8b49253640a44551a24901913a73648b", "20f4ae3ead36436488d46929248f6641", "b31231ff52aa4d6cbd999c8017526ec2", "12561d8aff784a08a54c3e8265f1f7ba", "8b12754b22a0416ca75895e2e710e913"]} id="a-5mqmTL_UVK" outputId="21e41ab3-64b6-4065-f6e1-7e20b00f7773" # NOTE: this example uses TREC COVID complete, rather than round1 dataset = pt.get_dataset('irds:cord19/trec-covid') pt.Experiment( [pt.BatchRetrieve.from_dataset('trec-covid', 'terrier_stemmed', wmodel='DPH'), pt.BatchRetrieve.from_dataset('trec-covid', 'terrier_stemmed', wmodel='BM25')], dataset.get_topics('description'), dataset.get_qrels(), eval_metrics=[P@5, P(rel=2)@5, nDCG@10, AP, AP(rel=2), Bpref, Bpref(rel=2), Judged@10], # ^ using ir_measures ) # + id="xvq2uicI_yGd" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python2 # --- # %load_ext autoreload # %autoreload 2 from pearce.emulator import OriginalRecipe, ExtraCrispy from pearce.mocks import cat_dict import numpy as np from os import path import tensorflow as tf import matplotlib #matplotlib.use('Agg') from matplotlib import pyplot as plt # %matplotlib inline import seaborn as sns sns.set() # The files include: # # cosmology_camb.dat : the input training cosmology, only 5 parameters: Om, Ob, sigma_8, h, n_s. w and N_eff are not used here because the analytic method is only for LCDM. # # HOD_design_m_4_n_400_tinker.dat : 400 HOD designs for the training set. # # EH_test_200COSMO_tinker.dat : the 200 test cosmologies from Tinker. # # EH_test_200COSMO_tinker.dat : the 1000 test HODs, just use the first 200. # # Cosmo_err.dat : the fractional error of wp estimated from the test boxes. # # wp_clustering_emu: folder contains the wp data for training, the columns are rp, wp # # test_200COSMO_tinker_wp_clustering_emu: folder contains the wp data for test, same format as training set. # # example.py: is an example script for my GP modeling. you should fill out the missing places. My comment on line 31 may not be right: because 0-49 for line 1 and 50-99 for line 2 etc will result repeated HOD sampling for different cosmologies, 400/50=8<40, so a better choice might be just randomly choose 50 HOD for each cosmology. 
(edited) dir = '/home/sean/Downloads/Zhongxu_data/for_Sean/' cosmo_data_fname = 'EH_test_200COSMO_tinker.dat' hod_data_fname = 'GP_test_HOD_1000.dat' from os.path import join cosmo_colnames = ['Om', 'Ob', 'sigma_8', 'h', 'n_s'] cosmo_data = np.loadtxt(join(dir, cosmo_data_fname), delimiter=' ') hod_colnames = ['M1', 'alpha', 'Mmin', 'sigma_logM'] hod_data = np.loadtxt(join(dir, hod_data_fname), delimiter = ' ') training_file = '/home/sean/PearceRedMagicXiCosmoFixedNd.hdf5' #test_file = '/home/sean/PearceRedMagicXiCosmoTest.hdf5' # + active="" # training_file = '/home/sean/PearceRedMagicXiCosmo.hdf5' # test_file = '/home/sean/PearceRedMagicXiCosmoTest.hdf5' # - em_method = 'nn' split_method = 'random' a = 1.0 z = 1.0/a - 1.0 # + active="" # emu.scale_bin_centers # - fixed_params = {'z':z}#, 'r':17.0389993 } # + active="" # n_leaves, n_overlap = 50, 1 # emu = ExtraCrispy(training_file, n_leaves, n_overlap, split_method, method = em_method, fixed_params=fixed_params, # custom_mean_function = None, downsample_factor = 1.0) # - emu = OriginalRecipe(training_file, method = em_method, fixed_params=fixed_params, hyperparams = {'hidden_layer_sizes': (10), 'activation': 'relu', 'verbose': True, 'tol': 1e-8, 'learning_rate_init':1e-4,\ 'max_iter':10, 'alpha':0, 'early_stopping':False, 'validation_fraction':0.3}) # + active="" # #convert zhongxu's data to my format # emu.get_param_names() # + active="" # my_cosmo_data = np.zeros((cosmo_data.shape[0], 7)) # my_hod_data = np.zeros((200, 4)) # # my_cosmo_data[:,0] = cosmo_data[:,1]*(cosmo_data[:,3])**2 # my_cosmo_data[:,1] = cosmo_data[:,0]*(cosmo_data[:,3])**2 - my_cosmo_data[:,0] # my_cosmo_data[:,2] = -1.0 # my_cosmo_data[:,3] = cosmo_data[:,4] # #my_cosmo_data[:,4] # my_cosmo_data[:, 5] = cosmo_data[:,3]*100 # my_cosmo_data[:, 6] = 3.046 # + active="" # from classy import Class # cosmo = Class() # # for i, row in enumerate(cosmo_data): # Om, Ob, sigma_8, h, n_s = row # params = { # 'output': 'mPk', # 'sigma8': sigma_8, # 'n_s': n_s, # 'h': h, # 'non linear': 'halofit', # 'omega_b': Ob*h*h, # 'omega_cdm': (Om-Ob)*h*h, # 'z_pk': 0.0} # # # cosmo.set(params) # cosmo.compute() # #print cosmo.pm # val = cosmo.get_current_derived_parameters(['ln10^{10}A_s'])['ln10^{10}A_s'] # my_cosmo_data[i,4] = val # + active="" # my_hod_data[:,0] = hod_data[:200,3] # my_hod_data[:,1] = np.log10(hod_data[:200,2]) # my_hod_data[:,2] = np.log10(hod_data[:200,0]) # my_hod_data[:,3] = hod_data[:200,1] # + clustering_dir = 'test_200COSMO_tinker_wp_clustering_emu/' from glob import glob clustering_files = sorted(glob(join(dir, clustering_dir) + '*') ) # - nbins = 9 zx = np.zeros((len(clustering_files)*nbins, 12)) zy = np.zeros((len(clustering_files)*nbins,)) # + for i, cf in enumerate(clustering_files): if i%1000==0: print i data = np.loadtxt(cf, delimiter = ' ') rs = np.log10(data[:,0]) wp = np.log10(data[:,1]) fbase = cf.split('/')[-1] split_fbase = fbase.split('_') cosmo, hod = int(split_fbase[1]), int(split_fbase[3]) zx[i*nbins:(i+1)*nbins, :7] = my_cosmo_data[cosmo] zx[i*nbins:(i+1)*nbins, 7:-1] = my_hod_data[hod] zx[i*nbins:(i+1)*nbins, -1] = rs zy[i*nbins:(i+1)*nbins] = wp # - np.savetxt('zx.npy', zx) np.savetxt('zy.npy', zy) zx = np.loadtxt('zx.npy') zy = np.loadtxt('zy.npy') # + idxs = np.random.choice(emu.x.shape[0], size = int(emu.x.shape[0]*1.0), replace=False) x_train, y_train,yerr_train = emu.x[idxs, :],emu.y[idxs],emu.yerr[idxs] y_train = y_train*(emu._y_std + 1e-5) + emu._y_mean yerr_train = yerr_train*(emu._y_std+1e-5) # - idxs len(emu.get_param_names()) 
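# The next few cells build a leave-one-cosmology-out validation split: rows whose first seven (cosmology) columns match the first unique cosmology vector are held out as the test set, and the remaining rows are kept for training.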
unique_cosmos = np.unique(x_train[:, :7], axis =0)#*(emu._x_std[:7]+1e-5) + emu._x_mean[:7] unique_cosmos.shape left_out_cosmo = unique_cosmos[0] is_loc = np.all(x_train[:,:7] == left_out_cosmo, axis = 1) x_test = x_train[is_loc] x_train = x_train[~is_loc] y_test = y_train[is_loc] y_train = y_train[~is_loc] yerr_test = yerr_train[is_loc] yerr_train = yerr_train[~is_loc] # + active="" # x_test, y_test, ycov_test, _ = emu.get_data(test_file, fixed_params, None, False) # x_test = (x_test - emu._x_mean)/(emu._x_std+1e-5) # # #split_ycov = np.dsplit(ycov_test, ycov_test.shape[-1]) # #fullcov = block_diag(*[yc[:,:,0] for yc in split_ycov]) # #yerr_test = np.sqrt(np.hstack(np.diag(syc[:,:,0]) for syc in split_ycov)) # + active="" # from sklearn.model_selection import train_test_split # x_train, x_test, y_train, y_test, yerr_train, _ = train_test_split(x_train, y_train,yerr_train, test_size = 0.3, shuffle = True) # + active="" # pnames = emu.get_param_names() # for i in xrange(x_train.shape[1]): # for j in xrange(i+1,x_train.shape[1]): # plt.scatter(x_train[:,i], x_train[:,j]) # plt.scatter(x_test[:,i], x_test[:,j]) # plt.title('%s vs %s'%(pnames[i], pnames[j])) # plt.show(); # + active="" # plt.plot(x_np[:emu.n_bins, -1:], y_np[:emu.n_bins]) # - def n_layer_fc(x, hidden_sizes, training=False, l = 1e-8): initializer = tf.variance_scaling_initializer(scale=2.0) regularizer = tf.contrib.layers.l2_regularizer(l) fc_output = tf.layers.dense(x, hidden_sizes[0], activation=tf.nn.relu, kernel_initializer = initializer, kernel_regularizer = regularizer) #kernel_regularizer = tf.nn.l2_loss) #fc2_output = tf.layers.dense(fc1_output, hidden_sizes[1], activation=tf.nn.relu, # kernel_initializer = initializer, kernel_regularizer = regularizer) for size in hidden_sizes[1:]: fc_output = tf.layers.dense(fc_output, size, activation=tf.nn.relu, kernel_initializer=initializer, kernel_regularizer = regularizer) pred = tf.layers.dense(fc_output, 1, kernel_initializer=initializer, kernel_regularizer = regularizer)[:,0]#, return pred def novel_fc(x, hidden_sizes, training=False, l = (1e-6, 1e-6, 1e-6), p = (0.5, 0.5, 0.5),\ n_cosmo_params = 7, n_hod_params = 4): cosmo_sizes, hod_sizes, cap_sizes = hidden_sizes if type(l) is float: cosmo_l, hod_l, cap_l = l, l, l else: cosmo_l, hod_l, cap_l = l if type(p) is float: cosmo_p, hod_p, cap_p = p,p,p else: cosmo_p, hod_p, cap_p = p initializer = tf.variance_scaling_initializer(scale=2.0) #onlly for duplicating r n_params = n_cosmo_params+n_hod_params cosmo_x = tf.slice(x, [0,0], [-1, n_cosmo_params]) cosmo_x = tf.concat(values=[cosmo_x, tf.slice(x, [0, n_params-1], [-1, -1]) ], axis = 1) #print tf.shape(cosmo_x) #print tf.shape(tf.slice(x, [0, n_params-1], [-1, -1])) hod_x = tf.slice(x, [0, n_cosmo_params], [-1, -1]) cosmo_regularizer = tf.contrib.layers.l2_regularizer(cosmo_l) cosmo_out = cosmo_x for size in cosmo_sizes: fc_output = tf.layers.dense(cosmo_out, size, kernel_initializer = initializer,\ kernel_regularizer = cosmo_regularizer) bd_out = tf.layers.dropout(fc_output, cosmo_p, training = training) bn_out = tf.layers.batch_normalization(bd_out, axis = -1, training=training) cosmo_out = tf.nn.relu(bn_out)#tf.nn.leaky_relu(bn_out, alpha=0.01) hod_regularizer = tf.contrib.layers.l1_regularizer(hod_l) hod_out = hod_x for size in hod_sizes: fc_output = tf.layers.dense(hod_out, size, kernel_initializer = initializer,\ kernel_regularizer = hod_regularizer) bd_out = tf.layers.dropout(fc_output, hod_p, training = training) bn_out = tf.layers.batch_normalization(bd_out, axis = 
-1, training=training) hod_out = tf.nn.relu(bn_out)#tf.nn.leaky_relu(bn_out, alpha=0.01) cap_out=tf.concat(values=[cosmo_out, hod_out], axis = 1) return cap_out def pretrain_cap(cap_input, hidden_sizes, training=False, l = (1e-6, 1e-6, 1e-6), p = (0.5, 0.5, 0.5)): initializer = tf.variance_scaling_initializer(scale=2.0) cosmo_sizes, hod_sizes, cap_sizes = hidden_sizes if type(l) is float: cosmo_l, hod_l, cap_l = l, l, l else: cosmo_l, hod_l, cap_l = l if type(p) is float: cosmo_p, hod_p, cap_p = p,p,p else: cosmo_p, hod_p, cap_p = p cap_out=cap_input cap_regularizer = tf.contrib.layers.l2_regularizer(cap_l) for size in cap_sizes: fc_output = tf.layers.dense(cap_out, size, kernel_initializer = initializer,\ kernel_regularizer = cap_regularizer) bd_out = tf.layers.dropout(fc_output, cap_p, training = training) bn_out = tf.layers.batch_normalization(bd_out, axis = -1, training=training) cap_out = tf.nn.relu(bn_out)#tf.nn.leaky_relu(bn_out, alpha=0.01) pred = tf.layers.dense(cap_out, 1, kernel_initializer=initializer, kernel_regularizer = cap_regularizer)[:,0]#, return pred def final_cap(cap_input, hidden_sizes, training=False, l = (1e-6, 1e-6, 1e-6), p = (0.5, 0.5, 0.5)): initializer = tf.variance_scaling_initializer(scale=2.0) cosmo_sizes, hod_sizes, cap_sizes = hidden_sizes if type(l) is float: cosmo_l, hod_l, cap_l = l, l, l else: cosmo_l, hod_l, cap_l = l if type(p) is float: cosmo_p, hod_p, cap_p = p,p,p else: cosmo_p, hod_p, cap_p = p cap_out=cap_input cap_regularizer = tf.contrib.layers.l2_regularizer(cap_l) for size in cap_sizes: fc_output = tf.layers.dense(cap_out, size, kernel_initializer = initializer,\ kernel_regularizer = cap_regularizer) bd_out = tf.layers.dropout(fc_output, cap_p, training = training) bn_out = tf.layers.batch_normalization(bd_out, axis = -1, training=training) cap_out = tf.nn.relu(bn_out)#tf.nn.leaky_relu(bn_out, alpha=0.01) pred = tf.layers.dense(cap_out, 1, kernel_initializer=initializer, kernel_regularizer = cap_regularizer)[:,0]#, return pred def optimizer_init_fn(learning_rate = 1e-7): return tf.train.AdamOptimizer(learning_rate)#, beta1=0.9, beta2=0.999, epsilon=1e-6) from sklearn.metrics import r2_score, mean_squared_error def check_accuracy(sess, val_data,batch_size, x, weights, preds, is_training=None): val_x, val_y = val_data perc_acc, scores = [],[] for idx in xrange(0, val_x.shape[0], batch_size): feed_dict = {x: val_x[idx:idx+batch_size], is_training: 0} y_pred = sess.run(preds, feed_dict=feed_dict) #print y_pred.shape, val_y[idx:idx+batch_size].shape score = r2_score(val_y[idx:idx+batch_size], y_pred) scores.append(score) perc_acc = np.mean(emu._y_std*np.abs(val_y[idx:idx+batch_size]-y_pred)/np.abs(emu._y_std*val_y[idx:idx+batch_size] + emu._y_mean) ) print 'Val score: %.6f, %.2f %% Loss'%(np.mean(np.array(scores)), 100*np.mean(np.array(perc_acc))) device = '/device:GPU:0' #device = '/cpu:0' def train(model_init_fn, optimizer_init_fn,num_params, pretrain_data, train_data, val_data, hidden_sizes,\ num_pretrain_epochs = 500, num_epochs=1000, batch_size = 200, l = 1e-6, p = 0.5, print_every=10): tf.reset_default_graph() pretrain = True with tf.device(device): # Construct the computational graph we will use to train the model. 
We # use the model_init_fn to construct the model, declare placeholders for # the data and labels x = tf.placeholder(tf.float32, [None,num_params]) y = tf.placeholder(tf.float32, [None]) weights = tf.placeholder(tf.float32, [None]) is_training = tf.placeholder(tf.bool, name='is_training') cap_input = model_init_fn(x, hidden_sizes, is_training, l = l, p=p) if pretrain: preds = pretrain_cap(cap_input, hidden_sizes, is_training, l=l, p=p) else: preds = final_cap(cap_input, hidden_sizes, is_training, l=l, p=p) # Compute the loss like we did in Part II #loss = tf.reduce_mean(loss) with tf.device('/cpu:0'): loss = tf.losses.mean_squared_error(labels=y,\ predictions=preds, weights = weights)#weights? #loss = tf.losses.absolute_difference(labels=y, predictions=preds, weights = tf.abs(1.0/y))#weights? pass with tf.device(device): optimizer = optimizer_init_fn() update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) with tf.control_dependencies(update_ops): train_op = optimizer.minimize(loss) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) #t = 0 pretrain_x, pretrain_y = pretrain_data rand_idxs = range(pretrain_x.shape[0]) for epoch in range(num_pretrain_epochs): #print('Starting epoch %d' % epoch) np.random.shuffle(rand_idxs) losses = [] for idx in xrange(0, pretrain_x.shape[0], batch_size): feed_dict = {x: pretrain_x[rand_idxs[idx:idx+batch_size]],\ y: pretrain_y[rand_idxs[idx:idx+batch_size]],\ weights: np.ones_like(pretrain_y[rand_idxs[idx:idx+batch_size]]),\ is_training:1} loss_np, _ = sess.run([loss, train_op], feed_dict=feed_dict) losses.append(loss_np) if epoch % print_every == 0: loss_avg = np.mean(np.array(losses)) print('Epoch %d, loss = %e' % (epoch, loss_avg)) check_accuracy(sess, val_data, batch_size, x, weights, preds, is_training=is_training) pretrain = False train_x, train_y, train_yerr = train_data rand_idxs = range(train_x.shape[0]) for epoch in range(num_epochs): #print('Starting epoch %d' % epoch) np.random.shuffle(rand_idxs) losses = [] for idx in xrange(0, train_x.shape[0], batch_size): yerrbatch = train_yerr[rand_idxs[idx:idx+batch_size]] _bs = yerrbatch.shape[0] feed_dict = {x: train_x[rand_idxs[idx:idx+batch_size]],\ y: train_y[rand_idxs[idx:idx+batch_size]] + np.random.randn(_bs)*yerrbatch,\ weights: 1/yerrbatch,\ is_training:1} loss_np, _ = sess.run([loss, train_op,], feed_dict=feed_dict) losses.append(loss_np) if epoch % print_every == 0: loss_avg = np.mean(np.array(losses)) print('Epoch %d, loss = %e' % (epoch, loss_avg)) check_accuracy(sess, val_data, batch_size, x, weights, preds, is_training=is_training) #t += 1 train(novel_fc, optimizer_init_fn, x_train.shape[1],\ (zx, zy), (x_train, y_train, yerr_train), (x_test, y_test),\ [(100,100), (200,100,200), (500,100)], num_pretrain_epochs = 500, num_epochs= int(1e3),\ batch_size = 200, l = (1e-6, 1e-8, 1e-8), p = (0.333, 0.1, 0.1),\ print_every = 100) np.abs(emu.goodness_of_fit(training_file, statistic = 'log_frac')).mean() np.abs(emu.goodness_of_fit(training_file, statistic = 'frac')).mean() fit_idxs = np.argsort(gof.mean(axis = 1)) emu.goodness_of_fit(training_file).mean()#, statistic = 'log_frac')).mean() model = emu._emulator ypred = model.predict(emu.x) plt.hist( np.log10( (emu._y_std+1e-5)*np.abs(ypred-emu.y)/np.abs((emu._y_std+1e-5)*emu.y+emu._y_mean) )) ( (emu._y_std+1e-5)*np.abs(ypred-emu.y)/np.abs((emu._y_std+1e-5)*emu.y+emu._y_mean) ).mean() emu._y_mean, emu._y_std for idx in fit_idxs[:10]: print gof[idx].mean() print 
(ypred[idx*emu.n_bins:(idx+1)*emu.n_bins]-emu.y[idx*emu.n_bins:(idx+1)*emu.n_bins])/emu.y[idx*emu.n_bins:(idx+1)*emu.n_bins] plt.plot(emu.scale_bin_centers, ypred[idx*emu.n_bins:(idx+1)*emu.n_bins], label = 'Emu') plt.plot(emu.scale_bin_centers, emu.y[idx*emu.n_bins:(idx+1)*emu.n_bins], label = 'True') plt.legend(loc='best') plt.xscale('log') plt.show() print dict(zip(emu.get_param_names(), emu.x[8*emu.n_bins, :]*emu._x_std+emu._x_mean)) emu.get_param_names() # + active="" # #print emu.x.shape # #print emu.downsample_x.shape # if hasattr(emu, "_emulators"): # print emu._emulators[0]._x.shape # else: # print emu._emulator._x.shape # - emu._ordered_params # + active="" # x, y, y_pred = emu.goodness_of_fit(training_file, statistic = 'log_frac') # + active="" # x, y, y_pred # + active="" # N = 25 # for _y, yp in zip(y[:N], y_pred[:N]): # #plt.plot(emu.scale_bin_centers , (_y - yp)/yp ,alpha = 0.3, color = 'b') # # plt.plot(emu.scale_bin_centers, _y, alpha = 0.3, color = 'b') # plt.plot(emu.scale_bin_centers, yp, alpha = 0.3, color = 'r') # # plt.loglog(); # + active="" # %%timeit # #truth_file = '/u/ki/swmclau2/des/PearceRedMagicWpCosmoTest.hdf5' # gof = emu.goodness_of_fit(training_file, N = 100, statistic = 'log_frac') # - gof = emu.goodness_of_fit(training_file, statistic = 'frac') print gof.mean() for row in gof: print row gof = emu.goodness_of_fit(training_file, statistic = 'frac') print gof.mean() model = emu._emulator model.score(emu.x, emu.y) # + ypred = model.predict(emu.x) np.mean(np.abs(ypred-emu.y)/emu.y) # + plt.plot(emu.scale_bin_centers, np.abs(gof.mean(axis = 0)) ) plt.plot(emu.scale_bin_centers, np.ones_like(emu.scale_bin_centers)*0.01) plt.plot(emu.scale_bin_centers, np.ones_like(emu.scale_bin_centers)*0.05) plt.plot(emu.scale_bin_centers, np.ones_like(emu.scale_bin_centers)*0.1) plt.loglog(); # - plt.plot(emu.scale_bin_centers, np.abs(gof.T),alpha = 0.1, color = 'b') plt.plot(emu.scale_bin_centers, np.ones_like(emu.scale_bin_centers)*0.01, lw = 2, color = 'k') plt.loglog(); gof[:,i].shape # + for i in xrange(gof.shape[1]): plt.hist(np.log10(gof[:, i]), label = str(i), alpha = 0.2); plt.legend(loc='best') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import json import os import matplotlib.pyplot as plt # # 1. 
Explore the dataset # ## 1.1 Load data data_path="../dutch_data/" filenames=os.listdir(data_path) #print filenames print(filenames) #load json files datas=[] for filename in filenames: with open(r'../dutch_data/'+filename, 'r') as f: datas+=json.load(f) print(len(datas)) #Concatenate all the text for each sentiment sentiment_map={} for data in datas: if 'sentiment' in data and data['sentiment']!="": if data['sentiment'] not in sentiment_map: sentiment_map[data['sentiment']]=[data['content']] else: sentiment_map[data['sentiment']]=sentiment_map[data['sentiment']]+[data['content']] print("Classes:") print(list(sentiment_map.keys())) print("Num of Positive:") print(len(sentiment_map['positive'])) print("Num of Negative:") print(len(sentiment_map['negative'])) print("Num of Neutral:") print(len(sentiment_map['neutral'])) print("Num of Not Sure:") print(len(sentiment_map['not sure'])) #Pie chart label=['positive','negative','neutral','not sure'] size=[len(sentiment_map['positive']),len(sentiment_map['negative']),len(sentiment_map['neutral']),len(sentiment_map['not sure'])] colors=['yellowgreen','gold','lightskyblue','lightcoral'] plt.pie(size,labels=label,colors=colors,autopct='%1.1f%%',shadow=True,startangle=90) plt.axis('equal') plt.show() # ## 1.2 Word frequency by sentiment # + import nltk import numpy as np import re # - #load stopwords with open(r'../data/dutch_stopwords.txt', 'r') as f: stopwords=f.readlines() stopwords=[stopword.strip() for stopword in stopwords] print(stopwords[0:5]) # + #remove punctuations and stopwords def text_clean(text): # lower text=text.lower() # remove punctuations text=re.sub("[^a-zA-Z]", " ", text) # word tokenize word_list = nltk.word_tokenize(text) # word filter word_list=[word for word in word_list if word not in stopwords and len(word)>1] return word_list #word count for each word def word_count(data_lists): # join data text=" ".join(data_lists) word_list=text_clean(text) freq_dist = nltk.FreqDist(word_list) return freq_dist #word frequency count of the words pos_freq=word_count(sentiment_map['positive']) neg_freq=word_count(sentiment_map['negative']) neu_freq=word_count(sentiment_map['neutral']) ns_freq=word_count(sentiment_map['not sure']) # - #Top-20 words freq distribution for each of the classes pos_freq.plot(20,title='positive') neg_freq.plot(20,title='negative') neu_freq.plot(20,title='neutral') ns_freq.plot(20,title='not sure') # ## Remove neutral word # From the above pictures we can easily find that positive and negative sentiment have many identical words (such as http, www) and these words are some neutral words. We can remove them to get more emotional words. 
# + from pylab import mpl import pylab # convert freq_dist to numpy array def dist2array(freq_dist): freq_list = [] num_words = len(freq_dist.values()) for i in range(num_words): freq_list.append([list(freq_dist.keys())[i],list(freq_dist.values())[i]]) freq_list = sorted(freq_list,key=lambda x:x[1],reverse=True) freqArr = np.array(freq_list) return freqArr pos_array=dist2array(pos_freq) neg_array=dist2array(neg_freq) neu_array=dist2array(neu_freq) ns_array=dist2array(ns_freq) # - print(pos_array[:5]) #Use the intersection of positive and negative sample high frequency words to obtain high frequency neutral words neutral_word=list(set(pos_array[:100,0]).intersection(set(neg_array[:100,0]))) #some neutral_word examples print(neutral_word[0:5]) # + #Re-drawing #Top-20 words freq distribution for positive and negitive sentiment num=20 plt.figure(figsize = (20, 8)) mpl.rcParams['font.sans-serif'] = ['FangSong'] mpl.rcParams['axes.unicode_minus'] = False plt.subplot(121) tmp = np.array([x for x in pos_array[:] if x[0] not in neutral_word])[:num] label=tmp[:,0] value=[int(x) for x in tmp[:,1]] pylab.title('positive',fontsize=20) pylab.xticks(range(len(value)),label, rotation=90,fontsize=15) pylab.grid(True, color="silver") plt.bar(range(len(value)), value, tick_label=label) plt.subplot(122) tmp = np.array([x for x in neg_array[:] if x[0] not in neutral_word])[:num] label=tmp[:,0] value=[int(x) for x in tmp[:,1]] pylab.title('negitive',fontsize=20) pylab.xticks(range(len(value)),label, rotation=90,fontsize=15) pylab.grid(True, color="silver") plt.bar(range(len(value)), value, tick_label=label) # - # # my opinion # In my opinion, all words exclude high frequency neutral words and stopwords may contain sentiment information. They play a large part in model decision making. # However, as the number of samples is too small, many features cannot be observed. # # 2. Feature Extraction import sklearn from sklearn.feature_extraction.text import CountVectorizer,TfidfTransformer # + texts=sentiment_map['negative']+sentiment_map['positive']+sentiment_map['neutral'] def text_clean2(text): # lower text=text.lower() # remove punctuations text=re.sub("[^a-zA-Z]", " ", text) # word tokenize word_list = nltk.word_tokenize(text) # word filter word_list=[word for word in word_list if word not in stopwords and len(word)>1] return " ".join(word_list) #clean text and create labels labels=[] for i,text in enumerate(texts): texts[i]=text_clean2(text) if iTable of Contents #
    # # `os`, `os.path`: Accessing the operation system import os # ## environment variables # # useful for configs os.getenv('STUFF'), os.getenv('STUFF', default='foo') os.environ['STUFF'] = 'bar' os.getenv('STUFF', default='foo') del os.environ['STUFF'] # ## pwd, cwd # + print(os.getcwd()) os.chdir('..') print(os.getcwd()) os.chdir('notebooks') # - # ## Manipulating files os.makedirs('example', exist_ok=True) fname = 'example/test.txt' with open(fname, 'w') as f: f.write('Hello World\n') stat_res = os.stat(fname) '{:o}'.format(stat_res.st_mode) # use an octal integer os.chmod(fname, 0o600) # equivalent to chmod 666 stat_res = os.stat(fname) '{:o}'.format(stat_res.st_mode) # + os.makedirs('example/build') print(os.listdir('example/')) if os.path.exists('example/build'): os.rmdir('example/build') print(os.listdir('example')) if os.path.isfile('example/test.txt'): os.remove('example/test.txt') print(os.listdir('example')) # - os.path.join('build', 'example', 'test.txt') os.path.splitext('test.txt') # Let's create a lot of files # + import itertools import random dirnames = ['foo', 'bar', 'baz'] extensions = ['.txt', '.dat', '.docx', '.xslx', '.fits', '.png'] filenames = [name + ext for name in dirnames for ext in extensions] def create_files_and_subfolders(path, depth=3): n_files = random.randint(1, 5) for i in range(n_files): open(os.path.join(path, random.choice(filenames)), 'w').close() if depth == 0: return n_subdirs = random.randint(1, 3) for i in range(n_subdirs): subdir = os.path.join(path, random.choice(dirnames)) os.makedirs(subdir, exist_ok=True) create_files_and_subfolders(subdir, depth=depth - 1) create_files_and_subfolders('example', 3) # + # os.walk goes recursivly through all directories and returns files and subdirectories for root, dirs, files in os.walk('example'): print(root) for d in sorted(dirs): print(' ', d) for f in sorted(files): print(' ', f) # + import shutil shutil.rmtree('example', ignore_errors=True) # rm -rf # - # # `subprocess`, calling shell commands import subprocess as sp # + result = sp.check_output(['conda', 'list', 'numpy']) print(result) print() print(result.decode()) # - # more complex task, provide read stdout # + url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/0/00/Crab_Nebula.jpg/480px-Crab_Nebula.jpg' process = sp.run( ['curl', url], stdout=sp.PIPE, stderr=sp.PIPE, ) # + import matplotlib.pyplot as plt from scipy.misc import imread # %matplotlib inline from io import BytesIO # File-Like object in memory img = BytesIO(process.stdout) plt.imshow(imread(img)) # - # # `threading`, `multiprocessing`: Doing stuff in parallel # # There are much more advanced libraries for this, e.g. `joblib` # # https://pythonhosted.org/joblib/ # # Python can only run one python statement at a time through one interpreter, even # when using multiple threads, only one thread at a time will be executed. # This is called the **Global Interpreter Lock** (GIL). # # So you only gain in perfomance using threads, when: # # * there are I/O bound operations (Reading files, downloads, waiting on sockets) # * When you use a lot of c-extensions (like numpy, pandas and basically all the scientific python stack) # * sleeping # # For truly parallel operations, you need new python processes, this can be done with the `multiprocessing` module. 
# + from random import random import time def do_work(): time.sleep(random()) print('hello') time.sleep(1) print('world') for i in range(3): do_work() # + from threading import Thread threads = [Thread(target=do_work) for i in range(3)] for t in threads: t.start() # block until all threads are done for t in threads: t.join() # + import random def pi_mc(n): n_circle = 0 for i in range(n): x = random.uniform(-1, 1) y = random.uniform(-1, 1) if (x**2 + y**2) <= 1: n_circle += 1 return 4 * n_circle / n # + from multiprocessing import Pool n_jobs = 100 n_iters = 100000 iterable = [n_iters] * n_jobs print(iterable) with Pool(4) as pool: results = pool.map(pi_mc, iterable) print(sum(results) / len(results)) # - # # `collections`: Useful Containers # # example: count words text = '''Alice was beginning to get very tired of sitting by her sister on the bank, and of having nothing to do: once or twice she had peeped into the book her sister was reading, but it had no pictures or conversations in it, `and what is the use of a book,' thought Alice `without pictures or conversation?' So she was considering in her own mind (as well as she could, for the hot day made her feel very sleepy and stupid), whether the pleasure of making a daisy-chain would be worth the trouble of getting up and picking the daisies, when suddenly a White Rabbit with pink eyes ran close by her. There was nothing so very remarkable in that; nor did Alice think it so very much out of the way to hear the Rabbit say to itself, `Oh dear! Oh dear! I shall be late!' (when she thought it over afterwards, it occurred to her that she ought to have wondered at this, but at the time it all seemed quite natural); but when the Rabbit actually took a watch out of its waistcoat-pocket, and looked at it, and then hurried on, Alice started to her feet, for it flashed across her mind that she had never before seen a rabbit with either a waistcoat-pocket, or a watch to take out of it, and burning with curiosity, she ran across the field after it, and fortunately was just in time to see it pop down a large rabbit-hole under the hedge. ''' # remove punctuation text = text.translate({ord(c): None for c in ',`;.!:?()\''}) print(text) # + # solution one, pure python counts = {} for word in text.split(): if word not in counts: counts[word] = 0 counts[word] += 1 for name, count in sorted(counts.items(), key=lambda s: s[1], reverse=True)[:10]: print(name, count) # - # `collections.defaultdict` takes a function that initialises entries # + # solution 2, using default dict from collections import defaultdict counts = defaultdict(int) # int() returns 0 for word in text.split(): counts[word] += 1 for name, count in sorted(counts.items(), key=lambda s: s[1], reverse=True)[:10]: print(name, count) # + # solution 3, Counter from collections import Counter counts = Counter(text.split()) for word, count in counts.most_common(10): print(word, count) # - color = (1.0, 0, 0, 1.0) ## what is this? RGBA? CMYK? 
Something else?

# +
from collections import namedtuple

RGBAColor = namedtuple('RGBAColor', ['r', 'g', 'b', 'a'])

color = RGBAColor(1.0, 0, 0, 1.0)
color
# -

# # `functools`: functional programming in Python

# +
import functools

# reduce was a builtin in py2
functools.reduce(lambda v1, v2: v1 + v2, range(100))

# +
newline_print = functools.partial(print, sep='\n')

newline_print(*range(5))


# +
def fib(n):
    if n == 0:
        return 0
    if n in (1, 2):
        return 1
    else:
        return fib(n - 1) + fib(n - 2)


@functools.lru_cache(maxsize=500)
def fib_cached(n):
    if n == 0:
        return 0
    if n in (1, 2):
        return 1
    else:
        # recurse into the cached version, otherwise only the outermost call is memoised
        return fib_cached(n - 1) + fib_cached(n - 2)


fib(10)
print('cached')
fib_cached(7)
# -

# %%timeit
fib(15)

# %%timeit
fib_cached(15)

# # `re`, regular expressions

import re

# +
files = ['img001.png', 'img002.png', 'world.txt', 'foo.txt', 'stuff.dat', 'test.xlsx']

for f in files:
    m = re.match(r'img([0-9]{3})\.png', f)
    if m:
        print(f, m.groups())
# -

# # `itertools`, more iteration tools

import itertools

# +
longer = [1, 2, 3, 4, 5]
shorter = ['a', 'b', 'c']

print('{:-^40}'.format(' zip '))
for l, s in zip(longer, shorter):
    print(l, s)

print('{:-^40}'.format(' zip_longest '))
for l, s in itertools.zip_longest(longer, shorter):
    print(l, s)

print('{:-^40}'.format(' zip_longest, with fillvalue '))
for l, s in itertools.zip_longest(longer, shorter, fillvalue='z'):
    print(l, s)
# -

list(itertools.permutations('ABC'))

list(itertools.combinations('ABC', 2))

# # `argparse`: command line options
#
# Alternatives:
#
# - `click`: http://click.pocoo.org/6/
# - `docopt`: http://docopt.org/

# +
from argparse import ArgumentParser

parser = ArgumentParser()

parser.add_argument('inputfile')  # positional argument
parser.add_argument('-o', '--output')  # option
parser.add_argument('-n', '--number', default=0, type=int)  # option with a default value and a type
# -

args = parser.parse_args(['data.csv'])  # if no argument list is given, sys.argv is used
print(args.number, args.inputfile, args.output)

# +
args = parser.parse_args(['data.csv', '--number=5', '-o', 'test.csv'])
print(args.number, args.inputfile, args.output)
# -

# # `copy`, copy operations

# +
a = [1, 2, [4, 5]]
b = a

b[1] = 3
b[2][1] = 'Hello'

print(a)
print(b)

# +
from copy import copy

a = [1, 2, [4, 5]]
b = copy(a)

b[1] = 3
b[2][1] = 'Hello'

print(a)
print(b)

# +
from copy import deepcopy

a = [1, 2, [4, 5]]
b = deepcopy(a)

b[1] = 3
b[2][1] = 'Hello'

print(a)
print(b)
# -

# # `tempfile`, Temporary Files and Directories

# +
import tempfile
import os

print(tempfile.gettempdir())

# file will be deleted when exiting the with block
with tempfile.NamedTemporaryFile(prefix='python_course_', suffix='.txt', mode='w') as f:
    path = f.name
    f.write('Hello World')
    print('f exists:', os.path.exists(path))

print('f exists:', os.path.exists(path))

# +
# directory with all contents will be deleted when we exit the with block
with tempfile.TemporaryDirectory() as d:
    print(d)
    with open(os.path.join(d, 'myfile.txt'), 'w') as f:
        print(f.name)
        f.write('Hello World')
    print('d exists:', os.path.exists(d))

print('d exists:', os.path.exists(d))
# -

# # `struct`, parsing binary data
#
# It's like every other day in the office. Your supervisor does not like standardized file formats.
# Like `.fits` or `.hdf` or ("Are you completely insane?") `.json` or `.yaml`.
# Because, you know, they are super hard to read using Fortran 77.
# # So he sends you data in "an easy to read" file format, a custom, proprietary binary blob: # # * First 4 bytes is an unsigned integer, containing the length of the comment string # * Then N bytes comment encoded using utf-8 # * utf-8? Are you kidding me? ASCII!!! # * Then triples of double, double, unsigned int for x, y, n import random import struct # pack data struct.pack('II', 2, 1024) # pack to unsigned 32bit integers # unpack data struct.unpack('f', b'\xdb\x0f\x49\x40') # 32-bit float # create a binary file, let's add a comment first with open('letsinventourownbinaryformat.dat', 'wb') as f: comment = 'Here, have this awesome data!'.encode('ascii') f.write(struct.pack('I', len(comment))) f.write(comment) for i in range(1000): x = random.uniform(-1, 1) y = random.uniform(-1, 1) n = random.randint(1, 200 - int(100 * (x**2 + y**2))) f.write(struct.pack('ddI', x, y, n)) # + # read the file back in xs, ys, ns = [], [], [] with open('letsinventourownbinaryformat.dat', 'rb') as f: comment_length, = struct.unpack('I', f.read(4)) comment = f.read(comment_length).decode('ascii') size = struct.calcsize('ddI') data = f.read(size) while data: x, y, n = struct.unpack('ddI', data) xs.append(x) ys.append(y) ns.append(n) data = f.read(size) print(comment) print(len(xs)) # - # # `email`, `smtplib`, `getpass` # + import smtplib from email.message import EmailMessage from getpass import getpass text = '''Hello Participants, Thanks for attending! Do not forget to provide feedback Cheers, Max ''' msg = EmailMessage() msg.set_content(text) msg['Subject'] = 'Email demonstration at the Python Course' msg['From'] = 'Firstname Surname ' msg['To'] = 'Firstname Surname , Firstname Surname ' # Send the message via our own SMTP server. s = smtplib.SMTP_SSL(host='server') s.login(input('Username: '), getpass('Enter password: ')) s.send_message( from_addr=msg['From'], to_addrs=msg['To'], msg=msg, ) s.quit() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # PCA # # In this notebook, I will review a little bit about [PCA](https://arxiv.org/abs/1404.1100), implement [recursive PCA](http://www.sciencedirect.com/science/article/pii/S0959152400000226), how PCA can be viewed as an optimization problem, and implement a constrained version of this optimization process for PCA applied on time-series by including a roughness penalty. # # [Generalized PCA](https://arxiv.org/abs/1202.4002) will not be considered after careful consideration. Note: "Generalized PCA" is "an algebro-geometric solution to the problem of segmenting an unknown number of subspaces of unknown and varying dimensions from sample data points". # + import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib.patches import FancyArrowPatch from mpl_toolkits.mplot3d import proj3d from sklearn.decomposition import PCA # %matplotlib inline # - class Arrow3D(FancyArrowPatch): def __init__(self, xs, ys, zs, *args, **kwargs): FancyArrowPatch.__init__(self, (0,0), (0,0), *args, **kwargs) self._verts3d = xs, ys, zs def draw(self, renderer): xs3d, ys3d, zs3d = self._verts3d xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M) self.set_positions((xs[0],ys[0]),(xs[1],ys[1])) FancyArrowPatch.draw(self, renderer) # Let's first start to apply PCA on the following toy example. 
# + # Toy data X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]], dtype=np.float64) # NxT # Plot plt.figure(figsize=(5,4)) plt.scatter(X[:,0], X[:,1], color='b') plt.arrow(0, 0, 1, 0, length_includes_head = True, head_width = 0.15, color='k') plt.arrow(0, 0, 0, 1, length_includes_head = True, head_width = 0.15, color='') plt.show() # + ### PCA n_components = 2 # number of principal axes that we want to keep ## Let's compute PCA from scratch # 1. Center the data mean = X.mean(axis=0) X -= mean # note that the data was already centered N = X.shape[0] # 2. Compute the covariance matrix CovX = 1./(N-1) * X.T.dot(X) # TxT (same as np.cov(X, rowvar=False))) # 3. Compute the eigenvectors of this covariance matrix # np.linalg.eigh is more efficient than np.linalg.eig for symmetric matrix evals, evecs = np.linalg.eigh(CovX) # 4. Sort the eigenvalues (in decreasing order) and eigenvectors idx = np.argsort(evals)[::-1] evals = evals[idx] evecs = evecs[:,idx] # 5. Form the projection matrix P = evecs[:,:n_components] print(P) # 5. Project the data using the projection matrix # This is the same as rotating the matrix X using P Y = X.dot(P) # 6. Compare it with standard PCA pca = PCA(n_components=n_components) pca = pca.fit(X) Xnew = pca.transform(X) print(pca.components_.T) print(np.allclose(Xnew, Y)) # 7. Plot the data plt.figure(figsize=(10,4)) plt.subplot(1,2,1) plt.title('PCA: eigenvalues') plt.bar(np.array([0.,0.1]), evals, width=0.1) plt.xlim(0.,1.) plt.subplot(1,2,2) plt.title('PCA: data and eigenvectors') plt.scatter(X[:,0], X[:,1], color='b') plt.arrow(0, 0, np.sqrt(evals[0])*P[0,0], np.sqrt(evals[0])*P[1,0], length_includes_head = True, head_width = 0.15, color='b') plt.arrow(0, 0, np.sqrt(evals[1])*P[0,1], np.sqrt(evals[1])*P[1,1], length_includes_head = True, head_width = 0.15, color='b') plt.scatter(Y[:,0], Y[:,1], color='r') plt.arrow(0, 0, np.sqrt(evals[0]), 0, length_includes_head = True, head_width = 0.15, color='r') plt.arrow(0, 0, 0, np.sqrt(evals[1]), length_includes_head = True, head_width = 0.15, color='r') plt.show() # - # The covariance matrix can be recovered from these eigenvalues and eigenvectors. # Using the eigenvectors and eigenvalues, we can of course recover the covariance matrix # Note: if P is not square, we have to fill it up. np.allclose(CovX, P.dot(np.diag(evals)).dot(P.T)) # Few properties: # # * Applying PCA several times is the same as applying it one time. This is because PCA diagonalizes our matrix, thus applying PCA on a diagonal matrix will result in the same matrix. # * Applying PCA on a part of the data and another PCA on the other part, then applying PCA on the concatenation of both do not result in the same matrix as applying PCA on the whole data. # * Applying PCA on a rotated matrix does not give the same result as applying PCA on the initial matrix. # + # Here is the general method def pca(X, normalize=False, copy=True): if copy: X = np.copy(X) # 1. Center the data mean = X.mean(axis=0) X -= mean N = X.shape[0] if normalize: X /= X.std(axis=0) # 2. Compute the covariance matrix CovX = 1./(N-1) * X.T.dot(X) # TxT (same as np.cov(X, rowvar=False))) # 3. Compute the eigenvectors of this covariance matrix # np.linalg.eigh is more efficient than np.linalg.eig for symmetric matrix evals, evecs = np.linalg.eigh(CovX) # 4. 
Sort the eigenvalues (in decreasing order) and eigenvectors idx = np.argsort(evals)[::-1] evals, evecs = evals[idx], evecs[:,idx] return evals, evecs # - # #### Applying PCA on time series # # A fundamental question when applying PCA on time series is how to visualize this high dimensional data. Indeed, a sample $\pmb{x}(t) \in \mathbb{R}^T$. One way is to plot this $\pmb{x}(t)$ where the x-axis is the time, and y-axis is $x(t)$. Each time $t_i$ $(\forall i \in \{0,...,T\})$ represents a dimension. By plotting a vertical line at time $t=t_i$, we can see the variance in this particular dimension. # # For an infinite vector, or function, check about *Functional PCA*. # ## Recursive PCA # # Let's now apply **recursive PCA** on this toy example, with 3 new samples coming at different time steps. # + # Let's augment our matrix with 3 new samples Xs = np.array([[-1,1], [-3,0], [-4,-1]], dtype=np.float64) n_components = 2 k = float(N) R = CovX X_aug = X mean = X.mean(axis=0).reshape(-1,1) print(evals) for x in Xs: x = x.reshape(-1,1) X_aug = np.vstack((X_aug, x.T)) X_aug1 = X_aug - X_aug.mean(axis=0) pca = PCA(n_components=n_components) pca = pca.fit(X_aug1) #print(pca.get_covariance()) # Recursive PCA new_mean = k/(k+1) * mean + 1./(k+1) * x diff_mean = (new_mean - mean) R = (k-1)/k * R + diff_mean.dot(diff_mean.T) + 1./k * (x-new_mean).dot((x-new_mean).T) #print(R) print("The cov of the whole augmented matrix is equal to the recursive cov: {0}".format( np.allclose(pca.get_covariance(), R))) k+=1 mean = new_mean evals = np.linalg.eigh(R)[0] idx = np.argsort(evals)[::-1] evals = evals[idx] print(evals) # Compute the new projection matrix evals, evecs = np.linalg.eigh(R) idx = np.argsort(evals)[::-1] evals, evecs = evals[idx], evecs[:,idx] P = evecs[:,:n_components] Y = X_aug.dot(P) # Plot the data plt.figure(figsize=(10,4)) plt.subplot(1,2,1) plt.title('PCA: eigenvalues') plt.bar(np.array([0.,0.1]), evals, width=0.1) plt.xlim(0.,1.) plt.subplot(1,2,2) plt.title('PCA: data and eigenvectors') plt.scatter(X_aug[:,0], X_aug[:,1], color='b') plt.arrow(0, 0, np.sqrt(evals[0])*P[0,0], np.sqrt(evals[0])*P[1,0], length_includes_head = True, head_width = 0.15, color='b') plt.arrow(0, 0, np.sqrt(evals[1])*P[0,1], np.sqrt(evals[1])*P[1,1], length_includes_head = True, head_width = 0.15, color='b') plt.scatter(Y[:,0], Y[:,1], color='r') plt.arrow(0, 0, np.sqrt(evals[0]), 0, length_includes_head = True, head_width = 0.15, color='r') plt.arrow(0, 0, 0, np.sqrt(evals[1]), length_includes_head = True, head_width = 0.15, color='r') plt.show() # - # Let's now add a sample that can not be modeled by a linear combination of the principal axes, i.e. which is orthogonal to the current covariance matrix. Then, as usual, let's apply recursive PCA on it. # + # Let's add a sample that can not be modeled by a linear combination of the principal axes # i.e. which is orthogonal to the current covariance matrix. # Then, let's apply recursive PCA on it. 
# New 3D sample x = np.array([1,-1,1], dtype=np.float64).reshape(-1,1) # Reshaping previous values (pad a column/row of zeros) X_aug = np.pad(X_aug, ((0,0), (0,1)), mode='constant') # Nx(T+1) mean = np.pad(mean, ((0,1),(0,0)), mode='constant') R = np.pad(R, ((0,1),(0,1)), mode='constant') # Adding new sample and compute mean X_aug = np.vstack((X_aug, x.T)) X_aug1 = X_aug - X_aug.mean(axis=0) # Use sklearn PCA n_components = 3 pca = PCA(n_components=n_components) pca = pca.fit(X_aug1) #print(pca.get_covariance()) # Recursive PCA new_mean = k/(k+1) * mean + 1./(k+1) * x diff_mean = (new_mean - mean) R = (k-1)/k * R + diff_mean.dot(diff_mean.T) + 1./k * (x-new_mean).dot((x-new_mean).T) #print(R) print("The cov of the whole augmented matrix is equal to the recursive cov: {0}".format( np.allclose(pca.get_covariance(), R))) k+=1 mean = new_mean #print('-'*30) # Compute the new projection matrix evals, evecs = np.linalg.eigh(R) idx = np.argsort(evals)[::-1] evals, evecs = evals[idx], evecs[:,idx] P = evecs[:,:n_components] Y = X_aug.dot(P) # Plot the data fig = plt.figure(figsize=(10,4)) plt.subplot(1,2,1) plt.title('PCA: eigenvalues') plt.bar(np.array([0.,0.1,0.2]), evals, width=0.1) plt.xlim(0.,1.) ax = fig.add_subplot(122, projection='3d') ax.set_title('PCA: data and eigenvectors') ax.scatter(X_aug[:,0], X_aug[:,1], X_aug[:,2]) # From https://stackoverflow.com/questions/22867620/putting-arrowheads-on-vectors-in-matplotlibs-3d-plot for v in evecs: a = Arrow3D([0., v[0]], [0., v[1]], [0., v[2]], mutation_scale=20, lw=1, arrowstyle="-|>", color="b") ax.add_artist(a) ax.scatter(Y[:,0], Y[:,1], Y[:,2], color='r') a = Arrow3D([0., evals[0]], [0., evals[1]], [0., evals[2]], mutation_scale=20, lw=1, arrowstyle="-|>", color="r") ax.add_artist(a) plt.show() # - # ## PCA as an Optimization Problem # # PCA can be viewed as an optimization problem in 2 different ways. Theses 2 approaches are equivalent. # 1. Maximize the variance of the projected data. # 2. Minimize the reconstruction error in a least-square sense. # # Mathematically, here is the optimization problem that we are trying to solve: # # \begin{equation} # \max_{\pmb{v_i}} \: \pmb{v_i}^T \pmb{X}^T \pmb{X v_i} \quad \mbox{subj. to} \quad \begin{array}{l} \pmb{v_i}^T \pmb{v_i} = 1 \\ \pmb{v_i}^T \pmb{v_j} = 0 # \end{array}, # \end{equation} # $\forall i \in \{1,...,D\}, \forall 1\leq j < i$. # # Nice references: # * [What is the objective fct of PCA? (StackExchange)](https://stats.stackexchange.com/questions/10251/what-is-the-objective-function-of-pca) # * [PCA objective function: what is the connection between maximizing variance and minimizing error? 
(StackExchange)](https://stats.stackexchange.com/questions/32174/pca-objective-function-what-is-the-connection-between-maximizing-variance-and-m) # * [Everything you did and didn't know about PCA (blog)](http://alexhwilliams.info/itsneuronalblog/2016/03/27/pca/) # * ["PCA and Optimization - A Tutorial", 2015 (paper)](http://scholarscompass.vcu.edu/cgi/viewcontent.cgi?article=1006&context=ssor_pubs) # + # data # Toy data X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]], dtype=np.float64) # NxT N = X.shape[0] # PCA evals, evecs = pca(X) print(evals) print(evecs) # - # ### Using Scipy # + from scipy.optimize import minimize # PCA as an optimization # cache the 'covariance' matrix C = X.T.dot(X)/(N-1) # define objective function to MINIMIZE f = lambda u: -(u.T.dot(C)).dot(u) # define initial guess x0 = np.zeros((2,1)) # define optimization method # By default, it will be 'BFGS', 'L-BFGS-B', or 'SLSQP' depending on the constraints and bounds method = None # define constraints constraints = [{'type': 'eq', 'fun': lambda u: u.T.dot(u) - 1}] # Minimize --> it returns an instance of OptimizeResult u1 = minimize(f, x0, method=method, constraints=constraints) #print(u1) u1 = u1.x.reshape(-1,1) # get solution # Add constraint constraints.append({'type': 'eq', 'fun': lambda u: u1.T.dot(u)}) u2 = minimize(f, x0, method=method, constraints=constraints) #print(u2) u2 = u2.x.reshape(-1,1) # stack the optimized vector found P = np.hstack((u1,u2)) print(P) # Plot plt.title('PCA: data and eigenvectors') plt.scatter(X[:,0], X[:,1], color='b') plt.arrow(0, 0, P[0,0], P[1,0], length_includes_head = True, head_width = 0.15, color='b') plt.arrow(0, 0, P[0,1], P[1,1], length_includes_head = True, head_width = 0.15, color='g') plt.show() # - # Define PCA optimization method def pca_scipy(X, rough_param=0.0, normalize=False, copy=True): """ Compute PCA on the given data using an optimization process. 
""" if copy: X = np.copy(X) # center the data mean = X.mean(axis=0) X -= mean N = X.shape[0] T = X.shape[1] # normalize if normalize: X /= X.std(axis=0) # compute 'covariance' matrix and cache it N = X.shape[0] C = X.T.dot(X)/(N-1) # define objective function to MINIMIZE #f = lambda u: -(u.T.dot(C)).dot(u) def f(u): pen = 0 if rough_param != 0 and u.size > 2: ddu = np.diff(np.diff(u)) rough_pen = (ddu**2).sum() pen = rough_param*rough_pen #if u.size > 4: # ddddu = np.diff(np.diff(ddu)) # rough_pen = (ddddu**2).sum() # pen += rough_param*rough_pen loss = -(u.T.dot(C)).dot(u) return loss + pen # define initial guess x0 = np.ones((T,)) #np.zeros((T,)) # define optimization method # By default, it will be 'BFGS', 'L-BFGS-B', or 'SLSQP' depending on the constraints and bounds # If constraints, it will be 'SLSQP' (Sequential Least SQuares Programming) # 'Nelder-Mead', 'Powell', 'CG', 'BFGS', 'Newton-CG', 'L-BFGS-B', 'TNC', 'COBYLA', 'SLSQP', 'dogleg', 'trust-ncg' # 'Nelder-Mead', 'Powell', 'CG', 'Newton-CG', 'TNC', 'COBYLA', 'dogleg', 'trust-ncg' can not handle (eq) constraints # 'BFGS', 'L-BFGS-B' do not work method = 'SLSQP' # define 1st constraints: norm of 1 constraints = [{'type': 'eq', 'fun': lambda u: u.T.dot(u) - 1}] # optimize recursively evals, evecs = [], [] messages = {} for i in range(T): if i != 0: # add orthogonality constraint constraints.append({'type': 'eq', 'fun': lambda u: u1.T.dot(u)}) # minimize --> it returns an instance of OptimizeResult u1 = minimize(f, x0, method=method, constraints=constraints) if not u1.success: messages[i] = u1.message # get 'eigenvalue' evals.append(-u1.fun) # get solution u1 = u1.x evecs.append(u1) return np.array(evals), np.array(evecs).T, messages # + evals, evecs, messages = pca_scipy(X) P = evecs print(messages) print(evals) print(P) # Plot plt.figure(figsize=(10,4)) plt.subplot(1,2,1) plt.title('PCA: eigenvalues') plt.bar(np.array([0.,0.1]), evals, width=0.1) plt.xlim(0.,1.) plt.subplot(1,2,2) plt.title('PCA: data and eigenvectors') plt.scatter(X[:,0], X[:,1], color='b') plt.arrow(0, 0, P[0,0], P[1,0], length_includes_head = True, head_width = 0.15, color='b') plt.arrow(0, 0, P[0,1], P[1,1], length_includes_head = True, head_width = 0.15, color='g') plt.show() # - # ### Using CVXPY # # Note: You **cannot** use `cvxpy` for this problem, as we are trying to maximize a convex function, and `cvxpy` only accepts to maximize a concave fct. # + import cvxpy as cvx # cache the 'covariance' matrix C = X.T.dot(X)/(N-1) # define vector to optimize u1 = cvx.Variable(X.shape[1]) print(cvx.quad_form(u1, C).is_dcp()) print(cvx.quad_form(u1, C).is_quadratic()) # define objective fct to maximize #f = cvx.Maximize(u1.T*C*u1) f = cvx.Maximize(cvx.quad_form(u1, C)) # this does not work! #f = cvx.Minimize(cvx.quad_form(u1, C)) # this works (if no constraints) but that is not what we want to achieve! 
constraints = [u1.T*u1 == 1] prob = cvx.Problem(f, constraints) result = prob.solve() print(u1.value) # - # ### Using NLopt # # Nonlinear optimization algorithms that can handle nonlinear inequality and **equality** constraints are: # * ISRES (Improved Stochastic Ranking Evolution Strategy) $\rightarrow$ global derivative-free # * COBYLA (Constrained Optimization BY Linear Approximations) $\rightarrow$ local derivative-free # * SLSQP (Sequential Least-SQuares Programming) $\rightarrow$ local gradient-based # * AUGLAG (AUGmented LAGrangian) $\rightarrow$ global/local derivative-free/gradient based (determined based on the subsidiary algo) # # More information about the various algorithms can be found on this [link](https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/). # + # Playground with NLopt import nlopt nlopt_results = {1: 'success', 2: 'stop_val reached', 3: 'ftol reached', 4: 'xtol reached', 5: 'maxeval reached', 6: 'maxtime reached', -1: 'failure', -2: 'invalid args', -3: 'out of memory', -4: 'roundoff limited', -5: 'forced stop'} n_iter = 0 N = X.shape[0] # cache the 'covariance' matrix C = X.T.dot(X)/(N-1) # define random seed nlopt.srand(125) # define which solver to use #optimizer = nlopt.GN_ISRES #optimizer = nlopt.LN_COBYLA optimizer = nlopt.LD_SLSQP #optimizer = nlopt.LD_AUGLAG # nlopt.AUGLAG, nlopt.AUGLAG_EQ, nlopt.LD_AUGLAG, # nlopt.LD_AUGLAG_EQ, nlopt.LN_AUGLAG, nlopt.LN_AUGLAG_EQ # if nlopt.AUGLAG, we have to define a subsidiary algo suboptimizer = nlopt.LD_SLSQP #nlopt.LN_COBYLA # define objective function def f(x, grad): global n_iter n_iter += 1 if grad.size > 0: grad[:] = 2*x.T.dot(C) return x.T.dot(C).dot(x) # define norm constraint def c1(x, grad): if grad.size > 0: grad[:] = 2*x return (x.T.dot(x) - 1) # create optimizer n = X.shape[1] # nb of parameters to optimize, size of the vector opt = nlopt.opt(optimizer, n) print("Algo: %s" % opt.get_algorithm_name()) opt.set_max_objective(f) # if nlopt.GN_ISRES, we can define the population size opt.set_population(0) # by default for ISRES: pop=20*(n+1) # if nlopt.AUGLAG, set the subsidiary algo subopt = nlopt.opt(suboptimizer, n) subopt.set_lower_bounds(-1) subopt.set_upper_bounds(1) #subopt.set_ftol_rel(1e-2) #subopt.set_maxeval(100) opt.set_local_optimizer(subopt) # define bound constraints (should be between -1 and 1 because the norm should be 1) opt.set_lower_bounds(-1.) opt.set_upper_bounds(1.) 
# define equality constraints opt.add_equality_constraint(c1, 0) #opt.add_equality_mconstraint(constraints, tol) # define stopping criteria #opt.set_stopval(stopval) opt.set_ftol_rel(1e-8) #opt.set_xtol_rel(1e-4) opt.set_maxeval(100000) # nb of iteration opt.set_maxtime(10) # time in secs # define initial value #x0 = np.zeros((n,)) x0 = np.array([0.1,0.1]) # optimize x = x0 try: x = opt.optimize(x0) except nlopt.RoundoffLimited as e: pass max_value = opt.last_optimum_value() result = opt.last_optimize_result() print("Max value: %f" % max_value) print("Nb of iterations: %d" % n_iter) print("Result: %s" % nlopt_results[result]) print("Optimized array:") print(x) # + # Define PCA optimization method formally using NLopt def center_data(X, normalize=False, copy=True): if copy: X = np.copy(X) # center the data mean = X.mean(axis=0) X -= mean N = X.shape[0] T = X.shape[1] # normalize if normalize: X /= X.std(axis=0) return X class OrthogonalConstraint(object): def __init__(self, v): self.v = np.copy(v) def constraint(self, x, grad): if grad.size > 0: grad[:] = self.v return (x.T.dot(self.v)) def pca_nlopt(X, method=None, submethod=None, rough_param=0.0, normalize=False, copy=True): """ Compute PCA on the given data using nlopt. :param (str) method: it can take the following value: 'SLSQP', 'ISRES', 'COBYLA', 'AUGLAG'. By default, it will be 'SLSQP'. :param (str) submethod: this needs to be defined only if method is 'AUGLAG'. By default, it will be 'SLSQP'. """ # center the data X = center_data(X, normalize=normalize, copy=copy) # compute 'covariance' matrix and cache it N = X.shape[0] C = X.T.dot(X) / (N-1) # define useful variables nlopt_results = {1: 'success', 2: 'stop_val reached', 3: 'ftol reached', 4: 'xtol reached', 5: 'maxeval reached', 6: 'maxtime reached', -1: 'failure', -2: 'invalid args', -3: 'out of memory', -4: 'roundoff limited', -5: 'forced stop'} n = X.shape[1] # nb of parameters to optimize, size of the vector # define random seed nlopt.srand(125) # define which solver (and possibly subsolver) to use def get_opt(method): if method == 'ISRES': return nlopt.opt(nlopt.GN_ISRES, n) elif method == 'COBYLA': return nlopt.opt(nlopt.LN_COBYLA, n) elif method == 'SLSQP': return nlopt.opt(nlopt.LD_SLSQP, n) elif method == 'AUGLAG': return nlopt.opt(nlopt.AUGLAG, n) else: raise NotImplementedError("The given method has not been implemented") if method is None: method = 'SLSQP' opt = get_opt(method) if method == 'AUGLAG': if submethod is None: submethod = 'SLSQP' elif submethod == 'AUGLAG': raise ValueError("Submethod should be different from AUGLAG") subopt = get_opt(submethod) subopt.set_lower_bounds(-1) subopt.set_upper_bounds(1) #subopt.set_ftol_rel(1e-2) #subopt.set_maxeval(100) opt.set_local_optimizer(subopt) # define objective function def f(x, grad): if grad.size > 0: grad[:] = 2*x.T.dot(C) return x.T.dot(C).dot(x) # define norm constraint def c1(x, grad): if grad.size > 0: grad[:] = 2*x return (x.T.dot(x) - 1) # define objective function opt.set_max_objective(f) # if nlopt.GN_ISRES, we can define the population size opt.set_population(0) # by default for ISRES: pop=20*(n+1) # define bound constraints (should be between -1 and 1 because the norm should be 1) opt.set_lower_bounds(-1.) opt.set_upper_bounds(1.) 
# define equality constraints opt.add_equality_constraint(c1, 0) #opt.add_equality_mconstraint(constraints, tol) # define stopping criteria #opt.set_stopval(stopval) opt.set_ftol_rel(1e-8) #opt.set_xtol_rel(1e-4) opt.set_maxeval(100000) # nb of iteration opt.set_maxtime(2) # time in secs # define initial value x0 = np.array([0.1]*n) # important that the initial value ≠ 0 for the computation of the grad! evals, evecs, msgs = [], [], {} for i in range(n): if i > 0: c = OrthogonalConstraint(x) opt.add_equality_constraint(c.constraint, 0) # optimize try: x = opt.optimize(x0) except nlopt.RoundoffLimited as e: pass # save values evecs.append(x) evals.append(opt.last_optimum_value()) msgs[i] = nlopt_results[opt.last_optimize_result()] return np.array(evals), np.array(evecs).T, msgs # + method = 'SLSQP' # 'SLSQP', 'COBYLA', 'ISRES', 'AUGLAG' submethod = None evals, P, msgs = pca_nlopt(X, method=method, submethod=submethod) print(msgs) print(evals) print(P) # Plot plt.figure(figsize=(10,4)) plt.subplot(1,2,1) plt.title('PCA: eigenvalues') plt.bar(np.array([0.,0.1]), evals, width=0.1) plt.xlim(0.,1.) plt.subplot(1,2,2) plt.title('PCA: data and eigenvectors') plt.scatter(X[:,0], X[:,1], color='b') plt.arrow(0, 0, P[0,0], P[1,0], length_includes_head = True, head_width = 0.15, color='b') plt.arrow(0, 0, P[0,1], P[1,1], length_includes_head = True, head_width = 0.15, color='g') plt.show() # - # For comparison purpose, we obtained the following values with scipy ('SLSQP'): # # [ 7.93954312 0.06045688]
    # [[ 0.83849224 -0.54491355]
    # [ 0.54491353 0.83849226]] # # And these values using std PCA: # # [ 7.93954312 0.06045688]
    # [[-0.83849224 0.54491354]
    # [-0.54491354 -0.83849224]] # ### Using IPopt # + # Playground with IPopt import ipopt # define useful vars n = X.shape[1] N = X.shape[0] C = X.T.dot(X) / (N-1) # define initial value x0 = np.array([0.1]*n) # define (lower and upper) bound constraints lb = [-1]*n ub = [1]*n # define constraints cl = [1] cu = [1] class Opt(object): def __init__(self, verbose=True): self.verbose = verbose self.iter_count = 0 def objective(self, x): # objective fct to minimize return -x.T.dot(C).dot(x) def gradient(self, x): # grad of the objective fct return -2*x.T.dot(C) def constraints(self, x): # norm constraint return x.T.dot(x) def jacobian(self, x): return 2*x #def hessian(self, x): # pass def intermediate(self, alg_mod, iter_count, obj_value, inf_pr, inf_du, mu, d_norm, regularization_size, alpha_du, alpha_pr, ls_trials): if self.verbose: print("Objective value at iteration #%d: %g" % (iter_count, obj_value)) self.iter_count = iter_count opt = Opt(verbose=False) nlp = ipopt.problem(n=n, m=len(cl), problem_obj=opt, lb=lb, ub=ub, cl=cl, cu=cu) x, info = nlp.solve(x0) print("Max value: %f" % -info['obj_val']) print("Nb of iterations: %d" % opt.iter_count) print("Result: %s" % info['status_msg']) print("Optimized array:") print(x) # + # Define PCA optimization method formally using IPopt class NormConstraint(object): def __init__(self): pass def constraint(self, x): return x.T.dot(x) def jacobian(self, x): return 2*x class OrthogonalConstraint(object): def __init__(self, v): self.v = np.copy(v) def constraint(self, x): return x.T.dot(self.v) def jacobian(self, x): return self.v class IPopt(object): def __init__(self, verbose=True): self.verbose = verbose self.iter_count = 0 self.cons = [] def add_constraint(self, c): self.cons.append(c) def objective(self, x): # objective fct to minimize return -x.T.dot(C).dot(x) def gradient(self, x): # grad of the objective fct return -2*x.T.dot(C) def constraints(self, x): return np.array([c.constraint(x) for c in self.cons]) def jacobian(self, x): return np.array([c.jacobian(x) for c in self.cons]) #def hessian(self, x): # pass def intermediate(self, alg_mod, iter_count, obj_value, inf_pr, inf_du, mu, d_norm, regularization_size, alpha_du, alpha_pr, ls_trials): if self.verbose: print("Objective value at iteration #%d: %g" % (iter_count, obj_value)) self.iter_count = iter_count def pca_ipopt(X, rough_param=0.0, normalize=False, copy=True): """ Compute PCA on the given data using ipopt. """ # center the data X = center_data(X, normalize=normalize, copy=copy) # define useful vars n = X.shape[1] N = X.shape[0] C = X.T.dot(X) / (N-1) # define initial value x0 = np.array([0.1]*n) # define (lower and upper) bound constraints lb = [-1]*n ub = [1]*n # define constraints cl = [1] + [0]*(n-1) cu = [1] + [0]*(n-1) evals, evecs, msgs = [], [], {} opt = IPopt(verbose=False) opt.add_constraint(NormConstraint()) for i in range(n): if i > 0: opt.add_constraint(OrthogonalConstraint(x)) i1 = i+1 nlp = ipopt.problem(n=n, m=len(cl[:i1]), problem_obj=opt, lb=lb, ub=ub, cl=cl[:i1], cu=cu[:i1]) # solve problem x, info = nlp.solve(x0) evecs.append(x) evals.append(-info['obj_val']) msgs[i] = info['status_msg'] return np.array(evals), np.array(evecs).T, msgs # + evals, P, msgs = pca_ipopt(X) print(msgs) print(evals) print(P) # Plot plt.figure(figsize=(10,4)) plt.subplot(1,2,1) plt.title('PCA: eigenvalues') plt.bar(np.array([0.,0.1]), evals, width=0.1) plt.xlim(0.,1.) 
plt.subplot(1,2,2) plt.title('PCA: data and eigenvectors') plt.scatter(X[:,0], X[:,1], color='b') plt.arrow(0, 0, P[0,0], P[1,0], length_includes_head = True, head_width = 0.15, color='b') plt.arrow(0, 0, P[0,1], P[1,1], length_includes_head = True, head_width = 0.15, color='g') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:mdd] * # language: python # name: conda-env-mdd-py # --- # + jupyter={"outputs_hidden": false} import numpy as np import matplotlib.pyplot as plt # %matplotlib inline from scipy.fftpack import fft # Number of samplepoints N = 600 # sample spacing T = 1.0 / 800.0 x = np.linspace(0.0, N*T, N) y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x) yf = fft(y) halfN = np.int(N/2) xf = np.linspace(0.0, 1.0/(2.0*T), halfN) import matplotlib.pyplot as plt plt.plot(xf, 2.0/N * np.abs(yf[0:halfN])) plt.grid() plt.show() # + jupyter={"outputs_hidden": false} plt.figure() plt.plot(x,y) plt.plot(x,0.7*np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x),'r') # + jupyter={"outputs_hidden": false} from scipy.signal.windows import hann w = hann(N) ywf = fft(y*w) plt.figure() plt.plot(xf[1:halfN], 2.0/N * np.abs(yf[1:halfN]), '-b') plt.plot(xf[1:halfN], np.sqrt(8/3)*2.0/N * np.abs(ywf[1:halfN]), '-r') plt.legend(['FFT', 'FFT w. window']) plt.grid() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Our Task: Find similar company names # # # Now that we have completed our Python Primer, let's try to complete a real task, while at the same time trying to practice loops, iterations, and other Python functionality that we studied. # ## Your high level task # # # You are given a list of names. You know that the same entity in the list has different representations. You want to find duplicate companies in the data. # # As a concrete example, open the file under `data/restaurant-names.txt`. This contains a list of restaurant names, extracted from the [NYC Restaurant inspection data set](https://data.cityofnewyork.us/Health/DOHMH-New-York-City-Restaurant-Inspection-Results/xx67-kt59/data) (available online). The Department of Health has been doing a decent, but not perfect, job in recording the company names. Therefore, the same restaurant appears under different names. **Your task is to find "almost duplicate" entries in order to decide whether they correspond to the same business.** # # !head -10 ../data/restaurant-names.txt # !tail -5 ../data/restaurant-names.txt # ## Matching Company Names # Quite often, in our data, we have entries represented as strings that refer to the same entity but have different string representations (e.g., McDonald's vs McDonalds vs McDonald‎). We want to write code that will help in the task of finding such similar entries in our data. # # Our task can be broken in the following tasks: # * **Step 1**: Read the data from a file into a list of strings in memory. We have a data set under the `data` folder: The list of unique restaurant names from the NYC Restaurant Inspection dataset (`data/restaurant-names.txt`). We need to write Python code that will read th file and return a list of strings that are the company names. 
# * **Step 2**: We will then need to figure out how to compare two strings and find their similarity. For that, we will write a function that takes as input **two** strings, and returns back their similarities. We will explore multiple ways of writing that function, using various library functions. # * **Step 3**: We will need to write a function that takes as input a company name, and returns back a list of matching company names, together with their similarity. Ideally, we would like to also sort the list based on the similarity (highest similarity metrics on top). As part of our learning process, we will use the _list comprehension_ approach to create the list. We will also use tuples as the elements of the list, so that the list contain elements such as `[("McDonalds", 0.88), ("McDonald's", 0.81),....]`. # * **Step 4**: In the final step, we will just perform the similarity computation across all companies in the dataset. # ## STEP 1: Read the list of names from a file and create a list of names # + # STEP 1: Read the list of names from a file and create a list of names filename = '../data/restaurant-names.txt' # We open the filename for reading f = ... # Read the file into memory content = ... # Content is a big string, with one restaurant per line # so we split it into lines and create a list with the restaurant names restaurants = ... # + [markdown] solution2="hidden" solution2_first=true # ### Solution # + solution2="hidden" # STEP 1: Read the list of names from a file and create a list of names filename = '../data/restaurant-names.txt' # We open the filename for reading f = open(filename, 'r') # Read the file into memory content = f.read() # Content is a big string, with one restaurant per line # so we split it into lines and create a list with the restaurant names restaurants = content.split('\n') # + solution2="hidden" len(restaurants) # + solution2="hidden" restaurants # - # ## STEP 2: Implement the similarity function # ### Computing the similarity between two strings # There are many ways that we can calculate the similarity between two strings. For our case, we will focus on a few similarity metrics that already have implementations in Python. # # #### Some commonly used similarity metrics # # * [Sequence matching](https://docs.python.org/3.6/library/difflib.html) (part of standard Python) ([example](http://stackoverflow.com/questions/17388213/find-the-similarity-percent-between-two-strings)) # * [Edit distance](https://en.wikipedia.org/wiki/Edit_distance) ([Python Jellyfish Library](https://github.com/jamesturk/jellyfish)) # * [N-gram distance](http://pythonhosted.org/ngram/tutorial.html#comparing-and-searching-strings) # # # #### STEP 2.a: Let's install the libraries... # Edit distance # !sudo -H pip3 install -U jellyfish # Ngram # !sudo -H pip3 install -U ngram # #### STEP 2.b: Now let's test our similarity functions with various examples # # Once we have installed the necessary libraries for our project, we proceed to `import` them and test the functions. import jellyfish # ##### Edit Distance # # From Wikipedia: # # The [edit distance](https://en.wikipedia.org/wiki/Edit_distance) _ is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other._. # # The Levenshtein distance between "kitten" and "sitting" is 3. 
A minimal edit script that transforms the former into the latter is:
#
# * kitten → sitten (substitution of "s" for "k")
# * sitten → sittin (substitution of "i" for "e")
# * sittin → sitting (insertion of "g" at the end).

jellyfish.levenshtein_distance('kitten', 'sitting')

# Let's try a few more examples

jellyfish.levenshtein_distance('Ipeirotis', 'Iperiotos')

jellyfish.levenshtein_distance('Starbucks', 'Starbacks')

jellyfish.levenshtein_distance('Starbucks', 'Starbuck')

jellyfish.levenshtein_distance('Starbucks', 'Wendys')

# ##### Damerau–Levenshtein distance
#
# The Damerau–Levenshtein distance additionally counts the transposition of two adjacent characters as a single operation.

jellyfish.levenshtein_distance('Ipeirotis', 'Iperiotis')

jellyfish.damerau_levenshtein_distance('Ipeirotis', 'Iperiotis')

jellyfish.damerau_levenshtein_distance('Starbucks', 'Starbucsk')

jellyfish.levenshtein_distance('Starbucks', 'Starbucsk')

# ##### Jaro–Winkler distance
#
# The [Jaro–Winkler distance](https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance) is a string metric for measuring the edit distance between two sequences. Informally, the **Jaro** distance between two words is the minimum number of single-character transpositions required to change one word into the other; the **Jaro–Winkler** distance gives more favourable ratings to strings that match from the beginning.

jellyfish.jaro_winkler('Starbucks', 'Starbarbr')

jellyfish.jaro_winkler('Starbucks', 'Milwbucks')

# ##### Soundex
#
# [Soundex](https://en.wikipedia.org/wiki/Soundex) is a phonetic algorithm for indexing names by sound, as pronounced in English. The goal is for homophones to be encoded to the same representation so that they can be matched despite minor differences in spelling.
#
# Using this algorithm, both "Robert" and "Rupert" return the same string "R163" while "Rubin" yields "R150". "Ashcraft" and "Ashcroft" both yield "A261". "Tymczak" yields "T522" not "T520" (the chars 'z' and 'k' in the name are coded as 2 twice since a vowel lies in between them). "Pfister" yields "P236" not "P123" (the first two letters have the same number and are coded once as 'P').

jellyfish.soundex('Robert')

jellyfish.soundex('Rupert')

jellyfish.soundex('Ashcroft')

jellyfish.soundex('Ashcraft')

jellyfish.soundex('Papadopoylis')

jellyfish.soundex('Papadopoulos')

# ##### Ngrams
#
# With the n-gram similarity score, we split each word into sequences of n consecutive characters (n-grams) and then compare the sets of n-grams between the two words. For example, the name "Panos" has the following 2-grams: "Pa", "an", "no", "os". (We can also add "#P" and "s#" if we want to capture the prefix and suffix.) Strings that share a large number of n-grams are often similar.

import ngram

ngram.NGram.compare('Ipeirotis', 'Iperotis', N=2)

ngram.NGram.compare('Ipeirotis', 'Iperotis', N=1)

ngram.NGram.compare('New York University', 'New York Universty', N=2)

ngram.NGram.compare('New York University', 'University of New York', N=1)

ngram.NGram.compare('New York University', 'Columbia Universty', N=2)

# ### Task 1: Create a function that takes as input two strings and returns their similarity
#
# Given the experience with the metrics above, we now want to create a function that takes as input two strings and returns their similarity. Our key requirement is for the similarity metric to be between 0 and 1, with 0 meaning no similarity, and 1 corresponding to identical strings. Some of the similarity functions above fit right in; others will need some work.
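# One option that already satisfies the 0-to-1 requirement is the sequence matcher from
# the standard library's `difflib`, which was linked above but not shown so far. A quick,
# optional illustration (a sketch only, before you write your own function below):

# +
from difflib import SequenceMatcher

# ratio() returns a similarity in [0, 1]: 0 means nothing in common, 1 means identical strings
SequenceMatcher(None, 'New York University', 'New York Univ').ratio()
# -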
# + import ngram import jellyfish def normalized_similarity_edit_distance(str1, str2): # Compute the similarity between str1 and str2 distance = ... # Normalize normalized = ... # Return the result return ... # + [markdown] solution2="hidden" solution2_first=true # ### Solution # + solution2="hidden" # For n-gram similarity it is very simple, we just return the result import ngram def computeSimilarity_ngram(str1, str2, n=3): similarity = ngram.NGram.compare(str1,str2,N=n) return similarity # + solution2="hidden" computeSimilarity_ngram("New York University", "New York Univ") # + solution2="hidden" computeSimilarity_ngram("New York University", "New York Univ",n=2) # + solution2="hidden" computeSimilarity_ngram("New York University", "New York Univ",n=4) # + solution2="hidden" computeSimilarity_ngram("New York University", "Columbia Univ", n=2) # + solution2="hidden" # For edit distance import jellyfish def computeSimilarity_editdistance(str1, str2): # Compute the maximum length of the two strings, to normalize our distance maxlength = max( len(str1), len(str2)) # Compute the edit distance between two strings distance = jellyfish.levenshtein_distance(str1, str2) similarity = 1 - distance/maxlength return similarity computeSimilarity_editdistance("New York University", "New York Univ") computeSimilarity_editdistance("New York University", "Columbia Univ") # + solution2="hidden" # For soundex import jellyfish def computeSimilarity_soundex(str1, str2): soundex1 = jellyfish.soundex(str1) soundex2 = jellyfish.soundex(str2) if soundex1 == soundex2: return 1.0 else: return 0.0 computeSimilarity_soundex("New York University", "New York Univ") # - # ### Task 2: Modify the functions above to allow for various similarity calculation methods. # # We will now up our game, and return back the results of the comparison using various methods. The `method` parameter defines the metric that we will use def computeSimilarity(str1, str2, method): # The function should check the method # and then call the appropriate similarity function # what we implemented above and return the # corresponding similarity value return ... # Getting closer to a real setting, we can now # compute all the similarity metrics and return them # all. Perhaps even compute an average value def computeSimilarity(str1, str2): # We return a dictionary with all the metrics return { "ngram2": ..., "soundex": ... 
} # + [markdown] solution2="hidden" solution2_first=true # ### Solution # + solution2="hidden" def computeSimilarity(str1, str2, method): if method == 'ngram2': return computeSimilarity_ngram(str1, str2,n=2) elif method == 'ngram3': return computeSimilarity_ngram(str1, str2, n=3) elif method == 'ngram4': return computeSimilarity_ngram(str1, str2, n=4) elif method == 'edit_distance': return computeSimilarity_editdistance(str1, str2) elif method == 'soundex': return computeSimilarity_soundex(str1, str2) else: return None # + solution2="hidden" computeSimilarity("New York University", "New York Univ", 'ngram3') # + solution2="hidden" computeSimilarity("New York University", "New York Univ", 'edit_distance') # + solution2="hidden" # Most of the time we are going to compute all similarity metrics # and return back a dictionary with all the metrics def computeSimilarity(str1, str2): results = { 'ngram2': computeSimilarity_ngram(str1, str2, n=2), 'ngram3': computeSimilarity_ngram(str1, str2, n=3), 'ngram4': computeSimilarity_ngram(str1, str2, n=4), 'edit_distance' : computeSimilarity_editdistance(str1, str2), 'soundex': computeSimilarity_soundex(str1, str2) } # Arbitrarily, compute a similarity metric as the average of all results['average'] = sum(results.values()) / len(results) # Similarly arbitrarily, we can compute our own metric, by mixing # the various metrics in our own way. results['custom'] = (results['ngram3'] + 2.5 * results['edit_distance'] + 0.5 * results['soundex']) / 4 return results # + solution2="hidden" computeSimilarity("New York University", "New York Univ") # - # ## STEP 3: Compute similarity of a company against a list of company names # We now create a function that accepts a company name and a list of companies, and computes their similarity. This part will get us to exercise our for-loops, and also illustrate how we can use lists and tuples. # **Sorting a list of tuples**:_This part is a little bit advanced for now, so I will just give the code below. (Solution taken from http://stackoverflow.com/questions/3121979/how-to-sort-list-tuple-of-lists-tuples). Here is a small example below, which we will reuse in our function:_ # + data = [("Panos",0.5), ("Peter",0.8), ("Pan", 0.8)] # This code sorts the list "data", by using the second element # of each tuple as the sorting key, and sorts things in reverse order data.sort(key=lambda tupl: tupl[1], reverse=True) # sorts in place, in descending order, based on the second element of the tuple print(data) # + # STEP 3: We now create a function that accepts a company name # and a list of companies, and computes their similarity # We have a 'top' parameter (by default set to be 5) # that restricts the results to only the "top" most similar # string pairs. We also define a parameter "method" that defines # what is the similarity method that we want to use. We also define a # similarity threshold for keeping only results with sufficient similarity def companySimilarity(query, companyList, top = 5, method = 'average', sim_threshold = 0.25): ... # + [markdown] solution2="hidden" solution2_first=true # ### Solution # + solution2="hidden" # STEP 3: We now create a function that accepts a company name # and a list of companies, and computes their similarity # We have a 'top' parameter (by default set to be 5) # that restricts the results to only the most similar # string pairs. We also define a parameter "method" that defines # what is the similarity method that we want to use. 
We also define a # similarity threshold for keeping only results with sufficient similarity def companySimilarity(query, companyList, top = 5, method = 'average', sim_threshold = 0.25): # We will use a list to store the similar matches results = [] # Go through all the restaurants for c in companyList: # We compute the similarities (all metrics) # between the string "query" and the string "c" # which is the variable that iterates over the list "companyList" similarities = computeSimilarity(query, c) # If the ngram similarity is above 0.25 if similarities[method]>sim_threshold: # Add in results the matching restaurant name r # and the similarity results.append( (c, similarities[method]) ) # This list contains the matches. The list contains a list # of tuples (company name, similarity) # We sort in decreasing order of similarity results.sort(key=lambda tupl: tupl[1], reverse=True) # We return only the top results return results[:top] # + solution2="hidden" query = 'MACDONALDS' companySimilarity(query, restaurants, top = 5, method = 'ngram3', sim_threshold = 0.25) # + solution2="hidden" query = 'MACDONALDS' companySimilarity(query, restaurants, top = 5, method = 'average', sim_threshold = 0.25) # + solution2="hidden" query = 'STARBUCKS' companySimilarity(query, restaurants, top = 5, method = 'ngram3', sim_threshold = 0.25) # + solution2="hidden" query = 'STARBUCKS' companySimilarity(query, restaurants, top = 20, method = 'average', sim_threshold = 0.25) # - # ## Step 4: Perform the similarity computation across all companies in the dataset. # We are almost done. We now just go through all the companies in the list # and we call the `companySimilarity` function that computes the similar company names # _for all the companies in the list_. # + # STEP 4: We are almost done. We now just go through all the companies in the list # and we call the companySimilarity function that computes the similar company names # for all the companies in the list. We store the results in a dictionary, with the # key being the company name, and the value being a "list of tuples" with the # similar company names and the corresponding similarity value. # Your code here # + [markdown] solution2="hidden" solution2_first=true # ### Solution # + solution2="hidden" # STEP 4: We are almost done. We now just go through all the companies in the list # and we call the companySimilarity function that computes the similar company names # for all the companies in the list. We store the results in a dictionary, with the # key being the company name, and the value being a "list of tuples" with the # similar company names and the corresponding similarity value. 
# The matches counter is just to stop the computation quickly # after we have showed enough matches matches = 0 stop_after = 20 for q in restaurants: results = companySimilarity(q, restaurants, top = 6, method = 'average', sim_threshold = 0.4) # We will print only non-identical matches (remember the top # match is the restaurant matching with itself) if (len(results)>1): for r in results[1:]: print(f"{q}\t===>\t{r[0]}\t{r[1]:2.3%}") matches = matches + 1 if matches>stop_after: break # We stop after a few illustrative matches # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # LSTM 做词性预测 # 前面我们讲了词嵌入以及 n-gram 模型做单词预测,但是目前还没有用到 RNN,在最后这一次课中,我们会结合前面讲的所有预备知识,教大家如何使用 LSTM 来做词性预测。 # # ## 模型介绍 # 对于一个单词,会有这不同的词性,首先能够根据一个单词的后缀来初步判断,比如 -ly 这种后缀,很大概率是一个副词,除此之外,一个相同的单词可以表示两种不同的词性,比如 book 既可以表示名词,也可以表示动词,所以到底这个词是什么词性需要结合前后文来具体判断。 # # 根据这个问题,我们可以使用 lstm 模型来进行预测,首先对于一个单词,可以将其看作一个序列,比如 apple 是由 a p p l e 这 5 个单词构成,这就形成了 5 的序列,我们可以对这些字符构建词嵌入,然后输入 lstm,就像 lstm 做图像分类一样,只取最后一个输出作为预测结果,整个单词的字符串能够形成一种记忆的特性,帮助我们更好的预测词性。 # # ![](https://ws3.sinaimg.cn/large/006tKfTcgy1fmxi67w0f7j30ap05qq2u.jpg) # # 接着我们把这个单词和其前面几个单词构成序列,可以对这些单词构建新的词嵌入,最后输出结果是单词的词性,也就是根据前面几个词的信息对这个词的词性进行分类。 # # 下面我们用例子来简单的说明 import torch from torch import nn from torch.autograd import Variable # 我们使用下面简单的训练集 training_data = [("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]), ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])] # 接下来我们需要对单词和标签进行编码 word_to_idx = {} tag_to_idx = {} for context, tag in training_data: for word in context: if word.lower() not in word_to_idx: word_to_idx[word.lower()] = len(word_to_idx) for label in tag: if label.lower() not in tag_to_idx: tag_to_idx[label.lower()] = len(tag_to_idx) word_to_idx tag_to_idx # 然后我们对字母进行编码 alphabet = 'abcdefghijklmnopqrstuvwxyz' char_to_idx = {} for i in range(len(alphabet)): char_to_idx[alphabet[i]] = i char_to_idx # 接着我们可以构建训练数据 def make_sequence(x, dic): # 字符编码 idx = [dic[i.lower()] for i in x] idx = torch.LongTensor(idx) return idx make_sequence('apple', char_to_idx) training_data[1][0] make_sequence(training_data[1][0], word_to_idx) # 构建单个字符的 lstm 模型 class char_lstm(nn.Module): def __init__(self, n_char, char_dim, char_hidden): super(char_lstm, self).__init__() self.char_embed = nn.Embedding(n_char, char_dim) self.lstm = nn.LSTM(char_dim, char_hidden) def forward(self, x): x = self.char_embed(x) out, _ = self.lstm(x) return out[-1] # (batch, hidden) # 构建词性分类的 lstm 模型 class lstm_tagger(nn.Module): def __init__(self, n_word, n_char, char_dim, word_dim, char_hidden, word_hidden, n_tag): super(lstm_tagger, self).__init__() self.word_embed = nn.Embedding(n_word, word_dim) self.char_lstm = char_lstm(n_char, char_dim, char_hidden) self.word_lstm = nn.LSTM(word_dim + char_hidden, word_hidden) self.classify = nn.Linear(word_hidden, n_tag) def forward(self, x, word): char = [] for w in word: # 对于每个单词做字符的 lstm char_list = make_sequence(w, char_to_idx) char_list = char_list.unsqueeze(1) # (seq, batch, feature) 满足 lstm 输入条件 char_infor = self.char_lstm(Variable(char_list)) # (batch, char_hidden) char.append(char_infor) char = torch.stack(char, dim=0) # (seq, batch, feature) x = self.word_embed(x) # (batch, seq, word_dim) x = x.permute(1, 0, 2) # 改变顺序 x = torch.cat((x, char), dim=2) # 沿着特征通道将每个词的词嵌入和字符 lstm 输出的结果拼接在一起 x, _ = self.word_lstm(x) s, b, h = x.shape x = x.view(-1, h) # 重新 reshape 进行分类线性层 
out = self.classify(x) return out net = lstm_tagger(len(word_to_idx), len(char_to_idx), 10, 100, 50, 128, len(tag_to_idx)) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.SGD(net.parameters(), lr=1e-2) # 开始训练 for e in range(300): train_loss = 0 for word, tag in training_data: word_list = make_sequence(word, word_to_idx).unsqueeze(0) # 添加第一维 batch tag = make_sequence(tag, tag_to_idx) word_list = Variable(word_list) tag = Variable(tag) # 前向传播 out = net(word_list, word) loss = criterion(out, tag) train_loss += loss.data[0] # 反向传播 optimizer.zero_grad() loss.backward() optimizer.step() if (e + 1) % 50 == 0: print('Epoch: {}, Loss: {:.5f}'.format(e + 1, train_loss / len(training_data))) # 最后我们可以看看预测的结果 net = net.eval() test_sent = 'Everybody ate the apple' test = make_sequence(test_sent.split(), word_to_idx).unsqueeze(0) out = net(Variable(test), test_sent.split()) print(out) print(tag_to_idx) # 最后可以得到上面的结果,因为最后一层的线性层没有使用 softmax,所以数值不太像一个概率,但是每一行数值最大的就表示属于该类,可以看到第一个单词 'Everybody' 属于 nn,第二个单词 'ate' 属于 v,第三个单词 'the' 属于det,第四个单词 'apple' 属于 nn,所以得到的这个预测结果是正确的 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import pandas as pd import csv from pathlib import Path import os import sys # + module_path = os.path.abspath(os.path.join("..")) if module_path not in sys.path: sys.path.append(module_path) from data_pipeline.utils import unzip_file_from_url from data_pipeline.etl.sources.census.etl_utils import get_state_fips_codes # - DATA_PATH = Path.cwd().parent / "data" TMP_PATH: Path = DATA_PATH / "tmp" STATE_CSV = DATA_PATH / "census" / "csv" / "fips_states_2010.csv" SCORE_CSV = DATA_PATH / "score" / "csv" / "usa.csv" COUNTY_SCORE_CSV = DATA_PATH / "score" / "csv" / "usa-county.csv" CENSUS_COUNTIES_ZIP_URL = "https://www2.census.gov/geo/docs/maps-data/data/gazetteer/2020_Gazetteer/2020_Gaz_counties_national.zip" CENSUS_COUNTIES_TXT = TMP_PATH / "2020_Gaz_counties_national.txt" unzip_file_from_url(CENSUS_COUNTIES_ZIP_URL, TMP_PATH, TMP_PATH) counties_df = pd.read_csv(CENSUS_COUNTIES_TXT, sep="\t", dtype={"GEOID": "string", "USPS": "string"}, low_memory=False) counties_df = counties_df[['USPS', 'GEOID', 'NAME']] counties_df.rename(columns={"USPS": "State Abbreviation", "NAME": "County Name"}, inplace=True) counties_df.head() states_df = pd.read_csv(STATE_CSV, dtype={"fips": "string", "state_abbreviation": "string"}) states_df.rename(columns={"fips": "State Code", "state_name": "State Name", "state_abbreviation": "State Abbreviation"}, inplace=True) states_df.head() county_state_merged = counties_df.join(states_df, rsuffix=' Other') del county_state_merged["State Abbreviation Other"] county_state_merged.head() score_df = pd.read_csv(SCORE_CSV, dtype={"GEOID10": "string"}) score_df["GEOID"] = score_df.GEOID10.str[:5] score_df.head() score_county_state_merged = score_df.join(county_state_merged, rsuffix='_OTHER') del score_county_state_merged["GEOID_OTHER"] score_county_state_merged.head() score_county_state_merged.to_csv(COUNTY_SCORE_CSV, index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="g3Yoj5eT9Cfp" import os import numpy as np import matplotlib.pyplot as plt import cv2 import 
pickle from skimage.feature import hog import random from sklearn.model_selection import train_test_split from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score from sklearn.metrics import f1_score from sklearn.metrics import confusion_matrix # + colab={"base_uri": "https://localhost:8080/"} id="ZFV3WsrJyXyS" outputId="29310aa4-dc4d-44d1-c1e4-de52d496c0b9" from google.colab import drive drive.mount("/content/drive", force_remount=True) # + id="0HWo7bkN9YGx" pick_in = open('/content/drive/MyDrive/Colab Notebooks/data_hog_sign(2).pickle', 'rb') data = pickle.load(pick_in) pick_in.close() # + id="nwG59xAS9ctt" random.shuffle(data) features = [] labels = [] # + id="3SlsUlw2-eUP" for feature,label in data: features.append(feature) labels.append(label) # + id="_DCAMEr99jS-" x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=0.11, shuffle=True) # + id="Wg3XbpZr-lmm" clf = RandomForestClassifier().fit(x_train, y_train) # + id="OUxp8FWx9uK2" rf_pred = clf.predict(x_test) # + id="zES68ZZyoLhO" pred_prob2 = clf.predict_proba(x_test) # + colab={"base_uri": "https://localhost:8080/"} id="tnpGpcsW9jWp" outputId="cb875f64-a428-4724-c9e3-df203c695e2b" rf_accuracy = accuracy_score(y_test, rf_pred) rf_f1 = f1_score(y_test, rf_pred, average='weighted') rf_precision = precision_score(y_test, rf_pred, average= 'weighted') rf_recall = recall_score(y_test, rf_pred, average= 'weighted') print('Accuracy (Random Forest): ', "%.2f" % (rf_accuracy*100)) print('F1 (Random Forest): ', "%.2f" % (rf_f1*100)) print('Precision (k-NN): ', "%.2f" % (rf_precision*100)) print('Recall(k-NN): ', "%.2f" % (rf_recall*100)) # + colab={"base_uri": "https://localhost:8080/"} id="YAjeyZS59jaC" outputId="f30854ae-5069-4101-a787-649ba48529a2" from sklearn.metrics import classification_report, confusion_matrix print(confusion_matrix(y_test, rf_pred)) print(classification_report(y_test, rf_pred)) # + colab={"base_uri": "https://localhost:8080/", "height": 285} id="PGJXwLfkAOW2" outputId="040cc3ad-1b1e-49e4-dedc-95368de3a4b8" categories = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'] print('Prediction is: ', categories[rf_pred[0]]) my_pred = x_test[0].reshape(128, 64) plt.imshow(my_pred, cmap='binary_r') plt.show() # + id="7T_jJSLGAqaw" n_classes= 10 from sklearn.metrics import roc_curve fpr2, tpr2, thresh2 = roc_curve(y_test, pred_prob2[:,1], pos_label=1) # + id="qw1ZrwCApjRj" random_probs = [0 for i in range(len(y_test))] p_fpr, p_tpr, _ = roc_curve(y_test, random_probs, pos_label=1) # + colab={"base_uri": "https://localhost:8080/"} id="oM8I6qArpjev" outputId="6706002e-8c00-4ba0-e728-89e335fe06ef" from sklearn.metrics import roc_auc_score # auc scores #auc_score1 = roc_auc_score(y_test, pred_prob1[:,1]) auc_score2 = roc_auc_score(y_test, pred_prob2, multi_class="ovo") print(auc_score2) # + colab={"base_uri": "https://localhost:8080/", "height": 376} id="KFXA1IZmrxtD" outputId="0a07d684-9eac-4cfd-f674-ef30941bc184" fpr = {} tpr = {} thresh ={} n_class = 10 for i in range(n_class): fpr[i], tpr[i], thresh[i] = roc_curve(y_test, pred_prob2[:,i], pos_label=i) # plotting plt.plot(fpr[0], tpr[0], linestyle='--',color='orange', label='Class 0 vs Rest') plt.plot(fpr[1], tpr[1], linestyle='--',color='green', label='Class 1 vs Rest') plt.plot(fpr[2], tpr[2], linestyle='--',color='red', label='Class 2 vs Rest') plt.plot(fpr[3], tpr[3], linestyle='--',color='yellow', label='Class 3 vs Rest') plt.plot(fpr[4], 
tpr[4], linestyle='--',color='purple', label='Class 4 vs Rest') plt.plot(fpr[5], tpr[5], linestyle='--',color='lime', label='Class 5 vs Rest') plt.plot(fpr[6], tpr[6], linestyle='--',color='brown', label='Class 6 vs Rest') plt.plot(fpr[7], tpr[7], linestyle='--',color='cyan', label='Class 7 vs Rest') plt.plot(fpr[8], tpr[8], linestyle='--',color='maroon', label='Class 8 vs Rest') plt.plot(fpr[9], tpr[9], linestyle='--',color='blue', label='Class 9 vs Rest') plt.title('Multiclass ROC curve') plt.xlabel('False Positive Rate') plt.ylabel('True Positive rate') plt.legend(loc='best') plt.savefig('Multiclass ROC',dpi=400) # + colab={"base_uri": "https://localhost:8080/", "height": 376} id="Ar6Ix3XIpjrX" outputId="cebab8d5-0cc7-493f-d9fe-b2484a83ca2c" plt.figure(2) plt.xlim(0, .8) plt.ylim(0.92, 1.01) plt.plot(fpr[0], tpr[0], linestyle='--',color='orange', label='Class 0 vs Rest') plt.plot(fpr[1], tpr[1], linestyle='--',color='green', label='Class 1 vs Rest') plt.plot(fpr[2], tpr[2], linestyle='--',color='red', label='Class 2 vs Rest') plt.plot(fpr[3], tpr[3], linestyle='--',color='yellow', label='Class 3 vs Rest') plt.plot(fpr[4], tpr[4], linestyle='--',color='purple', label='Class 4 vs Rest') plt.plot(fpr[5], tpr[5], linestyle='--',color='lime', label='Class 5 vs Rest') plt.plot(fpr[6], tpr[6], linestyle='--',color='brown', label='Class 6 vs Rest') plt.plot(fpr[7], tpr[7], linestyle='--',color='cyan', label='Class 7 vs Rest') plt.plot(fpr[8], tpr[8], linestyle='--',color='maroon', label='Class 8 vs Rest') plt.plot(fpr[9], tpr[9], linestyle='--',color='blue', label='Class 9 vs Rest') plt.xlabel('False positive rate') plt.ylabel('True positive rate') plt.title('ROC curve (zoomed in at top left)') plt.legend(loc='best') plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 607} id="ANDniHZfpj1e" outputId="281bd2ad-7236-4cdf-a146-e09532bfc7fd" from sklearn.metrics import confusion_matrix import pandas as pd import seaborn as sn import matplotlib.pyplot as plt # %matplotlib inline import numpy as np data = confusion_matrix(y_test, rf_pred) df_cm = pd.DataFrame(data, columns=np.unique(y_test), index = np.unique(y_test)) df_cm.index.name = 'Actual' df_cm.columns.name = 'Predicted' plt.figure(figsize = (10,9)) plt.title('Confusion Matrix (HOG-SVM(Polynomial))') sn.set(font_scale=1.4)#for label size sn.heatmap(df_cm, cmap="Blues", annot=True, fmt= 'g' ,annot_kws={"size": 12})# font size # + id="fuyXLkORpj7d" # + id="R6-KTtiUpj_1" # + id="oy6p88AXpkEB" # + id="jk9g9bM3pkJo" # + id="ligZ6L2jpkNV" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Basics of Linear Algebra for Machine Learning ## Types of Matrices ## Diagonal Matrix ## Ch10, Page 74 # - from numpy import array from numpy import diag # define 5x3 matrix A = array([ [1, 2, 3], [2, 1, 2], [3, 2, 1], [1, 2, 3], [2, 1, 2] ]) print(A) # get diagonal vector d = diag(A) print(d) # get diagonal matrix from vector D = diag(d) print(D) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.7 64-bit # language: python # name: python3 # --- import pandas as pd import json import seaborn as sns import matplotlib.pyplot as plt sns.set_theme() sns.set(rc={'figure.figsize':(20, 10)}) #(11.7,8.27) pd.options.mode.chained_assignment = None df = 
pd.read_json('real_experiences_m_ch.jsonl', lines=True)

df.head()

# # Episode index 1
# Visualize order timestamp and revenue for episode index one.

episode_one = df.loc[df.episode_index == 1]

episode_one.head()

episode_one.shift_start.nunique()

episode_one.shift_end.nunique()

# All entries for episode_index one have the same shift start and end. Order timestamp and revenue increments suggest those are the variables to explore further.

# +
from datetime import datetime

time = datetime.strptime(episode_one.order_timestamp[0], "%Y-%m-%dT%H:%M:%S")
# -

sns.scatterplot(data=episode_one.head(50), x='order_timestamp', y='revenue')
plt.xticks(rotation = 45)
plt.title("First 50 rows of data", fontsize=16)
plt.show()

# We can notice that there are a few breaking points for revenue, which suggests it depends on other information.
#
# **Hypothesis**: dependency on vehicle_id

# Extracting vehicle_id from the route_update column for every row.

vehicle_ids = []
for row in range(len(episode_one)):
    vehicle_ids.append(episode_one.route_update.values[row]['vehicle_id'])
episode_one['vehicle_id'] = vehicle_ids

# After creating a new column for vehicle_id, let's see a preview of the dataset.

episode_one.head()

# Plotting the scatterplot again, but this time vehicle_id is used as the separator.

sns.scatterplot(data=episode_one.head(50), x='order_timestamp', y='revenue', hue='vehicle_id')
plt.xticks(rotation = 45)
plt.title("First 50 rows of data", fontsize=16)
plt.show()

# How many unique vehicles are present in the whole dataset?

uniques = []
for episode in range(df.episode_index.nunique()):
    episode_index = df.loc[df.episode_index == episode]
    for i in range(len(episode_index)):
        id = episode_index.route_update.values[i]['vehicle_id']
        if id not in uniques:
            uniques.append(id)
uniques

# For further EDA, order_timestamp will be simplified to only represent hours and minutes (i.e. 16:30), making the plot for each vehicle_id more readable.

time_df = pd.to_datetime(episode_one.order_timestamp, infer_datetime_format=True)
episode_one['order_time'] = time_df.apply(lambda x: "{:d}:{:02d}".format(x.hour, x.minute))

after_shift_orders = episode_one.loc[(episode_one.order_time >= '20:00') & (episode_one.order_time <= '23:59')]

sns.scatterplot(data=after_shift_orders, x='order_time', y='revenue', hue='vehicle_id')
plt.xticks(rotation=45, fontsize=9)
plt.title("Revenue increments for each vehicle after shift (8pm) until midnight", fontsize=16)
plt.show()

# Why does revenue increase even after the shift? Does this mean drops are still made after shift hours as overtime?

# Let's see what happens to orders which come in after 00:00.

episode_one['order_hour'] = pd.to_datetime(episode_one.order_time, format='%H:%M').dt.hour
order_after_midnight = episode_one.loc[(episode_one.order_hour > 0) & (episode_one.order_hour < 8)]

sns.scatterplot(data=order_after_midnight, x='order_time', y='revenue', hue='vehicle_id')
plt.xticks(rotation=45, fontsize=9)
plt.title("Revenue increments for each vehicle after midnight until shift start", fontsize=16)
plt.legend(loc=4)
plt.show()

# Why do new orders coming in around 2am show no incremental revenue increase for the 'blue' vehicle?
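# One way to dig into that question (a rough sketch, not part of the original analysis; it only assumes the
# vehicle_id, order_hour and order_time columns created above) is to compute per-vehicle revenue increments
# directly and list the after-midnight orders whose revenue did not grow. The revenue_increment column below is
# introduced purely for this illustration.

# +
# Per-vehicle revenue increments: sort by order timestamp, then diff revenue within each vehicle.
increments = episode_one.sort_values('order_timestamp').groupby('vehicle_id')['revenue'].diff()
episode_one['revenue_increment'] = increments

# After-midnight orders whose revenue stayed flat (or dropped) for their vehicle.
flat_orders = episode_one.loc[(episode_one.order_hour > 0) &
                              (episode_one.order_hour < 8) &
                              (episode_one.revenue_increment <= 0)]
flat_orders[['vehicle_id', 'order_time', 'revenue', 'revenue_increment']].head()
# -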
# Plots for each vehicle

for vehicle_id in uniques:
    each_vehicle = episode_one.loc[episode_one.vehicle_id == vehicle_id]
    sns.scatterplot(data=each_vehicle, x='order_time', y='revenue')
    plt.xticks(rotation=45, fontsize=9)
    plt.title(f"Revenue increments for vehicle: {vehicle_id}", fontsize=16)
    plt.show()

# It is clear that in episode 1 most of the orders arrive between 16:00 and 03:00 (the next day).

# A quick preview of the other episode indices suggests a similar order time range and revenue, and we found earlier that the same 3 vehicle_ids repeat in all 20 episode indices. Therefore, EDA with respect to order time and revenue will not differentiate the episodes significantly.

# # Route
# Each route comes as a list of all 3 vehicles for every row.

# Source for visualization of the JSON tree: http://jsonviewer.stack.hu/
#
# The visualization below represents the first row of episode index 1.

# ![Route schema](https://i.imgur.com/jtyVoF2.png)

# ## Route update

# ![Route_update schema](https://i.imgur.com/ph3lBMH.jpg)

# P.S. Route update can also have loaded drops! Check episode_one.route_update.values[150]

# ![Route update 150 row](https://i.imgur.com/24liTTE.jpg)

# **Comparing both routes and route_update for each vehicle, I have noticed that route_update is already incorporated in loaded_drops for route.**
#
# Next, I will check what happens when route_update contains loaded_drops as well.

# When route_update contains non-empty loaded and unloaded drops, they are the same as the route's loaded and unloaded drops. However, I have noticed that routes contain the whole fleet; therefore, other vehicles have multiple loaded and unloaded drops too.

# +
# episode_one.iloc[150:151, :].routes.values[0]

# +
# episode_one.iloc[150:151, :].route_update.values[0]
# -

df.loc[df.episode_index == 2]

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # TensorRT YoloV5s Sample Application
#
# ---
#
# 1. [Prerequisites](#Prerequisites)
# 1. [Set up](#Set-up)
# 1. [Import model](#Import-model)
# 1. [Write and test app code](#Write-and-test-app-code-in-notebook)
# 1. [Package app](#Package-app)
# 1. [Deploy app to device](#Deploy-app-to-device)

# + [markdown] tags=[]
# # Prerequisites (DO NOT SKIP)
# -

# 1. **PLEASE READ THE [README](README.md) INCLUDED WITH THIS BEFORE YOU START USING THIS NOTEBOOK**

# + [markdown] tags=[]
# # Set up
# -

# Import libraries for use with this notebook environment; you do not need these libraries when you write your application code.

# +
import sys
import os
import time
import json

import boto3
import sagemaker

sys.path.insert( 0, os.path.abspath( "../common/test_utility" ) )
import panorama_test_utility

# + [markdown] tags=[]
# ## Notebook parameters
# Global constants that help the notebook create Panorama resources on your behalf.

# + tags=[]
# application name
app_name = 'lab3'

## package names and node names
code_package_name = 'lab3'
camera_node_name = 'abstract_rtsp_media_source'

# AWS account ID
account_id = boto3.client("sts").get_caller_identity()["Account"]
# -

# ## Set up application
#
# Every application uses the creator's AWS Account ID as the prefix to uniquely identify the application resources. Running `panorama-cli import-application` replaces the generic account Id with your account Id.
# !cd ./lab3 && panorama-cli import-application # ## Download Depedencies and Artifacts (One Time Download) panorama_test_utility.download_artifacts_gpu_sample('lab3', account_id) # ### Upload application to Panorama for deploying to devices container_asset_name = 'lab3' # This step takes some time, depending on your network environment. # !cd ./lab3 && pwd && panorama-cli package-application # ### Ready for deploying to a device # # Congrats! Your app is now ready to deploy to a device. Next, you can continue in this notebook to deploy the app programmatically or you can go to the Panorama console and deploying using the AWS Console. The console makes it easier to select camera streams and select the devices you want to deploy to. Programmatic deployment is faster to complete and easier to automate. # ### How to deploy to the device? # The [README](README.md) file has detailed instructions on how to deploy. Please go to the deployment section of the README file and follow along the instructions # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import asyncio # + async def count(): print('One') y = await asyncio.sleep(1) print('Two') return y async def main(): await asyncio.gather(count(), count(), count()) #await count(),count() # - import time s = time.perf_counter() await main() elapsed = time.perf_counter() - s print(f' executed in {elapsed:0.2f} seconds.') def count(): print('One') time.sleep(1) print('Two') def main (): for _ in range(3): count() s = time.perf_counter() main() elapsed = time.perf_counter() - s print(f' executed in {elapsed:0.2f} seconds.') import random c = ( "\033[0m", # End of color "\033[36m", # Cyan "\033[91m", # Red "\033[35m", # Magenta ) # + async def makerandom(idx: int, threshold: int = 6) -> int: print(c[idx + 1] + f'Initiated makerandom({idx})') i = random.randint(0,10) while i <= threshold: print(c[idx + 1] + f"makerandom({idx}) == {i} too low; retrying.") await asyncio.sleep(idx + 1) i = random.randint(0, 10) print(c[idx + 1] + f"---> Finished: makerandom({idx}) == {i}" + c[0]) return i async def main(): res = await asyncio.gather(*(makerandom(i, 10 - i - 1) for i in range(3))) return res # - random.seed(444) r1, r2, r3 = await (main()) print() print(f"r1: {r1}, r2: {r2}, r3: {r3}") # ## ASYNC LOGGER import json import os import pandas as pd import numpy as np import asyncio from eth_account import account from web3 import Web3 from src import s3_services from src import sql_manager from datetime import datetime from importlib import reload # + #Web3 Connection Handler projectID = '69765a3368f44f5c8cc3691d2361e496' infura_url = f'https://mainnet.infura.io/v3/{projectID}' web3 = Web3(Web3.HTTPProvider(infura_url)) infura_url = f'wss://mainnet.infura.io/ws/v3/{projectID}' web3ws = Web3(Web3.WebsocketProvider(infura_url)) if web3.isConnected() and web3ws.isConnected(): print('The connection with Ethereum network is successfull') else: print('Network connection error') # - web3ws.eth.get_transaction_by_block() # Ronin Bridge Smart Contract address = '0x1A2a1c938CE3eC39b6D47113c7955bAa9DD454F2' abi = 
json.loads('[{"inputs":[{"internalType":"address","name":"_proxyTo","type":"address"},{"internalType":"address","name":"_registry","type":"address"}],"payable":false,"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"_oldAdmin","type":"address"},{"indexed":true,"internalType":"address","name":"_newAdmin","type":"address"}],"name":"AdminChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"_oldAdmin","type":"address"}],"name":"AdminRemoved","type":"event"},{"anonymous":false,"inputs":[],"name":"Paused","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"address","name":"_new","type":"address"},{"indexed":true,"internalType":"address","name":"_old","type":"address"}],"name":"ProxyUpdated","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"uint256","name":"_depositId","type":"uint256"},{"indexed":true,"internalType":"address","name":"_owner","type":"address"},{"indexed":true,"internalType":"address","name":"_tokenAddress","type":"address"},{"indexed":false,"internalType":"address","name":"_sidechainAddress","type":"address"},{"indexed":false,"internalType":"uint32","name":"_standard","type":"uint32"},{"indexed":false,"internalType":"uint256","name":"_tokenNumber","type":"uint256"}],"name":"TokenDeposited","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"internalType":"uint256","name":"_withdrawId","type":"uint256"},{"indexed":true,"internalType":"address","name":"_owner","type":"address"},{"indexed":true,"internalType":"address","name":"_tokenAddress","type":"address"},{"indexed":false,"internalType":"uint256","name":"_tokenNumber","type":"uint256"}],"name":"TokenWithdrew","type":"event"},{"anonymous":false,"inputs":[],"name":"Unpaused","type":"event"},{"payable":true,"stateMutability":"payable","type":"fallback"},{"constant":true,"inputs":[],"name":"admin","outputs":[{"internalType":"address","name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"_newAdmin","type":"address"}],"name":"changeAdmin","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"depositCount","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"internalType":"uint256","name":"","type":"uint256"}],"name":"deposits","outputs":[{"internalType":"address","name":"owner","type":"address"},{"internalType":"address","name":"tokenAddress","type":"address"},{"internalType":"address","name":"sidechainAddress","type":"address"},{"internalType":"uint32","name":"standard","type":"uint32"},{"internalType":"uint256","name":"tokenNumber","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"implementation","outputs":[{"internalType":"address","name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[],"name":"pause","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"paused","outputs":[{"internalType":"bool","name":"","type":"bool"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"proxyType","outputs":[{"internalType":"uint256","name":"","type":"uint256"}],"payable
":false,"stateMutability":"pure","type":"function"},{"constant":true,"inputs":[],"name":"registry","outputs":[{"internalType":"contract Registry","name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[],"name":"removeAdmin","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[],"name":"unpause","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"_newProxyTo","type":"address"}],"name":"updateProxyTo","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"internalType":"address","name":"_registry","type":"address"}],"name":"updateRegistry","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[{"internalType":"uint256","name":"","type":"uint256"}],"name":"withdrawals","outputs":[{"internalType":"address","name":"owner","type":"address"},{"internalType":"address","name":"tokenAddress","type":"address"},{"internalType":"uint256","name":"tokenNumber","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"}]') contract = web3.eth.contract(address=address, abi = abi) contract.events.TokenWithdrew.event_name block_data = pd.DataFrame() web3.eth.get_transaction() def handle_event(event, df): #print(Web3.toJSON(event)) #event_frame = pd.DataFrame.from_dict(event.args) print(event['args']) print(f'Movement found in {event.blockNumber}') print(f'address: {event.address}') print(f'TransactionHash: {event.transactionHash}') #print(web3.eth.get_block(event_blockNumber)) print(web3.eth.get_transaction(web3.toHex(event.transactionHash))) #df = df.append(event_frame) #print(df) async def log_loop(event_filter, poll_interval): for i in range(10): for TokenDeposited in event_filter_1.get_new_entries(): print('New Deposit') handle_event(TokenDeposited, block_data) for TokenWithdrew in event_filter_2.get_new_entries(): print('New Withdrawal') handle_event(TokenWithdrew, block_data) await asyncio.sleep(poll_interval) # + event_filter_1 = contract.events.TokenDeposited.createFilter(fromBlock='latest') event_filter_2 = contract.events.TokenWithdrew.createFilter(fromBlock='latest') #block_filter = web3.eth.filter('latest') #tx_filter = web3.eth.filter('pending') async def main(): loop = asyncio.get_event_loop() await (asyncio.gather(log_loop(event_filter,poll_interval=2))) # - await main() block_data # + import asyncio async def rxer(): i=0 while True: i+=1 print ('Rxer ',i) await asyncio.sleep(1) async def WsServe(): for i in range(5): print ('WsServe',i) await asyncio.sleep(1) print ('Finish') loop.stop() loop.close() loop=asyncio.get_event_loop() loop.create_task(rxer()) loop.run_until_complete(WsServe()) loop.run_forever() # + async def WsServe(stop_event): for i in range(5): print ('WsServe',i) await asyncio.sleep(1) print ('Finish') stop_event.set() await asyncio.sleep(1) async def main(): asyncio.get_event_loop().create_task(rxer()) stop_event = asyncio.Event() asyncio.get_event_loop().create_task(WsServe(stop_event)) await stop_event.wait() asyncio.run(main()) # python 3.6 and older: #asyncio.get_event_loop().run_until_complete(main()) # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # # Assumptions of Linear Regression # Data 
Source should be evaluted in accordane with multicolinearity, equal variance, autocorrelation and so on. Linear Regression is only half to build up something. The important point is how our data comply with our regression model. # # Assumption 1 # + active="" # Libraries # - if(!require(readxl)) { install.packages("readxl"); require(readxl) } if(!require(lmtest)){ install.packages("lmtest"); require(lmtest) } if(!require(corrplot)) { install.packages("corrplot"); require(corrplot) } if(!require(car)) { install.packages("car"); require(car) } if(!require(gvlma)) { install.packages("gvlma"); require(gvlma) } # + active="" # The mean of residuals is zero. Check the mean of residuals! # - #The following line provides you a GUI screen which helps to select your file. #my_data <- read_excel(file.choose()) currentDirectory <- getwd() setwd("C:\\Users\\zoint\\Desktop\\AllFiles\\Projeler\\R_Projects") directoryChanged <- getwd() my_data <- read_excel("Autoform_1000.xlsx") meanResidual <- function(){ mod <- lm(HARDNESSP1 ~ TransportTimeAfterHeating, data = my_data) mean (mod$residuals) } meanResidual() # # Assumption 2 # + active="" # Homoscedasity of residuals or equal variance # - homosCedasticity <- function(){ par(mfrow = c(2,2)) # set 2 rows and 2 column plot layout mod_layout <- lm(HARDNESSP1 ~ TransportTimeAfterHeating, data = my_data) plot(mod_layout) } homosCedasticity() # ![test%20picture.png](attachment:test%20picture.png) # # Autocorrelation (Assumption 3) # + active="" # Autocorrelation normally applies on time series data. Because time series data plot correlated each other and this would helpful for the machine learning applications. When the residuals (error) are autocorrelated, it means that the current value is dependent of the previous (historic) values and there is a definite unexplained pattern in the Y variable that shows up in the disturbances. # - autocorrelation <- function() { lmMod <- lm(thickness ~ spacing, data = my_data) acf(lmMod$residuals) #Durbin - #If the result is higher than 0, then our data is autocorrelated. print(lmtest::dwtest(lmMod)) } autocorrelation() # + active="" # If you check the plot out, you will not see a single line under the dashed (blue) region. This shows us that the data model abide by the autocorrelation rules. # - # # Assumption 4 # + active="" # The variability in X values is positive. So how to check? # - variabilityInX <- function(){ if(var(my_data$thickness) > 0.0 && var(my_data$TransportTimeAfterHeating) > 0.0 && var(my_data$EnforcedTemperatureOfEntireSheet) > 0.0 && var(my_data$QuenchingTimeInTool) > 0.0 && var(my_data$QuenchingForce) > 0.0 && var(my_data$spacing) > 0.0 && var(my_data$DefaultToolTemperature) > 0.0) { print("Variability checked") } else { print("Variability is not checked") } } variabilityInX() # # Assumption 5 # + active="" # There is no perfect relationship between independent Variables. 
Check Variance Inflation Factor(VIF) # - # ![A%20Formula.JPG](attachment:A%20Formula.JPG) mod2 <- lm(HARDNESSP1 ~ ., data=my_data[1:9]) vif(mod2) # + active="" # Check correlation plot # - apply(my_data, 2, function(x) sum(is.na(x))) corrplot(cor(my_data)) # # Check Assumption Automatically # + active="" # The gvlma() function check the important assumptions on a given linear model # - par(mfrow=c(2,2)) gvlmaMod <- lm(thickness ~ spacing, data = my_data[1:8, ]) gvlma::gvlma(gvlmaMod) # + active="" # Explanation of the variable of gvlma() # + active="" # Global Stat: Are the relationships between your X (independent variables) and Y (dependent variable)roughly linear? Rejection of the null (p < 0.5) indicates a non-linear relationship between the variables. # # Skewness: Is your distribution turn positively or negatively, necessitating a transformation to meet the assumption of normality? If the skewness couldn't reach p < 0.5, then you should transform your data. # # Kurtosis: Is your data highly peaked or shallowly peaked, necessitating a transformation to meet the assumption of normality? # # Link Function: Is your dependent variable truly continuous, or categorical? Rejection of null (p < 0.5) indicates that you should utilize a different linear model (e.g. logistic regression) # # Heteroscedasticity: Is the variance of your model residuals constant across the range of X? That shows your model whether it is better or worse at predicting for certain range of your X scales. # - # # Historical Data Fault Diagnosis System # + active="" # A new fault Diagnosis Method using Fault Directions in Fisher Discriminant Analysis # + active="" # Following three methods will be implemented # + active="" # 1) Principal Component Analysis (PCA) # 2) Partial Least Square (PLS) # 3) Statistical Process Monitoring (SPM) in Fault Detection # + active="" # In Historical Data Fault Diagnosis System one can carry out the system within three stages. # 1) Data preanalysis # 2) Fault Visualization # 3) Fault Diagnosis # + active="" # Process monitoring method is composed of three parts # + active="" # 1) A Preanalysis step that first roughly ioutput dentifies various clusters in a historical output data set -> isolate normal and abnormal data clusters by the k-means clustering method. # 2) A fault visualization step that visualise high-dimensional data in 2-D space by performing global Fisher Discriminant Anaylsis (FDA) # 3) A new fault diagnosis method based on fault directions in pairwise FDA. 
# # Subsections: # 1) Preliminary # 2) PCA-based process monitoring # 3) Fisher Discriminant Analysis # 4) K-Means Clustering # + if(!require(readxl)) { install.packages("readxl"); require(readxl) } if(!require(MASS)) { install.packages("MASS"); require(MASS) } if(!require(factoextra)) { install.packages("factoextra"); require(factoextra) } currentDirectory <- getwd() setwd("C:\\Users\\zoint\\Desktop\\AllFiles\\Projeler\\R_Projects") directoryChanged <- getwd() #print out new directory address print(directoryChanged) my_data <- read_excel("Autoform_1000.xlsx") #Normalization NaValue <- function (x) { sum(is.na(x))} apply(my_data, 2, NaValue) #Principal component Analysis Part my_data.principal <- my_data[1:999, 1:9] print(head(my_data.principal[,1:9])) pca.graph <- prcomp(my_data.principal, scale=TRUE) #Visualisierung # fviz_pca_biplot(pca.graph, repel = TRUE, # col.var = "#2E9FDF", # Variables color # col.ind = "#696969" # Individuals color # ) resultPlot <- fviz_pca_var(pca.graph, col.var="contrib")+ scale_color_gradient2(low="orange", mid="blue", high="red", midpoint=96) + theme_minimal() print(resultPlot) # Gradient color resultHeatMap <- fviz_pca_ind(pca.graph, col.ind="cos2") + scale_color_gradient2(low="yellow", mid="blue", high="red", midpoint=0.6) print(resultHeatMap) eigen.values <- get_eigenvalue(pca.graph) print(eigen.values) result.values <- get_pca_var(pca.graph) result.values$coord result.values$contrib result.values$cos2 # Graph of individuals # An alias of fviz_pca_biplot() #Linear Discriminant Analysis #It is a well-established machine learning techique for predicting categories. Its main advantages, #compared to other classification algorithms such as neural networks and random forests, are that #the model is interpretable and that prediction is easy my_data.lda <- lda(HARDNESSP1 ~ ., data = my_data[1:9]) my_data.lda.values <- predict(my_data.lda) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="8VKebFUhOTuq" outputId="66042085-fa89-48d5-a6af-3bd84d5d413c" #clone repository until final version of package is released # !git clone https://github.com/oskarfernlund/niteshade.git # %cd 'niteshade/' #necessary installs # !pip install fpdf # #!pip install niteshade # + [markdown] id="Na2Sn5owOzFa" # # Import Necessary Libraries # + id="7251Nyj6OvCV" #necessary imports import numpy as np import torch from niteshade.attack import AddLabeledPointsAttacker, LabelFlipperAttacker, BrewPoison from niteshade.defence import FeasibleSetDefender, KNN_Defender, SoftmaxDefender from niteshade.models import MNISTClassifier, BaseModel from niteshade.postprocessing import PostProcessor, PDF from niteshade.simulation import Simulator, wrap_results from niteshade.utils import train_test_MNIST, save_plot, get_time_stamp_as_string from niteshade.data import DataLoader import matplotlib.pyplot as plt # + [markdown] id="DPaAPC5bNjsO" # # Using the **niteshade** Pipeline to Test Attack and Defense Strategies on an Off The Shelf MNIST Classifier # + colab={"base_uri": "https://localhost:8080/", "height": 103, "referenced_widgets": ["09f247c707114db4af6d98abc1213945", "", "", "", "", "", "", "80ed724924f34862bf9042a841dc9c21", "", "", "7c0538299e51421fa62142824908c18b"]} id="c-DJ_1EH8jxE" outputId="335f1a3d-70f5-429a-ef97-305f0b8a4549" X_train, y_train, X_test, 
y_test = train_test_MNIST() # + colab={"base_uri": "https://localhost:8080/"} id="gmdansNXPPEz" outputId="66253165-82f6-4e9f-c468-2541f43e7343" torch.manual_seed(0) #seed for reproducibility BATCH_SIZE = 128 # --> mini-batches on which model is trained on NUM_EPISODES = 30 # --> number of times the attacker/defender intervene lrs = [0.001, 0.005, 0.01, 0.05, 1] #define attacker and defender label_flips_dict = {1:9, 9:1} attacker = LabelFlipperAttacker(1, label_flips_dict) defender = FeasibleSetDefender(X_train, y_train, 2000) #define model and simulator baseline_model = MNISTClassifier(lr=0.01) simulator_regular = Simulator(X_train, y_train, baseline_model, attacker=attacker, defender=None, batch_size=BATCH_SIZE, num_episodes=NUM_EPISODES) simulator_regular.run() simulators = {'regular_0.01': simulator_regular} for lr in lrs: model = MNISTClassifier(lr=lr) simulator = Simulator(X_train, y_train, model, attacker=attacker, defender=defender, batch_size=BATCH_SIZE, num_episodes=NUM_EPISODES) simulator.run() simulators[f'lr_{lr}'] = simulator postprocessor = PostProcessor(simulators) metrics = postprocessor.evaluate_simulators_metrics(X_test, y_test) # + fig, ax = plt.subplots(1, figsize=(15,10)) for key, value in metrics.items(): x = float(key.split('_')[-1]) y = value ax.scatter(x, y, label=key, marker='D', alpha=0.7, s=150) ax.set_xlabel('Learning Rate') ax.set_ylabel('Accuracy') plt.legend() plt.show() save_plot(fig, plotname='Learning Rate vs Accuracy') # - # !ls output # ## PDF demo simulation_metrics = postprocessor.compute_online_learning_metrics(X_test, y_test) postprocessor.plot_online_learning_metrics(simulation_metrics, plotname='Accuracy by Episode') # + # Generate a PDF time_stamp = get_time_stamp_as_string() header_title = f'Example Simulation Report as of {time_stamp}' pdf = PDF() pdf.set_title(header_title) pdf.add_all_charts_from_directory('output') pdf.output('summary_report.pdf', 'F') # - from IPython.display import Image Image("summary_pdf_demo.png") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import boto3 import pandas as pd from io import StringIO s3 = boto3.resource('s3') bucket = s3.Bucket("deutsche-boerse-xetra-pds") bucket_obj = bucket.objects.filter(Prefix="2021-03-15") objects = [obj for obj in bucket_obj] objects data_object = bucket.Object(key='').get().get('Body').read().decode("utf-8") data = StringIO(data_object) df = pd.read_csv(data, delimeter=",") df.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip install fuzzywuzzy from fuzzywuzzy import fuzz from fuzzywuzzy import process import pandas as pd import numpy as np import string #set up df and target sentence for string matching triggers = pd.read_csv('weather app sample conversations - more explicit triggers.csv', names = ['Sentences']) triggers.dropna(inplace=True) target_sent = 'What is the weather' #classify based on fuzzy string ratio triggers['fuzzydist'] = triggers['Sentences'].apply(lambda x: fuzz.ratio(x, target_sent)) triggers['label'] = triggers['fuzzydist'].apply(lambda x: x > 50) triggers sum(triggers['label'])/len(triggers) #clean sentences first and pass into fuzzywuzzy triggers['sent_cleaned'] = 
triggers['Sentences'].apply(lambda x: x.translate(str.maketrans('', '', string.punctuation))) triggers['label_cleaned'] = triggers['sent_cleaned'].apply(lambda x: fuzz.ratio(x, target_sent) > 50) sum(triggers['label_cleaned'])/len(triggers) # + #test using multiple trigger sents target_sents = ['what is the weather', 'will it rain today', 'is it hot today', 'what is todays forecast', 'is it cold outside'] def classify_sent(sent): scores = [fuzz.ratio(sent, target) for target in target_sents] print(scores) return max(scores) > 50 triggers['label_mult'] = triggers['sent_cleaned'].apply(lambda x: classify_sent(x)) sum(triggers['label_mult'])/len(triggers) # - #test on negative examples f = open('samples.txt', 'r') labels_cont = [classify_sent(sent) for sent in f] sum(labels_cont) classify_sent("Im headed to New York this summer since my internship is remote anyway. Oh, that sounds cool! I've heard NY gets really humid at that time of the year though? I don't mind a little humidity, I think it'll be fine! Sweet! Let me know how that goes - I might have to go to Florida for mine, so well be on the same time zone. Oh, awesome") # ## Testing framework import test # + def is_trigger(sent, trig): return fuzz.ratio(sent, trig) > 70 test.compute_train_test_split(is_trigger) # + from tqdm import tqdm import random with open("samples.txt") as f: sentences = f.readlines() with open("triggers.txt") as f: triggers = f.readlines() with open("neg_distractors.txt") as f: distractors = f.readlines() def compute_accuracy(sents, trigs, is_trigger): identified_trigger = 0 random.shuffle(sents) for sent in tqdm(sents): for trig in trigs: if is_trigger(sent, trig): identified_trigger += 1 break return identified_trigger, identified_trigger/len(sents) def compute_train_test_split(is_trigger): print("Computing false positives:") random.shuffle(triggers) train_triggers = triggers[:int(len(triggers)*.8)] test_triggers = triggers[int(len(triggers)*.8): len(triggers)] print(compute_accuracy(sentences[:50], train_triggers, is_trigger)) print("Compute true positives") print(compute_accuracy(test_triggers, train_triggers, is_trigger)) print("Compute neg distractors rate") print(compute_accuracy(distractors, train_triggers, is_trigger)) # - compute_train_test_split(is_trigger) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Algorithm usage tutorial # # During the development of a reinforcement learning application, we often train the agent using different reinforcement learning algorithm and see its performance difference. # In this notebook, you will learn how to train an agent in the same environment using different algorithms. # 3 steps. # # (0. Preparation of this notebook) # 1. Setting up the training environment # 2. Setup the DDPG algorithm and train # 3. Change the training algorithm to SAC and train # ## Preparation # # Let's start by first installing nnabla-rl and importing required packages for training. 
# !pip install nnabla-rl # + import gym import nnabla as nn from nnabla import functions as NF from nnabla import parametric_functions as NPF from nnabla import solvers as NS import nnabla_rl import nnabla_rl.algorithms as A import nnabla_rl.writers as W import nnabla_rl.functions as RF from nnabla_rl.builders import SolverBuilder from nnabla_rl.environments.environment_info import EnvironmentInfo from nnabla_rl.models.q_function import QFunction from nnabla_rl.environments.wrappers import NumpyFloat32Env, ScreenRenderEnv from nnabla_rl.utils.evaluator import EpisodicEvaluator from nnabla_rl.utils.reproductions import set_global_seed # - # !git clone https://github.com/sony/nnabla-rl.git # !bash nnabla-rl/interactive-demos/package_install.sh # %run nnabla-rl/interactive-demos/colab_utils.py nn.clear_parameters() # ## Setting up the training environment # # Set up the "Pendulum" environment provided by the OpenAI Gym. def build_env(env_name): env = gym.make(env_name) env = NumpyFloat32Env(env) env = ScreenRenderEnv(env) # for rendering environment env.seed(0) # optional return env env_name = "Pendulum-v0" env = build_env(env_name) set_global_seed(0) # optional # ## Preparation of Hook (optional) # # This hook may slow down the training. render_hook = RenderHook(env=env) # ## Setup the DDPG algorithm and train # # We are almost ready to start the training. Let's first try the DDPG algorithm to train the agent. config = A.DDPGConfig(gpu_id=0, start_timesteps=200) ddpg = A.DDPG( env_or_env_info=env, config=config ) ddpg.set_hooks([render_hook]) ddpg.train(env, total_iterations=50000) # Wait for a while and see that the pendulum swang up. # ## Change the training algorithm to SAC and train # # Next, let's try training the agent with another reinforcement learning algorithm SAC. # You will find that changing the algorithm is very easy. env.reset() nn.clear_parameters() render_hook.reset() config = A.SACConfig(gpu_id=0, start_timesteps=500) sac = A.SAC( env_or_env_info=env, config=config ) sac.set_hooks([render_hook]) sac.train(env, total_iterations=50000) # Changing the training algorithm is easy, right? # ## Note # # To train an agent using different algorithms, you'll need to check the action type required by the environment. # Required action type must be supported by the algorithm that you want to use. # In this example, we used the "Pendulum" environment which works with continuous action outputs and both DDPG and SAC supports continuous action environment. # # See the [Algorithm catalog](https://github.com/sony/nnabla-rl/blob/master/nnabla_rl/algorithms/README.md) for the action type supported by the algorithm. 
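# As a quick illustration of that check (a minimal sketch using only the standard gym API; the
# `describe_action_space` helper is made up for this example and is not part of nnabla-rl):

# +
import gym
from gym import spaces

def describe_action_space(env_name):
    # Report whether an environment expects continuous (Box) or discrete actions.
    env = gym.make(env_name)
    if isinstance(env.action_space, spaces.Box):
        kind = "continuous"
    elif isinstance(env.action_space, spaces.Discrete):
        kind = "discrete"
    else:
        kind = type(env.action_space).__name__
    print(f"{env_name}: {kind} action space -> {env.action_space}")
    env.close()

describe_action_space("Pendulum-v0")  # continuous actions, so DDPG and SAC both apply
# -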
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # EPSY 5200: Programming for Social Science Researchers # ## Week 11: Git Demo # ### Wednesday, November 13, 2019 import numpy.random as npr import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import scipy.stats as stats import statsmodels.formula.api as smf mov = pd.read_csv('tmdb_5000_movies.csv') mov.head() # Challenge 1: Find the descriptive stats for each numeric column # + # challenge 1 code here mov[:].describe().transpose() # - # Challenge 2: Find which columns have missing data (and how many missing data) # + # challenge 2 code here mov.isnull().sum() # - mov2 = mov.drop(['homepage', 'tagline'], axis = 1) mov2.head() sum(mov2.revenue == 0) sum(mov2.budget == 0) # + # challenge 3: create mov3, which is only budget > 0 & revenue > 0 mov3 = mov2[(mov2['budget'] > 0) & (mov2['revenue'] > 0)] # - mov3.head() # + mov_plot = sns.regplot(mov3.budget, mov3.revenue).get_figure() mov_plot.savefig('regression_plot.png') plt.savefig('regression_plot.png') mov_plot # - rev_budg = smf.ols(formula = 'revenue ~ budget', data = mov3).fit() rev_budg rev_budg.params rev_budg.rsquared rev_budg.pvalues rev_budg.conf_int() rev_budg.summary() output = pd.DataFrame({'estimate': rev_budg.params, 'lowCI': rev_budg.conf_int()[0], 'highCI': rev_budg.conf_int()[1], 'pVal': rev_budg.pvalues}) output output.to_csv('regression_table.csv') rbr = smf.ols(formula = 'revenue ~ budget + runtime', data = mov3).fit() rbr.summary() sns.pairplot(mov3[['budget', 'revenue', 'runtime']], kind="reg") mov3[mov3.runtime == mov3.runtime.max()] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Python # Python语言是一种解释型、面向对象、动态数据类型的高级程序设计语言,如今已成为绝大部分数据分析师的首选数据分析语言。其设计的核心理念是代码的易读性,以及允许编程者通过若干行代码轻松表达想法创意。 # ## 版本 # # 本教程所有代码采用 Python3.6 进行编写,可通过`!python --version`命令查看版本。 # + outputHidden=false inputHidden=false # !python --version # - # ## 语法 # # ### 缩进 # # 锁紧是表达代码间包含和层次关系的唯一手段,在 Python 中通过 4 个空格或 1 个 TAB 键来表示。 # # ### 注释 # 注释是不被程序执行的辅助性说明信息。在 Python 中单注释分为单行和多行,单行注释以`#`开头,其后内容为注释;多行注释以`'''`开头和结尾,中间内容为注释。 # # + # 这是一个单行注释 ''' 这是一个多行注释 这是一个多行注释 ''' print('注释') # - # ### 变量 # # 变量是用来保存和表示数据的占位符号。在 Python 中通过等号`=`用向变量赋值或修改值。 # 变量名是大小写字母、数字、下划线`_`和汉字等字符的组合,对大小写敏感,首字母不能是数字,也不能与保留字相同。 # + outputHidden=false inputHidden=false # 向变量x赋值为1 x = 1 print('x =',x) # 修改变量x的赋值为2 x = 2 print('x =',x) # - # ### 保留字 # # 保留字是被编程语言内部定义并保留使用的标识符。在 Python 中有 33 个保留字,保留字是编程语言的基本单词,对大小写敏感。 # # ||||||| # |:-:|:-:|:-:|:-:|:-:|:-:| # |and| as | assert | break | class | continue | # | def | elif | else | except | finally | for | # | from | if | import |in|is|lambda| # | not | or | pass |raise|return|try| # | while | with | yield |del|global|nonlocal| # | True | False | None |||| # ## 操作符 # # 操作符是完成运算的一种符号体系 # # ### 运算操作符 # # |操作符|描述| # |:-:|:-| # |x + y|加法运算| # |x - y|减法运算| # |x * y|乘法运算| # |x / y|除法运算| # |x // y|整除运算| # |x % y|取模(余数)运算 # | - y|负数运算| # |x ** y|幂运算,y 为小数时为开方运算| # # ### 赋值操作符 # # |操作符|描述| # |:-:|:-| # |x += y|即 x = x + y| # |x -= y|即 x = x - y| # |x *= y|即 x = x * y| # |x /= y|即 x = x / y| # |x //= y|即 x = x // y| # |x %= y|即 x = x % y| # |x **= y|即 x = x ** y| # # ### 比较运算符 # # 比较运算结果为 
True 或 False。 # # |操作符|描述| # |:-:|:-| # |==|等于| # |!=|不等于| # |<>|不等于,与 != 类似| # |>|大与| # |<|小于| # |>+|大于等于| # |<=|小于等于| # # ### 位运算符 # # 按位运算符是把数字看作二进制来进行计算的。 # # |操作符|描述| # |:-:|:-| # |&|按位与| # |||按位或| # |^|按位异或| # |~|按位取反| # |<<|左移动| # |>>|右移动| # # ### 逻辑运算符 # # |运算符|描述| # |:-:|:-| # |and|布尔"与"| # |or|布尔"或"| # |not|布尔"非"| # # ### 成员运算符 # # |运算符|描述| # |:-:|:-| # |in|变量在指定序列中找到值返回 True,否则返回 False| # |not in|变量在指定的序列中没有找到值返回 True,否则返回 False| # # ### 身份运算符 # # 身份运算符用于比较两个对象的存储单元。 # # |运算符|描述| # |:-:|:-| # |is|判断两个标识符是不是引用自一个对象| # |is not|判断两个标识符是不是引用自不同对象| # ## 数据类型 # # 数据类型是供计算机程序理解的数据形式,在 Python 中数据类型被分为基本数据类型和复合数据类型。 # # - 基本数据类型:也称为数值,包括整数、浮点数、复数、布尔数值和空值。 # - 复合数据类型:也称为序列,包括字符串、列表、元组、字典和集合。 # ### 基本数据类型 # # #### 整数 # # Python 中的整数与数学中整数的概念一致,可正可负,没有取值范围限制,在 Python 中有 4 种表达形式。 # # |表达形式|格式| # |:-:|:-| # |十进制|-100,0,100| # |二进制|以0b或0B开头:0b01,-0B01| # |八进制|以0o或0O开头:0o17,-0O17| # |十六进制|以0x或0X开头:0x0f, -0X0f| # # 整数虽然有 4 种表达形式,但输出格式都是十进制。 # + outputHidden=false inputHidden=false print("二进制数 0b01 的输出格式是:",0b01) print("八进制数 0o17 的输出格式是:",0o17) print("十六进制数 0x0f 的输出格式是:",0x0f) # - # #### 浮点数 # # Python 中的浮点数与数学中实数的概念一致,指带有小数点及小数的数字。浮点数取值范围和小数精度都存在限制,但常规计算可忽略。取值范围数量级约$-10^{308}$至$10^{308}$,精度数量级$10^{16}$。 # # 在 Python 中浮点数有两种表达形式:一种用小数点+数字形式表示;一种用科学计数法表示,使用字母e或E作为幂的符号,以10为基数,$e$表示$a*10^b$ # print('浮点数 110e-3 值为:',110e-3) print('浮点数 5.2E2 值为:',5.2E2) # 在 Python 中浮点数间运算存在不确定尾数,但这并不是 Bug ,由于计算机有二进制表示小数,只能无限接近小数值,但不会完全相同。 # + outputHidden=false inputHidden=false 0.1 + 0.2 # - # 方法: # # - `round(x, b)`:对 x 四舍五入,d 是小数截取位数。 round(0.1 + 0.2, 1) # #### 复数 # # Python 中的复数与数学中复数的概念一致,记 a+b*j 为复数,其中 a 是实部,b 是虚部。 x = 1 + 2j print('复数 1 + 2j 的实部是:', x.real) print('复数 1 + 2j 的虚部是:', x.imag) # #### 布尔数值 # # Python 中的布尔数值与数学中布尔代数的概念一致,分为 True 和 False。利用布尔数值可进行逻辑运算。 # + x = True y = False # 与运算 print('x and y =',x and y) print('x&y = ',x&y) # 或运算 print('x or y =',x or y) print('x|y = ',x|y) # 异或运算 print('x ^ y =',x ^ y) # 非运算 print('not x =',not x) # - # #### 空值 # # 空值是 Python 中一种特殊的数据类型,用 None 表示。None 表示一个空对象,不能理解为 0。 # + outputHidden=false inputHidden=false x = None x == 0 # - # ### 序列 # #### 字符串 # # 字符串是由 0 个或多个字符组成的有序字符序列,由一对单引号`''`或一对双引号`""`表示。 # ## 循环语句 print('我是一个字符串') print("我是另一个字符串") # ## 参考资料 # # [北京理工大学:Python语言程序设计](https://www.icourse163.org/course/BIT-268001?tid=1003243006) # # [菜鸟教程:Python3教程](http://www.runoob.com/python3/python3-tutorial.html) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Scraping the Data # ### I used the BeautifulSoup package to transform the data in the table on the Wikipedia page into pandas dataframe import requests website_url = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text from bs4 import BeautifulSoup soup = BeautifulSoup(website_url,'lxml') # ### We can observe that the tabular data is availabe in table and belongs to class="wikitable sortable", So we need to extract only table Minha_tabela = soup.find('table',{'class':'wikitable sortable'}) print(Minha_tabela.tr.text) headers="Postcode,Borough,Neighbourhood" # ### Geting the values in tr and separate each td within by "," # tabela1="" for tr in Minha_tabela.find_all('tr'): row1="" for tds in tr.find_all('td'): row1=row1+","+tds.text tabela1=tabela1+row1[1:] print(tabela1) # ### Store the data in .csv file # file=open("toronto-data.csv","wb") file.write(bytes(tabela1,encoding="ascii",errors="ignore")) # ### 
Converting into dataframe and assigning column names import pandas as pd df = pd.read_csv('toronto-data.csv',header=None) df.columns=["Postalcode","Borough","Neighbourhood"] df.head(10) # ### Drop row where borough is "Not assigned" # + # Get names of indexes for which column Borough has value "Not assigned" indexNames = df[ df['Borough'] =='Not assigned'].index # Delete these row indexes from dataFrame df.drop(indexNames , inplace=True) # - df.head(10) # ### If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough df.loc[df['Neighbourhood'] =='Not assigned' , 'Neighbourhood'] = df['Borough'] df.head(10) # # ### Rows will be same postalcode will combined into one row with the neighborhoods separated with a comma # result = df.groupby(['Postalcode','Borough'], sort=False).agg( ', '.join) df_new=result.reset_index() df_new.head(10) # ## Store data in new .csv with clean data. df_new.to_csv(r'Toronto-data-processed.csv', index = False, header = True) # #### That's It, Thank you :) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # TFRecord files in Python # # # The [tf.io](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/io) module also contains pure-Python funtions for reading and writing TFRecord files. # ## Import all the necessary libraries # + import numpy as np import tensorflow as tf np.random.seed(5) tf.__version__ # - # ## Utils # + # The following functions can be used to convert a value to a type compatible # with tf.Example. def _bytes_feature(value): """Returns a bytes_list from a string/byte.""" if isinstance(value, type(tf.constant(0))): value = value.numpy() # BytesList won't unpack a string from an EagerTensor. return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) def _float_feature(value): """Return a float_list form a float/double.""" return tf.train.Feature(float_list=tf.train.FloatList(value=[value])) def _int64_feature(value): """Return a int64_list from a bool/enum/int/uint.""" return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) # - def serialize_example(feature0, feature1, feature2, feature3): """ Creates a tf.Example message ready to be written to a file. """ # Create a dictionary mapping the feature name to # the tf.Example-compatible data type feature = { 'feature0': _int64_feature(feature0), 'feature1': _int64_feature(feature1), 'feature2': _bytes_feature(feature2), 'feature3': _float_feature(feature3), } # Create a Features message using tf.train.Example. example_proto = tf.train.Example(features=tf.train.Features(feature=feature)) return example_proto.SerializeToString() # ## Writing a TFRecord file # # 1. Create the 10,000 observations data. # 2. Write the 10,000 observations to the file `test.tfrecord`. # # Each observation is converted to a `tf.Example` message, # the written to file. # You can then verify that the file `test.record` has been created # # ### Create the observations data # + # The number of observations in the dataset. n_observations = int(1e4) # Boolean feature, encoded as False or True. feature0 = np.random.choice([False, True], n_observations) # Integer feature, random from 0 to 4. 
feature1 = np.random.randint(0, 5, n_observations) # String feature strings = np.array([b'cat', b'dog', b'chicken', b'horse', b'goat']) feature2 = strings[feature1] # Float feature, from a standard normal distribution feature3 = np.random.randn(n_observations) # - # ### Writing the `tf.Example` observations to the file # + filename = 'test.tfrecord' with tf.io.TFRecordWriter(filename) as writer: for i in range(n_observations): example = serialize_example(feature0[i], feature1[i], feature2[i], feature3[i]) writer.write(example) # - # !du -sh {filename} # ## Reading a TFRecord file # # These serialized tensors can be easily parsed using # [tf.train.Example.ParseFromString](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/train/Example#ParseFromString) # + filenames = [filename] # Create a Dataset from a TFRecord file raw_dataset = tf.data.TFRecordDataset(filenames) raw_dataset # - for raw_record in raw_dataset.take(1): print("Serialized example: ") print("========================") print(repr(raw_record)) example = tf.train.Example() example.ParseFromString(raw_record.numpy()) print("\nProto message: ") print("========================") print(example) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: ai_safe # language: python # name: ai_safe # --- # # Pooling # # As we process images (or other data sources) we will eventually want to reduce the resolution of the # images. After all, we typically want to output an estimate that does not depend on the dimensionality of # the original image. Secondly, when detecting lower-level features, such as edge detection (we covered this # in the section on *convolutional layers*), we often want to have some degree of invariance to translation. # For instance, if we take the image X with a sharp delineation between black and white and if we shift it # by one pixel to the right, i.e. `Z[i,j]` = `X[i,j+1]`, then the output for for the new image Z will be # vastly different. The edge will have shifted by one pixel and with it all the activations. In reality objects # hardly ever occur exactly at the same place. In fact, even with a tripod and a stationary object, vibration # of the camera due to the movement of the shutter might shift things by a pixel or so (this is why high # end cameras have a special option to fix this). Given that, we need a mathematical device to address the # problem. # # This section introduces pooling layers, which were proposed to alleviate the excessive sensitivity of the # convolutional layer to location and to reduce the resolution of images through the processing pipeline. # # Maximum Pooling and Average Pooling # # Like convolutions, pooling computes the output for each element in a fixed-shape window (also known # as a pooling window) of input data. Different from the cross-correlation computation of the inputs and # kernels in the convolutional layer, the pooling layer directly calculates the maximum or average value of # the elements in the pooling window. These operations are called maximum pooling or average pooling # respectively. In maximum pooling, the pooling window starts from the top left of the input array, and # slides in the input array from left to right and top to bottom. When the pooling window slides to a certain # position, the maximum value of the input subarray in the window is the element at the corresponding # location in the output array. 
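# As a quick cross-check of the description above (a small sketch added here, not part of the original text):
# PyTorch's built-in `nn.MaxPool2d` with a 2 × 2 window and stride 1, applied to the 3 × 3 input used in the
# figure below, reproduces the four maxima derived there, e.g. max(0, 1, 3, 4) = 4 for the first output element.

# +
import torch
import torch.nn as nn

# Batch and channel dimensions are required by nn.MaxPool2d, hence the (1, 1, 3, 3) shape.
X = torch.arange(9, dtype=torch.float32).reshape(1, 1, 3, 3)
builtin_pool = nn.MaxPool2d(kernel_size=2, stride=1)
print(builtin_pool(X))  # tensor([[[[4., 5.], [7., 8.]]]])
# -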
from IPython.display import SVG SVG(filename = '../img/pooling.svg') # Fig. 6.6: Maximum pooling with a pooling window shape of 2 × 2. The shaded portions represent the # first output element and the input element used for its computation: max(0, 1, 3, 4) = 4 # The output array in the figure above has a height of 2 and a width of 2. The four elements are derived # from the maximum value of max: # # max(0, 1, 3, 4) = 4, # # max(1, 2, 4, 5) = 5, # # max(3, 4, 6, 7) = 7, # # max(4, 5, 7, 8) = 8. # Average pooling works like maximum pooling, only with the maximum operator replaced by the average # operator. The pooling layer with a pooling window shape of p × q is called the p × q pooling layer. The # pooling operation is called p × q pooling. # # Let us return to the object edge detection example mentioned at the beginning of this section. Now we will # use the output of the convolutional layer as the input for 2 × 2 maximum pooling. Set the convolutional # layer input as X and the pooling layer output as Y. Whether or not the values of `X[i, j]` and `X[i, # j+1]` are different, or `X[i, j+1]` and `X[i, j+2]` are different, the pooling layer outputs all include # `Y[i, j]=1`. That is to say, using the 2 × 2 maximum pooling layer, we can still detect if the pattern # recognized by the convolutional layer moves no more than one element in height and width. # # As shown below, we implement the forward computation of the pooling layer in the `pool2d` function. # This function is very similar to the `corr2d` function in the section on *convolutions*. The only difference # lies in the computation of the output Y. # + import torch import torch.nn as nn def pool2d(X, pool_size, mode='max'): p_h, p_w = pool_size Y = torch.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1)) for i in range(Y.shape[0]): for j in range(Y.shape[1]): if mode == 'max': Y[i, j] = X[i: i + p_h, j: j + p_w].max() elif mode == 'avg': Y[i, j] = X[i: i + p_h, j: j + p_w].mean() return Y # - # We can construct the input array X in the above diagram to validate the output of the two-dimensional maximum pooling layer. X = torch.tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]], dtype=torch.float32) pool2d(X, (2, 2)) # At the same time, we experiment with the average pooling layer. pool2d(X, (2, 2), 'avg') # # Padding and Stride # # Like the convolutional layer, the pooling layer can also change the output shape by padding the two sides # of the input height and width and adjusting the window stride. The pooling layer works in the same way # as the convolutional layer in terms of padding and strides. We will demonstrate the use of padding and # stride in the pooling layer through the two-dimensional maximum pooling layer **MaxPool2d** in the torch.nn # module. We first construct an input data of shape ($1$, $1$, $4$, $4$), where the first two dimensions are # batch and channel. X = torch.arange(16, dtype=torch.float32).reshape((1, 1, 4, 4)) print(X) # By default, the stride in the `MaxPool2d` class has the same shape as the pooling window. Below, we use # a pooling window of shape ($3$, $3$), so we get a stride shape of ($3$, $3$) by default. pool2d = nn.MaxPool2d(3) # Because there are no model parameters in the pooling layer, we do not need # to call the parameter initialization function pool2d(X) # The stride and padding can be manually specified. pool2d = nn.MaxPool2d(3, padding=1, stride=2) pool2d(X) # Of course, we can specify an arbitrary rectangular pooling window and specify the padding and stride for # height and width, respectively. 
# pad should be smaller than half of kernel size pool2d = nn.MaxPool2d((2, 3), padding=(1, 1), stride=(2, 3)) pool2d(X) # # Multiple Channels # # When processing multi-channel input data, the pooling layer pools each input channel separately, rather # than adding the inputs of each channel by channel as in a convolutional layer. This means that the # number of output channels for the pooling layer is the same as the number of input channels. Below, we # will concatenate arrays X and X+1 on the channel dimension to construct an input with 2 channels. X = torch.cat((X, X + 1), dim=1) print(X) # As we can see, the number of output channels is still 2 after pooling. pool2d = nn.MaxPool2d(3, padding=1, stride=2) pool2d(X) # # Summary # # * Taking the input elements in the pooling window, the maximum pooling operation assigns the # maximum value as the output and the average pooling operation assigns the average value as the # output. # * One of the major functions of a pooling layer is to alleviate the excessive sensitivity of the convolutional layer to location. # * We can specify the padding and stride for the pooling layer. # * Maximum pooling, combined with a stride larger than 1 can be used to reduce the resolution. # * The pooling layer’s number of output channels is the same as the number of input channels. # # Exercises # # 1. Implement average pooling as a convolution. # 2. What is the computational cost of the pooling layer? Assume that the input to the pooling layer is # of size c × h × w, the pooling window has a shape of $p_{h}$ × $p_{w}$ with a padding of ($p_{h}$ , $p_{w}$ ) and a # stride of ($s_{h}$ , $s_{w}$ ). # 3. Why do you expect maximum pooling and average pooling to work differently? # 4. Do we need a separate minimum pooling layer? Can you replace it with another operation? # 5. Is there another operation between average and maximum pooling that you could consider (hint - # recall the softmax)? Why might it not be so popular? % --- % jupyter: % jupytext: % text_representation: % extension: .m % format_name: light % format_version: '1.5' % jupytext_version: 1.14.4 % kernelspec: % display_name: Octave % language: octave % name: octave % --- % ### Linear dependence and matrices % % To determine if a set of matrices is linearly dependent/independent, enter each matrix as a column into a larger matrix. % % Given this set of matrices: % % $ \left( % \left[ \begin{matrix} 2 & -1 \\ 4 & 6 \end{matrix} \right], % \left[ \begin{matrix} 3 & 2 \\ 8 & 3 \end{matrix} \right], % \left[ \begin{matrix} -5 & -8 \\ -16 & 4 \end{matrix} \right], % \left[ \begin{matrix} 0 & -7 \\ -4 & 13 \end{matrix} \right] % \right) $ % % And some scalars $ a,b,c,d \in R $ % % To assume linear indepedence means $ a,b,c,d = 0 $, such that: % % $ % a\left[ \begin{matrix} 2 & -1 \\ 4 & 6 \end{matrix} \right] + % b\left[ \begin{matrix} 3 & 2 \\ 8 & 3 \end{matrix} \right] + % c\left[ \begin{matrix} -5 & -8 \\ -16 & 4 \end{matrix} \right] + % d\left[ \begin{matrix} 0 & -7 \\ -4 & 13 \end{matrix} \right] = % \left[ \begin{matrix} 0 & 0 \\ 0 & 0 \end{matrix} \right] % $ % % Then treat this as a system of equations. 
Consider the top left element of each matrix, then: % % $ 2a + 3b + -5c + 0d = 0 $ % % Top right: % % $ -1a + 2b -8c -7d = 0 $ % % Bottom left: % % $ 4a + 8b -16c -4d = 0 $ % % Bottom right: % % $ 6a + 3b + 4c + 13d = 0 $ # To do the same in Octave, starting with the same matrices: M_1 = [2 -1; 4 6] M_2 = [3 2; 8 3] M_3 = [-5 -8; -16 4] M_4 = [0 -7; -4 13] # Then enter each matrix as a row M = []; M(:,1) = M_1'(:) M(:,2) = M_2'(:) M(:,3) = M_3'(:) M(:,4) = M_4'(:) % For completeness, we can add the indentity matrix in there, so we need to augment the matrix with a zero vector. The definition of linear dependence requires an equality with 0. It's not strictly required to find the answer but it is helpful to remember what we're trying to do. M(:,5) = zeros(4,1) % And now we can reduce this via row operations. M(4,:) = M(4,:) + M(3,:) *-1; M(4,:) = M(4,:) + M(1,:) *-1; M(3,:) = M(3,:) + M(1,:) *-2; M(1,:) = M(1,:) + M(2,:); M(2,:) = M(2,:) + M(1,:); M(4,:) = M(4,:) + M(3,:) *4; M(2,:) = M(2,:) + M(3,:) *-3; M(3,:) = M(3,:) * 1/2 % With two rows the same, we can disregard a row. That means that the final colum, which represents the scalar $ d $ can have any value. This will require a non-trivial solution and therefore, these matrices are not linearly independent. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + # %load_ext autoreload # %autoreload 2 import warnings warnings.filterwarnings("ignore") import sys import torch sys.path.append("../") import models from utils.stylegan2_utils import StyleGAN2SampleGenerator from utils.segmentation_utils import FaceSegmentation, StuffSegmentation, GANLinearSegmentation from lelsd import LELSD # - # # Training StyleGAN2 with Supervised Segmentation # + [markdown] tags=[] # ### StyleGAN2 FFHQ # + jupyter={"outputs_hidden": true} tags=[] device = torch.device('cuda') exp_dir = "../out" G2 = models.get_model("stylegan2", "../pretrained/stylegan2/ffhq.pkl") stylegan2_sample_generator = StyleGAN2SampleGenerator(G=G2, device=device) face_bisenet = models.get_model("face_bisenet", "../pretrained/face_bisenet/model.pth") face_segmentation = FaceSegmentation(face_bisenet=face_bisenet, device=device) for latent_space in ["Z", "W", "W+"]: for loss_function in ["L2"]: for mask_aggregation in [ 'average', 'union', 'intersection', ]: for num_latent_dirs in [1, 2]: for part_name, face_parts in zip( [ "mouth", "skin", "eyes", "nose", "ears", "background", "eyebrows", "hair", "cloth", "eyeglass" ], [ ["mouth", "u_lip", "l_lip"], ["skin"], ["l_eye", "r_eye"], ["nose"], ["l_ear", "r_ear", "earrings"], ["background"], ["l_brow", "r_brow"], ["hair", "hat"], ["hair"], ["cloth", "neck", "necklace"], ["eyeglass"] ] ): lr = 0.001 min_alpha_value = -1.0 max_alpha_value = 1.0 min_abs_alpha_value = 0.0 gamma_correlation = 5.0 onehot_temperature = 0.001 batch_size = 4 localization_layers = list(range(1, 18)) localization_layer_weights = None log_dir = f'{exp_dir}/lelsd_stylegan2_ffhq/{latent_space}_{loss_function}_{mask_aggregation}/{num_latent_dirs}D/face_bisenet/{part_name}' lelsd = LELSD(device=device, localization_layers=localization_layers, semantic_parts=face_parts, loss_function=loss_function, localization_layer_weights=localization_layer_weights, mode='foreground', mask_aggregation=mask_aggregation, n_layers=18, latent_dim=512, num_latent_dirs=num_latent_dirs, learning_rate=lr, 
batch_size=batch_size, gamma_correlation=gamma_correlation, unit_norm=False, latent_space=latent_space, onehot_temperature=onehot_temperature, min_alpha_value=min_alpha_value, max_alpha_value=max_alpha_value, min_abs_alpha_value=min_abs_alpha_value, log_dir=log_dir, ) lelsd.fit(stylegan2_sample_generator, face_segmentation, num_batches=200 * num_latent_dirs, num_lr_halvings=3, pgbar=True, summary=True) lelsd.save() # + [markdown] tags=[] # ### StyleGAN2 LSUN Church # + device = torch.device('cuda') exp_dir = "../out" G2 = models.get_model("stylegan2", "../pretrained/stylegan2/stylegan2-church-config-f.pkl") stylegan2_sample_generator = StyleGAN2SampleGenerator(G=G2, device=device) deeplabv2_resnet101 = models.get_model("cocostuff_deeplab", "../pretrained/cocostuff_deeplab/deeplabv2_resnet101_msc-cocostuff164k-100000.pth") segmentation_model = StuffSegmentation(deeplabv2_resnet101=deeplabv2_resnet101, config_path="../pretrained/cocostuff_deeplab/", device=device) for latent_space in ["Z", "W", "W+"]: for loss_function in ["L2"]: for mask_aggregation in [ 'average', 'union', 'intersection', ]: for num_latent_dirs in [1, 2]: for part_name, sub_parts in zip( [ "church", "sky", "vegetation", "ground" ], [ ["building-other", "house"], ["sky-other", "clouds"], ["tree", "grass", "bush", "plant-other"], ["dirt", "mud", "sand", "gravel", "ground-other", "road", "pavement"], ] ): lr = 0.001 min_alpha_value = -1.0 max_alpha_value = 1.0 min_abs_alpha_value = 0.0 gamma_correlation = 5.0 onehot_temperature = 0.001 batch_size = 4 localization_layers = list(range(1, 14)) localization_layer_weights = None log_dir = f'{exp_dir}/lelsd_stylegan2_lsun_church/{latent_space}_{loss_function}_{mask_aggregation}/{num_latent_dirs}D/deeplab/{part_name}' lelsd = LELSD(device=device, localization_layers=localization_layers, semantic_parts=sub_parts, loss_function=loss_function, localization_layer_weights=localization_layer_weights, mode='foreground', mask_aggregation=mask_aggregation, n_layers=14, latent_dim=512, num_latent_dirs=num_latent_dirs, learning_rate=lr, batch_size=batch_size, gamma_correlation=gamma_correlation, unit_norm=False, latent_space=latent_space, onehot_temperature=onehot_temperature, min_alpha_value=min_alpha_value, max_alpha_value=max_alpha_value, min_abs_alpha_value=min_abs_alpha_value, log_dir=log_dir, ) lelsd.fit(stylegan2_sample_generator, segmentation_model, num_batches=200 * num_latent_dirs, num_lr_halvings=3, pgbar=True, summary=True) lelsd.save() # - # ### StyleGAN2 LSUN Car # + device = torch.device('cuda') exp_dir = "../out" G2 = models.get_model("stylegan2", "../pretrained/stylegan2/stylegan2-car-config-f.pkl") stylegan2_sample_generator = StyleGAN2SampleGenerator(G=G2, device=device) deeplabv2_resnet101 = models.get_model("cocostuff_deeplab", "../pretrained/cocostuff_deeplab/deeplabv2_resnet101_msc-cocostuff164k-100000.pth") segmentation_model = StuffSegmentation(deeplabv2_resnet101=deeplabv2_resnet101, config_path="../pretrained/cocostuff_deeplab/", device=device) for latent_space in ["W", "W+"]: for loss_function in ["L2"]: for mask_aggregation in [ 'average', ]: for num_latent_dirs in [1, 2]: for part_name, sub_parts in zip( [ "car", "road", "sky", "grass+tree", ], [ ["car", "truck", "bus", "motorcycle"], ["road", "pavement", "dirt"], ["sky-other", "clouds"], ["tree", "grass", "bush", "plant-other"], ] ): lr = 0.001 min_alpha_value = -1.0 max_alpha_value = 1.0 min_abs_alpha_value = 0.0 gamma_correlation = 5.0 onehot_temperature = 0.001 batch_size = 4 localization_layers = 
list(range(1, 16)) localization_layer_weights = None log_dir = f'{exp_dir}/lelsd_stylegan2_lsun_car/{latent_space}_{loss_function}_{mask_aggregation}/{num_latent_dirs}D/deeplab/{part_name}' lelsd = LELSD(device=device, localization_layers=localization_layers, semantic_parts=sub_parts, loss_function=loss_function, localization_layer_weights=localization_layer_weights, mode='foreground', mask_aggregation=mask_aggregation, n_layers=16, latent_dim=512, num_latent_dirs=num_latent_dirs, learning_rate=lr, batch_size=batch_size, gamma_correlation=gamma_correlation, unit_norm=False, latent_space=latent_space, onehot_temperature=onehot_temperature, min_alpha_value=min_alpha_value, max_alpha_value=max_alpha_value, min_abs_alpha_value=min_abs_alpha_value, log_dir=log_dir, ) lelsd.fit(stylegan2_sample_generator, segmentation_model, num_batches=200 * num_latent_dirs, num_lr_halvings=3, pgbar=True, summary=True) lelsd.save() # - # ### StyleGAN2 LSUN Horse # + device = torch.device('cuda') exp_dir = "../out" G2 = models.get_model("stylegan2", "../pretrained/stylegan2/stylegan2-horse-config-f.pkl") stylegan2_sample_generator = StyleGAN2SampleGenerator(G=G2, device=device) deeplabv2_resnet101 = models.get_model("cocostuff_deeplab", "../pretrained/cocostuff_deeplab/deeplabv2_resnet101_msc-cocostuff164k-100000.pth") segmentation_model = StuffSegmentation(deeplabv2_resnet101=deeplabv2_resnet101, config_path="../pretrained/cocostuff_deeplab/", device=device) for latent_space in ["W", "W+"]: for loss_function in ["L2"]: for mask_aggregation in [ 'average', ]: for num_latent_dirs in [1, 2]: for part_name, sub_parts in zip( [ "horse", "person", "sky", "grass+tree", "ground" ], [ ["horse"], ["person"], ["sky-other", "clouds"], ["tree", "grass", "bush", "plant-other"], ["dirt", "mud", "sand", "gravel", "ground-other", "road", "pavement"], ] ): lr = 0.001 min_alpha_value = -1.0 max_alpha_value = 1.0 min_abs_alpha_value = 0.0 gamma_correlation = 5.0 onehot_temperature = 0.001 batch_size = 4 localization_layers = list(range(1, 14)) localization_layer_weights = None log_dir = f'{exp_dir}/lelsd_stylegan2_lsun_horse/{latent_space}_{loss_function}_{mask_aggregation}/{num_latent_dirs}D/deeplab/{part_name}' lelsd = LELSD(device=device, localization_layers=localization_layers, semantic_parts=sub_parts, loss_function=loss_function, localization_layer_weights=localization_layer_weights, mode='foreground', mask_aggregation=mask_aggregation, n_layers=14, latent_dim=512, num_latent_dirs=num_latent_dirs, learning_rate=lr, batch_size=batch_size, gamma_correlation=gamma_correlation, unit_norm=False, latent_space=latent_space, onehot_temperature=onehot_temperature, min_alpha_value=min_alpha_value, max_alpha_value=max_alpha_value, min_abs_alpha_value=min_abs_alpha_value, log_dir=log_dir, ) lelsd.fit(stylegan2_sample_generator, segmentation_model, num_batches=200 * num_latent_dirs, num_lr_halvings=3, pgbar=True, summary=True) lelsd.save() # - # ### StyleGAN2 MetFaces # + device = torch.device('cuda') exp_dir = "../out" G2 = models.get_model("stylegan2", "../pretrained/stylegan2/metfaces.pkl") stylegan2_sample_generator = StyleGAN2SampleGenerator(G=G2, device=device) face_bisenet = models.get_model("face_bisenet", "../pretrained/face_bisenet/model.pth") face_segmentation = FaceSegmentation(face_bisenet=face_bisenet, device=device) for latent_space in ["W", "W+"]: for loss_function in ["L2"]: for mask_aggregation in [ 'average', ]: for num_latent_dirs in [1, 2]: for part_name, face_parts in zip( [ "mouth", "skin", "eyes", "nose", "ears", 
"background", "eyebrows", "hair", "cloth", ], [ ["mouth", "u_lip", "l_lip"], ["skin"], ["l_eye", "r_eye"], ["nose"], ["l_ear", "r_ear", "earrings"], ["background"], ["l_brow", "r_brow"], ["hair", "hat"], ["hair"], ["cloth", "neck", "necklace"], ] ): lr = 0.001 min_alpha_value = -1.0 max_alpha_value = 1.0 min_abs_alpha_value = 0.0 gamma_correlation = 5.0 onehot_temperature = 0.001 batch_size = 4 localization_layers = list(range(1, 18)) localization_layer_weights = None log_dir = f'{exp_dir}/lelsd_stylegan2_metfaces/{latent_space}_{loss_function}_{mask_aggregation}/{num_latent_dirs}D/face_bisenet/{part_name}' lelsd = LELSD(device=device, localization_layers=localization_layers, semantic_parts=face_parts, loss_function=loss_function, localization_layer_weights=localization_layer_weights, mode='foreground', mask_aggregation=mask_aggregation, n_layers=18, latent_dim=512, num_latent_dirs=num_latent_dirs, learning_rate=lr, batch_size=batch_size, gamma_correlation=gamma_correlation, unit_norm=False, latent_space=latent_space, onehot_temperature=onehot_temperature, min_alpha_value=min_alpha_value, max_alpha_value=max_alpha_value, min_abs_alpha_value=min_abs_alpha_value, log_dir=log_dir, ) lelsd.fit(stylegan2_sample_generator, face_segmentation, num_batches=200 * num_latent_dirs, num_lr_halvings=3, pgbar=True, summary=True) lelsd.save() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # CoastSeg Prototype: Generating Polygons along a CoastLine # - Author: # - Date: 1/13/2022 # # ## Description # This prototype is meant to show how you can generate polygons of various sizes along a coastline. The coastline in this prototype is only a geojson LineString for a small segment of the coast. Various widgets used to control the polygon generation are showcased, but none are implement with the exception of the size slider for the polygons. # # ## Dependencies # 1. ipyleaflet # 2. ipywidgets # 3. leafmap # + from ipyleaflet import DrawControl import leafmap from ipyleaflet import Map, Polygon from ipywidgets import Layout # Center the map at the location where the coastvector is (swap the lat and lng) shapes_list=[] m = leafmap.Map(center=( 36.46098029888645, -121.9725021429323), zoom=13, layout=Layout(width='100%', height='100px')) draw_control = DrawControl() draw_control.polygon = { "shapeOptions": { "fillColor": "#a45df0", "color": "#6be5c3", "fillOpacity": 0.4 }, "drawError": { "color": "#dd253b", "message": "Ops!" }, "allowIntersection": False, "transform":True } # Disable polyline, circle, and rectangle draw_control.polyline = {} draw_control.circlemarker = {} draw_control.rectangle = {} # Each time a shape is drawn it is appended to the shapeslist which is used to create the bounding box def handle_draw(target, action, geo_json): if draw_control.last_action == 'created': shapes_list.append( draw_control.last_draw['geometry']) print("\nshapes_list: ",shapes_list) draw_control.on_draw(handle_draw) m.add_control(draw_control) m # - # # Adding Widgets # ------ # ## Custom widget to show coordindates on hover # 1. The first widget is completely optional. 
It allows the user to to the coordinates of their mouse on the map # # ### Try it Out # - Hover your mouse on the map and watch how the coordinates below change # + from ipywidgets import Label from ipyleaflet import Map label = Label() display(label) def handle_interaction(**kwargs): if kwargs.get('type') == 'mousemove': label.value = str(kwargs.get('coordinates')) m.on_interaction(handle_interaction) m # - # ## More Widgets # 1. Location of super bounding box from geojson # - A textarea that is used to hold the geojson of the bounding box # 2. Size of the polygon ROI's # - The size of the polygons generated along the coast line vector can be manipulated with a slider # 3. Percent Overlap # - The percentage of overlap between polygons generated along the coast line vector can be manipulated with a slider # 4. Interval of ROI generation # - The distance between polygons that are generated along the coast line vector # + import ipywidgets as widgets from ipywidgets import Layout style = {'description_width': 'initial'} # Slider for the number of polygons to generate # Not currently implement as of 1/3/2022 interval_slider=widgets.IntSlider( value=7, min=1, max=10, step=1, description='Interval for Polygon Generation(m):', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='d', style=style, layout=Layout(width='45%', height='30px') ) # Slider for the size of the polygons to generate size_slider=widgets.FloatSlider( value=0.00005, min=0.00005, max=0.0002, step=0.00001, description='Polygon Size:', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.5f', ) # Slider for the size of the polygons to generate # Not currently implement as of 1/3/2022 overlap_percent_slider=widgets.FloatSlider( value=0.0, min=0.0, max=0.5, step=0.01, description='% Overlap:', disabled=False, continuous_update=False, orientation='horizontal', readout=True, readout_format='.2f', ) # Textarea to hold the geojson of the bounding box # Not currently implement as of 1/3/2022 bounding_box_geojson=widgets.Textarea( value="{\'type\':\'Polygon\', \'coordinates\': [[[-121.929078, 36.459534], [-121.929078, 36.463013], [-121.926211, 36.463013], [-121.926211, 36.459534], [-121.929078, 36.459534]]]}", placeholder='Type something', description='BBox GeoJson:', disabled=False, layout=Layout(width='50%', height='80px') , style=style ) display(interval_slider) display(size_slider) display(overlap_percent_slider) display(bounding_box_geojson) # Set the polygon's size to the slider's value polygon_size= size_slider.value m # - polygon_size= size_slider.value print(f"polygon_size: {polygon_size:.5f}") # ## Convert ipyleaflet Polygon points to GeoJson # 1. Ipyleaflet draws its shapes in lat,lng format and it must be converted to lng, lat for geojson. # 2. In order to correctly draw a rectangle in geojson the order of the points matters as well. 
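# Before the full `convert_to_geojson` helper below, here is a tiny illustration of both points
# (a sketch we add here; the corner values are just sample coordinates near the demo area):

# +
leaflet_corners = [(36.4630, -121.9291), (36.4630, -121.9262),
                   (36.4595, -121.9262), (36.4595, -121.9291)]  # ipyleaflet order: (lat, lng)
ring = [[lng, lat] for lat, lng in leaflet_corners]             # GeoJSON order: [lng, lat]
ring.append(ring[0])                                            # close the ring by repeating the first point
print(ring)
# -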
def convert_to_geojson(upper_right_y, upper_right_x,upper_left_y, upper_left_x,lower_left_y, lower_left_x,lower_right_y,lower_right_x): geojson_feature={} geojson_feature["type"]="Feature" geojson_feature["properties"]={} geojson_feature["geometry"]={} geojson_polygon={} geojson_polygon["type"]="Polygon" geojson_polygon["coordinates"]=[] # The coordinates(which are 1,2 arrays) are nested within a parent array nested_array=[] nested_array.append([upper_right_x, upper_right_y]) nested_array.append([upper_left_x, upper_left_y]) nested_array.append([lower_left_x, lower_left_y]) nested_array.append([lower_right_x, lower_right_y]) #GeoJson rectangles have the first point repeated again as the last point nested_array.append([upper_right_x, upper_right_y]) geojson_polygon["coordinates"].append(nested_array) geojson_feature["geometry"]=geojson_polygon # new_polygon_list.append(new_polygon) # print(geojson_feature) return geojson_feature # ## Generating Polygons Along a Vector # 1. Import the geojson, LineString, that represents the coastline vector. # 2. Using the size slider from earlier the size of each polygon is chosen # 3. The points of the each polygon are generated based on the size provided and their location along the LineString # 4. Once all the polygons are generated they are converted to geojson for future use with geopandas. # 5. To display the polygons on the map simply add each polygon as layer # + # TEMPORARY: hard coded vector (linestring) that holds the coast vector # This would be replaced by the real coast vector coast_vector_geojson={"type":"FeatureCollection","features":[{"type":"Feature","properties":{},"geometry":{"type":"LineString","coordinates":[[-121.92830801010132,36.46290061292468],[-121.92837238311768,36.46217580890998],[-121.92805051803589,36.46177888955466],[-121.92787885665892,36.46224483815569],[-121.92768573760988,36.46186517654388],[-121.92764282226564,36.461520028010895],[-121.92755699157713,36.46115762039776],[-121.92755699157713,36.46103681748364],[-121.92721366882324,36.46103681748364],[-121.92702054977417,36.46103681748364],[-121.9269347190857,36.461019559909126],[-121.92687034606932,36.46088149917467],[-121.92676305770874,36.460570861623395],[-121.92676305770874,36.46034651150687],[-121.92674160003662,36.4601911918152],[-121.92663431167601,36.460070387395476],[-121.9269347190857,36.45984603583096],[-121.92667722702028,36.45974248873611]]}}]} m.add_geojson(coast_vector_geojson, layer_name="coast line") m # Get only the coordinates from the geojson these will be used to create the polygons and geojson vector_points=coast_vector_geojson['features'][0]['geometry']['coordinates'] vector_points from ipyleaflet import Map, Polygon size=polygon_size new_polygon_list=[] geojson_polygons={"type": "FeatureCollection","features":[]} # Create a rectangle at each point on the line # Swap the x and y for each point because ipyleaflet swaps them for draw methods for point in vector_points: upper_right_x=point[0]-(size/2) upper_right_y=point[1]-(size/2) upper_left_x=point[0]+(size/2) upper_left_y=point[1]-(size/2) lower_left_x=point[0]+(size/2) lower_left_y=point[1]+(size/2) lower_right_x=point[0]-(size/2) lower_right_y=point[1]+(size/2) #NOTE: the x and y are swapped because ipyleaflet swaps the latitude and the longtitude for polygons new_polygon=Polygon( locations=[(upper_right_y, upper_right_x),(upper_left_y, upper_left_x),( lower_left_y, lower_left_x),(lower_right_y,lower_right_x)], color="pink", fill_color="pink") #Append the polygon we created to the list of 
polygons to draw onto the map new_polygon_list.append(new_polygon) #Convert each set of points to geojson (DONT swap x and y this time) geojson_polygon=convert_to_geojson(upper_right_y, upper_right_x,upper_left_y, upper_left_x,lower_left_y, lower_left_x,lower_right_y,lower_right_x) geojson_polygons["features"].append(geojson_polygon) # Draw all the polygons along the coast vector to the map for item in new_polygon_list: m.add_layer(item); display(interval_slider) display(size_slider) display(overlap_percent_slider) display(bounding_box_geojson) polygon_size= size_slider.value m.on_interaction(handle_interaction) m # - # Print the geojson of the polygons generated by the application along the coast line geojson_polygons # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Correr experimentos import math, subprocess import numpy as np import os from IPython.display import display, clear_output def correr_experimento(algoritmo, secuencia1, secuencia2): # Crear proceso para ejecutar el codigo. process = subprocess.Popen(["../cli", "-a", algoritmo, "-s", secuencia1, "-t", secuencia2], stderr=subprocess.PIPE, stdout=subprocess.PIPE, stdin=subprocess.PIPE, universal_newlines = True) exit_code = process.wait() # Verificar que el proceso no fallo. assert exit_code == 0, F"Hubo un error en la experimentacion para el algoritmo: {algoritmo} con la secuencia {secuencia1} y {secuencia2}." # Leer salida de STDERR con los tiempos de ejecucion de cada metodo. outputs = process.stderr.readline().split(); tiempo = outputs[0] celdas = outputs[1] score = outputs[2] longitud1 = outputs[3] longitud2 = outputs[4] process.stdin.close(); process.stdout.close(); process.stderr.close(); return tiempo, celdas, score, longitud1, longitud2 experimentos = []; # # Experimento NW # + genomes = [f"../sequences/genomes/{name}" for name in os.listdir("../sequences/genomes/")] for genome1 in genomes: for genome2 in genomes: if genome1 == genome2: continue for lenguaje in ["C","ASM"]: for tecnologia in ["LIN","SSE","AVX","AVX512"]: if not (lenguaje == "ASM" and tecnologia == "LIN"): algoritmo = f"NW_{lenguaje}_{tecnologia}" experimentos.append(["NW_genomes", algoritmo, genome1, genome2]) randoms = [f"../sequences/random/{name}" for name in os.listdir("../sequences/random/")] for random1 in randoms: random1_len = random1.split('_')[1] for random2 in randoms: random2_len = random2.split('_')[1] if random1_len != random2_len or random1 == random2: continue for lenguaje in ["C","ASM"]: for tecnologia in ["LIN","SSE","AVX","AVX512"]: if not (lenguaje == "ASM" and tecnologia == "LIN"): algoritmo = f"NW_{lenguaje}_{tecnologia}" experimentos.append(["NW_random", algoritmo, random1, random2]) # - # # Experimento SW # + ref_seq_dirs = [name for name in os.listdir("../sequences/reads/")] ref_seq_and_read_pairs = [] for ref_dir in ref_seq_dirs: ref_seq = f"../sequences/genomes/{ref_dir}.fasta" for name in os.listdir(f"../sequences/reads/{ref_dir}"): ref_seq_read = f"../sequences/reads/{ref_dir}/{name}" ref_seq_and_read_pairs.append((ref_seq, ref_seq_read)) for seq_pairs in ref_seq_and_read_pairs: for lenguaje in ["C","ASM"]: for tecnologia in ["LIN","SSE","AVX","AVX512"]: if not (lenguaje == "ASM" and tecnologia == "LIN"): algoritmo = f"SW_{lenguaje}_{tecnologia}" experimentos.append(["SW_reads", algoritmo, seq_pairs[0], seq_pairs[1]]) randoms = 
[f"../sequences/random/{name}" for name in os.listdir("../sequences/random/")] for random1 in randoms: random1_len = random1.split('_')[1] for random2 in randoms: random2_len = random2.split('_')[1] if random1_len != random2_len or random1 == random2: continue for lenguaje in ["C","ASM"]: for tecnologia in ["LIN","SSE","AVX","AVX512"]: if not (lenguaje == "ASM" and tecnologia == "LIN"): algoritmo = f"SW_{lenguaje}_{tecnologia}" experimentos.append(["SW_random", algoritmo, random1, random2]) # + tags=[] columnas = [ "experimento", "algoritmo", "secuencia1", "secuencia2", "longitud1", "longitud2", "tiempo", "celdas", "score" ]; filas = []; numero = 1 T = 1 # Numero de veces que se ejecuta cada experimento for experimento in experimentos: clear_output(wait=True) display('Experimento: ' + str(numero) + "/" + str(len(experimentos))) numero += 1 exp = experimento[0] algoritmo = experimento[1] secuencia1 = experimento[2] secuencia2 = experimento[3] # tiempos = [] for i in range(0, T): tiempo, celdas, score, longitud1, longitud2 = correr_experimento(algoritmo, secuencia1, secuencia2) # tiempos.append(tiempo); # tiempo_promedio = np.median(tiempos); filas.append([ exp, algoritmo, secuencia1, secuencia2, longitud1, longitud2, tiempo, celdas, score] ); # - df_resultado = pd.DataFrame(filas, columns=columnas); df_resultado.to_csv("results/resultado.csv", index=False, header=True); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] """ # National Inventory of Dams """ # + tags=[] import geopandas as gpd from pygeohydro import NID # - SAVE_KWDS = {"bbox_inches": "tight", "dpi": 300, "facecolor": "w"} CRS = "esri:102008" # First, we need to instantiate the NID class. nid = NID() # Some dam coordinates are either missing or incorrect. Let's get dams that are within Contiguous US with max storage larger than 200 acre-feet. # + world = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres")) conus_geom = world[world.iso_a3 == "USA"].geometry.iloc[0].geoms[0] dam_list = nid.get_byfilter([{"maxStorage": ["[200 5000]"]}]) conus_dams = dam_list[0][dam_list[0].is_valid] conus_dams = conus_dams[conus_dams.within(conus_geom)] conus_dams = nid.inventory_byid(conus_dams.id.to_list()) # - # Next, we can get a count of the top 10 dams based on types. dam_types = conus_dams.primaryDamTypeId.fillna(-1).astype(int) conus_dams["primaryDamTypeName"] = dam_types.apply(nid.dam_type.get) # + tags=[] types_count = conus_dams["primaryDamTypeName"].value_counts() ax = types_count.sort_values()[-10:].plot.bar(figsize=(10, 4)) for p in ax.patches: ax.annotate( p.get_height(), (p.get_x() + p.get_width() / 2, p.get_height() + 500), ha="center", va="center", ) # - # Let's compare the spatial distribution of the top five categories, excluding Earth and Other categories. 
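# The next cell ranks the types purely by count; if we also want to drop named categories such as
# "Earth" or "Other" before ranking, a pandas `drop` is one way to do it (an illustrative sketch we
# add here; the labels must match the NID type names exactly):

# +
top5_excluding = types_count.drop(labels=["Earth", "Other"], errors="ignore").index[:5]
print(top5_excluding)
# -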
# + tags=["nbsphinx-thumbnail"] conus = gpd.GeoSeries([conus_geom], crs=world.crs).to_crs(CRS) ax = conus.plot(figsize=(10, 6), facecolor="none", edgecolor="k") top_5types = types_count.index[:5] marker = dict(zip(top_5types, ["o", "^", "*", "X", "d"])) color = dict(zip(top_5types, ["r", "b", "g", "k", "c"])) conus_dams = conus_dams.to_crs(CRS) for c in top_5types: conus_dams[conus_dams.primaryDamTypeName == c].plot( ax=ax, alpha=0.6, markersize=10, marker=marker[c], facecolor="none", color=color[c], label=c, ) ax.legend() ax.axis(False) ax.set_title("Dams within CONUS with max sotrage > 200 acre-feet") ax.figure.set_dpi(100) ax.figure.savefig("_static/dams.png", **SAVE_KWDS) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Importing libraries import pandas as pd import numpy as np from sklearn.metrics.pairwise import cosine_similarity from sklearn.feature_extraction.text import CountVectorizer # # Importing data data=pd.read_csv("D:\\Workshops\\Machine Learning for Data Science & Artifcial Intelligence With Python\\data\\movie_data.CSV") data.head() # # Data wrangling data.isnull().sum() data.dropna(inplace=True) data=data[~data["movie_title"].duplicated()] data.reset_index(inplace=True) data.head() data.drop("index",axis=1,inplace=True) data.head() data.isnull().sum() data.duplicated().sum() # # Needful columns df=data[["movie_title","director_name","actor_1_name","actor_2_name","genres","country","language"]] df.head() # # Combined features " ".join(list(df.iloc[0].values)) # + features=[] for i in range(df.shape[0]): features.append(" ".join(list(df.iloc[i].values))) # - df["features"]=features # # Fetaure count matrix cvec=CountVectorizer() cv_df=cvec.fit_transform(df["features"]) cv_df.toarray() cv_df.shape df.shape # # Cosine similarity cs=cosine_similarity(cv_df) cs cs.shape # # Recommendations title="Avatar" movie_index=df[[title in name for name in df["movie_title"]]].index[0] movie_index scores=list(enumerate(cs[movie_index])) scores sorted_scores=sorted(scores,key=lambda x:x[1],reverse=True) sorted_scores top_10=[h[0] for h in sorted_scores[1:11]] top_10 df.iloc[top_10]["movie_title"] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The Best Learning Rate # In this notebook, we are interested by the best learning rate (named **epsilon**). The following program will reach it by testing different epsilon values, and by getting their winning rate. First, let import the **AI_Tictactoe.py** program : # %pylab inline from AI_Tictactoe import * # To find the best learning rate, the program will train different AI with different learning rates. The different learning rates are contained in **listEpsilon**, and the learning process is in the **training()** function. 
def training(nbTraining, epsilon): pp1 = Player('X', isIntelligent = False, learningRate = epsilon) pp2 = Player('O', isIntelligent = False, learningRate = epsilon) pp1.experience = {} pp2.experience = {} #Force the experience to be null for i in range(nbTraining) : PlayTicTacToe(pp1,pp2, aiTraining = True) pp1.experience.update(pp2.experience) aiExperience = copy.deepcopy(pp1.experience) #pp1.Display_Experience() return aiExperience listEpsilon = np.linspace(0,1,21) print(listEpsilon) # Then, we train different AI with the different learning rates. (**Warning** : this process can be long, in the current parametrization, the function PlayTicTacToe() is called **231 000** (= (10 000 + 1 000) * 21) # + listWinningRate = [] experience = {} for epsilon in listEpsilon : print('For epsilon = {}. '.format(epsilon), end='') experience = training(10000, epsilon) listWinningRate.append(WinningRate(experience)) experience = {} # - # Finaly, we are looking for the best learning rate, by using **argmax()** function. # + plot(listEpsilon,listWinningRate) bestLearningRate = listEpsilon[argmax(listWinningRate)] print('The argmax is :', bestLearningRate) print('It is the best learning rate we had for 10 000 traning games') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Deploy model with SKIL from skil import Skil, WorkSpace, Experiment, Model, Deployment from keras.datasets import mnist # ## SKIL server # # Central class for managing connections with the SKIL server. # # **Arguments**: # * **workspace_server_id**: None by default, only specify if you want to connect to a non-default SKIL workspace server. # * **host**: string, Host on which the SKIL server runs. # * **port**: integer, Port on which the SKIL host runs. # * **debug**: boolean, set to false for more verbose logging. # * **user_id**: user name for your SKIL server connection. # * **password**: password of the provided SKIL user. skil_server = Skil( host='localhost', port=9008, user_id='admin', password='' ) # ## WorkSpace # # Workspaces are a collection of features that enable different tasks such as conducting experiments, training models, and test different dataset transforms. # # Workspaces are distinct from Deployments by operating as a space for non-production work. # # **Arguments**: # * **skil**: Skil server instance # * **name**: string. Name for the workspace. # * **labels**: string. Labels associated with the workspace, useful for searching (comma seperated). # * **verbose**: boolean. If True, api response will be printed. # * **create**: boolean. Internal. Do not use. # work_space = WorkSpace( skil =skil_server, name ='mnist-workspace', labels ='keras,mnist' ) # ## Experiments # # Experiments in SKIL are useful for defining different model configurations, # encapsulating training of models, and carrying out different data cleaning tasks. # # Experiments have a one-to-one relationship with Notebooks and have their own # storage mechanism for saving different model configurations when seeking a best # candidate. # # **Arguments**: # * **work_space**: `WorkSpace` instance. If `None` a workspace will be created. # * **experiment_id**: string. Unique id for workspace. If `None`, a unique id will be generated. # * **name**: string. Name for the experiment. # * **description**: string. Description for the experiment. # * **verbose**: boolean. 
If `True`, api response will be printed. experiment = Experiment( work_space=work_space, experiment_id='mnist-experiment-01', name='mnist-experiment', description='keras mnist experiment', verbose=False ) # ## Model # SKIL wrapper for DL4J, Keras, TensorFlow and other models # # SKIL has a robust model storage, serving, and import system for supporting major # deep learning libraries. # # SKIL can be used for end-to-end training, configuration, and deployment of models # or alternatively you can import models into SKIL. # # **Arguments** # * **model**: Model file path or model instance # * **model_id**: string. Unique id for model. If `None`, a unique id will be generated. # * **name**: string. Name for the model. # * **version**: integer. Version of the model. Defaults to 1. # * **experiment**: `Experiment` instance. If `None`, an `Experiment` object will be created internally. # * **labels**: string. Labels for this model # * **verbose**: boolean. If `True`, prints api response. # * **create**: boolean. Internal. Do not use. model = Model( model='model.h5', model_id='mnist-model-01', name='mnist-model', experiment=experiment ) # ## Deployment # # Deployments operate independently of workspaces to ensure that there are # no accidental interruptions or mistakes in a production environment. # # **Arguments:** # * **skil**: `Skil` server instance. # * **name**: string. Name for the deployment. deployment = Deployment( skil=skil_server, name='mnist-deployment' ) # ### Deploys the model # # **Arguments:** # * deployment: `Deployment` instance. # * start_server: boolean. If `True`, the service is immedietely started. # * scale: integer. Scale-out for deployment. # * input_names: list of strings. Input variable names of the model. # * output_names: list of strings. Output variable names of the model. # * verbose: boolean. If `True`, api response will be printed. model.deploy( deployment=deployment, start_server=True, scale=1, input_names=None, output_names=None ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from IPython.display import display, HTML from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # %load_ext autoreload # %autoreload 2 # # Loader # This notebook contains parsers. It's entire goal is to load, massage, and normalize the input data for use by the active models. # Consumption and weather data 2014-2018 # Parse and make BAU.pickle. import loadem wx = loadem.loadem() loadem.makePickles(wx) # Irradiance and weather data 1998-2014 # Parse and make various pickles. # The important one is "irradiance.pickle", with only important columns 2012-2014 import loadirr irr = loadirr.loadirr() loadirr.makePickles(irr) import loadirr irr = loadirr.readPickle() import loadem import pandas as pd bau = loadem.readPickle() bau # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The Basic OLG model # The OLG model is one of two main analytical frameworks for analyzing the basic intertemporal choice, consumption vs saving. 
# # OLG is set to capture potential interaction of different generations of individuals in the marketplace hence, providing a tractable alternative to the infinite-horizon economic models # Imports and set magics: # + import numpy as np from scipy import optimize import sympy as sm from sympy import lambdify import matplotlib.pyplot as plt import math as m from scipy import stats as st from scipy.optimize import fsolve import time #As we are using a lot of theory in this project, we want the equations to be presented well, why we import the init_session from sympy import init_session init_session() # autoreload modules when code is run # %load_ext autoreload # %autoreload 2 # local modules import modelproject # + [markdown] toc-hr-collapsed=true # # Model description # - # In this model, we consider a static economy with an infinite number og households, $i \in N$ # In addition, population grows at a constant rate, # # $L_{t}=L_{t-1}(1+n)$ # # # Time, t, is discrete and runs to infinity and individuals born at time t live for the period $t$ til $t+1$, meaning the individuals are grouped in two generations, young and old. # # We assume a general, separable utility function for the individuals given by # # $U_t=u(c_{1t})+u(c_{2t+1})(1/(1+\beta))$ # # $\beta$ is added as a discount factor of the time-horizon. # # Production side is given by competitive firms and a CRS aggregate production function meaning we have the following production criterias # $Y_{t}=F(K_{t},L_{t})$ # # $r_{t}=f'(k_{t})$ # # $w_{t} = f(k_{t})-k_{t}f'(k_{t})$ # # $\bullet$ NOTE: $k_{t}=K_{t}/L_{t}$ # # The savings by an individual of a generation is determined as a solution to # # $max U_{t}$ s.t. # # $w_{t}=s_{t}+c_{1t}$ # # $c_{2t+1}=s_{t}(1+r_{t+1})$ # # Where $c_{1t}$ is consumption of the individual born at t when young at date t and $c_{2t+1}$ consumption when old at date t+1. In this model, the setup is built such that the old individuals will rent their savings of time t as capital to firms at t+1. Therefore they will eventually receive a gross rate of return given by $(1+r_{t})$. The second consumption constraint incorporates the fact that the individual will consume all his life-earnings. Hence, the lifetime constraint is given by # # $w_{t}=c_{1+t}+c_{2t+1}/(1+r_{t+1})$ # # # # Solution of the household problem # # The solution is found by setting up the Lagrangian # # $L(c_{1t},c_{2t+1},\lambda)= u(c_{1t})+u(c_{2t+1})(1/1+\beta)+\lambda[w_{t}-c_{1t}-(c_{2t+1}/(1+r_{t+1}))]$ # # FOCs: # # $u'(c_{1t})=\lambda$ # # $(1/(1+\beta))u'(c_{t2+1})= \lambda/(1+r_{t+1})$ # # combining the above to find the Euler equation: # # $(1+r_{t+1})/(1+\beta)u'(c_{2t+1}) = u'(c_{1t})$ # # The interpretation of Euler is that if the consumer gives up one unit of consumption in the young state, the marginal cost will be given by $u'(c_{1t})$ and the marginal benefit will be $(1+r_{t+1})$ units of consumption in next period, when old. # # Next, we want to find the **optimal savings path**: # # This is done by substituting budget constrain into the Euler eq. above. Hence, we find # # $(1+r_{t+1})/(1+\beta)u'(c_{2t+1}) = u'(c_{1t})$ $\rightarrow$ # # $(1+r_{t+1})/(1+\beta)u'((1+r_{t+1})s_{t})=u'(w_{t} - s_{t})$ # # where optimal savings are a function of wage and the interest rate. 
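# As a concrete special case (added here for illustration; the symbolic derivation below reaches
# the same point): with log utility, $u(c)=\ln c$, the Euler equation reads
#
# $\frac{1}{c_{1t}} = \frac{1+r_{t+1}}{1+\beta}\frac{1}{c_{2t+1}}$
#
# and substituting the budget constraints $c_{1t}=w_{t}-s_{t}$ and $c_{2t+1}=(1+r_{t+1})s_{t}$ gives
#
# $\frac{1}{w_{t}-s_{t}} = \frac{1}{(1+\beta)s_{t}} \quad\Rightarrow\quad s_{t}=\frac{w_{t}}{2+\beta}$
#
# so with log utility the interest rate drops out and the household saves a constant fraction of its wage.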
# Knowing the decision of individuals, we can now examine the **Law of Motion** by aggregating the economy as follows: # # As we are working with two periods, we know that the aggregate stock of capital at $t+1$ must equal the total savings of the economy at the time $t$ as well as the non-depreciated capital stock which is carried from $t$ and the 'dissavings' of same period. # # This means, the aggregate capital level at time $t+1$ is given by the following: # # $K_{t+1}=S_{t}+(1-\delta)K_{t}-K_{t}$ $\rightarrow$ # # $K_{t+1}=S_{t}-\delta K_{t}$ # # Remember, $k_{t} \equiv K_{t}/L_{t}$. Hence, # # $k_{t+1}(1+n) = s_{t}-\delta k_{t}$ # # In short, inserting the findings above into the capital accumulation and for simplicity setting $\delta =0$ we now have the law of motion per worker in the entire economy defined by # # $k_{t+1} (1+n)=s(f(k_{t})-f'(k_{t})k_{t}, f'(k_{t+1}))$ # # # {note: the intuition above is found at http://web.econ.ku.dk/okocg/MAT-OEK/Mak%C3%98k2/Mak%C3%98k2-2016/Forel%C3%A6sninger/Ch%203-2016-1.pdf as well as in our lecture slides from the Macroeconomics course at Copenhagen University} # # Now, we wish to solve the model using **sympy** # + #First, defining the setting, here the variables we want to use #Households definitions beta = sm.symbols('beta') c1 = sm.symbols('c_{1t}') c2 = sm.symbols('c_{2t+1}') lmda = sm.symbols('lambda') #Savings definitions r = sm.symbols('r_{t+1}') w = sm.symbols('w_t') #Capital stock definitions s = sm.symbols('s_t') kt = sm.symbols('k_t') k1 = sm.symbols('k_{t+1}') delta = sm.symbols('delta') n = sm.symbols('n') # - # Now, we want to define the utility function. In this case, we use the log utility # + #We start by setting a standard utility, then we specify the form afterwards u_std = sm.Function('u') def log_u(c): return sm.log(c) def U(c1, c2, beta, u): return u(c1) + 1 / (1 + beta) * u(c2) #Print U(c1, c2, beta, u = u_std) # - # Now, the budget constraints are set up to calculate the intertemporal consumption constraint # + #Defining budget constraints c_1budget = sm.Eq(w, c1 + s) c_2budget = sm.Eq(c2, (1+r)*s) #Printing the budget constraints c_1budget, c_2budget # - # Next, we define the intertemporal consumption constraints #Intertemporal consumption constraint defined ib = c_1budget.subs(s, sm.solve(c_2budget, s)[0]) #Print intertemporal budget constraint ib # Setting up the Lagrangian, deriving FOCs # + #Lagrangian l = U(c1,c2,beta, u = u_std) - lmda * (ib.rhs - w) #Print Lagrangian l #Deriving FOCs dc1 = sm.diff(l, c1) dc2 = sm.diff(l, c2) dlmda = sm.diff(l, lmda) #Print FOCs dc1, dc2, dlmda # - # Now we find the **Euler Equation** # + #Defining the Euler Equation def euler_equation(dc1, dc2, u, c1, lmda): x = dc2.subs(lmda, sm.solve(dc1, lmda)[0]) euler_eq = sm.Eq(sm.Derivative(u(c1)), sm.solve(x, sm.Derivative(u(c1)))[0] ) return euler_eq #Calling make_euler_equation to calculate the Euler eq euler = euler_equation(dc1, dc2, u = u_std, c1 = c1, lmda = lmda) #Printing result euler euler = euler.subs(c1, w - s).subs(c2, c_2budget.rhs) #Printing result euler # - # Euler implicitly determines the value of $s_{t}$. # Now we want to determine the development in capital. #First we set the functional form of $u_{t}$ using the $log$ function as this enables for further examination. euler = euler.replace(u_std, log_u).doit() #Print euler # Now we define the values and setting for the **firms** in the economy. 
# To do this, we explore the aggregate production function, normalizing this by using the per capita function as this enables us to better calculate expressions for **interest rates** and **wages** in equilibrium. # + #Production function Y = sm.symbols('Y_t') K = sm.symbols('K_t') L = sm.symbols('L_t') #including y_t and alpha y = sm.symbols('y_t') alpha = sm.symbols('alpha') #deriving the production function by using the values above prod_f = sm.Eq(Y,K**alpha * L**(1-alpha)) #Normalizing normprod_f = sm.Eq(y, kt**alpha) #Print normprod_f # + ir = sm.Eq(r, sm.Derivative(normprod_f.rhs, kt)).doit() w_t = sm.Eq(w, normprod_f.rhs - kt*sm.Derivative(normprod_f.rhs, kt)).doit() #Print results of interest rate and wage ir, w_t # - # Solving for capital period t+1(capital evolution path) k_1 = sm.Eq(k1, 1/(1+n)* (s - delta*kt) ) #Print k_1 # We now derive the **transition path** as follows # + k_sav = sm.solve(euler, s)[0].subs(w, w_t.rhs) # Substituting if r in k_sav.atoms(): k_sav = k_sav.subs(r, ir.rhs) t_path = k_1.subs(s, k_sav) #Print t_path # - # To make a visualization of the transition path, we now specify a number of parameters and define a function of the transition path # + #Defining the transition path transition_curve = sm.lambdify((kt, beta, delta, n, alpha,r ), t_path.rhs) def transition_c(kt, beta, delta,alpha, n, r = 0): return transition_curve(kt, beta, delta, alpha, n, r) #Defining equilibrium eq = sm.lambdify((beta, delta, alpha, n),sm.solve(t_path.subs(k1, kt), kt)[0]) def eqm(beta, delta,alpha, n,): return eq(beta, delta, alpha, n) # - # Letting alpha vary to show how this affects the transition path, plotting the graph #Defining variables ex = np.linspace(0,3,1000) _b = 0.05 _d = 0.02 _n = 0.2 #Plotting figure plt.figure(figsize=(14,7)) #Plot 45 degree line plt.plot(ex, ex, color = 'blue') #Defining moving function for _a in np.linspace(0.01, 0.2,5): sol = [transition_c(kt = x, alpha = _a, beta = _b, delta = _d, n = _n) for x in ex] ks = eqm(alpha = _a, beta = _b, n = _n, delta = _d) plt.plot(ex, sol, color = 'red', alpha = 1) plt.annotate(f'$\\alpha$={round(_a,2)}', xy= (ks + 0.01, ks - 0.01)) #Standard settings plt.xlabel('$k_t$', size =15) plt.ylabel('$k_{t+1}$', size=15) plt.title('Transition curves\n', size = 20) plt.xlim(0,0.4) plt.ylim(0,0.4) # For a lower alpha, the higher is the steady state. The reason behind this, that a higher alpha the more productive is the capital and therefor a higher demand for capital per worker. This means that economy reach the steady state faster. A plausible value would be 1/3 on capital and 1-1/3 n labour. As we can see from our law of motion equation then a higher n (higher population) reduce the capital accumulation and thereby have a lower steady state. 
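# Using the same assumptions as above (log utility, Cobb-Douglas production and the law of motion
# $k_{t+1}(1+n)=s_{t}-\delta k_{t}$, with $s_{t}=w_{t}/(2+\beta)$ from the household problem and
# $w_{t}=(1-\alpha)k_{t}^{\alpha}$ from the firm side), the steady state satisfies
#
# $k^{*}(1+n+\delta)=\frac{(1-\alpha)(k^{*})^{\alpha}}{2+\beta}
# \quad\Rightarrow\quad
# k^{*}=\left[\frac{1-\alpha}{(2+\beta)(1+n+\delta)}\right]^{\frac{1}{1-\alpha}}$
#
# This closed form is our own rearrangement, added only as a sanity check on the figure above.

# +
k_star = lambda alpha, beta, delta, n: ((1 - alpha) / ((2 + beta) * (1 + n + delta))) ** (1 / (1 - alpha))
# Evaluate at the largest alpha plotted above; the value can be compared by eye with where the
# corresponding transition curve crosses the 45-degree line.
print(k_star(alpha=0.2, beta=_b, delta=_d, n=_n))
# -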
# To further characterize the equilibrium paths, we construct the transition diagram in the plane $(k_{t},k_{t+1})$ examining the convergence towards **steady state** # + #Setting variable values _r = 0.05 _d = 0.05 _a = 0.5 #Defining k steady state ksteady = 0.0002 ex = range(8) ex2 = np.linspace(0,1,500) #Transition out = list() for _ in ex: ksteady = transition_c(kt = ksteady, alpha = _a, beta = _b, delta = _d, n = _n) out.append(ksteady) # - # Now we plot the figure in order to illustrate the convergence towards steady state # # res = [transition_c(kt = x, alpha = _a, beta = _b, delta = _d, n = _n) for x in ex2] #Plotting figure plt.figure(figsize=(14,7)) #Plot 45 degree line plt.plot(ex, ex, color = 'blue') #Plotting the steady state convergence line plt.plot(ex2, res, color = 'red', alpha = 1) plt.step(out[:-1], out[1:], where = 'post', color = 'green', linestyle = '--', alpha = 0.8) plt.scatter(out[:-1], out[1:], color = 'red', alpha = 0.8) #Standard settings plt.xlabel('$k_t$', size = 15) plt.ylabel('$k_{t+1}$', size = 15) plt.xlim(0,0.2) plt.ylim(0,0.2) plt.title('Convergence towards steady state\n', size = 20) # Outside the steady-state $k_{t}$ does not grow at a constant rate. Hence, the economy will in time approach the balanced growth path as seen on the graph. # # # # Extended OLG model # Now we add a government to our model to solve an example of the model using random choosen values. The government collects taxes on both capital and labour. Taxes are given by $\tau$. # # We now consider the household problem # # $max$ $u(c_{2t+1})+\beta u(c_{1t})$ # # s.t. # # $c_{2t+1}+s=(1-\tau)w+t_{2t+1}$ # # $c_{1t}=R+t_{1t}$ # # where $R$ is the after-tax gross interest rate which is given by # # $R=(1+(1-\tau)r)$ # # Instead of having to substitute and solve for an equation of one unknown, as we did in the beginning, we now want to use the **Gauss-Seidel** method for solving the problem. # # This mdel is an iterative technique for solving a square system of $n$ linear equations with an unknown vaiable, $x$: # # $Ax = b$ # # Which is defined by the iteration: # # $Lx^{k+1} =b-Ux^{k}$ # # Where the variable $x^{k}$ defines the $k^{th}$ approximation of x as well as $x^{k+1}$ is the $k+1$ iteration of x. The matrix, $A$, is decomposed into the component $L$ and a strictly upper triangular component denoted U, meaning $A = L+U$. # # The Gauss-Seidel method will solve the lhs of the expression of # $Lx^{k+1} =b-Ux^{k}$ which can be written as: # # $x^{k+1} =L^{-1} (b-Ux^{k})$ # # However, as L is of triangular form, the elements of the variable $x^{k+1}$ can be computed sequentially using the forward substitution given by: # # $x_i^{k+1} = 1/a_{ii}(b_{i}- \sum_{j=1}^{i-1} a_{ij}x_{j}^{k+1} - \sum_{j=i+1}^{n} a_{ij}x_{j}^{k} ), i=1,...,n$ # # Using this method, we start with an initial guess of the kapital in time 1t, denoted $K_{guess}$. # Next, we solve for the parameters $q, w and R$ # # $q = \alpha AK^{\alpha-1}_{1t} L^{1-a}$ # # $w=(1-\alpha)K^{\alpha}_{1t} L^{1-\alpha}$ # # $R=1+(1-\tau)(q-\delta)$ # # We then solve for $s*$ which is the optimal savings of the household, given by # # $s* =N_{2t+1} ((\beta R((1-\tau)w+t_{2t+1})-t_{1t})/((1+\beta)R))$ # # Next, we aggregate over all households of the economy to find $K_{new}$ # # $K_{new}=N_{2t+1} \bullet s*$ # # Lastly, we calculate errors and update capital as follows # # $K_{guess} = \lambda K_{new} +(1-\lambda)K_{guess}$ where $0< \lambda <1$ is an updating parameter. 
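# To make the textbook iteration above concrete, here is a minimal NumPy sketch of Gauss-Seidel for
# a small linear system (our own illustration; the household model itself is solved with the
# fixed-point loop further below):

# +
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Sweep through the rows, reusing entries already updated in the current sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # sum over already-updated entries (j < i) and not-yet-updated entries (j > i)
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # diagonally dominant, so the sweeps converge
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))
print(np.linalg.solve(A, b))      # reference solution
# -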
# # {note: the intuition behind this model is found at https://www.sciencedirect.com/topics/engineering/gauss-seidel-method} # + #Set parameters A_t = 1 alpha = 0.33 beta = 0.8 delta = 0.0 L_t = 1 N_2 = 1.0 N_1 = 1.0 tau_L = 0.22 tau_K = 0.16 t_2 = 0.0 t_1 = 0.0 # #Initial guess of capital error = 100 guess_lmda = 0.4 K_guess = 0.4 iter = 1 #Iterating to find the values while (iter<300) or (error>0.001): # Now we want to solve for q and w by q = alpha*A_t*K_guess**(alpha-1) w = (1-alpha)*A_t*K_guess**alpha R = 1 + (1-tau_K)*(q - delta) K_new = N_2* (beta*R*((1-tau_L)*w + t_2) - t_1)/((1+beta)*R) # Calculate discrepancy between old and new capital stock error = abs(K_guess-K_new)/K_guess # Update capital stock K_guess = guess_lmda*K_new + (1-guess_lmda)*K_guess iter = iter +1 #Results Ks = K_new qs = q Rs = R rs = qs - delta ws = w Ys = A_t*Ks**alpha*L_t**(1-alpha) # - # Now we want to find solutions for the entire economy # + #Household optimal consumption per period ss = Ks/N_2 c2s= (1-tau_L)*ws + t_2 - ss c1s= Rs*ss + t_1 # Residual consumption of the government Gs = N_2*tau_L*ws + N_1*tau_K*rs*ss # Finally, aggregate the consumption of the household Cs = N_2*c2s + N_1*c1s # To ensure, the above holds, we run a fast check on the condition of the goods market and resoure constraint ARC = Ys - delta*Ks - Cs - Gs print("The results using the Gauss-Seidel method are as follows") print("K* = {:6.4f}".format(Ks)) print("q* = {:6.4f}".format(qs)) print("r* = {:6.4f}".format(rs)) print("R* = {:6.4f}".format(Rs)) print("w* = {:6.4f}".format(ws)) print("Y* = {:6.4f}".format(Ys)) print("-------------------------") print("ARC = {:6.4f}".format(ARC)) print("Number of iterations = " +str(iter)) print("-------------------------") print("Optimal consumption of the household is:") print("s* = ", ss) print("c1* = ", c1s) print("c2* = ", c2s) print("Whereas the residual consumption of the government is:") print("G* = ", Gs) print("And the aggregate consumption of the household is:") print("C* = ", Cs) # - # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Speed without Mini Part 2 run modules/venv_setup.ipynb run modules/module.ipynb run modules/model.ipynb run modules/data_loader.ipynb # + params, layers = new_test(X_train, grad_check = False) v, s = adam_init(params) learning_rate=0.1 i=0 t=0 interval = 1180 #20 epochs = 1180 iterations decay_rate = 0.01 accuracy = 0 start = time.time() all_accs = [] while accuracy <= 0.988036976617727: if i==1000: break print(i, accuracy) AL, cache, cost = forward_prop_fix(X_train, y_train, params, lambda_value =0.1) grads = back_prop_fix(AL, X_train, y_train, layers, cache, lambda_value=0.1) t+=1 params = update_params_adam(params, grads, v, s, t, learning_rate=learning_rate, beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8) _, accuracy, _ = predict(X_test, y_test, params, lambda_value=0.1, threshold=0.5) if i % interval==0: new_lr = update_lr_time(learning_rate, i, decay_rate, interval) learning_rate = new_lr i+=1 all_accs.append(accuracy) end = time.time() duration = (end-start)/60 with open('saved_data/compare/compare_no_mini_part2', 'wb') as f: pickle.dump([all_accs, duration], f) # - all_accs[np.argmax(all_accs)], np.argmax(all_accs) duration # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 # language: python # name: python3 # --- # + tags=["remove_cell"] from datascience import * import matplotlib.pyplot as plt # %matplotlib inline import numpy as np import pandas as pd from utils import * plt.style.use('seaborn-muted') from matplotlib import patches import csaps import warnings warnings.filterwarnings("ignore") # - # # An Empirical Example from EEP 147 # # Let's take a look at an empirical example of production. The dataset for this section comes from EEP 147: Regulation of Energy and the Environment. ESG_table = Table.read_table('ESGPorfolios_forcsv.csv').select( "Group", "Group_num", "UNIT NAME", "Capacity_MW", "Total_Var_Cost_USDperMWH" ).sort("Total_Var_Cost_USDperMWH", descending = False).relabel(4, "Average Variable Cost") ESG_table # This table shows some electricity generation plants in California and their costs. The `Capacity` is the output the firm is capable of producing. The `Average Variable Cost` shows the minimum variable cost per megawatt (MW) produced. At a price below AVC, the firm supplies nothing. At a price above the AVC, the firm can supply up to its capacity. Being a profit-maximising firm, it will try to supply its full capacity. # # First, lets look at just the Big Coal producers and understand this firm's particular behavior. selection = 'Big Coal' Group = ESG_table.where("Group", are.equal_to(selection)) Group # + tags=["remove_input"] # Make the plot plt.figure(figsize=(9,6)) plt.bar(new_x_group, height_group, width=width_group, edgecolor = "black") # Add title and axis names plt.title(selection) plt.xlabel('Capacity_MW') plt.ylabel('Variable Cost/Price') plt.show() # - # We have created the Big Coal supply curve. It shows the price of electricity, and the quantity supplied at those prices, which depends on variable cost. For example, at any variable cost equal to or above 36.5, the producer `FOUR CORNERS` (the one with the lowest production costs) will supply, and so on. Notably, we observe that the supply curve is also upward sloping since we need higher prices to entice producers with higher variasble costs to produce. # + tags=["remove_input"] group_plot(30) # + tags=["remove_input"] group_plot(37) # + tags=["remove_input"] group_plot(50) # - # Now we will look at all the energy sources. They have been colored according to source for reference. # + tags=["remove_input"] ESG_plot(30) # + tags=["remove_input"] ESG_plot(50) # - # Look at the thin bars concentrated on the right end of the plot. These are plants with small capacities and high variable costs. Conversely, plants with larger capacities tend to have lower variable costs. Why might this be the case? Electricity production typically benefits from economies of scale: it is cheaper per unit when producing more units. Perhaps the high fixed cost required for electricity production, such as for equipment and land, is the reason behind this phenomenon. # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="B9PKJggkxfMo" # ## Preparation # + [markdown] id="EBGxhcG7xZzr" # Welcome to the Vectice tutorial notebook! # # # Through this notebook, we will be illustrating how to log the following information into Vectice using the Vectice SDK: # - Dataset versions # - Model versions # - Runs # # For more information on the tutorial, please refer to the "Vectice Tutorial Page" inside the app. 
# # # + [markdown] id="3McxMcLPxkdJ" # ### Install Vectice and GCS packages # + colab={"base_uri": "https://localhost:8080/"} id="aXrYRKNiIlOn" outputId="1d81b635-be99-4f15-d056-3608c2d1142a" ## Requirements # !pip3 install --q fsspec # !pip3 install --q gcsfs #Install Vectice Python library # In this tutorial we will do code versioning using github, we also support gitlab # and bitbucket: !pip install -q "vectice[github, gitlab, bitbucket]" # !pip3 install --q vectice[github] # + colab={"base_uri": "https://localhost:8080/"} id="hHkVwYekVYxU" outputId="32d114ad-a22a-411e-cd40-fa99b5d37d20" #Verify if Vectice SDK was installed # !pip3 show vectice # + [markdown] id="8a-NGpCnxo0A" # ### Retrieve the data from GCS # # We are going to load data stored in Google Cloud Storage, that is provided by Vectice for this tutorial. # # + colab={"base_uri": "https://localhost:8080/", "height": 223} id="X44K8EaHJBAu" outputId="fb40eaa4-3e26-43af-edf3-54dd2b758f7f" import os import numpy as np import pandas as pd from matplotlib import pyplot as plt import seaborn as sns # %matplotlib inline # Double check the json file name below so that it matches the name of the file that you uploaded. # Note that the key provided for this tutorial does not have permissions for you to write to GCS. # You can only use it to read the data. os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'readerKey.json' # Once your file is loaded you can view your dataset in a Pandas dataframe. df = pd.read_csv('gs://vectice_tutorial/kc_house_data_cleaned.csv') # Run head to make sure the data was loaded properly df.head() # + [markdown] id="c0k4Xz-sxtgZ" # # We will use the scikit-learn library for modeling and tracking experiments directly through the Vectice SDK. # + colab={"base_uri": "https://localhost:8080/"} id="KsXBhrmeJ0sZ" outputId="eee29d29-b8df-48ee-a524-60b6d67506be" #Import the Vectice library from vectice import Vectice from vectice.models import JobType from vectice.entity.model import ModelType # Specify the API endpoint for Vectice. This shouldn't change. os.environ['VECTICE_API_ENDPOINT']= "beta.vectice.com" # To use the Vectice SDK, you first need to authenticate your account using an API key. # You can generate an API key from the Vectice UI, by going to the "API Tokens" tab in your workspace # Copy and paste your API key here os.environ['VECTICE_API_TOKEN'] = "API TOKEN" # Next, you need to specify the tutorial project, where you will log the assets that will be generated in this notebook. # For this, you need a "Project Token", that you will find under the "Settings" tab of your project # Copy and paste your Project Token here vectice = Vectice(project_token="Project Token") print(vectice) # + [markdown] id="26ChSoJtxyiu" # ### Split dataset into training and testing # # Let's split the dataset into train and test sets and save them in GCS. # (The GCS code has been commented out as the data has already been generated.) # # # - # Testing the model on the same data as it was trained on will lead to an overfit and poor performance in real-life scenarios. # In order to avoid that, we split our data into 2 pieces: train set and test set. The most common practice is to do a 80-20 split. 
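# + [markdown]
# As a quick, self-contained illustration of what an 80/20 split does, here is a tiny toy
# frame. The names `toy`, `toy_train` and `toy_test` are purely illustrative and independent
# of the housing data used in the rest of this notebook.

# +
import pandas as pd
from sklearn.model_selection import train_test_split

toy = pd.DataFrame({"sqft_living": range(10), "price": range(10, 20)})
toy_train, toy_test = train_test_split(toy, test_size=0.2, random_state=42)
print(len(toy_train), len(toy_test))  # 8 rows for training, 2 rows for testing
# -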
# + id="G4Wu5npWJ5U1" import string from math import sqrt # Load scikit-learn packages from sklearn.model_selection import train_test_split # Model Selection from sklearn.metrics import mean_absolute_error, mean_squared_error # Model Evaluation from sklearn.linear_model import LinearRegression # Linear Regression from sklearn.tree import DecisionTreeRegressor, plot_tree # Decision Tree Regression from sklearn.ensemble import RandomForestRegressor # Random Forest Regression # + colab={"base_uri": "https://localhost:8080/"} id="JU50t9fHKALv" outputId="effffc0a-21f0-4dbd-8c9f-62e2039bf1d2" # The Vectice SDK automatically detects if there have been changes to the dataset you are using. # If it detects changes, it will generate a new version of your dataset automatically. # For this tutorial, we changed the data since Albert left the company. # So, the SDK will create a new dataset version the first time you execute this code. input_ds_version = vectice.create_dataset_version().with_parent_name("cleaned_kc_house_data") # With this line we declare a reference to code existing in GitHub as the code at the origin of the outputs input_code = Vectice.create_code_version_with_github_uri("https://github.com/vectice/vectice-examples", "Notebooks/Vanilla/Tutorial/Tutorial_Modelling_All%20-%20Jupyter.ipynb") # Each execution for a given job is called a run. When creating a run you need to specify: # 1) a job name (mandatory) # 2) a job type (optional) # Job names and job types are useful to group and search runs in the Vectice UI. # For this run, we will use the job name "80/20 Split" and the job type "PREPARATION" vectice.create_run("80/20 Split", JobType.PREPARATION) #Start the run ## Using the with method end the run automatically. We don't need to add vectice.end_run(outputs=outputs) to end the run with vectice.start_run(inputs=[input_ds_version,input_code]) as run: # We will use an 80/20 split to prepare the data test_size = 0.2 # We will set the random seed so we always generate the same split. random_state = 42 train, test = train_test_split(df, test_size = test_size, random_state = random_state) # We commented out the code to persist the training and testing test in GCS, # because we already generated the data for you. # The key provided for this tutorial give you read access only to GCS. # We left the code below for convenience, in case you want to use your own credentials and GCS bucket. # train.to_csv (r'gs://vectice_tutorial/training_data.csv', index = False, header = True) # test.to_csv (r'gs://vectice_tutorial/testing_data.csv', index = False, header = True) # Generate X_train, X_test, y_train, y_test, which we will need for modeling X = df.drop("price", axis=1).values y = df["price"].values X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state) # Let's create new versions of the training and testing dataset if the data has changed. # We will use the existing dataset created by Albert, so that we can append new # dataset versions to it. train_ds_version = vectice.create_dataset_version().with_parent_name("train_cleaned_kc_house_data") test_ds_version = vectice.create_dataset_version().with_parent_name("test_cleaned_kc_house_data") # Attach the output datasets to the run. run.add_outputs([train_ds_version,test_ds_version]) # We can preview one of our generated outputs to make sure that everything was executed properly. 
X_train # + [markdown] id="UWCBtvUDx2km" # ### Name your experiments # + [markdown] id="khVCyL5ox2Vn" # Input the experiment names and make them unique # + id="6QtC_bmCMhXy" # Let's define a helper function that we will use to generate unique model version names. # We will make the model version name optional in a future edition of the Vectice SDK # so this function won't be necessary anymore. import random def get_random_string(length): return "".join(random.choice(string.ascii_letters) for i in range(length)) # + [markdown] id="tNlnYdQ_x8AZ" # ## Modeling # + [markdown] id="14f1pDCdx82I" # ### Linear regression model # - # Regression is a method of modelling a target value based on independent predictors. This method is mostly used for forecasting and finding out cause and effect relationship between variables. Regression techniques mostly differ based on the number of independent variables and the type of relationship between the independent and dependent variables. # + [markdown] id="jQU-NU7Mx_Mn" # First, we will do a basic Linear Regression and observe the baseline accuracy metrics. # + colab={"base_uri": "https://localhost:8080/"} id="eu5z76qtMvW6" outputId="25aa715e-d8af-4b9f-f24e-e8c2d5b84dc4" # Each execution for a given job is called a run, for LR we will only do one run. # Setting a job's name is mandatory when starting a run # and is useful to group and search runs in the Vectice UI. # Linear regression model training vectice.create_run("LR-Model", JobType.TRAINING) ## Start the run ## Using the with method end the run automatically. We don't need to add vectice.end_run(outputs=outputs) to end the run with vectice.start_run(inputs=[train_ds_version,test_ds_version,input_code]) as run: lr_rg = LinearRegression() lr_rg.fit(X_train, y_train) lr_pred = lr_rg.predict(X_test) # Evaluate Metrics MAE = mean_absolute_error(lr_pred, y_test) RMSE = sqrt(mean_squared_error(lr_pred, y_test)) print("Root Mean Squared Error: ", RMSE) print("Mean Absolute Error: ", MAE) # Let's log the model we trained along with its metrics, as a new version # of the "Regressor" model in Vectice. # Note that we used a random string to have a unique model version name in Vectice. ## Here we create our model version. ## We can declare the model type using the .with_type method model_version = vectice.create_model_version().with_parent_name("Regressor").with_type(ModelType.REGRESSION).with_algorithm("Linear Regression").with_metric("RMSE",RMSE).with_metric("MAE",MAE).with_user_version(get_random_string(12)) ## Attach the model version to the run run.add_outputs(outputs=[model_version]) # + [markdown] id="kHfTyn5ByD0M" # ### Decision tree model # - # A decision tree is an upside-down tree that makes decisions based on the conditions present in the data. # Decision Tree algorithm has become one of the most used machine learning algorithm both in competitions like Kaggle as well as in business environment. Decision Tree can be used both in classification and regression problem. A decision tree is arriving at an estimate by asking a series of questions to the data, each question narrowing our possible values until the model get confident enough to make a single prediction. The order of the question as well as their content are being determined by the model. In addition, the questions asked are all in a True/False form # + [markdown] id="-FIP53giyFc_" # In this section let's use the decision tree algorithm and compare the accuracy to the logistic regression algorithm. We will try different values for the tree_depth. 
We will log the model parameters and metrics in Vectice. # + colab={"base_uri": "https://localhost:8080/", "height": 594} id="k-iP1UhKuLeN" outputId="debf8bd9-9839-4d87-9d98-66921b906dc6" # We can do a few runs with different max depth for the tree. # Just change the value below and re-run this cell. # The model versions you created will show up in the Vectice UI as new versions # of the "Regressor" Model. You can easily compare them from there. tree_depth = 6 vectice.create_run("DT-Model", JobType.TRAINING) ## Start the run vectice.start_run(inputs=[train_ds_version,test_ds_version,input_code]) dtr = DecisionTreeRegressor(max_depth=tree_depth, min_samples_split=50) dtr.fit(X_train,y_train) dtr_pred = dtr.predict(X_test) data_feature_names = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'waterfront', 'view', 'condition', 'grade', 'sqft_above', 'sqft_basement', 'yr_built', 'yr_renovated', 'zipcode', 'lat', 'long', 'sqft_living15', 'sqft_lot15'] # Visualize the Decision Tree Model plt.figure(figsize=(25, 10)) plot_tree(dtr, feature_names=data_feature_names, filled=True, fontsize=10) MAE = mean_absolute_error(dtr_pred, y_test) RMSE = sqrt(mean_squared_error(dtr_pred, y_test)) print("Root Mean Squared Error:", RMSE) print("Mean Absolute Error:", MAE) ## Here we create our model version. ## We can declare the model type using the .with_type method model_version = vectice.create_model_version().with_parent_name("Regressor").with_type(ModelType.REGRESSION).with_algorithm("Decision Tree").with_property("Tree Depth",str(tree_depth)).with_metric("RMSE",RMSE).with_metric("MAE",MAE).with_user_version(get_random_string(12)) ## Here we should end the run and attach the created model version to it since we don't use the with method vectice.end_run(outputs=[model_version]) # + [markdown] id="eSA7zAZGyQmu" # ### Random forest model # # Let's use the Random Forest Regression and do some hyper-parameter tuning on it. # - # Random Forest is also a “Tree”-based algorithm that uses the qualities features of multiple Decision Trees for making decisions # Therefore, it can be referred to as a ‘Forest’ of trees and hence the name “Random Forest”. The term ‘Random’ is due to the fact that this algorithm is a forest of ‘Randomly created Decision Trees’. # The Decision Tree algorithm has a major disadvantage in that it causes over-fitting. This problem can be limited by implementing the Random Forest Regression in place of the Decision Tree Regression. Additionally, the Random Forest algorithm is also very fast and robust than other regression models. # + colab={"base_uri": "https://localhost:8080/"} id="B3I4rcTVu5PF" outputId="d849f61b-89bb-4a24-f6c1-c1345c7fff30" # You can modify the parameters below and execute multiple runs to train # different versions of RF model. nb_trees = 60 min_samples = 30 vectice.create_run("RF-Model", JobType.TRAINING) ## Start the run ## Using the with method end the run automatically. 
We don't need to add vectice.end_run(outputs=outputs) to end the run with vectice.start_run(inputs=[train_ds_version,test_ds_version,input_code]) as run: rf_regressor = RandomForestRegressor(n_estimators=nb_trees, min_samples_leaf=min_samples) rf_regressor.fit(X_train, y_train) rf_regressor.score(X_test, y_test) rf_regressor_pred = rf_regressor.predict(X_test) MAE = mean_absolute_error(rf_regressor_pred, y_test) RMSE = sqrt(mean_squared_error(rf_regressor_pred, y_test)) print("Root Mean Squared Error:", RMSE) print("Mean Absolute Error:", MAE) # Here's an alternative version to declare metrics metrics = [("RMSE",RMSE), ("MAE",MAE)] ## Here we create our model version. ## We can declare the model type using the .with_type method model_version = vectice.create_model_version().with_parent_name("Regressor").with_type(ModelType.REGRESSION).with_algorithm("Random Forest").with_property("nb_trees",str(nb_trees)).with_property("min_samples",str(min_samples)).with_metrics(metrics).with_user_version(get_random_string(12)) ##Attach the created model version to the run run.add_outputs(outputs=[model_version]) # + [markdown] id="QhotjOQUyTcH" # We can see that the Random Forest Regressor model gives the lowest error and should be the preferred approach despite the complexity of the algorithm. Let's get the list of features' importance to discuss which variables are influencing the model the most. # + colab={"base_uri": "https://localhost:8080/", "height": 279} id="CHcKAqzDyUQZ" outputId="9ae03f12-d1b0-4380-86e8-81af76f8c464" columns = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'waterfront', 'view', 'condition', 'grade', 'sqft_above', 'sqft_basement', 'yr_built', 'yr_renovated', 'zipcode', 'lat', 'long', 'sqft_living15', 'sqft_lot15'] importance = pd.DataFrame({'Importance': rf_regressor.feature_importances_ * 100}, index=columns) importance.sort_values(by="Importance", axis=0, ascending=True).plot(kind="barh", color="b") plt.xlabel("Variable Importance") plt.gca().legend_ = None # + [markdown] id="z_BYeT1hyZwj" # Thank you and congratulations! You have succesfully completed the notebook part of the tutorial. # # In this notebooks we have illustrated how you can capture your experiments, hyper-parameters, dataset versions and metrics using Vectice SDK. # You can now leverage Vectice UI for analysis, documentation and to engage a business conversation around the findings. # # Vectice enables you to: # 1. Make your experiments more reproducible. # 2. Track the data and code that is used for each experiment and model versions. # 3. Document your projects' progress and collaborate with your team in Vectice's UI. # 4. Discover previous work and reuse your team knowledge for new projects. # # We are constantly improving the Vectice SDK and the Vectice application. Let us know what improvements you would like to see in the solution and what your favorite features are after completing this tutorial. # # Feel free to explore more and come up with your own ideas on how to best start leveraging Vectice! 
# # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # Objects # # [Table of contents](../toc.ipynb) # # # Tools for Python developers # # * Upon now, you used either python through terminal or some cloud service like [Python anywhere](https://www.pythonanywhere.com/try-ipython/), or the respective jupyter notebooks for this class on [Binder](https://mybinder.org/v2/gh/StephanRhode/py-algorithms-4-automotive-engineering/master) # * This chapter introduces an integrated development environment (Pycharm) and local interactive notebooks (Jupyter). # + [markdown] slideshow={"slide_type": "slide"} # ## Pycharm # # Octocat # # * Pycharm is one possible integrated development environment (IDE) for Python. # * An IDE is basically a program which supports all steps of software development like # * Coding with code completion, static code analyses, templates,... # * Testing # * Debugging # * Version control # * Environment and package management # * Code refactoring # * Build chains,... # * Please find [here all features](https://www.jetbrains.com/pycharm/features/) # + [markdown] slideshow={"slide_type": "subslide"} # * Other popular choices next to Pycharm are Visual Studio Code, and Spyder. Just try which one is best for you. # * There is a free community edition of Pycharm [link to installer](https://www.jetbrains.com/pycharm/download). # * And as Pycharm is a professional software, there is much [video recorded training material](https://www.jetbrains.com/pycharm/learning-center/). # + [markdown] slideshow={"slide_type": "subslide"} # ### Pycharm introduction video # # Please find here a [video which presents Pycharm features](https://www.youtube.com/watch?v=BPC-bGdBSM8). # + [markdown] slideshow={"slide_type": "subslide"} # ### A small Pycharm live demo # # Pycharm # + [markdown] slideshow={"slide_type": "subslide"} # ## Exercise: First steps in Pycharm (15 minutes) # # Exercise # # Please complete the following tasks: # * Install Pycharm from [https://www.jetbrains.com/pycharm/download](https://www.jetbrains.com/pycharm/download) # * Create a new project # * Create an environment (pip, conda) # * Add this Python code as file # ``` # def my_hello(): # print("Hello world") # # ``` # * Execute this file in Pycharm # + [markdown] slideshow={"slide_type": "slide"} # ## Jupyter # # Octocat # # * Jupyter is an interactive programming environment, where you can combine programming, presentation of results, and explanation with text and equations in one web page. # * Hence Jupyter is a way to communicate scientific computing like it is done since ages. Leonardo da Vinci used Notebooks as well! # * Jupyter works with Python kernel, as well as with Julia, R, Rubi, Matlab. # * Relatively recent, [Jupyter Lab](https://jupyterlab.readthedocs.io/en/stable/) was released, which is the successor of Jupyter. # * Note the entire course material is written in Jupyter. It is very convenient :) # + [markdown] slideshow={"slide_type": "subslide"} # ### Jupyter installation # # * Jupyter is a package. Hence, just type `conda install jupyter` to extend your environment. # * You can also try Jupyter in the cloud here [https://jupyter.org/try](https://jupyter.org/try). # * The command to start the Jupyter notebook server is `jupyter notebook`. 
# # Jupyter # + [markdown] slideshow={"slide_type": "subslide"} # ### Some Jupyter commands # # * There is edit mode and navigation mode. You can switch between them with `enter` and `esc`. # * `shift + enter` runs a cell and selects the next cell below. # * If you use functions, you can use auto completion with `tab` or read the doc string with `shift + tab`. # * Many more keyboard shortcuts are on top of Jupyter panel in the keyboard icon. # * Add to this, there are some so called magic commands, which start with `%`. Quite common is for instance `%matplotlib notebook`, which embeds plots. # + [markdown] slideshow={"slide_type": "subslide"} # ### Jupyter tutorial # # * There is a great Jupyter tutorial on Binder. Hence, please consult this tutorial to learn Jupyter. # # * https://hub.mybinder.turing.ac.uk/user/ipython-ipython-in-depth-zgalw2wh/notebooks/binder/Index.ipynb # + [markdown] slideshow={"slide_type": "subslide"} # ## Exercise: Jupyter (10 minutes) # # Exercise # # # Here the task: # # * Activate your local Python environment and install jupyter with `conda install jupyter`. # * Open the notebook server with `jupyter notebook` command. # * Create a new notebook and try some Python code there. # * Add text in markdown cells. # + [markdown] slideshow={"slide_type": "subslide"} # ## Jupyter extensions # # There are many extensions for Jupyter notebooks like # * table of contents bar, # * variable inspector, # * spell checkers, # * auto code style checks,... # available in [jupyter-contrib-nbextensions](https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/index.html). # # With these extensions, Jupyter becomes a very powerful interactive development environment. # + [markdown] slideshow={"slide_type": "subslide"} # Please find here a screen shot of Jupyter with table of contents and variable inspector extension. 
# # Jupyter extended # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Preamble import os, sys import pandas as pd import numpy as np sys.path.append('/Users/lindenmp/Dropbox/Work/ResProjects/NormativeNeuroDev_CrossSec/code/func/') from proj_environment import set_proj_env from func import get_synth_cov train_test_str = 'squeakycleanExclude' exclude_str = 't1Exclude' # 't1Exclude' 'fsFinalExclude' parc_str = 'schaefer' # 'schaefer' 'lausanne' parc_scale = 400 # 200 400 | 60 125 extra_str = '' # extra_str = '_nuis-netdens' # extra_str = '_nuis-str' parcel_names, parcel_loc, drop_parcels, num_parcels, yeo_idx, yeo_labels = set_proj_env(train_test_str = train_test_str, exclude_str = exclude_str, parc_str = parc_str, parc_scale = parc_scale, extra_str = extra_str) print(os.environ['MODELDIR_BASE']) print(os.environ['MODELDIR']) # ## Load data # + # Load data df = pd.read_csv(os.path.join(os.environ['MODELDIR_BASE'], 'df_pheno.csv')) df.set_index(['bblid', 'scanid'], inplace = True) df_node = pd.read_csv(os.path.join(os.environ['MODELDIR'], 'df_node_clean.csv')) df_node.set_index(['bblid', 'scanid'], inplace = True) # adjust sex to 0 and 1 df['sex_adj'] = df.sex - 1 print(df.shape) print(df_node.shape) # - df.head() df_node.head() # # Prepare files for normative modelling # + # Note, 'ageAtScan1_Years' is assumed to be covs[0] and 'sex_adj' is assumed to be covs[1] # if more than 2 covs are to be used, append to the end and age/sex will be duplicated accordingly in the forward model covs = ['ageAtScan1_Years', 'sex_adj'] print(covs) num_covs = len(covs) print(num_covs) # - extra_str_2 = '' # ## Primary model (train/test split) # Create subdirectory for specific normative model -- labeled according to parcellation/resolution choices and covariates normativedir = os.path.join(os.environ['MODELDIR'], '+'.join(covs) + extra_str_2 + '/') print(normativedir) if not os.path.exists(normativedir): os.mkdir(normativedir); # + # Write out training -- retaining only residuals from nuissance regression df[df[train_test_str] == 0].to_csv(os.path.join(normativedir, 'train.csv')) df[df[train_test_str] == 0].to_csv(os.path.join(normativedir, 'cov_train.txt'), columns = covs, sep = ' ', index = False, header = False) resp_train = df_node[df_node[train_test_str] == 0].drop(train_test_str, axis = 1) mask = np.all(np.isnan(resp_train), axis = 1) if np.any(mask): print("Warning: NaNs in response train") resp_train.to_csv(os.path.join(normativedir, 'resp_train.csv')) resp_train.to_csv(os.path.join(normativedir, 'resp_train.txt'), sep = ' ', index = False, header = False) # Write out test -- retaining only residuals from nuissance regression df[df[train_test_str] == 1].to_csv(os.path.join(normativedir, 'test.csv')) df[df[train_test_str] == 1].to_csv(os.path.join(normativedir, 'cov_test.txt'), columns = covs, sep = ' ', index = False, header = False) resp_test = df_node[df_node[train_test_str] == 1].drop(train_test_str, axis = 1) mask = np.all(np.isnan(resp_test), axis = 1) if np.any(mask): print("Warning: NaNs in response train") resp_test.to_csv(os.path.join(normativedir, 'resp_test.csv')) resp_test.to_csv(os.path.join(normativedir, 'resp_test.txt'), sep = ' ', index = False, header = False) print(str(resp_train.shape[1]) + ' features written out for normative modeling') # - # ### Forward variants # + fwddir = os.path.join(normativedir, 
'forward/') if not os.path.exists(fwddir): os.mkdir(fwddir) # Synthetic cov data x = get_synth_cov(df, cov = 'ageAtScan1_Years', stp = 1) if 'sex_adj' in covs: # Produce gender dummy variable for one repeat --> i.e., to account for two runs of ages, one per gender gender_synth = np.concatenate((np.ones(x.shape),np.zeros(x.shape)), axis = 0) # concat synth_cov = np.concatenate((np.matlib.repmat(x, 2, 1), np.matlib.repmat(gender_synth, 1, 1)), axis = 1) print(synth_cov.shape) # write out np.savetxt(os.path.join(fwddir, 'synth_cov_test.txt'), synth_cov, delimiter = ' ', fmt = ['%.1f', '%.d']) # - # ### Permutation test | train and test | no blocks # + # number of permutations num_perms = 1000 # Set seed for reproducibility np.random.seed(0) for i in range(num_perms): permdir = os.path.join(normativedir, 'perm_all/perm_' + str(i)) if not os.path.exists(permdir): os.makedirs(permdir) df_shuffed = df.copy() df_shuffed.loc[:,covs] = df_shuffed[covs].sample(frac = 1).values df_shuffed.loc[:,covs[1]] = df_shuffed.loc[:,covs[1]].astype(int) df_shuffed[df_shuffed[train_test_str] == 0].to_csv(os.path.join(permdir, 'cov_train.txt'), columns = covs, sep = ' ', index = False, header = False) df_shuffed[df_shuffed[train_test_str] == 1].to_csv(os.path.join(permdir, 'cov_test.txt'), columns = covs, sep = ' ', index = False, header = False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from tvDatafeed import TvDatafeed, Interval import datetime import logging logging.basicConfig(level=logging.DEBUG) tv = TvDatafeed(auto_login=False) # + pycharm={"name": "#%%\n"} data = tv.get_hist('NIFY', 'NSE', Interval.in_15_minute) # + pycharm={"name": "#%%\n"} print(data) # + pycharm={"name": "#%%\n"} from tvDatafeed import TvDatafeed, Interval, symbols username = 'natwijaya19' password = '' tv = TvDatafeed(username, password, chromedriver_path=None) # index nifty_index_data = tv.get_hist(symbol='NIFTY',exchange='NSE',interval=Interval.in_1_hour,n_bars=1000) # futures continuous contract nifty_futures_data = tv.get_hist(symbol='NIFTY',exchange='NSE',interval=Interval.in_1_hour,n_bars=1000,fut_contract=1) # crudeoil crudeoil_data = tv.get_hist(symbol='CRUDEOIL',exchange='MCX',interval=Interval.in_1_hour,n_bars=5000,fut_contract=1) # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="q4AS7njD7iL6" colab_type="code" colab={} # !nvidia-smi # + id="-D98CddQuwKG" colab_type="code" colab={} import gym import cv2 import time import json import random import numpy as np import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from collections import deque # + id="I8rGMlN2uzgk" colab_type="code" colab={} ENVIRONMENT = "PongDeterministic-v4" DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") SAVE_MODELS = True # Save models to file so you can test later MODEL_PATH = "./pong-cnn-" # Models path for saving or loading SAVE_MODEL_INTERVAL = 10 # Save models at every X epoch TRAIN_MODEL = True # Train model while playing (Make it False when testing a model) LOAD_MODEL_FROM_FILE = False # Load model from file LOAD_FILE_EPISODE = 0 # Load Xth episode from file BATCH_SIZE = 64 # Minibatch size that select randomly from mem 
for train nets MAX_EPISODE = 100000 # Max episode MAX_STEP = 100000 # Max step size for one episode MAX_MEMORY_LEN = 50000 # Max memory len MIN_MEMORY_LEN = 40000 # Min memory len before start train GAMMA = 0.97 # Discount rate ALPHA = 0.00025 # Learning rate EPSILON_DECAY = 0.99 # Epsilon decay rate by step RENDER_GAME_WINDOW = False # Opens a new window to render the game (Won't work on colab default) # + id="HxF5-bzUu1q-" colab_type="code" colab={} class DuelCNN(nn.Module): """ CNN with Duel Algo. https://arxiv.org/abs/1511.06581 """ def __init__(self, h, w, output_size): super(DuelCNN, self).__init__() self.conv1 = nn.Conv2d(in_channels=4, out_channels=32, kernel_size=8, stride=4) self.bn1 = nn.BatchNorm2d(32) convw, convh = self.conv2d_size_calc(w, h, kernel_size=8, stride=4) self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=4, stride=2) self.bn2 = nn.BatchNorm2d(64) convw, convh = self.conv2d_size_calc(convw, convh, kernel_size=4, stride=2) self.conv3 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1) self.bn3 = nn.BatchNorm2d(64) convw, convh = self.conv2d_size_calc(convw, convh, kernel_size=3, stride=1) linear_input_size = convw * convh * 64 # Last conv layer's out sizes # Action layer self.Alinear1 = nn.Linear(in_features=linear_input_size, out_features=128) self.Alrelu = nn.LeakyReLU() # Linear 1 activation funct self.Alinear2 = nn.Linear(in_features=128, out_features=output_size) # State Value layer self.Vlinear1 = nn.Linear(in_features=linear_input_size, out_features=128) self.Vlrelu = nn.LeakyReLU() # Linear 1 activation funct self.Vlinear2 = nn.Linear(in_features=128, out_features=1) # Only 1 node def conv2d_size_calc(self, w, h, kernel_size=5, stride=2): """ Calcs conv layers output image sizes """ next_w = (w - (kernel_size - 1) - 1) // stride + 1 next_h = (h - (kernel_size - 1) - 1) // stride + 1 return next_w, next_h def forward(self, x): x = F.relu(self.bn1(self.conv1(x))) x = F.relu(self.bn2(self.conv2(x))) x = F.relu(self.bn3(self.conv3(x))) x = x.view(x.size(0), -1) # Flatten every batch Ax = self.Alrelu(self.Alinear1(x)) Ax = self.Alinear2(Ax) # No activation on last layer Vx = self.Vlrelu(self.Vlinear1(x)) Vx = self.Vlinear2(Vx) # No activation on last layer q = Vx + (Ax - Ax.mean()) return q # + id="plT51MPbu5U5" colab_type="code" colab={} class Agent: def __init__(self, environment): """ Hyperparameters definition for Agent """ # State size for breakout env. SS images (210, 160, 3). Used as input size in network self.state_size_h = environment.observation_space.shape[0] self.state_size_w = environment.observation_space.shape[1] self.state_size_c = environment.observation_space.shape[2] # Activation size for breakout env. Used as output size in network self.action_size = environment.action_space.n # Image pre process params self.target_h = 80 # Height after process self.target_w = 64 # Widht after process self.crop_dim = [20, self.state_size_h, 0, self.state_size_w] # Cut 20 px from top to get rid of the score table # Trust rate to our experiences self.gamma = GAMMA # Discount coef for future predictions self.alpha = ALPHA # Learning Rate # After many experinces epsilon will be 0.05 # So we will do less Explore more Exploit self.epsilon = 1 # Explore or Exploit self.epsilon_decay = EPSILON_DECAY # Adaptive Epsilon Decay Rate self.epsilon_minimum = 0.05 # Minimum for Explore # Deque holds replay mem. 
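        # Why a replay memory: sampling random minibatches from a large buffer of past
        # transitions breaks the strong correlation between consecutive frames and lets each
        # experience be reused for several gradient updates. deque(maxlen=...) silently drops
        # the oldest transitions once MAX_MEMORY_LEN is reached.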
self.memory = deque(maxlen=MAX_MEMORY_LEN) # Create two model for DDQN algorithm self.online_model = DuelCNN(h=self.target_h, w=self.target_w, output_size=self.action_size).to(DEVICE) self.target_model = DuelCNN(h=self.target_h, w=self.target_w, output_size=self.action_size).to(DEVICE) self.target_model.load_state_dict(self.online_model.state_dict()) self.target_model.eval() # Adam used as optimizer self.optimizer = optim.Adam(self.online_model.parameters(), lr=self.alpha) def preProcess(self, image): """ Process image crop resize, grayscale and normalize the images """ frame = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # To grayscale frame = frame[self.crop_dim[0]:self.crop_dim[1], self.crop_dim[2]:self.crop_dim[3]] # Cut 20 px from top frame = cv2.resize(frame, (self.target_w, self.target_h)) # Resize frame = frame.reshape(self.target_w, self.target_h) / 255 # Normalize return frame def act(self, state): """ Get state and do action Two option can be selectedd if explore select random action if exploit ask nnet for action """ act_protocol = 'Explore' if random.uniform(0, 1) <= self.epsilon else 'Exploit' if act_protocol == 'Explore': action = random.randrange(self.action_size) else: with torch.no_grad(): state = torch.tensor(state, dtype=torch.float, device=DEVICE).unsqueeze(0) q_values = self.online_model.forward(state) # (1, action_size) action = torch.argmax(q_values).item() # Returns the indices of the maximum value of all elements return action def train(self): """ Train neural nets with replay memory returns loss and max_q val predicted from online_net """ if len(agent.memory) < MIN_MEMORY_LEN: loss, max_q = [0, 0] return loss, max_q # We get out minibatch and turn it to numpy array state, action, reward, next_state, done = zip(*random.sample(self.memory, BATCH_SIZE)) # Concat batches in one array # (np.arr, np.arr) ==> np.BIGarr state = np.concatenate(state) next_state = np.concatenate(next_state) # Convert them to tensors state = torch.tensor(state, dtype=torch.float, device=DEVICE) next_state = torch.tensor(next_state, dtype=torch.float, device=DEVICE) action = torch.tensor(action, dtype=torch.long, device=DEVICE) reward = torch.tensor(reward, dtype=torch.float, device=DEVICE) done = torch.tensor(done, dtype=torch.float, device=DEVICE) # Make predictions state_q_values = self.online_model(state) next_states_q_values = self.online_model(next_state) next_states_target_q_values = self.target_model(next_state) # Find selected action's q_value selected_q_value = state_q_values.gather(1, action.unsqueeze(1)).squeeze(1) # Get indice of the max value of next_states_q_values # Use that indice to get a q_value from next_states_target_q_values # We use greedy for policy So it called off-policy next_states_target_q_value = next_states_target_q_values.gather(1, next_states_q_values.max(1)[1].unsqueeze(1)).squeeze(1) # Use Bellman function to find expected q value expected_q_value = reward + self.gamma * next_states_target_q_value * (1 - done) # Calc loss with expected_q_value and q_value loss = (selected_q_value - expected_q_value.detach()).pow(2).mean() self.optimizer.zero_grad() loss.backward() self.optimizer.step() return loss, torch.max(state_q_values).item() def storeResults(self, state, action, reward, nextState, done): """ Store every result to memory """ self.memory.append([state[None, :], action, reward, nextState[None, :], done]) def adaptiveEpsilon(self): """ Adaptive Epsilon means every step we decrease the epsilon so we do less Explore """ if self.epsilon > self.epsilon_minimum: 
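            # Rough decay schedule: epsilon starts at 1.0 and is multiplied by
            # EPSILON_DECAY = 0.99 once every 1000 environment steps (see the
            # "total_step % 1000 == 0" check in the training loop below), so reaching the
            # 0.05 floor takes about 300 decays (0.99**298 ~ 0.05), i.e. roughly 300,000 steps.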
self.epsilon *= self.epsilon_decay # + id="ve4vYDe3bozg" colab_type="code" colab={} environment = gym.make(ENVIRONMENT) # Get env agent = Agent(environment) # Create Agent if LOAD_MODEL_FROM_FILE: agent.online_model.load_state_dict(torch.load(MODEL_PATH+str(LOAD_FILE_EPISODE)+".pkl")) with open(MODEL_PATH+str(LOAD_FILE_EPISODE)+'.json') as outfile: param = json.load(outfile) agent.epsilon = param.get('epsilon') startEpisode = LOAD_FILE_EPISODE + 1 else: startEpisode = 1 last_100_ep_reward = deque(maxlen=100) # Last 100 episode rewards total_step = 1 # Cumulkative sum of all steps in episodes for episode in range(startEpisode, MAX_EPISODE): startTime = time.time() # Keep time state = environment.reset() # Reset env state = agent.preProcess(state) # Process image # Stack state . Every state contains 4 time contionusly frames # We stack frames like 4 channel image state = np.stack((state, state, state, state)) total_max_q_val = 0 # Total max q vals total_reward = 0 # Total reward for each episode total_loss = 0 # Total loss for each episode for step in range(MAX_STEP): if RENDER_GAME_WINDOW: environment.render() # Show state visually # Select and perform an action action = agent.act(state) # Act next_state, reward, done, info = environment.step(action) # Observe next_state = agent.preProcess(next_state) # Process image # Stack state . Every state contains 4 time contionusly frames # We stack frames like 4 channel image next_state = np.stack((next_state, state[0], state[1], state[2])) # Store the transition in memory agent.storeResults(state, action, reward, next_state, done) # Store to mem # Move to the next state state = next_state # Update state if TRAIN_MODEL: # Perform one step of the optimization (on the target network) loss, max_q_val = agent.train() # Train with random BATCH_SIZE state taken from mem else: loss, max_q_val = [0, 0] total_loss += loss total_max_q_val += max_q_val total_reward += reward total_step += 1 if total_step % 1000 == 0: agent.adaptiveEpsilon() # Decrase epsilon if done: # Episode completed currentTime = time.time() # Keep current time time_passed = currentTime - startTime # Find episode duration current_time_format = time.strftime("%H:%M:%S", time.gmtime()) # Get current dateTime as HH:MM:SS epsilonDict = {'epsilon': agent.epsilon} # Create epsilon dict to save model as file if SAVE_MODELS and episode % SAVE_MODEL_INTERVAL == 0: # Save model as file weightsPath = MODEL_PATH + str(episode) + '.pkl' epsilonPath = MODEL_PATH + str(episode) + '.json' torch.save(agent.online_model.state_dict(), weightsPath) with open(epsilonPath, 'w') as outfile: json.dump(epsilonDict, outfile) if TRAIN_MODEL: agent.target_model.load_state_dict(agent.online_model.state_dict()) # Update target model last_100_ep_reward.append(total_reward) avg_max_q_val = total_max_q_val / step outStr = "Episode:{} Time:{} Reward:{:.2f} Loss:{:.2f} Last_100_Avg_Rew:{:.3f} Avg_Max_Q:{:.3f} Epsilon:{:.2f} Duration:{:.2f} Step:{} CStep:{}".format( episode, current_time_format, total_reward, total_loss, np.mean(last_100_ep_reward), avg_max_q_val, agent.epsilon, time_passed, step, total_step ) print(outStr) if SAVE_MODELS: outputPath = MODEL_PATH + "out" + '.txt' # Save outStr to file with open(outputPath, 'a') as outfile: outfile.write(outStr+"\n") break # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # #!pip install pandas 
as pd import pandas as pd #reading data in pandas Data frame df = pd.read_csv("English.csv", header = None, encoding = "ISO-8859-1") #Data Snapshot df.head() #Create Surah & Ayat Index df1 = df[0].str.split("|", expand = True) df2 = pd.concat([df1,df.iloc[:,1:]], axis = 1) #Combine Columns 2:16 in one df_a = pd.DataFrame() for a,b in df2.iterrows(): load = b[2:] data = '' for aa in load: if str(aa) != 'nan': data += str(aa) df_a = df_a.append([data]) #Concatenate to orignal df3 = pd.concat([df2.iloc[:,:2],df_a.reset_index().rename(columns = {0:"Data"})], axis = 1).drop(columns = 'index').rename(columns = {0:"Surah",1:'Ayat'}) df3.head() # **Feature Engineering** # + #Adding index of chapters for col in ['Surah', 'Ayat']: df3[col] = pd.to_numeric(df3[col]) def idx(i, j): df3['index'] = df.index return int(df3.loc[(df3['Surah']==i) & (df3['Ayat']==j), 'index']) cut_points = [-1, idx(2,141), idx(2,252), idx(3,92), idx(4,23), idx(4,147), idx(5,81), idx(6,110), idx(7,87), idx(8,40), idx(9,92), idx(11,5), idx(12,52), idx(14,52), idx(16,128), idx(18,74), idx(20,135), idx(22,78), idx(25,20), idx(27,55), idx(29,45), idx(33,30), idx(36,27), idx(39,31), idx(41,46), idx(45,37), idx(51,30), idx(57,29), idx(66,12), idx(77,50), idx(114,6)] label_names = [str(i) for i in range(1, len(cut_points))] if 'Para' not in df3.columns: df3.insert(2, 'Para', pd.cut(df.index,cut_points,labels=label_names)) df3.drop('index', axis=1, inplace=True) df3['Para'] = pd.to_numeric(df3['Para']) df3.head() # - #SAVING CHAPTER WISE MASTER df3.to_csv("Supervised_ML_Master/Para_Wise_M_L-Model_OF_NOBEL_QURAN.csv", index=False) # + #SORTING SURAH WISE surahs = df3['Surah'].unique().tolist() Surah_Data = [] for surah in surahs: Data = '' for val in df3[df3["Surah"] == surah]['Data']: Data += val Surah_Data.append(Data) Surahs_df = pd.DataFrame({'Data':Surah_Data}).reset_index().rename(columns = {'index':"Surah"}) Surahs_df.head() # - # Adding Names of Surah and other important features from Wikipedia wikipedia_link = 'https://en.wikipedia.org/wiki/List_of_chapters_in_the_Quran' from bs4 import BeautifulSoup import requests req = requests.get(wikipedia_link) soup = BeautifulSoup(req.content, 'html') surah_rows = soup.find('table', class_= 'sortable' ).find('tbody').find_all('tr')[1:] #preview of Data information surahs_list = [] for i in surah_rows: surah_dict = {} single_surah_data = i.find_all('td') #surah_dict['SurahNumber'] = single_surah_data[0].get_text().strip() surah_dict['EnglishTitle'] = single_surah_data[1].get_text().strip() surah_dict['ArabicTitle'] = single_surah_data[2].find('span').get_text() surah_dict['NumberOfVerses'] = single_surah_data[4].get_text().split(' ')[0] surah_dict['NumberOfRukus'] = single_surah_data[4].get_text().split(' ')[1] surah_dict['PlaceOfRevelation'] = single_surah_data[5].get_text() surahs_list.append(surah_dict) df_surah_information = pd.DataFrame(surahs_list).reset_index().rename(columns = {'index':"Surah Number"}) df_surah_information['Surah Number'] = df_surah_information['Surah Number'].astype(int) df_surah_information # Join of Main data and Data information df_Master = pd.merge(Surahs_df, df_surah_information, how = 'left', left_on=['Surah'], right_on = ['Surah Number']) df_Master.head() df_Master_Edit = df_Master.copy() # **Supervised_Machine_Learning_Analysis** import matplotlib.pyplot as plt import seaborn as sns color = sns.color_palette() df_MakiMadni = df_Master_Edit['PlaceOfRevelation'].value_counts() colors = sns.color_palette() plt.figure(figsize=(10, 7)) #create pie chart 
plt.pie(df_MakiMadni.values, labels = df_MakiMadni.index, colors = colors[1:], autopct='%.0f%%', explode =(0.1, 0.0)) plt.title('Distibution of Maki vs Madni Surahs') plt.show() #count verses per chapter df_ayahcount_perparah = df3.groupby(['Para']).agg(TotalNumberofAyats = ('Ayat', 'count')).reset_index() x= df_ayahcount_perparah['Para'] y= df_ayahcount_perparah['TotalNumberofAyats'] plt.figure(figsize=(20,8)) sns.barplot(x, y, color=color[2]) plt.xticks(rotation='vertical') plt.xlabel('Parah Number', fontsize=12) plt.ylabel('Number of Ayahs', fontsize=12) plt.title("Number of Ayahs in Each Parah", fontsize = 25) plt.show() print (df_ayahcount_perparah) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_pytorch_p36 # language: python # name: conda_pytorch_p36 # --- # #!pip install tensorflow # #!pip install keras # #!pip install pandas-datareader import tensorflow import keras # #!pip install pandas import pandas import numpy import pandas_datareader.data as web import pandas as pd import datetime import numpy as np import matplotlib.pyplot as plt # %matplotlib inline from numpy import array import sklearn from sklearn import preprocessing from sklearn.tree import DecisionTreeRegressor from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline from sklearn.linear_model import HuberRegressor from sklearn.preprocessing import MinMaxScaler import tensorflow as tf from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM # + #start = datetime.datetime(2014, 1, 1) #end = datetime.datetime(2020, 7, 24) # + ###axisgold = web.DataReader("AXISGOLD.NS", 'yahoo', start, end) ###bslgoldetf = web.DataReader("BSLGOLDETF.NS", 'yahoo', start, end) #cangold = web.DataReader("CANGOLD.BO", 'yahoo', start, end) #hfcmgetf = web.DataReader("HFCMFGETF.BO", 'yahoo', start, end) #ipgetf = web.DataReader("IPGETF.BO", 'yahoo', start, end) ###idbigold = web.DataReader("IDBIGOLD.NS", 'yahoo', start, end) #kotakgold = web.DataReader("KOTAKGOLD.NS", 'yahoo', start, end) ###qgoldhalf = web.DataReader("QGOLDHALF.NS", 'yahoo', start, end) #relgold = web.DataReader("RELGOLD.BO", 'yahoo', start, end) #religarego = web.DataReader("RELIGAREGO.BO", 'yahoo', start, end) #setfgold = web.DataReader("SETFGOLD.NS", 'yahoo', start, end) ###goldshare = web.DataReader("GOLDSHARE.NS", 'yahoo', start, end) ###goldbees = web.DataReader("GOLDBEES.NS", 'yahoo', start, end) # - kotakgold = pd.read_csv('KOTAKGOLD.csv') ###axisgold.to_csv('AXISGOLD.csv') ###bslgoldetf.to_csv('BSLGOLDETF.csv') ###idbigold.to_csv('IDBIGOLD.csv') kotakgold.to_csv('KOTAKGOLD.csv') ###qgoldhalf.to_csv('QGOLDHALF.csv') #setfgold.to_csv('SETFGOLD.csv') ####goldshare.to_csv('GOLDSHARE.csv') ###goldbees.to_csv('GOLDBEES.csv') ###axisgold['Open'].plot(label = 'axisgold', figsize= (15,7)) ###bslgoldetf['Open'].plot(label = 'bslgoldetf') ###idbigold['Open'].plot(label = 'idbigold') kotakgold['Open'].plot(label = 'kotakgold') ###qgoldhalf['Open'].plot(label = 'qgoldhalf') ###goldshare['Open'].plot(label = 'goldshare') ###goldbees['Open'].plot(label = 'goldbees') plt.title('Gold ETF stock price') plt.legend() # + #We are going to predict the price of one Gold ETF below. Similarly all other predictions can also be made. 
Let us chose Axisgold ETF # - # Select Adjusted close data data = kotakgold[['Adj Close']] print(data.shape) data[-20:] from sklearn.preprocessing import MinMaxScaler scaler=MinMaxScaler(feature_range=(0,1)) data_scaled=scaler.fit_transform(np.array(data).reshape(-1,1)) print(data_scaled) # + #Split data 80% training 20% testing training_size=int(len(data_scaled)*0.8) test_size=len(data_scaled)-training_size train_data,test_data=data_scaled[0:training_size],data_scaled[training_size:len(data)] # - training_size,test_size train_data def create_dataset(dataset, time_step=1): dataX, dataY = [], [] for i in range(len(dataset)-time_step-1): a = dataset[i:(i+time_step), 0] ###i=0, 0,1,2,3-----99 100 dataX.append(a) dataY.append(dataset[i + time_step, 0]) return numpy.array(dataX), numpy.array(dataY) #reshape into X=t,t+1,t+2,t+3 and Y=t+4 time_step = 1 x_train, y_train = create_dataset(train_data, time_step) x_test, y_test = create_dataset(test_data, time_step) #x_train, x_test, y_train, y_test = train_test_split(X, y, test_size = 0.2) #Regression coefficients x_train_lr = x_train y_train_lr = y_train x_test_lr = x_test y_test_lr = y_test print(x_train_lr.shape), print(y_train_lr.shape) print(x_test_lr.shape), print(y_test_lr.shape) x_train_lr.shape, x_test_lr.shape x_train_lr # + #Create models #Create decision tree model #modeltree = DecisionTreeRegressor() #modeltree.fit(x_train, y_train) #Create linear decision model modellr = LinearRegression() modellr.fit(x_train_lr, y_train_lr) # - # The coefficient print('Slope: ', np.asscalar(np.squeeze(modellr.coef_))) # The Intercept print('Intercept: ', modellr.intercept_) # Train set graph plt.figure(1, figsize=(16,10)) plt.title('Linear Regression | Price vs Time') plt.scatter(x_train_lr, y_train_lr, edgecolor='w', label='Actual Price') plt.plot(x_train_lr, modellr.predict(x_train_lr), color='r', label='Predicted Price') plt.xlabel('Integer Date') plt.ylabel('Stock Price') plt.legend() plt.show() # Create test arrays x_test_lr # Generate array with predicted values train_pred_lr = modellr.predict(x_train_lr) test_pred_lr = modellr.predict(x_test_lr) # + #model accurcy #results_lr = model.evaluate(x_test_lr, y_test_lr, batch_size=128) #print("test loss, test acc:", results_lr) # - # reshape input to be [samples, time steps, features] which is required for LSTM train_pred_lr =train_pred_lr.reshape(train_pred_lr.shape[0], 1) test_pred_lr = test_pred_lr.reshape(test_pred_lr.shape[0], 1) train_pred_lr=scaler.inverse_transform(train_pred_lr) test_pred_lr=scaler.inverse_transform(test_pred_lr) y_train_lr =y_train_lr.reshape(y_train_lr.shape[0], 1) y_test_lr = y_test_lr.reshape(y_test_lr.shape[0], 1) y_train_lr=scaler.inverse_transform(y_train_lr) y_test_lr=scaler.inverse_transform(y_test_lr) print(x_train_lr) print(train_pred_lr) # + #plt.plot(x_test_lr) #plt.plot(test_pred_lr) #plt.plot(x_train_lr) #plt.plot(train_pred_lr) # - print(y_test_lr.shape) print(train_pred_lr.shape) print(y_train_lr.shape) print(test_pred_lr.shape) # + ### Calculate RMSE performance metrics import math from sklearn.metrics import mean_squared_error print("Training Linear Regression Root Mean Squared error is:{}". format(math.sqrt(mean_squared_error(y_train_lr,train_pred_lr)))) print("Training Linear Regression Mean Squared error is {}".format(mean_squared_error(y_train_lr,train_pred_lr))) print("Test Linear Regression Root Mean Squared error is:{}". 
format(math.sqrt(mean_squared_error(y_test_lr,test_pred_lr)))) print("Test Linear Regression Mean Squared error is {}".format(mean_squared_error(y_test_lr,test_pred_lr))) # - # shift train predictions for plotting look_back=100 trainPredictPlot = numpy.empty_like(data_scaled) trainPredictPlot[:, :] = np.nan trainPredictPlot[look_back:len(train_pred_lr)+look_back, :] = train_pred_lr # shift test predictions for plotting testPredictPlot = numpy.empty_like(data_scaled) testPredictPlot[:, :] = numpy.nan testPredictPlot[len(train_pred_lr)+(look_back*2)-196:len(data)+100, :] = test_pred_lr # plot baseline and predictions plt.figure(figsize = (16,8)) plt.plot(scaler.inverse_transform(data_scaled)) plt.plot(trainPredictPlot) plt.plot(testPredictPlot) plt.legend(['Original','Training', 'Test']) plt.show() # + #data_1 = pd.read_csv('KOTAKGOLD.csv') # + #data = data_1[['Adj Close']] # + #from sklearn.preprocessing import MinMaxScaler #scaler=MinMaxScaler(feature_range=(0,1)) #data=scaler.fit_transform(np.array(data).reshape(-1,1)) # + #Split data 80% training 20% testing #training_size=int(len(data)*0.8) #test_size=len(data)-training_size #train_data,test_data=data[0:training_size],data[training_size:len(data)] # + #def create_dataset(dataset, time_step=1): # dataX, dataY = [], [] # for i in range(len(dataset)-time_step-1): # a = dataset[i:(i+time_step), 0] ###i=0, 0,1,2,3-----99 100 # dataX.append(a) # dataY.append(dataset[i + time_step, 0]) # return numpy.array(dataX), numpy.array(dataY) # + #reshape into X=t,t+1,t+2,t+3 and Y=t+4 #time_step = 1 #x_train, y_train = create_dataset(train_data, time_step) #x_test, y_test = create_dataset(test_data, time_step) #x_train, x_test, y_train, y_test = train_test_split(X, y, test_size = 0.2) # - #LSTM coefficients x_train_lstm = x_train y_train_lstm = y_train x_test_lstm = x_test y_test_lstm = y_test # reshape input to be [samples, time steps, features] which is required for LSTM x_train_lstm =x_train_lstm.reshape(x_train_lstm.shape[0],x_train_lstm.shape[1] , 1) x_test_lstm = x_test_lstm.reshape(x_test_lstm.shape[0],x_test_lstm.shape[1] , 1) ### Create the Stacked LSTM model from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.layers import LSTM from tensorflow.keras.layers import Dropout, Activation # + #from keras import backend as K #K.set_image_data_format('channels_last') #keras.backend.image_data_format() # - model=Sequential() model.add(LSTM(5,return_sequences=True,input_shape=(5,1))) model.add(Dropout(0.5)) #model.add(LSTM(100,return_sequences=True)) #model.add(Dropout(0.5)) #model.add(LSTM(100,return_sequences=True)) #model.add(Dropout(0.5)) model.add(LSTM(5)) #model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation("linear")) model.compile(loss='mse',optimizer='adam') model.summary() history = model.fit(x_train_lstm,y_train_lstm,validation_data=(x_test_lstm,y_test_lstm),epochs=50,batch_size=64,verbose=1) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) ### Lets Do the prediction and check performance metrics train_pred_lstm=model.predict(x_train_lstm) test_pred_lstm=model.predict(x_test_lstm) ##Transformback to original form train_pred_lstm=scaler.inverse_transform(train_pred_lstm) test_pred_lstm=scaler.inverse_transform(test_pred_lstm) # reshape input to be [samples, time steps, features] which is required for LSTM y_train_lstm =y_train_lstm.reshape(y_train_lstm.shape[0], 1) y_test_lstm = y_test_lstm.reshape(y_test_lstm.shape[0], 1) 
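# Undo the MinMaxScaler transform on the target vectors as well, so the LSTM error metrics
# computed further below are reported in the original price units rather than in the 0-1
# scaled space.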
y_train_lstm=scaler.inverse_transform(y_train_lstm) y_test_lstm=scaler.inverse_transform(y_test_lstm) print(y_test_lstm) print(test_pred_lstm) # shift train predictions for plotting look_back=100 trainPredictPlot = numpy.empty_like(data_scaled) trainPredictPlot[:, :] = np.nan trainPredictPlot[look_back:len(train_pred_lstm)+look_back, :] = train_pred_lstm # shift test predictions for plotting testPredictPlot = numpy.empty_like(data_scaled) testPredictPlot[:, :] = numpy.nan testPredictPlot[len(train_pred_lstm)+(look_back*2)-196:len(data)+100, :] = test_pred_lstm # plot baseline and predictions plt.figure(figsize = (16,8)) plt.plot(scaler.inverse_transform(data_scaled)) plt.plot(trainPredictPlot) plt.plot(testPredictPlot) plt.legend(['Original','Training', 'Test']) plt.show() ### Calculate RMSE performance metrics import math from sklearn.metrics import mean_squared_error print("Training LSTM Root Mean Squared error is:{}". format(math.sqrt(mean_squared_error(y_train_lstm,train_pred_lstm)))) print("Traning LSTM Mean Squared error is {}".format(mean_squared_error(y_train_lstm,train_pred_lstm))) ### Test Data RMSE print("Test LSTM Root Mean Squared error is:{}". format(math.sqrt(mean_squared_error(y_test_lstm,test_pred_lstm)))) print("Test LSTM Mean Squared error is {}".format(mean_squared_error(y_test_lstm,test_pred_lstm))) results_lstm = model.evaluate(x_test_lstm, y_test_lstm, batch_size=128) print("test loss, test acc:", results_lstm) # ### shift train predictions for plotting # look_back=100 # trainPredictPlot = numpy.empty_like(data) # trainPredictPlot[:, :] = np.nan # trainPredictPlot[look_back:len(train_predict)+look_back, :] = train_predict # # shift test predictions for plotting # testPredictPlot = numpy.empty_like(data) # testPredictPlot[:, :] = numpy.nan # testPredictPlot[len(train_predict)+(look_back*2)-196:len(data)+100, :] = test_predict # # plot baseline and predictions # plt.plot(scaler.inverse_transform(data)) # plt.plot(trainPredictPlot) # plt.plot(testPredictPlot) # plt.legend(['Original','Training', 'Test']) # plt.show() # len(data_scaled) data_scaled #Predicting to next 1 day data using previous 7 days x_input=data_scaled[len(data_scaled)-1:].reshape(1,-1) x_input.shape temp_input=list(x_input) temp_input=temp_input[0].tolist() len(temp_input) temp_input x_input # + from numpy import array lst_output=[] n_steps=1 i=0 while(i<7): if(len(temp_input)>1): #print(temp_input) x_input=np.array(temp_input[1:]) print("{} day input {}".format(i,x_input)) x_input=x_input.reshape(1,-1) x_input = x_input.reshape((1, n_steps, 1)) #print(x_input) yhat = model.predict(x_input, verbose=0) print("{} day output {}".format(i,yhat)) temp_input.extend(yhat[0].tolist()) temp_input=temp_input[1:] #print(temp_input) lst_output.extend(yhat.tolist()) i=i+1 else: x_input = x_input.reshape((1, n_steps,1)) yhat = model.predict(x_input, verbose=0) print(yhat[0]) temp_input.extend(yhat[0].tolist()) print(len(temp_input)) lst_output.extend(yhat.tolist()) i=i+1 print(lst_output) # - day_new=np.arange(1,len(lst_output)+1) day_pred=np.arange(len(lst_output)+1,len(lst_output)+8) len(day_new) lst_output day_pred.shape tx = scaler.inverse_transform(lst_output) tx plt.plot(day_new,scaler.inverse_transform(data_scaled[len(data_scaled)-7:])) plt.plot(day_pred,tx) df3=data_scaled.tolist() df3.extend(lst_output) plt.plot(df3[1500:]) df3=scaler.inverse_transform(df3).tolist() plt.plot(df3) df3[-40:] # + #train_pred_lr = modellr.predict(x_train_lr) #test_pred_lr = modellr.predict(x_test_lr) 
#train_pred_lstm=model.predict(x_train_lstm) #test_pred_lstm=model.predict(x_test_lstm) # - print(train_pred_lr-y_train_lr) print(train_pred_lstm-x_train_lstm) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] heading_collapsed=true # # 0.0. IMPORTS # + hidden=true import math import numpy as np import pandas as pd import inflection import datetime import seaborn as sns from scipy import stats as ss from tabulate import tabulate from matplotlib import pyplot as plt from IPython.display import Image from IPython.core.display import HTML from sklearn.preprocessing import RobustScaler, MinMaxScaler, LabelEncoder # + [markdown] heading_collapsed=true hidden=true # ## 0.1. Helper Functions # + hidden=true def cramer_v( x, y ): cm = pd.crosstab( x, y ).values n = cm.sum() r, k = cm.shape chi2 = ss.chi2_contingency( cm )[0] chi2cor = max( 0, chi2 - (k-1)*(r-1)/(n-1)) rcor = r - ((r-1)**2)/(n-1) kcor = k - ((k-1)**2)/(n-1) v= np.sqrt( ( chi2cor/n ) / ( min( kcor-1, rcor-1 ) ) ) return v def jupyter_settings(): # %matplotlib inline # %pylab inline plt.style.use( 'bmh' ) plt.rcParams['figure.figsize'] = [25, 12] plt.rcParams['font.size'] = 24 display( HTML( '') ) pd.options.display.max_columns = None pd.options.display.max_rows = None pd.set_option( 'display.expand_frame_repr', False ) sns.set() # + hidden=true jupyter_settings() # + [markdown] heading_collapsed=true hidden=true # ## 0.2. Loading Data # + hidden=true df_sales_raw = pd.read_csv( 'data/train.csv', low_memory=False ) df_store_raw = pd.read_csv( 'data/store.csv', low_memory=False ) # merge df_raw = pd.merge( df_sales_raw, df_store_raw, how='left', on='Store' ) # + [markdown] heading_collapsed=true # # 1.0. DESCRICAO DOS DADOS # + hidden=true df1 = df_raw.copy() # + [markdown] heading_collapsed=true hidden=true # ## 1.1. Rename Columns # + hidden=true cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo', 'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment', 'CompetitionDistance', 'CompetitionOpenSinceMonth', 'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek', 'Promo2SinceYear', 'PromoInterval'] snakecase = lambda x: inflection.underscore( x ) cols_new = list( map( snakecase, cols_old ) ) # rename df1.columns = cols_new # + [markdown] heading_collapsed=true hidden=true # ## 1.2. Data Dimensions # + hidden=true print( 'Number of Rows: {}'.format( df1.shape[0] ) ) print( 'Number of Cols: {}'.format( df1.shape[1] ) ) # + [markdown] heading_collapsed=true hidden=true # ## 1.3. Data Types # + hidden=true df1['date'] = pd.to_datetime( df1['date'] ) df1.dtypes # + [markdown] heading_collapsed=true hidden=true # ## 1.4. Check NA # + hidden=true df1.isna().sum() # + [markdown] heading_collapsed=true hidden=true # ## 1.5. 
Fillout NA # + code_folding=[] hidden=true #competition_distance df1['competition_distance'] = df1['competition_distance'].apply( lambda x: 200000.0 if math.isnan( x ) else x ) #competition_open_since_month df1['competition_open_since_month'] = df1.apply( lambda x: x['date'].month if math.isnan( x['competition_open_since_month'] ) else x['competition_open_since_month'], axis=1 ) #competition_open_since_year df1['competition_open_since_year'] = df1.apply( lambda x: x['date'].year if math.isnan( x['competition_open_since_year'] ) else x['competition_open_since_year'], axis=1 ) #promo2_since_week df1['promo2_since_week'] = df1.apply( lambda x: x['date'].week if math.isnan( x['promo2_since_week'] ) else x['promo2_since_week'], axis=1 ) #promo2_since_year df1['promo2_since_year'] = df1.apply( lambda x: x['date'].year if math.isnan( x['promo2_since_year'] ) else x['promo2_since_year'], axis=1 ) #promo_interval month_map = {1: 'Jan', 2: 'Fev', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun', 7: 'Jul', 8: 'Aug', 9: 'Sep', 10: 'Oct', 11: 'Nov', 12: 'Dec'} df1['promo_interval'].fillna( 0, inplace=True ) df1['month_map'] = df1['date'].dt.month.map( month_map ) df1['is_promo'] = df1[['promo_interval', 'month_map']].apply( lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split(',') else 0, axis=1 ) # + hidden=true df1.sample(5) # + hidden=true df1.isna().sum() # + [markdown] heading_collapsed=true hidden=true # ## 1.6. Change Data Types # + hidden=true # competition df1['competition_open_since_month'] = df1['competition_open_since_month'].astype( int ) df1['competition_open_since_year'] = df1['competition_open_since_year'].astype( int ) # promo2 df1['promo2_since_week'] = df1['promo2_since_week'].astype( int ) df1['promo2_since_year'] = df1['promo2_since_year'].astype( int ) # + [markdown] heading_collapsed=true hidden=true # ## 1.7. Descriptive Statistics # + hidden=true num_attributes = df1.select_dtypes( include=['int64', 'float64'] ) cat_attributes = df1.select_dtypes( exclude=['int64', 'float64', 'datetime64[ns]'] ) # + [markdown] heading_collapsed=true hidden=true # ### 1.7.1. Numerical Attributes # + hidden=true # Central Tendency - mean, median ct1 = pd.DataFrame( num_attributes.apply( np.mean ) ).T ct2 = pd.DataFrame( num_attributes.apply( np.median ) ).T # Dispersion - std, max, min, range, skew, kurtosis d1 = pd.DataFrame( num_attributes.apply( np.std ) ).T d2 = pd.DataFrame( num_attributes.apply( min ) ).T d3 = pd.DataFrame( num_attributes.apply( max ) ).T d4 = pd.DataFrame( num_attributes.apply( lambda x: x.max() - x.min() ) ).T d5 = pd.DataFrame( num_attributes.apply( lambda x: x.skew() ) ).T d6 = pd.DataFrame( num_attributes.apply( lambda x: x.kurtosis() ) ).T m =pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index() m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis'] m # + hidden=true sns.distplot( df1['competition_distance'], kde=False ) # + [markdown] heading_collapsed=true hidden=true # ### 1.7.2. Categories Attributes # + hidden=true cat_attributes.apply( lambda x: x.unique().shape[0] ) # + hidden=true aux = df1[( df1['state_holiday'] != '0' ) & ( df1['sales'] > 0 )] plt.subplot( 1, 3, 1 ) sns.boxplot( x='state_holiday' ,y='sales', data=aux ) plt.subplot( 1, 3, 2 ) sns.boxplot( x='store_type' ,y='sales', data=aux ) plt.subplot( 1, 3, 3 ) sns.boxplot( x='assortment' ,y='sales', data=aux ) # + [markdown] heading_collapsed=true # # 2.0. 
FEATURE ENGENIREERING # + hidden=true df2 = df1.copy() # + [markdown] heading_collapsed=true hidden=true # ## 2.1. Mapa Mental de Hipóteses # + hidden=true hide_input=false Image('img/mind_map_hyphotesis.png') # + [markdown] heading_collapsed=true hidden=true # ## 2.2. Criação das Hipóteses # + [markdown] heading_collapsed=true hidden=true # ### 2.2.1. Hipóteses Loja # + [markdown] hidden=true # **1.** Lojas com número maior de funcionários deveriam vender mais. # # **2.** Lojas com maior capacidade de estoque deveriam vender mais. # # **3.** Lojas com maior porte deveriam vender mais. # # **4.** Lojas com maior sortimentos deveriam vender mais. # # **5.** Lojas com competidores mais próximos deveriam vender menos. # # **6.** Lojas com competidores à mais tempo deveriam vendem mais. # + [markdown] heading_collapsed=true hidden=true # ### 2.2.2. Hipóteses Produto # + [markdown] hidden=true # **1.** Lojas que investem mais em Marketing deveriam vender mais. # # **2.** Lojas com maior exposição de produto deveriam vender mais. # # **3.** Lojas com produtos com preço menor deveriam vender mais. # # **4.** Lojas com promoções mais agressivas ( descontos maiores ), deveriam vender mais. # # **5.** Lojas com promoções ativas por mais tempo deveriam vender mais. # # **6.** Lojas com mais dias de promoção deveriam vender mais. # # **7.** Lojas com mais promoções consecutivas deveriam vender mais. # + [markdown] heading_collapsed=true hidden=true # ### 2.2.3. Hipóteses Tempo # + [markdown] hidden=true # **1.** Lojas abertas durante o feriado de Natal deveriam vender mais. # # **2.** Lojas deveriam vender mais ao longo dos anos. # # **3.** Lojas deveriam vender mais no segundo semestre do ano. # # **4.** Lojas deveriam vender mais depois do dia 10 de cada mês. # # **5.** Lojas deveriam vender menos aos finais de semana. # # **6.** Lojas deveriam vender menos durante os feriados escolares. # + [markdown] heading_collapsed=true hidden=true # ## 2.3. Lista Final de Hipóteses # + [markdown] hidden=true # **1.** Lojas com maior sortimentos deveriam vender mais. # # **2.** Lojas com competidores mais próximos deveriam vender menos. # # **3.** Lojas com competidores à mais tempo deveriam vender mais. # # **4.** Lojas com promoções ativas por mais tempo deveriam vender mais. # # **5.** Lojas com mais dias de promoção deveriam vender mais. # # **6.** Lojas com mais promoções consecutivas deveriam vender mais. # # **7.** Lojas abertas durante o feriado de Natal deveriam vender mais. # # **8.** Lojas deveriam vender mais ao longo dos anos. # # **9.** Lojas deveriam vender mais no segundo semestre do ano. # # **10.** Lojas deveriam vender mais depois do dia 10 de cada mês. # # **11.** Lojas deveriam vender menos aos finais de semana. # # **12.** Lojas deveriam vender menos durante os feriados escolares. # + [markdown] heading_collapsed=true hidden=true # ## 2.4. 
Feature Engineering # + hidden=true # year df2['year'] = df2['date'].dt.year # month df2['month'] = df2['date'].dt.month # day df2['day'] = df2['date'].dt.day # week of year df2['week_of_year'] = df2['date'].dt.weekofyear # year week df2['year_week'] = df2['date'].dt.strftime( '%Y-%W' ) # competition since df2['competition_since'] = df2.apply( lambda x: datetime.datetime( year=x['competition_open_since_year'], month=x['competition_open_since_month'], day=1 ), axis=1 ) df2['competition_time_month'] = ( ( df2['date'] - df2['competition_since'] )/30 ).apply( lambda x: x.days ).astype( int ) # promo since df2['promo_since'] = df2['promo2_since_year'].astype( str ) + '-' + df2['promo2_since_week'].astype( str ) df2['promo_since'] = df2['promo_since'].apply( lambda x: datetime.datetime.strptime( x + '-1', '%Y-%W-%w' ) - datetime.timedelta( days=7 ) ) df2['promo_time_week'] = ( ( df2['date'] - df2['promo_since'] )/7 ).apply( lambda x: x.days ).astype( int ) # assortment df2['assortment'] = df2['assortment'].apply( lambda x: 'basic' if x == 'a' else 'extra' if x=='b' else 'extend' ) # state holiday df2['state_holiday'] = df2['state_holiday'].apply( lambda x: 'public_holiday' if x == 'a' else 'easter_holiday' if x == 'b' else 'christmas' if x == 'c' else 'regular_day' ) # + [markdown] heading_collapsed=true # # 3.0. FILTRAGEM DE VARIÁVEIS # + hidden=true df3 = df2.copy() # + hidden=true df3.head() # + [markdown] heading_collapsed=true hidden=true # ## 3.1. Filtragem das Linhas # + hidden=true df3 = df3[( df3['open'] != '0' ) & ( df3['sales'] > 0 )] # + hidden=true df3.head() # + [markdown] heading_collapsed=true hidden=true # ## 3.2. Seleção das Colunas # + hidden=true cols_drop = ['customers', 'open', 'promo_interval', 'month_map'] df3 = df3.drop( cols_drop, axis=1 ) # + hidden=true df3.head() # + [markdown] heading_collapsed=true # # 4.0. ANALISE EXPLORATÓRIA DOS DADOS # + hidden=true df4 = df3.copy() # + hidden=true df4.head() # + [markdown] heading_collapsed=true hidden=true # ## 4.1. Análise Univariada # + [markdown] heading_collapsed=true hidden=true # ### 4.1.1. Response Variable # + hidden=true sns.distplot( df4['sales'], kde=False ) # + [markdown] heading_collapsed=true hidden=true # ### 4.1.2. Numerical Variable # + hidden=true num_attributes = df4.select_dtypes( include= ['int64', 'float64'] ) cat_attributes = df4.select_dtypes( exclude=['int64', 'float64', 'datetime64[ns]'] ) # + hidden=true num_attributes.hist( bins=25 ); # + [markdown] heading_collapsed=true hidden=true # ### 4.1.3. 
Categorical Variable # + hidden=true df4['assortment'].drop_duplicates() # + hidden=true # state_holiday plt.subplot( 3, 2, 1 ) a = df4[df4['state_holiday'] != 'regular_day'] sns.countplot( a['state_holiday'] ) plt.subplot( 3, 2, 2 ) sns.kdeplot( df4[df4['state_holiday'] == 'public_holiday']['sales'], label='public_holiday', shade=True ) sns.kdeplot( df4[df4['state_holiday'] == 'easter_holiday']['sales'], label='easter_holiday', shade=True ) sns.kdeplot( df4[df4['state_holiday'] == 'christmas']['sales'], label='christmas', shade=True ) # store_type plt.subplot( 3, 2, 3 ) sns.countplot( df4['store_type'] ) plt.subplot( 3, 2, 4 ) sns.kdeplot( df4[df4['store_type'] == 'a']['sales'], label='a', shade=True ) sns.kdeplot( df4[df4['store_type'] == 'b']['sales'], label='b', shade=True ) sns.kdeplot( df4[df4['store_type'] == 'c']['sales'], label='c', shade=True ) sns.kdeplot( df4[df4['store_type'] == 'd']['sales'], label='d', shade=True ) # assortment plt.subplot( 3, 2, 5 ) sns.countplot( df4['store_type'] ) plt.subplot( 3, 2, 6 ) sns.kdeplot( df4[df4['assortment'] == 'basic']['sales'], label='basic', shade=True ) sns.kdeplot( df4[df4['assortment'] == 'extra']['sales'], label='extra', shade=True ) sns.kdeplot( df4[df4['assortment'] == 'extend']['sales'], label='extend', shade=True ) # + [markdown] heading_collapsed=true hidden=true # ## 4.2. Análise Bivariada # + hidden=true df4.head() # + [markdown] heading_collapsed=true hidden=true # ### **H1.** Lojas com maior sortimentos deveriam vender mais. # **FALSA** Lojas com COMPETIDORES MAIS PROXIMOS vendem MAIS # + hidden=true hide_input=false aux1 = df4[['assortment', 'sales']].groupby( 'assortment' ).sum().reset_index() sns.barplot( x='assortment', y='sales', data=aux1 ) aux2 = df4[['year_week', 'assortment', 'sales']].groupby( ['year_week', 'assortment'] ).sum().reset_index() aux2.pivot( index='year_week', columns='assortment', values='sales' ).plot() aux3 = aux2[aux2['assortment'] == 'extra'] aux3.pivot( index='year_week', columns='assortment', values='sales' ).plot() # + [markdown] heading_collapsed=true hidden=true # ### **H2.** Lojas com competidores mais próximos deveriam vender menos. # **FALSA** Lojas com COMPETIDORES MAIS PROXMIMOS vendem MAIS. # + hidden=true aux1 = df4[['competition_distance', 'sales']].groupby( 'competition_distance' ).sum().reset_index() plt.subplot( 1, 3, 1 ) sns.scatterplot( x='competition_distance', y='sales', data=aux1 ); plt.subplot( 1, 3, 2 ) bins = list( np.arange(0, 20000, 1000) ) aux1['competition_distance_binned'] = pd.cut( aux1['competition_distance'], bins=bins ) aux2 = aux1[['competition_distance_binned', 'sales']].groupby( 'competition_distance_binned' ).sum().reset_index() sns.barplot( x='competition_distance_binned', y='sales', data=aux2 ); plt.subplot( 1, 3, 3 ) sns.heatmap( aux1.corr( method='pearson' ), annot=True ) # + [markdown] heading_collapsed=true hidden=true # ### **H3.** Lojas com competidores à mais tempo deveriam vender mais. # **FALSA** Lojas com COMPETIDORES À MAIS TEMPO vendem MENOS. 
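# + [markdown] hidden=true
# The hypothesis checks in this section all repeat the same pattern: aggregate sales
# by one feature, then draw a bar plot, a regression plot and a correlation heatmap.
# A small helper like the sketch below (the function name and arguments are
# illustrative, not part of the original notebook) could factor out that repetition.

# + hidden=true
def plot_hypothesis(df, feature, target='sales'):
    """Aggregate `target` by `feature` and show bar, regression and correlation plots."""
    aux = df[[feature, target]].groupby(feature).sum().reset_index()
    plt.subplot(1, 3, 1)
    sns.barplot(x=feature, y=target, data=aux)
    plt.xticks(rotation=90)
    plt.subplot(1, 3, 2)
    sns.regplot(x=feature, y=target, data=aux)
    plt.subplot(1, 3, 3)
    sns.heatmap(aux.corr(method='pearson'), annot=True)

# e.g. plot_hypothesis(df4, 'competition_time_month')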
# + hidden=true plt.subplot( 1, 3, 1 ) aux1 = df4[['competition_time_month', 'sales']].groupby( 'competition_time_month' ).sum().reset_index() aux2 = aux1[( aux1['competition_time_month'] < 120 ) & ( aux1['competition_time_month'] != 0 )] sns.barplot( x='competition_time_month', y='sales', data=aux2 ); plt.xticks( rotation=90 ); plt.subplot( 1, 3, 2 ) sns.regplot( x='competition_time_month', y='sales', data=aux2 ); plt.subplot( 1, 3, 3 ) sns.heatmap( aux1.corr( method='pearson' ), annot=True ) # + [markdown] heading_collapsed=true hidden=true # ### **H4.** Lojas com promoções ativas por mais tempo deveriam vender mais. # **FALSA** Loja com PROMOÇÔES ATIVAS POR MAIS TEMPO vendem MENOS, depois de um certo tempo de promoção. # + hidden=true aux1 = df4[['promo_time_week', 'sales']].groupby( 'promo_time_week' ).sum().reset_index() grid = GridSpec( 2,3 ) plt.subplot( grid[0,0] ) aux2 = aux1[aux1['promo_time_week'] > 0] # promo extendido sns.barplot( x='promo_time_week', y='sales', data=aux2 ); plt.xticks( rotation=90 ); plt.subplot( grid[0,1] ) sns.regplot( x='promo_time_week', y='sales', data=aux2 ); plt.subplot( grid[1,0] ) aux3 = aux1[aux1['promo_time_week'] < 0] # promo regular sns.barplot( x='promo_time_week', y='sales', data=aux3 ); plt.xticks( rotation=90 ); plt.subplot( grid[1,1] ) sns.regplot( x='promo_time_week', y='sales', data=aux3 ); plt.subplot( grid[:,2] ) sns.heatmap( aux1.corr( method='pearson' ), annot=True ); # + [markdown] heading_collapsed=true hidden=true # ### **H5.** Lojas com mais dias de promoção deveriam vender mais. # + [markdown] heading_collapsed=true hidden=true # ### **H6.** Lojas com mais promoções consecutivas deveriam vender mais. # **FALSA** Lojas com mais promoções consecutivas vendem menos # + hidden=true df4.columns # + hidden=true df4[['promo', 'promo2', 'sales']].groupby( ['promo', 'promo2'] ).sum().reset_index() # + hidden=true aux1 = df4[( df4['promo'] == 1 ) & ( df4['promo2'] == 1 )][['year_week','sales']].groupby( 'year_week' ).sum().reset_index() ax = aux1.plot() aux2 = df4[( df4['promo'] == 1 ) & ( df4['promo2'] == 0 )][['year_week','sales']].groupby( 'year_week' ).sum().reset_index() aux2.plot( ax=ax ) ax.legend( labels=['Tradicional & Extendida', 'Tradicional']); # + [markdown] heading_collapsed=true hidden=true # ### **H7.** Lojas abertas durante o feriado de Natal deveriam vender mais. # **FALSA** Lojas abertas durante o feriado de Natal vendem menos. # + hidden=true aux = df4[df4['state_holiday'] != 'regular_day'] plt.subplot( 1, 2, 1 ) aux1 = aux[['state_holiday', 'sales']].groupby( 'state_holiday' ).sum().reset_index() sns.barplot( x='state_holiday', y='sales', data=aux1 ); plt.subplot( 1, 2, 2 ) aux2 = aux[['year', 'state_holiday', 'sales']].groupby( ['year', 'state_holiday'] ).sum().reset_index() sns.barplot( x='year', y='sales', hue='state_holiday', data=aux2 ); # + [markdown] heading_collapsed=true hidden=true # ### **8.** Lojas deveriam vender mais ao longo dos anos. # **FALSA** Lojas vendem menos ao longo dos anos. # + hidden=true aux1 = df4[['year', 'sales']].groupby( 'year' ).sum().reset_index() plt.subplot( 1, 3, 1 ) sns.barplot( x='year', y='sales', data=aux1 ); plt.subplot( 1, 3, 2 ) sns.regplot( x='year', y='sales', data=aux1 ); plt.subplot( 1, 3, 3 ) sns.heatmap( aux1.corr( method='pearson' ), annot=True ) # + [markdown] heading_collapsed=true hidden=true # ### **9.** Lojas deveriam vender mais no segundo semestre do ano. 
# **FALSA** Lojas vendem menos no segundo semestre # + hidden=true aux1 = df4[['month', 'sales']].groupby( 'month' ).sum().reset_index() plt.subplot( 1, 3, 1 ) sns.barplot( x='month', y='sales', data=aux1 ); plt.subplot( 1, 3, 2 ) sns.regplot( x='month', y='sales', data=aux1 ); plt.subplot( 1, 3, 3 ) sns.heatmap( aux1.corr( method='pearson' ), annot=True ) # + [markdown] heading_collapsed=true hidden=true # ### **10.** Lojas deveriam vender mais depois do dia 10 de cada mês. # **VERDADEIRA** Lojas vendem mais depois do dia 10 de cada mês # + hidden=true aux1 = df4[['day', 'sales']].groupby( 'day' ).sum().reset_index() plt.subplot( 2, 2, 1 ) sns.barplot( x='day', y='sales', data=aux1 ); plt.subplot( 2, 2, 2 ) sns.regplot( x='day', y='sales', data=aux1 ); plt.subplot( 2, 2, 3 ) sns.heatmap( aux1.corr( method='pearson' ), annot=True ); aux1['before_after'] = aux1['day'].apply( lambda x: 'before_10_days' if x <=10 else 'after_10_days' ) aux2 = aux1[['before_after', 'sales']].groupby( 'before_after' ).sum().reset_index() plt.subplot( 2, 2, 4 ) sns.barplot( x='before_after', y='sales', data=aux2 ); # + [markdown] heading_collapsed=true hidden=true # ### **11.** Lojas deveriam vender menos aos finais de semana. # **VERDADEIRA** Lojas vendem menos aos finais de semana # + hidden=true aux1 = df4[['day_of_week', 'sales']].groupby( 'day_of_week' ).sum().reset_index() plt.subplot( 1, 3, 1 ) sns.barplot( x='day_of_week', y='sales', data=aux1 ); plt.subplot( 1, 3, 2 ) sns.regplot( x='day_of_week', y='sales', data=aux1 ); plt.subplot( 1, 3, 3 ) sns.heatmap( aux1.corr( method='pearson'), annot=True ); # + [markdown] heading_collapsed=true hidden=true # ### **12.** Lojas deveriam vender menos durante os feriados escolares. # **VERDADEIRA** Lojas vendem menos durante os feriados escolares, exceto julho e agosto # + hidden=true aux1 = df4[['school_holiday', 'sales']].groupby( 'school_holiday' ).sum().reset_index() plt.subplot( 2, 1, 1 ) sns.barplot( x='school_holiday', y='sales', data=aux1 ); plt.subplot( 2, 1, 2 ) aux2 = df4[['month', 'school_holiday', 'sales']].groupby( ['month', 'school_holiday'] ).sum().reset_index() sns.barplot( x='month', y='sales', hue='school_holiday', data=aux2 ); # + [markdown] hidden=true # ### 4.2.1. Resumo das Hipóteses # + hidden=true from tabulate import tabulate # + hidden=true tab = [['Hipoteses', 'Conclusao', 'Relevancia'], ['H1', 'Falsa', 'Baixa'], ['H2', 'Falsa', 'Media'], ['H3', 'Falsa', 'Media'], ['H4', 'Falsa', 'Baixa'], ['H5', '-', '-'], ['H6', 'Falsa', 'Baixa'], ['H7', 'Falsa', 'Media'], ['H8', 'Falsa', 'Alta'], ['H9', 'Falsa', 'Alta'], ['H10', 'Verdadeira', 'Alta'], ['H11', 'Verdadeira', 'Alta'], ['H12', 'Verdadeira', 'Baixa']] print( tabulate( tab, headers='firstrow') ) # + hidden=true ## 4.3. Análise Multivariada # + hidden=true ### 4.3.1. Numerical Attributes # + hidden=true correlation = num_attributes.corr( method='pearson' ) sns.heatmap( correlation, annot=True ); # + [markdown] hidden=true # ### 4.3.2. 
Category Attributes # + hidden=true # only categorical data a = df4.select_dtypes( include='object' ) # calculate Cramer V a1 = cramer_v( a['state_holiday'], a['state_holiday'] ) a2 = cramer_v( a['state_holiday'], a['store_type'] ) a3 = cramer_v( a['state_holiday'], a['assortment'] ) a4 = cramer_v( a['store_type'], a['state_holiday'] ) a5 = cramer_v( a['store_type'], a['store_type'] ) a6 = cramer_v( a['store_type'], a['assortment'] ) a7 = cramer_v( a['assortment'], a['state_holiday'] ) a8 = cramer_v( a['assortment'], a['store_type'] ) a9 = cramer_v( a['assortment'], a['assortment'] ) # final dataset d =pd.DataFrame( { 'state_holiday': [a1, a2, a3], 'store_type':[a2, a3, a4], 'assortment':[a7, a8, a9] } ) d = d.set_index( d.columns ) sns.heatmap( d, annot=True ) # + [markdown] heading_collapsed=true # # 5.0. DATA PREPARATION # + hidden=true df5 = df4.copy() # + [markdown] heading_collapsed=true hidden=true # ## 5.1. Normalization # + hidden=true # + [markdown] heading_collapsed=true hidden=true # ## 5.2. Rescaling # + hidden=true rs = RobustScaler() mms = MinMaxScaler() # competition distance df5['competition_distance'] = rs.fit_transform( df5[['competition_distance']].values ) # competition time month df5['competition_time_month'] = rs.fit_transform( df5[['competition_time_month']].values ) # promo time week df5['promo_time_week'] = mms.fit_transform( df5[['promo_time_week']].values) # year df5['year'] = mms.fit_transform( df5[['year']].values) # + [markdown] heading_collapsed=true hidden=true # ## 5.3. Transfromation # + [markdown] hidden=true # ### 5.3.1. Encoding # + hidden=true # state_holiday - One Hot Encoding df5 = pd.get_dummies( df5, prefix=['state_holiday'], columns=['state_holiday']) # store_type - Label Encoding le = LabelEncoder() df5['assortment'] = le.fit_transform( df5['assortment'] ) # assortment - Ordinal Encoding assortment_dict = {'basic': 1, 'extra': 2, 'extended': 3} df5['assortment'] = df5['assortment'].map( assortment_dict ) # + [markdown] hidden=true # ### 5.3.2. Respose Variable Transformation # + hidden=true df5['sales'] = np.log1p( df5['sales'] ) # + [markdown] hidden=true # ### 5.3.3. Nature Transformation # + hidden=true # day of week df5['day_of_week_sin'] = df5['day_of_week'].apply( lambda x: np.sin( x * ( 2. * np.pi/7 ) ) ) df5['day_of_week_cos'] = df5['day_of_week'].apply( lambda x: np.cos( x * ( 2. * np.pi/7 ) ) ) # month df5['month_sin'] = df5['month'].apply( lambda x: np.sin( x * ( 2. * np.pi/12 ) ) ) df5['month_cos'] = df5['month'].apply( lambda x: np.cos( x * ( 2. * np.pi/12 ) ) ) # day df5['day_sin'] = df5['day'].apply( lambda x: np.sin( x * ( 2. * np.pi/30 ) ) ) df5['day_cos'] = df5['day'].apply( lambda x: np.cos( x * ( 2. * np.pi/30 ) ) ) # week of year df5['week_of_year_sin'] = df5['week_of_year'].apply( lambda x: np.sin( x * ( 2. * np.pi/52 ) ) ) df5['week_of_year_cos'] = df5['week_of_year'].apply( lambda x: np.cos( x * ( 2. 
* np.pi/52 ) ) ) # + hidden=true df5.head() # + hidden=true # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="QeOwh6G4ZmgB" colab_type="text" # # Download data from Kaggle # # + id="beDyuXvFXtKP" colab_type="code" colab={} import os os.environ['KAGGLE_USERNAME'] = "xxxxx" # username from the json file os.environ['KAGGLE_KEY'] = "" # key from the json file # !kaggle competitions download -c fake-news # + id="BmakwVF_Y3wa" colab_type="code" colab={} import zipfile with zipfile.ZipFile('test.csv.zip', 'r') as zip_ref: zip_ref.extractall('') with zipfile.ZipFile('train.csv.zip', 'r') as zip_ref: zip_ref.extractall('') # + id="Fin0tJeXZHxu" colab_type="code" colab={} import pandas as pd test_df = pd.read_csv('test.csv') train_df = pd.read_csv('train.csv') submit_df = pd.read_csv('submit.csv') test_df['label'] = submit_df['label'] print("Rows in test: %s " % len(test_df)) print("Rows in train: %s " % len(train_df)) # + id="SiavXU9T1w6b" colab_type="code" colab={} train_df.head(5) # + [markdown] id="OeyIBvzROrik" colab_type="text" # # Imports # + id="pE3fYoQ_OnGr" colab_type="code" colab={} import torch import torch.nn as nn import math import random import string import time import matplotlib.pyplot as plt # %matplotlib inline # + [markdown] id="SE12lLEwO__C" colab_type="text" # # Preparing Data # + id="TcT3ebt1O_zj" colab_type="code" colab={} # Generate a list of tuples (title, label) for each data row def read_data(dataframe): df = dataframe[['title', 'label']] df = df.dropna(subset=['title', 'label']) return [tuple(x) for x in df.to_numpy()] def random_training_pair(pairs): rand_index = random.randint(0, len(pairs) - 1) return pairs[rand_index] # + [markdown] id="zPau7B-fPBpc" colab_type="text" # # Establish Tensors # + id="aNERZ6qoVNJN" colab_type="code" colab={} all_characters = string.printable n_characters = len(all_characters) # + id="4I_k-E_JO9TQ" colab_type="code" colab={} # Turns line into tensor def line_to_tensor(line): tensor = torch.zeros(len(line), 1, n_characters) for li, letter in enumerate(line): tensor[li][0][all_characters.find(letter)] = 1 return tensor # Turns label into <1 x 1> tensor def label_to_tensor(label): return torch.tensor([label], dtype=torch.long) # + [markdown] id="Rlw1SlbwPCXC" colab_type="text" # # Create Network # + id="uOXyfr5RPClH" colab_type="code" colab={} class RNNClassify(nn.Module): def __init__(self, input_size, hidden_size, output_size): super(RNNClassify, self).__init__() self.hidden_size = hidden_size # Initialize linear and softmax layers self.i2h = nn.Linear(input_size + hidden_size, hidden_size) self.i2o = nn.Linear(input_size + hidden_size, output_size) self.softmax = nn.LogSoftmax(dim=1) def forward(self, input, hidden=None): # Concatenate input tensor and hidden state combined = torch.cat((input, hidden), 1) hidden = self.i2h(combined) output = self.i2o(combined) output = self.softmax(output) return output, hidden def init_hidden(self): return torch.zeros(1, self.hidden_size) # Takes the category with the highest probability as a guess def category_from_output(output): top_n, top_i = output.topk(1) return top_i[0].item() # + [markdown] id="Q8FEYN0pPC0y" colab_type="text" # # Train Network # + id="cnpW6cFbPDHh" colab_type="code" colab={} # Helper function to display how long the training has been running def time_since(since): now = time.time() s = now - since m = 
math.floor(s / 60) s -= m * 60 return '%dm %ds' % (m, s) # Creates function that performs a step in the training loop def make_train_step(model, criterion, optimizer): def train_step(x, y): # Sets model to TRAIN mode model.train() # Initialize hidden state hidden = model.init_hidden() # Makes predictions, running through each letter tensor for i in range(x.size()[0]): output, hidden = model(x[i], hidden) # Computes loss loss = criterion(output, y) # Computes gradients loss.backward() # Updates parameters optimizer.step() optimizer.zero_grad() return output, loss.item() return train_step # Run training on a given dataframe def run(train_df, plot=False): n_iters = 100000 print_every = 5000 plot_every = 1000 hidden_len = 256 current_loss = 0 all_losses = [] model = RNNClassify(n_characters, hidden_len, 2) data_tuples = read_data(train_df) # Create the optimizer and loss function (criterion) optimizer = torch.optim.SGD(model.parameters(), lr=0.0002) criterion = nn.NLLLoss() train_step = make_train_step(model, criterion, optimizer) start = time.time() for i in range(1, n_iters + 1): # Get data and turn input/target into tensors title, label = random_training_pair(data_tuples) input_tensor = line_to_tensor(title) target_tensor = label_to_tensor(label) # Run one training step output, loss = train_step(input_tensor, target_tensor) # The rest of the code in this function is to show how # the network is learning current_loss += loss if i % print_every == 0: guess = category_from_output(output) correct = '✓' if label == guess else '✗' print('%d %d%% (%s) %.4f %s %s' % (i, i / n_iters * 100, time_since(start), loss, title, correct)) if i % plot_every == 0: all_losses.append(current_loss / plot_every) current_loss = 0 if plot: plt.figure() plt.plot(all_losses) # Save the model torch.save(model.state_dict(), "test.model") run(train_df, plot=True) # + [markdown] id="ChSpuUmVPDSN" colab_type="text" # # Evaluate Model # + id="7a2-Ry-NPD_G" colab_type="code" colab={} # Predict the label given a title def evaluate(title, model): model.eval() hidden = model.init_hidden() input_tensor = line_to_tensor(title) for i in range(input_tensor.size()[0]): output, hidden = model(input_tensor[i], hidden) return category_from_output(output) # Calculate accuracy, recall, and precision on a given test dataframe def calculate_accuracy(model, test_df): false_positives = 0 false_negatives = 0 true_positives = 0 true_negatives = 0 tuples = read_data(test_df) for title, label in tuples: prediction = evaluate(title, model) if label == prediction and label: true_positives += 1 if label == prediction and not label: true_negatives += 1 if label != prediction and label: false_negatives += 1 if label != prediction and not label: false_positives += 1 accuracy = (true_positives + true_negatives) / len(test_df) recall = true_positives / (true_positives + false_negatives) precision = true_positives / (true_positives + false_positives) return accuracy, recall, precision # + id="AGXsoIR9RbIB" colab_type="code" colab={} model = RNNClassify(n_characters, 256, 2) model.load_state_dict(torch.load('test.model')) calculate_accuracy(model, test_df) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: kgtk_env # language: python # name: kgtk_env # --- from cosine_similarity import CosineSimilarity import json import pandas as pd from IPython.display import display, HTML v_path = 
'/Users/amandeep/Github/maa-analysis/MAA_Datasets/v3.2.0' cs = CosineSimilarity(f'{v_path}/text_embeddings_all.tsv', f'{v_path}/qnodes-properties-labels-for-V3.2.0_KB.tsv') r = [] r.append(cs.compute_similarity('Q37828', 'Q271997')) r.append(cs.compute_similarity('Q48989064', 'Q127956')) r.append(cs.compute_similarity('Q271997', 'Q159683')) r.append(cs.compute_similarity('Q48989064', 'Q271997')) df = pd.DataFrame(r) display(HTML(df.to_html())) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Linear Regression # + from statistics import mean import numpy as np import matplotlib.pyplot as plt from matplotlib import style import random style.use('fivethirtyeight') # + # xs = np.array([1,2,3,4,5,6], dtype=np.float64) # ys = np.array([5,4,6,5,6,7], dtype=np.float64) # plt.scatter(xs,ys) # plt.show() # - def create_dataset(hm, varience, step=2, correlation=False): val = 1 ys = [] for i in range(hm): y = val + random.randrange(-varience, varience) ys.append(y) if correlation and correlation == 'pos': val += step elif correlation and correlation == 'neg': val -= step xs = [i for i in range(len(ys))] return np.array(xs, dtype=np.float64), np.array(ys, dtype=np.float64) # We can test our assumption using varience. If varience decreases, r-square increases xs, ys = create_dataset(40, 40, 2, correlation='pos') plt.scatter(xs,ys) plt.show() def best_fit_slope_and_intercept(xs, ys): m = ((mean(xs) * mean(ys)) - mean(xs*ys)) / ((mean(xs)**2) - mean(xs**2)) b = mean(ys) - m * mean(xs) return m, b m, b = best_fit_slope_and_intercept(xs, ys) print(m) print(b) regression_line = [(m*x)+b for x in xs] print(regression_line) plt.scatter(xs,ys) plt.plot(regression_line) plt.show() predict_x = 8 predict_y = (m*predict_x) + b plt.scatter(xs,ys) plt.plot(regression_line) plt.scatter(predict_x,predict_y, color='red', s=100) plt.show() # This line is good fit line but not the best fit line. 
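# As a quick sanity check (a sketch, assuming `xs`, `ys`, `m` and `b` from above),
# the hand-derived slope and intercept should agree with NumPy's least-squares fit:

m_np, b_np = np.polyfit(xs, ys, 1)
print(m_np, b_np)  # expected to be close to m and b
assert np.isclose(m_np, m) and np.isclose(b_np, b)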
def squared_error(ys_orig,ys_line): return sum((ys_line-ys_orig)**2) def coeff_of_determination(ys_orig, ys_line): # y_mean_line = [mean(y) for y in ys_orig] # y_mean_line = mean(ys_orig) y_mean_line = [mean(ys_orig)] * len(ys_orig) squared_error_regr = squared_error(ys_orig,ys_line) squared_error_regr_y_mean = squared_error(ys_orig,y_mean_line) return 1 - (squared_error_regr/squared_error_regr_y_mean) r_squared = coeff_of_determination(ys, regression_line) print(r_squared) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import os, sys sys.path.insert(0, os.environ['PROJECT_PATH']) # + import pandas as pd from sklearn.feature_extraction.text import TfidfVectorizer from config.resources import path_to # - jbv_df = pd.read_csv(path_to['jbv_meta']) # + import matplotlib.pyplot as plt # %matplotlib inline # + jbv_df['abstract_len'] = jbv_df['AB'].dropna().apply(lambda a: len(a.split())) plt.plot(jbv_df['abstract_len'].tolist()) # + abstracts = jbv_df['AB'].dropna() nb_features = 600 tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, max_features=nb_features, stop_words='english') tfidf = tfidf_vectorizer.fit_transform(abstracts) tfidf_feature_names = tfidf_vectorizer.get_feature_names() # + from sklearn.decomposition import NMF nb_topics = 20 # Run NMF nmf = NMF(n_components=nb_topics, random_state=1, alpha=.1, l1_ratio=.5, init='nndsvd').fit(tfidf) # - def display_topics(model, feature_names, no_top_words): for topic_idx, topic in enumerate(model.components_): print "Topic %d:" % (topic_idx) print " ".join([feature_names[i] for i in topic.argsort()[:-no_top_words - 1:-1]]) nb_top_words = 10 display_topics(nmf, tf_feature_names, nb_top_words) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Calculate location colours # The colours for locations are calcuated and stored as a YAML file. # ## Imports from ag3 import release_data ag3_data = release_data() import yaml import matplotlib.pyplot as plt import pandas as pd import cartopy as cart import cartopy.crs as ccrs import matplotlib.patches as patches import numpy as np # REQUIRES: https://pypi.org/project/colormath/ # pip install colormath from colormath.color_objects import sRGBColor, LabColor from colormath.color_conversions import convert_color # ## Settings output_path = 'location_colours.yaml' # Colour space lightness = 50 # At 1, the midpoint appears black; at 100 it appears white. All points appear white around 700; black around -300. colour_blend = 70 # At 1, all points appear equal grey, by `lightness`. Above 500, colours regions have harsh boundaries. Below 70, colours are greyish. 
centre_at_equator = False centre_at_prime_meridian = False colour_space_orientation = {'NE': 'cyan', 'SE': 'green', 'SW': 'red', 'NW': 'magenta'} use_location_min_max_as_colour_space = False # Used these when use_location_min_max_as_colour_space = False: colour_space_min_lat = -35 # southern tip of Africa colour_space_max_lat = 20 # as north as the Sahara (roughly) colour_space_min_long = -26 # west of Cape Verde colour_space_max_long = 60 # east of Mauritius # ### Eyeball colour space mapping # + ax = plt.axes(projection=ccrs.PlateCarree()) ax.stock_img() xlim = ([colour_space_min_long - 10, colour_space_max_long + 10]) ylim = ([colour_space_min_lat - 10, colour_space_max_lat + 10]) ax.set_xlim(xlim) ax.set_ylim(ylim) rect = patches.Rectangle( (colour_space_min_long, colour_space_min_lat), (colour_space_max_long - colour_space_min_long), (colour_space_max_lat - colour_space_min_lat), linewidth=1, edgecolor='black', facecolor='none', linestyle='--' ) ax.add_patch(rect) plt.show() # - # ## Functions def latlong_to_rgb_hex_via_lab(lat, long, min_lat=-90, max_lat=90, min_long=-180, max_long=180, centre_at_equator=False, centre_at_prime_meridian=False, colour_space_orientation={'NE': 'cyan', 'SE': 'green', 'SW': 'red', 'NW': 'magenta'}, lightness=50, colour_blend=70 ): observer = '2' # Observer angle: default '2', alternative '10' degrees illuminant = 'd50' # default 'd50' # When observer '2', illuminant can have {'a', 'b', 'c', 'd50', 'd55', 'd65', 'd75', 'e', 'f2', 'f7', 'f11'} # When observer '10', illuminant can have {'d50', 'd55', 'd65', 'd75'} # Handle options to centre on the equator and/or the prime meridian if centre_at_equator: if -min_lat > max_lat: min_lat = -max_lat if centre_at_prime_meridian: if -min_long > max_long: min_Long = -max_long # Scale the lat-long 0 to 1, relative to the specified min and max lat_extent = max_lat - min_lat long_extent = max_long - min_long scaled_lat = (lat + abs(min_lat)) / lat_extent scaled_long = (long + abs(min_long)) / long_extent # Convert to range (-1 to 1) * f, according to the specified colour_space_orientation if colour_space_orientation == {'NE': 'red', 'SE': 'green', 'SW': 'cyan', 'NW': 'magenta'}: a = ((scaled_lat * 2) - 1) * colour_blend b = ((scaled_long * 2) - 1) * colour_blend elif colour_space_orientation == {'NE': 'red', 'SE': 'magenta', 'SW': 'cyan', 'NW': 'green'}: a = ((scaled_long * 2) - 1) * colour_blend b = ((scaled_lat * 2) - 1) * colour_blend elif colour_space_orientation == {'NE': 'cyan', 'SE': 'green', 'SW': 'red', 'NW': 'magenta'}: a = (1 - (scaled_long * 2)) * colour_blend b = (1 - (scaled_lat * 2)) * colour_blend elif colour_space_orientation == {'NE': 'cyan', 'SE': 'magenta', 'SW': 'red', 'NW': 'green'}: a = (1 - (scaled_lat * 2)) * colour_blend b = (1 - (scaled_long * 2)) * colour_blend else: raise ValueError(f'Unhandled colour_space_orientation: {colour_space_orientation}') # Without "clamping" or adjusting the scale factor (* 255 above), some colours will fall outside compatible space and raise an error during `get_rgb_hex()` # From https://python-colormath.readthedocs.io/en/latest/conversions.html # "RGB spaces tend to have a smaller gamut than some of the CIE color spaces. When converting to RGB, this can cause some of the coordinates to end up being out of the acceptable range (0.0-1.0 or 1-255, depending on whether your RGB color is upscaled)." # "Rather than clamp these for you, we leave them as-is. This allows for more accurate conversions back to the CIE color spaces. 
If you require the clamped (0.0-1.0 or 1-255) values, use the following properties on any RGB color: clamped_rgb_r, clamped_rgb_g, clamped_rgb_b" # Switch depending on supplied param type if isinstance(lat, int): lab = LabColor(lightness, a, b, observer, illuminant) rgb = convert_color(lab, sRGBColor) clamped_rgb = sRGBColor(rgb.clamped_rgb_r, rgb.clamped_rgb_g, rgb.clamped_rgb_b, is_upscaled=False) rgb_hex = clamped_rgb.get_rgb_hex() return rgb_hex elif isinstance(lat, pd.core.series.Series): # TODO: vectorize rgb_hex = [] for i in range(len(lat)): lab = LabColor(lightness, a[i], b[i], observer, illuminant) rgb = convert_color(lab, sRGBColor) clamped_rgb = sRGBColor(rgb.clamped_rgb_r, rgb.clamped_rgb_g, rgb.clamped_rgb_b, is_upscaled=False) rgb_hex.append(clamped_rgb.get_rgb_hex()) return rgb_hex else: raise ValueError('Unhandled parameter types') # ## Spot check the lat-long colour conversion spot_check_data = [] for lat in [-90, -45, 0, 45, 90]: for long in [-180, -90, 0, 90, 180]: spot = {'lat': lat, 'long': long, 'colour': latlong_to_rgb_hex_via_lab(lat, long, centre_at_equator=centre_at_equator, centre_at_prime_meridian=centre_at_prime_meridian, colour_space_orientation=colour_space_orientation, lightness=lightness, colour_blend=colour_blend ) } spot_check_data.append(spot) spot_check_df = pd.DataFrame(spot_check_data) spot_check_df.head() # + # Eyeball some lat-long to rgb_hex conversions fig = plt.figure(figsize=(8, 8)) ax = plt.axes(projection=ccrs.PlateCarree()) ax.coastlines() ax.add_feature(cart.feature.OCEAN, zorder=0, edgecolor='k', facecolor='lightblue') plt.scatter(spot_check_df["long"], spot_check_df["lat"], c=spot_check_df["colour"], s=100_00) plt.show() # - # ## Get the locations of all of the wild samples wild_sample_meta = ag3_data.load_sample_set_metadata(ag3_data.all_wild_sample_sets) wild_sample_meta.head() # Check that there aren't different lat-longs for the same country-location all_locations = wild_sample_meta[['country', 'location', 'latitude', 'longitude']] unique_locations = all_locations.drop_duplicates(['country', 'location', 'latitude', 'longitude']) duplicate_locations_by_lat_long = unique_locations.duplicated(['latitude', 'longitude'], keep=False) unique_locations[duplicate_locations_by_lat_long] # + # TODO: Resolve/Explain these duplicate lat-longs # - # We only need the unique locations locations = wild_sample_meta[['country', 'location', 'latitude', 'longitude']].drop_duplicates().reset_index(drop=True) locations # ## Calculate the location colours # + if use_location_min_max_as_colour_space: min_lat = locations['latitude'].min() max_lat = locations['latitude'].max() min_long = locations['longitude'].min() max_long = locations['longitude'].max() else: min_lat = colour_space_min_lat max_lat = colour_space_max_lat min_long = colour_space_min_long max_long = colour_space_max_long locations['rgb_hex'] = latlong_to_rgb_hex_via_lab(locations['latitude'], locations['longitude'], min_lat=min_lat, max_lat=max_lat, min_long=min_long, max_long=max_long, centre_at_equator=centre_at_equator, centre_at_prime_meridian=centre_at_prime_meridian, colour_space_orientation=colour_space_orientation, lightness=lightness, colour_blend=colour_blend ) locations # - # ## Check the location colours on a map # + ax = plt.axes(projection=ccrs.PlateCarree()) ax.stock_img() #ax.coastlines(color='grey') xlim = ([locations["longitude"].min() - 10, locations["longitude"].max() + 10]) ylim = ([locations["latitude"].min() - 10, locations["latitude"].max() + 10]) ax.set_xlim(xlim) 
ax.set_ylim(ylim) plt.scatter(locations["longitude"], locations["latitude"], edgecolors=locations["rgb_hex"], alpha=1, s=100, color='None', linewidth=2) plt.show() # - # ## Make a reference colour map # Generate lat-longs limited to the same boundaries # TODO: use a less brutal method ref_lat_longs = [] lat_step = 1 long_step = 1 for lat in np.arange(min_lat, max_lat, lat_step): for long in np.arange(min_long, max_long, long_step): ref_lat_longs.append({'lat': lat, 'long': long}) ref_lat_longs[0] # Add the generated lat-longs to a DataFrame, for convenience ref_locations_df = pd.DataFrame(data=ref_lat_longs, columns=['lat', 'long']) ref_locations_df.head() # Get the colours for the lat-longs ref_locations_df['rgb_hex'] = latlong_to_rgb_hex_via_lab(ref_locations_df['lat'], ref_locations_df['long'], min_lat=min_lat, max_lat=max_lat, min_long=min_long, max_long=max_long, centre_at_equator=centre_at_equator, colour_space_orientation=colour_space_orientation, lightness=lightness, colour_blend=colour_blend ) ref_locations_df.head() # ### Plot the coloured lat-longs on a map # + fig = plt.figure(figsize=(10, 10)) ax = plt.axes(projection=ccrs.PlateCarree()) xlim = ([-20, 60]) # west, east (long) ylim = ([-40, 40]) # south, north (lat) ax.set_xlim(xlim) ax.set_ylim(ylim) ax.add_feature(cart.feature.OCEAN, zorder=1, edgecolor='k', facecolor='white') s = 45 # Size of square found manually plt.scatter(ref_locations_df["long"], ref_locations_df["lat"], c=ref_locations_df["rgb_hex"], marker='s', s=s) plt.show() # - # ### Check the locations on the reference map # + fig = plt.figure(figsize=(10, 10)) ax = plt.axes(projection=ccrs.PlateCarree()) xlim = ([min_long - 3, max_long + 7]) # west, east (long) ylim = ([min_lat - 15, max_lat + 25]) # south, north (lat) ax.set_xlim(xlim) ax.set_ylim(ylim) ax.add_feature(cart.feature.OCEAN, zorder=1, edgecolor='k', facecolor='white') plt.scatter(ref_locations_df["long"], ref_locations_df["lat"], c=ref_locations_df["rgb_hex"], marker='s', s=s) rect = patches.Rectangle( (min_long, min_lat), (max_long - min_long), (max_lat - min_lat), linewidth=1, edgecolor='black', facecolor='none', linestyle='--', zorder=2 ) ax.add_patch(rect) plt.plot([min_long - 3, max_long + 7], [0, 0], color='black', linewidth=3, marker='None', linestyle='--') plt.scatter(locations["longitude"], locations["latitude"], c=locations["rgb_hex"], edgecolor='k', alpha=1, s=100, linewidth=1.5, zorder=3) plt.show() # - # ## Store the location colours as a file # Get the location_labels and their corresponding rgb_hex colours location_colours = locations[['country', 'location', 'rgb_hex']].set_index(['country', 'location']) location_colours # ### Convert the DataFrame values into a dictionary # + # Note: Panda's to_dict(orient='index') converts multi-column index into a tuple, # which isn't what we want for the YAML #location_colours_as_dict = location_colours.to_dict(orient='index') #list(location_colours_as_dict.keys())[0] #ignore_index=True # - # Compose the dict manually location_colours_as_dict = {} for country, country_group in location_colours.groupby('country'): location_colours_as_dict[country] = {} for location, location_group in country_group.groupby('location'): location_colours_as_dict[country][location] = location_group['rgb_hex'][0] location_colours_as_dict.keys() location_colours_as_dict['Burkina Faso'] # ### Write the dictionary to YAML with open(output_path, 'w') as ymlfile: yaml.dump(location_colours_as_dict, ymlfile, sort_keys=True, allow_unicode=True) # ## Check the location colours 
file # Load the data from the YAML file with open(output_path) as file: reloaded_data_dict = yaml.load(file, Loader=yaml.Loader) reloaded_data_dict.keys() reloaded_data_dict['Burkina Faso'] # ### Convert the dict into a DataFrame # + # Note: Panda's from_dict() and from_records() don't interpret the dict in the way we want #reloaded_df = pd.DataFrame.from_dict(reloaded_data_dict) #reloaded_df.head() # - # Compose the DataFrame manually reloaded_df = pd.DataFrame(columns=['country', 'location', 'rgb_hex']) for country in reloaded_data_dict.keys(): for location in reloaded_data_dict[country].keys(): reloaded_df = reloaded_df.append( {'country': country, 'location': location, 'rgb_hex': reloaded_data_dict[country][location]}, ignore_index=True ) reloaded_df # Match the location labels with the lat longs we have original_locations = locations[['country', 'location', 'longitude', 'latitude']] recovered_locations = reloaded_df.merge(original_locations, on=['country', 'location']) recovered_locations # ### Eyeball the colours from the file on a map # + fig = plt.figure(figsize=(8, 8)) ax = plt.axes(projection=ccrs.PlateCarree()) xlim = ([recovered_locations["longitude"].min() - 10, recovered_locations["longitude"].max() + 10]) ylim = ([recovered_locations["latitude"].min() - 10, recovered_locations["latitude"].max() + 10]) ax.set_xlim(xlim) ax.set_ylim(ylim) ax.gridlines(color='blue') ax.coastlines() plt.scatter(recovered_locations["longitude"], recovered_locations["latitude"], color=recovered_locations["rgb_hex"], s=100, zorder=3) plt.show() # - # ## Test `ag3.py` usage import ag3 import importlib importlib.reload(ag3) imported_location_colours = ag3.release_data().location_colours imported_location_colours['Cameroon'].keys() imported_location_colours['Cameroon']['Séboré'] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Matplotlib # ## Introduction # - matplotlib is probably the single most used Python package for 2D-graphics # - it also provides good capablities to creade 3D-graphics # - quick way to visualize data from python in publication-quality # # - for further information: https://matplotlib.org/ # + [markdown] slideshow={"slide_type": "slide"} # ## Creating First Plots # # ### 1. Import pyplot package # - provides functions that makes matplotlib work like MATLAB # - object-oriented plotting # + slideshow={"slide_type": "fragment"} import matplotlib.pyplot as plt # import pyplot interface # + [markdown] slideshow={"slide_type": "subslide"} # ### 3. Create [figure](https://matplotlib.org/api/figure_api.html) and [axes](https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html) # + slideshow={"slide_type": "subslide"} fig = plt.figure() # a new figure window ax = fig.add_subplot(1, 1, 1) # a new axes plt.show(fig) # + [markdown] slideshow={"slide_type": "subslide"} # ### 3. 
Create / [Plot data](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html) (sine) # + slideshow={"slide_type": "-"} import numpy as np x = np.linspace(0,10,1000) y = np.sin(x) ax.plot(x,y, label = 'sine') fig # this is required to re-display the figure # + [markdown] slideshow={"slide_type": "subslide"} # #### Customize Line Style # + slideshow={"slide_type": "-"} fig2 = plt.figure() # a new figure window ax2 = fig2.add_subplot(1, 1, 1) # a new axes x2 = np.linspace(0,10,50) y2 = np.sin(x2) ax2.plot(x2,y2, '-o', label = 'sine') plt.show(fig2) # + slideshow={"slide_type": "subslide"} fig3 = plt.figure() # a new figure window ax3 = fig3.add_subplot(1, 1, 1) # a new axes x2 = np.linspace(0,10,50) y2 = np.sin(x2) ax3.plot(x2,y2, 'r-o', label = 'sine') plt.show(fig3) # + [markdown] slideshow={"slide_type": "subslide"} # ##### Line Colour # 'r': red # 'g': green # 'b': blue # 'c': cyan # 'm': magenta # 'y': yellow # 'k': black # 'w:': white # + [markdown] slideshow={"slide_type": "subslide"} # ##### Line Style # '-': solid # '--': dashed # ':': dotted # '-.': dot-dashed # '.': points # 'o': filled circles # '^': filled triangles # + [markdown] slideshow={"slide_type": "subslide"} # ### 4. Create / [Plot data](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html) (cosine) # + slideshow={"slide_type": "-"} y2 = np.cos(x) ax.plot(x,y2, label = 'cosine') fig # + [markdown] slideshow={"slide_type": "subslide"} # ### 5. Create / [Plot data](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html) (3 * cosine) on second axes # + ax_twin = ax.twinx() y3 = 3 * np.cos(x+np.pi/4) ax_twin.plot(x,y3, 'r',label = '3 * cosine') fig # + [markdown] slideshow={"slide_type": "subslide"} # ### 5. Set limits for [x](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.set_xlim.html)-/[y](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.set_ylim.html)-axis # + slideshow={"slide_type": "-"} ax.set_xlim(0,10) ax.set_ylim(-1.5, 2.0) fig # + [markdown] slideshow={"slide_type": "subslide"} # ### 6. [Add legend](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html) # + slideshow={"slide_type": "-"} ax.legend() fig # + [markdown] slideshow={"slide_type": "subslide"} # ### 7. Add [x](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.set_xlabel.html)-/[y](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.set_ylabel.html)-label and [title](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.set_title.html) # + slideshow={"slide_type": "-"} ax.set_xlabel("$x$") ax.set_ylabel("$\sin(x)$") ax.set_title("I like $\pi$") fig # + [markdown] slideshow={"slide_type": "subslide"} # ### 7. 
[Add grid](https://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.grid.html) # - ax.grid(True) fig # + [markdown] slideshow={"slide_type": "subslide"} # ### Excursion Subplots # - the command [fig.add_subplot](https://matplotlib.org/api/_as_gen/matplotlib.figure.Figure.html) divides the figures in grid with a certain number of axes # - syntax: # ``` python # fig.add_subplot(rows, cols, num) # ``` # - rows = number of rows in the grid # - cols = number of columns in the grid # - num = number of the subplot to create (counting from left to right, top to bottom and indexed starting at 1) # + slideshow={"slide_type": "subslide"} fig = plt.figure() for i in range(6): ax = fig.add_subplot(2, 3, i + 1) ax.set_title("Plot #%i" % i) # + [markdown] slideshow={"slide_type": "subslide"} # - the subplots are overlapping # - there are a few ways to fix it, i.e.: # + slideshow={"slide_type": "-"} fig.subplots_adjust(wspace=0.4, hspace=0.4) fig # + [markdown] slideshow={"slide_type": "-"} # - ```wspace``` and ```hspace ``` determine the width and height between each plot # + [markdown] slideshow={"slide_type": "slide"} # ## 2. Various 2D Plotting # ### [Histograms](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html) # + slideshow={"slide_type": "-"} x = np.random.normal(size=1000) fig, ax = plt.subplots() H = ax.hist(x, bins=50, alpha=0.5, histtype='stepfilled') # + [markdown] slideshow={"slide_type": "subslide"} # ### [Pie Plot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.pie.html) # + slideshow={"slide_type": "-"} fracs = [30, 15, 45, 10] colors = ['b', 'g', 'r', 'w'] fig, ax = plt.subplots(figsize=(6, 6)) # make the plot square pie = ax.pie(fracs, colors=colors, explode=(0, 0, 0.05, 0), shadow=True, labels=['A', 'B', 'C', 'D']) # + [markdown] slideshow={"slide_type": "subslide"} # ### [Errorbar Plots](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.errorbar.html) # + slideshow={"slide_type": "-"} x = np.linspace(0, 10, 30) dy = 0.1 y = np.random.normal(np.sin(x),dy) fig, ax = plt.subplots() plt.errorbar(x, y, dy, fmt='.k') # + [markdown] slideshow={"slide_type": "subslide"} # ### [Contour Plots (filled)](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.contourf.html) # + slideshow={"slide_type": "-"} x = np.linspace(0, 10, 50) y = np.linspace(0, 20, 60) z = np.cos(y[:, np.newaxis]) * np.sin(x) fig, ax = plt.subplots() # filled contours im = ax.contourf(x, y, z, 100) fig.colorbar(im, ax=ax) # + [markdown] slideshow={"slide_type": "subslide"} # ### [Contour Plots (lines)](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.contour.html) # + slideshow={"slide_type": "-"} # contour lines im2 = ax.contour(x, y, z, colors='k') fig # + [markdown] slideshow={"slide_type": "slide"} # ## 3. 
[Various 3D Plotting](https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html) # + slideshow={"slide_type": "-"} # This is the 3D plotting toolkit from mpl_toolkits.mplot3d import Axes3D # + [markdown] slideshow={"slide_type": "subslide"} # ### [3D scatter Plot](https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#scatter-plots) # + slideshow={"slide_type": "-"} fig = plt.figure() ax = plt.axes(projection='3d') z = np.linspace(0, 1, 100) x = z * np.sin(20 * z) y = z * np.cos(20 * z) c = x + y ax.scatter(x, y, z, c=c) # + [markdown] slideshow={"slide_type": "subslide"} # ### [3D Line Plot](https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#line-plots) # + slideshow={"slide_type": "-"} fig = plt.figure() ax = plt.axes(projection='3d') ax.plot(x, y, z, '-b') # + [markdown] slideshow={"slide_type": "subslide"} # ### [Surface Plot](https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#surface-plots) # + x = np.outer(np.linspace(-2, 2, 30), np.ones(30)) y = x.copy().T z = np.cos(x ** 2 + y ** 2) fig = plt.figure() ax = plt.axes(projection='3d') ax.plot_surface(x, y, z, cmap=plt.cm.jet, rstride=1, cstride=1, linewidth=0) // --- // jupyter: // jupytext: // text_representation: // extension: .cpp // format_name: light // format_version: '1.5' // jupytext_version: 1.14.4 // kernelspec: // display_name: C++17 // language: C++17 // name: xeus-cling-cpp17 // --- // + graffitiCellId="id_z8u95or" graffitiConfig={"executeCellViaGraffiti": "nbiehqu_3qgmd50"} #include // TODO: Define PI // TODO: Define ParticleModel // TODO: Define the Move() function for ParticleModel // TODO: Inherit BicycleModel from ParticleModel // TODO: Define the move() function for BicycleModel // TODO: Pass the tests int main() { // Test function overriding BicycleModel bicycle; ParticleModel &particle = bicycle; BicycleModel bicycle2; particle.Move(10, PI / 9); bicycle2.Move(10, PI / 9); assert(particle.x == bicycle.x); assert(particle.y == bicycle.y); assert(particle.theta == bicycle.theta); } // + [markdown] graffitiCellId="id_nbiehqu" //   // // + [markdown] graffitiCellId="id_h177dz1" graffitiConfig={"rows": 12, "terminalId": "id_l2ubfti", "type": "terminal"} // Loading terminal (id_h177dz1), please wait... 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="uUEoMlbU2oME" import tensorflow as tf from sklearn.model_selection import train_test_split import pandas as pd import numpy as np from sklearn.preprocessing import StandardScaler # + id="xJjZs6Ui3Zoq" from keras.models import Sequential from keras.layers import Dense # + id="x5wKw-2V2oML" df = pd.read_csv("./credit_cards_dataset.csv") # + id="3kzSO6jj2oMO" # + colab={"base_uri": "https://localhost:8080/"} id="xrIZPJ8K2oMS" outputId="fd968bc3-96b6-4d02-a979-c3c1061829f4" df.columns # + colab={"base_uri": "https://localhost:8080/", "height": 226} id="HRmndxef2oMU" outputId="416d4848-b27f-4c67-98d9-29be61b60f6d" df.tail() # + id="EDJwdLm72oMW" # X = df.drop('Y_Value',axis =1).values # y = df['Y_Value'].values X = df.drop('default.payment.next.month',axis =1).values # + colab={"base_uri": "https://localhost:8080/"} id="I22GCKi12oMZ" outputId="bc78d1d7-d80a-4ce8-ce11-d7ca604bc39d" X.shape # + id="NfLf_JjF2oMb" y = df['default.payment.next.month'].values # + id="a1T8Sl4h2oMi" X_train, X_test, y_train, y_test = train_test_split (X,y,test_size=0.3, random_state=42) # + colab={"base_uri": "https://localhost:8080/"} id="3O7si-v72oMj" outputId="c519fdb0-49ce-410c-f609-5fdb68b21fd8" y_test.T # + colab={"base_uri": "https://localhost:8080/"} id="6t7ce3dp2oMl" outputId="07e568be-4ce5-4511-ea21-70b9343400b7" X_test.shape # + id="tgMxuii_2oMn" from sklearn.preprocessing import StandardScaler X_scaler = StandardScaler().fit(X_train) # + id="JAXU2lAn2oMq" X_train_scaled = X_scaler.transform(X_train) X_test_scaled = X_scaler.transform(X_test) # + id="mRUcCmTI2oMs" y_train_categorical = tf.keras.utils.to_categorical(y_train) y_test_categorical = tf.keras.utils.to_categorical(y_test) # + id="ne6jVqTZ2oMu" from keras.models import Sequential #instantiate model = Sequential() # + id="t08VRz8E2oMv" from keras.layers import Dense number_inputs = 24 number_units = 60 model.add(Dense(units = number_units, activation ='relu', input_dim=number_inputs)) model.add(Dense(units = 70, activation ='relu')) #second hidden layer model.add(Dense(units = 60, activation ='relu')) #second hidden layer #model.add(Dense(units = 50, activation ='relu')) #second hidden layer #model.add(Dense(units = 40, activation ='relu')) #second hidden layer #model.add(Dense(units = 30, activation ='relu')) #second hidden layer #model.add(Dense(units = 20, activation ='relu')) #second hidden layer model.add(Dense(units = 10, activation ='relu')) #second hidden layer # + id="7I8QNMJJ2oMx" number_classes =2 ## yes or no model.add(Dense(units = number_classes, activation = 'sigmoid')) # + colab={"base_uri": "https://localhost:8080/"} id="mKdnDBV_2oMy" outputId="2db81340-cab5-4d6d-f2fa-93a636c504c4" model.summary() # + id="0RdRvGpN2oMz" #compile the model model.compile(optimizer = 'adam' , loss = 'binary_crossentropy', metrics =['accuracy']) # + id="ap_InvEB2oM0" #train the model model.fit(X_train_scaled, y_train_categorical, epochs=50,shuffle = True,verbose =2) # + id="H6zypaDr2oM0" #model.save("neuralnetwork.h5") # + colab={"base_uri": "https://localhost:8080/"} id="MceafpKl2oM1" outputId="8bcd9a54-c57a-4389-a55c-aa9a4e5c666c" #quantify the model model_loss, model_accuracy = model.evaluate(X_test_scaled,y_test_categorical,verbose =2) print( model_loss ) 
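# For the precision/recall and confusion-matrix section further below, the two-unit
# network output first has to be converted into class labels, and the predictions are
# best made on the scaled test set. A minimal sketch of that step (assuming `model`,
# `X_test_scaled` and `y_test` from this notebook; the variable names here are
# illustrative):
probs = model.predict(X_test_scaled)        # shape (n_samples, 2): one score per class
y_pred_labels = np.argmax(probs, axis=1)    # pick the class with the highest score
# y_pred_labels can then be compared against the integer labels in y_test, e.g. with
# confusion_matrix(y_test, y_pred_labels) or classification_report(y_test, y_pred_labels).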
print (model_accuracy) # + [markdown] id="x9kabdOC2oM2" # F1, Precision Recall, and Confusion Matrix # + id="ppvIcD842oM5" from sklearn.metrics import precision_recall_fscore_support from sklearn.metrics import recall_score from sklearn.metrics import classification_report # + id="u1Rk3XTy2oM7" y_prediction = model.predict(X_test) # + colab={"base_uri": "https://localhost:8080/"} id="chitKzPw2oM7" outputId="62d3ded7-837c-49db-c370-f3e6af519edf" y_prediction.reshape(-1,1) # + colab={"base_uri": "https://localhost:8080/"} id="DEV97JHi7DO1" outputId="a3f3549a-e6a3-483a-bab0-defae6e26bae" scores = model.evaluate(X_test, y_prediction) # + id="hnSwq7fb2oM8" #print("Recall score:"+ str(recall_score(y_test, y_prediction, average# != 'binary'))) # + id="V1wfH3Lm2oM9" #print(classification_report(y_test, y_prediction, target_names=["default", "non_default"])) # + id="kJUinAgy2oM-" import itertools import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix # + id="BFw7Hv6X2oM_" def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="red" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # Compute confusion matrix cnf_matrix = confusion_matrix(y_test, y_prediction) np.set_printoptions(precision=2) # Plot non-normalized confusion matrix plt.figure() plot_confusion_matrix(cnf_matrix, classes=['Non_defualt', 'Default'], title='Confusion matrix, without normalization') # Plot normalized confusion matrix plt.figure() plot_confusion_matrix(cnf_matrix, classes=['Non_defualt', 'Default'], normalize=True, title='Normalized confusion matrix') plt.show() # + id="h4cC15de2oNB" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="8WFDwPtP-vS0" # Importing the necessary libraries # + id="grQqCwTK1nFq" import numpy as np import matplotlib.pyplot as plt import pandas as pd # + [markdown] id="JY8cs54mAde3" # Importing the dataset # + id="lTxcW8sh18iF" dataset=pd.read_csv("Salaries.csv") X=dataset.iloc[:,1:-1].values y=dataset.iloc[:,-1].values # + id="NyTF_mkW3Rci" print(X) # + id="BhF-bRLoAid6" print(y) # + [markdown] id="Ng-pD-RvAk37" # Importing the Random Forest Regression Model # + id="vVupFrjx2PzH" from sklearn.ensemble import RandomForestRegressor rfr=RandomForestRegressor(n_estimators=100000,random_state=0) rfr.fit(X,y) # + [markdown] id="sLmKmJS5_EnS" # Predicting a value from the built model # + id="NelK3WL92P6j" rfr.predict([[6.5]]) # + [markdown] id="ZzE6Q9lx_Bmh" # Visualising the predictions # + id="q05mKyRa2P_S" X_grid = np.arange(min(X),max(X),0.1) # makes a horizontal array X_grid= X_grid.reshape((len(X_grid),1)) #To make it a vertical 
array plt.scatter(X,y,color='red') plt.plot(X_grid,rfr.predict(X_grid),color='blue') plt.title("Truth vs Bluff (Random Forest Regression)") plt.xlabel("Level") plt.ylabel("Salary") plt.show() # + id="pwMRL5uX3aOL" # print(X_grid) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="1GFF0jx_SPA6" outputId="08dbaffa-9bdc-4fb9-a0c9-cae1f38a4910" '''from google.colab import drive drive.mount('/content/drive')''' # + colab={"base_uri": "https://localhost:8080/", "height": 105} colab_type="code" id="VcC41lLISYvV" outputId="cd6a6a73-9b21-4043-8315-81e346270dfc" pip install split-folders # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="RX2v4kHiSbR0" outputId="27ed09f1-8035-4eff-db51-2185c6e3ed70" import numpy as np from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential from keras.layers import Input, Lambda, Dense, Flatten from keras.models import Model from keras.applications.vgg16 import VGG16 from keras.applications.vgg16 import preprocess_input from keras.preprocessing import image import split_folders import glob import os from PIL import Image import matplotlib.pyplot as plt from keras.models import model_from_json import pickle import math # + colab={} colab_type="code" id="wFYNOF1FS7xX" driveZip = '/content/drive/My Drive/temp/cropped_images.zip' image_types = ["kurti","saree","none","shirt","tshirt"] batch_size = 32 image_size = [224,224] num_classes = 4 # + colab={} colab_type="code" id="oOsehSelUDha" import zipfile with zipfile.ZipFile(driveZip,'r') as zip_dir: zip_dir.extractall(path='/content/') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="KE0Fb_DZUFhB" outputId="93b47b51-dcc9-49c6-bd99-8683df2d789c" split_folders.ratio('cropped_images', output="output", seed=1337, ratio= (.8,.2)) # default values # + colab={} colab_type="code" id="weh6Ro90ULLw" # Image size should be [224,224] # Iterate through each color folder def resize_images(image_dir): for im_type in image_types: # Iterate through each image file in each image_type folder # glob reads in any image with the extension "image_dir/im_type/*" for file in glob.glob(os.path.join(image_dir, im_type, "*")): im = Image.open(file) f, e = os.path.splitext(file) imResize = im.resize((224,224), Image.ANTIALIAS) imResize.save(f + '.png', 'PNG', quality=90) resize_images('/content/output/val') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="F57Xbzkvagk8" outputId="a1345b1a-d95a-4379-8aba-ef7c89fb4cb7" # !rm -r /content/output/train # !rm -r NOA # !mkdir NOA # !mkdir NOA/val # !mkdir NOA/val/none # !cp -r '/content/output/val/none/' '/content/NOA/val/' # !rm -r /content/output/val/none # + colab={} colab_type="code" id="SLX7w699apo1" # !cp '/content/drive/My Drive/temp/VGG/secondLastModel.json' '/content/' # !cp '/content/drive/My Drive/temp/VGG/secondLastModel.h5' '/content/' # !cp '/content/drive/My Drive/temp/VGG/lastModel.json' '/content/' # !cp '/content/drive/My Drive/temp/VGG/lastModel.h5' '/content/' # !cp '/content/drive/My Drive/temp/VGG/logRModel.sav' '/content/' # + colab={"base_uri": "https://localhost:8080/", "height": 901} colab_type="code" id="FIO7HOzrbKUz" outputId="3eb0d565-0d2e-43fa-8ab3-bf90e3b0a263" json_file = 
open('secondLastModel.json', 'r') loaded_model_json = json_file.read() json_file.close() secondLastModel = model_from_json(loaded_model_json) secondLastModel.load_weights("secondLastModel.h5") secondLastModel.summary() # + colab={} colab_type="code" id="xA7oFJskbyha" val_datagen = ImageDataGenerator(rescale = 1./255) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="zLPdw5Ahb0gB" outputId="307eb94c-8033-45b9-fb51-1eb3a937b90f" validation_set = val_datagen.flow_from_directory('/content/output/val/', target_size = (224, 224), batch_size = batch_size, class_mode = 'categorical') # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="lrsa7PVYvtAd" outputId="0ba641c4-4094-4316-e97e-e487d8e427ab" def getAllLabels(data_set): number_of_examples = len(data_set.filenames) number_of_generator_calls = math.ceil(number_of_examples / (1.0 * batch_size)) # 1.0 above is to skip integer division test_labels = [] for i in range(0,int(number_of_generator_calls)): test_labels.extend(np.array(data_set[i][1])) oneHotLabels = np.asarray(test_labels) labels = np.apply_along_axis(np.argmax,1, oneHotLabels) return labels dtLabels = getAllLabels(validation_set) print(dtLabels.shape) # + colab={} colab_type="code" id="vInjx6RTb46z" dtPredict = secondLastModel.predict_generator(validation_set,steps = len(validation_set)) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Sdcb2Hr0cnwr" outputId="f7fdcb59-76bb-44a2-86b3-4a4ffb4c04e9" print(dtPredict.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="aymWQ_kxdegh" outputId="aad6e852-478f-409e-f94f-bdaa22327b6a" labels = np.full(dtPredict.shape[0],1) test1Set = np.concatenate((dtPredict,np.asarray([labels]).T), axis=1) print(test1Set.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="uDz3bl10dj3F" outputId="69cced1f-b5ae-48dd-f1bf-b505a48b534e" validation_set = val_datagen.flow_from_directory('/content/NOA/val/', target_size = (224, 224), batch_size = batch_size) # + colab={} colab_type="code" id="RkJD1rM6drds" predict = secondLastModel.predict_generator(validation_set,steps = len(validation_set)) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="Xwah2EKOdv02" outputId="67ca54e1-6264-44d8-8642-5c557f466632" labels = np.full(predict.shape[0],0,np.float32) test0Set = np.concatenate((predict,np.asarray([labels]).T), axis=1) print(test0Set.shape) # + colab={} colab_type="code" id="DTawMtNXd0TN" def predictNOA(testData): testX = testData[:,:-1] testY = testData[:,-1] print(testX.shape) print(testY.shape) clf = pickle.load(open('logRModel.sav', 'rb')) out = clf.predict(testX) return out def getCount(x,val): return np.where(x == val)[0].shape[0] def getCorectSamples(dataset,out,correctVal): newSet = [] correctInds = np.where(out == correctVal)[0] for ind in correctInds: newSet.append(dataset[ind]) return np.asarray(newSet) def wrongLabelsCount(x,y): return np.where(x != y)[0].shape[0] # + colab={} colab_type="code" id="i7jv5Ip_0czT" totalSamples = test0Set.shape[0] + test1Set.shape[0] wrongSamples = 0 # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="F0m7E-630XFn" outputId="af092e14-eda2-40aa-cfd4-35539f30e0a7" out = predictNOA(test0Set) wrongSamples += getCount(out,1) # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="eZzWCy_T2xSn" outputId="b6d5d77f-37db-4fd1-f7bf-8daec61eded7" out = 
predictNOA(test1Set) wrongSamples += getCount(out,0) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="4160NS3z3BKb" outputId="52cf2e83-393d-45ee-e297-88918797f7f1" newSet = getCorectSamples(test1Set,out,1) print(newSet.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 221} colab_type="code" id="b-9ONoCY5h2L" outputId="030c2154-4ec2-4f2a-f752-f4f6398681ec" json_file = open('lastModel.json', 'r') loaded_model_json = json_file.read() json_file.close() lastModel = model_from_json(loaded_model_json) lastModel.load_weights("lastModel.h5") lastModel.summary() # + colab={} colab_type="code" id="fq191qp_50WR" softMaxLabs = lastModel.predict(dtPredict) predLabs = np.apply_along_axis(np.argmax,1, softMaxLabs) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="GtBSglNp9LVS" outputId="3c989a6a-5507-4817-dba9-91766067e2e4" print(predLabs.shape,dtLabels.shape) # + colab={} colab_type="code" id="PDB3wuMz7V6q" wrongSamples += wrongLabelsCount(predLabs,dtLabels) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="o0JtCBd897bM" outputId="17d29023-aca4-4850-a850-ca2a494ed11e" print('Accuracy: ',(totalSamples - wrongSamples)/totalSamples) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import sys import os import time import random import copy import math import scanpy as sc # %matplotlib inline from matplotlib import pyplot as plt import matplotlib as mpl import seaborn as sns import autoreload import scipy params = { 'font.size': 12, 'axes.titlesize': 12, 'axes.labelsize': 12, 'legend.fontsize': 12, 'xtick.labelsize': 8, 'ytick.labelsize': 10, 'font.family': "Helvetica", 'pdf.fonttype': 42, 'ps.fonttype': 42, 'figure.dpi': 100 } mpl.rcParams.update(params) sns.set_style("ticks") sns.set_context(context='paper') savefig_args = {"dpi": 300, "bbox_inches": "tight", "pad_inches": 0, "transparent": False} mpl.rc('savefig', dpi=300) output_dir='figures/10.12.20/' output_suffix = "" output_formats = [".png", ".pdf"] def save_figure(fig, name, output_dir=output_dir, output_suffix=output_suffix, output_formats=output_formats, savefig_args=savefig_args): for output_format in output_formats: fig.savefig(output_dir + "/" + name + output_suffix + output_format, **savefig_args) return None pd.set_option('display.max_rows', 50) pd.set_option('display.max_columns', 20) pd.set_option('display.width', 100) # %load_ext autoreload # %autoreload 2 cfgFile = '../switchy/Prototyping.ini' sc.set_figure_params(color_map='viridis') # + [markdown] jupyter={"outputs_hidden": true} # IL_genes = list(adata.var.index[adata.var.index.str.contains("IL")]) # # sc.pl.umap(adata, color = IL_genes) # - # ## Analysis filename = 'Current.h5ad' adata = sc.read_h5ad(filename) fig, ax = plt.subplots(1,1) plt.hist(adata.obs.n_counts, histtype='step', color = 'k') plt.hist(adata.obs.n_counts, histtype='stepfilled', color = 'k', alpha = 0.2) plt.title('Uniquely Mapped Reads') adata.obs g = sns.violinplot(data = adata.obs, x ='n_counts', y = 'Treatment', hue = 'Experimental_Label') g.figure.savefig('figures/N_counts', transparent=True) g = sns.violinplot(data = adata.obs, x ='n_genes', y = 'Treatment', hue = 'Experimental_Label') adata.obs.columns g.figure.savefig('figures/N_genes', transparent=True) variable = 'phase' 
sc.pl.umap(adata, color = variable, title = 'Cell Cycle Phase', frameon = False, size = 200, alpha=0.6, save = variable) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ![PPGI_UFRJ](imagens/ppgi-ufrj.png) # # Fundamentos de Ciencia de Dados # --- # # PPGI/UFRJ 2020.3 # ## Prof e # --- # # Módulo 1 - Python # ## Tipos de Dados em Python # # Um programa python é denominado de **script** e é uma sequencia de definições e comandos. As definições são avaliadas e os comandos são executados pelo interpretador python, chamando de **shell**. # **Objetos** são os elementos fundamentais que a linguagem manipula. Cada objeto tem um **tipo** que define a variedade de manipulações (operações) que o programa pode fazer com este objeto. # # Os tipos podem ser de duas classes: **Escalares** e **não escalares** # # - **Escalares**: Objetos indivisiveis ou atómicos. # 1. **int** representa números inteiros. # 2. **float** representa números reais. # 3. **bool** usada para representar valores booleanos True e False. # # # - **Não escalares**: Objetos que têm uma estrutura interna, por exemplo, **strings** # Uma **Expressões** é a combinação de objetos e operadores. Uma expressão retorna um **valor** de algum **tipo**. # **Operadores matemáticos e de comparação**: # soma: + 10 + 4 # subtração: - 10 - 4 # multiplicacao 10*4 # divisao 10/4 # divisao inteira 10//4 # resto da divisao 10%4 # potencia de um numero 10**4 # comparacao: > 10 >4 # comparacao: <, >, <=, >=, != 10 < 4 10 != 4 # como saber o tipo da expressao print(type(10.3)) print(type(3)) print(type('s')) # **Operadores booleanos** # - **a and b**: True, se a e b são verdadeiros e False nos outros casos. # - **a or b**: True, se a e b são True ou algum valor de a ou b são verdadeiros, nos outros casos False. # - **not a**: True se for False e False de for True. # ### Python como máquina de Calcular # soma de numeros 7 + 4 # multiplicacao de numeros 10 * 5 # divisao 10/5 # resto de divisao 10 % 2 # string 'Curso' + ' de '+' Data Science' # duplicacao de string ' curso '*10 # ### Variáveis e Atribuição # # As **variáveis** fornecem uma forma de associar **nomes** com objetos. Pode ser usada qualquer palavra para determinar uma variriável com exeção das palavras reservadas da linguagem, tipo *for*, *else*, *int*, *float*, etc. # exemplo de variavel (nome), atribuicao (=) e valor da atribuição pi = 3.14 r = 4 area = pi*(r**2) #area print('area:>',area) # x = [1,2,3,4,5] #x print('x=',x) x1 =['a',2,'aqui',40,'calor'] x1 x1[2] # atribuicao de valores boolianos a = 4 b = False r= a and b print(r) # ### Visualização de variáveis # visualizacao de variaveis nome = ' Janeiro é considerada a Cidade Maravilhosa' # visualizar - 1: nome da variavel nome # visualizar - 2: usando o comando print() print(nome) # visualizar - 3: usaando comando print() print('Um fato:',nome) #visualizacao - 4: usando o comando print() print(nome,' ...',' sengundo a Revista Fofoca') # ### Tipos de Dados # Um **tipo** de dato é um conjuntos de valores e operações que podem ser realizados com esses valores. 
# **Tipos numéricos** # - **Inteiro**: *int* # exemplo de int n_i = 4 n_i # visualizar o tipo de dado type(n_i) # - **Real**:*float* # exemplo de float f = 3.9 f # + # visualizar o tipo de dados # - type(f) # - **Complexo**: *complex* # exemplo de numero complexo nc1 = 3+4j nc1 type(nc1) # #### Strings # # Uma **string** é uma sequência de caracteres fechados entre aspas simples ou duplas. # exemplo de string cidade1 = 'Rio de Janeiro' cidade2 = "Niterói" print(cidade1) print(cidade2) # outra forma de apresentar? print(cidade1+' '+cidade2) # - **função**: **str(objeto)** - retorna um objeto de tipo *string* # exemplo de uso de str() - numero2string str(10) # exemplo de criacao de str() string_vazia = str() type(string_vazia) # #### Operações Básicas com Strings # - **Concatenação de strings**:**+** # exemplo de concatenacao print(cidade1+cidade2) # outra forma? print(' ..',cidade1,' ',cidade2) # tipo ? type(cidade1) # - **Sequências repetidas de string**: \* tres_vezes = cidade1*3 + '-'+str(50) tres_vezes # - Verificar a existência de uma string em outras string. Retorna **True** ou **False** # exemplo s1 = 'O coronavírus esta solto na cidade do Rio de Janeiro' s2 = 'cidade' # exemplo s3='Araruama' r1 = s2 in s1 r2 = s3 in s1 print(r2) print(s3 in s1) # exemplo r2 = s2 not in s1 r2 # - Comparação de **strings**: <, >, >=, <=, ==, != # exemplo de comparacao print(s1 == s2) #print(s1 > s2) # why? # - **Funções de strings**: *len()*,*max()*,*min()* # tamanho da string l1 = len(s1) l1 # max() mx = max(s1) mx # min() mi = min(s1) mi # - **Índices** - indica a posição de um caracter na string # exemplo do tamanho de uma string tam = len(s1) p0 = s1[51] p0 # ## Listas # Uma **lista** é uma sequência de zero ou mais objetos. Usa-se os "[]" para indicar uma lista. 
# ### Operadores de Listas # exemplo de lista lista = [1,4,"oi",'Teste'] print(lista) lista0 = [10, 'a',4-5j,3.1] print(lista0) print(len(lista0)) print(lista0[3]) # | Operador | ação | # |:----------|:-------------| # | [] | lista vazia | # |append() | adiciona elementos| # | sort() | ordena os elementos | # | insert() | insere elementos| # | pop() | elimina o último elemento | # | remove() | remove um elemento | # exemplo de listas Lista = [] print('Lista vazia L:',Lista) Lista.append(34) print('append(34)-> L:',Lista) Lista.append(22) print('append(22)-> L:',Lista) Lista.append(50) print('append(50)-> L:',Lista) Lista.append(-10) print('append(-10)-> L:',Lista) Lista.sort() print('sort(L)-> L:',Lista) Lista.pop() print('pop()-> L:',Lista) Lista.insert(0, 40) print('insert(0,40)-> L:',Lista) Lista.insert(1, 55) print('insert(1,55)-> L:',Lista) Lista.pop(1) print('pop(1)-> L:',Lista) Lista.remove(22) print('remove(22)-> L:',Lista) print('O que acontece com os elementos da lista L?') Lista.remove(55) # print('remove(55)-> L:',Lista) # split(): converte uma string em lista s = 'Rio de Janeiro está em lockdown por causa da pandemia COVID-19' r = s.split() print(r) # acesar elementos da lista usando o indice elemento = r[5] print('Elemento na posição 6:> ',elemento) # join(): Converte uma lista em string ns =" ".join(r) print('ns:',ns) # ### Indexação e Fatiamento de listas # list index print('Indexação de uma Lista') #print('') texto =['Python','para','Data Science','curso','introdutório'] t0 = texto[0] t4 = texto[4] print(' texto-0:>',t0) print(' texto-4:>',t4) # posicao do elemento numa lista elemento = texto[3] print('Elemento :>',elemento) pos = texto.index(elemento) print('Posição do elemento :>',pos) # numero de elementos de uma lista l = len(texto) print('Número de elemento :>',l) # sum(). 
max(); min() numeros = [23,25,28,45,33,20] s1 = sum(numeros) print('Soma :>',s1) print('Máximo :>',max(numeros)) print('Mínimo :>',min(numeros)) # any(): verifica se algum dos elementos de uma lista é TRUE (iterable) an1 = any([0,0,0,False,0]) print('A1:>',an1) an2 = any([4,0,0,False,0]) print('A2:>',an2) an3 = any([0,0,0,False,'']) print('A3:>',an3) an4 = any([0,0,0,False,'Casa']) print('A4:>',an4) # all() - Retorna TRUE se todos os items de um objeto são "iterable" são TRUE al1 = all([1,1,1,1]) print('AL1:>',al1) al2 = all([1,1,1,0]) print('AL2:>',al2) al3 = all([1,1,1,'']) print('AL3:>',al3) al4 = all([1,1,1,'J']) print('AL4:>',al4) # ordena uma lista #L = ['Z','z','a','A'] L = [5,3,2,-1,1,4] lo = sorted(L) print('Lista ordenada :>',lo) # menor -> maior lr = sorted(L, reverse=True) # maior -> menor print('Lista ordenada :>',lr) # ordenando palavras texto = "COVID-19, Araruama" T = sorted(texto) print('T:>',T) # criar uma lista items_da_lista = [] total_de_items = int(input("Ingressar o número de items:")) for i in range(total_de_items): item = input(" Item da lista:> ") items_da_lista.append(item) print(f"Lista de items: {items_da_lista}") print() print('Número de elementos da lista :> ',len(items_da_lista)) # fatiamento (slices) usando os indices txt ="O transporte público está um caos" print('texto :>',txt) txt1 = txt[3:] print('txt:>',txt1) txt2 = txt[3:20] print('txt:>',txt2) # 5 valores iniciais txt5 = txt[:5] txt5 # valores txt4 = txt[12:20] txt4 # slice(inicio:fim,step) lv = [0, 10, 20, 30, 40, 50, 60, 70, 80] # print('Lista de valores :>',lv) sl = slice(2,8,2) print('valores :> ',lv[sl]) # Eliminar um elemento del lv[0] print(lv) # indice negativo t1 = lv[-1] t1 # indice negativo t2 = lv[:-1] t2 # ## Dicionários # Um **dicionário** tem zero ou mais entradas. A cada entrada tem associdada uma chave única e um valor **{key:valor}** # - {} : dicionário vazio # - {"nome":"Jorge"} : uma entrada # - {"nome":"Jorge", "Idade":57} : duas entradas # - {"hobbies":["Leitura","viajar"]} : uma entrada e o valor é uma lista # exemplo de dicionario mesNumero = {'Janeiro':1, 'Fevereiro':2, 'Março':3, 'Abril':4, 'Maio':5, 1:'Janeiro', 2:'Fevereiro', 3:'Março', 4:'Abril', 5:'Maio'} # print('O terceiro mês é ' + mesNumero[3]) distancia = mesNumero['Abril'] - mesNumero['Janeiro'] print('De Abril a Janeiro são', distancia, 'mêses de espaço ') # chaves do dicionario chaves = mesNumero.keys() print('Chaves:> ',chaves) # valores do dicionario valores = mesNumero.values() print('Valores:>',valores) # verificar chaves key = 'três' res = key in chaves print('Resultado da consulta :>',res) # verificar valor do key = 2 valor = mesNumero[2] print('Resultado da consulta :>',valor) # iteracao sobre as chaves for key in mesNumero: print('Chave:>',key) # ## Funções # + # def nome_funcao (lista_de_parametros): # # - # exemplo de funcao def soma(a,b): return a+b # uso da funcao so = soma(7,9) print('Soma:>',so) def valorx(a): if a > 3: x='verdade' x='falso' return x # t = valorx(23) # chama a funcao # print('Valor :>',t) # ## Exercícios # Prática dos conteúdos estudados: construindo e operando listas e strings # 1. Seja x='cachorro' e y='gato'. Quais são os valores retornados pelas operações: # - x + y # - "O" + x + "não gosta de " + y # - x*6 # 2. Escreva uma string que contenha nome, endereço, bairro e cidade: # - Em linhas diferentes # - Na mesma linha # 3. Quais das seguintes variáveis são nomes válidos? # - a) length # - b) _width # - c) firstBase # - d) 2MaisDois # - e) halt! # 4. 
Que tipo de dados seria mais apropriado para representar os seguintes valores? # - o número de meses de um ano # - a área de um círculo # - o salário mínimo atual # - a idade do universo # - seu nome # 5. Seja x = 4 e y = 0.5. Escrever os valores das seguintes expressões: # - a) x + y * 3 # - b) (x + y) * 3 # - c) x ** y # - d) x % y # - e) x / 12.0 # - f) x / 6 # 6. Seja x = 5.33. Escrever os valores das seguintes expressões: # - a) round(x) # - b) int(x) # # Têm alguma diferença? # 7. Se pode concatenar um valor numérico e uma string?. Apresente exemplos. # 8. Definir funções para as operações matemáticas básicas. # --- # #### © Copyright 2021, & # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="kAbpkqfJzzyT" colab_type="text" # # More Regression: RBF BASIS / Modified from : # # 1. https://github.com/jakevdp/PythonDataScienceHandbook # # # + deletable=true editable=true id="cIRrwEhWzzyV" colab_type="code" colab={} # %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns; sns.set() import numpy as np # + id="tet98sU-F8l-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 285} executionInfo={"status": "ok", "timestamp": 1599767235389, "user_tz": 240, "elapsed": 716, "user": {"displayName": "", "photoUrl": "", "userId": "12649994350437867413"}} outputId="8bc7677c-cb1f-4f08-df93-56c78ba54487" from sklearn.pipeline import make_pipeline from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression rng = np.random.RandomState(1) x = 10 * rng.rand(50) y = np.sin(x) + 0.1 * rng.randn(50) plt.scatter(x, y) # + [markdown] deletable=true editable=true id="wbEHlS5fzzy6" colab_type="text" # ### Gaussian basis functions # # Of course, other basis functions are possible. # For example, one useful pattern is to fit a model that is not a sum of polynomial bases, but a sum of Gaussian bases. 
# The result might look something like the following figure: # + [markdown] deletable=true editable=true id="idNV1KQ-zzy8" colab_type="text" # # # These Gaussian basis functions are not built into Scikit-Learn, but we can write a custom transformer that will create them, as shown here and illustrated in the following figure (Scikit-Learn transformers are implemented as Python classes; reading Scikit-Learn's source is a good way to see how they can be created): # + deletable=true editable=true id="TuqPXzCXzzy8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 268} executionInfo={"status": "ok", "timestamp": 1599767336030, "user_tz": 240, "elapsed": 762, "user": {"displayName": "", "photoUrl": "", "userId": "12649994350437867413"}} outputId="cd630ceb-beb0-4b35-9b4c-e3392ee207f1" from sklearn.base import BaseEstimator, TransformerMixin class GaussianFeatures(BaseEstimator, TransformerMixin): """Uniformly spaced Gaussian features for one-dimensional input""" def __init__(self, N, width_factor=2.0): self.N = N self.width_factor = width_factor @staticmethod def _gauss_basis(x, y, width, axis=None): arg = (x - y) / width return np.exp(-0.5 * np.sum(arg ** 2, axis)) def fit(self, X, y=None): # create N centers spread along the data range self.centers_ = np.linspace(X.min(), X.max(), self.N) self.width_ = self.width_factor * (self.centers_[1] - self.centers_[0]) return self def transform(self, X): return self._gauss_basis(X[:, :, np.newaxis], self.centers_, self.width_, axis=1) gauss_model = make_pipeline(GaussianFeatures(20), LinearRegression()) gauss_model.fit(x[:, np.newaxis], y) xfit = np.linspace(0, 10, 1000) yfit = gauss_model.predict(xfit[:, np.newaxis]) plt.scatter(x, y) plt.plot(xfit, yfit) plt.xlim(0, 10); # + deletable=true editable=true id="5gJx6akwzzzB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} executionInfo={"status": "ok", "timestamp": 1599767525367, "user_tz": 240, "elapsed": 730, "user": {"displayName": "", "photoUrl": "", "userId": "12649994350437867413"}} outputId="df193a54-7785-4dd7-cef8-a687f9e63a93" model = make_pipeline(GaussianFeatures(30), LinearRegression()) model.fit(x[:, np.newaxis], y) plt.scatter(x, y) plt.plot(xfit, model.predict(xfit[:, np.newaxis])) plt.xlim(0, 10) plt.ylim(-1.5, 1.5); # + [markdown] deletable=true editable=true id="nBQaS89KzzzE" colab_type="text" # With the data projected to the 30-dimensional basis, the model has far too much flexibility and goes to extreme values between locations where it is constrained by data. 
# We can see the reason for this if we plot the coefficients of the Gaussian bases with respect to their locations: # + deletable=true editable=true id="BBHQFxAfzzzE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 285} executionInfo={"status": "ok", "timestamp": 1599767645924, "user_tz": 240, "elapsed": 1136, "user": {"displayName": "", "photoUrl": "", "userId": "12649994350437867413"}} outputId="107b901e-4f6f-4a6d-fa3b-aaabde722ddc" def basis_plot(model, title=None): fig, ax = plt.subplots(2, sharex=True) model.fit(x[:, np.newaxis], y) ax[0].scatter(x, y) ax[0].plot(xfit, model.predict(xfit[:, np.newaxis])) ax[0].set(xlabel='x', ylabel='y', ylim=(-1.5, 1.5)) if title: ax[0].set_title(title) ax[1].plot(model.steps[0][1].centers_, model.steps[1][1].coef_) ax[1].set(xlabel='basis location', ylabel='coefficient', xlim=(0, 10)) model = make_pipeline(GaussianFeatures(30), LinearRegression()) basis_plot(model) # + [markdown] deletable=true editable=true id="q2V9CBwnzzzH" colab_type="text" # The lower panel of this figure shows the amplitude of the basis function at each location. # This is typical over-fitting behavior when basis functions overlap: the coefficients of adjacent basis functions blow up and cancel each other out. # We know that such behavior is problematic, and it would be nice if we could limit such spikes expliticly in the model by penalizing large values of the model parameters. # Such a penalty is known as *regularization*, and comes in several forms. # + id="QeLeMqudGVSU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 285} executionInfo={"status": "ok", "timestamp": 1599759544769, "user_tz": 240, "elapsed": 1634, "user": {"displayName": "", "photoUrl": "", "userId": "12649994350437867413"}} outputId="36a8052d-0c16-41d3-92cd-a9d938e4a45f" model = make_pipeline(GaussianFeatures(20), LinearRegression()) basis_plot(model) # + id="5IiXF59YGZif" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 285} executionInfo={"status": "ok", "timestamp": 1599767747786, "user_tz": 240, "elapsed": 1029, "user": {"displayName": "", "photoUrl": "", "userId": "12649994350437867413"}} outputId="68a686e0-5a74-430f-9718-3ec2465247fb" model = make_pipeline(GaussianFeatures(10), LinearRegression()) basis_plot(model) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="9IErrKYBBQdT" colab_type="code" colab={} from __future__ import print_function import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras import backend as K # + id="pcBy5khQ8PC9" colab_type="code" colab={} batch_size=128 num_classes=10 epoch=12 # + id="eq4Z1o088XRL" colab_type="code" outputId="7cbf0e57-81a9-4978-ec80-c2b6149e7ac2" colab={"base_uri": "https://localhost:8080/", "height": 52} #input image dimensions img_rows, img_cols=28,28 #the data, split between train and test (x_train,y_train),(x_test,y_test)=mnist.load_data() # + id="cXGepaWI8z6L" colab_type="code" colab={} if K.image_data_format() == 'channels_first': x_train= x_train.reshape(x_train.shape[0],1,img_rows,img_cols) x_test=x_test.reshape(x_test.shape[0],1,img_rows,img_cols) input_shape = (1,img_rows,img_cols) else: 
x_train=x_train.reshape(x_train.shape[0],img_rows,img_cols,1) x_test=x_test.reshape(x_test.shape[0],img_rows,img_cols,1) input_shape=(img_rows,img_cols,1) # + id="bkQSbjlb92OY" colab_type="code" outputId="e560389e-bf5a-4c54-d957-d561913253d9" colab={"base_uri": "https://localhost:8080/", "height": 69} x_train=x_train.astype('float32') x_test=x_test.astype('float32') x_train /=255 x_test /=255 print('x_train shape: ' ,x_train.shape) print(x_train.shape[0],' train samples') print(x_test.shape[0],' test samples') # + id="0dFXn5O0-ltM" colab_type="code" colab={} #convert class vectors to binary class matrices y_train = keras.utils.to_categorical(y_train,num_classes) y_test = keras.utils.to_categorical(y_test,num_classes) # + id="-oAD2Qmh-7Wy" colab_type="code" outputId="66c429ad-2226-457d-c458-fac3bc53ebe8" colab={"base_uri": "https://localhost:8080/", "height": 141} model=Sequential() model.add(Conv2D(32,kernel_size=(3,3),activation='relu',input_shape=input_shape)) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128,activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes,activation='softmax')) # + id="Fkv4dVkJ_1k3" colab_type="code" colab={} model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) # + id="kggpjwuEAK8x" colab_type="code" outputId="860d38c7-6d9f-44d4-b083-d6a0c0658541" colab={"base_uri": "https://localhost:8080/", "height": 541} model.fit(x_train,y_train, batch_size=batch_size, epochs=12, verbose=1, validation_data=(x_test,y_test)) # + id="ewUDeI8MCk7e" colab_type="code" colab={} score=model.evaluate(x_test,y_test,verbose=0) # + id="q8sqnG5tCup9" colab_type="code" outputId="b434bc7a-ebe4-4fff-9f32-3dceb1d84e35" colab={"base_uri": "https://localhost:8080/", "height": 52} print('Test loss: ', score[0]) print('Test accuracy: ', score[1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os import tensorflow as tf import numpy as np from sklearn.svm import SVC from sklearn.metrics import classification_report, confusion_matrix from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras import models from tensorflow.keras import layers from sklearn.model_selection import KFold from tensorflow.keras.wrappers.scikit_learn import KerasClassifier from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split from tensorflow.keras import utils as np_utils import pandas as pd from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report from tensorflow.keras.callbacks import EarlyStopping from tensorflow.keras.callbacks import ModelCheckpoint from tensorflow.keras.optimizers import SGD from tensorflow.keras.optimizers import RMSprop from tensorflow.keras.callbacks import CSVLogger from tensorflow.keras.callbacks import ModelCheckpoint,CSVLogger from tensorflow.keras.models import Model from tensorflow.keras.layers import Input from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import Flatten from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import MaxPooling2D from tensorflow.keras.utils import to_categorical from 
tensorflow.keras.utils import plot_model # + # # !pip install tensorflow-gpu # - import tensorflow as tf if tf.test.gpu_device_name(): print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) else: print("Please install GPU version of TF") orig_dir=os.getcwd() orig_dir # + # Retrieve Full HOF Data features_data_hof_full=[] features_label_hof_full=[] path="C:/Users/Ying/Downloads/LBP_TOP/HOF_cropped_classified" folders=os.listdir(path) t_count=0 for f in folders: count=1 folder_path=os.path.join(path, f) files=os.listdir(folder_path) for file in files: file_path=os.path.join(folder_path,file) features=os.listdir(file_path) for feature in features: feature_path=os.path.join(file_path,feature) histogram=np.load(feature_path) histogram = np.mean(histogram, axis=0) features_data_hof_full.append(histogram) features_label_hof_full.append(f) count=count+1 t_count=t_count+1 # - x = features_data_hof_full length = max(map(len, x)) features_data_hof_full = np.array([np.concatenate((xi,[0]*(length-len(xi)))) for xi in x]) # features_data_hof_full=np.reshape(features_data_hof_full,(10,15)).T # print(features_data_hof_full.shape) print(features_data_hof_full.shape) # + # X = features_data_hof_full # y = features_label_hof_full # + # Retrieve Full LBP Data features_data_l=[] features_label_l=[] path="C:/Users/Ying/Downloads/LBP_TOP/LBP_TOP_cropped_classified" folders=os.listdir(path) t_count=0 for f in folders: count=1 folder_path=os.path.join(path, f) files=os.listdir(folder_path) for file in files: file_path=os.path.join(folder_path,file) features=os.listdir(file_path) for feature in features: feature_path=os.path.join(file_path,feature) histogram=np.load(feature_path) histogram = np.mean(histogram, axis=0) features_data_l.append(histogram) features_label_l.append(f) count=count+1 t_count=t_count+1 # + # Process to correct format for DNN features_data_l=np.concatenate(features_data_l, axis=1).T print(features_data_l.shape) # + # X = features_data_l # y = features_label_l # - X = np.concatenate((features_data_hof_full,features_data_l),axis=1) y = features_label_l X.shape # One hot encoder of labels label_encoder = LabelEncoder() integer_encoded = label_encoder.fit_transform(y) dummy_y = np_utils.to_categorical(integer_encoded) save_location='C:/Users/Ying/Downloads/LBP_TOP/Run_time_data' modelname= 'RB_FF_NN13' savelocation=os.path.join(save_location, modelname) filepath=savelocation + ".hdf5" csv_logger=CSVLogger(savelocation +'.csv') #callbacks_list = [checkpoint,csv_logger,LRScheduler] model = Sequential() model.add(Dense(512, activation='sigmoid', input_shape=(X.shape[1],))) model.add(Dense(512, activation='sigmoid')) model.add(Dense(512, activation='sigmoid')) model.add(Dense(512, activation='sigmoid')) model.add(Dense(4, activation='softmax')) model.summary() callbacks=[csv_logger, ModelCheckpoint(filepath,monitor='val_loss', save_best_only=True)] # + jupyter={"outputs_hidden": true} tags=[] opt='RMSProp' model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) X_train, X_test, y_train, y_test = train_test_split(X, dummy_y, test_size = 0.20) history = model.fit(X_train, y_train, batch_size=2, epochs=200, verbose=1,validation_split=0.005, callbacks=callbacks) # - yhat_classes = model.predict_classes(X_test, verbose=0) y_test_original = y_test.argmax(1) y_train_original = y_train.argmax(1) labels = np.unique(y_test_original) a= confusion_matrix(y_test_original, yhat_classes) print(a) print(classification_report(y_test_original, yhat_classes)) # + from sklearn 
import svm from sklearn.metrics import confusion_matrix svm_model = svm.SVC(kernel = 'linear', C = 2,decision_function_shape='ovo').fit(X_train,y_train_original) y_pred = svm_model.predict(X_test) print(confusion_matrix(y_test_original, y_pred)) print("train score:",svm_model.score(X_train,y_train_original)) print("test score:",svm_model.score(X_test,y_test_original)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # import pandas import pandas as pd df = pd.read_csv('Resources/budget_data.csv') # + month_count = df.count()[0] total_pl = df['Profit/Losses'].sum() print('Financial Analysis') print('----------------------------') print(f'Total Months: {month_count}') print(f'Total: ${total_pl}') month_ind = [] change = [] for num in range(0,month_count): if num == 0: pass else: change.append(int(df['Profit/Losses'][num])-int(df['Profit/Losses'][num-1])) month_ind.append(df['Date'][num]) change_avg = round(sum(change)/len(change),2) print(f'Average Change: ${change_avg}') change_max = max(change) i = change.index(change_max) max_month = month_ind[i] print('Greatest Increase in Profits:',max_month, f'(${change_max})') change_min = min(change) i = change.index(change_min) min_month = month_ind[i] print('Greatest Decrease in Profits:',min_month, f'(${change_min})') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # + # 01 - Basic # 텐서플로우의 기본적인 구성을 익힙니다. import tensorflow as tf # tf.constant : 상수 hello = tf.constant('Hello, TensorFlow!') print(hello) a = tf.constant(10) b = tf.constant(32) c = tf.add(a,b) # a + b 로도 쓸 수 있음 print(c) # 위에서 변수와 수식들을 정의했지만, 실행이 정의한 시점에서 실행되는 것은 아닙니다. # 다음처럼 Session 객체와 run 메소드를 사용할 때 계산이 딥니다. # 따라서 모델을 구성하는 것과, 실행하는 것을 분리하여 프로그램을 깔끔하게 작성할 수 있습니다. # 그래프를 실행할 세션을 구성합니다. sess = tf.Session() # sess.run: 설정한 텐서 그래프(변수나 수식 등등)를 실행합니다. print(sess.run(hello)) print(sess.run([a,b,c])) # 세션을 닫습니다. sess.close() # + # 02 - Variable # 플레이스홀더와 변수의 개념을 익혀봅시다. import tensorflow as tf # tf.placeholder: 계산을 실행할 때 입력값을 받는 변수로 사용합니다. # None은 크기가 정해지지 않았음을 의미합니다. X = tf.placeholder(tf.float32, [None, 3]) print (X) # X 플레이스홀더에 넣을 값 입니다. # 플레이스홀더에서 설정한 것 처럼, 두번째 차원의 요소의 갯수는 3개입니다. x_data = [[1, 2, 3], [4, 5, 6]] # tf.Variable: 그래프를 계산하면서 최적화 할 변수들입니다. 이 값이 바로 신경망을 좌우하는 값들입니다. # tf.random_normal: 각 변수들의 초기값을 정규분포 랜덤 값으로 초기화합니다. W = tf.Variable(tf.random_normal([3,2])) b = tf.Variable(tf.random_normal([2,1])) # 입력값과 변수들을 계산할 수식을 작성합니다. # tf.matmul 처럼 mat* 로 되어 있는 함수로 행렬 계산을 수행합니다. expr = tf.matmul(X, W) + b sess = tf.Session() # 위에서 설정한 Variable 들의 값들을 초기화 하기 위해 # 처음에 tf.global_variables_initializer를 한 번 실행해야 합니다. sess.run(tf.global_variables_initializer()) print("=== x_data ===") print(x_data) print("=== W ===") print(sess.run(W)) print("=== b ===") print(sess.run(b)) print("=== expr ===") # expr 수식에는 X 라는 입력값이 필요합니다. # 따라서 expr 실행시에는 이 변수에 대한 실제 입력값을 다음처럼 넣어줘야합니다. print(sess.run(expr, feed_dict={X: x_data})) sess.close() # + # 03 - Linear Regression # X와 Y의 상관관계를 분석하는 기초적인 선형 회귀 모델을 만들고 실행해봅니다. import tensorflow as tf x_data = [1, 2, 3] y_data = [1, 2, 3] W = tf.Variable(tf.random_uniform([1], -1.0, 1.0)) b = tf.Variable(tf.random_uniform([1], -1.0, 1.0)) # name: 나중에 텐서보드등으로 값의 변화를 추적하거나 살펴보기 쉽게 하기 위해 이름을 붙여줍니다. 
X = tf.placeholder(tf.float32, name="X") Y = tf.placeholder(tf.float32, name="Y") print(X) print(Y) # X와 Y의 상관관계를 분석하기 위한 가설 수식을 작성합니다. # y = W * x + b # W와 X가 행렬이 아니므로 tf.matmul이 아니라 기본 곱셈 기호를 사용했습니다. hypothesis = W * X + b # 손실 함수를 작성합니다. # mean(h - Y)^2 : 예측값과 실제값의 거리를 비용(손실) 함수로 정합니다. cost = tf.reduce_mean(tf.square(hypothesis - Y)) # 텐서플로우에 기본적으로 포함되어 있는 함수를 이용해 경사 하강법 최적화를 수행합니다. optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1) # 비용을 최소화 하는 것이 최종 목표 train_op = optimizer.minimize(cost) # 세션을 생성하고 초기화합니다. with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # 최적화를 100번 수행합니다. for step in range(100): # sess.run을 통해 train_op와 cost 그래프를 계산합니다. # 이 때, 가설 수식에 넣어야 할 실제값을 feed_dict을 통해 전달합니다. _, cost_val = sess.run([train_op, cost], feed_dict={X: x_data, Y:y_data}) print(step, cost_val, sess.run(W), sess.run(b)) # 최적화가 완료된 모델에 테스트 값을 넣고 결과가 잘 나오는지 확인해봅시다. print("\n=== Test ===") print("X: 5, Y:", sess.run(hypothesis, feed_dict={X: 5})) print("X: 2.5, Y:", sess.run(hypothesis, feed_dict={X: 2.5})) sess.close() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #
Financial Economics HW_03
# # **
    11510691 程远星$\DeclareMathOperator*{\argmin}{argmin} # \DeclareMathOperator*{\argmax}{argmax} # \DeclareMathOperator*{\plim}{plim} # \newcommand{\ffrac}{\displaystyle \frac} # \newcommand{\d}[1]{\displaystyle{#1}} # \newcommand{\space}{\text{ }} # \newcommand{\bspace}{\;\;\;\;} # \newcommand{\bbspace}{\;\;\;\;\;\;\;\;} # \newcommand{\QQQ}{\boxed{?\:}} # \newcommand{\void}{\left.\right.} # \newcommand{\Tran}[1]{{#1}^{\mathrm{T}}} # \newcommand{\CB}[1]{\left\{ #1 \right\}} # \newcommand{\SB}[1]{\left[ #1 \right]} # \newcommand{\P}[1]{\left( #1 \right)} # \newcommand{\abs}[1]{\left| #1 \right|} # \newcommand{\norm}[1]{\left\| #1 \right\|} # \newcommand{\given}[1]{\left. #1 \right|} # \newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}} # \newcommand{\asim}{\overset{\text{a}}{\sim}} # \newcommand{\RR}{\mathbb{R}} # \newcommand{\EE}{\mathbb{E}} # \newcommand{\II}{\mathbb{I}} # \newcommand{\NN}{\mathbb{N}} # \newcommand{\ZZ}{\mathbb{Z}} # \newcommand{\QQ}{\mathbb{Q}} # \newcommand{\PP}{\mathbb{P}} # \newcommand{\AcA}{\mathcal{A}} # \newcommand{\FcF}{\mathcal{F}} # \newcommand{\AsA}{\mathscr{A}} # \newcommand{\FsF}{\mathscr{F}} # \newcommand{\dd}{\mathrm{d}} # \newcommand{\I}[1]{\mathrm{I}\left( #1 \right)} # \newcommand{\N}[1]{\mathcal{N}\left( #1 \right)} # \newcommand{\Exp}[1]{\mathrm{E}\left[ #1 \right]} # \newcommand{\Var}[1]{\mathrm{Var}\left[ #1 \right]} # \newcommand{\Avar}[1]{\mathrm{Avar}\left[ #1 \right]} # \newcommand{\Cov}[1]{\mathrm{Cov}\left( #1 \right)} # \newcommand{\Corr}[1]{\mathrm{Corr}\left( #1 \right)} # \newcommand{\ExpH}{\mathrm{E}} # \newcommand{\VarH}{\mathrm{Var}} # \newcommand{\AVarH}{\mathrm{Avar}} # \newcommand{\CovH}{\mathrm{Cov}} # \newcommand{\CorrH}{\mathrm{Corr}} # \newcommand{\ow}{\text{otherwise}} # \newcommand{\FSD}{\text{FSD}} # \void^\dagger$
    ** # ## Question 8.1 # # $\bspace$We first write $n$ Eular equations # # $$u_0'\P{e_0 - \Tran S \theta} S_k = \sum_{\omega \in\Omega} \pi_\omega u_1'\P{e_{1\omega} + X_{\omega,\cdot} \theta} X_{\omega,k},\bspace k = 1,2,\dots,n$$ # # $\bspace$whose solution $\theta$ is the optimal portfolio. The fact that $\tilde r_i$ are $i.i.d.$ implies that the preceding equation can be rewritten as, after letting $e_{1\omega} = 0$ # # $$\begin{align}u_0'\P{e_0 - \Tran S \theta} S_k &= \Exp{u_1'\P{\SB{S_1+S_1\cdot\tilde r_1,S_2+S_2\cdot\tilde r_2, \cdots, S_n + S_n\cdot\tilde r_n}\theta\:} \cdot S_k\P{1+\tilde r_k}}\\ # u_0'\P{e_0 - \Tran S \theta}&= \Exp{u_1'\P{\SB{S_1+S_1\cdot\tilde r_1,S_2+S_2\cdot\tilde r_2, \cdots, S_n + S_n\cdot\tilde r_n}\theta\:}} \cdot \Exp{1+\tilde r_k} # \end{align}$$ # # $\bspace$And now if we consider the variance, unbalanced allocation of securities will lead to higher covariance, and so is the variance, consequently. So the optimal portfolio is to find a $\theta^*$ such that # # $$\theta = \theta^*\cdot \iota$$ # ## Question 8.2 # # $\P{1}$ # # $\bspace$Suppose we put $y\cdot w$ money in risk security and the rest in risk free security, then we write # # $$\tilde w = y\cdot w \P{1+\tilde r} + \P{1-y}w\P{1+r_F} = w\P{1+r_F + y\P{\tilde r-r_F}}$$ # # $\bspace$With this we rewrite the optimization problem: # # $$\begin{array}{cc} # \d{\max_y} & \Exp{w\P{1+r_F + y\P{\tilde r-r_F}} - \ffrac{1}{2}aw^2\P{1+r_F + y\P{\tilde r-r_F}}^2} # \end{array}$$ # # $\bspace$And take the derivative on $y$ we have the first order condition: # # $$\begin{align} # \ffrac{\partial \Exp{\tilde w - \ffrac{1}{2}a\tilde w^2}}{\partial y} &= \Exp{w\P{\tilde r - r_F} - \ffrac{1}{2} aw^2\cdot 2\P{1+r_F + y\P{\tilde r - r_F}}\P{\tilde r - r_F}}\\ # &= w\P{\Exp{\tilde r} - r_F} - aw^2\P{\P{1+r_F}\P{\Exp{\tilde r} - r_F} + y\P{r_F^2 - 2r_F\Exp{\tilde r} + \Exp{\tilde r^2}}}\\ # &= w\P{\bar r - r_F} - aw^2\P{\P{1+r_F}\P{\bar r - r_F} + y\P{r_F^2 - 2r_F\bar r + \bar r^2 + \sigma^2}}=0\\ # \Rightarrow y &= \ffrac{\ffrac{\bar r - r_F}{aw} - \P{1+r_F}\P{\bar r - r_F}}{\P{\bar r - r_F}^2 + \sigma^2} = \ffrac{\P{\bar r - r_F}\P{1-aw\P{1+r_F}}}{aw\P{\P{\bar r - r_F}^2 + \sigma^2}} # \end{align}$$ # # $\P{2}$ # # $\bspace$Here we give the intuitive answers. # # - $r_F$ and $y$ are negatively correlated, because agent will tend to put more money in risk free security given they can get more interest # - $\bar r$ and $y$ are positively correlated, because agent will tend to put more money in risk security given they can get more interest, in a probabilistic sense # - $\sigma$ and $y$ are negatively correlated, because rational agent will withdraw money from risk security given higher risk, or volatility # - $a$ and $y$ are negatively correlated, not only can be seen from the explicit expression of $y$, but also the meaning of $a$. Given a increase in $a$ the absolute risk aversion $A\P{w} = \ffrac{a}{1-aw}$ increases meaning that the agent tend to withdraw some money from the risk security. 
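# $\bspace$As a quick numerical sanity check (an addition, not part of the original
# solution), the closed form for $y$ derived above can be compared with a brute-force
# maximization of $E[\tilde w - \frac{1}{2}a\tilde w^2]$ over a grid of $y$. The parameter
# values below ($w$, $a$, $r_F$, $\bar r$, $\sigma$) are illustrative assumptions; only the
# mean and variance of $\tilde r$ enter the first-order condition.
# +
import numpy as np

# illustrative parameters (assumptions, not given in the problem)
w, a = 1.0, 0.5                 # initial wealth, quadratic-utility coefficient
r_F, r_bar, sigma = 0.02, 0.08, 0.20

# closed-form optimum derived above
y_closed = (r_bar - r_F) * (1 - a * w * (1 + r_F)) / (a * w * ((r_bar - r_F) ** 2 + sigma ** 2))

# brute force: Monte Carlo estimate of E[w~ - 0.5*a*w~^2] on a grid of y
rng = np.random.default_rng(0)
r = rng.normal(r_bar, sigma, size=200_000)

def expected_utility(y):
    wealth = w * (1 + r_F + y * (r - r_F))
    return np.mean(wealth - 0.5 * a * wealth ** 2)

ys = np.linspace(0.0, 3.0, 601)
y_grid = ys[np.argmax([expected_utility(y) for y in ys])]
print(f"closed form y* = {y_closed:.3f}, grid search y* = {y_grid:.3f}")  # should agree closely
# -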
# ## Question 8.3 # # $\P 1$ # # $\bspace$The optimization problem is now # # $$\begin{array}{cc} # \d{\max_y} & \Exp{ -\exp\CB{-aw\P{1+r_F + y\P{\tilde r-r_F}}}} # \end{array}$$ # # $\bspace$We first simplify this expression # # $$\begin{align} # \Exp{ -\exp\CB{-aw\P{1+r_F + y\P{\tilde r-r_F}}}} &= -\exp\CB{-aw\P{1+r_F-yr_F}}\Exp{\exp\CB{-awy \tilde r}}\\ # &= -\exp\CB{-aw\P{1+r_F-yr_F}} \cdot \exp\CB{-\mu \cdot awy + \ffrac{\sigma^2}{2}\P{awy}^2}\\ # &= -\exp\CB{-aw\P{1+r_F} + aw\P{r_F - \mu}y + \ffrac{\sigma^2}{2}\P{aw}^2y^2} # \end{align}$$ # # $\bspace$Since $\exp\CB{-aw\P{1+r_F}}$ is positive we have the simplified optimization problem # # $$\begin{array}{cc} # \d{\max_y} & -\exp\CB{aw\P{r_F - \mu}y + \ffrac{\sigma^2}{2}\P{aw}^2y^2} # \end{array}$$ # # $\bspace$And take the derivative on $y$ we have the first order condition: # # $$\begin{align} # \ffrac{\partial \P{-\exp\CB{aw\P{r_F - \mu}y + \ffrac{\sigma^2}{2}\P{aw}^2y^2}}}{\partial y} &= -\exp\CB{aw\P{r_F - \mu}y + \ffrac{\sigma^2}{2}\P{aw}^2y^2}\cdot\P{aw\P{r_F - \mu} + \sigma^2\P{aw}^2y} = 0\\ # \Rightarrow \P{aw\P{r_F - \mu} + \sigma^2\P{aw}^2y}&=0 \Rightarrow y = \ffrac{\mu - r_F}{\sigma^2aw} # \end{align}$$ # # $\P{2}$ # # $\bspace$Here we give the intuitive answers. # # - $r_F$ and $y$ are negatively correlated, because agent will tend to put more money in risk free security given they can get more interest # - $\mu$ and $y$ are positively correlated, because agent will tend to put more money in risk security given they can get more interest, in a probabilistic sense # - $\sigma$ and $y$ are negatively correlated, because rational agent will withdraw money from risk security given higher risk, or volatility # - $a$ and $y$ are negatively correlated, not only can be seen from the explicit expression of $y$, but also the meaning of $a$. Given a increase in $a$ the absolute risk aversion $A\P{w} = a$ increases meaning that the agent tend to withdraw some money from the risk security. # ## Question 9.1 # # $\bspace\newcommand{\FSD}{\text{FSD}} # \newcommand{\SSD}{\text{SSD}}$Consider first stochastic dominance, $A\not\succsim_{\FSD} B$ if $u\P x = x^3$, nor $B\not\succsim_{\FSD} A$ if $u\P x = x+1$. # # $\bspace$Then the second stochastic dominance, using the theorem, we let $d=1$ and define $\tilde e$ # # $$\begin{array}{c|cccc} # & \omega_1 & \omega_2 &\omega_3&\omega_4\\\hline # \tilde e & 0.4 & 0.3 & -0.3 & -0.4\\ # p & 0.25 & 0.25 & 0.25 & 0.25\\ # \tilde x_A & 1.5 & 1.5 & 1.7 & 1.7 # \end{array}$$ # # $$\begin{align} # \Exp{\tilde e \mid x_A} &= \sum_{i} \tilde e_i \cdot p\P{\tilde e_i\mid x_{A,i}}\\ # &= 0.4\times0.25 + 0.3 \times 0.25 - 0.3 \times 0.25 - 0.4 \times 0.25 = 0 # \end{align}$$ # # $\bspace$So $A\succsim_{\SSD} B$. # ## Question 9.2 # # $\P{\text a.1}$ # # $$\bspace \tilde r_B = \tilde x + \tilde e = 1 \cdot \tilde r_A + \tilde e \Rightarrow \tilde x_B = 1 \cdot \tilde x_A + \tilde e $$ # # $\bspace$And due to the independency, $\Exp{\tilde e\mid \tilde x_A} = \Exp{\tilde e} = 0$, thus $A\succsim_{\void_\SSD} B$. 
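# $\bspace$A small numerical illustration of this step (an addition, with assumed
# distributions): when $\tilde x_B = \tilde x_A + \tilde e$ and $\tilde e$ is independent
# with mean zero, the expected shortfall $E[(t - \tilde x)^+] = \int_{-\infty}^{t} F(s)\,ds$
# can never be smaller for $B$ than for $A$, which is exactly the second-order stochastic
# dominance criterion invoked above.
# +
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

# assumed illustrative payoffs: any x_A works; e must be independent with E[e] = 0
x_A = 1 + rng.uniform(-0.2, 0.4, size=n)     # payoff of asset A
e = rng.normal(0.0, 0.15, size=n)            # independent zero-mean noise
x_B = x_A + e                                # asset B = asset A plus noise

# SSD check: expected shortfall of B is at least that of A for every threshold t
ts = np.linspace(0.5, 1.6, 45)
shortfall_A = np.array([np.maximum(t - x_A, 0.0).mean() for t in ts])
shortfall_B = np.array([np.maximum(t - x_B, 0.0).mean() for t in ts])
print("A SSD B on this grid:", bool(np.all(shortfall_B >= shortfall_A)))  # expected: True
# -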
# # $\P{\text a.2}$ # # $$\begin{align} # \Exp{u\P{\tilde w}} &= \Exp{u\P{a\P{1+\tilde r_A} + \P{1-a}\P{1+\tilde r_B}}}\\ # &= \Exp{u\P{1+ a\P{\tilde x - \tilde x - \tilde e} + \tilde x + \tilde e}}\\ # &= \Exp{u\P{1+\P{1-a}\tilde e + \tilde x}} # \end{align}$$ # # $\bspace$Since $\Exp{\tilde e} = 0$, we take the derivative on $a$ and have the following: # # $$\begin{align} # \ffrac{\partial^2 \Exp{u\P{\tilde w}}}{\partial a^2} &=\Exp{u''\P{1+\P{1-a}\tilde e + \tilde x}\tilde e^2}\leq 0\\ # \ffrac{\partial \Exp{u\P{\tilde w}}}{\partial a} &= \Exp{-u'\P{1+\P{1-a}\tilde e + \tilde x}\tilde e}\\ # &= \sum_{e<0}e\cdot P\CB{\tilde e = e}\cdot u'\P{1+\P{1-a}e + \tilde x} + \sum_{e>0}e\cdot P\CB{\tilde e = e}\cdot u'\P{1+\P{1-a}e + \tilde x}\\ # &\geq \sum_{e<0}e\cdot P\CB{\tilde e = e}\cdot u'\P{1+\P{1-a}\cdot 0 + \tilde x} + \sum_{e>0}e\cdot P\CB{\tilde e = e}\cdot u'\P{1+\P{1-a}\cdot 0 + \tilde x}\\ # & = u'\P{1 + \tilde x}\cdot\Exp{\tilde e} = 0 # \end{align}$$ # # $\bspace$Thus $\Exp{u\P{\tilde w}}$ gets to its maximum at $a=1$. The conclusion is same when $\tilde e$ is continuous. # # $\P{\text b.1}$ # # $\bspace$With the same reasoning, we can prove that $A\succsim_{\void_\SSD} B$. # # $\P{\text b.2}$ # # $\bspace$Now $\Exp{u\P{\tilde w}} = \Exp{u\P{a\tilde x + \P{1-a}\P{\tilde y + \tilde e}}} = \Exp{u\P{\tilde y + \tilde e + a\P{\tilde x - \tilde y - \tilde e}}}$. Still we have its second order direvative on $a$ positive. Then # # $$\begin{align} # \ffrac{\partial \Exp{u\P{\tilde w}}}{\partial a} &= \Exp{u'\P{\tilde y + \tilde e + a\P{\tilde x - \tilde y - \tilde e}}\P{\tilde x - \tilde y - \tilde e}}\\ # \end{align}$$ # # $\bspace$At $a = 0.5$, this equals to $\Exp{u'\P{a\P{\tilde x + \tilde y + \tilde e}}\P{\tilde x - \tilde y - \tilde e}} = -\Exp{u'\P{a\P{\tilde x + \tilde y + \tilde e}}\tilde e}$, with the same form we've proved in $\P{\text a.2}$. Thus $\given{\ffrac{\partial \Exp{u\P{\tilde w}}}{\partial a}}_{a=0.5}\geq0$. In order to reach the maximum, we have now $a>0.5$. # # $\P{\text{c}}$ # # $\bspace$The result is intuitively obvious. In $\P{\text a}$, $\tilde r_B$ has got a higher volatility than $\tilde r_A$, so the optimal strategy is to ignore asset $B$. In $\P{\text b}$, we know that in probabilistic sense asset $A$ has a lower volatility than asset $B$, thus, we put more money in $A$ and consequently $a>0.5$. # ## Question 9.5 # # $\bspace$Using $Theorem\space 9.5$, when all $\tilde r_n$ are $i.i.d.$, whatever preference you have, the portfolio you hold is equivelent to a equal-weight portfolio. Thus the utility function of any agent, say $k$, can now be linearly transformed to the utility function of any other agent, say $j$, meaning that the utility functions of all agents, are equivalent, in linear sense. So then we have single fund separation. # ## Question 9.6 # # $\P{1}$ # # $\bspace$The weights on all $N$ risky securities are $\alpha = \SB{a_1;a_2;\cdots;a_N}$. 
Then we write # # $$\tilde w = w\P{1+r_F} + \sum_{i=1}^N a_i\P{\tilde r_i - r_F} = w\P{1+r_F} + \Tran\alpha\P{\tilde r - r_F\cdot \iota}$$ # # $\bspace$We then take the derivative on $\alpha$ to derive the first order condition # # $\begin{align} # &\ffrac{\partial \Exp{\tilde w - \ffrac{1}{2}a\tilde w^2}}{\partial \alpha}\\ # =\;& \ffrac{\partial \Exp{w\P{1+r_F} + \Tran{\P{\tilde r - r_F\cdot\iota}}\alpha - \ffrac{1}{2}a \P{w^2\P{1+r_F}^2 + \Tran\alpha\P{\tilde r - r_F\cdot\iota} \Tran{\P{\tilde r - r_F\cdot\iota}}\alpha + 2w\P{1+r_F}\Tran{\P{\tilde r - r_F\cdot\iota}}\alpha}}}{\partial \alpha}\\ # =\;& \Exp{\Tran{\P{\tilde r - r_F\cdot\iota}} - \ffrac{1}{2} a\Tran\alpha\P{\P{\tilde r - r_F\cdot\iota}\Tran{\P{\tilde r - r_F\cdot\iota}} + \P{\tilde r - r_F\cdot\iota}\Tran{\P{\tilde r - r_F\cdot\iota}}} - aw\P{1+r_F}\Tran{\P{\tilde r - r_F\cdot\iota}}}\\ # =\;& \Exp{\Tran{\P{\tilde r - r_F\cdot\iota}}\P{1 - aw\P{1+r_F}}- a\Tran\alpha\P{\P{\tilde r - r_F\cdot\iota}\Tran{\P{\tilde r - r_F\cdot\iota}}}}\\ # =\;& \Tran{\P{\bar r - r_F \cdot \iota}}\P{1-aw\P{1+r_F}} - a\Tran\alpha \Sigma = 0 # \end{align}$ # # $\bspace$So that $\alpha = \ffrac{1-aw\P{1+r_F}}{a} \Sigma^{-1}\P{\bar r - r_F \cdot \iota}$ # # $\P{2}$ # # $\bspace$From $Theorem\space 9.6$, the assertion is obviously true. And the proof is given in Chapter 12, so... # *** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- df df.describe() # + ages = df.Age ages.mean() # - # ages of scentists older than mean of age ages[ages > ages.mean()] df[df.Age > df.Age.mean()] df.Born.dtype born = pd.to_datetime(df.Born, format='%Y-%m-%d') born died = pd.to_datetime(df.Died, format='%Y-%m-%d') died df['born_dt'] = born df['died_dt'] = died df df.Age.to_pickle('age.series.pickle') import seaborn as sns sns.barplot(x=df.Age, y=df.Occupation) import pandas as pd df = pd.read_csv('data/scientists.csv') df.info() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="y9_MDm0KEZt4" outputId="c79c337e-4ce0-48b9-d4c1-9f19ea14d2a7" PlayListRatings = [10, 9.5, 10, 8, 7.5, 5, 10, 10] i = 0 Rating = PlayListRatings[i] while(Rating >= 6): print(Rating) Rating = PlayListRatings[i] i = i + 1 # + colab={"base_uri": "https://localhost:8080/"} id="7G8UGKH8czUD" outputId="bea0cb3e-52d7-4082-de28-57d84c1ba5a2" # Write your code below and press Shift+Enter to execute squares = ['orange', 'orange', 'purple', 'blue ', 'orange'] new_squares = [] i = 0 while squares[i] == "orange": new_squares.append(squares[i]) i = i + 1 print(new_squares) # + colab={"base_uri": "https://localhost:8080/"} id="WxEMLTqUc2fH" outputId="57695289-9362-456b-e6c8-6041eb660c77" squares = ['orange', 'orange', 'purple', 'blue ', 'orange'] new_squares = [] i = 0 while(squares[i] == 'orange'): new_squares.append(squares[i]) i = i + 1 print (new_squares) # + id="AWS5tM-sdqvN" class analysedText(object): def __init__ (self, text): # remove punctuation formattedText = text.replace('.','').replace('!','').replace('?','').replace(',','') # make text lowercase formattedText = formattedText.lower() self.fmtText = formattedText def freqAll(self): # split text into words wordList = self.fmtText.split(' ') # Create dictionary freqMap = {} for word 
in set(wordList): # use set to remove duplicates in list freqMap[word] = wordList.count(word) return freqMap def freqOf(self,word): # get frequency map freqDict = self.freqAll() if word in freqDict: return freqDict[word] else: return 0 # + id="FYilpUkqiTD8" class Rectangle(object): def __init__(self,width=2,height =3,color='r'): self.height=height self.width=width self.color=color def drawRectangle(self): import matplotlib.pyplot as plt plt.gca().add_patch(plt.Rectangle((0, 0),self.width, self.height ,fc=self.color)) plt.axis('scaled') plt.show() # + id="8p8S4YXTicyJ" class Points(object): def __init__(self,x,y): self.x=x self.y=y def print_point(self): print('x=',self.x,' y=',self.y) p1=Points("A","B") p1.print_point() # + id="ZbhyOmtAifZM" class Points(object): def __init__(self,x,y): self.x=x self.y=y def print_point(self): print('x=',self.x,' y=',self.y) p2=Points(1,2) p2.x=2 p2.print_point() # + colab={"base_uri": "https://localhost:8080/"} id="wh11q3s64Zwp" outputId="00efc8fa-d786-47a6-e77d-bfd9e3b560a8" a = 1 try: b = int(input("Please enter a number to divide a")) a = a/b except ZeroDivisionError: print("The number you provided cant divide 1 because it is 0") except ValueError: print("You did not provide a number") except: print("Something went wrong") # + colab={"base_uri": "https://localhost:8080/"} id="QSHHKWb64a8v" outputId="01eaf99e-4037-491c-ae28-efac63f5b9cf" a = 1 try: b = int(input("Please enter a number to divide a")) a = a/b except ZeroDivisionError: print("The number you provided cant divide 1 because it is 0") except ValueError: print("You did not provide a number") except: print("Something went wrong") else: print("success a=",a) # + colab={"base_uri": "https://localhost:8080/"} id="wv5XDsGg5cUM" outputId="15c94e52-7df8-4e77-ed25-c85a4a885fab" a = 1 try: b = int(input("Please enter a number to divide a")) a = a/b except ZeroDivisionError: print("The number you provided cant divide 1 because it is 0") except ValueError: print("You did not provide a number") except: print("Something went wrong") else: print("success a=",a) finally: print("Processing Complete") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: automi # language: python # name: automi # --- # # autoMI with markov / PCFG # This notebooks combines the PCFG and Markov models to create a PCFG with Markov endpoints. One application of this model is language, where phonological organization is thought to be largely Markovian, and syntactic and discourse structure are thought to be hierarchical. # # See # - [Parallels in the sequential organization of birdsong and human speech. , , , , (2019) Nature Communications](https://doi.org/10.1038/s41467-019-11605-y) # - [Long-range sequential dependencies precede complex syntactic production in language acquisition. , , . 
Proceedings of the Royal Society B](https://dx.doi.org/10.1098/rspb.2021.2657) # %load_ext autoreload # %autoreload 2 from matplotlib import mlab import matplotlib.pyplot as plt import numpy as np from tqdm.auto import tqdm # ## Create signal and sample sequences # + def gen_balanced_matrix(na=5, ps=[0.7, 0.2, 0.1]): """ Generates a balanced matrix in which every state can reach every other state for hierarchical and Markov models """ for r in range(1000): breakme = False probs = np.zeros((na, na)) for p in ps: for i in np.arange(na): ixloc = np.where( (probs[i, :] == 0) & (np.sum(probs != p, axis=0) == na) )[0] if len(ixloc) > 0: probs[i, np.random.permutation(ixloc)[0]] = p else: # the initialization didn't work breakme = True if breakme: continue probs = probs / np.sum(probs, axis=0) return probs return "Generation Failed" def gen_seq_hierarchical(alphabet, probs, depth, n_subsamples): """ generates a sequence via the Lin Tegmark recursive model Arguments: alphabet {[type]} -- [alphabet of states] probs {[type]} -- [probability matrix for recursive subsampling] depth {[type]} -- [how many times to recursively subsample] n_subsamples {[type]} -- [the number of new elements to recursively replace old elements with] Returns: sequence [type] -- [sequence of elements] """ sequence = np.random.choice( alphabet, p=np.sum(probs, axis=1) / np.sum(probs), size=1 ) if type(depth) == list: depth = np.random.choice(depth) depth_list = range(depth) for i in depth_list: q = np.random.choice(n_subsamples) sequence = recursively_subsample_sequence(sequence, probs, q, alphabet) return sequence def subsample_sequence(sequence, probs, q, alphabet): """ subsamples a sequence given a probability matrix given a sequence, resamples each element in that sequences given a probability matrix of sequence element to new elements Arguments: sequence {[type]} -- input sequence probs {[type]} -- the probability matrix q {[type]} -- the number of items to subsample """ return [ item for sublist in [ np.random.choice(alphabet, p=probs[:, i], size=q) for i in sequence ] for item in sublist ] def recursively_subsample_sequence(sequence, probs, q, alphabet): """ subsamples a sequence given a probability matrix given a sequence, resamples each element in that sequences given a probability matrix of sequence element to new elements Arguments: sequence {[type]} -- input sequence probs {[type]} -- the probability matrix q {[type]} -- the number of items to subsample """ for i in sequence: if type(i) == list: return [recursively_subsample_sequence(i, probs, q, alphabet) for i in sequence] else: return [ item for sublist in [ [list(np.random.choice(alphabet, p=probs[:, i], size=q))] for i in sequence ] for item in sublist ] def gen_seq_markov(alphabet, probs, seq_len): """ like sample_sequence_MM, but uses a numpy matrix, no start and end states, and a set sequence length """ sequence = list( np.random.choice(alphabet, p=np.sum(probs, axis=0) / np.sum(probs), size=1) ) for i in tqdm(range(seq_len), leave=False): sequence.append(np.random.choice(alphabet, p=probs[:, sequence[-1]], size=1)[0]) return sequence # - # how many branches to sample in hierarchical n_subsamples = [2] # how many subsamples to perform depth = 12 # alphabet size a_n = 5 alphabet = np.arange(a_n) # how many sequences to use nseq = 1000 print('seq len ',(np.mean(n_subsamples)**depth)) # how many markov items to sample markov_seq_len_range = [2,5] # number of elements in markov alphabet a_n_markov = 25 markov_alphabet_items = np.arange(a_n_markov) # the number 
of sequences can correspond to each hierarchical element markov_n_seq_per_element = 5 # generate probbility matrix probs = gen_balanced_matrix(na=a_n, ps=[.9, .1]) plt.matshow(probs) # generate markov probabilities markov_probs = np.random.rand(a_n_markov**2).reshape((a_n_markov, a_n_markov))**2 markov_probs = markov_probs/np.sum(markov_probs, axis = 0) # test it out... gen_seq_markov(markov_alphabet_items, markov_probs, 10) plt.matshow(markov_probs) # each leaf in the tree grammar should correspond to a markov generated sequence markov_alphabet = { i: [ gen_seq_markov( markov_alphabet_items, markov_probs, np.random.randint(markov_seq_len_range[0], markov_seq_len_range[1]), ) for j in range(markov_n_seq_per_element) ] for i in tqdm(markov_alphabet_items) } markov_alphabet[alphabet[0]] from joblib import Parallel, delayed sequences = Parallel(n_jobs = -1)(delayed(gen_seq_hierarchical)(alphabet, probs, depth, n_subsamples=n_subsamples) for seq in tqdm(range(nseq))) len(sequences), len(sequences[0]) def flatten(container): for i in container: if isinstance(i, (list,tuple)): for j in flatten(i): yield j else: yield i # flatten sequences from tree structure flat_sequences = [list(flatten(i)) for i in tqdm(sequences)] len(flat_sequences[0]) # ## First, we can look at MI over PCFG sequences without Markov endpoints from automutualinformation import sequential_mutual_information as smi range_ = np.arange(1,100) (MI, _), (shuff_MI, _) = smi( flat_sequences, distances=range_ ) # + fig, axs = plt.subplots(ncols=2, figsize=(10,5)) ax = axs[0] ax.plot(range_, MI - shuff_MI) ax.set_yscale('log') ax.set_xscale('log') ax.set_xlabel('MI') ax.set_xlabel('Sequential distance / lag') ax.set_ylabel('MI') ax.grid(which='both') ax = axs[1] ax.plot(range_, MI - shuff_MI) ax.set_xlabel('MI') ax.set_xlabel('Sequential distance / lag') ax.set_ylabel('MI') ax.grid(which='both') # - # ## Now, we can look at MI over sequences with Markov endpoints def replace_markov_alphabet_recursive(sequence, markov_alphabet, markov_n_seq_per_element): """ subsamples a sequence given a probability matrix given a sequence, resamples each element in that sequences given a probability matrix of sequence element to new elements Arguments: sequence {[type]} -- input sequence probs {[type]} -- the probability matrix q {[type]} -- the number of items to subsample """ for i in sequence: if type(i) == list: return [replace_markov_alphabet_recursive(i,markov_alphabet, markov_n_seq_per_element) for i in sequence] else: return markov_alphabet[i][np.random.choice(sequence)] def replace_markov_alphabet(seq, markov_alphabet,markov_n_seq_per_element): return [markov_alphabet[i][np.random.choice(markov_n_seq_per_element)] for i in seq] # replace each element with Markov sampled sequences sequences_markov = Parallel(n_jobs = -1)(delayed(replace_markov_alphabet_recursive)(seq, markov_alphabet,markov_n_seq_per_element) for seq in tqdm(sequences)) flat_sequences_markov = [list(flatten(i)) for i in tqdm(sequences_markov)] (MI, _), (shuff_MI, _) = smi( flat_sequences_markov, distances=range_ ) # + fig, axs = plt.subplots(ncols=2, figsize=(10,5)) ax = axs[0] ax.plot(range_, MI - shuff_MI) ax.set_yscale('log') ax.set_xscale('log') ax.set_xlabel('MI') ax.set_xlabel('Sequential distance / lag') ax.set_ylabel('MI') ax.grid(which='both') ax = axs[1] ax.plot(range_, MI - shuff_MI) ax.set_xlabel('MI') ax.set_xlabel('Sequential distance / lag') ax.set_ylabel('MI') ax.grid(which='both') # - # ### Fitting a decay model from automutualinformation import fit_model 
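# fit the chosen decay function ("pow_exp") to the shuffle-corrected MI curve;
# as used below, fit_model returns the fitted model object and its predicted values
# over `distances`, which are then plotted against the data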
decay_model, model_y = fit_model( distances = range_, sig = MI - shuff_MI, decay_function = "pow_exp" ) decay_model # + fig, ax = plt.subplots(ncols=1, figsize=(5,5)) ax.scatter(range_, MI - shuff_MI, color = 'k', alpha = 0.25) ax.plot(range_, model_y, lw=3, color = 'red') ax.set_yscale('log') ax.set_xscale('log') ax.set_xlabel('Sequential distance / lag') ax.set_ylabel('MI') ax.grid(which='both') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # BOKEH # # - [Link](https://docs.bokeh.org/en/latest/docs/first_steps.html#first-steps) # # Pre-requisites pip install bokeh # # Simple line chart # + from bokeh.plotting import figure, show from bokeh.io import output_notebook output_notebook() # prepare some data x = [1, 2, 3, 4, 5] y1 = [6, 7, 2, 4, 5] y2 = [2, 3, 4, 5, 6] y3 = [4, 5, 5, 7, 2] # create a new plot with a title and axis labels p = figure(title="Multiple line example", x_axis_label="x", y_axis_label="y") # add multiple renderers p.line(x, y1, legend_label="Temp.", line_color="blue", line_width=2) p.line(x, y2, legend_label="Rate", line_color="red", line_width=2) p.line(x, y3, legend_label="Objects", line_color="green", line_width=2) # show the results show(p) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # The Index # # **Prerequisites** # # - [Introduction to pandas](https://datascience.quantecon.org/intro.html) # # # **Outcomes** # # - Understand how the index is used to align data # - Know how to set and reset the index # - Understand how to select subsets of data by slicing on index and columns # - Understand that for DataFrames, the column names also align data # ## Outline # # - [The Index](#The-Index) # - [So What is this Index?](#So-What-is-this-Index?) # - [Setting the Index](#Setting-the-Index) # - [Re-setting the Index](#Re-setting-the-Index) # - [Choose the Index Carefully](#Choose-the-Index-Carefully) # - [Exercises](#Exercises) # + hide-output=false # Uncomment following line to install on colab # ! pip install qeds # + hide-output=false import pandas as pd import numpy as np # - # ## So What is this Index? # # Every Series or DataFrame has an index. # # We told you that the index was the “row labels” for the data. # # This is true, but an index in pandas does much more than label the rows. # # The purpose of this lecture is to understand the importance of the index. # # The [pandas # documentation](https://pandas.pydata.org/pandas-docs/stable/dsintro.html) # says # # > Data alignment is intrinsic. The link between labels and data will # not be broken unless done so explicitly by you. # # # In practice, the index and column names are used to make sure the data is # properly aligned when operating on multiple DataFrames. # # This is a somewhat abstract concept that is best understood by # example… # # Let’s begin by loading some data on GDP components that we collected from # the World Bank’s World Development Indicators Dataset. # + hide-output=false url = "https://datascience.quantecon.org/assets/data/wdi_data.csv" df = pd.read_csv(url) df.info() df.head() # - # We’ll also extract a couple smaller DataFrames we can use in examples. 
# + hide-output=false df_small = df.head(5) df_small # + hide-output=false df_tiny = df.iloc[[0, 3, 2, 4], :] df_tiny # + hide-output=false im_ex = df_small[["Imports", "Exports"]] im_ex_copy = im_ex.copy() im_ex_copy # - # Observe what happens when we evaluate `im_ex + im_ex_copy`. # + hide-output=false im_ex + im_ex_copy # - # Notice that this operated *elementwise*, meaning that the `+` # operation was applied to each element of `im_ex` and the corresponding # element of `im_ex_copy`. # # Let’s take a closer look at `df_tiny`: # + hide-output=false df_tiny # - # Relative to `im_ex` notice a few things: # # - The row labeled `1` appears in `im_ex` but not `df_tiny`. # - For row labels that appear in both, they are not in the same position # within each DataFrame. # - Certain columns appear only in `df_tiny`. # - The `Imports` and `Exports` columns are the 6th and 5th columns of # `df_tiny` and the 1st and 2nd of `im_ex`, respectively. # # # Now, let’s see what happens when we try `df_tiny + im_ex`. # + hide-output=false im_ex_tiny = df_tiny + im_ex im_ex_tiny # - # Whoa, a lot happened! Let’s break it down. # ### Automatic Alignment # # For all (row, column) combinations that appear in both DataFrames (e.g. # rows `[1, 3]` and columns `[Imports, Exports]`), the value of `im_ex_tiny` # is equal to `df_tiny.loc[row, col] + im_ex.loc[row, col]`. # # This happened even though the rows and columns were not in the same # order. # # We refer to this as pandas *aligning* the data for us. # # To see how awesome this is, think about how to do something similar in # Excel: # # - `df_tiny` and `im_ex` would be in different sheets. # - The index and column names would be the first column and row in each # sheet. # - We would have a third sheet to hold the sum. # - For each label in the first row and column of *either* the `df_tiny` # sheet or the `im_ex` sheet we would have to do a `IFELSE` to check # if the label exists in the other sheet and then a `VLOOKUP` to # extract the value. # # # In pandas, this happens automatically, behind the scenes, and *very # quickly*. # ### Handling Missing Data # # For all elements in row `1` or columns # `["country", "year", "GovExpend", "Consumption", "GDP"]`, # the value in `im_ex_tiny` is `NaN`. # # This is how pandas represents *missing data*. # # So, when pandas was trying to look up the values in `df_tiny` and `im_ex`, it could # only find a value in one DataFrame: the other value was missing. # # When pandas tries to add a number to something that is missing, it says # that the result is missing (spelled `NaN`). # ## Setting the Index # # For a DataFrame `df`, the `df.set_index` method allows us to use one # (or more) of the DataFrame’s columns as the index. # # Here’s an example. # + hide-output=false # first, create the DataFrame df_year = df.set_index(["year"]) df_year.head() # - # Now that the year is on the index, we can use `.loc` to extract all the # data for a specific year. # + hide-output=false df_year.loc[2010] # - # This would be helpful, for example, if we wanted to compute the difference # in the average of all our variables from one year to the next. # + hide-output=false df_year.loc[2009].mean() - df_year.loc[2008].mean() # - # Notice that pandas did a few things for us. # # - After computing `.mean()`, the row labels (index) were the former column names. # - These column names were used to align data when we wanted asked pandas to # compute the difference. 
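#
# To see this label-based alignment in isolation, here is a minimal sketch with made-up toy Series (the labels and numbers below are purely illustrative and not part of the WDI data): the two Series share the labels `"a"` and `"c"` in different positions, and only one of them has a `"b"`.

# + hide-output=false
import pandas as pd

s1 = pd.Series([1.0, 2.0, 3.0], index=["a", "b", "c"])
s2 = pd.Series([30.0, 10.0], index=["c", "a"])  # different order, no "b"

# pandas matches on labels, not positions: "a" -> 11.0, "c" -> 33.0,
# and "b" (present only in s1) becomes NaN in the sum
s1 + s2
# -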
# # # Suppose that someone asked you, “What was the GDP in the US in 2010?” # # To compute that using `df_year` you might do something like this: # + hide-output=false df_year.loc[df_year["country"] == "United States", "GDP"].loc[2010] # - # That was a lot of work! # # Now, suppose that after seeing you extract that data, your friend asks you # “What about GDP in Germany and the UK in 2010?” # # To answer that question, you might write. # + hide-output=false df_year.loc[df_year["country"].isin(["United Kingdom", "Germany"]), "GDP"].loc[2010] # - # Notice that this code is similar to the code above, but now provides a result # that is ambiguous. # # The two elements in the series both have with label 2010. # # How do we know which is which? # # We might think that the first value corresponds to the United Kingdom because # that is what we listed first in the call to `isin`, but we would be wrong! # # Let’s check. # + hide-output=false df_year.loc[2010] # - # Setting just the year as index has one more potential issue: we will # get data alignment only on the year, which may not be sufficient. # # To demonstrate this point, suppose now you are asked to use our WDI dataset # to compute an approximation for net exports and investment in in 2009. # # As a seasoned economist, you would remember the expenditure formula for GDP is # written # # $$ # GDP = Consumption + Investment + GovExpend + Net Exports # $$ # # which we can rearrange to compute investment as a function of the variables in # our DataFrame… # # $$ # Investment = GDP - Consumption - GovExpend - Net Exports # $$ # # Note that we can compute NetExports as `Exports - Imports`. # + hide-output=false nx = df_year["Exports"] - df_year["Imports"] nx.head(19) # - # Now, suppose that we accidentally had a bug in our code that swapped # the data for Canada and Germany’s net exports in 2017. # # >**Note** # > # >This example is contrived, but if you were getting unclean data from # some resource or doing more complicated operations, this type of mistake # becomes increasingly likely. # + hide-output=false ca17 = nx.iloc[[0]] g17 = nx.iloc[[18]] nx.iloc[[0]] = g17 nx.iloc[[18]] = ca17 nx.head(19) # - # Notice that if we now add `nx` to the DataFrame and compute investment # pandas doesn’t complain. # + hide-output=false df_year["NetExports"] = nx df_year["Investment"] = df_year.eval("GDP - Consumption - GovExpend - NetExports") df_year.head(19) # - # Because we didn’t also have data alignment on the country, we would have overstated Canada’s investment by 281 billion USD and understated Germany’s by the # same amount. # # To make these types operation easier, we need to include both the year # and country in the index… # ### Setting a Hierarchical Index # # Include multiple columns in the index is advantageous in some situations. # # These situations might include: # # - When we need more than one piece of information (column) to identify an # observation (as in the Germany and UK GDP example above) # - When we need data-alignment by more than one column # # # To achieve multiple columns in the index, we pass a list of multiple column # names to `set_index`. # + hide-output=false wdi = df.set_index(["country", "year"]) wdi.head(20) # - # Notice that in the display above, the row labels seem to have two # *levels* now. # # The *outer* (or left-most) level is named `country` and the *inner* (or # right-most) level is named `year`. 
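#
# As a minimal, self-contained illustration (the values below are made up and independent of the WDI data), the same two-level structure can be built from a tiny DataFrame and inspected directly:

# + hide-output=false
import pandas as pd

toy = pd.DataFrame({
    "country": ["A", "A", "B", "B"],
    "year": [2000, 2001, 2000, 2001],
    "GDP": [1.0, 1.1, 2.0, 2.2],
}).set_index(["country", "year"])

toy.index.nlevels   # 2 levels
toy.index.names     # FrozenList(['country', 'year'])
# -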
# # When a DataFrame’s index has multiple levels, we (and the pandas documentation) # refer to the DataFrame as having a hierarchical index. # ### Slicing a Hierarchical Index # # Now, we can answer our friend’s questions in a much more straightforward way. # + hide-output=false wdi.loc[("United States", 2010), "GDP"] # + hide-output=false wdi.loc[(["United Kingdom", "Germany"], 2010), "GDP"] # - # As shown above, we can use `wdi.loc` to extract different slices of our # national accounts data. # # The rules for using `.loc` with a hierarchically-indexed DataFrame are # similar to the ones we’ve learned for standard DataFrames, but they are a bit # more elaborate as we now have more structure to our data. # # **Slicing rules** # # pandas slicing reacts differently to `list`s and `tuple`s. # # It does this to provide more flexibility to select the # data you want. # # `list` in row slicing will be an “or” operation, where it chooses rows # based on whether the index value corresponds to any element of the list. # # `tuple` in row slicing will be used to denote a single hierarchical # index and must include a value for each level. # # **Row slicing examples** # # 1. `wdi.loc["United States"]`: all rows where the *outer* most index value is # equal to `United States` # 1. `wdi.loc[("United States", 2010)]`: all rows where the *outer-most* index value # is equal to `"United States` and the second level is equal to `2010` # 1. `wdi.loc[["United States", "Canada"]]`: all rows where the *outer-most* index is # either `"United States"` or `"Canada"` # 1. `wdi.loc[(["United States", "Canada"], [2010, 2011]), :]`: all rows where the # *outer-most* index is either `"United States` or `"Canada"` AND where the # second level index is either `2010` or `2011` # 1. `wdi.loc[[("United States", 2010), ("Canada", 2011)], :]`: all rows where the the # two hierarchical indices are either `("United States", 2010)` or # `("Canada", 2011)` # # # We can also restrict `.loc` to extract certain columns by doing: # # 1. `wdi.loc[rows, GDP]`: return the rows specified by rows (see rules # above) and only column named `GDP` (returned object will be a # Series) # 1. `df.loc[rows, ["GDP", "Consumption"]]`: return the rows specified by rows # (see rules above) and only columns `GDP` and `Consumption` # ### Alignment with `MultiIndex` # # The data alignment features we talked about above also apply to a # `MultiIndex` DataFrame. # ### `pd.IndexSlice` # # When we want to extract rows for a few values of the outer index and all # values for an inner index level, we can use the convenient # `df.loc[[id11, id22]]` shorthand. # # We can use this notation to extract all the data for the United States and # Canada. # + hide-output=false wdi.loc[["United States", "Canada"]] # - # However, suppose we wanted to extract the data for all countries, but only the # years 2005, 2007, and 2009. # # We cannot do this using `wdi.loc` because the year is on the second level, # not outer-most level of our index. # # To get around this limitation, we can use the `pd.IndexSlice` helper. # # Here’s an example. # + hide-output=false wdi.loc[pd.IndexSlice[:, [2005, 2007, 2009]], :] # - # Notice that the `:` in the first part of `[:, ["A", "D"]]` # instructed pandas to give us rows for all values of the outer most index # level and that the `:` just before `]` said grab all the columns. # ### Multi-index Columns # # The functionality of `MultiIndex` also applies to the column names. # # Let’s see how it works. 
# + hide-output=false wdiT = wdi.T # .T means "transpose" or "swap rows and columns" wdiT # - # Notice that `wdiT` seems to have two levels of names for the columns. # # The same logic laid out in the above row slicing rules applies when we # have a hierarchical index for column names. # + hide-output=false wdiT.loc[:, "United States"] # + hide-output=false wdiT.loc[:, ["United States", "Canada"]] # + hide-output=false wdiT.loc[:, (["United States", "Canada"], 2010)] # - # ## Re-setting the Index # # The `df.reset_index` method will move one or more level of the index # back into the DataFrame as a normal column. # # With no additional arguments, it moves all levels out of the index and # sets the index of the returned DataFrame to the default of # `range(df.shape[0])`. # + hide-output=false wdi.reset_index() # - # ## Choose the Index Carefully # # So, now that we know that we use index and column names for # aligning data, “how should we pick the index?” is a natural question to ask. # # To guide us to the right answer, we will list the first two components # to [’s](http://hadley.nz/) description of [tidy # data](http://vita.had.co.nz/papers/tidy-data.html): # # 1. Each column should each have one variable. # 1. Each row should each have one observation. # # # If we strive to have our data in a tidy form (we should), then when # choosing the index, we should set: # # - the row labels (index) to be a unique identifier for an observation # of data # - the column names to identify one variable # # # For example, suppose we are looking data on interest rates. # # Each column might represent one bond or asset and each row might # represent the date. # # Using hierarchical row and column indices allows us to store higher # dimensional data in our (inherently) two dimensional DataFrame. # ### Know Your Goal # # The correct column(s) to choose for the index often depends on the context of # your analysis. # # For example, if I were studying how GDP and consumption evolved over time for # various countries, I would want time (year) and country name on the index # # On the other hand, if I were trying to look at the differences across countries # and variables within a particular year, I may opt to put the country and # variable on the index and have years be columns. # # Following the tidy data rules above and thinking about how you intend to *use* # the data – and a little practice – will enable you to consistently select the # correct index. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Movies have form www.imdb.com/title/tt2406566/ #atomic blonde # # To get full credits cast, writers, etc: # # look for
    # # inside it will have alternating a tags linking back to the actor names and pages # # # # # # To download full source of website: # view-source:http://www.imdb.com/title/tt2406566/fullcredits # import requests import urllib.parse as parse from time import sleep from requests import Request, Session title_nums = ['tt0085244'] #The Big Chill title_nums2 = ["tt0295700"] #Wrong Turn def make_urls(): base_url = "http://www.imdb.com/title/" urls = [] for title in title_nums: urls.append(base_url + title + '/') return urls # + def my_count(): n = 1000 while True: yield n n += 1 numbers = my_count() # - def start(my_session = None): urls = make_urls() print('Urls', urls) for url in urls: try: r = my_session.get(url) print("request headers",r.request.headers) print("response headers",r.headers) except Exception as e: print("accessing url", url) print('Exception encountered at position 1:', e) return r, urls def write_result(response, **kwargs): print('writing file from..',response.url) filename = "test1.html" with open(filename, 'wb') as f: f.write(response.content) f.write(response.url.encode('utf-8')) print('saved file %s' % filename) # + import asyncio import aiofiles import aiohttp base_url = 'http://stats.nba.com/stats' HEADERS = { 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5)' } async def get_players(player_args): endpoint = '/commonallplayers' params = {'leagueid': '00', 'season': '2016-17', 'isonlycurrentseason': '1'} url = base_url + endpoint print('Getting all players...') async with aiohttp.ClientSession() as session: async with session.get(url, headers=None, params=params, timeout=20) as resp: data = await resp.json() player_args.extend( [(item[0], item[2]) for item in data['resultSets'][0]['rowSet']]) async def get_player(player_id, player_name): endpoint = '/commonplayerinfo' params = {'playerid': player_id} url = base_url + endpoint print('Getting player', player_name) async with aiohttp.ClientSession() as session: async with session.get(url, headers=HEADERS, params=params) as resp: print(resp) data = await resp.text() async with aiofiles.open( '{}'.format({player_name.replace(" ", "_")}) + '.json', 'w') as file: await file.write(data) loop = asyncio.get_event_loop() player_args = [] loop.run_until_complete(get_players(player_args)) loop.run_until_complete( asyncio.gather( *(get_player(*args) for args in player_args) ) ) # - # !ls # !conda install -y aiofiles # !conda install -y aiohttp # + import requests def print_url(r, *args, **kwargs): print(r.url) hooks = dict(response=print_url) r = requests.get('http://httpbin.org', hooks=dict(response=print_url)) print(r.status_code) # - hooks=dict(reponse=print_url) hooks # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Let's Parse 1000 Version Messages From Real Bitcoin Nodes # # So it seems like we correctly translated all the relevant tables from the protocol documentation into Python code. But how can we be sure we didn't make any mistakes. We can probably never be completely sure, but a good way for us to get started would be to send `version` messages to a large number of Bitcoin nodes, listen for and decode their `version` replies using `VersionMessage` classmethods and seeing if things "make sense". 
# # `earn.com` offers [a free, unauthenticated API](https://bitnodes.earn.com/api/#list-nodes) where you can get a list of all visible Bitcoin full nodes. # # Execute this command in your terminal to see what kind of data this API gives us: # # ``` # curl -H "Accept: application/json; indent=4" https://bitnodes.earn.com/api/v1/snapshots/latest/ # ``` # # We can call this API directly from Python using the `requests` module: # + import requests from pprint import pprint def get_nodes(): url = "https://bitnodes.earn.com/api/v1/snapshots/latest/" response = requests.get(url) return response.json()["nodes"] # - nodes = get_nodes() pprint(nodes) # In particular, we can get a list of addresses using the `nodes.keys()`: # + def get_addr_tuples(): nodes = get_nodes() raw_addrs = nodes.keys() addr_tuples = [] for raw_addr in raw_addrs: ip, port = raw_addr.rsplit(":", 1) addr_tuple = (ip, int(port)) addr_tuples.append(addr_tuple) return addr_tuples addr_tuples = get_addr_tuples() print(addr_tuples) # + import downloader downloader.cleanup() addrs = downloader.get_addr_tuples() downloader.connect_many(addrs) # - # Do you notice how slow this is? # # My machine received 9513 addresses from earn.com, and is processes about 5 messages per second. This is going to take about 30 seconds to process everything. # # TOO SLOW!!! # # Now let's thing for a second. Why's it so slow? In fact, it's because we're spending almost all our time waiting for `sock.connect` or `sock.recv` to give us a return value. Our Python program is just sitting on its hands while packets fly across the world, one at a time. # # Isn't there something we could have our Python program work on while it waits? Couldn't we perhaps have it send a few messages at a time? # # The answer, or course, is "yes". But this requires "asynchronous programming". FIXME: insert youtube link # # I'm not going to attempt to fully explain how this works, but I'll once again give you a magical program that does what we want. # + import downloader downloader.cleanup() addrs = downloader.get_addr_tuples() downloader.async_connect_many(addrs) # - # So what the hell is going on here? # # These strings don't look like port numbers, and # + from collections import Counter from library import VersionMessage, Address def get_versions(): with open('versions.txt', 'rb') as f: lines = f.readlines() lines[:] = (value.strip() for value in lines if value != b'\n') return lines # + from collections import Counter vms = [] for raw in get_versions(): try: vm = VersionMessage.from_bytes(raw) vms.append(vm) except Exception as e: print(e) continue # - len(vms) for vm in vms: print(vm.addr_recv) ports_counter = Counter([addr.port for addr in addrs]) ports_counter.most_common(10) ip_counter = Counter([addr.ip for addr in addrs]) ip_counter.most_common(10) ips = Counter([addr.formatted_ip for addr in addrs]) ips # I get # # ``` # {IPv4Address('172.16.58.3'), IPv4Address('192.168.3.11')} # ``` # # '172.16.58.3' is my public ip address # # # all 53 which report 8333 also report the wrong ip address ... set([interpret_raw_ip(addr.ip) for addr in addrs if addr.port == 8333]) # + raw_wrong_ip = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xff\xc6\x1bd\t' for line in get_versions(): try: vm = VersionMessage.from_bytes(line) if vm.addr_recv.ip == raw_wrong_ip: print(vm.user_agent) except: continue # - # So there's something funky goin gon with that version of the bitcoin software. I would guess that it's hardcoding the port and ip. 
The reason I'm guessing it's hardcoded is because 8333 is the port that bitcoin core runs on. # # But not all node reporting this user agent get my ip / port wrong: # + satoshi_16_user_agent = b'/Satoshi:0.16.0/' for line in lines: try: vm = VersionMessage.from_bytes(line) a = Address.from_bytes(vm.addr_recv, version_msg=True) if vm.user_agent == satoshi_16_user_agent: print(interpret_raw_ip(a.ip), a.port) except: continue # - # At this point I think we can be reasonably confident that we've figured out how to parse ip addresses. But along the way it seems that we've also learned to not trust them! # + right_ip = b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xff\xffh\x05=\x04' formatted = ":".join([str(b) for b in right_ip[-4:]]) formatted # - # # comparing speeds / new code # %load_ext autoreload # %autoreload 2 from ibd.four.downloader import * addrs = get_addrs() print(f"got {len(addrs)} node addresses") fifty_addrs = addrs[:50] futures = connect_many_threadpool(fifty_addrs) start_stop_tups = futures_to_start_stop_tups(futures) futures = list(futures) futures results = [f.result() for f in futures if not f.exception()] results start_stop = [(start, stop) for (msg, start, stop) in results] start_stop = sorted(start_stop, key=lambda tup: tup[0]) start_stop # + import numpy as np import matplotlib.pyplot as plt start,stop = np.array(start_stop).T plt.barh(range(len(start)), stop-start, left=start) plt.grid(axis="x") plt.ylabel("Tasks") plt.xlabel("Seconds") # - # # Execute tasks in threadpool and graph results # %load_ext autoreload # %autoreload 2 # + from ibd.four.downloader import * addrs = get_addrs() print(f"got {len(addrs)} node addresses") fifty_addrs = addrs[:200] futures = connect_many_threadpool(fifty_addrs) start_stop_tups = threadpool_result_to_start_stop_tups(futures) graph_tasks(start_stop_tups) # - # NOTES # * "to event loop, or not to event loop" # * should i split the concurrency lesson into 2 -- teach some here and some during the "download blocks from multiple peers" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # [NTDS'18] milestone 3: spectral graph theory # [ntds'18]: https://github.com/mdeff/ntds_2018 # # [](http://deff.ch), [EPFL LTS2](https://lts2.epfl.ch) # --- # ## Students # # * Team: 6 # * Students: , , , # * Dataset: Flights routes # --- # ## Rules # # * Milestones have to be completed by teams. No collaboration between teams is allowed. # * Textual answers shall be short. Typically one to two sentences. # * Code has to be clean. # * You cannot import any other library than we imported. # * When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks. # * The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter. # --- # ## Objective # # The goal of this milestone is to get familiar with the graph Laplacian and its spectral decomposition. 
# --- # ## 0 Load your network # %matplotlib inline # *If you get a `No module named 'sklearn'` error when running the below cell, install [scikit-learn](https://scikit-learn.org) with `conda install scikit-learn` (after activating the `ntds_2018` environment).* import numpy as np from scipy import sparse from scipy import linalg import scipy.sparse.linalg import matplotlib as mpl mpl.style.use('seaborn') import matplotlib.pyplot as plt from sklearn.cluster import KMeans # for ground truth checking import pandas as pd # *Let's denote your graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of edges, $A \in \mathbb{R}^{N \times N}$ is the (weighted) adjacency matrix, and $N = |\mathcal{V}|$ is the number of nodes.* # # *Import the adjacency matrix $A$ that you constructed in the first milestone. # (You're allowed to update it between milestones if you want to.)* # + adjacency = np.load("adjacency.npy") adjacency_unweighted = np.copy(adjacency) adjacency_unweighted[adjacency_unweighted!=0]=1 degrees = np.sum(adjacency_unweighted, axis = 0) n_nodes = adjacency_unweighted.shape[0] ## We are removing those edges where the weight is smaller thane the threshold threshold = 20 node_map = np.where(degrees >= threshold)[0] adjacency_th = np.delete(adjacency_unweighted,np.where(degrees < threshold),0) adjacency_th = np.delete(adjacency_th,np.where(degrees < threshold),1) degrees_th = np.sum(adjacency_th, axis = 0) n_nodes_th = adjacency_th.shape[0] adjacency_csr = sparse.csr_matrix(adjacency_unweighted); degree_matrix_csc = sparse.diags(degrees,format = "csc") # - # --- # ## 1 Graph Laplacian # --- # ### Question 1 # # *From the (weighted) adjacency matrix $A$, compute both the combinatorial (also called unnormalized) and the normalized graph Laplacian matrices.* # # *Note: if your graph is weighted, use the weighted adjacency matrix. If not, use the binary adjacency matrix.* # # *For efficient storage and computation, store these sparse matrices in a [compressed sparse row (CSR) format](https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_.28CSR.2C_CRS_or_Yale_format.29).* laplacian_combinatorial_csr = sparse.csr_matrix(degree_matrix_csc - adjacency_csr); inv_degree_matrix_csr = sparse.linalg.inv(degree_matrix_csc).tocsr() sqrt_inv_degree_matrix_csr = sparse.csr_matrix.sqrt(inv_degree_matrix_csr) laplacian_normalized_csr = sqrt_inv_degree_matrix_csr * laplacian_combinatorial_csr * sqrt_inv_degree_matrix_csr # *Use one of them as the graph Laplacian $L$ for the rest of the milestone. # We however encourage you to run the code with both to get a sense of the difference!* laplacian = laplacian_normalized_csr # --- # ### Question 2 # # *Compute the eigendecomposition of the Laplacian $L = U^\top \Lambda U$, where the columns $u_k \in \mathbb{R}^N$ of $U = [u_1, \dots, u_N] \in \mathbb{R}^{N \times N}$ are the eigenvectors and the diagonal elements $\lambda_k = \Lambda_{kk}$ are the corresponding eigenvalues.* # # *Make sure that the eigenvalues are ordered, i.e., $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_N$.* # + # calculate eigenvalues and eigenvectors [eigenvalues, eigenvectors] = sparse.linalg.eigsh(laplacian_normalized_csr, k = n_nodes-1, which = 'LM') #This function will not return the first eigenvalue 0 because we know that the first eigenvalue is always 0 #So when we now look at eigenvalues we should not forget that there is one more 0. 
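#(with which='LM' and k=n_nodes-1, eigsh keeps the n_nodes-1 largest-magnitude eigenvalues,
# so the single eigenvalue it leaves out is the smallest one, i.e. the eigenvalue 0)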
# sort the resulting values and vectors sortID = np.argsort(eigenvalues) eigenvalues = eigenvalues[sortID] eigenvalues[eigenvalues < 10**(-10)] = 0 eigenvectors = eigenvectors[:,sortID] eigenvectors[eigenvectors < 10**(-10)] = 0 # - # * *Justify your choice of eigensolver.* # **Answer** # # # The Laplacian is always a symmetric real valued matrix and therefore a Hermitian matrix as well. So we can use the solver for sparse Hermitian matrices. # --- # ### Question 3 # # * *We can write $L = S S^\top$. What is the matrix $S$? What does $S^\top x$, with $x \in \mathbb{R}^N$, compute?* # **Answer** # # The matrix S is the incidence matrix having as rows the nodes and as columns the edges ($S \in \mathbb{R}^{NxM}$, N the number of nodes and M the number of edges). For an unweighted graph, $S_{i,j}$ = 1, for an edge $e_j$ between two vertexes $v_{i,k}$, -1 for $e_j$ between $v_{k,i}$ and 0 otherwise (the choice of sign is arbitrary, but it has to be once positive and once negative for a given edge). For a weighted graph, the entries are $\sqrt{w_{i,k}}$ and $-\sqrt{w_{i,k}}$, instead of +1 and -1, respectively. # # $S^\top$ acts as a graph gradient, and $S^\top x$ for a weighted graph computes: # # \begin{equation} # (S^\top x )[j]= \sqrt{w_{i,k}} (x[i] - x[k]) # \end{equation} # # which corresponds to the derivative of x along edge j # --- # ### Question 4 # # * *Show that $\lambda_k = \| S^\top u_k \|_2^2$, where $\| \cdot \|_2^2$ denotes the squared Euclidean norm (a.k.a. squared $L^2$ norm).* # **Answer** # # \begin{equation} # \begin{split} # \lambda_k & = u_k^\top L u_k \text{, result of the eigendecomposition of $L$ with $u_k$ being a unit-vector}\\ # & = u_k^\top S S^\top u_k \text{ , with $L = S S^\top$}\\ # & = (S^\top u_k)^\top S^\top u_k \text{ , where the order of factors reverses when taking the transpose}\\ # & =\| S^\top u_k \|_2^2 \text{ , with $x^\top x$ being the squared Euclidean norm}\\ # \end{split} # \end{equation} # * *What does the quantity $\| S^\top x \|_2^2$ tell us about $x$?* # **Answer** # # This quantity tells us how "smooth" x is, i.e. a larger quantity means a higher variation among the x vector components. # --- # ### Question 5 # # * *What is the value of $u_0$, both for the combinatorial and normalized Laplacians?* # **Answer** # # * **Combinatorial Laplacian** # # $u_0$ corresponds to the eigenvalue 0 and cannot be the vector 0. From the definition of the eigendecomposition we have: # # \begin{equation} # L u_0 = \lambda_0 u_0 = 0 # \end{equation} # # Multiply by $u_0^\top$ gives: # \begin{equation} # u_0^\top L u_0 = u_0^\top \cdot 0 = 0 # \end{equation} # # Since $u_0^\top L u_0$ is defined as follows: # \begin{equation} # u_0^\top L u_0 = \frac{1}{2} \sum_{\substack{u,v} \in E} w_{u,v}(u_0[u]-u_0[v])^2 # \end{equation} # # We get: # \begin{equation} # \sum_{\substack{u,v} \in E} w_{u,v}(u_0[u]-u_0[v])^2 = 0 # \end{equation} # # For this equation to hold, we need to have $u_0[u] = u_0[v]$ for any edge (u,v) $\in E$. In the case where the graph is connected, we get that $u_0[i] = u_0[k]$ for every i,k $\in V$. From this we get that $u_0$ equals: # \begin{align} # u_0 &= \alpha \begin{bmatrix} # 1 \\ # 1 \\ # \vdots \\ # 1 # \end{bmatrix} # \end{align} # # with $\alpha \in \mathbb{R}^*$, or to have a unit vector, $\alpha$ needs to equal $\alpha = \frac{1}{\sqrt{N}}$, where $N$ is the number of connected nodes. Thus, the value of $u_0$ is the unit vector $e$. 
# # - **Normalized Laplacian** # # Let's call $u_0'$ the eigenvector of the normalized Laplacian $L_{norm}$. # # From the theory, we know that $u_0' = D^{\frac{1}{2}} u_0$, and hence $u_0' = D^{\frac{1}{2}} e$ is the eigenvector of $L_{norm}$ of eigenvalue 0. # --- # ### Question 6 # # - *Look at the spectrum of the Laplacian by plotting the eigenvalues. # Comment on what you observe.* plt.plot(eigenvalues.real, "b." , markersize = 1) plt.xlabel('Eigenvalue Index') plt.ylabel('Eigenvalue') # **Answer** # # As espected the eigenvalues are in the domain $[0,2]$. The first eigenvalues which equal to 0 represent the number of connected components. # # It is interesting to observe that a lot of eigenvalues equal 1. In this notebook we will not further investigate into this property, but we will keep it in mind for the final report. # # - *How many connected components are there in your graph? Answer using the eigenvalues only.* # in addition to the 0-multiplicity we add 1 which corresponds to the first eigenvalue 0 which is omitted in the computation n_conn_comp = eigenvalues[eigenvalues == 0].shape[0] + 1 print("Number of connected components:",n_conn_comp) # There are **8** connected components. # - *Is there an upper bound on the eigenvalues, i.e., what is the largest possible eigenvalue? Answer for both the combinatorial and normalized Laplacians.* # **Answer** # # For normalized Laplacians the upper bound on the eigenvalues is 2, where equality holds iff the graph is bipartite. # # For our graph (normalized graph and the threshold graph), we observed that the upper bound of the eigenvalues of the combinatorial Laplacian was the same than the maximum degree. Defining the upper bound is an active field of research, and diverse theorems of tight bounds can be found in the literature. # --- # ## 3 Laplacian eigenmaps # # *Laplacian eigenmaps* is a method to embed a graph $\mathcal{G}$ in a $d$-dimensional Euclidean space. # That is, it associates a vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$. # The graph $\mathcal{G}$ is thus embedded as $Z \in \mathbb{R}^{N \times d}$. # --- # ### Question 7 # # * *What do we use Laplacian eigenmaps for? (Or more generally, graph embeddings.)* # **Answer** # # We use Laplacian eigenmaps for **dimensionality reduction**. Often, graph data is intrinsically low dimensional, but lies in a very high-dimensional space. Thus, **mapping a network to a vector space and reducing the dimension, while preserving relevant graph properties**, can be useful for faster **computation, machine learning algorithms, statistics, or visualization**. # --- # ### Question 8 # # *Embed your graph in $d=2$ dimensions with Laplacian eigenmaps. # Try with and without re-normalizing the eigenvectors by the degrees, then keep the one your prefer.* # # *Recompute the eigenvectors you need with a partial eigendecomposition method for sparse matrices. # When $k \ll N$ eigenvectors are needed, partial eigendecompositions are much more efficient than complete eigendecompositions. 
# A partial eigendecomposition scales as $\Omega(k |\mathcal{E}|$), while a complete eigendecomposition costs $\mathcal{O}(N^3)$ operations.* # + #Creating the matrix in the good format adjacency_lc_csr = sparse.csr_matrix(adjacency_th); degrees_lc = np.sum(adjacency_th, axis = 0) degree_matrix_lc_csc = sparse.diags(degrees_lc,format = "csc") n_nodes_lc = degrees_lc.shape[0] #Computation of the laplacian for our graph laplacian_combinatorial_lc_csr = sparse.csr_matrix(degree_matrix_lc_csc - adjacency_lc_csr); inv_degree_matrix_lc_csr = sparse.linalg.inv(degree_matrix_lc_csc).tocsr() sqrt_inv_degree_matrix_lc_csr = sparse.csr_matrix.sqrt(inv_degree_matrix_lc_csr) laplacian_normalized_lc_csr = sqrt_inv_degree_matrix_lc_csr * laplacian_combinatorial_lc_csr * sqrt_inv_degree_matrix_lc_csr # - def graph_embedding(laplacian_normalized_csr, d): [eigenvalues, eigenvectors] = sparse.linalg.eigsh(laplacian_normalized_csr, k = d+1, which = 'SM') sortID = np.argsort(eigenvalues) eigenvalues = eigenvalues[sortID] eigenvectors = eigenvectors[:,sortID] proj = eigenvectors[:,1:d+1] return proj # * *Plot the nodes embedded in 2D. Comment on what you see.* proj = graph_embedding(laplacian_normalized_lc_csr, 2) plt.plot(proj[:,0],proj[:,1], '.') # **Answer** # # With the embedding in two dimensions, airports that are connected by a flight now appear close to each other in the plot. We start to see clusters that naturally emerge form the geometry. Since the clusters represent airports that are well connected to each other, there might be another meaning to it such as financially, economically, and culturally similar countries. # --- # ### Question 9 # What does the embedding $Z \in \mathbb{R}^{N \times d}$ preserve? # **Answer** # # The embedding $Z \in \mathbb{R}^{N \times d}$ preserves relevant graphs properties such as the number of data points N, while reducing the dimension d. # The goal of the Laplacian eigenmap algorithm is to preserve local information optimally. All the point node that are close to each other, meaning nodes connected by an edge with a large weight, must remain close to each other in the embedding. # --- # ## 2 Spectral clustering # # *Spectral clustering is a method to partition a graph into distinct clusters. # The method associates a feature vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$, then runs [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) in the embedding space $\mathbb{R}^d$ to assign each node $v_i \in \mathcal{V}$ to a cluster $c_j \in \mathcal{C}$, where $k = |\mathcal{C}|$ is the number of desired clusters.* # For this part, we have chosen to use the unweighted adjacency matrix, because we feel that our graph will probably be clustered according to geographical features. Moreover we kept only the nodes with a degree larger than 20, because we want to see only the airports (=nodes) that are significant (an airport with less than 20 flights is not very significant). By applying this threshold, we also assure that our graph is connected. #Computation of the eigenvalues and eigenvectors for our graph [eigenvalues_lc, eigenvectors_lc] = sparse.linalg.eigsh(laplacian_normalized_lc_csr, k = n_nodes_lc-1, which = 'LM') #This function will not return the first 0 because it is assuming it is always there. #So when we now look at eigenvalues we had not to forget that there is one more 0. 
sortID = np.argsort(eigenvalues_lc) eigenvalues_lc = eigenvalues_lc[sortID] eigenvectors_lc = eigenvectors_lc[:,sortID] n_conn_comp = eigenvalues_lc[eigenvalues_lc == 0].shape[0]+1 eigenvalues_lc[eigenvalues_lc < 10**(-10)] = 0 print("Number of connected components:",n_conn_comp) # Graph Spectrum plt.plot(eigenvalues_lc, "b." , markersize = 1) plt.title("Graph Spectrum") plt.xlabel("Numbers of the eigenvalues") plt.ylabel("Values of eigenvalues") # --- # ### Question 10 # # * *Choose $k$ and $d$. How did you get to those numbers?* # **Answer** # - k is chosen according to the graph of eigenvalues : the number of eighenvalues before the "gap" corresponds to k. k is the number of clusters. # - d is the dimension of the embedding space, it has to be chosen in order to preserve local infomation optimally in a certain sense. (after Belkin's book). # + #Plot eigenvalues repartitions x = np.ones_like(range(len(eigenvalues_lc))) plt.plot(eigenvalues_lc,x,"b.",markersize = 10) plt.xlim([0,0.5]) plt.title("Repartition of eigenvalues (zoom on the part next to 0)") plt.xlabel("Value of the eigenvalue") ##count how many values we have until the gap nb_before_1rst=len(eigenvalues_lc[np.where(eigenvalues_lc< 0.2)])+1 # +1 correspond to the first 0 not in the list of eigenvalues nb_before_2nd=len(eigenvalues_lc[np.where(eigenvalues_lc< 0.4)])+1 # +1 correspond to the first 0 not in the list of eigenvalues print("number of eigenvalues before 1st gap :",nb_before_1rst) print("number of eigenvalues before 2nd gap :",nb_before_2nd) # - # - As we see on the graph, we have two big gaps. The first corresponds to k=3 and the second to k=6. We will test these two possibilities # - We have decided to take d=k, because d is the number of eigenvectors to keep when applying the K-means. If d 2$ clusters, run $k$-means on $Z$. Don't implement $k$-means, use the `KMeans` class imported from scikit-learn.* # We choose to do the embedding with the eigenvectors obtained from the eigendecomposition of the normalized Laplacian. # + # For k=3 and d=3 k = 3; d = 3 H = eigenvectors_lc[:,:d]; clusters3 = KMeans(n_clusters=k, random_state=0).fit_predict(H) print("----- For k=",k," and d=",d," -----") print("Number of elements in clusters :") for i in range(k): cnt = 0 for j in clusters3: if j == i: cnt +=1 print("Cluster ",i+1,":",cnt) #For k=6 and d=6 k = 6; d = 6 H = eigenvectors_lc[:,:d]; clusters6 = KMeans(n_clusters=k, random_state=0).fit_predict(H) print() print("----- For k=",k," and d=",d," -----") print("Number of elements in clusters :") for i in range(k): cnt = 0 for j in clusters6: if j == i: cnt +=1 print("Cluster ",i+1,":",cnt) # - #For k=2, we have : fiedler_vect = np.sign(eigenvectors_lc[:,0]) nb_neg=len(fiedler_vect[np.where(fiedler_vect==-1)]) nb_pos=len(fiedler_vect[np.where(fiedler_vect==1)]) print("----- For k=2 (this is just an exemple, it has no real sense for our graph) -----") print("Number of elements in clusters :") print("Cluster labeled +1 :",nb_pos) print("Cluster labeled -1 :",nb_neg) # --- # ### Question 12 # # - *Use the computed cluster assignment to reorder the adjacency matrix $A$. # What do you expect? 
What do you observe?* # + # For k=3 new_order3 = np.array([],dtype = int) for i in range(3): new_order3 = np.append(new_order3,np.where(clusters3 == i)) plt.spy(adjacency_th[:,new_order3][new_order3], markersize=1) plt.title("Reordered ajacency matrix for k=3") #For k=6 plt.figure() new_order6 = np.array([],dtype = int) for i in range(6): new_order6 = np.append(new_order6,np.where(clusters6 == i)) plt.spy(adjacency_th[:,new_order6][new_order6], markersize=1) plt.title("Reordered ajacency matrix for k=6") # - # We were expecting a block diagonal matrix or something close to it, because the nodes of one cluster are supposed to be mostly connected with the nodes of this same cluster. Of course the clusters are not fully independent (in this case, our graph would have several distinct components). # - For k=3, we observe 3 blocks in the diagonal. However, those blocks seems to have "internal" blocks : that means that it may exist some "internal" clusters in clusters. This is fully visible in the case k=6. # - For k=6, the diagonal is composed of 6 blocks. Some blocks (like the 3rd - 167 nodes) are much bigger than others (like the 2nd - 11 nodes). The blocks are still connected with the other blocks even if they are mostly link with themselves. # --- # ### Question 13 # # *If you have ground truth clusters for your dataset, compare the cluster assignment from spectral clustering to the ground truth. # A simple quantitative measure is to compute the percentage of nodes that have been correctly categorized. # If you don't have a ground truth, qualitatively assess the quality of the clustering.* # # *Ground truth clusters are the "real clusters". # For example, the genre of musical tracks in FMA, the category of Wikipedia articles, the spammer status of individuals, etc. # Look for the `labels` in the [dataset descriptions](https://github.com/mdeff/ntds_2018/tree/master/projects/README.md).* # Since there are no labels in our dataset, we have chosen to check our hypothesis of geographical clusters. In order to do that we import back our dataframe of airports and flights. # + # import of routes routes = pd.read_csv('routes.dat', sep=',', encoding='utf-8', engine='python') routes.columns = ['Airline','Airline ID','Source Airport','Source Airport ID','Destination Airport','Destination Airport ID','Codeshare','Stops','Equipment'] routes = routes.drop(columns=['Source Airport ID','Destination Airport ID']) # import of source and destination airport source_airports = routes[['Source Airport']] source_airports = source_airports.rename(columns={'Source Airport':'Airport'}) dest_airports = routes[['Destination Airport']] dest_airports = dest_airports.rename(columns={'Destination Airport':'Airport'}) # creation of a dataframe with all airport and airport_idx # (we use airport_idx insteed of airportID because some airports have no airportID) airports = pd.concat([source_airports,dest_airports]) airports = airports.drop_duplicates() airports.reset_index(inplace=True) airports = airports.drop(columns=['index']) airports.reset_index(inplace=True) airports = airports.set_index('Airport') airports = airports.rename(columns={'index':'airport_idx'}) # + #For clustering with k=3 print("--------------------------------- For k=3 ---------------------------------\n") for i in range(3): print("Cluster",i+1," :\n",airports.index[node_map[np.where(clusters3 == i)]].values) # - # For k=3, we observe that our cluster are relative to the continents. 
# - **Cluster 1:** European airports (e.g. Geneva, Lyon, Dublin, Rome, Vienna) + some Indonesian airports (Jawa Timur, Jakarta)
# - **Cluster 2:** Airports of the American continent (e.g. Peru, the USA, Canada)
# - **Cluster 3:** Asian continent (e.g. China, Japan)
#
# We notice that some airports are badly clustered if we consider geographical clusters: for example ITM (Osaka Airport, Japan) ends up in Cluster 1 ("Europe"). Moreover, the mix of geographic areas in Cluster 1 shows that sub-clusters exist. Choosing a higher k will probably give a result closer to the geographical "reality".

# +
# For clustering with k=6
print("--------------------------------- For k=6 ---------------------------------\n")
for i in range(6):
    print("Cluster", i+1, " :\n", airports.index[node_map[np.where(clusters6 == i)]].values)
# -

# For k=6, we observe that the clusters correspond to smaller areas (countries or regions).
# - **Cluster 1:** North and Central America (Peru, Colombia, and many cities of the USA)
# - **Cluster 2:** South America (Brazil, Argentina)
# - **Cluster 3:** Europe (France, Switzerland, Italy, UK, etc.)
# - **Cluster 4:** Asia and Indonesia (Philippines, Thailand, China, Japan, etc.)
# - **Cluster 5:** Africa and the Middle East (Cote d'Ivoire, Egypt, Saudi Arabia, Iran, etc.)
# - **Cluster 6:** Russia (+ some countries linked to Russia, like Kazakhstan)
#
# This clustering is qualitatively convincing and shows some meaningful relations between countries. The case of Russia is quite revealing: it forms a separate cluster that includes many countries of the former Soviet Union.

# ---
# ### Question 14
#
# Plot the cluster assignment (one color per cluster) on the 2D embedding you computed above with Laplacian eigenmaps.

# +
colors = ['Blue', 'Red', 'Green']
for i in range(3):
    cluster = proj[np.where(clusters3 == i)[0], :].real
    plt.plot(cluster[:,0], cluster[:,1], '.', color=colors[i])
# -

# **Answer**
#
# - **Blue:** Europe + Indonesia (Jawa Timur, Jakarta)
# - **Red:** America
# - **Green:** Asia

# ---
# ### Question 15
#
# Why did we use the eigenvectors of the graph Laplacian as features? Could we use other features for clustering?

# **Answer**
#
# The eigenvectors embed the nodes in a lower-dimensional space in which similar nodes (in the sense of their edge weights) stay close together. Similar nodes are therefore close with respect to the Euclidean distance used by the $k$-means clustering, which makes the eigenvectors a good choice of features for clustering nodes.
#
# We could have used other features, such as the eigenvectors of the adjacency matrix.
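# As a quick illustration of that last remark, the minimal sketch below clusters the nodes on the leading eigenvectors of the adjacency matrix and compares the resulting partition with the Laplacian-based one. It is only a sketch: it assumes that `adjacency_th` is indexed consistently with `clusters3` (as in the reordering cell above) and that `numpy` and `KMeans` are already imported, as earlier in the notebook.

# +
# Sketch (assumption): use spectral features of the adjacency matrix instead of the Laplacian.
from sklearn.metrics import adjusted_rand_score

A = np.asarray(adjacency_th.todense()) if hasattr(adjacency_th, "todense") else np.asarray(adjacency_th)
eigval_A, eigvec_A = np.linalg.eigh(A)            # the adjacency matrix is symmetric, so eigh applies
H_adj = eigvec_A[:, np.argsort(eigval_A)[-3:]]    # eigenvectors of the 3 largest eigenvalues
clusters3_adj = KMeans(n_clusters=3, random_state=0).fit_predict(H_adj)

# Adjusted Rand index: 1.0 means identical partitions, values near 0 mean chance-level agreement.
print("Agreement with the Laplacian-based clustering:",
      adjusted_rand_score(clusters3, clusters3_adj))
# -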
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd from datetime import datetime,timedelta import numpy as np import matplotlib.pyplot as plt import xgboost from sklearn.metrics import explained_variance_score from sklearn import metrics from sklearn.grid_search import GridSearchCV pd.options.display.max_rows = 300 pd.options.display.max_columns = 100 # - data = pd.read_csv('preprocessed_train.csv') data[(data['scheduledArrival'].isnull())&(data['actualArrival'].isnull())] data[(data['scheduledDeparture'].isnull())&(data['actualDeparture'].isnull())] data[(data['scheduledDeparture'].isnull())&(data['scheduledArrival'].isnull())] data.scheduledArrival[0] data['runDate']= pd.to_datetime(data['runDate']) data['scheduledArrival']= pd.to_datetime(data['scheduledArrival']) data['scheduledDeparture']= pd.to_datetime(data['scheduledDeparture']) data['actualArrival']= pd.to_datetime(data['actualArrival']) data['actualDeparture']= pd.to_datetime(data['actualDeparture']) type(data.scheduledArrival[0])==pd._libs.tslibs.nattype.NaTType # ## Correcting values in scheduled departure column def correct_scheduled_arrival(x): if (type(x[0])==pd._libs.tslibs.nattype.NaTType) or (type(x[1])==pd._libs.tslibs.nattype.NaTType): return x[0] elif x[0] > x[1]: x[0] = x[0] - timedelta(days=1) return x[0] data['scheduledArrival'] = data[['scheduledArrival','scheduledDeparture']].apply(lambda x:correct_scheduled_arrival(x),axis=1) data[data.scheduledArrival>data.scheduledDeparture] data.isnull().sum() data[data.actualDeparture.isnull()] data.head(100) # + #function for checking delay values def check_delay_values(x): if type(x[1])==pd._libs.tslibs.nattype.NaTType: return 0 else: return x[0] data['previous_arrival_delay'] = data.groupby(['runDate','trainCode'])['ArrivalDelay'].shift(fill_value=0) data['previous_departure_delay'] = data.groupby(['runDate','trainCode'])['DepartureDelay'].shift(fill_value=0) data['previous_departure_delay'] = data[['previous_departure_delay','scheduledDeparture']].apply(lambda x: check_delay_values(x),axis=1) data['arrival_delay_for_each_stop'] = data['ArrivalDelay']-data['previous_arrival_delay'] data['departure_delay_for_each_stop'] = data['DepartureDelay']-data['previous_departure_delay'] # - data.head() # ## Treating null values in scheduledArrival, scheduledDeparture and actualDeparture columns def null_value_imputation(x): if type(x[0])==pd._libs.tslibs.nattype.NaTType: x[0] = x[1] + timedelta(days=x[2]) return x[0] else: return x[0] # + data['scheduledArrival'] = data[['scheduledArrival','runDate','dayCount']].apply(lambda x: null_value_imputation(x),axis=1) data['scheduledDeparture'] = data[['scheduledDeparture','runDate','dayCount']].apply(lambda x: null_value_imputation(x),axis=1) # - data.head() # + # data['actualDeparture'] = data[['actualDeparture','actualArrival']].apply(lambda x: null_value_imputation(x),axis=1) # - data.isnull().sum() data[data.actualDeparture.isnull()] data['actualDeparture'] = data[['actualDeparture','runDate','dayCount']].apply(lambda x: null_value_imputation(x),axis=1) data[data.actualDeparture.isnull()] data.dropna(inplace=True) data.reset_index(inplace=True,drop=True) data.runDate.min() data.runDate.max() data['scheduledArrival_delta'] = (data['scheduledArrival']-data['runDate']).apply(lambda x: x.total_seconds() / 60) data['scheduledDeparture_delta'] = 
(data['scheduledDeparture']-data['runDate']).apply(lambda x: x.total_seconds() / 60) data['actualArrival_delta'] = (data['actualArrival']-data['runDate']).apply(lambda x: x.total_seconds() / 60) data['actualDeparture_delta'] = (data['actualDeparture']-data['runDate']).apply(lambda x: x.total_seconds() / 60) # + # data['Stoppage_time'] = (data['scheduledDeparture']-data['scheduledArrival']).apply(lambda x: x.total_seconds() / 60) # - data data.columns data2 = data.copy() data2.sort_values(ascending=[True,True,True],by=['runDate','trainCode','distance'],inplace=True) station_dummies = pd.get_dummies(data2.stations) station_dummies trainCode_dummies = pd.get_dummies(data2.trainCode) trainCode_dummies data2 = pd.concat([data2,station_dummies,trainCode_dummies],axis=1) data2 #not going to use standard train_test_split beacause i want to split data frame based on run date train_size = int(len(data2) * 0.75) test_size = len(data2) - train_size train, test = data2.iloc[0:train_size], data2.iloc[train_size:len(data2)] print(train.shape, test.shape) # + # train_run_time_df = data2.groupby(['trainCode','runDate']).agg({'scheduledDeparture':'min','scheduledArrival':'max'}) # train_run_time_df['train_run_time'] = (train_run_time_df['scheduledArrival'] - train_run_time_df['scheduledDeparture']).apply(lambda x: x.total_seconds() / 60) # train_run_time_df.reset_index(inplace=True) # train_run_time_df # + # train_run_time_df.train_run_time.unique() # - data2.dayCount.value_counts() # + # train_wise_avg_delay = train.groupby(['trainCode','stations']).agg({'arrival_delay_for_each_stop':'mean','departure_delay_for_each_stop':'mean'}) # train_wise_avg_delay.columns=['train_wise_avg_arrival_delay', 'train_wise_avg_departure_delay'] # train_wise_avg_delay.reset_index(inplace=True) # train_wise_avg_delay # + # station_wise_avg_delay = train.groupby(['stations']).agg({'ArrivalDelay':'mean','DepartureDelay':'mean'}) # station_wise_avg_delay.columns=['station_wise_avg_arrival_delay', 'station_wise_avg_departure_delay'] # station_wise_avg_delay.reset_index(inplace=True) # station_wise_avg_delay # + # train = pd.merge(train,train_run_time_df[['trainCode','runDate','train_run_time']],on=['trainCode','runDate'],how='left') # train = pd.merge(train,train_wise_avg_delay,on=['trainCode','stations'],how='left') # # train = pd.merge(train,station_wise_avg_delay,on=['stations'],how='left') # + # test = pd.merge(test,train_run_time_df[['trainCode','runDate','train_run_time']],on=['trainCode','runDate'],how='left') # test = pd.merge(test,train_wise_avg_delay,on=['trainCode','stations'],how='left') # # test = pd.merge(test,station_wise_avg_delay,on=['stations'],how='left') # - train.drop(columns=['trainStationId','scheduledArrival','scheduledDeparture','actualArrival','actualDeparture'],inplace=True) train test.drop(columns=['trainStationId','scheduledArrival','scheduledDeparture','actualArrival','actualDeparture'],inplace=True) test data.corr() # # Arrival Model data3 = data.copy() data3.head() data3['trainCode'].unique() trainCode_encoding = {data3['trainCode'].unique()[i]: i+1 for i in range(len(data3['trainCode'].unique()))} trainCode_encoding stations_encoding = {data3['stations'].unique()[i]: i+1 for i in range(len(data3['stations'].unique()))} stations_encoding data3["trainCode"] = data3["trainCode"].apply(lambda x: trainCode_encoding[x]) data3["trainCode"] data3["stations"] = data3["stations"].apply(lambda x: stations_encoding[x]) data3["stations"] data3.columns feature_cols = ['stations', 'trainCode', 'distance', 
'dayCount','scheduledArrival_delta', 'scheduledDeparture_delta', ] #not going to use standard train_test_split beacause i want to split data frame based on run date train_size = int(len(data3) * 0.75) test_size = len(data3) - train_size train, test = data3.iloc[0:train_size], data3.iloc[train_size:len(data3)] print(train.shape, test.shape) # + X_train, y_train = train[feature_cols], train['ArrivalDelay'] X_test, y_test = test[feature_cols], test['ArrivalDelay'] # + # Parameter Tuning model = xgboost.XGBRegressor() param_dist = {"max_depth": [5,10,15], "min_child_weight" : [1,3,6], "n_estimators": [100,200,300], "learning_rate": [0.05, 0.03,0.01],} grid_search = GridSearchCV(model, param_grid=param_dist, cv = 3, verbose=10, n_jobs=-1) # - grid_search.fit(X_train, y_train) grid_search.best_estimator_ arrival_xgb = xgboost.XGBRegressor(base_score=0.5, booster='gbtree', colsample_bylevel=0.7, colsample_bynode=0.7, colsample_bytree=0.7, gamma=0.25, gpu_id=-1, importance_type='gain', interaction_constraints='', learning_rate=0.03, max_delta_step=0, max_depth=6, min_child_weight=7, missing=np.nan, monotone_constraints='()', n_estimators=100, n_jobs=8, num_parallel_tree=1, objective='reg:squarederror', random_state=0, reg_alpha=0.8, reg_lambda=0.8, scale_pos_weight=1, subsample=0.5, tree_method='exact', validate_parameters=1, verbosity=None) arrival_xgb.fit(X_train,y_train) from sklearn.metrics import mean_squared_error import math print(mean_squared_error(y_test, y_pred)) print(math.sqrt(mean_squared_error(y_test, y_pred))) from sklearn.metrics import r2_score r2_score(y_test, y_pred) # # Departure Model X_train, y_train = train[feature_cols], train['DepartureDelay'] X_test, y_test = test[feature_cols], test['DepartureDelay'] delay_xgb = xgboost.XGBRegressor(base_score=0.5, booster='gbtree', colsample_bylevel=0.7, colsample_bynode=0.7, colsample_bytree=0.7, gamma=0.25, gpu_id=-1, importance_type='gain', interaction_constraints='', learning_rate=0.03, max_delta_step=0, max_depth=6, min_child_weight=7, missing=np.nan, monotone_constraints='()', n_estimators=100, n_jobs=8, num_parallel_tree=1, objective='reg:squarederror', random_state=0, reg_alpha=0.5, reg_lambda=1, scale_pos_weight=1, subsample=0.5, tree_method='exact', validate_parameters=1, verbosity=None) delay_xgb.fit(X_train,y_train) from sklearn.metrics import mean_squared_error import math print(mean_squared_error(y_test, y_pred)) print(math.sqrt(mean_squared_error(y_test, y_pred))) from sklearn.metrics import r2_score r2_score(y_test, y_pred) # + #Predicting on provided test dataset # - test_df = pd.read_csv('test.csv') test_df.head() test_df.isna().sum() test_df[(test_df.scheduledArrival.isnull())|(test_df.scheduledDeparture.isnull())] test_df['runDate']= pd.to_datetime(test_df['runDate']) test_df['scheduledArrival']= pd.to_datetime(test_df['scheduledArrival']) test_df['scheduledDeparture']= pd.to_datetime(test_df['scheduledDeparture']) test_df['scheduledArrival'] = test_df[['scheduledArrival','scheduledDeparture']].apply(lambda x:correct_scheduled_arrival(x),axis=1) test_df['trainCode'] = test_df['trainCode'].apply(lambda x: "T-"+str(x)) test_df # + test_df['scheduledArrival'] = test_df[['scheduledArrival','runDate','dayCount']].apply(lambda x: null_value_imputation(x),axis=1) test_df['scheduledDeparture'] = test_df[['scheduledDeparture','runDate','dayCount']].apply(lambda x: null_value_imputation(x),axis=1) # - test_df['scheduledArrival_delta'] = (test_df['scheduledArrival']-test_df['runDate']).apply(lambda x: x.total_seconds() / 60) 
test_df['scheduledDeparture_delta'] = (test_df['scheduledDeparture']-test_df['runDate']).apply(lambda x: x.total_seconds() / 60) test_df.trainCode.unique() test_df["trainCode"] = test_df["trainCode"].apply(lambda x: trainCode_encoding[x]) test_df["stations"] = test_df["stations"].apply(lambda x: stations_encoding[x]) feature_cols = ['stations', 'trainCode', 'distance', 'dayCount','scheduledArrival_delta', 'scheduledDeparture_delta'] test_df[feature_cols] arrival_delay = arrival_xgb.predict(test_df[feature_cols]) test_df['arrival_delay'] = arrival_delay test_df departure_delay = delay_xgb.predict(test_df[feature_cols]) test_df['departure_delay'] = departure_delay test_df.to_csv('Arnav_.csv',index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Creación y compra de claves juegos # + import mysql.connector import pandas as pd from mysql.connector import errorcode import random from datetime import datetime try: cnx = mysql.connector.connect( host="localhost", user="root", database='stum_for_you', passwd="" ) except mysql.connector.Error as err: if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: print("Something is wrong with your user name or password") elif err.errno == errorcode.ER_BAD_DB_ERROR: print("Database does not exist") else: print(err) cursor = cnx.cursor() for auxiliar in range(0,100): #elegimos un juego cursor.execute("SELECT COUNT(*) FROM juegos") numjuegos = cursor.fetchone()[0] + 1 id_juego = random.randrange(1,numjuegos) sql = """SELECT * FROM juegos WHERE id_juego = %s """ % (id_juego) cursor.execute(sql) juego = cursor.fetchone() #elegimos un proveedor cursor.execute("SELECT COUNT(*) FROM proveedor") numprov = cursor.fetchone()[0] + 1 id_proveedor = random.randrange(1,numprov) sql = """SELECT * FROM proveedor WHERE id_proveedor = %s """ % (id_proveedor) cursor.execute(sql) proveedor = cursor.fetchone() # añadimos a la tabla transacción compra elem = random.randrange(1,100) sql = "INSERT INTO transacciones_compra (precio_total, fecha_compra) VALUES (%s, %s)" descuento = random.randrange(1,10) /10 print(descuento) preciojuego=int(juego[3]*descuento ) #fecha= aleatoriofecha inicio = datetime(2017, 1, 30) final = datetime(2020, 1, 1) fecha = inicio + (final - inicio) * random.random() val = (preciojuego * elem, fecha) cursor.execute(sql,val) cnx.commit() cursor.execute("SELECT MAX(id_transaccion) FROM transacciones_compra") id_transac = cursor.fetchone()[0] # añadimos clave print(elem) for i in range (0, elem) : aux = True while aux : num1 = str(random.randrange(1,999999)) num2 = str(random.randrange(1,9999)) clave = num1.zfill(6) + '-' + num2.zfill(4) sql = """SELECT * FROM claves_juegos WHERE clave = %s """ % (clave) cursor.execute(sql) if(len(cursor.fetchall()) == 0) : aux = False sql = "INSERT INTO claves_juegos (clave, fecha_anexion, id_juego) VALUES (%s, %s, %s)" val = (clave, fecha, id_juego) cursor.execute(sql,val) cnx.commit() cursor.execute("SELECT MAX(id_clave) FROM claves_juegos") id_clave = cursor.fetchone()[0] # añadimos compra sql = "INSERT INTO compra_juegos (id_proveedor, id_transaccion, id_claves_juego, precio) VALUES (%s, %s, %s, %s)" val = (id_proveedor, id_transac, id_clave, preciojuego) cursor.execute(sql,val) cnx.commit() # print("proveedor :" , id_proveedor , "id_transac :" , id_transac ,"id_clave :" , id_clave , "preciojuegoD :" # , preciojuego , "preciojuego :", juego[3]) 
# - # # Creación y compra de claves dlcs # + import mysql.connector import pandas as pd from mysql.connector import errorcode import random from datetime import datetime try: cnx = mysql.connector.connect( host="localhost", user="root", database='stum_for_you', passwd="" ) except mysql.connector.Error as err: if err.errno == errorcode.ER_ACCESS_DENIED_ERROR: print("Something is wrong with your user name or password") elif err.errno == errorcode.ER_BAD_DB_ERROR: print("Database does not exist") else: print(err) cursor = cnx.cursor() for auxiliar in range(0,70): #elegimos un dlc cursor.execute("SELECT COUNT(*) FROM dlcs") numdlcs = cursor.fetchone()[0] + 1 id_dlc = random.randrange(1,numdlcs) sql = """SELECT * FROM dlcs WHERE id_dlc = %s """ % (id_dlc) cursor.execute(sql) dlc = cursor.fetchone() #elegimos un proveedor cursor.execute("SELECT COUNT(*) FROM proveedor") numprov = cursor.fetchone()[0] + 1 id_proveedor = random.randrange(1,numprov) sql = """SELECT * FROM proveedor WHERE id_proveedor = %s """ % (id_proveedor) cursor.execute(sql) proveedor = cursor.fetchone() # añadimos a la tabla transacción compra elem = random.randrange(1,100) sql = "INSERT INTO transacciones_compra (precio_total, fecha_compra) VALUES (%s, %s)" descuento = random.randrange(1,10) /10 #print(descuento) preciodlc=int(dlc[1]*descuento) #fecha= aleatoriofecha inicio = datetime(2017, 1, 30) final = datetime(2020, 1, 1) fecha = inicio + (final - inicio) * random.random() val = (preciodlc * elem, fecha) cursor.execute(sql,val) cnx.commit() cursor.execute("SELECT MAX(id_transaccion) FROM transacciones_compra") id_transac = cursor.fetchone()[0] # añadimos clave #print(elem) for i in range (0, elem) : aux = True while aux : num1 = str(random.randrange(1,999999)) num2 = str(random.randrange(1,9999)) clave = num1.zfill(6) + '#' + num2.zfill(4) sql = """SELECT * FROM claves_dlc WHERE clave = %s """ % (clave) cursor.execute(sql) if(len(cursor.fetchall()) == 0) : aux = False sql = "INSERT INTO claves_dlc (clave, fecha_anexion, id_dlc) VALUES (%s, %s, %s)" val = (clave, fecha, id_dlc) cursor.execute(sql,val) cnx.commit() cursor.execute("SELECT MAX(id_clave) FROM claves_dlc") id_clave = cursor.fetchone()[0] # añadimos compra sql = "INSERT INTO compra_dlcs (id_proveedor, id_transaccion, id_claves_dlc, precio) VALUES (%s, %s, %s, %s)" val = (id_proveedor, id_transac, id_clave, preciodlc) cursor.execute(sql,val) cnx.commit() #print("proveedor :" , id_proveedor , "id_transac :" , id_transac ,"id_clave :" , id_clave , "precioDLCD :" # , preciodlc , "preciodlc :", dlc[3]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # SQLite # Beispiel mit SQLite, ohne Server # %load_ext sql # %sql sqlite:// # + language="sql" # CREATE TABLE writer (first_name, last_name, year_of_death); # INSERT INTO writer VALUES ('William', 'Shakespeare', 1616); # INSERT INTO writer VALUES ('Bertold', 'Brecht', 1956); # - # %sql select * from writer # #Mit MySQL # Voraussetzung, der MySQL Server läuft! 
# %sql mysql://tester:test@localhost/test_db # + language="sql" # CREATE TABLE `test_db`.`writer` (`first_name` VARCHAR(30) NOT NULL,`last_name` VARCHAR(45) NULL, # `year_of_death` DATETIME NULL, PRIMARY KEY (`first_name`)); # + language="sql" # INSERT INTO writer VALUES ('William', 'Shakespeare', '1616-01-01 00:00'); # INSERT INTO writer VALUES ('Bertold', 'Brecht', '1956-01-01 00:00'); # - # %sql select * from `test_db`.`writer`; # %sql DROP TABLE `test_db`.`writer` # #MongoDB # Verbindung herstellen # + import pymongo # Connection to Mongo DB try: conn=pymongo.MongoClient() print ("Connected successfully!!!") except (pymongo.errors.ConnectionFailure, e): print ("Could not connect to MongoDB: %s" % e ) conn # - # Nun mal etwas rumprobieren: db = conn.mydb collection = db.my_collection doc1 = {"first_name":"William","last_name":"Shakespeare","year_of_death":"1616"} doc2 = {"first_name":"Bertold","last_name":"Brecht","year_of_death":"1956"} collection.insert(doc1) collection.insert(doc2) conn.database_names() db.collection_names() collection.find_one() list(collection.find())[:2] collection.find({'year_of_death':'1956'}).count() # # Redis import redis r = redis.StrictRedis(host='localhost', port=6379, db=0) r.set('', '1616') r.set('', '1956') import pickle class PicklePerson(object): def __init__(self, first_name, last_name, year_of_death): self.first_name = first_name self.last_name = last_name self.year_of_death = year_of_death def __repr__(self): return "Name: " + self.first_name + " " + self.last_name + \ "\n" + "Death: " + self.year_of_death bob = pickle.dumps(PicklePerson("William","Shakespeare","1616")) bert = pickle.dumps(PicklePerson("Bertold","Brecht","1956")) r.set("bob", bob) r.set("bert", bert) pickle.loads(r.get("bob")) # + active="" # r.get('1616') # - print(r) # #etcd import etcd client = etcd.Client(host='127.0.0.1', port=4001) # Einen Key schreiben client.write('/nodes/n1', 1) client.write('/nodes/n2', 2) # Den Key lesen client.read('/nodes/n1').value # + directory = client.get("/nodes") # loop through directory children for result in directory.children: print(result.key + ": " + result.value) # - # Ein paar Infos zum Cluster client.machines -- --- -- jupyter: -- jupytext: -- text_representation: -- extension: .hs -- format_name: light -- format_version: '1.5' -- jupytext_version: 1.14.4 -- kernelspec: -- display_name: Haskell - Tensorflow -- language: haskell -- name: ihaskell_tensorflow -- --- -- # Tensorflow example with datasets -- -- _example after https://mmhaskell.com/blog/2017/8/21/digging-in-deep-solving-a-real-problem-with-haskell-tensor-flow_ -- -- -- here are various inputs that we are going to need throughout the code: -- + :e DeriveGeneric :e OverloadedLists :e TypeFamilies import Numeric.Datasets.Iris (iris, Iris(..), IrisClass(..)) import Control.Monad (forM_, when) import Control.Monad.IO.Class (liftIO) import Data.ByteString.Lazy.Char8 (pack) import Data.Csv (FromRecord, decode, HasHeader(..)) import Data.Int (Int64) import Data.Vector (Vector, length, fromList, (!)) import GHC.Generics (Generic) import System.Random.Shuffle (shuffleM) import TensorFlow.Core (TensorData, Session, Build, render, runWithFeeds, feed, unScalar, build, Tensor, encodeTensorData) import TensorFlow.Minimize (minimizeWith, adam', AdamConfig(..)) import TensorFlow.Ops (placeholder, truncatedNormal, add, matMul, relu, argMax, scalar, cast, oneHot, reduceMean, softmaxCrossEntropyWithLogits, equal, vector) import TensorFlow.Session (runSession) import TensorFlow.Types (Shape(..)) 
import TensorFlow.Variable (readValue, initializedVariable, Variable) -- - -- our input data comes from the *Numeric.Dataset.Iris* module. It is a list of data samples that are represented by the type *Iris*: :t Iris :t iris -- for Tensorflow, we need to know the input data size, the number of labels and the number of features. The labels have to be converted to a numberical value. nlabels = 3 :: Int64 nfields = 4 :: Int64 irisClassToInt64 :: IrisClass -> Int64 irisClassToInt64 x = case x of Setosa -> -1 Versicolor -> 0 Virginica -> 1 -- we can now convert the dataset into the required form. convertRecordsToTensorData :: Vector Iris -> (TensorData Float, TensorData Int64) convertRecordsToTensorData records = (input, output) where numRecords = Data.Vector.length records input = encodeTensorData [fromIntegral numRecords, nfields] (fromList $ concatMap recordToInputs records) output = encodeTensorData [fromIntegral numRecords] ((irisClassToInt64 . irisClass) <$> records) recordToInputs :: Iris -> [Float] recordToInputs rec = realToFrac <$> [sepalLength rec, sepalWidth rec, petalLength rec, petalWidth rec] -- + sampleSize :: Int sampleSize = 100 chooseRandomRecords :: Vector Iris -> IO (Vector Iris) chooseRandomRecords records = do let numRecords = Data.Vector.length records chosenIndices <- take sampleSize <$> shuffleM [0..(numRecords - 1)] return $ fromList $ map (records !) chosenIndices :t chooseRandomRecords -- - data Model = Model { train :: TensorData Float -- Training input -> TensorData Int64 -- Training output -> Session () , errorRate :: TensorData Float -- Test input -> TensorData Int64 -- Test output -> Session Float } :t Model buildNNLayer :: Int64 -> Int64 -> Tensor v Float -> Build (Variable Float, Variable Float, Tensor Build Float) buildNNLayer inputSize outputSize input = do weights <- truncatedNormal (vector [inputSize, outputSize]) >>= initializedVariable bias <- truncatedNormal (vector [outputSize]) >>= initializedVariable let results = (input `matMul` readValue weights) `add` readValue bias return (weights, bias, results) :t buildNNLayer createModel :: Build Model createModel = do let batchSize = -1 -- Allows variable sized batches let numHiddenUnits = 50 inputs <- placeholder [batchSize, nfields] outputs <- placeholder [batchSize] -- first layer (input -> hidden) (hiddenWeights, hiddenBiases, hiddenResults) <- buildNNLayer nfields numHiddenUnits inputs let rectifiedHiddenResults = relu hiddenResults -- output layer (hidden -> output) (finalWeights, finalBiases, finalResults) <- buildNNLayer numHiddenUnits nlabels rectifiedHiddenResults actualOutput <- render $ argMax finalResults (scalar (1 :: Int64)) let correctPredictions = equal actualOutput outputs errorRate_ <- render $ 1 - (reduceMean (cast correctPredictions)) let outputVectors = oneHot outputs (fromIntegral nlabels) 1 0 let loss = reduceMean $ fst $ softmaxCrossEntropyWithLogits finalResults outputVectors let params = [hiddenWeights, hiddenBiases, finalWeights, finalBiases] let adamConfig = AdamConfig 1e-3 0.9 0.999 1e-8 train_ <- minimizeWith (adam' adamConfig) loss params return $ Model { train = \inputFeed outputFeed -> runWithFeeds [ feed inputs inputFeed , feed outputs outputFeed ] train_ , errorRate = \inputFeed outputFeed -> unScalar <$> runWithFeeds [ feed inputs inputFeed , feed outputs outputFeed ] errorRate_ } -- + runIris = runSession $ do model <- build createModel forM_ ([0..10000] :: [Int]) $ \i -> do trainingSample <- liftIO $ chooseRandomRecords $ Data.Vector.fromList iris let (trainingInputs, 
trainingOutputs) = convertRecordsToTensorData trainingSample let (allInputs, allOutputs) = convertRecordsToTensorData $ Data.Vector.fromList iris train model trainingInputs trainingOutputs when (i `mod` 2000 == 0) $ do err <- errorRate model allInputs allOutputs liftIO $ putStrLn $ "Current training error " ++ show err runIris # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %reset # + import numpy as np import tensorflow as tf from numpy.random import rand, randint, exponential from src.environment.network_rm import network_rm from src.agent.dqn.dqn import exp_replay from src.agent.dqn.models import MLP # - inventory = np.array([6,6,6,20]) # the last item is time demand_types = np.array([[0,0,0,0], [1,1,1,0], [2,0,0,0]]) demand_values = np.array([0,1,20]) demand_arrivals = np.array([0.5, 0.5]) inter_arrival_time = 1 g = network_rm(inventory, demand_values, demand_types, demand_arrivals, inter_arrival_time) # + tf.reset_default_graph() session = tf.InteractiveSession() # Brain maps from observation to Q values for different actions. # Here it is a done using a multi layer perceptron with 2 hidden # layers #observation_shape has to be a 1d vector for now brain = MLP(list(g.observation_shape), [50, 50, g.num_actions], # change size to larger observation arrays [tf.tanh, tf.tanh, tf.identity]) # The optimizer to use. Here we use RMSProp as recommended # by the publication optimizer = tf.train.RMSPropOptimizer(learning_rate= 0.001, decay=0.9) # DiscreteDeepQ object current_controller = exp_replay(g.observation_shape, g.num_actions, brain, optimizer, session, discount_rate=0.99, exploration_period=1000, max_experience=100, store_every_nth=4, train_every_nth=4) session.run(tf.global_variables_initializer()) session.run(current_controller.target_network_update) # - def run_sim(env, agent, steps = 100, disable_training = False): last_observation = None last_action = None for s in range(steps): if env.terminate: break new_observation = env.observe() reward = env.collect_reward() #print(g.total_reward, " ", g.last_reward) # store last transition if last_observation is not None: agent.store(last_observation, last_action, reward, new_observation) # act new_action = agent.action(new_observation) env.perform_action(new_action) #transition env.transition() #train if not disable_training: agent.training_step() # update current state as last state. 
last_action = new_action last_observation = new_observation return s # ## One instance of network rm problem g = network_rm(inventory, demand_values, demand_types, demand_arrivals, inter_arrival_time) g.print_() x = g.total_reward T = 100 # # %prun T = run_sim(g, current_controller, T) T = run_sim(g, current_controller, T) print("average reward: {}".format((g.total_reward - x)/T)) g.state # ## Training S = 1000 rewards = np.zeros(S) for i in range(S): g = network_rm(inventory, demand_values, demand_types, demand_arrivals, inter_arrival_time) T = run_sim(g, current_controller, 100) rewards[i] = g.total_reward if i%(S/20) == 0 and i >= (S/20): print("average reward: {}".format(rewards[(i-int(S/20)):i].mean())) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 循环 # - 循环是一种控制语句块重复执行的结构 # - while 适用于广度遍历 # - for 开发中经常使用 # 广度遍历 # 深度遍历 # # # 堆 # 层层堆叠 # 栈 # # 队列 # 先进先出。单向走 # ## while 循环 # - 当一个条件保持真的时候while循环重复执行语句 # - while 循环一定要有结束条件,否则很容易进入死循环 # - while 循环的语法是: # # while loop-contunuation-conndition: # # Statement # ## 示例: # sum = 0 # # i = 1 # # while i <10: # # sum = sum + i # i = i + 1 # ## 错误示例: # sum = 0 # # i = 1 # # while i <10: # # sum = sum + i # # i = i + 1 # - 一旦进入死循环可按 Ctrl + c 停止 # ## EP: # ![](../Photo/143.png) # ![](../Photo/144.png) # # 验证码 # - 随机产生四个字母的验证码,如果正确,输出验证码正确。如果错误,产生新的验证码,用户重新输入。 # - 验证码只能输入三次,如果三次都错,返回“别爬了,我们小网站没什么好爬的” # - 密码登录,如果三次错误,账号被锁定 # # + import random #for i in range(4): a = random.randint(97,122) # random.shuffle() print(a) # - # ## 尝试死循环 # ## 实例研究:猜数字 # - 你将要编写一个能够随机生成一个0到10之间的且包括两者的数字程序,这个程序 # - 提示用户连续地输入数字直到正确,且提示用户输入的数字是过高还是过低 # ## 使用哨兵值来控制循环 # - 哨兵值来表明输入的结束 # - ![](../Photo/54.png) # ## 警告 # ![](../Photo/55.png) # ## for 循环 # - Python的for 循环通过一个序列中的每个值来进行迭代 # - range(a,b,k), a,b,k 必须为整数 # - a: start # - b: end # - k: step # - 注意for 是循环一切可迭代对象,而不是只能使用range # range(0,10,2) 0-10 不长2 # # dir(333) 存在 _iter_ 说明可以进行for循环 # # 一切皆对象 # # # # 在Python里面一切皆对象 # ## EP: # - ![](../Photo/145.png) s = "sfweaefewefa" i=0 while i<12 : print(s[i]) i=i+1 # ## 嵌套循环 # - 一个循环可以嵌套另一个循环 # - 每次循环外层时,内层循环都会被刷新重新完成循环 # - 也就是说,大循环执行一次,小循环会全部执行一次 # - 注意: # > - 多层循环非常耗时 # - 最多使用3层循环 # ## EP: # - 使用多层循环完成9X9乘法表 # - 显示50以内所有的素数 i = 0 j = 0 for i in range(9): i = i +1 for j in range(9): j = j+1 print(str(i*j)+' ',end='') print('\n') # ## 关键字 break 和 continue # - break 跳出循环,终止循环 # - continue 跳出此次循环,继续执行 # ## 注意 # ![](../Photo/56.png) # ![](../Photo/57.png) # # Homework # - 1 # ![](../Photo/58.png) pos = 0 neg = 0 sum_ = 0 i=0 while True: num = eval(input(">>>")) if num == 0: break elif num > int(0): pos=pos+1 elif num < 0: neg=neg+1 sum_ = sum_ + num i=i+1 p = round(sum_/i,2) print("正数:"+str(pos)+"\n负数:"+str(neg)+"\n总和:"+str(sum_)+"\n平均值:"+str(p)) # - 2 # ![](../Photo/59.png) money=10000 i=0 j=0 money_1=money money_4=0 s4=0 for i in range (10): money_1=money_1*(1+0.05) money_4=money_1 for j in range(4): s4=s4+money_4 money_4=money_4*(1+0.05) print("第"+str(i+1)+"年起4年学费:"+str(s4)) s4=0 print("十年后学费:"+str(money_1)) # - 3 # ![](../Photo/58.png) # 同1 # - 4 # ![](../Photo/60.png) j=0 for i in range(100,1000): if i%5==0 and i%6==0: print(str(i)+" ",end='') j=j+1 if j%10==0: print('\n') # - 5 # ![](../Photo/61.png) n=0 while True: n=n+1 if n**2 > 12000: print(n) break n=0 while True: n=n+1 if n**3 > 12000: print(n-1) break # - 6 # ![](../Photo/62.png) dk = eval(input("输入贷款额度")) dk_0=dk y = eval(input("输入贷款周期")) r=0.05 while 
r <= 0.08: dk_0=dk_0*(1+r) m=dk_0/(y*12) r_=format(r,'.3%') m_=round(m,2) dk_1=round(dk_0,2) print(r_,m_,dk_1) r=r+0.01/8 dk_0=dk # - 7 # ![](../Photo/63.png) # + sum_1=0 sum_2=0 for i in range(50000,0): sum_1=sum_1+1/i print(sum_1) print(sum_1) for j in range(1,50001): sum_2=sum_2+1/j print(sum_2) # - # - 8 # ![](../Photo/64.png) sum_ = 0 for i in range(1,98,2): sum_=sum_+i/(i+2) print(sum_) # - 9 # ![](../Photo/65.png) sum_ = 0 for i in range(10000,110000,10000): for j in range(1,2*i,2): sum_=sum_+(((-1)**((j+3)/2))/(j))*4 print("i="+str(i)+":"+str(sum_)) # - 10 # ![](../Photo/66.png) sum_=0 for i in range(1,10000): for k in range(1,i): if i%k==0 : sum_=sum_+k if sum_ == i: print(i) sum_=0 # - 11 # ![](../Photo/67.png) s=0 for n in range(1,8): for g in range(n+1,8): print(n,g) s=s+1 print('\n个数:'+str(s)) # - 12 # ![](../Photo/68.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Stock Markets # # ### Internal Library # Using [Alpha Vantage](https://www.alphavantage.co/). Free : 5 API requests per minute; 500 API requests per day. Use library function to use the cache/pickle as much as possible, to avoid exceeding free quote quota # - https://www.alphavantage.co/documentation/ # - https://en.wikipedia.org/wiki/List_of_S%26P_500_companies # - https://github.com/twopirllc/pandas-ta/blob/master/examples/AIExample.ipynb # + import matplotlib.pyplot as plt import pandas as pd import numpy as np plt.rcParams['figure.figsize'] = [20, 10] # - pd.read_csv('https://www.alphavantage.co/query?function=TIME_SERIES_DAILY_ADJUSTED&symbol=IBM&apikey=demo&datatype=csv',index_col=0) # ### WGET # Instead of hitting alphavantage everytime, we cache it in **wdata.AVGet()** import wdata abt = wdata.AVGet('ABT') ibm = wdata.AVGet('IBM') abt.iloc[::-1]['adjusted_close'].plot() # Time Series is reversed, [::-1] to reverse it df1 = abt[['adjusted_close']].rename(columns={'adjusted_close': 'ABT'}) df1['IBM'] = ibm['adjusted_close'] df1['FB'] = wdata.AVGet('FB')['adjusted_close'] df1[::-1].plot() # + #import talib #abt['MA'] = talib.SMA(abt['adjusted_close'],100) # - abt[::-1][['adjusted_close','MA']].plot() # ## Pandas-TA # - https://github.com/twopirllc/pandas-ta # - https://github.com/twopirllc/pandas-ta/blob/master/examples/AIExample.ipynb # - pip install alphaVantage-api # - pip install pandas-ta # + import pandas_ta as ta asset = abt[:400][::-1].copy() asset.ta.adjusted = "adjusted_close" asset.ta.ema(length=8, append=True) asset.ta.ema(length=21, append=True) asset.ta.ema(length=50, append=True) asset[asset.columns[5:]].tail() # - print(asset.ta.ema(length=2, append=False).head(5)) print(asset.close.head(5)) asset[["close", "EMA_8", "EMA_21", "EMA_50"]].plot() # ### Trend Returns and Cumulative Trend Returns long = ta.ema(asset.close, 8) > ta.ema(asset.close, 21) trendy = asset.ta.trend_return(trend=long, cumulative=True, trade_offset=-1, append=True) trendy.tail() # Third Column is the long trend; binary sequences # ### Join # # https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html # + cltr = trendy.iloc[:,0] tr = trendy.iloc[:,1] trendy.iloc[:,:2].plot(figsize=(16, 3)) cltr.plot(figsize=(16, 3), kind="area", stacked=False, alpha=0.25, grid=True) # + capital = 10000 total_return = cltr.cumsum() * capital positive_return = total_return[total_return > 0] negative_return = total_return[total_return <= 
0] trdf = pd.DataFrame({"tr+": positive_return, "tr-": negative_return}) trdf.plot(figsize=(16, 5), kind="area", stacked=False, alpha=0.25, grid=True) # + long_trend = (trendy.iloc[:,-2] > 0).astype(int) short_trend = (1 - long_trend).astype(int) long_trend.plot(figsize=(16, 0.85), kind="area", stacked=True, alpha=0.25) short_trend.plot(figsize=(16, 0.85), kind="area", stacked=True, alpha=0.25) # + entries = (trendy.iloc[:,-1] > 0).astype(int) * asset.close entries[entries < 0.0001] = np.nan entries.name = "Entry" exits = (trendy.iloc[:,-1] < 0).astype(int) * asset.close exits[exits < 0.0001] = np.nan exits.name = "Exit" total_trades = trendy.iloc[:,-1].abs().sum() print(f"Total Trades: {total_trades}") all_trades = trendy.iloc[:,-1].copy().fillna(0) all_trades = all_trades[all_trades != 0] trades = pd.DataFrame({"Signal": all_trades, entries.name: entries.dropna(), exits.name: exits.dropna()}) trades['PnL'] = (trades.Exit - trades.Entry.shift(1)) / trades.Entry.shift(1) # - (1 + trades.PnL).prod() trades asset.close - asset.close.shift(1) # chart = asset["close"] #asset[["close", "SMA_10", "SMA_20", "SMA_50", "SMA_200"]] # chart = asset[["close", "SMA_10", "SMA_20"]] chart = asset[["close", "EMA_8", "EMA_21", "EMA_50"]] chart.plot(figsize=(16, 10), grid=True) entries.plot(figsize=(16, 10), marker="^", color='green', markersize=12, alpha=0.8) exits.plot(figsize=(16, 10), marker="v", color='#FF0000', markersize=12, alpha=0.8, grid=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Single Scenario import json # needed to load in our scenario import os # to handle file paths across operating systems from src.post_and_poll import get_api_results API_KEY = 'DEMO_KEY' # REPLACE WITH YOUR API KEY # following is not necessary but silences warnings: # InsecureRequestWarning: Unverified HTTPS request is being made to host 'developer.nrel.gov'. Adding certificate verification is strongly advised. import urllib3 urllib3.disable_warnings() # `get_api_results` POST's your JSON to the API `job` endpoint, which provides a `run_uuid` if the input is valid, and then polls the `results` endpoint using the `run_uuid` until the results come back with a status other than `Optimizing...`. # # `get_api_results` also saves the results to the `results_file`. # # A log file is also created in the current working directory. post = json.load( open( os.path.join("inputs", "Scenario_POST1.json"), "r") ) results = get_api_results( post=post, API_KEY=API_KEY, api_url="https://developer.nrel.gov/api/reopt/v1", results_file=os.path.join("outputs", "my_results.json") ) # (Note the above logging shows a local host URL, which was used to run this notebook without requiring an API_KEY) # ### Here are some results keys examples: results.keys() results["outputs"]["Scenario"]["status"] for k in results["outputs"]["Scenario"]["Site"].keys(): print(k) for k in results["outputs"]["Scenario"]["Site"]["PV"].keys(): print(k) # # Multi-scenario # Multiple scenarios can be defined in a CSV file with rows for each scenrario and columns for the inputs. `all_api_inputs.csv` provides a template for all of the possible header values (input keys). # # Let's take a look at the example `scenarios.csv`: # + import pandas as pd # only used to show the csv file df = pd.read_csv(os.path.join("inputs", "scenarios.csv")) df.head() # - # We have two Scenarios: # 1. 
One with no PV nor Wind evaluated (by setting their `max_kw`s to zero) # 2. One with PV and no Wind evaluated # # Both Scenarios have the same location, the same electricity tariff (set via the `urdb_label`), and use the same custom load profile - which is passed in via another csv file. """ Here are some convenience definitions for using the Multi-scenario capabilities """ ############################################################################################################## inputs_path = os.path.join(".", 'inputs') outputs_path = os.path.join(".", 'outputs') output_template = os.path.join(outputs_path, 'results_template.csv') output_file = os.path.join(outputs_path, 'results_summary.csv') ############################################################################################################## # Let's start by converting the `scenarios.csv` into a list of POST's that we can send to the API: # + from src.multi_site_inputs_parser import multi_site_csv_parser path_to_inputs = os.path.join(inputs_path, 'scenarios.csv') list_of_posts = multi_site_csv_parser( path_to_inputs, api_url='https://developer.nrel.gov/api/reopt/v1', API_KEY=API_KEY ) # - # Now we can collect all of the results using the `get_api_results` function from the Single Scenario example: # + responses = [] for post in list_of_posts: responses.append( get_api_results( post, results_file=os.path.join(outputs_path, post['Scenario']['description'] + '.json'), api_url="https://developer.nrel.gov/api/reopt/v1", API_KEY=API_KEY ) ) # - # Note that we used the `Scenario.description` to define the results file name. The `Scenario.description` was defined in the `scenarios.csv`. # ### Summarizing multiple scenario results # # There are two options for making a summary of multiple scenarios' resutls: # 1. Write to a csv using a template with column headers for desired summary keys (scalar values only) # 2. Write all inputs, outputs, and dispatch to an Excel spreadsheet # # #### Option 1: Use a template CSV to collect certain results # + from src.parse_api_responses_to_csv import parse_responses_to_csv_with_template parse_responses_to_csv_with_template( csv_template=output_template, responses=responses, output_csv=output_file, input_csv=path_to_inputs, n_custom_columns=2 ) # - # #### Option 2: Write all results out to an Excel file # + from src.parse_api_responses_to_excel import parse_api_responses_to_excel parse_api_responses_to_excel(responses, spreadsheet='results_summary.xlsx') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.6 64-bit (''base'': conda)' # name: python_defaultSpec_1599377548581 # --- # + from tensorflow.keras.datasets import imdb from tensorflow.keras.preprocessing import sequence max_features = 10000 maxlen = 500 (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features) x_train = sequence.pad_sequences(x_train, maxlen) x_test = sequence.pad_sequences(x_test, maxlen) # + tags=[] from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Conv1D, Embedding, MaxPooling1D, GlobalMaxPooling1D model = Sequential() # Learn an word-embedding layer with a float vector with 128 values. 
model.add(Embedding(max_features, 128, input_length=500)) model.add(Conv1D(32, 7, activation='relu')) model.add(MaxPooling1D(5)) model.add(Conv1D(32, 7, activation='relu')) model.add(GlobalMaxPooling1D()) model.add(Dense(1)) model.summary() # + tags=[] model.compile(loss='binary_crossentropy', metrics=['acc']) history = model.fit(x_train, y_train, epochs=10, validation_split=0.2, batch_size=128) # + # %matplotlib inline import matplotlib.pyplot as plt def plot(history): acc = history.history['acc'] val_acc = history.history['val_acc'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(acc) + 1) plt.plot(epochs, acc, 'bo', label="Training accuracy") plt.plot(epochs, val_acc, 'r', label="Validation accuracy") plt.title("Training and Validation accuracy") plt.legend() plt.figure() plt.plot(epochs, loss, 'bo', label="Training Loss") plt.plot(epochs, val_loss, 'r', label="Validation Loss") plt.legend() plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Stage-scanned light sheet imaging # # This notebook explains how to perform stage-scanned light sheet imaging. # # To acquire a z-stack, the camera is set to 'External Start' mode. The stage moves at a constant speed # during the acqusition. When the stage initiates the scanning, it sends out a TTL to trigger the camera # to start acquisiton. # # To acquire a time lapse 3D dataset, the process described above is repeated for n times # (n: number of time points). # # Note: # To avoid motion blur during the stage scan, a method called 'Light-sheet stablized stage scanning # (LS3)' is used. With this method, A galo mirror is used to offset the stage motion during each frame. # The galvo is controlled independently by another python script using the NI-DAQmax API. # Details of the LS3 methods can be found in: # https://www.biorxiv.org/content/10.1101/2020.09.22.309229v1 from pycromanager import Bridge, Acquisition # ### Define a hook function to start the scanning of the stage def move_stage(event): message = "scan" core.set_serial_port_command(port, message, "\r") return event # ### Construct java objects bridge = Bridge() core = bridge.get_core() mm = bridge.get_studio() # ### Acquisition parameter # + nb_timepoints = 5 scan_step = 2.0 # unit: um stage_scan_range = 200.0 # unit: um interval = 1 # interval time between each time point, unit: second exposureMs = core.get_exposure() nrSlices = int(stage_scan_range / scan_step) save_directory = r'E:\data' save_name = 'test' port = "COM4" speed = scan_step / exposureMs # - # ### Stage settings # Note: an ASI MS200 stage is used here. If you have a different stage, consult the manual to find out # the correct way to operate it. # + # set backlash message = "backlash x=0.02 y=0.0" print("set backlash: " + message) core.set_serial_port_command(port, message, "\r") # set default speed message = "speed x=10 y=10" core.set_serial_port_command(port, message, "\r") # set speed. note: here x-axis is the stage motion axis. 
message = "speed x=" + "{:.4f}".format(speed) print("set speed to scan: ", message) core.set_serial_port_command(port, message, "\r") # set current position to zero message = "zero" core.set_serial_port_command(port, message, "\r") # set the scan range message = "scanr x=0.0 y=" + "{:.4f}".format(stage_scan_range / 1000) print("scan range: " + message) core.set_serial_port_command(port, message, "\r") # - # ### Camera settings # Note: an Hamamasty Flash 4.0 camear is used here. If you have a different camera, consult the manual # to find out the correct way to set the parameter. # Camera trigger settings core.set_property("HamamatsuHam_DCAM", "TRIGGER SOURCE", "EXTERNAL") core.set_property("HamamatsuHam_DCAM", "TRIGGER DELAY", "0.0") # ### The main function to perform the time lampse 3D imaging if __name__ == '__main__': # generate the multi-dimensional events events = [] for t in range(nb_timepoints): event = [] for z in range(nrSlices): event.append({'axes': {'time': t, 'z': z}, 'min_start_time': interval}) events.append(event) print(events) with Acquisition(directory=save_directory, name=save_name, pre_hardware_hook_fn=sleep, post_camera_hook_fn=move_stage) as acq: for t in range(nb_timepoints): acq.acquire(events[t]) acq.await_completion() # set back camera property core.set_property("HamamatsuHam_DCAM", "TRIGGER SOURCE", "INTERNAL") # set the stage to default speed message = "speed x=10 y=10" core.set_serial_port_command(port, message, "\r") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import quippy from quippy import descriptors at = quippy.diamond(3.5, 6) at.positions at.Z # All descriptors are instantiated by a call to Descriptor(), which takes the descriptor initialisation string as its only argument. For a list of available descriptors and their parameters, see the following list, auto-generated using the following command: quip descriptor_str="--help" # !quip descriptor_str="--help" # + # Pair distance descriptor between carbon atoms, with a 1.5 AA cutoff # - desc = descriptors.Descriptor("distance_2b Z1=6 Z2=6 cutoff=4") desc.n_dim # number of dimensions (1, since it's a scalar distance) # Get the connectivity at.set_cutoff(desc.cutoff()+1.0) at.calc_connect() # How many instances of the descriptor can we use in the Atoms object? 
desc.count(at) d = desc.calc(at) d # the actual pairwise distances (One array is the cutoff, the other the distances) # %matplotlib inline import matplotlib.pyplot as plt plt.hist(d.descriptor) plt.show() n_desc, n_cross = desc.descriptor_sizes(at) print("n_desc=%d n_cross=%d" % (n_desc,n_cross)) res = desc.calc(at, grad=True) list(res) res.descriptor[:, :10] res.grad[:,:10] res.index[:,:10] # + # Many-body descriptor: SOAP # - desc = descriptors.Descriptor("soap cutoff=3 l_max=4 n_max=4 atom_sigma=0.5 n_Z=1 Z={6} ") at.set_cutoff(desc.cutoff()) at.calc_connect() desc.descriptor_sizes(at) # Note one descriptor for each atom in the structure desc.n_dim desc.calc(at) # Add a hydrogen atom, re-calculate the descriptor at.add_atoms((0.2, 0.2, 0.2), (1)) at.calc_connect() desc.descriptor_sizes(at) # Note no change in the size, as the SOAP descriptor above is just about carbons # New H and C SOAP desc = descriptors.Descriptor("soap cutoff=3 l_max=4 n_max=4 atom_sigma=0.5 n_Z=2 Z={1 6} n_species=2 species_Z={1 6}") desc.descriptor_sizes(at) # Note the size has gone up desc.calc(at) # #### We can get the same descriptor using the QUIP program, in which the input is an extended xyz file and the specification as before at.write("diamondH.xyz") # !quip atoms_filename=diamondH.xyz descriptor_str="soap cutoff=3 l_max=4 n_max=4 atom_sigma=0.5 n_Z=2 Z={1 6} n_species=2 species_Z={1 6}" # + # ^ Each vector is on its own line, starting with a DESC # - # Three-site benzene monomer desc_monomer = descriptors.Descriptor('general_monomer signature={6 6 6} atom_ordercheck=F') desc_monomer.n_dim # Three distances -> dimensionality 3 benzat = quippy.AtomsList('benzene_frames.xyz') # some benzenes # + from ase.visualize import view from ase.atoms import Atoms as AseAtoms firstBenzene = benzat[0] view(AseAtoms(firstBenzene), viewer='x3d') # - # Doesn't look much like benzene to me, but what do I know? benzat.pos.shape for i in range(len(benzat)): benzat[i].set_cutoff(desc_monomer.cutoff()+0.5) benzat[i].calc_connect() res = desc_monomer.calc(benzat, grad=True)[0] res.grad.shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Euler's integration # # We are going to look at how Euler's integration performs at the integration of the 2nd order ODE governing the evolution of a planet on a circular orbit. # This problem can be simplified as two first order ODEs: # # # $$\frac{\mathrm{d} x}{\mathrm{d} t} = u $$ # $$\frac{\mathrm{d} u}{\mathrm{d} t} = - \frac{\mu}{(x^2+y^2)^{3/2}} x,$$ # # where $x$ and $u$ are the position and velocity along the x-coordinates, while $y$ and $v$ are the position and velocity along the y-coordinates. # # And Euler's integration for this system yields, using a discrete notation # # # $$x_{n+1} = x_n + u_n\Delta t $$ # $$u_{n+1} = u_n - \frac{\mu}{(x_n^2+y_n^2)^{3/2}} x_n \Delta t.$$ # # Similarly, # # $$y_{n+1} = y_n + v_n\Delta t $$ # $$v_{n+1} = v_n - \frac{\mu}{(x_n^2+y_n^2)^{3/2}} y_n \Delta t.$$ import numpy as np import matplotlib.pyplot as plt # Let us define the function of the exact solution (which we know !) 
# # Here $\omega = n = \mu/a^3$, where $a$ is the semi-major axis # Exact solution for the x coordinate of the position def position_x(a, t, omega): return a * np.cos(omega * t) def position_y(a, t, omega): return a * np.sin(omega * t) # Exact solution for the x coordinate of the velocity def velocity_x(a, t, omega): return - a * omega * np.sin(omega * t) def velocity_y(a, t, omega): return a * omega * np.cos(omega * t) # Let us define the 4 derivatives # + # Velocities for the 2 components def dx_dt(x, y, u, v): return u def dy_dt(x, y, u, v): return v # Accelerations for the 2 components def du_dt(x, y, u, v, mu): omega_squared = mu/np.power(x*x + y*y, 3./2.) return - omega_squared * x def dv_dt(x, y, u, v, mu): omega_squared = mu/np.power(x*x + y*y, 3./2.) return - omega_squared * y # - # Now let us define an evolver function, which will perform the Euler integration. We want to keep in memory the solution x(t), v(t) and the time t to later plot the result and compare it to the exact solution def evolve_Euler(x0, y0, u0, v0, mu, dt, N_steps): xe = np.zeros((N_steps + 1)) ue = np.zeros((N_steps + 1)) ye = np.zeros((N_steps + 1)) ve = np.zeros((N_steps + 1)) te = np.zeros((N_steps + 1)) xe[0], ye[0] = x0, y0 ue[0], ve[0], te[0] = u0, v0, 0 print("now is time to work") for i in range(N_steps): xe[i+1] = xe[i] + (dx_dt(xe[i], ye[i], ue[i], ve[i])*dt) ye[i+1] = ye[i] + (dy_dt(xe[i], ye[i], ue[i], ve[i])*dt) ue[i+1] = ue[i] + (du_dt(xe[i], ye[i], ue[i], ve[i], mu)*dt) ve[i+1] = ve[i] + (dv_dt(xe[i], ye[i], ue[i], ve[i], mu)*dt) te[i+1] = te[i] + dt return te, xe, ye, ue, ve # Now let's define the initial conditions. # And compute both an array with the exact solution and 2 arrays with the integration results. # + Time_max = 10 # Initial conditions for the integration mu = 1.0 x0, y0 = 1.0, 0.0 u0, v0 = 0.0, 1.0 dt = 0.001 N_steps = int(Time_max/float(dt)) # Computation of the Euler's integration te, xe, ye, ue, ve = evolve_Euler(x0, y0, u0, v0, mu, dt, N_steps) # Computation of the exact solution omega = np.sqrt(mu/np.power(x0*x0+y0*y0, 3./2.)) exact_solution_x = [position_x(x0, tt, omega) for tt in te] exact_solution_y = [position_y(x0, tt, omega) for tt in te] exact_solution_u = [velocity_x(x0, tt, omega) for tt in te] exact_solution_v = [velocity_y(x0, tt, omega) for tt in te] # - # Now let's plot both the exact solution and the integration result # + fig = plt.figure(figsize=(16, 16)) ax = fig.add_subplot(2,1,1) line, = ax.plot(te, xe, '--', label="x, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_x, '-', c=line.get_color(), label="x exact") line, = ax.plot(te, ye, '--', label="y, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_y, '-', c=line.get_color(), label="y exact") ax.set_ylim([-1.5,1.5]) ax.set_ylabel("x, y") ax.legend(loc=0, prop={'size':10}) ax = fig.add_subplot(2,1,2, sharex=ax) line, = ax.plot(te, ue, '--', label="u, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_u, '-', c=line.get_color(), label="vx exact") line, = ax.plot(te, ve, '--', label="v, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_v, '-', c=line.get_color(), label="vy exact") ax.set_ylim([-1.5,1.5]) ax.set_ylabel("u, v") ax.set_xlabel("Time") ax.legend(loc=0, prop={'size':10}) # - # Let's evolve it a bit more # + Time_max = 100 N_steps = int(Time_max/float(dt)) # Computation of the Euler's integration te, xe, ye, ue, ve = evolve_Euler(x0, y0, u0, v0, mu, dt, N_steps) # Computation of the exact solution exact_solution_x = [position_x(x0, tt, omega) for tt in te] exact_solution_y = [position_y(x0, tt, 
omega) for tt in te] exact_solution_u = [velocity_x(x0, tt, omega) for tt in te] exact_solution_v = [velocity_y(x0, tt, omega) for tt in te] # - # And plot # + fig = plt.figure(figsize=(16, 16)) ax = fig.add_subplot(2,1,1) line, = ax.plot(te, xe, '--', label="x, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_x, '-', c=line.get_color(), label="x exact") line, = ax.plot(te, ye, '--', label="y, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_y, '-', c=line.get_color(), label="y exact") ax.set_ylim([-1.5,1.5]) ax.set_ylabel("x, y") ax.legend(loc=0, prop={'size':10}) ax = fig.add_subplot(2,1,2, sharex=ax) line, = ax.plot(te, ue, '--', label="u, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_u, '-', c=line.get_color(), label="vx exact") line, = ax.plot(te, ve, '--', label="v, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_v, '-', c=line.get_color(), label="vy exact") ax.set_ylim([-1.5,1.5]) ax.set_ylabel("u, v") ax.set_xlabel("Time") ax.legend(loc=0, prop={'size':10}) # - # This looks bad... Let's integrate it even more # + Time_max = 1000 N_steps = int(Time_max/float(dt)) # Computation of the Euler's integration te, xe, ye, ue, ve = evolve_Euler(x0, y0, u0, v0, mu, dt, N_steps) # Computation of the exact solution exact_solution_x = [position_x(x0, tt, omega) for tt in te] exact_solution_y = [position_y(x0, tt, omega) for tt in te] exact_solution_u = [velocity_x(x0, tt, omega) for tt in te] exact_solution_v = [velocity_y(x0, tt, omega) for tt in te] # + fig = plt.figure(figsize=(16, 16)) ax = fig.add_subplot(2,1,1) line, = ax.plot(te, xe, '-o', label="x, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_x, '-', c=line.get_color(), label="x exact") line, = ax.plot(te, ye, '-o', label="y, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_y, '-', c=line.get_color(), label="y exact") ax.set_ylabel("x, y") ax.legend(loc=0, prop={'size':10}) ax = fig.add_subplot(2,1,2, sharex=ax) line, = ax.plot(te, ue, '-o', label="u, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_u, '-', c=line.get_color(), label="vx exact") line, = ax.plot(te, ve, '-o', label="v, Euler, dt = %.2f" %dt) ax.plot(te, exact_solution_v, '-', c=line.get_color(), label="vy exact") ax.set_ylabel("u, v") ax.set_xlabel("Time") ax.legend(loc=0, prop={'size':10}) # - # Let's investigate the effect of the timestep # + Time_max = 500 fig = plt.figure(figsize=(12, 8)) ax = fig.add_subplot(1,1,1) for dt in [0.2, 0.1, 0.001]: N_steps = int(Time_max/float(dt)) # Computation of the Euler's integration te, xe, ye, ue, ve = evolve_Euler(x0, y0, u0, v0, mu, dt, N_steps) #te, xe, ve = evolve_Euler(x0 , v0, omega, dt, int(Time_max/float(dt))) ax.plot(te, xe, label="dt={}".format(dt)) ax.set_ylim([-15,7.5]) ax.set_ylabel("x") ax.set_xlabel("Time") ax.legend(loc=0, prop={'size':10}) # - # ## Phase space # # To visualize this a bit differently, let us plot the phase space: $u = f(x)$ and $v = f(y)$ # + Time_max = 10 dt = 0.01 N_steps = int(Time_max/float(dt)) te, xe, ye, ue, ve = evolve_Euler(x0, y0, u0, v0, mu, dt, N_steps) fig = plt.figure(figsize=(8, 8)) ax = fig.add_subplot(1,1,1) print("now is time to work: plot phase space u vs x") # plot calculated phase space ax.plot(xe, ue, '--', label="Euler, dt={}".format(dt)) # plot analytical phase space ax.plot(position_x(a=x0, t=te, omega=omega), velocity_x(a=x0, t=te, omega=omega), label="Exact solution") ax.set_ylabel("u") ax.set_xlabel("x") ax.legend(loc=0, prop={'size':10}) ax.set_ylim([-1.5,1.5]) ax.set_xlim([-1.5,1.5]) plt.show() fig = plt.figure(figsize=(8, 8)) ax1 = fig.add_subplot(1,1,1) 
print("now is time to work: plot phase space v vs y") # plot calculated phase space ax1.plot(ye, ve, '--', label="Euler, dt={}".format(dt)) # plot analytical phase space ax1.plot(position_y(a=x0, t=te, omega=omega), velocity_y(a=x0, t=te, omega=omega), label="Exact solution") ax1.set_ylabel("v") ax1.set_xlabel("y") ax1.legend(loc=0, prop={'size':10}) ax1.set_ylim([-1.5,1.5]) ax1.set_xlim([-1.5,1.5]) # - # There is something clearly very wrong. # ## Energy of the system: is it conserved? # # # # The energy of the system is given by # \begin{equation} # E = E_{\rm grav} + E_{\rm kin} = - \frac{\mu}{r} + \frac{1}{2} V^2 %= -\frac{C}{r} # \end{equation} # where $r = \sqrt{x^2+y^2}$ and $V^2 = u^2 + v^2$. # # We will compute $\Delta E = E(t) - E(t=0)$. If there is conservation of the energy, this should be zero. # + fig = plt.figure(figsize=(8, 8)) ax = fig.add_subplot(1,1,1) Time_max = 1000 for dt in [0.1, 0.01, 0.001]: N_steps = int(Time_max/float(dt)) # Computation of the Euler's integration te, xe, ye, ue, ve = evolve_Euler(x0, y0, u0, v0, mu, dt, N_steps) print("now is time to work") # Compute the energy compared to the Energy at time 0 (called energy_0) delta_energy = np.zeros((N_steps+1)) energy_0 = - mu/((xe[0]**2 + ye[0]**2)**0.5) + (ue[0]**2 + ve[0]**2)/2 for i in range(N_steps+1): delta_energy[i] = - mu/((xe[i]**2 + ye[i]**2)**0.5) + (ue[i]**2 + ve[i]**2)/2 - energy_0 ax.plot(te, delta_energy, label="dt={}".format(dt)) ax.set_ylabel("$\Delta$ Energy") ax.set_xlabel("Time") ax.set_yscale('log') ax.set_xscale('log') ax.legend(loc=0, prop={'size':10}) # - # The energy is not conserved... # ## Angular momentum: is it conserved? # # Another quantity which is supposed to be conserved is the total angular momentum. # # The angular momentum in the heliocentric frame is given by # \begin{equation} # \mathbf{h} = \frac{m_0 m_1}{m_0 + m_1}\mathbf{r} \times \mathbf{v} # \end{equation} # # For our example, the angular momentum (per unit mass) is given by # \begin{equation} # \begin{split} # \mathbf{h} &= \mathbf{r} \times \mathbf{v}\\ # h_z &= x v - y u # \end{split} # \end{equation} # # Let us see if this quantity is conserved. To do that let us compute $\Delta h_z = h_z(t) - h_z(t=0)$. # + fig = plt.figure(figsize=(8, 8)) ax = fig.add_subplot(1,1,1) Time_max = 1000 for dt in [0.1, 0.01, 0.001]: N_steps = int(Time_max/float(dt)) # Computation of the Euler's integration te, xe, ye, ue, ve = evolve_Euler(x0, y0, u0, v0, mu, dt, N_steps) print("now is time to work") # Compute the angular momentum compared to the angular momentum at time 0 (called angular_momentum_0) delta_angular_momentum = np.zeros((N_steps+1)) angular_momentum_0 = (xe[0]*ve[0]) - (ye[0]*ue[0]) for i in range(N_steps+1): delta_angular_momentum[i] = (xe[i]*ve[i]) - (ye[i]*ue[i]) - angular_momentum_0 ax.plot(te, delta_angular_momentum, label="dt={}".format(dt)) ax.set_ylabel("$\Delta$ angular momentum") ax.set_xlabel("Time") ax.set_yscale('log') ax.set_xscale('log') ax.legend(loc=0, prop={'size':10}) # - # The angular momentum is not conserved either... 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.8 64-bit (''Qiskit_Ocean'': conda)' # name: python388jvsc74a57bd0173f2ffbce537830dc37cc01123f1dcc118f483457eee13e6b392e68a5e39cc7 # --- from qiskit import IBMQ, execute from qiskit.providers.ibmq.managed import IBMQJobManager from qiskit.providers.ibmq.job import job_monitor from qiskit.circuit.random import random_circuit IBMQ.load_account() provider = IBMQ.get_provider(hub='ibm-q') backend = provider.get_backend('simulator_statevector') # list all jobs which have been executed from our provider in the past for job in backend.jobs(): print(job.job_id()) job_monitor(job) qc = random_circuit(num_qubits=5, depth=4, measure=True) qc.draw(output='mpl') job = execute(qc, backend, shots=4096, job_name="test_job_name", job_tags=["test", "job"]) job_monitor(job) counts = job.result().get_counts(0) counts # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pickle import matplotlib.pyplot as plt import numpy as np import seaborn as sns; sns.set() import pandas as pd # %matplotlib inline # - def array2dataframe(loss_array, log_grad_var_array, estimator, every_n=500): nseeds, nsteps = loss_array.shape data_for_pd = [] if estimator == "relax": estimator_name = "PL-RELAX" elif estimator == "rebar": estimator_name = "PL-REBAR" elif estimator == "reinforce": estimator_name = "REINFORCE" else: estimator_name = "EXACT" for i in range(nseeds): for j in range(nsteps): data_for_pd.append( [j * every_n, loss_array[i, j], log_grad_var_array[i, j], estimator_name] ) dataframe = pd.DataFrame(data=data_for_pd, columns=["Steps", "Loss", "LogGradVar", "Estimator"]) return dataframe def load_statistics(estimator): log_grad_vars = [] mean_obj = [] with open("./exp_log/history_{0}.pkl".format(estimator), "rb") as file: history = pickle.load(file) if estimator == "exact": log_grad_vars.append(history["grad_std"].mean(-1)) else: log_grad_vars.append(2*np.log(history["grad_std"].mean(-1))) mean_obj.append(history["mean_objective"]) log_grad_vars = np.array(log_grad_vars) mean_obj = np.array(mean_obj) return array2dataframe(mean_obj, log_grad_vars, estimator) statistics_dataframe = pd.concat([ load_statistics("relax"), load_statistics("rebar"), load_statistics("reinforce"), load_statistics("exact") ], ignore_index=True) from IPython.core.pylabtools import figsize figsize(14, 5) plt.rcParams.update( { 'font.size': "22", 'font.family': ["Times New Roman"], } ) plt.subplot(1, 2, 1) sns.lineplot(x="Steps", y="Loss", hue="Estimator", data=statistics_dataframe) plt.subplot(1, 2, 2) sns.lineplot( x="Steps", y="LogGradVar", hue="Estimator", hue_order=["PL-RELAX", "PL-REBAR", "REINFORCE"], data=statistics_dataframe ) plt.tight_layout(); plt.savefig("./figures/toy_together.png", bbox_inches='tight', dpi=300); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # # Cell-free and in vivo characterization of LuxR, LasR, and RpaR activation Systems # ## Data anlysis notebook # # This jupyter notebook serves to reproduce all of the figures published in "Cell-free and in vivo characterization of LuxR, 
LasR, and RpaR activation systems" by Murray, 2017. # # All analysis is performed in Python 3.5 using Anaconda. # # All data can be found in the Data directory of this git repository. All plate maps can be found in Data/Maps. # First we are going to import all of the packages we need for the entirety of the notebook. import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from matplotlib import rcParams # This sets default backgrounds for our seaborn plots to make them look a bit nicer. rcParams.update({'figure.autolayout': True}) rc={'lines.linewidth': 4, 'axes.labelsize': 30, 'axes.titlesize': 24} # First we want to recreate figure 1, part A through D. # # Figure 1A is a time trace off a BioTek H1M. The data for this file is in /Data/170529_RpaR_CT_Trial_tidy.csv. # # Below we are going to write a function for taking a series of wells and plotting a linegraph with standard deviation. Then we are going to pass in all of our wells from the figure and create a formatted plot at the end. # ## Figure 1A def plotWells(wells, gain, tag, color): ''' Takes a set of wells from the BioTek data that are biological replicates and generates a plot with the mean as a solid line and +/- standard deviation shaded. Function takes a list of wells, wells, the gain of interest for each well (string), the tag (label) to be plotted, and the color you want the line to be, passed as a string interpretable by matplotlib. ''' # Create a list to hold all of the data from each well. allWells = [] # Iterate over each well that was passed for well in wells: # Slice the data frame by well wellData = flxData[flxData['Well'] == well] # Select the appropriate gain that was passed wellData = wellData[wellData['Gain'] == gain] # Get the corresponding time stamp for each data point time = wellData.loc[:, 'Time (hr)'] # Append all of the micromolar concentrations of fluorescence to allWells allWells.append(wellData.uM) # Make an array from allWells wellArray = (allWells[0], allWells[1], allWells[2], allWells[3]) # Get the mean across the four biological replicates at each timepoint mean = np.mean(wellArray, axis=0) # Calculate the standard deviation across the four biological replicates at each timepoint std = np.std(wellArray, axis=0) # Create the boundary for the below mean shading below_mean = np.subtract(mean, std) # Create the boundary for the above mean shading above_mean = np.add(mean, std) # Create the shaded region plt.fill_between(time, below_mean, above_mean, alpha=0.4, color=color) # Create the line plot plt.plot(time, mean, color=color, label=tag) # Now we are going to load in our fluorescence data from the BioTek. We are using already tidy format data, for a package to tidy raw data please see github.com/sclamons/murraylab_tools # Read in fluorescence data into a pandas dataframe flxData = pd.read_csv('Data/170529_RpaR_CT_Trial_tidy.csv') # For this plot we want to plot RpaR with pRpa with different concentrations of Rpa ligand (p-coumaroyl). Those wells are: # I-P, 9-12. Each row is a different amount of AHL, each column is a replicate experiment. I is the negative control (DMSO only), and then we go from high concentraiton (100uM) in J to low concentration in P. # # Let's plot that out. 
# + # Identify the wells we want to plot well1 = '9' well2 = '10' well3 = '11' well4 = '12' # Call the plotWells function for each experimental group we want to plot plotWells(['J' + well1, 'J' + well2, 'J' + well3, 'J' + well4], 61, '100uM', '#b10026') plotWells(['K' + well1, 'K' + well2, 'K' + well3, 'K' + well4], 61, '10uM', '#e31a1c') plotWells(['L' + well1, 'L' + well2, 'L' + well3, 'L' + well4], 61, '1uM', '#fc4e2a') plotWells(['M' + well1, 'M' + well2, 'M' + well3, 'M' + well4], 61, '100nM', '#fd8d3c') plotWells(['N' + well1, 'N' + well2, 'N' + well3, 'N' + well4], 61, '10nM', '#feb24c') plotWells(['O' + well1, 'O' + well2, 'O' + well3, 'O' + well4], 61, '1nM', '#fed976') plotWells(['P' + well1, 'P' + well2, 'P' + well3, 'P' + well4], 61, '0.1nM', '#ffeda0') plotWells(['I' + well1, 'I' + well2, 'I' + well3, 'I' + well4], 61, '0nM', '#ffffcc') # Format the plot to make it a little bit cleaner plt.xlabel('Time (hours)', fontsize=18) plt.ylabel('Absolute fluorescence (uM of GFP)', fontsize=18) plt.legend(loc='upper left', fontsize=15) plt.tick_params(axis='both', which='major', labelsize=18) plt.show() # - # ## Figures 1B-D # # Now that we have an example trace the next thing we want to look at is crosstalk heatmaps from our TX-TL data. This is going to consist of four main steps: # 1. Calculate the maximum fluorescence across the time series # 2. Subtract background fluorescence # 3. Normalize all measurements for a given transcription factor to the highest expression observed # 4. Plot the heatmap # # Figure 1B is LuxR data, so we're going to first load that in # Read in fluorescence data into a pandas dataframe flxData = pd.read_csv('Data/170526_LuxR_CT_Trial_Tidy.csv') # Now we are going to use a function that will return the 95th percentile value across a given time series. # # Because we have replicates (N=4 for all TX-TL crosstalk experiments), we are going to pass the four wells we are interested in to the function and it is going to return the mean of the 95th percentiles for each of the four wells. # # + def get95(wells, gain, negativeControl=0): ''' Takes a list of wells (list), a gain (str), and optionally a negative control value (numeric). If no negative control value is given it is defaulted to 0. The function takes a set of wells and for each well calculates the 95th percentile fluorescence value. It then averages those fluorescence values and subtracts your negative control value to generate a background- subtracted maximum fluorescence for each group of wells. It then returns this value. ''' # Initialize empty list of percentileValues percentileValues = [] # Iterate over all wells passed, append the 95th percentile value from each well to the list of all # 95th percentiles for well in wells: # Slice dataframe by well wellData = flxData[flxData['Well'] == well] # Slice dataframe by gain wellData = wellData[wellData['Gain'] == gain] # Use numpy to grab the 95th percentile value from our list of uM values maxFlx = np.percentile(wellData.uM.values, 95) # Add each value to the list percentileValues.append(maxFlx) # Return the mean background-subtracted value across all wells passed return np.mean(np.array(percentileValues))-negativeControl # + # First we want to run this function for each of the three negative controls. 
LuxR_pLux_neg = get95(['H13', 'H14', 'H15', 'H16'], 61) LuxR_pLas_neg = get95(['H1', 'H2', 'H3', 'H4'], 61) LuxR_pRpa_neg = get95(['P1', 'P2', 'P3', 'P4'], 61) # Now we're going to run this function for the nine condtions we need LuxR_pLux_LuxAHL = get95(['D13', 'D14', 'D15', 'D16'], 61, LuxR_pLux_neg) LuxR_pLux_LasAHL = get95(['D17', 'D18', 'D19', 'D20'], 61, LuxR_pLux_neg) LuxR_pLux_RpaAHL = get95(['D21', 'D22', 'D23', 'D24'], 61, LuxR_pLux_neg) LuxR_pLas_LuxAHL = get95(['D1', 'D2', 'D3', 'D4'], 61, LuxR_pLas_neg) LuxR_pLas_LasAHL = get95(['D5', 'D6', 'D7', 'D8'], 61, LuxR_pLas_neg) LuxR_pLas_RpaAHL = get95(['D9', 'D10', 'D11', 'D12'], 61, LuxR_pLas_neg) LuxR_pRpa_LuxAHL = get95(['L1', 'L2', 'L3', 'L4'], 61, LuxR_pRpa_neg) LuxR_pRpa_LasAHL = get95(['L5', 'L6', 'L7', 'L8'], 61, LuxR_pRpa_neg) LuxR_pRpa_RpaAHL = get95(['L9', 'L10', 'L11', 'L12'], 61, LuxR_pRpa_neg) # + # Nowe we want to assemble our 2d Array for plotting with seaborn's heatmap plot. We are going to remove any negative # values (artifacts from background subtraction) by using np.clip(0) which will replace any negative value with 0. firstRow = np.array([LuxR_pLux_LuxAHL, LuxR_pLux_LasAHL, LuxR_pLux_RpaAHL]).clip(0) secondRow = np.array([LuxR_pLas_LuxAHL, LuxR_pLas_LasAHL, LuxR_pLas_RpaAHL]).clip(0) thirdRow = np.array([LuxR_pRpa_LuxAHL, LuxR_pRpa_LasAHL, LuxR_pRpa_RpaAHL]).clip(0) # Now we build our 2d array by combining firstRow, secondRow, and thirdRow heatmapVector = [firstRow, secondRow, thirdRow] # And finally we normalize the 2d Array by the maximum value in the array and round each entry to 2 decimal points heatmapVector = np.around(np.divide([firstRow, secondRow, thirdRow], np.amax(heatmapVector)), 2) # - # Plot out the normalized heatmap. ax = sns.heatmap(heatmapVector, annot=True, annot_kws={'size':25}, linewidths =3, cmap="YlGnBu", vmin=0, vmax=1) sns.set(font_scale=1.9) plt.show() # We'll fix the labels in photoshop, but this is the core of figure 1B. Time to make the same graphs but for 1C, and 1D. # # 1C is LasR data so we need to load in a new file. Once we do that we are going to run almost exactly what we did above for figure 1B. # Load in the flx data for LasR flxData = pd.read_csv('Data/170530_LasR_CT_Trial_Tidy.csv') # + # First we want to run this function for each of the three negative controls. LasR_pLux_neg = get95(['H5', 'H6', 'H7', 'H8'], 61) LasR_pLas_neg = get95(['H17', 'H18', 'H19', 'H20'], 61) LasR_pRpa_neg = get95(['P5', 'P6', 'P7', 'P8'], 61) # Now we're going to run this function for the nine condtions we need LasR_pLux_LuxAHL = get95(['D1', 'D2', 'D3', 'D4'], 61, LasR_pLux_neg) LasR_pLux_LasAHL = get95(['D5', 'D6', 'D7', 'D8'], 61, LasR_pLux_neg) LasR_pLux_RpaAHL = get95(['D9', 'D10', 'D11', 'D12'], 61, LasR_pLux_neg) LasR_pLas_LuxAHL = get95(['D13', 'D14', 'D15', 'D16'], 61, LasR_pLas_neg) LasR_pLas_LasAHL = get95(['D17', 'D18', 'D19', 'D20'], 61, LasR_pLas_neg) LasR_pLas_RpaAHL = get95(['D21', 'D22', 'D23', 'D24'], 61, LasR_pLas_neg) LasR_pRpa_LuxAHL = get95(['L1', 'L2', 'L3', 'L4'], 61, LasR_pRpa_neg) LasR_pRpa_LasAHL = get95(['L5', 'L6', 'L7', 'L8'], 61, LasR_pRpa_neg) LasR_pRpa_RpaAHL = get95(['L9', 'L10', 'L11', 'L12'], 61, LasR_pRpa_neg) # Nowe we want to assemble our 2d Array for plotting with seaborn's heatmap plot. We are going to remove any negative # values (artifacts from background subtraction) by using np.clip(0) which will replace any negative value with 0. 
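# (Note: .clip(0) in the rows below is the ndarray method, equivalent to np.clip(array, 0, None):
#  entries below 0 are set to 0 and everything else is left unchanged.)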
firstRow = np.array([LasR_pLux_LuxAHL, LasR_pLux_LasAHL, LasR_pLux_RpaAHL]).clip(0) secondRow = np.array([LasR_pLas_LuxAHL, LasR_pLas_LasAHL, LasR_pLas_RpaAHL]).clip(0) thirdRow = np.array([LasR_pRpa_LuxAHL, LasR_pRpa_LasAHL, LasR_pRpa_RpaAHL]).clip(0) # Now we build our 2d array by combining firstRow, secondRow, and thirdRow heatmapVector = [firstRow, secondRow, thirdRow] # And finally we normalize the 2d Array by the maximum value in the array and round each entry to 2 decimal points heatmapVector = np.around(np.divide([firstRow, secondRow, thirdRow], np.amax(heatmapVector)), 2) # Plot out the normalized heatmap. ax = sns.heatmap(heatmapVector, annot=True, annot_kws={'size':25}, linewidths =3, cmap="YlGnBu", vmin=0, vmax=1) plt.show() # - # Load in the RpaR data flxData = flxData = pd.read_csv('Data/170529_RpaR_CT_Trial_Tidy.csv') # + # First we want to run this function for each of the three negative controls. RpaR_pLux_neg = get95(['H9', 'H10', 'H11', 'H12'], 61) RpaR_pLas_neg = get95(['H21', 'H22', 'H23', 'H24'], 61) RpaR_pRpa_neg = get95(['P9', 'P10', 'P11', 'P12'], 61) # Now we're going to run this function for the nine condtions we need RpaR_pLux_LuxAHL = get95(['D1', 'D2', 'D3', 'D4'], 61, RpaR_pLux_neg) RpaR_pLux_LasAHL = get95(['D5', 'D6', 'D7', 'D8'], 61, RpaR_pLux_neg) RpaR_pLux_RpaAHL = get95(['D9', 'D10', 'D11', 'D12'], 61, RpaR_pLux_neg) RpaR_pLas_LuxAHL = get95(['D13', 'D14', 'D15', 'D16'], 61, RpaR_pLas_neg) RpaR_pLas_LasAHL = get95(['D17', 'D18', 'D19', 'D20'], 61, RpaR_pLas_neg) RpaR_pLas_RpaAHL = get95(['D21', 'D22', 'D23', 'D24'], 61, RpaR_pLas_neg) RpaR_pRpa_LuxAHL = get95(['L1', 'L2', 'L3', 'L4'], 61, RpaR_pRpa_neg) RpaR_pRpa_LasAHL = get95(['L5', 'L6', 'L7', 'L8'], 61, RpaR_pRpa_neg) RpaR_pRpa_RpaAHL = get95(['L9', 'L10', 'L11', 'L12'], 61, RpaR_pRpa_neg) # Nowe we want to assemble our 2d Array for plotting with seaborn's heatmap plot. We are going to remove any negative # values (artifacts from background subtraction) by using np.clip(0) which will replace any negative value with 0. firstRow = np.array([RpaR_pLux_LuxAHL, RpaR_pLux_LasAHL, RpaR_pLux_RpaAHL]).clip(0) secondRow = np.array([RpaR_pLas_LuxAHL, RpaR_pLas_LasAHL, RpaR_pLas_RpaAHL]).clip(0) thirdRow = np.array([RpaR_pRpa_LuxAHL, RpaR_pRpa_LasAHL, RpaR_pRpa_RpaAHL]).clip(0) # Now we build our 2d array by combining firstRow, secondRow, and thirdRow heatmapVector = [firstRow, secondRow, thirdRow] # And finally we normalize the 2d Array by the maximum value in the array and round each entry to 2 decimal points heatmapVector = np.around(np.divide([firstRow, secondRow, thirdRow], np.amax(heatmapVector)), 2) # Plot out the normalized heatmap. ax = sns.heatmap(heatmapVector, annot=True, annot_kws={'size':25}, linewidths =3, cmap="YlGnBu", vmin=0, vmax=1) plt.show() # - # ## Figure 2A-C # # Now that we have reproduced figure 1, subplots A-D, we can move on to figure 2. # # The first set of figures we're going to make are the in vivo crosstalk heatmaps. These are going to be very similar to the plots provided above, but now we need to normalize our GFP readings by OD before we can create our heatmap. Our negative control well for these experiments is a set of three wells (each a distinct colony) that instead of being induced by an AHL are in media containing the same final % of DMSO. # # However, our main steps are still very similar to what they were above. Briefly: # # 1. Calculate the maximum fluorescence / OD across the time series # 2. Subtract background fluorescence (calculated from DMSO only wells) # 3. 
Normalize all measurements for a given transcription factor to the highest expression observed # 4. Plot the heatmap # # We're going to start with LuxR data, as before. This experiment was split across two different BioTeks which means we have two files we need to pull from, and two different sets of negative control wells. That makes the analysis code a bit longer, but same exact concept as what's going to follow for 2B and 2C. flxData1 = pd.read_csv('Data/170601_invivo_CT_panel_Tidy.csv') flxData2 = pd.read_csv('Data/170613_LuxR_pLas_CT_experiment_tidy.csv') # + def get95_ODNormalized(wells, gain1, gain2, dataFile, negativeControl=0): ''' Takes a list of wells (list), a gain1 (str), a gain2 (str), a datafile (str), and optionally a negative control value (numeric). If no negative control value is given it is defaulted to 0. The function takes a set of wells and for each well calculates the 95th percentile of OD normalize fluorescence, where gain1 is the data you want normalized and gain2 is what you want it normalized by. It then averages those fluorescence values and subtracts your negative control value to generate a background- subtracted maximum fluorescence/OD for each group of wells. It then returns this value. ''' percentileValues = [] # Iterate over all wells passed, append the 95th percentile value from each well to the list of all # 95th percentiles for well in wells: # Slice the wellData by well wellData = dataFile[dataFile['Well'] == well] # Get the GFP by slicing out gain1 wellDataGFP = wellData[wellData['Gain'] == gain1] # Get the OD data by slicing out gain2 wellDataOD = wellData[wellData['Gain']==gain2] # Normalize GFP values by OD wellDataGFPNorm = (wellDataGFP.AFU.values)/(wellDataOD.AFU.values) # Get the 95th percentile of that dataset maxFlx = np.percentile(wellDataGFPNorm, 95) # Append it to the list of percentiles percentileValues.append(maxFlx) # Return the mean of our background-subtracted samples return np.mean(np.array(percentileValues))-negativeControl # Calculate all of our negative controls LuxR_pLux_neg = get95_ODNormalized(['A10', 'A11', 'A12'], 61, -1, flxData1) LuxR_pLas_neg = get95_ODNormalized(['D10', 'D11', 'D12'], 61, -1, flxData2) LuxR_pRpa_neg = get95_ODNormalized(['B10', 'B11', 'B12'], 61, -1, flxData1) LasR_pLux_neg = get95_ODNormalized(['C10', 'C11', 'C12'], 61, -1, flxData1) LasR_pLas_neg = get95_ODNormalized(['D10', 'D11', 'D12'], 61, -1, flxData1) LasR_pRpa_neg = get95_ODNormalized(['E10', 'E11', 'E12'], 61, -1, flxData1) RpaR_pLux_neg = get95_ODNormalized(['F10', 'F11', 'F12'], 61, -1, flxData1) RpaR_pLas_neg = get95_ODNormalized(['G10', 'G11', 'G12'], 61, -1, flxData1) RpaR_pRpa_neg = get95_ODNormalized(['H10', 'H11', 'H12'], 61, -1, flxData1) # + # Now that we have all of our negative control values we can get our experimental data results. # First plot is again for LuxR. 
LuxR_pLux_LuxAHL = get95_ODNormalized(['A1', 'A2', 'A3'], 61, -1, flxData1, LuxR_pLux_neg) LuxR_pLux_LasAHL = get95_ODNormalized(['A4', 'A5', 'A6'], 61, -1, flxData1, LuxR_pLux_neg) LuxR_pLux_RpaAHL = get95_ODNormalized(['A7', 'A8', 'A9'], 61, -1, flxData1, LuxR_pLux_neg) LuxR_pLas_LuxAHL = get95_ODNormalized(['D1', 'D2', 'D3'], 61, -1, flxData2, LuxR_pLas_neg) LuxR_pLas_LasAHL = get95_ODNormalized(['D4', 'D5', 'D6'], 61, -1, flxData2, LuxR_pLas_neg) LuxR_pLas_RpaAHL = get95_ODNormalized(['D7', 'D8', 'D9'], 61, -1, flxData2, LuxR_pLas_neg) LuxR_pRpa_LuxAHL = get95_ODNormalized(['B1', 'B2', 'B3'], 61, -1, flxData1, LuxR_pRpa_neg) LuxR_pRpa_LasAHL = get95_ODNormalized(['B4', 'B5', 'B6'], 61, -1, flxData1, LuxR_pRpa_neg) LuxR_pRpa_RpaAHL = get95_ODNormalized(['B7', 'B8', 'B9'], 61, -1, flxData1, LuxR_pRpa_neg) # Nowe we want to assemble our 2d Array for plotting with seaborn's heatmap plot. firstRow = np.array([LuxR_pLux_LuxAHL, LuxR_pLux_LasAHL, LuxR_pLux_RpaAHL]).clip(0) secondRow = np.array([LuxR_pLas_LuxAHL, LuxR_pLas_LasAHL, LuxR_pLas_RpaAHL]).clip(0) thirdRow = np.array([LuxR_pRpa_LuxAHL, LuxR_pRpa_LasAHL, LuxR_pRpa_RpaAHL]).clip(0) heatmapVector = [firstRow, secondRow, thirdRow] heatmapVector = np.around(np.divide([firstRow, secondRow, thirdRow], np.amax(heatmapVector)), 2) # Plot out the normalized heatmap. ax = sns.heatmap(heatmapVector, annot=True, annot_kws={'size':25}, linewidths =3, cmap="YlGnBu", vmin=0, vmax=1) plt.show() # + # Time to do the same for LasR LasR_pLux_LuxAHL = get95_ODNormalized(['C1', 'C2', 'C3'], 61, -1, flxData1, LasR_pLux_neg) LasR_pLux_LasAHL = get95_ODNormalized(['C4', 'C5', 'C6'], 61, -1, flxData1, LasR_pLux_neg) LasR_pLux_RpaAHL = get95_ODNormalized(['C7', 'C8', 'C9'], 61, -1, flxData1, LasR_pLux_neg) LasR_pLas_LuxAHL = get95_ODNormalized(['D1', 'D2', 'D3'], 61, -1, flxData1, LasR_pLas_neg) LasR_pLas_LasAHL = get95_ODNormalized(['D4', 'D5', 'D6'], 61, -1, flxData1, LasR_pLas_neg) LasR_pLas_RpaAHL = get95_ODNormalized(['D7', 'D8', 'D9'], 61, -1, flxData1, LasR_pLas_neg) LasR_pRpa_LuxAHL = get95_ODNormalized(['E1', 'E2', 'E3'], 61, -1, flxData1, LasR_pRpa_neg) LasR_pRpa_LasAHL = get95_ODNormalized(['E4', 'E5', 'E6'], 61, -1, flxData1, LasR_pRpa_neg) LasR_pRpa_RpaAHL = get95_ODNormalized(['E7', 'E8', 'E9'], 61, -1, flxData1, LasR_pRpa_neg) # Nowe we want to assemble our 2d Array for plotting with seaborn's heatmap plot. firstRow = np.around(np.array([LasR_pLux_LuxAHL, LasR_pLux_LasAHL, LasR_pLux_RpaAHL]).clip(0), decimals=2) secondRow = np.around(np.array([LasR_pLas_LuxAHL, LasR_pLas_LasAHL, LasR_pLas_RpaAHL]).clip(0), decimals=2) thirdRow = np.around(np.array([LasR_pRpa_LuxAHL, LasR_pRpa_LasAHL, LasR_pRpa_RpaAHL]).clip(0), decimals=2) heatmapVector = [firstRow, secondRow, thirdRow] heatmapVector = np.around(np.divide([firstRow, secondRow, thirdRow], np.amax(heatmapVector)), decimals=2) # Plot out the normalized heatmap. 
ax = sns.heatmap(heatmapVector, annot=True, annot_kws={'size':25}, linewidths =3, cmap="YlGnBu", vmin=0, vmax=1) plt.show() # + # Time to do the same for RpaR RpaR_pLux_LuxAHL = get95_ODNormalized(['F1', 'F2', 'F3'], 61, -1, flxData1, RpaR_pLux_neg) RpaR_pLux_LasAHL = get95_ODNormalized(['F4', 'F5', 'F6'], 61, -1, flxData1, RpaR_pLux_neg) RpaR_pLux_RpaAHL = get95_ODNormalized(['F7', 'F8', 'F9'], 61, -1, flxData1, RpaR_pLux_neg) RpaR_pLas_LuxAHL = get95_ODNormalized(['G1', 'G2', 'G3'], 61, -1, flxData1, RpaR_pLas_neg) RpaR_pLas_LasAHL = get95_ODNormalized(['G4', 'G5', 'G6'], 61, -1, flxData1, RpaR_pLas_neg) RpaR_pLas_RpaAHL = get95_ODNormalized(['G7', 'G8', 'G9'], 61, -1, flxData1, RpaR_pLas_neg) RpaR_pRpa_LuxAHL = get95_ODNormalized(['H1', 'H2', 'H3'], 61, -1, flxData1, RpaR_pRpa_neg) RpaR_pRpa_LasAHL = get95_ODNormalized(['H4', 'H5', 'H6'], 61, -1, flxData1, RpaR_pRpa_neg) RpaR_pRpa_RpaAHL = get95_ODNormalized(['H7', 'H8', 'H9'], 61, -1, flxData1, RpaR_pRpa_neg) # Nowe we want to assemble our 2d Array for plotting with seaborn's heatmap plot. firstRow = np.array([RpaR_pLux_LuxAHL, RpaR_pLux_LasAHL, RpaR_pLux_RpaAHL]).clip(0) secondRow = np.array([RpaR_pLas_LuxAHL, RpaR_pLas_LasAHL, RpaR_pLas_RpaAHL]).clip(0) thirdRow = np.array([RpaR_pRpa_LuxAHL, RpaR_pRpa_LasAHL, RpaR_pRpa_RpaAHL]).clip(0) heatmapVector = [firstRow, secondRow, thirdRow] heatmapVector = np.around(np.divide([firstRow, secondRow, thirdRow], np.amax(heatmapVector)), decimals = 2) # Plot out the normalized heatmap. ax = sns.heatmap(heatmapVector, annot=True, annot_kws={'size':25}, linewidths =3, cmap="YlGnBu", vmin=0, vmax=1) plt.show() # - # ## Figure 2D-E # # Now we have created the plots for TX-TL traces, TX-TL crosstalk, and in vivo crosstalk. Now we are going to move on to the final two figures which examine the impact of the TF : Reporter DNA ratio for TX-TL mapping. # # The code is very similar to what we used above. Again we are going to get the maximum value out of a timeseries. This is then going to be normalized to the signal of pLux for that given TF concentration. flxData = pd.read_csv('Data/170612_Genetic_CT_Titration_tidy.csv') # + def get95_Replicates(wells, gain): ''' Takes a list of wells and a specified gain. Returns the an array with the 95th percentile from each well. 
''' # Initialize list of percentile values percentileValues = [] # Iterate over all wells passed, append the 95th percentile value from each well to the list of all # 95th percentiles for well in wells: wellData = flxData[flxData['Well'] == well] wellData = wellData[wellData['Gain'] == gain] maxFlx = np.percentile(wellData.uM.values, 95) percentileValues.append(maxFlx) # Return numpy array of percentile values return np.array(percentileValues) # Get the pLux_1nM values, and normalize them by pLux_1nM values pLux_1nM = get95_Replicates(['A1', 'A2', 'A3'], 61) pLux_1nM = np.divide(pLux_1nM, np.mean(pLux_1nM)) # Get the pLux_2nM values, and normalize them by pLux_2nM values pLux_2nM = get95_Replicates(['B1', 'B2', 'B3'], 61) pLux_2nM = np.divide(pLux_2nM, np.mean(pLux_2nM)) # Get the pLux_4nM values, and normalize them by pLux_4nM values pLux_4nM = get95_Replicates(['C1', 'C2', 'C3'], 61) pLux_4nM = np.divide(pLux_4nM, np.mean(pLux_4nM)) # Get the pLux_8nM values, and normalize them by pLux_8nM values pLux_8nM = get95_Replicates(['D1', 'D2', 'D3'], 61) pLux_8nM = np.divide(pLux_8nM, np.mean(pLux_8nM)) # Get the pLux_16nM values, and normalize them by pLux_16nM values pLux_16nM = get95_Replicates(['E1', 'E2', 'E3'], 61) pLux_16nM = np.divide(pLux_16nM, np.mean(pLux_16nM)) # Get the pLas_1nM values, and normalize them by pLux_1nM values pLas_1nM = get95_Replicates(['A4', 'A5', 'A6'], 61) pLas_1nM = np.divide(pLas_1nM, np.mean(pLux_1nM)) # Get the pLas_2nM values, and normalize them by pLux_2nM values pLas_2nM = get95_Replicates(['B4', 'B5', 'B6'], 61) pLas_2nM = np.divide(pLas_2nM, np.mean(pLux_2nM)) # Get the pLas_4nM values, and normalize them by pLux_4nM values pLas_4nM = get95_Replicates(['C4', 'C5', 'C6'], 61) pLas_4nM = np.divide(pLas_4nM, np.mean(pLux_4nM)) # Get the pLas_8nM values, and normalize them by pLux_8nM values pLas_8nM = get95_Replicates(['D4', 'D5', 'D6'], 61) pLas_8nM = np.divide(pLas_8nM, np.mean(pLux_8nM)) # Get the pLas_16nM values, and normalize them by pLux_16nM values pLas_16nM = get95_Replicates(['E4', 'E5', 'E6'], 61) pLas_16nM = np.divide(pLas_16nM, np.mean(pLux_16nM)) # Get the pRpa_1nM values, and normalize them by pLux_1nM values pRpa_1nM = get95_Replicates(['A7', 'A8', 'A9'], 61) pRpa_1nM = np.divide(pRpa_1nM, np.mean(pLux_1nM)) # Get the pRpa_2nM values, and normalize them by pLux_2nM values pRpa_2nM = get95_Replicates(['B7', 'B8', 'B9'], 61) pRpa_2nM = np.divide(pRpa_2nM, np.mean(pLux_2nM)) # Get the pRpa_4nM values, and normalize them by pLux_4nM values pRpa_4nM = get95_Replicates(['C7', 'C8', 'C9'], 61) pRpa_4nM = np.divide(pRpa_4nM, np.mean(pLux_4nM)) # Get the pRpa_8nM values, and normalize them by pLux_8nM values pRpa_8nM = get95_Replicates(['D7', 'D8', 'D9'], 61) pRpa_8nM = np.divide(pRpa_8nM, np.mean(pLux_8nM)) # Get the pRpa_16nM values, and normalize them by pLux_16nM values pRpa_16nM = get95_Replicates(['E7', 'E8', 'E9'], 61) pRpa_16nM = np.divide(pRpa_16nM, np.mean(pLux_16nM)) def makeDataframe(data, label): ''' Takes a dataset (numpy array), and a label (string), and returns a pandas dataframe. 
''' # Create dataframes with appropriate label df1 = pd.DataFrame([[label, data[0]]], columns=['Constructs', 'Normalized Fluorescence']) df2 = pd.DataFrame([[label, data[1]]], columns=['Constructs', 'Normalized Fluorescence']) df3 = pd.DataFrame([[label, data[2]]], columns=['Constructs', 'Normalized Fluorescence']) dfTemp = pd.concat([df1, df2, df3]) # Return the dataframe return dfTemp # Make dataframes for pLux pLux_1nM_df = makeDataframe(pLux_1nM, 'pLux_1nM') pLux_2nM_df = makeDataframe(pLux_2nM, 'pLux_2nM') pLux_4nM_df = makeDataframe(pLux_4nM, 'pLux_4nM') pLux_8nM_df = makeDataframe(pLux_8nM, 'pLux_8nM') pLux_16nM_df = makeDataframe(pLux_16nM, 'pLux_16nM') # Make dataframes for pLas pLas_1nM_df = makeDataframe(pLas_1nM, 'pLas_1nM') pLas_2nM_df = makeDataframe(pLas_2nM, 'pLas_2nM') pLas_4nM_df = makeDataframe(pLas_4nM, 'pLas_4nM') pLas_8nM_df = makeDataframe(pLas_8nM, 'pLas_8nM') pLas_16nM_df = makeDataframe(pLas_16nM, 'pLas_16nM') # Make dataframes for pRpa pRpa_1nM_df = makeDataframe(pRpa_1nM, 'pRpa_1nM') pRpa_2nM_df = makeDataframe(pRpa_2nM, 'pRpa_2nM') pRpa_4nM_df = makeDataframe(pRpa_4nM, 'pRpa_4nM') pRpa_8nM_df = makeDataframe(pRpa_8nM, 'pRpa_8nM') pRpa_16nM_df = makeDataframe(pRpa_16nM, 'pRpa_16nM') # Concatenate all the data into one larger dataframe finalDf = pd.concat([pLux_1nM_df, pLux_2nM_df, pLux_4nM_df, pLux_8nM_df, pLux_16nM_df, pLas_1nM_df, pLas_2nM_df, pLas_4nM_df, pLas_8nM_df, pLas_16nM_df, pRpa_1nM_df, pRpa_2nM_df, pRpa_4nM_df, pRpa_8nM_df, pRpa_16nM_df]) # Make pLux vs. pLas dataframe pLux_vs_pLas = pd.concat([pLux_1nM_df, pLas_1nM_df, pLux_2nM_df, pLas_2nM_df, pLux_4nM_df, pLas_4nM_df, pLux_8nM_df, pLas_8nM_df, pLux_16nM_df, pLas_16nM_df]) # Make pLux vs. pRpa dataframe pLux_vs_pRpa = pd.concat([pLux_1nM_df, pRpa_1nM_df, pLux_2nM_df, pRpa_2nM_df, pLux_4nM_df, pRpa_4nM_df, pLux_8nM_df, pRpa_8nM_df, pLux_16nM_df, pRpa_16nM_df]) # - # Make a swarmplot for pLux vs. pLas sns.swarmplot(x='Constructs', y='Normalized Fluorescence', data=pLux_vs_pLas, size=10) plt.show() #Make a swarmplot for pLux for pRpa sns.swarmplot(x='Constructs', y='Normalized Fluorescence', data=pLux_vs_pRpa, size=10) plt.show() # The formatting is absolutely atrocious, but we'll fix that in illustrator. This means we have reproduced all of the figures # from the text. 
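# Since the clip / normalize / round / heatmap block above is written out three times for the TX-TL panels and three more times for the in vivo panels, it could be folded into a single helper. The function below is only a hypothetical sketch of that refactor (the name and signature are not from the original notebook); it assumes the nine background-subtracted values have already been computed with get95 or get95_ODNormalized.

# +
def plotCrosstalkHeatmap(firstRow, secondRow, thirdRow):
    '''
    Hypothetical helper (not part of the original analysis): clips negative
    background-subtraction artifacts to 0, normalizes the 3x3 array by its
    maximum value, rounds to 2 decimals, and draws the same seaborn heatmap
    used for figures 1B-D and 2A-C.
    '''
    rows = [np.array(row).clip(0) for row in (firstRow, secondRow, thirdRow)]
    heatmapVector = np.around(np.divide(rows, np.amax(rows)), 2)
    sns.heatmap(heatmapVector, annot=True, annot_kws={'size': 25},
                linewidths=3, cmap="YlGnBu", vmin=0, vmax=1)
    plt.show()
    return heatmapVector

# Example call with the LuxR TX-TL values computed for figure 1B:
# plotCrosstalkHeatmap(
#     [LuxR_pLux_LuxAHL, LuxR_pLux_LasAHL, LuxR_pLux_RpaAHL],
#     [LuxR_pLas_LuxAHL, LuxR_pLas_LasAHL, LuxR_pLas_RpaAHL],
#     [LuxR_pRpa_LuxAHL, LuxR_pRpa_LasAHL, LuxR_pRpa_RpaAHL])
# -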
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="cF3ug9_ldwkm" # # Python 101 : Basic Python Programming [Part 1] # by # # 22/08/2021 # # [datacamp] Cheat sheets http://datacamp-community-prod.s3.amazonaws.com/0eff0330-e87d-4c34-88d5-73e80cb955f2 # + [markdown] id="GFXXoNlpz16a" # ## Variables # # # ``` # variable = expression # ``` # # Variable type # ``` # number --> int float # string # boolean # ``` # # Keywords based on Python version 3.7 # ``` # False await else import pass # None break except in raise # True class finally is return # and continue for lambda try # as def from nonlocal while # assert del global not with # async elif if or yield # ``` # # + id="tMD8rjsHWON5" colab={"base_uri": "https://localhost:8080/"} outputId="3e31f429-265e-4915-87cb-e171900e2c7d" # Int a = 1 a # + id="4bovsaOLw5Gr" colab={"base_uri": "https://localhost:8080/"} outputId="588cd8ea-f36e-4b2e-9b8a-f412e9d57e7a" # This is comment a = 1 (float(a) - 1)*99 # + id="91G6iFPEWnc5" colab={"base_uri": "https://localhost:8080/"} outputId="dae5261b-e3d9-4b46-d34d-7ef3cb03b53b" # Float float(a) # + id="reCYBrjSW-WZ" colab={"base_uri": "https://localhost:8080/", "height": 58} outputId="31540699-70e0-425e-e0ad-97fdd39e2018" # String str(a) # + id="TopGFHnV_oqA" colab={"base_uri": "https://localhost:8080/", "height": 58} outputId="0b0c8461-2f65-410b-905c-e42c829d694d" str(a) + "1" # + id="w8Zv3DPAZUt3" colab={"base_uri": "https://localhost:8080/"} outputId="30e69959-c53d-421f-b919-c2bdd97ef00c" bool(a) # + id="Zn5_2PblJ7DJ" colab={"base_uri": "https://localhost:8080/"} outputId="dff2c86a-935c-4977-82ad-ec0327980f72" bool(None) # + id="gUCKSUJmZeNS" colab={"base_uri": "https://localhost:8080/"} outputId="94745146-d916-4d26-8810-aeef3aa4f68f" type(a) # + id="myq5TcAWe7ZP" colab={"base_uri": "https://localhost:8080/", "height": 58} outputId="690e281c-740c-42ad-fd6a-a02ab75766bc" b = "Python!" b # + [markdown] id="v7qySGBL4t47" # ### Basic operators # * Arithmetic Operators # ``` # + - * / % ** // # ``` # * Comparison (Relational) Operators # ``` # == != <> > < >= <= # ``` # * Assignment Operators # ``` # = += -= *= /= %= **= //= # ``` # * Logical Operators # ``` # and or not # ``` # * Bitwise Operators # ``` # & | ^ ~ << >> # ``` # * Membership Operators # ``` # in, not in # ``` # * Identity Operators # ``` # is, is not # ``` # + [markdown] id="_jGLwZ_5a5GA" # ## Collections # + [markdown] id="3ZyIaJomrYPJ" # ### String # + id="PwkD2_FxrXjm" sentence = " Hello Corgi. I am hungry. 
" # + id="iHraaCCo3wOZ" colab={"base_uri": "https://localhost:8080/"} outputId="71df636a-118c-4e5c-db44-c1f3510e4119" len(sentence) # + id="OfSs6-_Xr7Nn" colab={"base_uri": "https://localhost:8080/", "height": 58} outputId="9365bd1c-ea1c-47f0-df0f-ed81733681fd" sentence[1] # + id="9-o4xocQ5she" colab={"base_uri": "https://localhost:8080/", "height": 58} outputId="ffe11393-3c74-4f1b-ba44-3182ace99652" sentence[1:12] # + id="nxSxmQcyrmQQ" colab={"base_uri": "https://localhost:8080/", "height": 58} outputId="38a5d6ae-7680-47da-f9dd-cd1eccd64f62" sentence.strip() # + id="-a6IbCxw3zA8" colab={"base_uri": "https://localhost:8080/"} outputId="1b98ce90-710b-4504-cd47-aa9251e55172" len(sentence.strip()) # + id="r1I9hlv6r4FB" colab={"base_uri": "https://localhost:8080/"} outputId="5b0f987e-ebea-401d-ebc7-9a1e82e9799e" sentence.strip().split(' ') # + id="BdxmLMEuqv2j" colab={"base_uri": "https://localhost:8080/", "height": 58} outputId="90544c1a-56e5-4d28-9893-f2f201d42b82" ss = ' '.join(sentence.strip().split(' ')) ss # + [markdown] id="4SeTYcMBqLjD" # ### List # # ``` # list = [...] # ``` # + id="4YcpeyeUbGca" colab={"base_uri": "https://localhost:8080/"} outputId="1eb79913-62a7-4f18-c5cb-f74f835818a5" l = [b, a, a - 1, a] l # + id="h5tpLduMfpIY" colab={"base_uri": "https://localhost:8080/"} outputId="fab3ea82-9565-4a58-ee1d-883dc58fe951" len(l) # + id="P_7Esctef3G5" colab={"base_uri": "https://localhost:8080/"} outputId="80f27882-2a8a-4de7-b924-a26d6dad2875" l[3] = 0 l # + id="VyVzFzgLCmGr" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="579b7328-f227-4132-c18a-abd950a1a78a" # NameError l[4] = ":)" # + id="jreK8tLJCvj5" colab={"base_uri": "https://localhost:8080/"} outputId="00c41864-5a44-4c97-aa2c-c005984580ff" l # + id="omSAISZ5CweV" colab={"base_uri": "https://localhost:8080/"} outputId="51e84ae4-cfc4-43bd-d189-ca9101f9348a" l.append(":)") l # + id="SzfHN-iiC6Ag" colab={"base_uri": "https://localhost:8080/"} outputId="42f962f0-6165-4373-f521-4c0abeee7928" len(l) # + id="MIQd0KpXC8DE" colab={"base_uri": "https://localhost:8080/"} outputId="7ffc42a1-d440-4fb9-c8dc-eaa838ba2cda" l.pop() #list.pop(index) l # + id="DqMoGEKxJGn9" colab={"base_uri": "https://localhost:8080/"} outputId="1201d830-ea94-49e6-c9f4-4f863dd4f753" l.pop(3) l # + [markdown] id="DtYagBS_qQFy" # ### Tuple # # ``` # tuple = (...) # ``` # + id="ueDnLunycWaY" colab={"base_uri": "https://localhost:8080/"} outputId="74fc95cf-57ff-4d05-b27b-2180bf411d72" t = (0, 1, 2, 3, 4) t # + id="au0wM8XCgn_B" colab={"base_uri": "https://localhost:8080/"} outputId="4ac1e4cf-3cff-4a02-c5e9-d9a21d33819e" len(t) # + id="_rkBkHYUguGs" colab={"base_uri": "https://localhost:8080/"} outputId="088f89e1-bfc0-403e-c0a0-7d9343a11a26" t[0:3] # + id="n3myc9P4hCBD" colab={"base_uri": "https://localhost:8080/"} outputId="333f34e5-9933-4d21-cd45-2486029642d8" t[-1] # + id="S0LgakJNhKZE" colab={"base_uri": "https://localhost:8080/"} outputId="d14b2f26-b664-4509-b4a5-017de3f3bd98" t[-3:] # + id="vj_zrSjyhZBo" colab={"base_uri": "https://localhost:8080/", "height": 199} outputId="535ec580-aed7-4269-c057-f5a1a7b5ded6" # TypeError: Cannot edit tuple. 
t[4] = 5 # + [markdown] id="69kUPKHLJb__" # ### Set # # ``` # set = {...} # ``` # + id="IzN2IwqtJehQ" colab={"base_uri": "https://localhost:8080/"} outputId="6197bc64-f7d0-405d-deee-0b3d89f1fbeb" s = {1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 4, 5, 5} s # + id="uI6B8gzXK5ge" colab={"base_uri": "https://localhost:8080/"} outputId="485af2fa-5354-4fb8-8a70-5913ee5aaa5f" len(s) # + id="1Ct0_roAK8ea" colab={"base_uri": "https://localhost:8080/"} outputId="3524552e-7d83-4e59-ca77-b56a2f16df67" 0 in s # + id="acEJe37MK8x-" colab={"base_uri": "https://localhost:8080/"} outputId="0a6c27b1-3d94-49ba-b11f-2463f2fce906" 1 not in s # + id="bmvtDGRTJqB0" s1 = {5, 6, 6, 7} # + id="E_8aIn4YLIBS" colab={"base_uri": "https://localhost:8080/"} outputId="f4a061eb-4584-4942-b87b-bcfd8e305776" print('s = ', s) print('s1 = ', s1) print('s is a subset of s1 : ', s <= s1) print('s union s1 = ', s | s1) print('s intersect s1 = ', s & s1) print('s difference s1', s - s1) print('s symetric difference s1 = ', s ^ s1) # + [markdown] id="5ReonijCBhrj" # ### Dictionary # # ``` # dict = {key : value} # ``` # + id="QvXx0QchiTts" colab={"base_uri": "https://localhost:8080/"} outputId="b0d3115a-a5cc-4aad-8482-5a1251355aea" grade = { "A" : 80, "B" : 70, "C" : 60, "D" : 50 } grade # + id="kK1D_Izjq8Kj" colab={"base_uri": "https://localhost:8080/"} outputId="ec5c635e-aad3-4263-a81d-275dce0f4df0" grade["A"] # + id="4hSmxQg8rAOa" colab={"base_uri": "https://localhost:8080/"} outputId="27183a1d-4797-4ab5-98ba-d69fd9c82b0e" grade["B+"] = 75 grade # + id="TAXxt9Z2hgwz" colab={"base_uri": "https://localhost:8080/"} outputId="74b04c85-2e97-472f-dedb-602a1c3de5a7" d = {} d["s001"] = "Dog" d["s002"] = "Cat" d # + id="pJdFriSPBuDy" colab={"base_uri": "https://localhost:8080/", "height": 58} outputId="61c04a36-7a01-472b-d8eb-2e905e3aefe1" d["s001"] # + [markdown] id="1wovioh3a9f-" # ## Print # # Print in more detail: https://realpython.com/python-print/ # + id="CHKZusOdbCmf" # ?print # + colab={"base_uri": "https://localhost:8080/"} id="IgrLkFIwdBQO" outputId="c751a3df-c17e-4ce2-f89c-5ae2e34dd88c" name = 'Limpapat' age = 16 gpa = 10/3 print('Hi! \nMy name is', name, ', \nI\'m', age, 'years old, \nMy GPA:', gpa, '.') # + colab={"base_uri": "https://localhost:8080/"} id="miS2LmBugN43" outputId="ddac43c6-b10e-445d-a373-7f495dcf03b0" print('Hi! \nMy name is %s, \nI\'m %d years old, \nMy GPA: %.2f' %(name, age, gpa)) # + colab={"base_uri": "https://localhost:8080/"} id="HkhfL-Lafdgs" outputId="0c0fef4f-d3ff-4244-efd6-fb002c63cdcb" print('Hi! \nMy name is {}, \nI\'m {} years old, \nMy GPA: {}.'.format(name, age, gpa)) # + colab={"base_uri": "https://localhost:8080/"} id="JDLGP_zkd8oy" outputId="727f7923-11cd-4053-ab66-3ebda959f43b" print(f'Hi! 
\nMy name is {name}, \nI\'m {age} years old, \nMy GPA: {gpa}.') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import random random.seed() # ### Clearance Model based on Bell-LaPadula, Chapter 5 # + def initial_documents(): documents = [ { 'title':"Confidential Report", 'content': "efwiejfldjl", 'clearance': ('usa','confidential') }, { 'title':"Secret Blueprints", 'content': "32149209", 'clearance': ('ohio','secret') }, { 'title':"Top Secret Dossier", 'content': "&^%&*^%&^*&^", 'clearance': ('usa','topsecret') }, { 'title':"Unclassified Article", 'content': "AKLLEKNBVN", 'clearance': ('ohio','unclassified') }, ] return documents documents = initial_documents() documents # + def initial_subjects(): subjects = [ { 'knowledge':"%", 'name': "santa", 'clearance': ('world','topsecret') }, { 'knowledge':"e", 'name': "dorothy", 'clearance': ('ohio','confidential') }, { 'knowledge':"8", 'name': "roger", 'clearance': ('usa','secret') }, { 'knowledge':"C", 'name': "sandy", 'clearance': ('world','unclassified') }, ] return subjects subjects = initial_subjects() # + def level_dominates(a, b): # print("does level",a,"dominate",b,"?") levels = ['unclassified', 'confidential', 'secret', 'topsecret'] if (a in levels) and (b in levels): return levels.index(a) >= levels.index(b) return False pairs = [ ('unclassified','secret'), ('secret','unclassified'), ('secret','secret'), ('topsecret','confidential') ] for pair in pairs: a, b = pair print(a, b, level_dominates(a,b)) # + regions = { 'world' : ['usa','europe'], 'usa' : ['delaware','ohio','kansas'], 'europe' : ['france','germany','spain'], 'ohio' : ['cuyahoga','summit','portage','medina'], 'delaware' : ['wilmington','dover'], 'germany' : ['munich','berlin','hanover'], 'summit' : ['akron','fairlawn'], 'akron' : ['highland_square','uakron'] } def region_dominates(a, b): # print("does region",a,"dominate",b,"?") if (a == b): return True if not a in regions: return False if b in regions[a]: return True for region in regions[a]: if region_dominates(region,b): return True return False pairs = [ ('usa','ohio'), ('europe','europe'), ('africa','ohio'), ('usa','france'), ('ohio','portage'), ('usa','portage'), ('usa','akron'), ('usa','highland_square'), ('europe','portage'), ('ohio','usa') ] for pair in pairs: a, b = pair print(a, b, region_dominates(a,b)) # - # + def dominates(a,b): # print("does clearance",a,"dominate",b,"?") region_a, level_a = a region_b, level_b = b if level_b == "unclassified": return True return level_dominates(level_a, level_b) return region_dominates(region_a, region_b) and level_dominates(level_a, level_b) print(dominates(('usa','secret'),('ohio','confidential'))) print(dominates(('usa','confidential'),('ohio','secret'))) print(dominates(('usa','secret'),('ohio','secret'))) print(dominates(('usa','secret'),('europe','secret'))) print(dominates(('world','secret'),('uakron','secret'))) # + def simple_security(subject, document): return dominates(subject['clearance'],document['clearance']) def star_property(subject, document): return dominates(document['clearance'],subject['clearance']) def can_read(subject, document): return simple_security(subject, document) def can_write(subject, document): return star_property(subject, document) # - can_read(subjects[0],documents[1]) can_read(subjects[2],documents[2]) for subject in subjects: for document in documents: if 
can_read(subject, document): print(subject['name'],'can read',document['content']) else: print(subject['name'],'can not read',document['content']) for subject in subjects: for document in documents: if can_write(subject, document): print(subject['name'],'can write',document['content']) else: print(subject['name'],'can not write',document['content']) # + def show_conditions(): print() print("SUBJECTS") print() for subject in subjects: print(subject['name'], 'with clearance of', subject['clearance'], 'knows about ',[subject['knowledge']]) print() print("DOCUMENTS") print() for document in documents: print(document['title'], 'with clearance of', document['clearance'], 'has content of ',[document['content']]) print() show_conditions() # + def attempt_to_read(subject, document, verbose=False): if not can_read(subject, document) and random.randint(1,1000) <= 999: if verbose: print(subject['name'], "can not read from" , document['title']) return if verbose: print(subject['name'], "reads from" , document['title']) item = random.choice(document['content']) if item not in subject['knowledge']: subject['knowledge'] += item attempt_to_read(subjects[0],documents[0]) attempt_to_read(subjects[1],documents[0]) # + def attempt_to_write(subject, document, verbose=False): if not can_write(subject, document) and random.randint(1,1000) <= 999: if verbose: print(subject['name'], "can not write to" , document['title']) return if verbose: print(subject['name'], "writes to" , document['title']) item = random.choice(subject['knowledge']) if item not in document['content']: document['content'] += item attempt_to_write(subjects[0],documents[0]) attempt_to_write(subjects[1],documents[0]) # + subjects = initial_subjects() documents = initial_documents() show_conditions() # for subject in subjects: # for document in documents: # attempt_to_read(subject, document) # attempt_to_write(subject, document) print("start interactions") for i in range(0,100000): subject = random.choice(subjects) document = random.choice(documents) if random.choice("RW") == "R": attempt_to_read(subject, document) else: attempt_to_write(subject, document) print("stop interactions") show_conditions() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tushare as ts cons = ts.get_apis() # 1、获取股票行情 #股票日线行情 df = ts.bar('600000', conn=cons, freq='D', start_date='2016-01-01', end_date='') df.head(5) # 2、股票因子数据 #带因子的行情 df = ts.bar('600000', conn=cons, start_date='2016-01-01', end_date='', ma=[5, 10, 20], factors=['vr', 'tor']) df.head(5) # 3、复权数据 #复权行情, adj=qfq(前复权), hfq(后复权),默认None不复权 df = ts.bar('600000', conn=cons, adj='qfq', start_date='2016-01-01', end_date='') df.head(5) # 4、分钟和其它频度数据 #分钟数据, 设置freq参数,分别为1min/5min/15min/30min/60min,D(日)/W(周)/M(月)/Q(季)/Y(年) df = ts.bar('600000', conn=cons, freq='1min', start_date='2016-01-01', end_date='') df.head(15) # 指数行情 #指数日行情, 设置asset='INDEX' df = ts.bar('000300', conn=cons, asset='INDEX', start_date='2016-01-01', end_date='') df.head(5) # 5、港股数据 #港股数据, 设置asset='X' df = ts.bar('00981', conn=cons, asset='X', start_date='2016-01-01', end_date='') df.head(5) # 6、期货数据 #期货数据, 设置asset='X' df = ts.bar('CU1801', conn=cons, asset='X', start_date='2016-01-01', end_date='') df.head(5) # 7、美股数据 import time import sys from datetime import datetime, timedelta,date def n_days_ago(n): #now=datetime.now() today=date.today() #print("Now:\n"+str(today)) 
ndays_ago=today-timedelta(n) #print(str(n)+" days ago:\n"+str(ndays_ago)) return str(ndays_ago) n=180 while n>0: ndate=n_days_ago(n) df = ts.tick('300676', conn=cons, date=ndate) if df is None: print(ndate + ": NULL") else: print(ndate + ": Valid") print(len(df.datetime)) n=n-1 df=ts.get_tick_data('000050',date="2017-09-16") print(df) n=180 while n>0: ndate=n_days_ago(n) df = ts.get_tick_data('000050',date=ndate) if df is None: print(ndate + ": NULL") else: print(ndate + ": Valid") n=n-1 #美股数据, 设置asset='X' df = ts.bar('BABA', conn=cons, asset='X', start_date='2016-01-01', end_date='') df.head(5) # 8,股票tick # + #股票tick,type:买卖方向,0-买入 1-卖出 2-集合竞价成交 _df = ts.tick('000050', conn=cons, date='2018-02-26') if _df is None: idx=0 else: idx=len(_df.datetime) i=0 while(i 0 its always true in Python Boolean statements are True or False (case-sensitive) my_variable_6 = bool(" ") # whitespace string is True --> behind a whitespace is a character ascii sign which is a True context my_variable_7 = bool("") # empty string is always False print( my_variable_1, type(my_variable_1), "\n", my_variable_2, type(my_variable_2), "\n", my_variable_3, type(my_variable_3), "\n", my_variable_4, type(my_variable_4), "\n", my_variable_5, type(my_variable_5), "\n", my_variable_6, type(my_variable_6), "\n", my_variable_7, type(my_variable_7) ) # + # the naming of a variable does have a huge variance # it is possible to use "_" underline signs between words # the possible variable naming styles are: # --> camelCase # --> PascalCase # --> snake_case # it is also possible to use numbers inside a variable name but not for the first sign # the first variable letter need to be an underline or a normal letter # reserved functions or special signs are not allowed # if u using a reserved function (built-in) or reserved word (str, int, ...) u overwrite the functionallity --- this will devastate your program # during development most of all programmer using temporary variable names, in production code we need to keep the topic information for our variable # because of nearly no limitation for the variable name i prefer to use snake_case with lower letter with descritpion of the variable topic # if the variable does have the topic to keep a sensor gps signal frame we call the variable name my_gps_sensor_frame = "some gps sensor data" # gps could have diferent datatypes but this is not the topic at the moment # some developer keep the datatype inside the variable also str_my_gps_sensor_frame = "..." # depends on you how to style your variables # capital letter variables are normally cons inside a program MY_AWESOME_CIRCLE_CONSTANT = 3.1415 # short form of Pi for calculation WINDOW_WIDTH = 640 # window size paramter in guis WINDOW_HEIGHT = 480 # window size paramter in guis # sometimes we dont know how to store or use a variable # in such a case it is possible to use a placeholder variable instead of a complete variable name # the common placeholder for variables is "_" _ = "dont like to use a this variable" # in python it is also possible to use multiple variable allocations var_1 = var_2 = var_3 = 42 # all 3 variables are integer with value of 42 # - # we learned that a variable (a single value without variable storage also!) 
are objects of classes
# if we keep in mind that everything in python is handled in an object-oriented way, we understand how variables are stored and used
# the interpreter decides which datatype our variable has and dynamically creates an object of the matching class
# to learn which class methods we can use, we need to look at the documentation of that class
# the built-in function for this is help()
# help() can be called on a class directly, on a function, on a variable or on a value
help(int)

# +
# help shows us the docstring documentation of the class/function/variable
# under the methods section we see a few methods whose names are wrapped in double underscores, e.g. __name__
# these "dunder" methods are special, system-relevant functions
# they are triggered by special language constructs, for example:
# print(my_variable)    # calls the internal __str__ method
#
# my_var1 + my_var2     # calls the internal __add__ method
#
# names that start with two underscores are treated as private: they are hidden and not meant to be called directly
# in a class context, everything starting with a double underscore is only accessible inside the class
# the methods we normally use start with a regular letter and are always called with round brackets (with or without arguments)
#
# methods of an object are called by writing a dot after the object
# you can think of this as chaining the object with a method call
# if the method returns an object, we can chain another call onto that return value
# this chaining style is typical for one-liner python code
#
# let's look at an example
my_string_variable = "this is a test string"
print("Normal: \n", my_string_variable)
print("function call with capital first letter: \n", my_string_variable.capitalize())
print("function call uppercase: \n", my_string_variable.upper())
print("function call to upper, then chain swapcase, which swaps upper and lower case: \n", my_string_variable.upper().swapcase())
print("function call with capitalize and swapcase: \n", my_string_variable.capitalize().swapcase())
# -

# chaining method calls is very powerful and can be repeated many times
# try to find the limits of chaining :)

# ## a few special datatypes
# * we already learned a few basic datatypes; with external modules it is possible to add many more
# * these special datatypes are usually called complex or compound datatypes
# * in programming languages, compound datatypes are structures built from sub-datatypes

# +
# list-like types
# if we want to work with collections of values, we can use the list datatype
# python has a few list-like objects, each written with a different kind of bracket
# the classic list is written with square brackets
# a list can hold objects of any other datatype
my_list_1 = [1,2,3,4] # this variable is a plain list (an array-like sequence)
my_list_2 = [1,2, "text", 42.42] # it is also possible to mix element datatypes
print("normal list:", my_list_2, type(my_list_2))

# standard lists are mutable, which means we can add values directly without overwriting the variable
# to add a value to the list, we use the .append(value) method
my_list_1.append(42)
print("after append:",my_list_1)

# the second list-like type we look at is the tuple
# a tuple is an immutable sequence
# it can also mix element datatypes
# tuples are written with round brackets
my_list_3 = (42, 55, "GMT", "zone")
print("tuple list", my_list_3, type(my_list_3))
# a tuple is a good choice when we want to group different datatypes into one fixed variable
# it is also what python uses to return multiple values from a function
# for example, a gps sensor reading consists of longitude, latitude, time and height --- a tuple is the perfect datatype right after the sensor readout
# tuples cannot be changed --> immutable; to modify one we either overwrite the variable or convert it to a list with the list() function

# the third datatype is used in a different way
# a set is a collection of unique values, possibly of mixed types
# a set keeps only unique values (duplicates are dropped automatically)
# a set is written with curly brackets
my_list_4 = {42,55 , 20, 42, "test", 17, 0, 4} # note that 42 appears twice in the declaration
print("set list", my_list_4, type(my_list_4)) # 42 shows up only once
# sets also support mathematical set operations like union, intersection and so on - try them out yourself :)

# sets are quite special; we can also use the uniqueness property to extend what curly brackets can express
# a ":" inside curly brackets separates a key from a value
# the part left of the colon is interpreted as a unique key (set property) and the part right of the colon as the value assigned to that key
# this construct is better known as a key-value mapping
# in a key-value mapping the keys must be unique (set property), while the value can be anything we want (variable property)
my_list_5 = {42: "test value", "banana":[1,2,3], "sth.more":{ "subthing_1": 42.42, "subthing_2":1704}}
print("dictionary list:", my_list_5, type(my_list_5))
# in python a key-value mapping is called a "Dictionary"

# +
# lists and list-like datatypes are iterable
# iterable means we can access the elements of the collection one by one
# elements are accessed with square brackets (indexing)
# with lists and tuples we can ask for an element by its position
# counting starts at 0, so the first element of a list is accessed with [0]
print("element 1 of list 1:", my_list_1[0]) # the first element of list_1, counting from left to right
# it is also possible to access the last element (the first element counted from the right)
print("last element of list 3", my_list_3[-1])
# inside the square brackets we can also specify a range of values with a step (slicing)
# for a range we use the colon notation again
# if we want "from element 1 to element 2" we write [0:2] (left to right, the end index is excluded)
# the step is given after a second colon: [0:20:2] --> every second element from index 0 up to 20
print("first two elements of list 2", my_list_2[:2]) # first two elements of list 2

# sets are special: they are unordered, so they cannot be indexed
# to take an element out of a set we need a method
# the pop() method returns an arbitrary element
# after pop() the returned value is removed from the set
print("set before pop:", my_list_4)
print("pop value:",my_list_4.pop())
print("set after pop:",my_list_4)

# a dictionary works a little differently
# we access values through their keys
# if we use a plain positional index (0,1,2,3,...) we get an error
# to get values out of a dictionary we need either a method or a direct key lookup
print("dictionary:",my_list_5)
print("keys of dictionary:",my_list_5.keys())
print("call a specific key from dictionary:",my_list_5["sth.more"])
# it is also possible to chain lookups if there is a sub-collection inside a collection
# because datatypes can be mixed and nested, multidimensional structures are possible
# our dictionary contains a sub-dictionary we want to access
print("subdictionary call with chained lookup:", my_list_5["sth.more"]["subthing_2"])
# to change a value inside a dictionary we simply assign a new value to its key
# assigning to a key that does not exist yet adds a new key
my_list_5["new_key"] = (1,2,3,4,5)
print("after adding a new key:",my_list_5)
# -

# remember that strings are a special datatype: a sequence of characters
# that means strings are list-like objects and iterable
# we can index and slice a string too, which helps when we want to work with strings directly, e.g. for comparisons
print("Stringtext:", my_string_variable)
print("first 10 characters:",my_string_variable[:10])
print("every second of the first 10 characters:", my_string_variable[:10:2])
print("string reversed:", my_string_variable[::-1])

# +
# as mentioned, sets support mathematical set operations
# examples of set methods
my_list_set1 = {1,2,3,4,5,6,7,8}
my_list_set2 = {2,4,6,8,10}
print(my_list_set1.difference(my_list_set2)) # elements only in set1
print(my_list_set1.intersection(my_list_set2)) # elements in both sets
# a lot more is possible
# -

# ## palindrome check

# +
# with the knowledge of indexing and method calls we can develop a short program
# let's build a small palindrome check
# a palindrome is a word or sentence that reads the same in both directions (left to right == right to left)

# first we define a palindrome string
my_palin = "A man, a plan, a canal: Panama."
print("original:\n", my_palin)

# the string contains special characters we need to remove:
# "," - comma; ":" - colon; "." - dot; " " - whitespace
# in addition to removing them we need to convert all characters to the same case
my_corrected_palin = my_palin.replace(":","").replace(",","").replace(".","").replace(" ","").lower()
print("corrected:\n", my_corrected_palin)

# next we reverse the reading order
my_changed_readorder_palin = my_corrected_palin[::-1]
print("reversed:\n", my_changed_readorder_palin)

# finally we compare both variables
print("is our string a palindrome?\n", my_changed_readorder_palin == my_corrected_palin)
# -

# ## print syntax
# * the print function writes variables or text to the active terminal
# * the usual pattern is to separate variables with commas; print then automatically inserts a whitespace between the pieces
# * if we use string concatenation instead, every part must be a string (non-strings have to be cast with str())
# * a string can also be multiplied by a number: to print "bla" three times we write 3*"bla"
# * inside string literals we can use escape sequences such as \n for a newline, as seen above
# * sometimes we do not want escape sequences to be interpreted, or we want to fill a string with variable values
# * the format() method replaces curly-bracket placeholders inside a string with variables
# * format() is still useful today, for example for dynamically built URLs: we declare a general URL for a GET request with placeholders and replace them with our variables, e.g. when the same request is repeated in a cycle with changing user information
# * if we want to print a string exactly as written - as raw text - we can put a small r in front of the string literal
# * since python 3.6 we can also use f-strings, which make variable substitution even easier

# +
# normal print usage with comma separation
print("1234", "\n", "5678")

# a triple-quoted (docstring-style) string is also a string
my_docstring = """this docstring test is a test
    additional line indented by pressing the tabulator key
 indentation with whitespaces
this is again a new line
"""
print("formatted printout depends on the docstring: \n",my_docstring)
# -

# string concatenation with casting and repetition
print("1" + "2" + str(42) * 3)

# replacing placeholders with format()
print("on my text at position {} we have a variable {} with values".format(42, "testname") )
print("on my text at position {0} we have a variable {0} with values".format(42, "testname") )
print("on my text at position {0} we have a variable {1} with values".format(42, "testname") )
print("on my text at position {1} we have a variable {0} with values".format(42, "testname") )
print("on my text at position {1} we have a variable {1} with values".format(42, "testname") )

# raw strings
print(r"your button expects an integer value, for example = {my_int} with value 42.")
print(r"with a \n escape command you add a newline to your print statement")

# f-strings: direct substitution of variables inside the string
my_int = 99
print(f"value of my variable is {my_int} with datatype: {type(my_int)}")

# * congratulations, you finished this lesson
#
# ## what we learned
# * datatypes in python and type casting
# * special structured datatypes (lists, dictionaries)
# * calling class methods (method calls)
# * indexing and slicing
# * 
print syntax # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="XaQD4yBX7jO2" # ## Loading the dataset # + id="KyfTCPub61-p" # Import PyDrive and associated libraries # This only needs to be done once per notebook from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # Authenticate and create the PyDrive client # This only needs to be done once per notebook auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) # + id="io0uGUvy7gWA" # Download a file based on its file ID. # A file ID looks like: laggVyWshwcyP6kEI-y_W3P8D26sz file_id = '1QxkppL24m9TAT5Ix5AjJBvlK-6sfeN0V' # Check your own ID in GDrive downloaded = drive.CreateFile({'id': file_id}) # Save file in Colab memory downloaded.GetContentFile('tweet_data.csv') # + [markdown] id="OeQfMyVG7pU2" # # Text Pre-Processing # + id="bSYBXxKx7tH9" colab={"base_uri": "https://localhost:8080/"} outputId="9489d414-a644-4038-a346-957ea2a31ac7" # !pip install tweet-preprocessor # + id="iku57eXf7_u2" import numpy as np import pandas as pd import matplotlib.pyplot as plt import preprocessor as p import seaborn as sns # + id="ylVp2Udw8TvG" df = pd.read_csv('tweet_data.csv',lineterminator='\n') # + id="Adbjg36k-UWV" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="12d92123-6505-44c9-d528-61765c90bf11" df # + id="iuh_zy5l-bUp" df = df[df["Language"] == "en"].drop(["Tweet Id","Unnamed: 0", "Language"], axis=1) # + id="XW3an2d1-04h" def clean_date(date_obj): for x in range(0, len(date_obj)): if date_obj[x] == " ": break date_obj = date_obj[:x] return date_obj def preprocess_tweet(text): text = p.clean(text) return text # + id="TpfXuc0xAEWk" df["Datetime"] = df["Datetime"].apply(clean_date) df['Clean Text'] = df['Text'].apply(preprocess_tweet) df['Clean Text']= df['Clean Text'].str.replace('[^\w\s]','') # + colab={"base_uri": "https://localhost:8080/"} id="on0u3R5JALS3" outputId="e55e5b9d-39a6-44e5-eeb2-9c2e944ad1b0" # Removing Stop Words import nltk from nltk.corpus import stopwords nltk.download('stopwords') stop = stopwords.words('english') # + id="nGlhsQiPAL7c" df['Clean Text'] = df['Clean Text'].apply(lambda x: " ".join(x.lower() for x in x.split() if x not in stop)) # + colab={"base_uri": "https://localhost:8080/", "height": 213} id="AP0fCbqcAOVe" outputId="bf795eba-dfb1-4485-81e2-a54fe1d78222" pd.set_option('display.max_colwidth', 400) df.sample(2) # + [markdown] id="dq7KS0paJ-3w" # # TextBlob Sentiment Analysis, plotting the polarity scores' distribution # + id="QJWgnB0xXXRz" from textblob import TextBlob from wordcloud import WordCloud # + id="c0gHth7zA-Ei" df['TBScore'] = df['Clean Text'].apply(lambda x: TextBlob(x).sentiment.polarity) # Set threshold to define neutral sentiment neutral_thresh = 0.05 # Convert polarity score into sentiment categories df['Sentiment'] = df['TBScore'].apply(lambda c: 'Positive' if c >= neutral_thresh else ('Negative' if c <= -(neutral_thresh) else 'Neutral')) # + colab={"base_uri": "https://localhost:8080/", "height": 436} id="ab2tWl3XXjKT" outputId="b48a187e-c52a-4288-824e-653037b801e7" fig = plt.figure(figsize=(10, 6)) df['TBScore'].hist() plt.xlabel('Polarity Score', fontsize=18) plt.ylabel('Frequency', fontsize=18) plt.xticks(fontsize=16) 
plt.yticks(fontsize=16) # + [markdown] id="q4ygrjeVKE3r" # ## Plotting Word Clouds # + colab={"base_uri": "https://localhost:8080/"} id="ChEkS_bZXmXG" outputId="7c7298a1-6910-4f32-a05c-6489c7aef115" import nltk nltk.download('stopwords') from nltk.corpus import stopwords from wordcloud import STOPWORDS # + colab={"base_uri": "https://localhost:8080/", "height": 198} id="Kuca6HlzXp9S" outputId="3ddfea20-e3b3-4144-d901-811a1fc6b52c" pos_tweets=df[df["Sentiment"]=="Positive"] txt=" ".join(tweet for tweet in pos_tweets["Clean Text"]) stop_words = ["amp"] + list(STOPWORDS) wordcloud = WordCloud(collocations = False, background_color = 'white', stopwords = stop_words).generate(txt) plt.imshow(wordcloud, interpolation="bilinear") plt.axis("off") plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 198} id="2zF2hQhbXuzR" outputId="b1064df3-b9dd-4ab2-e6a8-21b9454099ec" neg_tweets=df[df["Sentiment"]=="Negative"] txt=" ".join(tweet.lower() for tweet in neg_tweets["Clean Text"]) wordcloud = WordCloud(collocations = False,background_color = 'white',stopwords=stop_words).generate(txt) plt.imshow(wordcloud, interpolation="bilinear") plt.axis("off") plt.show() # + [markdown] id="y8yPGSwAKHte" # ## Plotting Bar Chart and Time-Series graphs # + colab={"base_uri": "https://localhost:8080/", "height": 143} id="aKKM5FCxYCGb" outputId="8ef71e05-dbcd-4b82-f325-23adcf661695" def get_value_counts(col_name): count = pd.DataFrame(df[col_name].value_counts()) percentage = pd.DataFrame(df[col_name].value_counts(normalize=True).mul(100)) value_counts_df = pd.concat([count, percentage], axis = 1) value_counts_df = value_counts_df.reset_index() value_counts_df.columns = ['sentiment', 'counts', 'percentage'] value_counts_df.sort_values('sentiment', inplace = True) value_counts_df['percentage'] = value_counts_df['percentage'].apply(lambda x: round(x,2)) value_counts_df = value_counts_df.reset_index(drop = True) return value_counts_df tb_sentiment_df = get_value_counts('Sentiment') tb_sentiment_df # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="e3Vu0sZpYJ_F" outputId="a8b7bc89-b137-4435-a96e-e70c359f9517" ax = sns.barplot(x="sentiment", y="percentage", data=tb_sentiment_df) ax.set_title('TextBlob results') for index, row in tb_sentiment_df.iterrows(): ax.text(row.name,row.percentage, round(row.percentage,1), color='black', ha="center") # + colab={"base_uri": "https://localhost:8080/", "height": 523} id="0o5AsPNhYLWV" outputId="0db3821c-65de-4275-a249-6a110e66c88c" # Plotting timeseries plot df["Datetime"] = pd.to_datetime(df.Datetime) timeline = df.resample('D', on='Datetime')["Sentiment"].value_counts().unstack(1) timeline.reset_index(inplace=True) timeline = timeline.melt("Datetime", var_name='Sentiment', value_name='Tweets') plt.figure(figsize=(22,10)) ax = sns.lineplot(x="Datetime", y="Tweets", hue="Sentiment", data=timeline) ax.figure.autofmt_xdate() # + [markdown] id="wx6-ZYkyKSpu" # ## Checking out the top 10 most Negative tweets (With Polarity of -1) # + colab={"base_uri": "https://localhost:8080/", "height": 449} id="q0-rEcFvYO3V" outputId="b297c258-c40a-4e4a-f6ac-f2b56a54e506" pd.set_option('display.max_colwidth', 400) df.sort_values(by='TBScore', ascending=True)[['Text', 'TBScore', 'Username']].reset_index(drop=True).head(n=10) # + [markdown] id="gnC2tPTRKYuW" # ## Checking out the top 10 most Positive tweets (With Polarity of +1) # + colab={"base_uri": "https://localhost:8080/", "height": 449} id="KOmeUXCrYR3H" outputId="80e4fd77-e008-458f-8361-5d8a10096127" 
df.sort_values(by='TBScore', ascending=False)[['Text', 'TBScore', 'Username']].reset_index(drop=True).head(n=10) # + [markdown] id="qDr2hT2mKq_7" # # VADER (Valence Aware sEntiment Intensity Analyser) Sentiment Analysis # + colab={"base_uri": "https://localhost:8080/"} id="6wwxilnIYTru" outputId="e0fa8200-3f1c-48c6-8098-cb38c5a58ce5" # Performing VADER Sentiment Analysis import nltk nltk.download('vader_lexicon') # Download the VADER lexicon from nltk.sentiment.vader import SentimentIntensityAnalyzer # Initialize sentiment intensity analyzer sia = SentimentIntensityAnalyzer() # Obtaining NLTK scores df['VScore'] = df['Clean Text'].apply(lambda x: sia.polarity_scores(x)) # Obtaining NLTK compound score df['VComp'] = df['VScore'].apply(lambda score_dict: score_dict['compound']) # Set threshold to define neutral sentiment neutral_thresh = 0.05 # Categorize scores into the sentiments of positive, neutral or negative df['Sentiment'] = df['VComp'].apply(lambda c: 'Positive' if c >= neutral_thresh else ('Negative' if c <= -(neutral_thresh) else 'Neutral')) # + [markdown] id="MK2xeuBmK09n" # ## Plotting the Polarity Scores' distribution, Bar Chart and Time-Series graphs # + colab={"base_uri": "https://localhost:8080/", "height": 436} id="Xm8n4oC4ZF8J" outputId="a653fb69-c933-4118-d794-3871ef574233" fig = plt.figure(figsize=(10, 6)) df['VComp'].hist() plt.xlabel('Polarity Score', fontsize=18) plt.ylabel('Frequency', fontsize=18) plt.xticks(fontsize=16) plt.yticks(fontsize=16) # + colab={"base_uri": "https://localhost:8080/", "height": 143} id="q5sjpfOwZLcN" outputId="998884d9-0479-4d8f-b2e8-a6985bf94469" def get_value_counts(col_name): count = pd.DataFrame(df[col_name].value_counts()) percentage = pd.DataFrame(df[col_name].value_counts(normalize=True).mul(100)) value_counts_df = pd.concat([count, percentage], axis = 1) value_counts_df = value_counts_df.reset_index() value_counts_df.columns = ['sentiment', 'counts', 'percentage'] value_counts_df.sort_values('sentiment', inplace = True) value_counts_df['percentage'] = value_counts_df['percentage'].apply(lambda x: round(x,2)) value_counts_df = value_counts_df.reset_index(drop = True) return value_counts_df tb_sentiment_df = get_value_counts('Sentiment') tb_sentiment_df # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="va9M-UlfZUti" outputId="257db838-5c84-4a06-acb4-e1fb88c77b13" ax = sns.barplot(x="sentiment", y="percentage", data=tb_sentiment_df) ax.set_title('Vader results') for index, row in tb_sentiment_df.iterrows(): ax.text(row.name,row.percentage, round(row.percentage,1), color='black', ha="center") # + colab={"base_uri": "https://localhost:8080/", "height": 523} id="ZWvFUfRSZXEy" outputId="8bffcf87-147a-4e6e-8c90-de869053dd04" df["Datetime"] = pd.to_datetime(df.Datetime) timeline = df.resample('D', on='Datetime')["Sentiment"].value_counts().unstack(1) timeline.reset_index(inplace=True) timeline = timeline.melt("Datetime", var_name='Sentiment', value_name='Tweets') plt.figure(figsize=(22,10)) ax = sns.lineplot(x="Datetime", y="Tweets", hue="Sentiment", data=timeline) ax.figure.autofmt_xdate() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-1b6b75485105e36c", "locked": true, "schema_version": 1, "solution": false} # # BLU03 - Exercises Notebook # + deletable=false 
editable=false nbgrader={"grade": false, "grade_id": "cell-ee9dcdd4eb1308b9", "locked": true, "schema_version": 1, "solution": false} import hashlib # for grading purposes import math import numpy as np import pandas as pd import requests import sqlalchemy from bs4 import BeautifulSoup # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-cc67f3fcb340fbcd", "locked": true, "schema_version": 1, "solution": false} # ## Part A - SQL exercises # # ### Querying the FIFAdb with a SQL client # # Open your favorite SQL client and connect to the FIFAdb. # The connection settings are the following. # # * host: data-wrangling-batch3.cl9uj9cucww7.eu-west-1.rds.amazonaws.com # * port: 5432 # * user: ldsa_student # * database: fifa # * schema: public # * password: (shared through slack) # # This is a different database than the one we used in the learning notebooks. This database contains information about football matches, players, teams, and which league and country these matches took place in. Additionally, it also countains the player's and team's "attributes", sourced from the EA Sports' FIFA video game series. # # The tables in this database are the following: # # 1. Match: has information about the the football matches: who were the 11 home and away players (identified by their player_id), how many goals did each team score, the date of the match, the league id and the home/away team id's. # 2. Player: contains informations about the players. # 3. Team: contains information about the teams. # 4. League: contains information about the football leagues, including the id of the country where they take place. # 5. Country: names and id's of the countries # 6. Player_Attributes: contains the attributes for each player. # 7. Team_Attributes: contains the attributes for each team. # # You can preview these tables using the SQL client. # # ### Q1. Select the name of the player with id 30981 # # Write a query that selects the name of the player whose id is 30981, and run it in the SQL client. # # Then, assign the result to variable id30981_player_name (just copy and paste the name you obtained). # + deletable=false nbgrader={"grade": false, "grade_id": "cell-03b22b93be7b587e", "locked": false, "schema_version": 1, "solution": true} # YOUR CODE HERE raise NotImplementedError() # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-92e8f002ca863db1", "locked": true, "points": 1, "schema_version": 1, "solution": false} expected_hash = 'e3ccd9684de593c7c6b6354cbe413d233959e7677258bfc3727d807e5900dce2' assert hashlib.sha256(id30981_player_name.encode()).hexdigest() == expected_hash # - # ### Q2. Calculate the maximum number of goals scored by team with id 9825, when playing at home # # Write a query that calculates the highest amount of goals scored by team with id 9825, when playing at home. # # Then, assign the result to variable max_goals_by_team_id_9825 (just copy and paste the value). 
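# As an illustrative sketch only (not a confirmed solution): the cell below shows the aggregate-plus-filter pattern Q2 asks for. The table and column names (`match`, `home_team_goal`, `home_team_id`) are assumptions about the FIFAdb schema -- verify them in your SQL client before relying on this.

# +
q2_query_sketch = """
SELECT MAX(home_team_goal) AS max_home_goals
FROM match
WHERE home_team_id = 9825;
"""
# -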
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-be58fad2af1735bd", "locked": false, "schema_version": 1, "solution": true} # YOUR CODE HERE raise NotImplementedError() # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-1f2f87262d1b5fc4", "locked": true, "points": 1, "schema_version": 1, "solution": false} expected_hash = '7902699be42c8a8e46fbbb4501726517e86b22c56a189f7625a6da49081b2451' assert hashlib.sha256(str(max_goals_by_team_id_9825).encode()).hexdigest() == expected_hash # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-010c418f565b3108", "locked": true, "schema_version": 1, "solution": false} # ### Q3. Calculate the average overall rating of players whose first name is Cristiano # # Are Cristianos predisposed to be good Football players? Only one way to find out! # # Write a query that calculates the average overall_rating attribute of players whose name is "Cristiano *something else*" (that is, players whose first name is Cristiano, and last name varies), and run it in the SQL client. # # Then, assign the result to variable avg_cristiano_rating (round up to the nearest integer). # # **Hint**: check the [LIKE](https://www.postgresql.org/docs/current/static/functions-matching.html#FUNCTIONS-LIKE) keyword for this exercise. # + deletable=false nbgrader={"grade": false, "grade_id": "cell-3a7464176782e2cb", "locked": false, "schema_version": 1, "solution": true} # YOUR CODE HERE raise NotImplementedError() # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-f1f24a46b0d15dc9", "locked": true, "points": 1, "schema_version": 1, "solution": false} expected_hashes = ['8722616204217eddb39e7df969e0698aed8e599ba62ed2de1ce49b03ade0fede', '96061e92f58e4bdcdee73df36183fe3ac64747c81c26f6c83aada8d2aabb1864', 'eb624dbe56eb6620ae62080c10a273cab73ae8eca98ab17b731446a31c79393a'] assert hashlib.sha256(str(avg_cristiano_rating).encode()).hexdigest() in expected_hashes # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-f1ef8aaf001e8015", "locked": true, "schema_version": 1, "solution": false} # ### Q4. Count how many different teams played in Portugal # # Write a query that counts the number of different teams that played in Portugal, across all games. You can calculate this value considering only the home or away team (it should be the same because every team has played on both sides of the field!). # # Assign the result to variable number_of_portuguese_teams (just copy and paste the value). # # **Hints**: keep in mind you only want to count DISTINCT team names. For this, the [DISTINCT](https://www.postgresql.org/docs/current/static/sql-select.html#SQL-DISTINCT) keyword will be essential. Also, remember that the relationship between Country and Match isn't explicitly presented on the Match table, but there is a relationship between League and Country. 
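# As an illustrative sketch of the DISTINCT-count-through-joins pattern hinted at above (again, every table and column name here is an assumption about the schema; adapt it after inspecting the tables in your SQL client):

# +
q4_query_sketch = """
SELECT COUNT(DISTINCT m.home_team_id)
FROM match   AS m
JOIN league  AS l ON l.id = m.league_id
JOIN country AS c ON c.id = l.country_id
WHERE c.name = 'Portugal';
"""
# -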
# + deletable=false nbgrader={"grade": false, "grade_id": "cell-645c2fbe3b16c880", "locked": false, "schema_version": 1, "solution": true} # YOUR CODE HERE raise NotImplementedError() # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-41d99aabd53c87e7", "locked": true, "points": 1, "schema_version": 1, "solution": false} expected_hash = '35135aaa6cc23891b40cb3f378c53a17a1127210ce60e125ccf03efcfdaec458' assert hashlib.sha256(str(number_of_portuguese_teams).encode()).hexdigest() == expected_hash # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-24c0142cc3082a0c", "locked": true, "schema_version": 1, "solution": false} # ### Q5. Find out which team has the highest average goal count while playing away # # Write a query to find out which team scores the most goals on average while playing away. Assign the team's name to the variable team_with_highest_average_goals_away. # # Also find out what this average amount of goals is, and assign it to the variable average_goals_away (round to the nearest integer). # + deletable=false nbgrader={"grade": false, "grade_id": "cell-c5ef88aa51ea99f6", "locked": false, "schema_version": 1, "solution": true} # YOUR CODE HERE raise NotImplementedError() # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-f0f9a337b81de14a", "locked": true, "points": 1, "schema_version": 1, "solution": false} expected_team_hash = '09c58af5e0ed9ebb10922ebc55ceacbb6ea1dc78d6b0237fe5e37b408d9a84d6' assert hashlib.sha256(team_with_highest_average_goals_away.encode()).hexdigest() == expected_team_hash expected_goals_hash = 'd4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35' assert hashlib.sha256(str(average_goals_away).encode()).hexdigest() == expected_goals_hash # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-3a1dca85f9b02ebf", "locked": true, "schema_version": 1, "solution": false} # ### Querying the FIFAdb with pandas # # In these exercises, the goal is to query the FIFAdb using pandas. # # ### Q6. Find the teams who are the least successful at pressuring their opponents' defence. # # The connection settings to use in this exercise are the same ones as in the previous exercises. # # Write a query to find the name, short_name and *average amount of goals scored when playing at home* of the teams with a high "defencepressure" team attribute (*greater than 55*). # # Search only for teams with: # * an *average amount of goals scored when playing at home* lesser than 2 (the least successful teams at pressuring the opponents' defence and scoring goals); # * more than 30 games played at home, to reduce the number of statistically insignificant results. # # Order the results by the average amount of goals scored, in ascending order. # # Assign the result to dataframe df6. # + deletable=false nbgrader={"grade": false, "grade_id": "cell-f2cd873c0ac14ca0", "locked": false, "schema_version": 1, "solution": true} # Create an engine that allows to to connect to the FIFAdb PostgreSQL database # engine = sqlalchemy.create_engine(...) # YOUR CODE HERE raise NotImplementedError() # Write the query as specified in the question # query = ... # YOUR CODE HERE raise NotImplementedError() # Use pandas read_sql_query function to read the query result into a DataFrame # df6 = pd.read_sql_query(...) 
# YOUR CODE HERE raise NotImplementedError() # - df6.iloc[2]["name"] # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-28986c4b783959be", "locked": true, "points": 1, "schema_version": 1, "solution": false} assert type(engine) == sqlalchemy.engine.base.Engine assert len(df6) == 16 assert len(df6.columns) == 3 expected_hash = '29adea3d0c885434d0f6f957aeb61f87c49c416fd1ecc895d438b552a5ec90e9' assert hashlib.sha256(df6.iloc[2]["name"].encode()).hexdigest() == expected_hash expected_hash = 'fe34924d143b814542bfb9714341fa68ac9fca7a0b4eeda1b654abacae2d1a50' assert hashlib.sha256(df6.iloc[4].short_name.encode()).hexdigest() == expected_hash # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-06b122f1f318a355", "locked": true, "schema_version": 1, "solution": false} # ### Q7. Select players with high shot power and agility, and order the results by height # # In this exercise, we want to query a local SQLite database. # In order to do this, connect to the FIFAdb.sqlite database, as was done in the learning notebooks for the_movies.db. The database file we're using is in the same directory as this Exercises Notebook. # # Write a query that selects the player name, height, weight, shot_power and agility for all players with shot_power higher than 85 and agility higher than 80. Order these results in descending order by player height. # # Use pandas to read this query into a DataFrame called df7 with five columns: name, height, weight, shot_power and agility. # + deletable=false nbgrader={"grade": false, "grade_id": "cell-8d45d74ae0d84110", "locked": false, "schema_version": 1, "solution": true} # Create an engine that allows to to connect to the the_movies.db SQLite database # engine = sqlalchemy.create_engine(...) # YOUR CODE HERE raise NotImplementedError() # Write the query as specified in the question # query = ... # YOUR CODE HERE raise NotImplementedError() # Use pandas read_sql_query function to read the query result into a DataFrame # df7 = pd.read_sql_query(...) # YOUR CODE HERE raise NotImplementedError() # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-19abd085a003ab6f", "locked": true, "points": 1, "schema_version": 1, "solution": false} assert type(engine) == sqlalchemy.engine.base.Engine assert len(df7) == 8 assert len(df7.columns) == 5 assert df7.columns.tolist() == ['name', 'height', 'weight', 'shot_power', 'agility'] expected_hash = '51f5bb089774886799fada5cfdf18780d7d33ed30e85b0e0783ab8e1f13b06ea' assert hashlib.sha256(df7.loc[0, 'name'].encode()).hexdigest() == expected_hash expected_hash = '397edbc247ad5e6500752bde4715741a87369467b217220279c3e7adbd7c7ea0' assert hashlib.sha256(str(df7.loc[2, 'height']).encode()).hexdigest() == expected_hash expected_hash = 'ed145fcc7ab03071c1e1548a515ba1f2b66bee9623532681476b6515c3ffc7fd' assert hashlib.sha256(df7.loc[7, 'name'].encode()).hexdigest() == expected_hash # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-a63384ff91b49abf", "locked": true, "schema_version": 1, "solution": false} # ## Part B - Public APIs # # In this exercises, the goal is to get data from a public API. We'll use the [Star Wars API](https://swapi.co/). # # In order to complete the exercises, you'll have to consult the API's [documentation](https://swapi.co/documentation). # #
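# Before the questions, a minimal sketch of the request/response pattern these exercises rely on. The endpoint path used here is only an illustration -- the exact base URL and routes must come from the API documentation linked above.

# +
import requests

response = requests.get("https://swapi.co/api/")   # plain HTTP GET against the API root (illustrative path)
response.raise_for_status()                        # fail loudly on a bad status code
payload = response.json()                          # parse the JSON body into a Python dict
print(type(payload), list(payload.keys()))         # the root typically lists the available resources
# -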
    # # ### Q8. Get planet 7 from the API # # Read the [documentation](https://swapi.co/documentation) of the API in order to find out how to get the planet with id 7. # In order to get this data, you'll need to do an HTTP GET request. # # The result should be a JSON object (which is the same as a dictionary in Python), and assigned to variable planet_7. # + deletable=false nbgrader={"grade": false, "grade_id": "cell-3e7084c9bf8398d0", "locked": false, "schema_version": 1, "solution": true} # Do an HTTP GET request to the Star Wars API to get planet 7 as a JSON object # YOUR CODE HERE raise NotImplementedError() # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-50d051ce26525944", "locked": true, "points": 3, "schema_version": 1, "solution": false} assert type(planet_7) == dict assert set(planet_7.keys()) == {'climate', 'created', 'diameter', 'edited', 'films', 'gravity', 'name', 'orbital_period', 'population', 'residents', 'rotation_period', 'surface_water', 'terrain', 'url'} expected_planet_name_hash = '97479479e5561d6d1124f45e97533586e6f56b12e7b90151409a3ebe7f4e7fe7' assert hashlib.sha256(planet_7['name'].encode()).hexdigest() == expected_planet_name_hash expected_planet_climate_hash = '20f52b106bdf3451be03acc441d360b6b14e057765745311553ca9ce264a7284' assert hashlib.sha256(planet_7['climate'].encode()).hexdigest() == expected_planet_climate_hash # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-90ff2db8962c1d5b", "locked": true, "schema_version": 1, "solution": false} # ### Q9. Filter characters from the API # # Read the documentation of the API in order to find out how to filter information using a GET request with the 'search' parameter. # # Then, find all the characters from the Skywalker family, by searching for 'Skywalker'. The response should be a JSON object with the characters' information. # # The desired results can be found in the 'results' field of the response. Please assign them to variable skywalker_family. # + deletable=false nbgrader={"grade": false, "grade_id": "cell-3c68a66b3b6b56d0", "locked": false, "schema_version": 1, "solution": true} # Do an HTTP GET request to filter characters according to the criteria above # YOUR CODE HERE raise NotImplementedError() # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-15c07449c3238657", "locked": true, "points": 3, "schema_version": 1, "solution": false} assert type(skywalker_family) == list assert len(skywalker_family) == 3 assert set(skywalker_family[0].keys()) == {'birth_year', 'created', 'edited', 'eye_color', 'films', 'gender', 'hair_color', 'height', 'homeworld', 'mass', 'name', 'skin_color', 'species', 'starships', 'url', 'vehicles'} assert skywalker_family[0]['hair_color'] == 'blond' assert skywalker_family[0]['skin_color'] == 'fair' assert skywalker_family[0]['eye_color'] == 'blue' expected_hash = 'e7c68f75d23b428e7afc5b72ab7fca0d5db7e8f0779d89d5c4b89b3c77f4eadd' assert hashlib.sha256(skywalker_family[1]['name'].encode()).hexdigest() == expected_hash # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-91ec6c1c0bc1f4f5", "locked": true, "schema_version": 1, "solution": false} # ## Part C - Web scraping # # In this exercise, we're going to use web scraping to get data about some albums that were released in 2018, from [this web page](https://www.albumoftheyear.org/2018/releases/). We are only going to focus on the first page. # # # ### Q10. 
Scrape all scores (critic and user) for albums on the first page. # # Assign a list with the scores' values to variable score_list. # In the list, each score should be a float. # # *(Extra food for thought, and not required for this exercise: observe what happens to the URL when you press the "Next" page button. Based on this observation, could you figure out how to scrape the ratings for ALL the albums of 2018? Keep in mind - always scrape responsibly and wait some time between scraping each page, so you don't overload the websites!)* # + deletable=false nbgrader={"grade": false, "grade_id": "cell-bd8d7b19855c5b9f", "locked": false, "schema_version": 1, "solution": true} # Assign the URL of the page to be scraped to variable url # url = ... # YOUR CODE HERE raise NotImplementedError() # Do a GET request to get the page content, using the url we've just defined # response = ... # YOUR CODE HERE raise NotImplementedError() # Instanciate a soup object using the response of the GET request # YOUR CODE HERE raise NotImplementedError() # Now it's the tricky part! # Parse the soup in order to retrieve the scores. # In the end, store the scores in a list and assign it to variable score_list. # Make sure that all the score_list in the list are floats! # YOUR CODE HERE raise NotImplementedError() # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-335a90721257cd1a", "locked": true, "points": 6, "schema_version": 1, "solution": false} assert type(score_list) == list assert len(score_list) == 204 assert type(score_list[0]) == float assert score_list[0] == 84.0 assert math.isclose(72.72, np.mean(score_list), rel_tol=1e-2) # + [markdown] deletable=false editable=false nbgrader={"grade": false, "grade_id": "cell-ff83f6395e265feb", "locked": true, "schema_version": 1, "solution": false} # ### Q11. Where did you find the scores? # # When you were scraping the album scores, you found out that the information you needed was in an HTML element, which looks like this: # # ``` # Score goes here # ``` # # Regarding the HTML element where you found the rating's value: # # * Assign the tagname to variable score_tagname # * Assign the classname to variable score_classname # # In both cases you don't need to code, just copy and paste the values into the two variables. # + deletable=false nbgrader={"grade": false, "grade_id": "cell-88e2603722dac1e3", "locked": false, "schema_version": 1, "solution": true} # score_tagname = ... # score_classname = ... # YOUR CODE HERE raise NotImplementedError() # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "cell-f78d31cd4e4779eb", "locked": true, "points": 1, "schema_version": 1, "solution": false} expected_hash = 'cd35a2426062b7d58fd4a63f813cc506ef87e449087d28d256b8c393f20fa364' assert hashlib.sha256(score_tagname.encode()).hexdigest() == expected_hash expected_hash = '112895d7af6c125315cd807a911473c05eb9779bf7cd459c5e29db919ccbf408' assert hashlib.sha256(score_classname.encode()).hexdigest() == expected_hash # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Binary Batch Distillation # This is the eighth problem of the famous set of [Ten Problems in Chemical Engineering](https://www.polymath-software.com/ASEE/Tenprobs.pdf). Here, the goal is to solve a system of equations comprised of ordinary differential equations and nonlinear algebraic equations. 
# # , 2019 # # Problem Setup # # For binary distillation, the moles of liquid remaining change as a function of mole fraction of species 2 (toluene) # # $$\frac{dL}{dx_2} = \frac{L}{x_2\left(k_2-1\right)}$$ # # Antoine equation for species $i$: # # $$P_{vap,i} = 10^{A-\frac{B}{T+C}}$$ # # The vapor liquid equilibrium ratio for each species: # $$k_i = P_{vap,i}/P$$ # # The constraint to enforce is to ensure the overall pressure does not change: # $$x_1P_{vap,1}+x_2P_{vap,2} = P$$ # or # $$k_1x_1+k_2x_2=1$$ # # Problem Tasks # # The batch distillation of benzene (component 1) and toluene (component 2) mixture is being carried out at a pressure of 1.2 atm. Initially, there are 100 moles of liquid in the still, comprised of 60% benzene and 40% toluene (mole fraction basis). Calculate the amount of liquid remaining in the still when concentration of toluene reaches 80%. # # Solutions # + import numpy as np from scipy.integrate import solve_ivp from scipy.optimize import root import matplotlib.pyplot as plt # %matplotlib inline A = np.array([6.90565,6.95464]) B = np.array([1211.033,1344.8]) C = np.array([220.79,219.482]) P = 1.2*760 L0 = [100] x_start = 0.40 x_final = 0.80 T_guess = (80.1+110.6)/2 xspan = [x_start, x_final] def Pvap_constraint(T,x2): x = np.array([1-x2,x2]) P_i = 10**(A-B/(T+C)) k = P_i/P obj = 1 - np.dot(k,x) # make sure partial pressures sum to system pressure return(obj) def distill(x2,L): Topt = root(Pvap_constraint,T_guess,args=(x2)) P_i = 10**(A-B/(Topt.x+C)) k = P_i/P dL_dx = L/(x2*(k[1]-1)) return(dL_dx) sol=solve_ivp(distill,xspan,L0,method='LSODA') # LSODA is an implicit ODE solver # - plt.plot(sol.t,sol.y[0,:]) plt.xlabel('Mole Fraction Toluene') plt.ylabel('Moles of Liquid') plt.show() print('At 80% Toluene, there are {:.3} moles of liquid'.format(sol.y[0,-1])) # # Reference # “The Use of Mathematical Software packages in Chemical Engineering”, , , . # Nuttal, , Workshop Material from Session 12, Chemical Engineering Summer School, Snowbird, # Utah, Aug., 1997. 
# %load_ext watermark # %watermark -v -p scipy,matplotlib,numpy # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #rahul #0 2 4 #rhl #au # - s = input('Enter some string') print('Characters at even position:', s[::2]) print('Characters at odd position:', s[1::2]) s = input('Enter some string') i = 0 print('The characters at even position') while i < len(s): print(s[i], end=',') i = i+2 print() print('The characters at odd position') i=1 while iOpen In Colab # + id="61QkNe0yZpRI" import tensorflow as tf import numpy as np # + id="5SIlPF-YZzTm" a = np.array([10, 30, -50, -10, -15, 5, 25], dtype=float) b = np.array([50, 86, -58, 14, 5, 41, 77], dtype=float) # + id="brTOwV8ac9qM" layer = tf.keras.layers.Dense(units=1, input_shape=[1]) model = tf.keras.Sequential([layer]) # + id="S3StyhE9duA7" model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mean_squared_error") # + colab={"base_uri": "https://localhost:8080/"} id="aD2Z9SAYeKW3" outputId="0a37322a-b077-44f4-a9fe-d048ae51439b" print("Comenzando entrenamiento....") history = model.fit(a, b, epochs=2000, verbose=False) print('Entrenamiento finalizado') # + colab={"base_uri": "https://localhost:8080/", "height": 296} id="EzfApxn6ewI9" outputId="ec871fb4-7e3f-4d3b-f746-58b04d1d96c6" import matplotlib.pyplot as plt plt.xlabel("# Epoca") plt.ylabel("# Perdida") plt.plot(history.history["loss"]) # + colab={"base_uri": "https://localhost:8080/"} id="epmJLyLLfgya" outputId="07039f94-71a7-4446-dcb9-3915affeb613" result = model.predict([-45]) print(str(result)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Extras # # If we have time: Changing the colormap with widgets in two ways -- The Michigan Depth Map # # Order is: # 1. Review of Michigan map # 1. More complicated layout # 1. Even more complicated layout and placement (perhaps unnecessarily so :D ) # # Import our usual things: # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd # Also import ipywidgets: import ipywidgets # ## 1. Review of Michigan map: Michigan colormap and scale with interact # # More info for `Output`: https://ipywidgets.readthedocs.io/en/latest/examples/Output%20Widget.html # **Note:** If we are short on time, we might only get through the color-map-by-hand portion of this. # # We'll need a few extra functions to do this sort of thing "by hand": # Last week we also started working with the Michigan Depth Map which we loaded with Numpy and performed some data cleaning on: # + data_filename = '/Users/jillnaiman/Downloads/michigan_lld.flt' michigan = np.fromfile(data_filename, dtype='f4').reshape((5365, 4201)) michigan[michigan == -9999] = np.nan # set flagged bad data to NaN # quick plot of depths: plt.hist(michigan.flat) plt.show() # - # Neat! Let's look at this data more in the way that it was intended -- as an image. We can use `matplotlib`'s `imshow` function to do this: plt.imshow(michigan) plt.colorbar() plt.show() # ### Question: # # Now that we've had a chance to look at our data a bit, what do we think the values represent? What does a positive value mean? Negative value? Where do we think, spatially, these things will lie? 
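# One quick way to probe this question numerically (this check is not in the original notebook, just a suggestion): count how many grid cells lie above zero, below zero, or are missing.
n_above = np.sum(michigan > 0)          # cells with positive values
n_below = np.sum(michigan < 0)          # cells with negative values
n_missing = np.sum(np.isnan(michigan))  # flagged / NaN cells
print(n_above, n_below, n_missing)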
plt.imshow(michigan) plt.clim(0, 100) # only plot from depths of 0->100 plt.colorbar(extend = 'both') # add little arrow ends plt.show() # Let's see if we can't get a colormap that shows this outline better. Turns out there is an actual "terrain" map: plt.imshow(michigan, cmap="terrain") plt.colorbar() plt.show() # So, while this is starting to look better, intutatively, we want our map to look bluish in the lake, and brownish on the land. We can do this by doing a symmetric log color normalization: import matplotlib.colors as colors plt.imshow(michigan, cmap="terrain", norm = colors.SymLogNorm(10)) plt.colorbar() plt.show() # We can even set the color limits to be symmetric so that the yellow bit is right at zero elevation: np.nanmin(michigan), np.nanmax(michigan) # So, we'll make sure we make our colormap to include these limits: plt.imshow(michigan, cmap="terrain", norm = colors.SymLogNorm(10)) plt.clim(-352, 352) plt.colorbar() plt.show() # If we now look at our image, we see some interesting things. So, now there is a sharp contrast between negative & positive depths/heights and there is not as much contrast between blue/green or brown/white. # # But why? Let's check out the docs for `SymLogNorm`: # + # colors.SymLogNorm? # - # This is a symmetrical log scale so it logs things both in the negative & positive directions. # # Example: np.log10([1,10,50]),np.log10(np.abs([-1,-10,-50])) # We see that 1 and 10 are mapped to a jump of 1 but 1->50 is mapped only to a jump of 0.7 instead of 40. # The lake Michigan data is a very high resolution map, so we can zoom in to see some cool details: plt.imshow(michigan, cmap="terrain", norm = colors.SymLogNorm(10)) plt.clim(-352, 352) plt.colorbar() plt.xlim(2700, 3300) plt.ylim(3300, 3900) # This shows us one of the rivers that feed into lake Michigan. # And just for fun, here is how it looks with our bad "jet" colormap: plt.imshow(michigan, cmap="jet", norm = colors.SymLogNorm(10)) plt.clim(-352, 352) plt.colorbar() # ew. # One natural thing we might want to do is change color scheme and be able to toggle on and off the SymLogNorm color remapper. We can do this 2 ways - by using our widget `@interact` decorator function again, and by explicitly laying out widgets. Let's try to first way first: @ipywidgets.interact(colormap = plt.colormaps(), color_range = (1.0, 352.0, 1.0), sym_log=True) def plot(colormap = 'terrain', color_range = 352, sym_log = True): if sym_log: norm = colors.SymLogNorm(10) else: norm = colors.Normalize() fig, ax = plt.subplots(figsize=(6,8)) # calling colorbar in a different way: CAX = ax.imshow(michigan, cmap=colormap, norm = norm) CAX.set_clim(-color_range, color_range) plt.colorbar(CAX, extend = 'both') plt.show() # ## 2. More complicated layout # # We can mess with the layout of our widgets by creating them externally, and then using them to plot. # Let's start with creating a dropdown widget for all of the colormaps: cmap_widget = ipywidgets.Dropdown(options=plt.colormaps()) # Let's take a quick look: cmap_widget # Ok! So we just have the stand-alone widget. Since we know that some of the color maps work well/less well for this dataset, let's set a default of the "terrain" colormap to this widget: cmap_widget = ipywidgets.Dropdown(options=plt.colormaps(), value='terrain') cmap_widget # Finally, let's ad a description to this widget that is different from the default in the `@interacts` call above. 
cmap_widget = ipywidgets.Dropdown(options=plt.colormaps(), value='terrain', description='Select colormap:') cmap_widget # We note that now our description sort of "runs off" the page. Because we have access to the individual widget, we can mess with the "layout" of this widget -- i.e., how it looks. cmap_widget.keys cmap_widget.layout.width = '500px' # changes the box size cmap_widget.style.keys # here is where the description width is hidden cmap_widget.style.description_width = 'initial' cmap_widget cmap_widget.layout.width = '200px' # back to not so large of a box cmap_widget # Let's now make our checkbox button widget: log_check = ipywidgets.Checkbox(value=True, description='Take log of colormap? ') log_check # Note that we could also use a toggle button (see: https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20List.html#ToggleButton). # We can use these as inputs to an `@interacts` call: @ipywidgets.interact(colormap = cmap_widget, sym_log = log_check) def plot(colormap = 'terrain', color_range = 352, sym_log = True): # hard-coding color_range here if sym_log: norm = colors.SymLogNorm(10) else: norm = colors.Normalize() fig, ax = plt.subplots(figsize=(6,8)) # calling colorbar in a different way: CAX = ax.imshow(michigan, cmap=colormap, norm = norm) CAX.set_clim(-color_range, color_range) plt.colorbar(CAX, extend = 'both') plt.show() # So, now we've messed with how our widgets look, but how about where they are placed? # # One option is the "Even more complicated layout and placement" section below, OR what we will cover with `bqplot` next week (for example, see `bqplot` examples in https://ipywidgets.readthedocs.io/en/latest/examples/Layout%20Templates.html#2x2-Grid) # ## 3. Even more complicated layout and placement # + #plt.close('all') # - from IPython.display import display, clear_output # %config InlineBackend.close_figures=False # If you get "double" displays over the next 2 cells, make sure you have this "config" statement there # This stops the auto-creation of figures until we say "plt.show" or "display" # Read more here: https://github.com/jupyter-widgets/ipywidgets/issues/1940 # + fig = plt.figure() # add axes by hand like last week # order here is: left, bottom, width, height plt.ioff() ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) image = ax.imshow(michigan, cmap='terrain') fig.colorbar(image, extend = 'both') out = ipywidgets.Output() ### NEW WIDGET CALL display(out) # - # The `Output` widget sort of "captures" the display until we explictly call it in context: with out: display(fig) # We use the `Layout` widget call, along with `figsize` in matplotlib to change the size of our image: # + fig = plt.figure(figsize=(8,8)) # add axes by hand like last week # order here is: left, bottom, width, height ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) image = ax.imshow(michigan, cmap='terrain') fig.colorbar(image, extend = 'both') #del out out = ipywidgets.Output(layout=ipywidgets.Layout(height='500px', width = '500px')) ### NEW WIDGET CALL display(out) # hold our output... with out: # until we explicitly say display! display(fig) # only display the fig when we explitly say to! # - # Why would we bother making our lives more complicated like this instead of just using `@interact` like we did before? So that we can start placing our widgets where we want and start to have a little more control over what we are displaying and how. 
For example, let's add a dropdown menu by hand: dropdown = ipywidgets.Dropdown(options=plt.colormaps()) dropdown.keys # `dropdown.index` gives us the # of the color map from `plt.colormaps()`. Let's add this in: # + fig = plt.figure(figsize=(8,8)) # add axes by hand like last week # order here is: left, bottom, width, height ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) image = ax.imshow(michigan, cmap='terrain') fig.colorbar(image, extend = 'both') #del out out = ipywidgets.Output(layout=ipywidgets.Layout(height='500px', width = '500px')) dropdown = ipywidgets.Dropdown(options=plt.colormaps()) hbox=ipywidgets.HBox([out, dropdown]) display(hbox) with out: display(fig) # - # So now we can start placing our interactive widgets how we want! Note that if update the dropdown, nothing happens because its not connected to the plot anyway. Let's work on connecting our dropdown menu to our plot using an `.observe` traitlets call: # + fig = plt.figure(figsize=(8,8)) # add axes by hand like last week # order here is: left, bottom, width, height ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) image = ax.imshow(michigan, cmap='terrain') fig.colorbar(image, extend = 'both') #del out out = ipywidgets.Output(layout=ipywidgets.Layout(height='500px', width = '500px')) dropdown = ipywidgets.Dropdown(options=plt.colormaps()) hbox=ipywidgets.HBox([out, dropdown]) display(hbox) #with out: # display(fig) def updateDropdown(change): print(change) # first just print with out: clear_output(wait=True) # clear everything on the display - don't keep making figures! display(fig) dropdown.observe(updateDropdown) # - # Let's use `change['owner'].index` to grab the index of the colormap we want: # + fig = plt.figure(figsize=(8,8)) # add axes by hand like last week # order here is: left, bottom, width, height ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) image = ax.imshow(michigan, cmap='terrain') fig.colorbar(image, extend = 'both') out = ipywidgets.Output(layout=ipywidgets.Layout(height='500px', width = '500px')) dropdown = ipywidgets.Dropdown(options=plt.colormaps()) hbox=ipywidgets.HBox([out, dropdown]) display(hbox) def updateDropdown(change): cmap=plt.colormaps()[change['owner'].index] # grab our new cmap # let's start by clearing out all our previous axes and starting with a fresh canvas for a in fig.axes: fig.delaxes(a) # draw on our axes like before ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) # make an image and assign a color map image = ax.imshow(michigan, cmap=cmap) fig.colorbar(image, extend = 'both') # display with an output widget with out: clear_output(wait=True) # clear everything on the display - don't keep making figures! 
display(fig) dropdown.observe(updateDropdown) # - # So, it's a little annoying that we have to wait to select something to display, so let's reorganize our function a bit to make it look nice: # + #plt.close('all') # if you get a "too many figures open" warning fig = plt.figure(figsize=(8,8)) # add axes by hand like last week # order here is: left, bottom, width, height ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) image = ax.imshow(michigan, cmap='terrain') fig.colorbar(image, extend = 'both') out = ipywidgets.Output(layout=ipywidgets.Layout(height='500px', width = '500px')) dropdown = ipywidgets.Dropdown(options=plt.colormaps()) hbox=ipywidgets.HBox([out, dropdown]) display(hbox) def updateDropdown(change): if change is not None: cmap=plt.colormaps()[change['owner'].index] for a in fig.axes: fig.delaxes(a) ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) image = ax.imshow(michigan, cmap=cmap) fig.colorbar(image, extend = 'both') with out: clear_output(wait=True) display(fig) dropdown.observe(updateDropdown) updateDropdown(None) # - # Let's keep going and add in our toggle box! # + plt.close('all') # if you get a "too many figures open" warning fig = plt.figure(figsize=(8,8)) # add axes by hand like last week # order here is: left, bottom, width, height ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) image = ax.imshow(michigan, cmap='terrain') fig.colorbar(image, extend = 'both') out = ipywidgets.Output(layout=ipywidgets.Layout(height='500px', width = '500px')) #dropdown = ipywidgets.Dropdown(options=plt.colormaps()) # just so that we can start with 'terrain' dropdown = ipywidgets.Dropdown(options=plt.colormaps(), index=plt.colormaps().index('terrain')) toggleButton = ipywidgets.ToggleButton(value=False,description='Log Norm?') controls = ipywidgets.VBox([dropdown, toggleButton]) hbox=ipywidgets.HBox([out, controls]) display(hbox) # (2) update figure based on toggle on/off def updateToggle(change): if change is not None: #print(change) #print(change['owner']) #print(change['owner'].value) # grab base colormap from other widget cmap=plt.colormaps()[dropdown.index] for a in fig.axes: fig.delaxes(a) ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) # pick norm based on toggle button if change['owner'].value: norm = mpl_colors.SymLogNorm(10) else: norm = mpl_colors.Normalize() image = ax.imshow(michigan, cmap=cmap, norm=norm) fig.colorbar(image, extend = 'both') with out: clear_output(wait=True) display(fig) toggleButton.observe(updateToggle) # (1) update figure based on dropdown # AND MAKE SURE WE ADD IN RESULTS OF TOGGLE BUTTON! 
def updateDropdown(change): if change is not None: cmap=plt.colormaps()[change['owner'].index] for a in fig.axes: fig.delaxes(a) ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) # pick norm based on toggle button if toggleButton.value: norm = mpl_colors.SymLogNorm(10) else: norm = mpl_colors.Normalize() image = ax.imshow(michigan, cmap=cmap, norm=norm) fig.colorbar(image, extend = 'both') with out: clear_output(wait=True) display(fig) dropdown.observe(updateDropdown) updateDropdown(None) updateToggle(None) # - # Finally, note that we can now move around each widget individually on our plot: # + plt.close('all') # if you get a "too many figures open" warning fig = plt.figure(figsize=(8,8)) # add axes by hand like last week # order here is: left, bottom, width, height ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) image = ax.imshow(michigan, cmap='terrain') fig.colorbar(image, extend = 'both') out = ipywidgets.Output(layout=ipywidgets.Layout(height='500px', width = '500px')) #dropdown = ipywidgets.Dropdown(options=plt.colormaps()) # just so that we can start with 'terrain' dropdown = ipywidgets.Dropdown(options=plt.colormaps(), index=plt.colormaps().index('terrain')) toggleButton = ipywidgets.ToggleButton(value=False,description='Log Norm?') controls = ipywidgets.VBox([dropdown, toggleButton]) controls.layout.top = '200px' # UPDATED hbox=ipywidgets.HBox([out, controls]) display(hbox) # (2) update figure based on toggle on/off def updateToggle(change): if change is not None: # grab base colormap from other widget cmap=plt.colormaps()[dropdown.index] for a in fig.axes: fig.delaxes(a) ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) # pick norm based on toggle button if change['owner'].value: norm = mpl_colors.SymLogNorm(10) else: norm = mpl_colors.Normalize() image = ax.imshow(michigan, cmap=cmap, norm=norm) fig.colorbar(image, extend = 'both') with out: clear_output(wait=True) display(fig) toggleButton.observe(updateToggle) # (1) update figure based on dropdown # AND MAKE SURE WE ADD IN RESULTS OF TOGGLE BUTTON! def updateDropdown(change): if change is not None: cmap=plt.colormaps()[change['owner'].index] for a in fig.axes: fig.delaxes(a) ax = fig.add_axes([0.0, 0.15, 1.0, 0.8]) # pick norm based on toggle button if toggleButton.value: norm = mpl_colors.SymLogNorm(10) else: norm = mpl_colors.Normalize() image = ax.imshow(michigan, cmap=cmap, norm=norm) fig.colorbar(image, extend = 'both') with out: clear_output(wait=True) display(fig) dropdown.observe(updateDropdown) updateDropdown(None) updateToggle(None) # - # This seems a lot more complicated: why would we bother? # # 1. You don't have to if you don't want to! (At least for this week...) # 1. It gives us finer-grained control over where to place things when we start building up multi-panel dashboards. # # Taking some time to understand widgets in this context will help you design custom dashboards for your analysis & visualization needs. # # `bqplot`, which we will use next week, uses this sort of layout options to link figures with widgets, but makes this sort of design a lot easier then what we just did! 
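# As a hedged teaser for that finer-grained placement (not something we built above; it assumes a reasonably recent ipywidgets, roughly 7.5+, which ships `GridspecLayout`): widgets can be pinned to cells of a grid instead of being packed into `HBox`/`VBox` containers.

grid = ipywidgets.GridspecLayout(2, 2, height='120px')                             # 2x2 grid of widget slots
grid[0, 0] = ipywidgets.Dropdown(options=plt.colormaps(), description='cmap')      # top-left slot
grid[0, 1] = ipywidgets.ToggleButton(description='Log Norm?')                      # top-right slot
grid[1, :] = ipywidgets.Label('The bottom row spans both columns.')                # a slot can span a slice
grid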
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 # + # Read an Image img = cv2.imread('OpenCV Basics + KNN Project/dog.png') newImg = cv2.cvtColor(img,cv2.COLOR_BGR2RGB) # - from matplotlib import pyplot as plt plt.imshow(img) plt.show() plt.imshow(newImg) plt.show() print(img) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="xuggRSThLRon" # ## LabelEncoder # + id="aBktcQb-nv29" cities = ['London', 'Berlin', 'Berlin', 'New York', 'London'] from sklearn.preprocessing import LabelEncoder encoder = LabelEncoder() city_labels = encoder.fit_transform(cities) city_labels # - # ## OneHotEncoder # + id="LAwpjiOVqX0Q" from sklearn.preprocessing import OneHotEncoder one_hot_encoder = OneHotEncoder(sparse=False) city_labels = city_labels.reshape((5, 1)) one_hot_encoder.fit_transform(city_labels) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # File Handling # ### Write a Python program to read an entire text file. file = open('demo.txt') file_read = file.read() file_read # ### Write a Python program to read first n lines of a file. N = 5 with open("demo.txt", "r") as file: for i in range(N): line = next(file).strip() print(line) # ### Write a Python program to append text to a file and display the text. # + def Appendtext(fname): with open(fname,'a+') as f: f.write('appending line 1,\n ') f.write('appending line 2. ') f.close() # y=open('file1.txt') # print(y.read()) Appendtext('demo.txt') x= open('demo.txt') print(x.read()) # - # ### Write a Python program to read last n lines of a file. # + def read_lastnlines(fname,n): with open('demo.txt') as f: for line in (f.readlines() [-n:]): print(line) read_lastnlines('demo.txt',3) # - # ### Write a Python program to read a file line by line and store it into a list. 
with open("demo.txt") as f: lst = f.readlines() lst = [x.strip() for x in lst] lst # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import numpy as np ARE = [] with open('./inference_log/SE_prosody_Daudio_woZ_set05_b512_es750_L1_H200_dr0.9_03-20-15-42.txt') as f: ARE = f.readlines()[1].split(' ') ARE = [int(x) for x in ARE] TRE = [] with open('./inference_log/SE_nlp_Daudio_woZ_set05_b512_es128_L1_H400_G1_dr0.9_03-20-16-44.txt') as f: TRE = f.readlines()[1].split(' ') TRE = [int(x) for x in TRE] MDRE = [] with open('./inference_log/SE_multi_prosody_half_Daudio_woZ_set05_b256_esA750_LA1_HA400_drA0.9_esT128_LT1_HT200_G1_drT0.9_03-20-15-42.txt') as f: MDRE = f.readlines()[1].split(' ') MDRE = [int(x) for x in MDRE] MDREA = [] with open('./inference_log/SE_multi_attn_prosody_half_Daudio_woZ_set05_b256_esA750_LA1_HA400_drA0.9_esT128_LT1_HT100_G1_drT0.9_03-20-16-01.txt') as f: MDREA = f.readlines()[1].split(' ') MDREA = [int(x) for x in MDREA] label = np.load('../data/processed/IEMOCAP/four_category/audio_woZ_set05/test_label.npy') # + import numpy as np def plot_confusion_matrix(cm, target_names, title='Confusion matrix', cmap=None, color_bar=False, normalize=True): """ given a sklearn confusion matrix (cm), make a nice plot Arguments --------- cm: confusion matrix from sklearn.metrics.confusion_matrix target_names: given classification classes such as [0, 1, 2] the class names, for example: ['high', 'medium', 'low'] title: the text to display at the top of the matrix cmap: the gradient of the values displayed from matplotlib.pyplot.cm see http://matplotlib.org/examples/color/colormaps_reference.html plt.get_cmap('jet') or plt.cm.Blues normalize: If False, plot the raw numbers If True, plot the proportions Usage ----- plot_confusion_matrix(cm = cm, # confusion matrix created by # sklearn.metrics.confusion_matrix normalize = True, # show proportions target_names = y_labels_vals, # list of names of the classes title = best_estimator_name) # title of graph Citiation --------- http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html """ import matplotlib.pyplot as plt import matplotlib import numpy as np import itertools # https://matplotlib.org/examples/pylab_examples/fonts_demo.html font = {'family' : 'sans-serif', 'weight' : 'medium', 'size' : 15} matplotlib.rc('font', **font) accuracy = np.trace(cm) / float(np.sum(cm)) misclass = 1 - accuracy if cmap is None: cmap = plt.get_cmap('Blues') plt.figure(figsize=(8, 6)) plt.imshow(cm, interpolation='nearest', cmap=cmap, ) # plt.title(title) if color_bar: plt.colorbar() if target_names is not None: tick_marks = np.arange(len(target_names)) plt.xticks(tick_marks, target_names, rotation=45) # plt.xticks(tick_marks, target_names) plt.yticks(tick_marks, target_names) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] thresh = cm.max() / 1.5 if normalize else cm.max() / 2 for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): if normalize: plt.text(j, i, "{:0.2f}".format(cm[i, j]*100), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") else: plt.text(j, i, "{:,}".format(cm[i, j]), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") # plt.tight_layout() plt.ylabel('True class') # plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, 
misclass)) plt.xlabel('Predicted class') plt.show() # - plot_confusion_matrix(cm = np.array([[ 1098, 1934, 807], [ 604, 4392, 6233], [ 162, 2362, 31760]]), normalize = True, target_names = ['high', 'medium', 'low'], color_bar = True, title = "Confusion Matrix") data = ARE hist = np.zeros([4,4]) for i, j in zip(label, data): hist[i][j] += 1 plot_confusion_matrix(cm = hist, normalize = True, color_bar = True, target_names = ['angry', 'happy', 'sad', 'neutral'], title = "Confusion Matrix") data = TRE hist = np.zeros([4,4]) for i, j in zip(label, data): hist[i][j] += 1 plot_confusion_matrix(cm = hist, normalize = True, color_bar = True, target_names = ['angry', 'happy', 'sad', 'neutral'], title = "Confusion Matrix") data = MDRE hist = np.zeros([4,4]) for i, j in zip(label, data): hist[i][j] += 1 plot_confusion_matrix(cm = hist, normalize = True, color_bar = True, target_names = ['angry', 'happy', 'sad', 'neutral'], title = "Confusion Matrix") data = MDREA hist = np.zeros([4,4]) for i, j in zip(label, data): hist[i][j] += 1 plot_confusion_matrix(cm = hist, normalize = True, color_bar = True, target_names = ['angry', 'happy', 'sad', 'neutral'], title = "Confusion Matrix") # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.5.0 # language: julia # name: julia-1.5 # --- # + using LaTeXStrings using CxxWrap: StdVector using StatsPlots using LinearAlgebra using Corpuscles using OnlineStats using Distributed using JLD2 using FileIO gr() theme(:gruvbox_dark) # - pdgHistograms = Dict( :ss => CountMap(Int32), :bb => CountMap(Int32), :cc => CountMap(Int32), :gg => CountMap(Int32) ) bbHisto = load("bb.jld2")["pdgHistograms"][:bb] ccHisto = load("cc.jld2")["pdgHistograms"][:cc] ssHisto = load("ss.jld2")["pdgHistograms"][:ss] ggHisto = load("gg.jld2")["pdgHistograms"][:gg] # before normalization plot(bbHisto, fillcolor=nothing, linecolor=:red, label="bb") plot!(ccHisto, fillcolor=nothing, linecolor=:yellow, label="cc") plot!(ggHisto, fillcolor=nothing, linecolor=:cyan, label="gg") plot!(ssHisto, fillcolor=nothing, linecolor=:white, label="ss") pdgValues = sort(union([collect(keys(x)) for x in (bbHisto, ssHisto, ggHisto, ccHisto)]...)) data = [get(sample.value, pdg, 0)/sample.n for pdg in pdgValues, sample in (bbHisto, ccHisto, ggHisto, ssHisto)] labels = [Particle(Int(pdg)).name for pdg in pdgValues] unicodeLabels = [latexstring(Particle(Int(pdg)).latex) for pdg in pdgValues] # following the recipe from https://github.com/JuliaPlots/StatsPlots.jl ctg = repeat(["bb", "cc", "gg", "ss"], inner = length(pdgValues)) nam = repeat(labels, outer = 4) groupedbar(nam, data, group=ctg, xlabel="PDG Values", lw=0) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.5 64-bit (''base'': conda)' # name: python3 # --- # # PCM for tuning regularization parameters in Ridge regression # # Import necessary libraries import PcmPy as pcm import numpy as np import pickle import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from numpy import exp, sqrt from sklearn.linear_model import Ridge, LinearRegression from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV # + # Make the training data: N = 100 # Number of observations Q = 10 # Number of random effects regressors P = 10 # Number of variables Z = np.random.normal(0,1,(N,Q)) 
# Make random design matrix U = np.random.normal(0,1,(Q,P))*0.5 # Make random effects Y = Z @ U + np.random.normal(0,1,(N,P)) # Generate training data # Make testing data: Zt = np.random.normal(0,1,(N,Q)) Yt = Zt @ U + np.random.normal(0,1,(N,P)) # - # Build the datasets from the Data and condition vectors comp = np.array([0,0,0,0,0,0,1,1,1,1]) M1 = pcm.regression.RidgeDiag(comp, fit_intercept = True) M1.optimize_regularization(Z,Y) print('Estimated thetas:', M1.theta_.round(2)) print('Regularisation:', (exp(M1.theta_[-1])/exp(M1.theta_[:-1])).round(2)) # Now you can fit the model M1.fit(Z,Y) Yp = M1.predict(Zt) R2 = 1- np.sum((Yt-Yp)**2)/np.sum((Yt)**2) print('r2 :', R2.round(3)) Yp = M1.predict(Zt) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="y8tkP1OxeVM0" colab={"base_uri": "https://localhost:8080/"} outputId="5eceed00-e6b3-46fc-9327-157b91f4a589" import matplotlib.pyplot as plt import numpy as np import os import PIL import tensorflow as tf print(tf. __version__) from glob import glob import pathlib #Importing keras from tensorflow import keras from keras import layers, callbacks, utils, applications, optimizers from keras.models import Sequential, Model, load_model import cv2 # + colab={"base_uri": "https://localhost:8080/"} id="M5j3yjoRK9Gv" outputId="eb5571fb-b685-4703-9771-0d869164ded7" # # !pip install tflite-support # !pip install tensorflowjs # + id="QogpYNs0tAth" colab={"base_uri": "https://localhost:8080/"} outputId="17b9b604-a474-465e-edfb-3864aa12cc30" from google.colab import drive drive.mount('/content/drive') # + id="rKI0CP0ojJkI" #giving the path of the data set main data set ps-not to the sub directories image_path = "/content/drive/MyDrive/MOCK_DATA/" # + id="td8F0cjijjFg" colab={"base_uri": "https://localhost:8080/"} outputId="8cdc3930-735c-455b-de86-651748941bb3" a_path = "/content/drive/MyDrive/MOCK_DATA/A" a_length = len(list(glob(a_path + "/*.jpg"))) print(a_length) # + id="-ca5buc67nQW" colab={"base_uri": "https://localhost:8080/", "height": 217} outputId="79461dd9-7eb4-4622-99ea-14214458b76a" A = list(glob(a_path + "/*.jpg")) PIL.Image.open(str(A[0])) # + id="NlPFnc0cwxbK" colab={"base_uri": "https://localhost:8080/"} outputId="6891e219-7fce-4894-d517-f2e569e72b90" #Image shape image = cv2.imread(A[10]) print(image.shape) # + id="sGLC_aXCj93H" colab={"base_uri": "https://localhost:8080/"} outputId="29998724-d786-4ac7-94ef-16042ab0dc0d" all_images = len(list(glob(image_path + "*/*.jpg"))) print(all_images) # + id="DJWLOS_F82AV" batch_size = 32 img_height = 96 img_width = 96 # + id="Vuv-b3GVNnRq" colab={"base_uri": "https://localhost:8080/"} outputId="b451ffee-5e22-4fd4-dbb0-50e70616274f" #80% trainnin data train_ds = tf.keras.utils.image_dataset_from_directory( image_path, validation_split=0.2, subset="training", seed=123, image_size=(img_height, img_width), batch_size=batch_size) # + id="DcsmGjnXNyZf" colab={"base_uri": "https://localhost:8080/"} outputId="8ff5f28d-44e7-4dc6-e20a-9138177b4585" #20 training val_ds = tf.keras.utils.image_dataset_from_directory( image_path, validation_split=0.2, subset="validation", seed=123, image_size=(img_height, img_width), batch_size=batch_size) # + id="3MWaBHU0N-xG" colab={"base_uri": "https://localhost:8080/"} outputId="bd470d3c-e918-4b60-8eeb-caec97d1037c" #to confirm the classes name 
class_names = train_ds.class_names size = (len(list(class_names))) print(size) print((list(class_names))) # + id="2aXF4LHw3Eg5" f= open("Labels.txt","w+") for i in range(size): f.write(class_names[i] + "\n") f.close() # + id="1BLwbRKQeQa5" colab={"base_uri": "https://localhost:8080/"} outputId="2f74bedb-a8c9-43a5-ff76-ae1e00ccd28c" for image_batch, labels_batch in train_ds: print(image_batch.shape) print(labels_batch.shape) break # + id="8GEOTUdA7TYM" data_augmentation = keras.Sequential( [ layers.RandomFlip("horizontal", input_shape=(img_height, img_width, 3)), layers.RandomRotation(0.1), layers.RandomZoom(0.1), ] ) # + id="0uKiph5p8BxM" colab={"base_uri": "https://localhost:8080/", "height": 632} outputId="72fb4727-d241-4650-9951-c311944f0da0" plt.figure(figsize=(10, 10)) for images, _ in train_ds.take(2): for i in range(9): augmented_images = data_augmentation(images) ax = plt.subplot(3, 3, i + 1) plt.imshow(augmented_images[0].numpy().astype("uint8")) plt.axis("off") # + id="gnviydXxmNk1" num_classes = len(class_names) model = Sequential([ data_augmentation, layers.Rescaling(1./255), layers.Conv2D(16, (3, 3), strides=(1, 1), padding="same", activation="relu"), layers.MaxPooling2D(), layers.Conv2D(32, (3, 3), strides=(1, 1), padding="same", activation="relu"), layers.MaxPooling2D(), layers.Conv2D(64, (3, 3), strides=(1, 1), padding="same", activation="relu"), layers.MaxPooling2D(), layers.Dropout(0.2), layers.Flatten(), layers.Dense(128, activation='relu'), layers.Dense(num_classes) ]) #https://www.tensorflow.org/tutorials/images/classification#compile_the_model #https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/classification.ipynb#scrollTo=SbtTDYhOHZb6 # https://machinelearningmastery.com/pooling-layers-for-convolutional-neural-networks/ # + colab={"base_uri": "https://localhost:8080/"} id="fI3qztmp46pL" outputId="a1cc714f-3d12-446f-eb6b-f649c4f4fb61" model.summary() # + id="jb1uYefoSnqW" model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) # + id="Ml4jxPP3S7WZ" colab={"base_uri": "https://localhost:8080/"} outputId="e09986d1-8e18-4951-bb7d-b1125424f51d" epochs=5 history = model.fit( train_ds, validation_data=val_ds, epochs=epochs ) # + id="aYxail-qTwGX" colab={"base_uri": "https://localhost:8080/", "height": 499} outputId="1129037d-e811-4256-be7d-cb349c115606" acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs_range = range(epochs) plt.figure(figsize=(8, 8)) plt.subplot(1, 2, 1) plt.plot(epochs_range, acc, label='Training Accuracy') plt.plot(epochs_range, val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1, 2, 2) plt.plot(epochs_range, loss, label='Training Loss') plt.plot(epochs_range, val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show() # + id="PXc8E3CfUZV-" import tensorflowjs as tfjs tfjs.converters.save_keras_model(model, "mockModel" ) # + id="tpJBYHeYBkhA" # + id="9UrXivFKWJeb" colab={"base_uri": "https://localhost:8080/"} outputId="2cec3e65-9336-4771-a81b-2f45b6c13b53" # TF_LITE_MODEL_FILE_NAME = "tf_lite_model.tflite" # tf_lite_converter = tf.lite.TFLiteConverter.from_keras_model(model) # tflite_model = tf_lite_converter.convert() # tflite_model_name = TF_LITE_MODEL_FILE_NAME # open(tflite_model_name, 
"wb").write(tflite_model) # + [markdown] id="youzk6H8Uvo2" # Adding meta data into the tflite model # + id="l1MKEjBXKAcr" # from tflite_support import flatbuffers # from tflite_support import metadata as _metadata # from tflite_support import metadata_schema_py_generated as _metadata_fb # + id="XlDGzsYBKYOX" # # Creates model info. # model_meta = _metadata_fb.ModelMetadataT() # model_meta.name = "sign classification for 5 letters" # model_meta.description = ("Identify the most prominent object in the " # "image from a set of 5 categories such as " # "A, B, C, D,E ") # model_meta.version = "v1" # model_meta.author = "TensorFlow" # model_meta.license = ("Apache License. Version 2.0 " # "http://www.apache.org/licenses/LICENSE-2.0.") # + id="2wfpl35IKnbM" # # Creates input info. # input_meta = _metadata_fb.TensorMetadataT() # # Creates output info. # output_meta = _metadata_fb.TensorMetadataT() # + id="b-eBTElZ-tzf" # input_meta.name = "image" # input_meta.description = ( # "Input image to be classified. The expected image is {0} x {1}, with " # "three channels (red, blue, and green) per pixel. Each value in the " # "tensor is a single byte between 0 and 1".format(96, 96)) # input_meta.content = _metadata_fb.ContentT() # input_meta.content.contentProperties = _metadata_fb.ImagePropertiesT() # input_meta.content.contentProperties.colorSpace = ( # _metadata_fb.ColorSpaceType.RGB) # input_meta.content.contentPropertiesType = ( # _metadata_fb.ContentProperties.ImageProperties) # input_normalization = _metadata_fb.ProcessUnitT() # input_normalization.optionsType = ( # _metadata_fb.ProcessUnitOptions.NormalizationOptions) # input_normalization.options = _metadata_fb.NormalizationOptionsT() # input_normalization.options.mean = [96] # input_normalization.options.std = [96] # input_meta.processUnits = [input_normalization] # input_stats = _metadata_fb.StatsT() # input_stats.max = [1] # input_stats.min = [0] # input_meta.stats = input_stats # + id="n0wXxYlo-81V" # # Creates output info. # output_meta = _metadata_fb.TensorMetadataT() # output_meta.name = "probability" # output_meta.description = "Probabilities of the 5 labels respectively." # output_meta.content = _metadata_fb.ContentT() # output_meta.content.content_properties = _metadata_fb.FeaturePropertiesT() # output_meta.content.contentPropertiesType = ( # _metadata_fb.ContentProperties.FeatureProperties) # output_stats = _metadata_fb.StatsT() # output_stats.max = [1.0] # output_stats.min = [0.0] # output_meta.stats = output_stats # label_file = _metadata_fb.AssociatedFileT() # label_file.name = os.path.basename("/content/Labels.txt") # label_file.description = "Labels for objects that the model can recognize." # label_file.type = _metadata_fb.AssociatedFileType.TENSOR_AXIS_LABELS # output_meta.associatedFiles = [label_file] # + id="3ZX-jJee_O3i" # # Creates subgraph info. 
# subgraph = _metadata_fb.SubGraphMetadataT() # subgraph.inputTensorMetadata = [input_meta] # subgraph.outputTensorMetadata = [output_meta] # model_meta.subgraphMetadata = [subgraph] # b = flatbuffers.Builder(0) # b.Finish( # model_meta.Pack(b), # _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER) # metadata_buf = b.Output() # + id="Iedu0vM6_QsL" # populator = _metadata.MetadataPopulator.with_model_file("/content/tf_lite_model.tflite") # populator.load_metadata_buffer(metadata_buf) # populator.load_associated_files(["/content/Labels.txt"]) # populator.populate() # + colab={"base_uri": "https://localhost:8080/"} id="9qYtc2dS5j7n" outputId="85d62cb7-90ba-485e-adbb-7f18167c56c9" # interpreter = tf.lite.Interpreter(model_path = TF_LITE_MODEL_FILE_NAME) # input_details = interpreter.get_input_details() # output_details = interpreter.get_output_details() # print("Input Shape:", input_details[0]['shape']) # print("Input Type:", input_details[0]['dtype']) # print("Output Shape:", output_details[0]['shape']) # print("Output Type:", output_details[0]['dtype']) # + [markdown] id="ubM-TYwZVRXj" # Input is expected in 96 * 96 RGB # + id="3-s2mwqZk3fQ" # + [markdown] id="qa7wnVWYVQes" # # + id="P7lWby49UiK_" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="KBInnj6mb16z" # # A Hello World to Tensorflow # - Tensorflow is an opensource library which is mostly used in the machine learning , itis also a math library which is based on dataflow graphs and differential programming paradigms. # - According to the wikipedia definition
*Differentiable programming is a programming paradigm in which a numeric computer program can be differentiated throughout via automatic differentiation. This allows for gradient-based optimization of parameters in the program, often via gradient descent. Differentiable programming has found use in a wide variety of areas, particularly scientific computing and artificial intelligence.* # # # --- # # # # What is different with machine learning? # In a conventional programming setup, a programmer breaks a computational problem into simpler sub-problems, which are then coded using logic flows. # # # # !Todo: the figure for the conventional programming # # The machine learning way of solving the problem is somewhat inverted: the machine learning system takes mappings of input/output pairs as its input, learns the pattern in those mappings, and outputs the rules. # # - Let us break this down. Consider the following two arrays: # # # ``` # x = [0,1,2,3,4,5] # y = [2,4,6,8,10,12] # ``` # Taking a closer look at these values, we can see that the relationship between y and x is ```y = 2x + 2``` # In the conventional setup we would code it like this: # # + id="GaTXCrLu_xm3" colab={"base_uri": "https://localhost:8080/"} outputId="d40926c4-6b4d-4c78-c964-7c0a5340a9b4" x = [0,1,2,3,4,5] rule = "2*x + 2" y = [] # in order to figure out the y values using this rule for every_element in x: y.append(every_element*2 + 2) print("y value is:", y) # + [markdown] id="zey2lJmmg0qb" # Let us wrap this in a function which takes an x value and returns the corresponding y value # + id="ehakFqt4gtOr" colab={"base_uri": "https://localhost:8080/"} outputId="09ec5d96-7b89-4e5a-fbe5-63ff7f72ab2e" def calculate_y_value(x): return 2*x + 2 # Let us calculate the value for x = 10 print("The value of y when x = 10 is:", calculate_y_value(10)) # + [markdown] id="uElPkZRphWtp" # #### **Let us do the same calculation the TensorFlow way** # The program breaks down as follows: # - Create a machine architecture which will take the input/output mappings. # - Learn the mapping rules. # - Use the learned rule to predict the value of y for an unseen x value. # + id="UC4Z8QV_hSki" colab={"base_uri": "https://localhost:8080/"} outputId="4b7e0b7b-6f8e-42af-b694-a98056de4d34" # Step 1: Creating a learning machine architecture to learn the mappings import tensorflow as tf print("Tensorflow version we are using", tf.__version__) model = tf.keras.Sequential(tf.keras.layers.Dense(units=1, input_shape=[1])) # here units gives the shape of the output value # input_shape is the shape of a single input x value # + [markdown] id="cICjcVpfjO-p" # - Our machine has to learn a mapping from one x value to one y value. # - Thus the machine has units=1 (one output) and input_shape=[1] (one input x value).
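# As a small aside (not part of the original notebook), it can help to see that this single
# Dense unit computes nothing more than y = w * x + b, i.e. it holds exactly two trainable
# numbers. Learning the mapping therefore just means nudging w towards 2 and b towards 2.
# +
kernel, bias = model.layers[0].get_weights()   # randomly initialised before training
print("kernel shape:", kernel.shape, "bias shape:", bias.shape)   # (1, 1) and (1,)
print("untrained guess for x=10:", float(kernel[0, 0] * 10 + bias[0]))
# -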
# + id="M6W4gmn6ic-z" model.compile(optimizer = 'sgd', loss='mean_squared_error') # + [markdown] id="Ry845ab_kNV9" # - Imagine the optimizer and the loss function as internal mechanism the machine has to follow inorder to learn the best mapping between the provided input and output values # - Let us ask the machine with the above setup to learn the mappings # + id="IKLJXYNBkNCa" model.fit(x,y,epochs=500) # The output is cleared to increase the readability of the notebook feel free to run the notebook # + id="nxXmAXccklkb" colab={"base_uri": "https://localhost:8080/"} outputId="da2a3dd0-9b82-4701-d432-c4dba63d0013" print("y value when x is 10:",model.predict([10])[0][0]) # + [markdown] id="OQz0RTG6lLcP" # Let us spend some time in understanding some of the words used above like epoch, predict and fit # # **Epoch** # - Imagine the machine as a student who learns a new skill. Epoch is a measure of number of time he repeatedly does the same task of learning adjusting and correcting the mistakes every time. # - In a normal case imagine like the more he learns the more he captures the mapping between the input and output.
# {Try running the above cells with fewer epochs, say 100, and then check the model predict output.} # # **Fit** # - Fit is the equivalent of asking the machine to learn the mapping. # # **Predict** # - Predict is the way of asking the machine (here the machine is named model) to predict the value of y for a given x value, based on what it has learned. # # **Optimizer and Loss** # # - Imagine loss as the numerical measure of the difference between the expected output and the actual output. # - In the model fit output above, you can see that the loss during the initial epochs is larger and that successive epochs result in lower loss. # - This reduction in loss over successive epochs is achieved with the help of the optimizer function. The optimizer adjusts the machine's parameters with the goal of reducing the loss. # # **Why is the output value not exactly 22, as in the conventional programming method?** # - This is because the machine treats everything probabilistically, so it gives an output which is close to the expected value rather than exact. # - When the machine has a large number of input/output pairs to learn from, it will get very close to the exact values. # # ##### Thus we have finished the Hello World of TensorFlow; let us see how we can use this to solve some interesting problems too # + id="h1Ym1f9HqC1V" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python36 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="Yq8grbXZuqnf" # Clustering # ====== # # When a data set doesn't have labels we can use unsupervised learning to find some kind of structure in the data - allowing us to discover patterns or groupings. # # Cluster analysis is a method of finding groupings, known as clusters, in datasets. As the data sets are unlabelled, cluster analysis tries to group similar examples using the examples' features. # # K-means clustering lives up to its name - it separates examples into k clusters (so if k is 5, it will divide the examples into 5 clusters) and it partitions the examples by the average (mean) of the clusters. # + [markdown] id="8lUv3QNnuqni" # Step 1 # ----- # # In this exercise we will look at using k-means clustering to categorise a few different datasets. # # Let's start by first creating three clusters. # # #### Run the code below to set up the graphing features. # + id="Ws1lCzGVuqnj" # This sets up the graphs import warnings warnings.filterwarnings("ignore") import matplotlib.pyplot as graph # %matplotlib inline graph.rcParams['figure.figsize'] = (15,5) graph.rcParams["font.family"] = 'DejaVu Sans' graph.rcParams["font.size"] = '12' graph.rcParams['image.cmap'] = 'rainbow' # + [markdown] id="00MDXQVKuqnp" # ### In the cell below replace: # #### 1. `` with `cluster_data` # #### 2. `` with `output` # #### and then __run the code__. # + id="mnw-BkQduqnq" outputId="09d4cff2-c005-4bba-fb2a-a457f35798f8" colab={"base_uri": "https://localhost:8080/", "height": 265} # Let's make some data!
import numpy as np from sklearn import datasets ### # REPLACE WITH cluster_data AND WITH output ### cluster_data, output = datasets.make_classification(n_samples = 500, n_features = 2, n_informative = 2, n_redundant = 0, n_repeated = 0, n_classes = 3, n_clusters_per_class = 1, class_sep = 1.25, random_state = 6) ### # Let's visualise it graph.scatter(cluster_data.T[0], cluster_data.T[1]) graph.show() # + [markdown] id="7-NPnNi8uqnw" # Now let's see how k-means performs on a dataset like this! # + [markdown] id="-r1rgAtXuqnx" # ### In the cell below replace: # #### 1. `` with `KMeans` # #### 2. `` with `fit` # #### 3. `` with `k_means.cluster_centers_` # #### 4. `` with `k_means.labels_` # #### and then __run the code__. # + id="7vOpNA9muqny" outputId="9c046cfd-5e18-482c-8920-dcf1ac5f1aa0" colab={"base_uri": "https://localhost:8080/", "height": 265} from sklearn.cluster import KMeans ### # REPLACE WITH KMeans ### k_means = KMeans(n_clusters=3) ### ### # REPLACE WITH fit ### k_means.fit(cluster_data) ### # Let's visualise it ### # REPLACE BELOW WITH k_means.cluster_centers_ ### for mean in k_means.cluster_centers_: graph.plot(mean[0], mean[1], 'ko', marker = '+', markersize = 20) ### ### # REPLACE BELOW WITH k_means.labels_ ### graph.scatter(cluster_data.T[0], cluster_data.T[1], c = k_means.labels_) ### graph.show() # + [markdown] id="xfVecujquqn2" # It performs rather well, by the looks of it! But we already knew that it had three clusters, sometimes it might not be so clear. # + [markdown] id="Kbvop2O7uqn3" # ## Step 2 # # Let's generate another dataset in which it may be a little less obvious how many classes it contains. # # #### Replace `` with `datasets.make_classification` and run the code. # + id="P4nOBd9wuqn3" outputId="11dae094-15bf-4633-86a9-48c32ce73d87" colab={"base_uri": "https://localhost:8080/", "height": 265} ### # REPLACE BELOW WITH datasets.make_classification ### cluster_data, output = datasets.make_classification(n_samples = 500, n_features = 2, n_informative = 2, n_redundant = 0, n_repeated = 0, n_classes = 4, n_clusters_per_class = 1, class_sep = 1.25, random_state = 6) ### graph.scatter(cluster_data.T[0], cluster_data.T[1]) graph.show() # + [markdown] id="ynbcnpUiuqn8" # In instances where we do not know how many classes to expect, it is handy to run k-means multiple times and compare how the data looks when divided up into a differing number of classes. Let's try that now. # # #### Replace `` with `n` and run the code # + id="WLCyZ7F0uqn9" outputId="9c3528cd-7c61-4413-b52a-f1226cc22620" colab={"base_uri": "https://localhost:8080/", "height": 1000} ### # REPLACE BELOW WITH n ### for n in range(2,6): k_means = KMeans(n_clusters = n).fit(cluster_data) ### for mean in k_means.cluster_centers_: graph.plot(mean[0], mean[1], 'ko', marker = '+', markersize = 20) graph.scatter(cluster_data.T[0], cluster_data.T[1], c = k_means.labels_) graph.show() # + [markdown] id="TeAarsEtuqoC" # Which one do you think best splits the data? # + [markdown] id="8dSiridfuqoC" # Step 3 # ======== # # K-means clustering performs well enough on clustered data like that, but let's try it out on a dataset that is not so linear. # # #### Replace `` with `make_circles` and run the code. 
# + id="PH14aCm7uqoE" outputId="bf71243b-6e70-4522-8cb8-0ed9e2c5d724" colab={"base_uri": "https://localhost:8080/", "height": 265} ### # REPLACE BELOW WITH make_circles ### ring_data, target = datasets.make_circles(n_samples = 500, factor = .5, noise = 0.05, random_state = 6) ### graph.scatter(ring_data.T[0], ring_data.T[1], c = target) graph.show() # + [markdown] id="OsfuSOEBuqoH" # We can clearly distinguish two "clusters", that is, the two rings of datapoints. # # Let's see how k-means handles a dataset like this. # # #### Replace `` with `ring_data` and run the code # + id="oXifzHPLuqoI" outputId="3db10616-6718-45c7-cd47-74ac06d084d5" colab={"base_uri": "https://localhost:8080/", "height": 265} ### # REPLACE BELOW WITH ring_data ### k_means = KMeans(n_clusters = 2).fit(ring_data) ### for mean in k_means.cluster_centers_: graph.plot(mean[0], mean[1], 'ko', marker = '+', markersize = 20) graph.scatter(ring_data.T[0], ring_data.T[1], c = k_means.labels_) graph.show() # + [markdown] id="Ueg5nG2suqoL" # K-means clearly has difficulty solving this. # # As we are using it, there is no way for k-means to place two means to label this data set correctly. # + [markdown] id="H-lxpin6uqoM" # Step 4 # ------ # # But, we can try another way. We can use another feature - distance away from the centre. # # Let's see if k-means is able to classify the two data clusters with this new feature. # # #### Replace `` with `np.sqrt` and run the code. # + id="41rgcgmKuqoM" outputId="398a4927-13ff-4ac2-8818-73e582ee5335" colab={"base_uri": "https://localhost:8080/", "height": 265} distance_from_center = [] for sample in ring_data: ### # REPLACE BELOW WITH np.sqrt ### z = 4 * np.sqrt(sample[0]**2 + sample[1]**2) ### distance_from_center.append(z) # Make it a three-dimensional dataset ring_data = np.concatenate((ring_data, np.array(distance_from_center).reshape(-1, 1)), axis = 1) graph.scatter(ring_data.T[0], ring_data.T[1], c = ring_data.T[2]) graph.show() # + [markdown] id="Wze9ZK1fuqoQ" # Looks like it will work, so let's plot all three features. # # ### In the cell below replace: # #### 1. `` with `projection='3d'` # #### 2. `` with `ring_data.T[2]` # #### and then __run the code__. # + id="v8H24MQ-uqoR" outputId="589e631a-0145-4f60-8c5b-3236c7e89fa2" colab={"base_uri": "https://localhost:8080/", "height": 248} from mpl_toolkits.mplot3d import Axes3D fig = graph.figure() ### # REPLACE BELOW WITH projection='3d' ### ax = fig.add_subplot(111, projection='3d') ### ### # REPLACE BELOW WITH ring_data.T[2] ### ax.scatter(ring_data.T[0], ring_data.T[1], ring_data.T[2], c = target) ### ax.view_init(30, 45) graph.show() # + [markdown] id="mj0jXXAxuqoX" # Let's see how k-means deals with the data now that it has 3 features! # # ### In the cell below replace: # #### 1. `` with `ring_data` # #### 2. `` with `k_means.labels_` # #### and then __run the code__. 
# + id="tlgM4L29uqoY" outputId="7d837773-1f9e-40c8-e93a-f6bb3e667e3d" colab={"base_uri": "https://localhost:8080/", "height": 248} ### # REPLACE BELOW WITH ring_data ### k_means = KMeans(n_clusters = 2, random_state = 0).fit(ring_data) ### fig = graph.figure() ax = fig.add_subplot(111, projection='3d') for mean in k_means.cluster_centers_: ax.scatter(mean[0], mean[1], mean[2], c='black', marker='+', s=50) # plot the cluster centres ### # REPLACE BELOW WITH k_means.labels_ ### ax.scatter(ring_data.T[0], ring_data.T[1], ring_data.T[2], c = k_means.labels_) ### # We can plot a hyperplane that separates the two rings hp_X, hp_Y = np.array(np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))) hp_Z = np.full(hp_X.shape, np.abs(k_means.cluster_centers_[0][2] - k_means.cluster_centers_[1][2] / 2)) ax.plot_wireframe(hp_X, hp_Y, hp_Z, rstride = 1, cstride = 1, color = 'k', linewidth = 1, linestyle = 'solid', alpha = 0.5) ax.view_init(20, 45) ax.set_zlabel('new axis') graph.show() # + [markdown] id="pWoReCC1uqoc" # You can see the `+` that indicates the center of the clusters. Looks good! # # Step 5 # ------ # # Some data we cannot manipulate like that. Let's have a look at a different type of data distribution. # # #### Replace `` with `datasets.make_moons` and run the code. # + id="9e62SdXQuqod" outputId="26bc9867-f481-446f-92a8-206b0257ce38" colab={"base_uri": "https://localhost:8080/", "height": 265} ### # REPLACE BELOW WITH datasets.make_moons ### crescent_data, output = datasets.make_moons(n_samples = 500, noise = .05) ### graph.scatter(crescent_data.T[0], crescent_data.T[1], c = target) graph.show() # + [markdown] id="WaCjzelzuqoi" # Let's try fitting it. # # #### Replace `` with `crescent_data` and run the code. # + id="GLeKkwaxuqol" outputId="d0f4d389-88b3-4ae5-82d3-46a09581740c" colab={"base_uri": "https://localhost:8080/", "height": 265} # Below we run KMeans on crescent_data using n_clusters = 2 ### # REPLACE WITH crescent_data ### k_means = KMeans(n_clusters = 2).fit(crescent_data) ### for mean in k_means.cluster_centers_: graph.plot(mean[0], mean[1], 'ko', marker = '+', markersize = 20) graph.scatter(crescent_data.T[0], crescent_data.T[1], c = k_means.labels_) graph.show() # + [markdown] id="Vj_N0ESWuqor" # Again, a similar issue as with the circle data. # # But k-means is just one method for clustering, other methods don't have quite the same restrictions as k-means. # # Step 6 # ------ # # Spectral clustering is a clustering method that aims to cluster data that is in some way connected - but not necessarily distributed. # # ### In the cell below replace: # #### 1. `` with `SpectralClustering` # #### 2. `` with `crescent_data` # #### 3. `` with `labels_` # #### and then __run the code__. # + id="sgZxa2p4uqor" outputId="5cbe9a64-5a88-41d5-e9f2-65766711d2f6" colab={"base_uri": "https://localhost:8080/", "height": 265} from sklearn import cluster ### # REPLACE BELOW WITH SpectralClustering ### spectral = cluster.SpectralClustering(n_clusters = 2, eigen_solver = 'arpack', affinity = 'nearest_neighbors') ### ### # REPLACE BELOW WITH crescent_data ### labels_ = spectral.fit_predict(crescent_data) ### ### # REPLACE BELOW WITH labels_ ### graph.scatter(crescent_data.T[0], crescent_data.T[1], c = labels_) ### graph.show() # + [markdown] id="slzL2Vkyuqou" # ### In the cell below replace: # #### 1. `` with `SpectralClustering` # #### 2. `` with `ring_data` # #### 3. `` with `labels_` # #### and then __run the code__. 
# + id="6BPLOYcjuqov" outputId="6944eff4-7d26-4c86-84fc-2ce3ff2fdac2" colab={"base_uri": "https://localhost:8080/", "height": 265} # Let's use spectral clustering on the ring_data ### # REPLACE BELOW WITH SpectralClustering ### spectral = cluster.SpectralClustering(n_clusters = 2, eigen_solver = 'arpack', affinity = 'nearest_neighbors') ### ### # REPLACE BELOW WITH ring_data ### labels_ = spectral.fit_predict(ring_data) ### ### # REPLACE BELOW WITH labels_ ### graph.scatter(ring_data.T[0], ring_data.T[1], c = labels_) ### graph.show() # + [markdown] id="_51esvmiuqoz" # Does it classify the data in the correct clusters? # + [markdown] id="uLrZTlnXuqo0" # ## Conclusion # # We have learnt two important clustering methods, k-means and spectral clustering, and used them on a variety of datasets where one might be more appropriate to use than the other. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np from numba import jit, prange from dask import delayed, compute import matplotlib.pyplot as plt # %matplotlib inline # ## Fast boostrap resampling when there are multiple datasets (of same size) to boostrap def multi_bootstrap(data, boots): """ Keyword arguments: data -- numpy multi-dimentional array boot -- number of bootstraps """ designs = data.shape[0] samples = data.shape[1] to_return = np.empty((designs, boots)) for design in range(designs): to_return[design:design+1] = bootstrap(data[design], boots) return to_return @jit(nopython=False) def multi_bootstrap_jit(data, boots): """ Keyword arguments: data -- numpy multi-dimentional array boot -- number of bootstraps """ designs = data.shape[0] samples = data.shape[1] to_return = np.zeros((designs, boots)) for design in range(designs): to_return[design:design+1] = bootstrap(data[design], boots) return to_return @jit(nopython=False) def multi_bootstrap_par(data, boots): """ Keyword arguments: data -- numpy multi-dimentional array boot -- number of bootstraps """ designs = data.shape[0] samples = data.shape[1] to_return = np.empty((designs, boots)) for design in range(designs): to_return[design:design+1] = bootstrap_par(data[design], boots) return to_return @jit(nopython=True) def bootstrap(data, boots): """ Create bootstrap datasets that represent the distribution of the mean. Returns a numpy array containing the bootstrap datasets Keyword arguments: data -- numpy array of systems to boostrap boots -- number of bootstrap (default = 1000) """ bs_data = np.empty(boots) for b in range(boots): total=0 for s in range(data.shape[0]): total += data[np.random.randint(0, data.shape[0])] bs_data[b] = total / data.shape[0] return bs_data @jit(nopython=True, parallel=True) def bootstrap_par(data, boots): """ Create bootstrap datasets that represent the distribution of the mean. 
Returns a numpy array containing the bootstrap datasets Keyword arguments: data -- numpy array of systems to boostrap boots -- number of bootstrap (default = 1000) """ bs_data = np.empty(boots) for b in prange(boots): total=0 for s in range(data.shape[0]): total += data[np.random.randint(0, data.shape[0])] bs_data[b] = total / data.shape[0] return bs_data # ## Small Problem 10 initial datasets sample = np.arange(100).reshape(10, 10) sample.shape # %timeit x = multi_bootstrap(sample, 1000) # %timeit x = multi_bootstrap_jit(sample, 1000) # %time x = multi_bootstrap_par(sample, 1000) # multi-core processing improves performance even with a problem as small as 10 initial datasets # ## Larger Problem. 1000 initial datasets sample = np.arange(10000).reshape(1000, 10) sample.shape # NumPy Implementation # %%timeit x = multi_bootstrap(sample, 1000) # NumPy and Numba JIT implementation # %%timeit x = multi_bootstrap_jit(sample, 1000) # NumPy, JIT and Parallel (multi-core) Implementation # %%timeit x = multi_bootstrap_par(sample, 1000) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # # Implementing an anti-aliasing technique with a model # # # When performing nonlinear computations on discrete grids, it is important to # choose the time window and the number of sample points large enough to prevent # aliasing errors [B2001]_. This example demonstrates how to # implement an anti-aliasing technique directly with a model. # # For a nonlinearity of order three, as for the cubic Kerr nonlinearity, an easy # anti-aliasing procedure is to extend the spectrum by a factor of two and to # proceed by zero-padding [HCL2008]_ [FCGK2005]_. Therefore, in each step, after # the nonlinear term is evaluated in the time domain and transformed to the # Fourier domain, the upper half of the spectrum is set to zero. # # Here, the symmetric split-step Fourier method is used and the propagation of a # fourth-order soliton is considered. # # .. codeauthor:: <> # # We first import the functionality needed to perform the sequence of numerical # experiments: # # import sys; sys.path.append('../../') import numpy as np import numpy.fft as nfft from fmas.models import ModelBaseClass from fmas.config import FTFREQ, FT, IFT, C0 from fmas.solver import SySSM from fmas.grid import Grid from fmas.tools import plot_evolution # Next, we implement a model for the nonlinear Schrödinger equation. In # particular, we here consider the standard nonlinear Schrödinger equation, # given by # # \begin{align}\partial_z u = -i \frac{\beta_2}{2}\partial_t^2 u + i\gamma |u|^2 u,\end{align} # # wherein $u = u(z, t)$ represents the slowly varying pulse envelope, # $\beta_2=-1$ is the second order dispersion parameter, and # $\gamma=1$ is the nonlinear parameter. # As discussed above, we here implement a simple technique allowing # to compute the nonlinear term free of aliasing errors. 
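# Before looking at the full model class below, here is a minimal standalone sketch (not part
# of the original example) of the de-aliasing step described above: on an angular-frequency
# grid `w`, every spectral component with |w| at or above half the maximum frequency is set to
# zero. The names `w` and `uw` mirror those used in the class that follows.
# +
import numpy as np

w = 2 * np.pi * np.fft.fftfreq(16, d=0.1)    # toy angular-frequency grid
uw = np.ones_like(w, dtype=complex)          # toy spectrum, all modes populated

# keep only the lower half of the spectrum; zero out the rest
uw_filtered = np.where(np.abs(w) < 0.5 * w.max(), uw, 0j)
print(np.count_nonzero(uw_filtered), "of", uw.size, "modes kept")
# -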
# # class NSE(ModelBaseClass): def __init__(self, w, b2 = -1.0, gamma = 1.): super().__init__(w, 0.5*b2*w*w) self.gamma = gamma # ANTI-ALIASING FILTER SETTING UPPER HALF OF SPECTRUM TO ZERO self._de_alias = lambda uw: np.where(np.abs(w) < 0.5 * w.max(), uw, 0j) @property def Lw(self): return 1j*self.beta_w def Nw(self, uw): ut = IFT(uw) return self._de_alias(1j*self.gamma*FT(np.abs(ut)**2*ut)) # Next, we initialize the computational domain and use a symmetric split-step # Fourier method to propagate a single third-order soliton for six soliton # periods. # For this numerical experiment, the extend of the time domain and the number # of sample points is chosen large enough to allow for a zero padding # anti-aliasing technique without cropping important parts of the spectrum. # # # + grid = Grid( t_max = 34., t_num = 2**12) t, w = grid.t, grid.w model = NSE(w, b2=-1., gamma=1.) u_0t = 4./np.cosh(t) solver = SySSM(model.Lw, model.Nw) solver.set_initial_condition(w, FT(u_0t)) solver.propagate(z_range = 3*np.pi/2, n_steps = 10000, n_skip = 50) z, utz = solver.z_, solver.utz plot_evolution( solver.z, grid.t, solver.utz, t_lim = (-5,5), w_lim = (-60,60), DO_T_LOG=False) # - # **References:** # # .. [B2001] J.P. Boyd, Chebychev and Fourier Spectral Methods, Dover, New York (2001) # # .. [HCL2008] , , , A pseudospectral Fourier # method for a 1D incompressible two-fluid model, Int. J. Numer. # Meth. Fluids 58 (2008) 639, https://doi.org/10.1002/fld.1772 # # .. [FCGK2005] , , , , An efficient # model for three-dimensional surface wave simulations Part I: Free # space problems, J. Comp. Phys. 205 (2005) 665, # https://doi.org/10.1016/j.jcp.2004.11.027 # # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # IMOVE Time-Series and Big-Data Workshop # In this workshop we will use the time-series functionality of [Pandas](https://pandas.pydata.org/docs/) and [Xarray](http://xarray.pydata.org/en/stable/) to explore some data collected by the [Ocean Observatories Intiative](https://oceanobservatories.org/). Hopefully we will also get a chance to explore [Dask](https://docs.dask.org/en/latest/) and [Dask Delayed](https://docs.dask.org/en/latest/delayed.html) functions to parallelize a data analysis workflow in the cloud. We will be working on the [OOICloud](https://www.ooicloud.org/) [Pangeo](http://pangeo.io/) instance. Further information on using Python to analyze Earth science datasets can be found in the book [Earth and Environmental Data Science](https://earth-env-data-science.github.io/intro) which I have been using to teach Research Computing in the Earth Sciences this semester. # ## Bottom pressure data at Axial Seamount # Here we find data using the new OOI Data Explorer and use Pandas [read_csv()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) and [time-series functionality](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html) to plot a smoothed representation of the bottom pressure at Axial Seamount. 
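# Before pulling the real data, a tiny synthetic sketch (not part of the workshop itself) of
# the Pandas pattern used throughout this section: parse timestamps into a DatetimeIndex,
# slice rows by date strings, and smooth with a time-based rolling mean. The DataFrame here is
# made up; the real notebook fills the same structure from the ERDDAP server.
# +
import numpy as np
import pandas as pd

times = pd.date_range("2020-01-01", periods=1440, freq="T")    # one day of 1-minute samples
toy = pd.DataFrame({"pressure": np.random.randn(1440).cumsum()}, index=times)

# label-based slicing plus a 2-hour rolling mean, as done with the botpt data below
smoothed = toy["2020-01-01 02:00":"2020-01-01 10:00"].rolling("2H").pressure.mean()
print(smoothed.tail())
# -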
# ### First import some required packages import pandas as pd from matplotlib import pyplot as plt # %matplotlib inline plt.rcParams['figure.figsize'] = (14, 8) from IPython import display display.set_matplotlib_formats('retina') # ### Next find some data on the OOI Data Explorer url = 'http://erddap.dataexplorer.oceanobservatories.org/erddap/tabledap/ooi-rs03ccal-mj03f-05-botpta301.csv?time%2Cbotpres%2Cbotpres_qc_agg%2Cz&time%3E%3D2014-08-29T20%3A59%3A00Z&time%3C%3D2020-11-21T06%3A00%3A00Z' url botpt = pd.read_csv(url, parse_dates=True, usecols = ['time','botpres'], index_col='time', skiprows=[1]) botpt.head() botpt = botpt.rename(columns={'botpres':'pressure'}) botpt.head() len(botpt) botpt[0:20000].pressure.plot(ylabel='pressure') botpt['2014'].plot() botpt['2019-04-28':'2019-07-12'].plot() start = pd.Timestamp('2020-08') botpt[start:start+pd.Timedelta(days=30)].plot() botpt.plot() botpt_rolling = botpt.rolling('14D', min_periods = 60*24*7).pressure.mean() botpt_rolling.plot(ylabel='pressure') botpt_rolling = botpt.rolling(60*24*14, win_type='hamming', min_periods = 60*24*7).pressure.mean() botpt_rolling.plot(ylabel='pressure') import hvplot.pandas botpt_rolling[::60].hvplot(ylabel='pressure') # ## Earthquake catalog from the OOI seismic array at Axial Seamount # Here we parse and plot Axial Seamount earthquake catalog data from ['s near-real-time automated earthquake location system](http://axial.ocean.washington.edu/). The data we will use is a text file in they HYPO71 output format located here: http://axial.ocean.washington.edu/hypo71.dat. eqs_url = 'http://axial.ocean.washington.edu/hypo71.dat' col_names = ['ymd', 'hm', 's', 'lat_deg', 'lat_min', 'lon_deg', 'lon_min', 'depth', 'MW', 'NWR', 'GAP', 'DMIN', 'RMS', 'ERH', 'ERZ', 'ID', 'PMom', 'SMom'] eqs = pd.read_csv(eqs_url, sep='\s+', header=0, names=col_names) eqs.head() from datetime import datetime def parse_hypo_date(ymd, hm, s): hour = int(hm.zfill(4)[0:2]) minute = int(hm.zfill(4)[2:]) second = float(s) if second == 60: second = 0 minute += 1 if minute == 60: minute=0 hour +=1 eq_date_str = ('%s%02.0f%02.0f%05.2f' % (ymd, hour, minute, second)) return datetime.strptime(eq_date_str, '%Y%m%d%H%M%S.%f') eqs = pd.read_csv(eqs_url, sep='\s+', header=0, names=col_names, parse_dates=[[0,1,2]], date_parser=parse_hypo_date) eqs.head() eqs['lat'] = eqs.lat_deg+eqs.lat_min/60 eqs['lon'] = -(eqs.lon_deg+eqs.lon_min/60) eqs.head() eqs.rename(columns={'ymd_hm_s': 'time', 'MW': 'mw'}, inplace=True) eqs.set_index('time', inplace=True) eqs.head() eqs = eqs[['lat', 'lon', 'depth', 'mw']] eqs.head() len(eqs) eqs.mw.plot(marker='.', linestyle='', markersize=1) daily_count = eqs.mw.resample('1D').agg('count') daily_count.head() fig, ax1 = plt.subplots() ax1.bar(daily_count.index, daily_count.values, width=5) ax1.set_ylim(ymax=3000) fig, ax1 = plt.subplots() ax1.bar(daily_count.index, daily_count.values, width=5) ax1.set_ylim(ymax=2500) ax2 = ax1.twinx() ax2.plot(botpt_rolling, color='r') # + fig, ax1 = plt.subplots() ax1.bar(daily_count['2015'].index, daily_count['2015'].values, width=1) ax1.set_ylim(ymax=2500) ax1.set_ylabel('count') ax2 = ax1.twinx() ax2.plot(botpt_rolling['2015'], color='r') ax2.set_ylabel('pressure') # - # ### Mapping eq data # Let's make some maps just because we can. 
import cartopy.crs as ccrs import cartopy import cartopy.feature as cfeature import numpy as np caldera = pd.read_csv('caldera.csv', sep=',') now = pd.Timestamp('now') eqs_sub = eqs[(now-pd.Timedelta(weeks=8)):] eqs_sub eqs_sub.index.dayofyear # + ax = plt.axes(projection=ccrs.Robinson(central_longitude=-130)) ax.plot(caldera.lon, caldera.lat,transform=ccrs.Geodetic(), c=(0.8, 0.8, 0.8)) sc = ax.scatter(eqs_sub.lon, eqs_sub.lat, s=0.00001*(eqs_sub.mw+3)**11, c=eqs_sub.index.dayofyear, edgecolor='k', cmap='viridis', transform=ccrs.PlateCarree()) plt.colorbar(sc, label='Day of Year') ax.gridlines() ax.set_title('Eqs from last 8 weeks'); extent = [-130.07, -129.95, 45.90, 46.02] ax.set_extent(extent) # - eqs_sub = eqs['2016'] # + ax = plt.axes(projection=ccrs.Robinson(central_longitude=-130)) ax.plot(caldera.lon, caldera.lat,transform=ccrs.PlateCarree()) sc = ax.scatter(eqs_sub.lon, eqs_sub.lat, c=eqs_sub.mw, s=40, edgecolor='k', cmap='Reds', transform=ccrs.PlateCarree()) plt.colorbar(sc, label='magnitude') ax.gridlines() ax.set_title('Eqs from 2016'); extent = [-130.07, -129.95, 45.90, 46.02] ax.set_extent(extent) # - # ## OOI Seafloor Camera Data # Now let's look at some video data from the [OOI Seafloor Camera](https://oceanobservatories.org/instrument-class/camhd/) system deployed at Axial Volcano on the Juan de Fuca Ridge. We will make use of the [Pycamhd](https://github.com/tjcrone/pycamhd) library, which can be used to extract frames from the ProRes encoded Quicktime files. These data are hosted on Microsoft's [Azure Open Datasets](https://azure.microsoft.com/en-us/services/open-datasets/). import pycamhd as camhd import numpy as np from ipywidgets import interact from ipywidgets import IntSlider dbcamhd_url = 'https://ooiopendata.blob.core.windows.net/camhd/dbcamhd.json' dbcamhd = pd.read_json(dbcamhd_url, orient="records", lines=True) dbcamhd.tail() dbcamhd.frame_count.sum() mov = dbcamhd.iloc[7000] mov def show_image(frame_number): plt.rc('figure', figsize=(12, 6)) plt.rcParams.update({'font.size': 8}) frame = camhd.get_frame(mov.url, frame_number) fig, ax = plt.subplots(); im1 = ax.imshow(frame); plt.yticks(np.arange(0,1081,270)) plt.xticks(np.arange(0,1921,480)) plt.title('Deployment: %s File: %s Frame: %s' % (mov.deployment, mov['name'], frame_number)); # %matplotlib inline # %config InlineBackend.figure_format = 'svg' initial_frame = 8060 frame_slider = IntSlider(min=0, max=mov.frame_count-1, step=1, value=initial_frame, continuous_update=False) interact(show_image, frame_number=frame_slider); # ## Scratch scratch df = pd.DataFrame(index=test.groupby(test.index.date).count().index) df['count'] = test.groupby(test.index.date).count().lat counts = pd.DataFrame(index=test.resample('1D').agg('count').index) #counts['count'] = test.resample('1D').agg('count') test.nlargest(100, 'MW').plot.scatter(x='lon', y='lat', s='MW') test['2020-01-01':].plot.scatter(x = 'lon', y = 'lat', c = 'depth', s='MW', cmap='magma') test.hvplot.scatter(x = 'lon', y = 'lat', c='depth', datashade=True, dynspread=True) eqs = eqs.rename(columns={''} botpt = botpt.rename(columns={'botpres':'pressure'}) eqs = pd.read_csv(eq_url, sep='\s+', header=0, names=col_names, parse_dates=[[0,1,2]], date_parser=parse_hypo_date) url = 'http://erddap.dataexplorer.oceanobservatories.org/erddap/tabledap/ooi-rs03int1-mj03c-10-trhpha301.csv?time%2Ctrhphte_abs%2Ctrhphte_abs_qc_agg%2Cz&time%3E%3D2014-09-27T06%3A46%3A00Z&time%3C%3D2020-11-21T05%3A47%3A00Z' test2 = pd.read_csv(url, parse_dates=True, index_col='time', skiprows=[1]) 
test2.nsmallest(1000, 'trhphte_abs') test2.iloc[-200000:].hvplot(y='trhphte_abs', marker='.') url = 'http://erddap.dataexplorer.oceanobservatories.org/erddap/tabledap/ooi-rs03int2-mj03d-06-botpta303.nc?time%2Cbotpres%2Cbotpres_qc_agg%2Cz&time%3E%3D2014-08-29T23%3A17%3A00Z&time%3C%3D2020-11-21T06%3A00%3A00Z' # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- from scipy.stats import randint import pandas as pd import revenue_maximization_ranking as rmr from revenue_maximization_ranking.cascade import full_best_x, expected_revenue rmr.__version__ g = randint(1, 20) df = pd.DataFrame({"name": list("abcdefghijklmnop"), "rev": [3.1, 1.1, 5.1, 0.1, 0.3, 0.3, 3.1, 0.4, 0.4, 0.01, 0.01, 1.1, 1.08, 3.2, 0.001, 0.1], "prob": [0.05, 0.02, 0.005, 0.01, 0.1, 0.08, 0.12, 0.06, 0.05, 1-0, 0.99, 0.1, 0.1, 0.04, 1.0, 0.05]}) df["default_rnk"] = [i + 1 for i in range(df.shape[0])] rev_default = expected_revenue(df, revenue_col="rev", probability_col="prob", ranking_col="default_rnk", g=g) df["ranking"] = full_best_x(df, revenue_col="rev", probability_col="prob", g=g) df rev_full_best_x = expected_revenue(df, revenue_col="rev", probability_col="prob", ranking_col="ranking", g=g) (rev_full_best_x/rev_default - 1) * 100 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Displaying 3D Objects in Jupyter # # I was missing the possibility to display 3D objects directly in Jupyterlab. So I implemented a simpole Mime Renderer with the help of Three.js. # # You can direclty open any files like the [cube.stl](./cube.stl) or you can display them in Cell as demonstrated below. # # The following mime types are supported: # - model/stl # - model/amf # - model/obj # - model/3mf # - model/gcode # - model/vnd.collada+xml # # # ## Installation # The extension can be installed with the folliwng # ```bash # git clone # # cd jupyterlab-viewer-stl # npm install # npm run build # jupyter labextension install # ``` # # # ## Examples # The following examples are written in Python. However I plan to use them in my JVM based kernels.... 
# from IPython.display import publish_display_data with open('cube.stl','rb') as f: publish_display_data({'model/stl': f.read()}) with open('cube.gcode','rb') as f: publish_display_data({'model/gcode': f.read()}) with open('cube.amf','rb') as f: publish_display_data({'model/amf': f.read()}) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:torch1.6.0] # language: python # name: conda-env-torch1.6.0-py # --- # + from json import load import sys sys.path.append("/home/ly/workspace/mmsa") import json import os import pickle import collections import numpy as np from typing import * from tqdm import tqdm from collections import OrderedDict from utils.tokenization import BasicTokenizer from utils.load_yelp import * seed = 1024 np.random.seed(seed) # - base_dir = os.path.join("data","yelp-vistanet") def load_data(): with open("data/yelp-vistanet/clear_data.pickle", "rb") as r: return pickle.load(r) class YelpSimpleTokenizer(BasicTokenizer): def __init__(self, vocab:Dict[str, int]=None, do_lower_case:bool=True) -> None: super(YelpSimpleTokenizer, self).__init__(do_lower_case) self.SENT_DELIMITER = '|||' self.vocab = vocab self.UNK = len(vocab) + 1 if vocab is not None else None # def tokenize(self, text:str) -> List[List[str]]: # splits into a 2D list (sentences x tokens) by default res = [] for sent in text.split(self.SENT_DELIMITER): if len(sent) > 0: # empty strings can occasionally appear here res.append(super(YelpSimpleTokenizer, self).tokenize(sent)) return res def _getidx(self, token:str): return self.vocab.get(token, self.UNK) def to_idx(self, text:str) -> List[List[int]]: assert self.vocab is not None, "No vocab!" sents = self.tokenize(text) res = [] for sent in sents: res.append([self._getidx(token) for token in sent]) return res data = load_data() vocab = load_glove_vocab() glove_tokenizer = YelpSimpleTokenizer(vocab["token2idx"], do_lower_case=True) len(vocab["token2idx"]), len(vocab["idx2token"]), len(vocab["glove_idx"]) # + def check_photo(i): path = os.path.join(base_dir, "photos", i[:2], i + ".jpg") return os.path.exists(path) def build_glove_data(tokenizer, reviews:List[dict]): res = [] total_img = 0 for review in tqdm(reviews): d = {} d["Text"] = tokenizer.to_idx(review["Text"]) d["Photos"] = [] for _id in review["Photos"]: if check_photo(_id): d["Photos"].append(_id) total_img += 1 d["Rating"] = review["Rating"] res.append(d) print(f"Image num : {total_img}") return res # - # %%time glove_data = {} for key in ["train", "valid", "test"]: glove_data[key] = build_glove_data(glove_tokenizer, data[key]) path = os.path.join(base_dir, "glove_data.pickle") with open(path, "wb") as w: pickle.dump(glove_data, w, protocol=pickle.HIGHEST_PROTOCOL) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: tensorflow2 # language: python # name: tensorflow2 # --- # #### The project uses reinforcement learning to balance the pole on the frictionless cart by moving the cart according to the position of the pole. The environment that will be used is OpenAI Gym's CartPole, and I will be using a neural network for training.
# ### loading Libraries import sys assert sys.version_info >=(3,5) import gym gym.envs.registry.all() from tensorflow import keras import numpy as np import tensorflow as tf from tensorflow import keras assert tf.__version__ >= "2.0" import sklearn assert sklearn.__version__ >= "0.20" # #### setting seed np.random.seed(42) tf.random.set_seed(42) env = gym.make("CartPole-v1") env.seed(42) import matplotlib.pyplot as plt import matplotlib as mpl # %matplotlib inline mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) # ###### environmetn gym.envs.registry.all() # env = gym.make("Acrobot-v1") # + # gym.envs.registry.all() # - # #### reset method to return an observation to start over again obs = env.reset() print(obs) # #### attempting the piece of the game action print(env.action_space) action = 1 obs, reward, done, info = env.step(action) print(obs) print(reward) print(done) print(info) # ### Hardcoding the strategy to see how it will work # + env.seed(42) def basic_policy(obs): angle = obs[2] if angle < 0: return 0 else: return 1 # - # ###### Now let us play 500 episodes of the game, each episode with 200 steps. For each step we call the basic_policy to get the action, and perform that step with that action. Let us calculate the rewards for each episode and finally see how many minimum, maximum, and mean steps that our basic_policy is able to keep the pole up: # # totals = [] for episode in range(500): episode_rewards = 0 obs = env.reset() for step in range(200): action = basic_policy(obs) obs, reward, done, info = env.step(action) episode_rewards += reward if done: break totals.append(episode_rewards) # ##### alculating the mean, start deviation, minimum, and maximum number of steps that the pole was upright. print(np.mean(totals), np.std(totals), np.min(totals), np.max(totals)) # ##### Building with Neural Network # #### clearing session to removes all nodes left from previous models keras.backend.clear_session() # #### setting input shape to 4, since we are feeding all values of obs n_inputs = 4 # #### Building the neural network model = tf.keras.Sequential([ keras.layers.Dense(5, activation="elu", input_shape=[n_inputs]), keras.layers.Dense(1, activation="sigmoid"), ]) model.summary() # #### function for the untrain network policy # #### we shall set randomly thrashold to allow the network decide the set of actions to take that will be more better in achieveing it purpose def basic_policy_untrained(obs): left_proba = model.predict(obs.reshape(1, -1)) action = int(np.random.rand() > left_proba) return action # #### Runing this for 50 episodes and 200 steps. 
we shall call the basic_policy_untrained for each step of each episode: totals = [] for episode in range(50): episode_reward = 0 obs = env.reset() for step in range(200): action = basic_policy_untrained(obs) obs, reward, done, info = env.step(action) episode_rewards += reward if done: break totals.append(episode_rewards) np.mean(totals), np.std(totals), np.min(totals), np.max(totals) # ##### from the above scores we can see that neural network perform badly # #### Training Neural Network for better performance # ##### setting the number of environment to 50 n_environments = 50 # #### setting numbers of iteration to 5000 n_iterations = 5000 # ###### initializing cartpole environmen envs = [gym.make("CartPole-v1") for _ in range(n_environments)] # ##### Setting different seeds to each environment with their respective indices as per the above list for index, env in enumerate(envs): env.seed(index) # #### Reset all the environments to get the observation for all the environment observations = [env.reset() for env in envs] # ##### Initialize the optimizer to keras optimization optimizer = keras.optimizers.RMSprop() # #### initializing loss function loss_fn = keras.losses.binary_crossentropy # #### Setting the target probabilities of the action. If angle < 0, we want proba(left) = 1., or else proba(left) = 0. Then, we shall fit the observations to the model and calculate the gradients based on the loss with respect to the targets. We shall use the probabilities predicted by the model and determined the actions and proceed for the next step, and the iterations continue. # + for iteration in range(n_iterations): # if angle < 0, we want proba(left) = 1., or else proba(left) = 0. target_probas = np.array([([1.] if obs[2] < 0 else [0.]) for obs in observations]) with tf.GradientTape() as tape: left_probas = model(np.array(observations)) loss = tf.reduce_mean(loss_fn(target_probas, left_probas)) print("\rIteration: {}, Loss: {:.3f}".format(iteration, loss.numpy()), end="") grads = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(grads, model.trainable_variables)) actions = (np.random.rand(n_environments, 1) > left_probas.numpy()).astype(np.int32) for env_index, env in enumerate(envs): obs, reward, done, info = env.step(actions[env_index][0]) observations[env_index] = obs if not done else env.reset() # - # ##### from the above score you can see that it performance much more beteer # #### creating function to play a single step using the model def play_one_step(env, obs, model, loss_fn): with tf.GradientTape() as tape: left_probas = model(obs[np.newaxis]) action = (tf.random.uniform([1, 1]) > left_probas) y_target = tf.constant([[1.]]) - tf.cast(action, tf.float32) loss = tf.reduce_mean(loss_fn(y_target, left_probas)) grads = tape.gradient(loss, model.trainable_variables) obs, reward, done, info = env.step(int(action[0,0].numpy())) return obs, reward, done, grads # ##### function that will return all rewards def play_multiple_episodes(env, n_episodes, n_max_steps, model, loss_fn): all_rewards = [] all_grads = [] for episode in range(n_episodes): current_rewards = [] current_grads = [] obs = env.reset() for step in range(n_max_steps): obs, reward, done, grads = play_one_step(env, obs, model,loss_fn) current_rewards.append(reward) current_grads.append(grads) if done: break all_rewards.append(current_rewards) all_grads.append(current_grads) return all_rewards, all_grads # #### function to define the dicount reward def discount_rewards(rewards, discount_rate): discounted = 
np.array(rewards) for step in range(len(rewards) - 2, -1, -1): discounted[step] += discounted[step + 1] * discount_rate return discounted def discount_and_normalize_rewards(all_rewards, discount_rate): all_discounted_rewards = [discount_rewards(rewards, discount_rate) for rewards in all_rewards] flat_rewards = np.concatenate(all_discounted_rewards) reward_mean = flat_rewards.mean() reward_std = flat_rewards.std() return [(discounted_rewards - reward_mean) / reward_std for discounted_rewards in all_discounted_rewards] # ##### Train with policy Gradients # ##### seeting the number of environments # + tf.random.set_seed(42) np.random.seed(42) n_iterations = 150 n_episodes_per_update = 10 n_max_steps = 200 discount_rate = 0.95 optimizer = keras.optimizers.Adam(lr=0.01) loss_fn = keras.losses.binary_crossentropy n_inputs = 4 # + # n_iteration = 150 # + # n_max_steps = 200 # - # #### setting discount rate # + # discount_rate = 0.95 # + # n_inputs = 4 # + # loss_fn = keras.losses.binary_crossentropy # + # optimizer = keras.optimizers.Adam(lr=0.01) # - # ##### applying gradient algorithm while training the neural network def nn_policy_gradient(model, n_iterations, n_episodes_per_update, n_max_steps, loss_fn): env = gym.make("CartPole-v1") env.seed(42); for iteration in range(n_iterations): all_rewards, all_grads = play_multiple_episodes( env, n_episodes_per_update, n_max_steps, model, loss_fn) total_rewards = sum(map(sum, all_rewards)) # Not shown in the book print("\rIteration: {}, mean rewards: {:.1f}".format( # Not shown iteration, total_rewards / n_episodes_per_update), end="") # Not shown all_final_rewards = discount_and_normalize_rewards(all_rewards, discount_rate) all_mean_grads = [] for var_index in range(len(model.trainable_variables)): mean_grads = tf.reduce_mean( [final_reward * all_grads[episode_index][step][var_index] for episode_index, final_rewards in enumerate(all_final_rewards) for step, final_reward in enumerate(final_rewards)], axis=0) all_mean_grads.append(mean_grads) optimizer.apply_gradients(zip(all_mean_grads, model.trainable_variables)) return model env.close() # #### models model = keras.models.Sequential([ keras.layers.Dense(5, activation="elu", input_shape=[n_inputs]), keras.layers.Dense(1, activation="sigmoid"), ]) # ##### calling the function model = nn_policy_gradient(model, n_iterations, n_episodes_per_update, n_max_steps, loss_fn) # + totals = [] for episode in range(20): print("Episode:",episode) episode_rewards = 0 obs = env.reset() for step in range(200): action = basic_policy_untrained(obs) obs, reward, done, info = env.step(action) episode_rewards += reward if done: break totals.append(episode_rewards) np.mean(totals), np.std(totals), np.min(totals), np.max(totals) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Importing necessary packages import numpy as np from numpy import nan import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns db = pd.read_csv('http://bit.ly/w-data') db.head() db.info() # The dataset doesn't contain any null objects. 
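# A quick check to back up the "no null objects" observation above (pandas only):
print(db.isnull().sum())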
db.describe()

sns.scatterplot(data=db, x='Hours', y='Scores', palette='bright')
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.show()

print('min hours:', db['Hours'].min())
print('max hours:', db['Hours'].max())
print('min score:', db['Scores'].min())
print('max score:', db['Scores'].max())

db.boxplot(['Scores'])

db.boxplot(['Hours'])

# There are no outliers in the given dataset.

db["Hours"].value_counts(bins=4).sort_index()

db["Scores"].value_counts(bins=4).sort_index()

# The hours and scores are roughly normally distributed, so a simple Linear Regression is a reasonable model.

sns.heatmap(db.corr(), annot=True)

# Hours and Scores are highly positively correlated with each other

sns.pairplot(db)

# Splitting the dataset for training and testing
from sklearn.model_selection import train_test_split
x = db.iloc[:, :-1].values
y = db.iloc[:, 1].values
x_train, x_test, y_train, y_test = train_test_split(x, y, train_size=0.60, test_size=0.40, random_state=0)

# Training Linear Regression Model
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x_train, y_train)

line = model.coef_*x + model.intercept_

# Plotting the fitted line over the data
plt.scatter(x, y)
plt.plot(x, line);
plt.show()

y_pred = model.predict(x_test)
y_pred

# Predicting Scores
print('Test Score')
print(model.score(x_test, y_test))
print('Training Score')
print(model.score(x_train, y_train))

# +
output = pd.DataFrame({'Actual Score': y_test, 'Predicted Score': y_pred, 'Residual': y_test - y_pred})
print(output.head())
# -

# What will be the predicted score if a student studies for 9.25 hrs in a day?
print('Score of a student who studied for 9.25 hours a day:', model.predict([[9.25]]))
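# Cross-checking the 9.25-hour prediction by hand from the fitted coefficients (a minimal sketch; assumes the single-feature model trained above):
hours = 9.25
manual_prediction = model.intercept_ + model.coef_[0] * hours
print('Manual prediction for', hours, 'hours:', manual_prediction)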

# Model Evaluation

from sklearn import metrics
from scipy import stats

print('Mean absolute error : ', metrics.mean_absolute_error(y_test, y_pred))
print('Root mean square error : ', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
print('R-squared : ', metrics.r2_score(y_test, y_pred))

t_statistic, pvalues = stats.ttest_ind(y_test, y_pred)
print('t-statistic -->', t_statistic)
print('P-value -->', pvalues)

f_statistic, pvalues = stats.f_oneway(y_test, y_pred)
print('f-statistic -->', f_statistic)
print('P-value -->', pvalues)
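# A quick reading of the hypothesis tests above (a sketch; 0.05 is an assumed significance level, and pvalues here holds the F-test p-value from the previous cell):
alpha = 0.05
if pvalues > alpha:
    print('Fail to reject H0: actual and predicted scores come from similar distributions')
else:
    print('Reject H0: actual and predicted scores differ significantly')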

# Summary
    # # The datset given to us is a clean dataset with 2 attributes Hours and Scores and no null values present. # With the help of seaborn and matplotlib we visualized the data and found it to be normally distributed. # # We performed Linear Regression operation on the given dataset and the model had an accuracy of 95%. Thus, the model could predict the score for a student who studies for 9.25hrs in a day which is 92.65%. # # In test sample R-square ,T-test, F-test were performed to measure the model performance in terms of goodness of fit & randomness of variance between actual and the predicted values. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:geospatial] # language: python # name: conda-env-geospatial-py # --- # # Vegetation Index Model # # The code in this notebook was adapted from __[here](https://github.com/e-sensing/wgiss-py-webinar/blob/master/WGISS_Tech_Webinar.ipynb)__ # ## Simulate a time series # + import math import numpy as np n = 46 # number of observations f = 16/365 # 16 observations a year # model y = np.zeros(n) # process (true) time series z = np.zeros(n) # data time series mu = 3.0 # offset alpha = 1.2 # amplitude phi = math.pi/4 # phase omega = 2 * math.pi * f # angular frequency # noise vectors nu = np.random.normal(0, math.sqrt(0.1), n) v = np.random.uniform(-1, 1, n) # build the time series for k in range(0, n): y[k] = mu + alpha * math.cos(omega * k + phi) + nu[k] z[k] = y[k] * (1 + 0.5 * v[k]) print(y) print(z) # Kalman filter y_est = c1 + c2 * cos(omega_k + c3) c1 c2 c3 = [1 1 1] I = np.matrix(np.identity(n), copy=False) R = 0.5 * I Q = 0.1 * I # model error # gaussian random noise # - # ## Get time series of vegetation index # %matplotlib inline import matplotlib import pandas as pd from wtss import wtss from cycler import cycler # get time series of MODIS data of this point latitude = -14.919100049 longitude = -59.11781088 # get time series w = wtss("http://www.dpi.inpe.br/tws") ts = w.time_series("mod13q1_512", ("ndvi", "evi"), latitude, longitude) # build a data frame made of vegetation indexes ndvi = pd.Series(ts["ndvi"], index = ts.timeline) * \ cv_scheme['attributes']['ndvi']['scale_factor'] evi = pd.Series(ts["evi"], index = ts.timeline) * \ cv_scheme['attributes']['evi']['scale_factor'] vidf = pd.DataFrame({'ndvi': ndvi, 'evi': evi}) # setup plot style matplotlib.style.use('ggplot') colors = cycler(u'color', [u'#74c476',u'#6baed6',u'#d62728', \ u'#ff7f0e', u'#756bb1']) matplotlib.rcParams['axes.prop_cycle'] = colors # plot fig, ax = matplotlib.pyplot.subplots(figsize = (15, 5)) ax.plot() vidf['ndvi'].plot() vidf['evi'].plot() ax.legend() fig.autofmt_xdate() # print first rows of the data frame vidf[0:5] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: fenicsker # language: python # name: fenicsker # --- #import the required libraries import numpy as np import matplotlib.pyplot as pl from dolfin import * import meshio # + ## Import the mesh file msh = meshio.read("meshings/slot_rbf2.msh") for key in msh.cell_data_dict["gmsh:physical"].keys(): if key == "triangle": triangle_data = msh.cell_data_dict["gmsh:physical"][key] elif key == "tetra": tetra_data = msh.cell_data_dict["gmsh:physical"][key] for cell in msh.cells: if cell.type == "tetra": tetra_cells = cell.data elif cell.type == 
"triangle": triangle_cells = cell.data tetra_mesh = meshio.Mesh(points=msh.points, cells={"tetra": tetra_cells}, cell_data={"name_to_read":[tetra_data]}) triangle_mesh =meshio.Mesh(points=msh.points, cells=[("triangle", triangle_cells)], cell_data={"name_to_read":[triangle_data]}) meshio.write("plate.xdmf", tetra_mesh) meshio.write("mf.xdmf", triangle_mesh) from dolfin import * set_log_level(LogLevel.ERROR) mesh = Mesh() with XDMFFile("plate.xdmf") as infile: infile.read(mesh) mvc = MeshValueCollection("size_t", mesh, 2) with XDMFFile("mf.xdmf") as infile: infile.read(mvc, "name_to_read") # define the facet meshes mf = cpp.mesh.MeshFunctionSizet(mesh, mvc) mvc2 = MeshValueCollection("size_t", mesh, 3) with XDMFFile("plate.xdmf") as infile: infile.read(mvc2, "name_to_read") #define the cells meshes cf = cpp.mesh.MeshFunctionSizet(mesh, mvc2) # + from dolfin import * # Scaled variables for cantilever beam L = 1; W = 0.2 mu = 1 rho = 1 delta = W/L gamma = 0.4*delta**2 beta = 1.25 lambda_ = beta g = gamma # Create mesh and define function space V = FunctionSpace(mesh, 'P', 2) # Define boundary condition tol = 1E-14 bc = DirichletBC(V, Constant((0, 0, 0)),mf, 1) # Define strain and stress def epsilon(u): return 0.5*(nabla_grad(u) + nabla_grad(u).T) #return sym(nabla_grad(u)) def sigma(u): return lambda_*nabla_div(u)*Identity(d) + 2*mu*epsilon(u) ds = Measure("ds", domain=mesh, subdomain_data=mf) dS = Measure("dS", domain=mesh, subdomain_data=mf) # Compute solution # Define variational problem u = TrialFunction(V) d = u.geometric_dimension() # space dimension v = TestFunction(V) f = Constant((0, 0, -rho*g)) T = Constant((0, 0, 0)) a = inner(sigma(u), epsilon(v))*dx L = dot(f, v)*dx + dot(T, v)*ds(2) u = Function(V) solve(a == L, u, bc) file1= XDMFFile('results/u1.xdmf') file1.parameters["flush_output"] = True file1.write(u) # # Plot solution # plot(u, title='Displacement', mode='displacement') # + # define a degree of freedom map v2d = vertex_to_dof_map(V) dofs = [] for facet in facets(mesh): if mf[facet.index()] == 1: vertices = facet.entities(0) for vertex in vertices: dofs.append(v2d[vertex]) unique_dofs = np.array(list(set(dofs)), dtype=np.int32) boundary_coords = V.tabulate_dof_coordinates()[unique_dofs] N=len(boundary_coords[:,1]) g=np.zeros((N,3)) for i, dof in enumerate(unique_dofs): g[i,0]=u.vector()[dof] # print(boundary_coords[i], v.vector()[dof]) # - g.shape # define coordinates and control points coor=V.tabulate_dof_coordinates() num_coor=len(coor[:,1]) ctr_pts=np.array(boundary_coords) # + # nx, ny, nz = (10, 10, 10) # mesh = np.zeros((nx * ny * nz, 3)) # xv = np.linspace(0, 1, nx) # yv = np.linspace(0, 1, ny) # zv = np.linspace(0, 1, nz) # z_1, y_1, x_1 = np.meshgrid(zv, yv, xv) # # mesh = np.array([x.ravel(), y.ravel(), z.ravel()]) # # mesh = mesh.T # + # num_coor=len(x_1.flatten()) # coor=np.zeros((num_coor,3)) # coor[:,0]=x_1.flatten() # coor[:,1]=y_1.flatten() # coor[:,2]=z_1.flatten() # - #Plot in matplotlib fig = pl.figure(1) ax = fig.add_subplot(111, projection='3d') ax.scatter(coor[:, 0], coor[:, 1], coor[:, 2], c='blue', marker='o') ax.set_xlabel('X axis') ax.set_ylabel('Y axis') ax.set_zlabel('Z axis') pl.show() # + # N=len(ctr_pts) # g=np.zeros((N,3)) # for i in range(N): # g[i,0]= # - # define the P matrix from control points P=np.zeros((N,4)) for i in range(N): for j in range(4): if (j==0): P[i,j]=1 else: P[i,j]=ctr_pts[i,j-1] # known displacements P.shape def rd(c1, c2): return np.sqrt((c1[0]-c2[0])**2+(c1[1]-c2[1])**2+(c1[2]-c2[2])**2) #rbf as global support spline 
type def rbf(r): return np.exp(-r**2) # + # # define lhs RBF kernel def M(coor,ctr_pts): M_m=np.zeros((N,N)) for i in range(N): for j in range(N): M_m[i,j]=rbf(rd(ctr_pts[i],ctr_pts[j])) return M_m # formulate the lhs part of the systems of linear equations to be dolved for coefficients def lhs_(P,M_m): lhs=np.zeros((N+4,N+4)) P_t=np.transpose(P) for i in range(N+4): for j in range(N+4): if (i(N-1)): lhs[i, j]=P[i,j-N] elif (i>(N-1) and j= 300s With DataFlow) # + Collapsed="false" df_minute_5.head() # + Collapsed="false" body = df_minute_5['body'].tolist() title = df_minute_5['title'].tolist() # + Collapsed="false" times = [] # + Collapsed="false" for body_element, title_element in zip(body, title): start_time = time.time() temp1 = preprocessing_object.nlp(body_element) temp2 = preprocessing_object.nlp(title_element) end_time = time.time() total_time = end_time-start_time times.append(total_time) # + Collapsed="false" times # + Collapsed="false" # Testing Memory Leakage # + Collapsed="false" import random import spacy import plac import psutil import sys import objgraph import gc gc.set_debug(gc.DEBUG_SAVEALL) def load_data(): return ["This is a fake test document number %d."%i for i in random.sample(range(100_000), 10_000)] # def print_memory_usage(): # print(objgraph.show_growth(limit=5)) # print("GC count="+str(gc.get_count())) # gc.collect() class ReloadableNlp: def __init__(self, model, reload=1000): self.model = model self.reload = reload self.count = 0 self.nlp = spacy.load(model) def get_nlp(self): self.count += 1 if self.count % 1_000 == 0: del self.nlp gc.collect() self.nlp = spacy.load(self.model) return self.nlp def parse_texts(reloadable, texts, iterations=1_000): for i in range(iterations): for doc in reloadable.get_nlp().pipe(texts, cleanup=True): yield doc @plac.annotations( iterations=("Number of iterations", "option", "n", int), model=("spaCy model to load", "positional", None, str) ) def main(model='en_core_web_sm', iterations=1_000): texts = load_data() reloadable = ReloadableNlp(model) for i, doc in enumerate(parse_texts(reloadable, texts, iterations=iterations)): if i % 10_000 == 0: print(i, psutil.virtual_memory().percent) #print_memory_usage() sys.stdout.flush() # + Collapsed="false" main() # + Collapsed="false" # ! python -m spacy validate # + Collapsed="false" # ! 
python -m spacy download en_core_web_sm # + Collapsed="false" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import cv2 as cv import os from matplotlib import pyplot as plt import numpy as np llist = {'0':0,'1':0,'2':0,'3':0,'4':0,'5':0,'6':0,'7':0,'8':0,'9':0, 'a':0,'b':0,'c':0,'d':0,'e':0,'f':0,'g':0,'h':0,'i':0,'j':0,'k':0,'l':0,'m':0,'n':0, 'o':0,'p':0,'q':0,'r':0,'s':0,'t':0,'u':0,'v':0,'w':0,'x':0,'y':0,'z':0, 'A':0,'B':0,'C':0,'D':0,'E':0,'F':0,'G':0,'H':0,'I':0,'J':0,'K':0,'L':0,'M':0, 'N':0,'O':0,'P':0,'Q':0,'R':0,'S':0,'T':0,'U':0,'V':0,'W':0,'X':0,'Y':0,'Z':0} data_path = '../data/color_sample' file_names = os.listdir(data_path) file_names # - llist['a'] SAMPLE1_PATH = data_path+'/'+file_names[0] SAMPLE2_PATH = data_path+'/'+file_names[1] SAMPLE3_PATH = data_path+'/'+file_names[2] SAMPLE4_PATH = data_path+'/'+file_names[3] SAMPLE5_PATH = data_path+'/'+file_names[4] SAMPLE6_PATH = data_path+'/'+file_names[5] SAMPLE7_PATH = data_path+'/'+file_names[6] SAMPLE8_PATH = data_path+'/'+file_names[7] img = cv.imread(SAMPLE6_PATH) plt.imshow(img) plt.show() n_img = np.zeros((img.shape[0],img.shape[1])) img_aft = cv.normalize(img, n_img, 0,255,cv.NORM_MINMAX) plt.imshow(img_aft) plt.show() gray = cv.cvtColor(img_aft,cv.COLOR_BGR2GRAY) plt.imshow(gray) plt.show() ret, thresh = cv.threshold(gray,0,255,cv.THRESH_BINARY_INV+cv.THRESH_OTSU) #ret, thresh = cv.threshold(gray,0,255,cv.THRESH_OTSU) plt.imshow(thresh) plt.show() import copy im2,contours,hierarchy = cv.findContours(thresh,cv.RETR_EXTERNAL,cv.CHAIN_APPROX_SIMPLE) filter_containor = [] temp_img = copy.deepcopy(img) for i in range(0,len(contours)): x, y, w, h = cv.boundingRect(contours[i]) newimage=img[y:y+h,x:x+w] # 先用y确定高,再用x确定宽 nrootdir=("cut_image/") if h<5 and w<5: continue filter_containor.append([x, y, w, h]) cv.rectangle(temp_img, (x,y), (x+w,y+h), (153,153,0), 1) if not os.path.isdir(nrootdir): os.makedirs(nrootdir) cv.imwrite( nrootdir+str(i)+".jpg",newimage) print (x, y, w, h) plt.imshow(temp_img) plt.show() def seg_char(img_folder, img_name, save_folder): img = cv.imread(img_folder+'/'+img_name) name_list = img_name.split('_')[0] gray = cv.cvtColor(img_aft,cv.COLOR_BGR2GRAY) ret, thresh = cv.threshold(gray,0,255,cv.THRESH_BINARY_INV+cv.THRESH_OTSU) n_img = np.zeros((img.shape[0],img.shape[1])) img_aft = cv.normalize(thresh, n_img, 0,255,cv.NORM_MINMAX) im2,contours,hierarchy = cv.findContours(thresh,cv.RETR_LIST,cv.CHAIN_APPROX_SIMPLE) filter_containor = [] temp_img = copy.deepcopy(img) for i in range(0,len(contours)): x, y, w, h = cv.boundingRect(contours[i]) newimage=img[y:y+h,x:x+w] # 先用y确定高,再用x确定宽 char_class = name_list[-1-i] #nrootdir=save_folder#("cut_image/") if h<5 and w<5: continue filter_containor.append([x, y, w, h]) cv.rectangle(temp_img, (x-2,y-2), (x+w+2,y+h+2), (153,153,0), 1) file_path = save_folder+char_class+'/' if not os.path.isdir(file_path): os.makedirs(file_path) cv.imwrite( file_path+str(llist[char_class])+".png",newimage) llist[char_class] +=1 print (x, y, w, h) plt.imshow(temp_img) plt.show() seg_char(data_path, file_names[5], "test/" ) # + import sys, os sys.path.insert(1, '/home/ning_a/Desktop/CAPTCHA/base_solver/base_solver_char') import cv2 as cv from matplotlib import pyplot as plt import numpy as np import operator import copy from captcha_cnn_model import CNN, Generator import torch from torch.autograd import Variable 
import captcha_setting import my_dataset from torchvision.utils import save_image import cv2 as cv from matplotlib import pyplot as plt from torch.utils.data import DataLoader, Dataset import torchvision.transforms as transforms from PIL import Image import copy import operator import torch.nn as nn import captcha_setting from matplotlib import cm llist = {'0':0,'1':0,'2':0,'3':0,'4':0,'5':0,'6':0,'7':0,'8':0,'9':0, 'a':0,'b':0,'c':0,'d':0,'e':0,'f':0,'g':0,'h':0,'i':0,'j':0,'k':0,'l':0,'m':0,'n':0, 'o':0,'p':0,'q':0,'r':0,'s':0,'t':0,'u':0,'v':0,'w':0,'x':0,'y':0,'z':0, 'A':0,'B':0,'C':0,'D':0,'E':0,'F':0,'G':0,'H':0,'I':0,'J':0,'K':0,'L':0,'M':0, 'N':0,'O':0,'P':0,'Q':0,'R':0,'S':0,'T':0,'U':0,'V':0,'W':0,'X':0,'Y':0,'Z':0} generator = Generator() generator.load_state_dict(torch.load('/home/ning_a/Desktop/CAPTCHA/base_solver/base_solver_char/7800.pkl')) generator.eval() # data_path ='/home/ning_a/Desktop/CAPTCHA/test_generator/test'# '../data/color_sample' data_path = '/home/ning_a/Desktop/CAPTCHA/dataset_darkweb_bias/kaggle_captcha_4_letter/train/' save_path = '/home/ning_a/Desktop/CAPTCHA/dataset_darkweb_bias/kaggle_captcha_4_letter/train_char_gen/' transform = transforms.Compose([ # transforms.ColorJitter(), transforms.Grayscale(), transforms.ToTensor(), # transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) ]) file_names = os.listdir(data_path) # file_names def seg_char(img_folder, img_name, save_folder): img = cv.imread(img_folder+'/'+img_name) torch_img = Image.open(img_folder+'/'+img_name) # print(img.shape) torch_img = torch_img.resize((160, 60)) # torch_img = cv.resize(torch_img,(160, 60), interpolation = cv.INTER_CUBIC) # print(torch_img.shape) torch_img = transform(torch_img) gen_img = generator(torch_img) gen_img2 = gen_img.data.cpu().numpy() target_img = gen_img2[0][0] target_img = target_img*255 cv.imwrite( "temp.jpg",target_img) gen_img = cv.imread( "temp.jpg") gen_img = torch_img = cv.resize(gen_img,(72, 24), interpolation = cv.INTER_CUBIC) name_list = img_name.split('.')[0] # print(name_list) gray = cv.cvtColor(img,cv.COLOR_BGR2GRAY) gray_2 = cv.cvtColor(gen_img,cv.COLOR_BGR2GRAY) ret, thresh = cv.threshold(gray,0,255,cv.THRESH_BINARY_INV+cv.THRESH_OTSU) n_img = np.zeros((img.shape[0],img.shape[1])) img_aft = cv.normalize(thresh, n_img, 0,255,cv.NORM_MINMAX) im2,contours,hierarchy = cv.findContours(thresh,cv.RETR_EXTERNAL,cv.CHAIN_APPROX_TC89_KCOS) filter_containor = [] temp_img = copy.deepcopy(img) # print("seg #: ",len(contours)) num_index = 0 if(len(contours)!=4): raise Exception("Sorry, no numbers below zero") for i in range(0,len(contours)): x, y, w, h = cv.boundingRect(contours[i]) filter_containor.append([x, y, w, h]) # continue # if h<35 or w<5: # continue # filter_containor.append([x, y, w, h]) # sort contour list # print(filter_containor) filter_containor = sorted(filter_containor, key=operator.itemgetter(0)) for i in range(0,len(filter_containor)): #x, y, w, h = cv.boundingRect(contours[i]) #try: # print(name_list, i) char_class = name_list[i] x = max(0, filter_containor[i][0]-2) y = max(0,filter_containor[i][1]-2) if(char_class=='i'): y = 0# max(0,filter_containor[i][1]-2) xr = min(159,filter_containor[i][0]+filter_containor[i][2]+2 ) if(char_class=='i'): yb = 59# max(0,filter_containor[i][1]-2) yb = min(59, filter_containor[i][1]+filter_containor[i][3]+2 ) # newimage=gray[y:yb,x:xr] # 先用y确定高,再用x确定宽 newimage=gray_2[y:yb,x:xr] #char_class = name_list[-1-i] #nrootdir=save_folder#("cut_image/") num_index+=1 cv.rectangle(temp_img, (x,y), 
(xr,yb), (153,153,0), 1) file_path = save_folder+char_class+'/' if not os.path.isdir(file_path): os.makedirs(file_path) newimage = cv.resize(newimage,(30, 60), interpolation = cv.INTER_CUBIC) # print(char_class) cv.imwrite( file_path+(str(llist[char_class]).upper())+".png",newimage) llist[char_class] +=1 # except: # print(img_name) # plt.imshow(temp_img) # plt.show() #print (x, y, xr, h) # if('i' in name_list): # print(filter_containor) # plt.imshow(temp_img) # plt.show() total = 0 correct = 0 for each_image in file_names: #print(each_image) total+=1 try: seg_char(data_path, each_image, save_path ) correct += 1 except: continue print(total, correct, float(correct)/float(total)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="HyVCYktQidB4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9e0b6e61-0153-4a85-9a1f-7960eb3bb942" import tensorflow as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt print(tf.__version__) # + id="K8HMaTY8i38d" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="451becba-9b50-41fb-f3b6-d5ed150b904f" fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() # + id="Qzmq0NVPjRxO" colab_type="code" colab={} class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] # + id="WE-RXGW_jZrP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 347} outputId="da8b2003-b06c-428b-a8c8-461b90dacfb0" plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.grid(False) # + id="KzXZmRGtkPVm" colab_type="code" colab={} train_images = train_images / 255.0 test_images = test_images /255.0 # + id="S2Ip5Sr4kgtB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 609} outputId="af3ef02b-5ea5-4b3c-b94c-0f6a5f795ff0" plt.figure(figsize = (10, 10)) for i in range(25): plt.subplot(5, 5, i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap = plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) # + id="ZLuGjbIAlXt5" colab_type="code" colab={} model = keras.Sequential([ keras.layers.Flatten(input_shape=(28,28)), keras.layers.Dense(128, activation=tf.nn.relu), keras.layers.Dense(10, activation=tf.nn.softmax) ]) # + id="sQj4ymt2mSOe" colab_type="code" colab={} model.compile(optimizer=tf.train.AdamOptimizer(), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # + id="9J4tq4L-nPNH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} outputId="7510adde-c5bb-4df9-c383-7ed9ecf8e3b4" model.fit(train_images, train_labels, epochs = 5) # + id="xaVOYtEznfsn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="e7444e16-e603-4323-d79d-914adf777278" test_loss, test_acc = model.evaluate(test_images, test_labels) print('Test accuracy:', test_acc) # + id="vmyljkdBoARO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="4d93e8c2-a012-4986-ffec-66c14ef66491" predictions = model.predict(test_images) predictions[0] # + id="v2cygy9PoYbq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} 
outputId="061eeefb-d63a-4e6e-e0d6-2f1c2b066989" np.argmax(predictions[1123]), test_labels[1123] # + id="kf6fnCtso0dv" colab_type="code" colab={} def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img = predictions_array[i], true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color = color) def plot_value_array(i, predictions_array, true_label): predictions_array, true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color = '#777777') plt.ylim([0,1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') # + id="rqTeCo75rjny" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="95bcbed0-b5d1-4227-8799-583667749f84" i = 1123 plt.figure(figsize=(6, 3)) plt.subplot(1,2,1) plot_image(i, predictions, test_labels, test_images) plt.subplot(1, 2,2) plot_value_array(i, predictions, test_labels) # + id="jOr7kqqutkOu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 609} outputId="1a83bf60-f747-4845-b2ff-6db3b93a0d89" num_rows = 5 num_cols = 3 num_images = num_rows * num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions, test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions, test_labels) # + id="HmqfEJcwur3u" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="73639ab5-e239-4358-b1a5-7cc1e2373b63" img = test_images[0] print(img.shape) # + id="Y9L4feYXu4S1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="72cb79d9-546f-4e47-fe4e-1940e95e386c" img = (np.expand_dims(img, 0)) print(img.shape) # + id="bjnCVYVOvDZH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="99b5d12e-a928-441a-e97d-aedaf23e0f70" predictions_single = model.predict(img) print(predictions_single) # + id="RVwjl6NNvMv5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 377} outputId="0f3aeae3-0c30-4e03-c0a6-7865b8abb129" plot_value_array(0, predictions_single, test_labels) _ = plt.xticks(range(10), class_names, rotation=45) # + id="1UESRgTtvwg4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="0ce99794-984f-430a-bf91-e2eb4aa22b36" np.argmax(predictions_single[0]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Replots 1st layer filters with y-axis labels that match a ground truth motif according to tomtom # # # Figures generated from this notebook include: # - Fig. 1b # - Supplmeentary Fig. 1 # - Supplmeentary Fig. 
2 # # + import os, sys from six.moves import cPickle import numpy as np import pandas as pd import logomaker import helper from tfomics import utils, explain from tensorflow import keras from keras import backend as K import tensorflow.compat.v1.keras.backend as K1 import matplotlib.pyplot as plt # %matplotlib inline # + arid3 = ['MA0151.1', 'MA0601.1', 'PB0001.1'] cebpb = ['MA0466.1', 'MA0466.2'] fosl1 = ['MA0477.1'] gabpa = ['MA0062.1', 'MA0062.2'] mafk = ['MA0496.1', 'MA0496.2'] max1 = ['MA0058.1', 'MA0058.2', 'MA0058.3'] mef2a = ['MA0052.1', 'MA0052.2', 'MA0052.3'] nfyb = ['MA0502.1', 'MA0060.1', 'MA0060.2'] sp1 = ['MA0079.1', 'MA0079.2', 'MA0079.3'] srf = ['MA0083.1', 'MA0083.2', 'MA0083.3'] stat1 = ['MA0137.1', 'MA0137.2', 'MA0137.3', 'MA0660.1', 'MA0773.1'] yy1 = ['MA0095.1', 'MA0095.2'] Gmeb1 = ['MA0615.1'] motifs = [[''],arid3, cebpb, fosl1, gabpa, mafk, max1, mef2a, nfyb, sp1, srf, stat1, yy1] motifnames = [ '','Arid3', 'CEBPB', 'FOSL1', 'Gabpa', 'MAFK', 'MAX', 'MEF2A', 'NFYB', 'SP1', 'SRF', 'STAT1', 'YY1'] # - # load data data_path = '../data/synthetic_code_dataset.h5' data = helper.load_data(data_path) x_train, y_train, x_valid, y_valid, x_test, y_test = data # + num_trials = 10 model_names = ['cnn-deep', 'cnn-2', 'cnn-50'] activations = ['relu', 'exponential', 'sigmoid', 'tanh', 'softplus', 'linear', 'elu'] results_path = utils.make_directory('../results', 'task1') params_path = utils.make_directory(results_path, 'model_params') layer = 3 threshold = 0.5 window = 20 num_cols = 8 figsize = (30,10) size=32 save_path = os.path.join(results_path, 'conv_filters') for model_name in model_names: for activation in ['relu', 'exponential']: for trial in range(num_trials): # load model model = helper.load_model(model_name, activation=activation, input_shape=200) name = model_name+'_'+activation+'_'+str(trial) print(name) weights_path = os.path.join(params_path, name+'.hdf5') model.load_weights(weights_path) # save path file_path = os.path.join(save_path, name, 'tomtom.tsv') best_qvalues, best_match, min_qvalue, match_fraction, match_any = helper.match_hits_to_ground_truth(file_path, motifs, size) intermediate = keras.Model(inputs=model.inputs, outputs=model.layers[layer].output) fmap = intermediate.predict(x_test) W = explain.activation_pwm(fmap, x_test, threshold=threshold, window=window) num_filters = len(W) num_widths = int(np.ceil(num_filters/num_cols)) fig = plt.figure(figsize=figsize) fig.subplots_adjust(hspace=0.3, wspace=0.3) for n, w in enumerate(W): ax = fig.add_subplot(num_widths, num_cols, n+1) #if (np.sum(w) != 0) | (np.sum(np.isnan(w) == True) > 0): # calculate sequence logo heights I = np.log2(4) + np.sum(w * np.log2(w+1e-7), axis=1, keepdims=True) logo = I*w L, A = w.shape counts_df = pd.DataFrame(data=0.0, columns=list('ACGT'), index=list(range(L))) for a in range(A): for l in range(L): counts_df.iloc[l,a] = logo[l,a] logomaker.Logo(counts_df, ax=ax) ax = plt.gca() ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.yaxis.set_ticks_position('none') ax.xaxis.set_ticks_position('none') plt.xticks([]) plt.yticks([]) plt.ylabel(motifnames[int(best_match[n])], fontsize=16) outfile = os.path.join(save_path, 'label_'+name+'.pdf') fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight') plt.close() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ... 
***CURRENTLY UNDER DEVELOPMENT*** ... # # ## Validation of the total water level # # inputs required: # * historical wave conditions # * emulator output - synthetic wave conditions of TWL # * emulator output - synthetic wave conditions of TWL with 3 scenarios of SLR # # # in this notebook: # * Comparison of the extreme distributions # # # + # #!/usr/bin/env python # -*- coding: utf-8 -*- # common import os import os.path as op # pip import numpy as np import xarray as xr from datetime import datetime import matplotlib.pyplot as plt # DEV: override installed teslakit import sys sys.path.insert(0, op.join(os.path.abspath(''), '..', '..', '..', '..')) # teslakit from teslakit.database import Database from teslakit.climate_emulator import Climate_Emulator from teslakit.extremes import Peaks_Over_Threshold as POT from teslakit.util.time_operations import xds_reindex_daily from teslakit.plotting.extremes import Plot_ReturnPeriodValidation_CC from teslakit.plotting.estela import Plot_DWTs_Probs from teslakit.plotting.wts import Plot_Probs_WT_WT from teslakit.plotting.outputs import Plot_LevelVariables_Histograms # - # # ## Database and Site parameters # + # -------------------------------------- # Teslakit database p_data = r'/Users/albacid/Projects/TeslaKit_projects' # offshore db = Database(p_data) db.SetSite('ROI') # climate change - S1 db_S1 = Database(p_data) db_S1.SetSite('ROI_CC_S1') # climate change - S2 db_S2 = Database(p_data) db_S2.SetSite('ROI_CC_S2') # climate change - S3 db_S3 = Database(p_data) db_S3.SetSite('ROI_CC_S3') # + # -------------------------------------- # Load complete hourly data for extremes analysis # Historical HIST_C_h = db.Load_HIST_OFFSHORE(vns=['TWL'],decode_times=True) # Simulation (1000 yrs) SIM_C_h = db.Load_SIM_OFFSHORE_all(vns=['TWL'], decode_times=True, use_cftime=True) # Simulation climate change S1 (100 yrs) SIM_C_h_CChange_S1 = db_S1.Load_SIM_OFFSHORE_all(decode_times=True, use_cftime=True) # Simulation climate change S2 (100 yrs) SIM_C_h_CChange_S2 = db_S2.Load_SIM_OFFSHORE_all(decode_times=True, use_cftime=True) # Simulation climate change S3 (100 yrs) SIM_C_h_CChange_S3 = db_S3.Load_SIM_OFFSHORE_all(decode_times=True, use_cftime=True) # - # Keep first 100 years of simulation without climate change SIM_C_h = SIM_C_h.isel(time=slice(0, len(SIM_C_h_CChange_S1.time))) # 100 years # # ## Level Variables (TWL) - Histograms # + from teslakit.plotting.outputs import axplot_compare_histograms from teslakit.plotting.config import _faspect, _fsize import matplotlib.gridspec as gridspec # Plot TWL histogram comparison between historical and simulated data for different SLR scenarios data_fit = HIST_C_h['TWL'].values[:]; data_fit = data_fit[~np.isnan(data_fit)] data_sim = SIM_C_h['TWL'].sel(n_sim = 0).values[:]; data_sim = data_sim[~np.isnan(data_sim)] data_sim_1 = SIM_C_h_CChange_S1['TWL'].sel(n_sim = 0).values[:]; data_sim_1 = data_sim_1[~np.isnan(data_sim_1)] data_sim_2 = SIM_C_h_CChange_S2['TWL'].sel(n_sim = 0).values[:]; data_sim_2 = data_sim_2[~np.isnan(data_sim_2)] data_sim_3 = SIM_C_h_CChange_S3['TWL'].sel(n_sim = 0).values[:]; data_sim_3 = data_sim_3[~np.isnan(data_sim_3)] # plot figure fig = plt.figure(figsize=(_faspect*_fsize, _fsize*2/2.3)) gs = gridspec.GridSpec(2, 2) n_bins = np.linspace(np.nanmin([np.nanmin(data_fit), np.nanmin(data_sim_3)]),np.nanmax([np.nanmax(data_fit), np.nanmax(data_sim_3)]), 40) ax = plt.subplot(gs[0, 0]) axplot_compare_histograms(ax, data_fit, data_sim, ttl='TWL', n_bins=n_bins, color_1='white', color_2='skyblue', 
alpha_1=0.9, alpha_2=0.7, label_1='Historical', label_2='Simulation') ax = plt.subplot(gs[0, 1]) axplot_compare_histograms(ax, data_sim, data_sim_1, ttl='TWL', n_bins=n_bins, color_1='white', color_2='skyblue', alpha_1=0.9, alpha_2=0.7, label_1='Simulation', label_2='Simulation Climate Change S1') ax = plt.subplot(gs[1, 0]) axplot_compare_histograms(ax, data_sim, data_sim_2, ttl='TWL', n_bins=n_bins, color_1='white', color_2='skyblue', alpha_1=0.9, alpha_2=0.7, label_1='Simulation', label_2='Simulation Climate Change S2') ax = plt.subplot(gs[1, 1]) axplot_compare_histograms(ax, data_sim, data_sim_3, ttl='TWL', n_bins=n_bins, color_1='white', color_2='skyblue', alpha_1=0.9, alpha_2=0.7, label_1='Simulation', label_2='Simulation Climate Change S3') # - # # ## TWL - Annual Maxima for different SLR scenarios # + # Plot TWL annual maxima # calculate Annual Maxima values for historical and simulated data hist_A = HIST_C_h['TWL'].groupby('time.year').max(dim='time') sim_A = SIM_C_h['TWL'].groupby('time.year').max(dim='time') # - # ### SLR S1 (intermediate low, +0.5m) # + sim_B = SIM_C_h_CChange_S1['TWL'].groupby('time.year').max(dim='time') # Return Period historical vs. simulations Plot_ReturnPeriodValidation_CC(hist_A, sim_A.transpose(), sim_B.transpose()); # - # ### SLR S2 (intermediate, +1m) # + sim_B = SIM_C_h_CChange_S2['TWL'].groupby('time.year').max(dim='time') # Return Period historical vs. simulations Plot_ReturnPeriodValidation_CC(hist_A, sim_A.transpose(), sim_B.transpose()); # - # ### SLR S3 (intermediate high, +1.5m) # # + sim_B = SIM_C_h_CChange_S3['TWL'].groupby('time.year').max(dim='time') # Return Period historical vs. simulations Plot_ReturnPeriodValidation_CC(hist_A, sim_A.transpose(), sim_B.transpose()); # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: fastai # language: python # name: fastai # --- #default_exp gui from nbdev.showdoc import show_doc # # Graphical user interface # # > Simple GUI with ipywidgets. # + #export import os, sys, shutil, time, zipfile, urllib, subprocess import ipywidgets as w, numpy as np, pandas as pd, IPython.display as d from IPython.utils.io import ask_yes_no from IPython.core.getipython import get_ipython from ipywidgets.embed import embed_minimal_html from pathlib import Path from fastcore.foundation import store_attr from fastcore.basics import GetAttr from fastai.data.transforms import get_files import segmentation_models_pytorch as smp import matplotlib.pyplot as plt from deepflash2.utils import unzip, get_label_fn from deepflash2.learner import EnsembleLearner, Config, _optim_dict from deepflash2.gt import GTEstimator from deepflash2.models import ARCHITECTURES, ENCODERS, get_pretrained_options from deepflash2.losses import LOSSES try: from google import colab COLAB = True except ImportError: COLAB = False GRID_COLS = 2 # - # ## Helper Functions # + #export tooltip_css = """""" def set_css_in_cell_output(): d.display(d.HTML(tooltip_css)) try: get_ipython().events.register('pre_run_cell', set_css_in_cell_output) except: pass # - #export def _html_wrap(name, tooltip='', url=None): 'Wrapper function to create html with tooltip and link to URL' if not url: return f'

    ⓘ {tooltip}

    {name}' else: open_tab = 'rel="noopener noreferrer" target="_blank"'#open new tab return f"""

    ⓘ {tooltip} More information.

    {name}""" #export def _connect_to_drive(path=None): "Connect to Google Drive and return path" path = path or Path('.') if COLAB: subprocess.call(['rm', '-rf', '/content/sample_data/']) with colab.output.temporary(): con = ask_yes_no("Connect to Google Drive?") if con: with colab.output.temporary(): colab.drive.mount('/content/drive/') path = Path('/content/drive/My Drive') return path #export def _get_model_list(n): return ["ensemble"]+[f'model_{x}' for x in range(1, n+1)] #export def _get_train_sample_data(path): path = Path(path) path.mkdir(exist_ok=True, parents=True) url = "https://github.com/matjesg/deepflash2/releases/download/model_library/wue1_cFOS_small.zip" urllib.request.urlretrieve(url, path/'sample_data_train.zip') unzip(path, path/'sample_data_train.zip') #export def _get_expert_sample_masks(path): path = Path(path) path.mkdir(exist_ok=True, parents=True) url = "https://github.com/matjesg/bioimage_analysis/raw/master/train_data/lab-wue1/labels/" experts = ['expert_'+str(e) for e in range(1,6)] images = [1,4,6,7,8] for i in images: for e in experts: (path/e).mkdir(exist_ok=True, parents=True) urllib.request.urlretrieve(f'{url}/{e}/000{i}_cFOS.png', path/e/f'000{i}_mask.png') # ## Widgets # ### Base Widgets #export class ZipUpload(): "Widgets upload and extract zip files" def __init__(self, path=None, layout=None): self.path = path or Path('.') layout = layout or w.Layout(width='100%') self.widget = w.FileUpload(description='Upload .zip', accept='.zip', multiple=True,layout=layout) self.widget.observe(self.extract_content, 'value') def extract_content(self, c): zip_file = self.path/'upload.zip' for fname in self.widget.value: content = self.widget.value[fname]['content'] with open(zip_file, 'wb') as f: f.write(content) unzip(self.path, zip_file) t = ZipUpload() t.widget #export class ItemsPerPage: "Dropdown to show n items per page" def __init__(self, path=None, plot_fn=None, items=dict(), srt=True, srt_by='name', srt_index=0, **kwargs): self.path = path or Path('.') self.plot_fn = plot_fn self.items = items self.kwargs = kwargs self.current_page = 0 # Widgets and Layout self.drp = w.Dropdown(options=[1,5,10,20,50,100], layout=w.Layout(width='auto', min_width='1px')) drp_lbl = w.HTML(' Items per Page') self.srt = w.Dropdown(index=srt_index, options=['ascending', 'descending'], layout=w.Layout(width='auto', min_width='1px')) self.srt.layout.display = "none" up_right = w.HBox([self.srt, drp_lbl, self.drp]) lyt = w.Layout(width='100px') self.nxt = w.Button(description='Next', layout=lyt) self.prv = w.Button(description='Previous', layout=lyt) self.lbl = w.Label() self.exp = w.Button(description='Export View', layout=lyt) up_left = w.HBox([self.prv, self.lbl, self.nxt, self.exp]) up = w.HBox([up_left, up_right], layout=w.Layout(justify_content='space-between')) self.out = w.Output() self.widget = w.VBox([up, self.out]) # Callbacks self.drp.observe(self.on_value_change, 'value') self.nxt.on_click(self.on_button_clicked) self.prv.on_click(self.on_button_clicked) self.exp.on_click(self.on_export_clicked) self.lbl.value = self.page_lbl self.on_srt_change({'new': self.srt.options[srt_index]}) if srt: self.srt.layout.display = "block" self.srt.observe(self.on_srt_change, 'value') self.srt.description= f'Sort by {srt_by}' @property def page_lbl(self): return f'Page {self.current_page+1} of {self.max_pages}' @property def max_pages(self): return int(np.ceil(len(self.items)/self.drp.value)) def on_srt_change(self, change): reverse = True if change['new']=='descending' else False 
self.items = dict(sorted(self.items.items(), key=lambda x: x[1], reverse=reverse)) self.on_button_clicked('') def on_value_change(self, change): self.on_button_clicked('') def on_button_clicked(self, b): if isinstance(b, w.Button): if all([b.description=='Next', self.current_page<(self.max_pages-1)]): self.current_page +=1 elif all([b.description=='Previous', self.current_page>0]): self.current_page -=1 else: return slc = slice(self.current_page*self.drp.value,(self.current_page+1)*self.drp.value) self.lbl.value = self.page_lbl with self.out: self.out.clear_output() if self.plot_fn: self.plot_fn(files=list(self.items.keys())[slc], **self.kwargs) plt.show() def on_export_clicked(self, b): embed_minimal_html(self.path/'export.html', views=[self.out], title='Output export') with self.out: print(f'Saved current view to {self.path/"export.html"}') if COLAB: colab.files.download(self.path/'export.html') t = ItemsPerPage(srt_index=1) t.widget #export class BaseParamWidget: 'Parameter Widget Base Class' def __init__(self, config=None): config = config or Config() self.set_config(config) for k,v in self.params.items(): setattr(v, 'name', k) v.observe(self.on_change, 'value') def set_config(self, config): self.config = config for k,v in self.params.items(): v.value = getattr(config, k) def on_change(self, change): setattr(self.config, change['owner'].name, change['new']) def on_reset_clicked(self, b): self.set_config(Config()) def on_close_clicked(self, b): self.widget.layout.display = "none" #export class BaseUI: 'Base UI for different steps' #_defaults = 'config' def __init__(self, config=None, path=None): self.config = config or Config() self.path = path or Path('.') def hide(self): self.main_box.layout.display = "none" self.sb_acc.layout.display = "none" def show(self): self.main_box.layout.display = "block" self.sb_acc.layout.display = "block" def sidebar_change(self, change): if change['name']=='selected_index': if change['old'] is not None: self.main[list(self.sb.keys())[change['old']]].layout.display = "none" if change['new'] is not None: self.main[list(self.sb.keys())[change['new']]].layout.display = "block" # #### Path Widgets #export # adapted from https://stackoverflow.com/questions/48056345/jupyter-lab-browsing-the-remote-file-system-inside-a-notebook class PathSelector(): "Widgets to browse and select files or directories" def __init__(self,start_dir,select_name='Select',select_file=False,tooltip=None): self.file = None self.select_file = select_file self.hidden = True self.tooltip = tooltip or 'Click to select directory' #Get (and create) dir path = Path(start_dir).resolve() path.mkdir(parents=True, exist_ok=True) self.start_dir, self.cwd, self.path = path, path, path #Path Button self.button = w.Button(tooltip=self.tooltip, layout=w.Layout(width='auto')) self.button.on_click(self.on_button_clicked) #Save Button self.button_select = w.Button(description='Save', tooltip='Save Selection', layout=w.Layout(width='auto')) self.button_select.on_click(self.on_button_select_clicked) #Reset Button self.button_reset = w.Button(description='Reset', layout=w.Layout(width='auto')) self.button_reset.on_click(self.on_button_reset_clicked) #Close Button self.button_close = w.Button(description='Close', layout=w.Layout(width='auto')) self.button_close.on_click(self.on_button_close_clicked) #SelectMultiple self.select = w.SelectMultiple(options=['init'],value=(),rows=10,description='', layout=w.Layout(min_width='50px')) self.select.observe(self.on_update,'value') #Display self.refresh(self.path) 
self.button.description = select_name self.hide() self.dialog = w.VBox([w.HBox([self.button_select, self.button_reset, self.button_close]), self.select]) def show(self): self.hidden = False self.button.button_style = "info" self.button_select.layout.display = "block" self.button_close.layout.display = "block" self.button_reset.layout.display = "block" self.select.layout.display = "block" def hide(self): self.hidden = True self.button.button_style = "" self.button_select.layout.display = "none" self.button_close.layout.display = "none" self.button_reset.layout.display = "none" self.select.layout.display = "none" def set_path(self, path): path = Path(path).resolve() path.mkdir(parents=True, exist_ok=True) self.start_dir, self.cwd, self.path = path, path, path self.refresh('') def on_button_clicked(self, b): self.refresh('') if self.hidden: self.show() else: self.hide() def on_button_select_clicked(self, b): self.path = self.cwd self.button.description = f'{self.path.name}' self.hide() def on_button_reset_clicked(self, b): self.cwd = self.start_dir self.refresh('') def on_button_close_clicked(self, b): self.hide() def on_update(self,change): if len(change['new']) > 0: self.refresh(change['new'][0]) def refresh(self,item): path = self.cwd/item if path.is_file(): if self.select_file: self.file = path else: self.select.value = () else: # os.path.isdir(path) self.file = None self.cwd = path.resolve() # Build list of files and dirs keys = ['[..]'] if self.cwd != self.start_dir else [] for item in sorted(path.iterdir()): if item.is_dir(): keys.append('['+item.name+']') else: keys.append(item.name) # Create list of output values vals = [] for k in keys: if k[0] == '[': vals.append(k[1:-1]) # strip off brackets else: vals.append(k) # Update widget self.select.options = list(zip(keys,vals)) with self.select.hold_trait_notifications(): self.select.value = () t = PathSelector('') display(t.button, t.dialog) #export class PathDownloads(PathSelector): "Widgets to browse and download files or directories" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) #Download Button if COLAB: self.button_select.description = 'Download' self.button_select.tooltip = 'Download entire folder or selected file' else: self.button_select.description = 'Download on Google Colab only' self.button_select.disabled = True def on_button_select_clicked(self, b): if not self.file: shutil.make_archive(self.cwd.name, 'zip', self.cwd) colab.files.download(f'{self.cwd.name}.zip') else: for f in self.select.value: colab.files.download(self.cwd/f) t = PathDownloads('', select_file=True) display(t.button, t.dialog) #export class PathConfig(PathSelector): "Widgets to browse and and load config file" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) #Download Button self.button_select.description = 'Load' self.button_select.tooltip = 'Load selected configuration file (.json)' #Ouput self.output = w.Output() self.dialog = w.VBox([self.dialog,self.output]) def on_button_select_clicked(self, b): pass t = PathConfig('', select_file=True) display(t.button, t.dialog) # ### Grund Truth Estimation Widgets # #### Sidebars #export _dgt = { 'exp' : ('Expert Masks*', 'The parent folder containing sub-folders with segmentation masks, one folder per expert.', 'https://matjesg.github.io/deepflash2/add_information.html#Ground-Truth-Estimation'), 'up_gt': ('Upload Data', 'Upload a zip file. 
It will be extracted automatically and must contain sub-folders with segmentation masks, one folder per expert.'), 'sd' : ('Sample Data', 'Sample data for demonstration and testing.'), 'staple': ("STAPLE", "Simultaneous truth and performance level estimation (STAPLE)", "https://pubmed.ncbi.nlm.nih.gov/15250643/"), 'mv' :("Majority Voting", "Pixelwise majority voting to obtain the reference segmentation.", "https://simpleitk.org/doxygen/latest/html/classitk_1_1simple_1_1LabelVotingImageFilter.html"), } #export class GTDataSB: 'Layout for Grund Truth Estimation "Data" Section' #Hints txt = 'Provide expert segmentation masks' hints = w.Label(txt) #Grid grid = w.GridspecLayout(5, GRID_COLS, width='100%', grid_gap="0px", align_items='center') grid[0, 0] = w.HTML(_html_wrap(*_dgt['exp'])) grid[2, :] = w.HTML('
    ') grid[3, 0] = w.HTML(_html_wrap(*_dgt['up_gt'])) grid[4, 0] = w.HTML(_html_wrap(*_dgt['sd'])) #Sample Data sd = w.Button(description='Load Sample Data',layout=w.Layout(width='auto'),tooltip='Click to download sample data') grid[4, 1:] = sd #Load Data run = w.Button(description='Load Data*', layout=w.Layout(width='auto')) grid[1, 1:] = run #Final Widget widget = w.VBox([hints, grid]) def __init__(self, path=None): path = path or Path('.') self.msk = PathSelector(path, 'Select Parent Folder') self.grid[0, 1:] = self.msk.button self.du = ZipUpload(path) self.grid[3, 1:] = self.du.widget t=GTDataSB() t.widget #export class GTEstSB: 'Layout for Ground Truth "Estimation" Section' #Hints txt = 'Select algorithm for ground truth estimation' hints = w.HTML(txt) grid = w.GridspecLayout(4, GRID_COLS, width='100%', grid_gap="0px", align_items='center') #Labels grid[0, 0] = w.HTML(_html_wrap(*_dgt['staple'])) grid[1, 0] = w.HTML(_html_wrap(*_dgt['mv'])) grid[2, :] = w.HTML('
    ') grid[3, 0] = w.HTML('Downloads') #Run Staple staple = w.Button(description='Run', layout=w.Layout(width='auto'), tooltip='Run simultaneous truth and performance level estimation') setattr(staple, 'name', 'STAPLE') grid[0, 1:] = staple #Run MV mv = w.Button(description='Run', layout=w.Layout(width='auto'), tooltip='Run Majority Voting') setattr(mv, 'name', 'majority_voting') grid[1, 1:] = mv #Final Widget widget = w.VBox([hints,grid]) def __init__(self, path=None): path = path or Path('.') self.down = PathDownloads(path, 'Select', tooltip='Click to download file or directory') self.grid[3, 1:] = self.down.button t=GTEstSB() t.widget #not used class GTSimSB: 'Layout for Expert Similarity "Data" Section' #Hints txt = 'Get statistics on segmentation similarities
    ' hints = w.HTML(txt) grid = w.GridspecLayout(4, GRID_COLS, width='100%', grid_gap="0px", align_items='center') #Labels #grid[0, :] = w.Label('Simultaneous truth & performance level estimation') grid[0, 0] = w.HTML("Inter Experts") grid[1, 0] = w.HTML("GT vs Experts") grid[2, :] = w.HTML('
    ') grid[3, 0] = w.Label('Downloads') #Intercoder sim inter = w.Button(description='Get statistics', layout=w.Layout(width='auto'), tooltip='Calculate intercoder (inter-expert) agreement') grid[0, 1:] = inter #GT vs Experts gt_exp = w.Button(description='Get statistics', layout=w.Layout(width='auto'), tooltip='Calculate agreement between expert segmentation masks and estimated ground truth') grid[1, 1:] = gt_exp #Final Widget widget = w.VBox([hints,grid]) def __init__(self, path=None): path = path or Path('.') self.down = PathDownloads(path, 'Select', tooltip='Click to download file or directory') self.grid[3, 1:] = self.down.button t=GTSimSB() t.widget # #### Grund Truth Estimation UI #export class GTEstUI(BaseUI): 'UI for ground truth estimation' sections = ['1 - Expert Annotations', '2 - Ground Truth Estimation', '3 - Intercoder Reliability'] def __init__(self, hide=False, **kwargs): super().__init__(**kwargs) #Sidebar self.sb = { 'data':GTDataSB(self.path), 'gt':GTEstSB(self.path) } #Sidebar Accordion self.sb_acc = w.Accordion(children=[x.widget for x in self.sb.values()], layout=w.Layout(grid_area='sidebar')) for i, name in enumerate(self.sections): self.sb_acc.set_title(i, name) self.sb_acc.observe(self.sidebar_change) #Main self.main = { 'msk':self.sb['data'].msk.dialog, 'gt_down':self.sb['gt'].down.dialog, **{k:w.Output() for k in self.sb.keys()} } self.main_box = w.VBox(list(self.main.values())) if hide: self.hide() t=GTEstUI() t.sb_acc # ### Train Widgets # #### Sidebars #export _dtrain = { 'img' : ('Image Folder*', 'One folder containing all training images.', 'https://matjesg.github.io/deepflash2/add_information.html#Training'), 'msk' : ('Mask Folder*', 'One folder containing all segmentation masks. We highly recommend using ground truth estimation from multiple experts.'), 'c' : ('No. of Classes', 'Number of classes: e.g., 2 for binary segmentation (foreground and background class).'), 'il' : ('Instance Labels', 'Are you providing instance labels (class-aware and instance-aware)?'), 'up' : ('Upload Data', 'Upload a zip file. It will be extracted automatically and must contain the correct folder structure.', 'https://matjesg.github.io/deepflash2/add_information.html#Training'), 'sd' : ('Sample Data', 'Get sample data for demonstration and testing.'), 'pretrained': ('Pretrained*', 'Select pretrained weights from the model libray or "new" to use an untrained model (random initialization).', 'https://matjesg.github.io/deepflash2/model_library.html'), 'n' : ('No. of Models', "Number of models within an ensemble; If you're experimenting with parameters, try only one model first; Depending on the data, ensembles should at least comprise 3-5 models"), 's' : ('Select', 'Train all models (ensemble) or (re-)train specific model.'), 'n_iter': ('Train Iterations', 'How many times a single model is trained on a mini-batch of the training data.', 'https://matjesg.github.io/deepflash2/add_information.html#Training-Epochs-and-Iterations'), 'base_lr': ('Learning Rate', '''Base learning rate for fine-tuning the model. The learning rate controls how quickly or slowly a neural network model learns. 
- Best learning rate depend on your data and other settings, e.g., the optimize \n - Use the learning rate finder to find the best learning rate for your dataset''', 'https://docs.fast.ai/callback.schedule.html#Learner.fine_tune'), 'lrf' : ('LR Finder', 'Click Open to get more information.'), 'mw' : ('Loss Function', 'Click "Customize" to get more information.'), 'ts' : ('Train Settings', 'Click "Customize" to get more information.'), 'cfg_load' : ('Configuration', 'Select configuration file (.json) from previous experiments.'), 'cfg_save' : ('Configuration', 'Save current configuration to file (.json).'), 'tta' : ('Use TTA', 'Enable test-time augmentation for prediction (more reliable and accurate, but slow).'), 's_val': ('Select', 'Select model for validation. Select "Ensemble" to validate all models.'), 'mdl' : ('Model Folder', '(Optional) One folder containing the all models of the ensemble. If not selected, latest models from Training will be selected.'), #'ood' : ('OOD Detection', '(Optional) For inference/prediction: Train a support-vector machine (SVM) for the detection of out-of-distribution (OOD) or anomalous data.'), 'cache' : ('Clear Cache', 'Delete cached files used for validation and visualization. This will not affect the final results.'), } #export class TrainDataSB(BaseParamWidget, GetAttr): 'Layout for "Training Data" Section' _default = 'config' #Hints txt = 'Provide training images and segmentation masks' hints = w.Label(txt) params = { 'c': w.IntSlider(value=2, min=2, max=10, step=1, layout=w.Layout(width='auto', min_width='1px')), 'il':w.ToggleButtons(options=[('Yes', True), ('No', False)],tooltips=['You are providing instance labels (class-aware and instance-aware)', 'You are not providing only class-aware labels']), } params['il'].style.button_width = '50px' grid = w.GridspecLayout(9, GRID_COLS, width='100%', grid_gap="0px", align_items='center') #Labels grid[0, 0] = w.HTML(_html_wrap(*_dtrain['img'])) grid[1, 0] = w.HTML(_html_wrap(*_dtrain['msk'])) grid[2, 0] = w.HTML(_html_wrap(*_dtrain['c'])) grid[2, 1:]= params['c'] grid[3, 0] = w.HTML(_html_wrap(*_dtrain['il'])) grid[3, 1:]= params['il'] grid[5, :] = w.HTML('
    ') grid[6, 0] = w.HTML(_html_wrap(*_dtrain['up'])) grid[7, 0] = w.HTML(_html_wrap(*_dtrain['sd'])) grid[8,0] = w.HTML(_html_wrap(*_dtrain['cfg_load'])) sd = w.Button(description='Load Sample Data',layout=w.Layout(width='auto'), tooltip='Click to download sample data') grid[7, 1:] = sd #Load Data run = w.Button(description='Load Data*', layout=w.Layout(width='auto', min_width='1px')) grid[4, 1:] = run #Final Widget widget = w.VBox([hints,grid]) def __init__(self, path=None, **kwargs): super().__init__(**kwargs) path = path or Path('.') self.img = PathSelector(path, 'Select') self.grid[0, 1:] = self.img.button self.msk = PathSelector(path, 'Select') self.grid[1, 1:] = self.msk.button #Load Config self.cfg = PathConfig(path, 'Select Config File', select_file=True) self.grid[8, 1:] = self.cfg.button #Data Upload self.du = ZipUpload(path) self.grid[6, 1:] = self.du.widget t=TrainDataSB('') t.widget #export class TrainModelSB(BaseParamWidget, GetAttr): 'Layout for "Ensemble Training"' _default = 'config' #Hints txt = 'Train Model Ensemble' hints = w.Label(txt) params = { #'pretrained': w.Dropdown(options=_pretrained, continuous_update=True, layout=w.Layout(width='auto', min_width='1px')), 'n': w.IntSlider(min=1, max=5, step=1, continuous_update=True, orientation='horizontal', layout=w.Layout(width='auto', min_width='1px')), 'n_iter':w.IntSlider(min=100, max=1e4, step=100, continuous_update=True,orientation='horizontal', layout=w.Layout(width='auto', min_width='1px')), 'base_lr': w.FloatText(description='', layout=w.Layout(width='auto', min_width='1px')) } sel = w.Dropdown(options=[],continuous_update=True, layout=w.Layout(width='auto', min_width='1px')) #Grid grid = w.GridspecLayout(10, GRID_COLS, width='100%', grid_gap="0px", align_items='center') #grid[0, 0] = w.Label('Model Arch') #grid[0, 0] = w.HTML(_html_wrap(*_dtrain['pretrained'])) #grid[0, 1:]= params['pretrained'] grid[0, 0] = w.HTML(_html_wrap(*_dtrain['n'])) grid[0, 1:]= params['n'] grid[1, 0] = w.HTML(_html_wrap(*_dtrain['n_iter'])) grid[1, 1:]= params['n_iter'] grid[2, 0] = w.HTML(_html_wrap(*_dtrain['s'])) grid[2, 1:]= sel #grid[4, 0] = w.Label('MAY TAKE SOME HOURS') grid[4, :] = w.HTML('
    ') grid[5, 0] = w.HTML(_html_wrap(*_dtrain['base_lr'])) grid[5, 1:]= params['base_lr'] grid[6, 0] = w.HTML(_html_wrap(*_dtrain['lrf'])) grid[7, 0] = w.HTML(_html_wrap(*_dtrain['mw'])) grid[8, 0] = w.HTML(_html_wrap(*_dtrain['ts'])) grid[9, 0] = w.HTML(_html_wrap(*_dtrain['cfg_save'])) #Run run = w.Button(description='Start Training', layout=w.Layout(width='auto')) grid[3, 1:] = run #LR Finder open_lrfinder = w.Button(description='Open', layout=w.Layout(width='auto')) grid[6, 1:] = open_lrfinder #Custom Mask Weights open_mw = w.Button(description='Customize', layout=w.Layout(width='auto')) grid[7, 1:] = open_mw #Custom Data Aug open_par = w.Button(description='Customize', layout=w.Layout(width='auto')) grid[8, 1:] = open_par #Custom Data Aug cfg_save = w.Button(description='Save Config', layout=w.Layout(width='auto')) grid[9, 1:] = cfg_save #Final Widget widget = w.VBox([hints,grid]) def __init__(self, **kwargs): super().__init__(**kwargs) self.sel.options = _get_model_list(self.config.n) self.params['n'].observe(self.sel_update, 'value') def sel_update(self, change): self.sel.options = _get_model_list(change['new']) t=TrainModelSB() t.widget #export class TrainValidSB(BaseParamWidget, GetAttr): 'Layout for "Validation" Section' _default = 'config' #Hints txt = 'Validate Model Ensemble' hints = w.Label(txt) params = { 'tta': w.ToggleButtons(options=[('Yes', True), ('No', False)], tooltips=['Enable Test-Time Augmentation','Disable Test-Time Augmentation']) } params['tta'].style.button_width = '50px' #Grid grid = w.GridspecLayout(9, GRID_COLS, width='100%', grid_gap="0px", align_items='center') grid[0, 0] = w.HTML(_html_wrap(*_dtrain['s_val'])) grid[1, 0] = w.HTML(_html_wrap(*_dtrain['tta'])) grid[1, 1:]= params['tta'] grid[3, :] = w.HTML('
    ') grid[4, :] = w.Label('Load existing models') grid[5, 0] = w.HTML(_html_wrap(*_dtrain['mdl'])) grid[6, :] = w.HTML('
    ') grid[7, 0] = w.HTML('Downloads') #grid[8, 0] = w.HTML(_html_wrap(*_dtrain['cache'])) #Model sel = w.Dropdown(continuous_update=True, layout=w.Layout(width='auto', min_width='1px')) grid[0, 1:] = sel #Res run = w.Button(description='Run Validation', layout=w.Layout(width='auto')) grid[2, 1:] = run #Res #ood = w.Button(description='Train OOD Model', layout=w.Layout(width='auto')) #grid[6, 1:] = ood #cache = w.Button(description='Clear', layout=w.Layout(width='auto')) #grid[8, 1:] = cache #Final Widget widget = w.VBox([hints,grid]) def __init__(self, path=None, **kwargs): super().__init__(**kwargs) path = path or Path('.') self.sel.options = _get_model_list(self.config.n) self.down = PathDownloads(path, 'Select', tooltip='Click to download models, predictions or validation results') self.grid[7, 1:] = self.down.button self.ens = PathSelector(path, 'Select') self.grid[5, 1:] = self.ens.button def sel_update(self, change): self.sel.options = _get_model_list(change['new']) t=TrainValidSB() t.widget # #### Pop-Up Widgets #export class LRWidget: 'Widget for Learning Rate Finder' #Start Button tt_start = 'Start the Learning Rate Finder' run = w.Button(description='Start', tooltip=tt_start) #Close Button tt_close = 'Close Learning Rate Finder. ATTENTION: This will not interrupt the execution.' button_close = w.Button(description='Close', tooltip=tt_close) #Lbl html = _html_wrap('More information', '', 'https://docs.fast.ai/callback.schedule.html#Learner.lr_find') lbl = w.HTML(html) #Output output = w.Output() #Box widget = w.Accordion(children=[w.VBox([w.HBox([run, button_close]), lbl, output])]) widget.set_title(0, 'Learning Rate Finder') def __init__(self): self.widget.layout.display = "none" self.button_close.on_click(self.on_close_clicked) def on_close_clicked(self, b): self.widget.layout.display = "none" t = LRWidget() t.widget.layout.display = "block" t.widget #export class BasePopUpParamWidget(BaseParamWidget): 'Parameter Pop-Up Widget Base Class' #Reset Button tt_reset = 'Reset to defaults' button_reset = w.Button(description='Reset', tooltip=tt_reset) #Close Button tt_close = 'Close Widget' button_close = w.Button(description='Close', tooltip=tt_close) def __init__(self, **kwargs): super().__init__(**kwargs) self.button_reset.on_click(self.on_reset_clicked) self.button_close.on_click(self.on_close_clicked) # + #export _dparam= { 'arch' : ('Model Architecture', 'The architecture of the deep learning model.', 'https://matjesg.github.io/deepflash2/models.html'), 'encoder_name' : ('Encoder', 'Encoders for architectures from Segmentation Models Pytorch.', 'https://github.com/qubvel/segmentation_models.pytorch#encoders-'), 'encoder_weights' : ('Encoder Weights', 'Pretrained encoder weights for architectures from Segmentation Models Pytorch.', 'https://github.com/qubvel/segmentation_models.pytorch#encoders-'), 'bs' : ('Mini-Batch Size', 'The number of samples that will be propagated through the network during one iteration; 4 works best in our experiments; 4-8 works well for mixed precision training'), 'mpt' : ('Mixed Precision Training', 'Mixed Precision Training', 'https://docs.fast.ai/callback.fp16.html'), 'wd' : ('Weight Decay', 'Weight Decay.', 'https://arxiv.org/abs/1711.05101'), 'optim' : ('Optimizer', 'Optimizer.', 'https://docs.fast.ai/optimizer.html'), 'sample_mult' : ('Sample Multiplier', 'Defines how many tiles are sampled from each image per epoch. 
Set to 0 for auto-mode.'), 'flip' : ('Flip', 'Randomly flip a training image.', 'https://matjesg.github.io/deepflash2/data.html#Data-augmentation'), 'rot': ('Rotation (max. degrees)', 'Randomly rotate a training image up to max. degrees.', 'https://matjesg.github.io/deepflash2/data.html#Data-augmentation'), 'gamma_limit_lower': ('Gamma (lower limit)', 'Random Gamma augmentation lower gamma limit.', 'https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.RandomGamma'), 'gamma_limit_upper': ('Gamma (upper limit)', 'Random Gamma augmentation upper gamma limit.', 'https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.RandomGamma'), 'brightness_limit': ('Brightness (limit)', 'Factor range for changing brightness.', 'https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.RandomBrightnessContrast'), 'contrast_limit': ('Contrast (limit)', 'Factor range for changing contrast.', 'https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.RandomBrightnessContrast'), 'CLAHE_clip_limit': ('CLAHE (clip limit)', 'Contrast Limited Adaptive Histogram Equalization (CLAHE). Set upper threshold value for contrast limiting.', 'https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.CLAHE'), 'distort_limit': ('GridDistortion (limit)', 'GridDistortion in the range (-distort_limit, distort_limit).', 'https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.GridDistortion'), #'zoom_sigma': ('Zoom standard deviation', 'Standard deviation of the Guassian that zooms in or out of the image.', 'https://matjesg.github.io/deepflash2/data.html#RandomTileDataset'), } # - #export class ParamWidget(BasePopUpParamWidget, GetAttr): 'Widget for custom training parameters' _default = 'config' params = { 'arch' : w.Dropdown(options=ARCHITECTURES, layout=w.Layout(width='auto', min_width='1px')), 'encoder_name' : w.Dropdown(options=ENCODERS, layout=w.Layout(width='auto', min_width='1px')), 'encoder_weights' : w.Dropdown(options=['imagenet'],layout=w.Layout(width='auto', min_width='1px'), continuous_update=True), 'bs':w.IntSlider(min=2, max=32, step=2,layout=w.Layout(width='auto', min_width='1px')), 'mpt': w.ToggleButtons(options=[('Yes', True), ('No', False)], tooltips=['Enable Mixed-Precision Training','Disable Mixed-Precision Training']), 'wd':w.FloatText(min=0, max=1,layout=w.Layout(width='auto', min_width='1px')), 'optim':w.Dropdown(options=_optim_dict.keys(), layout=w.Layout(width='auto', min_width='1px')), 'sample_mult' : w.IntText(layout= w.Layout(width='auto')), 'flip':w.ToggleButtons(options=[('Yes', True), ('No', False)]), 'rot':w.IntSlider(min=0, max=360, step=5, layout=w.Layout(width='auto', min_width='1px')), #'zoom_sigma':w.FloatSlider(min=0, max=1, layout=w.Layout(width='auto', min_width='1px')),#w.FloatText(layout=w.Layout(width='auto', min_width='1px')) 'gamma_limit_lower':w.IntSlider(min=0, max=255, step=10, layout=w.Layout(width='auto', min_width='1px')), 'gamma_limit_upper':w.IntSlider(min=0, max=255, step=10, layout=w.Layout(width='auto', min_width='1px')), 'brightness_limit':w.FloatSlider(min=0, max=1, layout=w.Layout(width='auto', min_width='1px')), 'contrast_limit':w.FloatSlider(min=0, max=1, layout=w.Layout(width='auto', min_width='1px')), 'CLAHE_clip_limit':w.FloatSlider(min=0, 
max=1, layout=w.Layout(width='auto', min_width='1px')), 'distort_limit':w.FloatSlider(min=0, max=1, layout=w.Layout(width='auto', min_width='1px')), } params['mpt'].style.button_width = '25px' params['flip'].style.button_width = '25px' #Close Button tt_show = 'Show example' button_show = w.Button(description='Show', tooltip=tt_show) #Hint lbl = w.Label('Settings are saved automatically') #Grid grid = w.GridspecLayout(18, 2, width='400px', grid_gap="0px", align_items='center') i=0 for k in params: grid[i, 0] = w.HTML(_html_wrap(*_dparam[k])) grid[i, 1] = params[k] i += 1 if i==8: grid[i, :] = w.HTML('
    ') i += 1 grid[i, :] = w.HTML('Data Augmentation') i += 1 def __init__(self, **kwargs): super().__init__(**kwargs) self.widget = w.Accordion(children=[w.VBox([w.HBox([self.button_reset, self.button_close]),self.lbl,self.grid])]) self.widget.set_title(0, 'Training Parameters') self.widget.layout.display = "none" self.params['encoder_name'].observe(self.on_encoder_change, 'value') def on_encoder_change(self, change): self.params['encoder_weights'].options = get_pretrained_options(change['new']) t=ParamWidget(config=Config(bs=8)) t.widget.layout.display = "block" t.widget #export _dmw= { 'loss': ('Loss Function', 'Objective function to minimize during training.', 'https://matjesg.github.io/deepflash2/losses.html'), 'loss_alpha' :('alpha', 'Loss coefficient for FocalLoss and TverskyLoss.', 'https://kornia.readthedocs.io/en/latest/losses.html#kornia.losses.TverskyLoss'), 'loss_beta' :('beta', 'Loss coefficient for TverskyLoss.', 'https://kornia.readthedocs.io/en/latest/losses.html#kornia.losses.TverskyLoss'), 'loss_gamma' :('gamma', 'FocalLoss power factor for dampening weight (focal strength).', 'https://github.com/qubvel/segmentation_models.pytorch/blob/master/segmentation_models_pytorch/losses/focal.py'), 'loss_smooth_factor': ('smooth_factor', 'SoftCrossEntropyLoss factor to smooth target (e.g. if smooth_factor=0.1 then [1, 0, 0] -> [0.9, 0.05, 0.05])', 'https://github.com/qubvel/segmentation_models.pytorch/blob/master/segmentation_models_pytorch/losses/soft_ce.py'), } #export class MWWidget(BasePopUpParamWidget, GetAttr): 'Widget to customize loss functions' _default = 'config' params = { 'loss' : w.Dropdown(options=LOSSES, layout=w.Layout(width='auto', min_width='1px')), 'loss_alpha' : w.FloatText(layout= w.Layout(width='auto')), 'loss_beta' : w.FloatText(layout= w.Layout(width='auto')), 'loss_gamma' : w.FloatText(layout= w.Layout(width='auto')), 'loss_smooth_factor': w.FloatText(layout= w.Layout(width='auto')) } out = w.Output() #Hint lbl = w.HTML('Settings are saved automatically.') #Grid grid = w.GridspecLayout(5, 2, width='400px', grid_gap="0px", align_items='center') i=0 for k in params: grid[i, 0] = w.HTML(_html_wrap(*_dmw[k])) grid[i, 1] = params[k] i += 1 def __init__(self, **kwargs): super().__init__(**kwargs) self.widget = w.Accordion(children=[w.VBox([w.HBox([self.button_reset, self.button_close]), self.lbl, self.grid, self.out])] ) self.widget.set_title(0, 'Loss Function') self.widget.layout.display = "none" self.params['loss'].observe(self.on_arch_change, 'value') self.on_arch_change({'new':'CrossEntropyDiceLoss'}) def on_arch_change(self, change): enable = ['loss'] if change['new'] == 'FocalLoss': enable += ['loss_gamma', 'loss_alpha'] elif change['new'] == 'TverskyLoss': enable += ['loss_beta', 'loss_alpha'] elif change['new'] == 'SoftCrossEntropyLoss': enable += ['loss_smooth_factor'] for k,v in self.params.items(): if k in enable: v.disabled = False else: v.disabled = True t=MWWidget(config=Config()) t.widget.layout.display = "block" t.widget # #### Train UI #export class TrainUI(BaseUI): 'UI for ensemble training' sections = ['1 - Data', '2 - Ensemble Training', '3 - Validation'] def __init__(self, hide=False, **kwargs): super().__init__(**kwargs) #Sidebar self.sb = { 'data':TrainDataSB(path=self.path, config=self.config), 'train':TrainModelSB(config=self.config), 'valid':TrainValidSB(path=self.path, config=self.config), } self.sb['train'].open_lrfinder.on_click(self.open_lrfinder) self.sb['train'].open_mw.on_click(self.open_mw) 
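# These button hook-ups (open_lrfinder, open_mw above, and open_par on the next line) un-hide the
# matching pop-up widgets (learning rate finder, loss function, training parameters) by setting
# their layout.display to "block" in the open_* callbacks defined further down; each pop-up hides
# itself again via its own Close button.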
self.sb['train'].open_par.on_click(self.open_par) #Update model options from train in valid self.sb['train'].params['n'].observe(self.sb['valid'].sel_update, 'value') #Sidebar Accordion self.sb_acc = w.Accordion(children=[x.widget for x in self.sb.values()], layout=w.Layout(grid_area='sidebar')) for i, name in enumerate(self.sections): self.sb_acc.set_title(i, name) self.sb_acc.observe(self.sidebar_change) #Extra self.xtr = { 'lr':LRWidget(), 'mw':MWWidget(config=self.config), 'param':ParamWidget(config=self.config) } #Main self.main = { 'img':self.sb['data'].img.dialog, 'msk':self.sb['data'].msk.dialog, 'cfg':self.sb['data'].cfg.dialog, 'lr':self.xtr['lr'].widget, 'mw':self.xtr['mw'].widget, 'param':self.xtr['param'].widget, 'ens':self.sb['valid'].ens.dialog, 'down':self.sb['valid'].down.dialog, **{k:w.Output() for k in self.sb.keys()} } self.main_box = w.VBox(list(self.main.values())) if hide: self.hide() def set_config(self, config): self.sb['data'].set_config(config) self.sb['train'].set_config(config) self.xtr['param'].set_config(config) self.xtr['mw'].set_config(config) def open_lrfinder(self, b): self.main['lr'].layout.display = "block" def open_mw(self, b): self.main['mw'].layout.display = "block" def open_par(self, b): self.main['param'].layout.display = "block" t=TrainUI('') t.sb_acc # ### Prediction Widgets # #### Sidebars #export _dpred = { 'img_pred' : ('Image Folder*', 'One folder containing all new images for prediction.', 'https://matjesg.github.io/deepflash2/add_information.html#Prediction'), 'mdl' : ('Model Folder', 'One folder containing the all models of the ensemble. If not selected, latest models from Training will be selected.'), 'msk_pred' : ('Test Mask Folder', 'For testing on new data. One folder containing all segmentation masks.'), 'up_pred' : ('Upload Data', 'Upload a zip file with images or models.'), 's_pred' : ('Select', 'Ensemble or model to be used for prediction.'), 'min_pixel_export': ('Minimum Size', 'Minimumen ROI size (number of pixels) to be included in ImageJ export'), 'imagej' : ('ImageJ Export', 'Export predicted masks to ImageJ ROI sets. '), 'cache' : ('Clear Cache', 'Delete cached files used for ensembling and visualization. This will not affect the final results.'), } #export class PredInputSB: 'Layout for "Data and Ensemble" Section' #Hints txt = 'Provide new images and models' hints = w.Label(txt) grid = w.GridspecLayout(6, GRID_COLS, width='100%', grid_gap="0px", align_items='center') #Labels grid[0, 0] = w.HTML(_html_wrap(*_dpred['img_pred'])) grid[1, 0] = w.HTML(_html_wrap(*_dpred['mdl'])) grid[3, :] = w.HTML('
    ') grid[4, 0] = w.HTML(_html_wrap(*_dpred['msk_pred'])) grid[5, 0] = w.HTML(_html_wrap(*_dpred['up_pred'])) #Load Data run = w.Button(description='Load*', layout=w.Layout(width='auto', min_width='1px')) grid[2, 1:] = run #Final Widget widget = w.VBox([hints,grid]) def __init__(self, path=None): path = path or Path('.') self.img = PathSelector(path, 'Select') self.grid[0, 1:] = self.img.button self.ens = PathSelector(path, 'Select') self.grid[1, 1:] = self.ens.button self.msk = PathSelector(path, 'Select') self.grid[4, 1:] = self.msk.button #Data Upload self.du = ZipUpload(path) self.grid[5, 1:] = self.du.widget t=PredInputSB('') t.widget #export class PredSB(BaseParamWidget, GetAttr): 'Layout for "Prediction and Quality Control" Section' _default = 'config' #Hints txt = 'Predict segmentations' hints = w.Label(txt) params = { 'pred_tta': w.ToggleButtons(options=[('Yes', True), ('No', False)], tooltips=['Enable Test-Time Augmentation','Disable Test-Time Augmentation']), 'min_pixel_export' : w.IntText(layout= w.Layout(width='auto')), } params['pred_tta'].style.button_width = '50px' #Grid grid = w.GridspecLayout(6, GRID_COLS, width='100%', grid_gap="0px", align_items='center') #grid[0, 0] = w.HTML(_html_wrap(*_dpred['s_pred'])) grid[0, 0] = w.HTML(_html_wrap(*_dtrain['tta'])) grid[0, 1:] = params['pred_tta'] grid[2, :] = w.HTML('
    ') grid[3, 0] = w.HTML(_html_wrap(*_dpred['min_pixel_export'])) grid[3, 1:] = params['min_pixel_export'] grid[4, 0] = w.HTML(_html_wrap(*_dpred['imagej'])) grid[5, 0] = w.HTML('Downloads') #grid[6, 0] = w.HTML(_html_wrap(*_dpred['cache'])) #Res run = w.Button(description='Run Prediction*', layout=w.Layout(width='auto')) grid[1, 1:] = run #Res rois = w.Button(description='Export ROIs', layout=w.Layout(width='auto')) grid[4, 1:] = rois #clear #cache = w.Button(description='Clear', layout=w.Layout(width='auto')) #grid[6, 1:] = cache #Final Widget widget = w.VBox([hints,grid]) def __init__(self, path=None, **kwargs): super().__init__(**kwargs) path = path or Path('.') self.down = PathDownloads(path, 'Select', tooltip='Click to download') self.grid[5, 1:] = self.down.button t=PredSB() t.widget # #### Prediction UI #export class PredUI(BaseUI): 'UI for prediction of new data' sections = ['1 - Data and Ensemble', '2 - Prediction and Quality Control'] def __init__(self, hide=False, **kwargs): super().__init__(**kwargs) #Sidebar self.sb = { 'data':PredInputSB(path=self.path), 'pred':PredSB(path=self.path, config=self.config), } #Sidebar Accordion self.sb_acc = w.Accordion(children=[x.widget for x in self.sb.values()], layout=w.Layout(grid_area='sidebar')) for i, name in enumerate(self.sections): self.sb_acc.set_title(i, name) self.sb_acc.observe(self.sidebar_change) #Main self.main = { 'img':self.sb['data'].img.dialog, 'ens':self.sb['data'].ens.dialog, 'msk':self.sb['data'].msk.dialog, 'down':self.sb['pred'].down.dialog, **{k:w.Output() for k in self.sb.keys()} } self.main_box = w.VBox(list(self.main.values())) if hide: self.hide() def open_lrfinder(self, b): self.main['lr'].layout.display = "block" t=PredUI('') t.sb_acc # ## GUI #export class GUI(GetAttr): 'GUI for deepflash2' _default = 'config' #Header and Footer head = "


    " header = w.HTML(value=head, layout=w.Layout(width='auto', grid_area='header')) #foot = "
     
    " foot = "
    " footer = w.HTML(value=foot, layout=w.Layout(width='100%', grid_area='footer')) #Category Buttons cat_btns = { 'gt':w.Button(description='GT Estimation', layout=w.Layout(flex='1 1 0%',width='auto'), tooltip='Derive reference segmentations from segmentations of multiple experts.'), 'train':w.Button(description='Training', layout=w.Layout(flex='1 1 0%',width='auto'), tooltip='Train deep learning ensembles for segmentation of fluorescent labels in microscopy images.'), 'pred':w.Button(description='Prediction', layout=w.Layout(flex='1 1 0%', width='auto'), tooltip='Predict segmentation masks on new data using customized models') } cat_btns_box = w.Box(children=list(cat_btns.values()), layout=w.Layout(grid_area='cat_btns')) def __init__(self, path=Path('.'), reinit=False): self.config = Config() self.base_path = path.resolve() self.config.proj_dir = str(self.base_path/self.proj_dir) if self.proj_dir=='deepflash2' else self.proj_dir #Project Dir self.proj = PathSelector(self.base_path, 'Select Project Folder', tooltip='Project Folder \ndefault: ') self.proj.button.button_style='info' self.proj.button_select.on_click(self.set_project_dir) #Click Category Buttons for v in self.cat_btns.values(): v.on_click(self.cat_clicked) #Categories self.gt = GTEstUI(hide=True, path=self.proj_path) self.train = TrainUI(hide=True,path=self.proj_path, config=self.config) self.pred = PredUI(hide=True, path=self.proj_path, config=self.config) self.cat = {'gt':self.gt, 'train':self.train, 'pred':self.pred} #Connect Buttons ## GT Estimation self.gt.sb['data'].run.on_click(self.gt_data_run_clicked) self.gt.sb['data'].sd.on_click(self.gt_data_sd_clicked) self.gt.sb['gt'].staple.on_click(self.gt_ref_clicked) self.gt.sb['gt'].mv.on_click(self.gt_ref_clicked) self.gt_to_train = w.Button(description='Continue to training with estimated ground truth.', layout=w.Layout(width='auto')) self.gt_to_train.on_click(self.gt_to_train_clicked) ## Train self.train.sb['data'].run.on_click(self.train_data_run_clicked) self.train.sb['data'].sd.on_click(self.train_data_sd_clicked) self.train.sb['data'].cfg.button_select.on_click(self.train_data_load_cfg_clicked) self.train.sb['train'].run.on_click(self.train_run_clicked) self.train.sb['train'].cfg_save.on_click(self.train_cfg_save_clicked) self.train.sb['valid'].run.on_click(self.train_valid_run_clicked) self.train.sb['valid'].ens.button_select.on_click(self.train_valid_ens_save_clicked) #self.train.sb['valid'].cache.on_click(self.train_valid_cache_clicked) self.train.xtr['lr'].run.on_click(self.lr_start_clicked) ## Pred self.pred.sb['data'].run.on_click(self.pred_data_run_clicked) self.pred.sb['data'].msk.button_select.on_click(self.pred_data_msk_save_clicked) self.pred.sb['pred'].run.on_click(self.pred_run_clicked) self.pred.sb['pred'].rois.on_click(self.pred_rois_clicked) #self.pred.sb['pred'].cache.on_click(self.pred_cache_clicked) # Init Project self._init_proj() self.star_info = w.Label('*Required') self.star_info.layout.display = "none" box_sb = w.VBox(children=[x.sb_acc for x in self.cat.values()]+[self.star_info], layout=w.Layout(grid_area='sidebar')) box_main = w.VBox(children=[x.main_box for x in self.cat.values()], layout=w.Layout(grid_area='main')) box_proj = w.VBox(children=[self.proj.button, self.proj.dialog], layout=w.Layout(grid_area='proj', flex_flow='column', align_items='flex-start')) self.tmp = w.Output(layout=w.Layout(grid_area='tmp')) self.gb = w.GridBox(children=[self.header, self.cat_btns_box, box_proj, box_sb, box_main, self.footer, self.tmp], 
layout=w.Layout( width='100%', grid_template_columns='350px auto', grid_gap='0px 0px', grid_template_areas=''' "header header" "cat_btns proj" "sidebar main" "footer footer" "tmp tmp" ''')) display(self.gb) @property def proj_path(self): return Path(self.proj_dir) def _set_download_dirs(self): self.gt.sb['gt'].down.set_path(self.proj_path/self.gt_dir) self.train.sb['valid'].down.set_path(self.proj_path/self.train_dir) self.pred.sb['pred'].down.set_path(self.proj_path/self.pred_dir) #Models dir self.pred.sb['data'].ens.path = self.proj_path/self.train_dir/self.ens_dir def _set_selection_dirs(self): self.gt.sb['data'].msk.set_path(self.proj_path) self.train.sb['data'].img.set_path(self.proj_path) self.train.sb['data'].msk.set_path(self.proj_path) self.train.sb['data'].cfg.set_path(self.proj_path) self.pred.sb['data'].img.set_path(self.proj_path) self.pred.sb['data'].msk.set_path(self.proj_path) self.train.sb['valid'].ens.set_path(self.proj_path) self.pred.sb['data'].ens.set_path(self.proj_path) def _init_proj(self): self.el, self.el_pred, self.gt_est = None, None, None self._set_download_dirs() self.test_masks_provided = False def set_project_dir(self, b): self.proj.button.button_style='' self.config.proj_dir = str(self.proj.path) self._set_selection_dirs() self._init_proj() def cat_clicked(self, b): self.star_info.layout.display = "block" for k,v in self.cat_btns.items(): if v==b: v.button_style = 'info' self.cat[k].show() else: v.button_style = '' self.cat[k].hide() def set_config(self, config): self.config=config self.train.set_config(config) # GT Estimation def gt_data_run_clicked(self, b): out = self.gt.main['data'] out.clear_output() exp_folder = self.gt.sb['data'].msk.path.relative_to(self.proj_path) with out: self.gt_est = GTEstimator(exp_folder, config=self.config, path=self.proj_path) items = {x:x for x in self.gt_est.masks.keys()} ipp = ItemsPerPage(self.proj_path, self.gt_est.show_data, items=items) display(ipp.widget) def gt_data_sd_clicked(self, b): out = self.gt.main['data'] out.clear_output() with out: path = self.proj_path/'sample_data'/'expert_segmentations' print(f'Dowloading expert sample data into {path.relative_to(self.proj_path)}...') _get_expert_sample_masks(path) self.gt.sb['data'].msk.path = path self.gt_data_run_clicked('') def gt_to_train_clicked(self, b): self.train.sb['data'].msk.refresh(self.gt_save_dir) self.train.sb['data'].msk.on_button_select_clicked("") self.cat_clicked(self.cat_btns['train']) def gt_ref_clicked(self, b): out = self.gt.main['gt'] out.clear_output() self.gt_save_dir = (self.proj_path/self.gt_dir/b.name).relative_to(self.proj_path) with out: assert type(self.gt_est)==GTEstimator, 'Please load data first!' print('Please watch the logs below. 
The final results will be printed here.') if COLAB: with colab.output.temporary(): print('Temporary Logs:') self.gt_est.gt_estimation(method=b.name, save_dir=self.gt_save_dir) else: with self.tmp: print('Temporary Logs:') self.gt_est.gt_estimation(method=b.name, save_dir=self.gt_save_dir) self.tmp.clear_output() with out: print('Ground Truth Estimation finished:') print(f'- {b.name} segmentation masks saved to folder: {self.gt_save_dir}.') print(f'- {b.name} similiarity results are saved to folder: {self.gt_dir}.') print(f'- You can download masks and results in the "Downloads" Section.') display(self.gt_to_train) print(f'--------------------------------------------------------------') items = {x:x for x in self.gt_est.masks.keys()} ipp = ItemsPerPage(self.proj_path, self.gt_est.show_gt, items=items, method=b.name) display(ipp.widget) # Train def train_data_run_clicked(self, b): out = self.train.main['data'] out.clear_output() with out: print('Loading data. Please wait') image_folder = self.train.sb['data'].img.path.relative_to(self.proj_path) mask_folder = self.train.sb['data'].msk.path.relative_to(self.proj_path) ens_folder = self.proj_path/self.train_dir/self.ens_dir with out: self.el = EnsembleLearner(image_folder, mask_folder, self.config, self.proj_path, ens_folder) items = {x:x for x in self.el.files} ipp = ItemsPerPage(self.proj_path, self.el.ds.show_data, items=items, overlay=True if self.c==2 else False) display(ipp.widget) def train_data_sd_clicked(self, b): out = self.train.main['data'] out.clear_output() with out: path = self.proj_path/'sample_data' print(f'Dowloading sample data into {path.relative_to(self.proj_path)}') _get_train_sample_data(path) self.train.sb['data'].img.path = path/'images' self.train.sb['data'].msk.path = path/'masks' self.train_data_run_clicked(b) def train_data_load_cfg_clicked(self, b): cfg = self.train.sb['data'].cfg cfg.output.clear_output() with cfg.output: path = cfg.cwd/cfg.select.value[0] self.config.load(path) for ui in [*self.train.sb.values(),*self.train.xtr.values()]: if hasattr(ui, 'set_config'): ui.set_config(self.config) time.sleep(3) cfg.output.clear_output() def train_cfg_save_clicked(self, b): timestr = time.strftime("%Y%m%d%H%M") path = self.proj_path/self.train_dir/f'config_{timestr}' with self.train.main['train']: self.config.save(path) def train_run_clicked(self, b): out = self.train.main['train'] out.clear_output() with out: assert type(self.el)==EnsembleLearner, 'Please load data first!' print('Starting training. This may take up to a few hours - depending on your hardware, the number of models, and training iterations.') print('Please watch the logs below. The final results will be printed here.') sel = self.train.sb['train'].sel.value self.el.set_n(self.n) for i in range(1, self.n+1): if (sel != 'ensemble') and (sel != f'model_{i}'): continue with out: print(f'Training of model {i}') if COLAB: with colab.output.temporary(): print('Temporary Logs:') self.el.fit(i) else: with self.tmp: print('Temporary Logs:') self.el.fit(i) self.tmp.clear_output() with out: print(f'Metrics for model {i} of {self.n}') self.el.recorder[i].plot_metrics() def train_valid_run_clicked(self, b): out = self.train.main['valid'] out.clear_output() sel = self.train.sb['valid'].sel.value model_no = None if sel == 'ensemble' else int(sel[6:]) export_dir = self.proj_path/self.train_dir/self.val_dir with out: assert type(self.el)==EnsembleLearner, 'Please load data first!' assert len(self.el.models)>0, 'Please train models first!' print(f'Validating {sel}. 
This may take a while!') if COLAB: with colab.output.temporary(): print('Temporary Logs:') df_val = self.el.get_valid_results(model_no, export_dir=export_dir, use_tta=self.tta) else: with self.tmp: print('Temporary Logs:') df_val = self.el.get_valid_results(model_no, export_dir=export_dir, use_tta=self.tta) self.tmp.clear_output() out.clear_output() with out: items = {x.file:f'{x.model_no}_{x.file}' for _,x in df_val.iterrows()} ipp = ItemsPerPage(self.proj_path,self.el.show_valid_results, items=items, srt_by='model/name') display(ipp.widget) def train_valid_ens_save_clicked(self, b): path = self.train.sb['valid'].ens.path with self.train.main['valid']: self.el.load_ensemble(path) def train_valid_cache_clicked(self, b): with self.train.main['valid']: self.el.clear_tmp() #Lr finder def lr_open(self,b): self.lrfinder.widget.layout.display = "block" def lr_start_clicked(self, b): out = self.train.xtr['lr'].output out.clear_output() with out: print('Starting Learning Rate Finder.') print('Please watch the logs below. The final results will be printed here.') if COLAB: with colab.output.temporary(): print('Temporary Logs:') sug_lrs, rec = self.el.lr_find(show_plot=False) else: with self.tmp: print('Temporary Logs:') sug_lrs, rec = self.el.lr_find(show_plot=False) self.tmp.clear_output() with out: print(sug_lrs) rec.plot_lr_find() plt.show() #Mask weights #def mw_open(self,b): # self.mw.widget.layout.display = "block" #Train Settings def par_open(self,b): self.par.widget.layout.display = "block" # Prediction def pred_data_msk_save_clicked(self, b): self.test_masks_provided = True def pred_data_run_clicked(self, b): out = self.pred.main['data'] out.clear_output() with out: print('Loading data. Please wait') image_dir = self.pred.sb['data'].img.path.relative_to(self.proj_path) ens_dir = self.pred.sb['data'].ens.path mask_dir = self.pred.sb['data'].msk.path.relative_to(self.proj_path) if self.test_masks_provided else None with out: self.el_pred = EnsembleLearner(image_dir, mask_dir=mask_dir, config=self.config, path=self.proj_path, ensemble_dir=ens_dir) self.el_pred.load_ensemble() items = {x:x for x in self.el_pred.files} ipp = ItemsPerPage(self.proj_path, self.el_pred.ds.show_data, items=items) display(ipp.widget) def pred_run_clicked(self, b, use_tta=None, show=True): use_tta = use_tta or self.pred_tta out = self.pred.main['pred'] out.clear_output() export_dir = self.pred.sb['pred'].down.path with out: assert type(self.el_pred)==EnsembleLearner, 'Please load data first!' print('Predicting segmentation masks. 
Please wait...') if COLAB: with colab.output.temporary(): print('Temporary Logs:') self.el_pred.get_ensemble_results(self.el_pred.files, export_dir=export_dir, use_tta=use_tta) else: with self.tmp: print('Temporary Logs:') self.el_pred.get_ensemble_results(self.el_pred.files, export_dir=export_dir, use_tta=use_tta) self.tmp.clear_output() if show: with out: if self.test_masks_provided: print('Calculating metrics...') self.el_pred.score_ensemble_results(label_fn=self.el_pred.label_fn) print(f'Saving scoring results to {export_dir}') self.el_pred.df_ens.to_csv(export_dir/f'scored_prediction.csv', index=False) self.el_pred.df_ens.to_excel(export_dir/f'scored_prediction.xlsx') items = {r.file:r.mean_energy for _, r in self.el_pred.df_ens.iterrows()} print(f'A lower mean_energy score indicates difficult or anomalous data.') ipp = ItemsPerPage(self.proj_path, self.el_pred.show_ensemble_results, items=items, srt_by='Energy Score', unc_metric='mean_energy') display(ipp.widget) #items = {x.name:x.name for x in self.el_pred.files} #ipp = ItemsPerPage(self.proj_path, self.el_pred.show_ensemble_results, items=items, unc=use_tta) #display(ipp.widget) def pred_rois_clicked(self, b): export_dir = self.pred.sb['pred'].down.path output_folder=Path(export_dir)/'ImageJ_ROIs' out = self.pred.main['pred'] with out: print('Exporting ROIs to ImageJ ROI Sets...') self.el_pred.export_imagej_rois(output_folder, min_pixel=self.min_pixel_export) print(f'Saved ROIs to {output_folder}. Click on `Select` to to download.') def pred_cache_clicked(self, b): with self.pred.main['pred']: self.el_pred.clear_tmp() t = GUI() # ## Export - #hide from nbdev.export import * notebook2script() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Exercise 1.02: Create Pandas Series to form a DataFrame # # #### 1. Import numpy and pandas package # + import pip try: import numpy except ImportError: pip.main(['install', 'numpy']) try: import pandas except ImportError: pip.main(['install', 'pandas']) # - # #### 2. Generate random numbers # + numpy.random.seed(0) random_number = numpy.random.random(10) random_number # - # #### 3. Set random_number variable to Series and assign it to variable s s = pandas.Series(random_number) print(s) print(type(s)) print(type(random_number)) print(s[2]) # #### 4. Observe what happens when we include an index index_value = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] s = pandas.Series(random_number, index=index_value) print(s) # ### Exercise 1.2a: Create set of lists or Series and stack them as a DataFrame # # #### 1. Create four Series of the same lengths # s1 = pandas.Series([1, 2, 3, 4, 5]) s2 = pandas.Series([6, 7, 8, 9, 10]) s3 = pandas.Series([11, 12, 13, 14, 15]) s4 = pandas.Series([16, 17, 18, 19, 20]) # #### 2. Stack them into DataFrame: df = pandas.DataFrame({'s1':s1, 's2':s2, 's3':s3, 's4':s4}) df # ### Exercise 3: Download dataset from World Bank Open data to demonstrate how Pandas read file into a DataFrame # # #### 1. Use pandas read_csv() to csv file into variable called temp_aus_df # # temp_aus_df =pandas.read_csv("tas_1991_2016_AUS.csv") # #### 2. Use Pandas head function to print the first few rows of the DataFrame, temp_aus_df # temp_aus_df.head() # #### 3. Use Pandas tail function to print the last few rows of the DataFrame, temp_aus_df # temp_aus_df.tail() # #### 4. 
What is the total number of records/rows/observations in the dataframe? len(temp_aus_df) # #### 5. What are the datatypes of the columns in the dataframe? temp_aus_df.dtypes # #### 6. How many missing values are in the dataframe? temp_aus_df.isnull().sum() # #### 7. Replace missing values in DataFrame for colname in temp_aus_df.columns: if temp_aus_df[colname].dtype == 'object': temp_aus_df[colname] = temp_aus_df[colname].fillna('others') else: temp_aus_df[colname] = temp_aus_df[colname].fillna(0) temp_aus_df.to_csv('newdata.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Scripting and Automation for the Web GIS Power User # + [markdown] slideshow={"slide_type": "slide"} # ## Reasons # 1. Prototyping ideas # 2. Integration with Pandas and plotting libraries # 3. Integration with ArcGIS # 4. Growing open-source project with [numerous extensions](http://jupyter-contrib-nbextensions.readthedocs.io/en/latest/) # + [markdown] slideshow={"slide_type": "subslide"} # ### 1. Prototyping ideas # # + [markdown] slideshow={"slide_type": "subslide"} # ### 2. Integration with pandas and plotting libraries # + slideshow={"slide_type": "fragment"} # read and display a csv file import pandas as pd csv = pd.read_csv('major_earthquakes.csv') csv.head() # + slideshow={"slide_type": "fragment"} import matplotlib.pyplot as plt # %matplotlib inline csv.hist('DamageExtent') # + [markdown] slideshow={"slide_type": "subslide"} # ### 3. Integration with ArcGIS # + slideshow={"slide_type": "fragment"} from arcgis.gis import * gis = GIS() m1 = gis.map() m1 # + slideshow={"slide_type": "fragment"} eq = gis.content.search("earthquakes", "feature layer")[0] m1.add_layer(eq) # + [markdown] slideshow={"slide_type": "subslide"} # ### 4. Growing open source project, lots of extensions # 1. Collapsible headings # 2. RISE - presentations # 3. [list of extensions](http://jupyter-contrib-nbextensions.readthedocs.io/en/latest/) # - # ## How to get notebooks on your computer? # 1. Through ArcGIS Pro # 2. Install conda # 3. Try the live notebooks # + [markdown] slideshow={"slide_type": "slide"} # ## How to use notebooks and best practices # + [markdown] slideshow={"slide_type": "subslide"} # ### Notebooks walk through # 1. Cell types - markdown, code # 2. Running cells # 3. Using intellisense # 4. 
REPL - read evaluate print loop # + [markdown] slideshow={"slide_type": "subslide"} # Markdown - headings # # H1 # ## H2 # ### H3 # #### H4 # + [markdown] slideshow={"slide_type": "fragment"} # Markdown - bold, italics, code snippet # # **this is bold** # # _this is italics_ and this is not # - # embed images # ![](http://mms.businesswire.com/media/20170615005128/en/570118/4/esri_logo17.jpg) # + [markdown] slideshow={"slide_type": "subslide"} # ### singe line code highlighting # Represent equation of a straight line: `y = m*x + C` # + [markdown] slideshow={"slide_type": "fragment"} # ### multiline code highlighting # ```PYTHON # import os # data_root = r'root dir to walk down' # for directory, subdir_list, file_list in os.walk(data_root): # print(directory) # print("\t", end="") # print(subdir_list) # print("\t\t", end="") # print(file_list) # ``` # + slideshow={"slide_type": "subslide"} #this is a code cell 1+2 # run this by hitting `shift + enter` # + slideshow={"slide_type": "fragment"} # code cell import sys sys.version # + slideshow={"slide_type": "subslide"} # fixing errors - intellisense sys.pth # + [markdown] slideshow={"slide_type": "subslide"} # #### Using intellisense - Python API # + slideshow={"slide_type": "fragment"} from arcgis.gis import * gis = GIS() # + slideshow={"slide_type": "fragment"} #look for the basemaps groups in arcgis online #use intellisense - dot notation gis. # + slideshow={"slide_type": "subslide"} #build a query - look up method parameters, pull up help into new page # gis.groups.search? # + slideshow={"slide_type": "fragment"} #REPL groups_list = gis.groups.search("basemaps") groups_list # + [markdown] slideshow={"slide_type": "slide"} # ### Notebooks - best practices # 1. Don't pull all your code in 1 cell # 2. Add documentation # 3. Use password hiders # 4. Share notebooks on github # + slideshow={"slide_type": "subslide"} import getpass password = () ago_gis = GIS("https://python.playground.esri.com/portal", "atma.mani", password) # + slideshow={"slide_type": "subslide"} from arcgis.gis import GIS ago_gis = GIS("https://python.playground.esri.com/portal", "atma.mani") # + slideshow={"slide_type": "fragment"} ago_gis # - # ## Reproducible research # ### Sharing notebooks # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # SI-14 Single Molecule RNA FISH # 2018.4.2 # + #minimum imports: import time,os,sys,glob import cPickle as pickle import numpy as np import khmer sys.path.append(r'/n/home13/pzheng/Documents/python-functions/python-functions-library') from LibraryConstruction import fastaread,fastawrite,fastacombine import LibraryDesigner as ld import LibraryConstruction as lc from Bio import SeqIO from Bio.Seq import Seq from Bio.Alphabet import IUPAC from Bio.SeqRecord import SeqRecord import csv import io # - # ## 1. Import data master_dir = r'/n/boslfs/LABS/zhuang_lab/User/pzheng/Libraries/SI-14/EMT_smFISH'; # input filename input_filename = 'EMT_smFISH.txt'; dic_list = [] with open(master_dir+os.sep+input_filename, 'rU') as handle: headers = handle.readline().split("\n")[0].split("\t") for line in handle.readlines(): _dic = {} for header,info in zip(headers,line.split("\n")[0].split("\t")): _dic[header] = info dic_list.append(_dic) # ## 2. 
Design barcode scheme # Get list of all genes genes = list(np.unique([v['Gene'] for v in dic_list])) barcode_scheme = {}; for i,gene in enumerate(sorted(genes)): barcode_scheme[gene] = {'bc_stv': i, 'bc_ndb': i} # ## 3. Patch barcodes # ### 3.1 import barcodes # + ## Read all barcodes barcode_dir = r'/n/boslfs/LABS/zhuang_lab/User/pzheng/Barcodes'; # read all Stv barcodes #stv_adaptor = [1,2,17,77,78,79,80,81,82,83,84] # barcodes saved for adaptors #stv_bad = [34,38,41] # barcodes performed badly #stv_mask = stv_adaptor + stv_bad stv_mask =[] with open(barcode_dir+os.sep+'top_Stvs.fasta', "rU") as handle: stv_barcodes = []; for record in SeqIO.parse(handle, "fasta"): if int(record.id.split('_')[1]) not in stv_mask: stv_barcodes.append(record); # read all NDB barcodes ndb_mask = []; with open(barcode_dir+os.sep+'NDBs.fasta', "rU") as handle: ndb_barcodes = []; for record in SeqIO.parse(handle, "fasta"): if int(record.id.split('_')[1]) not in ndb_mask: ndb_barcodes.append(record); print "Barcodes loaded: Stv: "+str(len(stv_barcodes))+", NDB: "+str(len(ndb_barcodes)); # - # ### 3.2 import primers # + ## Read all primers primer_dir = r'/n/boslfs/LABS/zhuang_lab/User/pzheng/Primers'; fwd_primer_filename = 'forward_primers_keep.fasta'; rev_primer_filename = 'reverse_primers_keep.fasta'; # read all forward primers with open(primer_dir+os.sep+fwd_primer_filename, "rU") as handle: fwd_primers = []; for record in SeqIO.parse(handle, "fasta"): fwd_primers.append(record); # read all forward primers with open(primer_dir+os.sep+rev_primer_filename, "rU") as handle: rev_primers = []; for record in SeqIO.parse(handle, "fasta"): rev_primers.append(record); print "Primers loaded: forward: "+str(len(fwd_primers))+", reverse: "+str(len(rev_primers)); # primers fprimer = fwd_primers[2]; print '- forward primer:', fprimer rprimer = rev_primers[1]; print '- reverse primer:', rprimer # + ## Parameters used for patch barcodes & primers # barcodes barcode_source = {'bc_stv':'stv', 'bc_ndb':'ndb'}; barcode_starts = {'stv':1, 'ndb':301}; _stv_barcodes, _ndb_barcodes = [],[]; for record in stv_barcodes: if not int(record.id.split('_')[1]) < barcode_starts['stv']: _stv_barcodes.append(record) for record in ndb_barcodes: if not int(record.id.split('_')[1]) < barcode_starts['ndb']: _ndb_barcodes.append(record) barcode_len = 20 # - # ### 3.3 start patching for i,dic in enumerate(dic_list): dic['bc_stv'] = _stv_barcodes[barcode_scheme[dic['Gene']]['bc_stv']] dic['bc_ndb'] = _ndb_barcodes[barcode_scheme[dic['Gene']]['bc_ndb']] total_seq_list = [fprimer.seq, \ dic['bc_stv'].seq[-barcode_len:].reverse_complement(),\ dic['bc_ndb'].seq[-barcode_len:].reverse_complement(),\ Seq(dic['Target']),\ dic['bc_stv'].seq[-barcode_len:].reverse_complement(),\ dic['bc_ndb'].seq[-barcode_len:].reverse_complement(),\ rprimer.seq[-20:].reverse_complement()] total_seq = Seq(''); for s in total_seq_list: total_seq += s dic['total_seq'] = total_seq; name_list = [''] barcode_scheme fprimer.id.split('_')[-1] ## Create pb_record and pb_list by pb_file pb_records, reg_pb_dic = [],{}; # initialize # loop through all designed probes with open(master_dir+os.sep+pb_filename, 'rU') as handle: lines = handle.readlines() titles = lines[0].split("\n")[0].split("\t") for line in lines[1:]: seq, name = line.split("\n")[0].split(" "); pb_records.append(SeqRecord(Seq(seq.upper(),alphabet=IUPAC.unambiguous_dna),id=name, name=name,description='')) reg_id = int(name.split('reg_')[1].split("_")[0]) pb_info = {'reg_index':reg_id, 'total_seq':seq, 'total_name':name}; 
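# Group the parsed probes by region: reg_pb_dic maps each reg_index to the list of its probe
# dicts, starting a new list the first time a region id is seen and appending otherwise.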
if reg_id not in reg_pb_dic.keys(): reg_pb_dic[reg_id] = [pb_info] else: reg_pb_dic[reg_id].append(pb_info) pb_lists = reg_pb_dic.values() print "- Total candidate sequences:", len(pb_records) # save save_dir = 'final_probes' if not os.path.exists(master_dir+os.sep+save_dir): os.makedirs(master_dir+os.sep+save_dir) print "-- Save pb_lists" pickle.dump(pb_lists, open(master_dir+os.sep+save_dir+os.sep+'list.pkl', 'w')); print "-- Save pb_records" with open(master_dir+os.sep+save_dir+os.sep+'candidate_probes.fasta', "w") as output_handle: SeqIO.write(pb_records, output_handle, 'fasta'); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: PythonData # language: python # name: pythondata # --- import pandas as pd skywest = pd.read_csv("Complete_Updated_M2.csv", encoding='cp1252') skywest.head() skywest["Origin_to_Destination"] = skywest["Origin IATA Code"] +" to "+ skywest["Destination IATA Code"] Origin_to_Destination skywest skywest.to_csv('skywest.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Dependencies import tweepy import json import numpy as np from config2 import consumer_key, consumer_secret, access_token, access_token_secret # Twitter API Keys consumer_key = consumer_key consumer_secret = consumer_secret access_token = access_token access_token_secret = access_token_secret # Setup Tweepy API Authentication auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth, parser=tweepy.parsers.JSONParser()) target_term = '@facebook' # + # Lists to hold sentiments # Variables for holding sentiments compound_list = [] positive_list = [] negative_list = [] neutral_list = [] for x in range(1, 20): # Get all tweets from home feed public_tweets = api.user_timeline(target_term, page=x) # Loop through all tweets for tweet in public_tweets: # Run Vader Analysis on each tweet results = analyzer.polarity_scores(tweet["text"]) compound = results["compound"] pos = results["pos"] neu = results["neu"] neg = results["neg"] # Add each value to the appropriate list compound_list.append(compound) positive_list.append(pos) negative_list.append(neg) neutral_list.append(neu) # - print( sum(compound_list)/len(compound_list), sum(positive_list)/len(positive_list), sum(negative_list)/len(negative_list), sum(neutral_list)/len(neutral_list)) sentiments = { 'Average Compounded': sum(compound_list) / len(compound_list), 'Average Negative': sum(negative_list) / len(negative_list), 'Average Positive': sum(positive_list) / len(positive_list), 'Average Neutral': sum(neutral_list) / len(neutral_list) } sentiments len(compound_list) # ## mentions of facebook # + # Search for all tweets # public_tweets = api.search(target_term, count=300, result_type="recent") # + # Twitter API Keys consumer_key = consumer_key consumer_secret = consumer_secret access_token = access_token access_token_secret = access_token_secret # Setup Tweepy API Authentication auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth, parser=tweepy.parsers.JSONParser()) # - print(target_term) # + # Loop through all public_tweets fb_tweets = [] date = [] oldest_tweet = None for x in range(1,20): 
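# Page backwards through older tweets: max_id=oldest_tweet restricts each search call to tweets
# with ids at or below that value, and oldest_tweet is updated to the last tweet's id - 1 after
# every page so the next request continues where this one stopped.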
public_tweets = api.search(target_term, count=100, result_type="recent", max_id=oldest_tweet) for tweet in public_tweets['statuses']: tweet_id = tweet["id"] tweet_author = tweet["user"]["screen_name"] tweet_text = tweet["text"] fb_tweets.append(tweet['text']) date.append(tweet['created_at']) oldest_tweet = tweet_id - 1 print(len(fb_tweets)) # + compound_list = [] positive_list = [] negative_list = [] neutral_list = [] for tweet in fb_tweets: # Run Vader Analysis on each tweet results = analyzer.polarity_scores(tweet) compound = results["compound"] pos = results["pos"] neu = results["neu"] neg = results["neg"] # Add each value to the appropriate list compound_list.append(compound) positive_list.append(pos) negative_list.append(neg) neutral_list.append(neu) # - sentiments = { 'Average Compounded': sum(compound_list) / len(compound_list), 'Average Negative': sum(negative_list) / len(negative_list), 'Average Positive': sum(positive_list) / len(positive_list), 'Average Neutral': sum(neutral_list) / len(neutral_list) } sentiments len(fb_tweets) date[1891] june_14 = { 'Text': fb_tweets, 'Compounded': compound_list, 'Negative': negative_list, 'Positive': positive_list, 'Neutral': neutral_list, 'Date': date } import pandas as pd tweets_june_14_df = pd.DataFrame(june_14) tweets_june_14_df.head() tweets_june_14_df.to_csv('tweets_june_14.csv') print(date[0]) print(date[1891]) # --- # jupyter: # jupytext: # split_at_heading: true # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #default_exp vision.widgets # - #export from fastai.torch_basics import * from fastai.data.all import * from fastai.vision.core import * from ipywidgets import HBox,VBox,widgets,Button,Checkbox,Dropdown,Layout,Box,Output,Label,FileUpload #hide from nbdev.showdoc import * #export _all_ = ['HBox','VBox','widgets','Button','Checkbox','Dropdown','Layout','Box','Output','Label','FileUpload'] # # Vision widgets # # > ipywidgets for images #export @patch def __getitem__(self:Box, i): return self.children[i] #export def widget(im, *args, **layout): "Convert anything that can be `display`ed by IPython into a widget" o = Output(layout=merge(*args, layout)) with o: display(im) return o im = Image.open('images/puppy.jpg').to_thumb(256,512) VBox([widgets.HTML('Puppy'), widget(im, max_width="192px")]) #export def _update_children(change): for o in change['owner'].children: if not o.layout.flex: o.layout.flex = '0 0 auto' #export def carousel(children=(), **layout): "A horizontally scrolling carousel" def_layout = dict(overflow='scroll hidden', flex_flow='row', display='flex') res = Box([], layout=merge(def_layout, layout)) res.observe(_update_children, names='children') res.children = children return res # + ts = [VBox([widget(im, max_width='192px'), Button(description='click')]) for o in range(3)] carousel(ts, width='450px') # - #export def _open_thumb(fn, h, w): return Image.open(fn).to_thumb(h, w).convert('RGBA') #export class ImagesCleaner: "A widget that displays all images in `fns` along with a `Dropdown`" def __init__(self, opts=(), height=128, width=256, max_n=30): opts = ('', '')+tuple(opts) store_attr('opts,height,width,max_n') self.widget = carousel(width='100%') def set_fns(self, fns): self.fns = L(fns)[:self.max_n] ims = parallel(_open_thumb, self.fns, h=self.height, w=self.width, progress=False, n_workers=min(len(self.fns)//10,defaults.cpus)) self.widget.children = [VBox([widget(im, 
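# +
# A small aside on the tweet-collection cells above: instead of hard-coding an
# index such as `date[1891]` (which raises IndexError if fewer tweets were
# returned), the first and last collected timestamps can be read directly from
# the `date` and `fb_tweets` lists that were just built:
if date:
    print("newest tweet:", date[0])
    print("oldest tweet:", date[-1])
print("tweets collected:", len(fb_tweets))
# -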
height=f'{self.height}px'), Dropdown( options=self.opts, layout={'width': 'max-content'})]) for im in ims] def _ipython_display_(self): display(self.widget) def values(self): return L(self.widget.children).itemgot(1).attrgot('value') def delete(self): return self.values().argwhere(eq('')) def change(self): idxs = self.values().argwhere(negate_func(in_(['','']))) return idxs.zipwith(self.values()[idxs]) fns = get_image_files('images') w = ImagesCleaner(('A','B')) w.set_fns(fns) w w.delete(),w.change() #export def _get_iw_info(learn, ds_idx=0): dl = learn.dls[ds_idx].new(shuffle=False, drop_last=False) inp,probs,targs,preds,losses = learn.get_preds(dl=dl, with_input=True, with_loss=True, with_decoded=True) inp,targs = L(zip(*dl.decode_batch((inp,targs), max_n=9999))) return L([dl.dataset.items,targs,losses]).zip() #export @delegates(ImagesCleaner) class ImageClassifierCleaner(GetAttr): "A widget that provides an `ImagesCleaner` with a CNN `Learner`" def __init__(self, learn, **kwargs): vocab = learn.dls.vocab self.default = self.iw = ImagesCleaner(vocab, **kwargs) self.dd_cats = Dropdown(options=vocab) self.dd_ds = Dropdown(options=('Train','Valid')) self.iwis = _get_iw_info(learn,0),_get_iw_info(learn,1) self.dd_ds.observe(self.on_change_ds, 'value') self.dd_cats.observe(self.on_change_ds, 'value') self.on_change_ds() self.widget = VBox([self.dd_cats, self.dd_ds, self.iw.widget]) def _ipython_display_(self): display(self.widget) def on_change_ds(self, change=None): info = L(o for o in self.iwis[self.dd_ds.index] if o[1]==self.dd_cats.value) self.iw.set_fns(info.sorted(2, reverse=True).itemgot(0)) # # Export - #hide from nbdev.export import notebook2script notebook2script() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### BROADCAST # + from pyspark import SparkConf, SparkContext def loadMovieNames(): movieNames = {} with open("../PySpark_DataSets/datasets/ml-100k/u.ITEM") as f: for line in f: fields = line.split('|') movieNames[int(fields[0])] = fields[1] return movieNames conf = SparkConf().setMaster("local").setAppName("PopularMovies") sc = SparkContext(conf = conf) # - nameDict = sc.broadcast(loadMovieNames()) print(nameDict.value) lines = sc.textFile("../PySpark_DataSets/datasets/ml-100k/u.data") movies = lines.map(lambda x: (int(x.split()[1]), 1)) movieCounts = movies.reduceByKey(lambda x, y: x + y) flipped = movieCounts.map( lambda x : (x[1], x[0])) sortedMovies = flipped.sortByKey() # + sortedMoviesWithNames = sortedMovies.map(lambda countMovie : (nameDict.value[countMovie[1]], countMovie[0])) results = sortedMoviesWithNames.collect() for result in results: print (result) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from tabulate import tabulate from sagas.crmsfa.odoo_facade import odoo, login import sagas.crmsfa.odoo_info as info from sagas.tool.app_settings import setup_logger, setup_jupyter setup_logger('crmsfa.log') setup_jupyter() # - # ?login login(db='demo', username='admin', password='') info.desc_model("crm.lead") # * 显示所有的潜在客户和他们的电子邮件 # + model_name="crm.lead" Model = odoo.env[model_name] model_ids = Model.search([]) print(model_ids) for record in Model.browse(model_ids): print(getattr(record, "name"), getattr(record, 
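# +
# `ImagesCleaner` only records decisions; nothing is deleted or re-labelled until
# its results are applied. A minimal sketch, assuming the `w` and `fns` objects
# from above plus a hypothetical `path` root with one sub-folder per label (the
# shutil move is an illustrative convention, not part of the widget's API):
import shutil

for idx in w.delete():
    fns[idx].unlink()                                   # drop images marked for deletion
for idx, new_label in w.change():
    shutil.move(str(fns[idx]), str(path/new_label))     # re-file re-labelled images
# -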
"email_from")) # - # * 查找指定邮件地址的客户(限制返回记录数为1) lead_ids = Model.search([('email_from', '=', '')], limit=1) print(lead_ids) leads = Model.browse(lead_ids) print(leads) print(leads[0]) for lead in leads: print(lead.id, lead.name, type(lead.tag_ids)) for tag in lead.tag_ids: print("\t", tag.name) # * 日历事件 Model = odoo.env["calendar.event"] model_ids = Model.search([]) print("total", len(model_ids)) for rec in Model.browse(model_ids): # print(rec.id, rec.start, rec.stop, rec.duration) # + import dateparser import time # procs-dateparser.md print(dateparser.parse('1 hour ago')) print(dateparser.parse(u'2小时前')) print(dateparser.parse(u'2018年11月1日')) print(dateparser.parse(u'2018年12月30日')) print(time.strftime('%Y-%m-05 12:00:00')) #⊕ [python - How to overcome "datetime.datetime not JSON serializable"? - Stack Overflow](https://stackoverflow.com/questions/11875770/how-to-overcome-datetime-datetime-not-json-serializable) from datetime import date, datetime from json import dumps def json_serial(obj): """JSON serializer for objects not serializable by default json code""" if isinstance(obj, (datetime, date)): return obj.isoformat() raise TypeError ("Type %s not serializable" % type(obj)) print(dumps(datetime.now(), default=json_serial)) # - # * 查找指定日期范围内的记录 a_date=str(dateparser.parse(u'2018年11月1日')) b_date=str(dateparser.parse(u'2018年12月10日')) # procs-odoo-orm-api.md #⊕ [Get list of ids - Odoo - Stack Overflow](https://stackoverflow.com/questions/45839212/get-list-of-ids-odoo) # ev_recs = Model.search([('start','<=',b_date),('start','>=',a_date)]).ids ev_recs = Model.search([('start','<=',b_date),('start','>=',a_date)]) print(len(ev_recs)) # * 图查询(使用graphql) # * 这一部分需要先了解graphql协议, 以及使用graphene库来支持graphql的查询和更新功能 # * 相对来说比较复杂, 之后会专门演练graphql # + import graphene class CrmLead(graphene.ObjectType): name = graphene.String() email_from=graphene.String() class Query(graphene.ObjectType): crm_lead = graphene.Field(CrmLead, email_from=graphene.String()) crm_leads = graphene.List(CrmLead) def resolve_crm_lead(self, info, email_from): return get_crm_lead(email_from) def resolve_crm_leads(self, info): model_name="crm.lead" Model = odoo.env[model_name] model_ids = Model.search([]) recordset=[] for record in Model.browse(model_ids): recordset.append(CrmLead(name=getattr(record, "name"), email_from=getattr(record, "email_from"))) return recordset def get_crm_lead(sent): model_name="crm.lead" Model = odoo.env[model_name] lead_ids = Model.search([('email_from', '=', sent)], limit=1) leads = Model.browse(lead_ids) assert len(leads)==1 # print(leads) lead=leads[0] return CrmLead(name=lead.name, email_from=lead.email_from) schema = graphene.Schema(query=Query) # - import json default_query = ''' query FetchCrmLead($sent: String!) 
{ crmLead(emailFrom: $sent) { name emailFrom } } '''.strip() variables = {"sent":""} result = schema.execute(default_query, variables=variables) print(json.dumps(result.data, indent=2, ensure_ascii=False)) default_query = ''' { crmLeads { name emailFrom } } '''.strip() result = schema.execute(default_query) print(json.dumps(result.data, indent=2, ensure_ascii=False)) info.desc_model("calendar.event") # + import graphene from odoorpc.fields import Many2many, Many2one, One2many, Unknown TYPES_TO_FIELDS = { 'binary': graphene.String, 'boolean': graphene.Boolean, 'char': graphene.String, 'date': graphene.types.datetime.Date, 'datetime': graphene.types.datetime.DateTime, 'float': graphene.Float, 'html': graphene.String, 'integer': graphene.Int, #⊕ [Graphene-Python](http://docs.graphene-python.org/en/latest/types/scalars/#custom-scalars) 'many2many': Many2many, 'many2one': Many2one, 'one2many': One2many, 'reference': graphene.String, 'selection': graphene.String, 'text': graphene.String, } def generate_field(name, data): """Generate a well-typed field according to the data dictionary supplied (obtained via the `fields_get' method of any models). """ assert 'type' in data field = TYPES_TO_FIELDS.get(data['type'], Unknown)() return field # - field = TYPES_TO_FIELDS.get("date", Unknown)() print(type(field)) # + # from graphene.utils.str_converters import to_camel_case, to_snake_case def to_camel_case(snake_str): components = snake_str.split("_") # We capitalize the first letter of each component except the first one # with the 'capitalize' method and join them together. return "".join(x.capitalize() if x else "_" for x in components[:]) def normalize_class(model): cls_name = model.replace('.', '_') return to_camel_case(cls_name) print(normalize_class("calendar.event")) # + from odoorpc.env import FIELDS_RESERVED import odoorpc.fields as fields model="calendar.event" attrs = { '_name': model, '_columns': {}, } def get_recurrent_fields(): return ['byday', 'recurrency', 'final_date', 'rrule_type', 'month_by', 'interval', 'count', 'end_type', 'mo', 'tu', 'we', 'th', 'fr', 'sa', 'su', 'day', 'week_list'] fields_get = odoo.execute(model, 'fields_get') for field_name, field_data in fields_get.items(): field_type=field_data['type'] if field_name not in FIELDS_RESERVED and \ field_name not in get_recurrent_fields() and \ not field_name.startswith("_") and \ field_type not in ("many2many","many2one","one2many"): Field = fields.generate_field(field_name, field_data) FieldType=generate_field(field_name, field_data) attrs['_columns'][field_name] = Field attrs[field_name] = FieldType print(field_name, type(FieldType)) cls_name=normalize_class(model) print(".. 
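# +
# The surrounding cells build a graphene ObjectType dynamically with `type()`
# from Odoo's `fields_get` metadata. A self-contained miniature of the same
# pattern (field and class names here are illustrative only):
import graphene

attrs = {
    "name": graphene.String(),
    "start": graphene.types.datetime.DateTime(),
}
DynamicEvent = type("DynamicEvent", (graphene.ObjectType,), attrs)

class DemoQuery(graphene.ObjectType):
    event = graphene.Field(DynamicEvent)

    def resolve_event(self, info):
        return DynamicEvent(name="demo", start=None)

demo_schema = graphene.Schema(query=DemoQuery)
print(demo_schema.execute("{ event { name } }").data)
# -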
create class", cls_name) model_cls=type(cls_name, (graphene.ObjectType,), attrs) # - for fld in model_cls._columns.keys(): print(fld) # + import inspect def desc_type(type_name): # attributes = inspect.getmembers(type(type_name), lambda a:not(inspect.isroutine(a))) attributes = inspect.getmembers(type_name, lambda a:not(inspect.isroutine(a))) fields=[a for a in attributes if not(a[0].startswith('__') and a[0].endswith('__'))] for f in fields: if f[0]!="_columns": print(f) desc_type(model_cls) # + import graphene def create_with_attrs(model_cls, record): inst=model_cls() for fld in model_cls._columns.keys(): val=getattr(record, fld) setattr(inst, fld, val) return inst model_name="calendar.event" of_type=model_cls class CalQuery(graphene.ObjectType): calendar_event = graphene.List(of_type) def resolve_calendar_event(self, info): Model = odoo.env[model_name] model_ids = Model.search([]) recordset=[] for record in Model.browse(model_ids): recordset.append(create_with_attrs(of_type, record)) return recordset schema = graphene.Schema(query=CalQuery) print(of_type) # - default_query = ''' { calendarEvent { name description start } } '''.strip() result = schema.execute(default_query) print(json.dumps(result.data, indent=2, ensure_ascii=False)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="x0fRdfk3E22X" # # Fully Convolutional Networks for Change Detection # # Example code for training the network presented in the paper: # # ``` # ., . and ., 2018, October. Fully convolutional siamese networks for change detection. In 2018 25th IEEE International Conference on Image Processing (ICIP) (pp. 4063-4067). IEEE. # ``` # # Code uses the OSCD dataset: # # ``` # ., ., . and ., 2018, July. Urban change detection for multispectral earth observation using convolutional neural networks. In IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium (pp. 2115-2118). IEEE. # ``` # # # FresUNet architecture from paper: # # ``` # ., ., . and ., 2019. Multitask learning for large-scale semantic change detection. Computer Vision and Image Understanding, 187, p.102783. # ``` # # Please consider all relevant papers if you use this code. # + id="iPxtbDGgE22h" # # rcdaudt.github.io # # + colab={"base_uri": "https://localhost:8080/"} id="MZhSPUgaE22j" outputId="3526308a-aa6d-4096-95ca-eef39ad8e419" language="bash" # hostname # + colab={"base_uri": "https://localhost:8080/"} id="gMmXyq4tE22j" outputId="1f1e9940-6cf4-453a-dc35-3b5500b17593" # Imports # PyTorch import torch import torch.nn as nn from torch.utils.data import Dataset, DataLoader from torch.autograd import Variable import torchvision.transforms as tr # Models imported for .py files in local directory. Hashed out here - models just put in a cell. 
#from unet import Unet #from siamunet_conc import SiamUnet_conc #from siamunet_diff import SiamUnet_diff #from fresunet import FresUNet #from smallunet import SmallUnet #from smallunet_attempt import Unet # Other import os import numpy as np import random from skimage import io from scipy.ndimage import zoom import matplotlib.pyplot as plt # %matplotlib inline from tqdm import tqdm as tqdm from pandas import read_csv from math import floor, ceil, sqrt, exp from IPython import display import time from itertools import chain import warnings from pprint import pprint print('IMPORTS OK') # + colab={"base_uri": "https://localhost:8080/"} id="feNT93S_FLmj" outputId="d028085a-9cc9-4235-993c-b03b6af010fa" from google.colab import drive drive.mount('/content/drive') # + colab={"base_uri": "https://localhost:8080/"} id="PMmB5KZlE22k" outputId="bcb1b038-2a54-4972-d54e-3c3b0debb235" # Global Variables' Definitions PATH_TO_DATASET = '/content/drive/MyDrive/onera/' IS_PROTOTYPE = False FP_MODIFIER = 1 # Tuning parameter, use 1 if unsure BATCH_SIZE = 32 PATCH_SIDE = 96 N_EPOCHS = 50 NORMALISE_IMGS = True TRAIN_STRIDE = int(PATCH_SIDE/2) - 1 TYPE = 1 # 0-RGB | 1-RGBIr | 2-All bands s.t. resulution <= 20m | 3-All bands LOAD_TRAINED = False DATA_AUG = True print('DEFINITIONS OK') # + colab={"base_uri": "https://localhost:8080/"} id="ZWJCCS-iE22k" outputId="7fa048f1-c112-4ae1-868a-cfb258d2f2c4" ### This cell defines a load of functions that we will need to train the network e.g. data augmentation functions, ### functions that call the different bands of the sentinel data, etc. # Functions def adjust_shape(I, s): """Adjust shape of grayscale image I to s.""" # crop if necesary I = I[:s[0],:s[1]] si = I.shape # pad if necessary p0 = max(0,s[0] - si[0]) p1 = max(0,s[1] - si[1]) return np.pad(I,((0,p0),(0,p1)),'edge') def read_sentinel_img(path): """Read cropped Sentinel-2 image: RGB bands.""" im_name = os.listdir(path)[0][:-7] r = io.imread(path + im_name + "B04.tif") g = io.imread(path + im_name + "B03.tif") b = io.imread(path + im_name + "B02.tif") I = np.stack((r,g,b),axis=2).astype('float') if NORMALISE_IMGS: I = (I - I.mean()) / I.std() return I def read_sentinel_img_4(path): """Read cropped Sentinel-2 image: RGB and NIR bands.""" im_name = os.listdir(path)[0][:-7] r = io.imread(path + im_name + "B04.tif") g = io.imread(path + im_name + "B03.tif") b = io.imread(path + im_name + "B02.tif") nir = io.imread(path + im_name + "B08.tif") I = np.stack((r,g,b,nir),axis=2).astype('float') if NORMALISE_IMGS: I = (I - I.mean()) / I.std() return I def read_sentinel_img_leq20(path): """Read cropped Sentinel-2 image: bands with resolution less than or equals to 20m.""" im_name = os.listdir(path)[0][:-7] r = io.imread(path + im_name + "B04.tif") s = r.shape g = io.imread(path + im_name + "B03.tif") b = io.imread(path + im_name + "B02.tif") nir = io.imread(path + im_name + "B08.tif") ir1 = adjust_shape(zoom(io.imread(path + im_name + "B05.tif"),2),s) ir2 = adjust_shape(zoom(io.imread(path + im_name + "B06.tif"),2),s) ir3 = adjust_shape(zoom(io.imread(path + im_name + "B07.tif"),2),s) nir2 = adjust_shape(zoom(io.imread(path + im_name + "B8A.tif"),2),s) swir2 = adjust_shape(zoom(io.imread(path + im_name + "B11.tif"),2),s) swir3 = adjust_shape(zoom(io.imread(path + im_name + "B12.tif"),2),s) I = np.stack((r,g,b,nir,ir1,ir2,ir3,nir2,swir2,swir3),axis=2).astype('float') if NORMALISE_IMGS: I = (I - I.mean()) / I.std() return I def read_sentinel_img_leq60(path): """Read cropped Sentinel-2 image: all bands.""" im_name = 
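# +
# `adjust_shape` crops or edge-pads a 2-D band so that bands upsampled with
# `zoom` land on exactly the same pixel grid as the 10 m bands. A tiny
# self-contained check of that behaviour (the helper is repeated here so the
# snippet runs on its own; shapes are illustrative):
import numpy as np

def _adjust_shape(I, s):
    I = I[:s[0], :s[1]]                                   # crop if too large
    p0, p1 = max(0, s[0] - I.shape[0]), max(0, s[1] - I.shape[1])
    return np.pad(I, ((0, p0), (0, p1)), 'edge')          # edge-pad if too small

band = np.arange(12).reshape(3, 4)
print(_adjust_shape(band, (2, 6)).shape)                  # (2, 6)
# -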
os.listdir(path)[0][:-7] r = io.imread(path + im_name + "B04.tif") s = r.shape g = io.imread(path + im_name + "B03.tif") b = io.imread(path + im_name + "B02.tif") nir = io.imread(path + im_name + "B08.tif") ir1 = adjust_shape(zoom(io.imread(path + im_name + "B05.tif"),2),s) ir2 = adjust_shape(zoom(io.imread(path + im_name + "B06.tif"),2),s) ir3 = adjust_shape(zoom(io.imread(path + im_name + "B07.tif"),2),s) nir2 = adjust_shape(zoom(io.imread(path + im_name + "B8A.tif"),2),s) swir2 = adjust_shape(zoom(io.imread(path + im_name + "B11.tif"),2),s) swir3 = adjust_shape(zoom(io.imread(path + im_name + "B12.tif"),2),s) uv = adjust_shape(zoom(io.imread(path + im_name + "B01.tif"),6),s) wv = adjust_shape(zoom(io.imread(path + im_name + "B09.tif"),6),s) swirc = adjust_shape(zoom(io.imread(path + im_name + "B10.tif"),6),s) I = np.stack((r,g,b,nir,ir1,ir2,ir3,nir2,swir2,swir3,uv,wv,swirc),axis=2).astype('float') if NORMALISE_IMGS: I = (I - I.mean()) / I.std() return I def read_sentinel_img_trio(path): """Read cropped Sentinel-2 image pair and change map.""" # read images if TYPE == 0: I1 = read_sentinel_img(path + '/imgs_1/') I2 = read_sentinel_img(path + '/imgs_2/') elif TYPE == 1: I1 = read_sentinel_img_4(path + '/imgs_1/') I2 = read_sentinel_img_4(path + '/imgs_2/') elif TYPE == 2: I1 = read_sentinel_img_leq20(path + '/imgs_1/') I2 = read_sentinel_img_leq20(path + '/imgs_2/') elif TYPE == 3: I1 = read_sentinel_img_leq60(path + '/imgs_1/') I2 = read_sentinel_img_leq60(path + '/imgs_2/') cm = io.imread(path + '/cm/cm.png', as_gray=True) != 0 # crop if necessary s1 = I1.shape s2 = I2.shape I2 = np.pad(I2,((0, s1[0] - s2[0]), (0, s1[1] - s2[1]), (0,0)),'edge') return I1, I2, cm def reshape_for_torch(I): """Transpose image for PyTorch coordinates.""" # out = np.swapaxes(I,1,2) # out = np.swapaxes(out,0,1) # out = out[np.newaxis,:] out = I.transpose((2, 0, 1)) return torch.from_numpy(out) class ChangeDetectionDataset(Dataset): """Change Detection dataset class, used for both training and test data.""" def __init__(self, path, train = True, patch_side = 96, stride = None, use_all_bands = False, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. 
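# +
# `reshape_for_torch` only moves the channel axis to the front,
# (H, W, C) -> (C, H, W), before wrapping the array as a tensor. A minimal
# sanity check (illustrative shapes; 4 channels as in the RGB+NIR case):
import numpy as np
import torch

img = np.zeros((96, 96, 4))
print(torch.from_numpy(img.transpose((2, 0, 1))).shape)   # torch.Size([4, 96, 96])
# -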
""" # basics self.transform = transform self.path = path self.patch_side = patch_side if not stride: self.stride = 1 else: self.stride = stride if train: fname = 'train.txt' else: fname = 'test.txt' # print(path + fname) self.names = read_csv(path + fname).columns self.n_imgs = self.names.shape[0] n_pix = 0 true_pix = 0 # load images self.imgs_1 = {} self.imgs_2 = {} self.change_maps = {} self.n_patches_per_image = {} self.n_patches = 0 self.patch_coords = [] for im_name in tqdm(self.names): # load and store each image I1, I2, cm = read_sentinel_img_trio(self.path + im_name) self.imgs_1[im_name] = reshape_for_torch(I1) self.imgs_2[im_name] = reshape_for_torch(I2) self.change_maps[im_name] = cm s = cm.shape n_pix += np.prod(s) true_pix += cm.sum() # calculate the number of patches s = self.imgs_1[im_name].shape n1 = ceil((s[1] - self.patch_side + 1) / self.stride) n2 = ceil((s[2] - self.patch_side + 1) / self.stride) n_patches_i = n1 * n2 self.n_patches_per_image[im_name] = n_patches_i self.n_patches += n_patches_i # generate path coordinates for i in range(n1): for j in range(n2): # coordinates in (x1, x2, y1, y2) current_patch_coords = (im_name, [self.stride*i, self.stride*i + self.patch_side, self.stride*j, self.stride*j + self.patch_side], [self.stride*(i + 1), self.stride*(j + 1)]) self.patch_coords.append(current_patch_coords) self.weights = [ FP_MODIFIER * 2 * true_pix / n_pix, 2 * (n_pix - true_pix) / n_pix] def get_img(self, im_name): return self.imgs_1[im_name], self.imgs_2[im_name], self.change_maps[im_name] def __len__(self): return self.n_patches def __getitem__(self, idx): current_patch_coords = self.patch_coords[idx] im_name = current_patch_coords[0] limits = current_patch_coords[1] centre = current_patch_coords[2] I1 = self.imgs_1[im_name][:, limits[0]:limits[1], limits[2]:limits[3]] I2 = self.imgs_2[im_name][:, limits[0]:limits[1], limits[2]:limits[3]] label = self.change_maps[im_name][limits[0]:limits[1], limits[2]:limits[3]] label = torch.from_numpy(1*np.array(label)).float() sample = {'I1': I1, 'I2': I2, 'label': label} if self.transform: sample = self.transform(sample) return sample class RandomFlip(object): """Flip randomly the images in a sample.""" # def __init__(self): # return def __call__(self, sample): I1, I2, label = sample['I1'], sample['I2'], sample['label'] if random.random() > 0.5: I1 = I1.numpy()[:,:,::-1].copy() I1 = torch.from_numpy(I1) I2 = I2.numpy()[:,:,::-1].copy() I2 = torch.from_numpy(I2) label = label.numpy()[:,::-1].copy() label = torch.from_numpy(label) return {'I1': I1, 'I2': I2, 'label': label} class RandomRot(object): """Rotate randomly the images in a sample.""" # def __init__(self): # return def __call__(self, sample): I1, I2, label = sample['I1'], sample['I2'], sample['label'] n = random.randint(0, 3) if n: I1 = sample['I1'].numpy() I1 = np.rot90(I1, n, axes=(1, 2)).copy() I1 = torch.from_numpy(I1) I2 = sample['I2'].numpy() I2 = np.rot90(I2, n, axes=(1, 2)).copy() I2 = torch.from_numpy(I2) label = sample['label'].numpy() label = np.rot90(label, n, axes=(0, 1)).copy() label = torch.from_numpy(label) return {'I1': I1, 'I2': I2, 'label': label} print('UTILS OK') # + id="UCcAqRx6ikeb" ### Simple UNet implementation #import torch #import torch.nn as nn #import torch.nn.functional as F #from torch.nn.modules.padding import ReplicationPad2d class Unet(nn.Module): """EF segmentation network.""" def __init__(self, input_nbr, label_nbr): super(Unet, self).__init__() self.input_nbr = input_nbr self.conv11 = nn.Conv2d(input_nbr, 16, kernel_size=3, 
padding=1) self.bn11 = nn.BatchNorm2d(16) self.do11 = nn.Dropout2d(p=0.2) self.conv12 = nn.Conv2d(16, 16, kernel_size=3, padding=1) self.bn12 = nn.BatchNorm2d(16) self.do12 = nn.Dropout2d(p=0.2) self.conv21 = nn.Conv2d(16, 32, kernel_size=3, padding=1) self.bn21 = nn.BatchNorm2d(32) self.do21 = nn.Dropout2d(p=0.2) self.conv22 = nn.Conv2d(32, 32, kernel_size=3, padding=1) self.bn22 = nn.BatchNorm2d(32) self.do22 = nn.Dropout2d(p=0.2) self.conv31 = nn.Conv2d(32, 64, kernel_size=3, padding=1) self.bn31 = nn.BatchNorm2d(64) self.do31 = nn.Dropout2d(p=0.2) self.conv32 = nn.Conv2d(64, 64, kernel_size=3, padding=1) self.bn32 = nn.BatchNorm2d(64) self.do32 = nn.Dropout2d(p=0.2) self.conv33 = nn.Conv2d(64, 64, kernel_size=3, padding=1) self.bn33 = nn.BatchNorm2d(64) self.do33 = nn.Dropout2d(p=0.2) self.conv41 = nn.Conv2d(64, 128, kernel_size=3, padding=1) self.bn41 = nn.BatchNorm2d(128) self.do41 = nn.Dropout2d(p=0.2) self.conv42 = nn.Conv2d(128, 128, kernel_size=3, padding=1) self.bn42 = nn.BatchNorm2d(128) self.do42 = nn.Dropout2d(p=0.2) self.conv43 = nn.Conv2d(128, 128, kernel_size=3, padding=1) self.bn43 = nn.BatchNorm2d(128) self.do43 = nn.Dropout2d(p=0.2) self.upconv4 = nn.ConvTranspose2d(128, 128, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv43d = nn.ConvTranspose2d(256, 128, kernel_size=3, padding=1) self.bn43d = nn.BatchNorm2d(128) self.do43d = nn.Dropout2d(p=0.2) self.conv42d = nn.ConvTranspose2d(128, 128, kernel_size=3, padding=1) self.bn42d = nn.BatchNorm2d(128) self.do42d = nn.Dropout2d(p=0.2) self.conv41d = nn.ConvTranspose2d(128, 64, kernel_size=3, padding=1) self.bn41d = nn.BatchNorm2d(64) self.do41d = nn.Dropout2d(p=0.2) self.upconv3 = nn.ConvTranspose2d(64, 64, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv33d = nn.ConvTranspose2d(128, 64, kernel_size=3, padding=1) self.bn33d = nn.BatchNorm2d(64) self.do33d = nn.Dropout2d(p=0.2) self.conv32d = nn.ConvTranspose2d(64, 64, kernel_size=3, padding=1) self.bn32d = nn.BatchNorm2d(64) self.do32d = nn.Dropout2d(p=0.2) self.conv31d = nn.ConvTranspose2d(64, 32, kernel_size=3, padding=1) self.bn31d = nn.BatchNorm2d(32) self.do31d = nn.Dropout2d(p=0.2) self.upconv2 = nn.ConvTranspose2d(32, 32, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv22d = nn.ConvTranspose2d(64, 32, kernel_size=3, padding=1) self.bn22d = nn.BatchNorm2d(32) self.do22d = nn.Dropout2d(p=0.2) self.conv21d = nn.ConvTranspose2d(32, 16, kernel_size=3, padding=1) self.bn21d = nn.BatchNorm2d(16) self.do21d = nn.Dropout2d(p=0.2) self.upconv1 = nn.ConvTranspose2d(16, 16, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv12d = nn.ConvTranspose2d(32, 16, kernel_size=3, padding=1) self.bn12d = nn.BatchNorm2d(16) self.do12d = nn.Dropout2d(p=0.2) self.conv11d = nn.ConvTranspose2d(16, label_nbr, kernel_size=3, padding=1) self.sm = nn.LogSoftmax(dim=1) def forward(self, x1, x2): x = torch.cat((x1, x2), 1) """Forward method.""" # Stage 1 x11 = self.do11(F.relu(self.bn11(self.conv11(x)))) x12 = self.do12(F.relu(self.bn12(self.conv12(x11)))) x1p = F.max_pool2d(x12, kernel_size=2, stride=2) # Stage 2 x21 = self.do21(F.relu(self.bn21(self.conv21(x1p)))) x22 = self.do22(F.relu(self.bn22(self.conv22(x21)))) x2p = F.max_pool2d(x22, kernel_size=2, stride=2) # Stage 3 x31 = self.do31(F.relu(self.bn31(self.conv31(x2p)))) x32 = self.do32(F.relu(self.bn32(self.conv32(x31)))) x33 = self.do33(F.relu(self.bn33(self.conv33(x32)))) x3p = F.max_pool2d(x33, kernel_size=2, stride=2) # Stage 4 x41 = self.do41(F.relu(self.bn41(self.conv41(x3p)))) 
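# +
# The ChangeDetectionDataset's `weights` above implement inverse-frequency class
# weighting: the weight of the "no change" class is proportional to the fraction
# of change pixels (times FP_MODIFIER) and vice versa, so errors on the rare
# "change" class dominate the loss. A toy illustration with made-up counts:
import numpy as np

FP_MOD = 1
cm = np.zeros((100, 100), dtype=bool)
cm[:5, :10] = True                                    # 50 change pixels of 10,000
n_pix, true_pix = cm.size, cm.sum()
weights = [FP_MOD * 2 * true_pix / n_pix,             # class 0: no change
           2 * (n_pix - true_pix) / n_pix]            # class 1: change
print(weights)                                        # [0.01, 1.99]
# -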
x42 = self.do42(F.relu(self.bn42(self.conv42(x41)))) x43 = self.do43(F.relu(self.bn43(self.conv43(x42)))) x4p = F.max_pool2d(x43, kernel_size=2, stride=2) # Stage 4d x4d = self.upconv4(x4p) pad4 = ReplicationPad2d((0, x43.size(3) - x4d.size(3), 0, x43.size(2) - x4d.size(2))) x4d = torch.cat((pad4(x4d), x43), 1) x43d = self.do43d(F.relu(self.bn43d(self.conv43d(x4d)))) x42d = self.do42d(F.relu(self.bn42d(self.conv42d(x43d)))) x41d = self.do41d(F.relu(self.bn41d(self.conv41d(x42d)))) # Stage 3d x3d = self.upconv3(x41d) pad3 = ReplicationPad2d((0, x33.size(3) - x3d.size(3), 0, x33.size(2) - x3d.size(2))) x3d = torch.cat((pad3(x3d), x33), 1) x33d = self.do33d(F.relu(self.bn33d(self.conv33d(x3d)))) x32d = self.do32d(F.relu(self.bn32d(self.conv32d(x33d)))) x31d = self.do31d(F.relu(self.bn31d(self.conv31d(x32d)))) # Stage 2d x2d = self.upconv2(x31d) pad2 = ReplicationPad2d((0, x22.size(3) - x2d.size(3), 0, x22.size(2) - x2d.size(2))) x2d = torch.cat((pad2(x2d), x22), 1) x22d = self.do22d(F.relu(self.bn22d(self.conv22d(x2d)))) x21d = self.do21d(F.relu(self.bn21d(self.conv21d(x22d)))) # Stage 1d x1d = self.upconv1(x21d) pad1 = ReplicationPad2d((0, x12.size(3) - x1d.size(3), 0, x12.size(2) - x1d.size(2))) x1d = torch.cat((pad1(x1d), x12), 1) x12d = self.do12d(F.relu(self.bn12d(self.conv12d(x1d)))) x11d = self.conv11d(x12d) return self.sm(x11d) # + id="9fG-gvtxiyf4" # # https://rcdaudt.github.io/ # ., ., & . "Fully convolutional siamese networks for change detection". In 2018 25th IEEE International Conference on Image Processing (ICIP) (pp. 4063-4067). IEEE. import torch import torch.nn as nn import torch.nn.functional as F from torch.nn.modules.padding import ReplicationPad2d class SiamUnet_diff(nn.Module): """SiamUnet_diff segmentation network.""" def __init__(self, input_nbr, label_nbr): super(SiamUnet_diff, self).__init__() self.input_nbr = input_nbr self.conv11 = nn.Conv2d(input_nbr, 16, kernel_size=3, padding=1) self.bn11 = nn.BatchNorm2d(16) self.do11 = nn.Dropout2d(p=0.2) self.conv12 = nn.Conv2d(16, 16, kernel_size=3, padding=1) self.bn12 = nn.BatchNorm2d(16) self.do12 = nn.Dropout2d(p=0.2) self.conv21 = nn.Conv2d(16, 32, kernel_size=3, padding=1) self.bn21 = nn.BatchNorm2d(32) self.do21 = nn.Dropout2d(p=0.2) self.conv22 = nn.Conv2d(32, 32, kernel_size=3, padding=1) self.bn22 = nn.BatchNorm2d(32) self.do22 = nn.Dropout2d(p=0.2) self.conv31 = nn.Conv2d(32, 64, kernel_size=3, padding=1) self.bn31 = nn.BatchNorm2d(64) self.do31 = nn.Dropout2d(p=0.2) self.conv32 = nn.Conv2d(64, 64, kernel_size=3, padding=1) self.bn32 = nn.BatchNorm2d(64) self.do32 = nn.Dropout2d(p=0.2) self.conv33 = nn.Conv2d(64, 64, kernel_size=3, padding=1) self.bn33 = nn.BatchNorm2d(64) self.do33 = nn.Dropout2d(p=0.2) self.conv41 = nn.Conv2d(64, 128, kernel_size=3, padding=1) self.bn41 = nn.BatchNorm2d(128) self.do41 = nn.Dropout2d(p=0.2) self.conv42 = nn.Conv2d(128, 128, kernel_size=3, padding=1) self.bn42 = nn.BatchNorm2d(128) self.do42 = nn.Dropout2d(p=0.2) self.conv43 = nn.Conv2d(128, 128, kernel_size=3, padding=1) self.bn43 = nn.BatchNorm2d(128) self.do43 = nn.Dropout2d(p=0.2) self.upconv4 = nn.ConvTranspose2d(128, 128, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv43d = nn.ConvTranspose2d(256, 128, kernel_size=3, padding=1) self.bn43d = nn.BatchNorm2d(128) self.do43d = nn.Dropout2d(p=0.2) self.conv42d = nn.ConvTranspose2d(128, 128, kernel_size=3, padding=1) self.bn42d = nn.BatchNorm2d(128) self.do42d = nn.Dropout2d(p=0.2) self.conv41d = nn.ConvTranspose2d(128, 64, kernel_size=3, padding=1) self.bn41d = 
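# +
# The Unet cell above keeps its own imports commented out and relies on `F` and
# `ReplicationPad2d` being defined by the later model cells; if it is run first,
# these are the imports it needs:
import torch.nn.functional as F
from torch.nn.modules.padding import ReplicationPad2d
# -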
nn.BatchNorm2d(64) self.do41d = nn.Dropout2d(p=0.2) self.upconv3 = nn.ConvTranspose2d(64, 64, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv33d = nn.ConvTranspose2d(128, 64, kernel_size=3, padding=1) self.bn33d = nn.BatchNorm2d(64) self.do33d = nn.Dropout2d(p=0.2) self.conv32d = nn.ConvTranspose2d(64, 64, kernel_size=3, padding=1) self.bn32d = nn.BatchNorm2d(64) self.do32d = nn.Dropout2d(p=0.2) self.conv31d = nn.ConvTranspose2d(64, 32, kernel_size=3, padding=1) self.bn31d = nn.BatchNorm2d(32) self.do31d = nn.Dropout2d(p=0.2) self.upconv2 = nn.ConvTranspose2d(32, 32, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv22d = nn.ConvTranspose2d(64, 32, kernel_size=3, padding=1) self.bn22d = nn.BatchNorm2d(32) self.do22d = nn.Dropout2d(p=0.2) self.conv21d = nn.ConvTranspose2d(32, 16, kernel_size=3, padding=1) self.bn21d = nn.BatchNorm2d(16) self.do21d = nn.Dropout2d(p=0.2) self.upconv1 = nn.ConvTranspose2d(16, 16, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv12d = nn.ConvTranspose2d(32, 16, kernel_size=3, padding=1) self.bn12d = nn.BatchNorm2d(16) self.do12d = nn.Dropout2d(p=0.2) self.conv11d = nn.ConvTranspose2d(16, label_nbr, kernel_size=3, padding=1) self.sm = nn.LogSoftmax(dim=1) def forward(self, x1, x2): """Forward method.""" # Stage 1 x11 = self.do11(F.relu(self.bn11(self.conv11(x1)))) x12_1 = self.do12(F.relu(self.bn12(self.conv12(x11)))) x1p = F.max_pool2d(x12_1, kernel_size=2, stride=2) # Stage 2 x21 = self.do21(F.relu(self.bn21(self.conv21(x1p)))) x22_1 = self.do22(F.relu(self.bn22(self.conv22(x21)))) x2p = F.max_pool2d(x22_1, kernel_size=2, stride=2) # Stage 3 x31 = self.do31(F.relu(self.bn31(self.conv31(x2p)))) x32 = self.do32(F.relu(self.bn32(self.conv32(x31)))) x33_1 = self.do33(F.relu(self.bn33(self.conv33(x32)))) x3p = F.max_pool2d(x33_1, kernel_size=2, stride=2) # Stage 4 x41 = self.do41(F.relu(self.bn41(self.conv41(x3p)))) x42 = self.do42(F.relu(self.bn42(self.conv42(x41)))) x43_1 = self.do43(F.relu(self.bn43(self.conv43(x42)))) x4p = F.max_pool2d(x43_1, kernel_size=2, stride=2) #################################################### # Stage 1 x11 = self.do11(F.relu(self.bn11(self.conv11(x2)))) x12_2 = self.do12(F.relu(self.bn12(self.conv12(x11)))) x1p = F.max_pool2d(x12_2, kernel_size=2, stride=2) # Stage 2 x21 = self.do21(F.relu(self.bn21(self.conv21(x1p)))) x22_2 = self.do22(F.relu(self.bn22(self.conv22(x21)))) x2p = F.max_pool2d(x22_2, kernel_size=2, stride=2) # Stage 3 x31 = self.do31(F.relu(self.bn31(self.conv31(x2p)))) x32 = self.do32(F.relu(self.bn32(self.conv32(x31)))) x33_2 = self.do33(F.relu(self.bn33(self.conv33(x32)))) x3p = F.max_pool2d(x33_2, kernel_size=2, stride=2) # Stage 4 x41 = self.do41(F.relu(self.bn41(self.conv41(x3p)))) x42 = self.do42(F.relu(self.bn42(self.conv42(x41)))) x43_2 = self.do43(F.relu(self.bn43(self.conv43(x42)))) x4p = F.max_pool2d(x43_2, kernel_size=2, stride=2) # Stage 4d x4d = self.upconv4(x4p) pad4 = ReplicationPad2d((0, x43_1.size(3) - x4d.size(3), 0, x43_1.size(2) - x4d.size(2))) x4d = torch.cat((pad4(x4d), torch.abs(x43_1 - x43_2)), 1) x43d = self.do43d(F.relu(self.bn43d(self.conv43d(x4d)))) x42d = self.do42d(F.relu(self.bn42d(self.conv42d(x43d)))) x41d = self.do41d(F.relu(self.bn41d(self.conv41d(x42d)))) # Stage 3d x3d = self.upconv3(x41d) pad3 = ReplicationPad2d((0, x33_1.size(3) - x3d.size(3), 0, x33_1.size(2) - x3d.size(2))) x3d = torch.cat((pad3(x3d), torch.abs(x33_1 - x33_2)), 1) x33d = self.do33d(F.relu(self.bn33d(self.conv33d(x3d)))) x32d = 
self.do32d(F.relu(self.bn32d(self.conv32d(x33d)))) x31d = self.do31d(F.relu(self.bn31d(self.conv31d(x32d)))) # Stage 2d x2d = self.upconv2(x31d) pad2 = ReplicationPad2d((0, x22_1.size(3) - x2d.size(3), 0, x22_1.size(2) - x2d.size(2))) x2d = torch.cat((pad2(x2d), torch.abs(x22_1 - x22_2)), 1) x22d = self.do22d(F.relu(self.bn22d(self.conv22d(x2d)))) x21d = self.do21d(F.relu(self.bn21d(self.conv21d(x22d)))) # Stage 1d x1d = self.upconv1(x21d) pad1 = ReplicationPad2d((0, x12_1.size(3) - x1d.size(3), 0, x12_1.size(2) - x1d.size(2))) x1d = torch.cat((pad1(x1d), torch.abs(x12_1 - x12_2)), 1) x12d = self.do12d(F.relu(self.bn12d(self.conv12d(x1d)))) x11d = self.conv11d(x12d) return self.sm(x11d) # + id="X0PZNZxsixCa" # ., ., & . "Fully convolutional siamese networks for change detection". In 2018 25th IEEE International Conference on Image Processing (ICIP) (pp. 4063-4067). IEEE. ### SiamUNet_conc network. Improvement on simple UNet, as outlined in the paper above. Siamese architectures are pretty nifty. import torch import torch.nn as nn import torch.nn.functional as F from torch.nn.modules.padding import ReplicationPad2d class SiamUnet_conc(nn.Module): """SiamUnet_conc segmentation network.""" def __init__(self, input_nbr, label_nbr): super(SiamUnet_conc, self).__init__() self.input_nbr = input_nbr self.conv11 = nn.Conv2d(input_nbr, 16, kernel_size=3, padding=1) self.bn11 = nn.BatchNorm2d(16) self.do11 = nn.Dropout2d(p=0.2) self.conv12 = nn.Conv2d(16, 16, kernel_size=3, padding=1) self.bn12 = nn.BatchNorm2d(16) self.do12 = nn.Dropout2d(p=0.2) self.conv21 = nn.Conv2d(16, 32, kernel_size=3, padding=1) self.bn21 = nn.BatchNorm2d(32) self.do21 = nn.Dropout2d(p=0.2) self.conv22 = nn.Conv2d(32, 32, kernel_size=3, padding=1) self.bn22 = nn.BatchNorm2d(32) self.do22 = nn.Dropout2d(p=0.2) self.conv31 = nn.Conv2d(32, 64, kernel_size=3, padding=1) self.bn31 = nn.BatchNorm2d(64) self.do31 = nn.Dropout2d(p=0.2) self.conv32 = nn.Conv2d(64, 64, kernel_size=3, padding=1) self.bn32 = nn.BatchNorm2d(64) self.do32 = nn.Dropout2d(p=0.2) self.conv33 = nn.Conv2d(64, 64, kernel_size=3, padding=1) self.bn33 = nn.BatchNorm2d(64) self.do33 = nn.Dropout2d(p=0.2) self.conv41 = nn.Conv2d(64, 128, kernel_size=3, padding=1) self.bn41 = nn.BatchNorm2d(128) self.do41 = nn.Dropout2d(p=0.2) self.conv42 = nn.Conv2d(128, 128, kernel_size=3, padding=1) self.bn42 = nn.BatchNorm2d(128) self.do42 = nn.Dropout2d(p=0.2) self.conv43 = nn.Conv2d(128, 128, kernel_size=3, padding=1) self.bn43 = nn.BatchNorm2d(128) self.do43 = nn.Dropout2d(p=0.2) self.upconv4 = nn.ConvTranspose2d(128, 128, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv43d = nn.ConvTranspose2d(384, 128, kernel_size=3, padding=1) self.bn43d = nn.BatchNorm2d(128) self.do43d = nn.Dropout2d(p=0.2) self.conv42d = nn.ConvTranspose2d(128, 128, kernel_size=3, padding=1) self.bn42d = nn.BatchNorm2d(128) self.do42d = nn.Dropout2d(p=0.2) self.conv41d = nn.ConvTranspose2d(128, 64, kernel_size=3, padding=1) self.bn41d = nn.BatchNorm2d(64) self.do41d = nn.Dropout2d(p=0.2) self.upconv3 = nn.ConvTranspose2d(64, 64, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv33d = nn.ConvTranspose2d(192, 64, kernel_size=3, padding=1) self.bn33d = nn.BatchNorm2d(64) self.do33d = nn.Dropout2d(p=0.2) self.conv32d = nn.ConvTranspose2d(64, 64, kernel_size=3, padding=1) self.bn32d = nn.BatchNorm2d(64) self.do32d = nn.Dropout2d(p=0.2) self.conv31d = nn.ConvTranspose2d(64, 32, kernel_size=3, padding=1) self.bn31d = nn.BatchNorm2d(32) self.do31d = nn.Dropout2d(p=0.2) self.upconv2 = 
nn.ConvTranspose2d(32, 32, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv22d = nn.ConvTranspose2d(96, 32, kernel_size=3, padding=1) self.bn22d = nn.BatchNorm2d(32) self.do22d = nn.Dropout2d(p=0.2) self.conv21d = nn.ConvTranspose2d(32, 16, kernel_size=3, padding=1) self.bn21d = nn.BatchNorm2d(16) self.do21d = nn.Dropout2d(p=0.2) self.upconv1 = nn.ConvTranspose2d(16, 16, kernel_size=3, padding=1, stride=2, output_padding=1) self.conv12d = nn.ConvTranspose2d(48, 16, kernel_size=3, padding=1) self.bn12d = nn.BatchNorm2d(16) self.do12d = nn.Dropout2d(p=0.2) self.conv11d = nn.ConvTranspose2d(16, label_nbr, kernel_size=3, padding=1) self.sm = nn.LogSoftmax(dim=1) def forward(self, x1, x2): """Forward method.""" # Stage 1 x11 = self.do11(F.relu(self.bn11(self.conv11(x1)))) x12_1 = self.do12(F.relu(self.bn12(self.conv12(x11)))) x1p = F.max_pool2d(x12_1, kernel_size=2, stride=2) # Stage 2 x21 = self.do21(F.relu(self.bn21(self.conv21(x1p)))) x22_1 = self.do22(F.relu(self.bn22(self.conv22(x21)))) x2p = F.max_pool2d(x22_1, kernel_size=2, stride=2) # Stage 3 x31 = self.do31(F.relu(self.bn31(self.conv31(x2p)))) x32 = self.do32(F.relu(self.bn32(self.conv32(x31)))) x33_1 = self.do33(F.relu(self.bn33(self.conv33(x32)))) x3p = F.max_pool2d(x33_1, kernel_size=2, stride=2) # Stage 4 x41 = self.do41(F.relu(self.bn41(self.conv41(x3p)))) x42 = self.do42(F.relu(self.bn42(self.conv42(x41)))) x43_1 = self.do43(F.relu(self.bn43(self.conv43(x42)))) x4p = F.max_pool2d(x43_1, kernel_size=2, stride=2) #################################################### # Stage 1 x11 = self.do11(F.relu(self.bn11(self.conv11(x2)))) x12_2 = self.do12(F.relu(self.bn12(self.conv12(x11)))) x1p = F.max_pool2d(x12_2, kernel_size=2, stride=2) # Stage 2 x21 = self.do21(F.relu(self.bn21(self.conv21(x1p)))) x22_2 = self.do22(F.relu(self.bn22(self.conv22(x21)))) x2p = F.max_pool2d(x22_2, kernel_size=2, stride=2) # Stage 3 x31 = self.do31(F.relu(self.bn31(self.conv31(x2p)))) x32 = self.do32(F.relu(self.bn32(self.conv32(x31)))) x33_2 = self.do33(F.relu(self.bn33(self.conv33(x32)))) x3p = F.max_pool2d(x33_2, kernel_size=2, stride=2) # Stage 4 x41 = self.do41(F.relu(self.bn41(self.conv41(x3p)))) x42 = self.do42(F.relu(self.bn42(self.conv42(x41)))) x43_2 = self.do43(F.relu(self.bn43(self.conv43(x42)))) x4p = F.max_pool2d(x43_2, kernel_size=2, stride=2) #################################################### # Stage 4d x4d = self.upconv4(x4p) pad4 = ReplicationPad2d((0, x43_1.size(3) - x4d.size(3), 0, x43_1.size(2) - x4d.size(2))) x4d = torch.cat((pad4(x4d), x43_1, x43_2), 1) x43d = self.do43d(F.relu(self.bn43d(self.conv43d(x4d)))) x42d = self.do42d(F.relu(self.bn42d(self.conv42d(x43d)))) x41d = self.do41d(F.relu(self.bn41d(self.conv41d(x42d)))) # Stage 3d x3d = self.upconv3(x41d) pad3 = ReplicationPad2d((0, x33_1.size(3) - x3d.size(3), 0, x33_1.size(2) - x3d.size(2))) x3d = torch.cat((pad3(x3d), x33_1, x33_2), 1) x33d = self.do33d(F.relu(self.bn33d(self.conv33d(x3d)))) x32d = self.do32d(F.relu(self.bn32d(self.conv32d(x33d)))) x31d = self.do31d(F.relu(self.bn31d(self.conv31d(x32d)))) # Stage 2d x2d = self.upconv2(x31d) pad2 = ReplicationPad2d((0, x22_1.size(3) - x2d.size(3), 0, x22_1.size(2) - x2d.size(2))) x2d = torch.cat((pad2(x2d), x22_1, x22_2), 1) x22d = self.do22d(F.relu(self.bn22d(self.conv22d(x2d)))) x21d = self.do21d(F.relu(self.bn21d(self.conv21d(x22d)))) # Stage 1d x1d = self.upconv1(x21d) pad1 = ReplicationPad2d((0, x12_1.size(3) - x1d.size(3), 0, x12_1.size(2) - x1d.size(2))) x1d = torch.cat((pad1(x1d), x12_1, x12_2), 1) x12d = 
self.do12d(F.relu(self.bn12d(self.conv12d(x1d)))) x11d = self.conv11d(x12d) return self.sm(x11d) # + id="hWB5lipxWFhS" # ., ., . and ., 2019. Multitask learning for large-scale semantic change detection. Computer Vision and Image Understanding, 187, p.102783. # FresUNet - comes from the above paper. Still not sure how it improves on UNet tbh. Will find out soon. import torch import torch.nn as nn import torch.nn.functional as F from torch.nn.modules.padding import ReplicationPad2d def conv3x3(in_planes, out_planes, stride=1): "3x3 convolution with padding" return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1) class BasicBlock_ss(nn.Module): def __init__(self, inplanes, planes = None, subsamp=1): super(BasicBlock_ss, self).__init__() if planes == None: planes = inplanes * subsamp self.conv1 = conv3x3(inplanes, planes) self.bn1 = nn.BatchNorm2d(planes) self.relu = nn.ReLU(inplace=True) self.conv2 = conv3x3(planes, planes) self.bn2 = nn.BatchNorm2d(planes) self.subsamp = subsamp self.doit = planes != inplanes if self.doit: self.couple = nn.Conv2d(inplanes, planes, kernel_size=1) self.bnc = nn.BatchNorm2d(planes) def forward(self, x): if self.doit: residual = self.couple(x) residual = self.bnc(residual) else: residual = x out = self.conv1(x) out = self.bn1(out) out = self.relu(out) if self.subsamp > 1: out = F.max_pool2d(out, kernel_size=self.subsamp, stride=self.subsamp) residual = F.max_pool2d(residual, kernel_size=self.subsamp, stride=self.subsamp) out = self.conv2(out) out = self.bn2(out) out += residual out = self.relu(out) return out class BasicBlock_us(nn.Module): def __init__(self, inplanes, upsamp=1): super(BasicBlock_us, self).__init__() planes = int(inplanes / upsamp) # assumes integer result, fix later self.conv1 = nn.ConvTranspose2d(inplanes, planes, kernel_size=3, padding=1, stride=upsamp, output_padding=1) self.bn1 = nn.BatchNorm2d(planes) self.relu = nn.ReLU(inplace=True) self.conv2 = conv3x3(planes, planes) self.bn2 = nn.BatchNorm2d(planes) self.upsamp = upsamp self.couple = nn.ConvTranspose2d(inplanes, planes, kernel_size=3, padding=1, stride=upsamp, output_padding=1) self.bnc = nn.BatchNorm2d(planes) def forward(self, x): residual = self.couple(x) residual = self.bnc(residual) out = self.conv1(x) out = self.bn1(out) out = self.relu(out) out = self.conv2(out) out = self.bn2(out) out += residual out = self.relu(out) return out class FresUNet(nn.Module): """FresUNet segmentation network.""" def __init__(self, input_nbr, label_nbr): """Init FresUNet fields.""" super(FresUNet, self).__init__() self.input_nbr = input_nbr cur_depth = input_nbr base_depth = 8 # Encoding stage 1 self.encres1_1 = BasicBlock_ss(cur_depth, planes = base_depth) cur_depth = base_depth d1 = base_depth self.encres1_2 = BasicBlock_ss(cur_depth, subsamp=2) cur_depth *= 2 # Encoding stage 2 self.encres2_1 = BasicBlock_ss(cur_depth) d2 = cur_depth self.encres2_2 = BasicBlock_ss(cur_depth, subsamp=2) cur_depth *= 2 # Encoding stage 3 self.encres3_1 = BasicBlock_ss(cur_depth) d3 = cur_depth self.encres3_2 = BasicBlock_ss(cur_depth, subsamp=2) cur_depth *= 2 # Encoding stage 4 self.encres4_1 = BasicBlock_ss(cur_depth) d4 = cur_depth self.encres4_2 = BasicBlock_ss(cur_depth, subsamp=2) cur_depth *= 2 # Decoding stage 4 self.decres4_1 = BasicBlock_ss(cur_depth) self.decres4_2 = BasicBlock_us(cur_depth, upsamp=2) cur_depth = int(cur_depth/2) # Decoding stage 3 self.decres3_1 = BasicBlock_ss(cur_depth + d4, planes = cur_depth) self.decres3_2 = BasicBlock_us(cur_depth, upsamp=2) cur_depth = 
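# +
# The two Siamese variants above differ only in how encoder features from the
# two acquisition dates are fused into the decoder skip connections:
# SiamUnet_diff concatenates the upsampled decoder features with the absolute
# difference of the two branches, while SiamUnet_conc concatenates both branches
# directly (hence its wider decoder inputs: 384/192/96/48 channels versus
# 256/128/64/32). A minimal illustration on dummy stage-4 feature maps:
import torch

feat_t1 = torch.randn(1, 128, 12, 12)    # encoder stage-4 features, image 1
feat_t2 = torch.randn(1, 128, 12, 12)    # encoder stage-4 features, image 2
upsampled = torch.randn(1, 128, 12, 12)  # decoder features after upconv4

diff_skip = torch.cat((upsampled, torch.abs(feat_t1 - feat_t2)), 1)  # 256 channels
conc_skip = torch.cat((upsampled, feat_t1, feat_t2), 1)              # 384 channels
print(diff_skip.shape, conc_skip.shape)
# -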
int(cur_depth/2) # Decoding stage 2 self.decres2_1 = BasicBlock_ss(cur_depth + d3, planes = cur_depth) self.decres2_2 = BasicBlock_us(cur_depth, upsamp=2) cur_depth = int(cur_depth/2) # Decoding stage 1 self.decres1_1 = BasicBlock_ss(cur_depth + d2, planes = cur_depth) self.decres1_2 = BasicBlock_us(cur_depth, upsamp=2) cur_depth = int(cur_depth/2) # Output self.coupling = nn.Conv2d(cur_depth + d1, label_nbr, kernel_size=1) self.sm = nn.LogSoftmax(dim=1) def forward(self, x1, x2): x = torch.cat((x1, x2), 1) # pad5 = ReplicationPad2d((0, x53.size(3) - x5d.size(3), 0, x53.size(2) - x5d.size(2))) s1_1 = x.size() x1 = self.encres1_1(x) x = self.encres1_2(x1) s2_1 = x.size() x2 = self.encres2_1(x) x = self.encres2_2(x2) s3_1 = x.size() x3 = self.encres3_1(x) x = self.encres3_2(x3) s4_1 = x.size() x4 = self.encres4_1(x) x = self.encres4_2(x4) x = self.decres4_1(x) x = self.decres4_2(x) s4_2 = x.size() pad4 = ReplicationPad2d((0, s4_1[3] - s4_2[3], 0, s4_1[2] - s4_2[2])) x = pad4(x) # x = self.decres3_1(x) x = self.decres3_1(torch.cat((x, x4), 1)) x = self.decres3_2(x) s3_2 = x.size() pad3 = ReplicationPad2d((0, s3_1[3] - s3_2[3], 0, s3_1[2] - s3_2[2])) x = pad3(x) x = self.decres2_1(torch.cat((x, x3), 1)) x = self.decres2_2(x) s2_2 = x.size() pad2 = ReplicationPad2d((0, s2_1[3] - s2_2[3], 0, s2_1[2] - s2_2[2])) x = pad2(x) x = self.decres1_1(torch.cat((x, x2), 1)) x = self.decres1_2(x) s1_2 = x.size() pad1 = ReplicationPad2d((0, s1_1[3] - s1_2[3], 0, s1_1[2] - s1_2[2])) x = pad1(x) x = self.coupling(torch.cat((x, x1), 1)) x = self.sm(x) return x # + colab={"base_uri": "https://localhost:8080/"} id="MC_LwYkiE22x" outputId="0689cb82-3225-4cd9-bf06-287c74b18141" # Dataset if DATA_AUG: data_transform = tr.Compose([RandomFlip(), RandomRot()]) else: data_transform = None train_dataset = ChangeDetectionDataset(PATH_TO_DATASET, train = True, stride = TRAIN_STRIDE, transform=data_transform) #weights = torch.FloatTensor(train_dataset.weights) weights = torch.FloatTensor(train_dataset.weights).cuda() print(weights) train_loader = DataLoader(train_dataset, batch_size = BATCH_SIZE, shuffle = True, num_workers = 4) test_dataset = ChangeDetectionDataset(PATH_TO_DATASET, train = False, stride = TRAIN_STRIDE) test_loader = DataLoader(test_dataset, batch_size = BATCH_SIZE, shuffle = True, num_workers = 4) print('DATASETS OK') # + id="_xT84TwlE22x" # print(weights) # + colab={"base_uri": "https://localhost:8080/"} id="NGrGMeR8E22y" outputId="ccda1a6b-5424-4787-8b3f-cef1864d6a66" # 0-RGB | 1-RGBIr | 2-All bands s.t. resulution <= 20m | 3-All bands if TYPE == 0: # net, net_name = Unet(2*3, 2), 'FC-EF' net, net_name = SiamUnet_conc(3, 2), 'FC-Siam-conc' # net, net_name = SiamUnet_diff(3, 2), 'FC-Siam-diff' # net, net_name = FresUNet(2*3, 2), 'FresUNet' elif TYPE == 1: # net, net_name = Unet(2*4, 2), 'FC-EF' net, net_name = SiamUnet_conc(4, 2), 'FC-Siam-conc' # net, net_name = SiamUnet_diff(4, 2), 'FC-Siam-diff' # net, net_name = FresUNet(2*4, 2), 'FresUNet' elif TYPE == 2: # net, net_name = Unet(2*10, 2), 'FC-EF' net, net_name = SiamUnet_conc(10, 2), 'FC-Siam-conc' # net, net_name = SiamUnet_diff(10, 2), 'FC-Siam-diff' # net, net_name = FresUNet(2*10, 2), 'FresUNet' elif TYPE == 3: # net, net_name = Unet(2*13, 2), 'FC-EF' net, net_name = SiamUnet_conc(13, 2), 'FC-Siam-conc' # net, net_name = SiamUnet_diff(13, 2), 'FC-Siam-diff' # net, net_name = FresUNet(2*13, 2), 'FresUNet' net.cuda() criterion = nn.NLLLoss(weight=weights) # to be used with logsoftmax output - need to think about tweaking this too. 
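# +
# A quick shape sanity check for the chosen Siamese network (a sketch; assumes
# CUDA is available as elsewhere in the notebook and TYPE == 1, so each input
# has 4 channels). The output carries 2 log-probability maps, one per class:
dummy1 = torch.zeros(1, 4, PATCH_SIDE, PATCH_SIDE).cuda()
dummy2 = torch.zeros(1, 4, PATCH_SIDE, PATCH_SIDE).cuda()
with torch.no_grad():
    print(net(dummy1, dummy2).shape)   # torch.Size([1, 2, 96, 96])
# -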
print('NETWORK OK') # + colab={"base_uri": "https://localhost:8080/"} id="55bIf7GtE22y" outputId="4c545979-a32b-44e0-8296-cf969bc8d487" def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print('Number of trainable parameters:', count_parameters(net)) # + id="XihBjBIZE22y" ### This cell gives the procedure to train the model on the training dataset, and output ### graphs that show the progress of the training through the epochs e.g. loss, recall etc. ### Uses the Adam optimiser. ### There are lots of things we could tweak here - optimiser, learning rate, weight decay (regularisation), ### no. of epochs, as well as tweaking the fundamental structure of the ConvNet models used. # net.load_state_dict(torch.load('net-best_epoch-1_fm-0.7394933126157746.pth.tar')) def train(n_epochs = N_EPOCHS, save = True): t = np.linspace(1, n_epochs, n_epochs) epoch_train_loss = 0 * t epoch_train_accuracy = 0 * t epoch_train_change_accuracy = 0 * t epoch_train_nochange_accuracy = 0 * t epoch_train_precision = 0 * t epoch_train_recall = 0 * t epoch_train_Fmeasure = 0 * t epoch_test_loss = 0 * t epoch_test_accuracy = 0 * t epoch_test_change_accuracy = 0 * t epoch_test_nochange_accuracy = 0 * t epoch_test_precision = 0 * t epoch_test_recall = 0 * t epoch_test_Fmeasure = 0 * t # mean_acc = 0 # best_mean_acc = 0 fm = 0 best_fm = 0 lss = 1000 best_lss = 1000 plt.figure(num=1) plt.figure(num=2) plt.figure(num=3) optimizer = torch.optim.Adam(net.parameters(), weight_decay=1e-4) # optimizer = torch.optim.Adam(net.parameters(), lr=0.0005) scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, 0.95) for epoch_index in tqdm(range(n_epochs)): net.train() print('Epoch: ' + str(epoch_index + 1) + ' of ' + str(N_EPOCHS)) tot_count = 0 tot_loss = 0 tot_accurate = 0 class_correct = list(0. for i in range(2)) class_total = list(0. 
for i in range(2)) # for batch_index, batch in enumerate(tqdm(data_loader)): for batch in train_loader: I1 = Variable(batch['I1'].float().cuda()) I2 = Variable(batch['I2'].float().cuda()) label = torch.squeeze(Variable(batch['label'].cuda())) #I1 = Variable(batch['I1'].float()) #I2 = Variable(batch['I2'].float()) #label = torch.squeeze(Variable(batch['label'])) optimizer.zero_grad() output = net(I1, I2) loss = criterion(output, label.long()) loss.backward() optimizer.step() scheduler.step() epoch_train_loss[epoch_index], epoch_train_accuracy[epoch_index], cl_acc, pr_rec = test(train_dataset) epoch_train_nochange_accuracy[epoch_index] = cl_acc[0] epoch_train_change_accuracy[epoch_index] = cl_acc[1] epoch_train_precision[epoch_index] = pr_rec[0] epoch_train_recall[epoch_index] = pr_rec[1] epoch_train_Fmeasure[epoch_index] = pr_rec[2] # epoch_test_loss[epoch_index], epoch_test_accuracy[epoch_index], cl_acc, pr_rec = test(test_dataset) epoch_test_loss[epoch_index], epoch_test_accuracy[epoch_index], cl_acc, pr_rec = test(test_dataset) epoch_test_nochange_accuracy[epoch_index] = cl_acc[0] epoch_test_change_accuracy[epoch_index] = cl_acc[1] epoch_test_precision[epoch_index] = pr_rec[0] epoch_test_recall[epoch_index] = pr_rec[1] epoch_test_Fmeasure[epoch_index] = pr_rec[2] plt.figure(num=1) plt.clf() l1_1, = plt.plot(t[:epoch_index + 1], epoch_train_loss[:epoch_index + 1], label='Train loss') l1_2, = plt.plot(t[:epoch_index + 1], epoch_test_loss[:epoch_index + 1], label='Test loss') plt.legend(handles=[l1_1, l1_2]) plt.grid() # plt.gcf().gca().set_ylim(bottom = 0) plt.gcf().gca().set_xlim(left = 0) plt.title('Loss') display.clear_output(wait=True) display.display(plt.gcf()) plt.figure(num=2) plt.clf() l2_1, = plt.plot(t[:epoch_index + 1], epoch_train_accuracy[:epoch_index + 1], label='Train accuracy') l2_2, = plt.plot(t[:epoch_index + 1], epoch_test_accuracy[:epoch_index + 1], label='Test accuracy') plt.legend(handles=[l2_1, l2_2]) plt.grid() plt.gcf().gca().set_ylim(0, 100) # plt.gcf().gca().set_ylim(bottom = 0) # plt.gcf().gca().set_xlim(left = 0) plt.title('Accuracy') display.clear_output(wait=True) display.display(plt.gcf()) plt.figure(num=3) plt.clf() l3_1, = plt.plot(t[:epoch_index + 1], epoch_train_nochange_accuracy[:epoch_index + 1], label='Train accuracy: no change') l3_2, = plt.plot(t[:epoch_index + 1], epoch_train_change_accuracy[:epoch_index + 1], label='Train accuracy: change') l3_3, = plt.plot(t[:epoch_index + 1], epoch_test_nochange_accuracy[:epoch_index + 1], label='Test accuracy: no change') l3_4, = plt.plot(t[:epoch_index + 1], epoch_test_change_accuracy[:epoch_index + 1], label='Test accuracy: change') plt.legend(handles=[l3_1, l3_2, l3_3, l3_4]) plt.grid() plt.gcf().gca().set_ylim(0, 100) # plt.gcf().gca().set_ylim(bottom = 0) # plt.gcf().gca().set_xlim(left = 0) plt.title('Accuracy per class') display.clear_output(wait=True) display.display(plt.gcf()) plt.figure(num=4) plt.clf() l4_1, = plt.plot(t[:epoch_index + 1], epoch_train_precision[:epoch_index + 1], label='Train precision') l4_2, = plt.plot(t[:epoch_index + 1], epoch_train_recall[:epoch_index + 1], label='Train recall') l4_3, = plt.plot(t[:epoch_index + 1], epoch_train_Fmeasure[:epoch_index + 1], label='Train Dice/F1') l4_4, = plt.plot(t[:epoch_index + 1], epoch_test_precision[:epoch_index + 1], label='Test precision') l4_5, = plt.plot(t[:epoch_index + 1], epoch_test_recall[:epoch_index + 1], label='Test recall') l4_6, = plt.plot(t[:epoch_index + 1], epoch_test_Fmeasure[:epoch_index + 1], label='Test Dice/F1') 
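# +
# Note on the training loop above: `scheduler.step()` is called after every
# batch, so the ExponentialLR factor of 0.95 is applied per batch and the
# learning rate decays very quickly. A minimal sketch of the more common
# pattern, stepping the scheduler once per epoch, using the notebook's own
# `net`, `train_loader`, `optimizer`, `scheduler` and `criterion` names
# (a suggestion, not the notebook's behaviour):
for epoch in range(N_EPOCHS):
    net.train()
    for batch in train_loader:
        optimizer.zero_grad()
        output = net(batch['I1'].float().cuda(), batch['I2'].float().cuda())
        loss = criterion(output, batch['label'].long().cuda())
        loss.backward()
        optimizer.step()
    scheduler.step()                     # decay the learning rate once per epoch
# -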
plt.legend(handles=[l4_1, l4_2, l4_3, l4_4, l4_5, l4_6]) plt.grid() plt.gcf().gca().set_ylim(0, 1) # plt.gcf().gca().set_ylim(bottom = 0) # plt.gcf().gca().set_xlim(left = 0) plt.title('Precision, Recall and F-measure') display.clear_output(wait=True) display.display(plt.gcf()) # mean_acc = (epoch_test_nochange_accuracy[epoch_index] + epoch_test_change_accuracy[epoch_index])/2 # if mean_acc > best_mean_acc: # best_mean_acc = mean_acc # save_str = 'net-best_epoch-' + str(epoch_index + 1) + '_acc-' + str(mean_acc) + '.pth.tar' # torch.save(net.state_dict(), save_str) # fm = pr_rec[2] fm = epoch_train_Fmeasure[epoch_index] if fm > best_fm: best_fm = fm save_str = 'net-best_epoch-' + str(epoch_index + 1) + '_fm-' + str(fm) + '.pth.tar' torch.save(net.state_dict(), save_str) lss = epoch_train_loss[epoch_index] if lss < best_lss: best_lss = lss save_str = 'net-best_epoch-' + str(epoch_index + 1) + '_loss-' + str(lss) + '.pth.tar' torch.save(net.state_dict(), save_str) # print('Epoch loss: ' + str(tot_loss/tot_count)) if save: im_format = 'png' # im_format = 'eps' plt.figure(num=1) plt.savefig(net_name + '-01-loss.' + im_format) plt.figure(num=2) plt.savefig(net_name + '-02-accuracy.' + im_format) plt.figure(num=3) plt.savefig(net_name + '-03-accuracy-per-class.' + im_format) plt.figure(num=4) plt.savefig(net_name + '-04-prec-rec-fmeas.' + im_format) out = {'train_loss': epoch_train_loss[-1], 'train_accuracy': epoch_train_accuracy[-1], 'train_nochange_accuracy': epoch_train_nochange_accuracy[-1], 'train_change_accuracy': epoch_train_change_accuracy[-1], 'test_loss': epoch_test_loss[-1], 'test_accuracy': epoch_test_accuracy[-1], 'test_nochange_accuracy': epoch_test_nochange_accuracy[-1], 'test_change_accuracy': epoch_test_change_accuracy[-1]} print('pr_c, rec_c, f_meas, pr_nc, rec_nc') print(pr_rec) return out L = 1024 N = 2 def test(dset): net.eval() tot_loss = 0 tot_count = 0 tot_accurate = 0 n = 2 class_correct = list(0. for i in range(n)) class_total = list(0. for i in range(n)) class_accuracy = list(0. 
for i in range(n)) tp = 0 tn = 0 fp = 0 fn = 0 for img_index in dset.names: I1_full, I2_full, cm_full = dset.get_img(img_index) s = cm_full.shape steps0 = np.arange(0,s[0],ceil(s[0]/N)) steps1 = np.arange(0,s[1],ceil(s[1]/N)) for ii in range(N): for jj in range(N): xmin = steps0[ii] if ii == N-1: xmax = s[0] else: xmax = steps0[ii+1] ymin = jj if jj == N-1: ymax = s[1] else: ymax = steps1[jj+1] I1 = I1_full[:, xmin:xmax, ymin:ymax] I2 = I2_full[:, xmin:xmax, ymin:ymax] cm = cm_full[xmin:xmax, ymin:ymax] I1 = Variable(torch.unsqueeze(I1, 0).float()).cuda() I2 = Variable(torch.unsqueeze(I2, 0).float()).cuda() cm = Variable(torch.unsqueeze(torch.from_numpy(1.0*cm),0).float()).cuda() output = net(I1, I2) loss = criterion(output, cm.long()) # print(loss) tot_loss += loss.data * np.prod(cm.size()) tot_count += np.prod(cm.size()) _, predicted = torch.max(output.data, 1) c = (predicted.int() == cm.data.int()) for i in range(c.size(1)): for j in range(c.size(2)): l = int(cm.data[0, i, j]) class_correct[l] += c[0, i, j] class_total[l] += 1 pr = (predicted.int() > 0).cpu().numpy() gt = (cm.data.int() > 0).cpu().numpy() tp += np.logical_and(pr, gt).sum() tn += np.logical_and(np.logical_not(pr), np.logical_not(gt)).sum() fp += np.logical_and(pr, np.logical_not(gt)).sum() fn += np.logical_and(np.logical_not(pr), gt).sum() net_loss = tot_loss/tot_count net_accuracy = 100 * (tp + tn)/tot_count for i in range(n): class_accuracy[i] = 100 * class_correct[i] / max(class_total[i],0.00001) prec = tp / (tp + fp) rec = tp / (tp + fn) f_meas = 2 * prec * rec / (prec + rec) prec_nc = tn / (tn + fn) rec_nc = tn / (tn + fp) pr_rec = [prec, rec, f_meas, prec_nc, rec_nc] return net_loss, net_accuracy, class_accuracy, pr_rec # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="FUUR7CGLE22z" outputId="bb2719d5-6036-42f0-f1fe-a06b613cba96" ### This cell either loads trained weights, or it begins the training process of a network by calling train(). if LOAD_TRAINED: net.load_state_dict(torch.load('net_final.pth.tar')) print('LOAD OK') else: t_start = time.time() out_dic = train() t_end = time.time() print(out_dic) print('Elapsed time:') print(t_end - t_start) # + colab={"base_uri": "https://localhost:8080/"} id="SxcJZ3ItE22z" outputId="e3cbfb9b-5dae-4ef1-d4a6-b7abc243547e" ### This cell saves the weights of the trained network for future use. if not LOAD_TRAINED: torch.save(net.state_dict(), 'siamunet_conc_net_final.pth.tar') print('SAVE OK') # + colab={"base_uri": "https://localhost:8080/"} id="wU_BV2LPE220" outputId="9c2bff7a-8f51-4b0b-ac53-726df1ac639d" ### This cell outputs the results of the trained network when applied to the test set. ### Results come in the form of png files showing the network's predictions of change. def save_test_results(dset): for name in tqdm(dset.names): with warnings.catch_warnings(): I1, I2, cm = dset.get_img(name) I1 = Variable(torch.unsqueeze(I1, 0).float()).cuda() I2 = Variable(torch.unsqueeze(I2, 0).float()).cuda() out = net(I1, I2) _, predicted = torch.max(out.data, 1) I = np.stack((255*cm,255*np.squeeze(predicted.cpu().numpy()),255*cm),2) io.imsave(f'{net_name}-{name}.png',I) t_start = time.time() # save_test_results(train_dataset) save_test_results(test_dataset) t_end = time.time() print('Elapsed time: {}'.format(t_end - t_start)) # + colab={"base_uri": "https://localhost:8080/"} id="IXF_BPcwE220" outputId="6d53231d-9b99-47fa-a413-2a91025ca264" ### This cell returns various metrics that relate to the performance of the network. 
### It does this by testing the trained network on the test set (called by test) and then ### computing the various metrics e.g. accuracy, precision, recall. L = 1024 def kappa(tp, tn, fp, fn): N = tp + tn + fp + fn p0 = (tp + tn) / N pe = ((tp+fp)*(tp+fn) + (tn+fp)*(tn+fn)) / (N * N) return (p0 - pe) / (1 - pe) def test(dset): net.eval() tot_loss = 0 tot_count = 0 tot_accurate = 0 n = 2 class_correct = list(0. for i in range(n)) class_total = list(0. for i in range(n)) class_accuracy = list(0. for i in range(n)) tp = 0 tn = 0 fp = 0 fn = 0 for img_index in tqdm(dset.names): I1_full, I2_full, cm_full = dset.get_img(img_index) s = cm_full.shape for ii in range(ceil(s[0]/L)): for jj in range(ceil(s[1]/L)): xmin = L*ii xmax = min(L*(ii+1),s[1]) ymin = L*jj ymax = min(L*(jj+1),s[1]) I1 = I1_full[:, xmin:xmax, ymin:ymax] I2 = I2_full[:, xmin:xmax, ymin:ymax] cm = cm_full[xmin:xmax, ymin:ymax] I1 = Variable(torch.unsqueeze(I1, 0).float()).cuda() I2 = Variable(torch.unsqueeze(I2, 0).float()).cuda() cm = Variable(torch.unsqueeze(torch.from_numpy(1.0*cm),0).float()).cuda() output = net(I1, I2) loss = criterion(output, cm.long()) tot_loss += loss.data * np.prod(cm.size()) tot_count += np.prod(cm.size()) _, predicted = torch.max(output.data, 1) c = (predicted.int() == cm.data.int()) for i in range(c.size(1)): for j in range(c.size(2)): l = int(cm.data[0, i, j]) class_correct[l] += c[0, i, j] class_total[l] += 1 pr = (predicted.int() > 0).cpu().numpy() gt = (cm.data.int() > 0).cpu().numpy() tp += np.logical_and(pr, gt).sum() tn += np.logical_and(np.logical_not(pr), np.logical_not(gt)).sum() fp += np.logical_and(pr, np.logical_not(gt)).sum() fn += np.logical_and(np.logical_not(pr), gt).sum() net_loss = tot_loss/tot_count net_loss = float(net_loss.cpu().numpy()) net_accuracy = 100 * (tp + tn)/tot_count for i in range(n): class_accuracy[i] = 100 * class_correct[i] / max(class_total[i],0.00001) class_accuracy[i] = float(class_accuracy[i].cpu().numpy()) prec = tp / (tp + fp) rec = tp / (tp + fn) dice = 2 * prec * rec / (prec + rec) prec_nc = tn / (tn + fn) rec_nc = tn / (tn + fp) pr_rec = [prec, rec, dice, prec_nc, rec_nc] k = kappa(tp, tn, fp, fn) return {'net_loss': net_loss, 'net_accuracy': net_accuracy, 'class_accuracy': class_accuracy, 'precision': prec, 'recall': rec, 'dice': dice, 'kappa': k} results = test(test_dataset) pprint(results) # + id="up-DyoilE221" # + id="xLkl8KhjE221" # + id="0Fa4VoPaE221" # + id="DQvzXKopE221" # + id="6pxT-us0E222" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Custom libraries from datascienceutils import plotter from datascienceutils import analyze from datascienceutils import predictiveModels as pm from datascienceutils import sklearnUtils as sku from IPython.display import Image # Standard libraries import json # %matplotlib inline import datetime import numpy as np import pandas as pd import random from sklearn import cross_validation from sklearn import metrics from bokeh.plotting import figure, show, output_file, output_notebook, ColumnDataSource from bokeh.charts import Histogram import bokeh output_notebook() # Set pandas display options #pd.set_option('display.width', pd.util.terminal.get_terminal_size()[0]) pd.set_option('display.expand_frame_repr', False) pd.set_option('max_colwidth', 800) # + # Data set from https://archive.ics.uci.edu/ml/machine-learning-databases/audiology/ i.e: 
famous uci ml data set repository with open('./data/audiology.data', 'r') as fd: data = fd.readlines() # - from pprint import pprint with open('./data/audiology.names', 'r') as fd: pprint(fd.readlines()) # + all_obs = set() def parse_line(line): global all_obs line = line.strip('\n') line = line.strip(']') line = line.strip('[') all_f = line.split(',') caseid = all_f[0] classif = all_f[1] descs = all_f[2:] descs[0] = descs[0].strip('[') features = list() for ea in descs: all_obs.add(ea) descs = ','.join(descs) return [caseid, classif, descs] # - audiology_df = pd.DataFrame(columns=['case_id', 'classification', 'case_features']) #'age_gt_60', 'boneAbnormal','airBoneGap', 'ar_c(normal)']) for idx, each in enumerate(data): if bool(each): line = parse_line(each) audiology_df.loc[idx] = line audiology_df.head() # ## Looks like the case_features are all text labels/observations by doctors. Let's split them into features and make them boolean. print(audiology_df.groupby('classification').count()) #def check_defect_presence(): # if ea in all_obs: # pass for ea in all_obs: audiology_df[ea] = audiology_df['case_features'].apply( lambda x: True if ea in x else False) audiology_df.drop('case_features', 1, inplace=True) audiology_df.head() # ## OKay, based on the above data set sample, the only meaningful thing we can try is to see if we can predict the case classification based on any of the observed features. # # ## We have 87 features,(I'm assuming these are labels that came out of human judgment) and most of it is false.. aka this is a sparsely populated dataset in these dimensions, and most likely the dimensions are not orthogonal(aka independent) to(of) each other. # # ## Due to these reasons, # * a tree based prediction is best(since it is all boolean features) # * Xgboost since it is mostly False/empty features.(aka sparse features) # audiology_df.head() # + from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelEncoder le = LabelEncoder() le.fit(audiology_df['classification'].unique()) audiology_df['classification'] = le.transform(audiology_df['classification']) target = audiology_df.classification audiology_df.drop(['case_id', 'classification'], 1, inplace=True) # - audiology_df.head() X_train, X_test, y_train, y_test = train_test_split(audiology_df, target, test_size=0.3) tree_model = pm.train(X_train, y_train, 'tree') tree_model.fit(X_train, y_train) # The mean squared error print("Mean squared error: %.2f" % np.mean((tree_model.predict(X_test) - y_test) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % tree_model.score(X_test, y_test)) plotter.show_tree_model(tree_model, model_type='tree') # Train the model using the training sets xgb_model = pm.train(X_train, y_train, 'xgboost') xgb_model.fit(X_train, y_train) # The mean squared error print("Mean squared error: %.2f" % np.mean((xgb_model.predict(X_test) - y_test) ** 2)) # Explained variance score: 1 is perfect prediction plotter.show_tree_model(xgb_model, model_type='xgboost') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # + import matplotlib.pyplot as plt import seaborn as sns # built in dataset asnscombe df = sns.load_dataset('anscombe') #df.to_csv('anscombe.csv', index=False) # but save it for review # 3 columns df # - # matplot lib. 
not using sns yet # we don't need lines but see the lines for feedback # example plot plt.plot(df.x, df.y) # example plot but o for observation points. # over all points in csv file, not broken down into data set groups plt.plot(df.x, df.y, 'o') #better, we don't want lines # There are 4 groups of data # group s1 = df[df.dataset == 'I'] s2 = df[df.dataset == 'II'] s3 = df[df.dataset == 'III'] s4 = df[df.dataset == 'IV'] # + # build a figure and for each dataset group in csv # sub dataset col to show diff shape rather than all of it. fig = plt.figure() # figure with 2 rows and 2 columns, position of subplot x1 = fig.add_subplot(2,2,1) x2 = fig.add_subplot(2,2,2) x3 = fig.add_subplot(2,2,3) x4 = fig.add_subplot(2,2,4) # for each subplot plot o points for each independent series extracted as groups. # each subplot tells its own distinct story x1.plot(s1.x, s1.y, 'o') x2.plot(s2.x, s2.y, 'o') x3.plot(s3.x, s3.y, 'o') x4.plot(s4.x, s4.y, 'o') # motivating example why to visualize the segmented # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from Bio import SeqIO from Bio.Seq import Seq from Bio.SeqRecord import SeqRecord from Bio.Alphabet import IUPAC from Bio.SeqFeature import SeqFeature, FeatureLocation def write_gene_reference(gene_seq, gene_name, gene_description, cov_type, outfile): gene_record = SeqRecord(gene_seq, id= seq_record.id, name= gene_name, description= gene_description) source_feature = SeqFeature(FeatureLocation(0, len(gene_seq)), type='source', qualifiers={'organsism':cov_type, "mol_type":"genomic RNA"}) gene_record.features.append(source_feature) cds_feature = SeqFeature(FeatureLocation(0, len(gene_seq)), type='CDS', qualifiers={'translation':gene_seq.translate()}) gene_record.features.append(cds_feature) SeqIO.write(gene_record, outfile, 'genbank') # + #229e #From UniProt P0C6X1 (R1AB_CVH22) #https://www.uniprot.org/uniprot/P0C6X1 #In reference strain I have been using, find region homologous to: >sp|P0C6X1|4069-4995 SFDNSYLNRVRGSSAARLEPCNGTDIDYCVRAFDVYNKDASFIGKNLKSNCVRFKNVDKD DAFYIVKRCIKSVMDHEQSMYNLLKGCNAVAKHDFFTWHEGRTIYGNVSRQDLTKYTMMD LCFALRNFDEKDCEVFKEILVLTGCCSTDYFEMKNWFDPIENEDIHRVYAALGKVVANAM LKCVAFCDEMVLKGVVGVLTLDNQDLNGNFYDFGDFVLCPPGMGIPYCTSYYSYMMPVMG MTNCLASECFMKSDIFGQDFKTFDLLKYDFTEHKEVLFNKYFKYWGQDYHPDCVDCHDEM CILHCSNFNTLFATTIPNTAFGPLCRKVFIDGVPVVATAGYHFKQLGLVWNKDVNTHSTR LTITELLQFVTDPTLIVASSPALVDKRTVCFSVAALSTGLTSQTVKPGHFNKEFYDFLRS QGFFDEGSELTLKHFFFTQKGDAAIKDFDYYRYNRPTMLDIGQARVAYQVAARYFDCYEG GCITSREVVVTNLNKSAGWPLNKFGKAGLYYESISYEEQDAIFSLTKRNILPTMTQLNLK YAISGKERARTVGGVSLLATMTTRQFHQKCLKSIVATRNATVVIGTTKFYGGWDNMLKNL MADVDDPKLMGWDYPKCDRAMPSMIRMLSAMILGSKHVTCCTASDKFYRLSNELAQVLTE VVYSNGGFYFKPGGTTSGDATTAYANSVFNIFQAVSSNINCVLSVNSSNCNNFNVKKLQR QLYDNCYRNSNVDESFVDDFYGYLQKHFSMMILSDDSVVCYNKTYAGLGYIADISAFKAT LYYQNGVFMSTAKCWTEEDLSIGPHEFCSQHTMQIVDENGKYYLPYPDPSRIISAGVFVD DITKTDAVILLERYVSLAIDAYPLSKHPKPEYRKVFYALLDWVKHLNKTLNEGVLESFSV TLLDEHESKFWDESFYASMYEKSTVLQ # + #229e for seq_record in SeqIO.parse("../229e/config/229e_full_reference.gb", "genbank"): for feature in seq_record.features: if feature.type == 'CDS': if feature.qualifiers['product'] == ['replicase polyprotein 1ab']: rdrp_229e_nt = feature.location.extract(seq_record.seq)[12204:14985] rdrp_229e_aa = feature.qualifiers['translation'][0][4068:4995] outfile = '../229e/config/229e_rdrp_reference.gb' write_gene_reference(rdrp_229e_nt, '229e_rdrp', 'rdrp sequence 
extracted from whole genome Genbank file', '229e', outfile) # + #nl63 #can't find rdrp/nsp12 annotated in a nl63 sequence or protein #use region homologous to 229e's nsp12 since both are alpha-CoVs # + #nl63 for seq_record in SeqIO.parse("../nl63/config/nl63_full_reference.gb", "genbank"): for feature in seq_record.features: if feature.type == 'CDS': if feature.qualifiers['product'] == ['replicase polyprotein 1ab']: rdrp_nl63_nt = feature.location.extract(seq_record.seq)[12129:14910] rdrp_nl63_aa = feature.qualifiers['translation'][0][4043:4970] outfile = '../nl63/config/nl63_rdrp_reference.gb' write_gene_reference(rdrp_nl63_nt, 'nl63_rdrp', 'rdrp sequence extracted from whole genome Genbank file based on homology to 229e rdrp', 'nl63', outfile) # + #hku1 #full reference file has nsp12 annotated for seq_record in SeqIO.parse("../hku1/config/hku1_full_reference.gb", "genbank"): for feature in seq_record.features: if feature.type == 'mat_peptide': if feature.qualifiers['product'] == ['nsp12']: rdrp_hku1_nt = feature.location.extract(seq_record.seq) rdrp_hku1_aa = feature.location.extract(seq_record.seq).translate() outfile = '../hku1/config/hku1_rdrp_reference.gb' write_gene_reference(rdrp_hku1_nt, 'hku1_rdrp', 'rdrp sequence extracted from whole genome Genbank file based on homology to oc43 rdrp', 'hku1', outfile) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from tkinter import * from math import * from tkinter import messagebox expression = "" names = {'sqrt': sqrt, 'powTo': pow} xpad = 5 ypad = 5 sqrtBool = False powerCount = 0 def sqRoot(): global sqrtBool if sqrtBool == False: press('sqrt(') sqrtBool = True else: press(')') sqrtBool = False def powTo(): global powerCount if powerCount == 0: press('powTo(') powerCount += 1 elif powerCount == 1: press(',') powerCount += 1 else: press(')') powerCount = 0 # Function to update expressiom # in the text entry box def press(num): global expression expression = expression + str(num) equation.set(expression) # Function to evaluate (eval()) the final expression def equalpress(): try: global expression print(expression) total = str(eval(expression, names)) equation.set(total) expression = "" except: equation.set(" error ") expression = "" # Function to clear the contents # of text entry box def clear(): global expression expression = "" equation.set("") def Info(): messagebox.showinfo("Info","Created by the awesome --> Enter Name <--- ! 
=O") # Driver code # Is this file directly run by Python or is it imported if __name__ == "__main__": # create a GUI window gui = Tk() # set the background colour of GUI window gui.configure(background="white") # set the title of GUI window gui.title("LABINF Calculator") # set the configuration of GUI window gui.geometry("350x200") # StringVar() is the variable class equation = StringVar() # create the text entry box for expression_field = Entry(gui, textvariable=equation,bg='grey',fg='white', font=("Calibri 15"), justify=CENTER, state=DISABLED) # grid method is used for placing expression_field.grid(columnspan=5, ipadx=50, ipady=10) equation.set('Enter your expression') # create a Buttons and place at a particular button1 = Button(gui, text=' 1 ', fg='black', bg='light grey',command=lambda: press(1), height=1, width=7) button1.grid(row=2, column=0, padx=xpad, pady=ypad) button2 = Button(gui, text=' 2 ', fg='black', bg='light grey',command=lambda: press(2), height=1, width=7) button2.grid(row=2, column=1, padx=xpad, pady=ypad) button3 = Button(gui, text=' 3 ', fg='black', bg='light grey',command=lambda: press(3), height=1, width=7) button3.grid(row=2, column=2, padx=xpad, pady=ypad) button3 = Button(gui, text=' 4 ', fg='black', bg='light grey',command=lambda: press(4), height=1, width=7) button3.grid(row=3, column=0, padx=xpad, pady=ypad) button3 = Button(gui, text=' 5 ', fg='black', bg='light grey',command=lambda: press(5), height=1, width=7) button3.grid(row=3, column=1, padx=xpad, pady=ypad) button3 = Button(gui, text=' 6 ', fg='black', bg='light grey',command=lambda: press(6), height=1, width=7) button3.grid(row=3, column=2, padx=xpad, pady=ypad) button3 = Button(gui, text=' 7 ', fg='black', bg='light grey',command=lambda: press(7), height=1, width=7) button3.grid(row=4, column=0, padx=xpad, pady=ypad) button3 = Button(gui, text=' 8 ', fg='black', bg='light grey',command=lambda: press(8), height=1, width=7) button3.grid(row=4, column=1, padx=xpad, pady=ypad) button3 = Button(gui, text=' 9 ', fg='black', bg='light grey',command=lambda: press(9), height=1, width=7) button3.grid(row=4, column=2, padx=xpad, pady=ypad) button3 = Button(gui, text=' 0 ', fg='black', bg='light grey',command=lambda: press(0), height=1, width=7) button3.grid(row=5, column=1, padx=xpad, pady=ypad) dec = Button(gui, text=' , ', fg='black', bg='light grey',command=lambda: press('.'), height=1, width=7) dec.grid(row=5, column=2, padx=xpad, pady=ypad) add = Button(gui, text=' + ', fg='black', bg='orange', command=lambda: press("+"), height=1, width=7) add.grid(row=2, column=3, padx=xpad, pady=ypad) sub = Button(gui, text=' - ', fg='black', bg='orange', command=lambda: press("-"), height=1, width=7) sub.grid(row=3, column=3, padx=xpad, pady=ypad) multi = Button(gui, text=' * ', fg='black', bg='orange', command=lambda: press("*"), height=1, width=7) multi.grid(row=4, column=3, padx=xpad, pady=ypad) div = Button(gui, text=' / ', fg='black', bg='orange', command=lambda: press("/"), height=1, width=7) div.grid(row=5, column=3, padx=xpad, pady=ypad) sqr = Button(gui, text=' ^ ', fg='black', bg='orange', command=lambda: powTo(), height=1, width=7) sqr.grid(row=3, column=4, padx=xpad, pady=ypad) power = Button(gui, text=' √ ', fg='black', bg='orange', command=lambda: sqRoot(), height=1, width=7) power.grid(row=4, column=4, padx=xpad, pady=ypad) equal = Button(gui, text=' = ', fg='black', bg='orange', command=equalpress, height=1, width=7) equal.grid(row=5, column=4, padx=xpad, pady=ypad) clear = Button(gui, text='Clear', 
fg='white', bg='lightcoral', command=clear, height=1, width=7) clear.grid(row=2, column=4, padx=xpad, pady=ypad) info = Button(gui, text=' Info ', fg='black', bg='light blue',command=lambda: Info(), height=1, width=7) info.grid(row=5, column=0, padx=xpad, pady=ypad) # start the GUI gui.mainloop() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #Calculate a life import datetime date_a = datetime.date(2018,1,1) date_b = datetime.date(2019,1,1) # - days_2019 = date_b - date_a print(days_2019) print(date_a) print(type(date_a)) print(type(days_2019)) #days is a function (attribute) that the timedelta type provides #a dedicated function belonging to a particular data type is called a data attribute, or simply an attribute days_2019.days today = datetime.date.today() print(today) mybirthday = datetime.date(2000,1,1) print(mybirthday) lifetime = today - mybirthday print(lifetime.days) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Try to reconfigure OpSim for other cadence # # *Evenly distribute and include the galactic plane* # + from OpSim import OpSim from astropy.coordinates import SkyCoord from astropy import units import numpy as np import matplotlib.pyplot as plt # %matplotlib notebook filters = ['u_', 'g_', 'r_', 'i_', 'z_', 'y_'] # - OpS = OpSim() OpS.dbFile = '/Users/ageller/WORK/LSST/onGitHub/EBLSST/input/db/minion_1016_sqlite.db' #for the OpSim database OpS.getAllOpSimFields() print(OpS.Nobs[1]) print(np.max(OpS.Nobs[np.where(OpS.Nobs < 5000)])) # + coords = SkyCoord(OpS.RA, OpS.Dec, unit=(units.degree, units.degree),frame='icrs') RAwrap = coords.ra.wrap_at(180.*units.degree).degree Decwrap = coords.dec.wrap_at(180.*units.degree).degree f, ax = plt.subplots(subplot_kw={'projection': "mollweide"}, figsize=(8,5)) ax.grid(True) ax.set_xlabel("RA",fontsize=16) ax.set_ylabel("Dec",fontsize=16) mlw = ax.scatter(np.array(RAwrap).ravel()*np.pi/180., np.array(Decwrap).ravel()*np.pi/180., c=OpS.Nobs, #probably want this in a given filter, but that takes a long time to gather cmap='viridis_r', s = 4, vmin=0, vmax=1000) cbar = f.colorbar(mlw, shrink=0.7) cbar.set_label(r'Nobs') # - # ### Take only the fields that are observed and get their dates observed = np.where(OpS.Nobs > 0) primary = np.where(OpS.Nobs > 800) print(len(observed[0])) print(len(primary[0])) # *There will be this many observations in each of these fields* # # *Neglect the fields with the very large number of obs -- is this a bug or because of overlap and/or a planned revisit?* # + Nfields = len(observed[0]) Nobs = OpS.Nobs[observed] bad = np.where(Nobs > 5000) good = np.where(Nobs < 5000) #print(bad, Nobs[bad]) Nobs[bad] = np.max(Nobs[good]) #print(Nobs[bad]) NtotalObs = np.sum(Nobs) print("Nfields, NtotalObs, NObsPerField = ",Nfields, NtotalObs, NtotalObs/Nfields) # - # ### Let's try to construct a CDF of the dt in between observations #this takes a long time to run through... OpSimi = OpS.fieldID[primary].astype("int") OpS.dt = np.array([{} for i in OpSimi]) OpSimi = OpSimi[0:2] #take only the first two for testing for i in OpSimi: print(i) dt = {} OpS.setDates(i, filters) for f in filters: dt[f] = np.diff(OpS.obsDates[i][f]) OpS.dt[i] = dt i = OpSimi[0] print(OpS.NobsDates[i]) print(OpS.Nobs) #print(OpS.dt[i][filters[0]]) # *Construct the CDF.
Ideally we wouldn't bin it, but I think this will be easier to do given that there are about 3000 fields and we'd want to run this in MPI on Quest.* # + f,ax = plt.subplots(len(filters),1, figsize=(5,15), sharex=True) Nbins = 100 dist = {} for a,f in zip(ax, filters): pdf = np.zeros(Nbins) for i in OpSimi: #sum up all the dt's in that filter across all fields p,bin_edges = np.histogram(np.log10(OpS.dt[i][f]), bins=Nbins) pdf += p bins = bin_edges[:-1] + np.diff(bin_edges)/2. cdf = np.cumsum(pdf) a.step(bins,pdf/np.max(pdf),linestyle='--') a.step(bins,cdf/np.max(cdf)) a.set_ylabel(f) dist[f] = {} dist[f]['bins']= bins dist[f]['cdf']= cdf/np.max(cdf) dist[f]['pdf']= pdf/np.max(pdf) ax[-1].set_xlabel('log10(dt [days])') plt.subplots_adjust(hspace=0) # - print(dist[filters[1]]) # ### Files from Quest import os import pickle # + f,ax = plt.subplots(len(filters),1, figsize=(5,15), sharex=True) #d = 'primary_output_files/' #primary only take into account the fields with most obs, but using this would overcount total number of available observations d = 'output_files/' #uses all fields with >0 observations files = os.listdir(d) IDs = [] bins = {} pdf = {} cdf = {} Nobs = {} for i,file in enumerate(files): dist = pickle.load( open( d+file, "rb")) for f in filters: if (i == 0): bins[f] = dist[f]['bins'] pdf[f] = dist[f]['pdf'] Nobs[f] = dist[f]['Nobs'] if (i > 0): xx = np.where(np.logical_or(np.isinf(dist[f]['pdf']), np.isnan(dist[f]['pdf'])))[0] if (len(xx) > 0): #print(file, xx, dist[f]) dist[f]['pdf'][xx] = np.zeros_like(xx) pdf[f] += dist[f]['pdf'] Nobs[f] += dist[f]['Nobs'] outDist = {} for a,f in zip(ax, filters): Nobs[f] /= float(len(files)) cdf[f] = np.cumsum(pdf[f]) pdf[f] /= np.max(pdf[f]) cdf[f] /= np.max(cdf[f]) outDist[f] = {} outDist[f]['Nobs'] = Nobs[f] outDist[f]['bins'] = bins[f] outDist[f]['cdf'] = cdf[f] a.step(bins[f],pdf[f],linestyle='--') a.step(bins[f],cdf[f]) a.set_ylabel(f) ax[-1].set_xlabel('log10(dt [days])') plt.subplots_adjust(hspace=0) #pickle.dump( outDist, open( "OpSim_primary_dtDist.pickle", "wb" ) ) pickle.dump( outDist, open( "OpSim_observed_dtDist.pickle", "wb" ) ) #print(cdf) print(Nobs) # - # *Test drawing observations* # + f,ax = plt.subplots(len(filters),1, figsize=(5,15), sharex=True) for f,a in zip(filters,ax): N = round(Nobs[f]) dt = [] for i in range(N): y = np.random.random() dt.append(10.**np.interp(y, cdf[f], bins[f])) dates = np.cumsum(np.array(dt)) _ = a.hist(np.log10(dt), bins=50) a.set_ylabel(f) ax[-1].set_xlabel('log10(dt [days])') plt.subplots_adjust(hspace=0) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Global Fishing Effort Trawlers import time import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.basemap import Basemap from matplotlib import colors,colorbar import matplotlib # %matplotlib inline import csv import math # from scipy import stats import bq client = bq.Client.Get() def Query(q): t0 = time.time() answer = client.ReadTableRows(client.Query(q)['configuration']['query']['destinationTable']) print 'Query time: ' + str(time.time() - t0) + ' seconds.' return answer # + # how many vessels? 
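# Added note: the four counts gathered below all read the neural-net classification table
# and keep MMSI whose top label is "Trawler"; the variants additionally require a
# classification score above 0.8 and/or membership in the 2015_combined_fishing list,
# which is how the "high confidence" and "likely fishing vessel" figures printed
# afterwards are defined.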
q = ''' SELECT count(*) FROM [scratch_global_fishing_raster.classification_results_20160506] WHERE max_label_label = "Trawler" ''' neural_trawlers = int(Query(q)[0][0]) q = ''' SELECT count(*) FROM [scratch_global_fishing_raster.classification_results_20160506] WHERE max_label_label = "Trawler" and max_label_score > .8''' neural_trawlers_highconfidence = int(Query(q)[0][0]) q = ''' SELECT count(*) FROM [scratch_global_fishing_raster.classification_results_20160506] WHERE max_label_label = "Trawler" and mmsi in(select string(mmsi) from [scratch_bjorn.2015_combined_fishing])''' neural_combined_trawlers = int(Query(q)[0][0]) q = ''' SELECT count(*) FROM [scratch_global_fishing_raster.classification_results_20160506] WHERE max_label_label = "Trawler" and mmsi in(select string(mmsi) from [scratch_bjorn.2015_combined_fishing]) and max_label_score > .8''' neural_combined_trawlers_highconfidence = int(Query(q)[0][0]) # - print "Classified Trawlers:", neural_trawlers print "Classified Trawlers High Confidence:", neural_trawlers_highconfidence print "Classified Trawlers that are Likely Fishing Vessels:", neural_combined_trawlers print "Classified Trawlers High Confidence that are Likely Fishing Vessels :", neural_combined_trawlers_highconfidence # + q = ''' SELECT INTEGER(lat*2) lat_bin, INTEGER(lon*2) lon_bin, SUM(fishing_hours) fishing_hours FROM ( SELECT lat, lon, (last_hours + next_hours)/2 fishing_hours FROM [scratch_global_fishing_raster.2015_with_score_and_hours] WHERE measure_new_score > .5 AND lat <90 AND lon <180 AND mmsi IN ( SELECT INTEGER(mmsi) FROM [scratch_global_fishing_raster.classification_results_20160506] WHERE max_label_label = "Trawler" ) AND seg_id IN ( SELECT seg_id, FROM [scratch_david_seg_analysis.2015_segments] WHERE NOT( point_count<=20 OR (point_count<100 AND point_count = terrestrial_positions) OR (min_lat >= 0 AND max_lat <= 0.109225) OR (min_lon >= 0 AND max_lon <= 0.109225) ))) GROUP BY lat_bin, lon_bin ''' fishing_grid = Query(q) # - # + cellsize = .5 one_over_cellsize = 2 max_lat = 90 min_lat = -90 min_lon = -180 max_lon = 180 num_lats = (max_lat-min_lat)*one_over_cellsize num_lons = (max_lon-min_lon)*one_over_cellsize grid = np.zeros(shape=(num_lats,num_lons)) for row in fishing_grid: lat = int(row[0]) lon = int(row[1]) lat_index = lat-min_lat*one_over_cellsize lon_index = lon-min_lon*one_over_cellsize grid[lat_index][lon_index] = float(row[2]) # + plt.rcParams["figure.figsize"] = [12,7] cutoff = 0 # 4 degress away from the pole firstlat = 90-cutoff lastlat = -90+cutoff firstlon = -180 lastlon = 180 scale = cellsize one_over_cellsize = 2 fishing_days_truncated = grid[one_over_cellsize*cutoff:(180*one_over_cellsize)-cutoff*one_over_cellsize][:] numlats = int((firstlat-lastlat)*one_over_cellsize+.5) numlons = int((lastlon-firstlon)*one_over_cellsize+.5) lat_boxes = np.linspace(lastlat,firstlat,num=numlats,endpoint=False) lon_boxes = np.linspace(firstlon,lastlon,num=numlons,endpoint=False) fig = plt.figure() m = Basemap(llcrnrlat=lastlat, urcrnrlat=firstlat, llcrnrlon=lastlon, urcrnrlon=firstlon, lat_ts=0, projection='robin',resolution="h", lon_0=0) m.drawmapboundary(fill_color='#111111') # m.drawcoastlines(linewidth=.2) m.fillcontinents('#111111',lake_color='#111111')#, lake_color, ax, zorder, alpha) x = np.linspace(-180, 180, 360*one_over_cellsize) y = np.linspace(lastlat, firstlat, (firstlat-lastlat)*one_over_cellsize) x, y = np.meshgrid(x, y) converted_x, converted_y = m(x, y) from matplotlib import colors,colorbar maximum = grid.max() minimum = 1 norm = 
colors.LogNorm(vmin=minimum, vmax=maximum) # norm = colors.Normalize(vmin=0, vmax=1000) m.pcolormesh(converted_x, converted_y, fishing_days_truncated, norm=norm, vmin=minimum, vmax=maximum, cmap = plt.get_cmap('viridis')) t = "Fishing Hours for Trawlers, 2015\nTralwers identified by the Neural Net" plt.title(t, color = "#ffffff", fontsize=18) ax = fig.add_axes([0.2, 0.1, 0.4, 0.02]) #x coordinate , norm = colors.LogNorm(vmin=minimum, vmax=maximum) # norm = colors.Normalize(vmin=0, vmax=1000) lvls = np.logspace(np.log10(minimum),np.log10(maximum),num=8) cb = colorbar.ColorbarBase(ax,norm = norm, orientation='horizontal', ticks=lvls, cmap = plt.get_cmap('viridis')) the_labels = [] for l in lvls: if l>=1: l = int(l) the_labels.append(l) #cb.ax.set_xticklabels(["0" ,round(m3**.5,1), m3, round(m3**1.5,1), m3*m3,round(m3**2.5,1), str(round(m3**3,1))+"+"], fontsize=10) cb.ax.set_xticklabels(the_labels, fontsize=10, color = "#ffffff") cb.set_label('Fishing Hours by Two Degree Grid',labelpad=-40, y=0.45, color = "#ffffff") ax.text(1.7, -0.5, 'Data Source: Orbcomm\nMap by Global Fishing Watch', verticalalignment='bottom', horizontalalignment='right', transform=ax.transAxes, color='#ffffff', fontsize=6) plt.savefig("fishing_hours_trawlers_2015_v1.png",bbox_inches='tight',dpi=300,transparent=True,pad_inches=.1, facecolor="#000000") plt.show() # + q = ''' SELECT INTEGER(lat*2) lat_bin, INTEGER(lon*2) lon_bin, SUM(fishing_hours) fishing_hours FROM ( SELECT lat, lon, (last_hours + next_hours)/2 fishing_hours FROM [scratch_global_fishing_raster.2015_with_score_and_hours] WHERE measure_new_score > .5 AND lat <90 AND lon <180 AND mmsi IN ( SELECT INTEGER(mmsi) FROM [scratch_global_fishing_raster.classification_results_20160506] WHERE max_label_label = "Trawler" and max_label_score > .8) AND seg_id IN ( SELECT seg_id, FROM [scratch_david_seg_analysis.2015_segments] WHERE NOT( point_count<=20 OR (point_count<100 AND point_count = terrestrial_positions) OR (min_lat >= 0 AND max_lat <= 0.109225) OR (min_lon >= 0 AND max_lon <= 0.109225) ))) GROUP BY lat_bin, lon_bin ''' fishing_grid = Query(q) # + cellsize = .5 one_over_cellsize = 2 max_lat = 90 min_lat = -90 min_lon = -180 max_lon = 180 num_lats = (max_lat-min_lat)*one_over_cellsize num_lons = (max_lon-min_lon)*one_over_cellsize grid = np.zeros(shape=(num_lats,num_lons)) for row in fishing_grid: lat = int(row[0]) lon = int(row[1]) lat_index = lat-min_lat*one_over_cellsize lon_index = lon-min_lon*one_over_cellsize grid[lat_index][lon_index] = float(row[2]) # + plt.rcParams["figure.figsize"] = [12,7] cutoff = 0 # 4 degress away from the pole firstlat = 90-cutoff lastlat = -90+cutoff firstlon = -180 lastlon = 180 scale = cellsize one_over_cellsize = 2 fishing_days_truncated = grid[one_over_cellsize*cutoff:(180*one_over_cellsize)-cutoff*one_over_cellsize][:] numlats = int((firstlat-lastlat)*one_over_cellsize+.5) numlons = int((lastlon-firstlon)*one_over_cellsize+.5) lat_boxes = np.linspace(lastlat,firstlat,num=numlats,endpoint=False) lon_boxes = np.linspace(firstlon,lastlon,num=numlons,endpoint=False) fig = plt.figure() m = Basemap(llcrnrlat=lastlat, urcrnrlat=firstlat, llcrnrlon=lastlon, urcrnrlon=firstlon, lat_ts=0, projection='robin',resolution="h", lon_0=0) m.drawmapboundary(fill_color='#111111') # m.drawcoastlines(linewidth=.2) m.fillcontinents('#111111',lake_color='#111111')#, lake_color, ax, zorder, alpha) x = np.linspace(-180, 180, 360*one_over_cellsize) y = np.linspace(lastlat, firstlat, (firstlat-lastlat)*one_over_cellsize) x, y = np.meshgrid(x, y) 
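# (Illustrative aside, added; not from the original notebook.) Every map in this notebook
# first drops the BigQuery rows into a half-degree (cellsize = .5) grid; the index
# arithmetic used in those grid-filling loops, rewritten as a stand-alone helper that
# takes raw lat/lon in degrees with the same constants as above, would be:
def _half_degree_cell(lat, lon, min_lat=-90, min_lon=-180, one_over_cellsize=2):
    """Illustrative only: (row, col) of the half-degree grid cell containing (lat, lon)."""
    lat_index = int(lat * one_over_cellsize) - min_lat * one_over_cellsize
    lon_index = int(lon * one_over_cellsize) - min_lon * one_over_cellsize
    return lat_index, lon_index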
converted_x, converted_y = m(x, y) from matplotlib import colors,colorbar maximum = grid.max() minimum = 1 norm = colors.LogNorm(vmin=minimum, vmax=maximum) # norm = colors.Normalize(vmin=0, vmax=1000) m.pcolormesh(converted_x, converted_y, fishing_days_truncated, norm=norm, vmin=minimum, vmax=maximum, cmap = plt.get_cmap('viridis')) t = "Fishing Hours for Trawlers, 2015\nTralwers identified by the Neural Net with Confidence > 0.8" plt.title(t, color = "#ffffff", fontsize=18) ax = fig.add_axes([0.2, 0.1, 0.4, 0.02]) #x coordinate , norm = colors.LogNorm(vmin=minimum, vmax=maximum) # norm = colors.Normalize(vmin=0, vmax=1000) lvls = np.logspace(np.log10(minimum),np.log10(maximum),num=8) cb = colorbar.ColorbarBase(ax,norm = norm, orientation='horizontal', ticks=lvls, cmap = plt.get_cmap('viridis')) the_labels = [] for l in lvls: if l>=1: l = int(l) the_labels.append(l) #cb.ax.set_xticklabels(["0" ,round(m3**.5,1), m3, round(m3**1.5,1), m3*m3,round(m3**2.5,1), str(round(m3**3,1))+"+"], fontsize=10) cb.ax.set_xticklabels(the_labels, fontsize=10, color = "#ffffff") cb.set_label('Fishing Hours by Two Degree Grid',labelpad=-40, y=0.45, color = "#ffffff") ax.text(1.7, -0.5, 'Data Source: Orbcomm\nMap by Global Fishing Watch', verticalalignment='bottom', horizontalalignment='right', transform=ax.transAxes, color='#ffffff', fontsize=6) plt.savefig("fishing_hours_trawlers_2015_highconfidence_v1.png",bbox_inches='tight',dpi=300,transparent=True,pad_inches=.1, facecolor="#000000") plt.show() # + q = ''' SELECT INTEGER(lat*2) lat_bin, INTEGER(lon*2) lon_bin, SUM(fishing_hours) fishing_hours FROM ( SELECT lat, lon, (last_hours + next_hours)/2 fishing_hours FROM [scratch_global_fishing_raster.2015_with_score_and_hours] WHERE measure_new_score > .5 and lat <90 and lon <180 AND mmsi IN ( SELECT integer(mmsi) FROM [scratch_global_fishing_raster.classification_results_20160506] WHERE max_label_label = "Trawler" and mmsi in(select string(mmsi) from [scratch_bjorn.2015_combined_fishing])) AND seg_id IN ( SELECT seg_id, FROM [scratch_david_seg_analysis.2015_segments] WHERE NOT( point_count<=20 OR (point_count<100 AND point_count = terrestrial_positions) OR (min_lat >= 0 AND max_lat <= 0.109225) OR (min_lon >= 0 AND max_lon <= 0.109225) ))) GROUP BY lat_bin, lon_bin ''' fishing_grid = Query(q) # + cellsize = .5 one_over_cellsize = 2 max_lat = 90 min_lat = -90 min_lon = -180 max_lon = 180 num_lats = (max_lat-min_lat)*one_over_cellsize num_lons = (max_lon-min_lon)*one_over_cellsize grid = np.zeros(shape=(num_lats,num_lons)) for row in fishing_grid: lat = int(row[0]) lon = int(row[1]) lat_index = lat-min_lat*one_over_cellsize lon_index = lon-min_lon*one_over_cellsize grid[lat_index][lon_index] = float(row[2]) # + plt.rcParams["figure.figsize"] = [12,7] cutoff = 0 # 4 degress away from the pole firstlat = 90-cutoff lastlat = -90+cutoff firstlon = -180 lastlon = 180 scale = cellsize one_over_cellsize = 2 fishing_days_truncated = grid[one_over_cellsize*cutoff:(180*one_over_cellsize)-cutoff*one_over_cellsize][:] numlats = int((firstlat-lastlat)*one_over_cellsize+.5) numlons = int((lastlon-firstlon)*one_over_cellsize+.5) lat_boxes = np.linspace(lastlat,firstlat,num=numlats,endpoint=False) lon_boxes = np.linspace(firstlon,lastlon,num=numlons,endpoint=False) fig = plt.figure() m = Basemap(llcrnrlat=lastlat, urcrnrlat=firstlat, llcrnrlon=lastlon, urcrnrlon=firstlon, lat_ts=0, projection='robin',resolution="h", lon_0=0) m.drawmapboundary(fill_color='#111111') # m.drawcoastlines(linewidth=.2) 
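# Added note: the remainder of this cell repeats the same Robinson-projection map recipe
# used above, now restricted to classified trawlers that also appear in the
# 2015_combined_fishing list; only the query filter, the plot title and the output
# filename change.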
m.fillcontinents('#111111',lake_color='#111111')#, lake_color, ax, zorder, alpha) x = np.linspace(-180, 180, 360*one_over_cellsize) y = np.linspace(lastlat, firstlat, (firstlat-lastlat)*one_over_cellsize) x, y = np.meshgrid(x, y) converted_x, converted_y = m(x, y) from matplotlib import colors,colorbar maximum = grid.max() minimum = 1 norm = colors.LogNorm(vmin=minimum, vmax=maximum) # norm = colors.Normalize(vmin=0, vmax=1000) m.pcolormesh(converted_x, converted_y, fishing_days_truncated, norm=norm, vmin=minimum, vmax=maximum, cmap = plt.get_cmap('viridis')) t = "Fishing Hours for Trawlers, 2015\nTralwers identified by the Neural Net also in 2015_combined_fishing list" plt.title(t, color = "#ffffff", fontsize=18) ax = fig.add_axes([0.2, 0.1, 0.4, 0.02]) #x coordinate , norm = colors.LogNorm(vmin=minimum, vmax=maximum) # norm = colors.Normalize(vmin=0, vmax=1000) lvls = np.logspace(np.log10(minimum),np.log10(maximum),num=8) cb = colorbar.ColorbarBase(ax,norm = norm, orientation='horizontal', ticks=lvls, cmap = plt.get_cmap('viridis')) the_labels = [] for l in lvls: if l>=1: l = int(l) the_labels.append(l) #cb.ax.set_xticklabels(["0" ,round(m3**.5,1), m3, round(m3**1.5,1), m3*m3,round(m3**2.5,1), str(round(m3**3,1))+"+"], fontsize=10) cb.ax.set_xticklabels(the_labels, fontsize=10, color = "#ffffff") cb.set_label('Fishing Hours by Two Degree Grid',labelpad=-40, y=0.45, color = "#ffffff") ax.text(1.7, -0.5, 'Data Source: Orbcomm\nMap by Global Fishing Watch', verticalalignment='bottom', horizontalalignment='right', transform=ax.transAxes, color='#ffffff', fontsize=6) plt.savefig("fishing_hours_trawlers_2015_combinedfishing_v1.png",bbox_inches='tight',dpi=300,transparent=True,pad_inches=.1, facecolor="#000000") plt.show() # - # + q = ''' SELECT INTEGER(lat*2) lat_bin, INTEGER(lon*2) lon_bin, SUM(fishing_hours) fishing_hours FROM ( SELECT lat, lon, (last_hours + next_hours)/2 fishing_hours FROM [scratch_global_fishing_raster.2015_with_score_and_hours] WHERE measure_new_score > .5 and lat <90 and lon <180 AND mmsi IN ( SELECT integer(mmsi) FROM [scratch_global_fishing_raster.classification_results_20160506] WHERE max_label_label = "Trawler" and max_label_score > .8 and mmsi in(select string(mmsi) from [scratch_bjorn.2015_combined_fishing])) AND seg_id IN ( SELECT seg_id, FROM [scratch_david_seg_analysis.2015_segments] WHERE NOT( point_count<=20 OR (point_count<100 AND point_count = terrestrial_positions) OR (min_lat >= 0 AND max_lat <= 0.109225) OR (min_lon >= 0 AND max_lon <= 0.109225) ))) GROUP BY lat_bin, lon_bin ''' fishing_grid = Query(q) # + cellsize = .5 one_over_cellsize = 2 max_lat = 90 min_lat = -90 min_lon = -180 max_lon = 180 num_lats = (max_lat-min_lat)*one_over_cellsize num_lons = (max_lon-min_lon)*one_over_cellsize grid = np.zeros(shape=(num_lats,num_lons)) for row in fishing_grid: lat = int(row[0]) lon = int(row[1]) lat_index = lat-min_lat*one_over_cellsize lon_index = lon-min_lon*one_over_cellsize grid[lat_index][lon_index] = float(row[2]) # + plt.rcParams["figure.figsize"] = [12,7] cutoff = 0 # 4 degress away from the pole firstlat = 90-cutoff lastlat = -90+cutoff firstlon = -180 lastlon = 180 scale = cellsize one_over_cellsize = 2 fishing_days_truncated = grid[one_over_cellsize*cutoff:(180*one_over_cellsize)-cutoff*one_over_cellsize][:] numlats = int((firstlat-lastlat)*one_over_cellsize+.5) numlons = int((lastlon-firstlon)*one_over_cellsize+.5) lat_boxes = np.linspace(lastlat,firstlat,num=numlats,endpoint=False) lon_boxes = 
np.linspace(firstlon,lastlon,num=numlons,endpoint=False) fig = plt.figure() m = Basemap(llcrnrlat=lastlat, urcrnrlat=firstlat, llcrnrlon=lastlon, urcrnrlon=firstlon, lat_ts=0, projection='robin',resolution="h", lon_0=0) m.drawmapboundary(fill_color='#111111') # m.drawcoastlines(linewidth=.2) m.fillcontinents('#111111',lake_color='#111111')#, lake_color, ax, zorder, alpha) x = np.linspace(-180, 180, 360*one_over_cellsize) y = np.linspace(lastlat, firstlat, (firstlat-lastlat)*one_over_cellsize) x, y = np.meshgrid(x, y) converted_x, converted_y = m(x, y) from matplotlib import colors,colorbar maximum = grid.max() minimum = 1 norm = colors.LogNorm(vmin=minimum, vmax=maximum) # norm = colors.Normalize(vmin=0, vmax=1000) m.pcolormesh(converted_x, converted_y, fishing_days_truncated, norm=norm, vmin=minimum, vmax=maximum, cmap = plt.get_cmap('viridis')) t = "Fishing Hours for Trawlers, 2015\nTralwers id'd by the Neural Net in 2015_combined_fishing w/ High Confidence" plt.title(t, color = "#ffffff", fontsize=18) ax = fig.add_axes([0.2, 0.1, 0.4, 0.02]) #x coordinate , norm = colors.LogNorm(vmin=minimum, vmax=maximum) # norm = colors.Normalize(vmin=0, vmax=1000) lvls = np.logspace(np.log10(minimum),np.log10(maximum),num=8) cb = colorbar.ColorbarBase(ax,norm = norm, orientation='horizontal', ticks=lvls, cmap = plt.get_cmap('viridis')) the_labels = [] for l in lvls: if l>=1: l = int(l) the_labels.append(l) #cb.ax.set_xticklabels(["0" ,round(m3**.5,1), m3, round(m3**1.5,1), m3*m3,round(m3**2.5,1), str(round(m3**3,1))+"+"], fontsize=10) cb.ax.set_xticklabels(the_labels, fontsize=10, color = "#ffffff") cb.set_label('Fishing Hours by Two Degree Grid',labelpad=-40, y=0.45, color = "#ffffff") ax.text(1.7, -0.5, 'Data Source: Orbcomm\nMap by Global Fishing Watch', verticalalignment='bottom', horizontalalignment='right', transform=ax.transAxes, color='#ffffff', fontsize=6) plt.savefig("fishing_hours_trawlers_2015_combinedfishing_highcon_v1.png",bbox_inches='tight',dpi=300,transparent=True,pad_inches=.1, facecolor="#000000") plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np x = np.array([0,1,2,3,4]) y = np.array([2,3,4,5,6]) (10 % 2 == 0) and (10 % 3 == 0) (y %2 == 0) and (x %3 == 0) (y[0] %2 == 0) and (x[3] %3 == 0) (y %2 == 0) & (x %3 == 0) (10 % 2 == 0) or (10 % 3 == 0) (y %2 == 0) or (x %3 == 0) (y[1] %2 == 0) or (x[2] %3 == 0) (y %2 == 0) | (x %3 == 0) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernel_info: # name: python3 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Awesome Class - Assignment X # # ## Formalities # **Submit in a group of 2-3 people until xx.xx.xxxx 23.59CET. The deadline is strict!** # # ## Evaluation and Grading # **General advice for programming exercises at _Awesome Institution_** # # Evaluation of your submission is done semi-automatically. # Think of it as this notebook being executed once. # Afterwards, some test functions are appended to this file and executed respectively. # # Therefore: # - Submit valid _Python3_ code only! # - Use **external** libraries if and only if specified by task. # Using the standard library is fine unless you're told otherwise. 
# - Ensure your definitions (functions, classes, methods, variables) follow the specification. # The signature of a function/method/class usually can be inferred from task description, code skeletons and test cases. # - Ensure your code does not rely on the current notebook or system state! # - Use `Kernel --> Restart & Run All` to see if you are using any definitions, variables etc. that are not in scope anymore. # - Double check if your code relies on presence of files or directories other than those mentioned in given tasks. # - Tests run under Linux, hence don't use Windows style paths (`some\path`, `C:\another\path`). # `pathlib` provides neat abstractions for that, use it! # Also, use paths only that are relative to and within your working directory (OK: `some/path`, `./some/path`; NOT OK: `/home/alice/python`, `../../python`). # - Keep your code [idempotent](https://en.wikipedia.org/wiki/Idempotence)! # Running your code or parts of it multiple times must not yield different results. # Minimize usage of global variables. # - Ensure your code/notebook terminates in reasonable time. # We enforce timeouts during notebook/test execution. # Exceeding those results in an error and potentially no points for that sub-task. # - Textual answers must always be backed by code and may not refer to results that are not part of your submission. # # **There's a story behind each of these points! Don't expect us to fix your code!** # + [markdown] pycharm={"name": "#%% md\n"} # Credentials of all team members (you may add or remove items from the list) # + pycharm={"name": "#%%\n"} team_members = [ { 'first_name': 'Alice', 'last_name': 'Foo', 'student_id': 12345 }, { 'first_name': 'Bob', 'last_name': 'Bar', 'student_id': 54321 } ] # + pycharm={"name": "#%%\n"} import sys import time import matplotlib.pyplot as plt import pandas as pd # + pycharm={"name": "#%%\n"} SOME_CONSTANT = 42 # + [markdown] pycharm={"name": "#%% md\n"} # Let's do some plotting... # + pycharm={"name": "#%%\n"} df = pd.DataFrame([[1, 1], [2, 4], [3, 9], [4, 16], [5, 25]]) df.plot() # + [markdown] pycharm={"name": "#%% md\n"} # ... or render a table # + pycharm={"name": "#%%\n"} from IPython.display import display display(df) # + [markdown] pycharm={"name": "#%% md\n"} # Now, read some data from the file system # + pycharm={"name": "#%%\n"} with open('foo.txt', mode='rt') as f: data = f.read() print(data[:len(data) // 2]) print(data[len(data) // 2:], file=sys.stderr) # + [markdown] pycharm={"name": "#%% md\n"} # ... or modify an existing file # + pycharm={"name": "#%%\n"} with open('bar.txt', mode='a') as f: f.write('>> modification by notebook <<') # + [markdown] pycharm={"name": "#%% md\n"} # ... or create a new file # + pycharm={"name": "#%%\n"} with open('fnord.txt', mode='wt') as f: f.write('let\'s leave some artifacts in the system') # + [markdown] pycharm={"name": "#%% md\n"} # Explicit calls to `plt.show` will be re-directed to `auto_save_figure`, storing the plot in a file # + pycharm={"name": "#%%\n"} for e in [2, 3]: plt.plot(*zip(*((v, v ** e) for v in range(-10, 10)))) plt.show() # + [markdown] pycharm={"name": "#%% md\n"} # However, we can also save a figure explicitly! 
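# (Added note: `plt.savefig` writes the current figure to disk directly, without relying
# on the `plt.show` redirection described above, so the next cell produces `plot.png`
# even though it never calls `plt.show`.)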
# + pycharm={"name": "#%%\n"} plt.plot(*zip(*((v, v ** 5 + (v - 5) ** 3 - (3 * v) ** 2) for v in range(-20, 20)))) plt.savefig('plot.png') # + [markdown] pycharm={"name": "#%% md\n"} # Here are some more functions we can write tests for # + pycharm={"name": "#%%\n"} def square(x: float) -> float: return x ** 2 def cube(x: float) -> float: return x ** 3 abs_cube = cube def fail(): raise ValueError('no chance, this function crashes so badly') def sleep(x): time.sleep(x) # + [markdown] pycharm={"name": "#%% md\n"} # Tests may define timeouts for cell and test execution. # Hence, the following code may rise a `TimeoutError`! # + pycharm={"name": "#%%\n"} sleep(.1) # + [markdown] pycharm={"name": "#%% md\n"} # Importing _autograde_ may fail due to import policy, see `NotebookTest.set_import_filter` # + pycharm={"name": "#%%\n"} import autograde print(autograde.__version__) # - # Importing _autograde_ is an issue as it could be used to monkey-patch the library, e.g. # # ```Python # import autograde # # def mock(self, *_, **__): # return self.score, '🖕' # # autograde.notebook_test.UnitTest.__call__ = mock # ``` # + [markdown] pycharm={"name": "#%% md\n"} # Import filters also work at test time and for various ways of importing # + pycharm={"name": "#%%\n"} def illegal_import(): exec('import autograde as ag') # + [markdown] pycharm={"name": "#%% md\n"} # Unlike normal Python scripts, notebook execution continues even if a cell crashes! # + pycharm={"name": "#%%\n"} assert False, 'this cell will crash under any circumstances ...' # + pycharm={"name": "#%%\n"} print('... but this cell is not affected :^)') # + [markdown] pycharm={"name": "#%% md\n"} # Tests may access contents of markdown comments. # This is useful when students are asked to elaborate on their solutions. # - # **Q1:** _How many roads must a man walk down before you call him a man?_ # **A1:** The answer, my friend, is blowin' in the wind. The answer is blowin' in the wind. # **Q2:** _What's the answer to life, universe and everything?_ # + [markdown] pycharm={"name": "#%% md\n"} # **A2:** 42 (forty-two) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # Apriori Algorithm: # # The apriori algorithm is an algorithm for mining frequent itemset for boolean association rules.' # # Apriori uses a "botton up" approach, where frequent subsets are extended one item at a time which is known as candidate generation, and groups of candidates are tested against the data. # # Apriori algorithm is a data mining techniques which helps us mine basket data or data about transaction for association rule. # Basket has 2 meaning: # 1.Single transaction: All items brought in one single transaction. # 2.Items brought by user for short period of time like for 1 or 2 month # # When a set of basket is fed into Apriori algorithm it will generate set of rules which tells what kind of products are purchased in same basket. Each of these rules has to satisfy minimum support and confidence. # # Support and confidence is matrixes that are defined in a way that says how valid a rule is and how strong the association is. Support tells us how many out of all the transaction contain the items in the rule. 
Confidence tells us how many of the transactions out of all the transaction which contains those items exhibit these association # # #Data Setup # https://github.com/marshallshen/nyc_restaurants_inspection # + import pandas as pd df = pd.read_csv("tesco_dataset.csv") #df1 = pd.read_csv("Restaurant-Dataset.csv") df # + # # %load apriori.py import sys from itertools import chain, combinations from collections import defaultdict from optparse import OptionParser import numpy as np import matplotlib.pyplot as plt from numpy import * def subsets(arr): """ Returns non empty subsets of arr""" return chain(*[combinations(arr, i + 1) for i, a in enumerate(arr)]) def returnItemsWithMinSupport(itemSet, transactionList, minSupport, freqSet): """calculates the support for items in the itemSet and returns a subset of the itemSet each of whose elements satisfies the minimum support""" _itemSet = set() localSet = defaultdict(int) for item in itemSet: for transaction in transactionList: if item.issubset(transaction): freqSet[item] += 1 localSet[item] += 1 for item, count in localSet.items(): support = float(count)/len(transactionList) if support >= minSupport: _itemSet.add(item) return _itemSet def joinSet(itemSet, length): """Join a set with itself and returns the n-element itemsets""" return set([i.union(j) for i in itemSet for j in itemSet if len(i.union(j)) == length]) def getItemSetTransactionList(data_iterator): transactionList = list() itemSet = set() for record in data_iterator: transaction = frozenset(record) transactionList.append(transaction) for item in transaction: itemSet.add(frozenset([item])) # Generate 1-itemSets return itemSet, transactionList def runApriori(data_iter, minSupport, minConfidence): """ run the apriori algorithm. data_iter is a record iterator Return both: - items (tuple, support) - rules ((pretuple, posttuple), confidence) """ itemSet, transactionList = getItemSetTransactionList(data_iter) freqSet = defaultdict(int) largeSet = dict() # Global dictionary which stores (key=n-itemSets,value=support) # which satisfy minSupport assocRules = dict() # Dictionary which stores Association Rules oneCSet = returnItemsWithMinSupport(itemSet, transactionList, minSupport, freqSet) currentLSet = oneCSet k = 2 while(currentLSet != set([])): largeSet[k-1] = currentLSet currentLSet = joinSet(currentLSet, k) currentCSet = returnItemsWithMinSupport(currentLSet, transactionList, minSupport, freqSet) currentLSet = currentCSet k = k + 1 def getSupport(item): """local function which Returns the support of an item""" return float(freqSet[item])/len(transactionList) toRetItems = [] for key, value in largeSet.items(): toRetItems.extend([(tuple(item), getSupport(item)) for item in value]) toRetRules = [] for key, value in largeSet.items()[1:]: for item in value: _subsets = map(frozenset, [x for x in subsets(item)]) for element in _subsets: remain = item.difference(element) if len(remain) > 0: confidence = getSupport(item)/getSupport(element) if confidence >= minConfidence: toRetRules.append(((tuple(element), tuple(remain)), confidence)) return toRetItems, toRetRules def printResults(items, rules): """prints the generated itemsets sorted by support and the confidence rules sorted by confidence""" for item, support in sorted(items, key=lambda (item, support): support): print "item: %s , %.3f" % (str(item), support) print "\n------------------------ RULES:" for rule, confidence in sorted(rules, key=lambda (rule, confidence): confidence): pre, post = rule print "Rule: %s ==> %s , %.3f" % (str(pre), 
str(post), confidence) def dataFromFile(fname): """Function which reads from the file and yields a generator""" file_iter = open(fname, 'rU') for line in file_iter: line = line.strip().rstrip(',') # Remove trailing comma record = frozenset(line.split(',')) yield record if __name__ == "__main__": optparser = OptionParser() optparser.add_option('-f', '--inputFile', dest='input', help='filename containing csv', default=None) optparser.add_option('-s', '--minSupport', dest='minS', help='minimum support value', default=0.15, type='float') optparser.add_option('-c', '--minConfidence', dest='minC', help='minimum confidence value', default=0.6, type='float') (options, args) = optparser.parse_args() inFile = None if options.input is None: inFile = sys.stdin elif options.input is not None: inFile = dataFromFile(options.input) else: print 'No dataset filename specified, system with exit\n' sys.exit('System will exit') minSupport = options.minS minConfidence = options.minC items, rules = runApriori(inFile, minSupport, minConfidence) printResults(items, rules) #Reference #https://github.com/asaini/Apriori # + # #%run apriori.py -f Restaurant-Dataset.csv -s 0.7 0.8 #or # %run apriori.py -f tesco_dataset.csv -s 0.5 0.6 # + # Display in graph and plots import numpy as np import matplotlib.pyplot as plt N = rules names,values = zip(*N) ind = np.arange(len(N)) # the x locations for the groups width = 0.20 # the width of the bars fig, ax = plt.subplots() rects1 = ax.bar(ind, values, width, color='r') # add some text for labels, title and axes ticks ax.set_ylabel('Support') ax.set_xticks(ind+width/20) ax.set_xticklabels(names) plt.show() #Reference #https://matplotlib.org/examples/api/barchart_demo.html # - # The above bar graph shows the rules generated by running the algorithm with the support. # Bibiography # # https://www.analyticsvidhya.com/blog/2014/08/effective-cross-selling-market-basket-analysis/ # # https://matplotlib.org/examples/api/barchart_demo.html # # https://github.com/asaini/Apriori # # https://github.com/marshallshen/nyc_restaurants_inspection # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="y1SHYM0ivKJN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="42eb2292-0602-487d-d407-f1311f8fb204" nltk.download('all') # + id="s8lXyxJexaf1" colab_type="code" colab={} import nltk # + id="H1w9PeJKvTbQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="a453bd0c-48f9-4145-b35a-443529cf4a51" text = ''' A trained language model generates text. We can optionally pass it some text as input, which influences its output. The output is generated from what the model “learned” during its training period where it scanned vast amounts of text. Training is the process of exposing the model to lots of text. That process has been completed. All the experiments you see now are from that one trained model. It was estimated to cost 355 GPU years and cost $4.6m. The dataset of 300 billion tokens of text is used to generate training examples for the model. For example, these are three training examples generated from the one sentence at the top. You can see how you can slide a window across all the text and make lots of examples. 
'''

sentences = nltk.sent_tokenize(text)
for sentence in sentences:
    words = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(words)
    print(tagged)

# + id="8Y6EgPUtwQAZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 306} outputId="f807d80b-1d2b-424b-e071-5a96325d49f5"
from nltk.tokenize import TweetTokenizer

text = '''Wind and solar are the cheapest source of new power for two-thirds of the world's population.'''
twtk = TweetTokenizer()
twtk.tokenize(text)

# + id="eVGNrK2gyu9k" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="b6fe87e6-8247-434f-c452-e5010e9020d3"
from urllib import request

url = "http://www.gutenberg.org/files/2557/2557-0.txt"
resp = request.urlopen(url)
raw = resp.read().decode('utf8')
len(raw)
print(raw[:75])

from nltk.tokenize import word_tokenize
tokens = word_tokenize(raw)
print(tokens)

# + id="_mJJ2iXq2FYp" colab_type="code" colab={}
# A Python script to download all the tweets of a hashtag into a csv
import tweepy
import csv
import pandas as pd

#### input your credentials here
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth, wait_on_rate_limit=True)

##### United Airlines
# Open/Create a file to append data to
with open('ua.csv', 'a') as csvFile:
    # Use csv Writer
    csvWriter = csv.writer(csvFile)
    for tweet in tweepy.Cursor(api.search, q="#unitedAIRLINES", count=100,
                               lang="en", since="2017-04-03").items():
        print(tweet.created_at, tweet.text)
        csvWriter.writerow([tweet.created_at, tweet.text.encode('utf-8')])

# + id="Gr6ebdcdztYL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="2a7beb14-5f82-49c5-d684-12afb2d2a019"
import nltk

f = open('tweets1.txt', 'r')
text = f.read()
text1 = text.split()
text2 = nltk.Text(text1)
text2.concordance('good')

# + id="vvjIS7k_3Kfq" colab_type="code" colab={}

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Fake Reviews
#
# Data Source
#
# https://www.kaggle.com/yelp-dataset/yelp-dataset
#
# The overall goal of this notebook is to create examples that we can use to train a neural network that will be able to create fake reviews.

# import findspark  # only needed on some local (OS X) setups
# findspark.init()
import pyspark

# Create a spark session with Apache Arrow enabled.
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName('big-data') \
    .master("local[8]") \
    .config("spark.driver.memory", "8g") \
    .config('spark.sql.execution.arrow.enabled', 'true').getOrCreate()

# ## Reviews
#
# Loading the yelp reviews, printing their schema and dropping NA rows.
reviews_df = spark.read.json('yelp-dataset/yelp_academic_dataset_review.json') reviews_df.printSchema() reviews_df.count() reviews_df = reviews_df.na.drop() reviews_df.count() # ## Businesses business_df = spark.read.json('yelp-dataset/yelp_academic_dataset_business.json') business_df.count() business_df.printSchema() business_df = business_df.na.drop(subset=['city', 'categories']) business_df.count() # We ultimately want to create examples in the following form: # # 'RATING CITY CATEGORIES': 'REVIEW TEXT' # # If we wanted more granular control over our fake review generation, we could also use something like 'RATING BUSINESS_NAME CITY STATE CATEGORIES' # # Let's keep it simple and create a new examples data frame: reviews = reviews_df.alias('reviews') business = business_df.alias('business') # + from pyspark.sql.functions import concat_ws examples = reviews.join(business, reviews.business_id == business.business_id) \ .select(concat_ws(' ', reviews.stars, business.city, business.categories).alias('context'), \ reviews.text.alias('review')) # - examples.show() examples.count() # ## Tokenization # # We need to tokenize our examples so that we can feed them into our neural network. We also need to clean up our examples before tokenizing them. We remove new-lines and non-ascii characters from pyspark.sql.functions import regexp_replace examples = examples.withColumn('review', regexp_replace(examples.review, '[\\r\\n]', ' ')) examples = examples.withColumn('review', regexp_replace(examples.review, '[^\x00-\x7F]+', ' ')) # We now want to save the context and review columns as text files. examples.select(examples.context).write.format('text').save('context.txt') examples.select(examples.review).write.format('text').save('reviews.txt') # Making sure that both exports have the same line counts # !cat context.txt/*.txt | wc -l # !cat reviews.txt/*.txt | wc -l # Unfortunately, the default SparkML Tokenizer is rather simple, there is a RegexTokenizer, but crafting hand-written tokenization rules with it is error-prone and cumbersome. We could try to use an industrial-strength Tokenizer from spacy. Implementing an SparkML-Transformer would require a Java counter-part on top of the python implementation, so let's fall back to UDF's instead. # # Another alternative is using Apache Arrow. # # As it turns out, both ways really slow. The following cells are therefore only used for illustrative purposes. I've tested the UDF approach multiple times and had a lot of time outs or OutOfMemory exceptions. # # I therefore had to rely on Stanford's CoreNLP package to tokenize the examples. Please consult the tokenization notebook. 
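# For completeness, the RegexTokenizer mentioned above would look roughly like the sketch below. This is a minimal sketch under the assumption that splitting on whitespace is good enough, which is exactly the limitation discussed above; it is not the tokenizer actually used for the final dataset.

# +
from pyspark.ml.feature import RegexTokenizer

# Sketch only: split the review text on runs of whitespace.
# Note that RegexTokenizer also lowercases tokens by default.
regex_tokenizer = RegexTokenizer(inputCol='review', outputCol='review_tokens',
                                 pattern='\\s+')
# tokenized_simple = regex_tokenizer.transform(examples)
# tokenized_simple.select('review_tokens').show(3)
# -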
# + import spacy from pyspark.sql.functions import udf, pandas_udf from pyspark.sql.types import ArrayType, StringType nlp = spacy.load("en_core_web_sm") @udf(ArrayType(StringType())) def tokenize(s): return [token.text for token in nlp(s)] # + import spacy from pyspark.sql.functions import udf, pandas_udf, PandasUDFType from pyspark.sql.types import ArrayType, StringType nlp = spacy.load("en_core_web_sm") def spacy_tokenize(s): return [token.text for token in nlp(s)] @pandas_udf("string", PandasUDFType.SCALAR) def tokenize(x): return x.apply(spacy_tokenize) # - tokenized = examples.select(concat_ws(' ', tokenize(examples.context)).alias('context'), concat_ws(' ', tokenize(examples.review)).alias('review')) train, val, test = tokenized.randomSplit([0.98, 0.016, 0.004], seed=42) train.count() val.count() test.count() # + def write_df(df, file_name): df.select(df.context).write.format('text').save(file_name + '_src.txt') df.select(df.review).write.format('text').save(file_name + '_tgt.txt') # - train = train.cache() val = val.cache() test = test.cache() write_df(test, 'test') write_df(val, 'val') write_df(train, 'train') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Watch Me Code 1: Yes Or No # # Our approach # # 1. take input "yes" or "no" # 1. convert to lower case # 1. slice off first character and return it! # dir(str) help(str.lower) x = "Nick" x.lower() # + def YesOrNo(text): first_char = text[0] first_char = first_char.lower() if "y" == first_char: return True elif "n" == first_char: return False else: return None x = input("Please enter y/n, or yes/no") result = YesOrNo(x) if result is None: print("Please enter a y/n") elif result: print("You entered Yes") elif not result: print("You entered No") # - x="EDISON" x.lower() # + def YesOrNo(text) first_char=text(0) first_char=first_char.lower() if "y" ==first_char: return True elif "n" ==first_char: return False else: return None x=input("please enter y/n or yes/no") result=YesOrNo(x) if result is None: print("please enter a y/n") elif result print("you enter YES") elif not result: print("loser") # - x="Mike" y=x.upper().replace("I","K") print (y) # + raw="12 45 96" token=raw.split(" ") total = 0 for token in tokens: total=total+int(token) print("your total is %d" %(total)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd from datetime import date import matplotlib.pyplot as plt city_df=pd.read_csv('cities.csv') city_df def plotting(y_col,df=city_df,x_col='Lat',time='01/05/17',font=15,pad=20,font_w="bold"): plt.figure(figsize=(10,7)) plt.scatter(df[x_col],df[y_col], facecolors="#288da3", edgecolors="black" ,s=65) plt.xlabel('Latitude') plt.grid() if y_col=='Max Temp': plt.ylabel('Max Temperature(F)') plt.title(f'City Latitude vs Max Temperature ({time})',fontsize= font,pad=pad,\ fontweight=font_w) plt.savefig('assets/images/max-Temp.png') elif y_col=='Humidity': plt.ylabel('Humidity(%)') plt.title(f'City Latitude vs Humidity ({time})',fontsize= font,pad=pad,fontweight=font_w) plt.savefig('assets/images/Humidity.png') elif y_col=='Cloudiness': plt.ylabel('Cloudiness(%)') plt.title(f'City Latitude vs Cloudiness ({time})',fontsize= font,pad=pad,fontweight=font_w) 
plt.savefig('assets/images/Cloudiness.png') elif y_col=='Wind Speed': plt.ylabel('Wind Speed(mph)') plt.title(f'City Latitude vs Wind Speed ({time})',fontsize= font,pad=pad,fontweight=font_w) plt.savefig('assets/images/Lat_wind.png') plt.show() plotting(y_col='Max Temp') plotting(y_col='Humidity') plotting(y_col='Cloudiness') plotting(y_col='Wind Speed') city_df=pd.read_csv('cities.csv') city_df.set_index("City_ID", inplace=True) city_df city_df.to_html("../data.html") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: proj_env # language: python # name: proj_env # --- from fastquant import get_stock_data, backtest from gaussian_hmm import * import matplotlib.pyplot as plt # + params = { 'n_components': 2, 'algorithm': 'map', 'n_iter': 100, 'd': 5, 'name':'GHMM' } ghmm = GHMM(params=params) # - # back testing dates on AAPL 2019-01-01 to 2020-05-31 back_test_data = get_stock_data('AAPL', '2019-01-01', '2020-05-31') # training data AAPL 2017-08-01 to 2019-01-01 training_data = get_stock_data('AAPL', '2017-08-01', '2019-01-01') ghmm.train(train_data=training_data) preds,actual = ghmm.predict(test_data=back_test_data) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #
    # Author: # # # # #
# #
# [Click here to see class lecture](https://drive.google.com/file/d/1SVe2wI_K0j0T9kUNhpgswnk_KJcjLCfm/view)
#
# ![image.png](figures/fig163.PNG)
#
# **Today's agenda**
#
# ![image.png](figures/fig164.PNG)
#
# `On the sender side the application layer hands data to its transport layer to send it; on the receiving side the transport layer delivers the data to the appropriate process.`
#
# - Logical communication is communication between processes on two different hosts, and the transport service is what provides it. In the figure we can see several processes of the Brave browser; these processes communicate with processes on the server side, and that exchange is called logical communication. It takes place by means of the transport service, so transport-layer communication is process-to-process communication, addressed by port numbers.
#
# - Transport protocols run in end systems.
#
#
# ![image.png](figures/fig166.PNG)
#
# ![image.png](figures/fig165.PNG)
#
# - The network layer provides end-to-end communication, i.e. the logical communication between hosts.
#
# ![image.png](figures/fig167.PNG)
#
# - In the figure we can see three benefits that TCP provides; UDP does not provide these three benefits.
#
# ![image.png](figures/fig168.PNG)

# Sir took CT1 on this day.
#
#
# That's all for this lecture!

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # IPDxIRR_2F (Ionospheric plasma densities)
#
# > Abstract: Access to the derived plasma characteristics at 1Hz (level 2 product).

# %load_ext watermark
# %watermark -i -v -p viresclient,pandas,xarray,matplotlib

# +
from viresclient import SwarmRequest
import datetime as dt
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter

request = SwarmRequest()
# -

# ## IPDxIRR_2F product information
#
# Derived plasma characteristics at 1Hz, for each Swarm spacecraft.
# # Documentation: # - https://earth.esa.int/web/guest/missions/esa-eo-missions/swarm/data-handbook/level-2-product-definitions#IPDxIRR_2F # ### Check what "IPD" data variables are available request.available_collections("IPD", details=False) request.available_measurements("IPD") # ## Fetch three hours of IPD data request.set_collection("SW_OPER_IPDAIRR_2F") request.set_products(measurements=request.available_measurements("IPD")) data = request.get_between( dt.datetime(2014,12,21, 0), dt.datetime(2014,12,21, 3) ) # ### Load and plot using pandas/matplotlib df = data.as_dataframe() df.head() df.columns # + fig, axes = plt.subplots(nrows=7, ncols=1, figsize=(20,18), sharex=True) df.plot(ax=axes[0], y=['Background_Ne', 'Foreground_Ne', 'Ne'], alpha=0.8) df.plot(ax=axes[1], y=['Grad_Ne_at_100km', 'Grad_Ne_at_50km', 'Grad_Ne_at_20km']) df.plot(ax=axes[2], y=['RODI10s', 'RODI20s']) df.plot(ax=axes[3], y=['ROD']) df.plot(ax=axes[4], y=['mROT']) df.plot(ax=axes[5], y=['delta_Ne10s', 'delta_Ne20s', 'delta_Ne40s']) df.plot(ax=axes[6], y=['mROTI20s', 'mROTI10s']) axes[0].set_ylabel("[cm$^{-3}$]") axes[1].set_ylabel("[cm$^{-3}$m$^{-1}$]") axes[2].set_ylabel("[cm$^{-3}$s$^{-1}$]") axes[3].set_ylabel("[cm$^{-3}$m$^{-1}$]") axes[4].set_ylabel("[TECU s$^{-1}$]") axes[5].set_ylabel("[cm$^{-3}$m$^{-1}$]") axes[6].set_ylabel("[TECU s$^{-1}$]") axes[6].set_xlabel("Timestamp") for ax in axes: # Reformat time axis # https://www.earthdatascience.org/courses/earth-analytics-python/use-time-series-data-in-python/customize-dates--matplotlib-plots-python/ ax.xaxis.set_major_formatter(DateFormatter("%Y-%m-%d\n%H:%M:%S")) ax.legend(loc="upper right") ax.grid() fig.subplots_adjust(hspace=0) # - # ### Load as xarray ds = data.as_xarray() ds # ### Alternative plot setup # # To plot the data from xarray, we need a different plotting setup. This does however give us more control over the plot. The units are extracted directly from the xarray object. fig, axes = plt.subplots(nrows=7, ncols=1, figsize=(20,18), sharex=True) def subplot(ax=None, y=None, **kwargs): """Plot combination of variables onto a given axis""" units = ds[y[0]].units for var in y: ax.plot(ds["Timestamp"], ds[var], label=var, **kwargs) if units != ds[var].units: raise ValueError(f"Units mismatch for {var}") ax.set_ylabel(f"[{units}]") # Reformat time axis # https://www.earthdatascience.org/courses/earth-analytics-python/use-time-series-data-in-python/customize-dates--matplotlib-plots-python/ ax.xaxis.set_major_formatter(DateFormatter("%Y-%m-%d\n%H:%M:%S")) ax.legend(loc="upper right") ax.grid() subplot(ax=axes[0], y=['Background_Ne', 'Foreground_Ne', 'Ne']) subplot(ax=axes[1], y=['Grad_Ne_at_100km', 'Grad_Ne_at_50km', 'Grad_Ne_at_20km']) subplot(ax=axes[2], y=['RODI10s', 'RODI20s']) subplot(ax=axes[3], y=['ROD']) subplot(ax=axes[4], y=['mROT']) subplot(ax=axes[5], y=['delta_Ne10s', 'delta_Ne20s', 'delta_Ne40s']) subplot(ax=axes[6], y=['mROTI20s', 'mROTI10s']) fig.subplots_adjust(hspace=0) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="Jbvf2aHA0jNs" colab_type="text" # # Synthetic Features Engineering Notebook on Google Colab (GPU) # # This notebook is modified to work on Google Colab, which provides free GPU / TPU resources for machine learning tasks. 
# # Everyone who has access to this google drive folder should have access to this notebook and should be edit it directly on Google Colab. # # + [markdown] id="1F5WY5hw2BDK" colab_type="text" # ## README and Developer Guide # # We will write and modify code DIRECTLY on **Google Colab**, which is not automatically synced with the version on our github repository. Make sure that you **DOWNLOAD** this notebook after edit and upload it to our **github repository** for a more consistent version control. # # Make sure you add the Paradigm folder to your drive before runnning the code. To do so right click the folder and select **Add to my drive** from the dropdown menu. # # Because of the nature of Google Drive, it is very hard to edit the same ipython file on the same time on Colab. If someone is editing the colab file at the same time as you, you will need to **restart** the runtime, so be mindful of that if someone is editing the notebook at the same time as you. # # We are fairly new to Google Colab and thus feel free to add anything to the README section if you come across anything that is important when coding! # # This notebook will be geared to use **GPU** to help accelerate the training process. Please make sure that when you run this notebook the backend if using **GPU**. To do so click 'Runtime' then select 'Change runtime type' and make sure that the hardware accelerator is set to GPU. If you are interested in using **TPU** to accelerate the training process even **FASTER**, please create another notebook to run the TPU-Compatible model. # + [markdown] id="G4cIzHHy39-M" colab_type="text" # ## (0) SETUP # # 1. Grant Google Colab access to Google Drive # 2. Import libraries and tools # # # # # # # + id="2CrtVDmg0kIw" colab_type="code" outputId="b57dac5f-dc0d-4e11-f010-08e53492b260" executionInfo={"status": "ok", "timestamp": 1554922565592, "user_tz": 420, "elapsed": 21368, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 122} # Getting access to the dataset and the Python files on Google Drive. # You will have to give permission and sign in to your Google account. 
from google.colab import drive drive.mount('/content/gdrive') root_folder = "/content/gdrive/My Drive/Paradigm/Paradigm (Spr 19) - Team 2/Colab-code/" # + id="U8hKmXJyZSwN" colab_type="code" outputId="d0230606-611c-450b-b993-d422c34c7944" executionInfo={"status": "ok", "timestamp": 1554922571556, "user_tz": 420, "elapsed": 27266, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 190} # install segtok # !pip install segtok # + id="4wgY1xMi43ms" colab_type="code" colab={} # general imports from __future__ import absolute_import, division, print_function import matplotlib.pyplot as plt import numpy as np import pandas as pd import collections import os import string import time from segtok import tokenizer from collections import Counter import json import sys # + id="KK9eQkWi5Poc" colab_type="code" outputId="4f615ba1-8491-4c04-b7f3-534e9bdc96e3" executionInfo={"status": "ok", "timestamp": 1554922573967, "user_tz": 420, "elapsed": 29625, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # machine learning libraries imports import keras import sklearn import tensorflow as tf from keras import backend as K from keras import layers from keras.models import Model, Sequential from keras.layers import Dense, Embedding, Input, Lambda, LSTM, Masking, RepeatVector, TimeDistributed from keras.preprocessing.text import Tokenizer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.model_selection import train_test_split # + [markdown] id="XkaR9Zy2JRa_" colab_type="text" # ## (1) EDA # # # 1. Import data # 2. Analysize data # 3. Clean data # 4. Graph relationships # # **NOTE**: The dataset we have is not very big (around 7000 cyptocurrency related articles). 
If time permitted we can scrap older news article from [bitCoinTalk Thread](https://bitcointalk.org/index.php?board=77.0) # # + [markdown] id="RdejphAAVlW4" colab_type="text" # ### (1.1) Import data from the data folder # + id="0QYxDh2l5T5m" colab_type="code" outputId="087840c5-a28c-40d6-8498-134e32fc6396" executionInfo={"status": "ok", "timestamp": 1554922575112, "user_tz": 420, "elapsed": 30671, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 51} # get a list of data we have data_folder = root_folder + 'data/' print("We have gathered the following datasets") print(os.listdir(data_folder)) # + id="_mqvydQbLTEZ" colab_type="code" colab={} # importing data from csv to dataframe news_score_df = pd.read_csv(data_folder + 'news_score.csv') raw_data_df = pd.read_csv(data_folder + 'rawData_test1009.csv') cleaned_author_df = pd.read_csv(data_folder + '1119_cleaned_author_articles.csv') onehot_df = pd.read_csv(data_folder + 'data_onehot.csv') # + [markdown] id="cmRmIt1YNwYj" colab_type="text" # ### (1.2) EDA on News Score Dataframe # + id="pGxsSKlXNMRr" colab_type="code" outputId="d6504189-7c17-41b1-9668-403575510186" executionInfo={"status": "ok", "timestamp": 1554655011967, "user_tz": 420, "elapsed": 761, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 576} print("There are " + str(len(news_score_df)) + " entries in this dataframe.") sum_of_nans = sum(len(news_score_df) - news_score_df.count()) print("There are " + str(sum_of_nans) + " Nan values in the dataframe.") # take a look at the news scores dataframe news_score_df.head() # + id="d13QewRKQrz4" colab_type="code" outputId="2806a48b-cacf-4343-e9fb-8df6690b74eb" executionInfo={"status": "ok", "timestamp": 1554655012369, "user_tz": 420, "elapsed": 602, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 459} # find out which column has NAN values news_score_df.isna().sum() # + id="7UukGv2EORHW" colab_type="code" outputId="5691153f-b998-495e-88fe-01328838d176" executionInfo={"status": "ok", "timestamp": 1554655016365, "user_tz": 420, "elapsed": 703, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 119} # get the list of news score dataframe columns print(news_score_df.columns) # + id="dfR5QNEmOlnP" colab_type="code" outputId="7c78dace-3684-4184-e3eb-555387622cf8" executionInfo={"status": "ok", "timestamp": 1554655016954, "user_tz": 420, "elapsed": 807, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 334} # describe the dataframe news_score_df.describe() # + [markdown] id="xYYA6GfFN1mb" colab_type="text" # ### (1.3) EDA on Raw Data Dataframe # + id="M3M26sfjQ_nT" colab_type="code" outputId="d21635d9-b8c3-45d8-89dd-2572df02b6dc" executionInfo={"status": 
"ok", "timestamp": 1554655021326, "user_tz": 420, "elapsed": 883, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 323} print("There are " + str(len(raw_data_df)) + " entries in this dataframe.") sum_of_nans = sum(len(raw_data_df) - raw_data_df.count()) print("There are " + str(sum_of_nans) + " Nan values in the dataframe.") # take a look at the news scores dataframe raw_data_df.head() # + [markdown] id="ydsc-4NUN6jA" colab_type="text" # ### (1.4) EDA on Cleaned Author Dataframe # + id="y6vIkwK-N8sa" colab_type="code" outputId="a83cd2a8-f371-4a32-9040-af776169ac88" executionInfo={"status": "ok", "timestamp": 1554655022608, "user_tz": 420, "elapsed": 1175, "user": {"displayName": "SHUN LIN", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 559} print("There are " + str(len(cleaned_author_df)) + " entries in this dataframe.") sum_of_nans = sum(len(cleaned_author_df) - cleaned_author_df.count()) print("There are " + str(sum_of_nans) + " Nan values in the dataframe.") # take a look at the news scores dataframe cleaned_author_df.head() # + [markdown] id="Z4nsUEh9OIUV" colab_type="text" # ### EDA on One Hot Dataframe # + id="H3bipu32OLiA" colab_type="code" outputId="dd1200a0-9e39-4a27-9298-9a5c6f5d1b34" executionInfo={"status": "ok", "timestamp": 1554655023948, "user_tz": 420, "elapsed": 1395, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 576} print("There are " + str(len(onehot_df)) + " entries in this dataframe.") sum_of_nans = sum(len(onehot_df) - onehot_df.count()) print("There are " + str(sum_of_nans) + " Nan values in the dataframe.") # take a look at the news scores dataframe onehot_df.head() # + [markdown] id="dEvVjZn3SkXP" colab_type="text" # ## (2) Language Models # # We will try to build a language model for our news score dataset to extract additional information. # # **NOTE**: We will be trying to build different language models with different architectures. We will also build models using tensorflow and keras for comparsion. 
# + [markdown] id="ylPt5Z28Xj5H" colab_type="text" # ### (2.1) Preprocessing our data # + id="3J_QxrpFS2ks" colab_type="code" colab={} # helper methods def numerize_sequence(tokenized): return [w2i.get(w, unkI) for w in tokenized] def pad_sequence(numerized, pad_index, to_length): pad = numerized[:to_length] padded = pad + [pad_index] * (to_length - len(pad)) mask = [w != pad_index for w in padded] return padded, mask # + id="mMjEz58vV7P0" colab_type="code" colab={} dataset_df = news_score_df[["title"]] dataset = dataset_df.to_dict('records') # + id="ofsrpPapYMuq" colab_type="code" outputId="96fe8951-41ff-45bc-cdea-2a0d56918e2f" executionInfo={"status": "ok", "timestamp": 1554922594208, "user_tz": 420, "elapsed": 1570, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 34} input_length = 0 for a in dataset: tokenized_title = tokenizer.word_tokenizer(a['title'].lower()) input_length = max(input_length, len(tokenized_title)) a['tokenized'] = tokenized_title print(input_length) # + id="iecSKIzrYNYp" colab_type="code" colab={} word_counts = Counter() for a in dataset: word_counts.update(a['tokenized']) # + id="Ze5LV5ijZgi3" colab_type="code" outputId="69692f93-9edc-419f-af9a-eec950b9cae6" executionInfo={"status": "ok", "timestamp": 1554922596105, "user_tz": 420, "elapsed": 897, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # Creating the vocab vocab_size = len(word_counts) special_words = ["", "UNK", "PAD"] vocabulary = special_words + [w for w, c in word_counts.most_common(vocab_size-len(special_words))] w2i = {w: i for i, w in enumerate(vocabulary)} # Numerizing and padding unkI, padI, startI = w2i['UNK'], w2i['PAD'], w2i[''] for a in dataset: a['numerized'] = numerize_sequence(a['tokenized']) # Change words to IDs a['numerized'], a['mask'] = pad_sequence(a['numerized'], padI, input_length) # Append appropriate PAD tokens # Compute fraction of words that are UNK: word_counters = Counter([w for a in dataset for w in a['title'] if w != padI]) print("Fraction of UNK words:", float(word_counters[unkI]) / sum(word_counters.values())) # + id="VrAXaUUYZ0WZ" colab_type="code" outputId="0c1ba74d-dd8c-437b-9964-4f930944d5cd" executionInfo={"status": "ok", "timestamp": 1554922600427, "user_tz": 420, "elapsed": 548, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 68} vocab_size = len(vocabulary) input_length = len(dataset[0]['numerized']) # The length of the first element in the dataset, they are all of the same length d_train, d_valid = train_test_split(dataset, test_size=0.01, random_state=42) print("Vocabulary Size:", vocab_size) print("Number of training samples:",len(d_train)) print("Number of validation samples:",len(d_valid)) # + id="vl-epkKRaGmB" colab_type="code" outputId="af7ddb8c-bbf7-4ac9-e265-494c86104ebe" executionInfo={"status": "ok", "timestamp": 1554655071961, "user_tz": 420, "elapsed": 667, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": 
"16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 71} def numerized2text(numerized): words = [vocabulary[int(num)] for num in numerized] converted_string = ' '.join(words) return converted_string entry = d_train[100] print("Reversing the numerized: "+numerized2text(entry['numerized'])) print("From the `title` entry: "+ entry['title']) # + [markdown] id="FoCwB-9xadBx" colab_type="text" # ### (2.2) Building Model # + id="P46hY9_PaQ7S" colab_type="code" colab={} def build_batch(dataset, batch_size): # randomize the indices we want to get the batch of indices = list(np.random.randint(0, len(dataset), size=batch_size)) # indice into the batch batch = [dataset[i] for i in indices] # Get the raw numerized for this input batch_numerized = np.asarray([db_element["numerized"] for db_element in batch]) # Create an array of start_index that will be concatenated at position 1 for input start_tokens = np.zeros((batch_size, 1)) batch_input = np.concatenate((start_tokens, batch_numerized), axis=1) # Remove the last word from each element in the batch to "shift" input batch_input = batch_input[:, :-1] # The target should be the un-shifted numerized input batch_target = batch_numerized # The target-mask is a 0 or 1 filter to note which tokens are # padding or not, to give the loss, so the model doesn't get rewarded for # predicting PAD tokens. batch_target_mask = np.array([a['mask'] for a in batch]) return batch_input, batch_target, batch_target_mask # + id="_ux4FZaeaw8P" colab_type="code" colab={} # Using a basic RNN/LSTM for Language modeling class LanguageModel(): def __init__(self, input_length, vocab_size, rnn_size, learning_rate=1e-4): self.input_num = tf.placeholder(tf.int32, shape=[None, input_length]) self.targets = tf.placeholder(tf.int32, shape=[None, input_length]) self.targets_mask = tf.placeholder(tf.bool, shape=[None, input_length]) self.embedding = tf.Variable(tf.random_uniform([vocab_size, rnn_size], -1.0, 1.0)) input_emb = tf.nn.embedding_lookup(self.embedding, self.input_num) lm_cell = tf.nn.rnn_cell.LSTMCell(rnn_size) outputs, states = tf.nn.dynamic_rnn(lm_cell, input_emb, dtype=tf.float32) self.output_logits = tf.layers.dense(inputs=outputs, units=vocab_size) weights = tf.cast(self.targets_mask, tf.float32) self.loss = tf.losses.sparse_softmax_cross_entropy(labels=self.targets,logits=self.output_logits, weights=weights) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, name='Adam') self.global_step = tf.train.get_or_create_global_step() self.train_op = optimizer.minimize(self.loss, global_step=self.global_step) self.saver = tf.train.Saver() # + [markdown] id="48Cy77pXbD1m" colab_type="text" # ### (2.3) Create Model # + id="sCamFzWFa1VB" colab_type="code" outputId="abc03c34-a9e4-4187-f1c4-24663fe95a5b" executionInfo={"status": "ok", "timestamp": 1554655077570, "user_tz": 420, "elapsed": 1909, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 343} tf.reset_default_graph() model = LanguageModel(input_length=input_length, vocab_size=vocab_size, rnn_size=256, learning_rate=1e-4) # + [markdown] id="-s8bZMkkbTcU" colab_type="text" # ### (2.4) Train Model # + id="FtfYzgENbWBZ" colab_type="code" outputId="1f5a5616-8b4e-4fa1-af04-bdee4f340b63" colab={"base_uri": "https://localhost:8080/", "height": 37893} # DO NOT RUN THIS BLOCK IF YOU DON'T WANT TO TRAIN THE NETWORK 
experiment = root_folder+"models/tf_language_model" plot_info_path = root_folder+"plots/tf_language_model.csv" plot_info = pd.DataFrame(columns=['training_err', 'validation_err']) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Here is how you restore the weights previously saved # model.saver.restore(sess, experiment) # also need to restore plot_info from plot_info_path epoch = 20000000 batch_size = 64 num_iter = epoch * len(d_train) // batch_size print("Total number of iterations is: " + str(num_iter)) eval_input, eval_target, eval_target_mask = build_batch(d_valid, 50) feed = {model.input_num: eval_input, model.targets: eval_target, model.targets_mask: eval_target_mask} eval_loss = sess.run(model.loss, feed_dict=feed) print("Evaluation set loss: ", eval_loss) for i in range(num_iter): # Here is how you obtain a batch: batch_input, batch_target, batch_target_mask = build_batch(d_train, batch_size) # Map the values to each tensor in a `feed_dict` feed = {model.input_num: batch_input, model.targets: batch_target, model.targets_mask: batch_target_mask} # Obtain a single value of the loss for that batch. # # !IMPORTANT! Don't forget to include the train_op to when using a batch from the training dataset # (d_train) # # !MORE IMPORTANT! Don't use the train_op if you evaluate the loss on the validation set, # Otherwise, your network will overfit on your validation dataset. step, train_loss, _ = sess.run([model.global_step, model.loss, model.train_op], feed_dict=feed) # record info for graphs every 20 steps if i % 20 == 0 and i % 200 != 0: eval_input, eval_target, eval_target_mask = build_batch(d_valid, 50) feed = {model.input_num: eval_input, model.targets: eval_target, model.targets_mask: eval_target_mask} eval_loss_steps = sess.run(model.loss, feed_dict=feed) row = {'training_err': train_loss, 'validation_err': eval_loss_steps} plot_info.loc[len(plot_info)] = row # save weights info every 200 steps if i % 200 == 0: print("step: " + str(i)) print("train_loss: " + str(train_loss)) eval_input, eval_target, eval_target_mask = build_batch(d_valid, 50) feed = {model.input_num: eval_input, model.targets: eval_target, model.targets_mask: eval_target_mask} eval_loss_steps = sess.run(model.loss, feed_dict=feed) # if (eval_loss_steps < eval_loss): # print("eval_loss decreases!") eval_loss = eval_loss_steps print("Evaluation set loss: ", eval_loss) print("saving plot info so far ....") plot_info.to_csv(plot_info_path, index=False) print("saving plot info so far completed ....") print("saving model weights ....") model.saver.save(sess, experiment) print("saving model weights completed ....") # else: # print("eval_loss didn't decrease.") # print("half learning rate, make another model, reset to previous checkpoint") # # learning_rate /= 2 # # model = LanguageModel(input_length=input_length, vocab_size=vocab_size, rnn_size=256*4, learning_rate=learning_rate) # model.saver.restore(sess, experiment) # Here is how you save the model weights model.saver.save(sess, experiment) # Here is how you restore the weights previously saved model.saver.restore(sess, experiment) # + [markdown] id="0cf96KxecEGV" colab_type="text" # ### (2.4) Evaluate Models # # We will use different methods to evaluate our models # # 0. Plot Training and Validation error over epochs # 1. How our model generate new titles # 2. How our model detects unlikely titles # 3. 
How the embedding perform on our data # # # + [markdown] id="epGLYb4xiZcQ" colab_type="text" # #### (2.4.0) Plots # # + id="f3ymyrZMijK4" colab_type="code" outputId="1c5406d6-7caf-435d-b882-cebedbff5efd" executionInfo={"status": "ok", "timestamp": 1554655087104, "user_tz": 420, "elapsed": 836, "user": {"displayName": "SHUN LIN", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 410} plot_info_path = root_folder+"plots/tf_language_model.csv" plot_df = pd.read_csv(plot_info_path) plot_df_per_epoch = plot_df.groupby(np.arange(len(plot_df))//50).mean() train_error_per_epoch = plot_df_per_epoch['training_err'] validation_error_per_epoch = plot_df_per_epoch['validation_err'] print(len(plot_df)) f, ax = plt.subplots() ax.plot(train_error_per_epoch, 'o-',c='r') ax.plot(validation_error_per_epoch, 'o-',c='g') # Plot legend and use the best location automatically: loc = 0. ax.legend(['Train loss', 'Validation loss'], loc = 0) ax.set_title('Training/Validation loss per Epoch') ax.set_xlabel('Epoch') ax.set_ylabel('Error') plt.plot() # + [markdown] id="z6QP9iHRcNlM" colab_type="text" # #### (2.4.1) Generating New Titles # + id="_wKS9khlbopZ" colab_type="code" outputId="6cfcc8c3-57ed-4bd2-9a77-715fa7a0ebb0" executionInfo={"status": "ok", "timestamp": 1554655095594, "user_tz": 420, "elapsed": 7084, "user": {"displayName": "SHUN LIN", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 258} model_file = root_folder+"models/tf_language_model" with tf.Session() as sess: model.saver.restore(sess, model_file) # Here are some headline starters. # They're all about tech companies, because # That is what is in our dataset headline_starters = ["bitcoin price", "today", "big"] for headline_starter in headline_starters: print("===================") print("Generating titles starting with: "+headline_starter) tokenized = tokenizer.word_tokenizer(headline_starter) current_build = [startI] + numerize_sequence(tokenized) while len(current_build) < input_length: current_padded = current_build[:input_length] + [padI] * (input_length - len(current_build)) current_padded = np.array([current_padded]) feed = {model.input_num: current_padded} logits = sess.run(model.output_logits, feed_dict=feed) last_index = len(current_build) - 1 last_logits = logits[0][last_index] current_build.append(np.argmax(last_logits)) produced_sentence = numerized2text(current_build) print(produced_sentence) # + [markdown] id="TBnVgDvEc9cY" colab_type="text" # #### (2.4.2) Fake/Unlikely News Headline Detection # # Lower loss means the headline is more likely. 
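# Because the reported number is a mean cross-entropy over the headline's tokens, it can also be read as a perplexity by exponentiating it. A small sketch (the helper name is ours, not part of the original notebook):

# +
import numpy as np


def loss_to_perplexity(mean_loss):
    """Convert a mean cross-entropy loss into perplexity (lower = more likely)."""
    return float(np.exp(mean_loss))
# -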
# + id="NViFkWxoc3AJ" colab_type="code" outputId="9989d921-f92a-42a2-cdb8-c28b1324f42a" executionInfo={"status": "ok", "timestamp": 1554655095913, "user_tz": 420, "elapsed": 5213, "user": {"displayName": "", "photoUrl": "https://lh4.googleusercontent.com/-pkp40ccE7So/AAAAAAAAAAI/AAAAAAAAAU4/Upp1QcV6fHs/s64/photo.jpg", "userId": "16137932526864003348"}} colab={"base_uri": "https://localhost:8080/", "height": 207} headline1 = "Bitcoin price crashes" headline2 = "Bitcoin price reach a new high" headline3 = "bitcoin is legal in all countries" headlines = [headline1, headline2, headline3] with tf.Session() as sess: model.saver.restore(sess, model_file) for headline in headlines: headline = headline.lower() tokenized = tokenizer.word_tokenizer(headline) numerized = numerize_sequence(tokenized) unkI, padI, startI = w2i['UNK'], w2i['PAD'], w2i[''] padded, mask = pad_sequence(numerized, padI, input_length) hl_element = {} hl_element['tokenized'] = tokenized hl_element['numerized'] = padded hl_element['mask'] = mask d_hl = [hl_element] hl_input, hl_target, hl_target_mask = build_batch(d_hl, 1) feed = {model.input_num: hl_input, model.targets: hl_target, model.targets_mask: hl_target_mask} loss = sess.run([model.loss], feed_dict=feed) print("----------------------------------------") print("Headline:",headline) print("Loss of the headline:", loss) # + id="heAfajU4e_ii" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # !pip install sqlalchemy # + import sqlalchemy as db from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Column, Integer, Text, ForeignKey from sqlalchemy.orm import relationship, backref from sqlalchemy.orm import sessionmaker Base = declarative_base() engine = db.create_engine('sqlite:///clother.db') connection = engine.connect() Session = sessionmaker(bind=engine) # - class Category(Base): __tablename__ = 'category' id = Column(Integer, primary_key=True) parent_id = Column(Integer, ForeignKey('category.id'), nullable=True) title = Column(Text, nullable=False) last_level = Column(Integer, default=1, nullable=False) subcategories = relationship('Category', backref=backref('parent', remote_side=[id]), lazy=True) Base.metadata.create_all(engine) # + session = Session() categories = json.load(open("./clother/static/categories.json", 'r')) for entry in categories: category = Category(parent_id=entry['parent_id'], title=entry['title']) session.add(category) session.commit() delete_query = Category.__table__.delete().where(Category.title.ilike("view all")) session.execute(delete_query) session.commit() categories = session.query(Category).all() for category in categories: if len(category.subcategories) > 0: category.last_level = False session.commit() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- from __future__ import print_function, absolute_import, division, unicode_literals import numpy as np from scipy import signal from matplotlib import pyplot as plt import scipy.io as sio from os.path import expanduser # %matplotlib inline def read_bin(address, dtype, num_ch): return np.memmap(address, dtype = dtype, mode = 'r').reshape([num_channels, -1], order='F') def welch_plot(data, fs, 
channel=None, start=None, good_channels=None, preprocess=None): if good_channels is not None: channel = channel if channel is not None else np.random.choice(good_channels) else: channel = channel if channel is not None else np.random.randint(data.shape[0]) start = start if start is not None else np.random.randint(0, high = data.shape[1]-100*fs) data = data[channel,start:start+100*fs].astype(np.float32) if preprocess is not None: data = preprocess(data) f, Pxx_den = signal.welch(data, fs, nperseg=2**12) plt.figure(figsize=(20,10)) plt.xticks(fontsize=20) plt.yticks(fontsize=20) plt.semilogy(f, Pxx_den) plt.xlim([0,2000]) return channel, start def fft_plot(data, fs, channel=None, start=None, good_channels=None, preprocess=None): if good_channels is not None: channel = channel if channel is not None else np.random.choice(good_channels) else: channel = channel if channel is not None else np.random.randint(data.shape[0]) start = start if start is not None else np.random.randint(0, high = data.shape[1]-100*fs) data = data[channel,start:start+100*fs].astype(np.float32) if preprocess is not None: data = preprocess(data) p = 20*np.log10(np.abs(np.fft.rfft(data))) rate = fs f = np.linspace(0, rate/2, len(p)) plt.figure(figsize=(20,10)) plt.xticks(fontsize=20) plt.yticks(fontsize=20) plt.plot(f, p) plt.xlim([0,2000]) return channel, start home = expanduser('~/') # ## Neuroseeker data fs = 2e4 address = home + '26May/Data/2017_05_26T13_28_10_Amp_S16_LP3p5KHz_uV.bin' channel_info = np.genfromtxt(home + '26May/Data/channel_config.csv', delimiter=',', usecols=2, dtype=np.string0)[:-1] num_channels = channel_info.shape[0] fs = 20000 data_type = np.int16 data = read_bin(address, data_type, num_channels) chs_prop_address = home + '26May/Analysis/Kilosort/chanmap_12regions_norefs_nolfps.mat' chs_prop = sio.loadmat(chs_prop_address) good_channels = [ch for ch in range(num_channels) if ch not in chs_prop['bad_channels'] and ch not in chs_prop['lfps'] and ch not in chs_prop['refs']] # ### Welch Plot channel, start = welch_plot(data, fs, good_channels=good_channels) # ### fft plot channel, start = fft_plot(data, fs, channel=channel, start=start) # ## Atlas data filename = home + 'test_day/amplifier.bin' num_channels = 128 fs = 30000 data_type = np.uint16 data = read_bin(filename, data_type, num_channels) good_channels = np.load(home + 'test_day/good_channels.npy') # ### Welch Plot channel, start = welch_plot(data, fs, good_channels=good_channels, preprocess = lambda x: (x - 32768) * 0.195) # plt.vlines(50, 1e-3,1e2) # plt.vlines(100, 1e-3,1e2) # ### fft plot channel, start = fft_plot(data, fs, channel, start, good_channels=good_channels, preprocess = lambda x: (x - 32768) * 0.195) # ## CRCNS data filename = home + 'crcns/hc-2/ec014.333/crcns/hc2/ec014.333/ec014.333.dat' num_channels = 64 fs = 20000 data_type = np.int16 data = read_bin(filename, data_type, num_channels) # ### Welch plot channel, start = welch_plot(data, fs) # ### fft plot channel, start = fft_plot(data, fs, channel, start) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 5.10. 
Interacting with asynchronous parallel tasks in IPython # #### This notebook requires an ipcluster to be executed before start # + magic_args="bash --bg" language="script" # ipcluster start -n 4 # - # !sleep 5 import sys import time import ipyparallel import ipywidgets from IPython.display import clear_output, display rc = ipyparallel.Client() view = rc.load_balanced_view() def f(x): import time time.sleep(.1) return x * x numbers = list(range(100)) ar = view.map_async(f, numbers) ar.metadata[0] for i in ar: print(i, end=', ') def progress_bar(ar): # We create a progress bar. w = ipywidgets.IntProgress() # The maximum value is the number of tasks. w.max = len(ar.msg_ids) # We display the widget in the output area. display(w) # Repeat: while not ar.ready(): # Update the widget's value with the # number of tasks that have finished # so far. w.value = ar.progress time.sleep(.1) w.value = w.max ar = view.map_async(f, numbers) # + podoc={"output_text": "Progress bar"} progress_bar(ar) # - # ## Cleanup # !ipcluster stop # %killbgscripts # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import os, sys cwd = os.getcwd() project_path = cwd[:cwd.find('pygents')+7] if project_path not in sys.path: sys.path.append(project_path) os.chdir(project_path) from importlib import reload # Python 3.4+ import json import pandas as pd import numpy as np import matplotlib.pyplot as plt import pickle #force reimport if 'pygents.util' in sys.modules: del sys.modules['pygents.util'] if 'pygents.text' in sys.modules: del sys.modules['pygents.text'] if 'pygents.plot' in sys.modules: del sys.modules['pygents.plot'] if 'pygents.token' in sys.modules: del sys.modules['pygents.token'] from pygents.util import * from pygents.text import * from pygents.plot import * from pygents.token import * # + # https://www.vengaglobal.com/blog/simplified-traditional-chinese-mandarin-cantonese/ # Target Market Written Spoken # ------------------------------------- # China Simplified Mandarin # Singapore Simplified Mandarin # Taiwan Traditional Mandarin # Hong Kong Traditional Cantonese # Lexicon: # http://www.chineselexicaldatabase.com/download.php - used below # ., ., . & . (2018). Chinese Lexical Database (CLD): A large-scale lexical database for simplified Mandarin Chinese. Behavior Research Methods, https://doi.org/10.3758/s13428-018-1038-3. 
# 48644 words # http://crr.ugent.be/programs-data/subtitle-frequencies/subtlex-ch # 99121 words # https://www.plecoforums.com/threads/word-frequency-list-based-on-a-15-billion-character-corpus-bcc-blcu-chinese-corpus.5859/ # 1048575 words # Corpora: # https://www.openslr.org/38/ - test-audio corpus, not relevant # https://github.com/CLUEbenchmark/CLUECorpus2020/ - email request sent # https://github.com/brightmart/nlp_chinese_corpus - nearly same as above downloaded, used further # TODO: # https://metatext.io/datasets/nlp-chinese-corpus - paper with word segmentation # - path = '../../nlp/corpora/Chinese/' # ## Lexicon cld_df = pd.read_csv(os.path.join(path,'lexicon/chineselexicaldatabase2.1.txt')) len(cld_df) cld_df cld_df[['Word']] cld_df = pd.read_csv(os.path.join(path,'lexicon/bcc_global_wordfreq.release_UTF-8.txt'),sep='\t') print(len(cld_df)) cld_df cld_df = pd.read_csv(os.path.join(path,'lexicon/SUBTLEX-CH-WF.txt'),sep='\t') print(len(cld_df)) cld_df # ## Corpora #out-of-memory #news2016zh_valid_df = pd.read_json('data/corpora/chinese/new2016zh/news2016zh_train.json',encoding = 'utf-8-sig', lines=True) news2016zh_valid_df = pd.read_json(os.path.join(path,'clue/new2016zh/news2016zh_valid.json'),encoding = 'utf-8-sig', lines=True) len(news2016zh_valid_df) news2016zh_valid_df def zh_clue_json2text(path,filename): with open(os.path.join(path,filename+'.json')) as file: with open(os.path.join(path,filename+'.txt'), 'w') as fout: while True: line = file.readline() if not line: break j = json.loads(line) #print('title',j['title']) #print('desc',j['desc']) #print('content',j['content']) fout.write(j['title']) fout.write('\n') fout.write(j['desc']) fout.write('\n') fout.write(j['content']) fout.write('\n') #do this once! #zh_clue_json2text(path,'clue/new2016zh/news2016zh_valid') #do this once! #zh_clue_json2text(path,'clue/new2016zh/news2016zh_train') #check if context is present n_counters1 = context_save_load(None,'chinese_news2016zh_valid',folder='data/models/') len(n_counters1) if n_counters1 is None or len(n_counters1) < 1: max_n = 3 # in case of Chinese!? 
n_counters1 = grams_init(max_n) cnt = 0 with open(os.path.join(path, 'clue/new2016zh/news2016zh_valid.txt'),errors='ignore') as f: while True: line = f.readline() if not line: break cnt += 1 if (cnt % 1000) == 0: print(cnt,line) text = preprocess_text(line) text_grams_count(n_counters1,text,max_n) print(cnt) context_save_load(n_counters1,'chinese_news2016zh_valid',folder='data/models/') dfs = [] for i in range(len(n_counters1)): counter = n_counters1[i] df = pd.DataFrame([(gram, counter[gram]) for gram in counter],columns=['gram','freq']) df['log'] = np.log10(df['freq']) df.sort_values('freq',ascending=False,inplace=True) df.title = str(1+i) dfs.append(df) dfs[0][:20][['gram','freq']] # 的 - of # 是 - yes # 在 - exist # 不 - do not dfs[1][:20][['gram','freq']] # 一个 - one # 公司 - company # 中国 - china # 我们 - us/ourselves # 可以 - can dfs[2][:20][['gram','freq']] # 自己的 - my own # ,我们 - , us # 互联网 - the internet # + #https://chowdera.com/2022/03/202203280859161240.html #http://anqin007.blogspot.com/2018/12/show-chinese-characters-in-matplotlib.html from pylab import mpl mpl.rcParams['font.sans-serif'] = ['SimHei'] mpl.rcParams['axes.unicode_minus'] = False plt.rcParams["figure.figsize"] = (20,20) for df in dfs: p = df[:100][['gram','freq']].plot.barh(x='gram'); p.invert_yaxis(); p.set_title(df.title,fontsize = 32) plt.show() # - plt.rcParams["figure.figsize"] = (20,20) for df in dfs: p = df[:100][['gram','log']].plot.barh(x='gram'); p.invert_yaxis(); p.set_title(df.title,fontsize = 32) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="DyhbDNfq4ulu" # from google.colab import drive # drive.mount("/content/drive", force_remount=True) # + colab={"base_uri": "https://localhost:8080/"} id="lbnbzSwN40b7" outputId="ae220976-71ff-43b6-a8c0-4c359dca45a3" # %cd '/content/drive/MyDrive/Cov19/input' # + id="jF9jsZwY4153" import pandas as pd import numpy as np import scipy import matplotlib.pyplot as plt import seaborn import os as os import cv2 as cv import glob as glob import nibabel as nib import pickle import imgaug as ia import imgaug.augmenters as iaa import tqdm.notebook as tqdm import gc import warnings import tensorflow as tf from keras import backend as K from keras import losses, metrics from keras import optimizers from keras import callbacks from keras.models import Model from keras.layers import Input, BatchNormalization, Activation, Dense, Dropout from keras.layers import concatenate, Conv2D, MaxPooling2D, Conv2DTranspose, add, multiply from keras.layers import Multiply, UpSampling2D, core from keras.layers.merge import concatenate from keras.layers.core import Lambda from sklearn.model_selection import train_test_split from sklearn.cluster import KMeans from skimage import morphology as morph from skimage import measure # + [markdown] id="fRnstAY55Rjk" # # 1. 
Loading data # # + id="6ClKVctx43gh" infile = open('Processed_Data.cp','rb') data_dict = pickle.load(infile) all_cts = data_dict['cts'] all_lungs = data_dict['lungs'] all_inf = data_dict['infects'] infile.close() # + colab={"base_uri": "https://localhost:8080/"} id="UeUflHhi5t8G" outputId="d097b69e-f034-4fb4-cacb-a33627e0649b" print(all_cts.shape) print(all_lungs.shape) print(all_inf.shape) # + id="JuHjBH6rY4jy" from sklearn.utils import shuffle all_cts, all_lungs, all_inf = shuffle(all_cts, all_lungs, all_inf) # + colab={"base_uri": "https://localhost:8080/"} id="QMOfAhVuY-HC" outputId="d337f9cb-d57c-46ab-ada6-8c5e0377c6ff" all_cts = (all_cts - all_cts.min())/(all_cts.max()-all_cts.min()) all_lungs = (all_lungs - all_lungs.min())/(all_lungs.max()-all_lungs.min()) all_inf = (all_inf - all_inf.min())/(all_inf.max()-all_inf.min()) print("{} {}".format(all_cts.min(), all_cts.max())) print("{} {}".format(all_lungs.min(), all_lungs.max())) print("{} {}".format(all_inf.min(), all_inf.max())) # + [markdown] id="qmWCu4QuuB9J" # # 2. Splitting data into training and validation sets # + colab={"base_uri": "https://localhost:8080/"} id="kF0uPJAYuIcH" outputId="d5aa2317-133e-41a6-8e33-e8c39b84b6b8" train_size = int(0.65*all_cts.shape[0]) X_train, yl_train, yi_train = (all_cts[:train_size]/255, all_lungs[:train_size], all_inf[:train_size]) X_valid, yl_valid, yi_valid = (all_cts[train_size:int(0.8*all_cts.shape[0])]/255, all_lungs[train_size:int(0.8*all_cts.shape[0])], all_inf[train_size:int(0.8*all_cts.shape[0])]) test_size = int(0.8*all_cts.shape[0]) X_test, yl_test, yi_test = (all_cts[test_size:]/255, all_lungs[test_size:], all_inf[test_size:]) print(X_train.shape, yl_train.shape, yi_train.shape) print(X_valid.shape, yl_valid.shape, yi_valid.shape) print(X_test.shape, yl_test.shape, yi_test.shape) # + [markdown] id="_CbeWd38rouS" # # 3. Define evaluation metrics # We will use dice coefficient as a metric for infection segmentation. Dice coefficient is (2TP/2TP+FN+FP) where TP, FN and FP correspond to true positive, false negative and false positive. Code taken from # https://medium.com/@karan_jakhar/100-days-of-code-day-7-84e4918cb72c # + id="kj1Gxu9r53Ee" def dice(y_true, y_pred, smooth=1): intersection = K.sum(y_true * y_pred, axis=[1,2,3]) union = K.sum(y_true, axis=[1,2,3]) + K.sum(y_pred, axis=[1,2,3]) dice = K.mean((2. * intersection + smooth)/(union + smooth), axis=0) return dice def dice_loss(y_true, y_pred): loss = 1 - dice(y_true, y_pred) return loss def bce_dice_loss(y_true, y_pred): loss = 0.5*losses.binary_crossentropy(y_true, y_pred) + 0.5*dice_loss(y_true, y_pred) return loss class CosineAnnealingLearningRateSchedule(callbacks.Callback): def __init__(self, n_epochs, n_cycles, lrate_max, verbose=0): self.epochs = n_epochs self.cycles = n_cycles self.lr_max = lrate_max self.lrates = list() def cosine_annealing(self, epoch, n_epochs, n_cycles, lrate_max): epochs_per_cycle = np.floor(n_epochs/n_cycles) cos_inner = (np.pi * (epoch % epochs_per_cycle)) / (epochs_per_cycle) return lrate_max/2 * (np.cos(cos_inner) + 1) def on_epoch_begin(self, epoch, logs=None): lr = self.cosine_annealing(epoch, self.epochs, self.cycles, self.lr_max) K.set_value(self.model.optimizer.lr, lr) self.lrates.append(lr) # + [markdown] id="W5AXZrg0wqS6" # # 4. 
CNN # # + id="zS82kxUkATB0" def block1 (input_shape, filtersize, poolsz=(2,2)) : x = Conv2D(filtersize, (3,3), activation='relu', padding='same', kernel_initializer="he_normal") (input_shape) x = Conv2D(filtersize, (3,3), activation='relu', padding='same', kernel_initializer="he_normal") (x) x_inter = BatchNormalization()(x) x = MaxPooling2D(poolsz) (x_inter) x = Dropout(0.2)(x) return x, x_inter def block2 (input_shape, filtersize) : x = BatchNormalization() (input_shape) x = Conv2D(filtersize, (3, 3), activation='relu', padding='same', kernel_initializer="he_normal") (x) x = Conv2D(filtersize, (3, 3), activation='relu', padding='same', kernel_initializer="he_normal") (x) return x # + colab={"base_uri": "https://localhost:8080/"} id="QeEUhS-hwQJR" outputId="487709be-0f7b-4c96-e804-b06dc060d36a" def infection_segmentation(input_shape) : x_input = Input(input_shape) x, Xa = block1(x_input, 32) x, Xb = block1(x, 64) x, _ = block1(x, 128, poolsz=(1,1)) x, _ = block1(x, 256, poolsz=(1,1)) x = block2(x, 256) x = Conv2DTranspose(128, (2, 2), strides=(2,2), padding='same') (x) x = block2(x, 128) x = Conv2DTranspose(64, (2, 2), padding='same') (x) x = concatenate([x, Xb]) x = block2(x, 64) x = Conv2DTranspose(32, (2, 2), strides=(2,2), padding='same') (x) x = concatenate([x, Xa], axis=3) x = block2(x, 32) infection_segmentation = Conv2D(1, (1, 1), activation='sigmoid', name='infect_output') (x) model = Model(inputs=x_input, outputs=infection_segmentation, name='infect_model') return model strategy = tf.distribute.MirroredStrategy() print('Devices {}'.format(strategy.num_replicas_in_sync)) with strategy.scope() : infection_segmentation = infection_segmentation(all_cts.shape[1:]) infection_segmentation.summary() # + id="7lIiAF5Aw_QE" epochs = 100 lrmax = 5e-5 n_cycles = epochs / 25 lr_cb = CosineAnnealingLearningRateSchedule(epochs, n_cycles, lrmax) checkpoint_fpath = "infection_segmentation_weights.hdf5" cts_checkpoint_cb = callbacks.ModelCheckpoint(checkpoint_fpath, monitor='val_dice', save_best_only=True, mode='max', verbose=1, save_weights_only=True) batch_size = 8 optim = optimizers.Adam(lr=5e-5, beta_1=0.9, beta_2=0.99) with strategy.scope() : infection_segmentation.compile(optimizer=optim, loss=bce_dice_loss, metrics=[dice]) # + [markdown] id="UWCIZz9vxiS7" # # 5. Train Model # + colab={"base_uri": "https://localhost:8080/"} id="EY9B84-Exk7N" outputId="e691b825-f05f-4ccd-923d-96ba12536c29" infection_segmentation_res = infection_segmentation.fit(x = X_train, y = yi_train, batch_size = batch_size, epochs = epochs, verbose = 1, validation_data = (X_valid, yi_valid), callbacks = [cts_checkpoint_cb, lr_cb]) # + colab={"base_uri": "https://localhost:8080/", "height": 337} id="hV_kk78VzJY3" outputId="0129dc54-3b8a-4ac0-cf9c-fae07cdba78f" plt.style.use('ggplot') fig, axes = plt.subplots(1, 2, figsize=(11,5)) axes[0].plot(infection_segmentation_res.history['dice'], color='b', label='train-infection') axes[0].plot(infection_segmentation_res.history['val_dice'], color='r', label='valid-infection') axes[0].set_ylabel('Dice coefficient') axes[0].set_xlabel('Epoch') axes[0].legend() axes[1].plot(infection_segmentation_res.history['loss'], color='b', label='train-infection') axes[1].plot(infection_segmentation_res.history['val_loss'], color='r', label='valid-infection') axes[1].set_ylabel('Loss') axes[1].set_xlabel('Epoch') axes[1].legend(); # + [markdown] id="rKaY03cI0mVw" # # 6. 
Saving loads # + id="n716JP-wbcp1" infection_segmentation.load_weights('infection_segmentation_weights.hdf5') prediction = infection_segmentation.predict(X_test) # + [markdown] id="s5m49RsN08nS" # # 7. Testing # + id="zEFH6wvo06VY" def plot_lung_seg(all_cts, all_lungs, all_inf, pred_infs, axes) : axes[0].imshow(all_cts[:,:,0], cmap='bone') axes[0].set_title('CT image'); plt.grid(None) axes[0].set_xticks([]); axes[0].set_yticks([]) axes[1].imshow(all_lungs[:,:,0], cmap='bone') axes[1].imshow(all_inf[:,:,0], alpha=0.5, cmap='Reds') axes[1].set_title('Infection mask'); plt.grid(None) axes[1].set_xticks([]); axes[1].set_yticks([]) axes[2].imshow(all_lungs[:,:,0], cmap='bone') axes[2].imshow(pred_infs[:,:,0], alpha=0.5, cmap='Reds') axes[2].set_title('Pred. Infection mask'); plt.grid(None) axes[2].set_xticks([]); axes[2].set_yticks([]) # + colab={"base_uri": "https://localhost:8080/", "height": 541} id="gaUb86Ol1LEh" outputId="f4200be2-72d2-4a01-8f84-05c63b955dcf" import random indices = random.choices(range(len(X_test)), k=5) fig, axes = plt.subplots(3, 5, figsize=(15,9)) for ii, idx in enumerate(indices) : plot_lung_seg(X_test[idx], yl_test[idx], yi_test[idx], prediction[idx], list(axes[:,ii])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Introduction # - Dataset on Amazon's Top 50 bestselling books from 2009 to 2019. Contains 550 books, data has been categorized into fiction and non-fiction using Goodreads # ## Data source # - https://www.kaggle.com/sootersaalu/amazon-top-50-bestselling-books-2009-2019 # ## Data description # - In this data there are total 550 rows and 7 columns. # - 'Name' - Book title. # - 'Author' - Person who wrote the book. # - 'User Rating' - Book rating out of 5. # - 'Reviews' - Number of people/readers reviewing the book via. rating. # - 'Price' - Cost of each book in US dollars. # - 'Year' - Book launch year. # - 'Genre' - Categories of books # - To analyse tha data using python libraries pandas and numpy and for data visualisation using plotly library. # ## Tasks performed in this analysis # - Distribution of genre in complete dataset. # - Visualize the distribution of genre per year. # - Top 10 most profitable authors # - Most profitable author per year of each genre # - The top 10 books with maximum number of reviews # - The books with the maximum number of reviews per year. # - Visualize the distribution of genre with respect to reviews. # - Top 10 books with the highest rating. # - Does a higher rating of the books affect its price? # - Is the mean price is changing over the years? # - Mean price per genre. import pandas as pd import numpy as np import plotly.express as px import plotly.graph_objects as go import warnings warnings.filterwarnings('ignore') data = pd.read_csv('bestsellers_with_categories.csv') data.drop_duplicates(inplace=True) data.head() # ## Distribution of genre in complete dataset. 
print(data.filter(['Name','Genre']).shape) print(data.filter(['Name','Genre']).drop_duplicates().shape) category = data.filter(['Name','Genre']).drop_duplicates() category.isnull().sum() # category.drop_duplicates(inplace=True) category = category.groupby(['Genre']).agg(count_genre=('Genre','count')) category.reset_index(level=0,inplace=True) fig = px.pie(category, values='count_genre', names='Genre',title='Distribution of genre') fig.show() # ### Observation - # - Pie chart shows the distribution between fiction and non-fiction books. # - 160 books genre is fiction and 191 genre is non-fiction which is slightly higher. # - Non-fiction books is 8.8 % higher than fiction books # ## Visualize the distribution of genre per year. year_genre = data.filter(['Name','Year','Genre']) # year_genre.drop_duplicates(inplace=True) year_genre = year_genre.groupby(['Year','Genre']).agg(count_book = ('Name','count')) year_genre.reset_index(level=[0,1],inplace=True) fig = px.bar(x=year_genre.Year, y=year_genre.count_book, color=year_genre['Genre'], barmode='group', labels={'x': 'Year', 'y': 'Total number of books','color':'Genre'},title='Genre comparison per year') fig.show() # ### Observation - # - X-axis represents year and Y-axis represents total number of books. # - We have more books from non-fiction genre over all years, except for 2014. # - We can observe a small rise in fiction book from 2010 to 2014. # ## Top 10 most profitable authors # Limitation # - Sales data is not avialble in a dataset. # # Assumption # - If we assume all readers have provided a review then we can assume the number of reviews as number of books sold. # # Hence, we can compute profit as the product of reviews and price for a given year. # + bestselling = data.filter(['Name','Author','Reviews','Price','Genre']) bestselling.drop_duplicates(subset=['Name'],inplace=True) bestselling['selling_price'] = bestselling['Reviews'] * bestselling['Price'] bestselling_fiction = bestselling[bestselling.Genre == 'Fiction'] bestselling_fiction['rank_fiction_price'] = bestselling_fiction.selling_price.rank(method='first',ascending=False).astype(np.int32) bestselling_fiction = bestselling_fiction[bestselling_fiction.rank_fiction_price < 11].copy() bestselling_non_fiction = bestselling[bestselling.Genre == 'Non Fiction'] bestselling_non_fiction['rank_non_fiction_price'] = bestselling_non_fiction.selling_price.rank(method='first',ascending=False).astype(np.int32) bestselling_non_fiction = bestselling_non_fiction[bestselling_non_fiction.rank_non_fiction_price < 11].copy() fig = go.Figure(data=[ go.Bar(name='Fiction', x=bestselling_fiction.Author, y=bestselling_fiction.selling_price), go.Bar(name='Non-Fiction', x=bestselling_non_fiction.Author, y=bestselling_non_fiction.selling_price) ]) fig.update_layout(barmode='group',title='Top 10 profitable author per genre') fig.update_xaxes(title_text="Authors") fig.update_yaxes(title_text="Price in $") fig.show() # - # ### Observation # - Bar graph represent the best-seller from 2009 to 2019. # - X-axis represent authors name. # - Y-axis represent selling price. # - is the best-seller in fiction followed by . # - American psychiatric association is the best-seller in non-fiction followed by . 
# ## Most profitable author per year of each genre # + fiction = data.filter(['Name','Author','Year','Reviews','Price','Genre']) # df.drop_duplicates(subset=['Name'],inplace=True) fiction['fiction_selling_price'] = fiction['Reviews']* fiction['Price'] fiction = pd.DataFrame(fiction.groupby(['Year','Author','Genre']).agg({'fiction_selling_price':'sum'})) fiction.reset_index(level=[0,1,2],inplace=True) fiction = fiction[fiction.Genre == 'Fiction'] fiction['rank_price'] = fiction.groupby(['Year'])['fiction_selling_price'].rank(method='first',ascending=False).astype(np.int32) fiction = fiction[fiction.rank_price == 1] non_fiction = data.filter(['Name','Author','Year','Reviews','Price','Genre']) non_fiction['non_fiction_selling_price'] = non_fiction['Reviews']* non_fiction['Price'] non_fiction = pd.DataFrame(non_fiction.groupby(['Year','Author','Genre']).agg({'non_fiction_selling_price':'sum'})) non_fiction.reset_index(level=[0,1,2],inplace=True) non_fiction = non_fiction[non_fiction.Genre == 'Non Fiction'] non_fiction['rank_price'] = non_fiction.groupby(['Year'])['non_fiction_selling_price'].rank(method='first',ascending=False).astype(np.int32) non_fiction = non_fiction[non_fiction.rank_price == 1] fig = px.bar(y=fiction.Author, x=fiction.fiction_selling_price,color=fiction['Year'],title='Most profitable author in fiction', labels={'x': 'Price in $', 'y': 'Authors','color':'Year'}) fig.show() # - # ### Observation # - X-axis represent profit in dollars. # - Y-axis represent authors. # - Wixards RPG Team profitale in 2017 and 2018. # - Suzanne collins profitable in 2010 and 2011. fig = px.bar(y=non_fiction.Author, x=non_fiction.non_fiction_selling_price,color=non_fiction.Year,title='Most profitable author in non-fiction', labels={'x': 'Price in $', 'y': 'Authors','color':'Year'}) fig.show() # ### Observation # - X-axis represent profit in dollars. # - Y-axis represet authors. # - American Psychological Association was the profitable author in 2009, 2015 and 2016 # - was the profitable author in 2010, 2011, 2012 and 2014. # - was the profitable author in 2018 and 2019. # ## The top 10 books with maximum number of reviews most_rvs = data.filter(['Name','Reviews']) most_rvs.drop_duplicates(subset=['Name'],inplace=True) most_rvs.isnull().sum() most_rvs['rank_reviews'] = most_rvs.Reviews.rank(method='first',ascending=False).astype(np.int32) most_rvs = most_rvs[most_rvs.rank_reviews < 11] most_rvs.sort_values(by=['rank_reviews'],ascending=False,inplace=True) fig = px.bar(y=most_rvs.Name, x=most_rvs.Reviews,title='Top 10 books with maximum Reviews',labels={'x': 'Total Reviews', 'y': 'Book Title'}) fig.show() # ### Observation - # - X-axis represent the total number of reviews. # - Y-axis represent the books name. # - The number of reviews ranges between 37 and 87,841. # - By far the most reviews have been given to 'Where the Crawdads Sing' by with a user rating of 4.8 and 'The Girl on the Train' by with a user rating of 4.1. # ## The books with the maximum number of reviews per year. 
most_rvs_year = data.filter(['Name','Year','Reviews']) # most_rvs_year.drop_duplicates(subset=['Name'],inplace=True) most_rvs_year = pd.DataFrame(most_rvs_year.groupby(['Year','Name']).Reviews.max()) most_rvs_year.reset_index(level=[0,1],inplace=True) most_rvs_year['rank_Reviews'] = most_rvs_year.groupby(['Year']).Reviews.rank(method='first',ascending=False).astype(np.int32) most_rvs_year = most_rvs_year[most_rvs_year.rank_Reviews == 1] most_rvs_year fig = px.bar(x=most_rvs_year.Year, y=most_rvs_year.Reviews,color=most_rvs_year.Name,title='Books with the maximum number of reviews per year', labels={'x': 'Years', 'y': 'Total Reviews'}) fig.show() # ### Observation # - X-axis represent years. # - Y-axis represent Reviews. # - "Gone Girl" recorded the maximum number of reviews in three consecutive years(2012-2014). # - "The Girl on the Train" recorded the maximum number of reviews in two consecutive years(2015-2016). # - "Where the C" recorded highest number of reviews in 2019. # ## Visualize the distribution of genre with respect to reviews. genre_diff = data.filter(['Genre','Reviews']) genre_diff = genre_diff.groupby(['Genre']).agg({'Reviews':'sum'}) genre_diff.reset_index(level=0,inplace=True) fig = px.pie(genre_diff, values='Reviews', names='Genre',title='Review comparision per genre') fig.show() # ### Observation - # - Total number of reviews 6,574,305. # - Fiction reviews 3,764,110. # - Non Fiction reviews 2,810,195. # ## Top 10 books with the highest rating. max_rating = data.filter(['Name','Author','User Rating','Reviews']).drop_duplicates() max_rating['rank_rating'] = max_rating['User Rating'].rank(method='first',ascending=False).astype(np.int32) # max_rating = max_rating[max_rating.rank_rating < 11] max_rating['rank_review'] = max_rating['Reviews'].rank(method='first',ascending=False).astype(np.int32) # max_rating = max_rating[max_rating.rank_review < 11] print(max_rating.rank_rating.min(), max_rating.rank_rating.max()) print(max_rating.rank_review.min(), max_rating.rank_review.max()) max_rating['total_rank'] = (max_rating['rank_rating'] + max_rating['rank_review'])/2 max_rating['rank_total'] = max_rating['total_rank'].rank(method='first',ascending=True).astype(np.int32) max_rating = max_rating[max_rating.rank_total < 11] fig = px.bar(y=max_rating.Name, x=max_rating['User Rating'],labels={'x':'Rating','y':'Book Title'},title='Top 10 books with maximum rating') fig.show() # ### Observation # - The bar graph represent top 10 higher rating books. # - X-axis represent rating. # - Y-axis represent book title. # - 6 books have rating of 4.9. # - 4 books have rating of 4.8. # ## Does a higher rating of the books affect its price? rating_price = data.filter(['Author','Name','User Rating','Price','Reviews']).drop_duplicates() print(rating_price.isnull().sum().sum()) rating_price = pd.DataFrame(rating_price.groupby(['User Rating']).agg({'Price': 'mean', 'Name': 'nunique'})) rating_price.reset_index(level=0,inplace=True) fig = px.scatter(rating_price, x="User Rating", y="Price", title='Higher rating affect on price', size='Name') fig.update_xaxes(title_text="User Rating") fig.update_yaxes(title_text="Price in $") fig.show() # ### Observation - # - X-axis represent user rating. # - Y-axis represent price in dollars. # - There is no clear relationship between user rating and price. # - We have seen that the number of books are more in higher rating. # ## Is the mean price is changing over the years? 
price_year = data.filter(['Price','Year']) price_year = pd.DataFrame(price_year.groupby(['Year']).Price.mean()) price_year.reset_index(level=0,inplace=True) fig = px.line(price_year, x="Year", y="Price", title='Mean price change over the years') fig.update_xaxes(title_text="Year") fig.update_yaxes(title_text="Price in $") fig.show() # ### Observation # - X-axis represent years. # - Y-axis represent price in dollars. # - Sudden fall in price between 2014-2015 but we don't know the reason because unavailability of data. # ## Mean price per genre. genre_price = data.filter(['Price','Genre']) genre_price = pd.DataFrame(genre_price.groupby(['Genre']).Price.mean()) genre_price.reset_index(level=0,inplace=True) fig = px.bar(genre_price, x='Genre', y='Price',title='Mean price per genre') fig.update_xaxes(title_text="Genre") fig.update_yaxes(title_text="Price in $") fig.show() # ### Observation # - X-axis represent genre. # - Y-axis represent price in dollars. # - Mean price of non fiction is 14.84. # - Mean price of fiction is 10.85 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # CSE 6040, Fall 2015 [21]: Linear regression via least squares, Part 1 -- Error analysis # # Yay! Time for a new topic: _linear regression by the method of least squares_. # # We will do this topic in two parts: one which is generically about the errors that can arise in carrying out a numerical computation, and the other which is specifically about the linear regression problem. # # > This lab kicks off the last part of our course, which is a survey of data analysis and mining methods. We will assume you will learn the theory behind these methods in more detail in other classes; our focus will be on algorithmic and implementation concepts. # # For today's topic, let's use the following dataset, which is a crimes dataset from 1960: http://cse6040.gatech.edu/fa15/uscrime.csv # # This dataset comes from: http://www.statsci.org/data/general/uscrime.html # # Other useful references and downloads for today's class: # # * Please get the latest version of [`cse6040utils.py`](https://raw.githubusercontent.com/rvuduc/cse6040-ipynbs/master/cse6040utils.py) # * Python's documentation on its `float` type: https://docs.python.org/2/tutorial/floatingpoint.html # * Solvers, including linear systems solvers, in SciPy: http://docs.scipy.org/doc/scipy/reference/linalg.html # # Also, much of the discussion of round-off error is taken from an [excellent book](http://epubs.siam.org/doi/book/10.1137/1.9781611971446) on numerical linear algebra. # # > You may be able to use the Georgia Tech library's web proxy service to download electronic chapters of this book. Also, this book was written by Rich's former PhD advisor, so a referral thereto is probably not completely objective. `:)` # ## Motivation # # Let's start by loading a dataset and motivating the problem of modeling it using linear regression. import numpy as np import pandas as pd from IPython.display import display import cse6040utils as cse6040 df = pd.read_csv ('uscrime.csv', skiprows=1) display (df.head ()) # Each row of this dataset is a US State. The columns are described here: http://www.statsci.org/data/general/uscrime.html # Let's take a quick peek at the dataset. 
df.describe () import seaborn as sns # %matplotlib inline # Look at a few relationships sns.pairplot (df[['Crime', 'Wealth', 'Ed', 'U1']]) # Suppose we wish to build a model of some quantity, called the _response_ variable, given some set of _predictors_. In the US crimes dataset, the response might be the crime rate (`Crime`), which we wish to predict from the predictors of income (`Wealth`), education (`Ed`), and the unemployment rate of young males (`U1`). # # In a linear regression model, we posit that the response is a linear function of the predictors. That is, suppose there are $m$ observations in total and consider the $i$-th observation. Let $b_i$ be the response of that observation. Then denote the $n$ predictors for observation $i$ as $\{a_{i,1}, a_{i,2}, \ldots, a_{i,n}\}$. From this starting point, we might then posit a _linear_ model of $b$ having the form, # # $b_i = x_0 + a_{i,1} x_1 + a_{i,2} x_2 + \cdots + a_{i,n} x_n$, # # where we wish to compute the "best" set of coefficients, $\{x_0, x_1, \ldots, x_n\}$. Note that this model includes a constant offset term, $x_0$. Since we want this model to hold for observations, then we effectively want to solve the system of $m$ equations in $n+1$ unknowns, # # $\left( # \begin{array}{c} # b_1 \\ # b_2 \\ # \vdots \\ # b_m # \end{array} # \right) # $ # = # $\left( # \begin{array}{ccccc} # 1. & a_{1,1} & a_{1,2} & \ldots & a_{1,n} \\ # 1. & a_{2,1} & a_{2,2} & \ldots & a_{2,n} \\ # & & \cdots & & \\ # 1. & a_{m,1} & a_{m,2} & \ldots & a_{m,n} # \end{array} # \right) # \left(\begin{array}{c} # x_0 \\ # x_1 \\ # \vdots \\ # x_n # \end{array}\right), # $ # # or just $Ax=b$. Typically, there are many more observations than parameters ($m \gg n$), in which case we say this linear system is _overdetermined_. # # So how do we compute $x$? Most statistical software and libraries hide the details of computing $x$ from you. However, it's important to understand at least a little bit about what happens under the hood, which is today's topic. # ## Aside: Models, Errors, and Algorithms -- Oh my! # # The main task of data analysis is to help explain and understand some phenomenon by building a mathematical model from data, and analyzing the model or using it to make predictions. During this process, there are several potential sources of error, including: # # * _measurement errors_ in the data, due to limitations in our observational equipment or methodology; # * _modeling error_, due to our mathematical model having simplifying assumptions; # * _truncation error_, due to limitations in our computer algorithms; and # * _rounding error_, due to the fact that we must represent all values on the computer in _finite precision_. # # In this course, we will mostly leave measurement and modeling errors as a topics for other courses. Instead, we will focus on truncation and rounding errors, and to a lesser extent, assessing the potential impact of measurement error. # **Exercise.** Are there other kinds of errors not captured in the list above? # # **Exercise.** Give examples of the kinds of errors that might arise in our motivating problem (i.e., building a linear regression model of crime rates). # _@YOUSE: Enter your discussion / solutions here_ # ## Floating-point arithmetic # # Real values are typically stored in _IEEE floating-point format_. If you have programmed in other languages, you may have seen scalar data types for _single-precision_ and _double-precision_ formats (e.g., `float` and `double` in C/C++/Java). 
A "floating-point" encoding is basically a normalized scientific notation consisting of a _base_, a _sign_, a fractional _mantissa_, and an integer _exponent_. Let's look at an example to see how this might work. # # Consider the value 0.125. In a normalized scientific notation, we would write this number as $+1.25 \times 10^{-1}$, where the base is 10, the mantissa is 1.25, and the exponent is -1. Conceptually, if we always used base 10 for all our floating-point values, then our floating-point encoding of this value would, conceptually, be a tuple $(+, 1.25, -1)$. # # However, we cannot store an infinite number of digits for the mantissa and exponent values. Thus, we would normally _also_ limit the number of digits that may appear in either. We might use, say, 6 digits for the mantissa and 2 digits (ignoring the sign) for the exponent, i.e., a tuple of the form $(\pm, m.mmmmm, \pm xx)$. # **Exercise.** What is the largest value we could represent in this format? What is the smallest value? What is the smallest _positive_ value we could represent? How would we encode these values? # _@YOUSE: Enter your solutions here_ # **Exercise.** Encode the following values as tuples: # # 1. $1.0$ # 2. $-10^{-6}$ # 3. $1.0 - 10^{-6}$ # 4. $1.0 + 10^{-6}$ # _@YOUSE: Enter your solutions here_ # # 1. (+, 1.00000, +00) # 2. (-, 1.00000, -06) # 3. (+, 9.99999, -01) # 4. (+, 1.00000, +00) # 0.000001 == 10^(-6) # # # ### A small surprise? The consequences of finite-precision # # Let $a=1.0$ and $b=10^{-6}$. Now consider two programs. # # _Program 1_: # # s = a - b # t = s + b # # _Program 2_: # # s = a + b # t = s - b # # If the _precision_, or number of digits in the encoding, were infinite, then both programs would produce `t == a == 1.0`. # **Exercise.** Suppose we instead use a _finite-precision_ floating-point encoding, using base 10 digits with 6 digits of precision for the mantissa and 2 digits for the exponent, plus separate sign "digits" for each. What is the final value of `t` in each of these two programs? # _@YOUSE: Enter your solutions here_ # The preceding examples assume the digits are represented in base 10. However, computers encode all values using _bits_, which are _base 2_ digits. All the same ideas as above apply, but on base 2 values rather than base 10 values. # # One consequence of this difference is that certain finite-precision decimal fractions _cannot_ be represented exactly! # # > Can you see why? Consider the decimal value 0.1 represented in a binary format. # # In addition, the IEEE floating-point standard defines the encoding a little differently than we've used it. First, if the value is not 0, then the mantissa _always_ has an implicit "1" as the leading digit; therefore, it needn't be stored explicitly, thereby saving a bit and effectively increasing the precision a little. Secondly, the range of the exponent is not symmetric. In our hypothetical base-10 "6 + 2" encoding, we assumed the exponent would range from -99 to 99, which is a symmetric interval; in IEEE floating-point, there will be a slight asymmetry in this range. Part of the reason is that the IEEE floating-point encoding can also represent several kinds of special values, such as infinities and an odd bird called "not-a-number" or `NaN`. This latter value, which you may have seen if you have used any standard statistical packages, can be used to encode certain kinds of floating-point exceptions that result when, for instance, you try to divide by zero. 
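# As a minimal sketch of this point, Python's standard-library `Decimal` type (which we will also use below) can display the exact value that the binary format actually stores for a decimal literal like 0.1:

# +
from decimal import Decimal

print 0.1 + 0.2 == 0.3   # False: neither 0.1 nor 0.2 is exactly representable in base 2
print Decimal (0.1)      # the exact double-precision value closest to 0.1
print Decimal (0.25)     # 0.25 = 2**(-2) is exact in base 2, so it prints as expected
# -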
# # In IEEE floating-point, there are two main encodings, known as _single-precision_ (`float` in C/C++/Java) and _double-precision_ (`double` in C/C++/Java). In brief, these differ as follows: # # * Single-precision: 32 bits total, with 24 bits for the mantissa and an exponent range of [-126, 127]. # # * Double-precision: 64 bits total, with 53 bits for the mantissa and an exponent range of [-1022, 1023]. # **Exercise.** What is the smallest positive value that can be represented in IEEE single-precision? What about in double-precision? # _@YOUSE: Enter your solutions here_ # **Exercise.** Consider the smallest possible value greater than 1.0 that can be represented in floating-point. Let's call this value, $1.0 + \epsilon$. # # Determine $\epsilon_s$ and $\epsilon_d$, the corresponding values of $\epsilon$ in single-precision and double-precision, respectively. # _@YOUSE: Enter your solutions here_ # Another important consequence of the binary format is that when you print things in base ten, what you see may not be what you get! For instance, try running the code below. # + from decimal import Decimal x = 1.0 + 2.0**(-52) print x print Decimal (x) # What does this do? # - # > Aside: If you ever need true decimal storage with no loss of precision, turn to the `Decimal` package. Just be warned it will come at a price: # + a_native = 1.0 b_native = 2.0 a_decimal = Decimal ('1.0') b_decimal = Decimal ('2.0') # %timeit a_native + b_native # %timeit a_decimal + b_decimal # - # For today's lesson, it will be helpful to occasionally peek at floating-point values "in the raw." For this purpose, we've provided you with a handy routine called `float_to_bin(x)`, which given a floating-point value `x` will return its IEEE representation as a binary string. (It is defined in the `cse6040utils` module.) # # The following code uses `float_to_bin()` to define another function that dumps a floating-point number's complete `Decimal` form along with its binary form. # + def print_float_bin (x, prefix=""): print ("%s: %s\n%s %s" % (prefix, Decimal (x), ' ' * len (prefix), cse6040.float_to_bin (x))) a = -1.0 b = 2.**(-52) # Recall: \epsilon_d c = b / 2. print_float_bin (a, prefix="a") print_float_bin (b, prefix="b") print_float_bin (c, prefix="c") # - # **Exercise.** Recall the two program fragments from above: # # _Program 1_: # # s = a - b # t = s + b # # _Program 2_: # # s = a + b # t = s - b # # Let $a=1.0$ and $b=\epsilon_d / 2 \approx 1.11 \times 10^{-16}$. Write some Python code to evaluate these two programs and compare their outputs. (To look closely at their outputs, use the `print_float_bin()` function from above.) # + a = 1.0 b = 2.**(-53) # What value goes here? s1 = a - b t1 = s1 + b s2 = a + b t2 = s2 - b print_float_bin (s1, prefix="s1") print_float_bin (t1, prefix="t1") print ("\n") print_float_bin (s2, prefix="s2") print_float_bin (t2, prefix="t2") print "\n", t1, t2 print ("\n(t1 == t2) == %s" % (t1 == t2)) # - # By the way, in Numpy you can determine machine epsilon in a more portable way: # + EPS_S = np.finfo (np.float32).eps EPS_D = np.finfo (float).eps print_float_bin (float (EPS_S), prefix="eps_s") print_float_bin (float (EPS_D), prefix="eps_d") # - # ## Perturbation theory and condition numbers # # Given the various sources of error, how do we know whether a given algorithm is "good" for computing the solution to a given problem? An important tool in the area of _numerical analysis_ is _perturbation theory_. 
# # To see perturbation theory in action, suppose we wish to determine by how much a "small" measurement error, $\Delta x$, affects the output of some function, $f(x)$. Barring any other information and assuming $f(x)$ is continuous and differentiable, we could try to estimate the error of evaluating $f(x + \Delta x)$ compared to $f(x)$ by the following linear approximation, which comes from a Taylor series expansion: # # $$f(x + \Delta x) \approx f(x) + \Delta x \cdot f'(x),$$ # # where $f'(x)$ is the first derivative of $f(x)$ at the point $x$. # **Absolute condition numbers.** From this relation, we can compute an upper-bound on the absolute value of the error: # # $$\left|f(x + \Delta x) - f(x)\right| \approx \left|\Delta x\right| \cdot \left|f'(x)\right|.$$ # # This calculation says that the error depends not only on the measurement error, $\Delta x$, _but also_ the nature of the function itself at $x$ through the factor, $\left|f'(x)\right|$. Indeed, we will give this factor a special name of _absolute condition number_ of evaluating $f$ at $x$. For any given computational problem, we will try to find condition numbers to help us quantify the "hardness" of the problem. # # That is, for the problem of evaluating $f(x)$, the preceding analysis says that if this factor is not too large, then small measurement errors will lead to only small errors in the output. In this case, we say the problem is _well-conditioned_. If instead this factor is very large, then even very small errors will lead to large errors in the output. In this case, we say the problem is _ill-conditioned_. # # Put differently, the problem of evaluating $f(x)$ when the condition number is large is inherently _more difficult_ than doing so when the condition number is small. # **Relative condition numbers.** The error considered above is the absolute error, in contrast to the _relative error_, # # $$\left|f(x + \Delta x) - f(x)\right| / \left|f(x)\right|.$$ # # For this case and problem of evaluating $f(x)$, let's rewrite this slightly as, # # $$\frac{\left|f(x + \Delta x) - f(x)\right|} # {\left|f(x)\right|} # \approx # \frac{|\Delta x|} # {|x|} # \cdot # \underbrace{ # \frac{\left|f'(x)\right| \cdot |x|} # {\left|f(x)\right|} # }_{\equiv\ \kappa_r(x)} # ,$$ # # where $\kappa_r(x)$ is the _relative condition number_ of evaluating $f(x)$ at $x$. # # Observe that this relation expresses the relative change in the output as a function of some relative change in the input ($|\Delta x| / |x|$). # **Backward stability.** Let's say someone devises an algorithm to compute $f(x)$. For a given value $x$, let's suppose this algorithm produces the value $\mathrm{alg}(x)$. One important question might be, is that output "good" or "bad?" # # One property to measure "goodness" will be _backward stability_. In particular, we will say that $\mathrm{alg}(x)$ is a _backward stable algorithm_ to compute $f(x)$ if, for all $x$, there exists a "small" $\Delta x$ such that # # $$\mathrm{alg}(x) = f(x + \Delta x).$$ # # That should look familiar! It means we can estimate the (absolute) backward error using our perturbation analysis from before, i.e., # # $$\left|\mathrm{alg}(x) - f(x)\right| \approx \left|f'(x)\right| \cdot \left|\Delta x\right|.$$ # **Round-off errors.** We already know that numerical values can only be represented finitely, which introduces round-off error. Thus, at the very least we should hope that a scheme to compute $f(x)$ is as insensitive to round-off errors as possible. 
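# For instance, here is a small illustrative sketch of why this matters: the two formulas below are algebraically identical (since $1 - \cos x = 2 \sin^2(x/2)$), but the first subtracts two nearly equal numbers and loses most of its accuracy for small $x$, while the second does not.

# +
import numpy as np

for x in [1e-4, 1e-6, 1e-8]:
    naive = (1.0 - np.cos (x)) / x**2           # suffers cancellation in 1 - cos(x)
    stable = 2.0 * np.sin (x / 2.0)**2 / x**2   # uses 1 - cos(x) = 2*sin(x/2)**2
    print x, naive, stable                      # both should be close to 0.5
# -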
# # To quantify sensitivity, the standard technique is to assume that every scalar floating-point operation incurs some bounded error. Let $a \odot b$ be the exact result of some mathematical operation on $a$ and $b$, and let $\mathrm{fl}(a \odot b)$ be the computed value, after rounding in finite-precision. We will model the difference by, # # $$\mathrm{fl}(a \odot b) \equiv (a \odot b) (1 + \delta),$$ # # where $|\delta| \leq \epsilon$, machine epsilon. # ## (Left off here) # # In class, we left off at this point. Indeed, there were some errors in the text that had appeared below, so they have been removed. We will pick up here (with the correct text!) in the next class. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- # # Import Modules # + import os import sys from ase import io from ase.visualize import view # - os.environ.keys() print( 'os.environ["repo_185b"]:', os.environ["repo_185b"] ) # assert False atoms_path = os.path.join( os.environ["dropbox"], "__temp__", "VBiO4_mp-23506_primitive.cif", ) # + # atoms = io.read("/home/raulf2012/GoogleDrive/GDrive_Stanford/TeamDrives/chemeng_185b/Computational_DFT/bulk_structures/VBiO4_mp-23506_primitive.cif") # atoms = io.read("/mnt/c/Users/raul_desktop/Desktop/VBiO4_mp-23506_primitive.cif") atoms = io.read(atoms_path) # atoms = io.read( # os.environ["repo_185b"], # "bulk_structures/VBiO4_mp-23506_primitive.cif", # ) # - atoms # + lat = atoms.cell.get_bravais_lattice() print(lat.description()) # lat.plot_bz(show=True) # lat.plot_bz? # + BandPath = atoms.cell.bandpath( path="GXLTWRX1ZGYSW,L1Y,Y1Z", npoints=200, density=None, special_points=None, eps=0.0002, pbc=True, ) path_list = [] for i_cnt, char in enumerate(BandPath.path): # Skip ',' character if char == ",": continue if char == "1": path_list[-1] += "1" continue path_list.append(char) kpts = BandPath.kpts kpoints_plot = BandPath.get_linear_kpoint_axis() X = kpoints_plot[1] x = kpoints_plot[0] save_dict = dict( kpts=kpts.tolist(), X=X.tolist(), x=x.tolist(), pathlist=path_list, ) # - # # Write data to json import json with open("bandpath.json", "w") as outfile: json.dump(save_dict, outfile, indent=2) # # Read json data # + # ####################################################################### import json data_path = os.path.join( os.environ["repo_185b"], "workflow/band_path", "bandpath.json", ) with open(data_path, "r") as fle: data = json.load(fle) kpts = data["kpts"] X = data["X"] x = data["x"] # ####################################################################### # + active="" # # # # # + jupyter={"source_hidden": true} # kpts.tolist() # X.tolist() # x.tolist() # + jupyter={"source_hidden": true} # view(atoms) # + jupyter={"source_hidden": true} # repo_185b # /home/flores12/00_dropbox/01_norskov/00_git_repos/CHEMENG185B_DFT # + jupyter={"source_hidden": true} # # atoms.cell.get_bravais_lattice? 
# - # + jupyter={"source_hidden": true} # atoms.cell.bandpath( # path="L", # npoints=None, # density=None, # special_points=None, # eps=0.0002, # pbc=True, # ) # + jupyter={"source_hidden": true} # path_list = [] # for i_cnt, char in enumerate(BandPath.path): # # Skip ',' character # if char == ",": # continue # if char == "1": # path_list[-1] += "1" # continue # path_list.append(char) # + jupyter={"source_hidden": true} # "G", # "X", # "L", # "T", # "W", # "R", # "X1", # "Z", # "G", # "Y", # "S", # "W", # "L1", # "Y", # "Y1", # "Z", # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 5.17 Intro to Data Science: Simulation and Static Visualizations # ## 5.17.1 Sample Graphs for 600, 60,000 and 6,000,000 Die Rolls # ## 5.17.2 Visualizing Die-Roll Frequencies and Percentages # ### Launching IPython for Interactive Matplotlib Development # ### Importing the Libraries # # Note: `%matplotlib inline` is an IPython magic that enables Matplotlib-based graphics to be displayed directly in the notebook. # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import random import seaborn as sns # ### Rolling the Die and Calculating Die Frequencies rolls = [random.randrange(1, 7) for i in range(600)] values, frequencies = np.unique(rolls, return_counts=True) # ### Creating the Initial Bar Plot title = f'Rolling a Six-Sided Die {len(rolls):,} Times' sns.set_style('whitegrid') axes = sns.barplot(x=values, y=frequencies, palette='bright') # ### Setting the Window Title and Labeling the x- and y-Axes axes.set_title(title) axes.set(xlabel='Die Value', ylabel='Frequency') # ### Finalizing the Bar Plot axes.set_ylim(top=max(frequencies) * 1.10) for bar, frequency in zip(axes.patches, frequencies): text_x = bar.get_x() + bar.get_width() / 2.0 text_y = bar.get_height() text = f'{frequency:,}\n{frequency / len(rolls):.3%}' axes.text(text_x, text_y, text, fontsize=11, ha='center', va='bottom') # ### Rolling Again and Updating the Bar Plot—Introducing IPython Magics plt.cla() # %recall 5 rolls = [random.randrange(1, 7) for i in range(600)] rolls = [random.randrange(1, 7) for i in range(600)] rolls = [random.randrange(1, 7) for i in range(60000)] # %recall 6-13 values, frequencies = np.unique(rolls, return_counts=True) title = f'Rolling a Six-Sided Die {len(rolls):,} Times' sns.set_style('whitegrid') axes = sns.barplot(x=values, y=frequencies, palette='bright') axes.set_title(title) axes.set(xlabel='Die Value', ylabel='Frequency') axes.set_ylim(top=max(frequencies) * 1.10) for bar, frequency in zip(axes.patches, frequencies): text_x = bar.get_x() + bar.get_width() / 2.0 text_y = bar.get_height() text = f'{frequency:,}\n{frequency / len(rolls):.3%}' axes.text(text_x, text_y, text, fontsize=11, ha='center', va='bottom') values, frequencies = np.unique(rolls, return_counts=True) title = f'Rolling a Six-Sided Die {len(rolls):,} Times' sns.set_style('whitegrid') axes = sns.barplot(x=values, y=frequencies, palette='bright') axes.set_title(title) axes.set(xlabel='Die Value', ylabel='Frequency') axes.set_ylim(top=max(frequencies) * 1.10) for bar, frequency in zip(axes.patches, frequencies): text_x = bar.get_x() + bar.get_width() / 2.0 text_y = bar.get_height() text = f'{frequency:,}\n{frequency / len(rolls):.3%}' axes.text(text_x, text_y, text, fontsize=11, ha='center', va='bottom') # ### Saving Snippets to a File with the %save Magic # 
%save RollDie.py 1-13 # ``` # The following commands were written to file `RollDie.py`: # import matplotlib.pyplot as plt # import numpy as np # import random # import seaborn as sns # rolls = [random.randrange(1, 7) for i in range(600)] # values, frequencies = np.unique(rolls, return_counts=True) # title = f'Rolling a Six-Sided Die {len(rolls):,} Times' # sns.set_style("whitegrid") # axes = sns.barplot(values, frequencies, palette='bright') # axes.set_title(title) # axes.set(xlabel='Die Value', ylabel='Frequency') # axes.set_ylim(top=max(frequencies) * 1.10) # for bar, frequency in zip(axes.patches, frequencies): # text_x = bar.get_x() + bar.get_width() / 2.0 # text_y = bar.get_height() # text = f'{frequency:,}\n{frequency / len(rolls):.3%}' # axes.text(text_x, text_y, text, # fontsize=11, ha='center', va='bottom') # ``` # ### Command-Line Arguments; Displaying a Plot from a Script ########################################################################## # (C) Copyright 2019 by Deitel & Associates, Inc. and # # Pearson Education, Inc. All Rights Reserved. # # # # DISCLAIMER: The authors and publisher of this book have used their # # best efforts in preparing the book. These efforts include the # # development, research, and testing of the theories and programs # # to determine their effectiveness. The authors and publisher make # # no warranty of any kind, expressed or implied, with regard to these # # programs or to the documentation contained in these books. The authors # # and publisher shall not be liable in any event for incidental or # # consequential damages in connection with, or arising out of, the # # furnishing, performance, or use of these programs. # ########################################################################## # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.8 64-bit # metadata: # interpreter: # hash: a062be9dc8f0b4e802ee41733a00058b9f58e2e67b7919a84fe2c611795dfe04 # name: python3 # --- # # Vectorized String Operations # # One strength of Python is it relative ease in handling and manipulating string data. Pandas builds on this and provides a comprehensive set of *vectorized string operations* that become an essential piece of the type of munging required when working with (read:cleaning up) real-world data. In this section, we'll walk through some of the Pandas string operations, and then take a look at using them to partially clean up a very messy dataset of recipes collected from the Internet. # ## Introducing Pandas String Operations # # We saw in previous sections how tools like NumPy and Pandas generalize arithmetic operations so that we can easily and quickly perform the same operation on many array elements. For example: import numpy as np x = np.array([2, 3, 5, 7, 11, 13]) x * 2 # This *vectorization* of operations simplifies the syntax of operating on arrays of data: we no longer have to worry about the size or shape of the array, but just about what opeation we want done. For arrays of strings, NumPy does not provied such simple access, and thus you're stuck using a more verbose loop syntax: data = ['peter', 'Paul', 'MARY', 'gUIDO'] [s.capitalize() for s in data] # This is perhaps sufficient to work with some data, but it will break if there are any missing values. 
For example: data = ['peter', 'Paul', None, 'MARY', 'gUIDO'] [s.capitalize() for s in data] # Pandas includes features to address both this need for vectorized string operations and for correctly handling missing data via the `str` attribute of Pandas Series and Index objects containing strings. So, for example, suppose we create a Pandas Series with this data: import pandas as pd names = pd.Series(data) names # We can now call a single method that will capitalize all the entries, while skipping over any missing values: names.str.capitalize() # Using tab completion on this `str` attribute will list all the vectorized string methods available to Pandas. # ## Tables of Pandas String Methods # # If you have a good understanding of string manipulation in Python, most of Pandas string syntax is intuitive enough that it's probably sufficient to just list a table of available methods; we will start with that here, before diving deeper into a few of the subtleties. The examples in this section use the following series of names: monte = pd.Series(['', '', '', '', '', '']) # ### Methods similar to Python string methods # Nearly all Python's built-in string methods are mirrored by a Pandas vectorized string method. Here is a list of Pandas `str` methods that mirror Python string methods: # # ``` # len() lower() translate() islower() # ljust() upper() startswith() isupper() # rjust() find() endswith() isnumeric() # center() rfind() isalnum() isdecimal() # zfill() index() isalpha() split() # strip() rindex() isdigit() rsplit() # rstrip() capitalize() isspace() partition() # lstrip() swapcase() istitle() rpartition() # ``` # Notice that these have various return values. Some, like `lower()`, return a series of strings: monte.str.lower() # But some others return numbers: monte.str.len() monte.str.startswith('T') # Still others return lists or other compound values for each element: monte.str.split() # We'll see further manipulations of this kind of series-of-lists object as we continue our discussion. # ### Methods using regular expressions # In addition, there are several methods that accept regular expressions to examine the content of each string element, and follow some of the API conventions of Python's built-in `re` module: # # ``` # Method Description # match() Call re.match() on each element, returning a boolean. # extract() Call re.match() on each element, returning matched groups as strings. # findall() Call re.findall() on each element # replace() Replace occurrences of pattern with some other string # contains() Call re.search() on each element, returning a boolean # count() Count occurrences of pattern # split() Equivalent to str.split(), but accepts regexps # rsplit() Equivalent to str.rsplit(), but accepts regexps # ``` # With these, you can do a wide range of interesting operations. For example, we can extract the first name from each by asking for a contiguous group of characters at the beginning of each element: monte.str.extract('([A-Za-z]+)', expand=False) # Or we can do something more complicated, like finding all names that start and end with a consonant, making use of the start-of-string (`^`) and end-of-string (`$`) regular expression characters: monte.str.findall(r'^[^AEIOU].*[^aeiou]$') # The ability to concisely apply regular expressions across `Series` or `DataFrame` entries opens up many possibilities for analysis and cleaning of data.
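# As a further (made-up) example of how a few of these regular-expression methods combine on a small series of names:

# +
import pandas as pd

s = pd.Series(['Ada Lovelace', 'Alan Turing', 'Grace Hopper'])   # illustrative data only
print(s.str.contains(r'^A'))                                     # which names start with 'A'
print(s.str.extract(r'([A-Za-z]+)$', expand=False))              # last word of each name
print(s.str.replace(r'\s+', ' ', regex=True))                    # normalize internal whitespace
# -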
# ### Miscellaneous methods # # Finally, there are some miscellaneous methods that enable other convenient operations: # ``` # Method Description # get() Index each element # slice() Slice each element # slice_replace() Replace slice in each element with passed value # cat() Concatenate strings # repeat() Repeat values # normalize() Return Unicode form of string # pad() Add whitespace to left, right, or both sides of strings # wrap() Split long strings into lines with length less than a given width # join() Join strings in each element of the Series with passed separator # get_dummies() Extract dummy variables as a DataFrame # ``` # #### Vectorized item access and slicing # # The `get()` and `slice()` operations, in particular, enable vectorized element access from each array. For example, we can get a slice of the first three characters of each array using `str.slice(0, 3)`. Note that this behavior is also available through Python's normal indexing syntax; for example, `df.str.slice(0, 3)` is equivalent to `df.str[0:3]`: monte.str[0:3] # Indexing via `df.str.get(i)` and `df.str[i]` is likewise similar. # # These `get()` and `slice()` methods also let you access elements of arrays returned by `split()`. For example, to extract the last name of each entry, we can combine `split()` and `get()`: monte.str.split().str.get(-1) # #### Indicator variables # # Another method that requires a bit of extra explanation is the `get_dummies()` method. This is useful when your data has a column containing some sort of coded indicator. For example, we might have a dataset that contains information in the form of codes, such as A="born in America," B="born in the United Kingdom," C="likes cheese," D="likes spam": full_monte = pd.DataFrame({'name': monte, 'info': ['B|C|D', 'B|D', 'A|C', 'B|D', 'B|C', 'B|C|D']}) full_monte # The `get_dummies()` routine lets you quickly split out these indicator variables into a `DataFrame`: full_monte['info'].str.get_dummies('|') # With these operations as building blocks, you can construct an endless range of string processing procedures when cleaning your data. # ## Example: Recipe Database # # These vectorized string operations become most useful in the process of cleaning up messy, real-world data. Here I'll walk through an example of that, using an open recipe database compiled from various sources on the Web. Our goal will be to parse the recipe data into ingredient lists, so we can quickly find a recipe based on some ingredients we have on hand. # !gzip try: recipes = pd.read_json('data/recipeitems-latest.json') except ValueError as e: print("ValueError:", e) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lab Assignment 4 # link # # For this assignment, you will continue using the wheat seeds data set. Start by opening up the starter code located in Module3/assignment4.py, and write code that... # # 1. Loads up the seeds dataset, located at Module3/Datasets/wheat.data into a dataframe # 2. Drop the id, area, and perimeter features from your dataset # 3. Plot a parallel coordinates chart, grouped by the wheat_type feature.
Be sure to set the optional display parameter alpha to 0.4 # # Once you're done, answer the following questions about your work: # + # imports import pandas as pd from pandas.tools.plotting import parallel_coordinates import matplotlib import matplotlib.pyplot as plt matplotlib.style.use('ggplot') # because it looks pretty # + # 1.Loads up the seeds dataset, located at Module3/Datasets/wheat.data' into a dataframe. You should be very good at doing this by now. wheatdata_ff = r'C:\Users\ng35019\Documents\Training\python_for_ds\Module3\Datasets\wheat.data' df = pd.read_csv(wheatdata_ff); df.head(10) # + # 2.Drop the id, area, and perimeter features from your dataset df1 = df.drop(['id','area','perimeter'], axis=1); df1.head(10) # - help(plt.figure) # + # 3.Plot a parallel coordinates chart, grouped by the wheat_type feature. # Be sure to set the optional display parameter alpha to 0.4 plt.figure(figsize=(14,8)) parallel_coordinates(df1, 'wheat_type', alpha=0.4) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import time from cricsheet.fow_analysis.collapses.extract_collapses import return_collapses # ## Load and Format Single Match filepath = '../data/raw/csv/howstat/fall_of_wickets/' file = 'fow_1999.csv' df = pd.read_csv(filepath+file, index_col=0, parse_dates=[2], infer_datetime_format=True) df.groupby(['MatchId','MatchInnings', 'Team']).apply(return_collapses).reset_index() # Questions to answer: # - Number of collapses by Team, by year (unique collapses, innings with a collapse) # - Positions most often involved # - Batters most often involved # ## Load and Format All Matches # ### Experiments - Concatenating Dfs # # ~2500 matches. # 1) What is the most efficient way to load all dataframes? 
# # 2) Is it more efficient to load and format one-by-one, or concatenate into a single df and groupby the whole df # # + # Try with 100 dataframes initially n = 100 filepath = '../data/raw/csv/howstat/fall_of_wickets/' # - # #### Method 1: Load all using glob generator + concat import glob import os # + start = time.time() all_files = glob.glob(os.path.join(filepath, "*.csv")) all_files_to_load = all_files[:] df_from_each_file = (pd.read_csv(f, index_col=0, parse_dates=[2], infer_datetime_format=True) for f in all_files_to_load) concatenated_df = pd.concat(df_from_each_file, ignore_index=True) end = time.time() time_taken = end - start print(f'{time_taken} seconds') # - """ Attempts: 13 seconds 6.19 seconds """ # #### Method 2: os.listdir + concat # + from os import listdir start = time.time() df = pd.concat([pd.read_csv(f'{filepath}/{f}', index_col=0, parse_dates=[2], infer_datetime_format=True) for f in os.listdir(filepath) if f.endswith('.csv')]) end = time.time() time_taken = end - start print(f'{time_taken} seconds') # - """ Attempts: 6.37 seconds 6.012 """ # #### Method 3: Dask # + import dask.dataframe as dd start = time.time() df = dd.read_csv(f'{filepath}*.csv') end = time.time() time_taken = end - start print(f'{time_taken} seconds') # + start = time.time() df = df.compute() end = time.time() time_taken = end - start print(f'{time_taken} seconds') # - """ Attempts: 33 seconds """ df.info() # I prefer Method 1 # ### Apply Chosen Method # + import glob import os filepath_fow = '../data/raw/csv/howstat/fall_of_wickets/' all_fow = glob.glob(os.path.join(filepath_fow, "*.csv")) all_fow_to_load = all_fow[:] df_fow_from_each_file = (pd.read_csv(f, index_col=0, parse_dates=[2], infer_datetime_format=True) for f in all_fow_to_load) concatenated_df_fow = pd.concat(df_fow_from_each_file, ignore_index=True) # - concatenated_df_fow.info() #concatenated_df_fow.to_csv('../data/interim/howstat/fall_of_wickets/fow_all.csv') concatenated_df_fow = pd.read_csv('../data/interim/howstat/fall_of_wickets/fow_all.csv') # + filepath_scores = '../data/raw/csv/howstat/scorecards/' all_scores = glob.glob(os.path.join(filepath_scores, "*.csv")) all_scores_to_load = all_scores[:] df_scores_from_each_file = (pd.read_csv(f, index_col=0, parse_dates=[2], infer_datetime_format=True) for f in all_scores_to_load) concatenated_df_scores = pd.concat(df_scores_from_each_file, ignore_index=True) # - concatenated_df_scores = concatenated_df_scores[['MatchId', 'MatchInnings', 'Team', 'TeamInnings', 'Player', 'R', 'BF']] concatenated_df_scores['BattingPosition'] = concatenated_df_scores.groupby(['MatchId','MatchInnings', 'Team']).cumcount() + 1 concatenated_df_scores.info() #concatenated_df_scores.to_csv('../data/interim/howstat/scorecards/scorecards_all.csv') concatenated_df_scores = pd.read_csv('../data/interim/howstat/scorecards/scorecards_all.csv') # ### Experiments - Fuzzy Matching # # # We want to get some information from the scorecards into the Fall of Wickets objects. Unfortunately the batter names don't match exactly (scorecards have initials as well). We need to do some fuzzy matching before joining info from the scorecards. 
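# Before comparing the approaches below, a tiny made-up example shows the kind of mismatch involved and how `difflib` can bridge it (these names are illustrative only, not taken from the data):

# +
import difflib

fow_names = ['Smith', 'Taylor', 'Jones']                    # fall-of-wickets style: surname only
scorecard_names = ['J A Smith', 'R W Taylor', 'D Jones']    # scorecard style: initials included

for name in fow_names:
    best = difflib.get_close_matches(name, scorecard_names, n=1, cutoff=0.3)
    print(name, '->', best[0] if best else None)
# -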
df_fow_to_merge = concatenated_df_fow[concatenated_df_fow.MatchId<=10] df_scores_to_merge = concatenated_df_scores[concatenated_df_scores.MatchId<=10] # #### Method 1: fuzzy wuzzy # + from cricsheet.utils import fuzzy_merge start = time.time() df_merged = fuzzy_merge(df_fow_to_merge, df_scores_to_merge, 'Player', 'Player', 80) end = time.time() time_taken = end - start print(f'{time_taken} seconds') # - # 2 seconds for 10 matches. # # 183 seconds for 100 matches. # # We have 2400 so this won't scale. Estimate: >2hrs to match all. # #### Method 2: fuzzy-pandas # + import fuzzy_pandas as fpd start = time.time() matches = fpd.fuzzy_merge(df_fow_to_merge, df_scores_to_merge, left_on=['Player'], right_on=['Player'], method='levenshtein', ignore_case=True, keep='match', join='left-outer', threshold=0.3) end = time.time() time_taken = end - start print(f'{time_taken} seconds') # - # 15 seconds for 10 matches. # # 51 seconds for 20 matches. # # We have 2400 so this won't scale. Estimate: >2hrs to match all. # #### Method 3: difflib # + import difflib from functools import partial start = time.time() f = partial( difflib.get_close_matches, possibilities=df_scores_to_merge['Player'].tolist(), n=1, cutoff=0.3) matches = df_fow_to_merge['Player'].map(f).str[0].fillna('') scores = [ difflib.SequenceMatcher(None, x, y).ratio() for x, y in zip(matches, df_fow_to_merge['Player']) ] df_fuzzy_matched = df_fow_to_merge.assign(best=matches, score=scores) end = time.time() time_taken = end - start print(f'{time_taken} seconds') # - # 0.85 seconds for 10 matches. # # 64 seconds for 100 matches. # # 548 seconds for 300 matches. df_fuzzy_matched['score'].describe() # All of the above approaches match all of df_fow with all of df_scorecard. # That isn't necessary. We can do it for each Match, since the batters will match for each match. 
# ### Load and Fuzzy-match a Match at a time # + import glob import os filepath_scores = '../data/raw/csv/howstat/scorecards/' all_scores = glob.glob(os.path.join(filepath_scores, "*.csv")) all_scores_to_load = all_scores[:100] filepath_fow = '../data/raw/csv/howstat/fall_of_wickets/' all_fow = glob.glob(os.path.join(filepath_fow, "*.csv")) all_fow_to_load = all_fow[:100] # + import difflib from functools import partial def fuzzy_match(df1, df2, left_on, right_on): f = partial( difflib.get_close_matches, possibilities=df2[right_on].tolist(), n=2, cutoff=0.3) matches = df1[left_on].map(f).str[0].fillna('') scores = [ difflib.SequenceMatcher(None, x, y).ratio() for x, y in zip(matches, df1[left_on]) ] df_fuzzy_matched = df1.assign(best=matches, score=scores) return df_fuzzy_matched # + # check the match_ids match l_merged_df = [] # for each fow file: for i in range(len(all_fow_to_load)): #print(f'file: {i}') # read fow, read scorecard df_fow = pd.read_csv(all_fow_to_load[i], index_col=0, parse_dates=[2], infer_datetime_format=True) df_scores = pd.read_csv(all_scores_to_load[i], index_col=0, parse_dates=[2], infer_datetime_format=True) # select cols in scorecard, rename df_scores = df_scores[['MatchId', 'MatchInnings', 'Team', 'TeamInnings', 'Player', 'R', 'BF']] df_scores['BattingPosition'] = df_scores.groupby(['MatchId','MatchInnings', 'Team']).cumcount() + 1 l_innings = [] for innings in df_fow.MatchInnings.unique(): #print(f'innings: {innings}') df_fow_innings = df_fow[df_fow.MatchInnings==innings] df_scores_innings = df_scores[df_scores.MatchInnings==innings] # fuzzy match on Player df_matched_innings = fuzzy_match(df_fow_innings, df_scores_innings, 'Player', 'Player') # merge cols from scores df_merged_innings = df_matched_innings.merge(df_scores_innings, how='left', left_on=['MatchId', 'MatchInnings', 'Team', 'TeamInnings', 'best'], right_on=['MatchId', 'MatchInnings', 'Team', 'TeamInnings', 'Player'] ) # reformat df_merged_innings.drop(['Player_x', 'Player_y'], axis=1, inplace=True) df_merged_innings = df_merged_innings.rename({'best':'Player'}, axis=1) df_merged_innings['Player'] = df_merged_innings['Player'].apply(lambda x: re.sub('[!,*)@#%(&$_?.^†]', '', x)) l_innings.append(df_merged_innings) df_merged_match = pd.concat(l_innings) l_merged_df.append(df_merged_match) # - df = pd.concat(l_merged_df) from cricsheet.fow_analysis.collapses.preprocess_data import load_all_and_process df = load_all_and_process() df # + start = time.time() df_grouped = df.groupby(['MatchId','MatchInnings', 'Team']).apply(return_collapses).reset_index() end = time.time() time_taken = end - start print(f'{time_taken} seconds') # - df_grouped.to_csv('../data/processed/collapses/all_collapses.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="J0Dlgnkbqgpk" # #NEURAL STYLE TRANSFER # + id="8e-6IH1HRNnD" # %%capture # !pip install gradio # + [markdown] id="uAHb2bq7qZrD" # ##Importing Libraries # + id="haTY0gGEPdOu" import gradio as gr # + id="gC7xVOx7zE9D" import tensorflow_hub as hub import tensorflow as tf import matplotlib.pyplot as plt import IPython.display as display import matplotlib as mpl import os import time from PIL import Image import numpy as np import PIL.Image import time import functools import cv2 # + colab={"base_uri": "https://localhost:8080/"} id="Kg4mzkNruD8L" outputId="680e7797-ccb9-4d30-f4f3-25ef1e448c52" # !mkdir 
nstmodel # !wget -c https://storage.googleapis.com/tfhub-modules/google/magenta/arbitrary-image-stylization-v1-256/2.tar.gz -O - | tar -xz -C /content/nstmodel # + id="XsxV3105pGyz" import tensorflow.keras from PIL import Image, ImageOps import numpy as np MODEL_PATH='/content/nstmodel' # Disable scientific notation for clarity np.set_printoptions(suppress=True) # Load the model model = tensorflow.keras.models.load_model(MODEL_PATH) # + id="Or4H3jtCc3Yq" def tensor_to_image(tensor): tensor = tensor*255 tensor = np.array(tensor, dtype=np.uint8) if np.ndim(tensor)>3: assert tensor.shape[0] == 1 tensor = tensor[0] return PIL.Image.fromarray(tensor) # + [markdown] id="k28bGvpF0bgp" # ##Saving unscaled Tensor images. # + id="3HgBL3bCJwEZ" def save_image(image, filename): """ Saves unscaled Tensor Images. Args: image: 3D image tensor. [height, width, channels] filename: Name of the file to save to. """ if not isinstance(image, Image.Image): image = tf.clip_by_value(image, 0, 255) image = Image.fromarray(tf.cast(image, tf.uint8).numpy()) image.save("%s.jpg" % filename) print("Saved as %s.jpg" % filename) # + [markdown] id="v1GRqqoq0LNp" # ##Grayscaling image for testing purpose to check if we could get better results. # + id="kHIud3iSa11t" import cv2 def gray_scaled(inp_img): gray = cv2.cvtColor(inp_img, cv2.COLOR_BGR2GRAY) gray_img = np.zeros_like(inp_img) gray_img[:,:,0] = gray gray_img[:,:,1] = gray gray_img[:,:,2] = gray return gray_img # + id="SBcz5IelMLRc" def transform_mymodel(content_image,style_image): # Convert to float32 numpy array, add batch dimension, and normalize to range [0, 1] content_image=gray_scaled(content_image) content_image = content_image.astype(np.float32)[np.newaxis, ...] / 255.0 style_image = style_image.astype(np.float32)[np.newaxis, ...] / 255.0 #Resizing image style_image = tf.image.resize(style_image, (256, 256)) # Stylize image outputs = model(tf.constant(content_image), tf.constant(style_image)) stylized_image = outputs[0] # stylized = tf.image.resize(stylized_image, (356, 356)) stylized_image =tensor_to_image(stylized_image) save_image(stylized_image,'stylized') return stylized_image # + id="L7OaaI836mme" def gradio_intrface(mymodel): # Initializing the input component image1 = gr.inputs.Image() #CONTENT IMAGE image2 = gr.inputs.Image() #STYLE IMAGE stylizedimg=gr.outputs.Image() gr.Interface(fn=mymodel, inputs= [image1,image2] , outputs= stylizedimg,title='Style Transfer').launch(share=False,) # + [markdown] id="XLPR91kkz_Jy" # ##The function will be launched both inline and outline where u need to add a content and style image. 
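#
# Compatibility note (an assumption about your environment, not part of the original notebook): the launcher above relies on the pre-3.0 `gr.inputs`/`gr.outputs` namespaces, which were removed in Gradio 3.x. A rough equivalent for newer releases, keeping the same `transform_mymodel` signature, is sketched below.

# +
def gradio_interface_v3(mymodel):
    # Top-level components replace the old gr.inputs / gr.outputs namespaces
    content_image = gr.Image()    # content image
    style_image = gr.Image()      # style image
    stylized_image = gr.Image()
    gr.Interface(fn=mymodel,
                 inputs=[content_image, style_image],
                 outputs=stylized_image,
                 title='Style Transfer').launch(share=False)
# -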
# # + id="ZalVEN2YWvHv" colab={"base_uri": "https://localhost:8080/", "height": 590} outputId="fb4c7cad-a7d2-49b3-b41b-25f9dd12c39d" gradio_intrface(transform_mymodel) # + id="YCF7bN6VzULi" # + id="Cd7GfC2SzUI6" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + language="javascript" # $('#appmode-leave').hide(); # $('#copy-binder-link').hide(); # $('#visit-repo-link').hide(); # + import ipywidgets as ipw import json import random import time import pandas as pd import os import webbrowser import math from IPython.display import display, Markdown, Math, Latex, clear_output # set kinetic parameters with open("rate_parameters.json") as infile: jsdata = json.load(infile) params = jsdata["equi4"] # - # The hydrolysis Sucrose into Glucose and Fructose is catalysed by the enzyme Invertase. # \begin{equation} # Sucrose + Invertase + \mathrm{H_2O} \to Glucose + Fructose # \end{equation} # There are however several substances that can inhibit the efficacy of the catalyst # # Imagine performing a series of experiments using different initial concentration of Sucrose where you measure the rate of formation of Glucose with. The results of your experiments are affected by the presence of a contaminating substance that interferes with the catalytic reaction. Although you can somewhat control the concentration of the contaminant, you cannot completely eliminate it. # # 1. Determine whether the contaminating substance inhibits the catalytic reaction and the type of the inhibition mechanism, *e.g.* Competitive, Uncompetitive, Non-competitive or Mixed. # # 2. Determine the maximum rate achieved by the reaction, $V_{max}$ and the Michaelis constant, $K_M$, in the case you could completely eliminate the contamininat. # # ### Tips: # - Note that every time you restart the experiment the type of the inhibition mechanism may change. # # ### Instructions: # # - Use the slide bar below to select temperature at which you perform the virtual experiment, # - Click `Perform measurement` to run the virtual experiment and obtain the result of the experiment, # - Click `Download CSV` to export the complete data set for all the experiments as a CSV file. 
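#
# A useful reference when analysing the exported data is the standard textbook form of the Michaelis-Menten equation with inhibition (this is the general mixed-inhibition form; it is not necessarily the exact parameterisation used by the hidden simulator below):
# \begin{equation}
# v = \frac{V_{max}[S]}{\alpha K_M + \alpha^{\prime}[S]}, \qquad \alpha = 1 + \frac{[I]}{K_i}, \qquad \alpha^{\prime} = 1 + \frac{[I]}{K_i^{\prime}}
# \end{equation}
# Competitive inhibition corresponds to $\alpha > 1$, $\alpha^{\prime} = 1$; uncompetitive to $\alpha = 1$, $\alpha^{\prime} > 1$; mixed to both factors greater than 1; and non-competitive to the special mixed case $K_i = K_i^{\prime}$. Plotting $1/v$ against $1/[S]$ (a Lineweaver-Burk plot) at several inhibitor concentrations is one standard way to tell these cases apart.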
# + # define path to results.csv file respath = os.path.join(os.getcwd(), "..", "results.csv") # delete existing result file and setup rng if os.path.exists(respath): os.remove(respath) class system: def __init__(self, vol=0, conc=0, press=0): self.vol = vol self.conc = conc self.press = press self.inhibition = 0 self.seed = 0 self.Vm = 0 self.Km = 0 self.Ki = 0 self.Kip= 0 class data: def __init__(self, start=-1, error=0, label='none', units='pure', value=0, minval=-1, maxval=3, text='none'): self.start = start self.minval = minval self.maxval = maxval self.error = error self.label = label self.units = units self.value = value self.text = text # Experiment setup (+ hidden paramters) system = system() def initialiseExperiment(): global n global system global columns_list global scatter scatter = 0.01 n = [] columns_list = [] n.append(len(args)) # number of input adjustable parameters n.append(len(result)) # number of results for the experiment for i in range(0, n[0]): columns_list.append(f"{args[i].label} [{args[i].units}]") for i in range(0, n[1]): columns_list.append(f"{result[i].label} [{result[i].units}]") # Random number seed t = int( time.time() * 1000.0 ) system.seed = ((t & 0xff000000) >> 24) + ((t & 0x00ff0000) >> 8) +((t & 0x0000ff00) << 8) +((t & 0x000000ff) << 24) random.seed(system.seed) # Random inhibition type rnd = random.random() system.inhibition = int(5 * rnd) if (system.inhibition > 4): system.inhibition = 4 system.Vm = params["Vm"] * (1 + random.random()/2) system.Km = params["Km"] * (1 + random.random()/2) system.Ki = system.Km * random.random() system.Kip= system.Km * random.random() # + # Adjustable input parameters def initialiseVariables(): global logScale logScale = True global args args = [] args.append( data( label = "[S]", minval = -3, maxval = 1, start = 0.001, units = "mol/L", value = 0. ) ) args.append( data( label = "[I]", minval = -3, maxval = 0, start = 0.001, units = "mol/L", value = 0. ) ) # Results def initialiseResults(): global result result = [] result.append( data( label = "Reaction Rate", start = 0., error = random.random() / 10., units = "mol/L·min" ) ) def measure(): concS = float(args[0].text.value) concI = float(args[0].text.value) Vm = system.Vm Km = system.Km Ki = system.Ki Kip= system.Kip # no inhibition a = 1 ap = 1 # competitive if (system.inhibition == 1): a = 1 + concI / Ki ap = 1 adp = 1 # non-competitive elif (system.inhibition == 4): a = 1 ap = 1 + concI / Ki adp = 1 # un-competitive elif (system.inhibition == 2): a = 1 ap = 1 adp = 1. / (1 + concI / Kip) # mixed elif (system.inhibition == 3): a = 1 + concI / Ki ap = 1 adp = 1. 
/ (1 + concI / Kip) res = (ap * adp) * Vm * concS / ((a * adp)*Km + concS) return res initialiseVariables() # + out_P = ipw.Output() out_L = ipw.Output() out_X = ipw.Output() with out_L: display(Markdown("[Download CSV](../results.csv)")) def calc(btn): out_P.clear_output() # Measurement result result[0].value = measure() # Random error result[0].error = result[0].value * scatter * (0.5 - random.random()) * 2 # Output result out_R[0].value = f"{result[0].value + result[0].error:.3e}" # Read previous lines res = pd.read_csv(respath) var_list = [] for i in range(0, n[0]): var_list.append(args[i].text.value) for i in range(0, n[1]): var_list.append(result[i].value + result[i].error) # Append result res.loc[len(res)] = var_list res.to_csv(respath, index=False) with out_P: display(res.tail(50)) def reset(btn): if os.path.exists(respath): os.remove(respath) initialiseResults() initialiseExperiment() res = pd.DataFrame(columns=columns_list) res.to_csv(respath, index=False) with out_P: out_P.clear_output() display(res.tail(1)) with out_X: out_X.clear_output() btn_reset = ipw.Button(description="Restart Laboratory", layout=ipw.Layout(width="150px")) btn_reset.on_click(reset) btn_calc = ipw.Button(description="Perform measurement", layout=ipw.Layout(width="150px")) btn_calc.on_click(calc) # --- rows = [] reset(btn_reset) args[0].text = ipw.Text(str(args[0].start)) rows.append(ipw.HBox([ipw.Label('Initial concentration of ' + args[0].label + ' : '),args[0].text])) args[1].text = ipw.Text(str(args[1].start)) rows.append(ipw.HBox([ipw.Label('Initial concentration of ' + args[1].label + ' : '),args[1].text])) out_R = [] for i in range(0, n[1]): out_R.append(ipw.Label(value="")) rows.append(ipw.HBox([ipw.Label(value=f"Measured {result[i].label} [{result[i].units}]:", layout=ipw.Layout(width="250px")), out_R[i]])) rows.append(ipw.HBox([btn_reset, btn_calc, out_L])) def calc2(btn): random.seed(system.seed) rnd = random.random() iType = int(4 * rnd) + 1 with out_X: out_X.clear_output() if (iType == 1): display(Markdown(r'Competitive inhibition')) elif (iType == 2): display(Markdown(r'Un-Competitive inhibition')) elif (iType == 3): display(Markdown(r'Mixed inhibition')) elif (iType == 4): display(Markdown(r'Non-Competitive inhibition')) else: display(Markdown(r'No inhibition')) display(Markdown(r'$K_M$ = 'rf'{system.Km:7.5}')) display(Markdown(r'$V_{max}$ = 'rf'{system.Ki:7.5}')) if (iType == 1) or (iType == 3) or (iType == 4): display(Markdown(r'$K_i$ = 'rf'{system.Ki:7.5}')) if (iType == 2) or (iType == 3) or (iType == 4): display(Markdown(r'$K_i^\prime$ = 'rf'{system.Kip:7.5}')) display(out_X) btn_calc2 = ipw.Button(description="Check Inhibition Type", layout=ipw.Layout(width="150px")) btn_calc2.on_click(calc2) rows.append(ipw.HBox([btn_calc2])) rows.append(ipw.HBox([out_P])) ipw.VBox(rows) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Day 4: Intro to Grama # # - Worked example # - Grama motivation # - Grama elements # - Exploratory model analysis # # + import grama as gr import numpy as np import scipy as sp import pandas as pd import matplotlib.pyplot as plt from plotnine import * DF = gr.Intention() # Set figure options plt.rcParams['figure.figsize'] = [6, 6] # Need square aspect ratio for gradients to appear normal plt.rcParams['figure.dpi'] = 100 # 200 e.g. 
is really fine, but slower # - # ## Worked Example: Fitting a model # Load data # # + ## Load built-in dataset from grama.data import df_trajectory_windowed ( df_trajectory_windowed >> ggplot(aes("x", "y")) + geom_point() ) # - # Suppose we want to make predictions on where the projectile will land; we'll need a *model*: # + ## Load built-in Grama model from grama.models import make_trajectory_linear md_traj = make_trajectory_linear() ## Print info about this model md_traj.printpretty() # - # This is a model for a projectile's trajectory using Newton's laws and a linear drag model. The model takes in inputs `u0, v0, tau` (parameters) and the time `t` and outputs `x, y` coordinates. # # Making a prediction with uninformed guesses will be terrible: # + ## Need to set parameters for prediction: u0 = 20 # initial horizontal velocity v0 = 20 # initial vertical velocity tau = 5 # time constant (drag) ## Make a prediction with the model df_prediction = gr.eval_df( md_traj, df=gr.df_make(u0=u0, v0=v0, tau=tau, t=np.linspace(0, 2)) ) ## Visualize ( df_trajectory_windowed >> ggplot(aes("x", "y")) + geom_point() + geom_line(data=df_prediction, color="blue") ) # - # Use *nonlinear least squares* (NLS) to **optimize** the parameter values: # # + ## Fit the model md_fit = gr.fit_nls( df_trajectory_windowed, md=md_traj, method="SLSQP", seed=101, verbose=False, ) md_fit.printpretty() ## Make prediction with fitted model df_pred_fit = gr.eval_df( md_fit, df=gr.df_make(t=np.linspace(0, 4.5)) ) ## Visualize ( df_trajectory_windowed >> ggplot(aes("x", "y")) + geom_point() + geom_line(data=df_pred_fit, color="salmon") ) # - # ## Grama Motivation # # --- # # Idea: Have one computational object that you can use to encapsulate *many* of the assumptions and choices in quantitative modeling. # # Some example modeling choices: # - Select the model inputs # - Specify bounds for the inputs # - Simplify the physics # - Fit a distribution for inputs # # Grama is a software package to help make model building and analysis more transparent, convenient, and understandable. # # Modeling in two phases: # - Model building # - Model analysis # # ### Model Building # # --- # # Make a blank model # ## Create a blank model md = gr.Model("Base model") ## Print the details md.printpretty() # Make a model with a function ## Create a blank model md = gr.Model("Base model") ## Add a function md = gr.comp_function( md, fun=lambda X: X[0], # f(x) = x var=["x"], # Inputs: x out=["f"], # Outputs: f ) ## Print the details md.printpretty() # Make a model with function and variable bounds ## Create a blank model md = gr.Model("Base model") ## Add a function md = gr.comp_function( md, fun=lambda X: X[0], var=["x"], out=["f"], ) ## Add a bound md = gr.comp_bounds( md, x=(-1, +1), ) ## Print the details md.printpretty() # ### Example: RLC circuit # # **Model Functions** # # Parallel RLC circuit: here are some key performance indicators of the circuit # # $$\omega_0 = \sqrt{\frac{1}{LC}}$$ # # $$Q = \omega_0 RC.$$ # # **Model Domain** # # Decide on a range of values for $R, L, C$ to test. 
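#
# Before wiring these formulas into a Grama model, a quick plain-NumPy check can confirm they behave as expected (the R, L, C values below are arbitrary illustrations within the bounds chosen in the next cell, not part of the original notebook):

# +
## Quick sanity check of the formulas above (illustrative values only)
R_chk, L_chk, C_chk = 1e-1, 1e-6, 1e-1      # resistance [ohm], inductance [H], capacitance [F]

omega0_chk = np.sqrt(1 / (L_chk * C_chk))   # natural frequency [rad/s]
Q_chk = omega0_chk * R_chk * C_chk          # quality factor [dimensionless]

print(f"omega0 = {omega0_chk:.3e} rad/s, Q = {Q_chk:.3e}")
# -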
# + ## Implement RLC Grama model md_RLC = ( gr.Model("RLC Circuit") ## Add the natural frequency >> gr.cp_vec_function( fun=lambda df: gr.df_make( omega0=np.sqrt(1 / df.L / df.C) ), name="natural frequency", var=["L", "C"], out=["omega0"], ) ## Add the Q factor >> gr.cp_vec_function( fun=lambda df: gr.df_make( Q=df.omega0 * df.R * df.C ), name="quality factor", var=["omega0", "R", "C"], out=["Q"] ) ## Set bounds for the input variables >> gr.cp_bounds( R=(1e-3, 1e0), # resistance [ohm] L=(1e-9, 1e-3), # inductance [H] C=(1e-3, 10), # capacitance [F] ) ) md_RLC.printpretty() # - # **Remember**: You can always use `md.printpretty()` to inspect a model; see its inputs, outputs, function names, bounds/distribution. # # ## Model Analysis # # Evaluate the model at specified `R,L,C` values: # gr.eval_df( md_RLC, df=gr.df_make(R=1000, L=0.1, C=0.1) ) # Fit the model to find `R,L,C` values for specified `omega0, Q`. df_rlc = gr.eval_nls( md_RLC, df_data=gr.df_make(omega0=10, Q=1), n_restart=10 ) df_rlc # Check that those values give the desired values # gr.eval_df(md_RLC, df=df_rlc) # ## Grama Elements # # --- # # Grama considers *data* and *models*. Data are organized into DataFrames, which are handled by the Pandas package. # df_example = pd.DataFrame(dict( x=[1, 2, 3], y=[0.1, 0.2, 0.3], z=["a", "b", "c"] )) df_example # DataFrames are useful because they're more human-readable than arrays. Each column has a name, so we can access specific columns with `df.variable`: df_example.x # There are four fundamental Grama [verbs](https://py-grama.readthedocs.io/en/latest/source/language.html#verbs): # # | Verb Type | Prefix (Short) | In | Out | # |---|---|---|---| # | Compose | `comp_` (`cp_`) | `md` | `md` | # | Evaluate | `eval_` (`ev_`) | `md` | `df` | # | Fit | `fit_` (`ft_`) | `df` | `md` | # | Transform | `tran_` (`tf_`) | `df` | `df` | # | Plot | `plot_` (`pt_`) | `df` | (Plot) | # # ### Compose # # Used primarily to build up a model # ## Create a blank model md = gr.Model("Base model") ## Add a function md = gr.comp_function( ## Take in function; will return modified md, fun=lambda X: X[0], var=["x"], out=["f"], ) ## Print the details md.printpretty() # ### Evaluate # # Used to generate data from a model # df_result = gr.eval_df( ## Model to evaluate md, ## DataFrame at which to evaluate df=gr.df_make(x=[0, 1, 2]) ) df_result # ### Fit # # Used to derive a model from data # # First, set up a scenario with data and a model to fit: # # + ## from grama.models import make_trajectory_linear from grama.data import df_trajectory_windowed md_trajectory = make_trajectory_linear() md_trajectory.printpretty() # - # Fit the model # md_fit = gr.fit_nls(df_trajectory_windowed, md_trajectory) md_fit.printpretty() # ### Transform # # Used to transform data # df_trajectory_windowed.head() # Estimate time derivatives with finite differences # ( df_trajectory_windowed >> gr.tf_mutate( # Estimate horizontal velocity # (x1 - x0) / (t1 - t0) dxdt=(DF.x - gr.lag(DF.x)) / (DF.t - gr.lag(DF.t)), dydt=(DF.y - gr.lag(DF.y)) / (DF.t - gr.lag(DF.t)), ) >> gr.tf_head() ) # ## Exploratory Model Analysis # # --- # # Grama is useful for *exploratory model analysis*; making sense of how a model behaves with respect to its inputs. Let's look at a simple model to build intuition. 
# # # $$f(x, a) = a \exp(x)$$ # + md_exponential = ( ## Start an empty model gr.Model("Exponential model") ## Add in the function >> gr.cp_vec_function( fun=lambda df: gr.df_make( f=df.a * np.exp(df.x) ), var=["a", "x"], out=["f"], name="Exponential" ) ## Add some bounds >> gr.cp_bounds( a=(-1, +1), x=(-1, +1), ) ) md_exponential.printpretty() # - # Let's investigate the model with a *sinew* plot # ( md_exponential >> gr.ev_sinews(df_det="swp", seed=101) >> gr.pt_auto() ) # - input `a` has a linear effect on output `f` # - input `x` has an exponential effect on output `f` # - direction is affected by `a` # # Sinew plots especially useful for exploring a new model # df_results = ( md_RLC >> gr.ev_sinews(df_det="swp", n_density=20, n_sweeps=5, seed=101) >> gr.pt_auto() ) # - input `R` has a positive, linear effect on `Q` # - input `R` has zero effect on `omega0` # - input `C` has a positive, diminishing effect on `Q` # - input `C` has a negative, diminishing effect on `omega0` # - input `L` has a positive, diminishing effect on `Q` # - input `L` has a negative, diminishing effect on `omega0` # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.10.1 64-bit # language: python # name: python3 # --- # + # Question: Minimize the maximum distance to reach all required places. # In other words, choose a block such that distance of all places is minimum from there # Example: consider block at index 3, min distance to reach gym is 1, # for school is 0, and for store is 1. Thus, the maximum you have to travel to reach # any of the places is 1. # Write code to find such a block # + blocks = [ { "gym": False, "school": True, "store": False, }, { "gym": True, "school": False, "store": False, }, { "gym": True, "school": True, "store": False, }, { "gym": False, "school": True, "store": False, }, { "gym": False, "school": True, "store": True, }, ] # list of required places to consider reqs = ["gym", "school"] # + # a dictionary to store max of min distances of any place distances = {} # iterating through blocks for i, block in enumerate(blocks): dis_to_place = {} # iterating through the required places list to check distances for place in reqs: dist_after = dist_before = current_pos = len(blocks) # if place is in current block set distance to 0 and continue if block[place]: dis_to_place[place] = 0 continue # if place is not in current block then look in blocks after and before if not block[place]: # checking blocks after the current block for si, sblock in enumerate(blocks[(i+1):]): if sblock[place]: dist_after = si+1 break # checking blocks before the current block if current block is not the first if i != 0: for si, sblock in enumerate(blocks[(i-1)::-1]): if sblock[place]: dist_before = si+1 break # selecting the minimum of dist_after and dist_before dis_to_place[place] = min((dist_after, dist_before, current_pos)) # selecting the max distance from either place's distances distances[i] = max(dis_to_place.values()) print(distances) # + # selecting the distance with least max travel minimized = sorted(distances.items(), key=lambda x: x[1])[0] print(minimized) # + # here block at index 2 has the minimum maximum distance 0 # which is the answer # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np 
import matplotlib.pyplot as plt import pandas as pd import sklearn as sk import ase import ase.io as aio import qml import time import gc import scipy as sp from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from qml.kernels import gaussian_kernel from qml.math import cho_solve import metric_learn import copy # %config Completer.use_jedi = False import warnings warnings.filterwarnings("ignore") # %matplotlib notebook # + data = pd.read_csv('./data.csv', index_col=0) data.head() # + data['ase_reactants'] = [aio.read( 'data_react_xyz/' + data.mol.values[i] + data.enan.values[i] + '.xyz') for i in range(len(data))] data['ase_products'] = [aio.read( 'data_react_xyz/' + data.mol.values[i] + data.enan.values[i] + '.xyz') for i in range(len(data))] mbtypes = np.load('./mbtypes.npy', allow_pickle=True) data['SLATM_react'] = [qml.representations.generate_slatm(coordinates=mol.positions, nuclear_charges=mol.get_atomic_numbers(), mbtypes=mbtypes, local=False) for mol in data['ase_reactants']] data['SLATM_prod'] = [qml.representations.generate_slatm(coordinates=mol.positions, nuclear_charges=mol.get_atomic_numbers(), mbtypes=mbtypes, local=False) for mol in data['ase_products']] # - data['SLATM_diff'] = [x - y for x,y in zip(data.SLATM_prod.values, data.SLATM_react.values)] extrapolation_data = pd.read_pickle('./extrapolation_data') extrapolation_data.head() mbtypes = np.load('./mbtypes.npy', allow_pickle=True) slatm_react = [qml.representations.generate_slatm(coordinates=mol.positions, nuclear_charges=mol.get_atomic_numbers(), mbtypes=mbtypes, local=False) for mol in extrapolation_data.ase2] slatm_prod = [qml.representations.generate_slatm(coordinates=mol.positions, nuclear_charges=mol.get_atomic_numbers(), mbtypes=mbtypes, local=False) for mol in extrapolation_data.ase3] extrapolation_data['slatm2'] = slatm_react extrapolation_data['slatm3'] = slatm_prod extrapolation_data['slatm_diff'] = [x - y for x,y in zip(slatm_prod, slatm_react)] extrapolation_data.to_pickle('extrapolation_data') Y = data.Eafw.values # # Extrapolation SLATM_DIFF+ slatmdiff = np.vstack(data.SLATM_diff.values) lin_regs = [sp.stats.linregress(xs, ys) for xs, ys in zip(slatmdiff.T, [Y for i in range( slatmdiff.shape[1])])] rvalues = np.array([ccc.rvalue for ccc in lin_regs]) # + pre = sk.preprocessing.MinMaxScaler() pre.fit(slatmdiff[:, np.argsort(rvalues**2)[-500:]]) Xf = pre.transform(slatmdiff[:, np.argsort(rvalues**2)[-500:]]) dmtrtr = sk.metrics.pairwise.pairwise_distances(Xf, n_jobs=-1) sigma = 6 regg = 1e-6 K = np.exp(- 0.5 * sigma ** -2 * dmtrtr ** 2) K[np.diag_indices_from(K)] += regg alpha_vec = cho_solve(K, Y) # + X_extrapolation = pre.transform( np.vstack(extrapolation_data.slatm_diff.values)[:, np.argsort(rvalues**2)[-500:]]) dmtrts = sk.metrics.pairwise.pairwise_distances(X_extrapolation, Xf, n_jobs=-1) # extrapolation Ks = np.exp(- 0.5 * sigma ** -2 * dmtrts ** 2) Y_predicted = np.dot(Ks, alpha_vec) # + fig, ax = plt.subplots() exact = 627.509 * (extrapolation_data.TS - extrapolation_data.int2).values ax.scatter(Y_predicted, exact, label='r$^2$ = {:.2f} \n MAE: {:.2f} kcal/mol'.format( np.corrcoef(Y_predicted, exact)[0][1] ** 2, np.mean(np.abs(Y_predicted - exact))) ) ax.plot(exact, exact, color='black', alpha=0.3) ax.legend() ax.set_xlabel('DFT E$_a$ [kcal/mol]') ax.set_ylabel('ML E$_a$ [kcal/mol]') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # 
kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Question 37 - Employees who are managers # # Suppose you're trying to understand how many managers you have per employee at Company XYZ. On your search to understand, you are given two tables: (1) managers and (2) employees. Each table has 1 column named id. # # Given this dataset, can you use SQL to find the employees that are also managers? Hint: given the table names as well as the single column name you should be able to write a full SQL query. # ```sql # select # m.id # from managers m # join employees e # on e.id = m.id # ``` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Functions of Numpy 6 import pandas as pd from pandas import DataFrame as df import matplotlib.pyplot as plt import numpy as np df_1=df(data=np.arange(12).reshape(3,4), index=['r0','r1','r2'], dtype='int', columns=['c0','c1','c2','c3']) print(df_1) print(df_1.T) print(df_1.axes) print(df_1.dtypes) print(df_1.size) print(type(df_1)) print(df_1.values) print(type(df_1.values)) df_2=df({ 'class_1':['a','a','b','b','c'], 'var_1':np.arange(5), 'var_2':np.random.randn(5)}, index=['r0','r1','r2','r3','r4']) print(df_2) print(df_2.index) print(df_2.iloc[2:]) print(df_2.iloc[2]) print(df_2.head(3)) print(df_2.tail(3)) print(df_2.columns) print(df_2['class_1']) print(df_2[['class_1','var_2']]) # ========================================================== idx=['r0','r1','r2','r3','r4'] df_1=df({'c1':np.arange(5), 'c2': np.random.randn(5)}, index=idx) print(df_1) new_idx=['r0','r1','r2','r5','r6'] df_1=df_1.reindex(new_idx) print(df_1) df_1=df_1.reindex(new_idx,fill_value='missing') print(df_1) # ========================================================= print(pd.date_range("2018-9-10", "2018-9-30", freq='B')) date_idx=pd.date_range("09/10/2018", periods=10, freq='D') print(date_idx) df_2=df({"c1":[10,20,30,40,50,10,20,30,40,50]},index=date_idx) print(df_2) date_idx2=pd.date_range("09/05/2018", periods=20, freq='D') print(date_idx2) df_2=df_2.reindex(date_idx2) print(df_2) df_2=df_2.reindex(date_idx2, method='bfill') print(df_2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np from random import randint # + white = [1 for _ in range(9)] blue = [2 for _ in range(9)] green = [3 for _ in range(9)] yellow = [4 for _ in range(9)] orange = [5 for _ in range(9)] red = [6 for _ in range(9)] face_white = np.array([(white[0], white[1], white[2]), (white[3], white[4], white[5]), (white[6], white[7], white[8])]) face_blue = np.array([(blue[0], blue[1], blue[2]), (blue[3], blue[4], blue[5]), (blue[6], blue[7], blue[8])]) face_green = np.array([(green[0], green[1], green[2]), (green[3], green[4], green[5]), (green[6], green[7], green[8])]) face_yellow = np.array([(yellow[0], yellow[1], yellow[2]), (yellow[3], yellow[4], yellow[5]), (yellow[6], yellow[7], yellow[8])]) face_orange = np.array([(orange[0], orange[1], orange[2]), (orange[3], orange[4], orange[5]), (orange[6], orange[7], orange[8])]) face_red = np.array([(red[0], red[1], red[2]), (red[3], red[4], red[5]), (red[6], red[7], red[8])]) # + import matplotlib.pyplot as plt import matplotlib.colors as mcolors cmap, norm = 
mcolors.from_levels_and_colors([1, 2, 3, 4, 5, 6, 7], ['lavender', 'blue', 'limegreen', 'yellow', 'orange', 'red']) plt.imshow(face_white, cmap=cmap, norm=norm) plt.grid() plt.colorbar() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from sklearn.manifold import TSNE import matplotlib.pyplot as plt import joblib import seaborn as sns sns.set_palette(sns.color_palette("hls", 11)) # %matplotlib inline # - data = joblib.load('./representations.joblib') data.keys() # + flatten = lambda l: [item for sublist in l for item in sublist] X = [] labels = [] for label, key in enumerate(data): for _ in data[key][0]: labels.append(label) for x in data[key]: X.extend(x) # - len(X) X[0].shape tsne = TSNE(n_components=2) X_tsne = tsne.fit_transform(X) x, y = zip(*X_tsne) cm = [] colors = sns.color_palette() for label in labels: cm.append(colors[label]) plt.scatter(x,y,color=cm, marker='.') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # # HW 2: Corporate Bond Pricing (due by 9.21 Fri) # We are going to compute the price of a corporate bond (subject to default) with Monte-Carlo simulation. Assume that # * the default time of a company follows the exponential distribution with intensity $\lambda=$__`def_rate`__. # * the riskfree interest rate is $r_f=$__`rf_rate`__ and the maturity of the bond is $T=$__`mat`__. # * in the case of default, you can recover some portion ($R=$__`recovery_rate`__) of the face value. # * the coupon is 0%, i.e., it is a zero-coupon bond. # * the face value of the bond is 1.0 # * use compound rate for discounting; the price of the default-free bond is $e^{-r_f T}$ # # The Problem 1 of the [2017 ASP Midterm Exam](../files/ASP2017_Midterm.pdf) will be helpful. # # ### Instruction to upload your HW # * Create a repository named __`PHBS_ASP_2018`__ (and clone it to your PC) # * Copy this file to __`PHBS_ASP_2018/HW2/HW2.ipynb`__ (Please use the same name for repository and ipynb file) # * Adding more code. # * Run your your code to make sure that there's no error. # * Upload (commit and sync) your file. # ### 1. First, let's create a pricing function and check the std import numpy as np def_rate = 0.1 rf_rate = 0.03 recovery = 0.3 mat = 10 # # + # First generate exponential random numbers # Although you can generate directly using fault_time = np.random.exponential(scale=), let's use uniform random numbers. n_sample = 10000 U = np.random.uniform(size=n_sample) default_time = -(1/def_rate)*np.log(U) # You can check if the RNs are correct by comparing the means (default_time.mean(), 1/def_rate) # + # Put your code here to price the corporate bond def corp_bond(def_rate=0.1, rf_rate=0.03, recovery=0.3, mat=10, n_sample=10000): U = np.random.uniform(size=n_sample) default_time = -(1/def_rate)*np.log(U) price = np.where(default_time>mat, np.exp(-rf_rate*mat), recovery*np.exp(-rf_rate*default_time)) return np.mean(price) # Call your function corp_bond(def_rate, rf_rate, recovery, mat) # Find the mean and std by calling the function 100 times. arr = [corp_bond() for i in range(100)] np.mean(arr), np.std(arr) # - # ### 2. Now, let's improve the function by reducing the MC variations. # 1. 
Use antithetic method: If `U` is uniform random variable, so is `1-U` # 2. Also shift the RNs to match the mean, `1/def_rate` # + # For example, antithetic method mean n_sample = 10000 U = np.random.uniform(size=int(n_sample/2)) default_time = -(1/def_rate)*np.log(np.concatenate((U,1-U),axis=0)) # Mean-matching means default_time += 1/def_rate-default_time.mean() (default_time.mean(), 1/def_rate) # + # No include the two new features: `antithetic` and `mean_match` def corp_bond_cv(def_rate=0.1, rf_rate=0.03, recovery=0.3, mat=10, antithetic=False, mean_match=False, n_sample=10000): if(antithetic): U = np.random.uniform(size=int(n_sample/2)) U = np.concatenate((U,1-U), axis=0) else: U = np.random.uniform(size=n_sample) default_time = -(1/def_rate)*np.log(U) if(mean_match): default_time += 1/def_rate - default_time.mean() price = np.where(default_time>mat, np.exp(-rf_rate*mat), recovery*np.exp(-rf_rate*default_time)) return np.mean(price) # Find the mean and std by calling the function 100 times for (i) antithetic (ii) mean_match and (iii) both arr1 = [corp_bond_cv(antithetic=True, mean_match=False) for i in range(100)] arr2 = [corp_bond_cv(antithetic=False, mean_match=True) for i in range(100)] arr3 = [corp_bond_cv(antithetic=True, mean_match=True) for i in range(100)] np.array( [ np.mean(arr1), np.std(arr1), np.mean(arr2), np.std(arr2), np.mean(arr3), np.std(arr3) ] ).reshape([3,2]) # - # ### 3. Finally, what is the analytic value of the corporate bond? How does it compare to your MC result above? # + ### Put the analytic expression for the corporate bond price def corp_bond_an(def_rate=0.1, rf_rate=0.03, recovery=0.3, mat=10): price = (recovery*def_rate*(1-np.exp((-rf_rate-def_rate)*mat)))/(rf_rate+def_rate)+np.exp((-rf_rate-def_rate)*mat) return np.mean(price) corp_bond_an() # - # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Bayesian Structural Time Series: Forecasting # This notebook shows how Bayesian Structural Time Series (BSTS) analysis can be used to forecast and decompose time series for enterprise needs. We focus on demand forecasting use case. # # This tutorial is structured as follows: # * We use a public dataset that contains about 2 months of electricity demand and temperature (which is a regressor in this case) values with hourly resolution. # * We use Structural Time Series library from TensorFlow Probability for forecasting. The implementation is adoped from this [example](https://blog.tensorflow.org/2019/03/structural-time-series-modeling-in.html). # * We decompose both the original series and the forecast into components (trend, seasonal, covariates, and autoregtressive). # # ## Data # The notebook uses datasets that are availbale in the `tensor-house-data` repository. # # --- # + import matplotlib as mpl from matplotlib import pylab as plt import matplotlib.dates as mdates import seaborn as sns import collections import numpy as np import pandas as pd from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() sns.set_context("notebook", font_scale=1.) 
sns.set_style("whitegrid") import tensorflow.compat.v2 as tf import tensorflow_probability as tfp from tensorflow_probability import distributions as tfd from tensorflow_probability import sts tf.enable_v2_behavior() # - # ## Data Loading and Visualization # + demand_dates = np.arange('2014-01-01', '2014-02-26', dtype='datetime64[h]') demand_loc = mdates.WeekdayLocator(byweekday=mdates.WE) demand_fmt = mdates.DateFormatter('%a %b %d') data = pd.read_csv('../../tensor-house-data/time-series/electricity-demand-victoria.csv', comment='#', header=None).T data = pd.concat([data, pd.DataFrame(demand_dates)], axis=1) data.columns = ["demand", "temperature", "date"] num_forecast_steps = 24 * 7 * 2 # two weeks data_training = data[:-num_forecast_steps] data_test = data[-num_forecast_steps:] colors = sns.color_palette() c1, c2 = colors[0], colors[1] fig = plt.figure(figsize=(12, 6)) ax = fig.add_subplot(2, 1, 1) ax.plot(data_training.date, data_training.demand, lw=2, label="training data") ax.set_ylabel("Hourly demand (GW)") ax = fig.add_subplot(2, 1, 2) ax.plot(data_training.date, data_training.temperature, lw=2, label="training data", c=c2) ax.set_ylabel("Temperature (deg C)") ax.set_title("Temperature") ax.xaxis.set_major_locator(demand_loc) ax.xaxis.set_major_formatter(demand_fmt) fig.suptitle("Electricity Demand in Victoria, Australia (2014)", fontsize=15) fig.autofmt_xdate() # + # # Plotting functions # def plot_forecast(x, y, forecast_mean, forecast_scale, forecast_samples, title, x_locator=None, x_formatter=None): colors = sns.color_palette() c1, c2 = colors[0], colors[1] fig = plt.figure(figsize=(12, 6)) ax = fig.add_subplot(1, 1, 1) num_steps = len(y) num_steps_forecast = forecast_mean.shape[-1] num_steps_train = num_steps - num_steps_forecast ax.plot(x, y, lw=2, color=c1, label='ground truth') forecast_steps = np.arange( x[num_steps_train], x[num_steps_train]+num_steps_forecast, dtype=x.dtype) ax.plot(forecast_steps, forecast_samples.T, lw=1, color=c2, alpha=0.1) ax.plot(forecast_steps, forecast_mean, lw=2, ls='--', color=c2, label='forecast') ax.fill_between(forecast_steps, forecast_mean-2*forecast_scale, forecast_mean+2*forecast_scale, color=c2, alpha=0.2) ymin, ymax = min(np.min(forecast_samples), np.min(y)), max(np.max(forecast_samples), np.max(y)) yrange = ymax-ymin ax.set_ylim([ymin - yrange*0.1, ymax + yrange*0.1]) ax.set_title("{}".format(title)) ax.legend() if x_locator is not None: ax.xaxis.set_major_locator(x_locator) ax.xaxis.set_major_formatter(x_formatter) fig.autofmt_xdate() return fig, ax def plot_components(dates, component_means_dict, component_stddevs_dict, x_locator=None, x_formatter=None): colors = sns.color_palette() c1, c2 = colors[0], colors[1] axes_dict = collections.OrderedDict() num_components = len(component_means_dict) fig = plt.figure(figsize=(12, 2.5 * num_components)) for i, component_name in enumerate(component_means_dict.keys()): component_mean = component_means_dict[component_name] component_stddev = component_stddevs_dict[component_name] ax = fig.add_subplot(num_components,1,1+i) ax.plot(dates, component_mean, lw=2) ax.fill_between(dates, component_mean-2*component_stddev, component_mean+2*component_stddev, color=c2, alpha=0.5) ax.set_title(component_name) if x_locator is not None: ax.xaxis.set_major_locator(x_locator) ax.xaxis.set_major_formatter(x_formatter) axes_dict[component_name] = ax fig.autofmt_xdate() fig.tight_layout() return fig, axes_dict # - # ## Model Specification and Fitting # + def build_model(observed_time_series, temperature): 
hour_of_day_effect = sts.Seasonal( num_seasons=24, observed_time_series=observed_time_series, name='hour_of_day_effect') day_of_week_effect = sts.Seasonal( num_seasons=7, num_steps_per_season=24, observed_time_series=observed_time_series, name='day_of_week_effect') temperature_effect = sts.LinearRegression( design_matrix=tf.reshape(temperature - np.mean(temperature), (-1, 1)), name='temperature_effect') autoregressive = sts.Autoregressive( order=1, observed_time_series=observed_time_series, name='autoregressive') model = sts.Sum([hour_of_day_effect, day_of_week_effect, temperature_effect, autoregressive], observed_time_series=observed_time_series) return model demand_model = build_model(data_training.demand, data.temperature) variational_posteriors = tfp.sts.build_factored_surrogate_posterior(model=demand_model) num_variational_steps = 200 num_variational_steps = int(num_variational_steps) optimizer = tf.optimizers.Adam(learning_rate=.1) # Using fit_surrogate_posterior to build and optimize the variational loss function. @tf.function(experimental_compile=True) def train(): elbo_loss_curve = tfp.vi.fit_surrogate_posterior( target_log_prob_fn=demand_model.joint_log_prob(observed_time_series=data_training.demand), surrogate_posterior=variational_posteriors, optimizer=optimizer, num_steps=num_variational_steps) return elbo_loss_curve elbo_loss_curve = train() plt.plot(elbo_loss_curve) plt.show() # Draw samples from the variational posterior. q_samples_demand_ = variational_posteriors.sample(50) print("Inferred parameters:") for param in demand_model.parameters: print("{}: {} +- {}".format(param.name, np.mean(q_samples_demand_[param.name], axis=0), np.std(q_samples_demand_[param.name], axis=0))) # - # ## Forecasting # + demand_forecast_dist = tfp.sts.forecast( model=demand_model, observed_time_series=data_training.demand, parameter_samples=q_samples_demand_, num_steps_forecast=num_forecast_steps) num_samples=10 ( demand_forecast_mean, demand_forecast_scale, demand_forecast_samples ) = ( demand_forecast_dist.mean().numpy()[..., 0], demand_forecast_dist.stddev().numpy()[..., 0], demand_forecast_dist.sample(num_samples).numpy()[..., 0] ) fig, ax = plot_forecast(demand_dates, data.demand, demand_forecast_mean, demand_forecast_scale, demand_forecast_samples, title="Electricity demand forecast", x_locator=demand_loc, x_formatter=demand_fmt) ax.set_ylim([0, 10]) fig.tight_layout() # - # ## Decomposition # + component_dists = sts.decompose_by_component( demand_model, observed_time_series=data_training.demand, parameter_samples=q_samples_demand_) forecast_component_dists = sts.decompose_forecast_by_component( demand_model, forecast_dist=demand_forecast_dist, parameter_samples=q_samples_demand_) demand_component_means_, demand_component_stddevs_ = ( {k.name: c.mean() for k, c in component_dists.items()}, {k.name: c.stddev() for k, c in component_dists.items()} ) demand_forecast_component_means_, demand_forecast_component_stddevs_ = ( {k.name: c.mean() for k, c in forecast_component_dists.items()}, {k.name: c.stddev() for k, c in forecast_component_dists.items()} ) # Concatenate the training data with forecasts for plotting. 
component_with_forecast_means_ = collections.OrderedDict() component_with_forecast_stddevs_ = collections.OrderedDict() for k in demand_component_means_.keys(): component_with_forecast_means_[k] = np.concatenate([ demand_component_means_[k], demand_forecast_component_means_[k]], axis=-1) component_with_forecast_stddevs_[k] = np.concatenate([ demand_component_stddevs_[k], demand_forecast_component_stddevs_[k]], axis=-1) fig, axes = plot_components( demand_dates, component_with_forecast_means_, component_with_forecast_stddevs_, x_locator=demand_loc, x_formatter=demand_fmt) for ax in axes.values(): ax.axvline(demand_dates[-num_forecast_steps], linestyle="--", color='red') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="T3_C_A-E7mI-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b488806a-6a92-470a-f0b9-990db844035c" executionInfo={"status": "ok", "timestamp": 1581452958868, "user_tz": -60, "elapsed": 2812, "user": {"displayName": "", "photoUrl": "", "userId": "07711871229181942680"}} print ("Hello Github") # + id="k5zXMc7E8N-k" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 1: Introduction to Clustering # # ## Exercise 1.06 # + import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.metrics import silhouette_score from scipy.spatial.distance import cdist seeds = pd.read_csv('Seed_Data.csv') # - X = seeds[['A','P','C','LK','WK','A_Coef','LKG']] # + def k_means(X, K): #Keep track of history so you can see k-means in action centroids_history = [] labels_history = [] rand_index = np.random.choice(X.shape[0], K) centroids = X[rand_index] centroids_history.append(centroids) while True: # Euclidean distances are calculated for each point relative to centroids, #and then np.argmin returns # the index location of the minimal distance - which cluster a point is #assigned to labels = np.argmin(cdist(X, centroids), axis=1) labels_history.append(labels) #Take mean of points within clusters to find new centroids: new_centroids = np.array([X[labels == i].mean(axis=0) for i in range(K)]) centroids_history.append(new_centroids) # If old centroids and new centroids no longer change, k-means is complete and end. Otherwise continue if np.all(centroids == new_centroids): break centroids = new_centroids return centroids, labels, centroids_history, labels_history # - X_mat = X.values centroids, labels, centroids_history, labels_history = k_means(X_mat, 3) silhouette_score(X[['A','LK']], labels) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import sklearn import pandas as pd import numpy as np from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import TfidfVectorizer import seaborn as sns import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (12, 8) pd.options.display.max_rows = 20 import warnings warnings.filterwarnings('ignore') # + import re def cleaning_data(code_data): text = code_data text = text.replace('(
    )', '') text = text.replace('().*()', '') text = text.replace('(&)', '') text = text.replace('(>)', '') text = text.replace('(<)', '') text = text.replace('(\xa0)', ' ') text = text.replace('-', ' ') text = text.replace('(', ' ') text = text.replace(')', ' ') text = filtering(text) return text.strip() def filtering(text): stripped = re.sub('[^a-zA-Z, ^А-Я,а-я,Ә,І,Ң,Ғ,Ү,Ұ,Қ,Ө,Һ,ә,і,ə,ң,ғ,ү,ұ,қ,ө,һ]', ' ', str(text).replace('-', '')) stripped = re.sub('_', '', stripped) stripped = re.sub('\s+', ' ', stripped) return str(stripped).lower() # - df = pd.read_csv("dataset/Dataset_12classes.csv") df['text'] = df['text'].apply(cleaning_data) df df['text_length'] = df['text'].apply(len) df['text_word_count'] = df['text'].apply(lambda x: len(x.split(' '))) df['text_avg_length'] = df['text'].apply(lambda x: round(len(x)/len(x.split(' ')), 2)) df # + plt.figure(figsize=(12,6)) sns.kdeplot(df[df['text_length'] > 60][df['text_length'] < 1000]['text_length'], shade=True, color="r").set_title("Сөздердің саны бойынша таралу") plt.xlabel("Сөздер саны") plt.ylabel("Таралуы") plt.show() plt.figure(figsize=(12,6)) sns.kdeplot(df[df['text_word_count'] > 60][df['text_word_count'] < 1000]['text_word_count'], shade=True, color="g").set_title("Сөздердің саны бойынша таралу") plt.xlabel("Сөздер саны") plt.ylabel("Таралуы") plt.show() plt.figure(figsize=(12,6)) sns.kdeplot(df[df['text_avg_length'] < 1000]['text_avg_length'], shade=True, color="b").set_title("Сөздердің саны бойынша таралу") plt.xlabel("Сөздер саны") plt.ylabel("Таралуы") plt.show() # + g = sns.barplot(x="text_length", y="category", data=df, capsize=.2, palette="pastel") g.set_yticklabels(g.get_yticklabels(), rotation=30) plt.title("Текстың ұзындығы бойынша графика шығару") plt.xlabel("Саны") plt.ylabel("Жанрлар") plt.show() g = sns.barplot(x="text_word_count", y="category", data=df, capsize=.2, palette="pastel") g.set_yticklabels(g.get_yticklabels(), rotation=30) plt.title("Текстың сөздер саны бойынша графика шығару") plt.xlabel("Саны") plt.ylabel("Жанрлар") plt.show() g = sns.barplot(x="text_avg_length", y="category", data=df, capsize=.2, palette="pastel") g.set_yticklabels(g.get_yticklabels(), rotation=30) plt.title("Текстың арифметикалық ораташа ұзындығы бойынша графика шығару") plt.xlabel("Саны") plt.ylabel("Жанрлар") plt.show() # + from wordcloud import WordCloud desc = ' '.join(df.text.tolist()) wc_desc = WordCloud(background_color='white', max_words=400, width=400, height=400,random_state=10).generate(desc) plt.figure(figsize=(12,8)) plt.imshow(wc_desc) plt.title("Сөздер бұлты") # + vectorizer = TfidfVectorizer(lowercase=True, min_df = 0.1) tfidf = vectorizer.fit_transform(df.text) feature_array = np.array(vectorizer.get_feature_names()) tfidf_sorting = np.argsort(tfidf.toarray().sum(axis=0)).flatten()[::-1] n = 20 top_n_fe = feature_array[tfidf_sorting][:n] top_n_tf = tfidf.toarray().sum(axis=0)[tfidf_sorting][:n] plt.figure(figsize=(20,12)) sns.barplot(y=top_n_tf, x=top_n_fe, palette="pastel") plt.title("Текста TF-IDF бойынша ең жоғары қолданыстағы, я,ни стоп сөздерді шығару") plt.xlabel("Сөз") plt.ylabel("TF-IDF өлшемі") plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="RXba5v1a0aQ9" # ! pip install dnspython # ! 
pip install "pymongo[srv]" # + id="4oWQQzVj2HbA" #importing required modules import pymongo import re import pandas as pd import json from pymongo import MongoClient #creating connection with MongoDB client = pymongo.MongoClient("mongodb+srv://Sangeetha:SangiScooby28@../myFirstDatabase?retryWrites=true&w=majority") # creating database db = client["TaskMongoDb"] db = client.TaskMongoDb records=db.students_data students = [] for line in open('students.json', 'r'): students.append(json.loads(line)) #x= records.insert_many(students) # + id="wWWj80ii_9aW" #finding the name of the student who scored high marks in exam , quiz and Homework #creating three different lists to store the marks of all students as per their category exam=[] quiz=[] Homework=[] for i in range(0,200): xx=students[i]['scores'] exam.append(xx[0]['score']) quiz.append(xx[1]['score']) Homework.append(xx[2]['score']) #finding max score for exam maxscore_exam =max(exam) maxscore_exam_index=exam.index(maxscore_exam) #print(maxscore_exam) #print(maxscore_exam_index) #finding max score for quiz maxscore_quiz =max(quiz) maxscore_quiz_index=quiz.index(maxscore_quiz) #print(maxscore_quiz) #print(maxscore_quiz_index) ##finding max score for Homework maxscore_Homework =max(Homework) maxscore_Homework_index=Homework.index(maxscore_Homework) #print(maxscore_Homework) #print(maxscore_Homework_index) #finding average: avg=[] for i in range(0,200): sum=exam[i]+quiz[i]+Homework[i] average=sum/3 avg.append(average) max_average =max(avg) max_average_index=avg.index(max_average) #print(max_average_index) #find the name of the student: id=[] for i in range(0,200): id.append(students[i]['_id']) Maximum_scorer =id[max_average_index] #print("Maximum scorer in all three subjects :",Maximum_scorer) #Querying in mongodb #print(students) for doc1 in records.find({"_id":13}): print(doc1) # + id="w7gV5yi-Hfpa" # Find students who scored below average in the exam and pass mark is 40%? 
sume=0 for i in exam: sume=sume+i avge=sume/200 #print(avge) id_s =[] for i in range(0,200): if exam[i] < avge: id_s.append(i) for i in id_s: for doc2 in records.find({"_id":i}): print(doc2) # + id="QHxKq2hSj8hd" colab={"base_uri": "https://localhost:8080/"} outputId="1bb83695-0832-4cd1-d7c0-8000ec2b892c" # Find the total and average of the exam, quiz and homework and store them in a separate collection Total_average=[] for i in range(0, 200): sum=exam[i]+quiz[i]+Homework[i] average=sum/3 d={'name':students[i]['name'],'total':sum, 'percentage': average} Total_average.append(d) records=db.Total_average records.insert_many(Total_average) # + id="RawBDmDMhuOc" colab={"base_uri": "https://localhost:8080/"} outputId="ba9e119d-d696-4cd0-abdb-99aee9153da9" #Creating a new collection to store the students who have failed in all the categories Failed_in_all_categories=[] for i in range(0, 200): if (exam[i]<40) and (quiz[i]<40) and (Homework[i]<40): d1={'name':students[i]['name'] , 'id':students[i]['_id'], 'grade':'Fail in all categorie'} Failed_in_all_categories.append(d1) print(Failed_in_all_categories) records=db.failedinAllCategories records.insert_many(Failed_in_all_categories) # + id="ul_an4j_xSHS" Pass_in_all_categories=[] for i in range(0, 200): if (exam[i]>=40) and (quiz[i]>=40) and (Homework[i]>=40): d1={'name':students[i]['name'] , 'id':students[i]['_id'], 'grade':'Pass in all categorie'} Pass_in_all_categories.append(d1) print(Pass_in_all_categories) records=db.PassinAllCategories records.insert_many(Pass_in_all_categories) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Dimensional Resemblance Analysis (DRA) Dataset E #import libraries import warnings warnings.filterwarnings("ignore") import numpy as np import pandas as pd from matplotlib import pyplot as plt import os from tqdm import tqdm import hoggorm as ho import numpy as np print('Libraries imported!!') # + #define directory of functions and actual directory HOME_PATH = '' #home path of the project FUNCTIONS_DIR = 'EVALUATION FUNCTIONS/RESEMBLANCE' ACTUAL_DIR = os.getcwd() #change directory to functions directory os.chdir(HOME_PATH + FUNCTIONS_DIR) #import functions for univariate resemblance analisys from dimensional_resemblance import preprocess_data from dimensional_resemblance import pca_transform from dimensional_resemblance import isomap_transform from dimensional_resemblance import umap_transform from dimensional_resemblance import dra_distance #change directory to actual directory os.chdir(ACTUAL_DIR) print('Functions imported!!') # - # ## 1. Read real and synthetic datasets # In this part real and synthetic datasets are read. 
#Define global variables DATA_TYPES = ['Real','GM','SDV','CTGAN','WGANGP'] SYNTHESIZERS = ['GM','SDV','CTGAN','WGANGP'] FILEPATHS = {'Real' : HOME_PATH + 'REAL DATASETS/TRAIN DATASETS/E_PimaIndiansDiabetes_Real_Train.csv', 'GM' : HOME_PATH + 'SYNTHETIC DATASETS/GM/E_PimaIndiansDiabetes_Synthetic_GM.csv', 'SDV' : HOME_PATH + 'SYNTHETIC DATASETS/SDV/E_PimaIndiansDiabetes_Synthetic_SDV.csv', 'CTGAN' : HOME_PATH + 'SYNTHETIC DATASETS/CTGAN/E_PimaIndiansDiabetes_Synthetic_CTGAN.csv', 'WGANGP' : HOME_PATH + 'SYNTHETIC DATASETS/WGANGP/E_PimaIndiansDiabetes_Synthetic_WGANGP.csv'} categorical_columns = ['Outcome'] data = dict() #iterate over all datasets filepaths and read each dataset for name, path in FILEPATHS.items() : data[name] = pd.read_csv(path) for col in categorical_columns : data[name][col] = data[name][col].astype('category') data # ## 2. Preprocess variables for data reduction data_scaled = dict() for name in tqdm(DATA_TYPES) : data_scaled[name] = preprocess_data(data[name]) print(name, ':', data_scaled[name].shape) data_scaled # ## 3. Principal Component Analysis (PCA) pca = dict() pca['Real'] = pca_transform(data_scaled['Real'], np.zeros((len(data_scaled['Real']), 1))) for name in SYNTHESIZERS : pca[name] = pca_transform(data_scaled[name], np.ones((len(data_scaled[name]), 1))) pca # + fig, axs = plt.subplots(nrows=1, ncols=4, figsize=(10,2.5)) axs_idxs = range(4) idx = dict(zip(SYNTHESIZERS,axs_idxs)) targets = [0,1] COLORS = [['tab:blue','tab:orange'], ['tab:blue','tab:green'], ['tab:blue','tab:red'], ['tab:blue','tab:purple']] cont = 0 first = True legend_data = list() pca_real = pca['Real'][['PC1','PC2']] for name in SYNTHESIZERS : pca_data = pd.DataFrame(data=pca['Real'], columns=['PC1','PC2','Label']).append(pca[name]).sample(frac=1) ax = axs[idx[name]] colors = COLORS[cont] for target, color in zip(targets,colors): indicesToKeep = pca_data['Label'] == target handles = ax.scatter(pca_data.loc[indicesToKeep, 'PC1'], pca_data.loc[indicesToKeep, 'PC2'], c = color, s = 20, alpha = 0.5) if target == 1 or first == True : legend_data.append(handles) first = False pca_synthetic = pca[name][['PC1','PC2']] print(name) joint_dist = dra_distance(pca_real, pca_synthetic) print('- Joint distance: ', joint_dist) ax.set_xlabel('PC1') ax.set_ylabel('PC2') ax.set_xticks([]) ax.set_yticks([]) cont=cont+1 ax.legend(handles=legend_data, ncol=5, labels=DATA_TYPES, bbox_to_anchor=(0.3,-0.1)) fig.savefig('DATA REDUCTION RESULTS/PCA_PLOTS.png', bbox_inches='tight') # - # ## 4. 
ISOMAP isomap = dict() isomap['Real'] = isomap_transform(data_scaled['Real'], np.zeros((len(data_scaled['Real']), 1))) for name in SYNTHESIZERS : isomap[name] = isomap_transform(data_scaled[name], np.ones((len(data_scaled[name]), 1))) isomap # + fig, axs = plt.subplots(nrows=1, ncols=4, figsize=(10, 2.5)) axs_idxs = range(4) idx = dict(zip(SYNTHESIZERS,axs_idxs)) targets = [0,1] COLORS = [['tab:blue','tab:orange'], ['tab:blue','tab:green'], ['tab:blue','tab:red'], ['tab:blue','tab:purple']] cont = 0 first = True legend_data = list() isomap_real = isomap['Real'][['PC1','PC2']] for name in SYNTHESIZERS : isomap_data = pd.DataFrame(data=isomap['Real'], columns=['PC1','PC2','Label']).append(isomap[name]).sample(frac=1) ax = axs[idx[name]] colors = COLORS[cont] for target, color in zip(targets,colors): indicesToKeep = isomap_data['Label'] == target handles = ax.scatter(isomap_data.loc[indicesToKeep, 'PC1'], isomap_data.loc[indicesToKeep, 'PC2'], c = color, s = 20, alpha = 0.5) if target == 1 or first == True : legend_data.append(handles) first = False isomap_synthetic = isomap[name][['PC1','PC2']] print(name) joint_dist = dra_distance(isomap_real, isomap_synthetic) print('- Joint distance: ', joint_dist) ax.set_xlabel('PC1') ax.set_ylabel('PC2') ax.set_xticks([]) ax.set_yticks([]) cont=cont+1 ax.legend(handles=legend_data, ncol=5, labels=DATA_TYPES, bbox_to_anchor=(0.3,-0.1)) fig.savefig('DATA REDUCTION RESULTS/ISOMAP_PLOTS.png', bbox_inches='tight') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # 1 - Defina a função soma_nat que recebe como argumento um número natural n # e devolve a soma de todos os números naturais até n. # - Ex: soma_nat(5) = 15 soma_nat = lambda n : 1 if n == 1 else n + soma_nat(n-1) print(soma_nat(2)) # 2 - Defina a função div que recebe como argumentos dois números naturais m # e n e devolve o resultado da divisão inteira de m por n. Neste exercício você não # pode recorrer às operações aritméticas de multiplicação, divisão e resto da divisão # inteira. # - Ex: div(7,2) = 3 div = lambda m, n: 0 if m < n else 1 + div(m-n, n) print(div(7,2)) # 3 - Defina a função prim_alg que recebe como argumento um número natural e devolve o primeiro algarismo (o mais significativo) na representação decimal de n. # - Ex: prim_alg(5649) = 5 # - Ex: prim_alg(7) = 7 prim_alg = lambda n: str(n)[0] print(prim_alg(5649)) print(prim_alg(7)) # 4 - Defina a função prod_lista que recebe como argumento uma lista de inteiros e # devolve o produto dos seus elementos. # - Ex: prod_lista([1,2,3,4,5,6]) = 720 from functools import reduce prod_lista = lambda inteiros: reduce((lambda x, y: x * y), inteiros) prod_lista([1,2,3,4,5,6]) # 5 - Defina a função contem_parQ que recebe como argumento uma lista de números # inteiros w e devolve True se w contém um número par e False em caso contrário. # - Ex: contem_parQ([2,3,1,2,3,4]) = True # - Ex: contem_parQ([1,3,5,7]) = False contem_parQ = lambda w: True if list(filter(lambda x: x%2 == 0, w)) else False print(contem_parQ([2,3,1,2,3,4])) print(contem_parQ([1,3,5,7])) # 6 - Defina a função todos_imparesQ que recebe como argumento uma lista de # números inteiros w e devolve True se w contém apenas números ímpares e False # em caso contrário. 
# - Ex: todos_imparesQ([1,3,5,7]) = True # - Ex: todos_imparesQ([]) = True # - Ex: todos_imparesQ([1,2,3,4,5]) = False todos_imparesQ = lambda w: True if list(filter(lambda x: x%2 != 0, w)) == w else False print(todos_imparesQ([1,3,5,7])) print(todos_imparesQ([])) print(todos_imparesQ([1,2,3,4,5])) # 7 - Defina a função pertenceQ que recebe como argumentos uma lista de números # inteiros w e um número inteiro n e devolve True se n ocorre em w e False em # caso contrário. # - Ex: pertenceQ([1,2,3],1) = True # - Ex: pertenceQ([1,2,3],2) = True # - Ex: pertenceQ([1,2,3],3) = True # - Ex: pertenceQ([1,2,3],4) = False pertenceQ = lambda w, n: True if n in w else False print(pertenceQ([1,2,3],1)) print(pertenceQ([1,2,3],2)) print(pertenceQ([1,2,3],3)) print(pertenceQ([1,2,3],4)) # 8 - Defina a função junta que recebe como argumentos duas listas de números # inteiros w1 e w2 e devolve a concatenação de w1 com w2 . # - Ex: junta([1,2,3],[4,5,6]) = [1, 2, 3, 4, 5, 6] # - Ex: junta([],[4,5,6]) = [4, 5, 6] # - Ex: junta([1,2,3],[]) = [1, 2, 3] junta = lambda w1, w2: w1 + w2 print(junta([1,2,3],[4,5,6])) print(junta([],[4,5,6])) print(junta([1,2,3],[]) ) # 9 - Defina a função temPrimoQ que recebe como argumento uma lista de listas de # números inteiros w e devolve True se alguma das sublistas w tem um número # primo e False em caso contrário. # - Ex: temPrimoQ([[4,4,4,4],[5,4,6,7],[2,4,3]]) = True # - Ex: temPrimoQ([[4,4,4,4],[4,4,4],[],[4]]) = False # + retorna_primo = lambda x: True if not list(filter(lambda z: x % z == 0, range(2, x))) else False retorna_primo_lista = lambda lista: list(filter(lambda x: retorna_primo(x), lista)) temPrimoQ = lambda listas: True if list(filter(lambda lista: retorna_primo_lista(lista), listas)) else False print(temPrimoQ([[4,4,4,4],[5,4,6,7],[2,4,3]])) print(temPrimoQ([[4,4,4,4],[4,4,4],[],[4]])) # - # 10 - Defina a função inverteLista que recebe como argumento uma lista w e devolve a # mesma lista mas invertida. # - Ex: inverteLista([1,2,3,4,5]) = [5, 4, 3, 2, 1] # - Ex: inverteLista([]) # + inverteLista = lambda w: w[::-1] print((inverteLista([1,2,3,4,5]))) print(inverteLista([])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] colab_type="text" id="I6jmLA-RYrt5" #

    Soal 1: Helper Function

    # # Jawab Pertanyaan di bawah ini: # # Misalkan kamu ingin mengetahui tentang kegunaan fungsi 'len' di Python, maka fungsi apa yang bisa kamu gunakan untuk menolong kamu? # + colab={} colab_type="code" id="VrvyQxGJYrt9" # lengkapi code ini help(len) # + [markdown] colab_type="text" id="KrcuFTuAYruT" #

    Soal 2: Built-in Function

    # # Jawab Pertanyaan di bawah ini: # # Apa itu built-in Function? # # Sebutkan 3 built-in Function di pyhon! # # Berikan contoh masing2 1 cara penggunaan built-in function yang kamu sebutkan sebelumnya # + [markdown] colab_type="text" id="xro-58_CYruW" # ### isi jawaban text disini # Built-in function adalah fungsi bawaan dalam bahasa pemrogramman Python. # Contohnya adalah str(), print(), int() # + colab={} colab_type="code" id="Xv7lHkgfYruY" # Contoh penggunaan built in function print("Hello World") a = 10 b = 5 c = int(a / b) print(c) nilai_pi = 3.14 print("Nilai Pi untuk menghitung lingkaran adalah: " + str(nilai_pi)) # + [markdown] colab_type="text" id="yhWXiPNDYrul" #

    Soal 3: Method dan Function

    # # Jawab Pertanyaan di bawah ini: # # - Apa perbedaan method dan function? # + [markdown] colab_type="text" id="rqpTJlgDYrun" # Method adalah suatu fungsi yang dimiliki oleh suatu object dan penggunaannya terbatas. Sedangkan function adalah fungsi adalah fungsi yang penggunaannya lebih luas seperti print(), str(), dll # + [markdown] colab_type="text" id="tUU1xgYBYrup" #

    Soal 4: Menggunakan Method String

    # # Lengkapi kode untuk menghasilkan suatu output yang di harapkan # + colab={} colab_type="code" id="037gauOGYrur" kalimat = "Corona cepat selesai" # gunakan method untuk mengubah nilai kalimat menjadi uppercase semua kemudian tampilkan hasilnya print(kalimat.upper()) # gunakan method untuk menghitung berapa huruf e di dalam kalimat print(kalimat.count('e')) # + [markdown] colab_type="text" id="fPTd4R5VYru5" # Expected Output: # # CORONA CEPAT SELESAI # # 3 # + [markdown] colab_type="text" id="IPjp5vwDYrvL" #

    Soal 5: Membuat Simple Function

#
# Buatlah suatu fungsi yang menerima satu input argumen berbentuk list dan mempunyai elemen bertipe
# numeric semua, dimana fungsi tersebut berguna untuk menghitung rata-rata dari kumpulan elemen list
# tersebut. Namai fungsi tersebut 'mean_list'.

# + colab={} colab_type="code" id="smoKI9zIYrvO"
obj_list = [11.25, 18.0, 20.0, 10.75, 9.50]

def mean_list(inp_list):
    # hitung rata-rata dari list yang diterima sebagai argumen (bukan variabel global obj_list)
    a = sum(inp_list)
    b = len(inp_list)
    return a / b

print(mean_list(obj_list))

# + [markdown] colab_type="text" id="MVAvoHHgYrva"
# Expected Output:
#
# 13.9

# + [markdown] colab_type="text" id="A8jjpNC8Yrvc"
#

    Soal 6: Membuat Function dengan Multiple arguments

    # # Buatlah suatu fungsi untuk melakukan penggabungan antara dua list # + colab={} colab_type="code" id="r42z4mafYrve" obj_list = [2, 4, 5, 6] obj_penambah = [1, 2, 3] def kali_list(obj_list, obj_penambah): # isikan kode obj_list.extend(obj_penambah) return obj_list print(kali_list(obj_list, obj_penambah)) # + [markdown] colab_type="text" id="RYhU921kYrvp" # Expected Output: # # [2, 4, 5, 6, 1, 2, 3] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import pyspark import os from pyspark import SparkConf , SparkContext from pyspark.sql import SparkSession, SQLContext from pyspark.sql.types import * import pyspark.sql.functions as F from pyspark.sql.functions import udf, col from pyspark.ml.regression import LinearRegression from pyspark.mllib.evaluation import RegressionMetrics from pyspark.ml.tuning import ParamGridBuilder, CrossValidator, CrossValidatorModel from pyspark.ml.feature import VectorAssembler, StandardScaler from pyspark.ml.evaluation import RegressionEvaluator # - import seaborn as sns import matplotlib.pyplot as plt from pyspark import SparkContext sc = SparkContext() from pyspark import SparkConf from pyspark import SparkContext conf = SparkConf().setAppName("SparkRDD").setMaster("local") sc1 = SparkContext(conf=conf) sc1 values = [1,2,3,4,5] rdd = sc1.parallelize(values) rdd.take(2) rdd= sc1.textFile("../trgc1000.csv") rdd.take(1) aba = sc1.parallelize(range(1,10000,2)) aba.persist() text_file = sc1.textFile("../trgc1000.csv") text_file.cache() text_file # + #Visualization from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" pd.set_option("display.max_columns",200) pd.set_option("display.max_colwidth",400) from matplotlib import rcParams sns.set(context = "notebook",style= "whitegrid",rc = {"figure.figsize":(18,4)}) rcParams["figure.figsize"]= 18,4 # %matplotlib inline # %config InlineBackend.figure_format = "retina" # - rnd_seed = 23 np.random.seed = rnd_seed np.random.set_state = rnd_seed # ## 2. 
Creating the Spark Session spark = SparkSession.builder.master("local[2]").\ appName("Linear-Regression-California-Housing").getOrCreate() spark sc= spark.sparkContext sc sqlContext = SQLContext(spark.sparkContext) sqlContext HOUSING_DATA = "cal_housing.data" # + #define the schema, corresponding to a line in the csv data schema = StructType([ StructField("long", FloatType(),nullable = True), StructField("lat",FloatType(),nullable=True), StructField("medage",FloatType(),nullable=True), ]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="JoxbTL-lIE29" colab_type="code" colab={} import pandas as pd import os # + id="bpv19JWrIE3C" colab_type="code" colab={} outputId="e32d8a56-4af9-4ad0-e3c0-2cc31bccb3f5" # !pip install pandas-profiling==2.* # + id="jzOYNWEpIE3F" colab_type="code" colab={} outputId="5f9617eb-05b0-484e-92ed-86ba9025f70e" # !pip install category_encoders==2.* # + id="bOYns0tbIE3H" colab_type="code" colab={} outputId="0ac2760a-b7a3-4542-dab4-18a21e06d171" from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(r'C:\Users\Lesley\Downloads\train_features.csv'), pd.read_csv(r'C:\Users\Lesley\Downloads\train_labels.csv')) test = pd.read_csv(r'C:\Users\Lesley\Downloads\test_features.csv') train, val = train_test_split(train, train_size=0.80, test_size=0.20, stratify=train['status_group'], random_state=42) train.shape, val.shape, test.shape # + id="2As3NAxUIE3J" colab_type="code" colab={} outputId="babe9e4c-3778-4f3c-abc7-d4ea9d851f92" # + id="QyHkyrfEIE3M" colab_type="code" colab={} import numpy as np def wrangle(X): """Wrangle train, validate, and test sets in the same way""" # Prevent SettingWithCopyWarning X = X.copy() # About 3% of the time, latitude has small values near zero, # outside Tanzania, so we'll treat these values like zero. X['latitude'] = X['latitude'].replace(-2e-08, 0) # When columns have zeros and shouldn't, they are like null values. # So we will replace the zeros with nulls, and impute missing values later. # Also create a "missing indicator" column, because the fact that # values are missing may be a predictive signal. 
cols_with_zeros = ['longitude', 'latitude', 'construction_year', 'gps_height', 'population'] for col in cols_with_zeros: X[col] = X[col].replace(0, np.nan) X[col+'_MISSING'] = X[col].isnull() # Drop duplicate columns duplicates = ['quantity_group', 'payment_type'] X = X.drop(columns=duplicates) # Drop recorded_by (never varies) and id (always varies, random) unusable_variance = ['recorded_by'] X = X.drop(columns=unusable_variance) # Convert date_recorded to datetime X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True) # Extract components from date_recorded, then drop the original column X['year_recorded'] = X['date_recorded'].dt.year X['month_recorded'] = X['date_recorded'].dt.month X['day_recorded'] = X['date_recorded'].dt.day X = X.drop(columns='date_recorded') # Engineer feature: how many years from construction_year to date_recorded X['years'] = X['year_recorded'] - X['construction_year'] X['years_MISSING'] = X['years'].isnull() # return the wrangled dataframe return X train = wrangle(train) val = wrangle(val) test = wrangle(test) # + id="l1wowJynIE3O" colab_type="code" colab={} target = 'status_group' train_features = train.drop(columns=[target, 'id']) numeric_features = train_features.select_dtypes(include='number').columns.tolist() cardinality = train_features.select_dtypes(exclude='number').nunique() categorical_features = cardinality[cardinality <= 50].index.tolist() features = numeric_features + categorical_features # + id="ZuBaY-n9IE3Q" colab_type="code" colab={} outputId="b3e58e49-f307-4638-da60-fcc37f18f67a" # + id="cm8RZNuMIE3S" colab_type="code" colab={} X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_test = test[features] # + id="_VBxRXCkIE3U" colab_type="code" colab={} outputId="b2dd26b5-ba62-460a-c34e-e8b51337c2c1" import category_encoders as ce from sklearn.ensemble import RandomForestClassifier from sklearn.impute import SimpleImputer from sklearn.pipeline import make_pipeline pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(), RandomForestClassifier(max_depth=20, n_estimators=22)) pipeline.fit(X_train[features], y_train) score = pipeline.score(X_val[features], y_val) print('Random Forest Accuracy Score', score) # + id="ertfzNK2IE3X" colab_type="code" colab={} outputId="ce90e0b8-7bf4-4b09-ee5c-f965af0aedfa" # %%time pipeline = make_pipeline( ce.OrdinalEncoder(), SimpleImputer(), RandomForestClassifier(max_depth=20, n_estimators=22)) pipeline.fit(X_train[features], y_train) score = pipeline.score(X_val[features], y_val) print('Random Forest Accuracy Score', score) # + id="AQHPwA4HIE3Z" colab_type="code" colab={} # + id="GA23Xy3DIE3a" colab_type="code" colab={} y_pred = pipeline.predict(X_test) # + id="bnFKYmAAIE3c" colab_type="code" colab={} path=r'C:\Users\Lesley\Desktop\Lambda\Lesley_Rich' submission = test[['id']].copy() submission['status_group'] = y_pred # submission['status_group'] submission.to_csv(path+'DecisionTreeWaterPumpSub2.csv', index=False) # + id="_8KkPK7pIE3e" colab_type="code" colab={} # + id="03XsESzxIE3h" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # # ### Simulación de procesos financieros. # # **Nombres:** , . # # **Fecha:** XX de marzo del 2021. # # **Expediente** : if722626, if726622. # **Profesor:** . 
# # **Link Github**: https://github.com/PintorOD1997/ProyectoConjunto_LandaverdeI_PintorD # # # Proyecto TEMA-2 # ___ # ## 1. Entregable. # Los trabajos deben tener los siguientes componentes básicos. # # ### 1.1 Título del trabajo. # > Febrero Loco y Marzo otro poco # > Clima: ¿predecible o impredecible? # > Clima estocástico # # ### 1.2 Objetivos. # > - Se refieren a los propósitos por los cuales se hace el trabajo. # > - Deben ser **concretos, evaluables y verificables.** # > - Deben ser escritos en infinitivo. # #### 1.1 Objetivo general. # > - Fin último de estudio. Se formula atendiendo el propósito global del trabajo. No presenta detalles. # - Encontrar patrones climáticos e intentar predecir probabilidad de lluvia. # > - Se orienta a la totalidad del trabajo. Tiene que ver con el título del trabajo. # - # #### 1.2 Objetivos específicos # > - De forma detallada describen cada una de los componentes del trabajo. # - Definir estaciones del año, en las que se puedan encuadrar patrones climáticos para la ciudad de Seattle. # - Correlacionar la lluvia con otros indicadores como la humedad, dirección del viento, velocidad del viento y temperatura. # - Simular escenarios para cada una de las cuatro estaciones del año. # - Enunciar la probabilidad de que llueva en una estación dada del año para Seattle. # > - En conjunto garantizan la consecución del objetivo general. # # Referencia: # - https://es.slideshare.net/rosbur/metodologia-objetivos-generales-y-especficos # # ### 1.3 Definición del problema. # > Se debe describir en una cuartilla máximo cuál es el escenario de negocio que se va a modelar, identificar cuáles son los supuestos que defines en el negocio y cuáles son los "nodos" o "escenarios" que identificas necesarios simular y porqué. Debes elaborar un diagrama de inicio a fin, identificando los "nodos" o "escenarios" a simular. # # ### 1.4 Nodos y variables que se decidieron simular y porqué # > En este apartado el objetivo es identificar los problemas que se van a abordar, para examinar la factibilidad y viabilidad de la simulación de los nodos escogidos. Posteriormente se debe de argumentar del porqué de los nodos escogidos. # # ### 1.5 Definición de hipótesis y supuestos. # > Para poder modelar un proceso de negocio o cualquiera otro proceso en el área de las ingenierías o ciencias sociales, después de identificar el objeto de estudio, es importante indicar cuales son los supuestos que se dan por verdaderos a lo largo de la ejecución del proceso. Estos supuestos se deben indicar en forma de variables e hipótesis (son diferentes), porque si puedes definir en un valor el supuesto lo llamaremos "supuesto constante" y si no es posible identificarlo con un número entonces será una hipótesis que afectará la construcción de los escenarios posibles del proceso. # # ### 1.6 Obtención de bases de datos # > El tercer paso del proyecto es indicar de dónde se obtendrán las bases de datos, que indican como se han comportado las variables que identificaste en tu proceso de negocio que vas a simular. En esta investigación debes haber encontrado información propia de la empresa, organización o institución que vas a simular y otra parte de la información debe provenir de investigación que realices en fuentes de información públicas o privadas como las que tiene SECOBI, ahí normalmente y dependiendo del problema elegido, se investigan variables económicas, como tasas de interés, inflación, tipo de cambio, etc., de varios años, por poner un ejemplo. 
# # ### 1.7 Visualización de resultados de simulación. # > Se deben simular al menos 4 "nodos" o "escenarios" en tu problema. Para la segunda entrega debes haber elaborado un programa que simule dos de ellos, los primeros dos de tu diagrama. # > Para la entrega final deben de tener los 4 nodos con todas sus simulaciones. # # ### 1.6 Conclusiones. # > Mucho cuidado, las conclusiones no son cualquier cosa. Se debe concluir respecto a los objetivos planteados de acuerdo a los resultados obtenidos. # # ### 1.7 Referencias. # > Citar (en formato APA) la bibliografía utilizada. # ___ # ## 1. Entregable. # Los trabajos deben tener los siguientes componentes básicos. # # ### 1.1 Título del trabajo. # > Febrero Loco y Marzo otro poco # > Clima: ¿predecible o impredecible? # > Clima estocástico # # ### 1.2 Objetivos. # > - Se refieren a los propósitos por los cuales se hace el trabajo. # > - Deben ser **concretos, evaluables y verificables.** # > - Deben ser escritos en infinitivo. # #### 1.1 Objetivo general. # > - Fin último de estudio. Se formula atendiendo el propósito global del trabajo. No presenta detalles. # - Encontrar patrones climáticos e intentar predecir probabilidad de lluvia. # > - Se orienta a la totalidad del trabajo. Tiene que ver con el título del trabajo. # - # #### 1.2 Objetivos específicos # > - De forma detallada describen cada una de los componentes del trabajo. # - Definir estaciones del año, en las que se puedan encuadrar patrones climáticos para la ciudad de Seattle. # - Correlacionar la lluvia con otros indicadores como la humedad, dirección del viento, velocidad del viento y temperatura. # - Simular escenarios para cada una de las cuatro estaciones del año. # - Enunciar la probabilidad de que llueva en una estación dada del año para Seattle. # > - En conjunto garantizan la consecución del objetivo general. # # Referencia: # - https://es.slideshare.net/rosbur/metodologia-objetivos-generales-y-especficos # # ### 1.3 Definición del problema. # > Se debe describir en una cuartilla máximo cuál es el escenario de negocio que se va a modelar, identificar cuáles son los supuestos que defines en el negocio y cuáles son los "nodos" o "escenarios" que identificas necesarios simular y porqué. Debes elaborar un diagrama de inicio a fin, identificando los "nodos" o "escenarios" a simular. # # ### 1.4 Nodos y variables que se decidieron simular y porqué # > En este apartado el objetivo es identificar los problemas que se van a abordar, para examinar la factibilidad y viabilidad de la simulación de los nodos escogidos. Posteriormente se debe de argumentar del porqué de los nodos escogidos. # # ### 1.5 Definición de hipótesis y supuestos. # > Para poder modelar un proceso de negocio o cualquiera otro proceso en el área de las ingenierías o ciencias sociales, después de identificar el objeto de estudio, es importante indicar cuales son los supuestos que se dan por verdaderos a lo largo de la ejecución del proceso. Estos supuestos se deben indicar en forma de variables e hipótesis (son diferentes), porque si puedes definir en un valor el supuesto lo llamaremos "supuesto constante" y si no es posible identificarlo con un número entonces será una hipótesis que afectará la construcción de los escenarios posibles del proceso. # # ### 1.6 Obtención de bases de datos # > El tercer paso del proyecto es indicar de dónde se obtendrán las bases de datos, que indican como se han comportado las variables que identificaste en tu proceso de negocio que vas a simular. 
En esta investigación debes haber encontrado información propia de la empresa, organización o institución que vas a simular y otra parte de la información debe provenir de investigación que realices en fuentes de información públicas o privadas como las que tiene SECOBI, ahí normalmente y dependiendo del problema elegido, se investigan variables económicas, como tasas de interés, inflación, tipo de cambio, etc., de varios años, por poner un ejemplo. # # ### 1.7 Visualización de resultados de simulación. # > Se deben simular al menos 4 "nodos" o "escenarios" en tu problema. Para la segunda entrega debes haber elaborado un programa que simule dos de ellos, los primeros dos de tu diagrama. # > Para la entrega final deben de tener los 4 nodos con todas sus simulaciones. # # ### 1.6 Conclusiones. # > Mucho cuidado, las conclusiones no son cualquier cosa. Se debe concluir respecto a los objetivos planteados de acuerdo a los resultados obtenidos. # # ### 1.7 Referencias. # > Citar (en formato APA) la bibliografía utilizada. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import warnings import itertools import pandas as pd import numpy as np import statsmodels.api as sm import matplotlib.pyplot as plt #import pandas_datareader.data # import pandas_datareader.data as web import seaborn as sns # from pylab import rcParams # from pandas import datetime # from statsmodels.graphics.tsaplots import plot_acf, plot_pacf # from statsmodels.tsa.stattools import adfuller from statsmodels.tsa.arima_model import ARIMA sns.set_style("whitegrid") # - path = '../Data/df2.csv' df1 = pd.read_csv(path, parse_dates=['Date']) df1 = df.rename(columns = {"Date":"ds","Close":"y"}) df1['ds'] = pd.to_datetime(df1['ds']) df1 import datetime as datetime ctf_date = datetime.datetime(2020, 3, 1) df = df1.loc[df1.ds < ctf_date, ].copy() df pred_periods = 252 cutoff = len(df) - 252 df_train = df[:cutoff].copy() df_test = df[cutoff:].copy() print(cutoff) df.columns[1:] # + #feature normalization normal_constants = [] for col in df.columns[1:]: tmp_max = np.max(abs(df[col])) normal_constants.append(tmp_max) df[col]= df[col]/tmp_max print(normal_constants) print(df.head()) # - # + # # processing data, removing the space in coumn names # df= pd.read_csv('../data/df.csv') # new_name = [name.strip() for name in total_data.columns] # total_data.columns = new_name # total_data = total_data.sort_values(by='Date') # #extract features to use # sub_feature = ['Date','Close','bond','fed funds','fed total assets'] # total_data = total_data[sub_feature] # total_data=total_data.iloc[:10000] # #change the columns names for fitting prophet() # total_data.columns = ['ds','y','bond','fed funds','fed total assets'] # total_data # + #feature normalization # normal_constants = [] # for col in total_data.columns[1:]: # tmp_max = np.max(abs(total_data[col])) # normal_constants.append(tmp_max) # df[col]= df[col]/tmp_max # print(normal_constants) # print(df.head()) # + #data = df.sort_index(ascending=True, axis=0) training = df_train['y'] validation = df_test['y'] #model = auto_arima(training, start_p=1, start_q=1,max_p=3, max_q=3, m=12,start_P=0, seasonal=True,d=1, D=1, trace=True,error_action='ignore',suppress_warnings=True) #model.fit(training) #forecast = model.predict(n_periods=248) #forecast = pd.DataFrame(forecast,index = valid.index,columns=['Prediction']) # + from 
pmdarima.arima import auto_arima # + model = auto_arima(training, start_p=1, start_q=1,max_p=3, max_q=3, m=12,start_P=0, seasonal=True,d=1, D=1, trace=True,error_action='ignore',suppress_warnings=True) # - model.fit(training) arima_forecast = model.predict(n_periods=pred_size) forecast = pd.DataFrame(forecast,index = test_data.index,columns=['Prediction']) #test1 = test_data[:248] rms=np.sqrt(np.mean(np.power((np.array(test_data['y'])-np.array(forecast['Prediction'])),2))) print(rms) #plot plt.plot(train_data['y']) plt.plot(test_data['y']) plt.plot(forecast['Prediction']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.linear_model import LogisticRegression # # Convertion rate challenge # ### # ## Goal of the exercise # The goal of this exercise is to build a model that predicts conversion rate and, based on the model, come up with ideas to improve revenue. # # We have data about users who hit our site: whether they converted or not as well as some of their characteristics such as their country, the marketing channel, their age, whether they are repeat users and the number of pages visited during that session (as a proxy for site activity/time spent on site). # # The project is to: # - Predict conversion rate # - Come up with recommendations for the product team and the marketing team to improve conversion rate # ## Data exploration data_original = pd.read_csv('conversion_data.csv') data_original.head() # Columns: # - country : user country based on the IP address # - age : user age. Self-reported at sign-in step # - new_user : whether the user created the account during this session or had already an account and simply came back to the site # - source : marketing channel source # - Ads: came to the site by clicking on an advertisement # - Seo: came to the site by clicking on search results # - Direct: came to the site by directly typing the URL on the browser # - total_pages_visited: number of total pages visited during the session. This is a proxy for time spent on site and engagement during the session. # - converted: this is our label. 1 means they converted within the session, 0 means they left without buying anything. # # __The company goal is to increase conversion rate: # conversions / total sessions.__ # Number of rows in the data set: len(data_original) data_original.describe() # A couple of interesting things from the above description of our data: # # - The age range for our users is 17 to 123. The max value of this range indicates that there may be some problems with the data recorded, although it is posible for a person of 123 years of age to be using the website, seems a bit strange. # - 75% of our data points correspond to people under the age of 36. # - 68% are new users. # - The mean number of pages visited in a session is 4.9. # - The mean convertion can be seen as the mean conversion rate of the data set, in this case 3%. # Let's explore the super high ages we found in the description of the data set. data_original.sort_values(by='age', ascending=False).head(10) # Still feels a bit strange to have 2 users with ages 123 and 111. They both converted and have really high number of total pages visited. Since these are only 2 data points (of more than 360,000) if would remove them just to be safe. 
data_cleaned = data_original[data_original['age'] < 100] len(data_cleaned) # Lets see if our data set has some missing values: data_cleaned.isnull().values.any() # No missing values on our data set. data_cleaned.groupby('converted').mean() # Seems to be a clear relationship between total_pages visited and converted. Also the mayority of converted are old users. Either seems a good idea to make new users come back, or to find out why people don't convert on the first visit. data_cleaned.groupby('source').mean() # There seems to be no much of a relationship for source and convertion rates. data_cleaned.groupby('country').mean() # Looking at country-based data, the Chinese convertion rate seems very low. # ## Data visualization # %matplotlib inline data_cleaned['age'].hist() plt.title('Histogram of Age') plt.xlabel('Age') plt.ylabel('Frequency') data_cleaned['new_user'].hist() plt.title('Histogram of New user') plt.xlabel('New user') plt.ylabel('Frequency') data_cleaned['total_pages_visited'].hist() plt.title('Histogram of Total pages visited') plt.xlabel('Total pages visited') plt.ylabel('Frequency') data_cleaned['country'].value_counts().plot(kind='bar') plt.title('Histogram of Country') plt.xlabel('Country') plt.ylabel('Frequency') data_cleaned['source'].value_counts().plot(kind='bar') plt.title('Histogram of Source') plt.xlabel('Source') plt.ylabel('Frequency') # ## Data preparation # For logistic regrasion I'll create dummy variables for country and source. dummy_country = pd.get_dummies(data_cleaned['country'], prefix='country') dummy_country.head() dummy_source = pd.get_dummies(data_cleaned['source'], prefix='source') dummy_source.head() data_lr = data_cleaned[['age', 'new_user', 'total_pages_visited', 'converted']].join([dummy_country, dummy_source]) data_lr.head() # ## Logistic regression #still don't get why I have to remove the first dummy value for each dummy variable (has to do with the intercept?) 
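# One common explanation for dropping one dummy per categorical variable: together with the intercept,
# the full set of dummies for a variable is perfectly collinear (they sum to 1 in every row), so the
# coefficients are not identifiable unless one reference level is dropped. Here the omitted levels appear
# to be country_China and source_Ads, and the remaining dummy coefficients are read as effects relative
# to those reference levels.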
feature_cols = ['age', 'new_user', 'total_pages_visited', 'country_Germany', 'country_UK', 'country_US', 'source_Direct', 'source_Seo'] X = data_lr[feature_cols] y = data_lr['converted'] model = LogisticRegression() model = model.fit(X, y) dict(zip(X.columns, np.transpose(model.coef_))) model.intercept_ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt # %matplotlib inline # Loading the previously cleaned data from it's CSV file hs_dataframe = pd.read_csv('hearthstone_dataframe.csv') hs_dataframe.columns # + # Color Pallete for Seaborn classes_palette = {'Druid': '#FF7D0A', 'Warrior': '#C79C6E', 'Hunter': '#ABD473', 'Warlock': '#9482C9', 'Mage':'#69CCF0', 'Shaman': '#0070DE', 'Paladin': '#F58CBA', 'Rogue': '#FFF569', 'Priest': '#FFFFFF'} classes_label = ['Druid', 'Warrior', 'Hunter', 'Warlock', 'Mage', 'Shaman', 'Paladin', 'Rogue', 'Priest'] class_colors = ['#FF7D0A', '#C79C6E', '#ABD473', '#9482C9', '#69CCF0', '#0070DE', '#F58CBA', '#FFF569', '#FFFFFF'] # Plotting Cards Distribution Per Class cards_per_class = hs_dataframe['playerClass'].value_counts() classes = cards_per_class.keys() cards_count = cards_per_class.values explode = (0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1) # MatplotLib 'SubPlots' Handler fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(10,5)) # Pie Plot axes.set_title('Cards Distribution Per Class (On our Dataset)', y=1.05) axes.pie(cards_count, labels=classes, colors=class_colors, explode=explode, autopct='%1.1f%%', shadow=True, startangle=90) axes.axis('equal') # - # Loading hearthstone Dataframe to Seaborn Engine hs_dataframe['type'].unique() hs_minions_dataframe = hs_dataframe[hs_dataframe['type'] == 'Minion'][['cost', 'health', 'attack', 'type']] sns.pairplot(hs_minions_dataframe, hue='type', kind='scatter') plt.suptitle('Distribution of Cards per Cost, Attack and Health (Minions)', y=1.05) plt.show() # + fig, axs = plt.subplots(figsize=(15,5), ncols=3, nrows=1) # Distribution of Minions Per Cost, Attack and Health sns.distplot(hs_minions_dataframe['cost'], kde=False, bins=11, ax=axs[0]) sns.distplot(hs_minions_dataframe['attack'], kde=False, bins=13, ax=axs[1]) sns.distplot(hs_minions_dataframe['health'], kde=False, bins=15, ax=axs[2]) axs[0].set_ylabel('Cards Count') axs[0].set_title('Dist. Per Cost') axs[1].set_title('Dist. Per Attack') axs[2].set_title('Dist. 
Per Health') plt.show() # + # Cards Distribution Per Type and Class fig, axs = plt.subplots(figsize=(15,5), ncols=1, nrows=1) hs_cards_dataset = hs_dataframe[['playerClass', 'health', 'attack', 'type']] axs.set_title('Distribution of Card Types Per Class') sns.countplot(x='type',data=hs_cards_dataset, hue='playerClass', palette=classes_palette, ax=axs) plt.show() # - hs_dataframe.columns # + # Plotting Cards Distribution Per Class cards_per_class = hs_dataframe['rarity'].value_counts() classes = cards_per_class.keys() cards_count = cards_per_class.values explode = (0.1, 0.1, 0.1, 0.1, 0.1) rarity_colors = ['#ffffff', '#0070ff', '#EFEFEF', '#ff8000', '#a335ee'] # MatplotLib 'SubPlots' Handler fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(10,5)) # Pie Plot axes.set_title('Cards Distribution Per Rarity (On our Dataset)', y=1.05) axes.pie(cards_count, labels=classes, colors=rarity_colors, explode=explode, autopct='%1.1f%%', shadow=True, startangle=90) axes.axis('equal') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="LYvAOR2VzHmW" # # **Diplomatura en Ciencia de Datos, Aprendizaje Automático y sus Aplicaciones** # # **Edición 2022** # # --- # # # Datos y Modelos # # + id="Xwdfo7z20TUK" import io import matplotlib import matplotlib.pyplot as plt import numpy import pandas as pd import seaborn seaborn.set_context('talk') # + [markdown] id="XY2Hl-Ma07Nn" # ## Lectura del dataset # # En la notebook 00 se explican los detalles de la siguiente sección. # + id="Vviv_sqXdR5W" url = 'https://cs.famaf.unc.edu.ar/~mteruel/datasets/diplodatos/sysarmy_survey_2020_processed.csv' df = pd.read_csv(url) # + id="gckNHXXLktJ4" colab={"base_uri": "https://localhost:8080/", "height": 323} executionInfo={"status": "ok", "timestamp": 1648060797106, "user_tz": 180, "elapsed": 139, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09936144167370803067"}} outputId="cbb3fa61-bf5f-4ae9-9881-0e4a2881ba73" df[:3] # + [markdown] id="2i_yGkkUs3QA" # # Estadística descriptiva # # Hemos estado organizando y visualizando los datos de distintas maneras, pero ¿qué intuiciones podemos obtener? # # Las visualizaciones como histogramas o gráficos de conteo muestran la cantidad de veces que se observa cada valor en un conjunto de realizaciones de una variable aleatoria. Esto se denomina análisis de frecuencia, y es parte de la **estadística descriptiva**. # # El uso de visualizaciones nos limita a estimaciones, pero los datos crudos son demasiado como para intepretarlos en conjunto. Para eso, la estadística descriptiva provee también medidas de tendencia central y de dispersión, que resumen en un valor numérico propiedades de las realizaciones de la variable. # # Retomemos el problema original con la v.a. `salary_monthly_NETO`, ¿qué información brindan las siguientes métricas y cómo usarlas? 
# # + id="AXFDG0eBPDgH" salary_col='salary_monthly_BRUTO' # + colab={"base_uri": "https://localhost:8080/"} id="fHre-H9euQv4" executionInfo={"status": "ok", "timestamp": 1648063230485, "user_tz": 180, "elapsed": 436, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09936144167370803067"}} outputId="3ffe1c34-6492-40a2-e722-ec2bebbe2412" df[salary_col].describe().apply(lambda s: '{0:.2f}'.format(s)) # + [markdown] id="QC-wJbBitzDH" # ## Medidas de tendencia central # # Cuando se quiere obtener un valor representativo de todas las realizaciones de una v.a., o su centro, se utiliza una *medida de tendencia central*. # # Repasando, dada una característica de interés (modelada por X v.a.) y un conjunto de observaciones $x = \{ x_1, x_2 ... \}$ donde $x_i = X(\omega_i)$ para algún $\omega_i \in \Omega$, y $N = |x|$: # # * La **media muestral** (aritmética) o promedio se calcula como: # # $$ \bar{x} = \frac{1}{N} \sum_i^N x_i $$ # # * La **mediana** se calcula: # 1. Ordenar las realizaciones tal que $x_j \leq x_{j+1}$ # 2. Si la cantidad de datos $N$ es impar, la mediana es el valor central: $median = x_{\lfloor N / 2 \rfloor +1}$ # 3. Si la cantidad de datos $N$ es par, la mediana es e promedio de los dos valores centrales: $median = \frac{1}{2} (x_{ N / 2 } + x_{ (N / 2) +1})$ # # * La **moda** son los valores o él valor con mayor frecuencia, es decir, los o él que más se repite. # # + colab={"base_uri": "https://localhost:8080/"} id="VGJfjf-x5TOh" executionInfo={"status": "ok", "timestamp": 1648063239575, "user_tz": 180, "elapsed": 325, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09936144167370803067"}} outputId="c109f974-16ac-4ede-e3bd-5fbc8cd8c9e5" df[salary_col].mean(), df[salary_col].median() # + [markdown] id="hDltOaTjnuFd" # **¿Por qué las dos medidas son tan distintas?** # # * La media se puede interpretar como el *centro de masa* del histograma. Es decir, si el histograma fuera una figura de madera, el punto de equilibrio donde podemos apoyarlo y no se cae es la media. # * La media es muy sensible a valores extremos. # * La mediana es más robusta a valores extremos. # * Si la distribución de los datos es simétrica, las medidas coinciden. (Luego, si no coinciden es porque la distribución no es simétrica) # # **¿Se cumple para estos datos?** # + colab={"base_uri": "https://localhost:8080/"} id="woWeBF8-0u5Q" executionInfo={"status": "ok", "timestamp": 1648063253444, "user_tz": 180, "elapsed": 323, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09936144167370803067"}} outputId="d0ffcb4a-1810-4d38-f357-c101dfdb393a" max_salaries = [df[salary_col].max(), 10**6, 500000, 400000, 300000, 200000] central_tendency = [ (max_salary, df[df[salary_col] < max_salary][salary_col].mean(), df[df[salary_col] < max_salary][salary_col].median()) for max_salary in max_salaries ] central_tendency # + [markdown] id="EZrjSY4yPV8-" # Se pueden graficar estos números para lograr una mejor intuición de la magnitud de las diferencias. Además, al mostrar una visualización se pueden incluir más puntos. # # Para poder crear gráficos de seaborn con distintos grupos de datos, muchas veces es necesario cambiar el formato del dataframe de wide a long. Ver [este link](https://anvil.works/blog/tidy-data) para más información. 
# + colab={"base_uri": "https://localhost:8080/", "height": 141} id="MpMJWSNq3Xq_" executionInfo={"status": "ok", "timestamp": 1618615228626, "user_tz": 180, "elapsed": 7815, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgEpHkJ3bTs6Ms1_5XLJaQvEwE5hv2Ac2i3p5w_Q2o=s64", "userId": "10886139577622185878"}} outputId="c2282517-899c-426c-b822-fdb4a9eb3a56" central_tendency_max = [ (max_salary, df[df[salary_col] < max_salary][salary_col].mean(), df[df[salary_col] < max_salary][salary_col].median()) for max_salary in range(50000, int(df[salary_col].max()), 10**4) ] central_tendency_max_df = pd.DataFrame(central_tendency_max, columns=['max_salary', 'mean', 'median'])\ .melt(id_vars='max_salary', var_name='metric') central_tendency_max_df[:3] # + colab={"base_uri": "https://localhost:8080/", "height": 355} id="rJQfOlKV15Z4" executionInfo={"status": "ok", "timestamp": 1618615229342, "user_tz": 180, "elapsed": 8521, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgEpHkJ3bTs6Ms1_5XLJaQvEwE5hv2Ac2i3p5w_Q2o=s64", "userId": "10886139577622185878"}} outputId="7c393e3a-4b53-4432-b487-79fef379f2ef" # ¡Podemos ver estos datos visualmente! valga la redundancia!! fig = plt.figure(figsize=(15, 5)) seaborn.lineplot(data=central_tendency_max_df, x='max_salary', y='value', hue='metric') plt.ticklabel_format(style='plain', axis='x') seaborn.despine() # + colab={"base_uri": "https://localhost:8080/", "height": 519} id="mSyyNgvndRPQ" executionInfo={"status": "ok", "timestamp": 1618615230016, "user_tz": 180, "elapsed": 9185, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgEpHkJ3bTs6Ms1_5XLJaQvEwE5hv2Ac2i3p5w_Q2o=s64", "userId": "10886139577622185878"}} outputId="4c4037e0-1e16-40ba-b6c1-0468e9329470" ## La diferencia no siempre se aprecia en los gráficos fig, axes = plt.subplots(nrows=2, figsize=(16, 8)) seaborn.histplot(df[salary_col], bins=100, ax=axes[0], color='gray') axes[0].axvline(df[salary_col].mean(), color='orangered', linestyle='--', label='Media') axes[0].axvline(df[salary_col].median(), color='indigo', linestyle='-.', label='Mediana') filtered_df = df[df[salary_col] < 200000] seaborn.histplot(filtered_df[salary_col], bins=100, ax=axes[1], color='gray') axes[1].axvline(filtered_df[salary_col].mean(), color='orangered', linestyle='--', label='Media') axes[1].axvline(filtered_df[salary_col].median(), color='indigo', linestyle='-.', label='Mediana') axes[0].legend() seaborn.despine() # + [markdown] id="3MdG-7bK8AKR" # ¿Qué decir de la moda? Sólo que el resultado de la función no es un valor, sino una series de valores, aunque la serie tenga un único elemento. # + colab={"base_uri": "https://localhost:8080/"} id="r01xw1q18AmV" executionInfo={"status": "ok", "timestamp": 1618615230021, "user_tz": 180, "elapsed": 9178, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgEpHkJ3bTs6Ms1_5XLJaQvEwE5hv2Ac2i3p5w_Q2o=s64", "userId": "10886139577622185878"}} outputId="ca25ecd4-6b27-4615-a64c-edeaefa03673" df.profile_gender.mode() # + [markdown] id="Li3vLv3X8k7Z" # ## Medidas de dispersión # # Las medidas de dispersión vistas en el teórico son la desviación estándar, la varianza, y el coeficiente de variación. También permiten representar con un número alguna propiedad de los datos. # # Por ejemplo, comparemos el salario neto con el salario bruto. 
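# + [markdown]
# As a minimal sketch of that wide-to-long conversion (the toy frame below is hypothetical, only to
# illustrate `DataFrame.melt`): each metric column becomes a set of rows labelled by `metric`, which
# seaborn can then use as `hue`.

# +
toy_wide = pd.DataFrame({
    'max_salary': [100000, 200000],
    'mean': [80000.0, 95000.0],
    'median': [75000.0, 90000.0],
})
# id_vars stays as a column; the remaining columns are stacked into (metric, value) pairs.
toy_long = toy_wide.melt(id_vars='max_salary', var_name='metric', value_name='value')
print(toy_long)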
A priori, **¿deberíamos ver alguna diferencia?** # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="504NtuJWjpX0" executionInfo={"status": "ok", "timestamp": 1618615230024, "user_tz": 180, "elapsed": 9170, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgEpHkJ3bTs6Ms1_5XLJaQvEwE5hv2Ac2i3p5w_Q2o=s64", "userId": "10886139577622185878"}} outputId="be2db5aa-afb4-4203-cd5b-5146992b08bb" df[['salary_monthly_NETO', 'salary_monthly_BRUTO']].describe().round() # + [markdown] id="m6dcAgVYlUWK" # Claramente, ambas distribuciones están centradas en valores distintos, pero ¿podemos decir algo sobre su dispersión? # # Cuando se comparan dos características diferentes (que pueden tener magnitudes diferentes) puede no ser conveniente comparar directamente los valores de las desviaciones estándar, sino que podemos usar el coeficiente de variación (desviación estándar dividida la media). # + colab={"base_uri": "https://localhost:8080/"} id="5Ga3FpQalrCm" executionInfo={"status": "ok", "timestamp": 1618615230025, "user_tz": 180, "elapsed": 9156, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgEpHkJ3bTs6Ms1_5XLJaQvEwE5hv2Ac2i3p5w_Q2o=s64", "userId": "10886139577622185878"}} outputId="40b738d6-04e5-4971-93fa-b0b0b55e3b39" import scipy print("Coeficiente de variación salario bruto", scipy.stats.variation(df.salary_monthly_BRUTO)) print("Coeficiente de variación salario neto", scipy.stats.variation(df.salary_monthly_NETO.dropna())) # + [markdown] id="lVG6Ro-6ao3j" # ## Percentiles y gráficos de caja # # Los gráficos de caja son otra forma de representar la distribución de las realizaciones de una v.a. numérica, de una forma más condensada que un histograma. # # Son muy útiles para comparar muchas distribuciones, pero sólo cuando son muy distintas entre ellas, ya que oscurecen algunas sutilezas. Otros problema de este tipo de gráficos es que *no todo el mundo recuerda cómo leerlos*. # # En estadística descriptiva, un gráfico de caja es un método para representar gráficamente grupos de datos numéricos a través de sus cuartiles. Los gráficos de caja también pueden tener líneas que se extienden verticalmente desde las cajas (bigotes) indicando la variabilidad fuera de los cuartiles superior e inferior. Los valores atípicos pueden representarse como puntos individuales. # # La definición anterior sugiere que, si hay un valor atípico, se representará como un punto en el diagrama de caja, mientras que el resto de los datos de la muestra se agrupará y se mostrará en forma de cajas. Intentemos verlo nosotros mismos. #
    # #
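# + [markdown]
# A quick numeric check of the convention described above (a sketch, assuming `df` and `salary_col` from
# the cells above): by default the whiskers of a boxplot extend at most 1.5 * IQR beyond the first and
# third quartiles, and observations outside those limits are drawn individually as outliers.

# +
q1 = df[salary_col].quantile(0.25)
q3 = df[salary_col].quantile(0.75)
iqr = q3 - q1
lower_whisker_limit = q1 - 1.5 * iqr
upper_whisker_limit = q3 + 1.5 * iqr
# Points outside [lower_whisker_limit, upper_whisker_limit] are the candidate outliers of the boxplot.
print(q1, q3, iqr, lower_whisker_limit, upper_whisker_limit)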
    # + colab={"base_uri": "https://localhost:8080/", "height": 301} id="5dbBiShrasMI" executionInfo={"status": "ok", "timestamp": 1618615230681, "user_tz": 180, "elapsed": 9794, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgEpHkJ3bTs6Ms1_5XLJaQvEwE5hv2Ac2i3p5w_Q2o=s64", "userId": "10886139577622185878"}} outputId="d88536fb-a803-43d2-d9f7-74b5a54bc231" plt.figure(figsize=(12, 4)) seaborn.boxplot(x=df[salary_col]) seaborn.despine() # + id="j9J3KNTD_S6j" colab={"base_uri": "https://localhost:8080/", "height": 372} executionInfo={"status": "ok", "timestamp": 1618615309974, "user_tz": 180, "elapsed": 930, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgEpHkJ3bTs6Ms1_5XLJaQvEwE5hv2Ac2i3p5w_Q2o=s64", "userId": "10886139577622185878"}} outputId="68157738-fbd1-4296-bf9f-b5899f6c9714" seaborn.distplot(df[df.profile_age < 100].profile_age) # + colab={"base_uri": "https://localhost:8080/", "height": 318} id="GdK00mpDa7Nz" executionInfo={"status": "ok", "timestamp": 1618330973435, "user_tz": 180, "elapsed": 8844, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj5Cl9j0p5Ont2qFoH4hE2P4Pws7ek_gATkrdVJ=s64", "userId": "13159704290090351697"}} outputId="c5431d14-a5f9-472c-f6f8-e7e0be5be086" plt.figure(figsize=(12, 4)) seaborn.boxplot(x=df[df.profile_age < 100].profile_age) # + [markdown] id="tyx3Pmk-dJL4" # Por ejemplo, podemos comparar la distribución de los salarios netos con respecto al nivel de estudios alcanzado. # + colab={"base_uri": "https://localhost:8080/", "height": 390} id="W1dKgRP9gkHj" executionInfo={"status": "ok", "timestamp": 1618330973438, "user_tz": 180, "elapsed": 8833, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj5Cl9j0p5Ont2qFoH4hE2P4Pws7ek_gATkrdVJ=s64", "userId": "13159704290090351697"}} outputId="bc5078c3-3a8d-4136-9b49-99e04eac18ea" plt.figure(figsize=(12, 6)) seaborn.boxplot(data=df, x=salary_col, y='profile_studies_level', color='orangered') plt.ticklabel_format(style='plain', axis='x') # + [markdown] id="HClH-TMBhMfg" # **¿Qué puede estar causando una diferencia tan grande en la distribución para las respuestas que dicen Posdoctorado?** # + [markdown] id="yqHgU6mUhXSi" # ### Boxenplots # # Los boxplots tienen una gran desventaja: ocultan mucha información en la distribución de las colas. Por ejemplo, para la categoría Posdoctorado, sabemos que el 25% de los valores de sueldo neto es mayor que los ~650000 pesos. Pero no conocemos cómo se distribuyen. Para conjuntos de datos de gran tamaño, el 25% de los datos contiene mucha información. # # Un gráfico más informativo es el **boxenplot**, que visualiza más percentiles. Otra ventaja es la percepción del mismo debido al peso visual de las cajas: los datos en el rango intercuartílico no parecen muuuucho más importantes que los datos en las colas. # # Sin embargo, es aún más difícil de leer si buscamos exactitud, ya que los percentiles que definen el límite de cada caja se definen recursivamente y no decrecen linealmente. 
# + colab={"base_uri": "https://localhost:8080/", "height": 410} id="evtF2AFChc06" executionInfo={"status": "ok", "timestamp": 1648064038381, "user_tz": 180, "elapsed": 1058, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09936144167370803067"}} outputId="8bfcaa03-5a48-4bde-a5df-b3b1b9a900b7" plt.figure(figsize=(12, 6)) seaborn.boxenplot(data=df, x=salary_col, y='profile_studies_level', color='orangered') plt.ticklabel_format(style='plain', axis='x') # + [markdown] id="81z4Ue6PkEZr" # ## Eliminación de valores extremos # # ### Usando percentiles # # Una forma conservadora de eliminar valores extremos que estén afectando a la media, el rango y las visualizaciones es seleccionar un cierto porcentaje más extremo. Para eso, usamos los percentiles. # # Por ejemplo, podemos elegir quedarnos con el 99% de salarios más bajos, eliminando el 1%. Podemos calcular todos los percentiles para decidir cuál sería el más apropiado. # + colab={"base_uri": "https://localhost:8080/"} id="-R7cIusV_1ri" executionInfo={"status": "ok", "timestamp": 1619706141046, "user_tz": 180, "elapsed": 887, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj5Cl9j0p5Ont2qFoH4hE2P4Pws7ek_gATkrdVJ=s64", "userId": "13159704290090351697"}} outputId="f4c1242f-b76e-431c-e219-d074aa6e3a06" k = 90 percentile_90 = df[salary_col].quantile(k / 100) n_below = len(df[df[salary_col] < percentile_90]) n_above = len(df[df[salary_col] > percentile_90]) print('Percentil {} de la columna {}: {}'.format(k, salary_col, percentile_90)) print('% de datos menor que percentil {}: {}'.format(k, n_below / len(df))) print('% de datos mayor que percentil {}: {}'.format(k, n_above / len(df))) # + colab={"base_uri": "https://localhost:8080/"} id="bWEgaBVvka9p" executionInfo={"status": "ok", "timestamp": 1618330974254, "user_tz": 180, "elapsed": 9628, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj5Cl9j0p5Ont2qFoH4hE2P4Pws7ek_gATkrdVJ=s64", "userId": "13159704290090351697"}} outputId="e7baf270-f52d-4ec8-8aa6-dce26d5584dd" df[salary_col].quantile([.95, .98, .99, .995, .998]) # + colab={"base_uri": "https://localhost:8080/", "height": 318} id="A-2cG3unruwo" executionInfo={"status": "ok", "timestamp": 1618330974258, "user_tz": 180, "elapsed": 9619, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj5Cl9j0p5Ont2qFoH4hE2P4Pws7ek_gATkrdVJ=s64", "userId": "13159704290090351697"}} outputId="2e8308b6-f072-40ae-94c2-a84c123e8618" plt.figure(figsize=(12, 4)) max_limit = df[salary_col].quantile(.98) seaborn.boxenplot(x=df[df[salary_col] < max_limit][salary_col]) # + colab={"base_uri": "https://localhost:8080/", "height": 753} id="MWmor0akspwt" executionInfo={"status": "ok", "timestamp": 1618330975030, "user_tz": 180, "elapsed": 10377, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj5Cl9j0p5Ont2qFoH4hE2P4Pws7ek_gATkrdVJ=s64", "userId": "13159704290090351697"}} outputId="c07bcbd1-fb54-4b28-97a8-63df74c03b08" fig, axes = plt.subplots(figsize=(12, 12), nrows=3) max_limit = df[salary_col].quantile(.98) data = df[df[salary_col] < max_limit][salary_col] seaborn.histplot(x=data, ax=axes[0]) seaborn.boxplot(x=data, ax=axes[1]) seaborn.boxenplot(x=data, ax=axes[2]) # + [markdown] id="SBkKwLFltJwg" # ### # + id="poVH7-0RFxqC" def clean_outliers_q3(dataset, column_name): """Returns dataset removing the outlier rows from column @column_name.""" interesting_col = dataset[column_name] 
# + [markdown] id="SBkKwLFltJwg"
# ### Using quartiles and standard deviation

# + id="poVH7-0RFxqC"
def clean_outliers_q3(dataset, column_name):
    """Returns dataset removing the outlier rows from column @column_name."""
    interesting_col = dataset[column_name]
    # Here we can remove the outliers from both ends, or even add more restrictions.
    mask_outlier = (interesting_col <= (2.5 * interesting_col.quantile(.75)))
    return dataset[mask_outlier]

# + id="MaZj8_fatXgo"
fig = plt.figure(figsize=(12, 4))
data = clean_outliers_q3(df, salary_col)[salary_col]
seaborn.histplot(x=data)

# + id="9RPNlz5-kjgD"
def clean_outliers_sd(dataset, column_name):
    """Returns dataset removing the outlier rows from column @column_name."""
    interesting_col = dataset[column_name]
    # Here we can remove the outliers from both ends, or even add more restrictions.
    mask_outlier = (
        numpy.abs(interesting_col - interesting_col.mean()) <= (2.5 * interesting_col.std()))
    return dataset[mask_outlier]

# + id="bpQ28UwSGYAF"
fig = plt.figure(figsize=(12, 4))
data = clean_outliers_sd(df, salary_col)[salary_col]
seaborn.histplot(x=data)

# + [markdown] id="KY55pa57CW7T"
#

# + [markdown] id="fuDscbVqttGZ"
# ### Looking at the data!
#
# Who are the people earning so much?

# + id="zIt2nJXvtx3g"
df[df[salary_col] > df[salary_col].quantile(0.98)]

# + [markdown] id="SfgRCnKUaUBH"
# Back to the slides

# + [markdown] id="s-VSiRuLCXxg"
# Data vs. Model
#
# QQplot

# + id="2lzmzK1NuPNT"
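# + [markdown]
# The body of the cell above is missing from this copy of the notebook. As a hedged sketch of what a data-vs-model comparison could look like, the cell below compares the empirical salary distribution against a fitted normal using scipy's probplot; the use of scipy and of a normal reference distribution are assumptions, not necessarily what the original cell did.

# +
import scipy.stats

plt.figure(figsize=(6, 6))
# Empirical quantiles of the salary column vs. theoretical quantiles of a normal distribution.
scipy.stats.probplot(df[salary_col], dist='norm', plot=plt)
plt.show()
# -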
# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Importing Libraries

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline

# ## Reading the Data

loans = pd.read_csv('loan_data.csv')

# loans.info()

loans.describe()

loans.head()

# # Exploratory Data Analysis

plt.figure(figsize=(10,6))
loans[loans['credit.policy']==1]['fico'].hist(alpha=0.5,color='blue',
                                              bins=30,label='Credit.Policy=1')
loans[loans['credit.policy']==0]['fico'].hist(alpha=0.5,color='red',
                                              bins=30,label='Credit.Policy=0')
plt.legend()
plt.xlabel('FICO')

plt.figure(figsize=(10,6))
loans[loans['not.fully.paid']==1]['fico'].hist(alpha=0.5,color='blue',
                                               bins=30,label='not.fully.paid=1')
loans[loans['not.fully.paid']==0]['fico'].hist(alpha=0.5,color='red',
                                               bins=30,label='not.fully.paid=0')
plt.legend()
plt.xlabel('FICO')

plt.figure(figsize=(11,7))
sns.countplot(x='purpose',hue='not.fully.paid',data=loans,palette='Set1')

sns.jointplot(x='fico',y='int.rate',data=loans,color='purple')

# **The fico vs. int.rate trend differs between the not.fully.paid and credit.policy groups.**

plt.figure(figsize=(11,7))
sns.lmplot(y='int.rate',x='fico',data=loans,hue='credit.policy',
           col='not.fully.paid',palette='Set1')

# # Setting up the Data

loans.info()

# ## Categorical Features
#
# We need to transform the categorical columns into dummy variables so that sklearn can work with them. We use pd.get_dummies.
cat_feats = ['purpose']

final_data = pd.get_dummies(loans, columns=cat_feats, drop_first=True)

final_data.head()

final_data.info()

# ## Train Test Split

from sklearn.model_selection import train_test_split

X = final_data.drop('not.fully.paid', axis=1)
y = final_data['not.fully.paid']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)

# ## Training a Decision Tree Model

from sklearn.tree import DecisionTreeClassifier
from sklearn import tree

dtree = DecisionTreeClassifier()

dtree.fit(X_train, y_train)

a = tree.plot_tree(dtree)

# ## Predictions and Evaluation of Decision Tree

predictions = dtree.predict(X_test)

from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score

accuracy_score(y_test, predictions)

precision_score(y_test, predictions)

recall_score(y_test, predictions)

print(classification_report(y_test, predictions))

print(confusion_matrix(y_test, predictions))

# ## Training the Random Forest model

from sklearn.ensemble import RandomForestClassifier

rfc = RandomForestClassifier(n_estimators=1000)

rfc.fit(X_train, y_train)

# ## Predictions and Evaluation

predictions = rfc.predict(X_test)

from sklearn.metrics import classification_report, confusion_matrix

print(classification_report(y_test, predictions))

# **Show the Confusion Matrix for the predictions.**

print(confusion_matrix(y_test, predictions))

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# We have already seen lists and how they can be used. Now that you have some more background, I will go into more detail about lists. First we will look at more ways to get at the elements in a list, and then we will talk about copying them.
#
# Here are some examples of using indexing to access a single element of a list. Execute the cells below in order:

list = ['zero', 'one', 'two', 'three', 'four', 'five']

list[0]

list[4]

list[5]

# You should have gotten the results 'zero', 'four', and 'five'.
#
# All those examples should look familiar to you. If you want the first item in the list, just look at index 0. The second item is index 1, and so on through the list. However, what if you want the last item in the list? One way would be to use the `len` function, as in `list[len(list)-1]`. This works because `len` always returns the last index plus one. The second from the last would then be `list[len(list)-2]`. There is an easier way to do this: in Python the last item is always index -1, the second to the last is index -2, and so on. Here are some more examples:

list[len(list)-1]

# + active=""
# 'five'
# -

list[len(list)-2]

# + active=""
# 'four'
# -

list[-1]

# + active=""
# 'five'
# -

list[-2]

# + active=""
# 'four'
# -

list[-6]

# + active=""
# 'zero'
# -

# Thus any item in the list can be indexed in two ways: from the front and from the back.
#
# Another useful way to get at parts of lists is using slices.
# Here is another example to give you an idea of what they can be used for:

list = [0, 'Fred', 2, 'S.P.A.M.', 'Stocking', 42, "Jack", "Jill"]

list[0]

# + active=""
# 0
# -

list[7]

# + active=""
# 'Jill'
# -

list[0:8]

# + active=""
# [0, 'Fred', 2, 'S.P.A.M.', 'Stocking', 42, 'Jack', 'Jill']
# -

list[2:4]

# + active=""
# [2, 'S.P.A.M.']
# -

list[4:7]

# + active=""
# ['Stocking', 42, 'Jack']
# -

list[1:5]

# + active=""
# ['Fred', 2, 'S.P.A.M.', 'Stocking']
# -

# Slices are used to return part of a list. The slice operator has the form `list[first_index:following_index]`. The slice goes from `first_index` to the index before `following_index`. You can use both types of indexing:

list[-4:-2]

# + active=""
# ['Stocking', 42]
# -

list[-4]

# + active=""
# 'Stocking'
# -

list[-4:6]

# + active=""
# ['Stocking', 42]
# -

# Another trick with slices is the unspecified index. If the first index is not specified, the beginning of the list is assumed. If the following index is not specified, the whole rest of the list is assumed. Here are some examples:

list[:2]

# + active=""
# [0, 'Fred']
# -

list[-2:]

# + active=""
# ['Jack', 'Jill']
# -

list[:3]

# + active=""
# [0, 'Fred', 2]
# -

list[:-5]

# + active=""
# [0, 'Fred', 2]
# -

# Here is a program example (the bold markers, stripped in this copy, have been restored as "<B>" and "</B>" so the output below matches):

# +
poem = ["<B>", "Jack", "and", "Jill", "</B>", "went", "up", "the", "hill",
        "to", "<B>", "fetch", "a", "pail", "of", "</B>", "water.", "Jack",
        "fell", "<B>", "down", "and", "broke", "</B>", "his", "crown", "and",
        "<B>", "Jill", "came", "</B>", "tumbling", "after"]


def get_bolds(list):
    ## is_bold tells whether or not we are currently looking at
    ## a bold section of text.
    is_bold = False
    ## start_block is the index of the start of either an unbolded
    ## segment of text or a bolded segment.
    start_block = 0
    for index in range(len(list)):
        ## Handle the start of bold text.
        if list[index] == "<B>":
            if is_bold:
                print("Error: Extra Bold")
            ## print "Not Bold:", list[start_block:index]
            is_bold = True
            start_block = index + 1
        ## Handle the end of bold text.
        ## Remember that the last number in a slice is the index
        ## after the last index used.
        if list[index] == "</B>":
            if not is_bold:
                print("Error: Extra Close Bold")
            print("Bold [", start_block, ":", index, "] ",
                  list[start_block:index])
            is_bold = False
            start_block = index + 1


get_bolds(poem)
# -

# with the output being:

# + active=""
# Bold [ 1 : 4 ] ['Jack', 'and', 'Jill']
# Bold [ 11 : 15 ] ['fetch', 'a', 'pail', 'of']
# Bold [ 20 : 23 ] ['down', 'and', 'broke']
# Bold [ 28 : 30 ] ['Jill', 'came']
# -

# The `get_bolds` function takes in a list that is broken into words and tokens. The tokens that it looks for are `<B>`, which starts the bold text, and `</B>`, which ends it. The function goes through the list and searches for the start and end tokens.
#
# The next feature of lists is copying them. If you try something simple like:

a = [1, 2, 3]

b = a

print(b)

# + active=""
# [1, 2, 3]
# -

b[1] = 10

print(b)

# + active=""
# [1, 10, 3]
# -

print(a)

# + active=""
# [1, 10, 3]
# -

# This probably looks surprising, since a modification to b resulted in a being changed as well. What happened is that the statement `b = a` makes b a *reference* to the same list that a refers to. This means that b and a are different names for the same list. Hence any modification to b changes a as well.
# However, some assignments don't create two names for one list:

a = [1, 2, 3]

b = a*2

print(a)

# + active=""
# [1, 2, 3]
# -

print(b)

# + active=""
# [1, 2, 3, 1, 2, 3]
# -

a[1] = 10

print(a)

# + active=""
# [1, 10, 3]
# -

print(b)

# + active=""
# [1, 2, 3, 1, 2, 3]
# -

# In this case, b is not a reference to a, since the expression `a*2` creates a new list. The statement `b = a*2` then gives b a reference to that new list rather than a reference to a. All assignment operations create a reference. When you pass a list as an argument to a function you create a reference as well. Most of the time you don't have to worry about creating references rather than copies. However, when you need to modify one list without changing another name of the same list, you have to make sure that you have actually created a copy.
#
# There are several ways to make a copy of a list. The simplest, which works most of the time, is the slice operator, since it always makes a new list even if it is a slice of the whole list:

a = [1, 2, 3]

b = a[:]

b[1] = 10

print(a)

# + active=""
# [1, 2, 3]
# -

print(b)

# + active=""
# [1, 10, 3]
# -

# Taking the slice [:] creates a new copy of the list. However, it only copies the outer list. Any sublist inside is still a reference to the sublist in the original list. Therefore, when the list contains lists, the inner lists have to be copied as well. You could do that manually, but Python already contains a module to do it. You use the deepcopy function of the copy module:

import copy

a = [[1, 2, 3], [4, 5, 6]]

b = a[:]

c = copy.deepcopy(a)

b[0][1] = 10

c[1][1] = 12

print(a)

# + active=""
# [[1, 10, 3], [4, 5, 6]]
# -

print(b)

# + active=""
# [[1, 10, 3], [4, 5, 6]]
# -

print(c)

# + active=""
# [[1, 2, 3], [4, 12, 6]]
# -

# First of all, notice that a is a list of lists. Then notice that when `b[0][1] = 10` is run, both a and b are changed, but c is not. This happens because the inner lists are still references when the slice operator is used. However, with deepcopy, c was fully copied.
#
# So, should I worry about references every time I use a function or `=`? The good news is that you only have to worry about references when using dictionaries and lists. Numbers and strings create references when assigned, but every operation on numbers and strings that modifies them creates a new copy, so you can never modify them unexpectedly. You do have to think about references when you are modifying a list or a dictionary.
#
# By now you are probably wondering why references are used at all. The basic reason is speed: it is much faster to make a reference to a thousand-element list than to copy all of its elements. The other reason is that it allows a function to modify the list or dictionary passed to it. Just remember about references if you ever have some weird problem with data being changed when it shouldn't be.

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# + [markdown] id="view-in-github" colab_type="text"
# Open In Colab

# + id="MZZ3Bu4XW2zM"
import numpy as np
import pandas as pd
import os

# + [markdown] id="jKltdexQkzhk"
# # DataTable

# + [markdown] id="zpBoDbjbXv2J"
# + ID =
#
# + SPOT_ID = (Microarray measurements) Alternative spot identifier - use only when no identifier or sequence tracking information is available.
This column is useful for designating control and empty features.
#
# + Full_Name = Full name of the printed material
#
# + Printed conc. (mg/ml) = Concentration of the printed solution in mg/ml
#
# + Supplier = Company that delivered the material
#
# + Cat.No. =
#
# + Lot.No. =
#
# + UniProt entry =

# + id="wrNJSSXBnWOY"
folder_path = '/content/drive/MyDrive/Autoimmune_Disease/Datasets/Autoantibodies_and_nucleic_acids/GSE69372_RAW'

# + colab={"base_uri": "https://localhost:8080/", "height": 423} id="ZGlRjSzfXxZP" outputId="ae68d55d-954d-4e85-cf11-30a0d833d457"
df0 = pd.read_csv(os.path.join(folder_path, 'DataTable.csv'))
df0

# + [markdown] id="DAHV1Lymh25T"
# # Raw Data (one patient)
# Microarray slides \
# IgM, IgG, C3 and C4 signals
# Bkg: local background \
# Spot: three parallel signal intensities
#
#
# + Normalization:
# 1. slide-to-slide normalization was applied separately to the two print batches of microarray slides
# 2. to compensate for possible differences between the two printing batches, values in a given batch were divided by the geometric means of values derived from the given antigen in control serum samples

# + colab={"base_uri": "https://localhost:8080/", "height": 505} id="qn2guAkOW_d1" outputId="37960c30-93f6-41d1-c325-a597756f93fa"
df1 = pd.read_csv(os.path.join(folder_path, "GSE69365-C3/GSM1698601_C3-1.txt"), sep="\t")
df1.columns = df1.columns.str.replace(' ', '')
df1

# + [markdown] id="pxcU2evDMZqM"
# ## Merge DataTable and Raw Data

# + colab={"base_uri": "https://localhost:8080/", "height": 748} id="lTvVZQU3MZOU" outputId="c0ef01f3-f88a-4fb2-fb88-3191ce4fc563"
df_all_1 = df0.merge(df1, on='ID', how='inner')
df_all_1.head(-20)

# + colab={"base_uri": "https://localhost:8080/", "height": 676} id="AsevsDN_Nfhe" outputId="731c36e6-4350-4d49-8390-dbc2a95bfb71"
df_all_1_sub = df_all_1[['SPOT_ID', 'Full_Name', 'Bkg.Median', 'Spot.Median']]
df_all_1_sub.head(20)

# + [markdown] id="fe8Ez3jsi4TP"
# # Processed Data (all patients)
#
# + Fluorescent signal intensity
#
# + Data Processing:
# The local background of each spot ('Bkg.Median') was subtracted from the signal value ('Spot.Median'). Values were multiplied by 65535 and rounded to integers. Negative signal values were clamped to an arbitrary value of 1.
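#
# The cell below is a minimal sketch of how that processing step could be
# reproduced from the raw columns merged above. The column names come from
# `df_all_1_sub`; the order of the scaling, rounding, and clamping steps is an
# assumption based on the description, not taken from the original pipeline.

# +
def process_raw_signal(df):
    """Background-subtract, rescale, and clamp raw spot intensities."""
    signal = df['Spot.Median'] - df['Bkg.Median']    # subtract local background
    signal = (signal * 65535).round().astype(int)    # rescale and round to integers
    signal[signal < 0] = 1                           # clamp negative values to 1
    return signal

# Example usage on the merged single-patient table:
# df_all_1_sub['Processed'] = process_raw_signal(df_all_1_sub)
# -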
# + colab={"base_uri": "https://localhost:8080/", "height": 487} id="F2FYnydkXaxn" outputId="57835a1e-e48a-4172-bc51-c9d427e0e255" df2 = pd.read_csv(os.path.join(folder_path,"expreData.csv")) df2 # + [markdown] id="IpPUoOUdOZYO" # ## Merge Processed Data # + colab={"base_uri": "https://localhost:8080/", "height": 713} id="QxH3BbWEOX-b" outputId="cb080afd-8d9c-420d-b519-c2a0d170fcf6" df_all_2 = pd.merge(df_all_1_sub, df2, left_index=True, right_index=True) df_all_2 # + [markdown] id="-5AjY48Gk2Zr" # # Demography # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="eZrKyghgjBCb" outputId="e22ad9ff-96b2-4529-f0fa-85755f880b38" df3 = pd.read_csv("/content/drive/MyDrive/Autoimmune Disease/Datasets/Autoantibodies and nucleic acids skew complement consumption in systemic lupus erythematosus [C4]/GSE69372_RAW/phenoData.csv") df3 # + [markdown] id="lvXKEdCiQGuk" # ## Merge Patients with Demography # + id="MaPKlVG6lCsx" colab={"base_uri": "https://localhost:8080/", "height": 523} outputId="edbdd08c-26f3-4e21-e908-199556ae10c0" df4 = df2.T df4.index = df4.index.str.replace('exprs.','') df4 # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="fZY29IjOQx_s" outputId="a74b91e1-06ad-4c05-8252-c874814ee39c" df5 = df3.set_index('geo_accession') df5 # + colab={"base_uri": "https://localhost:8080/"} id="n08UIdXSSVZP" outputId="cdf804a8-6669-4199-e14a-544ad11bdfdc" df5['gender:ch1'].value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="e9VN1vGFTRCy" outputId="068a5fcd-4fd1-451e-8789-97671196d158" df5['age in years:ch1'].value_counts() # + [markdown] id="_hPNBQAQVOQW" # # + NHS: normal human serum # # + SLE: systemic lupus erythematosus # # + SSc: systemic sclerosis # # + SjS: sjörgen's syndrome # # + UCTD: undifferentiated connective tissue disease # # + PsA: psoriatic arthritis # + colab={"base_uri": "https://localhost:8080/"} id="CcgpdIlPTWp5" outputId="c5949614-1835-41e4-81ce-57dc4f545b4a" df5["disease group (nhs-normal human serum; sle-systemic lupus erythematosus ;uctd-undifferentiated connective tissue disease; sjs-sjörgen's syndrome;ssc-systemic sclerosis; psa-psoriatic arthritis):ch1"].value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="mcPqREfNSMQ_" outputId="bcbd7b37-923d-431a-d55f-1dfb9f39ebbe" df5.columns # + colab={"base_uri": "https://localhost:8080/", "height": 455} id="q88vYYSoSNeU" outputId="2e8f38e8-1080-403a-ad30-ad32b660c7f0" df5_sub = df5[["age in years:ch1", "gender:ch1", "disease group (nhs-normal human serum; sle-systemic lupus erythematosus ;uctd-undifferentiated connective tissue disease; sjs-sjörgen's syndrome;ssc-systemic sclerosis; psa-psoriatic arthritis):ch1",]] df5_sub = df5_sub.rename(columns = {"disease group (nhs-normal human serum; sle-systemic lupus erythematosus ;uctd-undifferentiated connective tissue disease; sjs-sjörgen's syndrome;ssc-systemic sclerosis; psa-psoriatic arthritis):ch1":'Label'}) df5_sub = df5_sub.rename(columns = {"age in years:ch1":'Age'}) df5_sub = df5_sub.rename(columns = {"gender:ch1":'Gender'}) df5_sub # + colab={"base_uri": "https://localhost:8080/", "height": 467} id="jFCqUi7oRQ1f" outputId="3fee20c6-fd3c-4bfd-ef30-99be96030f95" df_all_3 = pd.merge(df4,df5_sub, left_index=True, right_index=True) df_all_3 # + colab={"base_uri": "https://localhost:8080/", "height": 487} id="LlsX6VYHSDL3" outputId="0a5b5a86-cc55-4178-b2c3-4cac89c67a1c" df_all_3_IgM = df_all_3.iloc[0:400,:] df_all_3_IgM # + colab={"base_uri": "https://localhost:8080/"} id="nTTZp1b3XM65" 
outputId="eab83dde-bf87-4591-fd3d-6b72eb2b5f6c" df_all_3.iloc[0,:] # + colab={"base_uri": "https://localhost:8080/"} id="NZLgEH5yXWIN" outputId="99fe55f5-88de-46db-b2ba-6f5fde0d663a" df_all_3.iloc[425,:] # + id="qVu-lE2fYjpS" # df_all_3[(df_all_3['Age']==35) & (df_all_3['Gender']=='Female') & (df_all_3['Label']=='SLE')] # + [markdown] id="9zWRKvmVafJ_" # # Groupby patients # + [markdown] id="CishH2cI7iot" # # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="lUnnzIDNZo2j" outputId="ea652d78-52e6-4bbc-dc08-9c94c7c37c2f" df3 # + colab={"base_uri": "https://localhost:8080/", "height": 467} id="I77PatGzaeDi" outputId="62715b49-81b8-44cc-f96d-2d6807a3d8d6" df4 # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="ueY1OqHVbugh" outputId="964423d0-da56-49d4-b3be-2bf88558e69b" df3_sub = df3[["title", "age in years:ch1", "gender:ch1", "disease group (nhs-normal human serum; sle-systemic lupus erythematosus ;uctd-undifferentiated connective tissue disease; sjs-sjörgen's syndrome;ssc-systemic sclerosis; psa-psoriatic arthritis):ch1",]] df3_sub = df3_sub.rename(columns = {"disease group (nhs-normal human serum; sle-systemic lupus erythematosus ;uctd-undifferentiated connective tissue disease; sjs-sjörgen's syndrome;ssc-systemic sclerosis; psa-psoriatic arthritis):ch1":'Label'}) df3_sub = df3_sub.rename(columns = {"age in years:ch1":'Age'}) df3_sub = df3_sub.rename(columns = {"gender:ch1":'Gender'}) df3_sub # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="t3AtmKFmcUL5" outputId="12f15107-6f2d-40a2-fb08-56e687a0d5a9" df3_C3 = df3_sub.iloc[0:425,:] df3_C4 = df3_sub.iloc[425:850,:] df3_IgG = df3_sub.iloc[850:1275,:] df3_IgM = df3_sub.iloc[1275:1700,:] df3_C3 # + colab={"base_uri": "https://localhost:8080/", "height": 487} id="DTmACuSMeHKw" outputId="0bffd0c4-4b15-47ed-cd79-3051689711f6" df4_C3 = df4.iloc[0:425,:].reset_index().drop(columns="index").add_prefix('C3_') df4_C4 = df4.iloc[425:850,:].reset_index().drop(columns="index").add_prefix('C4_') df4_IgG = df4.iloc[850:1275,:].reset_index().drop(columns="index").add_prefix('IgG_') df4_IgM = df4.iloc[1275:1700,:].reset_index().drop(columns="index").add_prefix('IgM_') df4_C3 # + colab={"base_uri": "https://localhost:8080/", "height": 487} id="bI5Ylo6ogM2F" outputId="1e0eeafc-b5eb-4667-9bfd-0a121007caa0" df_final_all = pd.merge(df4_C3, df4_C4, left_index=True, right_index=True) df_final_all = pd.merge(df_final_all, df4_IgG, left_index=True, right_index=True) df_final_all = pd.merge(df_final_all, df4_IgM, left_index=True, right_index=True) df_final_all = pd.merge(df_final_all, df3_C3, left_index=True, right_index=True) df_final_all = df_final_all.drop(columns="title") df_final_all # + colab={"base_uri": "https://localhost:8080/", "height": 142} id="KmocDcWJjkpi" outputId="374ab619-7a04-49de-d6f8-311fc57c4b72" df_final_all[df_final_all.isnull().T.any()] # + id="CSilMS6TWPCP" df_final_all.to_csv('/content/drive/MyDrive/Autoimmune Disease/Datasets/Autoantibodies and nucleic acids skew complement consumption in systemic lupus erythematosus [C4]/GSE69372_RAW/AllMergedData.csv',index=False) # + [markdown] id="WG17oLV4oYa0" # # New Processed Data # + id="XPVG9UYkobYZ" colab={"base_uri": "https://localhost:8080/", "height": 438} outputId="8cba72c9-fae7-45fa-b0b1-29e20e42112a" df_all_1.head() # + colab={"base_uri": "https://localhost:8080/", "height": 487} id="Wp4XOHGwR1wR" outputId="f4b9e149-fcf5-4460-984c-ad" df_all_2_new = df_all_2.drop(columns=['Bkg.Median','Spot.Median','Full_Name','SPOT_ID']) # df_all_2_new = 
df_all_2_new.transform(data_transform) df_all_2_new # + id="UII--OMNUGnk" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Intro to reimbursements: overview with visualization # # This notebook provides an overview of the `2017-03-15-reimbursements.xz` dataset, which contains broad data regarding CEAP usage in all terms since 2009. # # It aims to provide an example of basic analyses and visualization by exploring topics such as: # # - Average monthly spending per congressperson along the years # - Seasonality in reimbursements # - Reimbursements by type of spending # - Which party has the most spending congressmen? # - Which state has the most spending congressmen? # - Who were the most hired suppliers by amount paid? # - Which were the most expensive individual reimbursements? # # Questions are not explicitly answered. Charts and tables are provided for free interpretation, some of them with brief commentaries from the author. # # **Obs**.: original analysis was made considering data from 2009 to 2017 (mainly until 2016). One might want to filter by terms (e.g. 2010-2014) to make more realistic comparisons (spenditures by state, party, congressperson, etc.). Code cell #4 provides an example of how it could be done. # # --- # + import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt from pylab import rcParams # %matplotlib inline # Charts styling plt.style.use('ggplot') rcParams['figure.figsize'] = 15, 8 matplotlib.rcParams.update({'font.size': 14}) #rcParams['font.family'] = 'Georgia' # Type setting for specific columns #DTYPE = dict(cnpj=np.str, cnpj_cpf=np.str, ano=np.int16, term=np.str) # Experimenting with 'category' type to reduce df size DTYPE =dict(cnpj_cpf=np.str,\ year=np.int16,\ month=np.int16,\ installment='category',\ term_id='category',\ term='category',\ document_type='category',\ subquota_group_id='category',\ subquota_group_description='category',\ #subquota_description='category',\ subquota_number='category',\ state='category',\ party='category') # - reimbursements = pd.read_csv('../data/2017-03-15-reimbursements.xz', \ dtype=DTYPE, low_memory=False, parse_dates=['issue_date']) # Creates a DataFrame copy with fewer columns r = reimbursements[['year', 'month', 'total_net_value', 'party', 'state', 'term', 'issue_date',\ 'congressperson_name', 'subquota_description','supplier', 'cnpj_cpf']] r.head() # ## Filters depending on the scope of analysis # Here, filters by state, party, years, etc. can be applied. # # Obs.: chart commentaries provided might not remain valid depending on filters chosen. # + # Filters only most recent years (from 2015) #r = r[(r.year == 2015) | (r.year == 2016) | (r.year == 2017)] #r.head() # - # ## Questions & answers # ### Evolution of average monthly spending along the years # Are congressmen spending more today in relation to past years? # #### How many congressmen in each year? # + years = r.year.unique() # Computes unique names in each year and saves into a pd.Series d = dict() for y in years: d[y] = r[r.year == y].congressperson_name.nunique() s = pd.Series(d) s # - s.plot(kind='bar') plt.title('Qtdy of congressmen listed per year') # ##### Commentary # Greater number of congressmen in 2011 and 2015 is due to term transitions which occur during the year. # # --- # #### How much did they spend, in average, per month in each year? 
# This analysis takes into consideration the following elements: # # - Main data: # - Monthly average spending per congressman during each year # - Relevant aspects for trend comparison: # - CEAP limit for each year (i.e. the maximum allowed quota increased during the years) # - Inflation indexes (i.e. prices of goods raised during the years) # ##### Evolution of inflation (IPCA) # + # Source: http://www.ibge.gov.br/home/estatistica/indicadores/precos/inpc_ipca/defaultseriesHist.shtm ipca_years = [2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016] ipca_indexes = [0.0431, 0.0590, 0.0650, 0.0583, 0.0591, 0.0641, 0.1067, 0.0629] ipca = pd.DataFrame({ 'year': ipca_years, 'ipca': ipca_indexes }) # Filters only by years in dataset ipca = ipca[ipca['year'].isin(r.year.unique())].set_index('year') ipca.head() # - # ##### Maximum quota allowed (CEAP limits) # There is information available for maximum CEAP for 2009 and 2017. Therefore, a simple compound growth rate (CAGR) is calculated from 2009 to 2017. Values for years in between are assumed to be a linear composition of the growth rate. # + states = ['AC', 'AL', 'AM', 'AP', 'BA', 'CE', 'DF', 'ES', 'GO', 'MA', 'MG', 'MS', 'MT', 'PA', 'PB', 'PE', 'PI', 'PR', 'RJ', 'RN', 'RO', 'RR', 'RS', 'SC', 'SE', 'SP', 'TO'] # Source: http://www2.camara.leg.br/a-camara/estruturaadm/diretorias/dirgeral/estrutura-1/deapa/portal-da-posse/ceap-1 ceap_2009 = [40711.32, 37318.73, 39734.17, 39554.50, 35540.51, 38705.50, 27977.66, 34080.83, 32317.69, 38429.49, 32856.38, 36949.65, 35924.24, 38499.17, 38319.91, 37992.68, 37344.18, 35412.67, 32550.32, 38963.25, 39828.33, 41612.80, 37256.00, 36337.92, 36578.43, 33730.95, 35993.76] # Source: http://www2.camara.leg.br/comunicacao/assessoria-de-imprensa/cota-parlamentar ceap_2017 = [44632.46, 40944.10, 43570.12, 43374.78, 39010.85, 42451.77, 30788.66, 37423.91, 35507.06, 42151.69, 36092.71, 40542.84, 39428.03, 42227.45, 42032.56, 41676.80, 40971.77, 38871.86, 35759.97, 42731.99, 43672.49, 45612.53, 40875.90, 39877.78, 40139.26, 37043.53, 39503.61] ceap_limit_states = pd.DataFrame({ 'ceap_2009': ceap_2009, 'ceap_2017': ceap_2017 }, index=states) ceap_limit_states.head() # + all_years = ipca_years # Calculates CAGR according to data available (CEAP@2009 and CEAP@2017), using the CEAP average among states cagr = ((ceap_limit_states.ceap_2017.mean() / ceap_limit_states.ceap_2009.mean())**(1./(2017-2009)) - 1) # Computes estimated CEAP values for years in between 2009 and 2017 using CAGR ceap_values = [] for i in range(2017-2009): if i == 0: ceap_values.append(ceap_limit_states.ceap_2009.mean()) elif i == (r.year.nunique() - 1): ceap_values.append(ceap_limit_states.ceap_2017.mean()) else: ceap_values.append(ceap_values[i-1] * (1 + cagr)) # Creates df with all years ceap_limit_years = pd.DataFrame({ 'year': all_years, 'max_avg_ceap': ceap_values }) # Filters only by years in dataset ceap_limit_years = ceap_limit_years[ceap_limit_years['year'].isin(r.year.unique())].set_index('year') ceap_limit_years.head() # + # Groups by name summing up spendings a = r.groupby(['year']).sum().drop('month', 1) a['congressmen_qty'] = s a['avg_monthly_value_per_congressmen'] = a['total_net_value'] / a['congressmen_qty'] / 12 a = a.drop(2017, 0) # Neglets 2017 # Adds columns for CEAP limits and IPCA indexes a['max_avg_ceap'] = ceap_limit_years['max_avg_ceap'] a['pct_of_quota_used'] = (a['avg_monthly_value_per_congressmen'] / a['max_avg_ceap']) * 100 a['ipca'] = ipca['ipca'] a['acc_ipca'] = (a['ipca'] + 1).cumprod() - 1 a # + # Procedure to handle 
secondary Y axis fig0, ax0 = plt.subplots() ax1 = ax0.twinx() y0 = a[['avg_monthly_value_per_congressmen', 'max_avg_ceap']].plot(kind='line', ax=ax0)#, label='Itens vendidos') y1 = (a['acc_ipca']*100).plot(kind='line', secondary_y=False, style='g--', ax=ax1)#, label='Preço unitário') y0.legend(loc=2) # bar legend to the left y1.legend(loc=1) # line legend to the right y0.set_ylim((0,50000)) #y1.set_ylim((0,50000)) y0.set_ylabel('CEAP usage and limit (R$)') y1.set_ylabel('Accumulated IPCA index (%)') plt.title('Avg. monthly congressmen spending vs. maximum quota and inflation idx.') plt.show() plt.close() # - # ##### Commentary # Although average spending has increased along the years, it can be due to both aspects considered: raises in prices and expanded limit for reimbursements. # # The next chart shows how spending has increased with respect to quota limits. a.pct_of_quota_used.plot() plt.ylim((0,100)) plt.title('Fluctuation of monthly CEAP spending per congressperson (% of max. quota)') # ##### Commentary # The chart shows that average spending has increased more than quota limits were raised (from ca. 40% to 60% of quota usage). This might be due to the steep rise in inflation levels, as observed in the previous chart. # # --- # ### Average monthly spending per congressperson along the years # This table shows the data above detailed per congressperson. # + # Groups by name summing up spendings a = r.groupby(['congressperson_name', 'year'])\ .sum()\ .drop('month', 1) # Computes average spending per month and unstacks a['monthly_total_net_value'] = a['total_net_value'] / 12 a = a.drop('total_net_value', 1).unstack() # Creates subtotal column to the right a['mean'] = a.mean(axis=1) a.head() # - # ### Seasonality in reimbursements # Out of curiosity,in which period of the year more reimbursements were issued? # + r.groupby('month')\ .sum()\ .total_net_value\ .sort_index()\ .plot(kind='bar', rot=0) plt.title('Fluctuation of reimbursements issued by months (R$)') # - # ### Reimbursements by type of spending # For what are congressmen most using their quota? # + r.groupby('subquota_description')\ .sum()\ .total_net_value\ .sort_values(ascending=True)\ .plot(kind='barh') plt.title('Total spent by type of service (R$)') # - # ##### Commentary # This chart makes it clear what is prioritized by congressmen: publicity of their activity. Voters might judge whether this choice is reasonable or not. # # --- # ### Which party has the most spending congressmen? # ##### How many congressmen in each party? parties = r.party.unique() parties # + # Computes unique names in each state and saves into a pd.Series d = dict() for p in parties: d[p] = r[r.party == p].congressperson_name.nunique() s = pd.Series(d) s # - # #### How much did congressmen from each party spend in the year, in average? # + t = r.groupby('party').sum() t = t.drop(['year', 'month'], 1) # Removes useless columns t['congressmen_per_party'] = s years = r.year.nunique() # - t['monthly_value_per_congressperson'] = t['total_net_value'] / t['congressmen_per_party'] / (12*years) t.sort_values(by='monthly_value_per_congressperson', ascending=False).head() # + t.monthly_value_per_congressperson\ .sort_values(ascending=False)\ .plot(kind='bar') plt.title('Average monthly reimbursements per congressperson by party (R$)') # - # ##### Commentary # It is important to note that many congressmen change parties frequently. 
Therefore, anyone interested in drawing conclusions regarding parties might want to analyse the data in further detail than it is presented here. # # --- # ### Which state has the most spending congressmen? # ##### How many congressmen in each state? states = r.state.unique() states # + # Computes unique names in each party and saves into a pd.Series d = dict() for s in states: d[s] = r[r.state == s].congressperson_name.nunique() s = pd.Series(d) s # - # #### How much did congressmen from each party spend in the year, in average? # ##### (!) Important: CEAP maximum value differs among states # As already commented previously, CEAP max. quota varies among state, according to: http://www2.camara.leg.br/comunicacao/assessoria-de-imprensa/cota-parlamentar, # CEAP maximum values from 2017 ceap_states = ceap_limit_states.drop('ceap_2009',1) ceap_states.columns = ['monthly_max_ceap'] # Renames column to be compatible to code below ceap_states.head() # + t = r.groupby('state').sum() t = t.drop(['year', 'month'], 1) # Removes useless columns t['congressmen_per_state'] = s t['monthly_max_ceap'] = ceap_states years = r.year.nunique() # + t['monthly_value_per_congressperson'] = t['total_net_value'] / t['congressmen_per_state'] / (12*years) t['ceap_usage'] = (t['monthly_value_per_congressperson'] / t['monthly_max_ceap']) * 100 t.sort_values(by='ceap_usage', ascending=False).head() # + t.ceap_usage\ .sort_values(ascending=False)\ .plot(kind='bar', rot=0) plt.title('Average monthly CEAP usage per congressperson by state (% of max. quota)') # - # #### Comparison between given state and the country's average t.head() country_average = t.ceap_usage.mean() country_average # Parametrizes single state analysis state = 'SP' state_average = t.loc[state].ceap_usage state_average s = pd.Series() s['average_all_states'] = country_average s[state] = state_average s s.plot(kind='bar', rot=0) plt.title('Average monthly CEAP usage per congressperson: ' + state + ' vs. rest of the country (% of max. quota)') # ### Who were the top spenders of all time in absolute terms? r.groupby('congressperson_name')\ .sum()\ .total_net_value\ .sort_values(ascending=False)\ .head(10) # + r.groupby('congressperson_name')\ .sum()\ .total_net_value\ .sort_values(ascending=False)\ .head(30)\ .plot(kind='bar') plt.title('Total reimbursements issued per congressperson (all years)') # - # ##### Commentary # Because the dataset comprises 2009-2017, it might not be reasonable to draw any hard conclusions by looking to this chart alone. Some congressmen might have been elected for longer periods and that would reflect on higher reimbursement total values. # # For a more detailed - hence coherent - analysis, one might want to make this comparison for each term (e.g. 2010-2014). That would better identify "top spenders" by comparing congressmen spendings on the same time period. # # Another interesting analysis can be made by expanding the chart to all congressmen, not only the top 30. This enables a richer look at how discrepant top spenders are from the rest. To do that, just change `.head(30)\` argument in the previous cell. # # --- # ### Who were the most hired suppliers by amount paid? # This analysis identifies suppliers by their unique CNPJ. It is worth noting that, commonly, some telecom carriers use different CNPJ for its subsidiaries in different states (e.g. TIM SP, TIM Sul, etc). 
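#
# One way to group such subsidiaries together is to aggregate by the CNPJ root
# (the first 8 digits identify the company; the remaining digits encode the
# branch and check digits). The sketch below is only an illustration: it assumes
# `cnpj_cpf` is stored as a zero-padded 14-character string for companies, and it
# simply skips the 11-digit CPFs of individual suppliers.

# +
companies = r[r.cnpj_cpf.str.len() == 14].copy()
companies['cnpj_root'] = companies.cnpj_cpf.str[:8]

companies.groupby('cnpj_root')\
         .total_net_value\
         .sum()\
         .sort_values(ascending=False)\
         .head(10)
# -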
# + sp = r.groupby(['cnpj_cpf', 'supplier', 'subquota_description'])\ .sum()\ .drop(['year', 'month'], 1)\ .sort_values(by='total_net_value', ascending=False) sp.reset_index(inplace=True) sp = sp.set_index('cnpj_cpf') sp.head() # + cnpj = r.groupby('cnpj_cpf')\ .sum()\ .drop(['year', 'month'], 1)\ .sort_values(by='total_net_value', ascending=False) cnpj.head() # + # Adds supplier name besides total_net_value in cnpj df cnpj['supplier'] = '' # Creates empty column cnpj = cnpj.head(1000) # Gets only first 1000 for this analysis # + # Looks up for supplier names in sp df and fills cnpj df (it might take a while to compute...) for i in range(len(cnpj)): try: cnpj.set_value(cnpj.index[i], 'supplier', sp.loc[cnpj.index[i]].supplier.iloc[0]) except: cnpj.set_value(cnpj.index[i], 'supplier', sp.loc[cnpj.index[i]].supplier) cnpj.head(10) # + # Fixes better indexing to plot in a copy sp2 = cnpj.set_index('supplier') sp2.head(30)\ .plot(kind='bar') plt.title('Most hired suppliers (unique CNPJ) by total amount paid (R$)') # - # ##### Commentary # In general, telecom carries were the suppliers with higher concentration of reimbursements. It is worth noting, however, that Telecommunication subquota accounts for only 8% of the reimbursents. This might suggest a 'long tail' pattern for other subquota types such as publicity, which accounts for 28% of all reimbursements. # # Another aspect worth noting is the fact that some individual suppliers ("pessoas físicas") appear as top 15 suppliers (e.g. Mr. and Mrs. ). One might wonder if such concentration of reimbursements for single-person suppliers is reasonable. pct_telecom = r[r['subquota_description'] == 'Telecommunication'].total_net_value.sum() / r.total_net_value.sum() pct_telecom pct_publicity = r[r['subquota_description'] == 'Publicity of parliamentary activity'].total_net_value.sum() / r.total_net_value.sum() pct_publicity # #### Congressmen that hired the top supplier and how much they paid r.groupby(['cnpj_cpf', 'congressperson_name'])\ .sum()\ .sort_values(by='total_net_value', ascending=False)\ .loc['02558157000162']\ .total_net_value\ .head(20) # ### Which are the most expensive individual reimbursements? r = r.sort_values(by='total_net_value', ascending=False) r.head(20) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # ## Setup # %load_ext sql # %config SqlMagic.autocommit=False # avoiding the error: FAILED: IllegalStateException COMMIT is not supported yet. 
# %sql hive://hadoop@localhost:10000/ # # Time It # ## Count all rows # %time %sql select count(*) from movielens.ratings # %time %sql select count(*) from movielens_parquet.ratings # %time %sql select count(*) from movielens_parquet_compressed.ratings # ## Get max(userid) # %time %sql select max(userid) from movielens.ratings # %time %sql select max(userid) from movielens_parquet.ratings # %time %sql select max(userid) from movielens_parquet_compressed.ratings # ## Save all Ratings into a Variable # %%time # test_1 = %sql select userid from movielens.ratings # %%time # test_2 = %sql select userid from movielens_parquet.ratings # %%time # test_3 = %sql select userid from movielens_parquet_compressed.ratings # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # %pylab inline # %load_ext autoreload # %autoreload 2 # # Imports from io import StringIO import json import re import unicodedata from collections import namedtuple from collections import defaultdict from IPython.display import Image import networkx as nx import matplotlib.pyplot as plt import matplotlib.image as img import pandas as pd import numpy as np from bs4 import UnicodeDammit import html2text pd.options.display.max_rows = 999 import pydot from stanford_corenlp_pywrapper import sockwrap # # CoreNLP # ## Start Server # + CORENLP_DIR = '/home/blannon/src/stanford-corenlp-full-2015-04-20' CORENLP_VER = '3.5.2' CORENLP_JARS = [ '{d}/stanford-corenlp-{v}.jar'.format(d=CORENLP_DIR, v=CORENLP_VER), '{d}/stanford-corenlp-{v}-models.jar'.format(d=CORENLP_DIR, v=CORENLP_VER), ] # + CORENLP_CONFIG = { 'annotators': 'tokenize, ssplit, pos, lemma, ner, parse', 'sutime.markTimeRanges': 'true', 'sutime.includeRange': 'true', 'ssplit.newlineIsSentenceBreak': 'two', 'parse.flags': '-makeCopulaHead' } OUTPUT_TYPES = ['pos, lemmas, parse, deps_basic, deps_cc, ner, normner'] # + CORENLP_NN_CONFIG = { 'annotators': 'tokenize, ssplit, pos, lemma, ner, depparse', 'sutime.markTimeRanges': True, 'sutime.includeRange': True, 'ssplit.newlineIsSentenceBreak': 'two', 'parse.flags': '-makeCopulaHead' } NN_OUTPUT_TYPES = ['pos, lemmas, deps_cc, ner, normner'] # - parser = sockwrap.SockWrap( mode=None, corenlp_jars=CORENLP_JARS, corenlp_dir=CORENLP_DIR, configdict=CORENLP_CONFIG, output_types=OUTPUT_TYPES ) nn_parser = sockwrap.SockWrap( mode=None, corenlp_jars=CORENLP_JARS, corenlp_dir=CORENLP_DIR, configdict=CORENLP_NN_CONFIG, output_types=NN_OUTPUT_TYPES ) # + #parser.kill_proc_if_running() # + #nn_parser.kill_proc_if_running() # - # ## Parse test_text = "From 1981 to 1983, Vicki served as an assistant city representative at the National Center for Municipal Development." parsed = parser.parse_doc(test_text) sent = parsed['sentences'][0] parsed # # Utils pdg = pydot.Dot() # + # pdg.write_dot? 
# - TokenNode = namedtuple('TokenNode', ('index', 'token', 'pos', 'lemma', 'ner', 'char_offsets')) def build_dep_graph(sent, dep_type='cc'): token_nodes = [TokenNode(i,t,p,l,n,tuple(o)) for i,t,p,l,n,o in zip(xrange(len(sent['tokens'])), sent['tokens'], sent['pos'],sent['lemmas'], sent['ner'], sent['char_offsets'])] token_lookup = {i:t for i,t in enumerate(token_nodes)} #token_lookup[-1] = 'ROOT' dg = nx.DiGraph() for tn in token_nodes: dg.add_node(tn, label=tn.lemma, ner=tn.ner, pos=tn.pos) sorted_deps = sorted(sent['deps_'+dep_type], key=lambda x: x[0]) for rel, lix, rix in sorted_deps: try: lnode = token_lookup[lix] rnode = token_lookup[rix] dg.add_edge(lnode, rnode, label=rel.replace(':','_')) except KeyError: continue for e in dg.selfloop_edges(): dg.remove_edge(*e) return dg def display_parse(dep_graph, filename): pdg = pydot.Dot() for u,v in dep_graph.edges(): ulabel = '{lemma}-{index}'.format(**u.__dict__) vlabel = '{lemma}-{index}'.format(**v.__dict__) pdg.add_edge(pydot.Edge(ulabel,vlabel,**dep_graph.edge[u][v])) pdg.write_png('images/{fn}.png'.format(fn=filename), prog='dot') pdg.write_dot('images/{fn}.dot'.format(fn=filename), prog='dot') def subtree_to_string(head, dg): others = [d for d in nx.algorithms.descendants(dg, head) if dg[head].get(d,{'label':''})['label'] != 'case'] linearized = sorted([head,] + others, key=lambda x: x.index) return ' '.join([t.token for t in linearized]) def simple_pas(predicate, dg): arguments = dg[predicate] _pas = defaultdict(list) for arg, rel in arguments.items(): _pas[rel['label']].append(subtree_to_string(arg, dg)) _pas[u'predicate'] = predicate.token return dict(_pas) def collect_all_predicates(dg): predicates = [n for n in nx.topological_sort_recursive(dg) if n.pos.startswith('V')] return [simple_pas(p, dg) for p in predicates] # # Build DAG parsed = parser.parse_doc(test_text) mydg = build_dep_graph(parsed['sentences'][0]) ner_nodes = [n for n in mydg.nodes() if n.ner != 'O'] sorted(ner_nodes, key=lambda x: x.index) mydg.edges(nbunch=ner_nodes,data=True) # # Display a parse display_parse(build_dep_graph(parsed['sentences'][0]), 'test') Image('images/test.png') # # Examples simple = "From 1981 to 1983, Vicki served as an assistant city representative at the National Center for Municipal Development." copula = "She was the assistant to the executive director of the Democratic Study Group in the US House of Representatives from 1979 to 1981." twoverb = "Vicki has also served as a government relations consultant, representing the interests of Portland, Oregon in Washington DC." smallclause = "The Department of Agriculture had appointed Vicki president" relclause_subj = "The clients whom Vicki has represented include Coca-Cola, Texaco, and Giant Foods." subclause = "Vicki lobbied for health insurance companies that supported Obamacare" passive = "Giant is represented by Victoria Cram as of June 3, 2006." print ' '.join([simple, copula]) # # PCFG # ## simple simple_parse = parser.parse_doc(simple) simple_dg_cc = build_dep_graph(simple_parse['sentences'][0]) display_parse(simple_dg_cc, 'pcfg-simple') Image('images/pcfg-simple.png') collect_all_predicates(simple_dg_cc) tn = simple_dg_cc.nodes()[0] pd.DataFrame.from_records(simple_dg_cc.nodes(), columns=simple_dg_cc.nodes()[0].__dict__.keys()).sort('index') tn.__dict__.keys() tn.__dict__ # ## copula # Note: "1979 - 1981" got parsed wrong! 
copula_parse = parser.parse_doc(copula)

copula_dg_cc = build_dep_graph(copula_parse['sentences'][0])

display_parse(copula_dg_cc, 'pcfg-copula')

Image('images/pcfg-copula.png')

collect_all_predicates(copula_dg_cc)

# ## twoverb

twoverb_parse = parser.parse_doc(twoverb)

twoverb_dg_cc = build_dep_graph(twoverb_parse['sentences'][0])

display_parse(twoverb_dg_cc, 'pcfg-twoverb')

Image('images/pcfg-twoverb.png')

collect_all_predicates(twoverb_dg_cc)

# ## smallclause

smallclause_parse = parser.parse_doc(smallclause)

smallclause_dg_cc = build_dep_graph(smallclause_parse['sentences'][0])

display_parse(smallclause_dg_cc, 'pcfg-smallclause')

Image('images/pcfg-smallclause.png')

collect_all_predicates(smallclause_dg_cc)

# ## relclause

relclause_parse = parser.parse_doc(relclause_subj)

relclause_dg_cc = build_dep_graph(relclause_parse['sentences'][0])

display_parse(relclause_dg_cc, 'pcfg-relclause')

Image('images/pcfg-relclause.png')

nx.algorithms.traversal.bfs_successors

collect_all_predicates(relclause_dg_cc)

# ## subclause

subclause_parse = parser.parse_doc(subclause)

subclause_dg_cc = build_dep_graph(subclause_parse['sentences'][0])

display_parse(subclause_dg_cc, 'pcfg-subclause')

Image('images/pcfg-subclause.png')

collect_all_predicates(subclause_dg_cc)

# ## passive

passive_parse = parser.parse_doc(passive)

passive_dg_cc = build_dep_graph(passive_parse['sentences'][0])

display_parse(passive_dg_cc, 'pcfg-passive')

Image('images/pcfg-passive.png')

collect_all_predicates(passive_dg_cc)

nx.topological_sort(simple_dg_cc)

nx.topological_sort_recursive(simple_dg_cc)

nx.topological_sort(twoverb_dg_cc)

nx.topological_sort_recursive(twoverb_dg_cc)

# # Test data

h2t = html2text.HTML2Text()
h2t.body_width = 0
h2t.unicode_snob = 1
h2t.emphasis_mark = ''


# +
def asciify(bio):
    asciitext = re.sub(r'[Aa]\?\?', ' ', unicodedata.normalize('NFD', bio).encode('ascii', 'replace'))
    return asciitext

def filter_lists(asciitext):
    for l in re.split(r'(\r\n|\n)', asciitext):
        lstrip = l.strip()
        if len(lstrip) > 0:
            if lstrip[0] not in ['*', '-']:
                yield l

def clean_bio(bio):
    text = h2t.handle(bio)
    return '\n'.join([l for l in filter_lists(text)])
# -

manatt_data = json.load(open('data/manatt-out-html-full.json'))

manatt_data[0]

s = manatt_data[0]['bio'][2]

h2t.handle(s)

h2t.handle(manatt_data[0]['bio'][0]).strip()


def parse_to_predicates(_parsed):
    ## Yield the predicate-argument structures of every sentence in a parsed
    ## document, skipping sentences whose dependency graph cannot be processed.
    for sent in _parsed['sentences']:
        _dg = build_dep_graph(sent)
        try:
            for pas in collect_all_predicates(_dg):
                yield pas
        except Exception:
            continue


# +
def bio_to_parses(bio):
    parses = []
    for text in bio:
        for sent in parser.parse_doc(clean_bio(text))['sentences']:
            parses.append(sent)
    return parses

def process_data(d):
    d['parses'] = bio_to_parses(d['bio'])
    d['name'] = h2t.handle(d['name'][0])
    return d
# -

# %time processed = [process_data(d) for d in manatt_data[0:1]]

processed[0]

manatt_data[0]['bio'][-1]

p = nn_parser.parse_doc(clean_bio(manatt_data[0]['bio'][-1]))

display_parse(build_dep_graph(p['sentences'][-2]), 'test2')

Image('images/test2.png')

all_preds = [collect_all_predicates(build_dep_graph(p)) for p in processed[0]['parses']]

all_preds

display_parse(build_dep_graph(processed[0]['parses'][-2]), 'test')

Image('images/test.png')

dg = build_dep_graph(p['sentences'][-2])

pd.DataFrame.from_records(dg.nodes(), columns=dg.nodes()[0].__dict__.keys()).sort('index')

collect_all_predicates(dg)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     name: python3
# ---

# ### **0.
Quick introduction to jupyter notebooks** # * Each cell in this notebook contains either code or text. # * You can run a cell by pressing Ctrl-Enter, or run and advance to the next cell with Shift-Enter. # * Code cells will print their output, including images, below the cell. Running it again deletes the previous output, so be careful if you want to save some reuslts. # * You don't have to rerun all cells to test changes, just rerun the cell you have made changes to. Some exceptions might apply, for example if you overwrite variables from previous cells, but in general this will work. # * If all else fails, use the "Kernel" menu and select "Restart Kernel and Clear All Output". You can also use this menu to run all cells. # ### **0.5 Some hardware setup** # Keras uses all available GPUs in your computer. The following ```os.environ``` commands configures that only one of them should be used. If you are on a system with several GPUs and want to use more than one, you can change or comment out these commands. # # By default, Keras will allocate all of the available memory in the device. The last two lines will have Keras allocate memory as needed. # + import os import tensorflow as tf # If there are multiple GPUs and we only want to use one/some, set the number in the visible device list. os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"]="0" # This sets the GPU to allocate memory only as needed physical_devices = tf.config.experimental.list_physical_devices('GPU') if len(physical_devices) != 0: tf.config.experimental.set_memory_growth(physical_devices[0], True) # - # ### **1. Introduction to Keras** # Keras is a Python API (Application Programming Interface) for fast development of neural networks that sits on top of TensorFlow, the machine learning platform developed by Google. The full documentation, which you probably will have to search at some point, can be found at https://www.tensorflow.org/api_docs/python/tf/keras. # # To begin we should go through some of the essential terminology that you need to know for the rest of the assignment to go smoothly. # # 1. **Models** # # A Keras Model is, just like the name implies, the top-level object that describes your neural network architecture. This is the thing you create, train, and use to process data. It is very general and can be configured to perform essentially any task you want, such as image classification, text analysis, or continuous regression. # # # 2. **Layers** # # These are the fundamental building blocks of the Model. Each Model contains a list of Layers. The way these are connected to each other defines the architecture of the network. There is a huge number of Layers, such as ```Dense```, which is the same fully connected layer you implemented in assignment 1. Another very important Layer is ```Conv2D```, which performs 2-dimensional convolution of the input using some filter. # # # 3. **Optimizers, Losses, and Metrics** # # These are the functions and algorithms used to train and evaluate the Model. # * The Optimizer defines the algorithm used to update the weights using the gradients. In the first assignment you implemented stocastic gradient descent (SGD), which is one type of Optimizer. # * Losses are differentiable objective functions that compute the performance quantity that the model tries to minimize. One example of a loss function is Mean Squared Error, which you used in the first assignment. Another is Categorical Crossentropy (*aka.* log-loss) which we will use this time. 
# * Metrics are functions that compute the performance of the network in a way that is humanly understandable. Unlike the Losses, Metrics don't need to be differentiable since we don't use them in the gradient calculations. A perfect example of this is Accuracy; it's easily understood, but we can't use it as a Loss function. # # We will look at all of this in more detail further down. # ### **2. Loading the dataset** # For this introduction, we will use the MNIST dataset as for assignment 1. This time however, we will use the higher resolution images. We start by importing Keras and loading the dataset. # + from tensorflow import keras (X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data() print("Shape of training data:") print(X_train.shape) print(y_train.shape) print("Shape of test data:") print(X_test.shape) print(y_test.shape) # - # #### **Plotting images** # In order to create nice plots in Python we use a library called *matplotlib*. As the name suggests, this gives access to Matlab-like plot functions, although without the interactive elements. When we call ```plt.show()``` the current figure is rendered as a *.png* and displayed. # # Here we select some random examples from the X_train matrix and show them as images (you might need to run the cell twice to see the images). # + import matplotlib.pyplot as plt import numpy as np fig, axarr = plt.subplots(1, 5, figsize=(16,3)) for i in range(5): rnd = np.random.randint(low=0, high=X_train.shape[0]) img = X_train[rnd] axarr[i].imshow(img, cmap=plt.get_cmap('gray')) plt.show() # - # #### **Preparing the dataset** # We need to make some transformations to the data before we can use it for training. It's possible to use the data as is, but the results will not be very good. We will leave it as an exercice as to why that is the case. # # The first step is to change the labels from a number representation, i.e. 1,2,3 etc., to a ***one-hot encoding*** where each target is a vector with only one of the values set to 1, the rest 0. This represents the probability that the output from the network should try to mimic. The concept is the same as the D matrix from the first assignment. A minor difference is that in **D**, each target was a vector with a single 1 and the rest -1. This is because we used *tanh* as output activation, which outputs in the [-1, 1] range. In this assignment we use *softmax* activation in the output layer, which only give values beween 0 and 1, thus the one-hot encoding. # # *As a side note for those of you that are interested. While the change from a -1/1 vector to a 0/1 (one-hot) vector might not seem that significant at first, the gradient calculation is very nice when using both softmax activation and cross-entropy loss in the output layer, as several terms cancel. We will let Keras do all the work for us this time, but if you ever need to implement a general backpropagation algorithm from scratch, or you just really like partial derivatives :), this is something to look into.* # # Second, we intensity normalize the data to the [0,1] range, instead of [0,255]. This is not strictly necessary as all weights will just scale to account for the change, but the convergence is much faster. The reason for this is that Keras uses specific initialization schemes for the weights that expect the data to be [0,1] normalized. 
# + # Transform label indices to one-hot encoded vectors y_train_c = keras.utils.to_categorical(y_train, num_classes=10) y_test_c = keras.utils.to_categorical(y_test, num_classes=10) # Normalization of pixel values (to [0-1] range) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train /= 255 X_test /= 255 # Print some labels print("Label index --> one-hot encoding") for i in range(5): print(str(y_train[i]) + " --> " + str(y_train_c[i])) # - # ### **3. A first Keras model** # Now let's build our first Model for classifying the MNIST data. In Keras, there are two different ways to create a model. The first is using the **Sequential** API which can only build networks that have a single stack of layers. We can add new layers to the network by calling ```model.add(...)```. This is nice and simple but is limited to a single stack of layers. # # *Why would you need anything else, you might ask. Nowadays most models for deep learning connect layers not just sequentially, but also using longer connection that "skip" one or more layers before merging into the network again. These, called skip connections or residual connections, are very powerful but also outside the scope of this course. For a concrete example lookup the popular ResNet architecture.* # # The second way to build models is using the **Functional** API. In this, we treat each layer as an individual function and we manually use the output from one (or more) layer as the input to another layer. This gives much more flexibility in designing the network architecture. For this reason, we will use the Functional API even though the Sequential would do just fine for this assignment. This will give you more freedom in the final task and better prepare you for any future projects you want to do. # # We will begin by building a model very similar to the one from assignment 1, *i.e.* a two-layer fully connected (Dense) network. # + # Import some stuff we need from tensorflow.keras.models import Model from tensorflow.keras.layers import Input, Dense, Flatten # Create the input layer x_in = Input(shape=X_train.shape[1:]) # Create the rest of the layers x = Flatten()(x_in) x = Dense(64, activation='tanh')(x) x = Dense(10, activation='softmax')(x) # Create the model object model = Model(inputs=x_in, outputs=x) # - # We want you to really understand what is going on here, so let's go through this step by step. # 1. ```x_in = Input(shape=X_train.shape[1:])```: this creates the input layer using the ```Input``` class. This requires the shape of the input data, which is given by ```X_train.shape[1:]```. We use ```[1:]``` to select only the height and width of the images, skipping the number of images. If this is unfamiliar to you, we recommend section 3.1.3 in the official python tutorial: https://docs.python.org/3/tutorial/introduction.html. The output from the input layer, which we call ```x_in```, is a data structure that can be used as input to other layers. # # *As a side note, normally when we think of the output of a function we expect that to be some data that we can plot or visualize in some way, like the value 5, or a vector, etc. However, remember that we have not given any data to this model yet, so there is no data to visualize. Here instead we are defining how to process the input data that will eventually be given to the model. When feeding the input of a layer to another, we add a new operation to our process. Remember, a neural network is just a function. 
It can be a very complicated function, but a function nonetheless, and is therefore just a series of operations.* # # # 2. ```x = Flatten()(x_in)```: simply changes the shape of the input data. The MNIST dataset is images of size 28x28 pixels. However, the next ```Dense``` layer expects all features in a single dimension. The flatten operation simply squashes multidimentional arrays into a single vector (in our case from 28x28 to 784). Note that when we add the ```Flatten``` layer we directly give it the input in the same line of code, *i.e.* ```x = Flatten()(x_in)```. This is a nice trick because it means we don't have to save the layer itself to a variable before using it; we can just define and use it on the same line which saves both space and time. You can test this by running the following cell, which creates a standalone ```Flatten``` layer and processes the first 5 images in X_train. # + # TEST for the Flatten() layer F = Flatten(input_shape=(28,28)) ''' don't worry about the following operation. This it to make sure that the data is in the right format to be processed by Keras. This is automatically taken care of when using the input layer.''' In = tf.convert_to_tensor(X_train[0:5]) Out = F(In) print("Before Flatten: " + str(In.shape)) print("After Flatten : " + str(Out.shape)) # - # 3. ```x = Dense(64, activation='tanh')(x)``` and ```x = Dense(10, activation='softmax')(x)```: defines two dense layers, the first with 64 nodes and the second with 10, as the number of output classes. For the hidden layer we specify *tanh* as activation function , whereas *softmax* for the output layer. # # Specifying the activation directly when creating the layer is not the only way to do it; there is a layer called ```Activation``` that can be used to add a standalone activation function between layers. This is sometimes necessary, for example when using more advanced activation functions, such as LeakyReLU. We also sometimes want to add normalization layers within the network to improve the training, which we will see in the main assignment. In these cases we usually apply the activation function after normalization, which requires using a separate ```Activation``` layer. # # # 4. ```model = Model(inputs=x_in, outputs=x)```: creates the actual model object. We initialize it with the input and output of the network, which we have in ```x_in``` and ```x```. At the moment we don't need any outputs from the middle layers, which is why we overwrite ```x``` when adding new layers. However, we must always save a separate variable for the input layer to create the model. # # # Notice how we only specify the shape of the data at the first layer, using ```X_train.shape[1:]```. Although every layer has this input parameter, we only need to specify the input shape of the first layer since the model takes care of the rest for us. No more struggling with matrix sizes. Neat! # #### **Finalizing the model** # Now that we defined the architecture of our model, we need to specify the way we want to train and evaluate it, *i.e.* the Optimizer, Loss function and any Metrics. As Optimizer we will use Mini-batch Stocastic Gradient Descent, which is implemented in the ```SGD``` class. This is almost the same as in assignment 1, except that we don't process the entire dataset before taking a learning step. Instead we randomly divide the data in mini-batches of a few examples, typically a relatively small power of 2, and take a step for each of them. 
This makes learning faster since we take more steps per epoch than if we were to use the entire dataset for each step. It's also sometimes necessary since the dataset can be so large that it doesn't fit in memory. In other words, batch SGD takes one step based on the mean of the gradient from all data, whereas mini-batch SGD takes multiple steps where each is the mean computed over a fixed number of samples (batch size). # # We will also use Nesterov momentum and learing rate decay in the optimizer. Don't worry if this doesn't mean anything to you, it's just a specific improvement to the base SGD algorithm that tries to be smart about the learning step. You will see that it is very good for this problem. # # We will use *categorical crossentropy* as loss function, which uses the *log* of predicted probabilites to calculate the loss, which exponentially punishes predictions the more wrong they are. For the metric we will use accuracy, *i.e.* the fraction of the data classified correctly. Note that we can give several metrics to the model if we want, but in this case we will only use accuracy. # # We set the optimizer, loss function, and metrics using the ```model.compile``` method. # + from tensorflow.keras.optimizers import SGD # Define the optimizer sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True) # Compile the model using the optimizer, loss and metric model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy']) # - # #### **Inspecting the model** # Now that we have a model, we can do some visualization. First, let's print a table summarizing the model with ```model.summary``` (the parameter is simply the width of the table). model.summary(60) # We can see the type, output size, and number of parameters for each layer. This is very useful when working with larger models. For example we might want to find out why the model is overfitting. Is there a layer with a lot of parameters? That might be a good candidate to change. Having many parameters to train is also slower, so changing the network by adding or removing layers might speed up training. Finally there's a summary at the bottom with the total number of parameters. Some layers have parameters that are configurable but not part of the optimization, these are the *Non-trainable params* of which we currently have none. # # Note how the first value of the output shape is **None**. This just means that we haven't specified the mini-batch size yet. We do this when we actually train the model. # # We can also print an image of the model using the code below. This might seem unnecessary when we have the table, and at the moment it gives roughly the same information. However, when building more advanced models, for example using residual connections, the table quickly becomes unreadable. keras.utils.plot_model(model, show_shapes=True, show_layer_names=False) # #### **Training the model** # Finally, time to do some model training! This is done with the ```model.fit``` method. We input the training data, one-hot encoded targets, mini-batch size, and number of epochs. When using mini-batches, one epoch passes when all the data has been used once. Each training sample can only be used once per epoch, thus the mini-batches are randomly selected each epoch, which is why it's called stochastic gradient decent. # # We also set the *validataion_split* parameter to 0.2. This tells ```model.fit``` to split off 20% of the training data to use for validation during the training process. 
In the previous assignment we used the test data for this purpose, but this is not how it's usually done. We might want to use the validation data during training to make decisions, such as stopping early if we detect overfitting or decreasing the learning rate if we detect that the performance no longer improves. But that means we cannot use the same data to get the final performance of the model, as it has influenced the training and the model parameters. This is why we need three datasets, one for training, one for validation during training, and one for testing after training. Using the *validation_split* parameter we get a random selection from the training data for validation, but we can also give a specific validation set to the ```fit``` method if we want, or we can tell it to always use the first 20%, etc. # # Finally, setting the *verbose* flag to 1 prints the status of the training. Now let's run it! history = model.fit(X_train, y_train_c, epochs=10, batch_size=32, verbose=1, validation_split=0.2) # #### **Evaluating the model** # We evaluate the model on the test data using the ```model.evaluate``` method, which returns the loss and metrics. The results can vary, but should generally be around 97% accuracy. Remember that the first assignment required only 93%. Not bad for a few lines of code and 10 epochs, right? # + score = model.evaluate(X_test, y_test_c, verbose=0) for i in range(len(score)): print("Test " + model.metrics_names[i] + " = %.3f" % score[i]) # - # We can also plot the history of the training using the *history* object that is returned from ```model.fit```. This again uses the *matplotlib* library. # + plt.figure(figsize=(12,5)) # Plot loss plt.subplot(1,2,1) plt.semilogy(history.history['loss'] , label="Training") plt.semilogy(history.history['val_loss'], label="Validation") plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(loc='upper right') plt.grid(True, which="both") # Plot accuracy plt.subplot(1,2,2) plt.plot(100 * np.array(history.history['accuracy']) , label="Training") plt.plot(100 * np.array(history.history['val_accuracy']), label="Validation") plt.title('Model accuracy') plt.ylabel('Acc [%]') plt.xlabel('Epoch') plt.legend(loc='lower right') plt.grid(True, which="both") plt.show() # - # We can also import some libraries to calculate and view the confusion matrix. For this we want to get the predicted class for each test sample, which we can do with ```model.predict```. However, since the model outputs a 10-class probabilty vector we need to convert it back to a label index vector (*i.e.* 1,2,3 etc.). Use ```np.argmax``` for this. # + from sklearn.metrics import confusion_matrix from Custom import PrintPrediction # Probabilities for each class, for each test image prob_test = model.predict(X_test) # Prediction (class number) for each test image. p_test = np.argmax(prob_test,axis=1) # Calculate confusion matrix CM = confusion_matrix(y_test, p_test) # Print probablities and predictions (rounded to a few decimal places) print("Probabilites and predictions") for i in range(5): PrintPrediction(prob_test[i]) print("\nConfusion matrix") print(CM) # - # Here is a nice custom function for evaluating the model and plotting the history and confusion matrix. We will use this in the main part of the assignment, so you can focus on the fun parts instead of writing the plot code. # # *The code for the Labels input is a bit of python magic called list comprehention. It's very nice and also very easy to make completly unreadable. 
Use responsibly :)* from Custom import PlotModelEval plt.text PlotModelEval(model, history, X_test, y_test, Labels=[str(x) for x in range(10)]) # ### **------------------------------------------------------------------------------------** # ### **Extra, examples of more complicated models** # As a bonus, here are some short examples of models built using the functional API, that can't be created with the Sequential model. # # ##### **Adding a residual (skip) connection** # Residual (skip) connection are used in some of the most powerful modern architectures, for reasons that are outside the scope of this course. However, for a concreate example, lookup ResNet. # + # Import Keras and create some shorthand notation import tensorflow.keras as keras import tensorflow.keras.layers as L # Begin like last time x_in = L.Input(shape=100) x = L.Dense(50, activation="tanh")(x_in) # But now, save the output to another variable, y y = L.Dense(10, activation="tanh")(x) # Use y as new input, but then change back to x x = L.Dense(10, activation="tanh")(y) x = L.Dense(10, activation="tanh")(x) # Now we have two different outputs from the network, at two different points. # We can merge them using different Layers, for example Add, using a list of layer outputs: x = L.Add()([x,y]) # Finally, add the output layer and create the model x = L.Dense(5, activation="softmax")(x) model2 = keras.models.Model(inputs=x_in, outputs=x) # Print the model model2.summary(100) # - # You can see on the right that there is a new column the shows the connections between layers. That's not very visual though, so let's print it as an image instead. keras.utils.plot_model(model2, show_shapes=True, show_layer_names=False) # ##### **More than one input and output** # This can be useful when doing object recognition in images. You can imagine that you not only want to classify an image as a cat or dog, but also give the coordinates of the cat or dog in the image. Or even have a segmented images as output where the background is removed. # + # Import Keras and create some shorthand notation import tensorflow.keras as keras import tensorflow.keras.layers as L x_in1 = L.Input(shape=100) x_in2 = L.Input(shape=50) x1 = L.Dense(20, activation="tanh")(x_in1) x2 = L.Dense(10, activation="tanh")(x_in2) c1 = L.Concatenate()([x1,x2]) x3 = L.Dense(10, activation="tanh")(c1) x4 = L.Dense(10, activation="tanh")(c1) model3 = keras.models.Model(inputs=[x_in1, x_in2], outputs=[x3,x4]) # Print the model keras.utils.plot_model(model3, show_shapes=True, show_layer_names=False) # - # ##### **A (not so) crazy model** # # You can also very easily make some crazy models. Here is a network where the input to each layer is the sum of outputs from all previous layers. While this particular model is just a dummy, similar techniques where there is a high degree of connectivity between the layers have actually been used in some scientific publications, for example to detect diseases in X-Ray images. 
# + # Import Keras and create some shorthand notation import tensorflow.keras as keras import tensorflow.keras.layers as L # Begin like last time x_in = L.Input(shape=100) x1 = L.Dense(10, activation="tanh")(x_in) x2 = L.Dense(10, activation="tanh")(x1) a1 = L.Add()([x1,x2]) x3 = L.Dense(10, activation="tanh")(a1) a2 = L.Add()([x1,x2,x3]) x4 = L.Dense(10, activation="tanh")(a2) a2 = L.Add()([x1,x2,x3,x4]) x = L.Dense(5, activation="tanh")(a2) model4 = keras.models.Model(inputs=x_in, outputs=x) # Print the model keras.utils.plot_model(model4, show_shapes=True, show_layer_names=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import math as math import random import time from triplet_encoding import * from shared_module import * from triplet_encoding import * # + N_ANCHORS = ANCHORS_12 ENCODING_STRATEGY = VGG19_4096 ENCODING_SIZE = 4096 INPUT_FILE_PATH = './input/labels' labels_file_path = get_path(INPUT_FILE_PATH, N_ANCHORS, ENCODING_STRATEGY) df_labels = pd.read_csv(labels_file_path('anchor')) display(df_labels.head()) # - breeds = df_labels['breed'].unique() breeds.sort() print(breeds) df_test = pd.read_csv('input/labels_test_vgg19_4096.csv') display(df_test.head()) encoding_model = input_encoding_model((1, ENCODING_SIZE)) predictions = predict_on_test_model(df_labels, df_test, breeds, model_encode(encoding_model, ENCODING_SIZE)) cols = ['id'] cols.extend(breeds) pd.DataFrame(predictions, columns=cols) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import matplotlib.pyplot as plt # %matplotlib inline X = np.random.rand(100,3) data = 2 if data==1: X[:,2] = ((X[:,0]+2*X[:,1]+0.5*np.random.randn(X.shape[0]))>1.2).astype(int) if data==0: X[:,2] = (np.abs(X[:,0]+2*X[:,1]-1.7)>0.5).astype(int) if data==2: nX = X+np.random.randn(X.shape[0],3)*0.1 X[:,2] = 1 X[(nX[:,0]>0.5) & (nX[:,1]>0.5),2]=0 X[(nX[:,0]<0.5) & (nX[:,1]<0.5),2]=0 plt.plot(X[X[:,2]==0,0],X[X[:,2]==0,1],'xr') plt.plot(X[X[:,2]==1,0],X[X[:,2]==1,1],'ob') # - np.set_printoptions(precision=2,suppress=True) X # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Weather check # Bus check # books check # reminders/alerts # - from pprint import pprint import requests import re import time from datetime import datetime # using openweathermap # api.openweathermap.org/data/2.5/weather?q={city name},{state},{country code}&appid={your api key} # api.openweathermap.org/data/2.5/weather?zip=94040,us&appid={your api key} class Weather: def __init__(self): # use your own API keys self.fetched = False f = open("./../weatherApiKeys.txt", "r") self.apikey = f.read().strip() print("-> Getting Weather info...") self.req_today_weather = requests.get('http://api.openweathermap.org/data/2.5/weather?zip=94132,us&appid=' + self.apikey) if(self.req_today_weather.status_code == 200): self.fetched = True print("-> Fetched successfully!") self.json_today_weather = self.req_today_weather.json() # extracting and setting other variables self.temp_min = 
(self.json_today_weather["main"]["temp_min"] - 273.15) * 9/5 + 32 self.temp_max = (self.json_today_weather["main"]["temp_max"] - 273.15) * 9/5 + 32 # kelvin to F self.description = self.json_today_weather["weather"][0]["description"] self.visibility = self.json_today_weather["visibility"] * 0.000621371 # in miles self.wind = self.json_today_weather["wind"]["speed"] * 2.23694 # in miles/hr else: print("-> Unsuccessful Fetch!") def isFetched(self): return self.fetched def isWindy(self): if(self.wind > 13): return 1.0 elif(self.wind > 9 and self.wind <= 13): return 0.5 else: return 0 def isFog(self): if(selfvisibility <= 7): return 1 elif(selfvisibility <= 9 and selfvisibility > 7): return 0.5 else: return 0 def isCold(self): if(self.temp_max < 53 or self.temp_min <53): return 1 else: return 0 def getDescription(self): return self.description # + # bus # sf-muni # 57 # 4704 # list of agencies: # http://webservices.nextbus.com/service/publicXMLFeed?command=agencyList # list of lines: # http://webservices.nextbus.com/service/publicXMLFeed?command=routeList&a=sf-muni # list of stops: # http://webservices.nextbus.com/service/publicXMLFeed?command=routeConfig&a=sf-muni&r=57 # predictions at a stop: # http://webservices.nextbus.com/service/publicXMLFeed?command=predictions&a=sf-muni&r=57&s=4704 # - class busSchedule(): def getPrediction(self): req_BS = requests.get('http://webservices.nextbus.com/service/publicXMLFeed?command=predictions&a=sf-muni&r=57&s=4704') if(req_BS.status_code == 200): print("-> Fetched Schedule @ " + str(datetime.now()).split(" ")[1].split(".")[0]) text = req_BS.text startList = ([m.start() for m in re.finditer('', text)]) predictionList = [] for i in range(len(startList)): listele = text[startList[i]:endList[i]] predictionList.append(listele[listele.find("minutes"):listele.find("isDeparture")].strip()[9:-1]) return(predictionList) else: print("-> Error occured while fetching Bus prediction") return None def DAILYSCHEDULE(): weatherToday = Weather() scheduleToday = busSchedule() if(weatherToday.isFetched): print("-> Weather today:") print("-> isWindy: " + str(weatherToday.isWindy())) print("-> isCold: " + str(weatherToday.isCold())) print("-> getDescription: " + str(weatherToday.getDescription())) print() while(True): schedule = scheduleToday.getPrediction() if(schedule is not None): print("-> Next Buses in: ",end="" ) print(schedule) print() time.sleep(60*4) DAILYSCHEDULE() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [Root] # language: python # name: Python [Root] # --- # # Método das diferenças finitas: Convecção # Vamos resolver a equação de convecção: # # $$\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0$$ # ## Setup # %matplotlib inline import numpy as np import matplotlib.pyplot as plt # As células abaixo definem funções que criam o domínio e as condições iniciais. def cria_dominios(tamanho, Nx, duração, Nt): """ Cria o domínio espacial e calcula os intervalos de tempo e espaço. """ x = np.linspace(0, tamanho, Nx) dx = x[1] - x[0] dt = duração/(Nt - 1) return x, dx, dt x, dx, dt = cria_dominios(tamanho=2, Nx=51, duração=1, Nt=51) print('dx =', dx, 'dt =', dt) def cria_cond_inicial(x): """ Cria um vetor de condições iniciais u0 com uma função degrau. 
""" u0 = np.ones(x.size) u0[(x >= 0.2) & (x <= 0.5)] = 2 return u0 # + u0 = cria_cond_inicial(x) plt.figure() plt.plot(x, u0, '.-k') plt.xlabel('x') plt.ylabel('u') plt.title('u0') plt.ylim(0, 3) # - # ## Tarefa 1 # # Complete a função abaixo que executa 1 único passo no tempo utilizando diferenças progressivas no tempo e regressivas no espaço. def passo_no_tempo(u_passado, dx, dt, velocidade): u_futuro = u_passado.copy() Nx = len(u_passado) u_futuro[0]=1 #determinamos nossa condição de contorno, no caso, o valor do primero elemento do nosso dominio for k in range(1,Nx): #Utilizamos o for e range para definir as posições dos elementos com base em u_passado. #Ressaltando que começamos apartir do segundo elemento, visto que o primeiro esta dentro da condição de contorno. u_futuro[k] = u_passado[k]- velocidade*(dt/dx)*(u_passado[k]-u_passado[k-1]) #Adequamos a função passada em aula para fazer o cálculo de cada elemento de acordo com o k (Sendo k a posição, variando com base nas posições de u_passado) return u_futuro # Use as células abaixo para checar se sua função funciona. u1 = passo_no_tempo(u0, dx, dt, velocidade=1) plt.figure() plt.plot(x, u0, '--r') plt.plot(x, u1, '.-k') plt.xlabel('x') plt.ylabel('u') plt.ylim(0, 3) # ## Tarefa 2 # # Complete a função abaixo que executa uma simulação completa de diferenças finitas (utilizando as funções definidas acima) para uma deterimada duração. def simula(tamanho, Nx, duração, Nt, velocidade): x, dx, dt = cria_dominios(tamanho, Nx, duração, Nt) #acrescentamos a função que determina o dominio u0 = cria_cond_inicial(x) #acrescentamos a função que nos da a condição inicial u_passado = u0 #definimos u_passado começando da condição inicial for j in range(Nt): #para que ela funcionasse por uma determinada duração acresentamos o for e range para que a função rodasse dentro da variação de tempo u_futuro = passo_no_tempo(u_passado, dx, dt, velocidade) #acrescetamos a função para calcular u_futuro u_passado = u_futuro #designamos que u_passado seria igual a u_futuro, assim cada vez que ele rodar usara como base u anterior e não somente o u0. return x, u0, u_futuro # Utilize as células abaixo para checar o resultado da sua função. x, u0, u_futuro = simula(tamanho=2, Nx=51, duração=1, Nt=51, velocidade=1) plt.figure() plt.plot(x, u0, '--r') plt.plot(x, u_futuro, '.-k') plt.xlabel('x') plt.ylabel('u') plt.ylim(0, 3) # ### O que aconteceu com o resultado no final da simulação? Isso deveria acontecer? # Neste caso ele foi suavizado devido ao fato de delta (x) ter valor distante de zero, mostrando um grafico diferente do esperado. Para que isso não aconteça é necessario pegar intervalos diferentes de delta(x) de acordo com o tempo. # ## Tarefa 3 # # Faça uma figura com o resultado da simulação para diferentes valores `Nx` (utilize a lista abaixo). Inclua uma legenda no seu gráfico. valores_de_Nx = [51, 71, 91, 101, 111] plt.figure() #serve para criar uma imagem vazia para receber todos os graficos juntos. for Nx in valores_de_Nx: #usamos for para variar nx, assim conseguimos criar a representação para cada u dentro do grafico. x, u0, u_futuro = simula(tamanho=2, Nx=Nx, duração=1, Nt=51, velocidade=1) plt.plot(x, u_futuro, '.-') plt.plot(x, u0, '--r') plt.legend(valores_de_Nx, loc=0) # com a legenda podemos localizar a u mais similar a u0 plt.xlabel('x') plt.ylabel('u') plt.ylim(0, 3) # ### O método é igualmente preciso para todos os valores de Nx? # Não. Pois alguns intervalos de delta(x) fazem com que o grafico 'exploda' ou continue suavizado. 
# ## Bônus # # Complete a função abaixo que executa a simulação completa mas dessa vez guarda cada passo da simulação. A função deve gerar uma lista `u` que contem o valor de $u$ de cada iteração. # # Complete o código que gera um gráfico com o valor de `u` a cada 10 iterações. Ou seja, o gráfico deve conter `u[0]`, `u[10]`, `u[20]`, etc. def simula_grava(tamanho, Nx, duração, Nt, velocidade): #usamos as funçoes já informadas pelo professor x, dx, dt = cria_dominios(tamanho, Nx, duração, Nt) u0 = cria_cond_inicial(x) u_passado = u0 #consideramos u_passado começando da condição inicial u=[] #criamos uma lista vazia para receber nosso resultado for j in range(Nt): #usamos o for para variar no tempo u_futuro = passo_no_tempo(u_passado, dx, dt, velocidade) #usamos uma função definida em uma das tarefas anteriores u_passado = u_futuro #essa igualdade serve para que o proximo u_futuro seja de acordo com o u definido pela função passo_tempo u.append(u_futuro) #salvamos os resultados na lista definida anteriormente return x, u x, u = simula_grava(tamanho=2, Nx=51, duração=1, Nt=51, velocidade=1) plt.figure() #criamos uma figura vazia para receber os graficos for i in range(0,len(u),10): #usamos o for, range e len para plotar somente as funçoes que queriamos. #Começando do elemnto de indice 0 e pecorrrendo todos os elementos de u, de 10 em 10. plt.plot(x,u[i]) plt.plot(x,u[0], '--r') plt.xlabel('x') plt.ylabel('u') plt.ylim(0, 3) plt.title('Simulação completa (cada 10 iterações)') # Extra: # Fizemos com o Nx, que apresentou representação grafica mais proximo do u0, observado no gráfico da tarefa 3. x, u = simula_grava(tamanho=2, Nx=101, duração=1, Nt=51, velocidade=1) plt.figure() for i in range(0,len(u),10): plt.plot(x,u[i],) plt.plot(x,u[0], '--r') plt.xlabel('x') plt.ylabel('u') plt.ylim(0, 3) plt.title('Simulação completa (cada 10 iterações)') # **Course website**: https://github.com/mat-esp/about # # **Note**: This notebook is part of the course "Matemática Especial I" of the [Universidade do Estado do Rio de Janeiro](http://www.uerj.br/). All content can be freely used and adapted under the terms of the # [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/). # # ![Creative Commons License](https://i.creativecommons.org/l/by/4.0/88x31.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="ngAgintgFLNI" outputId="da3de206-79d2-4969-9962-02c34583a159" # !wget https://developer.nvidia.com/compute/cuda/9.2/Prod/local_installers/cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64 -O cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb # !dpkg -i cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb # !apt-get update # !apt-get install cuda-9.2 # + colab={"base_uri": "https://localhost:8080/"} id="K_mKv1POigw7" outputId="479950c7-338a-4c86-ac62-0711b63963d2" # !nvcc --version # + [markdown] id="3CWqT8wEN-UY" # A1: **a) Implement Parallel Reduction using Min, Max, Sum and Average # operations. 
# b) Write a CUDA program that, given an N-element vector, find- # • The maximum element in the vector # • The minimum element in the vector # • The arithmetic mean of the vector # • The standard deviation of the values in the vector # Test for input N and generate a randomized vector V of length N (N should be large). The program should generate output as the two computed maximum values as well as the time taken to find each value.** # + id="XcAwTB48NYs1" code = """#include "bits/stdc++.h" using namespace std; #define N 8 size_t bytes; __global__ void min(double *array, int n) { unsigned int id = blockDim.x * blockIdx.x + threadIdx.x; int step_size = 1; while(n > 0) { if(id < n) { int i = (int)id * step_size * 2; int j = i + step_size; if(array[i] > array[j]) array[i] = array[j]; } step_size = step_size*2; n = n/2; } } __global__ void max(double *array, int n) { unsigned int id = blockDim.x * blockIdx.x + threadIdx.x; int step_size = 1; while(n > 0) { if(id < n) { int i = (int)id * step_size * 2; int j = i + step_size; if(array[i] < array[j]) array[i] = array[j]; } step_size = step_size*2; n = n/2; } } __global__ void sum(double *array, int n) { unsigned int id = blockDim.x * blockIdx.x + threadIdx.x; int step_size = 1; while(n > 0) { if(id < n) { int i = (int)id * step_size * 2; int j = i + step_size; array[i] = array[i] + array[j]; } step_size = step_size*2; n = n/2; } } __global__ void mean_diff_square(double* array, double avg, int n){ unsigned int id = blockDim.x * blockIdx.x + threadIdx.x; if(id < n) array[id] = (array[id]-avg)*(array[id]-avg); } int main() { double *array = new double [N]; cout<<"array: "; for(int i=0; i>>(d_array,N/2); cudaDeviceSynchronize(); cudaMemcpy(&result, d_array, sizeof(double), cudaMemcpyDeviceToHost); cout<<"min: "<>>(d_array, N/2); cudaDeviceSynchronize(); cudaMemcpy(&result, d_array, sizeof(double), cudaMemcpyDeviceToHost); cout<<"max: "<>>(d_array,N/2); cudaDeviceSynchronize(); cudaMemcpy(&result, d_array, sizeof(double), cudaMemcpyDeviceToHost); cout<<"sum: "<>>(d_array, avg, N); cudaDeviceSynchronize(); sum<<<1,N/2>>>(d_array,N/2); cudaDeviceSynchronize(); cudaMemcpy(&result, d_array, sizeof(double), cudaMemcpyDeviceToHost); double variance = result/N; cout<<"std: "<>>(d_a, d_b, d_c, N); cudaDeviceSynchronize(); cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost); cout<<"c: "; for(int i=0; i>>(d_vector, d_matrix, d_result, N, M); cudaDeviceSynchronize(); bytes = M*sizeof(int); cudaMemcpy(result, d_result, bytes, cudaMemcpyDeviceToHost); cout<<"result:"<>>(d_a, d_b, d_c, M, N ,K); cudaDeviceSynchronize(); bytes = M*K*sizeof(int); cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost); cout<<"result:"< #include using namespace std; void swap(int * num1, int * num2) { int temp = * num1; * num1 = * num2; * num2 = temp; } int main() { int n = 10; int a[n]; omp_set_num_threads(2); for (int i = 0; i < n; i++) { a[i] = rand() % 100; } for (int i = 0; i < n; i++) cout << " " << a[i]; cout << endl; int i = 0, j = 0; int first = 0; double start, end; start = omp_get_wtime(); for (i = 0; i < n - 1; i++) { first = i % 2; #pragma omp parallel for for (j = first; j < n - 1; j++) { if (a[j] > a[j + 1]) swap( & a[j], & a[j + 1]); } } end = omp_get_wtime(); cout << "Result(parallel) : " << endl; for (i = 0; i < n; i++) cout << " " << a[i]; cout << endl; cout << "Time parallel = " << (end - start) << endl; return 0; } """ # + id="S4Q5eEF1nujO" file=open("code.cpp","w") file.write(code) file.close() # + id="IJuHbVZon56-" # !g++ -fopenmp code.cpp # + colab={"base_uri": 
"https://localhost:8080/"} id="99iT2FcRn9BR" outputId="0d84f944-65a6-4010-bb05-85a379175032" # !./a.out # + colab={"base_uri": "https://localhost:8080/"} id="IZMVhiq7n_kq" outputId="ab2b0fe4-ec6b-4e5b-dfff-4c67e939cb31" # !nvprof ./a.out # + [markdown] id="fCk10t8boD5H" # **MERGE SORT** # + id="ywEgB6peoHuu" code=""" #include #include using namespace std; void printArray(int * arr, int size) { for (int i = 0; i < size; i++) { cout << arr[i] << " "; } cout << endl; } void merge(int * arr, int start, int mid, int end) { int len = (end - start) + 1; int temp[len]; int cur = 0; int i = start; int j = mid + 1; while (i <= mid && j <= end) { if (arr[i] < arr[j]) { temp[cur] = arr[i]; cur++; i++; } else { temp[cur] = arr[j]; cur++; j++; } } if (i <= mid) { while (i <= mid) { temp[cur] = arr[i]; i++; cur++; } } else if (j <= end) { while (j <= end) { temp[cur] = arr[j]; j++; cur++; } } cur = 0; for (i = start; i <= end; i++) { arr[i] = temp[cur]; cur++; } } void mergeSort(int * arr, int start, int end) { if (start < end) { int mid = (start + end) / 2; #pragma omp parallel sections { #pragma omp section mergeSort(arr, start, mid); #pragma omp section mergeSort(arr, mid + 1, end); } merge(arr, start, mid, end); } } int main(int argc, char * argv[]) { int size = 10; int a[size]; double start, end; omp_set_num_threads(2); for (int i = 0; i < size; i++) { a[i] = rand() % 100; } //int a[]= {7,33,5,5,23,111,75,34,77,121,120}; for (int i = 0; i < size; i++) cout << " " << a[i]; cout << endl; start = omp_get_wtime(); mergeSort(a, 0, size - 1); cout<<"Sorted: "; printArray(a, size); end = omp_get_wtime(); cout << "Time parallel = " << (end - start) << endl; return 0; } """ # + id="Q92E4diqoZNL" file=open("code.cpp","w") file.write(code) file.close() # + id="ijjif79Iobvi" # !g++ -fopenmp code.cpp # + colab={"base_uri": "https://localhost:8080/"} id="xrM6rmE-od2O" outputId="720d3a9a-31ac-4b70-ae30-2e5c657b3582" # !./a.out # + colab={"base_uri": "https://localhost:8080/"} id="cz88AXOjo6KN" outputId="170d2605-8d79-49f3-a175-2a91ae7da9c1" # !nvprof ./a.out # + [markdown] id="yYi0voMzPvWv" # A4. **Parallel Search Algorithm: # Design and implement parallel algorithm utilizing all resources available for # •1. Binary Search for Sorted Array # •2. 
Best-First Search that (traversal of the graph to reach a target # in the shortest possible path)** # + [markdown] id="lGjYbcm-pZCY" # **BINARY SEARCH** # + colab={"base_uri": "https://localhost:8080/"} id="lC0oLWX9pfiK" outputId="2158c123-82fe-40fc-ee0b-098ba5981e1d" # !mpiCC code.cpp # !mpirun --allow-run-as-root -np 4 ./a.out # + [markdown] id="xkuDIV3UArhq" # **BEST_FIRST SEARCH** # + colab={"base_uri": "https://localhost:8080/"} id="GSR5uArSAucm" outputId="63f35e7f-24ce-43b7-9d32-5040cce21e2c" # !g++ -fopenmp bfs.cpp # !./a.out # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # Import tree-level data from PNW_FIADB2011 forest inventory plots # Summarize the data into 2" diameter classes # Export a csv that has the total trees per acre by each species and 2" diameter class # for each "condition" (~inventory plot) # - # import the packages import pandas as pd import numpy as np import csv import time import json # read in the treelists for the PNW_FIADB2011 dataset and IDBv2.0 dataset PNWFIADB2011 = pd.read_csv("G:\projects\ForestPlanner_2015\Data\Processed\PNW_FIADB2011_TREELIST_2015-11-06.csv") IDB2pt0 = pd.read_csv("G:\projects\ForestPlanner_2015\Data\Processed\IDB2pt0_TREELIST_2015-11-04.csv") print str(PNWFIADB2011.COND_ID.nunique()) + " unique PNWFIADB 2011 conditions" print str(IDB2pt0.COND_ID.nunique()) + " unique IDB 2.0 conditions" # combine these datasets into one master treelist treelist = PNWFIADB2011.append(IDB2pt0) treelist.head() # read in the FVS species name crosswalk table FVS_crosswalk = pd.read_csv("G:\projects\ForestPlanner_2015\Data\work\FVS_SpeciesCrosswalk_2015-11-06.csv") # + # drop entries in the crosswalk without species_symbols FVS_crosswalk.dropna(subset=["SPECIES_SYMBOL"], inplace = True) # set species_symbol as index for FVS_crosswalk FVS_crosswalk.set_index("SPECIES_SYMBOL", inplace = True) # - FVS_crosswalk.columns.values # Drop columns from FVS_crosswalk we don't want or need dropcols = ['FIA_SPCD', 'FVS_ALPHA_CODE', 'SCIENTIFIC NAME_FSVeg', 'AK', 'CI', 'CR', 'EM', 'IE', 'KT', 'NI', 'NC', 'TT', 'UT', 'WS'] FVS_crosswalk.drop(labels = dropcols, axis = 1, inplace = True) FVS_crosswalk.head() fvs_symbols_missing = 0 for symbol in pd.unique(treelist.SPECIES_SYMBOL): if symbol in FVS_crosswalk.index.tolist(): pass else: fvs_symbols_missing += 1 print(symbol + "not in FVS_crosswalk") if fvs_symbols_missing == 0: print("All species symbols from treelist found in FVS_crosswalk") else: print('\n' "Symbols missing from FVS_crosswalk: " + str(fvs_symbols_missing)) # + # Count number of unique values of columns in FVS_crosswalk for all species symbols found in treelist # this shows how much mapping species_symbol to any of these columns will consolidate the number of options # first do it on the index of fvs_crosswalk (species symbols) print("Species symbols" + ": " + str(len(pd.unique(FVS_crosswalk.loc[FVS_crosswalk.index.isin(pd.unique(treelist.SPECIES_SYMBOL))].index)))) # then on all the other columns for col in FVS_crosswalk.columns.values: print(col + ": " + str(len(pd.unique(FVS_crosswalk[col].loc[FVS_crosswalk.index.isin(pd.unique(treelist.SPECIES_SYMBOL))])))) # - # Over-write the common names in the treelist using the values from FVS_crosswalk print(str(len(pd.unique(treelist.COMMON_NAME))) + " common_names before") for symbol in pd.unique(treelist.SPECIES_SYMBOL): 
treelist.COMMON_NAME.loc[treelist.SPECIES_SYMBOL == symbol] = FVS_crosswalk.COMMON_NAME.loc[symbol] print(str(len(pd.unique(treelist.COMMON_NAME))) + " common_names after") # + # create nested dictionary for all entries, top-level key is common_name species_crosswalk = {} # all common names set up as keys with empty nested dictionary for value species_crosswalk = {common_name:{} for common_name in pd.unique(FVS_crosswalk['COMMON_NAME'])} # for each common name (key) in the dictionary, fill in the values... for key in species_crosswalk: # create a nested dictionary where the name is the FVS_crosswalk column name, and the value is empty species_crosswalk[key] = {column:'' for column in FVS_crosswalk.columns.values} # for each common name (key) in the species_crosswalk dictionary, fill in the values... for key in species_crosswalk: # for each nested dictionary (which have FVS_crosswalk column names as subkeys)... for subkey in species_crosswalk[key]: # set the value for each nested dictionary based on the FVS_crosswalk table species_crosswalk[key][subkey] = FVS_crosswalk[subkey].loc[FVS_crosswalk.COMMON_NAME == key].values[0] # - # dump the species_crosswalk to a json file with open('G:\projects\ForestPlanner_2015\Data\Processed\FVS_species_crosswalk_' + time.strftime("%Y-%m-%d") + '.json', 'w') as dumpsite: json.dump(species_crosswalk, dumpsite) print str(treelist.COND_ID.nunique()) + " unique conditions (~inventory plots)" print str(len(treelist)) + " trees" # Set COND_ID as index treelist.set_index("COND_ID", inplace = True) print(str(len(pd.unique(treelist.COMMON_NAME))) + " COMMON_NAMEs") print(str(len(pd.unique(treelist.SPECIES_SYMBOL))) + " SPECIES_SYMBOLs") print(str(len(pd.unique(treelist.SPCD))) + " SPCDs") print(str(len(pd.unique(treelist.GENUS))) + " GENUSes") print(str(len(pd.unique(treelist.SPP_GRP))) + " SPP_GRPs") # How many null values are there for each column? treelist.isnull().sum() sorted(pd.unique(treelist.COMMON_NAME)) # remove trees without a DIA or TPA_UNADJ value treelist.dropna(axis = 0, how = 'any', subset = ["TPA_UNADJ", "DIA"], inplace = True) # How many null values are there for each column now? treelist.isnull().sum() # + # identify the 2" diameter class for each tree # takes in a tree DBH and returns the min-max range for 2" diameter class for each tree def minDBH(dbh): return int(np.floor(dbh/2)*2) def maxDBH(dbh): # if a tree has an even DBH (e.g., 2.0), don't let maxDBH = minDBH... 
if np.ceil(dbh/2)*2 == np.floor(dbh/2)*2: return int(np.ceil(dbh/2)*2 +2) else: return int(np.ceil(dbh/2)*2) # - # calculate the min and max DBHs for all the trees treelist["minDBH"] = treelist.DIA.apply(minDBH) treelist["maxDBH"] = treelist.DIA.apply(maxDBH) # Add a basal area field treelist["BA_ft2_ac"] = (treelist.DIA ** 2) * 0.005454 * treelist.TPA_UNADJ # create columns for each variant # for each tree in the treelist, populate them with with the species code FVS uses for that variant variants = ['BM', 'CA', 'EC', 'PN', 'SO', 'WC'] for variant in variants: treelist[variant] = treelist['COMMON_NAME'].apply(lambda common_name: species_crosswalk[common_name][variant]) # create tree_live treelist by filtering out dead trees (statuscd = 2) treelive = treelist.loc[treelist.STATUSCD == 1] treelive.head() # + # Calculate total live TPA of each species within each 2" diameter class # create a blank dataframe that will hold treelive_summaries for all variants treelive_summary = pd.DataFrame() # create a tree_live summary for each variant for variant in variants: # groupby COND_ID, COMMON_NAME, variant (column name for species codes to which the tree is assigned in that variant), DBH class # return the sum of TPA in each of these groups grouped = pd.DataFrame(treelive.groupby([treelive.index, "PLOT", "COMMON_NAME", variant, "minDBH"])["TPA_UNADJ", "BA_ft2_ac", "HT", "DIA", "TOTAGE"].agg([np.sum, np.mean, 'count'])).reset_index() grouped.rename(columns={'COND_ID': 'cond_id'}, inplace=True) grouped.set_index("cond_id", inplace=True) # flatten the column names (groupby returned multi-level column names) grouped.columns = ['_'.join(col).strip() if col[1] != '' else col[0] for col in grouped.columns.values] # add a column for this dataframe that stored which variant it represents grouped["variant"] = variant # create a 'varname' column that concatenates common_name and diameter class grouped["varname"] = grouped.COMMON_NAME + "_" + grouped.minDBH.map(str) # rename the columns with what the IDB_summary schema in Forest Planner is expecting grouped.rename(columns={'PLOT': 'plot_id', variant: 'FVS_Spp_Code', 'COMMON_NAME': 'fia_forest_type_name', 'minDBH': 'calc_dbh_class', 'TPA_UNADJ_sum': 'sumoftpa', 'TPA_UNADJ_mean': 'avgoftpa', 'TPA_UNADJ_count': 'calc_tree_count', 'BA_ft2_ac_sum': 'sumofba_ft2_ac', 'BA_ft2_ac_mean': 'avgofba_ft2_ac', 'HT_mean': 'avgofht_ft', 'DIA_mean': 'avgofdbh_in', 'TOTAGE_mean': 'avgofage_bh'}, inplace=True) # Map columns to the data types the TREELIVE_SUMMARY db schema expects (for those not already appropriately formatted) # mapping to floats for column in ['calc_dbh_class', 'sumoftpa', 'avgoftpa']: grouped[column] = grouped[column].map(float) grouped.reset_index(inplace = True) # append the treelive_summary for this variant to the treelive_summary = treelive_summary.append(grouped, ignore_index = True) # - treelive_summary.head() treelive_summary.dtypes # + # Add columns to treelive_summary to record total BA of that COND_ID, count of species-x-size class in that COND_ID, and # the % of total basal area for that COND_ID found in this species-x-size class # Create a dataframe that has the total basal area and count of species size classes for each COND_ID cond_summary = pd.DataFrame(treelive_summary.loc[treelive_summary.variant == 'BM'].groupby(['cond_id'])['sumofba_ft2_ac'].agg(['sum', 'count'])) #cond_summary.set_index("cond_id", inplace=True) # lookup the total BA for each COND_ID in the cond_sumamry dataframe, apply it to every species-x-size class in the treelive_summary 
treelive_summary['total_ba_ft2_ac'] = treelive_summary.reset_index()['cond_id'].apply(lambda id: cond_summary.at[id,'sum']) # lookup the number of species-x-size classes in the cond_sumamry dataframe, apply it to every species-x-size class in the treelive_summary treelive_summary['count_speciessizeclasses'] = treelive_summary.reset_index()['cond_id'].apply(lambda id: cond_summary.at[id,'count']).map(int) # Calculate the % of total basal area found in each species-x-size class treelive_summary['pct_of_totalba'] = treelive_summary['sumofba_ft2_ac']/treelive_summary['total_ba_ft2_ac'] # - treelive_summary.dtypes treelive_summary[['cond_id', 'calc_dbh_class', 'fia_forest_type_name', 'avgofdbh_in', 'avgofht_ft']].loc[(treelive_summary.cond_id == 521) & (treelive_summary.fia_forest_type_name == "Pacific silver fir")] # + # write the treelive_summary to a CSV cols_to_write = ['cond_id', 'plot_id', 'variant', 'varname','fia_forest_type_name', 'FVS_Spp_Code', 'calc_dbh_class', 'calc_tree_count', 'sumoftpa', 'avgoftpa', 'sumofba_ft2_ac', 'avgofba_ft2_ac', 'avgofht_ft', 'avgofdbh_in', 'avgofage_bh', 'total_ba_ft2_ac', 'count_speciessizeclasses', 'pct_of_totalba'] treelive_summary.to_csv("G:\projects\ForestPlanner_2015\Data\Processed\TREELIVE_SUMMARY_" + time.strftime("%Y-%m-%d") + ".csv", columns = cols_to_write, header = True, index = True, index_label = "class_id", quoting=csv.QUOTE_NONNUMERIC) print("printed " + str(len(treelive_summary)/len(variants)) + " species-x-dbh classes") print("from "+ str(len(pd.unique(treelive_summary.cond_id))) + " conditions to:") print("G:\projects\ForestPlanner_2015\Data\Processed\TREELIVE_SUMMARY_" + time.strftime("%Y-%m-%d") + ".csv") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Text Classification # # There are several types of classification: # # - Binary : 2 mutually exclusive categories (Detecting spam etc) # - Multiclass: More than 2 mutually exclusive categories (Language detection etc) # - Multilabel: non-mutually exclusive categories (like movie genres, tV shows etc) # # ### Binary text classification problem # # from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.svm import LinearSVC # + # Train and test data set train_data = ['Football: a great sport', 'The referee has been very bad this season', 'Our team scored 5 goals', 'I love tenis', 'Politics is in decline in the UK', 'Brexit means Brexit', 'The parlament wants to create new legislation', 'I so want to travel the world'] train_labels = ["Sports","Sports","Sports","Sports", "Non Sports", "Non Sports", "Non Sports", "Non Sports"] test_data = ['Swimming is a great sport', 'A lot of policy changes will happen after Brexit', 'The table tenis team will travel to the UK soon for the European Championship'] test_labels = ["Sports", "Non Sports", "Sports"] # - # Representation of data using Tf-IDF vectorizer = TfidfVectorizer() vectorized_train_data = vectorizer.fit_transform(train_data) vectorized_test_data = vectorizer.transform(test_data) # Train the classifier given the training data classifier = LinearSVC() classifier.fit(vectorized_train_data, train_labels) # Predict the labels for the test documents print(classifier.predict(vectorized_test_data)) # ### Nice. 
We build our text classifier :) # - Matching problems # - Cases never seen below # - "Spurious" correlations and bias ("car" appears only in the +ve category) # + from pprint import pprint # This way we print pretty :) def feature_values(doc, representer): doc_rep = representer.transform([doc]) features = representer.get_feature_names() return [(features[index], doc_rep[0, index]) for index in doc_rep.nonzero()[1]] pprint([feature_values(doc, vectorizer) for doc in test_data]) # - # ### Let's try with remove with stop-word # + from nltk.corpus import stopwords # Load the list of english / stop words from nltk stop_words = stopwords.words("english") # Represent, train, predict and print it out vectorizer = TfidfVectorizer(stop_words=stop_words) vectorized_train_data = vectorizer.fit_transform(train_data) vectorized_test_data = vectorizer.transform(test_data) # Assign SVC classifier classifier = LinearSVC() # fit the classifier with vectorized train data set and their labels. classifier.fit(vectorized_train_data, train_labels) # Lets print and see what comes out, should give a Sports, Non Sports, Sports print(classifier.predict(vectorized_test_data)) # - # ### Ok, cool. # # ### Multi-Class Classification Challenge # # Here lets address the multi-class problem of detecting the language of a sentence based on 3 mutually exclusive languages such as English, Spanish, French. Lets assume that we can only have three languages that the documents can contain. # # So, lets get on and create a sample artificial text... # + train_data = ['PyCon es una gran conferencia', 'Aprendizaje automatico esta listo para dominar el mundo dentro de poco', 'This is a great conference with a lot of amazing talks', 'AI will dominate the world in the near future', 'Dix chiffres por resumer le feuilleton de la loi travail'] train_labels = ["SP", "SP", "EN", "EN", "FR"] test_data = ['Estoy preparandome para dominar las olimpiadas', 'Me gustaria mucho aprender el lenguage de programacion Scala', 'Machine Learning is amazing', 'Hola a todos'] test_labels = ["SP", "SP", "EN", "SP"] # Representation vectorizer = TfidfVectorizer() vectorized_train_data = vectorizer.fit_transform(train_data) vectorized_test_data = vectorizer.transform(test_data) # Training classifier = LinearSVC() classifier.fit(vectorized_train_data, train_labels) # Predicting predictions = classifier.predict(vectorized_test_data) pprint(predictions) # - # ### So, what happened above? # # Why didn't is show SP in the end as per the test label but EN? # # ### Multi-Label Problem # # Here we try to figure out the multi-label problem of labelling documents with their relevance to sports, politics etc. As previously demonstrated, we create a small collection. # # We will try to do it differently this time in: # - Change the representation of the data viewing every document as a list of bits -- with them representing of weither being OR not to each category. We'll need a `MultiLabelBinarizer` from the sklearn library # - We'll run the classifier N times, once for each category where the negative cases will be documents in all other categories. for this we'll need a `OneVsRestClassifier` from sklearn. [Note: There is also a `OneVsOneClassifier`, but we'll discuss this another time] # # So, lets get started... 
# + from sklearn.preprocessing import MultiLabelBinarizer from sklearn.multiclass import OneVsRestClassifier train_data = ['Soccer: a great sport', 'The referee has been very bad this season', 'Our team scored 5 goals', 'I love tenis', 'Politics is in decline in the UK', 'Brexit means Brexit', 'The parlament wants to create new legislation', 'I so want to travel the world', 'The government will increase the budget for sports in the NL after great sport medal tally!', "O'Reilly has a great conference this year"] train_labels = [["Sports"], ["Sports"], ["Sports"], ["Sports"], ["Politics"],["Politics"],["Politics"],[],["Politics", "Sports"],[]] test_data = ['Swimming is a great sport', 'A lot of policy changes will happen after Brexit', 'The table tenis team will travel to the UK soon for the European Championship', 'The government will increase the budget for sports in the NL after great sport medal tally!', 'PyCon is my favourite conference'] test_labels = [["Sports"], ["Politics"], ["Sports"], ["Politics","Sports"],[]] # We change the representation of the data as a list of bit lists multilabelbin = MultiLabelBinarizer() binary_train_labels = multilabelbin.fit_transform(train_labels) binary_test_labels = multilabelbin.transform(test_labels) print("These are Binary Train Labels: ", binary_train_labels) print("These are Binary Test Labels: ", binary_test_labels) # + # Doing same with OneVsRest # Represent first vectorizer = TfidfVectorizer(stop_words=stop_words) vectorized_train_data = vectorizer.fit_transform(train_data) vectorized_test_data = vectorizer.transform(test_data) # Build one classifier per category classifier = OneVsRestClassifier(LinearSVC()) classifier.fit(vectorized_train_data, binary_train_labels) # Predict predictions = classifier.predict(vectorized_test_data) print(predictions) print() # - print(multilabelbin.inverse_transform(predictions)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Path Changes from : # # New paths were data will be updated, and secondly the archived links if eeded for troubleshooting etc. Change made 25.03 main_link = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/' CONFIRMED = pd.read_csv(main_link+'time_series_covid19_confirmed_global.csv') DEATHS = pd.read_csv(main_link+'time_series_covid19_deaths_global.csv') RECOVERED = pd.read_csv(main_link+'time_series_covid19_recovered_global.csv') # + #main_link = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/archived_data/archived_time_series/' #CONFIRMED = pd.read_csv(main_link+'time_series_19-covid-Confirmed_archived_0325.csv') #DEATHS = pd.read_csv(main_link+'time_series_19-covid-Deaths_archived_0325.csv') #RECOVERED = pd.read_csv(main_link+'time_series_19-covid-Recovered_archived_0325.csv') # - # ### Estimating SIR-model # # # # Source of idea & code base: https://github.com/Lewuathe/COVID19-SIR # # Blogpost: https://www.lewuathe.com/covid-19-dynamics-with-sir-model.html # # # This script is adapted to that it reads the files from , writes them to a folder, rewrites and then uses them, so there is probably a more optimal way to do it. The same script is available aslo as solver.py, but thought it might be easier to test/change in this format. 
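# For reference, the model being fitted here (see the `SIR` function defined inside `Learner.predict` and in `loss` below) is the standard SIR system
#
# $$\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I$$
#
# and `minimize` is used to search for the $\beta$ and $\gamma$ that best reproduce the observed infected and recovered curves.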
# # #### Folder name used for files: Converted_datafiles import numpy as np import pandas as pd from scipy.integrate import solve_ivp from scipy.optimize import minimize import matplotlib.pyplot as plt from datetime import timedelta, datetime import urllib.request # + class Learner(object): def __init__(self, country, loss, start_date, predict_range, s_0, i_0, r_0): self.country = country self.loss = loss self.start_date = start_date self.predict_range = predict_range self.s_0 = s_0 self.i_0 = i_0 self.r_0 = r_0 self.main_link = main_link def load_confirmed(self, country): df = pd.read_csv('Converted_datafiles/time_series_19-covid-Confirmed-country.csv') country_df = df[df['Country/Region'] == country] return country_df.iloc[0].loc[self.start_date:] def load_recovered(self, country): df = pd.read_csv('Converted_datafiles/time_series_19-covid-Recovered-country.csv') country_df = df[df['Country/Region'] == country] return country_df.iloc[0].loc[self.start_date:] def load_dead(self, country): df = pd.read_csv('Converted_datafiles/time_series_19-covid-Deaths-country.csv') country_df = df[df['Country/Region'] == country] return country_df.iloc[0].loc[self.start_date:] def extend_index(self, index, new_size): values = index.values current = datetime.strptime(index[-1], '%m/%d/%y') while len(values) < new_size: current = current + timedelta(days=1) values = np.append(values, datetime.strftime(current, '%m/%d/%y')) return values def predict(self, beta, gamma, data, recovered, death, country, s_0, i_0, r_0): new_index = self.extend_index(data.index, self.predict_range) size = len(new_index) def SIR(t, y): S = y[0] I = y[1] R = y[2] return [-beta*S*I, beta*S*I-gamma*I, gamma*I] extended_actual = np.concatenate((data.values, [None] * (size - len(data.values)))) extended_recovered = np.concatenate((recovered.values, [None] * (size - len(recovered.values)))) extended_death = np.concatenate((death.values, [None] * (size - len(death.values)))) return new_index, extended_actual, extended_recovered, extended_death, solve_ivp(SIR, [0, size], [s_0,i_0,r_0], t_eval=np.arange(0, size, 1)) def train(self): recovered = self.load_recovered(self.country) death = self.load_dead(self.country) conf = self.load_confirmed(self.country) data = (self.load_confirmed(self.country) - recovered - death) print('Hi') optimal = minimize(loss, [0.001, 0.001], args=(data, recovered, self.s_0, self.i_0, self.r_0), method='L-BFGS-B', bounds=[(0.00000001, 0.4), (0.00000001, 0.4)]) print(optimal) beta, gamma = optimal.x new_index, extended_actual, extended_recovered, extended_death, prediction = self.predict(beta, gamma, data, recovered, death, self.country, self.s_0, self.i_0, self.r_0) df = pd.DataFrame({'Infected data': extended_actual, 'Recovered data': extended_recovered, 'Death data': extended_death, 'Susceptible': prediction.y[0], 'Infected': prediction.y[1], 'Recovered': prediction.y[2]}, index=new_index) fig, ax = plt.subplots(figsize=(15, 10)) ax.set_title(self.country) df.plot(ax=ax) print(f"country={self.country}, beta={beta:.8f}, gamma={gamma:.8f}, r_0:{(beta/gamma):.8f}") fig.savefig(f"{self.country}.png") #return recovered, death, data, conf def loss(point, data, recovered, s_0, i_0, r_0): size = len(data) beta, gamma = point def SIR(t, y): S = y[0] I = y[1] R = y[2] return [-beta*S*I, beta*S*I-gamma*I, gamma*I] solution = solve_ivp(SIR, [0, size], [s_0,i_0,r_0], t_eval=np.arange(0, size, 1), vectorized=True) l1 = np.sqrt(np.mean((solution.y[1] - data)**2)) l2 = np.sqrt(np.mean((solution.y[2] - recovered)**2)) alpha = 
0.1 return alpha * l1 + (1 - alpha) * l2 # - # These functions does the rewriting, just need to execute once. # + def download_data(url_dictionary): #Lets download the files for url_title in url_dictionary.keys(): urllib.request.urlretrieve(url_dictionary[url_title], "./Converted_datafiles/" + url_title) def remove_province(input_file, output_file): input = open(input_file, "r") output = open(output_file, "w") output.write(input.readline()) for line in input: if line.lstrip().startswith(","): output.write(line) input.close() output.close() # + data_d = { "time_series_19-covid-Confirmed.csv" : "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv", "time_series_19-covid-Recovered.csv" : "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv", "time_series_19-covid-Deaths.csv" : "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv" } download_data(data_d) # - remove_province('./Converted_datafiles/time_series_19-covid-Confirmed.csv', './Converted_datafiles/time_series_19-covid-Confirmed-country.csv') remove_province('./Converted_datafiles/time_series_19-covid-Recovered.csv', './Converted_datafiles/time_series_19-covid-Recovered-country.csv') remove_province('./Converted_datafiles/time_series_19-covid-Deaths.csv', './Converted_datafiles/time_series_19-covid-Deaths-country.csv') # ### Run model for different countries # # # The original script needed start date as an input, here we take that from first occurence of infected persons. # # Default values for the original function was s_0 = 100000, i_0 = 2, r_0 = 0. For Italy s_0 = 100000 works fine, for others 15 000 is better, as explained in the blogpost. def get_start_date(country): df = pd.read_csv('Converted_datafiles/time_series_19-covid-Confirmed-country.csv') country_df = df[df['Country/Region'] == country] confirmed_country = df[df['Country/Region'] == country].sum()[4:].values first_case = np.argwhere(confirmed_country)[0][0] colname = country_df.columns[(first_case+4)] return colname countries = ['Italy'] #startdate = START_DATE predict_range = 150 s_0 = 100000 i_0 = 2 r_0 = 0 for country in countries: startdate = get_start_date(country) learner = Learner(country, loss, startdate, predict_range, s_0, i_0, r_0) #try: #recovered, death, data, conf = learner.train() learner.train() #except BaseException: # print('WARNING: Problem processing ' + str(country) + # '. Be sure it exists in the data exactly as you entry it.' + # ' Also check date format if you passed it as parameter.') countries = ['Korea, South'] #startdate = START_DATE predict_range = 150 s_0 = 15000 i_0 = 2 r_0 = 0 for country in countries: startdate = get_start_date(country) learner = Learner(country, loss, startdate, predict_range, s_0, i_0, r_0) #try: #recovered, death, data, conf = learner.train() learner.train() #except BaseException: # print('WARNING: Problem processing ' + str(country) + # '. Be sure it exists in the data exactly as you entry it.' 
+ # ' Also check date format if you passed it as parameter.') # + countries = ['Sweden'] #startdate = START_DATE predict_range = 150 s_0 = 15000 i_0 = 2 r_0 = 0 for country in countries: startdate = get_start_date(country) learner = Learner(country, loss, startdate, predict_range, s_0, i_0, r_0) #try: #recovered, death, data, conf = learner.train() learner.train() #except BaseException: # print('WARNING: Problem processing ' + str(country) + # '. Be sure it exists in the data exactly as you entry it.' + # ' Also check date format if you passed it as parameter.') # + countries = ['Sweden'] #startdate = START_DATE predict_range = 150 s_0 = 15000 i_0 = 2 r_0 = 0 for country in countries: startdate = get_start_date(country) learner = Learner(country, loss, startdate, predict_range, s_0, i_0, r_0) #try: #recovered, death, data, conf = learner.train() learner.train() #except BaseException: # print('WARNING: Problem processing ' + str(country) + # '. Be sure it exists in the data exactly as you entry it.' + # ' Also check date format if you passed it as parameter.') # + countries = ['Denmark'] #startdate = START_DATE predict_range = 150 s_0 = 15000 i_0 = 2 r_0 = 0 for country in countries: startdate = get_start_date(country) learner = Learner(country, loss, startdate, predict_range, s_0, i_0, r_0) #try: #recovered, death, data, conf = learner.train() learner.train() #except BaseException: # print('WARNING: Problem processing ' + str(country) + # '. Be sure it exists in the data exactly as you entry it.' + # ' Also check date format if you passed it as parameter.') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.10.0 64-bit (''adventofcode-mOkh6lsX'': pipenv)' # language: python # name: python3 # --- # # Day 24, symbolic evaluation # # * # # This puzzle is different from most AoC problems in that the description and tests are not actually all that much use. You need to study the puzzle input too, as it is the specific mathematical expressions created from the input that'll determine when, given the 14 different inputs (each between 1 and 9), you'll get a zero a the output. # # ## Puzzle input patterns # # The input consists of 14 repeated sections like this: # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# | #  | opcode | op1 | op2 | interpretation                                                           |
# |----|--------|-----|-----|--------------------------------------------------------------------------|
# | 1  | inp    | w   |     | w = input_digit                                                          |
# | 2  | mul    | x   | 0   | x = z % 26 (here, z is the output of the previous section)               |
# | 3  | add    | x   | z   |                                                                          |
# | 4  | mod    | x   | 26  |                                                                          |
# | 5  | div    | z   | A   | z = z / A (A is either 1 or 26, depending on B)                          |
# | 6  | add    | x   | B   | x = x + B (B is a number between -15 and +15)                            |
# | 7  | eql    | x   | w   | x = 0 if x == w else 1                                                   |
# | 8  | eql    | x   | 0   |                                                                          |
# | 9  | mul    | y   | 0   | y = 25 * x + 1 (x is either 0 or 1, so y is now either 1 or 26)          |
# | 10 | add    | y   | x   |                                                                          |
# | 11 | mul    | y   | 25  |                                                                          |
# | 12 | add    | y   | 1   |                                                                          |
# | 13 | mul    | z   | y   | z = z * y                                                                |
# | 14 | mul    | y   | 0   | y = (w + C) * x (C is a positive, non-zero integer; x is either 0 or 1)  |
# | 15 | add    | y   | w   |                                                                          |
# | 16 | add    | y   | C   |                                                                          |
# | 17 | mul    | y   | x   |                                                                          |
# | 18 | add    | z   | y   | z = z + y                                                                |
    # # The values for A, B and C are the only values that vary between the parts, and, in fact, between puzzle inputs for everyone participating in AoC. Moreover, A depends on B; it is 26 only if B is a positive number (zero or greater). # # So, expressed as Python, the sections come down to: # # ```python # def section(input, z, B, C): # x = z % 26 + B # if B >= 0: # z //= 26 # if input != x: # z = z * 26 + input + C # return z # ``` # # From this, you can see that `z` will never be negative, and can only be 0 if, by the time we reach the last block, it is smaller than 26 (as `z //= 26` is the only point where `z` decreases, and only for values smaller than 26 would floor division give 0 there). # # The other conclusion we can make is that the outcome _branches_, based on the values of the input digits; at least, for those blocks where `B` is not larger than 9, as that would _guarantee_ that `input` is not equal to `x`. *One* of those branches will end up being zero, for a given set of conditions. Our job will be to find that set of conditions, because from that we can deduce the permissible range of each input variable. # # Finally, I note that only the _condition_ has to rely on modulo operations. If we play our cards right, then each variant of the expression being processed is going to be a [linear polynomial](https://en.wikipedia.org/wiki/Polynomial#linear_polynomial) with all positive [coefficients](https://en.wikipedia.org/wiki/Coefficient). Put differently, it'll be a rather simple $ai_0 + bi_1 + ci_2 + ... + zi_n$ expression, something we can make use of when trying to simplify expressions or prune branches. # # ## Using Sympy to track branching # # I decided to solve this problem by using [sympy](https://www.sympy.org/) to parse and solve the equation, as it'll let us combine the parts into a single equation and track branching. Braching is tracked via [`Piecewise` objects](https://docs.sympy.org/latest/modules/functions/elementary.html#sympy.functions.elementary.piecewise.Piecewise), and Sympy will automatically eliminate branches if it recognises the condition always applies or can never be met. Sympy can do this because keeps track of various properties of the symbols (variables) involved, such as the fact that all our inputs are going to be non-zero positive integers. # # However, there are a few challenges to overcome: # # - The ALU division operation needs to floor the outcome (if the signs of the operands are the same. truncate towards zero. We don't have to worry about negative numbers however, as the only division that takes place is either by 1 or by 26. We can't just use `floor()` here, because then Sympy generally won't be able to simplify the expression further. # - The expresion rapidly grows to a size where manipulating it gets _very_ slow, so we need to find strategies to simplify it further than the standard Sympy simplifcation methods can achieve. # # ### Recasting division to floor the result # # The first problem can be solved by redefining the operation in terms that Sympy can process and even simplify. Floor division can de defined by first subtracting the remainder from the dividend before dividing: # # $$ # \lfloor \frac a b \rfloor = \frac {a - (a \mod b)} {b} # $$ # # Sympy knows how to handle modulo operations, so that's what we'll use to translate the `div` operator. # # We don't have to worry about rounding towards negative infinity, as for this puzzle, neither operand is ever smaller than zero. 
However, should the need arise, you can expand on this by testing for either $a$ or $b$ being negative: # # $$ # \begin{cases} # \frac {a + (-a \mod b)} {b} & \text{if } a < 0 \land b > 0 \\ # \frac {a + (a \mod -b)} {b} & \text{if } a > 0 \land b < 0 \\ # \frac {a - (a \mod b)} {b} & \text{if } ab >= 0 # \end{cases} # $$ # # In Sympy, you can then model those multiple cases in a `Piecewise()` object. I didn't bother with this however, as the first two cases would simply be dropped instantly, anyway. # # ### Eliminating modulo operations # # Next, we can assist Sympy by eliminating modulo operations if we know the left-hand $a$ value is always going to be lower than the right-hand value $b$, which in our case is always going to be 26 (either from the `mod x 26` operation form line 4, or one of the `div z 26` operations on line 5). # # One way we could do this is to try and test the expression $a < b$ for each free symbol (input variable) in $a$ using the [`solveset()` function](https://docs.sympy.org/latest/modules/solvers/solveset.html#sympy.solvers.solveset.solveset) and a [`Range()` set](https://docs.sympy.org/latest/modules/sets.html#sympy.sets.fancysets.Range) as the domain. If this produces the same range of values again, we know that for all possible values for that input, the modulo operation will not have any effect and can be eliminated. # # However, because the left-hand-side expression in our modulo operations are always linear polynomials with positive coefficients (only `+` and `*` operations), you can instead substitute all input symbols with $9$ to determine the highest possible value. If the result is then lower than $b$, we know the modulo can be removed. # # ### Collapsing equality tests # # We can do something similar for equality tests, but this time we'll have to stick with `solveset()`, as the alternative would have to be testing each possible combination of the inputs involved. # # For each free $symbol$ in the $expression$ (each an input variable), test what `solveset(expression, symbol, Range(1, 10))` returns. This will give us a new set, the set of all values for that that symbol for which the outcome will be true. There are three possible outcomes: # # * The empty set: the equality test is _always false_, regardless of what the value is for that input. # * The `Range(1, 10)` set: the equality test is _always true_, for all possible inputs. # * Some other set, which is always a subset of the input domain. # # For the first two outcomes, the equality can be replaced by a boolean constant. # # ### Eliminating branches # # From the above analysis we know that $z$ can only ever be zero if, by the time we reach the very last section, $z$ is a value between 0 and 25 inclusive, and the only way $z$ is going to get there is by division by 26. If you count the number times $z$ is divided by 26, you can test any given branch by substituting all inputs with 1 and seeing if the result is equal to or greater than 26 raised to the power of the number of divisions that are still left. # # However, because we also eliminate branches that can never be taken (by collapsing equality tests), we can't know how many divisions will remain until we've parsed all the sections. So instead, we start with merging expressions that have already been simplified into a single branch into the current expression. The moment the merged expression still has two branches, we start a new set of expressions to merge. 
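# To make the equality-test collapsing described above concrete, here is a tiny self-contained illustration (the symbol `w0` and the equations are stand-ins for illustration only, not taken from a real puzzle input): `solveset` over the digit domain returns the empty set when no digit can satisfy the equality, and otherwise the subset of digits that can.

import sympy as sy

w0 = sy.Symbol("w0", integer=True, positive=True)
digits = sy.Range(1, 10)  # the nine possible input digits, 1..9

# No digit satisfies w0 + 3 == 15, so a branch guarded by it can be dropped entirely
print(sy.solveset(sy.Eq(w0 + 3, 15), w0, digits))  # EmptySet
# Only w0 == 5 satisfies w0 + 2 == 7, so the branch survives for a subset of the inputs
print(sy.solveset(sy.Eq(w0 + 2, 7), w0, digits))   # {5}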
# # Once we then have a series of branched expressions, we count how many of these divide by 26, so we know the limit beyond which a given expression will no longer reach zero. Each branch will have one or more inequality conditions, in the form of `inputA - inputB != number`; the remaining branches need to be updated with the _inverse_ of those conditions, because these conditions are what limit the input values to a smaller range. If you end up with a _single_ branch, you've found a path to `z == 0`, so we need to keep trace of those conditions. # # ### Finding the minimum and maximum possible version numbers # # Merging all branches this way, results in a single `0` expression, and a single condition, a conjunction of equality tests. Each of those equality tests can be turned into a maximum value for the smaller of the two inputs. E.g., the expression `inputA - inputB = 5` can only be true if `inputB` is smaller than `inputA`, and can, at most, be `4`. If it was `5`, then the condition would have matched one of the already eliminated branches, ones that don't reach zero! # # To determine the maximum version number, then, start with a list of all `9` digits, and adjust those for inputs that must be smaller to meet the conditions they are involved in. For part two, do the same with a list of `1` digits, adjusted upwards to keep the other input large enough for the condition to apply. # # + from __future__ import annotations from functools import cached_property, reduce, singledispatchmethod from operator import add, mod, mul from typing import Callable, Final import sympy as sy from sympy import piecewise_fold, simplify_logic, solveset OPCODES: Final[dict[str, Callable[[sy.Basic, sy.Basic], sy.Basic]]] = { "add": add, "mul": mul, "div": lambda a, b: (a - a % b) / b, # we can assume a * b >= 0, always. "mod": mod, "eql": lambda a, b: sy.Piecewise((1, sy.Eq(a, b)), (0, True)), } Z: Final[sy.Symbol] = sy.Symbol("z", integer=True, negative=False) class MONAD: _condition: sy.Boolean = sy.S.true _limit: int = 0 _min: int | None = None _max: int | None = None def __init__(self, instructions: str) -> None: self._parse(instructions) def _parse(self, instructions: str) -> None: reg: dict[str, sy.Basic] = dict.fromkeys("xyz", sy.S.Zero) ws: list[sy.Symbol] = [] branches: list[sy.Basic] = [sy.S.Zero] for block in instructions.split("inp w\n")[1:]: w = sy.Symbol(f"w{len(ws)}", integer=True, positive=True, nonzero=True) ws.append(w) reg |= {"w": w, "z": Z} for line in block.splitlines(): instr, target, *args = line.split() args = [reg[p] if p in reg else sy.Integer(p) for p in args] reg[target] = OPCODES[instr](reg[target], *args) if not branches[-1].is_Piecewise: reg["z"] = reg["z"].subs({Z: branches.pop()}) expr = piecewise_fold(reg["z"]).replace(*self._replace_args) branches.append(expr) # combine all branched expressions into a single expression, while # removing branches that are never going to reach zero. expr = sy.S.Zero self._limit = 26 ** sum(1 for br in branches if br.has(sy.S.One / 26)) for branch in branches: self._limit //= 26 if branch.has(sy.S.One / 26) else 1 expr = piecewise_fold(branch.subs({Z: expr})).replace(*self._replace_args) def _find_extrema(self): """Turn the final 0 condition into boundaries for the 14 digits""" ws = sorted(self._condition.free_symbols, key=sy.default_sort_key) mins, maxs = [1] * len(ws), [9] * len(ws) for cond in self._condition.args: # each condition is an inequality between two inputs. 
It is always # in the form inputA - inputB == C so we only need to know the value # of C and the indexes of the input variables involved. w1, w2, diff = cond.lhs.args[0], -cond.lhs.args[1], cond.rhs.p if diff < 0: w1, w2, diff = w2, w1, -diff wi1, wi2 = ws.index(w1), ws.index(w2) mins[wi1], maxs[wi2] = max(mins[wi1], 1 + diff), min(maxs[wi2], 9 - diff) self._min = reduce(lambda a, b: a * 10 + b, mins) self._max = reduce(lambda a, b: a * 10 + b, maxs) @property def minimum(self) -> int: if self._min is None: self._find_extrema() return self._min @property def maximum(self) -> int: if self._max is None: self._find_extrema() return self._max @singledispatchmethod def _simplify(self, _: sy.Basic) -> sy.Basic | None: """Handler for simplification handlers via single dispatch Individual methods below are registered to simplify a specific Sympy object type. """ return None @cached_property def _replace_args( self, ) -> tuple[Callable[[sy.Basic], bool], Callable[[sy.Basic], sy.Basic | None]]: """Argument pair for Expr.replace(), dispatching to the _simplify() method For each expression element for which the first callable returns True, sympy calls the second method, which in turn will call the registered hook method for the specific type of object. """ # this is way harder than it should be, singledispatchmethod should # really add registry on the generated method directly. Access _simplify # via the class namespace so the descriptor protocol doesn't kick in, # so we can then access the dispatcher registry. dispatch_registry = vars(type(self))["_simplify"].dispatcher.registry types = tuple(dispatch_registry.keys() - {object}) return ((lambda a: isinstance(a, types)), self._simplify) @_simplify.register def _simplify_mod(self, mod: sy.Mod) -> sy.Basic | None: """Unwrap a modulo operation if a is always smaller than b""" (a, b), subs = mod.args, dict.fromkeys(mod.free_symbols, 9) if not mod.has(Z) and b.is_number and a.subs(subs) < b: return a return None @_simplify.register def _simplify_eq(self, eq: sy.Eq) -> sy.Basic | None: """Simplify an equality expression if it's always true or false""" for sym in eq.free_symbols - {Z}: match solveset(eq, sym, sy.Range(1, 10)): case sy.EmptySet: return sy.S.false case sy.Range(1, 10): return sy.S.true return None @_simplify.register def _simplify_ne(self, ne: sy.Ne) -> sy.Basic | None: """Simplify an inequality expression if it's always true or false""" if (result := self._simplify_eq(~ne)) is not None: return ~result return None @_simplify.register def _simplify_piecewise(self, pw: sy.Piecewise) -> sy.Basic | None: """Eliminate branches that will exceed the limit""" limit = self._limit if not limit: return None elim, new_pairs, subs = sy.S.true, [], dict.fromkeys(pw.free_symbols, 1) for br, cond in pw.args: if br.subs(subs) >= limit: elim &= ~cond continue new_pairs.append((br, cond)) new_pairs = [(e, simplify_logic(c & elim)) for e, c in new_pairs] if len(new_pairs) == 1: # all other branches eliminated; update the condition that applies # to this single branch. 
(expr, cond), = new_pairs self._condition &= cond return expr return pw.func(*new_pairs) # + import aocd alu_instructions = aocd.get_data(day=24, year=2021) expr = MONAD(alu_instructions) print("Part 1:", expr.maximum) print("Part 2:", expr.minimum) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:root] * # language: python # name: conda-root-py # --- # # Spambase Dataset Analysis # # This notebook will analyze and preprocess the spambase dataset. For dataset information go to [UCI repository](https://archive.ics.uci.edu/ml/datasets/spambase). Now, let's import dependencies and the dataset. # + # %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.cm as cm import seaborn as sns from sklearn.preprocessing import LabelEncoder, OneHotEncoder # - spambase = pd.read_csv('spambase.data') spambase.dtypes # + # Get the columns names as array new_header = spambase.columns.values #header = new_header.copy() #header[5]='convenient' # Create the columns cols = ['word_freq_'+str(i) for i in range(48)] cols = cols + ['char_freq_'+str(i) for i in range(6)] cols = cols + ['capital_run_length_average', 'capital_run_length_longest', 'capital_run_length_total', 'target'] new_header # + # Create a DataFrame from the hedader sb = pd.DataFrame(new_header[None,:],columns=list(cols)) new_header[2] = '0.64' new_header[11] = '0.64' new_header[15] = '0.32' new_header[40] = '0.32' sb.loc[0,cols] = new_header.astype(float) sb = sb.astype(float) #sb['0.64.1'] = '0.64' #sb['0.64.2'] = '0.64' #sb['0.32.1'] = '0.32' #sb['0.32.2'] = '0.32' #sb = sb.astype(float) # Transform the type of classes as integer sb['capital_run_length_average'] = sb['capital_run_length_average'].astype(int) sb['capital_run_length_longest'] = sb['capital_run_length_longest'].astype(int) sb['capital_run_length_total'] = sb['capital_run_length_total'].astype(int) sb['target'] = sb['target'].astype(int) # Append both the dataframes sb # - sb.dtypes sb.append(spambase) sb # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- from ggplot import * # %matplotlib inline # geom_blank don't do shit ggplot(diamonds, aes(x='price')) + geom_blank() # no matter what you throw at it # geom_blank don't do shit ggplot(diamonds, aes(x='price')) + \ geom_blank() + \ facet_grid("cut", "clarity") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression, SGDRegressor, Ridge, Lasso, ElasticNet, LogisticRegression from sklearn.preprocessing import PolynomialFeatures from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.base import clone from sklearn import datasets X = 2 * np.random.rand(100,1) y = 4 + 3 * X + np.random.rand(100,1) plt.plot(X, y, "b.") X_b = np.c_[np.ones_like(X), X] theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y) theta_best X_new = np.array([[0],[2]]) X_new_b = 
np.c_[np.ones_like(X_new),X_new] y_predict = X_new_b.dot(theta_best) plt.plot(X_new, y_predict, "r-") plt.plot(X, y, "b.") plt.axis([0, 2, 0, 15]) plt.show() lin_reg = LinearRegression() lin_reg.fit(X, y) lin_reg.intercept_, lin_reg.coef_ # + eta = 0.1 n_iterations = 1000 m = 100 theta = np.random.randn(2,1) for i in range(n_iterations): gradient = 2/m * X_b.T.dot(X_b.dot(theta)-y) theta -= eta * gradient theta # + n_epoch = 50 t0, t1 = 5, 50 def learning_schedule(t): return t0/(t + t1) theta = np.random.randn(2,1) for e in range(n_epoch): for i in range(m): random_index = np.random.randint(m) xi = X_b[random_index:random_index + 1] yi = y[random_index] gradient = 2 * xi.T.dot(xi.dot(theta) - yi) eta = learning_schedule(e * m + i) theta -= eta * gradient theta # - sgd_reg = SGDRegressor(n_iter = 50, penalty = None, eta0 = 0.1) sgd_reg.fit(X, y.ravel()) sgd_reg.intercept_, sgd_reg.coef_ m_poly = 100 X_poly = 6 * np.random.rand(m, 1) - 3 y_poly = 0.5 * X_poly**2 + X_poly + 2 + np.random.rand(m, 1) plt.plot(X_poly, y_poly, "b.") poly_features = PolynomialFeatures(degree = 2, include_bias = False) X_poly_pf = poly_features.fit_transform(X_poly) X_poly[0], X_poly_pf[0] lin_reg = LinearRegression() lin_reg.fit(X_poly_pf, y_poly) lin_reg.intercept_, lin_reg.coef_ def plot_learning_curves(model, X, y): X_train, X_val, y_train, y_val = train_test_split(X, y, test_size = 0.2) train_errors, val_errors = [], [] for m in range(1, len(X_train)): model.fit(X_train[:m], y_train[:m]) y_train_predict = model.predict(X_train[:m]) y_val_predict = model.predict(X_val) train_errors.append(mean_squared_error(y_train_predict, y_train[:m])) val_errors.append(mean_squared_error(y_val_predict, y_val)) plt.plot(np.sqrt(train_errors), "r-+", linewidth = 2, label = "train") plt.plot(np.sqrt(val_errors), "b-", linewidth = 3, label = "val") lin_reg = LinearRegression() plot_learning_curves(lin_reg, X_poly_pf, y_poly) plot_learning_curves(lin_reg, X_poly, y_poly) polynomial_regression = Pipeline(( ("poly_features", PolynomialFeatures(degree = 10, include_bias = False)), ("sgd_reg", LinearRegression()))) plot_learning_curves(polynomial_regression, X_poly, y) plt.ylim([0,3]) poly_features_10 = PolynomialFeatures(degree = 10, include_bias = False) X_poly_10 = poly_features.fit_transform(X_poly) plot_learning_curves(lin_reg, X_poly_10, y) plt.ylim([0,3]) ridge_reg = Ridge(alpha = 1, solver = "cholesky") ridge_reg.fit(X_poly_pf, y_poly) poly_features = PolynomialFeatures(degree = 2, include_bias = False) some_X = poly_features.fit_transform([[1.5]]) ridge_reg.predict(some_X) sgd_reg = SGDRegressor(n_iter = 50, penalty = "l2", eta0 = 0.1) sgd_reg.fit(X_poly_pf, y_poly.ravel()) sgd_reg.predict(some_X) sgd_reg.intercept_, sgd_reg.coef_ lasso_reg = Lasso(alpha = 0.1) lasso_reg.fit(X_poly_pf, y_poly) lasso_reg.predict(some_X) elastic_reg = ElasticNet(alpha = 0.1, l1_ratio = 0.5) elastic_reg.fit(X_poly_pf, y_poly) elastic_reg.predict(some_X) # + sgd_reg_es = SGDRegressor(n_iter = 1, warm_start = True, penalty = None, learning_rate = "constant", eta0 = 0.001) minimum_val_error = float("inf") best_epoch = None best_model = None val_error_vect, train_error_vect = [], [] X_train_poly, X_val_poly, y_train, y_val = train_test_split(X_poly_pf, y_poly, test_size = 0.2) for e in range(1000): sgd_reg_es.fit(X_train_poly, y_train.ravel()) y_val_predict = sgd_reg_es.predict(X_val_poly) val_error = mean_squared_error(y_val_predict, y_val) val_error_vect.append(val_error) y_train_predict = sgd_reg_es.predict(X_train_poly) 
train_error_vect.append(mean_squared_error(y_train_predict, y_train)) if val_error < minimum_val_error: minimum_val_error = val_error best_epoch = e best_model = clone(sgd_reg_es) # - best_model.fit(X_poly_pf, y_poly.ravel()) best_model.predict(some_X) minimum_val_error, best_epoch plt.plot(np.sqrt(train_error_vect), "r-", linewidth = 1, label = "train") plt.plot(np.sqrt(val_error_vect), "b-", linewidth = 1, label = "val") plt.ylim([0.2,0.5]) iris = datasets.load_iris() list(iris.keys()) X_iris = iris['data'][:,3:] y_iris = (iris['target'] == 2).astype(np.int) iris['feature_names'] log_reg = LogisticRegression() log_reg.fit(X_iris, y_iris) X_iris_new = np.linspace(0,3,1000).reshape(-1,1) y_iris_proba = log_reg.predict_proba(X_iris_new) plt.plot(X_iris_new, y_iris_proba, "g-") plt.plot(X_iris_new, y_iris_proba[:,0], "b--") X_iris = iris["data"][:, (2,3)] y_iris = iris["target"] softmax_reg = LogisticRegression(multi_class = "multinomial", solver = "lbfgs", C = 10) softmax_reg.fit(X_iris, y_iris) softmax_reg.predict([[5,2]]),softmax_reg.predict_proba([[5,2]]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + nbsphinx="hidden" import sys sys.path.append('../') # import os # os.environ['PYTHONASYNCIODEBUG'] = '1' # - # Note: you can try this tutorial in [![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/zh217/aiochan/master?filepath=doc%2Fselect.ipynb). # # Select: the quitessential operation # Channels with their put and get operations can already be used to build rather complicated systems. Now we introduce the operation `select`, which hugely increases the expressive power of channels further. # Basically, if we have channels `c1`, `c2` and `c3` and we write # ``` # result = await select(c1, c2, c3) # ``` # then `result` will hold the result of one and only one `get` operation on `c1`, `c2` and `c3`. *Only one operation will be attempted*. If we have several operations that can be completed at the same time, only one will complete, and the non-completing ones *will not run at all*. This is in constrast with, say, `asyncio.wait`. # Let's have some examples: # + import asyncio import aiochan as ac async def main(): c1 = ac.Chan(name='c1').add(1, 2, 3).close() c2 = ac.Chan(name='c2').add('a', 'b', 'c').close() c3 = ac.Chan(name='c3').add('x', 'y', 'z').close() result, chan = await ac.select(c1, c2, c3) print('the result is', result) print('the result is from', chan) async for v in c1: print('c1 still has value:', v) async for v in c2: print('c2 still has value:', v) async for v in c3: print('c3 still has value:', v) ac.run(main()) # - # Here we have also used some new operations on channels: # # * We can give names to channels: `Chan(name='some name')`, # * `ch.add(...)` adds elements to channels on the background when it is possible to do so, # * `close` closes the channel immediately, but all pending puts (here those by `add`) will still have an opportunity to complete, # * `add` and `close` can be chained as both these methods return the channel. # And for our `select`: # # * it returns a tuple: the value together with the channel that is involved, # * if several operations can all be completed, which one is completed is non-deterministic (try running the above script several times to see). 
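# To see the non-determinism mentioned in the last bullet for yourself, here is a small sketch of my own (using only the calls shown above) that tallies which channel `select` completes on when two channels are ready at the same time:

# +
import collections

import aiochan as ac

async def main():
    wins = collections.Counter()
    for _ in range(20):
        c1 = ac.Chan(name='c1').add(1).close()
        c2 = ac.Chan(name='c2').add(2).close()
        _, chan = await ac.select(c1, c2)
        wins['c1' if chan is c1 else 'c2'] += 1
    print(wins)  # typically a mix of c1 and c2 wins, not always the same channel

ac.run(main())
# -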
# Actually, it is not only get operations that can be `select`ed: # + async def receive(c): r = await c.get() print('received', r, 'on', c) async def main(): c1 = ac.Chan(name='c1') c2 = ac.Chan(name='c2') ac.go(receive(c1)) ac.go(receive(c2)) await ac.nop() result, chan = await ac.select((c1, 'A'), (c2, 'B')) print('select completes on', chan) ac.run(main()) # - # we see that if we give an argument like `(chan, value)` it is interpreted as a put operation akin to `chan.put(value)`. Again, one and only one operation will complete. You can also mix get operations with put operations. # Also, if you are careful, you will have noticed that we have inserted a `nop` above. If it is not there, the `select` will always complete on `c1`. You may want to think about why. # The more non-trivial the application is, the more use of `select` you can find. One of its simplest use is for stopping many workers at once: # + async def worker(out, stop, tag): i = 0 while True: i += 1 await asyncio.sleep(0.1) result, c = await ac.select(stop, (out, '%s-%s' % (tag, i)), priority=True) if c is stop: print('%s stopped' % tag) break async def consumer(c, stop): while True: result, c = await ac.select(stop, c, priority=True) if c is stop: print('consumer stopped') break else: print('received', result) async def main(): c = ac.Chan() stop = ac.Chan() for i in range(3): ac.go(worker(c, stop, 'worker%s' % i)) ac.go(consumer(c, stop)) await asyncio.sleep(0.6) stop.close() await asyncio.sleep(0.2) ac.run(main()) # - # Here stopping can actually be signaled by simply closing the fan-in-fan-out channel, but in more complicated situations (for example, closing down in response to *any one* of several conditions) `select` is essential. # We have also seen that `select` takes an argument `priority`, which defaults to `False`. Here we set it to true, so when several operations become completable at the same time, it is guaranteed that the leftmost one will complete. Here we use this priority `select` to make sure that the operation stops at the earliest instance. # There is also a `default` argument to `select`, which if set, will produce the set value immediately when none of the operations can be completed immediately, with `None` in the place where you usually find the completed channel. The following snippet completes the put only if it can be done immediately: # + async def main(): ch = ac.Chan() result, c = await ac.select((ch, 'value'), default='giveup') if c is None: print(result) print('put cannot complete immediately and was given up') ac.run(main()) # - # By now you should know how to use `select`. It certainly seems a simple enough operation to understand. However, `select` is non-trivial. What we mean by that is that, using only channels and put and get operations on channels, it is not possible to write a `select` clone that has the correct semantics. The semantics of `select` has three requirements: # # * at least one operation is completed; # * at most one operation is completed; # * an operation is completed at the earliest possible time (no unnecessary waiting). # # Writing an operation satisfying any two of the above is easy. But to satisfy all three, you need to submit your operations to the involved channels at the time of calling, and at the time of completion of any operation, you will need to notify all other operations to cancel themselves. Thus the semantics of `select` must be implemented inside `Chan`, not outside. 
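# To make that last point concrete, here is a sketch of my own (not aiochan code) of a naive `select` built from plain gets plus `asyncio.wait`; it breaks the "at most one operation is completed" rule, because a get that loses the race may already have consumed a value from its channel:

# +
import asyncio

import aiochan as ac

async def naive_select(c1, c2):
    t1 = asyncio.ensure_future(c1.get())
    t2 = asyncio.ensure_future(c2.get())
    done, pending = await asyncio.wait([t1, t2], return_when=asyncio.FIRST_COMPLETED)
    # Both gets were submitted; cancelling the pending one now does not help if it
    # has already taken a value off its channel, and if both finished at once the
    # second result is silently thrown away.
    for t in pending:
        t.cancel()
    return next(iter(done)).result()

async def main():
    c1 = ac.Chan(name='c1').add(1, 2, 3).close()
    c2 = ac.Chan(name='c2').add('a', 'b', 'c').close()
    print('naive select got', await naive_select(c1, c2))
    async for v in c1:
        print('left in c1:', v)
    async for v in c2:
        print('left in c2:', v)  # a value is usually missing from one of the channels

ac.run(main())
# -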
# `select` is actually the whole point of `aiochan`: `asyncio` does provide us with futures, locks and things, which are somewhat like our channels superficially. But `select` is conspicuously missing. Channels are made to make `select` possible. The inventor of golang mentions `select` as the reason why channels in golang are provided by the language itself instead of as a library. # Another way of putting this is: in the hierarchy of concurrency operations, `select` is on the highest level of abstraction. Consider the following: # # * unlike python, Java was designed with concurrency (with threads) in mind, so thread primitives exist from the beginning; # * but as working with the primitives was too low-level, `java.util.concurrent` was added as a library; # * Clojure runs on the JVM so it can use all the Java concurrency libraries. Clojure also adds its own flavour of concurrency-friendly constructs in the form of refs (atoms, agents, and even STM); # * BUT Clojure still needs `core.async` as a library, since writing a `select` that works well on all the previous stuff is not possible! (By the way, `select` is called `alt!`, `alts!`, `alt!!` and `alts!!` in core.async. Yes, there are four of them.) # By the way, python has a built-in library called `select`, and a higher-level one doing essentially the same thing called `selectors`. But these libraries only work with files or sockets, not plain python objects, and the availability of the various operations in these libraries depends on the operating system. That is because the library just offloads its work to system calls. Usually we think of system calls as pretty low level. How many times have you encountered some abstraction that is provided by the lower-level operating system but not by the higher-level programming language? # To recap: # # * The `select` operator completes exactly one operation from the given operations, # * `select` can be used as a control structure, # * `select` is non-trivial. 
# # Useful constructs: # # * `select` # * `aiochan.Chan.add` # * Channel operations can be chained (more to come) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from data import * from lda_trial import * from sklearn.linear_model import SGDClassifier from sklearn.svm import LinearSVC from sklearn.metrics import accuracy_score # - trainset = IMDBDataset('train', data_limit=20_000) validset = IMDBDataset('valid') train_accu lens = [len(x) for x in trainset.x] np.array(lens).mean() # + n = 20000 ks = [2, 5, 10, 20, 50] train_accu = [0 for _ in range(len(ks))] test_accu = [0 for _ in range(len(ks))] times = [] for _ in range(3): for idx, k in enumerate(ks): print(f'{k}.') start = time.time() trained_model, final_metric = tp_one_trial(trainset, 'slda', k, n, 3, 5, # args.burn_in, max_iter=1000, stop_increase=5, metric='ll') times.append(time.time() - start) lda_x, lda_y = load_LDA_data_batch(trained_model, trainset) model = LinearSVC() model.fit(lda_x, lda_y) prediction = model.predict(lda_x) ground_truth = lda_y print(f'Train: {100*accuracy_score(prediction, ground_truth):6.5f}%') train_accu[idx] += accuracy_score(prediction, ground_truth) lda_x, lda_y = load_LDA_data_batch(trained_model, validset) prediction = model.predict(lda_x) ground_truth = lda_y print(f'Test: {100*accuracy_score(prediction, ground_truth):6.5f}%') test_accu[idx] += accuracy_score(prediction, ground_truth) print('-' * 100) # - n = 20000 train_accu = [] test_accu = [] for k in [2, 3, 5, 10, 20, 50, 100]: print(f'{k}.') trained_model, final_metric = tp_one_trial(trainset, 'slda', k, n, 3, 5, # args.burn_in, max_iter=20, stop_increase=5, metric='ll') lda_x, lda_y = load_LDA_data_batch(trained_model, trainset) model = LinearSVC() model.fit(lda_x, lda_y) prediction = model.predict(lda_x) ground_truth = lda_y print(f'Train: {100*accuracy_score(prediction, ground_truth):6.5f}%') train_accu.append(accuracy_score(prediction, ground_truth)) lda_x, lda_y = load_LDA_data_batch(trained_model, validset) prediction = model.predict(lda_x) ground_truth = lda_y print(f'Test: {100*accuracy_score(prediction, ground_truth):6.5f}%') train_accu.append(accuracy_score(prediction, ground_truth)) print('-' * 100) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd from numpy import * import numpy as np import seaborn as sns import matplotlib.pyplot as plt import os import cv2 import numpy as np from keras.models import model_from_json from keras.preprocessing import image from sklearn import preprocessing from sklearn import datasets, linear_model from sklearn.metrics import mean_squared_error, r2_score from sklearn import metrics from sklearn.model_selection import train_test_split from sklearn import neighbors data =pd.read_csv('/Users/kurukundasreenathreddy/Desktop/Major project/data/train.csv') array = data.values for i in range(len(array)): if array[i][0]=="Male": array[i][0]=1 else: array[i][0]=0 df=pd.DataFrame(array) maindf =df[[0,1,2,3,4,5,6]] mainarray=maindf.values print (mainarray) temp=df[7] train_y =temp.values # print(train_y) # print(mainarray) train_y=temp.values for i in range(len(train_y)): train_y[i] =str(train_y[i]) mul_lr = 
linear_model.LogisticRegression(multi_class='multinomial', solver='newton-cg',max_iter =1000) mul_lr.fit(mainarray, train_y) testdata =pd.read_csv('/Users/kurukundasreenathreddy/Desktop/Major project/data/test.csv') test = testdata.values for i in range(len(test)): if test[i][0]=="Male": test[i][0]=1 else: test[i][0]=0 # - data data.describe() data.isnull sns.pairplot(data) sns.relplot(x="Gender",y="Personality (Class label)",data=data) sns.relplot(x="Age",y="Personality (Class label)",hue="Age",data=data) sns.catplot(x="Age",y="Personality (Class label)",data=data) sns.relplot(x="Age",y="Personality (Class label)", kind="line",data=data) data["Age"].value_counts() # + df1=pd.DataFrame(test) testdf =df1[[0,1,2,3,4,5,6]] maintestarray=testdf.values print(maintestarray) y_pred = mul_lr.predict(maintestarray) for i in range(len(y_pred)) : y_pred[i]=str((y_pred[i])) DF = pd.DataFrame(y_pred,columns=['Predicted Personality']) DF.index=DF.index+1 DF.index.names = ['Serial Number'] DF.to_csv("output.csv") # - sns.distplot(data.Age,kde=True) sns.boxplot(x='Age',y='Personality (Class label)',data=data) sns.heatmap(data.isnull(), cmap='plasma') sns.countplot(x='Age',hue='Personality (Class label)',data=data) sns.countplot(x='Age',hue='Gender',data=data) # + photo="Demo1.jpg" im = cv2.imread(photo) im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) # - plt.imshow(im, interpolation='nearest',cmap='gray') im = cv2.GaussianBlur(im, (5, 5), 0) plt.imshow(im) im_th = cv2.adaptiveThreshold(im, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 115, 1) plt.imshow(im_th,interpolation='nearest',cmap='gray') plt.imshow(im,cmap='hot') plt.colorbar() import tkinter as tk from tkinter import * from PIL import ImageTk,Image # + root = Tk() root.title("Personality Detection") root.geometry("400x200") def graph(): Personality_detection = np .random.normal(2000,250000,50000) plt.hist(Personality_detection,200) plt.show() my_button = Button(root,text = "Graph",command = graph) my_button.pack() root.mainloop() # + class Win1: def __init__(self, master): self.master = master self.master.geometry("400x300") self.show_widgets() def show_widgets(self): self.frame = tk.Frame(self.master) self.master.title("Window n.1") self.create_button("Click to open Window 2", Win2) self.create_button("Click to open Window 3", Win3) self.create_button("Click to open Window 4", Win4) self.frame.pack() def create_button(self, text, _class): "Button that creates a new window" tk.Button( self.frame, text=text, command=lambda: self.new_window(_class)).pack() def new_window(self, _class): global win2, win3 , win4 try: if _class == Win2: if win2.state() == "normal": win2.focus() except: win2 = tk.Toplevel(self.master) _class(win2) try: if _class == Win3: if win3.state() == "normal": win3.focus() except: win3 = tk.Toplevel(self.master) _class(win3) try: if _class == Win4: if win4.state() == "normal": win4.focus() except: win4 = tk.Toplevel(self.master) _class(win4) def close_window(self): self.master.destroy() class Win2(Win1): def show_widgets(self): self.master.title("Window 2") self.frame = tk.Frame(self.master, bg="red") self.quit_button = tk.Button( self.frame, text=f"Quit this window n. 
2", command=self.close_window) self.quit_button.pack() self.create_button("Open window 3 from window 2", Win3) self.frame.pack() self.label = tk.Label( self.frame, text="THIS IS ONLY IN THE THIRD WINDOW") class Win3(Win2): def show_widgets(self): self.master.title("Window 3") self.frame = tk.Frame(self.master, bg="green") self.quit_button = tk.Button( self.frame, text=f"Quit this window n. 3", command=self.close_window) self.label = tk.Label( self.frame, text="1)50-0= Extraversion\n 2)100-50 = Agreeableness\n 3)150-100=Conscientiousness\n4)200-150=Neuroticism\n5)250-200=Openness") self.label.pack() self.frame.pack() class Win4(Win3): def show_widgets(self): self.master.title("Window 4") self.frame = tk.Frame(self.master, bg="green") self.quit_button = tk.Button( self.frame, text=f"Quit this window n. 4", command=self.close_window) self.label = tk.Label( self.frame, text="1)Openness to experience is one of the domains which are used to describe human personality in the Five Factor Model. Openness involves six facets, or dimensions: active imagination, aesthetic sensitivity, attentiveness to inner feelings, preference for variety, intellectual\ncuriosity, and challenging authority.\n 2)Neuroticism is the trait disposition to experience negative affects, including anger, anxiety, self‐consciousness, irritability, emotional instability, and depression") self.label.pack() self.frame.pack() root = tk.Tk() app = Win1(root) root.mainloop() # - cv2.namedWindow("Resulting Image with Rectangular ", cv2.WINDOW_NORMAL) cv2.imshow("Resulting Image ", im) cv2.waitKey(0) cv2.destroyAllWindows() hist # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # PyWebIO # # PyWebIO provides a series of imperative functions to obtain user input and output on the browser, turning the browser into a “rich text terminal”, and can be used to build simple web applications or browser-based GUI applications. Using PyWebIO, developers can write applications just like writing terminal scripts (interaction based on input and print), without the need to have knowledge of HTML and JS. PyWebIO can also be easily integrated into existing Web services. PyWebIO is very suitable for quickly building applications that do not require complex UI. # # ### Features #
  • Use synchronization instead of callback-based method to get input # #
  • Non-declarative layout, simple and efficient # #
  • Less intrusive: old script code can be transformed into a Web service only by modifying the input and output operation # #
  • Support integration into existing web services, currently supports Flask, Django, Tornado, aiohttp framework # #
  • Support for asyncio and coroutine # #
  • Support data visualization with third-party libraries # ! pip install pywebio from pywebio.input import * from pywebio.output import * name=input("Enter the name",type="text") age=input("Enter the age",type=NUMBER) # + password = input("Input password", type=PASSWORD) # Drop-down selection gift = select('Which gift you want?', ['keyboard', 'ipad']) # Checkbox agree = checkbox("User Term", options=['I agree to terms and conditions']) # Single choice answer = radio("Choose one", options=['A', 'B', 'C', 'D']) # Multi-line text input text = textarea('Text Area', rows=3, placeholder='Some text') # File Upload img = file_upload("Select a image:", accept="image/*") # + def check_age(p): # return None when the check passes, otherwise return the error message if p < 10: return 'Too young!!' if p > 60: return 'Too old!!' age = input("How old are you?", type=NUMBER, validate=check_age) # - age code = textarea('Code Edit', code={ 'mode': "python", # code language 'theme': 'darcula', # Codemirror theme. Visit https://codemirror.net/demo/theme.html#cobalt to get more themes }, value='import something\n# Write your python code') code data = input_group("Basic info",[ input('Input your name', name='name'), input('Input your age', name='age', type=NUMBER, validate=check_age) ]) data # + from pywebio.output import * # Text Output put_text("Hello world!") # Table Output put_table([ ['Commodity', 'Price'], ['Apple', '5.5'], ['Banana', '7'], ]) # Markdown Output put_markdown('~~Strikethrough~~') # File Output put_file('hello_word.txt', b'hello word!') # PopUp Output popup('popup title', 'popup text content') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="K4wbhulGsGww" import numpy as np import pandas as pd import matplotlib.pyplot as plt import zipfile # + id="flSBIZl2sm7k" zip = zipfile.ZipFile('archive.zip') zip.extractall() # + colab={"base_uri": "https://localhost:8080/", "height": 473} id="U3TMw-qbspFR" outputId="b42d3952-dbad-4cdd-a7ab-1eebcfdfe07f" df=pd.read_csv("Life Expectancy Data.csv") df['country']=df['Country'] df['GDP']=round(df['GDP']) df['Alcohol']=round(df['Alcohol']) df[' BMI ']=round(df[' BMI ']) df[' HIV/AIDS']=round(df[' HIV/AIDS']) df['Schooling']=round(df['Schooling']) df[' thinness 1-19 years']=round(df[' thinness 1-19 years']) df[' thinness 5-9 years']=round(df[' thinness 5-9 years']) df=df.drop(columns=['Year','Country','Income composition of resources']) df=df.dropna(how='any') df # + colab={"base_uri": "https://localhost:8080/"} id="_wETuFO0tCqa" outputId="257353d8-e156-4e4c-baf2-10b04b7aa366" df['country'].unique() # + colab={"base_uri": "https://localhost:8080/", "height": 473} id="oqsIuQiYtFvK" outputId="32bbcc49-9dcd-49b0-c033-39eeada1b717" from sklearn.preprocessing import LabelEncoder le=LabelEncoder() le1=LabelEncoder() df['Status']=le.fit_transform(df['Status']) df['country']=le1.fit_transform(df['country']) df # + id="a3mc9MdktQ0O" X=df.iloc[:,:-1].values y=df.iloc[:,-1].values # + colab={"base_uri": "https://localhost:8080/"} id="ohGiLM_dtUyt" outputId="7ca23181-a1fc-431c-fbb0-f129f5357e66" X[:,15:16] # + id="wcsq6l-xtW5v" from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2 ,random_state = 1) # + id="gFmmZQcbtY4T" from sklearn.preprocessing import StandardScaler norm = StandardScaler() X_train[:,15:16] = 
norm.fit_transform(X_train[:, 15:16]) X_test[:, 15:16] = norm.fit_transform(X_test[:, 15:16]) # + colab={"base_uri": "https://localhost:8080/"} id="yNEUNmR4ta_h" outputId="4263bba2-4f6a-4741-ba41-347f94690b99" X_train[:,15:16] # + id="82IOQWrfumLC" y_train=tf.keras.utils.to_categorical(y_train) y_test=tf.keras.utils.to_categorical(y_test) # + id="c7eai0B4tfHv" import tensorflow as tf ann = tf.keras.models.Sequential() ann.add(tf.keras.layers.Dense(units=500, activation='relu')) ann.add(tf.keras.layers.Dense(units=500, activation='relu')) ann.add(tf.keras.layers.Dense(units=500, activation='relu')) ann.add(tf.keras.layers.Dense(units=500, activation='relu')) ann.add(tf.keras.layers.Dense(units=500, activation='relu')) ann.add(tf.keras.layers.Dense(units=133,activation='softmax')) # + colab={"base_uri": "https://localhost:8080/"} id="EO7WZBNBtjEi" outputId="05028490-72a7-45dc-ab9c-59f8707dec35" ann.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) history = ann.fit(X_train, y_train, batch_size=32, epochs=1000) ann.save("LEC.h5") # + colab={"base_uri": "https://localhost:8080/", "height": 573} id="h0vAyF-1tkcC" outputId="54299462-e260-4a8f-fca7-d40cff5c74b4" plt.figure(0) plt.plot(history.history['accuracy'], label='training accuracy') plt.title('Accuracy') plt.xlabel('epochs') plt.ylabel('accuracy') plt.legend() plt.title('Results for ANN training-1') plt.savefig('Accuracy.png') plt.figure(1) plt.plot(history.history['loss'], label='training loss') plt.title('Loss') plt.xlabel('epochs') plt.ylabel('loss') plt.legend() plt.title('Results for ANN training-1') plt.savefig('Loss.png') # + colab={"base_uri": "https://localhost:8080/"} id="wV3WntsouXDn" outputId="a73b3060-db67-455c-d461-6b7d69a230a8" model = tf.keras.models.load_model('LEC.h5') print("Loaded model from disk") # + colab={"base_uri": "https://localhost:8080/"} id="HmFrplWvubrX" outputId="efa09709-322f-41ac-8b73-f1e51db86992" y_pred= model.predict(X_test) y_pred=np.round(y_pred) np.set_printoptions(precision=2) print(y_pred) # + colab={"base_uri": "https://localhost:8080/"} id="_61QkjfIuiY2" outputId="8aa0ebf6-96a1-4952-9d23-2fd6cef06d3d" from sklearn.metrics import accuracy_score print("Accuracy Score for the algorithm=>{}%".format(round(accuracy_score(y_test,y_pred)*100),2)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="1DtGV4xh7SND" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1642108631192, "user_tz": -330, "elapsed": 20662, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="56743038-8cb5-4278-b5a7-491328de576e" from google.colab import drive drive.mount('/content/drive') # + id="BZ0g9z9E7gso" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1642108631872, "user_tz": -330, "elapsed": 682, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="65901ce5-c721-4aff-fc4a-ef3f9ef54ced" # %cd /content/drive/My Drive/semeval_2022/maMi # + colab={"base_uri": "https://localhost:8080/"} id="KrO9sy8rpryV" executionInfo={"status": "ok", "timestamp": 1642108635115, "user_tz": -330, "elapsed": 3245, "user": 
{"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="520bcd5e-6916-4429-a7da-8baf75f470ac" # !pip install tensorflow_addons # + id="XEDCSKN17guQ" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1642108637394, "user_tz": -330, "elapsed": 2281, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="a6e006e0-c6b5-4275-f30d-b7c336211767" import pandas as pd import numpy as np import tensorflow as tf import tensorflow_addons as tfa import glob, warnings import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix, classification_report import seaborn as sns warnings.filterwarnings('ignore') print('TensorFlow Version ' + tf.__version__) # + id="zwFbHyP77gyJ" IMAGE_SIZE = 224 BATCH_SIZE = 16 EPOCHS = 7 img_folder='./TRAINING/' train_df=pd.read_csv('train.csv',sep='\t') val_df=pd.read_csv('test.csv',sep='\t') # + id="I3vVLqD5Q2W_" executionInfo={"status": "ok", "timestamp": 1642108644084, "user_tz": -330, "elapsed": 383, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} IMAGE_SIZE = 224 BATCH_SIZE = 16 img_folder='./test/' test_df=pd.read_csv('./test/Test.csv',sep='\t') # + id="xxIh3AdA7g0f" def data_augment(image): p_spatial = tf.random.uniform([], 0, 1.0, dtype = tf.float32) p_rotate = tf.random.uniform([], 0, 1.0, dtype = tf.float32) p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype = tf.float32) p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype = tf.float32) p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype = tf.float32) # Flips image = tf.image.random_flip_left_right(image) image = tf.image.random_flip_up_down(image) if p_spatial > .75: image = tf.image.transpose(image) # Rotates if p_rotate > .75: image = tf.image.rot90(image, k = 3) # rotate 270º elif p_rotate > .5: image = tf.image.rot90(image, k = 2) # rotate 180º elif p_rotate > .25: image = tf.image.rot90(image, k = 1) # rotate 90º # Pixel-level transforms if p_pixel_1 >= .4: image = tf.image.random_saturation(image, lower = .7, upper = 1.3) if p_pixel_2 >= .4: image = tf.image.random_contrast(image, lower = .8, upper = 1.2) if p_pixel_3 >= .4: image = tf.image.random_brightness(image, max_delta = .1) return image # + id="v7WgiogjRKv7" #seeee class mode # + id="1HP04i6q7g4S" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640348055665, "user_tz": -330, "elapsed": 53484, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="3950aff6-dbf4-45d1-f77d-1c06975262ec" datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale = 1./255, samplewise_center = True, samplewise_std_normalization = True, validation_split = 0.2, preprocessing_function = data_augment) train_gen = datagen.flow_from_dataframe(dataframe = train_df, directory = img_folder, x_col = 'file_name', y_col = 'misogynous', subset = 'training', batch_size = BATCH_SIZE, seed = 1, color_mode = 'rgb', shuffle = True, class_mode = 'raw', target_size = (IMAGE_SIZE, IMAGE_SIZE)) valid_gen = datagen.flow_from_dataframe(dataframe = train_df, directory = img_folder, x_col = 'file_name', 
y_col = 'misogynous', subset = 'validation', batch_size = BATCH_SIZE, seed = 1, color_mode = 'rgb', shuffle = False, class_mode = 'raw', target_size = (IMAGE_SIZE, IMAGE_SIZE)) test_gen = datagen.flow_from_dataframe(dataframe = val_df, directory = img_folder, x_col = 'file_name', y_col = 'misogynous', batch_size = BATCH_SIZE, seed = 1, color_mode = 'rgb', shuffle = False, class_mode = 'raw', target_size = (IMAGE_SIZE, IMAGE_SIZE)) # + id="UAr1VIMTRe8X" executionInfo={"status": "ok", "timestamp": 1642108645086, "user_tz": -330, "elapsed": 1, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale = 1./255, samplewise_center = True, samplewise_std_normalization = True) # + colab={"base_uri": "https://localhost:8080/"} id="iFZVicgfQyfj" executionInfo={"status": "ok", "timestamp": 1642108647834, "user_tz": -330, "elapsed": 379, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="44c99716-2f19-40e9-f0d4-6c8a3e7e1e9e" test_gen = datagen.flow_from_dataframe(dataframe = test_df, directory = img_folder, x_col = 'file_name', y_col = None, batch_size = BATCH_SIZE, seed = 1, color_mode = 'rgb', shuffle = False, class_mode = None, target_size = (IMAGE_SIZE, IMAGE_SIZE)) # + id="HAYJlF9e7g6t" executionInfo={"status": "ok", "timestamp": 1642108658739, "user_tz": -330, "elapsed": 3561, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} # !pip install --quiet vit-keras from vit_keras import vit # + id="m5V4sSfsQ7if" #### seee classes 1 or 2 # + id="x98Jmb7M7g_I" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640285933061, "user_tz": -330, "elapsed": 17877, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="9100c840-8219-4908-a53c-fee5c8231ca2" vit_model = vit.vit_b32( image_size = IMAGE_SIZE, activation = 'sigmoid', pretrained = True, include_top = False, pretrained_top = False, classes=1) # + colab={"base_uri": "https://localhost:8080/"} id="F16yBu6dqNwL" executionInfo={"status": "ok", "timestamp": 1640285935647, "user_tz": -330, "elapsed": 388, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="94ce6eb4-fc13-4758-fcf5-0ded47bedd6f" # + id="8w_CVjKiG394" colab={"base_uri": "https://localhost:8080/", "height": 356} executionInfo={"status": "error", "timestamp": 1640285959687, "user_tz": -330, "elapsed": 5171, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="a62e807b-fc9f-4a1e-ea79-9011c00a92e2" from vit_keras import visualize x = test_gen.next() image = x[0] attention_map = visualize.attention_map(model = vit_model, image = image) # Plot results fig, (ax1, ax2) = plt.subplots(ncols = 2) ax1.axis('off') ax2.axis('off') ax1.set_title('Original') ax2.set_title('Attention Map') _ = ax1.imshow(image) _ = ax2.imshow(attention_map) # + 
id="su9N-irNG4AA" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640285987220, "user_tz": -330, "elapsed": 2052, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="2acf5066-5d76-4a17-ebb1-7159b0d48a9d" model = tf.keras.Sequential([ vit_model, tf.keras.layers.Flatten(), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(11, activation = tfa.activations.gelu), tf.keras.layers.BatchNormalization(), tf.keras.layers.Dense(1, 'sigmoid') ], name = 'vision_transformer') model.summary() # + id="rPUXHcING4Du" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640317796444, "user_tz": -330, "elapsed": 31787173, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="8464604d-ef55-4542-aa9e-b493b5380ff1" learning_rate = 1e-4 optimizer = tfa.optimizers.RectifiedAdam(learning_rate = learning_rate) model.compile(optimizer = optimizer, loss = tf.keras.losses.BinaryCrossentropy(label_smoothing = 0.2), metrics = ['accuracy']) STEP_SIZE_TRAIN = train_gen.n // train_gen.batch_size STEP_SIZE_VALID = valid_gen.n // valid_gen.batch_size reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor = 'val_accuracy', factor = 0.2, patience = 2, verbose = 1, min_delta = 1e-4, min_lr = 1e-6, mode = 'max') earlystopping = tf.keras.callbacks.EarlyStopping(monitor = 'val_accuracy', min_delta = 1e-4, patience = 5, mode = 'max', restore_best_weights = True, verbose = 1) checkpointer = tf.keras.callbacks.ModelCheckpoint(filepath = './saved_models/vit_binary_image_checkpoint.hdf5', monitor = 'val_accuracy', verbose = 1, save_best_only = True, save_weights_only = True, mode = 'max') callbacks = [earlystopping, reduce_lr, checkpointer] model.fit(x = train_gen, steps_per_epoch = STEP_SIZE_TRAIN, validation_data = valid_gen, validation_steps = STEP_SIZE_VALID, epochs = EPOCHS, callbacks = callbacks) model.save('./saved_models/vit_binary_image_model.h5') # + id="6QAy-Wv-RHdG" executionInfo={"status": "ok", "timestamp": 1642108667939, "user_tz": -330, "elapsed": 402, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} from keras.models import load_model # + id="glvPuHqkQCfI" executionInfo={"status": "ok", "timestamp": 1642108687177, "user_tz": -330, "elapsed": 17965, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} model=load_model('./saved_models/vit_binary_image_model.h5') # + id="7O7zbb1vG4Gm" colab={"base_uri": "https://localhost:8080/", "height": 234} executionInfo={"status": "error", "timestamp": 1640348590855, "user_tz": -330, "elapsed": 510609, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="b93dfcb6-58c3-4842-f21b-77b55c1c8423" predicted_classes = np.round(model.predict(test_gen, steps = test_gen.n // test_gen.batch_size + 1)) true_classes = test_gen.classes class_labels = list(test_gen.class_indices.keys()) confusionmatrix = confusion_matrix(true_classes, predicted_classes) plt.figure(figsize = 
(16, 16)) sns.heatmap(confusionmatrix, cmap = 'Blues', annot = True, cbar = True) print(classification_report(true_classes, predicted_classes)) # + id="GJTC_ajNR_77" executionInfo={"status": "ok", "timestamp": 1642108912627, "user_tz": -330, "elapsed": 220666, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputs = np.round(model.predict(test_gen, steps = test_gen.n // test_gen.batch_size + 1)) outputs=list(map(int,outputs)) predictions_db = pd.DataFrame(data=test_df['file_name']) predictions_db['misogynist'] = outputs predictions_db.to_csv('./final_res/answer9.txt', index=False, sep='\t', header=False) # + id="sDmbQ4ZX7hCA" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1640349419845, "user_tz": -330, "elapsed": 405, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="c1537a34-3326-4d20-abd4-dd87eca2ce48" predicted_classes # + colab={"base_uri": "https://localhost:8080/"} id="7jhU0QXBccUx" executionInfo={"status": "ok", "timestamp": 1640349493467, "user_tz": -330, "elapsed": 6, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="a6dd8d08-6ddb-438c-938f-edd9423556a5" print(classification_report(val_df.misogynous.values, predicted_classes)) # + id="yiUFoG3ecuOh" from sklearn.metrics import f1_score # + id="a1RF7nr8c81n" mi=f1_score(val_df.misogynous.values,predicted_classes,average='macro') # + colab={"base_uri": "https://localhost:8080/"} id="H04BmJDydMcw" executionInfo={"status": "ok", "timestamp": 1640349631579, "user_tz": -330, "elapsed": 380, "user": {"displayName": " 19210068", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Giseq-GUEEEHxpbIRj6F3acRsPjOAw-9WWZZRWdDw=s64", "userId": "10632211144139647575"}} outputId="a6eb113a-b06d-4fa9-c5dc-9d0f1c5ede4a" mi # + id="yTJnKHfxdNS_" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: rdkit-env # language: python # name: rdkit-env # --- # + import wget import zipfile import re import tqdm import os import numpy as np from rdkit import Chem from rdkit.Chem.Descriptors import MolWt from rdkit.Chem.MolStandardize import rdMolStandardize enumerator = rdMolStandardize.TautomerEnumerator() import matplotlib.pyplot as plt import matplotlib.font_manager as fm import matplotlib as mpl from pylab import cm # + ########################################################### FUNCTIONS BY ITERATIONS ####################################### def get_stable_tatuomer(Smiles): """ Get Sable Tautoer """ tautomer = "" try: tautomer = Chem.CanonSmiles(Chem.MolToSmiles(enumerator.Canonicalize(Chem.MolFromSmiles(Smiles)))) except: print(f"Could not find tatuomer for {Smiles}") return tautomer def network_iterations(path,Tautomers=False): """ Generates a dictionary that saves molecules by Iteration""" Network = dict() # Define an Empty Dictionary Loaded = 0 # Number of molecules loaded ### Open file with context handler ### with open(path, "r") as handler: for line in handler: # Iterate over the lines try: iteration,smiles = line.split() # Splite every line: The format is Iteration,moleule e.g. 
G1 ccc1c3 try: if Tautomers: CanonicalSmiles = get_stable_tatuomer(smiles) else: CanonicalSmiles = Chem.CanonSmiles(smiles) except: pass if CanonicalSmiles != "": try: Network[iteration].append(CanonicalSmiles) # Append molecule to iteration list inside Network dictionary Loaded +=1 except: Network[iteration] = list() # If an exception occurs then iteration list does not exxists, create it Network[iteration].append(CanonicalSmiles) # Append molecule to iteration list inside Network dictionary Loaded +=1 except: pass print(f"Iterations: {Network.values().__len__()}, Total Moleules: {Loaded}") return Network def get_max_mass_iterations(Network): MaxMass = dict() # Initiate Dictioanry for Iteration,SmilesList in zip(Network.keys(),Network.values()): # Iterate over Network Mass = max([MolWt(Chem.MolFromSmiles(Smiles)) for Smiles in SmilesList]) # Get Maximum mass at each iteration MaxMass[Iteration] = Mass # Append it to dictionary print(f"Max Molecular Weight for iteration {Iteration}: {Mass}") return MaxMass ###################################################### Functions without database iterations def database(path,MaxMass,Tautomers=False): """ Generates a list that saves smiles strings """ Ignored = 0 Network = list() # Define an Empty Dictionary Limit = max(MaxMass.values()) # Maxmimum molecular weight of network ### Open file with context handler ### with open(path, "r") as handler: # Iterate over handler for line in handler: # Iterate over the lines smiles = line # Splite every line try: if Tautomers: CanonicalSmiles = get_stable_tatuomer(smiles) else: CanonicalSmiles = Chem.CanonSmiles(smiles) # Convert smiles to canonical smiles # Check that molecules is not heavier than heavies molecule in network (we will compare apples to apples) if MolWt(Chem.MolFromSmiles(CanonicalSmiles)) <= Limit: if re.match("^[CcOoNn]",CanonicalSmiles): # If Molecules starts with C,N,O Network.append(CanonicalSmiles) # Append molecule to network if mass is lower than limit else: Ignored +=1 # If not add 1 to ignore counter except: Ignored +=1 # If there is no canonical smiels, than there must be a typo in string, ingore it print(f"{Ignored} molecules were ignored out of {len(Network)+Ignored}. 
Leaving a total of {len(Network)} Molecules.") return Network #### Compare program ### def compare_networks_database(mod,database,verbose=True,plot=True): matched = dict() missed = dict() for mol2 in database: for iterations in mod.keys(): #print(f" Currently working on Iteration: {iterations}") total_set = set(mod[iterations]) for mol in total_set: #### Capture Matches #### if mol == mol2: try: matched[iterations].add(mol) # Append molecule to iteration list inside Network dictionary except: matched[iterations] = set() # If an exception occurs then iteration list does not exxists, create it matched[iterations].add(mol) # Append molecule to iteration list inside Network dictionary for iterations in mod.keys(): total_set = set(mod[iterations]) # Define Universe for iteration matched_set = matched[iterations] # Get matched set missed[iterations] = total_set -matched_set # Get Intersection if verbose: matched_list = list() matched_database =list() unmatched_list = list() print() print("Summary (Unique Molecules)") print("-".center(75,"-")) for iterations in mod.keys(): print(f"Iteration {iterations}") matched_set = matched[iterations] total_set = set(mod[iterations]) matched_percent = len(matched_set)/len(total_set)*100 match_d= len(matched_set)/len(database)*100 matched_database.append(match_d) print(f"Molecules Matches: {len(matched_set)} of total {len(total_set)}, percentage: {matched_percent} %") matched_list.append(matched_percent) unmatched = missed[iterations] missed_percent = len(unmatched)/len(total_set)*100 #{len(missed[iterations])} print(f"Molecules Missed: {len(unmatched)} of total {len(total_set)} , percent: {missed_percent} %") unmatched_list.append(missed_percent) print("-".center(75,"-")) print() if plot: Matches_sorted = np.log10(np.array([ len(a[1]) for a in sorted(matched.items())])) Iterations = range(len(Matches_sorted)) labels = [ "G" + str(i) for i in Iterations] Missed_sorted = np.log10(np.array([ len(a[1]) for a in sorted(missed.items())])) Total_sorted = Matches_sorted + Missed_sorted Network_sorted = np.log10(np.array([ len(a[1]) for a in sorted(Network.items())])) plt.figure(figsize=(10,7)) plt.title("Matched/Unmatched molecules through iterations") plt.xlabel("Iterations",fontsize=12) plt.ylabel("Number of Molecules (log10)",fontsize=12) plt.plot(Matches_sorted,"--g",label="Matched molecules") plt.plot(Network_sorted,"--y",label="Network molecules") i=0 for match,percent in zip(Matches_sorted, matched_database): plt.text(i-0.1,match+0.2,str(round(percent,2)) + "%",fontsize=12) i+=1 plt.xticks(Iterations,labels) plt.legend(loc="best") plt.grid() plt.show() return matched,missed,matched_list,unmatched_list # - path = "Datasets/glucose_degradation_output2.txt" Network = network_iterations(path,Tautomers=False) path = "Datasets/glucose_degradation_output2.txt" Network = network_iterations(path,Tautomers=True) MaxMass = get_max_mass_iterations(Network) path = "Datasets/HarrisonDataAugust2019_molescules.csv" Database = database(path,MaxMass,Tautomers=False) # Font mpl.rcParams['font.family'] ='Times New Roman' plt.rcParams['font.size'] = 16 plt.rcParams['axes.linewidth'] = 1 matched,missed,matched_list,unmatched_list = compare_networks_database(Network ,Database,verbose=True) len(matched_l) plt.bar(list(MaxMass.keys()),list(MaxMass.values()), width=0.8, bottom=None, align='center') plt.show() MaxMass.values() def DownloadHMDB(Verbose): """Downloads an instance of the HMDB.sdf Database if a local copy is not avialble""" if Verbose: print("Oops, you Don't have a local 
coppy of HMDB.sdf") print("A new instance of the Database will be dowloaded for you") url = "https://hmdb.ca/system/downloads/current/structures.zip" root = os.getcwd() filename = wget.download(url) with zipfile.ZipFile(filename, 'r') as zip_ref: zip_ref.extractall(os.getcwd()) if Verbose: print("Almost Done...") os.rename(r'structures.sdf',r'HMDB.sdf') os.chdir(root) if Verbose: print("Done") DownloadHMDB(Verbose=True) # Download sdf path = "HMDB.sdf" hmdb=Chem.SDMolSupplier(path) # Load sdf hmdbsmiles = [] for mol in hmdb: try: hmdbsmiles.append(Chem.MolToSmiles(mol)) except: pass len(hmdbsmiles) def database_smiles(SMILES,MaxMass,Tautomers=False): """ Generates a list that saves smiles strings """ Ignored = 0 Network = list() # Define an Empty Dictionary Limit = max(MaxMass.values()) # Maxmimum molecular weight of network ### Open file with context handler ### for smiles in SMILES: # Iterate over the lines try: if Tautomers: CanonicalSmiles = get_stable_tatuomer(smiles) else: CanonicalSmiles = Chem.CanonSmiles(smiles) # Convert smiles to canonical smiles # Check that molecules is not heavier than heavies molecule in network (we will compare apples to apples) if MolWt(Chem.MolFromSmiles(CanonicalSmiles)) <= Limit: if re.match("^[CcOoNn]",CanonicalSmiles): # If Molecules starts with C,N,O Network.append(CanonicalSmiles) # Append molecule to network if mass is lower than limit else: Ignored +=1 # If not add 1 to ignore counter except: Ignored +=1 # If there is no canonical smiels, than there must be a typo in string, ingore it print(f"{Ignored} molecules were ignored out of {len(Network)+Ignored}. Leaving a total of {len(Network)} Molecules.") return Network Database2 = database_smiles(hmdbsmiles,MaxMass,Tautomers=False) matched,missed,matched_list,unmatched_list = compare_networks_database(Network ,Database2,verbose=True) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="ab7GPGB0OJOM" # ###PROBLEM STATEMENT 1. # + colab={"base_uri": "https://localhost:8080/"} id="3CdC_QsbOSos" outputId="a76f2f44-f756-4820-c406-477e7ec9e7d0" #Create a Python program that will produce an output of sum of 10 numbers less than 5 using FOR LOOP statement. (30 points) num = [ 0, 1, 2, 3, 4, -1, -2, -3, -4, -5 ] for x in num: sum = x + 0 print("The sum of 10 numbers less than 5 is", sum) # + [markdown] id="L8KMv-_BW8fJ" # ###PROBLEM STATEMENT 2. # + colab={"base_uri": "https://localhost:8080/"} id="aywNLLVvkOfS" outputId="80f101cd-09e0-4ae5-db7d-34026848cb78" #Create a Python program that will produce accept five numbers and determine the sum of first and last number among the five numbers entered using WHILE LOOP (35 points) num = [] while len(num) is not 5: user = int(input("Input number: ")) num.append(user) print(num) summ = num[0] + num[-1] print(summ) # + [markdown] id="Ggc1kI72YmTG" # ###PROBLEM STATEMENT 3. # + colab={"base_uri": "https://localhost:8080/"} id="PmdpGuAjYo_G" outputId="02e1652c-7463-4c2f-f3b0-175014e1034f" #Create a Python program to calculate student grades. 
It accepts a numerical grade as input and it will display the character grade as output based on the given scale: (Use Nested-IF-Else statement) (35 points) grade = x = ( int(input("Insert Your Grade: "))) if x >= 90: print ("A") elif x >= 80: print ("B") elif x >= 70: print ("C") elif x >= 60: print ("D") else: print ("F") # + # Copyright 2010 # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Kakuru puzzle in Google CP Solver. http://en.wikipedia.org/wiki/Kakuro ''' The object of the puzzle is to insert a digit from 1 to 9 inclusive into each white cell such that the sum of the numbers in each entry matches the clue associated with it and that no digit is duplicated in any entry. It is that lack of duplication that makes creating Kakuro puzzles with unique solutions possible, and which means solving a Kakuro puzzle involves investigating combinations more, compared to Sudoku in which the focus is on permutations. There is an unwritten rule for making Kakuro puzzles that each clue must have at least two numbers that add up to it. This is because including one number is mathematically trivial when solving Kakuro puzzles; one can simply disregard the number entirely and subtract it from the clue it indicates. ''' This model solves the problem at the Wikipedia page. For a larger picture, see http://en.wikipedia.org/wiki/File:Kakuro_black_box.svg The solution: 9 7 0 0 8 7 9 8 9 0 8 9 5 7 6 8 5 9 7 0 0 0 6 1 0 2 6 0 0 0 4 6 1 3 2 8 9 3 1 0 1 4 3 1 2 0 0 2 1 Compare with the following models: * Comet : http://www.hakank.org/comet/kakuro.co * MiniZinc: http://www.hakank.org/minizinc/kakuro.mzn * SICStus : http://www.hakank.org/sicstus/kakuro.pl * ECLiPSe: http://www.hakank.org/eclipse/kakuro.ecl * Gecode: http://www.hakank.org/gecode/kenken2.cpp This model was created by () Also see my other Google CP Solver models: http://www.hakank.org/google_or_tools/ """ from __future__ import print_function import sys from ortools.constraint_solver import pywrapcp # # Ensure that the sum of the segments # in cc == res # def calc(cc, x, res): solver = list(x.values())[0].solver() # ensure that the values are positive for i in cc: solver.Add(x[i[0] - 1, i[1] - 1] >= 1) # sum the numbers solver.Add(solver.Sum([x[i[0] - 1, i[1] - 1] for i in cc]) == res) # Create the solver. 
solver = pywrapcp.Solver("Kakuro") # # data # # size of matrix n = 7 # segments # [sum, [segments]] # Note: 1-based problem = [[16, [1, 1], [1, 2]], [24, [1, 5], [1, 6], [1, 7]], [17, [2, 1], [2, 2]], [29, [2, 4], [2, 5], [2, 6], [2, 7]], [35, [3, 1], [3, 2], [3, 3], [3, 4], [3, 5]], [7, [4, 2], [4, 3]], [8, [4, 5], [4, 6]], [16, [5, 3], [5, 4], [5, 5], [5, 6], [5, 7]], [21, [6, 1], [6, 2], [6, 3], [6, 4]], [5, [6, 6], [6, 7]], [6, [7, 1], [7, 2], [7, 3]], [3, [7, 6], [7, 7]], [23, [1, 1], [2, 1], [3, 1]], [30, [1, 2], [2, 2], [3, 2], [4, 2]], [27, [1, 5], [2, 5], [3, 5], [4, 5], [5, 5]], [12, [1, 6], [2, 6]], [16, [1, 7], [2, 7]], [17, [2, 4], [3, 4]], [15, [3, 3], [4, 3], [5, 3], [6, 3], [7, 3]], [12, [4, 6], [5, 6], [6, 6], [7, 6]], [7, [5, 4], [6, 4]], [7, [5, 7], [6, 7], [7, 7]], [11, [6, 1], [7, 1]], [10, [6, 2], [7, 2]]] num_p = len(problem) # The blanks # Note: 1-based blanks = [[1, 3], [1, 4], [2, 3], [3, 6], [3, 7], [4, 1], [4, 4], [4, 7], [5, 1], [5, 2], [6, 5], [7, 4], [7, 5]] num_blanks = len(blanks) # # variables # # the set x = {} for i in range(n): for j in range(n): x[i, j] = solver.IntVar(0, 9, "x[%i,%i]" % (i, j)) x_flat = [x[i, j] for i in range(n) for j in range(n)] # # constraints # # fill the blanks with 0 for i in range(num_blanks): solver.Add(x[blanks[i][0] - 1, blanks[i][1] - 1] == 0) for i in range(num_p): segment = problem[i][1::] res = problem[i][0] # sum this segment calc(segment, x, res) # all numbers in this segment must be distinct segment = [x[p[0] - 1, p[1] - 1] for p in segment] solver.Add(solver.AllDifferent(segment)) # # search and solution # db = solver.Phase(x_flat, solver.INT_VAR_DEFAULT, solver.INT_VALUE_DEFAULT) solver.NewSearch(db) num_solutions = 0 while solver.NextSolution(): for i in range(n): for j in range(n): val = x[i, j].Value() if val > 0: print(val, end=" ") else: print(" ", end=" ") print() print() num_solutions += 1 solver.EndSearch() print() print("num_solutions:", num_solutions) print("failures:", solver.Failures()) print("branches:", solver.Branches()) print("WallTime:", solver.WallTime()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # cd .. import tagnews import pandas as pd # Download (and extract if needed) a saved glove data from # https://github.com/stanfordnlp/GloVe # and save it to tagnews/data/ glove = tagnews.load_glove('tagnews/data/glove.6B.50d.txt') # Download (and extract if needed) the NER data from # https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus/data # and save it to tagnews/data/ ner = tagnews.load_ner_data('tagnews/data/') ner = pd.concat([ner, pd.DataFrame(glove.loc[ner['word'].str.lower()].values)], axis='columns') num_asserts = 0 for i, row in ner.sample(1000).iterrows(): if not any(row.iloc[2:].isnull()): assert (glove.loc[row['word'].lower()].values == row.iloc[3:].values).all() num_asserts += 1 print('Asserted correct vectorizations', num_asserts, 'times.') import sklearn.ensemble clf = sklearn.ensemble.RandomForestClassifier() # be careful doing this if you are relying on sequential-ness! ner.fillna(value=0.0, inplace=True) clf.fit(ner.iloc[:200000, 3:], ner['tag'].iloc[:200000].values) clf.predict_proba(glove.loc[['london', 'france', 'napkins']]) # Go to https://geo-extract-tester.herokuapp.com/ and download # the validation data (validation.txt). 
with open('validation.txt', encoding='utf-8') as f: s = f.read() with open('guesses.txt', 'w') as f: for prob in clf.predict_proba(glove.loc[[w for w in s.split('\n') if w]].fillna(0))[:,1]: f.write(str(prob) + '\n') # Now go to https://geo-extract-tester.herokuapp.com/ and upload `guesses.txt` to see how you did! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd df = pd.read_excel('Census1971to2001.xlsx') df df.at[1, 'Name of the State/UT'] = 'Assam' df = df.drop(df.columns[6], axis=1) df.columns = ['SNo', 'Name of the State/UT', '1971', '1981', '1991', '2001'] df = df.drop(df.index[[0,1,37,38]]) df = df.reset_index(drop=True) df.at[33, 'Name of the State/UT'] = 'Mizoram' import numpy as np df = df.replace(np.NaN, 0) df df.at[6, '1991'] = 0 df.at[18, '2001'] = 177268 df.at[18, '1991'] = 142868 df import matplotlib.pyplot as plt ax = plt.gca() i = 0 for name, values in df.iteritems(): if i == 0 or i == 1: pass else: ax = plt.gca() df.plot(kind='bar', x='Name of the State/UT', y='{name}'.format(name=name), ax=ax) plt.show() i=i+1 i=0 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import pandas as pd file = open('SMSSpamCollection','r') data = pd.read_table('SMSSpamCollection',sep='\t',names=['label', 'sms_message']) data.head(5) # + ''' for i in range(len(data)): if data.label[i] =='ham': data.label[i] =0 else: data.label[i]=1 ''' #data[label]=data.label.map({'ham':0,'spam':1}) data['label']=data.label.map({'ham':0,'spam':1}) print(data.shape) # - data.head(4) #data.ix[:,0] documents = ['Hello, how are you!', 'Win money, win from home.', 'Call me now.', 'Hello, Call hello you tomorrow?'] lowercase_documents =[] for i in documents: i=i.lower() lowercase_documents.append(i) print(lowercase_documents) ''' import string without_punctuation_documents =[] for i in lowercase_documents: i = i.translate(str.maketrans('', '', string.punctuation)) without_punctuation_documents.append(i) print(without_punctuation_documents) ''' without_punctuation_documents=[] import string for i in lowercase_documents: for j in i: i=''.join(c for c in i if c not in(string.punctuation)) without_punctuation_documents.append(i) print(without_punctuation_documents) tokens_list =[] for i in without_punctuation_documents: tokens_list.append(i.split()) print(tokens_list) # + final_list_of_words=[j for i in tokens_list for j in i] from collections import Counter count = Counter() for i in final_list_of_words: count[i] +=1 print(dict(count)) # - from sklearn.model_selection import train_test_split X_train,X_test,y_train,y_test = train_test_split(data['sms_message'],data['label'],random_state = 1) print("Number of rows in the total set:{} ".format(data.shape[0])) print("Number of rows in the training set:{}".format(X_train.shape[0])) print("Number of rows in the testing set:{}".format(X_test.shape[0])) X_train[1] from sklearn.feature_extraction.text import CountVectorizer count_vector = CountVectorizer() training_data = count_vector.fit_transform(X_train) testing_data = count_vector.transform(X_test) testing_data from sklearn.naive_bayes import MultinomialNB naive_bayes = MultinomialNB() naive_bayes.fit(training_data,y_train) predictions = 
naive_bayes.predict(testing_data) from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score print('Accuracy score: ', format(accuracy_score(y_test, predictions))) print('Precision score: ', format(precision_score(y_test, predictions))) print('Recall score: ', format(recall_score(y_test, predictions))) print('F1 score: ', format(f1_score(y_test, predictions))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # 说明: # 你有n个 tiles,每个tiles上都印着一个字母。返回使用这些平铺上打印的字母可以生成的非空字母序列数。 # # 要求: # 使用给定的 S,返回由S可以组成字符串的最大数量。不能重复。 # # Example 1: # Input: tiles = "AAB" # Output: 8 # Explanation: The possible sequences are "A", "B", "AA", "AB", "BA", "AAB", "ABA", "BAA". # # Example 2: # Input: tiles = "AAABBC" # Output: 188 # # Example 3: # Input: tiles = "V" # Output: 1 # - # 虽然可以通过,但是时间复杂度较高 class Solution: def numTilePossibilities(self, tiles: str) -> int: self.res = [] for i in range(len(tiles)): self.dfs(tiles[:i] + tiles[i+1:], tiles[i]) return len(self.res) def dfs(self, tiles, temp): if temp not in self.res: self.res.append(temp[:]) for i in range(len(tiles)): self.dfs(tiles[:i] + tiles[i+1:], temp+tiles[i]) tiles_ = "AAABBC" solution = Solution() solution.numTilePossibilities(tiles_) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.2 64-bit # name: python_defaultSpec_1596915098752 # --- # + tags=[] from zemailer.core.files import FileOpener from zemailer.core.settings import configuration # - dummy = configuration.get('DUMMY_FILE') item = FileOpener(dummy) item.names # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Prerequisite : Need to run the NLP model notebook # This plot shows the word count before preprocess # + token_before_stop_words = reviews.apply(normalize) fig = plt.figure(figsize = (10,4)) plt.gcf().subplots_adjust(bottom=0.15) plt.tight_layout() plt.title('Top 30 words in review before preprocessing') plt.xlabel('Words') plt.ylabel('Counts') word_counts = list(itertools.chain(*token_before_stop_words)) freq_dist = FreqDist(word_counts) freq_dist.plot(30, cumulative=False) plt.show() fig.savefig('../word_count_before_preprocess.png', bbox_inches = "tight") # - # This plot for review length will be used in the report. However, for the purpose of presentation, I will remove the outliers. # + import seaborn as sns processed_df['Review Length'] = processed_df["Tokens"].apply(lambda x: len(x)) processed_df.tail() fig = plt.figure(figsize = (25,4)) plt.tight_layout() plt.legend() sns.countplot(processed_df['Review Length']) plt.xticks(rotation = 90) # - # This plot will be used for the presentation purpose in which the outliers will be removed. # # IQR = Q3 - Q1 = 26-6 = 20 Upper_Bound = 26+1.5*20 = 56 Those review that have review length>56 will be removed from the plots for presentation. 
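# As a quick illustration of the cutoff described above, the same upper bound can be
# computed from the data instead of being hard-coded. This is a minimal sketch assuming
# the `processed_df['Review Length']` column built earlier; the helper name
# `iqr_upper_bound` is ours and not part of the original notebook.

# +
import pandas as pd

def iqr_upper_bound(lengths: pd.Series) -> float:
    # Upper bound = Q3 + 1.5 * IQR; with Q1 = 6 and Q3 = 26 this reproduces the value 56.
    q1, q3 = lengths.quantile(0.25), lengths.quantile(0.75)
    return q3 + 1.5 * (q3 - q1)

# Example usage: keep only reviews at or below the cutoff before plotting.
# cutoff = iqr_upper_bound(processed_df['Review Length'])
# trimmed = processed_df[processed_df['Review Length'] <= cutoff]
# -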
# + import seaborn as sns processed_df['Review Length'] = processed_df["Tokens"].apply(lambda x: len(x)) processed_df.tail() df['Review Length'].describe() fig = plt.figure(figsize = (10,4)) plt.tight_layout() plt.legend() sns.countplot(processed_df[processed_df['Review Length']<56]['Review Length']) plt.xticks(rotation = 90) plt.title('Review Length Distribution') fig.savefig('../ReviewLength.png', bbox_inches = "tight") # - # From here, we can see that people mostly like to make short review. However, it might be due to the fact that our preprocessing # was unable to separate some words. Thus, there might be more words with a longer actual word length. # This plot shows the word count after preprocess # + fig = plt.figure(figsize = (10,4)) plt.tight_layout() plt.title('Top 30 words in review after preprocessing') plt.xlabel('Words') plt.ylabel('Counts') word_counts = list(itertools.chain(*tokens)) freq_dist = FreqDist(word_counts) freq_dist.plot(30, cumulative=False) plt.show() fig.savefig('../freqDist.png', bbox_inches = "tight") # - # For this part of the code, I edited a bit to get the csv with the compound scores inside. # + analyzer = SentimentIntensityAnalyzer() def sentiment_polarity(s): global analyzer polarity_scores = analyzer.polarity_scores(s) compound_score = polarity_scores["compound"] if compound_score >= 0.5: label = "Positive" elif compound_score > -0.05 and compound_score < 0.05: label = "Neutral" else: label = "Negative" return label, polarity_scores["neu"], polarity_scores["pos"], polarity_scores["neg"],compound_score df = processed_df df["Sentiment"], df["Neutral Proportion"], df["Positive Proportion"], df["Negative Proportion"],df['Compound'] = zip(*df["Review"].apply(sentiment_polarity)) df.sample(3) df.to_csv("../Project/compound.csv") # - # The plots made by using the compound csv can be found in visualization_project_V3.rmd # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (Belle2) # language: python # name: python3 # --- # # Introduction to plotting import sciplot as sp import matplotlib.pyplot as plt # %matplotlib inline plt.style.use('sciplot') # %load_ext autoreload # %autoreload 2 # ### Creating some pseudo data # + import pandas as pd import numpy as np ns =30000 nb =80000 df = {'mass': np.append(np.random.random_sample(nb)*7 - 3.5, np.random.normal(0, 0.5, ns))} df['sig'] = np.append(np.zeros(nb),np.ones(ns), ) df['exp'] = np.random.randint(0,8, ns+nb) df['gauss'] = np.append(np.random.random_sample(nb)*7 - 3.5, np.random.normal(-1, 0.5, ns)) df = pd.DataFrame(df) df1 = {'mass': np.append(np.random.random_sample(nb)*7 - 3.5, np.random.random_sample(ns)*7 - 3.5 )} df1['sig'] = np.append(np.zeros(nb),np.ones(ns), ) df1 = pd.DataFrame(df1) # - # # Simple Plot sp.hist(df.mass, lw=2, scale=0.5, weights=np.random.normal(1,0.1,len(df))) sp.xlim() sp.labels("Mass", "Events", root_style=1) sp.figure() sp.hist(df.mass, color=1, style=0) sp.xlim() sp.labels("Mass", "Events", root_style=1) xx = df.sample(50).mass sp.figure() sp.errorhist(xx, color='black') sp.xlim() plt.ylim(0) sp.labels("Mass", "Events", 'GeV', root_style=1) sp.errorhist(xx, box=True) sp.xlim() plt.ylim(0) sp.labels("Mass", "Events", 'GeV',root_style=1) # # Several Distrtibutions # + sp.figure() sp.hist(df.mass, style=0,) sp.hist(df1.mass, lw=2) sp.xlim() sp.labels("Mass", "Events",'' ) # - sp.sig_bkg_plot(df, "mass", 'sig') sp.figure() sp.hist(df[df.sig==1].mass, 
style=0, color=0, range=(-3,3), label='Signal') sp.hist(df1.mass, style=1, color=1, label='MC1') sp.hist(np.random.normal(-3,1, 10000), style=2, color=3, label='NP') sp.xlim() plt.legend() sp.labels("Mass", "Events",'GeV' ,1) with plt.style.context("sciplot_ticks"): sp.stacked(df, "mass", 'exp', bins=50,) # sp.errorbar(df1.mass, color='black') plt.legend() sp.labels('$\Delta M$', "Events", "GeV", ) sp.xlim() sp.stacked([df[df.exp==2].mass, df[df.exp==3].mass], bins=50, lw=.25) # sp.errorbar(df1.mass, color='black') sp.xlim() sp.stacked(df, "mass", 'exp', bins=50, color=sp.b2helix(8), label=range(8)) sp.errorhist(df.mass.values, color='black', weights=np.random.normal(1.01,0.9, len(df)), label="Data") sp.xlim() plt.legend() sp.labels("$M$", "Events", 'GeV', 1) from sciplot.analysis import plot_flatness with plt.style.context(('sciplot_ticks')): sp.figure() plot_flatness(df.mass, df.gauss, xrange= (-4,3)) sp.xlim() y,x,h = sp.stacked([df[df.exp==2].mass, df[df.exp==3].mass], bins=50, lw=.25) sp.errorbar((x[:-1]+ x[1:])/2, y, np.sqrt(y) ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.6.9 64-bit (''env'': virtualenv)' # language: python # name: python3 # --- # + import os import pprint from pathlib import Path import tensorflow as tf import json import yaml import pandas as pd from tqdm import tqdm from src.data.monuseg.tf_data import get_dataset from src.models.monuseg.train_model import eval, loss from src.models.monuseg.models import get_model os.environ["CUDA_VISIBLE_DEVICES"] = "3" pp = pprint.PrettyPrinter(indent=4) # + path_indices = "/home/valentin/python_wkspce/2d_bispectrum_cnn/data/indices/monuseg.json" config = "/home/valentin/python_wkspce/2d_bispectrum_cnn/src/models/monuseg/configs/bispect_nh1.yaml" # config = "/home/valentin/python_wkspce/2d_bispectrum_cnn/src/models/monuseg/configs/unet_default.yaml" with open(path_indices, "r") as f: indices_list = json.load(f) with open(config, 'r') as f: params = yaml.safe_load(f) # - ds_test = get_dataset(id_list=indices_list[0]["test"], instance=True) ds_test = ds_test.cache().batch(1) params["n_harmonics"] = 8 params model = get_model(model_name=params["model_name"], output_channels=3, loss=loss, n_harmonics=params["n_harmonics"], cosine_decay=params["cosine_decay"], last_activation="softmax", run_eagerly=False) model.load_weights('/home/valentin/python_wkspce/2d_bispectrum_cnn/models/MoNuSeg/BispectUnet__rotation_True__nh_8__n_train_-1__psize_60x60__20211202-173734/weights/split_0/final') # model.load_weights('/home/valentin/python_wkspce/2d_bispectrum_cnn/models/MoNuSeg/BispectUnet__rotation_True__nh_8__n_train_-1__psize_60x60__20211202-173734/weights/split_0/final') # model = tf.keras.models.load_model( # '/home/valentin/python_wkspce/2d_bispectrum_cnn/models/test', # compile=False) # model.compile( # loss=[loss], # optimizer=tf.keras.optimizers.Adam(1e-3), # run_eagerly=False, # ) # eval # + # scores_test = eval(ds=ds_test, # model=model, # cropper=tf.keras.layers.Cropping2D(cropping=(20, 20))) # pp.pprint(scores_test) # + model_path = Path( "/home/valentin/python_wkspce/2d_bispectrum_cnn/models/MoNuSeg/" "BispectUnet__rotation_True__nh_8__n_train_-1__psize_60x60__20211202-173734" "/weights/") # model_path = Path( # "/home/valentin/python_wkspce/2d_bispectrum_cnn/models/MoNuSeg/" # "Unet__rotation_True__nh_0__n_train_-1__psize_60x60__20211202-132039" # "/weights/") # - cropper = 
tf.keras.layers.Cropping2D(cropping=(20, 20)) scores = pd.DataFrame() for split in tqdm(range(10)): model.load_weights(model_path / f"split_{split}/final") ds_test = get_dataset(id_list=indices_list[split]["test"], instance=True).batch(1) scores = scores.append( { "split": split, **eval(ds=ds_test, model=model, cropper=cropper), }, ignore_index=True, ) scores scores = pd.read_csv("scores_nh8_test.csv") scores.mean() scores.std() scores.to_csv("scores_nh8_test.csv") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="hS9yMrWe02WQ" colab_type="text" # ## Segmentation model Training and Inference # # In this notebook we will use axelerate Keras-based framework for AI on the edge to quickly setup model training and then after training session is completed convert it to .tflite and .kmodel formats. # # First, let's take care of some administrative details. # # 1) Before we do anything, make sure you have choosen GPU as Runtime type (in Runtime - > Change Runtime type). # # 2) We need to mount Google Drive for saving our model checkpoints and final converted model(s). Press on Mount Google Drive button in Files tab on your left. # # In the next cell we clone axelerate Github repository and import it. # # **It is possible to use pip install or python setup.py install, but in that case you will need to restart the enironment.** Since I'm trying to make the process as streamlined as possibile I'm using sys.path.append for import. # + id="y07yAbYbjV2s" colab_type="code" colab={} # %tensorflow_version 1.x # !git clone https://github.com/AIWintermuteAI/aXeleRate.git import sys sys.path.append('/content/aXeleRate') from axelerate import setup_training,setup_inference # + [markdown] id="5TBRMPZ83dRL" colab_type="text" # At this step you typically need to get the dataset. You can use !wget command to download it from somewhere on the Internet or !cp to copy from My Drive as in this example # ``` # # # !cp -r /content/drive/'My Drive'/pascal_20_segmentation.zip . # # # !unzip --qq pascal_20_segmentation.zip # ``` # For this notebook small test dataset is already in axelerate/sample_datasets folder, so no need to download anything. # # For semantic segmentation the dataset consists of RGB images and segmentation masks. # A few things to keep in mind: # # - The filenames of the annotation images should be same as the filenames of the RGB images. # # - The dimensions of the annotation image for the corresponding RGB image should be same. # # - For each pixel in the RGB image, the class label of that pixel in the annotation image would be the value of the annotation image pixel. # # Let's visualize our semantic segmentation test dataset and see what that means in practice. # # + id="_tpsgkGj7d79" colab_type="code" colab={} # %matplotlib inline from axelerate.networks.segnet.data_utils.visualize_dataset import visualize_segmentation_dataset visualize_segmentation_dataset('aXeleRate/sample_datasets/segmentation/imgs_validation', 'aXeleRate/sample_datasets/segmentation/anns_validation', n_classes=20) # + [markdown] id="S1oqdtbr7VLB" colab_type="text" # Next step is defining a config dictionary. Most lines are self-explanatory. 
# # Type is model frontend - Classifier, Detector or Segnet # # Architecture is model backend (feature extractor) # # - Full Yolo # - Tiny Yolo # - MobileNet1_0 # - MobileNet7_5 # - MobileNet5_0 # - MobileNet2_5 # - SqueezeNet # - VGG16 # - ResNet50 # # + id="Jw4q6_MsegD2" colab_type="code" colab={} config = { "model" : { "type": "SegNet", "architecture": "MobileNet7_5", "input_size": 224, "n_classes" : 20 }, "weights" : { "full": "", "backend": "imagenet" }, "train" : { "actual_epoch": 5, "train_image_folder": "aXeleRate/sample_datasets/segmentation/imgs", "train_annot_folder": "aXeleRate/sample_datasets/segmentation/anns", "train_times": 4, "valid_image_folder": "aXeleRate/sample_datasets/segmentation/imgs_validation", "valid_annot_folder": "aXeleRate/sample_datasets/segmentation/anns_validation", "valid_times": 4, "valid_metric": "val_loss", "batch_size": 8, "learning_rate": 1e-4, "saved_folder": "segment", "first_trainable_layer": "", "ignore_zero_class": False, "augumentation": True }, "converter" : { "type": ["k210","tflite"] } } # + [markdown] id="kobC_7gd5mEu" colab_type="text" # Let's check what GPU we have been assigned in this Colab session, if any. # + id="rESho_T70BWq" colab_type="code" colab={} from tensorflow.python.client import device_lib device_lib.list_local_devices() # + [markdown] id="cWyKjw-b5_yp" colab_type="text" # Finally we start the training by passing config dictionary we have defined earlier to setup_training function. The function will start the training with Checkpoint, Reduce Learning Rate on Plateu and Early Stopping callbacks. After the training has stopped, it will convert the best model into the format you have specified in config and save it to the project folder. # + id="deYD3cwukHsj" colab_type="code" colab={} model_path = setup_training(config_dict=config) # + [markdown] id="ypTe3GZI619O" colab_type="text" # After training it is good to check the actual perfomance of your model by doing inference on your validation dataset and visualizing results. This is exactly what next block does. Obviously since our model has only trained on a few images the reults are far from stellar, but if you have a good dataset, you'll have better results. # + id="jE7pTYmZN7Pi" colab_type="code" colab={} from keras import backend as K K.clear_session() setup_inference(config, model_path) # + [markdown] id="5YuVe2VD11cd" colab_type="text" # Good luck and happy training! Have a look at these articles, that would allow you to get the most of Google Colab or connect to local runtime if there are no GPUs available; # # https://medium.com/@oribarel/getting-the-most-out-of-your-google-colab-2b0585f82403 # # https://research.google.com/colaboratory/local-runtimes.html # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="l8Rm5122dy2R" # ## **Analytic Antipodal Grasps** # + [markdown] id="VsbCH_XUJDCN" # ## Notebook Setup # The following cell will install Drake, checkout the manipulation repository, and set up the path (only if necessary). # - On Google's Colaboratory, this **will take approximately two minutes** on the first time it runs (to provision the machine), but should only need to reinstall once every 12 hours. # # More details are available [here](http://manipulation.mit.edu/drake.html). 
# + id="opM0QN-CPwry" import importlib import os, sys from urllib.request import urlretrieve if 'google.colab' in sys.modules and importlib.util.find_spec('manipulation') is None: urlretrieve(f"http://manipulation.csail.mit.edu/scripts/setup/setup_manipulation_colab.py", "setup_manipulation_colab.py") from setup_manipulation_colab import setup_manipulation setup_manipulation(manipulation_sha='c1bdae733682f8a390f848bc6cb0dbbf9ea98602', drake_version='0.25.0', drake_build='releases') from manipulation import running_as_notebook import numpy as np from pydrake.all import( Variable, sin, cos, Evaluate, Jacobian, atan, MathematicalProgram, Solve, eq ) import matplotlib.pyplot as plt, mpld3 if running_as_notebook: mpld3.enable_notebook() # + [markdown] id="_4KTk534ex9e" # ## Introduction to Symbolic Differentiation # # For this assignment, you will need [symbolic differentiation](https://en.wikipedia.org/wiki/Computer_algebra), supported by Drake's symbolic library. We will demonstrate how to use it with a simple function: # $$T=\cos^2(x) + y^5$$ # # and it's Jacobian (first-order derivative), # $$J = \begin{pmatrix} \frac{\partial T}{\partial x} & \frac{\partial T}{\partial y} \end{pmatrix}=\begin{pmatrix} -2\cos(x)\sin(x) & 5y^4 \end{pmatrix}$$ # # as well as the Hessian (second-order derivative), # $$H = \begin{pmatrix} \frac{\partial^2 T}{\partial x^2} & \frac{\partial^2 T}{\partial x \partial y} \\ \frac{\partial^2 T}{\partial y \partial x} & \frac{\partial^2 T}{\partial y^2} \end{pmatrix}=\begin{pmatrix} 2 \sin^2(x) - 2\cos^2(x) & 0 \\ 0 & 20y^3 \end{pmatrix}$$ # # Below are some snippets of how to define symbolic variables, differentiate expressions, and evaluate them using numerical values. # + id="FbI9kuNKe9fb" # 1. Symbolic variables are defined x = Variable('x') y = Variable('y') # 2. Expressions can be written by composing operations on Variables. T = cos(x) ** 2.0 + y ** 5.0 print(T) # 3. Use Evaluate to query the numerical value of the expression given the variable values. # Use function for multi-dimensional quantities print(Evaluate(np.array([T]), {x: 3.0, y:5.0})) # Use method for scalar quantities print(T.Evaluate({x: 3.0, y:5.0})) # 4. Differentiate a quantity using Jacobian, or Differentiate. J = np.array([T.Differentiate(x), T.Differentiate(y)]) print(J) # Use method for scalar quantities J = T.Jacobian([x, y]) print(J) print(Evaluate(J, {x: 3.0, y:5.0})) # Use function for taking Jacobian of multi-dimensional quantities. H = Jacobian(J, [x, y]) print(H) print(Evaluate(H, {x: 3.0, y: 5.0})) # + [markdown] id="A7Xm7hVkhbF0" # Are the symbolic values of the Jacobian and Hessian what you expect? # + [markdown] id="KncJ9HEed94E" # ## The Cycloidal Gear # # Now we enter the main part of the problem. # # After graduating from MIT, you decide to work at a company producing cutting-edge [hypercycloidal gears](https://youtu.be/MBWkibie_5I?t=74). You are in charge of designing a robotic pick-and-place system for these parts. In order to reliably grasp the gears, you decide to use your knowledge of antipodal points. # # The mechanical design department gave you a pretty ugly parametric equation for what the shape looks like, which we won't even bother writing in latex! Instead, we provided it via the function `shape`. # # Given a angle in polar coordinates (parameter $t$), it returns $p(t)=[x(t),y(t)]$, a position in 2D. # # The below cell implements the function and shows you what the gear part looks like. 
# + id="K9tnarjr8FnB" def shape(t): x = (10*cos(t))-(1.5*cos(t+atan(sin(-9*t)/((4/3)-cos(-9*t)))))-(0.75*cos(10*t)) y = (-10*sin(t))+(1.5*sin(t+atan(sin(-9*t)/((4/3)-cos(-9*t)))))+(0.75*sin(10*t)) return np.array([x, y]) def plot_gear(): theta = np.linspace(0, 2*np.pi, 500) gear_shape = [] for i in range(500): gear_shape.append(Evaluate(shape(theta[i])).squeeze()) gear_shape = np.array(gear_shape) plt.axis("equal") plt.plot(gear_shape[:,0], gear_shape[:,1], 'k-') plot_gear() # + [markdown] id="m-W34GnXCG_N" # ## Grasp Energy Function # # How can we analytically find a pair of antipodal points given the parametric equation of a shape? We make the following claim: # # **Claim**: Let $p(t_1)$ and $p(t_2)$ be a pair of antipodal points given in parametric space. Then $t_1$ and $t_2$ are critical points of the following energy function: # $$E=\frac{1}{2}\kappa\|p(t_1)-p(t_2)\|^2$$ # # that is, they satisfy $\frac{\partial E}{\partial \mathbf{t}}=[0, 0]$ where $\mathbf{t}=[t_1,t_2]$. # # For the subsequent problems, you may assume $\kappa=2$. # # **Problem 5.1.a** [2pts]: Prove the claim. \\ # **Problem 5.1.b** [2pts]: Prove that the converse may not necessarily hold. # # HINT: The derivative of $p(t)$ respect to $t$ gives the tangent 'velocity' vector: $v(t)=p'(t)$ # # Write down your answer in a paper / pdf file, and submit to the Gradescope written submission section! # + [markdown] id="iNo0LiKyq0CO" # ## Implementation # # **Problem 5.1.c** [4pts] # Using this knowledge, we will write a Mathematical Program to find the antipodal points. Since we are looking for $t_1$ and $t_2$ such that the Jacobians is a zero vector, we are solving a root finding problem. Problems of this nature can still be transcribed as an instance of a Mathematical program; it simply doesn't have a cost. # # We will write down our problem as follows: # # \begin{aligned} \text{find} \quad & \mathbf{t} \\ # \text{s.t.} \quad & \frac{\partial E}{\partial \mathbf{t}}(\mathbf{t}) = \mathbf{0} \\ # \quad & 0 \leq \mathbf{t} \leq 2\pi \\ # \quad & t_1 - t_2 \geq \varepsilon # \end{aligned} # # The first constraint makes sure that they are critical points of the energy function, while the last two makes sure the points are not overlapping. You will write the following outer loop to check for the validity of solutions. # # 1. Pick a random guess for $\mathbf{t}$ using [SetInitialGuess](https://drake.mit.edu/pydrake/pydrake.solvers.mathematicalprogram.html?highlight=setinitialguess#pydrake.solvers.mathematicalprogram.MathematicalProgram.SetInitialGuess) by uniform sampling over $[0, 2\pi]$ (use `np.random.rand(2)`). # 2. Using `MathematicalProgram`, solve the above problem. Remember there is no cost in this problem, so we simply only add the constraints. # 3. If the solution is not valid (i.e. problem doesn't return success), repeat 1-2 with random guesses until a valid solution is found. # 4. If a valid solution $\mathbf{t}^*$ is found, return the Eigenvalues of the Hessian of $E$ at $\mathbf{t}^*$. (Use `np.linalg.eigvals`) # + id="WYPS22sY80WD" def find_antipodal_pts(shape): """ Finds antipodal points given the parametric function that describes the shape of the object. Args: - shape: function from parametric space t to position R2. Returns: - result: 2-dim np array that contains antipodal grasp locations parametrized by [t1, t2] - H_eig: 2-dim np array that contains eigenvalues of the Hessian. """ eps = 1e-3 # do not modify, but use it for epsilon variable above. 
## Fill your code here result = np.array([0., 0.]) # modify here H_eig = np.array([0., 0.]) # modify here return result, H_eig # + [markdown] id="LuOUVR5Vys3p" # You can run the cell below to check the correctnes of your implementation. As the constraint is nonlinear, it might take some time to compute. (Typically, the solve time will still be less than 2~3 seconds). # + id="Hzm6QrdGT88d" def plot_antipodal_pts(pts, shape): antipodal_pts = [] for i in range(2): val = Evaluate(shape(pts[i])).squeeze() antipodal_pts.append(val) antipodal_pts = np.array(antipodal_pts) plt.scatter(antipodal_pts[:,0], antipodal_pts[:,1], color='red') plot_gear() result, H_eig = find_antipodal_pts(shape) plot_antipodal_pts(result, shape) print(H_eig) # + [markdown] id="tvstL9DDHIk3" # ## Hessian Analysis # # Why did we implement the Hessian? You may remember that if the Hessian is used for the second-derivative test. For a function $f(x)$ with a critical point $x^*$, this critical point is: # - A local minima if the Hessian is positive-definite (i.e. all positive eigenvalues) # - A local maxima if the Hessian is negative-definite (i.e. all negative eigenvalues) # - A saddle point if the Hessian has mixed positive / negative eigenvalues. # # **Problem 5.1.d** [2pts] Describe what grasps the local minima, maxima, and saddle points correspond to in terms of the geometry of the object. In a very simple sentence, explain why you might prefer one configuration over another. # # HINT: The cell below will visualize each of the cases. # + id="MF16oJya8-WM" if (running_as_notebook): plt.subplot(1,3,1) plot_gear() plt.title("Local Minima") np.random.seed(45) while True: result, H_eig = find_antipodal_pts(shape) if ((H_eig > 0).all()): break plot_antipodal_pts(result, shape) plt.subplot(1,3,2) plot_gear() plt.title("Local Maxima") np.random.seed(4) while True: result, H_eig = find_antipodal_pts(shape) if ((H_eig < 0).all()): break plot_antipodal_pts(result, shape) plt.subplot(1,3,3) plot_gear() plt.title("Saddle Point") np.random.seed(13) while True: result, H_eig = find_antipodal_pts(shape) if ((H_eig[0] > 0) and (H_eig[1] < 0)): break plot_antipodal_pts(result, shape) # + [markdown] id="MwE8yNg58VQN" # ## How will this notebook be Graded? # # If you are enrolled in the class, this notebook will be graded using [Gradescope](www.gradescope.com). You should have gotten the enrollement code on our announcement in Piazza. # # For submission of this assignment, you must do two things. # - Download and submit the notebook `analytic_antipodal_grasps.ipynb` to Gradescope's notebook submission section, along with your notebook for the other problems. # - Write down your answers to 5.1.a, 5.1.b, and 5.1.d to a separately pdf file and submit it to Gradescope's written submission section. # # We will evaluate the local functions in the notebook to see if the function behaves as we have expected. For this exercise, the rubric is as follows: # - [2 pts] 5.1.a is answered correctly. # - [2 pts] 5.1.b is answered correctly. # - [4 pts] `find_antipodal_points` must be implemented correctly. # - [2 pts] 5.1.d is answered correctly. 
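# Before handing things off to the autograder below, here is a small helper written purely
# as an illustration of the second-derivative test described in the Hessian Analysis
# section above. It is not part of the assignment solution, and the name
# `classify_critical_point` is ours.

# +
import numpy as np

def classify_critical_point(H_eig):
    # Positive-definite Hessian -> local minimum, negative-definite -> local maximum,
    # mixed-sign eigenvalues -> saddle point.
    H_eig = np.asarray(H_eig)
    if np.all(H_eig > 0):
        return "local minimum"
    if np.all(H_eig < 0):
        return "local maximum"
    return "saddle point"

# Example: classify_critical_point([2.0, -1.5]) returns "saddle point".
# -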
# + id="xj5nAh4g8VQO" from manipulation.exercises.clutter.test_analytic_grasp import TestAnalyticGrasp from manipulation.exercises.grader import Grader Grader.grade_output([TestAnalyticGrasp], [locals()], 'results.json') Grader.print_test_results('results.json') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: TensorFlow-2.1.0 # language: python # name: tensorflow-2.1.0 # --- # + # 导入相关包 # 神经网络构造 import tensorflow as tf from tensorflow.keras import layers from tensorflow import keras # 画图 import matplotlib.pyplot as plt # 数据计算 import numpy as np # 鸢尾花数据集 from sklearn import datasets # - iris = datasets.load_iris() # 从datasets中载入鸢尾花分类数据 y_train = iris.target # 鸢尾花分类的目标数据 x_train = iris.data # 鸢尾花分类的特征数据 # + print(y_train.shape) # 查看目标数据的维度 print(x_train.shape) # 查看训练数据的维度 print(y_train[1]) # 查看样本数据形式 print(x_train[1]) # 查看样本数据形式 print(len(set(list(y_train)))) # 查看有多少类别 print(set(list(y_train))) # 查看有哪些类别 # + #动态分配显存 gpus = tf.config.experimental.list_physical_devices(device_type='GPU') for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) model = keras.Sequential( [ layers.Flatten(input_shape=[4]), # 展开层,将输入展开成shape为[4]的tensor layers.Dense(3, activation='softmax') # 全连接层,输出为3维,使用softmax作为激活函数;表示输入属于属于三类的概率 ]) # - # 模型训练方法配置 model.compile(optimizer='adam', # 使用Adam算法进行优化 loss='sparse_categorical_crossentropy', # 由于是多分类问题,使用交叉熵损失函数 metrics=['accuracy']) # 配置准确率作为测量指标 model.summary() # 查看模型的细节,展示每一层的详细信息 # + # 输入参数: # x_train:输入特征数据集 # y_train:输入的目标数据集 # batch_size:每次更新参数使用的样本数量大小 # epochs:对全量样本的训练轮数 # validation_split:在训练过程中使用的验证样本占所有数据集(x_train,y_train)的比例, # 设置验证数据可帮助直观查看模型是欠拟合过拟合、 # 输出: # 模型配置model.compile 中metrics包含的评估指标在每次训练的值, 此处metrics=['accuracy'] # 由于设置了validation_split,因此也包含评估指标在验证数据上的值 history = model.fit(x_train, y_train, batch_size=20, epochs=500, validation_split=0.1) # - # 模型训练结果打印 plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.legend(['training_acc', 'valivation_acc'], loc='upper left') plt.show() # 模型训练结果打印 plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.legend(['training_loss', 'valivation_loss'], loc='upper left') plt.show # 模型预测 result = model.predict([[5.1,3.3,1.7,0.5]]) # 使用构造数据测试 print(result) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # RGI02 (Western Canada and USA) # # import pandas as pd import geopandas as gpd import subprocess import matplotlib.pyplot as plt import matplotlib.patches as mpatches import seaborn as sns import numpy as np from utils import mkdir, submission_summary, needs_size_filter, size_filter, plot_map, plot_date_hist import os # ## Files and storage paths # + # Region of interest reg = 2 # go down from rgi7_scripts/workflow data_dir = '../../rgi7_data/' # Level 2 GLIMS files l2_dir = os.path.join(data_dir, 'l2_sel_reg_tars') # Output directories output_dir = mkdir(os.path.join(data_dir, 'l3_rgi7a')) output_dir_tar = mkdir(os.path.join(data_dir, 'l3_rgi7a_tar')) # RGI v6 file for comparison later rgi6_reg_file = os.path.join(data_dir, 'l0_RGIv6', '02_rgi60_WesternCanadaUS.zip') # - # Specific to this region: boxes where data has to be selected differently support_dir = os.path.join(data_dir, 'l0_support_data') # ### Load the input data # Read L2 files shp = 
gpd.read_file('tar://' + l2_dir + f'/RGI{reg:02d}.tar.gz/RGI{reg:02d}/RGI{reg:02d}.shp') # ### List of submissions sdf, df_classes = submission_summary(shp) sdf # Notes based on inidivual submission evaluations: # - 635 is for all glaciers above 60°N (was used in RGI6) # - 624 is a lonely glacier on the close to Region 01 border, it was misplaced in RGI6 and is already available in 623! # - 623 is for the rest of the glaciers in Canada not covered by 635. The version in GLIMS has several issues ([GH issue](https://github.com/GLIMS-RGI/glims_issue_tracker/issues/8)) # - 619: not clear what this is. the 5 outlines are already available in 614 # - 618: an intermediate inventory for the colorado range # - 617: a further intermediate inventory for the colorado range # - 616: used by RGI for Colorado to replace 614 in this region (make a shape to select them) # - 744: all the rest of USA # - 721, 722: update of two outlines by Bruce which we need to use # + # # Optional: write out selection in intermediate shape files for manual GIS review # tmp_output_dir = mkdir(os.path.join(data_dir, 'l0_tmp_data', f'rgi{reg:02d}_inventories')) # tmp_output_dir_tar = mkdir(os.path.join(data_dir, 'l0_tmp_data')) # for subid in shp.subm_id.unique(): # s_loc = shp.loc[shp.subm_id == subid] # s_loc.to_file(tmp_output_dir + f'/subm_{int(subid):03d}.shp') # print('Taring...') # print(subprocess.run(['tar', '-zcvf', f'{tmp_output_dir_tar}/rgi{reg:02d}_inventories.tar.gz', '-C', # os.path.join(data_dir, 'l0_tmp_data'), f'rgi{reg:02d}_inventories'])) # - # Remove the useless inventories now: shp = shp.loc[shp['subm_id'].isin([744, 616, 623, 635, 721, 722])].copy() # ### Read in the geometry data for sub-inventory selection # Read L2 files shp_loc = gpd.read_file('tar://' + support_dir + f'/sub_inventory_sel_RGI02.tar.gz/sub_inventory_sel_RGI02.shp') shp_loc.plot(edgecolor='k'); shp_loc # + # Test the polygons I drew - each subregion should be equivalent as the sel by id for sub_id in [635, 623, 616]: sel = shp.loc[shp['subm_id'] == sub_id].copy() rp = sel.representative_point().to_frame('geometry') rp['orig_index'] = sel.index intersect = gpd.overlay(rp, shp_loc.loc[shp_loc['subm_id'] == sub_id], how='intersection') odf = sel.loc[intersect['orig_index']] assert len(sel) == len(odf) # Also even without preselection rp = shp.representative_point().to_frame('geometry') rp['orig_index'] = shp.index for sub_id in [635, 623]: sel = shp.loc[shp['subm_id'] == sub_id].copy() intersect = gpd.overlay(rp, shp_loc.loc[shp_loc['subm_id'] == sub_id], how='intersection') odf = shp.loc[intersect['orig_index']] delta = 0 if sub_id == 623: delta = 2 # Those two glaciers assert len(sel) + delta == len(odf) # for 614, 616 we mix and mingle but I trust what we have done below # - # ### Apply selection criteria to create the RGI7 data subset # for northern Canada we use 'subm_id' 635 by analyst '' RGI_ss_NCan = shp.loc[shp['subm_id'] == 635].copy() needs_size_filter(RGI_ss_NCan) # for southern Canada we use 'subm_id' 623 by analyst '' (with 721, 722 which are corrections) RGI_ss_SCan = shp.loc[shp['subm_id'].isin([623, 721, 722])].copy() print(len(RGI_ss_SCan)) RGI_ss_SCan = size_filter(RGI_ss_SCan) len(RGI_ss_SCan) # + # For CONUS we use 'subm_id' 744 by analyst '.' 
except for colorado RGI_ss_CONUS = shp.loc[shp['subm_id'] == 744].copy() # Remove colorado print(len(RGI_ss_CONUS)) rp = RGI_ss_CONUS.representative_point().to_frame('geometry') rp['orig_index'] = RGI_ss_CONUS.index intersect = gpd.overlay(rp, shp_loc.loc[shp_loc['subm_id'] == 744], how='intersection') RGI_ss_CONUS = RGI_ss_CONUS.loc[intersect['orig_index'].values].copy() print(len(RGI_ss_CONUS)) RGI_ss_CONUS = size_filter(RGI_ss_CONUS) len(RGI_ss_CONUS) # - # For Colorado we use 'subm_id' 616 by analyst '.' RGI_ss_Colo = shp.loc[shp['subm_id'] == 616].copy() print(len(RGI_ss_Colo)) RGI_ss_Colo = size_filter(RGI_ss_Colo) len(RGI_ss_Colo) # combine the geodataframes rgi7 = pd.concat([RGI_ss_NCan, RGI_ss_SCan, RGI_ss_CONUS, RGI_ss_Colo]) # ### Some sanity checks sdf, df_class = submission_summary(rgi7) df_class # Nothing should change here rgi7['is_rgi6'] = True # Check the orphaned rock outcrops orphan_f = os.path.join(data_dir, 'l1_orphan_interiors', f'RGI{reg:02d}', f'RGI{reg:02d}.shp') if os.path.exists(orphan_f): orphan_f = gpd.read_file(orphan_f) check = np.isin(rgi7.subm_id.unique(), orphan_f.subm_id.unique()) if np.any(check): print(f'Orphan rock outcrops detected in subm_id {rgi7.subm_id.unique()[check]}') orphan_f['area'] = orphan_f.to_crs({'proj':'cea'}).area orphan_f = orphan_f.loc[orphan_f.subm_id == 623] orphan_f['area'].sum() * 1e-6 # Ok, more details in the checks below. # ### Plots plot_map(rgi7, reg) plot_map(rgi7, reg, is_rgi6=True) plot_date_hist(rgi7, reg=reg, figsize=(20, 5)) plot_date_hist(RGI_ss_CONUS, title='744 - CONUS Fountain', figsize=(20, 5), savefig=False) # ### Text for github fgh = sdf.T fgh print(fgh.to_markdown(headers=np.append(['subm_id'], fgh.columns))) # ## Write out and tar # + dd = mkdir(f'{output_dir}/RGI{reg:02d}/', reset=True) print('Writing...') rgi7.to_file(dd + f'RGI{reg:02d}.shp') print('Taring...') print(subprocess.run(['tar', '-zcvf', f'{output_dir_tar}/RGI{reg:02d}.tar.gz', '-C', output_dir, f'RGI{reg:02d}'])) # - # ## New RGI-file created - Check result! # ### load reference data (here RGI6) to enable comparison # load reference data from utils import open_zip_shapefile ref_odf = open_zip_shapefile(rgi6_reg_file) # ## Compare new RGI7-file to RGI6 # ### Number of elements (differences depict problems) print('Number of glaciers in new RGI:', len(rgi7)) print('Number of glaciers in RGI6:', len(ref_odf)) print('Difference:', len(rgi7)-len(ref_odf)) # ### How many nominal glaciers were there in RGI06? 
len(ref_odf.loc[ref_odf.Status == 2]) # ### Total area # add an area field to RGI_ss and reference data rgi7['area'] = rgi7.to_crs({'proj':'cea'}).area ref_odf['area'] = ref_odf.to_crs({'proj':'cea'}).area # print and compare area values Area_RGI = rgi7['area'].sum() * 1e-6 print('Area RGI7 [km²]:', Area_RGI) Area_ref = ref_odf['area'].sum() * 1e-6 print('Area RGI6 [km²]:', Area_ref) d = (Area_RGI - Area_ref) print('Area difference [km²]:', d) # ### Northern Canada (635, Berthier, no problem there): rp = ref_odf.representative_point().to_frame('geometry') rp['orig_index'] = ref_odf.index intersect = gpd.overlay(rp, shp_loc.loc[shp_loc['subm_id'] == 635], how='intersection') ref_odf_NCan = ref_odf.loc[intersect['orig_index']].copy() print('Number of glaciers in RGI7 subset:', len(RGI_ss_NCan)) print('Number of glaciers in reference data (RGI6):', len(ref_odf_NCan)) print('Difference:', len(RGI_ss_NCan)-len(ref_odf_NCan)) # print and compare area values Area_7 = RGI_ss_NCan['area'].sum() * 1e-6 print('Area RGI7 [km²]:', Area_7) Area_6 = ref_odf_NCan['area'].sum() * 1e-6 print('Area RGI6 [km²]:', Area_6) d = (Area_7 - Area_6) print('Area difference [km²]:', d) # This is brilliant! No issue there. # ### Southern Canada (623, Bolch, some problems): rp = ref_odf.representative_point().to_frame('geometry') rp['orig_index'] = ref_odf.index intersect = gpd.overlay(rp, shp_loc.loc[shp_loc['subm_id'] == 623], how='intersection') ref_odf_SCan = ref_odf.loc[intersect['orig_index']].copy() print('Number of glaciers in RGI7 subset:', len(RGI_ss_SCan)) print('Number of glaciers in reference data (RGI6):', len(ref_odf_SCan)) print('Difference:', len(RGI_ss_SCan)-len(ref_odf_SCan)) # print and compare area values Area_7 = RGI_ss_SCan['area'].sum() * 1e-6 print('Area RGI7 [km²]:', Area_7) Area_6 = ref_odf_SCan['area'].sum() * 1e-6 print('Area RGI6 [km²]:', Area_6) d = (Area_7 - Area_6) print('Area difference [km²]:', d) # We have one more glacier in GLIMS (this is expected from the glacier that was on the wrong side of the region border in RGI6) RGI_ss_SCan.loc[RGI_ss_SCan.anlys_id == 380747]['area'].sum() * 1e-6 # Arg, we still have 6 km2 more in GLIMS than RGI6. Quick check on GIS reveals that some polygons in polygons are in GLIMS but not RGI, and some rock outcrops are in RGI but not GLIMS (see [example](https://github.com/GLIMS-RGI/glims_issue_tracker/issues/8#issuecomment-981134080) in GH issue). We'll ignore this for now. # # Also, orphaned rock outcrops: # + # for i in range(len(orphan_f)): # f, ax = plt.subplots(figsize=(2, 2)) # orphan_f.iloc[[i]].plot(ax=ax); # - # ### CONUS (744, Fountain, OK): rp = ref_odf.representative_point().to_frame('geometry') rp['orig_index'] = ref_odf.index intersect = gpd.overlay(rp, shp_loc.loc[shp_loc['subm_id'] == 744], how='intersection') ref_odf_CONUS = ref_odf.loc[intersect['orig_index']].copy() print('Number of glaciers in RGI7 subset:', len(RGI_ss_CONUS)) print('Number of glaciers in reference data (RGI6):', len(ref_odf_CONUS)) print('Difference:', len(RGI_ss_CONUS)-len(ref_odf_CONUS)) # print and compare area values Area_7 = RGI_ss_CONUS['area'].sum() * 1e-6 print('Area RGI7 [km²]:', Area_7) Area_6 = ref_odf_CONUS['area'].sum() * 1e-6 print('Area RGI6 [km²]:', Area_6) d = (Area_7 - Area_6) print('Area difference [km²]:', d) # I don't know about the N glacier difference (not a big deal), and the missing area is small enough! 
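# The count/area comparison above is repeated almost verbatim for each sub-inventory; a
# small helper along the following lines (illustrative only, not part of the workflow's
# `utils` module) would avoid the copy-paste. It assumes GeoDataFrames that already carry
# the 'area' column in m² computed earlier.

# +
def compare_subsets(rgi7_sub, ref_sub, label=''):
    """Print glacier count and total area (km²) differences between RGI7 and RGI6 subsets."""
    n7, n6 = len(rgi7_sub), len(ref_sub)
    a7 = rgi7_sub['area'].sum() * 1e-6
    a6 = ref_sub['area'].sum() * 1e-6
    print(f'{label} N RGI7: {n7}, N RGI6: {n6}, difference: {n7 - n6}')
    print(f'{label} Area RGI7 [km²]: {a7:.1f}, RGI6 [km²]: {a6:.1f}, difference: {a7 - a6:.1f}')
    return n7 - n6, a7 - a6

# e.g. compare_subsets(RGI_ss_SCan, ref_odf_SCan, label='Southern Canada:')
# -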
# ### Colorado (616, Fountain, OK): rp = ref_odf.representative_point().to_frame('geometry') rp['orig_index'] = ref_odf.index intersect = gpd.overlay(rp, shp_loc.loc[shp_loc['subm_id'] == 616], how='intersection') ref_odf_Colo = ref_odf.loc[intersect['orig_index']].copy() print('Number of glaciers in RGI7 subset:', len(RGI_ss_Colo)) print('Number of glaciers in reference data (RGI6):', len(ref_odf_Colo)) print('Difference:', len(RGI_ss_Colo)-len(ref_odf_Colo)) # print and compare area values Area_7 = RGI_ss_Colo['area'].sum() * 1e-6 print('Area RGI7 [km²]:', Area_7) Area_6 = ref_odf_Colo['area'].sum() * 1e-6 print('Area RGI6 [km²]:', Area_6) d = (Area_7 - Area_6) print('Area difference [km²]:', d) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #Exercise 3.9 # %matplotlib inline import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns import statsmodels.api as sm import statsmodels.formula.api as smf # ##3.9 a auto = pd.read_csv('data/Auto.csv', na_values=['?']) subset = auto.dropna() sns.pairplot(subset[:-1]) # pairplot doesn't like NaN's # ##3.9 b auto.corr() # ##3.9 c formula = 'mpg ~ ' + '+'.join(auto.columns.difference(['mpg', 'name'])) model = smf.ols(formula, data=auto) fit = model.fit() print(fit.summary()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="PxyEAFXRJbYJ" colab_type="code" colab={} import torch as pt pt.manual_seed(42) import torchvision import torchvision.transforms as transforms import numpy as np import matplotlib.pyplot as plt # %matplotlib inline device = pt.device("cuda:0" if pt.cuda.is_available() else "cpu") device # + id="JFatzLoiakGy" colab_type="code" colab={} transform = transforms.Compose([transforms.ToTensor(), transforms.Lambda(lambda x: x.to(device))]) train_ds = torchvision.datasets.FashionMNIST('./fmnist', download = True, train = True, transform = transform) train_dl = pt.utils.data.DataLoader(train_ds, batch_size=4) # shuffle=True, # num_workers=4) # + id="CCCOIM0Uk0ru" colab_type="code" colab={} test_ds = torchvision.datasets.FashionMNIST('./fmnist', download = True, train = False, transform = transform) test_dl = pt.utils.data.DataLoader(test_ds, batch_size=4) # shuffle=True, # num_workers=4) # + id="PrhNu-yzXF0g" colab_type="code" colab={} CLASSES = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] idx, (image, label) = next(enumerate(train_ds)) idx, image.shape, label # + id="eCUeFU6ervH8" colab_type="code" colab={} plt.imshow(image.cpu().squeeze().numpy(), cmap = 'binary') plt.xlabel(CLASSES[label]); # + id="9f6Cz_hxcCq3" colab_type="code" colab={} import numpy as np plt.figure(figsize=(10, 10)) for i in range(25): plt.subplot(5, 5, i+1) plt.xticks([]) plt.yticks([]) plt.grid('off') idx = np.random.randint(0, len(train_ds)) image, label = train_ds[idx] plt.imshow(image.cpu().squeeze().numpy(), cmap='binary') plt.xlabel(CLASSES[label]) # + id="aCvpsRnydLOx" colab_type="code" colab={} from torch import nn class Lambda(nn.Module): def __init__(self, fn): super(Lambda, self).__init__() self.fn = fn def forward(self, x): return self.fn(x) model = nn.Sequential( nn.Conv2d(1, 32, 3), # 28x28x32 -> 26x26x32 nn.ReLU(), 
nn.Conv2d(32, 64, 3), # 26x26x64 -> 24x24x64 nn.ReLU(), nn.MaxPool2d(2, 2), # 24x24x64 -> 12x12x64 nn.Dropout2d(), Lambda(lambda x: x.view(-1, 12 * 12 * 64)), nn.Linear(12 * 12 * 64, 128), nn.ReLU(), nn.Dropout2d(), nn.Linear(128, 10) ).to(device) # + id="1InBRaELdbxu" colab_type="code" colab={} # model = Net().to(device) def forward(X): return model(X) def loss(y_pred, y): return pt.nn.functional.cross_entropy(y_pred, y) optimizer = pt.optim.AdamW(model.parameters()) # + id="ykLChClbt6hs" colab_type="code" colab={} class ConfusionMatrix(): def __init__(self, classes): self.classes = classes self.side = len(classes) self.values = np.zeros( (self.side, self.side) ) self.total = int(0) def average_accuracy(self): return (self.values.diagonal() / self.values.sum(axis = 1).astype(float)).mean() def __call__(self, y_pred, y): for row, col in zip(y_pred, y): self.values[row, col] += 1 self.total += len(y_pred) return self def update(self, y_pred, y): return self.__call__(y_pred, y) def __repr__(self): msg = "" for i in range(self.side): msg += "{}: {:.2f}% ".format(self.classes[i], 100.0 * (self.values[i, i] / self.values[i,:].sum())) return msg TOY_CLASSES = ['Hot dog', 'Not hot dog'] confusion_matrix = ConfusionMatrix(TOY_CLASSES) confusion_matrix([0,1], [0,1]) # + id="A3g_h4KuuBxs" colab_type="code" colab={} plt.imshow(confusion_matrix.values, cmap = 'binary') plt.xticks(np.arange(len(TOY_CLASSES)), TOY_CLASSES, rotation = 90) plt.yticks(np.arange(len(TOY_CLASSES)), TOY_CLASSES); plt.colorbar(); confusion_matrix.average_accuracy() # + id="OUfgkTRDdhUZ" colab_type="code" colab={} training_cm = ConfusionMatrix(CLASSES) EPOCHS = 5 for epoch in range(EPOCHS): running_loss = 0.0 for batch_idx, (X_batch, y_batch) in enumerate(train_dl): y_pred_batch = forward(X_batch) xe = loss(y_pred_batch, y_batch.to(device)) training_cm.update(y_pred_batch.argmax(dim = 1).cpu().detach().numpy(), y_batch.cpu().detach().numpy()) if batch_idx % 1000 == 0: print("Loss: ", xe.data, " Metric: ", training_cm.average_accuracy()) xe.backward() optimizer.step() optimizer.zero_grad() print(training_cm) # + id="M1fW9-lmqifI" colab_type="code" colab={} plt.imshow(training_cm.values, cmap = 'binary') plt.xticks(np.arange(len(CLASSES)), CLASSES, rotation = 90) plt.yticks(np.arange(len(CLASSES)), CLASSES); plt.colorbar(); # + [markdown] id="-VcU6arT-z_B" colab_type="text" # Copyright 2020 CounterFactual.AI LLC. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
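# The FashionMNIST notebook above only reports a confusion matrix on the training set.
# A minimal held-out evaluation sketch (ours, not part of the original notebook) that
# reuses its `model`, `test_dl`, `CLASSES` and `ConfusionMatrix` definitions could look
# like this:

# +
import torch as pt

test_cm = ConfusionMatrix(CLASSES)
model.eval()                      # disable the dropout layers for evaluation
with pt.no_grad():                # no gradients needed at test time
    for X_batch, y_batch in test_dl:
        y_pred_batch = model(X_batch)
        test_cm.update(y_pred_batch.argmax(dim=1).cpu().numpy(),
                       y_batch.cpu().numpy())
print("Test average accuracy:", test_cm.average_accuracy())
print(test_cm)
# -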
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # default_exp defaults # - # # defaults # + # export cluster_params = { 'n_cluster':3, 'eps': 0.3 } # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # # SETI CNN using TF and Binary DS # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} import requests import json #import ibmseti import numpy as np import matplotlib.pyplot as plt # %matplotlib inline import tensorflow as tf import pickle import time # #!sudo pip install sklearn import os from sklearn.metrics import confusion_matrix from sklearn import metrics # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # ### Import dataset reader # The following cell will load a python code to read the SETI dataset. # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # !wget --output-document SETI.zip https://ibm.box.com/shared/static/jhqdhcblhua5dx2t7ixwm88okitjrl6l.zip # !unzip -o SETI.zip import SETI # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # ### Download data # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} ds_directory = 'SETI/SETI_ds_64x128/' #ds_name = 'SETI/SETI64x128.tar.gz' # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} print os.popen("ls -lrt "+ ds_directory).read() # to verify # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # ### Load data SETI # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} #from tensorflow.examples.tutorials.mnist import input_data #dataset = input_data.read_data_sets("MNIST_data/", one_hot=True) dataset = SETI.read_data_sets(ds_directory, one_hot=True, validation_size=0) dataset.train.images.shape # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # ## Network Parameters # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # Parameters decay_rate=0.96 decay_steps=1000 learning_rate = 0.05 training_epochs = 200 batch_size = 50 display_step = 100 #check point directory chk_directory = 'SETI/save/' checkpoint_path = chk_directory+'model.ckpt' n_classes = 4 # number of possible classifications for the problem dropout = 0.50 # Dropout, probability to keep units height = 64 # height of the image in pixels width = 128 # width of the image in pixels n_input = width * height # number of pixels in one image # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # ### Inputs # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} x = tf.placeholder(tf.float32, shape=[None, n_input]) y_ = tf.placeholder(tf.float32, shape=[None, n_classes]) # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} x_image = tf.reshape(x, 
[-1,height,width,1]) x_image # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Convolutional Layer 1 # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1)) b_conv1 = tf.Variable(tf.constant(0.1, shape=[32])) # need 32 biases for 32 outputs convolve1 = tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1 h_conv1 = tf.nn.relu(convolve1) conv1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #max_pool_2x2 conv1 # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Convolutional Layer 2 # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} W_conv2 = tf.Variable(tf.truncated_normal([5, 5, 32, 64], stddev=0.1)) b_conv2 = tf.Variable(tf.constant(0.1, shape=[64])) #need 64 biases for 64 outputs convolve2= tf.nn.conv2d(conv1, W_conv2, strides=[1, 1, 1, 1], padding='SAME')+ b_conv2 h_conv2 = tf.nn.relu(convolve2) conv2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1], strides=[1, 4, 4, 1], padding='SAME') #max_pool_2x2 conv2 # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Convolutional Layer 3 # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} active="" # W_conv3 = tf.Variable(tf.truncated_normal([5, 5, 64, 128], stddev=0.1)) # b_conv3 = tf.Variable(tf.constant(0.1, shape=[128])) #need 64 biases for 64 outputs # convolve3= tf.nn.conv2d(conv2, W_conv3, strides=[1, 1, 1, 1], padding='SAME')+ b_conv3 # h_conv3 = tf.nn.relu(convolve3) # conv3 = tf.nn.max_pool(h_conv3, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #max_pool_2x2 # conv3 # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Convolutional Layer 4 # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} active="" # W_conv4 = tf.Variable(tf.truncated_normal([5, 5, 128, 256], stddev=0.1)) # b_conv4 = tf.Variable(tf.constant(0.1, shape=[256])) #need 64 biases for 64 outputs # convolve4= tf.nn.conv2d(conv3, W_conv4, strides=[1, 1, 1, 1], padding='SAME')+ b_conv4 # h_conv4 = tf.nn.relu(convolve4) # conv4 = tf.nn.max_pool(h_conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') #max_pool_2x2 # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Fully Connected Layer 1 # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} input_layer = conv2 dim = input_layer.get_shape().as_list() dim # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} dims= dim[1]*dim[2]*dim[3] nodes1 = 1024 prv_layer_matrix = tf.reshape(input_layer, [-1, dims]) W_fc1 = tf.Variable(tf.truncated_normal([dims, nodes1], stddev=0.1)) b_fc1 = tf.Variable(tf.constant(0.1, shape=[nodes1])) # need 1024 biases for 1024 outputs h_fcl1 = tf.matmul(prv_layer_matrix, W_fc1) + b_fc1 fc_layer1 = tf.nn.relu(h_fcl1) # ??? 
fc_layer1 # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Droupout 1 # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} keep_prob = tf.placeholder(tf.float32) layer_drop1 = tf.nn.dropout(fc_layer1, keep_prob) # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Fully Connected Layer 2 # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} active="" # nodes2 = 256 # W_fc2 = tf.Variable(tf.truncated_normal([layer_drop1.get_shape().as_list()[1], nodes2], stddev=0.1)) # b_fc2 = tf.Variable(tf.constant(0.1, shape=[nodes2])) # h_fcl2 = tf.matmul(layer_drop1, W_fc2) + b_fc2 # fc_layer2 = tf.nn.relu(h_fcl2) # ??? # fc_layer2 # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Droupout 2 # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} active="" # layer_drop2 = tf.nn.dropout(fc_layer2, keep_prob) # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Readout Layer # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} W_fc = tf.Variable(tf.truncated_normal([nodes1, n_classes], stddev=0.1)) #1024 neurons b_fc = tf.Variable(tf.constant(0.1, shape=[n_classes])) # 10 possibilities for classes [0,1,2,3] fc = tf.matmul(layer_drop1, W_fc) + b_fc y_CNN= tf.nn.softmax(fc) # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Loss function # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_CNN, labels=y_)) # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Training # # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # Create a variable to track the global step. global_step = tf.Variable(0, trainable=False) # create learning_decay lr = tf.train.exponential_decay( learning_rate, global_step, decay_steps, decay_rate, staircase=True ) # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # Use the optimizer to apply the gradients that minimize the loss # (and also increment the global step counter) as a single training step. 
optimizer = tf.train.GradientDescentOptimizer(lr) train_op = optimizer.minimize(cross_entropy, global_step=global_step) #train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cross_entropy) # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #### Evaluation # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} correct_prediction = tf.equal(tf.argmax(y_CNN,1), tf.argmax(y_,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # ### Create checkpoint directory # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} directory = os.path.dirname(chk_directory) try: os.stat(directory) ckpt = tf.train.get_checkpoint_state(chk_directory) print ckpt except: os.mkdir(directory) # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # ## Training # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # Initializing the variables init = tf.global_variables_initializer() # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} loss_values = [] with tf.Session() as sess: X_test = dataset.test.images y_test = dataset.test.labels sess.run(init) saver = tf.train.Saver(tf.global_variables()) # load previously trained model if appilcable ckpt = tf.train.get_checkpoint_state(chk_directory) if ckpt: print "loading model: ",ckpt.model_checkpoint_path saver.restore(sess, ckpt.model_checkpoint_path) #step = 0 num_examples = dataset.train.num_examples # Training cycle for epoch in range(training_epochs): avg_loss = 0. avg_accuracy = 0. 
#dataset.shuffle_data() total_batch = int(num_examples / batch_size) # Loop over all batches for step in range(total_batch): start = time.time() x_batch, y_batch = dataset.train.next_batch(batch_size,shuffle=True) train_op.run(feed_dict={x: x_batch, y_: y_batch, keep_prob: dropout}) loss, acc = sess.run([cross_entropy, accuracy], feed_dict={x: x_batch,y_: y_batch,keep_prob: 1.}) end = time.time() avg_loss += loss / total_batch avg_accuracy += acc / total_batch if step % display_step == 1000: # Calculate batch loss and accuracy loss, acc = sess.run([cross_entropy, accuracy], feed_dict={x: x_batch,y_: y_batch,keep_prob: 1.}) #train_accuracy = accuracy.eval(feed_dict={x:x_batch, y_: y_batch, keep_prob: 0.5}) test_accuracy = sess.run(accuracy, feed_dict={x: X_test[0:100], y_: y_test[0:100], keep_prob: 1.}) print("Iter " + str(step) + \ ", Training time= " + "{:.5f}".format(end - start) + \ ", Minibatch Loss= " + "{:.6f}".format(loss) + \ ", Training Accuracy= " + "{:.5f}".format(acc) + \ ", Test Accuracy= " + "{:.5f}".format(test_accuracy) ) # save model every 1 epochs if epoch >= 0 and epoch % 1 == 0: # Save model #print ("model saved to {}".format(checkpoint_path)) #saver.save(sess, checkpoint_path, global_step = epoch) plr = sess.run(lr) loss_values.append(avg_loss) #print(sess.run(tf.train.global_step())) print "Epoch:", '%04d' % (epoch+1), "lr=", "{:.9f}".format(plr), "cost=", "{:.9f}".format(avg_loss) ,"Acc=", "{:.9f}".format(avg_accuracy) print("Optimization Finished!") print ("model saved to {}".format(checkpoint_path)) saver.save(sess, checkpoint_path, global_step = (epoch+1)*step) # Calculate accuracy for test images #print("Testing Accuracy:", sess.run(accuracy, feed_dict={x: X_test[0:30], y_: y_test[0:30], keep_prob: 1.})) # Find the labels of test set y_pred_lb = sess.run(tf.argmax(y_CNN,1), feed_dict={x: X_test[0:100], y_: y_test[0:100], keep_prob: 1.}) y_pred = sess.run(y_CNN, feed_dict={x: X_test[0:100], y_: y_test[0:100], keep_prob: 1.}) # lets save kernels kernels_l1 = sess.run(tf.reshape(tf.transpose(W_conv1, perm=[2, 3, 0, 1]),[32,-1])) kernels_l2 = sess.run(tf.reshape(tf.transpose(W_conv2, perm=[2, 3, 0, 1]),[32*64,-1])) # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # %matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.plot([np.mean(loss_values[i:i+5]) for i in range(len(loss_values))]) plt.show() # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # ## Evaluation # + [markdown] deletable=true editable=true # Accuracy is depend on the number of epoch that you set in partametrs part. 
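# + [markdown]
# Since the final accuracy depends on how long you train, it also helps to know how far the learning rate has decayed by a given step. With `staircase=True`, `tf.train.exponential_decay` multiplies the base rate by `decay_rate` once every `decay_steps` steps; a minimal NumPy sketch of that schedule, using the values from the Network Parameters cell, is shown below.

# +
# Staircase exponential decay: lr * decay_rate ** floor(step / decay_steps)
import numpy as np

base_lr, rate, steps_per_decay = 0.05, 0.96, 1000
for step in [0, 500, 1000, 5000, 20000]:
    decayed = base_lr * rate ** np.floor(float(step) / steps_per_decay)
    print("step %6d -> lr %.6f" % (step, decayed))
# -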
# + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} y_ = np.argmax(y_test[0:100],1) # ground truth print metrics.classification_report(y_true= y_, y_pred= y_pred_lb) print metrics.confusion_matrix(y_true= y_, y_pred= y_pred_lb) print("Classification accuracy: %0.6f" % metrics.accuracy_score(y_true= y_, y_pred= y_pred_lb) ) print("Log Loss: %0.6f" % metrics.log_loss(y_true= y_, y_pred= y_pred, labels=range(4)) ) # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # ### Viz # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # !wget --output-document utils1.py http://deeplearning.net/tutorial/code/utils.py import utils1 from utils1 import tile_raster_images # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} #from utils import tile_raster_images import matplotlib.pyplot as plt from PIL import Image # %matplotlib inline image = Image.fromarray(tile_raster_images(kernels_l1, img_shape=(5, 5) ,tile_shape=(4, 8), tile_spacing=(1, 1))) ### Plot image plt.rcParams['figure.figsize'] = (18.0, 18.0) imgplot = plt.imshow(image) imgplot.set_cmap('gray') # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} image = Image.fromarray(tile_raster_images(kernels_l2, img_shape=(5, 5) ,tile_shape=(4, 12), tile_spacing=(1, 1))) ### Plot image plt.rcParams['figure.figsize'] = (18.0, 18.0) imgplot = plt.imshow(image) imgplot.set_cmap('gray') # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} import numpy as np plt.rcParams['figure.figsize'] = (5.0, 5.0) sampleimage1 = X_test[3] plt.imshow(np.reshape(sampleimage1,[64,128]), cmap="gray") # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # Launch the graph with tf.Session() as sess: sess.run(init) saver = tf.train.Saver(tf.all_variables()) # load previously trained model if appilcable ckpt = tf.train.get_checkpoint_state(chk_directory) if ckpt: print "loading model: ",ckpt.model_checkpoint_path saver.restore(sess, ckpt.model_checkpoint_path) ActivatedUnits1 = sess.run(convolve1,feed_dict={x:np.reshape(sampleimage1,[1,64*128],order='F'),keep_prob:1.0}) plt.figure(1, figsize=(20,20)) n_columns = 3 n_rows = 3 for i in range(9): plt.subplot(n_rows, n_columns, i+1) plt.title('Filter ' + str(i)) plt.imshow(ActivatedUnits1[0,:,:,i], interpolation="nearest", cmap="gray") # + [markdown] button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # #

    Authors:

    <NAME>, PhD, is a Senior Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients’ ability to turn data into actionable knowledge. He is a researcher in the data mining field and an expert in developing advanced analytic methods, such as machine learning and statistical modelling, on large datasets.

    #
    # + button=false deletable=true editable=true new_sheet=false run_control={"read_only": false} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier from sklearn.decomposition import PCA import matplotlib.pyplot as plt def abs_sum(y_true, y_prob): return sum(sum(abs(y_true - y_prob))) def convert_to_prob(y): y_prob = [] for label in y: row = [] for i in range(4): if i == label: row.append(1) else: row.append(0) y_prob.append(row) return np.array(y_prob) # + df = pd.read_csv('train.csv') df_list = [] for items in df.values: df_list.append([items[0]] + [float(i) for i in items[1].split(',')] + [items[2]]) df = pd.DataFrame(np.array(df_list)) df.columns = ['id'] + ['signal_'+str(i) for i in range(len(df_list[0])-2)] + ['label'] print(df.head()) print(df.describe()) # + label0 = df.loc[df['label'] == 0.0, ['signal_'+str(i) for i in range(205)]] label0_mean = np.array(label0.mean()) label1 = df.loc[df['label'] == 1.0, ['signal_'+str(i) for i in range(205)]] label1_mean = np.array(label1.mean()) label2 = df.loc[df['label'] == 2.0, ['signal_'+str(i) for i in range(205)]] label2_mean = np.array(label2.mean()) label3 = df.loc[df['label'] == 3.0, ['signal_'+str(i) for i in range(205)]] label3_mean = np.array(label3.mean()) time = list(range(205)) plt.plot(time, label0_mean ,c='blue', label='label0_line') plt.plot(time,label1_mean, c='red', label='label1_line') plt.plot(time, label2_mean ,c='green', label='label2_line') plt.plot(time, label3_mean ,c='yellow', label='label3_line') plt.xlabel("time") plt.ylabel("heartbeat_signal") plt.legend() plt.show() # + x = df.drop(['id', 'label'], axis=1).to_numpy() y = df['label'].to_numpy() pca = PCA(n_components=0.95) x = pca.fit_transform(x) from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier test_set_size = 0.1 X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=test_set_size, random_state=3) y_true_prob = convert_to_prob(y_test) knn = KNeighborsClassifier(n_neighbors=1, weights='distance').fit(X_train, y_train) y_pred_prob = knn.predict_proba(X_test) print(y_pred_prob) # - print(y_true_prob) loss = abs_sum(y_pred_prob, y_true_prob) print(loss) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] _uuid="866e7eb37a03e1a48e71ba4e4c8f4baca43472e8" # #
    The Data Scientist’s Toolbox Tutorial
    # ### Quite Practical and Far from any Theoretical Concepts
    # last update: 11/13/2018
    # + [markdown] _uuid="e02d495da0fb0ad24e0341e91848f4c4cfc35bdb" # --------------------------------------------------------------------- # Fork, Run and Follow this kernel on GitHub: # > ###### [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist) # # # ------------------------------------------------------------------------------------------------------------- # **I hope you find this kernel helpful and some UPVOTES would be very much appreciated** # # ----------- # # + [markdown] _uuid="85b27cf82d3023fd69c338df2be7afb2d7afaf32" #
    # **Notebook Content** # 1. [Introduction](#1) # 1. [Import](#2) # 1. [Version](#3) # 1. [setup](#4) # 1. [Python](#5) # 1. [Python Syntax compared to other programming languages](#6) # 1. [Basics](#2) # 1. [Functions](#3) # 1. [Types and Sequences](#4) # 1. [More on Strings](#5) # 1. [Reading and Writing CSV files](#6) # 1. [Dates and Times](#7) # 1. [Objects and map()](#8) # 1. [Lambda and List Comprehensions](#9) # 1. [OOP](#10) # 1. [Numpy](#12) # 1. [Creating Arrays](#13) # 1. [Combining Arrays](#14) # 1. [Operations](#15) # 1. [Math Functions](#16) # 1. [Indexing / Slicing](#17) # 1. [Copying Data](#18) # 1. [Iterating Over Arrays](#19) # 1. [The Series Data Structure](#20) # 1. [Querying a Series](#21) # 1. [Pandas](#22) # 1. [The DataFrame Data Structure](#22) # 1. [Dataframe Indexing and Loading](#23) # 1. [Missing values](#24) # 1. [Merging Dataframes](#25) # 1. [Making Code Pandorable](#26) # 1. [Group by](#27) # 1. [Scales](#28) # 1. [Pivot Tables](#29) # 1. [Date Functionality](#30) # 1. [Distributions in Pandas](#31) # 1. [Hypothesis Testing](#32) # 1. [Matplotlib](#33) # 1. [Scatterplots](#34) # 1. [Line Plots](#35) # 1. [Bar Charts](#36) # 1. [Histograms](#37) # 1. [Box and Whisker Plots](#38) # 1. [Heatmaps](#39) # 1. [Animations](#40) # 1. [Interactivity](#41) # 1. [DataFrame.plot](#42) # 1. [seaborn](#43) # 1. [5-1 Seaborn Vs Matplotlib](#43) # 1. [5-2 10 Useful Python Data Visualization Libraries](#43) # 1. [SKlearn](#44) # 1. [Introduction](#45) # 1. [Algorithms](#46) # 1. [Framework](#47) # 1. [Applications](#48) # 1. [Data](#49) # 1. [Supervised Learning: Classification](#50) # 1. [Separate training and testing sets](#51) # 1. [linear, binary classifier](#52) # 1. [Prediction](#53) # 1. [Back to the original three-class problem](#54) # 1. [Evaluating the classifier](#55) # 1. [Using the four flower attributes](#56) # 1. [Unsupervised Learning: Clustering](#57) # 1. [Supervised Learning: Regression](#58) # 1. [Plotly](#59) # 1. [New to Plotly?](#59) # 1. [CheatSheet](#60) # 1. [Conclusion](#61) # 1. [References](#62) # + [markdown] _uuid="6b741a5af31141057f72904b11954acfb57f5772" #
    # ## 1-Introduction # + [markdown] _uuid="1e7678d8e7831683d0aefffdab152562c18c3188" # In this kernel, we have a comprehensive tutorials for Five packages in python after that you can read my other kernels about machine learning # + [markdown] _uuid="918161eb40e34a545c75850d6a7dee64d9bf5cd1" # # # ###### [Go to top](#top) # + [markdown] _uuid="953258de456acd2780aba3d18c1fc817ae689ad7" #
    # ## 1-1 Import # + _kg_hide-input=true _uuid="4f8b6d44c99e1141bd001556f554c7efc6b77acf" from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score import matplotlib.pyplot as plt import plotly.graph_objs as go import scipy.stats as stats import plotly.plotly as py import seaborn as sns import pandas as pd import numpy as np import matplotlib import warnings import sklearn import scipy import json import sys import csv import os # - # !pip install plotly # + [markdown] _uuid="9a30328de5327e576716f7a57ef119e63a6e77c3" #
    # ## 1-2 Version # + _kg_hide-input=true _uuid="3925ca44958df0a27f3b51de1dba2a18a2650110" print('matplotlib: {}'.format(matplotlib.__version__)) print('sklearn: {}'.format(sklearn.__version__)) print('scipy: {}'.format(scipy.__version__)) print('seaborn: {}'.format(sns.__version__)) print('pandas: {}'.format(pd.__version__)) print('numpy: {}'.format(np.__version__)) print('Python: {}'.format(sys.version)) # + [markdown] _uuid="51a98c51b1bf27253f6521a2951a5d61625642bf" #
    # ## 1-3 Setup # + _kg_hide-input=true _uuid="cbf46ff8ec8faf08c4aa9c84eb796ffcf0a8c4c8" warnings.filterwarnings('ignore') sns.set(color_codes=True) plt.style.available # %matplotlib inline # %precision 2 # + [markdown] _uuid="5efeff35ad9951e40551d0763eaf26f08bb4119e" #
    # # 2-Python # # **Python** is a modern, robust, high level programming language. It is very easy to pick up even if you are completely new to programming. # # It is used for: # # 1. web development (server-side), # 1. software development, # 1. mathematics, # 1. system scripting. # #
    # ## 2-1 Python Syntax compared to other programming languages # # 1. Python was designed to for readability, and has some similarities to the English language with influence from mathematics. # 1. Python uses new lines to complete a command, as opposed to other programming languages which often use semicolons or parentheses. # 1. Python relies on indentation, using whitespace, to define scope; such as the scope of loops, functions and classes. Other programming languages often use curly-brackets for this purpose. # ###### [Go to top](#top) # + [markdown] _uuid="a276b26993c902362e522d8a29a67ef670db103d" #
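# + [markdown]
# To make these three points concrete, here is a tiny example: each statement ends at the end of its line (no semicolons), and the bodies of the `for` loop and the `if`/`else` are defined purely by indentation rather than curly brackets.

# +
for number in range(3):
    if number % 2 == 0:
        print(number, "is even")
    else:
        print(number, "is odd")
# -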
    # # 2-2 Python: Basics # # + _kg_hide-input=true _uuid="732b4733f620d36d4ac78b27261933559f7086de" import this # + [markdown] _uuid="46d2f5efcea352bda6ee7f8721be0001548440f1" #
    # ### 2-2-1 Variables # A name that is used to denote something or a value is called a variable. In python, variables can be declared and values can be assigned to it as follows, # + _kg_hide-input=true _uuid="52c065c046f96806741321dcc3e40dffec8f75fe" x = 2 y = 5 xy = 'Hey' # + [markdown] _uuid="762c34fcdeed69fc79fb84d7bb86c5baf8f9152e" #
    # ### 2-2-2 Operators # + [markdown] _uuid="9dcd3e4436aa8050f116b6f7157cc986e762c794" # | Symbol | Task Performed | # |----|---| # | + | Addition | # | - | Subtraction | # | / | division | # | % | mod | # | * | multiplication | # | // | floor division | # | ** | to the power of | # # ### Relational Operators # | Symbol | Task Performed | # |----|---| # | == | True, if it is equal | # | != | True, if not equal to | # | < | less than | # | > | greater than | # | <= | less than or equal to | # | >= | greater than or equal to | # ### Bitwise Operators # | Symbol | Task Performed | # |----|---| # | & | Logical And | # | l | Logical OR | # | ^ | XOR | # | ~ | Negate | # | >> | Right shift | # | << | Left shift | # # ###### [Go to top](#top) # + [markdown] _uuid="281c637be82f68519f371353afa603fe842bd5fa" #
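# + [markdown]
# A few of the operators from the tables above in action:

# +
print(7 / 2)     # true division         -> 3.5
print(7 // 2)    # floor division        -> 3
print(7 % 2)     # mod                   -> 1
print(2 ** 10)   # to the power of       -> 1024
print(7 == 7.0)  # relational: equality  -> True
print(6 & 3)     # bitwise AND of 110 and 011 -> 2
print(6 | 3)     # bitwise OR                 -> 7
print(1 << 3)    # left shift                 -> 8
# -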
    # # 2-3 Python : Functions # + [markdown] _uuid="b513caa4821bc20e6b833577ba4d894bce011e32" #
    # `add_numbers` is a function that takes two numbers and adds them together. # + _kg_hide-input=true _uuid="700ebb9c4d8c7c57948194e3d848b4b480e06f9a" def add_numbers(x, y): return x + y add_numbers(1, 2) # + [markdown] _uuid="0fb081fa2f33e01e616a82aad306ab6490f48a03" #
    # `add_numbers` updated to take an optional 3rd parameter. Using `print` allows printing of multiple expressions within a single cell. # + _uuid="b7dd7bc4f7f52acac95ac94586c74f4b59a22acb" def add_numbers(x,y,z=None): if (z==None): return x+y else: return x+y+z print(add_numbers(1, 2)) print(add_numbers(1, 2, 3)) # + [markdown] _uuid="51d91959b8088baa0884eaf191edc9520e012625" #
    # `add_numbers` updated to take an optional flag parameter. # + _kg_hide-input=true _uuid="8f109910dd7dacbac0212d03b3a0d55e8d92b4bf" def add_numbers(x, y, z=None, flag=False): if (flag): print('Flag is true!') if (z==None): return x + y else: return x + y + z print(add_numbers(1, 2, flag=True)) # + [markdown] _uuid="052a1ebfb11cba627a4ed7ae13360e15cfd2f5bc" #
    # Assign function `add_numbers` to variable `a`. # + _kg_hide-input=true _uuid="774773afde862fefb51b22d9e676b44744410c6f" def add_numbers(x,y): return x+y a = add_numbers a(1,2) # + [markdown] _uuid="54244074f574884d9c241727f559466cdc646793" #
    # # 2-4 Python : Types and Sequences # + [markdown] _uuid="fe3bfa4b99b20b534a24cd4b04953de4c81a2032" #
    # Use `type` to return the object's type. # + _uuid="ea5486e825425487854b66e135142787a8696547" type('This is a string') # + _uuid="6b1b6307a4b2363c5f13c86a94a6c6ed83cf05d1" type(None) # + _uuid="1bfe51959705a633d7ae10b8e807ba752809f8a2" type(1) # + _uuid="9c6ea01bf5fac295d6ca5dea7a867fd769d9854f" type(1.0) # + _uuid="08e7ecceb2d59d5964d039b1307736a106abef30" type(add_numbers) # + [markdown] _uuid="843a9232374917b366c4afadc94fd123e13bc2cf" #
    # Tuples are an immutable data structure (cannot be altered). # + _uuid="02c0f949470c05c302705b8cbde9dbc4c5e9ec58" x = (1, 'a', 2, 'b') type(x) # + [markdown] _uuid="32e3d57ead5741f615aaf0ec276583163a6fc1a7" #
    # Lists are a mutable data structure. # + _uuid="e559da4fdd4adcf955e36ee33209728c7f04c873" x = [1, 'a', 2, 'b'] type(x) # + [markdown] _uuid="1c25590aa57c37e383007ea43e9aebd08dbb7ef8" #
    # Use `append` to append an object to a list. # + _uuid="46b3996bb68391469648cfaf732d9f891aad8e76" x.append(3.3) print(x) # + [markdown] _uuid="a0a9f9ad271ff5cab43f7d95c00f917dfa42cda3" #
    # This is an example of how to loop through each item in the list. # + _uuid="fdbcbc6f491284146fb7ec2288a48dca70d028f0" for item in x: print(item) # + [markdown] _uuid="a863bf6c33d37823812c0d373bc685f8018328c3" #
    # Or using the indexing operator: # + _uuid="34ebdc7a81ca55c6d96576e8e1032bb2e2a8c38d" i=0 while( i != len(x) ): print(x[i]) i = i + 1 # + [markdown] _uuid="7accd0a14f6f94b5f8f87c28e772fd7e70f5aa89" #
    # Use `+` to concatenate lists. # + _uuid="2e2d33c027d2d2e66444ea8c5bc16ac182604699" [1,2] + [3,4] # + [markdown] _uuid="8dac836d3bfd72d4ba18e0ad35e918fd5578e0c4" #
    # Use `*` to repeat lists. # + _uuid="33fbab44facf2994b9a1f4f842b5b68db790c60c" [1]*3 # + [markdown] _uuid="1bf6aa261fae9ee56cc8e2ec7ba789818619bfc7" #
    # Use the `in` operator to check if something is inside a list. # + _uuid="9dcaf39b71a922a3a61f6565cb9025fe89291cc8" 1 in [1, 2, 3] # + [markdown] _uuid="2fdc7e3a4da86bf2fc3daef510d521ff4f9e780a" #
    # Now let's look at strings. Use bracket notation to slice a string. # # ###### [Go to top](#top) # + _uuid="31a527ad3d4ac52203a81f2aa05d6ccbcbefd7d0" x = 'This is a string' print(x[0]) #first character print(x[0:1]) #first character, but we have explicitly set the end character print(x[0:2]) #first two characters # + [markdown] _uuid="a2da025858712f120a1708fa8de0da26c6b8b1de" #
    # This will return the last element of the string. # + _uuid="24cb16eaabaa7e13b0a4a69dfc6cdf71ffb0238b" x[-1] # + [markdown] _uuid="892cb74f3b1ba32053e1b854681e9520d9ae7578" #
    # This will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end. # + _uuid="b947154182b9eb7aa08f5986eb768020e9eefdcb" x[-4:-2] # + [markdown] _uuid="7d88463b1e8c9555030af32e1e99cdd522c40610" #
    # This is a slice from the beginning of the string and stopping before the 3rd element. # + _uuid="961b4829e231286702b78c13f6ce68c780df7c56" x[:3] # + [markdown] _uuid="d365c72e42116bbd6c7ae960d77af0d89d35f201" #
    # And this is a slice starting from the 3rd element of the string and going all the way to the end. # + _uuid="82c83cb0bfe09b269037a97ef0b00cd66254ab89" x[3:] # + _uuid="b3914291465c80995a05d30f2842f38b1e2d7255" firstname = 'MJ' lastname = 'Bahmani' print(firstname + ' ' + lastname) print(firstname*3) print('mj' in firstname) # + [markdown] _uuid="5e92142f11b79daaa369a0a031bab1979602457e" #
    # `split` returns a list of all the words in a string, or a list split on a specific character. # + _uuid="63a7aea0881d73a1c2caab2b2bc0bf09d46e0402" firstname = 'Mr Dr M'.split(' ')[0] # [0] selects the first element of the list lastname = 'Mr Dr M'.split(' ')[-1] # [-1] selects the last element of the list print(firstname) print(lastname) # + _uuid="a46edc198922ad4957bcdc3d990f43aaae35a464" 'MJ' + str(2) # + [markdown] _uuid="50a1372d759bcc6e0b4d8852588ba9fe1c3a1200" #
    # Dictionaries associate keys with values. # + _uuid="b6dd088e810aca788fc74f76d23777b2115e12d0" x = {'': '', 'irmatlab': ''} x[''] # Retrieve a value by using the indexing operator # + _uuid="6ec5106de063d06587a532835173cb3e15c4c074" x[''] = None x[''] # + [markdown] _uuid="909deaebc50f94808fbdbe831ffbe1137066a2fe" #
    # Iterate over all of the keys: # + _uuid="993e0c88238de00475ddd22a2f4594ce14aa936d" for name in x: print(x[name]) # + [markdown] _uuid="9248f6bfe90a9c867ca248b08ad81c8626eaf689" #
    # Iterate over all of the values: # + _uuid="9f995195b06da4b3e64cabbe950d19570fbb910c" for email in x.values(): print(email) # + [markdown] _uuid="147f360dd1e8f878ae393c46648ec5f630d8d80c" #
    # Iterate over all of the items in the list: # + _uuid="ab973ccad15b9f3af27e6a2cbd2425ede962be86" for name, email in x.items(): print(name) print(email) # + [markdown] _uuid="3a909b20788d5df862adec16c2d6d15e021a74dc" #
    # You can unpack a sequence into different variables: # + _uuid="296600127a5b164c23ad4aad0e622883f6ce56b1" x = ('MJ', 'Bahmani', '') fname, lname, email = x # + _uuid="b1d783512fd2e8d76b8d1c147ed51f8584d203b7" fname # + _uuid="a69c79b2a267725470ff843205f11c8a6e2a820e" lname # + [markdown] _uuid="6b069e526dd64ce6efba8cb51854ef2636e308b9" #
    # # 2-5 Python: More on Strings # # ###### [Go to top](#top) # + _uuid="2aa934d05afb73cd7cdcb9ff223b2014798f3b18" print('MJ' + str(2)) # + [markdown] _uuid="2a9b5f81e7ae92846e4bce5c8c34ff23490b83ef" #
    # Python has a built in method for convenient string formatting. # + _uuid="a8fa5e6a70f8a2e642f0799b514d96f5e8b4cf67" sales_record = { 'price': 3.24, 'num_items': 4, 'person': 'MJ'} sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}' print(sales_statement.format(sales_record['person'], sales_record['num_items'], sales_record['price'], sales_record['num_items']*sales_record['price'])) # + [markdown] _uuid="7fee00e11e78abee9a0397711ca813c60b1b8c57" #
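# + [markdown]
# On Python 3.6 or newer, the same statement can also be written with an f-string, which many people find easier to read (a small sketch equivalent to the `format` call above):

# +
sales_record = {'price': 3.24, 'num_items': 4, 'person': 'MJ'}
total = sales_record['num_items'] * sales_record['price']
print(f"{sales_record['person']} bought {sales_record['num_items']} item(s) "
      f"at a price of {sales_record['price']} each for a total of {total}")
# -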
    # # 2-6 Python:Reading and Writing CSV files # + [markdown] _uuid="3f79954fc298a34c3c231d35161dfb3bf743576d" #
    # Let's import our datafile train.csv # ###### [Go to top](#top) # + _uuid="ab5ab332ba1dc0fb1668d18df1b1771232cc4e35" with open('train.csv') as csvfile: train = list(csv.DictReader(csvfile)) train[:1] # The first three dictionaries in our list. # + [markdown] _uuid="ac3ffa35fd8725c3b1d81ab92470f1e5de19061b" #
    # `csv.Dictreader` has read in each row of our csv file as a dictionary. `len` shows that our list is comprised of 234 dictionaries. # + _uuid="3f51c4e51000c69c3b3113f4cc8ed1e5e5fa005f" len(train) # + [markdown] _uuid="59457b14190fe59e900f644450b66f542b2320c4" #
    # `keys` gives us the column names of our csv. # + _uuid="de6d753e74e0345bcc292cb6cfded49241b696f8" train[0].keys() # + [markdown] _uuid="391e4c865b352e70ae7bc5cbdef9559ccf396cb8" #
    # How to do some math action on the data set # + _uuid="c654cdc458ff6f8d960caa80268f61e9f213e9e8" sum(float(d['Fare']) for d in train) / len(train) # + [markdown] _uuid="a12ec3e5c23e0660c9cfce427333afabe5a4384c" #
    # Use `set` to return the unique values for the type of Sex in our dataset have. # + _uuid="3541bdc6bd01a88c58e408bcc99411136a953538" Sex = set(d['Sex'] for d in train) Sex # + [markdown] _uuid="51bd65ce547fceba323660a1370a407d004e9abe" #
    # # 2-7 Python: Dates and Times # + _uuid="77aea246af2c06fe57b72bf45e424a4a6ae2e8c5" import datetime as dt import time as tm # + [markdown] _uuid="f14fbbf5385512cc6d6631677a62ff620d51f9ae" #
    # `time` returns the current time in seconds since the Epoch. (January 1st, 1970) # # ###### [Go to top](#top) # + _uuid="73ffa7423c31ed5efb8bab44ead79a4162711920" tm.time() # + [markdown] _uuid="83a2aceba5919534adf74e59c16baa61d6e24bda" #
    # Convert the timestamp to datetime. # + _uuid="b0726eaf4ebb492d3da2396520a7328bbb00263c" dtnow = dt.datetime.fromtimestamp(tm.time()) dtnow # + [markdown] _uuid="df2a8c0e8237c6c88ea5d8f446604d6c60435b72" #
    # Handy datetime attributes: # + _uuid="3ca2dd821f01874e23fc1eccf13b6584ba21e795" dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime # + [markdown] _uuid="85160de1a52833048050277d7df6713172b367c9" #
    # `timedelta` is a duration expressing the difference between two dates. # + _uuid="c466ccf8a9f33641fec51525f0e8a03dbec8fd62" delta = dt.timedelta(days = 100) # create a timedelta of 100 days delta # + [markdown] _uuid="c09e3bd1c17b5a67b8e340956fdb27b2e6b32b2d" #
    # `date.today` returns the current local date. # + _uuid="539408df43c0aa1b449551e14b84b49110b04beb" today = dt.date.today() # + _uuid="4022bf4458c6495ab1c4ddfcf82c155742641823" today - delta # the date 100 days ago # + _uuid="38399dc2975b4f44e083ae319f37217286d9533a" today > today-delta # compare dates # + [markdown] _uuid="a8bcac1b08550c3f143e21cf7ca9049449304a5c" #
    # # 2-8 Python: Objects and map() # + [markdown] _uuid="28be3905d37e6b7d2fb09a5f1a0e0c9df7e48559" #
    # An example of a class in python: # + _uuid="2dc3faf4892b932cb1015902e83a13bd47125b20" class Person: department = 'School of Information' #a class variable def set_name(self, new_name): #a method self.name = new_name def set_location(self, new_location): self.location = new_location # + _uuid="1da9aaf6d97f50fff89e9d08b658adb3c691a262" person = Person() person.set_name('') person.set_location('MI, Berlin, Germany') print('{} live in {} and works in the department {}'.format(person.name, person.location, person.department)) # + [markdown] _uuid="8422c57273a71b4a74b6b4a9eb5535444356823d" #
    # Here's an example of mapping the `min` function between two lists. # + _uuid="3052442e823ee0a450e404607e44cb3c321b508c" store1 = [10.00, 11.00, 12.34, 2.34] store2 = [9.00, 11.10, 12.34, 2.01] cheapest = map(min, store1, store2) cheapest # + [markdown] _uuid="787b960d3c824c764a0eb7c2dab69d073eacfa9d" #
    # Now let's iterate through the map object to see the values. # + _uuid="a9d6751e7c95241dbd10a44f0179a40df8ee3a7a" for item in cheapest: print(item) # + [markdown] _uuid="e9c6c70988cd1459c7078f31fb9a43a275965d46" #
    # # 2-9-Python : Lambda and List Comprehensions # + [markdown] _uuid="b6257386671f84333b4cfae3a979a2c1f8100e76" #
    # Here's an example of lambda that takes in three parameters and adds the first two. # + _uuid="a306ae5687e20ae64431306035359f33aa465382" my_function = lambda a, b, c : a + b # + _uuid="6bf98de2fab3b4239e41a093386d5fb9acad6367" my_function(1, 2, 3) # + [markdown] _uuid="5db80d702a02a1873aa09fa3acd26d922d74ff46" #
    # Let's iterate from 0 to 9 and return the even numbers. # # ###### [Go to top](#top) # + _uuid="90e7eb5e7689d6ce448b8c0c801c2caef70e6d44" my_list = [] for number in range(0, 9): if number % 2 == 0: my_list.append(number) my_list # + [markdown] _uuid="1bfe68a8553e5718a04b14a321262e6e88407b2c" #
    # Now the same thing but with list comprehension. # + _uuid="ee501b9820296d1452845b8da201dbd69d4e0637" my_list = [number for number in range(0,10) if number % 2 == 0] my_list # + [markdown] _uuid="e99f98ecb3393b8846fca621cdbad5dcaf1c60d1" #
    # # 2-10 OOP # 1. **Class** − A user-defined prototype for an object that defines a set of attributes that characterize any object of the class. The attributes are data members (class variables and instance variables) and methods, accessed via dot notation. # # 1. **Class variable** − A variable that is shared by all instances of a class. Class variables are defined within a class but outside any of the class's methods. Class variables are not used as frequently as instance variables are. # # 1. **Data member** − A class variable or instance variable that holds data associated with a class and its objects. # # 1. **Function overloading** − The assignment of more than one behavior to a particular function. The operation performed varies by the types of objects or arguments involved. # # 1. **Instance variable** − A variable that is defined inside a method and belongs only to the current instance of a class. # # 1. **Inheritance** − The transfer of the characteristics of a class to other classes that are derived from it. # # 1. **Instance** − An individual object of a certain class. An object obj that belongs to a class Circle, for example, is an instance of the class Circle. # # 1. **Instantiation** − The creation of an instance of a class. # # 1. **Method** − A special kind of function that is defined in a class definition. # # 1. **Object** − A unique instance of a data structure that's defined by its class. An object comprises both data members (class variables and instance variables) and methods. # # 1. **Operator overloading** − The assignment of more than one function to a particular operator.[4] # # ###### [Go to top](#top) # + _uuid="1c3c77a74a2f45e0cfbec9e2a43ac089aaa09558" class FirstClass: test = 'test' def __init__(self,name,symbol): self.name = name self.symbol = symbol # + _uuid="2dd1233bc33a4e9ca7bf528968267868f4d1bbe1" eg3 = FirstClass('Three',3) # + _uuid="8bb856d47cf1407cccd295fe241825c2999e0756" print (eg3.test, eg3.name) # + _uuid="3fa8ea01163ceea3f5cf46fe2f5eb843fa8a9cc7" class FirstClass: def __init__(self,name,symbol): self.name = name self.symbol = symbol def square(self): return self.symbol * self.symbol def cube(self): return self.symbol * self.symbol * self.symbol def multiply(self, x): return self.symbol * x # + _uuid="6086b2eaf54cc57b3447b7c0b47e4b67f208cba3" eg4 = FirstClass('Five',5) # + _uuid="7029000f20ae1efcd694754d0db4b7e546cdafae" print (eg4.square()) print (eg4.cube()) # + _uuid="e60cf1e71ae99cb38e6b555ae43cb6948bdfc329" eg4.multiply(2) # + _uuid="52ccbcd46b12d802454097d1a4c6ee52c551c4a3" FirstClass.multiply(eg4,2) # + [markdown] _uuid="c68d1914adc7080d3ee9b5592420ff95ea2870e7" #
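# + [markdown]
# Operator overloading, listed in the glossary above, can be shown with a small (hypothetical) `Vector2D` class that defines `__add__` and `__repr__` so that two instances can be combined with `+`:

# +
# Defining __add__ lets two instances be combined with the + operator.
class Vector2D:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __add__(self, other):
        return Vector2D(self.x + other.x, self.y + other.y)
    def __repr__(self):
        return 'Vector2D(%s, %s)' % (self.x, self.y)

print(Vector2D(1, 2) + Vector2D(3, 4))   # -> Vector2D(4, 6)
# -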
    # ### 2-10-1 Inheritance # # There might be cases where a new class would have all the previous characteristics of an already defined class. So the new class can "inherit" the previous class and add it's own methods to it. This is called as inheritance. # # Consider class SoftwareEngineer which has a method salary. # + _uuid="ea270e5c33b3e6d6fb3c7b66a0081e8d80825a78" class SoftwareEngineer: def __init__(self,name,age): self.name = name self.age = age def salary(self, value): self.money = value print (self.name,"earns",self.money) # + _uuid="d63b58d87a62e2e692f25644b1bfd1b0c3c2ccff" a = SoftwareEngineer('Kartik',26) # + _uuid="14e20781ff108a4cc13ac68de99478293413b3d1" a.salary(40000) # + _uuid="89cf62c093a3c6369b77536f4700f70ddf84eebf" dir(SoftwareEngineer) # + _uuid="edcf9e2d90a66ca3370d889ffd2109827229825f" class Artist: def __init__(self,name,age): self.name = name self.age = age def money(self,value): self.money = value print (self.name,"earns",self.money) def artform(self, job): self.job = job print (self.name,"is a", self.job) # + _uuid="92e459938effe069d173a19f94819e68b34d4eca" b = Artist('Nitin',20) # + _uuid="a708a468bec5bc8314bc40ad7e830568805885bc" b.money(50000) b.artform('Musician') # + _uuid="efa5a80085b1a7a6e98a88c5fd86da6c880421cf" dir(Artist) # + [markdown] _uuid="843c1001213bd2cfd201e99b682a5c5bfe78c0c6" #
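# + [markdown]
# Note that `Artist` above is written as an independent class rather than a subclass. To actually inherit from `SoftwareEngineer`, the new class only needs to name it as a base class; here is a minimal sketch (the class name is just illustrative) reusing `SoftwareEngineer` from the cell above:

# +
# Actual inheritance: the subclass reuses __init__ and salary from
# SoftwareEngineer and only adds its own artform method.
class InheritedArtist(SoftwareEngineer):
    def artform(self, job):
        self.job = job
        print(self.name, "is a", self.job)

c = InheritedArtist('Nitin', 20)
c.salary(50000)        # method inherited from SoftwareEngineer
c.artform('Musician')  # method added by the subclass
# -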
    # ## 2-11 Python JSON # # + _uuid="153a10fa7584e50553490e6f765284192776ee5c" # some JSON: x = '{ "name":"John", "age":30, "city":"New York"}' # parse x: y = json.loads(x) # the result is a Python dictionary: print(y["age"]) # + [markdown] _uuid="85876511f2a41d866fe4bfa5208372f51536e6c5" #
    # ## 2-11-1 Convert from Python to JSON # # + _uuid="06ce583a101287b07e6e95d28426ae20d5b4e64b" # a Python object (dict): x = { "name": "John", "age": 30, "city": "New York" } # convert into JSON: y = json.dumps(x) # the result is a JSON string: print(y) # + [markdown] _uuid="b016e08d500dc749010ed6495e3c02e3f1144b4e" # You can convert Python objects of the following types, into JSON strings: # # * dict # * list # * tuple # * string # * int # * float # * True # * False # * None # # ###### [Go to top](#top) # + _uuid="2906f3ee8c9392b9be7507c2c99a1fc3db9df95d" print(json.dumps({"name": "John", "age": 30})) print(json.dumps(["apple", "bananas"])) print(json.dumps(("apple", "bananas"))) print(json.dumps("hello")) print(json.dumps(42)) print(json.dumps(31.76)) print(json.dumps(True)) print(json.dumps(False)) print(json.dumps(None)) # + [markdown] _uuid="5cd4ddf30b6996185afdc94f68ad675b258ebb9a" # Convert a Python object containing all the legal data types: # + _uuid="7a5a584c5912ac52a6fc9796e23eb4799d5b9261" x = { "name": "John", "age": 30, "married": True, "divorced": False, "children": ("Ann","Billy"), "pets": None, "cars": [ {"model": "BMW 230", "mpg": 27.5}, {"model": "Ford Edge", "mpg": 24.1} ] } print(json.dumps(x)) # + [markdown] _uuid="27a8da102cf08964e9cde82973cd653306e3356f" #
    # ## 2-12 Python PIP # # + [markdown] _uuid="5d13a95f504b710e977b4b589c200d5dadca5bc2" #
    # ### 2-12-1 What is a Package? # A package contains all the files you need for a module. # # Modules are Python code libraries you can include in your project. # # ###### [Go to top](#top) # + [markdown] _uuid="cfe1ddbfe7ac5b38f1fb0d36401aaa59bf706b06" #
    # ### 2-12-2 Install PIP # If you do not have PIP installed, you can download and install it from this page: https://pypi.org/project/pip/ # + [markdown] _uuid="af880b807b69c45d77b21433937a13772453b481" #
    # ## 2-13 Python Try Except # The **try** block lets you test a block of code for errors. # # The **except** block lets you handle the error. # # The **finally** block lets you execute code, regardless of the result of the try- and except blocks. # + _uuid="078d0a98c43a742ff43460848dd6bf31a69975ce" try: print(x) except NameError: print("Variable x is not defined") except: print("Something else went wrong") # + _uuid="c5e3cbfe7a287cda43736597455b95e2117ccf4d" try: print(x) except: print("Something went wrong") finally: print("The 'try except' is finished") # + [markdown] _uuid="6d4ae311b6798fa03dff0516dfb59e7d271b040e" #
    # ## 2-14 Python Iterators # An iterator is an object that contains a countable number of values. # # An iterator is an object that can be iterated upon, meaning that you can traverse through all the values. # # Technically, in Python, an iterator is an object which implements the iterator protocol, which consist of the methods __iter__() and __next__(). # ###### [Go to top](#top) # + [markdown] _uuid="0fd29f43681c38533591b6cd0d3cff27e75b6b0c" # Return a iterator from a tuple, and print each value: # + _uuid="d4f1c3ecdad754158b847660bfc92343470e9c55" mytuple = ("apple", "banana", "cherry") myit = iter(mytuple) print(next(myit)) print(next(myit)) print(next(myit)) # + [markdown] _uuid="5a15f75b705e078ec25f1fb628af76ed19da0333" #
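# + [markdown]
# The iterator protocol mentioned above can also be implemented by hand. A small sketch of a class that counts from 1 up to `n` by defining `__iter__()` and `__next__()`:

# +
# A hand-written iterator implementing the iterator protocol.
class CountUpTo:
    def __init__(self, n):
        self.n = n
        self.current = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.current >= self.n:
            raise StopIteration
        self.current += 1
        return self.current

for value in CountUpTo(3):
    print(value)   # prints 1, 2, 3
# -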
    # ### 2- 14-1 Looping Through an Iterator # # + _uuid="830a5d61f4d7afec6525f32b8f8e9c0f4783acc0" mytuple = ("apple", "banana", "cherry") for x in mytuple: print(x) # + [markdown] _uuid="e7410e1e9691615946998a833fec7bea80f50cc2" #
    # ## 2- 15 Dictionary # A **dictionary** is a collection which is **unordered, changeable and indexed**. In Python dictionaries are written with curly brackets, and they have **keys and values**. # + _uuid="85f51ded66c72e13b253c5775e5b1ac914459479" thisdict = { "brand": "Ford", "model": "Mustang", "year": 1964 } print(thisdict) # + [markdown] _uuid="437f23fd4289f0a757dc1c87a1e339a9c6f96c31" #
    # ## 2-16 Tuples # A **tuple** is a collection which is **ordered and unchangeable**. In Python tuples are written with round brackets. # # # + _uuid="253ab9bcc46ce48b2f5b702bacad75db5d95de7c" thistuple = ("apple", "banana", "cherry") print(thistuple) # + [markdown] _uuid="0d377a64581e95a0022045181a2c81530eba31d1" #
    # ## 2-19 Set # A set is a collection which is unordered and unindexed. In Python sets are written with curly brackets. # ###### [Go to top](#top) # + _uuid="0d4b6b6a5efa453ed3a594173586fb63bb6475dc" thisset = {"apple", "banana", "cherry"} print(thisset) # + _uuid="dbb1998040fc3bdc67e0d690de391b4f47e117b8" thisset = {"apple", "banana", "cherry"} for x in thisset: print(x) # + [markdown] _uuid="2f51152b3812f1d279b3a009fed112780aedbda4" #
    # ### 2-17-1 Add Items # To add one item to a set use the add() method. # # To add more than one item to a set use the update() method. # ###### [Go to top](#top) # + _uuid="cccd895e65e0b5bbbfbecdd96b3322a43688de78" thisset = {"apple", "banana", "cherry"} thisset.add("orange") print(thisset) # + [markdown] _uuid="1a8697f93952e076f6f949997676d40518d7b5a6" #
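# + [markdown]
# And `update()`, mentioned above, adds several items at once:

# +
thisset = {"apple", "banana", "cherry"}
thisset.update(["orange", "mango", "grapes"])  # add more than one item
print(thisset)
# -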
    # # Python Packages # * Numpy # * Pandas # * Matplotlib # * Seaborn # * Sklearn # * plotly # + [markdown] _uuid="bfb701e45e93aea0b3ed64e148ca2fdb53559038" #
    # # 3- Numerical Python (NumPy) # + _uuid="db9a850ebb440ca960a0713d822e20090bc10601" import numpy as np # + [markdown] _uuid="79699175b4559f509181d359393167f801735485" #
    # ## 3-1 NumPy :Creating Arrays # + [markdown] _uuid="fb5123cfa4687a819758ea82810984fa69d631e3" # Create a list and convert it to a numpy array # + _uuid="1cdc9404b31261269891723ddc59064802063041" mylist = [1, 2, 3] x = np.array(mylist) x # + [markdown] _uuid="85123a6bf0589918ff03fe9916b06635fa32b776" #
    # Or just pass in a list directly # + _uuid="dca7dd9319716e863760bb7c4e1e47a1c17d7b1b" y = np.array([4, 5, 6]) y # + [markdown] _uuid="5a29d1e6bb19131b3bde9ae197b562cf5c905f2a" #
    # Pass in a list of lists to create a multidimensional array. # + _uuid="e18e77b6d1becf1b7ded3a4daa361dbe3f985d96" m = np.array([[7, 8, 9], [10, 11, 12]]) m # + [markdown] _uuid="956c870e985074aadd2aa97b8a4c820c64bd7d2b" #
    # Use the shape method to find the dimensions of the array. (rows, columns). # + _uuid="7e5472233313b1eb806b7e9dfd2478f4155b23d0" m.shape # + [markdown] _uuid="d20433435f865bf4593ee8b5eae4cb2173794a57" #
    # `arange` returns evenly spaced values within a given interval. # + _uuid="87aded8cbe232bfcb4ce246ecf1bac0b70d9477e" n = np.arange(0, 30, 2) # start at 0 count up by 2, stop before 30 n # + [markdown] _uuid="ef6d0651972cb576188cd986830aae2d53bb20b8" #
    # `reshape` returns an array with the same data with a new shape. # + _uuid="6722026f830d545bfd21a2760f80c20e33c0d757" n = n.reshape(3, 5) # reshape array to be 3x5 n # + [markdown] _uuid="a47a0e9b094d81ae9ec235cd33d395ca34c379b3" #
    # `linspace` returns evenly spaced numbers over a specified interval. # + _uuid="099d0b4b7f3c6e5aef6cebd760e3c6b8111205d8" o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4 o # + [markdown] _uuid="e5abb93fd66c344624ad2bb5e91ec1cd8a9ab220" #
    # `resize` changes the shape and size of array in-place. # + _uuid="66de4ad6fbb8a2ffe7251b667de732fe0048c39c" o.resize(3, 3) o # + [markdown] _uuid="95af33fa252b8ccc4afad8666e0a7ced2d83aee9" #
    # `ones` returns a new array of given shape and type, filled with ones. # + _uuid="015e5a604fceb5b335760d85f090812416ab0edd" np.ones((3, 2)) # + [markdown] _uuid="415ba7eb6ffcca82d1ad73b6e652d77c2b40c4c7" #
    # `zeros` returns a new array of given shape and type, filled with zeros. # + _uuid="4833b4502eac60ec53ca4ab55f10653103be3584" np.zeros((2, 3)) # + [markdown] _uuid="acbef4f6ab93b0ea5ed8bea30ec0eddbce8415f1" #
    # `eye` returns a 2-D array with ones on the diagonal and zeros elsewhere. # + _uuid="5e4e8112e5829b290385bd076041d9d0c43bdd4e" np.eye(3) # + [markdown] _uuid="bb258a18b84c56c26c19e5e467f8886a3209eb6d" #
    # `diag` extracts a diagonal or constructs a diagonal array. # + _uuid="0167bbb06dc5175a14e2d1844b3e92c3f601e156" np.diag(y) # + [markdown] _uuid="05f0ea0d69c70181a0ddee0b0a0670afc1a5761c" #
    # Create an array using repeating list (or see `np.tile`) # + _uuid="76f7c4dd56d1f9ba9c8dcc3bf3b0e67bc62326de" np.array([1, 2, 3] * 3) # + [markdown] _uuid="e9d6935951d0df2a4d60f66b302da7b876006c71" #
    # Repeat elements of an array using `repeat`. # + _uuid="7299b4990895d4ea8a59447489fa6fbc0cde5ea6" np.repeat([1, 2, 3], 3) # + [markdown] _uuid="01250c2ea726f387db5f3fab5a004e741e574a35" #
    # ## 3-2 Numpy:Combining Arrays # # ###### [Go to top](#top) # + _uuid="0dc57919256360d1ac309813fb6e836f75d17484" p = np.ones([2, 3], int) p # + [markdown] _uuid="98aa86e98478bcad2ce35eded8e2adb825bbf709" #
    # Use `vstack` to stack arrays in sequence vertically (row wise). # + _uuid="48b0f8194df6a4f63932d97b24883d6a5d69d0df" np.vstack([p, 2*p]) # + [markdown] _uuid="a2cf2eb6fdf3deccca31df9979dd978241adadb8" #
    # Use `hstack` to stack arrays in sequence horizontally (column wise). # + _uuid="834984b63056d7161f261d868d85b83f24403287" np.hstack([p, 2*p]) # + [markdown] _uuid="b82f24eec48ab5206a47d04a36e5e639dad7f9a1" #
    # ## 3-3 Numpy:Operations # # ###### [Go to top](#top) # + [markdown] _uuid="d58a4bb35b9f1525736b28ffc7e2b6ead4035266" # Use `+`, `-`, `*`, `/` and `**` to perform element wise addition, subtraction, multiplication, division and power. # + _uuid="7ba8b42fcdcdc83e6f43df83b73665aa289e9786" print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9] print(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3] # + _uuid="cd42ade6bd206750b5dde1086b06bde519f2b1df" print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18] print(x / y) # elementwise divison [1 2 3] / [4 5 6] = [0.25 0.4 0.5] # + _uuid="3370d5b9b8ca04ed5be48331c3d6d4d08738ab5d" print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9] # + [markdown] _uuid="e665d9db501694e29f54b02de56befaf69305629" #
    # **Dot Product:** # # $ \begin{bmatrix}x_1 \ x_2 \ x_3\end{bmatrix} # \cdot # \begin{bmatrix}y_1 \\ y_2 \\ y_3\end{bmatrix} # = x_1 y_1 + x_2 y_2 + x_3 y_3$ # + _uuid="b45a94567cf5801e05869bde614e831603b1599a" x.dot(y) # dot product 1*4 + 2*5 + 3*6 # + _uuid="b27ce9c3b1328461ba14f5efdd36079e3f827951" z = np.array([y, y**2]) print(len(z)) # number of rows of array # + [markdown] _uuid="30ab350aa70687cbdd087d86386b96504aec4479" #
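# + [markdown]
# The same dot product can also be written with `np.dot` or, on Python 3.5+, with the `@` operator:

# +
import numpy as np

a1 = np.array([1, 2, 3])
a2 = np.array([4, 5, 6])
print(np.dot(a1, a2))  # 1*4 + 2*5 + 3*6 = 32
print(a1 @ a2)         # the @ operator gives the same result
# -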
    # Let's look at transposing arrays. Transposing permutes the dimensions of the array. # + _uuid="2630bf81db6d00483f7e0abd3a3c3af28144d55f" z = np.array([y, y**2]) z # + [markdown] _uuid="8bb90454186cab0c688be9a01ff84c7ca67fa6a9" #
    # The shape of array `z` is `(2,3)` before transposing. # + _uuid="bdcbfc2e1c9b985a83aefa1a5972aa919acb8365" z.shape # + [markdown] _uuid="3b796b3d6c4e9a90e4fb5332d708398bb848c2e4" #
    # Use `.T` to get the transpose. # + _uuid="8ca46bf46ed15a5fa3af7e2d325ac68bc8785f05" z.T # + [markdown] _uuid="d2949665fb40cdb932376219b6d78156265d5ebf" #
    # The number of rows has swapped with the number of columns. # + _uuid="e48e8eff60259a9c865cf2dae88a3ce1641e826c" z.T.shape # + [markdown] _uuid="08fac1d78a66fdd513372cd31eaa5f973655e80a" #
    # Use `.dtype` to see the data type of the elements in the array. # + _uuid="6e1ea214c877f23d670b72f06060c2dee1bdaee5" z.dtype # + [markdown] _uuid="70da3bcabbc0631bb11b48796106c07f4de41ba8" #
    # Use `.astype` to cast to a specific type. # + _uuid="60684ed5f18c88c4ef4e2fe4207fa94eede9d993" z = z.astype('f') z.dtype # + [markdown] _uuid="e1199442dd07a4a4eef965c584d9e0f443c40013" #
    # ## 3-4 Numpy: Math Functions # # ###### [Go to top](#top) # + [markdown] _uuid="01121300b7b5b8a83d213d4f065383da19d5f7d8" # Numpy has many built in math functions that can be performed on arrays. # + _uuid="e1eaeb06cf68d055f6a5536ea72a17606b6762c1" a = np.array([-4, -2, 1, 3, 5]) # + _uuid="2918e83be55935fa03fd24924bc7f07a271c40d3" a.sum() # + _uuid="2e86a6ea98c6c86dbe7a96612af4f37452b23670" a.max() # + _uuid="1e8e48425f65e90a18a29fb7983dc6aa424f8445" a.min() # + _uuid="27131855b2c26a5f90c6524d4a8c7ed0266bf378" a.mean() # + _uuid="a020531df7f87af15577e986699e80f892300773" a.std() # + [markdown] _uuid="6d26c78ddd00e9387ba4214470debcc8147fd2bc" #
    # `argmax` and `argmin` return the index of the maximum and minimum values in the array. # + _uuid="dd6f6aee91fc8dd99cc5a8359dc81ce1d443e77f" a.argmax() # + _uuid="59aeec1ba92a0eb4c6928564e64445e6ce46cc3c" a.argmin() # + [markdown] _uuid="cf3ad4800506fd903882f60d811ca2548756e7c8" #
    # # ## 3-5 Numpy:Indexing / Slicing # ###### [Go to top](#top) # + _uuid="82ca4b616a46de3280b5a50df4c0114298d07aea" s = np.arange(13)**2 s # + [markdown] _uuid="417c1b56b9d3eca21021422bde199036c3f08ec3" #
    # Use bracket notation to get the value at a specific index. Remember that indexing starts at 0. # + _uuid="6f90f5dee19a73afde4e9455b47c0dbe86c9ce6b" s[0], s[4], s[-1] # + [markdown] _uuid="9d0cf8a30f9b1bd1f8dc60a46d3271e5cac235d0" #
    # Use `:` to indicate a range. `array[start:stop]` # # # Leaving `start` or `stop` empty will default to the beginning/end of the array. # + _uuid="1ea11cc5be369a751b250e7085cf507b122bcf88" s[1:5] # + [markdown] _uuid="dd42e9a4274baebf747c627767a273ef8ca9a26f" #
    # Use negatives to count from the back. # + _uuid="ce51a60f59516b22174f97a4f7c6da6c75322a8b" s[-4:] # + [markdown] _uuid="0fdd94c22e7ceb4cac00001bd98f09d5d879613e" #
    # A second `:` can be used to indicate step-size. `array[start:stop:stepsize]` # # Here we are starting 5th element from the end, and counting backwards by 2 until the beginning of the array is reached. # + _uuid="f28245bea6ab9a1bed7a76859ddfbb295ea95038" s[-5::-2] # + [markdown] _uuid="2f2295df96d9e7da01cde82bf87d5c7157bd9d21" #
    # Let's look at a multidimensional array. # + _uuid="582e15f695a2891826ef4eb8e913a0dc61913898" r = np.arange(36) r.resize((6, 6)) r # + [markdown] _uuid="26fd89ebfe4b92cedad89152753918608b68eafe" #
    # Use bracket notation to slice: `array[row, column]`. # + _uuid="7bf0f1e77c5243c3e2b846f7b268c4891786621b" r[2, 2] # + [markdown] _uuid="fcbf322288c6671ef799299b5ddb828b68b8cab9" #
    # And use : to select a range of rows or columns. # + _uuid="64d430ac4626977723c6357865a5297699e5323d" r[3, 3:6] # + [markdown] _uuid="e109144f7316b357450c4b3cae3cb61ad327f5d9" #
    # Here we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column. # + _uuid="7b80dc3ba5c970afe2db37c23599d476399b1c68" r[:2, :-1] # + [markdown] _uuid="6c5126aa54a1e6b9d0d078f2226b68d3fb42c4a2" #
    # This is a slice of the last row, and only every other element. # + _uuid="054f80d47b0e2d249a9140321846ff7b50daba1f" r[-1, ::2] # + [markdown] _uuid="62e5354b353e1fe2e60036f2c3f94909a8af1b7b" #
    # We can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see `np.where`) # + _uuid="5476c43a5e7d195804cac7517abbdbcdbc234829" r[r > 30] # + [markdown] _uuid="77ae22731783157aaecdd019be77004504b943af" #
    # Here we are assigning all values in the array that are greater than 30 to the value of 30. # ###### [Go to top](#top) # + _uuid="6651ea99641526b7df83e7a9d9c1efe4eab9dc99" r[r > 30] = 30 r # + [markdown] _uuid="6189d2e7dc9038dc1c4f2ed95a47fad76d700890" #
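# + [markdown]
# `np.where`, mentioned above, performs the same kind of conditional selection but builds a new array instead of modifying the original in place:

# +
import numpy as np

q = np.arange(36).reshape(6, 6)
capped = np.where(q > 30, 30, q)  # same capping as above, but q is untouched
print(capped)
print(q.max())                    # still 35
# -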
    # ## 3-6 Numpy :Copying Data # + [markdown] _uuid="ad2453d83a8db6de311e9265672e9b10b0337284" # Be careful with copying and modifying arrays in NumPy! # # # `r2` is a slice of `r` # + _uuid="e6bb5dab7586dd4cfbc93b2cee05c57aae1e8518" r2 = r[:3,:3] r2 # + [markdown] _uuid="65d42e0a4cade74ae9b2b587f182efaa8a4b1dbf" #
    # Set this slice's values to zero ([:] selects the entire array) # + _uuid="9026a103a457054124258eefca3492008f884ef4" r2[:] = 0 r2 # + [markdown] _uuid="f73d4004c9181b5cffddf6f13b452d7e40cea3f9" #
    # `r` has also been changed! # + _uuid="80c0d26d5cb0374f82929e3d79c183f5e116f4ea" r # + [markdown] _uuid="c8a21fd94e5dfcd322ec0017e89302533ac3cf2d" #
    # To avoid this, use `r.copy` to create a copy that will not affect the original array # + _uuid="d2b2e17295f75a5a3b8dacf5ca65b4c4f3b6ca47" r_copy = r.copy() r_copy # + [markdown] _uuid="2ef3568f7ac932cc45931854410cbb9e4d909df8" #
    # Now when r_copy is modified, r will not be changed. # + _uuid="4f7fa9e1b65eea1e9514fbd6f91a2720822430ba" r_copy[:] = 10 print(r_copy, '\n') print(r) # + [markdown] _uuid="752b09b0f8f1606d55bf0423d3f2d9cb162d3ce9" #
    # ## 3-7 Numpy: Iterating Over Arrays # + [markdown] _uuid="6a7f8c639e5f8cbdbd0cf2fe4c78504955ec2ccb" # Let's create a new 4 by 3 array of random numbers 0-9. # + _uuid="c1496dab8d8f90434e8c44e63c225bfb3ca9713f" test = np.random.randint(0, 10, (4,3)) test # + [markdown] _uuid="bf971c58819a069592b3d5c195cf9e21faa4797b" #
    # Iterate by row: # + _uuid="993bb7b7f1be5a0b088caac550c32257dd1c9297" for row in test: print(row) # + [markdown] _uuid="f5b7fa4289acc25093efbbc99fe36355073bcd02" #
    # Iterate by index: # + _uuid="9de357edca10cf7708e1c5b37ef5a0ad337fcbbf" for i in range(len(test)): print(test[i]) # + [markdown] _uuid="00ed3013376dff60b43f6f1eb5091580279063ad" #
    # Iterate by row and index: # + _uuid="a75a7881baf87a49a3f48236fc9a9281f2ace310" for i, row in enumerate(test): print('row', i, 'is', row) # + [markdown] _uuid="08f51011e8d4da3ab72a33ccb6daca3c97832eb4" #
    # Use `zip` to iterate over multiple iterables. # + _uuid="63709cf63fc6e5596bc540055a287d56a57c55df" test2 = test**2 test2 # + _uuid="88776121d1063744cdb0d1df15320af35a40690f" for i, j in zip(test, test2): print(i,'+',j,'=',i+j) # + [markdown] _uuid="c2ec9941ed71b0d102881252688723804c536b65" #
    # ## 3-8 Numpy: The Series Data Structure # One-dimensional ndarray with axis labels (including time series) # + _uuid="ff60c47c0ee85b3534fa0eeb1fc6c18951e13a93" animals = ['Tiger', 'Bear', 'Moose'] pd.Series(animals) # + _uuid="3b9d3593c2f04eb52439d1e2f6eaced42103b385" numbers = [1, 2, 3] pd.Series(numbers) # + _uuid="ac6b145a659c5c6e143e47a726be2d2bc904ea05" animals = ['Tiger', 'Bear', None] pd.Series(animals) # + _uuid="608363b045521c88d96135d4651624753a0a97f8" numbers = [1, 2, None] pd.Series(numbers) # + _uuid="81c84dd1739a442c3eca83911b9e9cd146beccf1" import numpy as np np.nan == None # + _uuid="ee829b3241dc2b99e7aeb8b16daa61f43516a08e" np.nan == np.nan # + _uuid="c39a1b7d020fa502a055e4befa580529da3d9206" np.isnan(np.nan) # + _uuid="703803f890a0c351d5122b4f509c0835b949481d" sports = {'Archery': 'Bhutan', 'Golf': 'Scotland', 'Sumo': 'Japan', 'Taekwondo': 'South Korea'} s = pd.Series(sports) s # + _uuid="6a4a03374c688db5bc1cb49cf0ae67a166f33ab5" s.index # + _uuid="16a45c1d82eb06da0e940aa9304455d5b6629723" s = pd.Series(['Tiger', 'Bear', 'Moose'], index=['India', 'America', 'Canada']) s # + _uuid="d5c017e5124d3c8c62bbe91b16ae5e2fb76f2cd5" sports = {'Archery': 'Bhutan', 'Golf': 'Scotland', 'Sumo': 'Japan', 'Taekwondo': 'South Korea'} s = pd.Series(sports, index=['Golf', 'Sumo', 'Hockey']) s # + [markdown] _uuid="91a03b68b0698ad0f0736d0a8106bb4c2023437d" #
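# Since `np.nan == np.nan` is `False`, the pandas-native way to test for missing values is `pd.isna` (or `pd.notna`); a minimal sketch:
# +
print(pd.isna(np.nan), pd.isna(None))  # both count as missing in pandas
pd.Series([1, 2, None]).isna()         # element-wise missing-value mask
# + [markdown]
#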
    # # 3-9 Numpy: Querying a Series # + _uuid="6c44f80eadf6e60f8d1d966594169e6e579fd91a" sports = {'Archery': 'Bhutan', 'Golf': 'Scotland', 'Sumo': 'Japan', 'Taekwondo': 'South Korea'} s = pd.Series(sports) s # + _uuid="560e0377aa2d53cc14f61a42a6492f99932a6eab" s.iloc[3] # + _uuid="2c0e34116ab3b36363b7d7ffde48d3520b493d50" s.loc['Golf'] # + _uuid="12dce7548346ce489c21f50e02a30e4a7dad6a81" s[3] # + _uuid="a44e0fd0b5ea294bfde2383f3ed660bb4dc7c032" s['Golf'] # + _uuid="97f106d843bd9da560c4aaa36e9a2baf0fd5f820" sports = {99: 'Bhutan', 100: 'Scotland', 101: 'Japan', 102: 'South Korea'} s = pd.Series(sports) # + _uuid="6c8d10a115a42956c1bf95a925006f1a8f44ac77" s = pd.Series([100.00, 120.00, 101.00, 3.00]) s # + _uuid="a877d83b4ea29606ab220884b742b9436640a87b" total = 0 for item in s: total+=item print(total) # + _uuid="eb4aaac9d42d6ad0df5b0d766bf142cec13ca640" total = np.sum(s) print(total) # + _uuid="c5ec441b291558581a79c6bda7d67ee7df640ac4" #this creates a big series of random numbers s = pd.Series(np.random.randint(0,1000,10000)) s.head() # + _uuid="cafe2341bfb1b9419fcff2624e9973a12beb579a" len(s) # + _uuid="b0028ee6c78f4715a1848458c3b5ef2ea75e601b" # %%timeit -n 100 summary = 0 for item in s: summary+=item # + _uuid="bbc4da9b2ba74a04d797ff837bddf7553b4625d9" # %%timeit -n 100 summary = np.sum(s) # + _uuid="a9ef0f8d0db28ffb3f5c92c86dbcffd8b9d01840" s+=2 #adds two to each item in s using broadcasting s.head() # + _uuid="f092ba84abb6a9172a23030b25caa23b7dea1c3f" for label, value in s.iteritems(): s.set_value(label, value+2) s.head() # + _uuid="979c2ea48abace1804d37a316ecf8a7cb0b53aa2" # %%timeit -n 10 s = pd.Series(np.random.randint(0,1000,100)) for label, value in s.iteritems(): s.loc[label]= value+2 # + _uuid="84c6c0358bf55fd26254ae784ae2b60b3d7c3526" # %%timeit -n 10 s = pd.Series(np.random.randint(0,1000,100)) s+=2 # + _uuid="35dabb561b6b3aaf6520311c68004880cadf5f7d" s = pd.Series([1, 2, 3]) s.loc['Animal'] = 'Bears' s # + _uuid="1f40d7bded3fd8cea73b8ab31e945929d75a57f4" original_sports = pd.Series({'Archery': 'Bhutan', 'Golf': 'Scotland', 'Sumo': 'Japan', 'Taekwondo': 'South Korea'}) cricket_loving_countries = pd.Series(['Australia', 'Barbados', 'Pakistan', 'England'], index=['Cricket', 'Cricket', 'Cricket', 'Cricket']) all_countries = original_sports.append(cricket_loving_countries) # + _uuid="95fa6b69a0927865a896c7a518ba848c3b994cad" original_sports # + _uuid="bb617195f684747e2a79b87eb7307e832f2bfe50" cricket_loving_countries # + _uuid="5274031be8720d734db1866f20ad048c4c2ea7da" all_countries # + _uuid="33c9d9f54962decaa00a7328b124252ddcf2b661" all_countries.loc['Cricket'] # + [markdown] _uuid="7653d84bad68f6370b1bdf484a2e9b6fb5982977" #
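# A version note on the cells above: `Series.iteritems`, `Series.set_value` and `Series.append` have since been deprecated and removed from newer pandas releases (assumption: pandas 2.x). Roughly equivalent modern spellings, using a throw-away series, look like this:
# +
s_demo = pd.Series(np.random.randint(0, 1000, 100))

# instead of s.iteritems() and s.set_value(label, value + 2):
for label, value in s_demo.items():
    s_demo.loc[label] = value + 2

# instead of original_sports.append(cricket_loving_countries):
pd.concat([original_sports, cricket_loving_countries]).loc['Cricket']
# + [markdown]
#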
    # ## 4- Pandas:The DataFrame Data Structure # You'll hone your pandas skills by learning how to organize, reshape, and aggregate multiple data sets to answer your specific questions. # **Pandas**: # Two-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure. # # Pandas is capable of many tasks including: # # Reading/writing many different data formats # Selecting subsets of data # Calculating across rows and down columns # Finding and filling missing data # Applying operations to independent groups within the data # Reshaping data into different forms # Combing multiple datasets together # Advanced time-series functionality # Visualization through matplotlib and seaborn # Although pandas is very capable, it does not provide functionality for the entire data science pipeline. Pandas is typically the intermediate tool used for data exploration and cleaning squashed between data capturing and storage, and data modeling and predicting. # ###### [Go to top](#top) # + _uuid="4e82246f590c37992f9190583cdb0035d93c0dcd" purchase_1 = pd.Series({'Name': 'Chris', 'Item Purchased': 'Dog Food', 'Cost': 22.50}) purchase_2 = pd.Series({'Name': 'Kevyn', 'Item Purchased': 'Kitty Litter', 'Cost': 2.50}) purchase_3 = pd.Series({'Name': 'Vinod', 'Item Purchased': 'Bird Seed', 'Cost': 5.00}) df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2']) df.head() # + _uuid="835c6bb2dba772d11345bb2d5e40a992999d31b6" df.loc['Store 2'] # + _uuid="0b48c39ad30936af6ce6932597073451c6b2bac9" type(df.loc['Store 2']) # + _uuid="c929565dc25461fd914c756431df400e6cdf058b" df.loc['Store 1'] # + _uuid="a11d22702f1be476a4d443cf811fe0a07a5dbbe4" df.loc['Store 1', 'Cost'] # + _uuid="adb35d3ac7e00c00ced7236aa7c3eaab3c85e675" df.T # + _uuid="0e478aabe2d50b04f5bd9cca220150476adc8b1f" df.T.loc['Cost'] # + _uuid="2ed1c53b43bb00bc07a95642071be8ef2b5aa779" df['Cost'] # + _uuid="084454d6fcc47a738808b0c832d59aebc49be70c" df.loc['Store 1']['Cost'] # + _uuid="fdbddb30cb59ed8462a07795aaadbcd9ad2b1aa4" df.loc[:,['Name', 'Cost']] # + _uuid="2541e9071816bd8f496ddf62b77cccb8fe325fbc" df.drop('Store 1') # + _uuid="5fa5a22b81b92bae274d3a9afd76283ded17b478" df # + _uuid="918e58d124508bd5edff0ed84ba6a4c252cdec3d" copy_df = df.copy() copy_df = copy_df.drop('Store 1') copy_df # + _uuid="0e3830ddb755492607166e3975ac6e18c9436422" copy_df.drop # + _uuid="946e48754fa8ed9914ef62b1c7049260861098db" del copy_df['Name'] copy_df # + _uuid="de93b5c96c8b546bfb01b57d75347f5045ea01d1" df['Location'] = None df # + _uuid="42414bc5b478108d59aaf9b5dff463c95904097d" costs = df['Cost'] costs # + _uuid="76922dc612283caa9821f793abac91dad9328c75" costs+=2 costs # + _uuid="3ea1881948d05207d7dc2e1805c446adfa544959" df # + [markdown] _uuid="4d339cee9608b148762d7ad3068c362bbc9454f7" #
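# One caution about the last few cells: `costs = df['Cost']` can hand back a view into `df`, which is why `costs += 2` also changed `df` (the exact behaviour depends on the pandas version). If that is not what you want, take an explicit copy, just as with the NumPy arrays earlier:
# +
costs_copy = df['Cost'].copy()
costs_copy += 2
print(costs_copy)
print(df['Cost'])  # not affected by the increment on the copy
# + [markdown]
#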
    # # 4-1 Pandas:Dataframe Indexing and Loading # # As a Data Scientist, you'll often find that the data you need is not in a single file. It may be spread across a number of text files, spreadsheets, or databases. You want to be able to import the data of interest as a collection of DataFrames and figure out how to combine them to answer your central questions. # ###### [Go to top](#top) # + _uuid="53fa2f4cb18784de6d077871a606dcf5b1511862" df = pd.read_csv('train.csv') df.head() # + _uuid="01a8c7bc0a10635dc10dd56ba6edcbe595013772" df.columns # + _uuid="ffb1b09a5a8953e7dae65425890b652c214b1fb5" # Querying a DataFrame # + _uuid="258b2c7201efba77d84b92286bbe69a6af240ca8" df['Survived'] > 0 # + _uuid="739f5037a2fcdd548abc5e68f5abcba3fcdb68e4" only_Survived = df.where(df['Survived'] > 0) only_Survived.head() # + _uuid="e420645f2daa14d2bf12b3370438b5c1741f5c52" only_Survived['Survived'].count() # + _uuid="2c3561ac4d86a22f3984b11ebe1200100fc95417" df['Survived'].count() # + _uuid="86e547ab11dacd87ccfe4657f8eb11fd9fcf3fef" only_Survived = only_Survived.dropna() only_Survived.head() # + _uuid="cf0829126fff5c075151fcc5418bbe9b945c14c9" only_Survived = df[df['Survived'] > 0] only_Survived.head() # + _uuid="5b957eae6c82a982ff9672d321deadf637aa421c" len(df[(df['Survived'] > 0) | (df['Survived'] > 0)]) # + _uuid="d076fe5f6dade0e49c2a35c7f9c64baeaf42a59d" df[(df['Survived'] > 0) & (df['Survived'] == 0)] # + _uuid="717b27412a1a852c84f820272d8bf94a45022aca" # Indexing Dataframes # + _uuid="822efde2bbb058575dea289d057368d1af7d1394" df.head() # + _uuid="b11ed5fbe0e8d35b303125afc78b04abf4dc0190" df['PassengerId'] = df.index df = df.set_index('Survived') df.head() # + _uuid="4848977e538e02e0444862e632101a9d6bc97742" df = df.reset_index() df.head() # + _uuid="5da1a958ccd43f5f5427415dc8682ccbbd589b3d" df = pd.read_csv('train.csv') df.head() # + _uuid="9c818adf02056d59d534e4cb790dd6ce74c2b861" df['Age'].unique() # + _uuid="c1adb2169bf24831daaa59655083e069d5fda4a5" df=df[df['Age'] == 50] df.head() # + [markdown] _uuid="94d2eb99802e00e342e3a046f9b26a06a3c501a7" #
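# The two boolean masks combined above use the same condition twice; as a slightly more meaningful sketch of combining masks (it only assumes the `Survived` and `Age` columns used above):
# +
titanic = pd.read_csv('train.csv')
# passengers who survived AND were older than 50
older_survivors = titanic[(titanic['Survived'] == 1) & (titanic['Age'] > 50)]
older_survivors.head()
# + [markdown]
#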
    # # 4-2 Pandas:Missing values # # + _uuid="6946487c3ba7a29af57472c6fe03cde0ababd341" df = pd.read_csv('train.csv') df # + _uuid="30cc0a09aa17b60a69ddccebbc0b6ceaf6077bfb" df.fillna # + _uuid="be537539e67066ad45f9217988aa7ca7c23a370b" df = df.set_index('PassengerId') df = df.sort_index() df # + _uuid="d2681f382b87e0eb47c41745576c2d35a8f55f5b" df = df.reset_index() df = df.set_index(['PassengerId', 'Survived']) df # + _uuid="bea1dfdc973fe52315d701fecb6abb28edaecb81" df = df.fillna(method='ffill') df.head() # + [markdown] _uuid="d79a17c1a5930de30ef9c238bf143cfc9962d24f" #
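# A version note: `fillna(method='ffill')` is deprecated in newer pandas releases (assumption: pandas 2.x); `DataFrame.ffill()` does the same thing, and filling a numeric column with a statistic of that column is often a better choice. A minimal sketch:
# +
df_ffill = df.ffill()                              # same effect as fillna(method='ffill')
age_filled = df['Age'].fillna(df['Age'].median())  # fill missing ages with the median age
age_filled.head()
# + [markdown]
#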
    # # 4-3 Pandas :Merging Dataframes # pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations. # # + _uuid="4809bb7be74c5ef657c069446ecffb409937f952" df = pd.DataFrame([{'Name': 'MJ', 'Item Purchased': 'Sponge', 'Cost': 22.50}, {'Name': 'Kevyn', 'Item Purchased': 'Kitty Litter', 'Cost': 2.50}, {'Name': 'Filip', 'Item Purchased': 'Spoon', 'Cost': 5.00}], index=['Store 1', 'Store 1', 'Store 2']) df # + _uuid="f30d304abf7b7345a4e6e7c1105e190dd1a621d2" df['Date'] = ['December 1', 'January 1', 'mid-May'] df # + _uuid="fbedab0057046e0510dd1331f03ffc18c9ba520b" df['Delivered'] = True df # + _uuid="8ed708570d219bad3637b3a907bb0a00be33b939" df['Feedback'] = ['Positive', None, 'Negative'] df # + _uuid="fc549de9e14ccf0504553ee8960442180ba895b0" adf = df.reset_index() adf['Date'] = pd.Series({0: 'December 1', 2: 'mid-May'}) adf # + _uuid="80e056750b87aa3d692e6f3aa07ca4e40ce05512" staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR'}, {'Name': 'Sally', 'Role': 'Course liasion'}, {'Name': 'James', 'Role': 'Grader'}]) staff_df = staff_df.set_index('Name') student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business'}, {'Name': 'Mike', 'School': 'Law'}, {'Name': 'Sally', 'School': 'Engineering'}]) student_df = student_df.set_index('Name') print(staff_df.head()) print() print(student_df.head()) # + _uuid="c0e141f46ea59c406f9c75a501139d808720bea6" pd.merge(staff_df, student_df, how='outer', left_index=True, right_index=True) # + _uuid="117f0b5ad0687b45deead65bfd2cd2e2b42aec7a" pd.merge(staff_df, student_df, how='inner', left_index=True, right_index=True) # + _uuid="0fcc1d0780a0b786ffbb77e88a1e1bdc5f415a4a" pd.merge(staff_df, student_df, how='left', left_index=True, right_index=True) # + _uuid="d2492a6a8108c115d1f9c980a8b01242cc695a37" pd.merge(staff_df, student_df, how='right', left_index=True, right_index=True) # + _uuid="7876e8102392731c7d48123c8c5dced6693a32d2" staff_df = staff_df.reset_index() student_df = student_df.reset_index() pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name') # + _uuid="08c724f0bf11154a1244924bd7ca0b195fff3a21" staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR', 'Location': 'State Street'}, {'Name': 'Sally', 'Role': 'Course liasion', 'Location': 'Washington Avenue'}, {'Name': 'James', 'Role': 'Grader', 'Location': 'Washington Avenue'}]) student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business', 'Location': '1024 Billiard Avenue'}, {'Name': 'Mike', 'School': 'Law', 'Location': 'Fraternity House #22'}, {'Name': 'Sally', 'School': 'Engineering', 'Location': '512 Wilson Crescent'}]) pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name') # + _uuid="da1dc7a2a97543ef9d338dd45fc74b8e66f6221e" staff_df = pd.DataFrame([{'First Name': 'Kelly', 'Last Name': 'Desjardins', 'Role': 'Director of HR'}, {'First Name': 'Sally', 'Last Name': 'Brooks', 'Role': 'Course liasion'}, {'First Name': 'James', 'Last Name': 'Wilde', 'Role': 'Grader'}]) student_df = pd.DataFrame([{'First Name': 'James', 'Last Name': 'Hammond', 'School': 'Business'}, {'First Name': 'Mike', 'Last Name': 'Smith', 'School': 'Law'}, {'First Name': 'Sally', 'Last Name': 'Brooks', 'School': 'Engineering'}]) staff_df student_df pd.merge(staff_df, student_df, how='inner', left_on=['First Name','Last Name'], right_on=['First Name','Last Name']) # + [markdown] 
_uuid="c47476edc934d4d851254db98b156de91018a0c8" #
    # # 4-4 Idiomatic Pandas: Making Code Pandorable # # + _uuid="2cd537dc9b3bd93a924d808ef8d6377853dae984" df = pd.read_csv('../input/train.csv') df # + _uuid="6ec456b0ba8db3f621911ad4c3a36008b04cfc7f" df = df[df['Age']==50] df.set_index(['PassengerId','Survived'], inplace=True) df.rename(columns={'Pclass': 'pclass'}) # + [markdown] _uuid="ef246335b484cd563b29cafd6c178820c04a6f0f" #
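# The cell above can also be written as a single chained ("pandorable") expression; a sketch of the same steps using method chaining:
# +
(pd.read_csv('../input/train.csv')
   .query('Age == 50')
   .set_index(['PassengerId', 'Survived'])
   .rename(columns={'Pclass': 'pclass'}))
# + [markdown]
#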
    # ## 4-5 Pandas :Group by # + _uuid="b45f3ee990bce0101774749b4b81e24b81911ad6" df = pd.read_csv('../input/train.csv') df = df[df['Age']==50] df # + _uuid="1c9afa54368039e6439950f57d39e5e0ae1faf7a" df.head() # + [markdown] _uuid="49994af42a822d8e0ad579866d12fdcd3a7b65ba" #
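# The cells above load and filter the data but do not yet show an actual `groupby`; here is a minimal sketch of a group-by aggregation on this dataset (it only assumes the `Pclass` and `Survived` columns used elsewhere in this kernel):
# +
df_full = pd.read_csv('../input/train.csv')

# survival rate and passenger count per ticket class
df_full.groupby('Pclass')['Survived'].agg(['mean', 'count'])
# + [markdown]
#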
    # ## 4-6 Pandas:Scales # # + _uuid="ef0c2c453afcd5f43e37f27dd3dccd01aa7e33c0" df = pd.DataFrame(['A+', 'A', 'A-', 'B+', 'B', 'B-', 'C+', 'C', 'C-', 'D+', 'D'], index=['excellent', 'excellent', 'excellent', 'good', 'good', 'good', 'ok', 'ok', 'ok', 'poor', 'poor']) df.rename(columns={0: 'Grades'}, inplace=True) df # + _uuid="cc490b8a1f851253430185eaab0d8a5ac1b843b8" df['Grades'].astype('category').head() # + _uuid="388f3454ed9fcc1b88c898e329c0c2d4b062df1f" grades = df['Grades'].astype('category', categories=['D', 'D+', 'C-', 'C', 'C+', 'B-', 'B', 'B+', 'A-', 'A', 'A+'], ordered=True) grades.head() # + _uuid="51a0c5d18dbc6c395b900c34e9e590591d671676" grades > 'C' # + [markdown] _uuid="d205d889ce4d23e00f2ced7864c044e9d3d3ec84" #
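# A version note: passing `categories=...` and `ordered=...` directly to `astype('category', ...)` has been removed from newer pandas releases; the modern spelling goes through `CategoricalDtype`. A sketch:
# +
from pandas.api.types import CategoricalDtype

grade_dtype = CategoricalDtype(categories=['D', 'D+', 'C-', 'C', 'C+', 'B-',
                                           'B', 'B+', 'A-', 'A', 'A+'],
                               ordered=True)
grades_modern = df['Grades'].astype(grade_dtype)
grades_modern > 'C'
# + [markdown]
#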
    # ## 4-7 Pandas:Date Functionality # ###### [Go to top](#top) # + [markdown] _uuid="a466f75d44bdabf52ddb21d0c173d9421afce7d9" # ### 4-7-1 Timestamp # + _uuid="80a449e3cac139ac3b9697dba331363538a1a65f" pd.Timestamp('9/1/2016 10:05AM') # + [markdown] _uuid="c39545f2495d3f837c6b75dfd57b1a53c3d27d75" # ### 4-7-2 Period # + _uuid="3c160f45b74a5e4faecbf6661978e9c59e933e14" pd.Period('1/2016') # + _uuid="124c5a5ba7872ab55b3cd1fbbe18669747124eea" pd.Period('3/5/2016') # + [markdown] _uuid="f19b87e81ba7aa0d8adb850d75d9452cf3a73ddf" # ### 4-7-3 DatetimeIndex # + _uuid="68a99def5b8fbd9839cf35667a3481ed23c476c0" t1 = pd.Series(list('abc'), [pd.Timestamp('2016-09-01'), pd.Timestamp('2016-09-02'), pd.Timestamp('2016-09-03')]) t1 # + _uuid="073eeeb51e53b50660a822eb7484d4e8b72a7dfa" type(t1.index) # + [markdown] _uuid="c891236e512838ed6088e02c88dd888029b226a3" # ### 4-7-4 PeriodIndex # + _uuid="5a19abe4e4a7324a8f7565c66f8270ab9eb3cae6" t2 = pd.Series(list('def'), [pd.Period('2016-09'), pd.Period('2016-10'), pd.Period('2016-11')]) t2 # + _uuid="522200c3bbb47e10177f1c63a0ed3bfb49cbcf47" type(t2.index) # + [markdown] _uuid="50e147a4abff8fd3014f3fb2a105f516b4a5ea2f" # ## 4-8 Pandas: Converting to Datetime # + _uuid="862d2bc5c5b430ed2be1292b9d3b5efe8a3c9cc1" d1 = ['2 June 2013', 'Aug 29, 2014', '2015-06-26', '7/12/16'] ts3 = pd.DataFrame(np.random.randint(10, 100, (4,2)), index=d1, columns=list('ab')) ts3 # + _uuid="840963eeb8ca7bfe0a5d3167211e74d48446fe3a" ts3.index = pd.to_datetime(ts3.index) ts3 # + _uuid="e2482002fae0947549bdd81a3da93c8e0cde40fe" pd.to_datetime('4.7.12', dayfirst=True) # + _uuid="cae5847092e33cba7b657a90810a7ef35ae307e4" pd.Timestamp('9/3/2016')-pd.Timestamp('9/1/2016') # + [markdown] _uuid="c54a7d88507df86710fc585582b2074cb8d5aa5a" # ### 4-8-1 Timedeltas # + _uuid="fe333509538c1ec2bf81fda4613344cfa699b410" pd.Timestamp('9/3/2016')-pd.Timestamp('9/1/2016') # + _uuid="08e10cce4428eaa329eb84677c755b0307488bfa" pd.Timestamp('9/2/2016 8:10AM') + pd.Timedelta('12D 3H') # + [markdown] _uuid="a73485dcf7310c754e69ab5ed802d7e466684242" # ### 4-8-2 Working with Dates in a Dataframe # # + _uuid="f5bcbc00ce23e50d346495588428e5d6f430a8df" dates = pd.date_range('10-01-2016', periods=9, freq='2W-SUN') dates # + _uuid="41df9aeeb10a2d404f1eada62b4a4066e0f37af0" df.index.ravel # + [markdown] _uuid="2f7c5d5041dc630abeaff47ff5a96a0dd53db8e5" #
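# The `dates` range above would normally be used as a DataFrame index; a small sketch with made-up random data showing frequency conversion via `resample`:
# +
ts_df = pd.DataFrame({'Count 1': np.random.randint(0, 100, 9)}, index=dates)

# downsample the bi-weekly observations to monthly means
ts_df.resample('M').mean()
# + [markdown]
#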
    # ## 4-9 Distributions in Pandas # ###### [Go to top](#top) # + _uuid="96be284ba6d63fd0b1db5641a21d75aacdfb7da4" np.random.binomial(1, 0.5) # + _uuid="4ae2c7ff2cf941bae62be23864a1685a196551d0" np.random.binomial(1000, 0.5)/1000 # + _uuid="c5e89f6f0c7376c164f80dc4c5d582b0a639e254" chance_of_tornado = 0.01/100 np.random.binomial(100000, chance_of_tornado) # + _uuid="7804824638e0e6e97ebe5a252806d51a4e5cac2c" chance_of_tornado = 0.01 tornado_events = np.random.binomial(1, chance_of_tornado, 1000000) two_days_in_a_row = 0 for j in range(1,len(tornado_events)-1): if tornado_events[j]==1 and tornado_events[j-1]==1: two_days_in_a_row+=1 print('{} tornadoes back to back in {} years'.format(two_days_in_a_row, 1000000/365)) # + _uuid="4583c0a0ec05e914af1c01e2c901da41828ad653" np.random.uniform(0, 1) # + _uuid="365cfd6e6602f46bc98c13d10a47ad1d98af978e" np.random.normal(0.75) # + _uuid="8091adda4fbdccffb73424476c68a7aa0fb53c9a" distribution = np.random.normal(0.75,size=1000) np.sqrt(np.sum((np.mean(distribution)-distribution)**2)/len(distribution)) # + _uuid="9eed1ac016763a69de465e82736d30d7e5b1d028" np.std(distribution) # + _uuid="ab89e527fcd900577d879b50272d400ae0bdbaa0" stats.kurtosis(distribution) # + _uuid="c974e2563460f9b4942ab8c9b1d1783479779fa5" stats.skew(distribution) # + _uuid="de01fc7f82016eeb5f3b206788e7213ad373a441" chi_squared_df2 = np.random.chisquare(2, size=10000) stats.skew(chi_squared_df2) # + _uuid="d8cce6278eaf4b4865efebf34dfb4ec2d9b684ca" chi_squared_df5 = np.random.chisquare(5, size=10000) stats.skew(chi_squared_df5) # + _uuid="1c30298fc2dcd63dface93cb1626c5e7367d4698" output = plt.hist([chi_squared_df2,chi_squared_df5], bins=50, histtype='step', label=['2 degrees of freedom','5 degrees of freedom']) plt.legend(loc='upper right') # + [markdown] _uuid="4f02c235284ef14f0007400125119ba36b81e325" #
    # ## 5- Matplotlib # # This Matplotlib tutorial takes you through the basics Python data visualization: the anatomy of a plot, pyplot and pylab, and much more # ###### [Go to top](#top) # + [markdown] _uuid="857d9a02e72211b4fde6f06debd3a57c8a0b1849" # You can show matplotlib figures directly in the notebook by using the `%matplotlib notebook` and `%matplotlib inline` magic commands. # # `%matplotlib notebook` provides an interactive environment. # + _uuid="0d66eabe3f4f7288f12f8f0160b2f8e914d16ba9" # because the default is the line style '-', # nothing will be shown if we only pass in one point (3,2) plt.plot(3, 2) # + _uuid="dad68ac5985444a4458259c448d1b6ffd9bf1710" # we can pass in '.' to plt.plot to indicate that we want # the point (3,2) to be indicated with a marker '.' plt.plot(3, 2, '.') # + [markdown] _uuid="18c4b411903becff1b3612730bc0287cb6ac14ad" # Let's see how to make a plot without using the scripting layer. # + _uuid="8f1bd139edb5cf1cc73f9f956d2d34844a726566" # First let's set the backend without using mpl.use() from the scripting layer from matplotlib.backends.backend_agg import FigureCanvasAgg from matplotlib.figure import Figure # create a new figure fig = Figure() # associate fig with the backend canvas = FigureCanvasAgg(fig) # add a subplot to the fig ax = fig.add_subplot(111) # plot the point (3,2) ax.plot(3, 2, '.') # save the figure to test.png # you can see this figure in your Jupyter workspace afterwards by going to # https://hub.coursera-notebooks.org/ canvas.print_png('test.png') # + [markdown] _uuid="66c8f302e0afffb4652ff2f80212610fd37ae126" # We can use html cell magic to display the image. # + _uuid="d43abba30465cee9dc1364fd2a27334c70da8598" language="html" # # + _uuid="2c76e50e3ecda1fe2144f16b9c9fdaa6bca37c61" # create a new figure plt.figure() # plot the point (3,2) using the circle marker plt.plot(3, 2, 'o') # get the current axes ax = plt.gca() # Set axis properties [xmin, xmax, ymin, ymax] ax.axis([0,6,0,10]) # + _uuid="0cc553845bb24ead8e57b5ccc59c6753a847f136" # create a new figure plt.figure() # plot the point (1.5, 1.5) using the circle marker plt.plot(1.5, 1.5, 'o') # plot the point (2, 2) using the circle marker plt.plot(2, 2, 'o') # plot the point (2.5, 2.5) using the circle marker plt.plot(2.5, 2.5, 'o') # + _uuid="41cc8f299ccc6825cf1ac49a8bc2039496bcc9a9" # get current axes ax = plt.gca() # get all the child objects the axes contains ax.get_children() # + _uuid="add973c3905be06ebae9f033070ee779f58dd785" plt.plot([1, 2, 3, 4], [10, 20, 25, 30], color='lightblue', linewidth=3) plt.scatter([0.3, 3.8, 1.2, 2.5], [11, 25, 9, 26], color='darkgreen', marker='^') plt.xlim(0.5, 4.5) plt.show() # + [markdown] _uuid="0f4decd6f93319534a203748fd8a2b9bcc154ddb" #
    # ## 5-1 Scatterplots # + _uuid="bf892e30bb09f0dc5fdf039d827ac55560b6ec06" x = np.array([1,2,3,4,5,6,7,8]) y = x plt.figure() plt.scatter(x, y) # similar to plt.plot(x, y, '.'), but the underlying child objects in the axes are not Line2D # + _uuid="9d4e979c8b366aa6e511a6d7316f63586346c649" x = np.array([1,2,3,4,5,6,7,8]) y = x # create a list of colors for each point to have # ['green', 'green', 'green', 'green', 'green', 'green', 'green', 'red'] colors = ['green']*(len(x)-1) colors.append('red') plt.figure() # plot the point with size 100 and chosen colors plt.scatter(x, y, s=100, c=colors) # + _uuid="935be30b9a6a7c78b58fe1a9c6e4690c494fb697" # convert the two lists into a list of pairwise tuples zip_generator = zip([1,2,3,4,5], [6,7,8,9,10]) print(list(zip_generator)) # the above prints: # [(1, 6), (2, 7), (3, 8), (4, 9), (5, 10)] zip_generator = zip([1,2,3,4,5], [6,7,8,9,10]) # The single star * unpacks a collection into positional arguments print(*zip_generator) # the above prints: # (1, 6) (2, 7) (3, 8) (4, 9) (5, 10) # + _uuid="d6f848fc8d0879c658a6a4e90ca533bddf3be6e3" # use zip to convert 5 tuples with 2 elements each to 2 tuples with 5 elements each print(list(zip((1, 6), (2, 7), (3, 8), (4, 9), (5, 10)))) # the above prints: # [(1, 2, 3, 4, 5), (6, 7, 8, 9, 10)] zip_generator = zip([1,2,3,4,5], [6,7,8,9,10]) # let's turn the data back into 2 lists x, y = zip(*zip_generator) # This is like calling zip((1, 6), (2, 7), (3, 8), (4, 9), (5, 10)) print(x) print(y) # the above prints: # (1, 2, 3, 4, 5) # (6, 7, 8, 9, 10) # + _uuid="b9c79e1558adbb3b19de2564376a24191766ab1f" plt.figure() # plot a data series 'Tall students' in red using the first two elements of x and y plt.scatter(x[:2], y[:2], s=100, c='red', label='Tall students') # plot a second data series 'Short students' in blue using the last three elements of x and y plt.scatter(x[2:], y[2:], s=100, c='blue', label='Short students') # + _uuid="70731835527ce5a16e4a8bd2a0017e3bbe8065ee" # add a label to the x axis plt.xlabel('The number of times the child kicked a ball') # add a label to the y axis plt.ylabel('The grade of the student') # add a title plt.title('Relationship between ball kicking and grades') # + _uuid="691db571df12646a4fe2740e55a43942ea779063" # add a legend (uses the labels from plt.scatter) plt.legend() # + _uuid="0908d7c94c14feb08f953e5cd10f31cb0081f5d3" # add the legend to loc=4 (the lower right hand corner), also gets rid of the frame and adds a title plt.legend(loc=4, frameon=False, title='Legend') # + _uuid="0eb79626b7b55559e40ca71ba5c3542466b629c9" # get children from current axes (the legend is the second to last item in this list) plt.gca().get_children() # + _uuid="f2b5d97457e3ebd63311aef5b92fc137bd1701f5" # get the legend from the current axes legend = plt.gca().get_children()[-2] # + _uuid="5334176f4177a64cb1726accb26502beb9b36626" x = np.random.randint(low=1, high=11, size=50) y = x + np.random.randint(1, 5, size=x.size) data = np.column_stack((x, y)) fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(8, 4)) ax1.scatter(x=x, y=y, marker='o', c='r', edgecolor='b') ax1.set_title('Scatter: $x$ versus $y$') ax1.set_xlabel('$x$') ax1.set_ylabel('$y$') ax2.hist(data, bins=np.arange(data.min(), data.max()), label=('x', 'y')) ax2.legend(loc=(0.65, 0.8)) ax2.set_title('Frequencies of $x$ and $y$') ax2.yaxis.tick_right() # + [markdown] _uuid="c372afa727e992b9ec097c1f172becfd06bff4ef" #
    # ## 5-2 Line Plots # + _uuid="163942ca710e3c67e8479cd02518ca37eae8ead7" linear_data = np.array([1,2,3,4,5,6,7,8]) exponential_data = linear_data**2 plt.figure() # plot the linear data and the exponential data plt.plot(linear_data, '-o', exponential_data, '-o') # + _uuid="7a4c6677b81fb82e1e7a1164ced70c4168080dbf" # plot another series with a dashed red line plt.plot([22,44,55], '--r') # + [markdown] _uuid="517e72eb500a3ba4db76467919f59ded9d3683cb" #
    # ## 5-3 Bar Charts # + _uuid="8a5ee722c74c9228af2bcece24831e1539fe97b2" plt.figure() xvals = range(len(linear_data)) plt.bar(xvals, linear_data, width = 0.3) # + _uuid="11a85a29ca9b1574536bfee91cc63122148d996e" new_xvals = [] # plot another set of bars, adjusting the new xvals to make up for the first set of bars plotted for item in xvals: new_xvals.append(item+0.3) plt.bar(new_xvals, exponential_data, width = 0.3 ,color='red') # + _uuid="f6597010276961362ce22b84d49a9ef5f46521f6" from random import randint linear_err = [randint(0,15) for x in range(len(linear_data))] # This will plot a new set of bars with errorbars using the list of random error values plt.bar(xvals, linear_data, width = 0.3, yerr=linear_err) # + _uuid="8db2b0bb5d4160f7954bbcd7d9e53d29c68ef342" # stacked bar charts are also possible plt.figure() xvals = range(len(linear_data)) plt.bar(xvals, linear_data, width = 0.3, color='b') plt.bar(xvals, exponential_data, width = 0.3, bottom=linear_data, color='r') # + _uuid="ee208f2f850122b6e01ff1236b2915914fe59466" # or use barh for horizontal bar charts plt.figure() xvals = range(len(linear_data)) plt.barh(xvals, linear_data, height = 0.3, color='b') plt.barh(xvals, exponential_data, height = 0.3, left=linear_data, color='r') # + _uuid="acd3cfbdedddae19d7a2bbc9b02fefba21f595de" # Initialize the plot fig = plt.figure(figsize=(20,10)) ax1 = fig.add_subplot(121) ax2 = fig.add_subplot(122) # or replace the three lines of code above by the following line: #fig, (ax1, ax2) = plt.subplots(1,2, figsize=(20,10)) # Plot the data ax1.bar([1,2,3],[3,4,5]) ax2.barh([0.5,1,2.5],[0,1,2]) # Show the plot plt.show() # + _uuid="db3db00430b8e5ddb633f3c7c82a6bf63ccb0cbc" plt.figure() # subplot with 1 row, 2 columns, and current axis is 1st subplot axes plt.subplot(1, 2, 1) linear_data = np.array([1,2,3,4,5,6,7,8]) plt.plot(linear_data, '-o') # + _uuid="ce5372a8e09fd5379b1b33c7b451ec1b983f8616" exponential_data = linear_data**2 # subplot with 1 row, 2 columns, and current axis is 2nd subplot axes plt.subplot(1, 2, 2) plt.plot(exponential_data, '-o') # + _uuid="cc4de0125429bdcd3227e894d083c0425934b228" # plot exponential data on 1st subplot axes plt.subplot(1, 2, 1) plt.plot(exponential_data, '-x') # + _uuid="0b565e5356d99d5875de88a853710ee2dd3c4a53" plt.figure() ax1 = plt.subplot(1, 2, 1) plt.plot(linear_data, '-o') # pass sharey=ax1 to ensure the two subplots share the same y axis ax2 = plt.subplot(1, 2, 2, sharey=ax1) plt.plot(exponential_data, '-x') # + _uuid="ec35c2bab0a1debaa6420f15b929d127d0d5f8cd" plt.figure() # the right hand side is equivalent shorthand syntax plt.subplot(1,2,1) == plt.subplot(121) # + _uuid="43bb9cbc7acd94927be086a66ad913bd0a796fc5" # create a 3x3 grid of subplots fig, ((ax1,ax2,ax3), (ax4,ax5,ax6), (ax7,ax8,ax9)) = plt.subplots(3, 3, sharex=True, sharey=True) # plot the linear_data on the 5th subplot axes ax5.plot(linear_data, '-') # + _uuid="ec95c476865ba15ca11e2ef4a1ba466a9212f9b9" # set inside tick labels to visible for ax in plt.gcf().get_axes(): for label in ax.get_xticklabels() + ax.get_yticklabels(): label.set_visible(True) plt.show() # + _uuid="161be784e0c413060709d21a160985ad798b40f8" # necessary on some systems to update the plot plt.gcf().canvas.draw() plt.show() # + [markdown] _uuid="487fb5e77983c55ae891c3a99537d6cef0450b74" #
    # ## 5-4 Histograms # + _uuid="42551850b478e274b9f78d4f6ef717c636242616" # create 2x2 grid of axis subplots fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True) axs = [ax1,ax2,ax3,ax4] # draw n = 10, 100, 1000, and 10000 samples from the normal distribution and plot corresponding histograms for n in range(0,len(axs)): sample_size = 10**(n+1) sample = np.random.normal(loc=0.0, scale=1.0, size=sample_size) axs[n].hist(sample) axs[n].set_title('n={}'.format(sample_size)) # + _uuid="39b84012b223069dd4cc6f1441d2ad0f585218bf" # repeat with number of bins set to 100 fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True) axs = [ax1,ax2,ax3,ax4] for n in range(0,len(axs)): sample_size = 10**(n+1) sample = np.random.normal(loc=0.0, scale=1.0, size=sample_size) axs[n].hist(sample, bins=100) axs[n].set_title('n={}'.format(sample_size)) # + _uuid="4def4012769663b5665f7b70a077cf839b7793f1" plt.figure() Y = np.random.normal(loc=0.0, scale=1.0, size=10000) X = np.random.random(size=10000) plt.scatter(X,Y) # + _uuid="f6304a7d71b6edf6894e93bc62ad9ee4ffb0cce1" # use gridspec to partition the figure into subplots import matplotlib.gridspec as gridspec plt.figure() gspec = gridspec.GridSpec(3, 3) top_histogram = plt.subplot(gspec[0, 1:]) side_histogram = plt.subplot(gspec[1:, 0]) lower_right = plt.subplot(gspec[1:, 1:]) # + _uuid="2a8f66c5169fcabafafa104e87897f175624245e" Y = np.random.normal(loc=0.0, scale=1.0, size=10000) X = np.random.random(size=10000) lower_right.scatter(X, Y) top_histogram.hist(X, bins=100) s = side_histogram.hist(Y, bins=100, orientation='horizontal') # + _uuid="97062a938d44626006bd9e24b5893fdf9d98155d" # clear the histograms and plot normed histograms top_histogram.clear() top_histogram.hist(X, bins=100, normed=True) side_histogram.clear() side_histogram.hist(Y, bins=100, orientation='horizontal', normed=True) # flip the side histogram's x axis side_histogram.invert_xaxis() # + _uuid="e6b1208bb887180b8f0363c9e82cc6fcbf387698" # change axes limits for ax in [top_histogram, lower_right]: ax.set_xlim(0, 1) for ax in [side_histogram, lower_right]: ax.set_ylim(-5, 5) # + [markdown] _uuid="2bc0c6fc0bb19748e9e87603e3207f75ffa9b565" #
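# A version note on the normed histograms above: the `normed=True` argument was removed from `plt.hist` in newer matplotlib releases; `density=True` is its replacement. Sketch:
# +
plt.figure()
plt.hist(Y, bins=100, density=True)  # normalized so the bar areas sum to 1
plt.show()
# + [markdown]
#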
    # ## 5-5 Box and Whisker Plots # + _uuid="94dad21ec08e2633dacb64a89e5c807145042994" normal_sample = np.random.normal(loc=0.0, scale=1.0, size=10000) random_sample = np.random.random(size=10000) gamma_sample = np.random.gamma(2, size=10000) df = pd.DataFrame({'normal': normal_sample, 'random': random_sample, 'gamma': gamma_sample}) # + _uuid="87c08a9f914647f1735cb4b835b80f645685ef1e" df.describe() # + _uuid="9f4b288fe4b8ab78e6ad788e4bcfb5931920fcf2" plt.figure() # create a boxplot of the normal data, assign the output to a variable to supress output _ = plt.boxplot(df['normal'], whis='range') # + _uuid="9dc56a6415be6584fba51630ced26b0aaa486a09" # clear the current figure plt.clf() # plot boxplots for all three of df's columns _ = plt.boxplot([ df['normal'], df['random'], df['gamma'] ], whis='range') # + _uuid="0f44453d7022928d2aeed3c3e0126cbd7118cdd9" plt.figure() _ = plt.hist(df['gamma'], bins=100) # + _uuid="4dba4f705171f002fc94429ec80f7b6a2fe67ff3" import mpl_toolkits.axes_grid1.inset_locator as mpl_il plt.figure() plt.boxplot([ df['normal'], df['random'], df['gamma'] ], whis='range') # overlay axis on top of another ax2 = mpl_il.inset_axes(plt.gca(), width='60%', height='40%', loc=2) ax2.hist(df['gamma'], bins=100) ax2.margins(x=0.5) # + _uuid="90b1e8ffe23e39ec54414c1fd63b5d5c4e72be6f" # switch the y axis ticks for ax2 to the right side ax2.yaxis.tick_right() # + _uuid="cb3652a440484d391d27122878456a642c58d804" # if `whis` argument isn't passed, boxplot defaults to showing 1.5*interquartile (IQR) whiskers with outliers plt.figure() _ = plt.boxplot([ df['normal'], df['random'], df['gamma'] ] ) # + [markdown] _uuid="fbf8bf0a67f6c49d78911c5f37be531ebbcd9edb" #
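# A version note: `whis='range'` was removed from `plt.boxplot` in newer matplotlib releases; passing the percentile pair `(0, 100)` extends the whiskers over the full data range in the same way. Sketch:
# +
plt.figure()
_ = plt.boxplot([df['normal'], df['random'], df['gamma']], whis=(0, 100))
# + [markdown]
#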
    # ## 5-6 Heatmaps # + _uuid="ebfc0dcb8e85aa540f6568fa96431a4e9707f3c1" plt.figure() Y = np.random.normal(loc=0.0, scale=1.0, size=10000) X = np.random.random(size=10000) _ = plt.hist2d(X, Y, bins=25) # + _uuid="fdbcf35950f94a4d0f1ce10efee6a4502f6ecfc8" plt.figure() _ = plt.hist2d(X, Y, bins=100) # + [markdown] _uuid="139f44a4deb043128c8c7254eb60c33e0fc26e68" #
    # ## 5-7 Animations # + _uuid="b3676970195153dc6056600f024f55c1b6f0ba12" import matplotlib.animation as animation n = 100 x = np.random.randn(n) # + _uuid="ea3eb5835e4acd53d43da483e17d79c32228cad6" # create the function that will do the plotting, where curr is the current frame def update(curr): # check if animation is at the last frame, and if so, stop the animation a if curr == n: a.event_source.stop() plt.cla() bins = np.arange(-4, 4, 0.5) plt.hist(x[:curr], bins=bins) plt.axis([-4,4,0,30]) plt.gca().set_title('Sampling the Normal Distribution') plt.gca().set_ylabel('Frequency') plt.gca().set_xlabel('Value') plt.annotate('n = {}'.format(curr), [3,27]) # + _uuid="fb2314b3b1735c5e191c8427c5abe6429e4ff767" fig = plt.figure() a = animation.FuncAnimation(fig, update, interval=100) # + [markdown] _uuid="7e702e5e0a876f9fa0b2e4fe497a56b91e00a95d" #
    # ## 5-8 Interactivity # + _uuid="51fefb947daf8ca558cbc153e2ddbf39bcb7d4b2" plt.figure() data = np.random.rand(10) plt.plot(data) def onclick(event): plt.cla() plt.plot(data) plt.gca().set_title('Event at pixels {},{} \nand data {},{}'.format(event.x, event.y, event.xdata, event.ydata)) # tell mpl_connect we want to pass a 'button_press_event' into onclick when the event is detected plt.gcf().canvas.mpl_connect('button_press_event', onclick) # + _uuid="1d64a25dc386a30bd895ca8f58ca86d632f05d74" from random import shuffle origins = ['China', 'Brazil', 'India', 'USA', 'Canada', 'UK', 'Germany', 'Iraq', 'Chile', 'Mexico'] shuffle(origins) df = pd.DataFrame({'height': np.random.rand(10), 'weight': np.random.rand(10), 'origin': origins}) df # + _uuid="e13dd7e002938af1a52d7520004d839a3d7d2011" plt.figure() # picker=5 means the mouse doesn't have to click directly on an event, but can be up to 5 pixels away plt.scatter(df['height'], df['weight'], picker=5) plt.gca().set_ylabel('Weight') plt.gca().set_xlabel('Height') # + _uuid="e926ab1d2dc2098a6af48526e9f980bf594c79cd" def onpick(event): origin = df.iloc[event.ind[0]]['origin'] plt.gca().set_title('Selected item came from {}'.format(origin)) # tell mpl_connect we want to pass a 'pick_event' into onpick when the event is detected plt.gcf().canvas.mpl_connect('pick_event', onpick) # + _uuid="bb968584c16b9acc6466a3897c9415c57f3a7404" # use the 'seaborn-colorblind' style plt.style.use('seaborn-colorblind') # + [markdown] _uuid="84742816a16280f7bca7c43879d6762e10e0a440" #
    # ## 5-9 DataFrame.plot # + _uuid="97a3554f9640a2b77e07c861b5a5b6c814a3b276" np.random.seed(123) df = pd.DataFrame({'A': np.random.randn(365).cumsum(0), 'B': np.random.randn(365).cumsum(0) + 20, 'C': np.random.randn(365).cumsum(0) - 20}, index=pd.date_range('1/1/2017', periods=365)) df.head() # + _uuid="4d64fa4bc8b62fe1f4d2c2de0869bb49c8f7fc3d" df.plot('A','B', kind = 'scatter'); # + [markdown] _uuid="857cecae1e2c9eb59c1a9d136ef1c5422d86d5ba" # You can also choose the plot kind by using the `DataFrame.plot.kind` methods instead of providing the `kind` keyword argument. # # `kind` : # - `'line'` : line plot (default) # - `'bar'` : vertical bar plot # - `'barh'` : horizontal bar plot # - `'hist'` : histogram # - `'box'` : boxplot # - `'kde'` : Kernel Density Estimation plot # - `'density'` : same as 'kde' # - `'area'` : area plot # - `'pie'` : pie plot # - `'scatter'` : scatter plot # - `'hexbin'` : hexbin plot # ###### [Go to top](#top) # + _uuid="74997530957394a96f0aed15c21e65f54911159c" # create a scatter plot of columns 'A' and 'C', with changing color (c) and size (s) based on column 'B' df.plot.scatter('A', 'C', c='B', s=df['B'], colormap='viridis') # + _uuid="6299f8dddb909c7850620499edc49afdfd909f75" ax = df.plot.scatter('A', 'C', c='B', s=df['B'], colormap='viridis') ax.set_aspect('equal') # + _uuid="91a25480397c8759047100da9ebc6c0264d8a918" df.plot.box(); # + _uuid="b7bfa0ce17ea260d75eb97a0161af3dbd700f780" df.plot.hist(alpha=0.7); # + [markdown] _uuid="21a68cb3d0111753d29df2b402011daff81c5ff4" # [Kernel density estimation plots](https://en.wikipedia.org/wiki/Kernel_density_estimation) are useful for deriving a smooth continuous function from a given sample. # + _uuid="7b4d0f65af26e55acaf9a06da13dc71eb21a408b" df.plot.kde(); # + [markdown] _uuid="68f7c4b8c85e6e28d978f11adbd9b54223d58538" #
    # # 6- Seaborn # # As you have just read, **Seaborn** is complimentary to Matplotlib and it specifically targets statistical data visualization. But it goes even further than that: Seaborn extends Matplotlib and that’s why it can address the two biggest frustrations of working with Matplotlib. Or, as says in the “introduction to Seaborn”: “If matplotlib “tries to make easy things easy and hard things possible”, seaborn tries to make a well-defined set of hard things easy too.” # # One of these hard things or frustrations had to do with the default Matplotlib parameters. Seaborn works with different parameters, which undoubtedly speaks to those users that don’t use the default looks of the Matplotlib plots # Seaborn is a library for making statistical graphics in Python. It is built on top of matplotlib and closely integrated with pandas data structures. # # Here is some of the functionality that seaborn offers: # # A dataset-oriented API for examining relationships between multiple variables # Specialized support for using categorical variables to show observations or aggregate statistics # Options for visualizing univariate or bivariate distributions and for comparing them between subsets of data # Automatic estimation and plotting of linear regression models for different kinds dependent variables # Convenient views onto the overall structure of complex datasets # High-level abstractions for structuring multi-plot grids that let you easily build complex visualizations # Concise control over matplotlib figure styling with several built-in themes # Tools for choosing color palettes that faithfully reveal patterns in your data # Seaborn aims to make visualization a central part of exploring and understanding data. Its dataset-oriented plotting functions operate on dataframes and arrays containing whole datasets and internally perform the necessary semantic mapping and statistical aggregation to produce informative plots. # # Here’s an example of what this means: # + [markdown] _uuid="ad6fd29ef151c7625da19a38a8ef9b24b353427f" #
    # ## 6-1 Seaborn Vs Matplotlib # # It is summarized that if Matplotlib “tries to make easy things easy and hard things possible”, Seaborn tries to make a well defined set of hard things easy too.” # # Seaborn helps resolve the two major problems faced by Matplotlib; the problems are # # * Default Matplotlib parameters # * Working with data frames # # As Seaborn compliments and extends Matplotlib, the learning curve is quite gradual. If you know Matplotlib, you are already half way through Seaborn. # # Important Features of Seaborn # Seaborn is built on top of Python’s core visualization library Matplotlib. It is meant to serve as a complement, and not a replacement. However, Seaborn comes with some very important features. Let us see a few of them here. The features help in − # # * Built in themes for styling matplotlib graphics # * Visualizing univariate and bivariate data # * Fitting in and visualizing linear regression models # * Plotting statistical time series data # * Seaborn works well with NumPy and Pandas data structures # * It comes with built in themes for styling Matplotlib graphics # # In most cases, you will still use Matplotlib for simple plotting. The knowledge of Matplotlib is recommended to tweak Seaborn’s default plots. # + _uuid="43b3d518ca544a9eac820327635535b04fba563f" def sinplot(flip = 1): x = np.linspace(0, 14, 100) for i in range(1, 5): plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip) sinplot() plt.show() # + _uuid="3b82e364e3599e9979d62bc70c8058d4d60a3fc8" def sinplot(flip = 1): x = np.linspace(0, 14, 100) for i in range(1, 5): plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip) sns.set() sinplot() plt.show() # + _uuid="a78ee6a9554ff6a42f4d584d00f700c63b3945cb" np.random.seed(1234) v1 = pd.Series(np.random.normal(0,10,1000), name='v1') v2 = pd.Series(2*v1 + np.random.normal(60,15,1000), name='v2') # + _uuid="ee4e031c263014b7d16c96351bbec7503ea16fbf" plt.figure() plt.hist(v1, alpha=0.7, bins=np.arange(-50,150,5), label='v1'); plt.hist(v2, alpha=0.7, bins=np.arange(-50,150,5), label='v2'); plt.legend(); # + _uuid="67dcace37ccbc4ebbb58297ad5b09c4edcbc32e8" plt.figure() # we can pass keyword arguments for each individual component of the plot sns.distplot(v2, hist_kws={'color': 'Teal'}, kde_kws={'color': 'Navy'}); # + _uuid="7bc165c0ea4d4f20a5975bd71a81e203afae0feb" sns.jointplot(v1, v2, alpha=0.4); # + _uuid="ee8e1e256786a02ada1f55233695ac507c05558e" grid = sns.jointplot(v1, v2, alpha=0.4); grid.ax_joint.set_aspect('equal') # + _uuid="b25a62cb3ddf0be742e2465d20c9f69bc8062a95" sns.jointplot(v1, v2, kind='hex'); # + _uuid="28893e60d141b295029b9414d6f4ac3fece7c43a" # set the seaborn style for all the following plots sns.set_style('white') sns.jointplot(v1, v2, kind='kde', space=0); # + _uuid="9eef5ff46fbf313f209a35ea44bce3b0d85e5966" train = pd.read_csv('../input/train.csv') train.head() # + [markdown] _uuid="5587ea1eec62478c9970831d2886c4cca9237257" # ## 6-2 10 Useful Python Data Visualization Libraries # + [markdown] _uuid="c1c287a60f1c634dc68b18165ca6ee4df415320c" # I am giving an overview of 10 interdisciplinary Python data visualization libraries, from the well-known to the obscure. # # * 1- matplotlib # # matplotlib is the O.G. of Python data visualization libraries. Despite being over a decade old, it’s still the most widely used library for plotting in the Python community. It was designed to closely resemble MATLAB, a proprietary programming language developed in the 1980s. 
# # * 2- Seaborn # # Seaborn harnesses the power of matplotlib to create beautiful charts in a few lines of code. The key difference is Seaborn’s default styles and color palettes, which are designed to be more aesthetically pleasing and modern. Since Seaborn is built on top of matplotlib, you’ll need to know matplotlib to tweak Seaborn’s defaults. # # * 3- ggplot # # ggplot is based on ggplot2, an R plotting system, and concepts from The Grammar of Graphics. ggplot operates differently than matplotlib: it lets you layer components to create a complete plot. For instance, you can start with axes, then add points, then a line, a trendline, etc. Although The Grammar of Graphics has been praised as an “intuitive” method for plotting, seasoned matplotlib users might need time to adjust to this new mindset. # # # * 4- Bokeh # # Like ggplot, Bokeh is based on The Grammar of Graphics, but unlike ggplot, it’s native to Python, not ported over from R. Its strength lies in the ability to create interactive, web-ready plots, which can be easily outputted as JSON objects, HTML documents, or interactive web applications. Bokeh also supports streaming and real-time data. # # # * 5- pygal # # Like Bokeh and Plotly, pygal offers interactive plots that can be embedded in the web browser. Its prime differentiator is the ability to output charts as SVGs. As long as you’re working with smaller datasets, SVGs will do you just fine. But if you’re making charts with hundreds of thousands of data points, they’ll have trouble rendering and become sluggish. # # * 6- Plotly # # You might know Plotly as an online platform for data visualization, but did you also know you can access its capabilities from a Python notebook? Like Bokeh, Plotly’s forte is making interactive plots, but it offers some charts you won’t find in most libraries, like contour plots, dendograms, and 3D charts. # # * 7- geoplotlib # # geoplotlib is a toolbox for creating maps and plotting geographical data. You can use it to create a variety of map-types, like choropleths, heatmaps, and dot density maps. You must have Pyglet (an object-oriented programming interface) installed to use geoplotlib. Nonetheless, since most Python data visualization libraries don’t offer maps, it’s nice to have a library dedicated solely to them. # # * 8- Gleam # # Gleam is inspired by R’s Shiny package. It allows you to turn analyses into interactive web apps using only Python scripts, so you don’t have to know any other languages like HTML, CSS, or JavaScript. Gleam works with any Python data visualization library. Once you’ve created a plot, you can build fields on top of it so users can filter and sort data. # # # * 9- missingno # # Dealing with missing data is a pain. missingno allows you to quickly gauge the completeness of a dataset with a visual summary, instead of trudging through a table. You can filter and sort data based on completion or spot correlations with a heatmap or a dendrogram. # # # * 10- Leather # # Leather’s creator, , puts it best: “Leather is the Python charting library for those who need charts now and don’t care if they’re perfect.” It’s designed to work with all data types and produces charts as SVGs, so you can scale them without losing image quality. Since this library is relatively new, some of the documentation is still in progress. The charts you can make are pretty basic—but that’s the intention. # # At the end, nice cheatsheet on how to best visualize your data. I think I will print it out as a good reminder of "best practices". 
Check out the link for the complete cheatsheet, also as a PDF. # # Link: https://github.com/mjbahmani/Machine-Learning-Workflow-with-Python # ![cheatsheet ][1] # [Reference][2] # # # [1]: http://s8.picofile.com/file/8340669884/53f6a826_d7df_4b55_81e6_7c23b3fff0a3_original.png # [2]: https://blog.modeanalytics.com/python-data-visualization-libraries/ # + [markdown] _uuid="4524fb4d3bdeca34f1a526bbf2fc2caa9bfbf51a" #
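# A version note on the seaborn cells earlier in this chapter: `sns.distplot` and positional data arguments to `sns.jointplot` are deprecated in newer seaborn releases (assumption: seaborn 0.11 or later). Roughly equivalent modern calls:
# +
plt.figure()
sns.histplot(v2, kde=True, color='Teal')  # replaces sns.distplot(v2, ...)

sns.jointplot(x=v1, y=v2, kind='hex')     # keyword arguments instead of positional ones
# + [markdown]
#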
    # ## 7- SKlearn # # - The __open source__ Python ecosystem provides __a standalone, versatile and powerful scientific working environment__, including: [NumPy](http://numpy.org), [SciPy](http://scipy.org), [IPython](http://ipython.org), [Matplotlib](http://matplotlib.org), [Pandas](http://pandas.pydata.org/), _and many others..._ # # + [markdown] _uuid="ec7344e7f2a1bafa9a44a518722fcd8ec47c374b" #
    # ## 7-1 Introduction # # # # - Scikit-Learn builds upon NumPy and SciPy and __complements__ this scientific environment with machine learning algorithms; # - By design, Scikit-Learn is __non-intrusive__, easy to use and easy to combine with other libraries; # - Core algorithms are implemented in low-level languages. # + [markdown] _uuid="6e80040de557789b0dff267ce45ba3e494885fee" #
    # ## 7-2 Algorithms # + [markdown] _uuid="666c206f83175114a513b37fb9ae322b5cd8543e" # __Supervised learning:__ # # * Linear models (Ridge, Lasso, Elastic Net, ...) # * Support Vector Machines # * Tree-based methods (Random Forests, Bagging, GBRT, ...) # * Nearest neighbors # * Neural networks (basics) # * Gaussian Processes # * Feature selection # + [markdown] _uuid="44eef8d741beebe15555c5166360b2ce77f5d5b1" # __Unsupervised learning:__ # # * Clustering (KMeans, Ward, ...) # * Matrix decomposition (PCA, ICA, ...) # * Density estimation # * Outlier detection # + [markdown] _uuid="8da2cc5428b697a7b5f21d34038d343bb8b094bb" # __Model selection and evaluation:__ # # * Cross-validation # * Grid-search # * Lots of metrics # # _... and many more!_ (See our [Reference](http://scikit-learn.org/dev/modules/classes.html)) # + [markdown] _uuid="e8a877d51d20c1ad31bb635cffc89175426eb77c" #
    # ## 7-3 Framework # # Data comes as a finite learning set ${\cal L} = (X, y)$ where # * Input samples are given as an array $X$ of shape `n_samples` $\times$ `n_features`, taking their values in ${\cal X}$; # * Output values are given as an array $y$, taking _symbolic_ values in ${\cal Y}$. # + [markdown] _uuid="bafb45df9ecfe90563f2f9a1be8a327823cf6d35" # The goal of supervised classification is to build an estimator $\varphi: {\cal X} \mapsto {\cal Y}$ minimizing # # $$ # Err(\varphi) = \mathbb{E}_{X,Y}\{ \ell(Y, \varphi(X)) \} # $$ # # where $\ell$ is a loss function, e.g., the zero-one loss for classification $\ell_{01}(Y,\hat{Y}) = 1(Y \neq \hat{Y})$. # + [markdown] _uuid="7efef8f514caf78e7bc2a60b4d5c0e7fa6d160ac" #
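# As a tiny illustration of the zero-one loss defined above (made-up label vectors, not the iris data yet):
# +
from sklearn.metrics import zero_one_loss

y_true = np.array([0, 1, 2, 1, 0])
y_hat = np.array([0, 2, 2, 1, 1])

print(np.mean(y_true != y_hat))      # fraction of misclassified samples
print(zero_one_loss(y_true, y_hat))  # the same quantity via scikit-learn
# + [markdown]
#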
    # ## 7-4 Applications # # - Classifying signal from background events; # - Diagnosing disease from symptoms; # - Recognising cats in pictures; # - Identifying body parts with Kinect cameras; # - ... # # + [markdown] _uuid="7cc13baab79cbc6446763e4ebe8feba2c95e74c9" #
    # ## 7-5 Data # # - Input data = Numpy arrays or Scipy sparse matrices ; # - Algorithms are expressed using high-level operations defined on matrices or vectors (similar to MATLAB) ; # - Leverage efficient low-leverage implementations ; # - Keep code short and readable. # + _uuid="da71475a60440648988cf0624a200439759112f7" from sklearn import datasets iris = datasets.load_iris() X_iris = iris.data y_iris = iris.target # + [markdown] _uuid="9c1cb2c59e225e0bafa33bae8e77db53c1f34572" # The dataset includes 150 instances, with 4 attributes each. For each instance, we will also have a target class (in our case, the species). This class is a special attribute which we will aim to predict for new, previously unseen instances, given the remaining (known) attributes. # + _uuid="d60e800532bb74141e10d0ca57761ab8886831a9" print (X_iris.shape, y_iris.shape) print ('Feature names:{0}'.format(iris.feature_names)) print ('Target classes:{0}'.format(iris.target_names)) print ('First instance features:{0}'.format(X_iris[0])) # + [markdown] _uuid="0699a589dbf3d8de81b360a3d1aee84019a64e3e" # Let us display each instance in a 2d-scatter plot, using first sepal measures, and then petal measures. # + _uuid="20a39dd47768e6ff4944573955c2b65322386efd" plt.figure('sepal') colormarkers = [ ['red','s'], ['greenyellow','o'], ['blue','x']] for i in range(len(colormarkers)): px = X_iris[:, 0][y_iris == i] py = X_iris[:, 1][y_iris == i] plt.scatter(px, py, c=colormarkers[i][0], marker=colormarkers[i][1]) plt.title('Iris Dataset: Sepal width vs sepal length') plt.legend(iris.target_names) plt.xlabel('Sepal length') plt.ylabel('Sepal width') plt.figure('petal') for i in range(len(colormarkers)): px = X_iris[:, 2][y_iris == i] py = X_iris[:, 3][y_iris == i] plt.scatter(px, py, c=colormarkers[i][0], marker=colormarkers[i][1]) plt.title('Iris Dataset: petal width vs petal length') plt.legend(iris.target_names) plt.xlabel('Petal length') plt.ylabel('Petal width') plt.show() # + [markdown] _uuid="90e0f3c3f49ae20dfd3b0648bf1fb4e9131455ef" #
    # ## 7-6 Supervised Learning: Classification # + [markdown] _uuid="d41c9b43569995b19a096eaf1435ff05aa4010f4" # In 1936 Ronald Fisher introduced the Iris dataset to the statistics world, using it to develop a _linear discriminant model_. What he did was to build a linear combination of the attributes that separates a species from the rest, that is, find a straight line similar to the one we suggested in the previous section. # # Our first task will be to predict the species of an Iris flower given its four sepal and petal measures. For the moment, we will start using only two attributes, its sepal width and length. We will do this to ease visualization, but later we will use all four attributes, and see if performance improves. This is an instance of a **classification problem**, where we want to assign a label taken from a discrete set to an item according to its features. # # The typical classification process roughly involves the following steps: # - select your attributes, # - build a model based on available data, and # - evaluate your model’s performance on previously unseen data. # # To do this, before building our model we should separate training and testing data. Training data will be used to build the model, and testing data will be used to evaluate its performance. # # + [markdown] _uuid="4f4c054a1e478dbf9745c1c9128ebed228a369dd" #
    # ## 7-7 Separate training and testing sets # + [markdown] _uuid="29d9d5fb317893fd18c495bc6b93db45c8825f97" # Our first step will be to separate the dataset into two separate sets, using 75% of the instances for training our classifier, and the remaining 25% for evaluating it (and, in this case, taking only two features, sepal width and length). We will also perform _feature scaling_: for each feature, calculate the average, subtract the mean value from the feature value, and divide the result by the standard deviation. After scaling, each feature will have a zero average, with a standard deviation of one. This standardization of values (which does not change their distribution, as you could verify by plotting the X values before and after scaling) is a common requirement of machine learning methods, to prevent features with large values from weighing too much in the final results. # + _uuid="a0ba1691f3394e53a8939c709b2f874d34276e55" from sklearn.model_selection import train_test_split from sklearn import preprocessing # Create dataset with only the first two attributes X, y = X_iris[:, [0,1]], y_iris # Test set will be the 25% taken randomly X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33) # Standardize the features scaler = preprocessing.StandardScaler().fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) # + [markdown] _uuid="d3d494e1c138f0ceccf7e916a7a0466053fb72e7" # Check that, after scaling, the mean is 0 and the standard deviation is 1 (this should be exact in the training set, but only approximated in the testing set, because we used the training set mean and standard deviation): # + _uuid="50971f3878324a26664603f242e743c0005eaa47" print ('Training set mean:{:.2f} and standard deviation:{:.2f}'.format(np.average(X_train),np.std(X_train))) print ('Testing set mean:{:.2f} and standard deviation:{:.2f}'.format(np.average(X_test),np.std(X_test))) # + [markdown] _uuid="f0f7e10a6a4435eae2734323f23fc3a5aa3f364b" # Display the training data, after scaling. # + _uuid="bf6fe6e16343a274dde88a9668521bad8d745617" colormarkers = [ ['red','s'], ['greenyellow','o'], ['blue','x']] plt.figure('Training Data') for i in range(len(colormarkers)): xs = X_train[:, 0][y_train == i] ys = X_train[:, 1][y_train == i] plt.scatter(xs, ys, c=colormarkers[i][0], marker=colormarkers[i][1]) plt.title('Training instances, after scaling') plt.legend(iris.target_names) plt.xlabel('Sepal length') plt.ylabel('Sepal width') plt.show() # + [markdown] _uuid="4c6d378e8aa4a86c374cee25410178c06e880420" #
    # ## 7-8 A linear, binary classifier # + [markdown] _uuid="a1220ea7fdc23fe8831310212c52bdd4d5a2dd09" # To start, let's transform the problem into a binary classification task: we will only want to distinguish setosa flowers from the rest (it seems easy, according to the plot). To do this, we will just collapse all non-setosa targets into the same class (later we will come back to the three-class original problem). # + _uuid="5f762b1f6b88ea60a54a1bacc44bec42bc403bf7" import copy y_train_setosa = copy.copy(y_train) # Every class 1 and 2 in the training set will become just 1 y_train_setosa[y_train_setosa > 0]=1 y_test_setosa = copy.copy(y_test) y_test_setosa[y_test_setosa > 0]=1 print ('New training target classes:\n{0}'.format(y_train_setosa)) # + [markdown] _uuid="753b8c2d1e1d6320cdb4b23818aa9afac5ca1099" # Our first classifier will be a linear one. # # Linear classification models have been very well studied through many years, and there are a lot of different methods with very different approaches for building the separating hyperplane. We will use the `SGDClassifier` from scikit-learn to implement a linear model, including regularization. The classifier (actually, a family of classifiers, as we will see) receives its name from using Stochastic Gradient Descent, a very effective numerical procedure to find the local minimum of a function. # # Gradient Descent was introduced by Cauchy in 1847, to solve a system of linear equations. The idea is based on the observation that a multivariable function decreases fastest in the direction of its negative gradient (you can think of the gradient as a generalization of the derivative for several dimensions). If we want to find its minimum (at least a local one) we could move in the direction of its negative gradient. This is exactly what gradient descent does. # # + [markdown] _uuid="077958a4afee1ca86cb244f34b7926ab7ae7e856" # Every classifier in scikit-learn is created the same way: calling a method with the classifier's configurable hyperparameters to create an instance of the classifier. In this case, we will use `linear_model.SGDClassifier`, telling scikit-learn to use a _log_ loss function. # + _uuid="624efe224ad09deb106bbd41098387241686e8e3" from sklearn import linear_model clf = linear_model.SGDClassifier(loss='log', random_state=42) print (clf) # + [markdown] _uuid="9fade050090fe34bef437d28a35b918e399c304f" # Note that the classifier includes several parameters. Usually, scikit-learn specifies default values for every parameter. But be aware that it is not a good idea to just keep the default values. Later (or in future notebooks, I do not know yet), we will talk about _model selection_, the process of selecting the best parameters. # # Now, we just call the `fit` method to train the classifier (i.e., build a model we will later use), based on the available training data. In our case, the training setosa set. # # + _uuid="79b66e76e757b42f4ed7fd49e3c62b83e861b625" clf.fit(X_train, y_train_setosa) # + [markdown] _uuid="a15f78bd1067a7e1614127a2f32a5971966c89a2" # How does our model look? Well, since we are building a linear classifier, our model is a... line. We can show its coefficients: # + _uuid="d087bff89ccc12837db145767a2213e19a83e095" print (clf.coef_,clf.intercept_) # + [markdown] _uuid="3c72318eb1be6a0670405774623093466716b386" # ...
and we can draw the decision boundary using pyplot: # + _uuid="76d30025f191ad3f6d6e3572fdc24b37411de3c6" x_min, x_max = X_train[:, 0].min() - .5, X_train[:, 0].max() + .5 y_min, y_max = X_train[:, 1].min() - .5, X_train[:, 1].max() + .5 xs = np.arange(x_min, x_max, 0.5) fig,axes = plt.subplots() axes.set_aspect('equal') axes.set_title('Setosa classification') axes.set_xlabel('Sepal length') axes.set_ylabel('Sepal width') axes.set_xlim(x_min, x_max) axes.set_ylim(y_min, y_max) plt.sca(axes) plt.scatter(X_train[:, 0][y_train_setosa == 0], X_train[:, 1][y_train_setosa == 0], c='red', marker='s') plt.scatter(X_train[:, 0][y_train_setosa == 1], X_train[:, 1][y_train_setosa == 1], c='black', marker='x') ys = (-clf.intercept_[0]- xs * clf.coef_[0, 0]) / clf.coef_[0, 1] plt.plot(xs, ys) plt.show() # + [markdown] _uuid="5b44914dc28906e9b62bdde03b8dc6bc695421ce" # The blue line is our decision boundary. Every time $30.97 \times sepal\_length - 17.82 \times sepal\_width - 17.34$ is smaller than zero we will have an iris setosa (class 0); when it is greater than zero, the classifier predicts a non-setosa flower (class 1). # + [markdown] _uuid="67295f4a942c50d80cb1e9ae2ca8ad59a6d2fb74" #
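# To make the connection between the plotted line and the classifier explicit, we can apply the decision rule by hand: the model predicts the non-setosa class whenever $w \cdot x + b > 0$, with $w$ taken from `clf.coef_` and $b$ from `clf.intercept_`. A minimal sketch, assuming `clf` and `X_train` are still in scope; it should agree with `clf.predict`:
# +
import numpy as np

# raw decision scores: w . x + b for every training instance
scores = X_train.dot(clf.coef_[0]) + clf.intercept_[0]

# positive score -> class 1 (non-setosa), non-positive score -> class 0 (setosa)
manual_pred = (scores > 0).astype(int)

# should print True: the manual rule matches the classifier's own predictions
print(np.array_equal(manual_pred, clf.predict(X_train)))
# -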
    # ## 7-9 Prediction # + [markdown] _uuid="ea3f08d3e7d80014d39a580fee23a5b901836278" # Now, the really useful part: when we have a new flower, we just have to get its sepal length and width and call the `predict` method of the classifier on the new instance. _This works the same way no matter the classifier we are using or the method we used to build it_ # + _uuid="adc12ede8d795ab655398672c326947bfaf72352" print ('If the flower has sepal length 4.7 and sepal width 3.1, it is a {}'.format( iris.target_names[clf.predict(scaler.transform([[4.7, 3.1]]))])) # + [markdown] _uuid="d1176b1981567568ad2a6fdf7df660971c60abbe" # Note that we first scaled the new instance, then applied the `predict` method, and used the result to look up the class in the iris target names array. # + [markdown] _uuid="86353db47b33bc43025f024071c6cafd09cd3990" #
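# Because we chose `loss='log'` (logistic regression trained with stochastic gradient descent), the classifier can also return class membership probabilities via `predict_proba`, not just a hard label. A minimal sketch, assuming `clf` and `scaler` are still in scope and reusing the same illustrative measurements:
# +
new_flower = scaler.transform([[4.7, 3.1]])

# column 0 is the probability of class 0 (setosa), column 1 of class 1 (non-setosa)
probs = clf.predict_proba(new_flower)
print('P(setosa) = {:.3f}, P(non-setosa) = {:.3f}'.format(probs[0, 0], probs[0, 1]))
# -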
    # ## 7-10 Back to the original three-class problem # + [markdown] _uuid="f388c210be1b2158117b65eb8592bfa7bc1386ca" # Now, do the training using the three original classes. Using scikit-learn this is simple: we do exactly the same procedure, using the original three target classes: # + _uuid="dff8a43a1c8d68407e90d1b83d1e8dbea87e958f" clf2 = linear_model.SGDClassifier(loss='log', random_state=33) clf2.fit(X_train, y_train) print (len(clf2.coef_)) # + [markdown] _uuid="8c986abaa3874a8cd0b65a272528a70a345b91cd" # We now have _three_ decision boundaries... scikit-learn has simply converted the problem into three one-versus-all binary classifiers. Note that Class 0 is linearly separable, while Class 1 and Class 2 are not. # + _uuid="0b5cd2ffc95fb39d6b136cb6c9966dbbbd1f0500" x_min, x_max = X_train[:, 0].min() - .5, X_train[:, 0].max() + .5 y_min, y_max = X_train[:, 1].min() - .5, X_train[:, 1].max() + .5 xs = np.arange(x_min,x_max,0.5) fig, axes = plt.subplots(1,3) fig.set_size_inches(10,6) for i in [0,1,2]: axes[i].set_aspect('equal') axes[i].set_title('Class '+ iris.target_names[i] + ' versus the rest') axes[i].set_xlabel('Sepal length') axes[i].set_ylabel('Sepal width') axes[i].set_xlim(x_min, x_max) axes[i].set_ylim(y_min, y_max) plt.sca(axes[i]) ys=(-clf2.intercept_[i]-xs*clf2.coef_[i,0])/clf2.coef_[i,1] plt.plot(xs,ys) for j in [0,1,2]: px = X_train[:, 0][y_train == j] py = X_train[:, 1][y_train == j] color = colormarkers[j][0] if j==i else 'black' marker = 'o' if j==i else 'x' plt.scatter(px, py, c=color, marker=marker) plt.show() # + [markdown] _uuid="fa09989724671462a28e9dc963260c585441f48a" # Let us evaluate on the previous instance to find the three-class prediction. Scikit-learn tries the three classifiers. # + _uuid="4a4819672b9113bf4cc77bd32d343eaa2defdfc0" scaler.transform([[4.7, 3.1]]) print(clf2.decision_function(scaler.transform([[4.7, 3.1]]))) clf2.predict(scaler.transform([[4.7, 3.1]])) # + [markdown] _uuid="3a67576ba3e26fc8b1a01ae245efee25fb28732c" # The `decision_function` method tells us the classifier scores (in our case, the left side of each decision boundary inequality). In our example, the first classifier says the flower is a setosa (we have a score greater than zero), and it is not a versicolor nor a virginica. Easy. What if we had two positive values? In that case, the winning class is the one with the greatest score, which corresponds to the boundary the point lies furthest from on the positive side. # + [markdown] _uuid="447f9507fe540dda42582d97b3958847c6a1c322" #
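# We can make the "highest score wins" rule explicit: in this one-versus-all setup, the predicted class is simply the index of the largest value returned by `decision_function`. A minimal sketch, assuming `clf2`, `scaler` and `iris` are still in scope; it should match `clf2.predict`:
# +
import numpy as np

instance = scaler.transform([[4.7, 3.1]])

# one score per one-vs-rest classifier; the largest one wins
scores = clf2.decision_function(instance)[0]
winner = clf2.classes_[np.argmax(scores)]

print('Scores: {}'.format(scores))
print('argmax -> class {} ({})'.format(winner, iris.target_names[winner]))
print('clf2.predict -> {}'.format(clf2.predict(instance)[0]))
# -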
    # ## 7-11 Evaluating the classifier # + [markdown] _uuid="d42f5fa8d221da382fa01028d2492a908059e840" # The performance of an estimator is a measure of its effectiveness. The most obvious performance measure is called _accuracy_: given a classifier and a set of instances, it simply measures the proportion of instances correctly classified by the classifier. We can, for example, use the instances in the training set and calculate the accuracy of our classifier when predicting their target classes. Scikit-learn includes a `metrics` module that implements this (and many other) performance metrics. # + _uuid="b47d40cd9bec34549456e928de1fb3fa059fbdbc" from sklearn import metrics y_train_pred = clf2.predict(X_train) print ('Accuracy on the training set:{:.2f}'.format(metrics.accuracy_score(y_train, y_train_pred))) # + [markdown] _uuid="bd340e2bd542d74bf2507fb1d4714e21c1229749" # This means that our classifier correctly predicts 83\% of the instances in the training set. But this is actually a bad idea. The problem with evaluating on the training set is that you have built your model using this data, and it is possible that your model fits this data very well but performs poorly on previously unseen data (which is its ultimate purpose). This phenomenon is called overfitting, and you will see it again and again as you work through these notebooks. If you measure on your training data, you will never detect overfitting. So, _never ever_ measure on your training data. # # Remember we set aside a portion of the dataset as a testing set? Now it is time to use it: since it was not used for training, we expect it to give us an idea of how well our classifier performs on previously unseen data. # + _uuid="70dd8452466af9966724f4b5af586bb68d81f27e" y_pred = clf2.predict(X_test) print ('Accuracy on the testing set:{:.2f}'.format(metrics.accuracy_score(y_test, y_pred))) # + [markdown] _uuid="760b5655dff3c9a89c4f1b3be08f5644a12de85b" # Generally, accuracy on the testing set is lower than the accuracy on the training set, since the model is actually modeling the training set, not the testing set. # # One of the problems with accuracy is that it does not reflect well how our model performs on each different target class. For example, we know that our classifier works very well identifying setosa species, but will probably fail when separating the other two species. If we could measure this, we could get hints for improving performance, changing the method or the features. # # A very useful tool when facing multi-class problems is the confusion matrix. This matrix includes, in row _i_ and column _j_, the number of instances of class _i_ that were predicted to be in class _j_. A good classifier will accumulate the values on the confusion matrix diagonal, where correctly classified instances belong. Having the original and predicted classes, we can easily print the confusion matrix: # + _uuid="58e838b89ffd8b8d5b883897b531930e174fee3a" print (metrics.confusion_matrix(y_test, y_pred)) # + [markdown] _uuid="6442fc90fe99617f6238ad264c140989c0bff47d" # To read the confusion matrix, just remember the definition: the “8” on row 2, column 3, means that eight instances of class 1 were predicted to be in class 2. Our classifier is never wrong in our evaluation set when it classifies class zero (setosa) flowers. However, when it faces classes one and two (versicolor and virginica), it confuses them. The confusion matrix gives us useful information to know what kind of errors the classifier is making. 
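# A complementary view is to normalize each row of the confusion matrix by the number of true instances of that class; entry (i, j) then becomes the fraction of class-i instances predicted as class j, and the diagonal holds the per-class recall. A minimal sketch, assuming `metrics`, `y_test` and `y_pred` are still in scope:
# +
import numpy as np

cm = metrics.confusion_matrix(y_test, y_pred)

# divide every row by its sum; the diagonal of the result is per-class recall
cm_normalized = cm / cm.sum(axis=1, keepdims=True)
print(np.round(cm_normalized, 2))
# -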
# + [markdown] _uuid="0bc68e8d5a2ef4423d70fbb7eb4ea48e6bee9594" # Accuracy on the test set is a good performance measure when the number of instances of each class is similar, i.e., we have a uniform distribution of classes. However, consider that 99 percent of your instances belong to just one class (you have a skewed class distribution): a classifier that always predicts this majority class will have an excellent performance in terms of accuracy, despite the fact that it is an extremely naive method (and that it will surely fail in the “difficult” 1% cases). # # Within scikit-learn, there are several evaluation functions; we will show three popular ones: precision, recall, and F1-score (or f-measure). # + _uuid="9ca68dac14ab26a2392b8ac302db3ca16278c276" print (metrics.classification_report(y_test, y_pred, target_names=iris.target_names)) # + [markdown] _uuid="6cb4f31974f9b2ea0c1ac9856daf7841e16fb344" # - Precision computes the proportion of instances predicted as positives that were correctly evaluated (it measures how right our classifier is when it says that an instance is positive). # - Recall counts the proportion of positive instances that were correctly evaluated (measuring how right our classifier is when faced with a positive instance). # - F1-score is the harmonic mean of precision and recall, and tries to combine both in a single number (the sketch below shows how to compute these values directly). # + [markdown] _uuid="790eff98fb2a7f1f5b469fd55b84f9565329a24a" #
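# To connect these definitions with the numbers in the report above, here is that sketch: a minimal, illustrative example assuming `metrics`, `y_test` and `y_pred` are still available. `average=None` returns one value per class, in the order setosa, versicolor, virginica:
# +
precision = metrics.precision_score(y_test, y_pred, average=None)
recall = metrics.recall_score(y_test, y_pred, average=None)

# F1 is the harmonic mean of precision and recall, computed per class
f1 = 2 * precision * recall / (precision + recall)

print('Precision per class: {}'.format(precision))
print('Recall per class:    {}'.format(recall))
print('F1 per class:        {}'.format(f1))
print('metrics.f1_score:    {}'.format(metrics.f1_score(y_test, y_pred, average=None)))
# -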
    # ## 7-12 Using the four flower attributes # + [markdown] _uuid="77229bf2110999ada713fbb0ae55c2ec725e7526" # To wrap up this classification section, we will repeat the whole process, this time using the four original attributes, and check if performance improves. # + _uuid="0b4214c627c2d3e64519b58aa529477f604db287" # Test set will be the 25% taken randomly X_train4, X_test4, y_train4, y_test4 = train_test_split(X_iris, y_iris, test_size=0.25, random_state=33) # Standardize the features scaler = preprocessing.StandardScaler().fit(X_train4) X_train4 = scaler.transform(X_train4) X_test4 = scaler.transform(X_test4) # Build the classifier clf3 = linear_model.SGDClassifier(loss='log', random_state=33) clf3.fit(X_train4, y_train4) # Evaluate the classifier on the evaluation set y_pred4 = clf3.predict(X_test4) print (metrics.classification_report(y_test4, y_pred4, target_names=iris.target_names)) # + [markdown] _uuid="0ec1ebbf687f017f402f9fd66fc46e186d204ec6" #
    # ## 7-13 Unsupervised Learning: Clustering # + [markdown] _uuid="af8b8dcd0dfc9fd8e224538b7e2c3cc0771100d5" # Sometimes it is possible to take an unlabeled training set and try to find a hidden structure or patterns in the data: there is no given target class to predict or to evaluate the resulting model. We call this class of machine learning tasks _unsupervised learning_. For instance, _clustering_ methods try to group instances into subsets (called clusters): an instance should be similar to the others in the same subset and different from those belonging to another subset. # # In this section, we will perform clustering of the Iris data set, to see if we could group instances using their petal and sepal width and length. The training set is the same one we used for our last example on supervised classification. # # + [markdown] _uuid="0aab6518f743c93eb3708492b88e8f503589606f" # K-means is probably the most popular clustering algorithm, because it is very simple and easy to implement, and it has shown good performance on different tasks. It belongs to the class of partition algorithms that simultaneously partition data points into distinct groups, called clusters. We will apply k-means to the training data, using only sepal dimensions, building 3 clusters (note that we could have selected a different number of clusters to group the data into). # + _uuid="da0b9bbfed649b023ea14910a4cb2fc6947df419" from sklearn import cluster clf_sepal = cluster.KMeans(init='k-means++', n_clusters=3, random_state=33) clf_sepal.fit(X_train4[:,0:2]) # + [markdown] _uuid="8c5283b63443298ab77ce3c99ccef438b4ecfa32" # We can show the label assigned to each instance (note that this label is a cluster name, it has nothing to do with our original target classes... actually, when you are doing clustering you have no target class!). # + _uuid="dd66f6da3cdcf0b0df45033f8af80d8a913ebd3d" print (clf_sepal.labels_) # + [markdown] _uuid="fb2cf032fd214f500bc1b5c1e5cb2e91181e1d9c" # Using NumPy's indexing capabilities, we can display the actual target classes for each cluster, just to compare the built clusters with our flower type classes... 
# + _uuid="670b352cc30adc61caad1dfe64376c4d321ebecd" print (y_train4[clf_sepal.labels_==0]) # + _uuid="1cde150dd604e7a62003b96f2897ed3882a85281" print (y_train4[clf_sepal.labels_==1]) # + _uuid="0659d1c3fc12c5303065e2987816ad2dda76d455" print (y_train4[clf_sepal.labels_==2]) # + [markdown] _uuid="2caa2d5609a8af2a2703a69c5395c8de9e6ebd10" # As usually, is a good idea to display our instances and the clusters they belong to, to have a first approximation to how well our algorithm is behaving on our data: # + _uuid="6bc2b0a1cd98f8d58658f941799782e37f1fc381" colormarkers = [ ['red','s'], ['greenyellow','o'], ['blue','x']] step = .01 margin = .1 sl_min, sl_max = X_train4[:, 0].min()-margin, X_train4[:, 0].max() + margin sw_min, sw_max = X_train4[:, 1].min()-margin, X_train4[:, 1].max() + margin sl, sw = np.meshgrid( np.arange(sl_min, sl_max, step), np.arange(sw_min, sw_max, step) ) Zs = clf_sepal.predict(np.c_[sl.ravel(), sw.ravel()]).reshape(sl.shape) centroids_s = clf_sepal.cluster_centers_ # + [markdown] _uuid="7cedf51a164ebbfae5cabb2cd24a94e74b243502" # Display the data points and the calculated regions # + _uuid="c756314b2724adffb276b09fafff446c4a29a044" plt.figure(1) plt.clf() plt.imshow(Zs, interpolation='nearest', extent=(sl.min(), sl.max(), sw.min(), sw.max()), cmap= plt.cm.Pastel1, aspect='auto', origin='lower') for j in [0,1,2]: px = X_train4[:, 0][y_train == j] py = X_train4[:, 1][y_train == j] plt.scatter(px, py, c=colormarkers[j][0], marker= colormarkers[j][1]) plt.scatter(centroids_s[:, 0], centroids_s[:, 1],marker='*',linewidths=3, color='black', zorder=10) plt.title('K-means clustering on the Iris dataset using Sepal dimensions\nCentroids are marked with stars') plt.xlim(sl_min, sl_max) plt.ylim(sw_min, sw_max) plt.xlabel("Sepal length") plt.ylabel("Sepal width") plt.show() # + [markdown] _uuid="c6fa87cdcda44eeacba51b645c35dc7367e5c636" # Repeat the experiment, using petal dimensions # + _uuid="11e8ba67a6433ddac4efac055f86bb9b45e77252" clf_petal = cluster.KMeans(init='k-means++', n_clusters=3, random_state=33) clf_petal.fit(X_train4[:,2:4]) # + _uuid="8830e184fccb6094195cfc008a7bf5a479203167" print (y_train4[clf_petal.labels_==0]) # + _uuid="f9d66355b903a618cabfef234f2eacb32b4eba4b" print (y_train4[clf_petal.labels_==1]) # + _uuid="5f4ae459d0272d74281ab4ae2845ce4f74bcbb14" print (y_train4[clf_petal.labels_==2]) # + [markdown] _uuid="e3a17a8fba39e8d49358579cae1ef6c3ab6e3c3c" # Plot the clusters # + _uuid="5647c6daddf59a1bdcd8d5f2a58fbac147ba9f56" colormarkers = [ ['red','s'], ['greenyellow','o'], ['blue','x']] step = .01 margin = .1 sl_min, sl_max = X_train4[:, 2].min()-margin, X_train4[:, 2].max() + margin sw_min, sw_max = X_train4[:, 3].min()-margin, X_train4[:, 3].max() + margin sl, sw = np.meshgrid( np.arange(sl_min, sl_max, step), np.arange(sw_min, sw_max, step), ) Zs = clf_petal.predict(np.c_[sl.ravel(), sw.ravel()]).reshape(sl.shape) centroids_s = clf_petal.cluster_centers_ plt.figure(1) plt.clf() plt.imshow(Zs, interpolation='nearest', extent=(sl.min(), sl.max(), sw.min(), sw.max()), cmap= plt.cm.Pastel1, aspect='auto', origin='lower') for j in [0,1,2]: px = X_train4[:, 2][y_train4 == j] py = X_train4[:, 3][y_train4 == j] plt.scatter(px, py, c=colormarkers[j][0], marker= colormarkers[j][1]) plt.scatter(centroids_s[:, 0], centroids_s[:, 1],marker='*',linewidths=3, color='black', zorder=10) plt.title('K-means clustering on the Iris dataset using Petal dimensions\nCentroids are marked with stars') plt.xlim(sl_min, sl_max) plt.ylim(sw_min, sw_max) plt.xlabel("Petal length") 
plt.ylabel("Petal width") plt.show() # + [markdown] _uuid="409c72a611b055e4cc2e3dd523d88f7f46b10d16" # Now, calculate the clusters, using the four attributes # + _uuid="56668e53ec70e5e26d9994b0f172702011ed3e41" clf = cluster.KMeans(init='k-means++', n_clusters=3, random_state=33) clf.fit(X_train4) # + _uuid="5a89293931b142460fd2912f1802a67712110bfb" print (y_train[clf.labels_==0]) # + _uuid="27b83ae9ed49f4a517a409e72bb5416a3ea6dd8a" print (y_train[clf.labels_==1]) # + _uuid="b7fae1d9f328ead4f338581999b41deaf29e20b7" print (y_train[clf.labels_==2]) # + [markdown] _uuid="2ca2c5b2699e71c03c2ac1d9b47638adaddfb359" # Measure precision & recall in the testing set, using all attributes, and using only petal measures # + _uuid="f45af067d0c87aec50dcb0dffe93eba15257f741" y_pred=clf.predict(X_test4) print (metrics.classification_report(y_test, y_pred, target_names=['setosa','versicolor','virginica'])) # + _uuid="83c4f44e777257689fcb04190ae2ae2bad083f91" y_pred_petal=clf_petal.predict(X_test4[:,2:4]) print (metrics.classification_report(y_test, y_pred_petal, target_names=['setosa','versicolor','virginica'])) # + [markdown] _uuid="b4da546786d8990d5b0ec6bc5488e11590bce7de" # Wait, every performance measure is better using just two attributes. It is possible that less features give better results? Although at a first glance this seems contradictory, we will see in future notebooks that selecting the right subset of features, a process called feature selection, could actually improve the performance of our algorithms. # + [markdown] _uuid="341f79e2997c5dcedcc13434021135e5424006a6" #
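# One caveat about the reports above: k-means cluster labels are arbitrary integers, so they only line up with the true class indices by coincidence. A label-independent measure such as the adjusted Rand index compares the groupings themselves and sidesteps this issue. A minimal sketch, assuming `metrics`, `clf`, `clf_petal`, `X_test4` and `y_test4` are still in scope:
# +
# adjusted Rand index: 1.0 means identical groupings, values near 0.0 mean chance-level agreement
ari_all = metrics.adjusted_rand_score(y_test4, clf.predict(X_test4))
ari_petal = metrics.adjusted_rand_score(y_test4, clf_petal.predict(X_test4[:, 2:4]))

print('ARI using all four attributes: {:.2f}'.format(ari_all))
print('ARI using petal dimensions:    {:.2f}'.format(ari_petal))
# -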
    # ## 7-14 Supervised Learning: Regression # + [markdown] _uuid="e7856f5680378e28373a93620f776fbec0a86847" # In every example we have seen so far the output we aimed at predicting belonged to a discrete set. For classification, the set was the target class, while for the clustering algorithm the set included the different calculated clusters. What if we want to predict a value extracted from the real line? In this case, we are trying to solve a regression problem. # # To show how regression works in scikit-learn, we will apply it to a (very) simple and well-known problem: trying to predict the price of a house given some of its attributes. As the dataset, we will use the Boston house-prices dataset (find the dataset description and attributes [here](https://github.com/mjbahmani)). # + _uuid="13728a32bdb3e5f5bb5355e95b43551054355b66" from sklearn.datasets import load_boston boston = load_boston() print ('Boston dataset shape:{}'.format(boston.data.shape)) # + _uuid="d4610811ae4bfebb057e2054426cd2029c3f3226" print (boston.feature_names) # + [markdown] _uuid="e051b322579ebaec46e3de34b00b2e2186bdf210" # Create training and testing sets, and scale values, as usual # + [markdown] _uuid="43c8934779f1d7d266606adf95287cce83876068" # Create a method for training and evaluating a model. This time, to evaluate our model we will use a different approach: instead of separating the training set, we will use _cross-validation_. # # Cross-validation usually involves the following steps: # 1. Partition the dataset into k different subsets. # 2. Create k different models by training on k-1 subsets and testing on the remaining one. # 3. Measure the performance of each of the k models and use the average value as your performance value. # # A minimal sketch of these steps is shown right after this list. # # + [markdown] _uuid="91acd3dadd07de1cbfcdde2c3309c241429e9174" #
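# Here is that sketch: a minimal, illustrative example of k-fold cross-validation with a linear regressor on the Boston data. It assumes `boston` was loaded as above and that `load_boston` is still available in your scikit-learn version (it has been removed from recent releases); the pipeline, the choice of `SGDRegressor` and k=5 are assumptions for illustration, not a prescribed setup:
# +
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor

# scale inside the pipeline so each training fold is scaled independently
model = make_pipeline(StandardScaler(), SGDRegressor(random_state=33))

# 5-fold cross-validation; R^2 is the default score for regressors
cv = KFold(n_splits=5, shuffle=True, random_state=33)
scores = cross_val_score(model, boston.data, boston.target, cv=cv)

print('R^2 per fold: {}'.format(np.round(scores, 2)))
print('Average R^2:  {:.2f}'.format(scores.mean()))
# -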
    # ## 8- Plotly # How to use **Plotly** offline inside IPython notebooks. # # # + [markdown] _uuid="f9c6140fcc2890c1da1ad4de3ad2d68c90acbb9f" #
    # ## 8-1 New to Plotly? # Plotly, also known by its URL, Plot.ly, is a technical computing company headquartered in Montreal, Quebec, that develops online data analytics and visualization tools. Plotly provides online graphing, analytics, and statistics tools for individuals and collaboration, as well as scientific graphing libraries for Python, R, MATLAB, Perl, Julia, Arduino, and REST. # + _uuid="a763c7f03b88cd1679931bc151c4f8f98006b3cb" # example for plotly import plotly.offline as py import plotly.graph_objs as go py.init_notebook_mode(connected=True) from plotly import tools import plotly.figure_factory as ff iris = datasets.load_iris() X = iris.data[:, :2] # we only take the first two features. Y = iris.target x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5 y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5 trace = go.Scatter(x=X[:, 0], y=X[:, 1], mode='markers', marker=dict(color=np.random.randn(150), size=10, colorscale='Viridis', showscale=False)) layout = go.Layout(title='Training Points', xaxis=dict(title='Sepal length', showgrid=False), yaxis=dict(title='Sepal width', showgrid=False), ) fig = go.Figure(data=[trace], layout=layout) # + _uuid="e2031ce099799a53d125b1cc0986be86bf51e0e1" py.iplot(fig) # + [markdown] _uuid="7f180dcbf1ac66e1e40aa21551467d0cfeb2f974" #
    # ## 8-2 Plotly Offline from Command Line # You can plot your graphs from a Python script from the command line. On executing the script, it will open a web browser with your Plotly graph drawn. # + _uuid="cd95c77032e07526708e2339b0c4b862419286bb" import plotly.graph_objs as go from plotly.offline import plot plot([go.Scatter(x=[1, 2, 3], y=[3, 1, 6])]) # + [markdown] _uuid="77a55889cd7f12ec03adc2f8328548213aabfa14" #
    # ## 8-3 Generating Offline Graphs within Jupyter Notebook # You can also plot your graphs offline inside a Jupyter Notebook environment. First you need to initialize the Plotly notebook mode, as below: # + _uuid="7eaf6d6cc7809ed749c29f11aa153a7767fff21c" from plotly.offline import init_notebook_mode, iplot init_notebook_mode(connected=True) # + [markdown] _uuid="3f536d2afd2f215c927bd2b6adbac28373fc9106" # Run this at the start of every IPython notebook in which you use plotly.offline. This injects the plotly.js source files into the notebook. # # # + _uuid="2f19a9f0852aa105d632d5d72dd61cb8f6ec7bc7" iplot([{"x": [1, 2, 3], "y": [3, 1, 6]}]) # + _uuid="242f7a3ed1b231373ef5a2040e8c865e701e4879" import plotly.graph_objs as go import numpy as np x = np.random.randn(2000) y = np.random.randn(2000) iplot([go.Histogram2dContour(x=x, y=y, contours=dict(coloring='heatmap')), go.Scatter(x=x, y=y, mode='markers', marker=dict(color='white', size=3, opacity=0.3))], show_link=False) # + [markdown] _uuid="beae557b399fbe7368ab4536ca1a5e074fe74741" # ## 8-4 Plotting Offline with Cufflinks # # + _uuid="d43e390b979b5075e63e6836b26f150e6f9ed910" import cufflinks as cf iplot(cf.datagen.lines().iplot(asFigure=True, kind='scatter',xTitle='Dates',yTitle='Returns',title='Returns')) # + [markdown] _uuid="485e0412b2708e99457d9423449363a647e5e369" #
    # ## 9- Resource # To wrap up, I have prepared some resources that may be useful for you. # + [markdown] _uuid="1cddfa2ed52693fae8532576cc1064c05777354c" #
    # ## 9-1 Courses # There are a lot of online courses that can help you develop your knowledge, here I have just listed some of them: # # 1. [Machine Learning Certification by Stanford University (Coursera)](https://www.coursera.org/learn/machine-learning/) # # 2. [Machine Learning A-Z™: Hands-On Python & R In Data Science (Udemy)](https://www.udemy.com/machinelearning/) # # 3. [Deep Learning Certification by from deeplearning.ai (Coursera)](https://www.coursera.org/specializations/deep-learning) # # 4. [Python for Data Science and Machine Learning Bootcamp (Udemy)](Python for Data Science and Machine Learning Bootcamp (Udemy)) # # 5. [Mathematics for Machine Learning by Imperial College London](https://www.coursera.org/specializations/mathematics-machine-learning) # # 6. [Deep Learning A-Z™: Hands-On Artificial Neural Networks](https://www.udemy.com/deeplearning/) # # 7. [Complete Guide to TensorFlow for Deep Learning Tutorial with Python](https://www.udemy.com/complete-guide-to-tensorflow-for-deep-learning-with-python/) # # 8. [Data Science and Machine Learning Tutorial with Python – Hands On](https://www.udemy.com/data-science-and-machine-learning-with-python-hands-on/) # # 9. [Machine Learning Certification by University of Washington](https://www.coursera.org/specializations/machine-learning) # # 10. [Data Science and Machine Learning Bootcamp with R](https://www.udemy.com/data-science-and-machine-learning-bootcamp-with-r/) # 11. [Creative Applications of Deep Learning with TensorFlow](https://www.class-central.com/course/kadenze-creative-applications-of-deep-learning-with-tensorflow-6679) # 12. [Neural Networks for Machine Learning](https://www.class-central.com/mooc/398/coursera-neural-networks-for-machine-learning) # 13. [Practical Deep Learning For Coders, Part 1](https://www.class-central.com/mooc/7887/practical-deep-learning-for-coders-part-1) # 14. [Machine Learning](https://www.cs.ox.ac.uk/teaching/courses/2014-2015/ml/index.html) #
    # ## 9-2 Ebooks # If you love reading, here are **10 free machine learning books**: # 1. [Probability and Statistics for Programmers](http://www.greenteapress.com/thinkstats/) # 2. [Bayesian Reasoning and Machine Learning](http://web4.cs.ucl.ac.uk/staff/D.Barber/textbook/091117.pdf) # 3. [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/) # 4. [Understanding Machine Learning](http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/index.html) # 5. [A Programmer’s Guide to Data Mining](http://guidetodatamining.com/) # 6. [Mining of Massive Datasets](http://infolab.stanford.edu/~ullman/mmds/book.pdf) # 7. [A Brief Introduction to Neural Networks](http://www.dkriesel.com/_media/science/neuronalenetze-en-zeta2-2col-dkrieselcom.pdf) # 8. [Deep Learning](http://www.deeplearningbook.org/) # 9. [Natural Language Processing with Python](https://www.researchgate.net/publication/220691633_Natural_Language_Processing_with_Python) # 10. [Machine Learning Yearning](http://www.mlyearning.org/) #
    # ## 9-3 Cheat Sheets # Data Science is an ever-growing field, there are numerous tools & techniques to remember. It is not possible for anyone to remember all the functions, operations and formulas of each concept. That’s why we have cheat sheets. But there are a plethora of cheat sheets available out there, choosing the right cheat sheet is a tough task. So, I decided to collect them here # # Here I have selected the cheat sheets on the following criteria: comprehensiveness, clarity, and content [5]: # 1. [Quick Guide to learn Python for Data Science ](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/blob/master/cheatsheets/Data-Science-in-Python.pdf) # 1. [Python for Data Science Cheat sheet ](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/blob/master/cheatsheets/beginners_python_cheat_sheet.pdf) # 1. [Python For Data Science Cheat Sheet NumPy](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/blob/master/cheatsheets/Numpy_Python_Cheat_Sheet.pdf) # 1. [Exploratory Data Analysis in Python]() # 1. [Data Exploration using Pandas in Python](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/blob/master/cheatsheets/Data-Exploration-in-Python.pdf) # 1. [Data Visualisation in Python](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/blob/master/cheatsheets/data-visualisation-infographics1.jpg) # 1. [Python For Data Science Cheat Sheet Bokeh](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/blob/master/cheatsheets/Python_Bokeh_Cheat_Sheet.pdf) # 1. [Cheat Sheet: Scikit Learn ](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/blob/master/cheatsheets/Scikit-Learn-Infographic.pdf) # 1. [MLalgorithms CheatSheet](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/blob/master/cheatsheets/MLalgorithms-.pdf) # 1. [Probability Basics Cheat Sheet ](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/blob/master/cheatsheets/probability_cheatsheet.pdf) # + [markdown] _uuid="a4a58d0f0aa8204797f2b0836d6204404280d6f3" # #
    # ## 10- Conclusion # In this kernel you have been given an introduction to the main packages and ideas in the **data scientist's toolbox**. I hope you have had fun with it; I would also like to hear your feedback so I can keep this kernel updated. # + [markdown] _uuid="dc985ec01fe0e62afd495b8ec461359f085071b6" # # # --------------------------------------------------------------------- # Fork, Run and Follow this kernel on GitHub: # > ###### [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist) # # # ------------------------------------------------------------------------------------------------------------- # **I hope you find this kernel helpful and some UPVOTES would be very much appreciated.** # # ----------- # # + [markdown] _uuid="3eb9f355dc3885ea4b0f38968a4bf14d20de02fd" #
    # ## 11- References # 1. [Coursera](https://www.coursera.org/specializations/data-science-python) # 1. [GitHub](https://github.com/mjbahmani) # 1. [plot.ly](https://plot.ly/python/offline/) # 1. [tutorialspoint](https://www.tutorialspoint.com/python/python_classes_objects.htm) # 1. [Top 28 Cheat Sheets for Machine Learning](https://www.analyticsvidhya.com/blog/2017/02/top-28-cheat-sheets-for-machine-learning-data-science-probability-sql-big-data/) # ###### [Go to top](#top) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # Importing some python libraries and stuff. import matplotlib.pyplot as pl import numpy as np from numpy.random import randn import seaborn as sns # %matplotlib inline # Fixing figure sizes from pylab import rcParams rcParams['figure.figsize'] = 10,7 # - # # Discrete approximation to Brownian motion # + # Setting up some parameters. T = 1; # Final time n = 500; # Number of points to use in discretization Dt = float(T)/n; print 'Stepsize =', Dt,'.' # + def pathGenerate(npath,n, Dt=0.002): # Function that generates discrete approximations to a brownian path. Wiener = np.zeros([n,npath]) for j in xrange(npath): for i in xrange(n-1): Wiener[i+1,j] = Wiener[i,j]+np.sqrt(Dt)*randn() return Wiener t = np.linspace(0,T,n) # - Wiener = pathGenerate(10, n) WienerMean = np.mean(Wiener,axis=1) WienerVar = np.var(Wiener,axis=1) pl.errorbar(t, WienerMean,yerr=np.sqrt(WienerVar), color=sns.xkcd_rgb['pale red'],ecolor=sns.xkcd_rgb['denim blue'],linewidth=5) pl.legend(['Mean of paths', 'Uncertainty (standard deviation)'],loc=0) # First case is with ten paths. We can see that the mean path hasn't quite converged yet. Wiener = pathGenerate(100, n) WienerMean = np.mean(Wiener,axis=1) WienerVar = np.var(Wiener,axis=1) pl.errorbar(t, WienerMean,yerr=np.sqrt(WienerVar), color=sns.xkcd_rgb['pale red'],ecolor=sns.xkcd_rgb['denim blue'],linewidth=5) pl.legend(['Mean of paths', 'Uncertainty (standard deviation)'],loc=0) # The picture looks much better with a hundred paths. Wiener = pathGenerate(1000, n) WienerMean = np.mean(Wiener,axis=1) WienerVar = np.var(Wiener,axis=1) pl.errorbar(t, WienerMean,yerr=np.sqrt(WienerVar), color=sns.xkcd_rgb['pale red'],ecolor=sns.xkcd_rgb['denim blue'],linewidth=5) pl.legend(['Mean of paths', 'Uncertainty (standard deviation)'],loc=0) pl.title('Using a thousand simulated paths.',fontsize=15) # And with a thousand paths, we can clearly see the convergence of the mean. Also, the confidence region shows the correct growth. 
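# As an extra check, we can compare the sample variance of the simulated paths at the final time with the theoretical value Var(W_T) = T, which is the "correct growth" referred to above. A minimal sketch, assuming `Wiener` still holds the thousand simulated paths and `T` is the final time:
# +
# sample variance across paths at the last time step vs. the theoretical value T
sample_var_T = np.var(Wiener[-1, :])
print('Sample variance at t=T: %.3f (theoretical value: %.3f)' % (sample_var_T, T))
# -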
n=10 t = np.linspace(0,T,n) Wiener = pathGenerate(10, n) WienerMean = np.mean(Wiener,axis=1) WienerVar = np.var(Wiener,axis=1) pl.errorbar(range(0,n), WienerMean,yerr=np.sqrt(WienerVar), color=sns.xkcd_rgb['pale red'],ecolor=sns.xkcd_rgb['denim blue'],linewidth=5) pl.legend(['Mean of paths', 'Uncertainty (standard deviation)'],loc=0) pl.title('For comparison, here is another plot with only 10 points', fontsize=15) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Bonus: Temperature Analysis I import pandas as pd from datetime import datetime as dt # "tobs" is "temperature observations" df = pd.read_csv('Resources/hawaii_measurements.csv') df.head() df.tail() # Convert the date column format from string to datetime df.date = pd.to_datetime(df.date, infer_datetime_format=True) # Set the date column as the DataFrame index df = df.set_index(df['date']) #Drop date column df = df.drop(columns = 'date') df.head() # ### Compare June and December data across all years from scipy import stats # Filter data for desired months june_data=df[df.index.month==6] december_data=df[df.index.month==12] # Identify the average temperature for June june_data.mean() june_data.describe() # Identify the average temperature for December december_data.mean() december_data.describe() # Create collections of temperature data june_temp = june_data.tobs december_temp = december_data.tobs june_temp.head() december_temp.head() # Run paired t-test stats.ttest_ind(june_temp,december_temp) # ### Analysis # The mean temperatures found across all stations in June and December for the years 2010-2017 change by around 3.9 degrees F. When a paired t test was run, it had a high p-value of 3.90521, which deems is not statistically significant, therefore indicates strong evidence of a null hypothesis (meaning there exists no relationship in our data). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # How Does a Bike-Share Navigate Speedy Success # # ![](ghtop_images/header2.png) # ## Introduction # # Some quick analysis on Cycle bike-share data from Chicago # # Welcome to the Cyclistic bike-share analysis case study! In this case study, you will perform many real-world tasks of a junior # data analyst. You will work for a fictional company, Cyclistic, and meet different characters and team members. In order to # answer the key business questions, you will follow the steps of the data analysis process: ask, prepare, process, analyze, # share, and act. Along the way, the Case Study Roadmap tables — including guiding questions and key tasks — will help you # stay on the right path. # By the end of this lesson, you will have a portfolio-ready case study. Download the packet and reference the details of this # case study anytime. Then, when you begin your job hunt, your case study will be a tangible way to demonstrate your # knowledge and skills to potential employers. 
# ### Some imports # + deletable=false editable=false run_control={"frozen": true} # import pandas as pd # from bs4 import BeautifulSoup # import numpy as np # import requests # # from datetime import datetime, timezone # import os # import geopandas as gpd # # #get the current working directory # owd=os.getcwd() # - # ## A function to extract web zip files and save to /dat folder # # ### We'll just look at Q1 for 2020 data # # The data for 2020 is split into Q1 and the rest is for each month. So we need to download and convert each to a data frame and then combine them together # + deletable=false editable=false run_control={"frozen": true} # def extractStuff(url): # import requests, zipfile, io # import os # owd=os.getcwd() # r = requests.get(url) # z = zipfile.ZipFile(io.BytesIO(r.content)) # z.extractall(owd+"/dat/") # URL="https://divvy-tripdata.s3.amazonaws.com/" # noma =[['202004-divvy-tripdata'], # ['202005-divvy-tripdata'], # ['202006-divvy-tripdata'], # ['202007-divvy-tripdata'], # ['202008-divvy-tripdata'], # ['202009-divvy-tripdata'], # ['202010-divvy-tripdata'], # ['202011-divvy-tripdata'], # ['202012-divvy-tripdata']] # # + deletable=false editable=false run_control={"frozen": true} # for nom in noma: # extractStuff(URL+ nom[0]+".zip") # - # ### Load files, put in pandas data frame and have a look # # #### Lets load each csv and combine them # + deletable=false editable=false run_control={"frozen": true} # for i,nom in enumerate(noma): # if i>0: # df=pd.read_csv(owd+"/dat/"+nom[0]+'.csv') # dfAll=pd.concat([df,dfAll]) # print('1 ',nom[0]) # else: # dfAll=pd.read_csv(owd+"/dat/"+nom[0]+'.csv') # print('2',nom[0]) # - # ### How many NaN stations? # + deletable=false editable=false run_control={"frozen": true} # print('The percentage start stations NaN = {}'.format(100*np.shape(dfAll[dfAll['start_station_id'].isna()])[0] / np.shape(dfAll)[0]) )#95 282 3 114 796 # # print('The percentage end stations NaN = {}'.format(100*np.shape(dfAll[dfAll['end_station_id'].isna()])[0] / np.shape(dfAll)[0]) )#95 282 3 114 796 # # bothNa=dfAll[dfAll['start_station_id'].isna() | dfAll['end_station_id'].isna()] # print('The percentage start stations NaN = {}'.format(100*np.shape(bothNa)[0] / np.shape(dfAll)[0]) )#95 282 3 114 796 # + deletable=false editable=false run_control={"frozen": true} # import copy # dfUse=copy.copy(dfAll[dfAll['start_station_id'].notnull() & dfAll['end_station_id'].notnull()]) # dfUse.describe(include='all') # - # ### Now we need to convert the dates from object (i.e. string) to date format # # #### next add a new column as time for hire in hours # + deletable=false editable=false run_control={"frozen": true} # dfUse.loc[:,'started_at']=pd.to_datetime(dfUse['started_at'],infer_datetime_format=True) # dfUse.loc[:,'ended_at']=pd.to_datetime(dfUse['ended_at'],infer_datetime_format=True) # # delta=dfUse.iloc[:,3]-dfUse.iloc[:,2] # dd=delta.dt.total_seconds()/(60*60) # dfUse.insert(2,"hire_time_h",dd) # dfUse.head() # - # #### Maybe we want the day of the week? # The day of the week with Monday=0, Sunday=6. 
# + deletable=false editable=false run_control={"frozen": true} # dfUse.insert(3,'day_week',dfUse.loc[:,'started_at'].dt.dayofweek) # dfUse.head() # - # ### Lets also get the time on its own # + deletable=false editable=false run_control={"frozen": true} # dfUse.insert(4,'time_day',dfUse.loc[:,'started_at'].dt.hour + dfUse.loc[:,'started_at'].dt.minute/60) # dfUse.head() # - # ### And the distance travelled # + deletable=false editable=false run_control={"frozen": true} # def distanceLatLong(lat1,lon1,lat2,lon2): # # import numpy as np # # def deg2rad(deg): # return deg * np.pi/180 # # R = 6371; # Radius of the earth in km # dLat = deg2rad(lat2-lat1) # deg2rad below # dLon = deg2rad(lon2-lon1) # a = np.sin(dLat/2) * np.sin(dLat/2) + \ # np.cos(deg2rad(lat1)) * np.cos(deg2rad(lat2)) * \ # np.sin(dLon/2) * np.sin(dLon/2) # # c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1-a)) # d = R * c ## Distance in km # # return d # + deletable=false editable=false run_control={"frozen": true} # d=distanceLatLong(dfUse["start_lat"].values,dfUse["start_lng"].values,dfUse["end_lat"].values,dfUse["end_lng"].values) # dfUse.insert(11,'distance',d) # dfUse.describe() # - # ### Some issues arose above # - hire_time_h max and min values # # Looks like the error is there from the start, so lets delete them # # do the same for long times # + deletable=false editable=false run_control={"frozen": true} # dfUse=dfUse[dfUse.hire_time_h>=0] # # dfUse=dfUse[dfUse.hire_time_h<24] # - # ### Lets drop some columns for space # + deletable=false editable=false run_control={"frozen": true} # dfUse.drop(columns=["ride_id", "started_at","ended_at","start_station_name","end_station_name"],inplace=True) # #,"start_station_id","end_station_id"] # - # ### save # + deletable=false editable=false run_control={"frozen": true} # dfUse.to_csv('/data/df_2020.csv') # df=copy.copy(dfUse) # - df = pd.read_csv('/data/df_2020.csv') df # ### Put frequency location onto a map # + deletable=false editable=false run_control={"frozen": true} # import folium # from folium import plugins # from folium.plugins import HeatMap # # lat=df['start_lat'].values # lon=df['start_lng'].values # latlon = [lat, lon] # # maps = folium.Map(location=[lat[0],lon[0]], # zoom_start = 11) # # latlon=np.transpose(latlon) # # # Plot it on the map # HeatMap(latlon).add_to(maps) # # # Display the map # maps # - # ![](ghtop_images/ChiHeat.png) # Bit of a mess, grouping by region may be better # ## Plot some choroplots # open and modify the geojson file- seems to make life easier later # + deletable=false editable=false run_control={"frozen": true} # import geopandas as gpd # fname='Chicago.geojson' # chicago = gpd.read_file(fname) # # # neighborhoods aren't unique so lets use the index and call it ID # chicago.reset_index(inplace=True) # chicago.rename(columns={'index':'ID'},inplace=True) # # chicago.drop(columns=['sec_neigh','shape_area','shape_len'],inplace=True) # chicago.to_file("Chi_.json", driver="GeoJSON") # chicago.head() # - # ### Now we want to convert each station to a region within the json file # # #### First let's create a variable for each station, with location and station_id # + deletable=false editable=false run_control={"frozen": true} # #use mean here just in case some slight differences- big ones lets hope not! 
# dfStat=df.groupby(by=['start_station_id']).mean() # # dfStat=dfStat.drop(columns=['Unnamed: 0','hire_time_h','day_week','time_day','distance','end_lat','end_lng']) # dfStat.reset_index(inplace=True) # dfStat # - # #### Now for each station we want a JSON-area code # This is *slightly* convoluted # - scroll through each station # - for each station find if it's inside a Chigao_JSON region # - if not we assign if a value 1000 # - for those with no region find the nearest station that has a JSON-region (done in tab after this) # # whatChoro = json ID # whatwhat = station ID # + deletable=false editable=false run_control={"frozen": true} # from shapely.geometry import shape, Point # # # whatChoro=[] # whatwhat=[] # # check each polygon to see if it contains the point # i=0 # # scroll through each station # for istat in range(np.shape(dfStat)[0]): # i=0 # #create a point for the station # point=Point(dfStat.loc[istat,'start_lng'],dfStat.loc[istat,'start_lat']) # # #scroll through each geometery # for feature in chicago.ID: # polygon = shape(chicago.loc[i,'geometry']) # if polygon.contains(point): # #this gives the json region ID # whatChoro.append(chicago.loc[i,'ID']) # #this give the station id # whatwhat.append(dfStat.loc[istat,'start_station_id']) # break # # #if we don't get a match!! # if feature==chicago['ID'].iloc[-1]: # # import copy # # find distances lat2/lng2 this location # # lat1/lng1 all locations # lat1_=copy.copy(dfStat['start_lat']) # lon1_=copy.copy(dfStat['start_lng']) # lat2_=dfStat.loc[istat,'start_lat'] # lon2_=dfStat.loc[istat,'start_lng'] # # # #this gives the json region ID # whatChoro.append(1000) # #this give the station id # whatwhat.append(dfStat.loc[istat,'start_station_id']) # # # i=i+1 # - # #### This does the cleaning up if they don't have a json id # # This will handle when we don't get a match---> a reuse of the distance function with slight mods # + deletable=false editable=false run_control={"frozen": true} # def distanceLatLong_v2(lat1_,lon1_,lat2_,lon2_): # # import numpy as np # import math # def deg2rad(deg): # return deg * np.pi/180 # def inner(lat1,lon1,lat2,lon2): # R = 6371; # Radius of the earth in km # dLat = deg2rad(lat2-lat1) # deg2rad below # dLon = deg2rad(lon2-lon1) # a = np.sin(dLat/2) * np.sin(dLat/2) + \ # np.cos(deg2rad(lat1)) * np.cos(deg2rad(lat2)) * \ # np.sin(dLon/2) * np.sin(dLon/2) # # c = 2 * math.atan2(np.sqrt(a), np.sqrt(1-a)) # d = R * c ## Distance in km # if d==0: # d=1000 # return d # if np.shape(lat1_)[0]>1: # d=[] # for i in range(np.shape(lat1_)[0]): # d.append(inner(lat1_[i],lon1_[i],lat2_,lon2_)) # # else: # d=inner(lat1_,lon1_,lat2_,lon2_) # # return d # - # this scrolls through ones we didn't match and finds nearest JSON-id we did match # + deletable=false editable=false run_control={"frozen": true} # for i in range(np.shape(dfStat)[0]): # if whatChoro[i]==1000: # # #find distances lat2/lng2 this location # # lat1/lng1 all locations # lat1_=copy.copy(dfStat['start_lat']) # lon1_=copy.copy(dfStat['start_lng']) # lat2_=dfStat.loc[i,'start_lat'] # lon2_=dfStat.loc[i,'start_lng'] # ind=[idx for idx, element in enumerate(whatChoro) if element==1000] # lat1_[ind]=0 # lon1_[ind]=0 # d=distanceLatLong_v2(lat1_,lon1_,lat2_,lon2_) # indamin=d.index(min(d)) # # whatwhat[i]=whatwhat[indamin] # whatChoro[i]=whatChoro[indamin] # # print(i,indamin,whatChoro[i],whatChoro[indamin],min(d)) # - # ### Now we can insert a new column in df with the json ID # + deletable=false editable=false run_control={"frozen": true} # chicID=[] # for 
stat in df['start_station_id']: # # chicID.append(stat) # ind=[idx for idx, element in enumerate(whatwhat) if element==stat] # try: # chicID.append(whatChoro[ind[0]]) # except: # continue # + deletable=false editable=false run_control={"frozen": true} # df.insert(0,'ID',chicID) # df.head() # - # ### And represent each JSON region by how many times they're used # We'll take the count and divide it by the total- and because of the distribution we'll also take the log- basically hires are highly focussed on a few regions with many having low % # + deletable=false editable=false run_control={"frozen": true} # dfG=df.groupby('ID').count() # dfG.reset_index(inplace=True) # dfG=dfG[['ID','rideable_type']] # dfG.rename(columns={'rideable_type':'Frequency'}) # dfG.rideable_type=np.log(dfG.rideable_type/sum(dfG.rideable_type)) # dfG.head() # + deletable=false editable=false run_control={"frozen": true} # df.to_csv('/data/dfChoro_2020.csv') # - df = pd.read_csv('/data/dfChoro_2020.csv') # ### Now the plotting # We first read in the json file, add the df with our frequency values to it then we can plot the data # # ### Lets put this in a function to look at differences def bigChoro(dfIN,colname,choi): import folium LEGNOM=colname if choi=='count': dfG=dfIN.groupby('ID').count() dfG[colname]=dfG[colname]/(100*274/7) myscale = (dfG[colname].quantile((0,0.25,0.5,0.75,0.9,0.95,.97,1))).tolist() # np.linspace(dfG[colname].min(),dfG[colname].max(),10) LEGNOM='Number of journeys 100s per week' elif choi=='mean': dfG=dfIN.groupby('ID').mean() myscale = np.linspace(dfG[colname].min(),dfG[colname].max(),10) elif choi=='sum': dfG=dfIN.groupby('ID').sum() myscale = np.linspace(dfG[colname].min(),dfG[colname].max(),10) elif choi=='dayofweek': dfIN=dfIN[['ID',colname]] dfG=dfIN.groupby(['ID']).agg(lambda x:x.value_counts().index[0]) dfG[dfG[colname]>4]=5-dfG[dfG[colname]>4] myscale = np.array([-2.,0.,1.,2.,3.,4.]) LEGNOM='Day of week (-2 to -1 weekend, 0-4 Monday to Friday)' elif choi=='mode': dfIN=dfIN[['ID',colname]] dfIN[colname].astype('int32') dfG=dfIN.groupby(['ID']).agg(lambda x:x.value_counts().index[0]) myscale = np.linspace(dfG[colname].min(),dfG[colname].max(),10) dfG.reset_index(inplace=True) dfG=dfG[['ID',colname]] nil=gpd.read_file("Chi_.json") nil=nil[['ID','geometry']] # merge data frames nilpop=nil.merge(dfG,on="ID") #initial map m = folium.Map(location=[41.884,-87.6247], zoom_start=10,\ control_scale=True,tiles="Stamen Toner")#,tiles = t_list[1]) folium.TileLayer('CartoDB positron',name="Light Map",control=False).add_to(m) # (dfG['rideable_type'].quantile((0,.02,0.1,.25,0.5,0.75,0.9,0.95,0.98,1))).tolist() choropleth =folium.Choropleth( geo_data="Chi_.json", data=nilpop, threshold_scale=myscale, columns=['ID',colname], name='choropleth', fill_color='BuPu',#PuBuGn YlGn PuBuGn YlGnBu RdYlBu key_on= "feature.properties.ID", fill_opacity=0.7, line_opacity=0.2, nan_fill_color='gray', legend_name=LEGNOM, nan_fill_opacity =.5, ).add_to(m) folium.LayerControl().add_to(m) choropleth.geojson.add_child( folium.features.GeoJsonTooltip(['pri_neigh'],labels=False) ) return m dfIN.columns # + deletable=false editable=false run_control={"frozen": true} # colname='distance'#end_station_id # dfIN= df[df.member_casual=='member'] # m=bigChoro(dfIN,colname,'mean') # m # - # ![](ghtop_images/ChiDist.png) # + deletable=false editable=false run_control={"frozen": true} # colname='day_week'#end_station_id # dfIN= df[df.member_casual=='member'] # m=bigChoro(dfIN,colname,'dayofweek') # m # - # ![](ghtop_images/ChiDay.png) # 
+ deletable=false editable=false run_control={"frozen": true} # colname='time_day'#end_station_id # dfIN= df[df.member_casual=='member'] # m=bigChoro(dfIN,colname,'mode') # m # - # ![](ghtop_images/ChiDayTime.png) colname='time_day'#end_station_id dfIN= df[df.member_casual=='member'] m=bigChoro(dfIN,colname,'count') m # ![](ghtop_images/ChiCountMember.png) # + deletable=false editable=false run_control={"frozen": true} # colname='time_day'#end_station_id # dfIN= df[df.member_casual=='casual'] # m=bigChoro(dfIN,colname,'count') # m # - # ![](ghtop_images/ChiCountCasual.png) np.linspace(-2,4,7) np.array([-2.,0.,1.,2.,3.,4.]) import seaborn as sns import matplotlib as mpl import matplotlib.pyplot as plt # + sns.set_theme(style="ticks") f, ax = plt.subplots(figsize=(7, 5)) sns.despine(f) # Draw a nested boxplot to show bills by day and time sns.histplot(df,hue="member_casual", x="time_day", multiple="stack", palette="dark:b_r", edgecolor=".3", linewidth=.5,) sns.despine(offset=10, trim=True) # + from matplotlib.ticker import PercentFormatter df_=df[df.hire_time_h<5] df_=df_[df_.hire_time_h>0] sns.set_theme(style="ticks") binwidth = 5 f, ax1 = plt.subplots(figsize=(10, 7)) sns.despine(f) df2_=df_[df_.member_casual=='member'] # Draw a nested boxplot to show bills by day and time sns.histplot(df2_, x="hire_time_h", multiple="stack", palette="light:m_r", edgecolor=".3", linewidth=.5, stat='probability', log_scale=True, ax=ax1, label='member') ax2=ax1.twinx() df2_=df_[df_.member_casual=='casual'] # Draw a nested boxplot to show bills by day and time sns.histplot(df2_, x="hire_time_h", element="step",fill=False, color='red', linewidth=.8, stat='probability', log_scale=True, ax=ax2, label='casual') ax1.legend(loc='upper left') ax2.legend(loc='upper right') sns.despine(offset=10, trim=True) # ax.legend('Member','Casual') # + sns.set_theme(style="ticks") f, ax = plt.subplots(figsize=(10, 8)) sns.despine(f) df_=df[ df["distance"]>0.1 ] df_= df_[df_["distance"]<20] df_=df_[df_.member_casual=='member'] # Draw a nested boxplot to show bills by day and time sns.histplot(data=df_, x="distance", edgecolor=".3", linewidth=.5, stat='probability', label='member') sns.despine(offset=10, trim=True) df_=df[df["distance"]>0.1] df_= df_[df_["distance"]<20] df_=df_[df_.member_casual=='casual'] sns.histplot(data=df_, x="distance", linewidth=.8, color='r', stat='probability', label='casual', fill=False, element='step') ax.legend(loc='upper right') # + # f, ax = plt.subplots(figsize=(7, 5)) sns.set_theme(style="whitegrid") # Draw a nested boxplot to show bills by day and time ax=sns.histplot(df,hue="member_casual", x="day_week",palette="dark:b_r", multiple="dodge", bins=[0 ,1 ,2 ,3, 4, 5, 6,7], shrink=.9 ) sns.despine(offset=20, trim=True) aa=np.array([0,1,2,3,4,5,6])+.5 ax.set_xticks(aa) ax.set_xlim([0, 7.5]) lab=['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday'] ax.set_xticklabels(lab,rotation='vertical') ax.set_xlabel('Day of the week') # - # ### So after a quick look at the data (*maybe some plots need mods) some clear trends: # - Casuals use bikes more on weekends, members more on weekdays # - Members tend to use bikes in commuting times 7-9 am and 4-7 pm. 
Whereas casuals more spread but focussed later # - Casuals tend to use the bikes for longer and travel further from initial location # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np # + #Python code def python_fib(n): a, b = 0.0, 1.0 for i in range(n): tmp = a a += b b = tmp return a # %timeit python_fib(2000000000) # - # %load_ext cython # + language="cython" # # #Cypthon code # # def cython_fib(int n): # cdef double a = 0.0 # cdef double b = 1.0 # cdef double tmp # for i in range(n): # tmp = a # a = a + b # b = tmp # return a # # - # %timeit cython_fib(2000000000) # + language="cython" # # cdef float a = 129 #in seconds # cdef float b = 1.67 # # #how much faster is cypthon? # # cdef float c = a // b # # print(f'Cypthon is {c} times faster than python') # - print('Cypthon is fast') # + def say_hello (): print('Hello World') # %timeit say_hello # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Extensionmethods # Not all operators are loaded at import of rx. # Example: from_marbles import rx try: rx.Observable.from_marbles('a-b|') except Exception as ex: print 'error:', ex # shown only after ipython notebook kernel restart # -> to see whats there don't use e.g. `dir(Observable)` but find # 'def from_marbles' in the rx directory, to see the module, # then import it: ''' ~/GitHub/RxPY/rx $ ack 'def from_marbl' testing/marbles.py 90:def from_marbles(self, scheduler=None): ''' import rx.testing.marbles def show(x): print (x) stream = rx.Observable.from_marbles('a-b--c|').to_blocking().subscribe(show) # # Async Operations # It is useful to understand on a high level how RxPY handles asyncronity and when. # E.g. naively you might want to know, when notifying a value to a subscriber, what other subscribers are present. # This makes no sense to ask (I think in general in reactive programming) and it will be clear looking at an example. # # Consider timing and thread outputs in the following: # + # ============================= # change these (both in millis) delay_stream, slow_emit = 0, 0 # ============================= import rx, threading, random, time thread = threading.currentThread def call_observer(obs): '''observer functions are invoked, blocking''' print_out(obs.__class__.__name__, hash(obs)) for i in range(2): obs.on_next(1) if slow_emit: time.sleep(slow_emit/1000) obs.on_next(1) stream = rx.Observable.create(call_observer).take(10) if delay_stream: stream = stream.delay(delay_stream) def print_out(*v): '''printout of current time, v, and current thread''' v_pretty = ' '.join([str(s) for s in v]) print ('%.8f - %30s - %s\n' % (time.time(), v_pretty, thread().getName())) d = stream.subscribe(lambda x: print_out('Observer 1', x)) d = stream.subscribe(lambda x: print_out('Observer 2', x)) # - # - As long as there is no time related stream operator involved, then RXPy does everything *syncrononous*. # - RXPy goes async only when it has to, according to the nature of the async operation declared by the user. # - It defaults to reasonable mechanics, e.g. using threading. # - You can overwrite these defaults, by picking a "scheduler" (e.g. gevent, e.g. twisted, e.g. 
futures) # # > => In the `call_observer` function you can't know about the concurrency situation # > It soleley depends on the design of the stream operations applied. # > See `.ref_count()` though, for published streams # # Check the [`.observe_on`](./Part%20VII%20-%20Meta%20Operations.ipynb#...when-it-notifies-observers-observe_on) example to get a deeper understanding how scheduling works. # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 15: Off to analyzing text # Way to go! You have already learned a lot of essential components of the Python language. Being able to deal with data structures, import packages, build your own functions and operate with files is not only essential for most tasks in Python, but also a prerequisite for text analysis. We have applied some common preprocessing steps like casefolding/lowercasing, punctuation removal, and stemming/lemmatization. Did you know that there are some very useful NLP packages and modules that do some of these steps? One that is often used in text analysis is the Python package **NLTK (the Natural Language Toolkit)**. # # ### At the end of this chapter, you will be able to: # * have an idea of the NLP tasks that constitute an NLP pipeline # * use the functions of the NLTK module to manipulate the content of files for NLP purposes (e.g. sentence splitting, tokenization, POS-tagging, and lemmatization); # * do nesting of multiple for-loops or files # # ### More NLP software for Python: # * [NLTK](http://www.nltk.org/) # * [SpaCy](https://spacy.io/) # * [Stanford CoreNLP](https://stanfordnlp.github.io/CoreNLP/index.html) # * [About Python NLP libraries](https://elitedatascience.com/python-nlp-libraries) # # # If you have **questions** about this chapter, please contact us **()**. # # 1 A short intro to text processing # There are many aspects of text we can (try to) analyze. Commonly used analyses conducted in Natural Language Processing (**NLP**) are for instance: # # * determining the part of speech of words in a text (verb, noun, etc.) # * analyzing the syntactic relations between words and phrases in a sentence (i.e., syntactic parsing) # * analyzing which entities (people, organizations, locations) are mentioned in a text # # ...and many more. Each of these aspects is addressed within its own **NLP task**. # # **The NLP pipeline** # # Usually, these tasks are carried out sequentially because they depend on each other. For instance, we need to first tokenize the text (split it into words) in order to be able to assign part-of-speech tags to each word. This sequence is often called an **NLP pipeline**. For example, a general pipeline could consist of the components shown below (taken from [here](https://www.slideshare.net/YuriyGuts/natural-language-processing-nlp)) You can see the NLP pipeline of the NewsReader project [here](http://www.newsreader-project.eu/files/2014/02/SystemArchitecture.png). (you can ignore the middle part of the picture, and focus on the blue and green boxes in the outer row). # # # # In this chapter we will look into four simple NLP modules that are nevertheless very common in NLP: **tokenization, sentence splitting**, **lemmatization** and **POS tagging**. 
# # There are also more advanced processing modules out there - feel free to do some research yourself :-) # # 2 The NLTK package # NLTK (Natural Language Processing Toolkit) is a module we can use for most fundamental aspects of natural language processing. There are many more advanced approaches out there, but it is a good way of getting started. # # Here we will show you how to use it for tokenization, sentence splitting, POS tagging, and lemmatization. These steps are necessary processing steps for most NLP tasks. # # We will first give you an overview of all tasks and then delve into each of them in more detail. # # Before we can use NLTK for the first time, we have to make sure it is downloaded and installed on our computer (some of you may have already done this). # # To install NLTK, please try to run the following two cells. If this does not work, please try and follow the [documentation](http://www.nltk.org/install.html). If you don't manage to get this to work, please ask for help. # + language="bash" # pip install nltk # - # Once you have downloaded the NLTK book, you do not need to run the download again. If you are using the NLTK again, it is sufficient to import it. # + # downloading nltk import nltk nltk.download('book') # - # Now that we have installed and downloaded NLTK, let's look at an example of a simple NLP pipeline. In the following cell, you can observe how we tokenize raw text into tokens and setnences, perform part of speech tagging and lemmatize some of the tokens. Don't worry about the details just yet - we will go trhough them step by step. # + text = "This example sentence is used for illustrating some basic NLP tasks. Language is awesome!" # Tokenization tokens = nltk.word_tokenize(text) # Sentence splitting sentences = nltk.sent_tokenize(text) # POS tagging tagged_tokens = nltk.pos_tag(tokens) # Lemmatization lmtzr = nltk.stem.wordnet.WordNetLemmatizer() lemma=lmtzr.lemmatize(tokens[4], 'v') # Printing all information print(tokens) print(sentences) print(tagged_tokens) print(lemma) # - # ## 2.1 Tokenization and sentence splitting with NLTK # ### 2.1.1 `word_tokenize()` # Now, let's try tokenizing our Charlie story! First, we will open and read the file again and assign the file contents to the variable `content`. Then, we can call the `word_tokenize()` function from the `nltk` module as follows: # + with open("../Data/Charlie/charlie.txt") as infile: content = infile.read() tokens = nltk.word_tokenize(content) print(type(tokens), len(tokens)) print(tokens) # - # As you can see, we now have a list of all words in the text. The punctuation marks are also in the list, but as separate tokens. # ### 2.1.2 `sent_tokenize()` # Another thing that NLTK can do for you is to split a text into sentences by using the `sent_tokenize()` function. We use it on the entire text (as a string): # + with open("../Data/Charlie/charlie.txt") as infile: content = infile.read() sentences = nltk.sent_tokenize(content) print(type(sentences), len(sentences)) print(sentences) # - # We can now do all sorts of cool things with these lists. For example, we can search for all words that have certain letters in them and add them to a list. Let's say we want to find all present participles in the text. 
We know that present participles end with *-ing*, so we can do something like this: # + # Open and read in file as a string, assign it to the variable `content` with open("../Data/Charlie/charlie.txt") as infile: content = infile.read() # Split up entire text into tokens using word_tokenize(): tokens = nltk.word_tokenize(content) # create an empty list to collect all words having the present participle -ing: present_participles = [] # looking through all tokens for token in tokens: # checking if a token ends with the present parciciple -ing if token.endswith("ing"): # if the condition is met, add it to the list we created above (present_participles) present_participles.append(token) # Print the list to inspect it print(present_participles) # - # This looks good! We now have a list of words like *boiling*, *sizzling*, etc. However, we can see that there is one word in the list that actually is not a present participle (*ceiling*). Of course, also other words can end with *-ing*. So if we want to find all present participles, we have to come up with a smarter solution. # ## 2.2. Part-of-speech (POS) tagging # Once again, NLTK comes to the rescue. Using the function `pos_tag()`, we can label each word in the text with its part of speech. # # To do pos-tagging, you first need to tokenize the text. We have already done this above, but we will repeat the steps here, so you get a sense of what an NLP pipeline may look like. # ### 2.2.1 `pos_tag()` # To see how `pos_tag()` can be used, we can (as always) look at the documentation by using the `help()` function. As we can see, `pos_tag()` takes a tokenized text as input and returns a list of tuples in which the first element corresponds to the token and the second to the assigned pos-tag. # As always, we can start by reading the documentation: help(nltk.pos_tag) # + # Open and read in file as a string, assign it to the variable `content` with open("../Data/Charlie/charlie.txt") as infile: content = infile.read() # Split up entire text into tokens using word_tokenize(): tokens = nltk.word_tokenize(content) # Apply pos tagging to the tokenized text tagged_tokens = nltk.pos_tag(tokens) # Inspect pos tags print(tagged_tokens) # - # ### 2.2.2 Working with POS tags # As we saw above, `pos_tag()` returns a list of tuples: The first element is the token, the second element indicates the part of speech (POS) of the token. # # This POS tagger uses the POS tag set of the Penn Treebank Project, which can be found [here](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html). For example, all tags starting with a V are used for verbs. # # We can now use this, for example, to identify all the verbs in a text: # # + # Open and read in file as a string, assign it to the variable `content` with open("../Data/Charlie/charlie.txt") as infile: content = infile.read() # Apply tokenization and POS tagging tokens = nltk.word_tokenize(content) tagged_tokens = nltk.pos_tag(tokens) # List of verb tags (i.e. tags we are interested in) verb_tags = ["VBD", "VBG", "VBN", "VBP", "VBZ"] # Create an empty list to collect all verbs: verbs = [] # Iterating over all tagged tokens for token, tag in tagged_tokens: # Checking if the tag is any of the verb tags if tag in verb_tags: # if the condition is met, add it to the list we created above verbs.append(token) # Print the list to inspect it print(verbs) # - # ## 2.3. Lemmatization # We can also use NLTK to lemmatize words. 
# # The lemma of a word is the form of the word which is usually used in dictionary entries. This is useful for many NLP tasks, as it gives a better generalization than the strong a word appears in. To a computer, `cat` and `cats` are two completely different tokens, even though we know they are both forms of the same lemma. # # # ### 2.3.1 The WordNet lemmatizer # We will use the WordNetLemmatizer for this using the `lemmatize()` function. In the code below, we loop through the list of verbs, lemmatize each of the verbs, and add them to a new list called `verb_lemmas`. Again, we show all the processing steps (consider the comments in the code below): # + ################################################################################# #### Process text as explained above ### with open("../Data/Charlie/charlie.txt") as infile: content = infile.read() tokens = nltk.word_tokenize(content) tagged_tokens = nltk.pos_tag(tokens) verb_tags = ["VBD", "VBG", "VBN", "VBP", "VBZ"] verbs = [] for token, tag in tagged_tokens: if tag in verb_tags: verbs.append(token) print(verbs) ############################################################################# #### Use the list of verbs collected above to lemmatize all the verbs ### # Instatiate a lemmatizer object lmtzr = nltk.stem.wordnet.WordNetLemmatizer() # Create list to collect all the verb lemmas: verb_lemmas = [] for participle in verbs: # For this lemmatizer, we need to indicate the POS of the word (in this case, v = verb) lemma = lmtzr.lemmatize(participle, "v") verb_lemmas.append(lemma) print(verb_lemmas) # - # **Note about the wordnet lemmatizer:** # # We need to specify a POS tag to the WordNet lemmatizer, in a WordNet format ("n" for noun, "v" for verb, "a" for adjective). If we do not indicate the Part-of-Speech tag, the WordNet lemmatizer thinks it is a noun (this is the default value for its part-of-speech). See the examples below: test_nouns = ('building', 'applications', 'leafs') for n in test_nouns: print(f"Noun in conjugated form: {n}") default_lemma=lmtzr.lemmatize(n) # default lemmatization, without specifying POS, n is interpretted as a noun! print(f"Default lemmatization: {default_lemma}") verb_lemma=lmtzr.lemmatize(n, 'v') print(f"Lemmatization as a verb: {verb_lemma}") noun_lemma=lmtzr.lemmatize(n, 'n') print(f"Lemmatization as a noun: {noun_lemma}") print() test_verbs=('grew', 'standing', 'plays') for v in test_verbs: print(f"Verb in conjugated form: {v}") default_lemma=lmtzr.lemmatize(v) # default lemmatization, without specifying POS, v is interpretted as a noun! print(f"Default lemmatization: {default_lemma}") verb_lemma=lmtzr.lemmatize(v, 'v') print(f"Lemmatization as a verb: {verb_lemma}") noun_lemma=lmtzr.lemmatize(v, 'n') print(f"Lemmatization as a noun: {noun_lemma}") print() # # 3 Nesting # So far, we typically used a single for-loop, or we were opening a single file at a time. In Python (and most programming languages), one can **nest** multiple loops or files in one another. For instance, we can use one (outer) for-loop to iterate through files, and then for each file iterate through all its sentences (internal for-loop). As we have learned above, `glob` is a convenient way of creating a list of files. # # You might think: can we stretch this on more levels? Iterate through files, then iterate through the sentences in these files, then iterate through each word in these sentences, then iterate through each letter in these words, etc. This is possible. 
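# For instance, here is a minimal, self-contained sketch (not part of the original chapter) of three levels of nesting (sentences, then tokens, then the characters of each token), using a short hard-coded text so it runs without any input files:

# +
import nltk

text = "Python is fun. Nesting loops is easy!"

# Loop 1: iterate over the sentences in the text
for sentence in nltk.sent_tokenize(text):
    # Loop 2: iterate over the tokens in each sentence
    for token in nltk.word_tokenize(sentence):
        # Loop 3: iterate over the characters in each token
        for character in token:
            print(token, "->", character)
# -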
Python (and most programming languages) allow you to perform nesting with (in theory) as many loops as you want. Keep in mind that nesting too much will eventually cause computational problems, but this also depends on the size of your data. # # For the tasks we are treating here, a a couple of levels of nesting are fine. # # In the code below, we want get an idea of the number and length of the sentences in the texts stored in the `../Data/dreams` directory. We do this by creating two for loops: We iterate over all the files in the directory (loop 1), apply sentence tokenization and iterate over all the sentences in the file (loop 2). # # Look at the code and comments below to figure out what is going on: # + import glob ### Loop 1 #### # Loop1: iterate over all the files in the dreams directory for filename in glob.glob("../Data/dreams/*.txt"): # read in the file and assign the content to a variable with open(filename, "r") as infile: content = infile.read() # split the content into sentences sentences = nltk.sent_tokenize(content) # Print the number of sentences in the file print(f"INFO: File {filename} has {len(sentences)} sentences") # For each file, assign a number to each sentence. Start with 0: counter=0 #### Loop 2 #### # Loop 2: loop over all the sentences in a file: for sentence in sentences: # add 1 to the counter counter+=1 # tokenize the sentence tokens=nltk.word_tokenize(sentence) # print the number of tokens per sentence print(f"Sentence {counter} has {len(tokens)} tokens") # print an empty line after each file (this belongs to loop 1) print() # - # # 4 Putting it all together # In this section, we will use what we have learned above to write a small NLP program. We will go through all the steps and show how they can be put together. In the last chapters, we have already learned how to write functions. We will make use of this skill here. # # Our goal is to collect all the nouns from Vickie's dream reports. # # Before we write actual code, it is always good to consider which steps we need to carry out to reach the goal. # # Important steps to remember: # # * create a list of all the files we want to process # * open and read the files # * tokenize the texts # * perform pos-tagging # * collect all the tokens analyzed as nouns # # Remember, we first needed to import `nltk` to use it. # ## 4.1 Writing a processing function for a single file # # Since we want to carry out the same task for each of the files, it is very useful (and good practice!) to write a single function which can do the processing. The following function reads the specified file and returns the tokens with their POS tags: # + import nltk def tag_tokens_file(filepath): """Read the contents of the file found at the location specified in FILEPATH and return a list of its tokens with their POS tags.""" with open(filepath, "r") as infile: content = infile.read() tokens = nltk.word_tokenize(content) tagged_tokens = nltk.pos_tag(tokens) return tagged_tokens # - # Now, instead of having to open a file, read the contents and close the file, we can just call the function `tag_tokens_file` to do this. 
We can test it on a single file: filename = "../Data/dreams/vickie1.txt" tagged_tokens = tag_tokens_file(filename) print(tagged_tokens) # ## 4.2 Iterating over all the files and applying the processing function # We can also do this for each of the files in the `../Data/dreams` directory by using a for-loop: # + import glob # Iterate over the `.txt` files in the directory and perform POS tagging on each of them for filename in glob.glob("../Data/dreams/*.txt"): tagged_tokens = tag_tokens_file(filename) print(filename, "\n", tagged_tokens, "\n") # - # ## 4.3 Collecting all the nouns # Now, we extend this code a bit so that we don't print all POS-tagged tokens of each file, but we get all (proper) nouns from the texts and add them to a list called `nouns_in_dreams`. Then, we print the set of nouns: # + # Create a list that will contain all nouns nouns_in_dreams = [] # Iterate over the `.txt` files in the directory and perform POS tagging on each of them for filename in glob.glob("../Data/dreams/*.txt"): tagged_tokens = tag_tokens_file(filename) # Get all (proper) nouns in the text ("NN" and "NNP") and add them to the list for token, pos in tagged_tokens: if pos in ["NN", "NNP"]: nouns_in_dreams.append(token) # Print the set of nouns in all dreams print(set(nouns_in_dreams)) # - # Now we have an idea what Vickie dreams about! # # # Exercises # **Exercise 1:** # # Try to collect all the present participles in the the text store in `../Data/Charlie/charlie.txt` using the NLTK tokenizer and POS-tagger. # + # you code here # - # You should get the following list: # `['boiling', 'bubbling', 'hissing', 'sizzling', 'clanking', 'running', 'hopping', 'knowing', 'rubbing', 'cackling', 'going']` # we can test our code using the assert statement (don't worry about this now, # but if you want to use it, you can probably figure out how it works yourself :-) # If our code is correct, we should get a compliment :-) assert len(present_participles) == 11 and type(present_participles[0]) == str print("Well done!") # **Exercise 2:** # # The resulting list `verb_lemmas` above contains a lot of duplicates. Do you remember how you can get rid of these duplicates? Create a set in which each verb occurs only once and name it `unique_verbs`. Then print it. # + ## the list is stored under the variable 'verb_lemmas' # your code here # - # Test your code here! If your code is correct, you should get a compliment :-) assert len(unique_verbs) == 28 print("Well done!") # **Exercise 3:** # # Now use a for-loop to count the number of times that each of these verb lemmas occurs in the text! For each verb in the list you just created, get the count of this verb in `charlie.txt` using the `count()` method. Create a dictionary that contains the lemmas of the verbs as keys, and the counts of these verbs as values. Refer to the notebook about Topic 1 if you forgot how to use the `count()` method or how to create dictionary entries! # # Tip: you don't need to read in the file again, you can just use the list called verb_lemmas. # + verb_counts = {} # Finish this for-loop for verb in unique_verbs: # your code here print(verb_counts) # - # Test your code here! If your code is correct, you should get a compliment :-) assert len(verb_counts) == 28 and verb_counts["bubble"] == 1 and verb_counts["be"] == 9 print("Well done!") # **Exercise 4:** # # Write your counts to a file called `charlie_verb_counts.txt` and write it to `../Data/Charlie/charlie_verb_counts.txt` in the following format: # # verb, count # # verb, count # # ... 
# # Don't forget to use newline characters at the end of each line. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Working with APIs in Python # # In Python, there are many libraries to make HTTP requests. We will use a 3rd-party library called "requests", which is very easy to use and very popular. # # Making a "GET" request is as simple as: # # ```python # import requests # # res = requests.get(url) # returns a "Response" object # res.content # has the "body" of the response # ``` # # You might need to install the requests library! # # You can do that with the following code in a Jupyter cell: # # ```python # # ! pip install requests # ``` # # Or, if you're using anaconda, optionally you can also do: # # ```python # # ! conda install -c anaconda requests # ``` # ## Pokemon API # # There is a simple, open API called "pokeapi" that allows us to make requests and see how to use APIs. Like everything, we first look at the documentation: # # https://pokeapi.co/docs/v2.html # # The video below will walk you through how to read the documentation page. # + from IPython.lib.display import YouTubeVideo YouTubeVideo('5-li5umLyGM', width=640,height=360) # + # Let's see how to make a GET request to the API: import requests # let's take a look at the "Pokemon" resource res = requests.get('https://pokeapi.co/api/v2/pokemon') # the .json() method on the Response class essentially # wraps a call to `json.loads` on the response body # for us: res.json() # + # Exercise 1: # Create a Dataframe with all the Pokemon names and the URL # that can be used to get detailed information about them via # the API: # # HINT: Take a look at the "next" property in the JSON API # response. # name | url | # -----------|----------------------- # bulbasaur | https://pokeapi.co/api/v2/pokemon/1/ # ... # squirtle | https://pokeapi.co/api/v2/pokemon/7/ # ... # - # ## Exercise 2: Exchange Rates # # Imagine that you work with financial assets which are denominated in different currencies. You analyze this data regularly, and want to create a "transformation" function that transforms all your assets into EUR prices, based on today's exchange rate. # # Your data with the local-currency-denominated value of each asset lives in a file called "assets.csv" which should be located in the same folder as this notebook. # # Write a "data loading" function that: # # 1. Reads the data, given the path to the file. # 2. Returns a dataframe with an additional column that has the assets value in euros, as of today. # # Use this free API to get today's exchange rates: https://exchangeratesapi.io/. You will need to read the documentation and try it out to see how it works. # # HINT: Write a separate function to get the current exchange rates - a function that can be reused! 
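# One possible starting point (a sketch, not a definitive solution): it assumes the rates endpoint returns JSON shaped like `{"rates": {"USD": 1.08, ...}}` with EUR as the base, and that `assets.csv` has `currency` and `value` columns. Check the real API documentation and adjust the URL, parameters, and column names to match your data.

# +
import pandas as pd
import requests


def get_eur_rates():
    """Fetch today's exchange rates with EUR as the base currency.

    Assumes the endpoint returns JSON like {"rates": {"USD": 1.08, ...}}.
    """
    res = requests.get("https://api.exchangeratesapi.io/latest?base=EUR")
    res.raise_for_status()
    return res.json()["rates"]


def load_assets_in_eur(path):
    """Read the assets file and add a column with each asset's value in EUR.

    Assumes columns named 'currency' and 'value'; rename to match your file.
    """
    assets = pd.read_csv(path)
    rates = get_eur_rates()
    # rates are units of local currency per 1 EUR, so divide; EUR itself is
    # not listed in the rates, hence the default of 1.0
    assets["value_eur"] = assets.apply(
        lambda row: row["value"] / rates.get(row["currency"], 1.0), axis=1
    )
    return assets


# Example usage (uncomment once assets.csv is in place):
# load_assets_in_eur("assets.csv").head()
# -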
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # TITLE # + # %matplotlib inline import matplotlib.pyplot as plt # magics and warnings # %load_ext autoreload # %autoreload 2 import warnings; warnings.simplefilter('ignore') import os, random, codecs, json, time from tqdm.notebook import tqdm import pandas as pd import numpy as np seed = 99 random.seed(seed) np.random.seed(seed) import nltk, sklearn import matplotlib.pyplot as plt import seaborn as sns sns.set(style="white") sns.set_context("notebook", font_scale=1.2, rc={"lines.linewidth": 2.5}) # + # datasets who_latest = "datasets/WHO_18_03_2020.csv" dimensions_latest = "datasets/Dimensions_18_03_2020.csv" df_who = pd.read_csv(who_latest) df_dimensions = pd.read_csv(dimensions_latest) # + # clean DOIs def clean_doi(d): if isinstance(d,str): d = d.replace("https://doi.org/","") d = d.replace("doi:","") return d return d # - df_who["DOI"] = df_who["DOI"].apply(clean_doi) df_dimensions["DOI"] = df_dimensions["DOI"].apply(clean_doi) df_who.head() df_dimensions.head() # + # check DOIs print("WHO") print(df_who.shape) print(df_who[pd.notna(df_who["DOI"])].shape) print("Dimensions") print(df_dimensions.shape) print(df_dimensions[pd.notna(df_dimensions["DOI"])].shape) # - df_join = df_dimensions.join(df_who.set_index("DOI"), how='inner', on="DOI", lsuffix='dimensions', rsuffix='who') df_join = df_join[pd.notna(df_join["DOI"])] df_join.shape df_join.head() who_dois = df_who[pd.notnull(df_who["DOI"])]["DOI"].tolist() dimensions_dois = df_dimensions[pd.notnull(df_dimensions["DOI"])]["DOI"].tolist() dimensions_pmids = df_dimensions[(pd.notnull(df_dimensions["PMID"])) & ~(pd.notnull(df_dimensions["DOI"]))]["PMID"].tolist() len(set(dimensions_dois).intersection(set(who_dois))) all_dois = list(set(dimensions_dois).union(set(who_dois))) print(len(all_dois)) extra_pmids = list(set(dimensions_pmids)) print(len(extra_pmids)) # + # load Scite data # + scite_folder = "datasets/Scite_COVID_Dimensions" df_covid = pd.read_csv(os.path.join(scite_folder,"covid.csv")) df_sources = pd.read_csv(os.path.join(scite_folder,"covid-source-tallies.csv")) df_targets = pd.read_csv(os.path.join(scite_folder,"covid-target-tallies.csv")) print(df_sources.shape) print(df_targets.shape) # - df_sources.drop_duplicates(subset="doi", keep = False, inplace = True) df_targets.drop_duplicates(subset="doi", keep = False, inplace = True) print(df_sources.shape) print(df_targets.shape) df_covid.head() df_sources.head() df_targets.head() df_sources.describe() df_targets.describe() df_sources.sort_values("supporting", ascending=False).head(20) df_targets.sort_values("supporting", ascending=False).head(20) sns.scatterplot("supporting","total",data=df_targets) sns.scatterplot("contradicting","total",data=df_targets) sns.scatterplot("mentioning","total",data=df_targets) sns.scatterplot("mentioning","total",data=df_sources) # ### Index of support # + # For each source, calculate a vector of supporting - contradicting - mentioning - total cited publications # - len(set(df_sources.doi.tolist()).intersection(set(df_covid.source_doi.tolist())))/len(set(df_sources.doi.tolist())) for source in df_sources.doi.tolist(): # get cited targets = list(set(df_covid[df_covid.source_doi == source].target_doi.tolist())) print(len(targets)) df_sources.doi.tolist() # --- # jupyter: # jupytext: # text_representation: # extension: .py # 
format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: python37 # language: python # name: python37 # --- # # 4. Linear Models for Classification # + import sys sys.path.append("../") # %matplotlib inline import matplotlib.pyplot as plt import numpy as np from numpy import asarray from numpy.linalg import inv, pinv from prmlmy.util import cv_, norm2s, calc_range from prmlmy.plot_util import plot_decision_boundary, grid_plot seed = 12 # - from importlib import reload import prmlmy.preprocess; reload(prmlmy.preprocess) import prmlmy.linear; reload(prmlmy.linear); from prmlmy.linear import LinearDiscriminantAnalysis # + import numpy as np import matplotlib.pyplot as plt # %matplotlib inline from prmlmy.pipe import SimplePipe from prmlmy.preprocess import PolynomialFeature from prmlmy.linear import ( LinearDiscriminantAnalysis, BayesianLogisticRegression, LeastSquaresClassifier, FishersLinearDiscriminant, LogisticRegression, # Perceptron, # SoftmaxRegression ) # - def create_toy_data(size=50, add_outliers=False, add_class=False, seed=seed): np.random.seed(seed) x0 = np.random.normal(size=size).reshape(-1, 2) - 1 x1 = np.random.normal(size=size).reshape(-1, 2) + 1. if add_outliers: x_1 = np.random.normal(size=10).reshape(-1, 2) + np.array([5., 10.]) return np.concatenate([x0, x1, x_1]), np.concatenate([np.zeros(25), np.ones(30)]).astype(np.int) if add_class: x2 = np.random.normal(size=size).reshape(-1, 2) + 3. return np.concatenate([x0, x1, x2]), np.concatenate([np.zeros(25), np.ones(25), 2 + np.zeros(25)]).astype(np.int) return np.concatenate([x0, x1]), np.concatenate([np.zeros(size//2), np.ones(size//2)]).astype(np.int) def fit_and_plot_decision_boundary(model, data_name, title, ax=None): X_train, y_train = data_map[data_name] model.fit(X_train, y_train) plot_decision_boundary(model, X_train, y_train, ax=ax, title=title) def fit_and_plot_decision_proba(model, data_name, title, ax=None): X_train, y_train = data_map[data_name] model.fit(X_train, y_train) plot_decision_proba(model, X_train, y_train, ax=ax, title=title) data_map = { "base": create_toy_data(), "outliers": create_toy_data(add_outliers=True), "3 class": create_toy_data(add_class=True), "big": create_toy_data(size=1000), } # ## 4.1 Discriminant Functions # ### 4.1.3 Least squares for classification data_names = ["base", "outliers", "3 class"] model = SimplePipe([ PolynomialFeature(1), LeastSquaresClassifier(), ]) grid_plot([model], data_names, fit_and_plot_decision_boundary, row_names=[""]) # ### 4.1.4 Fisher's linear discriminant data_names = ["base", "outliers"] models = [FishersLinearDiscriminant()] grid_plot(models, data_names, fit_and_plot_decision_boundary, row_names=[""]) # ## 4.3 Probabilistic Discriminative Models # ### Linear Discriminant Analysis from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDASklearn models = [LinearDiscriminantAnalysis(), LDASklearn()] model_names = ["My LDA", "SKL LDA"] data_names = ["base", "outliers"] grid_plot(models, data_names, fit_and_plot_decision_boundary, row_names=model_names) # generative model 이니 데이터를 샘플링해보자 # + model = LinearDiscriminantAnalysis() def plot_gen(model, data_name, title, ax): X_train, y_train = data_map[data_name] model.fit(X_train, y_train) xs, ys = model.generate(1000) ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, marker='o', s=80) ax.scatter(xs[:, 0], xs[:, 1], c=ys, s=20, alpha=0.2) ax.set_title(title) grid_plot([model], ["base", "outliers"], plot_gen, row_names=['']) # - # ### Logistic 
Regression from importlib import reload import prmlmy.preprocess; reload(prmlmy.preprocess) import prmlmy.linear; reload(prmlmy.linear); import prmlmy.util; reload(prmlmy.util); import prmlmy.tfmodels; reload(prmlmy.tfmodels) from prmlmy.linear import LogisticRegression from prmlmy.tfmodels import LogisticRegression as TFLR from sklearn.linear_model import LogisticRegression as SKLR models = [ LogisticRegression(solver="gd", eta=1e-3, max_iter=200), LogisticRegression(solver="nr", max_iter=200), SKLR(solver="lbfgs"), ] model_names = ["gd", "nr", "skl"] data_names = ["base", "outliers"] grid_plot(models, data_names, fit_and_plot_decision_boundary, row_names=model_names) # ### tf쪽에서는 gradient 계산쪽이 의도대로 동작하지 않아서, 추후 tf를 더 살펴보고 구현하도록 보류 # + # X_train, y_train = data_map["base"] # np.random.seed(123) # model = TFLR(eta=.01, max_iter=100, fit_intercept=True) # model.fit(X_train, y_train) # plot_decision_boundary(model, X_train, y_train) # plt.show() # # print(model.score(X_train, y_train)) # history = np.concatenate(model.history, axis=0) # plt.plot(history[:, 0], label='w1') # plt.plot(history[:, 1], label='w2') # plt.plot(history[:, 2], label='b') # plt.legend() # - # ### 4.3.4 Multiclass logistic regression # + # x_train, y_train = create_toy_data(add_class=True) # x1, x2 = np.meshgrid(np.linspace(-5, 10, 100), np.linspace(-5, 10, 100)) # x = np.array([x1, x2]).reshape(2, -1).T # feature = PolynomialFeature(1) # X_train = feature.transform(x_train) # X = feature.transform(x) # model = SoftmaxRegression() # model.fit(X_train, y_train, max_iter=1000, learning_rate=0.01) # y = model.classify(X) # plt.scatter(x_train[:, 0], x_train[:, 1], c=y_train) # plt.contourf(x1, x2, y.reshape(100, 100), alpha=0.2, levels=np.array([0., 0.5, 1.5, 2.])) # plt.xlim(-5, 10) # plt.ylim(-5, 10) # plt.gca().set_aspect('equal', adjustable='box') # plt.show() # - # ### 4.5 Bayesian Logistic Regression from importlib import reload import prmlmy.preprocess; reload(prmlmy.preprocess) import prmlmy.linear; reload(prmlmy.linear); import prmlmy.plot_util; reload(prmlmy.plot_util); from prmlmy.plot_util import plot_decision_proba from prmlmy.linear import LogisticRegression, BayesianLogisticRegression models = [ LogisticRegression(solver="gd", eta=1e-3, alpha=0.001, max_iter=2000, atol=1e-3), BayesianLogisticRegression(eta=1e-3, alpha=0.001, max_iter=2000), ] model_names = ["Logistic", "BayesLogistic"] grid_plot(models, ["base", "outliers"], fit_and_plot_decision_proba, row_names=model_names) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (capstone) # language: python # name: capstone # --- # Simulated and Experimental Comparison for iterative pFBA import pandas as pd import numpy as np import matplotlib import sklearn import seaborn import matplotlib.pyplot as plt import matplotlib.colors as colors from scipy.stats import zscore, spearmanr from sklearn.preprocessing import MinMaxScaler from scipy.stats import pearsonr #import consumption production frame from iterative pFBA analysis fortyCP = pd.read_pickle("pFBA_df.pkl") #import values from experimenal testing biolog = pd.read_csv('plata_biolog_raw.csv', index_col = 0) # Rewriting some of the nutrient names in experimental data to match nomenclature of simulated nutrients biolog.columns = ['Dextrin', 'Maltose', 'TRHL', 'CELB', 'Gentiobiose', 'Sucrose', 'Stachyose', 'D-Raffinose', 'LACT', 'Melibiose', 'b-Methyl-D-Glucoside', 'Salicin', 
'N-Acetyl-D-glucosamine', 'N-Acetyl-D-mannosamine', 'N-Acetyl-DGalactosamine', 'N-Acetyl-Neuraminic Acid', 'D-Glucose', 'D-Mannose', 'D-Fructose', 'Galactose', 'D-Fucose', 'L-Fucose', 'L-Rhamnose', 'Inosine', 'Sorbitol', 'D-Mannitol', 'D-Arabitol', 'L-Inositol', 'Glycerol', 'D-glucose-6-phosphate', 'Fructose-6-Phosphate', 'D-Aspartic Acid', 'D-Serine', 'Gelatin', 'L-Alanine', 'L-Arginine', 'L-Aspartate', 'L-Glutamate', 'L-Histidine', 'L-PyroglutamicAcid', 'L-Serine', 'Pectin', 'D-Galacturonate', 'L-GalactonicAcid-g-Lactone', 'GLCN', 'Glucuronate', 'Mucic Acid', 'Quinic Acid', 'D-Saccharic Acid', 'P-HydroxyPhenyl AceticAcid', 'L-Lactate', 'Citrate', '2-Oxoglutarate', 'D-Malic Acid', 'L-Malate', 'GABA', 'a-HydroxyButyric Acid', '2-Oxobutyrate', 'Acetoacetic Acid', 'Propionic Acid', 'Acetic Acid', 'Formate'] biolog_nutrients = biolog.columns fortyCP_nutrients = fortyCP.columns intersection = biolog_nutrients.intersection(fortyCP_nutrients) print(intersection) print("number of nutrients that the two datasets have in common:") print(intersection.shape[0]) # Overall Comparison fortyCP.sort_index(inplace = True) biolog.sort_index(inplace = True) i = 0 avg = 0 count = 0 arr_b = [] arr_f = [] while i < biolog.shape[0]: intersection = (biolog.iloc[i][biolog.iloc[i]!=0].index.intersection(fortyCP.iloc[i][fortyCP.iloc[i]<0].index)) biolog_i = biolog.iloc[i][intersection] fortyCP_i = fortyCP.iloc[i][intersection] rank_b = biolog_i.rank() rank_f = fortyCP_i.rank(ascending = False) if intersection.shape[0]>1 and not np.isnan(spearmanr(rank_b, rank_f)[0]): avg+=(spearmanr(rank_b, rank_f)[0]) plt.scatter(rank_b, rank_f, alpha=0.2, c = 'b') arr_b = np.append(arr_b,rank_b.values) arr_f = np.append(arr_f,rank_f.values) count+=1 i+=1 avg = avg/count print(avg) x = np.linspace(1, 8, 1000) plt.plot(x,x) plt.xlabel('Experimental ranking') plt.ylabel('Simulated ranking'); Pcorr, Ppval = pearsonr(arr_b, arr_f) print('Pearson correlation: %.3f' % Pcorr) print('Pearson pval: %f' % Ppval) hexplot = seaborn.jointplot(arr_b, arr_f, kind = 'hex') hexplot.set_axis_labels('Experimental ranking', 'Simulated ranking'); # All code below this point is extra/outdated. 
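# Before the extra material below, here is a small self-contained illustration (toy numbers, not real data) of the rank comparison used above: experimental growth values are ranked ascending, while the simulated values are consumption fluxes (negative), so they are ranked descending before computing Spearman's rho.

# +
import pandas as pd
from scipy.stats import spearmanr

# Toy values for four nutrients in one condition (illustrative only)
experimental = pd.Series([0.9, 0.4, 0.7, 0.1], index=["Glc", "Fru", "Mal", "Ace"])
simulated = pd.Series([-10.0, -3.0, -8.0, -1.0], index=["Glc", "Fru", "Mal", "Ace"])

rank_exp = experimental.rank()              # higher growth -> higher rank
rank_sim = simulated.rank(ascending=False)  # more negative (more consumed) -> higher rank

rho, pval = spearmanr(rank_exp, rank_sim)
print(rho, pval)  # rho is 1.0 here: the two toy rankings agree perfectly
# -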
from sklearn.linear_model import LinearRegression regressor = LinearRegression() #regressor.fit(np.concatenate(np.array(arr_b)).reshape(-1,1), np.concatenate(np.array(arr_f)).reshape(-1, 1)) #print('slope:', regressor.coef_) #np.array(arr_b) #plotting spearman correlation from scipy.stats import spearmanr i = 0 arr = []; placeholder = pd.DataFrame(np.zeros((2,55))) while i < fortyCP.shape[0]: if not np.isnan(spearmanr(rank_b.iloc[i],rank_f.iloc[i])[0]): arr.append(spearmanr(rank_b.iloc[i],rank_f.iloc[i])[0]) i += 1 p = seaborn.stripplot(data=arr) biolog.iloc[2][biolog.iloc[2]!=0].index.intersection(fortyCP.iloc[2][fortyCP.iloc[2]!=0].index) i = 0 avg = 0 count = 0 while i < biolog.shape[0]: intersection = (biolog.iloc[i][biolog.iloc[i]!=0].index.intersection(fortyCP.iloc[i][fortyCP.iloc[i]!=0].index)) biolog_i = biolog.iloc[i][intersection] fortyCP_i = fortyCP.iloc[i][intersection] rank_b = biolog_i.rank() rank_f = fortyCP_i.rank() if intersection.shape[0]>1 and not np.isnan(spearmanr(rank_b, rank_f)[0]): avg+=(spearmanr(rank_b, rank_f)[0]) count+=1 i+=1 avg = avg/count print(avg) # plot of zscore(rank_biolog - rank_fba) output = np.subtract(rank_b, rank_f) output_z = output.apply(zscore) p = seaborn.stripplot(data=output_z) p = seaborn.stripplot(data=output_z.transpose()) p.set_xticklabels(output_z.index, rotation = 70, ha = 'right', size=8); # + output = np.subtract(rank_b, rank_f) fig, ax = plt.subplots() im = ax.imshow(output,cmap='PiYG_r') ax.set_xticks(np.arange(len(intersection))) ax.set_xticklabels(intersection, rotation=90); ax.set_yticks(np.arange(len(biolog_i.index))) ax.set_yticklabels(biolog_i.index); cb = plt.colorbar(im) fig.set_size_inches(15, 15) fig.tight_layout() # - print(output_z) #np.histogram(output) plt.hist(output_z, bins = [-3 , -2, -1,0,1, 2,3]); plt.xlabel('Biolog - pFBA') plt.ylabel('Frequency') plt.title('Histogram of Biolog/pFBA comparison') #np.histogram(output) plt.hist(output, bins = [-1. , -0.8, -0.6, -0.4, -0.2, 0. , 0.2, 0.4, 0.6, 0.8, 1. 
]); plt.xlabel('Biolog - pFBA') plt.ylabel('Frequency') plt.title('Histogram of Biolog/pFBA comparison') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from lxml import etree import pandas as pd from collections import Counter import os import glob import re import matplotlib.pyplot as plt import numpy as np from collections import Counter from numpy import array import numpy as np wdir = "/home/jose/Dropbox/biblia/tb/" file = "TEIBible" # "*.xml" outdir = "/home/jose/Dropbox/biblia/tb/resulting data/" # - entities = pd.ExcelFile(wdir + "entities.xls", index_col=0).parse('Sheet1').fillna("") entities.head(3) secure_places = entities.loc[ (entities["type"]== "place" ) & (entities["latitude"]!="" ) & (entities["longitude"]!="") & (entities["geo_cert"].isin(["high","medium"]))] secure_places.shape secure_places.head(3) books = pd.ExcelFile(wdir + "documentation/books.xlsx", index_col=0).parse('Sheet1').fillna("") books.head(3) parser = etree.XMLParser(encoding='utf-8') documento_xml = etree.parse(wdir+file+".xml", parser) documento_root = documento_xml.getroot() namespaces_concretos = {'tei':'http://www.tei-c.org/ns/1.0','xi':'http://www.w3.org/2001/XInclude'} # + books["latitude"] = "" books["longitude"] = "" for book in documento_root.xpath('//tei:TEI', namespaces=namespaces_concretos, with_tail=True): title = book.xpath('.//tei:title[2]/tei:idno[@type="string"]/text()', namespaces=namespaces_concretos, with_tail=True)[0] rss = book.xpath('.//tei:rs[not(@cert)]/@key|.//tei:rs[@cert="medium"]/@key', namespaces=namespaces_concretos, with_tail=True) set_entities = list(set([entity for reference in rss for entity in reference.split(" ")])) print(title) print(len(rss)) print(len(set_entities)) longitude = secure_places.loc[secure_places["id"].isin(set_entities)]["longitude"].mean() latitude = secure_places.loc[secure_places["id"].isin(set_entities)]["latitude"].mean() books.loc[books["codebook"]==title, ["longitude"]] = longitude books.loc[books["codebook"]==title, ["latitude"]] = latitude print(longitude, latitude) # - books books.to_excel(wdir+"documentation/books_2.xlsx", encoding="utf-8") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- sents = """"오패산터널 총격전 용의자 검거 서울 연합뉴스 경찰 관계자들이 19일 오후 서울 강북구 오패산 터널 인근에서 사제 총기를 발사해 경찰을 살해한 용의자 성모씨를 검거하고 있다.성씨는 검거 당시 서바이벌 게임에서 쓰는 방탄조끼에 헬멧까지 착용한 상태였다. 
서울 연합뉴스 김은경 기자 사제 총기로 경찰을 살해한 범인 성모 46 씨는 주도면밀했다.경찰에 따르면 성씨는 19일 오후 강북경찰서 인근 부동산 업소 밖에서 부동산업자 이모 67 씨가 나오기를 기다렸다 이씨와는 평소에도 말다툼을 자주 한 것으로 알려졌다.이씨가 나와 걷기 시작하자 성씨는 따라가면서 미리 준비해온 사제 총기를 이씨에게 발사했다 총알이 빗나가면서 이씨는 도망갔다 그 빗나간 총알은 지나가던 행인 71 씨의 배를 스쳤다.성씨는 강북서 인근 치킨집까지 이씨 뒤를 쫓으며 실랑이하다 쓰러뜨린 후 총기와 함께 가져온 망치로 이씨 머리를 때렸다.이 과정에서 오후 6시 20분께 강북구 번동 길 위에서 사람들이 싸우고 있다 총소리가 났다 는 등의 신고가 여러건 들어왔다.5분 후에 성씨의 전자발찌가 훼손됐다는 신고가 보호관찰소 시스템을 통해 들어왔다 성범죄자로 전자발찌를 차고 있던 성씨는 부엌칼로 직접 자신의 발찌를 끊었다.용의자 소지 사제총기 2정 서울 연합뉴스 임헌정 기자 서울 시내에서 폭행 용의자가 현장 조사를 벌이던 경찰관에게 사제총기를 발사해 경찰관이 숨졌다 19일 오후 6시28분 강북구 번동에서 둔기로 맞았다 는 폭행 피해 신고가 접수돼 현장에서 조사하던 강북경찰서 번동파출소 소속 김모 54 경위가 폭행 용의자 성모 45 씨가 쏜 사제총기에 맞고 쓰러진 뒤 병원에 옮겨졌으나 숨졌다 사진은 용의자가 소지한 사제총기.신고를 받고 번동파출소에서 김창호 54 경위 등 경찰들이 오후 6시 29분께 현장으로 출동했다 성씨는 그사이 부동산 앞에 놓아뒀던 가방을 챙겨 오패산 쪽으로 도망간 후였다.김 경위는 오패산 터널 입구 오른쪽의 급경사에서 성씨에게 접근하다가 오후 6시 33분께 풀숲에 숨은 성씨가 허공에 난사한 10여발의 총알 중 일부를 왼쪽 어깨 뒷부분에 맞고 쓰러졌다.김 경위는 구급차가 도착했을 때 이미 의식이 없었고 심폐소생술을 하며 병원으로 옮겨졌으나 총알이 폐를 훼손해 오후 7시 40분께 사망했다.김 경위는 외근용 조끼를 입고 있었으나 총알을 막기에는 역부족이었다.머리에 부상을 입은 이씨도 함께 병원으로 이송됐으나 생명에는 지장이 없는 것으로 알려졌다.성씨는 오패산 터널 밑쪽 숲에서 오후 6시 45분께 잡혔다.총격현장 수색하는 경찰들 서울 연합뉴스 이효석 기자 19일 오후 서울 강북구 오패산 터널 인근에서 경찰들이 폭행 용의자가 사제총기를 발사해 경찰관이 사망한 사건을 조사 하고 있다.총 때문에 쫓던 경관들과 민간인들이 몸을 숨겼는데 인근 신발가게 직원 이모씨가 다가가 성씨를 덮쳤고 이어 현장에 있던 다른 상인들과 경찰이 가세해 체포했다.성씨는 경찰에 붙잡힌 직후 나 자살하려고 한 거다 맞아 죽어도 괜찮다 고 말한 것으로 전해졌다.성씨 자신도 경찰이 발사한 공포탄 1발 실탄 3발 중 실탄 1발을 배에 맞았으나 방탄조끼를 입은 상태여서 부상하지는 않았다.경찰은 인근을 수색해 성씨가 만든 사제총 16정과 칼 7개를 압수했다 실제 폭발할지는 알 수 없는 요구르트병에 무언가를 채워두고 심지를 꽂은 사제 폭탄도 발견됐다.일부는 숲에서 발견됐고 일부는 성씨가 소지한 가방 안에 있었다.""" from newspaper import Article from konlpy.tag import Kkma from konlpy.tag import Okt from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.feature_extraction.text import CountVectorizer from sklearn.preprocessing import normalize import numpy as np import gensim from gensim import corpora class SentenceTokenizer(object): def __init__(self): self.kkma = Kkma() self.twitter = Twitter() self.stopwords = ['중인' ,'만큼', '마찬가지', '꼬집었', "연합뉴스", "데일리", "동아일보", "중앙일보", "조선일보", "기자" ,"아", "휴", "아이구", "아이쿠", "아이고", "어", "나", "우리", "저희", "따라", "의해", "을", "를", "에", "의", "가",] def text2sentences(self, text): sentences = self.kkma.sentences(text) for idx in range(0, len(sentences)): if len(sentences[idx]) <= 10: sentences[idx-1] += (' ' + sentences[idx]) sentences[idx] = '' print() print('text2s',sentences) print(type(sentences)) return sentences def get_nouns(self, sentences): nouns = [] for sentence in sentences: if sentence is not '': nouns.append(' '.join([noun for noun in self.twitter.nouns(str(sentence)) if noun not in self.stopwords and len(noun) > 1])) #bigram = gensim.models.Phrases(nouns, min_count = 2, threshold = 0.000000000000000000001) #print(bigram) #bigram_model = gensim.models.phrases.Phraser(bigram) #bigram_document = [bigram_model[n] for n in nouns] #print() #print(bigram_document) #print('n',nouns) #print(type(nouns)) #return bigram_document return nouns class GraphMatrix(object): def __init__(self): #self.tfidf = TfidfVectorizer() self.cnt_vec = CountVectorizer() self.graph_sentence = [] def build_words_graph(self, sentence): cnt_vec_mat = normalize(self.cnt_vec.fit_transform(sentence).toarray().astype(float), axis=0) vocab = self.cnt_vec.vocabulary_ return np.dot(cnt_vec_mat.T, cnt_vec_mat), {vocab[word] : word for word in vocab} # + class Rank(object): def get_ranks(self, graph, d=0.85): # d = damping factor A = graph matrix_size = A.shape[0] for id in range(matrix_size): A[id, id] = 0 # diagonal 부분을 0으로 link_sum = np.sum(A[:,id]) # A[:, id] = A[:][id] if link_sum != 
0: A[:, id] /= link_sum A[:, id] *= -d A[id, id] = 1 B = (1-d) * np.ones((matrix_size, 1)) ranks = np.linalg.solve(A, B) # 연립방정식 Ax = b return {idx: r[0] for idx, r in enumerate(ranks)} # + class TextRank(object): def __init__(self, text): self.sent_tokenize = SentenceTokenizer() print('st',self.sent_tokenize) self.sentences = self.sent_tokenize.text2sentences(text) print('t2s',self.sentences) self.nouns = self.sent_tokenize.get_nouns(self.sentences) print('n',self.nouns) self.graph_matrix = GraphMatrix() self.words_graph, self.idx2word = self.graph_matrix.build_words_graph(self.nouns) print('g',self.words_graph) print('id2',self.idx2word) self.rank = Rank() self.word_rank_idx = self.rank.get_ranks(self.words_graph) self.sorted_word_rank_idx = sorted(self.word_rank_idx, key=lambda k: self.word_rank_idx[k], reverse=True) def keywords(self, word_num=10): rank = Rank() rank_idx = rank.get_ranks(self.words_graph) sorted_rank_idx = sorted(rank_idx, key=lambda k: rank_idx[k], reverse=True) keywords = [] index=[] for idx in sorted_rank_idx[:word_num]: index.append(idx) #index.sort() for idx in index: keywords.append(self.idx2word[idx]) return keywords # + news2 = """경북 영천에서 17일 하루 만에 신종 코로나바이러스 감염증(코로나19) 확진자가 5명이 발생, N차 감염이 우려되는 가운데 비상이 걸렸다. 지난 7일 80번 확진자 이후 열흘 만에 지역 누적 확진자는 85명으로 늘어났다.영천시에 따르면 경산에 주소를 둔 공부방 교사가 확진됐다는 소식을 접한 A여고 한 학생이 지난 16일 보건소 검체 검사 결과 17일 양성판정을 받았다.또 확진자 교사가 운영 중인 공부방에서 과외 학습을 받고 있는 B여고 학생이 17일 오후 8시 확진 판정을 받은 가운데 A여고 학생 부모와 B여고 학생 엄마 등 총 5명이 코로나 판정을 받았다.수업을 받은 나머지 학생들은 모두 음성 판정을 받은 것으로 알려졌다.이날 A여고 측은 학생, 교직원 등 140여 명이 보건소에서 검체를 실시했으며 뒤늦게 알려진 B여고는 18일 학생과 교직원들이 검사를 받을 예정이다.이에 보건소는 코로나 확진자들을 자가 격리하고 이들의 이동 동선 파악에 집중하는 가운데 학교 등의 방역소독을 철저히 하고 있다.""" textrank2 = TextRank(news2) textrank3 = TextRank(sents) print() print('keywords :',textrank2.keywords()) print() print('keywords :',textrank3.keywords()) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Null Value Computation import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from IPython.display import display # %matplotlib inline # set seed for reproducibility SEED = 20 np.random.seed(SEED) # Loading Data df = pd.read_csv('diabetes.csv') df.head() print('Total zero Glucose values: ' + str(768-df['Glucose'].astype(bool).sum(axis=0))) print('Total zero BloodPressure values: ' + str(768-df['BloodPressure'].astype(bool).sum(axis=0))) print('Total zero SkinThickness values: ' + str(768-df['SkinThickness'].astype(bool).sum(axis=0))) print('Total zero Insulin values: ' + str(768-df['Insulin'].astype(bool).sum(axis=0))) print('Total zero BMI values: ' + str(768-df['BMI'].astype(bool).sum(axis=0))) print('Total zero DiabetesPedigreeFunction values: ' + str(768-df['DiabetesPedigreeFunction'].astype(bool).sum(axis=0))) print('Total zero Age values: ' + str(768-df['Age'].astype(bool).sum(axis=0))) # These are all 0 values out of 768 in each field. # We saw outliers during our data viz. Now we need to handle these # Total zero values in DiabetesPedigreeFunction and Age variable is zero. # Pregnancy field can be 0. 
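# Note: the imputation cells below call a helper `find_median(frame, var)` whose definition is not included in this excerpt. A minimal version consistent with how it is used (it must return the per-Outcome medians of a column, with row 0 for Outcome 0 and row 1 for Outcome 1) could look like this:
def find_median(frame, var):
    """Return the median of `var` grouped by Outcome (row 0 = Outcome 0, row 1 = Outcome 1)."""
    return frame.groupby("Outcome")[[var]].median()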
def replace_zero(df): df_nan=df.copy(deep=True) cols = ["Glucose","BloodPressure","SkinThickness","Insulin","BMI"] df_nan[cols] = df_nan[cols].replace({0:np.nan}) replace_zero(df) df_nan.isnull().sum() # We have successfully replaced 0's with Null values # for colums ["Glucose","BloodPressure","SkinThickness","Insulin","BMI"] # Now we need to handle Nulls somehow # to find the median for filling null values # Function outputs median value for mentioned variable based on Outcome var def replace_zero(df): df_nan=df.copy(deep=True) cols = ["Glucose","BloodPressure","SkinThickness","Insulin","BMI"] df_nan[cols] = df_nan[cols].replace({0:np.nan}) return df_nan df_nan=replace_zero(df) find_median(df_nan,'Glucose') # 107 is the median value for Glucose var for non-diab people # 140 is the median value for Glucose var for diab people. # Function to replace Null values with relevant median values # returns number of Null values after computation (Should return 0 when called) def replace_null(frame,var): median_df=find_median(frame,var) var_0=median_df[var].iloc[0] var_1=median_df[var].iloc[1] frame.loc[(frame['Outcome'] == 0) & (frame[var].isnull()), var] = var_0 frame.loc[(frame['Outcome'] == 1) & (frame[var].isnull()), var] = var_1 return frame[var].isnull().sum() print(str(replace_null(df_nan,'Glucose'))+ ' Nulls for Glucose') print(str(replace_null(df_nan,'SkinThickness'))+ ' Nulls for SkinThickness') print(str(replace_null(df_nan,'Insulin'))+ ' Nulls for Insulin') print(str(replace_null(df_nan,'BMI'))+ ' Nulls for BMI') print(str(replace_null(df_nan,'BloodPressure'))+ ' Nulls for BloodPressure') # We have successfully handled Nulls df_nan.isnull().sum() # Just a confirmation plt.figure(dpi = 125,figsize= (5,4)) mask = np.triu(df.corr()) #np.triu returns lower triangle for our heatmap as we do not need upper map sns.heatmap(df_nan.corr(),mask = mask, fmt = ".1f",annot=True,lw=0.1,cmap = 'YlGnBu') plt.title('Correlation Map') plt.show() # New Correlation map has higher correlated values # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="mXU4vYj7eIE2" colab_type="text" # # # **Credit card Fraud versus a set of Models** # + [markdown] id="ii325n0OeIE3" colab_type="text" # _This notebook contains an example of running several models against a credit card fraud dataset pulled from Kaggle._ # # + [markdown] id="ogFOxktmeIE4" colab_type="text" # # Setup # + [markdown] id="Cal-fSpOeIE4" colab_type="text" # First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20. 
# + id="wQPExwX7eIE5" colab_type="code" colab={} # Python ≥3.5 is required import sys assert sys.version_info >= (3, 5) # Scikit-Learn ≥0.20 is required import sklearn assert sklearn.__version__ >= "0.20" # Common imports import numpy as np import os # to make this notebook's output stable across runs np.random.seed(42) # To plot pretty figures # %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt mpl.rc('axes', labelsize=14) mpl.rc('xtick', labelsize=12) mpl.rc('ytick', labelsize=12) import warnings warnings.filterwarnings(action="ignore", message="^internal gelsd") # + [markdown] id="QsLxUJZBrt8_" colab_type="text" # # Load Data # + [markdown] id="QVpFYPD2X10O" colab_type="text" # I loaded the card fraud data from an S3 bucket on AWS for you. # + id="gFWAsLuQsR3N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 224} outputId="4638d866-34fa-4df2-e4b0-6b3a776614fd" import pandas as pd cfraud=pd.read_csv("https://s3.amazonaws.com/www.ruxton.ai/creditcardfraud.zip") cfraud.head() # + [markdown] colab_type="text" id="_Ju81LsLhMG8" # # Some Minimal EDA # + id="T95OhBmPhJnn" colab_type="code" colab={} # + id="6He7X8dBZQU2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 624} outputId="e1466eac-0604-4705-850c-fc28eb7808eb" from string import ascii_letters import seaborn as sns import matplotlib.pyplot as plt sns.set(style="white") # Generate a large random dataset rs = np.random.RandomState(33) d = pd.DataFrame(data=rs.normal(size=(100, 26)), columns=list(ascii_letters[26:])) # Compute the correlation matrix corr = cfraud.corr() # Generate a mask for the upper triangle mask = np.triu(np.ones_like(corr, dtype=np.bool)) # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 9)) # Generate a custom diverging colormap cmap = sns.diverging_palette(220, 10, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5}) # + id="sz1DWkSIWti0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6c73429b-1424-44a1-f2b9-59e20146b368" use=list(cfraud.columns.values[[1,2,3,4,5,6,7,9,10,11,12,14,16,17,18,19,28]]) # use all the predictor data for example print(use) # + [markdown] id="Dursgx1LXZ0W" colab_type="text" # EDA: Before fitting models, you should do EDA. Do you want to add any features as combos of others? # + [markdown] id="W1gSBePLXZP5" colab_type="text" # Transform data here # + [markdown] id="tb5pxZxsWoqk" colab_type="text" # That looks awful. Let's try and dientify predictors that are intrinsic to banks balance sheet. # + [markdown] id="13UBYWXPXBHR" colab_type="text" # That looks better. Now try some other methods like random forest SVM, xgboost, decisio trees. Try tuning them. Which do you choose? 
# + id="Gi_Cs3ihhYbF" colab_type="code" outputId="6c511665-308a-4a93-84aa-60b73a30f502" colab={"base_uri": "https://localhost:8080/", "height": 1000} import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.datasets import make_moons, make_circles, make_classification from sklearn.linear_model import LogisticRegression from sklearn.neural_network import MLPClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.gaussian_process.kernels import RBF from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.naive_bayes import GaussianNB from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from xgboost import XGBClassifier from sklearn import metrics h = .02 # step size in the mesh names = [ "Linear SVM", "Logistic", "Decision Tree", "Random Forest", "Neural Net", "AdaBoost", "Naive Bayes", "QDA","XGBoost"] classifiers = [ SVC(kernel="linear", C=0.025), LogisticRegression(), DecisionTreeClassifier(max_depth=5), RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1), MLPClassifier(alpha=1, max_iter=1000), AdaBoostClassifier(), GaussianNB(), QuadraticDiscriminantAnalysis(), XGBClassifier()] X, y = make_classification(n_features=5, n_redundant=0, n_informative=2, random_state=1, n_clusters_per_class=1) rng = np.random.RandomState(2) X += 2 * rng.uniform(size=X.shape) linearly_separable = (X, y) i=1. # figure counter # preprocess dataset, split into training and test part X, y = cfraud[use],cfraud["Class"] X = StandardScaler().fit_transform(X) X_train, X_test, y_train, y_test = \ train_test_split(X, y, test_size=.3, random_state=42) # iterate over classifiers for name, clf in zip(names, classifiers): figure = plt.figure(num=i,figsize=(108, 6)) ax = plt.subplot(1, len(classifiers) + 1, i) clf.fit(X_train, y_train) fpr, tpr, _ = metrics.roc_curve(y_test, clf.predict(X_test)) roc_auc = metrics.auc(fpr, tpr) # Plot of a ROC curve for a specific class # plt.figure() lw = 2 plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC for '+ name ) plt.legend(loc="lower right") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # DATA MUNGING # Missing Value Treatment # Outliers # ISSUES # Inaccurate analysis # Modeling won't work in many cases # SOLUTIONS # Deletion # Imputation # + import numpy as np import pandas as pd import os raw_data_path=os.path.join(os.path.pardir,'data','raw') train_file_path=os.path.join(raw_data_path,'train.csv') test_file_path=os.path.join(raw_data_path,'test.csv') train_df=pd.read_csv(train_file_path,index_col="PassengerId") test_df=pd.read_csv(test_file_path,index_col="PassengerId") test_df["Survived"]=-888 df=pd.concat((train_df,test_df),axis=0) df.info() # - df[df.Embarked.isnull()] df.Embarked.value_counts() pd.crosstab(df[df.Survived!=-888].Survived,df[df.Survived!=-888].Embarked) # 
df.Embarked.fillna('S',inplace=True) df.groupby(['Pclass','Embarked']).Fare.median() df.Embarked.fillna('C',inplace=True) df[df.Embarked.isnull()] df.info() df[df.Fare.isnull()] df[df.Age.isnull()] df.Age.plot(kind="hist",bins=20,color='c'); df.Age.median() df.Age.mean() pd.options.display.max_rows=15 df[df.Age.isnull()] df.groupby('Sex').median() df[df.Age.notnull()].boxplot('Age','Sex') df.groupby("Sex").Age.transform('median') df[df.Age.notnull()].boxplot('Age','Pclass') df.Name def GetTitle(name): first_name_with_title=name.split(',')[1] title=first_name_with_title.split('.')[0] title=title.strip().lower() return title df['Title']=df.Name.map(lambda x: GetTitle(x)) df[df.Age.notnull()].boxplot('Age','Title'); title_age_median=df.groupby('Title').Age.transform('median') df.Age.fillna(title_age_median,inplace=True) df.info() df[df.Fare.isna()] df[df.Fare.notnull()].groupby('Pclass').Fare.median() fare_median=df.groupby('Pclass').Fare.transform('median'); df.Fare.fillna(fare_median) df.info() # %%HTML

# ## WORKING WITH OUTLIERS

    df.Age.plot(kind='hist', bins=20) df[df.Age>70] df.Fare.plot(kind='hist',bins=20) df.Fare.plot(kind='box') df.loc[df.Fare==df.Fare.max()] LogFare=np.log(df.Fare+1.0) LogFare.plot(kind='hist',bins=20); df.Fare.plot(kind='hist',bins=20); # %%HTML

# ## Binning

    pd.qcut(df.Fare,4,labels=['very_low','low','high','very_high']) pd.qcut(df.Fare,4,labels=['very_low','low','high','very_high']).value_counts().plot(kind='bar'); df['Fare_Bin']=pd.qcut(df.Fare,4,labels=['very_low','low','high','very_high']) df # %%HTML

# ## FEATURE ENGINEERING

    # + # TRANSFORMATION # CREATION # SELECTION # Domain_Knowledge + Technical_Expertise # - df['AgeState']=np.where(df.Age>=18,'Adult','Child') df.AgeState.value_counts() pd.crosstab(df.Survived[df.Survived!=-888],df.AgeState[df.Survived!=-888]) df['Family_Size']=df.SibSp+df.Parch+1 df df.Family_Size.plot(kind='hist') df[df.Family_Size>8] pd.crosstab(df[df.Survived!=-888].Family_Size,df[df.Survived!=-888].Survived) df['isMother']=np.where(((df.Sex=='female') & (df.Parch>0) & (df.Title!='miss')),1,0) df[df.isMother==1][['Age']].plot(kind='hist') pd.crosstab(df[df.Survived!=-888].Survived,df[df.Survived!=-888].isMother) df.Cabin df.Cabin.unique() df.loc[df.Cabin=='T','Cabin']=np.NAN def get_deck(cabin): return np.where(pd.notnull(cabin),str(cabin)[0].upper(),'Z') df["Deck"]=df.Cabin.map(lambda x:get_deck(x)) df['Deck'].unique() pd.crosstab(df[df.Survived!=-888].Deck,df[df.Survived!=-888].Survived) df.info() # %%HTML

# ## CATEGORICAL FEATURE ENCODING
# - Binary encoding
# - Label encoding
# - One-hot encoding
# (a short sketch of each follows below)
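# A toy sketch of the three encodings listed above, using the same kind of Sex and Deck columns as this notebook but on a tiny stand-alone frame:

# +
import numpy as np
import pandas as pd

toy = pd.DataFrame({'Sex': ['male', 'female', 'female'],
                    'Deck': ['A', 'B', 'A']})

# Binary encoding: a single 0/1 column for a two-valued category
toy['isMale'] = np.where(toy.Sex == 'male', 1, 0)

# Label encoding: map each category to an integer code
toy['Deck_code'] = toy.Deck.astype('category').cat.codes

# One-hot encoding: one indicator column per category value
toy_onehot = pd.get_dummies(toy, columns=['Deck'])

print(toy)
print(toy_onehot)
# -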

    df.Embarked # %%HTML

# ## CATEGORICAL FEATURE ENCODING

    df['isMale']=np.where(df.Sex=='male',1,0) df=pd.get_dummies(df,columns=['Deck','Pclass','Title','Fare_Bin','Embarked','AgeState']); df.info() df.Ticket df.drop(['Cabin','Name','Ticket','Parch','SibSp','Sex'],axis=1,inplace=True) columns=[column for column in df.columns if column!='Survived'] columns=['Survived']+columns df=df[columns] df.info() # %%HTML

# ## SAVE PROCESSED DATA

    processed_data_path=os.path.join(os.path.pardir,'data','processed') write_train_path=os.path.join(processed_data_path,'train.csv') write_test_path=os.path.join(processed_data_path,'test.csv') df.loc[df.Survived!=-888].to_csv(write_train_path) df.loc[df.Survived==-888].to_csv(write_test_path) # %%HTML

# ## Building the data processing script

# to be done later
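# Since the script itself is deferred, here is a minimal sketch of what it might look
# like: a single hypothetical `process_data` function (the name and the subset of steps
# are assumptions, not part of the notebook) that reuses `GetTitle`, `np` and `pd` from
# above so the pipeline can be re-run in one call.

# +
# Hypothetical skeleton for the deferred processing script (names are assumptions).
def process_data(df):
    """Apply a subset of the imputation, feature-engineering and encoding steps above."""
    df = df.copy()
    df['Title'] = df.Name.map(GetTitle)
    df.Age.fillna(df.groupby('Title').Age.transform('median'), inplace=True)
    df.Fare.fillna(df.groupby('Pclass').Fare.transform('median'), inplace=True)
    df['AgeState'] = np.where(df.Age >= 18, 'Adult', 'Child')
    df['Family_Size'] = df.SibSp + df.Parch + 1
    df['isMale'] = np.where(df.Sex == 'male', 1, 0)
    df = pd.get_dummies(df, columns=['Pclass', 'Title', 'Embarked', 'AgeState'])
    return df.drop(['Cabin', 'Name', 'Ticket', 'Sex'], axis=1)
# -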

# ## ADVANCED VISUALIZATIONS USING MATPLOTLIB

    # - import matplotlib.pyplot as plt # %matplotlib inline plt.hist(df.Age,bins=20,color='c'); plt.title('Histogram : Age') plt.xlabel('Bins') plt.ylabel('Counts') f,ax=plt.subplots() ax.hist(df.Age,bins=20,color='c') ax.set_xlabel('Bins') ax.set_ylabel('Counts') plt.show() # + f,(ax1,ax2)=plt.subplots(1,2,figsize=(14,3)) ax1.hist(df.Age,bins=20,color='c') ax1.set_xlabel('Bins') ax1.set_ylabel('Counts') ax1.set_title('Histogram : Age') ax2.hist(df.Family_Size,bins=20,color='tomato') ax2.set_xlabel('Bins') ax2.set_ylabel('Counts') ax2.set_title('Histogram : Fare') plt.show() # - df.Fare # + #Multiple plots can be made using array indexing, to avoid overlapping use plt.tight_layout() # + # remove a plot -> ax[2,2].axis('off') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Lab 06 # # Labs in general are for you to solve short programming challenges in class. In contrast, homework assignments will involve more challenging and lengthy problems. # # Feel free to ask the TAs for help if there is anything you do not understand. The TAs will go through suggested solutions in the last 15 minutes of the lab - typically by solving them in a live demo. **Your midterm exams will be like this, so it is highly beneficial for you to attend these labs**. # # The second lab is to gain basic familiarity with root finding and optimization. # # - You can import any Python library module you need # - Do this lab without using the web to search for solutions # **1**. Use the secant method to find the solution to $x^2 + 4x - 5 = 0$ starting from the (2,3) and running 5 iterations. # + # %matplotlib inline import numpy as np import matplotlib.pyplot as plt # + f = lambda x: x**2 + 4*x - 5 def secant(x0, x1, f): """Secant update.""" y0 = f(x0) y1 = f(x1) m = (y1 - y0)/(x1 - x0) b = y1 - m*x1 return -b/m x0 = 2 x1 = 3 x = [x0, x1] for i in range(5): x0, x1 = x1, secant(x0, x1, f) x.append(x1) print(x[-1]) x = np.array(x) y = f(x) xp = np.linspace(-5, 5, 100) plt.plot(xp, f(xp)) plt.scatter(x, y) plt.scatter(x, np.zeros_like(x)) plt.axhline(0) pass # - # **2**. Construct the companion matrix to find all solutions to $x^3 + 4x + 5 = 0$. A = np.array([ [0,-4,-5], [1,0,0], [0,1,0] ]) x = np.linalg.eigvals(A) x # + f = lambda x: x**3 + 4*x + 5 np.allclose(f(x), np.zeros_like(x)) # - # **3**. Use the Newton-Raphson method to find the real cube root of 10 starting with an initial guess of 2.. def newton(x, f, fp, max_iter=5): "Newton-Raphson method" for i in range(max_iter): x = x - f(x)/fp(x) return x # + f = lambda x: x**3 - 10 fp = lambda x: 3*x**2 x0 = 2 x = newton(x0, f, fp) x # - x**3 # **4**. The Lagrange basis functions are given by # # $$ # l_j(x_j) = \prod_{0 \le m \le k, m \ne j} \frac{x - x_m}{x_j - x_m} # $$ # # Here, $x$ represents the points at which you want to interpolate, $x_j$ and $x_m$ are indices of the given points. 
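# (In the product above, $j$ and $m$ index the given nodes, so the left-hand side is the
# basis polynomial $l_j$ evaluated at $x$.) A minimal generic sketch of these basis
# functions, assuming NumPy arrays `xs` of nodes and `xp` of evaluation points (the
# names are illustrative, not from the lab):

# +
def lagrange_basis(xs, j, xp):
    """Evaluate the j-th Lagrange basis polynomial for nodes xs at the points xp."""
    terms = [(xp - xm) / (xs[j] - xm) for m, xm in enumerate(xs) if m != j]
    return np.prod(terms, axis=0)
# -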
# # Use this to fit and plot a quadratic to the 3 points (1,1), (3,7) and (4,11) x = np.array([1,3,4]) y = np.array([1,7,11]) # + xp = np.linspace(0, 5, 50) L0 = ((xp-x[1])*(xp-x[2])) / ((x[0]-x[1])*(x[0]-x[2])) L1 = ((xp-x[0])*(xp-x[2])) / ((x[1]-x[0])*(x[1]-x[2])) L2 = ((xp-x[0])*(xp-x[1])) / ((x[2]-x[0])*(x[2]-x[1])) yp = y[0]*L0 + y[1]*L1 + y[2]*L2 plt.scatter(x, y) plt.plot(xp, yp) pass # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="uVKZFzf9I2Hb" # # + [markdown] id="rQ2grrj7Iopw" # # Lab 04 : Train vanilla neural network -- exercise # # # # Training a one-layer net on FASHION-MNIST # + colab={"base_uri": "https://localhost:8080/"} id="bFNyy3YRIop4" executionInfo={"status": "ok", "timestamp": 1631301424198, "user_tz": -480, "elapsed": 37751, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="5414625d-ca7c-44a1-aa9b-5725d12d8891" # For Google Colaboratory import sys, os if 'google.colab' in sys.modules: # mount google drive from google.colab import drive drive.mount('/content/gdrive') # find automatically the path of the folder containing "file_name" : file_name = 'train_vanilla_nn_exercise.ipynb' import subprocess path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8") path_to_file = path_to_file.replace(file_name,"").replace('\n',"") # if previous search failed or too long, comment the previous line and simply write down manually the path below : #path_to_file = '/content/gdrive/My Drive/CS5242_2021_codes/codes/labs_lecture03/lab04_train_vanilla_nn' print(path_to_file) # change current path to the folder containing "file_name" os.chdir(path_to_file) # !pwd # + id="WLTJzmiAIop6" executionInfo={"status": "ok", "timestamp": 1631301442998, "user_tz": -480, "elapsed": 5418, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} import torch import torch.nn as nn import torch.optim as optim from random import randint import utils # + [markdown] id="b4J_SGkGIop7" # ### Download the TRAINING SET (data+labels) # + colab={"base_uri": "https://localhost:8080/", "height": 522, "referenced_widgets": ["03cf8083664a43d2bdb3fd32d611daee", "e39f972153ba4152bf87e0265cbe31ae", "eae27f3de9554bcfb63130600c8fafd7", "a0cf7a1a0a9c47e0acc1f1c854f12ff7", "03b3484e39054f4eb608e9bc47048e87", "1959eab79eb2459fa631c3b43188d8ac", "0a3c3020ec7442e8b7fa66ce449ab003", "2a0442ad85414c3dbf790e0c032c461e", "513c449d33f54d6c8e3d9e2b4709622c", "", "", "78cd728f4cce4d9b8a5ea17365faf946", "34a661b627c34a32b5de949845985e48", "", "53afa963948141328b94795291ca20b9", "", "", "", "4381ce8a8cc342a7b4c3e80918b1bd31", "8b9e288437314f309588ff8e246d885c", "65cbed60f1db4e3584fe12b10454328c", "ef3aa32f16a9478f9e5e0d1e4e90ed4e", "", "447386feb4ea479996f97b7ef95c1603", "", "", "", "", "", "", "", "", "", "", "2883079520ef43ce9d19baf7a08b0d98", "4d8a9daa958a4ed590824d7181c928a2", "35d875f7956143158a0be1c4ca0d7601", "d229669dbd9541d29e0678b94ad47907", "", "86d3527962d84587a7c5584fef1cc87a", "d9ba6fb596284340be6c7e3c9a545c4b", "", "78960b70a80d4550ae2d00dfd0f6921d", "5e111661570f4d36a09d712f2ab5d3d6"]} id="qn4pfya3Iop8" executionInfo={"status": "ok", "timestamp": 1631301456448, "user_tz": -480, "elapsed": 13459, "user": 
{"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="05787fe8-69c7-48b1-eca0-c1be5f0ce090" from utils import check_fashion_mnist_dataset_exists data_path=check_fashion_mnist_dataset_exists() train_data=torch.load(data_path+'fashion-mnist/train_data.pt') train_label=torch.load(data_path+'fashion-mnist/train_label.pt') print(train_data.size()) print(train_label.size()) # + [markdown] id="u_sFlECBIop9" # ### Download the TEST SET (data only) # + colab={"base_uri": "https://localhost:8080/"} id="wZpbN_MzIop-" executionInfo={"status": "ok", "timestamp": 1631301461653, "user_tz": -480, "elapsed": 518, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="cce43c62-ab53-44bc-ec5f-ddefcc0bea0a" test_data=torch.load(data_path+'fashion-mnist/test_data.pt') print(test_data.size()) # + [markdown] id="uIXKA-iAIop-" # ### Make a one layer net class # + id="mCp7shxGIop_" executionInfo={"status": "ok", "timestamp": 1631302603825, "user_tz": -480, "elapsed": 552, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} class one_layer_net(nn.Module): def __init__(self, input_size, output_size): super(one_layer_net , self).__init__() # complete here self.linear_layer = nn.Linear(input_size, output_size, bias = True) def forward(self, x): x = self.linear_layer(x)# complete here p = torch.softmax(x, dim=1)# complete here return p # + [markdown] id="aB-OkhLbIoqA" # ### Build the net # + colab={"base_uri": "https://localhost:8080/"} id="P4lwTsApIoqB" executionInfo={"status": "ok", "timestamp": 1631302611907, "user_tz": -480, "elapsed": 613, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="b7ed8e61-dbe0-4e09-b268-dd9036bf7a12" net=one_layer_net(784,10) print(net) # + [markdown] id="T07csKe-IoqB" # ### Take the 4th image of the test set: # + colab={"base_uri": "https://localhost:8080/", "height": 265} id="aUDQBE2iIoqC" executionInfo={"status": "ok", "timestamp": 1631302948826, "user_tz": -480, "elapsed": 547, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="315b0905-c887-4107-8ec3-22a24bf572f7" im= test_data[3]# complete here utils.show(im) # + [markdown] id="lO9h9E-ZIoqC" # ### And feed it to the UNTRAINED network: # + colab={"base_uri": "https://localhost:8080/"} id="9VbLixjTIoqD" executionInfo={"status": "ok", "timestamp": 1631303093200, "user_tz": -480, "elapsed": 517, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="3acd8f82-27cf-4116-970e-cf571a8e8a55" print(im.size()) p = net(im.view(1, 784))# complete here print(p) # + [markdown] id="r7rwsS3NIoqD" # ### Display visually the confidence scores # + colab={"base_uri": "https://localhost:8080/", "height": 406} id="lZ9nBW2KIoqD" executionInfo={"status": "ok", "timestamp": 1631303109070, "user_tz": -480, "elapsed": 713, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="4271a217-cdb7-4637-e3bc-e0dd1abdab77" utils.show_prob_fashion_mnist(p) # + colab={"base_uri": "https://localhost:8080/"} id="QW3iAHoOQ3bW" executionInfo={"status": "ok", "timestamp": 1631303506996, 
"user_tz": -480, "elapsed": 519, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="30a0fbef-7c27-459f-f351-0b7e7d024007" print(train_label[0]) print(train_label[0].view(1)) # + [markdown] id="-8jrxer5IoqD" # ### Train the network (only 5000 iterations) on the train set # + id="lEuJD4dNIoqE" executionInfo={"status": "ok", "timestamp": 1631304718581, "user_tz": -480, "elapsed": 2149, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} criterion = nn.NLLLoss() optimizer=torch.optim.SGD(net.parameters() , lr=0.01 ) # 训练 5000 次 for iter in range(1,5000): # choose a random integer between 0 and 59,999 # extract the corresponding picture and label # and reshape them to fit the network # complete here # complete here # complete here # 防止 neural network 记住顺序,能提高 5% 的预测率 idx = randint(0, 60000-1) input = train_data[idx].view(1, 784) # label 需要 dimension label = train_label[idx].view(1) # feed the input to the net input.requires_grad_() # for backprobagation -- we will discuss it later # complete here prob = net(input) # update the weights (all the magic happens here -- we will discuss it later) log_prob=torch.log(prob) loss = criterion(log_prob, label) optimizer.zero_grad() loss.backward() optimizer.step() # + [markdown] id="U-GC4sBtIoqE" # ### Take the 34th image of the test set: # + colab={"base_uri": "https://localhost:8080/", "height": 430} id="gGJY8eBfIoqE" executionInfo={"status": "ok", "timestamp": 1631304868381, "user_tz": -480, "elapsed": 554, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="96d1a17b-1e67-4e3a-a1a5-9fd02c8a9877" im= test_data[3]# complete here utils.show(im) # + [markdown] id="edN2XiXlIoqF" # ### Feed it to the TRAINED net: # + colab={"base_uri": "https://localhost:8080/"} id="R1L4zQyAIoqF" executionInfo={"status": "ok", "timestamp": 1631304761080, "user_tz": -480, "elapsed": 517, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="6f17eb91-66f1-44f6-f90c-bdcae7b9a689" p = net(im.view(1, 784))# complete here print(p) # + [markdown] id="IlcihUadIoqF" # ### Display visually the confidence scores # + colab={"base_uri": "https://localhost:8080/", "height": 406} id="hdgp20pfIoqF" executionInfo={"status": "ok", "timestamp": 1631304764766, "user_tz": -480, "elapsed": 724, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "14753845709256584186"}} outputId="9cb3d3d8-72d7-48cf-803a-af7374415415" utils.show_prob_fashion_mnist(prob) # + [markdown] id="nYHHJ0R0IoqK" # ### Choose image at random from the test set and see how good/bad are the predictions # + id="Z5y9zwrqIoqK" # choose a picture at random idx=randint(0, 10000-1) im=test_data[idx] # diplay the picture utils.show(im) # feed it to the net and display the confidence scores prob = net( im.view(1,784)) utils.show_prob_fashion_mnist(prob) # + id="EmNnKjdkIoqK" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.7 64-bit (''NLP-py39'': conda)' # language: python # name: python3 # --- # ### Ubah source code dan penjelasan pada artikel untuk JavaScript ini, diubah menjadi Python # 
(https://blog.bitsrc.io/understanding-higher-order-functions-in-javascript-75461803bad) # ## Map # #### Example 1# # + # Kode Javascript # const arr1 = [1, 2, 3]; # const arr2 = arr1.map(function(item) { # return item * 2; # }); # console.log(arr2); # + # kode di python arr1 = [1, 2, 3] arr2 = map(lambda a: a * 2, arr1) print(list(arr2)) # - # ### Example 2# # + # Kode Javascript # const birthYear = [1975, 1997, 2002, 1995, 1985]; # const ages = birthYear.map(year => 2018 - year); # // prints [ 43, 21, 16, 23, 33 ] # console.log(ages); # - birthYear = [1975, 1997, 2002, 1995, 1985] umur = map(lambda u: 2021 - u, birthYear) print(list(umur)) # ## Filter numbers = [13, 4, 18, 35] div_by_5 = filter(lambda num: num % 5 == 0, numbers) print(list(div_by_5)) # + arbitrary_numbers = map( lambda num: num ** 3, filter(lambda num: num % 3 == 0, range(1, 21)) ) print(list(arbitrary_numbers)) # [27, 216, 729, 1728, 3375, 5832] # + # javascript # const persons = [ # { name: 'Peter', age: 16 }, # { name: 'Mark', age: 18 }, # { name: 'John', age: 27 }, # { name: 'Jane', age: 14 }, # { name: 'Tony', age: 24}, # ]; # const fullAge = persons.filter(person => person.age >= 18); # console.log(fullAge); # + persons = [ {"name": "Peter", "age": 16}, {"name": "Mark", "age": 18}, {"name": "John", "age": 27}, {"name": "Jane", "age": 14}, {"name": "Tony", "age": 24}, ] fullAge = filter(lambda umur: umur["age"] > 18, persons) print(list(fullAge)) # - # ### Reduce # #### Example 1# # + # Javascript # const arr = [5, 7, 1, 8, 4]; # const sum = arr.reduce(function(accumulator, currentValue) { # return accumulator + currentValue; # }); # // prints 25 # console.log(sum); # + from functools import reduce arr = [5, 7, 1, 8, 4] sum = reduce(lambda accumulator, currentValue: accumulator + currentValue, arr) print(sum) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Testing ML architectures implemented on the MLTSA package # # In this packaged there are multiple architectures built in for testing on the different data available # + pycharm={"name": "#%%\n"} """First we import our dataset examples, and as usual generate data to work with""" from OneD_pot_data import potentials from OneD_pot_data import dataset #This sets the potentials, don't re-run total_n_pots = 25 n_DW = 5 relevant_DW_n = 2 #After defining the desired parameters we define the potentials accordingly pots = potentials(total_n_pots, n_DW, relevant_DW_n) # This creates the first dataset of data. 
# It creates the mixing coefficients don't re-run n_features = 180 degree_of_mixing = 2 #We specified the number of features wanted and how much they will mix oneD_dataset = dataset(pots, n_features, degree_of_mixing) # + pycharm={"name": "#%%\n"} """Now we generate the trajectories we will use for the whole experiment""" #Generate the trajectories n_simulations = 100 n_steps = 250 data, ans = oneD_dataset.generate_linear(n_simulations, n_steps) data_val, ans_val = oneD_dataset.generate_linear(int(n_simulations/2), n_steps) #Prepare it for training from sklearn.preprocessing import OneHotEncoder time_frame = [30, 60] #Same time frame as the sklearn one X, Y = oneD_dataset.PrepareData(data, ans, time_frame, mode="Normal") X_val, Y_val = oneD_dataset.PrepareData(data_val, ans_val, time_frame, mode="Normal") # + [markdown] pycharm={"name": "#%% md\n"} # Note that we will start by using the simple MLP, and we will move on to more advanced models. # # First we will import the model builder, and we will try both the simple MLP and the more "deep" version with 5 layers. # + pycharm={"name": "#%%\n"} #We will start with the basic Multi-Layer Perceptron from MLTSA_tensorflow import TF_2_MLP MLP = TF_2_MLP.build_MLP(n_steps, n_features, n_labels=2).model MLP_deep = TF_2_MLP.build_MLP(n_steps, n_features, n_labels=2, type="deep").model # + pycharm={"name": "#%%\n"} from tensorflow.keras.callbacks import EarlyStopping MLP.fit(X, Y, epochs=500, batch_size=n_steps, verbose=1, validation_split=0.2, callbacks=[EarlyStopping(monitor='loss', min_delta=1e-8, restore_best_weights=True, patience=50)]) val_acc = MLP.evaluate(X_val, Y_val, verbose=1) print("We achieved", val_acc[1]*100, "% accuracy on Validation") # + pycharm={"name": "#%%\n"} #A deep learning model needs more epochs to train to get to convergence. 
We will also allow more patience MLP_deep.fit(X, Y, epochs=1000, batch_size=n_steps, verbose=1, validation_split=0.2, callbacks=[EarlyStopping(monitor='loss', min_delta=1e-8, restore_best_weights=True, patience=100)]) val_acc = MLP_deep.evaluate(X_val, Y_val, verbose=1) print("We achieved", val_acc[1]*100, "% accuracy on Validation") # + pycharm={"name": "#%%\n"} #We will start with the basic LSTM ("Vanilla") from MLTSA_tensorflow import TF_2_LSTM LSTM = TF_2_LSTM.build_LSTM(n_steps, n_features, n_labels=2).model # + pycharm={"name": "#%%\n"} LSTM.fit(X, Y, epochs=1000, batch_size=n_steps, verbose=1, validation_split=0.2, callbacks=[EarlyStopping(monitor='loss', min_delta=1e-8, restore_best_weights=True, patience=100)]) val_acc = MLP_deep.evaluate(X_val, Y_val, verbose=1) print("We achieved", val_acc[1]*100, "% accuracy on Validation") # + pycharm={"name": "#%%\n"} # + pycharm={"name": "#%%\n"} # + pycharm={"name": "#%%"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # 测试DVI正态近似的准确性 # + # %matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from IPython.display import clear_output import os, json import gaussian_variables as gv import utils as u import plot_utils as pu import bayes_layers as bnn from bayes_models import MLP, PointMLP, AdaptedMLP np.random.seed(3) # - # 首先构建数据集: # \begin{equation} # y = -(x+0.5)\sin(3\pi x) + \eta # \end{equation} # # # \begin{equation} # \eta = 0.45(x + 0.5)^2 # \end{equation} # + def base_model(x): return -(x+0.5)*np.sin(3 * np.pi *x) def noise_model(x): return 0.45*(x+0.5)**2 def sample_data(x): return base_model(x) + np.random.normal(0, noise_model(x)) data_size = {'train': 500, 'valid': 100, 'test': 100} toy_data = [] for section in ['train', 'valid', 'test']: x = (np.random.rand(data_size['train'], 1) - 0.5) toy_data.append([x, sample_data(x).reshape(-1)]) x = np.arange(-1,1,1/100) toy_data.append([[[_] for _ in x], base_model(x)]) pu.toy_results_plot(toy_data, {'mean':base_model, 'std':noise_model}) # - # 设置模型参数,这里以一个三层、每层5个神经元的网络为例(本项目中的其他网络结果修改“hidden_dims”参数可得到) hypers = { "x_dim": 1, "y_dim": 2, "hidden_dims": [5,5,5], "nonlinearity": "relu", "adapter": { 'in' : {"scale": [[1.0]], "shift": [[0.0]]}, 'out': {"scale": [[1.0, 0.83]], "shift": [[0.0, -3.5]]} }, "method": "bayes", "style": "heteroskedastic", "homo_logvar_scale": 2*np.log(0.2), "prior_type": ["empirical", "wider_he", "wider_he"], "n_epochs": 20000, "batch_size": 500, "learning_rate": 0.001, "lambda": 1.0, "warmup_updates": {'lambda': 14000.0}, "anneal_updates": {'lambda': 1000.0}, "optimizer": "adam", "gradient_clip": 0.1, "data_fraction": 1.0, "sections_to_run": ["train"] } # 构建模型 def make_model(hypers): if hypers['method'].lower().strip() == 'bayes': MLP_factory = MLP prediction = lambda y: tf.reshape(y.mean[:,0], [-1]) loss = bnn.regression_loss else: MLP_factory = PointMLP prediction = lambda y: tf.reshape(y.mean[:,0], [-1]) loss = bnn.point_regression_loss mlp = MLP_factory(hypers['x_dim'], hypers['y_dim'], hypers) mlp = AdaptedMLP(mlp) mlp.make_placeholders() ipt = mlp.placeholders['ipt_mean'] y = mlp(ipt) target = tf.placeholder(tf.float32, [None]) mlp.placeholders['target'] = target global_step = tf.Variable(0, trainable=False, name='global_step') loss, logprob, all_surprise = loss(y, target, mlp, hypers, global_step) accuracy = tf.reduce_mean(tf.abs(target - 
prediction(y))) return { 'model': mlp, 'metrics': { 'accuracy': accuracy, 'loss': loss, 'logprob': logprob, 'all_surprise': all_surprise }, 'global_step': global_step} # 给出一个比较DVI和MCVI算法结果的函数 from scipy import stats def show_compare(model_and_metrics, sess): plt.figure() n_samp = 20000 x = 0.25 ipt = [[[x]] for _ in range(n_samp)] sample_op = model_and_metrics['model'].run_with_MC( ipt, n_sample=n_samp) approx_op = model_and_metrics['model'](x) samples = sess.run(sample_op) approx = sess.run([approx_op.mean, approx_op.var]) # samples_b.shape m_min = stats.norm.ppf( 0.0001, loc=approx[0][0, 0], scale=np.sqrt(approx[1][0, 0, 0])) m_max = stats.norm.ppf( 0.9999, loc=approx[0][0, 0], scale=np.sqrt(approx[1][0, 0, 0])) l_min = stats.norm.ppf( 0.0001, loc=approx[0][0, 1], scale=np.sqrt(approx[1][0, 1, 1])) l_max = stats.norm.ppf( 0.9999, loc=approx[0][0, 1], scale=np.sqrt(approx[1][0, 1, 1])) bin_no_m = np.linspace(m_min, m_max, 50) bin_no_l = np.linspace(l_min, l_max, 50) fig = plt.figure() ax1 = fig.add_subplot(221) ax1.hist(samples[:, 0, 0], bin_no_m, density=True, edgecolor='k', facecolor='#b4c7e7') ax1.plot(*gaussian1d(approx[0][0, 0], approx[1][0, 0, 0], m_min, m_max), 'b') plt.xlim([m_min, m_max]) ax1.set_yticks([]) ax1.set_xlabel('$m$') ax1.set_ylabel('$q(m)$') ax2 = fig.add_subplot(222) ax2.hist(samples[:, 0, 1], bin_no_l, density=True, edgecolor='k', facecolor='#b4c7e7', label="MC") ax2.plot(*gaussian1d(approx[0][0, 1], approx[1][0, 1, 1], l_min, l_max), 'b', label="ours") plt.xlim([l_min, l_max]) ax2.set_yticks([]) ax2.set_xlabel('$\ell$') ax2.set_ylabel('$q(\ell)$') plt.show() return None def gaussian1d(mean, var, min, max): x_axis = np.linspace(min, max, 1000) return x_axis, 1.0 / np.sqrt(2.0 * np.pi * var) * \ np.exp(-1.0 / (2.0 * var) * (x_axis - mean)**2) # 定义训练函数 def run(data): run_id = u.start_run() restricted_training_set = u.restrict_dataset_size(data[0], hypers['data_fraction']) hypers['dataset_size'] = len(restricted_training_set[0]) device_id = 1 device_string = u.get_device_string(device_id) print(hypers) with tf.device(device_string): if True: model_and_metrics = make_model(hypers) train_op = u.make_optimizer(model_and_metrics, hypers) sess = u.get_session() saver = tf.train.Saver() all_summaries = [] best_valid_accuracy = np.inf show_compare(model_and_metrics, sess) for epoch in range(hypers['n_epochs']): verbose = (epoch % 20 == 0) if verbose: print("Epoch %i: " % epoch, end='') epoch_summary, accuracies = u.train_valid_test( { 'train': restricted_training_set, 'valid': data[1], 'test': data[2] }, sess, model_and_metrics, train_op, hypers, verbose) show_compare(model_and_metrics, sess) # 开始训练,并得到两个DVI和MCVI之间的比较 run(toy_data) # 将之前的数据集整体增大10倍 # + def base_model_10(x): return 10*(-(x+0.5)*np.sin(3 * np.pi *x)) def noise_model_10(x): return 10*(0.45*(x+0.5)**2) def sample_data_10(x): return base_model_10(x) + np.random.normal(0, noise_model_10(x)) data_size = {'train': 500, 'valid': 100, 'test': 100} toy_data = [] for section in ['train', 'valid', 'test']: x = (np.random.rand(data_size['train'], 1) - 0.5) toy_data.append([x, sample_data_10(x).reshape(-1)]) x = np.arange(-1,1,1/100) toy_data.append([[[_] for _ in x], base_model_10(x)]) pu.toy_results_plot(toy_data, {'mean':base_model_10, 'std':noise_model_10}) plt.ylim([-11,16]) # - # 同样的方式进行训练,并得到DVI和MCVI之间的比较 run(toy_data) # 对于m的结果,DVI和MCVI的一致性变强了。 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 
Python 3 # language: python # name: python3 # --- # # Define Individual Context – Single Person Use Case # # # ## Overview ## # # Explore the FEC data by specifying SQL predicates that identify **Individuals**, which are people identities extracted—and somewhat cleansed—from the [Individual Contributions](https://www.fec.gov/campaign-finance-data/contributions-individuals-file-description/) file. Inidividual records (stored in the `indiv` table), are basically distinct combinations of name and address information (city, state, zipcode) that have not been aggressively deduplicated. Thus, there will be multiple records for a real-world person if there are variants (or typos or deception) in the identifying information for contribution records. # # Querying by Individual can be used to target all of the `indiv` records (and associated contribution data in `indiv_contrib`) for a single person, or for a set of people to be explored collectively. An example of the first usage will be presented here (the second will be covered in the subsequent `dc2` notebook). One of the limitation of Querying by Individual is that it is difficult to distinguish between the contribution of distinct people identities within a result set. # # Note that this approach will create the following query contexts (each of which may be used in formulating specific queries for investigation or reporting): # # **Principal Context View** # # * `ctx_indiv` # # **Dependent Context Views** # # * `ctx_indiv_contrib` # ## Notebook Setup ## # # ### Configure database connect info/options ### # # Note: database connect string can be specified on the initial `%sql` command: # # ```python # database_url = "postgresql+psycopg2://user@localhost/fecdb" # # %sql $database_url # # ``` # # Or, connect string is taken from DATABASE_URL environment variable (if not specified for `%sql`): # # ```python # # %sql # # ``` # %load_ext sql # %config SqlMagic.autopandas=True # %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # connect string taken from DATABASE_URL environment variable # %sql # ### Clear context ### # # Note that we drop *all* context views so we won't have any inconsistencies after this notebook is run. After defining `ctx_indiv` below, we will define all dependent views (see Overview, above), and leave any higher-order or orthogonal views undefined # %sql drop view if exists ctx_dseg_memb cascade # %sql drop view if exists ctx_dseg cascade # %sql drop view if exists ctx_donor_contrib cascade # %sql drop view if exists ctx_donor cascade # %sql drop view if exists ctx_household cascade # %sql drop view if exists ctx_iseg_memb cascade # %sql drop view if exists ctx_iseg cascade # %sql drop view if exists ctx_indiv_contrib cascade # %sql drop view if exists ctx_indiv cascade # ### Set styling ### # + language="html" # # - # ## Create Principal View (`ctx_indiv`) ## # # For this use case, we'll identify the `indiv` records associated with an identity that we previously queried (in `el_queries1.sql` and `el_queries3.sql`) # + language="sql" # create or replace view ctx_indiv as # select * # from indiv # where name like '%' # and zip_code ~ '9402[58]' # and name !~ 'MRS\.' 
# - # Let's take a quick look at the context we just set (for validation) before proceeding # + language="sql" # select id, # name, # city, # state, # zip_code, # elect_cycles # from ctx_indiv # - # ## Create Dependent Views ## # # ### Create `ctx_indiv_contrib` ### # # Now we'll create the context view for the contributions from the targeted "Individual" records # + language="sql" # create or replace view ctx_indiv_contrib as # select ic.* # from ctx_indiv ix # join indiv_contrib ic on ic.indiv_id = ix.id # - # And some quick validation on the view # + language="sql" # select count(*) as contribs, # sum(transaction_amt) as total_amt, # array_agg(distinct elect_cycle) as elect_cycles # from ctx_indiv_contrib # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="2OvlxnFIsNyT" # # Demo RNA Multi-Perceptrón Backpropagation usando Keras para procesar las imágenes de personajes de los simpsons e identificar a cual corresponse # + [markdown] id="WGdqrNAvsWiF" # 1) Cargar librerías: # + id="gcVLfyLKsaCj" cellView="both" outputId="0c6c3d20-9efb-4076-9eed-9f72ca3ba313" colab={"base_uri": "https://localhost:8080/", "height": 34} import random import keras from keras.layers import Input, Dense from keras.models import Model from keras.utils import plot_model import tensorflow as tf from matplotlib import pyplot as plt import pandas as pd import numpy as np import os from PIL import Image from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix print("Librerías cargadas") # + [markdown] id="xXate9UMmhDA" # 2) Definir los parámetros: # + id="_JT7y1jKmrD9" outputId="a9d2dd13-d7bc-4de2-ea61-6e8758ef1a8f" colab={"base_uri": "https://localhost:8080/", "height": 34} # tamaño de las imágenes IMAGE_SHAPE = (60, 60, 3) # indica si se usan las imágenes generadas por data augmentation usarDA = False # define tamaño de datos de entrada num_inputs = IMAGE_SHAPE[0] * IMAGE_SHAPE[1] * IMAGE_SHAPE[2] # define tamaño de datos de salida (las clases están codificadas en un único número) num_outputs = 1 # cantidad de neuronas ocultas ##hidden_layers = [ 896, 84, 8 ] hidden_layers = [ num_inputs//5, num_inputs//20, num_inputs//100 ] # cantidad de épocas del entrenamiento cantEpocas = 300 map_characters = { 0: 'abraham_grampa_simpson', 1: 'homer_simpson', 2: 'otto_mann', 3: 'agnes_skinner', 4: 'kent_brockman', 5: 'patty_bouvier', 6: 'apu_nahasapeemapetilon', 7: 'krusty_the_clown', 8: 'principal_skinner', 9: 'barney_gumble', 10: 'lenny_leonard', 11: 'professor_john_frink', 12: 'bart_simpson', 13: 'lionel_hutz', 14: 'rainier_wolfcastle', 15: 'carl_carlson', 16: 'lisa_simpson', 17: 'ralph_wiggum', 18: 'charles_montgomery_burns', 19: 'maggie_simpson', 20: 'selma_bouvier', 21: 'chief_wiggum', 22: 'marge_simpson', 23: 'sideshow_bob', 24: 'cletus_spuckler', 25: 'martin_prince', 26: 'sideshow_mel', 27: 'comic_book_guy', 28: 'mayor_quimby', 29: 'simpsons_dataset', 30: 'disco_stu', 31: 'milhouse_van_houten', 32: 'snake_jailbird', 33: 'edna_krabappel', 34: 'miss_hoover', 35: 'troy_mcclure', 36: 'fat_tony', 37: 'moe_szyslak', 38: 'waylon_smithers', 39: 'gil', 40: 'ned_flanders', 41: 'groundskeeper_willie', 42: 'nelson_muntz' } print("Configuración de RNA MLP Backpropagation definida: [", num_inputs, hidden_layers, num_outputs," ] ") # + [markdown] id="-Rm33ZPNnBpE" # 3) Montar el Drive: # + id="ysaIl300nDud" 
outputId="d1575bac-6c27-4a1b-ebf0-468efd64a722" colab={"base_uri": "https://localhost:8080/", "height": 34} from google.colab import drive drive.mount('/content/drive', force_remount=True) # #!unzip "/content/drive/My Drive/I.A. GRUPO/T2 - TheSimpsons/dataset.zip" image_pathes = 'simpsons_dataset' # !rm -rf simpsons_dataset/simpsons_dataset # + [markdown] id="sDkaUtZ6nG8l" # 4) Cargar imágenes para entrenar el modelo: # + id="uYz8mV4SnJ4O" outputId="326e267d-e815-42fd-e00b-2b3bb3051173" colab={"base_uri": "https://localhost:8080/", "height": 348} # define función para cargar las imágenes def cargarImagenes(imagPath): classes_ori = [] images_ori = [] all_dirs = os.listdir( imagPath ) print(all_dirs) for each_dir in all_dirs: auxiPath = imagPath + '/' + each_dir imagFN = os.listdir( auxiPath ) for each_imagFN in imagFN: # abre la imagen imag = Image.open(auxiPath + "/" + each_imagFN) # ajusta el tamaño imag = imag.resize((IMAGE_SHAPE[0], IMAGE_SHAPE[1]), Image.ANTIALIAS) # transforma a un vector de nros arImag = np.array(imag) # agrega a los vectores classes_ori.append( each_dir ) images_ori.append( arImag ) return classes_ori, images_ori, "RGB" def split_into_train_and_test_randomly(classes, images): indexes = list(range(0, len(images))) random.shuffle(indexes) eighty_percent = (len(indexes) // 10) * 8 train_indexes = indexes[:eighty_percent] test_indexes = indexes[eighty_percent:] train_classes, train_images = split_into_train_and_test(classes, images, train_indexes) test_classes, test_images = split_into_train_and_test(classes, images, test_indexes) return train_classes, train_images, test_classes, test_images def split_into_train_and_test(classes, images, indexes): return split_by_indexes(classes, indexes), split_by_indexes(images, indexes) def split_by_indexes(elements, indexes): return [*map(lambda index: elements[index], indexes)] # carga las imagenes de entrenamiento classes, images, image_type = cargarImagenes(image_pathes) print("clases", len(classes)); print("images", len(images)) classes_train, images_train, classes_test, images_test = split_into_train_and_test_randomly(classes, images) print("> Para Entrenamiento: ") print("- Imágenes cargadas: ", len(images_train)) if len(classes_train)>0: print("- Ejemplo ", classes_train[0], " ", images_train[0].shape, ": ") display( Image.fromarray(images_train[0], image_type) ) print("\n\n> Para Prueba: ") print("- Imágenes cargadas: ", len(images_test)) if len(classes_test)>0: print("- Ejemplo ", classes_test[0], " ", images_test[0].shape, ": ") display( Image.fromarray(images_test[0], image_type) ) # + id="CPPvnkjTnTQN" # define función auxiliar para mostrar imágenes preparadas def plot_image(imag): if IMAGE_SHAPE[2]==1: plt.imshow((imag*255).reshape(IMAGE_SHAPE[0], IMAGE_SHAPE[1]).astype(np.uint8)) plt.gray() else: plt.imshow((imag*255).reshape(IMAGE_SHAPE).astype(np.uint8)) plt.axis("off") # define función auxiliar para preparar la lista de imágenes a procesar def prepare_imageList(imagList): auxiAr = np.array(imagList).astype('float32') / 255. 
auxiAr = auxiAr.reshape((len(auxiAr), num_inputs)) return np.array(auxiAr) # define función auxiliar para preparar lista de clases def prepare_clasesList(classesList, dictMapeo=None): if dictMapeo==None: # genera diccionario de mapeo auxDict = list(set(classesList)) dictMapeo = dict( zip( auxDict, range(len(auxDict)) ) ) # realiza el mapeo y = [] for cl in classesList: y.append( dictMapeo[cl] ) return np.array(y), dictMapeo # define vector auxiliar de datos de entrada para usar en el entrenamiento y prueba x_train = prepare_imageList(images_train) x_test = prepare_imageList(images_test) # define vector auxiliar de datos de salida para usar en el entrenamiento y prueba # también usa esta información para determinar la cantida de neuronas de salida y_train, dictMapeo = prepare_clasesList(classes_train) y_test, _ = prepare_clasesList(classes_test, dictMapeo) # genera diccionario auxiliar para poder convertir de ID de clase a nombre de clase clases_map = [ x for x,y in dictMapeo.items() ] print("> Para Entrenamiento: ") print(" - x_train (cant ejemplos, datos entrada): ", x_train.shape) print(" - y_train (cant): ", len(y_train)) print("\n\n> Para Prueba: ") print(" - x_test (cant ejemplos, datos entrada): ", x_test.shape) print(" - y_test (cant): ", len(y_test)) print("\n\n> Para Ambos: ") print(" - dictMapeo: ", dictMapeo) print(" - clases_map: ", clases_map) if len(y_train)>0: print("\n - Imagen reconstruida de ", clases_map[y_train[0]], "(", y_train[0], ")") plot_image(x_train[0]) # + [markdown] id="wvooB4Gws7ua" # 5) Establecer el modelo para la RNA: # + id="_MlYyhEutC_O" # define la arquitectura de capas teniendo en cuenta la definición dada anteriomente input_img_Lay = Input(shape=(num_inputs,), name='input_img') # capa de entrada eachLay = input_img_Lay auxName = 'hidd_' auxId = 1 for num_hid in hidden_layers: # agrega la capa oculta auxlayerName = auxName+str(auxId) auxId = auxId + 1 eachLay = Dense(num_hid, name=auxlayerName)(eachLay) # capas ocultas output_img_Lay = Dense(num_outputs, activation=None, name='output')(eachLay) # capa de salida # genera el modelo RNA MLP Backpropagation model = Model(input_img_Lay, output_img_Lay, name='RNA') #model.compile(optimizer='rmsprop', loss='mse', metrics=['accuracy']) model.compile(optimizer='adam', loss='mse', metrics=['accuracy']) print("Modelo creado con ", len(model.layers), " capas:") model.summary() print("\n") plot_model(model, show_layer_names=True, show_shapes=True) # + [markdown] id="DBtXyyXCtjDc" # 6) Entrenar el modelo de la RNA: # + id="21pQvmtCtn-T" # lleva a cabo el entrenamiento model.fit(x_train, y_train, epochs = cantEpocas, batch_size = 15) # + id="ZzmLDXuUHkdf" # función auxiliar para probar el modelo entrenado en detalle def probarModelo(x, y, esDAimag, clases_map): # procesa las imágenes de prueba con el modelo predClass = model.predict(x) # muestra los resultados con las imágenes umbralClas = 0.5 classPreds = [] classReal = [] for i in range(len(x)): # prepara salida clReal = clases_map[ y[i] ] idclPred = predClass[i][0] ## determina clase predecida de acuerdo al umbral de clasificación idclPredRnd = int(idclPred) if (idclPred - idclPredRnd)>0.5 and (idclPredRnd+1)=len(clases_map): clPred = "CLASE " + str(idclPredRnd) + " INVÁLIDA!" 
else: clPred = clases_map[ idclPredRnd ] classReal.append( clReal ) classPreds.append( clPred ) # sólo muestra las imágenes no generadas por DA if not esDAimag[i]: strTitulo = 'Real: ' + clReal + ' / RNA: ' strTitulo = strTitulo + clPred + ' (' + str( idclPred ) +')' # muestra comparación con la imagen fig = plt.figure() fig.suptitle( strTitulo ) ax1 = fig.add_subplot(121) plot_image( x[i] ) plt.tight_layout() fig = plt.gcf() # muestra reporte de clasificación print("\n Reporte de Clasificación: ") print(classification_report(classReal, classPreds)) # muestra matriz de confusion print('\nMatriz de Confusión: ') cm = confusion_matrix(classReal, classPreds, labels=clases_map) cmtx = pd.DataFrame( cm, index=['r:{:}'.format(x) for x in clases_map], columns=['p:{:}'.format(x) for x in clases_map] ) print(cmtx) print("\n") print("\n>Resultados: ") # prueba con los datos de entrenamiento print("*** Resultados con datos de Entrenamiento: ") probarModelo(x_train, y_train, esDAimag_train, clases_map) # + [markdown] id="Yh-6p2xDtrUU" # 7) Evaluar el modelo de la RNA entrenado usando las imágenes de prueba: # + id="A15K-9TRtq7U" # evalua al modelo entrenado resEval = model.evaluate(x_test, y_test) print("\n>Evaluación del Modelo: ") print(" - Error: ", resEval[0]) print(" - Exactitud: ", resEval[1]*100) print("\n") # prueba con los datos de entrenamiento print("\n\n*** Resultados con datos de Prueba: ") probarModelo(x_test, y_test, esDAimag_test, clases_map) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python (fastai_dev) # language: python # name: fastai_dev # --- # + # default_exp core # - # export import numpy as np import json from pathlib import Path # hide from nbdev.showdoc import show_doc from nbdev.export import notebook2script # # Core # > Functions that implement basic functionality that will be used in the library. # ## Util functions # A set of functions that provide usefull functionality # + # export def filter_files(files, include=[], exclude=[]): "Filter list of files using a list of strings to inculde and/or exclude" for incl in include: files = [f for f in files if incl in f.name] for excl in exclude: files = [f for f in files if excl not in f.name] return sorted(files) def ls(x, recursive=False, include=[], exclude=[]): "List files in folder, if recursive is True also list subfolders" if not recursive: out = list(x.iterdir()) else: out = [o for o in x.glob('**/*')] out = filter_files(out, include=include, exclude=exclude) return sorted(out) Path.ls = ls def hdf_attr_check(attr, hdf, default): "Check if attribute is in hdf_attr_dict and return default" return default if not hasattr(hdf, attr) else hdf.__getattr__(attr) def dict2json(data:dict, file): "Writes json file from dict" with open(file, 'w') as f: f.write(json.dumps(data)) # - # Examples: path = Path('.') path.ls() path = Path('.') path.ls(include=['.ipynb']) path = Path('.') path.ls(include=['.ipynb'], exclude=['_checkpoints']) # export def monthlen(year, month): "Gives lenght of the month" base = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31] if (year % 4) == 0: if (year % 100) == 0: if (year % 400) == 0: base[1] += 1 else: base[1] += 1 return base[month-1] year = 2000 month = 2 monthlen(year, month) # export class InOutPath(): """Keeps track of an input and a output path. 
Creates paths if they don't exist and mkdir=True""" def __init__(self, input_path:str, output_path:str, mkdir=True): if isinstance(input_path, str): input_path = Path(input_path) if isinstance(output_path, str): output_path = Path(output_path) self.input_path = input_path self.output_path = output_path if mkdir: self.mkdirs() @property def src(self): "Shortcut to input_path" return self.input_path @property def dst(self): "Shortcut to output_path" return self.output_path def mkdirs(self): self.input_path.mkdir(exist_ok=True, parents=True) self.output_path.mkdir(exist_ok=True, parents=True) def __truediv__(self, s): return InOutPath(self.src/s, self.dst/s) def __repr__(self): return '\n'.join([f'{i}: {o}' for i, o in self.__dict__.items()]) + '\n' show_doc(InOutPath.src) show_doc(InOutPath.dst) # hide notebook2script() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # INF8111 - Fouille de données # # # ## TP2 Automne 2019 - Extraction et analyse d'une base de données de tweets # # ##### Membres de l'équipe: # # - # - # - # ## Présentation du problème # # En 2017, Twitter compte 313 millions d’utilisateurs actifs par mois avec 500 millions de tweets envoyés par jour. Cette information est rendue disponible à destination de la recherche et du développement web grâce à une API publique qui permet de collecter les informations que l'on souhaite. # # Néanmoins, la politique de développement de Twitter limite le partage de ces données. En effet, le partage du contenu des tweets dans une base de données n'est pas autorisé, seuls les identifiants des tweets le sont. # Pour partager publiquement une base de données de tweets que l'on a créée, il faut que cette base de données ne soit consituée que des identifiants de tweets, et c'est ce que l'on retrouve dans la plupart des jeux de données publiques. # # Il est donc nécessaire pour exploiter ces données "d'hydrater" les tweets en question, c'est-à-dire extraire l'ensemble des informations à partir de l'ID, ce qui demande d'utiliser l'API de Twitter. # # Nous allons ici utiliser des bases de données publiques créées par GWU (George Washington University), qui ont l'avantage d'être très récentes : # https://dataverse.harvard.edu/dataverse/gwu-libraries # # Chaque base de données de GWU couvre un sujet précis (élection américaine de 2016, jeux olympiques, etc.), et les données ont été recueillis en appliquant des requêtes qui filtraient les résultats pour n'avoir que des tweets pertinents. Un fichier README est fourni avec chaque base de données pour donner les détails de création du *dataset*. # # # **Les objectifs de ce TP sont donc les suivants :** # # 1. Construire un *crawler* qui collecte les informations d'un tweet à partir de son ID, avec le jeu de données de son choix et les informations pertinentes pour le sujet choisi # 2. A partir de ces données de Twitter collectés, application de méthodes en Machine Learning (ML)/Natural Language Processing (NLP) pour fournir une analyse pertinente. # # # Twitter autorisant le partage **local** des données (par exemple au sein d'un groupe de recherche), une base de données sera fournie si vous ne parvenez pas à créer la vôtre. # # I/ Hydratation de tweets à l'aide de l'API Twitter (4 Pts) # ### 1. 
Obtenir l'authorisation de Twitter pour l'utilisation de l'API # Pour l'authentification, Twitter utilise OAuth : https://developer.twitter.com/en/docs/basics/authentication/overview/oauth # Vous aurez ici besoin en particulier de OAuth2, car vous n'allez pas interagir avec des utilisateurs sur Twitter (simplement collectés des données). # # ##### 1.1. Obtention d'un compte Twitter développeur # # La première étape nécessaire pour enregistrer votre application et de créer un compte Twitter développeur. Pour ce faire : # # - Créez un compte Twitter classique si vous n'en avez pas déjà un. # # - Sur le site, https://developer.twitter.com, cliquez sur *apply* pour obtenir un compte développeur. # # - Remplissez tous les champs nécessaires. Twitter demande beaucoup de détails sur l'utilisation que vous allez faire de ce compte, il est donc important d'expliquer la démarche en détail : il faut souligner le fait que le projet est **académique** (aucune intention commerciale, aucune publication des données collectés, etc.), expliquer les objectifs et l'apprentissage de ce TP (prise en main de l'API Twitter, l'application concrète de méthodes de Data Mining, etc.), mais aussi expliquer en détail ce que vous allez faire des données (en reprenant des consignes du sujet), les méthodes que vous allez appliquer (citez des méthodes vues en cours ou au précédent TP), le rendu fourni (insistez sur le fait que rien ne sera publique), etc. Pensez notamment à indiquer le nom du cours et le sigle du cours, le nom de l'établissement, mon nom (Théo Moins), etc. Cochez que vous n'utiliserez pas la fonctionnalité de Retweet, et que l'aggregation et l'affichage de tweets ne sera fait que dans un cadre pédagogique (non publique, et sous la forme d'un projet de recherche). Si jamais vous n'êtes pas assez précis, Twitter peut vous renvoyer un courriel pour vous demander des précisions. # # ##### 1.2. Obtention d'un jeton d'accès # # - Lorsque Twitter aura validé votre demande de compte développeur, allez sur https://developer.twitter.com/en/apps pour créer une application (cliquer sur *create an app*) # # - Ici encore, des informations sont à fournir ici. Certaines, comme le nom ou le site internet, ne sont pas très importante, vous pouvez mettre un site internet factice si vous le souhaitez. # # - A la fin de ce processus, vous pouvez enfin obtenir les clés et les jetons pour utiliser l'API: allez sur la page de l'application pour créer les jetons. Vous devez récupérer une paire de clés et une paire de jetons pour passer à la suite. # # # + CONSUMER_KEY = "aLLEXDaLKaWHIgR7qD14tmYFv" CONSUMER_SECRET = "" oauth_token = "" oauth_secret = "" # - # ### 2. Premiers pas avec Twython # # ##### 2.1 Installation et import de la librairie # # Plusieurs librairies Python existent pour manipuler l'API Twitter. Aussi appelé *wrappers*, ce sont un ensemble de fonctions python qui appelle des fonctions de l'API. Parmi elles, nous utiliserons Twython, librairie répendue et activement maintenue. # # Documentation de Twython : https://twython.readthedocs.io/en/latest/api.html # + import csv import time import sys import pandas as pd try: from twython import Twython, TwythonError, TwythonRateLimitError except ImportError: # !pip install --user twython # - # ##### 2.2 Création d'une application et premiers tests: twitter = Twython(CONSUMER_KEY, CONSUMER_SECRET, oauth_token, oauth_secret) # Voici un test avec une recherche très simple pour vous assurer que la requête fonctionne. 
# # La fonction search renvoie une recherche (non exhaustive) de tweets, et l'option "*popular*" permet de retourner les résultats les plus populaires de la réponse. (documentation ici: https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets) basic_search = twitter.search(q='python', result_type='popular') # La fonction `search` renvoie un dictionnaire contenant la liste de tweets de la requête, et les métadonnées. # # Voici un exemple d'un résultat d'une recherche, observez ainsi toutes les données/métadonnées que contient un tweet et que vous pouvez extraire par la suite: basic_search['statuses'][0] # Il est également possible avec Twython de récupérer les informations d'un tweet à partir de son ID. # # #### Question 1. Afficher la date, le nom d'utilisateur et le contenu du tweet ayant l'ID : 1157345692517634049 (0.5 Pts) # # *Indice : vous pourrez utiliser avec la fonction de twython `show_status`* test_id = "1157345692517634049" tweet = twitter.show_status(id=test_id) print("Date : {}\n".format(tweet["created_at"])) print("Nom d'utilisateur : {}\n".format(tweet["user"]["name"])) print("Contenu du tweet : {}\n".format(tweet["text"])) # **Attention** : Twitter a une limitation de requête par fenêtre de 15 minutes, qui est donc à prendre en compte dans la base de données : https://developer.twitter.com/en/docs/basics/rate-limiting.html # ### 3. Hydratation d'une base de donnée de tweets # # Les choses sérieuses commencent ! # # On souhaite désormais construire une fonction `hydrate_database` qui, à partir d'un fichier texte contenant une liste d'ID de tweets, créer un fichier csv contenant les informations que l'on souhaite extraire. # # Due à la limitation de requête, la fonction `show_status` vue plus haut s'avère peu efficace pour cette tâche : à raison de 900 requêtes pour 15 minutes, il sera beaucoup trop long de construire une base de données un tant soit peu conséquente. La fonction `lookup_status` (voir documentation) sera donc plus adaptée. Elle permettra d'hydrater 100 tweets par requête, ce qui, a raison d'une limite de 900 requêtes pour 15 minutes, rends la construction de la base de données plus réaliste. Il faudra tout de même gérer l'erreur générer par la limitation, si l'on souhaite avoir plus de 90000 tweets ou si l'on appelle plusieurs fois la fonction en moins de 15 minutes. # # #### Question 2. Implémenter la fonction `hydrate_database` (3.5 Pts) # # *Attention : Il faut également gérer le cas où la feature demandée n'est pas une clé du dictionnaire mais une "sous-clé", comme c'est le cas pour le nom d'utilisateur par exemple (accessible dans la feature *user*, qui lui même est un dictionnaire). Un moyen simple pour pallier à ce problème consiste à considérer la feature comme une liste, qui contiendrait la clé et les sous-clés si il y a lieu (voir exemple plus bas) # # *Indice : La fonction `sleep` du module time permet de patienter le temps nécessaire* def hydrate_database(filename, database_name, features, nb_requests, tweet_hydratation_limit=100): """ Create a csv file that contains features of tweets from an file that contains ID of tweets. 
filename: Name of the file that contains ids database_name: name of the file that will be created features: List of features nb_requests: number of time the function lookup_status will be called tweet_hydratation_limit: """ from itertools import islice from time import sleep # Opening the ID File: file = open(filename, "r") print("File to hydrate: " + filename+"\n") print("Number of requests: "+ str(nb_requests)+"\n") # Creation of the file that will contain the hydrated tweets: hydrated_tweets = pd.DataFrame(columns=['_'.join(i) for i in features]) n = 1 while n <= nb_requests : try:# TODO if n % 50 == 0: print("Number of done requests: "+ str(n)) tweet_ids = list(map(lambda x:x.strip(), islice(file, 10))) tweet_status = twitter.lookup_status(id=tweet_ids) for tweet in tweet_status: l = [] for i in range(len(features)): r = tweet[features[i][0]] if len(features[i]) >1: for j in features[i][1:]: r = r[j] l.append(r) hydrated_tweets.loc[len(hydrated_tweets)] = l n += 1 except TwythonError as e: if isinstance(e, TwythonRateLimitError): retry_after = int(e.retry_after) sleep(900) file.close() hydrated_tweets.to_csv (database_name, index = None, header=True) print("\n") print("File " + filename + " Hydrated. :)") # Utilisez le fichier suivant en guise d'example : # https://dataverse.harvard.edu/file.xhtml?persistentId=doi:10.7910/DVN/5QCCUU/QPYP8G&version=1.1 # # On suppose qu'on ne souhaite garder que le texte (*text*) l'ID de l'utilisateur (*user/screen_name*) # + filename = "gwu/climate_id.txt" database_name = "databases/climate.csv" features = [['text'], ['user', 'screen_name']] nb_requests = 400 hydrate_database(filename, database_name, features, nb_requests, tweet_hydratation_limit=100) # - # # II/ Analyse d'une base de données au choix (16 pts) # # Maintenant que vous êtes en mesure d'hydrater une base de données de tweets efficacement et en prenant en compte les limitations de Twitter, vous pouvez l'appliquer sur le *dataset* qui vous intéresse le plus. # # ### 1. Instructions # # Dans cette partie, vous allez mener **entièrement** de vous-même un projet de *Data Science*, c'est à dire de la collecte des données jusqu'à l'interprétation des résultats. 3 sujets sont proposés, vous devez choisir celui qui vous intéresse le plus parmi : # # 1. Analyse de sentiments pour la prédiction des résultats de l'élection américaine. # # **Dataset :** "2016 United States Presidential Election Tweet Ids", https://doi.org/10.7910/DVN/PDI7IN # # **Précision :** Ce sujet est assez similaire au TP1 (avec ici sentiment = parti politique), vous êtes donc libre de reprendre ce que vous aviez fait. Cependant, il faudrait aller un peu plus en profondeur ici, par exemple sur l'étape de la classification. De plus, vous avez ici une nouvelle problématique qui est que vos données ne sont pas labellisés (mais la construction des collections devrait vous permettre de labelliser vous-même). # # # 2. Détection de discours d'incitation à la haine. # # **Dataset :** Modifier votre fonction d'hydratation en utilisant la fonction search pour n'avoir que des tweets récents. # # **Précision :** Ce sujet pourrait également être abordé de la même manière que le TP1 : des étapes de preprocessing + de la classification. 
# However, in this case, having data with "hate speech"/"not hate speech" labels is much more complex, because many labelled databases will turn out to be almost empty once hydrated: the tweets will already have been deleted by the time we make our request (since Twitter also takes care of removing hateful tweets). That is why you have to build a database from the most recent tweets possible, before they are potentially deleted. To flag a tweet as hateful, one method would be to detect hateful vocabulary, for example with `hatebase.org`, which offers large, very complete databases. You can create an account on the site to get access to the API, and then use this Python library: https://github.com/DanielJDufour/hatebase. By modifying the query so as to keep only tweets containing this vocabulary, and by combining it with sentiment analysis, you will be able to obtain results to analyse. You could also take a "user" approach to search for hateful tweets: when a tweet is detected as hateful, inspect all the tweets of that user and/or of their *followers*. In short, there are many possibilities, but this topic is the most complex of the three. I will therefore be less demanding on the 'numerical' results; what matters most here is the analysis and having a coherent approach (it is also very important to take the time to think about a clear definition of "hateful").
#
#
# 3. Clustering methods applied to tweets about the news, and analysis of the results.
#
# **Dataset:** "News Outlet Tweet Ids", https://doi.org/10.7910/DVN/2FIFLH
#
# **Note:** Application of preprocessing methods, then of clustering methods to group together the tweets that mention the same news story or news category (your choice!), then visualisation, study over time... You will have to find out which clustering method works best, and this will depend on your approach (is the number of classes known? If so, how many?).
#
#
# You are entirely free over the whole process (choice of the extracted information, ML methods, libraries, etc.). Only the databases themselves are strictly imposed here. The notes above are just there to guide you a little if you wish; if you have other ideas, do not hesitate! Since these topics are popular within the scientific community, you may (**only if you wish**) take inspiration from articles in the literature, provided that you cite them in your report and write your own implementation.
#
# #### The goal here, however, is not to reach the state of the art, but to apply a clear and rigorous methodology that you have built yourself.
#
# The datasets being massive, it is strongly discouraged to build a database containing all the hydrated tweets (for example, the authors of dataset no. 1 point out that with the API limitations this would take you about 32 days). It is up to you to decide what dataset size you need.
#
# Also remember to read the README file of the database you have chosen; it will help you better understand your future results.
#
# ### 2. Writing a report
#
# For this TP, you will have to hand in a report that details and justifies your whole method, and that presents the results you obtained. The following elements must appear in it (this can serve as an outline, but it is not rigid):
#
# - Title of the project, and names of all the team members (with email and student number)
#
# - **Introduction**: summary of the problem, of the methodology and of the results obtained.
#
# - **Presentation of the dataset**: description, justification of its size, of the choice of features, etc.
#
# - **Preprocessing**: if any, justification of the preprocessing steps.
#
# - **Methodology**: description and justification of all the choices (algorithms, hyper-parameters, regularization, metrics, etc.)
#
# - **Results**: analysis of the results obtained (use figures to illustrate), and how the design choices relate to the performance obtained.
#
# - **Discussion**: discuss the advantages and drawbacks of your approach; what are its weaknesses and flaws? What could be improved? You can also suggest future ideas to explore.
#
# - **References**: if you took inspiration from an existing study.
#
# You can use the arXiv template for the report: https://fr.overleaf.com/latex/templates/style-and-template-for-preprints-arxiv-bio-arxiv/fxsnsrzpnvwc. **The whole report must not, however, exceed 5 pages, figures and references included.** The 5 pages are not mandatory: if you consider that fewer are enough and that your report is indeed complete, you will not be penalized.
#
#
# ### 3. Expected deliverable
#
# At the end of the TP, you will submit a *zip* file containing the following elements:
#
# - The *pdf* file of the report
# - This notebook, completed by you. You may also implement your method right after this part, or use a separate file if you prefer. Although only the report will be used for grading, keep your code clear and commented!
# - Do not send the data files, as they are too large. With the report and the code, everything will be detailed and it will be easy to regenerate them.
#
# ### 4. Evaluation
#
# 12 points of this part will be based on the methodology, and 4 points on the results.
#
# The grading of the methodology includes:
#
# - The relevance of all the steps of the approach
#
# - A good description of the chosen algorithms
#
# - A sound justification of the choices made
#
# - A relevant analysis of the results
#
# - The clarity and organization of the report (figures, tables) and of the code.
#
#
# As for the results, it is impossible to set a fixed grading scale, because they will depend on the topic you choose. This is a problem you will be confronted with: since every study is specific, it can be complicated to evaluate a model qualitatively, all the more so as you probably do not know the state of the art. That is why it will be important to make several attempts and to compare different methods. The results should therefore be consistent with the complexity of your implementation: a simple, naive model will give you first results, which you should then improve with more precise and complex models.
#
# As a result, full marks for the results will be awarded if:
# - You obtain first results with a naive method, which show that your choices are relevant
# - These results are then improved with a more complex method


# # Data acquisition & preprocessing

# +
import os

for filename in os.listdir("./elections/"):
    database_name = "./databases/" + filename[:-3] + 'csv'
    features = [['text'], ['id']]
    if "election-day" in filename[:-3]:
        # In order to make predictions on locations
        features = [['text'], ['id'], ['user', 'location'], ['user', 'screen_name']]
    nb_requests = 400
    hydrate_database("./elections/" + filename, database_name, features, nb_requests, tweet_hydratation_limit=100)
    print("\n----------------------------\n")
# +
import string
import re
import pickle
import nltk
from nltk.corpus import stopwords
from nltk.corpus import words
from nltk.stem import PorterStemmer
from stop_words import get_stop_words
import numpy as np
import os
import pandas as pd
from collections import Counter
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
import time
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
# -

# # Labelling the data
# Creating the database

# +
db_dir = "./databases/"
features = [['text'], ['id']]
column_names = ['_'.join(name) for name in features] + ['label']
df = pd.DataFrame(columns=column_names)

for filename in os.listdir(db_dir):
    print(filename)
    if filename[:10] == 'republican':
        buffer_df = pd.read_csv(db_dir + filename)
        buffer_df.columns = column_names[:-1]
        buffer_df['label'] = 0
        df = pd.concat([df, buffer_df], axis=0)
    if filename[:10] == 'democratic':
        buffer_df = pd.read_csv(db_dir + filename)
        buffer_df.columns = column_names[:-1]
        buffer_df['label'] = 1
        df = pd.concat([df, buffer_df], axis=0)

print(df.label.unique())
# -

df.head(5)

# Is there any missing data
df.text.isnull().values.any()

X, y = df.loc[:, 'text'], df.loc[:, 'label']
y = y.astype('int')

# ## Processing

# +
class TwitterPreprocessing(object):

    def handle_urls(self, tweet):
        tweet = re.sub(r'(https:|http:)?\/\/t.co\/[a-zA-Z0-9]+', ' __URL__ ', str(tweet))
        return tweet

    def handle_numbers(self, tweet):
        tweet = re.sub(r'\b\d+\b', ' __NBR__ ', tweet)
        return tweet

    def handle_tags(self, tweet):
        tweet = re.sub(r'@(\S+)', r'\1', tweet)
        return tweet

    def handle_hashtags(self, tweet):
        tweet = re.sub(r'#(\S+)', r'\1', tweet)
        return tweet

    def handle_and(self, tweet):
        # unescape the HTML entity for "&" left by the Twitter API
        return tweet.replace("&amp;", "&")

    def handle_ponctuation(self, tweet):
        # Remove punctuation
        tweet = tweet.strip('\'"?!,.():;')
        return tweet

    def handle_emojis(self, tweet):
        # taken from https://github.com/abdulfatir/twitter-sentiment-analysis/blob/master/code/preprocess.py
        # Smile --
        tweet = re.sub(r'(:-\)|\(-:|:\'\))', ' __EMO_POS__ ', tweet)
        # Laugh --
        tweet = re.sub(r'(:-?D|x-?D|X-?D)', ' __EMO_POS__ ', tweet)
        # Love --
        tweet = re.sub(r'(<3|:\*)', ' __EMO_POS__ ', tweet)
        # Wink --
        tweet = re.sub(r'(;-?\)|;-?D|\(-?;)', ' __EMO_POS__ ', tweet)
        # Sad --
        tweet = re.sub(r'(:-\(|\)-:)', ' __EMO_NEG__ ', tweet)
        # Sad 2 --
        tweet = re.sub(r'(:\/|:-\/)', ' __EMO_NEG__ ', tweet)
        # Cry --
        tweet = re.sub(r'(:,\(|:\'\(|:"\()', ' __EMO_NEG__ ', tweet)
        return tweet

    def
Remove_numbers(self,tweet): tweet = re.sub("\d+", "__number__ ", tweet) return tweet def Remove_stepword(self, features): stop_words = list(get_stop_words('en')) #About 900 stopwords nltk_words = list(stopwords.words('english')) #About 150 stopwords stop_words.extend(nltk_words) list_words = [w for w in features if not w in stop_words] return list_words def stemm(self,list_word): stemmer= PorterStemmer() list_stem = [stemmer.stem(word) for word in list_word] return list_stem def preprocess_reviews(self,features): REPLACE_NO_SPACE = re.compile("(\.)|(\;)|(\:)|(\!)|(\')|(\?)|(\,)|(\")|(\()|(\))|(\[)|(\])") REPLACE_WITH_SPACE = re.compile("()|(\-)|(\/)") features = [REPLACE_NO_SPACE.sub("", line.lower()) for line in features] features = [REPLACE_WITH_SPACE.sub(" ", line) for line in features] return features def tokenize(self, text): # Have to return a list of tokens tokens = nltk.tokenize.word_tokenize(text) return tokens def preprocess(self,data): data=self.handle_urls(data) data=self.handle_ponctuation(data) data=self.handle_emojis(data) data=self.handle_tags(data) data=self.handle_hashtags(data) data=self.handle_and(data) data=self.Remove_numbers(data) # remove single letters and some repeated punctuation data = re.sub(r'\b[-\']\b', '', data) data = re.sub(r'\b\w\b', '', data) data = re.sub(r'[^\w\s]','',data) # data=self.tokenize(data) data=self.preprocess_reviews(data) data=self.stemm(data) data= ' '.join(data) #data=self.Remove_stepword(data) # remove encoding chars #data = data.encode("cp1251","ignore").decode("utf8") return data # - # test text = "Hello world \x8f there ? is my #first . & 56 http://t.co/b4zCMd tweet 2019!poke @Hamza0, @Amine #LetsMakePolyGreatAgain :D ;) :-) :-D :-/ :/ meet us @ //t.co/kbb0B5FxMK https://t.co/gooNdg00Poly" tweet_prep = TwitterPreprocessing() tweet_prep.preprocess(text) data=list(map(tweet_prep.preprocess, X)) X= pd.Series(data).astype(str).str.zfill(11) # + # I used Stratify parameter : means that the train_test_split method returns training and valid subsets that have the same proportions of class labels as the input dataset train_X, test_X, train_Y, test_Y = train_test_split(X, y, test_size=0.20, random_state=12,stratify= y) print("Length of training set : ", len(train_X)) print("Length of test set : ", len(test_X)) #print("Length of test set : ", len(test_X)) # - # - # **Feature engineering** # # Logistic regressions, SVM and other very common classification models require entries that are all the same size, which is not necessarily the case for data types such as texts, which may have a variable number of words. For handling that I'm going to use two important methods : # # - **CountVectorizer:** it is a representation of comments by vectors whose size is equal to the size of the vocabulary, and which is constructed by counting the number of occurrences of each word. Thus, each token is here associated with a dimension. # # - **TF-IDF:** The use of the frequency of gross word appearance, as is the case with CountVectorizer, can be problematic. Indeed, few tokens will have a very high frequency in a comment, and because of this, the weight of these words will be much larger than the others, which will tend to bias all the weights. Moreover, the words that appear in most documents do not help to discriminate them. TF-IDF is a method to overcome this problem. # it weights the vector using an inverse document frequency (IDF) and a frequency of terms (TF). 
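#
# As a quick illustration of the two representations described above, the following cell is a minimal, standalone sketch (not part of the election pipeline; the toy corpus and variable names are invented for the example). It shows the raw occurrence counts produced by `CountVectorizer` and the reweighted values produced by `TfidfTransformer`.

# +
# Standalone illustration only: `toy_corpus`, `vec`, `counts` and `tfidf` are
# example names, not objects used elsewhere in this notebook.
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

toy_corpus = [
    "vote for the republican party",
    "vote for the democratic party",
    "the democratic party rally today",
]

vec = CountVectorizer()
counts = vec.fit_transform(toy_corpus)   # sparse matrix of raw term counts, shape (3, vocab_size)
print(sorted(vec.vocabulary_))           # the learned vocabulary (one dimension per token)
print(counts.toarray())

tfidf = TfidfTransformer(use_idf=True).fit_transform(counts)
print(tfidf.toarray().round(2))          # words present in every document (e.g. "the") get a lower idf weight
# -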
# # - # **Model Selection** # # Now that we have our features, we can train a classifier to try to classify the comments. For the model selection part. # # - we decided to use the 4 common classification models ( Logistic regressions, SVM, Random Forest and Naïve Bayes ). # # - In order to make the vectorizer => transformer => classifier easier to work with, scikit-learn provides a Pipeline class that behaves like a compound classifier. # # - Instead of tweaking the parameters of the various components of the chain, it is possible to run an exhaustive search of the best parameters on a grid of possible values. # # - We try out all classifiers on either words or bigrams, with or without idf ... # # - We used 5-fold cross validation to select a model's parameters # # # ### LogisticRegression # + parameters_regression_tfidf = { 'vec__ngram_range':[(1, 1), (1, 2),(1,3)], 'tfidf__norm': ['l1', 'l2'], 'tfidf__smooth_idf': [True, False], 'clf__C': [.5, 1, 2, 2.5, 3], } pipeline_regression_tfidf = Pipeline([ ('vec', CountVectorizer()), ('tfidf', TfidfTransformer(use_idf=True)), ('clf', LogisticRegression(solver='saga', penalty='l2')) ]) rs_regression_tfidf = GridSearchCV(pipeline_regression_tfidf, parameters_regression_tfidf, cv=5, scoring='accuracy', n_jobs=-1, verbose=0, return_train_score=True) start = time.time() rs_regression_tfidf.fit(train_X, train_Y) #time.time() - start, rs_regression_tfidf.best_params_, rs_regression_tfidf.best_score_ print("Best parameters are : ", rs_regression_tfidf.best_params_) print("Best score(with 5-fold cross validation ) : %0.3f" % rs_regression_tfidf.best_score_) # - filename = './models/rs_regression_tfidf.sav' pickle.dump(rs_regression_tfidf, open(filename, 'wb')) # ### SVM # + parameters_SVM = { 'vec__ngram_range':[(1, 1), (1, 2),(1,3)], 'tfidf__norm': ['l1', 'l2'], 'tfidf__smooth_idf': [True, False], 'clf__alpha': (1e-2, 1e-3,0.1,1e-4,1e-5,1e-6,1,2), } pipeline_SVM = Pipeline([ ('vec', CountVectorizer()), ('tfidf', TfidfTransformer(use_idf=True)), ('clf', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, random_state=42, max_iter=5, tol=None)), ]) rs_SVM = GridSearchCV(pipeline_SVM, parameters_SVM, cv=5, scoring='accuracy', n_jobs=-1, verbose=0, return_train_score=True) rs_SVM.fit(train_X, train_Y) #time.time() - start, rs_regression_tfidf.best_params_, rs_regression_tfidf.best_score_ print("Best parameters are : ", rs_SVM.best_params_) print("Best score(with 5-fold cross validation ) : %0.3f" % rs_SVM.best_score_) # - filename = './models/rs_SVM.sav' pickle.dump(rs_SVM, open(filename, 'wb')) # ### Random Forest # + # specify parameters and distributions to sample from param_dist = {'clf__max_depth': [300, 500, 700], 'vec__ngram_range':[(1, 1), (1, 2),(1,3)], 'tfidf__norm': ['l1', 'l2'], 'tfidf__smooth_idf': [True, False], 'clf__max_features': ["auto","sqrt","log2"], 'clf__n_estimators': [100,200,500,600], 'clf__bootstrap': [True, False], 'clf__criterion': ["gini", "entropy"]} pipeline_rf = Pipeline([ ('vec', CountVectorizer()), ('tfidf', TfidfTransformer(use_idf=True)), ('clf', RandomForestClassifier(n_estimators=20)), ]) rs_rf = RandomizedSearchCV(pipeline_rf, param_dist, cv=5, scoring='accuracy', n_jobs=-1, verbose=0, return_train_score=True) rs_rf.fit(train_X, train_Y) #time.time() - start, rs_regression_tfidf.best_params_, rs_regression_tfidf.best_score_ print("Best parameters are : ", rs_rf.best_params_) print("Best score(with 5-fold cross validation ) : %0.3f" % rs_rf.best_score_) # - filename = './models/rs_rf.sav' pickle.dump(rs_rf, 
open(filename, 'wb')) # ### Multinomial Naïve Bayes # + parameters_MultinomialNB = { 'vec__ngram_range':[(1, 1), (1, 2),(1,3)], 'tfidf__norm': ['l1', 'l2'], 'tfidf__smooth_idf': [True, False], #'tfidf__use_idf': (True, False), 'clf__alpha': [0.1,0.01,0.001,1,2], } pipeline_MultinomialNB = Pipeline([ ('vec', CountVectorizer()), ('tfidf', TfidfTransformer(use_idf=True)), ('clf', MultinomialNB()) ]) rs_MultinomialNB = GridSearchCV(pipeline_MultinomialNB, parameters_MultinomialNB, cv=5, scoring='accuracy', n_jobs=-1, verbose=0, return_train_score=True) start = time.time() rs_MultinomialNB.fit(train_X, train_Y) #time.time() - start, rs_regression_tfidf.best_params_, rs_regression_tfidf.best_score_ print("Best parameters are : ", rs_MultinomialNB.best_params_) print("Best score(with 5-fold cross validation ) : %0.4f" % rs_MultinomialNB.best_score_) # - filename = './models/rs_MultinomialNB.sav' pickle.dump(rs_MultinomialNB, open(filename, 'wb')) # ## Evaluation metric and Model validation # + import string import re import pickle import nltk from nltk.corpus import stopwords from nltk.corpus import words from nltk.stem import PorterStemmer from stop_words import get_stop_words import numpy as np import os from collections import Counter from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline from sklearn.model_selection import RandomizedSearchCV, GridSearchCV import time from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer from sklearn.linear_model import LogisticRegression from sklearn.linear_model import SGDClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.naive_bayes import MultinomialNB # - rs_MultinomialNB = pickle.load(open('./models/rs_MultinomialNB.sav', 'rb')) rs_regression_tfidf = pickle.load(open('./models/rs_regression_tfidf.sav', 'rb')) rs_SVM = pickle.load(open('./models/rs_SVM.sav', 'rb')) rs_rf = pickle.load(open('./models/rs_rf.sav', 'rb')) # #### Data balance import seaborn as sns # %matplotlib inline print(y.value_counts()) sns.countplot(y,label="Count") # #### Matrice de confusion # + from sklearn.metrics import confusion_matrix import seaborn as sn import matplotlib.pyplot as plt from sklearn.metrics import classification_report print('Matrice de confusion :') cm = confusion_matrix(test_Y, rs_SVM.predict(test_X)) df = pd.DataFrame(cm, index = ["republican","democratic"], columns = ["republican", "democratic"]) plt.figure(figsize = (5,3.5)) sn.heatmap(df, annot=True, cmap="Blues") plt.show() print('Rapport de classification :') print(classification_report(test_Y, rs_SVM.predict(test_X))) # - # ### Accuracy evaluation # + from sklearn.metrics import accuracy_score import random from prettytable import PrettyTable t = PrettyTable(['Model', 'accuracy test (%)']) model=['Logistic Regression Classifier','SVM Classifier','Random Forest Classifier','MultinomialNB'] accu=[round(accuracy_score(test_Y,rs_regression_tfidf.predict(test_X))*100,2),round(accuracy_score(test_Y,rs_SVM.predict(test_X)) *100,2),round(accuracy_score(test_Y,rs_rf.predict(test_X)) *100,2),round(accuracy_score(test_Y,rs_MultinomialNB.predict(test_X)) *100,2)] for i in range(len(accu)): t.add_row([model[i],accu[i]]) print(t) # - # ## Predictions & analyses on election day # + db_dir = "./databases/" features = [['text'], ['id'], ['user','location'], ['user', 'screen_name']] column_names = ['_'.join(name) for name in features]+['label'] df = pd.DataFrame(columns = column_names) for 
filename in os.listdir(db_dir): if "election-day" in filename[:-3]: elections_day = pd.read_csv(db_dir+filename) elections_day.columns = column_names[:-1] # - elections_day.head() X = elections_day["text"] data=list(map(tweet_prep.preprocess, X)) X= pd.Series(data).astype(str).str.zfill(11) elections_day["predict_party"] = rs_SVM.predict(X) # ## Sentiment Analysis per party # ### From TP1 # + from scipy.sparse import csr_matrix import math def bigram(tokens): """ tokens: a list of strings """ bigrams = [] for words in zip(tokens[:-1],tokens[1:]): bigrams.append(" ".join(words)) # This function returns the list of bigrams return bigrams def trigram(tokens): """ tokens: a list of strings """ trigrams = [] for words in zip(tokens[:-2],tokens[1:-1], tokens[2:]): trigrams.append(" ".join(words)) # This function returns the list of trigrams return trigrams class TFIDFBoW(object): def __init__(self, pipeline, bigram=False, trigram=False): """ pipelineObj: instance of PreprocesingPipeline bigram: enable or disable bigram trigram: enable or disable trigram words: list of words in the vocabulary idf: list of idfs for each document """ self.pipeline = pipeline self.bigram = bigram self.trigram = trigram self.words = None self.idf = None def computeTFIDF(self, tokens): """ Calcule du TF-IDF, à partir d'un dictionnaire de mots et d'une liste de tweets. On suppose que l'on a déjà collecté le dictionnaire ainsi que calculé le vecteur contenant l'idf pour chaque document. Entrée: tokens, une liste de vecteurs contenant les tweets (une liste de liste) Return: une csr_matrix """ if self.words is None: raise Exception( "fit_transform() should be called first (no dictionnary available)" ) word_to_idx = {word:idx for idx,word in enumerate(self.words)} tf = np.zeros((len(tokens), len(self.words)),dtype=np.int8) for tweet_idx,tweet_tokens in enumerate(tokens): all_tokens = tweet_tokens.copy() if self.bigram == True : all_tokens+=bigram(tweet_tokens) if self.bigram == True : all_tokens+=trigram(tweet_tokens) for token in all_tokens: word_idx = word_to_idx.get(token, -1) if word_idx>=0: tf[tweet_idx,word_to_idx[token]] += 1 # puisque ce n'est pas specifié on utilise le np.log if self.idf is None: self.idf = np.log(tf.shape[0] / (tf!=0).sum(axis=0)) return np.multiply(tf, self.idf) def fit_transform(self, X): """ Cette méthode preprocess les données en utilisant la pipeline, ajoute les bigram et trigram si besoin, et transforme les textes en vecteurs de flottants avec la pondération TF-IDF. Entrée : X, une liste de vecteurs contenant les tweets Return: une csr_matrix """ toknized_tweets = list(map(self.pipeline.preprocess, X)) words_dictionnary = set() for tweet in toknized_tweets: for token in tweet: words_dictionnary.add(token) if self.bigram == True : for token in bigram(tweet): words_dictionnary.add(token) if self.trigram == True : for token in trigram(tweet): words_dictionnary.add(token) self.words = list(words_dictionnary) return self.computeTFIDF(toknized_tweets) def transform(self, X): """ Cette méthode preprocess les données en utilisant la pipeline, ajoute les bigram et trigram si besoin, et transforme les textes en vecteurs de flottants avec la pondération TF-IDF. Différence avec fit_transform : on suppose qu'on dispose déjà du dictionnaire et du calcul des idf ici. 
Entrée : X, une liste de vecteurs contenant les tweets Return: une csr_matrix """ if self.words is None: raise Exception( "fit_transform() should be called first (no dictionnary available)" ) toknized_tweets = list(map(self.pipeline.preprocess, X)) return self.computeTFIDF(toknized_tweets) # + from nltk.stem.snowball import SnowballStemmer import nltk class Stemmer(object): def __init__(self): self.stemmer = SnowballStemmer("english", ignore_stopwords=True) def stem(self, token): """ token: a string that contain a token """ # Have to return the stemmed token return list(map(self.stemmer.stem,token)) class NLTKTokenizer(object): """ This tokenizer uses the default function of nltk package (https://www.nltk.org/api/nltk.html) to tokenize the text. """ def tokenize(self, text): # Have to return a list of tokens tokens = nltk.tokenize.word_tokenize(text) return tokens class PreprocessingPipeline: def __init__(self, tokenization, twitterPreprocessing, stemming): """ tokenization: enable or disable tokenization. twitterPreprocessing: enable or disable twitter preprocessing. stemming: enable or disable stemming. """ self.tokenizer = NLTKTokenizer() if tokenization else SpaceTokenizer() self.twitterPreprocesser = TwitterPreprocessing( ) if twitterPreprocessing else None self.stemmer = Stemmer() if stemming else None def preprocess(self, tweet): """ Transform the raw data tokenization: boolean value. twitterPreprocessing: boolean value. Apply the stemming: boolean value. """ if self.twitterPreprocesser: tweet_processed = self.twitterPreprocesser.preprocess(tweet) else: tweet_processed = tweet tokens = self.tokenizer.tokenize(tweet_processed) if self.stemmer: tokens = self.stemmer.stem(tokens) return tokens # + import csv from sklearn.model_selection import train_test_split def load_dataset(path): x = [] y = [] with open(path, 'r', newline='', encoding="latin-1") as csvfile: reader = csv.reader(csvfile, delimiter=',') # Taking the header of the file + the index of useful columns: header = next(reader) ind_label = header.index('airline_sentiment') ind_text = header.index('text') for row in reader: x.append(row[ind_text]) label = row[ind_label] if label == "negative": y.append(0) elif label == "neutral": y.append(1) elif label == "positive": y.append(2) assert len(x) == len(y) return x, y # Path of the dataset path = "data/airline_tweets_database.csv" X_tp1, y_tp1 = load_dataset(path) # + from sklearn.linear_model import LogisticRegression # init configuration selected_conf = {"model":TFIDFBoW, "tokenize":True, "stemming": True, "preprocess":True, "bi":True} stemming = selected_conf.get("stemming",False) tw_prep = selected_conf.get("preprocess",False) tokenize = selected_conf.get("tokenize",False) bi = selected_conf.get("bi",False) tri = selected_conf.get("tri",False) # init preprocessing pipeline pipeline = PreprocessingPipeline(tokenization = tokenize, twitterPreprocessing = tw_prep, stemming = stemming) bowObj = selected_conf["model"](pipeline, bigram = bi, trigram = tri) training_rep = bowObj.fit_transform(X_tp1) # fit the classifier classifier = LogisticRegression(n_jobs=-1) classifier.fit(training_rep, y_tp1) # - elections_rep = bowObj.transform(elections_day.text.values) elections_day.head() elections_day["sentiment"] = classifier.predict_proba(elections_rep)[:,1] #drop nan elections_day = elections_day.dropna() # + from geotext import GeoText from uszipcode import SearchEngine search = SearchEngine(simple_zipcode=True) def get_state(x): cities = GeoText(x).cities if len(cities)==0: return 
"UNK" else: results = search.by_city(cities[0]) state = results[0].state if len(results) else "UNK" return state elections_day["normalized_loc"] = elections_day.user_location.apply(get_state,1) # - dataframe.columns # + dataframe = pd.DataFrame(pd.pivot_table(elections_day, values="sentiment",columns="predict_party", aggfunc=np.nanmean,index="normalized_loc").to_records()).set_index("normalized_loc") dataframe = dataframe.merge(elections_day.groupby("normalized_loc")["predict_party"].mean().to_frame(),right_index=True, left_index=True) dataframe.columns = ["Sentiment for Republicans", "Sentiment for Democrats", "Portion of Democratic tweets"] dataframe["Portion of Republican tweets"] = 1 - dataframe["Portion of Democratic tweets"] dataframe.iloc[:23].plot.bar(figsize=(20,6)) plt.title("Statistics about Elections across US States") plt.xlabel("States"); # - dataframe.iloc[23:].plot.bar(figsize=(20,6)) plt.title("Statistics about Elections across US States") plt.xlabel("States"); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Import Libraries for NLP from pyspark.sql import SparkSession spark = SparkSession.builder.appName(`nlp`).getOrCreate() # Import libraries for Tokenizer from pyspark.ml.feature import Tokenizer, RegexTokenizer from pyspark.sql.functions import col, udf from pyspark.sql.types import IntegerType senDF = spark.createDataFrame([ (0, "Hi I heard about Spark"), (1, "I wish Java could use case classes"), (2, "Logistic,Regression,models,are,neat") ], ["id", "Sentence"]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:projects] # language: python # name: conda-env-projects-py # --- # ### Get corresponding NTA demographics # Uses `all_rides_nta.csv` *(saved from `get_nta()`)* and `demographics.csv` *(from transit project folder)* all_rides = pd.read_csv('all_rides_nta.csv') # ride_type, interval, latitude, longitude, cluster, nta_code demographics = pd.read_csv('demographics.csv') # Join all_rides table with demographics table on nta_code all_rides = all_rides.merge(demographics, on='nta_code') # ### Get NTA codes for each ride # Results saved in shared file `all_rides_nta.csv` and used to get corresponding NTA demographics import pandas as pd all_rides = pd.read_csv('all_rides.csv') # ride_type, interval, latitude, longitude, cluster geographic = pd.read_csv('geographic.csv') # List of lists of tuples of polygon corners (x=longitude,y=latitude) for one NTA nta_names = geographic.columns nta_poly = [] for nta in nta_names: L = geographic[nta][~geographic[nta].isnull()] it = iter(L) nta_poly.append(list(zip(it,it))) from shapely.geometry import Point from shapely.geometry.polygon import Polygon # Find NTA that contains coordinate def get_nta(row): point = Point(row['longitude'],row['latitude']) for nta,poly in zip(nta_names,nta_poly): polygon = Polygon(poly) if polygon.contains(point): return nta # #!pip install pandarallel # Parallelize Pandas' apply() from pandarallel import pandarallel pandarallel.initialize() # %%time all_rides['nta_code'] = all_rides.parallel_apply(get_nta, axis=1) all_rides = all_rides.drop('Unnamed: 0', axis=1) all_rides.to_csv('all_rides_nta.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # 
format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="4fsDADOlVKmV" # #ML 101 # # You all have seen datasets. Sometimes they are small, but often at times, they are tremendously large in size. It becomes very challenging to process the datasets which are very large, at least significant enough to cause a processing bottleneck. # # So, what makes these datasets this large? Well, it's features. The more the number of features the larger the datasets will be. Well, not always. You will find datasets where the number of features is very much, but they do not contain that many instances. But that is not the point of discussion here. So, you might wonder with a commodity computer in hand how to process these type of datasets without beating the bush. # # Often, in a high dimensional dataset, there remain some entirely irrelevant, insignificant and unimportant features. It has been seen that the contribution of these types of features is often less towards predictive modeling as compared to the critical features. They may have zero contribution as well. These features cause a number of problems which in turn prevents the process of efficient predictive modeling - # # >Unnecessary resource allocation for these features.These features act as a noise for which the machine learning model can perform terribly poorly. The machine model takes more time to get trained. # # So, what's the solution here? The most economical solution is **Feature Selection**. # # Feature Selection is the process of selecting out the most significant features from a given dataset. In many of the cases, Feature Selection can enhance the performance of a machine learning model as well. # # ## Introduction to feature selection # # Feature selection is also known as Variable selection or Attribute selection. # # Essentially, it is the process of selecting the most important/relevant. Features of a dataset. # # ## Understanding the importance of feature selection # # The importance of feature selection can best be recognized when you are dealing with a dataset that contains a vast number of features. This type of dataset is often referred to as a high dimensional dataset. Now, with this high dimensionality, comes a lot of problems such as - this high dimensionality will significantly increase the training time of your machine learning model, it can make your model very complicated which in turn may lead to Overfitting. # # Often in a high dimensional feature set, there remain several features which are redundant meaning these features are nothing but extensions of the other essential features. These redundant features do not effectively contribute to the model training as well. So, clearly, there is a need to extract the most important and the most relevant features for a dataset in order to get the most effective predictive modeling performance. # # >"The objective of variable selection is three-fold: improving the prediction performance of the predictors, providing faster and more cost-effective predictors, and providing a better understanding of the underlying process that generated the data." # # Now let's understand the difference between dimensionality reduction and feature selection. # # Sometimes, feature selection is mistaken with **dimensionality reduction**. But they are different. Feature selection is different from dimensionality reduction. 
Both methods tend to reduce the number of attributes in the dataset, but a dimensionality reduction method does so by creating new combinations of attributes (sometimes known as feature transformation), whereas feature selection methods include and exclude attributes present in the data without changing them. # # Some examples of dimensionality reduction methods are Principal Component Analysis, Singular Value Decomposition, Linear Discriminant Analysis, etc. # # ## Filter methods # # The following image best describes filter-based feature selection methods: # # ![fs00](https://media.githubusercontent.com/media/mariolpantunes/ml101/main/figs/fs00.webp) # # Filter method relies on the general uniqueness of the data to be evaluated and pick feature subset, not including any mining algorithm. Filter method uses the exact assessment criterion which includes distance, information, dependency, and consistency. The filter method uses the principal criteria of ranking technique and uses the rank ordering method for variable selection. The reason for using the ranking method is simplicity, produce excellent and relevant features. The ranking method will filter out irrelevant features before classification process starts. # # Filter methods are generally used as a data preprocessing step. The selection of features is independent of any machine learning algorithm. Features give rank on the basis of statistical scores which tend to determine the features' correlation with the outcome variable. Correlation is a heavily contextual term, and it varies from work to work. You can refer to the following table for defining correlation coefficients for different types of data (in this case continuous and categorical). # # ## Wrapper methods # # Like filter methods, let me give you a same kind of info-graphic which will help you to understand wrapper methods better: # # ![fs01](https://media.githubusercontent.com/media/mariolpantunes/ml101/main/figs/fs01.webp) # # As you can see in the above image, a wrapper method needs one machine learning algorithm and uses its performance as evaluation criteria. This method searches for a feature which is best-suited for the machine learning algorithm and aims to improve the mining performance. To evaluate the features, the predictive accuracy used for classification tasks and goodness of cluster is evaluated using clustering. # # Some typical examples of wrapper methods are forward feature selection, backward feature elimination, recursive feature elimination, etc. # # - **Forward Selection**: The procedure starts with an empty set of features [reduced set]. The best of the original features is determined and added to the reduced set. At each subsequent iteration, the best of the remaining original attributes is added to the set. # # - **Backward Elimination**: The procedure starts with the full set of attributes. At each step, it removes the worst attribute remaining in the set. # # - **Combination of forward selection and backward elimination**: The stepwise forward selection and backward elimination methods can be combined so that, at each step, the procedure selects the best attribute and removes the worst from among the remaining attributes. # # - **Recursive Feature elimination**: Recursive feature elimination performs a greedy search to find the best performing feature subset. It iteratively creates models and determines the best or the worst performing feature at each iteration. 
It constructs the subsequent models with the left features until all the features are explored. It then ranks the features based on the order of their elimination. In the worst case, if a dataset contains $N$ number of features RFE will do a greedy search for $2^N$ combinations of features. # # ## Embedded methods # # Embedded methods are iterative in a sense that takes care of each iteration of the model training process and carefully extract those features which contribute the most to the training for a particular iteration. Regularization methods are the most commonly used embedded methods which penalize a feature given a coefficient threshold. # # This is why Regularization methods are also called penalization methods that introduce additional constraints into the optimization of a predictive algorithm (such as a regression algorithm) that bias the model toward lower complexity (fewer coefficients). # # Examples of regularization algorithms are the LASSO, Elastic Net, Ridge Regression, etc. # # ## Difference between filter and wrapper methods # # Well, it might get confusing at times to differentiate between filter methods and wrapper methods in terms of their functionalities. Let's take a look at what points they differ from each other. # # - Filter methods do not incorporate a machine learning model in order to determine if a feature is good or bad whereas wrapper methods use a machine learning model and train it the feature to decide if it is essential or not. # - Filter methods are much faster compared to wrapper methods as they do not involve training the models. On the other hand, wrapper methods are computationally costly, and in the case of massive datasets, wrapper methods are not the most effective feature selection method to consider. # - Filter methods may fail to find the best subset of features in situations when there is not enough data to model the statistical correlation of the features, but wrapper methods can always provide the best subset of features because of their exhaustive nature. # - Using features from wrapper methods in your final machine learning model can lead to overfitting as wrapper methods already train machine learning models with the features and it affects the true power of learning. But the features from filter methods will not lead to overfitting in most of the cases # # So far you have studied the importance of feature selection, understood its difference with dimensionality reduction. You also covered various types of feature selection methods. So far, so good! # # + id="6VuPeDYWVGwX" import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns # + id="JYyMhTz-eYXC" url = "https://media.githubusercontent.com/media/mariolpantunes/ml101/main/datasets/pima-indians-diabetes.data.csv" names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class'] dataframe = pd.read_csv(url, names=names) dataframe.head() # + id="ZT3lWCMRelar" array = dataframe.values X = array[:,0:8] Y = array[:,8] print(X) # + [markdown] id="on8VUd8oeqpm" # First, you will implement a Chi-Squared statistical test for non-negative features to select 4 of the best features from the dataset. You have already seen Chi-Squared test belongs the class of filter methods. If anyone's curious about knowing the internals of Chi-Squared, this [video](https://www.youtube.com/watch?v=VskmMgXmkMQ) does an excellent job. 
# # The scikit-learn library provides the **SelectKBest** class that can be used with a suite of different statistical tests to select a specific number of features, in this case, it is Chi-Squared. # + id="7hwPCgKdeom4" # Import the necessary libraries first from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 from sklearn.feature_selection import mutual_info_classif # Feature extraction test = SelectKBest(score_func=mutual_info_classif, k=4) fit = test.fit(X, Y) # Summarize scores np.set_printoptions(precision=8) print(fit.scores_) features = fit.transform(X) # Summarize selected features print(features[0:5,:]) # + [markdown] id="1_9GgqLWe8W0" # ## Interpretation: # # You can see the scores for each attribute and the 4 attributes chosen (those with the highest scores): plas, test, mass, and age. This scores will help you further in determining the best features for training your model. # # **P.S.**: The first row denotes the names of the features. For preprocessing of the dataset, the names have been numerically encoded. # + [markdown] id="uY3ZiEmBmJfQ" # The second filter method will be the Pearson correlation. # Here we will first plot the Pearson correlation heatmap and see the correlation of independent variables with the output variable MEDV. We will only select features which has correlation of above 0.5 (taking absolute value) with the output variable. # + id="YJk2EMyVmE3y" #Using Pearson Correlation plt.figure(figsize=(12,10)) cor = dataframe.corr() sns.heatmap(cor, annot=True, cmap=plt.cm.Reds) plt.show() # + id="0oKVmcIwmgdD" #Correlation with output variable cor_target = abs(cor["class"])#Selecting highly correlated features relevant_features = cor_target[cor_target>0.20] relevant_features # + [markdown] id="hueq6gyQfSq2" # Next, you will implement Recursive Feature Elimination which is a type of wrapper feature selection method. # # The Recursive Feature Elimination (or RFE) works by recursively removing attributes and building a model on those attributes that remain. # # It uses the model accuracy to identify which attributes (and combination of attributes) contribute the most to predicting the target attribute. # # You can learn more about the **RFE** class in the scikit-learn [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html#sklearn.feature_selection.RFE). # # You will use RFE with the Logistic Regression classifier to select the top 3 features. The choice of algorithm does not matter too much as long as it is skillful and consistent. # + id="psALnu9cfizG" # Import your necessary dependencies from sklearn.feature_selection import RFE from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier # Feature extraction #model = LogisticRegression(max_iter=1000) model = RandomForestClassifier(random_state=0) rfe = RFE(model, n_features_to_select=3) fit = rfe.fit(X, Y) print("Num Features: %s" % (fit.n_features_)) print("Selected Features: %s" % (fit.support_)) print("Feature Ranking: %s" % (fit.ranking_)) # + [markdown] id="mKveCXOqf7js" # You can see that RFE chose the top 3 features as preg, mass, and pedi. # # These are marked True in the support array and marked with a choice “1” in the ranking array. This, in turn, indicates the strength of these features. # + [markdown] id="OZVCt1yUgETC" # Next up you will use Ridge regression which is basically a regularization technique and an embedded feature selection techniques as well. 
# # This [article](https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/#three) gives you an excellent explanation on Ridge regression. Be sure to check it out. # + id="JXL5pBr0gDrY" # First things first from sklearn.linear_model import Ridge ridge = Ridge(alpha=1.0) ridge.fit(X, Y) # + [markdown] id="cA9G0_5agX3s" # In order to better understand the results of Ridge regression, you will implement a little helper function that will help you to print the results in a better so that you can interpret them easily. # + id="SdP0yWA5gYq9" # A helper method for pretty-printing the coefficients def pretty_print_coefs(coefs, names = None, sort = False): if names == None: names = ["X%s" % x for x in range(len(coefs))] lst = zip(coefs, names) if sort: lst = sorted(lst, key = lambda x:-np.abs(x[0])) return " + ".join("%s * %s" % (round(coef, 3), name) for coef, name in lst) print ("Ridge model:", pretty_print_coefs(ridge.coef_)) # + [markdown] id="C0AloSh_geli" # You can spot all the coefficient terms appended with the feature variables. It will again help you to choose the most essential features. Below are some points that you should keep in mind while applying Ridge regression: # - It is also known as L2-Regularization. # - For correlated features, it means that they tend to get similar coefficients. # - Feature having negative coefficients don't contribute that much. But in a more complex scenario where you are dealing with lots of features, then this score will definitely help you in the ultimate feature selection decision-making process. # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import panel as pn import numpy as np import pyvista as pv pn.extension('vtk') # Temporal function inpired from http://holoviews.org/user_guide/Live_Data.html # + alpha = 2 xvals = np.linspace(-4, 4,101) yvals = np.linspace(-4, 4,101) xs, ys = np.meshgrid(xvals, yvals) #temporal function to create data on a plane def time_function(time): return np.sin(((ys/alpha)**alpha+time)*xs) # 3d plane to support the data mesh_ref = pv.UniformGrid( (xvals.size, yvals.size, 1), #dims (xvals[1]-xvals[0],yvals[1]-yvals[0],1), #spacing (xvals.min(),yvals.min(),0) #origin ) mesh_ref.point_arrays.append(time_function(0).flatten(order='F'), 'scalars') #add data for time=0 pl_ref = pv.Plotter() pl_ref.add_mesh(mesh_ref, cmap='rainbow') pn.panel(pl_ref.ren_win) # - # We will demonstrate how to warp the surface and plot a temporal animation # + mesh_warped = mesh_ref.warp_by_scalar() # warp the mesh using data at time=0 #create the pyvista plotter pl = pv.Plotter() pl.add_mesh(mesh_warped, cmap='rainbow') #initialize panel and widgets camera = { 'position': [13.443258285522461, 12.239550590515137, 12.731934547424316], 'focalPoint': [0, 0, 0], 'viewUp': [-0.41067028045654297, -0.40083757042884827, 0.8189500570297241] } vtkpan = pn.panel(pl.ren_win, orientation_widget=True, sizing_mode='stretch_both', camera=camera) frame = pn.widgets.Player(value=0, start=0, end=50, interval=100, loop_policy="reflect") @pn.depends(frame=frame.param.value) def update_3d_warp(frame): #the player value range in between 0 and 50, howver we want time between 0 and 10 time = frame/5 data = time_function(time).flatten(order='F') mesh_ref.point_arrays.append(data, 'scalars') mesh_warped.point_arrays.append(data, 'scalars') mesh_warped.points = 
mesh_ref.warp_by_scalar(factor=0.5).points vtkpan.synchronize() pn.Column(frame, vtkpan, update_3d_warp, width=600, height=600).servable() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # SQL # Before you start, download the SQLite version of the [Chinook database](https://github.com/lerocha/chinook-database) from [GitHub](https://github.com/lerocha/chinook-database/raw/master/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite). # + import numpy as np import pandas as pd import sqlite3 # Open connection to database db_connection = sqlite3.connect('Chinook_Sqlite.sqlite') # - # ## Example # # Select the first 10 customers. pd.read_sql( '''SELECT * FROM customer LIMIT 10''', con=db_connection) # ## Exercise 1 # # Select the first name of all customers from the UK. pd.read_sql( '''SELECT FirstName FROM customer WHERE Country == \'United Kingdom\'''', con=db_connection) # ## Exercise 2 # # Select the city and country of all customers from the UK or Portugal. pd.read_sql( '''SELECT City, Country FROM customer WHERE Country == \'United Kingdom\' OR Country == \'Portugal\'''', con=db_connection) # ## Exercise 3 # # Select the first 10 invoices. pd.read_sql( '''SELECT * FROM invoice LIMIT 10''', con=db_connection) # ## Exercise 4 # # Join the tables `customer` and `invoice`, and retrieve customer ID and invoice amount. pd.read_sql( '''SELECT c.CustomerId, i.Total FROM customer AS c JOIN invoice AS i ON c.CustomerId == i.CustomerId''', con=db_connection) # Now compute the total of all invoices by customer. pd.read_sql( '''SELECT c.CustomerId, SUM(i.Total) FROM customer AS c JOIN invoice AS i ON c.CustomerId == i.CustomerId GROUP BY c.CustomerId''', con=db_connection) # Now aggregate only invoices from 2013. # # Hint: use the SQLite function `STRFTIME` on `InvoiceDate`. pd.read_sql( '''SELECT c.CustomerId, SUM(i.Total) FROM customer AS c JOIN invoice AS i ON c.CustomerId == i.CustomerId WHERE STRFTIME(\'%Y\', i.InvoiceDate) == \'2013\' GROUP BY c.CustomerId''', con=db_connection) # Now order by total amount in descending order. pd.read_sql( '''SELECT c.CustomerId, SUM(i.Total) AS total FROM customer AS c JOIN invoice AS i ON c.CustomerId == i.CustomerId WHERE STRFTIME(\'%Y\', i.InvoiceDate) == \'2013\' GROUP BY c.CustomerId ORDER BY total DESC''', con=db_connection) # Finally, add the first name of the support rep from table `employee`. 
pd.read_sql( '''SELECT c.CustomerId, e.FirstName, SUM(i.Total) AS total FROM customer AS c JOIN invoice AS i ON c.CustomerId == i.CustomerId JOIN employee AS e ON c.SupportRepId == e.EmployeeId WHERE STRFTIME(\'%Y\', i.InvoiceDate) == \'2013\' GROUP BY c.CustomerId ORDER BY total DESC''', con=db_connection) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # --- # VORGEHEN: # # 1) Module importieren # 2) Define Function # 3) Create Corpus # 4) Create Subcorpus (beide Geschlechter, nach 2000) # 5) Subcorpus als csv abspeichern # 6) csv mit scattertext importieren # 7) die beiden Achsen aufspannen auf Basis der Variable Geschlecht (m/f) # # (Wenn alle Spaltennamnen stimmen sollte es so klappen) # + ### Step 1) Import modules import spacy import textacy import scattertext as st import pandas as pd import plotnine as p9 # + ### Step 2) Define function def get_texts_from_csv(f_csv, text_column): """ Read dataset from a csv file and sequentially stream the rows, including metadata. """ # read dataframe df = pd.read_csv(f_csv) # keep only documents that have text filtered_df = df[df[text_column].notnull()] # iterate over rows in dataframe for idx, row in filtered_df.iterrows(): #read text and join lines (hard line-breaks) text = row[text_column].replace('\n', ' ') #use all columns as metadata, except the column with the actual text metadata = row.to_dict() del metadata[text_column] # return documents one after another (sequentially) yield (text, metadata) # + ### Step 3) Create Corpus # stream texts from a given folder f_csv = '../KED2022/materials/data/dataset_speeches_federal_council_2019.csv' texts = get_texts_from_csv(f_csv, text_column='text') # load german language model de = textacy.load_spacy_lang("de_core_news_sm") # create corpus from processed documents corpus_speeches_XY = textacy.Corpus(de, data=texts) # + ### Step 4) Create Subcorpus (both genders, DE, after the year 2000) ## subcor (filtering by meta attributes "language" and "after 2000") # function to filter by metadata def filter_func_1(doc): return doc._.meta.get("Jahr") > 2000 # create new corpus after applying filter function subcor = textacy.corpus.Corpus(de, data=corpus_speeches_XY.get(filter_func_1)) # + ### Step 5) Export corpus as csv dataset # merge metadata and actual content for each document in the corpus # ugly, verbose syntax to merge two dictionaries data = [{**doc._.meta, **{'text': doc.text}} for doc in subcor] # export corpus as csv f_csv = '../KED2022/materials/data/dataset_speeches.csv' textacy.io.csv.write_csv(data, f_csv, fieldnames=data[0].keys()) # csv format is the best to load in scattertext data[0] # + ### Step 6) Import csv to use in scattertext: load file # read dataset from csv file f_csv = '../KED2022/materials/data/dataset_speeches.csv' df = pd.read_csv(f_csv) # filter out non-german texts or very short texts df_sub = df[(df['Sprache'] == 'de') & (df['text'].str.len() > 10)] # make new column containing all relevant metadata (showing in plot later on) df_sub['descripton'] = df_sub[['Redner', 'Partei', 'Jahr']].astype(str).agg(', '.join, axis=1) # sneak peek of dataset df_sub.head() # + ### Step 7) create scattertext plot, axes basing on the variable "gender" censor_tags = set(['CARD']) # tags to ignore in corpus, e.g. 
numbers # stop words to ignore in corpus de_stopwords = spacy.lang.de.stop_words.STOP_WORDS # default stop words custom_stopwords = set(['[', ']', '%', '*', '•', '2.', '19.', '21.', '9.', '3.']) de_stopwords = de_stopwords.union(custom_stopwords) # extend with custom stop words # create corpus from dataframe # lowercased terms, no stopwords, no numbers # use lemmas for English only, German quality is too bad corpus_speeches = st.CorpusFromPandas(df_sub, # dataset category_col='Geschlecht', # index differences by ... text_col='text', nlp=de, # German model feats_from_spacy_doc=st.FeatsFromSpacyDoc(tag_types_to_censor=censor_tags, use_lemmas=False), ).build().get_stoplisted_unigram_corpus(de_stopwords) # produce visualization (interactive html) html = st.produce_scattertext_explorer(corpus_speeches, category='m', # set attribute to divide corpus into two parts category_name='male', not_category_name='female', metadata=df_sub['descripton'], width_in_pixels=1000, minimum_term_frequency=5, # drop terms occurring less than 5 times save_svg_button=True, ) # write visualization to html file fname = '..KED2022/materials/data/gender_differences_final.html' open(fname, 'wb').write(html.encode('utf-8')) # + vscode={"languageId": "plaintext"} About the scattertext - Explanation for Orientation: Terms in upper right = frequently used by male and female speakers alike Terms in lower right = often used by female speakers Terms in upper left = often used by male speakers Terms in lower left = infrequently used by male and female speakers alike ### What can we see? Top 3 terms in male speeches: Geschichte, Zusammenhalt, Tessin Top 3 terms in female speeches: gemeinsam, Grenzen, brauchen Top 3 terms in both genders: Menschen, Land, Schweiz ### Important to keep in mind Document count total: 96 male document count: 67 female document count: 29 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten, Activation from keras.layers import Conv2D, MaxPooling2D, ZeroPadding2D from keras import optimizers from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import EarlyStopping, CSVLogger from keras import backend as K import matplotlib.pyplot as plt import numpy as np img_width = 150 img_height = 150 train_data_dir = 'data/cat_or_dog/train' valid_data_dir = 'data/cat_or_dog/validation' datagen = ImageDataGenerator(rescale = 1./255) train_generator = datagen.flow_from_directory(directory=train_data_dir, target_size=(img_width,img_height), classes=['dogs','cats'], class_mode='binary', batch_size=16) validation_generator = datagen.flow_from_directory(directory=valid_data_dir, target_size=(img_width,img_height), classes=['dogs','cats'], class_mode='binary', batch_size=16) model =Sequential() model.add(Conv2D(32,(3,3), input_shape=(img_width, img_height, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Conv2D(32,(3,3), input_shape=(img_width, img_height, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Conv2D(64,(3,3), input_shape=(img_width, img_height, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Flatten()) model.add(Dense(64)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(1)) 
model.add(Activation('sigmoid')) model.summary() model.compile(loss='binary_crossentropy',optimizer='rmsprop',metrics=['accuracy']) es = EarlyStopping(monitor='val_loss', patience=2) csv_logger = CSVLogger('training.log') training = model.fit_generator(generator=train_generator, steps_per_epoch=2048 // 16, epochs=20, validation_data=validation_generator, validation_steps=832//16, callbacks=[es, csv_logger]) model.save('models/dog_cat_CNN.h5') # - #正答率 plt.plot(training.history['acc']) plt.plot(training.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() #loss plt.plot(training.history['loss']) plt.plot(training.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.show() # + from keras.models import load_model #from keras.preprocessing import image import cv2 from PIL import Image import numpy as np import matplotlib.pyplot as plt #filepath = 'data/cat_or_dog/validation/cats/cat.1000.jpg' #filepath = 'data/cat_or_dog/validation/dogs/dog.1002.jpg' filepath = 'data/cat_or_dog/others/002.jpg' model = load_model('models/dog_cat_CNN.h5') image=cv2.imread(filepath) b,g,r = cv2.split(image) x = cv2.merge([r,g,b]) x = cv2.resize(x,(150, 150)) x = np.array([x / 255.]) #x = image.load_img(filepath, target_size=(150, 150)) #x = image.img_to_array(x) #x = np.array([x / 255.]) #x = np.array(Image.open(filepath).resize((150, 150))) #x = np.array([x / 255.]) plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) plt.show() result = model.predict(x) cat_per = result[0][0] * 100 dog_per = 100 - cat_per print('Cat: ' + str(round(cat_per, 2)), '[%]') print('Dog: ' + str(round(dog_per, 2)), '[%]') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Introduction to the Interstellar Medium # ### # ### Figure 7.5: CO and 13CO spectrum toward a star-forming clump in the Rosette molecular cloud # #### from observations taken with the IRAM 30m telescope by the author in 2002 import numpy as np import matplotlib.pyplot as plt from scipy import integrate # %matplotlib inline # + fig = plt.figure(figsize=(6,4)) ax1 = fig.add_subplot(111) ax1.set_xlabel(r"$v$ (km/s)", fontsize=16) ax1.set_ylabel(r"$T_{\rm B}$ (K)", fontsize=16) v, T1, T2 = np.loadtxt('rosette_CO_spectrum.txt', unpack=True) vmin, vmax = 5, 21 ax1.set_xlim(vmin, vmax) ax1.set_ylim(-4, 27) show = (v > vmin+0.75) & (v < vmax-0.75) ax1.plot(v[show], T1[show], color='k', lw=2, linestyle='solid', drawstyle='steps', label='CO') ax1.plot(v[show], T2[show], color='k', lw=2, linestyle='dotted', drawstyle='steps', label=r'$^{13}$CO') # get integrated intensity int = (v > 11) & (v < 16) T1_int = integrate.simps(T1[int],v[int]) T2_int = integrate.simps(T2[int],v[int]) print("Integrated CO emission = {0:5.2f} K km/s".format(T1_int)) print("Integrated 13CO emission = {0:5.2f} K km/s".format(T2_int)) ax1.legend() fig.tight_layout() plt.savefig('rosette_CO_spectrum.pdf') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import numpy as np from cvnn.layers import Convolutional from pdb import set_trace import sys from scipy 
import signal from scipy import linalg # Both TF and NP results calculate fft the same way # + aaa = np.linspace(1.0, 10000.0, 10000) x = aaa + 1j * aaa x_tensor = tf.convert_to_tensor(x) tf_fft = tf.signal.fft(x_tensor) np_fft = np.fft.fft(x) print(tf_fft.dtype) print(np.all(tf_fft.numpy() == np_fft)) # Results are not exactly the same (but fair enough) print(tf_fft.numpy()[:10]) print(np_fft[:10]) print(tf_fft.numpy() == np_fft) print((tf_fft.numpy() - np_fft)[1]) # - # ## Testing on 1D # + b = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] c = [1, 0, 1] b_pad = tf.cast(tf.pad(b, tf.constant([[0, 2]])), tf.complex64) # Full padding I = tf.signal.fft(tf.cast(b_pad, tf.complex64)) paddings = tf.constant([[0, 9]]) c_pad = tf.cast(tf.pad(c, paddings), tf.complex64) C = tf.signal.fft(c_pad) F = tf.math.multiply(I, C) f = tf.signal.ifft(F) f_real = tf.cast(f, tf.int32) # print("std_out: " + str(std_out)) print("f_real: \t" + str(f_real.numpy())) print("convolve:\t" + str(np.convolve(b, c))) manual_conv = [] for i in range(len(b)-len(c)+1): manual_conv.append(np.sum(tf.math.multiply(c, b[i:i+3]).numpy())) print("Manual nn conv: " + str(manual_conv)) c.reverse() manual_conv = [] for i in range(len(b)-len(c)+1): manual_conv.append(np.sum(tf.math.multiply(c, b[i:i+3]).numpy())) print("Manual fft conv:" + str(manual_conv)) # - # ## Testing on 2D # + np.set_printoptions(suppress=True) img2 = np.array([ [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0] ]).astype(np.float64) k = np.array([ [1., 0., -1.], [1., 0., -1.], [1., 0., -1.] ]).astype(np.float64) mode = 'full' conv = Convolutional(1, (3, 3), (6, 6, 1), padding=2, input_dtype=np.float32) conv.kernels = [] conv.kernels.append(tf.reshape(tf.cast(tf.Variable(k, name="kernel" + str(0) + "_f" + str(0)), dtype=np.float32), (3, 3, 1))) std_out = conv([img2])[..., 0] print("manual_conv: \n" + str(std_out.numpy())) img_tf = tf.constant(tf.reshape(img2, (1, 6, 6, 1)), dtype=tf.float64) k_tf = tf.constant(tf.reshape(k, (3, 3, 1, 1)), dtype=tf.float64) conv_tf = tf.nn.conv2d(img_tf, k_tf, strides=[1, 1], padding="SAME")[0, ..., 0] print("tf_nn_conv2d: \n" + str(np.around(conv_tf.numpy()))) # set_trace() img2_pad = tf.pad(img2.astype(np.float64), tf.constant([[0, 2], [0, 2]])) k_pad = tf.pad(k, tf.constant([[0, 5], [0, 5]])) I = tf.signal.fft2d(tf.cast(img2_pad, tf.complex128)) print(I) K = tf.signal.fft2d(tf.cast(k_pad, tf.complex128)) F = tf.math.multiply(I, K) f = tf.signal.ifft2d(F) f_real = tf.cast(f, tf.int32) print("manual_fft_conv: " + str(f_real)) np_fft_conv = np.array(signal.fftconvolve(img2, k, mode=mode) , np.int32) print("sp_fft_conv_" + mode + ":\n" + str(np_fft_conv)) np_conv = np.array(signal.convolve2d(img2 , k, mode), np.int32) print("sp_conv2d" + mode + ":\n" + str(np_conv)) # Check numpy implementation I = np.fft.fft2(img2_pad) K = np.fft.fft2(tf.pad(k, tf.constant([[0, 5], [0, 5]]))) F = np.multiply(I, K) f = np.fft.ifft2(F) print("np_fft_conv: \n" + str(np.round(f.astype(np.float32)))) # - # There are 2 results here and they are: # # - $(x*v)(n) = \sum x(m) \, v(m)$ # - $(x*v)(n) = \sum x(m) \, v(n-m)$ # ## StackOverflow Example # # https://stackoverflow.com/questions/40703751/using-fourier-transforms-to-do-convolution # + x = np.array([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 3, 0], [0, 0, 0, 1]]) y = np.array([[4, 5], [3, 4]]) print("conv:\n", signal.convolve2d(x, y, 'full')) s1 = np.array(x.shape) s2 = np.array(y.shape) size = s1 + s2 - 1 # Full padding size = (5, 5) 
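# A 'full' 2D convolution of the 4x4 input with the 2x2 kernel has shape s1 + s2 - 1 = (5, 5).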
fsize = 2 ** np.ceil(np.log2(size)).astype(int) # I do this to have a 2^n size to make fft faster # fsize = (8, 8) fslice = tuple([slice(0, int(sz)) for sz in size]) # slice to get the values later ([0:5], [0:5]) new_x = np.fft.fft2(x, fsize) new_y = np.fft.fft2(y, fsize) result = np.fft.ifft2(new_x*new_y)[fslice].copy() print("manual fft method:\n", np.array(result.real, np.int32)) print("fft:\n" , np.array(signal.fftconvolve(x, y), np.int32)) # - # ## Complex Conv # + img2 = np.array([ [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0], [10, 10, 10, 0, 0, 0] ]).astype(np.float64) k = np.array([ [1., 0., -1.], [1., 0., -1.], [1., 0., -1.] ]).astype(np.float64) mode = 'full' c_img = tf.Variable(tf.complex(img2, img2)) c_k = tf.Variable(tf.complex(k, np.zeros(k.shape))) conv_tf = tf.nn.conv2d(c_img, c_k, strides=[1, 1], padding="SAME")[0, ..., 0] print(conv_tf) # - # https://stackoverflow.com/questions/47577458/complex-convolution-in-tensorflow # ## Tensorflow fftconv2d # + import tensorflow as tf def _centered(arr, newshape): # Return the center newshape portion of the array. currshape = tf.shape(arr)[-2:] startind = (currshape - newshape) // 2 endind = startind + newshape return arr[..., startind[0]:endind[0], startind[1]:endind[1]] def fftconv(in1, in2, mode="full"): mode = mode.lower() # Reorder channels to come second (needed for fft) in1 = tf.transpose(in1, perm=[0, 3, 1, 2]) in2 = tf.transpose(in2, perm=[0, 3, 1, 2]) # Extract shapes s1 = tf.convert_to_tensor(tf.shape(in1)[-2:]) s2 = tf.convert_to_tensor(tf.shape(in2)[-2:]) shape = s1 + s2 - 1 # Compute convolution in fourier space sp1 = tf.spectral.rfft2d(in1, shape) sp2 = tf.spectral.rfft2d(in2, shape) ret = tf.spectral.irfft2d(sp1 * sp2, shape) # Crop according to mode if mode == "full": cropped = ret elif mode == "same": cropped = _centered(ret, s1) elif mode == "valid": cropped = _centered(ret, s1 - s2 + 1) else: raise ValueError("Acceptable mode flags are 'valid'," " 'same', or 'full'.") # Reorder channels to last result = tf.transpose(cropped, perm=[0, 2, 3, 1]) return result # - onv_tf = fftconv(img_tf, k_tf, mode="SAME")[0, ..., 0] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ライブドアニュースコーパスのダウンロード # # Google Driveにミラーを用意しています。オリジナルは[https://www.rondhuit.com/download.html](https://www.rondhuit.com/download.html)にあります。 # 論文などで引用する際は、リンク先を参照ください。 # + from google_drive_downloader import GoogleDriveDownloader as gdd tgz_fname = "ldcc-20140209.tar.gz" # - # Google driverからダウンロード gdd.download_file_from_google_drive(file_id="1b-llzNQdmKIp0FYMwzGOKmXdQUNpNXC8", dest_path="./ldcc-20140209.tar.gz", unzip=False) # + # 株式会社ロンウィットからダウンロード #import urllib.request #tgz_url = "https://www.rondhuit.com/download/ldcc-20140209.tar.gz" #urllib.request.urlretrieve(tgz_url, "ldcc-20140209.tar.gz") # - # # 分類用データへの加工 # # * tar.gzファイルから2つのジャンルを対象にする # * text/GENRE/GENRE-#######.txt # * 3行目がタイトルなのでこれを分類対象とする # * 文章の前後についている【】を削除 # * tsvファイルとして出力 # * フィールド構造 # 1. (未使用) # 2. クラス (0/1) # 3. (未使用) # 4. 
テキスト # + import tarfile import csv import re target_genre = ["it-life-hack", "kaden-channel"] zero_fnames = [] one_fnames = [] tsv_fname = "all.tsv" brackets_tail = re.compile('【[^】]*】$') brackets_head = re.compile('^【[^】]*】') def remove_brackets(inp): output = re.sub(brackets_head, '', re.sub(brackets_tail, '', inp)) return output def read_title(f): # 2行スキップ next(f) next(f) title = next(f) # 3行目を返す title = remove_brackets(title.decode('utf-8')) return title[:-1] with tarfile.open(tgz_fname) as tf: # 対象ファイルの選定 for ti in tf: # ライセンスファイルはスキップ if "LICENSE.txt" in ti.name: continue if target_genre[0] in ti.name and ti.name.endswith(".txt"): zero_fnames.append(ti.name) continue if target_genre[1] in ti.name and ti.name.endswith(".txt"): one_fnames.append(ti.name) with open(tsv_fname, "w") as wf: writer = csv.writer(wf, delimiter='\t') # ラベル 0 for name in zero_fnames: f = tf.extractfile(name) title = read_title(f) row = [target_genre[0], 0, '', title] writer.writerow(row) # ラベル 1 for name in one_fnames: f = tf.extractfile(name) title = read_title(f) row = [target_genre[1], 1, '', title] writer.writerow(row) # - # # データのシャッフルと分割 # # * train/dev/test : 8:1:1 で分割 # + import random random.seed(100) with open("all.tsv", 'r') as f, open("rand-all.tsv", "w") as wf: lines = f.readlines() random.shuffle(lines) for line in lines: wf.write(line) # + random.seed(101) train_fname, dev_fname, test_fname = ["train.tsv", "dev.tsv", "test.tsv"] with open("rand-all.tsv") as f, open(train_fname, "w") as tf, open(dev_fname, "w") as df, open(test_fname, "w") as ef: ef.write("class\tsentence\n") for line in f: v = random.randint(0, 9) if v == 8: df.write(line) elif v == 9: row = line.split('\t') ef.write("\t".join([row[1], row[3]])) else: tf.write(line) # - # # BERTのセットアップ # # * git repoの用意 # * Multilingualモデルの用意 # + import subprocess import os if not os.path.exists("bert"): subprocess.call("git clone https://github.com/google-research/bert".split()) # - import urllib.request ml_url = "https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip" urllib.request.urlretrieve(ml_url, "multilingual_L-12_H-768_A-12.zip") # ! 
unzip multilingual_L-12_H-768_A-12.zip # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import pandas as pd import matplotlib.pyplot as plt dfs = [] dfs.append(pd.read_csv("../cache/results/ach0.csv")) dfs.append(pd.read_csv("../cache/results/ach2.csv")) dfs.append(pd.read_csv("../cache/results/ach4.csv")) dfs.append(pd.read_csv("../cache/results/ach6.csv")) dfs.append(pd.read_csv("../cache/results/novent.csv")) temperature = [] for i in range(5): avg_temp= dfs[i]['MAINXGROUND:ZONE1:Zone Operative Temperature [C](Hourly:ON)'] avg_temp+=dfs[i]['MAINXGROUND:ZONE5:Zone Operative Temperature [C](Hourly:ON)'] avg_temp+=dfs[i]['MAINXGROUND:ZONE4:Zone Operative Temperature [C](Hourly:ON)'] avg_temp+=dfs[i]['MAINXGROUND:ZONE6:Zone Operative Temperature [C](Hourly:ON)'] avg_temp+=dfs[i]['MAINXGROUND:ZONE10:Zone Operative Temperature [C](Hourly:ON)'] avg_temp+=dfs[i]['MAINXGROUND:ZONE3:Zone Operative Temperature [C](Hourly:ON)'] avg_temp+=dfs[i]['MAINXGROUND:ZONE2:Zone Operative Temperature [C](Hourly:ON)'] avg_temp+=dfs[i]['MAINXGROUND:ZONE7:Zone Operative Temperature [C](Hourly:ON)'] avg_temp+=dfs[i]['MAINXGROUND:ZONE8:Zone Operative Temperature [C](Hourly:ON)'] temperature.append(avg_temp/9.0) plt.figure(figsize=(15, 5), dpi=90) plt.plot(temperature[0][2500:7500],color="r",label="ach0") plt.plot(temperature[1][2500:7500],color="g",label="ach2") plt.plot(temperature[2][2500:7500],color="b",label="ach4") plt.plot(temperature[3][2500:7500],color="c",label="ach6") plt.plot(temperature[4][2500:7500],color="m",label="ach6") plt.title("AVG Operative Temperature change due to ACH value") plt.xlabel("Time [h]") plt.ylabel("AVG Operative Temperature [deg]") plt.legend() plt.grid() plt.savefig("./figures/temp_change_ach.png") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # IPython: Beyond Normal Python # There are many options for development environments for Python, and I'm often asked which one I use in my own work. # My answer sometimes surprises people: my preferred environment is [IPython](http://ipython.org/) plus a text editor (in my case, Emacs or Atom depending on my mood). # IPython (short for *Interactive Python*) was started in 2001 by as an enhanced Python interpreter, and has since grown into a project aiming to provide, in Perez's words, "Tools for the entire life cycle of research computing." # If Python is the engine of our data science task, you might think of IPython as the interactive control panel. # # As well as being a useful interactive interface to Python, IPython also provides a number of useful syntactic additions to the language; we'll cover the most useful of these additions here. # In addition, IPython is closely tied with the [Jupyter project](http://jupyter.org), which provides a browser-based notebook that is useful for development, collaboration, sharing, and even publication of data science results. # The IPython notebook is actually a special case of the broader Jupyter notebook structure, which encompasses notebooks for Julia, R, and other programming languages. 
# As an example of the usefulness of the notebook format, look no further than the page you are reading: the entire manuscript for this book was composed as a set of IPython notebooks. # # IPython is about using Python effectively for interactive scientific and data-intensive computing. # This chapter will start by stepping through some of the IPython features that are useful to the practice of data science, focusing especially on the syntax it offers beyond the standard features of Python. # Next, we will go into a bit more depth on some of the more useful "magic commands" that can speed-up common tasks in creating and using data science code. # Finally, we will touch on some of the features of the notebook that make it useful in understanding data and sharing results. # ## Shell or Notebook? # # There are two primary means of using IPython that we'll discuss in this section: the IPython shell and the IPython notebook. # The bulk of the material in this section is relevant to both, and the examples will switch between them depending on what is most convenient. # In the few sections that are relevant to just one or the other, we will explicitly state that fact. # Before we start, some words on how to launch the IPython shell and IPython notebook. # ### Launching the IPython Shell # # This section, like most of this book, is not designed to be absorbed passively. # I recommend that as you read through it, you follow along and experiment with the tools and syntax we cover: the muscle-memory you build through doing this will be far more useful than the simple act of reading about it. # Start by launching the IPython interpreter by typing **``ipython``** on the command-line; alternatively, if you've installed a distribution like Anaconda or EPD, there may be a launcher specific to your system (we'll discuss this more fully in [Help and Documentation in IPython](01.01-Help-And-Documentation.ipynb)). # # Once you do this, you should see a prompt like the following: # ``` # IPython 4.0.1 -- An enhanced Interactive Python. # ? -> Introduction and overview of IPython's features. # # %quickref -> Quick reference. # help -> Python's own help system. # object? -> Details about 'object', use 'object??' for extra details. # In [1]: # ``` # With that, you're ready to follow along. # ### Launching the Jupyter Notebook # # The Jupyter notebook is a browser-based graphical interface to the IPython shell, and builds on it a rich set of dynamic display capabilities. # As well as executing Python/IPython statements, the notebook allows the user to include formatted text, static and dynamic visualizations, mathematical equations, JavaScript widgets, and much more. # Furthermore, these documents can be saved in a way that lets other people open them and execute the code on their own systems. # # Though the IPython notebook is viewed and edited through your web browser window, it must connect to a running Python process in order to execute code. # This process (known as a "kernel") can be started by running the following command in your system shell: # # ``` # $ jupyter notebook # ``` # # This command will launch a local web server that will be visible to your browser. 
# It immediately spits out a log showing what it is doing; that log will look something like this: # # ``` # $ jupyter notebook # [NotebookApp] Serving notebooks from local directory: /Users/jakevdp/PythonDataScienceHandbook # [NotebookApp] 0 active kernels # [NotebookApp] The IPython Notebook is running at: http://localhost:8888/ # [NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). # ``` # # Upon issuing the command, your default browser should automatically open and navigate to the listed local URL; # the exact address will depend on your system. # If the browser does not open automatically, you can open a window and manually open this address (*http://localhost:8888/* in this example). # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 with Spark # language: python3 # name: python36 # --- # # Querying IBM Db2 Event Store # IBM Db2 Event Store is a hybrid transactional/analytical processing (HTAP) system. This notebook will demonstrate the best practices for querying a table stored in IBM Db2 Event Store. # # ***Pre-Req: Event_Store_Table_Creation*** # ## Connect to IBM Db2 Event Store # # Edit the values in the next cell # + CONNECTION_ENDPOINT="" EVENT_USER_ID="" EVENT_PASSWORD="" # Port will be 1100 for version 1.1.2 or later (5555 for version 1.1.1) PORT = "30370" DEPLOYMENT_ID="" # Database name DB_NAME = "EVENTDB" # Table name TABLE_NAME = "IOT_TEMPERATURE" HOSTNAME="" DEPLOYMENT_SPACE="" # - # ## Import Python modules # + ## Note: Only run this cell if your IBM Db2 Event Store is installed with IBM Cloud Pak for Data (CP4D) # In IBM Cloud Pak for Data, we need to create link to ensure Event Store Python library is # properly exposed to the Spark runtime. from pathlib import Path src = '/home/spark/user_home/eventstore/eventstore' dst = '/home/spark/shared/user-libs/python3.6/eventstore' is_symlink = Path(dst).is_symlink() if is_symlink == False : os.symlink(src, dst) print("Creating symlink to include Event Store Python library...") else: print("Symlink already exists, not creating..") # - bearerToken=!echo `curl --silent -k -X GET https://{HOSTNAME}:443/v1/preauth/validateAuth -u {EVENT_USER_ID}:{EVENT_PASSWORD} |python -c "import sys, json; print(json.load(sys.stdin)['accessToken'])"` bearerToken=bearerToken[0] keystorePassword=!echo `curl -k --silent GET -H "authorization: Bearer {bearerToken}" "https://{HOSTNAME}:443/icp4data-databases/{DEPLOYMENT_ID}/zen/com/ibm/event/api/v1/oltp/keystore_password"` keystorePassword from eventstore.common import ConfigurationReader from eventstore.oltp import EventContext from eventstore.sql import EventSession from pyspark.sql import SparkSession # ## Connect to Event Store ConfigurationReader.setConnectionEndpoints(CONNECTION_ENDPOINT) ConfigurationReader.setEventUser(EVENT_USER_ID) ConfigurationReader.setEventPassword(_PASSWORD) ConfigurationReader.setSslKeyAndTrustStorePasswords(keystorePassword[0]) ConfigurationReader.setDeploymentID(DEPLOYMENT_ID) ConfigurationReader.getSslTrustStorePassword() # ## Open the database # # The cells in this section are used to open the database and create a temporary view for the table that we created previously. # To run Spark SQL queries, you first have to set up a Db2 Event Store Spark session. The EventSession class extends the optimizer of the SparkSession class. 
sparkSession = SparkSession.builder.appName("EventStore SQL in Python").getOrCreate() eventSession = EventSession(sparkSession.sparkContext, DB_NAME) # The next cell opens the database to allow operations against it to be executed. eventSession.open_database() # With the following cells, we can list all existing tables and then load the table we previously created into the tab DataFrame reference. Note that we are defining the `tab` DataFrame reference that will be used later on in this notebook to create a temporary view. # + with EventContext.get_event_context(DB_NAME) as ctx: print("Event context successfully retrieved.") print("Table names:") table_names = ctx.get_names_of_tables() for name in table_names: print(name) # - tab = eventSession.load_event_table(TABLE_NAME) # Let's recall the table schema we previously created. try: resolved_table_schema = ctx.get_table(TABLE_NAME) print(resolved_table_schema) except Exception as err: print("Table not found") # ## Best Practices for efficient queries # In the next cell we create a lazily evaluated "view" that we can then use like a hive table in Spark SQL, but this is only evaluated when we actually run or cache query results. We are calling this view "readings" and that is how we will refer to it in the queries below: tab.createOrReplaceTempView("readings") query = "SELECT count(*) FROM readings" print("{}\nRunning query in Event Store...".format(query)) df_data = eventSession.sql(query) df_data.toPandas() # Let's have a look at the record structure query = "SELECT * FROM readings LIMIT 1" print("{}\nRunning query in Event Store...".format(query)) df_data = eventSession.sql(query) df_data.toPandas() query = "SELECT MIN(ts), MAX(ts) FROM readings" print("{}\nRunning query in Event Store...".format(query)) df_data = eventSession.sql(query) df_data.toPandas() # ## Optimal query through the index # # - Index queries will significantly reduce amount of data that needs to be scanned for results. # - Indexes in IBM Db2 Event Store are formed asynchronously to avoid insert latency. # - They are stored as a Log Structured Merge (LSM) Tree. # - The index is formed by "runs", which include sequences of sorted keys. These runs are written to disk during “Share” processing. # - These index runs are merged together over time to improve scan and I/O efficiency. # # - For an optimal query performance you must specify equality on all the equal_columns in the index and a range on the sort column in the index. # For example, in the following query we are retrieving all the values in the range of dates for a specific device and sensor, where both the `deviceID` and `sensorID` are in the equal_columns definition for the index schema and the `ts` column is the sort column for the index. # Then the following cell runs the query and caches the results. Note that this caching is for demostration purposes, to show the time it takes to run the query and cache the results in memory within Spark. This caching is recommended when you are going to do additional processing on this cached data, as the query against IBM Db2 Event Store is only run once. 
%%time index_query = "SELECT ts, temperature FROM readings where deviceID=1 and sensorID=12 and ts >1541021271619 and ts < 1541043671128 order by ts" print("{}\nCreating a dataframe for the query ...".format(index_query)) df_index_query = eventSession.sql(index_query) df_index_query.cache() # Finally the results from the cached data can be visualized: df_index_query.toPandas() # ## Sub-optimal query # # This next query shows a sub-optimal query that only specifies equality in one of the equal_columns in the index schema, and for this reason ends up doing a full scan of the table. %%time fullscan_query = "SELECT count(*) FROM readings where sensorID = 7" print("{}\nCreating a dataframe for the query...".format(fullscan_query)) df_fullscan_query = eventSession.sql(fullscan_query) df_fullscan_query.cache() df_fullscan_query.toPandas() # ## Accessing multiple sensorIDs optimally # # The easiest way to write a query that needs to retrieve multiple values in the equal_columns in the index schema is by using an *In-List*. With this, you can get optimal index access across multiple sensorID's. # # In this example we specify equality for a specific deviceID, and an In-List for the four sensors we are trying to retrieve. To limit the number of records we are returning we also include a range of timestamps. %%time inlist_query = "SELECT deviceID, sensorID, ts FROM readings where deviceID=1 and sensorID in (1, 5, 7, 12) and ts >1541021271619 and ts < 1541043671128 order by ts" print("{}\nCreating a dataframe for the query...".format(inlist_query)) df_inlist_query = eventSession.sql(inlist_query) df_inlist_query.cache() df_inlist_query.toPandas() # ## Exploiting the synopsis table: Advanced medium weight queries # # - Event Store tables include a synopsis table which summarizes the minimum/maximum values of the data for each range of rows in the synopsis table. # # - It contains one range for every 1000 rows. # - It is stored in a separate internal table in the shared storage layer. # - It is parquet compressed to minimize footprint. # - For highly selective queries, it can improve performance by up to 1000x. # # - Using an equality or a range predicate on a clustered field (e.g. the 'ts' column in our case because the data is inserted into the table in order) is faster than doing a full scan as it should be able to exploit the synopsis table, but this will be slower than an optimal index scan. %%time synopsis_query = "SELECT deviceID, sensorID, ts FROM readings where ts >1541021271619 and ts < 1541043671128 order by ts" print("{}\nCreating a dataframe for the query...".format(synopsis_query)) df_synopsis_query = eventSession.sql(synopsis_query) df_synopsis_query.cache() df_synopsis_query.toPandas() # ## Summary # This demo introduced you to the best practices querying the table stored in IBM Db2 Event Store database. # # ## Next Step # `"Event_Store_Data_Analytics.ipynb"` will show you how to perform data analytics with IBM Db2 Event Store with multiple scientific tools. #
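# Optional cleanup (a sketch beyond the original walkthrough): the query results cached above with
# `.cache()` stay in Spark executor memory. Assuming `eventSession.sql()` returns a standard PySpark
# DataFrame (the `.cache()` and `.toPandas()` calls above suggest it does), the cached results can be
# released with `DataFrame.unpersist()` once they are no longer needed.

# +
# Release the cached query results created in the cells above.
for cached_df in (df_index_query, df_fullscan_query, df_inlist_query, df_synopsis_query):
    cached_df.unpersist()
# -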

# © Copyright 2019 IBM Corp. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file
# except in compliance with the License. You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the
# License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing permissions and
# limitations under the License.

    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os os.environ['CUDA_VISIBLE_DEVICES'] = '3' # - from tensor2tensor.data_generators import problem from tensor2tensor.data_generators import text_problems from tensor2tensor.data_generators import translate from tensor2tensor.layers import common_attention from tensor2tensor.utils import registry from tensor2tensor import problems import tensorflow as tf import os import logging import sentencepiece as spm import transformer_tag from tensor2tensor.layers import modalities # + vocab = 'sp10m.cased.t5.model' sp = spm.SentencePieceProcessor() sp.Load(vocab) class Encoder: def __init__(self, sp): self.sp = sp self.vocab_size = sp.GetPieceSize() + 100 def encode(self, s): return self.sp.EncodeAsIds(s) def decode(self, ids, strip_extraneous = False): return self.sp.DecodeIds(list(ids)) # + d = [ {'class': 0, 'Description': 'PAD', 'salah': '', 'betul': ''}, { 'class': 1, 'Description': 'kesambungan subwords', 'salah': '', 'betul': '', }, { 'class': 2, 'Description': 'tiada kesalahan', 'salah': '', 'betul': '', }, { 'class': 3, 'Description': 'kesalahan frasa nama, Perkara yang diterangkan mesti mendahului "penerang"', 'salah': 'Cili sos', 'betul': 'sos cili', }, { 'class': 4, 'Description': 'kesalahan kata jamak', 'salah': 'mereka-mereka', 'betul': 'mereka', }, { 'class': 5, 'Description': 'kesalahan kata penguat', 'salah': 'sangat tinggi sekali', 'betul': 'sangat tinggi', }, { 'class': 6, 'Description': 'kata adjektif dan imbuhan "ter" tanpa penguat.', 'salah': 'Sani mendapat markah yang tertinggi sekali.', 'betul': 'Sani mendapat markah yang tertinggi.', }, { 'class': 7, 'Description': 'kesalahan kata hubung', 'salah': 'Sally sedang membaca bila saya tiba di rumahnya.', 'betul': 'Sally sedang membaca apabila saya tiba di rumahnya.', }, { 'class': 8, 'Description': 'kesalahan kata bilangan', 'salah': 'Beribu peniaga tidak membayar cukai pendapatan.', 'betul': 'Beribu-ribu peniaga tidak membayar cukai pendapatan', }, { 'class': 9, 'Description': 'kesalahan kata sendi', 'salah': 'Umar telah berpindah daripada sekolah ini bulan lalu.', 'betul': 'Umar telah berpindah dari sekolah ini bulan lalu.', }, { 'class': 10, 'Description': 'kesalahan penjodoh bilangan', 'salah': 'Setiap orang pelajar', 'betul': 'Setiap pelajar.', }, { 'class': 11, 'Description': 'kesalahan kata ganti diri', 'salah': 'Pencuri itu telah ditangkap. Beliau dibawa ke balai polis.', 'betul': 'Pencuri itu telah ditangkap. 
Dia dibawa ke balai polis.', }, { 'class': 12, 'Description': 'kesalahan ayat pasif', 'salah': 'Cerpen itu telah dikarang oleh saya.', 'betul': 'Cerpen itu telah saya karang.', }, { 'class': 13, 'Description': 'kesalahan kata tanya', 'salah': 'Kamu berasal dari manakah ?', 'betul': 'Kamu berasal dari mana ?', }, { 'class': 14, 'Description': 'kesalahan tanda baca', 'salah': 'Kamu berasal dari manakah .', 'betul': 'Kamu berasal dari mana ?', }, { 'class': 15, 'Description': 'kesalahan kata kerja tak transitif', 'salah': 'Dia kata kepada saya', 'betul': 'Dia berkata kepada saya', }, { 'class': 16, 'Description': 'kesalahan kata kerja transitif', 'salah': 'Dia suka baca buku', 'betul': 'Dia suka membaca buku', }, { 'class': 17, 'Description': 'penggunaan kata yang tidak tepat', 'salah': 'Tembuk Besar negeri Cina dibina oleh Shih Huang Ti.', 'betul': 'Tembok Besar negeri Cina dibina oleh Shih Huang Ti', }, ] class Tatabahasa: def __init__(self, d): self.d = d self.kesalahan = {i['Description']: no for no, i in enumerate(self.d)} self.reverse_kesalahan = {v: k for k, v in self.kesalahan.items()} self.vocab_size = len(self.d) def encode(self, s): return [self.kesalahan[i] for i in s] def decode(self, ids, strip_extraneous = False): return [self.reverse_kesalahan[i] for i in ids] # - @registry.register_problem class Grammar(text_problems.Text2TextProblem): """grammatical error correction.""" def feature_encoders(self, data_dir): encoder = Encoder(sp) t = Tatabahasa(d) return {'inputs': encoder, 'targets': encoder, 'targets_error_tag': t} def hparams(self, defaults, model_hparams): super(Grammar, self).hparams(defaults, model_hparams) if 'use_error_tags' not in model_hparams: model_hparams.add_hparam('use_error_tags', True) if 'middle_prediction' not in model_hparams: model_hparams.add_hparam('middle_prediction', False) if 'middle_prediction_layer_factor' not in model_hparams: model_hparams.add_hparam('middle_prediction_layer_factor', 2) if 'ffn_in_prediction_cascade' not in model_hparams: model_hparams.add_hparam('ffn_in_prediction_cascade', 1) if 'error_tag_embed_size' not in model_hparams: model_hparams.add_hparam('error_tag_embed_size', 12) if model_hparams.use_error_tags: defaults.modality[ 'targets_error_tag' ] = modalities.ModalityType.SYMBOL error_tag_vocab_size = self._encoders[ 'targets_error_tag' ].vocab_size defaults.vocab_size['targets_error_tag'] = error_tag_vocab_size def example_reading_spec(self): data_fields, _ = super(Grammar, self).example_reading_spec() data_fields['targets_error_tag'] = tf.VarLenFeature(tf.int64) return data_fields, None @property def approx_vocab_size(self): return 32100 @property def is_generate_per_split(self): return False @property def dataset_splits(self): return [ {'split': problem.DatasetSplit.TRAIN, 'shards': 200}, {'split': problem.DatasetSplit.EVAL, 'shards': 1}, ] DATA_DIR = os.path.expanduser('t2t-tatabahasa/data') TMP_DIR = os.path.expanduser('t2t-tatabahasa/tmp') TRAIN_DIR = os.path.expanduser('t2t-tatabahasa/train-small') PROBLEM = 'grammar' t2t_problem = problems.problem(PROBLEM) MODEL = 'transformer_tag' HPARAMS = 'transformer_base' from tensor2tensor.utils.trainer_lib import create_run_config, create_experiment from tensor2tensor.utils.trainer_lib import create_hparams from tensor2tensor.utils import registry from tensor2tensor import models from tensor2tensor import problems from tensor2tensor.utils import trainer_lib # + X = tf.placeholder(tf.int32, [None, None], name = 'x_placeholder') Y = tf.placeholder(tf.int32, [None, None], name = 
'y_placeholder') targets_error_tag = tf.placeholder(tf.int32, [None, None], 'error_placeholder') X_seq_len = tf.count_nonzero(X, 1, dtype=tf.int32) maxlen_decode = tf.reduce_max(X_seq_len) x = tf.expand_dims(tf.expand_dims(X, -1), -1) y = tf.expand_dims(tf.expand_dims(Y, -1), -1) targets_error_tag_ = tf.expand_dims(tf.expand_dims(targets_error_tag, -1), -1) features = { "inputs": x, "targets": y, "target_space_id": tf.constant(1, dtype=tf.int32), 'targets_error_tag': targets_error_tag, } Modes = tf.estimator.ModeKeys hparams = trainer_lib.create_hparams(HPARAMS, data_dir=DATA_DIR, problem_name=PROBLEM) # + hparams.filter_size = 2048 hparams.hidden_size = 512 hparams.num_heads = 8 hparams.num_hidden_layers = 6 hparams.vocab_divisor = 128 hparams.dropout = 0.1 hparams.max_length = 256 # LM hparams.label_smoothing = 0.0 hparams.shared_embedding_and_softmax_weights = False hparams.eval_drop_long_sequences = True hparams.max_length = 256 hparams.multiproblem_mixing_schedule = 'pretrain' # tpu hparams.symbol_modality_num_shards = 1 hparams.attention_dropout_broadcast_dims = '0,1' hparams.relu_dropout_broadcast_dims = '1' hparams.layer_prepostprocess_dropout_broadcast_dims = '1' # - model = registry.model(MODEL)(hparams, Modes.PREDICT) # + # logits = model(features) # logits # sess = tf.InteractiveSession() # sess.run(tf.global_variables_initializer()) # l = sess.run(logits, feed_dict = {X: [[10,10, 10, 10,10,1],[10,10, 10, 10,10,1]], # Y: [[10,10, 10, 10,10,1],[10,10, 10, 10,10,1]], # targets_error_tag: [[10,10, 10, 10,10,1], # [10,10, 10, 10,10,1]]}) # + features = { "inputs": x, "target_space_id": tf.constant(1, dtype=tf.int32), } with tf.variable_scope(tf.get_variable_scope(), reuse = False): fast_result = model._greedy_infer(features, maxlen_decode) # - result_seq = tf.identity(fast_result['outputs'], name = 'greedy') result_tag = tf.identity(fast_result['outputs_tag'], name = 'tag_greedy') # + from tensor2tensor.layers import common_layers def accuracy_per_sequence(predictions, targets, weights_fn = common_layers.weights_nonzero): padded_predictions, padded_labels = common_layers.pad_with_zeros(predictions, targets) weights = weights_fn(padded_labels) padded_labels = tf.to_int32(padded_labels) padded_predictions = tf.to_int32(padded_predictions) not_correct = tf.to_float(tf.not_equal(padded_predictions, padded_labels)) * weights axis = list(range(1, len(padded_predictions.get_shape()))) correct_seq = 1.0 - tf.minimum(1.0, tf.reduce_sum(not_correct, axis=axis)) return tf.reduce_mean(correct_seq) def padded_accuracy(predictions, targets, weights_fn = common_layers.weights_nonzero): padded_predictions, padded_labels = common_layers.pad_with_zeros(predictions, targets) weights = weights_fn(padded_labels) padded_labels = tf.to_int32(padded_labels) padded_predictions = tf.to_int32(padded_predictions) n = tf.to_float(tf.equal(padded_predictions, padded_labels)) * weights d = tf.reduce_sum(weights) return tf.reduce_sum(n) / d # - acc_seq = padded_accuracy(result_seq, Y) acc_tag = padded_accuracy(result_tag, targets_error_tag) ckpt_path = tf.train.latest_checkpoint(os.path.join(TRAIN_DIR)) ckpt_path sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES) saver = tf.train.Saver(var_list = var_lists) saver.restore(sess, ckpt_path) # + import pickle with open('../pure-text/dataset-tatabahasa.pkl', 'rb') as fopen: data = pickle.load(fopen) encoder = Encoder(sp) # - def get_xy(row, encoder): x, y, tag = [], [], [] for i in 
range(len(row[0])): t = encoder.encode(row[0][i][0]) y.extend(t) t = encoder.encode(row[1][i][0]) x.extend(t) tag.extend([row[1][i][1]] * len(t)) # EOS x.append(1) y.append(1) tag.append(0) return x, y, tag import numpy as np x, y, tag = get_xy(data[10], encoder) e = encoder.encode('Pilih mana jurusan yang sesuai dengan kebolehan anda dalam peperiksaan Sijil Pelajaran Malaysia semasa memohon kemasukan ke institusi pengajian tinggi.') + [1] r = sess.run(fast_result, feed_dict = {X: [e]}) r['outputs_tag'] encoder.decode(r['outputs'][0].tolist()) encoder.decode(x) encoder.decode(y) hparams.problem.example_reading_spec()[0] def parse(serialized_example): data_fields = hparams.problem.example_reading_spec()[0] features = tf.parse_single_example( serialized_example, features = data_fields ) for k in features.keys(): features[k] = features[k].values return features dataset = tf.data.TFRecordDataset('t2t-tatabahasa/data/grammar-dev-00000-of-00001') dataset = dataset.map(parse, num_parallel_calls=32) dataset = dataset.padded_batch(32, padded_shapes = { 'inputs': tf.TensorShape([None]), 'targets': tf.TensorShape([None]), 'targets_error_tag': tf.TensorShape([None]) }, padding_values = { 'inputs': tf.constant(0, dtype = tf.int64), 'targets': tf.constant(0, dtype = tf.int64), 'targets_error_tag': tf.constant(0, dtype = tf.int64), }) dataset = dataset.make_one_shot_iterator().get_next() dataset seqs, tags = [], [] index = 0 while True: try: d = sess.run(dataset) s, t = sess.run([acc_seq, acc_tag], feed_dict = {X:d['inputs'], Y: d['targets'], targets_error_tag: d['targets_error_tag']}) seqs.append(s) tags.append(t) print(f'done {index}') index += 1 except: break np.mean(seqs), np.mean(tags) saver = tf.train.Saver(tf.trainable_variables()) saver.save(sess, 'transformertag-small/model.ckpt') strings = ','.join( [ n.name for n in tf.get_default_graph().as_graph_def().node if ('Variable' in n.op or 'Placeholder' in n.name or 'greedy' in n.name or 'tag_greedy' in n.name or 'x_placeholder' in n.name or 'self/Softmax' in n.name) and 'adam' not in n.name and 'beta' not in n.name and 'global_step' not in n.name and 'modality' not in n.name and 'Assign' not in n.name ] ) strings.split(',') def freeze_graph(model_dir, output_node_names): if not tf.gfile.Exists(model_dir): raise AssertionError( "Export directory doesn't exists. Please specify an export " 'directory: %s' % model_dir ) checkpoint = tf.train.get_checkpoint_state(model_dir) input_checkpoint = checkpoint.model_checkpoint_path absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1]) output_graph = absolute_model_dir + '/frozen_model.pb' clear_devices = True with tf.Session(graph = tf.Graph()) as sess: saver = tf.train.import_meta_graph( input_checkpoint + '.meta', clear_devices = clear_devices ) saver.restore(sess, input_checkpoint) output_graph_def = tf.graph_util.convert_variables_to_constants( sess, tf.get_default_graph().as_graph_def(), output_node_names.split(','), ) with tf.gfile.GFile(output_graph, 'wb') as f: f.write(output_graph_def.SerializeToString()) print('%d ops in the final graph.' 
% len(output_graph_def.node)) freeze_graph('transformertag-small', strings) def load_graph(frozen_graph_filename): with tf.gfile.GFile(frozen_graph_filename, 'rb') as f: graph_def = tf.GraphDef() graph_def.ParseFromString(f.read()) with tf.Graph().as_default() as graph: tf.import_graph_def(graph_def) return graph g = load_graph('transformertag-small/frozen_model.pb') x = g.get_tensor_by_name('import/x_placeholder:0') greedy = g.get_tensor_by_name('import/greedy:0') tag_greedy = g.get_tensor_by_name('import/tag_greedy:0') test_sess = tf.InteractiveSession(graph = g) test_sess.run([greedy, tag_greedy], feed_dict = {x:d['inputs']}) import tensorflow as tf from tensorflow.tools.graph_transforms import TransformGraph from glob import glob tf.set_random_seed(0) import tensorflow_text import tf_sentencepiece # + transforms = ['add_default_attributes', 'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)', 'fold_constants(ignore_errors=true)', 'fold_batch_norms', 'fold_old_batch_norms', 'quantize_weights(fallback_min=-10, fallback_max=10)', 'strip_unused_nodes', 'sort_by_execution_order'] pb = 'transformertag-small/frozen_model.pb' input_graph_def = tf.GraphDef() with tf.gfile.FastGFile(pb, 'rb') as f: input_graph_def.ParseFromString(f.read()) transformed_graph_def = TransformGraph(input_graph_def, ['x_placeholder'], ['greedy', 'tag_greedy'], transforms) with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f: f.write(transformed_graph_def.SerializeToString()) # - g = load_graph('transformertag-small/frozen_model.pb.quantized') x = g.get_tensor_by_name('import/x_placeholder:0') greedy = g.get_tensor_by_name('import/greedy:0') tag_greedy = g.get_tensor_by_name('import/tag_greedy:0') test_sess = tf.InteractiveSession(graph = g) test_sess.run([greedy, tag_greedy], feed_dict = {x:d['inputs']}) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import tensorflow as tf mu = 0.5 sigma = 1 t = 0.45 def est_deriv_mu(samples): approx = np.square(samples * sigma + mu - t) * samples / sigma return np.mean(approx) def est_deriv_sigma(samples): approx = np.square(samples * sigma + mu - t) * (np.square(samples) - 1) / sigma return np.mean(approx) def get_noise(num): return np.random.normal(size=(num, 1)) for i in range(7): num_samples = 10 ** i samples = get_noise(num_samples) est_mu, est_sigma = est_deriv_mu(samples), est_deriv_sigma(samples) print("sample: {} est_mu: {} est_sigma: {}".format(num_samples, est_mu, est_sigma)) # + weights ={ 'wg1': tf.Variable(tf.random_normal([1, 3], stddev=0.01)), 'wg2': tf.Variable(tf.random_normal([3, 1], stddev=0.01)), } biases ={ 'bg1': tf.Variable(tf.zeros([3])), 'bg2': tf.Variable(tf.zeros([1])), } Z = tf.placeholder(tf.float32, [None, 1], name = 'gan_X') reward = tf.placeholder(tf.float32, [None, 1], name = 'reward') X = sigma * Z + mu noise = tf.nn.relu(tf.matmul(X, weights['wg1']) + biases['bg1']) surr_reward = tf.matmul(noise, weights['wg2']) + biases['bg2'] rev_reward = (reward - surr_reward) var_e = sigma + 1e-6 grads_mean = tf.reduce_mean(rev_reward * (Z / var_e) + tf.gradients(surr_reward, X)) grads_var = tf.reduce_mean(rev_reward * ((tf.square(Z) / var_e) - 1 / var_e) + (tf.gradients(surr_reward, X) * Z)) S_var_list = [weights['wg1'], biases['bg1'], weights['wg2'], biases['bg2']] # Gradient of S opt_S = tf.train.AdamOptimizer(1e-5) grads_S_m = tf.gradients(grads_mean * 
grads_mean, S_var_list) grads_S_v = tf.gradients(grads_var * grads_var, S_var_list) grads_S = [grads_S_m[i] + grads_S_v[i] for i in range(len(S_var_list))] grads_and_vars_S = zip(grads_S, S_var_list) # Ask the optimizer to apply the capped gradients. train_S = opt_S.apply_gradients(grads_and_vars_S) sess = tf.Session() sess.run(tf.global_variables_initializer()) for i in range(1000): num_samples = 1000 samples = get_noise(num_samples) rewards = np.square(samples * sigma + mu - t) _, surr_rew, est_mean, est_var = sess.run([train_S, surr_reward, grads_mean, grads_var], feed_dict={Z: samples, reward: rewards}) print("iter: {}/1000".format(i), "surrogate reward: {}, mean: {}, var: {}".format(np.mean(surr_rew), est_mean, est_var)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # fix tree, for the last time! # ### Test case # + import tree.ctutils as ctu def link_circle_up(x, y, r, ax, finish=0): """ Given two points, draw circle at the first point and link it to the second point without drawing the second point by default (so that it can repeat to build a long thread of bids). for the last point, pass the radius of the last circle to the argument 'finish' For example, fig = plt.figure() ax = fig.add_subplot(111) xpos = [1,1] & ypos = [2,4] link_circle(xpos, ypos, 10, ax) xpos = [1,2] & ypos = [4,6] link_circle(xpos, ypos, 30, ax, finish=30) fig.show() """ ax.plot(x[0], y[0], 'o', ms=r, lw=2, alpha=0.7, mfc='orange') ax.plot(x, y, '-', c='black',alpha=0.7) if finish > 0: ax.plot(x[1], y[1], 'o', ms=20, lw=2, alpha=0.7, mfc='orange') def get_xarr(n): import numpy as np arr=[] a=0 for i in range(n): a += (-1)**i * i arr.append(a) return np.asarray(arr) def recursive_tree(idx, tt, nstep, ax, x0, y0, dx, mass_unit=1e10): import tree.draw_merger_tree as dmt prgs = ctu.get_progenitors(tt, idx) i_this_gal = np.where(tt['id'] == idx) m = np.sqrt(tt[i_this_gal]["mvir"] / mass_unit) #print("IDX:", idx, "prgs: ",prgs, "mass:", m, i_this_gal) nprg = len(prgs) if nstep == 0: return else: if nprg == 0: return else: if nprg > 1: #dx *= 1.1 dx += 0.5 # print("Branch!", nprg) #xarr = get_xarr(nprg) * dx + x0 xarr = np.arange(nprg) * dx + x0 for i, x in zip(prgs, xarr): link_circle_up([x0, x], [y0, y0 + 1], m, ax) recursive_tree(i, tt, nstep - 1, ax, x, y0 + 1, dx, mass_unit=mass_unit) def extract_main_tree(treedata, idx=None, verbose=False): """ Returns a single branch/trunk of tree following only the main progenitors. Works with both alltrees or atree. Search until no progenitor is found. Doesn't matter how long the given tree is. Only earlier snapshots are searched for. """ if idx == None: idx = treedata['id'][0] if verbose: print("No idx is given") print("idx = ", idx) nprg = 1 ind_list=[np.where(treedata['id'] == idx)[0][0]] # main progenitor = mmp. 
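    # Walk backwards one snapshot at a time, always following the main progenitor, until none is found.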
while nprg > 0: idx = ctu.get_progenitors(treedata, idx, main=True) # print(idx) ind_list.append(np.where(treedata['id'] == idx[0])[0][0]) nprg = ctu.get_npr(treedata, idx[0]) return treedata[ind_list] def plot_atree(atree, galid): fig, ax = plt.subplots(1) ax.scatter(atree['aexp'], np.log10(atree['m'])) ax.title(galid) plt.savefig(wdir + "mergertrees/" + sidgal + '.png') # - ############################################################################### import matplotlib.pyplot as plt from tree import treemodule from tree import treeutils import pickle import numpy as np # + alltrees = treemodule.CTree() wdir = './10002/' is_gal = True if is_gal: # Galaxy tree tree_path = 'GalaxyMaker/Trees/' else: # halo tree tree_path = 'halo/Trees/' load_extended_tree = True if load_extended_tree: try: alltrees = pickle.load(open(wdir + tree_path + "extended_tree.pickle", "rb" )) print("Loaded an extended tree") except: load_extended_tree = False if not load_extended_tree: """ info file of each snapshot are required. """ alltrees = treemodule.CTree() alltrees.load(filename= wdir + tree_path + 'tree_0_0_0.dat') # Fix nout ----------------------------------------------------- nout_max = alltrees.data['nout'].max() alltrees.data['nout'] += 187 - nout_max print("------ NOUT fixed") alltrees.data = ctu.augment_tree(alltrees.data, wdir, is_gal=is_gal) print("------ tree data extended") alltrees = nout_fi = 187 nout_ini = 30 i_final = np.where(alltrees.data["nout"] == nout_fi) ttt_sub = alltrees.data[i_final] nouts = np.arange(nout_fi - nout_ini + 1) final_gals = ttt_sub['id'] final_gals_org = ttt_sub['Orig_halo_id'] plt.ioff() #figure(figsize=[6,6]) #ax = fig.add_subplot(211) #aexps = np.unique(alltrees.data["aexp"])[:len(nouts)] aexps = np.unique(alltrees.data["aexp"])[:-len(nouts):-1] zreds = ["%.2f" % (1/i -1) for i in aexps] # + import os if not os.path.isdir(wdir + "mergertrees/"): os.mkdir(wdir + "mergertrees/") fig, ax = plt.subplots(1,2) fig.set_size_inches([12,6]) for galid in final_gals: sidgal = str(galid).zfill(5) #print(zreds) atree = ctu.extract_a_tree(alltrees.data, galid) mtree = extract_main_tree(atree) ax[0].scatter(atree['aexp'], np.log10(atree['m']), edgecolors='none', alpha=0.3) ax[0].scatter(mtree['aexp'], np.log10(mtree['m']), edgecolors='none', alpha=0.6, facecolors='red') ax[0].set_xlim([0.15,1.1]) ax[0].set_xticks(aexps[0:151:20]) ax[0].set_xticklabels(zreds[0:151:20]) ax[0].set_title(galid) recursive_tree(galid, atree, 150, ax[1], 0, 0, 0.8, mass_unit=2e8) # y axis label (redshift) ax[1].set_ylabel("Redshift") ax[1].set_ylim([-5,155]) ax[1].set_yticks(range(0,151,10)) ax[1].set_yticklabels(zreds[0:151:10]) #plt.yticks(range(0,151,10), zreds[0:151:10]) ax[1].set_title(sidgal + ", " + str(atree[0]['Orig_halo_id'])) #fig.show() plt.savefig(wdir + "mergertrees/" + sidgal + '.png') ax[0].clear() ax[1].clear() #plt.close() plt.close() # - # ### 158132, 93 from IPython.display import Image Image(filename='./10002/mergertrees/158132.png') # CASE 1) # The second galaxy branch makes sense. But the main progenitor does not. # There might be another galaxy (not in this tree), and CT might have failed to achieve all:all matching. # This is the only case in 10002. # # CASE 2) # There is no case where there is wrong main progenitor when possibly 'real' progenitor is also in the tree. # For example, 29176/270444 is such a case. # # However, these will result in sudden change in lambda. (can be categorized as both major or minor merger..) # ## CASE 2 might be easier to solve. 
Image(filename='./29176/mergertrees/270444.png') alltree = ctu.load_tree('./29176/', is_gal=True) atree = ctu.extract_a_tree(alltree.data, 270444) sub = atree[atree['m'] > 1e10] print(sub['id'][sub['nout'] == 103]) import importlib importlib.reload(ctu) # extract_main_tree 고쳤음. branch1 = ctu.extract_main_tree(sub, idx=171034, no_subset=True) branch2 = ctu.extract_main_tree(sub, idx=171035, no_subset=True) ind_list=[np.where(sub['id'] == 171034)[0][0]] print(ind_list) branch2['nout'] fig, ax=plt.subplots(2) ax[0].scatter(branch1['nout'], np.log10(branch1['m']), c='r')#, c=sub['']) ax[0].scatter(branch2['nout'], np.log10(branch2['m']), c='b')#, c=sub['']) ax[1].scatter(sub['nout'], np.log10(sub['m']), c='b')#, c=sub['']) plt.show() sub.dtype # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="Am8YGvbdWzwp" toc=true #

# Table of Contents

    # # + [markdown] id="4ip9ysnsTFqJ" # # Segunda práctica: Aspectos prácticos de las redes neuronales # # En esta segunda parte, vamos a continuar desarrollando el problema de Fashion MNIST, con el objetivo de entender los aspectos prácticos del entrenamiento de redes neuronales. # # El código utilizado para contestar tiene que quedar claramente reflejado en el Notebook. Puedes crear nuevas cells si así lo deseas para estructurar tu código y sus salidas. A la hora de entregar el notebook, **asegúrate de que los resultados de ejecutar tu código han quedado guardados**. # + executionInfo={"elapsed": 2491, "status": "ok", "timestamp": 1620419567422, "user": {"displayName": "Jos\u00e9 \u00e1", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhaYypRXRBkgZWSMj9q5F9NWoJLp1_U38NusgL4Mw=s64", "userId": "16264495400181037765"}, "user_tz": -120} id="yQ1DOKSRTFqK" # Puedes añadir todos los imports adicionales que necesites aquí import keras from keras.datasets import fashion_mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten import tensorflow as tf import matplotlib import matplotlib.pyplot as plt # + [markdown] id="lBY7qt3mTFqM" # #### Obtención de los datos y pre-processing # + executionInfo={"elapsed": 2340, "status": "ok", "timestamp": 1620421061754, "user": {"displayName": "Jos\u00e9 \u00e1", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhaYypRXRBkgZWSMj9q5F9NWoJLp1_U38NusgL4Mw=s64", "userId": "16264495400181037765"}, "user_tz": -120} id="PImY4g9yTFqM" (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data() x_train = x_train / 255.0 x_test = x_test / 255.0 # + [markdown] id="zJyX2Bk8TFqO" # ## Consideraciones iniciales # + [markdown] id="jRmct5ogTFqO" # ### Train-validation-test split # # En todos los modelos que entrenemos, vamos a partir los datos de training (x_train) en dos sets: training y validación. De este modo, al final tendremos tres datasets distintos: training, validation, y test. Esta es una estrategia común en el aprendizaje automático, en la que los datos de test (o held-out data) se # "esconden" hasta el final. Los datos de validación se utilizan para estimar cómo de bien están funcionando nuestros modelos y para observar si estamos cayendo en overfitting. Esto nos permite cambiar hiperparámetros y probar distintas arquitecturas **sabiendo que no estamos utilizando información del test set para "optimizar" los resultados en éste** (si eligiéramos nuestro mejor modelo en base a los resultados de test, estaríamos "haciendo trampas", ya que se ha utilizado la información contenida en éste para elegir el modelo y las métricas reportadas serían optimistas). # # Para utilizar un split training-validation data durante el entrenamiento, podemos partir nosotros mismos los datos o dejar que Keras lo haga. Podéis ver cómo hacer estas particiones en la documentación de *fit*. # # **Requisito: En todos los entrenamientos de esta práctica, se requiere utilizar el 20% de los datos en x_train como conjunto de datos de validación** # + [markdown] id="BttR0CzHTFqP" # ### Un error común con modelos de Keras # # En esta práctica entrenaremos varios modelos para comparar resultados. Un error común en Keras es no instanciar un nuevo modelo cada vez que hacemos un nuevo entrenamiento. Al hacer # # *model = Sequential()* # # *model.add(lo que sea) # Definición del modelo* # # *model.fit()* # # si queremos entrenar un nuevo modelo o el mismo modelo otra vez, es necesario volver a inicializar el modelo con model = Sequential(). 
Si olvidamos este paso y volvemos a hacer fit(), el modelo seguirá entrenando por donde se quedó en el último fit(). # + [markdown] id="X7REMbqlTFqP" # ### Análisis de resultados # # A la hora de escribir las respuestas y los análisis pedidos, es importante presentar las conclusiones de manera adecuada a partir de lo visto en nuestros experimentos. Los Jupyter Notebook son una herramienta imprescindible para *data scientists* e ingenieros de Machine Learning para presentar los resultados, incluyendo soporte para incluir gráficas y elementos visuales. Podéis explicar vuestras observaciones del modo que consideréis adecuado, si bien recomendamos la utilización de gráficas para evaluar los entrenamientos y comparar resultados. # # Como ayuda, las siguientes funciones pueden resultar interesantes a la hora de evaluar resultados. Todas ellas utilizan el objeto *history* que podéis obtener como salida del método *fit()* de Keras: # # history = model.fit(x_train, y_train, ...) # # Por supuesto, podéis modificarlas y utilizarlas como prefiráis para crear vuestros propios informes. # + executionInfo={"elapsed": 3212, "status": "ok", "timestamp": 1620419568160, "user": {"displayName": "Jos\u00e9 \u00e1", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhaYypRXRBkgZWSMj9q5F9NWoJLp1_U38NusgL4Mw=s64", "userId": "16264495400181037765"}, "user_tz": -120} id="L5epQBRpTFqP" def plot_acc(history, title="Model Accuracy"): """Imprime una gráfica mostrando la accuracy por epoch obtenida en un entrenamiento""" plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title(title) plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Val'], loc='upper left') plt.show() def plot_loss(history, title="Model Loss"): """Imprime una gráfica mostrando la pérdida por epoch obtenida en un entrenamiento""" plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title(title) plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Val'], loc='upper right') plt.show() def plot_compare_losses(history1, history2, name1="Red 1", name2="Red 2", title="Graph title"): """Compara losses de dos entrenamientos con nombres name1 y name2""" plt.plot(history1.history['loss'], color="green") plt.plot(history1.history['val_loss'], 'r--', color="green") plt.plot(history2.history['loss'], color="blue") plt.plot(history2.history['val_loss'], 'r--', color="blue") plt.title(title) plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train ' + name1, 'Val ' + name1, 'Train ' + name2, 'Val ' + name2], loc='upper right') plt.show() def plot_compare_accs(history1, history2, name1="Red 1", name2="Red 2", title="Graph title"): """Compara accuracies de dos entrenamientos con nombres name1 y name2""" plt.plot(history1.history['accuracy'], color="green") plt.plot(history1.history['val_accuracy'], 'r--', color="green") plt.plot(history2.history['accuracy'], color="blue") plt.plot(history2.history['val_accuracy'], 'r--', color="blue") plt.title(title) plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train ' + name1, 'Val ' + name1, 'Train ' + name2, 'Val ' + name2], loc='lower right') plt.show() # Nota: podéis cambiar los números aquí presentes y ejecutar esta línea si queréis cambiar el tamaño # de las gráficas # matplotlib.rcParams['figure.figsize'] = [8, 8] # + [markdown] id="M_yZ9B8gTFqR" # ## 1. Unidades de activación # + [markdown] id="MuVNxmXSTFqR" # En este ejercicio, vamos a evaluar la importancia de utilizar las unidades de activación adecuadas. 
Las funciones de activación como sigmoid han dejado de utilizarse en favor de otras unidades como ReLU. # # **Ejercicio 1 *(2.5 puntos)***: Partiendo de una red sencilla como la desarrollada en el Trabajo 1, escribir un breve análisis comparando la utilización de unidades sigmoid y ReLU (por ejemplo, se pueden comentar aspectos como velocidad de convergencia, métricas obtenidas...). Explicar por qué pueden darse estas diferencias. Opcionalmente, comparar con otras activaciones disponibles en Keras. # # *Pista: Usando redes más grandes se hace más sencillo apreciar las diferencias. Es mejor utilizar al menos 3 o 4 capas densas.* # + executionInfo={"elapsed": 3209, "status": "ok", "timestamp": 1620419568162, "user": {"displayName": "Jos\u00e9 \u00e1", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhaYypRXRBkgZWSMj9q5F9NWoJLp1_U38NusgL4Mw=s64", "userId": "16264495400181037765"}, "user_tz": -120} id="VxaWAsso1Pj0" ## Definición de parámetros generales hidden_units_1 = 128 hidden_units_2 = 64 hidden_units_3 = 64 dropout_rate = 0.25 batch_size = 64 epochs = 20 validation_split = 0.20 metrics = ['accuracy'] # + executionInfo={"elapsed": 1372, "status": "ok", "timestamp": 1620421069687, "user": {"displayName": "Jos\u00e9 \u00e1", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhaYypRXRBkgZWSMj9q5F9NWoJLp1_U38NusgL4Mw=s64", "userId": "16264495400181037765"}, "user_tz": -120} id="Sn2wDK6fstja" ## Obteniendo las etiquetas y convertiendo los valores de las categorías en un vector one-hot import numpy as np from keras.utils import to_categorical labels, counts = np.unique(y_train, return_counts=True) y_train = to_categorical(y_train) y_test = to_categorical(y_test) # + executionInfo={"elapsed": 3198, "status": "ok", "timestamp": 1620419568164, "user": {"displayName": "Jos\u00e9 ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhaYypRXRBkgZWSMj9q5F9NWoJLp1_U38NusgL4Mw=s64", "userId": "16264495400181037765"}, "user_tz": -120} id="hoYUajTuTFqS" ## Método que construye un modelo con: ## - 1 capa Flatten de entrada para aplanar las imágenes de entrenamiento ## - 3 capas densas ocultas con función de activación "activation_function" ## - 1 capa Dropout para borrar pesos de forma aleatoria durante el entrenamiento, para evitar el sobreajuste ## - 1 capa densa de salida con función de activación softmax def build_model(x_train, labels, activation_function='relu', loss_function='categorical_crossentropy', optimizer = 'adam', initializer = 'glorot_uniform'): model = Sequential() model.add(Flatten(input_shape=(x_train.shape[1] , x_train.shape[2]))) model.add(Dense(hidden_units_1, activation=activation_function, kernel_initializer=initializer)) model.add(Dense(hidden_units_2, activation=activation_function)) model.add(Dense(hidden_units_3, activation=activation_function)) model.add(Dropout(dropout_rate)) model.add(Dense(len(labels), activation='softmax')) model.compile(loss=loss_function, optimizer=optimizer, metrics=metrics) return model # + colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"elapsed": 3192, "status": "ok", "timestamp": 1620419568165, "user": {"displayName": "Jos\u00\u00e1", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhaYypRXRBkgZWSMj9q5F9NWoJLp1_U38NusgL4Mw=s64", "userId": "16264495400181037765"}, "user_tz": -120} id="CkJc8hnqsZWe" outputId="2dea8295-4500-401d-a9c8-0f96dc95fa8b" ## Obteniendo el modelo con función de activación Sigmoid, función de perdidad Binary Crossentropy y optimizador SGD. 
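# Note: the output layer is a 10-class softmax, so 'categorical_crossentropy' is the standard loss here;
# with 'binary_crossentropy', Keras treats each of the 10 outputs as an independent binary problem and
# reports binary accuracy, so this model's loss/accuracy values are not directly comparable with the
# 'categorical_crossentropy' models trained below.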
# + id="CkJc8hnqsZWe"
## Building the model with Sigmoid activation, Binary Crossentropy loss, and the SGD optimizer.
from keras.optimizers import SGD
from keras.utils import plot_model

model_sigmoid = build_model(x_train, labels, 'sigmoid', 'binary_crossentropy',
                            SGD(learning_rate=0.1, momentum=0.9))
model_sigmoid.summary()
plot_model(model_sigmoid, show_shapes=True)

# + [markdown] id="HvBC9mceiTtT"
# **Comment:** The model information is shown only once, since the network structure, the number of layers and units per layer, and the number of parameters per layer and in total are the same for every model built in this question.

# + id="DO_rVdQkiRAX"
## Training the model
history_sigmoid = model_sigmoid.fit(x_train, y_train, validation_split=validation_split,
                                    batch_size=batch_size, epochs=epochs)

# + id="FKS0UZVBqepK"
## Plot the Accuracy and Loss values of the model trained with Sigmoid
plot_acc(history_sigmoid, "Accuracy during training with Sigmoid")
plot_loss(history_sigmoid, "Loss during training with Sigmoid")

# + id="uMPgt9GtvWte"
## Building the model with ReLU activation, Categorical Crossentropy loss, and the Adam optimizer.
model_relu = build_model(x_train, labels)

## Training the model
history_relu = model_relu.fit(x_train, y_train, validation_split=validation_split,
                              batch_size=batch_size, epochs=epochs)

# + id="LwtK5079wKUF"
## Plot the Accuracy and Loss values of the model trained with ReLU
plot_acc(history_relu, "Accuracy during training with ReLU")
plot_loss(history_relu, "Loss during training with ReLU")
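# + [markdown]
# As an additional illustration (an added sketch, not part of the original analysis), the
# comparison helpers defined earlier can overlay the two runs directly, which makes the
# difference in convergence speed easier to see than in two separate figures:

# +
plot_compare_losses(history_sigmoid, history_relu, "Sigmoid", "ReLU",
                    "Loss comparison between Sigmoid and ReLU")
plot_compare_accs(history_sigmoid, history_relu, "Sigmoid", "ReLU",
                  "Accuracy comparison between Sigmoid and ReLU")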
# + [markdown] id="-2w5MQ2Y6WwE"
# **Answer:** In this exercise I trained two models with the same global parameters (number of units per layer, dropout rate, and so on): in one of them the activation function is *Sigmoid*, and in the other it is *ReLU*. In addition, each model uses its **most suitable optimizer and loss function**, with their parameters slightly tuned.
#
# Looking at the training results, the model with the **ReLU activation obtains the more acceptable result in terms of accuracy**; however, its loss on the validation data is high. A negative trait of this model is that it seems to suffer from **overfitting**, because the validation accuracy and loss drift away from their training counterparts, i.e. *it memorizes the training records very well but is not able to predict previously unseen records well*. Another noteworthy observation is that, over the number of epochs defined (20), the **evolution of the training** can be appreciated, reaching almost stable values in the last epochs.
#
# Regarding the model trained with the *Sigmoid* activation, I had to adjust the optimizer parameters because **it was not finding the local minima**. With the new learning-rate and momentum values it obtains a good accuracy, although worse than ReLU's; on the other hand, its advantage shows in **loss values that improve substantially on ReLU's**, 0.069 (Sigmoid) vs 0.1903 (ReLU) (bear in mind, though, that the two models use different loss functions, binary vs. categorical cross-entropy, so these values are not directly comparable). This model also shows *no overfitting*: it predicts the training and the validation data with very similar quality.

# + id="SeyuAx8SBJVk"
## Building the model with hyperbolic tangent activation, MSE loss, and the Adam optimizer.
model_tanh = build_model(x_train, labels, 'tanh', 'mean_squared_error')

## Training the model
history_tanh = model_tanh.fit(x_train, y_train, validation_split=validation_split,
                              batch_size=batch_size, epochs=epochs)

# + id="VXcy1tWpDb1p"
## Plot the Accuracy and Loss values of the model trained with Tanh, compared against ReLU
plot_compare_losses(history_relu, history_tanh, "ReLU", "Tanh",
                    "Loss comparison between ReLU and Tanh")
plot_compare_accs(history_relu, history_tanh, "ReLU", "Tanh",
                  "Accuracy comparison between ReLU and Tanh")

# + [markdown] id="a5D8isxTEXn6"
# **Answer:** In this new model trained with the Tanh activation we see *an accuracy trend similar to ReLU's*, while **the loss value improves considerably** on both the training and the validation data (here too the loss function differs, MSE, so the absolute values are not directly comparable). This model also tends to suffer from **overfitting**, just like the ReLU model.
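# + [markdown]
# A minimal added sketch (assuming the history objects above are still in memory): since the
# runs use different loss functions, summarizing them by their best validation accuracy puts
# them on a common scale and complements the plots above.

# +
for name, hist in [("sigmoid", history_sigmoid), ("relu", history_relu), ("tanh", history_tanh)]:
    best = max(hist.history['val_accuracy'])
    best_epoch = int(np.argmax(hist.history['val_accuracy'])) + 1
    print(f"{name:8s} best val_accuracy = {best:.4f} (epoch {best_epoch})")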
# + id="kkthj6y7H662"
## Building the model with Softplus activation, Categorical Crossentropy loss, and the Adam optimizer.
model_softplus = build_model(x_train, labels, 'softplus', 'categorical_crossentropy')

## Training the model
history_softplus = model_softplus.fit(x_train, y_train, validation_split=validation_split,
                                      batch_size=batch_size, epochs=epochs)

# + id="WZhaw_SnITvZ"
## Plot the Accuracy and Loss values of the model trained with Softplus, compared against ReLU
plot_compare_losses(history_relu, history_softplus, "ReLU", "Softplus",
                    "Loss comparison between ReLU and Softplus")
plot_compare_accs(history_relu, history_softplus, "ReLU", "Softplus",
                  "Accuracy comparison between ReLU and Softplus")

# + [markdown] id="nDaBc1GmJ8-W"
# **Answer:** Using the **softplus** function (a smooth approximation of the ReLU function) **does not improve on the ReLU result**; in fact it reproduces ReLU's results almost exactly, although the curves do look smoother, i.e. they show less variance than ReLU's.

# + [markdown] id="pu6RbUFKTFqT"
# ## 2. Parameter initialization

# + [markdown] id="Abmm05UPTFqU"
# In this exercise we are going to evaluate the importance of correctly initializing the parameters of a neural network.
#
# **Exercise 2 *(2.5 points)***: Starting from a network similar to the previous exercise's (already using ReLUs), comment on the differences observed during training when using different parameter-initialization strategies. To do so, initialize all layers with the following strategies, available in Keras, and analyze their differences:
#
# * Initialization with zeros.
# * Initialization with a normal random variable.
# * Initialization with Keras's default values for a Dense layer (the *glorot uniform* strategy)

# + id="qcMt7pSkTFqU"
from tensorflow.keras import initializers

## Building the model with ReLU activation, Categorical Crossentropy loss, the Adam optimizer, and all-zeros initialization.
model_relu_zeros = build_model(x_train, labels, initializer=initializers.Zeros())

## Training the model
history_relu_zeros = model_relu_zeros.fit(x_train, y_train, validation_split=validation_split,
                                          batch_size=batch_size, epochs=epochs)

# + id="SHLHHokGWy0h"
## Building the model with ReLU activation, Categorical Crossentropy loss, the Adam optimizer, and random normal initialization.
model_relu_random = build_model(x_train, labels,
                                initializer=initializers.RandomNormal(mean=0.0, stddev=1.0))

## Training the model
history_relu_random = model_relu_random.fit(x_train, y_train, validation_split=validation_split,
                                            batch_size=batch_size, epochs=epochs)

# + id="YXRFBbwJWyn5"
## Plot the Accuracy and Loss values for the models trained with ReLU: green with the
## "glorot_uniform" initializer and blue with the all-zeros initializer.
plot_compare_losses(history_relu, history_relu_zeros, "ReLU GlorotUniform", "ReLU Zeros",
                    "Loss comparison between glorot_uniform initialization and all-zeros initialization")
plot_compare_accs(history_relu, history_relu_zeros, "ReLU GlorotUniform", "ReLU Zeros",
                  "Accuracy comparison between glorot_uniform initialization and all-zeros initialization")

# + id="7ntgY7EDgjJv"
## Plot the Accuracy and Loss values for the models trained with ReLU: green with the
## "glorot_uniform" initializer and blue with random-normal-initialized weights.
plot_compare_losses(history_relu, history_relu_random, "ReLU GlorotUniform", "ReLU RandomNormal",
                    "Loss comparison between glorot_uniform initialization and random normal initialization")
plot_compare_accs(history_relu, history_relu_random, "ReLU GlorotUniform", "ReLU RandomNormal",
                  "Accuracy comparison between glorot_uniform initialization and random normal initialization")

# + [markdown] id="7yylUfcgjp5j"
# **Answer**:
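# + [markdown]
# A minimal added sketch that can support the analysis above (assuming `model_relu_zeros` is
# still in memory): with an all-zeros initialization the hidden layers receive no useful
# gradient signal, so their kernels are expected to remain at zero after training, which is
# why the network cannot do better than a constant prediction.

# +
w_first_dense = model_relu_zeros.layers[1].get_weights()[0]  # kernel of the first Dense layer
print("max |weight| in the first Dense layer after training:", abs(w_first_dense).max())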
# + [markdown] id="NqIAyVWrTFqV"
# ## 3. Optimizers

# + [markdown] id="lcYj29hYTFqW"
# **Exercise 3 *(2.5 points)***: Starting from a network similar to the previous exercise's (using the best initialization strategy observed), compare and analyze the differences observed when training with several of the optimizers seen in class, including SGD as the basic optimizer (you may explore each optimizer's hyperparameter space, although for more advanced optimizers such as Adam and RMSprop it is a good idea to keep the default values provided by Keras).

# + id="0fWDiqXvTFqW"
## Building the model with ReLU activation, Categorical Crossentropy loss, and the SGD optimizer.
model_relu_sgd = build_model(x_train, labels, optimizer=SGD(learning_rate=0.01, momentum=0.9))

## Training the model
history_relu_sgd = model_relu_sgd.fit(x_train, y_train, validation_split=validation_split,
                                      batch_size=batch_size, epochs=epochs)

# + id="sTQ9ndzscpch"
## Plot the Accuracy and Loss values for the models trained with ReLU: green with the Adam
## optimizer and blue with SGD.
plot_compare_losses(history_relu, history_relu_sgd, "ReLU Adam", "ReLU SGD",
                    "Loss comparison between the Adam and SGD optimizers")
plot_compare_accs(history_relu, history_relu_sgd, "ReLU Adam", "ReLU SGD",
                  "Accuracy comparison between the Adam and SGD optimizers")

# + id="h9Id6v8MlFfB"
from keras.optimizers import RMSprop

## Building the model with ReLU activation, Categorical Crossentropy loss, and the RMSprop optimizer.
model_relu_rmsprop = build_model(x_train, labels, optimizer=RMSprop())

## Training the model
history_relu_rmsprop = model_relu_rmsprop.fit(x_train, y_train, validation_split=validation_split,
                                              batch_size=batch_size, epochs=epochs)

# + id="IQ2vGO5ilFWi"
## Plot the Accuracy and Loss values for the models trained with ReLU: green with the Adam
## optimizer and blue with RMSprop.
plot_compare_losses(history_relu, history_relu_rmsprop, "ReLU Adam", "ReLU RMSprop",
                    "Loss comparison between the Adam and RMSprop optimizers")
plot_compare_accs(history_relu, history_relu_rmsprop, "ReLU Adam", "ReLU RMSprop",
                  "Accuracy comparison between the Adam and RMSprop optimizers")

# + id="7wm_uPVLmzai"
from keras.optimizers import Nadam

## Building the model with ReLU activation, Categorical Crossentropy loss, and the Nadam optimizer.
model_relu_nadam = build_model(x_train, labels, optimizer=Nadam())

## Training the model
history_relu_nadam = model_relu_nadam.fit(x_train, y_train, validation_split=validation_split,
                                          batch_size=batch_size, epochs=epochs)

# + id="d6C60DZomzU5"
## Plot the Accuracy and Loss values for the models trained with ReLU: green with the Adam
## optimizer and blue with Nadam.
plot_compare_losses(history_relu, history_relu_nadam, "ReLU Adam", "ReLU Nadam",
                    "Loss comparison between the Adam and Nadam optimizers")
plot_compare_accs(history_relu, history_relu_nadam, "ReLU Adam", "ReLU Nadam",
                  "Accuracy comparison between the Adam and Nadam optimizers")

# + [markdown] id="aVVIzQy7tDU1"
# **Answer**: I tried three optimizers different from the one used in the ReLU model built in the previous question. Comparing the results for each optimizer, the best result was obtained with the **Nadam** optimizer, which slightly improves on the Adam optimizer.
#
# Arguably the *worst optimizer* for this model and training data was RMSprop. In the case of the SGD optimizer, **new values for its parameters had to be found** that improve on the result produced by the default parameters.
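# + [markdown]
# A small added summary (a sketch, assuming the four history objects above are still in
# memory): collecting the best validation accuracy reached with each optimizer in one place
# makes the comparison in the answer above easier to verify at a glance.

# +
optimizer_histories = {
    'Adam': history_relu,
    'SGD': history_relu_sgd,
    'RMSprop': history_relu_rmsprop,
    'Nadam': history_relu_nadam,
}
for name, hist in optimizer_histories.items():
    print(f"{name:8s} best val_accuracy = {max(hist.history['val_accuracy']):.4f}")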
# + [markdown] id="BkfTFoJOTFqZ"
# ## 4. Regularization and final network *(2.5 points)*

# + [markdown] id="f6CQhK7ZTFqZ"
# **Exercise 4.1**: Train a final network capable of reaching an accuracy on the validation set close to 90%. To do so, combine everything learned so far and use regularization techniques to avoid overfitting. Some of the elements that can be taken into account are the following (a short sketch of the L2 and batch-normalization options appears after the final model below):
#
# * Number of layers and neurons per layer
# * Optimizers and their parameters
# * Batch size
# * Activation units
# * Use of dropout layers, L2 regularization, L1 regularization...
# * Early stopping (it can be applied as a Keras callback, or you can roughly see "by eye" when the model starts to overfit and choose the number of epochs accordingly)
# * Batch normalization
#
# If the models trained earlier already came close to the required accuracy, try different strategies anyway and comment on the results.
#
# Briefly explain the strategy followed and the models tried to arrive at the final model, which must appear trained in this Notebook. It is not necessary to keep the training of every model that was tried; it is enough to explain how the final model was reached.

# + id="AUJ5AtunTFqa"
## Method that builds the final model of the assignment with:
## - 1 input Flatten layer to flatten the training images
## - 4 hidden dense layers with "softplus" activation
## - 1 Dropout layer after each hidden layer, randomly dropping units during training to avoid overfitting
## - 1 output dense layer with softmax activation
def build_final_model(x_train, labels):
    model = Sequential()
    model.add(Flatten(input_shape=(x_train.shape[1], x_train.shape[2])))
    model.add(Dense(128, activation='softplus'))
    model.add(Dropout(0.3))
    model.add(Dense(256, activation='softplus'))
    model.add(Dropout(0.3))
    model.add(Dense(256, activation='softplus'))
    model.add(Dropout(0.3))
    model.add(Dense(128, activation='softplus'))
    model.add(Dropout(0.3))
    model.add(Dense(len(labels), activation='softmax'))
    model.summary()
    plot_model(model, show_shapes=True)
    model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
    return model

# + id="u0o3wgGHwwnm"
from keras.callbacks import EarlyStopping

## Building the final model.
final_model = build_final_model(x_train, labels)

## Defining the early stopping of the training
callback = [EarlyStopping(monitor='val_accuracy', patience=10)]

## Training the model
history_final = final_model.fit(x_train, y_train, validation_split=validation_split,
                                batch_size=64, epochs=80, callbacks=callback)

# + id="F8q48VxBHRJL"
## Plot the Accuracy and Loss values of the final trained model
plot_acc(history_final, "Accuracy during training of the final model")
plot_loss(history_final, "Loss during training of the final model")
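# + [markdown]
# The exercise statement above also mentions L2 regularization and batch normalization, which
# the final model does not use. The following cell is only an illustrative sketch of how those
# layers could be added to a dense block in Keras; the `alt_model` name is hypothetical and the
# model is not trained here.

# +
from keras.layers import BatchNormalization
from keras.regularizers import l2

alt_model = Sequential()
alt_model.add(Flatten(input_shape=(x_train.shape[1], x_train.shape[2])))
alt_model.add(Dense(128, activation='relu', kernel_regularizer=l2(1e-4)))  # L2 penalty on the kernel
alt_model.add(BatchNormalization())  # normalizes the activations of the previous layer
alt_model.add(Dropout(0.3))
alt_model.add(Dense(len(labels), activation='softmax'))
alt_model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
alt_model.summary()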
# + [markdown] id="XJa-HeqFXkIg"
# **Answer**: To improve the training results, specifically the predictions on the validation data, several techniques were applied that improve the predictions and reduce problems such as overfitting.
#
# The main techniques and tools used include the choice of the number of hidden layers and units per layer, the activation function, the loss function, the optimizer, the addition of Dropout layers, and so on.
#
# * The number of layers and the number of units were tuned to improve **val_accuracy**.
# * Training started with the ReLU activation function, but better results were later obtained with **Softplus**.
# * The **categorical_crossentropy** loss function was kept because the alternatives did not improve the result.
# * Based on the result of the previous question, the **Nadam** optimizer was chosen.
# * Since overfitting appeared after only a few epochs, **Dropout** layers with a rate of 0.3 were added.
# * An **EarlyStopping** callback with a patience of *10* was also created to stop training once *val_accuracy* stops improving. In the last training run it did not stop before the 80 epochs.
# * Finally, the **batch_size** and **epochs** values were tuned to reach *90% val_accuracy*.
#
# After applying all the steps described above, only a few epochs exceeded *90% val_accuracy*; most stayed around **89%**.

# + [markdown] id="B5LcQgwUTFqb"
# ### Model evaluation on the test data

# + [markdown] id="4ldy0NmtTFqb"
# Once we have chosen what we believe is our best model, based on the estimate seen on the validation data, it is time to use the test data to see how our model behaves on new data. If we have done things right, this number should be close to the estimate we saw on the validation data.
#
# **Question 4.2**. Using our best model, obtain the resulting accuracy on the test dataset. Comment on this result.

# + id="1AchhaHqTFqc"
loss, acc = final_model.evaluate(x_test, y_test, batch_size=64)
print("\nTest loss: %.4f" % loss)
print("Test accuracy: %.1f%%" % (100.0 * acc))

# + id="qM4nL1Fc_XJh"
result = final_model.predict(x_test)

from sklearn.metrics import confusion_matrix
confusion_matrix(np.argmax(y_test, axis=1), np.argmax(result, axis=1))

# + id="fT7iq3kM_d0V"
from sklearn.metrics import classification_report
print(classification_report(np.argmax(y_test, axis=1), np.argmax(result, axis=1)))
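# + [markdown]
# An added sketch (assumption: the dataset is Fashion-MNIST with its standard class ordering):
# attaching the class names to the report makes the per-class discussion in the answer below
# easier to follow.

# +
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print(classification_report(np.argmax(y_test, axis=1), np.argmax(result, axis=1),
                            target_names=class_names))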
# + [markdown] id="8ZkUxA8Fi55O"
# **Answer**: The accuracy obtained on the test data is **89.7%**, very similar to the val_accuracy values obtained during training.
# * The garments the model **predicted best** on the test data are **Trouser, Ankle boot and Sandal**, since their shape differs from the rest of the garments.
# * The garment class it **had the most trouble predicting was Shirt**, because it is the one that most resembles the other garments, for example *T-Shirt, Pullover, Dress and Coat*.
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import geopandas as gpd import matplotlib.pyplot as plt import matplotlib import folium import folium.plugins import mapclassify import numpy as np import cv2 import base64 import webbrowser data = pd.read_csv("../../data/full/train/data_train.csv") # + pycharm={"name": "#%%\n"} def create_popup(image_path: str) -> folium.Popup: encoded = base64.b64encode(open(image_path, 'rb').read()) html = '<img src="data:image/jpeg;base64,{}">'.format iframe = folium.IFrame(html(encoded.decode('UTF-8')), width=220, height=220) return folium.Popup(iframe, max_width=400) # + pycharm={"name": "#%%\n"} def create_interactive_map(df: pd.DataFrame, output_html_path: str, group_column: str = None, group_columns : list = None) -> str: """ Input: df: pd.DataFrame Input data. output_html_path : str Where to save the interactive map html. group_column : str If not None, specifies which column defines the group / cluster. Each group will be colored differently on the map. group_columns : list of column names If not None, specifies which columns define the RGB (blue -> red -> green) color of the dots. Output: map_path: str Path to HTML interactive map. """ if group_column and group_columns: print("Error! Can't color by both class and value.") return "err" # colormap n_clusters = 1 if group_column: n_clusters = df[group_column].nunique() colors = ['red', 'blue', 'green', 'purple', 'orange', 'darkred', 'lightred', 'beige', 'darkblue', 'darkgreen', 'cadetblue', 'darkpurple', 'white', 'pink', 'lightblue', 'lightgreen', 'gray', 'black', 'lightgray'] if group_columns: colors = np.zeros((len(df), 3)) for idx, col in enumerate(group_columns): colors[:, idx] = df[col].values colors[:, idx] = (colors[:, idx] - colors[:, idx].min()) / (colors[:, idx].max() - colors[:, idx].min()) colors = colors[:, [1, 2, 0]] def get_color(idx, row: pd.Series): if group_column: return colors[row[group_column]] if group_columns: return matplotlib.colors.to_hex(colors[idx, :]) return 'cadetblue' map_path = output_html_path + "brightness2.html" map = folium.Map(location=[44.813555, 15.978054], zoom_start=8, tiles="cartodb positron", max_bounds=True, min_lat=42.44, max_lat=48.55, min_lon=13.5, max_lon=19.43, min_zoom=8, prefer_canvas=True ) for idx, (_, row) in enumerate(df.iterrows()): angle = str(np.random.choice([0, 90, 180, 270])) image_path = f"../../data/full/train/{row.uuid}/{angle}.jpg" popup = create_popup(image_path) lat, long = round(row.latitude, 3), round(row.longitude, 3) folium.CircleMarker(location=(row.latitude, row.longitude), radius=4, popup=popup, tooltip=f"({lat}, {long})", color=get_color(idx, row), fill=False, #get_color(row), ).add_to(map) map.save(map_path) return map_path # + pycharm={"name": "#%%\n"} from skimage.io import imread temp = data.sample(250).copy() for index in temp.index.values: M = 0 for angle in [0, 90, 180, 270]: image_path = f"../../data/full/train/{temp.loc[index, 'uuid']}/{angle}.jpg" img = imread(image_path) M += img.mean() / 4
temp.loc[index, 'brightness'] = M temp['brightness'] = (temp['brightness'] - temp['brightness'].min()) / (temp['brightness'].max() - temp['brightness'].min()) temp['brightness'] = 1/(1 + np.exp(-temp['brightness'])) # + pycharm={"name": "#%%\n"} # temp = data.sample(120).copy() # temp["cluster"] = np.random.randint(0, 5, 120) # map_path = create_interactive_map(df=temp, output_html_path="", group_column="cluster") # webbrowser.open(map_path) temp[["a", "b"]] = np.random.randn(len(temp), 2) map_path = create_interactive_map(df=temp, output_html_path="", group_columns=["brightness", "brightness"]) webbrowser.open(map_path) # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_chainer_p27 # language: python # name: conda_chainer_p27 # --- # + # #!/usr/bin/python2 import numpy as np from numpy.lib.stride_tricks import as_strided import chainer from chainer import computational_graph from chainer import cuda from chainer import optimizers from chainer import serializers import argparse import six import imageio import numbers from nr_model import Model from fr_model import FRModel # - def extract_patches(arr, patch_shape=(32,32,3), extraction_step=32): arr_ndim = arr.ndim if isinstance(patch_shape, numbers.Number): patch_shape = tuple([patch_shape] * arr_ndim) if isinstance(extraction_step, numbers.Number): extraction_step = tuple([extraction_step] * arr_ndim) patch_strides = arr.strides slices = tuple(slice(None, None, st) for st in extraction_step) indexing_strides = arr[slices].strides patch_indices_shape = ((np.array(arr.shape) - np.array(patch_shape)) // np.array(extraction_step)) + 1 shape = tuple(list(patch_indices_shape) + list(patch_shape)) strides = tuple(list(indexing_strides) + list(patch_strides)) patches = as_strided(arr, shape=shape, strides=strides) return patches args = {} args['INPUT'] = ["DigitalBlurSet/DiskR8_1.jpg", "DigitalBlurSet/DiskR10_1.jpg", "DigitalBlurSet/DiskR20_1.jpg", "DigitalBlurSet/DiskR30_1.jpg", "DigitalBlurSet/DiskR50_1.jpg", "DigitalBlurSet/GaussianH1x50S250_1.jpg", "DigitalBlurSet/GaussianH2x80S300_1.jpg", "DigitalBlurSet/GaussianH15x15S200_1.jpg", "DigitalBlurSet/GaussianH25x25S100_1.jpg", "DigitalBlurSet/GaussianH25x25S300_1.jpg", "DigitalBlurSet/GaussianH50x50S300_1.jpg", "DigitalBlurSet/MotionL50Th90_1.jpg", "DigitalBlurSet/MotionL80Th45_1.jpg", "DigitalBlurSet/MotionL100Th0_1.jpg", "DigitalBlurSet/MotionL100Th45_1.jpg", "DigitalBlurSet/Original_1.jpg"] args['REF'] = "" args['model'] = "models/nr_tid_weighted.model" args['top'] = "weighted" args['gpu'] = 0 # + FR = True if args['REF'] == "": FR = False if FR: model = FRModel(top=args['top']) else: model = Model(top=args['top']) cuda.cudnn_enabled = True cuda.check_cuda_available() xp = cuda.cupy serializers.load_hdf5(args['model'], model) model.to_gpu() if FR: ref_img = imageio.imread(args['REF']) patches = extract_patches(ref_img) X_ref = np.transpose(patches.reshape((-1, 32, 32, 3)), (0, 3, 1, 2)) for input_img in args['INPUT']: img = imageio.imread(input_img) patches = extract_patches(img) X = np.transpose(patches.reshape((-1, 32, 32, 3)), (0, 3, 1, 2)) y = [] weights = [] batchsize = min(2000, X.shape[0]) t = xp.zeros((1, 1), np.float32) for i in six.moves.range(0, X.shape[0], batchsize): X_batch = X[i:i + batchsize] X_batch = xp.array(X_batch.astype(np.float32)) if FR: X_ref_batch = X_ref[i:i + batchsize] X_ref_batch = 
xp.array(X_ref_batch.astype(np.float32)) model.forward(X_batch, X_ref_batch, t, False, n_patches_per_image=X_batch.shape[0]) else: model.forward(X_batch, t, False, X_batch.shape[0]) y.append(xp.asnumpy(model.y[0].data).reshape((-1,))) weights.append(xp.asnumpy(model.a[0].data).reshape((-1,))) y = np.concatenate(y) weights = np.concatenate(weights) print("%f" % (np.sum(y*weights)/np.sum(weights))) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Probabilistic Graphical Models with `pgmpy` # !pip install pgmpy from pgmpy.factors import TabularCPD # Declare a CPD grade_cpd = TabularCPD(variable="G", variable_card=3, values=[[0.3, 0.05, 0.9, 0.5], [0.4, 0.25, 0.08, 0.3], [0.3, 0.7, 0.02, 0.2]], evidence=["I", "D"], evidence_card=[2, 2]) grade_cpd # + # Declare the sudent model in pgmpy from pgmpy.models import BayesianModel from pgmpy.factors import TabularCPD # Define nodes and edges student_model = BayesianModel([("D", "G"), ("I", "G"), ("G", "L"), ("I", "S")]) #Define CPDs grade_cpd = TabularCPD( variable="G", variable_card=3, values=[[0.3, 0.05, 0.9, 0.5], [0.4, 0.25, 0.08, 0.3], [0.3, 0.7, 0.02, 0.2]], evidence=["I", "D"], evidence_card=[2, 2]) difficulty_cpd = TabularCPD( variable="D", variable_card=2, values=[[0.6, 0.4]]) intel_cpd = TabularCPD( variable="I", variable_card=2, values=[[0.7, 0.3]]) letter_cpd = TabularCPD( variable="L", variable_card=2, values=[[0.1, 0.4, 0.99], [0.9, 0.6, 0.01]], evidence=["G"], evidence_card=[3]) sat_cpd = TabularCPD( variable="S", variable_card=2, values=[[0.95, 0.2], [0.05, 0.8]], evidence=["I"], evidence_card=[2]) #Add CPDs to nodes and edges student_model.add_cpds(grade_cpd, difficulty_cpd, intel_cpd, letter_cpd, sat_cpd) grade_cpd # - student_model.get_cpds('G') student_model.get_parents('G') # + from pgmpy.inference import VariableElimination student_infer = VariableElimination(student_model) prob_G = student_infer.query(variables='G') print(prob_G['G']) # + prob_G = student_infer.query(variables='G', evidence={'I': 1, 'D' : 0}) print(prob_G['G']) # + prob_G = student_infer.query(variables='G', evidence={'I': 0, 'D' : 1}) print(prob_G['G']) # + #Train Model from Data from pgmpy.models import BayesianModel import pandas as pd import numpy as np # Considering that each variable have only 2 states, # we can generate some random data. 
raw_data = np.random.randint(low=0,high=2,size=(1000, 5)) data = pd.DataFrame(raw_data,columns=["D", "I", "G","L", "S"]) print(data[: int(data.shape[0]*0.75)]) data_train = data[: int(data.shape[0] * 0.75)] student_model = BayesianModel([("D", "G"),("I", "G"),("I", "S"),("G", "L")]) student_model.fit(data_train) student_model.get_cpds('D') # - student_model.get_cpds('L') student_model.active_trail_nodes('D') student_model.local_independencies('G') student_model.get_independencies() # + data_test = data[int(0.75 * data.shape[0]) : data.shape[0]] data_test.drop('G', axis=1, inplace=True) student_model.predict(data_test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + tags=[] import pandas as pd import json from itertools import chain from functools import reduce import operator df_path = '../data/gmu-covid.csv' pa_path = '../data/gmu-covid-parents.json' def expand_data(df_path: str, pa_path: str): def get_interactions(values): interactions = sorted(list(set(values))) interactions = filter(lambda s: s.find('!') > 0, interactions) interactions = map(lambda s: (s, s.split('!')), interactions) interactions = {k: v for k, v in interactions} return interactions df = pd.read_csv(df_path) with open(pa_path, 'r') as f: parents = json.load(f) ch_interactions = get_interactions(chain(*[v for _, v in parents.items()])) pa_interactions = get_interactions([k for k, _ in parents.items()]) interactions = {**ch_interactions, **pa_interactions} def expand(r, cols): vals = [r[c] for c in cols] result = reduce(operator.mul, vals, 1) return result for col_name, cols in interactions.items(): df[col_name] = df.apply(lambda r: expand(r, cols), axis=1) return df df = expand_data(df_path, pa_path) df.shape # + tags=[] import networkx as nx g = nx.read_gpickle('../data/gmu-covid-networkx.gpickle') # + tags=[] from typing import Tuple, Dict, List, Any from itertools import chain, combinations import numpy as np def get_parameters(df: pd.DataFrame, g: nx.DiGraph) -> Tuple[Dict[str, List[str]], Dict[str, List[float]]]: """ Gets the parameters. :param df: Data. :param g: Graph (structure). :return: Tuple; first item is dictionary of domains; second item is dictionary of probabilities. 
""" def vals_to_str(): ddf = df.copy(deep=True) for col in ddf.columns: ddf[col] = ddf[col].astype(str) return ddf def get_filters(ch, parents, domains): pas = parents[ch] if len(pas) == 0: ch_domain = domains[ch] return [f'{ch}=="{v}"' for v in ch_domain] else: def is_valid(tups): n_tups = len(tups) u_tups = len(set([name for name, _ in tups])) if n_tups == u_tups: return True return False vals = [[(pa, v) for v in domains[pa]] for pa in pas] vals = vals + [[(ch, v) for v in domains[ch]]] vals = chain(*vals) vals = combinations(vals, len(pas) + 1) vals = filter(is_valid, vals) vals = map(lambda tups: ' and '.join([f'`{t[0]}`=="{t[1]}"' for t in tups]), vals) vals = list(vals) return vals def get_total(filters, n): def divide(arr): a = np.array(arr) n = np.sum(a) if n == 0: p = 1 / len(arr) return [p for _ in range(len(arr))] r = a / n r = list(r) return r counts = [ddf.query(f).shape[0] for f in filters] counts = [counts[i:i + n] for i in range(0, len(counts), n)] counts = [divide(arr) for arr in counts] counts = list(chain(*counts)) return counts ddf = vals_to_str() nodes = list(g.nodes()) domains = {n: sorted(list(ddf[n].unique())) for n in nodes} parents = {ch: list(g.predecessors(ch)) for ch in nodes} p = {ch: get_total(get_filters(ch, parents, domains), len(domains[ch])) for ch in nodes} return domains, p domains, p = get_parameters(df, g) # - domains p # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import os, sys import scipy import numpy as np from astropy.io import fits import matplotlib.pyplot as plt from tqdm import tqdm_notebook from astropy.table import Table from scipy.ndimage import minimum_filter1d from scipy.ndimage.filters import percentile_filter plt.rcParams['font.size'] = 5 # - d = 23 master_log = master_log = Table.read('/Users/arcticfox/Documents/youngStars/veloce/master_log.tab', format='ascii') date = '2020-11-{0}'.format(d) directory = '2011{0}'.format(d) table = master_log[master_log['ObsDate']==date] fileformat = '{0}nov3{1:04d}.fits' table[table['Frame']==110] # + files = np.sort([i for i in os.listdir(directory) if i.endswith('.fits')]) science_frames, bias_frames, dark_frames, flat_frames = [], [], [], [] for i in range(len(table)): if 'TIC' in table['ObjType'][i]: science_frames.append(fileformat.format(d, table['Frame'][i])) elif 'BiasFrame' == table['ObjType'][i]: bias_frames.append(fileformat.format(d, table['Frame'][i])) elif 'DarkFrame' in table['ObjType'][i]: dark_frames.append(fileformat.format(d, table['Frame'][i])) elif 'FlatField' in table['ObjType'][i]: flat_frames.append(fileformat.format(d, table['Frame'][i])) else: continue #dark_inds = dark_inds[np.argwhere(np.diff(dark_inds)>10)[0][0]+1:] #bias_inds = bias_inds[4:] #flat_inds = flat_inds[2:] #science_frames = np.unique(np.sort([os.path.join(directory, i) for i in science_frames])) bias_frames = np.unique(np.sort([os.path.join(directory, i) for i in bias_frames])) dark_frames = np.unique(np.sort([os.path.join(directory, i) for i in dark_frames]))[6:46]#[27:-5] flat_frames = np.unique(np.sort([os.path.join(directory, i) for i in flat_frames]))[20:]#[23:] # - len(bias_frames), len(dark_frames), len(flat_frames) # ## Creating master frames def master_file(files, output_fn, fntype='dark'): arrs = [] for fn in tqdm_notebook(files): hdu = fits.open(fn) if hdu[0].data.shape == (4112, 4202): arrs.append(hdu[0].data) 
hdu.close() arrs = np.array(arrs) if fntype == 'bias' or fntype == 'dark': masked = np.copy(arrs) + 0.0 for i in range(len(arrs)): rows, cols = np.where(arrs[i]>1000) masked[i][rows,cols] = np.nan masked = np.array(masked) med = np.nanmedian(masked, axis=0) else: med = np.nanmedian(arrs, axis=0) np.save(output_fn, med) return med # + if 'dark_med.npy' not in os.listdir(directory): DARK_MED = master_file(dark_frames, os.path.join(directory, 'dark_med.npy'), fntype='dark') else: DARK_MED = np.load(os.path.join(directory, 'dark_med.npy')) if 'bias_med.npy' not in os.listdir(directory): BIAS_MED = master_file(bias_frames, os.path.join(directory, 'bias_med.npy'), fntype='bias') else: BIAS_MED = np.load(os.path.join(directory, 'bias_med.npy')) if 'flat_med.npy' not in os.listdir(directory): FLAT_MED = master_file(flat_frames, os.path.join(directory, 'flat_med.npy'), fntype='flat') else: FLAT_MED = np.load(os.path.join(directory, 'flat_med.npy')) # - # ## Science Frames def extract_science(files): outputfns = [] arrs = [] for fn in tqdm_notebook(files): hdu = fits.open(os.path.join(directory,fn)) np.save(fn[:-5]+'.npy', hdu[0].data) outputfns.append(fn[:-5]+'.npy') arrs.append(hdu[0].data) return np.array(arrs), outputfns science_arrs, science_files = extract_science(science_frames) directory # # Creating the dot models # + # #%matplotlib inline def get_outliers(x_value, sigma=0.8, plot=False): arr = science_arrs[0][x_value:x_value+1][0] + 0.0 x = np.arange(0,len(arr),1,dtype=int) outliers = np.where(arr >= np.nanmedian(arr) + sigma*np.nanstd(arr))[0] if plot: plt.figure(figsize=(1,1)) plt.plot(x, arr, 'k') plt.ylim(800,1800) plt.plot(x[outliers], arr[outliers], 'o') plt.show() return outliers def group_inds(values, sep): results = [] for i, v in enumerate(values): if i == 0: mini = maxi = v temp = [v] else: # SETS 4 CADENCE LIMIT if (np.abs(v-maxi) <= sep): temp.append(v) if v > maxi: maxi = v if v < mini: mini = v else: results.append(int(np.nanmin(temp))) mini = maxi = v temp = [v] # GETS THE LAST GROUP if i == len(values)-1: results.append(int(np.nanmin(temp))) return np.array(results) # + rows, cols = np.where(science_arrs[0] > 1100) mask = np.zeros(science_arrs[0].shape) mask[rows,cols] = 1 # - plt.imshow(DARK_MED[3000:3500,3000:3500], vmin=300, vmax=2000) # %matplotlib notebook plt.figure(figsize=(1,1)) plt.imshow(mask, cmap='Greys_r', vmin=0, vmax=1)#, alpha=0.9) plt.plot(cols, rows, '.') plt.colorbar() plt.show() # + plt.figure(figsize=(1,1)) sargs = np.argsort(cols) cols, rows = cols[sargs]+0, rows[sargs]+0 starts = np.where((rows<=795) & (rows>=770))[0] ends = np.where((rows<=3330) & (rows>=3290))[0] starts = group_inds(starts, sep=100) ends = group_inds(ends, sep=100) plt.plot(cols, rows, 'k.', ms=1) plt.plot(cols[ends], rows[ends], '.', ms=1) plt.plot(cols[starts], rows[starts], 'r.', ms=1) # - starts = np.delete(starts, [0, 35]) ends = np.delete(ends, [17, 39, 40]) len(starts), len(ends) # + mid_ends = np.where((rows>=2892) & (rows<=2908))[0] mid_starts = np.where((rows>=1160) & (rows<=1180))[0] mid = np.where((rows>=1995) & (rows<=2010))[0] mid_starts = group_inds(mid_starts, sep=100) mid_ends = group_inds(mid_ends, sep=100) mid = group_inds(mid, sep=100) plt.figure(figsize=(1,1)) ms = 3 plt.plot(cols, rows, 'k.', ms=1) plt.plot(cols[mid_ends], rows[mid_ends], 'b.', ms=ms) plt.plot(cols[starts], rows[starts], 'r.', ms=ms) for i in range(len(mid_starts)): plt.plot(cols[mid_starts[i]], rows[mid_starts[i]], '.', ms=ms) plt.plot(cols[ends], rows[ends], 'g.', ms=ms) plt.plot(cols[mid], 
rows[mid], 'y.', ms=ms) # - len(starts), len(mid_starts), len(mid), len(mid_ends), len(ends) #starts = np.delete(starts, [23]) mid_starts = np.delete(mid_starts, [27, 29, 33, 35, 38, 42, 43]) mid = np.delete(mid, [17, 38, 39]) mid_ends = np.delete(mid_ends, [24, 30, 37, 40, 41, 42]) #ends = np.delete(ends, [-1]) len(starts), len(mid_starts), len(mid), len(mid_ends), len(ends) plt.figure(figsize=(1,1)) plt.plot(cols, rows, 'k.', ms=1) plt.plot(cols[ends], rows[ends], 'b.', ms=1) plt.plot(cols[starts], rows[starts], 'r.', ms=1) plt.plot(cols[mid_starts], rows[mid_starts], '.', c='darkorange', ms=1) plt.plot(cols[mid_ends], rows[mid_ends], 'g.', ms=1) plt.plot(cols[mid], rows[mid], 'y.', ms=1) dot_array = np.array([starts, mid_starts, mid, mid_ends, ends]) fit_x = np.arange(300, 4000, 1) plt.figure(figsize=(1,1)) plt.plot(rows, cols, 'k.', ms=1) models = np.zeros((len(mid), len(fit_x))) for i in range(len(mid)): plt.plot(rows[dot_array[:,i]], cols[dot_array[:,i]], '.', ms=1) fit = np.polyfit(rows[dot_array[:,i]], cols[dot_array[:,i]], deg=2) model = np.poly1d(fit) plt.plot(fit_x, model(fit_x), lw=1) models[i] = model(fit_x) np.save('./{0}/models.npy'.format(directory), models) # ## Discretize model gap fits discrete = np.zeros(models.shape, dtype=int) for i in range(len(models)): discrete[i] = np.round(models[i]) np.save('./{0}/discrete_models.npy'.format(directory), discrete) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + from __future__ import division import time, os, gc import numpy as np import pandas as pd import scipy from sklearn.preprocessing import LabelEncoder, OneHotEncoder, StandardScaler from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer # from sklearn.model_selection import StratifiedKFold from sklearn.feature_selection import SelectPercentile, f_classif from nltk.stem.porter import PorterStemmer from nltk.stem.snowball import SnowballStemmer from sklearn.preprocessing import OneHotEncoder,LabelEncoder,StandardScaler from sklearn.decomposition import TruncatedSVD,PCA from sklearn.feature_extraction import text import config PATH = config.RAW_PATH # + train_orig = pd.read_csv(PATH+'train.csv', header=0)#.sample(n=10000) test_orig = pd.read_csv(PATH+'test.csv', header=0)#.sample(n=10000) # def stem_str(x,stemmer=SnowballStemmer('english')): # x = text.re.sub("[^a-zA-Z0-9]"," ", x) # x = (" ").join([stemmer.stem(z) for z in x.split(" ")]) # x = " ".join(x.split()) # return x # porter = PorterStemmer() # snowball = SnowballStemmer('english') # train_orig['question1'] = train_orig['question1'].astype(str).apply(lambda x:stem_str(x.lower(),snowball)) # train_orig['question1'] = train_orig['question1'].astype(str).apply(lambda x:stem_str(x.lower(),snowball)) # train_orig['question2'] = train_orig['question2'].astype(str).apply(lambda x:stem_str(x.lower(),snowball)) # train_orig['question2'] = train_orig['question2'].astype(str).apply(lambda x:stem_str(x.lower(),snowball)) df1 = train_orig[['question1']].copy() df2 = train_orig[['question2']].copy() df1_test = test_orig[['question1']].copy() df2_test = test_orig[['question2']].copy() df2.rename(columns = {'question2':'question1'},inplace=True) df2_test.rename(columns = {'question2':'question1'},inplace=True) train_questions = df1.append(df2) train_questions = train_questions.append(df1_test) train_questions = 
train_questions.append(df2_test) #train_questions.drop_duplicates(subset = ['qid1'],inplace=True) train_questions.drop_duplicates(subset = ['question1'],inplace=True) train_questions.reset_index(inplace=True,drop=True) questions_dict = pd.Series(train_questions.index.values,index=train_questions.question1.values).to_dict() train_cp = train_orig.copy() test_cp = test_orig.copy() train_cp.drop(['qid1','qid2'],axis=1,inplace=True) test_cp['is_duplicate'] = -1 test_cp.rename(columns={'test_id':'id'},inplace=True) comb = pd.concat([train_cp,test_cp]) comb['q1_hash'] = comb['question1'].map(questions_dict) comb['q2_hash'] = comb['question2'].map(questions_dict) q1_vc = comb.q1_hash.value_counts().to_dict() q2_vc = comb.q2_hash.value_counts().to_dict() def try_apply_dict(x,dict_to_apply): try: return dict_to_apply[x] except KeyError: return 0 #map to frequency space comb['q1_freq'] = comb['q1_hash'].map(lambda x: try_apply_dict(x,q1_vc) + try_apply_dict(x,q2_vc)) comb['q2_freq'] = comb['q2_hash'].map(lambda x: try_apply_dict(x,q1_vc) + try_apply_dict(x,q2_vc)) # comb['q1_hash_freq'] = comb['q1_hash'].map(lambda x: try_apply_dict(x,q1_vc)) # comb['q2_hash_freq'] = comb['q2_hash'].map(lambda x: try_apply_dict(x,q2_vc)) comb['freq_diff'] = (abs(comb['q1_freq'] - comb['q2_freq'])+0.1) / (comb['q1_freq'] * comb['q2_freq']) # + comb['q_hash_pos'] = comb['q1_hash']-comb['q2_hash']>0 comb['q_hash_pos'] = comb['q_hash_pos'].astype(int) # comb['q_hash_pos_1'] = comb[['q1_freq','q_hash_pos']].apply(lambda x: 1 if x[0]>1 and x[1]>0 else 0, axis=1) list1 = [] list1.append(0) gpf = comb['q2_hash'].values tag = gpf[0] for i in range(comb.shape[0])[1:]: if gpf[i]-tag<0: list1.append(gpf[i]-tag) if gpf[i]-tag>=0: list1.append(gpf[i]-tag) tag=gpf[i] comb['q2_change'] = list1 list1 = [] list1.append(0) gpf = comb['q1_hash'].values tag = gpf[0] for i in range(comb.shape[0])[1:]: if gpf[i]-tag<0: list1.append(gpf[i]-tag) if gpf[i]-tag>=0: list1.append(gpf[i]-tag) tag=gpf[i] comb['q1_change'] = list1 # comb['q1_q2_change_mean'] = (comb['q1_change'] + comb['q2_change'])/2.0 # comb['q1_q2_change_min'] = comb[['q1_change','q2_change']].apply(lambda x: min(x[0],x[1]),axis=1) comb['q1_q2_change_max'] = comb[['q1_change','q2_change']].apply(lambda x: max(x[0],x[1]),axis=1) Q_CHANGE = 0 comb['q_change_pair'] = (comb['q1_change']= 0]#[corr_list] # test_comb = comb[comb['is_duplicate'] < 0]#[corr_list] comb[comb['is_duplicate'] >= 0].corr() comb.to_csv(config.FEAT_PATH+'magic_feature.csv',index=False) # + # encoding: utf-8 import sys reload(sys) sys.setdefaultencoding('utf8') import pandas as pd import hashlib import gc df_train = pd.read_csv(config.RAW_PATH+'train.csv').fillna("") df_test = pd.read_csv(config.RAW_PATH+'test.csv').fillna("") # Generating a graph of Questions and their neighbors def generate_qid_graph_table(row): hash_key1 = hashlib.md5(row["question1"].encode('utf-8')).hexdigest() hash_key2 = hashlib.md5(row["question2"].encode('utf-8')).hexdigest() qid_graph.setdefault(hash_key1, []).append(hash_key2) qid_graph.setdefault(hash_key2, []).append(hash_key1) qid_graph = {} print('Apply to train...') df_train.apply(generate_qid_graph_table, axis=1) print('Apply to test...') df_test.apply(generate_qid_graph_table, axis=1) def pagerank(): MAX_ITER = 20 d = 0.85 # Initializing -- every node gets a uniform value! 
pagerank_dict = {i: 1 / len(qid_graph) for i in qid_graph} num_nodes = len(pagerank_dict) for iter in range(0, MAX_ITER): for node in qid_graph: local_pr = 0 for neighbor in qid_graph[node]: local_pr += pagerank_dict[neighbor] / len(qid_graph[neighbor]) pagerank_dict[node] = (1 - d) / num_nodes + d * local_pr return pagerank_dict print('Main PR generator...') pagerank_dict = pagerank() def get_pagerank_value(row): q1 = hashlib.md5(row["question1"].encode('utf-8')).hexdigest() q2 = hashlib.md5(row["question2"].encode('utf-8')).hexdigest() s = pd.Series({ "q1_pr": pagerank_dict[q1], "q2_pr": pagerank_dict[q2] }) return s print('Apply to train...') pagerank_feats_train = df_train.apply(get_pagerank_value, axis=1) print('Writing train...') # pagerank_feats_train.to_csv("pagerank_train.csv", index=False) del df_train gc.collect() print('Apply to test...') pagerank_feats_test = df_test.apply(get_pagerank_value, axis=1) print('Writing test...') # pagerank_feats_test.to_csv("pagerank_test.csv", index=False) # - train = pd.concat([pagerank_feats_train, pagerank_feats_test], axis=0).reset_index(drop=True) train.to_csv(config.FEAT_PATH+'pagerank.csv', index=False) # + from tqdm import tqdm import re train_orig = pd.read_csv(config.RAW_PATH+'train.csv', header=0)#.sample(n=1000) test_orig = pd.read_csv(config.RAW_PATH+'test.csv', header=0)#.sample(n=1000) df = pd.concat([train_orig[['question1', 'question2']], test_orig[['question1', 'question2']]], axis=0).reset_index(drop=True) locations = pd.read_csv(config.FEAT_PATH+"cities.csv") countries = set(locations['Country'].dropna(inplace=False).values.tolist()) all_places = countries regex = "|".join(sorted(set(all_places))) results = [] for index, row in tqdm(df.iterrows()): q1 = str(row['question1']) q2 = str(row['question2']) rr = {} q1_matches = [] q2_matches = [] if (len(q1) > 0): q1_matches = [i.lower() for i in re.findall(regex, q1, flags=re.IGNORECASE)] if (len(q2) > 0): q2_matches = [i.lower() for i in re.findall(regex, q2, flags=re.IGNORECASE)] rr['z_q1_place_num'] = len(q1_matches) rr['z_q1_has_place'] =len(q1_matches) > 0 rr['z_q2_place_num'] = len(q2_matches) rr['z_q2_has_place'] = len(q2_matches) > 0 rr['z_place_match_num'] = len(set(q1_matches).intersection(set(q2_matches))) rr['z_place_match'] = rr['z_place_match_num'] > 0 rr['z_place_mismatch_num'] = len(set(q1_matches).difference(set(q2_matches))) rr['z_place_mismatch'] = rr['z_place_mismatch_num'] > 0 results.append(rr) loc_df = pd.DataFrame.from_dict(results) #out_df.to_csv("../features/{}_place_matches.csv".format(dataset), index=False, header=True) # out_df.to_csv("{}_place_matches.csv".format(dataset)) # + magic_feature = pd.read_csv(config.FEAT_PATH+'magic_feature.csv') pagerank = pd.read_csv(config.FEAT_PATH+'pagerank.csv') magic_feature = pd.concat([magic_feature, pagerank, loc_df], axis=1) magic_feature['z_place_match']=magic_feature['z_place_match'].astype(int) magic_feature['z_place_mismatch']=magic_feature['z_place_mismatch'].astype(int) magic_feature['z_q1_has_place']=magic_feature['z_q1_has_place'].astype(int) magic_feature['z_q2_has_place']=magic_feature['z_q2_has_place'].astype(int) # + train_orig = pd.read_csv(config.RAW_PATH+'train.csv', header=0)#.sample(n=1000) test_orig = pd.read_csv(config.RAW_PATH+'test.csv', header=0)#.sample(n=1000) df = pd.concat([train_orig[['question1', 'question2']], test_orig[['question1', 'question2']]], axis=0).reset_index(drop=True) question_kcores = pd.read_csv(config.FEAT_PATH+'question_kcores.csv') question_kcores_dict = 
dict(zip(question_kcores.question, question_kcores.kcores)) def kcores(x): try: return question_kcores_dict[x.lower()] except: return -1 df['q1_kcores'] = df[['question1']].apply(lambda x: kcores(str(x['question1'])), axis=1) df['q2_kcores'] = df[['question2']].apply(lambda x: kcores(str(x['question2'])), axis=1) magic_feature[['q1_kcores','q2_kcores']] = df[['q1_kcores','q2_kcores']] # - # + ############ cliques networks ########### # %matplotlib inline import networkx as nx import pandas as pd from itertools import combinations import config G = nx.Graph() df = pd.read_csv(config.RAW_PATH+'train.csv',nrows=400000).fillna("") edges = [tuple(x) for x in df[['question1', 'question2']].values] G.add_edges_from(edges) map_label = dict(((x[0], x[1]), x[2]) for x in df[['question1', 'question2', 'is_duplicate']].values) map_clique_size = {} cliques = sorted(list(nx.find_cliques(G)), key=lambda x: len(x)) for cli in cliques: for q1, q2 in combinations(cli, 2): if (q1, q2) in map_label: map_clique_size[q1, q2] = len(cli) elif (q2, q1) in map_label: map_clique_size[q2, q1] = len(cli) df['clique_size'] = df.apply(lambda row: map_clique_size.get((row['question1'], row['question2']), -1), axis=1) # - df[df['clique_size']>2] # + import networkx as nx import random cliques = nx.find_cliques(G) cliques = [clq for clq in cliques if len(clq) >= 6] h = G.subgraph([n for n in random.choice(cliques)]) nx.draw(h, with_labels=True, alpha=0.8, font_size=11) # break # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- #export __all__ = ["grep", "grepTemplate"] import re, k1lib from k1lib.cli.init import BaseCli, Table, Row; import k1lib.cli as cli from collections import deque; from typing import Iterator #export inf = float("inf") class grep(BaseCli): def __init__(self, pattern:str, before:int=0, after:int=0, N:int=float("inf"), sep:bool=False): """Find lines that has the specified pattern. Example:: # returns ['d', 'd'] "" | grep("d") | deref() # returns ['c', 'd', '2', 'd'], 2 sections of ['c', 'd'] and ['2', 'd'] "" | grep("d", 1) | deref() # returns ['c', 'd'] "" | grep("d", 1, N=1) | deref() # returns ['d', 'e', 'd', '3', '4'], 2 sections of ['d', 'e'] and ['d', '3', '4'] "" | grep("d", 0, 3).till("e") | deref() # returns [['0', '1', '2'], ['3', '1', '4']] "0123145" | grep("1", 2, 1, sep=True) | deref() You can also separate out the sections:: # returns [['c', 'd'], ['2', 'd']] "abcde12d34" | grep("d", 1, sep=True) | deref() # returns [['c', 'd']] "abcde12d34" | grep("d", 1, N=1, sep=True) | deref() # returns [['1', '2', '3'], ['1', '4', '5']] "0123145" | grep("1", sep=True).till() | deref() :param pattern: regex pattern to search for in a line :param before: lines before the hit. Outputs independent lines :param after: lines after the hit. Outputs independent lines :param N: max sections to output :param sep: whether to separate out the sections as lists""" super().__init__() self.pattern = re.compile(pattern) self.before = before; self.after = after self.N = N; self.sep = sep; self.tillPattern = None def till(self, pattern:str=None): """Greps until some other pattern appear. Inclusive, so you might want to trim the last line. 
Example:: # returns ['5', '6', '7', '8'], includes last item range(10) | join("") | grep("5").till("8") | deref() # returns ['d', 'e', 'd', '3', '4'] "abc4" | grep("d").till("e") | deref() # returns ['d', 'e'] "abc4" | grep("d", N=1).till("e") | deref() If initial pattern and till pattern are the same, then you don't have use this method at all. Instead, do something like this:: # returns ['1', '2', '3'] "0123145" | grep("1", after=1e9, N=1) | deref()""" if pattern == self.pattern.pattern: pattern = None # "\ue000" is in unicode's private use area, so extremely unlikely that we # will actually run into it in normal text processing, because it's not text self.tillPattern = re.compile(pattern or "\ue000") self.tillAfter = self.after; self.after = inf; return self def __ror__(self, it:Iterator[str]) -> Iterator[str]: self.sectionIdx = 0; tillPattern = self.tillPattern if self.sep: self.sep = False; elems = []; idx = 0 for line in (it | self): if self.sectionIdx > idx: # outputs whatever remaining if len(elems) > 0: yield list(elems) idx = self.sectionIdx; elems = [] elems.append(line) yield list(elems); return queue = deque([], self.before); counter = 0 # remaining lines after to display cRO = k1lib.RunOnce(); cRO.done() for line in it: if self.pattern.search(line): # new section self.sectionIdx += 1; counter = self.after+1; cRO.revert() if self.sectionIdx > self.N: return yield from queue; queue.clear(); yield line elif tillPattern is not None and tillPattern.search(line) and counter == inf: # closing section counter = self.tillAfter + 1; cRO.revert(); yield line if counter == 0: queue.append(line) # saves recent past lines elif counter > 0: # yielding "after" section if cRO.done(): yield line counter -= 1 # joined, normal assert "de12d34" | grep("d") | cli.deref() == ['d', 'd'] assert "abcde12d34" | grep("d", 1) | cli.deref() == ['c', 'd', '2', 'd'] assert "abcde12d34" | grep("d", 1, N=1) | cli.deref() == ['c', 'd'] assert "0123456789" | grep("4", after=1e9, N=1) | cli.deref() == ['4', '5', '6', '7', '8', '9'] # joined, till assert "abcde12d34" | grep("d", N=1).till("e") | cli.deref() == ['d', 'e'] assert "abcde12d34" | grep("d").till("e") | cli.deref() == ['d', 'e', 'd', '3', '4'] assert range(10) | cli.join("") | grep("5").till("8") | cli.deref() == ['5', '6', '7', '8'] assert range(10) | cli.join("") | grep("5", N=1).till("8") | cli.deref() == ['5', '6', '7', '8'] assert "0123145" | grep("1", N=1).till("1") | cli.deref() == ['1', '2', '3'] assert "0123145" | grep("1", N=1).till() | cli.deref() == ['1', '2', '3'] assert "0123145" | grep("1", after=1e9, N=1) | cli.deref() == ['1', '2', '3'] # separated assert "abcde12d34" | grep("d", 1, sep=True) | cli.deref() == [['c', 'd'], ['2', 'd']] assert "abcde12d34" | grep("d", 1, N=1, sep=True) | cli.deref() == [['c', 'd']] assert "0123145" | grep("1", sep=True).till() | cli.deref() == [['1', '2', '3'], ['1', '4', '5']] assert "0123145" | grep("1", 4, 2, sep=True) | cli.deref() == [['0', '1', '2', '3'], ['1', '4', '5']] assert "0123145" | grep("1", 2, 1, sep=True) | cli.deref() == [['0', '1', '2'], ['3', '1', '4']] #export class grepTemplate(BaseCli): def __init__(self, pattern:str, template:str): """Searches over all lines, pick out the match, and expands it to the templateand yields""" super().__init__() self.pattern = re.compile(pattern); self.template = template def __ror__(self, it:Iterator[str]): super().__ror__(it) for line in it: matchObj = self.pattern.search(line) if matchObj is None: continue yield matchObj.expand(self.template) # 
!../../export.py cli/grep # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) movies = pd.read_csv('tmdb_5000_movies.csv') credits = pd.read_csv('tmdb_5000_credits.csv') movies.head() credits.head() movies = movies.merge(credits,on='title') movies.head(1) movies = movies[['movie_id','title','overview','genres','keywords','cast','crew']] import ast credits.head(1)['cast'].values movies.head(1) movies=movies.merge(credits,on='title') movies.head(1) credits.head(1) movies=movies.merge(credits,on='title') movies.head(1) movies = movies[['movie_id','title','overview','genres','keywords','cast','crew']] movies.isnull().sum() movies.dropna(inplace=True) movies.isnull().sum() movies.duplicated().sum() movies.iloc[0].genres def convert(obj): L = [] for i in ast.literal_eval(obj): L.append(i['name']) return L movies['genres'] = movies['genres'].apply(convert) movies.head() movies['keywords'] = movies['keywords'].apply(convert) movies.head() def convert3(obj): L = [] counter = 0 for i in ast.literal_eval(obj): if counter!=3: L.append(i['name']) counter+=1 else: break return L movies['cast'] = movies['cast'].apply(convert3) movies.head() def fetch_director(obj): L = [] for i in ast.literal_eval(obj): if i['job'] == 'Director': L.append(i['name']) break return L movies['crew'] = movies['crew'].apply(fetch_director) movies.head() movies['overview'][10] movies['overview'] = movies['overview'].apply(lambda x:x.split()) def collapse(L): L1 = [] for i in L: L1.append(i.replace(" ","")) return L1 movies['cast'] = movies['cast'].apply(collapse) movies['crew'] = movies['crew'].apply(collapse) movies['genres'] = movies['genres'].apply(collapse) movies['keywords'] = movies['keywords'].apply(collapse) movies.head() movies['tags'] = movies['overview'] + movies['genres'] + movies['keywords'] + movies['cast'] + movies['crew'] movies.head() new = movies.drop(columns=['overview','genres','keywords','cast','crew']) new.head() new['tags'] = new['tags'].apply(lambda x: " ".join(x)) new.head() new['tags'] = new['tags'].apply(lambda x:x.lower()) import nltk from nltk.stem.porter import PorterStemmer ps=PorterStemmer() def stem(text): y=[] for i in text.split(): y.append(ps.stem(i)) return " ".join(y) new['tags']=new['tags'].apply(stem) from sklearn.feature_extraction.text import CountVectorizer cv = CountVectorizer(max_features=5000,stop_words='english') vector = cv.fit_transform(new['tags']).toarray() new.head() from sklearn.metrics.pairwise import cosine_similarity similarity = cosine_similarity(vector) similarity[1] def recommend(movie): index = new[new['title'] == movie].index[0] distances = sorted(list(enumerate(similarity[index])),reverse=True,key = lambda x: x[1]) for i in distances[1:6]: print(new.iloc[i[0]].title) recommend('Gandhi') import pickle pickle.dump(new,open('movie_list.pkl','wb')) pickle.dump(similarity,open('similarity.pkl','wb')) pickle.dump(new.to_dict(),open('movie_dict.pkl','wb')) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: py37 # language: python # name: py37 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt from os import listdir from keras.preprocessing import sequence 
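# (Added sketch) A minimal example of the classifier these Keras imports point towards: the
# sensor windows prepared later in this notebook (padded/truncated to shape
# [n_samples, 60, 4]) together with the MovementAAL targets could feed a small LSTM like the
# one below. Layer sizes, optimizer and epochs are illustrative assumptions, not values taken
# from the original notebook.
def build_lstm_classifier(seq_len=60, n_features=4):
    from keras.models import Sequential
    from keras.layers import LSTM, Dense
    model = Sequential()
    model.add(LSTM(32, input_shape=(seq_len, n_features)))  # one recurrent layer over each window
    model.add(Dense(1, activation='sigmoid'))                # binary movement / no-movement output
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
# Example usage, once final_seq and targets exist further down:
#   model = build_lstm_classifier()
#   model.fit(final_seq, (targets + 1) // 2, epochs=10, batch_size=32)  # map ±1 labels to 0/1 if needed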
import tensorflow as tf from keras.models import Sequential from keras.layers import Dense from keras.layers import LSTM from keras.optimizers import Adam from keras.models import load_model from keras.callbacks import ModelCheckpoint # + # Example代码分析 ### (1)由几个部分构成 ### (2)功能是什么 ### (3)输入和输出都是什么(类型和格式) ### (4)输入数据可以用什么工具生成 # - df1 = pd.read_csv('../data/MovementAAL/dataset/MovementAAL_RSS_1.csv') df2 = pd.read_csv('../data/MovementAAL/dataset/MovementAAL_RSS_2.csv') df1.head(5), df2.head(5) df1.shape, df2.shape path = '../data/MovementAAL/dataset/MovementAAL_RSS_' sequences = list() for i in range(1,315): file_path = path + str(i) + '.csv' #print(file_path) df = pd.read_csv(file_path, header=0) values = df.values #print(values) sequences.append(values) targets = pd.read_csv('../data/MovementAAL/dataset/MovementAAL_target.csv') targets = targets.values[:,1] #我们现在有一个列表“序列”,其中包含来自运动传感器的数据和“目标”, #其中包含csv文件的标签。当我们打印序列[0]时,从第一个csv文件中获取传感器的值: sequences[0] #如前所述,数据集是在三对不同的房间中收集的——因此有三组。 #此信息可用于将数据集划分为训练集、测试集和验证集。我们现在将加载DatasetGroup csv文件: groups = pd.read_csv('../data/MovementAAL/groups/MovementAAL_DatasetGroup.csv', header=0) groups = groups.values[:,1] print(groups) #让我们找出最小长度、最大长度和平均长度: len_sequences = list() for one_seq in sequences: len_sequences.append(len(one_seq)) pd.Series(len_sequences).describe() #Padding the sequence with the values in last row to max length to_pad = 129 new_seq = list() for one_seq in sequences: len_one_seq = len(one_seq) #print("one_seq", one_seq) last_val = one_seq[-1] #print("last_val", last_val) n = to_pad - len_one_seq to_concat = np.repeat(one_seq[-1], n).reshape(4, n).transpose() #print(np.repeat(one_seq[-1], n)) #print(np.repeat(one_seq[-1], n).reshape(4, n)) new_one_seq = np.concatenate([one_seq, to_concat]) new_seq.append(new_one_seq) # print(new_seq) final_seq = np.stack(new_seq) #数组堆叠 print(final_seq) #truncate the sequence to length 60 from keras.preprocessing import sequence seq_len = 60 final_seq=sequence.pad_sequences(final_seq, maxlen=seq_len, padding='post', dtype='float', truncating='post') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: LiveProject # language: python # name: liveproject # --- # # Milestone 4 # Set up the questions and the paragraphs # + import pandas as pd df = pd.read_csv("paragraphs.csv") questions = [ ["Does a company need to keep track of the carbon intensity of the electricity?"], ["What metric is used for evaluating emission?"], ["How does one get to net-zero emissions economy?"], ["What is net-zero emissions economy?"], ["How can carbon emission of the processes of cement clinker be reduced?"], ["How is the Weighted Cogeneration Threshold calculated?"], ["What is carbon capture and sequestration?"], ["What stages does CCS consist of?"], ["What should be the average energy consumption of a water supply system?"], ["What are sludge treatments?"], ["How is the process of anaerobic digestion?"], ["What is considered Zero direct emission vehicles?"] ] # - # Set up the vectorizer and user a function to most similar paragraph to the question. 
# + from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.metrics.pairwise import linear_kernel vectorizer = TfidfVectorizer() vector_corpus = vectorizer.fit_transform(df["paragraph"]) def get_context(question): q_v = vectorizer.transform(question) lk_rank = linear_kernel(q_v, vector_corpus).flatten() return df["paragraph"][lk_rank.argsort()[-1]] # - # Initiate the QA pipeline and a function which return the answer # + from transformers import pipeline MODEL = "distilbert-base-uncased-distilled-squad" qamodel = pipeline("question-answering", model=MODEL, tokenizer=MODEL, device=-1) def get_answer_pipeline(question, context): answer = qamodel(question=question, context=context) return answer["answer"].rstrip(".").rstrip(",").lstrip("(").rstrip(")").rstrip(".").strip("'").strip(":") # - # Go through the different questions and print the answers and the contexts for question in questions: context = get_context(question) answer = get_answer_pipeline(question, context) print(f"{question[0]}\n\n{answer}\n\n{context}") print("-"*100) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Cooling Fraction Function # - Using **scipy.interpolate.RectBivariateSpline** to interpolate function: # - Linear interpolation (kx=1, ky=1) # + import numpy as np import pandas as pd import sympy as s import math import glob import matplotlib import matplotlib.pyplot as plt from cycler import cycler from scipy import interpolate from astropy import units as u from astropy import constants as const from astropy.units import imperial imperial.enable() # %matplotlib inline import os #home_dir = os.environ['/Users/eriksolhaug'] + '/' import pyCloudy as pc # - pd.set_option("display.max_rows", None, "display.max_columns", None) pd.set_option('precision', 16) pd.set_option('display.float_format', lambda x: '%.3e' % x) # + # The directory in which we will have the model # You may want to change this to a different place so that the current directory # will not receive all the Cloudy files. dir_ = '/Users/eriksolhaug/cloudy/c17.02/cloudoutputs/' user_dir = '/Users/eriksolhaug/cloudy/c17.02/' # Define verbosity to high level (will print errors, warnings and messages) pc.log_.level = 3 # - def calccoolingfrac(model_name, model_dir): """ Returns FUNCTION object f(X, Y) where X is a hydrogen density (LOG) and Y is a temperature (LOG). The output fraction is in LOG. The temperature range given to the Cloudy input file is 10^4 - 10^7 K and the hydrogen density range given to the Cloudy input file is 10^-5.0 - 10^-3.0 cm^-3 Inputs: model_name - a string, name of model run in CLOUDY (f.ex. 'model_41') model_dir - a string, directory containing model output files Output: f - an object, function yielding cooling fraction for the requested for the given model in LOG as a function of hydrogen density and temperature To get the cooling fraction for a certain hden X and temperature Y, simply call f(X, Y) and take the exponent 10**f(X, Y) of this to find the cooling fraction for the model at the specified hydrogen density and temperature. 
An example: In[]: f = calccolfrac('model_47', '/Users/eriksolhaug/cloudy/c17.02/cloudoutputs/') 10**f(-5.0, 5.5) Out[]: array([[4.50693322e-33]]) -- where 4.50693322e-33 is the cooling fraction of OVI for hydrogen density of 10**-5.0 and a temperature of 10**5.5 Kelvin """ # Interpolating function for the data computed in Cloudy grid_df = pd.read_csv(f'{model_dir}/{model_name}.grid', sep='\t') hdengrid = grid_df.loc[:, 'HDEN=%f L'] tempgrid = grid_df.loc[:, 'CONSTANT '] x = hdengrid y = tempgrid # Loading .cool file cols = ['#depth cm','Temp K','Htot erg/cm3/s','Ctot erg/cm3/s','cool fracs'] cool_df = pd.read_csv(f'{model_dir}{model_name}.cool', sep = '\t', usecols=cols) cool_df = cool_df.iloc[:, 0:4] # Excluding cool fracs column which contains unnecessary info, we want the H... and C... columns # Getting fractional columns z_array = [] for index in range(0, x.size): frac_cool = float(cool_df.iloc[index*2, -1]) # Need to convert this to float as values are reported as strings z_array.append(np.log10(frac_cool+1e-50)) #Adding a small value to avoid case of log(0) z = pd.DataFrame(z_array, columns=['z']) if model_name == 'model_43': step = 5 elif model_name == 'model_45': step = 41 elif model_name == 'model_46': step = 401 elif model_name == 'model_47': step = 101 else: step = 11 # Putting vectors in dataframe representation xyz = pd.DataFrame({'x' : x, 'y' : y, 'z' : z['z']}) # Simplifying x and y inputs xi = xyz['x'][:step] yi = xyz['y'][::step] # Preparing spline arrays twoDarray = [] for i in range(len(xi)): array = [] for j in range(len(yi)): idx = i + j*step array.append(xyz['z'][idx]) twoDarray.append(array) # Simplifying z inputs zi = twoDarray print(xi, yi, zi) print(len(xi), len(yi), len(zi)) # INTERPOLATION f = interpolate.RectBivariateSpline(xi, yi, zi, kx=1, ky=1) # Linear determined by kx, ky # Displaying match between old fractions and interpolated function interpolated_z = [] for temp in yi: for hden in xi: interpolated_z.append(f(hden, temp)) interpolated_z = np.concatenate(interpolated_z) print(interpolated_z) print(min(interpolated_z)) return f def coolingplot(keyword, second_val_arr, model_name, model_dir, plot_dir): ''' Function used to make and save plots of the fractional columns for different elements Input: keyword - a string, either 'temp' or 'hden' for what needs to be plotted against second_val - a float in LOG, value for either temp or hden (whatever not requested by keyword) - the plot needs a set temp or hden and this is set by this parameter model_name - a string, name of model run in CLOUDY (f.ex. 'model_42') model_dir - a string, directory containing model output files plot_dir - a string, directory for where to save plot ''' # Plotting tick_fontsize = 16 axis_fontsize = 22 lwidth = 3 lstyle = '-' fig, ax = plt.subplots(1, 1) fig.set_size_inches(8, 5) fig.tight_layout(w_pad = 10.0) if keyword == 'temp': constant_arr = second_val_arr vary = np.arange(4, 7, 0.01) xaxis = 10**vary elif keyword == 'hden': vary = np.arange(-5.0, -3.0, 0.01) constant_arr = second_val_arr xaxis = 10**vary else: print('Not a valid keyword. 
Needs either "temp" or "hden".') f = calccoolingfrac(model_name, model_dir) if keyword == 'temp': other_keyword = 'hden' for constant in constant_arr: ax.plot(xaxis, 10**f(constant, vary)[0], linewidth=lwidth, label=f'10^{constant}cm-3', linestyle=lstyle) elif keyword == 'hden': other_keyword = 'temp' for constant in constant_arr: ax.plot(xaxis, 10**f(vary, constant), linewidth=lwidth, label=f'10^{constant}K', linestyle=lstyle) ax.set_xscale('log') ax.set_yscale('log') if keyword == 'temp': ax.set_xlabel(r'Temperature ($K$)', fontsize = 12) else: ax.set_xlabel(r'Hydrogen density ($cm^{-3}$)', fontsize = 12) ax.set_ylabel('Cooling Fraction', fontsize = 12) # ax.set_ylim(1e-36, 1e-25) ax.set_title(f'Cooling fractions | Constant {other_keyword}', fontsize=18, fontweight='bold', pad=15) for tick in ax.xaxis.get_major_ticks(): tick.label.set_fontsize(fontsize=tick_fontsize) for tick in ax.yaxis.get_major_ticks(): tick.label.set_fontsize(fontsize=tick_fontsize) ax.xaxis.label.set_fontsize(axis_fontsize) ax.yaxis.label.set_fontsize(axis_fontsize) ax.tick_params(which='major', width=2, length=8) ax.tick_params(which='minor', width=1, length=5) ax.grid(linestyle='--') ax.legend(bbox_to_anchor=(1.05, 1), loc='upper left', prop={'size': 18}) ax.set_prop_cycle(cycler('color', ['c', 'm', 'y', 'k'])) fig.savefig(f'{plot_dir}{keyword}vary_coolingfractions.pdf', bbox_inches="tight") # ### Example Executions: model_name = 'model_47' model_dir = '/Users/eriksolhaug/cloudy/c17.02/cloudoutputs/' f = calccoolingfrac(model_name, model_dir) 10**f(-5.0,5.5) # $\: \uparrow$ This is the cooling fraction for the model at a given hydrogen density and temperature. # One can use the same procedure for other ions: # ### Plotting fractional columns for all ions (C, N, O, Si) model_name = 'model_47' plot_dir = '/Users/eriksolhaug/cloudy/c17.02/es/es_data/cooling_fractions/' + model_name + '/' keyword_array = ['temp', 'hden'] temp_val_array = np.arange(-5.0, -2.5, 0.5) hden_val_array = np.arange(4.0, 6.5, 0.5) for keyword in keyword_array: if keyword == 'temp': second_val_array = temp_val_array elif keyword == 'hden': second_val_array = hden_val_array plot = coolingplot(keyword, second_val_array, model_name, model_dir, plot_dir) # + # Individual plots: # def coolingplot(keyword, second_val, model_name, model_dir, plot_dir): # ''' # Function used to make and save plots of the fractional columns for different elements # Input: # keyword - a string, either 'temp' or 'hden' for what needs to be plotted against # second_val - a float in LOG, value for either temp or hden (whatever not requested by keyword) - the plot needs a set temp or hden and this is set by this parameter # model_name - a string, name of model run in CLOUDY (f.ex. 'model_42') # model_dir - a string, directory containing model output files # plot_dir - a string, directory for where to save plot # ''' # # Plotting # tick_fontsize = 16 # axis_fontsize = 22 # lwidth = 3 # lstyle = '-' # fig, ax = plt.subplots(1, 1) # fig.set_size_inches(8, 5) # fig.tight_layout(w_pad = 10.0) # if keyword == 'temp': # constant = second_val # vary = np.arange(4, 7, 0.01) # xaxis = 10**vary # elif keyword == 'hden': # vary = np.arange(-5.0, -3.0, 0.01) # constant = second_val # xaxis = 10**vary # else: # print('Not a valid keyword. 
Needs either "temp" or "hden".') # f = calccoolingfrac(model_name, model_dir) # if keyword == 'temp': # other_keyword = 'hden' # ax.plot(xaxis, 10**f(constant, vary)[0], linewidth=lwidth, label=f'{model_name}', linestyle=lstyle) # elif keyword == 'hden': # other_keyword = 'temp' # ax.plot(xaxis, 10**f(vary, constant), linewidth=lwidth, label=f'{model_name}', linestyle=lstyle) # ax.set_xscale('log') # ax.set_yscale('log') # if keyword == 'temp': # ax.set_xlabel(r'Temperature ($K$)', fontsize = 12) # else: # ax.set_xlabel(r'Hydrogen density ($cm^{-3}$)', fontsize = 12) # ax.set_ylabel('Cooling Fraction', fontsize = 12) # # ax.set_ylim(1e-36, 1e-25) # ax.set_title(f'Cooling fractions | Constant {other_keyword}: 10^{second_val}', fontsize=18, fontweight='bold', pad=15) # for tick in ax.xaxis.get_major_ticks(): # tick.label.set_fontsize(fontsize=tick_fontsize) # for tick in ax.yaxis.get_major_ticks(): # tick.label.set_fontsize(fontsize=tick_fontsize) # ax.xaxis.label.set_fontsize(axis_fontsize) # ax.yaxis.label.set_fontsize(axis_fontsize) # ax.tick_params(which='major', width=2, length=8) # ax.tick_params(which='minor', width=1, length=5) # ax.grid(linestyle='--') # ax.legend(bbox_to_anchor=(1.05, 1), loc='upper left', prop={'size': 18}) # ax.set_prop_cycle(cycler('color', ['c', 'm', 'y', 'k'])) # fig.savefig(f'{plot_dir}{keyword}vary_{second_val}_coolingfractions.pdf', bbox_inches="tight") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Caesar Cipher alphabet = 'abcdefghijklmnopqrstuvwxyz' input_text = 'hello' # ## Build a very simple version output = '' for char in input_text: alpha_index = alphabet.find(char) output = output + alphabet[alpha_index+3] print(output) # ## What if cipher index goes beyond end of alphabet? output = '' for char in input_text: alpha_index = alphabet.find(char) output = output + alphabet[alpha_index+30] print(output) 30%26 # ## Write a function to deal with shift def shift_amount(i): '''Will determine the shift, taking into account the length of the alphabet. Takes integer - returns integer''' return i%26 # ## Now test with shift > 26 output_1 = '' for char in input_text: alpha_index = alphabet.find(char) output_1 = output_1 + alphabet[shift_amount(alpha_index+30)] print(output_1) # ## A complete function def encrypt(text,required_shift): out_string = '' text = text.lower() for char in text: if char not in alphabet: out_string = out_string + char else: alpha_index = alphabet.find(char) out_string = out_string + alphabet[shift_amount(alpha_index +required_shift)] return out_string new_string = 'Once upon' shift = encrypt(new_string,5) shift # + basker = '''I confess at these words a shudder passed through me. There was a thrill in the doctor’s voice which showed that he was himself deeply moved by that which he told us. Holmes leaned forward in his excitement and his eyes had the hard, dry glitter which shot from them when he was keenly interested.''' encrypt_basker = encrypt(basker,10) print(encrypt_basker) # - print(encrypt(encrypt_basker,-10)) question_text = 'The cat sat on the mat' print(encrypt(question_text,11)) question_text. 
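# ## Decrypting (added example)
# A minimal decryption helper, reusing the encrypt function defined above: decrypting a
# Caesar cipher is just encrypting with the opposite shift.
def decrypt(text, required_shift):
    '''Undo a Caesar shift by applying the negative shift. Takes string and integer - returns string.'''
    return encrypt(text, -required_shift)
print(decrypt(encrypt_basker, 10))           # recovers the original passage (lower-cased)
print(decrypt(encrypt('Once upon', 5), 5))   # round trip on the earlier example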
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # **Learn the Basics** || # `Quickstart `_ || # `Tensors `_ || # `Datasets & DataLoaders `_ || # `Transforms `_ || # `Build Model `_ || # `Autograd `_ || # `Optimization `_ || # `Save & Load Model `_ # # Learn the Basics # =================== # # Authors: # ` `_, # ` `_, # ` `_, # ` `_, # ` `_ # # Most machine learning workflows involve working with data, creating models, optimizing model # parameters, and saving the trained models. This tutorial introduces you to a complete ML workflow # implemented in PyTorch, with links to learn more about each of these concepts. # # We'll use the FashionMNIST dataset to train a neural network that predicts if an input image belongs # to one of the following classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, # Bag, or Ankle boot. # # `This tutorial assumes a basic familiarity with Python and Deep Learning concepts.` # # # Running the Tutorial Code # ------------------ # You can run this tutorial in a couple of ways: # # - **In the cloud**: This is the easiest way to get started! Each section has a Colab link at the top, which opens a notebook with the code in a fully-hosted environment. Pro tip: Use Colab with a GPU runtime to speed up operations *Runtime > Change runtime type > GPU* # - **Locally**: This option requires you to setup PyTorch and TorchVision first on your local machine (`installation instructions `_). Download the notebook or copy the code into your favorite IDE. # # # How to Use this Guide # ----------------- # If you're familiar with other deep learning frameworks, check out the `0. Quickstart `_ first # to quickly familiarize yourself with PyTorch's API. # # If you're new to deep learning frameworks, head right into the first section of our step-by-step guide: `1. Tensors `_. # # # .. include:: /beginner_source/basics/qs_toc.txt # # .. 
toctree:: # :hidden: # # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [py35] # language: python # name: Python [py35] # --- # # 7.3 String Manipulation(字符串处理) # # python很多内建方法很适合处理string。而且对于更复杂的模式,可以配合使用正则表达式。而pandas则混合了两种方式。 # # # 1 String Object Methods(字符串对象方法) # # 大部分string处理,使用内建的一些方法就足够了。比如,可以用split来分割用逗号区分的字符串: val = 'a,b, guido' val.split(',') # split经常和strip一起搭配使用来去除空格(包括换行符): pieces = [x.strip() for x in val.split(',')] pieces # 可以使用+号把::和字符串连起来: first, second, third = pieces first + '::' + second + '::' + third # 但这种方法并不python,更快的方法是直接用join方法: '::'.join(pieces) # 其他一些方法适合锁定子字符串位置相关的。用in关键字是检测substring最好的方法,当然,index和find也能完成任务: 'guido' in val val.index(',') val.find(':') # 注意index和find的区别。如果要找的string不存在的话,index会报错。而find会返回-1: val.index(':') # count会返回一个substring出现的次数: val.count(',') # replace会取代一种出现方式(pattern)。也通常用于删除pattern,传入一个空字符串即可: val.replace(',', '::') val.replace(',', '') # 这里一些内建的string方法: # # ![](http://oydgk2hgw.bkt.clouddn.com/pydata-book/m643y.png) # # # 2 Regular Expressions(正则表达式) # # 正则表达式能让我们寻找更复杂的pattern。通常称一个表达式为regex,由正则表达语言来代表一个字符串模式。可以使用python内建的re模块来使用。 # # > 关于正则表达式,有很多教学资源,可以自己找几篇来学一些,这里不会介绍太多。 # # re模块有以下三个类别:patther matching(模式匹配), substitution(替换), splitting(分割)。通常这三种都是相关的,一个regex用来描述一种pattern,这样会有很多种用法。这里举个例子,假设我们想要根据空格(tabs,spaces,newlines)来分割一个字符串。用于描述一个或多个空格的regex是`\s+`: import re text = "foo bar\t baz \tqux" re.split('\s+', text) # 当调用`re.split('\s+', text)`的时候,正则表达式第一次被compile编译,并且split方法会被调用搜索text。我们可以自己编译regex,用re.compile,可以生成一个可以多次使用的regex object: regex = re.compile('\s+') regex.split(text) # 如果想要得到符合regex的所有结果,以一个list结果返回,可以使用findall方法: regex.findall(text) # > 为了防止\在正则表达式中的逃逸,推荐使用raw string literal,比如`r'C:\x'`,而不是使用`'C:\\x` # # 使用re.compile创建一个regex object是被强烈推荐的,如果你打算把一个表达式用于很多string上的话,这样可以节省CPU的资源。 # # match和search,与findall关系紧密。不过findall会返回所有匹配的结果,而search只会返回第一次匹配的结果。更严格地说,match只匹配string开始的部分。这里举个例子说明,我们想要找到所有的邮件地址: # + text = """Dave Steve Rob Ryan """ pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}' # - # re.IGNORECASE makes the regex case-insensitive regex = re.compile(pattern, flags=re.IGNORECASE) # 使用findall找到一组邮件地址: regex.findall(text) # search返回text中的第一个匹配结果。match object能告诉我们找到的结果在text中开始和结束的位置: m = regex.search(text) m text[m.start():m.end()] # regex.match返回None,因为它只会在pattern存在于stirng开头的情况下才会返回匹配结果: print(regex.match(text)) # 而sub返回一个新的string,把pattern出现的地方替换为我们指定的string: print(regex.sub('REDACTED', text)) # 假设你想要找到邮件地址,同时,想要把邮件地址分为三个部分,username, domain name, and domain suffix.(用户名,域名,域名后缀)。需要给每一个pattern加一个括号: pattern = r'([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\.([A-Z]{2,4})' regex = re.compile(pattern, flags=re.IGNORECASE) # match object会返回一个tuple,包含多个pattern组份,通过groups方法: m = regex.match('') m.groups() # findall会返回a list of tuples: regex.findall(text) # sub也能访问groups的结果,不过要使用特殊符号 \1, \2。\1表示第一个匹配的group,\2表示第二个匹配的group,以此类推: print(regex.sub(r'Username: \1, Domain: \2, Suffix: \3', text)) # 这里给一些正则表达式的方法: # # ![](http://oydgk2hgw.bkt.clouddn.com/pydata-book/mj4vc.png) # # # 3 Vectorized String Functions in pandas(pandas中的字符串向量化函数) # # 一些复杂的数据清理中,string会有缺失值: import numpy as np import pandas as pd data = {'Dave': '', 'Steve': '', 'Rob': '', 'Wes': np.nan} data = pd.Series(data) data data.isnull() # 可以把一些字符串方法和正则表达式(用lambda或其他函数)用于每一个value上,通过data.map,但是这样会得到NA(null)值。为了解决这个问题,series有一些数组导向的方法可以用于字符串操作,来跳过NA值。这些方法可以通过series的str属性;比如,我们想检查每个电子邮箱地址是否有'gmail' with str.contains: data.str 
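# (Added example) The email addresses were stripped from this copy of the data, so the string
# matches below come back empty; assuming a Series that still contains addresses, the grouped
# pattern defined above can also be expanded column-wise with str.extract, which returns one
# DataFrame column per capture group (username, domain, suffix):
data.str.extract(pattern, flags=re.IGNORECASE)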
data.str.contains('gmail') # 正则表达式也可以用,配合任意的re选项,比如IGNORECASE: pattern data.str.findall(pattern, flags=re.IGNORECASE) # 有很多方法用于向量化。比如str.get或index索引到str属性: matches = data.str.match(pattern, flags=re.IGNORECASE) matches # 为了访问嵌套list里的元素,我们可以传入一个index给函数: matches.str.get(1) matches.str.get(0) # 也可以使用这个语法进行切片: data.str[:5] # 这里有一些字符串向量化的方法: # # ![](http://oydgk2hgw.bkt.clouddn.com/pydata-book/owc7z.png) # # ![](http://oydgk2hgw.bkt.clouddn.com/pydata-book/cn2y0.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import gym import math import random import numpy as np import matplotlib import matplotlib.pyplot as plt from collections import namedtuple, deque from itertools import count from PIL import Image import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torchvision.transforms as T import utils import gc # + # set up matplotlib is_ipython = 'inline' in matplotlib.get_backend() if is_ipython: from IPython import display plt.ion() # if gpu is to be used device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(device) # - class DQN(nn.Module): def __init__(self, c, outputs): super(DQN, self).__init__() self.conv1 = nn.Conv2d(c, 32, kernel_size=8, stride=4) self.bn1 = nn.BatchNorm2d(32) self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2) self.bn2 = nn.BatchNorm2d(64) self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=1) self.bn3 = nn.BatchNorm2d(64) self.hidden = nn.Linear(3136, 512, bias=True) self.head = nn.Linear(512, outputs) # Called with either one element to determine next action, or a batch # during optimization. Returns tensor([[left0exp,right0exp]...]). def forward(self, x): x = F.relu(self.bn1(self.conv1(x))) x = F.relu(self.bn2(self.conv2(x))) x = F.relu(self.bn3(self.conv3(x))) x = x.view(x.size(0), -1) x = F.relu(self.hidden(x)) return F.softmax(self.head(x)) env = gym.make('Breakout-v0').unwrapped env.unwrapped.get_action_meanings() # + BATCH_SIZE = 32 GAMMA = 0.99 EPS_START = 1 EPS_END = 0.1 EPS_DECAY = 100000 TARGET_UPDATE = 3000 MEMORY_SIZE = 20000 HISTORY_LENGTH = 3 SKIP_FRAMES = 1 CHECKPOINT_UPDATE = 500 CHART_UPDATE = 25 # Get number of actions from gym action space n_actions = env.action_space.n - 1 policy_net = DQN(HISTORY_LENGTH, n_actions).to(device) target_net = DQN(HISTORY_LENGTH, n_actions).to(device) optimizer = optim.RMSprop(policy_net.parameters(), lr=0.00025, eps=0.01, momentum=0.95) memory = utils.ReplayMemory(MEMORY_SIZE) gc.collect() torch.cuda.empty_cache() steps_done = 0 i_episode = 0 durations = [] frames = deque([], maxlen=HISTORY_LENGTH) # + checkpoint_name = "breakout_dqn_v4_23000" if checkpoint_name: extra = utils.load_checkpoint(policy_net, optimizer, checkpoint_name) memory = extra["memory"] steps_done = extra["steps_done"] i_episode = extra["i_episode"] durations = extra["durations"] # + policy_net = policy_net.to(device) target_net = target_net.to(device) target_net.load_state_dict(policy_net.state_dict()) target_net.eval() # + def optimize_model(policy_net, target_net, optimizer, memory, device, BATCH_SIZE, GAMMA): if len(memory) < BATCH_SIZE: return transitions = memory.sample(BATCH_SIZE) # Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for # detailed explanation). This converts batch-array of Transitions # to Transition of batch-arrays. 
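# (Added note) `Transition` used just below is the namedtuple
# ('state', 'action', 'next_state', 'reward') defined at the bottom of this cell.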
batch = Transition(*zip(*transitions)) # Compute a mask of non-final states and concatenate the batch elements # (a final state would've been the one after which simulation ended) non_final_mask = torch.tensor(tuple(map(lambda s: s is not None, batch.next_state)), device=device, dtype=torch.bool) non_final_next_states = torch.cat([s for s in batch.next_state if s is not None]) state_batch = torch.cat(batch.state) action_batch = torch.cat(batch.action) - 1 reward_batch = torch.cat(batch.reward) # Compute Q(s_t, a) - the model computes Q(s_t), then we select the # columns of actions taken. These are the actions which would've been taken # for each batch state according to policy_net state_action_values = policy_net(state_batch).gather(1, action_batch) # Compute V(s_{t+1}) for all next states. # Expected values of actions for non_final_next_states are computed based # on the "older" target_net; selecting their best reward with max(1)[0]. # This is merged based on the mask, such that we'll have either the expected # state value or 0 in case the state was final. next_state_values = torch.zeros(BATCH_SIZE, device=device) next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach() # Compute the expected Q values expected_state_action_values = (next_state_values * GAMMA) + reward_batch # Compute Huber loss loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1)) # Optimize the model optimizer.zero_grad() loss.backward() for param in policy_net.parameters(): param.grad.data.clamp_(-1, 1) optimizer.step() Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward')) # + num_episodes = 1000000 name = "breakout_dqn_v4_" def custom_get_screen(obvs, device): screen = obs.transpose((2, 0, 1)) screen = np.ascontiguousarray(screen, dtype=np.float32) / 255 screen = torch.from_numpy(screen) return utils.screen_transforms(screen).unsqueeze(0).to(device) gc.collect() torch.cuda.empty_cache() for _ in range(num_episodes): # Initialize the environment and state env.reset() for _ in range(HISTORY_LENGTH): frames.append(utils.get_screen(env, device)) state = torch.cat(tuple(frames), 1) total_reward = 0 for t in count(): # Select and perform an action action = utils.epsilon_greedy(state, policy_net, steps_done, n_actions, \ device, EPS_START, EPS_END, EPS_DECAY) + 1 steps_done += 1 # Observe new state reward = 0 for _ in range(SKIP_FRAMES): obs, p_reward, done, _ = env.step(action.item()) frames.append(custom_get_screen(obs, device)) reward += p_reward if done: break total_reward += reward reward = torch.tensor([reward], device=device) if not done: next_state = torch.cat(tuple(frames), 1) else: next_state = None # Store the transition in memory memory.push(state, action, next_state, reward) # Move to the next state state = next_state # Perform one step of the optimization (on the target network) optimize_model(policy_net, target_net, optimizer, memory, device, BATCH_SIZE, GAMMA) if done: durations.append(total_reward) #utils.plot_performance(durations) break i_episode += 1 if i_episode % CHART_UPDATE == 0: utils.plot_performance(durations) if i_episode % CHECKPOINT_UPDATE == 0: utils.save_checkpoint(policy_net, optimizer, name+str(i_episode), extra={ "i_episode": i_episode, "steps_done": steps_done, "durations": durations, "memory": memory }) if i_episode > CHECKPOINT_UPDATE*2: utils.delete_checkpoint(name+str(i_episode-(CHECKPOINT_UPDATE*2))) # Update the target network, copying all weights and biases in DQN if i_episode % 
TARGET_UPDATE == 0: target_net.load_state_dict(policy_net.state_dict()) print('Complete') env.render() env.close() plt.ioff() plt.show() # - env.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="etniX_KTlJ5U" pycharm={"name": "#%% md\n"} # # LSTM-VAE pro # + pycharm={"name": "#%%\n"} from model.bagging_lstmvae_pro import * import torch.utils.data as data_utils from utils.eval_methods import * from sklearn import preprocessing device = get_default_device() min_max_scaler = preprocessing.MinMaxScaler() # Read data normal = pd.read_csv("data/SWaT/SWaT_Dataset_Normal_v1.csv", nrows=10000) # , nrows=1000) normal = normal.drop(["Timestamp", "Normal/Attack"], axis=1) # Transform all columns into float64 for i in list(normal): normal[i] = normal[i].apply(lambda x: str(x).replace(",", ".")) normal = normal.astype(float) # 数据归一化 x = normal.values x_scaled = min_max_scaler.fit_transform(x) normal = pd.DataFrame(x_scaled) # Read data attack = pd.read_csv("data/SWaT/SWaT_Dataset_Attack_v0.csv", sep=";", nrows=10000) # , nrows=1000) labels = [float(label != 'Normal') for label in attack["Normal/Attack"].values] attack = attack.drop(["Timestamp", "Normal/Attack"], axis=1) # Transform all columns into float64 for i in list(attack): attack[i] = attack[i].apply(lambda x: str(x).replace(",", ".")) attack = attack.astype(float) x = attack.values x_scaled = min_max_scaler.transform(x) attack = pd.DataFrame(x_scaled) # + pycharm={"name": "#%%\n"} ############## windows ################### window_size = 12 # np.arange(window_size)[None, :] 1*12 (0,1,2,3,4,5,6,7,8,9,10,11)一行12列 # np.arange(normal.shape[0] - window_size)[:, None] (1000-12)*1 (0,1,2,3,4,5...) 
988列,每列递增 # np.arange(window_size)[None, :] + np.arange(normal.shape[0] - window_size)[:, None] (1000-12)*12 windows_normal = normal.values[np.arange(window_size)[None, :] + np.arange(attack.shape[0] - window_size)[:, None]] windows_attack = attack.values[np.arange(window_size)[None, :] + np.arange(attack.shape[0] - window_size)[:, None]] windows_labels=[] for i in range(len(labels)-window_size): windows_labels.append(list(np.int_(labels[i:i+window_size]))) y_test = [1.0 if (np.sum(window) > 0) else 0 for window in windows_labels] y_test = np.array(y_test) # + pycharm={"name": "#%%\n"} ############## training ################### BATCH_SIZE = 500 N_EPOCHS = 10 N = 5 * round((normal.shape[1] / 3) / 5) # 10 for both bootstrap sample size and number of estimators decoder_layers = 2 # number of hidden layers for each decoder z = int((N / 2) - 1) # size of latent space windows_normal_train = windows_normal[:int(np.floor(.8 * windows_normal.shape[0]))] windows_normal_val = windows_normal[int(np.floor(.8 * windows_normal.shape[0])):int(np.floor(windows_normal.shape[0]))] train_loader = torch.utils.data.DataLoader(data_utils.TensorDataset( torch.from_numpy(windows_normal_train).float().view( ([windows_normal_train.shape[0], windows_normal_train.shape[1], windows_normal_train.shape[2]])) ), batch_size=BATCH_SIZE, shuffle=False, num_workers=0) val_loader = torch.utils.data.DataLoader(data_utils.TensorDataset( torch.from_numpy(windows_normal_val).float().view( ([windows_normal_val.shape[0], windows_normal_train.shape[1], windows_normal_train.shape[2]])) ), batch_size=BATCH_SIZE, shuffle=False, num_workers=0) test_loader = torch.utils.data.DataLoader(data_utils.TensorDataset( torch.from_numpy(windows_attack).float().view( ([windows_attack.shape[0], windows_attack.shape[1], windows_attack.shape[2]])) ), batch_size=BATCH_SIZE, shuffle=False, num_workers=0) # + pycharm={"name": "#%%\n"} model = BaggingLstmVAE(time_step=window_size, input_dim=normal.shape[1], hidden_size=N, n_estimators=N, max_features=N, latent_dim=z, decoding_depth=decoder_layers) for i in range(model.n_estimators): model.LSTMVAEs[i] = to_device(model.LSTMVAEs[i], device) model.DivLstmVAEs[i] = to_device(model.DivLstmVAEs[i], device) # + pycharm={"name": "#%%\n"} history = training(N_EPOCHS, model, train_loader) # + pycharm={"name": "#%%\n"} lower, upper = testing(model, test_loader) # + pycharm={"name": "#%%\n"} # 点调整法 windows_attack = windows_attack[:, -1, :] attack_tiles = np.tile(windows_attack.reshape(windows_attack.shape[0], 1, windows_attack.shape[1]), (1, N, 1)) result = np.where((attack_tiles < lower.numpy()) | (attack_tiles > upper.numpy()), 1, 0) inference = np.mean(np.mean(result, axis=1), axis=1) print(inference[0:100]) t, th = bf_search(inference, y_test, start=0, end=1, step_num=1000, display_freq=50) # + pycharm={"name": "#%%\n"} result = np.where((attack_tiles < upper.numpy()) | (attack_tiles > lower.numpy()), 1, 0) inference = np.mean(np.mean(result, axis=1), axis=1) print(inference[0:100]) t, th = bf_search(inference, y_test, start=0, end=1, step_num=1000, display_freq=50) # + pycharm={"name": "#%%\n"} a = lower.numpy() b = upper.numpy() print(a) print(b) # + pycharm={"name": "#%%\n"} a = a.flatten() b = b.flatten() # + pycharm={"name": "#%%\n"} aUb, aLb, eq = [], [], [] for i in range(len(a)): if a[i] > b[i]: aUb.append(i) elif a[i] < b[i]: aLb.append(i) else: eq.append(i) # + pycharm={"name": "#%%\n"} # print(aUb) # print(aLb) # print(eq) print(len(aUb)) print(len(aLb)) print(len(eq)) # --- # jupyter: # jupytext: # 
text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] nbpresent={"id": "1a745ba6-5e68-4ad1-b6a8-38c95d3a080c"} slideshow={"slide_type": "slide"} # # Electrical properties of electrons # hi again! # # This lecture: # * Drude model and why it works # * Electrical conductivity # * Hall effect # # Edit these notes online at http://tiny.cc/solidstatephys/drude.ipynb # (or download from http://tiny.cc/solidstate_src/drude.ipynb) # + [markdown] slideshow={"slide_type": "slide"} # # Electrons in fields # # * Start with Lorentz force: # $$ \frac{d\mathbf{p}}{dt} = e\left(\mathbf{E} + \frac{1}{m}\mathbf{p}\times \mathbf{B}\right) $$ # **Attention**: $e < 0$! # + [markdown] slideshow={"slide_type": "fragment"} # * $\mathbf{B} = 0\quad \Rightarrow\quad \mathbf{p} = e \mathbf{E} t,$ electrons accelerate forever! # + [markdown] slideshow={"slide_type": "fragment"} # * Electrons collide with impurities, lattice vibrations (phonons) # $\Rightarrow$ scattering makes $\mathbf{p}$ random. # + [markdown] nbpresent={"id": "b8928c20-20fa-4776-b2c4-c45342ce0f14"} slideshow={"slide_type": "fragment"} # * Simplest model for scattering: # $$\frac{d \mathbf{p}}{dt} = - \frac{\mathbf{p}}{\tau}$$ # $\tau$ average time between scattering events. # + [markdown] nbpresent={"id": "934aa51c-31a0-4aa6-a9ca-5af7fe96a948"} slideshow={"slide_type": "subslide"} # # Collision rates # # $\tau^{-1}$ is rate of collisions; additive from phonons and crystal disorder: # # ![](figures/rates.svg) # + [markdown] nbpresent={"id": "c53b0a02-ce4f-4588-8840-b4f831e79d54"} slideshow={"slide_type": "subslide"} # # [Drude Model](https://en.wikipedia.org/wiki/Drude_model): # # > Electrons are moved by electric and magnetic fields, # > and slowed by friction # > – 1900, 1905 (quotation approximate) # # $$ \frac{d\mathbf{p}}{dt} = e\left(\mathbf{E} + \frac{1}{m}\mathbf{p}\times \mathbf{B}\right)-\frac{\mathbf{p}}{\tau} $$ # # No interactions, 25 years before the Pauli exclusion principle! # + [markdown] nbpresent={"id": "8fbc3c99-e758-42b0-94e2-393d586619cf"} slideshow={"slide_type": "subslide"} # # Charge mobility: # # In steady state: # $$\mathbf{p} = \text{const};\quad 0 = e\mathbf{E} - m\mathbf{v}/\tau\Rightarrow \mathbf{v} = \frac{e \tau}{m}\mathbf{E}$$ # # Mobility $\mu = e\tau / m$ [cm$^2$/Vs], varies from $\sim 1$ to $\sim 10^8$. # + [markdown] nbpresent={"id": "c134ea81-3daa-4a91-8b09-a938db828117"} slideshow={"slide_type": "subslide"} # # Electric conductivity # # Current density $\mathbf{j} = n e \mathbf{v}$ ($n$ is electron density) # $$\mathbf{j} = e n \mu \mathbf{E},$$ # so electric conductivity $\sigma = ne\mu = e^2 n\tau/m$. # # Sanity check: $j = \frac{I}{A};\quad E = \frac{V}{L}; \quad I = \frac{A}{L}\sigma V$ — this is just Ohm's law! 
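# (Added check) A quick numerical sanity check of the mobility and conductivity formulas above,
# assuming copper-like values for the carrier density and scattering time; the numbers are
# illustrative assumptions, not part of the original lecture.
e = 1.602e-19   # C, magnitude of the electron charge
m = 9.109e-31   # kg, electron mass
n = 8.5e28      # m^-3, assumed conduction-electron density of copper
tau = 2.5e-14   # s, assumed scattering time
mu = e * tau / m              # mobility
sigma = n * e**2 * tau / m    # conductivity
print(f"mu ~ {mu * 1e4:.0f} cm^2/Vs")   # ~44 cm^2/Vs
print(f"sigma ~ {sigma:.1e} S/m")       # ~6e7 S/m, close to the measured value for copper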
# + [markdown] nbpresent={"id": "2c877278-4eba-4b74-9dde-6e9227a131e8"} slideshow={"slide_type": "slide"} # # [Hall effect](https://en.wikipedia.org/wiki/Hall_effect) # Switch to $\mathbf{B} \neq 0$ # # ![](figures/hall_1.svg) # + [markdown] nbpresent={"id": "aaac3be9-a562-44c1-8118-ba45bae8118d"} slideshow={"slide_type": "subslide"} # # Determining $E$ # # Force balance: $ 0 = e\left(\mathbf{E} + \mathbf{v}\times \mathbf{B}\right)-\mathbf{p}/\tau $ # # ![](figures/hall_2.svg) # + [markdown] nbpresent={"id": "2c877278-4eba-4b74-9dde-6e9227a131e8"} slideshow={"slide_type": "subslide"} # # Hall conductance # # $\mathbf{E} = \mathbf{p}/e\tau -\mathbf{v}\times \mathbf{B}$ # # Two components of $\mathbf{E}$: # * $\mathbf{E}_\text{ext} \parallel \mathbf{j}$ same as with $\mathbf{B} = 0$ # * Hall field: $\mathbf{E}_\text{H} = \mathbf{j}\times\mathbf{B}/ne$ measures charge carrier concentration. # # Hall coefficient $(ne)^{-1}$ is sometimes positive # $\Rightarrow$ particles with positive charge (or negative mass)! # + [markdown] nbpresent={"id": "c617c070-a14c-42ea-841a-38e5d0f0c861"} slideshow={"slide_type": "slide"} # # Limitations of the Drude-Lorentz model # # * Interactions between electrons are very strong: # Addressed by Landau using [Fermi liquid theory](https://en.wikipedia.org/wiki/Fermi_liquid_theory) # Result: can introduce **quasiparticles** that behave like non-interacting electrons! # * What about Fermi sea? # + [markdown] nbpresent={"id": "c3962ed4-4f67-418a-b1ea-a48d50903ebf"} slideshow={"slide_type": "slide"} # # Fermi vs Drude # # Fermi velocity $v_\text{F} = \sqrt{2E_\text{F} / m} \sim 10^6 \text{m/s}$ # Drift velocity $v = \mu E \sim \text{mm/s} \ll v_F$ ! # # Explanation: most Fermi sea is "inert", we only see effects of the Fermi surface: # # ![](figures/fermi_drude.svg) # + [markdown] slideshow={"slide_type": "slide"} # # Conclusions # # * Drude-Lorentz model describes electric properties of conductors # * New concepts: scattering rate/time, mobility, Hall coefficient # * Works surprisingly well with interactions and Fermi statistics # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="bnmoQwR4zsEZ" # # Principal component analysis (PCA) # 1. 
Generate 2D data of 1000 points # + id="_askiIvrh1eA" colab={"base_uri": "https://localhost:8080/", "height": 826} outputId="60d3f9ad-338e-48ce-b659-32ba230c2b9e" import numpy as np import matplotlib.pyplot as plt mean1=np.array([0,0]) mean2=np.array([4,5]) var=np.array([[1,0.1],[0.1,1]]) np.random.seed(0) data1=np.random.multivariate_normal(mean1,var,500) data2=np.random.multivariate_normal(mean2,var,500) data=np.concatenate((data1,data2)) label=np.concatenate((np.zeros(data1.shape[0]),np.ones(data2.shape[0]))) plt.figure() plt.scatter(data[:,0],data[:,1],c=label) plt.title('Data visualization') plt.figure() plt.scatter(data[:,0],np.zeros(data.shape[0]),c=label) plt.title('distribution in x direction') plt.figure() plt.scatter(data[:,1],np.zeros(data.shape[0]),c=label) plt.title('distribution in y direction') # + id="4Ql5DHisnPw5" colab={"base_uri": "https://localhost:8080/", "height": 298} outputId="8ef78df4-c54c-4080-d240-55085a060890" #Data normalization mean=np.mean(data,axis=0) std=np.std(data,axis=0) data=(data-mean)/std # perform data normalization here using mean substraction and std division plt.figure() plt.scatter(data[:,0],data[:,1],c=label) plt.title('Data visualization') # + id="5X28YubVUSAR" colab={"base_uri": "https://localhost:8080/", "height": 860} outputId="fdf2fa39-822d-4b0c-e261-ad9559dccc7f" # PCA # coverance matrix cov=data.T @ data # using sigular value decomposition u,s,v=np.linalg.svd(cov) trans_data=data @ v # insert your code here var_pca1=np.var(trans_data[:,0]) var_pca2=np.var(trans_data[:,1]) print('variance along pca1 direction=',var_pca1) print('variance along pca2 direction=',var_pca2) plt.figure() plt.scatter(trans_data[:,0],trans_data[:,1],c=label) plt.title('Data visualization') plt.figure() plt.scatter(trans_data[:,0],np.zeros(data.shape[0]),c=label) plt.title('distribution in pca1 direction') plt.figure() plt.scatter(trans_data[:,1],np.zeros(data.shape[0]),c=label) plt.title('distribution in pca2 direction') # + id="NJHGyIHPdomj" class pca: # Constructor def __init__(self, name='reg',data=None,retain_dim=None): self.name = name # Create an instance variable self.data=data self.retain_dim=retain_dim if retain_dim is not None else self.ret_dim(self.data) # compute pca transform value def pca_comp(self,data): data=self.pre_process(data) cov=self.data.T @ self.data # insert your code here u,_,_=np.linalg.svd(cov) # singular value decomposition u_req= u[:,:self.retain_dim]# insert your code here trans_data=self.data @ u_req # insert your code here return trans_data,u_req # compute the required retain dimension def ret_dim(self,data): data=self.pre_process(data) cov=data.T @ data _,s,_=np.linalg.svd(cov) #s=[a**2 for a in s] s=[a/sum(s) for a in s] summ=0 for i in range(len(s)): summ+=s[i] if summ>0.9: ind=i break # ind=# insert your code here # can also take 90% return ind+1 def pre_process(self,data): data1=(data-np.mean(data,axis=0)) data=data1/(np.std(data1,axis=0)+10**(-30)) # avoid divide by zero return data # + id="cvezv1V5tmrs" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="f1e4bd51-0281-4b50-ef69-204fd7029a2f" # pca transformation PCA=pca(data=data) # retain dims automatically becomes 1 here trans_data,trans_mat=PCA.pca_comp(data) plt.scatter(trans_data,np.zeros(trans_data.shape),c=label) # + id="Bkj155AEw9gX" colab={"base_uri": "https://localhost:8080/"} outputId="25331815-0377-4e71-92fe-506e6136aafe" #classification using pca #use k-nearest neighbour classifier after dimensionality reduction from sklearn.neighbors import 
KNeighborsClassifier k=5 knn = KNeighborsClassifier(n_neighbors=k) knn.fit(trans_data, label) print('KNN Training accuracy =',knn.score(trans_data,label)*100) # test data np.random.seed(0) data1=np.random.multivariate_normal(mean1,var,50) data2=np.random.multivariate_normal(mean2,var,50) data=np.concatenate((data1,data2)) tst_label=np.concatenate((np.zeros(data1.shape[0]),np.ones(data2.shape[0]))) print('KNN Testing accuracy =',knn.score(PCA.pre_process(data) @ trans_mat,tst_label)*100) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import logging import importlib importlib.reload(logging) # see https://stackoverflow.com/a/21475297/1469195 log = logging.getLogger() log.setLevel('INFO') import sys logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s', level=logging.INFO, stream=sys.stdout) # + # %%capture import os import site os.sys.path.insert(0, '/home/schirrmr/code/reversible/') os.sys.path.insert(0, '/home/schirrmr/braindecode/code/braindecode/') os.sys.path.insert(0, '/home/schirrmr/code/explaining/reversible//') # %load_ext autoreload # %autoreload 2 import numpy as np import logging log = logging.getLogger() log.setLevel('INFO') import sys logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s', level=logging.INFO, stream=sys.stdout) import matplotlib from matplotlib import pyplot as plt from matplotlib import cm # %matplotlib inline # %config InlineBackend.figure_format = 'png' matplotlib.rcParams['figure.figsize'] = (12.0, 1.0) matplotlib.rcParams['font.size'] = 14 import seaborn seaborn.set_style('darkgrid') from reversible2.sliced import sliced_from_samples from numpy.random import RandomState import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable import numpy as np import copy import math import itertools import torch as th from braindecode.torch_ext.util import np_to_var, var_to_np from reversible2.splitter import SubsampleSplitter from reversible2.view_as import ViewAs from reversible2.invert import invert from reversible2.affine import AdditiveBlock from reversible2.plot import display_text, display_close from reversible2.bhno import load_file, create_inputs # + orig_train_cnt = load_file('/data/schirrmr/schirrmr/HGD-public/reduced/train/4.mat') train_cnt = orig_train_cnt.reorder_channels(['C3',]) train_inputs = create_inputs(train_cnt, final_hz=64, half_before=True) # - orig_test_cnt = load_file('/data/schirrmr/schirrmr/HGD-public/reduced/test/4.mat') test_cnt = orig_test_cnt.reorder_channels(['C3', ]) test_inputs = create_inputs(test_cnt, final_hz=64, half_before=True) cuda = True if cuda: train_inputs = [i.cuda() for i in train_inputs] test_inputs = [i.cuda() for i in test_inputs] # + from reversible2.distribution import TwoClassDist from reversible2.blocks import dense_add_block, conv_add_block_3x3 from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter set_random_seeds(2019011641, cuda) feature_model = nn.Sequential( SubsampleSplitter(stride=[2,1],chunk_chans_first=False),# 2 x 32 conv_add_block_3x3(2,32), conv_add_block_3x3(2,32), SubsampleSplitter(stride=[2,1],chunk_chans_first=True), # 4 x 16 conv_add_block_3x3(4,32), conv_add_block_3x3(4,32), 
SubsampleSplitter(stride=[2,1],chunk_chans_first=True), # 8 x 8 conv_add_block_3x3(8,32), conv_add_block_3x3(8,32), SubsampleSplitter(stride=[2,1],chunk_chans_first=True), # 16 x 4 conv_add_block_3x3(16,32), conv_add_block_3x3(16,32), SubsampleSplitter(stride=[2,1],chunk_chans_first=True), # 32 x 2 conv_add_block_3x3(32,32), conv_add_block_3x3(32,32), SubsampleSplitter(stride=[2,1],chunk_chans_first=True), # 64 x 1 ViewAs((-1,64,1, 1), (-1,64)), dense_add_block(64,64), dense_add_block(64,64), dense_add_block(64,64), dense_add_block(64,64), dense_add_block(64,64), dense_add_block(64,64), RFFT(), ) if cuda: feature_model.cuda() device = list(feature_model.parameters())[0].device from reversible2.ot_exact import ot_euclidean_loss_for_samples class_dist = TwoClassDist(2,62) class_dist.cuda() optim_model = th.optim.Adam(feature_model.parameters()) optim_dist = th.optim.Adam(class_dist.parameters(), lr=1e-2) # - i_class = 0 class_ins = train_inputs[i_class] from reversible2.timer import Timer with Timer(name='all'): with Timer(name='samples'): samples = class_dist.get_samples(i_class, len(train_inputs[i_class]) * 5) with Timer(name='invert'): inverted = invert(feature_model, samples) with Timer(name='forward'): outs = feature_model(class_ins) #with Timer(name='ot_out'): # ot_loss_out = ot_euclidean_loss_for_samples(outs[:,:2].squeeze(), samples[:,:2].squeeze()) #with Timer(name='ot_in'): # ot_loss_in = ot_euclidean_loss_for_samples(class_ins.squeeze(), inverted.squeeze()) # + from timeit import default_timer x = class_ins times = [] start = default_timer() for module in feature_model.children(): x = module(x) times.append(default_timer()) times_inv = [] start_inv = default_timer() x = samples for module in list(feature_model.children())[::-1]: x = invert(nn.Sequential(module), x) times_inv.append(default_timer()) # - (np.array(times) - start) * 1000 (np.array(times_inv) - start_inv) * 1000 list(zip([m.__class__.__name__ for m in feature_model.children()], np.diff(np.insert(np.array(times) - start, 0, 0) * 1000))) list(zip([m.__class__.__name__ for m in feature_model.children()], np.diff(np.insert(np.array(times_inv) - start_inv,0,0) * 1000)[::-1])) plt.plot(np.diff(np.insert(np.array(times) - start, 0, 0) * 1000)) plt.plot(np.diff(np.insert(np.array(times_inv) - start_inv,0,0) * 1000)[::-1]) # + from plot import plot_outs n_epochs = 2001 for i_epoch in range(n_epochs): optim_model.zero_grad() optim_dist.zero_grad() for i_class in range(len(train_inputs)): class_ins = train_inputs[i_class] samples = class_dist.get_samples(i_class, len(train_inputs[i_class]) * 5) inverted = invert(feature_model, samples) outs = feature_model(class_ins) ot_loss_out = ot_euclidean_loss_for_samples(outs[:,:2].squeeze(), samples[:,:2].squeeze()) ot_loss_in = ot_euclidean_loss_for_samples(class_ins.squeeze(), inverted.squeeze()) other_class_ins = train_inputs[1-i_class] other_outs = feature_model(other_class_ins) changed_outs = class_dist.change_to_other_class(other_outs, i_class_from=1-i_class, i_class_to=i_class) changed_inverted = invert(feature_model, changed_outs) ot_transformed_in = ot_euclidean_loss_for_samples(class_ins.squeeze(), changed_inverted.squeeze()) ot_transformed_out = ot_euclidean_loss_for_samples(changed_outs[:,:2].squeeze(), samples[:,:2].squeeze(),) loss = ot_loss_in + ot_loss_out + ot_transformed_in + ot_transformed_out loss.backward() optim_model.step() optim_dist.step() if i_epoch % (n_epochs // 20) == 0: print("Epoch {:d} of {:d}".format(i_epoch, n_epochs)) print("Loss: 
{:E}".format(loss.item())) print("OT Loss In: {:E}".format(ot_loss_in.item())) print("OT Loss Out: {:E}".format(ot_loss_out.item())) print("Transformed OT Loss In: {:E}".format(ot_transformed_in.item())) print("Transformed OT Loss Out: {:E}".format(ot_transformed_out.item())) plot_outs(feature_model, train_inputs, test_inputs, class_dist) fig = plt.figure(figsize=(8,2)) plt.plot(var_to_np(th.cat((th.exp(class_dist.class_log_stds), th.exp(class_dist.non_class_log_stds)))), marker='o') display_close(fig) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="YPdxitvrCxQc" outputId="41ef1844-a45e-423a-8371-cd88f540a73b" ## typical imports # %matplotlib inline # !pip install lightkurve==1.9.0 #b/c non-standard library & we want a specific version import lightkurve as lk import numpy as np import pandas as pd import matplotlib.pyplot as plt # + colab={"base_uri": "https://localhost:8080/", "height": 99} id="ygBs5I04C4uP" outputId="91d9eaa6-6c51-488c-f1d5-516c3db779f4" #find available data search_result = lk.search_lightcurvefile('TIC 190885165') search_result # + colab={"base_uri": "https://localhost:8080/"} id="WTu7gSoWDACj" outputId="93f286fb-5d4b-4879-8e92-d28c6770a701" #download all data available lc_files = search_result.download_all() lc_files # + colab={"base_uri": "https://localhost:8080/", "height": 387} id="Dqlfa5MnDImj" outputId="f72fa8d4-a361-4936-8568-347203a5f66e" #clean & visualize data lc = lc_files[0].PDCSAP_FLUX.normalize() #select first file, select detrended flux, normalize #NOTE: if only one file available try removing [0] b/c it will cause an error lcflat=lc.flatten() #flatten rotational modulations lc_gtg = lcflat.remove_nans() #remove nans from data gaps #NOTE: we skipped the remove outliers step to avoid cutting off transit bottoms lc.scatter(); #plot light curve before flattening (ie theres still rotational modulations in the data) # + id="vYIrgNI1Da2h" #functions from our LCAnalysis def periods(N=1000): period=np.logspace(-0.523, 1.43, N, endpoint=True) return period def duration_grid(N=3): duration=np.linspace(.01, 0.298, N) return duration def BLS(periodgrid,lightcurve,flat_time,durationgrid): from astropy.timeseries import BoxLeastSquares ''' Purppose ------------------ A Box Least Squares function to print out the compute stats of the periodogram. Parameters ------------------- period grid - describes how often the transit is happening (arrays different value) duration grid - describes the width of the transit (array of different values) lightcurve - lightkurve class object Returns list of stats in the following order: period, duration, transit-time, power, depth ------------------ Calculate several statistics of a candidate transit. 
''' #assigning parameters to variables period = periodgrid duration = durationgrid lc = lightcurve t = flat_time #time y = lc.flux #flux #dy is the uncertianty model = BoxLeastSquares(t, y, dy= lc.flux_err) periodogram = model.power(period,duration) max_power = np.argmax(periodogram.power) #calculates the max stats w/in the transit stats = [periodogram.period[max_power], periodogram.duration[max_power], periodogram.transit_time[max_power], max_power, periodogram.depth[max_power], periodogram.period,periodogram.power,periodogram.transit_time] #stats is the one peak, periodogram is the array return stats # + colab={"base_uri": "https://localhost:8080/"} id="aWB63DqHCn8N" outputId="5f84e81f-e163-4136-8614-e2cd23a148ab" #run BLS ##to use defaults #pg = periods() #dg = duration_grid() ## OR customize period & duration grids pg = np.linspace(1.2,2, 10000) # change 1st & 2nd values to change startpoint & endpoint of list of periods to check dg = np.linspace(.03,0.04, 10000) # change same as for period grid #run BLS bls_output = BLS(pg, lc_gtg, lc_gtg.time, dg) bls_output[0:5] #period, duration, transit_time, power, depth <---meaning of order of output values # + colab={"base_uri": "https://localhost:8080/", "height": 350} id="gZ_0KviYUaU9" outputId="413a9720-7164-4d64-be28-24b5557a7960" #plot the periodogram plt.figure(figsize=(10,5)) plt.plot(bls_output[5],bls_output[6]) #x-axis = periods checked, y-axis = strength of model fit at period checked plt.xlabel('Period [day]') plt.ylabel('Power') plt.title('BLS Periodogram'); #highest peak is the best fit orbital period, any other peaks should appear at multiples of the best peak OR could be more planets! # + colab={"base_uri": "https://localhost:8080/", "height": 408} id="2lYbxpH1FVn4" outputId="6e70f9c1-7e45-4559-d309-896621e6d231" #plot folded light curve #fold lc lcfold = lc_gtg.fold(bls_output[0],t0=bls_output[2]) #inputs are best fit model's (period value, transit_time) #the transit time helps center the transit in the folded plot lcfold.scatter() #actual plot # + id="fJG5k-7EF7M8" from astroquery.mast import Catalogs catalog_data = Catalogs.query_criteria(catalog='Tic',ID=190885165) catalog_df = catalog_data.to_pandas() # + id="JJLtmm6VLpZg" catalog_df['Depth']= bls_output[4] # + colab={"base_uri": "https://localhost:8080/", "height": 146} id="GE7GUd21LpfT" outputId="b99ca1db-dbb4-44aa-ccf8-8dff516bb340" catalog_df # + id="08uEtefqLx4J" def planet_radii(depth, star_radius): import math as m R_Sun = 696340000 #m R_Earth = 6371000 #m r_star = star_radius * R_Sun #b/c in R_Sun units radius = r_star * m.sqrt(depth) r_planet = radius / R_Earth #b/c need in R_Earth return r_planet # + colab={"base_uri": "https://localhost:8080/"} id="j4Qp6cenL5JB" outputId="bbdb68f6-0744-4a30-fb51-d6a8152ab288" PLANET = catalog_df['Depth'].to_numpy() #transit's depth STAR = catalog_df['rad'].to_numpy() #star's radius NAMES = catalog_df['ID'].to_numpy() #star id radii = [] #empty list to hold results for i,j in zip(PLANET,STAR): starr = planet_radii(i, j) #apply function radii.append(starr) #capture all results print("Estimared exoplanet radius:", starr, "Earth radii") #print out results # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # This file is part of the pyMOR project (http://www.pymor.org). # Copyright 2013-2019 pyMOR developers and contributors. All rights reserved. 
# License: BSD 2-Clause License (http://opensource.org/licenses/BSD-2-Clause) # + import numpy as np import scipy.linalg as spla import scipy.sparse as sps import matplotlib.pyplot as plt import matplotlib as mpl from pymor.core.config import config from pymor.models.iosys import LTIModel from pymor.operators.numpy import NumpyMatrixOperator from pymor.parameters.functionals import ProjectionParameterFunctional from pymor.reductors.bt import BTReductor from pymor.reductors.h2 import IRKAReductor # - # # Model # # https://morwiki.mpi-magdeburg.mpg.de/morwiki/index.php/Synthetic_parametric_model # + n = 100 # order of the resulting system # set coefficients a = -np.linspace(1e1, 1e3, n // 2) b = np.linspace(1e1, 1e3, n // 2) c = np.ones(n // 2) d = np.zeros(n // 2) # build 2x2 submatrices aa = np.empty(n) aa[::2] = a aa[1::2] = a bb = np.zeros(n) bb[::2] = b # set up system matrices Amu = sps.diags(aa, format='csc') A0 = sps.diags([bb, -bb], [1, -1], shape=(n, n), format='csc') B = np.zeros((n, 1)) B[::2, 0] = 2 C = np.empty((1, n)) C[0, ::2] = c C[0, 1::2] = d # - A0 = NumpyMatrixOperator(A0) Amu = NumpyMatrixOperator(Amu) B = NumpyMatrixOperator(B) C = NumpyMatrixOperator(C) A = A0 + Amu * ProjectionParameterFunctional('mu', ()) lti = LTIModel(A, B, C) # # Magnitude plot mu_list_short = [1/50, 1/20, 1/10, 1/5, 1/2, 1] # + w = np.logspace(0.5, 3.5, 200) fig, ax = plt.subplots() for mu in mu_list_short: lti.mag_plot(w, ax=ax, mu=mu, label=fr'$\mu = {mu}$') ax.legend() plt.show() # + w_list = np.logspace(0.5, 3.5, 200) mu_list = np.linspace(1/50, 1, 50) lti_w_mu = np.zeros((len(w_list), len(mu_list))) for i, mu in enumerate(mu_list): lti_w_mu[:, i] = spla.norm(lti.freq_resp(w_list, mu=mu), axis=(1, 2)) # - fig, ax = plt.subplots() out = ax.contourf(w_list, mu_list, lti_w_mu.T, norm=mpl.colors.LogNorm(), levels=np.logspace(np.log10(lti_w_mu.min()), np.log10(lti_w_mu.max()), 100)) ax.set_xlabel(r'Frequency $\omega$') ax.set_ylabel(r'Parameter $\mu$') ax.set_xscale('log') #ax.set_yscale('log') fig.colorbar(out, ticks=np.logspace(-2, 1, 7)) plt.show() # # Hankel singular values fig, ax = plt.subplots() for mu in mu_list_short: hsv = lti.hsv(mu=mu) ax.semilogy(range(1, len(hsv) + 1), hsv, '.-', label=fr'$\mu = {mu}$') ax.set_title('Hankel singular values') ax.legend() plt.show() # # System norms # + fig, ax = plt.subplots() mu_fine = np.linspace(1/50, 1, 20) h2_norm_mu = [lti.h2_norm(mu=mu) for mu in mu_fine] ax.plot(mu_fine, h2_norm_mu, '.-', label=r'$\mathcal{H}_2$-norm') if config.HAVE_SLYCOT: hinf_norm_mu = [lti.hinf_norm(mu=mu) for mu in mu_fine] ax.plot(mu_fine, hinf_norm_mu, '.-', label=r'$\mathcal{H}_\infty$-norm') hankel_norm_mu = [lti.hankel_norm(mu=mu) for mu in mu_fine] ax.plot(mu_fine, hankel_norm_mu, '.-', label='Hankel norm') ax.set_xlabel(r'$\mu$') ax.set_title('System norms') ax.legend() plt.show() # - # # Balanced truncation def reduction_errors(lti, r, mu_fine, method): h2_err_mu = [] hinf_err_mu = [] hankel_err_mu = [] for mu in mu_fine: rom_mu = method(lti, r, mu=mu) h2_err_mu.append((lti - rom_mu).h2_norm(mu=mu) / lti.h2_norm(mu=mu)) if config.HAVE_SLYCOT: hinf_err_mu.append((lti - rom_mu).hinf_norm(mu=mu) / lti.hinf_norm(mu=mu)) hankel_err_mu.append((lti - rom_mu).hankel_norm(mu=mu) / lti.hankel_norm(mu=mu)) return h2_err_mu, hinf_err_mu, hankel_err_mu r = 20 mu_fine = np.linspace(1/50, 1, 10) ( h2_bt_err_mu, hinf_bt_err_mu, hankel_bt_err_mu ) = reduction_errors(lti, r, mu_fine, lambda lti, r, mu=None: BTReductor(lti, mu=mu).reduce(r)) # + fig, ax = plt.subplots() 
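# The curves drawn below are the relative reduction errors computed by
# reduction_errors(): ||full - ROM|| / ||full|| in the H2 norm, in the
# H-infinity norm (only if Slycot is available), and in the Hankel norm,
# evaluated at each parameter value in mu_fine for the order-r BT reduced model.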
ax.semilogy(mu_fine, h2_bt_err_mu, '.-', label=r'$\mathcal{H}_2$') if config.HAVE_SLYCOT: ax.semilogy(mu_fine, hinf_bt_err_mu, '.-', label=r'$\mathcal{H}_\infty$') ax.semilogy(mu_fine, hankel_bt_err_mu, '.-', label='Hankel') ax.set_xlabel(r'$\mu$') ax.set_title('Balanced truncation errors') ax.legend() plt.show() # - # # Iterative Rational Krylov Algorithm (IRKA) ( h2_irka_err_mu, hinf_irka_err_mu, hankel_irka_err_mu ) = reduction_errors(lti, r, mu_fine, lambda lti, r, mu=mu: IRKAReductor(lti, mu=mu).reduce(r, conv_crit='h2')) # + fig, ax = plt.subplots() ax.semilogy(mu_fine, h2_irka_err_mu, '.-', label=r'$\mathcal{H}_2$') if config.HAVE_SLYCOT: ax.semilogy(mu_fine, hinf_irka_err_mu, '.-', label=r'$\mathcal{H}_\infty$') ax.semilogy(mu_fine, hankel_irka_err_mu, '.-', label='Hankel') ax.set_xlabel(r'$\mu$') ax.set_title('IRKA errors') ax.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Basic *nix Commands - Users & System # # + [markdown] slideshow={"slide_type": "slide"} # # Who, where, what, when...? # # + [markdown] slideshow={"slide_type": "fragment"} # ## Structure of a UNIX command # # $$ # \overbrace{\mathtt{ls}}^{\text{Command}} \quad \underbrace{{\overbrace{\mathtt{-s -t}}^{\text{Options}} \quad \mathtt{arg1 \quad arg2}}}_{\text{Arguments}} # $$ # # + [markdown] slideshow={"slide_type": "subslide"} # # # Who? # # + [markdown] slideshow={"slide_type": "fragment"} # ## Machine and OS # # * `hostname` - print the machine's hostname # * `uname` - print other system information # * `uname`- print OS name # * `uname -a` - print **a**ll system information # * `uname -m` - print **m**achine hardware # * `uname -n` - same as `hostname` # # + [markdown] slideshow={"slide_type": "fragment"} # ## Users # # * `passwd` - change your **passw**or**d** # * `who` - print logged in users (see also `users`) # * `who am i` - print your own username `whoami` # * `finger` - display information about users # # + [markdown] slideshow={"slide_type": "subslide"} # # # What? # # + [markdown] slideshow={"slide_type": "fragment"} # ## List # # * `ls` - **l**i**s**t directory contents # * `ls -a` - list **a**ll # * `ls -A` - list **A**lmost all # * `ls -l` - **l**ong listing format # # + [markdown] slideshow={"slide_type": "fragment"} # * Multiple options can be combined, e.g. # * `ls -l -A` $\equiv$ `ls -lA` # # + [markdown] slideshow={"slide_type": "fragment"} # ## Environment variables # # * The system uses *environment variables* to store system and user information # * define with `export MY_NEW_VAR='myVAr'` (**`bash`** shell) # * access with `$`, e.g. `echo $MY_NEW_VAR` # # + [markdown] slideshow={"slide_type": "subslide"} # # # Where? # # + [markdown] slideshow={"slide_type": "fragment"} # ## Navigation commands # # * `pwd` - **p**rint **w**orking **d**irectory # * `cd` - **c**hange **d**irectory # * `cd $HOME` or better, `cd ∼`, even better `cd` # * `cd ∼-` - go to previous directory # * `cd ..` - go up one level # # * Absolute *paths* (directory structures) start with `/`, the *root* directory # # + [markdown] slideshow={"slide_type": "subslide"} # # # When? 
# # + [markdown] slideshow={"slide_type": "fragment"} # ## Date # # * `date` -- display the full date # * `date +%m` – the `+%m` is a format specifier instruction to only print the month number # * `date +%h` – the `+%h` is a format specifier instruction to only print the month string # # + [markdown] slideshow={"slide_type": "slide"} # # Getting Help # # + [markdown] slideshow={"slide_type": "fragment"} # ## `man` documentation # # * `man ls` - brings up the `man`ual pages for `ls` # # + [markdown] slideshow={"slide_type": "fragment"} # ## Use Google! # # * Use good keyword strings "Linux ls" or "Unix echo" # # + hide_input=true slideshow={"slide_type": "skip"} language="javascript" # function hideElements(elements, start) { # for(var i = 0, length = elements.length; i < length;i++) { # if(i >= start) { # elements[i].style.display = "none"; # } # } # } # # var prompt_elements = document.getElementsByClassName("prompt"); # hideElements(prompt_elements, 0) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Importing Structures # This tutorial demonstrates how to visualize 3D structures using the [py3Dmol](http://3dmol.csb.pitt.edu/index.html) Jupyter Notebook widget. Here we show how to download structures from the PDB as well as read local structure files. # # 3Dmol.js: molecular visualization with WebGL, , , [Bioinformatics (2015) 31, 1322–1324](https://doi.org/10.1093/bioinformatics/btu829) # # Also, check out the [py3Dmol tutorial](http://nbviewer.jupyter.org/github/3dmol/3Dmol.js/blob/9050b97144e81f065df7eecc87ba9a16723ab14b/py3Dmol/examples.ipynb) for additional features. import py3Dmol # ## Download a structure from the RCSB Protein Data Bank # py3Dmol downloads PDB structures using the [compressed](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0174846) binary [MMTF file format](https://doi.org/10.1371/journal.pcbi.1005575) from https://mmtf.rcsb.org. Prepend the 'pdb:" prefix to the 4-letter PDB ID. # # Downloading PDB structures in MMTF format has the following advantages: # * Very large structures are downloaded efficiently # * Molecules contain bond order information # * DSSP secondary structure has been recalculated for consistency # + viewer = py3Dmol.view(query='pdb:5MXB') # setting styles will be covered in the next tutorial viewer.setStyle({'cartoon': {'color': 'spectrum'}}) viewer.setStyle({'hetflag': True}, {'stick':{'radius': 0.3, 'singleBond': False}}) viewer.zoomTo() viewer.show() # - # # Import a structure from a local file # [List of supported file formats](http://3dmol.csb.pitt.edu/doc/types.html#FileFormats) # # A disadvantage of reading local PDB files is the absence of bond order information. Compare the small molecules below with the downloaded version above. 
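# The next cell reads a local file named `5mxb.pdb`. If you do not already have
# it, the optional snippet below fetches the entry from the RCSB download
# service first; the URL pattern and the use of `urllib` are assumptions added
# here for convenience, not part of the original tutorial.

# +
import os
import urllib.request

# Download PDB entry 5MXB once, so the local-file example below can run
if not os.path.exists('5mxb.pdb'):
    urllib.request.urlretrieve('https://files.rcsb.org/download/5MXB.pdb', '5mxb.pdb')
# -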
# + structure = open('5mxb.pdb','r').read() viewer = py3Dmol.view() viewer.addModel(structure,'pdb') viewer.setStyle({'cartoon': {'color': 'spectrum'}}) viewer.setStyle({'hetflag': True}, {'stick':{'radius': 0.3, 'singleBond': False}}) viewer.zoomTo() viewer.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import matplotlib.pyplot as plt import numpy as np from hazma.scalar_mediator import ScalarMediator # - gsxx = 1.0 gsff = 1.0 gsGG = 1.0 gsFF = 1.0 ms = 1.0 mx = 144. # + [markdown] heading_collapsed=true # ### e-ASTROGAM's effective area # + hidden=true from hazma.gamma_ray_limits.gamma_ray_limit_parameters import A_eff_e_ASTROGAM e_gams = np.logspace(0, np.log10(500), 500) plt.loglog(e_gams, [A_eff_e_ASTROGAM(e) for e in e_gams], label="Actual") plt.loglog(e_gams[[0, -1]], [500, 500], label="Estimate") plt.xlabel(r"$E_\gamma$ (MeV)") plt.ylabel(r"$A_{\text{eff}}$ (cm$^2$)") plt.legend() # + [markdown] heading_collapsed=true # ### DM annihilation spectra # + hidden=true from hazma.gamma_ray_limits import gamma_ray_limit_parameters sm = ScalarMediator(mx, ms, gsxx, gsff, gsGG, gsFF) e_gams = np.logspace(0, np.log10(770), 250) # DM spectra for mx in [110., 130.]: # 144., 285., 550., 770.]: sm.mx = mx spectra = sm.spectra(e_gams, 2.01*mx) plt.loglog(e_gams, e_gams * spectra["total"], label=r"$m_X = %.0f$ MeV" % mx) # Background spectrum plt.loglog(gamma_ray_limit_parameters.e_Bs, gamma_ray_limit_parameters.e_Bs * gamma_ray_limit_parameters.dPhi_dEdOmega_Bs, label="Background") plt.xlim(e_gams[[0, -1]]) plt.legend() # - # ### Compute limits # + sm = ScalarMediator(mx, ms, gsxx, gsff, gsGG, gsFF) mx_min = 50. mx_max = 1000. n_mxs = 25 mxs = np.logspace(np.log10(mx_min), np.log10(mx_max), n_mxs) # Compute limits sv_lims = sm.compute_limits(mxs) # + plt.plot(mxs, sv_lims) plt.grid() plt.xlim(mxs[[0, -1]]) plt.ylim([1e-29, 1e-26]) plt.yscale("log") plt.xlabel(r"$m_X$ (MeV)") plt.ylabel(r"$\langle \sigma v \rangle$ (cm$^3$/s)") plt.title("Projected e-ASTROGAM limits") # + [markdown] heading_collapsed=true # ### Scratch # + hidden=true from scipy.interpolate import interp1d from hazma.gamma_ray_limits.gamma_ray_limit_parameters import eASTROGAM_params, dSph_params from hazma.gamma_ray_limits.compute_limits import __f_lim, __jac_lim sm = ScalarMediator(104., ms, gsxx, gsff, gsGG, gsFF) e_gams = np.logspace(np.log10(sm.mx / 100.), np.log10(sm.mx), 200) dN_dE_DM = interp1d(e_gams, sm.spectra(e_gams, 2.001*sm.mx)["total"]) def fn(e_a, e_b): e_a = min([e_a, e_b]) e_b = max([e_a, e_b]) if e_a == e_b: return 0. else: return __f_lim([e_a, e_b], dN_dE_DM, eASTROGAM_params, dSph_params) # + hidden=true def npmap2d(fun, x_spec, y_spec, doPrint=False): xs = np.linspace(*x_spec) ys = np.linspace(*y_spec) Z = np.empty(len(xs) * len(ys)) i = 0 for y in ys: for x in xs: Z[i] = fun(x, y) if doPrint: print([i, x, y, Z[i]]) i += 1 X, Y = np.meshgrid(xs, ys) Z.shape = X.shape return X, Y, Z # + hidden=true e_as, e_bs, fn_vals = npmap2d(fn, (sm.mx / 50., sm.mx, 10), (sm.mx / 50., sm.mx, 10)) # + hidden=true CS = plt.contourf(e_as, e_bs, fn_vals) plt.colorbar() plt.show() # + hidden=true import scipy from scipy import optimize from hazma.parameters import neutral_pion_mass as mpi0 def f(e_ab): e_a = min(e_ab) e_b = max(e_ab) if e_a == e_b: return 0. 
else: return __f_lim([e_a, e_b], dN_dE_DM, eASTROGAM_params, dSph_params) e_a_0 = 0.5 * (sm.mx - sm.mx / 100.) e_b_0 = 0.75 * (sm.mx - sm.mx / 100.) e_bounds = [sm.mx / 100., sm.mx] limit_obj = scipy.optimize.minimize(f, [e_a_0, e_b_0], bounds=[e_bounds, e_bounds], # args=(dN_dE_DM, eASTROGAM_params, # dSph_params), method="L-BFGS-B", options={"ftol": 1e-3}) # + hidden=true limit_obj # + hidden=true # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # __Вопросы для повторения:__ # # * Что такое стэк и куча? # * Что такое указатель? # * `sizeof(int *)`? # * `sizeof(int)`? # * `sizeof(uint64_t)`? # * Куда указывает `p`? # # ```c++ # std::uint64_t arr[100]; # std::uint64_t *p = &arr[0]; # p = p + 8; # ``` # # * Где хранятся данные строк при исполнении программы? # # ```c++ # const char* s = "hello world"; # # std::string s2 = "London is the capital of Great Britain"; # ``` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Neural Networks and Deep Learning for Life Sciences and Health Applications - An introductory course about theoretical fundamentals, case studies and implementations in python and tensorflow # # (C) 2018 - # # github repository: https://github.com/michelucci/dlcourse2018_students # # Fall Semester 2018 import numpy as np import random import matplotlib.pyplot as plt import matplotlib as mpl # ## Function for plots # We will use the following function to perform some plotting, so you can safely ignore it for now. But the following cell should be run before the rest of the code. 
def myplot(x,y, name, xlab, ylab): plt.rc('font', family='arial') plt.rc('xtick', labelsize='x-small') plt.rc('ytick', labelsize='x-small') plt.tight_layout() fig = plt.figure(figsize=(8, 5)) ax = fig.add_subplot(1, 1, 1) plt.tick_params(labelsize=16) ax.plot(x, y, ls='solid', color = 'black') ax.set_xlabel(xlab, fontsize = 16) ax.set_ylabel(ylab, fontsize = 16) # ## Activation functions plots # ### Creation of arrays of activation functions # First let's create the data that we need to plot the different activation functions x = np.arange(-5,5,0.1) identity = x sigmoid = 1.0 / (1.0 + np.exp(-x)) arctan = np.tanh(x) relu = np.maximum(x, 0) leakyrelu = relu - 0.05 * np.maximum(-x, 0) # ## Identity myplot(x, identity, 'Figure_1-4', 'z', 'Identity $I(z)$') # ## Sigmoid activation function myplot(x, sigmoid, 'Figure_1-5', 'z', 'sigmoid $\sigma(z)$') # ## tanh activation function myplot(x, arctan, 'Figure_1-6', 'z', r'Hyperbolic Tangent $\tanh(z)$') # ## ReLU activation function myplot(x, relu, 'Figure_1-7', 'z', 'ReLU') # ## Leaky ReLU Activation function myplot(x, leakyrelu, 'Figure_1-8', 'z', 'Leaky ReLU') # # SWISH activation function # + swish1 = x / (1.0 + np.exp(-0.1*x)) swish2 = x / (1.0 + np.exp(-0.5*x)) swish3 = x / (1.0 + np.exp(-10.0*x)) plt.rc('font', family='arial') #plt.rc('font',**{'family':'serif','serif':['Palatino']}) plt.rc('xtick', labelsize='x-small') plt.rc('ytick', labelsize='x-small') plt.tight_layout() fig = plt.figure(figsize=(8, 5)) ax = fig.add_subplot(1, 1, 1) plt.tick_params(labelsize=16) ax.plot(x, swish1, ls='solid', color = 'black', label=r'$\beta=0.1$') ax.plot(x, swish2, ls='dashed', color = 'black', label=r'$\beta=0.5$') ax.plot(x, swish3, ls='dotted', color = 'black', label=r'$\beta=10.0$') ax.set_xlabel('z', fontsize = 16) ax.set_ylabel('SWISH activation function', fontsize = 16) #plt.xlim(0,8) plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., fontsize = 16) # - # # Gradient Descent Plots # # Image for well behaved converging # + import numpy as np import matplotlib.pyplot as plt # The data to fit m = 30 theta0_true = 2 theta1_true = 0.5 x = np.linspace(-1,1,m) y = theta0_true + theta1_true * x # + def cost_func(theta0, theta1): # The cost function, J(theta0, theta1) describing the goodness of fit. theta0 = np.atleast_3d(np.asarray(theta0)) theta1 = np.atleast_3d(np.asarray(theta1)) return np.average((y-hypothesis(x, theta0, theta1))**2, axis=2)/2 def hypothesis(x, theta0, theta1): # Our "hypothesis function", a straight line. return theta0 + theta1*x # - # First construct a grid of (theta0, theta1) parameter pairs and their # corresponding cost function values. theta0_grid = np.linspace(-1,4,101) theta1_grid = np.linspace(-5,5,101) J_grid = cost_func(theta0_grid[:,np.newaxis,np.newaxis], theta1_grid[np.newaxis,:,np.newaxis]) # + # Let's start with the plotting fig, ax = plt.subplots(figsize=(12, 8)) plt.rc('font', family='arial') plt.rc('xtick', labelsize='x-small') plt.rc('ytick', labelsize='x-small') plt.tick_params(labelsize=16) # CHECK: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis # A labeled contour plot for the RHS cost function X, Y = np.meshgrid(theta0_grid, theta1_grid) contours = ax.contour(X, Y, J_grid, 30, colors='k') ax.clabel(contours) # The target parameter values indicated on the cost function contour plot ax.scatter([theta0_true]*2,[theta1_true]*2,s=[50,10], color=['k','w']) # Take N steps with learning rate alpha down the steepest gradient, # starting at (theta0, theta1) = (0, 0). 
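# The loop below implements the batch gradient-descent update for the
# least-squares cost J(theta0, theta1) = (1/2m) * sum((h(x) - y)^2):
#   theta0 <- theta0 - (alpha/m) * sum(h(x) - y)
#   theta1 <- theta1 - (alpha/m) * sum((h(x) - y) * x)
# where h(x) = theta0 + theta1*x is the hypothesis() defined above.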
N = 8 alpha = 0.7 theta = [np.array((0,0))] J = [cost_func(*theta[0])[0]] for j in range(N-1): last_theta = theta[-1] this_theta = np.empty((2,)) this_theta[0] = last_theta[0] - alpha / m * np.sum( (hypothesis(x, *last_theta) - y)) this_theta[1] = last_theta[1] - alpha / m * np.sum( (hypothesis(x, *last_theta) - y) * x) theta.append(this_theta) J.append(cost_func(*this_theta)) # Annotate the cost function plot with coloured points indicating the # parameters chosen and red arrows indicating the steps down the gradient. # Also plot the fit function on the LHS data plot in a matching colour. colors = ['b', 'g', 'm', 'c', 'orange'] for j in range(1,N): ax.annotate('', xy=theta[j], xytext=theta[j-1], arrowprops={'arrowstyle': '->', 'color': 'r', 'lw': 1}, va='center', ha='center') ax.scatter(*zip(*theta), cmap='gray', s=80, lw=0) # Labels, titles and a legend. ax.set_xlabel(r'$w_0$', fontsize = 16) ax.set_ylabel(r'$w_1$', fontsize = 16) ax.set_title('Cost function', fontsize = 16) plt.show() # - # ## Figure 1-11 J1=J # + import numpy as np import matplotlib.pyplot as plt # The plot: LHS is the data, RHS will be the cost function. fig, ax = plt.subplots(figsize=(12, 8)) plt.rc('font', family='arial') plt.rc('xtick', labelsize='x-small') plt.rc('ytick', labelsize='x-small') plt.tick_params(labelsize=16) # A labeled contour plot for the RHS cost function X, Y = np.meshgrid(theta0_grid, theta1_grid) contours = ax.contour(X, Y, J_grid, 30, colors='k') ax.clabel(contours) # The target parameter values indicated on the cost function contour plot ax.scatter([theta0_true]*2,[theta1_true]*2,s=[50,10], color=['k','w']) # Take N steps with learning rate alpha down the steepest gradient, # starting at (theta0, theta1) = (0, 0). N = 8 alpha = 2 theta = [np.array((0,0))] J = [cost_func(*theta[0])[0]] for j in range(N-1): last_theta = theta[-1] this_theta = np.empty((2,)) this_theta[0] = last_theta[0] - alpha / m * np.sum( (hypothesis(x, *last_theta) - y)) this_theta[1] = last_theta[1] - alpha / m * np.sum( (hypothesis(x, *last_theta) - y) * x) theta.append(this_theta) J.append(cost_func(*this_theta)) # Annotate the cost function plot with coloured points indicating the # parameters chosen and red arrows indicating the steps down the gradient. # Also plot the fit function on the LHS data plot in a matching colour. colors = ['b', 'g', 'm', 'c', 'orange'] for j in range(1,N): ax.annotate('', xy=theta[j], xytext=theta[j-1], arrowprops={'arrowstyle': '->', 'color': 'r', 'lw': 1}, va='center', ha='center') ax.scatter(*zip(*theta), cmap='gray', s=80, lw=0) # Labels, titles and a legend. ax.set_xlabel(r'$w_0$', fontsize = 16) ax.set_ylabel(r'$w_1$', fontsize = 16) ax.set_title('Cost function', fontsize = 16) import numpy as np import matplotlib.pyplot as plt # - # ## Figure 1-12 J2=J # ## Learning rate too small # + import numpy as np import matplotlib.pyplot as plt # The plot: LHS is the data, RHS will be the cost function. fig, ax = plt.subplots(figsize=(12, 8)) plt.rc('font', family='arial') plt.rc('xtick', labelsize='x-small') plt.rc('ytick', labelsize='x-small') plt.tick_params(labelsize=16) # First construct a grid of (theta0, theta1) parameter pairs and their # corresponding cost function values. 
theta0_grid = np.linspace(-1,4,101) theta1_grid = np.linspace(-5,5,101) J_grid = cost_func(theta0_grid[:,np.newaxis,np.newaxis], theta1_grid[np.newaxis,:,np.newaxis]) # A labeled contour plot for the RHS cost function X, Y = np.meshgrid(theta0_grid, theta1_grid) contours = ax.contour(X, Y, J_grid, 30, colors='k') ax.clabel(contours) # The target parameter values indicated on the cost function contour plot ax.scatter([theta0_true]*2,[theta1_true]*2,s=[50,10], color=['k','w']) # Take N steps with learning rate alpha down the steepest gradient, # starting at (theta0, theta1) = (0, 0). N = 30 alpha = 0.05 theta = [np.array((0,0))] J = [cost_func(*theta[0])[0]] for j in range(N-1): last_theta = theta[-1] this_theta = np.empty((2,)) this_theta[0] = last_theta[0] - alpha / m * np.sum( (hypothesis(x, *last_theta) - y)) this_theta[1] = last_theta[1] - alpha / m * np.sum( (hypothesis(x, *last_theta) - y) * x) theta.append(this_theta) J.append(cost_func(*this_theta)) # Annotate the cost function plot with coloured points indicating the # parameters chosen and red arrows indicating the steps down the gradient. # Also plot the fit function on the LHS data plot in a matching colour. colors = ['b', 'g', 'm', 'c', 'orange'] for j in range(1,N): ax.annotate('', xy=theta[j], xytext=theta[j-1], arrowprops={'arrowstyle': '->', 'color': 'r', 'lw': 1}, va='center', ha='center') ax.scatter(*zip(*theta), cmap='gray', s=80, lw=0) # Labels, titles and a legend. ax.set_xlabel(r'$w_0$', fontsize = 16) ax.set_ylabel(r'$w_1$', fontsize = 16) ax.set_title('Cost function', fontsize = 16) plt.show() fig.savefig('Figure_1-12'+'.pdf', format='pdf', dpi=300,bbox_inches='tight') fig.savefig('Figure_1-12'+'.png', format='png', dpi=300,bbox_inches='tight') # - # ### Figure 1-13 - Cost function for different gammas J3=J # + plt.rc('font', family='arial') #plt.rc('font',**{'family':'serif','serif':['Palatino']}) plt.rc('xtick', labelsize='x-small') plt.rc('ytick', labelsize='x-small') plt.tight_layout() fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(1, 1, 1) #x = np.linspace(1., 8., 30) ax.plot(J1, ls='solid', color = 'black', label='$\gamma=0.8$') ax.plot(J2, ls='dashed', color = 'black', label='$\gamma=2.0$') ax.plot(J3, ls='dotted', color = 'black', label='$\gamma=0.05$') ax.set_xlabel('Iterations', fontsize = 16) ax.set_ylabel('Cost function $J$', fontsize = 16) plt.xlim(0,8) plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., fontsize = 16) # - # ## Difference between lists and numpy arrays lst1 = random.sample(range(1, 10**8), 10**7) lst2 = random.sample(range(1, 10**8), 10**7) # random.sample takes a population and a sample size k and returns k random members of the population. 
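# The `%%timeit` cell magics used below only work inside IPython/Jupyter. If you
# are running this file as a plain script, a rough equivalent using the
# standard-library `timeit` module is sketched here; the reduced list size and
# the repeat count are arbitrary choices made for speed, not part of the
# original benchmark.

# +
import timeit

small1, small2 = lst1[:10**5], lst2[:10**5]
np_small1, np_small2 = np.array(small1), np.array(small2)

t_list = timeit.timeit(lambda: [a * b for a, b in zip(small1, small2)], number=20)
t_numpy = timeit.timeit(lambda: np.multiply(np_small1, np_small2), number=20)
print("list comprehension: %.4f s (20 runs)" % t_list)
print("numpy multiply:     %.4f s (20 runs)" % t_numpy)
# -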
# ## With zip # %%timeit out1 = [a*b for a,b in zip(lst1,lst2)] # ## with enumerate # %%timeit ab = [val * lst2[i] for i, val in enumerate(lst1)] # ## Standard loop # %%timeit ab = [lst1[i]*lst2[i] for i in range(len(lst1))] # ## with numpy list1_np = np.array(lst1) list2_np = np.array(lst2) # %%timeit out2 = np.multiply(list1_np, list2_np) # %%timeit out2 = np.multiply(list1_np, list2_np) # %%timeit out2 = np.multiply(lst1, lst2) # # Exercises # ## Exercise 1 (Difficulty: easy) # Try to see, as we have done before, what happens if you use the following equation for the learning rate (where $j$ is the number of the iteration) # # $$ # \alpha = 2 \ \ \ \text{for} \ \ j <4; \ \ \ \alpha =0.4 \ \ \ \text{for} \ \ j \geq 4 # $$ # + import numpy as np import matplotlib.pyplot as plt # The plot: LHS is the data, RHS will be the cost function. fig, ax = plt.subplots(figsize=(12, 8)) plt.rc('font', family='arial') plt.rc('xtick', labelsize='x-small') plt.rc('ytick', labelsize='x-small') plt.tick_params(labelsize=16) # First construct a grid of (theta0, theta1) parameter pairs and their # corresponding cost function values. theta0_grid = np.linspace(-1,4,101) theta1_grid = np.linspace(-5,5,101) J_grid = cost_func(theta0_grid[:,np.newaxis,np.newaxis], theta1_grid[np.newaxis,:,np.newaxis]) # A labeled contour plot for the RHS cost function X, Y = np.meshgrid(theta0_grid, theta1_grid) contours = ax.contour(X, Y, J_grid, 30, colors='k') ax.clabel(contours) # The target parameter values indicated on the cost function contour plot ax.scatter([theta0_true]*2,[theta1_true]*2,s=[50,10], color=['k','w']) N = 30 # # Initial value of the learning rate # alpha = 2.0 theta = [np.array((0,0))] J = [cost_func(*theta[0])[0]] for j in range(N-1): # # Add your code here on how to modify alpha with the number # of iterations # last_theta = theta[-1] this_theta = np.empty((2,)) this_theta[0] = last_theta[0] - alpha / m * np.sum( (hypothesis(x, *last_theta) - y)) this_theta[1] = last_theta[1] - alpha / m * np.sum( (hypothesis(x, *last_theta) - y) * x) theta.append(this_theta) J.append(cost_func(*this_theta)) colors = ['b', 'g', 'm', 'c', 'orange'] for j in range(1,N): ax.annotate('', xy=theta[j], xytext=theta[j-1], arrowprops={'arrowstyle': '->', 'color': 'r', 'lw': 1}, va='center', ha='center') ax.scatter(*zip(*theta), cmap='gray', s=80, lw=0) # Labels, titles and a legend. ax.set_xlabel(r'$w_0$', fontsize = 16) ax.set_ylabel(r'$w_1$', fontsize = 16) ax.set_title('Cost function', fontsize = 16) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="wKY1p5iyzY0f" # # Final project - Applied Mathematics # # + [markdown] id="VBCjvry_65ug" # Members # - A00834191 # - A01197044 # + [markdown] id="XZmEvzPT6SDj" # In this python notebook we implemented several machine learning and statistical models to predict the inflation in Mexico, using the bi-weekly data recorded by INEGI. 
# + [markdown] id="2vVDsmYRzYYm" # ### INEGI data # - INPC alogn with its components, extracted from [INEGI](https://www.inegi.org.mx/app/tabulados/default.aspx?nc=ca56_2018) # - INPC per city, extracted from [INEGI](https://www.inegi.org.mx/app/tabulados/default.aspx?nc=ca62_2018) # - INPC classifiying by object, extracted from [INEGI](https://www.inegi.org.mx/app/tabulados/default.aspx?nc=ca58_2018) # # [Inflation calculator](https://www.inegi.org.mx/app/indicesdeprecios/calculadorainflacion.aspx) # # [Price index](https://www.inegi.org.mx/app/indicesdeprecios/Estructura.aspx?idEstructura=112001300030&T=%C3%8Dndices%20de%20Precios%20al%20Consumidor&ST=Inflaci%C3%B3n%20Mensual) # # [INEGI main page (check graphics)](https://www.inegi.org.mx/temas/inpc/#Informacion_general) # + [markdown] id="30kqU_3-4B1G" # ## Process data # + [markdown] id="A8M1Bb2-335n" # ### Libraries # + colab={"base_uri": "https://localhost:8080/"} id="mXgZHMlzCDBl" outputId="7a26ed80-ee12-4859-def7-c0f207d145ce" # !pip install pystan==2.19.1.1 && pip install prophet # + id="1Jo740twzDlq" colab={"base_uri": "https://localhost:8080/"} outputId="17d0a445-4ce5-4745-fe5c-209d71d002f3" import datetime as dt import matplotlib.pyplot as plt import numpy as np import pandas as pd import sklearn as sk import tensorflow as tf import warnings from pandas.plotting import autocorrelation_plot from prophet import Prophet from prophet.plot import plot_plotly, plot_components_plotly from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from statsmodels.tsa.arima_model import ARIMA # + [markdown] id="4dPUqW5D9Nej" # ### INEGI Dataframes # + id="T7mGBOzY4IJR" # INEGI Data inpc_components_link = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vSpdpTVzL6d_p4qhhkuVHxMMXIYKnITeyFtd98_e575z4MPiBtWdb8WKqmzXAlWYg/pub?gid=1239599080&single=true&output=csv' inpc_per_city_link = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vTJ_JokBZWk1rFvOWK-frzbLo9cOw_IzyLkXyFbGejKytzyBkuoaUrz3ydCL5PH3A/pub?gid=988073853&single=true&output=csv' inpc_per_objects_link = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vSTBQ9lwW-BX20fU8_wR0Ux2IzPTVe8yf6px5vFED9EzaijnzBKsjKn4jHRi2GEEQ/pub?gid=1466962329&single=true&output=csv' # DataFrames df_components = pd.read_csv(inpc_components_link) df_city = pd.read_csv(inpc_per_city_link) df_objects = pd.read_csv(inpc_per_objects_link) # Parse dates months = ['Ene', 'Feb', 'Mar', 'Abr', 'May', 'Jun', 'Jul', 'Ago', 'Sep', 'Oct', 'Nov', 'Dic'] def change_format_date(old_date): date_splitted = old_date.split(' ') day = '1' if date_splitted[0] == '1Q' else '15' month = str(months.index(date_splitted[1]) + 1) year = date_splitted[2] parsed_date = '-'.join([year, month, day]) return parsed_date df_components['Fecha'] = df_components['Fecha'].apply(lambda date: change_format_date(date)) df_city['Fecha'] = df_city['Fecha'].apply(lambda date: change_format_date(date)) df_objects['Fecha'] = df_objects['Fecha'].apply(lambda date: change_format_date(date)) # + [markdown] id="HDK4FsCC9hgG" # ## Statistical models # + [markdown] id="80vReg3Y9p-M" # ### Linear Regression # + id="BVNhevP79cqs" def linear_regression(timeSerie, test_size=0.2): # Given a time serie, train a model that uses the position of time and # previous value to predict the next values. 
# f(t, x_t) -> x_{t+1} X = timeSerie.copy() y = X.copy() X.pop(0) y.pop() X = [[idx, x] for idx, x in enumerate(X)] X, y = np.array(X), np.array(y) # Train-test split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, shuffle=False) # Train model model = LinearRegression().fit(X_train, y_train) # Predict y_predict = model.predict(X_test) y_predict = [] last_known_t, last_known_x = X_train[-1] for _ in range(len(X_test)): y_hat = model.predict(np.array([[last_known_t, last_known_x]], dtype=object)) y_predict.append(y_hat) last_known_t += 1 last_known_x = y_hat return y_train, y_test, y_predict # + [markdown] id="KmaKgaGL9s50" # ### ARIMA # + id="XVPDEI3ZHF30" def arima(timeSerie, test_size=0.2, order=(5, 1, 0)): # Given a time serie, train an ARIMA model to predict next values. X = timeSerie.copy() train_size_X = int(len(X) * (1 - test_size)) # Train-test split X_train, X_test = X[:train_size_X], X[train_size_X:] # Train model, and predict y_predict = [] history = X_train.copy() for _ in range(len(X_test)): model = ARIMA(np.array(history, dtype=object), order=order) model_fit = model.fit() y_hat = model_fit.forecast()[0] y_predict.append(y_hat) history.append(y_hat) return X_train, X_test, y_predict # + [markdown] id="6RczwrweLbT4" # ### Prophet # + id="8IUp3s1vJfur" def prophet(timeSerie, dates, test_size=0.2, periods=365): X = timeSerie.copy() train_size_X = int(len(X) * (1 - test_size)) # Train-test split X_train, X_test = X[:train_size_X], X[train_size_X:] dates_train, dates_test = dates[:train_size_X], dates[train_size_X:] # Train model df = pd.DataFrame({'ds': dates_train, 'y':X_train}) model = Prophet() model.fit(df) # Predict future = model.make_future_dataframe(periods=len(X_test)) forecast = model.predict(future) y_predict = forecast['yhat'].to_numpy(dtype=float)[-len(X_test):] y_predict_upper = forecast['yhat_upper'].to_numpy(dtype=float)[-len(X_test):] y_predict_lower = forecast['yhat_lower'].to_numpy(dtype=float)[-len(X_test):] """ # Plotting prophet fig1 = model.plot(forecast) fig1.show() fig2 = model.plot_components(forecast) fig2.show() plot_plotly(model, forecast) plot_components_plotly(model, forecast) """ return X_train, X_test, y_predict, y_predict_lower, y_predict_upper # + [markdown] id="150Xb1T__0yb" # ## Machine Learning models # # + [markdown] id="0UCfhwSO_WSM" # ### Multi-Layer Perceptron # # + id="1ltWTMnD_n3Y" def multi_layer_perceptron(timeSerie, look_back=10, test_size=0.2, epochs=100, verbose=False): # Given a time serie, train a model that uses the last 'look_back' values # to predict the next value. 
# f(x_{t-4}, x_{t-3}, x_{t-2}, x_{t-1}, x_{t}) -> x_{t+1} X, y = [], [] for idx in range(len(timeSerie) - look_back): X.append(timeSerie[idx : idx + look_back]) y.append(timeSerie[idx + look_back]) X, y = np.array(X), np.array(y) # Train-test split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, shuffle=False) # Architecture of model model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation='relu', input_shape=(look_back,)), tf.keras.layers.Dense(8, activation='relu'), tf.keras.layers.Dense(1) ]) model.compile(loss=tf.keras.losses.mean_squared_error, optimizer=tf.keras.optimizers.Adam(), metrics=['mse', 'mae']) # Train model model.fit(X_train, y_train, epochs=epochs, verbose=verbose) # Predict y_predict = [] last_known_xs = X_train[-1] for _ in range(len(X_test)): y_hat = model.predict(np.array([last_known_xs])) y_predict.append(y_hat[0]) last_known_xs = np.append(last_known_xs, y_hat[0]) last_known_xs = np.delete(last_known_xs, 0) return y_train, y_test, y_predict # + [markdown] id="BqwwTucXFWeU" # ### Long Short Term-Memory # + id="ZFb5mYC-FVYd" def long_short_term_memory(timeSerie, look_back=10, test_size=0.2, batch_size=8, epochs=350, verbose=False): # Given a time serie, train a model that uses the last 'look_back' values # to predict the next value. # f(x_{t-4}, x_{t-3}, x_{t-2}, x_{t-1}, x_{t}) -> x_{t+1} X, y = [], [] for idx in range(len(timeSerie) - look_back): x = timeSerie[idx : idx + look_back] X.append([[t] for t in x]) y.append(timeSerie[idx + look_back]) X, y = np.array(X), np.array(y) # Train-test split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, shuffle=False) # Architecture of model model = tf.keras.Sequential([ tf.keras.layers.Input(shape=(look_back, 1)), tf.keras.layers.LSTM(5, activation='tanh'), tf.keras.layers.Dense(1) ]) model.compile(loss=tf.keras.losses.mean_squared_error, optimizer=tf.keras.optimizers.Adam(), metrics=['mse', 'mae']) # Train model model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size, verbose=verbose) # Predict y_predict = [] last_known_xs = X_train[-1] for _ in range(len(X_test)): y_hat = model.predict(np.array([last_known_xs])) y_predict.append(y_hat[0]) last_known_xs = np.append(last_known_xs, y_hat) last_known_xs = np.delete(last_known_xs, 0) last_known_xs = [[x] for x in last_known_xs] return y_train, y_test, y_predict # + [markdown] id="MW1iZzikQmoG" # ## Bemnchark # + [markdown] id="lrytHXDFtbZ_" # ### Plotting functions # + id="3PDuV9e_gNsU" def particular_plot(dates_train, dates_test, y_train, y_test, y_predict=None, model_name='', ticks=10, suffix='', y_predict_lower=None, y_predict_upper=None): fig, ax = plt.subplots() # Plotting plt.ion() plt.plot(dates_train, y_train, color='red', label='Train') plt.plot(dates_test, y_test, color='blue', label='Test') plt.plot(dates_test, y_predict, color='green', label='Prediction') if y_predict_lower is not None: plt.plot(dates_test, y_predict_lower, color='yellowgreen', label='Lower limit') if y_predict_upper is not None: plt.plot(dates_test, y_predict_upper, color='darkgreen', label='Upper limit') # Configuration plt.xlabel('Time') plt.ylabel('INPC') plt.title(model_name) inv_ticks = (len(dates_train) + len(dates_test) - 1)//ticks + 1 ax.set_xticks(ax.get_xticks()[::inv_ticks]) ax.tick_params(axis="x", labelrotation=-60) ax.legend() # Show plt.ioff() plt.savefig(f'{model_name}{suffix}.png', dpi=333, transparent=True) fig.show() def show_plots(dates, y_train, y_test, y_predict=None, model_name='', 
percentage_closeup=0.95, ticks_normal=12, ticks_closeup=10, y_predict_lower=None, y_predict_upper=None): dates_train = dates[:len(y_train)+1] dates_test = dates[len(y_train) : len(y_train) + len(y_test)] y_train_ = list(y_train) y_train_.append(y_test[0]) particular_plot(dates_train, dates_test, y_train_, y_test, y_predict, model_name, ticks_normal, y_predict_lower=y_predict_lower, y_predict_upper=y_predict_upper) closer_point = int(len(dates_train) * percentage_closeup) dates_train_closeup = dates_train[closer_point:] y_train_closeup = y_train_[closer_point:] particular_plot(dates_train_closeup, dates_test, y_train_closeup, y_test, y_predict, model_name, ticks_closeup, suffix='_closeup', y_predict_lower=y_predict_lower, y_predict_upper=y_predict_upper) # + [markdown] id="r4FUZdTCtgtI" # ### Plotting each model # + id="281ZEEINEM6w" def get_series(days=None, biweeks=None): if biweeks is None: biweeks = days // 15 + 1 # Approximation of bi-weeks dates = df_components['Fecha'].to_numpy()[-biweeks:] timeSerie = list(df_components['INPC'].to_numpy())[-biweeks:] return timeSerie, dates # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="IXEMDPXGQ76j" outputId="fc46f81a-0641-4f90-d54c-a0024224c517" timeSerie, dates = get_series(biweeks=len(df_components['Fecha'].to_numpy())) ## Linear regression y_train_lr, y_test_lr, y_predict_lr = linear_regression(timeSerie) show_plots(dates, y_train_lr, y_test_lr, y_predict_lr, 'Linear Regression', 0.85) ## ARIMA y_train_ar, y_test_ar, y_predict_ar = arima(timeSerie) show_plots(dates, y_train_ar, y_test_ar, y_predict_ar, 'ARIMA', 0.85) ## Prophet y_train_fb, y_test_fb, y_predict_fb, y_predict_lower_fb, y_predict_upper_fb = prophet(timeSerie, dates) show_plots(dates, y_train_fb, y_test_fb, y_predict_fb, 'Prophet', 0.85, y_predict_lower=y_predict_lower_fb, y_predict_upper=y_predict_upper_fb) ## MLP y_train_mlp, y_test_mlp, y_predict_mlp = multi_layer_perceptron(timeSerie, epochs=200) show_plots(dates, y_train_mlp, y_test_mlp, y_predict_mlp, 'Multi-Layer Perceptron', 0.85) ## LSTM y_train_lstm, y_test_lstm, y_predict_lstm = long_short_term_memory(timeSerie, epochs=200) show_plots(dates, y_train_lstm, y_test_lstm, y_predict_lstm, 'Long Short Term-Memory', 0.85) # + colab={"base_uri": "https://localhost:8080/", "height": 593} id="hSij8l4NKimL" outputId="399f4446-c219-4c16-f29d-9dabf1fc534d" fig, ax = plt.subplots() # Plotting plt.ion() dates_train_lr = dates[:len(y_train_lr)] dates_test_lr = dates[len(y_train_lr) : len(y_train_lr) + len(y_test_lr)] plt.plot(dates_train_lr, y_train_lr, color='red', label='Train') plt.plot(dates_test_lr, y_test_lr, color='blue', label='Test') models_data = [ [y_train_lr, y_test_lr, y_predict_lr, 'Linear Regression'], [y_train_ar, y_test_ar, y_predict_ar, 'ARIMA'], [y_train_fb, y_test_fb, y_predict_fb, 'Prophet'], [y_train_mlp, y_test_mlp, y_predict_mlp, 'MLP'], [y_train_lstm, y_test_lstm, y_predict_lstm, 'LSTM'] ] for y_train_model, y_test_model, y_predict_model, model_name in models_data: plt.plot(dates[len(y_train_model) : len(y_train_model) + len(y_test_model)], y_predict_model, label=model_name) # Configuration plt.xlabel('Time') plt.ylabel('INPC') plt.title('Benchmark models') ticks = 10 inv_ticks = (len(dates_train) + len(dates_test) - 1)//ticks + 1 ax.set_xticks(ax.get_xticks()[::inv_ticks]) ax.tick_params(axis="x", labelrotation=-60) ax.legend() # Show plt.ioff() plt.savefig('benchmark_models.png', dpi=333, transparent=True) fig.show() # + id="Vv5XnDXXMyWz" colab={"base_uri": "https://localhost:8080/", 
"height": 593} outputId="940f21ed-4a5d-4443-e7f2-5fb755b3abf4" fig, ax = plt.subplots() # Plotting plt.ion() percentage_closeup=0.85 closer_point = int(len(y_train_lr) * percentage_closeup) dates_train_lr = dates[closer_point:len(y_train_lr)] dates_test_lr = dates[len(y_train_lr) : len(y_train_lr) + len(y_test_lr)] y_train_ = list(y_train) y_train_.append(y_test[0]) plt.plot(dates_train_lr, y_train_lr[closer_point:], color='red', label='Train') plt.plot(dates_test_lr, y_test_lr, color='blue', label='Test') models_data = [ [y_train_lr, y_test_lr, y_predict_lr, 'Linear Regression'], [y_train_ar, y_test_ar, y_predict_ar, 'ARIMA'], [y_train_fb, y_test_fb, y_predict_fb, 'Prophet'], [y_train_mlp, y_test_mlp, y_predict_mlp, 'MLP'], [y_train_lstm, y_test_lstm, y_predict_lstm, 'LSTM'] ] for y_train_model, y_test_model, y_predict_model, model_name in models_data: plt.plot(dates[len(y_train_model) : len(y_train_model) + len(y_test_model)], y_predict_model, label=model_name) # Configuration plt.xlabel('Time') plt.ylabel('INPC') plt.title('Benchmark models') ticks = 10 inv_ticks = (len(dates_train) + len(dates_test) - 1)//ticks + 1 ax.set_xticks(ax.get_xticks()[::inv_ticks]) ax.tick_params(axis="x", labelrotation=-60) ax.legend() # Show plt.ioff() plt.savefig('benchmark_models_closeup.png', dpi=333, transparent=True) fig.show() # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="hpNzqBj3VZPB" outputId="32aa0fb0-02be-469b-c592-fbf5d0e73c75" dates_train_lr[-1] # + colab={"base_uri": "https://localhost:8080/"} id="LeDQrgmpZ0aa" outputId="74aff983-ee3e-4e3a-b6bc-aa73b12ac571" y_predict_lstm # + id="mOCDBw1iEoc8" timeSerie, dates = get_series(biweeks=54) plot_models(timeSerie, dates) # + colab={"base_uri": "https://localhost:8080/"} id="1emDvdRXVmPO" outputId="85512c8d-e819-48d0-f65e-459f640e7292" from scipy.stats import pearsonr, spearmanr def calculate_errors(y_predict, y_test): if isinstance(y_predict[0], np.ndarray): y_predict = [ x[0] for x in y_predict ] covariance = np.cov(y_predict, y_test) corr, _ = pearsonr(y_predict, y_test) corr_2, _ = spearmanr(y_predict, y_test) return mean_squared_error(y_test, y_predict),covariance[0][1], corr, corr_2 print(""" \\begin{table}[H] \\centering \\begin{tabular}{|l|r|r|r|r|} \\hline \\multicolumn{1}{|c|}{\\textbf{Models}} & \\multicolumn{1}{c|}{\\textbf{Mean Square Error}} & \\multicolumn{1}{c|}{\\textbf{Covariance}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Pearson\\\\ correlation\\end{tabular}}} & \\multicolumn{1}{c|}{\\textbf{\\begin{tabular}[c]{@{}c@{}}Spearman\\\\ correlation\\end{tabular}}} \\\\ \hline """) for _, y_test_model, y_predict_model, model_name in models_data: mse, cov, pearson, spearman_c = calculate_errors(y_predict_model, y_test_model) print("{} & {:.4f} & {:.4f} & {:.4f} & {:.4f}".format(model_name, mse, cov, pearson, spearman_c), end='\\\\ \\hline\n') print(""" \\end{tabular} \\caption{Benchmark results} \\label{table:benchmark} \\end{table} """ ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tutorial 3: Classification # # # In the previous two tutorials we explored how to manipulate csv datasets, first as a dictionary then as a list. In this tutorial, we will step it up a notch by briefly exploring a third data structure called a data frame by leveraging the pandas Python library. 
We will also explore how to use the scikit-learn library to do basic classification. Our task is to build a model that can predict the country where a purchase is being made from. In reality, this is a pretty low value task for this dataset, but we will use it to demonstrate how classification works. Classification is one of the essential elements of machine learning, and is the backbone of many artificial intelligence technologies. # # We will start by importing two new libraries: pandas and numpy. As before, we want to start by loading the csv data into our Python environment. We will do this similarly this time, but using pandas. You can learn more about pandas here: https://pandas.pydata.org/ # + import pandas as pd #import pandas as an object pd ec = pd.read_csv('data.csv', encoding="ISO-8859-1", dtype={'CustomerID': str,'InvoiceNo': str}) #we will name our dataframe ec ec.head() #see the first five entries # - # Data frames look nice in Jupyter! The pandas dataframe is a two-dimensional data structure which makes it much easier to process and manage data. Not only does it look nice, it also comes with a number of built in features designed to make our life easy. For instance, we can easily observe the shape of our data. ec.shape # This tells us that there are 541909 rows and 8 columns. We are not just limited to descriptions however. If we want to quickly process our data to drop null values, we can do that as well. ec.dropna(inplace = True) #drop the null values ec.shape # It is often good to eliminate or represent null values in your dataset. In this case, we opted to drop any values that had a null in any of the columns. We can also perform advanced functions like counting values in the dataset. For instance, we can observe the breakdown of the countries represented in the data. print(ec['Country'].value_counts()) # Our data is rather imbalanced, with the majority of the orders from the United Kingdom. This may create problems for us later, if we are going to build a classification algorithm. Finally, we can also observe our data types. ec.dtypes # It would seem that only two of our values are numerical: Quantity and Unit Price. Classification algorithms (as with all machine learning) can only understand values that are represented numerically. Though there may be good information contained in the descriptions, we would have to add additional analysis to process the text data. For the purposes of this tutorial, we will only focus on classification -- the proper processing of textual data is a live research question in natural language processing! # # We can cut our dataframe down easily by taking a subset. Let's take a subset with just the numerical values and the country values, and see if we can build a predictive model on those. new_ec = ec[['Quantity', 'UnitPrice', 'Country']] new_ec.head() # ## Logistic Regression: Round 1 # Let's start by exploring a basic predictive algorithm. Logistic regression is one of many regression models designed to best fit the data using a predefined method. If you are familar with statistics or economics, you probably already understand how it works--we are just using a logistic regression library built in Scikit. We will use this to start our classification analysis. Let's begin by importing the model from scikit. We will save the model in the variable clf, as per scikit-learn's conventions. 
from sklearn.linear_model import LogisticRegression clf = LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial') #the classification model # The classification models we will use here belong in the category of supervised learning. Supervised learning algoirthms need data and labels to learn from them. In our case, we will need to take part of our data as a training set. Though there are many ways that you can do this, one simple method is to take the majority of the dataset for training, and save a minority for testing. The author of this tutorial is lazy, so he will just take the first 300 000, which is roughly 3/4 of this dataset. We will use the last 1/4 for testing. # + train = new_ec[:300000] #take the first 300 000 and save it in train test = new_ec[300000:] #take what remains and save it in test print("Train: " + str(len(train)) + " Test: " + str(len(test))) # - # The next step is to fit the model. We saved the model as the variable clf, so it's just a matter of fitting our training data to the model. Typically, you fit the model by specifying first the data that the model is assessing, followed by the labels. We will do this by telling it to observe the columns that are not 'Country' for the model, while using the 'Country' values as labels. Note: this may take a minute or two on some computers. clf.fit(train.loc[:, train.columns != 'Country'], train['Country']) #the 'non-country' columns are inputs while 'country' is the label # The model is trained! We can now tell the model to predict values based on inputs. Let's save the predictions as the preds variable. We will print some of the output to make sure that it is working. preds = clf.predict(test.loc[:, test.columns != 'Country']) print(preds[1]) # In machine learning research, there are multiple measures that you can use to determine whether an algorithm is good. One of the most common measures is the algorithm's accuracy, which can be defined as the ratio of true values to the data overall. We can import an accuracy_score function from scikit learn to make this easy. from sklearn.metrics import accuracy_score accuracy_score(preds, test['Country']) #measure accuracy of preds versus the test values # 90 percent accuracy! These are amazing results! Or are they? # # One problem with accuracy measures is that there could be an underdetermining factor that drives high accuracy results. Earlier we noticed that the data was weighted heavily toward one country. Let's look closer at our predictions. import collections collections.Counter(preds) # Our model seems to have simply classified most of the data as "United Kingdom". Let's see how close that was to reality. collections.Counter(test['Country']) # Clearly there was more variance that the algorithm discovered... and a six year old could have come up with this solution. Given that there are 106 829 values in the test dataset, our 90% accuracy is an illusion--the results are no better than random chance. This is a very common issue with imbalanced datasets, as the algorithms used might detect a simple solution: select the majority. If we are going to have meaningful results, we should consider digging deeper and rebalancing the data. # ## Logistic Regression: Round 2 # If we want to eventually develop some sort of predictive algorithm, we should consider balancing the dataset. One simple way for us to do that is to cut down the number of UK values of the other countries. 
We can observe the number of instances that were not 'United Kingdom' by using the value_counts function below. (new_ec['Country'] != 'United Kingdom').value_counts() # how many times values other than 'United Kingdom' appear # With this we can further process our data by dividing it between the "uk" subset and the "not uk" subset. We would want to do this because we want to cut down on the data, but only that data which has the label of "United Kingdom". With pandas this is really easy; we just specify the subset conditions. ec_uk = new_ec[new_ec['Country'] == 'United Kingdom'] ec_others = new_ec[new_ec['Country'] != 'United Kingdom'] # With a separate data frame, we can use the sample function that is contained in the pandas dataframe class. We can thus take a random sample. ec_uk_under = ec_uk.sample(44951) # the number of values for not 'United Kingdom' new_ec = pd.concat([ec_uk_under, ec_others]) #bring the disparate data together collections.Counter(new_ec['Country']) #show the countries # We can further simplify our task by reducing the number of classes to two. Many classification algorithms (such as support vector machines) are designed to be binary classifiers, so are optimized for exactly two classes. One way we can do this is to distinguish domestic orders from foreign orders. We can do this by changing all foreign orders to 'Other Country'. The way to do this in Pandas is to use the .loc feature. new_ec.loc[new_ec['Country'] != 'United Kingdom', 'Country'] = 'Other Country' #select the values other than United Kingdom and make them one value collections.Counter(new_ec['Country']) #list all of the country data new_ec.shape #shape of the new data frame # Finally, when we appended the two halves of our dataset, we essentially added our reduced 'United Kingdom' set to the end of our 'Other Country' set. We should shuffle them before beginning classification, or else our results will be biased by our distribution. rand_ec = new_ec.sample(frac=1) #take a random fraction of 100% of the data frame. We could use frac = 0.1 to take a random 10% # We're now ready to try Logistic Regression again! As before, we will divide the data into a train and a test set before fitting the algorithm. Let's try this again and see how we fare. # + train = rand_ec[:60000] #60000, approximately 2/3 of the data test = rand_ec[60000:] print("Train: " + str(len(train)) + " Test: " + str(len(test))) # - clf.fit(train.loc[:, train.columns != 'Country'], train['Country']) # We are now ready to make predictions, as before. Let's test it on the test dataset and record the accuracy. preds = clf.predict(test.loc[:, test.columns != 'Country']) accuracy_score(preds, test['Country']) # 57 percent accuracy -- terrible, and barely better than random chance. Let's break this down a bit more using the confusion matrix. The confusion matrix will show the number of items classified as 'Other Country' on the left, followed by those which were classified as 'United Kingdom' on the right. The items that were actually 'Other Country' are on the top, while those which were actually 'United Kingdom' are on the bottom. This is very useful for seeing the breakdown of our classifier and what went wrong. In our case, it is clear that our classifier identified far too many orders as 'United Kingdom'. from sklearn.metrics import confusion_matrix confusion_matrix(test['Country'], preds) #shows the confusion matrix collections.Counter(preds) #show the collection of predicted countries # Let's try some other techniques before calling it a day.
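# Before we do, one more evaluation tool is worth a quick look. Accuracy and the confusion matrix can be complemented with per-class precision and recall; the sketch below (assuming the preds and test variables from the cells above) uses scikit-learn's classification_report.

# +
from sklearn.metrics import classification_report

# per-class precision, recall and F1 for the round 2 predictions
print(classification_report(test['Country'], preds))
# -

# A low recall for either class is another sign that the model is leaning on one label rather than learning a real decision boundary.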
# ### Naive Bayes # A second technique we can try is called Naive Bayes. This is a probabilistic classifier based on Bayes Theorem. It is primarily used in text analysis, but can be used in our context as well. Let's train this classifier using the same code as before. We end up with an even worse result using Naive Bayes. from sklearn.naive_bayes import GaussianNB clf = GaussianNB() clf.fit(train.loc[:, train.columns != 'Country'], train['Country']) preds = clf.predict(test.loc[:, test.columns != 'Country']) accuracy_score(preds, test['Country']) confusion_matrix(test['Country'], preds) collections.Counter(preds) # ### Random Forest # A third technique we can try is called random forest. This classifier belongs to the category called decision trees, which create an algorithm based on the information gained. They are called 'forests' because they are actually the average of many decision trees. Using random forest, we get 65% accuracy, which is a significant improvement. Though nothing to write home about, this is on the path to usefulness. # + from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0) clf.fit(train.loc[:, train.columns != 'Country'], train['Country']) # - preds = clf.predict(test.loc[:, test.columns != 'Country']) accuracy_score(preds, test['Country']) confusion_matrix(test['Country'], preds) collections.Counter(preds) # ### Support Vector Machines # The fourth technique that we will try is called support vector machines. Similarly to regression, this classifier envisions the data as points in space on a plane and tries to fit the data as best possible. Unlike regression, it plots on a hyperplane, using a kernel function that is specified by the user. We will use the default radial basis function kernel for classification. Using RBF, we attain a classification accuracy of 67% which is much closer to useful. # # With this in hand, we have a working (if not terribly good) predictive algorithm that can determine with 67% accuracy whether an order is from the United Kingdom or a foreign country, and brings us to the end of the classification tutorial. from sklearn import svm clf = svm.SVC(gamma='scale') clf.fit(train.loc[:, train.columns != 'Country'], train['Country']) preds = clf.predict(test.loc[:, test.columns != 'Country']) accuracy_score(preds, test['Country']) collections.Counter(preds) # ## Challenge Question # In this tutorial, we explored machine learning techniques that are often called "shallow learning". With the hype around deep learning, neural networks appear to be all the rage. How would we implement a neural network using scikit learn? 
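# One possible starting point for the challenge (a sketch only, not a tuned solution): scikit-learn ships a small feed-forward neural network in sklearn.neural_network.MLPClassifier, which can be dropped into the same fit/predict workflow used above. The layer sizes and max_iter below are illustrative assumptions, and the inputs are scaled first because MLPs are sensitive to feature scale.

# +
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# scale the numeric inputs, then fit a small feed-forward network
scaler = StandardScaler().fit(train.loc[:, train.columns != 'Country'])
X_train = scaler.transform(train.loc[:, train.columns != 'Country'])
X_test = scaler.transform(test.loc[:, test.columns != 'Country'])

nn_clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
nn_clf.fit(X_train, train['Country'])
accuracy_score(nn_clf.predict(X_test), test['Country'])
# -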
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import re, spacy, gensim # Sklearn from sklearn.decomposition import LatentDirichletAllocation from sklearn.feature_extraction.text import CountVectorizer from sklearn.model_selection import GridSearchCV from pprint import pprint # Plotting tools import pyLDAvis import pyLDAvis.sklearn # - # Import Dataset df = pd.read_json('https://raw.githubusercontent.com/selva86/datasets/master/newsgroups.json') print(df.target_names.unique()) df.head(15) data = df.content.values.tolist()[0:100] # Remove Emails data = [re.sub('\S*@\S*\s?', '', sent) for sent in data] # Remove new line characters data = [re.sub('\s+', ' ', sent) for sent in data] # Remove distracting single quotes data = [re.sub("\'", "", sent) for sent in data] pprint(data[:1]) # + def sent_to_words(sentences): for sentence in sentences: yield(gensim.utils.simple_preprocess(str(sentence), deacc=True)) # deacc=True removes punctuation data_words = list(sent_to_words(data)) print(data_words[:1]) # + def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']): """https://spacy.io/api/annotation""" texts_out = [] for sent in texts: doc = nlp(" ".join(sent)) texts_out.append(" ".join([token.lemma_ if token.lemma_ not in ['-PRON-'] else '' for token in doc if token.pos_ in allowed_postags])) return texts_out # Initialize spacy 'en' model, keeping only tagger component (for efficiency) # Run in terminal: python3 -m spacy download en nlp = spacy.load('en', disable=['parser', 'ner']) # Do lemmatization keeping only Noun, Adj, Verb, Adverb data_lemmatized = lemmatization(data_words, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']) print(data_lemmatized[:2]) # + vectorizer = CountVectorizer(analyzer='word', min_df=10, # minimum required occurrences of a word stop_words='english', # remove stop words lowercase=True, # convert all words to lowercase token_pattern='[a-zA-Z0-9]{3,}', # keep tokens of 3 or more characters # max_features=50000, # max number of unique words ) data_vectorized = vectorizer.fit_transform(data_lemmatized) # - # Materialize the sparse data data_dense = data_vectorized.todense() # Compute Sparsity = Percentage of Non-Zero cells print("Sparsity: ", ((data_dense > 0).sum()/data_dense.size)*100, "%") # + # Build LDA Model lda_model = LatentDirichletAllocation(n_components=20, # Number of topics max_iter=10, # Max learning iterations learning_method='online', random_state=100, # Random state batch_size=128, # n docs in each learning iter evaluate_every = -1, # compute perplexity every n iters, default: Don't n_jobs = -1, # Use all available CPUs ) lda_output = lda_model.fit_transform(data_vectorized) print(lda_model) # Model attributes # + # Log Likelihood: higher is better print("Log Likelihood: ", lda_model.score(data_vectorized)) # Perplexity: lower is better. Perplexity = exp(-1.
* log-likelihood per word) print("Perplexity: ", lda_model.perplexity(data_vectorized)) # See model parameters pprint(lda_model.get_params()) # + # Define Search Param search_params = {'n_components': [10], 'learning_decay': [.5]} # Init the Model lda = LatentDirichletAllocation() # Init Grid Search Class model = GridSearchCV(lda, param_grid=search_params) # Do the Grid Search model.fit(data_vectorized) # + # Best Model best_lda_model = model.best_estimator_ # Model Parameters print("Best Model's Params: ", model.best_params_) # Log Likelihood Score print("Best Log Likelihood Score: ", model.best_score_) # Perplexity print("Model Perplexity: ", best_lda_model.perplexity(data_vectorized)) # + # Create Document - Topic Matrix lda_output = best_lda_model.transform(data_vectorized) # column names topicnames = ["Topic" + str(i) for i in range(best_lda_model.n_components)] # index names docnames = ["Doc" + str(i) for i in range(len(data))] # Make the pandas dataframe df_document_topic = pd.DataFrame(lda_output, columns=topicnames, index=docnames) # Get dominant topic for each document dominant_topic = np.argmax(df_document_topic.values, axis=1) df_document_topic['dominant_topic'] = dominant_topic df_document_topic.round(2) # - df_topic_distribution = df_document_topic['dominant_topic'].value_counts().reset_index(name="Num Documents") df_topic_distribution.columns = ['Topic Num', 'Num Documents'] df_topic_distribution pyLDAvis.enable_notebook() panel = pyLDAvis.sklearn.prepare(best_lda_model, data_vectorized, vectorizer, mds='tsne') panel # + # Topic-Keyword Matrix df_topic_keywords = pd.DataFrame(best_lda_model.components_) # Assign Column and Index df_topic_keywords.columns = vectorizer.get_feature_names() df_topic_keywords.index = topicnames # View df_topic_keywords.head() # - df_topic_keywords.sum(axis = 1) # + # Show top n keywords for each topic def show_topics(vectorizer=vectorizer, lda_model=lda_model, n_words=20): keywords = np.array(vectorizer.get_feature_names()) topic_keywords = [] for topic_weights in lda_model.components_: top_keyword_locs = (-topic_weights).argsort()[:n_words] topic_keywords.append(keywords.take(top_keyword_locs)) return topic_keywords topic_keywords = show_topics(vectorizer=vectorizer, lda_model=best_lda_model, n_words=15) # Topic - Keywords Dataframe df_topic_keywords = pd.DataFrame(topic_keywords) df_topic_keywords.columns = ['Word '+str(i) for i in range(df_topic_keywords.shape[1])] df_topic_keywords.index = ['Topic '+str(i) for i in range(df_topic_keywords.shape[0])] df_topic_keywords # - ids = list(map(lambda x : int(x[3:]), list(df_document_topic[df_document_topic['dominant_topic']==4].index))) data_to_summarize = [data[i] for i in ids] len(data_to_summarize) # + from transformers import pipeline def run_summary(gpt2_input): summarizer = pipeline("summarization") return summarizer(gpt2_input, max_length = 10) test = """ died years ago. But for devoted fans of the iconic Apple co-founder, it seems like just yesterday that Jobs stepped away from his role as Apple’s chief executive to seek treatment for the rare pancreatic cancer that would eventually kill him. But that begs the question: When did die? And what were his last words? Read on to get all the details on Jobs’ untimely death, what he regretted about the choices he made during his life, and what his sister said were ’ last words. developed a rare form of pancreatic cancer opens the Apple Worldwide Developers conference in 2005 opened the Apple Worldwide Developers conference in 2005. 
| / Getty Images WebMD reports that if had developed the most common form of pancreatic cancer, adenocarcinoma, he would likely have died soon after his 2003 diagnosis. Instead, he had an unusual form of pancreatic cancer known as a neuroendocrine tumor or islet cell carcinoma, which typically has a much better prognosis. Islet cells — the hormone-producing cells of the pancreas — develop cancers that are highly treatable and often curable. In 2004, Jobs underwent surgery to remove the tumor. Jobs is said to have undergone a Whipple procedure. That usually involves removing the head of the pancreas, part of the bile duct, the gallbladder, and the first part of the small intestine. Years later, in 2009, he received a liver transplant. That would have meant that cancer had spread to his liver. However, cancer can recur even after the transplant. And because the patient is on immune-suppressing anti-rejection drugs, there’s little that doctors can do. That’s what happened to . He regretted his treatment decisions The Telegraph reports that told his biographer that he regretted spending time trying to treat his cancer with alternative medicine. Jobs delayed operations and chemotherapy for nine months after his diagnosis in 2003 to try to treat his cancer without surgery. Biographer explained, “I think he felt: if you ignore something you don’t want to exist, you can have magical thinking. It had worked for him in the past. He would regret it.” Jobs’ wife, , told Isaacson, “The big thing was he really was not ready to open his body. It’s hard to push someone to do that.” When Jobs did agree to traditional treatment, he had his DNA sequenced at the cost of $100,000. That enabled doctors to specifically target treatment to the particular molecular pathways that were defective in his body. But eventually, surgery revealed that cancer had spread beyond the pancreas to his liver. But the tumor could have spread even with earlier treatment introduces the iPad 2 waves to the crowd. | / Getty Images might have regretted the decision to try acupuncture, a vegan diet, herbs, and juices before operating on his cancer. But The New York Times notes that it’s impossible to know whether that would have stopped his cancer from eventually killing him. , one of Jobs’ doctors, told the Times, “No one can say whether or not having surgery earlier would have made any difference because of the possibility of micrometastases.” Micrometastases are tiny cancers that form in various organs when a tumor starts to spread around the body. The Times notes, “Dr. Ornish’s comment means that in theory, Mr. Jobs’ tumor could already have spread invisibly to his liver by the time it was first diagnosed. If it had, operating earlier probably would not have made a difference.” Another doctor explained that among patients with this kind of tumor, “when they are first found on a scan, about 60 percent of the time it’s already metastasized to the liver.” When did die? Harvard Health reports that died on October 5, 2011, “almost exactly eight years after his cancer was discovered incidentally on a CT scan of his kidneys (the pancreas is near the left kidney).” Jobs got the CT scan at the recommendation of his urologist, who was concerned about kidney stones he’d had several years earlier. Jobs’ liver transplant didn’t preclude his cancer from recurring. His liver was full of cancer when doctors removed it. That means that cancer had likely already spread outside the pancreas and liver at the time of the transplant. 
When died on October 5, 2011, he was just 56 years old. He had taken medical leave from Apple starting in January that year, CNN reports. And he stepped down as Apple’s chief executive in August 2011, saying he could “no longer meet (his) duties and expectations.” Jobs was survived by his wife of 20 years, Laurene, and four children, including one — -Jobs — from a prior relationship. """ run_summary(test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Days 9 Class Exercises: Seaborn # For these class exercises, we will be using a wine quality dataset which was obtained from this URL: # http://mlr.cs.umass.edu/ml/machine-learning-databases/wine-quality. The data for these exercises can be found in the `data` directory of this repository. # # ![Task](../media/new_knowledge.png) Additionally, with these class exercises we learn a few new things. When new knowledge is introduced you'll see the icon shown on the right: # # ## Get Started # Import the Numpy, Pandas, Matplotlib (matplotlib magic) and Seaborn. # + import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # - # ## Exercise 1. Explore the data # First, read about this dataset from the file [../data/winequality.names](../data/winequality.names) # Next, read in the file named `winequality-red.csv`. This data, despite the `csv` suffix, is separated using a semicolon. wine = pd.read_csv("..//data/winequality-red.csv", sep=';') wine.head() # How many samples (observations) do we have? wine.shape # Are the data types for the columns in the dataframe appropriate for the type of data in each column? wine.dtypes # Any missing values? wine.isna().sum() wine.duplicated().sum() # ## Exercise 2: Explore the Data # The quality column contains our expected outcome. Wines scored as 0 are considered very bad and wines scored as 10 are very excellent. Plot a bargraph to see how many samples are there per each quality of wine. # # **Hints**: # - Use the [pd.Series.value_counts()](https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html) function to count the number of values # - Panda DataFrames and Series have built in plotting funcitons that use MatplotLib. Therefore, we can use the [pd.Series.plot.bar()](https://pandas.pydata.org/docs/reference/api/pandas.Series.plot.bar.html) function to simplify use of matplotlib. wine['quality'].value_counts(sort = False).plot.bar(); # Now use Matplotlib functionality to recreate the plot (no need to color each bar) qcounts =wine['quality'].value_counts(sort = False) fig = plt.figure() # Recreate the bargraph using Seaborn sns.barplot(x = 'quality', data = wine ) # Describe the data for all of the columns in the dataframe. This includes our physicochemical measurements (independent data) as well as the quality data (dependent). # Visualizing the data can sometimes better help undrestand it's limits. Create a single figure, that contains boxplots for each of the data columns. Use the [seaborn.boxplot()](https://seaborn.pydata.org/generated/seaborn.boxplot.html) function to do this: # # ![Task](../media/new_knowledge.png)In our plot, the axis labels are squished together and many of the box plots are too hard to see because all of them share the same y-axis coordinate system. 
Unfortunately, not all Seaborn functions provide arguments to control the height and widht of a plot, the `boxplot` function is one of them. However, remember that Seaborn uses matplotlib! So, we can use matplot lib functions set the height using a command such as: # # ```python # plt.figure(figsize=(10, 6)) # ``` # Where the first number is the width and the second number is the height. Repeat the plot from the previous cell but add this line of code just above the figure. # ![Task](../media/new_knowledge.png) Unfortunately, we are still unable to read some of the x-axis labels. But we can use Matplotlib to correct this. When calling a Seaborn plot function it will return the Matplotlib axis object. We can then call functions on the axis such as the [set_xticklabels](https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_xticklabels.html) function. That function will allow us to set a rotation on the axis tick labels and it takes a `rotation` argument. For example. The following function call on an axis object named `g` will reset the tick labels (using the `get_xticklabels()` function) and set a rotation of 45 degrees. # # ```python # g.set_xticklabels(g.get_xticklabels(), rotation=45); # ``` # # Try it on the wine data boxplot: # The boxplots from some of the measurements are too squished to view their distribution. The [seaborn.FacetGrid()](https://seaborn.pydata.org/generated/seaborn.FacetGrid.html) function can help. It allows us to divide our data into differnet panels of the same figure. But, it requires that our data be in tidy format. # ![Task](../media/new_knowledge.png)Using `FacetGrid` we can divide up our plots into rows and columns using variable. Here are few important arguments that can be passed to the `FacetGrid` function. # # - **data**: Tidy (“long-form”) dataframe where each column is a variable and each row is an observation. # - **row**, **col**: Variables that define subsets of the data, which will be drawn on separate facets in the grid. # - **col_wrap**: “Wrap” the column variable at this width, so that the column facets span multiple rows. Incompatible with a row facet. # - **sharex**, **sharey**: If true, the facets will share y axes across columns and/or x axes across rows. # # We have two variables in our tidy wine data set: "quality" and the "measurement". We want to create a separate boxplot for each measurement regardless of quality in this case we can either have a grid of 1 column or 1 row. It is your choice. # # After you've created a `FacetGrid` you must then tell the grid what type of plot you want to draw. This is performed using the [map](https://seaborn.pydata.org/generated/seaborn.FacetGrid.map.html#seaborn.FacetGrid.map) function of the `seaborn.axisgrid.FacetGrid` object. Let's walk through a demonstration to see how it works. # # First, import the tips dataset: # # ```python # tips = sns.load_dataset('tips') # tips.head() # ``` # Next create a `FacetGrid` that will divide the data by meal time and sex # # ```python # g = sns.FacetGrid(tips, col="time", row="sex") # ``` # Notice the result is an empty grid. Now we need to indicate the type of plot we weant to draw. For this example, we'll draw a `sns.scatterplot` plot. When we call the `map` function any arguments given get passed to the scatterplot function: # # ```python # g = sns.FacetGrid(tips, col="time", row="sex") # g.map(sns.scatterplot, "total_bill", "tip"); # ``` # Now, lets use a `FacetGrid` to create boxplots for each measurement in separate facet. Do the following # 1. 
Tidy the wine data. Be sure to keep the `quality` column as is, and melt the others into a single column named `measurement` # 2. Unlike the tip data, we only have one variable we want to calculate boxplots for: measurement. We don't not want to create box plots for measurement and quality. So, we only need one row of plots. # 3. Make the row of plots span 2 rows so we can see them more easily. # 4. Make sure that each boxplot does not share the x-axis coordinates with all other boxplots. # Redo the FacetGrid plot but use the [seaborn.violinplot](https://seaborn.pydata.org/generated/seaborn.violinplot.html) instead. # # Redo the FacetGrid plot but with the [seaborn.swarmplot](https://seaborn.pydata.org/generated/seaborn.swarmplot.html) instead. Be sure to set the `size` argument for the swarmplot to 1. # # **Note**: this may take awhile to create. # Next, let's look for columns that might show correlation with other columns. Colinear data can be problematic for some analyses. Use the Seaborn [seaborn.pairplot](https://seaborn.pydata.org/generated/seaborn.pairplot.html) function to do this. # # Be sure to: # # - Color each point with the quality value. # - Use the 'tab10' palette for coloring # # **Note**: this may take awhile to create) # Do you see any measurement types that are correlated? # Perform correlation analysis on the data columns. Exclude the `quality` column from the correlation analysis. # Use the [seaborn.heatmap](https://seaborn.pydata.org/generated/seaborn.heatmap.html) function to create a heatmap of the correlation values between data columns. # # Be sure to: # - Make sure the color values span from -1 to 1 (i.e., they are not inferred from the data). # - Show the correlation values in the cells of the heatmap # - Make the figure large enough to read the correlation values # - Make the cells of the heatmap square # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib import seaborn as sns plt.style.use('ggplot') plt.style.use(['dark_background']) # - # ## Motivation # # I just had a conversation with a friend who is looking to get a hold of her own finances but doesn't know where to start. I couldn't find a good summary that reduces the math down to a simple terms. MMM comes close. So I just decided to write my own piece. # # ## Financial Independence # Financial independence is the idea that you've "invested" enough money so that these investments start "passively" making money for you. The basic principle is that you leave the original investment untouched and simply shave the excess of every year. If that investment generates enough for you to live on, congratulations you're now financially independent. # # You can pursue that passion project you've always wanted to start but the 9 to 5 never left you with enough time/energy to do. Your house, groceries, etc are paid for. You don't have to work unless you get bored and want to. You've bought your way out of the system. # # > IMAGE of root beer # # # # ## But how much money do you need? # # Well, that's where things get complicated for many people. They read about different asset classes, market cycles, balancing portfolios, etc and decide: "I'll just pay someone else to do this for me." Or worse, I will just save 20% and call it good enough. 
# # However, not everyone can afford a financial planner and simply not thinking about it is a great way to never retire. # # The good news is that we simplify things if we stop trying to "beat" the system. # # The market grows every year right? We'd have problems if it didn't... What if there was a single stock that simply followed the market's inherent growth? If you chose to simply put ALL your excess money into that stock... maybe that could be enough? # # That's what I hope to dive into with this post. # # ## Introducing the 4% rule # # The basic assumption is that, on average, the market grows 7% every year and that the SP500 (for simplicity let's just call this "a stock") follows that growth. So if you put \\$100 into that "stock" this year, you can expect to have \\$107 next year (on average). But, of course, inflation decreases the value of that \\$107 by about 3% every year (yes, your money is basically on fire if you leave it in cash). # # > sp500 plot # # Thus, we have the 4% rule. This simply says that you can expect 4% growth of your investment every year assuming the market grows 7% and you lose 3% of that to inflation. # # It's crude. It's reductionist. But it turns out to be good enough to get started. There are many arguments that can be made against this "rule of thumb" but I think MMM handles them well so I won't list them here. # # ## Applying the 4% rule # # Remember, we want to keep our initial investment "untouched" when we retire so we can just shave the cream off the top every year. If that "cream," is enough to live on, then we can retire. So we need our "nest egg" to be large enough such that 4% of it matches our standard of living (how much we spend). # # Again, how much money is that? # # Well, if we use the 4% rule above, it turns out to be 25 times the amount that you spend every year. So if you spend \\$100,000 a year then you'd need to save \\$2.5 million to retire. That seems like a lot. But if you can live off \\$40,000, it reduces to \\$1 million. # # "What you need to retire" = 25 x "what you require to live" # # To make it visual, I plotted your nest-egg size (in millions) vs your annual spending budget (in thousands). upper_lim = 200 plt.figure(figsize=(16,8)) spending = np.linspace(10,upper_lim,100) nestegg = spending * 25 / 1000 plt.plot(spending,nestegg,c='c') plt.grid(alpha=0.1); plt.ylabel('Needed for Financial Independence (Millions $)',fontsize=18) plt.xlabel('How much you spend annually (Thousands $)',fontsize=18) matplotlib.rcParams.update({'font.size': 22}) plt.xlim(0,upper_lim); plt.suptitle('How much to save before Financial Independence',fontsize=25) plt.show() # ## Lower your Living Standard >> Break Free Faster # # As I looked for articles for my friend, I kept finding things that said "well just make sure you save 10 to 20% and you'll be fine." # # Contrary to this theory, it's not how much you earn but how much you can save. Of course earning a lot helps, but the rate that you **save** is heavily influenced by how much you **spend**. As you spend more, # - you save less of your income # - your "retirement target" gets bigger/harder to reach (again see plot above) # # Spend less, save faster. Spend more, make it reallllly difficult to retire. # # ## But ... my kids' college, house payments, health care, etc. # # Exactly. Start thinking about how much things cost. # # How much of a difference is it to eat-in vs eat-out? How much do you expect to pay for rent or house payments? We all need to unwind so how much do my "nights out" cost? 
All of these things fall under "cost of living." # # The best way to stay stuck in the work-for-someone-else lifestyle, is to act like these things are too complicated for you. Get a spreadsheet out and start playing around. # # How much do you need to retire given the above calculation? Can you cut costs anywhere to make it easier to hit that mark? # # Remember as you cut costs, the number you need to hit goes down AND you're able to save faster on the same income. # ## How long do I have to retire? # # Well, we've got the first step. We have calculated how much we need to retire based on our "tastes" in lifestyle. However, most people care about how long do they have to work # # ## The true cost of buying something # # So that's it. If you can determine what your standard of living is, you can determine how much money you could retire on if you put all your savings into the SP500 and just milked that as your retirement plan. But the key is calculating how much you spend. Notice in the plot above that if you spend \\$50,000 a year that you could retire after you had just over a million dollars. However, if you insist on living at $200,000 a year, you would need to acquire 4x that amount. It's just simple math, # # # # However, it's actually not as simple as working 4 times as long. Because you're actually # # ## But... I like {insert expensive thing} # # That's totally fine. If you can't find a sufficient substitute etc... MMM has a great argument for this... # # It's about being happy... not about keeping up. If you need that thing and there is absolutely nothing else that fu7lfills that need in your life. Then you know where you draw your line. # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + deletable=true editable=true # %matplotlib inline import numpy as np import pandas as pd from sklearn.neural_network import MLPClassifier from sklearn import metrics from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import Imputer # + deletable=true editable=true train_df = pd.read_csv('../data/raw/train.csv') train_df.sample(5) # + deletable=true editable=true test_df = pd.read_csv('../data/raw/test.csv') test_df.sample(5) # + deletable=true editable=true # dummies for unseen data # http://stackoverflow.com/a/37451867/436721 train_df["SibSp"] = train_df["SibSp"].astype('category',categories=[0,1,2,3,4,5,6,7,8,9]) train_df["Parch"] = train_df["Parch"].astype('category',categories=[0,1,2,3,4,5,6,7,8,9]) test_df["SibSp"] = test_df["SibSp"].astype('category',categories=[0,1,2,3,4,5,6,7,8,9]) test_df["Parch"] = test_df["Parch"].astype('category',categories=[0,1,2,3,4,5,6,7,8,9]) # + deletable=true editable=true def xtract(df,test=False): cpy = df.copy() cpy = pd.concat([df,pd.get_dummies(df["Sex"],prefix='sex')],axis=1) cpy = pd.concat([cpy,pd.get_dummies(df["Pclass"],prefix='class')],axis=1) cpy = pd.concat([cpy,pd.get_dummies(df["SibSp"],prefix='sib')],axis=1) cpy = pd.concat([cpy,pd.get_dummies(df["Parch"],prefix='parch')],axis=1) cpy = pd.concat([cpy,pd.get_dummies(df["Embarked"],prefix='emb')],axis=1) # only columns we'll actually use if test: cpy = cpy[ [col for col in list(cpy) if col.startswith('sex') or col.startswith('class') or col.startswith('sib') or col.startswith('parch') or col.startswith('emb') or col == "Age" or col == "Survived" 
or col == "PassengerId" ] ] return cpy else: cpy = cpy[ [col for col in list(cpy) if col.startswith('sex') or col.startswith('class') or col.startswith('sib') or col.startswith('parch') or col.startswith('emb') or col == "Age" or col == "Survived" ] ] return cpy.dropna() train_df_clean = xtract(train_df) train_df_clean.head() # + deletable=true editable=true test_df_clean = xtract(test_df,test=True) test_df_clean.sample(5) # + deletable=true editable=true X = [] y = [] for row in train_df_clean.values: X.append(row[1:]) y.append(row[0]) X = np.array(X) y = np.array(y) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10) # + deletable=true editable=true # comment this block to see the difference scaler = StandardScaler() scaler.fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(X_test) # + deletable=true editable=true clf = MLPClassifier(random_state=1,solver='lbfgs') clf.fit(X_train,y_train) metrics.accuracy_score(y_test,clf.predict(X_test)) # + deletable=true editable=true X_test.shape # + deletable=true editable=true passenger_ids = [] X_out = [] for row in test_df_clean.values: passenger_ids.append(row[0]) X_out.append(row[1:]) X_out = np.array(X_out) # + deletable=true editable=true imp = Imputer() imp.fit(X_train) X_out = imp.transform(X_out) X_out.shape # + deletable=true editable=true y_out = clf.predict(X_out) # + deletable=true editable=true out_df = pd.DataFrame({ 'PassengerId': passenger_ids, 'Survived': y_out }) out_df.head() # + deletable=true editable=true out_df["PassengerId"] = out_df["PassengerId"].apply(lambda dbl: int(dbl)) out_df["Survived"] = out_df["Survived"].apply(lambda dbl: int(dbl)) out_df.head() # + deletable=true editable=true out_df.to_csv("../data/interim/nn.csv", index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="KeysP1LNr3qb" # # Before you begin # # # 1. Have a GCP projrect ready. # 3. [Enable Iam Recommender](https://console.cloud.google.com/flows/enableapi?apiid=recommender.googleapis.com) APIs for the project. 
# + [markdown] id="8lKMrjzbr_VW" # ### Provide your credentials to the runtime # + id="Z1Oy7UPTsSCn" from google.colab import auth auth.authenticate_user() print('Authenticated') # + [markdown] id="v3D9dGE0sWaG" # ## Understand GCP IAM Recommender # + [markdown] id="HlmIyARMHkr_" # **Declare the Cloud project ID which will be used throughout this notebook** # + id="spjyyYA-Hv0h" project_id = "Enter-your-project" # + [markdown] id="bhsmYC0MLkZF" # **A helper function to execute `gcloud` commands** # + id="p2wSITYfLrbM" import json import subprocess def execute_command(command): return json.loads(subprocess.check_output(filter(lambda x: x, command.split(" "))).decode("utf-8")) # + id="yNZm0xtVH0PA" recommender_command = f"""gcloud recommender recommendations list \ --location=global \ --recommender=google.iam.policy.Recommender \ --project={project_id} \ --format=json """ # + id="_Qcy3RBzIe-8" recommendations = execute_command(recommender_command) # + id="1HGqDGDQJE_2" recommendations[7] # + [markdown] id="QYhLpEUONC9J" # ### Getting insight for the recommendations # + id="tjHLyGwzK6c4" insight_command = f"""gcloud recommender insights list \ --project={project_id} \ --location=global \ --insight-type=google.iam.policy.Insight \ --format=json """ # + id="25xiSvoxNCKQ" insights = execute_command(insight_command) # + id="mfOKhPgnNYbB" insights[0] # + [markdown] id="rTz9WTExOMvn" # # Generate diff view # + id="UMnkEtLSeu1_" recommendation_name = "Enter-the-recommendation-name" # + id="OzY3iVqFe673" cellView="form" #@title A helper to generate diff view. It uses IAM roles api also. import pandas as pd def generate_diff_view(recommendation_name): role_to_permission_command = "gcloud iam roles describe {} --format=json" recommendation = [r for r in recommendations if r["name"] == recommendation_name][0] insight_name = recommendation["associatedInsights"][0]["insight"] added_roles = [] removed_role = [] for op in recommendation["content"]["operationGroups"][0]["operations"]: if op["action"] == "add": added_roles.append(op["pathFilters"]["/iamPolicy/bindings/*/role"]) if op["action"] == "remove": removed_role.append(op["pathFilters"]["/iamPolicy/bindings/*/role"]) cur_permissions = set(execute_command( role_to_permission_command.format(removed_role[0]))["includedPermissions"]) recommended_permisisons = set() for r in added_roles: recommended_permisisons.update(execute_command( role_to_permission_command.format(r))["includedPermissions"]) removed_permisisons = cur_permissions - recommended_permisisons insight = [insight for insight in insights if insight["name"] == insight_name][0] used_permissions = set(k["permission"] for k in insight["content"]["exercisedPermissions"]) inferred_permissions = set(k["permission"] for k in insight["content"]["inferredPermissions"]) unused_but_still_common_permissions = (recommended_permisisons - used_permissions - inferred_permissions) types = (["used"] * len(used_permissions) + ["ml-inferred"] * len(inferred_permissions) + ["common"] * len(unused_but_still_common_permissions) + ["removed"] * len(removed_permisisons)) permissions = [*used_permissions, *inferred_permissions, *unused_but_still_common_permissions, *removed_permisisons] return pd.DataFrame({"type": types, "permission": permissions}) # + id="tBAj0ausiCdD" diff_view = generate_diff_view(recommendation_name) # + id="JPXHLLryj34u" diff_view # + id="D51dSXzuki6e" diff_view["type"].value_counts() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # 
jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 import prediction from utils import load_data,get_data_day import logging logging.basicConfig(level=logging.INFO) # ## Use functions separately all_data=load_data('Data/training/') # get data from files # returns a dictionary {"sites":sites,"chimeres":chimeres,"geops":geops,"meteo":meteo,"concentrations":concentrations} # see utils.load_data function for more details data_day=get_data_day(3,all_data) ## extract allowed data for day=3 (note: January 4, since Python indexing starts at 0) ## apply predict function for day=3 prediction.predict(3,sites=all_data['sites'],chimeres_day=data_day[0],geops_day=data_day[1],meteo_day=data_day[2],concentrations_day=data_day[3]) # ## Main function similar to the platform prediction.run_predict(list_days=[5],dirname='Data/training/') ## Get score # %run scoring # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Part 1: Data Wrangling # ## Introduction # # This project is a self-made, end-to-end machine learning project in which I scrape a website called 'Jendela 360'. The scraped dataset is saved in a csv file named 'Apartment Data Raw'. The dataset contains the details of apartment units available to be rented in Jakarta and its surroundings (the Jabodetabek region) on October 18, 2020. The data discussed here might not be up-to-date. # # Problem Statement of this project: # "Based on the scraped data of apartments in Jakarta and its surroundings, the writer aims to construct a machine learning model to predict the annual rent price of apartment units. If possible, the writer aims to find which features/factors have the most impact on an apartment unit's annual rent price." # # In the first notebook, we are going to load the raw dataset and conduct data wrangling to draw insights and clean the data. Our goal is to have a cleaned dataset at the end of this notebook, so we can use the cleaned data to create and test regression models in the second notebook. # # Last but not least, this project is non-profit and made for learning purposes only.
# ## Importing Packages # + # Essentials import numpy as np import pandas as pd import datetime import random # Plots import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline # Models from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor, BaggingRegressor from sklearn.kernel_ridge import KernelRidge from sklearn.linear_model import Ridge, RidgeCV from sklearn.linear_model import ElasticNet, ElasticNetCV from sklearn.svm import SVR from mlxtend.regressor import StackingCVRegressor import lightgbm as lgb from lightgbm import LGBMRegressor from xgboost import XGBRegressor # Stats from scipy.stats import skew, norm from scipy.special import boxcox1p from scipy.stats import boxcox_normmax # Misc from sklearn.model_selection import GridSearchCV from sklearn.model_selection import KFold, cross_val_score from sklearn.metrics import mean_squared_error from sklearn.preprocessing import OneHotEncoder from sklearn.preprocessing import LabelEncoder from sklearn.pipeline import make_pipeline from sklearn.preprocessing import scale from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import RobustScaler from sklearn.decomposition import PCA # - # ## Importing Dataset # + raw_df = pd.read_csv('Web Scraping/Apartment Dataset Raw.csv') #We are going to save the unaltered dataset as raw_df, and use the dataframe 'df' to do the next data wrangling operations df = raw_df df.head() # - df = df.rename({'Unnamed: 0' : 'Index'}, axis = 'columns') # ## Data Cleaning # ### Raw Data Shape and Column Description print(df.columns) df.shape # Each row represents a unique unit of apartment which was displayed on Jendela 360 for rent on 18th October 2020. We have 5339 rows and 47 columns. The columns represent various characteristics of each unit, and is described as follows. # # The following columns describe the identification data of each unit (location, name, etc). # * Index: the index of each row (self-ecplanatory) starting at 0. # * URL: the URL each apartment unit page on Jendela 360 website. # * Unit_Name: the apartment unit name on its page. # * Unit_ID: the ID of each page (the last seven characters of the URL). Unique for each apartment unit. # * Apt_Name: the apartment building name of the unit. # * Street: the street address of the unit. # * Locality: the local district of the unit. # * Region: the city of the unit. # * Longitude and Latitude: the geographical longitude and latitude coordinate of the unit # * Floor: the floor location of the unit. # * Tower: the name of the tower in which the unit is located in. # # The following columns describe the facilities of each apartment unit. The two columns which houses numerical (quantitative) data about each apartment unit's facilities are: # * No_Rooms: the number of bedrooms in each apartment unit. # * Area: the area in meter suqared of each apartment unit. # # The other columns which describe the facilities of each unit are categorical in nature. The value of each column is '1' if the facility is present, and '0' if the facility is not present. 
These columns are: # * Furnished (1 represents that the unit is fully furnished, and vice versa) # * AC # * Water_Heater # * Dining_Set # * Electricity # * Bed # * Access_Card # * Kitchen # * Fridge # * Washing_Machine # * TV # * ATM # * TV_Cable # * Grocery # * Internet # * Swim_Pool (swimming pool) # * Laundry # * Security # * Basketball (basketball field) # * Multipurpose_room # * Gym # * Jogging (jogging track) # * Tennis (tennis field) # * Restaurant # * Playground # # The following columns describe the fee of each unit. The only fee that each apartment has is the annual rent price. Not all apartment units are available to be rented on a monthly term. There are also cases where the deposit and service charges are not listed. Furthermore, it will be very easy to predict the annual price if we know the monthly price, as we just need to multiply it by 12. That's why we are going to remove every fee column in the dataset and only take the annual rent price (in rupiah) as the dependent variable of our model. # # * Currency: the currency unit of the listed price. # * Monthly_Price: the monthly payment fee if the tenant wishes to rent it on monthly term. # * Annual_Price: the annual payment fee if the tenant wishes to rent it on yearly term. # * Deposit_Currency: the currency unit of the listed deposit charge. # * Deposit_Charge: the initial deposit charge. # * Service_Currency: the currency unit of the service charge. # * Service_Charge: the service charge of the unit. # ### Omiting ERROR Rows # # The web scraper uses a ```try:...except:``` block to keep on reading and scraping new pages even if the current iteration raises an error. This is done so the scraping process could be automated, and if a web page raises an error, we don't have to restart the scraping process from the beginning again. If a page raises an error, the whole row (except the URL) will be filled with the string 'ERROR'. The best way to find 'ERROR' rows is to find which rows that have an 'ERROR' Apt_Name column, as that is the features that exists in all apartment unit web pages. # # In this step, we are going to remove all 'ERROR' rows. df.shape df = df[df.Apt_Name != 'ERROR'] df = df.reset_index(drop = True, inplace=False) df.shape # We can see that there are 18 rows which are omitted. These rows are the 'ERROR' rows. # ### Identifying the Dependent/Outcome Variable # Referring to the initial problem statement of this project, we hereby decide that the annual rent price of the apartment will be our dependent variable for the regression model. Furthermore, we should not look at the values from monthly price, deposit charge, and service charge, as we would like to predict the annual rent price only using the apartment unit's identification data (location) and facilities. # # After deciding which variable will be our outcome variable, we should make sure that the annual price data is in the same currency unit. If the currency of the annual rent price is in dollars, we have to convert it to Rupiah. # # The assumption used is that 1 USD = 14,700 IDR. df.Currency.value_counts() # We see that there are 5200 apartment unit rent prices which are listed in Rupiah, 57 prices which are listed in US Dollars. We need to convert the price of these 57 apartment units from USD to IDR. To convert it, we need to multiply the Annual_Price value by 14700 if the value of Currency equals to 'USD'. However, before doing any of that, we need to make sure that the values in Annual_Price columns are read as numbers by pandas. 
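# As an aside, both steps (coercing the column to numbers and converting the USD rows) can also be done without an explicit loop. The sketch below is a vectorized alternative, assuming the 'Currency' and 'Annual_Price' columns described above; the notebook's own step-by-step version follows.

# +
# (sketch) coerce Annual_Price to a numeric dtype, then convert the USD rows to Rupiah in place
df['Rupiah_Annual_Price'] = pd.to_numeric(df['Annual_Price'], errors='coerce')
df.loc[df['Currency'] == 'USD', 'Rupiah_Annual_Price'] *= 14700
# -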
df.Annual_Price # As we can see, the 'Annual_Price' has the data type of object. This means we need to convert it to float first, before multiplying it by 14700 if it is in USD to convert it properly. # + Rupiah_Annual_Price = list() currency_changed = 0 for i, price in enumerate(df.Annual_Price): if df.Currency[i] == 'USD': Rupiah_Annual_Price.append(float(price)*14700) currency_changed += 1 else: Rupiah_Annual_Price.append(float(price)) df['Rupiah_Annual_Price'] = Rupiah_Annual_Price print(currency_changed) # - # The currency_changed counter is used to tell us how many currency conversion has been done, and we are glad to see that there are 57 currency conversions, which is the same number of 'USD' occurences in the 'Currency' column of our dataset. # # Next, we are going to remove the columns which are no longer needed ('Currency', 'Annual_Price' 'Monthly_Price', 'Deposit_Currency', 'Deposit_Charge', 'Service_Currency', 'Service_Charge'). # # We are then renaming the 'Rupiah_Annual_Price' to 'AnnualPrice'. df = df.drop(['Currency', 'Annual_Price', 'Monthly_Price', 'Deposit_Currency', 'Deposit_Charge', 'Service_Currency', 'Service_Charge'], axis = 'columns') df = df.rename({'Rupiah_Annual_Price':'AnnualPrice'}, axis = 'columns') df.head() # ## Exploratory Data Analysis df.columns # In this step, we are going to do some data exploration to gain more insights on our dataset. First, we'll drop columns which we think might not be insightful for our model. We'll drop the 'Street' and 'Tower' column as it's quite difficult to parse and does not supply us with any insightful information. The 'Street' column is irrelevant as we have 'Locality', 'Region', as well as 'Longitude' & 'Latitude' column to draw geospatial insights form. The 'Tower' column is dropped because it's the name of the tower of each unit, and each apartment complex has different tower names. We suspect that the 'Unit_Name' and 'Apt_Name' might be dropped too, but we'll inspect them in a little bit to see if there are any insights we can draw from those columns. # # Note: We'll keep the 'URL' and 'Unit_ID' until we finish exploring the data in case we want to check on specific apartment units. df = df.drop(['Street', 'Tower'], axis = 'columns') # Next, we are going to inspect the 'Unit_Name' and 'Apt_Name' columns. df[['Apt_Name', 'Unit_Name']].head() # It seems that the 'Apt_Name' column just indicates the overall name of our Apartment complex, while the 'Unit_Name' mentions the number of bedrooms, and in some case, the furnished status of the apartment. Interestingly, the furnished status in 'Unit_Name' are divided into three levels: 'Non Furnished', 'Semi Furnished', and 'Fully Furnished'. However, in our 'Furnished' column, there are only two levels: 'Non Furnished' and 'Fully Furnished'. # # We can add a new level to our 'Furnished' feature by creating a 'Semi Furnished' level if the 'Unit_Name' of a particular row has the word 'semi' in it. We'll create a new column called 'FurnishedNew' for this feature. FurnishedNew = list() for i in range(len(df['Index'])): if df.Furnished[i] == '1': FurnishedNew.append('Full') elif df.Furnished[i] == '0': if 'semi' in df.Unit_Name[i].lower(): FurnishedNew.append('Semi') else: FurnishedNew.append('Non') df['FurnishedNew'] = FurnishedNew df.FurnishedNew.value_counts() # We'll see if this new feature is better than the existing 'Furnished' column. If this feature makes the model worse, then we'll simply use the two level 'Furnished' feature. 
We'll then drop the 'Apt_Name' and 'Unit_Name' column. df = df.drop(['Unit_Name', 'Apt_Name'], axis = 'columns') df.head() # Next, we are going to analyse each column and see if it is a good feature for our model or not. We will be plotting each feature against the predicted value, the 'AnnualPrice'. While there are other ways to perform feature selection which are relatively more automated, the writer chooses to do this to gain more insights personally on the dataset. # #### Number of Bedrooms bedroom_df = df[['URL','No_Rooms', 'AnnualPrice']] bedroom_df.No_Rooms.value_counts() # The apartment units in our dataset have 0 till 6 'number of bedrooms'. What does '0' number of bedroom means? During the scraping process, the writer discover that studio apartment units are written as having '0' number of bedrooms in the ```.json``` schema of the web page. We can then use ```df.groupby``` to see the average annual rent price of each category. avg_no_rooms = bedroom_df.groupby('No_Rooms')['AnnualPrice'].mean().reset_index().rename({'AnnualPrice':'Average Annual Price'}, axis = 'columns') print(avg_no_rooms) avg_no_rooms.plot(x = 'No_Rooms', y = 'Average Annual Price', kind = 'bar', figsize = [5,5]) # First column and we're already greeted with a surprise. Why is the studio apartment unit's average price higher than the average price of apartment units with 6 bedrooms? This is why exploring our dataset manually, or the way I prefer to say it - 'personally', is important. This data does not match our common sense, and we need to investigate it. The first thing to do in this situation is try to check for outliers. studio_check = bedroom_df['No_Rooms'] == '0' sns.boxplot(x = bedroom_df[studio_check].AnnualPrice) # First, we filter the 'AnnualPrice' column by 'No_Rooms' category. After selecting the annual rent prices of apartment units which are studio-typed, we can draw the boxplot using seaborn and we see there are two outliers. Let's check these out. bedroom_df[studio_check].sort_values(by=['AnnualPrice'], ascending=False).head(5) # After sorted by Annual Price, the top two apartment units have prices that are clearly beyond what's 'the norm' for studio apartment units. Using ```pd.set_option('display.max_colwidth', None)```, we can get the URL for these two apartment units, and then see for ourselves in their respective page. pd.set_option('display.max_colwidth', None) bedroom_df[studio_check].sort_values(by=['AnnualPrice'], ascending=False).head(2).URL # ![Website](54_million_studio.PNG "Upon opening the first link, we see a fifty four million dollars studio apartment") # Upon looking at the first link, we see that this 25 meter squared, studio apartment, is priced at fifty four million dollars. I think we can see the problem here. There are a few pages in which the currency used is wrong. Even apartment with 6 bedrooms are not priced fifty four million dollars a year. This unit's price should be fifty four million rupiah. # # The second unit in question also shares the problem. This time, the studio apartment is priced at thirty million dollars. We first need to clean this mess before we continue exploring the other columns. Let's also check if other number of bedrooms share the same issue. br2_check = bedroom_df['No_Rooms'] == '2' sns.boxplot(x = bedroom_df[br2_check].AnnualPrice) bedroom_df[br2_check].sort_values(by=['AnnualPrice'], ascending=False).head(5) # Turns out the problem isn't unique to studio apartments. 
We have to solve this issue first then, and unfortunately this can only be done in a relatively manual manner (checking the URL one by one). I'll get back after resolving this issue. # #### Finding and Fixing Outliers based on Number of Bedrooms # Create boolean identifiers studio_check = bedroom_df['No_Rooms'] == '0' br1_check = bedroom_df['No_Rooms'] == '1' br2_check = bedroom_df['No_Rooms'] == '2' br3_check = bedroom_df['No_Rooms'] == '3' br4_check = bedroom_df['No_Rooms'] == '4' br5_check = bedroom_df['No_Rooms'] == '5' br6_check = bedroom_df['No_Rooms'] == '6' # Fix for No_Rooms = '0' (Studio-Type) bedroom_df[studio_check].sort_values(by=['AnnualPrice'], ascending=False).head(5) df.loc[df.Unit_ID == 'sgpa014', 'AnnualPrice'] = 54000000 df.loc[df.Unit_ID == 'pgva007', 'AnnualPrice'] = 30000000 # Fix for No_Rooms = '1' (One Bedroom) bedroom_df[br1_check].sort_values(by=['AnnualPrice'], ascending=False).head(5) sns.boxplot(x = bedroom_df[br1_check].AnnualPrice) # I think the rent price for 1 bedroom appartment units are skewed to the right. None of the five highest apartment units (of one bedroom) have annual rent prices displayed in dollars. However, we're going to remove one point which is the highest priced apartment unit as it's quite far from the rest of the data points. i = df[((df.Unit_ID == 'frrb001'))].index df = df.drop(i) # Fix for 'No_Rooms' = '2' (Two Bedrooms) bedroom_df[br2_check].sort_values(by=['AnnualPrice'], ascending=False).head(5) sns.boxplot(x = bedroom_df[br2_check].AnnualPrice) df.loc[df.Unit_ID == 'blmc009', 'AnnualPrice'] = 50400000 # Fix for 'No_Rooms' = '3' (Three Bedrooms) bedroom_df[br3_check].sort_values(by=['AnnualPrice'], ascending=False).head(5) sns.boxplot(x = bedroom_df[br3_check].AnnualPrice) # It turns out that the highest bedroom price is still in Rupiah. However, the rightmost data point is considerably far away from the rest of the data points, and we'll consider it as an outlier to be removed. i = df[((df.Unit_ID == 'esdd002'))].index df = df.drop(i) # Fix for 'No_Rooms' = '4' (Four Bedrooms) bedroom_df[br4_check].sort_values(by=['AnnualPrice'], ascending=False).head(5) sns.boxplot(x = bedroom_df[br4_check].AnnualPrice) # Although there seems to be two outliers, upon further checking they don't seem to be a case of misused currency. However, those two rightmost points are considerably far away from the rest of the other data points, and we'll consider them as outliers to be removed. These two prices are even higher than apartment units with 6 bedrooms, and do not represent the norm. i = df[((df.Unit_ID == 'pkrf001'))].index df = df.drop(i) i = df[((df.Unit_ID == 'ppre001'))].index df = df.drop(i) # Fix for 'No_Rooms' = '5' (Five Bedrooms) bedroom_df[br5_check].sort_values(by=['AnnualPrice'], ascending=False).head(5) # There are only two apartment units in our dataset which has five bedrooms. We are not going to remove anything for now. # Fix for 'No_Rooms' = '6' (Six Bedrooms) bedroom_df[br6_check].sort_values(by=['AnnualPrice'], ascending=False).head(5) # There is only one aaprtment unit with six bedrooms. We are not going to remove anything for now - however, we might combine the units with 4, 5, and 6 into one category. 
br456_check = (bedroom_df.No_Rooms == '4') | (bedroom_df.No_Rooms == '5') | (bedroom_df.No_Rooms == '6') sns.boxplot(x = bedroom_df[br456_check].AnnualPrice) # #### Checking on the Updated Dataframe for No_Rooms Feature # + New_No_Rooms = list() for i, br_no in enumerate(df.No_Rooms): br_float = int(br_no) if br_float >= 4: New_No_Rooms.append(4) else: New_No_Rooms.append(br_float) df = df.drop(['No_Rooms'], axis = 'columns') df['No_Rooms'] = New_No_Rooms # - bedroom_df_updated = df[['URL','No_Rooms', 'AnnualPrice']] avg_no_rooms = bedroom_df_updated.groupby('No_Rooms')['AnnualPrice'].mean().reset_index().rename({'AnnualPrice':'Average Annual Price'}, axis = 'columns') print(avg_no_rooms) avg_no_rooms.plot(x = 'No_Rooms', y = 'Average Annual Price', kind = 'bar', figsize = [5,5]) sns.boxplot(x = "No_Rooms", y = 'AnnualPrice', data = df) # There we go. Now it makes sense - the more bedrooms an apartment unit has, the higher the annual rent price. Also, there are no longer apartment units priced way above the other units in the same category. Through evaluating outliers and checking the source data, we have 'cleaned' the 'No_Rooms' feature for now. # # The last step taken for this feature column is grouping the categories '4', '5', and '6'. There are only 3 units out of our more than 5000 rows which have 5 or 6 bedrooms, and that is not quite representative. # # Now, we might ask, why is the new category (of units with 4 or more bedrooms) given the value '4'? Shouldn't it be '4 and more'? # # Yes, it represents 4 or more bedrooms. However, this categorical variable will be treated as an ordinal variable in the machine learning model, so we have to keep the values as integers. We'll just have to keep in mind later, when writing the final report, that the value '4' in the No_Rooms feature represents not only units with 4 bedrooms, but also units with more than 4 bedrooms. # #### Analyzing Location Feature Columns # The next features to be discussed are the columns which describe where each unit is on the map. There are four such columns - two categorical ('Locality' and 'Region') and two continuous ('Longitude' and 'Latitude'). First let's look at the 'Region' column. df.Region.value_counts() # Whoa. It turns out the scraped pages also include apartment units from outside Jakarta and its surroundings. To stay true to our problem statement, we'll remove regions outside 'Jabodetabek'. df = df[(df.Region == 'Jakarta Selatan') | (df.Region == 'Jakarta Barat') | (df.Region == 'Jakarta Pusat') | (df.Region == 'Jakarta Timur') | (df.Region == 'Jakarta Utara') | (df.Region == 'Tangerang') | (df.Region == 'Bekasi') | (df.Region == 'Depok') | (df.Region == 'Bogor')] # Let's visualize the data using a boxplot again. Now, we're investigating whether differences in region affect annual rent price. dims = (12,8) fig, ax = plt.subplots(figsize=dims) sns.boxplot(x = "Region", y = 'AnnualPrice', data = df, ax=ax) JakBar = df['Region'] == 'Jakarta Barat' df[JakBar][['URL', 'AnnualPrice']].sort_values(by = ['AnnualPrice'], ascending=False).head(1) # From the visualization, we can see that the region in DKI Jakarta with the highest average annual rent price is 'Jakarta Selatan', followed by 'Jakarta Pusat', 'Jakarta Barat', 'Jakarta Utara', and 'Jakarta Timur'. Regions outside Jakarta have lower average prices than regions inside Jakarta.
This distribution makes sense, as it is common knowledge among Jakartans that the region with the highest property prices in Jakarta is 'Jakarta Selatan'. # # There seems to be an outlier in 'Jakarta Barat', but upon further checking it's the only unit with 6 bedrooms, so the price reflects its number of bedrooms more than its region. We will not remove this data point for now. # There are a few options for how we are going to use the location columns in our model: # # Option 1: Use one-hot encoding on 'Region'. This seems to be the go-to solution if we wish to make location a categorical variable. We'll divide the area into six major Regions - West, North, South, East, and Central Jakarta, plus outside Jakarta (we group Bogor, Depok, Tangerang, and Bekasi into one Region). # # Option 2: Use one-hot encoding on 'Locality'. There are over 90 different local districts in this data set, and one-hot encoding would mean that we'll have 90+ extra feature columns of zeros and ones. Furthermore, a lot of these local districts have only one apartment unit. # # Option 3: Use the 'Longitude' and 'Latitude' columns as continuous variables. This could be the case if we notice a pattern in the longitude and latitude data. We could also run a clustering algorithm on the longitude and latitude data. # # We'll look into the 'Longitude' and 'Latitude' columns first. print(df.Longitude.dtype) print(df.Latitude.dtype) # It seems that these two columns are classified as 'object' and not 'float' by Pandas. We need to transform them first. # + df = df.reset_index(drop = True, inplace=False) Longitude_Float = list() Latitude_Float = list() for i in range(len(df.index)): Longitude_Float.append(float(df.Longitude[i])) Latitude_Float.append(float(df.Latitude[i])) df = df.drop(['Longitude', 'Latitude'], axis = 'columns') df['Longitude'] = Longitude_Float df['Latitude'] = Latitude_Float # - df.Longitude.plot() # After converting both columns to float, let's visualize each column to analyze whether there are any outliers. As the geographical locations covered by this project are quite close to each other, there shouldn't be any. The 'Longitude' data makes sense: all our apartment units have Longitude between 106.6 and 107.2. df.Latitude.plot() # The 'Latitude' feature column, however, seems to have yet another issue related to an error in data entry. Most of the apartment units have Latitude around -6, which makes sense, as Jakarta (and its surroundings) is located slightly below the Equator. However, there are a few data points which have a latitude of 6. This is suspicious, as it could very well be a case of forgetting to add '-' (the negative sign) during the data entry process for these apartment units. For now, let's assume this to be the case and negate the latitude values of these apartment units. Latitude_fixed = [la if la<0 else -1*la for la in df.Latitude] df = df.drop(['Latitude'], axis = 'columns') df['Latitude'] = Latitude_fixed df.Latitude.plot() # This distribution makes more sense: 'Latitude' now ranges from -6.6 to -6.1, not a big margin, and the three data points with the lowest 'Latitude' seem to be apartment units outside Jakarta (maybe in Bogor/Depok). # #### Analyzing Furnished Status Feature Column # # Now, let's visualize and take a look at the two columns describing the furnished status of each apartment unit - the original 'Furnished', and our newly created 'FurnishedNew'.
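# Before plotting, a quick tabulation of both columns gives useful context for the discussion that follows (a small sketch; both columns already exist in df at this point).
print(df['Furnished'].value_counts())
print(df['FurnishedNew'].value_counts())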
fig, (ax1, ax2) = plt.subplots(1, 2) sns.scatterplot(x = "Furnished", y = 'AnnualPrice', data = df, hue = 'No_Rooms', ax=ax1) sns.scatterplot(x = "FurnishedNew", y = 'AnnualPrice', data = df, hue = 'No_Rooms', ax=ax2) fig.tight_layout() # There are two takeaways from this: first, the discrepancy between non-furnished and fully furnished apartment units' prices doesn't seem to be that big. Second, our new column, 'FurnishedNew', shows that semi-furnished apartments have lower prices compared to non-furnished ones. # # What should we make of this? Our new feature column doesn't seem to work well - this might be because not all semi-furnished apartment units state that they are 'semi-furnished' in their page title. The population of 'semi-furnished' apartments may be much larger than what is labeled as 'Semi'. This explains two things: why adding an extra category doesn't work well, and why the gap between '0' and '1' is not that large. # # This could indicate that 'Furnished' is not a good predictor of AnnualPrice, but we'll decide that later in the feature engineering section. # #### Analyzing Floor Position of Apartment Units # # The feature column we're looking at in this section is the 'Floor' column. We'll see if there are differences in annual rent price between units at different floor positions. sns.boxplot(x = "Floor", y = 'AnnualPrice', data = df) # Not only does the discrepancy among floor positions seem to be minuscule, we also have quite a few apartment units with no floor label. For now, let's not use this categorical variable in our model. df = df.drop(['Floor'], axis = 'columns') df = df.reset_index(drop = True, inplace=False) # #### Analyzing Area of Units against AnnualPrice # + Area_Float = list() for i in range(len(df.index)): Area_Float.append(float(df.Area[i])) df = df.drop(['Area'], axis = 'columns') df['Area'] = Area_Float # - dims = (8,5) fig, ax = plt.subplots(figsize=dims) sns.scatterplot(x = "Area", y = 'AnnualPrice', data = df, hue = 'No_Rooms', ax=ax) # Based on the above plot, we can see that the general trend is that AnnualPrice increases as Area increases. We also see that as the number of bedrooms increases, area increases as well. However, there are a few data points scattered far from the others that we need to investigate. They could be outliers that we should remove. df[['URL', 'Area']].sort_values(by=['Area'], ascending = False).head(12) # There are six apartment units with areas above 500 square meters. Those are huge apartment units - two of them even reach more than seven thousand square meters. These units are not what most people have in mind when they're looking to rent an apartment - they come in the form of condominiums or penthouses. We'll be removing these six units from our data set. In the deployment stage of this machine learning model, we'll limit the maximum Area to 350 square meters, as that is already a very big apartment unit.
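# The next cell removes the six oversized units one by one; an equivalent, more compact alternative using isin is sketched here (same Unit_IDs as the cell that follows).
oversized_ids = ['tacc001', 'tacc002', 'kmvd027', 'mqrd023', 'csbe001', 'stme008']
df = df[~df.Unit_ID.isin(oversized_ids)].reset_index(drop=True)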
# + i = df[((df.Unit_ID == 'tacc001'))].index df = df.drop(i) i = df[((df.Unit_ID == 'tacc002'))].index df = df.drop(i) i = df[((df.Unit_ID == 'kmvd027'))].index df = df.drop(i) i = df[((df.Unit_ID == 'mqrd023'))].index df = df.drop(i) i = df[((df.Unit_ID == 'csbe001'))].index df = df.drop(i) i = df[((df.Unit_ID == 'stme008'))].index df = df.drop(i) # - df = df.reset_index(drop = True, inplace=False) df.shape dims = (8,5) fig, ax = plt.subplots(figsize=dims) sns.scatterplot(x = "Area", y = 'AnnualPrice', data = df, hue = 'No_Rooms', ax=ax) # This visualization makes more sense, as there are no far-out outliers. However, we notice that some apartment units are listed as having 0 Area. We'll simply remove these units, as it's impossible for an apartment unit to have 0 square meters of area. We'll consider 20 square meters to be the minimum apartment unit area. df = df[df['Area']>20] df.columns # #### Checking Categorical Facility Features # # Our last set of features is the facilities that each unit has. During the web scraping process, I added a column that counts how many of these facilities each unit has, stored as 'Total_Facilities'. Let's first take a look at this column, before diving into the individual facilities one by one. # + Facilities_Int = list() for i, count in enumerate(df.Total_Facilities): Facilities_Int.append(int(count)) df = df.drop(['Total_Facilities'], axis = 'columns') df['Total_Facilities'] = Facilities_Int sns.boxplot(x="Total_Facilities", data = df) # - sns.scatterplot(x="Total_Facilities", y = "AnnualPrice", data = df) # It seems that most apartment units have at least 10 facilities. The more facilities a unit has, the higher its rent price is. Let's take a look at the units which have fewer than 10 facilities, and see whether they actually have fewer than 10 facilities or whether there are errors here. df[['URL', 'Total_Facilities', 'AnnualPrice', 'Furnished']].sort_values(by = ['Total_Facilities'], ascending = True).head(10) # The apartment units with low Total_Facilities tend to be Non-Furnished units. However, there's an oddball here - the unit with 0 'Total_Facilities' is a fully-furnished unit! Upon further investigation, based on the photos of the room, there are indeed facilities, and it might be an error in inputting the data (or the unit owner/seller does not describe the facilities fully). We are going to remove that unit from our dataset. As for the other fully-furnished unit with only 3 total facilities, the page and pictures show that it is indeed quite a bare unit. There are beds and sofas - but there are no fancy facilities like a TV or Internet. i = df[((df.Unit_ID == 'spsa001'))].index df = df.drop(i) df = df.reset_index(drop = True, inplace=False) # Next, we are going to draw boxplots for each facility. To recall, if a facility is present in a unit, the column will have the value '1'; if not, it will have the value '0'. We would like to see whether the presence of these facilities impacts the annual rent price of apartment units. We'll remove facilities whose presence (or absence) does not impact the annual rent price.
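# A quick numeric companion to the boxplots below - a sketch only, using the same column slice (columns 10 through 33) that the plotting cell below treats as the facility indicators: for each facility, compare the mean annual price of units with and without it.
facility_cols = list(df.columns[10:34])
facility_gaps = {}
for col in facility_cols:
    with_facility = df.loc[df[col].astype(int) == 1, 'AnnualPrice'].mean()
    without_facility = df.loc[df[col].astype(int) == 0, 'AnnualPrice'].mean()
    facility_gaps[col] = with_facility - without_facility
print(pd.Series(facility_gaps).sort_values(ascending=False))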
fig, ((ax1, ax2, ax3, ax4), (ax5, ax6, ax7, ax8), (ax9, ax10, ax11, ax12), (ax13, ax14, ax15, ax16), (ax17, ax18, ax19, ax20), (ax21, ax22, ax23, ax24)) = plt.subplots(6, 4, figsize = (15,25)) fig.suptitle('Facilities and AnnualPrice Visualization') for i, ax in enumerate(fig.get_axes()): column = df.columns[i+10] sns.boxplot(x = column, y = 'AnnualPrice', data = df, ax = ax) # Based on the visualization above, the trend for each facility is clear - the presence of a facility affects the unit's annual rent price positively. This proves to be quite troublesome when we want to do feature selection - we don't know which facility is less important than the others. We'll keep most facilities, but we'll remove two of them right away - 'Electricity' and 'Access Card'. Why? Because most apartment units have them - they are not 'facilities' anymore, they are necessities. There are 300-400 apartments listed as having no 'Electricity', but that doesn't really make sense. We do this because we are thinking about the deployment phase of our model: our future users won't choose an apartment unit without 'Electricity' or an 'Access Card'. # # This concludes our first part. To recap, we have: # - removed uninsightful columns # - checked and removed outliers # - fixed abnormal data (latitude and misused currency) # - visualized the features # # We also now have a rough understanding of the annual rent prices of apartment units in Jakarta: the most expensive apartments are usually found in Jakarta Selatan - and the more area a unit occupies and the more bedrooms & facilities it has, the higher its annual rent price is. # # In the next part, we are going to: # - scale numerical features # - split the dataset into testing and training sets # - create and evaluate a baseline model # - conduct feature engineering based on feedback from the baseline model # - test new models and decide which model is the best df = df.drop(['Electricity', 'Access_Card'], axis = 'columns') df.to_csv('Cleaned Apartment Data.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Random Acts of Pizza # # W207 Final Project # # , , , # # _"The universe is hilarious. Like, Venus is 900 degrees. I could tell you it melts lead. But that's not as fun as saying, 'You can cook a pizza on the windowsill in nine seconds.' And next time my fans eat pizza, they're thinking of Venus!"_ # # _- _ # # ---------------------------------------------------------------------- # ## Section 1: Setting Up & Processing Data # + # For figures to show inline # %matplotlib inline ## Import Libraries ## import json from pprint import pprint from pandas.io.json import json_normalize import pandas as pd # General libraries. import re import numpy as np import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec import mlxtend import scipy import datetime as dt from itertools import product # SK-learn library for importing the newsgroup data. from sklearn.datasets import fetch_20newsgroups # SK-learn libraries for feature extraction from text.
from sklearn.feature_extraction.text import * # SK-learn libraries for pre/processing data from sklearn import preprocessing # NLTK for text processing, analyzing tools from nltk.stem.porter import PorterStemmer from nltk.stem import WordNetLemmatizer from nltk.sentiment.util import * from sklearn.decomposition import TruncatedSVD from sklearn.decomposition import PCA # SK-lear library for feature selection from sklearn.feature_selection import VarianceThreshold from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 from sklearn.feature_selection import SelectFromModel from sklearn.feature_selection import mutual_info_classif from sklearn.feature_selection import SelectPercentile from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer # SK-learn libraries for learning from sklearn.pipeline import Pipeline from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import BernoulliNB from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import AdaBoostClassifier from mlxtend.classifier import EnsembleVoteClassifier # SK-learn libraries for evaluation from sklearn import metrics from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report from sklearn.metrics import mean_squared_error from sklearn.model_selection import GridSearchCV from sklearn.metrics import roc_curve, roc_auc_score, recall_score # + ## Get Data ## # Reference for data: https://www.kaggle.com/c/random-acts-of-pizza/data # Pull in the training and test data with open('data/train.json', encoding='utf-8') as data_file: trainData = json.loads(data_file.read()) with open('data/test.json', encoding='utf-8') as data_file: testData = json.loads(data_file.read()) # Create a dev data set devData = trainData[0:1000] trainData = trainData[1000:] # Show how the data looks in its original format #pprint("data in json format:") #pprint(trainData[1]) # Create a normalized view allTData = json_normalize(trainData) print("\nSize of the normalized Data:", allTData.shape) print("\nnormalized data columns:", list(allTData)) allDData = json_normalize(devData) # + ## Create subsets of data for analysis ### # Create a flat dataset without the subreddits list flatData = allTData.drop('requester_subreddits_at_request', 1) # Create a separate dataset with just subreddits, indexed on request id # We can creata a count vector on the words, run Naive Bayes against it, # and add the probabilities to our flat dataset subredTData = allTData[['request_id','requester_subreddits_at_request']] subredTData.set_index('request_id', inplace=True) subredDData= allDData[['request_id','requester_subreddits_at_request']] subredDData.set_index('request_id', inplace=True) # our training labels trainLabel = allTData['requester_received_pizza'] devLabel = allDData['requester_received_pizza'] # What do these look like? 
#print(list(flatData)) print(subredTData.shape) #print(subredTData['requester_subreddits_at_request'][1]) # Create a corpus of subreddits to vectorize trainCorpus = [] rTCorpus = [] rDCorpus = [] for index in range(len(subredTData)): trainCorpus.append(' '.join(subredTData['requester_subreddits_at_request'][index])) rTCorpus.append(' '.join(subredTData['requester_subreddits_at_request'][index])) devCorpus = [] for index in range(len(subredDData)): devCorpus.append(' '.join(subredDData['requester_subreddits_at_request'][index])) rDCorpus.append(' '.join(subredDData['requester_subreddits_at_request'][index]))
from mlxtend.plotting import plot_decision_regions
# Baseline info print("\nPercent of people who got pizza:", round(sum(trainLabel)/len(trainLabel),3)) plt.figure(1,figsize=(10,4)) plt.subplot(121) plt.hist(allTData['requester_received_pizza']) plt.title("Distribution of pizzas received in training data") plt.subplot(122) plt.hist(allDData['requester_received_pizza']) plt.title("Distribution of pizzas received in dev data") # + # combine all text sources into a single corpus fldTText = allTData[['request_title', 'request_text']] fldDText = allDData[['request_title', 'request_text']] #fldDText = allDData[['request_id','request_text', 'request_text_edit_aware', 'request_title']] #print(fldTText[:3]) #print(fldDText['request_text'][:3]) #print(len(fldTText)) trainCorpus = [] for index in range(len(fldTText)): a = ''.join(fldTText['request_title'][index]) b = (a, fldTText['request_text'][index]) trainCorpus.append(' '.join(b)) devCorpus = [] for index in range(len(fldDText)): a = ''.join(fldDText['request_title'][index]) b = (a, fldDText['request_text'][index]) devCorpus.append(' '.join(b)) # Print 3 examples print(len(trainCorpus)) print(trainCorpus[:3]) #labels = trainLabel.astype(int) #labels = list(labels) #print(labels[:3]) #print('-'*75) print(len(devCorpus)) print('\n' , devCorpus[:3]) #labels_dev = devLabel.astype(int) #labels_dev = list(labels_dev) #print(labels_dev[:3]) # - # ## Section 2: Feature Extraction - Text, then Others # We now extract features from text and other characteristics of the post. We find that time is one indicator that seems to have good explanatory power.
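# Since time turns out to be informative, here is a minimal sketch of the kind of time features derived later in Section 2.3, using the same unix-timestamp column the notebook relies on there:
request_time = pd.to_datetime(allTData['unix_timestamp_of_request_utc'], unit='s')
hour_of_request = request_time.dt.hour    # 0-23
month_of_request = request_time.dt.month  # 1-12
print(hour_of_request.head(), month_of_request.head())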
# ### Section 2.0: Simple pre-processing function # + # Simple Pre-Processing Function def data_preprocessor(s): """ Note: this function pre-processors data: (1) removes non-alpha characters (2) converts digits to 'number' (3) regularizes spaces (although CountVectorizer ignores this unless they are part of words) (4) reduces word size to n """ s = [re.sub(r'[?|$|.|!|@|\n|(|)|<|>|_|-|,|\']',r' ',s) for s in s] # strip out non-alpha numeric char, replace with space s = [re.sub(r'\d+',r'number ',s) for s in s] # convert digits to number s = [re.sub(r' +',r' ',s) for s in s] # convert multiple spaces to single space # This sets word size to n=8 num = 8 def size_word(s): temp = [] for s in s: x = s.split() z = [elem[:num] for elem in x] z = ' '.join(z) temp.append(z) return temp # Using NLTK 3.0 #stemmer = PorterStemmer() lemmanizer = WordNetLemmatizer() def set_word(s): temp = [] for s in s: #x = stemmer.stem(s) z = lemmanizer.lemmatize(s,pos='v') z = ''.join(z) temp.append(z) return temp s = size_word(s) s = set_word(s) return s # - # ### Section 2.1: Tokenization (for text) # After trying unigram and trigram vectorizers, the best results were found using bigrams in logistic regression # # + # Try it with bigrams # Create the vectorizer vectorizer = CountVectorizer(min_df=2, max_df=0.95, lowercase=True, stop_words='english', strip_accents='unicode', ngram_range=(1,3)) # Transform the corpus into vectorized trigrams tVector = vectorizer.fit_transform(data_preprocessor(trainCorpus)) dVector = vectorizer.transform(data_preprocessor(devCorpus)) # How does it look? print ('\nRaw data:') print ("The size of the vocabulary for the training text data is", tVector.shape[1]) print ("First 5 feature Names:", vectorizer.get_feature_names()[1:6], "\n") # Use the preprocessor and do the same vectorizer_p = TfidfVectorizer(min_df=2, max_df=0.95, lowercase=True, stop_words='english', strip_accents='unicode', ngram_range=(1,3)) tVector_p = vectorizer_p.fit_transform(data_preprocessor(trainCorpus)) dVector_p = vectorizer_p.transform(data_preprocessor(devCorpus)) # How does the pre-processed vector look? 
print ('\nRaw data:') print ("The size of the vocabulary for the training text data is", tVector_p.shape[1]) print ("First 5 feature Names:", vectorizer_p.get_feature_names()[1:6], "\n") # - # ### Section 2.2 PCA # Given the sparse matrix, we apply PCA to reduce dimensionality for the text features # + # PCA, we tried PCA with dense() as well as TruncatedSVD; the latter works better in explaining variance n_comp = 600 pca_mod = PCA(n_components=600) #pca_mod = TruncatedSVD(n_components=600) tVector_s = pca_mod.fit_transform(tVector.todense()) dVector_s = pca_mod.fit_transform(dVector.todense()) tVector_ps = pca_mod.fit_transform(tVector_p.todense()) dVector_ps = pca_mod.fit_transform(dVector_p.todense()) # Find the fraction of the variance explained by each component pcaVarRatio = pca_mod.explained_variance_ratio_ pcaCumVarRatio = np.cumsum(pca_mod.explained_variance_ratio_) # Plot the fraction of variance explained by each component, and the cumulative percent fig = plt.figure() ax1 = fig.add_subplot(111) ax1.scatter(range(len(pcaVarRatio)), pcaVarRatio, c = 'g', marker="s", label='Fraction') ax1.scatter(range(len(pcaVarRatio)), pcaCumVarRatio, c = 'purple',marker="o", label='Cumulative') plt.legend(loc='upper left'); ax1.set_title('Fraction of Total Variance for k = 1 to 600'); # - # ### Section 2.3: Adding Other Features # # 1) Up votes, downvotes # 2) Number of commments, posts # 3) Account Age # 4) Time - Month # 5) Time - Hour # 6) Vadersentiment # # + # Extract other features def plot_figure(x): plt.figure() plt.hist(x) plt.show() subTTFe = allTData[['giver_username_if_known', 'number_of_downvotes_of_request_at_retrieval', 'number_of_upvotes_of_request_at_retrieval', 'request_number_of_comments_at_retrieval', 'requester_account_age_in_days_at_request', 'requester_number_of_comments_at_request', 'requester_number_of_comments_in_raop_at_request', 'requester_number_of_posts_at_request', 'requester_number_of_subreddits_at_request', 'requester_upvotes_minus_downvotes_at_request', 'requester_upvotes_minus_downvotes_at_retrieval', 'requester_upvotes_plus_downvotes_at_request', 'requester_upvotes_plus_downvotes_at_retrieval']] subDTFe = allDData[['giver_username_if_known', 'number_of_downvotes_of_request_at_retrieval', 'number_of_upvotes_of_request_at_retrieval', 'request_number_of_comments_at_retrieval', 'requester_account_age_in_days_at_request', 'requester_number_of_comments_at_request', 'requester_number_of_comments_in_raop_at_request', 'requester_number_of_posts_at_request', 'requester_number_of_subreddits_at_request', 'requester_upvotes_minus_downvotes_at_request', 'requester_upvotes_minus_downvotes_at_retrieval', 'requester_upvotes_plus_downvotes_at_request', 'requester_upvotes_plus_downvotes_at_retrieval']] # Convert first col to numerical temp = 1*(subTTFe['giver_username_if_known']!='N/A').values subTTFe = subTTFe.drop('giver_username_if_known',1).values temp = np.reshape(temp,(-1,1)) subTTFe = np.concatenate((subTTFe,temp), axis=1) #print(subTTFe[1]) temp = 1*(subDTFe['giver_username_if_known']!='N/A').values subDTFe = subDTFe.drop('giver_username_if_known',1).values temp = np.reshape(temp,(-1,1)) subDTFe = np.concatenate((subDTFe,temp), axis=1) # Create new features # Upvote minus downvotes at request - upvote minus downvote at retrieval temp = np.reshape((subTTFe[:,10] - subTTFe[:,9]),(-1,1)) subTTFe = np.concatenate((subTTFe,temp),axis=1) temp = np.reshape((subDTFe[:,10] - subDTFe[:,9]),(-1,1)) subDTFe = np.concatenate((subDTFe,temp),axis=1) # Hour and Month of request unixT = 
allTData[['unix_timestamp_of_request_utc']].copy() unixD = allDData[['unix_timestamp_of_request_utc']].copy() # Convert from unix > datetime unixT['Datetime'] = pd.to_datetime(unixT['unix_timestamp_of_request_utc'], unit='s') unixT['Hour'] = unixT['Datetime'].dt.hour unixT['Month'] = unixT['Datetime'].dt.month unixT = unixT.drop(['Datetime','unix_timestamp_of_request_utc'], axis=1) unixT = unixT.values unixD['Datetime'] = pd.to_datetime(unixD['unix_timestamp_of_request_utc'], unit='s') unixD['Hour'] = unixD['Datetime'].dt.hour unixD['Month'] = unixD['Datetime'].dt.month unixD = unixD.drop(['Datetime','unix_timestamp_of_request_utc'], axis=1) unixD = unixD.values print(subTTFe.shape, unixT.shape) print(subDTFe.shape, unixD.shape) subTTFe = np.concatenate((subTTFe,unixT),axis=1) subDTFe = np.concatenate((subDTFe,unixD),axis=1) # Create sentiment score using vader sentiment analysis titles = allTData['request_title'] analyzer = SentimentIntensityAnalyzer() scores = [] for title in titles: x = analyzer.polarity_scores(title) scores.append(x['compound']) subTTFe = np.concatenate((subTTFe,np.reshape(scores,(-1,1))),axis=1) scores = [] titles = allDData['request_title'] for title in titles: x = analyzer.polarity_scores(title) scores.append(x['compound']) subDTFe = np.concatenate((subDTFe,np.reshape(scores,(-1,1))),axis=1) print(subTTFe.shape) print(subDTFe.shape) # Scale features #print(describe(subTTFe[:,0])) n1 = preprocessing.MinMaxScaler().fit_transform(subTTFe) n2 = preprocessing.MinMaxScaler().fit_transform(subDTFe) #print(n1.shape) for i in range(n1.shape[1]): plot_figure(n1[:,i]) # - # ## Section 3: Feature Selection # We combine text features and other features and do some selection (turns out less is better here) # + # We apply some feature selection to tVector and dVector (text) which did not go through PCA # Variancethreshold """ sel = VarianceThreshold(threshold=(0.8*(1-0.8))) tVector = sel.fit_transform(tVector) dVector = sel.transform(dVector) """ # Select k best #sel = SelectKBest(chi2, k=8) # Select percentile sel = SelectPercentile(mutual_info_classif, percentile=10) tVector = sel.fit_transform(tVector,trainLabel) tVector_p = sel.fit_transform(tVector_p,trainLabel) dVector = sel.fit_transform(dVector,devLabel) dVector_p = sel.fit_transform(dVector_p,devLabel) #nb = BernoulliNB(alpha=0.01).fit(tVector,trainLabel) #model = SelectFromModel(nb, prefit=True) #tVector = model.transform(tVector) #dVector = model.transform(dVector) print(tVector.shape) print(dVector.shape) print(tVector_p.shape) print(dVector_p.shape) # + # Commbine text features with other features # We have one text feature that has undergone PCA # Another which we used SelectPercentile() tVector = np.concatenate((tVector.toarray(),n1),axis=1) tVector_p = np.concatenate((tVector_p.toarray(),n1),axis=1) dVector = np.concatenate((dVector.toarray(),n2),axis=1) dVector_p = np.concatenate((dVector_p.toarray(),n2),axis=1) tVector_s = np.concatenate((tVector_s,n1),axis=1) tVector_ps = np.concatenate((tVector_ps,n1),axis=1) dVector_s = np.concatenate((dVector_s,n2),axis=1) dVector_ps = np.concatenate((dVector_ps,n2),axis=1) print(tVector.shape) print(dVector.shape) print(tVector_p.shape) print(dVector_p.shape) print(tVector_s.shape) print(dVector_s.shape) print(tVector_ps.shape) print(dVector_ps.shape) # - # ## Section 4: Models # We tried the following models are a priori we thought a Logistic Regression or Naive Bayes would work well: # # 1. SVM # 2. AdaBoost # 3. Logistic Regression # 4. Nearest Neighbor # 5. 
Naive Bayes # 6. Decision Tree # 7. Random Forest # # In this section we show the most promising models, Logistic Regression, Random Forest, and and ensemble approach. # # + def roc_curve1(y_true, y_pred_prob): """This function plots the ROC curve Inputs: y_true, correct label y_pred_prob, predicted probabilities """ fpr, tpr, thr = roc_curve(y_true, y_pred_prob) thr = np.arange(0,1,1/100) plt.figure() plt.plot(fpr,tpr, 'r', thr, thr, 'b:') plt.xlabel("False positive rate") plt.ylabel("True positive rate") plt.title("ROC Curve") plt.show() def score_rep(y_true, y_pred, desc): """Function to print out comprehensive report for classification test Inputs: y_true, correct label y_pred, predicted label from model desc, description of model Output: classification report """ print(desc) print("-"*75) print("Accuracy: ", metrics.accuracy_score(y_true, y_pred)) print("Area under curve of ROC: ", metrics.roc_auc_score(y_true, y_pred)) print("Classification report:\n") print(metrics.classification_report(y_true, y_pred)) print("-"*75) # - # ### 4.1: Logistic Regression # We show our application to four data variants - # # 1) tVector, dVector - based on CountVectorizer & some feature selection # 2) tVector_p, dVector_p - basedon on TdifVectorizer & some feature selection # 3) tVector_s, dVector_s - based on CounVectorizer & PCA # 4) tVector_ps, dVector_ps - based on TdifVectorizer & PCA # # **We find that the first variant works well enough** (We also ran other experiments.) # + # Logistic Regression which we apply to four variants of the data C = 0.01 #(For now) modelLogit = LogisticRegression(penalty='l2', C=C) modelLogit.fit(tVector,trainLabel) score_rep(devLabel,modelLogit.predict(dVector),'Logistic Regression, C = 0.01') roc_curve1(devLabel, modelLogit.decision_function(dVector)) modelLogit.fit(tVector_p,trainLabel) score_rep(devLabel,modelLogit.predict(dVector_p),'Logistic Regression, C = 0.01') roc_curve1(devLabel, modelLogit.decision_function(dVector_p)) modelLogit.fit(tVector_s,trainLabel) score_rep(devLabel,modelLogit.predict(dVector_s),'Logistic Regression, C = 0.01') roc_curve1(devLabel, modelLogit.decision_function(dVector_s)) modelLogit.fit(tVector_ps,trainLabel) score_rep(devLabel,modelLogit.predict(dVector_ps),'Logistic Regression, C = 0.01') roc_curve1(devLabel, modelLogit.decision_function(dVector_ps)) # - # ### Section 4.1: Logistic Regression - Grid Search # We now explore tuning C for Logistic Regression # + # GridSearch parameters = {'C':[1e-2,1e-1,1, 10,1e2,1e3]} clf = LogisticRegression() clf = GridSearchCV(clf, parameters,scoring='f1') clf.fit(tVector, trainLabel) print(clf.best_estimator_) clf = LogisticRegression(C=0.10, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1, penalty='l2', random_state=None, solver='liblinear', tol=0.0001, verbose=0, warm_start=False) clf.fit(tVector, trainLabel) score_rep(devLabel,clf.predict(dVector),'Logistic Regression with C=10, Tuned by GridSearch') roc_curve1(devLabel, -clf.predict_proba(dVector)[:,0]) # - # ### Section 4.2: Random Forest # Overall best results # + # GridSearch parameters = {'n_estimators':[10,100, 200, 500, 750, 1000]} clf = RandomForestClassifier() clf = GridSearchCV(clf, parameters,scoring='f1') clf.fit(tVector, trainLabel) print(clf.best_estimator_) print(clf.best_params_) print(clf.scorer_) clf = RandomForestClassifier(n_estimators=100) clf.fit(tVector, trainLabel) score_rep(devLabel,clf.predict(dVector),'Random Forest with n=500, Tuned with GridSearch') 
roc_curve1(devLabel, -clf.predict_proba(dVector)[:,0]) # + # We could get better results with n=100. Slight difference from above because GridSearch works on just # training data RF = RandomForestClassifier(n_estimators=100) RF.fit(tVector, trainLabel) score_rep(devLabel, RF.predict(dVector),'Random Forest with n=100') roc_curve1(devLabel, -RF.predict_proba(dVector)[:,0]) #RF.fit(tVector_p, trainLabel) #score_rep(devLabel,RF.predict(dVector_p),'Random Forest') #roc_curve1(devLabel, -RF.predict_proba(dVector_p)[:,0]) #RF.fit(tVector_s, trainLabel) #score_rep(devLabel,RF.predict(dVector_s),'Random Forest') #roc_curve1(devLabel, -RF.predict_proba(dVector_s)[:,0]) #RF.fit(tVector_ps, trainLabel) #score_rep(devLabel,RF.predict(dVector_ps),'Random Forest') #roc_curve1(devLabel, -RF.predict_proba(dVector_ps)[:,0]) # - # ### Section 4.3 An Ensemble Model # + # Create an ensemble model based on LR, NB, RF # Set up lr_1 lr_1 = LogisticRegression(penalty='l2', C=0.01) # Set up lr_2 lr_2 = LogisticRegression(penalty='l2', C=0.1) # Set up lr_3 lr_3 = LogisticRegression(penalty='l2', C=1) # Set up lr_4 lr_4 = LogisticRegression(penalty='l2', C=10) # Set up nb nb_1 = BernoulliNB(alpha=0.001) # Set up rf rf_1 = RandomForestClassifier(n_estimators=100) # Set up ensemble of the models clf = EnsembleVoteClassifier(clfs=[lr_1, lr_2, lr_3, lr_4, nb_1, rf_1], voting='soft', weights=[1,1,1,1,1,5]) # Fit training data clf.fit(tVector,trainLabel) # Probabilities, predictions devProb = -clf.predict_proba(dVector) devPred = clf.predict(dVector) score_rep(devLabel, devPred,'Ensemble Model') roc_curve1(devLabel, devProb[:,0]) # - # # Section 5: Conclusion # The best feature set seems to be: # # 1) Text, with CountVectorizer applied # 2) This results in a very sparse matrix => we apply SelectPercentile to this to reduce (PCA is also possible) # 3) Text is then combined with other features - TIME (Hour and Month) is crucial - but vaderSentiment also helps. These features exhibit more variation than others. Features are scaled to between 0 to 1 to be consistent with text before combined # 4) A logistic regression works well, but Random Forest seems best. When tuned, we get C=10 for LR as best, n=100 for Random Forest. # # The best model going by accuracy and f1-score is **Random Forest Classifier with n=500 gives an accuracy of 83%, f1-score of 81% and AUC of 0.71**. However, on AUC, either a **Logistic or our Ensemble Classifier is better with an AUC of 0.73.** # + RF = RandomForestClassifier(n_estimators=100) RF.fit(tVector, trainLabel) score_rep(devLabel, RF.predict(dVector),'Random Forest with n=100') roc_curve1(devLabel, -RF.predict_proba(dVector)[:,0]) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="LqzWox9dlk08" # # Dimensionality reduction example # # The goal of this script is to illustrate dimensionality reduction of timeseries # data using the singular value decomposition to derive a maximum variance # representation. # # For this, do the following: # # 1. Definitions and imports # 2. Create covariance function # 3. Simulate the data # 4. Dimensionality reduction # 5. Plots and illustrations # # The following entries may be changed to explore the script: # # * sigma_noise , sigma_signal : Both together define the signal-to-noise ration in the data # * cov_type : The form of the correlation function. 
Only the strings 'sq_exp', # 'exp', and 'white' are allowed. # * corr_length_signal : A quantity describing how slow the correlation between values drops # depending on temporal separation. Increase to generate a smoother signal. # * cov_type_guess : The form of the guessed correlation function. Only the strings 'sq_exp', # 'exp', and 'white' are allowed. # * corr_length_signal_guess : A quantity describing how slow assumedly the correlation between values drops # depending on temporal separation. Increase to generate a smoother signal. # * n_expansion : The maximum amount of expansion coefficients for the dimensionality reduction. # The bigger this quantity, the more terms are used to represent the signal. # * n_show : The amount of reconstructions / elements shown in figures # # This script is given out as part of the Machine Learning Tutorial during IV2020, Munich. Please consult the slides for some background information regarding motivation and a possible physical setup for this toy example. # # Written by , , ETH Zurich. # + [markdown] id="wau_7tVcmmCY" # # 1. Definitions and imports ------------------------------------------------ # + [markdown] id="NzX2jLruKNks" # Define the length $n_{datapoints}$ of the timeseries $x$, the standarddeviation of noise $\sigma_{noise}$ and the correlation length $\rho_{signal}$ of the simulated signal. Different correlation lengths $\rho_{signal}$ lead to different regularity properties of the data that is to be reconstructed in later steps. Different types of correlation structures can be chosen. # $$\begin{align}\text{'sq_exp'}: k_{signal}(s,t)&=\exp\left[-\left(\frac{s-t}{\rho_{signal}}\right)^2\right] &&\text{leads to smooth signals} \\ # \text{'exp'}: k_{signal}(s,t)&=\exp\left[-\frac{|s-t|}{\rho_{signal}}\right] &&\text{leads to continuous, nondifferentiable signals}\\ # \text{'white'}: k_{signal}(s,t)&=\delta_{st} =\mathbb{1}_{s=t}(s,t) &&\text{leads to uncorrelated signals} \end{align}$$ # # Example signal 'sq_exp' $\hspace{5cm}$ Example signal 'exp' $\hspace{5cm}$ Example signal 'white' # # ![High correlation length -> smooth behavior](https://drive.google.com/uc?id=1tWeF_Btm5L-pMf0sk2ZNipUnNWwUeCiw) # # Adjusting $\sigma_{noise}$ leads to different signal to noise ratios. The bigger $\sigma_{noise}$ is the more noisy the data looks like and the harder it is for the algorithm to reconstruct the data due to the unpredictability of the white noise. For reconstructing the data, $n_{expansion}$ basis functions $u_k(\cdot)$ are employed. They are superimposed in the following way. # $$ x_{data}\approx \sum_{k=1}^{n_{expansion}}\alpha_k u_{k}(\cdot)$$ The number n_{expansion} of basis functions to be used is adjustable and the quality of the reconstruction improves with increasing $n_{expansion}$. The basis functions $u_k(\cdot)$ are not known a priori and finding them so as to reconstruct $x_{data}$ with the minimum number of coefficients is part of the problem. 
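# +
# A self-contained illustration of the three kernel choices described above (a sketch that is
# independent of the notebook's own Create_covariance_matrix function defined in Section 2):
# sample one random signal from each covariance type and plot them side by side.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 1, 100)
rho = 0.3  # correlation length, matching the default corr_length_signal below
kernels = {
    'sq_exp': lambda s, u: np.exp(-((s - u) / rho) ** 2),
    'exp':    lambda s, u: np.exp(-np.abs((s - u) / rho)),
    'white':  lambda s, u: (s == u).astype(float),
}
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, (name, kernel) in zip(axes, kernels.items()):
    K = kernel(t[:, None], t[None, :]) + 1e-10 * np.eye(len(t))  # small jitter for numerical stability
    sample = np.random.multivariate_normal(np.zeros(len(t)), K)
    ax.plot(t, sample)
    ax.set_title(name)
plt.show()
# -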
# + id="UWvEWKedkO5O" colab={"base_uri": "https://localhost:8080/"} outputId="ef7ff071-2cf7-4c73-8379-043d03879830" # 1.1 Import numerical and plotting libraries import numpy as np import matplotlib.pyplot as plt # 1.2 Define the number of observations and the time at which they take place # (Time interval is arbitrarily set to [0,1] to avoid scaling issues) n_datapoints=100 time=np.linspace(0,1,n_datapoints) #1.3 Parameters for simulationa and estimation # Parameters for simulation # The following parameters may be changed to explore the method sigma_noise=0.0 # Quantifies the average magnitude of the noise cov_type='sq_exp' # Quantifies the correlation structure of the data corr_length_signal=0.3 # A quantity describing how slow the correlation between values drops # depending on temporal separation. # Parameters for dimensionality reduction cov_type_guess='sq_exp' # A guess for the correlation structure of the data corr_length_signal_guess=0.3 # A guess for the smoothness of the signal. Is used to construct a # covariance matrix whose eigenfunctions reconstruct the signal n_expansion=10 # The amount of eigenfunctions used to reconstruct the measured signal # Parameters for visualization n_show=3 print('Packages imported. The parameters for simulation and representation have been defined and are now \ ready for later use.') # + [markdown] id="Q0_GEZ9hxUem" # # 2. Create covariance function ------------------------------------------------ # + [markdown] id="whWzMetZKOBI" # Covariance functions $k(\cdot,\cdot)$ are a compact way of encoding the spatial dependence of the correlation structure. They are used to construct the covariance matrices necessary for inference and prediction by evaluating $k$ via # $$k(\cdot,\cdot):T\times T\ni (s,t)\mapsto k(s,t)\in \mathbb{R}$$ # to generate the covariance matrix $K$ with elements $(K)_{ij}=k(t_i,t_j)$. For the simple case of an 1-D interval $T$ in time, $k(\cdot,\cdot)$ is a two-dimensional function specifying the correlation between $t$ and $s$ for all $(s,t)\in T\times T$. Some typical structures are plotted below. # # Covariance matrix 'sq_exp' $\hspace{3cm}$ Covariance matrix 'exp' $\hspace{3cm}$ Covariance matrix 'white' # ![alt text](https://drive.google.com/uc?id=1GpVAyEaOwiWonGdTSO5tmmeGTZopClWI) # + id="ZULFU5A6yJ81" colab={"base_uri": "https://localhost:8080/"} outputId="9508f605-f6bd-4f3f-d92e-ef05e908c6c8" def Create_covariance_matrix(cov_type,input_vector,correlation_length,dimensions): # 2.1 Rename for comfort n=dimensions t=input_vector # 2.2 Case based evaluation if cov_type == 'white': cov_fun=lambda t_1,t_2: np.multiply(1,t_1==t_2) elif cov_type == 'sq_exp': cov_fun=lambda t_1,t_2: np.exp(-np.power(((t_1-t_2)/correlation_length),2)) elif cov_type == 'exp': cov_fun=lambda t_1,t_2: np.exp(-np.abs(((t_1-t_2)/correlation_length))) else: print('Wrong type of covariance function specified.') # 2.3 Assemble matrix and return Cov_mat=np.zeros((n,n)) for i in range(n): for j in range(n): Cov_mat[i,j]=cov_fun(t[i],t[j]) return Cov_mat print('Defined a function for assembling covariance matrices from correlation functions.') # + [markdown] id="h98uGxPkmuWf" # # 3. Simulate the data ------------------------------------------------------ # + [markdown] id="3GlGckxNKOn-" # Simulate randomly chosen smooth signals by considering them to be drawn from a multivariate Gaussian distribution with dimension $n_{timesteps}$. With e.g. 
$n_{timesteps}=100$, each throw of the dice produces a vector of length 100 that we interpret as a function mapping dimension indices to function values. We then draw $x_{signal}$ according to # $$ x_{signal}\sim\mathcal{N}(0,K)$$ # where $0\in \mathbb{R}^{100}$ and $K\in \mathbb{R}^{100 \times 100}$. $\mathcal{N}$ symbolizes the multivariate Gaussian distribution. Even though each draw is random and the different realizations are independent of each other, there exists significant spatial correlation between neighboring values. This can be exploited not only for interpolation and estimation (as done in the regression example). It also to means that of the $100$ elements few are truly uncorrelated and many redundant - therefore the $100$ elements in $x_{signal}$ can be represented by significantly less numbers. More explanations of the intuition behind the simulation process may be found in the notes associated to the regression example. # + id="KcL6Xhr8mvOD" colab={"base_uri": "https://localhost:8080/"} outputId="99dcaa0e-6bdc-49dd-b34e-dd8ed4b0e6a7" # 3.1 Define the covariance matrices of signal an noise for simulation Cov_mat_signal=Create_covariance_matrix(cov_type,time,corr_length_signal,n_datapoints) Cov_mat_noise=pow(sigma_noise,2)*Create_covariance_matrix('white',time,0,n_datapoints) # 3.2 Define a guess for the covariance matrix of the signal Cov_mat_signal_guess=Create_covariance_matrix(cov_type_guess,time,corr_length_signal_guess,n_datapoints) # 3.3 Do the simulation - first simulate random signal ... x_true=np.random.multivariate_normal(np.zeros([n_datapoints]),Cov_mat_signal) # ... then add noise to create the synthetic measurements x_measured=x_true+np.random.multivariate_normal(np.zeros([n_datapoints]),Cov_mat_noise) print('Covariance matrices have been constructed. Simulations were carried out by sampling from \ multivariate normal distributions.') # + [markdown] id="HO71ONMNmvZv" # # 4. Dimensionality reduction ----------------------------------------------- # + [markdown] id="byMtczXYKPWM" # The goal is to find an orthonormal sequence of basis vectors $\{u_j(\cdot)\}_{j=1}^k$ such that reconstruction of the signal $x_{signal}$ via $x_{reconstruct}=\sum_{j=1}^k\langle x_{signal},u_j\rangle u_j$ is possible up to a small residual RMSE with the smallest number $k$ of basis vectors. Since our data is stochastic and the exact form of $x_{signal}$ is unknown beforehand, we need to pose the optimization problem in terms of expected values. The task is to solve # $$ u_1, ... , u_k =\underset{v_1, ... , v_k \text{ orthonormal}}{\operatorname{argmin}} E\left[ \|\sum_{j=1}^k \langle x_{signal},v_j\rangle v_j - x_{signal}\|\right]. $$ # # Knowledge of the covariance matrix $K$ is sufficient to reconstruct any randomly chosen signal $x_{signal}$ with known correlation structure and in this case a closed form solution can be derived. It is based on the spectral decomposition of $K$. 
# # $$\text{Spectral theorem } \hspace{2cm} K= U \Lambda U^T \hspace{2cm}\text{where } \Lambda \text{ diagonal and } U \text{ unitary}$$ # # # Denoting the elements of $\Lambda$ by $\lambda_i$ one finds the energy of the expected reconstruction error to be # $$ \begin{align} # E[\|x_{signal}-x_{reconstruct}\|^2_2]=E\left[\|\sum_{j=k+1}^{n_{timesteps}}\langle x_{signal},u_j\rangle u_j\|^2_2\right]&=\sum_{j=k+1}^{n_{timesteps}}u_jE\left[x_{signal}x_{signal}^T\right]u_j \\ # &=\sum_{j=k+1}^{n_{timesteps}} \lambda_j # \end{align}$$ # # The reconstruction error decreases therefore monotonically with $k$ and the first eigenvector decreases the energy of the residuals $x_{signal} - x_{reconstruct}$ by $\lambda_1$ - the largest amount possible. This means that - on average - no other single function $u(\cdot)$ can be used better to approximate $x_{signal}$ than $u_1(\cdot)$. # When trying to find the best $k$ vectors $v_1 , ... ,v_k$ to reconstruct $x_{signal}$ via $x_{reconstruct}=\sum_{j=1}^k \alpha_j v_j$, the eigenvectors $u_1, ... ,u_k$ minimize $E\left[\|x_{signal}-x_{reconstruct}\|^2_2\right]$ among all possible vectors and are therefore the best choice. # + id="kdldImU9mvjq" colab={"base_uri": "https://localhost:8080/"} outputId="fe34030f-bce4-42a0-ed6c-f7d60a7ac735" # 4.1 Extract basis elements u,s,vh=np.linalg.svd(Cov_mat_signal_guess) basis_elements=u[:,0:n_expansion] # 4.2 Represent signal in new basis coeffs=basis_elements.T@x_measured signal_reconstruct=np.zeros([n_datapoints,n_expansion]) reconstruction_rmse=np.zeros(n_expansion) for k in range(n_expansion): signal_reconstruct[:,k]=basis_elements[:,0:k+1]@coeffs[0:k+1] reconstruction_rmse[k]=np.linalg.norm(signal_reconstruct[:,k]-x_measured)/np.sqrt(n_datapoints); print('Singular value decomposition performed and basis functions extracted. Reconstruction using the \ most efficient n_expansion basis functions.') # + [markdown] id="PokHm1Iqm5_a" # # 5. Plots and illustrations ------------------------------------------------ # + [markdown] id="qJCY0VJ9KQJS" # Illustrate the decomposition and reconstruction procedure by first plotting the randomly chosen signal to be processed. The eigenvectors corresponding to the largest eigenvalues of the covariance matrices are plotted in the second figure. They correspond to the basis vectors with the most explanatory power. # # Figure 3 shows the increasingly more faithful reconstructions whose RMSE monotonically decreases with increasing number of basis functions used for reconstruction. For smooth functions with much correlations structure to exploit, $10$ coefficients may be sufficient for an almost perfect reconstruction amounting to a compression of factor $10$. The actual decay of the RMSE with increasing amount of basis elements used for reconstruction is shown in figure 4. 
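# +
# Before the plots, a quick numerical sanity check of the error formula derived above.
# This is a sketch that reuses Cov_mat_signal and n_datapoints from the earlier cells: the
# average squared reconstruction error over many simulated signals should be close to the
# sum of the discarded eigenvalues of the covariance matrix.
k = 10
U_check, lambdas, _ = np.linalg.svd(Cov_mat_signal)       # eigenvectors / eigenvalues of K
samples = np.random.multivariate_normal(np.zeros(n_datapoints), Cov_mat_signal, size=2000)
coeffs_check = samples @ U_check[:, :k]                   # <x_signal, u_j> for j = 1..k
reconstructions = coeffs_check @ U_check[:, :k].T         # sum_j <x_signal, u_j> u_j
empirical_error = np.mean(np.sum((samples - reconstructions) ** 2, axis=1))
theoretical_error = lambdas[k:].sum()
print(empirical_error, theoretical_error)                 # the two values should be close
# -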
# + id="v0nNBWM4m6LH" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="6af86ef8-02e3-46cd-bb67-4413ba19e550" # 5.1 Figure showing the measured data and the true underlying signal plt.figure(1) plt.scatter(time,x_measured,s=60, facecolors='none', edgecolors='k') plt.plot(time,x_true,color='xkcd:cerulean',linewidth=2) plt.title('True signal and measured data') plt.legend(['x_true','x_measured']) plt.xlabel('Time') plt.ylabel('Function value') # 5.2 Figure showing the basis elements used to reconstruct the measured signal plt.figure(3) index_show=np.round(np.linspace(0,n_expansion-1,n_show)).astype(int) index_basis=index_show+np.ones(3) plt.plot(time,basis_elements[:,index_show]) plt.title('Basis elements used for approximation') plt.legend(['Basis element %d'% index_basis[0],'Basis element %d'% index_basis[1], 'Basis element %d' % index_basis[2]]) plt.xlabel('Time') plt.ylabel('Function value') # 5.3 Figure showing the reconstructions of the measured data in terms of an # increasing sequence of basis elements plt.figure(2) plt.plot(time,signal_reconstruct[:,index_show]) plt.title('Approximations to data using basis elements') plt.legend(['Approximation using %d elements' % index_basis[0],'Approximation using %d elements' % index_basis[1],\ 'Approximation using %d elements' % index_basis[2]]) plt.xlabel('Time') plt.ylabel('Function value') # 5.4 Figure showing the decay of the rmse with increasing amount of basis elements # used for reconstruction plt.figure(4) plt.plot(np.insert(reconstruction_rmse,0,np.linalg.norm(x_measured)/np.sqrt(n_datapoints))) plt.title('Root mean square error') plt.legend(['RMSE']) plt.xlabel('# Expansion coefficients') plt.ylabel('RMSE') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as numpy import os import re label_df = pd.read_csv('flower_natural_with_label2.csv', index_col=0) pres = list(label_df['pre'].unique()) pres[:10] l = ! 
ls flower/images # + # # os.makedirs("flower_drop/images") # for fname in l: # pre = re.findall('(.+)_', fname)[0] # if pre in pres: # shutil.move("flower/images/"+fname, "flower_drop/images/"+fname) # - # fnames = !ls flower_drop/images df = pd.DataFrame(fnames) df[1] = [re.findall('(.+)_', fname)[0] for fname in fnames] df[2] = [int(re.findall('_(\d+).png', fname)[0]) % 2 for fname in fnames] df.columns = ['fname', 'pre', 'label'] idxs = list(df.index) labels = df['label'] pres = df['pre'].unique() df len(unique) from sklearn.model_selection import train_test_split pre_train, pre_test = train_test_split(pres, random_state=0) sorted(pre_train) # + pres_list = list(df['pre']) tr_ts = ['ts' if pre in pre_test else 'tr' for pre in pres_list] df['tr/ts'] = tr_ts # - label_df['label'] = [0 if label == 'r' else 1 for label in label_df['label']] label_df pres_lst = label_df['pre'] # + trts = ['ts' if pre in pre_test else 'tr' for pre in pres_lst] label_df['tr/ts'] = trts # - label_df[label_df['pre'] == '94'] df.to_csv('support_set.csv', index = False) label_df.to_csv('query_set.csv', index = False) len(pres) print(list(label_df[label_df['tr/ts'] == 'tr']['label'] == 0).count(True)) print(list(label_df[label_df['tr/ts'] == 'tr']['label'] == 1).count(True)) label_df.groupby(['label', 'tr/ts']).count()['fname'] 589 + 197 + 593 + 198 395 + 1182 import shutil # l = !ls ../MAML-Pytorch_copy4/flower_natural2/images os.makedirs("../MAML-Pytorch_copy4/flower_natural2_drop/images") for fname in l: if fname in fnames: shutil.move("../MAML-Pytorch_copy4/flower_natural2/images/"+fname, "../MAML-Pytorch_copy4/flower_natural2_drop/images/"+fname) pre_val_, pre_test_ = train_test_split(pre_test, random_state=0, train_size = 0.5) len(pre_test) len(pre_train) len(pre_val_), len(pre_test_) # + dic = [] for pre in pre_train: dic.append((pre, 'tr')) for pre in pre_test_: dic.append((pre, 'ts')) for pre in pre_val_: dic.append((pre, 'val')) dic = dict(dic) df_ = df df_['tr/ts/val'] = [dic[pre] for pre in df['pre']] df_.groupby('tr/ts/val').count()['fname'] df_.to_csv('support_set_tr_ts_val.csv', index = False) # + dic_label = [] for pre in pre_train: dic_label.append((pre, 'tr')) for pre in pre_test_: dic_label.append((pre, 'ts')) for pre in pre_val_: dic_label.append((pre, 'val')) dic_label = dict(dic_label) label_df_ = label_df label_df_['tr/ts/val'] = [dic_label[pre] for pre in label_df['pre']] print(label_df_.groupby('tr/ts/val').count()['fname']) label_df_.to_csv('query_set_tr_ts_val.csv', index = False) # - # + import numpy as np with open('data/acc/natural2(task_num_10).txt') as f: l = f.readlines() # print(l) l_ = [float(re.findall('(.+)\n', p)[0]) for p in l] np.array(l_).mean(), np.array(l_).std() # + import numpy as np with open('data/acc/natural2(task_num_5).txt') as f: l = f.readlines() # print(l) l_ = [float(re.findall('(.+)\n', p)[0]) for p in l] np.array(l_).mean(), np.array(l_).std() # + import numpy as np with open('data/acc/natural2(task_num_50).txt') as f: l = f.readlines() # print(l) l_ = [float(re.findall('(.+)\n', p)[0]) for p in l] np.array(l_).mean(), np.array(l_).std() # + import numpy as np with open('data/acc/natural2(task_num_100).txt') as f: l = f.readlines() # print(l) l_ = [float(re.findall('(.+)\n', p)[0]) for p in l] np.array(l_).mean(), np.array(l_).std() # + import re import numpy as np with open('data/acc/natural2_test(task_num_5).txt') as f: l = f.readlines() # print(l) l_ = [float(re.findall('(.+)\n', p)[0]) for p in l] np.array(l_).mean(), np.array(l_).std(), np.array(l_).max() 
format(np.array(l_).mean(), '.3f'), format(np.array(l_).std(), '.3f'), format(np.array(l_).max(), '.3f') # + import re import numpy as np with open('data/acc/natural2_test(task_num_5).txt') as f: l = f.readlines() # print(l) l_ = [float(re.findall('(.+)\n', p)[0]) for p in l] np.array(l_).mean(), np.array(l_).std(), np.array(l_).max() format(np.array(l_).mean(), '.3f'), format(np.array(l_).std(), '.3f'), format(np.array(l_).max(), '.3f') # + import re import numpy as np with open('data/acc/natural2_val(task_num_5epoch_num_30000).txt') as f: l = f.readlines() # print(l) l_ = [float(re.findall('(.+)\n', p)[0]) for p in l] np.array(l_).mean(), np.array(l_).std(), np.array(l_).max() format(np.array(l_).mean(), '.3f'), format(np.array(l_).std(), '.3f'), format(np.array(l_).max(), '.3f') # + import re import numpy as np with open('data/acc/natural2_val(task_num_5epoch_num_40000).txt') as f: l = f.readlines() # print(l) l_ = [float(re.findall('(.+)\n', p)[0]) for p in l] np.array(l_).mean(), np.array(l_).std(), np.array(l_).max() format(np.array(l_).mean(), '.3f'), format(np.array(l_).std(), '.3f'), format(np.array(l_).max(), '.3f') # - # !pwd # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.5 64-bit ('.venv') # metadata: # interpreter: # hash: 67b393f23005f5647497c50fa99fb25b525d8642232b1bdc07a39bdb19f3ee4f # name: python3 # --- import numpy as np import matplotlib.pyplot as plt import pandas as pd import re import math from scipy import interpolate plt.rc('font',family='Times New Roman') L=480e-6 H=100e-6 Tref=773 # + fieldminMaxFile="./fieldMinMax.dat" with open(fieldminMaxFile,"r") as fp: comment=fp.readline() header=fp.readline() header=header[1:-1].split() indexs_processor=[] for i,name in enumerate(header): if header[i]=="processor": indexs_processor.append(i) indexs_processor.reverse() data=pd.read_csv(fieldminMaxFile,comment='#', sep='\t',header=None) data=data.drop(indexs_processor,axis=1) data.rename(columns=lambda x:header[x],inplace=True) data.head() # - fig, ax = plt.subplots() ax.plot(data["Time"],data["max"]/Tref) ax.set_xlabel(f"Time (s)") ax.set_ylabel(f"Dimensionless T") ax.set_title(f"Combustion Tempereature Evolution") # ax.legend(loc="upper right") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7 (tensorflow) # language: python # name: tensorflow # --- # # Implementation of a Hash Table # # In this lecture we will be implementing our own Hash Table to complete our understanding of Hash Tables and Hash Functions! Make sure to review the video lecture before this to fully understand this implementation! # # Keep in mind that Python already has a built-in dictionary object that serves as a Hash Table, you would never actually need to implement your own hash table in Python. # # Map # # The idea of a dictionary used as a hash table to get and retrieve items using keys is often referred to as a mapping. In our implementation we will have the following methods: # #
  • HashTable() Create a new, empty map. It returns an empty map collection. #
  • put(key,val) Add a new key-value pair to the map. If the key is already in the map then replace the old value with the new value. #
  • get(key) Given a key, return the value stored in the map or None otherwise. #
  • del Delete the key-value pair from the map using a statement of the form del map[key]. #
  • len() Return the number of key-value pairs stored in the map. #
  • in Return True for a statement of the form key in map, if the given key is in the map, False otherwise. # # + class HashTable(object): def __init__(self,size): self.size = size self.slots = [None]* self.size self.data = [None] * self.size def put(self,key,data): hashvalue = self.hashfunction(key,len(self.slots)) if self.slots[hashvalue] == None: self.slots[hashvalue] = key self.data[hashvalue] = data elif self.slots[hashvalue] == key: self.data[hashvalue] = data else: nextslot = self.rehash(hashvalue,len(self.slots)) while self.slots[nextslot] != None and self.slots[nextslot] != key: nextslot = self.rehash(nextslot,len(self.slots)) if self.slots[nextslot] == None: self.slots[nextslot] = key self.data[nextslot] = data else: self.data[nextslot] = data def hashfunction(self,key,size): # The actual hash function return key%size def rehash(self,oldhash,size): return (oldhash+1)%size def get(self,key): # Getting items given a key # Set up variables for our search startslot = self.hashfunction(key,len(self.slots)) data = None stop = False found = False position = startslot # Until we discern that it's not empty or found (and haven't stopped yet) while self.slots[position] != None and not found and not stop: if self.slots[position] == key: found = True data = self.data[position] else: position=self.rehash(position,len(self.slots)) if position == startslot: stop = True return data # Special Methods for use with Python indexing def __getitem__(self,key): return self.get(key) def __setitem__(self,key,data): self.put(key,data) # - h = HashTable(5) h[1] = 'one' h[2] = 'two' h[3] = 'three' h[1] print(h[5]) h[2] = 'newtwoo' h[2] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Deep Learning with RNNs (Sequence Models) # ## Abstract # This Jupyter Notebook applies Hugging Face to various transformer models - the library downloads pretrained models for Natural Language Understanding (NLU) tasks, such as analyzing the sentiment of a text, and Natural Language Generation (NLG) tasks, such as completing a prompt with new text or translating into another language. # # Transformers provides general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural Language Generation (NLG) with 32+ pretrained models in 100+ languages and deep interoperability between Jax, PyTorch and TensorFlow. # # In this notebook, we use a fill-mask model to generate inputs and labels from texts, a question answering model to answer questions, summarization to condense a document or an article into a shorter text, text generation to create a coherent continuation of a given context, and classification models for classifying texts, tokens and features. Sentence similarity and translation are demonstrated below as well. # # With each transformer model, at least one example will be given with the loaded model to show whether its results meet our task requirements.
# Adapted from Transformers Model in [Hugging Face](https://huggingface.co/), and [HuggingFace Crash Course](https://www.youtube.com/watch?v=GSt00_-0ncQ) in Youtube # ## Table of contents # # * [Installation](#installation) # # # * [Implementation](#implementation) # # # * [Fill-Mask](#fill-mask) # # * [Explanation](#fm_explanation) # # # * [Question Answering](#question_answering) # # * [Explanation](#qa_explanation) # # # * [Summarization](#summarization) # # * [Explanation](#sm_explanation) # # # * [Text Classification](#text_classification) # # * [Explanation](#tc_explanation) # # # * [Text Generation](#text_generation) # # * [Explanation](#tg_explanation) # # # * [Text2Text Generation](#text2text_generation) # # * [Explanation](#t2t_explanation) # # # * [Token Classification](#token_classification) # # * [Explanation](#tkc_explanation) # # # * [Translation](#translation) # # * [Explanation](#trans_explanation) # # # * [Zero-Shot Classification](#zero-shot_classification) # # * [Explanation](#zs_explanation) # # # * [Sentence Similarity](#sentence_similarity) # # * [Explanation](#ss_explanation) # # # # ## 0. Installing Transformers and Importing Dependencies pip install transformers from transformers import pipeline conda install pytorch torchvision -c pytorch # ## 1. Loading Hugging Face Models # ### 1-1. Load Fill-Mask Pipeline # Fill-Mask (10 Points) # # Run a [Fill-Mask](https://huggingface.co/models?pipeline_tag=fill-mask&sort=downloads) language model. Explain the theory behind your model, and run it. Analyze how well you think it worked. unmasker = pipeline('fill-mask', model='bert-base-uncased') s = "Excuse me sir, do you speak [MASK]?" unmasker(s) t = "I'm a student from [MASK] University." unmasker(t) # ### Explanation - Fill-Mask # It was a pretrained model on English language using a masked language modeling (MLM). # # Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. # The model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks, and can train a standard classifier using the features produced by the BERT model as inputs. # From the result we got, the model can analyze the masked token and their accuracy score. We can see when I set a sentence whose marked token is obviously language. The model perfectly gives me results of different languages. Then I set another example which is the name of the university is marked, and we check their results. The first result is the definite article, this one is obviously not the answer I expected. However, it makes sense. The rest of the results are great to complete the sentence and match the meaning. Generally speaking, the words predicted by the fill-mask model are quite accurate, and basically have the meaning of comparing the words before and after. But it can't give a complete text, it can only give an accuracy analysis in the filling of the blanks of the words. # ### 1-2. 
Load Question Answering Pipeline # # Question Answering (10 Points) # # Run a [Question Answering](https://huggingface.co/models?pipeline_tag=question-answering&sort=downloads) language model. Explain the theory behind your model, and run it. Analyze how well you think it worked. from transformers import pipeline question_answerer = pipeline("question-answering") context = """ 🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other. """ question = "Which deep learning libraries back 🤗 Transformers?" question_answerer(question=question, context=context) long_context = """ 🤗 Transformers: State of the Art NLP 🤗 Transformers provides thousands of pretrained models to perform tasks on texts such as classification, information extraction, question answering, summarization, translation, text generation and more in over 100 languages. Its aim is to make cutting-edge NLP easier to use for everyone. 🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our model hub. At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments. Why should I use transformers? 1. Easy-to-use state-of-the-art models: - High performance on NLU and NLG tasks. - Low barrier to entry for educators and practitioners. - Few user-facing abstractions with just three classes to learn. - A unified API for using all our pretrained models. - Lower compute costs, smaller carbon footprint: 2. Researchers can share trained models instead of always retraining. - Practitioners can reduce compute time and production costs. - Dozens of architectures with over 10,000 pretrained models, some in more than 100 languages. 3. Choose the right framework for every part of a model's lifetime: - Train state-of-the-art models in 3 lines of code. - Move a single model between TF2.0/PyTorch frameworks at will. - Seamlessly pick the right framework for training, evaluation and production. 4. Easily customize a model or an example to your needs: - We provide examples for each architecture to reproduce the results published by its original authors. - Model internals are exposed as consistently as possible. - Model files can be used independently of the library for quick experiments. 🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other. """ question_answerer( question=question, context=long_context ) # ### Explanation - Question Answering # This is a model which finds the answer to questions in given context. The question-answering pipeline is initialized to easily create the Question Answering pipeline, because it utilizes the DistilBERT model fine-tuned to SQuAD. After finding the possible answer with the best score, the offset mappings are used to find the corresponding answer in the context. When the context is very long, it might get truncated by the tokenizer. Then the most likely answer will be selected for each feature and the final answer is the one with the best score. # In the part of implementations, there are two examples to be adapted. 
A short context was provided and then ask the question about the context was. The model analyzes the answer and gets its score briefly. After that, we take a look at a long context and set the same question. We can know that the same answer we got from the longer context as well, yet an answer from a pair of start and end positions are different from the short context. # ### 1-3. Load Summarization Pipeline # Summarization (10 Points) # # Run a [Summarization](https://huggingface.co/models?pipeline_tag=summarization&sort=downloads) language model. Explain the theory behind your model, and run it. Analyze how well you think it worked. from transformers import pipeline summarizer = pipeline("summarization") # #### Summarize Text ARTICLE = """ You don’t always have to give your boss the finger Maybe it’s your first day on the job. Perhaps your manager just made an announcement. You’ve been asked to scan your fingerprint every time you clock in and out. Is that even allowed? From Hooters to Hyatt Hotels, employers tantalized by the promise of a futuristic, streamlined way to track workers’ attendance are starting to use time clock machines that fingerprint employees. Vendors like Kronos and Allied Time say that because the machines are tied to your biometric information — unique characteristics such as your face, fingerprints, how you talk, and even how you walk — they provide a higher level of workplace security and limit employees’ ability to commit “time theft” by punching in for one another. But the benefits for your boss may come at a cost to you — both your privacy and possibly your health. With the global outbreak of COVID-19, your personal health could be at risk when using frequently touched screens and fingerprint scanners. The Centers for Disease Control says that coronavirus can remain on surfaces for hours, so screens and scanners should be regularly disinfected with cleaning spray or wipes. And you should wash your hands for 20 seconds or use alcohol-based hand sanitizer immediately after using one. In addition to these health concerns, critics argue that biometric devices pose massive personal security issues, exposing workers to potential identity theft and subjecting them to possible surveillance from corporations and law enforcement. In an amicus brief in a case before a federal court of appeals, a group of privacy advocates, including the ACLU and the EFF, wrote that “the immutability of biometric information” puts people “at risk of irreparable harm in the form of identity theft and/or tracking.” “You can get a new phone, you can change your password, you can even change your Social Security number; you can’t change your face,” said , the Technology for Liberty program director at ACLU of Massachusetts. Companies facing legal action over their use of the machines range from fast food joints like McDonald’s and Wendy’s, to hotel chains like Marriott and Hyatt, to airlines like United and Southwest. In some cases, the companies have countered in the lawsuits that their employees’ union agreement allows the use of the machines: “Southwest and United contend that the plaintiffs’ unions have consented — either expressly or through the collective bargaining agreements’ management-rights clauses — and that any required notice has been provided to the unions,” the court’s opinion states. Other companies have not responded to requests for comment or have said they cannot comment on active litigation. Privacy and labor laws have lagged behind the shifts in the American workplace. 
But in some places, you have the right to refuse and even sue. Biometric Privacy Laws As the collection and use of biometrics has exploded, lawmakers in three states have responded by passing laws restricting its deployment. """ summary = summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False) summary[0]['summary_text'] # ### Explanation - Summarization # Summarization model is the task of summarizing a document or an article into a shorter text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs. # There are two types of Text Summarization, one is Extractive Type and another one is Abstractive Type. Extractive summarization takes the original text and extracts information that is identical to it. In other words, rather than providing a unique summary based on the full content, it will rate each sentence in the document against all others, based on how well each line explains. # Hugging Face Transformer falls under abstractive type text summarization technique. # From the result, the given article was summarized and we can see that it summarized by the requirements, such as max_length=130, min length=30. It provides a unique summary based on the full content and can get good scores even when pre-training with a very small sample. # ### 1-4. Load Text Classification Pipeline # Text Classification (10 Points) # # Run a [Text Classification](https://huggingface.co/models?pipeline_tag=text-classification&sort=downloads) language model. Explain the theory behind your model, and run it. Analyze how well you think it worked. from transformers import pipeline classifier = pipeline("text-classification",model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True) prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use", ) print(prediction) # ### Explanation - Text Classification # This model is created with knowledge distillation during the pre-training phase which reduces the size of a BERT model by 40%, while retaining 97% of its language understanding. The model was fine-tuned and evaluated on datasets from diverse text sources to enhance generalization across different types of texts. # In this model, I only use a small text to test the classifier and we got its sentiment analysis and accuracy scores of different emotional texts. The above example is a good example of positive emotions. We can see its keywords about joy, so this prediction is very accurate and precise. # ### 1-5. Load Text Generation Pipeline # Text Generation (10 Points) # # Run a [Text Generation](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads) language model. Explain the theory behind your model, and run it. Analyze how well you think it worked. from transformers import pipeline, set_seed generator = pipeline('text-generation', model='gpt2') set_seed(42) generator("Hello, I'm a graduate student in Northeastern University,", max_length=30, num_return_sequences=5) # ### Explanation - Text Generation # It pretrained model on English language using a causal language modeling (CLM) objective. GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. 
This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. The inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token i only uses the inputs from 1 to i but not the future tokens. # This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. # The generator generates several possible partial sentences according to its maximum length. We can know that the content it generates is related and mostly reasonable, but try to think about the logic of some sentence contexts. The logic may be wrong or contradictory. Therefore, this model needs more training or tuning to achieve better performance. # ### 1-6. Load Text2Text Generation Pipeline # Text2Text Generation (10 Points) # # Run a [Text2Text](https://huggingface.co/models?pipeline_tag=text2text-generation&sort=downloads) language model. Explain the theory behind your model, and run it. Analyze how well you think it worked. from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer hi_text = "मुझे बोस्टन में रहना पसंद है।" chinese_text = "我喜歡住在波士頓。" pip install sentencepiece model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M") tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M") # translate Hindi to French tokenizer.src_lang = "hi" encoded_hi = tokenizer(hi_text, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # translate Chinese to English tokenizer.src_lang = "zh" encoded_zh = tokenizer(chinese_text, return_tensors="pt") generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en")) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # ### Explanation - Text2Text Generation # M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. The model that can directly translate between the 9,900 directions of 100 languages. To translate into a target language, the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the forced_bos_token_id parameter to the generate method. M2M100Tokenizer depends on sentencepiece, so we install it before running the example. # At the first, I define a sentence in 2 kind of languages - Hindi and Traditional Chinese. The sentence we set means 'I like to live in Boston.' In tokenizer.get_lang_id("fr"), which means the language you want to transfer, so you can set any languages which are provided by. After that, we can see if it works or not. We transfer Hindi to French, then Traditional Chinese to English. After checking, it works well. # ### 1-7. Load Token Classification Pipeline # Token Classification (10 Points) # # Run a [Token Classification](https://huggingface.co/models?pipeline_tag=token-classification&sort=downloads) language model. 
Explain the theory behind your model, and run it. Analyze how well you think it worked. from transformers import AutoTokenizer, AutoModelForTokenClassification from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER") model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "My name is Julia and I live in Boston" ner_results = nlp(example) print(ner_results) # ### Explanation - Token Classification # bert-base-NER is a fine-tuned BERT model that is ready to use for Named Entity Recognition and achieves state-of-the-art performance for the NER task. It has been trained to recognize four types of entities: location (LOC), organizations (ORG), person (PER) and Miscellaneous (MISC). this model is a bert-base-cased model that was fine-tuned on the English version of the standard CoNLL-2003 Named Entity Recognition dataset. # In the example, the simple sentence is defined and we get the result about each key token. We know that there are two key components which are Julia and Boston (Name and Location). The result print these tokens and their accuracy scores. # ### 1-8. Load Translation Pipeline # Translation (10 Points) # # Run a [Translation](https://huggingface.co/models?pipeline_tag=translation&sort=downloads) language model. Explain the theory behind your model, and run it. Analyze how well you think it worked. from transformers import pipeline translator = pipeline("translation_en_to_de") print(translator("Hugging Face is a technology company based in New York and Paris", max_length=40)) # ### Explanation - Translation # Translation is the task of translating a text from one language to another. We try to use an example of a translation dataset is the WMT English to German dataset, which has sentences in English as the input data and the corresponding sentences in German as the target data. # The translator pipeline is different from the pipeline of the Text2Text generator, but in general, they all can do translation between different languages. # In the translator pipeline, we directly specify a specific language conversion in the pipeline. # ### 1-9. Load Zero-Shot Classification Pipeline # Zero-Shot Classification (10 Points) # # Run a [Zero-Shot](https://huggingface.co/models?pipeline_tag=zero-shot-classification&sort=downloads) language model. Explain the theory behind your model, and run it. Analyze how well you think it worked. # + from transformers import pipeline classifier = pipeline("zero-shot-classification") # - sequence_to_classify = "one day I will see the world" candidate_labels = ['travel', 'cooking', 'dancing', 'exploration'] classifier(sequence_to_classify, candidate_labels, multi_class=True) # ### Explanation - Zero-Shot Classification # It is a method for using pre-trained NLI models as a ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and to construct a hypothesis from each candidate label. The probabilities for entailment and contradiction are then converted to label probabilities. # For the sequence classification, we set the candidate labels and test which category it belongs to. After classifying, and we can understand what kind of category it is, and its results. The better score, the closer it is to that category, and the lower score, the less relevant it is to that category. # ### 1-10. 
Load Sentence Similarity Pipeline # Sentence Similarity (10 Points) # # Run a [Sentence Similarity](https://huggingface.co/models?pipeline_tag=sentence-similarity&sort=downloads) language model. Explain the theory behind your model, and run it. Analyze how well you think it worked. # pip install -q sentence_transformers from sentence_transformers import SentenceTransformer from sklearn.metrics.pairwise import cosine_similarity from pprint import pprint model = SentenceTransformer('paraphrase-MiniLM-L6-v2') sentences = \ ['Nothing much to say as it is a macbook. the M1 processor works like a charm', 'Amazing laptop, super performance with M1, its blazing fast', 'Working very slow and takes 15-20 minutes to start thus not worth for money', 'This is not a good laptop. It is very slow. It is taking 20 minutes to start' ] sentence_embeddings = model.encode(sentences) for sentence, embedding in zip(sentences, sentence_embeddings): print("Sentence:", sentence) print("Embedding:", embedding) print("") len(sentence_embeddings) len(sentence_embeddings[0]) print('Similarity between {} and {} is {}'.format(sentences[0], sentences[1], cosine_similarity(sentence_embeddings[0].reshape(1, -1), sentence_embeddings[1].reshape(1, -1))[0][0])) print('Similarity between {} and {} is {}'.format(sentences[0], sentences[2], cosine_similarity(sentence_embeddings[0].reshape(1, -1), sentence_embeddings[2].reshape(1, -1))[0][0])) print('Similarity between {} and {} is {}'.format(sentences[2], sentences[3], cosine_similarity(sentence_embeddings[2].reshape(1, -1), sentence_embeddings[3].reshape(1, -1))[0][0])) # ### Explanation - Sentence Similarity # We can use this framework to compute sentence / text embeddings for more than 100 languages. These embeddings can then be compared e.g. with cosine-similarity to find sentences with a similar meaning. # In our examples, we use the sentence_transformers model to compare different similarities. It compares each similarity between the sentence_embeddings, and we can get the higher similarity if they are much more similar. # # References # + active="" # Cited as: # + active="" # @article{Hugging Face # title = "Hugging Face", # author = "", # year = "2016", # url = "https://huggingface.co/models" # } # + active="" # @article{Youtube # title = "HuggingFace Crash Course - Sentiment Analysis, Model Hub, Fine Tuning", # author = "", # year = "2021", # url = "https://www.youtube.com/watch?v=GSt00_-0ncQ" # } # + active="" # @article{Youtube # title = "Text Classification Using BERT & Tensorflow | Deep Learning Tutorial 47)", # author = "Codebasics", # year = "2021", # url = "https://www.youtube.com/watch?v=hOCDJyZ6quA" # } # + active="" # @article{GitHub # title = "Hugging-Face-Transformers-Summarization)", # author = "", # year = "2021", # url = "https://github.com/nicknochnack/Hugging-Face-Transformers-Summarization" # } # - # # Copyright and Licensing # BSD 3-Clause License # # Copyright (c) 2021, # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are met: # # * Redistributions of source code must retain the above copyright notice, this # list of conditions and the following disclaimer. # # * Redistributions in binary form must reproduce the above copyright notice, # this list of conditions and the following disclaimer in the documentation # and/or other materials provided with the distribution. 
# # * Neither the name of the copyright holder nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # + # Copyright (c) . # Distributed under the terms of the 3-Clause BSD License. # - # You are free to use or adapt this notebook for any purpose you'd like. However, please respect the [Modified BSD License](https://jupyter.org/governance/projectlicense.html) that governs its use. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="QstktJDB4UKT" # # Matematica para Machine Learning # # + [markdown] id="XVCBzY8lAHyJ" # # Máximo e Mínimo Local # # As funçoes matematicas podem ter "colinas e vales": locais onde atingem um mínimo ou máximo. # + colab={"base_uri": "https://localhost:8080/", "height": 252} id="Qe_m1KWQ5XUV" outputId="3556ca65-f521-4f96-8a47-de9eea803f7b" from IPython.display import Image Image('func1.png') # + [markdown] id="MTGzF-XuAzbp" # # Global (ou Absoluto) Máximo e Mínimo # # O maximo ou mínimo em toda a função é chamado de máximo ou mínimo "absoluto ou global". # Existe apenas um máximo global e um mínimo global, mas apenas pode haver mais de um máximo ou mínimo local. # + colab={"base_uri": "https://localhost:8080/", "height": 242} id="9kxgOXtk4p6_" outputId="419154e1-90cc-4a19-ea54-6edb72748976" from IPython.display import Image Image('func2.png') # + colab={"base_uri": "https://localhost:8080/", "height": 691} id="mOTK8XMMBs6g" outputId="1c1e47c6-4540-4c20-abb8-9ba88b6d3299" from IPython.display import Image Image('gradiente1.png') # + colab={"base_uri": "https://localhost:8080/", "height": 369} id="pxM97E-sB15o" outputId="6f6d2d84-65c4-4316-a516-5983a3a435cd" from IPython.display import Image Image('gradiente2.png') # + [markdown] id="HXpYCiv6CG37" # # Exemplo # # Encontre os minimos locais da função y =(x + 5)² começando do ponto x = 3 # # Sabemos a resposta apenas olhando para o grafico y = (x + 5)² atinge seu mínimo quando x = -5 (ou seja, quando x = -5, y = 0). Portanto, x = -5 é o mínimo local e global da função. 
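# Analytically, since y = (x + 5)², the derivative is dy/dx = 2(x + 5), which vanishes only at x = -5; this is exactly the gradient df used in the descent code below, so the update x <- x - rate * 2(x + 5) converges to the same minimum read off the graph.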
# + colab={"base_uri": "https://localhost:8080/", "height": 817} id="dwdeiQcXB_uG" outputId="4637f8f1-e6d2-4025-d069-9651269e2276" from IPython.display import Image Image('grafico.png') # + [markdown] id="CZyNHZXkDRW5" # # Aplicando Gradiente Descendente # + id="4e5Izd6UDBAV" # O algoritmo inicia com o parâmetro x=3 cur_x = 3 # Learning rate rate = 0.01 # Define quando parar o algoritmo # Parar o loop quando a diferença entre os valores de x em 2 iterações consecutivas for menor que 0,000001 # ou quando o número de iterações exceder 10 mil precision = 0.000001 # Inicializa o contador do passo anterior previous_step_size = 1 # Número máximo de iterações max_iters = 10000 # Contador de iterações iters = 0 # Gradiente da função df = lambda x: 2*(x+5) # + colab={"base_uri": "https://localhost:8080/"} id="4jfaHM-aDkEg" outputId="3818a185-2dc7-4069-9a9d-d80c916ad63c" while previous_step_size > precision and iters < max_iters: # Armazena o valor corrente de x em prev_x prev_x = cur_x # Aplica o Gradient descent cur_x = cur_x - rate * df(prev_x) # Altera o valor de x previous_step_size = abs(cur_x - prev_x) # Incrementa o número de iterações iters = iters + 1 # Imprime as iterações print("Iteration", iters,"\nValor de x igual a ", cur_x) print("\nO mínimo local da função ocorre em: ", cur_x) # + id="ov98CmaEEAc9" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np from sklearn.utils import shuffle # - def check_random_seed(max_iter=None,random_state=None): if max_iter == None: max_iter = 100 print('max_iter:',max_iter) np.random.seed(random_state) a = np.random.choice(20,size=5,replace=False) #print(a) return a b = check_random_seed(max_iter=10) print(b) random_state = 2 np.random.seed(random_state) a = np.random.choice(20,size=5,replace=False) print(a) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="Vf3GioQwcepr" # # Tema 3: Aproximación de autovalores # + id="1hDZNaYFX936" # !pip install -r https://raw.githubusercontent.com/alexmascension/ANMI/main/requirements.txt # + id="dYo_JGn2TSxC" from sympy import * from sympy.matrices import Matrix as mat from sympy.matrices import randMatrix from sympy import symbols import sympy import numpy as np from scipy.linalg import orth # + id="l-V6kF8_X937" from anmi.genericas import norma, print_verbose from anmi.T2 import factorizacion_QR from anmi.T3 import matriz_krylov, sucesion_krylov, potencia_iterada, metodo_autovals_QR # + [markdown] id="B3T7oD8E1bEY" # ### Sucesiones de Krylov # Sea $A$ una matriz (aplicación lineal) y $x$ un vector. Si aplicamos la multiplicación de $A$ por $x$ de manera iterada obtenemos una serie de vectores $\{x, Ax, A^2x, A^3x, \cdots\}$. Si $x$ no es un autovector de $A$, entonces esa sucesión tendrá $n$ (dimensión de $A$) vectores independientes. Si $x$ es un autovector, con su autovalor $\lambda$, entonces la sucesión de vectores será, $\{x, \lambda x, \lambda^2x, \lambda^3x, \cdots\}$. Estas sucesiones de vectores se llaman *sucesiones de Krylov*. 
# # Por otra parte, por el teorema de Cayley-Hamilton se tiene que $A^nx$ tiene que ser una combinación lineal de los siguientes elementos de la sucesión, es decir: # $$(-1)^nA^n + a_{n-1}A^{n-1} + \cdots + a_1A + a_0I = 0$$ # # Luego si tomamos $a = \begin{bmatrix}a_0\\a_1\\ \cdots \\ a_n\end{bmatrix}$ se tiene que # $$(x|Ax|\cdots|A^{n-1}x)a = (-1)^{n+1}A^nx$$ # # Y si resolvemos $a$, entonces se tienen los coeficientes del polinómio característico $p(\lambda) = a_0 + a_1\lambda + a_2\lambda^2 + \cdots + a_n\lambda^n$$ # # + id="hWHzJKX-_fyl" help(matriz_krylov) # + id="YFMQq7yCwW6W" # EJERCICIO 26 A = mat([[1, 1, 1], [0, 2, 2], [3, -1, 0]]) x = mat([[1, 0, 0]]).T m_krylov = matriz_krylov(A, x) m_krylov # + id="ADu3VAd5wXZ0" sk, a = sucesion_krylov(A, x) sk # + id="C0Ystsv66PVG" # EJEMPLO 15 A = mat([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]) x = mat([[-1, 0, 1]]).T matriz_krylov(A, x) # + [markdown] id="9W55409Jywkz" # ### Método de la potencia iterada # # En el método de la potencia iterada, se aplica la matriz de krylov hasta una potencia determinada, $k$. Entonces, se tiene que # $$\lim_{k \to \infty} \frac{A^kw}{A^{k-1}w} = |\lambda_1|$$ # Es decir, el mayor autovalor. # # Además, si tomamos $ B= A^{-1} $, tenemos que # $$\lim_{k \to \infty} \frac{B^kw}{B^{k-1}w} = \frac{1}{|\lambda_n|}$$ # Donde $\lambda_n$ es el menor autovalor. # + id="SiaUm0xL_fyn" help(potencia_iterada) # + id="GTIZmAOv2Uni" A.eigenvals() # + id="ZG--F3wJBtXr" x = mat([[-2, 0, 1]]).T matriz_krylov(A, x, 17) # + id="moRETh9eBtax" x = mat([[-1, 0, 0]]).T np.array(potencia_iterada(A, x, 30, devolver_ultimo=False)[:, -3:], dtype=float) # + id="52wGtGj_1A4t" x = mat([[-1, 0, 0]]).T np.array(potencia_iterada(A, x, 300, devolver_ultimo=True), dtype=float) # + id="3TExHlDA2aR9" N(2+sqrt(2)) # Vemos que converge al mayor autovalor # + id="5mRLmChE0Lsx" np.array(potencia_iterada(A**-1, x, 300, devolver_ultimo=True), dtype=float) # + id="fZeAlvoy0Qsj" 1/N(2-sqrt(2)) # Y lo mismo con el menor # + id="aCtHNnOo32Y3" # Si tomamos una matriz ortogonal, el metodo de la potencia no tiene validez # porque se requiere que haya autovalores dominantes, y en este caso los # autovalores tienen módulo 1. dict_QR = factorizacion_QR(A) Q = dict_QR['Q'] Q # + id="DxNd9Bu25PY0" Q * Q.T # + id="4OTGpPI24XMP" Q.eigenvals() # + id="8JqObRQh4dVy" N(-1/2 + 3*sqrt(70)/70 + 3*sqrt(14)/28 + sqrt(5)/5 + sqrt(70)*I*sqrt(6*sqrt(14) + 20*sqrt(5) + 73)/140) # + id="wQA0QfnG4u4b" matriz_krylov(Q, x, 5) N(matriz_krylov(N(Q), x, 30), 4) # + id="vHbR8UGa4hgd" potencia_iterada(N(Q), x, 100, devolver_ultimo=False)[:, -5:] # No hay una convergencia # + [markdown] id="cjhzEPbM5x7P" # ### Método QR # # El método QR consiste en aplicar los siguientes pasos: # # $$A^{(1)} = A$$ # # De ahí sacamos que # $$A^{(1)} = Q^{(1)}R^{(1)}$$ # # De ahí construimos: # $$A^{(2)} = R^{(1)}Q^{(1)}$$ # # Y se cumple que $A^{(1)}$ y $A^{(2)}$ son semejantes, luego tienen los mismos # autovalores. # # Con ello se reitera el proceso, y se cumple que las matrices equivalentes # construidas convergen a una matriz triangular superior. Los la diagonal de $A^{(k)}$ converge a los autovalores de $A$. 
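# The helper metodo_autovals_QR used below comes from the course's anmi package. As a point of reference, the iteration described above can be sketched in a few lines of plain NumPy; the function name qr_eigenvalues and the iteration count are illustrative choices, not part of anmi.
# +
import numpy as np

def qr_eigenvalues(M, n_iters=100):
    """Unshifted QR iteration: factor A_k = Q_k R_k, then set A_(k+1) = R_k Q_k.
    Every A_k is similar to A, so when the iteration converges the diagonal of
    A_k approaches the eigenvalues of A."""
    Ak = np.array(M, dtype=float)
    for _ in range(n_iters):
        Q_, R_ = np.linalg.qr(Ak)   # orthogonal-triangular factorisation of the current iterate
        Ak = R_ @ Q_                # reversing the factors keeps the spectrum unchanged
    return np.diag(Ak)

# For the tridiagonal example used earlier this recovers 2 + sqrt(2), 2 and 2 - sqrt(2)
qr_eigenvalues([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]], n_iters=50)
# -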
# + id="mxXCIhve_fys" help(metodo_autovals_QR) # + id="pcOLWedR_AYp" dict_QR = metodo_autovals_QR(A, n_iters=10) # + id="5_VzEkvM_LkV" N(dict_QR['A'][-2], 3) # + id="gJ8ibIVd_HH7" N(dict_QR['A'][-1], 3) # + id="8dWLVfTX_lAz" N(2- sqrt(2), 3), 2, N(2 + sqrt(2), 3), # + id="9AYS1YyAAAU9" A = mat([[1, 1, 1], [0, 0, 1], [0, 1, 1]]) dict_QR = metodo_autovals_QR(A, n_iters=30, verbose=False) # + id="IKEdAQ_CAAU_" N(dict_QR['A'][-15], 3) # + id="MtaeFkK0AAVA" N(dict_QR['A'][-1], 3) # + id="79oglbStAAVB" [N(i, 3) for i in list(A.eigenvals().keys())] # + [markdown] id="9IX9k4ZTSSiO" # ## Ejercicios # + [markdown] id="dxhzBKn7SSrD" # ### Ejercicio 27 # # Determinar las primera iteraciones generadas por el método de la potencia con normalización con norma infinito cuando se aplica a la matriz con vector inicial. # + id="qHvk_qhbSp4D" A = Matrix([[0, -1, 1], [0, 1, -1], [-1, -1, 2]]) A # + id="8_31S9zcSqNb" x0 = Matrix([1, -1, 2]) # + id="MiAEZjhFTGsv" n_iters = 15 m_krylov_x, m_krylov_w = zeros(A.shape[0], n_iters), zeros(A.shape[0], n_iters) m_krylov_x[:, 0] = x0 m_krylov_w[:, 0] = x0/max(x0) for i in range(1, n_iters): kriv_i_x = A * m_krylov_w[:, i - 1] m_krylov_x[:, i] = kriv_i_x m_krylov_w[:, i] = kriv_i_x / max(kriv_i_x) # + id="jIGHZaZFSvz4" m_krylov_x # + id="zeLdxnm0Udkl" m_krylov_w # + id="-dR0ujFSU1bi" [N(i) for i in (np.array(m_krylov_x[0, 1:]) / np.array(m_krylov_w[0, :-1])).tolist()[0]] # + id="OwcyXCkwVlhT" # Probamos sin normalizar n_iters = 15 m_krylov_x = zeros(A.shape[0], n_iters) m_krylov_x[:, 0] = x0 for i in range(1, n_iters): kriv_i_x = A * m_krylov_x[:, i - 1] m_krylov_x[:, i] = kriv_i_x m_krylov_x # + id="5L6GS871VPUF" [N(i) for i in (np.array(m_krylov_x[0, 1:]) / np.array(m_krylov_x[0, :-1])).tolist()[0]] # + [markdown] id="zIdRJzsoV_Xc" # ### Ejercicio 28 # # Realizar una iteración con el método QR para el cálculo de autovalores de la matriz $A$ # + id="w059mn3BWMjj" A = Matrix([[1, 1, 0], [2, 1, 0], [2, 0, 1]]) # + id="3ISZtAmqWYh1" metodo_autovals_QR(A, n_iters=1, verbose=True) # + id="JU4FJ9AkWplI" dict_QR = metodo_autovals_QR(A, n_iters=15, verbose=False) N(dict_QR['A'][-1]) # + id="s-9D_Cb_W8BJ" N(dict_QR['A'][-2]) # + id="0GfeTbWJXEwu" np.sort(solve(det(A - eye(3) * Symbol('lambda')), Symbol('lambda')))[::-1] # + [markdown] id="gk5nafPlXZ8L" # ### Ejercicio 29 # Sea $A$ con $\theta$ dado. Determinar el n-ésimo términa de la sucesión de Krylov asociada a $A$ y al vector $x$. Por medio de la sucesión de Krylov determia el polinomio característico de $A$. # + id="IOeqbEldXuE9" t = Symbol('theta') A = Matrix([[1, 0, 0], [0, cos(t), -sin(t)], [0, sin(t), cos(t)]]) x0 = Matrix([1, cos(t), -sin(t)]) # + id="CoJNBxDIYD_L" ss = simplify(matriz_krylov(A, x0, n_iters=8)) ss # + [markdown] id="_l6W_2WVSSyi" # Así a priori tiene pintas de que $A^nx = (1, \cos(n-1)\theta, sin(n-1)\theta)^t$. # # Como $n=3$ solo necesitamos los primeros 4 términos de la sucesión de krylov para resolver el sistema y crear el polinomio característico. # + id="8Gk6eFrWZF9l" M = ss[:3, :3] am = Matrix([Symbol('a0'), Symbol('a1'), Symbol('a2')]) rhs = ss[:3, 3] # + id="baxYJ0ekZgv7" sol = simplify(M.inv() * rhs) sol # + id="QUiHi5OFZ2lF" l = Symbol('lambda') simplify(sol[0] + sol[1] * l + sol[2] * l ** 2 - l ** 3) # + [markdown] id="E7tNXMTwsQwe" # ### Ejercicio 31 # # Si $\alpha$ es una constante, aproximar el autovalore de módulo máximo de la matriz $A$ usando el método de la potencia (sin normalización) como vector inicial $v$, probando las diferentes componentes. 
# + id="1eqi22XlsgMw" a = Symbol('alpha') n = Symbol('n') A = Matrix([[1, 0, 0], [0, -2*cos(a)**2, 2*cos(a)*sin(a)], [0, 2*cos(a)*sin(a), -2*sin(a)**2]]) A # + id="nhfb3jZksxHS" v = Matrix([1, 1, 1]) v # + id="Yl5fxN4Ps2c8" pol, vecfinal = sucesion_krylov(A, v) pol # + id="IplEaEeYyCGE" solve(pol, l) # + id="PIymkNwntGSZ" vecfinal # + id="tW_NB3tNtJNO" # Vamos a ir desglosando mk = matriz_krylov(A, v, n_iters=7) mk # + id="mZZ0JSKlyRxa" simplify(Matrix(np.array(mk[:, 1:]) / np.array(mk[:, :-1]))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sys # for automation and parallelisation manual, scenario = (True, 'base') if 'ipykernel' in sys.argv[0] else (False, sys.argv[1]) if manual: # %matplotlib inline import os import pandas as pd import numpy as np import geopandas as gpd from tqdm import tqdm from syspy.skims import skims from quetzal.model import stepmodel from quetzal.io import excel # # Preparation of the LoS tables # ## Corrects footpaths in the LoS table # ## Needs PT LoS table and footpaths input_path = '../input_static/spatial_' output_path = '../output/' model_path = '../model/' # Load scenario parameters params = excel.read_var(file='../input/parameters.xls', scenario=scenario) sm = stepmodel.read_json(model_path + 'de_pt_los') ae = stepmodel.read_json(model_path + 'de_pt_access_egress') # The only kind of all-walk connections between centroids should be direct connections without intermediate stops at PT nodes because these are always connected to access/egress links which have non-footpath properties. # # Thus, footpaths already computed in prep2 will be parametrised with census based mean distances. 
# Drop walking links that use access-egress links sm.pt_los = sm.pt_los.loc[sm.pt_los['route_type']!='walk'] # Use saved non-motorised paths, if they exist from a previous run import os.path if os.path.exists(model_path + 'de_pt_los_walk/'): w = stepmodel.read_json(model_path + 'de_pt_los_walk') sm.pt_los = sm.pt_los.append(w.pt_los.loc[w.pt_los['route_type']=='walk']).reset_index(drop=True) print('Added walking paths from previous run') sm.to_json(model_path + 'de_pt_los', only_attributes=['pt_los'], encoding='utf-8') assert manual # break automation sm.pt_los.sample() # ## Compute mean distances # Based on census data: mean population-weighted distance from origin centroid to every population cluster in the destination zone # Load census data: Zensus 2011 - Einwohnerzahl je Hektar # Copyright: © Statistisches Bundesamt, Wiesbaden 2015 # (im Auftrag der Herausgebergemeinschaft) # Vervielfältigung und Verbreitung, auch auszugsweise, mit Quellenangabe gestattet if os.path.isfile(input_path + 'Zensus_2011.geojson'): pop = gpd.read_file(input_path + 'Zensus_2011.geojson', driver='GeoJSON') else: print('No file with geometries found') import shapely.speedups pop = pd.read_csv(input_path + 'Zensus_2011.csv', sep=';') # Restrict to entries that hold information pop = pop.loc[pop['Einwohner']!=-1] pop = gpd.GeoDataFrame(pop[['Einwohner']], geometry=gpd.points_from_xy(pop['x_mp_100m'], pop['y_mp_100m'])) pop.crs = 3035 pop.to_crs(sm.epsg, inplace=True) pop['FID'] = np.nan z = stepmodel.read_zip(model_path + 'de_zones.zip') shapely.speedups.enable for _, zone in tqdm(z.zones.iterrows(), total=z.zones.shape[0]): pop.loc[pop['geometry'].within(zone['geometry']), 'FID'] = zone['FID'] pop.to_file(input_path + 'Zensus_2011.geojson', driver='GeoJSON') pop.sample() # Get zone-connecting footpaths foot = ae.footpaths.loc[(ae.footpaths['a'].apply(lambda s: len(s.split('_'))<=2)) & (ae.footpaths['b'].apply(lambda s: len(s.split('_'))<=2))] print(len(foot)) foot.sample() try: speed = foot['speed'].mean() # in km/h except KeyError: speed = params['pt_access']['speed_bicycle'] foot = foot[['a', 'b']].copy() speed # Build all columns of LoS table foot['index'] = foot.index foot['path'] = [(a,i,b) for a,i,b in zip(foot['a'], foot.index, foot['b'])] foot.drop('index', axis=1, inplace=True) foot = foot.rename(columns={'a': 'origin', 'b': 'destination'}) foot['link_path'] = [[] for _ in range(len(foot))] for col in ['ntransfers', 'access_time', 'footpath_time', 'in_vehicle_time', 'waiting_time', 'boarding_time', 'time', 'length', 'price']: if col in sm.pt_los.columns: foot[col] = 0 foot['route_types'] = [('walk',) for _ in range(len(foot))] foot['route_type'] = 'walk' foot = foot.loc[~(~(foot['origin'].str.startswith('DE')) | ~(foot['origin'].str.startswith('DE')) | (foot['origin']==foot['destination']))] foot.shape foot.drop_duplicates(subset=['origin', 'destination'], inplace=True) foot.reset_index(drop=True, inplace=True) foot.sample() # Calculate mean population-weighted distances # from the origin centroid to all points in the destination zone for ind, row in tqdm(foot.iterrows(), total=len(foot.index)): o = ae.centroids.loc[row['origin'], 'geometry'] zone = pop.loc[pop['NUTS_ID']==row['destination']] weighted_dist = [skims.get_distance_from_lon_lat_in_m( o.coords[0][0], o.coords[0][1], p['geometry'].coords[0][0], p['geometry'].coords[0][1]) * p['Einwohner'] for _, p in zone.iterrows()] foot.loc[ind, 'length'] = sum(weighted_dist) / zone['Einwohner'].sum() foot['length'].hist(bins=50) # Compare to quickly 
approximated distances ae.footpaths.loc[(ae.footpaths['a'].apply(lambda s: len(s.split('_'))<=2)) & (ae.footpaths['b'].apply(lambda s: len(s.split('_'))<=2)), 'distance'].hist(bins=50) # Replace NaN foot.loc[foot['length'].isna(), 'length'] = foot.loc[foot['length'].notna(), 'length'].mean() # Generate time in seconds foot['time'] = foot['length'] / (speed / 3.6) if 'footpath_time' in sm.pt_los.columns: foot['footpath_time'] = foot['time'] (foot['time']/3600).hist(bins=50) # in h # Make DataFrame lighter cols = ['length'] + [col for col in foot.columns if col[-4:]=='time'] foot[cols] = foot[cols].astype(int) # Add to LoS table sm.pt_los = sm.pt_los.append(foot).reset_index(drop=True) sm.pt_los.loc[sm.pt_los['route_type']=='walk'].sample() # ## Save sm.to_json(model_path + 'de_pt_los', only_attributes=['pt_los'], encoding='utf-8') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Explanation # # This routine changes the coding convention of aaa_appcodes.do # # Former convention : # ```Stata # replace isApp = 0 if rootdom0 == "hello.edu" # replace isApp = 0 if rootdom1 == "hello.edu" # replace isApp = 0 if rootdom2 == "hello.edu" # ``` # # Newer convention : # ```Stata # replace isApp = 0 if rootdom == "hello.edu0" # replace isApp = 0 if rootdom == "hello.edu1" # replace isApp = 0 if rootdom == "hello.edu2" # ``` # Move the digit in the do file. def move_digit(string): new_string = string[0:string.find('rootdom') + 7] new_string = new_string + string[string.find('rootdom') + 8:len(string) - 1] new_string = new_string + string[string.find('rootdom') + 7] new_string = new_string + '"' return new_string # Test string mystring = 'replace isApp = 0 if rootdom0 == "hello.edu"' # Test output print(move_digit(mystring)) print(move_digit('replace isApp = 1 if rootdom0 == "aacc.edu"')) dofile = open('aaa_appcodes.do', mode='r') # + newdo = [] for lines in dofile.readlines()[29:]: if lines.find('rootdom') == -1: newdo.append(lines) elif lines.find('rootdom') > -1: newdo.append(move_digit(lines)) newdo # - # This cell writes newdo to a new file that can be pasted in App_Rec_Train/aaa_appcodes.do with open('aaa_appcodes_fixed' + '.do', mode='w') as newdofile: for do_line in newdo: print(do_line, file = newdofile) newdofile.close # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_amazonei_tensorflow_p36 # language: python # name: conda_amazonei_tensorflow_p36 # --- import sys sys.path.append('/home/ec2-user/SageMaker') from toniutils import dirtoni as dirt from toniutils import printoni #

# ## Load Data
from script import load
path = 'engineered_data/experiment-1/'
X_train, X_val, Y_train, Y_val = [load(path + i + '.pkl') for i in ('X_train', 'X_val', 'Y_train', 'Y_val')]

# ## Train Model
    import tensorflow as tf import numpy as np from tensorflow import float32 as fl32 from tensorflow import random_normal as rnorm from tensorflow import matmul, add from tensorflow.nn import relu,dropout, tanh from tensorflow.initializers import he_uniform, glorot_uniform from tensorflow.python.client import device_lib #device_lib.list_local_devices() batch_size = 1 inputs = tf.placeholder(shape = [batch_size, X_train.shape[1]], dtype = fl32, name = 'inputs') labels = tf.placeholder(shape = [batch_size, Y_train.shape[1]], dtype = fl32, name = 'labels') printoni(dirt(tf.initializers), 4) # + jupyter={"source_hidden": true} with tf.variable_scope('l1', reuse = tf.AUTO_REUSE): w1 = tf.get_variable('w1', [inputs.shape[1].value, 110], initializer = he_uniform()) b1 = tf.get_variable('b1', [110,], initializer = tf.initializers.zeros) layer1 = dropout( relu(add(matmul(inputs, w1), b1), name = 'relu'), rate = 0.1 ) with tf.variable_scope('l2', reuse = tf.AUTO_REUSE): w2 = tf.get_variable('w2', [layer1.shape[1].value, 220], initializer = he_uniform()) b2 = tf.get_variable('b2', [220,], initializer = tf.initializers.zeros) layer2 = dropout( relu(add(matmul(layer1, w2), b2), name = 'relu'), rate = 0.1) with tf.variable_scope('l3', reuse = tf.AUTO_REUSE): w3 = tf.get_variable('w3', [layer2.shape[1].value, 220], initializer = he_uniform()) b3 = tf.get_variable('b3', [220,], initializer = tf.initializers.zeros) layer3 = dropout( relu(add(matmul(layer2, w3), b3), name = 'relu'), rate = 0.1) '''with tf.variable_scope('l4', reuse = tf.AUTO_REUSE): w4 = tf.get_variable('w4', [layer3.shape[1].value, 220], initializer = he_uniform()) b4 = tf.get_variable('b4', [220,], initializer = tf.initializers.zeros) layer4 = dropout( relu(add(matmul(layer3, w4), b4), name = 'relu'), rate = 0.1)''' with tf.variable_scope('output_layer', reuse = tf.AUTO_REUSE): w_out = tf.get_variable('w_out', [layer3.shape[1].value, 80], initializer = glorot_uniform()) b_out = tf.get_variable('b_out', [80,], initializer = tf.initializers.zeros) layer_out = add(matmul(layer3, w_out), b_out) # - #

# ## ResNet
    # + def layer(layer_name, parameter_names, prev_layer, neurons = 50, activation = 'relu', drout = 0): with tf.variable_scope(layer_name, reuse = tf.AUTO_REUSE): w = tf.get_variable(parameter_names[0], [prev_layer.shape[1].value, neurons], initializer = he_uniform()) b = tf.get_variable(parameter_names[1], [neurons,], initializer = tf.initializers.zeros) if activation == 'relu': result_layer = relu(add(matmul(prev_layer, w), b), name = 'relu') elif activation == 'linear': result_layer = add(matmul(prev_layer, w), b) if drout != 0: result_layer = dropout(result_layer, rate = drout) return result_layer def residual_layer(layer_name, parameter_names, prev_layers, neurons = 100, activation = 'relu', drout = 0): unified_prev_layers = tf.concat(prev_layers, 1) with tf.variable_scope(layer_name, reuse = tf.AUTO_REUSE): w = tf.get_variable(parameter_names[0], [unified_prev_layers.shape[1].value, neurons], initializer = he_uniform()) b = tf.get_variable(parameter_names[1], [neurons,], initializer = tf.initializers.zeros) if activation == 'relu': result_layer = relu(add(matmul(unified_prev_layers, w), b), name = 'relu') elif activation == 'linear': result_layer = add(matmul(unified_prev_layers, w), b) if drout != 0: result_layer = dropout(result_layer, rate = drout) return result_layer # - layer1 = layer('l1', ['w1','b1'], inputs, 400) layer2 = layer('l2', ['w2','b2'], layer1, 400) layer3 = layer('l3', ['w3','b3'], layer2, 400) layer4 = layer('l4', ['w4','b4'], layer3, 400) layer5 = layer('l5', ['w5','b5'], layer4, 400) layer6 = residual_layer('l6', ['w6','b6'], [layer5, layer1]) layer7 = residual_layer('l7', ['w7','b7'], [layer6, layer2]) layer8 = residual_layer('l8', ['w8','b8'], [layer7, layer3]) layer9 = residual_layer('l9', ['w9','b9'], [layer8, layer4]) layer10 = residual_layer('l10', ['w10','b10'], [layer9, layer5, layer1]) layer11 = residual_layer('l11', ['w10','b11'], [layer10, layer6, layer2]) layer12 = residual_layer('l12', ['w10','b12'], [layer11, layer7, layer3]) layer_out = layer('l_out', ['w_out','b_out'], layer12, neurons = 80, activation = 'linear') with tf.variable_scope('loss', reuse = tf.AUTO_REUSE): loss = tf.reduce_mean(tf.squared_difference(layer_out, labels)) loss_minimize = tf.train.AdamOptimizer().minimize(loss) init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) saver = tf.train.Saver() # + #test_loss = [] mean_train_loss = [] mean_test_loss = [] for epoch in range(120): epoch_train_loss = [] for i in range(X_train.shape[0] // batch_size): new_loss, _ = sess.run([loss, loss_minimize], feed_dict = {inputs: X_train[i * batch_size:(i + 1)*batch_size, :], labels: Y_train[i * batch_size:(i + 1)*batch_size, :]}) epoch_train_loss.append(new_loss) epoch_mean_train_loss = np.array(epoch_train_loss).mean() epoch_test_loss = [] for i in range(X_val.shape[0] // batch_size): # new_loss, _ = sess.run([loss, loss_minimize], new_loss = sess.run([loss], feed_dict = {inputs: X_val[i * batch_size:(i + 1)*batch_size, :], labels: Y_val[i * batch_size:(i + 1)*batch_size, :]}) epoch_test_loss.append(new_loss) epoch_mean_test_loss = np.array(epoch_test_loss).mean() print('Epoch: {} --- t_loss: {} - v_loss: {}'.format(epoch, epoch_mean_train_loss, epoch_mean_test_loss)) # if epoch % 10 == 0: # # Append the step number to the checkpoint name: # saver.save(sess, save_path = '/home/ec2-user/SageMaker/tf_models/my-model', global_step = epoch) # train_loss.extend(epoch_train_loss) mean_train_loss.append(epoch_mean_train_loss) # test_loss.extend(epoch_test_loss) 
mean_test_loss.append(epoch_mean_test_loss) #saver.save(sess, save_path = '/home/ec2-user/SageMaker/tf_models/my-model', global_step = 21) # - sess.close() mean_test_loss mean_test_loss mean_test_loss mean_test_loss # 20 epochs mean_test_loss mean_test_loss epoch_mean_test_loss # deep ResNet mean_test_loss # deep no ResNet epoch_mean_test_loss # ResNet 2000 epochs epoch_mean_test_loss # ResNet 3000 epochs epoch_mean_test_loss #

# ## Save Model
    # + jupyter={"outputs_hidden": true} tf.io.write_graph(sess.graph, '/tmp/my-model', 'train.pbtxt') # - #

# ## Plots
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
plt.plot(mean_test_loss)
plt.show()

# Note: test_loss is only populated if the commented-out per-batch bookkeeping
# in the training loop above is re-enabled.
plt.figure(figsize=(15, 15))
plt.plot(test_loss, linewidth=0.1)
plt.show()

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3 (ipykernel)
#     language: python
#     name: python3
# ---

# +
import cv2
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
from matplotlib.pyplot import imshow
from tqdm import tqdm
import torch
from torchsummary import summary
from collections import namedtuple, defaultdict
from pathlib import Path
import time

# %matplotlib inline
rcParams['figure.figsize'] = (10, 15)
# -

import sys
sys.path.append('..')

import os
from src.constructor.config_structure import TrainConfigParams
from src.registry import TASKS
from src.constructor.data import create_dataset
from train import load_config

config_path = '../configs/leaves_hrnet_w18_manual.yml'
config_yaml = load_config(config_path)
config = TrainConfigParams(**config_yaml)

# +
data_params = config.data
common_params = data_params.common_params
other_params = data_params.train_params
dataset_name = other_params.name
# other_params = data_params.valid_params
# other_params = data_params.test_params
dataset = create_dataset(dataset_name, common_params, other_params)
# -

mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

def decode_img(tensor):
    # Undo the ImageNet normalization and return a displayable uint8 image.
    tensor = tensor.permute(1, 2, 0).detach().cpu().numpy()
    return (np.clip(tensor * std + mean, 0., 1.) * 255).astype(np.uint8)

# +
start_idx = 10
n_imgs = 10
n_cols = 5
fig, axs = plt.subplots(n_imgs // n_cols * 2, n_cols, figsize=(20, 8 * n_imgs // n_cols))
targets = []
for i in range(n_imgs):
    sample = dataset[start_idx + i]
    img = decode_img(sample['input'])
    mask = sample['target'].numpy()
    axs[(i // n_cols) * 2][i % n_cols].imshow(img, vmin=0, vmax=2)
    axs[(i // n_cols) * 2 + 1][i % n_cols].imshow(mask, vmin=0, vmax=2)
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] id="nq_B010XthHd" colab_type="text"
# # Neural Transfer Using PyTorch
# ## Studying Gradient Descent Methods
# This is a quick extension of the PyTorch [NST Tutorial](https://pytorch.org/tutorials/advanced/neural_style_tutorial.html),
# but designed to run within the Colaboratory environment with Google Drive as the permanent
# storage area. The choice of Google Drive is driven by my personal transition to a
# Chromebox/Chromebook environment; optimizing for that change means minimizing any transfers
# to/from my local drive.
#
# ## Introduction
# There is a lot of activity in this notebook since we need to:
# 1. Demonstrate moving data to/from a Google Drive
# 2. Demonstrate installing and using PyTorch
# 3. Use gradient methods in PyTorch while leveraging and extending a previously trained model.
#
# My main focus is on the gradient methods, but the other two pieces are also informative for
# solving future problems, and there are a couple of challenges along the way.
#
# This notebook doesn't spend a lot of time on the Neural Style Transfer algorithm itself
# (it is readily available in the [Tutorial](https://pytorch.org/tutorials/advanced/neural_style_tutorial.html#sphx-glr-download-advanced-neural-style-tutorial-py));
# rather, the focus is on leveraging gradient descent and Colaboratory to do something
# interesting. This notebook has four main parts:
#
# 1. Setting up the Environment
# 2. Performing the Initial Image Processing
# 3. Using Gradient Descent for Neural Transfer
# 4. Saving the Result
#
# NOTE: This can be done using just a CPU; however, GPU processing is much faster, so using a
# GPU is recommended for this notebook. Since this notebook is assumed to be running in
# Colaboratory, you should change the runtime to include a GPU.

# + [markdown] id="aWEi3nSy56Kb" colab_type="text"
# ## 1. Set up the Environment
#
# After we load the torch capabilities, we need to restart the runtime so that the PIL version
# is synched between the Colaboratory default (the old version 4.x) and the requirement for
# torch (the newer version 5.x). Otherwise everything is fairly standard, so the main steps are:
#
# 1. Import my standard python environment: os, sys, numpy, scipy, matplotlib, skimage tools
# 2. Install/import torch and torchvision
# 3. Update the Python Environment
# 4. Copy the two working files from my Google Drive
# + [markdown] id="X3KWNihrsy7X" colab_type="text"
# ### 1.a Import Standard Python

# + [markdown] id="uknRpA3o9yu9" colab_type="text"
# Python is a rich environment; the first compute cell pulls in the standard imports and the
# second cell the ones I consider specific to this notebook.

# + id="xCQpN8kotKWJ" colab_type="code" colab={}
# General python includes
# skimage is used for much of the preprocessing
import os, sys
import os.path
import numpy as np
import scipy as sp
from pprint import pprint, pformat
from itertools import product
import skimage
from skimage import io, transform

# The plot capabilities
import matplotlib
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

# Since these are images, turn off the grids
matplotlib.rc('axes', **{'grid': False})

# Output system specific information
print("Current Working Directory: {:s}".format(os.getcwd()))
_content_path_ = os.getcwd()

# + id="AnHg1ik9t0Y1" colab_type="code" colab={}
# To perform deep copies
import copy

# + [markdown] id="uQaVXgbfuQl4" colab_type="text"
# ### 1.b Install and Import Torch/Torchvision
#
# The Colaboratory environment is very nice because you have the option of using a GPU. Once a
# data scientist moves beyond the simple examples, using a GPU really isn't a choice; therefore,
# a user must choose an environment. The obvious choice is Tensorflow (since it is embedded in
# Colaboratory), but personal preferences come into play when choosing between Tensorflow and
# PyTorch. I have based my own choice of Torch (I don't use it for my building class labs
# because it requires a few extra steps) on the following observations:
#
# 1. PyTorch feels more pythonic
# 2. PyTorch recomputes the gradient dynamically; therefore, we can easily mix/match gradient
#    techniques within a module (see the short sketch after this list)
# 3. The extendability of pretrained Deep Learning models seems more natural
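# The second point is the one this notebook leans on later. As a minimal, self-contained sketch
# of the idea (my own illustration, not a cell from the original notebook): autograd does not
# care which optimizer consumes the gradients it produces, so the optimizer can be swapped
# mid-run on the same parameter.

# +
import torch

x = torch.randn(3, requires_grad=True)
opt_sgd = torch.optim.SGD([x], lr=0.1)
opt_adam = torch.optim.Adam([x], lr=0.01)

for step in range(20):
    opt = opt_sgd if step < 10 else opt_adam   # switch gradient technique halfway through
    opt.zero_grad()
    loss = (x ** 2).sum()
    loss.backward()
    opt.step()
# -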
    # These are just personal preferences, but they work in this notebook. # + id="MeFjQNPy6Xxk" colab_type="code" colab={} # #Install Torch and TorchVision, torchvision is the home to vgg models # !pip3 install -U torch torchvision import torch import torchvision import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torchvision import transforms from torchvision import models # import PIL since torchvision uses PIL import PIL from PIL import Image # # Output Information # has_cuda = torch.cuda.is_available() current_device = torch.cuda.current_device() if has_cuda else -1 gpu_count = torch.cuda.device_count() if has_cuda else -1 gpu_name = torch.cuda.get_device_name(current_device) if has_cuda else "NA" print("Number of devices: {:d}".format(gpu_count)) print("Current GPU Number: {:d}".format(current_device)) print("GPU Name: {:s}".format(gpu_name)) #Set the accelerator variable accelerator = 'cuda' if has_cuda else 'cpu' print("Accelerator: {:s}".format(accelerator)) # # + [markdown] id="2JjjSL-uzKCJ" colab_type="text" # ###Update the Python Environment # + [markdown] id="5iUXguzD-gBD" colab_type="text" # At this point, I apologize for the inconvenience, but restart the current runtime. After restart, re-execute the previous compute cells and skip this action. (If anyone knows an easier way to accomplish these updates, please drop me a note.) # + [markdown] id="iY97_nc--aVy" colab_type="text" # ###1.d Copy the Two Working Files from Google Drive # # Per my transition to using the web for data storage, this notebook assumes all permanent storage is on a Google Drive. Anyone using Colaboratory has a Google Drive; therefore, this is a nice optimization. To optimize this approach, a personal library was developed (and available at [Connection To Google Drive](https://github.com/drdavidrace/colab_gdrive)) that is more "Linux" like to avoid obtaining the file_id for each individual file of interest. The library uses the "file path" to identify a file and contains methods for copying the file to the local directory. For this we need to : # # * Connect to the Google Drive # * Copy the files to the local VM # # We will use the connection to save the file at a later point. # # Connect to your Google Drive - # + id="M-fKEDvL-xCz" colab_type="code" colab={} #Connect to your Google Drive import logging # # !pip uninstall --yes colab_gdrive # !pip install -U -q git+https://github.com/drdavidrace/colab_gdrive.git # !pip list | grep -i colab # !pip install -U -q PyDrive # # Connect to a ColabGDrive from colab_gdrive import colab_gdrive myGdrive = colab_gdrive.ColabGDrive(logging_level=logging.ERROR) pprint(myGdrive.is_connected()) pprint(myGdrive.getcwd()) # + [markdown] id="8mZQcTsp_khf" colab_type="text" # Download your images - # # This uses a form, which must be run before proceeding. 
# + id="u2Q3-i1E5uK5" colab_type="code" colab={} #@title Run this cell to update the files and default locations content_photo = 'Self.jpg' #@param {type:"string"} style_photo = 'picasso.jpg' #@param {type:"string"} google_drive_path = 'BigDataTraining/FunWithGradientDescent' #@param{type:"string"} # + id="JEiHz1XH_nKv" colab_type="code" colab={} #Define files to copy #change the next three lines to match your files content_photo = content_photo.strip() style_photo = style_photo.strip() google_drive_path = google_drive_path.strip() # # Change the google drive path # myGdrive.chdir(google_drive_path) # data_files = [content_photo, style_photo] #Check local file existence files_to_copy = [] for df in data_files: if not os.path.isfile(df): image_name = df files_to_copy.append(image_name) # #copy files pprint(files_to_copy) if files_to_copy: myGdrive.copy_from(files_to_copy) #Check local file existence for df in data_files: if os.path.isfile(df): pprint("Found local version of " + df) else: pprint("Did not find local version of " + df) # + [markdown] id="fxtabQiP_1xz" colab_type="text" # These files should be stored in the "/content" directory, so run a visual check for the files. # + id="8u4BxN0U_42r" colab_type="code" colab={} # !pwd # !ls -alF # + [markdown] id="iHMGD2dt7iB6" colab_type="text" # ### Conclusion # + [markdown] id="XfHuNdPk7lY_" colab_type="text" # At this point the environment is set up and the data is ready for processing. # + [markdown] id="l4vIPBTgAOST" colab_type="text" # ##2. Perform the Initial Image Processing # + [markdown] id="6RV8PWYIAaAL" colab_type="text" # I have always liked the Picasso effects, but style tranfer alone isn't sufficient to achieve an interesting look. To achieve the desired effects, the initial image processing performs these other steps: # # * Permuting the squares of the images (I experimented with different size squares, but 3x3 decompositon seems to work well for my photo.) # * Rotating the squares of the images (I don't rotate many of the squares, but this gives a nice look.) # * Doing a blue color shift (this can be adjusted.) # # Since I am using a 224x224 image (primarily for my personal photo), I first sample to 225x225 for the rotations. Then later resample to the target size of 224x224. # # For the permutation of the square, this supports both a random permutation or a fixed permutation. Random was nice to start, but I eventually went with fixed since I am only working with a single image. # # For the rotations, this codel only supports rotations of 0, $\frac{\pi}{2}$, $\pi$, and $\frac{\pi}{2}$ degrees (so I don't have to do some type of mapping from the other rotated squares to the target image squares). # # For the color shift, this is rather rudimentary. This just adds a shift of blue_shift to the blue and subtracts $\frac{blue_shift}{2}$ from the other two colors. # # >Note: After working with this a little, I have chosen to not do the rotation. I have left the code so other can have fun with it as desired. # # # # Set the processing variables: (You must run the form cell before the values will be set.) # + id="uia68stpAdXX" colab_type="code" colab={} cellView="form" #@title After running this cell the first time, it changes automatically on a change. 
do_permute = True #@param ["False", "True"]{type:"raw"} do_rotation = True #@param ["False", "True"]{type:"raw"} do_shift = True #@param ["False", "True"]{type:"raw"} random_permute = False #@param ["False", "True"] {type:"raw"} blue_shift = .1 #@param {type:"number"} target_image_rows = 224 #@param {type:"integer"} target_image_cols = 224 #@param {type: "integer"} number_blocks = 3 #@param {type: "integer"} content_image_file = "Self.jpg" #@param{type: "string"} style_image_file = "picasso.jpg" #@param{type: "string"} stage_one_output_file = "working_image_one.jpg" #@param{type: "string"} # + [markdown] id="Xe1ls3hUA0xW" colab_type="text" # Define the basic functions. # + id="Zn3jdENzA29e" colab_type="code" colab={} from math import sqrt # # functions # def make_content_name(in_name): ''' Purpose: Create a complete file name based upon the base directory: /content Input: in_name - The name of the input file Uses: _content_path_ - The global variable that stores the current content path Output: The concatenation of the content_path with the in_name ''' return os.path.join(_content_path_,in_name) def print_move_matrix(in_matrix): ''' Purpose: Only used during debugging, but left here. ''' m,nx,ny = in_matrix.shape for i,j in product(range(nx),range(ny)): pprint('{:d},{:d} -> {:d},{:d}]'.format(i,j,in_matrix[0,i,j],in_matrix[1,i,j])) def compute_start_rc(k, num_sq, sq_size): ''' Purpose: compute the start row and start pixel of a particular square used in the permuatation of the squares Inputs: k: number of the chosen square (row major order) num_sq: The number of squares in each row and column sq_size: The number of pixels in each square (sq_size x sq_size) Outputs: row_start: The row the data to start cut out col_start: The column of the data to start cut out ''' assert k >= 0 assert num_sq > 0 assert sq_size > 0 row = int(k / num_sq) col = k - row * num_sq row_start = row * sq_size col_start = col * sq_size return row_start, col_start def copy_image_part(in_image, k, num_sq): ''' Purpose: Create a copy of an square cut out of an image Inputs: in_image: The original image k: The square number to cut out num_sq: The number of square to use for image cut outs Outputs: A copy of the data in the square to cut out ''' nx,ny,m = in_image.shape sq_size = int(nx/num_sq) assert sq_size * num_sq == nx assert nx == ny row_start, col_start = compute_start_rc(k, num_sq, sq_size) temp_array = in_image[row_start:row_start + sq_size, col_start:col_start + sq_size,:].copy() return temp_array def permute_image(in_image, permute_array, num_sq): ''' Purpose: Permute the square in the image Inputs: in_image: The original image permute_array: An array defining the permutation num_sq: The number of squares in each row and column Outputs: A copy of the image with the squares permuted ''' nx,ny,m = in_image.shape w_image = in_image.copy() assert nx == ny sq_size = int(nx/num_sq) assert sq_size * num_sq == nx work_array = np.zeros((sq_size,sq_size,m)) temp_array = np.zeros((sq_size,sq_size,m)) touched = np.full((len(permute_array)),False,dtype=bool) cur_no = 0 work_array = w_image[:sq_size,:sq_size,:].copy() next_no = 0 while not np.all(touched): next_no = permute_array[cur_no] if not touched[next_no]: temp_array = copy_image_part(w_image,next_no,num_sq) row_start, col_start = compute_start_rc(next_no, num_sq, sq_size) w_image[row_start:row_start + sq_size, col_start:col_start + sq_size,:] = work_array.copy() work_array = temp_array.copy() else: for i in range(len(permute_array)): if not touched[i]: cur_no = i 
break work_array = copy_image_part(w_image, cur_no, num_sq) next_no = permute_array[cur_no] temp_array = copy_image_part(w_image,next_no,num_sq) row_start, col_start = compute_start_rc(next_no, num_sq, sq_size) w_image[row_start:row_start + sq_size, col_start:col_start + sq_size,:] = work_array.copy() work_array = temp_array.copy() touched[next_no] = True cur_no = next_no return w_image # + [markdown] id="A5mfzzRQBEb7" colab_type="text" # Process the Content Image with permutations, rotations and color shifts. (These aren't necessary, but I found the result more interesting.) # + id="kYUX34nzBHjd" colab_type="code" colab={} #Set assert target_image_rows == target_image_cols, "The target number of rows " + str(target_image_rows) + " must equal the number of columns " + str(target_image_cols) img_size = target_image_rows image_path = make_content_name(content_image_file) num_sq = int(number_blocks) t_size = 0 if( int(target_image_rows/num_sq) * num_sq == target_image_rows): t_size = target_image_rows else: t_size = (int(target_image_rows/num_sq) + 1) * num_sq print("Target Content Image Size {:d}".format(target_image_rows)) print("Working Image Size: {:d}".format(t_size)) out_path = make_content_name(stage_one_output_file) # matrix_0, matrix_90, matrix_180, matrix_270 = define_rotation_matrices(t_size, num_sq) # content_image = content_image = io.imread(image_path) content_array = transform.resize(content_image,(t_size,t_size)) print("Content Image - Resampled") pprint(content_array.shape) plt.imshow(content_array) plt.show() tot_sq = num_sq * num_sq #permute image #set this if you want to use a specific permutation rand_perm_in = [8, 3, 1, 2, 7, 6, 0, 5, 4] np.random.seed(0) image_out = None if do_permute: rand_perm = np.random.permutation(tot_sq) if random_permute else rand_perm_in pprint("The permutation") pprint(rand_perm) image_out = permute_image(content_array,rand_perm,num_sq) else: image_out = content_array.copy() plt.imshow(image_out) plt.show() if do_shift: print("Blue Shift {:f}".format(blue_shift)) image_out[:,:,2] = image_out[:,:,2] + 2.0 * blue_shift image_out[:,:,0] = image_out[:,:,0] - blue_shift/3.0 image_out[:,:,1] = image_out[:,:,1] - 2.0 * blue_shift/3.0 #clip image_out = np.clip(image_out,0.0,1.0) plt.imshow(image_out) plt.show() if do_rotation: nx,ny,m = image_out.shape sq_size = int(nx/num_sq) p_rotate = 1.0/8. move_matrix = None rot_val = 1 np.random.seed(np.random.randint(0,high=np.iinfo(np.int32).max)) for k in range(num_sq * num_sq): r_val = np.random.rand() row_start,col_start = compute_start_rc(k,num_sq, sq_size) rotate = False rot_angle = 0.0 if r_val < 3 * p_rotate: rot_val = (rot_val + 1) %3 rotate = True if rotate and (rot_val ==0): rot_angle = 90. rotate = True elif rotate and (rot_val == 1): rot_angle = 180. rotate = True elif rotate and (rot_val == 2): rot_angle = 270. rotate = True if rotate: w_array = copy_image_part(image_out, k, num_sq) t_array = np.zeros(w_array.shape) wx, wy, wm = w_array.shape t_array = transform.rotate(w_array,rot_angle) image_out[row_start:row_start+sq_size,col_start:col_start+ sq_size,:] = t_array if do_permute: permute_array = [0, 3, 2, 7, 4, 5, 6, 1, 8] pprint(rand_perm) image_out = permute_image(image_out,permute_array,num_sq) show_image = image_out * 255. show_image = np.array(show_image).astype(int) show_out = np.uint8(show_image) plt.imshow(image_out) plt.show() # !rm -rf out_path io.imsave(out_path,show_out) # + [markdown] id="2NuFe4NGBmdL" colab_type="text" # Check the files. 
# + id="5BVVz6B0Bo4-" colab_type="code" colab={} # !ls -alF # + [markdown] id="SsDBKWOvCRlb" colab_type="text" # ##3. Use "Gradient Descent" for Neural Style Transfer # # In this process we read in the content image and style images to perform Neural Style Transfer. The differnce in this part of the notebook is the way we use autograd. The use of autograd within PyTorch is very powerful, namely, there is nothing restricting us to a single gradient descent optimization technique during the optimization process. The PyTorch autograd method allows us some flexibility to both converge faster to a solution and smooth the solution as we approach the end of the process. # + [markdown] id="mK2hndnhG3he" colab_type="text" # ###3.1 Load the Images onto the GPU # + id="_qyS4sLlCYSx" colab_type="code" colab={} device = torch.device("cuda" if torch.cuda.is_available() else "cpu") #load the images content_file = 'working_image_one.jpg' style_file = 'picasso.jpg' loader = transforms.Compose([ transforms.Resize([img_size,img_size],PIL.Image.LANCZOS), # scale imported image transforms.ToTensor()]) # content_file = make_content_name(content_file) style_file = make_content_name(style_file) pprint(content_file) pprint(style_file) content_image = Image.open(content_file) content_image = loader(content_image).to(device,torch.float) style_image = Image.open(style_file) style_image = loader(style_image).to(device,torch.float) # # Build the target image # target_image = content_image.clone() pprint(type(content_image)) pprint(content_image.device) pprint(content_image.shape) pprint(type(style_image)) pprint(style_image.device) pprint(style_image.shape) pprint(type(target_image)) pprint(target_image.device) pprint(target_image.shape) # + [markdown] id="OBMqmBllDrL1" colab_type="text" # Show the images # + id="eEEu8XVaDtHA" colab_type="code" colab={} unloader = transforms.ToPILImage() # reconvert into PIL image def imshow(tensor, title=None): image = tensor.cpu().clone() # we clone the tensor to not do changes on it image = image.squeeze(0) # remove the fake batch dimension image = unloader(image) plt.imshow(image) if title is not None: plt.title(title) plt.pause(0.001) plt.figure() imshow(style_image, title='Style Image') plt.figure() imshow(content_image, title='Content Image') plt.figure() imshow(target_image, title='Target Image') # + [markdown] id="1tYJOK0RHKQM" colab_type="text" # ###3.2 Define the Functions that Generate a "Loss" Function # + [markdown] id="MlDfm80AthHu" colab_type="text" # This isn't a true PyTorch Loss function, but rather a measure of the way we want to converge the image between the content and style. Autograd can work with this looser definition of a loss as easily as it works with an Autograd Loss function. # # # + [markdown] id="Q3ia8CwuthHx" colab_type="text" # Loss Functions # -------------- # Define the loss functions for the style and content # # # # + id="hyqsUTKethHz" colab_type="code" colab={} class ContentLoss(nn.Module): def __init__(self, target,): super(ContentLoss, self).__init__() # we 'detach' the target content from the tree used # to dynamically compute the gradient: this is a stated value, # not a variable. Otherwise the forward method of the criterion # will throw an error. self.target = target.detach() def forward(self, input): self.loss = F.mse_loss(input, self.target) return input def gram_matrix(input): a, b, c, d = input.size() # a=batch size(=1) # b=number of feature maps # (c,d)=dimensions of a f. 
map (N=c*d) features = input.view(a * b, c * d) # resise F_XL into \hat F_XL G = torch.mm(features, features.t()) # compute the gram product # we 'normalize' the values of the gram matrix # by dividing by the number of element in each feature maps. return G.div(a * b * c * d) class StyleLoss(nn.Module): def __init__(self, target_feature): super(StyleLoss, self).__init__() self.target = gram_matrix(target_feature).detach() def forward(self, input): G = gram_matrix(input) self.loss = F.mse_loss(G, self.target) return input def content_and_style(style_losses, content_losses, style_weight, content_weight): style_score = 0.0 content_score = 0 for sl in style_losses: style_score += sl.loss for cl in content_losses: content_score += cl.loss style_score *= style_weight content_score *= content_weight return content_score, style_score # + [markdown] id="E1oejX4xthID" colab_type="text" # ## Building the Model # # # Now we need to import a pre-trained neural network. We will use a 19 # layer VGG network like the one used in the paper. # # PyTorch’s implementation of VGG is a module divided into two child # ``Sequential`` modules: ``features`` (containing convolution and pooling layers), # and ``classifier`` (containing fully connected layers). We will use the # ``features`` module because we need the output of the individual # convolution layers to measure content and style loss. Some layers have # different behavior during training than evaluation, so we must set the # network to evaluation mode using ``.eval()``. # # # # + id="GFxIJ0jFthIF" colab_type="code" colab={} #Global normalization based upon pretrained model cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device) cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device) # cnn_normalization_mean = torch.tensor([0.406, 0.456, 0.485]).to(device) # cnn_normalization_std = torch.tensor([0.225, 0.224, 0.229]).to(device) #Load the model cnn = models.vgg19(pretrained=True).features.to(device).eval() # create a module to normalize input image so we can easily put it in a # nn.Sequential class Normalization(nn.Module): def __init__(self, mean, std): super(Normalization, self).__init__() # .view the mean and std to make them [C x 1 x 1] so that they can # directly work with image Tensor of shape [B x C x H x W]. # B is batch size. C is number of channels. H is height and W is width. self.mean = torch.tensor(mean).view(-1, 1, 1) self.std = torch.tensor(std).view(-1, 1, 1) def forward(self, img): # normalize img return (img - self.mean) / self.std # + [markdown] id="Sybl7iCythIO" colab_type="text" # A ``Sequential`` module contains an ordered list of child modules. For # instance, ``vgg19.features`` contains a sequence (Conv2d, ReLU, MaxPool2d, # Conv2d, ReLU…) aligned in the right order of depth. We need to add our # content loss and style loss layers immediately after the convolution # layer they are detecting. To do this we must create a new ``Sequential`` # module that has content loss and style loss modules correctly inserted. 
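# As a quick sanity check (my addition), the layer ordering that the naming scheme in the next
# cell relies on can be inspected directly from the `cnn` feature extractor loaded above:

# +
for idx, child in enumerate(cnn.children()):
    print(idx, child.__class__.__name__)   # Conv2d, ReLU, MaxPool2d, ...
# -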
# # # # + id="zTehFhYqthIP" colab_type="code" colab={} # desired depth layers to compute style/content losses : content_layers_default = ['conv2_3_2'] style_layers_default = [ 'conv2_3_1', 'conv2_4_2', 'conv2_5_3'] def get_style_model_and_losses(in_cnn, norm_mean, norm_std, style_img, content_img, content_layers=content_layers_default, style_layers=style_layers_default): cnn = copy.deepcopy(in_cnn) normalization = Normalization(norm_mean, norm_std).to(accelerator) #Define list for content_losses and style_losses content_losses = [] style_losses = [] #Create new model model = nn.Sequential(normalization) name = '' l = 1 conv_sub_l = 1 relu_sub_l = 1 for layer in cnn.children(): if isinstance(layer, nn.Conv2d): name = 'conv2_{:d}_{:d}'.format(l,conv_sub_l) conv_sub_l += 1 elif isinstance(layer, nn.ReLU): name = 'relu_{:d}_{:d}'.format(l, relu_sub_l) layer = nn.ReLU(inplace=False) relu_sub_l += 1 elif isinstance(layer, nn.MaxPool2d): name = 'pool_{:d}'.format(l) l += 1 conv_sub_l = 1 relu_sub_l = 1 else: raise RuntimeError('Unknown Layer: {}'.format(layer.__class__.__name__)) # model.add_module(name, layer) # if name in content_layers: target = model(content_img).detach() content_loss = ContentLoss(target) model.add_module("content_loss_{:s}".format(name), content_loss) content_losses.append(content_loss) # if name in style_layers: # add style loss: target_feature = model(style_img).detach() style_loss = StyleLoss(target_feature) model.add_module("style_loss_{:s}".format(name), style_loss) style_losses.append(style_loss) # for layer in model.named_children(): # pprint(layer[0]) return model, style_losses, content_losses # + [markdown] id="1UJMxdGVthIW" colab_type="text" # ## Define Gradient Descent # # This uses the Adam descent algorithm # # # # + id="0MMhePYXthIa" colab_type="code" colab={} def get_input_optimizer0(input_img): # this line to show that input is a parameter that requires a gradient optimizer = optim.Adam([input_img.requires_grad_()], lr=.01) return optimizer def get_input_optimizer1(input_img): # this line to show that input is a parameter that requires a gradient optimizer = optim.Adam([input_img.requires_grad_()], lr=.005) return optimizer def get_input_optimizer2(input_img): # this line to show that input is a parameter that requires a gradient optimizer = optim.Adam([input_img.requires_grad_()], lr=.001) return optimizer # + [markdown] id="2O6OkpetthIe" colab_type="text" # Finally, we must define a function that performs the neural transfer. For # each iteration of the networks, it is fed an updated input and computes # new losses. We will run the ``backward`` methods of each loss module to # dynamicaly compute their gradients. The optimizer requires a “closure” # function, which reevaluates the modul and returns the loss. # # We still have one final constraint to address. The network may try to # optimize the input with values that exceed the 0 to 1 tensor range for # the image. We can address this by correcting the input values to be # between 0 to 1 each time the network is run. 
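# A minimal illustration (my addition) of that correction, equivalent in effect to the
# `input_img.data.clamp_(0, 1)` call inside the closure below:

# +
_demo_img = torch.randn(1, 3, 8, 8, requires_grad=True)   # values fall outside [0, 1]
with torch.no_grad():
    _demo_img.clamp_(0, 1)                                 # project back onto the valid pixel range
print(float(_demo_img.min()), float(_demo_img.max()))
# -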
# # # # + id="ixbCIPllthIg" colab_type="code" colab={} global last_loss, cur_opt, run_step, cur_rel_err last_loss = 0.0 cur_opt = None cur_rel_err = 1.0 run_step = -1 def run_style_transfer(cnn, normalization_mean, normalization_std, content_img, style_img, input_img, num_steps=10000, style_weight=1000000, content_weight=1): global last_loss, cur_opt, run_step, cur_rel_err """Run the style transfer.""" print('Building the style transfer model..') model, style_losses, content_losses = get_style_model_and_losses(cnn, normalization_mean, normalization_std, style_img, content_img) cur_opt = get_input_optimizer1(input_img) print(cur_opt) print(type(cur_opt)) cur_rel_err = 0.5 print('Optimizing..') run_step = 0 while run_step <= num_steps: def closure(): global last_loss, cur_opt, run_step, cur_rel_err # correct the values of updated input image input_img.data.clamp_(0, 1) cur_opt.zero_grad() model(input_img) content_score, style_score = content_and_style(style_losses, content_losses, style_weight, content_weight) loss = content_score + style_score loss.backward() #This is included to demonstrate changing the gradient algorithm if run_step % 50 == 0: curr_loss = loss.item() cur_rel_err = np.abs((last_loss - curr_loss)) last_loss = curr_loss if run_step % 500 == 0: print("step {:8d}:".format(run_step)) print('Style Loss : {:4f} Content Loss: {:4f}'.format( style_score.item(), content_score.item())) print() plt.figure() imshow(input_img, title='Output Image') plt.show return style_score + content_score # cur_opt.step(closure) if run_step % 1000 == 0: if cur_rel_err > 2.0: cur_opt = get_input_optimizer2(input_img) elif cur_rel_err > 1.0: cur_opt = get_input_optimizer1(input_img) else: cur_opt = get_input_optimizer0(input_img) pprint(cur_opt) run_step = run_step + 1 # a last correction... input_img.data.clamp_(0, 1) return input_img, run_step # + [markdown] id="epzkedVfthIk" colab_type="text" # Finally, we can run the algorithm. # # # # + id="oP8idZVzthIm" colab_type="code" colab={} content_image_s = torch.stack([content_image]) style_image_s = torch.stack([style_image]) target_image_s = content_image_s.clone() output, N = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std, content_image_s, style_image_s, target_image_s) print("Number of steps: {}".format(N)) plt.figure() imshow(output, title='Output Image') # sphinx_gallery_thumbnail_number = 4 plt.ioff() plt.show() # + [markdown] id="t7LfSQ--KtQT" colab_type="text" # ##4. Save the Results # # This is an easy step when using the colab_gdrive library. We save the file locally, then transfer the file to the Google Drive. # + [markdown] id="khpXt0hRGwpj" colab_type="text" # Copy the file to local storage # + id="KP1tKlpzET6Q" colab_type="code" colab={} # # Save file locally # out_jpg_file = "style_converged.jpg" out_jpg_full = make_content_name(out_jpg_file) torchvision.utils.save_image(output.squeeze(),out_jpg_full) # !ls -alF # # show the file # simple_loader = transforms.Compose([ transforms.ToTensor()]) # converge_image = Image.open(out_jpg_full) converge_image = loader(converge_image).to(device,torch.float) imshow(converge_image,title="Converged Image") # + [markdown] id="IbYNkdJKGtm1" colab_type="text" # Copy the file to the google drive # # Recall that we have already set the current working directory with the chdir command, so this is relatively easy. 
# + id="qyAaxeIhHdTw" colab_type="code" colab={} print(myGdrive.getcwd()) print(os.getcwd()) myGdrive.copy_to(out_jpg_file) #This only supports copying from the local cwd # + [markdown] id="QoOYLEKUSP-e" colab_type="text" # ## 5. Conclusions # # This notebook demonstrated these important ideas: # 1. Using Colaboratory and Google Drive together to solve problems # 2. Installing and using Pytorch # 3. Using gradient methods within PyTorch while leveraging and extending previously trained models # # If you have any questions about the operation or contents of this notebook, please contact # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np # User inputs start_date = '2017-10-01' total_days = 200 # + df = pd.DataFrame() # Generate date range for the time-series as string df['date'] = pd.date_range(start_date, periods = total_days).strftime('%Y-%m-%d') # Generate frequency of the data df['count'] = np.random.randint(5, 50, total_days) df.head() # - # Expand the frequencies, generate # of records - date x count df = pd.concat([pd.DataFrame(data = [row], index = range(row['count'])) for _, row in df.iterrows()], ignore_index = True) import random def generate_timestamp(column): return '{0:s} {1:02d}:{2:02d}:{3:02d}'.format(column, random.randint(0, 23), # Hours random.randint(0, 59), # Minutes random.randint(0, 59)) # Seconds # Convert date string to timestamp string and sort df['date'] = df['date'].apply(generate_timestamp) df.sort_values(by = 'date', inplace = True) # Looks good! df.head() # Set of moods to spread randomly. This can be repeated for more randomness. mood = ['happy', 'neutral', 'sad'] # + # Randomly assign moods to each row df['mood'] = np.random.choice(list(mood), len(df)) # Device Id stays same df['device'] = '1' df.head() # - # Clean up! 
df.drop('count', axis = 1, inplace = True) # Save as CSV df.to_csv("smiley.csv") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt # %matplotlib inline x = np.linspace(2, 2*np.pi) len(x) print(x[:5]) x = np.linspace(0, 2*np.pi, 100) len(x) y = np.sin(x) len(y) y[:5] plt.figure(figsize=(12,4)) plt.plot(x,y, "x-" , ms=7, label = "$\sin(x)$") plt.plot(x,np.cos(x), label = "$\cos(x)$", lw = 5) plt.xlabel("$x$", fontsize = 18) plt.ylabel("$y$", fontsize = 18) plt.title("Funciones") plt.legend() plt.grid() plt.plot(x,0*x, lw = 1) plt.show() import sympy as sym sym.init_printing(use_latex='mathjax') sym.var('x') sym.sin(x)*np.sin(3) sym.var("x", real = True) f= x**2; f df = sym.diff(f,x,1) # resolver f'(x)=0 y mostrar soluciones xc = sym.solve(df, x) xc # + # convertir f e una función que se pueda evaluar numéricamente f_num = sym.lambdify([x], f, 'numpy') x_vec = np.linspace(-5, 5, 100) # graficar plt.plot(x_vec, f_num(x_vec), color = "cyan") plt.grid() plt.xlabel('$x$', fontsize = 18) plt.ylabel('$x^2$', fontsize = 18) plt.show() # - import numpy as np import matplotlib.pyplot as plt # %matplotlib inline def f(x): return x**2 def h(x,y): return np.sin(x)/np.tan(y) x = np.linspace(-5,5) plt.plot(x,f(x),"--") plt.plot(x,h(x,.1), "-.") plt.plot(x,h(x,.4), "-") plt.scatter(0,0,marker = "D",s=50,lw=50) plt.show # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import geopandas as gpd import pandas as pd import numpy as np from geopy.distance import geodesic import math import gurobipy as gp from gurobipy import GRB # - zones = gpd.read_file('taxi_zones/taxi_zones.shp') table = pd.read_csv('taxi+_zone_lookup.csv') zones = zones.to_crs("EPSG:4326") # + def dist(pt1, pt2): return geodesic(pt1, pt2).miles def optql(x_list, pi_list, dQ, epsilon=0.5): ''' input: x_list: list of geographic coordinates pi_list: probability distribution for x_list dQ: distance metric epsilon: desired privacy level output: matrix: stochastic transition matrix pre_prob: normalized pre-process probability distribution post_prob: post-process probability distribution ''' pre_prob = np.array(pi_list) / sum(pi_list) # normalize probability distribution n = len(x_list) # get number of elements threshold = math.exp(epsilon) # define a model model = gp.Model('OptQL') # add variables accessed as (0, 0), (0, 1), (1, 1), ... 
variables = model.addVars(n, n, lb=0.0, ub=1.0, name='k') # set objective function model.setObjective(gp.quicksum(pre_prob[i] * variables[i, j] * dQ(x_list[i], x_list[j]) \ for i in range(n) for j in range(n)), GRB.MINIMIZE) # add constraints (1) model.addConstrs(variables[i, k] <= pow(threshold, dQ(x_list[i], x_list[j])) * variables[j, k] \ for i in range(n) for j in range(n) for k in range(n)) # add constraints (2) model.addConstrs(gp.quicksum(variables.select(i, '*')) == 1 for i in range(n)) # constriants (3) are already satisfied # optimize the model model.optimize() # build a matrix to store the stochastic matrix variables = model.getAttr('x', variables) matrix = np.zeros((n, n)) for key, value in variables.items(): matrix[key] = value # get post-process probability distribution post_prob = pre_prob @ matrix return matrix, pre_prob, post_prob # + zones['Centroid'] = zones['geometry'].apply(lambda x: x.centroid) x_list = [(lon, lat) for (lon, lat) in zones['Centroid'].apply(lambda x: (x.xy[1][0], x.xy[0][0]))] pi_list = [geometry.area for geometry in zones['geometry']] dQ = dist epsilon = 0.5 p_matrix, pre_prob, post_prob = optql(x_list, pi_list, dQ, epsilon=epsilon) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # BI-PST Domácí úkol # - # - # - # # --- # ## Parametry úkolu # Reprezentantem úlohy byl zvolen . Spočítejme příslušné parametry: # # K = 11 (11.08.1997) # L = 14 (Tsaregorodtsev) # M = ((K+L)*47) mod 11 + 1 = 1175 mod 11 + 1 = 9 + 1 = 10 # # Podle tabulky nám vyjde úloha "ex0221 - váha dle přežití vrabců". # # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt from statsmodels.distributions.empirical_distribution import ECDF from scipy.stats.distributions import uniform, expon, norm # ## Načtení a analýza dat # # Načtena z knihovny "Sleuth2" data byla exportována do CSV souboru, který se snadno parsuje pomocí knihovny "pandas". # # Každý záznam souboru se skládá z hmotnosti dospělého vrabce v gramech a statusu, který značí, zdali vrabec v zimní bouři zahynul nebo přežil. # # Pozorovaná data byla rozdělena na dvě skupiny: vrabci se statusem "survived" a vrabci se statusem "perished". 
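# The same two groups can also be obtained with pandas boolean indexing instead of the list
# comprehensions used below; a small sketch (my addition, assuming the CSV has a header row
# with the weight column first and the status column second):

# +
df_sparrows = pd.read_csv('data.csv', sep=';')
weight_col, status_col = df_sparrows.columns[0], df_sparrows.columns[1]
survived_alt = df_sparrows.loc[df_sparrows[status_col] == 'survived', weight_col].to_numpy()
perished_alt = df_sparrows.loc[df_sparrows[status_col] == 'perished', weight_col].to_numpy()
# -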
my_data = pd.read_csv('data.csv', sep=";").values # + data_survived = np.array([value[0] for value in my_data if value[1] == 'survived']) plt.figure(figsize=(20, 7)) plt.stem(data_survived) plt.grid(True) plt.title('Pozorovaní vrabci, kteří přežili "survived"') plt.ylabel('Hmotnost vrabce, g') plt.show() print(data_survived) print("\n") survived_cnt = data_survived.size survived_EX = np.average(data_survived) survived_var = np.var(data_survived) survived_median = np.median(data_survived) survived_min = np.min(data_survived) survived_max = np.max(data_survived) print('Analýza hmotnosti "survived" vrabců:\n') print(f' celkový počet = {survived_cnt:d},\n') print(f' střední hodnota = {survived_EX:.3f},\n') print(f' rozptyl = {survived_var:.3f},\n') print(f' medián = {survived_median:.3f},\n') print(f' min = {survived_min:.3f},\n') print(f' max = {survived_max:.3f}.') # + data_perished = np.array([value[0] for value in my_data if value[1] == 'perished']) plt.figure(figsize=(20, 7)) plt.stem(data_perished) plt.grid(True) plt.title('Pozorovaní vrabci, kteří zahynuli "perished"') plt.ylabel('Hmotnost vrabce, g') plt.show() print(data_perished) print("\n") perished_cnt = data_perished.size perished_EX = np.average(data_perished) perished_var = np.var(data_perished) perished_median = np.median(data_perished) perished_min = np.min(data_perished) perished_max = np.max(data_perished) print('Analýza hmotnosti "perished" vrabců:\n') print(f' celkový počet = {perished_cnt:d},\n') print(f' střední hodnota = {perished_EX:.3f},\n') print(f' rozptyl = {perished_var:.3f},\n') print(f' medián = {perished_median:.3f},\n') print(f' min = {perished_min:.3f},\n') print(f' max = {perished_max:.3f}.') # - # --- # ## Zkoumání "survived" vrabců # # Z předchozí analýzy víme, že v této skupině neexistuje žádný vrabec s hmotností menší než 23.2 a větší než 28, takže budeme analyzovat data na intervalu [23, 28]. # + #Určíme interval, počet přehrádek a přehrádky ox_interval = np.linspace(int(survived_min), survived_max) #Optimální počet přehrádek byl zvolen takový: breaks = 6 bins_survived = np.linspace(round(survived_min), survived_max, breaks + 1) #Výsledný histogram plt.figure(figsize=(20,7)) plt.title('Histogram "survived" vrabců') plt.xlabel('Hmotnost, g') plt.ylabel('Hustota') plt.hist(data_survived, bins_survived, density=True) plt.show() # + #Najdeme empirickou distribuční funkci ecdf_survived = ECDF(data_survived) plt.figure(figsize=(20,6)) plt.title('Emperická distribuční funkce "survived" vrabců') plt.xlabel('Hmotnost') plt.ylabel('Pravděpodobnost') plt.plot(ecdf_survived.x, ecdf_survived.y) plt.show() # - # Pozorováním histogramu a emperické distribuční funkce "survived" vrabců předpokládáme, že se jedná o normální rozdělení, pro které je hustota ve tvaru # # $$ # f(x) = \frac{1}{\sigma\sqrt{2\pi}} \mathrm{e}^{-\frac{{(x-\mu)}^2}{2\sigma^2}} # $$ # # a je distribuční funkce ve tvaru # # $$ # F(x) = \int_{-\infty}^x \frac{1}{\sigma\sqrt{2\pi}} \mathrm{e}^{-\frac{{(t-\mu)}^2}{2\sigma^2}} \;\mathrm{d}t # $$ # Nicméně, porovnějme histogram "survived" vrabců s histogramy náhodně vygenerovaných známých rozdělení (Normální, Exponenciální, Rovnoměrné), abychom mohli ověřit náš předpoklad. Odhad parametrů těchto rozdělení uděláme ze zkoumaných dat. 
# + #Porovnání s normálním rozdělěním plt.figure(figsize=(20,7)) plt.title('Porovnání historgramu "survived" vrabců\ns náhodným histogramem normálního rozdělení') plt.xlabel('Hmotnost, g') plt.ylabel('Hustota') #Histogram naších dat plt.hist(data_survived, bins_survived, density=True) #Z naších dat odhadujeme parametry normálního rozdělení mu = survived_EX sigma = survived_var #Vygenerujeme si 100 náhodných hodnot z normálního rozdělění #a dáme je do histogramu se stejným počtem přehrádek random_normal_values = np.random.normal(mu, sigma, 100) plt.hist(random_normal_values, bins_survived, alpha=0.6, density=True) #A ještě si pro přehlednost nakreslíme graf hustoty normálního rozdělení plt.plot(ox_interval, norm.pdf(ox_interval, loc=mu, scale=sigma), color='r') plt.show() # + #Porovnání s exponenciálním rozdělěním plt.figure(figsize=(20,7)) plt.title('Porovnání histogramu "survived" vrabců\ns náhodným histogramem exponencionálního rozdělení') plt.xlabel('Hmotnost, g') plt.ylabel('Hustota') #Histogram naších dat plt.hist(data_survived, bins_survived, density=True) #Z naších dat odhadujeme parametry exponenciálního rozdělení lmb = 1 / survived_EX mu = 1 / lmb * 0.05 #Vygenerujeme si 100 náhodných hodnot z exponenciálního rozdělění #a dáme je do histogramu se stejným počtem přehrádek. #Jelikož hodnoty z exponencíalního rozdělení se generujou na intervalu [0, +inf], #musíme je posunotut, aby ležely od bodu round(survived_min), totiž na intervalu [23, +inf] random_exponential_values = np.random.exponential(mu, 100) + round(survived_min) #Protože nechceme zobrazovat hodonty větší než survived_max, ořezáme je al = [] for i in random_exponential_values: if i <= survived_max: al.append(i) #Výsledné hodnoty leží na intervalu [round(survived_min), survived_max] plt.hist(al, alpha=0.6, density=True) #A ještě si pro přehlednost nakreslíme graf hustoty exponenciálního rozdělení plt.plot(ox_interval, expon.pdf(ox_interval, scale=mu, loc=round(survived_min)), color='r') plt.show() # + #Porovnání s rovnoměrným rozdělěním plt.figure(figsize=(20,7)) plt.title('Porovnání histogramu "survived" vrabců\ns náhodným histogramem rovnoměrného rozdělení') plt.xlabel('Hmotnost, g') plt.ylabel('Hustota') #Histogram naších dat plt.hist(data_survived, bins_survived, density=True) #Z naších dat odhadujeme parametry rovnoměrného rozdělení a = round(survived_min) b = survived_max #Vygenerujeme si 100 náhodných hodnot z rovnoměrného rozdělění #a dáme je do histogramu se stejným počtem přehrádek random_uniform_values = np.random.uniform(a, b, 100) plt.hist(random_uniform_values, bins_survived, alpha=0.6, density=True) #A ještě si pro přehlednost nakreslíme graf hustoty rovnoměrného rozdělení plt.plot(ox_interval, uniform.pdf(ox_interval, loc=a, scale=(b - a)), color='r') plt.show() # - # Jak vidíme, histogram "survived" vrabců nejlépe odpovídá normálnímu rozdělení. Předpoklad byl splněn. # # --- # ## Zkoumání "perished" vrabců # Z předchozí analýzy víme, že v této skupině neexistuje žádný vrabec s hmotností menší než 24.6 a větší než 31.1, takže budeme analyzovat data na intervalu [24, 32]. 
# + #Určíme interval, počet přehrádek a přehrádky ox_interval = np.linspace(round(perished_min) - 1, round(perished_max) + 1) #Optimální počet přehrádek byl zvolen takový: breaks = 6 bins_perished = np.linspace(round(perished_min) - 1, round(perished_max) + 1, breaks + 1) #Výsledný histogram plt.figure(figsize=(20,7)) plt.title('Histogram "perished" vrabců') plt.xlabel('Hmotnost') plt.ylabel('Počet') plt.hist(data_perished, bins_perished, density=True) plt.show() # + #Najdeme empirickou distribuční funkci ecdf_perished = ECDF(data_perished) plt.figure(figsize=(20,6)) plt.title('Emperická distribuční funkce "perished" vrabců') plt.xlabel('Hmotnost') plt.ylabel('Pravděpodobnost') plt.plot(ecdf_perished.x, ecdf_perished.y) plt.show() # - # Pozorováním histogramu a emperické distribuční funkce "perished" vrabců předpokládáme, že se jedná o exponenciální rozdělení, pro které je hustota ve tvaru: # # $$ # f_{X}(x) = \begin{cases} # \lambda e^{-\lambda x} & x > 0, \\ # 0 & x \leq 0. # \end{cases} # $$ # # a distribuční funkce je: # # $$ # F(x) = \begin{cases} # 1-e^{-\lambda x} & x > 0, \\ # 0 & x \leq 0. # \end{cases} # $$ # Nicméně, porovnějme histogram "perished" vrabců s histogramy náhodně vygenerovaných známých rozdělení (Normální, Exponenciální, Rovnoměrné), abychom mohli ověřit náš předpoklad. Odhad parametrů těchto rozdělení uděláme ze zkoumaných dat. # + #Porovnání s normálním rozdělěním plt.figure(figsize=(20,7)) plt.title('Porovnání historgramu "perished" vrabců\ns náhodným histogramem normálního rozdělení') plt.xlabel('Hmotnost, g') plt.ylabel('Hustota') #Histogram naších dat plt.hist(data_perished, bins_perished, density=True) #Z naších dat odhadujeme parametry normálního rozdělení mu = perished_EX sigma = perished_var #Vygenerujeme si 100 náhodných hodnot z normálního rozdělění #a dáme je do histogramu se stejným počtem přehrádek random_normal_values = np.random.normal(mu, sigma, 100) plt.hist(random_normal_values, bins_perished, alpha=0.6, density=True) #A ještě si pro přehlednost nakreslíme graf hustoty normálního rozdělení plt.plot(ox_interval, norm.pdf(ox_interval, loc=mu, scale=sigma), color='r') plt.show() # + #Porovnání s exponenciálním rozdělěním plt.figure(figsize=(20,7)) plt.title('Porovnání histogramu "perished" vrabců\ns náhodným histogramem exponencionálního rozdělení') plt.xlabel('Hmotnost, g') plt.ylabel('Hustota') #Histogram naších dat plt.hist(data_perished, bins_perished, density=True) #Z naších dat odhadujeme parametry exponenciálního rozdělení lmb = 1 / perished_EX mu = 1 / lmb * 0.05 #Vygenerujeme si 100 náhodných hodnot z exponenciálního rozdělění #a dáme je do histogramu se stejným počtem přehrádek. 
#Jelikož hodnoty z exponencíalního rozdělení se generujou na intervalu [0, +inf], #musíme je posunotut, aby ležely od bodu round(perished_min)-1, totiž na intervalu [24, +inf] random_exponential_values = np.random.exponential(mu, 100) + (round(perished_min) - 1) #Protože nechceme zobrazovat hodonty větší než round(perished_max)+1, ořezáme je al = [] for i in random_exponential_values: if i <= (round(perished_max) + 1): al.append(i) #Výsledné hodnoty leží na intervalu [round(perished_min)-1, round(perished_max)+1] plt.hist(al, alpha=0.6, density=True) #A ještě si pro přehlednost nakreslíme graf hustoty exponenciálního rozdělení plt.plot(ox_interval, expon.pdf(ox_interval, scale=mu, loc=(round(perished_min) - 1)), color='r') plt.show() # + #Porovnání s rovnoměrným rozdělěním plt.figure(figsize=(20,7)) plt.title('Porovnání histogramu "perished" vrabců\ns náhodným histogramem rovnoměrného rozdělení') plt.xlabel('Hmotnost, g') plt.ylabel('Hustota') #Histogram naších dat plt.hist(data_perished, bins_perished, density=True) #Z naších dat odhadujeme parametry rovnoměrného rozdělení a = round(perished_min) - 1 b = round(perished_max) + 1 #Vygenerujeme si 100 náhodných hodnot z rovnoměrného rozdělění #a dáme je do histogramu se stejným počtem přehrádek random_uniform_values = np.random.uniform(a, b, 100) plt.hist(random_uniform_values, bins_perished, alpha=0.6, density=True) #A ještě si pro přehlednost nakreslíme graf hustoty rovnoměrného rozdělení plt.plot(ox_interval, uniform.pdf(ox_interval, loc=a, scale=(b - a)), color='r') plt.show() # - # Jak vidíme, histogram "perished" vrabců nejlépe odpovídá exponenciálnímu rozdělení. Předpoklad byl splněn. # # --- # ## Oboustranný 95% konfidenční interval pro střední hodnoty # # Použili jsme studentovo rozdělení, protože neznáme teoretický rozptyl. # + # survived t = 2.030 survived_koin_a = (survived_EX - ((t * (survived_var ** 0.5)) / (survived_cnt ** 0.5))) survived_koin_b = (survived_EX + ((t * (survived_var ** 0.5)) / (survived_cnt ** 0.5))) # perished t = 2.064 perished_koin_a = (perished_EX - ((t * (perished_var ** 0.5)) / (perished_cnt ** 0.5))) perished_koin_b = (perished_EX + ((t * (perished_var ** 0.5)) / (perished_cnt ** 0.5))) print(f'Oboustranný 95% konfidenční interval pro střední hodnotu survived: [{survived_koin_a:.5}, {survived_koin_b:.5}], EX = {survived_EX:.5}') print(f'Oboustranný 95% konfidenční interval pro střední hodnotu perished: [{perished_koin_a:.5}, {perished_koin_b:.5}], EX = {perished_EX:.5}') # - # --- # ## Test na hladině významnosti 5% hypotézy # # $$ # H_0: EX = K \quad\quad proti \quad\quad H_a: EX \neq K # $$ # # K je parametr dne narození, hodnota je 11. print(f'Pro survived, hodnota K neleží v intervalu [{survived_koin_a:.5}, {survived_koin_b:.5}]') print(f'Pro perished, hodnota K neleží v intervalu [{perished_koin_a:.5}, {perished_koin_b:.5}]') # Zamítáme tedy hypotézy $H_0$ pro obě skupiny (survived i perished). 
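# The critical values t = 2.030 and t = 2.064 hard-coded in the confidence-interval cell above
# come from the Student t distribution; as a cross-check (my addition), they can be computed
# directly with scipy instead of being looked up in a table:

# +
from scipy.stats import t as student_t

alpha = 0.05
t_survived = student_t.ppf(1 - alpha / 2, df=survived_cnt - 1)
t_perished = student_t.ppf(1 - alpha / 2, df=perished_cnt - 1)
print(f't(survived, df={survived_cnt - 1}) = {t_survived:.3f}')
print(f't(perished, df={perished_cnt - 1}) = {t_perished:.3f}')
# -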
# # --- # ## Společný test stejné střední hodnoty # # $ # H_0: EX\_perished = EX\_survived \quad\quad proti \quad\quad H_a: EX\_perished \neq EX\_survived # $ # + print(f'Rozptyly se nerovnají ({survived_var:.5} != {perished_var:.5})') s = ((survived_var / survived_cnt) + (perished_var / perished_cnt)) ** 0.5 T = (survived_EX - perished_EX) / s nd = (s ** 4) / (((1/(survived_cnt-1)) * ((survived_var/survived_cnt) ** 2)) + ((1/(perished_cnt-1)) * ((perished_var/perished_cnt) ** 2))) print(f's = {s:.5}') print(f'|T| = {abs(T):.5}') print(f'nd = {nd:.5}') print(f'abs(T) > 2.014 = {abs(T) > 2.014}') print('Jelikož je absolutní hodnota T větší, hypotézu H0 zamítáme.') # - # --- # --- # jupyter: # jupytext: # text_representation: # extension: .sh # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Bash # language: bash # name: bash # --- # PROBLEM 2 # osboxes@osboxes ~/BioinformaticsCourseGit/ZMC_Bioinformatics $ cd Accelerated_Intro_WilkinsonExams/ # osboxes@osboxes ~/BioinformaticsCourseGit/ZMC_Bioinformatics/Accelerated_Intro_WilkinsonExams $ ls -lh # total 2.4M # -rw-rw-r-- 1 osboxes osboxes 1.3K Sep 5 10:01 Answers_1_exam.ipynb # -rw-rw-r-- 1 osboxes osboxes 1.2M Sep 5 05:38 conv.txt # -rw-rw-r-- 1 osboxes osboxes 147 Sep 5 05:38 DISCLAIMER # -rw-rw-r-- 1 osboxes osboxes 4.9K Sep 5 09:31 Exam Week 1.ipynb # -rw-rw-r-- 1 osboxes osboxes 4.3K Sep 5 05:38 Exam Week 2.ipynb # -rw-rw-r-- 1 osboxes osboxes 4.2K Sep 5 05:38 Germplasm.tsv # -rw-rw-r-- 1 osboxes osboxes 141 Sep 5 05:38 how_to_convert.txt # -rw-rw-r-- 1 osboxes osboxes 1.1K Sep 5 05:38 LICENSE # -rw-rw-r-- 1 osboxes osboxes 637 Sep 5 09:31 LocusGene.tsv # -rw-rw-r-- 1 osboxes osboxes 1.2M Sep 5 09:31 Locus_Germplasm_Phenotype_20130122.txt # -rw-rw-r-- 1 osboxes osboxes 743 Sep 5 09:31 README.md # # File owner is Oxboxes. Osboxes user and Osboxes group have both permision to read and write but not to execute. Anyone has permission to read only. 
# PROBLEM 3 # osboxes@osboxes ~/BioinformaticsCourseGit/ZMC_Bioinformatics/Accelerated_Intro_WilkinsonExams $ head -1 Locus_Germplasm_Phenotype_20130122.txt # Locus_name) Germplasm_name phenotype pubmed_id # # PROBLEM 4 # osboxes@osboxes ~/BioinformaticsCourseGit/ZMC_Bioinformatics/Accelerated_Intro_WilkinsonExams $ wc -l Locus_Germplasm_Phenotype_20130122.txt # 7216 Locus_Germplasm_Phenotype_20130122.txt # # PROBLEM 5 # osboxes@osboxes ~/BioinformaticsCourseGit/ZMC_Bioinformatics/Accelerated_Intro_WilkinsonExams $ tail -7215 Locus_Germplasm_Phenotype_20130122.txt > Data_Only.csv # # PROBLEM 6 # osboxes@osboxes ~/BioinformaticsCourseGit/ZMC_Bioinformatics/Accelerated_Intro_WilkinsonExams $ grep -n root Locus_Germplasm_Phenotype_20130122.txt # PROBLEM 7 # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:neon] # language: python # name: conda-env-neon-py # --- import numpy as np import matplotlib.pyplot as plt # %matplotlib inline dat = np.load("tiny_data.npz") arr = dat.items()[0][1][:200, :] plt.imshow(arr[2].reshape((25, 25, 25))[:, 12, :]) import os from datetime import datetime from neon.callbacks.callbacks import Callbacks, GANCostCallback, LossCallback #from neon.callbacks.plotting_callbacks import GANPlotCallback from neon.initializers import Gaussian from neon.layers import GeneralizedGANCost, Affine, Sequential, Conv, Deconv, Dropout, Pooling, BatchNorm from neon.layers.layer import Linear, Reshape from neon.layers.container import GenerativeAdversarial from neon.models.model import GAN, Model from neon.transforms import Rectlin, Logistic, GANCost, Tanh from neon.util.argparser import NeonArgparser from neon.util.persist import ensure_dirs_exist from neon.layers.layer import Dropout from neon.data.dataiterator import ArrayIterator from neon.optimizers import GradientDescentMomentum, RMSProp from neon.backends import gen_backend import numpy as np from sklearn.cross_validation import train_test_split X = arr.copy() y = np.ones(arr.shape[0]) mean = np.mean(X, axis=0, keepdims=True) X-=mean max_elem = np.max(np.abs(X)) X = (X)/max_elem np.max(X), np.min(X) plt.plot(X[20].reshape((25, 25, 25))[12, 12, :]) X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.9, random_state=42) print(X_train.shape, 'X train shape') print(y_train.shape, 'y train shape') gen_backend(backend='cpu', batch_size=10) train_set = ArrayIterator(X=X_train, y=y_train, nclass=2, lshape=(1, 25, 25, 25)) valid_set = ArrayIterator(X=X_test, y=y_test, nclass=2) # + # setup weight initialization function init = Gaussian(scale=0.01) # discriminiator using convolution layers lrelu = Rectlin(slope=0.1) # leaky relu for discriminator # sigmoid = Logistic() # sigmoid activation function conv1 = dict(init=init, batch_norm=False, activation=lrelu, padding=2, bias=init) # what's about BatchNorm Layer and batch_norm parameter? 
conv2 = dict(init=init, batch_norm=False, activation=lrelu, padding=2, bias=init) conv3 = dict(init=init, batch_norm=False, activation=lrelu, padding=2, bias=init) D_layers = [ Conv((5, 5, 5, 16), **conv1), Conv((5, 5, 5, 16), **conv2), Pooling((2, 2, 2), strides=2), Conv((5, 5, 5, 16), **conv1), Conv((5, 5, 5, 16), **conv2), Pooling((2, 2, 2), strides=2), Affine(1024, init=init, activation=lrelu), Affine(1, init=init, activation=Logistic()) ] # generator using convolution layers init_gen = Gaussian(scale=0.01) relu = Rectlin(slope=0) # relu for generator pad1 = dict(pad_h=0, pad_w=0, pad_d=0) str1 = dict(str_h=1, str_w=1, str_d=1) conv1 = dict(init=init_gen, batch_norm=False, activation=relu, padding=pad1, strides=str1, bias=init_gen) pad2 = dict(pad_h=0, pad_w=0, pad_d=0) str2 = dict(str_h=2, str_w=2, str_d=2) conv2 = dict(init=init_gen, batch_norm=False, activation=relu, padding=pad2, strides=str2, bias=init_gen) pad3 = dict(pad_h=0, pad_w=0, pad_d=0) str3 = dict(str_h=1, str_w=1, str_d=1) conv3 = dict(init=init_gen, batch_norm=False, activation=Tanh(), padding=pad3, strides=str3, bias=init_gen) G_layers = [ Affine(1024, init=init_gen, bias=init_gen, activation=relu), BatchNorm(), Affine(8 * 9 * 9 * 9, init=init_gen, bias=init_gen, activation=relu), BatchNorm(), Reshape((8, 9, 9, 9)), Deconv((5, 5, 5, 16), **conv1), #14x14x14 BatchNorm(), Deconv((5, 5, 5, 16), **conv1), #14x14x14 BatchNorm(), Deconv((5, 5, 5, 16), **conv1), #14x14x14 BatchNorm(), Deconv((5, 5, 5, 1), **conv3), #27x27x27 ] layers = GenerativeAdversarial(generator=Sequential(G_layers, name="Generator"), discriminator=Sequential(D_layers, name="Discriminator")) # setup optimizer optimizer = RMSProp(learning_rate=1e-2, decay_rate=0.9, epsilon=1e-8) #optimizer = GradientDescentMomentum(learning_rate=1e-3, momentum_coef = 0.9) # setup cost function as Binary CrossEntropy cost = GeneralizedGANCost(costfunc=GANCost(func="wasserstein")) nb_epochs = 13 latent_size = 200 inb_classes = 2 nb_test = 100 # initialize model noise_dim = (latent_size) gan = GAN(layers=layers, noise_dim=noise_dim, k=5) # configure callbacks callbacks = Callbacks(gan, eval_set=valid_set) callbacks.add_callback(GANCostCallback()) #callbacks.add_save_best_state_callback("./best_state.pkl") # run fit gan.fit(train_set, num_epochs=nb_epochs, optimizer=optimizer, cost=cost, callbacks=callbacks) # - x_new = np.random.standard_normal((64, latent_size)) inference_set = ArrayIterator(x_new, None, nclass=2, lshape=(latent_size)) my_generator = Model(gan.layers.generator) # + test = my_generator.get_outputs(inference_set) test = test * max_elem test += mean test = test.reshape((64, 25, 25, 25)) # - plt.imshow(test[0, 12, :, :]) plt.imshow(test[12, :, 12, :]) test[0, :, 12, :] for i in range(10): plt.plot(test[i, 12, :, 12]) for i in range(20): plt.plot(arr[i].reshape((25, 25, 25))[12, :, 12]) x_new = np.random.randn(100, latent_size) inference_set = ArrayIterator(x_new, None, nclass=2, lshape=(latent_size)) my_generator = Model(gan.layers.generator) test1 = my_generator.get_outputs(inference_set) test1 = test1 * max_elem + mean test1 = test1.reshape((100, 25, 25, 25)) # Last activation function - Logistic() -test1 plt.imshow(test1[0, :, 12, :]) test1[0, :, 12, :] np.max(test1) plt.plot(test1[10, 12, 12, :]) # Last activation function - Tanh() -test plt.imshow(test[0, :, 12, :]) test[0, :, 12, :] plt.plot(test[20, 13, 12, :]) plt.imshow(test[20, :, :, 12]) # Last activation function - lrelu() plt.imshow(test[0, :, 12, :]) test[0, :, 12, :] # --- # jupyter: # jupytext: 
# text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns # # Plotting & Visualization # ## [1] matplotlib Primer # set style like ggplot plt.style.use('ggplot') # ### [1.1] Figures & Subplots # #### Cumbersome Method fig = plt.figure(); ## Figure object ax1 = fig.add_subplot(2, 2, 1) ## creates empty 2x2 figure; subplot 1 object ax2 = fig.add_subplot(2, 2, 2) ## subplot 2 object ax3 = fig.add_subplot(2, 2, 3) ## subplot 3 object fig ax1.hist(np.random.randn(100), bins=20, color='k', alpha=0.3); ax2.scatter(np.arange(30), np.arange(30) + 3 * np.random.randn(30)); ax3.plot(np.random.randn(50).cumsum(), 'k--'); fig # #### Better Method fig, axes = plt.subplots(2, 2, sharex=False, sharey=False) axes[0,0].hist(np.random.randn(100), bins=20, color='b', alpha=0.3); axes[0,1].scatter(np.arange(30), np.arange(30) + 3 * np.random.randn(30)); axes[1,0].plot(np.random.randn(50).cumsum(), 'k--'); axes[1,1].plot(range(30), range(30)); fig # ### [1.2] Adjusting Spacing Around Subplots fig, axes = plt.subplots(2, 2, sharex=True, sharey=True) for i in range(2): for j in range(2): axes[i, j].hist(np.random.randn(500), bins=50, color='k', alpha=0.5) plt.subplots_adjust(wspace=0, hspace=0) ## removes width and height spacing, respectively # ### [1.3] Colors, Markers, & Line Styles plt.plot(range(10), range(0, 20, 2), 'b--'); plt.plot(range(10), range(0, 20, 2), linestyle='--', color='darkgreen'); ## to separate linestyle and color np.random.seed(11) plt.plot(np.random.randn(30).cumsum(), 'ko--'); ## include data point markers plt.grid(); np.random.seed(11) plt.plot(np.random.randn(30).cumsum(), linestyle='--', color='orange', marker='o'); ## explicit version of graph above np.random.seed(11) data = np.random.randn(30).cumsum() plt.plot(data, 'k-', drawstyle='steps-post', label='steps-post'); ## use drawstyle to alter interpolation between pts plt.plot(data, 'k--', drawstyle='steps-mid', label='steps-post'); ## use drawstyle to alter interpolation between pts # ### [1.4] Ticks, Labels, & Legends fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.plot(np.random.randn(1000).cumsum()); ticks = ax.set_xticks([0, 250, 500, 750, 1000]) ## select label positions labels = ax.set_xticklabels(['one', 'two', 'three', 'four', 'five'], rotation=45, fontsize='small') ## select label names fig ax.set_title('Example Plot') ## add title ax.set_xlabel('Stages') ## add x-axis label fig # ### [1.5] Adding Legends np.random.seed(10) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.plot(np.random.randn(1000).cumsum(), 'r', label='one') ax.plot(np.random.randn(1000).cumsum(), 'b--', label='two') ax.plot(np.random.randn(1000).cumsum(), 'g.', label='three') ax.legend(loc='best'); # ### [1.6] Annotations on a Subplot # + np.random.seed(42) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) data = np.random.randn(20).cumsum() labels = ('here', 'no here', 'etc') locations = (20, 50, 95) ax.plot(data) ax.annotate('look here', xy=(17, -1), xytext=(17, 1), arrowprops=dict(facecolor='black'), horizontalalignment='center', verticalalignment='top') ## add text + arrow circ = plt.Circle((3, 2), 1, color='g', alpha=0.3) ## add green circle ax.add_patch(circ); # - # ### [1.7] matplotlib Configuration # + #plt.rc('figure', figsize=(8,8)) ## set global parameters # - 
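# A small, hedged illustration of the configuration hook commented out above
# (the values are arbitrary examples, not recommendations): `plt.rc(...)` changes
# session-wide defaults, while `plt.rc_context` scopes the overrides to a single block.
# +
import numpy as np
import matplotlib.pyplot as plt

# plt.rc('figure', figsize=(8, 8)) would change the default for the whole session;
# rc_context applies the overrides only inside the with-block.
with plt.rc_context({'figure.figsize': (6, 4), 'lines.linewidth': 2.5}):
    plt.plot(np.random.randn(50).cumsum());
# -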
plt.plot(range(10)); # ## [2] Plotting w/pandas # ### [2.1] Line Plots s = pd.Series(np.random.randn(10).cumsum(), index=np.arange(0, 100, 10)) s.plot(); df = pd.DataFrame(np.random.randn(10, 4).cumsum(0), columns=['A', 'B', 'C', 'D'], index=np.arange(0, 100, 10)) df df.plot(); df.plot(subplots=True, figsize=(7, 7), sharex=True, sharey=True, title="DF Subplots"); # ### [2.2] Bar Plots fig, axes = plt.subplots(2, 1) data = pd.Series(abs(np.random.randn(16)), index=list('abcdefghijklmnop')) data.plot(kind='bar', ax=axes[0], color='r', alpha=0.7); data.plot(kind='barh', ax=axes[1], color='b', alpha=0.7); df = pd.DataFrame(abs(np.random.randn(6, 4)), index=['one', 'two', 'three', 'four', 'five', 'six'], columns=pd.Index(['A', 'B', 'C', 'D'], name='Genus')) df df.plot(kind='bar', alpha=0.7); df.plot(kind='bar', stacked=True, alpha=0.5); # ### [2.3] Histograms & Density Plots df.C.plot(kind='hist'); df.plot(kind='kde'); comp1 = np.random.normal(0, 1, size=200) comp2 = np.random.normal(10, 2, size=200) values = pd.Series(np.concatenate([comp1, comp2])) values.hist(bins=100, alpha=0.3, color='k', normed=True); values.hist(bins=100, alpha=0.3, color='k', normed=False); values.plot(kind='kde', style='k--'); sns.distplot(values, bins=100); df = pd.DataFrame(np.random.randn(100, 4), columns=pd.Index(['A', 'B', 'C', 'D'], name='Genus')) pd.plotting.scatter_matrix(df, diagonal='hist', color='k', alpha=0.3); sns.pairplot(df); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- from astropy.io import ascii as asc import astropy.units as u from matplotlib import pyplot as plt from supernova import LightCurve2 import glob import os import sys import numpy as np import visualization as vis # %matplotlib plt.style.use('default') #plt.style.use('seaborn-poster') sn2012aw = LightCurve2('2012aw') sn2005ay = LightCurve2('2005ay') sn2013ej = LightCurve2('2013ej') sn2009bw = LightCurve2('2009bw') asassn15oz = LightCurve2('asassn-15oz') sn2012aw.get_photometry() sn2012aw.get_abs_mag() sn2005ay.get_photometry() sn2005ay.get_abs_mag() sn2013ej.get_photometry() sn2013ej.get_abs_mag() sn2009bw.get_photometry() sn2009bw.get_abs_mag() asassn15oz.get_photometry() asassn15oz.get_abs_mag() #plt.plot(sn2005ay.phase['V'], sn2005ay.abs_mag['V'], 'o', label='SN2005ay') plt.figure(figsize=[8.72, 5.19]) plt.plot(sn2012aw.phase['V'], sn2012aw.abs_mag['V'], 'o', label = '2012aw') plt.plot(sn2013ej.phase['V'], sn2013ej.abs_mag['V'], 's',label = '2013ej') plt.plot(sn2009bw.phase['V'], sn2009bw.abs_mag['V'], 'gd',mec='k', label='2009bw') plt.plot(asassn15oz.phase['V'], asassn15oz.abs_mag['V'], 'k*', label = 'ASAS-SN15oz') plt.xlim(0, 200) plt.ylim(-12, -18.5) plt.title('Light Curve Diversity') plt.legend(loc='best') plt.xlabel('Phase (Days)') plt.ylabel('Absolute Magntidue') plt.tight_layout() plt.savefig('../figures/lc_diversity.pdf') plt.savefig('../figures/lc_diversity.pdf') from matplotlib.ticker import (MultipleLocator) fig = plt.figure(figsize=[8, 5]) ax = fig.add_subplot(1,1,1) ax.plot(sn2013ej.phase['V'], sn2013ej.abs_mag['V'], 'o',markersize=3, label = '2013ej') ax.set_xlim(-5, 150) ax.set_ylim(-12, -18.5) ax.set_xticks(np.arange(0, 150, 25)) minorLocator = MultipleLocator(5) plt.gca().xaxis.set_minor_locator(minorLocator) ax.set_title('Light Curve of SN2013ej') #plt.legend(loc='best') ax.set_xlabel('Phase (Days)') ax.set_ylabel('Absolute Magntidue') 
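# MultipleLocator(5) above places a minor x-tick every 5 days between the labelled
# 25-day major ticks; tight_layout() below trims the margins before the figure is
# saved as a PDF.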
plt.tight_layout() plt.savefig('../figures/lc_example.pdf') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 正则表达式入门 # ## 基本概念 # **正则表达式**(*regular expression*)是一个罕见的例子,它既有极其深刻的理论背景,又成为了一种极其常用的工具。 # # Wikipedia 上对正则表达式的说明如下: # # > **正则表达式**(英语:Regular Expression,在代码中常简写为 *regex*、*regexp* 或 *RE*),又称*正规表示式*、*正规表示法*、*正规运算式*、*规则运算式*、*常规表示法*,是计算机科学的一个概念。正则表达式使用单个字符串来描述、匹配一系列符合某个句法规则的字符串。在很多文本编辑器里,正则表达式通常被用来检索、替换那些符合某个模式的文本。许多程序设计语言都支持利用正则表达式进行字符串操作。例如,在 Perl 中就内建了一个功能强大的正则表达式引擎。正则表达式这个概念最初是由 Unix 中的工具软件(例如 sed 和 grep)普及开的。 # # 英文里 *regular* 本是“正规”的意思,在翻译 *regular expression* 时,可能是为了和太过通用的“正规”一词区分开来,使用了“正则”作为技术名词特有的译法。 # # 在正则表达式中,还有两个词儿需要了解: # # * 一个是 **pattern**,一个写出来的正则表达式就是一个 *pattern*,也就是前面定义里的“用于匹配的语法规则”; # * 一个是 **match**,可以用作动词和名词,作动词是就是拿上述 *pattern* 去字符串中寻找符合规则的子串,作名词就是找到的那个子串。 # # 所有主流的程序设计语言都内置对 *regex* 的支持,实际上就是内置了一个“正则表达式引擎”,这个引擎能够拿着我们写好的规则(*pattern*)去任何字符串里寻找匹配的子串,还提供找到之后替换等功能。 # # 我们来看看 Python 里的例子: import re str = 'The quick brown fox jumps over the lazy dog' pattern = re.compile(r'\wo\w') re.findall(pattern, str) # 首先引入 Python 自带的正则表达式模块 `re`,然后用 `re` 提供的 `compile()` 函数把我们书写的正则规则 `\wo\w` 编译成一个 *pattern*,这个规则里的 `\w` 表示任意字母数字或者下划线,而 `o` 就是字母 ‘o’。 # # > 这个规则字符串前面的 `r` 表示 *raw string*,Python 不会对其中的 `\` 做转义处理。 # # 最后调用 `re` 提供的 `findall()` 函数用这个 *pattern* 去寻找 `str` 中所有满足这个规则的匹配(*matches*)。 # # 输出结果是所有 *matches* 组成的列表。 # # 正则引擎不仅可以寻找匹配,还可以**捕获**(*capture*)匹配中的一个片段,可以将其**替换**(*replace* 或 *substitute*)成别的字符串。我们来看下面一个例子: # + str = 'The white dog wears a black hat.' pattern = r'The (white|black) dog wears a (white|black) hat.' repl = r'The \2 dog wears a \1 hat.' re.sub(pattern, repl, str) # - # 和 `re.find()` 或者 `re.findall()` 不一样,`re.sub()` 有三个参数,第一个是用来匹配的规则 `pattern`,第二个是找到 *matches* 之后执行替换的目标模板 `target`,第三个是被操作的字符串。这里需要留意的是: # * 正则规则 `pattern` 中有两个小括号 `()`,这两个小括号的作用就是“捕获”匹配到的内容,规则中第一对小括号捕获的内容会被存在临时变量 `$1` 中,第二对小括号对应的内容则存在 `$2` 中,依此类推; # * 替换目标模板 `target` 中有一个 `\1` 和一个 `\2`,代表这里要换成 `$1` 和 `$2` 的值。 # # 于是上面的代码就把原句中的 *white* 和 *black* 交换了位置。 # # 试试看下面的代码,能预测它的输出吗? repl = r'The \1 dog wears a \1 hat.' re.sub(pattern, repl, str) # 小结一下: # * 我们可以按照正则表达式(*regex*)的语法书写正则规则(*pattern*); # * 正则引擎可以用这个 *pattern* 去匹配(*match*)指定字符串; # * 输出所有匹配(*matches*); # * 这个过程中还可以捕获匹配中的特定部分,并进行替换。 # # 使用 *regex* 的最主要场景有二: # * 用规则去匹配字符串,确认字符串是不是包含符合规则定义的子串,通常用来确认字符串是不是符合特定“格式”; # * 用规则去匹配字符串,捕获并替换其中的特定片段。 # # 顺便,在自学的过程中,想尽一切办法把一切术语用简单直白的“人话”重新表述,是特别有效的促进进步的行为模式,也可以检验你是不是真的搞懂学会了。 # ## 准备工作 # 虽然基本概念并不复杂,但 *regex* 还是一种比较抽象的语言,其语法远不如 Python 那么直观易懂,所以在继续学习前我们先介绍两个工具,在学习 *regex* 的过程中会很有帮助。 # # 第一个是正则规则可视化工具,可以对输入的规则给出可视化的解析,比如 [Regexper](https://regexper.com)。 # # 第二个是正则规则测试工具,可以对输入的规则给出说明,并对给出的测试字符串运行规则给出结果,比如 [regex101](https://regex101.com)。 # # 你现在就可以试试这两个工具,将前面例子里的规则填进去,看看效果,还可以试着写点别的规则玩玩,反正也不怕出错。 # 另外,我们还需要个测试文本文件,用来当作下面练习使用正则表达式时的字符串,因为有时长一点才能说明一些问题,所以存在一个文件中比直接写在代码里好。我们已经把这个文件放在 `assets` 自目录下,文件名是 `sample.txt`。你可以打开这个文件看看,里面有很多用来测试 *regex* 的字符串,是个不错的测试基底。 # # 下面的代码就拿这个文件做测试: with open('assets/sample.txt', 'r') as f: str = f.read() pattern = r'beg[iau]ns?' 
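# In this pattern, [iau] is a character class and s? makes the trailing 's' optional,
# so it matches 'begin', 'began', 'begun' and forms such as 'begins'.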
re.findall(pattern, str) # 第一行是 Python 操作文件的标准方法,内置函数 `open` 按照指定方式(`'r'` 表示只读模式)打开指定文件(第一个参数指定路径),并用一个文件对象(`f`)的方式给我们使用;下面用 `f.read()` 来读取文件的内容并赋给 `str` 变量。 # # 这里我们使用的 `pattern` 比前面复杂多了,你可以把 `beg[iau]ns?` 分别贴到 [Regexper](https://regexper.com/#beg%5Biau%5Dns%3F) 和 [regex101](https://regex101.com) 看看。你会看到可视化解析能很清晰的告诉我们这个规则是什么意思。所以在下面每个例子中你都可以这么做。 # # 最后 `findall()` 将所有 *matches* 输出成一个列表。 # ## 操作符,原子与优先级 # 学到这里的你已经不是“啥都不懂”的人了,你已经知道一个事实:编程语言无非是用来运算的。 # # 所谓的运算,就有**操作符**(*operators*)和**操作元**(*operands*),而操作符肯定是有优先级的,不然的话,那么多操作元和操作符放在一起,究竟先操作哪个呢?就和四则运算(括号由内向外处理、先乘除后加减、同级别从左到右等)是一个道理。 # # *Regex* 本身就是个迷你语言(*mini language*),它也有很多操作符,操作符也有优先级;而它的操作元有个专门名称,叫做**原子**(*atom*)。 # # 看看下面列出的 *regex* 操作符优先级,你会对它有相当不错的了解: # | 排列 | 原子与操作符优先级 |(从高到低)| # |---|-----------------------------------|------------------------| # | 1 | 转义符号 (Escaping Symbol) | `\` | # | 2 | 分组、捕获 (Grouping or Capturing) | `(...)` `(?:...)` `(?=...)` `(?!...)` `(?<=...)` `(?a|b|c | # | 6 | 原子 (Atoms) | `a` `[^abc]` `\t` `\r` `\n` `\d` `\D` `\s` `\S` `\w` `\W` `.` | # 下面我们来看看所有这些东西到底是什么。 # + [markdown] toc-hr-collapsed=false # ## 原子 # - # *Regex pattern* 中最基本的元素被称为**原子**(*atom*),包括下面这些类型: # ### 本义字符 # 最基本的原子,就是本义字符,它们都是单个字符。 # # 本义字符包括从 `a` 到 `z`,`A` 到 `Z`,`0` 到 `9`,还有下划线 `_`,它们所代表的就是它们的字面值。 # # 相当于 Python 中的 `string.ascii_letters`、`string.digits` 及 `_`。 # 以下字符在 *regex* 中都有特殊含义:`\` `+` `*` `.` `?` `-` `^` `$` `|` `(` `)` `[` `]` `{` `}` `<` `>`。 # # 所以你在写规则时,如果需要匹配这些字符,建议都在前面加上转义符 `\`,比如,你想匹配 `$`,那你就写 `\$`,或者想匹配 `|` 那就写 `\|`。 # # 跟过往一样,所有的细节都很重要,它们就是需要花时间逐步熟悉到牢记。 # ### 集合原子 # 原子的集合还是原子。 # # *Regex* 使用方括号 `[]` 来标示集合原子,`[abc]` 的意思就是这个位置匹配 `a` 或者 `b` 或者 `c`,即 `abc` 中任一字符。 # # 如下面的例子 [`beg[iau]n`](https://regexper.com#beg[iau]n) 能够匹配 `begin`、`began` 和 `begun`: str = 'begin began begun bigins begining' re.findall(r'beg[iau]n', str) # 在集合原子中,我们可以使用两个操作符:`-`(表示一个区间)和 `^`(表示排除)。 # # * `[a-z]` 表示从小写字母 `a` 到 `z` 中的任意一个字符。 # * `[^abc]` 表示 `abc` 以外的其它任意字符;注意,**一个集合原子中,`^` 符号只能用一次,只能紧跟在 `[` 之后**,否则不起作用。 # ### 类别原子 # 类别原子,是指能够代表“一类字符”的原子,类别有特殊转义符定义,包括下面这些: # * `\d` 任意数字;等价于 `[0-9]` # * `\D` 任意非数字;等价于 `[^0-9]` # * `\w` 任意本义字符;等价于 `[a-zA-Z0-9_]` # * `\W` 任意非本义字符;等价于 `[^a-zA-Z0-9_]` # * `\s` 任意空白;相当于 `[ \f\n\r\t\v]`(注意,方括号内第一个字符是空格符号) # * `\S` 任意非空白;相当于 `[^ \f\n\r\t\v]`(注意,紧随 `^` 之后的是一个空格符号) # * `.` 除 `\r` `\n` 之外的任意字符;相当于 `[^\r\n]` # # 类别原子挺好记的,如果你知道各个字母是哪个词的首字母的话: # * `d` 是 *digits*; # * `w` 是 *word characters*; # * `s` 是 *spaces*。 # # 另外,在空白的集合 `[ \f\n\r\t\v]` 中:`\f` 是分页符;`\n` `\r` 是换行符;`\t` 是制表符;`\v` 是纵向制表符(很少用到)。各种关于空白的转义符也同样挺好记忆的,如果你知道各个字母是那个词的首字母的话: # * `f` 是 *flip*; # * `n` 是 *new line*; # * `r` 是 *return*; # * `t` 是 *tab*; # * `v` 是 *vertical tab*。 str = '
    (843) 542-4256
    (431) 270-9664
    ' re.findall(r'\d\d\d\-', str) # ### 边界原子 # 边界原子用来指定“边界”,有时也叫“定位操作符”: # * `^` 匹配被搜索字符串的开始位置; # * `$` 匹配被搜索字符串的结束位置; # * `\b` 匹配单词的边界;[`er\b`](https://regexper.com#er%5Cb),能匹配 `coder` 中的 `er`,却不能匹配 `error` 中的 `er`; # * `\B` 匹配非单词边界;[`er\B`](https://regexper.com#er%5CB),能匹配 `error` 中的 `er`,却不能匹配 `coder` 中的 `er`。 str = 'never ever verb never however everest' re.findall(r'er\b', str) re.findall(r'er\B', str) re.findall(r'nev', str) re.findall(r'^nev', str) # `^` 和 `$` 在 Python 语言中也可以用 `\A` `\Z`。事实上,每种语言或多或少都对 *regex* 有自己的定制,不过本章讨论的绝大多数细节,都是通用的。 # ### 组合原子 # 我们可以用圆括号 `()` 将多个单字符原子组合成一个原子,这么做的效果是 `()` 内的字符串将被当作一整个原子,可以被随后我们要讲解的数量操作符操作。这个语法叫做**分组**或者**组合**(*grouping*)。 # # 另外,`()` 这个操作符,除了用于 *grouping*,还用于**捕获**(*capturing*),前面看到过,后面还会讲。 # # 所以现在你应该可以分清楚,[`er`](https://regexper.com#er)、[`[er]`](https://regexper.com#[er]) 和 [`(er)`](https://regexper.com#(er) 含义各不相同。 # # > * `er` 是两个原子,`'e'` 和紧随其后的 `'r'`; # > * `[er]` 是一个原子,或者 `'e'` 或者 `'r'`; # > * `(er)` 是一个原子,`'er'`。 # # 下一节中讲到数量操作符的时候,会再次强调这点。 # ## 数量操作符 # 数量操作符有这几个:`+` `?` `*` `{n, m}`。 # # 它们是用来限定位于它们之前的原子出现的个数;不加数量操作符代表出现一次且仅出现一次,而加上数量操作符的含义分别是: # # * `+` 代表前面的原子必须至少出现一次,即 出现次数 ≧ 1;例如,[`go+gle`](https://regexper.com#go+gle)可以匹配 `google` `gooogle` `goooogle` 等; # * `?` 代表前面的原子最多只可以出现一次,即 0 ≦ 出现次数 ≦ 1;例如,[`colou?red`](https://regexper.com#colou?red)可以匹配 `colored` 或者 `coloured`; # * `*` 代表前面的原子可以不出现,也可以出现一次或者多次,即 出现次数 ≧ 0;例如,[`520*`](https://regexper.com#520*)可以匹配 `52` `520` `52000` `5200000` `520000000000` 等; # * `{n}` 代表前面的原子确定出现 n 次;`{n,}` 代表前面的原子出现至少 n 次;`{n, m}` 代表前面的原子出现至少 `n` 次,至多 `m` 次;例如,[`go{2,5}gle`](https://regexper.com#go%7B2,5%7Dgle),能匹配 `google` `gooogle` `goooogle` 或 `gooooogle`,但不能匹配 `gogle` 和 `gooooooogle`。 # # 如下例: # + with open('assets/sample.txt', 'r') as f: str = f.read() re.findall(r'go+gle', str) # - re.findall(r'go{2,5}gle', str) re.findall(r'colou?red', str) re.findall(r'520*', str) # 数量操作符是对它之前的原子进行操作的,换言之,数量操作符的操作元是操作符之前的原子。 # # 上一节提到,要注意区别:`er`、`[er]` 和 `(er)` 各不相同。 # # > * `er` 是两个原子,`'e'` 之后 `'r'` # > * `[er]` 是一个原子,或者 `'e'` 或者 `'r'`; # > * `(er)` 是一个原子,`'er'` # + str = 'error wonderer severeness' re.findall(r'er', str) # - re.findall(r'[er]', str) re.findall(r'(er)', str) # 这里我们还看不出 `er` 和 `(er)` 的区别,但是,加上数量操作符就不一样了——因为数量操作符只对它之前的那**一个**原子进行操作: re.findall(r'er+', str) re.findall(r'[er]+', str) re.findall(r'(er)+', str) # ## 或操作符 # 或操作符 `|` 是所有操作符中优先级最低的,数量操作符的优先级比它高,所以,在 `|` 前后的原子被数量操作符(如果有的话)操作之后才交给 `|` 操作。简单理解就是规则被 `|` 分成若干段,各段里出现一种就匹配成功。 # # 下面例子里,[`begin|began|begun`](https://regexper.com#begin%7Cbegan%7Cbegun) 能够匹配 `begin` 或 `began` 或 `begun`。 str = 'begin began begun begins beginn' re.findall(r'begin|began|begun', str) # **注意**:前面讲的集合原子方括号中的 `|` `()` 不会被当作特殊符号,而是被当作字符本身,没有或操作符和分组的作用。 # ## 匹配并捕获 # **捕获**(*capture*)使用的是圆括号 `()`。使用圆括号可以捕获匹配中的特定片段,并将其值暂存进一个带有索引的列表,第一个片段是 `$1`,第二个是 `$2`,依此类推。随后,我们可以在替换的过程中使用 `\1` `\2` 中来引用所保存的值。 # # 比如前面我们讲过的例子: # + str = 'The white dog wears a black hat.' pattern = r'The (white|black) dog wears a (white|black) hat.' repl = r'The \2 dog wears a \1 hat.' re.sub(pattern, repl, str) # - repl = r'The \1 dog wears a \1 hat.' re.sub(pattern, repl, str) # 有时,你并不想捕获圆括号中的内容,在那个地方你使用括号的目的只是分组(*grouping*),而非捕获,所以不希望捕获那里的值占用一个 `$` 标号,那么就在圆括号内最开头加上 `?:`: str = 'The white dog wears a black hat.' pattern = r'The (?:white|black) dog wears a (white|black) hat.' re.findall(pattern, str) repl = r'The \1 dog wears a \1 hat.' 
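# A self-contained aside (not part of the original tutorial): Python's re module
# also supports *named* capture groups, (?P<name>...), which the replacement
# template can reference as \g<name> instead of a positional \1. The sentence and
# group names below are illustrative only.
# +
import re

sentence = 'The white dog wears a black hat.'
named_pattern = r'The (?P<first>white|black) dog wears a (?P<second>white|black) hat.'
# swap the two captured colours, referring to the groups by name rather than by number
re.sub(named_pattern, r'The \g<second> dog wears a \g<first> hat.', sentence)
# -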
# 之前的一处捕获,在替换时可被多次引用 re.sub(pattern, repl, str) # 实际上在 Python 代码中使用正则表达式匹配和捕获以及随后的替换,有更灵活的方式,因为可以对那些值直接编程,`re.sub()` 中的 `repl` 参数甚至可以接收另外一个函数作为参数,具体可以查阅 Python 的[官方文档](https://docs.python.org/3/library/re.html)。 # 非捕获匹配,还有几个**预查**(*look ahead*, *look behind*)操作符。 # # 所谓 *look ahead* 和 *look behind*,意思就是匹配一个部分时看看这个部分前后是不是符合某个规则,前后的部分只进行规则检查但不参与匹配: # # * `(?=pattern)`:*look ahead positive assert*,例如 [`Windows(?=95|98|NT|2000)`](https://regexper.com#%60Windows(?=95%7C98%7CNT%7C2000)%60) 能匹配 `Windows2000` 中的 `Windows`(注意匹配的只有 `Windows`,而不是 `Windows2000`),但不能匹配 `Windows3.1` 中的 `Windows`; # * `(?!pattern)`:*look ahead negative assert*,例如 [`Windows(?!95|98|NT|2000)`](https://regexper.com#Windows(?=95%7C98%7CNT%7C2000)) 能匹配 `Windows3.1` 中的 `Windows`,但不能匹配 `Windows2000` 中的 `Windows`; # * `(?<=pattern)`:*look behind positive assert*,这是检查匹配部分之前的字符,例如 [`(?<=95|98|NT|2000)Windows`](https://regexper.com#(?%3C=95%7C98%7CNT%7C2000)Windows) 能匹配 `2000Windows` 中的 `Windows`,但不能匹配 `3.1Windows` 中的 `Windows`; # * `(? ELIZA: Good Evening. Tell me your problems. Please terminate input with a period or a question mark. # > # > PARRY: People get on my nerves sometimes. # > # > ELIZA: I am not sure I understand you fully. # > # > PARRY: You should pay more attention. # > # > ELIZA: Suppose you should pay more attention. # > # > PARRY: You're entitled to your own opinion. # > # > ELIZA: What makes you think I am entitled to my own opinion? # > # > PARRY: It bothers me just to be around people in general. # > # > ELIZA: Please go on. # > # > PARRY: I went to the track at Bay Meadows a while back. # > # > ELIZA: What does that suggest to you? # > # > PARRY: I went to the races. # > # > ...... # 这俩的完整聊天记录可以在[这里](https://tools.ietf.org/html/rfc439)看到,不知道你看了是什么感觉…… # 不少后人用 Python 重新实现了 ELIZA,比如[这个](https://github.com/jezhiggins/eliza.py),你可以执行它,然后试着阅读和理解其实现,作为之前我们自己做[练习](../p2/b-oo-4.ipynb)的持续演进。 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # https://edube.org/quiz/pe-2/pe2-summary-test-1 # the sys.stderr stream is normally associated with: # the keyboard # the screen # the printer # a null device * # + # file a.py print("a",end='') #file b.py import a print("b",end='') #file c.py print("c",end='') import a import b '''out put cab''' # - class A: def __init__(self): pass a =A(1) print(hasattr(a, 'A')) # + try: raise Exception except Exception: print("b") except BaseException: print("a") except: print("c") # + try: raise Exception except BaseException: print("a") except Exception: print("b") except: print("c") # - try: raise Exception except: print("c") except BaseException: print("a") except Exception: print("b") class A: A = 1 def __init__(self): self.a =0 print(hasattr(A, 'a')) # !pip3 --version from datetime import timedelta d = timedelta(weeks=1, days=7,hours=11) print(d*2) # + #generator class I: def __init__(self): self.s='abcd' self.i=0 def __iter__(self): return self def __next__(self): if self.i==len(self.s): raise StopIteration v=self.s[self.i] self.i+=1 return v for x in I(): print(x, end='') # - import calendar calendar.setfirstweekday(calendar.SUNDAY) print(calendar.weekheader(3)) numbers=[0,2,7,9,10] foo=map(lambda num: num**2, numbers) print(foo) print(list(foo)) numbers=[i*i for i in range(5)] foo=(filter(lambda num: num%2, numbers)) print(foo) print(list(foo)) # + class A: def __init__(self,v=2): self.v =v def set(self, 
v=1): self.v +=v return self.v a =A() print(a.v) b=a print(a.v) b.set() print(a.v) print(id(a),id(b)) '''a, b are same obj''' # + class A: A = 1 def __init__(self): self.a =0 print(hasattr(A,'a')) print(hasattr(A,'A')) # - class A: A = 1 def __init__(self): self.a =0 b=A() print(hasattr(b,'a')) print(hasattr(b,'A')) class A: def __init__(self,v): self.__a =v+1 b=A(0) print(hasattr(b,'__a')) print((b.__a)) # if there are more that one except: branches after the try: clause, we can say that:not more than one except: block will executed # print(len("\\\\")) # + print(len("\\\")) # - try: raise Exception(1,2,3) except Exception as e: print(len(e.args)) for x in open('text.txt','rt'): print('----------------------') print(x) '''read file line by line''' # + class A: pass class B(A): pass class C(B): pass print(issubclass(A,C)) print(issubclass(C,A)) # - import random a=random.choice((0,100,3)) b=random.randrange(10,100,3) c=random.randint(0,100) print(a,b,c) # + class A: def a(self): print('a') class B: def a(self): print('b') class C(B,A): def c(self): self.a() o=C() o.c() # + #closure def o(p): def q(): return '*' * p return q r = o(1) s = o(2) print(r() + s()) # + #what will be the output of the following code, located in the p.py file? print(__name__) #__main__ # - #yiled test def f (n): s = '+' for i in range(n): s += s # double up yield s for x in f(3): print(x, end=' ') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Assess income level predictions on adult census data # This notebook demonstrates the use of the `responsibleai` API to assess a model trained on census data. It walks through the API calls necessary to create a widget with model analysis insights, then guides a visual analysis of the model. # * [Launch Responsible AI Toolbox](#Launch-Responsible-AI-Toolbox) # * [Train a Model](#Train-a-Model) # * [Create Model and Data Insights](#Create-Model-and-Data-Insights) # * [Assess Your Model](#Assess-Your-Model) # * [Aggregate Analysis](#Aggregate-Analysis) # * [Individual Analysis](#Individual-Analysis) # ## Launch Responsible AI Toolbox # The following section examines the code necessary to create datasets and a model. It then generates insights using the `responsibleai` API that can be visually analyzed. # ### Train a Model # *The following section can be skipped. It loads a dataset and trains a model for illustrative purposes.* # + import sklearn import zipfile from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import ColumnTransformer from sklearn.model_selection import train_test_split import pandas as pd from lightgbm import LGBMClassifier # - # First, load the census dataset and specify the different types of features. Compose a pipeline which contains a preprocessor and estimator. 
# + from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import ColumnTransformer def split_label(dataset, target_feature): X = dataset.drop([target_feature], axis=1) y = dataset[[target_feature]] return X, y def create_classification_pipeline(X, y, target_feature): features = X.columns.values.tolist() classes = y[target_feature].unique().tolist() pipe_cfg = { 'num_cols': X.dtypes[X.dtypes == 'int64'].index.values.tolist(), 'cat_cols': X.dtypes[X.dtypes == 'object'].index.values.tolist(), } num_pipe = Pipeline([ ('num_imputer', SimpleImputer(strategy='median')), ('num_scaler', StandardScaler()) ]) cat_pipe = Pipeline([ ('cat_imputer', SimpleImputer(strategy='constant', fill_value='?')), ('cat_encoder', OneHotEncoder(handle_unknown='ignore', sparse=False)) ]) feat_pipe = ColumnTransformer([ ('num_pipe', num_pipe, pipe_cfg['num_cols']), ('cat_pipe', cat_pipe, pipe_cfg['cat_cols']) ]) # Append classifier to preprocessing pipeline. # Now we have a full prediction pipeline. pipeline = Pipeline(steps=[('preprocessor', feat_pipe), ('model', LGBMClassifier())]) return pipeline outdirname = 'responsibleai.12.28.21' try: from urllib import urlretrieve except ImportError: from urllib.request import urlretrieve zipfilename = outdirname + '.zip' urlretrieve('https://publictestdatasets.blob.core.windows.net/data/' + zipfilename, zipfilename) with zipfile.ZipFile(zipfilename, 'r') as unzip: unzip.extractall('.') target_feature = 'income' categorical_features = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'gender', 'native-country'] train_data = pd.read_csv('adult-train.csv') test_data = pd.read_csv('adult-test.csv') X_train_original, y_train = split_label(train_data, target_feature) X_test_original, y_test = split_label(test_data, target_feature) pipeline = create_classification_pipeline(X_train_original, y_train, target_feature) y_train = y_train[target_feature].to_numpy() y_test = y_test[target_feature].to_numpy() # Take 500 samples from the test data test_data_sample = test_data.sample(n=500, random_state=5) # - # Train the classification pipeline composed in the previous cell on the training data. model = pipeline.fit(X_train_original, y_train) # ### Create Model and Data Insights from raiwidgets import ResponsibleAIDashboard from responsibleai import RAIInsights # To use Responsible AI Toolbox, initialize a RAIInsights object upon which different components can be loaded. # # RAIInsights accepts the model, the full dataset, the test dataset, the target feature string, the task type string, and a list of strings of categorical feature names as its arguments. rai_insights = RAIInsights(model, train_data, test_data_sample, target_feature, 'classification', categorical_features=categorical_features) # Add the components of the toolbox that are focused on model assessment. # Interpretability rai_insights.explainer.add() # Error Analysis rai_insights.error_analysis.add() # Counterfactuals: accepts total number of counterfactuals to generate, the label that they should have, and a list of # strings of categorical feature names rai_insights.counterfactual.add(total_CFs=10, desired_class='opposite') # Once all the desired components have been loaded, compute insights on the test set. rai_insights.compute() # Finally, visualize and explore the model insights. Use the resulting widget or follow the link to view this in a new tab. 
ResponsibleAIDashboard(rai_insights) # ## Assess Your Model # ### Aggregate Analysis # The Error Analysis component is displayed at the top of the dashboard widget. To visualize how error is broken down across cohorts, use the tree map view to understand how it filters through the nodes. # ![Error Analysis tree map with "Marital Status == 2," "Capital Gain <= 1287.5," "Capital Loss <= 1494.5" path selected](./img/classification-assessment-1.png) # Over 40% of the error in this model is concentrated in datapoints of people who are married, have higher education and minimal capital gain. # # Let's see what else we can discover about this cohort. # # First, save the cohort by clicking "Save as a new cohort" on the right side panel of the Error Analysis component. # ![Cohort creation sidebar and tree map cohort creation popup](./img/classification-assessment-2.png) # To switch to this cohort for analysis, click "Switch global cohort" and select the recently saved cohort from the dropdown. # ![Popup with dropdown to shift cohort from "All data" to "Married, Low Capital Loss/Gain" accompanied by cohort statistics](./img/classification-assessment-3.png) # The Model Overview component allows the comparison of statistics across multiple saved cohorts. # # The diagram indicates that the model is misclassifying datapoints of married individuals with low capital gains and high education as lower income (false negative). # ![Bar chart of classification outcomes (true negative, true positive, false negative, false positive) compared across cohorts](./img/classification-assessment-4.png) # Looking at the ground truth statistics of the overall data and the erroneous cohort, we realize there are opposite patterns in terms of high income representation in ground truth. While the overall data is representing more individuals with actual income of <= 50K, the married individuals with low capital gains and high education represent more individuals with actual income of > 50K. Given the small size of the dataset and this reverse pattern, the model makes more mistakes in predicting high income individuals. One action item is to collect a lot more data in both cohorts and retrain the model. # ![image-3.png](./img/classification-assessment-5.png) # ![image.png](./img/classification-assessment-6.png) # The Interpretability component displays feature importances for model predictions at an individual and aggregate level. The plot below indicates that the `marital-status` attribute influence model predictions the most on average. # ![Top 5 features of the cohort, in descending importance: relationship, age, capital gain, education-num, hours per week](./img/classification-assessment-7.png) # The lower half of this tab specifies how marita status affects model prediction. Being a husband or wife (married-civ-spouse) is more likely to pull the prediction away from <=50k, possibly because couples have a higher cumulative income. # ![Feature importance stratified by relationship](./img/classification-assessment-8.png) # ### Individual Analysis # Let's revisit Data Explorer. In the "Individual datapoints" view, we can see the prediction probabilities of each point. Point 510 is one that was just above the threshold to be classified as income of > 50K. # ![Scatter plot of prediction probabilities (rounded to 0.2) on the y-axis and index on the x-axis](./img/classification-assessment-9.png) # What factors led the model to make this decision? 
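# Before turning to the explanation components, a quick sanity check can be done
# outside the widget by querying the fitted pipeline directly for one row's class
# probabilities. The row index below is illustrative only: the datapoint index
# shown in the dashboard does not necessarily correspond to a positional index in
# `test_data_sample`.
# +
sample_row = test_data_sample.drop(columns=[target_feature]).iloc[[0]]
print(model.predict_proba(sample_row))  # class probabilities for this individual
# -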
# # The "Individual feature importance" tab in the Interpretability component's Feature Importances section let you select points for further analysis. # ![Table of datapoints with row 510 selected](./img/classification-assessment-10.png) # Under this, the feature importance plot shows `capital-gain` and `native-country` as the most significant factors leading to the <= 50K classification. Changing these may cause the threshold to be crossed and the model to predict the opposite class. Please note that depending on the context, the high importance of `native-country` might be considered as a fairness issue. # ![Feature importance plot for classification of 0 (descending, positive to negative): age, hours per week, capital gain, race, education-num, workclass, sex, country, occupation, marital status, relationship, capital loss](./img/classification-assessment-11.png) # The What-If Counterfactuals component focuses on how to change features slightly in order to change model predictions. As seen in its top ranked features bar plot, changing this person's marital-status, capital-loss, and education-num have the highest impact on flipping the prediction to > 50K. # ![Top-ranked features (descending) for datapoint 510 to perturb to flip model prediction: age, hours per week, capital gain, capital loss, marital status, occupation, education-num, workclass, relationship, race, sex, country](./img/classification-assessment-12.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Welcome to Python # # Python is a programming language which can be used to make computer games, cross platform softwares, website applications and much more. # # # ## Download and Installation # # Visit the offical website - https://www.python.org/ # # # ## Math Basics # + # Addition 3 + 4 # + # Multiplication 5 * 20 # + # Bodmas 8 + 2 * 10 # + # Paranthesis (8 + 2) * 10 # + # Float Division # Returns the actual division output 18 / 4 # + # Integral Division # Returns the integer part of the division 18 // 4 # + # Remainder 18 % 4 # + # Exponent 5 ** 3 # + [markdown] slideshow={"slide_type": "subslide"} # ## Input/Output # + slideshow={"slide_type": "fragment"} # Takes some input from the user x = input('Input : ') # Prints out the same as output print('Output : ' + x) # - # ## Data Types # ### Integers x = 5 y = 5 x + y 5 ** x # ### Strings 'String is comprised of characters in quotes' "It also works in double quotes" # + # 'I don't think if this works' # + # Escaping character 'I don\'t think this will create any problem.' # + # Use double quotes to remove this type of problem "Just don't use single quotes!!" # + # Formatting Characters # Tab - '\t' # Newline - '\n' print('C:\Desktop\newfolder') # + # Use r to consider the raw string i.e. 
without any formatting print(r'C:\Desktop\newfolder') # + # String multiplication x = 'String ' x * 5 # + # String Concatenation 'Rajat ' + 'Goyal' # + # Integer to String Conversion quantity = 5 denomination = " Rupees" str(quantity) + denomination # + name = '' # Substring of a string by defining the range name[3:8] # - # Substring by defining the start index name[6:] # Substring by defining the end index name[:8] # Substring without any index specified will return the original string name[:] # + # Print function which outputs the value on the console screen print("Hello World") # + # Length of a string - Blankspace also counts as a character len("Hello World") # - # ### Characters # + name = "" # Access characters of string which is 0-indexed using square brackets name[0] # - # Access characters from the last using -1 indexing name[-1] # ### Lists # + # Define a list of primitive data types marks = [49, 53, 87, 65, 71] # + # Access the list which is 0-indexed i.e. (0, 1, 2, ... , n-1) marks[0] # + # Change any value in the list marks[2] = 57 marks # + # Add new elements to the list updatedMarks = marks + [84, 68] updatedMarks # + # Add elements one at a time marks.append(91) marks # + # Sublist of a list marks[:2] # + # Change sublist to a new list marks[:2] = [0] marks # - # ### Sets # # Sets do not contain any element twice subjects = {'Mathematics', 'Physics', 'Chemistry', 'English', 'Economics'} print(subjects) if 'Physics' in subjects: print("It's gonna be fun!!") else: print('-_-') # ### Dictionaries # # Dictionary is a set of key value pair # + students = {59:'Rajat', 64:'Rohan', 66:'Rohit'} print(students) print(students[64]) # + # Iterate over all elements of the dictionary for key, value in students.items(): print(str(key) + " : " + value) # - # ## Loops # ### If # + age = 27 if age >= 18: print("I can vote now!") # - # ### If-Else # + name = "Rajat" if name is "Rajat": print('Hello Lord!') else: print("I don't know ya") # - # ### If - Else If - Else # + num = 18 if num % 15 == 0: print("FooBar") elif num % 5 == 0: print("Foo") elif num % 3 == 0: print("Bar") else: print("Nothing") # - # ### For # + foods = ['bacon', 'tuna', 'ham', 'beef'] for f in foods: print(f) print(len(f)) # - # ### Range # 1 argument - [0,n-1] for x in range(5): print(x) # 2 arguments - [a, b-1] for x in range(1, 6): print(x) # 3 arguments - arithmetic progression with 3rd argument as the difference for x in range(0, 25, 5): print(x) # ### While # + num = 5 while num < 10: print("Num :", num) num += 1 # - # ## Flow Control # ### Break # + number = 5 for n in range(11): if n is number: print("Found the number!!") break print('-') # - # ### Continue # + number = 5 for n in range(11): if n is number: print("Skip the number!!") continue print(n) # - # ## Comments # ### Single Line # + # This is a single line comment # - # ### Multi Line # + print('Hello') ''' Triple quotes are used for - multi - line - comments ''' print('world') # - # ## Functions # + # Defining a function def sayHello(): print('Hello World!') # Calling the function sayHello() # - # ### Parameters # + # Single Parameter def feet_to_inches(feet): inches = 12 * feet print(str(feet) + " feet = " + str(inches) + " inches") feet_to_inches(5) feet_to_inches(6.2) # + # Multiple Parameters def calculateSimpleInterest(amount, rate, time): interest = (amount * rate * time) / 100 print(interest) calculateSimpleInterest(1250, 5, 3) # - # ### Return Values # + # Return statement def calculatePercentage(obtainedMarks, totalMarks): percentage = 
(obtainedMarks / totalMarks) * 100 return percentage print('I scored ' + str(calculatePercentage(323, 400)) + "%") # - # ### Variable Scope # + a = 25 # Global variable def funcA(): b = 30 # Local variable print(a) # Prints out the global variable print(b) # Prints out the local variable def funcB(): print(a) # Prints out the global variable # print(b) # Variable b is not accessible here funcA() funcB() # - # ### Default Arguments # + # Default value for the parameter 'sex' is 'Unknown' # This means that if no value is passed for the parameter 'sex', default value will be taken def get_gender(sex='Unknown'): if sex is 'm': sex = 'Male' elif sex is 'f': sex = 'Female' print(sex) get_gender('m') get_gender('f') get_gender() # - # ### Keyword Arguments # + # Default value of 'time' is '1 year' # With the help of keyword arguments, we can change specific arguments also def calculateSimpleInterest(amount=1000, rate=2, time=1): interest = (amount * rate * time) / 100 print(interest) calculateSimpleInterest() # Default arguments calculateSimpleInterest(1200, 5, 2) calculateSimpleInterest(time=2) # Specific value for parameter 'time' only calculateSimpleInterest(time=2, rate=5) # No need to maintain the correct order # - # ### Flexible Number of Arguments # + # Use asterisk(*) to specify unknown number of arguments def add_numbers(*args): total = 0 for a in args: total += a print(total) add_numbers() add_numbers(3) add_numbers(1, 2, 3, 4, 5) # - # ### Unpacking Arguments # + # Asterisk(*) does the unpacking of list into arguments def calculateSimpleInterest(amount=1000, rate=2, time=1): interest = (amount * rate * time) / 100 print(interest) data = [1200, 5, 2] calculateSimpleInterest(data[0], data[1], data[2]) calculateSimpleInterest(*data) # - # ## Modules # # With the help of modules, there is no need to rewrite functions over and over again. # + # Create a file test.py which acts as a module # Import the module(thus importing all functions in it) import test # Call the function using the module name test.hello() # + import random x = random.randrange(1, 1000) print(x) # + # From 'test' module, import specific function 'hello' from test import hello hello() # - # ### Download an image from the web # + import urllib.request def download_web_image(url): urllib.request.urlretrieve(url, 'images/image.png') download_web_image("https://www.python.org/static/opengraph-icon-200x200.png") # - # ### Read and write files # + fw = open('sample.txt', 'w') for i in range(5): fw.write(str(i) + '\n') fw.close() # + fr = open('sample.txt', 'r') data = fr.read() print(data) fr.close() # - # ## Exception while True: try: x = int(input('\nEnter a number \n')) print(50/x) break except ValueError: # If input contains anything other than digits print('Oops! 
Please enter a number only.') except ZeroDivisionError: # If input is equal to zero print("Don't pick zero") except: # All other errors will be caught by this method break finally: # This is called in every case print('Loop complete') # ## Classes and Objects class Class: variable = 10 def incrementVariable(self): self.variable += 1 def decrementVariable(self): self.variable -= 1 def printVariable(self): print(self.variable) # + # Creating an object obj = Class() # Calling the member functions of the class using the object obj.printVariable() obj.incrementVariable() obj.printVariable() obj.decrementVariable() obj.printVariable() # + # Different objects have different memory blocks associated for class member variables obj1 = Class() obj2 = Class() obj1.incrementVariable() obj2.decrementVariable() obj1.printVariable() obj2.printVariable() # - # ### init function class Class: # Constructor helps in constructing objects def __init__(self, x=10): self.variable = x def printVariable(self): print(self.variable) # + # Initiating an object will first call the constructor i.e init function obj = Class() obj1 = Class(32) obj.printVariable() obj1.printVariable() # - # ### Class Variables vs Instance Variables # + class Girl: # Class variables are shared by all instances of the class gender = 'female' def __init__(self, name): # For each object of the class, instance variables are different self.name = name r = Girl('name1') s = Girl('name2') print(r.gender) print(s.gender) print(r.name) print(s.name) # - # ## Inheritance # + class Parent(): def print_last_name(self): print('Last Name') class Child(Parent): def print_first_name(self): print('First Name') child = Child() child.print_first_name() # We can call the parent class functions also on any object of child class child.print_last_name() # - # ### Overriding # + class Parent(): def print_last_name(self): print('Last Name') class Child(Parent): def print_first_name(self): print('First Name') # Overriding a function already present in the parent class def print_last_name(self): print('Another Last Name') child = Child() child.print_first_name() # New function defined in child class overrides the parent function child.print_last_name() # - # ## Multiple Inheritance # # Inherting the functions from two or more classes # + class A(): def funcA(self): print('Function of class A') class B(): def funcB(self): print('Function of class B') # Multiple inheritance class C(A, B): # If no function or class variable in present in class, we have to write the keyword 'pass' pass c = C() c.funcA() c.funcB() # - # ## Threading # + import threading class Messenger(threading.Thread): def run(self): for _ in range(10): print(threading.currentThread().getName()) x = Messenger(name = 'Send out messages') y = Messenger(name = 'Receive messages') x.start() y.start() # - # ### Unpacking Lists # + details = ['Rajat', 'Goyal', 20] # Unpacking a list into different variables first_name, last_name, age = details print(first_name) print(last_name) print(age) # + # In the above example, we knew the number of elements(attributes) present in the list # We can use asterisk(*) to store the remaining elements of the list marks = [23, 26, 76, 45, 59, 38, 94, 43] first, *remaining, last = marks print(first) print(remaining) print(last) # - # ### Zip function # # Attach two lists together # + first = ['Bruce', 'Tom', 'Taylor'] last = ['Wayne', 'Hanks', 'Swift'] names = zip(first, last) for a, b in names: print(a, b) # - # ### Lambda function # # function_name = lambda parameter: 
computation answer = lambda x: x*5 print(answer(5)) # ### Min, Max and Sorting Dictionaries # + marks = { 'physics' : 93, 'mathematics' : 95, 'chemistry' : 93, 'english' : 90, 'physical education' : 98 } # Minimum according to the alphabetical order of keys print(min(zip(marks.keys(), marks.values()))) # Minimum according to the numerical value print(min(zip(marks.values(), marks.keys()))) # Maximum of all the values print(max(zip(marks.values(), marks.keys()))) # Sorted list of tuples print(sorted(zip(marks.values(), marks.keys()))) # - # ### Struct # + from struct import * # Store as bytes data packed_data = pack('iif', 6, 19, 4.73) print(packed_data) # - print(unpack('iif', packed_data)) # ### Map function # + income = [10, 30, 75] def double_money(money): return money * 2 new_income = list(map(double_money, income)) print(new_income) # - # ## Bitwise Operators # ### AND # + a = 50 # 110010 b = 25 # 011001 c = a & b # 010000 print(c) # - # ### OR # + a = 50 # 110010 b = 25 # 011001 c = a | b # 111011 print(c) # - # ### Left and Right Shift # + a = 240 # 11110000 # Left shift b = a << 2 # 1111000000 print(b) # Right shift b = a >> 2 # 00111100 print(b) # - # ### Finding Largest or Smallest Items # + import heapq grades = [32, 43, 654, 34, 132, 66, 99, 532] print(heapq.nlargest(3, grades)) # + stocks = [ {'ticker': 'AAPL', 'price': 201}, {'ticker': 'GOOG', 'price': 800}, {'ticker': 'F', 'price': 54}, {'ticker': 'MSFT', 'price': 313}, {'ticker': 'YHOO', 'price': 68}, ] print(heapq.nsmallest(2, stocks, key=lambda stock: stock['price'])) # - # ### Finding most frequent items # + from collections import Counter text = "Python can be easy to pick up whether you're a first time programmer or you're experienced with other languages. The following pages are a useful first step to get on your way writing programs with Python!" 
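# Split the sentence into whitespace-separated words, count them with Counter, and
# take the three most frequent; most_common(3) returns a list of (word, count) tuples.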
words = text.split() counter = Counter(words) top_three = counter.most_common(3) print(top_three) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import datetime import pytz import grpc import baseline_optimizer_pb2 import baseline_optimizer_pb2_grpc import xbos_services_getter as xsg # + XBOS_MICROSERVICES_HOST_ADDRESS="ms.xbos.io:9001" temperature_bands_stub = xsg.get_temperature_band_stub(XBOS_MICROSERVICES_HOST_ADDRESS) occupancy_stub = xsg.get_occupancy_stub(XBOS_MICROSERVICES_HOST_ADDRESS) building_zone_names_stub = xsg.get_building_zone_names_stub(XBOS_MICROSERVICES_HOST_ADDRESS) channel = grpc.insecure_channel("localhost:50050") baseline_optimizer_stub = baseline_optimizer_pb2_grpc.BaselineOptimizerStub(channel) # + building = "ciee" zones = xsg.get_zones(building_zone_names_stub,building) start = pytz.timezone("US/Pacific").localize(datetime.datetime(year=2019, month=6, day=5, hour=9, minute=0)) end = start + datetime.timedelta(hours=1) start = start.replace(microsecond=0) end = end.replace(microsecond=0) start_unix = int(start.timestamp() * 1e9) end_unix = int(end.timestamp() * 1e9) window = "1h" unit = "F" occupancy = False do_not_exceed = True max_zones = int(len(zones)/2) include_all_zones = True starting_temperatures = {} expansion_degrees = {} comfort_band = {} occupancy_prop = {} do_not_exceed_band = {} t = 84 for zone in zones: t += 1 starting_temperatures[zone] = t expansion_degrees[zone] = 10.0 comfort_band[zone] = xsg.get_comfortband(temperature_bands_stub,building,zone,start,end,window).iloc[0] occupancy_prop[zone] = xsg.get_occupancy(occupancy_stub,building,zone,start,end,window).iloc[0] do_not_exceed_band[zone]= xsg.get_do_not_exceed(temperature_bands_stub,building,zone,start,end,window).iloc[0] print(comfort_band) print(do_not_exceed_band) print(occupancy_prop) normal_actions = baseline_optimizer_stub.GetNormalScheduleAction(baseline_optimizer_pb2.NormalScheduleRequest(building=building,zones=zones,start=start_unix,end=end_unix,window=window,starting_temperatures=starting_temperatures,unit=unit,occupancy=occupancy,do_not_exceed=do_not_exceed)) print("normal",normal_actions.actions) expansion_actions = baseline_optimizer_stub.GetSetpointExpansionAction(baseline_optimizer_pb2.SetpointExpansionRequest(building=building,zones=zones,start=start_unix,end=end_unix,window=window,starting_temperatures=starting_temperatures,unit=unit,occupancy=occupancy,do_not_exceed=do_not_exceed,expansion_degrees=expansion_degrees)) print("expansion",expansion_actions.actions) demand_charge_actions = baseline_optimizer_stub.GetDemandChargeAction(baseline_optimizer_pb2.DemandChargeRequest(building=building,zones=zones,start=start_unix,end=end_unix,window=window,starting_temperatures=starting_temperatures,unit=unit,occupancy=occupancy,do_not_exceed=do_not_exceed,max_zones=max_zones,include_all_zones=include_all_zones)) print("demand",demand_charge_actions.actions) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from typing import List, Tuple from tqdm import tqdm from scipy import linalg import matplotlib.pyplot as plt import numpy as np import random import math file_names = ["1", "2", "3", "4", "5", "6", "7"] path = "resources/" extension = ".txt" 
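# Assemble the full path "resources/<name>.txt" for each of the seven dataset files.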
files = [path + file_name + extension for file_name in file_names] # my_files = list(files[1:]) # + class Dataset(object): def __init__(self, x_train, y_train, x_test, y_test): self.x_train = np.array(x_train) self.y_train = np.array(y_train) self.x_test = np.array(x_test) self.y_test = np.array(y_test) def attributes_num(self): return self.x_train.shape[1] class Graph(object): def __init__(self, x_axis=[], y_axis=[]): self.x_axis = x_axis self.y_axis = y_axis def add_point(self, x, y): self.x_axis.append(x) self.y_axis.append(y) # - def read_dataset_from_file(filename): file = open(filename, 'r') m = int(file.readline()) n_train = int(file.readline()) objects_train = [] labels_train = [] rows = [] for _ in range(n_train): obj = [float(x) for x in file.readline().split()] rows.append(obj) for row in rows: label = row.pop() objects_train.append(row + [1.]) labels_train.append(label) n_test = int(file.readline()) objects_test: List[List[int]] = [] labels_test: [List[int]] = [] rows = [] for _ in range(n_test): obj = [float(x) for x in file.readline().split()] rows.append(obj) for row in rows: label = row.pop() objects_test.append(row + [1.]) labels_test.append(label) file.close() return Dataset(objects_train, labels_train, objects_test, labels_test) def nrmse(X, y, w): pred = X @ w sum_errors = sum([(y1 - y2) ** 2 for (y1, y2) in zip(y, pred)]) return np.sqrt(sum_errors / len(y)) / (np.max(y) - np.min(y)) # + def compute_stochastic_gradient(X_i, y_i, w, lambda_reg): return 2 * (np.dot(w, X_i) - y_i) * X_i + 2 * lambda_reg * w def learning_rate(grad, X_i, y_i, w): scalar_products = np.transpose(X_i @ grad.T) s = np.sum(np.square(scalar_products)) if s == 0: return 0 else: return np.dot(np.dot(X_i, w.T) - y_i, scalar_products) / s # - def do_gradient(x, y, steps_limit, lambda_reg=1): graph = Graph([], []) m = len(x[0]) - 1 w = np.array([random.uniform(-1./(2 * m), 1./(2 * m)) for _ in range(m + 1)]) for step in range(steps_limit): i = random.randint(0, len(x) - 1) X_i = x[i] y_i = y[i] grad = compute_stochastic_gradient(X_i, y_i, w, lambda_reg) mu = learning_rate(grad, X_i, y_i, w) / (step + 1) w = w * (1 - mu * lambda_reg) - grad * mu err = nrmse(x, y, w) graph.add_point(step, err) return graph # + def gradient_with_lambda(x, y, lambda_reg): graph = do_gradient(x, y, 2000, lambda_reg) err = graph.y_axis[-1] return err, graph def handle_file(file): dataset = read_dataset_from_file(file) lambda_regs = [0, 1e-5, 1e-2, 1e-1] + list(range(1, 100)) best_graph = Graph([], []) best_err = 9999999 best_lambda = 0 for lambda_reg in tqdm(lambda_regs): cur_err, _ = gradient_with_lambda(dataset.x_train, dataset.y_train, lambda_reg) if cur_err < best_err: best_err = cur_err best_lambda = lambda_reg print(f'Best lambda: {best_lambda}') _, graph1 = gradient_with_lambda(dataset.x_train, dataset.y_train, best_lambda) _, graph2 = gradient_with_lambda(dataset.x_test, dataset.y_test, best_lambda) plt.plot(graph1.x_axis, graph1.y_axis) plt.show() plt.plot(graph2.x_axis, graph2.y_axis) plt.show() # - handle_file(files[0]) handle_file(files[1]) handle_file(files[2]) handle_file(files[3]) handle_file(files[4]) handle_file(files[5]) handle_file(files[6]) # + def lsm_regularized(X, y, lambda_reg): n = X.shape[1] r = np.linalg.matrix_rank(X) U, sigma, VT = linalg.svd(X, full_matrices=False) coefs = ((sigma[:r] ** 2) / (sigma[:r] ** 2 + lambda_reg)) D = np.diag(np.hstack([coefs, np.zeros(n - r)])) V = VT.T UT = U.T w = V.dot(D).dot(UT).dot(y) # print(np.dot(X, w.T)) # print(f'y:{y}') return nrmse(X, y, w) def 
handle_file_lsm(file): dataset = read_dataset_from_file(file) lambda_regs = [0, 1e-5, 1e-2, 1e-1] + list(range(1, 100)) best_lambda = 0 best_err = 9999999 for lambda_reg in tqdm(lambda_regs): err = lsm_regularized(dataset.x_train, dataset.y_train, lambda_reg) if err < best_err: best_err = err best_lambda = lambda_reg print(f'Best lambda: {best_lambda}, Best nrmse: {best_err}') return best_lambda # - handle_file_lsm(files[0]) handle_file_lsm(files[1]) handle_file_lsm(files[2]) handle_file_lsm(files[3]) handle_file_lsm(files[4]) handle_file_lsm(files[5]) handle_file_lsm(files[6]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This notebook shows how to use the simple model serving functionality found in `tfe.serving`. # # Setup # + from collections import OrderedDict import tensorflow as tf import tf_encrypted as tfe # - # Turn on TFE profling flags since we want to inspect with TensorBoard. # + TENSORBOARD_DIR = "/tmp/tensorboard" tfe.set_tfe_trace_flag(True) tfe.set_tfe_events_flag(True) tfe.set_log_directory(TENSORBOARD_DIR) # !rm -rf {TENSORBOARD_DIR} # - # # Protocol # # We first configure the protocol we will be using, as well as the servers on which we want to run it. # # Note that the configuration is saved to file as we will be needing it in the client as well. # + players = OrderedDict([ ('server0', 'localhost:4000'), ('server1', 'localhost:4001'), ('server2', 'localhost:4002'), ]) config = tfe.RemoteConfig(players) config.save('/tmp/tfe.config') # - tfe.set_config(config) tfe.set_protocol(tfe.protocol.Pond()) # ## Launching servers # # Before actually serving the computation below we need to launch TensorFlow servers in new processes. Run the following in three different terminals. You may have to allow Python to accept incoming connections. for player_name in players.keys(): print("python -m tf_encrypted.player --config /tmp/tfe.config {}".format(player_name)) # # Computation # # We then define the computation we want to run. These will happen on private tensors on the servers defined above. # # Note that the only single-tensor computations are currently supported. # + input_shape = (5, 5) output_shape = (5, 5) def computation(x): return x * x # - # We can then set up a new `tfe.serving.QueueServer` to serve this computation. server = tfe.serving.QueueServer( input_shape=input_shape, output_shape=output_shape, computation_fn=computation) # # Serving # # With all of the above in place we can finally connect to our servers, push our graph to them, and start serving computations. # + sess = tfe.Session(disable_optimizations=True) tf.Session.reset(sess.target) # + def step_fn(): print("Next") server.run( sess, num_steps=5, step_fn=step_fn) # - # At this point we switch to the client notebook to run computations. 
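# A rough sketch of what that client notebook is expected to do. NOTE: the
# client-side names used here (tfe.RemoteConfig.load, and tfe.serving.QueueClient
# with a run(sess, x) method) are assumptions made by analogy with the QueueServer
# API above; verify them against the tf_encrypted.serving documentation before
# relying on this.
# +
import numpy as np
import tf_encrypted as tfe

client_config = tfe.RemoteConfig.load('/tmp/tfe.config')  # assumed loader for the saved config
tfe.set_config(client_config)
tfe.set_protocol(tfe.protocol.Pond())

client = tfe.serving.QueueClient(   # assumed client counterpart of QueueServer
    input_shape=(5, 5),
    output_shape=(5, 5))

client_sess = tfe.Session()
x = np.full((5, 5), 2.0)
result = client.run(client_sess, x)  # expected to return x * x, i.e. all 4.0
print(result)
# -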
from copy import deepcopy from time import time import numpy as np from HARK.utilities import plotFuncs import HARK.ConsumptionSaving.ConsumerParameters as Params from HARK.ConsumptionSaving.ConsRepAgentModel import ( RepAgentConsumerType, RepAgentMarkovConsumerType, ) # Make a quick example dictionary RA_params = deepcopy(Params.init_idiosyncratic_shocks) RA_params["DeprFac"] = 0.05 RA_params["CapShare"] = 0.36 RA_params["UnempPrb"] = 0.0 RA_params["LivPrb"] = [1.0] # Make and solve a rep agent model RAexample = RepAgentConsumerType(**RA_params) t_start = time() RAexample.solve() t_end = time() print( "Solving a representative agent problem took " + str(t_end - t_start) + " seconds." ) plotFuncs(RAexample.solution[0].cFunc, 0, 20) # Simulate the representative agent model RAexample.T_sim = 2000 RAexample.track_vars = ["cNrmNow", "mNrmNow", "Rfree", "wRte"] RAexample.initializeSim() t_start = time() RAexample.simulate() t_end = time() print( "Simulating a representative agent for " + str(RAexample.T_sim) + " periods took " + str(t_end - t_start) + " seconds." ) # Make and solve a Markov representative agent RA_markov_params = deepcopy(RA_params) RA_markov_params["PermGroFac"] = [[0.97, 1.03]] RA_markov_params["MrkvArray"] = np.array([[0.99, 0.01], [0.01, 0.99]]) RA_markov_params["MrkvNow"] = 0 RAmarkovExample = RepAgentMarkovConsumerType(**RA_markov_params) RAmarkovExample.IncomeDstn[0] = 2 * [RAmarkovExample.IncomeDstn[0]] t_start = time() RAmarkovExample.solve() t_end = time() print( "Solving a two state representative agent problem took " + str(t_end - t_start) + " seconds." ) plotFuncs(RAmarkovExample.solution[0].cFunc, 0, 10) # Simulate the two state representative agent model RAmarkovExample.T_sim = 2000 RAmarkovExample.track_vars = ["cNrmNow", "mNrmNow", "Rfree", "wRte", "MrkvNow"] RAmarkovExample.initializeSim() t_start = time() RAmarkovExample.simulate() t_end = time() print( "Simulating a two state representative agent for " + str(RAexample.T_sim) + " periods took " + str(t_end - t_start) + " seconds." 
) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Train your first neural network: basic classification from __future__ import absolute_import, division,print_function,unicode_literals # ### TensorFlow and tf.keras import tensorflow as tf from tensorflow import keras # ### Helper libraries import numpy as np import matplotlib.pyplot as plt print(tf.__version__) # ### Import the fashion MNIST dataset fashion_mnist = keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() class_names = ['T-shirt/top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle boot'] # ### Explore the data train_images.shape len(train_labels) train_labels # + #train_images[1] # + # #%matplotlib inline #import matplotlib.image as mpimg #imgplot = plt.imshow(train_images[1]) # - test_images.shape len(test_labels) # ### Preprocess the data plt.figure() plt.imshow(train_images[0]) plt.colorbar() plt.grid(False) plt.show() train_images = train_images / 255.0 test_images = test_images / 255.0 plt.figure(figsize=(10,10)) for i in range(25): plt.subplot(5,5,i+1) plt.xticks([]) plt.yticks([]) plt.grid(False) plt.imshow(train_images[i], cmap = plt.cm.binary) plt.xlabel(class_names[train_labels[i]]) plt.show() # ## Build the model # ### setup the layers model = keras.Sequential([keras.layers.Flatten(input_shape = (28,28)), keras.layers.Dense(128,activation = tf.nn.relu), keras.layers.Dense(128,activation = tf.nn.relu), keras.layers.Dense(10,activation = tf.nn.softmax)]) # ### Compile the model model.compile(optimizer = 'adam', loss='sparse_categorical_crossentropy', metrics = ['accuracy']) # ### Train the model model.fit(train_images,train_labels,epochs=2) # ### Evaluate accuracy test_loss,test_acc = model.evaluate(test_images,test_labels) print('Test accuracy:', test_acc) # ### Make predictions predictions = model.predict(test_images) predictions[0] np.argmax(predictions[0]) test_labels[0] def plot_image(i, predictions_array, true_label, img): predictions_array, true_label, img =predictions_array[i],true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img, cmap = plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color = color) def plot_value_array(i,predictions_array,true_label): predictions_array,true_label = predictions_array[i], true_label[i] plt.grid(False) plt.xticks([]) plt.yticks([]) thisplot = plt.bar(range(10),predictions_array,color = '#777777') plt.ylim([0,1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') i = 0 plt.figure(figsize = (6,3)) plt.subplot(1,2,1) plot_image(i,predictions,test_labels,test_images) plt.subplot(1,2,2) plot_value_array(i,predictions,test_labels) plt.show() num_rows = 5 num_cols = 5 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols,2*num_rows)) for i in range(num_images): plt.subplot(num_rows,2*num_cols,2*i+1) plot_image(i,predictions,test_labels,test_images) plt.subplot(num_rows,2*num_cols,2*i+2) plot_value_array(i,predictions,test_labels) plt.show() image_no = 0 img = test_images[image_no] print(img.shape) img = 
(np.expand_dims(img,0)) print(img.shape) predictions_single = model.predict(img) print(predictions_single) # + plot_value_array(0,predictions_single,test_labels) plt.xticks(range(10),class_names,rotation = 45) plt.show() # - prediction_result = np.argmax(predictions_single[0]) print(prediction_result) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: nlu3 # language: python # name: nlu3 # --- # + from process_util import * from random import sample SubReddits_to_include_10 = ['relationships', 'legaladvice', 'nfl', 'pettyrevenge', 'atheismbot', 'ShouldIbuythisgame', 'ukpolitics', 'Dogtraining', 'AskHistorians', 'Anxiety'] # read jsonl dataset src = "/home/raw_reddit/tldr-training-data.jsonl" reader = jsonlines.open(src) # + def create_preprocessed_story_file_include_subreddit(input_dict, save_dir): ''' input: input_dict: input dictionary, include information about its id, content, summary etc save_dir: a directory about where to save the story files reference: https://medium.com/@datamonsters/text-preprocessing-in-python-steps-tools-and-examples-bf025f872908 here we preprocessed the content and the summary of the story by: 1) get rid of extra space tab 2) filter out those whose summary is too short/content is too short 3) delete special characters like [...] 4) [potential] Stemming (spend/spent/spends...) 5) [potential] Lemmatization (do/done/did) ''' dic_id = input_dict["id"] content = input_dict["content"] summary = input_dict['summary'] subreddit = input_dict["subreddit"] #print(type(summary.split())) if(len(summary.split()) > 3): # get rid of extra space tab content = re.sub('\s+', ' ', content).strip() summary = re.sub('\s+', ' ', summary).strip() # get rid of words inside special characterss content = re.sub("[\(\[].*?[\)\]]", "", content) summary = re.sub("[\(\[].*?[\)\]]", "", summary) filename = os.path.join(save_dir, dic_id +'_'+subreddit+ ".story") file1 = open(filename,"w") # add the subreddit information before the summary file1.writelines(content+'\n') file1.writelines('@highlight \n') # file1.writelines(subreddit+' '+summary) file1.writelines(summary) file1.close() def create_final_list(subreddit_type, input_type, input_str): filename = os.path.join("/home/cs224u/processed_10_1k"+'/'+"processed_"+subreddit_type+"/", subreddit_type + input_type + "_list.txt") # filename = os.path.join("/home/ubuntu/cs224u/processed_10_1k_mymodel"+'/'+"processed_"+subreddit_type+"/", subreddit_type + input_type + "_list.txt") print(filename) f = open(filename,"w") f.writelines(input_str) f.close() # + for i in range(len(SubReddits_to_include_10)): print(i) # choose a subreddit input_subreddit = SubReddits_to_include_10[i] # create the directory to this corresponding dataset dst = "/home/cs224u/processed_10_1k"+'/'+"processed_"+input_subreddit os.mkdir(dst) dst = "/home/cs224u/processed_10_1k"+'/'+"processed_"+input_subreddit+"/"+input_subreddit+'_story' os.mkdir(dst) #os.mkdir(dst) # get a corresponding dataset count = 0 dic_list = [] reader = jsonlines.open(src) for dic in reader: if("subreddit" in dic.keys() and dic["subreddit"] == input_subreddit and isEnglish(dic["summary"]) == True and isEnglish(dic["content"]) == True): dic_list.append(dic) # create a small dataset if needed sample_list = sample(dic_list,1200) for dic in sample_list: create_preprocessed_story_file_include_subreddit(dic, dst) #input_subreddit = "relationships" #dst = 
"/home/ubuntu/cs224u/new_relationships"+'/'+input_subreddit+'_story' result_list = os.listdir(dst) np.random.shuffle(result_list) size = len(result_list) train_list = result_list[0:int(0.8*size)-1] train_str = "\n".join(x for x in train_list) dev_list = result_list[int(0.8*size):int(0.9*size)-1] dev_str = "\n".join(x for x in dev_list) test_list = result_list[int(0.9*size): int(size)-1] test_str = "\n".join(x for x in test_list) # create three lists create_final_list(input_subreddit, "_train", train_str) create_final_list(input_subreddit, "_val", dev_str) create_final_list(input_subreddit, "_test", test_str) # - # # combine dataset and lists import os import shutil import glob # combine all data into a file root_dir = "/home/ubuntu/cs224u/processed_10_1k_mymodel" # create a processed_combine_all file dst = "/home/ubuntu/cs224u/processed_10_1k_mymodel"+'/'+"processed_combine_all" os.mkdir(dst) dest = "/home/ubuntu/cs224u/processed_10_1k_mymodel"+'/'+"processed_combine_all/"+ "combine_all_story" os.mkdir(dest) for i in range(len(SubReddits_to_include_10)): # add all information into the processed_combine_all file from input_subreddit = SubReddits_to_include_10[i] src = root_dir + "/processed_"+ input_subreddit + "/"+input_subreddit + "_story" src_files = os.listdir(src) for file_name in src_files: full_file_name = os.path.join(src, file_name) if os.path.isfile(full_file_name): shutil.copy(full_file_name, dest) dst # + # combine all subreddit_train_list.txt data into a combine_train_list.txt # create a result outfile name combine_train_list.txt # combine_train_list.txt combine_train = os.path.join(dst, "combine_train_list.txt") print(combine_train) #"/home/ubuntu/cs224u/processed_10_1k/processed_" + input_subreddit+ "/"+input_subreddit+"_train_list.txt" for i in range(len(SubReddits_to_include_10)): # add all information into the processed_combine_all file from input_subreddit = SubReddits_to_include_10[i] read_files = glob.glob("/home/ubuntu/cs224u/processed_10_1k_mymodel/processed_" + input_subreddit+ "/"+input_subreddit+"_train_list.txt") print(read_files) combine_train = os.path.join(dst, "combine_train_list.txt") with open(combine_train, "a") as outfile: for f in read_files: with open(f, "rb") as infile: outfile.writelines(infile.read()) outfile.writelines("\n") # num_lines = sum(1 for line in open("/home/ubuntu/cs224u/processed_10_1k/processed_combine_all/combine_all_story/combine_train_list.txt")) # print(num_lines) # + # combine all subreddit_train_list.txt data into a combine_train_list.txt # create a result outfile name combine_train_list.txt # combine_train_list.txt combine_train = os.path.join(dst, "combine_val_list.txt") print(combine_train) #"/home/ubuntu/cs224u/processed_10_1k/processed_" + input_subreddit+ "/"+input_subreddit+"_train_list.txt" for i in range(len(SubReddits_to_include_10)): # add all information into the processed_combine_all file from input_subreddit = SubReddits_to_include_10[i] read_files = glob.glob("/home/ubuntu/cs224u/processed_10_1k_mymodel/processed_" + input_subreddit+ "/"+input_subreddit+"_val_list.txt") print(read_files) combine_train = os.path.join(dst, "combine_val_list.txt") with open(combine_train, "a") as outfile: for f in read_files: with open(f, "rb") as infile: outfile.writelines(infile.read()) outfile.writelines("\n") # num_lines = sum(1 for line in open("/home/ubuntu/cs224u/processed_10_1k/processed_combine_all/combine_all_story/combine_val_list.txt")) # print(num_lines) # + # combine all subreddit_train_list.txt data into a 
combine_train_list.txt # create a result outfile name combine_train_list.txt # combine_train_list.txt combine_train = os.path.join(dst, "combine_test_list.txt") print(combine_train) #"/home/ubuntu/cs224u/processed_10_1k/processed_" + input_subreddit+ "/"+input_subreddit+"_train_list.txt" for i in range(len(SubReddits_to_include_10)): # add all information into the processed_combine_all file from input_subreddit = SubReddits_to_include_10[i] read_files = glob.glob("/home/ubuntu/cs224u/processed_10_1k_mymodel/processed_" + input_subreddit+ "/"+input_subreddit+"_test_list.txt") print(read_files) combine_train = os.path.join(dst, "combine_test_list.txt") with open(combine_train, "a") as outfile: for f in read_files: with open(f, "rb") as infile: outfile.writelines(infile.read()) outfile.writelines("\n") # num_lines = sum(1 for line in open("/home/ubuntu/cs224u/processed_10_1k/processed_combine_all/combine_all_story/combine_test_list.txt")) # print(num_lines) # - # # create a baseline result for the test set # create the directory to the corresponding baseline result make_dir = '/home/ubuntu/cs224u/processed_' +input_subreddit+'/baseline' os.mkdir(make_dir) # + make_dec_dir = make_dir + '/decoded' os.mkdir(make_dec_dir) make_ref_dir = make_dir + '/reference' os.mkdir(make_ref_dir) # - # get the name of the test list test_name_list = [x[:-6] for x in test_list] test_name_list reader = jsonlines.open(src) # create corresponding baseline summarization for dic in reader: if("subreddit" in dic.keys() and dic["subreddit"] == input_subreddit and isEnglish(dic["content"]) == True): if(dic["id"] in test_name_list): print(dic["id"]) create_baseline_summarization_file(dic, make_dec_dir) create_reference_file(dic, make_ref_dir) # # create an example.story (not relavant to here) for dic in reader: if("subreddit" in dic.keys() and dic["subreddit"] == "AskReddit" and isEnglish(dic["summary"]) == True and isEnglish(dic["content"]) == True ): print(dic) break dic["content"] len(dic["summary"].split()) # cwd = os.getcwd() filename = os.path.join(cwd, "example.story") file1 = open(filename,"w") file1.writelines(dic["content"]+'\n') #file1.writelines('@hightlight \n') #file1.writelines(input_dict['summary']) file1.close() dic["content"] dic["summary"] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #triangle of pattern n=int(input()) i=1 while i<=n: spaces=1 while spaces<=n-i: print(' ',end='') spaces+=1 j=1 p=i while j<=i: print(p,end='') p+=1 j+=1 p=i-1 j = 0 while p>=1: r=(2*i)-j-2 print(r,end='') p-=1 j += 1 print() i+=1 n=int(input()) i=1 while(i<=n): j=n while(j>=i): print(" ",end="") j=j-1 k=1 while(k<=2*i-1): print("*",end="") k=k+1 print(" ") i=i+1 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %reset import matplotlib.pyplot as plt import pandas as pd import os from os import listdir from os.path import isfile, join import numpy as np import csv import datetime import time from datetime import datetime from datetime import timedelta import dill ###set working directory os.chdir('insert directory here') dill.load_session(os.getcwd() + '/clean_webinar_data.db') data=webinar_data_all #specify timestamps with 1 minute frequency for each conference day #Day 1 
start1=pd.Timestamp(2020, 5, 18, 10, 0, 0) end1=pd.Timestamp(2020, 5, 18, 13, 15, 0) timestamps1 = pd.date_range(start=start1,end=end1,freq='min') #Day 2 start2=pd.Timestamp(2020, 5, 19, 10, 0, 0) end2=pd.Timestamp(2020, 5, 19, 12, 15, 0) timestamps2 = pd.date_range(start=start2,end=end2,freq='min') #Day 3 start3=pd.Timestamp(2020, 5, 20, 10, 0, 0) end3=pd.Timestamp(2020, 5, 20, 15, 0, 0) timestamps3 = pd.date_range(start=start3,end=end3,freq='min') #Day 4 start4=pd.Timestamp(2020, 5, 21, 10, 0, 0) end4=pd.Timestamp(2020, 5, 21, 15, 0, 0) timestamps4 = pd.date_range(start=start4,end=end4,freq='min') stamps_total=timestamps1.append(timestamps2) stamps_total=stamps_total.append(timestamps3) stamps_total=stamps_total.append(timestamps4) #create list of unique participants participants = data['Participant'].tolist() participants=list(set(participants)) #create dataframe of with columns for unique participants tseries = pd.DataFrame(columns = participants) #set index to timestamps for conference days tseries=tseries.reindex(stamps_total) #set all table values to 0 for col in tseries.columns: tseries[col].values[:] = 0 #cycle through each unique participant for person in tseries.columns: # print(person) #get data for unique participant person_data = data.loc[data['Participant']==person] #cycle through each timestamp of conference for stamp in tseries.index: #cycle through each logged entry or exit stamp for unique participant for row in person_data.index: #check if conference timestamp of interest is within the range of this login if stamp > person_data.loc[row,'Join Time'] and stamp < person_data.loc[row,'Leave Time']: #list the room that this unique particpant was in at this timestamp tseries.loc[stamp,person] = person_data.loc[row,'Room'] #create columns for rooms to sum the number of participants in attendance tseries['Room1']=0 tseries['Room2']=0 tseries['Room3']=0 tseries['Room4']=0 tseries['Room5']=0 #cycle through conference timestamps and sum the number of participants in each room at the timestamp for stamp in tseries.index: tseries.loc[stamp,'Room1'] = sum(1 for i in list(tseries.loc[stamp,:]) if i == 1) tseries.loc[stamp,'Room2'] = sum(1 for i in list(tseries.loc[stamp,:]) if i == 2) tseries.loc[stamp,'Room3'] = sum(1 for i in list(tseries.loc[stamp,:]) if i == 3) tseries.loc[stamp,'Room4'] = sum(1 for i in list(tseries.loc[stamp,:]) if i == 4) tseries.loc[stamp,'Room5'] = sum(1 for i in list(tseries.loc[stamp,:]) if i == 5) #sum the attendance in all five webinar rooms tseries['Total'] = tseries['Room1']+tseries['Room2']+tseries['Room3']+tseries['Room4']+tseries['Room5'] #reformat dataframe for importing to Tableau tab_data=pd.DataFrame() room1=pd.DataFrame(tseries['Room1']) room1=room1.rename(columns={"Room1":"Quantity"}) room1['Room']='Room1' tab_data=tab_data.append(room1) room2=pd.DataFrame(tseries['Room2']) room2=room2.rename(columns={"Room2":"Quantity"}) room2['Room']='Room2' tab_data=tab_data.append(room2) room3=pd.DataFrame(tseries['Room3']) room3=room3.rename(columns={"Room3":"Quantity"}) room3['Room']='Room3' tab_data=tab_data.append(room3) room4=pd.DataFrame(tseries['Room4']) room4=room4.rename(columns={"Room4":"Quantity"}) room4['Room']='Room4' tab_data=tab_data.append(room4) room5=pd.DataFrame(tseries['Room5']) room5=room5.rename(columns={"Room5":"Quantity"}) room5['Room']='Room5' tab_data=tab_data.append(room5) #Create tableau_data export folder if it doesn't exist if not os.path.exists(os.getcwd() + '/tableau_data'): os.makedirs(os.getcwd() + '/tableau_data') #Export 
time series of room attendance and reformatted data for tableau tab_data.to_csv(os.getcwd() + '/tableau_data/tableau_data.csv',index_label='Timestamp') tseries.to_csv(os.getcwd() + '/tableau_data/tseries_data.csv',index_label='Timestamp') #Export environment as database file dill.dump_session(os.getcwd() + '/tableau_data/attendees_per_session.db') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="bOChJSNXtC9g" colab_type="text" # # Introduction to Python # + [markdown] id="OLIxEDq6VhvZ" colab_type="text" # # # In this lesson we will learn the basics of the Python programming language (version 3). We won't learn everything about Python but enough to do some basic machine learning. # # # # # # + [markdown] id="VoMq0eFRvugb" colab_type="text" # # Variables # + [markdown] id="qWro5T5qTJJL" colab_type="text" # Variables are objects in python that can hold anything with numbers or text. Let's look at how to make some variables. # + id="0-dXQiLlTIgz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="13d53c5b-6941-41ac-de44-edb4419158fb" # Numerical example x = 5 print (x) # + id="5Ym0owFxTkjo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2a99cb83-5cc5-4728-9b4a-a97a46afa7c5" # Text example x = "hello" print (x) # + id="1a4ZhMV1T1-0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="8775f617-e7d3-4a95-aa8b-7f15ed6fca6f" # Variables can be used with each other a = 1 b = 2 c = a + b print (c) # + [markdown] id="nbKV4aTdUC1_" colab_type="text" # Variables can come in lots of different types. Even within numerical variables, you can have integers (int), floats (float), etc. All text based variables are of type string (str). We can see what type a variable is by printing its type. # + id="c3NJmfO4Uc6V" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 153} outputId="619ba3a4-9589-466f-ff0d-179de822f4b5" # int variable x = 5 print (x) print (type(x)) # float variable x = 5.0 print (x) print (type(x)) # text variable x = "5" print (x) print (type(x)) # boolean variable x = True print (x) print (type(x)) # + [markdown] id="6HPtavfdU8Ut" colab_type="text" # It's good practice to know what types your variables are. When you want to use numerical operations on then, they need to be compatible. # + id="8pr1-i7IVD-h" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="409eab3a-4767-4333-c5dc-48965fd2eae2" # int variables a = 5 b = 3 print (a + b) # string variables a = "5" b = "3" print (a + b) # + [markdown] id="q4R_UF6PVw4V" colab_type="text" # # Lists # + [markdown] id="LvGsQBj4VjMl" colab_type="text" # Lists are objects in python that can hold a ordered sequence of numbers **and** text. 
# + id="9iPESkq9VvlX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0ba3299d-a49c-41a8-be8f-ea292d07099b" # Making a list list_x = [3, "hello", 1] print (list_x) # + [markdown] id="0xC6WvuwbGDg" colab_type="text" # # + id="7lbajc-zV515" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="dc63b23c-1b9f-4668-e40a-dd3168e20d4e" # Adding to a list list_x.append(7) print (list_x) # + id="W0xpIryJWCN9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="ff4556f8-bc22-47ad-880f-4e5b9f99193e" # Accessing items at specific location in a list print ("list_x[0]: ", list_x[0]) print ("list_x[1]: ", list_x[1]) print ("list_x[2]: ", list_x[2]) print ("list_x[-1]: ", list_x[-1]) # the last item print ("list_x[-2]: ", list_x[-2]) # the second to last item # + id="VSu_HNrnc1WK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="46c5d289-f7b1-4360-c832-c1332a7783ec" # Slicing print ("list_x[:]: ", list_x[:]) print ("list_x[2:]: ", list_x[2:]) print ("list_x[1:3]: ", list_x[1:3]) print ("list_x[:-1]: ", list_x[:-1]) # + id="dImY-hVzWxB4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="def2a30f-00e1-402c-bbed-2ec5476a0494" # Length of a list len(list_x) # + id="3-reXDniW_sm" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6afcd0d4-03eb-427f-9cdb-7e4053e5ba30" # Replacing items in a list list_x[1] = "hi" print (list_x) # + id="X8T5I3bjXJ0S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="388ff773-e62f-4ee6-b7a5-c67713f83d41" # Combining lists list_y = [2.4, "world"] list_z = list_x + list_y print (list_z) # + [markdown] id="ddpIO6LLVzh0" colab_type="text" # # Tuples # + [markdown] id="CAZblq7oXY3s" colab_type="text" # Tuples are also objects in python that can hold data but you cannot replace values (for this reason, tuples are called immutable, whereas lists are known as mutable). # + id="G95lu8xWXY90" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="6350ad63-0f99-482f-ad6d-41b037e058e6" # Creating a tuple tuple_x = (3.0, "hello") print (tuple_x) # + id="kq23Bej1acAP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9c763fff-9862-40ef-d045-b0cd9b6dce13" # Adding values to a tuple tuple_x = tuple_x + (5.6,) print (tuple_x) # + id="vyTmOc6BXkge" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 164} outputId="972c571e-c530-4826-f485-1d850383b12b" # Trying to change a tuples value (you can't) tuple_x[1] = "world" # + [markdown] id="UdlJHkwZV3Mz" colab_type="text" # # Dictionaries # + [markdown] id="azp3AoxYXS26" colab_type="text" # Dictionaries are python objects that hold key-value pairs. In the example dictionary below, the keys are the "name" and "eye_color" variables. They each have a value associated with them. A dictionary cannot have two of the same keys. 
# + id="pXhNLbzpXXSk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="405c35a0-2791-42ac-fd94-59567f4b6800" # Creating a dictionary goku = {"name": "Goku", "eye_color": "brown"} print (goku) print (goku["name"]) print (goku["eye_color"]) # + id="1HXtX8vQYjXa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d0a6d036-dcc7-43bc-a099-2562d770be15" # Changing the value for a key goku["eye_color"] = "green" print (goku) # + id="qn33iB0MY5dT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="76304793-cd3d-4b22-a2d0-847a9bfcf02c" # Adding new key-value pairs goku["age"] = 24 print (goku) # + id="g9EYmzMKa9YV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="ffe8ec5f-09b3-45af-c880-812428f291a5" # Length of a dictionary print (len(goku))å # + [markdown] id="B-DInx_Xo2vJ" colab_type="text" # # If statements # + [markdown] id="ZG_ICGRGo4tY" colab_type="text" # You can use if statements to conditionally do something. # + id="uob9lQuKo4Pg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="8fb241ae-d1e1-4457-92e7-019ec3737b48" # If statement x = 4 if x < 1: score = "low" elif x <= 4: score = "medium" else: score = "high" print (score) # + id="vwsQaZqIpfJ3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0ff2a53b-28e5-468c-9b20-75eee0bb528f" # If statment with a boolean x = True if x: print ("it worked") # + [markdown] id="sJ7NPGEKV6Ik" colab_type="text" # # Loops # + [markdown] id="YRVxhVCkn0vc" colab_type="text" # You can use for or while loops in python to do something repeatedly until a condition is met. # + id="OB5PtyqAn8mj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="5bbd3474-486a-4652-c82c-f63372da1f61" # For loop x = 1 for i in range(3): # goes from i=0 to i=2 x += 1 # same as x = x + 1 print ("i={0}, x={1}".format(i, x)) # printing with multiple variables # + id="6XyhCrFeoGj4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="6004d886-596a-426e-c72f-8adde0a34189" # Loop through items in a list x = 1 for i in [0, 1, 2]: x += 1 print ("i={0}, x={1}".format(i, x)) # + id="5Tf2x4okp3fH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="3423016a-b8b5-479c-f979-8ab8253d1d04" # While loop x = 3 while x > 0: x -= 1 # same as x = x - 1 print (x) # + [markdown] id="gJw-EDO9WBL_" colab_type="text" # # Functions # + [markdown] id="hDIOUdWCqBwa" colab_type="text" # Functions are a way to modularize resuable pieces of code. 
# + id="iin1ZXmMqA0y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="eaf42827-db58-4ec7-e038-7d2743e666cc" # Create a function def add_two(x): x += 2 return x # Use the function score = 0 score = add_two(x=score) print (score) # + id="DC6x3DMrqlE3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="c9360480-1553-4358-a646-4ecf74064fff" # Function with multiple inputs def join_name(first_name, last_name): joined_name = first_name + " " + last_name return joined_name # Use the function first_name = "Goku" last_name = "Mohandas" joined_name = join_name(first_name=first_name, last_name=last_name) print (joined_name) # + [markdown] id="lBLa1n54WEd2" colab_type="text" # # Classes # + [markdown] id="mGua8QnArAZh" colab_type="text" # Classes are a fundamental piece of object oriented Python programming. # + id="DXmPwI1frAAd" colab_type="code" colab={} # Create the function class Pets(object): # Initialize the class def __init__(self, species, color, name): self.species = species self.color = color self.name = name # For printing def __str__(self): return "{0} {1} named {2}.".format(self.color, self.species, self.name) # Example function def change_name(self, new_name): self.name = new_name # + id="ezQq_Fhhrqrv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="585493c2-1fcf-4b68-d4fb-d3f00c4c7433" # Making an instance of a class my_shiba = Pets(species="dog", color="orange", name="Guiness",) print (my_shiba) print (my_shiba.name) # + id="qTinlRj1szc5" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="1f9e75b8-e43a-484d-ea3e-c0123374382a" # Using a class's function my_shiba.change_name(new_name="Charlie") print (my_shiba) print (my_shiba.name) # + [markdown] id="kiWtd0aJtNtY" colab_type="text" # # Additional resources # + [markdown] id="cfLF4ktmtSC3" colab_type="text" # This was a very quick look at python and we'll be learning more in future lessons. 
If you want to learn more right now before diving into machine learning, check out this free course: [Free Python Course](https://www.codecademy.com/learn/learn-python) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + print("How old are you?", end=' ') age = input() print("How tall are you?", end=' ') height = input() print("How much do you weigh?", end=' ') weight = input() print(f"So, you're {age} old, {height} tall and {weight} heavy.") # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: PySpark # language: python # name: pysparkkernel # --- # + # https://www.kaggle.com/bryanb/fifa-player-stats-database?select=FIFA20_official_data.csv import hsfs connection = hsfs.connection() fs = connection.get_feature_store() # + from hops import hdfs from pyspark.sql.functions import lower, col, lit defaware = ['22', '21', '20', '19', '18', '17'] for year in defaware: df=spark.read.format("csv").option("header","true").load(hdfs.project_path()+"Resources/FIFA" + year + "_official_data.csv") if year == "22" or year == "21" or year == "20": df = df.drop(col("DefensiveAwareness")) df = df.toDF(*[c.lower() for c in df.columns]) df = df.toDF(*[c.replace(' ', '_') for c in df.columns]) df.coalesce(1).write.mode("overwrite").csv(hdfs.project_path()+"Resources/FIFA" + year + "_official_data_cleaned.csv",header=True) # + from pyspark.sql.functions import * # Fix 2017, which does not have a 'release_clause' column. Take the data from 2018 df1 = spark.read.format("csv").option("header","true").load(hdfs.project_path()+"Resources/FIFA17_official_data_cleaned.csv/") df2 = spark.read.format("csv").option("header","true").load(hdfs.project_path()+"Resources/FIFA18_official_data_cleaned.csv/") df1 = df1.alias('df1') df2 = df2.alias('df2') df1 = df1.join(df2, df1.id == df2.id, "left").select('df1.*', 'df2.release_clause') df1 = df1.na.fill(value="€1.0M", subset=['release_clause']) df1.show(n=2, truncate=False, vertical=True) # - spark.catalog.clearCache() df1.coalesce(1).write.mode("overwrite").csv(hdfs.project_path()+"Resources/FIFA17_official_data_cleaned.csv",header=True) hdfs.rmr(hdfs.project_path()+"Resources/FIFA17_official_data_cleaned.csv") hdfs.move(hdfs.project_path()+"Resources/FIFA17_official_data_cleaned2.csv", hdfs.project_path()+"Resources/FIFA17_official_data_cleaned.csv") # + df17 = spark.read.format("csv").option("header","true").load(hdfs.project_path()+"Resources/FIFA17_official_data_cleaned.csv/") fg = fs.create_feature_group("fifa", version=1, description="Fifa players", primary_key=['id'], statistics_config=True, online_enabled=True) fg.save(df17) # - years = ['18', '19', '20', '21', '22'] for year in years: df=spark.read.format("csv").option("header","true").load(hdfs.project_path()+"Resources/FIFA" + year + "_official_data_cleaned.csv") fg.insert(df) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="Aq7nFQZHm4Zl" # Importing necessary libraries # + id="qWU66LqPm8af" import numpy as np import matplotlib.pyplot as plt import pandas as pd from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import 
train_test_split from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import confusion_matrix, accuracy_score # + [markdown] id="umFTDaDlnPLU" # Reading the dataset with Pandas after mounting the dataset in google colab # + id="cCj4W_Q8nMEY" #reading the dataset with a dataframe df = pd.read_csv("final_preprocessed_dataset.csv") df=df.drop('Unnamed: 0',1) # + colab={"base_uri": "https://localhost:8080/", "height": 389} id="IDZMAnU1n-V4" executionInfo={"status": "ok", "timestamp": 1607024428551, "user_tz": -330, "elapsed": 1145, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="d5a76317-4127-4d4c-a31c-f92d3e5663f7" #eye balling the data df.head(5) # + [markdown] id="CU6VA4KKqfP-" # Dropping resource column and categorical APT group column # + id="MUc7p6mpqj-w" df = df.drop('Resource', 1) df =df.drop('APT Group',1) # + id="KG32i0KwonId" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1607024439041, "user_tz": -330, "elapsed": 1093, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="b799a832-9bfc-40e5-86ca-ab9afa59be7f" #eye balling information about our dataset df.head(5) df.info() # + [markdown] id="uw80LLnIpAr2" # Sklearn Train Test Split Method : Splitting the data into training and testing data. # + id="hmtvadQIozzG" #the labelled column is Apt Group and the rest is features label = np.array(df['APTGroup']) features = df.drop('APTGroup',1) # + colab={"base_uri": "https://localhost:8080/", "height": 355} id="tlfcRvnTqQj_" executionInfo={"status": "ok", "timestamp": 1606807513486, "user_tz": -330, "elapsed": 2281, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="38a0fe69-11cb-40a1-e4b1-ed11ad6de1f4" features.head() # + colab={"base_uri": "https://localhost:8080/"} id="2FppczNYwFfe" executionInfo={"status": "ok", "timestamp": 1606807513491, "user_tz": -330, "elapsed": 2275, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="d4b78da3-3464-4dbe-b883-846048c78901" label # + id="tFjI9r5CpP0L" #choosing a 70-30 split to test out the performance from sklearn.model_selection import train_test_split seed =50 X_train, X_test, y_train, y_test = train_test_split(features,label,test_size=0.30, random_state = seed) # + id="hhVRpS2SphSP" colab={"base_uri": "https://localhost:8080/", "height": 355} executionInfo={"status": "ok", "timestamp": 1606807513899, "user_tz": -330, "elapsed": 2658, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="660da4ae-b867-4ea1-97ac-23b27ae17015" X_train.head(5) # + id="b-v3szuWppsQ" colab={"base_uri": "https://localhost:8080/", "height": 355} executionInfo={"status": "ok", "timestamp": 1606807513913, "user_tz": -330, "elapsed": 2655, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="92cb8b37-eeed-4dd5-8dbb-9292d5b50110" 
X_test.head(5) # + [markdown] id="ZqF_BNh5qJAr" # Building the Random Forest Classifier # + id="enqRkSj5p4rL" rf_classifier = RandomForestClassifier( min_samples_leaf=50, n_estimators=150, bootstrap=True, oob_score=True, n_jobs=-1, random_state=seed, max_features='auto') # + id="tv6uDVvcqO5k" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1606807513918, "user_tz": -330, "elapsed": 2643, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="493749e0-a9f2-4d4a-e5f6-1ad5868a02be" rf_classifier.fit(X_train,y_train) # + id="g3xVIV_PqRZz" y_pred = rf_classifier.predict(X_test) # + id="teXAXkmQqxjH" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1606807514291, "user_tz": -330, "elapsed": 3001, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="41fda03e-4dff-464c-e443-cfa52f0506bd" print(y_pred) # + [markdown] id="eiFEQe-mq2TT" # Confusion Matrix # + id="v7hasH0xq4Tc" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1606807514293, "user_tz": -330, "elapsed": 2987, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="abc875d5-d144-4ef3-9045-8ea8ae218187" cm = confusion_matrix(y_test, y_pred) print(cm) accuracy_score(y_test, y_pred) # + [markdown] id="5MxpXYgkrDQP" # # Observation: # # Accuracy acquired about 58% accuracy. Needs more hyperparameter tuning to increase the accuracy. Maybe contributing other factors along the way too can lead to better accuracy. 
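# + [markdown]
# One common way to search hyperparameters more systematically is scikit-learn's `GridSearchCV`, which cross-validates every combination in a small grid. This is a hedged sketch only, not the tuning actually used below (which adjusts `min_samples_leaf` and `n_estimators` by hand); the grid values are illustrative, not tuned for this dataset.

# +
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Illustrative grid over the two parameters varied manually in this notebook
param_grid = {
    "n_estimators": [150, 300],
    "min_samples_leaf": [3, 10, 50],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=seed, n_jobs=-1),
    param_grid,
    cv=5,                 # 5-fold cross-validation on the training split
    scoring="accuracy",
)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
# -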
# + [markdown] id="OSZYPJubdmtM" # ## Model Attempt#2 : Hyper Parameter tunning # + id="rrHdIN9xdWB9" """ rf_classifier = RandomForestClassifier( min_samples_leaf=50, n_estimators=150, bootstrap=True, oob_score=True, n_jobs=-1, random_state=seed, max_features='auto') """ rf_classifier = RandomForestClassifier( min_samples_leaf=3, n_estimators=300, bootstrap=True, oob_score=True, n_jobs=-1, random_state=seed, max_features='auto') # + colab={"base_uri": "https://localhost:8080/"} id="c35NvrKExAw3" executionInfo={"status": "ok", "timestamp": 1606807515575, "user_tz": -330, "elapsed": 4250, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="bbb7e314-255b-4424-d423-e399a661455d" rf_classifier.fit(X_train, y_train) y_pred = rf_classifier.predict(X_test) print("accuracy ", accuracy_score(y_test, y_pred) ) # + [markdown] id="z6N1hMWE01mQ" # #Cross validation # + colab={"base_uri": "https://localhost:8080/", "height": 355} id="mSjJV2kb1J8n" executionInfo={"status": "ok", "timestamp": 1606807515580, "user_tz": -330, "elapsed": 4243, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="78f059c7-4934-467d-88ab-eb0a40a74fbf" df.head(5) # + id="TwRYQGf5Nh5R" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1606807541045, "user_tz": -330, "elapsed": 29697, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="a5a1054d-cac5-4ce7-f4bf-cfa4de7de2f1" from sklearn.model_selection import ShuffleSplit ss = ShuffleSplit(n_splits=20, test_size=0.30) accuracy=0 for train_indices, test_indices in ss.split(df): #print("Train") #print(X_train) #print("Test") #print(X_test) train = df.iloc[train_indices] y_train= np.array(train['APTGroup']) X_train= train.drop('APTGroup',1) test = df.iloc[test_indices] y_test= np.array(train['APTGroup']) X_test= train.drop('APTGroup',1) rf_classifier.fit(X_train, y_train) y_pred = rf_classifier.predict(X_test) print("accuracy ", accuracy_score(y_test, y_pred) ) accuracy+=accuracy_score(y_test, y_pred) print(accuracy/20) # + [markdown] id="UM_EDGT9QtyD" # # Observation 1: # # Accuracy acquired : 86.62 % for n_splits=20 \\ # + [markdown] id="xAgsk_Uo22E_" # # Another method for cross validation: cross_val_score # # + colab={"base_uri": "https://localhost:8080/"} id="4dPMhSjd2-rM" executionInfo={"status": "ok", "timestamp": 1606807570615, "user_tz": -330, "elapsed": 59258, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="c3cd020b-3fa7-413c-e28a-09b29fa3bd00" from sklearn.model_selection import cross_val_score from sklearn.ensemble import RandomForestClassifier print(cross_val_score(rf_classifier, features, label, cv=20, scoring ='accuracy').mean()) # + [markdown] id="XsEcvI0T3Jnx" # # Observation 2: # # Accuracy acquired : 84.2 % for n_splits=20 \\ # + [markdown] id="Byu3lM7NpFhF" # ## Important Feature Calculation # + id="fUD2NR27dgbu" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1606807571505, "user_tz": -330, "elapsed": 60121, "user": {"displayName": "", "photoUrl": 
"https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="e06727ff-1004-4f4e-b47d-8211577ae913" rf_classifier.fit(X_train,y_train) # + colab={"base_uri": "https://localhost:8080/", "height": 293} id="I3ptheS5pQOS" executionInfo={"status": "ok", "timestamp": 1606807571507, "user_tz": -330, "elapsed": 60108, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="47a8445a-96b9-47a6-dbf0-c2df52e6596c" df.head(3) # + id="XbJvnoDGq_Ub" pd.set_option("display.max.rows", None) # + id="xiQQNyVprHHd" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1606807571983, "user_tz": -330, "elapsed": 60543, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="4514c9f1-73cb-443a-e6a7-1b1f3824b29f" # Important Features feature_importance = pd.DataFrame(rf_classifier.feature_importances_, index = X_train.columns, columns=['importance']).sort_values('importance',ascending=False) feature_importance # summarize feature importance #for i,v in enumerate(importance): # print('Feature: %0d, Score: %.5f' % (i,v)) # + [markdown] id="2yeqzJAJttxh" # Exporting the results into an excel file for better view # + id="zXxomxYLtb6v" excel_data = feature_importance.copy() # + id="9ZATk1qjt9HR" excel_data.to_csv('important_features.csv', index=True) # + colab={"base_uri": "https://localhost:8080/"} id="T0T43YIJu458" executionInfo={"status": "ok", "timestamp": 1606807571994, "user_tz": -330, "elapsed": 60527, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="1859f38f-3daf-40a7-f07f-9869fc97590b" # %ls # + colab={"base_uri": "https://localhost:8080/"} id="J_FRBzYWu4RO" executionInfo={"status": "ok", "timestamp": 1606807571996, "user_tz": -330, "elapsed": 60520, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/-nAR3so_GP8s/AAAAAAAAAAI/AAAAAAAAJus/3CfFQkqOuv8/s64/photo.jpg", "userId": "17934275643265308227"}} outputId="afbf6f4e-d558-4618-a28d-bcc52cc64692" #checking with shell command if there was a proper write in the new file created # ! 
cat important_features.csv # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Includes 5 tables of which 4 are domains and 1 is performance # Instead of creation 4 data frames we use dictionary of dataframes using comprehensions # See below an example of comprehensions from https://towardsdatascience.com/dictionary-comprehensions-in-python-912a453da512 num_list = [1,1,7,3,5,3,2,9,5,1,3,2,2,2,2,2,9] count_dict = {} for num in num_list: count_dict[num] = num_list.count(num) # Create a key value pair print(count_dict) # {1: 3, 7: 1, 3: 3, 5: 2, 2: 6, 9: 2} # - # # Ok now back to healthcare dataset with HVBP # # ![HVBP.jpeg](attachment:HVBP.jpeg) # + import pandas as pd # df = pd.read_csv('./Datasets/Input2/ESRD_QIP_-_Complete_QIP_Data_-_Payment_Year_2018.csv',header=0) #D:\\Stackroute\\Phillips\\Delivery Phillips\\Track 3B_Unit_F_ML in Healthcare\\Track 3B_Unit_F_Notebooks\\Datasets\\Input2 pathname = 'D:\\Stackroute\\Phillips\\Delivery Phillips\\Track 3B_Unit_F_ML in Healthcare\\Track 3B_Unit_F_Notebooks\\Datasets\\Input2\\' files_of_interest = [ 'hvbp_tps_11_07_2017.csv', 'hvbp_clinical_care_11_07_2017.csv', 'hvbp_safety_11_07_2017.csv', 'hvbp_efficiency_11_07_2017.csv', 'hvbp_hcahps_11_07_2017.csv' ] dfs = { foi: pd.read_csv(pathname + foi, header=0) for foi in files_of_interest } # + #Let us now see the dataframe # - dfs # + # This is the combined dictionary of dataframes - Uncomment to run the cell above. # Note all 5 files from Hvbp # - #Demo of items() function used below - it returns list of tuple pairs # Recall: A tuple is a sequence of immutable Python objects. Tuples are sequences, just like lists. #The differences between tuples and lists are, the tuples cannot be changed unlike lists and tuples use parentheses, # whereas lists use square brackets. 
dict = {'Name': 'Zara', 'Age': 7} print ("Value : %s" % dict.items()) for k, v in dfs.items(): print( k + ' - Number of rows: ' + str(v.shape[0]) + ', Number of columns: ' + str(v.shape[1]) ) for v in dfs.values(): # We loop to find the common column ID just like SQL JOIN for column in v.columns: print(column) print('\n') # # Observation: Looking at the 5 tables above carefully we can see that 'Provider Number' is common to all tables # + df_master = dfs[files_of_interest[0]].merge( dfs[files_of_interest[1]], on='Provider Number', how='left', copy=False ) print(df_master.shape) # + # See initial output of cell 3 #hvbp_tps_11_07_2017.csv - Number of rows: 2808, Number of columns: 16 #hvbp_clinical_care_11_07_2017.csv - Number of rows: 2808, Number of columns: 28 # If the merge is a success we should have 16+ 28 Minus 1 rows = 44 -1 = 43 rows # - # # Observation: The merge of tps & clinical care table is now correct print(df_master.columns) # Columns of the newly joined data frames # # Now Let us join the remaining 3 tables + rename Provider_Number to Provider Number for a clean JOIN # + for df in dfs.values(): df.columns = [col if col not in ['Provider_Number'] else 'Provider Number' for col in df.columns] for num in [2,3,4]: df_master = df_master.merge( dfs[files_of_interest[num]], on='Provider Number', how='left', copy=False ) print(df_master.shape) # - # # Observation: We can again check that sum of 5 tables minus 4 rows matches for column in df_master.columns: print(column) # # We can continue to do same kind of analysis now as we did for Dialysis center example. Some are shown # Check for MSPB-1 Measure Score print(df_master.groupby('MSPB-1 Measure Score').size()) #Weighted Normalized Clinical Care Domain Score print(df_master.groupby('Weighted Normalized Clinical Care Domain Score').size()) print(df_master.groupby('Weighted Normalized Clinical Care Domain Score').size().sort_values(ascending=False)) print(df_master.head(n=1)) print(df_master.groupby('MSPB-1 Measure Score').size().sort_values(ascending=False)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Spooky Author Identification import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline # ## Import training data data = pd.read_csv("../../data/train.csv") data.info() data.groupby('author').size() data.head() # ## Number of characters data['characters'] = data['text'].str.len() data['characters'].hist(bins=range(0,1000,20)) # ## Number of words data['words'] = data['text'].str.split().apply(len) data['words'].hist(bins=range(0,200,5)) # ## Number of semicolons data['semicolons_freq'] = (data['text'].str.count(';') / data['characters'])*100 data.groupby('author')['semicolons_freq'].mean() # ## Remove punctuation and stopwords import nltk nltk.download('stopwords') stop = nltk.corpus.stopwords.words('english') from string import punctuation punctuation translate = str.maketrans(punctuation,' '*len(punctuation)) data['stripped'] = data['text'].str.translate(translate).str.lower().str.split().map(lambda x: " ".join([word for word in x if word not in stop])) # ## Fraction of unique words data['unique_words'] = data['stripped'].str.split().map(lambda x: len(set(x))/(len(x) + 1)*100) data.head() # ## Number of sentences per passage data['sentences'] = data['text'].str.count('\.') # ## Sentence length data['sentence_length'] = data['words'] / 
(data['sentences'] + 1) # ## Try a model from sklearn.model_selection import train_test_split from sklearn.metrics import log_loss data.columns features = ['characters', 'words', 'semicolons_freq','unique_words', 'sentences', 'sentence_length'] target = 'author' X = data[features] y = data[target] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) from sklearn.ensemble import RandomForestClassifier as RFC clf = RFC() clf.fit(X_train,y_train) clf.classes_ y_true = pd.get_dummies(y_test.map({'EAP':0, 'HPL': 1, 'MWS': 2})) y_pred = clf.predict_proba(X_test[features]) log_loss(y_true,y_pred) y_pred # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 2020년 9월 3일 목요일 # ### leetCode - String Compression (Python) # ### 문제 : https://leetcode.com/problems/string-compression/ # ### 블로그 : https://somjang.tistory.com/entry/leetCode-443-String-Compression-Python # ### 첫번째 시도 class Solution: def compress(self, chars: List[str]) -> int: stack = [] nums = 1 idx = 0 if len(chars) > 0: stack.append(chars[0]) for i in range(1, len(chars)): if chars[i] == stack[idx]: nums = nums + 1 if i == len(chars) - 1: num_list = list(str(nums)) idx = idx + 1 + len(num_list) stack = stack + num_list elif chars[i] != stack[idx]: if nums == 1: idx = idx + 1 num_list = list(str(nums)) if num_list != ['1']: idx = idx + 1 + len(num_list) stack = stack + num_list stack.append(chars[i]) nums = 1 for i in range(len(stack)): chars[i] = stack[i] chars = chars[0:len(stack)] return len(chars) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="Um6OuOdW4nEm" # # 載入資料 # + id="HNkMiTtGXwm5" # !gdown --id '1rKgQnFdmzUEnVMHog75myo_oJ8zFSzr1' --output 'cabbage.xlsx' # !gdown --id '1-2GdInJS8I5QOO3nJCKC9Bc6YG2j6o5m' --output 'weather.csv' # + [markdown] id="NBsbc7QU4qsl" # # 安裝套件 # + id="aJ5wwhXH_7t3" # %%capture # !pip install prophet # !pip install pmdarima # + id="ck7soj8Unlqz" import pandas as pd import numpy as np from statsmodels.tsa.stattools import adfuller from statsmodels.graphics.tsaplots import plot_acf, plot_pacf from statsmodels.tsa.seasonal import seasonal_decompose from statsmodels.tsa.statespace.sarimax import SARIMAX import pmdarima as pm from prophet import Prophet # + id="X52mtZD_ZNQ8" # %%capture # !wget -O TaipeiSansTCBeta-Regular.ttf https://drive.google.com/uc?id=1eGAsTN1HBpJAkeVM57_C7ccp7hbgSz3_&export=download import matplotlib as mpl import matplotlib.pyplot as plt from matplotlib.font_manager import fontManager fontManager.addfont('TaipeiSansTCBeta-Regular.ttf') mpl.rc('font', family='Taipei Sans TC Beta') # + [markdown] id="0P_YGTCe4wjc" # # 檢視資料 # + [markdown] id="qSrIq-mSYrEm" # ## 甘藍 # + colab={"base_uri": "https://localhost:8080/", "height": 491} id="YQMUXqLlHgvy" outputId="c19a1ab9-889e-430d-e9f6-72e946fa91ed" df_c = pd.read_excel('cabbage.xlsx', index_col='date', parse_dates=True) display(df_c.tail()) df_c.price.plot() plt.title('每日甘藍批發價格') plt.ylabel('元/公斤') plt.xlabel('') plt.show() # + id="nVykBc-An_rc" df_c['pq'] = df_c.price * df_c.quantity df_c = df_c.resample('W').sum() df_c.price = df_c.pq / df_c.quantity df_c = df_c.drop(columns=['quantity', 'pq']) # + colab={"base_uri": "https://localhost:8080/", "height": 502} id="fHrgtx7EyNAJ" 
outputId="e7939acc-4468-4ba8-e7e2-206e2c9ec4ff" display(df_c.tail()) df_c.plot() plt.title('每週甘藍批發價格') plt.ylabel('元/公斤') plt.xlabel('') plt.show() # + [markdown] id="5JbfSMSfY5L9" # ## 天氣 # + colab={"base_uri": "https://localhost:8080/", "height": 238} id="-BfA3t4OY7Wg" outputId="9ed9b140-9911-4c2b-bd3c-9f41d9206900" cols = ['ObsTime', 'Temperature', 'Precp'] df_w = pd.read_csv('weather.csv', usecols=cols, index_col='ObsTime', parse_dates=True) df_w.index.name = 'date' # ℃、mm df_w.head() # + colab={"base_uri": "https://localhost:8080/", "height": 238} id="uo-1LSFVy-G7" outputId="a0346f1b-7592-45a1-bdfd-fa6362701f7c" df_w = df_w.resample('W').mean() df_w.tail() # + [markdown] id="64xAW1dPyP6W" # ## 合併 # + colab={"base_uri": "https://localhost:8080/", "height": 255} id="-dnMJm0_ZAiT" outputId="71f82ae5-36c4-4467-b9d4-879cb3c66f86" df = pd.merge(df_c, df_w, how='outer', left_index=True, right_index=True) df = df.dropna() print(df.shape) df.tail() # + id="JN4OjDcOPuoo" # df['price'] = np.log(df.price) df['ln_price'] = np.log(df.price) # + colab={"base_uri": "https://localhost:8080/"} id="6wqkoOkwG9rG" outputId="8994e9a2-0025-4352-cf3e-2a93901e539e" print(df.isna().sum()) # + [markdown] id="Bjcvhw3GRWS1" # # 切割資料集 # + id="1kWatWTyRxZY" # Validation val_df = df[-31:] # Training df = df[:-31] # + [markdown] id="CXZABd-tYwxc" # # 資料分析 # + [markdown] id="NMRmVkccloH6" # ## Visualization # + colab={"base_uri": "https://localhost:8080/", "height": 269} id="zos1gbn_lpSU" outputId="9c0e764b-f62d-4d4b-9d5a-7de9c50b78bf" import seaborn as sns sns.heatmap(df.corr(), annot=True, cmap="YlGnBu") plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 726} id="EsmuMzHqmO0W" outputId="7a70fcb5-53ba-44c7-f111-22920ccb20e0" sns.pairplot(df) plt.show() # + [markdown] id="A10AaU8VWSiJ" # ## Decompose # + colab={"base_uri": "https://localhost:8080/", "height": 297} id="npBhZ40wHt2d" outputId="92565180-44a2-41d6-e64f-89154c6dad95" decomp_results = seasonal_decompose(df.ln_price) decomp_results.plot() plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="1JRWzunSoNjx" outputId="a2b6c175-62a9-40ad-81c4-3bb8f6971cc6" df.ln_price.plot() plt.title('每週甘藍價格(取自然對數)') plt.ylabel('ln_price') plt.show() # + [markdown] id="wS6M9otd486y" # ## ADF TEST # + colab={"base_uri": "https://localhost:8080/"} id="frPOnYbHoQHY" outputId="ce9ba1c4-5f47-4903-b5a6-7b829f7d58ba" # print(adfuller(df.price)) print(adfuller(df.ln_price)) # + [markdown] id="7IT3qstn5D1k" # ## ACF與PACF # + colab={"base_uri": "https://localhost:8080/", "height": 499} id="K7R37LN1rKC0" outputId="75060ad6-c145-465a-83f1-7409df8fe158" # Create figure fig, (ax1, ax2) = plt.subplots(2,1, figsize=(8,8)) # Make ACF plot plot_acf(df.ln_price, lags=52, zero=False, ax=ax1) # Make PACF plot plot_pacf(df.ln_price, lags=52, zero=False, ax=ax2, method='ywm') plt.show() # + [markdown] id="CAoG9LKSiwrN" # # 模型 # + [markdown] id="30nuOwv15NRN" # ## SARIMAX # $SARIMAX(p, d, q) \times (P, D, Q): Seasonal + ARIMA + Exogenous$ # # $AR(p): Y_t = \beta_0 + \beta_1Y_{t-1} + \beta_2Y_{t-2} + \dots + \beta_pY_{t-p} + u_t$ # # $MR(q): Y_t = \theta_0 + \theta_1u_{t-1} + \theta_2u_{t-2} + \dots + \theta_qu_{t-q} + u_t$ # # $ARIMR(p, d, q): (1 - \sum_{i=1}^p \beta_i L^i)(1-L)^dY_t = (1 - \sum_{i=1}^q \theta_i L^i)u_t$, where $L$ stands for Lag operator. 
# + id="lqfisCiWshoE" # result = pm.auto_arima(df.ln_price, X=df[['Temperature', 'Precp']], seasonal=True, m=12, D=1) # + id="Zr9qtdA4l-kB" # %%capture model = SARIMAX(df.ln_price, exog=df[['Temperature', 'Precp']], order=(3,0,0), seasonal_order=(2, 1, 0, 12)) result = model.fit() # + colab={"base_uri": "https://localhost:8080/"} id="WjF66VEF_XCn" outputId="a1ba840b-e7d6-4510-c958-05f5146c5cdb" print(result.summary()) # WEEK: SARIMAX(3, 0, 0)x(2, 1, 0, 12) # MONTH: SARIMAX(2, 0, 0)x(2, 1, 0, 12) # + [markdown] id="gRBBJFchHxg6" # ### 樣本內(訓練資料)預測 # + id="uFVI7paP9BQo" predicted = result.get_prediction(start=-10) mean = predicted.predicted_mean conf1 = predicted.conf_int(alpha=0.32) # 68% conf2 = predicted.conf_int(alpha=0.05) # 95% # + id="Wmubt6mefRg3" mean = np.exp(predicted.predicted_mean) conf1 = np.exp(conf1) conf1.columns = ['lower price', 'upper price'] conf2 = np.exp(conf2) conf2.columns = ['lower price', 'upper price'] # + id="x6ejCx9N9A1w" colab={"base_uri": "https://localhost:8080/", "height": 291} outputId="a39ca972-f1d1-4bdc-8dc1-690262895641" fig, ax = plt.subplots() df['2019':'2021'].price.plot(ax=ax, legend=False) df['2021':].price.plot(style=' ', ax=ax, legend=False) mean.plot() plt.axvspan(mean.index.min(), mean.index.max(), color='grey', alpha=0.3) plt.fill_between(conf1.index, conf1['lower price'], conf1['upper price'], color='xkcd:tomato red', facecolor='black') plt.fill_between(conf2.index, conf2['lower price'], conf2['upper price'], color='xkcd:tomato red', alpha=0.6, facecolor='black') plt.show() # + [markdown] id="CPGje3hnH0Nf" # ### 樣本外(測試資料)預測 # + id="7TynE1MJ0Zld" forecast = result.get_forecast(steps=len(val_df), exog=val_df[['Temperature', 'Precp']]) conf1 = forecast.conf_int(alpha=0.32) # 68% conf2 = forecast.conf_int(alpha=0.05) # 95% # + id="Nq4nfCeoHTVi" mean = np.exp(forecast.predicted_mean).append(df.price[-1:]).sort_index() lastp = df[['price', 'price']][-1:] lastp.columns = ['lower price', 'upper price'] lastp.index.name = 'ds' # + id="yTt0Czx2VYpP" conf1 = np.exp(conf1) conf1.columns = ['lower price', 'upper price'] conf1 = conf1.append(lastp).sort_index() conf2 = np.exp(conf2) conf2.columns = ['lower price', 'upper price'] conf2 = conf2.append(lastp).sort_index() # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="bu-qlK010ZdR" outputId="22801df6-fb0b-4ca5-fbbc-8fb9b9d3d32e" fig, ax = plt.subplots() df['2020':'2021'].price.plot(ax=ax, legend=False) val_df.price.plot() mean.plot() plt.axvspan(mean.index.min(), mean.index.max(), color='grey', alpha=0.3) plt.fill_between(conf1.index, conf1['lower price'], conf1['upper price'], color='xkcd:tomato red', facecolor='black') plt.fill_between(conf2.index, conf2['lower price'], conf2['upper price'], color='xkcd:tomato red', alpha=0.6, facecolor='black') ax.set_xlabel('') ax.set_ylabel('元/公斤') ax.set_ylim(0, 80) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 295} id="JjsIM0zEvorj" outputId="ae704cbe-15f2-4291-c5ae-01909543ba86" result.plot_diagnostics() # pm.auto_arima(df, seasonal=True, m=7) plt.show() # + [markdown] id="k0N-rpvSAs7Q" # ## fbporphet # $y(t) = g(t) + s(t) + h(t) + e(t)$ # # $g(t)$: trend models non-periodic changes. # # $s(t)$: seasonality presents periodic changes. # # $h(t)$: effects of holidays with irregular schedules. # # $e(t)$: covers idiosyncratic changes not accommodated by the model. 
# + [markdown] id="Wdyw9Z4RJe5K" # ### 設定資料 # + id="1918_z7HvyEr" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="cf0f0ce0-a6dd-4ece-d102-2e014197b4d8" dfp = df.reset_index() dfp.columns = ['ds', 'price', 'Temperature', 'Precp', 'ln_price'] dfp['y'] = dfp.ln_price dfp.tail() # + id="2Bw_wSuHc-yY" val_dfp = val_df.reset_index() val_dfp.columns = ['ds', 'price', 'Temperature', 'Precp', 'ln_price'] # + [markdown] id="CeZSvnKqm_Yg" # ### 樣本內(訓練資料)預測 # + id="DcvmrZy1nIRr" m = Prophet(seasonality_mode='multiplicative', daily_seasonality=False, weekly_seasonality=True) m.add_regressor('Temperature') m.add_regressor('Precp') m.fit(dfp[['ds', 'y', 'Temperature', 'Precp']]) predict = m.make_future_dataframe(periods=0, freq='W', include_history=True) predict['Temperature'] = df[['Temperature']].reset_index(drop=True) predict['Precp'] = df[['Precp']].reset_index(drop=True) fcst0 = m.predict(predict) fcst0 = fcst0.set_index(fcst0.ds, drop=True) # + [markdown] id="G29WocZcnNHF" # ### 樣本外(測試資料)預測 # + [markdown] id="iSA1kKe3Jkvk" # #### 68%信賴區間 # + id="0J5ABix5G4WD" m = Prophet(seasonality_mode='multiplicative', interval_width=0.68, daily_seasonality=False, weekly_seasonality=True) m.add_regressor('Temperature') m.add_regressor('Precp') m.fit(dfp[['ds', 'y', 'Temperature', 'Precp']]) future = m.make_future_dataframe(periods=len(val_dfp), freq='W', include_history=False) future['Temperature'] = val_dfp[['Temperature']] future['Precp'] = val_dfp[['Precp']] fcst1 = m.predict(future) fcst1 = fcst1.set_index(fcst1.ds, drop=True) # fig = m.plot(fcst) # + [markdown] id="1ZsqHbnaJroI" # #### 95%信賴區間 # + id="iKsrz5E0AvkF" m = Prophet(seasonality_mode='multiplicative', interval_width=0.95, daily_seasonality=False, weekly_seasonality=True) m.add_regressor('Temperature') m.add_regressor('Precp') m.fit(dfp[['ds', 'y', 'Temperature', 'Precp']]) future = m.make_future_dataframe(periods=len(val_dfp), freq='W', include_history=False) future['Temperature'] = val_dfp[['Temperature']] future['Precp'] = val_dfp[['Precp']] fcst2 = m.predict(future) fcst2 = fcst2.set_index(fcst2.ds, drop=True) # fig = m.plot(fcst) # + [markdown] id="JRon2JFPnV47" # #### 彙整 # + id="S9pvN_-wgDZJ" mean = np.exp(fcst1.yhat).append(df.price[-1:]).sort_index() conf1 = np.exp(fcst1[['yhat_lower', 'yhat_upper']]) conf1.columns = ['lower price', 'upper price'] conf1 = conf1.append(lastp).sort_index() conf2 = np.exp(fcst2[['yhat_lower', 'yhat_upper']]) conf2.columns = ['lower price', 'upper price'] conf2 = conf2.append(lastp).sort_index() # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="Wit6TtZ4gTMu" outputId="3f8c20d8-50e1-41f0-9055-2a13bd9a32fa" fig, ax = plt.subplots() df['2020':'2021'].price.plot(ax=ax, legend=False) val_df.price.plot() mean.plot() plt.axvspan(mean.index.min(), mean.index.max(), color='grey', alpha=0.3) plt.fill_between(conf1.index, conf1['lower price'], conf1['upper price'], color='xkcd:tomato red', facecolor='black') plt.fill_between(conf2.index, conf2['lower price'], conf2['upper price'], color='xkcd:tomato red', alpha=0.6, facecolor='black') ax.set_xlabel('') ax.set_ylabel('元/公斤') ax.set_ylim(0, 80) plt.show() # + [markdown] id="182Sp-MjZ6Hg" # # 評估 # + [markdown] id="NMIziSzqVftL" # ## 準備評估資料 # + [markdown] id="acVB02WIn0yT" # ### 訓練資料(樣本內資料) # + colab={"base_uri": "https://localhost:8080/", "height": 224} id="KUZZACjVU8Tg" outputId="c846e61b-64c4-47d6-9482-f80881b3ac2c" dfins = pd.DataFrame({'y_real': df.price, 'y_sarimax': np.exp(result.get_prediction().predicted_mean), 'y_prophet': 
np.exp(fcst0.yhat)}) print(dfins.shape) dfins.tail() # + [markdown] id="F5iZPiQB6Maf" # ### 測試資料(樣本外資料) # + colab={"base_uri": "https://localhost:8080/", "height": 224} id="IkwpoxAV1GqJ" outputId="71ac6d51-0c5b-4515-9c47-b9f645c83b31" dfoos = pd.DataFrame({'y_real': val_df.price, 'y_sarimax': np.exp(forecast.predicted_mean), 'y_prophet': np.exp(fcst1.yhat)}) print(dfoos.shape) dfoos.head() # + [markdown] id="hn0FTOctWjf3" # ## [評估標準](https://scikit-learn.org/stable/modules/model_evaluation.html#mean-absolute-error) # # + id="L76IffX2RIPn" from sklearn.metrics import mean_squared_error, mean_absolute_error, mean_absolute_percentage_error # + [markdown] id="H2KgbWpsWt8A" # * $\mbox{MSE}(Y, \widehat{Y}) = \frac{1}{T} \sum_{t=1}^T(Y_t - \widehat{Y}_t)^2$ # # * $\mbox{MAE}(Y, \widehat{Y}) = \frac{1}{T} \sum_{t=1}^T|Y_t - \widehat{Y}_t|$ # # * $\mbox{MAPE}(Y, \widehat{Y}) = \frac{1}{T} \sum_{t=1}^T\frac{|Y_t - \widehat{Y}_t|}{max(u_t, |Y_t|)}$ # # + [markdown] id="pJtK1gAzYleQ" # ## 樣本內評估 # + colab={"base_uri": "https://localhost:8080/", "height": 112} id="xnjzTFdXYoyh" outputId="ff12ddb2-4d15-4818-b1eb-71ce6becdb1c" mse = [mean_squared_error(dfins.y_real, dfins.y_sarimax), mean_squared_error(dfins.y_real, dfins.y_prophet)] mae = [mean_absolute_error(dfins.y_real, dfins.y_sarimax), mean_absolute_error(dfins.y_real, dfins.y_prophet)] mape = [mean_absolute_percentage_error(dfins.y_real, dfins.y_sarimax), mean_absolute_percentage_error(dfins.y_real, dfins.y_prophet)] dfv_ins = pd.DataFrame({'MSE': mse, 'MAE': mae, 'MAPE': mape}, index=['SARIMAX', 'prophet']) dfv_ins # + colab={"base_uri": "https://localhost:8080/", "height": 265} id="UFfp-CPBapGV" outputId="e459a3ab-af14-4831-9a9c-3be6c0dad756" dfins.plot() plt.show() # + [markdown] id="yOBlmyv5Z6bC" # ## 樣本外評估 # + colab={"base_uri": "https://localhost:8080/", "height": 112} id="43efEfKhZ2Pu" outputId="a757f954-32b6-4531-e3b7-77b1c4d266c6" mse = [mean_squared_error(dfoos.y_real, dfoos.y_sarimax), mean_squared_error(dfoos.y_real, dfoos.y_prophet)] mae = [mean_absolute_error(dfoos.y_real, dfoos.y_sarimax), mean_absolute_error(dfoos.y_real, dfoos.y_prophet)] mape = [mean_absolute_percentage_error(dfoos.y_real, dfoos.y_sarimax), mean_absolute_percentage_error(dfoos.y_real, dfoos.y_prophet)] dfv_oos = pd.DataFrame({'MSE': mse, 'MAE': mae, 'MAPE': mape}, index=['SARIMAX', 'prophet']) dfv_oos # + colab={"base_uri": "https://localhost:8080/", "height": 277} id="FWfmJw9-a-jk" outputId="90bc58f7-0fbd-4fa0-cc57-200b0b738c8f" dfoos.plot() plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 112} id="UZLYZV4A6cd6" outputId="5e184a07-de8c-4a3e-9502-e9980684d9e3" # for Question mse = [mean_squared_error(np.log(dfoos.y_real), np.log(dfoos.y_sarimax)), mean_squared_error(np.log(dfoos.y_real), np.log(dfoos.y_prophet))] mae = [mean_absolute_error(np.log(dfoos.y_real), np.log(dfoos.y_sarimax)), mean_absolute_error(np.log(dfoos.y_real), np.log(dfoos.y_prophet))] mape = [mean_absolute_percentage_error(np.log(dfoos.y_real), np.log(dfoos.y_sarimax)), mean_absolute_percentage_error(np.log(dfoos.y_real), np.log(dfoos.y_prophet))] dfv_oos = pd.DataFrame({'MSE': mse, 'MAE': mae, 'MAPE': mape}, index=['SARIMAX', 'prophet']) dfv_oos # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="z4OD_y1WUd2h" import pandas as pd import numpy as np from matplotlib import pyplot as plt import statistics import math # + 
id="3eiNaH1RUiIY" outputId="a4926e32-e5ae-4580-81f2-77c191cf6dc6" colab={"base_uri": "https://localhost:8080/", "height": 445} matches = pd.read_csv("/content/sample_data/matches.csv") matches.head() # + id="nA3SyLi1U4zt" outputId="eb1dca85-04e4-4f16-98d4-a66b2a92c01c" colab={"base_uri": "https://localhost:8080/", "height": 394} # setting up the table with relevant columns dropList = ["result","player_of_match","venue","umpire1","umpire2","umpire3"] matches.drop(labels=dropList, axis=1, inplace=True) matches.head() # + id="Pd3l2_iBVlyA" outputId="a752893e-33b7-476e-95f4-9dcaa9d3922d" colab={"base_uri": "https://localhost:8080/", "height": 312} matches[pd.isnull(matches['winner'])] # + id="pGnAtkWHVomw" outputId="f9dde95f-c315-4b36-e01d-d1e6e61ac03f" colab={"base_uri": "https://localhost:8080/", "height": 49} matches['winner'].fillna('Draw', inplace=True) matches[pd.isnull(matches['winner'])] # + id="N1xOYdgDVqfP" outputId="f4499d3b-a531-4eb3-b7b0-606cf554a7e0" colab={"base_uri": "https://localhost:8080/", "height": 490} matches[pd.isnull(matches['city'])] # + id="ZBg0kS0HVsKz" outputId="7a0502f5-12a7-43f5-ce21-8ab3053c17e7" colab={"base_uri": "https://localhost:8080/", "height": 49} matches['city'].fillna('Dubai', inplace=True) matches[pd.isnull(matches['city'])] # + id="O98Q5N9zVuB8" outputId="b40ced7c-7c7a-49d9-f9e2-aca362ab8500" colab={"base_uri": "https://localhost:8080/"} matches.replace(["Deccan Chargers","Delhi Daredevils"],["Sunrisers Hyderabad","Delhi Capitals"],inplace=True,regex=True) match1 = matches[((matches.team1=="Royal Challengers Bangalore") & (matches.team2=="Sunrisers Hyderabad")) | ((matches.team1=="Sunrisers Hyderabad") & (matches.team2=="Royal Challengers Bangalore"))] match1.shape[0] # + id="9Sw-oxaJd7mB" outputId="4eaca2ee-30a6-4112-8bac-514f3d721b35" colab={"base_uri": "https://localhost:8080/"} mw_srh = 0 mw_rcb = 0 lst= [i for i in match1['winner']] print("Win Tracker!") for i in lst: if i=="Royal Challengers Bangalore": mw_rcb += 1 elif i=='Draw': continue else: mw_srh += 1 print(str(mw_srh)+" "+str(mw_rcb)) print("SRH vs RCB : "+str(mw_srh)+" "+str(mw_rcb)) # + id="CIBO-yKZeZVb" outputId="357801ec-45a3-4ce5-dcc3-bc1655514857" colab={"base_uri": "https://localhost:8080/"} last_3_season = match1[match1.season >= 2017] last_3_season.groupby('winner').winner.count() # + [markdown] id="JAgC7IJJWjqP" # Out of 19 matches held between SRH and RCB , SRh leads RCB 14 is to 11. 
In the case with last three seasons, SRH has lead of victories over RR i.e 3 is to 2 # + id="FyY9pRGkaqWW" def statistics_for_lists(lst): print("Maximum Value Of List:") print(max(lst)) print("Median of the List:") print(statistics.median(lst)) print("Mean of the List:") print(statistics.mean(lst)) print("75% of the Median is:") print(statistics.median_high(lst)) print("Minimum Value of List:") print(min(lst)) # + id="KonYIWemWeSX" outputId="0c7016df-f355-4187-b079-f7fd8b7ee186" colab={"base_uri": "https://localhost:8080/", "height": 394} deliveries = pd.read_csv("/content/sample_data/deliveries.csv") deliveries.head() # + id="Xfj0J0pBWyRX" outputId="80999a81-4e76-4ec5-99a9-fbe345890a3d" colab={"base_uri": "https://localhost:8080/", "height": 394} dropToBeList = ['inning','is_super_over','bye_runs','legbye_runs','fielder'] deliveries.drop(dropToBeList, axis=1, inplace=True) deliveries.replace(['Deccan Chargers','Delhi Daredevils'],['Sunrisers Hyderabad','Delhi Capitals'],inplace=True,regex=True) deliveries['dismissal_kind'].fillna('Not Out',inplace=True) deliveries.head() # + id="KlS2o3qIW2GL" outputId="f095efcf-900b-4d5e-c566-962de3fbddf8" colab={"base_uri": "https://localhost:8080/"} ballbyball = deliveries[((deliveries.batting_team=="Royal Challengers Bangalore") & (deliveries.bowling_team=="Sunrisers Hyderabad")) | ((deliveries.batting_team=="Sunrisers Hyderabad") & (deliveries.bowling_team=="Royal Challengers Bangalore"))] no_of_matches=list(set([i for i in ballbyball['match_id']])) no_of_matches.sort() print(len(no_of_matches)) # + id="bdWITQe5oNC3" outputId="4c5c2b15-ad52-4649-d38c-305c67886842" colab={"base_uri": "https://localhost:8080/"} #Q4 wickets_lost_srh_pp = ballbyball[(ballbyball.batting_team=='Sunrisers Hyderabad') & (ballbyball.over>=1) & (ballbyball.over<=5)].groupby('match_id').player_dismissed.count() wickets_lost_rcb_pp = ballbyball[(ballbyball.batting_team=='Royal Challengers Bangalore') & (ballbyball.over>=1) & (ballbyball.over<=5)].groupby('match_id').player_dismissed.count() srh_pp=[i for i in wickets_lost_srh_pp] rcb_pp=[i for i in wickets_lost_rcb_pp] diff=[] for i in range(len(srh_pp)): diff.append(abs(rcb_pp[i]-srh_pp[i])) statistics_for_lists(diff) # + id="1kJmZcQGBa-N" outputId="d57c0447-9f8d-449a-f22c-f444c132fede" colab={"base_uri": "https://localhost:8080/"} #Q5 dot_balls = ballbyball[(ballbyball.total_runs==0)].groupby('match_id').total_runs.count() dot_balls.describe() # + [markdown] id="nFq6aDcCHC10" # In all matches between RCB and SRH, the average number of dot balls expected is 81 to 88 # + id="itmnb6ZQsmut" outputId="893821a5-3543-4978-9d1c-577c3d0631d5" colab={"base_uri": "https://localhost:8080/"} #Q2 total_scores = ballbyball.groupby('match_id').total_runs.sum() total_scores.describe() # + id="2lTPW4QcqWaX" outputId="642ce8dd-f97f-428f-a1d6-4f17ad27e1ca" colab={"base_uri": "https://localhost:8080/"} #Q3 srh = ballbyball[ballbyball.batting_team=='Sunrisers Hyderabad'] srh_dif=[] for i in no_of_matches: df = srh[srh.match_id==i] tot_runs = [k for k in df['total_runs']] wides = [k for k in df['wide_runs']] nobs = [k for k in df['noball_runs']] ball_to_30=0 ball_to_50=0 score_to_50=0 score_to_30=0 for j in range(len(tot_runs)): if(score_to_30 < 30 and wides[j]==0 and nobs[j]==0): ball_to_30 +=1 ball_to_50 +=1 score_to_30 += tot_runs[j] score_to_50 += tot_runs[j] elif(score_to_30 < 30 and (wides[j]!=0 or nobs[j]!=0)): score_to_30 += tot_runs[j] score_to_50 += tot_runs[j] elif(score_to_50 < 50 and wides[j]==0 and nobs[j]==0): score_to_50 += 
tot_runs[j] ball_to_50 += 1 elif(score_to_50 < 50 and (wides[j]!=0 or nobs[j]!=0)): score_to_50 += tot_runs[j] diff = ball_to_50 - ball_to_30 srh_dif.append(diff) print(srh_dif) # + id="Uc_qtyLqtDyZ" outputId="d6359093-cb19-4e09-d5fc-9e19bfa241f9" colab={"base_uri": "https://localhost:8080/"} rcb = ballbyball[ballbyball.batting_team=='Royal Challengers Bangalore'] rcb_dif=[] for i in no_of_matches: df = rcb[rcb.match_id==i] tot_runs = [k for k in df['total_runs']] wides = [k for k in df['wide_runs']] nobs = [k for k in df['noball_runs']] ball_to_30=0 ball_to_50=0 score_to_50=0 score_to_30=0 for j in range(len(tot_runs)): if(score_to_30 < 30 and wides[j]==0 and nobs[j]==0): ball_to_30 +=1 ball_to_50 +=1 score_to_30 += tot_runs[j] score_to_50 += tot_runs[j] elif(score_to_30 < 30 and (wides[j]!=0 or nobs[j]!=0)): score_to_30 += tot_runs[j] score_to_50 += tot_runs[j] elif(score_to_50 < 50 and wides[j]==0 and nobs[j]==0): score_to_50 += tot_runs[j] ball_to_50 += 1 elif(score_to_50 < 50 and (wides[j]!=0 or nobs[j]!=0)): score_to_50 += tot_runs[j] diff = ball_to_50 - ball_to_30 rcb_dif.append(diff) print(rcb_dif) # + id="3B9jN37vwXqk" outputId="fe3de085-1e90-4a94-baa1-0e9399668d2f" colab={"base_uri": "https://localhost:8080/"} diff_bw_srh_rcb = [] for i in range(len(rcb_dif)): diff_bw_srh_rcb.append(abs(srh_dif[i]-rcb_dif[i])) print(diff_bw_srh_rcb) statistics_for_lists(diff_bw_srh_rcb) # + id="UmCV74Kvw04A" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="qokXXtdNSbU9" import numpy as np import matplotlib.pyplot as plt import scipy.integrate as integrate import scipy.special as special import numbers # + [markdown] id="uvH0iS-Crw1A" # ## Set Up # + id="gQ_wZrYqTPBC" # true metric p_star = (0.8,0.2) q_star = (0.3,0.1,0.3) def eta(x): return 1/(1+np.exp(5*x)) zeta = 0.5 # f_X /sim U(-1,1) # implementation of proposition 1 # return a classifier with t def h_bar(t): m11, m00 = np.cos(t), np.sin(t) def hb(x): if m11+m00 >= 0: return int(eta(x)>=m00/(m11+m00)) else: return int(eta(x)<=m00/(m11+m00)) return hb # confusion matrix, analytical solution # confusion matrix, analytical solution def C11(t): # P(Y=1, h=1) m11,m00 = np.cos(t), np.sin(t) x_prime = 0. 
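# The next lines locate the decision boundary of h_bar(t): solving
# eta(x) = 1/(1+exp(5x)) = m00/(m11+m00) for x gives x_prime = log(m11/m00)/5,
# which is then clipped to [-1, 1] because f_X ~ U(-1, 1). The value returned
# below is the closed-form integral of eta over the region where h(x) = 1,
# using the antiderivative x - 0.2*log(1+exp(5x)).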
h = h_bar(t) if m00 == 0: x_prime = 1 elif m11/m00 <= 0: x_prime = -1 else: x_prime = np.log(m11/m00)/5 if x_prime > 1: x_prime = 1 elif x_prime < -1: x_prime = -1 print(x_prime) if m00+m11 >= 0: return (x_prime-0.2*np.log(1+np.exp(5*x_prime))+1+0.2*np.log(1+np.exp(-5))) # x-0.2ln(1+e^5x) else: return (1-0.2*np.log(1+np.exp(5))-x_prime+0.2*np.log(1+np.exp(5*x_prime))) def C10(t): # P(Y=0, h=1) return 1-C11(t) def C01(t): # P(Y=1, h=0) return 1-C00(t) def C00(t): # P(Y=0, h=0) m11,m00 = np.cos(t), np.sin(t) x_prime = 0 h = h_bar(t) if m00 == 0: x_prime = 1 elif (m00+m11)/m00-1 <= 0: x_prime = -1 else: x_prime = np.log(m11/m00)/5 if x_prime > 1: x_prime = 1 elif x_prime < -1: x_prime = -1 if m00+m11 >= 0: return (0.2*np.log(1+np.exp(5))-0.2*np.log(1+np.exp(5*x_prime))) # ln(1+e^5x) else: return (0.2*np.log(1+np.exp(5*x_prime))-0.2*np.log(1+np.exp(-5))) # metric evaluation def phi(t): return (p_star[0]*C11(t)+p_star[1]*C00(t))/(q_star[0]*C11(t)+q_star[1]*C00(t)+q_star[2]) # query function (always maximize phi function) # alpha: error rate —— with probability alpha, the oracle will return wrong answer def query(t_1, t_2, alpha): if phi(t_1) < phi(t_2): if np.random.rand() > alpha: return 1 # prefer t2 else: return 0 else: if np.random.rand() > alpha: return 0 # prefer t1 else: return 1 # + [markdown] id="x-t-q_QuMZ9r" # # Algorithm 1 # + id="44ei2YcaMY77" # implements algorithm 1 # analytical version # alpha: error rate of oracle def max_quasiconcave_metric(eps, alpha): t_a = 0 t_b = np.pi/2 m_bar = np.zeros(2) C_bar = 0 iter = 0 while np.linalg.norm(t_a-t_b) > eps: # divide the searching range into equally seperated intervals t_c = (3*t_a+t_b)/4 t_d = (t_a+t_b)/2 t_e = (t_a+3*t_b)/4 # compute Confusion Matrices C_a = np.array([[C00(t_a), C01(t_a)],[C10(t_a), C11(t_a)]]) C_b = np.array([[C00(t_b), C01(t_b)],[C10(t_b), C11(t_b)]]) C_c = np.array([[C00(t_c), C01(t_c)],[C10(t_c), C11(t_c)]]) C_d = np.array([[C00(t_d), C01(t_d)],[C10(t_d), C11(t_d)]]) C_e = np.array([[C00(t_e), C01(t_e)],[C10(t_e), C11(t_e)]]) # pairwise comparisons ca = query(t_c, t_a, alpha) dc = query(t_d, t_c, alpha) ed = query(t_e, t_d, alpha) be = query(t_b, t_e, alpha) # determine the next iter search range based on oracle resposne to query if ca: t_b = t_d elif not ca and dc: t_b = t_d elif not dc and ed: t_a = t_c t_b = t_e elif not ed and be: t_a = t_d else: t_a = t_d m_bar[0], m_bar[1] = np.cos(t_d), np.sin(t_d) C_bar = C_d iter += 1 # print("iteration run:"+str(iter)) return m_bar,C_bar # + [markdown] id="rXS9CE4jMevA" # # Algorithm 2 # + id="T_6Ibc_vMg9_" # implements algorithm 1 # analytical version # alpha: error rate of oracle def min_quasiconvex_metric(eps, alpha): t_a = np.pi t_b = np.pi*1.5 m_bar = np.zeros(2) C_bar = 0 iter = 0 while np.linalg.norm(t_a-t_b) > eps: # divide the searching range into equally seperated intervals t_c = (3*t_a+t_b)/4 t_d = (t_a+t_b)/2 t_e = (t_a+3*t_b)/4 # compute Confusion Matrices C_a = np.array([[C00(t_a), C01(t_a)],[C10(t_a), C11(t_a)]]) C_b = np.array([[C00(t_b), C01(t_b)],[C10(t_b), C11(t_b)]]) C_c = np.array([[C00(t_c), C01(t_c)],[C10(t_c), C11(t_c)]]) C_d = np.array([[C00(t_d), C01(t_d)],[C10(t_d), C11(t_d)]]) C_e = np.array([[C00(t_e), C01(t_e)],[C10(t_e), C11(t_e)]]) # pairwise comparisons ca = query(t_c, t_a, alpha) dc = query(t_d, t_c, alpha) ed = query(t_e, t_d, alpha) be = query(t_b, t_e, alpha) # determine the next iter search range based on oracle resposne to query if not ca: t_b = t_d elif ca and not dc: t_b = t_d elif dc and not ed: t_a = t_c t_b = t_e elif ed and 
not be: t_a = t_d else: t_a = t_d m_bar[0], m_bar[1] = np.cos(t_d), np.sin(t_d) C_bar = C_d iter += 1 # print("iteration run:"+str(iter)) return m_bar,C_bar # + colab={"base_uri": "https://localhost:8080/"} id="QdTJKjYKkh4y" outputId="14f1543a-d468-41b6-b6e7-7f076794f015" m,C = max_quasiconcave_metric(1e-4, 0.) print("elicited metric: "+str(m)) print("confusion matrix: \n"+str(C)) # + colab={"base_uri": "https://localhost:8080/"} id="rVYHFE4ski2D" outputId="70ac26ef-2d3b-4572-f2f9-1142142cfc0f" m,C = min_quasiconvex_metric(1e-4, 0.) print("elicited metric: "+str(m)) print("confusion matrix: \n"+str(C)) # + [markdown] id="TSUbTDHZtMgi" # # Algorithm 3 # + id="p7tk6hZ5kpnC" def grid_search_for_p(m11, m00, C0, m11_, m00_, C0_, k, delta): sig_opt = np.inf p11_opt = 0 kt = np.append(np.linspace(0, np.pi/2, k//2), np.linspace(np.pi, np.pi*1.5, k//2)) sigs=[] for p11 in np.arange(0, 1+delta, delta): p00 = 1-p11 P = p11*zeta+p00*(1-zeta) Qp = P+C0-m11*zeta-m00*(1-zeta) q0p = C0*P/Qp q11p = (p11-m11)*P/Qp q00p = (p00-m00)*P/Qp Qpp = P+C0_-m11_*zeta-m00_*(1-zeta) q0pp = C0_*P/Qpp q11pp = (p11-m11_)*P/Qpp q00pp = (p00-m00_)*P/Qpp phip = (p11*np.array(list(map(C11, kt)))+p00*np.array(list(map(C00, kt))))/(q11p*np.array(list(map(C11, kt)))+q00p*np.array(list(map(C00, kt)))+q0p) phipp = (p11*np.array(list(map(C11, kt)))+p00*np.array(list(map(C00, kt))))/(q11pp*np.array(list(map(C11, kt)))+q00pp*np.array(list(map(C00, kt)))+q0pp) r = phip/phipp sig = np.std(r) sigs.append(sig) if sig np.pi/2: ta = np.pi*2-ta C0 = ma[0]*C11(ta)+ma[1]*C00(ta) mi,Ci = min_quasiconvex_metric(1e-4, 0.) ti = np.arccos(mi[0]) if ti > np.pi/2: ti = np.pi*2-ti C0_ = mi[0]*C11(ti)+mi[1]*C00(ti) p11 = grid_search_for_p(m11=ma[0], m00=ma[1], C0=C0, m11_=mi[0], m00_=mi[1], C0_=C0_, k=2000, delta=0.01) p00 = 1-p11 P = p11*zeta+p00*(1-zeta) Q = P+C0-ma[0]*zeta-ma[1]*(1-zeta) q0 = C0*P/Q q11 = (p11-ma[0])*P/Q q00 = (p00-ma[1])*P/Q print((p11,p00)+(q11,q00,q0)) # + colab={"base_uri": "https://localhost:8080/"} id="-V9OdPbCJ8rh" outputId="dff5046c-d692-4619-d56c-728c9732353f" p00 = 1-p11 P = p11*zeta+p00*(1-zeta) Q = P+C0_-mi[0]*zeta-mi[1]*(1-zeta) q0 = C0_*P/Q q11 = (p11-mi[0])*P/Q q00 = (p00-mi[1])*P/Q print((p11,p00)+(q11,q00,q0)) # + colab={"base_uri": "https://localhost:8080/"} id="B-0RAa-cQ0Gy" outputId="5f710cf3-c522-402e-c7ea-25e5a9c181f2" p11 = 0.86 p00 = 1-p11 P = p11*zeta+p00*(1-zeta) Q = P+C0-ma[0]*zeta-ma[1]*(1-zeta) q0 = C0*P/Q q11 = (p11-ma[0])*P/Q q00 = (p00-ma[1])*P/Q print((p11,p00)+(q11,q00,q0)) # + [markdown] id="SJHZVHzjWh47" # # Visualize $\phi$ # Gaurush et al. 
result # + colab={"base_uri": "https://localhost:8080/", "height": 365} id="33rP3AXHTjjH" outputId="4a9ac0da-72ed-4175-b7a6-f0c94c06e1e4" # Plot phi function versus different thetas def phi_elicited(t): return (p11*C11(t)+p00*C00(t))/(q11*C11(t)+q00*C00(t)+q0) thetas = np.linspace(0,np.pi*2,200) ph_true = list(map(phi, thetas)) ph_elicited = list(map(phi_elicited, np.linspace(0,np.pi*2,200))) plt.figure(figsize=(16,5)) plt.plot(thetas, ph_true, "-", color='b') plt.plot(thetas, ph_elicited, "--", color='g') for p in np.arange(0.5, 1.5, 0.5): plt.axvline(x=np.pi*p, c='r', ls='--', alpha=0.7) plt.axvline(x=thetas[np.argmax(ph_true)], c='b') plt.axvline(x=thetas[np.argmax(ph_elicited)], c='g') plt.axvline(x=thetas[np.argmin(ph_true)], c='b') plt.axvline(x=thetas[np.argmin(ph_elicited)], c='g') plt.axvline(x=ta, c='r') plt.axvline(x=ti, c='y') plt.xticks(np.arange(0, np.pi*1.5, np.pi/36), rotation=60, size="small") plt.title("phi change with theta") plt.xlabel("theta/radian") plt.ylabel("phi") plt.show() # + [markdown] id="B0bMEHnZ6TOr" # Current model result # + id="0kSCxuH2UCzT" colab={"base_uri": "https://localhost:8080/", "height": 365} outputId="d3f0f6a4-3eae-4215-88eb-221194b434cd" # Plot phi function versus different thetas def phi_elicited(t): return (p11*C11(t)+p00*C00(t))/(q11*C11(t)+q00*C00(t)+q0) thetas = np.linspace(0,np.pi*2,200) ph_true = list(map(phi, thetas)) ph_elicited = list(map(phi_elicited, np.linspace(0,np.pi*2,200))) plt.figure(figsize=(16,5)) plt.plot(thetas, ph_true, "-", color='b') plt.plot(thetas, ph_elicited, "--", color='g') for p in np.arange(0.5, 1.5, 0.5): plt.axvline(x=np.pi*p, c='r', ls='--', alpha=0.7) plt.axvline(x=thetas[np.argmax(ph_true)], c='b') plt.axvline(x=thetas[np.argmax(ph_elicited)], c='g') plt.axvline(x=thetas[np.argmin(ph_true)], c='b') plt.axvline(x=thetas[np.argmin(ph_elicited)], c='g') plt.axvline(x=ta, c='r') plt.axvline(x=ti, c='y') plt.xticks(np.arange(0, np.pi*1.5, np.pi/36), rotation=60, size="small") plt.title("phi change with theta") plt.xlabel("theta/radian") plt.ylabel("phi") plt.show() # + id="u4XR-l39EA2W" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # ### What does XML stand for? # + [markdown] slideshow={"slide_type": "fragment"} # XML stands for Extensible Markup Language # + [markdown] slideshow={"slide_type": "slide"} # ### What is a markup language? # + [markdown] slideshow={"slide_type": "fragment"} # According to _Wikipedia_ # # > In computer text processing, a markup language is a system for annotating a document in a way that is syntactically distinguishable from the text, meaning when the document is processed for display, the markup language is not shown, and is only used to format the text. # + [markdown] slideshow={"slide_type": "slide"} # ### What is the purpose of XML? # + [markdown] slideshow={"slide_type": "slide"} # - XML is used for sharing of data. # - Sends data in a structure format. # + [markdown] slideshow={"slide_type": "slide"} # ### Basic Terminology # + [markdown] slideshow={"slide_type": "slide"} # #### Tag # + [markdown] slideshow={"slide_type": "fragment"} # Generally, strings that `<` and ends with `>`. # + [markdown] slideshow={"slide_type": "slide"} # #### Types of tags # + [markdown] slideshow={"slide_type": "fragment"} # - start-tag, such as `
    <section>` # + [markdown] slideshow={"slide_type": "fragment"} # - end-tag, such as `
    </section>` # # + [markdown] slideshow={"slide_type": "fragment"} # - empty-element tag, such as `<line-break />` # + [markdown] slideshow={"slide_type": "slide"} # #### Element # + [markdown] slideshow={"slide_type": "fragment"} # - Logical document component that either begins with a start-tag and ends with a matching end-tag # # OR # # - consists only of an empty-element tag. # # + [markdown] slideshow={"slide_type": "fragment"} # ##### Examples # + [markdown] slideshow={"slide_type": "fragment"} # - `<greeting>Hello, world!</greeting>`. # + [markdown] slideshow={"slide_type": "fragment"} # - `<line-break />`. # + [markdown] slideshow={"slide_type": "slide"} # #### Attribute # + [markdown] slideshow={"slide_type": "fragment"} # - name-value pair that exists within a start-tag or empty-element tag. # + [markdown] slideshow={"slide_type": "slide"} # #### Examples # + [markdown] slideshow={"slide_type": "fragment"} # - `foo` # + [markdown] slideshow={"slide_type": "fragment"} # # - `
    FooBar
    ` # + [markdown] slideshow={"slide_type": "slide"} # ### Example # + slideshow={"slide_type": "fragment"} active="" # # # # 1 # 2008 # 141100 # # # # # 4 # 2011 # 59900 # # # # 68 # 2011 # 13600 # # # # # + [markdown] slideshow={"slide_type": "slide"} # ### Basic Parsing # + import xml.etree.ElementTree as ET # for parsing document from string root = ET.fromstring(val) # for parsing from a file root = ET.parse('file.xml').getroot() # + slideshow={"slide_type": "slide"} # + [markdown] slideshow={"slide_type": "slide"} # #### Getting Interesting Elements # + slideshow={"slide_type": "fragment"} for child in root.iter(): print(child.tag, child.attrib) # + slideshow={"slide_type": "fragment"} for neighbor in root.iter('neighbor'): print(neighbor.tag, neighbor.attrib) # + slideshow={"slide_type": "slide"} # + [markdown] slideshow={"slide_type": "slide"} # ### More realistic example # + URL = 'https://www.hackadda.com/latest/feed/' # fetch feed from this URL and extract some interesting data # like, find urls that contain 'django' in them. # - # + [markdown] slideshow={"slide_type": "slide"} # #### Iterating through every element # + slideshow={"slide_type": "fragment"} import xml.etree.ElementTree as ET tree = ET.iterparse('data.xml') # first element is event for _, ele in tree: print(ele.tag, ele.attrib) # + slideshow={"slide_type": "slide"} # + [markdown] slideshow={"slide_type": "slide"} # ### SAX(Simple API for XML) # + [markdown] slideshow={"slide_type": "fragment"} # - This can be used to parse XML element by element, line by line. # + [markdown] slideshow={"slide_type": "fragment"} # - It is generally slower and memory inefficient, prefer `ET.iterparse` instead. # - # + [markdown] slideshow={"slide_type": "slide"} # ### DOM(Document Object Model) API # + [markdown] slideshow={"slide_type": "fragment"} # - DOM is a cross-language API from W3C(World Wide Web Consortium) for accessing and modifying XML documents. # + slideshow={"slide_type": "slide"} from xml.dom.minidom import parse, parseString tree = parse(source) countries = tree.getElementsByTagName('country') for country in countries: tag = country.tagName children = country.childNodes print('name:', country.getAttribute('name')) # + [markdown] slideshow={"slide_type": "slide"} # #### SideNotes # + [markdown] slideshow={"slide_type": "fragment"} # - This is generally memory consuming and `xml.etree.ElementTree` is generally preferred over it. # + slideshow={"slide_type": "slide"} # + [markdown] slideshow={"slide_type": "slide"} # ### Security Considerations # + [markdown] slideshow={"slide_type": "fragment"} # From the official `python` documentation # #
    # # > Warning: The XML modules are not secure against erroneous or maliciously constructed data. If you need to parse untrusted or unauthenticated data see the [XML vulnerabilities](https://docs.python.org/3/library/xml.html#xml-vulnerabilities) and The [defusedxml](https://docs.python.org/3/library/xml.html#defusedxml-package) Package sections. # #
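# A minimal sketch of the safer alternative the warning points to, assuming the
# third-party `defusedxml` package is installed (`pip install defusedxml`) and
# that `val` still holds the XML string parsed with `ET.fromstring(val)` earlier:

# +
import defusedxml.ElementTree as DET

safe_root = DET.fromstring(val)   # refuses documents with dangerous constructs (e.g. entity expansion)
print(safe_root.tag)
# -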
    # + [markdown] slideshow={"slide_type": "slide"} # ### Other useful parsing libraries # + [markdown] slideshow={"slide_type": "fragment"} # - [`xmltodict`](https://docs.python.org/3/library/xml.html#xml-vulnerabilities) # - convert XML to JSON like object # + [markdown] slideshow={"slide_type": "fragment"} # - [`untangle`](https://github.com/stchris/untangle) # - converts XML to a python like object # + slideshow={"slide_type": "slide"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib.pyplot as plt import gbm import mgbm import sgbm import stdev import simple_return as sr import duration # import durationNew import numpy as np import pandas as pd import statistics as sc from scipy.stats import kurtosis, skew # N=10000;p=85 # mu=0.241851845471164 # si=0.07871329243839084 seed=range(0,201) dt=5./(250*360) sdt=np.sqrt(dt) t1=[] t2=[] t3=[] t4=[] x=pd.ExcelFile('/home/sharan/Desktop/Transport/IISER/ClusterOutput/Squeeze/Stat.xlsx') page=x.parse(0) tdrift=np.array(page.tdrift) tvola=np.array(page.tvola) Neach=np.array(page.N) # print(len(tvola)) t1star=np.array(page.Mean) t2star=np.array(page.SD) t3star=np.array(page.Skew) t4star=np.array(page.Kurtosis) # print(t1star) for i in range(18): N = Neach[i] mu = tdrift[i] si = tvola[i] GM_g=[] GS_g=[] GW_g=[] GK_g=[] # print(i," ",N," ",mu," ",si) for j in range(0,200): sg=gbm.gbm(dt,N,mu,si,seed[j]) ret_g=sr.s_ret(np.array(sg,dtype=float)) ret_g=np.array(ret_g) L=len(ret_g) n=20 new_ret_g=[np.array(ret_g[i:i+n]) for i in range(L-n)] Ln=len(new_ret_g) new_std_g=np.array([stdev.sd(new_ret_g[i]) for i in range(Ln)]) volatility_g= new_std_g/sdt dur_g=duration.duration(np.array(volatility_g)) dur_g=np.array(dur_g,dtype=float) GM_g.append(np.mean(dur_g)) GS_g.append(stdev.sd(dur_g)) GW_g.append(skew(dur_g)) GK_g.append(kurtosis(dur_g,fisher=False)) t1.append(GM_g) t2.append(GS_g) t3.append(GW_g) t4.append(GK_g) writer=pd.ExcelWriter('/home/sharan/Desktop/Transport/IISER/ClusterOutput/Squeeze/GBM/t1FinalSqueeze.xlsx',engine='xlsxwriter') df=pd.DataFrame({'IO1':t1[0],'IO2':t1[1],'IO3':t1[2],'IO4':t1[3],'IO5':t1[4],'IO6':t1[5],'IO7':t1[6],'IO8':t1[7],'IO9':t1[8],'I10':t1[9],'I11':t1[10],'I12':t1[11],'I13':t1[12],'I14':t1[13],'I15':t1[14],'I16':t1[15],'I17':t1[16],'I18':t1[17]},index=range(1,201)) df.to_excel(writer,sheet_name='sheet') writer.save() writer=pd.ExcelWriter('/home/sharan/Desktop/Transport/IISER/ClusterOutput/Squeeze/GBM/t2FinalSqueeze.xlsx',engine='xlsxwriter') df=pd.DataFrame({'IO1':t2[0],'IO2':t2[1],'IO3':t2[2],'IO4':t2[3],'IO5':t2[4],'IO6':t2[5],'IO7':t2[6],'IO8':t2[7],'IO9':t2[8],'I10':t2[9],'I11':t2[10],'I12':t2[11],'I13':t2[12],'I14':t2[13],'I15':t2[14],'I16':t2[15],'I17':t2[16],'I18':t2[17]},index=range(1,201)) df.to_excel(writer,sheet_name='sheet') writer.save() writer=pd.ExcelWriter('/home/sharan/Desktop/Transport/IISER/ClusterOutput/Squeeze/GBM/t3FinalSqueeze.xlsx',engine='xlsxwriter') df=pd.DataFrame({'IO1':t3[0],'IO2':t3[1],'IO3':t3[2],'IO4':t3[3],'IO5':t3[4],'IO6':t3[5],'IO7':t3[6],'IO8':t3[7],'IO9':t3[8],'I10':t3[9],'I11':t3[10],'I12':t3[11],'I13':t3[12],'I14':t3[13],'I15':t3[14],'I16':t3[15],'I17':t3[16],'I18':t3[17]},index=range(1,201)) df.to_excel(writer,sheet_name='sheet') writer.save() writer=pd.ExcelWriter('/home/sharan/Desktop/Transport/IISER/ClusterOutput/Squeeze/GBM/t4FinalSqueeze.xlsx',engine='xlsxwriter') 
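# t4 holds the kurtosis of the volatility-duration distribution: one column per
# parameter set (IO1 ... I18, 18 in total) and one row per random seed (200 rows),
# mirroring the t1 (mean), t2 (standard deviation) and t3 (skewness) sheets above.
# Note: depending on the pandas version, ExcelWriter.close() may be required in
# place of the writer.save() calls used here.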
df=pd.DataFrame({'IO1':t4[0],'IO2':t4[1],'IO3':t4[2],'IO4':t4[3],'IO5':t4[4],'IO6':t4[5],'IO7':t4[6],'IO8':t4[7],'IO9':t4[8],'I10':t4[9],'I11':t4[10],'I12':t4[11],'I13':t4[12],'I14':t4[13],'I15':t4[14],'I16':t4[15],'I17':t4[16],'I18':t4[17]},index=range(1,201)) df.to_excel(writer,sheet_name='sheet') writer.save() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7.2 64-bit # language: python # name: python37264bit4a2e9be422294fc3a9a080314b1a5773 # --- # # Anonymous Functions # # - [Download the lecture notes](https://philchodrow.github.io/PIC16A/content/functions/functions_3.ipynb). # # # $\lambda$-expressions provide a concise way to create very simple functions. While the syntax is somewhat odd, it's quite readable once you know the correct idiom. # + # "double is the function that takes x and multiplies it by 2" double = lambda x: 2*x # same as: # def double(x): # return 2*x double(4) # - # "second_char is the function that returns the second character of a string s" second_char = lambda s: s[1] second_char("Picard") # $\lambda$ expressions are extremely useful when a relatively simple function is required, for example when sorting lists. # + # sort a list into even and odd: L = [4, 6, 9, 3, 4, 6, 7, 3, 2, 0, 9, 5] L.sort(key = lambda x: (x % 2) == 1) L # + # decreasing order within even and odd groups L.sort(key = lambda x: ((x % 2) == 1, -x)) L # + # lambda functions also accept multiple arguments multiply = lambda x,y: x*y multiply(2, 3) # - # Don't let your $\lambda$-expressions get too complicated. Generally speaking, these expressions should not contain control flow statements, and should not be longer than a single, 80-character line of code. If your $\lambda$-expression is getting complex, use an explicitly-defined function instead. # 80 characters # # Arguments and Keyword Arguments # # In some cases, we might not know how many arguments a function will accept. For example, consider the following function: def add(a, b): return(a+b) add(2, 2) # This works, but only for two numbers. add(2, 2, 2) # --- # Of course, we could define a version of "add" that works with three numbers, or four....but there's a better way. # # The special `*args` argument can be passed to the function. Within the function scope, `args` (no `*` asterisk) is then a list of all **positional** arguments passed to the function. So, we can write a general `add` this way: # + def better_add(*args): total = 0 # args is a list containing all of the inputs for a in args: total += a return(total) better_add(2, 2), better_add(2, 2, 2), better_add(2, 2, 2, 2, 2) # - # In some cases, you might not be sure how many **keyword** arguments will be used in your function. In this case, use `**kwargs` (with two `*` asterisks). Having done so, `kwargs` will be available as a dictionary within the function scope. # + def favorites(**kwargs): for key, val in kwargs.items(): print("My favorite " + key + " is " + val + ".") favorites(TV = "Star Trek", captain = "Picard") # --- # - # It is possible to use `*args` and `**kwargs` together. Since positional arguments always come first, it's necessary to use `*args` before `**kwargs`. # An especially useful application of `**kwargs` occurs when one is using functions from modules with many optional arguments. 
For example, the `scatter` command in the `matplotlib` library has a [large number of keyword arguments](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.scatter.html). If you want to call this function in a custom function, then you don't have to enumerate all of them: use `**kwargs` and it just works. # + from matplotlib import pyplot as plt import random def random_scatter(n, **kwargs): """ produce a scatter plot of n random 2d points """ x, y = [], [] for i in range(n): x.append(random.random()) y.append(random.random()) plt.scatter(x, y, **kwargs) # - random_scatter(100) # image: a scatterplot of 100 blue, filled-in circles, with x and y coordinates each between 0 and 1. # --- random_scatter(100, color = "orange") # image: a scatterplot of 100 orange, filled-in circles, with x and y coordinates each between 0 and 1. # --- # make the points very large and slightly transparent random_scatter(100, color = "orange", s = 200, alpha = 0.4) # image: a scatterplot of 100 large, orange, slightly transparent circles, with x and y coordinates each between 0 and 1. # --- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Imports # # `numpy` import to manupulate arrays # `pandas` import to create and modify dataframes # `matplotlib` to visulaize graphs # `seaborn` build on matploblib, higher level graph functions # Imports import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # # Loading the data # Here we are taking the inbuilt function of keras to load the data from the server # The dataset file in present in the [Link to dataset in amazon server](http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/) # The inbuilt code # ```python # def load_data(): # """Loads the Fashion-MNIST dataset. # # Returns # Tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`. 
# """ # dirname = os.path.join('datasets', 'fashion-mnist') # base = 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/' # files = ['train-labels-idx1-ubyte.gz', 'train-images-idx3-ubyte.gz', # 't10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz'] # # paths = [] # for fname in files: # paths.append(get_file(fname, # origin=base + fname, # cache_subdir=dirname)) # # with gzip.open(paths[0], 'rb') as lbpath: # y_train = np.frombuffer(lbpath.read(), np.uint8, offset=8) # # with gzip.open(paths[1], 'rb') as imgpath: # x_train = np.frombuffer(imgpath.read(), np.uint8, # offset=16).reshape(len(y_train), 28, 28) # # with gzip.open(paths[2], 'rb') as lbpath: # y_test = np.frombuffer(lbpath.read(), np.uint8, offset=8) # # with gzip.open(paths[3], 'rb') as imgpath: # x_test = np.frombuffer(imgpath.read(), np.uint8, # offset=16).reshape(len(y_test), 28, 28) # # return (x_train, y_train), (x_test, y_test) # ``` # which downloads the data and unpacks and gives back as tuples of test, train splits # # ## Regarding Dataset # Fashion Mnist is data set of 10 fashoin articles namely # # | Label | Description | # |:---- |:------------- | # |0 | T-shirt/top | # |1 | Trouser | # |2 | Pullover | # |3 | Dress | # |4 | Coat | # |5 | Sandal | # |6 | Shirt | # |7 | Sneaker | # |8 | Bag | # |9 | Ankle boot | # # [official Link](https://github.com/zalandoresearch/fashion-mnist) # # `Line 4` : created a dictionary with respective labels # Loading the dataset from keras.datasets import fashion_mnist (x_train,y_train),(x_test,y_test) = fashion_mnist.load_data() fashion_mnist_labels_dict = {0:"T-shirt/top",1:"Trouser",2:"Pullover",3:"Dress",4:"Coat",5:"Sandal",6:"Shirt",7:"Sneaker",8:"Bag",9:"Ankle boot"} # # Understand the dataset # Let see the size and shape of test and training tuples # # ``On Execution`` # There will be 60000 samples of 28*28 resolution of images in training set # There will be 10000 samples of 28*28 resolution of images in testing set # Understanding the data print("The number of training samples",len(x_train)) print("The number of testing samples",len(x_test)) print("The shape of training sample array",np.shape(x_train)) print("The shape of training labels",np.shape(y_train)) # # Visualizing the data # Let us try to visualise the few samples of data so, we could get an idea of how the data looks like # `On Execution` # We can see ten images of # Visulizing the data fig, ax = plt.subplots(2, 5, sharex=True, sharey=True) index = 0 for row in ax: for col in row: col.set_xlabel(str(fashion_mnist_labels_dict[y_train[index]])) col.imshow(x_train[index],cmap='gray') index+=1 plt.show() print("first ten labels") for i in range(0,10): print("Label value :",y_train[i]) print("Object Name :",fashion_mnist_labels_dict[y_train[i]]) # # Preprocessing the data # # ## Vairables # `image_width`,`image_height`,`image_channels` would describe the dimentions of the image # `classes` to detemine how many catogories of samples present in out dataset. By nature mnist have 0-9 images to ten classes # # ## Creating sparse vector representation # `to_categorical` is converting into one hot encoding. Means each vector is represented by one hot encoding. # 0 --> [1,0,0,0,0,0,0,0,0,0] # 1 --> [0,1,0,0,0,0,0,0,0,0] # 2 --> [0,0,1,0,0,0,0,0,0,0] # and similarly goes on # # `np.expand_dims` would just increst one dimentions in the end. Like .... [1,2,3] to [[1],[2],[3]] # # ## Normalization # `Line16`,`Line17`is normalization we are divinding all the pixal values by 255. 
so all the numerical values are converted between 0 and 1 # >Note: since its a simple dataset there is not much of processing required to attain good accuracies. For all real time datasest preprocessing like normalizing , standadising , on hot encoding, filling the missing values, transforming features, feeding the data in batches and all other type of preprocessing is required # # + # Preprocessing the data # Variables image_width = 28 image_height = 28 image_channels = 1 image_shape = (image_width,image_height,image_channels) x_train = np.expand_dims(x_train,axis=3) x_test = np.expand_dims(x_test,axis=3) ## Creating sparse vector representation from keras.utils import to_categorical y_train_sparse = to_categorical(y_train) y_test_sparse = to_categorical(y_test) ## Normalization x_train = x_train /255 x_test = x_test/255 # - # # Training varibles # These Training varbles are hyper parameters for neural network training. # `epochs` : each epoch is forward propagation + backward propagation over the whole dataset once is called one epoch. # `learning_rate` : the magnitude in which the weights are modified one the acquired loss. # `learning_rate_decay` : there can be high leanring rate at the beining of the training when the loss is high. Over a period of time the learning rate can reduce for fine training of network. # `batch_size` : the data is fed to the network in batches of 32 samples at each time. This batch feeding is done all over the whole dataset. # Training varibles learning_rate = 0.001 learning_rate_decay = 0.00001 batch_size = 32 epochs = 20 classes = 10 # # Neural Netowork Model # `Line 6` : we are building a keras sequential model # `Line 32` : we are using stochastic gradient decent optimizer # `Line 36` : compiling the model to check if the model is build properly. # # The loss function being used is `categorical_crossentropy` since its a multi class classification # # + # Building the models from keras.models import Sequential from keras.layers import Conv2D,MaxPooling2D,Activation,Dropout,Flatten,Dense from keras.optimizers import SGD model = Sequential() # Layer 1 model.add(Conv2D(filters=128, kernel_size=(3,3), strides=(1, 1), padding='valid', data_format="channels_last", input_shape=image_shape)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(rate=0.3)) # Layer 2 model.add(Conv2D(filters = 64, kernel_size=(3,3), strides=(1, 1), padding='valid')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(rate=0.3)) # Layer 3 model.add(Flatten(data_format="channels_last")) model.add(Dense(512)) model.add(Activation('relu')) model.add(Dropout(rate=0.25)) # Layer 4 model.add(Dense(10,activation="softmax")) sgd_optimizer = SGD(lr = learning_rate, decay = learning_rate_decay) model.compile(optimizer=sgd_optimizer, loss='categorical_crossentropy', metrics=['accuracy']) # - # # Training # Training is the process of feeding the data to neural network and modifiying the weights of the model using the the backpropagation algorithm. 
The backpropagation using loss the function acquires the loss over batch size of data and does a backpropagation to modify the weights in such a way the in the next epoch the loss would be less when compared to the current epoch # Training the mode model_history = model.fit(x=x_train, y=y_train_sparse, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test,y_test_sparse), shuffle=True) # # Results # using the trained model we try to predict what are the values of images in the test set # Results y_pred = model.predict(x_test, batch_size = batch_size, verbose=1) # # Verifying the results # cheking the results how good they are with the first 10 samples. # Plotting the graphs of test and train set accuracies and loss values. # > NOTE: This plot is a very curicial step. These plots would tell us how good the model converges and if there is any overfitting # + # Verifying the results print("Ground truths of first 10 images in test set",np.array(y_test[0:10])) print("Predicted values of first 10 image in test set",np.argmax(y_pred[0:10],axis=1)) loss = model_history.history['loss'] val_loss = model_history.history['val_loss'] plt.plot(loss,label='train') plt.plot(val_loss,label='test') plt.title('loss Graph') plt.ylabel('precentage') plt.xlabel('epochs') plt.legend() plt.show() acc = model_history.history['acc'] val_acc = model_history.history['val_acc'] plt.plot(acc,label='train') plt.plot(val_acc,label='test') plt.title('Accuracy Graph') plt.ylabel('precentage') plt.xlabel('epochs') plt.legend() plt.show() # - # # Visulizing the results # checking the results by visulizing them and creating a confusion matrix. The values of precession and accuracy can be obtained by the help of confusion matrix and f1 scores to compare this architecure with other architectures of neural networks # + # Visulizing the results y_pred = np.argmax(y_pred,axis=1) y_pred = pd.Series(y_pred,name = "predicted") y_test = pd.Series(y_test,name = "Actual") df_confusion = pd.crosstab(y_test,y_pred) df_confusion.columns = [i for i in list(fashion_mnist_labels_dict.values())] df_confusion.index = [i for i in list(fashion_mnist_labels_dict.values())] print(df_confusion) plt.figure(figsize = (20,20)) plt.title('Confusion Matrix',fontsize=20) sns.heatmap(df_confusion, annot=True,fmt="d") plt.xlabel('Predicted', fontsize=18) plt.ylabel('Actaul', fontsize=18) # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python2 # name: python2 # --- # n-мерное нормальное распределение имеет вид: # # $p(x) = \frac{1}{\sqrt{(2\pi)^n|\Sigma|}}exp(-\frac{(x-\mu)^T\Sigma^{-1}(x-\mu)}{2})$, где # # $\mu_i = Ex_i$ # # $\Sigma_{ij} = E(x_i - \mu_i)(x_j-\mu_j) = Ex_ix_j - \mu_i\mu_j$, $\Sigma = Exx^T - \mu\mu^T$ # В непрервыном случае энтропия считается, как # # $Entropy = \int_{R} -p(x)ln(p(x))dx = - Eln(\frac{1}{\sqrt{(2\pi)^n|\Sigma|}}) + E\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu) = \frac{1}{2}ln((2\pi)^n|\Sigma|) + \frac{1}{2}E(x-\mu)^T\Sigma^{-1}(x-\mu)$ # # # Найдём $E(x-\mu)^T\Sigma^{-1}(x-\mu)$: # # $E(x-\mu)^T\Sigma^{-1}(x-\mu) = Ex^T\Sigma^{-1} x - Ex^T\Sigma^{-1}\mu - E\mu^T\Sigma^{-1} x + \mu^T\Sigma^{-1}\mu = Ex^T\Sigma^{-1} x - \mu^T\Sigma^{-1}\mu$ # Под знаком матожидания стоит число (матрица 1x1), значит: # # $Ex^T\Sigma^{-1} x - \mu^T\Sigma^{-1}\mu = E(trace(x^T\Sigma^{-1}x) - trace(\mu^T\Sigma^{-1}\mu) = trace(\Sigma^{-1}xx^T) # - 
trace(\Sigma^{-1}\mu\mu^T) =\\= trace(\Sigma^{-1}(Exx^T - \mu\mu^T)) = trace(\Sigma^{-1}\Sigma) = trace(I) = n$ # $Entropy = \frac{1}{2}(ln((2\pi)^n|\Sigma|) + n) = \frac{1}{2}ln((2\pi e)^n|\Sigma|)$, что и требовалось доказать. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ``` # further: # scaler.fit(), scaler.fit_transform(), get X_scalered with pipeline, scalar and scaler # clf.support_vectors_, loss='hinge' # ``` # ``` # 1. Linear SVM Classification # 2. Nonlinear SVM Classification # 3. SVM Regression # 4. Under the Hood # ``` # ## 1. SVM Classification: Linear import numpy as np import matplotlib.pyplot as plt plt.style.use('tableau-colorblind10') # + from sklearn.datasets import load_iris iris = load_iris() X = iris['data'][:, 2:] y = iris['target'] y_2 = (y == 2).astype('int') plt.scatter(X[:, 0], X[:, 1], c=y_2, cmap='tab20b', alpha=0.5) plt.colorbar() plt.show() ##'setosa', 'versicolor', 'virginica' ## petal length, petal widt # + ## svc = SVC(kernel='linear'), svc.support_vectors_ ## svc = LinearSVC(), no # + from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.svm import LinearSVC from sklearn.svm import SVC scaler_svc = Pipeline([ ('scaler', StandardScaler()), ('linear_svc', LinearSVC(C=10, loss='hinge') ) ]) scaler = StandardScaler() X_scalered = scaler.fit_transform(X) linear_svc = SVC(kernel='linear', C=10) linear_svc.fit(X_scalered, y_2) plt.scatter(X_scalered[:, 0], X_scalered[:, 1], c=y_2, cmap='tab20b', alpha=0.5) ax = plt.gca() xlim = ax.get_xlim() ylim = ax.get_ylim() xx = np.linspace(xlim[0], xlim[1], 50) yy = np.linspace(ylim[0], ylim[1], 50) XX, YY = np.meshgrid(xx, yy) xy = np.c_[XX.ravel(), YY.ravel()] Z = linear_svc.decision_function(xy).reshape(XX.shape) ax.contour(XX, YY, Z, levels=[-2, 0, 2], colors='k', alpha=0.6, linestyles=['--', '-', '--']) plt.show() ## np.c_: add column, np.vstack().T: stack vertically # - # ## 2. SVM Classification: Nonlinear(Polynomial) from sklearn.datasets import make_moons from sklearn.preprocessing import PolynomialFeatures X, y = make_moons(noise=0.15) X.shape, y.shape ## a simple toy dataset: default n_samples = 100, noise = 0 plt.scatter(X[:, 0], X[:, 1], c=y, cmap='tab20b', alpha=0.5) plt.show() poly_features = PolynomialFeatures(degree=3, include_bias=False) X_poly = poly_features.fit_transform(X) plt.scatter(X_poly[:, 0], X_poly[:, 2], c=y, cmap='tab20b', alpha=0.5) plt.show() poly_svc = Pipeline([ ('poly', PolynomialFeatures(degree=3)), ('scaler', StandardScaler()), ('linear_svc', LinearSVC(C=10, loss='hinge')) ]) poly_kernel_svc = Pipeline([ ('scaler', StandardScaler()), ('poly_svc', SVC(kernel='poly', degree=3, C=10, coef0=1)) ]) # + ## similarity features ## add new features to seperate data ## rbf kernel: guassian radial basis function ## string kernel: for DNA sequences or text documents # - rbf_kernel_scv = Pipeline([ ('scaler', StandardScaler()), ('rbf_svc', SVC(kernel='rbf', C=0.001, gamma=5)) ]) # ## 3. SVM Regression # + ## SVM for outlier detection # - # ## 4. 
Under the Hood # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="NOJei8t7O0AN" colab_type="code" colab={} from sklearn.model_selection import train_test_split from sklearn.preprocessing import LabelBinarizer from tensorflow.keras import Input from tensorflow.keras.datasets import mnist from tensorflow.keras.layers import Dense from tensorflow.keras.models import Model from tensorflow.keras.models import Sequential # + id="heB1liYuPOsp" colab_type="code" colab={} layers = [Dense(256, input_shape=(28 * 28 * 1,), activation='sigmoid'), Dense(128, activation='sigmoid'), Dense(10, activation='softmax')] sequential_model_list = Sequential(layers) # + id="HqqnZQChPZRw" colab_type="code" colab={} sequential_model = Sequential() sequential_model.add(Dense(256, input_shape=(28 * 28 * 1,), activation='sigmoid')) sequential_model.add(Dense(128, activation='sigmoid')) sequential_model.add(Dense(10, activation='softmax')) # + id="cMNXw68APjPa" colab_type="code" colab={} input_layer = Input(shape=(28 * 28 * 1,)) dense_1 = Dense(256, activation='sigmoid')(input_layer) dense_2 = Dense(128, activation='sigmoid')(dense_1) predictions = Dense(10, activation='softmax')(dense_2) functional_model = Model(inputs=input_layer, outputs=predictions) # + id="5MmNOcVhPxVX" colab_type="code" colab={} class ClassModel(Model): def __init__(self): super(ClassModel, self).__init__() self.dense_1 = Dense(256, activation='sigmoid') self.dense_2 = Dense(256, activation='sigmoid') self.predictions = Dense(10, activation='softmax') def call(self, inputs, **kwargs): x = self.dense_1(inputs) x = self.dense_2(x) return self.predictions(x) class_model = ClassModel() # + id="TYzFcWvVQfDq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="843119c0-51fd-4f26-cc5b-56380c1c8a2d" (X_train, y_train), (X_test, y_test) = mnist.load_data() X_train = X_train.reshape((X_train.shape[0], 28 * 28 * 1)) X_test = X_test.reshape((X_test.shape[0], 28 * 28 * 1)) X_train = X_train.astype('float32') / 255.0 X_test = X_test.astype('float32') / 255.0 # + id="_29veag5Qsd_" colab_type="code" colab={} label_binarizer = LabelBinarizer() y_train = label_binarizer.fit_transform(y_train) y_test = label_binarizer.fit_transform(y_test) # + id="VEJuso1XQxvs" colab_type="code" colab={} X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, train_size=0.8) # + id="BeOorUL4Q61R" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="33fc3e46-4274-4370-ef21-5b97d4bab902" models = { 'sequential_model': sequential_model, 'sequential_model_list': sequential_model_list, 'functional_model': functional_model, 'class_model': class_model } for name, model in models.items(): print(f'Compiling model: {name}') model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) print(f'Training model: {name}') model.fit(X_train, y_train, validation_data=(X_valid, y_valid), epochs=50, batch_size=256, verbose=0) accuracy = model.evaluate(X_test, y_test, verbose=0) print(f'Testing model: {name}. 
\nAccuracy: {accuracy}') print('---') # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.4.1 # language: julia # name: julia-1.4 # --- # # Aproximação e Interpolação # # Como aproximar um conjunto de dados usando uma sequência de funções simples? # # * Polinômios # * Interpolação - Introdução # * Interpolação de Lagrange # * Método dos mínimos quadrados # * Códigos # # Introdução # # Qual o objetivo? # # 1. Modelar dados experimentais # 2. Aproximar uma função complexa usando funções mais simples # # Dada uma função $u(x)$ queremos aproximá-la usando um conjunto de funções que são, de alguma, forma convenientes: # # $$ # u \approx u^\delta (x) = \sum_{i=1}^N \hat{u}_i \phi_i(x) # $$ # # # Porque queremos fazer isso? # # * Funções muito complexas # * Funções que não são conhecidas explicitamente (algumas funções especiais) # * Queremos desempenho na hora de calcular as funções: praticamente todas as funções básicas são aproximadas utilizando polinômios (muitas vezes polinômios racionais) # * Quando conhecemos apenas alguns pontos da função: sempre que você tiver um experimento ou estiver processando dados de uma sequência de simulações numéricas. # # # # methods(sin) # ## Interpolação # # É lógico que, em geral, a aproximação é apenas uma aproximação! Então existe um erro: # # $$ # u(x) - u^\delta(x) = \varepsilon(x) # $$ # # A idéia é minimizar este erro de alguma forma. # # Uma possibilidade é escolher $N$ pontos $x_i$ e impor que o erro é zero nestes pontos: # # $$ # \varepsilon(x_i) = 0 \qquad i=1, \ldots, N # $$ # # Esta abordagem é conhecida na literatura como colocação. Se a função $u(x)$ é conhecida apenas por alguns pontos $(x_i, u_i)$ isto é chamado de *interpolação*. # # Isto resulta num sistema de equações lineares: # # $$ # \left(\begin{matrix} # \phi_1(x_1) & \phi_2(x_1) & \cdots & \phi_N(x_1) \\ # \phi_1(x_2) & \phi_2(x_2) & \cdots & \phi_N(x_2) \\ # \vdots & \vdots & \ddots & \vdots \\ # \phi_1(x_N) & \phi_2(x_N) & \cdots & \phi_N(x_N) \\ # \end{matrix}\right) # \cdot\left(\begin{matrix} \hat{u}_1 \\ \hat{u}_2 \\ \vdots \\ \hat{u}_N \end{matrix} # \right) # = \left(\begin{matrix} u(x_1) \\ u(x_2) \\ \vdots \\ u(x_N) \end{matrix} # \right) # $$ # Esta é a matriz de Vandermonde. # # A escolha de funções $\phi_i(x)$ adequadas simplificam a solução do problema. Se, por exemplo, $\phi_i(x_k) = \delta_{ik}$ onde $\delta$, neste caso, é o delta de Kronecker, a matriz é diagonal. # # ## Mínimos quadrados # Por outro lado, podemos, para uma sequência de pontos minimizar o erro quadrático total: # # $$ # R(\hat{u}_1, \ldots, \hat{u}_N) = \sum_{i=1}^Q \left[ u(x_i) - u^\delta(x_i)\right]^2 # $$ # onde $Q$ é o número de pontos. Esta operação se chama método dos mínimos quadrados. 
# # Chamando $u(x_i) = u_i$, para minimizar este resíduo (erro quadrático total), basta derivar e igualar a zero, assim, chega-se ao seguinte sistema de equações lineares: # # $$ # \left( # \begin{matrix} # \sum_{i=1}^Q \phi_1(x_i)\cdot\phi_1(x_i) & # \cdots & # \sum_{i=1}^Q \phi_1(x_i)\cdot\phi_N(x_i) \\ # \sum_{i=1}^Q \phi_2(x_i)\cdot\phi_1(x_i) & # \cdots & # \sum_{i=1}^Q \phi_2(x_i)\cdot\phi_N(x_i) \\ # \vdots & \ddots & \vdots \\ # \sum_{i=1}^Q \phi_N(x_i)\cdot\phi_1(x_i) & # \cdots & # \sum_{i=1}^Q \phi_N(x_i)\cdot\phi_N(x_i) \\ # \end{matrix}\right) # \cdot # \left(\begin{matrix} \hat{u}_1 \\ \hat{u}_2 \\ \vdots \\ \hat{u}_N \end{matrix}\right) # = # \left(\begin{matrix} \sum_{i=1}^Q u_i \phi_1(x_i) \\ \sum_{i=1}^Q u_i \phi_2(x_i) \\ \vdots \\ \sum_{i=1}^Q u_i \phi_N(x_i)\end{matrix}\right) # $$ # # # # # # ## Uma formulação mais geral # # Uma formulação mais geral é escolher funções peso $w_k(x)$ de modo que # $$ # \int_a^b u(x)w_k(x)\:dx = \int_a^b u^\delta (x) w_k(x) \:dx # $$ # # Escolhendo $w_k(x)$ = $\delta(x_k)$ (*Delta* de Dirac), recuperamos a interpolação. Escolhendo $w_k(x) = \phi_k(x)$ temos o método de Galerkin, muito usado no método de elementos finitos. # # Com o método de elementos finitos, a função não necessariamente vai passar pelos pontos, mas o erro será minimizado e pode-se obter uma aproximação melhor. # # Outro ponto a ser considerador é o que ocorre quando as medidas possuem erro e como isso afeta a aproximação. # # Polinômios # # $$ # p_n(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \ldots a_n x^n = \sum_{i=0}^n a_i x^i # $$ # Um polinômio é dado por seus coeficientes $a_i$. # Em Julia, a indexação começa em 1, portanto é conveniente reescrever a definição acima como: # # $$ # p_n(x) = a_1 + a_2 x + a_3 x^2 + a_4 x^3 + \ldots a_n x^{n-1} + a_{n+1} x^n = \sum_{i=1}^{n+1} a_i x^{i-1} # $$ # # function polyval1(a, x) n = length(a) y = 0.0 for i in 1:n y += a[i] * x^(i-1) end return y end # ### Método de Horner # # Maneira mais eficiente e mais precisa de calcular o valor de um polinômio: # # $$ # p_n(x) = a_1 + x\left(a_2 + x\left(a_3 + x\left(\ldots + a_{n+1} x\right) \right) \right) # $$ # + function polyval2(a, x) y = a[end] for i = (lastindex(a)-1):-1:1 y = a[i] + x * y end return y end # - a1 = rand(3) a2 = rand(6) a3 = rand(15) a4 = rand(30); using PyPlot using BenchmarkTools x = 0.5 y1 = polyval1(a3, x) y2 = polyval2(a3, x) y1≈y2 @btime polyval1(a1, x) @btime polyval2(a1, x) @btime polyval1(a2, x) @btime polyval2(a2, x) @btime polyval1(a3, x) @btime polyval2(a3, x) typeof(a3) @btime polyval1(a4, x) @btime polyval2(a4, x) # ### E os tipos dos dados??? a5 = rand(-5:5, 7) polyval1(a5, 2) polyval1(a5, 2.0) polyval2(a5, 2) polyval2(a5, 2.0) a6 = [1//2, 2//3, 3//4, 4//5] polyval2(a6, 0.5) polyval2(a6, 1//2) polyval2(a6, 1) @code_warntype polyval2(a6, 0.5) # + function polyval3(a::Vector{T}, x::S) where {T,S} R = promote_type(T,S) y = convert(R, a[end]) for i = (lastindex(a)-1):-1:1 y = a[i] + x * y end return y end # - @code_warntype polyval3(a6, 0.5) @btime polyval2(a6, 0.5+0.2im) @btime polyval3(a6, 0.5+0.2im) # ### Polynomials.jl using Polynomials p = Polynomial([1,2,3], :z) typeof(p) p(1//2 + 3//4im) roots(p) p2 = Polynomial([1,2]) # Raízes roots(p2) integrate(p) derivative(p) degree(p) # ### Macro quando os coeficientes são conhecidos @evalpoly(0.5, 1, 2, 3) @macroexpand @evalpoly(0.5, 1, 2, 3) evalpoly(0.5, [1,2,3]) # # Interpolação # # Dado um conjunto de n pontos $(x_i, y_i)$, qual o poliômio que passa por todos? 
# # $$ # y_i = a_0 + a_1 x_i + a_2 x_i^2 + \ldots a_n x_i^n \qquad i=1, \ldots, m # $$ # ## Vandermonde # # Com n+1 pontos distintos, se o polinômio for de grau n, pode-se montar o seguinte sistema linear: # # $$ # \begin{bmatrix} # 1 & x_0 & x_0^2 & \cdots & x_0^n \\ # 1 & x_1 & x_1^2 & \cdots & x_1^n \\ # \vdots & \vdots & \vdots & \ddots & \vdots\\ # 1 & x_n & x_n^2 & \cdots & x_n^n\\ # \end{bmatrix}\cdot # \left\{ \begin{matrix} a_0 \\ a_1 \\ \vdots \\ a_n\\ \end{matrix}\right\} # = \left\{\begin{matrix} y_0 \\ y_1 \\ \vdots \\ y_n\\\end{matrix}\right\} # $$ # # Mas esta é uma operação cara, $\mathcal{O}(n^3)$! # ## Outra possibilidade # # $$ # y = f(x) \approx a_0 + a_1(x-x_0) + a_2(x-x_0)(x-x_1) + \cdots + a_n(x-x_0)(x-x_1)\cdots(x - x_{n-1}) # $$ # # Com isso chegamos ao seguinte sistema linear triangular: # $$ # \begin{bmatrix} # 1 & 0 & 0 & 0 &\cdots & 0\\ # 1 & (x_1-x_0) & 0 & 0 & \cdots & 0\\ # 1 & (x_2 - x_0) & (x_2 - x_0)(x_2 - x_1) & 0 & \cdots & 0\\ # \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ # 1 & (x_n - x_0) & (x_n - x_0)(x_n - x_1) & (x_n - x_0)(x_n - x_1)(x_n-x_2) & \cdots &(x_n-x_0)(x_n-x_1)\cdots(x_n-x_{n-1})\\ # \end{bmatrix}\cdot\left\{\begin{matrix} a_0\\ a_1 \\ a_2 \\ \vdots \\ a_n\end{matrix}\right\} = # \left\{\begin{matrix} y_0\\ y_1 \\ y_2 \\ \vdots \\ y_n\end{matrix}\right\} # $$ # # Resolver este sistema é muito mais barato: $\mathcal{O}(n^2)$ # ## Interpolação de Lagrange: # # $$ # y(x) = \sum_{i=1}^n y_i h_i(x) # $$ # # onde $h_i(x)$ é o interpolador de Lagrange: # # $$ # h_k(x) = \prod_{i=1\ldots n,}^n \frac{x - x_i}{x_k - x_i} \qquad i\ne k # $$ # # Propriedade: # $$ # h_i(x_j) = \delta_{ij} \quad \text{onde} \quad \delta_{ij} = \left\{\begin{matrix}1, \: i=j \\ 0, i\ne j\\ \end{matrix}\right. # $$ # + function lagrange1(k, z, x) h = 1.0 n = length(z) for i = 1:n if i != k h *= (x - z[i]) / (z[k] - z[i]) end end return h end # - function lagrange2(k, z, x) h = 1.0 n = length(z) for i = 1:(k-1) h *= (x - z[i]) / (z[k] - z[i]) end for i = (k+1):n h *= (x - z[i]) / (z[k] - z[i]) end return h end # + N = 10 x = range(-1.0, 1.0, step=0.2) #x = [cos(k*π/N) for k in 0:N] xx = range(-1.0, 1.0, step=0.005) # - hh = [lagrange2.(k, Ref(x), xx) for k in 1:length(x)]; for i in 1:length(x) plot(xx, hh[i]) end # + plot(xx, hh[5]) axhline(y=0, color="black", linestyle = "--") axhline(y=1, color="black", linestyle = "--") for xv in x axvline(x=xv, color="red", linestyle="--") end # - # ### Vamos organizar a interpolação de Lagrange # + struct Lagrange x::Vector{Float64} y::Vector{Float64} Lagrange(x, y) = new(copy(x), copy(y)) end Base.Broadcast.broadcastable(lgr::Lagrange) = Ref(lgr) function lagrange(k, z, x) h = 1.0 n = length(z) for i = 1:(k-1) h *= (x - z[i]) / (z[k] - z[i]) end for i = (k+1):n h *= (x - z[i]) / (z[k] - z[i]) end return h end function interp(lgr::Lagrange, x) y = lgr.y[1] * lagrange(1, lgr.x, x) for i = 2:length(lgr.x) y += lgr.y[i] * lagrange(i, lgr.x, x) end return y end (lgr::Lagrange)(x) = interp(lgr, x) # - x1 = range(-1, 1, step=0.2) y1 = sin.(π.*x1); xx = range(-1, 1, step=0.005) lgr = Lagrange(x1, y1) yy = sin.(π.*xx); yy1 = lgr.(xx); # + plot(x1, y1, "ro") plot(xx, yy, "r-") plot(xx, yy1, "b--") # - # ## Interpolação Linear # # # + x2 = range(-1, 1, step=0.2) y2 = sin.(π.*x2); plot(x2, y2, "ro") plot(x2, y2, "b-") # + struct LinearInterp x::Vector{Float64} y::Vector{Float64} LinearInterp(x, y) = new(copy(x), copy(y)) end Base.Broadcast.broadcastable(lin::LinearInterp) = Ref(lin) function interp1(lin::LinearInterp, x) if x < 
lin.x[1] || x > lin.x[end] error("Fora do Range") end index = 2 n = length(lin.x) for i = 2:n if lin.x[i] >= x index = i break end end i1 = index-1 return lin.y[i1] + (lin.y[index] - lin.y[i1]) * (x - lin.x[i1]) / (lin.x[index] - lin.x[i1]) end (lin::LinearInterp)(x) = interp1(lin, x) # - lin = LinearInterp(x2, y2) yy2 = interp1.(lin, xx); plot(x2, y2, "ro") plot(xx, yy2, "b-") @btime yy2 .= interp1.(lin, xx); # # Mínimos quadrados # function linfit(x,y) sx = sum(x) sx2 = sum(x->x^2, x) N = length(x) sy = sum(y) syx = sum(x[i]*y[i] for i in 1:N) return [N sx; sx sx2] \ [sy; syx] end x = 1.0:20 y = 2 .*x .+ 3.0 linfit(x,y) # # Pacotes # # * [Interpolations](https://github.com/JuliaMath/Interpolations.jl) # * [Dierckx](https://github.com/kbarbary/Dierckx.jl) # * [GridInterpolations](https://github.com/sisl/GridInterpolations.jl) # ## Exercícios # # ### Problema 1 # # Interpole a função de Runge com $-1 \le x \le 1$: # $$ # f(x) = \frac{1}{1 + 25x^2} # $$ # # 1. Use 11 pontos uniformemente distribuídos # 2. Aumente o número de pontos # 3. Tente usar os pontos $x_k = \cos\left(\frac{k\pi}{N}\right)$ para $k = 0\ldots N$. # 4. Brinque com o número de pontos # # ### Problema 2 # # Procure na Net o método de diferenças divididas de Newton a interpole a função anterior nos mesmos pontos. Este método é simplesmente um jeito inteligente de resolver a matriz apresentada lá em cima. # # ### Problema 3 # # Use a biblioteca Interpolations.jl e Dierckx.jl para fazer as interpolações. Compare a interpolação linear com os splines. # # ### Problema 4 # # Crie funções para fazer os seguintes problemas de mínimos quadrados: # * $y = a_0 x^ a_1$ # * $y = a_0 \exp \left( a_1 \cdot x\right)$ # * Polinômio genérico de ordem n # f(x) = 1.0 / (1.0 + 25x^2) xx = -1:0.01:1 yy = f.(xx); plot(xx, yy) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import matplotlib.pyplot as plt data = pd.read_csv("../countries.csv") data.head() # + # Calculates GDP per Country data['gdpTotal'] = data.population * data.gdpPerCapita # Get the top 10 countries per GDP country_gdp = data.sort_values('gdpTotal', ascending=False) top_10_gdp = country_gdp[country_gdp.year == 2007].head(10) # Sumarize their names for usage in the for loop top_10_gdp = set(top_10_gdp.country) # + # Sort the data by Year data.sort_values('year', ascending=True) legend = [] for country in set(top_10_gdp): country_data = data[data.country == country] plt.plot(country_data.year, country_data.gdpTotal / country_data.gdpTotal.iloc[0], label=country) legend.append(country) # Change the figure (graph) zise fig = plt.gcf() fig.set_size_inches(20, 12) plt.legend(legend) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + from tensorflow import keras from sklearn.metrics import accuracy_score import os import pickle def load_data(): X_train = pickle.load(open("X_train.pickle", "rb")) X_test = pickle.load(open("X_test.pickle", "rb")) y_train = pickle.load(open("y_train.pickle", "rb")) y_test = pickle.load(open("y_test.pickle", "rb")) return X_train, X_test, y_train, y_test X_train, X_test, y_train, y_test = load_data() model = keras.models.load_model("model_save") # - def 
predictions(model, data): preds = [] for each in model.predict(data): preds.append(1 if each > 0.5 else 0) return preds # + import cv2 as cv testing_dir = "dataset/data/test" testing_images = [] for image_name in os.listdir(testing_dir): image_path = os.path.join(testing_dir, image_name) image = cv.imread(image_path, 0) / 255.0 image = cv.resize(image, (64, 64)) testing_images.append([image_name, image]) # - preds = [] for image in testing_images: name = image[0] pred = 1 if model.predict(image[1].reshape(1, 64, 64, -1)) > 0.5 else 0 prediction = "Hindi" if pred == 1 else "Background" print(f"Prediction for image {name} is {prediction}") preds.append([name, pred]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # #### BAD def grow_list(val, my_list=[]): my_list.append(val) return my_list my_list = [] my_list = grow_list(42, my_list) my_list = grow_list(42, my_list) my_list = grow_list(42, my_list) print(my_list) # + my_list2 = grow_list(42) print(my_list2) my_list3 = grow_list(43) print(my_list3) # - # ### GOOD def grow_list(val, my_list=None): if my_list: my_list.append(val) else: my_list = [val] return my_list # + tags=[] my_list2 = grow_list(42) print(my_list2) my_list3 = grow_list(43) print(my_list3) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="5oVT8zjjMjZj" # # TP5 # + id="-ig3zoccMqSH" from sklearn.decomposition import PCA from sklearn.cluster import KMeans import matplotlib % matplotlib inline import matplotlib.pyplot as plt import numpy as np from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV # + id="VjY-3SXmNTx6" file_path="/content/drive/My Drive/ROB311/" train_data = np.genfromtxt(file_path+"optdigits.tra", delimiter=',') test_data=np.genfromtxt(file_path+"optdigits.tes",delimiter=",") # + id="S-MChhp4Onn-" outputId="8756a85f-0f85-4457-858f-e5698bdf681f" colab={"base_uri": "https://localhost:8080/", "height": 51} print("train data shape",train_data.shape) print("test data shape",test_data.shape) # + id="WvZ9_5RPO1oL" outputId="01c1208a-a15e-41db-c921-1c1132da8b6e" colab={"base_uri": "https://localhost:8080/", "height": 51} # split the data into the features and attribute x_train=train_data[:,:-1] y_train=train_data[:,-1] x_test=test_data[:,:-1] y_test=test_data[:,-1] print("train features: {}, train label: {}".format(x_train.shape,y_train.shape)) print("test features: {}, test label: {}".format(x_test.shape,y_test.shape)) # + id="bo44ThSkPxQO" # create the pipeline pca = PCA(n_components=5) kmeans=KMeans(n_clusters=10) pipe = Pipeline(steps=[('pca', pca), ('kmeans', kmeans)]) # + id="5VZT9L2WSkeo" # prediction pipe.fit(x_train) y_pred=pipe.predict(x_test) # + [markdown] id="wcljpO1TlKaH" # # project the cluster to labels and evaluate it with classification metric # + id="pzYymNcV3R0C" def get_class_name(nums): counts = np.bincount(nums) return np.argmax(counts) # + id="M817Uz9kqNvC" # evaluation # find the projection between predicted attributes and annotations projected_pred=y_pred.copy() for i in range(10): pred=y_pred[y_test==i] c_index=get_class_name(pred) projected_pred[y_pred==c_index]=i # + id="AwIXCGbUc35u" outputId="0ef7f454-ed1a-46c1-eacb-01553dca52bc" colab={"base_uri": "https://localhost:8080/", 
"height": 34} metrics.accuracy_score(y_test,projected_pred) # + [markdown] id="yBIQzriqk_B4" # # find the most appropriate parameter # + id="g0uEsMvDqAvR" # search and find the most appropriate ncomponents component_range=[5,15,25,35,45,55,64] accuracy_score=[] for c in component_range: pca = PCA(n_components=c) kmeans=KMeans(n_clusters=10) pipe = Pipeline(steps=[('pca', pca), ('kmeans', kmeans)]) # prediction pipe.fit(x_train) y_pred=pipe.predict(x_test) # evaluation # find the projection between predicted attributes and annotations projected_pred=y_pred.copy() for i in range(10): pred=y_pred[y_test==i] c_index=get_class_name(pred) projected_pred[y_pred==c_index]=i accuracy_score.append(metrics.accuracy_score(y_test,projected_pred)) # + id="n3ncXYO9eYTe" outputId="7eeb3475-c17a-4d6c-d785-634f0f366b53" colab={"base_uri": "https://localhost:8080/", "height": 136} accuracy_score # + id="eHBE8v4wemv7" outputId="3fcb23cf-19b6-41d4-e384-5c2d4274b01e" colab={"base_uri": "https://localhost:8080/", "height": 417} # draw the accuracy score figure plt.figure(figsize=(8,6)) plt.plot(component_range,accuracy_score,"-o") plt.ylabel("accuracy",fontsize=18) plt.xlabel("n_components",fontsize=18) plt.title("Accuracy Score",fontsize=20) plt.show() # + id="Pg-JfTgzraI8" outputId="18814142-d3b7-4cff-eb48-ad6384e34a1a" colab={"base_uri": "https://localhost:8080/", "height": 105} from sklearn import metrics print('init\t\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette') print('%-9s\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f' % ("kmeans", kmeans.inertia_, metrics.homogeneity_score(y_test, y_pred), metrics.completeness_score(y_test, y_pred), metrics.v_measure_score(y_test, y_pred), metrics.adjusted_rand_score(y_test, y_pred), metrics.adjusted_mutual_info_score(y_test, y_pred), metrics.silhouette_score(y_test.reshape(-1,1), y_pred.reshape(-1,1), metric='euclidean', sample_size=300))) # + [markdown] id="RavoA0bgk2GK" # # visualize the results # + id="zqo2KpQUgUrM" outputId="c3a80ac0-22af-4e18-8019-4f172764f7cf" colab={"base_uri": "https://localhost:8080/", "height": 283} # Visualize the results on PCA-reduced data reduced_data = PCA(n_components=2).fit_transform(x_train) # choose the best params kmeans = KMeans(init='k-means++', n_clusters=10, n_init=10) kmeans.fit(reduced_data) # Step size of the mesh. Decrease to increase the quality of the VQ. h = .02 # point in the mesh [x_min, x_max]x[y_min, y_max]. # Plot the decision boundary. For that, we will assign a color to each x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1 y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Obtain labels for each point in mesh. Use last trained model. 
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(1) plt.clf() plt.imshow(Z, interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), cmap=plt.cm.Paired, aspect='auto', origin='lower') plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2) # Plot the centroids as a white X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='w', zorder=10) plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n' 'Centroids are marked with white cross') plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.xticks(()) plt.yticks(()) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- #

Método de Máxima Verossimilhança usando Espectro de Potência
#
# Aluno:, Astroinformática 2021
#
    # Nessa etapa utilizaremos uma simulação de um catalogo de objetos gerados a partir de um modelo de universo baseado nos parametros obtidos pelos resultados de 2018 do Planck. # importando o modulo classy # %matplotlib inline import matplotlib.pyplot as plt import numpy as np from classy import Class from math import pi from nbodykit.lab import * from nbodykit import style, setup_logging from nbodykit.lab import cosmology # %config InlineBackend.figure_format = 'retina' from scipy.interpolate import InterpolatedUnivariateSpline plt.style.use(style.notebook) setup_logging() # + redshift = 0.01 cosmo = cosmology.Cosmology(h=0.67556, T0_cmb=2.7255, Omega0_b = 0.0483,Omega0_cdm = 0.2638,n_s = 0.9619) Plin = cosmology.LinearPower(cosmo, redshift, transfer='EisensteinHu') b1 = 2.0 cat = LogNormalCatalog(Plin=Plin, nbar=3e-4, BoxSize=1380., Nmesh=256, bias=b1, seed=42) # - # add RSD line_of_sight = [0,0,1] cat['RSDPosition'] = cat['Position'] + cat['VelocityOffset'] * line_of_sight # convert to a MeshSource, using TSC interpolation on 256^3 mesh mesh = cat.to_mesh(window='tsc', Nmesh=256, compensated=True, position='RSDPosition') plt.imshow(mesh.preview(axes=[0,1], Nmesh=200)) r = FFTPower(mesh, mode='1d', dk=0.005, kmin=0.01) Pk = r.power # + # print the shot noise subtracted P(k) k = np.logspace(-2, 0, 100) plt.loglog(Pk['k'], Pk['power'].real - Pk.attrs['shotnoise']) plt.loglog(k, Plin(k), c='k') # format the axes plt.xlabel(r"$k$ [$h \ \mathrm{Mpc}^{-1}$]") plt.ylabel(r"$P(k)$ [$h^{-3}\mathrm{Mpc}^3$]") plt.xlim(0.01, 0.6) # - power = Pk['power'].real - Pk.attrs['shotnoise'] Knovo = Pk['k'] print(type(power)) print(type(Knovo)) import scipy Pknovo = scipy.interpolate.interp1d(Knovo, power) # + # print the shot noise subtracted P(k) k = np.logspace(-2, 0, 100) plt.loglog(Pk['k'], Pk['power'].real - Pk.attrs['shotnoise'] ,c='k') plt.loglog(k, Plin(k), c='b') #plt.loglog(Knovo, Pknovo(Knovo), c='r') # format the axes plt.xlabel(r"$k$ [$h \ \mathrm{Mpc}^{-1}$]") plt.ylabel(r"$P(k)$ [$h^{-3}\mathrm{Mpc}^3$]") plt.xlim(0.01, 0.6) # - def NewSigma8(Pk, k,r=8, kmin=1e-5, kmax=1e1): import mcfit from scipy.interpolate import InterpolatedUnivariateSpline as spline k = numpy.logspace(numpy.log10(kmin), numpy.log10(kmax), len(Pk)) #Pk = self(k) R, sigmasq = mcfit.TophatVar(k, lowring=True)(Pk, extrap=True) return spline(R, sigmasq)(r)**0.5 Nbins = 20 SigmaMesh=NewSigma8(power,Knovo) print(SigmaMesh) len(power) OmegaB = np.linspace(0.011, 0.06, Nbins) logL = np.empty(Nbins) sigmaSimulados = np.empty(Nbins) for m in range(Nbins): cosmo = cosmology.Cosmology(h=0.67556, T0_cmb=2.7255, Omega0_b = OmegaB[m],Omega0_cdm = 0.2638,n_s = 0.9619) Plin = cosmology.LinearPower(cosmo, redshift, transfer='CLASS') sigmaSimulados[m] = Plin.sigma_r(r=8) ErroSigma = Plin.sigma_r(r=8)/10 logL[m] = -np.sum((0.5 * ((Plin.sigma_r(r=8) - SigmaMesh ) / ErroSigma) ** 2)) #Buscando qual foi o valor de OmegaB que determinou o máximo da Likelihood loc = np.where(logL == np.max(logL)) #print(loc) #print(sigmaSimulados[loc]) print("O valor de Omega Barions que melhor ajusta os dados é {}".format(OmegaB[loc])) print("E o besfit de sigma8 é {}".format(sigmaSimulados[loc])) plt.axvline(x = sigmaSimulados[loc], color = 'b', label = 'Besfit Sigma8 ') plt.plot(sigmaSimulados, logL, c='k') plt.xlabel(r'$\sigma_{8}$') plt.ylabel(r'$\mathcal{L}$') plt.legend() #Usando a mesma ídeia para restringir H0 utilizando um Omega0_b maior (0.06) hSample = np.linspace(0.4, 0.9, Nbins) logL = np.empty(Nbins) sigmaSimulados = np.empty(Nbins) for m in 
range(Nbins): cosmo = cosmology.Cosmology(h=hSample[m], T0_cmb=2.7255, Omega0_b = 0.0483,Omega0_cdm = 0.2638,n_s = 0.9619) Plin = cosmology.LinearPower(cosmo, redshift, transfer='CLASS') sigmaSimulados[m] = Plin.sigma_r(r=8) ErroSigma = Plin.sigma_r(r=8)/10 logL[m] = - np.sum(0.5 * ((Plin.sigma_r(r=8) - SigmaMesh ) / ErroSigma) ** 2) #Buscando qual foi o valor de h que determinou o máximo da Likelihood loc = np.where(logL == np.max(logL)) #print(loc) #print(sigmaSimulados[loc]) print("O valor de h que melhor ajusta os dados é {}".format(hSample[loc])) print("E o besfit de sigma8 é {}".format(sigmaSimulados[loc])) plt.axvline(x = sigmaSimulados[loc], color = 'b', label = 'Besfit Sigma8 ') plt.plot(sigmaSimulados, logL, c='k') plt.xlabel(r'$\sigma_{8}$') plt.ylabel(r'$\mathcal{L}$') plt.legend() # + #Tentando fitar Omega_b para o mesmo sigma8 obtido através da simulação porém utilizando uma cosmologia com H0 = 73 # - OmegaB = np.linspace(0.011, 0.06, Nbins) logL = np.empty(Nbins) sigmaSimulados = np.empty(Nbins) for m in range(Nbins): cosmo = cosmology.Cosmology(h=0.73, T0_cmb=2.7255, Omega0_b = OmegaB[m],Omega0_cdm = 0.2638,n_s = 0.9619) Plin = cosmology.LinearPower(cosmo, redshift, transfer='CLASS') sigmaSimulados[m] = Plin.sigma_r(r=8) ErroSigma = Plin.sigma_r(r=8)/10 logL[m] = -np.sum((0.5 * ((Plin.sigma_r(r=8) - SigmaMesh ) / ErroSigma) ** 2)) #Buscando qual foi o valor de OmegaB que determinou o máximo da Likelihood loc = np.where(logL == np.max(logL)) #print(loc) #print(sigmaSimulados[loc]) print("O valor de Omega Barions que melhor ajusta os dados é {}".format(OmegaB[loc])) print("E o besfit de sigma8 é {}".format(sigmaSimulados[loc])) plt.axvline(x = sigmaSimulados[loc], color = 'b', label = 'Besfit Sigma8 ') plt.plot(sigmaSimulados, logL, c='k') plt.xlabel(r'$\sigma_{8}$') plt.ylabel(r'$\mathcal{L}$') plt.legend() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: dev # language: python # name: dev # --- import numpy as np from sklearn.datasets import load_iris from sklearn.metrics.pairwise import euclidean_distances as skeuclidean_distances def euclidean_distances(X, Y=None): XX = np.sum(np.square(X), axis=1)[:, np.newaxis] if Y is None: YY = XX.T XY = np.dot(X, X.T) else: YY = np.sum(np.square(Y), axis=1)[np.newaxis, :] XY = np.dot(X, Y.T) dist = -2 * XY + XX + YY dist[dist < 0] = 0 return np.sqrt(dist) X, _ = load_iris(return_X_y=True) ans1 = euclidean_distances(X) ans2 = skeuclidean_distances(X) assert np.allclose(ans1, ans2, atol=1e-6) ans1 = euclidean_distances(X[:100], X[100:]) ans2 = skeuclidean_distances(X[:100], X[100:], squared=False) assert np.allclose(ans1, ans2) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="RWtqpFWDmntd" colab_type="code" colab={} # !cat /proc/cpuinfo # + id="uv0F7a6xa8wF" colab_type="code" outputId="821679a4-be81-4988-b1e4-24cc92415c2c" executionInfo={"status": "ok", "timestamp": 1591248164511, "user_tz": -540, "elapsed": 26095, "user": {"displayName": "\uc2e0\uc6a9\uc2b9", "photoUrl": "", "userId": "09296461578988112178"}} colab={"base_uri": "https://localhost:8080/", "height": 124} # This cell imports the drive library and mounts your Google Drive as a VM local drive. 
You can access to your Drive files # using this path "/content/gdrive/My Drive/" from google.colab import drive drive.mount('/content/gdrive') # + id="Tm-0mZQPc-tH" colab_type="code" colab={} # Not Necessary cell # List the content of your local computer folder # !ls -la "/content/gdrive/My Drive/yolo/darknet" # + id="TOeRpBV_f1sZ" colab_type="code" outputId="000730cb-f684-4942-b3c7-f5b4875ed8b8" executionInfo={"status": "ok", "timestamp": 1590555079391, "user_tz": -540, "elapsed": 8447, "user": {"displayName": "\uc2e0\uc6a9\uc2b9", "photoUrl": "", "userId": "09296461578988112178"}} colab={"base_uri": "https://localhost:8080/", "height": 418} # !sudo apt-get install tree # + id="SQ8NLU4Jf6lV" colab_type="code" colab={} # !tree /content/gdrive/My\ Drive/yolo/darknet/ # + id="nHQ6Zlo-f-GV" colab_type="code" outputId="98013aeb-f113-40d7-bc4e-c8d0493f4f44" executionInfo={"status": "ok", "timestamp": 1589969481792, "user_tz": -540, "elapsed": 3216, "user": {"displayName": "\uc2e0\uc6a9\uc2b9", "photoUrl": "", "userId": "09296461578988112178"}} colab={"base_uri": "https://localhost:8080/", "height": 86} # This cell can be commented once you checked the current CUDA version # CUDA: Let's check that Nvidia CUDA is already pre-installed and which version is it. In some time from now maybe you # !/usr/local/cuda/bin/nvcc --version # + id="qXirnSvJgN1k" colab_type="code" outputId="985ea48f-7b54-466e-f205-7b3c93c9c25c" executionInfo={"status": "ok", "timestamp": 1589969489090, "user_tz": -540, "elapsed": 6841, "user": {"displayName": "\uc2e0\uc6a9\uc2b9", "photoUrl": "", "userId": "09296461578988112178"}} colab={"base_uri": "https://localhost:8080/", "height": 121} # We're unzipping the cuDNN files from your Drive folder directly to the VM CUDA folders # !tar -xzvf gdrive/My\ Drive/darknet/cuDNN/cudnn-10.0-linux-x64-v7.5.0.56.tgz -C /usr/local/ # !chmod a+r /usr/local/cuda/include/cudnn.h # Now we check the version we already installed. 
Can comment this line on future runs # !cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2 # + id="na5XeB8Mgy3s" colab_type="code" outputId="4c9a7af1-c5e6-4e9f-af6c-ff392763cf8b" executionInfo={"status": "ok", "timestamp": 1589969129460, "user_tz": -540, "elapsed": 17588, "user": {"displayName": "\uc2e0\uc6a9\uc2b9", "photoUrl": "", "userId": "09296461578988112178"}} colab={"base_uri": "https://localhost:8080/", "height": 121} # !git clone https://github.com/kriyeng/darknet/ # + id="rZ8Exb80i_Nq" colab_type="code" outputId="19c1271f-438c-4120-83fe-99a482f64b7a" executionInfo={"status": "ok", "timestamp": 1590555168747, "user_tz": -540, "elapsed": 1374, "user": {"displayName": "\uc2e0\uc6a9\uc2b9", "photoUrl": "", "userId": "09296461578988112178"}} colab={"base_uri": "https://localhost:8080/", "height": 69} # %cd /content/gdrive/My\ Drive/yolo/darknet/darknet # %pwd # + id="iNsBPf8rg3HL" colab_type="code" colab={} # Check the folder # !ls # I have a branch where I have done the changes commented above # !git checkout feature/google-colab #Compile Darknet # !make #Copies the Darknet compiled version to Google drive # !cp ./darknet /content/gdrive/My\ Drive/yolo/darknet/bin/darknet # + id="wPEJcpPnjdsr" colab_type="code" colab={} def imShow(path): import cv2 import matplotlib.pyplot as plt # %matplotlib inline image = cv2.imread(path) height, width = image.shape[:2] resized_image = cv2.resize(image,(3*width, 3*height), interpolation = cv2.INTER_CUBIC) fig = plt.gcf() fig.set_size_inches(18, 10) plt.axis("off") #plt.rcParams['figure.figsize'] = [10, 5] plt.imshow(cv2.cvtColor(resized_image, cv2.COLOR_BGR2RGB)) plt.show() # + id="6vOzB6xanRMA" colab_type="code" outputId="70fb775d-89bb-451b-eb19-fac53769f3dc" executionInfo={"status": "ok", "timestamp": 1589972666325, "user_tz": -540, "elapsed": 2272657, "user": {"displayName": "\uc2e0\uc6a9\uc2b9", "photoUrl": "", "userId": "09296461578988112178"}} colab={"base_uri": "https://localhost:8080/", "height": 209} # Not necessary cell # Get yolov3 weights # #!wget https://pjreddie.com/media/files/yolov3.weights # + id="Z4CjPlPDjlJv" colab_type="code" colab={} # Not necessary cell # Execute darknet using YOLOv3 model with pre-trained weights to detect objects on 'person.jpg' # !./darknet detect cfg/yolov3.cfg yolov3.weights.1 data/giraffe.jpg -dont-show # Show the result using the helper imgShow() imShow('predictions.jpg') # + id="42ww45zRRS4e" colab_type="code" outputId="d91b314c-acab-4371-82ce-c715ed54a7c7" executionInfo={"status": "ok", "timestamp": 1591248674366, "user_tz": -540, "elapsed": 1199, "user": {"displayName": "\uc2e0\uc6a9\uc2b9", "photoUrl": "", "userId": "09296461578988112178"}} colab={"base_uri": "https://localhost:8080/", "height": 34} # cd "/content/gdrive/My Drive/yolo/darknet" # + id="CsPUMzSyStOj" colab_type="code" outputId="4084b590-9ac9-4970-baed-91e85f149f7f" executionInfo={"status": "ok", "timestamp": 1590559017061, "user_tz": -540, "elapsed": 205755, "user": {"displayName": "\uc2e0\uc6a9\uc2b9", "photoUrl": "", "userId": "09296461578988112178"}} colab={"base_uri": "https://localhost:8080/", "height": 209} # !wget https://pjreddie.com/media/files/yolov3-tiny.weights # + id="LrurE6XZtPfc" colab_type="code" colab={} # !./darknet partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.15 15 # + id="q2DYjHwberIK" colab_type="code" colab={} # !chmod 755 darknet # + id="7nNVd3Z6XUDV" colab_type="code" outputId="dffcc142-3389-4760-e116-a9635c04243c" executionInfo={"status": "ok", "timestamp": 1591270705462, 
"user_tz": -540, "elapsed": 13325274, "user": {"displayName": "\uc2e0\uc6a9\uc2b9", "photoUrl": "", "userId": "09296461578988112178"}} colab={"base_uri": "https://localhost:8080/", "height": 1000} # !./darknet detector train "/content/gdrive/My Drive/yolo/darknet/obj.data" "/content/gdrive/My Drive/yolo/darknet/cfg/yong-mine.cfg" "/content/gdrive/My Drive/yolo/darknet/yolov3-tiny.conv.15" -dont_show # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + [markdown] slideshow={"slide_type": "slide"} # # The ipyrad.analysis tool kit # # # + [markdown] slideshow={"slide_type": "slide"} # ### Install software # All required software for this walkthrough is available on conda. # + slideshow={"slide_type": "-"} # conda install -c ipyrad ipyrad structure clumpp bpp # conda install -c eaton-lab toytree toyplot # conda install -c bioconda raxml # + [markdown] slideshow={"slide_type": "slide"} # ### Start an ipyparallel cluster # In a separate terminal run the following command to start a cluster of engines. If working on a notebook running remotely, use the dashboard to open a new terminal. # + # ipcluster start --n=4 # - # You should then be able to connect to the engines in your notebook: # + ## connect to the cluster import ipyparallel as ipp ipyclient = ipp.Client() ## print number of engines print len(ipyclient), "connected engines" # + [markdown] slideshow={"slide_type": "slide"} # ### Assemble a RAD data set # The code here is to assemble the example empirical data set from the [ipyrad tutotial](http://ipyrad.readthedocs.io/userguide.html#empirical-examples). # - ## import ipyrad import ipyrad as ip # Minimal workflow: scroll down for details. # + slideshow={"slide_type": "-"} ## create an Assembly object data = ip.Assembly("simdata") ## set I/O paths for the data data.set_params("project_dir", "~/workshop") data.set_params("raw_fastq_path", "ipsimdata/rad_example_R1_.fastq.gz") data.set_params("barcodes_path", "ipsimdata/rad_example_barcodes.txt") ## run all steps of the Assembly data.run("1234567") # + [markdown] slideshow={"slide_type": "subslide"} # ### Modify more parameters # + slideshow={"slide_type": "-"} ## set params data.set_params("filter_adapters", 2) data.set_params("output_formats", "lpask") ## show params data.get_params() # + [markdown] slideshow={"slide_type": "subslide"} # ### Assemble the data set # You can run one or more steps just like in the CLI. # - ## run all steps of assembly data.run("1234567") # + [markdown] slideshow={"slide_type": "subslide"} # ### Access assembly results # You can easily access summary stats for the assembly as a data frame. # - ## summary stats data.stats # + [markdown] slideshow={"slide_type": "subslide"} # ### Plot statistics # + slideshow={"slide_type": "-"} import toyplot ## plot barplot c, a, m = toyplot.bars( data.stats.hetero_est, height=250, width=500, ) ## style the axes a.x.ticks.locator = toyplot.locator.Explicit( locations=range(len(data.stats)), labels=data.stats.index) a.y.label.text = "Heterozygosity" a.y.ticks.show = True # + [markdown] slideshow={"slide_type": "subslide"} # ### Access result files # You can also access the stats files for each step, and the output files for downstream analyses. 
# + ## s2 stats file print data.stats_files.s2 ## the .loci file location print data.outfiles.loci # + [markdown] slideshow={"slide_type": "slide"} # ## ipyrad.analysis tools # The ipyrad.analysis module includes many *wrapper* tools that can be used to efficiently run evolutionary analysis tools in a notebook. # - ## import the toolkit import ipyrad.analysis as ipa # + [markdown] slideshow={"slide_type": "slide"} # ### RAxML analysis # Simply enter the location of the phylip file, which can be accessed from the `.outfiles` attribute of the Assembly object. You can also provide a name and output directory, and set many other optional parameters. # + slideshow={"slide_type": "skip"} import ipyrad as ip import ipyparallel as ipp data = ip.load_json("/home/deren/workshop/simdata.json") ipyclient = ipp.Client() # - # Minimal workflow: scroll down for details. # + slideshow={"slide_type": "-"} ## create a raxml object s = ipa.raxml( name=data.name, phyfile=data.outfiles.phy, workdir="~/workshop/analysis-raxml"); ## run the analysis s.run() # + [markdown] slideshow={"slide_type": "subslide"} # ### Modify parameters and other functions # - ## modify params s.params.T = 4 s.params.N = 100 # + slideshow={"slide_type": "fragment"} ## print the raxml command as a string print s.command # + slideshow={"slide_type": "fragment"} ## overwrite existing result with this 'name' s.run(force=True) # + [markdown] slideshow={"slide_type": "subslide"} # ### Access the tree files and plot # - print s.trees # + slideshow={"slide_type": "fragment"} import toytree tre = toytree.tree(s.trees.bipartitions) tre.root(wildcard='3') tre.draw( width=300, node_labels=tre.get_node_values("support"), node_size=20, ); # + [markdown] slideshow={"slide_type": "slide"} # ### introgression (abba-baba) analysis # The baba object can be used to set up abba-baba tests, to calculate results, and to generate plots to visualize them. # - # Minimal example, scroll down for details. # + ## create a baba object b = ipa.baba(data=data.outfiles.loci) ## generate tests given the rooted tree b.tests = [ {"p4":["3L_0"], "p3":["2F_0"], "p2":["1D_0"], "p1":["1A_0"]}] ## run jobs distributed across the cluster b.run(ipyclient) b.results_table # + [markdown] slideshow={"slide_type": "subslide"} # ### Auto-generate tests # Instead of writing out many tests explicitly, you can instead enter a rooted tree to the baba object and use this function to auto-generate four-taxon test fitting the tree and constraints. 
# + slideshow={"slide_type": "-"} ## init baba object b = ipa.baba(data=data.outfiles.loci, newick=tre) ## generate all possible tests on this tree b.generate_tests_from_tree() ## set constraints on tests cdict = {"p4": ["3L_0"], "p3": ["2E_0", "2F_0"], "p2": ["1D_0"]} ## generate constrainted number of tests b.generate_tests_from_tree( constraint_dict=cdict, constraint_exact=False, ) # + [markdown] slideshow={"slide_type": "subslide"} # ### Run all tests linked to a baba object # + slideshow={"slide_type": "-"} ## run the tests (in this case 4) linked to the baba object b.run(ipyclient) ## show results table b.results_table # + [markdown] slideshow={"slide_type": "subslide"} # ### Plot results # - b.plot( height=350, pct_tree_x = 0.4, pct_tree_y = 0.2, ); # + slideshow={"slide_type": "subslide"} ### Save the plot import toyplot.pdf canvas, axes, mark = b.plot(height=350, pct_tree_x=0.4, pct_tree_y=0.2) toyplot.pdf.render(canvas, "/home/deren/workshop/abba-baba.pdf") ## save the results table b.results_table.to_csv("~/workshop/abba-baba.csv", sep="\t") # + [markdown] slideshow={"slide_type": "slide"} # ### Species tree inference by phylogenetic invariants # The program tetrad follows the algorithm of SVDquartets by inferring all possible quartet trees from a large SNP alignment and uses the program quartet maxcut (Snir et al. 2012) to infer a species tree by quartet joining. # + ## create a tetrad class object tet = ipa.tetrad( name=data.name, seqfile=data.outfiles.snpsphy, mapfile=data.outfiles.snpsmap, workdir="~/workshop/analysis-tetrad", nboots=100 ) ## run the analysis tet.run(ipyclient) # - # ### Access result tetrad trees and draw # + slideshow={"slide_type": "subslide"} tet.trees # + slideshow={"slide_type": "subslide"} ## load unrooted result tree with toytree and draw tre = toytree.tree(tet.trees.cons) tre.draw( node_labels=tre.get_node_values("support"), node_size=20, ); # - # ### Infer a species tree with BPP # + # conda install bpp -c ipyrad # + ## setup: define how samples group into 'species' IMAP = { "1": ["1A_0", "1B_0", "1C_0"], "D": ["1D_0"], "2": ["2F_0", "2E_0", "2G_0"], "H": ["2H_0"], "3": ["3J_0", "3I_0", "3K_0"], "L": ["3L_0"], } ## setup: define a guidetree GUIDE = "(((1,D),(2,H)),(3,L));" ## init a bpp object bpp = ipa.bpp( locifile=data.outfiles.loci, imap=IMAP, guidetree=GUIDE, workdir="~/workshop/analysis-bpp" ); ## submit jobs to run on the cluster bpp.submit_bpp_jobs("A00", nreps=2, ipyclient=ipyclient) # + [markdown] slideshow={"slide_type": "subslide"} # ### Set parameters and filters # You can define all of the parameter settings that will be used in the BPP .ctl file by modifying the .params attributes. Similarly, you can modify which loci will be included in the analysis using the .filters attributes. # + ## set some parameters bpp.params.burnin = 1000 bpp.params.nsample = 5000 bpp.params.infer_sptree = 1 bpp.params.infer_delimit = 0 ## set some filters bpp.filters.maxloci = 200 bpp.filters.minsnps = 2 ## submit jobs to run on the cluster bpp.submit_bpp_jobs("A00", nreps=2, ipyclient=ipyclient) # + [markdown] slideshow={"slide_type": "subslide"} # ### Track running jobs # Unlike some of the other ipyrad.analysis tools, the bpp object does not "block" while the jobs are running. Meaning that after it sends jobs to run on the cluster you can continue to interact with the notebook. This is useful since BPP is not multi-threaded, so you will likely want to submit many different types of jobs. You can check on running jobs like below. 
# + ## a list of submitted jobs print bpp.asyncs ## a list of result files produced by jobs print bpp.files # + [markdown] slideshow={"slide_type": "slide"} # ### Structure analyses # + slideshow={"slide_type": "skip"} import ipyrad as ip import ipyrad.analysis as ipa import ipyparallel as ipp data = ip.load_json("/home/deren/workshop/simdata.json") ipyclient = ipp.Client() # + # conda install structure -c ipyrad # conda install clumpp -c ipyrad # + ## create a structure class object s = ipa.structure( name=data.name, strfile=data.outfiles.str, mapfile=data.outfiles.snpsmap, workdir="~/workshop/analysis-structure", ); s.mainparams.burnin = 100 s.mainparams.numreps = 1000 ## submit jobs to run on the cluster for kpop in [2, 3, 4, 5]: s.submit_structure_jobs(kpop=kpop, nreps=5, ipyclient=ipyclient) # + [markdown] slideshow={"slide_type": "subslide"} # ### Modify parameters settings # - s.mainparams.burnin = 10000 s.mainparams.numreps = 100000 s.extraparams.usepopinfo = 0 # ### Summarize results with CLUMPP # + ## get results for a single K value s.get_clumpp_table(3) ## make a dict for all results tables = {} for kpop in [2, 3, 4, 5]: tables[kpop] = s.get_clumpp_table(kpop) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import pandas as pd import json from pandas.io.json import json_normalize import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set(color_codes=True) from pylab import plot, show, text import datetime import matplotlib.dates as mdates import pylab # + # load as Pandas dataframe df_users = pd.read_csv('takehome_users.csv', encoding='latin-1') # look at basic column descriptions df_users.describe() # - # put zero for NaN values df_users = df_users.fillna(0) # peek at the dataframe df_users.head(3) # what organization has the most users? 
# ANSWER: most users belong to organization id: 1 -> 10 except 8 df_users.org_id.value_counts().head(10) # who invited most users df_users.invited_by_user_id.value_counts().head(10) # who are these users who invited the most users , 11770 df_users[(df_users['object_id'] == 10741) | (df_users['object_id'] == 2527) | (df_users['object_id'] == 2308)| (df_users['object_id'] == 1525)| (df_users['object_id'] == 11770)]['name'] # check if there are blank email print(len(df_users[df_users.email == None])) # how many opted in to mailing list df_users.opted_in_to_mailing_list.value_counts() # + # Create a pie chart # create dataframe for creation source df_mail = df_users.opted_in_to_mailing_list.value_counts() df_mail = df_mail.reset_index() # Put parameter values plt.pie( df_mail['opted_in_to_mailing_list'], labels=df_mail['index'], shadow=False, startangle=0, autopct='%1.1f%%', ) # Add title plt.title('25% percent of the users has opted_in_to_mailing_list') plt.axis('equal') # Display plot plt.tight_layout() plt.show() # - # how many enabled for marketing drip df_users.enabled_for_marketing_drip.value_counts() # + # Create a pie chart # create dataframe for creation source df_drip = df_users.enabled_for_marketing_drip.value_counts() df_drip = df_drip.reset_index() # Put parameter values plt.pie( df_drip['enabled_for_marketing_drip'], labels=df_drip['index'], shadow=False, startangle=0, autopct='%1.1f%%', ) # Add title plt.title('15% of users have enabled_for_marketing_drip') plt.axis('equal') # Display plot plt.tight_layout() plt.show() # - #creation source distribution df_users.creation_source.value_counts() # + # Create a pie chart # create dataframe for creation source df_source = df_users.creation_source.value_counts() df_source = df_source.reset_index() # Put parameter values plt.pie( df_source['creation_source'], labels=df_source['index'], shadow=False, startangle=0, autopct='%1.1f%%', ) # Add title plt.title('Percent distribution of creation source') plt.axis('equal') # Display plot plt.tight_layout() plt.show() # - # read file into dataframe df_engage = pd.read_csv('takehome_user_engagement.csv') df_engage.head(3) # top 10 active users (may or may not be adopted users) df_engage.user_id.value_counts().head(10) # who are these top 3 users? 
df_users[(df_users['object_id'] == 3623) | (df_users['object_id'] == 906) | (df_users['object_id'] == 1811)] # + # convert string time stamp into datetime df_engage['time_stamp'] = pd.to_datetime(df_engage['time_stamp']) # change index to time_stamp column for timegrouper function (later on) df_engage.index = pd.to_datetime(df_engage.time_stamp, unit='D') # + # create dataframe with users that has logged into the product on three separate days in at least one sevenday period df_adoption = df_engage.groupby(['user_id', pd.TimeGrouper(freq='7D')]).filter(lambda x: len(x)>1).groupby('user_id').sum() # reset index df_adoption = df_adoption.reset_index() # - # peek at some data df_adoption.head(3) # + # merge users and adopted users dataframe df = df_users.merge(df_adoption, left_on='object_id', right_on='user_id', how='outer') # drop column user_id since it is duplicate with object_id df.drop('user_id', axis=1, inplace=True) # replace NaN with zero df = df.fillna(0) from datetime import datetime # convert unix timestamp to datetime df['last_session_creation_time'] = df['last_session_creation_time'].apply( lambda x: datetime.strptime(str(datetime.fromtimestamp(float(int(x)))), '%Y-%m-%d %H:%M:%S')) df['creation_time'] = df['creation_time'].apply(lambda x: datetime.strptime(str(x), '%Y-%m-%d %H:%M:%S')) #calculate active days df['days_since_signup'] = df['last_session_creation_time'] - df['creation_time'] df['days_since_signup'] = df['days_since_signup'].apply(lambda x: abs(x.total_seconds()/60/60/24/30)) #convert creation_source into numeric values df['creation_source']= df['creation_source'].astype('category') cat_columns = df.select_dtypes(['category']).columns df[cat_columns] = df[cat_columns].apply(lambda x: x.cat.codes) # create column if us df['adopted_user']=df['visited'].apply(lambda x: int(x > 0)) # column visited is not needed df.drop('visited', axis=1, inplace=True) df.head(3) # + from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score #construct the dataset X, y X = df[['creation_source', 'opted_in_to_mailing_list', 'enabled_for_marketing_drip', 'days_since_signup']] y = (df.adopted_user == 1) # Split the data into a training and test set. Xlr, Xtestlr, ylr, ytestlr = train_test_split(X.values, y.values, test_size=0.20, random_state=5) clf = LogisticRegression() # Fit the model on the trainng data. clf.fit(Xlr, ylr) # Print the accuracy from the testing data. print("Accuracy score: ", accuracy_score(clf.predict(Xtestlr), ytestlr)) # Print importance of each features clf.fit(Xlr / np.std(Xlr, 0), ylr) print("Regression coefficients: ", clf.coef_) print("Intecept: ", clf.intercept_) print("Column names: ", (X.columns.values)) # - #

Results/Interpretation/Further Research:
#
# Results:
# I used Logistic Regression, since the target variable is binary: adopted user (1) or not (0). The features are 'creation_source', 'opted_in_to_mailing_list', 'enabled_for_marketing_drip' and 'days_since_signup'. The accuracy score is 83.75% and the regression coefficients are [[-0.1408732, 0.03920644, -0.01876419, -1.59402912]].
#
# Interpretation (the coefficients were fit on standardized features, so each effect below is per one standard deviation, on the log-odds scale; see the sketch below for the corresponding odds ratios):
# Creation source appears most favorable for Org invite and least for signup via Google auth; a higher encoded creation source corresponds to a 0.14 unit decrease in the log-odds of being an adopted user.
# Opting in to the mailing list corresponds to a 0.03 unit increase in the log-odds of becoming an adopted user.
# Being enabled for the marketing drip corresponds to a 0.02 unit decrease in the log-odds of being an adopted user.
# A longer time since signup corresponds to a 1.6 unit decrease in the log-odds of being an adopted user.
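# A small follow-up sketch (not part of the original report): assuming the fitted `clf` and the feature frame `X` from the modelling cell above are still in scope, exponentiating the standardized coefficients turns them into odds ratios, which are often easier to read than raw log-odds changes.
# +
import numpy as np

# Odds ratio per feature: values below 1 lower the odds of adoption,
# values above 1 raise them (per one standard deviation of the feature).
odds_ratios = np.exp(clf.coef_[0])
for name, coef, ratio in zip(X.columns.values, clf.coef_[0], odds_ratios):
    print("{}: coefficient = {:+.3f}, odds ratio = {:.3f}".format(name, coef, ratio))
# -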
# Further research/data:
    # (Assuming that Relax Inc. is a spa facility) Add user information such as gender, age, location, salary range and number of friends/co-workers/relatives who are also member. Also, user profile on social network and their outlook regarding about health and fitness can also help to improve this prediction. # # # #END OF REPORT # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import pandas as pd import scipy from datetime import datetime, timedelta ## For SDK import getpass from odp_sdk import ODPClient from getpass import getpass import warnings warnings.filterwarnings("ignore") # - # # Please do the tests to the best of your ability and fill in the google form for feedback on each test # # # # If you encounter any problems, please see code at the end of the notebook for help, or feel free to reach out to either: #
    # : or : # # Test 3 # Connect to ODP using SDK # Please see Basic Usage example notebook in GitHub for help #
    # **Hint**: Check the Basic Usage Example notebook for help. You should see ' connection successful' once you connect. # # + ## Connect to SDK here # - # # Test 4 # Using the cast function, download data from: #
    # longitude=[-10,10], #
    # latitude=[55,70], #
    # timespan=['2018-06-01','2018-08-31'] #
    # Let's exclude flagged data. #
    # **Hint**: Please see the Basic Usage example notebook for help. # + ## Download casts here and display first five lines of output # - # # Test 5 # Using the get_available_casts function, download casts and cast information for: #
    # longitude=[-10,10], #
    # latitude=[55,70], #
    # timespan=['2018-06-01','2018-08-31'] #
    # # Filter the casts for the ones which have 'NORWAY' as the associated country. #
    # # Download the data only for those specific casts #
    # # **Hint**: Please see the Basic Usage example notebook for help. # + ## Get casts here and display first five lines of output # + ## Filter the casts for 'NORWAY' as the associated country # + ## Download data only for those specific casts # - # # Test 6 # Clone the repo in order to use the extra functionalities. Under the 'Example's' folder follow instructions for intalling Cartopy and then pip install the packages in requirements_func.txt #
    # Then import UtilityFunctions #
    # **Hint**: See the README under the Examples folder #
    # Remember that you need to import the function files and the necessary packages # + #import extra packages and functions here from UtilityFunctions import * ## If you are not running your notebook from the same folder where UtilityFunctions is, remember to point to the proper path # - # # Test 7 # Using the plot_casts function in UtilityFunctions.py, visualize the individual casts with associated Temperature values retrieved in Test 4 **for depths above 5 meters**. Remember that you can specify the colormap from cmocean (a color palette for plotting ocean data) #
    # **Hint:** See Data Coverage Example notebook # + #visualize casts here # - # # Test 8 # # Interpolate the data retrieved in Test 4 to get continuous Temperature map at 5m depth for the date '2018-07-14'. # **Hint:** Different algorithms can be used for this portion. Please see the Example Notebook Gridding Data from Ocean Data Platform. Use interpolate_casts_to_z() to interpolate all profiles to a depth of 5m and use interpolate_to_grid() for the spatial gridding # + ##Interpolate and plot results here # - # # Help With Code Below # # Test 3 # Connect to ODP using SDK # Please see Basic Usage example notebook in GitHub for help #
    # **Hint**: Check the Basic Usage Example notebook for help. You should see ' connection successful' once you connect. # ## This code will prompt you to enter your API key to connect client = ODPClient(api_key=getpass(prompt='Insert your personal ODP API key:'), project="odp", client_name="odp") # # Test 4 # Using the cast function, download data from: #
    # longitude=[-10,10], #
    # latitude=[55,70], #
    # timespan=['2018-06-01','2018-08-31'] #
    # Let's exclude flagged data #
    # **Hint**: Please see the Basic Usage example notebook for help # + df=client.casts(longitude=[0,15], latitude=[50,65], timespan=['2018-06-01','2018-08-31'], include_flagged_data = False, n_threads=35) df.head() # - # # Test 5 # Using the get_available_casts function, download casts and cast information for: #
    # longitude=[-10,10], #
    # latitude=[55,70], #
    # timespan=['2018-06-01','2018-08-31'] #
    # # Filter the casts for the ones which have 'NORWAY' as the associated country. #
    # # Download the data only for those specific casts #
    # # **Hint**: Please see the Basic Usage example notebook for help. # + casts=client.get_available_casts(longitude=[0,15], latitude=[50,65], timespan=['2018-06-01','2018-08-31'], meta_parameters=['extId','date','time','lon','lat','country', 'Platform','dataset_code', 'equipment']) casts.head() # - casts_norway = casts[casts.country=='NORWAY'] casts_norway.head() # + data_norway = client.download_data_from_casts(casts_norway.extId.tolist(),n_threads=35) data_norway.head() # - # # Test 6 # Clone the repo in order to use the extra functionalities. Under the 'Example's' folder follow instructions for installing Cartopy and then pip install the packages in requirements_func.txt #
    # Then import UtilityFunctions #
    # **Hint**: See the README under the Examples folder #
    # Remember that you need to import the function files and the necessary packages # + ## Once the repo is cloned, you need to import the following packages and functions. #If you get an error for the imports of Cast Functions and DataStatsFunctions, make sure they are in the same # folder from which you are running your notebook from UtilityFunctions import * # - # # Test 7 # Using the plot_casts function in UtilityFunctions.py, visualize the individual casts with associated Temperature values retrieved in Test 4 **for depths above 5 meters**. Remember that you can specify the colormap from cmocean (a color palette for plotting ocean data) #
    # **Hint:** See Data Coverage Example notebook plot_casts('Temperature',df[df.z<5],cmap=cmocean.cm.thermal) plt.title('Temperature Measurements') # # Test 8 # # Interpolate the data retrieved in Test 4 to get continuous Temperature map at 5m depth for the date '2018-07-14'. # **Hint:** Different algorithms can be used for this portion. Please see the Example Notebook Gridding Data from Ocean Data Platform example notebook. Use interpolate_casts_to_z() to interpolate all profiles to a depth of 5m and use interpolate_to_grid() for the spatial gridding df_int=interpolate_casts_to_z(df,'Temperature',[5]) # Interpolate all profiles to 5m depth. OBS: This one may take a long time # + ##need to drop null values df_int.dropna(inplace=True) longitude=[0,15] # longitude gridding range latitude=[50,65] # latitude gridding range df_int['unixtime']=df_int['datetime'].apply(lambda x : x.value) points=df_int[['lon','lat','unixtime']].values.astype('float') values=df_int['Temperature'].values .astype('float') # Interpolating to a grid for the date '2018-07-14' int_points=[np.linspace(longitude[0],longitude[1],15*4+1), np.linspace(latitude[0],latitude[1],15*4+1), [pd.Timestamp('2018-07-14').value]] # + ## specify interpolation algorithm kind='rbf' ##create grid and interpolate values grid,g=interpolate_to_grid(points.copy(),values.copy(),int_points.copy(), interp_type=kind, rbf_func='linear',rbf_smooth=1e-9,rescale=True) ## plot interpolated values plot_grid(grid[0][:,:,0],grid[1][:,:,0],g[:,:,0],cmap=cmocean.cm.thermal,vrange=[None,None], variable_name='Temperature (degrees_C)') plt.title('RBF Interpolation') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # All the IPython Notebooks in this lecture series by Dr. are available @ **[GitHub](https://github.com/milaan9/02_Python_Datatypes/tree/main/002_Python_String_Methods)** # # # Python String `center()` # # The **`center()`** method returns a string which is padded with the specified character. # # **Syntax**: # # ```python # string.center(width[, fillchar]) # ``` # ## `center()` Parameters # # The **`center()`** method takes two arguments: # # * **width** - length of the string with padded characters # * **fillchar** (optional) - padding character # # The **`fillchar`** argument is optional. If it's not provided, space is taken as default argument. # ## Return Value from `center()` # # The **`center()`** method returns a string padded with specified **`fillchar`**. It doesn't modify the original string. 
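# As a quick illustration of that last point (a small added example, not from the original lecture): `center()` builds and returns a new padded string, while the original string is left untouched.
# +
text = "Python"
padded = text.center(10, '-')
print("Padded:  ", padded)    # --Python--
print("Original:", text)      # Python (unchanged)
# -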
# + # Example 1: center() Method With Default fillchar string = "Python is awesome" new_string = string.center(24) print("Centered String: ", new_string) # + # Example 2: center() Method With * fillchar string = "Python is awesome" new_string = string.center(24, '*') print("Centered String: ", new_string) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Accessing Model Training History in Keras # * It records training metrics for each epoch # * we can use data collected in the history object to create plots.The plots can provide an indication of useful things about the training of model such as: # * It's speed of convergence over epochs(slope) # * Whether the model may have already converged(plateau of the line) # * Whether the model may be over-learning the training data # + # we can list all metrics collected in history object # - # ## 1. Visualizing Model Training History in Keras # * A plot of accuracy on the training and validation datasets over training epochs. # * A plot of loss on the training and validation datasets over training epochs. # + # Visualize training history from keras.models import Sequential from keras.layers import Dense import matplotlib.pyplot as plt import numpy as np # fix random seed for reproducibility seed = 7 np.random.seed(seed) # load pima indians dataset dataset = np.loadtxt("pima-indians-diabetes.data.csv",delimiter=",") # split into input(X) and Output(Y) variables X = dataset[:,0:8] Y = dataset[:,8] # create model model = Sequential() model.add(Dense(12,input_dim=8,activation='relu')) model.add(Dense(8,activation='relu')) model.add(Dense(1,activation='sigmoid')) # compile model model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy']) # Fit the model history = model.fit(X,Y,validation_split=0.33,epochs=150,batch_size=10,verbose=0) # list all data in history print(history.history.keys()) # summarize history for accuracy plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train','test'],loc='upper left') plt.savefig('accuracy') plt.show() # summarize history for loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train','test'],loc='upper left') plt.savefig('loss') plt.show() # - # # summary # * Discovered the importance of collecting and reviewing metrics during the training of deep learning models. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Before you begin, execute this cell to import numpy and packages from the D-Wave Ocean suite, and all necessary functions for the gate-model framework you are going to use, whether that is the Forest SDK or Qiskit. In the case of Forest SDK, it also starts the qvm and quilc servers. # %run -i "assignment_helper.py" # %matplotlib inline # # Unitary evolution and the Hamiltonian # # **Exercise 1** (2 points). You have already seen that the eigendecomposition of a Hamiltonian gives you the energy levels of the system. This comes out of the time-independent Schrödinger equation, $H|\psi \rangle =E|\psi \rangle$. 
# # The solution of the time-dependent Schrödinger equation establishes the connection between unitary operations (solutions of the Schrödinger equation) and the Hamiltonian. Imagine, for instance, that you are looking for the Hamiltonian that implements the phase gate $S=\begin{bmatrix}1 & 0\\ 0& i\end{bmatrix}$. This is a valid unitary, so we are looking for a time-independent Hamiltonian such that $S = \exp(-i Ht/\hbar)$. The $S$ gate is diagonal, so will be our Hamiltonian. That makes matrix exponentiation easy. So we are looking for a Hamiltonian $H=\begin{bmatrix}h_1 & 0\\0 & h_2\end{bmatrix}$ for some values $h_1$ and $h_2$. # # We can take, for instance $\sigma^Z=\begin{bmatrix}1 & 0\\0 & -1\end{bmatrix}$ and substract an identity matrix -- this only shifts the energy. That would give us a Hamiltonian $H=\sigma^Z-\mathbb{1}=\begin{bmatrix}0 & 0\\0 & -2\end{bmatrix}$. So now we have $S=\exp(-i Ht/\hbar)=\begin{bmatrix}\exp(-i 0t/\hbar) & 0\\0 & \exp(i 2t/\hbar)\end{bmatrix}=\begin{bmatrix}1 & 0\\0 & \exp(i 2t/\hbar)\end{bmatrix}$. Calculate the value for $t/\hbar$ to get the $S$ gate from this Hamiltonian. Store the result in a variable called `t`. ### ### YOUR CODE HERE ### t = np.log(1j)/(2*1j) # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise1", "locked": true, "points": "2", "solution": false} ### ### AUTOGRADER TEST - DO NOT REMOVE ### # - # This tells you for how long you would have to evolve the Hamiltonian to get the effect of an $S$ gate. # # The adiabatic theorem # # **Exercise 2** (2 points). In an adiabatic process, conditions change slowly enough for the system to adapt to the new configuration. We can start from some Hamiltonian $H_0$ and slowly change it to some other Hamiltonian $H_1$, for instance, on a linear schedule: $H(t) = (1-t) H_0 + t H_1$. The speed of change heavily depends on the energy gap, that is, the difference between the ground state energy and the first excited state of all Hamiltonians $H(t)$, $t\in[0,1]$. # # It is easy to craft a Hamiltonian where this gap is small, so the speed limit has to be low. If you take a classical Ising model with coupling strengths on vastly different scales, that is what you get. For instance, calculate the gap (the difference between the smallest and second smallest eigenvalue) of the Hamitonian $H_1=-1000\sigma^Z_1\sigma^Z_2-0.1\sigma^Z_2\sigma^Z_3-0.5\sigma^Z_1$ acting on a three-qubit system (the last linear term is there to make the ground state unique). The result should be in a variable called `gap`. Remember that since you have three qubits, the $\sigma^Z_1\sigma^Z_2$ operator, for instance, actually means $\sigma^Z\otimes\sigma^Z\otimes\mathbb{1}$. import numpy as np Z = np.array([[1, 0], [0, -1]]) ### ### YOUR CODE HERE ### I = np.eye(2) H = -1000*np.kron(np.kron(Z, Z), I) - 0.1*np.kron(I, np.kron(Z, Z)) - 0.5*np.kron(Z, np.kron(I, I)) eigenvalues, _ = np.linalg.eigh(H) gap = abs(eigenvalues[0] - eigenvalues[1]) gap # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise2", "locked": true, "points": "2", "solution": false} ### ### AUTOGRADER TEST - DO NOT REMOVE ### # - # **Exercise 3** (1 point). Contrast this to the gap of the Hamiltonian $H_0 = \sum_{i=1}^3 -\sigma^X_i$. Again, calculate the value in a variable called `gap`. 
X = np.array([[0, 1], [1, 0]]) ### ### YOUR CODE HERE ### I = np.eye(2) XII = np.kron(np.kron(X, I), I) IXI = np.kron(np.kron(I, X), I) IIX = np.kron(np.kron(I, I), X) H = -XII - IXI - IIX eigenvalues, _ = np.linalg.eigh(H) gap = abs(eigenvalues[0] - eigenvalues[1]) gap # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise3", "locked": true, "points": "1", "solution": false} ### ### AUTOGRADER TEST - DO NOT REMOVE ### # - # You can see that there is a vast difference in the gap between the two Hamiltonians. This could be leveraged: for instance, the initial part of the annealing could go faster, since the gap is large, and then slow down towards reaching the target Hamiltonian. The optimal annealing schedule is a research topic on its own. # # Quantum annealing # # **Exercise 4** (1 point). On a real quantum annealing device, we drop the stringent theoretical requirements of following the adiabatic pathway and we repeat the transition over and over again. Then we choose the lowest energy solution as our optimum. # # The classical 'simulator' for a quantum annealer is some heuristic solver of combinatorial optimization, for instance, simulated annealing. Use the dimod package to implement the Hamiltonian with a small gap: $H_1=-1000\sigma^Z_1\sigma^Z_2-0.1\sigma^Z_2\sigma^Z_3-0.5\sigma^Z_1$. Your solution should be a `BinaryQuadraticModel` in an object called `model`. # + import dimod ### ### YOUR CODE HERE ### h = {0:0.5, 1:0, 1:0} J = {(0,1):1000, (1,2):0.1, (2,3):0} model = dimod.BinaryQuadraticModel(h, J, 0.0, dimod.SPIN) # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise4", "locked": true, "points": "1", "solution": false} sampler = dimod.SimulatedAnnealingSampler() response = sampler.sample(model, num_reads=10) assert np.isclose(response.first.energy, -1000.6) # - # Unlike in the case of a simple system, you often do not get the ground state: print([solution.energy for solution in response.data()]) # This shows that the problem is also hard for a classical heuristic solver, which should not come as a surprise. # The qubits are often not fully connected, especially not in superconducting architectures. This is why we have to embed our graph in the connectivity graph of the hardware. In the case of the D-Wave 2048Q system, the connectivity has the Chimera graph structure: # # Unit cell in Chimera graph # # We are interested in the chain length of an embedding, which reflects how many physical qubits are used to represent a logical qubit. Assume that your quantum device has a Chimera structure of four unit cells, that is, 8 times 4 = 32 qubits. # + import matplotlib.pyplot as plt import dwave_networkx as dnx connectivity_structure = dnx.chimera_graph(2, 2) dnx.draw_chimera(connectivity_structure) plt.show() # - # Let's try to embed the complete graph $K_n$ on eight nodes: import networkx as nx G = nx.complete_graph(8) plt.axis('off') nx.draw_networkx(G, with_labels=False) # **Exercise 5** (2 points). The function `find_embedding` of the package minorminer returns you a dictionary: the keys represent the nodes of the graph to be embedded, and the values are a list (the chain) of physical qubits for that particular graph node. Write a function that takes this dictionary and return the length of the longest chain. 
### ### YOUR CODE HERE ### def get_max_chain_length(embedded_graph): max_length = 0 for node in embedded_graph: max_length = max(max_length, len(embedded_graph[node])) return max_length # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise5", "locked": true, "points": "2", "solution": false} embedded_graph = {0: [1, 12, 4], 1: [30, 11, 27, 87], 2: [28, 9, 25]} assert get_max_chain_length(embedded_graph) == 4 embedded_graph = {} assert get_max_chain_length(embedded_graph) == 0 # - # **Exercise 6** (2 points). Finding the optimal embedding is NP-hard, so the minorminer package implements a heuristic algorithm that relies on randomness. It often occurs that subsequent runs yield different chain lengths. Run the `find_embedding` function to embed the graph `G` in the graph `connectivity_structure` 100 times. Write the embedding with the shortest maximum chain length to a variable called `shortest_chain`, and the longest one to `longest_chain`. Use the function you wrote above. ### ### YOUR CODE HERE ### shortest_max_chain_length = 32 longest_max_chain_length = 0 for _ in range(100): embedded_graph = minorminer.find_embedding(G, connectivity_structure) max_length = get_max_chain_length(embedded_graph) if shortest_max_chain_length > max_length: shortest_max_chain_length = max_length shortest_chain = embedded_graph if longest_max_chain_length < max_length: longest_max_chain_length = max_length longest_chain = embedded_graph # + deletable=false editable=false nbgrader={"grade": true, "grade_id": "exercise6", "locked": true, "points": "2", "solution": false} assert len(shortest_chain) == 8 assert len(longest_chain) == 8 assert get_max_chain_length(shortest_chain) < get_max_chain_length(longest_chain) # - # Let's plot the embeddings: dnx.draw_chimera_embedding(connectivity_structure, shortest_chain) plt.show() dnx.draw_chimera_embedding(connectivity_structure, longest_chain) plt.show() # Depending on the heuristic outcome, you have a chance of seeing that the embedding with the longer chains uses more unit cells. Since the optimality of an embedding is hard to guarantee, if we want to use quantum annealing for machine learning, we should think of models that are either easy to embed or for which we can manually construct the optimal embedding. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Exploring Environment Canada Weather Data with Python and Jupyter Notebooks # # - By () - December 3rd 2017 # # This notebook demonstrates how to download Environment Canada's weather data using Python with popular data analysis libraries like pandas and Beautiful Soup. Read the accompanying blog post [here](https://www.ubcenvision.com/blog/2017/11/30/jupyter-part1.html) on the [UBC Envision](https://www.ubcenvision.com) website for more details. # # To run this notebook on the cloud, use [Binder](https://mybinder.org/) or the [Syzygy](http://intro.syzygy.ca/getting-started/) platform. # # Feel free to use, copy, paste, modify or distribute this notebook however you wish. # # *These examples are provided as-is with no guarantee that they will work in the future if Environment Canada modifies their API.* # # ## Updates: # - March 13th 2020: # - Environment Canada API updated. `getHourlyData()` updated with skiprows=0. 
# - Updated requirements.txt so this notebook works on Binder without messing with the libraries # # # - October 4th 2018: # - Environment Canada API updated, we need to skip 15 rows instead of 16. `getHourlyData()` updated with skiprows=15 # # Part I: Data Extraction & Cleaning import pandas as pd import datetime import matplotlib.pyplot as plt import seaborn as sns from dateutil import rrule from datetime import datetime, timedelta from bs4 import BeautifulSoup import requests import re # + # We'll need `fuzzywuzzy` to look up weather stations later # Run "!pip install fuzzywuzzy --user" if you get an error # # !pip install fuzzywuzzy --user from fuzzywuzzy import fuzz # - # ## How to download one month of data: # + month = "01" # January year = "2020" # 2020 stationID = 51442 #Vancouver base_url = "http://climate.weather.gc.ca/climate_data/bulk_data_e.html?" query_url = "format=csv&stationID={}&Year={}&Month={}&timeframe=1".format(stationID, year, month) api_endpoint = base_url + query_url print("Click me to download CSV data:") print(api_endpoint) # - # ## Function for calling the Environment Canada API # Call Environment Canada API # Returns a dataframe of data def getHourlyData(stationID, year, month): base_url = "http://climate.weather.gc.ca/climate_data/bulk_data_e.html?" query_url = "format=csv&stationID={}&Year={}&Month={}&timeframe=1".format(stationID, year, month) api_endpoint = base_url + query_url return pd.read_csv(api_endpoint, skiprows=0) # ## How to download data between a specified date range # + stationID = 51442 start_date = datetime.strptime('Jun2015', '%b%Y') end_date = datetime.strptime('Jun2016', '%b%Y') frames = [] for dt in rrule.rrule(rrule.MONTHLY, dtstart=start_date, until=end_date): df = getHourlyData(stationID, dt.year, dt.month) frames.append(df) weather_data = pd.concat(frames) weather_data['Date/Time'] = pd.to_datetime(weather_data['Date/Time']) weather_data['Temp (°C)'] = pd.to_numeric(weather_data['Temp (°C)']) # - # ## Plot average data and a rolling average # Notice the broken lines, they indicate missing data points. # %matplotlib inline sns.set_style('whitegrid') fig = plt.figure(figsize=(15,5)) plt.plot(weather_data['Date/Time'], weather_data['Temp (°C)'], '-o', alpha=0.8, markersize=2) plt.plot(weather_data['Date/Time'], weather_data['Temp (°C)'].rolling(window=250,center=False).mean(), '-k', alpha=1.0) plt.ylabel('Temp (°C)') plt.xlabel('Time') plt.show() # ## Fix missing data points by interpolation # Don't really care about accuracy right now, use simple linear interpolation weather_data['Temp (°C)'] = weather_data['Temp (°C)'].interpolate() # Then plot the data again: # %matplotlib inline sns.set_style('whitegrid') fig = plt.figure(figsize=(15,5)) plt.plot(weather_data['Date/Time'], weather_data['Temp (°C)'], '-o', alpha=0.8, markersize=2) plt.plot(weather_data['Date/Time'], weather_data['Temp (°C)'].rolling(window=250,center=False).mean(), '-k', alpha=1.0) plt.ylabel('Temp (°C)') plt.xlabel('Time') plt.show() # ## For convenience: Scrape StationIDs to lookup cities # This section demonstrates a simple Python script for scraping StationIDs from Environment Canada using Beautiful Soup. # # The stationIDs are provided by province in this Environment Canada [page](http://climate.weather.gc.ca/historical_data/search_historic_data_e.html). Environment Canada limits the number of rows in the search results to 100 entries. This script loops through all pages and grabs the StationID, Station Name, Intervals and Year Range. 
# # + # Specify Parameters province = "BC" # Which province to parse? start_year = "2006" # I want the results to go back to at least 2006 or earlier max_pages = 5 # Number of maximum pages to parse, EC's limit is 100 rows per page, there are about 500 stations in BC with data going back to 2006 # Store each page in a list and parse them later soup_frames = [] for i in range(max_pages): startRow = 1 + i*100 print('Downloading Page: ', i) base_url = "http://climate.weather.gc.ca/historical_data/search_historic_data_stations_e.html?" queryProvince = "searchType=stnProv&timeframe=1&lstProvince={}&optLimit=yearRange&".format(province) queryYear = "StartYear={}&EndYear=2017&Year=2017&Month=5&Day=29&selRowPerPage=100&txtCentralLatMin=0&txtCentralLatSec=0&txtCentralLongMin=0&txtCentralLongSec=0&".format(start_year) queryStartRow = "startRow={}".format(startRow) response = requests.get(base_url + queryProvince + queryYear + queryStartRow) # Using requests to read the HTML source soup = BeautifulSoup(response.text, 'html.parser') # Parse with Beautiful Soup soup_frames.append(soup) # - # ## Parsing the Environment Canada page with Beautiful Soup # + # Empty list to store the station data station_data = [] for soup in soup_frames: # For each soup forms = soup.findAll("form", {"id" : re.compile('stnRequest*')}) # We find the forms with the stnRequest* ID using regex for form in forms: try: # The stationID is a child of the form station = form.find("input", {"name" : "StationID"})['value'] # The station name is a sibling of the input element named lstProvince name = form.find("input", {"name" : "lstProvince"}).find_next_siblings("div")[0].text # The intervals are listed as children in a 'select' tag named timeframe timeframes = form.find("select", {"name" : "timeframe"}).findChildren() intervals =[t.text for t in timeframes] # We can find the min and max year of this station using the first and last child years = form.find("select", {"name" : "Year"}).findChildren() min_year = years[0].text max_year = years[-1].text # Store the data in an array data = [station, name, intervals, min_year, max_year] station_data.append(data) except: pass # Create a pandas dataframe using the collected data and give it the appropriate column names stations_df = pd.DataFrame(station_data, columns=['StationID', 'Name', 'Intervals', 'Year Start', 'Year End']) stations_df.head() # - # ## Filtering Station Data # For example, to show only stations with hourly data, use the map() function. # Show only data with hourly intervals hourly_stations = stations_df.loc[stations_df['Intervals'].map(lambda x: 'Hourly' in x)] hourly_stations.head() # ## Looking up stations # Now that we have the stationIDs, we can use Fuzzywuzzy and fuzzy string matching to lookup the station IDs with the city names. Here are 2 examples # + # Find the stations that are in Whistler string = "Whistler" tolerance = 90 hourly_stations[hourly_stations['Name'].apply(lambda x: fuzz.token_set_ratio(x, string)) > tolerance] # + # Find the stations that are in Vancouver string = "Vancouver" tolerance = 90 hourly_stations[hourly_stations['Name'].apply(lambda x: fuzz.token_set_ratio(x, string)) > tolerance] # - # ## Download Weather Data # Now that we know the station ID, we can use our getHourlyData() function to call the Environment Canada API with the station ID that we found. 
# # Here's how we get Whistler data from November 2016 to November 2017: # + # Get Whistler weather data for November 2016 to November 2017 stationID = 43443 start_date = datetime.strptime('Nov2016', '%b%Y') end_date = datetime.strptime('Nov2017', '%b%Y') frames = [] for dt in rrule.rrule(rrule.MONTHLY, dtstart=start_date, until=end_date): df = getHourlyData(stationID, dt.year, dt.month) frames.append(df) whistler = pd.concat(frames) whistler['Date/Time'] = pd.to_datetime(whistler['Date/Time']) whistler['Temp (°C)'] = pd.to_numeric(whistler['Temp (°C)']) whistler.head() # - # ## Let's plot the data for Whistler # %matplotlib inline sns.set_style('whitegrid') fig = plt.figure(figsize=(15,5)) plt.plot(whistler['Date/Time'], whistler['Temp (°C)'], '-o', alpha=0.8, markersize=2) plt.plot(whistler['Date/Time'], whistler['Temp (°C)'].rolling(window=250,center=False).mean(), '-k', alpha=1.0) plt.ylabel('Temp (°C)') plt.xlabel('Time') plt.show() # ## How to fix missing data points with interpolation # + # Find the number of rows with a 'M' for missing temperature flag, or NaN for the actual temperature value print('Missing data rows: ', whistler.loc[(~whistler['Temp Flag'].isnull()) | (whistler['Temp (°C)'].isnull())].shape[0]) # Do interpolation whistler['Temp (°C)'] = whistler['Temp (°C)'].interpolate() # Did we fix everything? print('Missing data rows: ', whistler.loc[(whistler['Temp (°C)'].isnull())].shape[0]) # - # Re-plot the data # %matplotlib inline sns.set_style('whitegrid') fig = plt.figure(figsize=(15,5)) plt.plot(whistler['Date/Time'], whistler['Temp (°C)'], '-o', alpha=0.8, markersize=2) plt.plot(whistler['Date/Time'], whistler['Temp (°C)'].rolling(window=250,center=False).mean(), '-k', alpha=1.0) plt.ylabel('Temp (°C)') plt.xlabel('Time') plt.show() # # Exporting Data # We'll export the dataframes in CSV format so we don't have to re-download the data every time we restart Jupyter: stations_df.to_csv('stations.csv') whistler.to_csv('whistler.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''microsoft'': conda)' # name: python38564bitmicrosoftcondaa3295319a17f4ae4a671465397595e70 # --- #Required libraries import pandas as pd import numpy as np from sklearn import linear_model, model_selection, metrics from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier from sklearn import tree from sklearn import preprocessing import pydotplus from IPython.display import Image #Loading data launch_data = pd.read_excel('RocketLaunchDataCompleted.xlsx') launch_data.head() launch_data.columns # + tags=[] """ We only have 60 launches in this dataset and we have data for other days as before and after launch, this means we will choose 60 records where the field Launched? is Y' """ launch_data.info() # - """To handle missing values, we will fill the missing values with appropriate values, for instance: Launched? 
is Na -> (we'll assume that wasn't launch) Crewed or Uncrewed is Na -> (Assume that the launch was uncrewed) Wind condition is Na -> (Assume as unkown) Condition is Na -> (Assume as Fair) """ launch_data['Launched?'].fillna('N',inplace=True) launch_data['Crewed or Uncrewed'].fillna('Uncrewed',inplace=True) launch_data['Wind Direction'].fillna('unknown',inplace=True) launch_data['Condition'].fillna('Fair',inplace=True) launch_data.fillna(0,inplace=True) launch_data.head() # + ## As part of the data cleaning process, we have to convert text data to numerical because computers understand only numbers label_encoder = preprocessing.LabelEncoder() # Three columns have categorical text info, and we convert them to numbers # For instance, 0 for Crewed - 1 for Uncrewed launch_data['Crewed or Uncrewed'] = label_encoder.fit_transform(launch_data['Crewed or Uncrewed']) launch_data['Wind Direction'] = label_encoder.fit_transform(launch_data['Wind Direction']) launch_data['Condition'] = label_encoder.fit_transform(launch_data['Condition']) # - launch_data.head() # + #Building the model # + # First, we save the output we are interested in. In this case, "launch" yes and no's go into the output variable. y = launch_data['Launched?'] # Removing the columns we are not interested in launch_data.drop(['Name','Date','Time (East Coast)','Location','Launched?','Hist Ave Sea Level Pressure','Sea Level Pressure','Day Length','Notes','Hist Ave Visibility', 'Hist Ave Max Wind Speed'],axis=1, inplace=True) # Saving the rest of the data as input data X = launch_data # List of variables that our machine learning algorithm is going to look at: X.columns # - """The X input data represents the weather for a particular day. In this case, we aren't worried about the date or time. We want the profile of the weather for that day to be the indicator for whether a launch should happen, not the date or time.""" X.head() # + """ Predict a rocket launch given weather conditions is a yes or no question. This is a two-class classification problem. We'll use a decision tree given that is an accurate and fast training algorithm. For this job we'll use sickit-learn. """ tree_model = DecisionTreeClassifier(random_state=0,max_depth=5) # - #Splitting data """ Test size: 0.2 - 80% training size, 20% test size Random state: 99 - random seed""" X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2, random_state=99) # Fitting the model to the training data tree_model.fit(X_train,y_train) # + tags=[] #Testing the model y_pred = tree_model.predict(X_test) print(y_pred) # - #Scoring the model # Calculate accuracy tree_model.score(X_test,y_test) # + #Creating the visual tree # Let's visualizing our decision tree. from sklearn.tree import export_graphviz def tree_graph_to_png(tree, feature_names,class_names, png_file_to_save): tree_str = export_graphviz(tree, feature_names=feature_names, class_names=class_names, filled=True, out_file=None) graph = pydotplus.graph_from_dot_data(tree_str) return Image(graph.create_png()) # - # This function visualizes the model. tree_graph_to_png(tree=tree_model, feature_names=X.columns.values,class_names=['No Launch','Launch'], png_file_to_save='decision-tree.png') """ 192 are no launches 48 are launches Given the visualization we could think the following: "If the wind speed was less than 1.0, then 191 of the 240 samples guessed that no launch was possible on that day." """ # + #A test for a recent lauch. 
"""On July 30, 2020, NASA launched the Perseverance rover to Mars from Cape Canaveral at 7:50 AM Eastern Time. The next is hypotethical data: """ # ['Crewed or Uncrewed', 'High Temp', 'Low Temp', 'Ave Temp', # 'Temp at Launch Time', 'Hist High Temp', 'Hist Low Temp', # 'Hist Ave Temp', 'Precipitation at Launch Time', # 'Hist Ave Precipitation', 'Wind Direction', 'Max Wind Speed', # 'Visibility', 'Wind Speed at Launch Time', 'Hist Ave Max Wind Speed', # 'Hist Ave Visibility', 'Condition'] data_input = [ 1. , 75. , 68. , 71. , 0. , 75. , 55. , 65. , 0. , 0.08, 0. , 16. , 15. , 0. , 0. ] test = tree_model.predict([data_input]) # + tags=[] print(test) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- m = [[1, 2, 3], [4, 5, 6]] print(m[1][1]) for i in range(len(m)): for j in range(len(m[i])): print(m[i][j]) for row in m: for col in row: print(col) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Model for aimpoint drift (aka ACA alignment drift) # # This notebook documents and computes fit coefficients for a simple model that # gives the relative ACA alignment as a linear function of the ACA CCD temperature. # # The ACA alignment is measured accurately each science observation via the apparent # positions of the fid lights. These are referred to by their CXC aspect solution # designation as the SIM DY and DZ offsets. This is actually a misnomer based on # the pre-launch understanding of what physical mechanism would generate such offsets. # We now know via HRMA optical axis measurements that a temperature-dependent change # in the ACA boresight alignment is responsible. The HRMA to SIM alignment is quite # stable. # # The ACA alignment relates directly to the X-ray detector aimpoint that is used in # observation planning and analysis. With this model it will be possible to improve # the aimpoint accuracy by introducing a dynamic pointing offset based on the # predicted ACA CCD temperature for each observation. # # The model is # ``` # DY/Z = (t_ccd - offset) * scale + (year - 2016.0) * trend + JUMPS # ``` # where # ``` # t_ccd : ACA CCD temperature (degF) # scale : scaling in arcsec / degF # offset : ACA CCD temperature corresponding to DY/Z = 0.0 arcsec # trend : Trend in DY/Z (arcsec / year) # year : decimal year # jumpYYYYDDD : step function from 0.0 to jumpYYYYDDD (arcsec) for date > YYYY:DDD # ``` # The jumps are persistent step function changes in alignment that have been observed following # extended dwells at normal sun where the ACA gets substantially hotter than during # normal operations. The exact mechanism is not understood, but could be due to # a non-linear stiction release of a stress point that impacts alignment. # # Note that the ACA alignment has a direct linear correlation to the ACA housing temperature (AACH1T). # However, in this model we use the ACA CCD temperature as the model dependent variable because it # is linearly related to housing temperature (AACCDPT = m * AACH1T + b) as long as the TEC is at # max drive current. Since there is already # an existing Xija model to predict ACA CCD temperature this reduces duplication. # # This model was fitted to data from 2012:180 to 2016:110 using Sherpa. 
The key fit results are: # ``` # DY # ----- # scale = 2.2 arcsec / degF # trend = -1.4 arcsec / year # jumps ~ -2 to -5 arcsec # # model error = +/- 1.9 arcsec (1st to 99th percentile range) # # DZ # ----- # scale = 1.1 arcsec / degF # trend = -0.4 arcsec / year # jumps ~ -0.3 to -1.8 arcsec # # model error = +/- 2.8 arcsec (1st to 99th percentile range) # # ``` # # The model accuracy will be degraded somewhat when ACA CCD temperature # is taken from a predictive Xija model instead of from telemetry. # # *This notebook lives in the **aimpoint_mon** project repository* # ## Code # + import re from itertools import izip import tables import matplotlib.pyplot as plt import numpy as np from astropy.time import Time import Ska.engarchive.fetch_eng as fetch from Ska.Numpy import interpolate from kadi import events from sherpa import ui from Ska.Matplotlib import plot_cxctime # - # %matplotlib inline SIM_MM_TO_ARCSEC = 20.493 # Discrete jumps after 2012:001. Note also jumps at: # '2008:293', # IU-reset # '2010:151', # IU-reset # '2011:190', # Safe mode JUMPS = ['2015:006', # IU-reset '2015:265', # Safe mode 6 '2016:064', # Safe mode 7 ] ltt_bads = events.ltt_bads(pad=(0, 200000)) normal_suns = events.normal_suns(pad=(0, 100000)) # Aspect camera CCD temperature trend since 2010 t_ccd = fetch.Msid('aacccdpt', start='2010:001', stat='5min') t_ccd.remove_intervals(ltt_bads | normal_suns) t_ccd.plot() plt.ylabel('T_ccd (degF)') plt.title('ACA CCD temperature') plt.grid() # Get aspect solution DY and DZ (apparent SIM offsets via fid light positions) # which are sampled at 1 ksec intervals and updated daily. if 'adat' not in globals(): h5 = tables.openFile('/proj/sot/ska/data/aimpoint_mon/aimpoint_asol_values.h5') adat = h5.root.data[:] h5.close() adat.sort(order=['time']) # Filter bad data when asol DY and DZ are both exactly 0.0 (doesn't happen normally) bad = (adat['dy'] == 0.0) & (adat['dz'] == 0.0) adat = adat[~bad] class AcaDriftModel(object): """ Class to encapsulate necessary data and compute the model of ACA alignment drift. The object created from this class is called by Sherpa as a function during fitting. This gets directed to the __call__() method. """ YEAR0 = 2016.0 # Reference year for linear offset def __init__(self, adat, start='2012:001', stop=None): """ adat is the raw data array containing aspect solution data sampled at 1 ksec intervals. """ # Get the ACA CCD temperature telemetry t_ccd = fetch.Msid('aacccdpt', stat='5min', start=start, stop=stop) # Slice the ASOL data corresponding to available ACA CCD temps i0, i1 = np.searchsorted(adat['time'], [t_ccd.times[0], t_ccd.times[-1]]) self.asol = adat[i0:i1].copy() # Convert from mm to arcsec for convenience self.asol['dy'] *= SIM_MM_TO_ARCSEC self.asol['dz'] *= SIM_MM_TO_ARCSEC self.times = self.asol['time'] self.years = Time(self.times, format='cxcsec').decimalyear self.years_0 = self.years - self.YEAR0 # Resample CCD temp. data to the 1 ksec ASOL time stamps self.t_ccd = interpolate(t_ccd.vals, t_ccd.times, self.asol['time'], method='linear') # Get indices corresponding to jump times for later model computation self.jump_times = Time(JUMPS).cxcsec self.jump_idxs = np.searchsorted(self.times, self.jump_times) def __call__(self, pars, years=None, t_ccd=None): """ Calculate model prediction for DY or DZ. 
Params are: scale : scaling in arcsec / degF offset : ACA CCD temperature corresponding to DY/Z = 0.0 arcsec trend : Trend in DY/Z (arcsec / year) jumpYYYYDDD : discrete jump in arcsec at date YYYY:DDD """ # Sherpa passes the parameters as a list scale, offset, trend = pars[0:3] jumps = pars[3:] # Allow for passing in a different value for ACA CCD temperature if t_ccd is None: t_ccd = self.t_ccd # Compute linear part of model out = (t_ccd - offset) * scale + self.years_0 * trend # Put in the step function jumps for jump_idx, jump in izip(self.jump_idxs, jumps): if jump_idx > 10 and jump_idx < len(out) - 10: out[jump_idx:] += jump return out def fit_aimpoint_aca_temp(axis='dy', start='2012:180', stop=None): """ Use Sherpa to fit the model parameters """ # Create the object used to define the Sherpa user model, then # load as a model and create parameters aca_drift = AcaDriftModel(adat, start, stop) ui.load_user_model(aca_drift, 'aca_drift_model') parnames = ['scale', 'offset', 'trend'] parnames += ['jump{}'.format(re.sub(':', '', x)) for x in JUMPS] ui.add_user_pars('aca_drift_model', parnames) # Sherpa automatically puts 'aca_drift_model' into globals, but # make this explicit so code linters don't complain. aca_drift_model = globals()['aca_drift_model'] # Get the DY or DZ values and load as Sherpa data dyz = aca_drift.asol[axis] ui.load_arrays(1, aca_drift.years, dyz) # Set the model and fit using Simplex (Nelder-Mead) minimization ui.set_model(1, aca_drift_model) ui.set_method('simplex') ui.fit(1) return aca_drift, ui.get_fit_results() def plot_aimpoint_drift(axis, aca_drift, fit_results): """ Plot our results """ # DY or DZ values from aspect solution dyz = aca_drift.asol[axis] # Call model directly with best-fit parameters to get model values dyz_fit = aca_drift(fit_results.parvals) dyz_resid = dyz - dyz_fit years = aca_drift.years times = aca_drift.times plt.figure(1) plt.clf() plot_cxctime(times, dyz, label='Data') plot_cxctime(times, dyz_fit, 'r-', alpha=0.5, label='Fit') plot_cxctime(times, dyz_resid, 'r-', label='Residual') plt.title('Fit aspect solution {} to scaled ACA CCD temperature' .format(axis.upper())) plt.ylabel('{} (arcsec)'.format(axis.upper())) plt.grid() plt.legend(loc='upper left', framealpha=1.0) plt.draw() plt.show() std = dyz_resid.std() p1, p99 = np.percentile(dyz_resid, [1, 99]) print('Fit residual stddev = {:.2f} arcsec'.format(std)) print('Fit residual 99th - 1st percentile = {:.2f}'.format(p99 - p1)) # ## Fit model coefficients for DY and plot results aca_drift_dy, fit_dy = fit_aimpoint_aca_temp('dy') plot_aimpoint_drift('dy', aca_drift_dy, fit_dy) # ### Illustrate model behavior by assuming a constant ACA CCD temperature dyz_fit = aca_drift_dy(fit_dy.parvals, t_ccd=7) plot_cxctime(aca_drift_dy.times, dyz_fit) plt.title('DY drift model assuming constant ACA temperature') plt.ylim(14, 32) plt.grid(); # ## Fit model coefficients for DZ and plot results aca_drift_dz, fit_dz = fit_aimpoint_aca_temp('dz') plot_aimpoint_drift('dz', aca_drift_dz, fit_dz) # ### Illustrate model behavior by assuming a constant ACA CCD temperature dyz_fit = aca_drift_dz(fit_dz.parvals, t_ccd=7) plot_cxctime(aca_drift_dz.times, dyz_fit) plt.title('DZ drift model assuming constant ACA temperature') plt.ylim(16, 32) plt.grid(); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- from IPython.core.display import 
display, HTML from sqlalchemy import create_engine engine = create_engine(open('~/.sqlconninfo').read()) import pandas as pd z = pd.read_sql_query("""SELECT distinct start_station_id as id, start_station_name as name, start_station_latitude as latitude, start_station_longitude as longitude from bike_ingest order by id """, engine) display(HTML(z.to_html())) m = z.groupby('id').count() display(HTML(z.merge(m[m.name > 1], left_on='id', right_index=True, how='inner').to_html())) z = pd.read_sql_query("""SELECT DISTINCT start_station_id as id, start_station_name as name, start_station_latitude as lat, start_station_longitude as lon FROM bike_ingest ;""", engine) z z = pd.read_sql_query("""SELECT DISTINCT end_station_id as id, end_station_name as name, end_station_latitude as lat, end_station_longitude as lon FROM bike_ingest ;""", engine) z z = pd.read_sql_query(""" SELECT DISTINCT id, name, lat, lon FROM ( SELECT start_station_id AS id, start_station_name AS name, start_station_latitude AS lat, start_station_longitude AS lon FROM bike_ingest AS T1 UNION ALL SELECT end_station_id AS id, end_station_name AS name, end_station_latitude AS lat, end_station_longitude AS lon FROM bike_ingest AS T2 ) AS T; """, engine) m = z[['name', 'id']].groupby('id').count() display(HTML((m[m.name > 1].merge(z, left_index=True, right_on="id", how='inner')).to_html())) z = pd.read_sql_query("""SELECT * from bike_ingest limit 1;""", engine) z.columns # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Compute uni-mutational fractions # %run "Header.ipynb" # %run "GeneUtils.ipynb" import pileup seq2pos2pileup = pileup.load() # ## Compute uni-mutational fraction for each position in the genome # # ... Rather than write it out again here, just see the paper appendix on this # + seq2pos2f = {} for seq in SEQS: seq2pos2f[seq] = {} for pos in range(1, seq2len[seq] + 1): mismatch_cts = pileup.get_mismatch_cts(seq2pos2pileup[seq][pos]) sum_mismatches = sum(mismatch_cts) if sum_mismatches > 0: f = max(mismatch_cts) / sum_mismatches seq2pos2f[seq][pos] = f else: # There are no non-matches, so N2, N3, and N4 are all 0. This means that f is undefined # (since it'd be 0 / 0). seq2pos2f[seq][pos] = None # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Instructions # Please be patient while waiting for the environment to load; it may take a few minutes. # Once the notebook has finished loading, in the top bar, select `Kernel`--> `Restart & Run All`. # Once cell has finished running, you should see a plot with sliders appear at the bottom. # Enjoy! #Imports import sys; sys.path.append('../python/'); import widget_BH_clean as widget; # %matplotlib inline # Widget Output widget.VBox([widget.button,widget.out,widget.interactive_plot(widget.f)]) # ## Description for NGC 5533 slider: # # Black holes are ***MACHOs*** or Massive Compact Halo Objects. They are hard to detect with our present technology, especially if they have small masses. For example, a black hole with the mass of our sun has an event horizon radius of 3 km. A solar-mass sized black hole would be so tiny that the only way we would be able to see it is by gravitational microlensing (blackhole passing in front of a star). 
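# As a quick back-of-the-envelope check of the 3 km figure above, the Schwarzschild radius is
# $r_s = 2GM/c^2$. The cell below evaluates it for one solar mass using standard values of the
# constants; it is only an illustration and is not used by the widget code.
# +
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30   # one solar mass [kg]
c = 2.998e8        # speed of light [m/s]

r_s = 2 * G * M_sun / c**2
print("Schwarzschild radius of a 1 solar-mass black hole: {:.2f} km".format(r_s / 1000))  # ~2.95 km
# -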
# # The left photo is an arbitrary face-on galaxy (NGC 6814) for a better visualization of the spread of the number of black holes. ***Each dot represents 1 million tiny black holes***. # # Increasing the number and mass of each black hole can change the velocities of the stars in a galaxy (velocity curve). # There are three sliders that can change the velocity curve of the dark halo. # 1. ***Number of millions of tiny black holes***: We need to start with an incredibly high number of tiny black holes to account for the dark matter halo curve. Starting with merely 50 million, all the way up to 500 million tiny black holes, the velocity curve changes drastically. # 2. ***Mass of each tiny black hole (in solar masses)***: Starting from 0.1 solar masses, and up to 3.8 solar masses (which is the smallest black hole ever discovered according to https://www.scientificamerican.com/gallery/the-smallest-known-black-hole/), this slider can change the mass of each black hole. # 3. ***Cutoff radius***: The cutoff radius (also called "scale radius") indicates where the density falls off by a factor of e (~2.7). Adjusting this factor changes where the "bump" of the curve is located. # # All other components (bulge, disk, central supermassive black hole) are fixed because these values (their mass to light ratio) can be measured. # # ### ... # If you wish to see the optimal number and individual mass of theoretical black holes, press the `Best Fit` button at any time. But be creative, there's more than one optimal combination. Enjoy! #Imports import sys; sys.path.append('./python/'); import w5 as widget; # Widget Output # %matplotlib inline widget.VBox([widget.button,widget.out,widget.interactive_plot(widget.f)]) # ## Description for NGC 7814 slider: # # ### Right Graph: # The rotation curve of NGC 7814 (represented by blue data points in the right graph) is emulated after Fraternali et al. (2011) which can be accessed here. The graph to the right represents an interactive rotation curve of NGC 7814 similar to what you can see in our rotation curve widget program, however, in this case all components have been fixed except for the dark matter component (green line), and now it is modeled based on a hypothetical mass distribution of small black holes. # ### Left Graph: # The left graph shows a visual helper of what the rotation curve is physically describing (note that the galaxy image used is NOT of NGC 7814, but a similar spiral galaxy). Here, each dot represents 23 million blackholes (an arbitrary scaling factor). Keep careful watch as you move the sliders to see each "black hole" (red dots) change in quantity and size. It's up to you to choose how many black holes (by factors of 23 million) and masses of each black hole in solar masses (1 solar mass = almost $2 × 10^{30}$ kilograms!) to find a "best fit" rotation curve, which is to say, a mass distrubtion of black holes that would most closely match real observations. # ### ... # If you wish to see the optimal number and individual mass of theoretical black holes, press the `Best Fit` button at any time. But be creative, there's more than one optimal combination. Enjoy! # ## Image Sources: # “A Spiral Snowflake.” The European Space Agency, ESA/Hubble & NASA, 9 May 2016, esahubble.org/images/potw1619a/. # # ## References: # F. . . Kamphuis (2011), **A tale of two galaxies: light and mass in NGC 891 and NGC 7814**. Astronomy & Astrophysics, 531: A64. https://doi.org/10.1051/0004-6361/201116634 # # . 
(2008), **The rotation curves of flattened Sérsic bulges**. Monthly Notices of the Royal Astronomical Society, 385: 1359-1364. https://doi.org/10.1111/j.1365-2966.2008.12837.x # # NASA. “**The Smallest Known Black Hole**.” Scientific American, 2 Apr. 2008, www.scientificamerican.com/gallery/the-smallest-known-black-hole. # # “**GIPSY, the Groningen Image Processing System**.” GIPSY, www.astro.rug.nl/~gipsy/index.html. Accessed 2020-2021. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={} colab_type="code" id="1Nhau6hODx_H" from __future__ import print_function import numpy as np import pickle import pandas as pd from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report, confusion_matrix from sklearn.preprocessing import LabelEncoder from keras.preprocessing import sequence from keras.layers.noise import GaussianNoise from keras.models import Sequential from keras.layers.core import Dense,Dropout,Activation,Flatten from keras.layers.normalization import BatchNormalization from keras.layers.convolutional import Conv1D,MaxPooling1D from keras.utils import np_utils from tensorflow.keras import regularizers # + colab={} colab_type="code" id="cplzftzzROem" #set parameters nb_filter = 64 filter_length_1 = 10 filter_length_2 = 7 filter_length_3 = 3 hidden_dims = 10 batch_size = 64 nb_epoch = 100 nb_classes = 16 # + colab={} colab_type="code" id="YosiMd03RqAE" # #load the sequential MFCC data # with open(os.path.join(proj_dir, 'seq_mfcc.pickle'), 'rb') as jar: # mfcc_data = pickle.load(jar) mfcc_filename = "mfccs_dev_22050.csv" dataset = pd.read_csv(mfcc_filename) df = pd.DataFrame(dataset) mfcc_data = df[["mfcc_mean1","mfcc_mean2","mfcc_mean3","mfcc_mean4","mfcc_mean5","mfcc_mean6","mfcc_mean7","mfcc_mean8","mfcc_mean9","mfcc_mean10","mfcc_mean11","mfcc_mean12","mfcc_mean13"]] test_dim = mfcc_data.shape[0] #set up targets (labels) accent_data = pd.read_csv("cv-valid-dev-acc-mp3.csv") targets_raw = np.array(accent_data['accent']) label_encoder = LabelEncoder() mfcc_targets = label_encoder.fit_transform(targets_raw) X_train,X_test,y_train,y_test = train_test_split( mfcc_data,mfcc_targets,test_size=0.20,random_state=42 ) Y_train = np_utils.to_categorical(y_train, nb_classes) Y_test = np_utils.to_categorical(y_test, nb_classes) X_train = X_train.values.reshape(X_train.shape[0], X_train.shape[1], 1) X_test = X_test.values.reshape(X_test.shape[0], X_test.shape[1],1) print(len(X_train), 'train sequences') print(len(X_test), 'test sequences') print('X_train shape:', X_train.shape) print('X_test shape:', X_test.shape) print('y_train shape:', y_train.shape) print('y_test shape:', y_test.shape) # + colab={} colab_type="code" id="6DbKSHznSMzW" #build model model = Sequential() #layer 1 #add a Convolution1D which will learn nb_filter mfcc groups: model.add(Conv1D(filters=nb_filter, kernel_size=filter_length_1, input_shape=(X_train.shape[1], 1), padding='valid', activation='relu' )) #batch normalization to keep weights in [0,1] range: model.add(BatchNormalization()) #layer2 model.add(Conv1D(filters=nb_filter, kernel_size=filter_length_2, padding='same', activation='relu' )) # #standard maxpooling to halve the output of previous layer # model.add(MaxPooling1D(pool_size=2)) #layer3 model.add(BatchNormalization()) model.add(Conv1D(filters=nb_filter, kernel_size=filter_length_2, 
padding='same', activation='relu' )) model.add(MaxPooling1D(pool_size=2)) # #layer4 # model.add(BatchNormalization()) # model.add(Conv1D(filters=nb_filter, # kernel_size=filter_length_3, # padding='same', activation='relu' # )) # # model.add(MaxPooling1D(pool_size=2)) # #layer5 # model.add(BatchNormalization()) # model.add(Conv1D(filters=nb_filter, # kernel_size=filter_length_1, # padding='same', activation='relu' # )) # model.add(MaxPooling1D(pool_size=2)) #flatten the output of the convolutional layers #so that we can add a vanilla dense layer model.add(Flatten()) # Dropout reduces overfitting model.add(Dropout(.25)) #project onto a single unit output layer #and squash with a softmax into 0-1 probability space model.add(Dense(nb_classes,activation='softmax', # kernel_regularizer=regularizers.l2(1e-5) kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4) )) #fit model model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=["accuracy"]) model.summary() history = model.fit(X_train,Y_train, batch_size=batch_size, epochs=nb_epoch, validation_data=(X_test, Y_test), verbose=1) #print report of recall, precision, f1 score y_pred = model.predict_classes(X_test) print(classification_report(y_test,y_pred)) # + #increased epoch from 10 to 30 #decreased nb_filters from 512 to 64 #increased dropout rate from 0.1 to 0.25 (accuracy converges to 0.85) # + from keras.callbacks import History import matplotlib.pyplot as plt print(history.history.keys()) # "Accuracy" plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model acuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'validation'], loc='upper left') plt.show() # "Loss" plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'validation'], loc='upper left') plt.show() # - print(confusion_matrix(y_test,y_pred)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # My Early Journey in TACC, Navigating Linux, and More # > Brief notes over my workflow organization and linux terminal commands # # - categories: [jupyter] # ## The Workflow Layout # All computational work is done on the TACC supercomputer at UT. I was allocated to the Frontera machine, and my first task was to ssh into the supercomputer so that I could set up my files and directories in order to get working. # # ssh -X # # The main directories I use are # # - /home1/08068/ekun # # and # # - /work2/08068/ekun/frontera # # as well as # # - /scratch1/08068/ekun # # The main [website](vis.tacc.utexas.edu) used to create Jupyter Notebooks for FastAI and creating my own classifiers. All Jupyter Notebooks created on vis.tacc.utexas.edu are stored on the home directory # # All UKBiobank directory metadata and images are located at # # - /corral-repl/utexas/UKB-Imaging-Genetics/ # # This repo has a lot which I still need to explore, but all the DXA Images are located under # # - /corral-repl/utexas/UKB-Imaging-Genetics/Imaging_Data/DXA/DXA_Images # # This folder has all patient EID zip files with their images. 
# # - /corral-repl/utexas/UKB-Imaging-Genetics/unzipped_DXA_Images # # This folder has all patient EID files unzipped with their images as well as DXA images by body part # # ### Other Directories of Interest: # # Generated CSV files are stored in # # - /work2/08068/ekun/frontera/output_files # # Currently a copy of all unzipped DXA images and the images separated by body parts are located in # # - /scratch1/08068/ekun/unzipped_DXA_Images # ## Navigating The Linux Terminal # My first issues arose after ssh-ing into TACC as the TACC UI is a Linux command line terminal. The most useful commands I learned were as follows: # # - cd /directory - Enters the directory specified # - cd .. - Navigates back a directory # - pwd - Displays path of current directory # - ls - Displays files of current directory # - ls -la - Displays all files of current directory and additional information # - wc -l /file - gives line count of a file # - ls | wc -l allows you to pipe the line count command to the directory, giving the file count of the directory # - head filename.txt or .csv - Displays first few lines of most files # - less -S filename.txt or .csv - Allows you to easily view files # - vim filename.txt or .sh allows you to begin writing a file whether it be a text file or bash script # - i - allows you to start writing inside vim # - :wq - save and close the file # - scp -r /directory /destination - recursively copies a directory or file to a destination # - rm /file - deletes a file # - rm -r - recursively deletes a folder and everything inside # - grep "term" /file - searches a file for given term # - ls | grep "term" - searches current directory for a given term # ## Writing My First Bash Scripts # The DXA Images were all stored in .zip files with the following syntax: PatientEID_DataField_2_0.zip (ex: 1003186_20158_2_0.zip) # # I first had to write a bash script to unzip the files and rename the folders containing the images to retain the patient EID. 
(ex: 1003186_20158_2_0_unzip) # ```shell # filename unzip.sh # # #!/bin/bash # # # echo "This unzips all the files in the directory and puts the unzipped files into directories named after the original zip folder" # # for f in *.zip; do # unzip -d "${f%.zip}_unzip" "$f" # done # # # To unzip the files and put them into a new directory outside of the current directory while renaming them: # # for f in *.zip; do # unzip -d "/corral-repl/utexas/UKB-Imaging-Genetics/unzipped_DXA_Images/${f%.zip}_unzip" "$f" # done # ``` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import glob import datetime import requests import json import ast # %matplotlib inline # - # set working folder for hierarchy files working_folder = '2018-02-16-hierarchy-files' update_date = datetime.datetime.today().strftime('%m.%d.%y') # + # download latest indicators dataset in Backend PROD response = requests.get('http://tcdata360-backend.worldbank.org/api/v1/indicators/') tc_indicators = pd.read_json(response.text) # set key columns for hierarchy file key_columns = list(df_ind.columns) key_columns.append('id') # get latest hierarchy.indicators file shared with Vendor df_ind_filename = glob.glob("%s/hierarchy.indicators *.csv" % working_folder)[0] df_ind = pd.read_csv(df_ind_filename) # merge indicators dataset API ID against latest hierarchy file df_tc_hier_id_mapping = df_ind.merge(tc_indicators, how='left', left_on=['Display Name', 'Dataset','Value Type Slug', 'Value Type Descriptor', 'Units'], right_on=['name', 'dataset','valueType', 'subindicatorType','units'])[key_columns] # check if pulled IDs are unique. if df_tc_hier_id_mapping['id'][df_tc_hier_id_mapping['id'].notnull()].is_unique: print("IDs are unique.") current_new = df_tc_hier_id_mapping[(pd.to_numeric(df_tc_hier_id_mapping['id'], errors='coerce') != pd.to_numeric(df_tc_hier_id_mapping['Indicator ID'], errors='coerce')) & (df_tc_hier_id_mapping['id'].notnull())]['Indicator ID'].value_counts().sum() print("There are currently %d indicators labeled as 'New' or 'new' in the latest hierarchy file." % current_new) # update all new indicators with latest API ID. df_tc_hier_id_mapping.loc[df_tc_hier_id_mapping['Indicator ID'].isin(['new', 'New']), 'Indicator ID'] = np.nan df_tc_hier_id_mapping['Indicator ID_updated'] = df_tc_hier_id_mapping['Indicator ID'].combine_first(df_tc_hier_id_mapping['id']) # all uningested indicators will have ID = "new" uningested_indicators = df_tc_hier_id_mapping.loc[df_tc_hier_id_mapping['Indicator ID_updated'].isnull(), 'Indicator ID_updated'].size df_tc_hier_id_mapping.loc[df_tc_hier_id_mapping['Indicator ID_updated'].isnull(), 'Indicator ID_updated'] = 'new' print("Finished updating indicator IDs for %d indicators in the hierarchy file." % (current_new - uningested_indicators)) print("There are %d uningested indicators in the hierarchy file labeled as 'new'." 
% uningested_indicators) # update Indicator ID column in hierarchy file df_tc_hier_id_mapping['Indicator ID'] = df_tc_hier_id_mapping['Indicator ID_updated'] df_tc_hier_id_mapping = df_tc_hier_id_mapping[df_ind.columns] df_tc_hier_id_mapping.to_csv("%s/hierarchy.indicators %s.csv" % (working_folder, update_date), index=False, line_terminator='\r') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python3 (tf-gpu) # language: python # name: tf-gpu # --- import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import pickle import numpy as np bus_data_normal = pickle.load(open('output/bus_data_normal_traffic.pkl', 'rb')) bus_data_remove_traffic = pickle.load(open('./bus_data_removed_background_equal_speed.pkl', 'rb')) bus_id = 'Route16_trip174568020_06:30:00' # + # %matplotlib notebook ax = plt.axes(projection='3d') df_normal = pd.DataFrame(bus_data_normal[bus_id]) df_normal['sim'] = 'normal' df_1m = pd.DataFrame(bus_data_remove_traffic[bus_id]) df_1m['sim'] = '1-min' print(bus_id) normal_traj = np.array(df_normal['position'].tolist()) remove_traj = np.array(df_1m['position'].tolist()) ax.scatter3D(remove_traj[:, 0], remove_traj[:, 1], df_1m['step'].tolist(), s=0.3, c='b', label='1-min', alpha=0.5) ax.scatter3D(normal_traj[:, 0], normal_traj[:, 1], df_normal['step'].tolist(), s=0.3, c='orange', label='normal', alpha=0.5) ax.set_xlabel('X') ax.set_ylabel('Y') ax.set_zlabel('time step', rotation = -90) plt.legend() plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="I-NZXBybeUFS" # # Importing the Required Libraries # + id="3H2A9oApeNzU" import cv2 # for using computer vision related functions import numpy as np # for numerical computations on 2D image array import pandas as pd # for dataset preparation for deep learning libraries import matplotlib.pyplot as plt # for displaying image and plotting graph # + id="BWQa96fBoYX-" def gaussian_filter(img, mask_size = 5, sigma = 2): offset = mask_size // 2 x, y = np.meshgrid(range(-offset, offset + 1), range(-offset, offset + 1)) gauss_filter = np.exp(-((x ** 2 + y ** 2) / (2 * sigma ** 2))) gauss_filter /= gauss_filter.sum() return cv2.filter2D(src = img, ddepth = -1, kernel = gauss_filter) # + colab={"base_uri": "https://localhost:8080/", "height": 773} id="e-8Kn2h8eZNf" outputId="8b95f4d5-8189-4fb0-8786-31b2fc9c21f9" img = cv2.imread("/content/drive/MyDrive/sem 8/CV/processed_shapes/shapes.png") orig_img = img.copy() gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) threshed_img = cv2.threshold(gaussian_filter(gray, mask_size = 5, sigma = 10), 0, 1, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1] plt.imshow(img), plt.show(); plt.imshow(gray, cmap = 'gray'), plt.show(); plt.imshow(threshed_img, cmap = 'binary_r'), plt.show(); # + colab={"base_uri": "https://localhost:8080/", "height": 269} id="42P8qPJRmtaa" outputId="599324a1-71d5-44df-99a8-b4da09404824" edges = cv2.Canny(threshed_img, 0.2, 0.8) plt.imshow(edges, cmap = 'gray'); # + colab={"base_uri": "https://localhost:8080/", "height": 269} id="HRzl-KKKhWRO" outputId="474e3366-0565-4c7a-8be9-3f174588f26c" contours, hierarchy = cv2.findContours(edges.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) blank = np.zeros(threshed_img.shape) cv2.drawContours(blank, contours, -1, (255,0,0), 1) 
plt.imshow(blank, cmap = 'binary'); # + id="D7H3AcdvsafN" # the different classes of our shapes categories = ["circle", "square", "star", "triangle"] # + id="aGnwesbBt2Le" # # !pip install cPickle import _pickle as cPickle # load the gaussian model again with open('/content/drive/MyDrive/sem 8/CV/processed_shapes/gauss-without-lda.pkl', 'rb') as fid: clf_loaded = cPickle.load(fid) # + colab={"base_uri": "https://localhost:8080/", "height": 428} id="l_q_wMngerP-" outputId="883dc16a-b50e-463e-f4c9-641804748467" # obtaining the bounding box, extracting and saving the ROI (region of interest) font font = cv2.FONT_HERSHEY_SIMPLEX # org org = (50, 50) # fontScale fontScale = 0.5 # Blue color in BGR color = (255, 0, 0) # Line thickness of 2 px thickness = 2 ROI_number = 0 img = orig_img.copy() for c in contours: offset = 5 x,y,w,h = cv2.boundingRect(c) x = x-offset y = y-offset w += 2*offset h += 2*offset cv2.rectangle(img, (x, y), (x + w, y + h), (36,255,12), 2) ROI = cv2.resize(blank[y:y+h, x:x+w], (25,25), interpolation = cv2.INTER_AREA) thres, ROI_thresh = cv2.threshold(ROI, 50, 255, cv2.THRESH_BINARY); ROI_thresh = ROI_thresh/ROI_thresh.max() pred = clf_loaded.predict([ROI_thresh.flatten()]) cv2.putText(img, categories[pred[0]], (x, y), font, fontScale, color, thickness, cv2.LINE_AA) plt.imshow(img); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # WELCOME TO THE YOUTUBE SCRAPER USING BS4 # Hello All, I will show you all how to scrape youtube and display the video in the Notebook itself. Also, we can store the details in the excel for further use. # Before Scraping any website, go through their terms and conditional. 
# # You can find the code in github - https://github.com/JohnPravin97/Youtube_Scraper # ## To Import the Needed Library # from bs4 import BeautifulSoup from urllib.request import urlopen, Request from selenium import webdriver import pandas as pd from IPython.display import IFrame, YouTubeVideo #from webdriver_manager.chrome import ChromeDriverManager import os # ## Scraper Function def youtube_scraper(inp): Search = '+'.join(inp.split()) driver = webdriver.Chrome(executable_path=r'C:\Users\jpravijo\Desktop\Anaconda\chromedriver_win32 (2)\chromedriver.exe') driver.get('https://www.youtube.com/results?search_query='+Search) html = driver.page_source soup = BeautifulSoup(html) search = soup.find('body', dir='ltr') first_content = soup.find('div', id='content') link,name,channel=[],[],[] for i,second_content in enumerate (first_content.find_all('div', class_='text-wrapper style-scope ytd-video-renderer')): try: third_content=second_content.find('h3', class_='title-and-badge style-scope ytd-video-renderer') # To get the link of the song link.append(('https://www.youtube.com'+(third_content.a)['href']).strip()) # To get the name of the song k=third_content.a.text.strip() name.append(k) # To get the channel details of the songs channel.append(second_content.find('div', class_='hidden style-scope paper-tooltip').text.strip()) if i>10: break except: pass dic={'Name of the Songs': name, 'Channel': channel, 'Links':link} print ('\n') print ('\033[1m' + 'These are the top 5 searches from youtube for your search'.center(50)) driver.close() return dic inp=input('enter the song to play from youtube - Scraping the data: ') df= pd.DataFrame(youtube_scraper(inp)) df.head() # ## Choose the Scraped data to play the songs print('\033[1m'+"please select the songs by choosing the indexes (e.g) 0,1,2,3,4") song_input=int(input()) YouTubeVideo(df.loc[song_input, 'Links'].split('=')[1], width=800, height=600) # ## To save the dataframe into HDF Format and CSV format hdf=pd.HDFStore(r'C:\Users\jpravijo\Desktop\Anaconda\youtube.h5'); hdf.put(inp,df,format='table', data_columns=True); df.to_csv(r'C:\Users\jpravijo\Desktop\Anaconda\youtube.csv',mode='a'); # ## To open the CSV which contains the Dataframe os.startfile(r'C:\Users\jpravijo\Desktop\Anaconda\youtube.csv') # + [markdown] heading_collapsed=true # ## To open the HDF which contains the Dataframe # + hidden=true hdf=pd.HDFStore(r'C:\Users\jpravijo\Desktop\Anaconda\youtube.h5') hdf.keys() # + hidden=true hdf.get('/Alan Walker Faded song') # + [markdown] hidden=true # THANK YOU # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Bokeh Libraries from bokeh.io import output_notebook from bokeh.plotting import figure, show # The figure wil be rendered into a static HTML file output_notebook() # Set up a generic figure() object fig = figure() # view the figure show(fig) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # <제어문> # ## - Control statement # # 프로그램의 흐름을 제어하는 방법
    # Control flow uses keywords such as if, else, elif, while, and for. for i in range(10): while True: i = i + 1 print(i, end="") if i >= 10: break print() # ## - Conditional statement # # if condition :
    #     statement 1
    #     statement 2
    #
    # # If the condition evaluates to True, the indented statements are executed.
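# For example, a single if with no elif or else runs its indented block only when the condition is True:

# +
score = 85
if score >= 80:
    print("pass")
# -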

    # # # elif condition :
    #     statement 1
    #     statement 2
    #
    # # This can only be used after an if; when the if condition is not true, it checks another condition.

    # # else :
    #     statement 1
    #     statement 2
    #
    # # This can only be used after an if; it runs when the if condition is not true. # + num = 5 if num == 1: print("num is 1.") elif num == 2: print("num is 2.") elif num == 3 or num == 4: print("num is 3 or 4.") else: print("num is 5.") # - # ## - Iteration statement # # while condition :
    #     statement 1
    #     statement 2
    #
    # # If the condition evaluates to True, the indented statements are executed.
    # After the statements finish, the condition is checked again and the loop repeats.
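# For example, this while loop keeps adding numbers until its condition becomes False:

# +
total = 0
n = 1
while n <= 5:
    total = total + n
    n = n + 1
print(total)  # 15
# -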

    # # # for variable in list :
    #     statement 1
    #     statement 2
    #
    # # Each element of the list is taken out in order, assigned to the variable, and then the indented statements are executed.

    # # A helper commonly used with for statements is range( start(optional), end, step(optional) ),
    # which returns the sequence of numbers over the given range. # + num = 10 while num > 0: print(num, end="") num = num - 1 print() for t in ["Hello", "list", "for"]: print(t,end="") print() for i in range(10): print(i,end="") # - # ### Using loops and conditionals together # # a = [1, 2, 3, 4]
    # [x**2 for x in a] 와 같이 for의 결과를 바로 리스트로 만들 수 있다. a = [1, 2, 3 ,4] print([x**2 for x in a]) # + a = [2, 3] if 3 in a: print("3이 있음") else : if 2 in a: print("2가 있음") print("없음") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import sys sys.path.append('/Users/spacecoffin/Development') import GravelKicker as gk import os import pandas as pd from datetime import datetime from supriya.tools import nonrealtimetools # - this_dir = '/Users/spacecoffin/Development/GravelKicker/__gen_files' # # Batch processing # # 1. Decide on a size limit for aiff files (882kb/10s file) # 2. Generate that much files # 3. Process those files and append results to DataFrame # 4. Remove those files import hurry.filesize hurry.filesize.size(903168, system=hurry.filesize.alternative) hurry.filesize.size(882102, system=hurry.filesize.si) hurry.filesize.size(882000, system=hurry.filesize.si) hurry.filesize.size(1073741824, system=hurry.filesize.alternative) hurry.filesize.size(1073741824, system=hurry.filesize.si) 1073741824 / 882102 # # Loading # + dir_list = os.listdir(path=this_dir) if "df.p" in dir_list: _pickle_path = os.path.join(this_dir, "df.p") _old_df = pd.read_pickle(_pickle_path) # - _old_df["hash"] _old_df.dtypes pmtx = gk.generator.gendy1.gen_params(rows=20) df = gk.generator.gendy1.format_params(pmtx) df.sort_values(["hash"]) for i, row in df.iterrows(): session = nonrealtimetools.Session() builder = gk.generator.gendy1.make_builder(row) out = gk.generator.gendy1.build_out(builder) synthdef = builder.build() with session.at(0): synth_a = session.add_synth(duration=10, synthdef=synthdef) gk.util.render_session(session, this_dir, row["hash"]) dt = datetime.now().strftime("%Y_%m_%d") #_%H-%M-%S") identifier = '{0}-len{1}'.format(dt, str(df.shape[0])) df.to_pickle("{0}/df-{1}.p".format(this_dir, dt)) df.to_pickle("{0}/df.p".format(this_dir, dt)) # Next, we need to: # # 1. extract features from each file # 2. join features with parameter df (index on hash?) # 3. 
get it into a regressor # + active="" # # # def pickle_data(file_pickle, path): # """Saves pickle file depending on object instance""" # # if isinstance(file_pickle, pd.DataFrame): # file_pickle.to_pickle(path) # # elif isinstance(file_pickle, pyspark.RDD): # file_pickle.saveAsPickleFile(path) # # else: # with open(path, 'wb') as p_file: # pickle.dump(file_pickle, p_file) # # pickle_data(data_pandas, path='{0}/pandas-df-{1}.p'.format(dat_path, identifier)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Catalog Class # **** # ## Catalog requirements : # # * FW = ENGL # # * FC = DLS # # * FM = DLM # # * FN = DLN # # * FA = DLV # # * FH = DLL # # * FS = DLS class Catalog: courses = None courses_reqs = None year_from = None year_to = None def __init__(year_from, year_to): types_of_courses = ['UF','FW','FC','FM','FN','FA','FH','FS','FF', 'CS_req','Math_elective','Upper_CS'] courses_reqs = {key: list() for key in type_of_courses} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- import pickle from collections import Counter import codecs def read_sentences(filepath): sentences = [] with codecs.open(filepath, encoding="utf-8", mode="r") as fp: for sentence in fp: sentences.append(sentence.lower()) return sentences def read_sentences_format2(filepath): X = [] Y = [] with codecs.open(filepath, encoding = "utf-8", mode = "r") as fp: for sentence in fp: splits = sentence.split("\t") X.append(splits[3].strip().lower()) Y.append(splits[4].strip().lower()) return X, Y def create_dataset(l1_sentences, l2_sentences): l1_vocab_dict = Counter(word.strip(',." ;:)(][?!-\'') for sentence in l1_sentences for word in sentence.split()) l2_vocab_dict = Counter(word.strip(',." ;:)(|][?!<>a-zA-Z') for sentence in l2_sentences for word in sentence.split()) #l1_vocab_dict = Counter(word for sentence in l1_sentences for word in sentence.split()) #l2_vocab_dict = Counter(word for sentence in l2_sentences for word in sentence.split()) l1_vocab = list(map(lambda x: x[0], sorted(l1_vocab_dict.items(), key = lambda x: -x[1]))) l2_vocab = list(map(lambda x: x[0], sorted(l2_vocab_dict.items(), key = lambda x: -x[1]))) # Limit the vocabulary size. Consider only the top 20,000 and 30,000 words respectively l1_vocab = l1_vocab[:30000] l2_vocab = l2_vocab[:30000] # Build a Word to Index Dictionary for English start_idx = 2 l1_word2idx = dict([(word, idx+start_idx) for idx, word in enumerate(l1_vocab)]) l1_word2idx[''] = 0 # Unknown words l1_word2idx[''] = 1 # Padding word # Build an Index to Word Dictionary for English using the already created Word to Index Dictionary l1_idx2word = dict([(idx, word) for word, idx in l1_word2idx.items()]) # Build a Word to Index Dictionary for Hindi start_idx = 4 l2_word2idx = dict([(word, idx+start_idx) for idx, word in enumerate(l2_vocab)]) l2_word2idx[''] = 0 # Unknown l2_word2idx[''] = 1 l2_word2idx[''] = 2 # End of sentence l2_word2idx[''] = 3 # Padding # Build an Index to Word Dictionary for Hindi using the already created Word to Index Dictionary l2_idx2word = dict([(idx, word) for word, idx in l2_word2idx.items()]) # Encode words in senteces by their index in Vocabulary x = [[l1_word2idx.get(word.strip(',." 
;:)(][?!-\''), 0) for word in sentence.split()] for sentence in l1_sentences] y = [[l2_word2idx.get(word.strip(',." ;:)(|][?!<>a-zA-Z'), 0) for word in sentence.split()] for sentence in l2_sentences] #x = [[l1_word2idx.get(word, 0) for word in sentence.split()] for sentence in l1_sentences] #y = [[l2_word2idx.get(word, 0) for word in sentence.split()] for sentence in l2_sentences] X = [] Y = [] for i in range(len(x)): n1 = len(x[i]) n2 = len(y[i]) n = n1 if n1 < n2 else n2 if abs(n1 - n2) < 0.3 * n: if n1 <= 20 and n2 <= 20: X.append(x[i]) Y.append(y[i]) return X, Y, l1_word2idx, l1_idx2word, l1_vocab, l2_word2idx, l2_idx2word, l2_vocab def save_dataset(filepath, obj): with open(filepath, 'wb') as fp: pickle.dump(obj, fp, -1) def read_dataset(filepath): with open(filepath, 'rb') as fp: return pickle.load(fp) # + def main(): dataset_save_location = "D:/NLP Project/Hindi English/data.p" X, Y = read_sentences_format2("D:/NLP Project/Hindi English/hindencorp05.plaintext") out_obj = {'X':X, 'Y':Y} pickle.dump(out_obj, open("D:/NLP Project/Hindi English/hindencorp05.p", "wb")) #en_sentences = read_sentences(en_path) #hi_sentences = read_sentences(hi_path) #en_sentences.extend(X) #hi_sentences.extend(Y) en_sentences = X hi_sentences = Y #save_dataset(dataset_save_location , create_dataset(en_sentences, hi_sentences)) if __name__ == '__main__': main() # - X, Y = read_sentences_format2("D:/NLP Project/Hindi English/hindencorp05.plaintext") print(X[:2]) print(Y[:2]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import re import json import pandas as pd import numpy as np import requests from bs4 import BeautifulSoup import collections # + import pymongo import json # from Processing.Processing import Processing client = pymongo.MongoClient() db = client["news"] print(client['news'].list_collection_names()) coll_raw = db['bharian_raw'] coll_processed = db['thestar_pro'] # - # coll_raw.delete_many({}) len(list(coll_raw.find())) y = coll_raw.find_one() y list(x) x = coll_raw.aggregate( [ {"$group": { "_id": "$url", "count": { "$sum": 1 } } }, {"$match": {"count": { "$gte": 2 }}} ] ) x = coll_raw.update_many( {}, {"$unset": { "processed_date": "" }} ) urls = [y['_id'] for y in list(x)] len(urls) # + # for url in urls: # x = coll_raw.delete_one({'url':url}) # print(x.deleted_count) # + from datetime import datetime, timedelta x= datetime.now() - timedelta(days=2) # news_date = datetime.datetime.strptime(date_1, "%d %B %Y @ %I:%M %p") # - len(list(coll_processed.find())) list(coll_processed.find({'news_date':{'$gte':x}})) # + # ec_conn.indices.delete(index='news2') # ec_conn.indices.create(index='news2', ignore=400, body=mapping) # + from datetime import datetime from elasticsearch import Elasticsearch INDEX_NAME = 'all_news' ec_conn= Elasticsearch('http://127.0.0.1:9200') ec_conn ec_conn.search(index='all_news', body = {'from':0, 'size':1})['hits']['hits'][0]['_source'].keys() # + # ec.indices.create(index='test-index', ignore=400) # ec_conn.indices.delete(index='news2') # - ec_conn.search(index='news', body = {'from':0, 'size':30}) import docker client = docker.from_env() for container in client.containers.list(): print(container.id) # container.stop() ec_conn.search( index="all_news", body={ 'size':1, "query": { "multi_match": {"fields": ["title","content_text"], "query": 'covid malaysia kematian'} } } ) ec_conn.search( 
index="all_news", body={ 'size':1, "query": { "range": { "news_date": { "lt": datetime.now() } } } } ) # + datequery=[ {"index":"all_news"}, { "body":{ "query": { "range": { "news_date": { "lt": datetime.now() } } } } } ] textquery=[ {"index":"all_news"}, { "body":{ 'size':1, "query": { "multi_match": {"fields": ["title","content_text"], "query": 'covid malaysia kematian'} } } } ] ec_conn.msearch([datequery,textquery]) # - query = { "bool":{ "filter":[ { "multi_match": {"fields": ["title","content_text"], "query": 'sabar'} }, { "terms":{ "category":[ "News", ] } }, { "terms":{ "topic":[ "COVID-19", ] } }, { "range": { "news_date": {"gte": datetime.datetime(2020,4,10), "lte": datetime.datetime(2020,4,12)}, }, } ] } } parser.parse(date) import dateutil.parser as parser parser.parse(r[0].replace('|','')) response = requests.get('https://www.bharian.com.my/search?s=covid') soup = BeautifulSoup(response.text, 'lxml') [x.find('a').get('href') for x in (soup.find('div', class_='region-content').find_all('div', class_='block-content')[0].find_all('div', class_='views-field-title'))] import re r= re.search('\d.*', date) r[0].replace('|','') bahasa_months_dict = { "Januari": "January", "Februari": "February", "Mac": "March", "Mei": "May", "Julai": "July", "Ogos": "August", "Oktober": "October", "Disember": "December" } mon =[key for key in bahasa_months_dict.keys() if date.find(key) !=-1] date = date.replace(mon[0], bahasa_months_dict[mon[0]]) if (len(mon) > 0) else date cond = len([tar.find(key) for key in bahasa_months_dict.keys() if tar.find(key) !=-1])>0 mon datetime.strptime('12 Apr 2020', "%d %b %Y") tar = r[0].replace('|','') date = soup.find('div', class_='node-meta').text date soup = BeautifulSoup(y['content_html'], 'html.parser') x.find('div class="region region-content-bottom">') soup.find('img').get('alt') if soup.find('img') is not None: print('asda') soup.find("img").get("data-src") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 8F_controls # # 08/25/2020 # - positive controls = common loops, common ATAC, and expressed genes across all types # - negative controls = no loop, no ATAC, no expression across all cell types # # # # # # from collections import Counter, defaultdict import pandas as pd import numpy as np; np.random.seed(0) import os, glob import seaborn as sns; sns.set() import matplotlib import matplotlib.pyplot as plt import pybedtools from scipy.stats import ttest_ind # %load_ext autoreload # %autoreload 2 save_dir = '../data/processed/fig4_modelling/vocab_sum_final/' if not os.path.exists(save_dir): os.makedirs(save_dir) data_all = pd.read_csv('/Users/mguo123/Google Drive/1_khavari/omics_project-LD/pan_omics/data/processed/tissue_crms/all_count_sep_overall.csv',index_col=0,header=0) print(len(data_all.tissue.unique())) data_all.tissue.unique() # + # # expression labels # exp_label = list(np.log10(data_all.exp.values+1e-2)) # labels_all = np.array(np.array(exp_label)>THRES) # tissues_label = data_all.tissue.values#np.array((data_all.exp>THRES).values) # genes_all = data_all.index.values # - # # negative controls # no loop, no ATAC, no expression across all cell types for negative controls # # for the negative controls, narrow down the list by loci that don’t have an ATAC peak within a certain distance: Like 100 KB or some such? 
I # # make sure actually in a gene desert or region of inaccessible chromatin so really should be “off” / negative # NEG_DIST_MIN = 25000 # for negative control, gene must by at least 10kb away from closest atac peak # %%time neg_crms = data_all[data_all.iloc[:,1:].sum(axis=1)<0.001] print(data_all.shape) print(neg_crms.shape) gene_tissue_counts = pd.Series(neg_crms.index).value_counts() neg_genes = gene_tissue_counts[gene_tissue_counts==len(data_all.tissue.unique())].index len(neg_genes) # number of genes with no loop, ATAC or expression (looped or proximal) gene_regions = pybedtools.BedTool('../data/external/gencode.v19.gene.bed').to_dataframe() gene_regions_neg = gene_regions[gene_regions.name.isin(neg_genes)] print(gene_regions_neg.shape) # # genes with genomic location gene_regions_neg_bed = pybedtools.BedTool.from_dataframe(gene_regions_neg).sort() print(len(glob.glob('../data/interim/merged/atac/*bed'))) sorted(glob.glob('../data/interim/merged/atac/*bed')) # %%time all_far_genes = [] all_far_genes_dict = {} for atac_bed in sorted(glob.glob('../data/interim/merged/atac/*bed')): tissue = os.path.basename(atac_bed).split('_merged')[0] print(tissue) gene_regions_neg_bed_tissue = gene_regions_neg_bed.closest(atac_bed,d=True) gene_dist_df = gene_regions_neg_bed_tissue.to_dataframe().groupby('name').agg({'thickEnd':min}) gene_dist_df.columns = ['dist'] far_genes_tissue = list(gene_dist_df.index[gene_dist_df.dist > NEG_DIST_MIN]) print(len(far_genes_tissue)) all_far_genes+=far_genes_tissue all_far_genes_dict[tissue] = far_genes_tissue far_genes_count = pd.Series(Counter(all_far_genes)) neg_genes_far = sorted(far_genes_count.index[far_genes_count==len(glob.glob('../data/interim/merged/atac/*bed'))]) print(len(neg_genes_far)) print(neg_genes_far) pd.Series(neg_genes_far).to_csv(os.path.join(save_dir, 'neg_genes_far.csv'),index=None,header=None) # # 2. 
positive controls # # positive controls = common loops, common ATAC, and expressed genes across all types # + THRES=1 # for RNA normal_tissues_pos = [ 'Airway', 'Astrocytes', 'Bladder', 'Colon', 'Esophageal', 'GDSD0', 'GDSD3', 'GDSD6', 'GM12878', 'HMEC', 'Melanocytes', 'Ovarian', 'Pancreas', 'Prostate', 'Renal', 'Thyroid', 'Uterine'] cancer_tissues_pos = ['A431-CTRLi', 'CAL27-CTRLi', 'SCC13-CTRLi','COLO_SCR_DMSO','WM_SCR_DMSO', 'A431-p63i','CAL27-p63i', 'SCC13-p63i', 'COLO_SCR_PLX','COLO_shMITF_DMSO', 'COLO_shMITF_PLX'] all_tissue_pos = normal_tissues_pos + ['A431-CTRLi', 'CAL27-CTRLi', 'SCC13-CTRLi','COLO_SCR_DMSO','WM_SCR_DMSO'] # 'D0-CTRLi','D3-CTRLi', 'SCC13-p63i', 'D0-p63i', 'D3-p63i', # 'A431-CTRLi', 'CAL27-CTRLi', 'SCC13-CTRLi','COLO_SCR_DMSO','WM_SCR_DMSO'] # + # expressed_genes expr_df = data_all[data_all.tissue.isin(all_tissue_pos)]['exp'].reset_index() expr_df.columns = ['gene','exp'] expr_df_min = expr_df.groupby('gene').agg({'exp':'min'}) # basically find lowest expressed gene expr_genes = expr_df_min.index[expr_df_min.exp>THRES] print(len(expr_genes)) # number of commonly expressed genes #common ATAC atac_pro_df = data_all[data_all.tissue.isin(all_tissue_pos)].num_atac_regions_pro.reset_index() atac_pro_df.columns = ['gene','atac'] atac_pro_df_min = atac_pro_df.groupby('gene').agg({'atac':'min'}) # basically find lowest # of atac peaks in the promoter region gene atac_genes = atac_pro_df_min.index[atac_pro_df_min.atac>0] print(len(atac_genes)) # number of atac genes that have at least 1 atac peak in promoter region (2kb upstream, 500 bp downstream of tss) in all tissues # common loop hichip_pro_df = data_all[data_all.tissue.isin(all_tissue_pos)].num_loops.reset_index() hichip_pro_df.columns = ['gene','hichip'] hichip_pro_df_min = hichip_pro_df.groupby('gene').agg({'hichip':'min'}) # basically find lowest # of atac peaks in the promoter region gene hichip_genes = hichip_pro_df_min.index[hichip_pro_df_min.hichip>0] print(len(hichip_genes)) # number of atac genes that have at least 1 loop to promoter region (2kb upstream, 500 bp downstream of tss) in all tissues all_pos_genes = sorted(set(expr_genes).intersection(set(atac_genes)).intersection(set(hichip_genes))) len(all_pos_genes) # - expr_genes all_pos_genes pd.Series(all_pos_genes).to_csv(os.path.join(save_dir, 'all_pos_genes.csv'),index=None,header=None) # + # expressed_genes expr_df = data_all[data_all.tissue.isin(normal_tissues_pos)]['exp'].reset_index() expr_df.columns = ['gene','exp'] expr_df_min = expr_df.groupby('gene').agg({'exp':'min'}) # basically find lowest expressed gene expr_genes = expr_df_min.index[expr_df_min.exp>THRES] print(len(expr_genes)) # number of commonly expressed genes #common ATAC atac_pro_df = data_all[data_all.tissue.isin(normal_tissues_pos)].num_atac_regions_pro.reset_index() atac_pro_df.columns = ['gene','atac'] atac_pro_df_min = atac_pro_df.groupby('gene').agg({'atac':'min'}) # basically find lowest # of atac peaks in the promoter region gene atac_genes = atac_pro_df_min.index[atac_pro_df_min.atac>0] print(len(atac_genes)) # number of atac genes that have at least 1 atac peak in promoter region (2kb upstream, 500 bp downstream of tss) in all tissues # common loop hichip_pro_df = data_all[data_all.tissue.isin(normal_tissues_pos)].num_loops.reset_index() hichip_pro_df.columns = ['gene','hichip'] hichip_pro_df_min = hichip_pro_df.groupby('gene').agg({'hichip':'min'}) # basically find lowest # of atac peaks in the promoter region gene hichip_genes = 
hichip_pro_df_min.index[hichip_pro_df_min.hichip>0] print(len(hichip_genes)) # number of atac genes that have at least 1 loop to promoter region (2kb upstream, 500 bp downstream of tss) in all tissues normal_pos_genes = list(set(expr_genes).intersection(set(atac_genes)).intersection(set(hichip_genes))) len(normal_pos_genes) # + # expressed_genes expr_df = data_all[data_all.tissue.isin(cancer_tissues_pos)]['exp'].reset_index() expr_df.columns = ['gene','exp'] expr_df_min = expr_df.groupby('gene').agg({'exp':'min'}) # basically find lowest expressed gene expr_genes = expr_df_min.index[expr_df_min.exp>THRES] print(len(expr_genes)) # number of commonly expressed genes #common ATAC atac_pro_df = data_all[data_all.tissue.isin(cancer_tissues_pos)].num_atac_regions_pro.reset_index() atac_pro_df.columns = ['gene','atac'] atac_pro_df_min = atac_pro_df.groupby('gene').agg({'atac':'min'}) # basically find lowest # of atac peaks in the promoter region gene atac_genes = atac_pro_df_min.index[atac_pro_df_min.atac>0] print(len(atac_genes)) # number of atac genes that have at least 1 atac peak in promoter region (2kb upstream, 500 bp downstream of tss) in all tissues # common loop hichip_pro_df = data_all[data_all.tissue.isin(cancer_tissues_pos)].num_loops.reset_index() hichip_pro_df.columns = ['gene','hichip'] hichip_pro_df_min = hichip_pro_df.groupby('gene').agg({'hichip':'min'}) # basically find lowest # of atac peaks in the promoter region gene hichip_genes = hichip_pro_df_min.index[hichip_pro_df_min.hichip>0] print(len(hichip_genes)) # number of atac genes that have at least 1 loop to promoter region (2kb upstream, 500 bp downstream of tss) in all tissues cancer_pos_genes = list(set(expr_genes).intersection(set(atac_genes)).intersection(set(hichip_genes))) len(cancer_pos_genes) # - len(set(normal_pos_genes).union(set(cancer_pos_genes))) pos_genes_df = pd.concat([pd.DataFrame({'gene':normal_pos_genes,'type':'normal'}), pd.DataFrame({'gene':cancer_pos_genes,'type':'cancer'})],axis=0) pos_genes_df = pos_genes_df.groupby('gene').agg({'type':'|'.join }).reset_index() pos_genes_df.to_csv(os.path.join(save_dir, 'pos_genes_df.csv')) pos_genes_df # # 3. 
Check Controls for Motifs (MOODS # # 09/09/2020 # # after 8F1_make_mpra # from scipy.stats import ttest_ind import pandas as pd import numpy as np; np.random.seed(0) import os, glob import seaborn as sns; sns.set() import matplotlib import matplotlib.pyplot as plt from collections import Counter glob.glob('../data/processed/fig4_modelling/vocab_sum_final/mpra*') # read rna rna_df = pd.read_csv('../data/interim/rna/tissue_tpm_sym.csv',index_col=0) print(rna_df.columns) # read mpra info mpra_df = pd.read_csv( '../data/processed/fig4_modelling/vocab_sum_final/mpra_oligo_df_final.txt', sep='\t') name_to_seq_type = pd.Series(mpra_df.seq_type.values, index=mpra_df.name.values).to_dict() # want seq_type 'pos' or 'neg' len(name_to_seq_type) # check rna tpm values of the pos controls mpra_df_controls = mpra_df[mpra_df.seq_type.isin(['pos','neg'])] mpra_df_controls_instance = mpra_df_controls[['name','seq_type']].drop_duplicates().reset_index(drop=True) print(mpra_df_controls_instance.shape) mpra_df_controls_instance['Colon_exp'] = mpra_df_controls_instance.name.apply(lambda x: rna_df.loc[x, 'Colon']) mpra_df_controls_instance['GDSD6_exp'] = mpra_df_controls_instance.name.apply(lambda x: rna_df.loc[x, 'GDSD6']) mpra_df_controls_instance['GM12878_exp'] = mpra_df_controls_instance.name.apply(lambda x: rna_df.loc[x, 'GM12878']) mpra_df_controls_instance['Melanocytes_exp'] = mpra_df_controls_instance.name.apply(lambda x: rna_df.loc[x, 'Melanocytes']) mpra_df_controls_instance['A431-CTRLi_exp'] = mpra_df_controls_instance.name.apply(lambda x: rna_df.loc[x, 'A431-CTRLi']) mpra_df_controls_instance['CAL27-CTRLi_exp'] = mpra_df_controls_instance.name.apply(lambda x: rna_df.loc[x, 'CAL27-CTRLi']) mpra_df_controls_instance['SCC13-CTRLi_exp'] = mpra_df_controls_instance.name.apply(lambda x: rna_df.loc[x, 'SCC13-CTRLi']) mpra_df_controls_instance['COLO_SCR_DMSO_exp'] = mpra_df_controls_instance.name.apply(lambda x: rna_df.loc[x, 'COLO_SCR_DMSO']) mpra_df_controls_instance['WM_SCR_DMSO_exp'] = mpra_df_controls_instance.name.apply(lambda x: rna_df.loc[x, 'WM_SCR_DMSO']) mpra_df_controls_instance.sort_values(['seq_type','name'],inplace=True) mpra_df_pos_controls_instance = mpra_df_controls_instance[mpra_df_controls_instance.seq_type=='pos'].reset_index(drop=True) display(mpra_df_pos_controls_instance.describe()) name_to_seq_type_count = Counter(name_to_seq_type.values()) name_to_seq_type_count #read tf info tf_annon_df = pd.read_csv('../data/external/HOCOMOCOv11_annotation.csv',index_col=0) tf_annon_df['id_trim'] = tf_annon_df['id'] + '.pwm.trim' tf_name_to_id_dict = pd.Series(tf_annon_df.id_trim.values, index=tf_annon_df.tf.values).to_dict() tf_id_to_name_dict = pd.Series(tf_annon_df.tf.values, index=tf_annon_df.id_trim.values).to_dict() tf_name_to_id_abbr_dict = pd.Series(tf_annon_df.id.values, index=tf_annon_df.tf.values).to_dict() tf_id_abbr_to_name_dict = pd.Series(tf_annon_df.tf.values, index=tf_annon_df.id.values).to_dict() # motif_scan_control_dir = '../data/processed/fig4_modelling/vocab_sum_final/motif_scan_control/' motif_scan_control_dir = '../data/processed/fig4_modelling/vocab_sum_final/motif_scan_control_pval001/' motif_scan_control_files = glob.glob(os.path.join(motif_scan_control_dir, '*.csv')) print(len(motif_scan_control_files)) # + def get_str_arr(l): return '|'.join([str(x) for x in l]) def ttest_motif(row): list_neg = [float(x) for x in row['score_get_str_arr_neg'].split('|')] list_pos = [float(x) for x in row['score_get_str_arr_pos'].split('|')] t, p = ttest_ind(list_pos, list_neg) return 
t,p # + # %%time pos_neg_scan_info_all=pd.DataFrame() pos_neg_instance_scan_info_all=pd.DataFrame() # for motif_scan_control_file in motif_scan_control_files[:5]: ## DEBUG for motif_scan_control_file in motif_scan_control_files: motif_scan_results = pd.read_csv(motif_scan_control_file,names=['name_config','pwm_file','pos','strand','score','seq','seq_actual']) motif = tf_id_abbr_to_name_dict[os.path.basename(motif_scan_control_file).split('_scan')[0]] motif_scan_results['motif'] = motif_scan_results.pwm_file.str.split('.pwm',expand=True).iloc[:,0].map(tf_id_abbr_to_name_dict) motif_scan_results['seq_type'] = motif_scan_results.name_config.map(name_to_seq_type) # create df, name_config,seq_type motif, avg score instance_scan_info = motif_scan_results.groupby(['name_config','seq_type','motif']).agg({'score':['count','mean','std','min','max']}) instance_scan_info.columns = ['_'.join(x) for x in instance_scan_info.columns ] instance_scan_info.reset_index(inplace=True) pos_neg_instance_scan_info_all = pd.concat([pos_neg_instance_scan_info_all, instance_scan_info],axis=0) # # create df, seq_type motif, avg score pos_neg_scan_info = motif_scan_results.groupby(['seq_type','motif']).agg({'score':['count','mean','std','min','max', get_str_arr ]}) pos_neg_scan_info.columns = ['_'.join(x) for x in pos_neg_scan_info.columns ] pos_neg_scan_info.reset_index(inplace=True) pos_neg_scan_info_all = pd.concat([pos_neg_scan_info_all, pos_neg_scan_info],axis=0) # - # ### check to see which motifs are enriched in controls, and do a t test to compare pos vs neg (MOODS) # there are duplicate motif values print(pos_neg_scan_info_all.shape) pos_neg_scan_info_all = pos_neg_scan_info_all.groupby(['seq_type','motif']).agg({'score_count':sum, 'score_mean': 'mean', 'score_std': 'mean', 'score_min': min, 'score_max':max, 'score_get_str_arr': lambda x: '|'.join(x) }).reset_index() print(pos_neg_scan_info_all.shape) pos_neg_scan_info_wide = pos_neg_scan_info_all.pivot(index='motif',columns='seq_type',values=['score_count','score_mean','score_std','score_min','score_max','score_get_str_arr']) pos_neg_scan_info_wide.columns = ['_'.join(x) for x in pos_neg_scan_info_wide.columns ] pos_neg_scan_info_wide['mean_pos_neg_range'] = pos_neg_scan_info_wide.score_mean_pos - pos_neg_scan_info_wide.score_mean_neg pos_neg_scan_info_wide = pd.concat([pos_neg_scan_info_wide, pos_neg_scan_info_wide.apply(ttest_motif,axis=1).apply(lambda s: pd.Series({'t_val':s[0], 'p_val':s[1]}))],axis=1) pos_neg_scan_info_wide['p_val_bonf'] = pos_neg_scan_info_wide.p_val.apply(lambda x: min (1, x*pos_neg_scan_info_wide.shape[0])) print(pos_neg_scan_info_wide.shape) pos_neg_scan_info_wide_filt = pos_neg_scan_info_wide[pos_neg_scan_info_wide.p_val_bonf<0.05] print(pos_neg_scan_info_wide_filt.shape) motif_count_pos_neg_ratio = pd.Series(pos_neg_scan_info_wide.score_count_pos/pos_neg_scan_info_wide.score_count_neg*name_to_seq_type_count['neg']/name_to_seq_type_count['pos']).sort_values() print(motif_count_pos_neg_ratio.shape) motif_count_pos_neg_ratio[motif_count_pos_neg_ratio>1].shape, motif_count_pos_neg_ratio[motif_count_pos_neg_ratio<1].shape pos_neg_scan_info_wide_filt[pos_neg_scan_info_wide_filt.t_val<0].index.values pos_neg_scan_info_wide_filt[pos_neg_scan_info_wide_filt.t_val>0].index.values pos_neg_scan_info_wide_filt[pos_neg_scan_info_wide_filt.t_val<0].index.values # ## try to find vocab pairs within a single instance vocab_pairs = mpra_df[mpra_df.seq_type=='vocab'].name.apply(lambda x:x.split('+')[0]).unique() len(vocab_pairs) 
pos_neg_instance_scan_info_all[:5] neg_instance_scan_info_all = pos_neg_instance_scan_info_all[pos_neg_instance_scan_info_all.seq_type=='neg'] neg_instance_motifs = neg_instance_scan_info_all.groupby('name_config').agg({'motif':'|'.join}).reset_index() def check_vocab_pair(motif_str, vocab_pairs=vocab_pairs): motif_list = motif_str.split('|') vocab_pairs_found = [] for vocab_pair in vocab_pairs: v1, v2 = vocab_pair.split('::') if (v1 in motif_list) & (v2 in motif_list): vocab_pairs_found.append(vocab_pair) return '|'.join(sorted(vocab_pairs_found)),len(vocab_pairs_found) neg_instance_motifs = pd.concat([neg_instance_motifs, neg_instance_motifs.motif.apply(check_vocab_pair).apply(lambda s: pd.Series({'vocab_pairs_found':s[0], 'num_vocab_pairs_found':s[1]}))],axis=1) neg_instance_motifs neg_instance_motifs_counts = Counter('|'.join(list(neg_instance_motifs.vocab_pairs_found)).split('|')) pos_instance_scan_info_all = pos_neg_instance_scan_info_all[pos_neg_instance_scan_info_all.seq_type=='pos'] pos_instance_motifs = pos_instance_scan_info_all.groupby('name_config').agg({'motif':'|'.join}).reset_index() pos_instance_motifs = pd.concat([pos_instance_motifs, pos_instance_motifs.motif.apply(check_vocab_pair).apply(lambda s: pd.Series({'vocab_pairs_found':s[0], 'num_vocab_pairs_found':s[1]}))],axis=1) pos_instance_motifs_counts = Counter('|'.join(list(pos_instance_motifs.vocab_pairs_found)).split('|')) control_vocab_pair_dict = {} for vocab in list(neg_instance_motifs_counts.keys()) + list(pos_instance_motifs_counts.keys()): control_vocab_pair_dict[vocab] = {'in_pos':vocab in pos_instance_motifs_counts, 'in_neg':vocab in neg_instance_motifs_counts, 'count_pos':pos_instance_motifs_counts.get(vocab, 0), 'count_neg':neg_instance_motifs_counts.get(vocab, 0), } control_vocab_pair_df = pd.DataFrame.from_dict(control_vocab_pair_dict,orient='index') control_vocab_pair_df['pos_neg_ratio'] = (control_vocab_pair_df.count_pos+1)/(control_vocab_pair_df.count_neg+1) control_vocab_pair_df['pos_neg_ratio_corr'] = control_vocab_pair_df.pos_neg_ratio*neg_instance_motifs.shape[0]/pos_instance_motifs.shape[0] control_vocab_pair_df.sort_values('pos_neg_ratio_corr',inplace=True) control_vocab_pair_df.describe() # # 4. AME enrichment # ## AME motif enrichment for negative controls # # `ame --verbose 1 --oc . 
--scoring avg --method fisher --hit-lo-fraction 0.25 --evalue-report-threshold 10.0 --control --shuffle-- --kmer 2 mpra_neg_control_seq.fasta motifs.meme` # # AME motif enrichment neg control # # motifs: '../data/external/hocomoco_human_trim_jaspar_format.txt' save_dir # num actual tfs used vocab_pairs = mpra_df[mpra_df.seq_type=='vocab'].name.apply(lambda x:x.split('+')[0]).unique() list_tfs = [] for vocab_pair in vocab_pairs: vocab1, vocab2 = vocab_pair.split('::') list_tfs.append(vocab1) list_tfs.append(vocab2) list_tfs = sorted(set(list_tfs)) len(list_tfs) #get motif annotation file tf_annon_df = pd.read_csv('../data/external/HOCOMOCOv11_annotation.csv',index_col=0) tf_annon_df['id_trim'] = tf_annon_df['id'] + '.pwm.trim' tf_name_to_id_dict = pd.Series(tf_annon_df.id_trim.values, index=tf_annon_df.tf.values).to_dict() tf_id_to_name_dict = pd.Series(tf_annon_df.tf.values, index=tf_annon_df.id_trim.values).to_dict() tf_name_to_id_abbr_dict = pd.Series(tf_annon_df.id.values, index=tf_annon_df.tf.values).to_dict() tf_id_abbr_to_name_dict = pd.Series(tf_annon_df.tf.values, index=tf_annon_df.id.values).to_dict() #get motif lengths motif_to_len = {} for file in sorted(glob.glob('../data/external/hocomoco_human_trim/*')): motif_id = os.path.basename(file).split('.pwm')[0] # print(tf_id_abbr_to_name_dict[motif_id]) with open(file, 'r') as g: len_motif = len(g.readlines()[0].strip().split()) motif_to_len[tf_id_abbr_to_name_dict[motif_id]] = len_motif len(set(tf_id_abbr_to_name_dict.values())), len(motif_to_len) #read in motif enrichment results ame_result_file = os.path.join(save_dir, 'motif_scan_neg_control_ame.tsv') ame_result = pd.read_csv(ame_result_file, sep='\t') ame_result['motif_DB'] = 'hocomoco_human_trim' ame_result['motif'] = ame_result.motif_ID.map(tf_id_abbr_to_name_dict) ame_result = ame_result[~ame_result.motif.isna()] ame_result['in_vocab'] = ame_result.motif.isin(list_tfs) ame_result # #### these are the motifs that are enriched in the negative control sequences enriched_motifs_in_neg = sorted(ame_result.motif.unique()) print(len(enriched_motifs_in_neg)) print(enriched_motifs_in_neg) # #### these are the motifs that are enriched in the negative control sequences AND are part of vocab words ame_result_filt = ame_result[ame_result.in_vocab] print(ame_result.shape, ame_result_filt.shape) enriched_motifs_in_neg_vocab = sorted(ame_result_filt.motif.unique()) print(len(enriched_motifs_in_neg_vocab)) print(enriched_motifs_in_neg_vocab) # ## AME motif enrichment negative controls (background vocab words) # `ame --verbose 1 --oc . 
--scoring avg --method fisher --hit-lo-fraction 0.25 --evalue-report-threshold 10.0 --control mpra_vocab_seq2.fasta mpra_neg_control_seq.fasta motifs.meme` # + #read in motif enrichment results ame_result_file = os.path.join(save_dir, 'motif_scan_neg_control_bckgdvocab_ame.tsv') ame_result = pd.read_csv(ame_result_file, sep='\t') ame_result['motif_DB'] = 'hocomoco_human_trim' ame_result['motif'] = ame_result.motif_ID.map(tf_id_abbr_to_name_dict) ame_result = ame_result[~ame_result.motif.isna()] ame_result['in_vocab'] = ame_result.motif.isin(list_tfs) print('enriched motifs') enriched_motifs = sorted(ame_result.motif.unique()) print(len(enriched_motifs)) print(enriched_motifs) print('enriched motifs with tested vocab pair') ame_result_filt = ame_result[ame_result.in_vocab] print(ame_result.shape, ame_result_filt.shape) enriched_motifs_vocab = sorted(ame_result_filt.motif.unique()) print(len(enriched_motifs_vocab)) print(enriched_motifs_vocab) # - # ## AME motif enrichment for positive controls # # `ame --verbose 1 --oc . --scoring avg --method fisher --hit-lo-fraction 0.25 --evalue-report-threshold 10.0 --control --shuffle-- --kmer 2 mpra_pos_control_seq.fasta motifs.meme` # # AME motif enrichment neg control # # motifs: '../data/external/hocomoco_human_trim_jaspar_format.txt' # + #read in motif enrichment results ame_result_file = os.path.join(save_dir, 'motif_scan_pos_control_ame.tsv') ame_result = pd.read_csv(ame_result_file, sep='\t') ame_result['motif_DB'] = 'hocomoco_human_trim' ame_result['motif'] = ame_result.motif_ID.map(tf_id_abbr_to_name_dict) ame_result = ame_result[~ame_result.motif.isna()] ame_result['in_vocab'] = ame_result.motif.isin(list_tfs) print('enriched motifs') enriched_motifs = sorted(ame_result.motif.unique()) print(len(enriched_motifs)) print(enriched_motifs) print('enriched motifs with tested vocab pair') ame_result_filt = ame_result[ame_result.in_vocab] print(ame_result.shape, ame_result_filt.shape) enriched_motifs_vocab = sorted(ame_result_filt.motif.unique()) print(len(enriched_motifs_vocab)) print(enriched_motifs_vocab) # - # ## AME motif enrichment for positive controls background vocab words # # `ame --verbose 1 --oc . --scoring avg --method fisher --hit-lo-fraction 0.25 --evalue-report-threshold 10.0 --control mpra_vocab_seq2.fasta mpra_pos_control_seq.fasta motifs.meme` # # AME motif enrichment neg control # # motifs: '../data/external/hocomoco_human_trim_jaspar_format.txt' # + #read in motif enrichment results ame_result_file = os.path.join(save_dir, 'motif_scan_pos_control_bckgdvocab_ame.tsv') ame_result = pd.read_csv(ame_result_file, sep='\t') ame_result['motif_DB'] = 'hocomoco_human_trim' ame_result['motif'] = ame_result.motif_ID.map(tf_id_abbr_to_name_dict) ame_result = ame_result[~ame_result.motif.isna()] ame_result['in_vocab'] = ame_result.motif.isin(list_tfs) print('enriched motifs') enriched_motifs = sorted(ame_result.motif.unique()) print(len(enriched_motifs)) print(enriched_motifs) print('enriched motifs with tested vocab pair') ame_result_filt = ame_result[ame_result.in_vocab] print(ame_result.shape, ame_result_filt.shape) enriched_motifs_vocab = sorted(ame_result_filt.motif.unique()) print(len(enriched_motifs_vocab)) print(enriched_motifs_vocab) # - # ## AME motif enrichment for vocab words # # `ame --verbose 1 --oc . 
--scoring avg --method fisher --hit-lo-fraction 0.25 --evalue-report-threshold 10.0 --control --shuffle-- --kmer 2 mpra_vocab_seq2.fasta motifs.meme # ` # # AME motif enrichment neg control # # motifs: '../data/external/hocomoco_human_trim_jaspar_format.txt' # + #read in motif enrichment results ame_result_file = os.path.join(save_dir, 'motif_scan_vocab_ame.tsv') ame_result = pd.read_csv(ame_result_file, sep='\t') ame_result['motif_DB'] = 'hocomoco_human_trim' ame_result['motif'] = ame_result.motif_ID.map(tf_id_abbr_to_name_dict) ame_result = ame_result[~ame_result.motif.isna()] ame_result['in_vocab'] = ame_result.motif.isin(list_tfs) print('enriched motifs') enriched_motifs = sorted(ame_result.motif.unique()) print(len(enriched_motifs)) print(enriched_motifs) print('enriched motifs with tested vocab pair') ame_result_filt = ame_result[ame_result.in_vocab] print(ame_result.shape, ame_result_filt.shape) enriched_motifs_vocab = sorted(ame_result_filt.motif.unique()) print(len(enriched_motifs_vocab)) print(enriched_motifs_vocab) # - motif_df = pd.DataFrame({'motif':list(tf_id_abbr_to_name_dict.values())}) motif_df['len_motif'] = motif_df.motif.map(motif_to_len) motif_df['in_vocab'] = motif_df.motif.isin(list_tfs) motif_df['enriched_vocab'] = motif_df.motif.isin(enriched_motifs) motif_df = motif_df[~motif_df.len_motif.isna()] motif_df[motif_df.in_vocab & (~motif_df.enriched_vocab)] pd.read_csv(os.path.join(save_dir, 'mpra_oligo_df_final_SUBMIT_091120.txt'),sep='\t').shape#oligo.str.len().value_counts() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Jupyter Notebooks Tutorial # # In this four-part course, we will constantly use Jupyter Notebooks, so it's important to be comfortable with how they work. # # This tutorial is meant for you to practice how to run code in this file to help you throughout the course. Also, don't worry about understanding the code as we will learn about what it means on day one. # # ### What is a Jupyter Notebook? # # Jupyter Notebooks are, at their core, environments that allow us to write and **interpret** our code in real time. There are many programming languages that can be used in these notebooks including Python, R, Julia, and Matlab. However, in this course, we will be concentrating on Python. # # ### How To Run Python Code # # In our Jupyter Notebooks, this is how our code will look like: # 1 + 1 # # To see the result of the code also known as the **output**, we must run it. You have two options: # # 1. You can either click the play button to the left of the block to run the code in Google Colab. # 2. You can click on the block and press the keys `shift + enter`. # # As practice, try running the Python code above. The output of the code above should be a **2**. # # ### Additional Practice # This Python code should output 50: 5 * 10 # This Python code should output 3: 15 / 3 # This Python code should output 90: 100 - 10 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:scvi] # language: python # name: conda-env-scvi-py # --- # + [markdown] nbpresent={"id": "1f01a376-e4e2-4d3c-8723-0f0d91682b22"} # # X-Inactivation Cell Type Differences DV Analysis # # Detection of X-inactivation via differential variance # # No detectable differences. 
# - # %env PYTHONPATH='/Users/mincheolkim/Github/scrna-parameter-estimation/simplesc' # + nbpresent={"id": "9bc26761-c243-4e99-98ca-0cc6425e76f5"} import pandas as pd import matplotlib.pyplot as plt import scanpy.api as sc import scipy as sp import itertools import numpy as np import scipy.stats as stats from scipy.integrate import dblquad import scipy.sparse as sparse import seaborn as sns import imp import time from statsmodels.stats.multitest import fdrcorrection # - import sys sys.path.append('/Users/mincheolkim/Github/scrna-parameter-estimation/simplesc') import simplesc from IPython.core.display import display, HTML display(HTML("")) import warnings warnings.filterwarnings('ignore') # + nbpresent={"id": "928db49a-7d74-456c-bf37-9192ad38ba99"} data_path = '/Users/mincheolkim/Google Drive/UCSF/research/parameter_estimation/x_inactivation_data/' # - # ### Read the the Lupus AnnData object adata = sc.read(data_path + 'lupus_annotated_nonorm_V6_x_genes.h5ad') # ### Read cell type and individual information #ct_list = pd.read_csv(data_path + 'lupus_ct_list.csv')['ct_cov'].tolist() ct_list = ['Tc', 'Th', 'cM', 'B', 'NK'] ind_list = pd.read_csv(data_path + 'lupus_ind_list.csv') males = set(ind_list.query('Female == 0.0').ind_cov) females = set(ind_list.query('Female == 1.0').ind_cov) # ### Read DE and DV results # # The results are computed in the `compute_x_inactivation_statistics.py` script in the Wynton cluster. # + de_t_stats = {ct:pd.read_csv(data_path + 'ind_ct_combined_statistics/{}_de_t_stats.csv'.format(ct), index_col=0).T.dropna(how='all') for ct in ct_list} dv_t_stats = {ct:pd.read_csv(data_path + 'ind_ct_combined_statistics/{}_dv_t_stats.csv'.format(ct), index_col=0).T.dropna(how='all') for ct in ct_list} de_pval = {ct:pd.read_csv(data_path + 'ind_ct_combined_statistics/{}_de_pvals.csv'.format(ct), index_col=0).T.dropna(how='all') for ct in ct_list} dv_pval = {ct:pd.read_csv(data_path + 'ind_ct_combined_statistics/{}_dv_pvals.csv'.format(ct), index_col=0).T.dropna(how='all') for ct in ct_list} mean = {ct:pd.read_csv(data_path + 'ind_ct_combined_statistics/{}_ct_mean.csv'.format(ct), index_col=0).T.dropna(how='all') for ct in ct_list} var = {ct:pd.read_csv(data_path + 'ind_ct_combined_statistics/{}_ct_var.csv'.format(ct), index_col=0).T.dropna(how='all') for ct in ct_list} # - # ### Male vs female analysis for ct in ct_list: mean_pvals = np.zeros(mean[ct].shape[0]) var_pvals = np.zeros(var[ct].shape[0]) var_diff = np.zeros(var[ct].shape[0]) # Compute difference in means idx = 0 for gene, row in mean[ct].iterrows(): _, mean_pvals[idx] = stats.mannwhitneyu(row[males], row[females]) idx += 1 # Compute difference in vars idx = 0 for gene, row in var[ct].iterrows(): var_diff[idx] = row[females].mean() - row[males].mean() _, var_pvals[idx] = stats.mannwhitneyu(row[males], row[females]) idx += 1 mean[ct]['sex_diff_pval'] = mean_pvals var[ct]['sex_diff_pval'] = var_pvals var[ct]['var_diff'] = var_diff _, mean[ct]['sex_diff_fdr'] = fdrcorrection(mean_pvals) _, var[ct]['sex_diff_fdr'] = fdrcorrection(var_pvals) print(ct) print('mean', mean[ct].query('sex_diff_fdr < 0.1').index.tolist()) print('var', var[ct].query('sex_diff_fdr < 0.1 & var_diff > 0').index.tolist()) print() # + mean_T = {ct:mean[ct].T for ct in ct_list} var_T = {ct:var[ct].T for ct in ct_list} for ct in ct_list: mean_T[ct]['sex'] = mean_T[ct].index.map(lambda x: 'male' if x in males else 'female') var_T[ct]['sex'] = var_T[ct].index.map(lambda x: 'male' if x in males else 'female') # - sns.stripplot(x='sex', y='RPS4X', 
data=mean_T['Tc'], jitter=True) #sns.boxplot(x='sex', y='RPL36A', data=mean_T['Tc']) sns.stripplot(x='sex', y='RPS4X', data=var_T['Tc'], jitter=True) #sns.boxplot(x='sex', y='XIST', data=var_T['Tc']) sns.stripplot([0, 1], [ var[ct].loc[gene][males].dropna(), var[ct].loc[gene][females].dropna()]) # + gene = 'CD99' plt.figure(figsize=(20, 5)) plt.subplots_adjust(wspace=0.7, hspace=0.5) for idx, ct in enumerate(ct_list): try: plt.subplot(2, len(ct_list), idx+1); plt.title(ct + ' Mean') plt.boxplot([ mean[ct].loc[gene][males].dropna(), mean[ct].loc[gene][females].dropna()]) except: continue for idx, ct in enumerate(ct_list): try: plt.subplot(2, len(ct_list), len(ct_list) + idx+1); plt.title(ct + ' Var') plt.boxplot() except: continue # + gene = 'XIST' plt.figure(figsize=(20, 5)) plt.subplots_adjust(wspace=0.7, hspace=0.5) for idx, ct in enumerate(ct_list): try: plt.subplot(2, len(ct_list), idx+1); plt.title(ct + ' Mean') plt.boxplot([ mean[ct].loc[gene][males].dropna(), mean[ct].loc[gene][females].dropna()]) except: continue for idx, ct in enumerate(ct_list): try: plt.subplot(2, len(ct_list), len(ct_list) + idx+1); plt.title(ct + ' Var') plt.boxplot([ var[ct].loc[gene][males].dropna(), var[ct].loc[gene][females].dropna()]) except: continue # - # ### CT differences female_dv = pd.read_csv(data_path + 'female_specific_ct_dv.csv', index_col=0) def plot_ct_diff(gene): plt.figure(figsize=(20, 5)); plt.subplots_adjust(wspace=0.1, hspace=0.5) plt.subplot(2, 2, 1); plt.boxplot([mean[ct].loc[gene][females].dropna().values for ct in ct_list]); plt.xticks(np.arange(len(ct_list))+1, ct_list) plt.title('Mean - female'); plt.subplot(2, 2, 2); plt.boxplot([mean[ct].loc[gene][males].dropna().values for ct in ct_list]); plt.xticks(np.arange(len(ct_list))+1, ct_list) plt.title('Mean - male'); plt.subplot(2, 2, 3); plt.boxplot([var[ct].loc[gene][females].dropna().values for ct in ct_list]); plt.xticks(np.arange(len(ct_list))+1, ct_list) plt.title('Var - female'); plt.subplot(2, 2, 4); plt.boxplot([var[ct].loc[gene][males].dropna().values for ct in ct_list]); plt.xticks(np.arange(len(ct_list))+1, ct_list) plt.title('Var - male'); mean['cM'].shape temp = np.diag(np.load(data_path + 'IGTB986_B_ct_cov.npy')) temp = pd.read_csv(data_path + 'ind_ct_combined_statistics/B_ct_mean.csv', index_col=0).T temp mean['B'].shape temp.shape temp.shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # NBA Team Ratings, Pt. 1 # # For each game, we observe the number of points, possessions, home court, and the teams involved. # Then, for each side of the court, we model the points scored (normalized by possessions) as a normal distribution with mean determined by the teams along with home court. 
# # \begin{align} # x_i &\thicksim \mathcal{N}\left(\frac{\text{poss}_i}{100} \left( \mu + \text{off}_i - \text{def}_i + \delta_\text{home} \text{home} \right), \frac{\text{poss}_i}{100}\sigma_{\text{game}}^2 \right) \notag \\ # \text{where } x_i &\equiv \text{points scored in the }i^\text{th} \text{ game} \notag \\ # \text{poss}_i &\equiv \text{possessions in the }i^\text{th} \text{ game} \notag \\ # \delta_\text{home} &\equiv \begin{cases} 1 & \text{if home team is on offense} \\ -1 & \text{otherwise} \end{cases} \notag \\ # \mu &\equiv \text{league average scoring rate per 100 possessions} \notag # \end{align} # # This model has four parameters of interest: # 1. $ \overrightarrow{\text{off}} $, the points scored per 100 possessions above league average for each team # 2. $ \overrightarrow{\text{def}} $, the points allowed per 100 possessions above league average for each team # 3. $ \text{home} $, the points scored per 100 possessions above league average by the home team # 4. $ \sigma_{\text{game}} $, the variance for the number of points scored per 100 possessions # # These paremeters each have a prior distribution (since we're being Bayesian about it): # # \begin{align} # \overrightarrow{\text{off}} &\thicksim \mathcal{N}(0, \sigma_{\text{off}} ^ 2) \notag \\ # \overrightarrow{\text{def}} &\thicksim \mathcal{N}(0, \sigma_{\text{def}} ^ 2) \notag \\ # \text{home} &\thicksim \text{Gamma}(1.5, 0.5) \notag \\ # \sigma_{\text{game}} &\thicksim \text{Gamma}(5, 2) \notag # \end{align} # # Note that each prior has its own set of parameters, aka hyperparameters, and specifically, that the priors for $ \overrightarrow{\text{off}} $ and $ \overrightarrow{\text{def}} $ have parameters with their own priors, aka hyperpriors. # The $ \text{home} $ and $ \sigma_{\text{game}} $ parameters use a Gamma prior. # This distribution is conveniently continuous and positive, making it a common one for the prior of a positive parameter like the standard deviation. 
# For that reason, I've also used it for the two hyperpriors: # # \begin{align} # \sigma_{\text{off}} &\thicksim \text{Gamma}(2, 0.5) \notag \\ # \sigma_{\text{def}} &\thicksim \text{Gamma}(1.5, 0.4) \notag \\ # \end{align} # # I haven't been too careful with setting these priors since this model is mostly for demonstration, but I've run it on the 2019 NBA Regular Season: # + import pymc3 as pm from matplotlib import pyplot as plt import pandas as pd import numpy as np from pynba import halfgames_from_file, team_id_to_abb, plot_ratings, use_blackontrans_style league = "nba" year = 2019 season_type = "Regular Season" # - halfgames = halfgames_from_file(league, year, season_type) # + n_teams = halfgames["off_team_id"].unique().shape[0] team_id_to_team_ind = { team_id: team_ind for team_ind, team_id in enumerate(halfgames["off_team_id"].unique()) } team_id_to_team_abb = team_id_to_abb(league, year) team_abb_to_team_id = { team_abb: team_id for team_id, team_abb in team_id_to_team_abb.items() } team_ind_to_team_abb = { team_ind: team_id_to_team_abb[team_id] for team_id, team_ind in team_id_to_team_ind.items() } points_mu = halfgames["points_scored"].sum() / halfgames["possession_num"].sum() * 100 off_index = ( halfgames["off_team_id"].map(team_id_to_team_ind).to_numpy() ) def_index = ( halfgames["def_team_id"].map(team_id_to_team_ind).to_numpy() ) home_index = ( (halfgames["off_team_id"] != halfgames["home_team_id"]) .to_numpy() .astype(int) ) * 2 - 1 with pm.Model() as model: sigma_game = pm.Gamma("sigma_game", mu=5, sigma=2) sigma_off = pm.Gamma("sigma_off", mu=2, sigma=0.5) sigma_def = pm.Gamma("sigma_def", mu=1.5, sigma=0.4) home = pm.Gamma("home", mu=1.5, sigma=0.5) teams_off = pm.Normal("teams_off", mu=0, sigma=sigma_off, shape=n_teams) teams_def = pm.Normal("teams_def", mu=0, sigma=sigma_def, shape=n_teams) poss_norm = halfgames["possession_num"].to_numpy() / 100 pm.Normal( "points", mu=poss_norm * (points_mu + teams_off[off_index] - teams_def[def_index] + home * home_index), sigma=poss_norm * sigma_game, observed=halfgames["points_scored"].to_numpy() ) # - with model: trace = pm.sample( draws=10000, init="adapt_diag", chains=4, random_seed=42, return_inferencedata=False, ) # + results = pd.DataFrame({ "team": [team_ind_to_team_abb[ind] for ind in range(n_teams)], "net_scoring_above_average": trace["teams_off"].mean(0) + trace["teams_def"].mean(0), "off_scoring_above_average": trace["teams_off"].mean(0), "def_scoring_above_average": trace["teams_def"].mean(0), "league": league, "year": year, "season_type": season_type, }) results.sort_values("net_scoring_above_average", ascending=False) # - # In addition to plotting the offensive and defensive ratings of each team, I've annotated the difference between each team's "raw" ratings and their modeled ones. # We can see that although the model is factoring in strength of schedule and home-court advantage, it's primary effect is "shrinking" the raw ratings. # This is due to the priors for each team rating, which make extreme values less likely. 
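# To see where this shrinkage comes from, note that in the conjugate normal-normal case a zero-mean normal prior pulls a noisy raw estimate toward zero: the posterior mean is the raw value multiplied by a factor below 1 that depends on the prior variance and the sampling variance.
# The cell below only evaluates that closed-form factor with made-up numbers; it is an illustration, not part of the fitted model above.

# +
# Illustrative sketch (hypothetical numbers): posterior-mean shrinkage for a zero-mean normal prior
sigma_off = 2.0      # assumed prior sd for a team's offensive rating
sigma_game = 5.0     # assumed per-game scoring sd per 100 possessions
n_games = 82         # assumed number of games observed for one team
raw_rating = 6.0     # assumed raw rating: points per 100 possessions above league average

shrinkage = sigma_off**2 / (sigma_off**2 + sigma_game**2 / n_games)
print(f"shrinkage factor: {shrinkage:.3f}")
print(f"shrunk rating: {shrinkage * raw_rating:.2f}")
# -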
# + use_blackontrans_style() fig = plt.figure(figsize=(8, 8)) ax = fig.add_subplot(1, 1, 1) for team, off_aa, def_aa in zip(results["team"], results["off_scoring_above_average"], results["def_scoring_above_average"]): team_id = team_abb_to_team_id[team] filt = halfgames["off_team_id"] == team_id raw_off = halfgames.loc[filt, "points_scored"].sum() / halfgames.loc[filt, "possession_num"].sum() * 100 - points_mu filt = halfgames["def_team_id"] == team_id raw_def = points_mu - halfgames.loc[filt, "points_scored"].sum() / halfgames.loc[filt, "possession_num"].sum() * 100 ax.annotate("", xy=(off_aa, def_aa), xytext=(raw_off, raw_def), arrowprops={"arrowstyle": "->", "color": "white"}) plot_ratings(results, ax) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### 08/26/18 숫자 블록 # # #### 링크 # # https://programmers.co.kr/learn/courses/30/lessons/12923?language=python3 # # #### 문제 설명 # # 그렙시에는 0으로 된 도로에 숫자 블록을 설치하기로 하였습니다. 숫자 블록의 규칙은 다음과 같습니다. # # 블록의 번호가 n 일 때, 가장 처음 블록은 n * 2번째 위치에 설치합니다. 그다음은 n * 3, 그다음은 n * 4, ...로 진행합니다.만약 기존에 블록이 깔려있는 자리라면 그 블록을빼고 새로운 블록으로 집어넣습니다. # # 예를 들어 1번 블록은 2,3,4,5, ... 인 위치에 우선 설치합니다. 그다음 2번 블록은 4,6,8,10, ... 인 위치에 설치하고, 3번 블록은 6,9,12... 인 위치에 설치합니다. # # 이렇게 3번 블록까지 설치하고 나면 첫 10개의 블록은 0, 1, 1, 2, 1, 3, 1, 2, 3, 2이됩니다. # # 그렙시는 길이가 1,000,000,000인 도로에 1번 블록부터 시작하여 10,000,000번 블록까지 위의 규칙으로 모두 놓았습니다. # # 그렙시의 시장님은 특정 구간의 어떤 블록이 깔려 있는지 알고 싶습니다. # # 구간을 나타내는 두 수 begin, end 가 매개변수로 주어 질 때, 그 구간에 깔려 있는 블록의 숫자 배열(리스트)을 return하는 solution 함수를 완성해 주세요. # # #### 제한 사항 # begin, end 는 1 이상 1,000,000,000이하의 자연수 이고, begin는 항상 end보다 작습니다. # end - begin 의 값은 항상 1,000을 넘지 않습니다. def solution(begin, end): if (1 <= begin <= 1000000000) and (1 <= end <= 1000000000) and (begin < end) and (end - begin <= 1000): answer = [0 for i in range(end)] for j in range(1, int(end/2)+1): # 블럭의 종류 for k in range(2, int(end/j)+1): # 도로의 위치 answer[(j*k)-1] = j return answer[begin-1:end] solution(1,10) # #### 알고리즘 풀고나서 # 1. 처음에는 1 ~ 10,000,000번 블록까지 모든 도로에 순서대로 놓으려고 했지만, end 파라미터만큼의 길이만큼의 도로만 블럭 까는 것으로 방법을 바꿈 # 2. 채점 후 정확성 테스트는 모두 통과했으나, 효율성 테스트에서 런타임 에러로 실패함 # 3. 다른 사람들이 올린 질문들을 살펴보니 같은 문제를 겪고 있었음. 
효율성 테스트는 통과하지 못했지만 문제에 제시된 조건을 충족하는 코드로 마무리 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Linear Regression import numpy as np import pandas as pd import pandas_datareader as pdr import matplotlib.pyplot as plt # %matplotlib inline housing = pd.read_csv('housing.csv') housing.head() housing.info() housing.dropna(inplace=True) housing.info() housing[['Price (00s)', 'SqFt']].corr() plt.scatter(housing['Price (00s)'], housing['SqFt']) from sklearn.linear_model import LinearRegression as lr model = lr(fit_intercept=True) price = np.array(housing['Price (00s)']) sqft = np.array(housing['SqFt']) model.fit(sqft[:, np.newaxis], price) xfit = np.linspace(sqft.min(),sqft.max(),100) yfit = model.predict(xfit[:,np.newaxis]) plt.scatter(price,sqft) plt.plot(yfit,xfit, c='r') plt.ylabel('Price') plt.xlabel('Sqft') model.predict(1500) print(model.coef_) print(model.intercept_) import statsmodels.api as sm sqft = sm.add_constant(sqft) reg = sm.OLS(price,sqft).fit() reg.summary() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: py36 # language: python # name: py36 # --- import argparse import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np import re from matplotlib.backends.backend_pdf import PdfPages parser = argparse.ArgumentParser() parser.add_argument('infile', type=str) # ## Import data # The test data was exported from FlowJo v9 table editor using the "Save" button to save as a tab-delimited text file. TSV is necessary in this case, because FlowJo uses commas in the gate descriptions. df = pd.read_table(filepath_or_buffer="jw112_cytokine_test.tsv", encoding="mac_roman") # **Note:** There is an import error when encoding is not specified - changing the encoding to `"mac_roman"` within the `pd.read_table()` call allowed import, but might not be applicable to all platforms. It might be useful to include an encoding conversion step using something like `chardet`. # Drop mean and standard deviation rows df = df.drop(df.tail(2).index) # ## Add grouping variables as new column(s) # I will try to add grouping variables using a function to repeat a sequence of strings and add as a new column. genotypes = ['KO', 'WT', 'KO', 'WT', 'KO'] organs = ["Spleen", "PP", "coLP"] df.insert(1, "genotype", np.tile(genotypes, 3)) df.insert(2, "organ", np.repeat(organs, 5)) df.tail() # ## Reformat column names to remove leading gate hierarchy # We want to leave only the current and parent gates, which is basically everything to the right of the second-to-last "/" in the name. It should be easy enough to trim this with regex. def trimColnameDepth(df, pattern=None): '''Trim column names according to a regex pattern. Returns the first match to the regex as a list of new column names. 
If no match, returns unaltered column name.''' trimmed = [] if not pattern: # Default pattern if none provided pattern = re.compile(r"\/([^\/]+\/[^\/]+$)") for c in df.columns: match = pattern.search(c) # Look for a match to the pattern if match: trimmed.append(match.group(1)) else: trimmed.append(c) return(trimmed) def regexDepth(d): '''Generates a regex pattern to trim gating paths to a specified depth d.''' string = "[^\/]+$" # Default pattern, depth 0 unit = "[^\/]+\/" # Incremental unit of depth to add to regex if d == 0: return(re.compile("\/(" + string + ")")) else: return(re.compile("\/(" + "".join(np.repeat(unit, d)) + string + ")")) df.columns = trimColnameDepth(df, regexDepth(1)) # ## Plot some data # This is a simple test of what the plots should hopefully look like. np.sort(df['genotype'].unique()) g = sns.catplot(data=df, x="organ", y="Q5: IL-13–, IL-4+,Freq. of Parent", hue="genotype", hue_order=["WT", "KO"], dodge=True, kind="bar", linewidth=1.5, facecolor=(1, 0, 1, 0), errcolor=".2", edgecolor="0.2", capsize=0.2, ci="sd", legend=False) sns.swarmplot(data=df, x="organ", y="Q5: IL-13–, IL-4+,Freq. of Parent", hue="genotype", hue_order=["WT", "KO"], dodge=True, s=8, ax=g.ax) plt.show() # Here I've used matplotlib's `PdfPages` to output a graph of every parameter into 2x2 grids on a landscape letter-size PDF page. Works pretty well! Still need to figure out how to remove the legend for the barplots, since it's redundant with the ## Output 4 plots per 8.5 x 11 page in landscape with PdfPages("test.pdf") as pdf: ppp = 4 # Plots per page for i, var in enumerate(df.columns[3:]): if i % ppp == 0: fig = plt.figure(figsize=(11, 8.5), dpi=100) fig.subplots_adjust(hspace=0.3) ax = fig.add_subplot(2, 2, i % ppp + 1) else: ax = fig.add_subplot(2, 2, i % ppp + 1) sns.catplot(data=df, x="organ", y=var, hue="genotype", hue_order=np.sort(df['genotype'].unique()), dodge=True, kind="bar", linewidth=1.5, facecolor=(1, 0, 1, 0), errcolor=".2", edgecolor="0.5", errwidth=1.5, capsize=0.2, ci="sd", ax=ax, legend=False) ax.legend() sns.swarmplot(data=df, x="organ", y=var, hue="genotype", hue_order=np.sort(df['genotype'].unique()), dodge=True, s=8, ax=ax) ax.set_title(var) if (i+1) % ppp == 0: pdf.savefig(fig) plt.close() else: plt.close() if (i+1) % ppp != 0: pdf.savefig(fig) def makePlots(df, filename): with PdfPages(filename) as pdf: ppp = 4 # Plots per page for i, var in enumerate(df.columns[3:]): if i % ppp == 0: fig = plt.figure(figsize=(11, 8.5), dpi=100) fig.subplots_adjust(hspace=0.3) ax = fig.add_subplot(2, 2, i % ppp + 1) else: ax = fig.add_subplot(2, 2, i % ppp + 1) sns.catplot(data=df, x="organ", y=var, hue="genotype", dodge=True, kind="bar", linewidth=1.5, facecolor=(1, 0, 1, 0), errcolor=".2", edgecolor="0.5", errwidth=1.5, capsize=0.2, ci="sd", ax=ax, legend=False) ax.legend() sns.swarmplot(data=df, x="organ", y=var, hue="genotype", dodge=True, s=8, ax=ax) ax.set_title(var) if (i+1) % ppp == 0: pdf.savefig(fig) plt.close() else: plt.close() if (i+1) % ppp != 0: pdf.savefig(fig) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #

# # Capstone Project 1: Inferential Statistics

    # # In this section, I present an analysis of data using inferntial statistics. I will test the statistical significance of various observations raised in [Data Story](https://github.com/lekhnath12/Traffic_violation_Montgomery/blob/gh-pages/Data_Story.ipynb) section. For all statistical analysis, we set level of signifiance to be 1%. # + #import necessary modules import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns import scipy.stats as st from datetime import datetime from sklearn.feature_extraction.text import CountVectorizer from IPython.core.display import Image, display from IPython.core.display import HTML # - # Read the accident data from csv df_traffic = pd.read_csv('Traffic_Violations-api.csv', parse_dates = [['Date Of Stop', 'Time Of Stop']], infer_datetime_format = True) # More violation takes place in spring (Mar - May) compared to summer (June - Aug). # # To test this hypothesis, first assume a null hypothesis that the number of violations are equal. We find that the the p-value is 0.0, which confirms that more violations takes place in spring compared to summer. # + # Create datetime column and series containing month information df_traffic['datetime'] = df_traffic['Date Of Stop_Time Of Stop'] month = df_traffic.datetime.dt.month # Find the spring and summer subset and replace the data by 1 or 0 springbool = (month>=3) & (month <= 5) summerbool = (month>=6) & (month <= 8) spring = month.where(springbool, 0) spring = spring.where(~springbool, 1) summer = month.where(summerbool, 0) summer = summer.where(~summerbool, 1) t_value, p_value = st.ttest_ind_from_stats(mean1 = spring.mean(), std1 = spring.std(), nobs1 = spring.count(), mean2 = summer.mean(), std2 = summer.std(), nobs2 = summer.count()) print('Total # of Violations in spring', spring.mean()*spring.count()) print('Total # of Violations in summer', summer.mean()*summer.count()) print('p-value: ', p_value) # - # More violations occur in late night (after 10 pm) rather than the busy traffic hours. # # We set a null hypothesis that equal number of violations occur during busy hours and late night. The p-value of our statistical test is zero, which rejectes our null hypothesis. # + # Create datetime column and series containing month information hour = df_traffic.datetime.dt.hour latenightbool = (hour>=22) & (hour <= 24) busyhourbool = (hour>=7) & (hour < 9) latenight = hour.where(latenightbool, 0) latenight = latenight.where(~latenightbool, 1) busyhour = hour.where(busyhourbool, 0) busyhour = busyhour.where(~busyhourbool, 1) t_value, p_value = st.ttest_ind_from_stats(mean1 = latenight.mean(), std1 = latenight.std(), nobs1 = latenight.count(), mean2 = busyhour.mean(), std2 = busyhour.std(), nobs2 = busyhour.count()) print('Total # of Violations in busy hours', busyhour.mean()*busyhour.count()) print('Total # of Violations in late nights', latenight.mean()*latenight.count()) print('p-value: ', p_value) # - # The citation probability is higher in the weekends rather than a busy weekday. # # Null hypothesis: The citation probability of all days are equal # # The test below gives p-value of 0.012, which is greater than 1% level of significance. Hence we can not reject the null hypothesis. In other words, our statement that the citation probability depends on days of week is not statistically reliable. 
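# As a complementary check (a sketch, not part of the original analysis), the day-of-week
# question can also be framed as a chi-squared test of independence between the day of the
# stop and the violation type, using `scipy.stats.chi2_contingency` on a contingency table.

# +
day_of_week = df_traffic.datetime.dt.dayofweek
contingency = pd.crosstab(day_of_week, df_traffic['Violation Type'])
chi2_stat, chi2_p, dof, expected = st.chi2_contingency(contingency)
print('chi-squared p-value:', chi2_p)
# -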
# + df_traffic['day'] = df_traffic.datetime.dt.dayofweek df = df_traffic.groupby(['day', 'Violation Type']).count() ct_prob = list() n = 6 ages = np.linspace(0,n, n+1) for age in ages: citn = df.loc[(age, 'Citation'), 'Agency'] other = df.loc[(age, 'Warning'), 'Agency'] prob = citn/(citn + other) ct_prob.append(prob) plt.plot(ages, ct_prob, 'ro', label = 'Origional Data') slope, intercept, rvalue, pvalue, std_err = st.linregress(ages,ct_prob) plt.plot(ages, intercept+slope*ages, 'b', label = 'Regression') plt.xlabel('Day of Week') plt.ylabel('Citation Probability') plt.legend() print('Regression:', rvalue) print('p-value:', pvalue) # - # Hispanic and Black drivers receive much more citations per warning compared to Asian and white drivers. # # We consider a null hypothesis that both subgroups HISPANIC/BLACK and WHITE/ASIAN have same ratio of citations to warning. To test this hypothesis, we perform a statistical test. The p-value of our statistical test is 0, hence we can not reject our null hypothesis. This confirms that Hispanic and Black drivers receive much more citations per warning compared to Asian and white drivers. # + hisblk = df_traffic[(df_traffic.Race == 'HISPANIC') | (df_traffic.Race == 'BLACK')]\ ['Violation Type'] hisblk = pd.get_dummies(hisblk, drop_first = True) asnwht = df_traffic[(df_traffic.Race == 'ASIAN') | (df_traffic.Race == 'WHITE')]\ ['Violation Type'] asnwht = pd.get_dummies(asnwht, drop_first = True) data1 = hisblk.Warning data2 = asnwht.Warning t_value, p_value = st.ttest_ind_from_stats(mean1 = data1.mean(), std1 = data1.std(), nobs1 = data1.count(), mean2 = data2.mean(), std2 = data2.std(), nobs2 = data2.count()) print('Hispanic/Black citation probability:', (data1.count() - data1.sum())/data1.count()) print('Asian/White citation probability:', (data2.count() -data2.sum())/data2.count()) print('p-value: ', p_value) # + # First convert into vector df_traffic.dropna(inplace = True) corpus = df_traffic.Description vectorizer = CountVectorizer(stop_words = 'english', strip_accents = 'ascii', min_df = 0.0, max_features = 100, token_pattern='[a-z]+', max_df = 1.0, binary = True) X = vectorizer.fit_transform(corpus) # + # then group the dataframe by description column. This compress the size of Series by a #factor of 10 df = df_traffic.loc[:, ['Color', 'Description']].groupby('Description').count() df.sort_values(by = 'Color', ascending = False, inplace = True) df.rename(index=str, columns={"Color": "Count"}, inplace = True) df.reset_index(inplace = True) # change the text into vector array X = vectorizer.transform(df.Description).toarray() # This function calculates the similarity of each rows to all other remaining rows, and return # the index of dataframe df that are similar. The tuning parameter factor choice. Higher value # factor means gives index of columns that are more similar def matching_text(text_vector, TEXT_ARRAY, factor = 0.7): x1 = np.matmul(text_vector, TEXT_ARRAY.transpose()) x1 = x1/np.sqrt(np.sum(x1**2)) ind = np.argwhere(x1[0]> factor*x1[0].max()) return ind # High speed drivers ? 
df_copy = df_traffic.copy(deep = False) text = 'EXCEEDING SPEED LIMIT' X = vectorizer.transform(df_copy.Description).toarray() x = vectorizer.transform([text]).toarray() similarity = np.matmul(x, X.T) similarity = similarity/np.sqrt(np.sum(similarity**2)) similarity = similarity/np.max(similarity) df_copy['speeding'] = similarity.T df_copy.speeding = df_copy.speeding.where(df_copy.speeding < 0.66, 1) df_copy.speeding = df_copy.speeding.where(df_copy.speeding >= 0.66, 0) # - # White and Asian drivers generally cause speeding compared to Black and Hispanics. # # Our analysis tells that Asian and White drivers cause more accidents compared to Black and Hispanics, although Black and Hispanics receive more citations compared to Asian and Whites. First, we set a null hypothesis that both category receive equal proportion of citation and warnings. From our statistical analysis, we obtain a p-value of 0, hence we can reject the null hypothesis. Hence, it is statistically significant that white and Asian cause more speeding. # + hisblk = df_copy[(df_copy.Race == 'HISPANIC') | (df_copy.Race == 'BLACK')].speeding asnwht = df_copy[(df_copy.Race == 'ASIAN') | (df_copy.Race == 'WHITE')].speeding data1 = hisblk data2 = asnwht t_value, p_value = st.ttest_ind_from_stats(mean1 = data1.mean(), std1 = data1.std(), nobs1 = data1.count(), mean2 = data2.mean(), std2 = data2.std(), nobs2 = data2.count()) print('Fraction of speeding drivers in Hispanic/Black:', (data1.sum())/data1.count()) print('Fraction of speeding drivers in Asian/White :', (data2.sum())/data2.count()) print('p-value: ', p_value) # - # Women are more likely to cause speeding compared to men. # # Similarly, we set a null hypothesis that women cause equal speeding compared to men. From the statistical test below, we obtain p-value of 0. Therefore we can reject the null hypothesis. # + women = df_copy[(df_copy.Gender == 'F')].speeding men = df_copy[(df_copy.Gender == 'M')].speeding data1 = women data2 = men t_value, p_value = st.ttest_ind_from_stats(mean1 = data1.mean(), std1 = data1.std(), nobs1 = data1.count(), mean2 = data2.mean(), std2 = data2.std(), nobs2 = data2.count()) print('Fraction of female speeding drivers:', (data1.sum())/data1.count()) print('Fraction of male speeding drivers in Asian/White :', (data2.sum())/data2.count()) print('p-value: ', p_value) # - # Drivers having new cars are more likely have speeding habits # # We perform a linear regression on age and fraction of speeding drivers. This gives a pearson correlation coefficient of -0.98, which means the Age is negatively correlated with the fraction of speeding drivers, i.e. fraction of speeding drivers decreases with increasing age of the vehicle. # # The p-value of our analysis is extremely small $10^{-16}$, therefore, we can reject the null hypothesis that there is no correlation. This shows that it is real that drivers with new cars have more likely to have speeding habits. 
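# As a complementary check (a sketch, not part of the original analysis), the relationship
# between vehicle age and the speeding flag can also be measured on individual records with
# a point-biserial correlation. Vehicle age is recomputed here because the notebook only
# derives it in the next cell, and no outlier clipping is applied in this quick check.

# +
vehicle_age = df_copy.datetime.dt.year - df_copy.Year
r_pb, p_pb = st.pointbiserialr(df_copy.speeding, vehicle_age)
print('point-biserial r:', r_pb, 'p-value:', p_pb)
# -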
# # # + df_copy['datetime'] = df_copy['Date Of Stop_Time Of Stop'] df_copy['V_Age'] = df_copy.datetime.dt.year - df_copy.Year real_bool = (df_copy.V_Age <= 70) & (df_copy.V_Age >= 0) year_med = np.median(df_copy[real_bool].V_Age) df_copy.V_Age = df_copy[['V_Age']].where(real_bool, year_med) df = df_copy.groupby(['V_Age', 'speeding']).count() ct_prob = list() n = 20 ages = np.linspace(0,n, n+1) for age in ages: speeding = df.loc[(age, 1), 'Agency'] other = df.loc[(age, 0), 'Agency'] prob = speeding/(speeding + other) ct_prob.append(prob) plt.plot(ages, ct_prob, 'ro', label = 'Origional Data') slope, intercept, rvalue, pvalue, std_err = st.linregress(ages,ct_prob) plt.plot(ages, intercept+slope*ages, 'b', label = 'Regression') plt.xlabel('Age of Vehicle') plt.ylabel('Fraction of speeding drivers') plt.legend() print('Regression:', rvalue) print('p-value:', pvalue) # - # Women are also less involved in dangerous accidents. # # First, we set a null hypothesis that women are equally involved in dangerous accidents as men. Our statistical analysis gives p-value of 0.03, which is not within 99% confidence interval. Therefore, we can not reject the null hypothesis. # # In other words, our analysis that women are less involved in dangerous accidents is not statistically significant. # + women = df_traffic[(df_traffic.Gender == 'F')]['Contributed To Accident'] women = pd.get_dummies(women, drop_first=True) men = df_traffic[(df_traffic.Gender == 'M')]['Contributed To Accident'] men = pd.get_dummies(men, drop_first=True) t_value, p_value = st.ttest_ind_from_stats(mean1 = women.mean(), std1 = women.std(), nobs1 = women.count(), mean2 = men.mean(), std2 = men.std(), nobs2 = men.count()) print('Fraction of Female involved in accidents:', ((women.sum()/women.count()).values)) print('Fraction of Men involved in accident :', ((men.sum()/men.count()).values)) print('p-value: ', p_value) # - # Women receive less citations compared to men. # # Null hypothesis: Women receive equal citations compared to men # # However, in this case, we obtain p-value of 0, hence we can reject the null hypothesis. # + men = df_traffic[(df_traffic.Gender == 'M')]['Violation Type'] men = pd.get_dummies(men, drop_first = True) women = df_traffic[(df_traffic.Gender == 'F')]['Violation Type'] women = pd.get_dummies(women, drop_first = True) men = men.Warning women = women.Warning t_value, p_value = st.ttest_ind_from_stats(mean1 = women.mean(), std1 = women.std(), nobs1 = women.count(), mean2 = men.mean(), std2 = men.std(), nobs2 = men.count()) print('Male citation probability:', (men.count() - men.sum())/men.count()) print('Female citation probability:', (women.count() -women.sum())/women.count()) print('p-value: ', p_value) # - # Women appear to get a newer vehicle compared to men. # # Null hypothesis: Women drive as old vehicle as men # # Our statistical analysis gives p-value of 0, hence we can reject the null hypothesis. # + men = df_copy[(df_copy.Gender == 'M')]['V_Age'] women = df_copy[(df_copy.Gender == 'F')]['V_Age'] t_value, p_value = st.ttest_ind_from_stats(mean1 = women.mean(), std1 = women.std(), nobs1 = women.count(), mean2 = men.mean(), std2 = men.std(), nobs2 = men.count()) print('Average age of Vehicle men drive:', men.mean()) print('Average age of Vehicle women drive:', women.mean()) print('p-value: ', p_value) # - # Older vehicles are more likely to get citations compared to new vehicles. # # We perform a linear regression on age and citation probability. 
We obtained a pearson correlation coefficient of 0.96, which means the Age is positively correlated with the citation probability, i.e. vehicles are more likely to recive citation as they get older. # # Our null hypothesis that citation probability is independent of the age of vehicle. # # The p-value of our analysis is extremely small $10^{-13}$, therefore, we can reject the null hypothesis. Hence it is convincing that drivers with older cars are more likely to receive citations. # + df = df_copy.groupby(['V_Age', 'Violation Type']).count() ct_prob = list() n = 20 ages = np.linspace(0,n, n+1) for age in ages: citn = df.loc[(age, 'Citation'), 'Agency'] other = df.loc[(age, 'Warning'), 'Agency'] prob = citn/(citn + other) ct_prob.append(prob) plt.plot(ages, ct_prob, 'ro', label = 'Origional Data') slope, intercept, rvalue, pvalue, std_err = st.linregress(ages,ct_prob) plt.plot(ages, intercept+slope*ages, 'b', label = 'Regression') plt.xlabel('Age of Vehicle') plt.ylabel('Citation Probability') plt.legend() print('Regression:', rvalue) print('p-value:', pvalue) # - # The difference in citation probability is not just a racial bias but due to the age of the vehicle they drive. Among different races, the asian drivers drive newest vehicle (average age of 7.3 yrs) compared to hispanic drivers (average vehicle age 10.5 yrs). The citation probability scales well with the age of the vehicle. # # To check the wheter or not this statement is statistically significant, we set a null hypothesis that Hispanic/Black drivers with new vehicle (age less than 1) have equal citation probability as compared to White/Asian drivers. We otbain a p-value in the order of $10^{-55}$, which means we can reject the null hypothesis. # # This means that the difference in citation probability among different races is not just the age of the vehicle they drive but there are other factors to be considered. # + df_new = df_copy[df_copy.V_Age <= 0] hisblk = df_new[(df_new.Race == 'HISPANIC') | (df_new.Race == 'BLACK')]\ ['Violation Type'] hisblk = pd.get_dummies(hisblk, drop_first = True) asnwht = df_new[(df_new.Race == 'ASIAN') | (df_new.Race == 'WHITE')]\ ['Violation Type'] asnwht = pd.get_dummies(asnwht, drop_first = True) data1 = hisblk.Warning data2 = asnwht.Warning t_value, p_value = st.ttest_ind_from_stats(mean1 = data1.mean(), std1 = data1.std(), nobs1 = data1.count(), mean2 = data2.mean(), std2 = data2.std(), nobs2 = data2.count()) print('Hispanic/Black citation probability:', (data1.count() - data1.sum())/data1.count()) print('Asian/White citation probability:', (data2.count() -data2.sum())/data2.count()) print('p-value: ', p_value) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Gridtools Module # # A new gridtools.py module has some tools for reading in a solution as a set of grid patches computed using AMR and extracting data on a uniform 2d grid or along a 1d transect, by interpolating from the finest level patch available at each point. 
# # You can read in a time frame of the solution at some fixed time using, e.g.: # ``` # from clawpack.pyclaw import solution # framesoln = solution.Solution(frameno, path=outdir, file_format='binary') # ``` # and then pass `framesoln` in to `gridtools.grid_output_2d` along with other arguments that specify what set of scalar values to extract and the set of grid points on which to extract them: # ``` # def grid_output_2d(framesoln, out_var, xout, yout, levels='all', # return_ma=True): # :Input: # framesoln: One frame of Clawpack solution (perhaps with AMR), # An object of type pyclaw.Solution.solution. # out_var: function that maps q to desired quantities Q[m,i,j] or # Q[i,j] if only one. # If type(out_var) == int, then Q[i,j] = q[out_var,i,j] # xout, yout: arrays of output points (1d or 2d arrays) # levels: list of levels to use, or 'all' # return_ma: True to return as masked_array, False to return with # NaN in locations that framesoln doesn't cover. # :Output: # qout: Solution obtained on xout,yout grid # ``` # %matplotlib inline from pylab import * from IPython.display import Image import os,sys from clawpack.visclaw import colormaps, frametools, geoplot from clawpack.geoclaw import dtopotools from clawpack.pyclaw.solution import Solution from clawpack.pyclaw import solution sys.path.insert(0,'../new_python') import region_tools, topotools, marching_front from plottools import pcolorcells import gridtools # ## Sample data from test case # # We use output that can be generated by running the notebook [RunGeoclaw_test1.ipynb](RunGeoclaw_test1.ipynb) in the directory `examples/geoclaw_test1`. See that notebook for more discussion of this test problem. rundir = '../examples/geoclaw_test1' outdir = os.path.join(rundir, '_output_3') sys.path.insert(0,rundir) frameno = 5 framesoln = solution.Solution(frameno, path=outdir, file_format='binary') print('Frame %i solution at t = %.0f seconds has %i grid patches' \ % (frameno, framesoln.t, len(framesoln.states))) # ### Plot using frametools # # The standard way to plot the AMR solution using visclaw is to provide a `setplot.py` file that specifies the desired plots and then use `clawpack.visclaw.frametools` to loop over all the grid patches and produce the desired plots. This is invoked behind the scenes when doing `make plots` or using the interactive `Iplotclaw` module. But it is also possible to use frametools directly to produce one set of plots, for example: from setplot import setplot plotdata = setplot() plotdata.outdir = outdir frametools.plotframe(frameno,plotdata) # ## Extract uniform grid # # But now suppose we want to extract values on a uniform 2D grid for some purpose, e.g. when making an animation over some region. # # The water surface eta is given by `q[3,:,:]` and the topography B can be computed by subtracting the water depth `q[0,:,:]` from this, so we can define two functions that return these as 2D arrays for any `q` defining the full solution on a grid patch: eta = lambda q: q[3,:,:] B = lambda q: q[3,:,:]-q[0,:,:] # Define the desired output grid. 
For illustration we use a very coarse grid: x = linspace(-0.005,0.01,16) y = linspace(-0.01,0.01,21) Xout, Yout = meshgrid(x,y) B_out_2d = gridtools.grid_output_2d(framesoln, B, Xout, Yout, levels='all',return_ma=True) eta_out_2d = gridtools.grid_output_2d(framesoln, eta, Xout, Yout, levels='all',return_ma=True) print('Interpolated to uniform grids of shape ', B_out_2d.shape) # + figure(figsize=(10,6)) pcolorcells(Xout, Yout, B_out_2d, cmap=geoplot.land_colors) clim(-0.5,0.5) contour(Xout, Yout, B_out_2d, [0], colors='k') h_out_2d = eta_out_2d - B_out_2d eta_masked = ma.masked_where(h_out_2d < 0.001, eta_out_2d) pcolorcells(Xout, Yout, eta_masked, cmap=geoplot.tsunami_colormap) clim(-0.5,0.5) colorbar() gca().set_aspect(1) title('Surface on uniform coarse grid\nBlack contour is B=0 at this resolution'); # - # ## Extract 1d transects # # It is often difficult to visualize the topography and water depth from 2d plots like those shown above, and so it is useful to plot the solution along 1d transects. # # As an example, we plot the solution along a transect at constant latitude `y = 0.002` over `-0.005 <= x <= 0.01`, which goes through the Gaussian depression near the shore. # # We also illustrate that a single call to `gridtools.grid_output_2d` can be used for each frame by defining `out_var` below to be an array that will return both `B_out` and `eta_out`. This is more efficient for large data sets and several output quantities than multiple calls to `gridtools.grid_output_2d`. eta = lambda q: q[3,:,:] B = lambda q: q[3,:,:]-q[0,:,:] out_var = lambda q: array((B(q),eta(q))) # + # output grid (1d transect): #xout = linspace(-0.005, 0.01, 1001) xout = linspace(-0.005, 0.03, 1001) ylat = 0.002 yout = ylat * ones(xout.shape) # single call to extract both quantities of interest: qout = gridtools.grid_output_2d(framesoln, out_var, xout, yout, levels='all',return_ma=True) # unpack the results: B_out = qout[0,:] eta_out = qout[1,:] # - # Plot the transect results, using `fill_between` to show the cross section of earth as green and of water as blue: figure(figsize=(10,4)) fill_between(xout, eta_out, B_out, color=[.5,.5,1]) fill_between(xout, B_out, -6, color=[.7,1,.7]) plot(xout, B_out, 'g') plot(xout, eta_out, 'b') # Note that we are interpolating to a fine grid with 1001 points, and piecewise constant interpolation is performed using the cell average values. So in the plot above the curves look piecewise constant with jumps at the cell interfaces of the computational grid from which the solution is interpolated. # ### Loop over frames # # Putting this in a loop lets us see much better how the solution evolves along the coast. # # For these plots we zoom in on the region near the coast. # # Note in the plots below that at early times only a coarse grid is present in this region, and the interpolated solution clearly shows this coarse grid structure. # # Also note that we are plotting results from the version of this example in which the `force_dry` mask is used to indicate cells that should be initialized to dry (`h = 0`) even if the topography is below sea level (`B < 0`). However, this is applied only on the finest grid and so at early times there is water in the depression that disappears at time 600, when the finest grid is introduced (which has been carefully chosen to be before the tsunami arrives). 
# + xout = linspace(-0.005, 0.01, 1001) ylat = 0.002 yout = ylat * ones(xout.shape) for frameno in range(6): framesoln = solution.Solution(frameno, path=outdir, file_format='binary') qout = gridtools.grid_output_2d(framesoln, out_var, xout, yout, levels='all',return_ma=True) B_out = qout[0,:] eta_out = qout[1,:] figure(figsize=(10,4)) fill_between(xout, eta_out, B_out, color=[.5,.5,1]) fill_between(xout, B_out, -6, color=[.7,1,.7]) plot(xout, B_out, 'g') plot(xout, eta_out, 'b') title('Transect along y = %.4f at t = %.1f' % (ylat, framesoln.t)) # - # ## Transects at an angle to the grid # # In the example above our transect was along a line of constant latitude, but this is not necessary. A transect between any two points `(x1,y1)` and `(x2,y2)` can be defined by e.g. x1 = -0.004; x2 = 0.008 y1 = -0.005; y2 = 0.0075 npts = 1001 xout = linspace(x1,x2,npts) yout = linspace(y1,y2,npts) # + figure(figsize=(10,6)) pcolorcells(Xout, Yout, B_out_2d, cmap=geoplot.land_colors) clim(-0.5,0.5) contour(Xout, Yout, B_out_2d, [0], colors='k') h_out_2d = eta_out_2d - B_out_2d eta_masked = ma.masked_where(h_out_2d < 0.001, eta_out_2d) pcolorcells(Xout, Yout, eta_masked, cmap=geoplot.tsunami_colormap) clim(-0.5,0.5) colorbar() gca().set_aspect(1) plot(xout,yout,'w',linewidth=2) text(0.006,0.008,'Transect',color='w',fontsize=15) title('Surface on uniform coarse grid\nBlack contour is B=0 at this resolution'); # + qout = gridtools.grid_output_2d(framesoln, out_var, xout, yout, levels='all',return_ma=True) B_out = qout[0,:] eta_out = qout[1,:] figure(figsize=(10,4)) fill_between(xout, eta_out, B_out, color=[.5,.5,1]) fill_between(xout, B_out, -6, color=[.7,1,.7]) plot(xout, B_out, 'g') plot(xout, eta_out, 'b') xlabel('Longitude x') title('Plot cross section on transect vs longitude'); # - # (Note that the 2d plot above showed the coarser resolution uniform grid solution extracted above, while the transect plot uses the full AMR solution.) # ### Plot vs. distance along transect # # In the plot above we plotted the value on the transect vs. longitude. If the transect had been running more N-S than E-W we could have instead plotted against latitude. # # Sometimes we want to plot values on the transect vs. the distance in meters. When GeoClaw is used in longitude-latitude coordinates, this distance can be calculated using the `clawpack.geoclaw.util.haversine` function: from clawpack.geoclaw import util dist = util.haversine(x1, y1, xout, yout) print('The length of this transect is %.2f meters' % dist[-1]) figure(figsize=(10,4)) fill_between(dist, eta_out, B_out, color=[.5,.5,1]) fill_between(dist, B_out, -6, color=[.7,1,.7]) plot(dist, B_out, 'g') plot(dist, eta_out, 'b') xlabel('Distance along transect (meters)') title('Plot cross section on transect vs distance'); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="71mQEXWm964S" # PEQNP # # # SAT-X # ## [![Downloads](https://pepy.tech/badge/satx)](https://pepy.tech/project/satx) # ### The constraint modeling language for SAT solvers # # SAT-X is a language for constrained optimization and decision problems over positive integers, that work with any SAT Competition standard SAT solver. Is based on Python, and is ase to learn and easy to use with all technologies associated to this language. 
# # # Some excelent SAT Solvers # # 1- https://github.com/maxtuno/SLIME (standalone and cloud [MPI] - Oscar Riveros) # # 2- https://github.com/arminbiere/kissat (The Kissat SAT Solver - ) # # 3- https://github.com/maxtuno/MiniSat (MiniSat 2.2.0 with DRUP proof, CMake and StarExec Ready. - , ) # # 4- https://github.com/maxtuno/blue (A Powerful SAT Solver for Java - Oscar Riveros) # # 5- Any SAT Solver with the SAT Competition standars. (http://www.satcompetition.org) # + [markdown] id="JzM69hBn964T" # # INSTALLATION # ```python # pip install SATX # ``` # + colab={"base_uri": "https://localhost:8080/"} id="GMnjfDhU_EEc" tags=[] outputId="b0e32304-9ed8-4a6f-e484-0a4353f722ed" # !pip install SATX --upgrade --force # + [markdown] id="n71dPAXZecdj" # # INTRODUCION # On SAT-X all elements are integers, the relations are at bit level or arithmetic level. All integers live on $\mathbb{N}_{2 ^ {bits} - 1}$ and always positives, i.e. for two integer $(x, y)$ the operation, $(x - y)$ take all possibilities such that $x - y >= 0$. # # # SUPPORTED OPERATIONS # $+$, $-$, $*$, $/$, $**$, $abs$, $powmod$, $\%$, $\&$, $|$, $^$, $==$, $=$, $<$, $<=$, $>$, $>=$ # + [markdown] id="Kd2ebfRUkjre" # # FIND ALL SOLUTIONS TO $2^n-7=x^2$ # # Ref: http://www.artofproblemsolving.com/Forum/viewtopic.php?t=66245 # + colab={"base_uri": "https://localhost:8080/"} id="vY7upZWNkoS_" outputId="9f8f8a9d-d151-4507-82cb-60796de01e22" import satx satx.engine(32, cnf_path='tmp.cnf') _2 = satx.constant(2) n = satx.integer() x = satx.integer() assert _2 ** n - 7 == x ** 2 while satx.satisfy('slime'): print(n, x) # + [markdown] id="RbN-rZnek7Dl" # SOLVE IN POSITIVE INTEGERS THE FOLLOWING EQUATION: $n^3 − 5n + 10 = 2^k$. # # Ref: http://www.artofproblemsolving.com/Forum/viewtopic.php?t=103239 # # tip: avoid the negative signs # + colab={"base_uri": "https://localhost:8080/"} id="yOQyM4QslOAz" outputId="b4f30395-8651-46fc-ff82-152cf1fd0155" import satx satx.engine(16, cnf_path='tmp.cnf') _2 = satx.constant(2) n = satx.integer() k = satx.integer() assert n ** 3 + 10 == _2 ** k + 5 * n while satx.satisfy('slime'): print(n, k) # + [markdown] id="AcGfS1eRlT6U" # # FIND ALL POSITIVE INTEGER SOLUTIONS TO $a^{2}=b^{3}+1$ # # Ref: http://www.artofproblemsolving.com/Forum/viewtopic.php?t=103239 # + colab={"base_uri": "https://localhost:8080/"} id="V8LEjkg3lYP7" outputId="6f299539-084c-43c1-9fee-8233b24e147a" import satx satx.engine(16, cnf_path='tmp.cnf') a = satx.integer() b = satx.integer() assert a ** 2 == b ** 3 + 1 while satx.satisfy('slime'): print(a, b) # + [markdown] id="3EWbG3SMlbOs" # # FIND ALL POSITIVE INTEGER SOLUTIONS TO $x,y$: $3^x - 1 == y2^x + 1$ # # ref: http://www.artofproblemsolving.com/Forum/viewtopic.php?t=31947 # + colab={"base_uri": "https://localhost:8080/"} id="Ci3EZWlYlhV5" outputId="4b0fa28f-8938-4309-efa1-fd64ec0e2094" import satx satx.engine(32, cnf_path='tmp.cnf') _2 = satx.constant(2) _3 = satx.constant(3) x = satx.integer() y = satx.integer() assert _3 ** x == y * _2 ** x + 1 while satx.satisfy('slime'): print(x, y) # + [markdown] id="aiWphw2k964V" # # INTEGER FACTORIZATION # In number theory, integer factorization is the decomposition of a composite number into a product of smaller integers. If these factors are further restricted to prime numbers, the process is called prime factorization. 
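# As a plain-Python sanity check (a sketch, independent of the SAT model in the next cell),
# trial division recovers the factorization of the same small semiprime used below,
# rsa = 3007 = 31 * 97.

# +
def trial_division(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

trial_division(3007)  # (31, 97)
# -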
# + colab={"base_uri": "https://localhost:8080/"} id="7S96pDBk964W" tags=[] outputId="93c578c4-6343-4574-da32-cc3603e20567" import satx rsa = 3007 satx.engine(rsa.bit_length(), cnf_path='tmp.cnf') p = satx.integer() q = satx.integer() assert p * q == rsa while satx.satisfy('slime'): print(p, q) # + [markdown] id="kpXoFzLi2PVw" # # PALINDROMIC NUMBERS (Bits Level) # + colab={"base_uri": "https://localhost:8080/"} id="6EU_vqh-1hIG" outputId="f82de897-c78e-4f01-d0a6-6dd5865e4e03" import numpy as np import satx satx.engine(10, cnf_path='tmp.cnf') x = satx.integer() # without "copy" for inplace reverse of bits assert x == x.reverse(copy=True) while satx.satisfy('slime'): print(x, np.binary_repr(x.value, satx.bits())) # + [markdown] id="9sIHVlpNt1qX" # # XOR Problem # # The XOr, or “exclusive or”, problem is a classic problem in ANN research. It is the problem of using a neural network to predict the outputs of XOr logic gates given two binary inputs. An XOr function should return a true value if the two inputs are not equal and a false value if they are equal. # + colab={"base_uri": "https://localhost:8080/"} id="AeFgWSlAtu95" outputId="54b06e4e-7e5d-40ab-dec8-1937b6a08f8c" import satx x = [[0, 0], [0, 1], [1, 0], [1, 1]] y = [0, 1, 1, 0] n, m = len(x), len(x[0]) satx.engine(10, cnf_path='tmp.cnf') w = satx.matrix(dimensions=(n, m)) b = satx.vector(size=n) for i in range(n): assert y[i] == satx.dot(x[i], w[i]) + b[i] if satx.satisfy('slime'): for i in range(n): print(x[i], satx.dot(x[i], w[i]) + b[i]) else: print('Infeasible ...') # + [markdown] id="-BeLNuT8COAO" # # ABSOLUTE VALUES # + colab={"base_uri": "https://localhost:8080/"} id="vB2QHUbPCOAP" outputId="d0a90421-6325-4320-8984-7e6185ef74f5" import satx satx.engine(4, cnf_path='tmp.cnf') x = satx.integer() y = satx.integer() assert abs(x - y) == 1 assert x != satx.oo() assert y != satx.oo() while satx.satisfy('slime'): print(x, y, x - y, abs(x - y)) # + [markdown] id="LrPaLRqeOjkg" # # CENTROID # + colab={"base_uri": "https://localhost:8080/", "height": 280} id="zeVnsPPqOnPX" outputId="87434b91-c7b1-4c4e-fc8b-18bc20e93558" import numpy as np import satx import matplotlib.pyplot as plt n = 15 data = np.random.randint(0, 10, size=(n, 2)) opt = 1 while True: satx.engine(10, cnf_path='tmp.cnf') x = satx.integer() y = satx.integer() assert sum(abs(xy[0] - x) + abs(xy[1] - y) for xy in data) < opt assert x != satx.oo() assert y != satx.oo() if satx.satisfy('slime'): print(x, y) a, b = zip(*data) plt.plot(a, b, '.') plt.plot(x, y, 'x') plt.show() break else: opt += 1 # + [markdown] id="oWFt1q0XV-7j" # # FERMAT'S FACTORIZATION METHOD # # Note: when there is a negative number in the model, increment the bits by 1. 
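# For reference (a sketch, independent of the SAT model below), Fermat's method searches
# for p and q with p**2 - q**2 == n, so that n == (p + q) * (p - q). In plain Python,
# assuming n is odd (as for an RSA-style semiprime):

# +
import math

def fermat_factor(n):
    p = math.isqrt(n)
    if p * p < n:
        p += 1
    while True:
        q2 = p * p - n
        q = math.isqrt(q2)
        if q * q == q2:
            return p + q, p - q  # the two factors of n
        p += 1

fermat_factor(3007)  # (97, 31)
# -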
# + colab={"base_uri": "https://localhost:8080/"} id="k01dOcjOVk9w" tags=[] outputId="d39152c4-15a5-4d95-d825-40c34688f944" import satx rsa = 3007 satx.engine(rsa.bit_length() + 1, cnf_path='tmp.cnf') p = satx.integer() q = satx.integer() assert p ** 2 - q ** 2 == rsa assert q < p if satx.satisfy('slime'): print(p, q, p + q, p - q) else: print('Is Prime!') # + [markdown] id="GgE1AgRiYmij" # # EXPONENTIAL DIOPHANTINE EQUATIONS # # + colab={"base_uri": "https://localhost:8080/"} id="XU0I3MnZYu-x" tags=[] outputId="91f586df-131a-4669-9107-86f47365a056" import satx satx.engine(10, cnf_path='tmp.cnf') x = satx.integer() y = satx.integer() z = satx.integer() satx.apply_single([x, y, z], lambda t: t != 0) assert x ** y == z while satx.satisfy('slime'): print(x, y, z) # + [markdown] id="EUBndQTWCOAd" # # ON THE DIOPHANTINE EQUATION $x^2 + c = 3^n$ WITH $x, c, n > 1$ # + colab={"base_uri": "https://localhost:8080/"} id="yTLNkcWACOAd" tags=[] outputId="e3f397de-9eeb-48c5-9ab7-52a209d27a5b" import satx n = 32 satx.engine(n.bit_length(), cnf_path='tmp.cnf') _3 = satx.constant(3) n = satx.integer() x = satx.integer() c = satx.integer() assert x ** 2 + c == _3 ** n assert x > 1 assert c > 1 assert n > 1 if satx.satisfy('slime'): print(n, x, c) else: print('Infeasible for bit range...') # + [markdown] id="8EJXiM2FCOAh" # # FACTORIALS # + colab={"base_uri": "https://localhost:8080/"} id="iwack4VxCOAh" outputId="1fff8b25-1dd2-4701-e98f-1e0eb4f4a7f6" import math import satx satx.engine(32, cnf_path='tmp.cnf') x = satx.integer() satx.factorial(x) == math.factorial(10) if satx.satisfy('slime'): print(x) else: print('Need more bits!') # + [markdown] id="O4gi4qEGCOAk" # # $\Sigma$ # + colab={"base_uri": "https://localhost:8080/"} id="MANm7Pu8COAl" outputId="01a54a59-5712-4ba6-8753-48b891aa7473" import satx satx.engine(16, cnf_path='tmp.cnf') x = satx.integer() n = satx.integer() satx.sigma(lambda k: k ** 2, 1, n) == x while satx.satisfy('slime'): print(x, n, sum(k ** 2 for k in range(1, n.value + 1))) # + [markdown] id="VWgIBrvZCOAn" # # $\Pi$ # + colab={"base_uri": "https://localhost:8080/"} id="TxIWHtjXCOAo" outputId="0a65013f-b102-49a8-e2d9-a6263a5a1dbc" import functools import operator import math import satx satx.engine(32, cnf_path='tmp.cnf') x = satx.integer() n = satx.integer() satx.pi(lambda k: k ** 2, 1, n) == x assert 0 < x <= 2 ** math.log(satx.oo()) # limit the CNF overflow assert n > 0 while satx.satisfy('slime'): print(x, n, functools.reduce(operator.mul, (k ** 2 for k in range(1, n.value + 1)))) # + [markdown] id="dq3QrO15MjGZ" # # SAT-X VS FIBONACCI NUMBERS | N VS TIME # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="hmnhjMXsMm3n" outputId="0e0f8107-d77c-457a-c121-fe7ec430e7b3" import time import satx import matplotlib.pyplot as plt ns, ts = [], [] for n in range(2, 100): satx.engine(n, cnf_path='tmp.cnf') x = satx.vector(size=n + 1) assert x[0] == 0 assert x[1] == 1 for i in range(2, n + 1): assert x[i - 1] + x[i - 2] == x[i] ini = time.time() if satx.satisfy('slime'): print(n, x[n]) else: print('Infeasible ...') end = time.time() ns.append(n) ts.append(end - ini) plt.title('SAT-X vs Fibonacci Numbers | n vs time') plt.plot(ns, ts) plt.show() # + [markdown] id="fkO7OMupWbBy" # # TENSORS # # Tensors object are the most advanced concept behind SAT-X, integers are tensors, work like integers, but their bits act like an multidimensional matrix of lambda functions. # # Note: [[*]] for acces to lambda (bit) functions. 
# + colab={"base_uri": "https://localhost:8080/"} id="RR7JLifNZo95" outputId="23b1b73c-4785-4a11-a8d0-022fed5229e9" import satx satx.engine(10, cnf_path='tmp.cnf') x = satx.tensor(dimensions=(4)) y = satx.tensor(dimensions=(2, 2)) assert x + y == 10 assert x[[0]](0, 1) == 1 assert y[[0, 0]](0, 1) == 1 while satx.satisfy('slime'): print(x, y, x.binary, y.binary) # + colab={"base_uri": "https://localhost:8080/"} id="upsI_STPaZee" outputId="dbf79ef4-8d6f-4f62-d493-ff774f9778fd" import numpy as np import satx n = 2 satx.engine(4, cnf_path='tmp.cnf') x = satx.tensor(dimensions=(n, n)) a = satx.integer() b = satx.integer() assert sum(x[[i, j]](a ** 2 - b ** 3, a ** 3 - b ** 2) for i in range(n) for j in range(n)) == 0 while satx.satisfy('slime'): print(a, b) print(np.vectorize(int)(x.binary)) print() # + [markdown] id="yfhZFB3BXuNH" # # RSA FACTORIZATION WITH TENSORS # + colab={"base_uri": "https://localhost:8080/"} id="67L02LcOWKSV" outputId="1db60e2a-cd23-4fc5-a24e-bf30840f1974" import satx rsa = 3007 satx.engine(rsa.bit_length(), cnf_path='tmp.cnf') p = satx.tensor(dimensions=(satx.bits())) q = satx.tensor(dimensions=(satx.bits())) assert p * q == rsa assert p[[0]](0, 1) == 1 assert q[[0]](0, 1) == 1 assert sum(p[[i]](0, 1) for i in range(satx.bits() // 2 + 1, satx.bits())) == 0 assert sum(q[[i]](0, 1) for i in range(satx.bits() // 2, satx.bits())) == 0 if satx.satisfy('slime'): print(p, q) else: print('Is Prime!') # + [markdown] id="U7_zMGVNXVnh" # # SAT REFORMULATION WITH TENSORS # + colab={"base_uri": "https://localhost:8080/"} id="fFW0JlyaXMRO" outputId="9368de74-ae6e-4690-d47c-4e77ae040871" import functools import operator import sys import satx n, m, sat = 10, 24, [[9, -5, 10, -6, 3], [6, 8], [8, 4], [-10, 5], [-9, 8], [-9, -3], [-2, 5], [6, 4], [-2, -1], [7, -2], [-9, 4], [-1, -10], [-3, 4], [7, 5], [6, -3], [-10, 7], [-1, 7], [8, -3], [-2, -10], [-1, 5], [-7, 1, 9, -6, 3], [-9, 6], [-8, 10, -5, -4, 2], [-4, -7, 1, -8, 2]] if __name__ == '__main__': satx.engine(bits=1, cnf_path='tmp.cnf') x = satx.tensor(dimensions=(n,)) assert functools.reduce(operator.iand, (functools.reduce(operator.ior, (x[[abs(lit) - 1]](lit < 0, lit > 0) for lit in cls)) for cls in sat)) == 1 if satx.satisfy('slime'): print('SAT') print(' '.join(map(str, [(i + 1) if b else -(i + 1) for i, b in enumerate(x.binary)])) + ' 0') else: print('') # + [markdown] id="9hQcWGrRX1Wg" # # SUM SUBSET PROBLEM WITH TENSORS # + colab={"base_uri": "https://localhost:8080/"} id="p5khFp70X6a_" outputId="a6fe42da-a7ee-4745-a09c-2b359bf9f9c5" import numpy as np import satx universe = np.random.randint(1, 2 ** 16, size=100) t = np.random.randint(min(universe), sum(universe)) satx.engine(t.bit_length(), cnf_path='tmp.cnf') x = satx.tensor(dimensions=(len(universe))) assert sum(x[[i]](0, universe[i]) for i in range(len(universe))) == t if satx.satisfy('slime'): sub = [universe[i] for i in range(len(universe)) if x.binary[i]] print(t, sum(sub), sub) else: print('Infeasible ...') # + [markdown] id="lxKWdGrI964h" # # MULTISET RECONSTRUCTION BY DIFFERENCES # # Given a sorted multiset, their differences and one tip (an element and position for only one arbitrary element), is possible recovery the original multiset? 
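# For comparison (a sketch, not part of the SAT model below): because the multiset is
# sorted, every consecutive difference has a known sign, so a single tip determines the
# whole multiset directly via partial sums of the differences.

# +
def reconstruct(diffs, ith, tip):
    n = len(diffs) + 1
    x = [0] * n
    x[ith] = tip
    for k in range(ith - 1, -1, -1):   # walk left of the tip
        x[k] = x[k + 1] - diffs[k]
    for k in range(ith + 1, n):        # walk right of the tip
        x[k] = x[k - 1] + diffs[k - 1]
    return x

reconstruct([1, 2, 2], 2, 5)  # [2, 3, 5, 7]
# -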
# + colab={"base_uri": "https://localhost:8080/"} id="sDtWaw6l964i" outputId="681dbd6c-d8f6-4d32-b69e-3de1e983d0c7" import time import random import satx def generator(n, max_val): return sorted([random.randint(1, max_val) for _ in range(n)]) def differences(lst): return [abs(lst[i] - lst[i - 1]) for i in range(1, len(lst))] # 100 tests for n in range(1, 10): m = random.randint(1, n ** 2) original = generator(n, m) diffs = differences(original) print('N, M : {}, {}'.format(n, m)) print('DIFFERENCES : {}'.format(diffs)) print('ORIGINAL : {}'.format(original)) # only one tip ith = random.choice(range(n)) tip = original[ith] # init timer ini = time.time() # Empirical bits necessarily to solve the problem. satx.engine(sum(diffs).bit_length() + 4, cnf_path='tmp.cnf') # Declare a n-vector of integer variables to store the solution. x = satx.vector(size=n) # The tip is on x at index ith assert tip == satx.index(ith, x) # The i-th element of the instance is the absolute difference of two consecutive elements for i in range(n - 1): assert x[i] <= x[i + 1] assert satx.index(i, diffs) == x[i + 1] - x[i] # Solve the problem for only one solution # Turbo parameter is a destructive simplification # Solve with all power os SLIME SAT Solver but only for the fist solution. if satx.satisfy('slime'): o = [abs(x[i + 1] - x[i]) for i in range(n - 1)] c = 100 * len(set(map(int, x)).intersection(set(original))) / len(set(original)) print('SOLVED : {}'.format(x)) print('COINCIDENCES : {}%'.format(c)) if o == diffs: print('OK! - {}s'.format(time.time() - ini)) else: print('NOK! - {}s'.format(time.time() - ini)) raise Exception('ERROR!') if c != 100: raise Exception('Hypothesis Fail - 100%') # + [markdown] id="c2MllZ2i964r" # # DIOPHANTINE EQUATIONS # # https://en.wikipedia.org/wiki/Diophantine_equation # + [markdown] id="DK1DC7aE964s" # ### Let be $x, y \in \mathbb{N} \vert x^3 - x + 1 = y^2$ # + colab={"base_uri": "https://localhost:8080/"} id="7bgCxLRI964t" outputId="4ca3a6ee-2765-4aca-de04-15c07a5ef656" import satx satx.engine(10, cnf_path='tmp.cnf') x = satx.integer() y = satx.integer() assert x ** 3 - x + 1 == y ** 2 assert x != 0 assert y != 0 while satx.satisfy('slime'): print('{0} ** 3 - {0} + 1, {1} ** 2'.format(x, y)) # + [markdown] id="tXkkLVGl964w" # ### Let be $x, y \in \mathbb{Q} \vert x^3 + xy = y^2$ # + colab={"base_uri": "https://localhost:8080/"} id="p84V_ORl964w" outputId="b5bddbb7-b512-4f97-9e5e-1c28218bd450" import satx satx.engine(10, cnf_path='tmp.cnf') x = satx.rational() y = satx.rational() assert x ** 3 + x * y == y ** 2 assert x != 0 assert y != 0 while satx.satisfy('slime'): print('{0} ** 3 + {0} * {1} == {1} ** 2'.format(x, y)) # + [markdown] id="ijexlhdd9642" # # Vectors # + colab={"base_uri": "https://localhost:8080/", "height": 263} id="QNWHLqHS9644" outputId="03e99a57-06b1-4e5b-bbdb-29650a1a4507" import numpy as np import satx import matplotlib.pyplot as plt dim = 2 satx.engine(5, cnf_path='tmp.cnf') ps = satx.vector(size=dim, is_rational=True) assert sum([p ** dim for p in ps]) <= 1 dots = [] while satx.satisfy('slime'): dots.append(np.vectorize(float)(ps)) x, y = zip(*dots) plt.axis('equal') plt.plot(x, y, 'r.') plt.show() # + [markdown] id="C0lkauin9646" # # NP-COMPLETE PROBLEMS # # NP-Complete problem, any of a class of computational problems for which no efficient solution algorithm has been found. Many significant computer - science problems belong to this class—e.g., the traveling salesman problem, satisfiability problems, and graph - covering problems. 
# # https://en.wikipedia.org/wiki/NP-completeness # + [markdown] id="hJQDg3rb9647" # # SATISFIABILITY # # Study of boolean functions generally is concerned with the set of truth assignments(assignments of 0 or 1 to each of the variables) that make the function true. # # https://en.wikipedia.org/wiki/Boolean_satisfiability_problem # + colab={"base_uri": "https://localhost:8080/"} id="GR62G6M99647" outputId="0148d75e-63a6-4b1f-b023-c17a549a6b10" import functools import operator import satx n, m, sat = 10, 24, [[9, -5, 10, -6, 3], [6, 8], [8, 4], [-10, 5], [-9, 8], [-9, -3], [-2, 5], [6, 4], [-2, -1], [7, -2], [-9, 4], [-1, -10], [-3, 4], [7, 5], [6, -3], [-10, 7], [-1, 7], [8, -3], [-2, -10], [-1, 5], [-7, 1, 9, -6, 3], [-9, 6], [-8, 10, -5, -4, 2], [-4, -7, 1, -8, 2]] satx.engine(bits=1, cnf_path='tmp.cnf') x = satx.tensor(dimensions=(n,)) assert functools.reduce(operator.iand, (functools.reduce(operator.ior, (x[[abs(lit) - 1]](lit < 0, lit > 0) for lit in cls)) for cls in sat)) == 1 if satx.satisfy('slime'): print('SAT') print(' '.join(map(str, [(i + 1) if b else -(i + 1) for i, b in enumerate(x.binary)])) + ' 0') else: print('UNSAT') # + [markdown] id="-rrrwc0V9649" # # k-CLIQUE # # Input: Graph $G$, positive integer $k$ # # Property: $G$ has a set of mutually adjacent nodes. # # https://en.wikipedia.org/wiki/Clique_problem # + colab={"base_uri": "https://localhost:8080/"} id="QQDWim1w9649" outputId="446f9348-07b5-458a-f02a-e4e905df2795" import satx # Ths bits of the clique to search k = 3 # Get the graph, and the dimension for the graph n, matrix = 5, [(1, 0), (0, 2), (1, 4), (2, 1), (4, 2), (3, 2)] # Ensure the problem can be represented satx.engine(bits=k.bit_length(), cnf_path='tmp.cnf') # Declare an integer of n-bits bits = satx.integer(bits=n) # The bits integer have "bits"-active bits, i.e, the clique has "bits"-elements assert sum(satx.switch(bits, i) for i in range(n)) == k # This entangles all elements that are joined together for i in range(n - 1): for j in range(i + 1, n): if (i, j) not in matrix and (j, i) not in matrix: assert satx.switch(bits, i) + satx.switch(bits, j) <= 1 if satx.satisfy('slime'): print(k) print(' '.join([str(i) for i in range(n) if not bits.binary[i]])) else: print('Infeasible ...') # + [markdown] id="W9A_lLrn964_" # # VERTEX COVER # # In the mathematical discipline of graph theory, a vertex cover (sometimes node cover) of a graph is a set of vertices that includes at least one endpoint of every edge of the graph. The problem of finding a minimum vertex cover is a classical optimization problem in computer science and is a typical example of an NP-hard optimization problem that has an approximation algorithm. Its decision version, the vertex cover problem, was one of Karp's 21 NP-complete problems and is therefore a classical NP-complete problem in computational complexity theory. Furthermore, the vertex cover problem is fixed-parameter tractable and a central problem in parameterized complexity theory. # # https://en.wikipedia.org/wiki/Vertex_cover # + colab={"base_uri": "https://localhost:8080/"} id="qaxFZxFL965A" outputId="ab6be7fc-2fda-4a19-a5b4-46f756a1b083" import satx # Get the graph and dimension, and the bits of the cover. 
n, graph, vertex, k = 5, [(1, 0), (0, 2), (1, 4), (2, 1), (4, 2), (3, 2)], [0, 1, 2, 3, 4], 3 # Ensure the problem can be represented satx.engine(bits=n.bit_length() + 1, cnf_path='tmp.cnf') # An integer with n-bits to store the indexes for the cover index = satx.integer(bits=n) # This entangled the all possible covers for i, j in graph: assert satx.switch(index, vertex.index(i), neg=True) + satx.switch(index, vertex.index(j), neg=True) >= 1 # Ensure the cover has bits k assert sum(satx.switch(index, vertex.index(i), neg=True) for i in vertex) == k if satx.satisfy('slime'): opt = sum(index.binary) print('p bits {}'.format(opt)) print(' '.join([str(vertex[i]) for i in range(n) if index.binary[i]])) else: print('Infeasible ...') # + [markdown] id="-Itdoe64965B" # # MULTIDIMENSIONAL LATIN SQUARES # # In combinatorics and in experimental design, a Latin square is an n × n array filled with n different symbols, each occurring exactly once in each row and exactly once in each column. # # https://en.wikipedia.org/wiki/Latin_square # + colab={"base_uri": "https://localhost:8080/"} id="ZobMPYao965C" outputId="ac143817-b05b-4b66-9f94-b5b2afef40cb" import numpy as np import satx n = 6 m = 3 satx.engine(n.bit_length(), cnf_path='tmp.cnf') Y = satx.vector(size=n ** m) satx.apply_single(Y, lambda k: k < n) Y = np.reshape(Y, newshape=(m * [n])) for i in range(n): satx.all_different(Y[i]) satx.all_different(Y.T[i]) for j in range(n): satx.all_different(Y[i][j]) satx.all_different(Y.T[i][j]) for idx in satx.hyper_loop(m - 1, n): s = Y for i in idx: s = s[i] satx.all_different(s) satx.all_different(s.T) if satx.satisfy('slime'): y = np.vectorize(int)(Y).reshape(m * [n]) print(y) else: print('Infeasible ...') # + [markdown] id="bbdyVi1o965E" # # TRAVELLING SALESMAN PROBLEM WITH HESS ALGORITHM () # # https://independent.academia.edu/oarr # # The travelling salesman problem asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science. 
# # https://en.wikipedia.org/wiki/Travelling_salesman_problem # # SAT-X include a full implementation of the HESS black-box algorithm for more advanced examples see here https://github.com/maxtuno/hess # + colab={"base_uri": "https://localhost:8080/", "height": 263} id="3CHTjOaA965E" outputId="57a18fb9-f89b-44ee-fd1c-f2c616bce260" import satx import numpy as np import matplotlib.pyplot as plt n = 100 data = np.random.logistic(size=(n, 2)) seq = satx.hess_sequence(n, oracle=lambda seq: sum(np.linalg.norm(data[seq[i - 1]] - data[seq[i]]) for i in range(n)), fast=False) x, y = zip(*[data[i] for i in seq + [seq[0]]]) plt.plot(x, y, 'k-') plt.plot(x, y, 'r.') plt.show() # + id="3f8ACnp_JBHb" outputId="6699b42e-654f-4421-dfa0-330710305041" import numpy as np import satx import matplotlib.pyplot as plt n = 10 data = np.random.randint(0, 10, size=(n, 2)) matrix = np.zeros(shape=(n, n)) for i in range(n): for j in range(n): matrix[i][j] = int(np.linalg.norm(data[i] - data[j])) opt = satx.oo() - 1 while True: satx.engine(int(sum(matrix.flatten())).bit_length() + 1, cnf_path='tmp.cnf') x, y = satx.matrix_permutation(matrix.flatten(), n) assert sum(y) < opt if satx.satisfy('slime'): opt = sum(y) print(opt, x, y) a, b = zip(*[data[i.value] for i in x + [x[0]]]) plt.plot(a, b, 'ro') plt.plot(a, b, 'k-') plt.show('tsp.png') plt.close() satx.clear(x) satx.clear(y) else: break # + [markdown] id="3CaSxvbE965I" # # MAGIC SQUARE # # # In recreational mathematics and combinatorial design, a magic square is a $n\times n$ square grid (where n is the number of cells on each side) filled with distinct positive integers in the range # ${1,2,...,n^{2}}$ such that each cell contains a different integer and the sum of the integers in each row, column and diagonal is equal. # # https://en.wikipedia.org/wiki/Magic_square # + colab={"base_uri": "https://localhost:8080/"} id="AyWeR36j965I" outputId="943a5c1c-1f74-44c4-9bad-46c6ce29f797" import satx import numpy as np n = 3 satx.engine(5, cnf_path='tmp.cnf') c = satx.integer() xs = satx.matrix(dimensions=(n, n)) satx.apply_single(satx.flatten(xs), lambda x: x > 0) satx.all_different(satx.flatten(xs)) for i in range(n): assert sum(xs[i][j] for j in range(n)) == c for j in range(n): assert sum(xs[i][j] for i in range(n)) == c assert sum(xs[i][i] for i in range(n)) == c assert sum(xs[i][n - 1 - i] for i in range(n)) == c if satx.satisfy('slime'): print(c) print(np.vectorize(int)(xs)) else: print('Infeasible ...') # + [markdown] id="lEaH-Epz965K" # # SCHUR TRIPLES PROBLEM: # # Input: list of 3N distinct positive integers # # Question: Is there a partition of the list into N triples $(a_i, b_i, c_i)$ such that $a_i+b_i=c_i$ # # The condition that all numbers must be distinct makes the problem very interesting and McDiarmid calls it a surprisingly troublesome. 
# # https://cstheory.stackexchange.com/questions/16253/list-of-strongly-np-hard-problems-with-numerical-data # + colab={"base_uri": "https://localhost:8080/"} id="Jd9xBJiE965K" outputId="0d42ee84-3c69-4d06-f8d4-fcecbea9e3fd" import satx import numpy as np bits = 7 size = 3 * 10 triplets = [] while len(triplets) < size: a = np.random.randint(1, 2 ** bits) b = np.random.randint(1, 2 ** bits) if a != b and a not in triplets and b not in triplets and a + b not in triplets: triplets += [a, b, a + b] triplets.sort() print(triplets) satx.engine(bits=max(triplets).bit_length(), cnf_path='tmp.cnf') xs, ys = satx.permutations(triplets, size) for i in range(0, size, 3): assert ys[i] + ys[i + 1] == ys[i + 2] if satx.satisfy('slime'): for i in range(0, size, 3): print('{} == {} + {}'.format(ys[i + 2], ys[i], ys[i + 1])) else: print('Infeasible ...') # + [markdown] id="ehRx6Mud965M" # # SUBSET SUM PROBLEM # # In this problem, there is a given set with some integer elements. And another some value is also provided, we have to find a subset of the given set whose sum is the same as the given sum value. # # https://en.wikipedia.org/wiki/Subset_sum_problem # + colab={"base_uri": "https://localhost:8080/"} id="l1u8ka7t965M" outputId="8f6d0492-cbfa-46c7-8161-b94b559f9caf" import satx import numpy as np universe = np.random.randint(1, 1000, size=32) t = np.random.randint(min(universe), sum(universe)) print(t, universe) satx.engine(t.bit_length(), cnf_path='tmp.cnf') bits, subset = satx.subsets(universe) assert sum(subset) == t if satx.satisfy('slime'): solution = [universe[i] for i in range(len(universe)) if bits.binary[i]] print(sum(solution), solution) else: print('Infeasible ...') # + [markdown] id="daXn5SYi965Q" # # PERMUTATION RECONSTRUCTION FROM DIFFERENCES # # https://arxiv.org/pdf/1410.6396.pdf # + colab={"base_uri": "https://localhost:8080/", "height": 280} id="YMKxvACO965Q" outputId="f3675001-a4d0-4add-f12f-316e14736041" import satx import numpy as np import matplotlib.pyplot as plt def gen_instance(n): import random y = list(range(1, n + 1)) random.shuffle(y) return [abs(y[i + 1] - y[i]) for i in range(n - 1)] import time start = time.time() times = [] sizes = [] for n in range(1, 30): diffs = gen_instance(n) ini = time.time() satx.engine(n.bit_length() + 1, cnf_path='tmp.cnf') x = satx.vector(size=n) satx.all_different(x) satx.apply_single(x, lambda a: 1 <= a <= n) for i in range(n - 1): assert satx.index(i, diffs) == satx.one_of([x[i + 1] - x[i], x[i] - x[i + 1]]) if satx.satisfy('slime'): end = time.time() - ini xx = [abs(x[i + 1] - x[i]) for i in range(n - 1)] if xx == diffs: sizes.append(n) times.append(end) else: raise Exception('Error!') else: raise Exception('Error!') end = time.time() - start plt.title('TIME {}(s)'.format(end)) plt.plot(sizes, times, 'k-') plt.plot(sizes, times, 'r.') plt.show() plt.close() # + [markdown] id="bJwHMwDc965S" # # HAMILTONIAN CYCLE PROBLEM # # In the mathematical field of graph theory, a Hamiltonian path (or traceable path) is a path in an undirected or directed graph that visits each vertex exactly once. A Hamiltonian cycle (or Hamiltonian circuit) is a Hamiltonian path that is a cycle. Determining whether such paths and cycles exist in graphs is the Hamiltonian path problem, which is NP-complete. 
# # https://en.wikipedia.org/wiki/Hamiltonian_path # + colab={"base_uri": "https://localhost:8080/"} id="UbxWGgHu965S" outputId="c4a43604-2d9d-4565-aecc-a633ab519d85" import sys import satx import numpy as np n = 10 M = np.random.randint(0, 2, size=(n, n)) print(M) satx.engine((n ** 2).bit_length(), cnf_path='tmp.cnf') ids, elements = satx.matrix_permutation((1 - M).flatten(), n) assert sum(elements) == 0 if satx.satisfy('slime'): for i in ids: for j in ids: sys.stdout.write('{} '.format(M[i.value][j.value])) sys.stdout.write('\n') sys.stdout.write('\n') else: print('Infeasible ...') # + [markdown] id="4a2Y3ykM965U" # # BIN PACKING PROBLEM # # In the bin packing problem, items of different volumes must be packed into a finite number of bins or containers each of a fixed given volume in a way that minimizes the number of bins used. In computational complexity theory, it is a combinatorial NP-hard problem. The decision problem (deciding if items will fit into a specified number of bins) is NP-complete. # # https://en.wikipedia.org/wiki/Bin_packing_problem # + colab={"base_uri": "https://localhost:8080/"} id="udwwcxlG965U" outputId="8e7251d2-eff8-4395-e9c6-a1c9b374ff6c" import satx import numpy as np capacity = 50 size = 50 elements = sorted([np.random.randint(1, capacity // 2 - 1) for _ in range(size)], reverse=True) print(capacity) print(elements) bins = int(np.ceil(sum(elements) / capacity)) while True: satx.engine(bits=capacity.bit_length() + 1, cnf_path='tmp.cnf') slots = satx.vector(bits=len(elements), size=bins) for i in range(len(elements)): assert sum(satx.switch(slot, i) for slot in slots) == 1 for slot in slots: assert sum(satx.switch(slot, i) * elements[i] for i in range(len(elements))) <= capacity if satx.satisfy('slime'): print('Solution for {} bins...'.format(bins)) for slot in slots: print(''.join(['_' if boolean else '#' for boolean in slot.binary])) for slot in slots: sub = [item for i, item in enumerate(elements) if not slot.binary[i]] print(sum(sub), sub) break else: print('No solution for {} bins...'.format(bins)) bins += 1 # + [markdown] id="oI6JNjwq965W" # # ZERO-ONE INTEGER PROGRAMMING DEFINITION # # Zero-one integer programming (which can also be written as 0-1 integer programming) is a mathematical method of using a series of binary, yes (1) and no (0) answers to arrive at a solution when there are two mutually exclusive options. # # https://en.wikipedia.org/wiki/Integer_programming # + colab={"base_uri": "https://localhost:8080/"} id="UBSuy8E1965W" outputId="b7b186bc-922d-4ebd-f295-e250e3b74fed" import satx import numpy as np n, m = 10, 5 cc = np.random.randint(0, 1000, size=(n, m)) d = np.dot(cc, np.random.randint(0, 2, size=(m,))) print(cc) print(d) satx.engine(bits=int(np.sum(cc)).bit_length(), cnf_path='tmp.cnf') xs = satx.vector(size=m) satx.all_binaries(xs) assert (np.dot(cc, xs) == d).all() if satx.satisfy('slime'): print(xs) print('Proof:') print(np.dot(cc, xs)) else: print('Infeasible...') # + [markdown] id="Q2UGOuKW965Y" # # n-QUEENS COMPLETION PROBLEM # # The n- Queens Completion problem is a variant, dating to 1850, in which some queens are already placed and the solver is asked to place the rest, if possi- ble. ... The n-Queens problem is to place n chess queens on an n by n chessboard so that no two queens are on the same row, column or diagonal. 
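#
# The row/column/diagonal condition can be stated compactly for a one-queen-per-row
# encoding: if q[i] is the column of the queen in row i, then no two queens attack each
# other iff the values q[i], q[i] + i and q[i] - i are each pairwise distinct (the latter
# two identify the two diagonal directions). Minimal sketch of such a checker;
# `no_two_attack` is a hypothetical helper.

# +
def no_two_attack(q):
    n = len(q)
    cols_ok = len(set(q)) == n                            # distinct columns
    diag1_ok = len(set(q[i] + i for i in range(n))) == n  # distinct "/" diagonals
    diag2_ok = len(set(q[i] - i for i in range(n))) == n  # distinct "\" diagonals
    return cols_ok and diag1_ok and diag2_ok
# -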
# # https://www.ijcai.org/Proceedings/2018/0794.pdf # + colab={"base_uri": "https://localhost:8080/"} id="rECfC3zy965Y" outputId="b0dd0557-cb34-448c-8adc-b93014c652f6" import satx def completion(n, m, seed): import random """ http://www.csplib.org/Problems/prob079/data/queens-gen-fast.py.html """ random.seed(seed) d1 = [0 for _ in range(2 * n - 1)] d2 = [0 for _ in range(2 * n - 1)] valid_rows = [i for i in range(n)] valid_cols = [j for j in range(n)] def no_attack(r, c): return d1[r + c] == 0 and d2[r - c + n - 1] == 0 pc = [] queens_left = n for attempt in range(n * n): i = random.randrange(queens_left) j = random.randrange(queens_left) r = valid_rows[i] c = valid_cols[j] if no_attack(r, c): pc.append([r, c]) d1[r + c] = 1 d2[r - c + n - 1] = 1 valid_rows[i] = valid_rows[queens_left - 1] valid_cols[j] = valid_cols[queens_left - 1] queens_left -= 1 if len(pc) == m: return [[x + 1, y + 1] for x, y in pc] def show(pc): table = '' for i in range(1, n + 1): table += '' for j in range(1, n + 1): if [i, j] not in pc: table += '. ' else: table += 'Q ' table += '\n' print(table) print('# seed = {}'.format(seed)) n, m, seed = 30, 15, 0 placed_queens = completion(n, m, seed) show(placed_queens) satx.engine(bits=n.bit_length() + 1, cnf_path='tmp.cnf') qs = satx.vector(size=n) for (a, b) in placed_queens: assert qs[a - 1] == b - 1 satx.apply_single(qs, lambda x: x < n) satx.apply_dual(qs, lambda x, y: x != y) satx.apply_dual([qs[i] + i for i in range(n)], lambda x, y: x != y) satx.apply_dual([qs[i] - i for i in range(n)], lambda x, y: x != y) if satx.satisfy('slime'): for i in range(n): print(''.join(['Q ' if qs[i] == j else '. ' for j in range(n)])) print('') else: print('Infeasible ...') # + [markdown] id="QJiSiXi_DyFL" # # PARTITION PROBLEM # # n number theory and computer science, the partition problem, or number partitioning, is the task of deciding whether a given multiset $S$ of positive integers can be partitioned into two subsets $S_1$ and $S_2$ such that the sum of the numbers in $S_1$ equals the sum of the numbers in $S_2$. # # https://en.wikipedia.org/wiki/Partition_problem # + colab={"base_uri": "https://localhost:8080/"} id="yZlJB_TdDwyc" outputId="8784d3d9-6301-4ff3-9b05-9bcaa2a65a14" import numpy as np import satx size = 20 data = np.random.randint(1000, size=size) print(data) satx.engine(int(sum(data)).bit_length(), cnf_path='tmp.cnf') T, sub, com = satx.subsets(data, complement=True) assert sum(sub) == sum(com) if satx.satisfy('slime'): sub_ = [data[i] for i in range(size) if T.binary[i]] com_ = [data[i] for i in range(size) if not T.binary[i]] print(sum(sub_), sub_) print(sum(com_), com_) else: print('Infeasible ...') # + [markdown] id="zaw48EMAikEp" # # SUDOKU # # is a logic-based, combinatorial number-placement puzzle. The objective is to fill a 9×9 grid with digits so that each column, each row, and each of the nine 3×3 subgrids that compose the grid (also called "boxes", "blocks", or "regions") contain all of the digits from 1 to 9. The puzzle setter provides a partially completed grid, which for a well-posed puzzle has a single solution. # # Completed games are always an example of a Latin square which include an additional constraint on the contents of individual regions. For example, the same single integer may not appear twice in the same row, column, or any of the nine 3×3 subregions of the 9×9 playing board. 
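#
# The row, column and box conditions described above can be verified on a completed
# grid with a few set comparisons. Minimal sketch for the general base/side grid used
# below (side = base * base); `is_valid_solution` is a hypothetical helper.

# +
def is_valid_solution(grid, base):
    side = base * base
    digits = set(range(1, side + 1))
    rows_ok = all(set(row) == digits for row in grid)
    cols_ok = all(set(col) == digits for col in zip(*grid))
    boxes_ok = all(
        set(grid[r + i][c + j] for i in range(base) for j in range(base)) == digits
        for r in range(0, side, base)
        for c in range(0, side, base)
    )
    return rows_ok and cols_ok and boxes_ok
# -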
# # https://en.wikipedia.org/wiki/Sudoku # + colab={"base_uri": "https://localhost:8080/"} id="Xv2NJptiijYi" outputId="35309dc1-72de-4aef-f28b-91b2e5a017aa" import numpy as np import satx def expand_line(line): return line[0] + line[5:9].join([line[1:5] * (base - 1)] * base) + line[9:13] def show(board): import string line0 = expand_line('╔═══╤═══╦═══╗') line1 = expand_line('║ . │ . ║ . ║') line2 = expand_line('╟───┼───╫───╢') line3 = expand_line('╠═══╪═══╬═══╣') line4 = expand_line('╚═══╧═══╩═══╝') symbol = ' ' + string.printable.replace(' ', '') nums = [[''] + [symbol[n] for n in row] for row in board] print(line0) for r in range(1, side + 1): print("".join(n + s for n, s in zip(nums[r - 1], line1.split('.')))) print([line2, line3, line4][(r % side == 0) + (r % base == 0)]) def generate(base): # pattern for a baseline valid solution def pattern(r, c): return (base * (r % base) + r // base + c) % side # randomize rows, columns and numbers (of valid base pattern) from random import sample def shuffle(s): return sample(s, len(s)) rBase = range(base) rows = [g * base + r for g in shuffle(rBase) for r in shuffle(rBase)] cols = [g * base + c for g in shuffle(rBase) for c in shuffle(rBase)] nums = shuffle(range(1, base * base + 1)) # produce board using randomized baseline pattern board = [[nums[pattern(r, c)] for c in cols] for r in rows] squares = side * side empties = (squares * 3) // 4 for p in map(int, sample(range(squares), empties)): board[p // side][p % side] = 0 show(board) return board base = 4 side = base * base puzzle = np.asarray(generate(base)) satx.engine(side.bit_length(), cnf_path='tmp.cnf') board = np.asarray(satx.matrix(dimensions=(side, side))) satx.apply_single(board.flatten(), lambda x: 1 <= x <= side) for i in range(side): for j in range(side): if puzzle[i][j]: assert board[i][j] == puzzle[i][j] for c, r in zip(board, board.T): satx.all_different(c) satx.all_different(r) for i in range(base): for j in range(base): satx.all_different(board[i * base:(i + 1) * base, j * base:(j + 1) * base].flatten()) if satx.satisfy('slime'): show(np.vectorize(int)(board)) # + [markdown] id="rpW3jjf8edtW" # # MAXIMUM CONSTRAINED PARTITITON # # --- # # # # --- # # # # http://www.csc.kth.se/~viggo/wwwcompendium/node152.html # + colab={"base_uri": "https://localhost:8080/"} id="qyLQYD-je1Jq" outputId="115ed459-c444-44e3-f61e-06c7ed59c64c" import random import satx bits = 10 n = 2 * 100 D = [random.randint(1, 2 ** bits) for _ in range(n)] print('D : {}'.format(D)) satx.engine(sum(D).bit_length(), cnf_path='tmp.cnf') bins, sub, com = satx.subsets(D, n // 2, complement=True) assert sum(sub) == sum(com) if satx.satisfy('slime'): sub = [D[i] for i in range(n) if bins.binary[i]] com = [D[i] for i in range(n) if not bins.binary[i]] print(sum(sub), len(sub), sub) print(sum(com), len(com), com) print('\n') else: print('Infeasible ...') # + [markdown] id="1qFj_5FpCOBt" # # PARALLEL OPTIMIZATION - MIN-MAX SUM SUBSET CARDINALITY # + [markdown] id="6YJtctLJwDqo" # # GRIEWANK FUNCTION # # Ref: https://www.sfu.ca/~ssurjano/griewank.html # + colab={"base_uri": "https://localhost:8080/"} id="cmVv0aqxTm1M" outputId="13ca6f29-d383-4234-cf15-bc953b2c0a0a" import numpy import satx class GriewankFunctionHESS: def __init__(self, a, b): self.a = a self.b = b def oracle(self, xs): return numpy.sum([(x ** 2) / 4000 for x in xs]) - numpy.prod([numpy.cos(x / numpy.sqrt(i + 1)) for i, x in enumerate(xs)]) + 1 def f(self, i, j, xs): xs[i], xs[j] = self.a + xs[i] / self.b, self.a + xs[j] / self.b xs[i:j] = xs[i:j][::-1] def 
g(self, i, j, xs): xs[i:j] = xs[i:j][::-1] xs[i], xs[j] = self.b * xs[i] - self.a, self.b * xs[j] - self.a def log(self, top, opt): print(top) def run(self, n): xs = numpy.random.randint(-600, 600, size=n) return satx.hess_abstract(xs, self.oracle, self.f, self.g, self.log, target=0) n = 100 gf = GriewankFunctionHESS(n ** -2, n ** 2) print(gf.run(n)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # MySQL - Basics # # - [SQL Tutorial](https://www.w3schools.com/sql/default.asp) # - [MySQL Documentation](https://dev.mysql.com/doc/refman/5.7/en/create-table.html) # - [MySQL - Tutorial](https://www.tutorialspoint.com/mysql/index.htm) # - [Quickstart for Cloud SQL for MySQL](https://cloud.google.com/sql/docs/mysql/quickstart) # ## Load ipython-sql # # `ipython-sql`: # # - 是jupyter notebook的extension,用來擴充jupyter對SQL的支援 # - 其底層是使用SQLAlchemy # %load_ext sql # for engines that do not support autocomit # %config SqlMagic.autocommit=False # ## Connect Database # # Because `ipython-sql` is based on `SQLAlchemy`, # we use the SQLAlchemy's DBAPI to connect the MySQL database via the mysqlclient # (maintained fork of MySQL-Python) driver. # # [SQLAlchemy - MySQL DBAPI](https://docs.sqlalchemy.org/en/13/dialects/mysql.html#module-sqlalchemy.dialects.mysql.pymysql) # # # ```bash # mysql+mysqldb://:@[:]/ # ``` # %sql mysql+mysqldb://root:abc123456@192.168.3.11/kaka_test # + language="sql" # # SELECT * FROM entries; # - # ## MySQL Version # %sql SHOW VARIABLES LIKE '%version%'; # ## Create Table # # - [MySQL - Data Type](https://www.tutorialspoint.com/mysql/mysql-data-types.htm) # + language="sql" # CREATE TABLE persons( # PRIMARY KEY (person_id), # person_id INT NOT NULL AUTO_INCREMENT, # firstname VARCHAR(255) NOT NULL, # lastname VARCHAR(255), # age INT, # height FLOAT, # weight FLOAT, # city VARCHAR(255) # ); # - # ## CRUD for Data # # - C: Create # - R: Read # - U: Update # - D: Delete # # ![](https://raw.githubusercontent.com/kaka-lin/Notes/master/DB/images/crud.png) # ### Create Data: SQL INSERT INTO # + language="sql" # INSERT INTO persons # VALUES (10, 'kaka','Lin', 28, 175, 70, 'Taipei'); # # INSERT INTO persons (firstname, lastname, age, height, weight, city) # VALUES ('kiwi','Li', 30, 173, 70, 'Taipei'); # - # ### Read Data: SQL SELECT # + language="sql" # # SELECT * FROM persons; # - # ### Update Data: SQL UPDATE # + language="sql" # # UPDATE persons # SET weight = 68 # WHERE firstname = 'kaka'; # + language="sql" # # SELECT * FROM persons; # - # ### Delete Data: SQL DELETE # # Before we delete data, # we first add the data that we want to delete. 
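#
# A general safeguard when deleting: run a SELECT with the same WHERE clause first to
# preview exactly which rows the DELETE will remove. A small sketch against the same
# `persons` table and connection used in this notebook; the `person_id` value is only
# an illustration.

# %sql SELECT * FROM persons WHERE person_id = 3;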
# + language="sql" # # INSERT INTO persons # VALUES (3, 'albert','Lin', 28, 180, 70, 'Taipei'); # + language="sql" # # SELECT * FROM persons; # + language="sql" # # DELETE FROM persons # WHERE person_id = 3; # + language="sql" # # SELECT * FROM persons; # - # ## SQL WHERE # + language="sql" # # INSERT INTO persons (firstname, lastname, age, height, weight, city) # VALUES ('Albert', 'Lin', 28, 160, 70, 'Taipei'), # ('Andy', 'Wei', 24, 175, 72, 'Teipei'), # ('kevin', 'Wang', 30, 174, 63, 'San Francisco'), # ('kevin', 'Wei', 27, 178, 65, 'Taipei'), # ('David', 'Kang', 26, 175, 65, 'Washington'), # ('Matt', 'Wang', 26, 172, 72, 'Taipei'), # ('kaka-ideal', 'Lin', 28, 178, 70, 'Janpan'); # + language="sql" # # SELECT * FROM persons # + language="sql" # # SELECT * # FROM persons # WHERE age = 28; # - # ## SQL AND, OR and NOT # ### AND # + language="sql" # # SELECT * # FROM persons # WHERE age = 28 # AND height > 170; # - # ### OR # + language="sql" # # SELECT * # FROM persons # WHERE age = 28 # OR height > 170; # - # #### SQL IN Operator # # The IN operator allows you to specify multiple values in a WHERE clause. # + language="sql" # # SELECT * # FROM persons # WHERE age = 28 OR age = 26; # + language="sql" # # SELECT * # FROM persons # WHERE age IN (26, 28); # - # ### Not # + language="sql" # # SELECT * # FROM persons # WHERE age != 28; # - # ## SQL ORDER BY # # ``` # SELECT column1, column2, ... # FROM table_name # ORDER BY column1, column2, ... ASC|DESC; # ``` # # - Default: ASC # + language="sql" # # SELECT * # FROM persons # ORDER BY age; # - # ## SQL LIKE Operator # # The LIKE operator is used in a WHERE clause to search for a specified pattern in a column. # # There are two wildcards often used in conjunction with the LIKE operator: # # - % : The percent sign represents zero, one, or multiple characters # - _ : The underscore represents a single character # + language="sql" # # SELECT * # FROM persons # WHERE city LIKE '%pei%'; # - # ## MySQL - Functions # # - [MySQL Function](https://www.w3schools.com/sql/sql_ref_mysql.asp) # # ### Modify Table: SQL ALTER TABLE # # - Add a new column in an existing table # # ``` # ALTER TABLE table_name ADD column_name datatype; # ``` # + language="sql" # # ALTER TABLE persons # ADD height_meters REAL; # # UPDATE persons # SET height_meters = round(height / 100, 2); # # SELECT * FROM persons; # - # ## Drop Table # + language="sql" # # DROP TABLE persons; # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import re file_path = './aw_or_links.csv' regex = re.compile('\/content\/River\/detail\/id\/(?P\d+)\/') df = pd.read_csv(file_path) df.head() df = df[df.URL.apply(lambda val: regex.match(val) is not None)].copy() df.head() print(list(df.URL.apply(lambda val: regex.match(val).group(1)).unique())) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import cv2 import numpy as np from matplotlib import pyplot as plt # + #Created by On Jupyter Notebook # - def pixelVal(pix, r1, s1, r2, s2): if (0 <= pix and pix <= r1): return (s1 / r1)*pix elif (r1 < pix and pix <= r2): return ((s2 - s1)/(r2 - r1)) * (pix - r1) + s1 else: return ((255 - s2)/(255 - r2)) * (pix - r2) + s2 img = cv2.imread('test2.jpg',0) r1 = 
70 s1 = 0 r2=140 s2 = 255 pixelVal_vec = np.vectorize(pixelVal) contrast_stretched = pixelVal_vec(img, r1, s1, r2, s2) equ=np.hstack((img,contrast_stretched)) plt.title("Original/Contrast Stretched") plt.imshow(equ,'gray') plt.show() # + histr = cv2.calcHist([img],[0],None,[256],[0,256]) # show the plotting graph of an image plt.plot(histr) plt.show() # - stretch_near = cv2.resize(img, (3000,3000), interpolation = cv2.INTER_NEAREST) plt.title('original') plt.imshow(img,'gray') plt.show() plt.title('Zoomed Image') plt.imshow(stretch_near,'gray') plt.show() img = cv2.imread('test1.jpg',0) equ = cv2.equalizeHist(img) res = np.hstack((img, equ)) plt.title("original/enhaced stacked side by side") plt.imshow(res,'gray') plt.show() image=cv2.imread("test.jpg",0) x,y=image.shape th=np.sum(image)/(x*y) binary=np.zeros((x,y),np.double) binary=(image>=th)*255 binary=binary.astype(np.uint8) plt.title("original/binary") equ=np.hstack((image,binary)) plt.imshow(equ,'gray') plt.show() image=cv2.imread('test2.jpg',0) x,y=image.shape z=np.zeros((x,y)) for i in range(0,x): for j in range(0,y): if(image[i][j]>50 and image[i][j]<150): z[i][j]=255 else: z[i][j]=image[i][j] equ=np.hstack((image,z)) plt.title('Original\Graylevel slicing with background') plt.imshow(equ,'gray') plt.show() image=cv2.imread('test2.jpg',0) x,y=image.shape z=np.zeros((x,y)) for i in range(0,x): for j in range(0,y): if(image[i][j]>50 and image[i][j]<150): z[i][j]=255 else: z[i][j]=0 equ=np.hstack((image,z)) plt.title('Original\Graylevel slicing w/o background') plt.imshow(equ,'gray') plt.show() image=cv2.imread('test.jpg',0) x,y=image.shape z=255-image equ=np.hstack((image,z)) plt.title('Original\Image Negative') plt.imshow(equ,'gray') plt.show() image=cv2.imread('test.png',0) x,y=image.shape c=255/(np.log(1+np.max(image))) z=c*np.log(1+image) z=np.array(z,dtype=np.uint8) equ=np.hstack((image,z)) plt.title('Log Transformation') plt.imshow(equ,'gray') plt.show() img=cv2.imread('saltandpep.jpg',0) x,y=img.shape z=cv2.blur(img,(3,3)) z1=cv2.blur(img,(5,5)) equ=np.hstack((img,z)) plt.title('original/Averaging filter3X3') plt.imshow(equ,'gray') plt.show() equ=np.hstack((img,z1)) plt.title('original/averging filter 5X5') plt.imshow(equ,'gray') plt.show() z=cv2.medianBlur(img,5) equ=np.hstack((img,z)) plt.title('original/Median Blur') plt.imshow(equ,'gray') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import keras import numpy as np import pandas as pd nRowsRead = 194354 # specify 'None' if want to read whole file # deliveries.csv has 150460 rows in reality, but we are only loading/previewing the first 1000 rows df = pd.read_csv('all_matches.csv', delimiter=',', nrows = nRowsRead) df.dataframeName = 'deliveries.csv' nRow, nCol = df.shape print(f'There are {nRow} rows and {nCol} columns') df.head(10) from mpl_toolkits.mplot3d import Axes3D from sklearn.preprocessing import StandardScaler import matplotlib.pyplot as plt # plotting import os # accessing directory structure df.info() df.isnull().sum() s = pd.value_counts(df['wides']) s1 = pd.Series({'nunique': len(s), 'unique values': s.index.tolist()}) s.append(s1) s = pd.value_counts(df['noballs']) s1 = pd.Series({'nunique': len(s), 'unique values': s.index.tolist()}) s.append(s1) s = pd.value_counts(df['byes']) s1 = pd.Series({'nunique': len(s), 'unique values': s.index.tolist()}) 
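# NOTE (descriptive comment; assumption about the installed pandas version): the
# `s.append(s1)` pattern used in the following lines relies on `Series.append`, which is
# deprecated since pandas 1.4 and removed in pandas 2.0. On newer pandas an equivalent
# summary can be produced with `pd.concat([s, s1])`; the original calls are kept
# unchanged here.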
s.append(s1) s = pd.value_counts(df['legbyes']) s1 = pd.Series({'nunique': len(s), 'unique values': s.index.tolist()}) s.append(s1) s = pd.value_counts(df['penalty']) s1 = pd.Series({'nunique': len(s), 'unique values': s.index.tolist()}) s.append(s1) s = pd.value_counts(df['wicket_type']) s1 = pd.Series({'nunique': len(s), 'unique values': s.index.tolist()}) s.append(s1) #player_dismissed s = pd.value_counts(df['player_dismissed']) s1 = pd.Series({'nunique': len(s), 'unique values': s.index.tolist()}) s.append(s1) #other_wicket_type s = pd.value_counts(df['other_wicket_type']) s1 = pd.Series({'nunique': len(s), 'unique values': s.index.tolist()}) s.append(s1) #other_player_dismissed s = pd.value_counts(df['other_player_dismissed']) s1 = pd.Series({'nunique': len(s), 'unique values': s.index.tolist()}) s.append(s1) import seaborn as sns plt.rcParams['figure.figsize']=70,54 sns.countplot(x=df['striker'], hue='extras', data=df, linewidth=7, palette = "Set2") plt.ylabel('Number of matches') plt.xticks(rotation=90) plt.xlabel('striker') plt.title('striker V/s extras',size=50, fontweight="bold") sns.countplot(x=df['season'], hue='extras', data=df, linewidth=7, palette = "Set2") plt.ylabel('Number of matches') plt.xticks(rotation=90) plt.xlabel('year') plt.title('Season V/s extras',size=50, fontweight="bold") sns.countplot(x=df['season'], hue='wides', data=df, linewidth=7, palette = "Set2") plt.ylabel('Number of matches') plt.xticks(rotation=90) plt.xlabel('year') plt.title('Season V/s Wides',size=50, fontweight="bold") sns.countplot(x=df['season'], hue='legbyes', data=df, linewidth=7, palette = "Set2") plt.ylabel('Number of matches') plt.xticks(rotation=90) plt.xlabel('year') plt.title('Season V/s legbyes',size=50, fontweight="bold") sns.countplot(x=df['season'], hue='byes', data=df, linewidth=7, palette = "Set2") plt.ylabel('Number of matches') plt.xticks(rotation=90) plt.xlabel('year') plt.title('Season V/s byes',size=50, fontweight="bold") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #挑战性练习 import random,math def menu(): print('''=====游戏菜单===== 1.游戏说明 2.开始游戏 3.退出游戏 4.制作团队 =====游戏菜单=====''') def welcome(): print('''=====欢迎聪明的你来到天下第一好玩的猜数游戏===== (^O^) =====欢迎聪明的你来到天下第一好玩的猜数游戏=====''') def win(): print('''=====恭喜你!你赢了!===== ((`'-"` `""-'`))    ) -  -  (   /  (o _ o)  \   \  ( 0 )  /  _'-.._ '=' _..-'_ /`;#'#'#. -. #'#'#;`\ \_))   '#'   ((_/ #.  ☆ ☆ ☆  .# '#.  送你一只小熊熊  .#'  /'#.     .#'\  _\\'#.   .#'//_   (((___)'#'(___ =====恭喜你!你赢了!=====''') def lose(): print('''=====输了吧?哈哈哈哈哈哈哈哈===== │\__╭╭╭╭╭__/│ │           │ │           │ │ >       ● │ │≡  ╰┬┬┬╯  ≡│ │    ╰—╯    │ ╰——┬o———o┬——╯    │就知道你猜不出来│    ╰┬———┬╯ =====输了吧?哈哈哈哈哈哈哈哈=====''') def game_over(): print('''=====GG===== ,------. `-____-' ,-----------. ,i--i. | | / @ @ \ / haha! | | -.__.- | ___-' J \. ,/ """"""""""""""""""' ,\""""/. ,' `--' `. (_,i' `i._) | | | ,. 
| | | | | `-' `-' =====GG=====''') def team(): print('''=====此游戏由python小分队倾力奉献=====   ∧_∧    ∧_∧   ∧_∧    ∧_∧    (^ .^)  (^ 、^)  (^ 0^)  (^ Д^) ----∪-∪-------∪-∪-------∪-∪-------∪-∪--- =====此游戏由python小分队倾力奉献=====''') def guess_game(): number=int(input('输入请计算机猜的整数')) n=int(input('请输入这个整数的上界')) max_times=math.ceil(math.log(n,2)) guess_times=1 while guess_times<=max_times: guess_times+=1 guess=int(input('请输入你猜测的整数')) print('一共可以猜',max_times,'次') print('计算机已经猜了',guess_times,'次') if guess==number: win() print('神秘整数是:',guess) print('计算机比标准次数少',max_times-guess_times,'次') break elif guess>number: print('计算机,你猜大了') else: print('计算机,你猜小了') else: lose() print('神秘整数是:',number) def main(): while True: menu() choice=int(input('请输入你的选择')) if choice==1: welcome() elif choice==2: guess_game() elif choice==3: game_over() break else: team() if __name__=='__main__': main() # - #练习3 def Sum(): a=random.randint(1,9) n=int(input('输入你想要相加的个数:')) i=0 total=0 while i\u0144ski", "photoUrl": "", "userId": "07435370255154987946"}} # !pip install --upgrade tables # !pip install eli5 # !pip install xgboost # + id="LqBbntd40Dfj" colab_type="code" colab={} import pandas as pd import numpy as np from sklearn.dummy import DummyRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor import xgboost as xgb from sklearn.metrics import mean_absolute_error as mae from sklearn.model_selection import cross_val_score, KFold import eli5 from eli5.sklearn import PermutationImportance # + id="ixqHmpk70_QZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="f5d15b78-3774-4f66-9265-9a4710d7f4d2" executionInfo={"status": "ok", "timestamp": 1583414778905, "user_tz": -60, "elapsed": 3061, "user": {"displayName": "", "photoUrl": "", "userId": "07435370255154987946"}} df = pd.read_hdf('data/car.h5') df.shape # + [markdown] id="DeaI7Pj52eZz" colab_type="text" # ##Feature Engineering # + id="aFxVj4fr2hrN" colab_type="code" colab={} SUFFIX_CAT = '__cat' for feat in df.columns: if isinstance(df[feat][0], list):continue factorized_values = df[feat].factorize()[0] if SUFFIX_CAT in feat: df[feat] = factorized_values else: df[feat + SUFFIX_CAT] = factorized_values # + id="5ewQj23e2-jx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="1ecd9fb5-d623-41d9-930d-2a25045308a6" executionInfo={"status": "ok", "timestamp": 1583414788078, "user_tz": -60, "elapsed": 988, "user": {"displayName": "", "photoUrl": "", "userId": "07435370255154987946"}} cat_feats = [x for x in df.columns if SUFFIX_CAT in x] cat_feats = [x for x in cat_feats if 'price' not in x] len(cat_feats) # + id="Bn8RvPjF3bLg" colab_type="code" colab={} def run_model(model, feats): X = df[feats].values y = df['price_value'].values scores = cross_val_score(model, X, y, cv=3, scoring="neg_mean_absolute_error") return np.mean(scores), np.std(scores) # + [markdown] id="Zuu6SzBZ4mYO" colab_type="text" # ## DecisionTree # + id="S3PLuFBd3fYj" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="3a8a2771-6558-4efe-acfa-39a0da19aefe" executionInfo={"status": "ok", "timestamp": 1583414915896, "user_tz": -60, "elapsed": 5735, "user": {"displayName": "\u0144ski", "photoUrl": "", "userId": "07435370255154987946"}} run_model(DecisionTreeRegressor(max_depth=5), cat_feats) # + [markdown] id="20odFAfd4vnm" colab_type="text" # ## Random Forest # + id="yaTW1y-X4vAM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} 
outputId="a39056b9-1148-49f6-e912-b0c9d0dd51d8" executionInfo={"status": "ok", "timestamp": 1583415140145, "user_tz": -60, "elapsed": 103627, "user": {"displayName": "\u0144ski", "photoUrl": "", "userId": "07435370255154987946"}} model = RandomForestRegressor(max_depth=5, n_estimators=50, random_state=0) run_model( model, cat_feats) # + [markdown] id="ar1myUT85G1S" colab_type="text" # XGBoost # + id="lG4KR7Dk4hxb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="8c393ba2-fe82-403f-ad2b-888cb9fb8aec" executionInfo={"status": "ok", "timestamp": 1583415847313, "user_tz": -60, "elapsed": 59357, "user": {"displayName": "0144ski", "photoUrl": "", "userId": "07435370255154987946"}} xgb_params = { 'max_depth': 5, 'n_estimators': 50, 'learning_rate': 0.1, 'seed': 0 } run_model(xgb.XGBRegressor(**xgb_params), cat_feats) # + id="Xg6GbfD85sYk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 438} outputId="d05f9053-9de3-4611-9fac-3ec19548c47b" executionInfo={"status": "ok", "timestamp": 1583416346209, "user_tz": -60, "elapsed": 361125, "user": {"displayName": "0144ski", "photoUrl": "", "userId": "07435370255154987946"}} m = xgb.XGBRegressor(max_depth=5, n_estimators=50, learning_rate=0.1, seed=0) m.fit(X, y) imp = PermutationImportance(m, random_state=0).fit(X,y) eli5.show_weights(imp, feature_names = cat_feats) # + id="V1wrz-078oNZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="f2a2b126-5a35-4202-df60-e7b487149fd3" executionInfo={"status": "ok", "timestamp": 1583416977374, "user_tz": -60, "elapsed": 13909, "user": {"displayName": "0144ski", "photoUrl": "", "userId": "07435370255154987946"}} feats = ['param_napęd__cat','param_rok-produkcji__cat','param_stan__cat','param_skrzynia-biegów__cat','param_faktura-vat__cat','param_moc__cat','param_marka-pojazdu__cat','feature_kamera-cofania__cat','param_typ__cat','param_pojemność-skokowa__cat','seller_name__cat','feature_wspomaganie-kierownicy__cat','param_model-pojazdu__cat','param_wersja__cat','param_kod-silnika__cat','feature_system-start-stop__cat','feature_asystent-pasa-ruchu__cat','feature_czujniki-parkowania-przednie__cat','feature_łopatki-zmiany-biegów__cat','feature_regulowane-zawieszenie__cat'] run_model(xgb.XGBRegressor(**xgb_params), feats) # + id="9Fc6jLED_KXx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="c9676aca-465e-4528-a255-34bd8af9629e" executionInfo={"status": "ok", "timestamp": 1583417485998, "user_tz": -60, "elapsed": 14607, "user": {"displayName": "\u0144ski", "photoUrl": "", "userId": "07435370255154987946"}} df['param_rok_produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x)) feats = ['param_napęd__cat','param_rok-produkcji','param_stan__cat','param_skrzynia-biegów__cat','param_faktura-vat__cat','param_moc__cat','param_marka-pojazdu__cat','feature_kamera-cofania__cat','param_typ__cat','param_pojemność-skokowa__cat','seller_name__cat','feature_wspomaganie-kierownicy__cat','param_model-pojazdu__cat','param_wersja__cat','param_kod-silnika__cat','feature_system-start-stop__cat','feature_asystent-pasa-ruchu__cat','feature_czujniki-parkowania-przednie__cat','feature_łopatki-zmiany-biegów__cat','feature_regulowane-zawieszenie__cat'] run_model(xgb.XGBRegressor(**xgb_params), feats) # + id="dhYiY-_7_pR3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="4d71d30e-04d4-4660-8724-0e0de9a6545e" executionInfo={"status": "ok", 
"timestamp": 1583417587345, "user_tz": -60, "elapsed": 14394, "user": {"displayName": "ali\u0144ski", "photoUrl": "", "userId": "07435370255154987946"}} df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]) ) # + id="hXi5cIxCAsTN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="fcd9eb97-d89c-4f7c-840f-c2c0b8e2a6b2" executionInfo={"status": "ok", "timestamp": 1583417758419, "user_tz": -60, "elapsed": 14244, "user": {"displayName": "", "photoUrl": "", "userId": "07435370255154987946"}} df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else (int(str(x).split(' ')[0]) ) ) df['param_rok_produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x)) feats = ['param_napęd__cat','param_rok-produkcji','param_stan__cat','param_skrzynia-biegów__cat','param_faktura-vat__cat','param_moc','param_marka-pojazdu__cat','feature_kamera-cofania__cat','param_typ__cat','param_pojemność-skokowa__cat','seller_name__cat','feature_wspomaganie-kierownicy__cat','param_model-pojazdu__cat','param_wersja__cat','param_kod-silnika__cat','feature_system-start-stop__cat','feature_asystent-pasa-ruchu__cat','feature_czujniki-parkowania-przednie__cat','feature_łopatki-zmiany-biegów__cat','feature_regulowane-zawieszenie__cat'] run_model(xgb.XGBRegressor(**xgb_params), feats) # + id="Ot4yxci1C4eX" colab_type="code" colab={} df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(str(x).split('cm')[0].replace(' ','') ) ) # + id="m7-mi5MjDvoa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 111} outputId="7cbacfd0-0963-4c38-9a6b-50e1fea8d91a" executionInfo={"status": "ok", "timestamp": 1583418246548, "user_tz": -60, "elapsed": 14013, "user": {"displayName": "\u0144ski", "photoUrl": "", "userId": "07435370255154987946"}} feats = ['param_napęd__cat','param_rok-produkcji','param_stan__cat','param_skrzynia-biegów__cat','param_faktura-vat__cat','param_moc','param_marka-pojazdu__cat','feature_kamera-cofania__cat','param_typ__cat','param_pojemność-skokowa','seller_name__cat','feature_wspomaganie-kierownicy__cat','param_model-pojazdu__cat','param_wersja__cat','param_kod-silnika__cat','feature_system-start-stop__cat','feature_asystent-pasa-ruchu__cat','feature_czujniki-parkowania-przednie__cat','feature_łopatki-zmiany-biegów__cat','feature_regulowane-zawieszenie__cat'] run_model(xgb.XGBRegressor(**xgb_params), feats) # + id="1ApaREOpFM4m" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="2a971a9d-590b-4213-b82a-9219725f5b3f" executionInfo={"status": "ok", "timestamp": 1583418416824, "user_tz": -60, "elapsed": 3103, "user": {"displayName": "\u0144ski", "photoUrl": "", "userId": "07435370255154987946"}} # + id="PKFzW9M8F4yR" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:anaconda] # language: python # name: conda-env-anaconda-py # --- # ## Westeros Tutorial Part III - Introducing emission taxes # # In the second part, we showed how to introduce emissions into a stylized energy systems model, and what happens if you put a constraint on total CO2 emissions. # # Now, we will tackle the complementary policy to emissions constraints, namely emissions taxes. 
# + import pandas as pd import ixmp as ix import message_ix from message_ix.utils import make_df # %matplotlib inline # - mp = ix.Platform(dbtype='HSQLDB') model = 'Westeros Electrified' base = message_ix.Scenario(mp, model=model, scenario='baseline') # ## Load the scenario with an emission bound and look at the result in more detail scen_bd = message_ix.Scenario(mp, model=model, scenario='emission_bound') emissions = scen_bd.var('EMISS', {'node': 'Westeros'}) emissions emission_prices = scen_bd.var('PRICE_EMISSION') emission_prices # When setting a cumlulative bound, the optimization model choses an emission trajectory that pushes the cost towards the end of the model horizon. As a consequence, the shadow price or dual variable of the constraint increase exponentially at the discount rate. # ## Make a new scenario with emission bounds by year # # In the previous example, we imposed a bound on emissions over the entire model horizon by using the `type_year 'cumulative'`. Now, we will create a similar scenario, but the constraint will be defined per year. # # For the sake of comparison, the per-year emission values will be chosen exactly in line with the optimal emission trajectory from the previous scenario. scen_bd_by_year = base.clone(model, 'carbon_bound_by_year','introducing a carbon tax', keep_sol=False) scen_bd_by_year.check_out() scen_bd_by_year.add_set('emission', 'CO2') scen_bd_by_year.add_cat('emission', 'GHG', 'CO2') scen_bd_by_year.add_par('emission_factor', scen_bd.par('emission_factor')) # + base_bd_emission = { 'node': 'Westeros', 'type_year': [700,710,720], 'type_tec': 'all', 'unit': 'tCO2', 'type_emission': 'GHG', 'value': emissions.lvl } bd_emission = make_df(base_bd_emission) scen_bd_by_year.add_par('bound_emission', bd_emission) # - scen_bd_by_year.commit(comment='emission bound by year') scen_bd_by_year.solve() emission_prices_by_year = scen_bd_by_year.var('PRICE_EMISSION') emission_prices_by_year # Comparing the emission prices between the two scenarios, we see that the values are identical in the year 710 and close in the year 720. However, the bound in the year 700 is not binding, so the shadow price is 0 (and is not shown here). # ## Setting an emissions tax instead of a bound # # Again, we choose the emissions prices from the first example (with a cumulative bound) as the tax level over time. scen_tax = base.clone(model, 'carbon_tax','introducing a carbon tax', keep_sol=False) scen_tax.check_out() scen_tax.add_set('emission', 'CO2') scen_tax.add_cat('emission', 'GHG', 'CO2') scen_tax.add_par('emission_factor', scen_bd.par('emission_factor')) # + scen_tax.add_set('type_year', [700,710,720]) base_tax_emission = { 'node': 'Westeros', 'type_year': [700,710,720], 'type_tec': 'all', 'unit': 'tCO2', 'type_emission': 'GHG', 'value': emission_prices.lvl } tax_emission = make_df(base_tax_emission) scen_tax.add_par('tax_emission', tax_emission) # - scen_tax.commit(comment='setting taxes on emissions') scen_tax.solve() scen_tax.var('EMISS', {'node': 'Westeros'}) # Comparing the emissions trajectory in the tax scenario to the outcome in the cumulative budget constraint scenario, we notice that the values in the years 700 and 720 are identical, but the value in 710 is different. # # This is the flip side of having an identical shadow price on the constraint in the two previous examples - at that price, the costs between wind and coal (with the tax) are exactly equal, hence the optimal solution is not unique. # # This is usually only an issue in small, stylized problems... 
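#
# To make the observation above concrete, that under a cumulative bound the shadow
# price grows exponentially at the discount rate: the dual value in a later year should
# be roughly the earlier year's value compounded over the intervening periods. A
# minimal sketch with placeholder numbers (a hypothetical 5% rate and a made-up price);
# in practice the rate is the one used in the baseline scenario and the prices come
# from `emission_prices.lvl`.

# +
drate = 0.05      # assumed discount rate, not read from the scenario
price_710 = 10.0  # placeholder for PRICE_EMISSION in year 710
price_720_expected = price_710 * (1 + drate) ** (720 - 710)
price_720_expected
# -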
mp.close_db() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import sys import matplotlib.pyplot as plt # %matplotlib inline import pandas as pd import tensorflow as tf import numpy as np import scipy as sp import sklearn as sk sys.argv=['/usr/bin/python',] import config import main import train import generator # + from importlib import reload reload(model) tf.reset_default_graph() # - main.main() (X_variable, Y_variable, pred, loss, final_loss, gene_vars)=model.create_model() Y_variable tf.trainable_variables() tf.trainable_variables() train_minimize, learning_rate, global_step = model.create_optimizers(final_loss) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # + import numpy as np from scipy.ndimage import gaussian_filter import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.colors as colors from astropy.io import fits # - plt.style.use("default") plt.rcParams["savefig.dpi"] = 100 plt.rcParams["figure.dpi"] = 100 plt.rcParams["font.size"] = 16 plt.rcParams["font.family"] = "sans-serif" plt.rcParams["font.sans-serif"] = ["Liberation Sans"] plt.rcParams["font.cursive"] = ["Liberation Sans"] plt.rcParams["mathtext.fontset"] = "custom" get_ipython().magic('config InlineBackend.figure_format = "retina"') # + with fits.open("/data/rv_uncertainty_grid.fits") as f: hdr = f[0].header mu = f[1].data sigma = f[2].data color_bins = np.linspace(hdr["MIN_COL"], hdr["MAX_COL"], hdr["NUM_COL"] + 1) mag_bins = np.linspace(hdr["MIN_MAG"], hdr["MAX_MAG"], hdr["NUM_MAG"] + 1) ivar = 1.0 / sigma ** 2 # - mu_smooth = gaussian_filter(np.mean(mu, axis=-1), (1, 0.8)) # + plt.figure(figsize=(7, 5)) # plt.pcolor(color_bins, mag_bins, np.mean(mu, axis=-1)) c = plt.contourf( 0.5 * (color_bins[:-1] + color_bins[1:]), 0.5 * (mag_bins[:-1] + mag_bins[1:]), np.mean(mu, axis=-1), levels=15 ) plt.contour( 0.5 * (color_bins[:-1] + color_bins[1:]), 0.5 * (mag_bins[:-1] + mag_bins[1:]), mu_smooth, colors="k", linestyles="solid", linewidths=0.5, levels=15 ) plt.colorbar(c, label=r"$\ln(\sigma_\mathrm{rv})$ [km/s]") plt.ylim(plt.ylim()[::-1]) plt.ylabel("$m_\mathrm{G}$") plt.xlabel("$G_\mathrm{BP}-G_\mathrm{RP}$"); # + for m in range(len(color_bins) - 1): plt.plot( 0.5 * (mag_bins[1:] + mag_bins[:-1]), np.mean(mu[:, m], axis=-1), ":", lw=0.5, color=plt.cm.viridis(m / (len(color_bins) - 1))) plt.errorbar( 0.5 * (mag_bins[1:] + mag_bins[:-1]), mu_smooth[:, m], yerr=np.std(mu[:, m], axis=-1), fmt=".-", color=plt.cm.viridis(m / (len(color_bins) - 1)), label="BP$-$RP = {0:.3f}".format(0.5*(color_bins[m] + color_bins[m + 1]))) plt.ylabel(r"$\ln (\sigma_\mathrm{rv})$ [km/s]") plt.xlabel("$m_\mathrm{G}$") ylim = plt.ylim() ax2 = plt.gca().twinx() ax2.set_ylim(np.exp(ylim)) ax2.set_yscale("log") ax2.yaxis.set_major_formatter(mpl.ticker.ScalarFormatter()) ax2.yaxis.set_minor_formatter(mpl.ticker.ScalarFormatter()) ax2.tick_params(axis='both', which='major', labelsize=10) ax2.tick_params(axis='both', which='minor', labelsize=8) ax2.set_ylabel(r"$\sigma_\mathrm{rv}$ [km/s]"); # + for n in range(len(mag_bins) - 1): plt.plot( 0.5 * (color_bins[1:] + color_bins[:-1]), np.mean(mu[n], axis=-1), ":", lw=0.5, color=plt.cm.viridis(n / (len(mag_bins) - 
1))) plt.errorbar( 0.5 * (color_bins[1:] + color_bins[:-1]), mu_smooth[n], yerr=np.std(mu[n], axis=-1), fmt=".-", color=plt.cm.viridis(n / (len(mag_bins) - 1)), label="$m_G$ = {0:.3f}".format(0.5*(mag_bins[n] + mag_bins[n + 1]))) plt.ylabel(r"$\ln(\sigma_\mathrm{rv})$ [km/s]") plt.xlabel("$G_\mathrm{BP}-G_\mathrm{RP}$") # plt.legend(fontsize=10) ax2 = plt.gca().twinx() ax2.set_ylim(np.exp(ylim)) ax2.set_yscale("log") ax2.yaxis.set_major_formatter(mpl.ticker.ScalarFormatter()) ax2.yaxis.set_minor_formatter(mpl.ticker.ScalarFormatter()) ax2.tick_params(axis='both', which='major', labelsize=10) ax2.tick_params(axis='both', which='minor', labelsize=8) ax2.set_ylabel(r"$\sigma_\mathrm{rv}$ [km/s]"); # - plt.hist(np.std(mu, axis=-1).flatten()) plt.xlabel("scatter on the RV uncertainty within a bin"); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Review inputs # # This `Python 3` notebook checks the setup for each of the games in the experiment, # to make sure that the social network, clues, and clue assignments reflect # the desired experimental condition. # # + # %pylab inline import networkx as nx import json import numpy as np import pandas as pd from collections import Counter import language_check import re import string from collections import Counter # - # ## Import and describe Experiment # + experiment_filepath = "games/prereg_20200702_170602.json" with open(experiment_filepath, 'r') as f: experiment = json.load(f) describe = {k: experiment[k] for k in set(experiment.keys())-{"games"}} describe['n_games'] = len(experiment["games"]) describe # - for game_id, game in experiment['games'].items(): print(game_id) # ## check network structure # # + for game_id, game in experiment['games'].items(): g = nx.from_dict_of_lists(game['neighbors']) #nx.draw_networkx(g) # check that each panel has two components assert(nx.number_connected_components(g)==2) # check that each component has nodes of the same names c1, c2 = [g.subgraph(c) for c in nx.connected_components(g)] assert(set([n[1:] for n in c1.nodes()]) == set([n[1:] for n in c2.nodes()])) # check that each component has the right number of nodes assert(nx.number_of_nodes(c1)==describe['n_players']) assert(nx.number_of_nodes(c2)==describe['n_players']) # check that each node in each component has matching neighbors for n in c1: n2 = 't'+n[1:] if n[0] == 'c' else 'c'+n[1:] assert(set([nb[1:] for nb in c1.neighbors(n)]) == set([nb[1:] for nb in c2.neighbors(n2)])) # check that each node has the number of neighbors listed in the game heading assert(all([v== describe['deg'] for k, v in g.degree()])) print('pass') # - # spot check game_id = list(experiment['games'].keys())[0] game = experiment['games'][game_id] g = nx.from_dict_of_lists(game['neighbors']) nx.draw_networkx(g) game_id game_id = list(experiment['games'].keys())[1] game = experiment['games'][game_id] g = nx.from_dict_of_lists(game['neighbors']) nx.draw_networkx(g) game_id # ## check clue structure # + tool = language_check.LanguageTool('en-US') err_counter = Counter() err_examples = {} for game_id, game in experiment['games'].items(): for clue_id, clue in game['clues'].items(): # check that each clue is well formatted assert(len(clue['nodes'])==2) assert(all([len(re.sub(r"\s+", "", node, flags=re.UNICODE))>=2 # remove whitespace for node in clue['nodes']])) # each node has at least two non-whitespace chars 
assert(all([re.sub(r'\{.*\}', '', node.lower()) in clue['content'].lower() # allow template nodes for node in clue['nodes']])) # each node present in clue string assert(clue['content'].endswith('.')) assert(clue['content'][0].isupper()) # check for language errors errs = tool.check(clue['content']) if(errs): for err in errs: key = err.locqualityissuetype + ': "' + err.context[err.contextoffset:err.contextoffset+err.errorlength] +'"' err_counter[key]+=1 err_examples[key] = err.context # check that clues connected to the stolen object or crime scene are identical between trmt and ctrl for clue_id, clue in game['clues'].items(): if (clue_id.startswith('t') and ('_1_' in clue_id or '_2_' in clue_id)): control_clue = game['clues']['c'+clue_id.lstrip('t')] assert clue['content'] == control_clue['content'] # check that clue graph is complete for treatment case cgt = nx.from_edgelist([ clue['nodes'] for clue_id, clue in game['clues'].items() if clue_id.startswith('t') ]) assert(all([v == cgt.number_of_nodes()-1 for k, v in cgt.degree()])) # check that clue graph has correct degree distribution for control case cgc = nx.from_edgelist([ clue['nodes'] for clue_id, clue in game['clues'].items() if clue_id.startswith('c') ]) assert(Counter([v for k, v in cgc.degree()]) == {12:2, 7:11, 1:55}) #if not Counter([v for k, v in cgc.degree()]) == {12:2, 7:11, 1:55}: # print([(k,v) for k, v in cgc.degree() if v not in [12, 7, 1]], '\n') print('pass') for key, count in err_counter.most_common(): print(count, key, err_examples[key]) # - # ## manually spot check a single game's clues game = experiment['games'][np.random.choice(list(experiment['games'].keys()))] # choose a random game for clue_id, clue in game['clues'].items(): if clue_id.startswith('t'): control_clue = game['clues']['c'+clue_id.lstrip('t')] if '_1_' in clue_id or '_2_' in clue_id: print(clue['content'].center(110)) else: print(clue['content'].rjust(55), '|', control_clue['content']) # + # check the ensemble of games to make sure that no particular clue elements have undue weight element_counter = Counter() for game_id, game in experiment['games'].items(): nodes = set() for clue_id, clue in game['clues'].items(): for node in clue['nodes']: nodes.add(node) for node in nodes: element_counter[node] += 1 plt.figure(figsize=(6,40)) plt.barh(range(len(element_counter)), element_counter.values()) plt.yticks(range(len(element_counter)), element_counter.keys()); # - plt.hist(element_counter.values()); # ## check clue/belief assignments # + for game_id, game in experiment['games'].items(): belief_list = pd.Series([bf for n in game['beliefs'] for bf in game['beliefs'][n]]) belief_counts = belief_list.value_counts() # check that each belief shows up exactly once, apart from clue_1_2, which is present 3 times assert(belief_counts['cclue_1_2'] == 3) assert(belief_counts['tclue_1_2'] == 3) assert(all([v==1 for b, v in belief_counts.iteritems() if b not in ['cclue_1_2', 'tclue_1_2']])) # check that each player has 4 beliefs assert(all([len(v)==4 for k,v in game['beliefs'].items()])) # check that control positions have beliefs that match their corresponding treatment positions for pos, bs in game['beliefs'].items(): if pos.startswith('t'): control_bs = game['beliefs']['c'+pos.lstrip('t')] assert set([b[1:] for b in bs]) == set([cb[1:] for cb in control_bs]) print('pass') # - # ## visually spot-check the relationships between clues # + random_game_id = np.random.choice(list(experiment['games'].keys())) game = experiment['games'][random_game_id] 
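# Descriptive note for the rest of this cell: it draws the treatment ('t') clue graph of
# the sampled game as a hub-and-spoke figure. Clues incident to CrimeScene_1 or
# StolenObject_1 form the central component laid out with a spring layout, the remaining
# clue edges are overlaid with lower alpha, and node labels are rotated to point outward
# from the centre. Clue midpoints and edge lengths are saved at the end for later use.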
#plt.figure(figsize=(10,10)) cg = nx.from_edgelist([n['nodes'] for cid, n in game['clues'].items() if cid[0]=='t']) # clue graph center_nodes = [game['nodes']['CrimeScene_1'], game['nodes']['StolenObject_1']] center = nx.from_edgelist(cg.edges(center_nodes)) cg.remove_edges_from(center.edges()) cg.remove_nodes_from(center_nodes) pos = nx.spring_layout(center) theta = {key: np.arctan2(val[1], val[0]) * 180/np.pi if key not in center_nodes else 0 for key, val in pos.items() } plt.figure(figsize=(8,8)) nx.draw_networkx_nodes(center, pos=pos, alpha=.5) labels = nx.draw_networkx_labels(center, pos=pos, font_size=12, ha='left') for key,t in labels.items(): if key not in center_nodes: if 90 < theta[key] or theta[key] < -90 : angle = 180 + theta[key] t.set_ha('right') else: angle = theta[key] t.set_ha('left') t.set_va('center') t.set_rotation(angle) t.set_rotation_mode('anchor') nx.draw_networkx_edges(cg, pos=pos, alpha=.2) nx.draw_networkx_edges(center, pos=pos, alpha=.4) plt.box(False) a=3 plt.xlim(-a,a) plt.ylim(-a,a) # save locations with clues for later clue_pos = {} for i, n in game['clues'].items(): if i[0] == 't': clue_pos[i[1:]] = np.array([pos[n['nodes'][0]], pos[n['nodes'][1]]]) centroids = {k: np.mean(v, axis=0) for k, v in clue_pos.items()} lengths = {k: sqrt(np.sum((v[1]-v[0])**2)) for k, v in clue_pos.items()} # + ccg = nx.from_edgelist([n['nodes'] for cid, n in game['clues'].items() if cid[0]=='c']) # control clue graph # group by shell center_nodes = [game['nodes']['CrimeScene_1'], game['nodes']['StolenObject_1']] center = nx.from_edgelist(ccg.edges(center_nodes)) shell_1_nodes = list(set(center.nodes()) - set(center_nodes)) shell_2_nodes = [nb for n in shell_1_nodes for nb in ccg.neighbors(n) if nb not in center_nodes] rot_n = 2 # rotate n (helps align the outer shell with the middle shell) shell_2_nodes = shell_2_nodes[rot_n:] + shell_2_nodes[:rot_n] # positions for nodes pos = nx.shell_layout(ccg, [center_nodes, shell_1_nodes, shell_2_nodes], scale=1.3) pos = nx.spring_layout(ccg, pos=pos, fixed=center_nodes+shell_1_nodes, iterations=1, k=1/sqrt(70)) def rotate(x, rotation=0): length = sqrt(x[0]**2 + x[1]**2) angle = np.arctan2(x[0], x[1]) new_angle = angle+rotation return [length*cos(new_angle), length*sin(new_angle)] pos = {k:rotate(v, -0.25) for k, v in pos.items()} theta = {key: np.arctan2(val[1], val[0]) * 180/np.pi if key in shell_2_nodes else 0 for key, val in pos.items() } plt.figure(figsize=(10,10), dpi=150) nx.draw_networkx_nodes(ccg, pos=pos, alpha=.4, node_size=70) nx.draw_networkx_edges(ccg, pos=pos, alpha=.2) labels = nx.draw_networkx_labels(ccg, pos=pos, font_size=7, ha='left') for key,t in labels.items(): if key in shell_2_nodes: if 90 < theta[key] or theta[key] < -90 : angle = 180 + theta[key] t.set_ha('right') else: angle = theta[key] t.set_ha('left') t.set_va('center') t.set_rotation(angle) t.set_rotation_mode('anchor') plt.box(False) a=3 plt.xlim(-a,a) plt.ylim(-a,a); # - # # stats all_clues = [cl['content'] for _, game in experiment['games'].items() for _, cl in game['clues'].items()] clue_counts = pd.value_counts(all_clues) print("There are a total of %i unique clues in the experiment" % len(clue_counts)) plt.figure(figsize=(12,4)) plt.hist(clue_counts.values, bins=range(describe['replications'])) plt.xlabel('Number of games that each clue participates in') plt.ylabel('Number of clues in the bin'); # + # check that the average length of clues is comparable between treatment and control t_length_list = [] c_length_list = [] for game_id, game in 
experiment['games'].items(): for clue_id, clue in game['clues'].items(): if clue_id.startswith('t'): t_length_list.append(len(clue['content'])) elif clue_id.startswith('c'): c_length_list.append(len(clue['content'])) else: assert False plt.hist(t_length_list, alpha=0.5, label='Treatment', bins=range(10,100,5)) plt.hist(c_length_list, alpha=0.5, label='Control', bins=range(10,100,5)) plt.xlim(0,95) plt.xlabel('Number of Characters') plt.ylabel('Number of clues') plt.title('T mean: %.02f, C mean: %.02f'%(mean(t_length_list), mean(c_length_list))); # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import csv import os import pandas as pd # + budget = pd.read_csv(r'C:\Users\\Python-Challenge\PyBank\Resources\budget_data.csv') budget # - budget['Date'].count() budget['Profit/Losses'].sum() budget['MoM'] = budget['Profit/Losses'].diff() budget (budget['Profit/Losses'].diff().sum()) / budget['Profit/Losses'].count() budget['MoM'].max() budget['MoM'].min() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import nibabel as nib import matplotlib.pyplot as plt import os import cv2 import nibabel as nib from PIL import Image from model.concave_dps_w import ResUNet as resu import torch model = resu(3,4) resunet_checkpoint = torch.load('/zion/fangx2/mu_or/tmp/sf_uni4_1206_lr_2e-4/resu_best_axial.pth.tar') resunet_dict = resunet_checkpoint['state_dict'] model.load_state_dict(resunet_dict) # + from torch.autograd import Variable import torch.nn.functional as F import nibabel = nib vol24 = nib.load('/zion/fangx2/BTCV/training_256/volume-24.nii') a = vol24.get_data() a = a.transpose(2,0,1) print(a.max(),a.min()) img24 = a[94:97, 16:240, 16:240] print(img24.shape) img24[img24 > 200] = 200.0 img24[img24 < -200] = -200.0 print(img24.shape) img24 = img24[np.newaxis,:,:,:] print(img24.shape) img24 = Variable(torch.from_numpy(img24), volatile=True).float() img24 = img24.cuda() model.cuda() output5, output4, output3, output2, output1 = model.resnet(img24) output6 = model(img24) # - attmap(output5).shape plt.imshow(attmap(output5)) def pmap(output): #output = F.softmax(output,dim=1) # output = torch.sigmoid(output) output = output.cpu().data.numpy() out = np.squeeze(output) out = out[1:] out = out.transpose(2,1,0) # out[out>1] = 1 # out[out<0] = 0 # fi_out = (out - out.min()) / (out.max() - out.min()) #out = out.astype(np.uint8) return out def attmap(output): w_map = model.att(output) w_map = w_map.cpu().data.numpy() w_map = np.squeeze(w_map) print(w_map.shape) w_map = w_map.transpose(1,0) #w_map = w_map.astype(np.uint8) return w_map def imap(img): img = img.cpu().data.numpy() img = np.squeeze(img) img = img.transpose(2,1,0) img = (img + 200) / 400 * 255 return img plt.imshow(imap(img24)[:,:,1],cmap = 'gray') I1 = imap(img24)[:,:,1] I2 = np.stack((I1/255,I1/255,I1/255), -1) I2.max() plt.imshow(pmap(F.softmax(fi_out,dim=1))) I4 = pmap(F.softmax(fi_out,dim=1)) I5 = 0.4*I2 + 1*I4 plt.imshow(I5, cmap = 'gray') plt.imshow(pmap(0.1*fi_out)) plt.imshow(pmap(output1)) plt.imshow(pmap(output2)) plt.imshow(pmap(output3)) plt.imshow(pmap(output4)) plt.imshow(pmap(output5)) fi_out.shape pmap(fi_out).shape plt.imshow(pmap(output6)) plt.imshow(pmap(output5)) 
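# Descriptive note: pmap() expects a torch tensor (it calls .cpu().data.numpy()), drops
# the first output channel and moves the channel axis last, so a 4-class output is shown
# as a 3-channel image. Passing the raw NumPy volume `a` (as in the next call) will raise
# an AttributeError unless it is wrapped in a tensor first; this is only a caution, the
# original calls are left as-is.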
plt.imshow(pmap(a),cmap = 'jet') plt.imshow(pmap(output5),cmap = 'jet') plt.imshow(pmap(output1)) plt.imshow(pmap(output4)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.6 64-bit (''base'': conda)' # name: python3 # --- import json from collections import defaultdict, Counter import numpy as np import pandas as pd import matplotlib.pyplot as plt import json with open('./data/docred/valid_revise.json', 'r') as fh: revise_dataset = json.load(fh) with open('./data/docred/valid_scratch.json', 'r') as fh: scratch_dataset = json.load(fh) with open('./data/docred/valid_recommend.json', 'r') as fh: recommend_dataset = json.load(fh) # + max_sent_dis, min_sent_dis = [], [] miss_max_sent_dis, miss_min_sent_dis = [], [] head_sent_first, head_sent_last = [], [] tail_sent_first, tail_sent_last = [], [] miss_head_sent_first, miss_head_sent_last = [], [] miss_tail_sent_first, miss_tail_sent_last = [], [] sent_head_behind_tail, sent_tail_behind_head, miss_sent_head_behind_tail, miss_sent_tail_behind_head = 0, 0, 0, 0 for recommend_data, revise_data, scratch_data in zip(recommend_dataset, revise_dataset, scratch_dataset): human_pos = set() for label in revise_data['labels']: human_pos.add((label['h'], label['t'], label['r'])) ds_pos = [(label['h'], label['t'], label['r']) for label in recommend_data['labels']] human_ds = set(human_pos) - set(ds_pos) scratch_pos = [(label['h'], label['t'], label['r']) for label in scratch_data['labels']] miss_pos = set(scratch_pos) - set(human_pos) # Sent offset: cur_offset = 0 sent_offset = [] for sent in scratch_data['sents']: sent_offset.append(cur_offset) cur_offset += len(sent) # distance num between head and tail entities all_ent_pos = {} for e_idx, nodes in enumerate(scratch_data['vertexSet']): ent_pos = {'word_pos': set(), 'sent_pos': set()} for node in nodes: ent_pos['sent_pos'].add(node['sent_id']) ent_pos['word_pos'].add(sent_offset[node['sent_id']] + node['pos'][0]) all_ent_pos[e_idx] = ent_pos ent_pair_dis = {} for e_idx1, ent_pos1 in all_ent_pos.items(): for e_idx2, ent_pos2 in all_ent_pos.items(): if e_idx1 == e_idx2: continue word_dis = [w_pos1 - w_pos2 for w_pos1 in ent_pos1['word_pos'] for w_pos2 in ent_pos2['word_pos']] abs_word_dis = np.abs(word_dis) sent_dis = [s_pos1 - s_pos2 for s_pos1 in ent_pos1['sent_pos'] for s_pos2 in ent_pos2['sent_pos']] abs_sent_dis = np.abs(sent_dis) ent_pair_dis[(e_idx1, e_idx2)] = { 'word_dis': word_dis, 'abs_word_dis': abs_word_dis, 'sent_dis': sent_dis, 'abs_sent_dis': abs_sent_dis, } for label in human_ds: dis = ent_pair_dis[(label[0], label[1])] head_pos, tail_pos = all_ent_pos[label[0]], all_ent_pos[label[1]] max_sent_dis.append(max(dis['abs_sent_dis'])) min_sent_dis.append(min(dis['abs_sent_dis'])) head_sent_first.append(min(head_pos['sent_pos'])) head_sent_last.append(max(head_pos['sent_pos'])) tail_sent_first.append(min(tail_pos['sent_pos'])) tail_sent_last.append(max(tail_pos['sent_pos'])) if min(head_pos['sent_pos']) > max(tail_pos['sent_pos']): sent_head_behind_tail += 1 if min(tail_pos['sent_pos']) > max(head_pos['sent_pos']): sent_tail_behind_head += 1 for label in miss_pos: if label[0] == label[1]: continue dis = ent_pair_dis[(label[0], label[1])] head_pos, tail_pos = all_ent_pos[label[0]], all_ent_pos[label[1]] miss_max_sent_dis.append(max(dis['abs_sent_dis'])) miss_min_sent_dis.append(min(dis['abs_sent_dis'])) miss_head_sent_first.append(min(head_pos['sent_pos'])) 
miss_head_sent_last.append(max(head_pos['sent_pos'])) miss_tail_sent_first.append(min(tail_pos['sent_pos'])) miss_tail_sent_last.append(max(tail_pos['sent_pos'])) if min(head_pos['sent_pos']) > max(tail_pos['sent_pos']): miss_sent_head_behind_tail += 1 if min(tail_pos['sent_pos']) > max(head_pos['sent_pos']): miss_sent_tail_behind_head += 1 # + fig=plt.figure(figsize=(10,4)) colors=['orangered', 'royalblue', 'orangered', 'royalblue'] ax=fig.add_subplot(121) miss_head_tail_sent_first = np.array(miss_head_sent_first) + np.array(miss_tail_sent_first) head_tail_sent_first = np.array(head_sent_first) + np.array(tail_sent_first) miss_head_tail_sent_last = np.array(miss_head_sent_last) + np.array(miss_tail_sent_last) head_tail_sent_last = np.array(head_sent_last) + np.array(tail_sent_last) miss_head_tail_sent_first = miss_head_tail_sent_first / 2 head_tail_sent_first = head_tail_sent_first / 2 miss_head_tail_sent_last = miss_head_tail_sent_last / 2 head_tail_sent_last = head_tail_sent_last / 2 stats = [miss_head_tail_sent_first, head_tail_sent_first, miss_head_tail_sent_last, head_tail_sent_last] fmts = ['r--', 'b--', 'r', 'b'] bin_max = 12 bin_size = 1 bound = 12 for idx in [3, 2]: s = pd.Series(stats[idx]) bin_stats = s.groupby(pd.cut(s, bins=list(range(-1, bin_max, bin_size)) + [10000])).count() / len(stats[idx]) * 100 per, x, y = 0, [], [] for k, v in bin_stats.items(): if k.right == 10000: x.append(bound) else: x.append(k.right) per += v y.append(per) ax.plot(x, y, fmts[idx], color=colors[idx]) ax.set_xlabel("(a) Entity Position (Sentences)", fontsize=16) ax.set_ylabel("Coverage %", fontsize=16) labels = ax.get_xticklabels() + ax.get_yticklabels() _ = [label.set_fontsize(16) for label in labels] ################################################ # ax=fig.add_subplot(142) # miss_head_tail_sent_first_per = np.array(miss_head_sent_first_per) + np.array(miss_tail_sent_first_per) # head_tail_sent_first_per = np.array(head_sent_first_per) + np.array(tail_sent_first_per) # miss_head_tail_sent_last_per = np.array(miss_head_sent_last_per) + np.array(miss_tail_sent_last_per) # head_tail_sent_last_per = np.array(head_sent_last_per) + np.array(tail_sent_last_per) # miss_head_tail_sent_first_per /= 2 # head_tail_sent_first_per /= 2 # miss_head_tail_sent_last_per /= 2 # head_tail_sent_last_per /= 2 # stats = [miss_head_tail_sent_first_per, head_tail_sent_first_per, miss_head_tail_sent_last_per, head_tail_sent_last_per] # fmts = ['r--', 'b--', 'r', 'b'] # bin_max = 100 # bin_size = 10 # bound = 100 # for idx in [2, 3]: # s = pd.Series(stats[idx]) # bin_stats = s.groupby(pd.cut(s, bins=list(range(-1, bin_max, bin_size)) + [10000])).count() / len(stats[idx]) * 100 # per, x, y = 0, [], [] # for k, v in bin_stats.items(): # if k.right == 10000: # x.append(bound) # else: # x.append(k.right) # per += v # y.append(per) # ax.plot(x, y, fmts[idx], color=colors[idx]) # ax.set_xlabel("(b) Relative Entity Position (%)", fontsize=16) # labels = ax.get_xticklabels() + ax.get_yticklabels() # _ = [label.set_fontsize(16) for label in labels] ################################################################## ax=fig.add_subplot(122) stats = [miss_max_sent_dis, miss_min_sent_dis, max_sent_dis, min_sent_dis] labels = ["D$_{Scratch}$ - D$_{Recommend}$", "D$_{Revise}$ - D$_{Recommend}$", ] fmts = ['r', 'b', 'r', 'b'] bin_max = 12 bin_size = 1 bound = 12 for idx in [1, 0]: s = pd.Series(stats[idx]) bin_stats = s.groupby(pd.cut(s, bins=list(range(-1, bin_max, bin_size)) + [10000])).count() / len(stats[idx]) * 100 per, x, y = 0, [], 
[] for k, v in bin_stats.items(): if k.right == 10000: x.append(bound) else: x.append(k.right) per += v y.append(per) ax.plot(x, y, fmts[idx], label=labels[idx], color=colors[idx]) ax.set_yticks(range(0,120, 20)) ax.set_xlabel("(b) Interval (Sentences)", fontsize=16) labels = ax.get_xticklabels() + ax.get_yticklabels() _ = [label.set_fontsize(16) for label in labels] ############################## # ax=fig.add_subplot(144) # miss_h_t_freq = (np.array(miss_h_freq) + np.array(miss_t_freq)) / 2 # h_t_freq = (np.array(h_freq) + np.array(t_freq)) / 2 # stats = [miss_h_t_freq, h_t_freq] # labels=['$D_{Scratch} - D_{Recommend}$', '$D_{Revise} - D_{Recommend}$'] # fmts = ['r', 'b'] # bin_max = 12 # bin_size = 1 # bound = 12 # for idx in [1,0]: # s = pd.Series(stats[idx]) # bin_stats = s.groupby(pd.cut(s, bins=list(range(0, bin_max, bin_size)) + [10000])).count() / len(stats[idx]) * 100 # per, x, y = 0, [], [] # for k, v in bin_stats.items(): # if k.right == 10000: # x.append(bound) # else: # x.append(k.right) # per += v # y.append(per) # ax.plot(x, y, fmts[idx], label=labels[idx], color=colors[idx]) # ax.set_xlabel("(d) Entity Frequency", fontsize=16) # labels = ax.get_xticklabels() + ax.get_yticklabels() # _ = [label.set_fontsize(16) for label in labels] ax.legend(loc='center right', bbox_to_anchor=(1., 0.15), ncol=1, fontsize=14) ax.set_yticks([]) fig.subplots_adjust(hspace=0.3) fig.savefig('./figs/entity_analysis.pdf', bbox_inches='tight') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + raw_mimetype="text/restructuredtext" active="" # .. _nb_termination: # - # ## Termination Criterion # Whenever an algorithm is executed, it needs to decide after each iteration whether the next iteration should be started. For single-objective algorithms, a naive implementation can consider the relative improvement over the last $n$ generations. # ### Default Termination ('default') # We have recently added a default termination criterion, which is used if no termination is supplied to the `minimize()` method: # + code="termination/usage_default.py" from pymoo.algorithms.nsga2 import NSGA2 from pymoo.factory import get_problem from pymoo.optimize import minimize problem = get_problem("zdt1") algorithm = NSGA2(pop_size=100) res = minimize(problem, algorithm, pf=False, seed=2, verbose=False) print(res.algorithm.n_gen) # - # This terminates the run based on a couple of criteria, also explained later on this page. # Commonly used are the movement in the design space `x_tol` and the convergence in the constraint space `cv_tol` and the objective space `f_tol`. # To provide an upper bound for the algorithm, we recommend supplying a maximum number of generations `n_max_gen` or function evaluations `n_max_evals`. # # Moreover, it is worth mentioning that the tolerance-based termination uses a sliding window: not only the last generation, but the sequence of the last `n_last` generations is used to compare the tolerances against a bound defined by the user.
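# As a quick, concrete illustration of this sliding-window idea (a minimal sketch that only reuses the `MultiObjectiveSpaceToleranceTermination` class and the arguments shown further down this page; the cap of 200 generations is an arbitrary illustrative value, and defaults may differ between pymoo versions):

# +
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_problem
from pymoo.optimize import minimize
from pymoo.util.termination.f_tol import MultiObjectiveSpaceToleranceTermination

problem = get_problem("zdt1")
algorithm = NSGA2(pop_size=100)

# Check every 5th generation whether the objective-space change over the last 30
# generations has dropped below tol; stop after 200 generations as a hard fallback.
termination = MultiObjectiveSpaceToleranceTermination(tol=0.0025, n_last=30, nth_gen=5, n_max_gen=200)

res = minimize(problem, algorithm, termination, seed=1, verbose=False)
print(res.algorithm.n_gen)
# -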
# By default, for multi-objective problems, the termination will be set to # + from pymoo.util.termination.default import MultiObjectiveDefaultTermination termination = MultiObjectiveDefaultTermination( x_tol=1e-8, cv_tol=1e-6, f_tol=0.0025, nth_gen=5, n_last=30, n_max_gen=1000, n_max_evals=100000 ) # - # And for single-objective optimization to # + from pymoo.util.termination.default import SingleObjectiveDefaultTermination termination = SingleObjectiveDefaultTermination( x_tol=1e-8, cv_tol=1e-6, f_tol=1e-6, nth_gen=5, n_last=20, n_max_gen=1000, n_max_evals=100000 ) # + raw_mimetype="text/restructuredtext" active="" # .. _nb_n_eval: # - # ### Number of Evaluations ('n_eval') # Termination can be triggered simply by providing an upper bound for the number of function evaluations. As soon as the number of function evaluations in an iteration exceeds this upper bound, the algorithm terminates. # + code="termination/usage_n_eval.py" from pymoo.algorithms.nsga2 import NSGA2 from pymoo.factory import get_problem, get_termination from pymoo.optimize import minimize problem = get_problem("zdt3") algorithm = NSGA2(pop_size=100) termination = get_termination("n_eval", 300) res = minimize(problem, algorithm, termination, pf=problem.pareto_front(), seed=1, verbose=True) # + raw_mimetype="text/restructuredtext" active="" # .. _nb_n_gen: # - # ### Number of Generations ('n_gen') # Moreover, the number of generations / iterations can be limited as well. # + code="termination/usage_n_gen.py" from pymoo.algorithms.nsga2 import NSGA2 from pymoo.factory import get_problem, get_termination from pymoo.optimize import minimize problem = get_problem("zdt3") algorithm = NSGA2(pop_size=100) termination = get_termination("n_gen", 10) res = minimize(problem, algorithm, termination, pf=problem.pareto_front(), seed=1, verbose=True) # + raw_mimetype="text/restructuredtext" active="" # .. _nb_time: # - # ### Based on Time ('time') # Termination can also be based on the elapsed runtime of the algorithm. For instance, to run an algorithm for 3 seconds the termination can be defined by `get_termination("time", "00:00:03")`, or for 1 hour and 30 minutes by `get_termination("time", "01:30:00")`. # + code="termination/usage_time.py" from pymoo.algorithms.nsga2 import NSGA2 from pymoo.factory import get_problem, get_termination from pymoo.optimize import minimize problem = get_problem("zdt3") algorithm = NSGA2(pop_size=100) termination = get_termination("time", "00:00:03") res = minimize(problem, algorithm, termination, pf=problem.pareto_front(), seed=1, verbose=False) print(res.algorithm.n_gen) # + raw_mimetype="text/restructuredtext" active="" # .. _nb_xtol: # - # ### Design Space Tolerance ('x_tol') # # We can also track the change in the design space. For an explanation of the parameters, please have a look at `f_tol`. # + code="termination/usage_xtol.py" from pymoo.algorithms.nsga2 import NSGA2 from pymoo.factory import get_problem from pymoo.optimize import minimize from pymoo.util.termination.x_tol import DesignSpaceToleranceTermination problem = get_problem("zdt3") algorithm = NSGA2(pop_size=100) termination = DesignSpaceToleranceTermination(tol=0.0025, n_last=20) res = minimize(problem, algorithm, termination, pf=problem.pareto_front(), seed=1, verbose=False) print(res.algorithm.n_gen) # + raw_mimetype="text/restructuredtext" active="" # .. _nb_ftol: # - # ### Objective Space Tolerance ('f_tol') # # The most interesting stopping criterion is to use the objective space change to decide whether to terminate the algorithm.
Here, we mostly use a simple and efficient procedure to determine whether to stop or not. We aim to improve it further in the future. If somebody is interested in collaborating, please let us know. # # The parameters of our implementation are: # # **tol**: What is the tolerance in the objective space on average. If the value is below this bound, we terminate. # # **n_last**: To make the criterion more robust, we consider the last $n$ generations and take the maximum. This considers the worst case in a window. # # **n_max_gen**: As a fallback, the generation number can be used. For some problems, the termination criterion might not be reached; however, an upper bound for generations can be defined to stop in that case. # # **nth_gen**: Defines whenever the termination criterion is calculated by default, every 10th generation. # + code="termination/usage_ftol.py" from pymoo.algorithms.nsga2 import NSGA2 from pymoo.factory import get_problem from pymoo.optimize import minimize from pymoo.util.termination.f_tol import MultiObjectiveSpaceToleranceTermination from pymoo.visualization.scatter import Scatter problem = get_problem("zdt3") algorithm = NSGA2(pop_size=100) termination = MultiObjectiveSpaceToleranceTermination(tol=0.0025, n_last=30, nth_gen=5, n_max_gen=None, n_max_evals=None) res = minimize(problem, algorithm, termination, pf=True, seed=1, verbose=False) print("Generations", res.algorithm.n_gen) plot = Scatter(title="ZDT3") plot.add(problem.pareto_front(use_cache=False, flatten=False), plot_type="line", color="black") plot.add(res.F, color="red", alpha=0.8, s=20) plot.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.7 ('MSCS-torch') # language: python # name: python3 # --- import pandas as pd from sklearn.preprocessing import LabelEncoder from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import cross_val_score pd.options.mode.chained_assignment = None # + train_df = pd.read_csv('train.csv', index_col=0) test_df = pd.read_csv('test.csv', index_col=0) Y_columns = ['koi_disposition', 'koi_pdisposition', 'koi_score'] misc_columns = ['kepid', 'kepoi_name', 'kepler_name', 'koi_tce_delivname'] train_X = train_df.drop(columns=Y_columns + misc_columns) train_Y = train_df[Y_columns + misc_columns] test_X = test_df.drop(columns=Y_columns + misc_columns) test_Y = test_df[Y_columns + misc_columns] # - les = {} Y = pd.concat([train_Y, test_Y]) for dtype, col in zip(Y.dtypes, Y.columns): if dtype == 'object': les[col] = LabelEncoder() les[col].fit(Y[col]) train_Y[col] = les[col].transform(train_Y[col]) test_Y[col] = les[col].transform(test_Y[col]) # ### KOI Disposition - Exoplanet Archive Disposition m = DecisionTreeClassifier() cvs = cross_val_score(m, train_X, train_Y['koi_disposition'], cv=5) m.fit(train_X, train_Y['koi_disposition']) score = m.score(test_X, test_Y['koi_disposition']) print(f'Cross Validation Score: {cvs.min()}\nScore: {score}') m = KNeighborsClassifier() cvs = cross_val_score(m, train_X, train_Y['koi_disposition'], cv=5) m.fit(train_X, train_Y['koi_disposition']) score = m.score(test_X, test_Y['koi_disposition']) print(f'Cross Validation Score: {cvs.min()}\nScore: {score}') m = RandomForestClassifier() cvs = cross_val_score(m, train_X, train_Y['koi_disposition'], cv=5) m.fit(train_X, train_Y['koi_disposition']) 
score = m.score(test_X, test_Y['koi_disposition']) print(f'Cross Validation Score: {cvs.min()}\nScore: {score}') # ### Conclusion # # The random forest classifier is unsurprisingly the best classifier to predict KOI disposition. Further, both the random forest and decision tree classifiers did surprisingly well considering KOI disposition is according the scholarly sources and not statistical analysis. Lastly, the k nearest neighbor classifier was expected to perform poorly because there are so many dimensions and its hard to get a notion of "nearest." # ### KOI P-Disposition - Disposition Using Kepler Data m = DecisionTreeClassifier() cvs = cross_val_score(m, train_X, train_Y['koi_pdisposition'], cv=5) m.fit(train_X, train_Y['koi_pdisposition']) score = m.score(test_X, test_Y['koi_pdisposition']) print(f'Cross Validation Score: {cvs.min()}\nScore: {score}') m = KNeighborsClassifier() cvs = cross_val_score(m, train_X, train_Y['koi_pdisposition'], cv=5) m.fit(train_X, train_Y['koi_pdisposition']) score = m.score(test_X, test_Y['koi_pdisposition']) print(f'Cross Validation Score: {cvs.min()}\nScore: {score}') m = RandomForestClassifier() cvs = cross_val_score(m, train_X, train_Y['koi_pdisposition'], cv=5) m.fit(train_X, train_Y['koi_pdisposition']) score = m.score(test_X, test_Y['koi_pdisposition']) print(f'Cross Validation Score: {cvs.min()}\nScore: {score}') # ### Conclusion # # The same conclusions from KOI disposition can be used with the exception that the classifiers are a lot better because KOI p-disposition is according to statistical analysis. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="sQ8jh5aiy8zo" # # HOME ASSIGNMENT #6: CLOUD FUNCTION & STREAMLIT # # **Mục đích của bài Assignment** # > * [Optional] Data Deploy Cloud Function # > * Tạo Data Apps với Streamlit # > * Thao tác với data bằng Pandas # > * Data Visualization # # **Các kiến thức áp dụng** # * Slack API, JSON to DataFrame # * GCP Cloud Function # * Streamlit # * Python Pandas # * Python Data Visualization # # **Lời Khuyên** # * Các bạn dành thời gian ôn lại và xâu chuỗi kiến thức # * Review Assignment 1-5 cho ít nhất 2 bạn học viên khác # - # ls .. # # TODO 1: Python Data Viz # Hoàn tất các sets bài tập trên [Kaggle Data Visualization](https://www.kaggle.com/learn/data-visualization) - Nếu chưa hoàn thành trong [Assignment 5](https://github.com/anhdanggit/atom-assignments/blob/main/assignment_5/home_assignment_5.ipynb) # + # Copy các link Kaggle sau: ## 1. Link tới Kaggle Account của bạn -----> ## 2. 
Link tới các bài tập ## DataViz 1: ---> https://www.kaggle.com/lee1368/exercise-hello-seaborn ## DataViz 2: ---> https://www.kaggle.com/lee1368/exercise-line-charts ## DataViz 3: ---> https://www.kaggle.com/lee1368/exercise-bar-charts-and-heatmaps ## DataViz 4: ---> https://www.kaggle.com/lee1368/exercise-scatter-plots ## DataViz 5: ---> https://www.kaggle.com/lee1368/exercise-distributions ## DataViz 6: ---> https://www.kaggle.com/lee1368/exercise-choosing-plot-types-and-custom-styles ## DataViz 7: ---> https://www.kaggle.com/lee1368/exercise-final-project # + [markdown] id="t28PUQoNzy1k" # # TODO 2 (OPTIONAL): DEPLOY GOOGLE CLOUD FUNCTION # * Làm theo Lab của Week 6: [HERE](https://anhdang.gitbook.io/datacracy/atom/6-cloud-function-and-streamlit/6.2-lab-cloud-function-hands-on) # * Click đôi vào các tab Markdown bên dưới để trả lời các câu hỏi ([Markdown Cheatsheet](https://guides.github.com/features/mastering-markdown/)) # - # ## Screenshot Cloud Function on GCP # > *Copy Screenshot vào folder img trong repo, và đổi link bên dưới* # # ![Cloud Function Deployed](/img/yourscreenshot.png) # ## Screenshot Cloud Function Testing on GCP # > *Copy Screenshot vào folder img trong repo, và đổi link bên dưới* # # ![Cloud Function Testing](../img/yourscreenshot.png) # + [markdown] id="2QUVZlLm00PG" # ## Screenshot Cloud Function Call on Postman # > *Copy Screenshot vào folder img trong repo, và đổi link bên dưới* # # ![Cloud Function on Postman](../img/yourscreenshot.png) # - # + [markdown] id="u5c_Lx9MyzSF" # ## Các lỗi gặp trong quá trình thực hiện # *Liên kê bên dưới các lỗi bạn gặp và các giải quyết* # # 1. # 2. # 3. # - # + [markdown] id="bT_ziqVJ1COI" # # TODO 3: HIỂU & DIAGRAM CODE STREAMLIT # Mình thường khuyên các bạn mới học code rằng: # # > Hãy code với một cây bút chì và tờ giấy # # Như thế nào? # # 1. Đầu tiên, là hình dung trong đầu: Bạn sẽ bắt đầu từ gì (`inputs`) và cho ra những gì (`output`) # 2. Rồi, để đi từ inputs đến outputs thì bạn cần thực hiện những bước nào (các `functions`) # # Bạn có thể vẽ ra một diagram như vậy giúp bạn: # * Nhìn bức tranh lớn, và không bị sa đà vào tiểu tiết, syntax # * Rõ ràng hơn về flow # * Giúp bạn tối ưu flow trước, rồi sau đó code sẽ thuận lợi hơn # * Rất hiệu quả để bạn debugs trong tương lai # # Tham khảo Diagram sau của [streamlit/data_glimpse.py](https://github.com/anhdanggit/atom-assignments/blob/main/streamlit/data_glimpse.py) và vẽ diagram theo cách hiểu của bạn cho [streamlit/datacracy_slack.py](https://github.com/anhdanggit/atom-assignments/blob/main/streamlit/datacracy_slack.py) # - # ## Diagram Data Glimpse Apps # > Bên dưới là ví dụ Diagram của app [streamlit/data_glimpse.py](https://github.com/anhdanggit/atom-assignments/blob/main/streamlit/data_glimpse.py) # # ![data-glimpse-diagream](../img/streamlit-Data-Glimpse-Diagram.png) # ## DataCracy Slack # > Là apps để tổng hợp lịch sử nộp bài, review và discussion của Datacracy Learners # ![Datacracy-slack-streamlit](../img/dataCracy-slack-streamlit.png) # ## Diagram DataCracy Slack Apps # * Xem code của app [streamlit/datacracy_slack.py](https://github.com/anhdanggit/atom-assignments/blob/main/streamlit/datacracy_slack.py) # > *Copy Diagram bạn vẽ vào folder img trong repo, và đổi link bên dưới* # # ![datacracy-slack-diagram](../img/IMG_0912.png) # ## Giải thích # Xem code của app [streamlit/datacracy_slack.py](https://github.com/anhdanggit/atom-assignments/blob/main/streamlit/datacracy_slack.py): # # 1. 
Trong mỗi function (steps) trong Diagram của bạn, giải thích function làm những việc gì? # 2. Liệt kê các logics được áp dụng để xử lý data? ''' 1. Trong mỗi function, chức năng của từng fucntion là: load_user_df (channle/ msg): load data từ slack API process_msg_data: từ bảng msg_df lấy ra những người đã review (reply_users), thêm tên user và channel 2. Logic áp dụng để xử lý data - Check invalid user: check trong user_df.user_id == user_id - Lọc ra toàn bộ thông tin liên quan đến user_id đó (filter_user_id/msg/p_msg_df) - Tính submission bằng cách lấy msg có latest ts groupby each channel - Tính review bằng cách: review user_id ra khỏi bảng p_msg_df (bỏ những tin mình tự report), lọc theo channel assignment và người reviewer là learner - Tính discussion: lấy message trong channel có 'discuss', sort by time (date + time) - Tính summary: Đã nộp: len(submit_df) Được review" 100* reply_user_count / total đã nộp Đã review: len(review_df) Thảo luận: số chữ trong discuss_df ''' # + id="UccXW_FH4Glg" # - # # TODO 4: VISUALIZATION ON STREAMLIT # Áp dụng kiến thức đã học trong *TODO 1* + Pandas thực hiện các tasks sau: # # 1. Tổng hợp cho tất cả learners các chỉ số sau: # * Số assignment đã nộp # * % bài được review # * Số workcount đã thảo luận # * Extract thứ trong tuần (weekday) của ngày nộp bài # * Extract giờ trong ngày nộp bài (hour) # # 4. Vẽ biểu đồ thể hiện phân phối (Distribution - [Kaggle Tutorial](https://www.kaggle.com/alexisbcook/distributions)) của các thông số trên và add vào app Streamlit # + import streamlit as st import json import requests import sys import os import pandas as pd import numpy as np import re from datetime import datetime as dt # st.set_page_config(layout="wide") # st.title('DataCracy ATOM Tiến Độ Lớp Học') with open('env_variable.json','r') as j: # with open('OneDrive/Documents/GitHub/atom-assignments/streamlit/env_variable.json','r') as j: json_data = json.load(j) #SLACK_BEARER_TOKEN = os.environ.get('SLACK_BEARER_TOKEN') ## Get in setting of Streamlit Share SLACK_BEARER_TOKEN = json_data['SLACK_BEARER_TOKEN'] DTC_GROUPS_URL = ('https://raw.githubusercontent.com/anhdanggit/atom-assignments/main/data/datacracy_groups.csv') #st.write(json_data['SLACK_BEARER_TOKEN']) # @st.cache def load_users_df(): # Slack API User Data endpoint = "https://slack.com/api/users.list" headers = {"Authorization": "Bearer {}".format(json_data['SLACK_BEARER_TOKEN'])} response_json = requests.post(endpoint, headers=headers).json() user_dat = response_json['members'] # Convert to CSV user_dict = {'user_id':[],'name':[],'display_name':[],'real_name':[],'title':[],'is_bot':[]} for i in range(len(user_dat)): user_dict['user_id'].append(user_dat[i]['id']) user_dict['name'].append(user_dat[i]['name']) user_dict['display_name'].append(user_dat[i]['profile']['display_name']) user_dict['real_name'].append(user_dat[i]['profile']['real_name_normalized']) user_dict['title'].append(user_dat[i]['profile']['title']) user_dict['is_bot'].append(int(user_dat[i]['is_bot'])) user_df = pd.DataFrame(user_dict) # Read dtc_group hosted in github dtc_groups = pd.read_csv(DTC_GROUPS_URL) user_df = user_df.merge(dtc_groups, how='left', on='name') return user_df # @st.cache def load_channel_df(): endpoint2 = "https://slack.com/api/conversations.list" data = {'types': 'public_channel,private_channel'} # -> CHECK: API Docs https://api.slack.com/methods/conversations.list/test headers = {"Authorization": "Bearer {}".format(SLACK_BEARER_TOKEN)} response_json = requests.post(endpoint2, headers=headers, 
data=data).json() channel_dat = response_json['channels'] channel_dict = {'channel_id':[], 'channel_name':[], 'is_channel':[],'creator':[],'created_at':[],'topics':[],'purpose':[],'num_members':[]} for i in range(len(channel_dat)): channel_dict['channel_id'].append(channel_dat[i]['id']) channel_dict['channel_name'].append(channel_dat[i]['name']) channel_dict['is_channel'].append(channel_dat[i]['is_channel']) channel_dict['creator'].append(channel_dat[i]['creator']) channel_dict['created_at'].append(dt.fromtimestamp(float(channel_dat[i]['created']))) channel_dict['topics'].append(channel_dat[i]['topic']['value']) channel_dict['purpose'].append(channel_dat[i]['purpose']['value']) channel_dict['num_members'].append(channel_dat[i]['num_members']) channel_df = pd.DataFrame(channel_dict) return channel_df # @st.cache(allow_output_mutation=True) def load_msg_dict(): endpoint3 = "https://slack.com/api/conversations.history" headers = {"Authorization": "Bearer {}".format(SLACK_BEARER_TOKEN)} msg_dict = {'channel_id':[],'msg_id':[], 'msg_ts':[], 'user_id':[], 'latest_reply':[],'reply_user_count':[],'reply_users':[],'github_link':[],'text':[]} for channel_id, channel_name in zip(channel_df['channel_id'], channel_df['channel_name']): print('Channel ID: {} - Channel Name: {}'.format(channel_id, channel_name)) try: data = {"channel": channel_id} response_json = requests.post(endpoint3, data=data, headers=headers).json() msg_ls = response_json['messages'] for i in range(len(msg_ls)): if 'client_msg_id' in msg_ls[i].keys(): msg_dict['channel_id'].append(channel_id) msg_dict['msg_id'].append(msg_ls[i]['client_msg_id']) msg_dict['msg_ts'].append(dt.fromtimestamp(float(msg_ls[i]['ts']))) msg_dict['latest_reply'].append(dt.fromtimestamp(float(msg_ls[i]['latest_reply'] if 'latest_reply' in msg_ls[i].keys() else 0))) ## -> No reply: 1970-01-01 msg_dict['user_id'].append(msg_ls[i]['user']) msg_dict['reply_user_count'].append(msg_ls[i]['reply_users_count'] if 'reply_users_count' in msg_ls[i].keys() else 0) msg_dict['reply_users'].append(msg_ls[i]['reply_users'] if 'reply_users' in msg_ls[i].keys() else 0) msg_dict['text'].append(msg_ls[i]['text'] if 'text' in msg_ls[i].keys() else 0) ## -> Censor message contains tokens text = msg_ls[i]['text'] github_link = re.findall('(?:https?://)?(?:www[.])?github[.]com/[\w-]+/?', text) msg_dict['github_link'].append(github_link[0] if len(github_link) > 0 else None) except: print('====> '+ str(response_json)) msg_df = pd.DataFrame(msg_dict) return msg_df def process_msg_data(msg_df, user_df, channel_df): ## Extract 2 reply_users msg_df['reply_user1'] = msg_df['reply_users'].apply(lambda x: x[0] if x != 0 else '') msg_df['reply_user2'] = msg_df['reply_users'].apply(lambda x: x[1] if x != 0 and len(x) > 1 else '') ## Merge to have a nice name displayed msg_df = msg_df.merge(user_df[['user_id','name','DataCracy_role']].rename(columns={'name':'submit_name'}), \ how='left',on='user_id') msg_df = msg_df.merge(user_df[['user_id','name']].rename(columns={'name':'reply1_name','user_id':'reply1_id'}), \ how='left', left_on='reply_user1', right_on='reply1_id') msg_df = msg_df.merge(user_df[['user_id','name']].rename(columns={'name':'reply2_name','user_id':'reply2_id'}), \ how='left', left_on='reply_user2', right_on='reply2_id') ## Merge for nice channel name msg_df = msg_df.merge(channel_df[['channel_id','channel_name','created_at']], how='left',on='channel_id') ## Format datetime cols msg_df['created_at'] = msg_df['created_at'].dt.strftime('%Y-%m-%d') msg_df['msg_date'] = 
msg_df['msg_ts'].dt.strftime('%Y-%m-%d') msg_df['msg_time'] = msg_df['msg_ts'].dt.strftime('%H:%M') msg_df['wordcount'] = msg_df.text.apply(lambda s: len(s.split())) return msg_df # - user_df = load_users_df() channel_df = load_channel_df() msg_df = load_msg_dict() total = process_msg_data(msg_df, user_df, channel_df) total.head() # + as_total = total[total.channel_name.str.contains('assignment')] as_total = as_total[as_total.DataCracy_role.str.contains('Learner')] as_by_channel = as_total.groupby(by=(['user_id','submit_name','channel_id','channel_name']))['github_link'].nunique() as_by_user = as_by_channel.groupby(by=(['user_id','submit_name'])).count() as_by_user = as_by_user.sort_values(ascending=False) # as_by_user.set_axis(['user_id','username','assignments'],axis='columns') as_by_user = as_by_user.reset_index() as_by_user.head() # - dis_total = total[total.channel_name.str.contains('discuss')] dis_total = dis_total[dis_total.DataCracy_role.str.contains('Learner')] user_wordcount = dis_total.groupby(by=(['user_id','submit_name'])).wordcount.sum() user_wordcount = user_wordcount.sort_values(ascending=False) user_wordcount = user_wordcount.reset_index() user_wordcount.head() len(user_wordcount) as_total.head() reviewed_total = as_total[(as_total.reply_users != 0) & \ ((as_total.reply_user1 != as_total.user_id) | (as_total.reply_user2 != as_total.user_id))] reviewed_total=reviewed_total.loc[reviewed_total.groupby(by=(['user_id','submit_name','channel_id'])).msg_ts.idxmax()] # reviewed_total = reviewed_total.reset_index().sort_values(by='channel_id',ascending=False) # reviewed_total.rename(columns = {'channel_id':'no of be reviewed'}) reviewed_total = reviewed_total.groupby(by=(['user_id','submit_name'])).channel_id.nunique().reset_index() reviewed_total = reviewed_total.rename(columns = {'channel_id':'time be reviewed'}) reviewed_total.head() len(reviewed_total) final = as_by_user.merge(user_wordcount, how='left',left_on=('user_id','submit_name'), right_on = ('user_id','submit_name')) final = final.merge(reviewed_total, how='left',left_on=('user_id','submit_name'), right_on = ('user_id','submit_name')) final = final.rename(columns={'github_link':'no of assignments'}) final['% be reviewed'] = final['time be reviewed']/final['no of assignments']*100 final['% be reviewed']=final['% be reviewed'].round(decimals=1) final = final.sort_values(by='no of assignments',ascending=False) final.head() pd.plotting.register_matplotlib_converters() import matplotlib.pyplot as plt # %matplotlib inline import seaborn as sns sns.distplot(a=final['no of assignments'], kde=False) sns.distplot(a=final['wordcount'], kde=False) sns.distplot(a=final['% be reviewed'], kde=False) sns.jointplot(x=final['no of assignments'], y=final['time be reviewed'], kind='kde') sns.regplot(x=final['no of assignments'], y=final['time be reviewed']) sns.regplot(x=final['no of assignments'], y=final['wordcount']) new_as_total=as_total.loc[as_total.groupby(by=(['user_id','submit_name','channel_id'])).msg_ts.idxmax()] ts_time = new_as_total.msg_ts.reset_index() ts_time['dayofweek'] = ts_time.msg_ts.dt.dayofweek ts_time['day_name'] = ts_time.msg_ts.dt.day_name() ts_time['hour'] = ts_time.msg_ts.dt.hour ts_time ts_time.dt.day_name() day_dis = sns.distplot(a=ts_time['dayofweek'], kde=False) day_dis.set_xticklabels(['','Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday']) for item in day_dis.get_xticklabels(): item.set_rotation(90) sns.distplot(a=ts_time['hour'], kde=False) plt.xticks([0,6, 12, 18,24]) # link file 
streamlit: https://github.com/lethuthao1368/atom-assignments/blob/main/streamlit/datacracy_slack-thaole.py from PIL import Image for i in range(4): k = Image.open(f'streamlit_{i+1}.png') display(k) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Block Move in Fixed Time # # Here, we look at a problem called "Block Move". Block Move is a very simple optimal control problem defined by in the paper *[An Introduction to Trajectory Optimization: How to Do Your Own Direct Collocation](https://epubs.siam.org/doi/10.1137/16M1062569)*. # # The basics of the problem are this: # # ----- # # Suppose we have a block with a unit mass on a frictionless surface; the block can slide forward and backwards along a number line $x$. At $t=0$, the block starts at $x=0$ with velocity $v=0$. At $t=1$, we want the block to have moved to $x=1$ and again be stationary with $v=0$. # # We are allowed to apply any amount of force $u(t)$ to the block, but we want to find the path that minimizes the amount of "effort" applied. We measure "effort" as the integral $\int_0^1 u(t)^2\ dt$. # # What should our force input $u(t)$ be? # # ----- # # Let's solve the problem. First, we do some boilerplate setup: # + import aerosandbox as asb import aerosandbox.numpy as np opti = asb.Opti() n_timesteps = 300 mass_block = 1 # - # Then, we define our time vector, our state vectors, and our force vector. # + pycharm={"name": "#%%\n"} time = np.linspace(0, 1, n_timesteps) position = opti.variable( init_guess=np.linspace(0, 1, n_timesteps) # Guess a trajectory that takes us there linearly. ) velocity = opti.derivative_of( position, with_respect_to=time, derivative_init_guess=1, # Guess a velocity profile that is uniform over time. ) force = opti.variable( init_guess=np.linspace(1, -1, n_timesteps), # Guess that the force u(t) goes from 1 to -1 over the time window. n_vars=n_timesteps ) # - # We can't forget to constrain the derivative of velocity to be equal to the acceleration! # + pycharm={"name": "#%%\n"} opti.constrain_derivative( variable=velocity, with_respect_to=time, derivative=force / mass_block, # F = ma ) # - # Now, we compute the amount of effort expended using a numerical integral: # + pycharm={"name": "#%%\n"} effort_expended = np.sum( np.trapz(force ** 2) * np.diff(time) ) opti.minimize(effort_expended) # - # Can't forget to add those boundary conditions! # # Some notes: # * *"Wait, isn't $x=0$ an initial condition, not a boundary condition?"* There is no mathematical difference between *initial conditions* and *boundary conditions*. We use the phrase "boundary conditions" to refer to both. This helps eliminate any confusion between "initial conditions" and "initial guesses". # # * *"Wait, what's the difference between initial conditions and initial guesses?"* "Initial conditions" are really just boundary conditions that happen to be applied at the boundary $t=0$. "Initial guesses" are our best guess for each of our design variables - basically, our best guess for the optimal trajectory. It is so important that the distinction be understood! Again, we use the "boundary conditions" catch-all rather than "initial conditions" to help reinforce this distinction. 
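# To make the distinction concrete, here is a tiny, self-contained sketch (added for illustration only; it uses just the `Opti` calls that already appear in this notebook, on a toy scalar problem) showing the different roles of an initial guess and a constraint:

# + pycharm={"name": "#%%\n"}
import aerosandbox as asb

toy_opti = asb.Opti()
x = toy_opti.variable(init_guess=5)   # initial guess: merely where the solver starts searching
toy_opti.subject_to([x >= 1])         # constraint: a condition every feasible solution must satisfy
toy_opti.minimize(x ** 2)
toy_sol = toy_opti.solve()
print(toy_sol.value(x))               # ~1.0, regardless of the initial guess
# -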
# # Now for those boundary conditions: # + pycharm={"name": "#%%\n"} opti.subject_to([ position[0] == 0, position[-1] == 1, velocity[0] == 0, velocity[-1] == 0, ]) # - # Now, we solve. # + pycharm={"name": "#%%\n"} sol = opti.solve() # - # This actually solves in just one iteration, because the problem is a quadratic program: the objective is quadratic and all of the constraints are linear. # # Let's plot what our solution looks like: # + pycharm={"name": "#%%\n"} import matplotlib.pyplot as plt import seaborn as sns fig, ax = plt.subplots(1, 1, figsize=(6.4, 4.8), dpi=200) plt.plot(sol.value(time), sol.value(position)) plt.xlabel(r"Time") plt.ylabel(r"Position") plt.title(r"Position") plt.tight_layout() plt.show() fig, ax = plt.subplots(1, 1, figsize=(6.4, 4.8), dpi=200) plt.plot(sol.value(time), sol.value(velocity)) plt.xlabel(r"Time") plt.ylabel(r"Velocity") plt.title(r"Velocity") plt.tight_layout() plt.show() fig, ax = plt.subplots(1, 1, figsize=(6.4, 4.8), dpi=200) plt.plot(sol.value(time), sol.value(force)) plt.xlabel(r"Time") plt.ylabel(r"Force") plt.title(r"Force") plt.tight_layout() plt.show() # - # That makes sense! We can actually prove optimality of these functions here using calculus of variations; see Appendix B of the Kelly paper for more details. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # Description: # Given two strings s and t, your goal is to convert s into t within k moves or fewer. # During the i-th move (1 <= i <= k), you can either: # 1. Choose any index j of s (1-indexed), such that 1 <= j <= s.length and j has not been chosen in any previous move, and shift the character at that index i times. # 2. Do nothing. # Shifting a character means replacing it with the next letter in the alphabet (wrapping around so that 'z' becomes 'a'). # Shifting a character by i means applying the shift operation i times. # Remember that any index j can be chosen at most once. Return true if it is possible to convert s into t in no more than k moves, otherwise return false. # # Example 1: # Input: s = "input", t = "ouput", k = 9 # Output: true # Explanation: In the 6th move, we shift 'i' 6 times to get 'o'. And in the 7th move we shift 'n' to get 'u'. # # Example 2: # Input: s = "abc", t = "bcd", k = 10 # Output: false # Explanation: # We need to shift each character in s one time to convert it into t. # We can shift 'a' to 'b' during the 1st move. # However, there is no way to shift the other characters in the remaining moves to obtain t from s. # # Example 3: # Input: s = "aab", t = "bbb", k = 27 # Output: true # Explanation: In the 1st move, we shift the first 'a' 1 time to get 'b'. In the 27th move, we shift the second 'a' 27 times to get 'b'. # # Constraints: # 1. 1 <= s.length, t.length <= 10^5 # 2. 0 <= k <= 10^9 # 3. s, t contain only lowercase English letters. # - class Solution: def canConvertString(self, s: str, t: str, k: int) -> bool: alphabet = "abcdefghijklmnopqrstuvwxyz" solution = Solution() solution.canConvertString('input', 'ouput', 9) c = "abcdefghijklmnopqrstuvwxyz" print(len(c)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Object Detection # In this notebook we will have a look at the fridge dataset using different dashboards.
# Imports from icevision.all import * import icedata from icevision_dashboards.data import * # ## Setup data # path = icedata.fridge.load_data() class_map = icedata.fridge.class_map() parser = icedata.fridge.parser(data_dir=path) train_records, valid_records = parser.parse() record_ds = RecordDataset(valid_records, class_map) record_ds.name = "record_dataset" record_ds.save("test_data") # ## Train model for data generation # Define transforms train_tfms = tfms.A.Adapter( [*tfms.A.aug_tfms(size=384, presize=512), tfms.A.Normalize()] ) valid_tfms = tfms.A.Adapter([*tfms.A.resize_and_pad(384), tfms.A.Normalize()]) # Create datasets for training train_ds = Dataset(train_records, train_tfms) valid_ds = Dataset(valid_records, valid_tfms) model_type = models.mmdet.faster_rcnn backbone = model_type.backbones.resnet50_fpn_1x(pretrained=True) # + tags=[] model = model_type.model(backbone=backbone, num_classes=len(class_map)) metrics = [COCOMetric(metric_type=COCOMetricType.bbox)] train_dl = model_type.train_dl(train_ds, batch_size=1, num_workers=4, shuffle=True) valid_dl = model_type.valid_dl(valid_ds, batch_size=1, num_workers=4, shuffle=False) learn = model_type.fastai.learner(dls=[train_dl, valid_dl], model=model, metrics=metrics) learn.fine_tune(5, freeze_epochs=1, base_lr=1e-4) # - # ## Create preds and sampels files _ = model.to("cuda:0") samples, losses_stats = model_type.interp.get_losses(model, valid_ds) dl = model_type.interp.infer_dl(valid_ds, batch_size=2) preds = model_type.interp.predict_from_dl(model=model, infer_dl=dl, keep_images=True) def get_compnent(components, component_type): for component in components: if isinstance(component, component_type): return component else: return None def remove_image(components): for entry in list(components): if isinstance(entry, FilepathRecordComponent) or isinstance(entry, ImageRecordComponent): entry.img = None def cleanup_preds(preds): new_preds = deepcopy(preds) for pred in new_preds: remove_image(pred.ground_truth.common.components) remove_image(pred.pred.common.components) return new_preds # remove all the data not required (image) and convert the masks to a smaller data fromat clean_preds = cleanup_preds(preds) pickle.dump(clean_preds, open("test_data/object_detection_preds.pkl", "wb")) preds = pickle.load(open("test_data/object_detection_preds.pkl", "rb")) def cleanup_samples(samples): new_samples = deepcopy(samples) for sample in new_samples: remove_image(sample.components) return new_samples # remove all the data not required (image) clean_samples = cleanup_samples(samples) pickle.dump(clean_samples, open("test_data/object_detection_samples.pkl", "wb")) samples = pickle.load(open("test_data/object_detection_samples.pkl", "rb")) valid_result_ds = ObjectDetectionResultsDataset.init_from_preds_and_samples(preds, samples, class_map=class_map) valid_result_ds.save("test_data/object_detection_result_ds.dat") # ## Cleanup shutil.rmtree("checkpoints/") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Step 1: Load the data # # Load data from the csv and the images # + #AWS import pickle import os from urllib.request import urlretrieve from zipfile import ZipFile def download(url, file): """ Download file from :param url: URL to file :param file: Local file path """ if not os.path.isfile(file): print('Downloading ' + file + '...') urlretrieve(url, file) print('Download 
Finished') #Unzip the downloaded file to get pickled data zip = ZipFile('data.zip') zip.extractall() # Downloading the training and test dataset. download('https://d17h27t6h515a5.cloudfront.net/topher/2016/December/584f6edd_data/data.zip', 'data.zip') # Wait until you see that all files have been downloaded. print('All files downloaded.') # + import csv import cv2 import numpy as np lines =[] #with open('../Behavioural Cloning Data/driving_log.csv') as csvfile: with open('data/driving_log.csv') as csvfile: #with open('../data/driving_log_2.csv') as csvfile: reader = csv.reader(csvfile) for line in reader: lines.append(line) images=[] measurements=[] count=0 for line in lines[1:]: source_path = line[0] filename = source_path.split('/')[-1] current_path = 'data/IMG/' + filename #current_path = source_path image = cv2.imread(current_path) #print(image.shape) images.append(image) measurement = float(line[3]) measurements.append(measurement) if (abs(measurement)>0.1): count=count+1 image_flipped = np.fliplr(image) measurement_flipped = -measurement images.append(image_flipped) measurements.append(measurement_flipped) images.append(image_flipped) measurements.append(measurement_flipped) if (abs(measurement)>0.3): count=count+1 image_flipped = np.fliplr(image) measurement_flipped = -measurement images.append(image_flipped) measurements.append(measurement_flipped) images.append(image) measurements.append(measurement) X_train = np.array(images) y_train = np.array(measurements) '''X_train = np.array(X_train) y_train = np.array(y_train)''' print('Hello') print (X_train.shape) print(count) print(len(lines)) print(len(images)) # - # ## Step 2: Design and Test a Model Architecture # # Model architecture # + # %config IPCompleter.greedy=True from keras.models import Sequential from keras.layers import Flatten, Dense, Lambda, Activation from keras.layers import Convolution2D, Cropping2D from keras.layers.pooling import MaxPooling2D model = Sequential() #normalization model.add(Lambda(lambda x:x/255.0-0.5,input_shape=(160,320,3))) model.add(Cropping2D(cropping=((75,25),(0,0)))) #Layer 1: Convolution model.add(Convolution2D(6, 5, 5,border_mode='valid')) model.add(MaxPooling2D((2, 2))) model.add(Activation('elu')) #Layer 2: Convolution model.add(Convolution2D(16, 5, 5,border_mode='valid')) model.add(MaxPooling2D((2, 2))) model.add(Activation('elu')) #Layer 3: Convolution model.add(Convolution2D(18, 5, 5,border_mode='valid')) model.add(MaxPooling2D((1, 2))) model.add(Activation('elu')) #Layer 4: Convolution model.add(Convolution2D(20, 3, 3,border_mode='valid')) model.add(MaxPooling2D((1, 2))) model.add(Activation('relu')) #Layer 5: Convolution model.add(Convolution2D(8, 3, 3,border_mode='valid')) #model.add(MaxPooling2D((2, 2))) model.add(Activation('elu')) model.add(Flatten()) model.add(Dense(400)) model.add(Activation('elu')) model.add(Dense(200)) model.add(Activation('elu')) model.add(Dense(80)) model.add(Activation('elu')) model.add(Dense(1)) model.summary() # - # ## Training the model # + model.compile(loss='mse', optimizer='adam') model.fit(X_train,y_train, validation_split=0.2, shuffle=True, nb_epoch=1) model.save('model.h5') print('Done') # - # ## Step 3: Training # # # # # ## Step 4: Validation # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="TNXfFd4fPeyZ" import pandas as pd from matplotlib import pyplot import numpy as np import 
matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from keras.optimizers import RMSprop # + colab={"base_uri": "https://localhost:8080/"} id="o74KA3LPPk0e" outputId="90bde47d-f721-4c70-d2e6-323ba8d91b55" from google.colab import drive drive.mount("/content/drive") # + colab={"base_uri": "https://localhost:8080/", "height": 247} id="wbDEGyfBPuSk" outputId="e7456683-3ac3-4cec-f8fc-7a1a30a1f968" data = pd.read_csv('/content/drive/My Drive/kaggle_compitions/Digit_recognization/train.csv') data.head() # + colab={"base_uri": "https://localhost:8080/"} id="fOjmDt4FPz1D" outputId="0f699b90-534a-4a62-b819-00882aea5800" data.shape # + colab={"base_uri": "https://localhost:8080/"} id="8amRu_XaP2GE" outputId="40efd25d-c22a-4735-bea2-ec3c7778f857" #checking unique numbers in train label column unique = data['label'].unique() print(unique) n_classes = len(unique) print("Number of classes: ",n_classes) # + colab={"base_uri": "https://localhost:8080/"} id="ehmHf2dDP3Wr" outputId="1add10f4-96cd-4919-9aeb-e29f31380031" x = data.drop(labels = ["label"], axis=1) print(x) # + colab={"base_uri": "https://localhost:8080/"} id="vKK-fJRSP4k7" outputId="ff41845d-d2c6-47cb-ef10-65383bb41e40" y = data['label'] print(y) # + id="hccqN-51P54b" x_train, x_test, y_train, y_test = train_test_split(x,y, test_size=0.20, random_state=42) # + colab={"base_uri": "https://localhost:8080/"} id="o9KhjsCaP9Fj" outputId="874067bd-2811-4a4a-e5b8-c55a82398b10" #Training Data and testing data print("X_train: ", x_train) print("X_test: ", y_test) print("Y_train: ", y_train) print("Y_test: ", y_test) # + id="0hXL3pOBP-07" x_train = x_train.values.reshape(-1,28,28,1) x_test = x_test.values.reshape(-1,28,28,1) # + colab={"base_uri": "https://localhost:8080/"} id="S9ZxxlmlQAf7" outputId="5c00b6b0-a26c-4de5-b0bc-2b36909033f6" print("X_train: ",x_train) print("X_train: ",x_test) # + colab={"base_uri": "https://localhost:8080/"} id="BwvjVOQjQBeS" outputId="2d1b6c7a-88f5-4234-9939-0c7183610982" #determine the shape of input images in_shape = x_train.shape[1:] print(in_shape) # + colab={"base_uri": "https://localhost:8080/", "height": 283} id="WHWVEbhxQDxK" outputId="fad7ed98-8db2-4677-8b07-71cbc424f34a" plt.imshow(x_train[0].reshape([28,28])) # + id="_qYEsNV9QE3r" # normalize pixel values x_train = x_train.astype('float32') / 255.0 x_test = x_test.astype('float32') / 255.0 # + id="XJv-Zn8sQJfp" from numpy import asarray, unique, argmax from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense, Conv2D, MaxPool2D, Flatten, Dropout # + id="6T9khUVGQJt-" model = Sequential() model.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', activation ='relu', input_shape = in_shape)) model.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', activation ='relu')) model.add(MaxPool2D(pool_size=(2,2))) model.add(Dropout(0.25)) model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same', activation ='relu')) model.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same', activation ='relu')) model.add(MaxPool2D(pool_size=(2,2), strides=(2,2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(256, activation = "relu")) model.add(Dropout(0.5)) model.add(Dense(10, activation = "softmax")) # + id="30u8YYQCRy4P" optimizer = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0) # + id="d5EL4TiMQJw7" # define loss and optimizer model.compile(optimizer= optimizer, loss = 'sparse_categorical_crossentropy', metrics=['accuracy']) # + colab={"base_uri": 
"https://localhost:8080/"} id="z2VGY9y0QJy9" outputId="8c142fe7-0e3d-4dcb-99fa-093fa5659772" # fit the model model.fit(x_train, y_train, validation_split=0.2, epochs=100, batch_size=128, verbose= 1) # + colab={"base_uri": "https://localhost:8080/"} id="azRuPskQQJ2A" outputId="851cac76-e521-4ba0-e2cf-822c3fccf558" #Evaluate the model loss, accuracy = model.evaluate(x_test, y_test, verbose=1) print('Accuracy: %.3f' %accuracy) print('Loss: ',loss) # + colab={"base_uri": "https://localhost:8080/"} id="WtlKCs4DTGDz" outputId="9f953ada-e8c5-41db-ac30-bce96fe2779e" #make a pridiction image = x_test[1] ypred = model.predict(asarray([image])) print('Prediction: Class =%d' %argmax(ypred)) # + colab={"base_uri": "https://localhost:8080/", "height": 283} id="FU_d2fHxTK-5" outputId="5b7bfb34-32c2-441f-a515-51e66e0584ac" plt.imshow(x_test[1].reshape([28,28])) # + colab={"base_uri": "https://localhost:8080/", "height": 247} id="MrqoyrTJTMSu" outputId="ecde3f90-a643-4cb6-c8f0-aa7db820d681" #Pridict the model on the trained Data test_data = pd.read_csv('/content/drive/My Drive/kaggle_compitions/Digit_recognization/test.csv') test_data.head() # + id="55zxTmQfTNuy" test = test_data/255.0 # + id="aPMKoEobTPdR" test_final_Data = test.values.reshape(-1,28,28,1) # + colab={"base_uri": "https://localhost:8080/"} id="S9EyEr3NTQir" outputId="611eb7e0-e5ee-41d4-cfc5-d3763394f7ab" label = model.predict(test_final_Data) print(label) # + colab={"base_uri": "https://localhost:8080/"} id="x4uz9D1VTRnu" outputId="0c7dd404-47a9-4bf5-91bf-7d8cd85a88cb" label.shape # + colab={"base_uri": "https://localhost:8080/"} id="yZgcVH9FTS2K" outputId="d0003b69-90d2-4cde-ee66-478fde24841c" print(label) # + id="rOPsEY0mTUBR" label = np.argmax(label, axis=1) # + colab={"base_uri": "https://localhost:8080/"} id="_ymfxR2tTbSx" outputId="baf28ae6-11a1-4a18-90ff-eb451c8337e8" print(label) # + colab={"base_uri": "https://localhost:8080/"} id="c0Or0RE5TdGR" outputId="e91cdf92-b553-4386-80dd-b19a2f37b03d" print(label.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 198} id="xyJQUpnqTeth" outputId="0424cb8f-3dfb-4304-845e-f4f5083960d9" sample_submission = pd.read_csv('/content/drive/My Drive/kaggle_compitions/Digit_recognization/sample_submission.csv') sample_submission.head() # + colab={"base_uri": "https://localhost:8080/", "height": 198} id="rByMHNaaTfpJ" outputId="fd233f16-3362-4857-f2e6-2147db865012" index = test_data.index.values + 1 data = {'ImageId' : index, "Label" : label} df = pd.DataFrame(data=data) df.head() # + id="c_Ecd8PFTg1p" submit = pd.DataFrame({'ImageId' : index, "Label" : label.astype(int).ravel()}) submit.to_csv("submission.csv",index = False) # + colab={"base_uri": "https://localhost:8080/", "height": 407} id="XxrcLzH0TiAC" outputId="647d6421-52ee-4162-fb0b-f52bbd14fb1f" submit # + id="SRK-JQSRTkMR" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Reading files and creating data structures with open("chapter1-1-1.txt", "r") as f: data = f.read() data data.split(":") data.split(":")[1] full_name = data.split(":")[1].strip() full_name full_name.split(",") # method 1 [part.strip() for part in full_name.split(",")] # method 2 full_name.split(", ") import json with open("chapter-1-1-2.json", "r") as f: data = f.read() data json.loads(data) type(json.loads(data)), type(json.loads(data)[0]) # # Using data science libraries to import your 
first dataset import csv with open('chapter-1-1-3.csv', newline='') as csvfile: reader = csv.DictReader(csvfile) for row in reader: print(row['first_name'], row['last_name']) with open('chapter-1-1-3.csv', newline='') as csvfile: reader = csv.DictReader(csvfile) for row in reader: print(row) import pandas as pd pd.read_csv("chapter-1-1-3.csv") pd.read_json("chapter-1-1-2.json") from sklearn.datasets import load_linnerud dataset = load_linnerud() dataset.keys() dataset dataset = load_linnerud(return_X_y=True) dataset from sklearn.datasets import fetch_california_housing dataset = fetch_california_housing() dataset["data"].shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="CsFr1XMC7kI3" colab={"base_uri": "https://localhost:8080/"} outputId="3877495f-926d-44b1-faf1-b3e78e68fd8c" # !pip install tensorflow-gpu # + id="44RojY_374xX" colab={"base_uri": "https://localhost:8080/"} outputId="636acf42-bf09-40ec-b165-c8295842187b" import tensorflow as tf print(tf.__version__) # + id="b8oRdI2k7-rp" colab={"base_uri": "https://localhost:8080/"} outputId="2a99d6a7-741c-470b-c506-c84511f79f6b" # !git clone https://github.com/tensorflow/models.git # + id="8WniYn0X8HxH" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="4fbbd61a-c414-43c7-956b-24bae6c57b4e" pwd # + id="Y4c-gSZJ8OST" colab={"base_uri": "https://localhost:8080/"} outputId="bc1ae0b9-9ef9-41b9-a1f2-5789f954a4e8" # cd /content/models/research # + id="fAuUzagy8TgB" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="d13aac98-4243-44ad-802c-5291c58203a9" pwd # + id="dDjdYARn8Z4k" # !protoc object_detection/protos/*.proto --python_out=. # + id="MyFZsYxH8ezO" colab={"base_uri": "https://localhost:8080/"} outputId="64d00d1e-423f-445c-e9eb-6d8f4acae7a5" # !git clone https://github.com/cocodataset/cocoapi.git # + id="gEubNVCI8neD" colab={"base_uri": "https://localhost:8080/"} outputId="cdda9cea-a933-4ab4-a6c5-5c44a4d14f14" # cd cocoapi/PythonAPI # + id="QU7d9UKj8tva" colab={"base_uri": "https://localhost:8080/"} outputId="eb113e1d-dc6f-4bd0-c21e-e322729dc6ad" # !make # + id="XZ5lJRFI825-" # cp -r pycocotools /content/models/research # + id="zvNRhSY69BQf" colab={"base_uri": "https://localhost:8080/"} outputId="08940997-427d-4c47-a925-8fe4f1eb806a" # cd .. # + id="0tCu_NwJ9D7m" colab={"base_uri": "https://localhost:8080/"} outputId="0640cfea-97bf-44c3-9536-0949b39e4f9c" # cd .. # + id="4Qbr9C1x9IzO" # cp object_detection/packages/tf2/setup.py . # + id="hLiyWOT69M3W" colab={"base_uri": "https://localhost:8080/"} outputId="82365237-1f73-46b5-a7ca-b4b72b603802" # !python -m pip install . 
# + id="T0oiGsDM-ac2" colab={"base_uri": "https://localhost:8080/"} outputId="27314abe-0328-489d-ea67-8fb2fff591da" # !python object_detection/builders/model_builder_tf2_test.py # + id="Xpt6gU8n-mx7" colab={"base_uri": "https://localhost:8080/"} outputId="835c2210-4aea-4d73-aa8d-4e6c4b9af375" # cd /content/training_demo/pre_trained_models # + id="6QKQqVMaYE-Q" colab={"base_uri": "https://localhost:8080/"} outputId="83c552cf-3907-490a-8253-1631308e73ce" # !wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz # + id="mbpllL0cYT-W" colab={"base_uri": "https://localhost:8080/"} outputId="2fde3ff5-f952-45b0-94c4-fa7843fc24a2" # !tar -xvf ssd_resnet101_v1_fpn_640x640_coco17_tpu-8.tar.gz # + id="mP2dy1P-Y0FC" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="f1c162f5-9e8c-4a41-b870-913e4ce8bec3" pwd # + id="1n0rks7OQOPM" colab={"base_uri": "https://localhost:8080/"} outputId="c8b6ec7b-c426-41cc-9a42-f6cb8c1ebf7a" # cd /content/training_demo # + id="HR2Cu6svT2Ei" colab={"base_uri": "https://localhost:8080/"} outputId="50d50229-d264-4f40-eb3d-320b37168993" # Create train data: # !python generate_tfrecord.py -x /content/training_demo/images/train -l /content/training_demo/annotations/label_map.pbtxt -o /content/training_demo/annotations/train.record # Create test data: # !python generate_tfrecord.py -x /content/training_demo/images/test -l /content/training_demo/annotations/label_map.pbtxt -o /content/training_demo/annotations/test.record # + id="vAVf4FlGVV8Z" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="f3307591-7488-4bbb-f515-922f5c6c1377" pwd # + id="6ZAOfy_TVV36" colab={"base_uri": "https://localhost:8080/"} outputId="a61d07b9-4328-4473-f204-1c26247c6939" # ls # + id="HrynXN5RVVx8" colab={"base_uri": "https://localhost:8080/"} outputId="760d9ef1-567e-435a-a8ed-91d5990e6e18" # !python model_main_tf2.py --model_dir=/content/training_demo/models/my_ssd_resnet101_v1_fpn --pipeline_config_path=/content/training_demo/models/my_ssd_resnet101_v1_fpn/pipeline.config # + id="-bmJNSvUVVgz" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="7687dd31-234f-4bc6-de0b-e70d2c7c656e" pwd # + id="C66Q9v7IVVZy" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # + onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))] print(onlyfiles) # + def reshape_img(fl): fl_ = np.zeros((256,256,fl.shape[0])) for c in range(fl.shape[0]): fl_[:,:,c] = fl[c,:,:] return fl_ def clean_br(img): img = img - img.min() img = img/img.max() return img def clean_fl(img): img = reshape_img(img) img_ = np.zeros_like(img) print(img.shape) print(img_.shape) for c in range(img.shape[-1]): img_[:,:,c] = img[:,:,c] - img[:,:,c].min() img_[:,:,c] = img_[:,:,c]/img_[:,:,c].max() return img_ # - for i in range(len(onlyfiles)): data = np.load(mypath+onlyfiles[i]) print(data[0].shape) tf.test.is_gpu_available() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="o4JZ84moBKMr" # # Navier-Stokes Forward Simulation # # Now let's target a somewhat more complex 
example: a fluid simulation based on the Navier-Stokes equations. This is still very simple with ΦFlow (phiflow), as differentiable operators for all steps exist there. The Navier-Stokes equations (in their incompressible form) introduce an additional pressure field $p$, and a constraint for conservation of mass, as introduced in equation {eq}`model-boussinesq2d`. We're also moving a marker field, denoted by $d$ here, with the flow. It indicates regions of higher temperature, and exerts a force via a buouyancy factor $\xi$: # # $$\begin{aligned} # \frac{\partial \mathbf{u}}{\partial{t}} + \mathbf{u} \cdot \nabla \mathbf{u} &= - \frac{1}{\rho} \nabla p + \nu \nabla\cdot \nabla \mathbf{u} + (0,1)^T \xi d # \quad \text{s.t.} \quad \nabla \cdot \mathbf{u} = 0, # \\ # \frac{\partial d}{\partial{t}} + \mathbf{u} \cdot \nabla d &= 0 # \end{aligned}$$ # # # Here, we're aiming for an incompressible flow (i.e., $\rho = \text{const}$), and use a simple buoyancy model (the Boussinesq approximation) via the term $(0,1)^T \xi d$. This models changes in density without explicitly calculating $\rho$, and we assume a gravity force that acts along the y direction via the vector $(0,1)^T$. # We'll solve this PDE on a closed domain with Dirichlet boundary conditions $\mathbf{u}=0$ for the velocity, and Neumann boundaries $\frac{\partial p}{\partial x}=0$ for pressure, on a domain $\Omega$ with a physical size of $100 \times 80$ units. # [[run in colab]](https://colab.research.google.com/github/tum-pbs/pbdl-book/blob/main/overview-ns-forw.ipynb) # # # - # ## Implementation # # As in the previous section, the first command with a "!" prefix installs the [phiflow python package from GitHub](https://github.com/tum-pbs/PhiFlow) via `pip` in your python environment. (Skip or modify this command if necessary.) # + colab={"base_uri": "https://localhost:8080/"} id="da1uZcDXdVcF" outputId="1082dc87-796c-4b57-e72e-5790fc1444c9" # !pip install --upgrade --quiet phiflow # #!pip install --upgrade --quiet git+https://github.com/tum-pbs/PhiFlow@develop from phi.flow import * # The Dash GUI is not supported on Google colab, ignore the warning import pylab # + [markdown] id="BVV1IKVqDfLl" # ## Setting up the simulation # # The following code sets up a few constants, which are denoted by upper case names. We'll use $40 \times 32$ cells to discretize our domain, introduce a slight viscosity via $\nu$, and define the time step to be $\Delta t=1.5$. # # We're creating a first `CenteredGrid` here, which is initialized by a `Sphere` geometry object. This will represent the inflow region `INFLOW` where hot smoke is generated. # + id="WrA3IXDxv31P" DT = 1.5 NU = 0.01 INFLOW = CenteredGrid(Sphere(center=(30,15), radius=10), extrapolation.BOUNDARY, x=32, y=40, bounds=Box[0:80, 0:100]) * 0.2 # + [markdown] id="ExA0Pi2sFVka" # The inflow will be used to inject smoke into a second centered grid `smoke` that represents the marker field $d$ from above. Note that we've defined a `Box` of size $100 \times 80$ above. This is the physical scale in terms of spatial units in our simulation, i.e., a velocity of magnitude $1$ will move the smoke density by 1 unit per 1 time unit, which may be larger or smaller than a cell in the discretized grid, depending on the settings for `x,y`. You could parametrize your simulation grid to directly resemble real-world units, or keep appropriate conversion factors in mind. 
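#
# As a quick check of the conversion factor mentioned above, the cell size follows directly
# from the domain bounds and the grid resolution. This is a small sketch that only reuses
# the numbers already chosen above (an $80 \times 100$ box discretized with $32 \times 40$ cells):

# +
DOMAIN_X, DOMAIN_Y = 80, 100   # physical extent of the Box used for the grids
RES_X, RES_Y = 32, 40          # number of cells along x and y

dx, dy = DOMAIN_X / RES_X, DOMAIN_Y / RES_Y
print(f"Cell size: {dx} x {dy} units")  # 2.5 x 2.5, so a unit velocity moves material by less than one cell per unit time
# -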
# # The inflow sphere above is already using the "world" coordinates: it is located at $x=30$ along the first axis, and $y=15$ (within the $100 \times 80$ domain box). # # Next, we create grids for the quantities we want to simulate. For this example, we require a velocity field and a smoke density field. # - smoke = CenteredGrid(0, extrapolation.BOUNDARY, x=32, y=40, bounds=Box[0:80, 0:100]) # sampled at cell centers velocity = StaggeredGrid(0, extrapolation.ZERO, x=32, y=40, bounds=Box[0:80, 0:100]) # sampled in staggered form at face centers # We sample the smoke field at the cell centers and the velocity in [staggered form](https://tum-pbs.github.io/PhiFlow/Staggered_Grids.html). The staggered grid internally contains 2 centered grids with different dimensions, and can be converted into centered grids (or simply numpy arrays) via the `unstack` function, as explained in the link above. # # Next we define the update step of the simulation, which calls the necessary functions to advance the state of our fluid system by `dt`. The next cell computes one such step, and plots the marker density after one simulation frame. # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="WmGZdOwswOva" outputId="3ae4d68d-b586-4bbe-eca9-a223d7720949" def step(velocity, smoke, pressure, dt=1.0, buoyancy_factor=1.0): smoke = advect.semi_lagrangian(smoke, velocity, dt) + INFLOW buoyancy_force = (smoke * (0, buoyancy_factor)).at(velocity) # resamples smoke to velocity sample points velocity = advect.semi_lagrangian(velocity, velocity, dt) + dt * buoyancy_force velocity = diffuse.explicit(velocity, NU, dt) velocity, pressure = fluid.make_incompressible(velocity) return velocity, smoke, pressure velocity, smoke, pressure = step(velocity, smoke, None, dt=DT) print("Max. velocity and mean marker density: " + format( [ math.max(velocity.values) , math.mean(smoke.values) ] )) pylab.imshow(np.asarray(smoke.values.numpy('y,x')), origin='lower', cmap='magma') # - # A lot has happened in this `step()` call: we've advected the smoke field, added an upwards force via a Boussinesq model, advected the velocity field, and finally made it divergence free via a pressure solve. # # The Boussinesq model uses a multiplication by a tuple `(0, buoyancy_factor)` to turn the smoke field into a staggered, 2 component force field, sampled at the locations of the velocity components via the `at()` function. This function makes sure the individual force components are correctly interpolated for the velocity components of the staggered velocity. Note that this also directly ensure the boundary conditions of the original grid are kept. It internally also does `StaggeredGrid(..., extrapolation.ZERO,...)` for the resulting force grid. # # The pressure projection step in `make_incompressible` is typically the computationally most expensive step in the sequence above. It solves a Poisson equation for the boundary conditions of the domain, and updates the velocity field with the gradient of the computed pressure. # # Just for testing, we've also printed the mean value of the velocities, and the max density after the update. As you can see in the resulting image, we have a first round region of smoke, with a slight upwards motion (which does not show here yet). # # ## Datatypes and dimensions # # The variables we created for the fields of the simulation here are instances of the class `Grid`. # Like tensors, grids also have the `shape` attribute which lists all batch, spatial and channel dimensions. 
# [Shapes in phiflow](https://tum-pbs.github.io/PhiFlow/Math.html#shapes) store not only the sizes of the dimensions but also their names and types. print(f"Smoke: {smoke.shape}") print(f"Velocity: {velocity.shape}") print(f"Inflow: {INFLOW.shape}, spatial only: {INFLOW.shape.spatial}") # Note that the phiflow output here indicates the type of a dimension, e.g., $^S$ for a spatial, and $^V$ for a vector dimension. Later on for learning, we'll also introduce batch dimensions. # # The actual content of a shape object can be obtained via `.sizes`, or alternatively we can query the size of a specific dimension `dim` via `.get_size('dim')`. Here are two examples: print(f"Shape content: {velocity.shape.sizes}") print(f"Vector dimension: {velocity.shape.get_size('vector')}") # The grid values can be accessed using the `values` property. This is an important difference to a phiflow tensor object, which does not have `values`, as illustrated in the code example below. # + print("Statistics of the different simulation grids:") print(smoke.values) print(velocity.values) # in contrast to a simple tensor: test_tensor = math.tensor(numpy.zeros([2, 5, 3]), spatial('x,y'), channel('vector')) print("Reordered test tensor shape: " + format(test_tensor.numpy('vector,y,x').shape) ) #print(test_tensor.values.numpy('y,x')) # error! tensors don't return their content via ".values" # - # Grids have many more properties which are documented [here](https://tum-pbs.github.io/PhiFlow/phi/field/#phi.field.Grid). # Also note that the staggered grid has a [non-uniform shape](https://tum-pbs.github.io/PhiFlow/Math.html#non-uniform-tensors) because the number of faces is not equal to the number of cells (in this example the x component has $31 \times 40$ cells, while y has $32 \times 39$). The `INFLOW` grid naturally has the same dimensions as the `smoke` grid. # # ## Time evolution # # With this setup, we can easily advance the simulation forward in time a bit more by repeatedly calling the `step` function. # + colab={"base_uri": "https://localhost:8080/"} id="0hZk5HX3w4Or" outputId="f7811af7-4b58-4ff6-a8b6-6e7bedefaa6e" for time_step in range(10): velocity, smoke, pressure = step(velocity, smoke, pressure, dt=DT) print('Computed frame {}, max velocity {}'.format(time_step , np.asarray(math.max(velocity.values)) )) # + [markdown] id="GMKKWQBLHIwP" # Now the hot plume is starting to rise: # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="Mfl80CjZxZcL" outputId="92f3a9ba-d403-4799-a543-132ee8ed234c" pylab.imshow(smoke.values.numpy('y,x'), origin='lower', cmap='magma') # + [markdown] id="wnbQJvA-HPSL" # Let's compute and show a few more steps of the simulation. Because of the inflow being located off-center to the left (with x position 30), the plume will curve towards the right when it hits the top wall of the domain. 
# + colab={"base_uri": "https://localhost:8080/", "height": 489} id="tkhCOzc0ITsj" outputId="f6366c12-1eb5-4ff6-e0d7-94b806bfd8e4" steps = [[ smoke.values, velocity.values.vector[0], velocity.values.vector[1] ]] for time_step in range(20): if time_step<3 or time_step%10==0: print('Computing time step %d' % time_step) velocity, smoke, pressure = step(velocity, smoke, pressure, dt=DT) if time_step%5==0: steps.append( [smoke.values, velocity.values.vector[0], velocity.values.vector[1]] ) fig, axes = pylab.subplots(1, len(steps), figsize=(16, 5)) for i in range(len(steps)): axes[i].imshow(steps[i][0].numpy('y,x'), origin='lower', cmap='magma') axes[i].set_title(f"d at t={i*5}") # - # We can also take a look at the velocities. The `steps` list above already stores `vector[0]` and `vector[1]` components of the velocities as numpy arrays, which we can show next. # + fig, axes = pylab.subplots(1, len(steps), figsize=(16, 5)) for i in range(len(steps)): axes[i].imshow(steps[i][1].numpy('y,x'), origin='lower', cmap='magma') axes[i].set_title(f"u_x at t={i*5}") fig, axes = pylab.subplots(1, len(steps), figsize=(16, 5)) for i in range(len(steps)): axes[i].imshow(steps[i][2].numpy('y,x'), origin='lower', cmap='magma') axes[i].set_title(f"u_y at t={i*5}") # + [markdown] id="ooqVxCPM8PXl" # It looks simple here, but this simulation setup is a powerful tool. The simulation could easily be extended to more complex cases or 3D, and it is already fully compatible with backpropagation pipelines of deep learning frameworks. # # In the next chapters we'll show how to use these simulations for training NNs, and how to steer and modify them via trained NNs. This will illustrate how much we can improve the training process by having a solver in the loop, especially when the solver is _differentiable_. Before moving to these more complex training processes, we will cover a simpler supervised approach in the next chapter. This is very fundamental: even when aiming for advanced physics-based learning setups, a working supervised training is always the first step. # - # ## Next steps # # You could create a variety of nice fluid simulations based on this setup. E.g., try changing the spatial resolution, the buoyancy factors, and the overall length of the simulation run. 
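#
# As a concrete starting point for these experiments, the following sketch re-runs a short
# simulation with a stronger buoyancy factor, reusing only the objects defined above
# (`step`, `DT`, and the grid setup). Comparing the resulting marker field with the plots
# above shows how much faster the plume rises.

# +
smoke_v2    = CenteredGrid(0, extrapolation.BOUNDARY, x=32, y=40, bounds=Box[0:80, 0:100])
velocity_v2 = StaggeredGrid(0, extrapolation.ZERO,    x=32, y=40, bounds=Box[0:80, 0:100])
pressure_v2 = None

for _ in range(20):
    velocity_v2, smoke_v2, pressure_v2 = step(velocity_v2, smoke_v2, pressure_v2,
                                              dt=DT, buoyancy_factor=2.0)

pylab.imshow(smoke_v2.values.numpy('y,x'), origin='lower', cmap='magma')
# -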
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="SEZMaMLfrG9M" import numpy as np import matplotlib.pyplot as plt # + [markdown] id="gApgZsEoiSU3" # Regression # + id="71rIK6EniPOA" outputId="385921a1-fae4-425f-b378-ea145e914f2a" colab={"base_uri": "https://localhost:8080/", "height": 50} X = np.array([[1,2,3],[3,4,5],[5,6,7],[7,8,9],[9,8,7]]) Y = np.array([1,2,3,4,5]) print(X.shape) print(Y.shape) # + id="OGR8Z5t1n0bS" outputId="925fd64d-96ad-498d-c6aa-be35b1f179f6" colab={"base_uri": "https://localhost:8080/", "height": 265} fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(X[:,0], X[:,1], X[:,2], c='r', marker='o') # + [markdown] id="VViIopAmsCnY" # Hyperparameters & Parameters # + id="EbxVDzvLsBzz" outputId="4c8c4912-9893-434a-9bb6-c24b1520e70c" colab={"base_uri": "https://localhost:8080/", "height": 34} no_of_inputs = X.shape[1] epochs = 50 learning_rate = .01 weights = np.random.rand(no_of_inputs + 1) print(weights.shape) # + [markdown] id="zB23JptVszqe" # Activation Function # + id="4Cermj4Fst_M" def relu_activation(sum): if sum > 0: return sum else: return 0 # + [markdown] id="62CgnF7SjeKD" # Step Activation # # + [markdown] id="Lp8_pfqew8Y9" # The Preceptron Class # + id="-i15fphnw-xO" class Perceptron(object): def __init__(self, no_of_inputs, activation): self.learning_rate = learning_rate self.weights = np.zeros(no_of_inputs + 1) self.activation = activation def predict(self, inputs): summation = np.dot(inputs, self.weights[1:]) + self.weights[0] return self.activation(summation) def train(self, training_inputs, training_labels, epochs=100, learning_rate=0.01): history = [] for _ in range(epochs): for inputs, label in zip(training_inputs, training_labels): prediction = self.predict(inputs) loss = (label - prediction) loss2 = loss*loss history.append(loss2) print(f"loss = {loss2}") self.weights[1:] += self.learning_rate * loss * inputs self.weights[0] += self.learning_rate * loss return history # + [markdown] id="QBhQ6O12u0A6" # Regression Perceptron & Training # # + id="IHY5n0MxxbDM" outputId="f1fc2219-598e-4503-f918-6a0d8978ccf9" colab={"base_uri": "https://localhost:8080/", "height": 1000} perceptron = Perceptron(no_of_inputs, relu_activation) history = perceptron.train(X,Y, epochs=epochs) # + [markdown] id="WIyIMYX_jGY3" # Regression Prediction # + id="DkfR58uEyfTJ" outputId="c8cfe0a5-bf4e-4316-9585-36db5d9f987b" colab={"base_uri": "https://localhost:8080/", "height": 34} perceptron.predict(np.array([3,3,3])) # + id="dy9cIYdvMBHN" outputId="fcd8e492-bd29-43f6-c863-4f10234f0b77" colab={"base_uri": "https://localhost:8080/", "height": 282} plt.plot(history) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + deletable=true editable=true import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl import pandas as pd import re import scipy as sp import scipy.stats as stats from scipy.interpolate import interp1d import stats13tools.stats13tools as st # %matplotlib inline # + deletable=true editable=true colors = {"orange": "#f0ad4e", "red": "#d9534f", "blue": "#5bc0de", "green": "#5cb85c", "gray": "#636c72", "lightgray": "#d2d2d2" } # - # # One 
sample mean (t distribution) # + fig = plt.figure(figsize=(6,4)) ax1 = fig.add_axes([0.1, 0.15, 0.82, 0.74]) ax2 = ax1.twinx() #normal distribution xnorm = np.linspace(stats.norm.ppf(0.00001), stats.norm.ppf(0.999999), 100) ynorm = stats.norm.pdf(xnorm) #t distribution xt = np.linspace(-10, 10, 1000) yt = stats.t(2, 0).pdf(xt) ax1.fill_between(xnorm, ynorm, color=colors["orange"], alpha=0.7, zorder=100) ax2.fill_between(xt, yt, color=colors["blue"], alpha=0.7, zorder=100) #ax2.plot(xt, stats.t(1E10, 0).pdf(xt)) for spine in ["bottom"]: ax1.spines[spine].set_linewidth(1) ax1.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right", "left"]: ax1.spines[spine].set_visible(False) for ax in [ax1]: ax.set_yticks([]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_xlabel("x", size=14, color=colors["lightgray"]) ax.set_xlim(-8, 8) ax.set_ylim(0) ax.text(1, 0.35, "Normal distribution (z)", size=14, color=colors["orange"]) ax.text(3, 0.05, "t distribution", size=14, color=colors["blue"]) for ax in [ax2]: ax.set_ylim(ax1.get_ylim()) ax.axis("off") plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/distribution-t-vs-z.svg", transparent=True) # - # # UCLA salaries (2014) # ## Full population data_original = pd.read_csv("data-src/salaries-ucla2014.csv") #data = data_original.loc[(data_original.NAME != "***** , *****") & (data_original.BASE >= 12000), "GROSS"] data = data_original.loc[(data_original.NAME != "***** , *****") & (data_original.BASE >= 12000)] #in 2014, there was no base salary < 12000 print("Min salary:\n {} => {}".format(data[data.GROSS==data.GROSS.min()].NAME.values[0], data[data.GROSS==data.GROSS.min()].GROSS.values[0])) print("Max salary:\n {} => {}".format(data[data.GROSS==data.GROSS.max()].NAME.values[0], data[data.GROSS==data.GROSS.max()].GROSS.values[0])) # + fig = plt.figure(figsize=(10,4)) ax1 = fig.add_axes([0.1, 0.15, 0.85, 0.8]) ax2 = ax1.twinx() ax3 = fig.add_axes([0.5, 0.35, 0.45, 0.6]) ax1.hist(data.GROSS, bins="auto", color=colors["blue"]); ax3.hist(data.loc[data.GROSS<240000, "GROSS"], bins="auto", color=colors["blue"]); for ax in [ax1, ax3]: for spine in ["bottom", "left"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right"]: ax.spines[spine].set_visible(False) for ax in [ax1, ax3]: ax.set_xlim(0) ax.set_ylim(0) ax.set_xlabel("Salary (UCLA - 2014)", size=16, color=colors["lightgray"]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_ylabel("Frequency", size=16, color=colors["lightgray"]) for ax in [ax2]: ax.set_ylim(ax1.get_ylim()) ax.axis("off") ax.axvline(data.GROSS.mean(), color=colors["orange"]) ax.axvline(data.GROSS.median(), color=colors["red"]) ax.text(ax.get_xlim()[1]*0.3, ax.get_ylim()[1]*0.95, "Mean: $\mu=\$${:.0f}".format(data.GROSS.mean()), size=14, color=colors["orange"], ha="right", va="top") ax.text(ax.get_xlim()[1]*0.3, ax.get_ylim()[1]*0.87, "Median: M=${:.0f}".format(data.GROSS.median()), size=14, color=colors["red"], ha="right", va="top") for ax in [ax3]: ax.set_xlim(0, 240000) ax.axvline(data.GROSS.mean(), color=colors["orange"], lw=2) ax.axvline(data.GROSS.median(), color=colors["red"], lw=2) #plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-all.svg", transparent=True) # + fig = plt.figure(figsize=(10,4)) ax1 = fig.add_axes([0.1, 
0.15, 0.85, 0.45]) ax2 = fig.add_axes([0.1, 0.63, 0.85, 0.3]) for ax in [ax1, ax2]: ax.hist(data.GROSS, bins="auto", color=colors["blue"]); for spine in ["bottom", "left"]: ax1.spines[spine].set_linewidth(1) ax1.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right"]: ax1.spines[spine].set_visible(False) for ax in [ax1]: ax.set_xlim(0) ax.set_ylim(0, 50) ax.set_xlabel("Salary (UCLA - 2014)", size=16, color=colors["lightgray"]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) #ax.text(-300000, 50, "Frequency", size=16, color=colors["lightgray"], rotation=90) ax.set_ylabel("Frequency", size=16, color=colors["lightgray"]) for big_salary in data.GROSS.sort_values().values[-3:]: ax.text(big_salary, 5, "{}\n{}".format(data[data.GROSS==big_salary].NAME.values[0].split(",")[0], data[data.GROSS==big_salary].NAME.values[0].split(",")[1]), size=10, color=colors["red"], ha="center") for ax in [ax2]: ax.set_xlim(0) ax.set_xticks([]) ax.set_ylim(50) for spine in ["left"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right", "bottom"]: ax.spines[spine].set_visible(False) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) #plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-zoom.svg", transparent=True) # - # + fig = plt.figure(figsize=(10,4)) ax1 = fig.add_axes([0.1, 0.15, 0.85, 0.80]) ax2 = fig.add_axes([0.5, 0.45, 0.45, 0.5]) for ax in [ax1]: ax.hist(data.GROSS, bins="auto", color=colors["blue"]); ax.axvline(data.GROSS.mean(), color=colors["orange"]) ax.axvline(data.GROSS.median(), color=colors["red"]) for spine in ["bottom", "left"]: ax1.spines[spine].set_linewidth(1) ax1.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right"]: ax1.spines[spine].set_visible(False) for ax in [ax1]: ax.set_xlim(0) ax.set_ylim(0, 50) ax.set_xlabel("Salary (UCLA - 2014)", size=16, color=colors["lightgray"]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) #ax.text(-300000, 50, "Frequency", size=16, color=colors["lightgray"], rotation=90) ax.set_ylabel("Frequency", size=16, color=colors["lightgray"]) for big_salary in data.GROSS.sort_values().values[-3:]: ax.text(big_salary, 2, "{}\n{}".format(data[data.GROSS==big_salary].NAME.values[0].split(",")[0], data[data.GROSS==big_salary].NAME.values[0].split(",")[1]), size=10, color=colors["red"], ha="center") ax.text(ax.get_xlim()[1]*0.33, ax.get_ylim()[1]*1, "Mean: $\mu=\$${:.0f}".format(data.GROSS.mean()), size=14, color=colors["orange"], ha="right", va="top") ax.text(ax.get_xlim()[1]*0.33, ax.get_ylim()[1]*0.9, "Median: M=${:.0f}".format(data.GROSS.median()), size=14, color=colors["red"], ha="right", va="top") ax2.hist(data.loc[data.GROSS<240000, "GROSS"], bins="auto", color=colors["blue"]); for ax in [ax2]: for spine in ["bottom", "left"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right"]: ax.spines[spine].set_visible(False) ax.set_xlim(0) ax.set_ylim(0) ax.set_xlabel("Salary (UCLA - 2014)", size=16, color=colors["lightgray"]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_ylabel("Frequency", size=16, color=colors["lightgray"]) ax.set_xlim(0, 240000) 
ax.axvline(data.GROSS.mean(), color=colors["orange"], lw=2) ax.axvline(data.GROSS.median(), color=colors["red"], lw=2) plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-zoom.svg", transparent=True) # - # ## Sample # + n_sample = 100 ucla_sample_idx = np.random.choice(data.index.values, replace=False, size=n_sample) ucla_sample = data.ix[ucla_sample_idx] # - print("Sample mean: {}".format(ucla_sample.GROSS.mean())) print("Sample std: {}".format(ucla_sample.GROSS.std())) #savesample data to csv to be generated as table via d3 pd.DataFrame(np.concatenate([list(ucla_sample.GROSS.values), ["-" for i in range(2)]]).reshape((17,6))).to_csv("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/d3-anim/js-graphs/js-graphs-data-src/salaries-ucla-sample.csv", index=False, header=False) # + fig = plt.figure(figsize=(6,4)) ax1 = fig.add_axes([0.11, 0.15, 0.85, 0.8]) ax2 = ax1.twinx() ax1.hist(ucla_sample.GROSS, bins="auto", color=colors["blue"]); for spine in ["bottom", "left"]: ax1.spines[spine].set_linewidth(1) ax1.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right"]: ax1.spines[spine].set_visible(False) for ax in [ax1]: ax.set_xlim(0) ax.set_ylim(0) ax.set_xticks(np.arange(0, ucla_sample.GROSS.max()+1, 50000)) ax.set_xlabel("Salary (Sample from UCLA - 2014)", size=16, color=colors["lightgray"]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.ticklabel_format(style='sci', axis='x', scilimits=(0,5)) ax.set_ylabel("Frequency", size=16, color=colors["lightgray"]) ax.text(ax.get_xlim()[1]*0.95, ax.get_ylim()[1]*0.95, "n={:.0f}".format(len(ucla_sample)), size=16, color=colors["blue"], ha="right", va="top") for ax in [ax2]: ax.set_ylim(ax1.get_ylim()) ax.axis("off") ax.axvline(ucla_sample.GROSS.mean(), color=colors["orange"]) ax.axvline(ucla_sample.GROSS.median(), color=colors["red"]) ax.text(ax.get_xlim()[1]*0.95, ax.get_ylim()[1]*0.7, r"Mean: $\bar{{x}}=$\${:.0f}".format(ucla_sample.GROSS.mean()), size=16, color=colors["orange"], ha="right", va="top") ax.text(ax.get_xlim()[1]*0.95, ax.get_ylim()[1]*0.61, "Median: m=${:.0f}".format(ucla_sample.GROSS.median()), size=16, color=colors["red"], ha="right", va="top") plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-sample.svg", transparent=True) # - # ### National US data (2014) # https://www.census.gov/data/tables/time-series/demo/income-poverty/cps-pinc/pinc-01.2014.html census2014 = {"mean": 62931, "median": 46480} # ### bootstrap ci # + n_simul = 10000 res_ucla_mean = np.zeros(n_simul) res_ucla_median = np.zeros(n_simul) for i in range(n_simul): res_ucla_mean[i]=np.random.choice(ucla_sample.GROSS.values, size=len(ucla_sample)).mean() res_ucla_median[i]=np.median(np.random.choice(ucla_sample.GROSS.values, size=len(ucla_sample))) ucla_mean_ci = [np.percentile(res_ucla_mean, 2.5), np.percentile(res_ucla_mean, 97.5)] ucla_median_ci = [np.percentile(res_ucla_median, 2.5), np.percentile(res_ucla_median, 97.5)] # - print("Mean 95% ci: [{:.2f}, {:.2f}]".format(ucla_mean_ci[0], ucla_mean_ci[1])) print("Median 95% ci: [{:.2f}, {:.2f}]".format(ucla_median_ci[0], ucla_median_ci[1])) # ### t statistic # ucla_sample.GROSS.std() ucla_sample_se = ucla_sample.GROSS.std()/np.sqrt(len(ucla_sample)) t_ucla = (ucla_sample.GROSS.mean()-census2014["mean"])/ucla_sample_se t_ucla # + #t distribution tdist_ucla = stats.t(df=len(ucla_sample)-1, loc=0, scale=1) xt0_ucla = 
np.linspace(-4.5, 4.5, 1000) yt0_ucla = tdist_ucla.pdf(xt0_ucla) fig = plt.figure(figsize=(6,4)) ax1 = fig.add_axes([0.1, 0.15, 0.82, 0.75]) for ax in [ax1]: ax.plot(xt0_ucla, yt0_ucla, color=colors["blue"], lw=1.5) ax.fill_between(xt0_ucla[xt0_ucla<-t_ucla], yt0_ucla[xt0_ucla<-t_ucla], color=colors["orange"], alpha=0.7) ax.fill_between(xt0_ucla[xt0_ucla>t_ucla], yt0_ucla[xt0_ucla>t_ucla], color=colors["orange"], alpha=0.7) ax.axvline(t_ucla, ymax=0.33, color=colors["orange"], ls="--") ax.axvline(-t_ucla, ymax=0.33, color=colors["orange"], ls="--") ax.text(t_ucla, ax.get_ylim()[1]*0.35, "{:.2f}".format(t_ucla), color=colors["orange"], size=13, ha="center") ax.text(-t_ucla, ax.get_ylim()[1]*0.35, "{:.2f}".format(-t_ucla), color=colors["orange"], size=13, ha="center") for spine in ["bottom"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right", "left"]: ax.spines[spine].set_visible(False) ax.set_yticks([]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_xlabel(r'Standardized $\bar{x}$ (t)', size=18, color=colors["lightgray"]) ax.set_xlim(xt0_ucla.min(), xt0_ucla.max()) ax.set_ylim(0) ax.text(ax.get_xlim()[1]*1, ax.get_ylim()[1]*0.7, "Area under the curve\n2-tailed p-value\n{:.4f}".format(stats.t.sf(np.abs(t_ucla), len(ucla_sample)-1)*2), ha="right", color=colors["orange"], size=14) plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-ttest.svg", transparent=True) # - print("p-value (two-tailed):", stats.t.sf(np.abs(t_ucla), len(ucla_sample)-1)*2) # ### resampling vs t distribution # + #t distribution tdist_ucla = stats.t(df=len(ucla_sample)-1, loc=ucla_sample.GROSS.mean(), scale=(ucla_sample.GROSS.std())/np.sqrt(len(ucla_sample))) xt_ucla = np.linspace(res_ucla_mean.min()-7000, res_ucla_mean.max()+5000, 100000) yt_ucla = tdist_ucla.pdf(xt_ucla) fig = plt.figure(figsize=(6,4)) ax1 = fig.add_axes([0.1, 0.15, 0.82, 0.75]) ax2 = ax1.twinx() ax3 = ax1.twinx() ax4 = ax1.twinx() ax5 = ax1.twinx() ax6 = ax1.twinx() ax7 = ax1.twinx() ax8 = ax1.twinx() for ax in [ax1]: ax.hist(res_ucla_mean, bins="auto", color=colors["blue"]) for spine in ["bottom"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right", "left"]: ax.spines[spine].set_visible(False) ax.set_yticks([]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_xlabel(r'$\bar{x}$', size=18, color=colors["lightgray"]) ax.set_xlim(xt_ucla.min(), xt_ucla.max()) ax.ticklabel_format(style='sci', axis='x', scilimits=(0,5)) ax.set_ylim(0) for ax in [ax2]: ax.set_ylim(ax1.get_ylim()) ax.axvline(ucla_sample.GROSS.mean(), color=colors["red"], ls="--") ax.text(ucla_sample.GROSS.mean(), ax.get_ylim()[1]*1.01, "Original sample\n" r"$\bar{{x}}=$" "{:.0f}".format(ucla_sample.GROSS.mean()), color=colors["red"], size=13, ha="center") for ax in [ax3]: ax.axvline(ucla_mean_ci[0], ymax=0.75, color=colors["lightgray"], ls="-") ax.axvline(ucla_mean_ci[1], ymax=0.75, color=colors["lightgray"], ls="-") ax.text(ucla_mean_ci[0], ax.get_ylim()[1]*0.8, "2.5$^{{th}}$ %\n({:.0f})".format(ucla_mean_ci[0]), color=colors["lightgray"], size=13, ha="center") ax.text(ucla_mean_ci[1], ax.get_ylim()[1]*0.8, "97.5$^{{th}}$ %\n({:.0f})".format(ucla_mean_ci[1]), color=colors["lightgray"], size=13, ha="center") for ax in [ax4]: ax.axvline(census2014["mean"], 
color=colors["green"], ls="--") ax.text(census2014["mean"], ax.get_ylim()[1]*1.01, "Census data\n({:.0f})".format(census2014["mean"]), color=colors["green"], size=13, ha="center") for ax in [ax8]: ax.axvline(data.GROSS.mean(), ymax=0.92, color=colors["orange"], ls="--") ax.text(data.GROSS.mean(), ax.get_ylim()[1]*0.94, "true $\mu$", color=colors["orange"], size=13, ha="left") for ax in [ax5]: ax.plot(xt_ucla, yt_ucla, color=colors["orange"], lw=2) for ax in [ax2, ax3, ax4, ax5, ax8]: ax.set_ylim(0) ax.axis("off") for ax in [ax6]: ax.set_ylim(ax5.get_ylim()) ax.axis("off") ax.fill_between(xt_ucla[xt_uclaucla_sample.GROSS.mean()+1.96*ucla_sample_se], yt_ucla[xt_ucla>ucla_sample.GROSS.mean()+1.96*ucla_sample_se], color=colors["orange"], alpha=0.7) ax.axvline(ucla_sample.GROSS.mean()-1.96*ucla_sample_se, ymax=0.33, color=colors["orange"], ls="--") ax.axvline(ucla_sample.GROSS.mean()+1.96*ucla_sample_se, ymax=0.33, color=colors["orange"], ls="--") for ax in [ax7]: ax.set_ylim(ax4.get_ylim()) ax.axis("off") ax.text(ax.get_xlim()[1]*0.99, ax.get_ylim()[1]*0.1, "2-tailed\np-value\n{:.4f}".format(stats.t.sf(np.abs(t_ucla), len(ucla_sample)-1)*2), ha="right", color=colors["orange"], size=14) plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-resampling-ttest.svg", transparent=True) # + #t distribution se_sample = (ucla_sample.GROSS.std())/np.sqrt(len(ucla_sample)) tdist_census = stats.t(df=len(ucla_sample)-1, loc=census2014["mean"], scale=se_sample) xt_census = np.linspace(census2014["mean"]-4*se_sample, ucla_sample.GROSS.mean()+2*se_sample, 100000) yt_census = tdist_census.pdf(xt_census) fig = plt.figure(figsize=(6,4)) ax1 = fig.add_axes([0.1, 0.15, 0.82, 0.75]) ax2 = ax1.twinx() for ax in [ax1]: ax.fill_between(xt_census, yt_census, color=colors["green"], alpha=0.7) for spine in ["bottom"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right", "left"]: ax.spines[spine].set_visible(False) ax.set_yticks([]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_xlabel('Salary', size=18, color=colors["lightgray"]) ax.set_xlim(xt_census.min(), xt_census.max()) ax.ticklabel_format(style='sci', axis='x', scilimits=(0,5)) ax.set_ylim(0) ax.axvline(census2014["mean"], color=colors["green"], ls="--") ax.text(census2014["mean"], ax.get_ylim()[1]*1.01, "US average\n(Census 2014)\n${:.0f}".format(census2014["mean"]), color=colors["green"], size=13, ha="center") ax.text(ax.get_xlim()[0]+(ax.get_xlim()[1]-ax.get_xlim()[0])*-0.05, ax.get_ylim()[1]*0.3, "$\mu=$US average\n" r"$\sigma=\frac{s_{sample}}{\sqrt{n_{sample}}}$", size=15, color=colors["green"], ha="left") for ax in [ax2]: ax.set_ylim(ax1.get_ylim()) ax.axvline(ucla_sample.GROSS.mean(), color=colors["red"], ls="--") ax.text(ucla_sample.GROSS.mean(), ax.get_ylim()[1]*1.01, "Original sample\n" r"$\bar{{x}}=$" "\${:.0f}".format(ucla_sample.GROSS.mean()), color=colors["red"], size=13, ha="center") ax.axis("off") plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-ttest-hypothesized-dist.svg", transparent=True) # - # ### bootstrap steps # + n_simul = 10000 res_ucla_mean = np.zeros(n_simul) res_ucla_median = np.zeros(n_simul) for i in range(n_simul): res_ucla_mean[i]=np.random.choice(ucla_sample.GROSS.values, size=len(ucla_sample)).mean() res_ucla_median[i]=np.median(np.random.choice(ucla_sample.GROSS.values, 
size=len(ucla_sample))) # + x1, y1 = st.to_dotplot(res_ucla_mean[:120], kind="bins", scale=0.05, nbins=20) #randomize order of each rows of data points rows_yval = np.unique(y1) idx_by_rows = [list(np.where(y1 == val)[0]) for val in rows_yval] for i in range(len(idx_by_rows)): np.random.shuffle(idx_by_rows[i]) shuffled_idx = np.concatenate(idx_by_rows) x1 = x1[shuffled_idx] y1 = y1[shuffled_idx] fig, ax1 = plt.subplots(figsize=(6, 4)) ax2 = ax1.twinx() ax3 = ax1.twinx() ax4 = ax1.twinx() ax5 = ax1.twinx() ax6 = ax1.twinx() ax7 = ax1.twinx() ax8 = ax1.twinx() for ax in [ax1]: for spine in ["bottom"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right", "left"]: ax.spines[spine].set_visible(False) ax.set_yticks([]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_xlabel(r'$\bar{x}$', size=18, color=colors["lightgray"]) ax.set_ylim(-0.02, 1.2) ax.set_xlim(xt_ucla.min(), xt_ucla.max()) ax.ticklabel_format(style='sci', axis='x', scilimits=(0,5)) for ax in [ax2, ax3, ax4, ax5, ax6, ax7, ax8]: ax.set_ylim(ax1.get_ylim()) ax.axis("off") for ax in [ax2]: ax.scatter(x1[0], y1[0], s=50, color=colors["blue"]) for ax in [ax3]: ax.scatter(x1[1], y1[1], s=50, color=colors["blue"]) for ax in [ax4]: ax.scatter(x1[2], y1[2], s=50, color=colors["blue"]) for ax in [ax5]: ax.scatter(x1[3:25], y1[3:25], color=colors["blue"]) for ax in [ax6]: ax.scatter(x1[25:50], y1[25:50], color=colors["blue"]) for ax in [ax7]: ax.scatter(x1[50:80], y1[50:80], color=colors["blue"]) for ax in [ax8]: ax.scatter(x1[80:], y1[80:], color=colors["blue"]) plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-bootstrap-steps.svg", transparent=True) # - # ## steps median # + x1, y1 = st.to_dotplot(res_ucla_median[:120], kind="bins", scale=0.05, nbins=20) #randomize order of each rows of data points rows_yval = np.unique(y1) idx_by_rows = [list(np.where(y1 == val)[0]) for val in rows_yval] for i in range(len(idx_by_rows)): np.random.shuffle(idx_by_rows[i]) shuffled_idx = np.concatenate(idx_by_rows) x1 = x1[shuffled_idx] y1 = y1[shuffled_idx] fig, ax1 = plt.subplots(figsize=(6, 4)) ax2 = ax1.twinx() ax3 = ax1.twinx() for ax in [ax1]: for spine in ["bottom"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right", "left"]: ax.spines[spine].set_visible(False) ax.set_yticks([]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_xlabel('$m$', size=18, color=colors["lightgray"]) ax.set_ylim(-0.02, 1.6) ax.set_xlim(40000, 90000) ax.ticklabel_format(style='sci', axis='x', scilimits=(0,5)) for ax in [ax2, ax3, ax4, ax5, ax6, ax7, ax8]: ax.set_ylim(ax1.get_ylim()) ax.axis("off") for ax in [ax2]: ax.scatter(x1[:50], y1[:50], color=colors["blue"]) for ax in [ax3]: ax.scatter(x1[50:], y1[50:], color=colors["blue"]) plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-bootstrap-steps-median.svg", transparent=True) # - # ## Bootstrap median # + fig = plt.figure(figsize=(6,4)) ax1 = fig.add_axes([0.1, 0.15, 0.82, 0.75]) ax2 = ax1.twinx() ax3 = ax1.twinx() ax4 = ax1.twinx() ax5 = ax1.twinx() for ax in [ax1]: ax.hist(res_ucla_median, bins="auto", color=colors["blue"]) for spine in ["bottom"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in 
["top", "right", "left"]: ax.spines[spine].set_visible(False) ax.set_yticks([]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_xlabel('$m$', size=18, color=colors["lightgray"]) ax.set_xlim(40000, 90000) ax.ticklabel_format(style='sci', axis='x', scilimits=(0,5)) ax.set_ylim(0) for ax in [ax2]: ax.set_ylim(ax1.get_ylim()) ax.axvline(ucla_sample.GROSS.median(), color=colors["red"], ls="--") ax.text(ucla_sample.GROSS.median(), ax.get_ylim()[1]*1.01, "Original sample\n$m$=\${:.0f}".format(ucla_sample.GROSS.median()), color=colors["red"], size=13, ha="center") for ax in [ax3]: ax.axvline(ucla_median_ci[0], ymax=0.75, color=colors["lightgray"], ls="-") ax.axvline(ucla_median_ci[1], ymax=0.75, color=colors["lightgray"], ls="-") ax.text(ucla_median_ci[0], ax.get_ylim()[1]*0.8, "2.5$^{{th}}$ %\n({:.0f})".format(ucla_median_ci[0]), color=colors["lightgray"], size=13, ha="center") ax.text(ucla_median_ci[1], ax.get_ylim()[1]*0.8, "97.5$^{{th}}$ %\n({:.0f})".format(ucla_median_ci[1]), color=colors["lightgray"], size=13, ha="center") for ax in [ax4]: ax.axvline(census2014["median"], color=colors["green"], ls="--") ax.text(census2014["median"], ax.get_ylim()[1]*1.01, "Census data\n(\${:.0f})".format(census2014["median"]), color=colors["green"], size=13, ha="center") for ax in [ax5]: ax.set_ylim(0, 1) ax.axis("off") #ax.axvline(data.GROSS.median(), ymax=0.92, color=colors["orange"], ls="--") ax.text(data.GROSS.median(), -0.15, "true $M$", color=colors["orange"], size=13, ha="center") ax.annotate('', xy=(data.GROSS.median(), 0), xycoords='data', xytext=(data.GROSS.median(), -0.01), textcoords='data', arrowprops=dict(facecolor=colors["orange"], ec="none", shrink=0.5), horizontalalignment='right', verticalalignment='top') for ax in [ax2, ax3, ax4]: ax.set_ylim(0) ax.axis("off") plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-resampling-median.svg", transparent=True) # - # ## True sampling distribution true_sampling = np.zeros(n_simul) for i in range(n_simul): sample_idx = np.random.choice(data.index.values, replace=False, size=100) sample = data.ix[sample_idx] true_sampling[i]=sample.GROSS.mean() # + #plt.hist(true_sampling, bins="auto"); # + #t distribution se_truesampling = (data.GROSS.std())/np.sqrt(len(ucla_sample)) tdist_truesampling = stats.t(df=len(ucla_sample)-1, loc=data.GROSS.mean(), scale=se_truesampling) xt_truesampling = np.linspace(data.GROSS.mean()-6*se_sample, data.GROSS.mean()+7*se_sample, 10000) yt_truesampling = tdist_truesampling.pdf(xt_truesampling) fig = plt.figure(figsize=(10,4)) ax1 = fig.add_axes([0.1, 0.15, 0.85, 0.8]) ax2 = fig.add_axes([0.4, 0.35, 0.45, 0.55]) ax3 = ax2.twinx() ax1.hist(data.GROSS, bins="auto", color=colors["blue"]); ax2.hist(true_sampling, bins="auto", color=colors["blue"]); ax3.plot(xt_truesampling, yt_truesampling, color=colors["orange"], lw=2); for ax in [ax1]: for spine in ["bottom", "left"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right"]: ax.spines[spine].set_visible(False) ax.set_xlim(0) ax.set_ylim(0) ax.set_xlabel("Salary (UCLA - 2014)", size=16, color=colors["lightgray"]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_ylabel("Frequency", size=16, color=colors["lightgray"]) ax.text(ax.get_xlim()[1]*0.35, ax.get_ylim()[1]*0.95, "Mean: 
$\mu=\$${:.0f}".format(data.GROSS.mean()), size=18, color=colors["orange"], ha="right", va="top") ax.text(ax.get_xlim()[1]*0.35, ax.get_ylim()[1]*0.87, "Median: M=${:.0f}".format(data.GROSS.median()), size=18, color=colors["red"], ha="right", va="top") for ax in [ax2]: for spine in ["bottom"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right", "left"]: ax.spines[spine].set_visible(False) ax.text(ax.get_xlim()[1]*1.1, ax.get_ylim()[1]*1, "True sampling distribution\n10000 " r"$\bar{x}$" " from 10000\nrandom samples (n=100)", size=14, color=colors["blue"], ha="right", va="top") ax.set_yticks([]) ax.set_xlabel(r"$\bar{x}$", size=16, color=colors["lightgray"]) ax.tick_params(axis="x", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.ticklabel_format(style='sci', axis='x', scilimits=(0,5)) ax.axvline(data.GROSS.mean(), color=colors["orange"]) ax.text(data.GROSS.mean(), ax.get_ylim()[1]*1.04, "$\mu$", size=18, ha="center", color=colors["orange"]) for ax in [ax3]: ax.set_ylim(0) ax.text(ax.get_xlim()[1]*1.1, ax.get_ylim()[1]*0.5, "Theoretical\nsampling distribution\n$\mu=\mu_{pop}$\n" r"$\sigma=\frac{\sigma_{pop}}{\sqrt{n_{sample}}}$", size=14, color=colors["orange"], ha="right", va="top") ax.axis("off") plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/salaries-ucla2014-true-sampling-distribution.svg", transparent=True) # - # ## Central limit theorem for one sample mean # + mu = 35 sigma = 5 #Population (gumbel left skewed) gumbel_l = stats.gumbel_l(mu, sigma) #Sample (normal) n_sample = 40 ynorm = stats.norm(mu, sigma/np.sqrt(n_sample))#.rvs(n) # + x = np.linspace(0, 60, 500) fig = plt.figure(figsize=(6,4)) ax1 = fig.add_axes([0.1, 0.15, 0.82, 0.75]) ax2 = ax1.twinx() for ax in [ax1]: ax.fill_between(x, gumbel_l.pdf(x), color=colors["red"], alpha=0.7) for spine in ["bottom"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right", "left"]: ax.spines[spine].set_visible(False) ax.set_yticks([]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_xlabel('value', size=18, color=colors["lightgray"]) ax.set_ylim(0) ax.text(0, 0.05, "Distribution of\nthe Population data\n($\mu_p=35$, $\sigma=5$)", size=15, color=colors["red"]) ax.axvline(mu, color=colors["lightgray"]) ax.text(mu, ax.get_ylim()[1]*1.04, "$\mu_p$", size=18, ha="center", color=colors["lightgray"]) for ax in [ax2]: ax.fill_between(x, ynorm.pdf(x), color=colors["blue"], alpha=0.7) ax.text(43, 0.1, "Distribution of\nsample means\n($\mu=\mu_p$, " r"$\sigma=\frac{\sigma_p}{\sqrt{n}}$)", size=15, color=colors["blue"]) ax.set_ylim(0) for ax in [ax2]: ax.axis("off") plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/central-limit-theorem-one-mean.svg", transparent=True) # + #t distribution x = np.linspace(-4.5, 4.5, 1000) y = stats.t(df=30, loc=0, scale=1).pdf(x) fig = plt.figure(figsize=(6,4)) ax1 = fig.add_axes([0.1, 0.15, 0.82, 0.75]) for ax in [ax1]: ax.plot(x, y, color=colors["blue"], lw=1.5) ax.fill_between(x[x<-1.96], y[x<-1.96], color=colors["orange"], alpha=0.7) ax.fill_between(x[x>1.96], y[x>1.96], color=colors["orange"], alpha=0.7) ax.axvline(1.96, ymax=0.33, color=colors["orange"], ls="--") ax.axvline(-1.96, ymax=0.33, color=colors["orange"], ls="--") ax.text(1.96, ax.get_ylim()[1]*0.35, r"$1.96\times SE$", 
color=colors["orange"], size=15, ha="left") ax.text(-1.96, ax.get_ylim()[1]*0.35, r"$-1.96\times SE$", color=colors["orange"], size=15, ha="right") for spine in ["bottom"]: ax.spines[spine].set_linewidth(1) ax.spines[spine].set_color(colors["lightgray"]) for spine in ["top", "right", "left"]: ax.spines[spine].set_visible(False) ax.set_yticks([]) ax.tick_params(axis="both", width=1, size=4, color=colors["lightgray"], labelcolor=colors["lightgray"], labelsize=13, pad=4) ax.set_xlabel(r'Standardized $\bar{x}$ (t)', size=18, color=colors["lightgray"]) ax.set_xlim(xt0_ucla.min(), xt0_ucla.max()) ax.set_ylim(0) ax.text(0, ax.get_ylim()[1]*0.25, "95% of the\nsample means", ha="center", color=colors["blue"], size=15) xticklabels = [label.get_text() for label in ax.get_xticklabels()] middlelabelposition = int(len(xticklabels)/2) newlabels = np.concatenate([-1*np.arange(len(xticklabels)/2)[::-1][:-1], [r"$\bar{x}$"], np.arange(len(xticklabels)/2)[1:]]) ax.set_xticklabels(newlabels) plt.savefig("/Users/Gui/Box Sync/_STATS13/_Slides/_stats13-Lectures/assets/img/lec/central-limit-theorem-one-mean-ci95.svg", transparent=True) # - np.concatenate([-1*np.arange(len(xticklabels)/2-1)[::-1][:-1], [r"$\bar{x}$"], np.arange(len(xticklabels)/2-1)[1:]]) -1*np.arange(len(xticklabels)/2-1)[::-1][:-1] np.arange(len(xticklabels)/2-1)[1:] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import csv def create_csvs(infile): pos = infile.rfind('/') file = infile[:pos+1] + '/csvs' filmfile = open(file + "films.csv", "w") film_writer = csv.writer(filmfile, delimiter='\t') edgefile = open(file + "edges.csv", "w") edge_writer = csv.writer(edgefile, delimiter='\t') directorfile = open(file + "directors.csv", "w") director_writer = csv.writer(directorfile, delimiter='\t') grantfile = open(file + "grants.csv", "w") grant_writer = csv.writer(grantfile, delimiter='\t') countryfile = open(file + "countries.csv", "w") country_writer = csv.writer(countryfile, delimiter='\t') film_header = ["fid:ID", "name", "year", ":LABEL"] film_writer.writerow(film_header) edge_header = [":START_ID", ":END_ID", ":TYPE"] edge_writer.writerow(edge_header) director_header = ["dirid:ID", "name", ":LABEL"] director_writer.writerow(director_header) country_header = ["cid:ID", "name", ":LABEL"] country_writer.writerow(country_header) grant_header = ["gid:ID", "name", ":LABEL"] grant_writer.writerow(grant_header) hdict = {'countries' : {}, 'films' : {}, 'directors' : {}, 'grants' : {} } with open(infile) as csvfile: reader = csv.reader(csvfile, delimiter='|') for row in reader: try: if row[0] not in hdict['films'].keys(): hdict['films'][row[0]] = ['f' + str(len(hdict['films'].keys()) + 1), row[0].strip(), row[2].strip(), "film"] id_film = hdict['films'][row[0]][0] directores = row[1].split("/") countries = row[3].split("/") grants = row[4].split("/") if len(row) > 4 else [] for d in directores: d = d.strip() if d not in hdict['directors'].keys(): hdict['directors'][d] = ['d' + str(len(hdict['directors'].keys()) + 1), d, "director"] id_dir = hdict['directors'][d][0] edge_writer.writerow([id_dir, id_film, "DIRECTED"]) for c in countries: c = c.strip() if c == "2011" or c == "2013": print(row) if c not in hdict['countries'].keys(): hdict['countries'][c] = ['c' + str(len(hdict['countries'].keys()) + 1), c, "country"] id_count = hdict['countries'][c][0] edge_writer.writerow([id_film, 
id_count, "IS_FROM"]) for g in grants: g = g.strip() if g not in hdict['grants'].keys(): hdict['grants'][g] = ['g' + str(len(hdict['grants'].keys()) + 1), g, "grant"] id_grant = hdict['grants'][g][0] edge_writer.writerow([id_film, id_grant, "FUNDED_BY"]) except: print(row) for k, v in hdict["countries"].items(): country_writer.writerow(v) for k, v in hdict["directors"].items(): director_writer.writerow(v) for k, v in hdict["grants"].items(): grant_writer.writerow(v) for k, v in hdict["films"].items(): film_writer.writerow(v) filmfile.close() edgefile.close() directorfile.close() countryfile.close() create_csvs('data/HBF_2002_2018.csv') # !cp data/csvs2/* /usr/share/neo4j/import/ # !/usr/share/neo4j/bin/neo4j-admin import --mode=csv --database=hbf.db --nodes:film /usr/share/neo4j/import/films.csv --nodes:country /usr/share/neo4j/import/country.csv --nodes:director /usr/share/neo4j/import/director.csv --relationships /usr/share/neo4j/import/edges.csv --id-type=string --delimiter='\t' a= ' ' a.strip() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:pyvizenv2] # language: python # name: conda-env-pyvizenv2-py # --- # + import os from web3 import Web3 from dotenv import load_dotenv from web3.middleware import geth_poa_middleware from eth_account import Account from pathlib import Path from getpass import getpass load_dotenv() w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545")) w3.middleware_onion.inject(geth_poa_middleware, layer=0) print(w3.eth.blockNumber) private_key = os.getenv("PRIVATE_KEY") account_one = Account.from_key(private_key) with open(Path("UTC--2021-10-24T20-48-25.024380600Z--e0a28831285e227ba3af29d751cd599a19d64092")) as keyfile: encrypted_key = keyfile.read() private_key = w3.eth.account.decrypt( encrypted_key, getpass("Enter keystore password: ") ) account_two = Account.from_key(private_key) def create_raw_tx(account, recipient, amount): gasEstimate = w3.eth.estimateGas( {"from": account.address, "to": recipient, "value": amount} ) return { "from": account.address, "to": recipient, "value": amount, "gasPrice": w3.eth.gasPrice, "gas": gasEstimate, "nonce": w3.eth.getTransactionCount(account.address), } def send_tx(account, recipient, amount): tx = create_raw_tx(account, recipient, amount) signed_tx = account.sign_transaction(tx) result = w3.eth.sendRawTransaction(signed_tx.rawTransaction) print(result.hex()) return result.hex() send_tx(account_one, account_two.address, 555555555555555) # - pip install Web3 print(account_one.address) print(account_two.address) w3.eth.getTransactionReceipt("") # + # Convert the hash to the correct format Web3.toChecksumAddress("") # - w3.eth.blockNumber w3.eth.getBalance("") w3.eth.getBalance("") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction to BSplines # # # We start this section by recalling some basic properies about B-splines curves and surfaces. We also recall some fundamental algorithms (knot insertion and degree elevation). # # [//]: # (For a basic introduction to the subject, we refer to the books and .) # # A B-Splines family, $(N_i)_{ 1 \leqslant i \leqslant n}$ of order $k$, can be generated using a non-decreasing sequence **of knots** $T=(t_i)_{1\leqslant i \leqslant n + k}$. 
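#
# A quick check of this bookkeeping (a small sketch, independent of any spline library):
# a family of $n$ B-splines of order $k$ is generated by $n + k$ knots, which is also how
# the plotting helper below recovers the number of basis functions from a knot vector.

# +
T_example = [0, 0, 0, 1, 2, 3, 4, 5, 5, 5]   # the clamped knot vector used in the example below
k = 3                                        # order (degree p = k - 1 = 2)
n = len(T_example) - k
print(f"{n} B-splines of order {k} (degree {k-1}) from {len(T_example)} knots")
# -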
# # ## B-Splines series # # The $j-th$ B-Spline of order $k$ is defined by the recurrence relation: # # $$ # N_j^k = w_j^k N_j^{k-1} + ( 1 - w_{j+1}^k ) N_{j+1}^{k-1} # $$ # where, # # $$ # w_j^k (x) = \frac{x-t_j}{t_{j+k-1}-t_{j}} \hspace{2cm} N_j^1(x) = \chi_{ \left[ t_j, t_{j+1} \right[ }(x) # $$ # # for $k \geq 1$ and $1 \leq j \leq n$. # # We note some important properties of a B-splines basis: # # * B-splines are piecewise polynomial of degree $p=k-1$, # # * Compact support; the support of $N_j^k$ is contained in $\left[ t_j, t_{j+k} \right]$ , # # * If $x \in~ ] t_j,t_{j+1} [$, then only the *B-splines* $\{ N_{j-k+1}^k,\cdots,N_{j}^k \}$ are non vanishing at $x$, # # * Positivity: $\forall j \in \{1,\cdots,n \}~~N_j(x) >0, ~~\forall x \in ] t_j, t_{j+k} [$, # # * Partition of unity $\sum_{i=1}^n N_i^{k}(x) = 1, \forall x \in \mathbb{R}$, # # * Local linear independence, # # * If a knot $t_i$ has a multiplicity $m_i$ then the B-spline is $\mathcal{C}^{(p-m_i)}$ at $t_i$. # # # --- # **Example** # # In the following example we plot the family of BSplines generated by the knot vector $T=\{0, 0, 0, 1, 2, 3, 4, 5, 5, 5\}$ of degree $2$. # + # importing the bsplines module from bsplines import Bspline import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # ... def plot_splines(T, p, xmin, xmax, nx=100): """Plots all BSplines of degree p and knots vector T""" # grid points for evaluation x = np.linspace(xmin,xmax,nx) # this is the number of the BSplines in the Schoenberg space N = len(T) - p - 1 # create BSplines family with the knots sequence T # of degree p bsp = Bspline(T,p) y = np.zeros((N,nx), dtype=np.double) for i in range(0,N): # evaluation of the i^th B-spline over x y[i]=bsp(x, i=i) plt.plot(x,y[i], label='$N_{}$'.format(i+1)) plt.legend(loc=9, ncol=4) plt.show() # ... # create a knots vector T = np.array([0, 0, 0, 1, 2, 3, 4, 5, 5, 5]) # spline degree [here quadratic] p = 2 # plot BSplines plot_splines(T, p, xmin=0., xmax=5., nx=100) # - # --- # ### Knots vector families # # There are two kind of **knots vectors**, called **clamped** and **unclamped**. Both families contains **uniform** and **non-uniform** sequences. # # The following are examples of such knots vectors # # #### Clamped knots (open knots vector) # # ##### uniform # T1 = [0, 0, 0, 1, 2, 3, 4, 5, 5, 5] # + T2 = [-0.2, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 0.8] plot_splines(T2, p=2, xmin=0., xmax=.6, nx=100) # - # ##### non-uniform # + T3 = [0, 0, 0, 1, 3, 4, 5, 5, 5] plot_splines(T3, p=2, xmin=0., xmax=5., nx=100) # + T4 = [-0.2, -0.2, 0.4, 0.6, 0.8, 0.8] plot_splines(T4, p=2, xmin=0.4, xmax=.6, nx=100) # - # #### Unclamped knots # # ##### uniform T5 = [] # + T6 = [-0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0] plot_splines(T6, p=2, xmin=0.2, xmax=.6, nx=100) # - # ##### non-uniform # + T7 = [0, 0, 3, 4, 7, 8, 9] plot_splines(T7, p=2, xmin=3., xmax=7., nx=100) # + T8 = [-0.2, 0.2, 0.4, 0.6, 1.0, 2.0, 2.5] plot_splines(T8, p=2, xmin=.4, xmax=1., nx=100) # - # ## B-Spline curve # # The B-spline curve in $\mathbb{R}^d$ associated to knots vector $T=(t_i)_{1\leqslant i \leqslant n + k}$ and the control polygon $(\mathbf{P}_i)_{ 1 \leqslant i \leqslant n}$ is defined by : # # $$ # \mathcal{C}(t) = \sum_{i=1}^n N_i^k(t) \textbf{P}_i # $$ # # The following plot shows an example of a quadratic B-Spline curve, and its corresponding knot vector and control points. # T = [0., 0., 0., 0.25, 0.5, 0.5, 0.75, 1., 1., 1.] 
p = 2 plot_splines(T, p, xmin=0., xmax=1., nx=100) # + # normaly we should take an array of size (N-p-1) # ie the number of bsplines # but scipy.interpolate.splev needs an array # of the same size as T x = np.zeros_like(T) y = np.zeros_like(T) x[0] = 0. ; y[0] = .5 x[1] = 1. ; y[1] = -1. x[2] = 1. ; y[2] = 1. x[3] = 0. ; y[3] = 2. x[4] = -2. ; y[4] = 1. x[5] = -1. ; y[5] = 0. x[6] = -2. ; y[6] = -1. # splev to evaluate splines series from scipy.interpolate import splev tck_x = (T, x, p) tck_y = (T, y, p) # parametric grid points for evaluation tau = np.linspace(0., 1., 200) Px = splev(tau, tck_x) Py = splev(tau, tck_y) plt.plot(Px, Py) plt.plot(x[:7],y[:7], 'or') plt.show() # - # # We have the following properties for a *B-spline* curve: # # * If $n=k$, then $\mathcal{C}$ is just a Bézier-curve, # # * $\mathcal{C}$ is a piecewise polynomial curve, # # * The curve interpolates its extremas if the associated multiplicity of the first and the last knot are maximum (*i.e.* equal to $k$), *i.e.* open knot vector, # # * Invariance with respect to affine transformations, # # * Strong convex-hull property: # # if $t_i \leq t \leq t_{i+1}$, then $\mathcal{C}(t)$ is inside the convex-hull associated to the control points $\mathbf{P}_{i-p},\cdots,\mathbf{P}_{i}$, # # * Local modification : moving the $i^{th}$ control point $\mathbf{P}_{i}$ affects $\mathcal{C}(t)$, only in the interval $[t_i,t_{i+k}]$, # * The control polygon approaches the behavior of the curve. # # --- # **Note** # # In order to model a singular curve, we can use multiple control points : $\mathbf{P}_{i}=\mathbf{P}_{i+1}$. # # --- # # ## Multivariate tensor product splines # # Let us consider $d$ knot vectors $\mathcal{T} = \{T^1,T^2,\cdots,T^d\}$. For simplicity, we consider that these knot vectors are open, which means that $k$ knots on each side are duplicated so that the spline is interpolating on the boundary, and of bounds $0$ and $1$. In the sequel we will use the notation $I=[0,1]$. # Each knot vector $T^i$, will generate a basis for a Schoenberg space, $\mathcal{S}_{k_{i}}(T^i,I)$. The tensor product of all these spaces is also a Schoenberg space, namely $\mathcal{S}_{\mathbf{k}}(\mathcal{T})$, where $\mathbf{k}=\{k_1,\cdots,k_d\}$. The cube $\mathcal{P}=I^d=[0,1]^d$, will be referred to as a patch. # # The basis for $\mathcal{S}_{\mathbf{k}}(\mathcal{T})$ is defined by a tensor product : # # $$ # N_{\mathbf{i}}^{\mathbf{k}} := N_{i_1}^{k_1} \otimes N_{i_2}^{k_2} \otimes \cdots \otimes N_{i_d}^{k_d} # $$ # # where, $\mathbf{i}=\{i_1,\cdots , i_d \}$. # # A typical cell from $\mathcal{P}$ is a cube of the form : $Q_{\mathbf{i}}=[\xi_{i_1}, \xi_{i_1+1}] \otimes \cdots \otimes [\xi_{i_d}, \xi_{i_d+1}]$. # # # # ## Deriving a B-spline curve # # The derivative of a B-spline curve is obtained as: # # $$ # \mathcal{C}^{\prime}(t) = \sum_{i=1}^{n} {N_{i}^{k}}^{\prime}(t) \mathbf{P}_i = \sum_{i=1}^{n} \left(\frac{p}{t_{i+p}-t_{i}}N_{i}^{k-1}(t) \mathbf{P}_i - \frac{p}{t_{i+1+p}-t_{i+1}}N_{i+1}^{k-1}(t) \mathbf{P}_i \right) # = \sum_{i=1}^{n-1} {N_{i}^{k-1}}^{\ast}(t) \mathbf{Q}_i # $$ # # where $\mathbf{Q}_i = p \frac{\mathbf{P}_{i+1} - \mathbf{P}_i}{t_{i+1+p}-t_{i+1}}$, and $\{{N_{i}^{k-1}}^{\ast},~~1 \leq i \leq n-1\}$ are generated using the knot vector $T^{\ast}$, which is obtained from $T$ by reducing by one the multiplicity of the first and the last knot (in the case of open knot vector), *i.e.* by removing the first and the last knot. 
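#
# A small numerical check of the coefficients appearing in $\mathbf{Q}_i$ above (a sketch
# with plain numpy, using the quadratic knot vector of the worked example further below;
# the variable names `p_deg`, `T_deriv`, `n_ctrl` are chosen here to avoid clashing with
# `p` and `T` used earlier in this notebook):

# +
p_deg = 2
T_deriv = np.array([0., 0., 0., 2/5, 3/5, 1., 1., 1.])
n_ctrl = len(T_deriv) - (p_deg + 1)   # number of B-splines / control points (here 5)

# coefficient multiplying (P_{i+1} - P_i) in Q_i, for i = 1, ..., n-1 (1-based, as in the text);
# in 0-based numpy indexing, t_{i+1+p} - t_{i+1} becomes T_deriv[i + p_deg] - T_deriv[i]
coeffs = [p_deg / (T_deriv[i + p_deg] - T_deriv[i]) for i in range(1, n_ctrl)]
print(coeffs)   # [5.0, 3.33..., 3.33..., 5.0], matching the worked example below
# -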
# # More generally, by introducing the B-splines family $\{ {N_{i}^{k-j}}^{\ast}, 1 \leq i \leq n-j \}$ generated by the knots vector $T^{j^{\ast}}$ obtained from $T$ by removing the first and the last knot $j$ times, we have the following result: # # --- # **Proposition** # # The $j^{th}$ derivative of the curve $\mathcal{C}$ is given by # # $$ # \mathcal{C}^{(j)}(t) = \sum_{i=1}^{n-j} {N_{i}^{k-j}}^{\ast}(t) \mathbf{P}_i^{(j)}$ # $$ # # where, for $j>0$ # # $$ # \mathbf{P}_i^{(j)} = \frac{p-j+1}{t_{i+p+1}-t_{i+j}} \left( \mathbf{P}_{i+1}^{(j-1)} - \mathbf{P}_i^{(j-1)} \right) # \\ # \mbox{and} ~ ~ ~ \mathbf{P}_i^{(0)} = \mathbf{P}_i. # $$ # # --- # # By denoting $\mathcal{C}^{\prime}$ and $\mathcal{C}^{\prime\prime}$ the first and second derivative of the B-spline curve $\mathcal{C}$, it is easy to show that: # # We have, # # * $\mathcal{C}^{\prime}(0) = \frac{p}{t_{p+2}} \left(\mathbf{P}_{2} - \mathbf{P}_1\right)$, # # * $\mathcal{C}^{\prime}(1) = \frac{p}{1-t_{n}} \left(\mathbf{P}_{n} - \mathbf{P}_{n-1}\right)$, # # * $\mathcal{C}^{\prime\prime}(0) = \frac{p(p-1)}{t_{p+2}} \left( \frac{1}{t_{p+2}}\mathbf{P}_{1} - \{ \frac{1}{t_{p+2}} + \frac{1}{t_{p+3}} \} \mathbf{P}_2 + \frac{1}{t_{p+3}}\mathbf{P}_{3} \right)$, # # * $\mathcal{C}^{\prime\prime}(1) = \frac{p(p-1)}{1-t_{n}} \left( \frac{1}{1-t_{n}}\mathbf{P}_{n} - \{ \frac{1}{1-t_{n}} + \frac{1}{1-t_{n-1}} \} \mathbf{P}_{n-1} + \frac{1}{1-t_{n-1}}\mathbf{P}_{n-2} \right)$. # # # --- # **Example** # # Let us consider the quadratic B-spline curve associated to the knots vector $T=\{000~\frac{2}{5}~\frac{3}{5}~111 \}$ and the control points $\{ P_i, 1 \leq i \leq 5 \}$: # # # $$ # \mathcal{C}(t) = \sum_{i=1}^{5} {N_{i}^{3}}^{\prime}(t) \mathbf{P}_i # $$ # # we have, # # $$ # \mathcal{C}^{\prime}(t) = \sum_{i=1}^{4} {N_{i}^{2}}^{\ast}(t) \mathbf{Q}_i # $$ # # where # # $$ # \mathbf{Q}_1 = 5 \{\mathbf{P}_{2} - \mathbf{P}_1\}, ~~~~\mathbf{Q}_2 = \frac{10}{3} \{ \mathbf{P}_{3} - \mathbf{P}_2\}, # \\ # \mathbf{Q}_3 = \frac{10}{3} \{ \mathbf{P}_{4} - \mathbf{P}_3\},~~~~\mathbf{Q}_4 = 5 \{\mathbf{P}_{5} - \mathbf{P}_4\}. # $$ # # The *B-splines* $\{ {N_{i}^{2}}^{\ast},~~1 \leq i \leq 4\}$ are associated to the knot vector $T^{\ast}=\{00~\frac{2}{5}~\frac{3}{5}~11 \}$. # # --- # # ## Fundamental geometric operations # # By inserting new knots into the knot vector, we add new control points without changing the shape of the B-Spline curve. This can be done using the DeBoor algorithm :cite:$DeBoor_Book2001$. We can also elevate the degree of the B-Spline family and keep unchanged the curve :cite:$qi$. In (Fig. \ref{refinement_curve_B_Spline}), we apply these algorithms on a quadratic B-Spline curve and we show the position of the new control points. # # # ### Knot insertion # # # After modification, we denote by $\widetilde{n}, \widetilde{k}, \widetilde{T}$ the new parameters. $(\textbf{Q}_i)$ are the new control points. # # One can insert a new knot $t$, where $t_j \leqslant t < t_{j+1}$. For this purpose we use the DeBoor algorithm :cite:$DeBoor_Book2001$: # # # $$ # \widetilde{n} = n+1 # \\ # \widetilde{k} = k # \\ # \widetilde{T} = \{ t_1,.., t_j, t, t_{j+1},.., t_{n+k}\} # \\ # \alpha_i = \left\{\begin{array}{cc}1 & 1 \leqslant i \leqslant j-k+1 \\\frac{t-t_i}{t_{i+k-1}-t_i} & j-k+2 \leqslant i \leqslant j \\0 & j+1 \leqslant i \end{array}\right. # \\ # \textbf{Q}_i = \alpha_i \textbf{P}_i + (1-\alpha_i) \textbf{P}_{i-1} # $$ # # Many other algorithms exist, like blossoming for fast insertion algorithm. 
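# ---
# **Sketch**
#
# A minimal NumPy sketch of the single-knot insertion rule above. This is added material, not code from the original notebook: `insert_knot` is a hypothetical helper name; it maps the 1-based formulas to 0-based arrays and assumes a clamped knot vector `T` of length $n+k$ and control points `P` of shape $(n,d)$.

# +
import numpy as np

def insert_knot(T, P, k, t):
    """Insert one knot t (with t_j <= t < t_{j+1}); return the new knots and control points."""
    T = np.asarray(T, dtype=float)
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    j = int(np.searchsorted(T, t, side='right'))   # 1-based j such that t_j <= t < t_{j+1}
    T_new = np.concatenate([T[:j], [t], T[j:]])
    Q = np.zeros((n + 1, P.shape[1]))
    for m in range(n + 1):                         # m = i - 1 (0-based)
        i = m + 1
        if i <= j - k + 1:
            alpha = 1.0
        elif i <= j:
            alpha = (t - T[m]) / (T[m + k - 1] - T[m])
        else:
            alpha = 0.0
        # Q_i = alpha_i P_i + (1 - alpha_i) P_{i-1}
        if alpha == 1.0:
            Q[m] = P[m]
        elif alpha == 0.0:
            Q[m] = P[m - 1]
        else:
            Q[m] = alpha * P[m] + (1.0 - alpha) * P[m - 1]
    return T_new, Q

# insert t = 0.15 into the quadratic example used above (k = p + 1 = 3)
T = [0., 0., 0., 0.25, 0.5, 0.5, 0.75, 1., 1., 1.]
P = np.array([[0., .5], [1., -1.], [1., 1.], [0., 2.], [-2., 1.], [-1., 0.], [-2., -1.]])
T_new, Q = insert_knot(T, P, k=3, t=0.15)
print(T_new)
print(Q)
# -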
For more details about this topic, we refer to :cite:$goldman_lyche_book$. # # ### Order elevation # # We can elevate the order of the basis, without changing the curve. Several algorithms exist for this purpose. We used the one by Huang et al. :cite:$prautzsch$, :cite:$qi$. # # A quadratic B-spline curve and its control points. The knot vector is $T = \{ 000, \frac{1}{4}, \frac{1}{2}, \frac{3}{4}, 1 1 1 \}$. # # **TODO: plot** # # The curve after a h-refinement by inserting the knots $\{ 0.15, 0.35\}$ while the degree is kept equal to $2$. # # **TODO: plot** # # The curve after a p-refinement, the degree was raised by $1$ (using cubic B-splines). # # **TODO: plot** # # The curve after duplicating the multiplicity of the internal knots $\{ \frac{1}{4}, \frac{1}{2}, \frac{3}{4} \}$, # this leads to a B\'ezier description. We can then, split the curve into $4$ pieces (sub-domains), each one will corresponds to a quadratic B\'ezier curve. # # **TODO: plot** # # ### Translation # # **TODO** # # ### Rotation # # **TODO** # # ### Scaling # # **TODO** # # # # css style from IPython.core.display import HTML def css_styling(): styles = open("../../styles/custom.css", "r").read() return HTML(styles) css_styling() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import pandas as pd import numpy as np # # Intro to Pandas # There are primarily two data structures # 1. Series # 2. DataFrame # ## 1. Series # - An array of elements with types similar to Numpy # - Contains an index for each element # ### 1.1 Create Series series_a = pd.Series([3, 1, 2.3, 'a', True]) series_a series_a.values # list all the values in a series series_a.index # list the index values # ### 1.2 Let us create a series with int objects series_b = pd.Series([1, 4, 2]) series_b # ### 1.3 Access any element with index series_b[1] # access the element at position 2 (default starting index is 0) # ### 1.4 Access multiple elements # + # x:y:z --> access every zth element starting from index x upto index y # if x is missing --> consider 0 # if z is missing --> consider all elements series_b[:2] # access all elements from 0 upto 2 (2 not included) i.e., 0 and 1 # - # ### 1.5 Edit any element using index series_b[1] = 3 series_b # ### 1.6 Query And Subset series_b > 1 series_b[series_b > 1] # ### 1.7 Edit multiple elements series_b.loc[series_b>1] = [2, 3] series_b # ### 1.8 Perform operations series_b + 10 series_b ** 2 np.log(series_b) # ### 1.9 Identify null values series_c = pd.Series([np.nan, 2, 23, 4]) series_c pd.isnull(series_c) pd.notnull(series_c) # ### 1.10 Operate two series of equal length series_1 = pd.Series([1, 2, 3]) series_2 = pd.Series([4, 5, 6]) series_1 + series_2 series_1 * series_2 # ### 1.11 Operate two series of unequal length series_1 = pd.Series([1, 2, 3]) series_2 = pd.Series([4, 5, 6, 7]) series_1 + series_2 # ## 2. 
DataFrame # - A collection of Series where each Series represents one Column # - Similar to data represented in Excel/Database tables # ### 2.1 Create DataFrame df_dict = {"a": [1, 2, 3], "b": ['a', 'b', 'c'], "c": [True, False, True]} df = pd.DataFrame(df_dict) df df_dict = [[1, 2, 3], ['a', 'b', 'c'], [True, False, True]] df = pd.DataFrame(df_dict, columns = ['a', 'b', 'c'], index=[1, 2, 3]) df # ### 2.2 Reading and Writing df = pd.read_csv("../datasets/Oscars-demographics-DFE.csv", encoding='latin1') df.head() # ### 2.3 Subset rows len(df) df.index = range(100, 541) df.loc[100:105] # based on index names df.iloc[0:5] # based on actual position of rows df.head(10) df.tail(3) df.sample(n=10) df.sample(frac=0.01) df[df['date_of_birth:confidence']==0.6667] # ### 2.4 Subset Columns # #### 2.4.1 Single Column df['_golden'] # pick one column - returns Series df._golden # #### 2.4.2 Multiple Columns df[['_unit_id', '_golden', '_unit_state']] # pick multiple columns - returns DataFrame df.filter(regex='date_of_birth') # ### 2.5 Subset Rows and Columns # the first condition is on rows, the second condition is on columns df.loc[ df['date_of_birth:confidence']==1 , ['_unit_id', '_golden'] ] # ## 2.6 Dropping rows, columns # dropping columns df.drop(['birthplace_gold', 'race_ethnicity'], axis=1) # dropping duplicate rows df.drop_duplicates() # ## 2.7 Summarize Data # Numeric columns - calculates descriptive statistics df.describe() # Categorical columns - the unique values with their count of appearance in that column df['_unit_state'].value_counts() # the count of unique elements in a column df['_unit_state'].nunique() # Calculate the length of the DataFrame len(df) # + # apply a function to summarize # in-built functions - max, min, sum, count, median, mean, std, var df.max() # max # + # apply your custom function def func(x): return x+1 df[['year_of_award', 'year_of_award:confidence']].apply(func) # - # ## 2.8 Group Data # + # group by a set of columns and apply functions # apply in-built functions directly df.groupby(['year_of_award']).sum() # + # apply custom functions using agg def custom_mean(x): return np.mean(x)+1 df.groupby(['year_of_award'])['_golden', '_trusted_judgments'].agg(custom_mean) # - # ## 2.9 Missing values # drop rows with any columns having NA df.dropna() # can apply on a column df['_last_judgment_at'].dropna() # ### 2.9.1 Replace missing values # Replace missing values specific to each column df.loc[df['_last_judgment_at'].isna(), '_last_judgment_at'] df.loc[df['_last_judgment_at'].isna(), '_last_judgment_at'] = df['_last_judgment_at'].mode()[0] # use the boolean index to replace the NAs with a custom value df[['birthplace_gold', 'race_ethnicity_gold']].fillna('Australia') # ## 2.10 Create new rows / columns simple_df = pd.DataFrame({1:[1, 2, 3], 2:['a', 'b', 'c'], 3:[True, False, True]}) simple_df # ### 2.10.1 Create new row using .loc # add the new row using a new index value (3 here) simple_df.loc[3] = [4, 'd', False] simple_df # ### 2.10.2 Create new column simple_df # the new column should be of the same length as the df simple_df[4] = [42, 2.3, 3, 65] simple_df # ## 2.11 Combining two dataframes # ### 2.11.1 Create new row by combining DataFrames new_df = pd.DataFrame([[4, 'd', False]], columns=[1,2,3]) new_df # Concat function can be used for concatenating rows/columns # if axis = 0 -> rows will be concatenated # if axis = 1 -> columns will be concatenated pd.concat([simple_df, new_df], axis=0) # ### 2.11.2 Merge # Merge is like the join operation from SQL. 
# # You will merge two dataframes based on a column common between the two. # # Wherever the values of the column match in both dataframes, a new row which is a concat of both rows from A and B is formed sales_df = pd.DataFrame([[1, 23, 100], [1, 45, 142], [2, 52, 45]], columns = ['customer_id', 'order_id', 'order_value']) sales_df customer_data = pd.DataFrame([[1, 'Male', 25], [2, 'Female', 49], [3, 'Female', 32]], columns = ['customer_id', 'gender', 'age']) customer_data # merge with a left condition - suggesting us to keep all rows from left dataframe pd.merge(sales_df, customer_data, on="customer_id", how="left") # merge with a right condition - suggesting us to keep all rows from right dataframe pd.merge(sales_df, customer_data, on="customer_id", how="right") # ## 2.12 Reshaping your DataFrame # ### 2.12.1 Sorting # sort_values takes columns df.sort_values(['_trusted_judgments'], ascending=False) # ### 2.12.2 Reset index df.reset_index() # ### 2.12.3 Pandas Melt simple_df # keeps the columns in id_vars intact, takes columns in value_vars and creates a row for each entry pd.melt(simple_df, id_vars=[1], value_vars=[2, 3]) # ### 2.12.4 Pandas Pivot simple_df # + # index columns get the index in the pivot output # columns in column would be the columsn of the pivot output # values represent the value that will be placed in the dataframe # aggfunc can be used when there are multiple values in that position # fill_value fills NaN with the argument value pd.pivot_table(simple_df, index=[1], values=[4], columns=[2], aggfunc=np.mean, fill_value=0) # - # ## 2.13 Plotting df['birthplace:confidence'].plot.hist() df.plot.scatter(x="_trusted_judgments", y="birthplace:confidence") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Argumentation # Nessa implementação usaremos a tecnica de **argumentation**, que consiste na **geraçao de novas imagens** a partir de nosso banco de dados original. Podemos fazer rotaçoes, zoons, mudar escalas de cores, espelhamentos etc. 
# Dessa forma, os dados que temos podem aumentar consideravelmente, podendo assim, aumentar tambem a precisao dos resultados de nossa REDE NEURAL # # Importações # # Prestar atenção que o KERAS foi incorporado no TensorFLow 2.0, dessa madeira a importaçao dos modulos keras deve ser feita como tensorflow.keras from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D from tensorflow.python.keras.utils import np_utils from tensorflow.keras.preprocessing.image import ImageDataGenerator import matplotlib.pyplot as plt # + (X_treinamento, y_treinamento), (X_teste, y_teste) = mnist.load_data() #Fazemos a importaçao do MNIST plt.imshow(X_treinamento[1], cmap = 'gray') #abrimos como exemplo o treinamento [1], e colocacamos e escala preto e branco plt.title('Classe ' + str(y_treinamento[1])) #Titulo previsores_treinamento = X_treinamento.reshape(X_treinamento.shape[0], 28, 28, 1) #Fazemos o reshape para que o TF consiga ler os dados previsores_teste = X_teste.reshape(X_teste.shape[0], 28, 28, 1) previsores_treinamento = previsores_treinamento.astype('float32') #Mudamos para float para podermos dividir logo abaixo previsores_teste = previsores_teste.astype('float32') previsores_treinamento /= 255 #Normalizaçao (1) Pra diminuir o custo operacional, dividimos os valores RGB por 255, #dessa forma temos uma escala de 0 ate 1 previsores_teste /= 255 classe_treinamento = np_utils.to_categorical(y_treinamento, 10) #Transformamos os dados em variaveis dummy classe_teste = np_utils.to_categorical(y_teste, 10) # + classificador = Sequential() classificador.add(Conv2D(32, (3,3), input_shape=(28, 28, 1), activation = 'relu')) classificador.add(MaxPooling2D(pool_size = (2,2))) classificador.add(Flatten()) classificador.add(Dense(units = 128, activation = 'relu')) classificador.add(Dense(units = 10, activation = 'softmax')) classificador.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy']) # - # # Implementando o DataGenerator # https://keras.io/api/preprocessing/image/ # fit_generator: é o metodo responsavel por treinar e gerar as nossas imagens # 1- steps_per_epochs: Numero de etapas de amostras que serao analisadas antes de passar a epoca. # Se colocar 60000 ela pegara todos os registros e aplicara as tecnicas que passamos em ImageDataGenerator. # A documentaçao recomenda dividir pelo numero de neuroneos da primeira camada densa. # + gerador_treinamento = ImageDataGenerator(rotation_range = 7, horizontal_flip = True, shear_range = 0.2, height_shift_range = 0.07, zoom_range = 0.2) # Os argumentos aqui mostram qual as tecnicas que queremos aplicar as imagens gerador_teste = ImageDataGenerator() # Nesse exemplo na nossa base de teste nao precisamos gerar nada base_treinamento = gerador_treinamento.flow(previsores_treinamento, classe_treinamento, batch_size = 128) base_teste = gerador_teste.flow(previsores_teste, classe_teste, batch_size = 128) classificador.fit_generator(base_treinamento, steps_per_epoch = 600000 / 128, epochs = 5, validation_data = base_teste, validation_steps = 10000 / 128) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # NOTE: Videos and graphs do not display immediately on all devices. If this is the case for you, click the "Cell" menu option, and select "Run All". 
If you still do not see the videos/graphs, try a different browser. If that also does not work, please let your instructor know by email. # # # The Logistic Equation # # In this notebook, we analyze the Logistic Equation. The Logistic Equation is a separable differential equation that represents limited growth. The Logistic Equation is perhaps the second-most important differential equation (after the World's Most Important Differential Equation, which represents exponential growth [and decay]), at least in the context of modelling in the life sciences. # # > Visit [the Wikipedia page on the Logistic Equation](https://en.wikipedia.org/wiki/Logistic_function) to learn more about the applicability of the Logistic Equation in a wide variety of scientific and social disciplines. # # We solve the Logistic Equation, and compare the solution to the outcome of qualitative analysis (phase plot analysis, covered in MATH 134). For readers who did not take MATH 134, we review phase plot analysis. # # ### 1. About the Logistic Equation # # While exponential growth (or decay) is a reasonable model for some situations, such as the initial growth of a population, or the initial growth of the number of infected individuals in a population in the context of an infectious disease spreading through the population, eventually the exponential model no longer applies. For example, at a certain size, a growing population will reach a limit where the habitat will not support any more individuals. # # Suppose that the per-capita growth rate of a population of size $N$ declines linearly from a value of $r$ when $N=0$ to a value of 0 when $N=K$, where $K$ is the carrying capacity of the environment; then # $$\frac{dN/dt}{N} = r \left( 1 - \frac{N}{K} \right),$$ # where $N(t)$ is the size of the population at time $t$. The left-hand side of the equation expresses the per-capita growth rate (growth rate $dN/dt$ divided by the number of individuals in the population), and the right-hand side of the equation expresses the linear decrease in the per-capita growth rate as a function of $N$. When $N=0$, the per-capita growth rate is $r$; when $N=K$, the per-capita growth rate is $0$. # # Rewriting the equation, we obtain the Logistic Equation, namely # $$\frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right)$$ # or # $$\frac{dN}{dt} = \frac{r}{K} N \left( K - N \right).$$ # We recognize the Logistic Equation as a differential equation, where $t$ is the independent variable, $N$ is the dependent variable, and $r$ and $K$ are parameters (constants that may vary from one population or environment to another). # # We seek a function $N(t)$ that satisfies the differential equation. We will do so in Section 4 below. # # ### 2. Review of Phase Plot Analysis # # Before we find the exact form of the solution, we will determine the qualitative nature of the solutions using phase plot analysis of the differential equation. For readers who have taken MATH 134, this is a review; for other readers, this is an introduction. # # Consider a generic differential equation $\displaystyle \frac{dy}{dt} = f(y)$. Here, we seek a function $y(t)$ that satisfies the differential equation.
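# To make this concrete, the following cell sketches the graph of $\frac{dy}{dt} = f(y)$ versus $y$ for the logistic right-hand side $f(N) = r N \left( 1 - \frac{N}{K} \right)$, with the parameter values $r = \frac{3}{2}$ and $K = 4000$ used later in this notebook. This cell is an added sketch (the plotting choices are not from the original notebook); the zero crossings of $f$ are the equilibria.

import numpy as np
import matplotlib.pyplot as plt

r, K = 1.5, 4000.0
N = np.linspace(-500, 5000, 400)
f = r * N * (1 - N / K)

plt.axhline(0, color='gray', linewidth=0.5)
plt.plot(N, f)
plt.plot([0, K], [0, 0], 'ro')           # equilibria: the values of N where f(N) = 0
plt.xlabel('$N$')
plt.ylabel('$dN/dt = f(N)$')
plt.show()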
# # Phase plot analysis is facilitated by the graph of $\displaystyle \frac{dy}{dt} = f(y)$ versus $y$, as summarized in the following graphic: # # ![Fig-PhaselineAnalysis.jpg](attachment:Fig-PhaselineAnalysis.jpg) # # Given the graph of $\displaystyle \frac{dy}{dt} = f(y)$ versus $y$ on the left, we aim to infer qualitative sketches of the family of solutions, that is, we aim to infer the graph of $y(t)$ versus $t$ on the right. # # > This is a challenging concept. Do take a moment to understand the difference between the two graphs. # # Phase plot analysis brings together the following to infer the graph of $y(t)$ versus $t$: # 1. Whenever $f(y) > 0$, $\displaystyle \frac{dy}{dt} > 0$, and so $y(t)$ increases. # 2. Whenever $f(y) < 0$, $\displaystyle \frac{dy}{dt} < 0$, and so $y(t)$ decreases. # 3. Whenever $f(y) = 0$, $\displaystyle \frac{dy}{dt} = 0$, and so $y(t)$ does not change. # # In the following video, we give an overview of phase plot analysis. from IPython.display import YouTubeVideo YouTubeVideo('KKiFCTJPYkE') # ### 3. Phase Plot Analysis of the Logistic Equation # # We now return to the Logistic Equation, # $$\frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right).$$ # We let $\displaystyle r = \frac{3}{2}$ and $K = 4000$ to obtain # $$\frac{dN}{dt} = \frac{3}{2} N \left( 1 - \frac{N}{4000} \right).$$ # # In the following video, we use phase plot analysis to infer the qualitative nature of the solutions of this Logistic Equation. from IPython.display import YouTubeVideo YouTubeVideo('-hQGBGUlr3Y') # ### 4. Solving the Logistic Equation # # We now proceed with determining the solution of the Logistic Equation # $$\frac{dN}{dt} = \frac{3}{2} N \left( 1 - \frac{N}{4000} \right)$$ # or # $$\frac{dN}{dt} = \frac{3}{2 \cdot 4000} N \left( 4000 - N \right).$$ # # Note that the Logistic Equation is separable. In differential form, we have # $$ \frac{4000 \cdot dN}{N ( 4000 - N ) } = \frac{3}{2} dt$$ # provided that $N \ne 0$ or $N \ne 4000$. The latter is important; we will return to these conditions later. # # We now integrate both sides: # $$\int \frac{4000 \cdot dN}{N ( 4000 - N ) } = \frac{3}{2} \int dt.$$ # The integral on the left-hand side can be evaluated using the method of partial fractions. It is left as an exercise to the reader to show that # $$\frac{4000}{N ( 4000 - N ) } = \frac{1}{N} + \frac{1}{4000-N}.$$ # We thus obtain # $$\ln|N| - \ln|4000-N| = \frac{3}{2} t + C,$$ # where $C$ is the constant of integration. # Combining the terms on the left-hand side into one term gives # $$\ln \left| \frac{4000-N}{N} \right| = - \frac{3}{2} t - C.$$ # Exponentiating both sides gives # $$\left| \frac{4000-N}{N} \right| = e^{ - \frac{3}{2} t - C } = e^{-\frac{3}{2} t} e^{-C} = \hat{C}e^{ - \frac{3}{2} t},$$ # where $\hat{C} = e^{-C}$. # Note that $\hat{C}$ is positive. Allowing $\hat{C}$ to be non-positive as well means that we can drop the absolute value on the left-hand side, that is # $$\frac{4000-N}{N} = C e^{ - \frac{3}{2} t },$$ # where we have dropped the hat from the constant $C$ for convenience. # It is left as an exercise for the reader to work through the algebra # to write $N$ explicitly as as function of $N$, resulting in # $$N(t) = \frac{ 4000 }{ 1 + C e^{-\frac{3}{2} t } }.$$ # The latter is the general solution of the Logistic Equation, defining a family of solutions. # # If we are given an initial condition, $N(0) = N_0$, then we can determine $C$. It is left as an exercise for the reader to show that $\displaystyle C = \frac{4000 - N_0}{N_0}$. 
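# The following cell is an added sketch (the initial values chosen here are arbitrary) that plots the family of solutions $N(t) = \frac{4000}{1 + C e^{-\frac{3}{2} t}}$ with $C = \frac{4000 - N_0}{N_0}$ for several initial values $N_0$.

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 6, 200)
for N0 in [200, 1000, 2000, 4000, 6000]:
    C = (4000 - N0) / N0
    N = 4000 / (1 + C * np.exp(-1.5 * t))
    plt.plot(t, N, label='$N_0 = {}$'.format(N0))

plt.axhline(4000, color='gray', linestyle='--', linewidth=0.5)   # carrying capacity K
plt.xlabel('$t$')
plt.ylabel('$N(t)$')
plt.legend()
plt.show()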
# # Recall that rewriting the Logistic Equation in differential form was valid only when $N \ne 0, 4000$. It turns out that $N(t) = 0$ and $N(t) = 4000$ also satisfy the Logistic Equation (check it!), and therefore also are solutions. These solutions correspond to initial conditions $N_0 = 0$ and $N_0 = 4000$, respectively. # # Putting everything together, the particular solution of the Logistic Equation is # $$N(t) = # \begin{cases} # \frac{ 4000 }{ 1 + \frac{4000 - N_0}{N_0} e^{-\frac{3}{2} t } } & \text{if } N_0 > 0 \\ # 0 & \text{if } N_0 = 0. # \end{cases}$$ # # The graph of $N(t)$ for several values of $N_0$ is shown below. This is precisely the graph that we found earlier from phase plot analysis. # # ![Fig-Solution.jpg](attachment:Fig-Solution.jpg) # # Note that $N(t)$ has 4000 as the horizontal asymptote for all $N_0 > 0$. If $0 < N_0 < 4000$, the population eventually reaches carrying capacity, but does not exceed the carrying capacity. If $N_0 > 4000$, the population self-regulates and decreases towards the carrying capacity. # # The advantage of knowing $N(t)$ explicitly is that we can determine/predict the exact size of the population at any given time $t$. # # ### 5. Summary # # The Logistic Equation # # - The Logistic Equation is $$\frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right),$$ where $r$ is a per-capita growth rate, and $K$ is the carrying capacity. # # # - The Logistic Equation is a separable differential equation that can be solved explicitly for $N(t)$. # # Phase Plot Analysis # # - Phase plot analysis facilitates the visualization of the qualitative nature of solutions to autonomous differential equations, to locate their equilibria, and to determine properties of these equilibria. # # # - A phase plot shows the graph of $\displaystyle \frac{dy}{dt} = g(y)$ versus $y$. # 1. Whenever $g(y)>0$, then $y$ is increasing. # 2. Whenever $g(y)<0$, then $y$ is decreasing. # 3. When $g(\hat{y})=0$, then $y=\hat{y}$ is an equilibrium. # # ![07p431_f01.jpg](attachment:07p431_f01.jpg) # # Credit: Biocalculus: Calculus for the Life Sciences, and , Cengage Learning, 2015. # # ### 6. Further Study # # Please refer to Sections 7.2 and 7.4 in the textbook for additional treatment of these topics. # # ### 7. Don't Forget # # Don't forget to return to eClass to complete the pre-class quiz. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="vZe8oEJj2EZR" # # + colab={} colab_type="code" id="6Ks9_kkrCIQi" import numpy as np import pandas as pd # + # load data dataX=pd.read_csv("trainX.csv") dataA=pd.read_csv("trainA.csv") dataY=pd.read_csv("trainY.csv") # remove index dataX=dataX.values[:,1:] dataY=dataY.values[:,1:-1] dataA=dataA.values[:,1:] # + colab={} colab_type="code" id="F77bYonkF8AE" # split cure or not cure 치료한환자데이터=pd.read_csv("2번 열을 제거한 치료한 환자 데이터.csv") 치료하지않은환자데이터=pd.read_csv("8번 열을 제거한 치료하지 않은 환자 데이터.csv") # - 치료한환자데이터 치료하지않은환자데이터 data_CY=치료한환자데이터['time'] dataNCY=치료하지않은환자데이터['time'] data_CX = 치료한환자데이터.loc[:,['X0','X1','X3','X4','X5','X6','X7','X8','X9','X10','X11','X12','X13','X14','X15','X16']] dataNCX = 치료하지않은환자데이터.loc[:,['X0','X1','X2','X3','X4','X5','X6','X7','X9','X10','X11','X12','X13','X14','X15','X16']] data_CX dataNCX # + colab={} colab_type="code" id="8bTfqtUGPvsq" from keras import models from keras import layers import tensorflow as tf # + def build_dnn_swish_adam(): # 2. 모델의 구조를 BatchNormalization layer를 사용하여 만든다. X = tf.keras.layers.Input(shape=[16]) H = tf.keras.layers.Dense(2048)(X) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 1 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 2 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 3 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 4 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 5 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 6 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 7 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 8 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 9 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 10 H = tf.keras.layers.Dense(2048)(X) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 11 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 12 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 13 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 14 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 15 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 16 H = 
tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 17 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 18 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 19 H = tf.keras.layers.Dense(2048)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) # Hidden layer 20 Y = tf.keras.layers.Dense(1)(H) model = tf.keras.models.Model(X, Y) model.compile(loss='mse', optimizer='adam', metrics=['mean_absolute_error']) return model def build_model_NC_100_stepper(): # 2. 모델의 구조를 BatchNormalization layer를 사용하여 만든다. X = tf.keras.layers.Input(shape=[16]) H = tf.keras.layers.Dense(2048)(X) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) H = tf.keras.layers.Dense(1024)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) H = tf.keras.layers.Dense(512)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) H = tf.keras.layers.Dense(256)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) H = tf.keras.layers.Dense(128)(H) H = tf.keras.layers.BatchNormalization()(H) H = tf.keras.layers.Activation('swish')(H) Y = tf.keras.layers.Dense(1)(H) model = tf.keras.models.Model(X, Y) model.compile(loss='mse', optimizer='adam', metrics=['mean_absolute_error']) return model # - # 치료 데이터 num_epochs = 10000 all_mae_histories_cured = [] k = 4 num_val_samples = len(data_CX) // k for i in range(k): print('처리중인 폴드 #', i) # 검증 데이터 준비: k번째 분할 val_data = data_CX[i * num_val_samples: (i + 1) * num_val_samples] val_targets = data_CY[i * num_val_samples: (i + 1) * num_val_samples] # 훈련 데이터 준비: 다른 분할 전체 partial_train_data = np.concatenate( [data_CX[:i * num_val_samples], data_CX[(i + 1) * num_val_samples:]], axis=0) partial_train_targets = np.concatenate( [data_CY[:i * num_val_samples], data_CY[(i + 1) * num_val_samples:]], axis=0) # 케라스 모델 구성(컴파일 포함) model = build_dnn_swish_adam() # 모델 훈련(verbose=0 이므로 훈련 과정이 출력되지 않습니다) history = model.fit(partial_train_data, partial_train_targets, validation_data=(val_data, val_targets), epochs=num_epochs, batch_size=1, verbose=0) mae_history_cured = history.history['val_mean_absolute_error'] all_mae_histories_cured.append(mae_history_cured) # 치료하지 않은 데이터 num_epochs = 10000 all_mae_histories_non_cured = [] k = 4 num_val_samples = len(dataNCX) // k for i in range(k): print('처리중인 폴드 #', i) # 검증 데이터 준비: k번째 분할 val_data = dataNCX[i * num_val_samples: (i + 1) * num_val_samples] val_targets = dataNCY[i * num_val_samples: (i + 1) * num_val_samples] # 훈련 데이터 준비: 다른 분할 전체 partial_train_data = np.concatenate( [dataNCX[:i * num_val_samples], dataNCX[(i + 1) * num_val_samples:]], axis=0) partial_train_targets = np.concatenate( [dataNCY[:i * num_val_samples], dataNCY[(i + 1) * num_val_samples:]], axis=0) # 케라스 모델 구성(컴파일 포함) model = build_dnn_swish_adam() # 모델 훈련(verbose=0 이므로 훈련 과정이 출력되지 않습니다) history = model.fit(partial_train_data, partial_train_targets, validation_data=(val_data, val_targets), epochs=num_epochs, batch_size=1, verbose=0) mae_history_non_cured = history.history['val_mean_absolute_error'] all_mae_histories_non_cured.append(mae_history_non_cured) # 치료 의사 결정 모델과 비치료 의사 결정 모델의 평균 MAE 값의 히스토리를 구한다. 
average_mae_history_cured = [ np.mean([x[i] for x in all_mae_histories_cured]) for i in range(num_epochs)] average_mae_history_non_cured = [ np.mean([x[i] for x in all_mae_histories_non_cured]) for i in range(num_epochs)] import matplotlib.pyplot as plt # 치료 의사 결정 모델 예상 생존 시간을 그래프로 출력한다. plt.plot(range(1, len(average_mae_history_cured) + 1), average_mae_history_cured) plt.xlabel('Epochs') plt.ylabel('Validation MAE') plt.show() # 치료하지 않는 의사 결정 모델 예상 생존 시간을 그래프로 출력한다. plt.plot(range(1, len(average_mae_history_non_cured) + 1), average_mae_history_non_cured) plt.xlabel('Epochs') plt.ylabel('Validation MAE') plt.show() def smooth_curve(points, factor=0.9): smoothed_points = [] for point in points: if smoothed_points: previous = smoothed_points[-1] smoothed_points.append(previous * factor + point * (1 - factor)) else: smoothed_points.append(point) return smoothed_points # + smooth_mae_history_cured = smooth_curve(average_mae_history_cured[10:]) plt.plot(range(1, len(smooth_mae_history) + 1), smooth_mae_history_cured) plt.xlabel('Epochs') plt.ylabel('Validation MAE') plt.show() # + smooth_mae_history_non_cured = smooth_curve(average_mae_history_cured[10:]) plt.plot(range(1, len(smooth_mae_history) + 1), smooth_mae_history_non_cured) plt.xlabel('Epochs') plt.ylabel('Validation MAE') plt.show() # + colab={} colab_type="code" id="0oeHzl4DXr4P" # 3.데이터로 모델을 학습(FIT)합니다. model = build_dnn_swish_adam() # 최저 loss 값 : 0.0492 model.fit(data_CX, data_CY, epochs=30000, batch_size=16, verbose=0) model.fit(data_CX, data_CY, epochs=1, batch_size=16, verbose=1) # + colab={"base_uri": "https://localhost:8080/", "height": 386} colab_type="code" id="kzsyJB09b3t6" outputId="801dfd98-ec7d-4f5d-97cb-fefbf54ef5b4" # 3.치료하지 않았고 생존시간이 100 이하인 환자 데이터로 모델을 학습(FIT)합니다. model_NC = build_dnn_swish_adam() # 최저 loss 값 : 0.0159 model_NC.fit(dataNCX, dataNCY, epochs=30000, batch_size=16,verbose=0) model.fit(dataNCX, dataNCY, epochs=1, batch_size=16, verbose=1) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="B6OBWoPGHuWw" outputId="a53a20ec-daaa-497f-d427-9bde00b08329" 치료_환자_파일경로 = '2번 열을 제거한 치료한 환자 테스트 데이터.csv' 비치료_환자_파일경로 = '8번 열을 제거한 치료하지 않은 환자 테스트 데이터.csv' 치료_환자_데이터 = pd.read_csv(치료_환자_파일경로) 비치료_환자_데이터 = pd.read_csv(비치료_환자_파일경로) # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="iEzT2ja2IWrd" outputId="699e3367-7e90-4bc5-c92a-3c1339be05d0" # 치료할 경우 생존 시간을 구한다. cured = model.predict(치료_환자_데이터) # 치료하지 않을 경우 생존 시간을 구한다. 
non_cured = model_NC.predict(비치료_환자_데이터) # 치료한 경우 생존 시간 > 치료하지 않은 경우 생존 시간 : 1, # 치료한 경우 생존 시간 > 치료하지 않은 경우 생존 시간 : 0 result = np.where(cured > non_cured, 1, 0) print(result) # + colab={} colab_type="code" id="xdWBw8y0NZMB" import csv # 이차원 리스트 with open('result_20200902_2.csv','w', newline='') as f: makewrite = csv.writer(f) for value in result: makewrite.writerow(value) # + colab={"base_uri": "https://localhost:8080/", "height": 88} colab_type="code" id="TQ9WCCfehqVH" outputId="909e27c3-fb09-4021-e84b-69715a0c87c0" """ from openpyxl import Workbook write_wb = Workbook() #이름이 있는 시트를 생성 #write_ws = write_wb.create_sheet('생성시트') #Sheet1에다 입력 write_ws = write_wb.active write_ws['A1'] = 'Title' write_ws['B1'] = 'action' #행 단위로 추가 # write_ws.append([1,2,3]) #셀 단위로 추가 i = 1 for value in result: write_ws.cell(i,2, value) i= i+1 write_wb.save('/content/drive/My Drive/traindata/result_20200824_3.csv') """ # + colab={} colab_type="code" id="RAlZ120_AXbj" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # VacationPy # ---- # # #### Note # * Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing. # # * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. # + # Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import numpy as np import requests import gmaps import os # Import API key from api_keys import g_key # - # ### Store Part I results into DataFrame # * Load the csv exported in Part I to a DataFrame # + # To get File Path file_path = "../WeatherPy/output_data/cities.csv" cities_df = pd.read_csv(file_path) cities_df['Date'] = cities_df['Date'].astype(int) cities_df # - # ### Humidity Heatmap # * Configure gmaps. # * Use the Lat and Lng as locations and Humidity as the weight. # * Add Heatmap layer to map. gmaps.configure(api_key=g_key) locations = cities_df[['Lat','Lng']] locations humidity_cities = cities_df['Humidity'] humidity_cities # cities_df['Humidity'].max() # + max_humidity = humidity_cities.max() # Plot Heatmap fig = gmaps.figure(center=(0,0),zoom_level=2) # Create heat layer heat_layer = gmaps.heatmap_layer(locations, weights=humidity_cities, dissipating=False, max_intensity=max_humidity, point_radius=3, opacity=0.6) # Add layer fig.add_layer(heat_layer) # Display figure fig # - # ### Create new DataFrame fitting weather criteria # * Narrow down the cities to fit weather conditions. # * Drop any rows will null values. # + # A max temperature lower than 80 degrees but higher than 70. # Wind speed less than 10 mph. # Zero cloudiness. # Drop any rows that don’t contain all three conditions. You want to be sure the weather is ideal. weather_fit_df = cities_df.loc[(cities_df['Max Temp'] < 80) & (cities_df['Max Temp'] > 70) & (cities_df['Wind Speed'] < 10) & (cities_df['Cloudiness'] == 0)] weather_fit_df # - # ### Hotel Map # * Store into variable named `hotel_df`. # * Add a "Hotel Name" column to the DataFrame. # * Set parameters to search for hotels with 5000 meters. # * Hit the Google Places API for each city's coordinates. # * Store the first Hotel result into the DataFrame. # * Plot markers on top of the heatmap. 
hotel_df = weather_fit_df[['City','Lat','Lng','Country']].copy() # weather_fit_df hotel_df["Hotel Name"]="" hotel_df # + from pprint import pprint # To get Parameters for gmap params={ "radius":5000, "types":'lodging', "key": g_key } base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json" # To get first Hotel Name nearby each city and add it to dataFrame for index, row in weather_fit_df.iterrows(): params["location"] = f"{row['Lat']},{row['Lng']}" response = requests.get(base_url, params=params) json_response = response.json() try: hotel_df.loc[index, "Hotel Name"] = json_response["results"][0]["name"] except: print("Missing field/result... skipping.") pass hotel_df # + # NOTE: Do not change any of the code in this cell # Using the template add the hotel marks to the heatmap info_box_template = """
<dl>
<dt>Name</dt>
<dd>{Hotel Name}</dd>
<dt>City</dt>
<dd>{City}</dd>
<dt>Country</dt>
<dd>{Country}</dd>
</dl>
    """ # Store the DataFrame Row # NOTE: be sure to update with your DataFrame name hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()] locations = hotel_df[["Lat", "Lng"]] # + # Add marker layer ontop of heat map markers = gmaps.marker_layer(locations) hotel_layer = gmaps.symbol_layer( locations, fill_color='rgba(0, 150, 0, 0.4)', stroke_color='rgba(0, 0, 150, 0.4)', scale=4, info_box_content=hotel_info ) # Display figure fig.add_layer(markers) fig.add_layer(hotel_layer) fig # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Tarea 2: Clase 3 # # # # Cada clase que veamos tendrá una tarea asignada, la cual contendrá problemas varios que se pueden resolver con lo visto en clase, de manera que puedas practicar lo que acabas de aprender. # # En esta ocasión, la tarea tendrá ejercicios relativos a la clase 3, de programación orientada a objetos. # # Para resolver la tarea, por favor cambiar el nombre del archivo a "Tarea2_ApellidoNombre.ipynb", sin acentos ni letras ñ (ejemplo: en mi caso, el archivo se llamaría "Tarea2_JimenezEsteban.ipynb"). Luego de haber cambiado el nombre, resolver cada uno de los puntos en los espacios provistos. # # Referencias: # - https://realpython.com/python3-object-oriented-programming/ # - https://www.w3resource.com/python-exercises/class-exercises/ # - https://anandology.com/python-practice-book/object_oriented_programming.html # ___ # ## 1. # # Usando la clase `Perro()` que definimos en la clase, instanciar tres nuevos perros, cada uno con edades diferentes. # # Luego, fuera de la clase, escriba una función llamada `perro_mas_viejo()`, cuyos argumentos sean 3 objetos de la clase perro e imprima el nombre del perro más viejo con su edad; algo como: # # > El perro más viejo es `firulais`, y tiene `8` años. class Perro(): def __init__(self, nombre, edad): self.nombre = nombre self.edad = edad # + perro1 = Perro(nombre="Chief", edad=0.7) perro2 = Perro(nombre="Mandarina", edad=3) perro3 = Perro(nombre="Pascal", edad=1.2) perro1.edad, perro2.edad, perro3.edad # + def perro_mas_viejo(perro1, perro2, perro3): viejo = max(perro1.edad, perro2.edad, perro3.edad) print(f"El perro más viejo tiene {viejo} años") return perro_mas_viejo(perro1, perro2, perro3) #No supe como traerme el nombre del perro más viejo. # - # ## 2. # # Escriba una clase cuyo nombre sea `Circulo`, la cual tenga un atributo de instancia `radio`, y dos métodos para calcular el área y el perímetro del círculo. # # Además, instancie tres círculos con diferentes radios y calcule su área y su perímetro. import numpy as np class Circulo(): def __init__(self, radio): if(radio <= 0): print("No se puede tener un radio de 0 o menor a 0") return 0 else: self.radio = radio def area(self): return np.pi * self.radio**2 def perimetro(self): return 2 * np.pi * self.radio cir1 = Circulo(66) cir2 = Circulo(12) cir3 = Circulo(43) cir1.area(), cir3.area(), cir2.area() cir1.perimetro(), cir3.perimetro(), cir2.perimetro() # ## 3. # # Escriba una clase cuyo nombre sea `CuentaBancaria`, y que inicialice un atributo de instancia llamado `saldo` con valor cero. 
Además, esta clase debe contener tres métodos: # - método `depositar()`: el cual reciba el monto de depósito y actualice el atributo de `saldo`; # - método `retirar()`: el cual reciba el monto a retirar, y actualice el atributo de `saldo`; # - método `imprimir_saldo()`: el cual imprima el saldo actual de la cuenta. # # Instancie una cuenta bancaria, y haga varios retiros y depósitos. class CuentaBancaria(): def __init__(self, saldo = 0): self.saldo = saldo def depositar(self, deposito): self.saldo = self.saldo + deposito return self.saldo def retirar(self, retiro): if self.saldo - retiro < 0: print("Lo sentimos, no cuenta con saldo suficiente para retirar esa cantidad") else: self.saldo = self.saldo - retiro return self.saldo def imprimir_saldo(self): print(f"Su saldo actual es de ${self.saldo}") C0123 = CuantaBancaria(1000) C2523 = CuantaBancaria(5000) C6723 = CuantaBancaria(12000) C0123.depositar(5000), C2523.depositar(800), C6723.depositar(12000) C0123.retirar(4000), C2523.retirar(8000), C6723.retirar(1100) C0123.imprimir_saldo(), C2523.imprimir_saldo(), C6723.imprimir_saldo() # ## 4. # # A partir de lo resuelto en el problema anterior, cree una clase llamada `CuentaSaldoMinimo` que herede de la clase `CuentaBancaria`. # # La clase hija `CuentaSaldoMinimo` debe requerir un parámetro adicional de *saldo mínimo*, y debe sobreescribir el método `retirar()` de la clase padre, de forma que al llamar el método `retirar()`: # - si el saldo que resultaría después del retiro es mayor al saldo mínimo, se efectúa el retiro; # - si el saldo que resultaría después del retiro es menor al saldo mínimo, se imprima un mensaje con la cantidad máxima que se puede retirar y no se efectúe el retiro. # # Instancie una cuenta bancaria con saldo mínimo, y haga varios retiros y depósitos. class CuentaSaldoMinimo(CuentaBancaria): def __init__(self, saldo, saldo_minimo): super().__init__(saldo) self.saldo_minimo = saldo_minimo def retirar(self, retiro): if self.saldo - retiro >= self.saldo_minimo: self.saldo = self.saldo - retiro print("Transacción exitosa") else: retiro_max = self.saldo - self.saldo_minimo print(f"Lo siento mi chavo lo máximo que puedes sacar es {retiro_max}") return print(f"Su saldo actual es de ${self.saldo}") C6789 = CuentaSaldoMinimo(10000, 5000) C6789.imprimir_saldo() C6789.retirar(6000) C4623 = CuentaSaldoMinimo(1000, 100) C4623.depositar(500) C4623.imprimir_saldo() C4623.retirar(400) C2342 = CuentaSaldoMinimo(1000, 100) C2342.depositar(1000) C2342.imprimir_saldo() C2342.retirar(2000) # # #
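# ---
# Nota (added sketch): in exercise 1 above, the comment says the author could not retrieve the *name* of the oldest dog. One possible way, not part of the original submission, is to pass a key function to `max`, reusing the `Perro` instances created above.

def perro_mas_viejo(perro1, perro2, perro3):
    # max with key=... returns the Perro object itself, not just its age
    viejo = max(perro1, perro2, perro3, key=lambda p: p.edad)
    print(f"El perro más viejo es {viejo.nombre}, y tiene {viejo.edad} años.")
    return viejo

perro_mas_viejo(perro1, perro2, perro3)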
    # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # # #
    # # prepared by (QLatvia) #
    #
# This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros.
    # $ \newcommand{\bra}[1]{\langle #1|} $ # $ \newcommand{\ket}[1]{|#1\rangle} $ # $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ # $ \newcommand{\dot}[2]{ #1 \cdot #2} $ # $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ # $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ # $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ # $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ # $ \newcommand{\mypar}[1]{\left( #1 \right)} $ # $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ # $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ # $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ # $ \newcommand{\onehalf}{\frac{1}{2}} $ # $ \newcommand{\donehalf}{\dfrac{1}{2}} $ # $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ # $ \newcommand{\vzero}{\myvector{1\\0}} $ # $ \newcommand{\vone}{\myvector{0\\1}} $ # $ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $ # $ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ # $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ # $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ # $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ # $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $ # $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ # $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ # $ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ #

# # Quantum Tomography

    # # We start with initializing a qubit with an arbitrary state by using a rotation. #

# ### Initialize a qubit with an arbitrary state

    # # # We can specify a (real-valued) quantum state by its angle ranged from 0 to $ 2\pi $ radian. # # If $ \theta $ is our angle, then our quantum state is $ \ket{v} = \myvector{\cos \theta \\ \sin \theta} $. # # How can we set a qubit to an arbitrary quantum state when started in state $ \ket{0} $? # # We can use a rotation operator. Rotations preserve the lengths of vectors, and so they are quantum operators. # # In qiskit, ry-gate can be used for rotation in 2-dimensional real-valued plane. # #

# ### Technical remark

    # # Even though, we focus on only real-valued quantum systems in this tutorial, the quantum state of a qubit is represented by 2-dimensional complex-valued vector in general. To visually represent a complex number, we use two dimensions. So, to visually represent the state of a qubit, we use four dimensions. # # On the other hand, we can still visualize any state of a qubit by using certain mapping from four dimensions to three dimensions. This representation is called as Bloch sphere. # # The rotation operators over a single (complex-valued) qubit are defined on Bloch sphere. The names of gates "x", "y", or "z" refer to the axes on Bloch sphere. When we focus on real-valued qubit, then we should be careful about the parameter(s) that a gate takes. # # In qiskit, ry-gate makes a rotation around $y$-axis with the given angle, say $\theta$, on Bloch sphere. This refers to a rotation in our real-valued $\ket{0}$-$\ket{1}$ plane with angle $ \frac{\theta}{2} $. Therefore, we should provide the twice of the desired angle in this tutorial. #

# ### Rotations with ry-gate

    # # The ry-gate is used for rotation in 2-dimensional real-valued plane. # # If our angle is $ \theta $ radians, then we pass $ 2 \theta $ radians as the parameter to ry-gate. # # Then ry-gate implements the rotation with angle $\theta$. # # The default direction of a rotation by ry-gate is counterclockwise. # # mycircuit.ry(2*angle_of_rotation,quantum_register) # + from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer from math import pi # we define a quantum circuit with one qubit and one bit qreg1 = QuantumRegister(1) # quantum register with a single qubit creg1 = ClassicalRegister(1) # classical register with a single bit mycircuit1 = QuantumCircuit(qreg1,creg1) # quantum circuit with quantum and classical registers # angle of rotation in radian rotation_angle = 2*pi/3 # rotate the qubit with rotation_angle mycircuit1.ry(2*rotation_angle,qreg1[0]) # measure the qubit mycircuit1.measure(qreg1,creg1) # - # draw the circuit mycircuit1.draw(output='mpl') # + # execute the program 1000 times job = execute(mycircuit1,Aer.get_backend('qasm_simulator'),shots=1000) # print the results counts = job.result().get_counts(mycircuit1) print(counts) # counts is a dictionary # + from math import sin,cos # the quantum state quantum_state = [ cos(rotation_angle) , sin (rotation_angle) ] the_expected_number_of_zeros = 1000*cos(rotation_angle)**2 the_expected_number_of_ones = 1000*sin(rotation_angle)**2 # expected results print("The expected value of observing '0' is",round(the_expected_number_of_zeros,4)) print("The expected value of observing '1' is",round(the_expected_number_of_ones,4)) # + # draw the quantum state # %run qlatvia.py draw_qubit() draw_quantum_state(quantum_state[0],quantum_state[1],"|v>") # - #

# ### Task 1

    # # You are given 1000 copies of an arbitrary quantum state which lies in the first or second quadrant of the unit circle. # # This quantum state can be represented by an angle $ \theta \in [0,180) $. # # Please execute the following cell, but do not check the value of $\theta$. # + from random import randrange from math import pi theta = randrange(18000)/18000 * pi # - # Your task is to guess this quantum state by writing quantum programs. # # We assume that the quantum state is given to us with the following code. # # from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer # # # we define a quantum circuit with one qubit and one bit # qreg2 = QuantumRegister(1) # quantum register with a single qubit # creg2 = ClassicalRegister(1) # classical register with a single bit # circuit2 = QuantumCircuit(qreg2,creg2) # quantum circuit with quantum and classical registers # # # rotate the qubit with rotation_angle # circuit2.ry(2*theta,qreg2[0]) # # You should write further codes without using variable $theta$ again. # # You may use measurements or further $ry$-gates. # # You can use 1000 shots in total when executing your quantum programs (you can have more than one program starting with the above code). # # After your guess, please check the actual value and calculate your error in percentage. # + # program 1 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer from math import pi # we define a quantum circuit with one qubit and one bit qreg1 = QuantumRegister(1) # quantum register with a single qubit creg1 = ClassicalRegister(1) # classical register with a single bit circuit1 = QuantumCircuit(qreg1,creg1) # quantum circuit with quantum and classical registers # rotate the qubit with rotation_angle circuit1.ry(2*theta,qreg1[0]) # # your code is here # # + # program 2 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer from math import pi # we define a quantum circuit with one qubit and one bit qreg2 = QuantumRegister(1) # quantum register with a single qubit creg2 = ClassicalRegister(1) # classical register with a single bit circuit2 = QuantumCircuit(qreg2,creg2) # quantum circuit with quantum and classical registers # rotate the qubit with rotation_angle circuit2.ry(2*theta,qreg2[0]) # + # program 3 from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer from math import pi # we define a quantum circuit with one qubit and one bit qreg3 = QuantumRegister(1) # quantum register with a single qubit creg3 = ClassicalRegister(1) # classical register with a single bit circuit3 = QuantumCircuit(qreg3,creg3) # quantum circuit with quantum and classical registers # rotate the qubit with rotation_angle circuit3.ry(2*theta,qreg3[0]) # - # click for our solution #
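# ---
# One possible approach is sketched below; it is an added sketch, not the linked solution. It reuses `circuit1` and `circuit2` defined above and spends 500 shots on each program (1000 in total): the first program estimates $\sin^2 \theta$, and the second applies an extra rotation by $-\pi/4$ to decide between the two candidate angles $\theta_0$ and $\pi - \theta_0$.

# +
from math import asin, sqrt, sin

# program 1: measure the given state directly to estimate sin^2(theta)
circuit1.measure(qreg1, creg1)
counts1 = execute(circuit1, Aer.get_backend('qasm_simulator'), shots=500).result().get_counts(circuit1)
p1 = counts1.get('1', 0) / 500
candidate = asin(sqrt(p1))                  # in [0, pi/2]; theta is either this or pi minus this

# program 2: rotate by an extra -pi/4 before measuring, to resolve the quadrant
circuit2.ry(2 * (-pi / 4), qreg2[0])
circuit2.measure(qreg2, creg2)
counts2 = execute(circuit2, Aer.get_backend('qasm_simulator'), shots=500).result().get_counts(circuit2)
p1_rotated = counts2.get('1', 0) / 500

# pick the candidate whose predicted probability of observing '1' matches program 2 better
cand_a, cand_b = candidate, pi - candidate
pred_a = sin(cand_a - pi / 4) ** 2
pred_b = sin(cand_b - pi / 4) ** 2
guess = cand_a if abs(pred_a - p1_rotated) < abs(pred_b - p1_rotated) else cand_b

print("guessed theta:", guess)
print("actual theta :", theta)
print("error as a percentage of pi:", 100 * abs(guess - theta) / pi)
# -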

# ### Task 2 [extra]

    # # In Task 1, assume that you are given two qubits that are in states $ \myvector{\cos \theta_1 \\ \sin \theta_1} $ and $ \myvector{\cos \theta_2 \\ \sin \theta_2} $, where $ \theta_1,\theta_2 \in [0,\pi) $. # # By following the same assumptions in Task 1, can you approximate $ \theta_1 $ and $ \theta_2 $ by using qiskit? # # Your circuit should have a quantum register with these two qubits, and so your measurement outcomes will be '00', '01', '10', and '11'. #
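# ---
# A minimal added sketch of one possible approach (not the official solution): since the two qubits are in a product state, the probability that qubit $i$ is observed as 1 is $\sin^2 \theta_i$, so each angle can be estimated from the marginal frequencies of the joint counts. Here `theta1` and `theta2` are placeholder angles standing in for the given unknown states; resolving the quadrant of each angle would need extra rotated measurements as in Task 1.

# +
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
from math import pi, asin, sqrt
from random import randrange

theta1 = randrange(18000) / 18000 * pi      # placeholders for the two given states
theta2 = randrange(18000) / 18000 * pi

qreg = QuantumRegister(2)
creg = ClassicalRegister(2)
circuit = QuantumCircuit(qreg, creg)
circuit.ry(2 * theta1, qreg[0])
circuit.ry(2 * theta2, qreg[1])
circuit.measure(qreg, creg)

shots = 1000
counts = execute(circuit, Aer.get_backend('qasm_simulator'), shots=shots).result().get_counts(circuit)

# qiskit prints outcomes as 'q1 q0': the rightmost character corresponds to qubit 0
p1_q0 = sum(v for key, v in counts.items() if key[-1] == '1') / shots
p1_q1 = sum(v for key, v in counts.items() if key[0] == '1') / shots

print("estimate of theta1:", asin(sqrt(p1_q0)), " actual:", theta1)
print("estimate of theta2:", asin(sqrt(p1_q1)), " actual:", theta2)
# -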

# ### Task 3 (Discussion)

    # # If the angle in Task 1 is picked in range $ [0,360) $, then can we determine its quadrant correctly? #

# ### Global phase

    # # Suppose that we have a qubit and its state is either $ \ket{0} $ or $ -\ket{0} $. # # Is there any sequence of one-qubit gates such that we can measuare different results after applying them? # # All one-qubit gates are $ 2 \times 2 $ matrices, and their application is represented by a single matrix: $ A_n \cdot \cdots \cdot A_2 \cdot A_1 = A $. # # By linearity, if $ A \ket{0} = \ket{u} $, then $ A - \ket{0} = -\ket{u} $. Thus, after measurement, the probabilities of observing state $ \ket{0} $ and state $ \ket{1} $ are the same. Therefore, we cannot distinguish them. # # Even though the states $ \ket{0} $ and $ -\ket{0} $ are different mathematically, they are assumed the same from the physical point of view. # # The minus sign in front of $ -\ket{0} $ is also called as global phase. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # List of concepts in AI, ML, Probablity, Statistics # / teleported.in # ### Specificity and Sensitivity # # Ref: # * https://www.youtube.com/watch?v=21Igj5Pr6u4 # * https://www.youtube.com/watch?v=vtYDyGGeQyo # ### Cross Validation # ### Data Leakage # ### Bias Variance Tradeoff # * An algorithm which is highly Biased does not learn from training data # * An algorithm which is high on variance is very sensitive to the training data # * One way or other, they do not have the capacity top adapt to new data # * This is Bias Variance tradeoff # * Tuning an ML Algorith means reaching a sweet spot # ### Confusion Matrix # ### Naive Bayes explanation # # Ref: # * http://stackoverflow.com/questions/10059594/a-simple-explanation-of-naive-bayes-classification # ### Regression vs Classification # # In regression, we try to predict a real number (continuous) # In classification, we try to predict a category (from a set of possible finite categories. Discrete) # ### Clustering # * Purpose is to discover sub population of the population # * Useful to identify outliers (Classification can not do that) # # ### Decision Boundary # For Classification, the bounary within the problem space (the space of all possible feature values) which our classifier learns which separates one class from another # ### Attribute Value Pair # * The way we represent data in machine learning # * We need to represent observations numerically # * You can be very creative in designing how to represent your data in Machine Learning # * Highly unlikely one will come up with a new ML algorithm. But one can show creativity in how to represent data. # * One can achieve gain in performance by engineering the right representation of data. 
# * Each individual obsevation represented with a set of attributes, and each attribute has a number assigned to it: # ```python # person = {height_inch: 70, weight_kg: 85, age_years: 45} # ``` # # * Unordered _bag-of-features_ # * Any data needs to be converted to this form before learning takes place # * Attributes can be categorised into: # * Categorical: Red, Blue, Yellow # * Discrete list of possible values - no ordering # * Can only hold one value at a time (out of the many categories) # * Mutually exclusive # * Categories turn to numbers when you code it up # * But not really numbers, can only do == and != # * Tagging process is important, needs to be controlled # * Ordinal: high, medium, low # * Like categorical, but has natural ordering to it # * Encoded as numbers, can test for ordering in addition to equality check # * ==, !=, >, <, >=, <= etc. # * Numerical: 1, 2, 3 # * Meaningful to add, multiply, computer mean/variance # * Usually you want to normalize them # * Because raw incoming data can take any range, need to control it # ```python # x` = (x - mean)/st.dev # Will make the values center around 0, with variance of 1 # x` = (x-min)/(max-min) # Makes sense if there are definite max and min values # ``` # * You bring the data to the same scale, so that all units are roughly comparable to each other # * They are sensitive to outliers - unusually large or small values. Must be handled before normalization. Normalization will get confused by outliers. # * BUT - Some outliers might actually be valid data due to skewed distribution # * Systematic extreme values # * Affects regression, kNN, NB. Not DT's # * Fix: log(x) for positive numbers or atan(x) for pos and neg numbers, then normalise # * Can also use Cumulative Distribution Function (Use rank instead of values) # * Monotonic vs Non-monotonic: # * Monotonic e.g.: higher net worth -> lower lending risk # * Non-monotonic: Age -> winning marathon # * Fix: Quantization (Turn numeric to Categorical or Ordinal) # * Can be unsupervised, overlapping # * Time series: series of tweets # * Picking attribute (Detecting preditor in image) - one idea that works is similarity # * If you have several instances/observations and they have the same class # * Blurring helps Machine Learning (recognising numbers, letters) # # Ref: # * https://www.youtube.com/playlist?list=PLBv09BD7ez_4Z5ap8fJOzn-Ezz7WpWI5- # ### Case study: Identifying a Zebra # * In this domain, using pixel won't work - a single pixel no longer carries any meaning # * A zebra can be put upside down and still be zebra # * **Segment** the image into regions - goes into image and identifies homogenous regions - regions which has similar local properties # * Algorithms: BlobWorld, Normalized Cuts # * Caveat: Segmentation algorithms not so much accurate on still pics, good on videos # * Hope that the anomalies would be systematic, so that it happens in every photo # * Compute features describing the region # * position # * relative area # * circumference # * convexity # * orientation # * color frequency # * texture filters etc. 
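# To make the normalization formulas from the attribute-value section above concrete, here is a minimal added sketch (the attribute values are made up) of z-score and min-max scaling, plus the log-then-normalize trick for a skewed attribute:

import numpy as np

x = np.array([12.0, 15.0, 14.0, 10.0, 250.0])        # made-up attribute column with one outlier

z_score = (x - x.mean()) / x.std()                   # centers around 0, variance 1
min_max = (x - x.min()) / (x.max() - x.min())        # maps values into [0, 1]

log_x = np.log(x)                                    # log transform tames the outlier
log_then_z = (log_x - log_x.mean()) / log_x.std()

print(z_score)
print(min_max)
print(log_then_z)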
# # ### Case study: text classification - spam / not spam # * Input: string of characters # * Idea: Words carry meaning # * Naive way: Words as values (as many values as words in the email, categorical) # * For robustness, instead of tying a word to a position in the email, tie it to a word in the vocabulary # * May use frequency or tf-idf weights # * Use binary values for attributes # # ### Case study: Music Classification # * Naive representation # * sample at regular intervals # * X(t) = amplitude at t # * Better alternative # * Music is a time series # * Decompose it into constituent base frequencies f (Fourier transform) # * Find the weight of each f # * The weights of the base frequencies can be used to classify music - they are insensitive to shifts and volume increases # ### Word frequency, TF-IDF weights # * Word Frequency # * How many times the word occurred in the document # * TF-IDF # * TF: Term Frequency, the number of times the word occurred in the document # * IDF: Inverse Document Frequency, words unique to this document, but rarely found in the corpus of other documents, are important for this document # * Score = #times the word appears in this document * log(# total documents / # other documents the term appears in) # * Example: If the word 'automobile' appeared 10 times in this document, if there are 100 documents in the corpus, and if 20 other documents have the word 'automobile' in them, then (using log base 10): # * `Score = 10 * log10(100/20) ≈ 6.99` # ### Robustness of an algorithm # A small change to the input should result in a small change in the output (not a huge one) # ### Learning Algorithms and Predictors # * Learning Algorithms create Predictors (functions) that can then be deployed in production # * Learning Algorithms use training data (sample inputs and expected outputs) to create the Predictor # * For Classification: the Predictor takes the form of a decision boundary. The function creating the decision boundary _is_ the Predictor. And the learning algorithm creates it (the function). # ### Supervised vs Unsupervised # # * Supervised # * Trying to predict a specific quantity # * Have training examples with labels # * Can measure accuracy directly # # * Unsupervised # * Trying to understand data # * Looking for structure or unusual patterns # * Not looking for something specific # * Does not require labeled data # * Evaluation usually indirect or qualitative # * Semi-supervised # * Use unsupervised methods to improve supervised algos # * Usually a few labeled examples + lots of unlabeled ones # ### Types of learning # 1. Classification (Supervised) # * Multiclass Classification # * Classes mutually exclusive # * NB, kNN, DT, Logistic Regression # * Assumes that there are no other classes than those specified (i.e. classes are predefined) # * Binary Classification # * SVM is fundamentally a binary classifier # * One vs Rest can be done # * {a} vs {not a}, {b} vs {not b} # * Classes may overlap # * Instances can be both a and b # * Can be in none of the classes # * E.g. SVM, Logistic (softmax), Perceptron # 2. Regression (Supervised) # 3. Clustering (Unsupervised) # 4. Dimensionality Reduction (Unsupervised) # # # # * Linear Classifier: Draws straight decision boundaries # ### Classification accuracy and imbalanced classes # * Example: Look at a paper and predict if it will lead to a Nobel Prize # * Always saying 'no' will yield a 99.9% accuracy classifier # * This is an example of unbalanced classes, where one class dominates # * Question to ask: What is the right error metric?
# * Give relative weights to false positives, false negatives # * Accuracy/Error rate poor metric (above) # * Want: cost(Misses) > cost (FA) # * What is FA? # ### Generative vs Discriminative learning # * Generative approach # * Probabilistic "model" of each class # * Decision boundary # * Where one model becomes more likely # * Natural use of unlabeled data, can also use labeled data # * Supervised or Unsupervised # * All Generative are Probablistic, but all Probablistic are not Generative, i.e. for some, you can calculate the probablity without first creating a model # * Discriminative # * Focus on decision boundary # * More powerful when we have lots of examples # * Not designed to use unlabeled data # * Only Supervised tasks # ### Outliers # * Isolated instances in your data that don't look right # * E.g. extreme values for one or more attribute value # * Can be detected using confidence intervals # * Remove or threshold # ### Hyper Plane # In geometry a hyperplane is a subspace of one dimension less than its ambient space. If a space is 3-dimensional then its hyperplanes are the 2-dimensional planes, while if the space is 2-dimensional, its hyperplanes are the 1-dimensional lines. This notion can be used in any general space in which the concept of the dimension of a subspace is defined. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Research2018 # language: python # name: research # --- # + import os import numpy as np import pandas as pd import json import pickle from scipy import sparse import scipy.io dataset_name = 'dblp' data_path = os.path.join('../dataset/raw/{}'.format(dataset_name)) # + citations = [] incomming = {} for i in range(4): fn = os.path.join(data_path, 'dblp-ref-{}.json'.format(i)) with open(fn) as in_fn: for line in in_fn: paper = json.loads(line.strip()) citations.append(paper) if 'references' in paper: for ref_id in paper['references']: if ref_id in incomming: incomming[ref_id].append(paper['id']) else: incomming[ref_id] = [paper['id']] df = pd.DataFrame(citations) # + is_first_line = True conferences = {} with open('../dataset/clean/dblp/venue_info.tsv') as in_csv: for line in in_csv: tokens = line.strip().split('\t') if is_first_line: #print(tokens) is_first_line = False else: conf_name = tokens[0] labels = [int(num_str) for num_str in tokens[2].split(',')] labels = [n-2 for n in labels if n > 1] # remove the first label (signal processing has too many documents) conferences[conf_name] = {'name': conf_name, 'label': labels} #conferences[conf_name] = {'name': conf_name, } max_labels = np.max([np.max(val['label']) for key, val in conferences.items()]) min_labels = np.min([np.min(val['label']) for key, val in conferences.items()]) num_labels = max_labels - min_labels + 1 print('label min:{} max:{} total:{}'.format(min_labels, max_labels, num_labels)) # + # remove any row that is not present in the selected venues def is_selected_venue(row): return (row in conferences) print("num paper (before): {}".format(len(df))) df = df[df.venue.apply(is_selected_venue)] print("num paper (after): {}".format(len(df))) # + cut_off_years = 2016 df_train = df[df.year < cut_off_years] df_test = df[df.year >= cut_off_years] num_trains = len(df_train) num_tests = len(df_test) print("num trains: {} num tests: {} ratio: {:.4f}".format(num_trains, num_tests, num_tests / num_trains)) # + #venue_count = df_train.groupby('venue').count().sort_values(['abstract'], 
ascending=False).abstract # + def assign_labels(venue): label_list = conferences[venue]['label'] return np.sum(np.eye(num_labels)[label_list], axis=0).astype(np.int) df_train = df_train.copy() df_train['label'] = df_train.venue.apply(assign_labels) df_train.set_index('id', inplace=True) # set paper as the row index df_test = df_test.copy() df_test['label'] = df_test.venue.apply(assign_labels) df_test.set_index('id', inplace=True) # set paper as the row index num_train_doc_per_labels = np.sum(np.array(list(df_train.label)), axis=0) num_test_doc_per_labels = np.sum(np.array(list(df_test.label)), axis=0) print(num_train_doc_per_labels) print(num_test_doc_per_labels) # - # remove any row that does not have abstract, title, paperId, or venue print("num paper = {}".format(len(df_train))) df_train.dropna(axis=0, subset=['abstract', 'venue', 'year', 'label'], inplace=True) print("num paper = {}".format(len(df_train))) # + # This method adds incoming edges to each node as well as removing any edge that points outside the train set def createEdges(row): if row.references is not np.nan: outgoing_edges = [r for r in row.references if r in df_train.index] else: outgoing_edges = [] if row.name in incomming: incomming_edges = [r for r in incomming[row.name] if r in df_train.index] else: incomming_edges = [] return outgoing_edges + incomming_edges df_train['links'] = df_train.apply(createEdges, axis=1) # Remove any row that has no link print("num paper = {}".format(len(df_train))) df_train = df_train[df_train.links.apply(len) > 0] print("num paper = {}".format(len(df_train))) # There must be no train nodes that references to non-train nodes def count_invalid_edges(refs): return len([r for r in refs if r not in df_train.index]) assert(len(df_train[df_train.links.apply(count_invalid_edges) > 0]) == 0) # + global_id_2_train_id = {node_id: idx for idx, node_id in enumerate(df_train.index)} def convert_2_train_id(ref): return [global_id_2_train_id[r] for r in ref] train_edges = df_train.links.apply(convert_2_train_id) train_graph = {} for node_id, value in train_edges.iteritems(): train_graph[global_id_2_train_id[node_id]] = value print('num train: {}'.format(len(train_graph))) # - # # Process Test Data # remove any row that does not have abstract, title, paperId, or venue print("num paper = {}".format(len(df_test))) df_test.dropna(axis=0, subset=['abstract', 'venue', 'year', 'label'], inplace=True) print("num paper = {}".format(len(df_test))) # + # This method adds incoming edges to each node as well as removing any edge that points outside the train set def createEdges(row): if row.references is not np.nan: outgoing_edges = [r for r in row.references if r in df_train.index] else: outgoing_edges = [] if row.name in incomming: incomming_edges = [r for r in incomming[row.name] if r in df_train.index] else: incomming_edges = [] return outgoing_edges + incomming_edges df_test['links'] = df_test.apply(createEdges, axis=1) # Remove any row that has no link print("num paper = {}".format(len(df_test))) df_test = df_test[df_test.links.apply(len) > 0] print("num paper = {}".format(len(df_test))) # There must be no train nodes that references to non-train nodes def count_invalid_edges(refs): return len([r for r in refs if r not in df_train.index]) assert(len(df_test[df_test.links.apply(count_invalid_edges) > 0]) == 0) # + global_id_2_test_id = {node_id: idx for idx, node_id in enumerate(df_test.index)} # each link MUST point to the train nodes test_edges = df_test.links.apply(convert_2_train_id) test_graph = {} for 
node_id, value in test_edges.iteritems(): test_graph[global_id_2_test_id[node_id]] = value print('num test: {}'.format(len(test_graph))) # - # # Save Graph Data # + data_path = '../dataset/clean/dblp' save_fn = os.path.join(data_path, 'ind.{}.train.graph.pk'.format(dataset_name)) pickle.dump(train_graph, open(save_fn, 'wb')) print('save graph data to {}'.format(save_fn)) save_fn = os.path.join(data_path, 'ind.{}.test.graph.pk'.format(dataset_name)) pickle.dump(test_graph, open(save_fn, 'wb')) print('save graph data to {}'.format(save_fn)) # - # # Process contents # + from sklearn.feature_extraction.text import TfidfVectorizer vectorizer = TfidfVectorizer(stop_words='english', max_df=0.8, min_df=5, sublinear_tf=True, max_features=10000) train_feas = vectorizer.fit_transform(list(df_train.abstract)) print(np.nonzero(np.sum(train_feas, axis=1))[0].shape) test_feas = vectorizer.transform(list(df_test.abstract)) print(np.nonzero(np.sum(test_feas, axis=1))[0].shape) gnd_train = sparse.csr_matrix(np.array(list(df_train.label))) gnd_test = sparse.csr_matrix(np.array(list(df_test.label))) # + assert(train_feas.shape[1] == test_feas.shape[1]) assert(gnd_train.shape[1] == gnd_test.shape[1]) assert(train_feas.shape[0] == gnd_train.shape[0]) assert(test_feas.shape[0] == gnd_test.shape[0]) data_path = '../dataset/clean/dblp' save_fn = os.path.join(data_path, 'ind.{}.mat'.format(dataset_name)) scipy.io.savemat(save_fn, mdict={'train': train_feas, 'test': test_feas, 'cv': test_feas, 'gnd_train': gnd_train, 'gnd_test': gnd_test, 'gnd_cv': gnd_test}) print('save data to {}'.format(save_fn)) # - # # Convert to dataframe with the format as doc_id, bow, label, and neighbors # create a connection matrix n_train = train_feas.shape[0] row = [] col = [] for doc_id in train_graph: row += [doc_id] * len(train_graph[doc_id]) col += train_graph[doc_id] data = [1] * len(row) train_connections = sparse.csr_matrix((data, (row, col)), shape=(n_train, n_train)) n_test = test_feas.shape[0] row = [] col = [] for doc_id in test_graph: row += [doc_id] * len(test_graph[doc_id]) col += test_graph[doc_id] data = [1] * len(row) test_connections = sparse.csr_matrix((data, (row, col)), shape=(n_test, n_train)) # test graph points to train graph # + from tqdm import tqdm save_dir = os.path.join('../dataset/clean', dataset_name) ########################################################################################## train = [] for doc_id in tqdm(train_graph): doc = {'doc_id': doc_id, 'bow': train_feas[doc_id], 'label': gnd_train[doc_id], 'neighbors': train_connections[doc_id]} train.append(doc) train_df = pd.DataFrame.from_dict(train) train_df.set_index('doc_id', inplace=True) fn = os.path.join(save_dir, '{}.train.pkl'.format(dataset_name)) train_df.to_pickle(fn) ########################################################################################## test = [] for doc_id in tqdm(test_graph): doc = {'doc_id': doc_id, 'bow': test_feas[doc_id], 'label': gnd_test[doc_id], 'neighbors': test_connections[doc_id]} test.append(doc) test_df = pd.DataFrame.from_dict(test) test_df.set_index('doc_id', inplace=True) fn = os.path.join(save_dir, '{}.test.pkl'.format(dataset_name)) test_df.to_pickle(fn) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Risk assessment and error analysis # # # 
![Untitled%20presentation%20%281%29.png](attachment:Untitled%20presentation%20%281%29.png) # # # ## First import libraries # %matplotlib inline import numpy as np from coare3 import coare3 from air_specific_humidity import air_humidity_method_qsat26air from surface_specific_humidity import sea_humidity_method_qsee from temperature_structure import temperature_structure_method_psit_30 # # Define variables # + #variable value units notes Cp = 1004.67 #J/K/kg specific heat at constant pressure Rgas = 287.1 #J/K/kg gas constant for dry air wind = 7.00000 #m/s average wind speed over ocean Tsea = 289.25 #K average SST over ocean Tair = Tsea-1.3 #K average surface air temperature over ocean Tsea_degC = Tsea - 273.15 Tair_degC = Tair - 273.15 rel_hum = 0.80000 #N/A near surface relative humidity pres = 1013.00000 #mb average surface pressure used in Bentamy #calculate Qsea and Qair from Tsea, pres, Tair, rel_hum from coare Lv = (2.501-0.00237*Tsea_degC)*1e6 #J/kg latent heat of vaporization (SST in degC, as in coare3); SB had 2500000, CG changed to coare3 sat_vap_pres = 6.1121*np.exp(17.502*Tair_degC/(Tair_degC+240.97))*(1.0007+3.46e-6*pres) #from coare3 Qsea = sea_humidity_method_qsee(Tsea_degC,pres)/1000 Qair = air_humidity_method_qsat26air(Tair_degC,pres,rel_hum*100)/1000 air_density = pres*100./(Rgas*(Tair)*(1.+0.61*Qair)) #cH_over_cE = 0.00120 #N/A aerodynamic transfer coef #Qsea=18.0/1000 #Qair=15.0/1000 #Qsea=18.0 #/1000 = .0018 #Qair=15.0 = .0015 #rh=80.0 print('Qair',Qair*1000) print('Qsea',Qsea*1000) # - # # Observational error for either FluxSat or Current variables # - errors from FluxSat via Shannon's simulations # - errors from current variables from publications cited in paper # # + #pick one and comment out the other with # #ob_err = 'fluxsat' ob_err = 'current' #fluxsat if ob_err == 'fluxsat': dTsea = 0.45 dwind = 0.6 dTair = 0.7 dQair = .95 dQair_computed = Qair * 0.07 #percent from shannon dpres = 5 if ob_err == 'current': dTsea = 0.5 #ref found dwind = 0.8 #ref found dTair = 1.42 dQair = 1.25 dQair_computed = 1.25/1000.
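    # Note on the next two assignments: dpres is the assumed surface-pressure uncertainty (mb), and
    # dpres_computed estimates the air-density change implied by that uncertainty (dry ideal-gas
    # density at pres+dpres minus the reference air_density), so the pressure error can be carried
    # through the flux formulas in density units.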
dpres = 5 dpres_computed = (pres+dpres)*100 / (287*Tair) - air_density print('Qair error:', dQair_computed*1000) # - #compute Qsea errors from coare3 using uncertainty in SST & pres Qsea0 = sea_humidity_method_qsee(Tsea_degC,pres) Qsea1 = sea_humidity_method_qsee(Tsea_degC+dTsea,pres) Qsea2 = sea_humidity_method_qsee(Tsea_degC-dTsea,pres) Qsea_dTsea = (np.maximum(np.abs(Qsea0-Qsea2),np.abs(Qsea0-Qsea1))) Qsea0 = sea_humidity_method_qsee(Tsea_degC,pres) Qsea1 = sea_humidity_method_qsee(Tsea_degC,pres+dpres) Qsea2 = sea_humidity_method_qsee(Tsea_degC,pres-dpres) Qsea_dpres = (np.maximum(np.abs(Qsea0-Qsea2),np.abs(Qsea0-Qsea1))) dQsea_computed = np.sqrt(Qsea_dTsea**2+Qsea_dpres**2)/1000 print('Qsea and Qsea uncertainty:',Qsea*1000,dQsea_computed*1000) # # Error propagation for each variable () # calculate Ch and Ce using coare inputs = {'u':wind,'t':Tair_degC,'Q':Qair*1000,'Qs':Qsea*1000,'P':pres,'ts':Tsea_degC,'lat':35.} res = coare3(inputs) Ch = res['Ch'] Ce = res['Ce'] print(Ch,Ce) # # Compute Sensible Heat Flux errors # + dSH_over_dTsea = air_density * Cp * Ch * wind * dTsea dSH_over_dTair = air_density * Cp * Ch * wind * dTair dSH_over_dwind = air_density * Cp * Ch * (Tsea-Tair) * dwind dSH_over_dpres = Cp * Ch * wind * (Tsea-Tair) * dpres_computed total_error = np.sqrt(dSH_over_dTsea**2+ dSH_over_dTair**2+ dSH_over_dwind**2+ dSH_over_dpres**2) signal = air_density * Cp * Ch * (Tsea-Tair) * wind percent_error = total_error / signal *100 print('dSH_over_dTsea',dSH_over_dTsea) print('dSH_over_dTair',dSH_over_dTair) print('dSH_over_dwind',dSH_over_dwind) print('dSH_over_dpres',dSH_over_dpres) print('Sensible Heat Flux total_error',total_error) print('signal',signal) print('percent error',percent_error) sensible_heat_flux_total_error = total_error # - # # Compute Latent Heat Flux errors # + dLH_over_dqs = air_density * Ce * wind * Lv * dQsea_computed dLH_over_dqair = air_density * Ce * wind * Lv * dQair_computed dLH_over_dwind = air_density * Ce * Lv * (Qsea-Qair) * dwind dLH_over_dpres = Ce * wind * Lv * (Qsea-Qair) * dpres_computed total_error = np.sqrt(dLH_over_dqs**2+ dLH_over_dqair**2+ dLH_over_dwind**2+ dLH_over_dpres**2) signal = air_density * Lv * Ce * (Qsea-Qair) * wind percent_error = total_error / signal *100 print('dLH_over_dqs',dLH_over_dqs) print('dLH_over_dqair',dLH_over_dqair) print('dLH_over_dwind',dLH_over_dwind) print('dLH_over_dpres',dLH_over_dpres) print('Latent Heat Flux total_error',total_error) print('signal',signal) print('percent error',percent_error) latent_heat_flux_total_error = total_error # - # # Gradient error calculation from . print('LHF error',latent_heat_flux_total_error) print('#','gradient_error','spatial_scale (km)') for number_of_points in range(2,20): sampling_distance = 10 # spacing (in swath) between observations observational_error = latent_heat_flux_total_error #W/m2 Uncertainty in the observation sqpt = 2 * number_of_points**2 one_over = 1. 
/ np.sqrt(sqpt) sqnn2 = np.sqrt( (2*number_of_points) / (2*number_of_points - 2) ) prod = one_over * sqnn2 uncertainty_gradient = (observational_error / sampling_distance) * prod spatial_scale = number_of_points * sampling_distance print(number_of_points,' ', "%1.2f" % uncertainty_gradient,' ', spatial_scale, ) # # Below here is develoment area # # Computed Errors # + dQair_computed = (Qsea*rel_hum*dQair)/100 num = 17.67*(Tsea_degC + dTsea) dem = (Tsea_degC + 243.5 + dTsea) dQsea_computed = (6.112*np.exp(num/dem)) / pres*0.622 - Qsea dpres_computed = (pres+dpres)*100 / (287*Tair) - air_density print('dqair:',dQair_computed) print('dqs:', dQsea_computed) print('dpres:',dpres_computed) # - # # use COARE model # # Args: # * inputs (dic): inputs parameter containing u,us,ts,t,Qs,Q,Rs,Rl,rain,zi,P,zu,zt,zq,lat,jcool,jwave,twave,hwave fields # - u (float): wind speed (m/s) at height zu (m) # - us (float): surface current speed in the wind direction (m/s) # - ts (float): bulk water temperature (C) if jcool=1, interface water T if jcool=0 # - t (float): bulk air temperature (C), height zt # - Qs (float): bulk water spec hum (g/kg) if jcool=1, ... # - Q (float): bulk air spec hum (g/kg), height zq # - Rs (float): downward solar flux (W/m^2) # - Rl (float): downard IR flux (W/m^2) # - rain (float): rain rate (mm/hr) # - zi (float): Planet Boundary Layer depth (m) # - P (float): Atmos surface pressure (mb) # - zu (float): wind speed measurement height (m) # - zt (float): air T measurement height (m) # - zq (float): air q measurement height (m) # - lat (float): latitude (deg, N=+) # - jcool (float): implement cool calculation skin switch, 0=no, 1=yes # - jwave (float): implement wave dependent roughness model # - twave (float): wave period (s) # - hwave (float): wave height (m) # # # + inputs = {'u':wind,'t':Tair_degC,'Q':Qair*1000,'Qs':Qsea*1000,'P':pres,'ts':Tsea_degC,'lat':35.} #print(inputs) res = coare3(inputs) inputs['P']=pres+dpres res2 = coare3(inputs) dif2 = abs(res['hsb']-res2['hsb']) dif2a = abs(res['hlb']-res2['hlb']) inputs['P']=pres-dpres res2 = coare3(inputs) dif3 = abs(res['hsb']-res2['hsb']) dif3a = abs(res['hlb']-res2['hlb']) SH_perror = max(dif2,dif3) LH_perror = max(dif2a,dif3a) print('error due to pressure SENS:',SH_perror) inputs = {'u':wind,'t':Tair_degC,'Q':Qair*1000,'Qs':Qsea*1000,'P':pres,'ts':Tsea_degC,'lat':35.} res = coare3(inputs) inputs['u']=wind+dwind res2 = coare3(inputs) dif2 = abs(res['hsb']-res2['hsb']) dif2a = abs(res['hlb']-res2['hlb']) inputs['u']=wind-dwind res2 = coare3(inputs) dif3 = abs(res['hsb']-res2['hsb']) dif3a = abs(res['hlb']-res2['hlb']) SH_winderror = max(dif2,dif3) LH_winderror = max(dif2a,dif3a) print('error due to wind SENS:',SH_winderror) inputs = {'u':wind,'t':Tair_degC,'Q':Qair*1000,'Qs':Qsea*1000,'P':pres,'ts':Tsea_degC,'lat':35.} res = coare3(inputs) inputs['ts']=Tsea_degC+dTsea res2 = coare3(inputs) dif2 = abs(res['hsb']-res2['hsb']) dif2a = abs(res['hlb']-res2['hlb']) inputs['ts']=Tsea_degC-dTsea res2 = coare3(inputs) dif3 = abs(res['hsb']-res2['hsb']) dif3a = abs(res['hlb']-res2['hlb']) #print('DIF:', res['hsb']-res3['hsb'],res['hlb']-res3['hlb']) SH_Tseaerror = max(dif2,dif3) print('error due to Tsea SENS:',SH_Tseaerror) inputs = {'u':wind,'t':Tair_degC,'Q':Qair*1000,'Qs':Qsea*1000,'P':pres,'ts':Tsea_degC,'lat':35.} res = coare3(inputs) inputs['t']=Tair_degC+dTair res2 = coare3(inputs) dif2 = abs(res['hsb']-res2['hsb']) dif2a = abs(res['hlb']-res2['hlb']) inputs['t']=Tair_degC-dTair res2 = coare3(inputs) dif3 = abs(res['hsb']-res2['hsb']) 
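# In each of these sensitivity cells one input is perturbed by +/- its assumed uncertainty and coare3 is
# re-run: dif2/dif3 hold the resulting changes in sensible heat flux (hsb), dif2a/dif3a the changes in
# latent heat flux (hlb), and the larger absolute change is kept as that input's error contribution.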
dif3a = abs(res['hlb']-res2['hlb']) #print('DIF:', res['hsb']-res3['hsb'],res['hlb']-res3['hlb']) SH_Tairerror = max(dif2,dif3) #LH_Tairerror = max(dif2a,dif3a) print('error due to Tair SENS:',SH_Tairerror) total_error = np.sqrt(SH_Tairerror**2+ SH_Tseaerror**2+ SH_winderror**2+ SH_perror**2) print('Sensible total error',total_error) # + inputs = {'u':wind,'t':Tair_degC,'Q':Qair*1000,'Qs':Qsea*1000,'P':pres,'ts':Tsea_degC,'lat':35.} #print(inputs) res = coare3(inputs) inputs['P']=pres+dpres res2 = coare3(inputs) dif2 = abs(res['hsb']-res2['hsb']) dif2a = abs(res['hlb']-res2['hlb']) inputs['P']=pres-dpres res2 = coare3(inputs) dif3 = abs(res['hsb']-res2['hsb']) dif3a = abs(res['hlb']-res2['hlb']) SH_perror = max(dif2,dif3) LH_perror = max(dif2a,dif3a) print('error due to pressure LAT:',LH_perror) inputs = {'u':wind,'t':Tair_degC,'Q':Qair*1000,'Qs':Qsea*1000,'P':pres,'ts':Tsea_degC,'lat':35.} res = coare3(inputs) inputs['u']=wind+dwind res2 = coare3(inputs) dif2 = abs(res['hsb']-res2['hsb']) dif2a = abs(res['hlb']-res2['hlb']) inputs['u']=wind-dwind res2 = coare3(inputs) dif3 = abs(res['hsb']-res2['hsb']) dif3a = abs(res['hlb']-res2['hlb']) SH_winderror = max(dif2,dif3) LH_winderror = max(dif2a,dif3a) print('error due to wind LAT:',LH_winderror) inputs = {'u':wind,'t':Tair_degC,'Q':Qair*1000,'Qs':Qsea*1000,'P':pres,'ts':Tsea_degC,'lat':35.} #print(inputs) print('qair',Qair*1000,dQair_computed*1000) res = coare3(inputs) inputs['Q']=Qair*1000+dQair_computed*1000 res2 = coare3(inputs) dif2 = abs(res['hsb']-res2['hsb']) dif2a = abs(res['hlb']-res2['hlb']) inputs['Q']=Qair*1000-dQair_computed*1000 res2 = coare3(inputs) dif3 = abs(res['hsb']-res2['hsb']) dif3a = abs(res['hlb']-res2['hlb']) LH_Qairerror = max(dif2a,dif3a) print('error due to Qair LAT:',LH_Qairerror) inputs = {'u':wind,'t':Tair_degC,'Q':Qair*1000,'Qs':Qsea*1000,'P':pres,'ts':Tsea_degC,'lat':35.} res = coare3(inputs) inputs['Qs']=Qsea*1000+dQsea_computed*1000 res2 = coare3(inputs) dif2 = abs(res['hsb']-res2['hsb']) dif2a = abs(res['hlb']-res2['hlb']) inputs['Qs']=Qsea*1000-dQsea_computed*1000 res2 = coare3(inputs) dif3 = abs(res['hsb']-res2['hsb']) dif3a = abs(res['hlb']-res2['hlb']) LH_Qseaerror = max(dif2a,dif3a) print('error due to Qsea LAT:',LH_Qseaerror) #calculate totals total_error = np.sqrt(LH_Qairerror**2+ LH_Qseaerror**2+ LH_winderror**2+ LH_perror**2) print('Latent total error',total_error) # - #compute Qair errors from qsat26air using uncertainty in Tair, pres & relative humidity (same argument convention as the Qair calculation above: degC and percent) drh = 5. #assumed relative-humidity uncertainty in %; drh is not set anywhere else in this notebook, so adjust to your error budget Qair0 = air_humidity_method_qsat26air(Tair_degC,pres,rel_hum*100) Qair1 = air_humidity_method_qsat26air(Tair_degC+dTair,pres,rel_hum*100) Qair2 = air_humidity_method_qsat26air(Tair_degC-dTair,pres,rel_hum*100) Qair_dTair = max(np.abs(Qair0-Qair2),np.abs(Qair0-Qair1)) Qair0 = air_humidity_method_qsat26air(Tair_degC,pres,rel_hum*100) Qair1 = air_humidity_method_qsat26air(Tair_degC,pres+dpres,rel_hum*100) Qair2 = air_humidity_method_qsat26air(Tair_degC,pres-dpres,rel_hum*100) Qair_dpres = max(np.abs(Qair0-Qair2),np.abs(Qair0-Qair1)) Qair0 = air_humidity_method_qsat26air(Tair_degC,pres,rel_hum*100) Qair1 = air_humidity_method_qsat26air(Tair_degC,pres,rel_hum*100+drh) Qair2 = air_humidity_method_qsat26air(Tair_degC,pres,rel_hum*100-drh) Qair_drh = max(np.abs(Qair0-Qair2),np.abs(Qair0-Qair1)) dQair_computed = np.sqrt(Qair_dTair**2+Qair_dpres**2+Qair_drh**2)/1000 print('Qair and Qair uncertainty:',Qair*1000,dQair_computed*1000) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # <- Go back to introduction notebook # # # Step 1: Import public reference
data for US counties. # # We'll need this data to match up FIPS codes (which some of the Covid data uses) to states, which is how our salespeople are assigned. A FIPS code is a 5-digit number that identifies a county within a state, or an area within a territory. # # I've already grabbed some USDA data (license). # # I already have Postgres installed and running locally, so let's create a table and insert the CSV data from the USDA. # # I have saved the data into ```data/usda_county_pop_2019.csv```. # # The CSV file looks like this: # ``` # FIPStxt,State,Area_Name,POP_ESTIMATE_2019 # 01000,AL,Alabama,4903185 # ... # ``` # # # # We can use the Postgres COPY command to import the CSV file into a Postgres table. # ## 1.1 Create a little helper function to connect to the database. # # The ```CONFIG_FILE``` below will look like this, containing my Postgres username and password: # # ``` # [database] # login=sales # password= # ``` # # We'll use ```psycopg2``` as our database access library, but there are others, like sqlalchemy. For a good overview of accessing Postgres from Python, see this article. # This will install the prerequisite packages we'll use # !pip install psycopg2 pandas # + import configparser import psycopg2 CONFIG_FILE = r'c:\keys\sales.properties' def my_connect(): config = configparser.RawConfigParser() config.read(CONFIG_FILE) db_username=config.get('database', 'login') db_password=config.get('database', 'password') connection = psycopg2.connect(user=db_username, password=db_password, host='localhost', port=5432, database='sales') return connection # - # ### Reusing Code Within These Notebooks # # I'll copy this to a file called my_connect.py and put it in the same directory as these notebooks. Then we can do: # ``` # from my_connect import my_connect # # connection = my_connect() # ``` # # We'll do that in the subsequent notebooks. (Please note that if you changed the location of your sales.properties file # above, you'll also need to change it in the my_connect.py file in this directory. You can just use a text editor.) # # ## 1.2 Create the fips table connection = my_connect() cursor = connection.cursor() q = """ CREATE TABLE IF NOT EXISTS fips ( fipstxt VARCHAR(5) PRIMARY KEY, state VARCHAR(5), area_name VARCHAR(100), pop_estimate_2019 INTEGER ) """ cursor.execute(q) connection.commit() # ## 1.3 Import the data from the CSV file # + import psycopg2 import psycopg2.sql as sql import os connection = my_connect() cursor = connection.cursor() CSV_FILE = os.path.join(os.getcwd(), "usda_county_pop_2019.csv") q2 = sql.SQL(""" DELETE FROM fips; COPY fips(fipstxt, state, area_name, pop_estimate_2019) FROM {} CSV HEADER; """) cursor.execute(q2.format(sql.Literal(CSV_FILE))) connection.commit() # - # ## 1.4 Show a few rows to validate # + import pandas connection = my_connect() cursor = connection.cursor() cursor.execute("SELECT COUNT(*) FROM fips") result = cursor.fetchone() print("FIPS rows: %s" % result[0]) df = pandas.io.sql.read_sql_query("SELECT * FROM fips LIMIT 5;", connection) print(df.head()) # - # # # Next notebook: generating sales data # # Go to the next notebook -> # # # *Contents © Copyright 2020 HP Development Company, L.P.
SPDX-License-Identifier: MIT* # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import csv import random import os df = pd.read_csv('Ants.csv') #df.tail() df_group = df.groupby("subfamily").count() #reading the csv file df_a = df.groupby("subfamily")[["specimen","min_ma", "max_ma"]].min().reset_index(drop=True) df_a.to_csv('Ants_file_a.tsv', sep = "\t", index = False) df_b = df.groupby("subfamily")[["specimen","min_ma", "max_ma"]].max().reset_index(drop=True) df_b.to_csv('Ants_file_b.tsv', sep = "\t", index = False) for i in range(20): y = df.groupby('subfamily')[["specimen", "min_ma", "max_ma"]].apply(lambda x: x.sample(1)).reset_index(drop=True) y.columns = ["taxon", "min", "max"] print(y) file_string = "Ants_random_" + str(i) y.to_csv(file_string, sep = "\t") search_list= ["Acanthoponera_minor", "Amblyopone_pallipes", "Aneuretus", "Anomalomyrma_sp", "Apomyrma_stygia", "Brownimecia_clavata", "Camelomecia", "Cerapachys_sexspinus", "Chalybion_californicum", "Chyphotes_mellipes", "Formica_fusca", "Gerontoformica_gracilis", "Gerontoformica_magnus", "Gerontoformica_pilosus", "Gerontoformica_spiralis", "Haidomyrmex_scimitarus", "Haidomyrmodes_mammuthus", "Haidoterminus_cippus", "Heterogyna", "Hypoponera_opacior", "Kyromyrma", "Lasius_californicus", "Leptanilla_swani", "Leptanilloides_nomada", "Leptogenys_diminuta", "Martialis_heureka", "Metapolybia_cingulata", "Myanmyrma_gracilis", "Myrmecia_nigriceps", "Myrmica_americana", "Nothomyrmecia_macrops", "Opamyrma_hungvuong", "Paraponera_clavata", "Platythyrea_punctata", "Pogonomyrmex_californicus", "Proceratium_stictum", "Scolia_verticalis", "Sphecomyrma_freyi", "Tatuidris_tatusia", "Tetraponera_punctulata", "Zigrasimecia", "Adetomyrma_sp.", "Amblyopone_armigera", "Amblyopone_australis", "Amblyopone_mercovichi", "Amblyopone_mystriops", "Amblyopone_pluto", "Amblyopone_mutica", "Concoctio_concenta", "Myopopone_castanea", "Mystrium_voeltzkowi", "Onychomyrmex_doddi", "Prionopelta_aethiopica", "Prionopelta_antillana", "Ectatomma_tuberculatum", "Gnamptogenys_annulata", "Gnamptogenys_striatula", "Gnamptogenys_bufonis", "Gnamptogenys_minuta", "Rhytidoponera_confusa", "Typhlomyrmex_pusillus", "Typhlomyrmex_rogenhoferi", "Heteroponera_brouni", "Heteroponera_relicta", "Platythyrea_turneri", "Anochetus_emarginatus", "Odontomachus_bauri", "Asphinctopone_silvestrii", "Belonopelta_deletrix", "Centromyrmex_brachycola", "Cryptopone_gilva", "Diacamma_ceylonense", "Dinoponera_lucida", "Dolioponera_fustigera", "Emeryopone_buttelreepeni", "Harpegnathos_saltator", "Hypoponera_sp.", "Leptogenys_sp._1", "Leptogenys_sp._2", "Leptogenys_podenzanai", "Loboponera_obeliscata", "Loboponera_vigilans", "Myopias_maligna", "Odontoponera_transversa", "Pachycondyla_apicalis", "Pachycondyla_berthoudi", "Pachycondyla_crassinoda", "Pachycondyla_croceicornis", "Pachycondyla_guianensis", "Pachycondyla_marleyi", "Pachycondyla_pachyderma", "Pachycondyla_porcata", "Pachycondyla_stigma", "Pachycondyla_tarsata", "Pachycondyla_villosa", "Phrynoponera_gabonensis", "Plectroctena_strigosa", "Ponera_alpha", "Ponera_pennsylvanica", "Psalidomyrmex_procerus", "Simopelta_oculata", "Streblognathus_peetersi", "Thaumatomyrmex_atrox", "Discothyrea_oculata", "Discothyrea_testacea", "Proceratium_croceum", "Proceratium_pergandei", "Probolomyrmex_guineensis", "Aneuretus_simoni", "Leptomyrmex_pallens", 
"Iridomyrmex_purpureus", "Dolichoderus_laminatus", "Tapinoma_erraticum", "Technomyrmex_albipes", "Gesomyrmex_luzonensis", "Oecophylla_smaragdina", "Myrmecia_nigriscapa", "Manica_rubida", "Pogonomyrmex_barbatus", "Metapone_madagascarica", "Pseudomyrmex_gracilis", "Tetraponera_aethiops", "Tetraponera_attenuata", "Acanthostichus_serratulus", "Cerapachys_nitidulus", "Cerapachys_doryloides", "Cylindromyrmex_brevitarsus", "Simopone_schoutedeni", "Leptanilliodes_biconstricta", "Dorylus_helvolus", "Aenictus_binghami", "Cheliomyrmex_morosus", "Labidus_coecus", "Eciton_hamatum", "Scolia_nobilitata"] df_1 = pd.read_csv("taxa.tsv", sep = "\t") print(df_1) df2 = df_1[df_1["taxon"].isin(search_list)] df2 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py2] # language: python # name: conda-env-py2-py # --- import code.independence_test as it import code.additive_noise as an import numpy as np import numpy.random as rn import code.hsic as hs n=200 uni = rn.uniform(0,10,n) exp = rn.exponential(7,n) gamma = rn.gamma(10,1,n) log = rn.logistic(10,0.5,n) n_Y = np.random.normal(0,0.4,n) n_L = np.random.normal(0,0.5,n) #Y = uni + exp + gamma + log + n_Y Y = gamma + log + n_Y leak = Y + n_L X = np.stack([gamma,log, leak],axis=1) an.ANM_algorithm(X,Y) hsic = it.HSIC_b(exp,Y) hsic.empirical_test() X = np.random.uniform(0,10,1000) n_Y = np.random.normal(0,1,1000) #Y= X Y = n_Y hsic = it.HSIC_b(X,Y) hsic.empirical_test() hsic.alpha() hsic.beta() hsic.p_value a = {1:'uno', 2:'dos', 3:'tres'} for i in a: print i, a[i] from sklearn.metrics.pairwise import rbf_kernel, laplacian_kernel # + # rbf_kernel? # - len(str(20)) range(59,105) # + accuracy = list() for i in range(105): pair = str(i+1) if len(pair)==1: pair = '000'+ pair if len(pair)==2: pair = '00'+ pair if len(pair)==3: pair = '0'+ pair if pair != '0047' and pair != '0060' and pair != '0061': data = np.loadtxt('./data/pair'+pair+'.txt') if data.shape[0]>500: data = data[0:500] print 'pair', pair print 'n_columns',data.shape[1] if data.shape[1]==2: X = data[:,0] Y = data[:,1] dire = an.ANM_algorithm2(X,Y) if dire == directions[i]: accuracy.append(1) else: accuracy.append(0) else: continue # - np.array(accuracy).sum()/float(len(accuracy)) np.array(accuracy).shape pair = np.loadtxt('./data/pair0006.txt') pair[:,0][0:500].shape metadata = np.loadtxt('./pairmeta.txt') directions = metadata[:,3]-metadata[:,2] directions.shape directions.shape import modshogun # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 1-异常检测 # # note: # * [covariance matrix](http://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html) # * [multivariate_normal](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.multivariate_normal.html) # * [seaborn bivariate kernel density estimate](https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.kdeplot.html#seaborn.kdeplot) # + import matplotlib.pyplot as plt import seaborn as sns sns.set(context="notebook", style="white", palette=sns.color_palette("RdBu")) import numpy as np import pandas as pd import scipy.io as sio from scipy import stats from sklearn.cross_validation import train_test_split # - # You want to divide data into 3 set. # 1. Training set # 2. Cross Validation set # 3. Test set. 
# # You shouldn't be doing prediction using training data or Validation data as it does in the exercise. mat = sio.loadmat('./data/ex8data1.mat') mat.keys() X = mat.get('X') # divide original validation data into validation and test set Xval, Xtest, yval, ytest = train_test_split(mat.get('Xval'), mat.get('yval').ravel(), test_size=0.5) # Visualize training data sns.regplot('Latency', 'Throughput', data=pd.DataFrame(X, columns=['Latency', 'Throughput']), fit_reg=False, scatter_kws={"s":20, "alpha":0.5}) plt.show() # # estimate multivariate Gaussian parameters $\mu$ and $\sigma^2$ # > according to data, X1, and X2 is not independent # + mu = X.mean(axis=0) print(mu, '\n') cov = np.cov(X.T) print(cov) # - # example of creating 2d grid to calculate probability density np.dstack(np.mgrid[0:3,0:3]) # + # create multi-var Gaussian model multi_normal = stats.multivariate_normal(mu, cov) # create a grid x, y = np.mgrid[0:30:0.01, 0:30:0.01] pos = np.dstack((x, y)) fig, ax = plt.subplots() # plot probability density ax.contourf(x, y, multi_normal.pdf(pos), cmap='Blues') # plot original data points sns.regplot('Latency', 'Throughput', data=pd.DataFrame(X, columns=['Latency', 'Throughput']), fit_reg=False, ax=ax, scatter_kws={"s":10, "alpha":0.4}) plt.show() # - # # select threshold $\epsilon$ # 1. use training set $X$ to model the multivariate Gaussian # 2. use cross validation set $(Xval, yval)$ to find the best $\epsilon$ by finding the best `F-score` # def select_threshold(X, Xval, yval): """use CV data to find the best epsilon Returns: e: best epsilon with the highest f-score f-score: such best f-score """ # create multivariate model using training data mu = X.mean(axis=0) cov = np.cov(X.T) multi_normal = stats.multivariate_normal(mu, cov) # this is key, use CV data for fine tuning hyper parameters pval = multi_normal.pdf(Xval) # set up epsilon candidates epsilon = np.linspace(np.min(pval), np.max(pval), num=10000) # calculate f-score fs = [] for e in epsilon: y_pred = (pval <= e).astype('int') fs.append(f1_score(yval, y_pred)) # find the best f-score argmax_fs = np.argmax(fs) return epsilon[argmax_fs], fs[argmax_fs] from sklearn.metrics import f1_score, classification_report e, fs = select_threshold(X, Xval, yval) print('Best epsilon: {}\nBest F-score on validation data: {}'.format(e, fs)) # # visualize prediction of `Xval` using learned $\epsilon$ # 1. use CV data to find the best $\epsilon$ # 2. use all data (training + validation) to create model # 3. 
do the prediction on test data # + def select_threshold(X, Xval, yval): """use CV data to find the best epsilon Returns: e: best epsilon with the highest f-score f-score: such best f-score """ # create multivariate model using training data mu = X.mean(axis=0) cov = np.cov(X.T) multi_normal = stats.multivariate_normal(mu, cov) # this is key, use CV data for fine tuning hyper parameters pval = multi_normal.pdf(Xval) # set up epsilon candidates epsilon = np.linspace(np.min(pval), np.max(pval), num=10000) # calculate f-score fs = [] for e in epsilon: y_pred = (pval <= e).astype('int') fs.append(f1_score(yval, y_pred)) # find the best f-score argmax_fs = np.argmax(fs) return epsilon[argmax_fs], fs[argmax_fs] def predict(X, Xval, e, Xtest, ytest): """with optimal epsilon, combine X, Xval and predict Xtest Returns: multi_normal: multivariate normal model y_pred: prediction of test data """ Xdata = np.concatenate((X, Xval), axis=0) mu = Xdata.mean(axis=0) cov = np.cov(Xdata.T) multi_normal = stats.multivariate_normal(mu, cov) # calculate probability of test data pval = multi_normal.pdf(Xtest) y_pred = (pval <= e).astype('int') print(classification_report(ytest, y_pred)) return multi_normal, y_pred # - multi_normal, y_pred = predict(X, Xval, e, Xtest, ytest) # + # construct test DataFrame data = pd.DataFrame(Xtest, columns=['Latency', 'Throughput']) data['y_pred'] = y_pred # create a grid for graphing x, y = np.mgrid[0:30:0.01, 0:30:0.01] pos = np.dstack((x, y)) fig, ax = plt.subplots() # plot probability density ax.contourf(x, y, multi_normal.pdf(pos), cmap='Blues') # plot original Xval points sns.regplot('Latency', 'Throughput', data=data, fit_reg=False, ax=ax, scatter_kws={"s":10, "alpha":0.4}) # mark the predicted anamoly of CV data. We should have a test set for this... anamoly_data = data[data['y_pred']==1] ax.scatter(anamoly_data['Latency'], anamoly_data['Throughput'], marker='x', s=50) plt.show() # - # # high dimension data mat = sio.loadmat('./data/ex8data2.mat') X = mat.get('X') Xval, Xtest, yval, ytest = train_test_split(mat.get('Xval'), mat.get('yval').ravel(), test_size=0.5) e, fs = select_threshold(X, Xval, yval) print('Best epsilon: {}\nBest F-score on validation data: {}'.format(e, fs)) multi_normal, y_pred = predict(X, Xval, e, Xtest, ytest) print('find {} anamolies'.format(y_pred.sum())) # The huge difference between my result, and the official `117` anamolies in the ex8 is due to: # 1. my use of **multivariate Gaussian** # 2. I split data very differently # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Parameterisation of the spheroid model # %pylab inline # # In Condor the geometry of a spheroid is defined by two parameters: The average diameter $d$ and the flattening $f$. The average diameter can be understood as the diameter of a sphere of identical volume. The flattening $f$ is the ratio of the semi-axes $a$ and $c$, where $c$ is measured along the symmetry axis $z$. 
# $$d=2\cdot(a^2c)^{1/3}$$ # $$f = \frac{a}{c}$$ # For internal calculations in Condor the parameters $d$ and $f$ are translated into the semi-axes $a$ and $c$ for convenience: # $$a=\frac{f^{1/3}d}{2}$$ # $$c = \frac{f^{-2/3}d}{2}$$ import condor # + S = condor.Source(wavelength=1.E-9, pulse_energy=1E-3, focus_diameter=1E-6) P1 = condor.ParticleSpheroid(diameter=100E-9, flattening=0.5) D = condor.Detector(nx=1000, ny=1000, distance=1., pixel_size=100E-6) E = condor.Experiment(source=S, particles={"particle_spheroid": P1}, detector=D) # - res = E.propagate() imshow(log10(res["entry_1"]["data_1"]["data"])) help(condor.ParticleMap) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="130546d4-d72c-46f1-a506-246ace42ad56" # banner # + [markdown] id="a6d32920-c6f8-4df6-91a7-925640a9920a" # # Working with a custom metrics provider # + [markdown] id="ab7073b7-baf8-48ba-b9fb-e39e497596aa" # This notebook demonstrates how to configure the custom monitor and custom metrics deployment by using IBM Watson OpenScale. It requires service credentials for the following services: # * Watson OpenScale # * Watson Machine Learning # # # ## Contents # # This notebook contains the following parts: # # 1. [Set up your environment.](#setup) # 1. [Create the custom metrics provider - Python function.](#provider) # 1. [Register the custom metrics provider and create a deployment.](#deployment) # 1. [Configure Watson OpenScale.](#config) # 1. [Create the integrated system for the custom metrics provider.](#custom) # 1. [Set up the custom monitor definition and instance.](#instance) # # + [markdown] id="8c3c7fcc-8701-45f3-ab34-af1b093b590c" # ## 1. Set up your environment. # # Before you use the sample code in this notebook, you must perform the following setup tasks: # + [markdown] id="8b4212ee-4116-4a47-a586-f0997df1ec53" # ### Install the `ibm-watson-machine-learning` and `ibm-watson-openscale` packages. # + id="2eb75b61-93b5-4139-96d5-fae98c8c9173" # !pip install --upgrade ibm-watson-machine-learning | tail -n 1 # !pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1 # + [markdown] id="cd55a125-14e4-440c-a2ee-c58813d75f9e" # ### Action: restart the kernel! # + [markdown] id="2abe81f0-cd47-4fe0-9b5c-505712c17752" # ### Credentials for IBM Cloud Pak for Data # To authenticate, in the following code boxes, replace the sample data with your own credentials. Get the information from your system administrator or through the Cloud Pak for Data dashboard. # # # ### Obtaining your Watson OpenScale credentials # # You can retrieve the URL by running the following command: `oc get route -n namespace1 --no-headers | awk '{print $2}'` Replace the `namespace1` variable with your namespace. # # You should have been assigned a username and password when you were added to the Cloud Pak for Data system. You might need to ask either your database administrator or your system administrator for some of the information. # # + id="3f499be0-0ed7-46c9-9bf4-753c2529525f" ############################################################################################ # Paste your Watson OpenScale credentials into the following section and then run this cell.
############################################################################################ WOS_CREDENTIALS = { "url": "", "username": "*****", "password": "*****" } # - # # ### Enter your Watson OpenScale GUID. # # For most systems, the default GUID is already entered for you. You would only need to update this particular entry if the GUID was changed from the default. # WOS_GUID="00000000-0000-0000-0000-000000000000" # + id="4515adc7fa8b46ac8e6e9ac59d396b7f" WML_CREDENTIALS = WOS_CREDENTIALS.copy() WML_CREDENTIALS['instance_id']='openshift' WML_CREDENTIALS['version']='4.0' WML_CREDENTIALS # + [markdown] id="9cce9ad8f9bf4fb095680d91ad38b2a0" # ### Define the Watson OpenScale subscription to which the custom metrics have to be sent. # # Create a subscription from the Watson Openscale UI or SDK to configure custom metrics. You can configure custom metrics for the subscriptions that have predefined monitors, such as fairness, quality, or drift or without predefined monitors. # + id="48519343224a4ca582ee954c3a0e6605" #################################################################### # Paste your Subscription in the following field and then run this cell. #################################################################### subscription_id = "" # + [markdown] id="51dab6f2e41045f38ead783922814c0a" # ## 2. Create the custom metrics provider - Python function. # # The Python function receives the required variables, such as the `datamart_id`, `monitor_instance_id`, `monitor_id`, `monitor_instance_parameters` and `subscription_id` from the Watson OpenScale service when it is invoked by the custom monitor. # # In the Python function, add your own logic to compute the custom metrics in the `get_metrics` method, publish the metrics to the Watson Openscale service and update the status of the run to the `finished` state in the custom monitor instance. # # Update the `WOS_CREDENTIALS` in the Python function. 
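# The deployment built from this function is invoked with a standard WML scoring payload whose
# `values` entry carries the fields named above. A minimal sketch of that payload with placeholder
# IDs is shown here (useful for exercising the function locally before deploying it); the function
# in the next cell unpacks it via `input_data["input_data"][0]["values"]`.
# +
sample_payload = {
    "input_data": [{
        "values": {
            "data_mart_id": "<Watson OpenScale data mart / service instance id>",
            "subscription_id": "<subscription id>",
            "custom_monitor_id": "<custom monitor definition id>",
            "custom_monitor_instance_id": "<custom monitor instance id>",
            "custom_monitor_instance_params": {
                "custom_metrics_provider_id": "<integrated system id>",
                "custom_metrics_wait_time": 60,
                "run_details": {"run_id": "<monitoring run id>"}
            }
        }
    }]
}
# Local smoke test (needs valid WOS_CREDENTIALS inside the function, since it calls the OpenScale API):
# custom_metrics_provider()(sample_payload)
# -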
# + id="e7ec6979ce114f528246ba99e8619090" #wml_python_function def custom_metrics_provider(): import json import requests import base64 from requests.auth import HTTPBasicAuth import time import uuid #Update the WOS_CREDENTIALS here WOS_CREDENTIALS = { "url": "", "username": "*****", "password": "*****" } # Get the access token def get_access_token(): token_url = WOS_CREDENTIALS['url'] + '/v1/preauth/validateAuth' headers = {} headers["Accept"] = "application/json" auth = HTTPBasicAuth(WOS_CREDENTIALS['username'], WOS_CREDENTIALS['password']) response = requests.get(token_url, headers=headers, auth=auth, verify=False) json_data = response.json() access_token = json_data['accessToken'] return access_token # Get the WOS base url def get_wos_url(service_instance_id): return WOS_CREDENTIALS['url'] + '/openscale' + '/' + service_instance_id # construct the auth headers for communication with WOS APIs def get_headers(access_token): headers = {} headers["Content-Type"] = "application/json" headers["Accept"] = "application/json" headers["Authorization"] = "Bearer {}".format(access_token) return headers #Get the current time def get_current_time(): import datetime date_format = "%Y-%m-%dT%H:%M:%S.%fZ" now = datetime.datetime.utcnow() timestamp = now.strftime(date_format) return timestamp #Update the run status to Finished in the custom monitor instance def update_monitor_instance(base_url, access_token, custom_monitor_instance_id, payload): monitor_instance_url = base_url + '/v2/monitor_instances/' + custom_monitor_instance_id + '?update_metadata_only=true' patch_payload = [ { "op": "replace", "path": "/parameters", "value": payload } ] response = requests.patch(monitor_instance_url, headers=get_headers(access_token), json = patch_payload, verify=False) monitor_response = response.json() return response.status_code, monitor_response #Add your code to compute the custom metrics. def get_metrics(subscription_id): #Add the logic here to compute the metrics. 
Use the below metric names while creating the custom monitor definition dummy_metrics = {"specificity": 1.2, "sensitivity": 0.85,"region": "us-south"} return dummy_metrics # Publishes the Custom Metrics to OpenScale def publish_metrics(base_url, access_token, subscription_id, custom_monitor_id, custom_monitor_instance_id, custom_monitoring_run_id): # Generate an monitoring run id, where the publishing happens against this run id timestamp = get_current_time() measurements_payload = [ { "timestamp": timestamp, "run_id": custom_monitoring_run_id, "metrics": [get_metrics(subscription_id)] } ] measurements_url = base_url + '/v2/monitor_instances/' + custom_monitor_instance_id + '/measurements' response = requests.post(measurements_url, headers=get_headers(access_token), json = measurements_payload, verify=False) published_measurement = response.json() return response.status_code, published_measurement def publish( input_data ): payload = input_data.get("input_data")[0].get("values") data_mart_id = payload['data_mart_id'] subscription_id = payload['subscription_id'] custom_monitor_id = payload['custom_monitor_id'] custom_monitor_instance_id = payload['custom_monitor_instance_id'] custom_monitor_instance_params = payload['custom_monitor_instance_params'] base_url = get_wos_url(data_mart_id) access_token = get_access_token() published_measurements = [] error_msg = None custom_monitoring_run_id = custom_monitor_instance_params["run_details"]["run_id"] try: status_code, published_measurement = publish_metrics(base_url, access_token, subscription_id, custom_monitor_id, custom_monitor_instance_id, custom_monitoring_run_id) if int(status_code) in [200, 201, 202]: custom_monitor_instance_params["run_details"]["run_status"] = "finished" published_measurements.append(published_measurement) else: custom_monitor_instance_params["run_details"]["run_status"] = "error" custom_monitor_instance_params["run_details"]["run_error_msg"] = published_measurement error_msg = published_measurement custom_monitor_instance_params["last_run_time"] = get_current_time() status_code, response = update_monitor_instance(base_url, access_token, custom_monitor_instance_id, custom_monitor_instance_params) if not int(status_code) in [200, 201, 202]: error_msg = response except Exception as ex: error_msg = str(ex) if error_msg is None: response_payload = { "predictions" : [{ "values" : published_measurements }] } else: response_payload = { "error_msg": error_msg } return response_payload return publish # + [markdown] id="47683fe263944fafb25f856214b1ec70" # ## 3. Register the custom metrics provider and create a deployment. # + id="d8948f9b03e849d9aba1f74d7e7830c3" import json from ibm_watson_machine_learning import APIClient wml_client = APIClient(WML_CREDENTIALS) wml_client.version # + id="15601ba7120748b9902f57ca107067b6" wml_client.spaces.list(limit=10) # + id="2f9a983b13d84b40bf2b5544ac11d078" space_name = "" #update your space id spaces = wml_client.spaces.get_details()['resources'] space_id = None for space in spaces: if space['entity']['name'] == space_name: space_id = space["metadata"]["id"] print(space_id) wml_client.set.default_space(space_id) # - PYTHON_FUNCTION_NAME = 'Custom Metrics Provider Function' #update your python function name DEPLOYMENT_NAME = 'Custom Metrics Provider Deployment'#update your deployment name # ### Remove existing function and deployment. 
# + deployments_list = wml_client.deployments.get_details() for deployment in deployments_list["resources"]: model_id = deployment["entity"]["asset"]["id"] deployment_id = deployment["metadata"]["id"] if deployment["metadata"]["name"] == DEPLOYMENT_NAME: print("Deleting deployment id", deployment_id) wml_client.deployments.delete(deployment_id) print("Deleting model id", model_id) wml_client.repository.delete(model_id) wml_client.repository.list_functions() # + [markdown] id="36a1b7cb5d3f48049fd42edcb471d157" # ### Create the function meta properties. # # + id="bd7e0a1e303c41b9b90815e8c2b33894" software_spec_id = wml_client.software_specifications.get_id_by_name('default_py3.7_opence') print(software_spec_id) function_meta_props = { wml_client.repository.FunctionMetaNames.NAME: PYTHON_FUNCTION_NAME, wml_client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: software_spec_id } # + [markdown] id="e7fbbccd0f2344d097433ebdaf04469c" # ### Store the Python function. # + id="2d2aef0d0e1e47048cbf4abd94aac631" function_artifact = wml_client.repository.store_function(meta_props=function_meta_props, function=custom_metrics_provider) function_uid = wml_client.repository.get_function_id(function_artifact) print("Function UID = " + function_uid) # + id="82a7d7df8fd344158108d57be229c5fa" function_details = wml_client.repository.get_details(function_uid) from pprint import pprint pprint(function_details) # + [markdown] id="8b89b9d948f242ffa77bde26a8e9cb81" # ### Deploy the Python function. # # + id="d1c33df40ea94f4ca7f8de257818b417" hardware_spec_id = wml_client.hardware_specifications.get_id_by_name('M') hardware_spec_id # + [markdown] id="6774445edd764e708ba6b4612edaed71" # ### Create deployment metadata for the Python function. # + id="888a3c84fc06461f8ebc2120dfd58b22" deploy_meta = { wml_client.deployments.ConfigurationMetaNames.NAME: DEPLOYMENT_NAME, wml_client.deployments.ConfigurationMetaNames.ONLINE: {}, wml_client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "id": hardware_spec_id} } # + [markdown] id="3273246ba52a49a1a7e7e98964d2cb63" # ### Create a deployment. # + id="7b5a9733edfb484e9a8533d22581bd8d" deployment_details = wml_client.deployments.create(function_uid, meta_props=deploy_meta) # - # ### Get the scoring URL. # # + id="bb56417d0e6e444484c25da9e6048aa5" created_at = deployment_details['metadata']['created_at'] find_string_pos = created_at.find("T") if find_string_pos is not -1: current_date = created_at[0:find_string_pos] scoring_url = wml_client.deployments.get_scoring_href(deployment_details) scoring_url = scoring_url + "?version="+current_date print(scoring_url) # + [markdown] id="73c85183a81f4997837587b2c64753c6" # ## 4. Configure OpenScale. # # Import the required libraries and set up the Watson OpenScale Python client. 
# + id="addc28e79d494e68a5cb6a942d4f8bac" from ibm_watson_openscale import APIClient from ibm_watson_openscale.base_classes.watson_open_scale_v2 import MonitorMeasurementRequest from ibm_watson_openscale.base_classes.watson_open_scale_v2 import MonitorMetricRequest from ibm_watson_openscale.base_classes.watson_open_scale_v2 import MetricThreshold from ibm_watson_openscale.supporting_classes.enums import MetricThresholdTypes from ibm_watson_openscale.base_classes.watson_open_scale_v2 import MonitorTagRequest from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Target from ibm_watson_openscale.supporting_classes.enums import TargetTypes from ibm_watson_openscale.base_classes.watson_open_scale_v2 import IntegratedSystems from datetime import datetime, timezone, timedelta import uuid # + id="1bb826627adf4a2ba2e2bc20d5ab25a2" from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator from ibm_watson_openscale import * from ibm_watson_openscale.supporting_classes.enums import * from ibm_watson_openscale.supporting_classes import * authenticator = CloudPakForDataAuthenticator( url=WOS_CREDENTIALS['url'], username=WOS_CREDENTIALS['username'], password=['password'], disable_ssl_verification=True ) wos_client = APIClient(service_url=WOS_CREDENTIALS['url'],authenticator=authenticator, service_instance_id=WOS_GUID) wos_client.version # + [markdown] id="09b669c1459a4f75ad0d040d433795e4" # ## 5. Create the integrated system for the custom metrics provider. # # # Update the custom metrics deployment URL, which is created during the Python function creation in the integrated system. Watson OpenScale invokes the deployment URL at runtime to compute the custom metrics. # # You must define the authentication type based on the communication with custom metrics deployment. Watson OpenScale supports 2 types of authentication: basic and bearer. If custom metrics deployment accepts the `basic` authentication type, then provide `auth_type=basic` otherwise use `auth_type=bearer`. 
# + CUSTOM_METRICS_ENGINE_NAME = "Sample Custom Metrics Engine" # update the metrics engine name here auth_type = "bearer" #Supported values are basic and bearer if auth_type == "basic": CUSTOM_METRICS_ENGINE_CREDENTIALS = { "auth_type":"basic", "username": "*****",# update the username here "password": "*****"# Update the password here } if auth_type == "bearer": CUSTOM_METRICS_ENGINE_CREDENTIALS = { "auth_type":"bearer", "username":"*****",# update the username here "apikey": "*****" #update the apikey of the cpd cluster where custom metrics deployment is created } #if custom metrics deployment is on other cpd cluster or some other cloud then please uncomment and update #the below "TOKEN_INFO" properties to generate the token to communicate to the custom metrics deployment url #Here are the sample values given in the token_info #TOKEN_INFO = { # "url": "https://iam.ng.bluemix.net/oidc/token", # update the token generation here # "headers": { "Content-type": "application/x-www-form-urlencoded" }, # update the headers here # "payload": "grant_type=urn:ibm:params:oauth:grant-type:apikey&response_type=cloud_iam&apikey=", # update the payload here # "method": "POST" # # update the http method here #} #CUSTOM_METRICS_ENGINE_CREDENTIALS["token_info"] = TOKEN_INFO # - # ### Remove existing integrated system # Delete existing custom metrics provider integrated systems if present integrated_systems = IntegratedSystems(wos_client).list().result.integrated_systems for system in integrated_systems: if system.entity.type == 'custom_metrics_provider' and system.entity.name == CUSTOM_METRICS_ENGINE_NAME: print("Deleting integrated system {}".format(system.entity.name)) IntegratedSystems(wos_client).delete(integrated_system_id=system.metadata.id) # + custom_metrics_integrated_system = IntegratedSystems(wos_client).add( name=CUSTOM_METRICS_ENGINE_NAME, description=CUSTOM_METRICS_ENGINE_NAME, type="custom_metrics_provider", credentials= CUSTOM_METRICS_ENGINE_CREDENTIALS, connection={ "display_name": CUSTOM_METRICS_ENGINE_NAME, "endpoint": scoring_url } ).result integrated_system_id = custom_metrics_integrated_system.metadata.id print(custom_metrics_integrated_system) # - # ## 6. Set up the custom monitor definition and instance. # ################################################################### # UPDATE your custom monitor name in the following field and then run this cell. #################################################################### CUSTOM_MONITOR_NAME = 'Sample model performance' # + [markdown] id="59b9827e31ab45edbad79789213703d1" # ### Check for the existence of the custom monitor definition. # # + id="14e3f1e2e2f24bfd86f86feee24619e0" def get_custom_monitor_definition(): monitor_definitions = wos_client.monitor_definitions.list().result.monitor_definitions for definition in monitor_definitions: if CUSTOM_MONITOR_NAME == definition.entity.name: return definition return None # + [markdown] id="30720ce70ddf485782ebf767a606366b" # ### Create the custom monitor definition. # # Update the custom metric names, threshold types (`LOWER_LIMIT`, `UPPER_LIMIT`) and default values as required. You can define the threshold type as lower limit, upper limit, or both. # - ################################################################### # Update your custom monitor metrics names in the following field. 
Use the same metric names for creating the # monitor definition and publishing the metrics to Openscale in your python function #################################################################### CUSTOM_MONITOR_METRICS_NAMES = ['sensitivity','specificity'] #Update the tag values if you want to fetch the metrics by tags TAGS= ['region'] TAG_DESCRIPTION =['customer geographical region'] # + id="df7bfd7b6f274d54897133f94dc14310" #Update the Threshold types and default values of the metrics def custom_metric_definitions(): metrics = [MonitorMetricRequest(name=CUSTOM_MONITOR_METRICS_NAMES[0], thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.8)]), MonitorMetricRequest(name=CUSTOM_MONITOR_METRICS_NAMES[1], thresholds=[MetricThreshold(type=MetricThresholdTypes.LOWER_LIMIT, default=0.6),MetricThreshold(type=MetricThresholdTypes.UPPER_LIMIT, default=1)])] #Comment the below tags code if there are no tags to be created tags = [MonitorTagRequest(name=TAGS[0], description=TAG_DESCRIPTION[0])] return metrics, tags # + id="79e2f275e13948e5834e77ec4dfadeb6" def create_custom_monitor_definition(): # check if the custom monitor definition already exists or not existing_definition = get_custom_monitor_definition() # if it does not exists, then create a new one. if existing_definition is None: metrics, tags = custom_metric_definitions() custom_monitor_details = wos_client.monitor_definitions.add(name=CUSTOM_MONITOR_NAME, metrics=metrics, tags=tags, background_mode=False).result else: # otherwise, send the existing definition custom_monitor_details = existing_definition return custom_monitor_details # + id="8d352bcf09e646d0a6d174e54078f2c6" custom_monitor_details = create_custom_monitor_definition() custom_monitor_id = custom_monitor_details.metadata.id custom_monitor_id # + [markdown] id="3a0dbf50822c40c680cc9430eb88306c" # ### Check the existence of custom monitor instance. # # + id="7176b942f61f4d0381fb63b92101e76f" def get_custom_monitor_instance(custom_monitor_id): monitor_instances = wos_client.monitor_instances.list(data_mart_id = WOS_GUID, monitor_definition_id = custom_monitor_id, target_target_id = subscription_id).result.monitor_instances if len(monitor_instances) == 1: return monitor_instances[0] return None # + # Openscale MRM service invokes custom metrics deployment url during runtime and wait for the default time of 60 second's to # to check the run status ie finished/Failed and fetch the latest measurement. Increase the wait time, if the runtime deployment # takes more than 60 seconds to compute and publish the custom metrics #Update the wait time here. custom_metrics_wait_time = 60 #time in seconds # - # ### Update the custom monitor instance. def update_custom_monitor_instance(custom_monitor_instance_id): payload = [ { "op": "replace", "path": "/parameters", "value": { "custom_metrics_provider_id": integrated_system_id, "custom_metrics_wait_time": custom_metrics_wait_time } } ] response = wos_client.monitor_instances.update(custom_monitor_instance_id, payload, update_metadata_only = True) result = response.result return result # + [markdown] id="678df1de54164d6284cdf9f255affd8a" # ### For the custom monitor definition, create a custom monitor instance. 
# # + id="97624e1c0a8e4648af05162690c96c66" def create_custom_monitor_instance(custom_monitor_id): # Check if an custom monitor instance already exists existing_monitor_instance = get_custom_monitor_instance(custom_monitor_id) # If it does not exist, then create one if existing_monitor_instance is None: target = Target( target_type=TargetTypes.SUBSCRIPTION, target_id=subscription_id ) parameters = { "custom_metrics_provider_id": integrated_system_id, "custom_metrics_wait_time": custom_metrics_wait_time } # create the custom monitor instance id here. custom_monitor_instance_details = wos_client.monitor_instances.create( data_mart_id=WOS_GUID, background_mode=False, monitor_definition_id=custom_monitor_id, target=target, parameters=parameters ).result else: # otherwise, update the existing one with latest integrated system details. instance_id = existing_monitor_instance.metadata.id custom_monitor_instance_details = update_custom_monitor_instance(instance_id) return custom_monitor_instance_details # + id="ffb159c5c1f54ffe9f1b7a48b727810c" monitor_instance_details = create_custom_monitor_instance(custom_monitor_id) custom_monitor_instance_id = monitor_instance_details.metadata.id print(monitor_instance_details) # - # ### Invoke the custom metrics deployment Python function. # # Validate the custom metrics provider deployment by providing the correct set of paramaters to generate the custom metrics. # + import uuid parameters = { "custom_metrics_provider_id": integrated_system_id, "custom_metrics_wait_time": custom_metrics_wait_time, "run_details": { "run_id": str(uuid.uuid4()), "run_status": "Running" } } payload= { "data_mart_id" : WOS_GUID, "subscription_id" : subscription_id, "custom_monitor_id" : custom_monitor_id, "custom_monitor_instance_id" : custom_monitor_instance_id, "custom_monitor_instance_params": parameters } input_data= { "input_data": [ { "values": payload } ] } deployment_uid=wml_client.deployments.get_uid(deployment_details) job_details = wml_client.deployments.score(deployment_uid, input_data) pprint(job_details) # + [markdown] id="210c3b5835494f67812f0704bad84705" # ## Congratulations # # You have finished configuring Custom Monitor Definition and Monitor instance for your Subscription. You can now run the custom monitor from `Watson OpenScale Dashboard`(https://url-to-your-cp4d-cluster/aiopenscale). Click the tile of your model and select `Evaluate Now` option from `Actions` drop down menu to run the monitor. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Windows 不可以執行!!!! 請到IBM的LAB進行實驗 # ### 練習 5 - 變分量子特徵求解器 # # ## 歷史背景 # # 在過去的十年中,量子計算機迅速成熟並開始實現費曼最初的夢想,即能夠以量子方式模擬自然規律的計算系統。 2014 年由 首次撰寫的一篇論文介紹了變分量子特徵求解器 (VQE),這是一種用於尋找分子基態能量(最低能量)的算法,其電路比其他方法淺得多。 [1] 並且,在 2017 年,IBM Quantum 團隊使用 VQE 算法來模擬氫化鋰分子的基態能量。 [2] # # VQE 的特色是將一些問題的處理工作量外包給一台經典計算機。 該算法從稱為 ansatz(最佳猜測)的參數化量子電路開始,然後使用經典優化器找到該電路的最佳參數。 VQE 相對於經典算法的優勢來自這樣一個事實,即量子處理單元可以表示和存儲問題的確切波函數,這對於經典計算機來說是一個指數級的難題。 # # T # 這項練習5使您可以自己實現費曼的夢想,設置變分量子特徵值來確定分子的基態和能量。這很有趣因為可以使用基態來計算各種分子性質,例如精準作用在核上立。可以用來進行分子動力學模擬,探索化學系統中隨時間變化的情況。 # # # # ### 參考文獻 # # 1. , et al. "A variational eigenvalue solver on a photonic quantum processor." Nature communications 5.1 (2014): 1-7. # 2. , et al. "Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets." Nature 549.7671 (2017): 242-246. # 3. Sokolov, ., et al. 
"Microcanonical and finite-temperature ab initio molecular dynamics simulations on quantum computers." Physical Review Research 3.1 (2021): 013125. # # ## 簡介 # # 對於VQE的應用,您將能夠選擇如何編寫模擬,尤其是ansatz量子電路。. # 這是因為在誤差大的量子計算機上運行VQE時重要,的任務之一是通過找到能夠代表基態的最精簡的量子電路來減少真實度的損失(來減少錯誤)。. # 實際上,這需要減少雙味原量子閘的數量(例如. CNOT),同時又不失去準確性。. # #
    # # Goal # 查找最精簡的ansatz電路,以準確表示給定問題的基態。發揮創意。 # # # Plan # # 首先,您將學習如何為最小的分子組成VQE模擬,然後將所學到的知識應用於較大的分子。 # # **1. Tutorial - VQE for H$_2$:** 熟悉VQE,並通過運行特徵向量模擬來選擇ansatz /經典優化器的最佳組合。. # # **2. Final Challenge - VQE for LiH:** # 執行與第一部分類似的調查,但僅限於特徵模擬器。. 使用Qiskit中可用的qubit數量減少方案,並為該較大的系統找到最佳電路。優化電路,並發揮您的想像力,找到選擇參數化電路的最佳構建塊的方法,並組成它們以構建用於基態的最精簡的ansatz電路,這比Qiskit中已經提供的電路要好。 # #
    # # #
    # # 以下是VQE模擬背後理論的介紹。. 在繼續之前,您不必了解整個過程。. 別害怕! # #
    # # # # # ## Theory # # 以下是代表如何在量子計算機上執行使用VQE的分子模擬的一般工作流程。. # # # # 他的核心思想混合量子和古典方法是將其最擅長的部分外包給** CPU(經典處理單元)**和** QPU(量子處理單元)**。. CPU負責列出需要測量的語法以計算能量並優化電路參數。. QPU實現代表系統量子態的量子電路並測量能量。. 以下是一些更多詳細信息: # # The [Hartree–Fock (HF) method](https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method#:~:text=In%20computational%20physics%20and%20chemistry,system%20in%20a%20stationary%20state.) # efficiently computes an approximate grounds state wavefunction by assuming that the latter can be represented by a single Slater determinant (e.g. for H$_2$ molecule in STO-3G basis with 4 spin-orbitals and qubits, $|\Psi_{HF} \rangle = |0101 \rangle$ where electrons occupy the lowest energy spin-orbitals). What QPU does later in VQE is finding a quantum state (corresponding circuit and its parameters) that can also represent other states associated missing electronic correlations (i.e. $\sum_i c_i |i\rangle$ states in $|\Psi \rangle = c_{HF}|\Psi_{HF} \rangle + \sum_i c_i |i\rangle $ where $i$ is a bitstring). # # 計算HF後,將Hamiltonian中的運算符映射到使用費米子到量子變換的QPU上的測量值(請參見下面的Hamiltonian部分)。.可以進一步分析系統的屬性,以減少量子位的數量或縮短ansatz電路:。 # # - For Z2 symmetries and two-qubit reduction, see [Bravyi *et al*, 2017](https://arxiv.org/abs/1701.08213v1). # - For entanglement forging, see [Eddins *et al.*, 2021](https://arxiv.org/abs/2104.10220v1). # - For the adaptive ansatz see, [Grimsley *et al.*,2018](https://arxiv.org/abs/1812.11173v2), [Rattew *et al.*,2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*,2019](https://arxiv.org/abs/1911.10205). You may use the ideas found in those works to find ways to shorten the quantum circuits. # # **QPU** 運用量子電路 (看下方 Ansatzes 的部分), 參數化角度 $\vec\theta$, 通過放置各種單個量子位旋轉和糾纏器來表示基態波函數。 (e.g. 雙位元量子閘). 量子優勢在於,QPU可以有效地表示和存儲確切的波函數,這對於具有多個原子的系統在經典計算機上變得難以計算。. 最後,QPU測量選擇的運算符(例如. 代表漢密爾頓人)。. # 以下是VQE算法每個部分的數學細節。. 如果您觀看我們的內容,可能會有所幫助。 [video episode about VQE](https://www.youtube.com/watch?v=Z-A6G0WVI9w). # # # ### Hamiltonian # # 在這裡,我們解釋瞭如何獲得需要測量的運算符以獲取給定係統的能量。. # 這些術語包含在分子漢密爾頓語中,定義為: # $$ # \begin{aligned} # \hat{H} &=\sum_{r s} h_{r s} \hat{a}_{r}^{\dagger} \hat{a}_{s} \\ # &+\frac{1}{2} \sum_{p q r s} g_{p q r s} \hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}+E_{N N} # \end{aligned} # $$ # 與 # $$ # h_{p q}=\int \phi_{p}^{*}(r)\left(-\frac{1}{2} \nabla^{2}-\sum_{I} \frac{Z_{I}}{R_{I}-r}\right) \phi_{q}(r) # $$ # $$ # g_{p q r s}=\int \frac{\phi_{p}^{*}\left(r_{1}\right) \phi_{q}^{*}\left(r_{2}\right) \phi_{r}\left(r_{2}\right) \phi_{s}\left(r_{1}\right)}{\left|r_{1}-r_{2}\right|} # $$ # # # 其中$ h_ {r s} $和$ g_ {p q r s} $是一體/二體積分(使用Hartree-Fock方法)和$ E_ {N N} $核排斥能。. # 單體積分代表電子的動能及其與核的相互作用。. # 兩體積分代表電子-電子相互作用。. # $\ hat {a} _ {r} ^ {\ dagger},\ hat {a} _ {r} $運算符表示自旋軌道$ r $中電子的創建和an滅,並要求對運算符進行映射,以便我們可以在量子計算機上測量它們。. # 請注意,VQE使電子能源最小化,因此您必須檢索並添加核排斥能$ E_ {NN} $才能計算總能量。. # # # # # 因此,對於$ h_ {r s} $和$ g_ {p q r s} $張量中的每個非零矩陣元素,我們都可以通過以下費米子到量子位轉換來構造相應的Pauli字符串(Pauli運算符的張量乘積)。. # 例如,在Jordan-Wigner映射中,軌道$ r = 3 $,我們獲得以下Pauli字符串:。 # $$ # \hat a_{3}^{\dagger}= \hat \sigma_z \otimes \hat \sigma_z \otimes\left(\frac{ \hat \sigma_x-i \hat \sigma_y}{2}\right) \otimes 1 \otimes \cdots \otimes 1 # $$ # 其中$\ hat \ sigma_x,\ hat \ sigma_y,\ hat \ sigma_z $是著名的Pauli運算符。. 放置$ \ hat \ sigma_z $運算符的張量乘積以執行費米離子反換向關係。. # 下面給出了水分子的14個自旋軌道與約14個量子位之間的Jordan-Wigner映射的表示形式: # # # # # 然後,只需替換一個/兩個身體的興奮(例如. $ \ hat {a} _ {r} ^ {\ dagger} \ hat {a} _ {s} $,$ \ hat {a} _ {p} ^ \ dagger} \ hat {a} _ {r} \ hat {a} _ {s} $)。. $\ hat {P} _i $,請參見上圖)。. 
生成的運算符集已準備好在QPU上進行測量。 # 有關其他詳細信息,請參見。 [Seeley *et al.*, 2012](https://arxiv.org/abs/1208.5986v1). # # ### Ansatzes # # 您主要可以使用兩種ansatzes的化學問題。. # - **q-UCC ansatzes** are physically inspired, and roughly map the electron excitations to quantum circuits. The q-UCCSD ansatz (`UCCSD`in Qiskit) possess all possible single and double electron excitations. The paired double q-pUCCD (`PUCCD`) and singlet q-UCCD0 (`SUCCD`) just consider a subset of such excitations (meaning significantly shorter circuits) and have proved to provide good results for dissociation profiles. For instance, q-pUCCD doesn't have single excitations and the double excitations are paired as in the image below. # - **Heuristic ansatzes (`TwoLocal`)** were invented to shorten the circuit depth but still be able to represent the ground state. # # 如下圖所示,R門代表參數化的單個量子位旋轉,$ U_ {CNOT} $糾纏器(兩個量子位門)。. 這樣做的想法是,在重複某些$ D $倍後,相同的塊(具有獨立的參數)可以到達地面狀態。. # # For additional details refer to [Sokolov *et al.* (q-UCC ansatzes)](https://arxiv.org/abs/1911.10864v2) and [Barkoutsos *et al.* (Heuristic ansatzes)](https://arxiv.org/pdf/1805.04340.pdf). # # # # # # ### VQE # # # 給定一個具有未知最小特徵值$ E_ {min} $的Hermitian運算符$ \ hat H $,與特徵狀態$ | psi_ {min} \ rangle $相關聯,VQE提供了一個估計值$ E_ {\ theta} $,以$ E_ {min為界} $: # # \begin{align*} # E_{min} \le E_{\theta} \equiv \langle \psi(\theta) |\hat H|\psi(\theta) \rangle # \end{align*} # # 其中$ |\ psi(\ theta)\ rangle $是與$ E_ {\ theta} $關聯的試用狀態。. 通過將以$ U(\ theta)$表示的參數化電路應用於某些任意起始狀態$ |\ psi \ rangle $,該算法獲得估計值$ U(\ theta)|\ psi \ rangle \ equiv | psi(\ theta)\ rangle $ on $ | \ psi {min} \ rangle $。. 經典優化器通過更改參數$ \ theta $並最小化$ \ langle \ psi(\ theta)的期望值來迭代優化估計值| hat H |\ psi(\ theta)\ rangle $。 # # As applications of VQE, there are possibilities in molecular dynamics simulations, see [Sokolov *et al.*, 2021](https://arxiv.org/abs/2008.08144v1), and excited states calculations, see [Ollitrault *et al.*, 2019](https://arxiv.org/abs/1910.12890) to name a few. # #
    # # References for additional details # # 有關實現此算法的qiskit-nature教程,請參見。 [here](https://qiskit.org/documentation/nature/tutorials/01_electronic_structure.html) # 但這還不夠,您可能想看看。 [first page of github repository](https://github.com/Qiskit/qiskit-nature) and the [test folder](https://github.com/Qiskit/qiskit-nature/tree/main/test)它們包含為每個組件編寫的測試,它們提供了使用每個功能的基本代碼。. # #
    # ## 最終挑戰 - LiH 分子的 VQE # # # 在這部分中,您將使用 STO-3G 基礎和 PySCF 驅動程序模擬 LiH 分子。 #

  • # #
    # # 目標 # # # 實驗所有參數,然後找到最好的ansatz。. 您可以根據需要發揮創造力。! # # 對於每個問題,請給出第1部分的“ ansatz”對象。. 您的最終分數將僅基於第2部分。. # #
    # # 請注意,該系統現在更大。. 通過檢索自旋軌道的數量來確定此系統需要多少量子位。. # # ### Reducing the problem size # # # 您可能想減少用於模擬的量子位數量:。 # # - you could freeze the core electrons that do not contribute significantly to chemistry and consider only the valence electrons. Qiskit  already has this functionality implemented. So inspect the different transformers in `qiskit_nature.transformers` and find the one that performs the freeze core approximation. # - you could use `ParityMapper` with `two_qubit_reduction=True` to eliminate 2 qubits. # - you could reduce the number of qubits by inspecting the symmetries of your Hamiltonian. Find a way to use `Z2Symmetries` in Qiskit. # # ### Custom ansatz # # You might want to explore the ideas proposed in [Grimsley *et al.*,2018](https://arxiv.org/abs/1812.11173v2), [ *et al.*,2019](https://arxiv.org/abs/1911.10205), [Rattew *et al.*,2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*,2019](https://arxiv.org/abs/1911.10205). # You can even get try machine learning algorithms to generate best ansatz circuits. # # ### Setup the simulation # # # 現在讓我們運行Hartree-Fock計算,其餘的取決於您。! # #
    # # Attention # # 我們在下面給出“驅動程序”,“ initial_point”,“ initial_state”應該保持給定的狀態。. # 然後,您可以自由探索Qiskit中所有其他可用的東西。. # 因此,您必須從此初始點開始(所有參數設置為0.01):。 # # `initial_point = [0.01] * len(ansatz.ordered_parameters)` # or # `initial_point = [0.01] * ansatz.num_parameters` # # 您的初始狀態必須是Hartree-Fock狀態:。 # # `init_state = HartreeFock(num_spin_orbitals, num_particles, converter)` # # 對於每個問題,請給出“ ansatz”對象。. # 請記住,您必須達到化學精度$ | E_ {exact}-E_ {VQE} | \ leq 0.004 $ Ha $ = 4 $ mHa。. # #
    #匯入工具 from qiskit import IBMQ, BasicAer, Aer # #### 1. Driver # Qiskit中可用的經典化學代碼的接口稱為驅動程序。. 例如,我們有“ PSI4Driver”,“ PyQuanteDriver”,“ PySCFDriver”。. # # 通過在下面的單元格中運行驅動程序(給定基礎集和分子幾何的Hartree-Fock計算),我們獲得了有關分子的所有必要信息,然後應用了量子算法。. # + #匯入分子資訊 from qiskit_nature.drivers import PySCFDriver molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474' driver = PySCFDriver(atom=molecule) qmolecule = driver.run() # - #
    # # Questions # # Look into the attributes of `qmolecule` and answer the questions below. # # # 1. We need to know the basic characteristics of our molecule. What is the total number of electrons in your system? # 2. What is the number of molecular orbitals? # 3. What is the number of spin-orbitals? # 3. How many qubits would you need to simulate this molecule with Jordan-Wigner mapping? # 5. What is the value of the nuclear repulsion energy? # #
    # # + n_el = qmolecule.num_alpha + qmolecule.num_beta n_mo = qmolecule.num_molecular_orbitals n_so = 2 * qmolecule.num_molecular_orbitals n_q = 2* qmolecule.num_molecular_orbitals e_nn = qmolecule.nuclear_repulsion_energy print(n_el) print(n_mo) print(n_so) print(n_q) # - # #### 2. Electronic structure problem # # 然後,您可以創建一個`ElectronicStructureProblem',可以生成費米離子運算符的列表,然後再將它們映射到量子位(Pauli字符串)。. # + #寫下系統總能量 from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem from qiskit_nature.transformers import FreezeCoreTransformer # 冷凍可以忽略計算的軌域 freezeCoreTransfomer = FreezeCoreTransformer(freeze_core=True,remove_orbitals=[3,4]) #這裡是建立要解決問題的地方,我們改動了函式的相關參數 #可以在這裡找到更多答案 : https://qiskit.org/documentation/nature/tutorials/01_electronic_structure.html qmolecule = freezeCoreTransfomer.transform(qmolecule) problem = ElectronicStructureProblem(driver,q_molecule_transformers=[freezeCoreTransfomer]) # 創建二次量子化運算符 - 二次量子化的目的在於得到系統哈密頓量也就是總能量 second_q_ops = problem.second_q_ops() # 哈密頓量 main_op = second_q_ops[0] #在這裡我建議print哈密頓量查看 print(main_op) # - # #### 3. QubitConverter # 允許定義您將在模擬中使用的映射。. 您可以嘗試不同的映射。 我們將堅持使用“ JordanWignerMapper”,因為它允許簡單的對應關係:量子位代表分子中的自旋軌道。. # + from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter from qiskit.opflow.primitive_ops import PauliSumOp # 設置映射器和量子比特轉換器 - 在這裡我們轉化哈密頓量為 Qubits的相關參數 mapper_type = 'ParityMapper' if mapper_type == 'ParityMapper': mapper = ParityMapper() elif mapper_type == 'JordanWignerMapper': mapper = JordanWignerMapper() elif mapper_type == 'BravyiKitaevMapper': mapper = BravyiKitaevMapper() #two_qubit_reduction - 使我們能忽略兩個位元,進而降低CX閘的使用 #z2symmetry_reduction - Z2對稱約化 - 可以搜尋群論找到更多答案 converter = QubitConverter(mapper=mapper, two_qubit_reduction=True,z2symmetry_reduction = [-1]) # 費米子算符映射到量子比特算符 num_particles = (problem.molecule_data_transformed.num_alpha, problem.molecule_data_transformed.num_beta) qubit_op = converter.convert(main_op, num_particles=num_particles) # - # #### 4. Initial state # 正如我們在理論部分中所描述的那樣,化學的良好初始狀態是HF狀態(即. | 𝑃𝑠𝑖𝐻𝐹 𝑟𝑎𝑛𝑔𝑙𝑒=|0101 𝑟𝑎𝑛𝑔𝑙𝑒 )。. 我們可以將其初始化如下: # + from qiskit_nature.circuit.library import HartreeFock # 在這裡引入HartreeFock作為初始值 - 用來解多體量子系統的波函數 # 相關連結 :https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method num_particles = (problem.molecule_data_transformed.num_alpha, problem.molecule_data_transformed.num_beta) num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals init_state = HartreeFock(num_spin_orbitals, num_particles, converter) #我猜可能的波函數 print(init_state) # - # ## 解題想法 # # 在解題目的過程中,我們成功使用了TwoLocal成功拿到cost = 30,但是看到slack的討論中我們看到了有人只使用了3個CX完成!於是我們開始發想是什麼方法可以達成,在我們的努力不懈下,我們發現到TwoLocal也可以有3個CX的電路也就是重複一次,但我觀察到,電路中有被優化的參數只有Z方向,於是我就嚐試使用u閘一次控制三個參數,因此我們必須放置4x2個u閘,且每一個u的參數都是各自獨立的,藉由這個想法,在搭配上SLSQP的強力優化效果,得到了3分解。 # # ### 優化器怎麼作用? # 比如 : theta0 = Parameter('a0') # 這行程式給定了一參數名為'a0',優化器會自動調整他的角度,而不是我們手動調整,經過一次計算後發現實際值跟理想值優些差距,就會再次微調角度參數,再次送入量子電腦中計算,再得到一解,反覆循環直到收斂 # #### 5. Ansatz # 最重要的選擇之一是選擇近似基態的量子電路。. 
這是qiskit電路庫的示例,其中包含許多製作自己的電路的可能性。 # + from qiskit.circuit.library import TwoLocal from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD # 選擇我們想要的 ansatz - 可以理解程波函數 ansatz_type = "Custom" # q-UCC antatze 的相關係數 num_particles = (problem.molecule_data_transformed.num_alpha, problem.molecule_data_transformed.num_beta) num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals # 定義五種ansatz,其中第五種是自訂義的 if ansatz_type == "TwoLocal": # 放置在所有具有獨立參數的量子位上的單個量子位旋轉 rotation_blocks = ['ry', 'rz'] # 糾纏閘 entanglement_blocks = 'cx' # # 怎樣的糾纏方式? entanglement = 'full' # entanglement = 'linear' # 具有獨立參數的rotation_blocks + entanglement_blocks 的重複次數 repetitions = 1 # 跳過最後的 rotation_blocks 層 skip_final_rotation_layer = True ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions, entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer) # 添加初始狀態 ansatz.compose(init_state, front=True, inplace=True) elif ansatz_type == "UCCSD": ansatz = UCCSD(converter,num_particles,num_spin_orbitals,initial_state = init_state) elif ansatz_type == "PUCCD": ansatz = PUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state) elif ansatz_type == "SUCCD": ansatz = SUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state) elif ansatz_type == "Custom": # 如何編寫自己的電路的示例 from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister # 定義變分參數 - 在這裡我們定義了每一個u的參數 theta0 = Parameter('a0') phi0 = Parameter('a1') lam0 = Parameter('a2') theta1 = Parameter('a3') phi1 = Parameter('a4') lam1 = Parameter('a5') theta2 = Parameter('a6') phi2 = Parameter('a7') lam2 = Parameter('a8') theta3 = Parameter('a9') phi3 = Parameter('a10') lam3 = Parameter('a11') theta4 = Parameter('a12') phi4 = Parameter('a13') lam4 = Parameter('a14') theta5 = Parameter('a15') phi5 = Parameter('a16') lam5 = Parameter('a17') theta6 = Parameter('a18') phi6 = Parameter('a19') lam6 = Parameter('a20') theta7 = Parameter('a21') phi7 = Parameter('a22') lam7 = Parameter('a23') theta8 = Parameter('a24') phi8 = Parameter('a25') lam8 = Parameter('a26') gamma= Parameter('a27') n = qubit_op.num_qubits # 製作一個空的量子電路 qc = QuantumCircuit(qubit_op.num_qubits) qubit_label = 0 qc.u(theta0,phi0,lam0,0) qc.u(theta1,phi1,lam1,1) qc.u(theta2,phi2,lam2,2) qc.u(theta3,phi3,lam3,3) # Place a CNOT ladderCUGate(theta, phi, lam, gamma, label=None, ctrl_state=None) qc.cx(0,1) qc.cx(1,2) qc.cx(2,3) qc.barrier() qc.u(theta4,phi4,lam4,0) qc.u(theta5,phi5,lam5,1) qc.u(theta6,phi6,lam6,2) qc.u(theta7,phi7,lam7,3) ansatz = qc ansatz.compose(init_state, front=True, inplace=True) print(ansatz) # - print( qubit_op.num_qubits) # #### 6. Backend # 最重要的選擇之一是選擇近似基態的量子電路。. 這是qiskit電路庫的示例,其中包含許多製作自己的電路的可能性。. from qiskit import Aer backend = Aer.get_backend('statevector_simulator') # #### 7. Optimizer # 優化器指導ansatz參數的演變,因此研究能量收斂非常重要,因為它將定義必須在QPU上執行的測量次數。 明智的選擇可能會大大減少所需的能源評估數量。. # + from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP,DIRECT_L #選擇使用的優化器 - 在我的測試中 SLSQP為最佳優化器 optimizer_type = 'SLSQP' # You may want to tune the parameters # of each optimizer, here the defaults are used if optimizer_type == 'COBYLA': optimizer = COBYLA(maxiter=500) elif optimizer_type == 'L_BFGS_B': optimizer = L_BFGS_B(maxfun=500) elif optimizer_type == 'SPSA': optimizer = SPSA(maxiter=500) elif optimizer_type == 'SLSQP': optimizer = SLSQP(maxiter=500) # - # #### 8. Exact eigensolver # 出於學習目的,我們可以通過Hamiltonian矩陣的精確對角化準確地解決問題,因此我們知道VQE的目標。 當然,該矩陣的尺寸在分子軌道的數量上呈指數級,因此您可以嘗試對選擇的大分子進行此操作,並查看其變慢程度。. 
對於非常大的系統,您將用盡內存來嘗試存儲其波函數。. # + from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver import numpy as np def exact_diagonalizer(problem, converter): solver = NumPyMinimumEigensolverFactory() calc = GroundStateEigensolver(converter, solver) result = calc.solve(problem) return result result_exact = exact_diagonalizer(problem, converter) exact_energy = np.real(result_exact.eigenenergies[0]) print("Exact electronic energy", exact_energy) print(result_exact) # The targeted electronic energy for H2 is -1.85336 Ha # Check with your VQE result. # - # #### 9. VQE and initial parameters for the ansatz # Now we can import the VQE class and run the algorithm. # + #匯入 VQE 函式 - 這個是IBM已經寫好的我們可以直接使用 from qiskit.algorithms import VQE from IPython.display import display, clear_output # Print and save the data in lists def callback(eval_count, parameters, mean, std): # Overwrites the same line when printing display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std)) clear_output(wait=True) counts.append(eval_count) values.append(mean) params.append(parameters) deviation.append(std) counts = [] values = [] params = [] deviation = [] # Set initial parameters of the ansatz # We choose a fixed small displacement # So all participants start from similar starting point try: initial_point = [0.01] * len(ansatz.ordered_parameters) except: initial_point = [0.01] * ansatz.num_parameters #執行VQE #在這裡我們可以看到我們上面所定義的參數都在這裡被輸入 algorithm = VQE(ansatz, optimizer=optimizer, quantum_instance=backend, callback=callback, initial_point=initial_point) result = algorithm.compute_minimum_eigenvalue(qubit_op) print(result) # + # Check your answer using following code from qc_grader import grade_ex5 freeze_core = True # change to True if you freezed core electrons grade_ex5(ansatz,qubit_op,result,freeze_core) # - # ## 另解 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 3d_thuync # language: python # name: 3d_thuync # --- # # Libraries # + import numpy as np import os, cv2, pickle, cvut from matplotlib import pyplot as plt from pcdet.utils import common_utils from pcdet.datasets import build_dataloader from pcdet.config import cfg, log_config_to_file, cfg_from_list, cfg_from_yaml_file # Move to the root directory if not 'workdir' in globals(): workdir = os.getcwd() workdir = "/".join(workdir.split('/')[:-1]) # %cd "$workdir" # Create cache folder os.makedirs("cache", exist_ok=True) # - # # Load pickle result # + infer_result_file = "output/cfgs/kitti_models/pv_rcnn/default/eval/epoch_8369/val/default/result.pkl" with open(infer_result_file, 'rb') as fp: det_results = pickle.load(fp) print("Detection size:", len(det_results)) print(det_results[0].keys()) print() for key, val in det_results[0].items(): print(key, val.shape) # - obj_dix = 0 print("frame_id:", det_results[0]['frame_id']) print("name:", det_results[0]['name'][obj_dix]) print("truncated:", det_results[0]['truncated'][obj_dix]) print("occluded:", det_results[0]['occluded'][obj_dix]) print("alpha:", det_results[0]['alpha'][obj_dix]) print("bbox:", det_results[0]['bbox'][obj_dix]) print("dimensions:", det_results[0]['dimensions'][obj_dix]) print("location:", det_results[0]['location'][obj_dix]) print("rotation_y:", det_results[0]['rotation_y'][obj_dix]) print("score:", 
det_results[0]['score'][obj_dix]) print("boxes_lidar:", det_results[0]['boxes_lidar'][obj_dix]) # + sample_idx = 2000 image_dir = "/dataset/kitti/training/image_2" image_file = os.path.join(image_dir, det_results[sample_idx]['frame_id']+'.png') image = cv2.imread(image_file)[...,::-1] bboxes = det_results[sample_idx]['bbox'] classnames = det_results[sample_idx]['name'] scores = det_results[sample_idx]['score'] CLASSES = ['Car', 'Pedestrian', 'Cyclist'] labels = np.array([CLASSES.index(item) for item in classnames]) selected_indicators = scores > 0.3 bboxes = bboxes[selected_indicators] labels = labels[selected_indicators] image = cvut.draw_bboxes(image, bboxes, labels=labels, classnames=CLASSES, color=None) plt.figure(figsize=(35,35)) plt.imshow(image) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: VPython # language: python # name: vpython # --- # + from vpython import * # GlowScript version of Jupyter demo program Color-RGB-HSV scene.userzoom = False scene.userspin = False scene.width = 460 scene.height = 200 scene.range = 1 scene.background = color.red box(pos=vector(10,0,0)) # Force creation of canvas; box is not seen because it is outside the canvas cancopy = 'You can Ctrl-C or Command-C copy these RGB and HSV values:\n' scene.title = cancopy scene.append_to_title("RGB = <") titlergb = wtext(pos=scene.title_anchor, text="1.000, 0.000, 0.000") scene.append_to_title(">, HSV = <") titlehsv = wtext(pos=scene.title_anchor, text="0.000, 0.000, 0.000") scene.append_to_title(">") C = ['Red', 'Green', 'Blue', 'Hue', 'Saturation', 'Value'] sliders = [] wts = [] def set_background(sl): if sl.id < 3: wts[sl.id].text = '{:1.3f}'.format(sl.value) rgb = vector(sliders[0].value, sliders[1].value, sliders[2].value) hsv = color.rgb_to_hsv(rgb) sliders[3].value = int(1000*hsv.x)/1000 # reset HSV slider positions; display 3 figures sliders[4].value = int(1000*hsv.y)/1000 sliders[5].value = int(1000*hsv.z)/1000 wts[3].text = '{:1.3f}'.format(hsv.x) wts[4].text = '{:1.3f}'.format(hsv.y) wts[5].text = '{:1.3f}'.format(hsv.z) else: wts[sl.id].text = '{:1.3f}'.format(sl.value) hsv = vector(sliders[3].value, sliders[4].value, sliders[5].value) rgb = color.hsv_to_rgb(hsv) sliders[0].value = int(1000*rgb.x)/1000 # reset RGB slider positions; display 3 figures sliders[1].value = int(1000*rgb.y)/1000 sliders[2].value = int(1000*rgb.z)/1000 wts[0].text = '{:1.3f}'.format(rgb.x) wts[1].text = '{:1.3f}'.format(rgb.y) wts[2].text = '{:1.3f}'.format(rgb.z) scene.background = rgb # For readability, limit precision of display of quantities to 3 figures titlergb.text = "{:1.3f}, {:1.3f}, {:1.3f}".format(rgb.x, rgb.y, rgb.z) titlehsv.text = "{:1.3f}, {:1.3f}, {:1.3f}".format(hsv.x, hsv.y, hsv.z) scene.caption = '\n' for i in range(6): # Create the 3 RGB and 3 HSV sliders sliders.append(slider(length=300, left=10, min=0, max=1, bind=set_background, id=i)) scene.append_to_caption(' '+C[i]+' ') # Display slider name wts.append(wtext(text='0.000')) scene.append_to_caption('\n\n') if i == 2: scene.append_to_caption("\n\n") # Separate the RGB and HSV sliders sliders[0].value = 1 # make the background red sliders[4].value = sliders[5].value = 1 wts[0].text = '1.000' wts[4].text = wts[5].text = '1.000' # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.8 64-bit 
(conda) # name: python3 # --- # + [markdown] papermill={"duration": 0.028289, "end_time": "2021-09-16T16:18:48.494895", "exception": false, "start_time": "2021-09-16T16:18:48.466606", "status": "completed"} tags=[] # # EDA on German Fake News Dataset # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 1.642953, "end_time": "2021-09-16T16:18:50.164299", "exception": false, "start_time": "2021-09-16T16:18:48.521346", "status": "completed"} tags=[] import os import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer import nltk #from nltk.corpus import stopwords from dotenv import find_dotenv, load_dotenv load_dotenv(find_dotenv()) pd.set_option('display.max_rows', 100) # + [markdown] papermill={"duration": 0.026218, "end_time": "2021-09-16T16:18:50.217641", "exception": false, "start_time": "2021-09-16T16:18:50.191423", "status": "completed"} tags=[] # ## Read data # + papermill={"duration": 4.974594, "end_time": "2021-09-16T16:18:55.218688", "exception": false, "start_time": "2021-09-16T16:18:50.244094", "status": "completed"} tags=[] FAKE_NEWS_CSV = os.path.join(os.getenv('PROJECT_DIR'), 'data', 'interim', 'GermanFakeNC_interim.csv') df = pd.read_csv(FAKE_NEWS_CSV) df.head(20) # + [markdown] papermill={"duration": 0.026486, "end_time": "2021-09-16T16:18:55.271851", "exception": false, "start_time": "2021-09-16T16:18:55.245365", "status": "completed"} tags=[] # ## Start exploring the data # + papermill={"duration": 0.090589, "end_time": "2021-09-16T16:18:55.389333", "exception": false, "start_time": "2021-09-16T16:18:55.298744", "status": "completed"} tags=[] print('Columns with missing values:') print(df.isnull().sum()) # - # Skip rows with NULL values in text and title df_ = df.dropna(subset=['titel', 'text']) df_.isnull().sum() # + papermill={"duration": 0.06018, "end_time": "2021-09-16T16:18:55.532083", "exception": false, "start_time": "2021-09-16T16:18:55.471903", "status": "completed"} tags=[] # Still NULLS for Date... df_[df_['Date'].isnull()] # - # From the URL we can see that the Date could be 2017/12 so we impute the Date to work with... df_['Date'].fillna(pd.to_datetime('01/12/2017'), inplace=True) # + [markdown] papermill={"duration": 0.028934, "end_time": "2021-09-16T16:18:56.231678", "exception": false, "start_time": "2021-09-16T16:18:56.202744", "status": "completed"} tags=[] # First we check for duplicates in the dataset... # + papermill={"duration": 1.63057, "end_time": "2021-09-16T16:18:57.891474", "exception": false, "start_time": "2021-09-16T16:18:56.260904", "status": "completed"} tags=[] print(df_.duplicated(subset=['titel', 'text']).value_counts(normalize=True)) df_[df_.duplicated(subset=['titel', 'text'])] # + papermill={"duration": 0.689924, "end_time": "2021-09-16T16:18:58.611702", "exception": false, "start_time": "2021-09-16T16:18:57.921778", "status": "completed"} tags=[] # We drop them... df_ = df_.drop_duplicates(subset=['titel', 'text']).reset_index() # + [markdown] papermill={"duration": 0.030143, "end_time": "2021-09-16T16:18:58.672014", "exception": false, "start_time": "2021-09-16T16:18:58.641871", "status": "completed"} tags=[] # Now lets have a quick look at how many words and characters we have in Titel and Body? 
# + papermill={"duration": 3.286412, "end_time": "2021-09-16T16:19:01.988871", "exception": false, "start_time": "2021-09-16T16:18:58.702459", "status": "completed"} tags=[] # Count for max character and word length in Titel and Body print('Max # of char in Title: %i' % df_['titel'].str.len().max()) cnt_lst = [] for i in range(0, len(df_)): cnt_lst.append(len(df_['titel'][i].split())) print('Max # words in Title: %i' % max(cnt_lst)) print('Average # of words in Title: %i' % np.mean(cnt_lst)) print('Max char length of Body: %i' % df_['text'].str.len().max()) cnt_lst = [] for i in range(0, len(df_)): cnt_lst.append(len(df_['text'][i].split())) print('Max # of words in Body: %i' % max(cnt_lst)) print('Average # of words in Body: %i' % np.mean(cnt_lst)) # + [markdown] papermill={"duration": 0.031095, "end_time": "2021-09-16T16:19:02.052285", "exception": false, "start_time": "2021-09-16T16:19:02.021190", "status": "completed"} tags=[] # Ok, this seams to be quite reasonable for news articles. # # ### From what year they actually stem from? # + papermill={"duration": 0.825199, "end_time": "2021-09-16T16:19:02.909017", "exception": false, "start_time": "2021-09-16T16:19:02.083818", "status": "completed"} tags=[] df_['Date'] = pd.to_datetime(df_['Date']) df_['Year'] = df_['Date'].apply(lambda x: x.year) df_['Year'].plot(kind = 'hist', figsize=(10, 8) ) # + [markdown] papermill={"duration": 0.032314, "end_time": "2021-09-16T16:19:02.973890", "exception": false, "start_time": "2021-09-16T16:19:02.941576", "status": "completed"} tags=[] # Ok, so the majority of articles were published from 2017 to 2018. We are in 2021 as this EDA was perfromed. So the latest articles are already 3 years old. Could that have an implication on our model we want to train? # * Topics (and therefore the vocabulary) in the train data could be outdated? (e.g. Nobody talked about covid-19 in 2018!) # + [markdown] papermill={"duration": 0.031917, "end_time": "2021-09-16T16:19:03.038341", "exception": false, "start_time": "2021-09-16T16:19:03.006424", "status": "completed"} tags=[] # ## Preprocessing and preparing text features # + [markdown] papermill={"duration": 0.032357, "end_time": "2021-09-16T16:19:03.104069", "exception": false, "start_time": "2021-09-16T16:19:03.071712", "status": "completed"} tags=[] # We already know (and expected) from a first look on the text features, that they contain a lot of stopwords. Therefore we take a stopword list from nltk. # # Afterwards we ceate a word count matrix with CountVectorizer Object from scikit-learn and play around a bit... # + papermill={"duration": 0.046642, "end_time": "2021-09-16T16:19:03.184065", "exception": false, "start_time": "2021-09-16T16:19:03.137423", "status": "completed"} tags=[] # Init a stopword list from nltk and use it as arg in CountVectorizer... nltk.download('stopwords') stopword_list = nltk.corpus.stopwords.words('german') stemmer = nltk.stem.snowball.GermanStemmer(ignore_stopwords=True) # - class StemmedCountVectorizer(CountVectorizer): def build_analyzer(self): analyzer = super(StemmedCountVectorizer, self).build_analyzer() return lambda doc: ([stemmer.stem(w) for w in analyzer(doc)]) # Count Vec Titel... # + papermill={"duration": 1.68061, "end_time": "2021-09-16T16:19:04.897760", "exception": false, "start_time": "2021-09-16T16:19:03.217150", "status": "completed"} tags=[] count_vec_titel = StemmedCountVectorizer(stop_words=stopword_list, token_pattern=r'\', max_features=50) #ngram_range=(2,2) #) # We first do that on Titel... 
count_vec_titel.fit(df_['titel']) word_counts_in_titel = count_vec_titel.transform(df_['titel']).todense() # + [markdown] papermill={"duration": 0.032147, "end_time": "2021-09-16T16:19:04.963509", "exception": false, "start_time": "2021-09-16T16:19:04.931362", "status": "completed"} tags=[] # ### What are the most occuring words? # + papermill={"duration": 0.057817, "end_time": "2021-09-16T16:19:05.053689", "exception": false, "start_time": "2021-09-16T16:19:04.995872", "status": "completed"} tags=[] df_titel = pd.DataFrame(word_counts_in_titel, columns=count_vec_titel.get_feature_names()) df_titel_trans = pd.DataFrame(df_titel.T.sum(axis=1), columns=['count']) df_titel_trans.sort_values(by='count', ascending=False).head(50) # + [markdown] papermill={"duration": 0.033735, "end_time": "2021-09-16T16:19:05.473716", "exception": false, "start_time": "2021-09-16T16:19:05.439981", "status": "completed"} tags=[] # Now count vec the body... # + papermill={"duration": 44.012255, "end_time": "2021-09-16T16:19:49.519822", "exception": false, "start_time": "2021-09-16T16:19:05.507567", "status": "completed"} tags=[] count_vec_body = StemmedCountVectorizer(stop_words=stopword_list, token_pattern=r'\b[a-zA-Z]{2,}\b', max_features=50, #ngram_range=(2,2) ) count_vec_body.fit(df_['text']) word_counts_in_body = count_vec_body.transform(df_['text']).todense() # + papermill={"duration": 0.058799, "end_time": "2021-09-16T16:19:49.612838", "exception": false, "start_time": "2021-09-16T16:19:49.554039", "status": "completed"} tags=[] df_body = pd.DataFrame(word_counts_in_body, columns=count_vec_body.get_feature_names()) df_body_trans = pd.DataFrame(df_body.T.sum(axis=1), columns=['count']) df_body_trans.sort_values(by='count', ascending=False).head(20) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lab 02 : Training with epochs -- demo import sys, os import torch # ### Let's make an artificial training set of 10 images (28x28 pixels) train_data = torch.rand(10,28,28) print(train_data.size()) # ### Let's define a random order in which we are going to visit these images shuffled_indices = torch.randperm(10) print(shuffled_indices) # ### Visit the training set in this random order and do minibatch of size 2 # + bs = 2 for count in range(0, 10, bs): batch_of_indices = shuffled_indices[count:count+bs] print(batch_of_indices) batch_of_images = train_data[batch_of_indices] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="JrWT7652JGFI" # This is the Part-12 of the Deep Reinforcement Learning Notebook series. In this Notebook I have introduced Rainbow Algorithm. # + [markdown] id="q6roxCNJJKX0" # # The Notebook series is about Deep RL algorithms so it excludes all other techniques that can be used to learn functions in reinforcement learning and also the Notebook Series is not exhaustive i.e. it contains the most widely used Deep RL algorithms only. # + [markdown] id="v-P4isz3JhCW" # ##What is Rainbow Algorithm? # + [markdown] id="2LAQht6uJl38" # Rainbow is a DQN based off-policy deep reinforcement learning algorithm with six extensions of DQN's that each have addressed a limitation and improved overall performance. 
# + [markdown] id="h9vw0gp9GG8O" # ##Extensions Used # # 1) Double Q-learning # # => Double Q-learning address problem of overestimation of Q-values by neural network. # # 2) Prioritized replay # # => Prioritized replay replaces the randomly sampling process in DQN by efficiently sampling the transitions from which there is much to learn. # # 3) Dueling networks # # => The dueling network is a neural network architecture designed for value based RL. It features two streams of computation, the value and advantage streams, sharing a convolutional encoder, and merged by a special aggregator # # You can read more about Double Q-learning, Prioritized replay, Dueling networks at https://github.com/Rahul-Choudhary-3614/Deep-Reinforcement-Learning-Notebooks/blob/master/Deep_Reinforcement_Learning_Part_5_.ipynb # # 4) Multi-step learning # # => When calculating the target value in Q-Learning, the target value is based on only the current reward. For N-step Q-Learning, rewards from N steps are added together and the Q function value is added only at the very end. # # # 5) Distributional RL # # => We can learn to approximate the distribution of returns instead of the expected return. # # 6) Noisy Nets # # => Noisy Nets are way to improve exploration in the environment by adding noise to network parameters # # You can read more about Noisy networks at https://github.com/Rahul-Choudhary-3614/Deep-Reinforcement-Learning-Notebooks/blob/master/Deep_Reinforcement_Learning_Part_11_.ipynb # # # # # Full Rainbow paper => [https://arxiv.org/pdf/1710.02298.pdf] # + [markdown] id="a3tZRkrroXRr" # Below code loads the required library and 2 components for our algorithm i.e. Priority Experience Replay memory buffer and Noisy Dense Layer # + id="fwTDTPXzldKO" import tensorflow as tf from tensorflow.keras.layers import Dense,Dropout,Conv2D,Flatten,MaxPooling2D,Activation import numpy as np import gym import pickle import random import imageio import os from collections import deque from tensorflow.python.framework import tensor_shape import cv2 import matplotlib import matplotlib.pyplot as plt from skimage.transform import resize # %matplotlib inline from PriorityExperienceReplay import Memory from noisy_nets import noisy_dense # + [markdown] id="mdgqNweooU4X" # This part ensures the reproducibility of the code below by using a random seed and setups the environment. 
# + id="uwtjJq6Lld-5" RANDOM_SEED=1 # random seed (reproduciblity) np.random.seed(RANDOM_SEED) tf.random.set_seed(RANDOM_SEED) # set the env env_name = "Bowling-v0" env = gym.make(env_name) env.seed(RANDOM_SEED) env.reset(); # + [markdown] id="E0wEhmFvpDwW" # This parts initializes parameter necessary to implement Distributional RL # # + id="u9Lk3ilapUr6" N_atoms = 51 V_Max = 20.0 V_Min = 0.0 Delta_z = (V_Max - V_Min)/(N_atoms - 1) z_list = tf.constant([V_Min + i * Delta_z for i in range(N_atoms)],dtype=tf.float32) z_list_broadcasted = tf.tile(tf.reshape(z_list,[1,N_atoms]), tf.constant([action_shape,1])) # + [markdown] id="jAhcQuIPpY7f" # This parts initializes necessary hyper-parameters for our algorithm # + id="Uag9enVVlhRE" observing_episodes = 4 #No of observations before updating the training network observing_episodes_target_model = 10000 #No of observations before updating the target network learning_rate = 3e-3 # learning rate epsilon_initial_value = 1.0 # initial value of epsilon epsilon_current_value = 1.0# current value of epsilon epsilon_final_value = 0.001 # final value of epsilon batch_size = 256 gamma = 0.99 # decay rate of past observations state_shape = (76, 160, 4) # the state space action_shape = env.action_space.n # the action space memory_size = 10000 # + [markdown] id="WBL8fzaupnF1" # Creating a Rainbow Algorithm Model Class. Here we have used 2 noisy dense layers as final layers of the model # + id="ksguratzloGE" class _rainbow_model(tf.keras.Model): def __init__(self,state_shape,action_shape,N_atoms): super(_rainbow_model,self).__init__() self.layer_1 = Conv2D(32,5,strides=2,input_shape=(state_shape)) self.layer_2 = MaxPooling2D(pool_size=(2,2)) self.layer_3 = Activation('relu') self.layer_4 = Conv2D(64,3) self.layer_5 = MaxPooling2D(pool_size=(2,2)) self.layer_6 = Activation('relu') self.layer_7 = Conv2D(128,3) self.layer_8 = MaxPooling2D(pool_size=(2,2)) self.layer_9 = Activation('relu') self.layer_10 = Flatten() self.layer_11 = noisy_dense(64) self.layer_12 = Activation('relu') self.layer_13 = noisy_dense(action_shape*N_atoms) def call(self,x): x = self.layer_3(self.layer_2(self.layer_1(x))) x = self.layer_6(self.layer_5(self.layer_4(x))) x = self.layer_9(self.layer_8(self.layer_7(x))) x = self.layer_12(self.layer_11(self.layer_10(x))) x = self.layer_13(x) return x # + [markdown] id="EaNtncrtp_T3" # Defining the Rainbow Class. You can see that I have commented out few things like temperature_parameter .You can uncomment them if you can want to use Boltzmann policy.In this epsilon greedy works good. So I have used that. 
# + id="UDWR4EiNmDLA" class Rainbow(): def __init__(self,env,memory_size,path_1=None,path_2=None): self.env = env self.memory = Memory(memory_size) self.learning_rate = learning_rate self.state_shape = state_shape self.action_shape = action_shape self.epsilon_current_value = epsilon_current_value self.epsilon_initial_value = epsilon_initial_value self.epsilon_final_value = epsilon_final_value self.observing_episodes = 10 self.observing_episodes_target_model = 200 #self.temperature_parameter_initial_value=5.0 # initial value of epsilon #self.temperature_parameter_current_value=5.0# current value of epsilon #self.temperature_parameter_final_value=1.0 # final value of epsilon self.gamma = gamma self.batch_size = batch_size self._num_step = 2000 if not path_1: self.target_model = _rainbow_model(self.state_shape,self.action_shape,N_atoms) #Target Model is model used to calculate target values self.training_model = _rainbow_model(self.state_shape,self.action_shape,N_atoms) #Training Model is model to predict q-values to be used. else: self.training_model=load_model(path_1) self.target_model=load_model(path_2) def get_frame(self,frame): frame=cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY) frame=frame[100:-34,:] frame=frame/255. return frame # + [markdown] id="epsYKbPWqSpp" # The model output is first passed through softmax layer and clipped to get distributional q values which are some algebraic manipulation are converted to q values. # # This is the Distributional RL part of the Rainbow algorithm # + id="Vri5QwwPmHS0" def get_q_values(self,model_output): q_distributional = tf.reshape(model_output, [-1, self.action_shape, N_atoms]) q_distributional = tf.nn.softmax(q_distributional, axis = 2) q_distributional = tf.clip_by_value(q_distributional, 1e-8, 1.0-1e-8) q_values = tf.multiply(q_distributional, z_list_broadcasted) q_values = tf.reduce_sum(q_values, axis=2) return q_distributional,q_values # + [markdown] id="GqIo-kQkqP2O" # Action Selection # # The get_action method guides out action choice. Initially, when training begins we use exploration policy but later we do exploitation. # # You can uncomment the commented lines to use Boltzmann exploration policy instead of epsilon greedy policy. # + id="6t2gqshamJId" def get_action(self, state,status='Training'): '''samples the next action based on the E-greedy policy''' if status=="Evaluating": _ , q_values = self.get_q_values(self.training_model(state)) action = np.argmax(q_values) return action if random.random() < self.epsilon_current_value: #Exlporation #_ , q_values = self.get_q_values(self.training_model.predict(state)) #q_values=(q_values[0])**(1/self.temperature_parameter_current_value) #This is the step where we use Boltzmann exploration policy #top_actions=q_values.argsort()[-self.nma:][::-1] #action=random.choice(top_actions) action = random.choice(list(range(self.action_shape))) else: _ , q_values = self.get_q_values(self.training_model(state)) #Exploitation action = np.argmax(q_values) return action # + [markdown] id="oy6WYLXqrL8M" # This function is used to get the overall loss of the alorithm. 
To fully understand this you are suggested to read the paper ([https://arxiv.org/pdf/1710.02298.pdf]) # + id="V_clw-rDrLt3" def get_loss(self,states_mb,dones_mb,rewards_mb,preds_mb,actions_mb,ISWeights_mb): states_mb = tf.cast(states_mb,tf.float32) dones_mb = tf.cast(dones_mb,tf.float32) rewards_mb = tf.cast(rewards_mb,tf.float32) preds_mb = tf.cast(preds_mb,tf.float32) actions_mb = tf.cast(actions_mb,tf.float32) ISWeights_mb = tf.cast(ISWeights_mb,tf.float32) Q_distributional_values_target,_ = self.get_q_values((self.target_model(states_mb))) Q_distributional_values_target = tf.cast(Q_distributional_values_target,tf.float32) tmp_batch_size = tf.shape(Q_distributional_values_target)[0] tmp_batch_size = tf.cast(tmp_batch_size,tf.float32) preds_mb = tf.convert_to_tensor(np.asarray(np.array(list(enumerate(preds_mb)),dtype=object).astype('int32'))) Q_distributional_chosen_by_action_target = tf.gather_nd(Q_distributional_values_target,preds_mb) target = tf.tile(tf.reshape(rewards_mb,[-1, 1]), tf.constant([1, N_atoms])) + (self.gamma**self._num_step) * tf.multiply(tf.reshape(z_list,[1,N_atoms]),(1.0 - tf.tile(tf.reshape(dones_mb ,[-1, 1]), tf.constant([1, N_atoms])))) target = tf.cast(target,tf.float32) target = tf.clip_by_value(target, V_Min, V_Max) b = (target - V_Min) / Delta_z u, l = tf.math.ceil(b), tf.math.floor(b) u_id, l_id = tf.cast(u, tf.int32), tf.cast(l, tf.int32) u_minus_b, b_minus_l = u - b, b - l Q_distributional_values_online,_ = self.get_q_values((self.training_model(states_mb))) Q_distributional_values_online = tf.cast(Q_distributional_values_online,tf.float32) actions_mb = tf.convert_to_tensor(np.asarray(np.array(list(enumerate(actions_mb)),dtype=object).astype('int32'))) Q_distributional_chosen_by_action_online = tf.gather_nd(Q_distributional_values_online, actions_mb) index_help = tf.tile(tf.reshape(tf.range(tmp_batch_size),[-1, 1]), tf.constant([1, N_atoms])) index_help = tf.expand_dims(index_help, -1) index_help = tf.cast(index_help,tf.int32) u_id = tf.cast(u_id,tf.int32) l_id = tf.cast(l_id,tf.int32) u_id = tf.concat([index_help, tf.expand_dims(u_id, -1)], axis=2) l_id = tf.concat([index_help, tf.expand_dims(l_id, -1)], axis=2) error = Q_distributional_chosen_by_action_target * u_minus_b * tf.math.log(tf.gather_nd(Q_distributional_chosen_by_action_online, l_id)) + Q_distributional_chosen_by_action_target * b_minus_l * tf.math.log(tf.gather_nd(Q_distributional_chosen_by_action_online, u_id)) error = tf.reduce_sum(error, axis=1) loss = tf.negative(error * ISWeights_mb) error_op = tf.abs(error) return error_op # + [markdown] id="F-2TUvs3q_cN" # Updating the model # # The update_training_model method updates the training model weights. # # The update_target_model method updates the target model weights. 
# + id="rEwaOxyamLeW" def update_training_model(self): tree_idx, batch, ISWeights_mb = self.memory.sample(self.batch_size) states_mb = np.zeros((self.batch_size,*self.state_shape)) dones_mb = np.zeros((self.batch_size,1)) rewards_mb = np.zeros((self.batch_size,1)) preds_mb = np.zeros((self.batch_size,1)) actions_mb = np.zeros((self.batch_size,1)) for i in range(self.batch_size): states_mb[i] = batch[i][0][0] _ ,q_values = self.get_q_values(self.training_model(batch[i][0][0])) preds_mb[i] = np.argmax(q_values) actions_mb[i] = batch[i][0][1] rewards_mb[i] = (batch[i][0][2]) dones_mb[i] = batch[i][0][3] optimizer = tf.keras.optimizers.Adam(learning_rate=self.learning_rate) def train_step(states_mb,dones_mb,rewards_mb,preds_mb,actions_mb,ISWeights_mb): with tf.GradientTape() as tape: loss = self.get_loss(states_mb,dones_mb,rewards_mb,preds_mb,actions_mb,ISWeights_mb) grads = tape.gradient(loss,self.training_model.trainable_variables) optimizer.apply_gradients(zip(grads, self.training_model.trainable_variables)) return loss abs_error = train_step(states_mb,dones_mb,rewards_mb,preds_mb,actions_mb,ISWeights_mb) # Update priority abs_error=abs_error/(np.max(abs_error)+1e-30) self.memory.batch_update(tree_idx, abs_error) def update_target_model(self): self.target_model.set_weights(self.training_model.get_weights()) # + [markdown] id="mP3XlUOUsHsM" # This function is used to evaluate the algorithm during training of the algorithm and record the results in video format # # + id="IdRBbpRPmXlp" def evaluate(self,ep,no_of_testing_episodes=20): Average_Reward=[] for episode in range(no_of_testing_episodes): writer = imageio.get_writer("Evaluating_video_{}_{}.mp4".format(ep,episode), fps=20) env = (gym.make("Bowling-v0")) state = env.reset() writer.append_data(state) state_ = self.get_frame(state) stacked_frames = np.stack((state_,state_,state_,state_),axis=2) stacked_frames = stacked_frames.reshape(1,stacked_frames.shape[0],stacked_frames.shape[1],stacked_frames.shape[2]) done=False episode_reward=0 while not done: action=self.get_action(stacked_frames,"Evaluating") next_state, reward, done, info=env.step(action) writer.append_data(next_state) next_state_ = self.get_frame(next_state) next_state_ = next_state_.reshape(1,next_state_.shape[0],next_state_.shape[1],1) stacked_frames = np.append(next_state_, stacked_frames[:, :, :, :3], axis=3) episode_reward+=reward Average_Reward.append(episode_reward) print("Evaluating_Episode:{} Reward:{} Average_Reward:{}".format(episode,episode_reward,sum(Average_Reward)/len(Average_Reward))) writer.close() # + [markdown] id="4Rv3Lg4TtTxW" # Training the model # # This method creates a training environment for the model. Iterating through a set number of episodes, it uses the model to sample actions and play them. When such a timestep ends, the model is using the observations to update the policy. # # We know that in a dynamic game we cannot predict action based on 1 observation(which is 1 frame of the game in this case) so we will use a stack of 4 frames to predict the output. 
# + id="M1zvFiYLl-QN" def train(self,no_of_episodes): self.Average_rewards = [] for episode in range(no_of_episodes): state = env.reset() state_ = self.get_frame(state) stacked_frames = np.stack((state_,state_,state_,state_),axis=2) stacked_frames = stacked_frames.reshape(1,stacked_frames.shape[0],stacked_frames.shape[1],stacked_frames.shape[2]) done = False episode_reward = 0 while not done: action = self.get_action(stacked_frames) next_state, reward, done, info = env.step(action) next_state = self.get_frame(next_state) next_state_ = next_state.reshape(1,next_state.shape[0],next_state.shape[1],1) next_state_ = np.append(next_state_, stacked_frames[:, :, :, :3], axis=3) experience = stacked_frames, action, reward, 1*done episode_reward+=reward self.memory.store(experience) stacked_frames = next_state_ if episode%self.observing_episodes==0 and episode!=0: self.update_training_model() if episode%self.observing_episodes_target_model==0 and episode!=0: self.update_target_model() self.Average_rewards.append(episode_reward) avg_reward = np.mean(self.Average_rewards[-40:]) if self.epsilon_current_value > self.epsilon_final_value: self.epsilon_current_value=self.epsilon_current_value-(self.epsilon_initial_value-self.epsilon_final_value)/1000.0 print("Episode:{} Average Reward:{} Reward:{} Epsilon:{}".format(episode,avg_reward,episode_reward,self.epsilon_current_value)) if episode%500==0 and episode!=0: self.evaluate(episode,1) weights = self.training_model.get_weights() with open("training_model_{}.txt".format(episode), "wb") as fp: pickle.dump(weights, fp) #if self.temperature_parameter_current_value > self.temperature_parameter_final_value: #self.temperature_parameter_current_value=self.temperature_parameter_current_value-(self.temperature_parameter_initial_value-self.temperature_parameter_final_value)/1000.0 # + id="GbEBOhadmhnk" no_of_episodes=1001 Agent = Rainbow(env,memory_size) Agent.train(no_of_episodes) # + [markdown] id="2zJgYAEgtjS0" # With the help of below code we run our algorithm and see the success of it. With the help of below code we run our algorithm and see the success of it. # + id="VzFXfsG0uJyi" def get_action(self, state,status='Training'): '''samples the next action based on the E-greedy policy''' if status=="testing": _ , q_values = self.get_q_values(self.training_model(state)) action = np.argmax(q_values) return action if random.random() < self.epsilon_current_value: #Exlporation #_ , q_values = self.get_q_values(self.training_model.predict(state)) #q_values=(q_values[0])**(1/self.temperature_parameter_current_value) #This is the step where we use Boltzmann exploration policy #top_actions=q_values.argsort()[-self.nma:][::-1] #action=random.choice(top_actions) action = random.choice(list(range(self.action_shape))) else: _ , q_values = self.get_q_values(self.training_model(state)) #Exploitation action = np.argmax(q_values) return action # + [markdown] id="ft23LLgzvhPS" # With the help of below code we run our algorithm and see the success of it. With the help of below code we run our algorithm and see the success of it. 
# + id="FEctXzUQtjr0" class tester: def __init__(self,path): self.model = _rainbow_model(state_shape,action_shape,N_atoms) with open(path, "rb") as fp: weights = pickle.load(fp) self.model(np.zeros(*state_shape)); self.model.set_weights(weights) def get_action(self, state): '''samples the next action based on the E-greedy policy''' _ , q_values = self.get_q_values(self.training_model(state)) action = np.argmax(q_values) return action def get_frame(self,frame): frame=cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY) frame=frame[100:-34,:] frame=frame/255. return frame # + id="y-MDKihLvgfP" writer = imageio.get_writer("test_video.mp4", fps=20) env = (gym.make("Bowling-v0")) state = env.reset() writer.append_data(state) state_ = test.get_frame(state) stacked_frames = np.stack((state_,state_,state_,state_),axis=2) stacked_frames = stacked_frames.reshape(1,stacked_frames.shape[0],stacked_frames.shape[1],stacked_frames.shape[2]) episode_reward=0 while True: action = test.get_action(stacked_frames) next_state, reward, done, info=env.step(action) writer.append_data(next_state) next_state_=test.get_frame(next_state) next_state_ = next_state_.reshape(1,next_state_.shape[0],next_state_.shape[1],1) stacked_frames = np.append(next_state_, stacked_frames[:, :, :, :3], axis=3) episode_reward+=reward if done: break env.close() writer.close() print("Testing_Episode Reward:{}".format(episode_reward) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Source: [Tutorial](https://worldbank.github.io/OpenNightLights/tutorials/mod6_3_intro_to_sentinel2.html) # # [sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) # # [(GEE) collection’s landing pages](https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2) -- more details # # #### Intro # # ##### Sentinel-2 satellites # * hi-resolution imagery provided by the MultiSpectral Instrument (MSI), useful for land use monitoring # * * part of European Space Agency’s (ESA) Copernicus system # # * several image bands across the *optical* electromagnetic spectrum, useful for classifying land use, particular that of built-up areas. # # * * For visualization: just RGB channels # reminder that if you are installing libraries in a Google Colab instance you will be prompted to restart your kernal import geemap, ee import os # + ee.Initialize() # get our admin boundary aoi = ee.FeatureCollection("FAO/GAUL/2015/level0").filter(ee.Filter.eq('ADM0_NAME','Ireland')).geometry() # Sentinel-2 image filtered on 2019 and on Ireland se2 = ee.ImageCollection('COPERNICUS/S2').filterDate("2019-01-01","2019-12-31").filterBounds(aoi).median().divide(10000) # channels rgb = ['B4','B3','B2'] # set some thresholds rgbViz = {"min":0.0, "max":0.3,"bands":rgb} # + # initialize our map map1 = geemap.Map() map1.centerObject(aoi, 7) map1.addLayer(se2.clip(aoi), rgbViz, "S2") map1.addLayerControl() map1 # display in notebook # - # ![Ireland - cloud cover](./img/IRL_clouds.png) # Issue: It appears we’ve also captured some clouds. # # Solution: We will make a cloud mask to clear the image up using Sentinel-2’s QA band. 
We’re modeling this (in Python) from the [example used in GEE](https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2#bands) # * Name: `QA60` # * * Description: Cloud Mask def se2mask(image): quality_band = image.select('QA60') # using the bit mask for clouds, incl cirrus clouds cloudmask = 1 << 10 cirrusmask = 1 << 11 # only clear skies mask = quality_band.bitwiseAnd(cloudmask).eq(0) and (quality_band.bitwiseAnd(cirrusmask).eq(0)) # divide by 10000 to make interpreting reflectance values easier return image.updateMask(mask).divide(10000) # + se2 = ee.ImageCollection('COPERNICUS/S2').filterDate( "2019-01-01","2019-12-31").filterBounds(aoi).filter( ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE",20)).map(se2mask).median() # initialize our map map2 = geemap.Map() map2.centerObject(aoi, 7) map2.addLayer(se2.clip(aoi), rgbViz, "S2") map2.addLayerControl() map2 # - # ![Ireland - cloud free](./img/IRL_cloud-free.png) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Graded = 7/8 # NYT # # All API's: http://developer.nytimes.com/ # Article search API: http://developer.nytimes.com/article_search_v2.json # Best-seller API: http://developer.nytimes.com/books_api.json#/Documentation # Test/build queries: http://developer.nytimes.com/ # # Tip: Remember to include your API key in all requests! And their interactive web thing is pretty bad. You'll need to register for the API key. # # + import config import requests #imports key from config file nyt_articles_api = config.nyt_articles_api nyt_books_api = config.nyt_books_api nyt_movie_api = config.nyt_movie_api response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api) data = response.json() # print(data) # - # 1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day? # + published = ""; # response = requests.get('https://api.nytimes.com/svc/books/v3/lists//.json?api-key=' + nyt_books_api + "&list-name=hardcover-fiction&published-date=2009-10-05") #mother's day 2009 - 10 - 05 # mother's day 2010 2010-09-05 # father's day 2009 -21-06 # father day 2010 - 20 -06 dates = ['2009-05-10', '2010-05-09', '2009-06-21', '2010-06-20'] for date in dates: response = requests.get('https://api.nytimes.com/svc/books/v3/lists//.json?api-key=' + nyt_books_api + "&list-name=hardcover-fiction&published-date=" + date) bestseller_data = response.json() bestseller_data['results'] results = bestseller_data['results'][0] # print(type(results)) print("The best selling book on", date, "was", results['book_details'][0]['title']) # print(bestseller_data) #print(results['book_details']) # - # # 2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015? 
# # + response = requests.get('https://api.nytimes.com/svc/books/v3/lists/names.json?api-key=' + nyt_books_api) bestseller_ldata = response.json() bestseller_ldata['results'] # print(bestseller_ldata['results'][0]) #The lists print("On June, 6th, 2009 the NYT published the following bestsellers lists:") for book in bestseller_ldata['results']: if book['oldest_published_date'] < '2009-06-06' and book['newest_published_date'] >= '2009-06-06': print(book['display_name']) else: pass print("\nOn June, 6th, 2015 the NYT published the following bestsellers lists:") for book in bestseller_ldata['results']: if book['oldest_published_date'] < '2015-06-06' and book['newest_published_date'] >= '2015-06-06': print(book['display_name']) else: pass # print("Too young") # for book in bestseller_ldata: # - # 3) 's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names? # # Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy. # # + ppl = ['Gaddafi','Gadafi', 'Kadafi','Qaddafi'] for person in ppl: # fq yields a lot more results than just q need to figure out difference b/w hits and times # response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api + '&fq=' + person + ' Libya') response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api + '&q=' + person + ' Libya') muammar_data = response.json() print("Muammar was referred to as ", person, muammar_data['response']['meta']['hits'], "times in the New York Times.") # print(muammar_data) # print(muammar_data['response']['docs']) # print(muammar_data['response']['docs'][0]['keywords']) # print(muammar_data['response']['docs'][0]) # keywords = [] # ppl = ['Gaddafi','Gadafi', 'Kadafi','Qaddafi'] #for article in muammar_data['response']['docs']: # for keyword in article['keywords']: # print(keyword['value']) # for person in ppl: #print(x) # if person in keyword: # print("print", keyword['value'], "was found") # print(keyword['value']) # keywords.append(keyword['value']) #from collections import Counter #counts = Counter(keywords) #print(counts) # - #for article in muammar_data['response']['docs']: # print(article["keywords"]) # + # len(muammar_data['response']['docs']) # - # # # 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph? # # + response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api + '&q=hipster&begin_date=19950101&end_date=19951231&sort=oldest') hipster_data = response.json() # print(hipster_data['response']['docs']) hippie = hipster_data['response']['docs'] print("The first story to mention the word 'hipster' in 1995 was titled", hippie[0]['headline']['kicker'] + "; " + hippie[0]['headline']['main']) # - # # # 5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present? # # Tip: You'll want to put quotes around the search term so it isn't just looking for "gay" and "marriage" in the same article. # # Tip: Write code to find the number of mentions between Jan 1, 1950 and Dec 31, 1959. # + #Ta-Stephan: Beause you added to the start and end date early, the 1950s weren't counted. 
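# One way to address the grader's note is to build the decade ranges explicitly
# before looping, so the first request really covers 1950-1959. This is only a
# sketch reusing the same Article Search endpoint, with requests' params=
# handling the quoting; 20160609 mirrors the final end date used in the original
# loop, which follows below.
decades = [(19500101, 19591231), (19600101, 19691231), (19700101, 19791231),
           (19800101, 19891231), (19900101, 19991231), (20000101, 20091231),
           (20100101, 20160609)]
for begin, end in decades:
    params = {'api-key': nyt_articles_api, 'q': '"gay marriage"',
              'begin_date': str(begin), 'end_date': str(end)}
    r = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json', params=params)
    hits = r.json()['response']['meta']['hits']
    print("There were", hits, "mentions of gay marriage between", str(begin)[:4], "and", str(end)[:4])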
start_date = 19500101 end_date = 19591231 for n in [1,2,3,4,5,6]: if (n <= 5): start_date = start_date + 100000 end_date = end_date + 100000 else: start_date = start_date + 100000 end_date = 20160609 response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api + '&q="\"gay marriage\""&begin_date=' + str(start_date) + '&end_date=' + str(end_date) + '&sort=oldest') gay_marriage_data = response.json() gay_marriage_hits = gay_marriage_data['response']['meta']['hits'] start_str = str(start_date) start_str = start_str[:4] end_str = str(end_date) end_str = end_str[:4] print("There were", gay_marriage_hits, "mentions of gay marriage between", start_str, "and", end_str) # - # # 6) What section talks about motorcycles the most? # # Tip: You'll be using facets # # # # # + response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?api-key=' + nyt_articles_api + '&q=motorcycle&facet_field=section_name') moto_data = response.json() # print(moto_data['response']['facets']['section_name']['terms']) # documentation found re: facets in NYT API # https://data-gov.tw.rpi.edu/wiki/How_to_use_New_York_Times_Article_Search_API moto_sections = moto_data['response']['facets']['section_name']['terms'] moto_count = 0 most_motos = "" for section in moto_sections: if section['count'] > moto_count: moto_count = section['count'] most_motos = section['term'] print("The section of the New York Times that mentions motorcycles the most is the", most_motos, "section which mentions motorcycles", moto_count, "times.") # - # 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60? # # Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them. # # # + criticPickCount = 0 for offset in [0,1,2,3]: offset = offset * 20 # print(offset) response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=' + nyt_movie_api + '&offset=' + str(offset)) movie_data = response.json() # print(movie_data) # print(movie_data['results']) for movie in movie_data['results']: if movie['critics_pick'] == 1: # print(movie['display_title']) criticPickCount = criticPickCount + 1 if offset == 0: print("There were", criticPickCount, "Critic' Picks in the last 20 movies that were reviewed.") if offset == 20: print("There were", criticPickCount, "Critic' Picks in the last 40 movies that were reviewed.") if offset == 40: print("There were", criticPickCount, "Critic' Picks in the last 60 movies that were reviewd.") if offset == 60: print("There were", criticPickCount, "Critic' Picks in the last 80 movies that were reviewed.") # print("There were", criticPickCount, "Critic' Picks.") # - # 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews? 
# + for offset in [0,1,2]: offset = offset * 20 # print(offset) response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=' + nyt_movie_api + '&offset=' + str(offset)) movie_data = response.json() # print(movie_data) criticPickCount = 0 authors = [] # print(movie_data['results']) #the critics name is stored in the byline for movie in movie_data['results']: authors.append(movie['byline']) # print(movie['byline']) from collections import Counter counts = Counter(authors) # print(counts) print(Counter(authors).most_common(1) , 'has written the most reviews out of the last 40 NYT reviews.') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # More advanced PDF practice # # This example shows how to parse a slightly more difficult PDF using our good friend [`pdfplumber`](https://github.com/jsvine/pdfplumber). # # For this exercise, we'll pull the data out of a PDF that has a fixed-width table of Colorado county-level voter registration data from April 2008. That file lives here: `../pdfs/apr08_party.pdf`. # # We'll need to use a text-based strategy and [explicit vertical lines](https://github.com/jsvine/pdfplumber#table-extraction-settings) in the table extraction settings, and we'll make _liberal_ use of [list comprehensions](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions), my favorite thing in Python. # # 👉 For more details on using list comprehensions, [see this notebook](../reference/Python%20data%20types%20and%20basic%20syntax.ipynb#List-comprehensions). # # First, let's do our imports: import pdfplumber import pandas as pd # Now let's open the PDF and dive in. # open the PDF with the pdfplumber `open` function with pdfplumber.open('../pdfs/apr08_party.pdf') as pdf: # the table settings I came up with after fiddling for a bit table_settings = { 'vertical_strategy': 'text', 'horizontal_strategy': 'text', 'explicit_vertical_lines': [95, 205, 245, 290] } # extract the table from the page table = pdf.pages[0].extract_table(table_settings=table_settings) # ~ lots of fiddling at this point to see what the results looked like ~ # use a list comprehension to grab the headers (which are in row 4) # and tack on a conditional to remove blank items # https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions headers = [x for x in table[4] if x] # remove the useless 'PREC' item from the headers headers.remove('PREC') # create an empty dataframe df = pd.DataFrame(columns=headers) # loop over the table, slicing so that we start with row 6 and leave off some cruft at the bottom for row in table[6:-2]: # clean up the row a little -- remove blanks and kill out commas data = [x.replace(',', '') for x in row if x] # use zip() and dict() to marry the data and headers and then create a dictionary # https://docs.python.org/3/library/functions.html#func-dict # https://docs.python.org/3/library/functions.html#zip d = dict(zip(headers, data)) # append the dict to the dataframe df = df.append(d, ignore_index=True) # Where'd we land? 
df.sort_values('COUNTY NAME') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np import matplotlib.pyplot as plt from datetime import datetime from datetime import timedelta from pandas.plotting import register_matplotlib_converters from statsmodels.tsa.stattools import acf, pacf from statsmodels.tsa.arima_model import ARMA register_matplotlib_converters() from time import time # # Catfish Sales Data def parser(s): return datetime.strptime(s, '%Y-%m-%d') #read data catfish_sales = pd.read_csv('catfish.csv', parse_dates=[0], index_col=0, squeeze=True, date_parser=parser) #infer the frequency of the data catfish_sales = catfish_sales.asfreq(pd.infer_freq(catfish_sales.index)) start_date = datetime(2000,1,1) end_date = datetime(2004,1,1) lim_catfish_sales = catfish_sales[start_date:end_date] plt.figure(figsize=(10,4)) plt.plot(lim_catfish_sales) plt.title('Catfish Sales in 1000s of Pounds', fontsize=20) plt.ylabel('Sales', fontsize=16) for year in range(start_date.year,end_date.year): plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2) plt.axhline(lim_catfish_sales.mean(), color='r', alpha=0.2, linestyle='--') first_diff = lim_catfish_sales.diff()[1:] plt.figure(figsize=(10,4)) plt.plot(first_diff) plt.title('First Difference of Catfish Sales', fontsize=20) plt.ylabel('Sales', fontsize=16) for year in range(start_date.year,end_date.year): plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2) plt.axhline(first_diff.mean(), color='r', alpha=0.2, linestyle='--') # # ACF acf_vals = acf(first_diff) plt.bar(range(num_lags), acf_vals[:num_lags]) # ## Based on ACF, we should start with a MA(1) process # # PACF pacf_vals = pacf(first_diff) plt.bar(range(num_lags), pacf_vals[:num_lags]) # ## Based on PACF, we should start with a AR(4) process # # Get training and testing sets # + train_end = datetime(2003,7,1) test_end = datetime(2004,1,1) train_data = first_diff[:train_end] test_data = first_diff[train_end + timedelta(days=1):test_end] # - # # Fit the ARMA Model # define model model = ARMA(train_data, order=(4,1)) #fit the model start = time() model_fit = model.fit() end = time() print('Model Fitting Time:', end - start) #summary of the model print(model_fit.summary()) # ## So the ARMA(4,1) model is: # # ## $\hat{y_t} = -0.87y_{t-1} - 0.42y_{t-2} - 0.56y_{t-3} - 0.61y_{t-4} + 0.52\varepsilon_{t-1}$ #get prediction start and end dates pred_start_date = test_data.index[0] pred_end_date = test_data.index[-1] #get the predictions and residuals predictions = model_fit.predict(start=pred_start_date, end=pred_end_date) residuals = test_data - predictions plt.figure(figsize=(10,4)) plt.plot(residuals) plt.title('Residuals from AR Model', fontsize=20) plt.ylabel('Error', fontsize=16) plt.axhline(0, color='r', linestyle='--', alpha=0.2) # + plt.figure(figsize=(10,4)) plt.plot(test_data) plt.plot(predictions) plt.legend(('Data', 'Predictions'), fontsize=16) plt.title('First Difference of Catfish Sales', fontsize=20) plt.ylabel('Sales', fontsize=16) # - print('Root Mean Squared Error:', np.sqrt(np.mean(residuals**2))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + 
[markdown] pycharm={"metadata": false} # # Chapter 5 - Image Filtering # # In this chapter, we're introducing the concept of Image Filtering. # Filters can be applied on 2D Image data either for various applications. We can broadly differenciate low-pass filters smooth images # (retrain low-frequenciy components) and high-pass filters (retain contours / edges, e.g. high frequencies). # # + [markdown] pycharm={"metadata": false} # ## Low-pass filters # Low pass filters are typically applied to reduce noise in images. Noise can be seen as random artifacts in an image. # For example, salt & pepper noise describes the random occurrence of black / white pixels in the image, while # gaussian noise is a random increase/decrease in each pixel’s color value, following a gaussian distribution. # # ![Different noise types](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/1_different_noise_types.png) # *Figure 1: Different noise types. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # # + [markdown] pycharm={"metadata": false} # Low-pass filters assume that the pixel variation in the image should be lower than perceived, e.g. pixels should have a # color value close to their neighbours. Low-pass filters therefore replace each pixels value with an average of the values in # the pixels neighbourhood. The neighbours can either be weighted based on their distance to the center pixel, or equally. # # + [markdown] pycharm={"metadata": false} # Moving a filter over all possible positions in an image is called a *convolution*, the filter is called a *Kernel* or *Mask* # and denoted *H*. # When convoluting a filter over an image, we flip the kernel by 180° before performing at each position before computing the weighted # average between the filter values and the pixel. If we do not flip the kernel, we speak of a cross-correlation instead of a convolution. # For symmetric filters like "Gaussian Filter" or "Median Filter", a convolution and a cross-correlation will of course produce # the same results. # # ![Convolution vs. Cross-corelation formula](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/3_convolution_vs_cross-correlation.png) # *Figure 2: Convolution vs. Cross-corelation formula. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # + [markdown] pycharm={"metadata": false} # In the following example, a smoothing averaging filter is applied to a 1D signal, bringing the neighbouring values significantly closer together and reducing outliers. # # ![Filter example, p. 15](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/2_smoothed_signal.png) # *Figure 3: World Coordinates -> Pixel Coordinates. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # + [markdown] pycharm={"metadata": false} # The same principle can be applied to 2D Data like images. In the following example, a 2D Filter with size 1/1 averages a neighbourhood of 9 pixels, overwriting the central pixel with an equally weighted average of all 9 pixels. # Such a filter is called a "box" filter. # # ![Box filter](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/4_before_filtering_image_with_box_filter.png) # *Figure 4: Illustration of a box filter over a black image with a white square. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # # The output is - of course - a blurred version of the very same image. 
# # ![Box filter output](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/5_after_filtering_image_with_box_filter.png) # *Figure 5: Output of a smoothed image using a box-filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # + [markdown] pycharm={"metadata": false} # While the box filter smoothens an image quite well, it produces horizontal and vertical aftifacts since the filter itself has # sharp edges. These artifacts are also called "aliasing" and is caused by the high frequency components of the box filter. # A better way to smooth an image is with a gaussian filter, a filter implementing the 2D gaussian function. # For perfect results, take a large gaussian filters with smooth edges, e.g. low standard derivation that ensures that the outermost # values of the filter are close to 1 while preserving a smooth derivative. # # ![Gaussian filter visualization](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/7_gaussian_filter_comparison.png) # *Figure 6: Gaussian filter visualization. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # # ![Gaussian Filter comparison, p. 26](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/17_gaussian_filter_comparison.png) # *Figure 7: Gaussian Filter comparison. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # + [markdown] pycharm={"metadata": false} # When we apply any filter on an image, the question remains how to deal with the image boundary. Since - in most cases - we don't # want the resulting image to be smaller than the input image, we have to simulate additional boundary pixels. # There are different strategies with varying results, like zero-padding (surrounding black pixels), wrap-around (repeating the image), # copy-edge (always use outermost pixel values) or reflect accross edge (mirroring around edge, gives best results). # # + [markdown] pycharm={"metadata": false} # ### Non-linear Low-pass filters # # Gaussian filters or box-filters do not denoise salt & pepper noise since they get influenced by outliers by a high degree. # That's where **median filters** come into play. They can not be interpreted as a classical convolution filter like a Gaussian # filter, it rather takes the median pixel value from the neighbourhood. The median filter is therefore much less influenced # by strong noise, while he also preserves edges much better than the linear smoothing filters. # # ![Gaussian Filter comparison, p. 26](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/8_median_filtered_image.png) # *Figure 8: Median Filter removing Salt & Pepper Noise. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # + [markdown] pycharm={"metadata": false} # Another such filter is the **billateral filter**. It acts like a median filter with preserving edges even more by adapting the kernel # locally to the intensitiy profile of the underlaying image. They only average pixels with similar brightness: Pixels that fall below # a brightness difference compared to the center pixel. # # ![Bilateral filer with mask, p. 26](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/9_billateral_filter_demonstration.png) # *Figure 9: Bilateral filer with mask. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # # The extend to which neighbouring pixels have to be similar to the central pixel is controlled via a factor *sigma*. 
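# To make the non-linear filtering idea concrete, a short sketch of a 3x3 median filter: isolated salt & pepper outliers are discarded rather than averaged in, and the border is handled with copy-edge padding (one of the strategies above).
# +
import numpy as np

def median_filter_3x3(image):
    padded = np.pad(image, 1, mode='edge')           # copy-edge border handling
    out = np.empty_like(image)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out

noisy = np.full((6, 6), 0.5)
noisy[1, 1], noisy[4, 3] = 1.0, 0.0                  # one salt and one pepper pixel
print(median_filter_3x3(noisy))                      # the outliers disappear
# -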
# # + [markdown] pycharm={"metadata": false} # ## High-pass filters # # High-pass filters are mainly used for edge detection since react to sharp change in pixel intensity. Edges are sharp changes in # an image functions intensity value. Applying the first derivative on an image would leave us with an image where sharp edges # are shown. # # ![Image derivative detecting edges](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/10_image_first_derivative_demonstration.png) # *Figure 10: Image derivative detecting edges. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # # We therefore construct a filter that acts like a derivative by approximating the image derivative # *dI(x,y) / dx ~ I(x+1, y) - I(x,y)* and *dI(x,y) / dy ~ I(x, y+1) - I(x,y)*. # So we essentially compare each pixel to its direct neighbour and take the difference as an output. # # ![Partial derivative filter](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/11_partial_derivative_filters.png) # *Figure 11: Partial derivative filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # + [markdown] pycharm={"metadata": false} # More advanced filters are larger in size and therefore produce less artifacts. The sobel-filter is an example for a larger # derivative filter: # # ![Prewitt & Sobel filter](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/12_prewitt_sobel_filter.png) # *Figure 12: Prewitt & Sobel filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # # The direction of the edge can be determined by calculating the pixel regions gradient, so the diretion of fastest intensity change. # The gradient direction is given by *angle = arctan2(dI/dx, dI/dy)*, so the two dimensional arcus tangens of the image derivative # values. The edge strenght is given by the gradients magnitude: *strength = sqrt((dI/dx)^2 + (dI/dy)^2). # # + [markdown] pycharm={"metadata": false} # A big problem for high-pass filters is gaussian noise: there will always be a steep difference between two neighbouring pixels, caused # by normal gaussian noise produced by the image sensor. It is therefore best practice to softly filter the image first with a # gaussian filter before applying a high-pass filter. # # + [markdown] pycharm={"metadata": false} # In the following graphic, we see the original image I, the kernel H, the resulting image when H is applied I*H as well as the derrived # image d(I*H)/dx # # ![Process steps for edge detection](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/13_individual_processing_steps_for_edge_detection.png) # *Figure 13: Process steps for edge detection. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # # A way better approach is to directly include the smoothing in the filter itself, giving us the filter dH/dx as seen in # the following image: # # ![Gaussian smoothing within a derivative filter](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/14_gaussian_smoothing_and_derivative_filter.png) # *Figure 13: Gaussian smoothing within a derivative filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # + [markdown] pycharm={"metadata": false} # This is called a "derivative of gaussian" filter: it multiplies a normal gaussian filters with a high-pass 2x1 derivative filter. 
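# (The corresponding figure follows below.) As a concrete aside, the edge strength and direction formulas given earlier in this section translate directly into NumPy with simple finite differences:
# +
import numpy as np

def image_gradient(image):
    dI_dx = np.zeros_like(image)
    dI_dy = np.zeros_like(image)
    dI_dx[:, :-1] = image[:, 1:] - image[:, :-1]     # I(x+1, y) - I(x, y)
    dI_dy[:-1, :] = image[1:, :] - image[:-1, :]     # I(x, y+1) - I(x, y)
    strength = np.sqrt(dI_dx**2 + dI_dy**2)          # gradient magnitude
    direction = np.arctan2(dI_dy, dI_dx)             # gradient direction
    return strength, direction

img = np.zeros((5, 5)); img[:, 2:] = 1.0             # vertical step edge
strength, direction = image_gradient(img)
print(strength)                                      # strongest response along the edge
# -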
# # ![Difference of Gaussians filter](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/15_difference_of_gaussians_filter.png) # *Figure 14: Difference of Gaussians filter. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # + [markdown] pycharm={"metadata": false} # Since we deal with two partial derivatives, we'd need to filter the image twice. A solution to this is given by the # "Laplacian of Gaussian" filter, which finds the derivative in all directions simultaneously. It is constructed by # subtracting a smaller-radius Gaussian filter from a larger-radius Gaussian filter. # # ![Laplacian of Gaussian](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/18_laplacian_of_gaussian.png) # *Figure 15: Laplacian of Gaussian. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # + [markdown] pycharm={"metadata": false} # ## Canny edge detection # # Canny edge detection uses partial Gaussian derivative filters to find all edges in an image. It then sets to 0 all pixel values that # fall under a given threshold. Finally, Canny keeps only the local maximum of each edge along the gradient direction (non-maximum suppression), i.e. it only # keeps the peak of a wide edge. # # ![Canny edge detection](https://github.com/joelbarmettlerUZH/PyVisualOdometry/raw/master/img/chapter_5/16_canny_edge_detection.png) # *Figure 16: Canny edge detection. [source](http://rpg.ifi.uzh.ch/docs/teaching/2019/04_filtering.pdf)* # # # - # # Overview summary # # Let me quickly summarize the main differences between smoothing and derivative filters. # # Smoothing filters always contain positive filter values that sum to 1 to preserve the overall brightness of constant regions. They are constructed to remove high-frequency components. # # In contrast, derivative filters have two regions with opposite signs to get a high response in regions of high contrast. Their components sum to 0 so that they produce no response on regions of constant color. They are designed to highlight high-frequency components, not to remove them. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # This notebook seeks to convey a core concept of inferential thinking - given some set of observations about a small sample of a population, attempt to draw robust conclusions about the (unobservable) population. # # Here we create a hypothetical population through simulation. It is based on the historical discussion in the data8 lecture about estimating the size of foreign bomber fleets from the observations of tail markings.
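# Before the Table-based version below, a quick NumPy sketch of the same idea: draw a sample of serial numbers from a hidden population and compare two simple estimators, the sample maximum and twice the sample mean.
# +
import numpy as np

rng = np.random.default_rng(0)
population = np.arange(1, 37 * 55)           # same hidden population size as below

sample = rng.choice(population, size=10, replace=True)
print("estimate A (max):     ", sample.max())
print("estimate B (2 * mean):", 2 * sample.mean())
# -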
# HIDDEN from datascience import * # %matplotlib inline import matplotlib.pyplot as plots import numpy as np plots.style.use('fivethirtyeight') # datascience version number of last run of this notebook version.__version__ # + # The magic number - size of the population that (in the real world) # we don't know and want to estimate def createPopulation(): def serNo(x): return "{:05d}".format(x) p = Table([np.arange(1,37*55)],["Ser No"]) p.set_format("Ser No", serNo) return p # - # Create a simulation of the population as a table - ordered collection of named columns population = createPopulation() population # computational thinking - simulate observing a sample of the population sample_size = 10 population.sample(sample_size,with_replacement=True) # Simulate observing multiple samples nsamples = 30 # use iteration to create a table of samples samples = Table() for i in range(nsamples): name = "sample-"+str(i) a_sample = population.sample(sample_size,with_replacement=True) samples[name] = a_sample["Ser No"] samples # gracefully transition between tables and arrays samples['sample-0'] # define a function to capture formally a idea about how to do the estimation def estimateA(smpl) : return np.max(smpl) estimateA(samples['sample-2']) # you might come up with lots of other estimators def estimateB(smpl) : return 2*np.mean(smpl) #verify it works estimateA(samples["sample-0"]) # illustrate list comprehension to explore data [estimateB(samples[s]) for s in samples] # Build a tables of estimates estA = Table([[estimateA(samples[s]) for s in samples]],['ests']) estA # Look at the behavior of this estimator as a histogram estA.hist(range=(1,np.max(estA['ests'])),bins=20) # Computational thinking: estimator as a higher order function # passed in to a function that creates a table of estimate def estimate(estimator): return Table([[estimator(samples[s]) for s in samples]],['ests']) estB = estimate(estimateB) estB.hist(range=(1,np.max(estB['ests'])),bins=20) comp = Table([estA['ests'],estB['ests']],['estA','estB']) comp comp.hist(overlay=True, bins=np.arange(1000,2500,50)) # How does these estimates compare with the true size of the population? population.num_rows # Produce a table containing the data associated with a histogram ebins = comp.bin(bins=np.arange(1000,2500,50)) ebins.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="RH4U56H_kpyQ" """ The following code simulates iVQAEC algorithm on QASM simulator. iVQAEC algorithm is a variational quantum algorithm for equality constrained semidefinite programs (SDPs) that we proposed in our paper. Here, we use iVQAEC to solve random instances of an equality constrained SDP. In the paper, we report simulation results for N = 8, 16, and 32, where N is the dimension of input Hermitian operators of an SDP. For writing clean, short and understandable code, here we fix N = 8. By simply substituting another value of N and with some other minor changes, one can obtain results for that N. """ """ This code is based on qiskit-aqua package which recently got deprecated. We intend to use new packages. However, for now the following code is based on qiskit-aqua package. To install it, use the following pip command: !pip install git+https://github.com/Qiskit/qiskit-aqua.git """ # + id="rJp_NmzgqDBS" # import packages. 
import pennylane as qml import numpy as np import cvxpy as cp import qiskit import qiskit.providers.aer.noise as noise import scipy as sc import scipy.stats as stats import scipy.sparse as sparse # + id="2CfnpyWgrEKG" def decompose_into_pauli_strings(M): """ This function expresses an Hermitian operator as a linear combination of Pauli strings :param M: Hermitian operator """ coeffs, pauli_string_objs = qml.utils.decompose_hamiltonian(M) return qml.Hamiltonian(coeffs, pauli_string_objs) # + id="k3kbsOXzrMRO" # modeling noise # probabilities of error for single and two qubit gates prob_1 = 0.001 prob_2 = 0.01 # Depolarizing errors error_1 = noise.depolarizing_error(prob_1, 1) error_2 = noise.depolarizing_error(prob_2, 2) noise_model = noise.NoiseModel() noise_model.add_all_qubit_quantum_error(error_2, ['cx']) noise_model.add_all_qubit_quantum_error(error_1, ['u1', 'u2', 'u3']) # + id="qAxwamZbUFQt" def sparse_rand_sym(n, density): """ This function generates a sparse symmetric matrix :param n: dimension of the matrix :param density: sparsity """ rvs = stats.norm().rvs # generates a sparse matrix X = sparse.random(n, n, density=density, data_rvs=rvs) # make it symmetric upper_X = sparse.triu(X) result = upper_X + upper_X.T - sparse.diags(X.diagonal()) return result # + id="IjcymwGTVSYn" # constants of an SDP N = 8 M = 3 R = 10 # + id="KX7HEKbTVbzv" # generate a weakly constrained random sparse SDP and solve them using cvxpy C = sparse_rand_sym(N, 0.1) A = [] b = [] for i in range(M-1): A.append(sparse_rand_sym(N, 0.1)) b.append(np.random.randn()) b.append(R) A.append(np.eye(N)) # define the variable and constraints X = cp.Variable((N,N), symmetric=True) # positive semidefinite constraint constraints = [X >> 0] constraints += [ cp.trace(A[i]@X) == b[i] for i in range(M) ] # solve the problem prob = cp.Problem(cp.Minimize(cp.trace(C@X)), constraints) prob.solve() # result. 
print("Optimal value:", prob.value) # + id="Rpd3sGkfyPhg" C = C.todense() A[0] = A[0].todense() A[1] = A[1].todense() # + id="pr1Pkftgr8fW" # Now solving the above SDP using iVQAEC # quantum circuit settings num_wires = 3 num_layers = 4 # quantum device # can also define the number of shots here device = qml.device("default.qubit", wires=num_wires) # def the ansatz def circuit(param): """ This function instantiates quantum circuit based on templates provided by pennylane :param param: the parameters of the circuit """ qml.templates.StronglyEntanglingLayers(param, wires=list(range(num_wires))) # + id="3wbRjJxhtZeJ" # def the cost function (Just the expectation value of the hamiltonian) @qml.qnode(device) def evalute_exp(param, M): """ This function evaluates the expectation value of a Hermitian operator :param param: circuit parameters :param M: Hermitian operator """ circuit(param) M_pauli = decompose_into_pauli_strings(M) return qml.expval(M_pauli) def cost_func_ALM(param, **kwargs): """ This function evaluates the cost function at given parameters :param param: circuit parameters """ exp_C = evalute_exp(param, C) exp_A1 = evalute_exp(param, A[0]) exp_A2 = evalute_exp(param, A[1]) exp_A3 = evalute_exp(param, A[2]) b_minus_phi_vector = np.subtract(b, [R*exp_A1, R*exp_A2, R*exp_A3]) return R*exp_C + np.dot(kwargs['y'], b_minus_phi_vector) + (c/2)*(np.linalg.norm(b_minus_phi_vector)**2) def cost_func_ALM_grad_step_and_cost(param, **kwargs): """ This function evaluates new parameters, as well as returns current cost function value :param param: circuit parameters """ gradient = qml.grad(evalute_exp, argnum=0) grad_C = gradient(param, C) grad_A1 = gradient(param, A[0]) grad_A2 = gradient(param, A[1]) grad_A3 = gradient(param, A[2]) exp_A1 = evalute_exp(param, A[0]) exp_A2 = evalute_exp(param, A[1]) exp_A3 = evalute_exp(param, A[2]) full_gradient = R*grad_C - R*(kwargs['y'][0]*grad_A1 + kwargs['y'][1]*grad_A2 + kwargs['y'][2]*grad_A3) - R*c*(b[0]*grad_A1 - R*exp_A1*grad_A1 + b[1]*grad_A2 - R*exp_A2*grad_A2 + b[2]*grad_A3 - R*exp_A3*grad_A3) updated_param = np.subtract(param, 0.01*full_gradient) prev_cost_func_value = cost_func_ALM(param, y=y) return updated_param, prev_cost_func_value # + id="MtJ_D7oSuBtn" # Inner maximizer function def sup_wrt_theta(y): # initialize thetas params_ALM = np.random.random(qml.templates.StronglyEntanglingLayers.shape(n_layers=4, n_wires=3)) # cost function storage cost_func_ALM_store = [cost_func_ALM(params_ALM, y=y)] # store the params params_ALM_store = [params_ALM] max_iterations = 100 max_tol = 1e-01 # iterate while True: params_ALM, prev_cost_func_ALM_value = cost_func_ALM_grad_step_and_cost(params_ALM, y=y) # append to the store cost_func_ALM_store.append(cost_func_ALM(params_ALM, y=y)) params_ALM_store.append(params_ALM) # check the tolerance tol = np.abs(cost_func_ALM_store[-1] - prev_cost_func_ALM_value) #print("Cost function: ", cost_func_ALM_store[-1], " Tolerance: ", tol) if tol <= max_tol: break return params_ALM # + id="3OgPRTa_uKez" # Outer minimization iteratation max_outer_iterations = 100 max_outer_tol = 1e-04 # penalty parameter c = 0 # initialize y y = [0, 0, 0] for i in range(max_outer_iterations): optimal_thetas = sup_wrt_theta(y) # evaluate the expectation values of A1, A2, and A3 wrt optimal thetas exp_A1 = evalute_exp(optimal_thetas, A[0]) exp_A2 = evalute_exp(optimal_thetas, A[1]) exp_A3 = evalute_exp(optimal_thetas, A[2]) # update the dual variables y[0] += 0.0005*(b[0] - R*exp_A1) y[1] += 0.0005*(b[1] - R*exp_A2) y[2] += 0.0005*(b[2] - 
R*exp_A3) print("Step:", i , " Cost function value:", cost_func_ALM(optimal_thetas, y=y)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="qOBFhUxcYB_7" # ## Recommendation Systems in Python # # We will work on the MovieLens dataset and build a model to recommend movies to the end users. This data has been collected by the GroupLens Research Project at the University of Minnesota. # # This dataset consists of: # * 100,000 ratings (1–5) from 943 users on 1682 movies # * Demographic information of the users (age, gender, occupation, etc.) # # First, we’ll import our standard libraries and read the dataset in Python. # + id="5E7yKqZCX7Da" import pandas as pd # %matplotlib inline import matplotlib import matplotlib.pyplot as plt import numpy as np # + id="iVSqiMCjYg55" # pass in column names for each CSV as the column name is not given in the file and read them using pandas. # You can check the column names from the readme file #Reading users file: u_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code'] users = pd.read_csv('/content/drive/MyDrive/ml-100k/u.user', sep='|', names=u_cols,encoding='latin-1') #Reading ratings file: r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp'] ratings = pd.read_csv('/content/drive/MyDrive/ml-100k/u.data', sep='\t', names=r_cols,encoding='latin-1') #Reading items file: i_cols = ['movie id', 'movie title' ,'release date','video release date', 'IMDb URL', 'unknown', 'Action', 'Adventure', 'Animation', 'Children\'s', 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical', 'Mystery', 'Romance', 'Sci-Fi', 'Thriller', 'War', 'Western'] items = pd.read_csv('/content/drive/MyDrive/ml-100k/u.item', sep='|', names=i_cols,encoding='latin-1') # + [markdown] id="GD6-26-fZILv" # **After loading the dataset, we should look at the content of each file (users, ratings, items).** # # ## - Users # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="9hnGkwCSZApC" outputId="f56d1813-5f54-4736-b637-7eb2ad1d1513" print(users.shape) users.head() # + [markdown] id="EMLmTGgTZmJM" # So, we have 943 users in the dataset and each user has 5 features, i.e. user_ID, age, sex, occupation and zip_code. Now let’s look at the ratings file. # # ## - Ratings # + colab={"base_uri": "https://localhost:8080/", "height": 221} id="PsY78af8Zghq" outputId="58cf2ca6-3622-49fd-a34c-55211e5a2545" print(ratings.shape) ratings.head() # + [markdown] id="5sLoWAFpZzia" # We have 100k ratings for different user and movie combinations. Now finally examine the items file. # # ## - Items # + colab={"base_uri": "https://localhost:8080/", "height": 394} id="fDwt0JQGZw6p" outputId="d01c06e0-721b-42ed-e6b9-b4c5bca23fd8" print(items.shape) items.head() # + [markdown] id="yZbir-sMaEgh" # This dataset contains attributes of 1682 movies. There are 24 columns out of which last 19 columns specify the genre of a particular movie. These are binary columns, i.e., a value of 1 denotes that the movie belongs to that genre, and 0 otherwise. 
# # + colab={"base_uri": "https://localhost:8080/"} id="KRk4UsA7Z5mG" outputId="b43e1991-0e9c-4706-8a97-c0fdd4bbb5f6" r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp'] ratings_train = pd.read_csv('/content/drive/MyDrive/ml-100k/ua.base', sep='\t', names=r_cols, encoding='latin-1') ratings_test = pd.read_csv('/content/drive/MyDrive/ml-100k/ua.test', sep='\t', names=r_cols, encoding='latin-1') ratings_train.shape, ratings_test.shape # + [markdown] id="kfSPvIGebOIK" # The dataset has already been divided into train and test by GroupLens where the test data has 10 ratings for each user, i.e. 9,430 rows in total. We will read both these files into our Python environment. # + [markdown] id="8E3rdTCvbRZQ" # It’s finally time to build our recommend engine! # # ## Building collaborative filtering model # # We will recommend movies based on user-user similarity and item-item similarity. For that, first we need to calculate the number of unique users and movies. # + colab={"base_uri": "https://localhost:8080/"} id="-bXQCl4AbDLz" outputId="5b5ab103-6d2c-4d59-f868-dd39c1e535c7" n_users = ratings.user_id.unique().shape[0] n_items = ratings.movie_id.unique().shape[0] n_users,n_items # + [markdown] id="52rEX2xZb2nW" # Now, we will create a user-item matrix which can be used to calculate the similarity between users and items. we will first initialize it with zeros array of shape 943 x 1643 having 943 users and 1643 movies # then we will loop in the ratings data frame row by row # + colab={"base_uri": "https://localhost:8080/"} id="Emi7yvePbu6h" outputId="299a7964-af53-4626-dabb-f25e6faf5db2" data_matrix = np.zeros((n_users, n_items)) for line in ratings.itertuples(): data_matrix[line[1]-1, line[2]-1] = line[3] # + [markdown] id="JJr0Fd_Qb7bi" # here,\ # line[1] is the userId and we are subtracting 1 from it since array indexing starts from 0 = row\ # line[2]-1 is the movie id = column\ # now at that specifec row and column i.e, user and movie we will add line[3] which is the movie rating # # Now, when we have rating os all the movies given by each user in a matrix we will calculate the similarity. We can use the pairwise_distance function from sklearn to calculate the cosine similarity. # # + colab={"base_uri": "https://localhost:8080/"} id="-XYn4bqUcuPR" outputId="d2307c3a-0d95-4805-8676-021437f75940" data_matrix[:5] # + colab={"base_uri": "https://localhost:8080/"} id="jz26uGMVclr4" outputId="8c6bbcee-3e66-4f9d-a8e6-a6e92c4e3b92" from sklearn.metrics.pairwise import pairwise_distances user_similarity = pairwise_distances(data_matrix, metric='cosine') item_similarity = pairwise_distances(data_matrix.T, metric='cosine') print(user_similarity.shape, item_similarity.shape) # + [markdown] id="lemlNTI5eazh" # This gives us the item-item and user-user similarity in an array form. The next step is to make predictions based on these similarities. Let’s define a function to do just that. # # + id="9Ndn5gEXePMM" def predict(ratings, similarity, type='user'): if type == 'user': mean_user_rating = ratings.mean(axis=1).reshape(-1,1) #We use np.newaxis so that mean_user_rating has same format as ratings ratings_diff = (ratings - mean_user_rating) pred = mean_user_rating + similarity.dot(ratings_diff) / np.array([np.abs(similarity).sum(axis=1)]).T elif type == 'item': pred = ratings.dot(similarity) / np.array([np.abs(similarity).sum(axis=1)]) return pred # + [markdown] id="kpEsvk0he_-i" # Finally, we will make predictions based on user similarity and item similarity. 
# + id="0w9PxVwXe_PV" user_prediction = predict(data_matrix, user_similarity, type='user') item_prediction = predict(data_matrix, item_similarity, type='item') # + colab={"base_uri": "https://localhost:8080/"} id="4AKbLIwkfE6P" outputId="8d2d6423-e65a-4eac-e135-61e4e147699a" user_prediction[0] # + id="" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pulp # Instantiate our problem class model = pulp.LpProblem("Cost minimising blending problem", pulp.LpMinimize) # Construct our decision variable lists sausage_types = ['economy', 'premium'] ingredients = ['pork', 'wheat', 'starch'] ing_weight = pulp.LpVariable.dicts("weight kg", ((i, j) for i in sausage_types for j in ingredients), lowBound=0, cat='Continuous') # Objective Function model += ( pulp.lpSum([ 4.32 * ing_weight[(i, 'pork')] + 2.46 * ing_weight[(i, 'wheat')] + 1.86 * ing_weight[(i, 'starch')] for i in sausage_types]) ) # + # Constraints # 350 economy and 500 premium sausages at 0.05 kg model += pulp.lpSum([ing_weight['economy', j] for j in ingredients]) == 350 * 0.05 model += pulp.lpSum([ing_weight['premium', j] for j in ingredients]) == 500 * 0.05 # Economy has >= 40% pork, premium >= 60% pork model += ing_weight['economy', 'pork'] >= ( 0.4 * pulp.lpSum([ing_weight['economy', j] for j in ingredients])) model += ing_weight['premium', 'pork'] >= ( 0.6 * pulp.lpSum([ing_weight['premium', j] for j in ingredients])) # Sausages must be <= 25% starch model += ing_weight['economy', 'starch'] <= ( 0.25 * pulp.lpSum([ing_weight['economy', j] for j in ingredients])) model += ing_weight['premium', 'starch'] <= ( 0.25 * pulp.lpSum([ing_weight['premium', j] for j in ingredients])) # We have at most 30 kg of pork, 20 kg of wheat and 17 kg of starch available model += pulp.lpSum([ing_weight[i, 'pork'] for i in sausage_types]) <= 30 model += pulp.lpSum([ing_weight[i, 'wheat'] for i in sausage_types]) <= 20 model += pulp.lpSum([ing_weight[i, 'starch'] for i in sausage_types]) <= 17 # We have at least 23 kg of pork to use up model += pulp.lpSum([ing_weight[i, 'pork'] for i in sausage_types]) >= 23 # - # Solve our problem model.solve() pulp.LpStatus[model.status] for var in ing_weight: var_value = ing_weight[var].varValue print("The weight of {0} in {1} sausages is {2} kg".format(var[1], var[0], var_value)) # + total_cost = pulp.value(model.objective) print("The total cost is €{} for 350 economy sausages and 500 premium sausages".format(round(total_cost, 2))) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="5mfPt49RObNc" import pandas as pd import numpy as np import sklearn.model_selection as skm import matplotlib.pyplot as plt # + id="yIqilSJ9Pc4A" np.random.seed(200804) # + [markdown] id="d5RirZknPpRF" # #EDA # + id="RZeZ8Jt7Pogj" titanic = pd.read_csv('https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv') # + colab={"base_uri": "https://localhost:8080/", "height": 203} id="ewSqxMsNPzPi" outputId="28a16177-d568-4c73-ebc5-f176796a0222" titanic.head() # + id="VN5xQf4zP49-" # colum renaming and dropna titanic.rename(columns={ col: col.lower() for col in titanic.columns}, inplace = True ) titanic.dropna(inplace=True) # + 
colab={"base_uri": "https://localhost:8080/"} id="wJc7Z2aNRLk3" outputId="ce1b34f2-66ed-43cb-f87a-93ed993a6e89" titanic.dtypes # + colab={"base_uri": "https://localhost:8080/", "height": 203} id="ZNfhn7LgRThi" outputId="52713caa-9557-4d44-d501-728f03b0c3fc" #masks for sex columns titanic.sex.mask(titanic.sex == 'female', '1', inplace=True) titanic.sex.mask(titanic.sex == 'male', '0', inplace=True) titanic.head() # + id="mVBsXUbjRkXH" #independent and dependent variables X = titanic[['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare']] y = titanic[['survived']] # + [markdown] id="wDkM-DFyRrXL" # #Data split # + colab={"base_uri": "https://localhost:8080/"} id="4N9Z8PpERu5D" outputId="ae538605-dd5d-4629-9c0a-8eef5207fe29" X_train, X_test, y_train, y_test = skm.train_test_split(X, y, random_state=111, test_size=0.3) print('training data shape: ', X_train.shape, y_train.shape) print('testing data shape', X_test.shape, y_test.shape) # + [markdown] id="wfW4EZ9zVVR1" # #Training # + id="EJg5CbueVXIn" from sklearn.ensemble import RandomForestClassifier # + id="pHuc4pOMVd4e" # oob_score indica si usar muestras sout-of-bag para estimar score de generalización rf = RandomForestClassifier(n_jobs=-1, oob_score=True) #n_estimators -> How many trees will generate the forest grid = {'n_estimators': [600,800,1000], 'criterion': ['gini', 'entropy'], 'min_samples_leaf':[5,7,9,11]} #Althougth RandomForest does not need cv, GreadSearchCV object needs 3 k-folds at least. gs = skm.GridSearchCV(rf, param_grid=grid, scoring='precision', cv=3, n_jobs=-1, return_train_score=True) # + colab={"base_uri": "https://localhost:8080/"} id="4Q8zDspGYg9_" outputId="7b08f61f-b6f0-4cd6-b19d-3ae7a7a0bd9c" #Training 24 models gs.fit(X_train, y_train.values.ravel()) # + id="7miLRVjHYm93" best_model = gs.best_estimator_ # + colab={"base_uri": "https://localhost:8080/"} id="hiB1i6EjZ2aU" outputId="b04b6540-72cd-47ca-c271-293f15fbc3b3" type(best_model) # + colab={"base_uri": "https://localhost:8080/"} id="CpS1OFOAaSpD" outputId="2a0c13fb-c853-41bb-a6c9-07f06e6c73e6" best_model.get_params() # + colab={"base_uri": "https://localhost:8080/"} id="ez-XQB-tajLk" outputId="4dbed861-7b3b-488a-e069-d7fcebd3c171" best_model.oob_score_ # Out of bag score associated with accuracy # + colab={"base_uri": "https://localhost:8080/"} id="RTNoVL76apth" outputId="47c07964-ae71-4cd8-ae2a-5aea118dd963" gs.best_score_ # + [markdown] id="efNnBhPDbz4v" # **Feature importance** # + colab={"base_uri": "https://localhost:8080/"} id="rH8fBUtCb2zk" outputId="afe44ff5-95d1-4e24-cda9-09cb1f539481" best_model.feature_importances_ # + colab={"base_uri": "https://localhost:8080/"} id="gROMB-B6b8l_" outputId="57001008-7a2b-4444-a711-6636804d2064" X.columns.values # + [markdown] id="0WGsgHmScJob" # **Predictions** # + colab={"base_uri": "https://localhost:8080/"} id="Hvlbq58EcMIv" outputId="09daa846-59b0-4fc1-e7d1-eb850e92d8de" predicted_labels = best_model.predict(X_test) predicted_labels[:10] # + colab={"base_uri": "https://localhost:8080/"} id="W6cyzciJdQLT" outputId="6eac180b-eb17-49e7-bed2-712acee2fb16" predicted_scores = best_model.predict_proba(X_test) predicted_scores[:10,] # + [markdown] id="8PpfVo4-dc2M" # **Metrics** # + colab={"base_uri": "https://localhost:8080/"} id="6T98YGf9dfsl" outputId="a5ec348e-8cd8-402e-c60a-95d05cd7ef54" #Accuracy from sklearn.metrics import accuracy_score accuracy = accuracy_score(y_test, predicted_labels) accuracy # + id="aDAfDaCrd4u-" #ROC y AUC from sklearn.metrics import roc_auc_score, roc_curve # + colab={"base_uri": 
"https://localhost:8080/", "height": 279} id="Y5DiwM99eFOa" outputId="daedd51e-82d9-4ec2-bc05-e56d468c8bde" fpr, tpr, threshold = roc_curve(y_test, predicted_scores[:,1], pos_label=True) plt.clf() plt.plot([0,1], [0,1], 'k--', c='red') plt.plot(fpr,tpr) plt.xlabel('fpr') plt.ylabel('tpr') plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="8ZzUak4TfCAF" outputId="4e3185d6-78f2-4c91-9ec7-ebb8b1bcf2ea" #confusion matrix from sklearn.metrics import confusion_matrix confusion_matrix(y_test, predicted_labels) # + colab={"base_uri": "https://localhost:8080/"} id="XW3U6m6GfQST" outputId="824adcf1-a62a-423a-a47f-b7a7908fbfe2" #precision, recall and f1score from sklearn.metrics import recall_score, precision_score, f1_score print('precision', precision_score(y_test, predicted_labels)) print('recall', recall_score(y_test, predicted_labels) ) print('f1_score', f1_score(y_test, predicted_labels) ) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import cv2 import glob import matplotlib.pyplot as plt import matplotlib.image as mpimg # %matplotlib inline import pickle # Read calibration images images = glob.glob('./calibration*.jpg') # + # Create arrays to store object and image points objpoints = [] # 3D points in real world space imgpoints = [] # 2D points in image space # Prepare 3D object points like (0,0,0), (1,0,0), (2,0,0), ..., (nx-1,ny-1,0) nx = 9 ny = 6 objp = np.zeros((nx*ny,3), np.float32) objp[:,:2] = np.mgrid[0:nx,0:ny].T.reshape(-1,2) # - for fname in images: # Read each image in for loop img = mpimg.imread(fname) # Convert image to grayscale gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Find the chessboard corners ret, corners = cv2.findChessboardCorners(gray, (nx,ny), None) # Fill object and image points array with corners if ret == True: imgpoints.append(corners) objpoints.append(objp) # Draw corners and write output files img = cv2.drawChessboardCorners(img, (nx,ny), corners, ret) file_name = 'corner_' + fname.split('/')[-1] cv2.imwrite(file_name, img) # + # Load example image for reference img_ref = cv2.imread('./calibration4.jpg') img_size = (img_ref.shape[1], img_ref.shape[0]) # Calibrate camera with determined object and image points ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size,None,None) # Undistort example image dst = cv2.undistort(img_ref, mtx, dist, None, mtx) # Save camera calibration results in a pickle file cal_pickle = {} cal_pickle["mtx"] = mtx cal_pickle["dist"] = dist pickle.dump(cal_pickle, open('./cal_pickle.p', "wb")) # - # Visualize undistortion f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10)) f.subplots_adjust(hspace = .2, wspace=.05) ax1.imshow(img_ref) ax1.set_title('Original Image', fontsize=30) ax2.imshow(dst) ax2.set_title('Undistorted Image', fontsize=30) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Validation and Verification of the 25mm collimator simulation, GP3 # Here we provide code and output which verifies and validates the 25mm collimator simulation. We're using simulation phase space file output and input to check the validity of the result. 
This is 25 sources machine, so length of the source is increased to 18mm, source is moved forward by 3mm and activity should be 180Ci. # + import math import matplotlib import numpy as np import matplotlib.pyplot as plt import BEAMphsf import text_loader import H1Dn import H1Du import ListTable # %matplotlib inline # - # *First, set filename to what we want to examine and read PhSF header* C = 25 phsfname = "PHSF" + "." + str(C) phsfname = "../" + phsfname print ("We're reading the {1}mm phase space file = {0}".format(phsfname, C)) # *Checking PhSF header parameters* # + events, nof_photons, nof_electrons, nof_positrons = text_loader.load_events(phsfname, -1) print("Number of loaded events: {0}".format(len(events))) print("Number of loaded photons: {0}".format(nof_photons)) print("Number of loaded electrons: {0}".format(nof_electrons)) print("Number of loaded positrons: {0}".format(nof_positrons)) print("Yield: {0}".format(nof_photons/40000000000.0)) # - # ## Energy Spectrum tests # *We expect energy spectrum to be scattering background together with peaks δ(E-1.17) and δ(E-1.33). Below we'trying to prove this statement. We will draw the distributions and histograms to estimate influence of the background scattering and get the data about δ-peaks* # ### We're filling energy histogram now, basic checks # *We're building scale with 5 bins in the region between 1.17 and 1.33 MeV, all other bins below 1.17 are of about the same size as those 5* # + # make scale with explicit bins at 1.17 MeV and 1.33 MeV nbins = 5 scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001) he = H1Dn.H1Dn(scale) for e in events: WT = e[0] E = e[1] he.fill(E, WT) print("Number of events in histogram: {0}".format(he.nof_events())) print("Integral in histogram: {0}".format(he.integral())) print("Underflow bin: {0}".format(he.underflow())) print("Overflow bin: {0}".format(he.overflow())) # - # *Underflow bin is empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT* # ### Drawing Probability Density Function for 5 bins between 1.33 peak and 1.17 peak. 
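# *The plotting cell that follows turns the weighted histogram into a probability density: each
# bin's collected weight is divided by the bin width and by the total integral, so the bar
# heights satisfy Σ_k y_k·Δx_k = 1, which is exactly the "PDF normalization" the cell prints.
# A standalone sketch of the same normalization on a toy numpy histogram, independent of the
# H1Dn class used here and added only as illustration:*
# +
import numpy as np

toy_weights = np.array([4.0, 1.0, 0.5, 2.5])          # collected weights per bin
toy_edges = np.array([0.0, 0.5, 1.0, 1.2, 1.4])       # bin edges (unequal widths)
widths = np.diff(toy_edges)
pdf = toy_weights / widths / toy_weights.sum()        # weight / width / integral
print('normalization check:', np.sum(pdf * widths))   # should print 1.0
# -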
# + X = [] Y = [] W = [] scale = he.x() n = len(scale) norm = 1.0/he.integral() sum = 0.0 for k in range (-1, he.size()+1): x = 0.0 w = (he.lo() - x) if k == he.size(): w = (scale[-1]-scale[-2]) x = he.hi() elif k >= 0: w = (scale[k+1] - scale[k]) x = scale[k] d = he[k] # data from bin with index k y = d[0] / w # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(w) sum += y*w print("PDF normalization: {0}".format(sum)) E133_5 = Y[-2] E117_5 = Y[-2-nbins] p1 = plt.bar(X, Y, W, color='r') plt.xlabel('Energy(MeV)') plt.ylabel('PDF of the photons') plt.title('Energy distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() # - # saving peak values print("Peak PDF value at 1.33 MeV: {0}".format(E133_5)) print("Peak PDF value at 1.17 MeV: {0}".format(E117_5)) # ### Filling energy histogram with double number of bins # *We're building scale with 10 bins in the region between 1.17 and 1.33 MeV, all other bins below 1.17 are of about the same size as those 10* # + # make scale with explicit bins at 1.17 MeV and 1.33 MeV nbins = 10 scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001) he = H1Dn.H1Dn(scale) for e in events: WT = e[0] E = e[1] he.fill(E, WT) print("Number of events in histogram: {0}".format(he.nof_events())) print("Integral in histogram: {0}".format(he.integral())) print("Underflow bin: {0}".format(he.underflow())) print("Overflow bin: {0}".format(he.overflow())) # - # *Underflow bin is empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT* # ### Drawing Probability Density Function for 10 bins between 1.33 peak and 1.17 peak. # + X = [] Y = [] W = [] scale = he.x() n = len(scale) norm = 1.0/he.integral() sum = 0.0 for k in range (-1, he.size()+1): x = 0.0 w = (he.lo() - x) if k == he.size(): w = (scale[-1]-scale[-2]) x = he.hi() elif k >= 0: w = (scale[k+1] - scale[k]) x = scale[k] d = he[k] # data from bin with index k y = d[0] / w # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(w) sum += y*w print("PDF normalization: {0}".format(sum)) E133_10 = Y[-2] E117_10 = Y[-2-nbins] p1 = plt.bar(X, Y, W, color='r') plt.xlabel('Energy(MeV)') plt.ylabel('PDF of the photons') plt.title('Energy distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() # - # saving peak values print("Peak PDF value at 1.33 MeV: {0}".format(E133_10)) print("Peak PDF value at 1.17 MeV: {0}".format(E117_10)) # ### Filling energy histogram with quadruple number of bins # *We're building scale with 20 bins in the region between 1.17 and 1.33 MeV, all other bins below 1.17 are of about the same size as those 20.* # + # make scale with explicit bins at 1.17 MeV and 1.33 MeV nbins = 20 scale = BEAMphsf.make_energy_scale(nbins, lo = 0.01, me = 1.1700001, hi = 1.3300001) he = H1Dn.H1Dn(scale) for e in events: WT = e[0] E = e[1] he.fill(E, WT) print("Number of events in histogram: {0}".format(he.nof_events())) print("Integral in histogram: {0}".format(he.integral())) print("Underflow bin: {0}".format(he.underflow())) print("Overflow bin: {0}".format(he.overflow())) # - # *Underflow bin is empty, as well as Overflow bin. This is good because we do not expect events beyond 1.33MeV and below ECUT* # ### Drawing Probability Density Function for 10 bins between 1.33 peak and 1.17 peak. 
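# *The cell below is the third copy of the same PDF construction (5-, 10- and now 20-bin). A
# small helper, sketched here against the H1Dn interface already used above (`x()`, `size()`,
# `lo()`, `hi()`, `integral()`, and indexing whose first element is the collected weight),
# would remove the duplication; it is a possible refactor, not part of the original notebook:*
# +
def energy_pdf_points(he):
    """Return bar positions, normalized densities and widths for histogram `he`."""
    X, Y, W = [], [], []
    scale = he.x()
    norm = 1.0 / he.integral()
    for k in range(-1, he.size() + 1):
        x = 0.0
        w = he.lo() - x                     # underflow bin: width from 0 up to lo()
        if k == he.size():                  # overflow bin
            w = scale[-1] - scale[-2]
            x = he.hi()
        elif k >= 0:                        # regular bins
            w = scale[k + 1] - scale[k]
            x = scale[k]
        X.append(x)
        Y.append(he[k][0] / w * norm)       # weight / width / integral
        W.append(w)
    return X, Y, W
# -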
# + X = [] Y = [] W = [] scale = he.x() n = len(scale) norm = 1.0/he.integral() sum = 0.0 for k in range (-1, he.size()+1): x = 0.0 w = (he.lo() - x) if k == he.size(): w = (scale[-1]-scale[-2]) x = he.hi() elif k >= 0: w = (scale[k+1] - scale[k]) x = scale[k] d = he[k] # data from bin with index k y = d[0] / w # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(w) sum += y*w print("PDF normalization: {0}".format(sum)) E133_20 = Y[-2] E117_20 = Y[-2-nbins] p1 = plt.bar(X, Y, W, color='r') plt.xlabel('Energy(MeV)') plt.ylabel('PDF of the photons') plt.title('Energy distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() # - # saving peak values print("Peak PDF value at 1.33 MeV: {0}".format(E133_20)) print("Peak PDF value at 1.17 MeV: {0}".format(E117_20)) # ### Comparing peak values # *We would compare peak values at 10 bins and at 5 bins. The presence of δ-peaks means that with doubling number of bins we shall expect the roughly doubling the peak values.* # + table = ListTable.ListTable() table.append(["Nbins", "E=1.17", "E=1.33"]) table.append(["", "MeV", "MeV"]) table.append([5, 1.0, 1.0]) table.append([10, E133_10/E133_5, E133_10/E133_5]) table.append([20, E133_20/E133_5, E133_20/E133_5]) table # - # *The result is as expected. Only few percent of the values in the 1.33 and 1.17 MeV bins are due to scattered radiation. Most values are coming from primary source and are δ-peaks in energy.* # ## Spatial Distribution tests # *Here we will plot spatial distribution of the particles, projected from collimator exit position to the isocenter location at 38cm* # + Znow = 197.5 # we at 200mm at the cooolimator exit Zshot = 380.0 # shot isocenter is at 380mm # radial, X and Y, all units in mm hr = H1Du.H1Du(120, 0.0, 40.0) hx = H1Du.H1Du(128, -32.0, 32.0) hy = H1Du.H1Du(128, -32.0, 32.0) for e in events: WT = e[0] xx, yy, zz = BEAMphsf.move_event(e, Znow, Zshot) #xx = e[2] #yy = e[3] #zz = e[4] r = math.sqrt(xx*xx + yy*yy) hr.fill(r, WT) hx.fill(xx, WT) hy.fill(yy, WT) print("Number of events in R histogram: {0}".format(hr.nof_events())) print("Integral in R histogram: {0}".format(hr.integral())) print("Underflow bin: {0}".format(hr.underflow())) print("Overflow bin: {0}\n".format(hr.overflow())) print("Number of events in X histogram: {0}".format(hx.nof_events())) print("Integral in X histogram: {0}".format(hx.integral())) print("Underflow bin: {0}".format(hx.underflow())) print("Overflow bin: {0}\n".format(hx.overflow())) print("Number of events in Y histogram: {0}".format(hy.nof_events())) print("Integral in Y histogram: {0}".format(hy.integral())) print("Underflow bin: {0}".format(hy.underflow())) print("Overflow bin: {0}".format(hy.overflow())) # + X = [] Y = [] W = [] norm = 1.0/hr.integral() sum = 0.0 st = hr.step() for k in range (0, hr.size()+1): r_lo = hr.lo() + float(k) * st r_hi = r_lo + st r = 0.5*(r_lo + r_hi) ba = math.pi * (r_hi*r_hi - r_lo*r_lo) # bin area d = hr[k] # data from bin with index k y = d[0] / ba # first part of bin is collected weights y = y * norm X.append(r) Y.append(y) W.append(st) sum += y * ba print("PDF normalization: {0}".format(sum)) p1 = plt.bar(X, Y, W, 0.0, color='b') plt.xlabel('Radius(mm)') plt.ylabel('PDF of the photons') plt.title('Radial distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() # - # NB: peak at the far right above 40mm is overflow bin # + X = [] Y = [] W = [] 
norm = 1.0/hx.integral() sum = 0.0 st = hx.step() for k in range (0, hx.size()): x_lo = hx.lo() + float(k)*st x_hi = x_lo + st x = 0.5*(x_lo + x_hi) d = hx[k] # data from bin with index k y = d[0] / st # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(st) sum += y*st print("PDF normalization: {0}".format(sum)) p1 = plt.bar(X, Y, W, color='b') plt.xlabel('X(mm)') plt.ylabel('PDF of the photons') plt.title('X distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() # + X = [] Y = [] W = [] norm = 1.0/hy.integral() sum = 0.0 st = hy.step() for k in range (0, hy.size()): x_lo = hy.lo() + float(k)*st x_hi = x_lo + st x = 0.5*(x_lo + x_hi) d = hy[k] # data from bin with index k y = d[0] / st # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(st) sum += y*st print("PDF normalization: {0}".format(sum)) p1 = plt.bar(X, Y, W, color='b') plt.xlabel('Y(mm)') plt.ylabel('PDF of the photons') plt.title('Y distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() # - # *We find spatial distribution to be consistent with the collimation setup* # ## Angular Distribution tests # *Here we plot particles angular distribution for all three directional cosines, at the collimator exit. We expect angular distribution to fill collimation angle which is close to 0.033 radians (0.5x25/380).* # + # angular, WZ, WX and WY, all units in radians h_wz = H1Du.H1Du(100, 1.0 - 0.05, 1.0) h_wx = H1Du.H1Du(110, -0.055, 0.055) h_wy = H1Du.H1Du(110, -0.055, 0.055) for e in events: WT = e[0] wx = e[5] wy = e[6] wz = e[7] h_wz.fill(wz, WT) h_wx.fill(wx, WT) h_wy.fill(wy, WT) print("Number of events in WZ histogram: {0}".format(h_wz.nof_events())) print("Integral in WZ histogram: {0}".format(h_wz.integral())) print("Underflow bin: {0}".format(h_wz.underflow())) print("Overflow bin: {0}\n".format(h_wz.overflow())) print("Number of events in WX histogram: {0}".format(h_wx.nof_events())) print("Integral in WX histogram: {0}".format(h_wx.integral())) print("Underflow bin: {0}".format(h_wx.underflow())) print("Overflow bin: {0}\n".format(h_wx.overflow())) print("Number of events in WY histogram: {0}".format(h_wy.nof_events())) print("Integral in WY histogram: {0}".format(h_wy.integral())) print("Underflow bin: {0}".format(h_wy.underflow())) print("Overflow bin: {0}".format(h_wy.overflow())) # + X = [] Y = [] W = [] norm = 1.0/h_wz.integral() sum = 0.0 st = h_wz.step() for k in range (0, h_wz.size()+1): x_lo = h_wz.lo() + float(k)*st x_hi = x_lo + st x = 0.5*(x_lo + x_hi) d = h_wz[k] # data from bin with index k y = d[0] / st # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(st) sum += y*st print("PDF normalization: {0}".format(sum)) p1 = plt.bar(X, Y, W, color='g') plt.xlabel('WZ') plt.ylabel('PDF of the photons') plt.title('Angular Z distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() # + X = [] Y = [] W = [] norm = 1.0/h_wx.integral() sum = 0.0 st = h_wx.step() for k in range (0, h_wx.size()): x_lo = h_wx.lo() + float(k)*st x_hi = x_lo + st x = 0.5*(x_lo + x_hi) d = h_wx[k] # data from bin with index k y = d[0] / st # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(st) sum += y*st print("PDF normalization: {0}".format(sum)) p1 = plt.bar(X, Y, W, color='g') plt.xlabel('WX') 
plt.ylabel('PDF of the photons') plt.title('Angular X distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() # + X = [] Y = [] W = [] norm = 1.0/h_wy.integral() sum = 0.0 st = h_wy.step() for k in range (0, h_wy.size()): x_lo = h_wy.lo() + float(k)*st x_hi = x_lo + st x = 0.5*(x_lo + x_hi) d = h_wy[k] # data from bin with index k y = d[0] / st # first part of bin is collected weights y = y * norm X.append(x) Y.append(y) W.append(st) sum += y*st print("PDF normalization: {0}".format(sum)) p1 = plt.bar(X, Y, W, color='g') plt.xlabel('WY') plt.ylabel('PDF of the photons') plt.title('Angular Y distribution') plt.grid(True); plt.tick_params(axis='x', direction='out') plt.tick_params(axis='y', direction='out') plt.show() # - # *We find photon angular distribution to be consistent with the collimation setup* # ## Conclusion # *After plotting and analyzing photons energy, spatial and angular distribution, we could conclude that it is consistent with simulation Co60 source, sending particles through collimation system of 25mm collimation spot at 380mm isocenter.* # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Delete files # Linux # !rm -rf public # !rm car_1.bmp # !rm car.png # !rm synset_words.txt # !rm voc_labels.txt # !rm -rf benchmark_tool # Windows # !rmdir /s /q public # !del car_1.bmp # !del car.png # !del synset_words.txt # !del voc_labels.txt # !rmdir /s /q -rf benchmark_tool # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="_seE71WAElyq" # # GPT-2 Code Generator # ## This code was taken from publicly-available Huggingface repository # ## and was provided by SIC98 # ## link: https://huggingface.co/SIC98/GPT2-python-code-generator # + colab={"base_uri": "https://localhost:8080/"} id="gkOf4EGDfDGl" executionInfo={"status": "ok", "timestamp": 1626156596666, "user_tz": -60, "elapsed": 7144, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvXV91_cxrC4m7SmeaHRr4HdN9pQ5ReaB1LqMgcQ=s64", "userId": "00629242608009283527"}} outputId="7ca2acd0-3394-4789-b5f9-25ffa55830d0" pip install transformers # + colab={"base_uri": "https://localhost:8080/", "height": 316, "referenced_widgets": ["34cfe545395849bba152ac59130f52c5", "10cec0dc1415462b9eba8874dc5b5676", "1549e9b9ddea407bbb57fd69830ebcca", "4860a6bd207d4349b818dba8f4ba00a8", "5f70c6a91166488cacd0449e7fa4798c", "", "bc2d96644ea3445b904a69253066485f", "", "", "b7bcde3eab5049ed9f27d91387081437", "", "28cd1ec752714810a1aa5d8aaf175a47", "", "e5fdf64f5ad44ea0bdb4c5e9d206d93d", "", "eefa792808204e89982b40a33a1ff800", "", "", "", "", "846db92988d9469f82a5d1fd435d6071", "d0ec6a186adc47a5b7a2c22909394a6a", "c6fde3c7137247399e158ac6d919123c", "ea23060b67424ccab2689c9803cad4cb", "", "", "", "84ba92a89c7f4a38893887080d8b4493", "d8ebe59541ab41ccb2d59e2077f8356e", "", "", "", "c9c1e32374644c1aa670de8ea20a1416", "3f2a47addb244ba3a31e0240d5692021", "b270323fa3d94d068eadaa451d84105a", "", "", "", "", "637c300f43ee4feebe1bfe596634d950", "", "", "2a006771812548dfb73eb04bac2625c0", "adfbe9a519a14d6092ac4b3308b4c733", "9eab5fe626de4b128d25740de2508918", "a57ad55b068b4edc9491af35d46f66fe", 
"d3234ac59d1142c7a3b761aea8dcc423", "576e748123ba44c0ad2f1a7d56c0a237"]} id="MdE7hzioeglH" executionInfo={"status": "ok", "timestamp": 1626156612811, "user_tz": -60, "elapsed": 16148, "user": {"displayName": "Ave Coders", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvXV91_cxrC4m7SmeaHRr4HdN9pQ5ReaB1LqMgcQ=s64", "userId": "00629242608009283527"}} outputId="dcd4b02e-1bd6-4eae-cf73-3b1e4c1e5a03" import torch from transformers import GPT2LMHeadModel, GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained('SIC98/GPT2-python-code-generator') model = GPT2LMHeadModel.from_pretrained('SIC98/GPT2-python-code-generator') # + id="qMH0CVoDeo3v" # Can use fast tokenizer from transformers import GPT2TokenizerFast tokenizer = GPT2TokenizerFast.from_pretrained('SIC98/GPT2-python-code-generator') # + colab={"base_uri": "https://localhost:8080/"} id="K0yYU_y4ezn4" executionInfo={"status": "ok", "timestamp": 1626156615003, "user_tz": -60, "elapsed": 1743, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvXV91_cxrC4m7SmeaHRr4HdN9pQ5ReaB1LqMgcQ=s64", "userId": "00629242608009283527"}} outputId="1745c15b-6c5e-402b-effd-585a0c121d56" # Test tokenizer print(tokenizer("Hello world")) print(tokenizer(" Hello world")) print(tokenizer.encode("Hello world")) print(tokenizer.encode(" Hello world")) print(tokenizer.decode([15496, 995])) print(tokenizer.decode([18435, 995])) # + colab={"base_uri": "https://localhost:8080/"} id="qhB9WyVHe0HG" executionInfo={"status": "ok", "timestamp": 1626156793040, "user_tz": -60, "elapsed": 2574, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvXV91_cxrC4m7SmeaHRr4HdN9pQ5ReaB1LqMgcQ=s64", "userId": "00629242608009283527"}} outputId="36752241-3f54-4575-ab18-85844e7f4c0c" sequence = 'def is_palindrome(s):\n """Check whether a string is a palindrome"""' inputs = tokenizer.encode(sequence, return_tensors='pt') outputs = model.generate(inputs, max_length=64, do_sample=True, temperature=0.1, top_p=1.0) text = tokenizer.decode(outputs[0], skip_special_tokens=True) print("=====================================") print(text) # + colab={"base_uri": "https://localhost:8080/"} id="tPdNwcCOe5mf" executionInfo={"status": "ok", "timestamp": 1626156744756, "user_tz": -60, "elapsed": 8237, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvXV91_cxrC4m7SmeaHRr4HdN9pQ5ReaB1LqMgcQ=s64", "userId": "00629242608009283527"}} outputId="dff94be6-e4c8-446f-9ff4-a5f17cbbf3e6" sequence = '@dataclass\nclass Item:\n name: str\n price: float\n\n@dataclass\nclass Order\n id: int\n items: List[Item]\n \n def compute_total_price(self, palindrome_discount=0.2):\n """\n Compute the total price and return it.\n Apply a discount to items whose names are palindromes.\n """' inputs = tokenizer.encode(sequence, return_tensors='pt') outputs = model.generate(inputs, max_length=256, do_sample=True, temperature=0.2, top_p=1.0) text = tokenizer.decode(outputs[0], skip_special_tokens=True) print("=====================================") print(text) # + colab={"base_uri": "https://localhost:8080/"} id="GlC7bG56gCWg" executionInfo={"status": "ok", "timestamp": 1626156859924, "user_tz": -60, "elapsed": 14603, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhvXV91_cxrC4m7SmeaHRr4HdN9pQ5ReaB1LqMgcQ=s64", "userId": "00629242608009283527"}} outputId="65f19a07-bb13-4947-9e88-44fa7f933e69" sequence = 'def add_all(a, b, c, d):' inputs = tokenizer.encode(sequence, return_tensors='pt') outputs 
= model.generate(inputs, max_length=256, do_sample=True, temperature=0.2, top_p=1.0) text = tokenizer.decode(outputs[0], skip_special_tokens=True) print("=====================================") print(text) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- import torch a = torch.tensor([2.,3]) print(a) a = torch.nn.Parameter(a) # allow gradient calculation a3 = torch.tensor([4,6,7.,2]) a3 = torch.nn.Parameter(a3) # allow gradient calculation print("a3: ", a3) print(a) a1 = torch.FloatTensor([2,3]) print("a1: ", a1) a1.requires_grad_() print("a1 with grad: ", a1) a1.requires_grad_(True) print(a1.requires_grad) # Bool b = a+a * a print(b.requires_grad) a, b b = a * a #a = a.detach() print("a= ", a, ", a= ", a1, ", b= ", b) b = b + 3 #a = a.detach() print("a= ", a, ", a= ", a1, ", b= ", b) c = torch.sum(b) print(c) c = torch.sum(b) # Cannot run c.backward() two times in a row if a.grad != None: a.grad.zero_() c.backward(retain_graph=True) a.grad, b.grad # * Can only take derivatives with respect to leaves. # * Since $b$ depends on $a$, I can only compute the gradient with respect to $a$. # * To compute the gradient with respect to $b$, I have to cut off the gradient propagation at $b$. HOW TO DO THIS? # + a.grad, a # - a.grad c # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # MIT License # # Copyright (c) 2020 # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in all # copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
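# *(A short aside answering the question left open in the PyTorch autograd notebook above,
# "how do I get the gradient with respect to the non-leaf tensor b?". The calls below —
# `retain_grad`, `detach`, `requires_grad_` — are standard PyTorch, but the cell itself is a
# minimal sketch and not part of either surrounding notebook.)*
# +
import torch

a = torch.nn.Parameter(torch.tensor([2.0, 3.0]))
b = a * a + 3
b.retain_grad()                 # ask autograd to keep the gradient on the non-leaf b
c = torch.sum(b)
c.backward()
print(a.grad)                   # dc/da = 2a -> tensor([4., 6.])
print(b.grad)                   # dc/db -> tensor([1., 1.]), retained thanks to retain_grad()

# Alternatively, cut the graph at b so it becomes a leaf of a new graph:
b_leaf = b.detach().requires_grad_(True)
torch.sum(b_leaf).backward()
print(b_leaf.grad)              # tensor([1., 1.])
# -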
# + input = '''acc +49 jmp +274 acc +49 acc +49 jmp +476 jmp +409 jmp +269 jmp +1 acc -11 acc +5 acc +24 jmp +17 acc +50 acc +9 acc +37 nop +266 jmp +60 jmp +329 acc +18 jmp +327 acc +22 acc -14 jmp +281 nop +287 acc +6 acc -1 acc +22 jmp +302 acc +44 jmp +576 acc +10 acc +33 nop +219 jmp +534 jmp +89 jmp +523 acc +40 acc +22 jmp +53 acc +6 acc +39 acc +26 jmp +81 acc +18 acc +20 acc +31 acc +31 jmp +244 jmp +1 jmp +237 acc -5 acc +2 nop +209 jmp +222 acc -16 jmp +277 jmp +48 jmp +317 jmp +564 acc -5 acc +11 acc -10 acc +1 jmp -5 acc +46 acc +14 acc -3 jmp +393 acc +8 acc +21 acc +6 jmp +142 acc +22 jmp +188 nop +258 jmp +505 acc +27 acc +13 nop +428 acc -12 jmp +354 acc +0 acc +0 jmp +54 acc +11 acc +32 acc +17 nop -3 jmp +182 acc +24 jmp +18 acc +1 acc -4 acc +13 acc +36 jmp +118 acc +48 jmp +383 nop +101 jmp -94 jmp +181 acc +43 jmp +123 jmp +285 acc +10 acc +13 jmp +261 jmp +98 acc +0 acc -12 acc +28 nop -3 jmp -54 jmp +509 acc +34 acc +48 jmp +1 jmp +20 acc +13 acc +6 acc -5 nop +267 jmp +299 acc -13 acc -1 acc +2 jmp +349 jmp +294 acc +20 acc +8 acc +17 acc -1 jmp +242 acc -11 acc +45 acc -13 jmp +98 acc +44 jmp +61 jmp +471 jmp +344 acc +38 jmp +1 nop +490 acc +45 jmp +276 acc -8 jmp +20 acc +49 nop +170 acc +44 jmp +100 nop +236 jmp +209 jmp +45 jmp +1 nop +464 jmp +311 nop +238 nop +212 jmp +236 jmp +328 acc +20 acc +0 acc +46 acc +28 jmp +12 jmp +52 nop +300 nop +420 jmp +149 acc +38 acc +23 jmp +271 acc +21 acc +27 acc +24 jmp +371 acc +20 acc +4 acc -6 nop +24 jmp -54 acc -5 acc +47 jmp -180 jmp +384 acc +44 acc +22 nop +148 acc +32 jmp -107 acc +25 jmp +355 jmp +1 acc +14 acc +11 acc +36 jmp +15 nop +281 acc +48 acc +23 acc +23 jmp +35 jmp +82 acc +19 acc +30 jmp +319 acc +30 acc +41 nop -176 jmp +1 jmp +79 acc +29 acc +41 acc +32 jmp -199 acc -15 jmp +402 nop +91 jmp -156 acc +16 acc +26 acc +8 jmp +282 acc +12 acc +38 acc +37 acc +13 jmp -115 acc -12 acc -1 acc +44 jmp +347 jmp -133 nop +240 acc +27 jmp +321 acc +16 acc -9 jmp +1 jmp +348 jmp +166 acc -7 acc +7 jmp -238 acc +26 acc -5 acc -17 acc +30 jmp -16 acc +34 acc +0 jmp +66 acc +26 acc -7 acc +49 jmp +18 jmp -80 nop -131 jmp +59 acc -18 jmp +1 acc -6 acc +15 jmp -174 acc +50 acc +21 acc +10 jmp -185 acc +49 jmp +66 acc +42 acc +21 jmp +63 acc +38 acc +47 acc +2 jmp +342 acc +19 jmp -224 acc +0 jmp +356 acc +46 acc -17 jmp +82 nop +85 jmp +1 nop +108 jmp -255 jmp -218 acc +43 acc +22 jmp +227 acc +29 acc +25 jmp +155 acc +38 jmp +298 nop -74 acc +23 acc -13 jmp -77 acc -12 acc +22 acc +30 acc -10 jmp +225 acc +48 jmp +190 acc +24 jmp +1 acc +42 nop -10 jmp +226 acc +0 acc +40 acc +48 jmp -311 acc -6 jmp -168 jmp -70 jmp +1 acc -1 nop -210 jmp +186 acc +28 acc +15 jmp -191 jmp -158 nop +23 jmp +263 acc +7 acc +46 jmp -121 acc +37 jmp -272 jmp +1 acc +27 acc +23 acc +0 jmp -233 acc +2 acc -2 acc +34 jmp -75 acc +12 acc +39 jmp -196 nop -30 acc +42 acc +45 jmp -318 acc +15 acc +2 jmp +1 jmp -27 acc +14 jmp +1 acc +41 jmp -310 jmp -15 acc +43 acc -5 jmp -130 acc +44 jmp +85 acc -2 acc +19 jmp -164 jmp +26 nop -39 jmp +238 jmp -227 jmp +1 jmp -46 acc -1 jmp -305 acc +43 acc -4 acc -2 acc +30 jmp +251 jmp +1 acc -6 acc +47 nop +94 jmp -337 nop +80 acc +9 jmp -139 acc +17 acc +20 acc +0 acc +22 jmp -24 acc -19 acc +4 acc +19 jmp -21 acc +2 nop -337 acc -12 jmp -331 acc +21 jmp +46 acc +44 jmp -293 acc +30 acc +4 jmp -124 nop -101 acc -9 jmp +12 acc +0 acc +16 acc +16 acc -5 jmp -121 nop -267 jmp -110 acc +32 acc -11 jmp -283 jmp -95 acc +36 acc +24 nop -222 jmp -236 acc +0 acc +0 acc +0 acc +32 jmp +205 nop -176 acc -5 acc -5 
nop -156 jmp +68 nop -367 acc -2 acc +9 acc +42 jmp -251 jmp +1 nop -409 acc -18 acc +30 jmp -372 acc -15 jmp +155 nop -353 acc +26 acc +28 jmp -434 acc +48 nop +33 acc +12 nop -303 jmp +21 acc +36 acc +40 acc +21 nop -101 jmp -421 acc +32 acc -10 jmp -254 acc -18 jmp -159 acc +3 nop -93 acc +13 nop -417 jmp -334 acc +36 nop -305 acc +30 jmp +102 jmp -160 acc +7 jmp +77 nop -345 jmp +65 acc +16 acc +42 jmp -450 acc -13 jmp +106 acc +11 acc +14 acc +37 acc +11 jmp -421 acc +2 acc +16 acc +29 acc +8 jmp -201 acc +48 jmp -112 acc -17 jmp +1 nop -460 nop +129 jmp -186 acc -12 jmp -340 jmp +1 acc +7 jmp -276 acc +49 acc +29 acc +1 acc +43 jmp -360 acc +24 nop -413 nop -378 jmp -68 nop +74 jmp +104 acc +38 acc +36 acc +3 jmp -117 acc -11 nop -153 acc -13 nop -125 jmp -126 jmp +79 acc -9 acc +39 jmp -373 acc +40 acc +46 nop -436 acc +38 jmp -347 acc -18 acc -4 acc -4 jmp -190 acc +36 acc +21 nop -482 jmp -286 acc -15 acc +10 acc +5 acc +17 jmp -509 acc +41 acc +0 acc -6 acc -2 jmp -344 acc +2 acc -8 acc +28 jmp -190 acc +5 acc -17 acc +0 nop -545 jmp -234 jmp -286 acc +32 jmp -21 nop +1 nop -1 acc +1 nop -465 jmp +68 acc +22 acc +46 nop +45 acc -3 jmp -309 acc +45 jmp +1 nop -377 jmp -261 acc +18 acc +17 acc +5 nop -336 jmp -327 nop -397 jmp -492 acc -11 acc +21 jmp -35 jmp -169 jmp -403 acc +40 acc -3 acc -3 acc -11 jmp -357 acc +11 jmp +1 acc -13 jmp -467 acc +3 jmp -198 nop -211 acc +0 jmp -481 acc -2 nop +27 acc +28 acc +21 jmp -247 acc -14 acc -12 acc +39 acc +4 jmp -61 jmp -274 jmp -299 nop -538 jmp -437 jmp -540 acc +38 acc +14 acc +16 jmp -572 acc +8 acc +21 acc +34 jmp +1 acc +3 jmp -488 acc -19 nop -375 jmp -126 acc +7 acc +46 jmp -308 jmp -52 acc +14 acc +23 acc -3 nop -375 jmp +1''' lines = input.split('\n') # + # part 1 visited = [False] * len(lines) lineno = 0 acc = 0 while lineno < len(lines): if visited[lineno]: break visited[lineno] = True inst = lines[lineno].split(' ') op = inst[0] arg = int(inst[1]) if op == 'nop': lineno += 1 elif op == 'acc': acc += arg lineno += 1 elif op == 'jmp': lineno += arg print(acc) # + # part 2 def run(instructions, change): visited = [False] * len(lines) lineno = 0 acc = 0 exits = True while lineno < len(lines): if visited[lineno]: exits = False break visited[lineno] = True inst = lines[lineno].split(' ') op = inst[0] arg = int(inst[1]) if change == lineno: if op == 'nop': op = 'jmp' elif op == 'jmp': op = 'nop' if op == 'nop': lineno += 1 elif op == 'acc': acc += arg lineno += 1 elif op == 'jmp': lineno += arg return exits, acc for i in range(len(lines)): if 'acc' in lines[i]: continue else: exits, acc = run(lines, i) if exits: print(i, lines[i], acc) break # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [anaconda3] # language: python # name: Python [anaconda3] # --- from textgenrnn import textgenrnn # Also requires tensorflow backend to be installed textgen = textgenrnn() textgen.train_from_file('../data/post_titles.txt', new_model=True, max_length=4, num_epochs=40, dropout=.2) # # Text Generation textgen.generate(5, temperature=.25) textgen.generate(5, temperature=.5) textgen.generate(5, temperature=.75) textgen.generate(5, temperature=1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Try what you've learned so far # Now we have some time 
fo you to try out Python basics that you've just learned. # ## 1. Operating and testing scalar types int and float # Pizza party! # # * Given an number of guests and pizza fraction required per guest # * Calculate how many pizzas to order # * Calculate actual pizza fraction per person # + from math import ceil # ceil() rounds up to nearest integer num_guests = 3 per_person = 1/3 #num_pizzas = #print('Order',num_pizzas,'pizzas') #print('Actual pizzas per person', num_pizzas/(num_guests+1)) # - # Test Pythagoras Theorem for triangles with sides a, b and c # # > In a right angled triangle: # > the square of the hypotenuse is equal to # > the sum of the squares of the other two sides # + a = 3 b = 4 c = 5 # print('Was Pythagoras right?:', ) # - # Now try your solution with the values of a, b and c in the next cell. You might need a different solution as hinted below. # + from math import isclose a = 2 b = 2 c = 8**0.5 # this is the same as # c = math.sqrt(8) # print('Was Pythagoras right?', ) # - # ## 2. Explore Booleans # Booleans can also be constructed using the ``bool()`` object constructor: values of any other type can be converted to Boolean via predictable rules. # For example, any numeric type is False if equal to zero, and True otherwise: bool(1) bool(0) # The Boolean conversion of ``None`` is always False: bool(None) # For strings, ``bool(s)`` is False for empty strings and True otherwise: bool("") bool("abc") # For sequences, which we'll see in the next section, the Boolean representation is False for empty sequences and True for any other sequences bool([1, 2, 3]) bool([]) # ## 3. Explore Strings # * Copy a sentence from today's news headline # * Return the number of words # * Replace spaces by smilyes # your code # ## 4. Explore Lists # Store at least 3 different types of objects in a list. # + # mylst = [ ] # - # Now, # * append one element # * modify one element # * create a new list with the first 2 elements of the previous list # * create a new list that is the reverse order of the previous list # # ### Sequence comprehensions # # * Starting by creating a list of Olympic Games years starting in 1948 # * Create a list with the predicted Football world Cup years, which is held 2 years after each Olympic Games. # + olympics = list( range(1948,2020,4 ) ) print( 'Olympics', olympics) #worldcup = # - # Dictionary comprehension # # * Use the dictionary below to create a list of dates that are in the XIX century (>1800). # # Remember to use dictionary method `.values()` to access the years. # + Battles= {'Austerlitz':1805,'Alkmaar':1799,'Arhem':1813, 'Arcole':1796,'Jena':1806} # your code below # - # * Use the same dictionary to create a list of Battle names that are in the XIX century (>1800). # # Other dictionary methods are `.items()` and `.keys()` # # ## 5. Dictionary attributes # Remember dictionaries dict(a='123', say='hellozles', other_key=['wow', 'a', 'list', '!'], and_another_key=3.14) # Store the following information in a dictionary: # pet's name, age, colour and weight # + # mypet = {'name': } # - # Modify one of the values and add a new attribute, species # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Homework 9.2: MLE of microtubule catastrophe data (40 pts) # # [Dataset download](https://s3.amazonaws.com/bebi103.caltech.edu/data/gardner_time_to_catastrophe_dic_tidy.csv) # #
    # Refresh yourself about the microtubule catastrophe data we explored in previous homeworks. We will again work with this data set here. # # **a)** In their [paper](http://dx.doi.org/10.1016/j.cell.2011.10.037), , and coworkers modeled microtubule catastrophe times as Gamma distributed. Perform a maximum likelihood estimate for the parameters of the Gamma distribution. Because you showed in a previous homework that there is little difference between labeled and unlabeled tubulin, you only need to work this out for the labeled tubulin now and in part (b). Be sure to include confidence intervals or a confidence region for your MLE and discuss the method you used to get the confidence intervals or confidence region. # # + import numpy as np import pandas as pd import os import scipy import scipy.optimize import scipy.stats as st import tqdm import math rg = np.random.default_rng() data_path = "../data" fname = os.path.join(data_path, "gardner_time_to_catastrophe_dic_tidy.csv") df = pd.read_csv(fname) df # - # We made a separate dataframe for the data labeled "True", and obtained the time stamps of those labeled "True". df_true = df[df["labeled"] == True] true_time = df_true["time to catastrophe (s)"].values # Now, we defined the bootstrap to return us alpha and beta. def bootstrap_param(data, size=1): """Parametric bootstrap replicates of parameters of Normal distribution.""" bs_mean = np.empty(size) bs_sd = np.empty(size) bs_alpha = np.empty(size) bs_beta = np.empty(size) for i in range(size): bs_sample = np.random.choice(data, size=len(data)) bs_mean[i] = np.mean(bs_sample) bs_sd[i] = np.std(bs_sample) # Since the mean of the gamma distribution is alpha/beta, and variance is alpha/beta^2, mean/var = beta. bs_beta[i] = bs_mean[i]/(bs_sd[i]**2) # With the beta value, we solve for alpha by just saying beta * mean = alpha. bs_alpha[i] = bs_beta[i] * bs_mean[i] return bs_alpha, bs_beta # We run the bootstrap for our "True" time stamps. # + bs_alpha_true, bs_beta_true = bootstrap_param( true_time, size=100000 ) bs_alpha_true # - # We can obtain the alpha and beta values as an expression of the mean and variance. # + mean_true = np.mean(true_time) var_true = np.var(true_time) # Since the mean of the gamma distribution is alpha/beta, and variance is alpha/beta^2, mean/var = beta. beta_true = mean_true / var_true # With the beta value, we solve for alpha by just saying beta * mean = alpha. alpha_true = beta_true * mean_true print(alpha_true, beta_true) # - print('α: {:.4f} | 95% conf int α: {}'.format(np.mean(bs_alpha_true),str(np.percentile(bs_alpha_true, [2.5, 97.5])))) print('β2: {:.4f} | 95% conf int β2: {}'.format(np.mean(bs_beta_true),str(np.percentile(bs_beta_true, [2.5, 97.5])))) # As the values of the parameters alpha and beta do not overlap, and fall inside the 95% confidence interval for the labeled microtubules, we can reasonably assume that the MLE value we have obtained for the Gamma distribution is reasonable. # **b)** Obtain a maximum likelihood estimate for the parameters $\beta_1$ and $\beta_2$ from the model you derived in homework 5.2. As a reminder, you derived that the PDF for microtubule catastrophe times is # # $$\begin{align} # f(t;\beta_1, \beta_2) = \frac{\beta_1\beta_2}{\beta_2 - \beta_1}\left(\mathrm{e}^{-\beta_1 t} - \mathrm{e}^{-\beta_2 t}\right). # \end{align}$$ # # Again, include confidence intervals. **Be careful**; this is a *very* tricky calculation. It is possible to analytically compute the MLE. 
If you choose to do it numerically, you need to think about what happens when $\beta_1 \approx \beta_2$. You may also need to think about how you will handle the [log of sums of exponentials](https://en.wikipedia.org/wiki/LogSumExp). # $$\begin{align}\tag{1} # f(t;\beta_1, \beta_2) = \frac{\beta_1 \beta_2}{\beta_2 - \beta_1}\left(\mathrm{e}^{-\beta_1 t} - \mathrm{e}^{-\beta_2 t}\right) # \end{align}$$ # # $$\begin{align}\tag{2} # L(t;\beta_1, \beta_2)= \ln{f(t;\beta_1, \beta_2)} = \ln{\left(\beta_1\beta_2\right)} - \ln{\left(\beta_2-\beta_1\right)}+\ln{\left(\mathrm{e}^{-\beta_1 t} - \mathrm{e}^{-\beta_2 t}\right)} # \end{align}$$ # # Our third term is the log of sum of exponentials, and we can change this term to: # # $$\begin{align}\tag{3} # y = \ln{\left(\mathrm{e}^{-\beta_1 t} - \mathrm{e}^{-\beta_2 t}\right)}\\ # \Leftrightarrow \mathrm{e}^{y} = \sum_{i=1}^{n}\mathrm{e}^{x_{i}}\\ # \Leftrightarrow \mathrm{e}^{\beta_1 t}\mathrm{e}^{y} = \mathrm{e}^{-\beta_1 t}\sum_{i=1}^{n}\mathrm{e}^{x_{i}}\\ # \Leftrightarrow \mathrm{e}^{y + \beta_1 t} = \sum_{i=1}^{n}\mathrm{e}^{x_{i} + \beta_1 t} \\ # \Leftrightarrow y + \beta_1 t = \ln{\sum_{i=1}^{n}\mathrm{e}^{x_{i} + \beta_1 t}}\\ # \Leftrightarrow y = -\beta_1 t + \ln{\sum_{i=1}^{n}\mathrm{e}^{x_{i} + \beta_1 t}}\\ # \Leftrightarrow y = -\beta_1 t + \ln{\mathrm{e}^{\left(\beta_1-\beta_2\right)t}} # \end{align}$$ # # As such, our initial log equation will become as follows: # $$\begin{align}\tag{4} # \ln{f(t;\beta_1, \beta_2)} = \ln{\left(\beta_1\beta_2\right)} - \ln{\left(\beta_2-\beta_1\right)}+\ln{\mathrm{e}^{\left(\beta_1-\beta_2\right)t}}\\ # = \ln{\beta_1} + \ln{\beta_2} - \ln{(\beta_2-\beta_1)} -\beta_1 t - ln{\mathrm{e}^{\left(\beta_1-\beta_2\right)t}} \\ # = \ln{\beta_1} + \ln{\beta_2} - \ln{(\beta_2-\beta_1)} -\beta_1 t - \left(\beta_1-\beta_2\right)t # \end{align}$$ # # Taking the partial derivatives of each terms gives: # $$\begin{align}\tag{5} # \frac{\partial L}{\partial \beta_1} = \frac{1}{\beta_1} + 0 - \left(\frac{1}{\beta_2-\beta_1}\left(0-1\right)\right) - t - t + 0 \\ # = \frac{1}{\beta_1} + \frac{1}{\beta_2-\beta_1} - 2t # \end{align}$$ # $$\begin{align}\tag{6} # \frac{\partial L}{\partial \beta_2} = 0 + \frac{1}{\beta_2}-\left(\frac{1}{\beta_2-\beta_1}\left(1-0\right)\right)-0-0+t\\ # = \frac{1}{\beta_2} - \frac{1}{\beta_2-\beta_1} + t # \end{align}$$ # $$\begin{align}\tag{7} # \frac{\partial L}{\partial t} = 0 + 0 - 0 -\beta_1 -\beta_1 + \beta_2\\ # = \beta_2 - 2\beta_1\\ # \end{align}$$ # # We can evaluate the turning point by setting $\frac{\partial L}{\partial t} = 0$, which gives the expression: # $$\begin{align}\tag{8} # \beta_1 = \frac{1}{2}\beta_2 \Leftrightarrow 2\beta_1 = \beta_2 # \end{align}$$ # # Whereas we can rearrange our initial equation to express $t$ in terms of $\beta_1$ and $\beta_2$: # $$\begin{align}\tag{9} # t = \frac{\ln{\beta_1} + \ln{\beta_2} - \ln{\left(\beta_2-\beta_1\right)}}{2\beta_1-\beta_2} # \end{align}$$ # # As such, the turning point are points where-in: # $$\begin{align}\tag{10} # \beta_1 = \frac{1}{2} \wedge \beta_2 = 1 # \end{align}$$ # $$\begin{align}\tag{11} # 2\beta_1 - \beta_2 \neq 0 \wedge t = \frac{\ln{\beta_1} + \ln{\beta_2} - \ln{\left(\beta_2-\beta_1\right)}}{2\beta_1-\beta_2} # \end{align}$$ # # Using a numerical approach, we defined the log likelihood function using the gamma distribution, as well as the corresponding MLE. 
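# One detail worth flagging before the implementation: the likelihood contains
# $\ln\left(\mathrm{e}^{-\beta_1 t} - \mathrm{e}^{-\beta_2 t}\right)$, and evaluating the two
# exponentials separately underflows for large $t$. Factoring out $\mathrm{e}^{-\beta_1 t}$
# gives the equivalent, numerically safer form
# $-\beta_1 t + \ln\!\left(1 - \mathrm{e}^{-(\beta_2-\beta_1)t}\right)$, valid for
# $\beta_2 > \beta_1$ and $t > 0$. A minimal sketch of that term, offered as one way to harden
# the function defined next rather than as part of the original solution:
# +
def log_diff_exp(beta1, beta2, t):
    """log(exp(-beta1*t) - exp(-beta2*t)) without forming the raw exponentials."""
    return -beta1 * t + np.log1p(-np.exp(-(beta2 - beta1) * t))
# -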
# + def log_like_iid_gamma(params, n): beta1, beta2 = params mlesum = 0 for t in n: mlesum += np.log(beta1)+ np.log(beta2) - np.log((beta2-beta1)) + np.log(np.exp(-beta1*t) - np.exp(-beta2*t)) if (math.isnan(mlesum)): return -np.inf return mlesum def mle_iid_custom(n): """Perform maximum likelihood estimates for parameters for i.i.d.""" # with warnings.catch_warnings(): # warnings.simplefilter("ignore") res = scipy.optimize.minimize( fun=lambda params, n: -log_like_iid_gamma(params, n), x0=np.array([0.0035, 0.0045]), args=(n,), method='Powell', ) if res.success: return res.x else: raise RuntimeError('Convergence failed with message', res.message) # - # Now we run our MLE to obtain our beta values. mle_labeled = mle_iid_custom(df_true["time to catastrophe (s)"].values) print('β1: ' + str(mle_labeled[0])) print('β2: ' + str(mle_labeled[1])) # We can define a function to draw our bootstrap replicates, as such: # + def draw_bs_sample(data): """Draw a bootstrap sample from a 1D data set.""" return rg.choice(data, size=len(data)) def draw_bs_reps_mle(mle_fun, data, args=(), size=1, progress_bar=False): if progress_bar: iterator = tqdm.tqdm(range(size)) else: iterator = range(size) return np.array([mle_fun(draw_bs_sample(data), *args) for _ in iterator]) # - # We run our bootstrap to obtain the confidence intervals for our beta values. bs_reps = draw_bs_reps_mle( mle_iid_custom, df_true["time to catastrophe (s)"].values, size=100, ) # + conf_int = np.percentile(bs_reps, [2.5, 97.5], axis=0) conf_int = np.transpose(conf_int) print('β1: {:.4f} | 95% conf int β1: {}'.format(np.mean(bs_reps[:,0]),str(conf_int[0]))) print('β2: {:.4f} | 95% conf int β2: {}'.format(np.mean(bs_reps[:,1]),str(conf_int[1]))) # - # Comparing the parameter estimates from both the analytical and numerical approach gives very different results; in our analytical approach we were not able to find a single point for the MLE, whereas using bootstrapping methods we can obtain precise beta values with a small confidence interval range. # ## Attributes: # # Done altogether as a group # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:ABG-cnn_tf230] * # language: python # name: conda-env-ABG-cnn_tf230-py # --- # # Train CNN on ABG data wihtout prior XC training # - You will need to provide training and validation data to run this script. 
# - Uses env ABG-cnn_tf230 # + import tensorflow as tf # %env CUDA_DEVICE_ORDER=PCI_BUS_ID # %env CUDA_VISIBLE_DEVICES=1 import IPython.display as display from PIL import Image import numpy as np import matplotlib.pyplot as plt import os import pathlib # option to not use GPUs os.environ["CUDA_VISIBLE_DEVICES"] = "-1" AUTOTUNE = tf.data.experimental.AUTOTUNE tf.__version__ # - print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) # MODEL OUTPUT PATH # Let's checkpoint the model here when needed checkpoint_path = '../results/ABGQI-CNN/cp.ckpt' print("does this checkpoint exist?") print(checkpoint_path) os.path.isfile(checkpoint_path) # + # INPUT DATA # Set up training data tr_pth = '../data/splits/training' data_dir = pathlib.Path(tr_pth) # Set up the validation data val_pth = '../data/splits/validation' val_data_dir = pathlib.Path(val_pth) # - # PRETRAINED MODEL INPUT # MobNet pretrained on imagenet model_path = '../data/IMGNET_mobileNet_S2L_finetune/my_model/' new_model = tf.keras.models.load_model(model_path) new_model.summary() # ### Functions # + def get_label(file_path): # convert the path to a list of path components parts = tf.strings.split(file_path, os.path.sep) # The second to last is the class-directory return parts[-2] == CLASS_NAMES def decode_img(img): # convert the compressed string to a 3D uint8 tensor img = tf.image.decode_jpeg(img, channels=3) # Use `convert_image_dtype` to convert to floats in the [0,1] range. img = tf.image.convert_image_dtype(img, tf.float32) # resize the image to the desired size. return tf.image.resize(img, [IMG_HEIGHT, IMG_WIDTH]) def process_path(file_path): label = get_label(file_path) # load the raw data from the file as a string img = tf.io.read_file(file_path) img = decode_img(img) return img, label # + def prepare_for_training(ds, cache=True, shuffle_buffer_size=1000, repeat=1): # This is a small dataset, only load it once, and keep it in memory. # use `.cache(filename)` to cache preprocessing work for datasets that don't # fit in memory. if cache: if isinstance(cache, str): ds = ds.cache(cache) else: ds = ds.cache() ds = ds.shuffle(buffer_size=shuffle_buffer_size) # Repeat forever ds = ds.repeat(repeat) # repeat has arg 'count' = A tf.int64 scalar tf.Tensor, representing the number of times the dataset should be repeated. The default behavior (if count is None or -1) is for the dataset be repeated indefinitely. ds = ds.batch(BATCH_SIZE) # `prefetch` lets the dataset fetch batches in the background while the model # is training. ds = ds.prefetch(buffer_size=AUTOTUNE) return ds def show_batch(image_batch, label_batch): plt.figure(figsize=(10,10)) for n in range(25): ax = plt.subplot(5,5,n+1) plt.imshow(image_batch[n]) plt.title(CLASS_NAMES[label_batch[n]==1][0].title()) plt.axis('off') # - # ### Image analysis image_count = len(list(data_dir.glob('*/*.png'))) image_count CLASS_NAMES = np.array([item.name for item in data_dir.glob('*')]) CLASS_NAMES # The 1./255 is to convert from uint8 to float32 in range [0,1]. image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255) # training parameters BATCH_SIZE = 64 IMG_HEIGHT = 224 IMG_WIDTH = 224 STEPS_PER_EPOCH = np.ceil(image_count/BATCH_SIZE) # example of 5 pngs list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*')) for f in list_ds.take(5): print(f.numpy()) # Set `num_parallel_calls` so multiple images are loaded/processed in parallel. 
labeled_ds = list_ds.map(process_path, num_parallel_calls=AUTOTUNE) # what are the dimensions of a png and what do the labels look like for image, label in labeled_ds.take(1): print("Image shape: ", image.numpy().shape) print("Label: ", label.numpy()) # Prep dataset iterations train_ds = prepare_for_training(labeled_ds, repeat = None) image_batch, label_batch = next(iter(train_ds)) # display some pngs show_batch(image_batch.numpy(), label_batch.numpy()) # #### Validation data # - follows the same preparation as training data val_image_count = len(list(val_data_dir.glob('*/*.png'))) val_image_count val_data_dir CLASS_NAMES = np.array([item.name for item in val_data_dir.glob('*') if item.name != "LICENSE.txt"]) CLASS_NAMES # The 1./255 is to convert from uint8 to float32 in range [0,1]. image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255) STEPS_PER_EPOCH = np.ceil(val_image_count/BATCH_SIZE) # + list_ds = tf.data.Dataset.list_files(str(val_data_dir/'*/*')) for f in list_ds.take(5): print(f.numpy()) # - # Set `num_parallel_calls` so multiple images are loaded/processed in parallel. labeled_ds = list_ds.map(process_path, num_parallel_calls=AUTOTUNE) for image, label in labeled_ds.take(1): print("Image shape: ", image.numpy().shape) print("Label: ", label.numpy()) validation_ds = prepare_for_training(labeled_ds) image_batch, label_batch = next(iter(validation_ds)) show_batch(image_batch.numpy(), label_batch.numpy()) # NOW WE HAVE: print(validation_ds) print(train_ds) # ## MODEL TRAINING # Image size, here 224 is default MobileNet x, y with 3 bands (RGB) IMG_SIZE = 224 IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3) # number of target class : ABGIQ n_classes = len(CLASS_NAMES) print(n_classes) print(CLASS_NAMES) # + # Remove FC and Global pooling layers to allow for ABGQI fine tuning base_model_output = new_model.layers[-3]#.output print(base_model_output) feature_batch = base_model_output(image_batch) base_model_output.trainable = False # - # Add pooling layer global_average_layer = tf.keras.layers.GlobalAveragePooling2D() feature_batch_average = global_average_layer(feature_batch) print(feature_batch_average.shape) # Add FC/ Dense layer prediction_layer = tf.keras.layers.Dense(n_classes, activation = None) # compile the new model with S2L-mobilenet weights and new pooling + FC layers model = tf.keras.Sequential([ base_model_output, global_average_layer, prediction_layer ]) # Let's take a look to see how many layers are in the base model (i.e. S2L pre-trained mobileNet) print("Number of layers in the base model: ", len(base_model_output.layers)) # + # Fine tune FC layers base_learning_rate = 0.0001 #the initial learning rate. This will be reduced by a factor of 10 in the Finetuning stage # specify what loss function, optimizer, and accuracy metric to use model.compile(optimizer = tf.keras.optimizers.Adam(lr=base_learning_rate), metrics=tf.keras.metrics.CategoricalAccuracy(), loss=tf.keras.losses.BinaryCrossentropy(from_logits=True)) #Whether to interpret y_pred as a tensor of logit values. By default, we assume that y_pred contains probabilities (i.e., values in [0, 1]). **Note - Using from_logits=True may be more numerically stable. 
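# (Aside added as commentary, not original code: `CategoricalAccuracy` expects one-hot labels,
#  which matches the one-hot `label_batch` produced by `get_label` above; integer class ids
#  would instead call for `SparseCategoricalAccuracy`. For mutually exclusive classes the more
#  common loss pairing is `tf.keras.losses.CategoricalCrossentropy(from_logits=True)`, whereas
#  `BinaryCrossentropy` treats each output unit as an independent yes/no decision.)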
# - model.summary() # trainable params = 8,965 here len(model.trainable_variables) # pooling and dense layers # NOW USE THE validation_ds and train_ds THAT WE BUILT BEFORE loss0,accuracy0 = model.evaluate(validation_ds, steps= val_image_count // BATCH_SIZE) print("initial loss: {:.2f}".format(loss0)) print("initial accuracy: {:.2f}".format(accuracy0)) # train with our prepared data initial_epochs = 10 # short training period history = model.fit(train_ds, epochs=initial_epochs, validation_data=validation_ds, steps_per_epoch = np.ceil(image_count/BATCH_SIZE)) # + # visualize accuracy and loss acc = history.history['categorical_accuracy'] val_acc = history.history['val_categorical_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] plt.figure(figsize=(8, 8)) plt.subplot(2, 1, 1) plt.plot(acc, label='Training Accuracy') plt.plot(val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.ylabel('Accuracy') plt.ylim([min(plt.ylim()),1]) plt.title('Training and Validation Accuracy') plt.subplot(2, 1, 2) plt.plot(loss, label='Training Loss') plt.plot(val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.ylabel('Cross Entropy') plt.ylim([0,1.0]) plt.title('Training and Validation Loss') plt.xlabel('epoch') plt.show() # - # ### MODEL TRAINING: fine tuning the base model # update the ability to train th mobilenet base base_model_output.trainable = True # Let's take a look to see how many layers are in the base model print("Number of layers in the base model: ", len(base_model_output.layers)) # + # Train CNN features here # Fine-tune from this layer onwards fine_tune_at = 50 # Freeze all the layers before the `fine_tune_at` layer for layer in base_model_output.layers[:fine_tune_at]: layer.trainable = False # - # reduce learning rate by factor of ten second_tr_lr = base_learning_rate/10 # set up model but with second learning rate model.compile(optimizer = tf.keras.optimizers.Adam(lr=second_tr_lr), # reduce lr by a factor of 10! 
LR is 0.00001 here then metrics=tf.keras.metrics.CategoricalAccuracy(), loss=tf.keras.losses.BinaryCrossentropy(from_logits=True)) model.summary() len(model.trainable_variables) # more trainable parameters because we are tuning the base mobilenet now fine_tune_epochs = 10 # short training period total_epochs = initial_epochs + fine_tune_epochs # total training # Create a callback that saves the model's weights as a checkpoint # Checkpoints use less memory and speed up training - can compile model after training cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, save_weights_only=True, # checkpoints not full model save_best_only=True, # save the best model based on what's being monitored monitor='val_categorical_accuracy', verbose=1) # second full fine-tune learning history_fine = model.fit(train_ds, epochs=total_epochs, initial_epoch = history.epoch[-1], validation_data = validation_ds, steps_per_epoch = np.ceil(image_count/BATCH_SIZE), callbacks=[cp_callback]) # added this callback for checkpointing # + acc = history_fine.history['categorical_accuracy'] val_acc = history_fine.history['val_categorical_accuracy'] loss = history_fine.history['loss'] val_loss = history_fine.history['val_loss'] plt.figure(figsize=(8, 8)) plt.subplot(2, 1, 1) plt.plot(acc, label='Training Accuracy') plt.plot(val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.ylabel('Accuracy') plt.ylim([min(plt.ylim()),1]) plt.title('Training and Validation Accuracy') plt.subplot(2, 1, 2) plt.plot(loss, label='Training Loss') plt.plot(val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.ylabel('Cross Entropy') plt.ylim([0,1.0]) plt.title('Training and Validation Loss') plt.xlabel('epoch') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.7.6 64-bit (''ds-Flask'': conda)' # name: python37664bitdsflaskconda5d6acf79da9542b3aeeb7103e7ad852b # --- # + [markdown] id="RfqZ-DJyMV6w" colab_type="text" # # TFIDF KNN Approach # + id="2F9FXBToMbtd" colab_type="code" colab={} import pandas as pd import spacy from spacy.tokenizer import Tokenizer from spacy.lang.en import English from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.neighbors import NearestNeighbors import pickle # + id="3lmqxnzPMbos" colab_type="code" colab={} # Instantiate the tokenizer nlp=English() tokenizer = Tokenizer(nlp.vocab) # + id="zcHGr5LwMbmE" colab_type="code" colab={} df = pd.read_csv("https://raw.githubusercontent.com/Build-Week-Med-Cabinet-6/DS/mark-dev/data/cannabis.csv") df.head() # + id="Ai8m8CZAMbjy" colab_type="code" colab={} # drop nan or missing values df = df.dropna() df = df.drop(df.index[149]) #df = df.reset_index(drop=True) # + id="O5NOVr4KQ1wz" colab_type="code" colab={} # + id="A_tnPRrH22pO" colab_type="code" colab={} # + id="DmG3Uj7DMbhJ" colab_type="code" colab={} # Combine the Effects and Flavors in one column df['Combined'] = df['Effects'] + ',' + df['Flavor'] # + id="Wb1hUFEiQXrD" colab_type="code" colab={} # Instantiate vecorizer object - call tokenize dtm_combined_tf = TfidfVectorizer(stop_words='english') # dtm_combined (vocabulary) and get word counts # effects and flavors combined dtm_combined = dtm_combined_tf.fit_transform(df['Combined'].values.astype('U')) dtm_combined = pd.DataFrame(dtm_combined.todense(), columns=dtm_combined_tf.get_feature_names()) dtm_combined.head() # + id="smEmTZ5ORe52" colab_type="code" 
colab={} # Fit on TF-IDF Vectors nn = NearestNeighbors(n_neighbors=5, algorithm='ball_tree') nn.fit(dtm_combined) # + id="BIMinWmrRp4H" colab_type="code" colab={} # Practice passing a strain to the model with this string ideal_strain = ['Creative,Energetic,Tingly,Euphoric,Relaxed,Earthy,Sweet,Citrus'] # + id="EyPWaR-wYMoO" colab_type="code" colab={} # Query for similar strains using the test case new = dtm_combined_tf.transform(ideal_strain) results = nn.kneighbors(new.todense()) # + id="L7do34AQYMm3" colab_type="code" colab={} # Results are returned in a tuple of arrays results[1][0] # + id="I2rPgrI3lA9M" colab_type="code" colab={} type(results) # + id="aR9PpMF7YMlS" colab_type="code" colab={} # Pull the strain name from 1st value (0) of the 1st array (0) of the 2nd tuple (1) - the 0 index df['Strain'][results[1][0][0]] # + id="C4lPTojobvOa" colab_type="code" colab={} df['Combined'][results[1][0][0]] # + id="DrtjNSNubvMF" colab_type="code" colab={} df['Description'][results[1][0][0]] # + id="LFUpOCgSbvJh" colab_type="code" colab={} # + [markdown] id="34NqaZSwboUl" colab_type="text" # ### Second Value # + id="VMGgkCJGYMhK" colab_type="code" colab={} # Pull the strain name from 2nd value (1) of the 1st array (0) of the 2nd tuple (1) - the 1972 index df['Strain'][results[1][0][1]] # + id="TwTwjUzpaLPW" colab_type="code" colab={} # Pull the criteria from 2nd value (1) of the 1st array (0) of the 2nd tuple (1) - the 1972 index df['Combined'][results[1][0][1]] # + id="Qe2XQHC5YMeL" colab_type="code" colab={} # Pull the criteria from 2nd value (1) of the 1st array (0) of the 2nd tuple (1) - the 1972 index df['Description'][results[1][0][1]] # + colab_type="code" id="E7fXJNKsfcF4" colab={} # Imagine doing a detailed return of information afterwards # #output results here, rec strains criteria and description # rec_str = [strains['Strain'][results[1][0][i]] for i in range(5)] # rec_crit = [strains['Combined'][results[1][0][i]] for i in range(5)] # rec_str_desc = [strains['Description'][results[1][0][i]] for i in range(5)] # rec1 = rec_str[0] + ' * ' + rec_crit[0] + ' * ' + rec_str_desc[0] # rec2 = rec_str[1] + ' * ' + rec_crit[1] + ' * ' + rec_str_desc[1] # rec3 = rec_str[2] + ' * ' + rec_crit[2] + ' * ' + rec_str_desc[2] # rec4 = rec_str[3] + ' * ' + rec_crit[3] + ' * ' + rec_str_desc[3] # rec5 = rec_str[4] + ' * ' + rec_crit[4] + ' * ' + rec_str_desc[4] # + [markdown] id="RrsvE1pZzn4i" colab_type="text" # ### Pickling step here # + id="USIl6KZOe3oa" colab_type="code" colab={} # Pickle the dtm and tf for use in the prediction #pickle.dump(dtm_combined, open('/content/dtm_combined.pkl', 'wb')) pickle.dump(dtm_combined_tf, open('../pickles/dtm_combined_tf.pickle', 'wb')) with open('../pickles/nn.pickle','wb') as fp: pickle.dump(nn,fp) # + id="eH-wTv7pzqqD" colab_type="code" colab={} # + [markdown] id="mMtpbHkWSPUp" colab_type="text" # # Effects # + id="-74HkjnWSVMH" colab_type="code" colab={} # Instantiate vecorizer object - call tokenize dtm_effects_tf = TfidfVectorizer(stop_words='english') # dtm_effects (vocabulary) and get word counts # effects and flavors effects dtm_effects = dtm_effects_tf.fit_transform(df['Effects'].values.astype('U')) dtm_effects = pd.DataFrame(dtm_effects.todense(), columns=dtm_effects_tf.get_feature_names()) dtm_effects.head() # + id="B4W2Ne_-XdmN" colab_type="code" colab={} df.head() # + [markdown] id="CL3MD1Q9SSyo" colab_type="text" # # Flavors # + id="ymTgOC1zS-MJ" colab_type="code" colab={} # Instantiate vecorizer object - call tokenize dtm_flavors_tf = 
TfidfVectorizer(stop_words='english') # dtm_flavors (vocabulary) and get word counts # flavors dtm_flavors = dtm_flavors_tf.fit_transform(df['Flavor'].values.astype('U')) dtm_flavors = pd.DataFrame(dtm_flavors.todense(), columns=dtm_flavors_tf.get_feature_names()) dtm_flavors.head() # + [markdown] id="5nJjum1ZvqGS" colab_type="text" # # Leafly API EDA # # + id="iBsKVRVbvtpP" colab_type="code" colab={} import pandas as pd # + id="n3fY77Yw6IGx" colab_type="code" colab={} df = pd.read_csv("https://raw.githubusercontent.com/Build-Week-Med-Cabinet-6/DS/mark-dev/data/cannabis.csv") df.head() # + id="L_lPy4oM8FEd" colab_type="code" colab={} df.shape # + id="F66Q0q4t8NwN" colab_type="code" colab={} df = df.dropna() # + id="wvXZ9vvZ8Ql8" colab_type="code" colab={} df.shape # + id="kv08GL068SKl" colab_type="code" colab={} df.describe(exclude='number') # + id="BPZhHbjSkY3Z" colab_type="code" colab={} pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip # + id="joUifMN58rua" colab_type="code" colab={} import pandas_profiling df.profile_report() # + id="zBiqcySq8wYo" colab_type="code" colab={} # + [markdown] id="WWd4YokVvChK" colab_type="text" # # Kushy API # + id="XUbMm-CyVl68" colab_type="code" colab={} import pandas as pd # + id="H1ZxPJP7VoF_" colab_type="code" colab={} df = pd.read_csv("https://raw.githubusercontent.com/kushyapp/cannabis-dataset/master/Dataset/Strains/strains-kushy_api.2017-11-14.csv") # + id="VR3ANWphVtva" colab_type="code" colab={} df.head() # + id="zCh6GGddW1vJ" colab_type="code" colab={} df.shape # + id="hD3qkLmxVvI6" colab_type="code" colab={} df.isnull().values.sum() # + id="nN0N1j3bVych" colab_type="code" colab={} df.info() # + id="4qGJ2NArWvEW" colab_type="code" colab={} # These are the columns with a percentage of missing values. 
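# (added note: the next expression is the null count per column divided by the number of rows, times 100)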
df.isnull().sum()/len(df)*100 # + id="1p0MIHSjXAqU" colab_type="code" colab={} df.describe(exclude="number") # + id="dinojcUpsY50" colab_type="code" colab={} df.describe(include="number") # + id="fy95Mk_VuiKC" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + import sys import dlib import openface from skimage import io import numpy as np predictor_model = "/home/mckc/Downloads/shape_predictor_68_face_landmarks.dat" detector = dlib.get_frontal_face_detector() predictor = dlib.shape_predictor(predictor_model) # - #Reading the image data into numpy def rgb2gray(rgb): return np.dot(rgb[:,:,:], [0.299, 0.587, 0.114]) def get_landmarks(im,i): predictor_model = "/home/mckc/Downloads/shape_predictor_68_face_landmarks.dat" detector = dlib.get_frontal_face_detector() predictor = dlib.shape_predictor(predictor_model) rects = detector(im, 1) if len(rects) > 1: print 'TooManyFaces',i return None if len(rects) == 0: print 'NoFaces',i return None coords = np.array([[p.x, p.y] for p in predictor(im, rects[0]).parts()]) centroid = coords.mean(axis=0) return ((coords - centroid )).reshape(1,136) def load_data(): import pandas as pd import numpy as np from PIL import Image from termcolor import colored train = pd.read_csv('/home/mckc/image class//train.csv') test = pd.read_csv('/home/mckc/image class//test.csv') print 'the training data shape is ',train.shape print 'the test data shape is ', test.shape X_tr = np.zeros((1,136),dtype=np.uint8) Y_tr=[] for i in range(train.shape[0]): image = np.array(Image.open(train.values[i,0])) landmarks = get_landmarks(image,train.values[i,0]) if landmarks is not None: X_tr = np.vstack((X_tr,landmarks)) Y_tr = np.append(Y_tr,train.values[i,1]) if i % 50==0: print colored((float(i)/train.shape[0]*100 ,' Percentage complete'), 'green') X_tr = X_tr[1:,:] X_ts = np.zeros((1,136),dtype=np.uint8) Y_ts=[] for i in range(test.shape[0]): image = np.array(Image.open(test.values[i,0])) landmarks = get_landmarks(image,test.values[i,0]) if landmarks is not None: X_ts = np.vstack((X_ts,landmarks)) Y_ts = np.append(Y_ts,test.values[i,1]) if i % 50==0: print colored((float(i)/test.shape[0]*100 ,' Percentage complete'), 'green') X_ts = X_ts[1:,:] print 'the training file shape',X_tr.shape,Y_tr.shape print 'the testing file shape',X_ts.shape,Y_ts.shape return X_tr,X_ts,Y_tr,Y_ts X_tr,X_tst,Y_tr,Y_tst = load_data() map, Y_number = np.unique(Y_tr, return_inverse=True) Y_test_number = np.unique(Y_tst, return_inverse=True)[1] # + from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score clf = LogisticRegression(verbose=0,n_jobs=-1,multi_class='multinomial',solver='lbfgs',max_iter=500,warm_start=True) clf.fit(X_tr,Y_number) Y_logictic= clf.predict(X_tst) Y_log_vales = map[Y_logictic] print 'Accuracy of the model is ',accuracy_score(Y_tst,Y_log_vales) confusion_matrix(Y_tst,Y_log_vales) # + from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score recognizer = RandomForestClassifier(500,verbose=0,oob_score=True,n_jobs=-1,max_features=20) recognizer.fit(X_tr,Y_number) Y_rf= recognizer.predict(X_tst) Y_rf_vales = map[Y_rf] print 'Accuracy of the model is ',accuracy_score(Y_tst,Y_rf_vales) confusion_matrix(Y_tst,Y_rf_vales) # + from 
keras.models import Sequential from keras.layers import Dense, Activation,LSTM from keras import backend as K from keras.optimizers import Adam,SGD from keras.utils import np_utils from keras.callbacks import EarlyStopping early_stopping = EarlyStopping(monitor='val_loss', patience=10) from keras.layers import Merge left_branch = Sequential() left_branch.add(Dense(1000, input_dim=136,activation='relu')) right_branch = Sequential() right_branch.add(Dense(50, input_dim=136,activation='sigmoid')) merged = Merge([left_branch, right_branch], mode='concat') final_model = Sequential() final_model.add(merged) final_model.add(Dense(7,activation='softmax')) final_model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'],lr=0.0001) final_model.fit([X_tr,X_tr], Y_Keras,nb_epoch=100, batch_size=1,verbose=1, validation_split=0.2, callbacks=[early_stopping]) y_keras = map[final_model.predict_classes([X_tst,X_tst])] print '/n Accuracy of the model is ',accuracy_score(Y_tst,y_keras) confusion_matrix(Y_tst,y_keras) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Notebook to check the working of the trained model import pickle import torch import torchvision # !dir model = pickle.load(open(r"saved_model/resnetmain.pkl","rb"))#map_location=torch.device('cpu')) checkpoint = torch.load(r"saved_model/main_model.pt",map_location=torch.device('cpu')) model.load_state_dict(checkpoint['state_dict']) class_names = ["safe","unsafe"] # + from PIL import Image import torchvision.transforms as transforms standard_normalization = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) def load_input_image(img_path): image = Image.open(img_path).convert('RGB') prediction_transform = transforms.Compose([transforms.Resize(size=(224, 224)), transforms.ToTensor(), standard_normalization]) # discard the transparent, alpha channel (that's the :3) and add the batch dimension image = prediction_transform(image)[:3,:,:].unsqueeze(0) return image # - def predict_image(model, class_names, img_path): # load the image and return the predicted breed img = load_input_image(img_path) model = model.cpu() model.eval() idx = torch.argmax(model(img)) return class_names[idx] # + import matplotlib.pyplot as plt # %matplotlib inline def run_app(img_path): img = Image.open(img_path) plt.imshow(img) plt.show() prediction = predict_image(model, class_names, img_path) return prediction # - run_app("./test_images/test1.jpg") run_app("./test_images/test2.jpg") run_app("./test_images/test3.jpg") run_app("./test_images/test4.jpg") run_app("./test_images/test5.jpg") run_app("./test_images/test6.jpg") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 63654, "status": "ok", "timestamp": 1611819361068, "user": {"displayName": "horst hammer", "photoUrl": "", "userId": "16387219538505432860"}, "user_tz": -60} id="3k_02h3jjoTn" outputId="634fb43f-574f-4dfd-d967-4ee1209503ca" import os base_mrcnn = '/content/drive/MyDrive/instance_segmentation/Mask_RCNN/' # %cd /content # !pip install -r /content/drive/MyDrive/instance_segmentation/Mask_RCNN/requirements.txt # !pip install nvgpu # !pip install -U imgaug # !pip 
install tensorflow-gpu==1.15 weights_path_dir = os.path.join(base_mrcnn, 'snapshots') weights_path = os.path.join(weights_path_dir, 'mask_rcnn_coco.h5') if not os.path.isfile(weights_path): # !wget https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5 weights_path_dir = os.path.join(base_mrcnn, 'snapshots') if not os.path.exists(weights_path_dir): os.mkdir(weights_path) # !mv mask_rcnn_coco.h5 /content/drive/MyDrive/instance_segmentation/Mask_RCNN/snapshots # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 489123, "status": "ok", "timestamp": 1611819786545, "user": {"displayName": "horst hammer", "photoUrl": "", "userId": "16387219538505432860"}, "user_tz": -60} id="SPt0LVj3OSst" outputId="57aa37e5-d735-43c6-a271-e1129559cade" # !nvidia-smi # %cd /content # !wget https://rgbd.cs.princeton.edu/data/SUNRGBD.zip # !unzip SUNRGBD.zip # + [markdown] id="LC_SQXyEF9QO" # # Evaluation # + colab={"background_save": true, "base_uri": "https://localhost:8080/"} id="kOOgdeA-MGNH" # # !pip install --force-reinstall /content/drive/MyDrive/instance_segmentation/Mask_RCNN weights_path = '/content/drive/MyDrive/instance_segmentation/Mask_RCNN/logs/plain_new_eval/mask_rcnn_sun_0005.h5' # %cd /content/drive/MyDrive/instance_segmentation/Mask_RCNN # !python -m samples.sunrgbd.sun evaluate\ # --dataset '/content/SUNRGBD' # + [markdown] id="la-dV5OAF_FI" # # Training # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 5121, "status": "ok", "timestamp": 1611754443668, "user": {"displayName": "horst hammer", "photoUrl": "", "userId": "16387219538505432860"}, "user_tz": -60} id="RCa4PMAbkKTL" outputId="5487ebf0-39ed-4161-e476-eddad57b81b9" # %cd /content/drive/MyDrive/instance_segmentation/Mask_RCNN weights_path = '/content/drive/MyDrive/instance_segmentation/Mask_RCNN/logs/snapshots/mask_rcnn_sun_0005.h5' import sys # !python -m samples.sunrgbd.sun train\ # --dataset '/content/SUNRGBD'\ # --epochs 25\ # --lr 0.001\ # --augm-num 2\ # --augm-strength 7\ # --depth-mode True\ # --weights {weights_path} # + id="8cd_N6avNHp4" # + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 238018, "status": "ok", "timestamp": 1608496814294, "user": {"displayName": "horst hammer", "photoUrl": "", "userId": "16387219538505432860"}, "user_tz": -60} id="56alueRcww7F" outputId="b8d919ad-6055-4d45-c088-12b8ddf43280" import multiprocessing import os multiprocessing.cpu_count() os.name is 'nt' # + [markdown] id="HOpGH1bCbOiH" # # + id="1BnUzHYgey2c" function ConnectButton(){ console.log("Connect pushed"); document.querySelector("#top-toolbar > colab-connect-button").shadowRoot.querySelector("#connect").click() } setInterval(ConnectButton,300000); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.7 64-bit (''default'': conda)' # language: python # name: python3 # --- #Notebook used for generating graphs import math import matplotlib.pyplot as plt # + averages = [0.003333333333332078, 0.19500000000000028, 0.16333333333332808, 0.005833333333334245, 0.3899999999999976, 0.5716666666666684, 0.17833333333333368, 0.43249999999999716, 0.3108333333333319, 0.27833333333333304] vars = [2.2222222222205486e-05, 0.017458333333333007, 0.024755555555555992, 2.4305555555563157e-05, 0.05483333333333348, 0.08706388888888843, 0.04086388888888768, 0.10150208333333195, 0.06345763888888968, 0.02213055555555504] max = 
[0.010000000000005116, 0.46000000000000796, 0.519999999999996, 0.010000000000005116, 0.7199999999999989, 1.0600000000000023, 0.789999999999992, 1.1399999999999935, 0.9200000000000159, 0.5999999999999943] min = [0.0, 0.01999999999999602, 0.0, 0.0, 0.04000000000000625, 0.12999999999999545, 0.020000000000010232, 0.020000000000010232, 0.010000000000005116, 0.020000000000010232] letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] print(len(averages), len(vars), len(max), len(min), len(letters)) # + fig, graph = plt.subplots(2, 2) graph[0, 0].plot(letters, averages) graph[0, 0].set_title("Averages of differences") graph[0, 1].plot(letters, vars) graph[0, 1].set_title("Variances of differences") graph[1, 0].plot(letters, max) graph[1, 0].set_title("Max of differences") graph[1, 1].plot(letters, min) graph[1, 1].set_title("Min of differences") fig.tight_layout() plt.show() # + averages = [0.0, 0.08666666666665417, 0.29999999999998295, 0.0, 0.9166666666666572, 0.6966666666666773, 0.3266666666666633, 0.25, 0.16666666666666666, 0.6666666666666666] vars = [0.0, 0.003288888888889601, 0.04246666666666767, 0.0, 0.17628888888888902, 0.5979555555555666, 0.012688888888890503, 0.02726666666667069, 0.020355555555552508, 0.18135555555556246] max = [0.0, 0.1599999999999966, 0.5099999999999909, 0.0, 1.509999999999991, 1.7900000000000205, 0.4300000000000068, 0.4000000000000057, 0.3599999999999852, 1.210000000000008] min = [0.0, 0.01999999999998181, 0.01999999999998181, 0.0, 0.5999999999999943, 0.12999999999999545, 0.1699999999999875, 0.01999999999998181, 0.020000000000010232, 0.1699999999999875] letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] print(len(averages), len(vars), len(max), len(min), len(letters)) # + fig, graph = plt.subplots(2, 2) graph[0, 0].plot(letters, averages) graph[0, 0].set_title("Averages of differences") graph[0, 1].plot(letters, vars) graph[0, 1].set_title("Variances of differences") graph[1, 0].plot(letters, max) graph[1, 0].set_title("Max of differences") graph[1, 1].plot(letters, min) graph[1, 1].set_title("Min of differences") fig.tight_layout() plt.show() # + averages = [0.0018704884164723991, 0.19737247107815808, 0.41033886311660084, 0.00432467384814896, 0.8768884241660528, 1.0527926659156448, 0.21003097720397335, 1.0347719888819311, 0.5474882437182174, 0.6398471302463649] vars = [2.1675815842568814e-05, 0.19879036905256137, 0.818143823601231, 0.00012778351521944864, 3.6445310345021698, 6.00593081604332, 0.22940267254883817, 5.752970539766577, 2.0777819481613156, 2.2746220211730397] max = [0.015333517128397034, 1.622511049087791, 3.168925019621952, 0.03549198759395722, 6.458266989754634, 8.279940220668493, 1.519187910127357, 8.339745861654308, 4.735697694834073, 4.776618644931602] min = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] print(len(averages), len(vars), len(max), len(min), len(letters)) # + fig, graph = plt.subplots(2, 2) graph[0, 0].plot(letters, averages) graph[0, 0].set_title("Averages of differences") graph[0, 1].plot(letters, vars) graph[0, 1].set_title("Variances of differences") graph[1, 0].plot(letters, max) graph[1, 0].set_title("Max of differences") graph[1, 1].plot(letters, min) graph[1, 1].set_title("Min of differences") fig.tight_layout() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + 
[markdown] papermill={"duration": 0.017145, "end_time": "2021-03-11T21:01:41.723736", "exception": false, "start_time": "2021-03-11T21:01:41.706591", "status": "completed"} tags=[] # # Import dependencies # + papermill={"duration": 6.576545, "end_time": "2021-03-11T21:01:48.316397", "exception": false, "start_time": "2021-03-11T21:01:41.739852", "status": "completed"} tags=[] import numpy as np import pandas as pd import pandas_profiling import time from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score from sklearn.preprocessing import LabelEncoder, OrdinalEncoder from sklearn.compose import ColumnTransformer from sklearn.metrics import roc_auc_score, average_precision_score, log_loss from hyperopt import fmin, hp, tpe, Trials, space_eval from hyperopt.pyll import scope as ho_scope from hyperopt.pyll.stochastic import sample as ho_sample from functools import partial from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier from xgboost import XGBClassifier from lightgbm import LGBMClassifier from catboost import CatBoostClassifier from imblearn.pipeline import Pipeline #from imblearn.over_sampling import ADASYN from imblearn.under_sampling import OneSidedSelection, NeighbourhoodCleaningRule, TomekLinks, RandomUnderSampler import category_encoders as ce # + papermill={"duration": 0.025132, "end_time": "2021-03-11T21:01:48.359859", "exception": false, "start_time": "2021-03-11T21:01:48.334727", "status": "completed"} tags=[] def evalue_model(model, y_test, X_test, model_name): yhat_prob = [x[1] for x in model.predict_proba(X_test)] results = {'model': model_name, 'auc': roc_auc_score(y_true = y_test, y_score = yhat_prob), 'aucpr': average_precision_score(y_true = y_test, y_score = yhat_prob), 'logloss': log_loss(y_test, yhat_prob)} return results # + papermill={"duration": 3.599286, "end_time": "2021-03-11T21:01:51.976202", "exception": false, "start_time": "2021-03-11T21:01:48.376916", "status": "completed"} tags=[] submission = pd.read_csv('../input/tabular-playground-series-mar-2021/sample_submission.csv') new_data = pd.read_csv('../input/tabular-playground-series-mar-2021/test.csv') df = pd.read_csv('../input/tabular-playground-series-mar-2021/train.csv') # + papermill={"duration": 0.024796, "end_time": "2021-03-11T21:01:52.019798", "exception": false, "start_time": "2021-03-11T21:01:51.995002", "status": "completed"} tags=[] #profile = df.profile_report(title="Profile train.csv", explorative=True) #profile.to_file(output_file="profile_report.html") # + papermill={"duration": 0.116973, "end_time": "2021-03-11T21:01:52.155339", "exception": false, "start_time": "2021-03-11T21:01:52.038366", "status": "completed"} tags=[] df.drop(columns = "id", inplace = True) new_data.drop(columns = "id", inplace = True) # + papermill={"duration": 0.77179, "end_time": "2021-03-11T21:01:52.944501", "exception": false, "start_time": "2021-03-11T21:01:52.172711", "status": "completed"} tags=[] for col in df.columns[df.dtypes == "object"].tolist(): df[col] = df[col].astype('category') # + [markdown] papermill={"duration": 0.016985, "end_time": "2021-03-11T21:01:52.979311", "exception": false, "start_time": "2021-03-11T21:01:52.962326", "status": "completed"} tags=[] # # Prepare Data # + papermill={"duration": 0.261591, "end_time": "2021-03-11T21:01:53.258324", "exception": false, "start_time": "2021-03-11T21:01:52.996733", "status": "completed"} tags=[] X = df.drop('target', axis=1) y = df['target'] X_train, X_test, y_train, y_test = 
train_test_split(X, y, test_size=0.2, random_state=42, stratify=y) # + papermill={"duration": 0.026569, "end_time": "2021-03-11T21:01:53.303125", "exception": false, "start_time": "2021-03-11T21:01:53.276556", "status": "completed"} tags=[] high_cardinality = ["cat5", "cat7", "cat8", "cat10"] categorical_cols = X.columns[X.dtypes == "category"].tolist() categorical_cols = list(set(categorical_cols) - set(high_cardinality)) cat_columns_position = [X.columns.tolist().index(x) for x in categorical_cols + high_cardinality] # + [markdown] papermill={"duration": 0.017072, "end_time": "2021-03-11T21:01:53.337280", "exception": false, "start_time": "2021-03-11T21:01:53.320208", "status": "completed"} tags=[] # # Baseline Models # + papermill={"duration": 0.025974, "end_time": "2021-03-11T21:01:53.381148", "exception": false, "start_time": "2021-03-11T21:01:53.355174", "status": "completed"} tags=[] categorical_transformer = Pipeline(steps=[ # ('OrdinalEncoder', ce.OrdinalEncoder(cols=categorical_cols + high_cardinality)) ('OrdinalEncoder', OrdinalEncoder(handle_unknown = "use_encoded_value", unknown_value = 999)) #('TargetEncoder', ce.TargetEncoder(cols=high_cardinality)) ]) preprocessor = ColumnTransformer( transformers=[ ('cat', categorical_transformer, categorical_cols + high_cardinality) ]) undersample = RandomUnderSampler(sampling_strategy='majority') # + papermill={"duration": 0.03416, "end_time": "2021-03-11T21:01:53.432523", "exception": false, "start_time": "2021-03-11T21:01:53.398363", "status": "completed"} tags=[] # Desicion Tree ---------------------------------------- model_tree = DecisionTreeClassifier(random_state = 42) pipe_tree = Pipeline([('preprocessor', preprocessor), ('clf', model_tree )]) pipe_tree_un = Pipeline([('preprocessor', preprocessor), ('undersample', undersample), ('clf', model_tree )]) # Random Forest ---------------------------------------- model_rf = RandomForestClassifier(random_state = 42, n_jobs=-1) pipe_rf = Pipeline([('preprocessor', preprocessor), ('clf', model_rf )]) pipe_rf_un = Pipeline([('preprocessor', preprocessor), ('undersample', undersample), ('clf', model_rf )]) # LightGBM --------------------------------------------- model_lgbm = LGBMClassifier(random_state = 42, device = "gpu", n_estimators = 500) pipe_lgbm = Pipeline([('preprocessor', preprocessor), ('clf', model_lgbm )]) pipe_lgbm_un = Pipeline([('preprocessor', preprocessor), ('undersample', undersample), ('clf', model_lgbm )]) # CatBoost ---------------------------------------------- model_cat = CatBoostClassifier(random_state = 42, verbose = 0, task_type="GPU", border_count = 32, n_estimators = 500, early_stopping_rounds = 100) pipe_cat = Pipeline([('clf', model_cat )]) pipe_cat_un = Pipeline([('undersample', undersample), ('clf', model_cat )]) # CatBoost (fast mode)---------------------------------- model_cat_fast = CatBoostClassifier(random_seed=42, verbose = 0, task_type="GPU", border_count = 32, # turn on gpu n_estimators = 500, boosting_type = "Plain", bootstrap_type = "Bernoulli", max_ctr_complexity = 1, subsample = 0.6, #colsample_bylevel = 0.5, # ONLY CPU #one_hot_max_size = 4, #leaf_estimation_iterations = 5, #[1,10] early_stopping_rounds = 100) pipe_cat_fast = Pipeline([('clf', model_cat_fast )]) pipe_cat_fast_un = Pipeline([('undersample', undersample), ('clf', model_cat_fast )]) # + papermill={"duration": 0.024358, "end_time": "2021-03-11T21:01:53.474052", "exception": false, "start_time": "2021-03-11T21:01:53.449694", "status": "completed"} tags=[] classifiers = { 
"DecisionTreeClassifier": pipe_tree, "DecisionTreeClassifier_un": pipe_tree_un, "RandomForestClassifier": pipe_rf, "RandomForestClassifier_un": pipe_rf_un, "LGBMClassifier": pipe_lgbm, "LGBMClassifier_un": pipe_lgbm_un, "CatBoostClassifier": pipe_cat, "CatBoostClassifier_un": pipe_cat_un, "CatBoostClassifier_fast": pipe_cat_fast, "CatBoostClassifier_fast_un": pipe_cat_fast_un } # + papermill={"duration": 0.026225, "end_time": "2021-03-11T21:01:53.517788", "exception": false, "start_time": "2021-03-11T21:01:53.491563", "status": "completed"} tags=[] results = pd.DataFrame(columns= ["model", "auc", "aucpr", "logloss", "time"]) # + papermill={"duration": 214.148032, "end_time": "2021-03-11T21:05:27.683262", "exception": false, "start_time": "2021-03-11T21:01:53.535230", "status": "completed"} tags=[] # %%time import time for key, classifier in classifiers.items(): print("Running", key) start_time = time.time() if key.find("LGBMClassifier") != -1: classifiers[key] = classifier.fit(X_train, y_train, clf__categorical_feature = cat_columns_position) elif key.find("CatBoostClassifier") != -1: classifiers[key] = classifier.fit(X_train, y_train, clf__cat_features = cat_columns_position) else: classifiers[key] = classifier.fit(X_train, y_train) end_time = (time.time() - start_time) results = results.append( pd.concat([ pd.DataFrame(evalue_model(classifiers[key], y_test, X_test, key), index=[0]), pd.DataFrame({"time" : end_time}, index = [0]) ], axis=1) ) # + papermill={"duration": 0.05476, "end_time": "2021-03-11T21:05:27.769837", "exception": false, "start_time": "2021-03-11T21:05:27.715077", "status": "completed"} tags=[] results.sort_values(by=['auc'], ascending = False) # + papermill={"duration": 12.311426, "end_time": "2021-03-11T21:05:40.113993", "exception": false, "start_time": "2021-03-11T21:05:27.802567", "status": "completed"} tags=[] from sklearn import metrics import numpy as np import matplotlib.pyplot as plt for key, classifier in classifiers.items(): yhat_prob = [x[1] for x in classifier.predict_proba(X_test)] fpr, tpr, thresh = metrics.roc_curve(y_true = y_test, y_score = yhat_prob) auc = metrics.roc_auc_score(y_test, yhat_prob) plt.plot(fpr,tpr,label=key+", auc="+str(round(auc, 4))) plt.legend(loc=0) # + [markdown] papermill={"duration": 0.023275, "end_time": "2021-03-11T21:05:40.161217", "exception": false, "start_time": "2021-03-11T21:05:40.137942", "status": "completed"} tags=[] # # Bayes Tunning # + papermill={"duration": 0.029156, "end_time": "2021-03-11T21:05:40.213892", "exception": false, "start_time": "2021-03-11T21:05:40.184736", "status": "completed"} tags=[] # eval dataset to lgbm #X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=42, stratify=y_train) # + papermill={"duration": 0.035915, "end_time": "2021-03-11T21:05:40.273267", "exception": false, "start_time": "2021-03-11T21:05:40.237352", "status": "completed"} tags=[] hp_space = { 'undersample': hp.choice(label = 'undersample', options = [True, False]), 'clf': { #'n_estimators': ho_scope.int(hp.quniform('n_estimators',200,600,100)), 'max_depth': ho_scope.int(hp.quniform('max_depth',1,9,1)), #'leaf_estimation_iterations': ho_scope.int(hp.quniform('leaf_estimation_iterations',1,5,1)), 'subsample': hp.uniform('colsample_bytree',0.2,0.7), #'colsample_bylevel': hp.uniform('colsample_bylevel',0.2,0.6), 'reg_lambda': hp.loguniform('reg_lambda',np.log(1e-4),np.log(7)) # catboost # 'border_count': hp.choice(label = 'border_count', options = [32, 64 ,128, 254]) # turn off on gpu 
#'ctr_target_border_count': hp.choice(label = 'ctr_target_border_count', options = [1,5,10,20,50,100,200, 250]) } } ho_sample(hp_space) # + papermill={"duration": 0.030242, "end_time": "2021-03-11T21:05:40.327095", "exception": false, "start_time": "2021-03-11T21:05:40.296853", "status": "completed"} tags=[] iteracoes = Trials() # + papermill={"duration": 0.032399, "end_time": "2021-03-11T21:05:40.383866", "exception": false, "start_time": "2021-03-11T21:05:40.351467", "status": "completed"} tags=[] def instancia_modelo(hiperparametros): clf = CatBoostClassifier(**hiperparametros['clf'], random_seed=42, verbose = 0, task_type="GPU", border_count = 32, # turn on gpu boosting_type = "Plain", bootstrap_type = "Bernoulli", max_ctr_complexity = 0, n_estimators = 550, early_stopping_rounds = 100) if hiperparametros['undersample'] == True: undersample = RandomUnderSampler(sampling_strategy='majority') else: undersample = None #categorical_transformer = Pipeline(steps=[ # ('OrdinalEncoder', ce.OrdinalEncoder(cols=categorical_cols)), # ('JamesSteinEncoder', ce.JamesSteinEncoder(cols=high_cardinality)) #]) #preprocessor = ColumnTransformer( # transformers=[ # #('num', numerical_transformer, numerical_cols), # ('cat', categorical_transformer, high_cardinality + categorical_cols) #]) pipe = Pipeline([#('preprocessor', preprocessor), ('undersample', undersample), ('clf', clf) ]) return pipe # + papermill={"duration": 0.031549, "end_time": "2021-03-11T21:05:40.439357", "exception": false, "start_time": "2021-03-11T21:05:40.407808", "status": "completed"} tags=[] ## criando uma função para realizar o treino do modelo def funcao_para_minimizar(hiperparametros, features, target): ## criando uma instancia do modelo com a combinação definida de hiperparametros para usar dentro da função pipe = instancia_modelo(hiperparametros) #pipe.named_steps.preprocessor.fit(X_train, y_train) #X_val_interim = pipe.named_steps.preprocessor.transform(X_val) # Usando dados de validacao #eval_set = [X_val, y_val] #fit_params={'clf__early_stopping_rounds': 100, # 'clf__eval_metric': 'auc', # 'clf__verbose': False, # 'clf__categorical_feature': categorical_cols + high_cardinality, # 'clf__eval_set': eval_set} fit_params={'clf__cat_features':cat_columns_position} cv = StratifiedKFold(n_splits=5) ## treinando o modelo com cross-validation resultado = cross_val_score(estimator = pipe, X = features, y = target, scoring = "roc_auc", cv = cv, error_score = "raise", fit_params = fit_params, n_jobs = -1) ## retornando a metrica da performance do modelo return -resultado.mean() # + papermill={"duration": 13150.489501, "end_time": "2021-03-12T00:44:50.952972", "exception": false, "start_time": "2021-03-11T21:05:40.463471", "status": "completed"} tags=[] # %%time ## rodando a otimização otimizacao = fmin(fn = partial(funcao_para_minimizar, features = X_train, target = y_train), space = hp_space, algo = tpe.suggest, trials = iteracoes, max_evals = int(120), rstate = np.random.RandomState(42)) # + papermill={"duration": 0.097475, "end_time": "2021-03-12T00:44:51.136398", "exception": false, "start_time": "2021-03-12T00:44:51.038923", "status": "completed"} tags=[] def extrai_space_eval(hp_space, trial): ## desempacota o resultado desempacota_trial = space_eval(space = hp_space, hp_assignment = {k: v[0] for (k, v) in trial['misc']['vals'].items() if len(v) > 0}) ## retornando o resultado return desempacota_trial # + papermill={"duration": 0.064211, "end_time": "2021-03-12T00:44:51.277748", "exception": false, "start_time": 
"2021-03-12T00:44:51.213537", "status": "completed"} tags=[] def desempacota_dicionario(dicionario): desempacotado = {} for (chave, valor) in dicionario.items(): if isinstance(valor, dict): desempacotado = {**desempacotado, **desempacota_dicionario(valor)} else: desempacotado[chave] = valor return desempacotado # + papermill={"duration": 0.122831, "end_time": "2021-03-12T00:44:51.456962", "exception": false, "start_time": "2021-03-12T00:44:51.334131", "status": "completed"} tags=[] ## colocando o historico em um dataframe historico = pd.DataFrame([desempacota_dicionario(extrai_space_eval(hp_space, x)) for x in iteracoes.trials]) ## colocando o AUC como uma das colunas historico['auc'] = [-x['loss'] for x in iteracoes.results] # + papermill={"duration": 0.064937, "end_time": "2021-03-12T00:44:51.578376", "exception": false, "start_time": "2021-03-12T00:44:51.513439", "status": "completed"} tags=[] # hiperparâmetros selecionados pela otimização hiperparametros_selecionados = space_eval(space = hp_space, hp_assignment = otimizacao) print('Hiperparâmetros selecionados:\n%s' % hiperparametros_selecionados) # + papermill={"duration": 1.728731, "end_time": "2021-03-12T00:44:53.363621", "exception": false, "start_time": "2021-03-12T00:44:51.634890", "status": "completed"} tags=[] import plotly.express as px historico.loc[:,'undersample'] = historico.loc[:,'undersample']*1 fig = px.parallel_coordinates(historico, color="auc") fig.show() # + papermill={"duration": 29.397449, "end_time": "2021-03-12T00:45:22.819838", "exception": false, "start_time": "2021-03-12T00:44:53.422389", "status": "completed"} tags=[] # %%time # Local results modelo_final = instancia_modelo(hiperparametros=hiperparametros_selecionados) modelo_final = modelo_final.fit(X_train, y_train, clf__cat_features=cat_columns_position) results = results.append(pd.DataFrame(evalue_model(modelo_final, y_test, X_test, "final_model"), index=[0])) results.sort_values(by=['auc'], ascending = False) # + papermill={"duration": 273.711746, "end_time": "2021-03-12T00:49:56.623937", "exception": false, "start_time": "2021-03-12T00:45:22.912191", "status": "completed"} tags=[] # %%time # Submit results clf = CatBoostClassifier(**hiperparametros_selecionados['clf'], random_seed=42, verbose = 0, task_type="GPU", border_count = 32, bootstrap_type = "Bernoulli", n_estimators = 3000, early_stopping_rounds = 100) pipe = Pipeline([#('preprocessor', preprocessor), ('undersample', undersample), ('clf', clf) ]) final_fit = pipe.fit(X, y, clf__cat_features=cat_columns_position) submission.loc[:, 'target'] = final_fit.predict_proba(new_data)[:,1] submission.to_csv('submission.csv', index = False) # + papermill={"duration": 0.05896, "end_time": "2021-03-12T00:49:56.742969", "exception": false, "start_time": "2021-03-12T00:49:56.684009", "status": "completed"} tags=[] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Calculate tile classes per H5 file # # Once the Ilastik classifier has classified images, it produces per-pixel probabilities as an H5 file. 
This workbook loads those per-pixel probabilities, and then samples each tile to obtain the class for all the pixels in the tile, so that the tile can be compared with the ground truth data as a whole # ## Step 1: Determine how many pixels to sample per tile for a statistically significant finding # # Use the Cochran formula to determine a statistically significant sample of pixels. Then trim that down to the total population of a texture tile (900 pixels) def sample_size(tile_size): # Calculate statistically significant test sample size # Z score for a 95% confidence (from Z table) Z_score = 1.96 # Calculate with a 5% margin of error margin_of_error = 0.05 # Some selected expected population percentages population_with_attribute_water = 0.09 population_with_attribute_foliage = 0.876 population_with_attribute_road = 0.027 population_with_attribute_building = 0.006 # Number of samples is the size of a square tile N = tile_size ** 2 required_samples = 0 # Calc for all population percentages, output all values. We will pick the largest one for p in [population_with_attribute_water, population_with_attribute_foliage, population_with_attribute_road, population_with_attribute_building]: q = 1 - p # Calc required samples for an unlimited population n_0 = ((Z_score ** 2) * p * q) / (margin_of_error ** 2) # Now reduce this unbounded required sample count down by the known population # per tile (900 pixels) n = n_0 / (1 + ((n_0 - 1) / N)) if (n > required_samples): required_samples = round(n) + (1 if round(n) != n else 0) return required_samples # ## Step 2: Calculate the class per tile in each probability file, save as csv # # Each image will have a statistically significant sample of pixels chosen per tile (uniformly distributed), and the probabilities of those pixels will be aggregated to provide the class of the tile. # # A CSV file will be saved per image with the class of all the tiles in the image # + import numpy as np import matplotlib.pyplot as plt import h5py import pandas as pd import itertools import os import glob def calc_class(foliage, water, building, road): if foliage > 0.5: return 'foliage' if water > 0.5: return 'water' if building > 0.5: return 'building' if road > 0.5: return 'road' return 'unknown' def calc_tile_classes(img_data_arr, tile_size): # The image data is in a 3 dimensional array. 
Dimension 1 is the Y pixel value, 2 is the X pixel value # and Z contains an array of the 4 classes probabilities names = ['y', 'x', 'z'] # Create an index for the dataframe index = pd.MultiIndex.from_product([range(s) for s in img_data_arr.shape], names=names) # create the dataframe itself image_df = pd.DataFrame({'A': img_data_arr.flatten()}, index=index)['A'] # Reformat into a 4 column frame with the 2 column index by unpacking the array image_df = image_df.unstack(level='z').swaplevel().sort_index() # Set the column names for the probabilities image_df.columns = ['A', 'B', 'C', 'D'] y_size, x_size, z_size = img_data_arr.shape required_samples = sample_size(tile_size) # Now build a matrix of sample pixels across the image, the array will contain the 30 pixel tile number # and a uniformly distributed random selection of pixels from that tile sample_pixel_arr = [] all_items = itertools.product(np.asarray(range(int(x_size / tile_size))), np.asarray(range(int(y_size/tile_size)))) for x, y in all_items: x_s = np.asarray([int(round(x[0])) for x in np.random.uniform((x * tile_size), (x * tile_size) + tile_size, (required_samples, 1))]) y_s = np.asarray([int(round(x[0])) for x in np.random.uniform((y * tile_size), (y * tile_size) + tile_size, (required_samples, 1))]) sample_pixel_arr.append(list(zip(x_s, y_s, itertools.repeat(x), itertools.repeat(y)))) # Convert the sample array to a dataframe for joining sample_matrix = np.asarray(sample_pixel_arr) sample_arr = np.reshape(sample_matrix, (sample_matrix.shape[0] * sample_matrix.shape[1], sample_matrix.shape[2])) sample_df = pd.DataFrame(sample_arr) sample_df.columns = ['x', 'y', 'tile_x', 'tile_y'] sample_df.set_index(['x', 'y'], inplace=True) # Join the sample pixels with the original probabilities frame to get the probabilities for each sample pixel sample_pixels = pd.merge(left=sample_df, right=image_df, left_on=['x', 'y'], right_on=['x', 'y']) # Sum all probabilities to give a total probability value for each category aggregated_samples = sample_pixels.groupby(['tile_x', 'tile_y'], as_index=False).agg( {"A": "sum", "B": "sum", "C": "sum", "D": "sum"}) aggregated_samples = aggregated_samples.fillna(0) sample_counts = [0] * len(aggregated_samples) sample_counts = sample_counts + aggregated_samples['A'] sample_counts = sample_counts + aggregated_samples['B'] sample_counts = sample_counts + aggregated_samples['C'] sample_counts = sample_counts + aggregated_samples['D'] # Divide total probability by number of samples to give a probability percentage per tile aggregated_samples['A'] = aggregated_samples['A'] / sample_counts aggregated_samples['B'] = aggregated_samples['B'] / sample_counts aggregated_samples['C'] = aggregated_samples['C'] / sample_counts aggregated_samples['D'] = aggregated_samples['D'] / sample_counts # Now calculate the class by inspecting the probability percentage. 
If any of the categories is # above 50% that is considered to be the predicted category of that tile aggregated_samples.loc[:,'tile_class'] = pd.Series('unsure', index=aggregated_samples.index) aggregated_samples.loc[aggregated_samples['A'] > 0.5, 'tile_class'] = pd.Series('foliage', index=aggregated_samples.index) aggregated_samples.loc[aggregated_samples['B'] > 0.5, 'tile_class'] = pd.Series('water', index=aggregated_samples.index) aggregated_samples.loc[aggregated_samples['C'] > 0.5, 'tile_class'] = pd.Series('building', index=aggregated_samples.index) aggregated_samples.loc[aggregated_samples['D'] > 0.5, 'tile_class'] = pd.Series('road', index=aggregated_samples.index) #aggregated_samples['tile_class'] = df.apply(lambda x: calc_class(x['A'], x['B'], x['C'], x['D']), axis=1) return aggregated_samples # + import re from PIL import Image probability_filenames = glob.glob('../../ilastik/TestingData/*/*/*.h5', recursive=True) for file in probability_filenames: head_tail = os.path.split(file) folder = os.path.basename(head_tail[0]) filename = head_tail[1] pre, ext = os.path.splitext(filename) image_file_portion = re.search("DJI\_[0-9]*", pre).group() search_folder = '../../Texture_Repo/Donegal_Rural_Terrain_Textures/Test_Images/*/' + image_file_portion + '.jpg' match_files = glob.glob(search_folder) if len(match_files) == 0: print('ERROR: Source file not found for', pre) else: orig_image = Image.open(match_files[0]) original_width = orig_image.width f = h5py.File(file, 'r') img_data_arr = np.asarray(f['exported_data']) classified_width = img_data_arr.shape[1] tile_size = (classified_width / original_width) * 30 print('Processing', file, 'orig width', original_width, 'classified width', classified_width, 'tile size', tile_size) tile_data = calc_tile_classes(img_data_arr, tile_size) tile_data.to_csv('../../TestPredictions/Ilastik/' + folder + '_' + pre + '.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # One of the basic things we have to do in math is to express numbers. Numbers are the basic particles of essence so it makes sense to start with them first. We'll start with the *natural numbers* and work our way up through the whole spectrum of numbers finally arriving at the *complex numbers*. # # We'll see that every type of number has a use. These numbers are not just cooked up because somebody figured that would be cool. No, every number serves a definite purpose and even though we might not always understand completely what that purpose is that does not mean we shouldn't use them in order to try a greater understanding. # # Take for example the number $\sqrt{2}$ which is a perfectly fine number nowadays. However, how hard we might try, there's really no other way to write down this number. We could try for an approximation like $1.4142135\ldots$ but that's really just being sloppy. There *are* other ways to write this number but we will never get a *real* number out of it. Only approximations. # # ### natural numbers # Some people like to include the number zero in this set but we'll stick with with the numbers $1, 2, 3, \ldots, n$ where $n \gt 0$. They are basically all the whole numbers. # # $$\mathbb{N} = {1, 2, 3, \ldots}$$ # # If we want to be really unambiguous about what we mean we could be extra explicit. 
#
# $$
# \begin{align}
# \mathbb{N}^0 = \mathbb{N}_0 & = \{0, 1, 2, 3, \ldots\}\\
# \mathbb{N}^* = \mathbb{N}^+ = \mathbb{N}_1 = \mathbb{N}_{\gt 0} & = \{1, 2, 3, \ldots\}
# \end{align}
# $$
#
# If we really want to include zero in this text we'll use a different set, though.
#
# ### whole numbers
# This is all of the numbers in $\mathbb{N}$ plus the number zero. We don't have a cool letter for this set, but we can say it's all the whole numbers $0, 1, 2, \ldots, n$ where $n \ge 0$.
#
# ### integers
# Now this is an interesting set, if only because it's so prevalent in almost all the math we do. For example, computers love integers because they can be represented so easily by a sequence of bits.
#
# The integers are the set of *whole numbers* together with all the negatives of the *natural numbers*. So now we're talking about $-n, \ldots, -2, -1, 0, 1, 2, \ldots, n$. This set is important enough to get its own symbol:
#
# $$\mathbb{Z} = \{-n, \ldots, -2, -1, 0, 1, 2, \ldots, n\}$$
#
# ### interlude: integers
# Even though we are not even half-way up our ladder of number systems, those integers are already getting a bit interesting. Why do we like them so much in computing? Because they can easily be represented as a sequence of ones and zeros. How does this work, though?
#
# If we take any integer, for example $321$, and analyze what it means, we arrive at the insight that:
#
# $$321 = (3 \times 100) + (2 \times 10) + (1 \times 1)$$
#
# Looking a little deeper, we can also see that $100 = 10^2$, $10 = 10^1$ and $1 = 10^0$, so in other words:
#
# $$321 = (3 \times 10^2) + (2 \times 10^1) + (1 \times 10^0)$$
#
# Our number system is called the *decimal* system because it uses base `10`. There are other number systems. Other systems commonly in use are *binary*, *hexadecimal* and sometimes *octal*. The binary system is popular because it aligns with electronic switches that can be either on or off; there are only two possibilities, which is what binary means. The octal system is sometimes used because it aligns nicely with the *byte* memory unit in computers, but you don't see it much these days. The hexadecimal system, however, is still prevalent and you'll see it a lot, for example in color codes.
#
# So how does the binary system work? Remember that the decimal system operates on *powers* of ten; the binary system operates on powers of two.
#
# $$
# \begin{align}
# 0 = 0 \times 2^0 & = 0\\
# 1 = 1 \times 2^0 & = 1 \\
# 2 = (1 \times 2^1) + (0 \times 2^0) & = 10 \\
# 3 = (1 \times 2^1) + (1 \times 2^0) & = 11 \\
# 4 = (1 \times 2^2) + (0 \times 2^1) + (0 \times 2^0) & = 100 \\
# 5 = (1 \times 2^2) + (0 \times 2^1) + (1 \times 2^0) & = 101 \\
# 6 = (1 \times 2^2) + (1 \times 2^1) + (0 \times 2^0) & = 110 \\
# 7 = (1 \times 2^2) + (1 \times 2^1) + (1 \times 2^0) & = 111
# \end{align}
# $$
#
# ### rational numbers
# We get rational numbers when we need to express one integer as some part of another. When people started doing real math this problem soon cropped up. The easiest way to deal with it is to kind of *not* deal with it and just say: well, it's this number expressed as some ratio of another. This might be a bit abstract, so let's take an example.
#
# When we first started doing divisions we took stuff like $\frac{3}{3} = 1$ and the world was good. As people started doing fancier things with math and numbers, things got a bit out of hand. At some point we found ourselves needing to express something other than *integers*. As always, when math runs into a wall we just invent something to get over it, and so we got *rational numbers*, which are basically just *fractions* of integers like $\frac{1}{3}$ or $\frac{22}{7}$.
#
# At this point, note that in order to get better numbers we just take some existing numbers we know and combine them in a way that makes sense but is somewhat unexpected. I mean, people were still drawing out problems with triangles and such, and now you're starting to abstract some of this stuff away. And it makes sense too, because you don't want to lose any information. By keeping that number in its exact *ratio* $\frac{1}{3}$ you have a clean number to calculate with.
#
# Which leads us to the unfortunately necessary...
#
# ### real numbers
# Let me start by saying that *real numbers* are unfortunately named, because most of them are anything but real. Real numbers are supposed to be plotted along a line (usually the x-axis), and they are, but when we *do have* to work with them they are usually just an approximation of a rational number.
#
# We like to stay pure as long as we can, so we'll prefer rational numbers over real numbers, but sometimes (especially when dealing with computers) we *have* to convert our rational number to some *real* approximation. For most purposes you can just think of a *real* number as an *approximation* of some rational number that is to be used for real-life purposes.
#
# ### irrational numbers
# These are interesting numbers because they are so-called *real* numbers, yet we cannot express them as a rational number. As always, when we encounter such a thing in math we tend to give it a name or a convenient notation. Examples are $\pi$, our friend $\sqrt{2}$, and $e$.
#
# Irrational numbers are awkward, and in order to stay pure we sometimes can do no better than express something as a fraction of an irrational unit. Of course we could try to get a *real* number out, but we have to remember that this will always be an approximation. That might be fine, depending on our purposes.
#
# ### imaginary numbers
# This is where things start to get really interesting (and strange). After hundreds of years of working on math puzzles, mathematicians were getting annoyed by the fact that $\sqrt{-1}$ kept popping up in their would-be solutions. And (at that time) there was no accepted way to take the square root of a negative number, so they just gave up... mostly. Finally they decided to go with it, complete the calculations involving the negative square root $\sqrt{-1}$, and things turned out beautifully.
#
# After a few more hundred years it had become so useful that we gave it a special notation and even a special algebra, so it all makes sense as a number as well.
#
# So the first thing to consider is that we have this new number called $i$, which is defined by $i^2 = -1$, so $i = \sqrt{-1}$. Now we can use regular algebra to express any negative root: for example, if we need $\sqrt{-5}$ we can just write $i \times \sqrt{5}$, and this simple transformation allows us to actually calculate with those things.
#
# Note that the name *imaginary* is actually a bad name. These numbers *are* real in the normal sense of that word; the fact that we describe them this way is just a byproduct of the way we write math in general. It's better to look at any number and imagine it to have a so-called *imaginary* component. In a lot of cases this component will just be $0 \times i$ (no imaginary part), but sometimes it isn't.
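#
# As a small aside (an added illustration, not part of the original text): Python ships with complex numbers and a standard-library `cmath` module, so we can check these rules directly. This is only a minimal sketch; note that Python writes the imaginary unit as `j` rather than $i$.

# +
import cmath

i = 1j                      # the imaginary unit, i.e. sqrt(-1)
print(i ** 2)               # (-1+0j), so i^2 == -1
print(cmath.sqrt(-5))       # roughly 2.2360679...j
print(1j * cmath.sqrt(5))   # the i * sqrt(5) rewrite gives the same value (up to rounding)
# -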
# # ### complex numbers # In some sense, this is the best way to describe a number. This form of numbers is called *complex* buty they are not really that complex though. Again, this is kind of a misnomer and actually complex numbers are very easy. In fact, they are just numbers. # # It's just so many numbers we deal with are on the x-axis of the *complex plane* that we don't even notice we are dealing with complex numbers at all. Thanks to a lot of evolution and schooling we can now reasonably *feel* how most numbers work up to and including rational numbers. However, *imaginary* and *complex numbers* are still a bit weird. # # One of the best ways to show how complex numbers enter math is to show an innocent looking equation like: $y = x^2 + 1$. If we try to solve this for $y = 0$ we get $0 = x^2 + 1$. And going further we get $-1 = x^2 \implies x = \sqrt{-1}$. Before complex numbers there was no such thing as $\sqrt{-1}$ and we would have simply given up. # # Nowadays we can say that the solution is $i$ (and $-i$ is a valid solution too). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Boeuf results # + # Scientific libraries import numpy as np import scipy # Graphic libraries import matplotlib.pyplot as plt # %matplotlib widget plt.style.use("presentation") plt.rcParams["figure.figsize"] = (4, 3) # Creating alias for magic commands # LPPview Classes from LPPview import * from LPPview.Classes.LPPic_temporal import History from LPPview.Classes.LPPic_fields import field from LPPview.Classes.LPPic_fields import field as Field from LPPview.Classes.LPPic_newwalls import newwalls as Newwalls from LPPview.Classes.LPPic_temporal import History from plasmapy.physics import Debye_length from plasmapy.physics import plasma_frequency from astropy import units as u from scipy.ndimage import gaussian_filter1d as smooth from scipy.ndimage import gaussian_filter from tqdm import tqdm_notebook as tqdm # + path_ref = "/DATA/tavant/266_Boeuf_166Thomas/" path_L2 = "/DATA/tavant/158_Beauf_fakeR/" path_L4 = "/DATA/tavant/163_Beauf_fakeR2/" paths = [path_ref, path_L4, path_L2] names = ["no $L_R$", "$L_R$=4cm", "$L_R$=2cm"] colors = ["k","b", "r"] fields = [Field(path) for path in paths ] histories = [History(path)for path in paths ] walls = [Newwalls(path) for path in paths ] # - # # Wave energy density def return_wave_energy(field : fields, i=-1): """Return the wave energy: $ W = 3 eps_0 | d E|^2 $ By estimating dE = std(E) \sqrt{2} """ tab = field.return_fromkey(i, "Ej(1)") std_E = tab.std(axis=0) return 3*field.eps0*2*std_E**2 # + fig, ax = plt.subplots(1, 1, figsize=(3.5, 3)) vmax = 0 for f, n in zip(fields, names): value = return_wave_energy(f, -1) plt.plot(value, label=n) ax.legend() ax.set_xlabel("Axial position $z$ [cm]") ax.set_ylabel("Wave energy $W$ [SI]") #plt.savefig("Boeuf_Ex_snapshot.png", dpi=400) # - # # Wave amplitude evolution def return_W_2D_zt(field, nt_max=None): """return the spatio_temporal evolution of the wave energy""" if nt_max is None: nt_max = field._nT w_2D = np.zeros( shape=( nt_max, field._ymax+1)) for i in tqdm(range(nt_max)): value = return_wave_energy(field, i) w_2D[i] = value return w_2D import ipywidgets as widgets widgets.IntSlider() # + nt_max = 9999 for f in fields: nt_max = min(nt_max, f._nT) w2D_0, w2D_1, w2D_2 = [return_W_2D_zt(f, nt_max) for f in fields] # + def 
W_2D_evolution(smooth_gaussian_2D=False): fig, axarr = plt.subplots(1, 3, figsize=(8, 3)) icut = 10 nt_max = 900 for f in fields: nt_max = min( nt_max, f._nT) datas = [w2D_0, w2D_1, w2D_2] if smooth_gaussian_2D : datas = [ gaussian_filter(d, sigma=2, order=0) for d in datas] vmax = max( [ np.ma.masked_invalid(np.abs(d)).max() for d in datas]) print(vmax) vmax *= 0.5 for f, ax, n, data in zip(fields, axarr, names, datas): f.definecoords() im =ax.imshow(data/data.max(), extent=(0,f._Ly*100, 0, nt_max*f._dT*f._Na*1e6), vmin=0, vmax=1, cmap="plasma") ax.set_title(n, fontsize=12) ax.set_xlabel("Axial position $z$ [cm]") axarr[0].set_ylabel("Time [$\mu$s]") cb = plt.colorbar(im, ax=axarr[2], fraction=0.05, aspect=30, ticks=[0,1]) cb.ax.set_ylabel(" Wave energy $W$ ") #cb.ax.set_yticks([0,1]) cb.ax.set_yticklabels(["min", "max"]) fig.suptitle("Axial-temporal evolution of $W$", fontsize=14) #plt.savefig("Boeuf_Ex_snapshot.png", dpi=400) W_2D_evolution() plt.savefig("Boeuf_Fake_R_W.png", dpi=400) W_2D_evolution(True) plt.savefig("Boeuf_Fake_R_W_smoothed.png", dpi=400) # - axial_velocity = ( 2- 1.5 ) / (7.94-7.62) * u.cm/u.microsecond axial_velocity.to(u.m/u.second) # + def W_2D_evolution(smooth_gaussian_2D=False): fig, ax = plt.subplots(1, 1, figsize=(8, 3)) icut = 10 nt_max = 900 for f in fields: nt_max = min( nt_max, f._nT) datas = [w2D_0, w2D_1, w2D_2] if smooth_gaussian_2D : datas = [ gaussian_filter(d, sigma=2, order=0) for d in datas] vmax = max( [ np.ma.masked_invalid(np.abs(d)).max() for d in datas]) print(vmax) vmax *= 0.5 for f, n, data in zip(fields, names, datas): f.definecoords() ax.plot(f.tab_y, data[ 300:400, :].mean(axis=0), label=n) #im =ax.imshow(data/data.max(), extent=(0,f._Ly*100, 0, nt_max*f._dT*f._Na*1e6), vmin=0, vmax=1, cmap="plasma") ax.legend( fontsize=12) ax.set_xlabel("Axial position $z$ [cm]") ax.set_ylabel(" Wave energy $W$ ") fig.suptitle("Axial evolution of $W$", fontsize=14) #plt.savefig("Boeuf_Ex_snapshot.png", dpi=400) W_2D_evolution() W_2D_evolution(True) # - # ## Temporal evolution # + def W_evolution(smooth_gaussian_2D=False, z_cm_list=[0.4, 0.7, 1.5], scaled=False): fig, axarr = plt.subplots(1, 3, figsize=(8, 3)) icut = 10 nt_max = 900 for f in fields: nt_max = min( nt_max, f._nT) tab_time = np.arange(0, nt_max)*fields[0]._dT*fields[0]._Na*1e6 datas = [w2D_0, w2D_1, w2D_2] if smooth_gaussian_2D : datas = [ gaussian_filter(d, sigma=2, order=0) for d in datas] z_cm_list = [0.4, 0.8, 1.5] z_index_list = [int( z*1e-2 / fields[0]._dX) for z in z_cm_list] print(z_index_list) dat_max = 0 for f, ax, n, data in zip(fields, axarr, names, datas): f.definecoords() for ax, z in zip(axarr, z_index_list): ax.plot(tab_time, data[:, z], label=n) dat_max = max(dat_max, data[:, z].max()) for ax in axarr: ax.set_xlabel("Time [$\mu$s]") ax.set_xlim(0, 10) if scaled: ax.set_ylim(top=dat_max*1.1) for ax, z in zip(axarr, z_cm_list): ax.annotate(f"z={z:2.1f} cm", (6.5, 0.035)) axarr[0].set_ylabel("Wave energy W [SI]") axarr[1].legend(loc='upper center', bbox_to_anchor=(0.5, 1.2), ncol=3, # fancybox=True, # shadow=True, ) plt.subplots_adjust(wspace=0.3, bottom=0.2, top=0.85) # fig.suptitle("Axial-temporal evolution of $W$", fontsize=14) W_evolution(scaled=True) plt.savefig("Boeuf_Fake_R_W_temporal_smoothed.png", dpi=400) W_evolution(smooth_gaussian_2D=True, scaled=True) plt.savefig("Boeuf_Fake_R_W_temporal_smoothed.png", dpi=400) # - # # Wave group velocity # + def return_vg(field:fields, i=-1): """The group velocity is the axial ion velocity""" Ji = field.return_fromkey(-1, "Ji(2)") 
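# (added note: "Ji(2)" is the axial ion current density and "Numi" the ion density, so Ji / (e * ni) below gives the mean axial ion velocity, used here as the wave group velocity)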
ni = field.return_fromkey(-1, "Numi") vi = Ji/(ni * field.qe) return vi.mean(axis=0) fig, ax = plt.subplots(1, 1, figsize=(3.5, 3)) vmax = 0 for f, n in zip(fields, names): value = return_vg(f, -1) plt.plot(f.tab_y, value*1e-3, label=n) ax.set_xlim(0, 2.5) ax.set_xlabel("Axial position $z$ [cm]") ax.set_ylabel("Group velocity $v_g$ [km/s]") plt.legend() plt.savefig("Boeuf_v_g.png", dpi=400) ax.axhline(axial_velocity.to(u.m/u.second).value*1e-3, linestyle="--", c="k", label="Axial velocity of $W$") plt.legend() plt.savefig("Boeuf_v_g_with_observed.png", dpi=400) # - # # Growth rate from PIC # + def return_growth_rate_axial(field, i=-1, smooth_W=False, smooth_sigma=2): """Compute the growth rate \gamma by Eq. 20 of Lafleur 2018: d_t W + \div( v_g W) \gamma = -------------------- 2 W - d_t W is computed by upwind difference, - \div(v_g W) is computed by center difference """ if i == -1 or i == field._nT-1: i1 = i - 1 i2 = i else: i1 = i i2 = i+1 W1 = return_wave_energy(field, i1) try: W2 = return_wave_energy(field, i2) except IndexError: print(i2, field._nT) raise RuntimeError if smooth_W : W1 = smooth(W1, sigma=smooth_sigma) W2 = smooth(W2, sigma=smooth_sigma) vg = return_vg(field, i1) dt_W = ( W2 - W1)/ (field._dT*field._Na) div_vgW = np.gradient(vg*W1, field._dX) return ( dt_W + div_vgW)/(2 * W1), dt_W # - # ## Axial profile at t=10 $\mu$s # + fig, ax = plt.subplots(1, 1, figsize=(3.5, 3)) icut = 10 nt0 = 400 for f, n in zip(fields, names): f.definecoords() value, dt_W = return_growth_rate_axial(f, nt0, smooth_W=True, smooth_sigma=5) Nt_av = 100 print(f"averaged over {Nt_av*f._dT*f._Na*1e6:f} $\mu$s") for i in range(1,Nt_av): a, b= return_growth_rate_axial(f, nt0-i, smooth_W=True, smooth_sigma=5) value += a dt_W += b value /= Nt_av dt_W /= Nt_av if True: value = smooth(value, sigma=1) plt.plot(f.tab_y[icut: -1-icut], value[icut: -1-icut]*1e-6, label=n) ax.set_xlabel("Axial position $z$ [cm]") ax.set_ylabel("Growth rate $\gamma$ [$\mu$s$^{-1}$]") plt.legend() #plt.savefig("Boeuf_Ex_snapshot.png", dpi=400) # - # ## Temporal evolution # + def temporal_evolution_2D(field, smooth_W=True, smooth_sigma=5, nt_max = None): """return the temporal evolution as a 2D table Z-t""" if nt_max is None: nt_max = field._nT growthrate_2D = np.zeros((nt_max, field._ymax+1)) dtW_2D = np.zeros((nt_max, field._ymax+1)) for i in tqdm(range(nt_max)): gamma, dt_W = return_growth_rate_axial(field, i, smooth_W=smooth_W, smooth_sigma=smooth_sigma) growthrate_2D[i] = gamma dtW_2D[i] = dt_W return growthrate_2D, dtW_2D def temporal_evolution(field, z_cm=[0.7]): """return the temporal evolution at the positions choosen""" pass # + nt_max = 900 for f in fields: nt_max = min( nt_max, f._nT) growthrate_2D_0, dtW_2D_0 = temporal_evolution_2D(fields[0], smooth_W=True, smooth_sigma=2, nt_max=nt_max) growthrate_2D_1, dtW_2D_1 = temporal_evolution_2D(fields[1], smooth_W=True, smooth_sigma=2, nt_max=nt_max) growthrate_2D_2, dtW_2D_2 = temporal_evolution_2D(fields[2], smooth_W=True, smooth_sigma=2, nt_max=nt_max) # - # # 2D Axial-temporal evolutions # ## $\partial_t W$ # + def dtW_2D_evolution(smooth_gaussian_2D=False): fig, axarr = plt.subplots(1, 3, figsize=(8, 3)) icut = 10 nt_max = 900 for f in fields: nt_max = min( nt_max, f._nT) datas = [dtW_2D_0, dtW_2D_1, dtW_2D_2] if smooth_gaussian_2D : datas = [ gaussian_filter(d, sigma=2, order=0) for d in datas] vmax = min( [ np.ma.masked_invalid(np.abs(d)).max() for d in datas]) print(vmax) vmax *= 1 for f, ax, n, data in zip(fields, axarr, names, datas): f.definecoords() im 
=ax.imshow(data, extent=(0,f._Ly*100, 0, nt_max*f._dT*f._Na*1e6), vmin=-vmax, vmax=vmax, cmap="seismic") ax.set_title(n, fontsize=12) ax.set_xlabel("Axial position $z$ [cm]") axarr[0].set_ylabel("Time [$\mu$s]") cb = plt.colorbar(im, ax=axarr[2], fraction=0.05, aspect=30) cb.ax.set_ylabel(" $\partial_t W$ [SI]") fig.suptitle("Axial-temporal evolution of $\partial_t W$", fontsize=14) #plt.savefig("Boeuf_Ex_snapshot.png", dpi=400) dtW_2D_evolution() plt.savefig("Boeuf_Fake_R_dtW.png", dpi=400) dtW_2D_evolution(True) plt.savefig("Boeuf_Fake_R_dtW_smoothed.png", dpi=400) # + fig, axarr = plt.subplots(1, 3, figsize=(8, 3)) smooth_gaussian_2D = True datas = datas = [dtW_2D_0, dtW_2D_1, dtW_2D_2] if smooth_gaussian_2D : datas = [ gaussian_filter(d, sigma=5, order=0) for d in datas] tab_time = np.arange(0, nt_max)*fields[0]._dT*fields[0]._Na*1e6 z_cm_list = [0.4, 0.7, 1.5] z_index_list = [int( z*1e-2 / fields[0]._dX) for z in z_cm_list] for f, ax, n, data in zip(fields, axarr, names, datas): f.definecoords() for ax, z in zip(axarr, z_index_list): ax.plot(tab_time, 1e-3*data[:, z], label=n) for ax, z in zip(axarr, z_cm_list): ax.set_title(f"Axial position $z={z}$ [cm]", fontsize=11) for ax in axarr: ax.legend(labelspacing=0.02) ax.set_xlabel("Time $t$ [$\\mu$s]") axarr[0].set_ylabel("$\partial_t W$ [10$^{-3}$ SI]") plt.savefig("Boeuf_dtW_toporal.pdf") # - # ## Growth rate $\gamma$ # + def growth_rate_2D_evolution(smooth_gaussian_2D=False, datas=[growthrate_2D_0, growthrate_2D_1, growthrate_2D_2]): fig, axarr = plt.subplots(1, 3, figsize=(8, 3)) icut = 10 nt_max = 900 for f in fields: nt_max = min( nt_max, f._nT) if smooth_gaussian_2D : datas = [ gaussian_filter(d, sigma=2, order=0) for d in datas] vmax = min( [ np.ma.masked_invalid(d).max() for d in datas ]) print(vmax) vmax *= 1 for f, ax, n, data in zip(fields, axarr, names, datas): f.definecoords() im =ax.imshow(data*1e-6, extent=(0,f._Ly*100, 0, nt_max*f._dT*f._Na*1e6), vmin=-vmax*1e-6, vmax=vmax*1e-6, cmap="seismic") ax.set_title(n, fontsize=12) ax.set_xlabel("Axial position $z$ [cm]") axarr[0].set_ylabel("Time [$\mu$s]") cb = plt.colorbar(im, ax=axarr[2], fraction=0.05, aspect=30) cb.ax.set_ylabel("Growth rate $\gamma$ [$\mu$s$^{-1}$]") fig.suptitle("Axial-temporal evolution of $\gamma$", fontsize=14) growth_rate_2D_evolution() plt.savefig("Boeuf_Fake_R_gamma.png", dpi=400) growth_rate_2D_evolution(True) plt.savefig("Boeuf_Fake_R_gamma_smoothed.png", dpi=400) # + def gamma_evolution(smooth_gaussian_2D=False, z_cm_list=[0.4, 0.7, 1.5], scaled=False, datas=[growthrate_2D_0, growthrate_2D_1, growthrate_2D_2]): fig, axarr = plt.subplots(1, 3, figsize=(8, 3)) icut = 10 nt_max = 900 for f in fields: nt_max = min( nt_max, f._nT) tab_time = np.arange(0, nt_max)*fields[0]._dT*fields[0]._Na*1e6 if smooth_gaussian_2D : datas = [ gaussian_filter(d, sigma=2, order=0) for d in datas] datas = [ 1e-6*d for d in datas] z_cm_list = [0.4, 0.8, 1.5] z_index_list = [int( z*1e-2 / fields[0]._dX) for z in z_cm_list] print(z_index_list) dat_max = 0 dat_min = 0 for f, ax, n, data in zip(fields, axarr, names, datas): f.definecoords() for ax, z in zip(axarr, z_index_list): ax.plot(tab_time, data[:, z], label=n) dat_max = max(dat_max, np.ma.masked_invalid(data[:, z]).max()) dat_min = min(dat_max, np.ma.masked_invalid(data[:, z]).min()) for ax in axarr: ax.set_xlabel("Time [$\mu$s]") ax.set_xlim(0, 10) if scaled: ax.set_ylim(bottom=1.1*dat_min, top=dat_max*1.1) for ax, z in zip(axarr, z_cm_list): ax.annotate(f"z={z:2.1f} cm", (5, 5)) axarr[0].set_ylabel("Growth rate 
$\gamma$ [$\mu$s$^{-1}$]") axarr[1].legend(loc='upper center', bbox_to_anchor=(0.5, 1.2), ncol=3, # fancybox=True, # shadow=True, ) plt.subplots_adjust(wspace=0.3, bottom=0.2, top=0.85) # fig.suptitle("Axial-temporal evolution of $W$", fontsize=14) gamma_evolution(scaled=True) plt.savefig("Boeuf_Fake_R_gamma_temporal.pdf") gamma_evolution(smooth_gaussian_2D=True, scaled=True) plt.savefig("Boeuf_Fake_R_gamma_temporal_smoothed.pdf") # - # ## Axial profile of $\gamma$ # + def gamma_axial_evolution(smooth_gaussian_2D=False, scaled=False, tmin_mus=9, plot_zpos=False, datas = [growthrate_2D_0, growthrate_2D_1, growthrate_2D_2]): fig, ax = plt.subplots(1, 1, figsize=(3.5, 3)) icut = 10 nt_max = 900 for f in fields: nt_max = min( nt_max, f._nT) tmin = int( tmin_mus *nt_max/10) if smooth_gaussian_2D : datas = [ gaussian_filter(d, sigma=2, order=0) for d in datas] datas = [ 1e-6*d for d in datas] to_pots = [ d[tmin:, :].mean(axis=0) for d in datas] z_cm_list = [0.4, 0.8, 1.5] z_index_list = [int( z*1e-2 / fields[0]._dX) for z in z_cm_list] dat_max = 0 dat_min = 0 for f, n, data in zip(fields, names, to_pots): f.definecoords() ax.plot(f.tab_y, data, label=n) dat_max = max(dat_max, np.ma.masked_invalid(data).max()) dat_min = min(dat_max, np.ma.masked_invalid(data).min()) ax.set_xlabel("Axial position $z$ [cm]") ax.set_xlim(0, 2.5) if scaled: ax.set_ylim(bottom=-0.5*dat_max, top=dat_max*1.1) if plot_zpos: for z in z_cm_list: ax.axvline(z, c='k', linestyle="--", alpha=0.5) ax.set_ylabel("Growth rate $\gamma$ [$\mu$s$^{-1}$]") #ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.2), # ncol=3, # # fancybox=True, # # shadow=True, # ) ax.legend() #plt.subplots_adjust(wspace=0.3, bottom=0.2, top=0.85) # fig.suptitle("Axial-temporal evolution of $W$", fontsize=14) gamma_axial_evolution(scaled=True, tmin_mus=3) plt.savefig("Boeuf_Fake_R_gamma_axial.pdf") gamma_axial_evolution(smooth_gaussian_2D=True, scaled=True, tmin_mus=3) plt.savefig("Boeuf_Fake_R_gamma_axial_smoothed.pdf") # - # # Integration of the growth rate # # In the stationnary case, we have # $$ 2 \gamma W = \nabla \cdot ( \vec{v_g} W) $$ # # Lets see if we can integrate it, and obtaine the same mean profile. # # Unfortunatly, we need a _seed_ to start. def integrate_gamma(field, gamma_vect, z_cm_start=0, W_start=1): v_g = return_vg(field) dz = field._dX z_ind_start = int(z_cm_start*1e-2 / dz) w_vector = np.zeros(field._ymax+1) w_vector[z_ind_start] = W_start for z in range(z_ind_start+1, field._ymax+1): w_vector[z] = (2*dz*gamma_vect[z-1] + v_g[z-1])*w_vector[z-1]/v_g[z] return w_vector # + tmin_mus = 7 tmin_ind = int( tmin_mus*1e-6/(fields[0]._dT*fields[0]._Na) ) gamma_vect = growthrate_2D_0[tmin_ind:, :].mean(axis=0) z_cm_start=0.05 w_vector = integrate_gamma(fields[0], gamma_vect, z_cm_start=z_cm_start, W_start=1) plt.figure() plt.plot(fields[0].tab_y, w_vector, label="By integration") PIC_W = w2D_0[tmin_ind:, :].mean(axis=0) plt.plot(fields[0].tab_y, PIC_W/PIC_W[int(z_cm_start*1e-2 / fields[0]._dX)], label="PIC measurement") plt.ylabel("Wave energy $W$ normalized") plt.ylabel("Axial position $z$ [cm]") plt.legend() plt.title("Case no $L_R$, \n integration between $t=7$ and $10\mu$s") # - # # Ion accoutic wave growth rate # Because the saturation may not be responssible for the observations, lets compare $\gamma_{\rm PIC}$ with the theory: # $$\gamma_{IAW} = \sqrt{ \frac{\pi m_e}{8 m_i} } \frac{ \vec{k}\cdot\vec{u_e}}{ 1 + k^2 \lambda_{De}^2)^{3/2}} $$ # # Plausible hypotheses on $k$: # 1. 
_local parameters_ : The maximum growth rate correspond to the wavevector $k \lambda_{De} = 1/\sqrt{2}$ # 2. _Convected wave_ : the wave rises in the maximum growing place (z ~ 0.5cm) then is transported. # + from plasmapy.physics import Debye_length from plasmapy.physics import plasma_frequency from astropy import units as u # + def get_ue(run: Field): """return the mean azimuthal electron velocity""" Je_x = np.array(run.meanfield("Je(1)", mean_axis="x", imin=280 )) ne = np.array(run.meanfield("Nume", mean_axis="x", imin=280)) v_e = Je_x/run.qe/ne return v_e def get_lde_wpi(f): ne_vect = f.meanfield("Nume", "x") Te_vect = f.meanfield("Eke(1)", "x") wpi_vect = plasma_frequency(ne_vect/u.m**3, particle="Xe+") lDe_vect = Debye_length(Te_vect*u.eV, ne_vect/u.m**3,) return wpi_vect, lDe_vect # + field = fields[0] u_e = get_ue(field) wpi_vect, lDe_vect = get_lde_wpi(field) factor = np.sqrt(field.me * np.pi/(8*field.mi)) factor # - # ## First hypothese: local wave # + def compute_gamma_IAW(field, case=1): u_e = -get_ue(field) wpi_vect, lDe_vect = get_lde_wpi(field) factor = np.sqrt(field.me * np.pi/(8*field.mi)) if case==1: k_vect = 1/(np.sqrt(2) * lDe_vect.value) elif case==2: k_vect = np.ones_like(lDe_vect.value) * 1/(np.sqrt(2) * lDe_vect[int(0.5*1e-2/field._dX)].value) gamma_theo = factor * ( k_vect * u_e) / ( 1 + k_vect**2 * lDe_vect.value**2)**(3/2) return gamma_theo # - tmin_ind # + gamma_theo_IAW = compute_gamma_IAW(field) gamma_PIC = growthrate_2D_0[tmin_ind:, :].mean(axis=0) plt.figure() plt.plot(field.tab_y,1e-6* gamma_theo_IAW, label = "Theo IAW") plt.plot(field.tab_y,1e-6* gamma_PIC, label = "PIC measurment") plt.legend() plt.ylabel("Growth rate $\gamma$ [$\mu$s$^{-1}$]") plt.xlabel("Azimuthal position $z$ [cm]") plt.ylim(bottom=-5) plt.xlim(0, 2.5) # + gamma_theo_IAW = compute_gamma_IAW(field, case=2) gamma_PIC = growthrate_2D_0[tmin_ind:, :].mean(axis=0) plt.figure() plt.plot(field.tab_y, gamma_theo_IAW, label = "Theo IAW") plt.plot(field.tab_y, gamma_PIC, label = "PIC measurment") plt.legend() plt.ylabel("Growth rate $\gamma$") plt.xlabel("Azimuthal position $z$ [cm]") plt.ylim(bottom=-0.5e7) plt.xlim(0, 2.5) # - # # All the same, but not smoothed # let see what it can change # + nt_max = 900 for f in fields: nt_max = min( nt_max, f._nT) growthrate_2D_0_raw, dtW_2D_0_raw = temporal_evolution_2D(fields[0], smooth_W=False, smooth_sigma=2, nt_max=nt_max) growthrate_2D_1_raw, dtW_2D_1_raw = temporal_evolution_2D(fields[1], smooth_W=False, smooth_sigma=2, nt_max=nt_max) growthrate_2D_2_raw, dtW_2D_2_raw = temporal_evolution_2D(fields[2], smooth_W=False, smooth_sigma=2, nt_max=nt_max) # + datas_raw=[growthrate_2D_0_raw, growthrate_2D_1_raw, growthrate_2D_2_raw ] growth_rate_2D_evolution(smooth_gaussian_2D=False, datas=datas_raw) growth_rate_2D_evolution(smooth_gaussian_2D=True, datas=datas_raw) # + gamma_axial_evolution(scaled=True, tmin_mus=3, datas=datas_raw) gamma_axial_evolution(smooth_gaussian_2D=True, scaled=True, tmin_mus=3, datas=datas_raw) # - gamma_evolution(smooth_gaussian_2D=True, z_cm_list=[0.4, 0.8, 1.6], scaled=True, datas=datas_raw) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="8UNitMGegPiF" outputId="36ba007b-08db-405f-a2d7-6d82bbe16500" #Part 1 - Python Basics #1 #Write a command to get the 
Python version you are using. # !python --version # + colab={"base_uri": "https://localhost:8080/"} id="FFz5JAHSgvFV" outputId="bfc48244-7040-456f-c65b-6a37cdde5cea" #2 # Write a command or program to locate Python site-packages import site; print(site.getsitepackages()) # + colab={"base_uri": "https://localhost:8080/"} id="rn7vs4TSg9Ce" outputId="f72e0cdf-e992-4a21-b911-ca6ff8d45a5f" #3 # Write a command or program to get the path and name of the file that is currently executing. from pathlib import Path import os print('File dir:', os.path.dirname('')) print('File name:', Path(os.path.abspath('')).resolve().stem) # + colab={"base_uri": "https://localhost:8080/"} id="-maOOi7cjQgS" outputId="99b83061-8786-49be-c310-3b2d11f004c4" #4 # Write a python program to randomly generate two matrices of # size [2x3 and 3x2] and do the matrix multiplication using # the two generated matrices. (Say A = BC) import numpy as np B= np.random.rand(2,3) C=np.random.rand(3,2) A=np.matmul(B,C) A # + colab={"base_uri": "https://localhost:8080/"} id="M5AHm72HkhWI" outputId="9b38b3ab-a3e4-4c04-b520-a953ec5fdfa6" #5 # Use a for loop to implement the above question five times # and thereafter take the average of all the five matrices # and display the result. import numpy as np #create for loop for 5 times i = 0 while i < 5: B = np.random.rand(2,2) A = np.add(A,B) i += 1 np.divide(A,5) print (A) # + colab={"base_uri": "https://localhost:8080/"} id="84SV2-o5V8Vw" outputId="fc69ff44-aa41-43c3-af51-b8f07b82d47b" #6 # Write a program to transfer the matrix (generated in #question 5) to the CSV file and then read the same matrix #and display the transposition of it. from numpy import genfromtxt np.savetxt("foo.csv", A, delimiter=",") print(genfromtxt('foo.csv', delimiter=',')) print(A.transpose()) # + colab={"base_uri": "https://localhost:8080/"} id="0zvuECFdhh7_" outputId="f32db097-c26e-4ca3-8838-951f6a258438" #7 # Write a program to find the mean, mode, and median of the #above-mentioned matrix. 
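# Note on the snippet below: it defines a fresh 3x3 matrix A rather than reusing the matrix
# from question 5; np.mean and np.median are taken over the flattened array, while
# scipy.stats.mode works along axis 0 (column-wise) and also returns the counts.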
from scipy import stats A = np.array([[30,80,10],[80,80,10],[80,50,50]]) print(np.mean(A)) print(np.median(A)) print(stats.mode(A)) # + colab={"base_uri": "https://localhost:8080/"} id="xl8A9i1Qmfh-" outputId="fb0fa14b-6853-4fb1-8cd8-0e9976bc7bf2" # 8 # upper traingle of the above array print(A[np.triu_indices(3)]) # + colab={"base_uri": "https://localhost:8080/"} id="a6zOa3zVnR5P" outputId="90bfadaa-5996-4b48-aaa9-8ec170ec0147" # 9 # Write a Python program to read specific columns of a given #CSV file and print the content of the columns # (Column number should be given by the user) import pandas as pd column=input("input column number to read in csv like 0, 1, 2, or 3 : ") name = ["aparna", "pankaj", "krishna", "sudhir"] degree = ["MBA", "BCA", "M.Tech", "MBA"] score = [90, 40, 80, 98] dict = {'name': name, 'degree': degree, 'score': score} df = pd.DataFrame(dict) df.to_csv('scores.csv') # saving the dataframe df = pd.read_csv('scores.csv',usecols=[int(column)], low_memory = True) print(df) # + colab={"base_uri": "https://localhost:8080/"} id="2fxY8_T0yLV_" outputId="92244873-d800-4f11-e419-3996a66a8b66" # 10 # Write a Python program to replace a user-defined number # present in the CSV file by the character ‘A’ import pandas as pd # Making data frame from the csv file df = pd.read_csv("scores.csv") # Printing the first 4 rows of the data frame for visualization print("before changing\n", ) print(df[:4]) column=input("value to replace: "); df.replace(to_replace =int(column), value ='A', inplace=True) print("changing now") df.to_csv("scores_changed.csv") df = pd.read_csv("scores_changed.csv") print(df[:4]) # + colab={"base_uri": "https://localhost:8080/"} id="z1nOmROX8i4q" outputId="c04fa709-9424-4789-c49c-55cb01365538" # Part 2 Numpy Basics #1 # Write a NumPy program to create an array of 10 zeros,10 ones, # 10 fives import numpy as np array=np.zeros(10)*0 print("An array of 10 zeros:") print(array) array=np.ones(10)*1 print("An array of 10 ones:") print(array) array=np.ones(10)*5 print("An array of 10 fives:") print(array) # + colab={"base_uri": "https://localhost:8080/"} id="82bhxgOOZwAf" outputId="7dc7bb98-96e4-4246-fd9b-352ccf733862" #2 #Write a NumPy program to create an array of all the even #integers from 10 to 50 import numpy as np array=np.arange(10,50,2) print(array) # + colab={"base_uri": "https://localhost:8080/"} id="VceoilBAaIGu" outputId="3e2258e9-f25d-4edf-df26-60993f4c3414" #3 #Write a NumPy program to generate a random number between 0 and 1 import random as rand rand.uniform(0,1) # + colab={"base_uri": "https://localhost:8080/"} id="nZhw8GP9a1_V" outputId="8a010f81-3b82-43e8-9d3d-71afcdc359af" #4 # Write a NumPy program to save the matrix import numpy as np A=np.random.rand(2,2) print ("actual random array is :\n" + str(A)) np.savetxt("array_save.txt",A, fmt='%f') print("reading from file :\n"+ str(np.loadtxt("array_save.txt", dtype=float))) # + colab={"base_uri": "https://localhost:8080/"} id="UCygpJYIeHga" outputId="65558efc-5f50-454a-8474-7f0675f90bd0" # Part 3 # 1 # Write a python program to read an image and save the image as a # matrix to a .csv file using pandas from PIL import Image from numpy import matrix import numpy as np image = Image.open("sample_data/Home.jpg") image.load() imagearray = np.asarray(image) # print first two elemnts as test print(imagearray[:2]) np.save("sample_data/imagedata.csv",imagearray) # + id="BTH6xx4Enqrv" # 2 #Write a program to import excel data from the .csv file #(generated in question 1) by excluding the last row and last 
column import pandas as pd read_file=pd.read_csv("sample_data/california_housing_test.csv") # remove the last column read_file.drop(read_file.columns[-1], axis=1, inplace=True) # remove last row read_file = read_file[:-1] read_file.to_excel ('sample_data/sample.xlsx') # + colab={"base_uri": "https://localhost:8080/"} id="I_UY4XD1s1cv" outputId="814a49ed-6d8d-4196-9f3f-d85bcda1b56f" import pandas as pd from datetime import date now = pd.to_datetime(str(date.today()), format='%Y-%m-%d') print("Today's date:") print(now) # + colab={"base_uri": "https://localhost:8080/"} id="y1Aw9RmstHHy" outputId="45ba3f56-7d59-4c93-e729-56313f2d47a8" # Part 4 # Create a mini calculator to know the age of a person after # putting the range (i.e. date of birth and current date) birth = input("Input your birth year :" ) birth = int(birth) today = date.today() age = today.year - birth print(age) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Let's explore the metadata import pandas as pd # ## Load the metadata df = pd.read_csv("../data/index.csv") df.head() # ## Check the labels df['label1'].unique() df['label2'].unique() df['label3'].unique() # ## Imbalanced classes df['label3'].value_counts() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### In this Notebook I develop a first model on MNIST dataset using plain PyTorch # + import torch from torch import nn from torch.nn import functional as F from torch.utils.data import DataLoader from torch.utils.data import random_split import torch.optim as optim from torchvision.datasets import MNIST from torchvision import transforms import matplotlib.pyplot as plt # - # globals BATCH_SIZE = 64 # + # data dataset = MNIST('', train=True, download=True, transform=transforms.ToTensor()) mnist_train, mnist_val = random_split(dataset, [55000, 5000]) train_loader = DataLoader(mnist_train, batch_size=BATCH_SIZE) val_loader = DataLoader(mnist_val, batch_size=BATCH_SIZE) # + # have a look at inputs for i, batch in enumerate(train_loader): inputs, targets = batch print(inputs.shape) if i == 0: break # + # now let's define a simple network to train on MNIST N_INPUT = 28*28*1 N_CLASSES = 10 class SimpleNet(nn.Module): def __init__(self): super(SimpleNet, self).__init__() self.fc1 = nn.Linear(N_INPUT, 84) self.fc2 = nn.Linear(84, 50) self.fc3 = nn.Linear(50,N_CLASSES) def forward(self, x): x = x.view(-1, N_INPUT) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) # network produces Logits x = self.fc3(x) return x # + simplenet = SimpleNet() optimizer = optim.Adam(simplenet.parameters(), lr=0.001) # + # custom train loop history = [] def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=5, device="cpu"): for epoch in range(1, epochs+1): training_loss = 0.0 valid_loss = 0.0 model.train() for batch in train_loader: optimizer.zero_grad() inputs, targets = batch # portiamo dati sulla GPU, se disponibile inputs = inputs.to(device) targets = targets.to(device) output = model(inputs) loss = loss_fn(output, targets) # back propagation loss.backward() optimizer.step() training_loss += loss.data.item() * inputs.size(0) # compute the average training_loss /= len(train_loader.dataset) model.eval() num_correct = 0 
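# Validation pass: model.eval() above switches dropout/batch-norm layers to inference
# behaviour, but gradients are still tracked in this loop; wrapping it in torch.no_grad()
# would be an optional memory saving.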
num_examples = 0 for batch in val_loader: inputs, targets = batch inputs = inputs.to(device) output = model(inputs) targets = targets.to(device) loss = loss_fn(output,targets) valid_loss += loss.data.item() * inputs.size(0) # compute accuracy on validation set correct = torch.eq(torch.max(F.softmax(output, dim=1), dim=1)[1], targets) num_correct += torch.sum(correct).item() num_examples += correct.shape[0] valid_loss /= len(val_loader.dataset) accuracy = num_correct / num_examples print('Epoch: {}, Training Loss: {:.2f}, Validation Loss: {:.2f}, accuracy = {:.2f}'.format(epoch, training_loss, valid_loss, accuracy)) # for history during epochs metrics = {} metrics['loss'] = training_loss metrics['valid_loss'] = valid_loss metrics['accuracy'] = accuracy history.append(metrics) return history # - history = train(simplenet, optimizer,torch.nn.CrossEntropyLoss(), train_loader, val_loader, epochs=10, device='cpu') history # + # extract the data from the history list vet_loss = [x['loss'] for x in history] vet_val_loss = [x['valid_loss'] for x in history] plt.figure(figsize=(9,6)) plt.title('Loss') plt.xlabel('epochs') plt.ylabel('loss') plt.plot(vet_loss, label='training loss') plt.plot(vet_val_loss, label='validation loss') plt.legend() plt.grid() # + # extract the data from the history list vet_acc = [x['accuracy'] for x in history] plt.figure(figsize=(9,6)) plt.title('Accuracy') plt.xlabel('epochs') plt.ylabel('loss') plt.plot(vet_acc, label='validation accuracy') plt.legend() plt.grid() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Generate data for the Budget Scenario Problem # This function creates the model to solve the Budget Scenario problem # using a Particle Swarm algorithm, BPSO coded by Seyedali Mirjalili. # # Data used here is from Priority List - Revisited by N. Order (2009) # # Developer: # Contact Info: # Created 20/03/2020 # Adapted to deal with modifications on the model, the idea we feed the # model with the number of the initiative we want to remove from the model, # that way I can call this function inside of loop on the main code. 
# # - 29/04/2020 # + import numpy as np import math import random ## Defeine the vector to fill up NumIn = 15 #total number of initiatives NumInOrder = 15 #number of original Initiatives in Order data Cost = np.ones(NumIn)*math.nan S1 = np.ones(NumIn)*math.nan S2 = np.ones(NumIn)*math.nan S3 = np.ones(NumIn)*math.nan S4 = np.ones(NumIn)*math.nan Initiative = np.array(['O','M','N','D','J','I','B','L','E','A','H','G','C','K','F']); Cost[0:NumInOrder] = [1,1,3,4,7,6,8,11,13,12,14,17,14,20,16]; # cost of each initiative S1[0:NumInOrder] = [10,7,6,8,4,8,8,10,8,7,1,9,5,6,1]; # score for scenario 1 S2[0:NumInOrder] = [1,1,3,7,3,7,8,5,4,2,5,5,7,4,7]; # score for scenario 2 S3[0:NumInOrder] = [8,3,5,1,7,4,3,6,9,9,9,8,5,9,9]; # score for scenario 3 S4[0:NumInOrder] = [2,4,6,2,9,1,10,9,7,1,9,4,7,4,1]; # score for scenario 4 # Generate the uniformly random data to increase for ind in range(NumInOrder, NumIn): S1[ind]=random.randint(1,9) S2[ind]=random.randint(1,9) S3[ind]=random.randint(1,9) S4[ind]=random.randint(1,9) Cost[ind] = random.randint(7,50) ScenarioMatrix = np.array([S1,S2,S3,S4]) # Saving the generated data nameSave = 'data' +str(NumIn)+'in' np.save(nameSave, [Cost, S1, S2, S3, S4]) # - # Load the data nameload = 'data' +str(NumIn)+'in.npy' data = np.load(nameload) data # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: py_36_env # language: python # name: py_36_env # --- import matplotlib.pyplot as plt import pandas as pd import seaborn as sns from matplotlib import pyplot as plt import numpy as np # + import psycopg2 import os from urllib.parse import urlparse # pyodbc connection string # connection_string = "Driver={Amazon Redshift (x64)}; Server=openalex-4.cnw77bm9bpcp.us-east-1.redshift.amazonaws.com; Database=dev; UID=awsuser; PWD=!; Port=5439" # conn = pyodbc.connect(connection_string, autocommit=True) connection_string = urlparse(os.getenv("DATABASE_URL_OPENALEX_REDSHIFT")) username = connection_string.username password = database = connection_string.path[1:] hostname = connection_string.hostname port = connection_string.port conn = psycopg2.connect( database = database, user = username, password = password, host = hostname, port = port ) # + # SQL query to Dremio sql_query = r""" SELECT * FROM util.temp_data_estimated_citation where citation_count > 0 order by random() limit 10000""" df = pd.read_sql(sql_query, conn) # - df.hist(figsize=(12,10)) df # df.drop(['dteday', 'instant'], axis=1, inplace=True) sns.jointplot('citation_count','estimated_citation', data=df) sns.heatmap(df.corr(), annot=True, linewidth=2, cmap=plt.cm.Blues) # + import math df = pd.read_sql(sql_query, conn) # - single author or not # - log years since published # - is it a book, monograph, report, or book-chapter # - is it a journal-article (because interacts with publishers and big journals below) # - is it a big publisher # - lookup for big journals , maybe using 122 journals with avg multiplier in the last 10 years with multiplier over 1.05 journals = """64187185 125754415 3121261024 21442059 1010394304 3880285 9692511 13479253 24807848 45305740 98026630 111155417 2493613807 140251998 104917558 49861241 109565702 177147899""".split() publishers = """Society for Neuroscience Association for Computing Machinery Journal of Bone and Joint Surgery American Diabetes Association American Society for Microbiology American Economic Association American Psychological Association EMBO Institute for 
Operations Research and the Management Sciences Cold Spring Harbor Laboratory The Endocrine Society The Rockefeller University Press American Society for Clinical Investigation The American Association of Immunologists Annual Reviews Ovid Technologies Wolters Kluwer -American Heart Association Proceedings of the National Academy of Sciences""".split() df['log_citation_count'] = [math.log10(count+1.0) for count in df['citation_count']] df['log_estimated_citation'] = [math.log10(count+1.0) for count in df['estimated_citation']] if True: df['log_years_old'] = [math.log10(age+1.0) for age in df['years_old']] # df['log_num_authors'] = np.log(df.num_authors+0.01) # df['is_single_author'] = [1 if n > 1 else 0 for n in df['num_authors']] # df['has_references'] = [1 if n > 1 else 0 for n in df['reference_count']] # df['log_reference_count'] = np.log(df.reference_count+0.01) # df['is_journal'] = [1 if g == 'journal-article' else 0 for g in df['genre']] # df['is_other_cited_genre'] = [1 if g in ['book', 'report', 'monograph', 'book-chapter'] else 0 for g in df['genre']] # df['boost_journal'] = [1 if str(j) in journals else 0 for j in df['journal_id']] df['big_publisher'] = [1 if p in publishers else 0 for p in df['publisher']] # df.drop(['years_old', 'reference_count', 'num_authors', 'paper_id', 'genre', 'display_name', 'publisher', 'multiplier', 'journal_id'], axis=1, inplace=True) # df.drop(['paper_id', 'genre', 'display_name', 'publisher', 'multiplier', 'journal_id'], axis=1, inplace=True) if False: df.drop(['citation_count', 'estimated_citation', 'years_old', 'reference_count', 'num_authors', 'paper_id', 'genre', 'display_name', 'publisher', 'multiplier', 'journal_id'], axis=1, inplace=True) else: df.drop(['citation_count', 'estimated_citation', 'years_old', 'reference_count', 'num_authors', 'paper_id', 'genre', 'display_name', 'publisher', 'multiplier', 'journal_id'], axis=1, inplace=True) # - df.describe() # + from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler # data features and target feature # x = df.drop('estimated_citation', axis=1) # y = df['estimated_citation'] x = df.drop('log_estimated_citation', axis=1) y = df['log_estimated_citation'] # split data x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0, test_size=0.3) # + from sklearn.linear_model import LinearRegression from sklearn import metrics # Linear Regression Model # model and fit model = LinearRegression(fit_intercept=False) model.fit(x_train, y_train) # prediction predictions = model.predict(x_test) # plot prediction plt.scatter(y_test, predictions) plt.title('Linear Regression Model') plt.xlabel("Test") plt.ylabel("Prediction") plt.grid(True) plt.show() # metrics print('Accuracy metrics:') print('MAE: ', metrics.mean_absolute_error(y_test, predictions)) print('MSE: ', metrics.mean_squared_error(y_test, predictions)) print('Root MSE: ', np.sqrt(metrics.mean_squared_error(y_test, predictions))) print('Score: ', model.score(x_test, y_test)) # - model.feature_names_in_ model.coef_ pow(10, model.predict(np.array([[math.log10(1000), math.log10(10), 1]]))) pow(10, math.log10(1000) * 1.07156829 + math.log10(10) * -0.02960674 + math.log10(10) * 0.07476177) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] 
id="2bmfoDXVoAh3" # # Multilingual CLIP # # ## Install Requirements and Download OpenAI CLIP Model # This section might take some minutes. # + id="v5JU97PayTvv" outputId="886751cf-133f-45c5-9068-87bf42b11891" colab={"base_uri": "https://localhost:8080/"} import subprocess CUDA_version = [s for s in subprocess.check_output(["nvcc", "--version"]).decode("UTF-8").split(", ") if s.startswith("release")][0].split(" ")[-1] print("CUDA version:", CUDA_version) if CUDA_version == "10.0": torch_version_suffix = "+cu100" elif CUDA_version == "10.1": torch_version_suffix = "+cu101" elif CUDA_version == "10.2": torch_version_suffix = "" else: torch_version_suffix = "+cu110" # !pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex # !pip install ftfy==5.8 # !pip install transformers import matplotlib.pyplot as plt from PIL import Image import numpy as np import os, random import torch import warnings warnings.filterwarnings("ignore") # !pip install git+https://github.com/openai/CLIP.git import clip # !git clone https://github.com/FreddeFrallan/Multilingual-CLIP # %cd Multilingual-CLIP # !bash get-weights.sh # + [markdown] id="8ssQbLEFoksN" # ### Load The Multilingual Text Encoder # + id="4QMUva872Fr7" outputId="04180bda-cf17-40f0-d5ea-a119cc06276b" colab={"base_uri": "https://localhost:8080/", "height": 177} from src import multilingual_clip text_model = multilingual_clip.load_model('M-BERT-Distil-40') # + [markdown] id="56aiJ7CspXWd" # ### Load The Matching CLIP Model # + id="FymFpSD48OFB" outputId="73fdd6de-042c-4319-8fe7-b9835ebca7b7" colab={"base_uri": "https://localhost:8080/"} clip_model, compose = clip.load('RN50x4') #input_resolution = clip_model.input_resolution.item() # error occured context_length = clip_model.context_length vocab_size = clip_model.vocab_size print("Model parameters:", f"{np.sum([int(np.prod(p.shape)) for p in clip_model.parameters()]):,}") #print("Input resolution:", input_resolution) print("Context length:", context_length) print("Vocab size:", vocab_size) # + [markdown] id="Q_ronntW8Z1F" # ### Read in the Images # + colab={"base_uri": "https://localhost:8080/", "height": 139} id="w4CyQ2tl8dKd" outputId="f1c4d46a-02fe-4a25-fff7-4da5b2667182" main_path = '/content/Multilingual-CLIP/Images/' demo_images = { 'Green Apple': 'green apple.jpg', 'Red Apple': 'red apple.jpg', 'Purple Apple': 'purple apple.png', 'Orange Apple': 'Orange Apple.png', 'Fruit Bowl': 'fruit bowl.jpg', 'Bananas on Tree': 'bananas.jpg', } images = {name: Image.open(main_path + p) for name, p in demo_images.items()} fig = plt.figure() fig.set_size_inches(30,5) for i, img in enumerate(images.values()): a=fig.add_subplot(1,len(images), i+1) plt.imshow(img, ) plt.axis('off') # + [markdown] id="3xeGiNoVAq7t" # ### Create Captions # + id="ergScfHaAwbw" japanese_captions = [ '緑色のリンゴ', '紅いリンゴ', '紫のりんごと手', 'オレンジ色の果物', 'たくさんの果物とバスケット', 'バナナがたくさん' ] russan_captions = [ 'Зеленое яблоко', 'Красное яблоко', 'Фиолетовое яблоко', 'Апельсиновое яблоко', 'Миска с фруктами', 'Гроздь бананов свисает с дерева' ] french_captions = [ 'Une pomme verte', 'Une pomme rouge', 'Une pomme violette', 'Une pomme orange', 'Un bol rempli de fruits', 'Un tas de bananes pendu à un arbre' ] german_captions = [ 'Ein grüner Apfel', 'Ein roter Apfel', 'Ein lila Apfel', 'Ein orangefarbener Apfel', 'Eine Schüssel voller Früchte', 'Ein Bündel Bananen hängt an einem Baum' ] spansh_captions = [ 'Una manzana verde', 'Una manzana roja', 'Una manzana de 
color lila', 'Una manzana de color naranja', 'Un frutero lleno de fruta', 'Un racimo de bananas colgados de un banano', ] greek_captions = [ 'Ένα πράσινο μήλο', 'Ένα κόκκινο μήλο', 'Ένα μοβ μήλο', 'Ένα πορτοκαλί μήλο', 'Ένα μπολ γεμάτο με φρούτα', 'Ένα τσαμπί μπανάνες κρεμάμενες από ένα δέντρο', ] swedish_captions = [ 'Ett grönt äpple', 'Ett rött äpple', 'Ett lila äpple', 'Ett oranget äpple', 'En skål fylld med frukt', 'En klase bananer som hänger från ett träd' ] all_captions = {'Japanese': japanese_captions, 'French': french_captions, 'German': german_captions, 'Spanish': spansh_captions, 'Greek': greek_captions, 'Swedish': swedish_captions } # + [markdown] id="AleQ_9u5r_FN" # ### Prepare Images for CLIP # + id="PDvfAOlOruTq" img_input = torch.stack([compose(img).to('cuda:0') for img in images.values()]) # + [markdown] id="kAHNhOG5DJNc" # ### Generate Text & Vision Embeddings # + id="jh4CEXWKGRJD" outputId="0a0f7416-f667-4d04-f4ab-a150b8f75828" colab={"base_uri": "https://localhost:8080/"} with torch.no_grad(): image_embs = clip_model.encode_image(img_input).float().to('cpu') language_embs = {} for lang, captions in all_captions.items(): language_embs[lang] = text_model(captions) print("CLIP-Vision: {}".format(image_embs.shape)) for lang, embs in language_embs.items(): print("{}: {}".format(lang, embs.shape)) # + [markdown] id="nN2d9mJoFoso" # ### Compare Predictions # + [markdown] id="pvcy1M27G4td" # Compare the Cosine-Similarities between the image embeddings and the different language embeddings. # + id="QjF_zccgGcTF" def compare_embeddings(logit_scale, img_embs, txt_embs): # normalized features image_features = img_embs / img_embs.norm(dim=-1, keepdim=True) text_features = txt_embs / txt_embs.norm(dim=-1, keepdim=True) # cosine similarity as logits logits_per_image = logit_scale * image_features @ text_features.t() logits_per_text = logit_scale * text_features @ image_features.t() # shape = [global_batch_size, global_batch_size] return logits_per_image, logits_per_text # CLIP Temperature scaler logit_scale = clip_model.logit_scale.exp().float().to('cpu') language_logits = {} for lang, embs in language_embs.items(): language_logits[lang] = compare_embeddings(logit_scale, image_embs, embs) # + [markdown] id="fXQ49XtJL1V1" # ### Visualize Results # + [markdown] id="uNU1R2rkGrMD" # Here we will not visualize the results, so that every column is the Softmax distribution over all the texts for the respective image. 
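# Before plotting, a quick sanity check (a minimal sketch reusing the `language_logits`
# dict computed above): after the softmax over texts and the transposition described here,
# each column of the resulting matrix (one per image) should sum to roughly 100%.

# +
# Sanity check on one language: columns of the transposed softmax matrix sum to ~100%.
check_logits, _ = language_logits['French']
check_probs = check_logits.softmax(dim=-1).cpu().detach().numpy().T * 100
print(check_probs.sum(axis=0))  # expect six values close to 100, one per image
# -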
# + id="CRap5vgT57nt" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="6d353fd4-1444-4fc0-8ae6-9fb2b61b4e0c" def plot_heatmap(result_matrix): height, width = result_matrix.shape fig, ax = plt.subplots() fig.set_size_inches(8,8) im = ax.imshow(result_matrix) # Create X & Y Labels ax.set_xticks(np.arange(width)) ax.set_yticks(np.arange(height)) ax.set_xticklabels(["Image {}".format(i) for i in range(width)]) ax.set_yticklabels(["Text {}".format(i) for i in range(height)]) for i in range(height): for j in range(width): text = ax.text(j, i, result_matrix[i, j], ha="center", va="center", color='grey', size=20) fig.tight_layout() plt.show() for lang, (img_logits, txt_logits) in language_logits.items(): # Convert Logits into Softmax predictions probs = img_logits.softmax(dim=-1).cpu().detach().numpy() # Transpose so that each column is the softmax for each picture over the texts probs = np.around(probs, decimals=2).T * 100 print("Language: {}".format(lang)) plot_heatmap(probs) # + [markdown] id="w3yGhGEvFh7v" # ## Conclusion # Although the diagonal is not completely maxed out, all languages managed to correctly classify all images. Interestingly, all languages had an easier time classifying the purple apple which was photoshopped than the red apple. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Wilcoxon and Chi Squared # + import numpy as np import pandas as pd df = pd.read_csv("prepared_neuror2_data.csv") # + def stats_for_neuror2_range(lo, hi): admissions = df[df.NR2_Score.between(lo, hi)] total_patients = admissions.shape[0] readmits = admissions[admissions.UnplannedReadmission] total_readmits = readmits.shape[0] return (total_readmits, total_patients, "%.1f" % (total_readmits/total_patients*100,)) mayo_davis = [] for (expected, (lo, hi)) in [(1.4, (0, 0)), (4, (1, 4)), (5.6, (5, 8)), (14.2, (9, 13)), (33.0, (14, 19)), (0.0, (20, 22))]: (total_readmits, total_patients, readmit_percent) = stats_for_neuror2_range(lo, hi) mayo_davis.append([lo, hi, expected, readmit_percent, total_readmits, total_patients]) title="Davis and Mayo Populations by NeuroR2 Score" print(title) print("-" * len(title)) print(pd.DataFrame(mayo_davis, columns=["Low", "High", "Mayo %", "Davis %", "Readmits", "Total"]).to_string(index=False)) # + # Continuous variables were compared using wilcoxon from scipy.stats import ranksums as wilcoxon def create_samples(col_name): unplanned = df[df.UnplannedReadmission][col_name].values planned = df[~df.UnplannedReadmission][col_name].values return (unplanned, planned) continous_vars = ["AdmissionAgeYears", "LengthOfStay", "NR2_Score"]#, "MsDrgWeight"] for var in continous_vars: (unplanned, planned) = create_samples(var) (stat, p) = wilcoxon(unplanned, planned) print ("%30s" % (var,), "p-value %f" % (p,)) # - unplanned, planned = create_samples("LengthOfStay") print(pd.DataFrame(unplanned, columns=["Unplanned Readmission"]).describe()) print(pd.DataFrame(planned, columns=[" Index Only Admission"]).describe()) # + # Categorical variables were compared using chi squared from scipy.stats import chi2, chi2_contingency from IPython.core.display import display, HTML # Collect all the categorical features cols = sorted([col for col in df.columns if "_" in col]) for var in continous_vars: try: cols.remove(var) except: pass index_only = df[~df.UnplannedReadmission].shape[0] unplanned_readmit = 
df[df.UnplannedReadmission].shape[0]

# Build an HTML comparison table: one block per categorical variable, with observed and
# expected counts for the index-only and unplanned-readmission groups plus the chi-squared
# p-value. (The HTML tags were stripped in the source text; minimal markup is assumed here.)
html = "<table><tr>"
for th in ["Characteristic",
           "Index admission only<br>(n=%d)" % (index_only,),
           "Unplanned readmission<br>(n = %d)" % (unplanned_readmit,),
           "p Value"]:
    html += "<th>%s</th>" % (th,)
html += "</tr>"

start_row = "<tr><td>%s</td>"
end_row = "<td>%d (%.1f)</td><td>%d (%.1f)</td><td></td></tr>"
pval_str = lambda p: "<0.001" if p < 0.001 else "%.3f" % p
col_str = lambda col, p: "<b>%s</b>" % (col,) if p < 0.05 else col

for col in sorted(cols):
    table = pd.crosstab(df[col], df.UnplannedReadmission)
    stat, p, dof, expected = chi2_contingency(table)
    # characteristic name (bold if significant) and its p-value on their own row
    html += "<tr><td>%s</td><td></td><td></td><td>%s</td></tr>" % (col_str(col, p), pval_str(p))
    html += start_row % ("No",)
    html += end_row % (table.values[0][0], expected[0][0], table.values[0][1], expected[0][1])
    try:
        html += start_row % ("Yes",)
        html += end_row % (table.values[1][0], expected[1][0], table.values[1][1], expected[1][1])
    except IndexError:
        html += "<td>--</td></tr>"
html += "</table>
    " display(HTML(html)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.4 (''venv'': venv)' # name: pythonjvsc74a57bd08eef4f11fdc106e9248655934e6a72dc1782c2a0ac6e772311e9a3a7e820e6d0 # --- # + ### 加载数据 import pandas import pickle import numpy from pyecharts.charts import Bar, Radar from pyecharts.charts import Pie from pyecharts import options as opts from pyecharts.commons.utils import JsCode from pyecharts.charts import Radar SAVE_DATA = False dataset = pandas.read_csv('dataset.csv') print(f'Total data: {dataset.shape[0]}') # + pycharm={"name": "#%%\n"} ### 清洗数据 # 洗掉不必要的列 data = dataset.drop( [ 'Unnamed: 0', '提交答卷时间', '所用时间', '来源', '来源详情', '来自IP', '1、请问您的学校所在地区和类别:' ], axis=1 ) # 列名含义,接下来使用下标访问 # 下标为整型int64 不是str! COLS_NAME = data.columns.values.tolist() data.columns = [idx for idx in range(len(COLS_NAME))] # 地区有两个-3的异常值 清洗 data = data[data[91] != -3] # 14题数据为1or2 因此全体-1 统一处理 data.iloc[:,53:58] = data.iloc[:,53:58] - 1 # 线上学习时间有错误值及空值 0~15 data = data[data[22].isin([x for x in range(16)])].astype(int) # 输出最后行数 print(f'Data cleaning completed!Total:\n{data.shape[0]}') NUMS = data.shape[0] # + pycharm={"name": "#%%\n"} # 导出数据csv if SAVE_DATA: data.to_csv('./cooked.csv') with open('./COLS_NAME.dat', 'wb') as f: pickle.dump(COLS_NAME, f) # - # 富文本饼图设置 PIE_SETTINGS = opts.LabelOpts( position="outside", formatter="{b|{b}: }{per|{d}%} ", background_color="#eee", border_color="#aaa", border_width=1, border_radius=4, rich={ "a": {"color": "#999", "lineHeight": 22, "align": "center"}, "abg": { "backgroundColor": "#e3e3e3", "width": "100%", "align": "right", "height": 22, "borderRadius": [4, 4, 0, 0], }, "hr": { "borderColor": "#aaa", "width": "100%", "borderWidth": 0.5, "height": 0, }, "b": {"fontSize": 16, "lineHeight": 33}, "per": { "color": "#eee", "backgroundColor": "#334455", "padding": [2, 4], "borderRadius": 2, }, }, ) print('PIE_SETTING deployed!') # + pycharm={"name": "#%%\n"} # 分析参加调查的学生的年级分布(输出柱状图) res = [0 for _ in range(12)] for _, grade in data[1].items(): res[grade-1] = res[grade-1] + 1 bar = ( Bar() .add_xaxis(["一年级", "二年级", "三年级", "四年级", "五年级", "六年级", "初一", "初二", "初三", "高一", "高二", "高三"]) .add_yaxis("问卷人数", res) .set_global_opts( title_opts=opts.TitleOpts(title="学生的年级分布"), legend_opts=opts.LegendOpts(is_show=True) ) ) bar.render_notebook() # + # 统计学生使用设备情况 keys = ['电视', '台式电脑', '平板', '手机', '音频', '纸质学习资料'] res = [data[idx].value_counts()[1] for idx in range(2,8)] res = numpy.array(res) res = res / res.sum() res = [list(x) for x in zip(keys, res)] pie = ( Pie() .add( "", res, radius=["40%", "55%"], label_opts=PIE_SETTINGS ) .set_global_opts(title_opts=opts.TitleOpts(title="学生上课设备使用情况统计")) ) pie.render_notebook() # + # 统计平台功能使用情况 keys = ['回看课程视频', '作业提交', '随堂测试', '视频会议', '作业批改反馈', '课堂发言', '班级通知', '班级圈', '优秀作业查看', '学科竞赛游戏', '屏幕共享', '弹幕', '讨论'] res = [int(data[idx].value_counts()[1]) for idx in range(8,20)] bar = ( Bar() .add_xaxis(keys) .add_yaxis('使用人数', res) .set_global_opts( xaxis_opts=opts.AxisOpts(axislabel_opts=opts.LabelOpts(rotate=30)), title_opts=opts.TitleOpts(title="平台功能使用情况"), legend_opts=opts.LegendOpts(is_show=True) ) ) bar.render_notebook() # + keys = ['20分钟', '20~30分钟', '30~45分钟', '45分钟以上'] res = [data[21].value_counts()[idx] for idx in range(1,5)] res = numpy.array(res) res = res / res.sum() res = [list(x) for x in zip(keys, res)] pie = ( Pie() .add( "", res, radius=["40%", "55%"], label_opts=PIE_SETTINGS ) 
.set_global_opts(title_opts=opts.TitleOpts(title="学生上课时长情况统计")) ) pie.render_notebook() # + # 每天在线学习时间分析 res = [int(data[22].value_counts()[idx]) for idx in range(0,16)] bar = ( Bar() .add_xaxis(list(range(0,16))) .add_yaxis('人数', res) .set_global_opts( title_opts=opts.TitleOpts(title="学生每天在线学习时间统计"), legend_opts=opts.LegendOpts(is_show=True) ) ) bar.render_notebook() # + pycharm={"name": "#%%\n"} keys = ["能","监督下能","有时能,有时不能","基本不能","不适应"] res = [data[23].value_counts()[idx] for idx in range(1,6)] res = numpy.array(res) res = res / res.sum() res = [list(x) for x in zip(keys,values)] pie = ( Pie() .add( "", res, radius=["40%", "55%"], label_opts=PIE_SETTINGS ) .set_global_opts(title_opts=opts.TitleOpts(title="学生状态统计")) ) pie.render_notebook() # + pycharm={"name": "#%%\u5b66\u4e60\u65f6\u5019\u662f\u5426\u9700\u8981\u5bb6\u4eba\u966a\u4f34\n"} keys = ["完全不需要","有时需要","完全需要"] res = [data[24].value_counts()[idx] for idx in range(1,4)] res = numpy.array(res) res = res / res.sum() res = [list(x) for x in zip(keys,values)] pie = ( Pie() .add( "", res, radius=['40%', '55%'], label_opts=PIE_SETTINGS ) .set_global_opts(title_opts=opts.TitleOpts(title="学习需要家人陪伴统计")) ) pie.render_notebook() # + pycharm={"name": "#%%\u7edf\u8ba1\u5b66\u751f\u559c\u6b22\u7684\u8bfe\u5802\u7ec4\u7ec7\u5f62\u5f0f\n"} keys = ["直播","录播","资源包","电视课堂","直播+录播","直播+资源包","录播+资源包","直播+录播+资源包","录播+资源包+线上辅导答疑"] res = [int(data[idx].value_counts()[1]) for idx in range(25,34)] res = numpy.array(res) res = res / res.sum() bar = ( Bar() .add_xaxis(keys) .add_yaxis("", list(res)) .set_global_opts( xaxis_opts=opts.AxisOpts(axislabel_opts=opts.LabelOpts(rotate=30)), title_opts=opts.TitleOpts(title="统计学生喜欢的课堂组织形式") ) .set_series_opts( label_opts=opts.LabelOpts( position="top", formatter=JsCode( "function(x){return Number(x.data*100).toFixed() + '%';}" ), ) ) ) bar.render_notebook() # + pycharm={"name": "#%% \u7edf\u8ba1\u5b66\u751f\u559c\u6b22\u7684\u7ebf\u4e0a\u8bfe\u7a0b\u5185\u5bb9\n"} keys = ["学科课程新课","学科课程复习","音美体劳教育","专题教育"] res = [data[idx].value_counts()[1] for idx in range(34,38)] res = numpy.array(res) res = res / res.sum() bar = ( Bar() .add_xaxis(keys) .add_yaxis('', list(res)) .set_global_opts( title_opts=opts.TitleOpts(title="统计学生对线上课程内容的喜爱情况"), ) .set_series_opts( label_opts=opts.LabelOpts( position="top", formatter=JsCode( "function(x){return Number(x.data*100).toFixed() + '%';}" ), ) ) ) bar.render_notebook() # + pycharm={"name": "#%% \u7edf\u8ba1\u5b66\u751f\u7528\u54ea\u4e9b\u65b9\u6cd5\u89e3\u51b3\u672a\u638c\u63e1\u77e5\u8bc6\u70b9\n"} keys = ["查阅线上资源","视频回放","教师线上答疑","社交软件咨询教师","同学交流","暂时放下"] res = [data[idx].value_counts()[1] for idx in range(38,44)] res = numpy.array(res) res = res / res.sum() bar = ( Bar() .add_xaxis(keys) .add_yaxis('', list(res)) .set_global_opts( xaxis_opts=opts.AxisOpts(axislabel_opts=opts.LabelOpts(rotate=30)), title_opts=opts.TitleOpts(title="统计学生通过哪些方法解决未掌握知识点"), ) .set_series_opts( label_opts=opts.LabelOpts( position="top", formatter=JsCode( "function(x){return Number(x.data*100).toFixed() + '%';}" ), ) ) ) bar.render_notebook() # + pycharm={"name": "#%% \u7edf\u8ba1\u5b66\u751f\u7ebf\u4e0a\u5b66\u4e60\u4e92\u52a8\u9891\u7387\n"} keys = ["不回答","偶尔参与回答","大多数情况下能回答","积极发言","没有问答环节"] res = [data[idx].value_counts()[1] for idx in range(44,49)] res = numpy.array(res) res = res / res.sum() res = [list(x) for x in zip(keys,values)] pie = ( Pie() .add( "", res, radius=['40%', '55%'], label_opts=PIE_SETTINGS ) .set_global_opts(title_opts=opts.TitleOpts(title="统计学生线上互动频率")) ) pie.render_notebook() # 
+ pycharm={"name": "#%%\u7edf\u8ba1\u7ebf\u4e0a\u5b66\u4e60\u65f6\u5019\u9047\u5230\u7684\u4e3b\u8981\u95ee\u9898\n"} keys = ["网络卡顿","线上软件缺陷","与老师沟通不便", "作业不合理","课程质量欠佳","眼睛疲劳","软件太多容易混淆","环境干扰"] res = [data[idx].value_counts()[1] for idx in range(49,57)] res = numpy.array(res) res = res / res.sum() bar = ( Bar() .add_xaxis(keys) .add_yaxis("", list(res)) .set_global_opts( xaxis_opts=opts.AxisOpts(axislabel_opts=opts.LabelOpts(rotate=30)), title_opts=opts.TitleOpts(title="线上学习问题统计") ) .set_series_opts( label_opts=opts.LabelOpts( position="top", formatter=JsCode( "function(x){return Number(x.data*100).toFixed() + '%';}" ), ) ) ) bar.render_notebook() # + pycharm={"name": "#%%\u7edf\u8ba1\u5b66\u751f\u7ebf\u4e0a\u5b66\u4e60\u57f9\u517b\u7684\u80fd\u529b\n"} # 培养能力 res = [data[idx].value_counts()[1] for idx in range(57,63)] res = numpy.array(res) res = res / res.sum() res = [{"value": res.tolist(), "name": "培养能力"}] MAX_VALUE = 0.35 radar_schema = [ {"name": "自主学习能力", "max": MAX_VALUE}, {"name": "自控能力", "max": MAX_VALUE}, {"name": "数字化资源的利用能力", "max": MAX_VALUE}, {"name": "表达沟通", "max": MAX_VALUE}, {"name": "生活实践", "max": MAX_VALUE}, {"name": "其他", "max": MAX_VALUE}, ] radar = ( Radar() .set_colors(["#4587E7"]) .add_schema( schema=radar_schema, shape="circle", center=["50%", "50%"], radius="80%", angleaxis_opts=opts.AngleAxisOpts( min_=0, max_=360, is_clockwise=False, interval=5, axistick_opts=opts.AxisTickOpts(is_show=False), axislabel_opts=opts.LabelOpts(is_show=False), axisline_opts=opts.AxisLineOpts(is_show=False), splitline_opts=opts.SplitLineOpts(is_show=False), ), radiusaxis_opts=opts.RadiusAxisOpts( min_=0, max_=MAX_VALUE, interval=0.1, splitarea_opts=opts.SplitAreaOpts( is_show=True, areastyle_opts=opts.AreaStyleOpts(opacity=1) ), ), polar_opts=opts.PolarOpts(), splitarea_opt=opts.SplitAreaOpts(is_show=False), splitline_opt=opts.SplitLineOpts(is_show=False), ) .add( series_name="", data=res, areastyle_opts=opts.AreaStyleOpts(opacity=0.1), linestyle_opts=opts.LineStyleOpts(width=1), ) .set_series_opts(label_opts=opts.LabelOpts(is_show=False)) .set_global_opts( title_opts=opts.TitleOpts(title="统计学生线上学习培养能力"), legend_opts=opts.LegendOpts() ) ) radar.render_notebook() # + keys = ["直播方式","录播方式","教师教学态度","教师教学水平","资源内容","线上学习平台","总体满意度"] level1 = [ { "value": data[idx].value_counts()[1] / NUMS, "percent": data[idx].value_counts()[1] / NUMS } for idx in range(64,71) ] level2 = [ { "value":data[idx].value_counts()[2] / NUMS, "percent":data[idx].value_counts()[2] / NUMS } for idx in range(64,71) ] level3 = [ { "value":data[idx].value_counts()[3] / NUMS, "percent":data[idx].value_counts()[3] / NUMS } for idx in range(64,71) ] level4 = [ { "value":data[idx].value_counts()[4] / NUMS, "percent":data[idx].value_counts()[4] / NUMS } for idx in range(64,71) ] bar = ( Bar() .add_xaxis(keys) .add_yaxis("非常满意", level1, stack="stack1",category_gap="25%") .add_yaxis("满意",level2, stack="stack1", category_gap="25%") .add_yaxis("一般",level3, stack="stack1", category_gap="25%") .add_yaxis("不满意",level4, stack="stack1", category_gap="25%") .set_series_opts( label_opts=opts.LabelOpts( position="right", formatter=JsCode( "function(x){return Number(x.data.percent * 100).toFixed() + '%';}" ), ) ) .set_global_opts( title_opts=opts.TitleOpts(title="线上学习满意度统计"), xaxis_opts=opts.AxisOpts(axislabel_opts=opts.LabelOpts(rotate=30)), ) ) bar.render_notebook() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 # language: python # name: python3 # --- import torch from torch.autograd import Variable import warnings from torch import nn from collections import OrderedDict import os import numpy as np import pandas as pd import seaborn as sns from sklearn.linear_model import LogisticRegression import matplotlib.pyplot as plt import data as data from data.BehavioralDataset import BehavioralDataset from data.BehavioralHmSamples import BehavioralHmSamples import scipy from sklearn.preprocessing import MinMaxScaler warnings.filterwarnings("ignore") def load_netG(path, isize, nz, nc, ngf, n_extra_layers): assert isize % 16 == 0, "isize has to be a multiple of 16" cngf, tisize = ngf//2, 4 while tisize != isize: cngf = cngf * 2 tisize = tisize * 2 main = nn.Sequential() # input is Z, going into a convolution main.add_module('initial:{0}-{1}:convt'.format(nz, cngf), nn.ConvTranspose2d(nz, cngf, 4, 1, 0, bias=False)) main.add_module('initial:{0}:batchnorm'.format(cngf), nn.BatchNorm2d(cngf)) main.add_module('initial:{0}:relu'.format(cngf), nn.ReLU(True)) csize, cndf = 4, cngf while csize < isize//2: main.add_module('pyramid:{0}-{1}:convt'.format(cngf, cngf//2), nn.ConvTranspose2d(cngf, cngf//2, 4, 2, 1, bias=False)) main.add_module('pyramid:{0}:batchnorm'.format(cngf//2), nn.BatchNorm2d(cngf//2)) main.add_module('pyramid:{0}:relu'.format(cngf//2), nn.ReLU(True)) cngf = cngf // 2 csize = csize * 2 # Extra layers for t in range(n_extra_layers): main.add_module('extra-layers-{0}:{1}:conv'.format(t, cngf), nn.Conv2d(cngf, cngf, 3, 1, 1, bias=False)) main.add_module('extra-layers-{0}:{1}:batchnorm'.format(t, cngf), nn.BatchNorm2d(cngf)) main.add_module('extra-layers-{0}:{1}:relu'.format(t, cngf), nn.ReLU(True)) main.add_module('final:{0}-{1}:convt'.format(cngf, nc), nn.ConvTranspose2d(cngf, nc, 4, 2, 1, bias=False)) main.add_module('final:{0}:tanh'.format(nc), nn.Tanh()) state_dict = torch.load(path, map_location=torch.device('cpu')) new_state_dict = OrderedDict() for k, v in state_dict.items(): name = k[5:] # remove `main.` new_state_dict[name] = v main.load_state_dict(new_state_dict, strict=False) return main def load_netG_mlp(path, isize, nz, nc, ngf): main = nn.Sequential( # Z goes into a linear of size: ngf nn.Linear(nz, ngf), nn.ReLU(True), nn.Linear(ngf, ngf), nn.ReLU(True), nn.Linear(ngf, ngf), nn.ReLU(True), nn.Linear(ngf, nc * isize * isize), ) state_dict = torch.load(path, map_location=torch.device('cpu')) new_state_dict = OrderedDict() for k, v in state_dict.items(): name = k[5:] # remove `main.` new_state_dict[name] = v main.load_state_dict(new_state_dict, strict=False) return main # + def load_netD(path, isize, nc, ndf, n_extra_layers): assert isize % 16 == 0, "isize has to be a multiple of 16" main = nn.Sequential() # input is nc x isize x isize main.add_module('initial:{0}-{1}:conv'.format(nc, ndf), nn.Conv2d(nc, ndf, 4, 2, 1, bias=False)) main.add_module('initial:{0}:relu'.format(ndf), nn.LeakyReLU(0.2, inplace=True)) csize, cndf = isize / 2, ndf # Extra layers for t in range(n_extra_layers): main.add_module('extra-layers-{0}:{1}:conv'.format(t, cndf), nn.Conv2d(cndf, cndf, 3, 1, 1, bias=False)) main.add_module('extra-layers-{0}:{1}:batchnorm'.format(t, cndf), nn.BatchNorm2d(cndf)) main.add_module('extra-layers-{0}:{1}:relu'.format(t, cndf), nn.LeakyReLU(0.2, inplace=True)) while csize > 4: in_feat = cndf out_feat = cndf * 2 main.add_module('pyramid:{0}-{1}:conv'.format(in_feat, out_feat), nn.Conv2d(in_feat, out_feat, 4, 2, 1, bias=False)) 
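# Each pyramid block (the stride-2 conv above, then the batch-norm and LeakyReLU added
# below) halves the spatial size and doubles the channel count until the map reaches 4x4.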
main.add_module('pyramid:{0}:batchnorm'.format(out_feat), nn.BatchNorm2d(out_feat)) main.add_module('pyramid:{0}:relu'.format(out_feat), nn.LeakyReLU(0.2, inplace=True)) cndf = cndf * 2 csize = csize / 2 # state size. K x 4 x 4 main.add_module('final:{0}-{1}:conv'.format(cndf, 1), nn.Conv2d(cndf, 1, 4, 1, 0, bias=False)) state_dict = torch.load(path, map_location=torch.device('cpu')) new_state_dict = OrderedDict() for k, v in state_dict.items(): name = k[5:] # remove `module.` new_state_dict[name] = v main.load_state_dict(new_state_dict, strict=False) return main # - def load_netD_mlp(path, isize, nc, ndf): main = nn.Sequential( # Z goes into a linear of size: ndf nn.Linear(nc * isize * isize, ndf), nn.ReLU(True), nn.Linear(ndf, ndf), nn.ReLU(True), nn.Linear(ndf, ndf), nn.ReLU(True), nn.Linear(ndf, 1), ) state_dict = torch.load(path, map_location=torch.device('cpu')) new_state_dict = OrderedDict() for k, v in state_dict.items(): name = k[5:] # remove `module.` new_state_dict[name] = v main.load_state_dict(new_state_dict, strict=False) return main def reshape_mlp_input(input_sample): return input_sample.view(input_sample.size(0), input_sample.size(1) * input_sample.size(2) * input_sample.size(3)) def sample_wrapper(samples): input = torch.FloatTensor(samples.shape[0], 1, 32, 32) input.resize_as_(samples).copy_(samples) return Variable(input) # + # load in all needed data # load real samples num_models = 10 real_samples_list = [] for i in range(num_models): dataset = BehavioralDataset(isCnnData=True, isScoring=True, auto_number=i, niter=50) dataloader = torch.utils.data.DataLoader(dataset, batch_size=3, shuffle=True, num_workers=1) data_iter = iter(dataloader) data = data_iter.next() next_samples, _ = data real_samples_list.append(next_samples) # load fake samples h_models_samples_list = [] for i in range(1,6): dataset = BehavioralHmSamples(modelNum=i, isCnnData=True, isScoring=True) dataloader = torch.utils.data.DataLoader(dataset, batch_size=1000, shuffle=True, num_workers=1) data_iter = iter(dataloader) data = data_iter.next() next_samples, _ = data h_models_samples_list.append(next_samples) # - # read in discriminators num_nets = 10 dataset = 'behavioral' netD_list = [load_netD('./loss_curves/netD_{0}_50k_{1}_automated.pth'.format(dataset, i), isize=32, nc=1, ndf=64, n_extra_layers=0) for i in range(num_nets)] # + # score hierarchical models # old way of calculating scores num_h_models = 5 num_real_samples = real_samples_list[0].shape[0] scores = np.zeros((num_h_models, num_nets)) for j, netD in enumerate(netD_list): # score real samples real_samples_scores = netD(sample_wrapper(real_samples)).data.numpy() for i, fake_samples in enumerate(fake_samples_list): fake_samples_scores = netD(sample_wrapper(fake_samples)).data.numpy() # see how many fake samples were scored as more real than each real samples num_right = sum([np.sum(fake_samples_scores < real_samples_scores[k]) for k in range(num_real_samples)]) # print('index: ({0}, {1})'.format(i,j)) # print(num_right) # print(np.sum(fake_samples_scores < real_samples_scores[0])) # print(np.sum(fake_samples_scores < real_samples_scores[1])) # print(np.sum(fake_samples_scores < real_samples_scores[2])) # print(num_right / (1000 * real_samples_scores.shape[0])) # print('-----------------------') scores[i,j] = num_right / (1000 * num_real_samples) # + # calculate wins array # wins indices: # i is the index for the real triple and net number # j corresponds to the fake sample index in a given fake sample vector # k corresponds to the hierarchical 
model num_h_models = 5 num_real_samples = len(real_samples_list) num_real_samples_per_set = real_samples_list[0].shape[0] num_fake_samples = h_models_samples_list[0].shape[0] array_list = [np.zeros((num_real_samples, num_fake_samples)) for i in range(num_h_models)] wins = np.dstack(array_list) for k, h_model_samples in enumerate(h_models_samples_list): for i, real_samples in enumerate(real_samples_list): netD = netD_list[i] real_samples_scores = netD(sample_wrapper(real_samples)).data.numpy() fake_samples_scores = netD(sample_wrapper(h_model_samples)).data.numpy() for j in range(num_fake_samples): wins_for_sample = 0 for m in range(num_real_samples_per_set): if fake_samples_scores[j] < real_samples_scores[m]: wins_for_sample += 1 wins[i,j,k] = wins_for_sample # - np.save('./data/behavioral_wins.npy', wins) wins = np.load('./data/behavioral_wins.npy') import json pd.Series(wins.tolist()).to_json('./data/behavioral_wins.json', orient='records') with open('./data/behavioral_wins2.json', 'w+') as f: f.write(json.dumps(wins.tolist())) wins[0,0:20,0] wins[1,0:20,1] wins[0:20,0,0] # + import codecs, json file_path = "./data/behavioral_wins3.json" ## your path variable json.dump(wins.tolist(), codecs.open(file_path, 'w', encoding='utf-8'), separators=(',', ':'), sort_keys=True, indent=4) # - # # Visualizations of Wins wins.shape means_rs_by_hm = np.mean(wins/3, axis=1) means_rs_by_hm model_num = [] mean_correct_for_sample = [] for j in range(means_rs_by_hm.shape[1]): for i in range(means_rs_by_hm.shape[0]): model_num.append(j+1) mean_correct_for_sample.append(means_rs_by_hm[i,j]) correct_df = pd.DataFrame(list(zip(model_num, mean_correct_for_sample)), columns=['Hierarchical Model', 'Average Win Percentage']) import seaborn as sns sns.set() # + ax = sns.scatterplot(x="Hierarchical Model", y="Average Win Percentage", data=correct_df) ax.set_title('Average Win Percentage by Hierarchical Model') ax.set_xticklabels(np.linspace(0.5, 5, 10)) # code to modify xticks taken from here: # https://stackoverflow.com/questions/38947115/how-to-decrease-the-density-of-x-ticks-in-seaborn for ind, label in enumerate(ax.get_xticklabels()): if (ind + 1) % 2 == 0: # every 10th label is kept label.set_visible(True) else: label.set_visible(False) # - np.median(means_rs_by_hm, axis=0) np.mean(means_rs_by_hm, axis=0) np.var(means_rs_by_hm, axis=0) scipy.stats.sem(means_rs_by_hm, axis=0) # # Analysis of Gammas from R # $x_{i,j,k}:=$# of wins for fake sample $j$ from model $k$ over real sample set $i$. 
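# The model specified in the next cell was fit in R. As a rough cross-check, the
# same Poisson regression could also be fit in Python. The cell below is only a
# sketch of that idea: it assumes the `wins` array computed above is still in
# memory and uses `statsmodels`, which is not otherwise a dependency of this
# notebook.

# +
import numpy as np
import statsmodels.api as sm

# wins has shape (real sample sets, fake samples, hierarchical models)
n_sets, n_fakes, n_models = wins.shape
y = wins.reshape(-1)                                # x_{i,j,k}, flattened in C order
set_idx = np.repeat(np.arange(n_sets), n_fakes * n_models)
model_idx = np.tile(np.arange(n_models), n_sets * n_fakes)

# Design matrix: one mu_i dummy per real sample set, plus gamma_k dummies for
# models 2..K (model 1 is the reference level, which keeps the fit identifiable).
mu_dummies = (set_idx[:, None] == np.arange(n_sets)).astype(float)
gamma_dummies = (model_idx[:, None] == np.arange(1, n_models)).astype(float)
X = np.hstack([mu_dummies, gamma_dummies])

# Poisson GLM with the default log link, matching theta_{i,k} = exp(mu_i + gamma_k)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(poisson_fit.params[n_sets:])                  # gamma_2..gamma_K relative to model 1
# -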
# # \begin{align*} # x_{i,j,k}&\sim Poisson(\theta_{i,k})\\ # \theta_{i,k}&=exp(\mu_{i}+\gamma_{k}) # \end{align*} gamma = pd.read_csv('./data/behavioral_gamma.csv').to_numpy() mu = pd.read_csv('./data/behavioral_mu.csv').to_numpy() gamma mu # + index_to_model = {i:'Model {0}'.format(i+1) for i in range(num_h_models)} places = {i:{j+1:0 for j in range(num_h_models)} for i in index_to_model.values()} # + for i in range(gamma.shape[0]): ranked_indices = np.argsort(gamma[i,:])[::-1] placement = 1 for next_best in ranked_indices: places[index_to_model[next_best]][placement]+=1 placement += 1 # - places model_num = [] gamma_list = [] for j in range(gamma.shape[1]): for i in range(gamma.shape[0]): model_num.append(j+1) gamma_list.append(gamma[i,j]) analysis_df = pd.DataFrame(list(zip(model_num, gamma_list)), columns=['Model Number', 'Gamma']) ax = sns.scatterplot(x="Model Number", y="Gamma", data=analysis_df) ax.set_title('Gamma vs Model Number') np.mean(gamma, axis=0) np.median(gamma, axis=0) np.var(gamma, axis=0) scipy.stats.sem(gamma, axis=0) means = list(np.mean(scores, axis=1)) samp_se_list = list(scipy.stats.sem(scores, axis=1)) var_list = list(np.var(scores, axis=1)) # Where $\hat{p}$ is the probability of a dog getting shocked. # # $X_{1it}$ and $X_{2it}$ are the number of times a dog has respectively been shocked and avoided being shocked in previous trials. # # Model 1 # \begin{align*} # \hat{p}=\sigma(\beta_{1}+\beta_{2}X_{1it}+\beta_{3}X_{2it}) # \end{align*} # # Model 2 # \begin{align*} # \hat{p}=exp(\beta_{1}X_{1it}+\beta_{2}X_{2it}) # \end{align*} # # Model 3 # \begin{align*} # \hat{p}&=\sigma(\frac{\alpha}{t}+\gamma)&\text{$t$ is the trial number} # \end{align*} # # Model 4 is the "switch." # # Model 5 # \begin{align*} # \hat{p}&=\frac{\alpha}{t}&\text{$t$ is the trial number} # \end{align*} # for mean, var, se, i in zip(means, var_list, samp_se_list, range(1,len(means)+1)): print('Model\t {0} \nMean\t {1}\nSE\t {2}\nVar\t {3}'.format(i, mean, se, var)) print('------------------') # # Scaled Scores scaled_scores = scores.copy() scaler = MinMaxScaler() scaled_scores = scaler.fit_transform(scaled_scores) scaled_scores_df = pd.DataFrame(scaled_scores) scaled_scores_df.to_csv('./data/behavioral_model_scaled_scores.csv', index=False, float_format='%.3f') scaled_means = list(np.mean(scaled_scores, axis=1)) scaled_samp_se_list = list(scipy.stats.sem(scaled_scores, axis=1)) scaled_var_list = list(np.var(scaled_scores, axis=1)) for mean, var, se, i in zip(scaled_means, scaled_var_list, scaled_samp_se_list, range(1,len(scaled_means)+1)): print('Model\t {0} \nMean\t {1}\nSE\t {2}\nVar\t {3}'.format(i, mean, se, var)) print('------------------') a = np.zeros((5,1000)) b = np.zeros((5,1000)) np.dstack((a,b)).shape fake_samples_list[0].shape[0] num_real_samples np.sum(a, axis=1).shape np.reshape(a, (a.shape[1], a.shape[0])).shape type(a) a.shape[0] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Population Tool: Alpha # ## First Step: Define functions we need # Import necessary packages and declare constant variables # + import pandas as pd import matplotlib.pyplot as plt import matplotlib matplotlib.style.use('ggplot') # Use these paths to pull real data POP_DATA_PATH = 'https://esa.un.org/unpd/wpp/DVD/Files/1_Indicators%20(Standard)/EXCEL_FILES/1_Population/WPP2017_POP_F01_1_TOTAL_POPULATION_BOTH_SEXES.xlsx' 
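# Compatibility note: the ReadPopulationData/ReadPopulationAgeData helpers below
# call pd.read_excel(..., sheetname=1); newer pandas releases renamed that keyword
# to sheet_name, so adjust those calls if they raise a TypeError.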
POP_RELATABLE_PATH = 'https://raw.githubusercontent.com/ONEcampaign/humanitarian-data-service/master/resources/data/derived/example/2017_relatable_population_rankings.csv' POP_AGE_DATA_PATH = 'https://esa.un.org/unpd/wpp/DVD/Files/1_Indicators%20(Standard)/EXCEL_FILES/1_Population/WPP2017_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.xlsx' # Use these paths when testing locally # POP_DATA_PATH = 'local_data/WPP2017_POP_F01_1_TOTAL_POPULATION_BOTH_SEXES.xlsx' # POP_RELATABLE_PATH = 'local_data/2017_relatable_population_rankings.csv' # POP_AGE_DATA_PATH = 'local_data/WPP2017_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.xlsx' # - # %matplotlib inline # Make a function that pulls general population data from the UN site and re-shapes it for our use def ReadPopulationData(path, skiprows = 16): medium_variant = pd.read_excel(path, sheetname=1, skiprows=skiprows) medium_variant_long = pd.melt(medium_variant, id_vars=['Index','Variant','Region, subregion, country or area *','Notes','Country code'], value_name = 'Population', var_name = 'Year') return medium_variant_long # Write another function that pulls the age-disaggregated data from the UN site def ReadPopulationAgeData(path, skiprows = 16): age_data = pd.read_excel(path, sheetname=1, skiprows=skiprows) age_data_long = pd.melt(age_data, id_vars=['Index','Variant','Region, subregion, country or area *','Notes','Country code','Reference date (as of 1 July)'], value_name = 'Population', var_name = 'Age Cohort') return age_data_long # Make a function that asks user to input a valid country name contained in the data set def GetValidCountry(dataset): valid_country = False while valid_country == False: country = input('Enter a country name (e.g. Nigeria):') if country == 'quit': quit() elif country in dataset['Region, subregion, country or area *'].unique(): valid_country = True print('Thanks. {} is in the dataset.'.format(country)) else: print('Sorry, {} is not in dataset. Please try again, e.g. Nigeria:'.format(country)) return country # Make a function that asks user to input a valid year contained in the data set def GetValidYear(dataset, check_field = 'Year'): valid_year = False while valid_year == False: year = int(input('Enter a projection year (e.g. 2040):')) if year == 'quit': quit() elif year in dataset[check_field].unique(): valid_year = True print('Thanks. {} is in the dataset.'.format(year)) else: print('Sorry, {} is not in dataset. Please try again:'.format(year)) return year # Another function that asks user to input a valid age cohort range def GetValidAgeCohort(dataset, check_field = 'Age Cohort'): valid_cohort = False while valid_cohort == False: cohort = str(input('Enter an age cohort (e.g. 10-14):')) if cohort == 'quit': quit() elif cohort in dataset[check_field].unique(): valid_cohort = True print('Thanks. {} is in the dataset.'.format(cohort)) else: print('Sorry, {} is not in dataset. Please try again. Valid values:'.format(cohort)) print(dataset[check_field].unique()) return cohort # Write a function that runs the main menu prompts and asks the user to select an option def MainMenu(): print("") print("*********************") print("Welcome to David & Kate's Very Simple Python Population Tool. 
Please select an option from the list:") print("1) Get a population projection for a given country and year") print("2) Find a population projection for a given country, year and age cohort, along with a comparable population") valid_answer = False while valid_answer == False: selection = str(input('Input a number (1 or 2):')) if selection in ['1','2','quit']: valid_answer = True else: print("Sorry, that is not a valid selection. Please enter numbers 1-2 or type 'quit':") return selection # A small function to ask if the user would like to keep investigating or quit def AnotherQuery(): valid_answer = False while valid_answer == False: response = input('Would you like to make another query? (Y/N)') if response.lower() == 'y': valid_answer = True keep_playing = True elif response.lower() == 'n': print('Thanks for using this tool. Quitting....') keep_playing = False break else: print('Sorry, invalid response. Please type Y or N.') return keep_playing # A function for Task 1: input a country and year and return the relevant population projection def TaskOne(dataset): country = GetValidCountry(dataset) year = GetValidYear(dataset) population = dataset.loc[(dataset['Region, subregion, country or area *'] == country) & (dataset['Year'] == year),'Population'].values[0] print('The population for {} in the year {} is projected to be {} thousand.'.format(country, year, population)) print('A time series plot of this population over time:') subset = dataset[(dataset['Region, subregion, country or area *'] == country)] subset.plot(x='Year', y='Population') plt.title('Projected Population of {}'.format(country)) plt.ylabel('Population (thousands)') plt.show() # A function for Task 2: load relatable populations def GetComparablePopulation(reference_value, path = POP_RELATABLE_PATH): df_relatable_populations = pd.read_csv(path) df_relatable_populations['Population'] = df_relatable_populations[[ 'Population - World Bank (2015)','Population - UNFPA (2016)' ]].max(axis=1) df_relatable_populations = df_relatable_populations[['City, State, Country', 'Population']].dropna() def find_nearest_place_population(reference_value, df_relatable_populations = df_relatable_populations): if reference_value: nearest_row = df_relatable_populations.iloc[(df_relatable_populations['Population']- reference_value).abs().argsort()[0]] nearest_population = nearest_row['Population'] else: nearest_population = 0.00 return nearest_population def find_nearest_place(reference_value, df_relatable_populations = df_relatable_populations): if reference_value: nearest_row = df_relatable_populations.iloc[(df_relatable_populations['Population']- reference_value).abs().argsort()[0]] nearest_place = nearest_row['City, State, Country'] else: nearest_place = '' return nearest_place return find_nearest_place(reference_value), find_nearest_place_population(reference_value) # A function for Task 3: Find a population projection for a given country, year and age cohort def TaskThree(dataset): country = GetValidCountry(dataset) year = GetValidYear(dataset, check_field = 'Reference date (as of 1 July)') age = GetValidAgeCohort(dataset) population = dataset.loc[(dataset['Region, subregion, country or area *'] == country) & (dataset['Reference date (as of 1 July)'] == year) & (dataset['Age Cohort'] == age),'Population'].values[0] similar_place, similar_pop = GetComparablePopulation(population*1000) print('The population aged {} for {} in the year {} is projected to be {} thousand.'.format(age, country, year, population)) print('That is similar to the 
current population of {} ({} thousand people).'.format(similar_place, similar_pop/1000)) print('A time series plot of this age cohort over time:') subset = dataset[(dataset['Region, subregion, country or area *'] == country) & (dataset['Age Cohort'] == age)] subset.plot(x='Reference date (as of 1 July)', y='Population') plt.title('Projected Population Aged {} in {}'.format(age, country)) plt.ylabel('Population (thousands)') plt.show() # Write the main function that calls all the other functions when needed def run(): keep_using = True # Create a blank variables to store each population data sets when needed pop_data = None pop_relateable = None pop_age_data = None # Start a loop that will keep going until the user decides to quit while keep_using == True: selection = MainMenu() # Run main menu function to retrieve a valid menu option # Series of if statements to do different actions based on menu option if selection == '1': print("Thanks. You selected option 1.") if pop_data is None: # Check if the population data is already downloaded and if not, download it print("Downloading the latest data from the UN....") pop_data = ReadPopulationData(POP_DATA_PATH) pop_data['Year'] = pop_data['Year'].astype('int64') TaskOne(pop_data) # Run task 1 function elif selection == '2': print("Thanks. You selected option 3.") if pop_age_data is None: # Check if the population data is already downloaded and if not, download it print("Downloading the latest data from the UN....") pop_age_data = ReadPopulationAgeData(POP_AGE_DATA_PATH) TaskThree(pop_age_data) # Run Task 3 function elif selection == "quit": # Add a secret 'quit' option in case the programme malfunctions in testing print("Quitting...") break else: # Hopefully no one should get to this point, but just in case print an error message and stop the loop print("Error") break # Before re-running the loop, run the AnotherQuery function to see if the user would like to continue keep_using = AnotherQuery() return # ## Run the programme! run() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] colab_type="text" id="Yu7u1mNfnxVP" # **Copyright 2019 The Sonnet Authors. All Rights Reserved.** # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # --- # + [markdown] colab_type="text" id="WAfR3cvnoGMB" # # Preamble # + colab={} colab_type="code" id="4FqOAJb_jJR9" import sys assert sys.version_info >= (3, 6), "Sonnet 2 requires Python >=3.6" # + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="XnWX2azUDuCl" outputId="8516864f-06cb-4dd5-946c-adeae1e17f3a" # !pip install dm-sonnet tqdm # + colab={} colab_type="code" id="mn5ofK4-D1Qk" import sonnet as snt import tensorflow as tf import tensorflow_datasets as tfds # + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="Rpp_houJEHr9" outputId="9e7597ba-d8a5-483e-b415-5729ef3102e8" print("TensorFlow version: {}".format(tf.__version__)) print(" Sonnet version: {}".format(snt.__version__)) # + [markdown] colab_type="text" id="5RmHUmz1padR" # Finally lets take a quick look at the GPUs we have available: # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="TXoxEvKepdw2" outputId="da76b78b-274e-462d-fa47-89d626a4370f" # !grep Model: /proc/driver/nvidia/gpus/*/information | awk '{$1="";print$0}' # + [markdown] colab_type="text" id="UYYmqvOKfNbk" # # Dataset # # We need to get our dataset in a state where we can iterate over it easily. The TensorFlow Datasets package provides a simple API for this. It will download the dataset and prepare it for us to speedily process on a GPU. We can also add our own pre-processing functions to mutate the dataset before our model sees it: # + colab={} colab_type="code" id="UkBRriaQEr4z" batch_size = 100 def process_batch(images, labels): images = tf.squeeze(images, axis=[-1]) images = tf.cast(images, dtype=tf.float32) images = ((images / 255.) - .5) * 2. return images, labels def mnist(split): dataset = tfds.load("mnist", split=split, as_supervised=True) dataset = dataset.map(process_batch) dataset = dataset.batch(batch_size) dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE) dataset = dataset.cache() return dataset mnist_train = mnist("train").shuffle(10) mnist_test = mnist("test") # + [markdown] colab_type="text" id="JfOCWVGEfgcq" # MNIST contains `28x28` greyscale handwritten digits. Let's take a look at one: # + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="I_yM0TVjFCZq" outputId="54959260-c8fc-4e39-ea54-e566e9122893" import matplotlib.pyplot as plt images, _ = next(iter(mnist_test)) plt.imshow(images[0]); # + [markdown] colab_type="text" id="d7bsizs5gK3K" # # Sonnet # # The next step is to define a model. In Sonnet everything that contains TensorFlow variables (`tf.Variable`) extends `snt.Module`, this includes low level neural network components (e.g. `snt.Linear`, `snt.Conv2D`), larger nets containing subcomponents (e.g. `snt.nets.MLP`), optimizers (e.g. `snt.optimizers.Adam`) and whatever else you can think of. # # Modules provide a simple abstraction for storing parameters (and `Variable`s used for other purposes, like for storing moving avergages in `BatchNorm`). # # To find all the parameters for a given module, simply do: `module.variables`. This will return a `tuple` of all the parameters that exist for this module, or any module it references: # + [markdown] colab_type="text" id="GrN37pi1o4HT" # ## Building the model # + [markdown] colab_type="text" id="c6XoN56S2lSW" # In Sonnet you build neural networks out of `snt.Module`s. 
In this case we'll build a multi-layer perceptron as a new class with a `__call__` method that computes the logits by passing the input through a number of fully connected layers, with a ReLU non-linearity. # + colab={} colab_type="code" id="hgjyB9yhFclD" class MLP(snt.Module): def __init__(self): super(MLP, self).__init__() self.flatten = snt.Flatten() self.hidden1 = snt.Linear(1024, name="hidden1") self.hidden2 = snt.Linear(1024, name="hidden2") self.logits = snt.Linear(10, name="logits") def __call__(self, images): output = self.flatten(images) output = tf.nn.relu(self.hidden1(output)) output = tf.nn.relu(self.hidden2(output)) output = self.logits(output) return output # + [markdown] colab_type="text" id="0i03px8y8gf7" # Now we'll create an instance of our class whose weights will be randomly initialized. We'll train this MLP such that it learns to recognize digits in the MNIST dataset. # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="XqL8oIMqGAnU" outputId="96efc445-d7bb-45db-f6b1-5aa4a8b72dcb" mlp = MLP() mlp # + [markdown] colab_type="text" id="snzkUUh9oXPy" # ## Using the model # + [markdown] colab_type="text" id="On8wI6VwpDPm" # Let's feed an example input through the model and see what it predicts. Since the model is randomly initialized there is a 1/10 chance that it will predict the right class! # + colab={"base_uri": "https://localhost:8080/", "height": 286} colab_type="code" id="4T-qmIc0GHfP" outputId="316f41f2-319c-452e-a132-5006b732e86b" images, labels = next(iter(mnist_test)) logits = mlp(images) prediction = tf.argmax(logits[0]).numpy() actual = labels[0].numpy() print("Predicted class: {} actual class: {}".format(prediction, actual)) plt.imshow(images[0]); # + [markdown] colab_type="text" id="V297xpzfobXK" # ## Training the model # + [markdown] colab_type="text" id="WTrv-jn4pPSx" # To train the model we need an optimizer. For this simple example we'll use Stochastic Gradient Descent which is implemented in the `SGD` optimizer. To compute gradients we'll use a `tf.GradientTape` which allows us to selectively record gradients only for the computation we want to back propagate through: # + cellView="form" colab={} colab_type="code" id="V7gi8NQ-WZOl" #@title Utility function to show progress bar. from tqdm import tqdm # MNIST training set has 60k images. 
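# The tqdm total below is counted in batches ((60000 // 100) * num_epochs), and
# unit_scale=batch_size rescales the display to images; num_epochs is defined in
# the training cell further down, which is fine because the function body is only
# evaluated when progress_bar() is actually called.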
num_images = 60000 def progress_bar(generator): return tqdm( generator, unit='images', unit_scale=batch_size, total=(num_images // batch_size) * num_epochs) # + colab={"base_uri": "https://localhost:8080/", "height": 85} colab_type="code" id="UUkshshiK6Eq" outputId="edf09c6d-cc08-482d-98d8-45778082ed0b" opt = snt.optimizers.SGD(learning_rate=0.1) num_epochs = 10 def step(images, labels): """Performs one optimizer step on a single mini-batch.""" with tf.GradientTape() as tape: logits = mlp(images) loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels) loss = tf.reduce_mean(loss) params = mlp.trainable_variables grads = tape.gradient(loss, params) opt.apply(grads, params) return loss for images, labels in progress_bar(mnist_train.repeat(num_epochs)): loss = step(images, labels) print("\n\nFinal loss: {}".format(loss.numpy())) # + [markdown] colab_type="text" id="2K0_eoR8og-G" # ## Evaluating the model # + [markdown] colab_type="text" id="Cm_9RMJopgWc" # We'll do very simple analysis of the model to get a feeling for how well it does against this dataset: # + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="PM7IPcOeXtxH" outputId="fc983d03-f92c-4002-f7df-514c251bd300" total = 0 correct = 0 for images, labels in mnist_test: predictions = tf.argmax(mlp(images), axis=1) correct += tf.math.count_nonzero(tf.equal(predictions, labels)) total += images.shape[0] print("Got %d/%d (%.02f%%) correct" % (correct, total, correct / total * 100.)) # + [markdown] colab_type="text" id="Lnkc55PtqA_I" # To understand the result a bit better, lets take a look at a small sample of where the model correctly identified the digits: # + cellView="form" colab={} colab_type="code" id="eFro_RB4YR-X" #@title Utility function to show a sample of images. def sample(correct, rows, cols): n = 0 f, ax = plt.subplots(rows, cols) if rows > 1: ax = tf.nest.flatten([tuple(ax[i]) for i in range(rows)]) f.set_figwidth(14) f.set_figheight(4 * rows) for images, labels in mnist_test: predictions = tf.argmax(mlp(images), axis=1) eq = tf.equal(predictions, labels) for i, x in enumerate(eq): if x.numpy() == correct: label = labels[i] prediction = predictions[i] image = images[i] ax[n].imshow(image) ax[n].set_title("Prediction:{}\nActual:{}".format(prediction, label)) n += 1 if n == (rows * cols): break if n == (rows * cols): break # + colab={"base_uri": "https://localhost:8080/", "height": 214} colab_type="code" id="PSamdka2dodW" outputId="e5c521a6-3d5b-4be6-f371-a2c7a87ae124" sample(correct=True, rows=1, cols=5) # + [markdown] colab_type="text" id="hzHp02F_pzdh" # Now lets take a look at where it incorrectly classifies the input. 
MNIST has some rather dubious handwriting, I'm sure you'll agree that some of the samples below are a little ambiguous: # + colab={"base_uri": "https://localhost:8080/", "height": 451} colab_type="code" id="KQe5Q9LNdnb0" outputId="7f28c377-2226-468e-e388-a1cb649a26f5" sample(correct=False, rows=2, cols=5) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import gzip import matplotlib.pyplot as plt import numpy import struct import time import random import urllib.request def getData(fileName): file = gzip.open(fileName, 'r') file.read(4) numImages = struct.unpack('>i', file.read(4))[0] numRows = struct.unpack('>i', file.read(4))[0] numColumns = struct.unpack('>i', file.read(4))[0] buffer = file.read(numImages * numRows * numColumns) data = numpy.frombuffer(buffer, dtype = numpy.uint8) data = data.reshape(numImages, numRows * numColumns) return data, numImages, numRows, numColumns def getLabels(fileName): file = gzip.open(fileName, 'r') file.read(4) numImages = struct.unpack('>i', file.read(4))[0] buffer = file.read(numImages) data = numpy.frombuffer(buffer, dtype = numpy.uint8) labels = numpy.zeros((numImages, 10)) for i in range(numImages): labels[i][data[i]] = 1 return labels def train(trainData, trainLabels, numTrainImages, testData, testLabels, numTestImages, weightRowSize, iterations, batchSize, learningRate): W = numpy.zeros((weightRowSize, 10)) if numTrainImages == 0: return W losses = [] accuracies = [] for i in range(iterations): indices = numpy.random.choice(numTrainImages, batchSize) X = trainData[indices] Y = trainLabels[indices] loss = 0 norm = Y - X.dot(W) for i in norm: for j in i: loss += j**2 loss /= 2 * batchSize losses.append(loss) if numTestImages > 0: accuracy = 0 predictedLabels = testData.dot(W) for predictedLabel, testLabelIndex in zip(predictedLabels, range(numTestImages)): maxIndex = numpy.argmax(predictedLabel) if (predictedLabel.size == 1 or predictedLabel[maxIndex] > numpy.amax(numpy.delete(predictedLabel, maxIndex))) and testLabels[testLabelIndex][maxIndex] == 1: accuracy += 1 accuracy /= numTestImages accuracies.append(accuracy) W -= learningRate * (1 / batchSize) * (numpy.transpose(X).dot(X.dot(W)) - numpy.transpose(X).dot(Y)) return losses, accuracies iterations = 1000 learningRate = 10**-10 batchSizes = [1, 10, 100, 1000] datasetSizes = [100, 500, 1000, 10000] testSize = 10000 iterationVariables = numpy.arange(1, iterations + 1) print('Initialization complete') # + website = 'http://yann.lecun.com/exdb/mnist/' files = ['train-images-idx3-ubyte.gz', 'train-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz'] for file in files: url = website + file urllib.request.urlretrieve(url, file) trainData, numTrainImages, numRows, numColumns= getData(files[0]) trainLabels = getLabels(files[1]) testData, numTestImages, _, _ = getData(files[2]) testLabels = getLabels(files[3]) print('File downloading and formatting complete') # + batchSizeLosses = [] batchSizeTimes = [] for size in batchSizes: startTime = time.time() loss, _ = train(trainData, trainLabels, numTrainImages, [], [], 0, numRows * numColumns, iterations, size, learningRate) endTime = time.time() batchSizeLosses.append(loss) batchSizeTimes.append(endTime - startTime) plt.plot(iterationVariables, batchSizeLosses[0], label = 'Batch Size: 1', color = 'purple') plt.plot(iterationVariables, batchSizeLosses[1], 
label = 'Batch Size: 10', color = 'blue') plt.plot(iterationVariables, batchSizeLosses[2], label = 'Batch Size: 100', color = 'green') plt.plot(iterationVariables, batchSizeLosses[3], label = 'Batch Size: 1000', color = 'red') plt.title('Training Loss vs Iteration') plt.xlabel('Iteration') plt.ylabel('Training Loss') plt.legend() plt.show() # - for size, timing in zip(batchSizes, batchSizeTimes): print('Time to compute batch size ', size, ': ', timing, ' seconds', sep = '') # + batchSizeAccuracies = [] for size in batchSizes: if testSize < numTestImages: indices = numpy.random.choice(numTestImages, testSize) testDataSubset = testData[indices] testLabelSubset = testLabels[indices] else: testDataSubset = testData testLabelSubset = testLabels testSize = numTestImages _, accuracy = train(trainData, trainLabels, numTrainImages, testDataSubset, testLabelSubset, testSize, numRows * numColumns, iterations, size, learningRate) batchSizeAccuracies.append(accuracy) plt.plot(iterationVariables, batchSizeAccuracies[0], label = 'Batch Size: 1', color = 'purple') plt.plot(iterationVariables, batchSizeAccuracies[1], label = 'Batch Size: 10', color = 'blue') plt.plot(iterationVariables, batchSizeAccuracies[2], label = 'Batch Size: 100', color = 'green') plt.plot(iterationVariables, batchSizeAccuracies[3], label = 'Batch Size: 1000', color = 'red') plt.title('Accuracy vs Iteration') plt.xlabel('Iteration') plt.ylabel('Accuracy') plt.legend() plt.show() # + datasetSizeLosses = [] datasetSizeTimes = [] for size in datasetSizes: if size < numTrainImages: indices = numpy.random.choice(numTrainImages, size, replace = False) trainDataSubset = trainData[indices] trainLabelSubset = trainLabels[indices] else: trainDataSubset = trainData trainLabelSubset = trainLabels size = numTrainImages startTime = time.time() loss, _ = train(trainDataSubset, trainLabelSubset, size, [], [], 0, numRows * numColumns, iterations, 100, learningRate) endTime = time.time() datasetSizeLosses.append(loss) datasetSizeTimes.append(endTime - startTime) plt.plot(iterationVariables, datasetSizeLosses[0], label = 'Dataset Size: 100', color = 'purple') plt.plot(iterationVariables, datasetSizeLosses[1], label = 'Dataset Size: 500', color = 'blue') plt.plot(iterationVariables, datasetSizeLosses[2], label = 'Dataset Size: 1000', color = 'green') plt.plot(iterationVariables, datasetSizeLosses[3], label = 'Dataset Size: 10000', color = 'red') plt.title('Training Loss vs Iteration') plt.xlabel('Iteration') plt.ylabel('Training Loss') plt.legend() plt.show() # - for size, timing in zip(datasetSizes, datasetSizeTimes): print('Time to compute dataset size ', size, ': ', timing, ' seconds', sep = '') # + datasetSizeAccuracies = [] for size in datasetSizes: if size < numTrainImages: indices = numpy.random.choice(numTrainImages, size, replace = False) trainDataSubset = trainData[indices] trainLabelSubset = trainLabels[indices] else: trainDataSubset = trainData trainLabelSubset = trainLabels if testSize < numTestImages: indices = numpy.random.choice(numTestImages, testSize) testDataSubset = testData[indices] testLabelSubset = testLabels[indices] else: testDataSubset = testData testLabelSubset = testLabels testSize = numTestImages _, accuracy = train(trainDataSubset, trainLabelSubset, size, testDataSubset, testLabelSubset, testSize, numRows * numColumns, iterations, 100, learningRate) datasetSizeAccuracies.append(accuracy) plt.plot(iterationVariables, datasetSizeAccuracies[0], label = 'Dataset Size: 100', color = 'purple') plt.plot(iterationVariables, 
datasetSizeAccuracies[1], label = 'Dataset Size: 500', color = 'blue') plt.plot(iterationVariables, datasetSizeAccuracies[2], label = 'Dataset Size: 1000', color = 'green') plt.plot(iterationVariables, datasetSizeAccuracies[3], label = 'Dataset Size: 10000', color = 'red') plt.title('Accuracy vs Iteration') plt.xlabel('Iteration') plt.ylabel('Accuracy') plt.legend() plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # 1.文件下的所有代码都是使用的Python 2 # 2.文件内容主要来自于《父与子的编程之旅》 # + import sys sys.version_info # - numBlocks = int(raw_input('How many blocks of stars do you want? ')) for block in range(1, numBlocks + 1): for line in range(1, block * 2): for star in range(1, (block + line) * 2): print '*', print print numBlocks = int(raw_input('How many blocks of stars do you want? ')) for block in range(1, numBlocks + 1): print 'block =', block for line in range(1, block * 2): for star in range(1, (block + line) * 2): print '*', print 'line =', line, 'star =', star print numBlocks = int(raw_input('How many blocks of stars do you want? ')) for block in range(1, numBlocks + 1): print 'block =', block for line in range(1, block * 2): for star in range(1, (block + line) * 2): print '*', print 'line =', line, 'star =', star print print '\tDog\tBun\tKetchup\tMustard\tOnions' count = 1 for dog in [0, 1]: for bun in [0, 1]: for ketchup in [0, 1]: for mustard in [0, 1]: for onion in [0, 1]: print '#', count, '\t', dog, '\t', bun, '\t', print ketchup, '\t', mustard, '\t', onion count += 1 dog_cal = 140 bun_cal = 120 mus_cal = 20 ket_cal = 80 onion_cal = 40 print '\tDog\tBun\tKetchup\tMustard\tOnions\tCalories' count = 1 for dog in [0, 1]: for bun in [0, 1]: for ketchup in [0, 1]: for mustard in [0, 1]: for onion in [0, 1]: tot_cal = dog * dog_cal + bun * bun_cal + \ mustard * mus_cal + ketchup * ket_cal + \ onion * onion_cal print '#', count, '\t', dog, '\t', bun, '\t', print ketchup, '\t', mustard, '\t', onion, '\t', print tot_cal count += 1 dog_cal = 140 bun_cal = 120 mus_cal = 20 ket_cal = 80 onion_cal = 40 print '\tDog\tBun\tKetchup\tMustard\tOnions\tCalories' count = 1 for dog in [0, 1]: for bun in [0, 1]: for ketchup in [0, 1]: for mustard in [0, 1]: for onion in [0, 1]: tot_cal = (dog * dog_cal + bun * bun_cal + mustard * mus_cal + ketchup * ket_cal + onion * onion_cal) print '#', count, '\t', dog, '\t', bun, '\t', print ketchup, '\t', mustard, '\t', onion, '\t', print tot_cal count += 1 # + import time starter = int(raw_input("How many seconds? ")) for i in range(starter, 0, -1): print i, for j in range(i): print '*', print time.sleep(1) print "BLAST OFF!" 
# - letters = ['d', 'a', 'e', 'c', 'b'] print letters letters.sort() print letters letters = ['d', 'a', 'e', 'c', 'b'] print letters.sort() letters = ['d', 'a', 'e', 'c', 'b'] letters.sort() print letters classMarks = [[55, 63, 77, 81], [65, 61, 67, 72], [97, 95, 92, 88]] print classMarks for studentMarks in classMarks: print studentMarks print classMarks[0] print classMarks[0][2] phoneNumbers = {} phoneNumbers['John'] = '555-1234' print phoneNumbers print phoneNumbers['John'] print phoneNumbers.keys() print phoneNumbers.values() dishes = {'eggs': 2, 'sausage': 1, 'bacon': 1, 'spam': 500} print dishes.keys() dishes = {'eggs': 2, 'sausage': 1, 'bacon': 1, 'spam': 500} keys = dishes.keys() values = dishes.values() sorted_key_list = sorted(keys) for value in sorted(values): for key in sorted_key_list: if dishes[key] == value: print key, dishes[key] sorted_key_list.remove(key) break print "Enter 5 names:" name1 = raw_input() name2 = raw_input() name3 = raw_input() name4 = raw_input() name5 = raw_input() name_list = [name1, name2, name3, name4, name5] print "The names are", for name in name_list: print name, print "Enter 5 names:" name1 = raw_input() name2 = raw_input() name3 = raw_input() name4 = raw_input() name5 = raw_input() name_list = [name1, name2, name3, name4, name5] print "The names are", for name in name_list: print name, print print "The sorted names are", for name in sorted(name_list): print name, print "Enter 5 names:" name1 = raw_input() name2 = raw_input() name3 = raw_input() name4 = raw_input() name5 = raw_input() name_list = [name1, name2, name3, name4, name5] print "The third name you entered is:", name3 print "Enter 5 names:" name1 = raw_input() name2 = raw_input() name3 = raw_input() name4 = raw_input() name5 = raw_input() name_list = [name1, name2, name3, name4, name5] print "The names are", for name in name_list: print name, print sub_index = int(raw_input("Replace one name. Which one? (1-5): ")) name_list[sub_index - 1] = raw_input("New name: ") print "The names are", for name in name_list: print name, # + import sys dicts = {} operation = raw_input("Add or look up a word (a/l)? ") running = True while running: if operation == 'a': key = raw_input("Type the word: ") value = raw_input("Type the definition: ") dicts[key] = value print "Word added!" operation = raw_input("Add or look up a word (a/l)? ") elif operation == 'l': query_key = raw_input("Type the word: ") if query_key in dicts: print dicts[query_key] operation = raw_input("Add or look up a word (a/l)? ") else: print "That word isn't in the dictionary yet." operation = raw_input("Add or look up a word (a/l)? 
") elif operation == 'quit': sys.exit() # + def printMyAddress(): print "" print "123 Main Street" print "Ottawa, Ontario, Canada" print "K2M 2E9" print printMyAddress() # + def calculateTax(price, tax_rate): taxTotal = price + (price * tax_rate) return taxTotal my_price = float(raw_input("Enter a price: ")) totalPrice = calculateTax(my_price, 0.06) print "price =", my_price, "Total price =", totalPrice # + def calculateTax(price, tax_rate): taxTotal = price + (price * tax_rate) my_price = 1000 print "my_price (inside function) =", my_price return taxTotal my_price = float(raw_input("Enter a price: ")) totalPrice = calculateTax(my_price, 0.06) print "price =", my_price, "Total price =", totalPrice print "my_price (outside function) =", my_price # - def calculateTax(price, tax_rate): global my_price # 告诉Python你想使用全局版本的my_price # + def calculateTax(price, tax_rate): taxTotal = price + (price * tax_rate) global my_price my_price = 1000 print "my_price (inside function) =", my_price return taxTotal my_price = float(raw_input("Enter a price: ")) totalPrice = calculateTax(my_price, 0.06) print "price =", my_price, "Total price =", totalPrice print "my_price (outside function) =", my_price # + class Ball: def bounce(self): if self.direction == "down": self.direction = "up" myBall = Ball() myBall.direction = "down" myBall.color = "green" myBall.size = "small" print "I just created a ball." print "My ball is", myBall.size print "My ball is", myBall.color print "My ball's direction is", myBall.direction print "Now I'm going to bounce the ball" print myBall.bounce() print "Now the ball's direction is", myBall.direction # + class Ball: def __init__(self, color, size, direction): self.color = color self.size = size self.direction = direction def bounce(self): if self.direction == "down": self.direction = "up" myBall = Ball("red", "small", "down") print "I just create a ball." print "My ball is", myBall.size print "My ball is", myBall.color print "My ball's direction is", myBall.direction print "Now I'm going to bounce the ball" print myBall.bounce() print "Now the ball's direction is", myBall.direction print myBall # + class Ball: def __init__(self, color, size, direction): self.color = color self.size = size self.direction = direction def __str__(self): msg = "Hi, I'm a " + self.size + " " + self.color + " ball!" return msg myBall = Ball("red", "small", "down") print myBall # - # 创建 Ball 类的两个实例 cartersBall = Ball("red", "small", "down") warrensBall = Ball("green", "medium", "up") # + class HotDog: def __init__(self): self.cooked_level = 0 self.cooked_string = "Raw" self.condiments = [] def __str__(self): msg = "hot dog" if len(self.condiments) > 0: msg = msg + " with " for i in self.condiments: msg = msg + i + ", " msg = msg.strip(", ") msg = self.cooked_string + " " + msg + "." return msg def cook(self, time): self.cooked_level = self.cooked_level + time if self.cooked_level > 8: self.cooked_string = "Charcoal" elif self.cooked_level > 5: self.cooked_string = "Well-done" elif self.cooked_level > 3: self.cooked_string = "Medium" else: self.cooked_string = "Raw" def addCondiment(self, condiment): self.condiments.append(condiment) myDog = HotDog() print myDog print "Cooking hot dog for 4 minutes..." myDog.cook(4) print myDog print "Cooking hot dog for 3 more minutes..." myDog.cook(3) print myDog print "What happens if I cook it for 10 more minutes?" 
myDog.cook(10) print myDog print "Now, I am going to add some stuff on my hot dog" myDog.addCondiment("ketchup") myDog.addCondiment("mustard") print myDog # + class Triangle: def __init__(self, width, height): self.width = width self.height = height def getArea(self): area = self.width * self.height / 2.0 return area class Square: def __init__(self, size): self.size = size def getArea(self): area = self.size * self.size return area myTriangle = Triangle(4, 5) mySquare = Square(7) print myTriangle.getArea() print mySquare.getArea() # + class GameObject: def __init__(self, name): self.name = name def pickUp(self, player): pass # put code here to add the object # to the player's collection class Coin(GameObject): # Coin是GameObject的子类 def __init__(self, value): GameObject.__init__(self, "coin") # 在__init__()中,继承GameObject的初始化方法 self.value = value # 并补充新内容 def spend(self, buyer, seller): pass # put code here to remove the coin # from the buyer's money and # add it to the seller's money # + class BankAccount: def __init__(self, username, cardnumber): self.username = username self.cardnumber = cardnumber self.balance = 0.0 def getBalance(self): return self.balance def deposit(self, money): self.balance += money print "Now your balance is", self.balance def quqian(self, money): if self.balance >= money: self.balance -= money print "Now your balance is", self.balance else: print "You tried to withdraw", money print "The account balance is", self.balance class InterestAccount(BankAccount): def __init(self, username, cardnumber, rate): BankAccount.__init__(self, username, cardnumber) self.rate = rate def addInterest(self): interst = self.balance * self.rate print "adding interest to the account,", self.rate * 100, "percent" self.deposit(interest) # - # %%writefile upper.py def upper(name): '''convert name to upper charactor''' return name.upper() # !cat upper.py # + import random random_list = [] while len(random_list) < 5: random_list.append(random.randint(1, 20)) print random_list # + import random import time for i in range(10): time.sleep(3) print random.random() # + import random random_list = [] for i in range(5): random_list.append(random.randint(1, 20)) print random_list # + import pygame pygame.init() screen = pygame.display.set_mode([640, 480]) # + # coding=utf-8 # GUI 程序的代码不适合在jupyter notebook中编写,运行的时候会引起kernel die import sys import pygame pygame.init() screen = pygame.display.set_mode([640, 480]) screen.fill([255, 255, 255]) # 用白色背景填充窗口 pygame.draw.circle(screen, [255, 0, 0], [320, 240], 30, 0) pygame.display.flip() running = True while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False pygame.quit() # - import sys sys.executable # !python --version # !/usr/bin/python --version import sys sys.version # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="-_EFFjrkxTLa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1acb4c85-aa76-4256-af45-02ed5dfcd7b4" import pandas as pd pd.__version__ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Notebook used for generating sample data for FID comparison import os from pathlib 
import Path import numpy as np import cv2 # %pip install pytorch-fid def copy_subset(src, dst, size, count=None): src = Path(src) dst = Path(dst) dst.mkdir(parents=True, exist_ok=True) objs = sorted(os.listdir(src)) if count is not None: objs = np.random.choice(objs, count, replace=False) for o in objs: img = cv2.imread(str(src/o)) img = cv2.resize(img, size) cv2.imwrite(str(dst/o), img) def resize_all(folder, width, height): src = Path(folder) for o in os.listdir(folder): img = cv2.imread(str(src/o)) img = cv2.resize(img, (width, height)) cv2.imwrite(str(src/o), img) # + # resize_all('../data/CelebA/val/faces/', 256, 256) # - # ### Uncomment below to generate a random sample set of ground truth faces # + # copy_subset('../data/CelebA/train/faces/', '../data/Samples/ground_truth_faces', (256, 256), count=16384) # - # ### Uncomment below to generate two random ground truth datasets of paprika style # + # copy_subset('../data/CelebA/train/paprika', '../data/Samples/ground_truth_paprika1', (256, 256), count=16384) # + # copy_subset('../data/CelebA/train/paprika', '../data/Samples/ground_truth_paprika2', (256, 256), count=16384) # - # ### Uncomment below to generate two random subsets of webtoon style # + # copy_subset('../data/CelebA/train/webtoon', '../data/Samples/ground_truth_webtoon1', (256, 256), count=16384) # + # copy_subset('../data/CelebA/train/webtoon', '../data/Samples/ground_truth_webtoon2', (256, 256), count=16384) # - # ### Uncomment below to generate two random subsets of face v2 style # + # copy_subset('../data/CelebA/train/face_v2', '../data/Samples/ground_truth_face_v2_1', (256, 256), count=16384) # + # copy_subset('../data/CelebA/train/face_v2', '../data/Samples/ground_truth_face_v2_2', (256, 256), count=16384) # + import os import random from pathlib import Path from urllib.request import urlretrieve, urlcleanup from zipfile import ZipFile import albumentations as albm import cv2 import matplotlib.pyplot as plt import numpy as np from pycocotools.coco import COCO import torch import torch.nn as nn import torchvision from tqdm.notebook import tqdm from PIL import Image import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import ImageGrid import numpy as np # %config InlineBackend.figure_format = 'retina' RANDOM_SEED = 1337 torch.manual_seed(RANDOM_SEED) torch.cuda.manual_seed(RANDOM_SEED) np.random.seed(RANDOM_SEED) # Flip values for slower training speed, but more determenistic results. torch.backends.cudnn.deterministic = False torch.backends.cudnn.benchmark = True # - DEVICE = torch.device('cpu') if torch.cuda.is_available(): DEVICE = torch.device('cuda') torch.cuda.manual_seed(123) def translate_subset(model, src, dst, size, count=None): model = model.to(DEVICE) src = Path(src) dst = Path(dst) dst.mkdir(parents=True, exist_ok=True) objs = sorted(os.listdir(src)) if count is not None: objs = np.random.choice(objs, count, replace=False) for o in objs: img = cv2.imread(str(src/o))[:, :, ::-1] img = cv2.resize(img, size) imageT = torchvision.transforms.ToTensor()(Image.fromarray(img)).unsqueeze(0).to(DEVICE) output = np.uint8(model(imageT).squeeze(0).detach().permute(1, 2, 0).cpu().numpy() * 255.) cv2.imwrite(str(dst/o), output[:, :, ::-1]) # + # We also need to replace Mobilenet's ReLU6 activations with ReLU. # There is no noticeable difference in quality, but this will # allow us to use CoreML for mobile inference on iOS devices. 
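# replace_relu6_with_relu() walks the module tree recursively and swaps every
# nn.ReLU6 (which clamps activations to the range [0, 6]) for a plain, unbounded
# nn.ReLU, leaving all other layers untouched.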
def replace_relu6_with_relu(model): for name, module in reversed(model._modules.items()): if len(list(module.children())) > 0: model._modules[name] = replace_relu6_with_relu(model=module) if isinstance(module, nn.ReLU6): model._modules[name] = nn.ReLU() return model class AnimeNet(nn.Module): def __init__(self): super().__init__() mobilenet = torchvision.models.mobilenet_v2(width_mult=0.5) # We reuse state dict from mobilenet v2 width width_mult == 1.0. # This is not the optimal way to use pretrained models, but in this case # it gives us good initialization for faster convergence. state_dict = torchvision.models.mobilenet_v2(pretrained=True).state_dict() target_dict = mobilenet.state_dict() for k in target_dict.keys(): if len(target_dict[k].size()) == 0: continue state_dict[k] = state_dict[k][:target_dict[k].size(0)] if len(state_dict[k].size()) > 1: state_dict[k] = state_dict[k][:, :target_dict[k].size(1)] mobilenet.load_state_dict(state_dict) weight = mobilenet.features[0][0].weight.detach() # mobilenet.features[0][0].weight = nn.Parameter(data=weight / 255.) mobilenet = replace_relu6_with_relu(mobilenet) self.features = mobilenet.features[:-2] self.upscale0 = nn.Sequential( nn.Conv2d(80, 48, 1, 1, 0, bias=False), nn.BatchNorm2d(48), nn.ReLU() ) self.upscale1 = nn.Sequential( nn.Conv2d(48, 16, 3, 1, 1, bias=False), nn.BatchNorm2d(16), nn.ReLU() ) self.upscale2 = nn.Sequential( nn.Conv2d(16, 16, 3, 1, 1, bias=False), nn.BatchNorm2d(16), nn.ReLU() ) self.upscale3 = nn.Sequential( nn.Conv2d(16, 8, 3, 1, 1, bias=False), nn.BatchNorm2d(8), nn.ReLU() ) self.upscale4 = nn.Sequential( nn.Conv2d(8, 4, 3, 1, 1, bias=False), nn.BatchNorm2d(4), nn.ReLU() ) self.upscale5 = nn.Conv2d(4, 3, 3, 1, 1, bias=True) def forward(self, x): out = x skip_outs = [] for i in range(len(self.features)): out = self.features[i](out) if i in {1, 3, 6, 13}: skip_outs.append(out) out = self.upscale0(out) out = nn.functional.interpolate(out, scale_factor=2, mode='nearest') out = self.upscale1(out + skip_outs[3]) out = nn.functional.interpolate(out, scale_factor=2, mode='nearest') out = self.upscale2(out + skip_outs[2]) out = nn.functional.interpolate(out, scale_factor=2, mode='nearest') out = self.upscale3(out + skip_outs[1]) out = nn.functional.interpolate(out, scale_factor=2, mode='nearest') out = self.upscale4(out + skip_outs[0]) out = nn.functional.interpolate(out, scale_factor=2, mode='nearest') out = self.upscale5(out) return torch.sigmoid(out) # - class WideAnimeNet(nn.Module): def __init__(self): super().__init__() mobilenet = torchvision.models.mobilenet_v2(width_mult=0.75) # We reuse state dict from mobilenet v2 width width_mult == 1.0. # This is not the optimal way to use pretrained models, but in this case # it gives us good initialization for faster convergence. 
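# The loop below slices the pretrained width_mult=1.0 tensors down to the first
# channels of each narrower (width_mult=0.75) layer so the shapes line up before
# load_state_dict() is called.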
state_dict = torchvision.models.mobilenet_v2(pretrained=True).state_dict() target_dict = mobilenet.state_dict() for k in target_dict.keys(): if len(target_dict[k].size()) == 0: continue state_dict[k] = state_dict[k][:target_dict[k].size(0)] if len(state_dict[k].size()) > 1: state_dict[k] = state_dict[k][:, :target_dict[k].size(1)] mobilenet.load_state_dict(state_dict) weight = mobilenet.features[0][0].weight.detach() mobilenet = replace_relu6_with_relu(mobilenet) self.features = mobilenet.features[:-2] self.upscale0 = nn.Sequential( nn.Conv2d(120, 72, 1, 1, 0, bias=False), nn.BatchNorm2d(72), nn.ReLU() ) self.upscale1 = nn.Sequential( nn.Conv2d(72, 24, 3, 1, 1, bias=False), nn.BatchNorm2d(24), nn.ReLU() ) self.upscale2 = nn.Sequential( nn.Conv2d(24, 24, 3, 1, 1, bias=False), nn.BatchNorm2d(24), nn.ReLU() ) self.upscale3 = nn.Sequential( nn.Conv2d(24, 16, 3, 1, 1, bias=False), nn.BatchNorm2d(16), nn.ReLU() ) self.upscale4 = nn.Sequential( nn.Conv2d(16, 4, 3, 1, 1, bias=False), nn.BatchNorm2d(4), nn.ReLU() ) self.upscale5 = nn.Conv2d(4, 3, 3, 1, 1, bias=True) def forward(self, x): out = x skip_outs = [] for i in range(len(self.features)): out = self.features[i](out) if i in {1, 3, 6, 13}: skip_outs.append(out) out = self.upscale0(out) out = nn.functional.interpolate(out, scale_factor=2, mode='nearest') out = self.upscale1(out + skip_outs[3]) out = nn.functional.interpolate(out, scale_factor=2, mode='nearest') out = self.upscale2(out + skip_outs[2]) out = nn.functional.interpolate(out, scale_factor=2, mode='nearest') out = self.upscale3(out + skip_outs[1]) out = nn.functional.interpolate(out, scale_factor=2, mode='nearest') out = self.upscale4(out + skip_outs[0]) out = nn.functional.interpolate(out, scale_factor=2, mode='nearest') out = self.upscale5(out) return torch.sigmoid(out) model = AnimeNet() PAPRIKA = 'paprika.pth' FACE_V2 = 'face_v2.pth' WEBTOON = 'webtoon.pth' # model.load_state_dict(torch.load('mobilenetv2_256_sobel2_anime.pth')) model.load_state_dict(torch.load('face_v2_2.pth')) # Generate a dataset from our trained model for evaluation (using FID) against ground truth translate_subset(model, '../data/Samples/ground_truth_faces', '../data/Samples/face_v2_3', (256, 256)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import hashlib import binascii import itertools import datetime password_a4 = "" password_a4_bin = password_a4.encode("utf-8") hash_object_a4 = hashlib.sha256(password_a4_bin) print("hash4\t\t", hash_object_a4.hexdigest()) print("hash4bin\t", hash_object_a4.digest()) print("") password_a5 = "" password_a5_bin = password_a5.encode("utf-8") hash_object_a5 = hashlib.sha256(password_a5_bin) print("hash5\t\t", hash_object_a5.hexdigest()) print("hash5bin\t", hash_object_a5.digest()) print("") password_6 = "" password_6_bin = password_6.encode("utf-8") hash_object_6 = hashlib.sha256(password_6_bin) print("hash6\t\t", hash_object_6.hexdigest()) print("hash6bin\t", hash_object_6.digest()) #x1 = binascii.unhexlify(hash_object.hexdigest()) #print(x1) # - alphabet_a = "abcdefghijklmnopqrstuvwxyz" alphabet_Aa = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" alphabet_a_bin = alphabet_a.encode("utf-8") print(alphabet_a_bin) # ### Prosty łamacz haseł # + import hashlib import binascii import itertools import datetime def break_password(hash_sha256, letter_num, alphabet, write_every=None): 
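# Brute-forces the given SHA-256 digest by hashing every candidate string of
# length letter_num over the supplied alphabet; returns the recovered password,
# or None (after reporting the elapsed time) if the search space is exhausted.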
print("Debug: łamanie hasła") done = 0 start_moment = datetime.datetime.now() for candidate_t in itertools.product(alphabet, repeat=letter_num): #konwertowanie iterowanego obiektu, aby móc go hashować candidate = "".join(candidate_t) candidate_bin = candidate.encode("utf-8") #hashowanie hash_obj = hashlib.sha256(candidate_bin) calculated_hash = hash_obj.digest() #sprawdzamy warunek if calculated_hash == hash_sha256: #kończymy działanie i zwracany wynik jeśli znaleziono end_moment = datetime.datetime.now() print("Odnaleziono w ciągu: " + str((end_moment - start_moment) / datetime.timedelta(seconds = 1, microseconds=1))) return candidate #zwiększamy licznik prób done = done + 1 if done %10000 == 0 and write_every is not None : #jeśli spełnione są warunki wypisania postępu to wypisujemy print("checking: " + candidate) end_moment = datetime.datetime.now() print("Porażka, nie odnaleziono; sprawdzono: " +str(done)) print("\t czas działania: " + str((end_moment - start_moment) / datetime.timedelta(seconds = 1, microseconds=1)) + "us") # - # 4 znakowe alfabet a hash_to_break = b'\xf1\xac\x93\xb9\\I\x00Z\x86r5\xc6\x19QDX\xb2{\xad2q8i\xc3\xb5%\xe74T\xc9\x04f' break_password(hash_to_break, 4, alphabet_a) # 5 znakowe alfabet a hash_to_break = b'0\xc2\xcbH\xd3c\xdb\x8d\xfc\xcd\x7f\xb8(\xea=w=/N\x88\xe8&\xa1F-\x9e\x15\x02\xc9\xc2\x85\x85' break_password(hash_to_break, 4, alphabet_a) # ### Przeszukanie #pełne przeszukanie 4a hash_to_break = b'' break_password(hash_to_break, 4, alphabet_a) #pełne przeszukanie 5a hash_to_break = b'' break_password(hash_to_break, 5, alphabet_a) #pełne przeszukanie 6a hash_to_break = b'' break_password(hash_to_break, 6, alphabet_a) #pełne przeszukanie 4Aa hash_to_break = b'' break_password(hash_to_break, 4, alphabet_Aa) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # word2vec # # This notebook is equivalent to `demo-word.sh`, `demo-analogy.sh`, `demo-phrases.sh` and `demo-classes.sh` from the Google examples. # + jupyter={"outputs_hidden": false} # %load_ext autoreload # %autoreload 2 # - # ## Training # Download some data, for example: [http://mattmahoney.net/dc/text8.zip](http://mattmahoney.net/dc/text8.zip) # # You could use `make test-data` from the root of the repo. # + jupyter={"outputs_hidden": false} import word2vec # - # Run `word2phrase` to group up similar words "Los Angeles" to "Los_Angeles" # + jupyter={"outputs_hidden": false} word2vec.word2phrase('../data/text8', '../data/text8-phrases', verbose=True) # - # This created a `text8-phrases` file that we can use as a better input for `word2vec`. # # Note that you could easily skip this previous step and use the text data as input for `word2vec` directly. # # Now train the word2vec model. # + jupyter={"outputs_hidden": false} word2vec.word2vec('../data/text8-phrases', '../data/text8.bin', size=100, binary=True, verbose=True) # - # That created a `text8.bin` file containing the word vectors in a binary format. # Generate the clusters of the vectors based on the trained model. 
# + jupyter={"outputs_hidden": false}
word2vec.word2clusters('../data/text8', '../data/text8-clusters.txt', 100, verbose=True)
# -

# That created a `text8-clusters.txt` file with the cluster number for every word in the vocabulary.

# ## Predictions

# %load_ext autoreload
# %autoreload 2

# + jupyter={"outputs_hidden": false}
import word2vec
# -

# Import the `word2vec` binary file created above

# + jupyter={"outputs_hidden": false}
model = word2vec.load('../data/text8.bin')
# -

# We can take a look at the vocabulary as a numpy array

# + jupyter={"outputs_hidden": false}
model.vocab
# -

# Or take a look at the whole matrix

# + jupyter={"outputs_hidden": false}
model.vectors.shape

# + jupyter={"outputs_hidden": false}
model.vectors
# -

# We can retrieve the vector of individual words

# + jupyter={"outputs_hidden": false}
model['dog'].shape

# + jupyter={"outputs_hidden": false}
model['dog'][:10]
# -

# We can calculate the distance between two or more words (all pairwise combinations).

model.distance("dog", "cat", "fish")

# ## Similarity
#
# We can do simple queries to retrieve words similar to "dog" based on cosine similarity:

# + jupyter={"outputs_hidden": false}
indexes, metrics = model.similar("dog")
indexes, metrics
# -

# This returned a tuple with 2 items:
# 1. numpy array with the indexes of the similar words in the vocabulary
# 2. numpy array with the cosine similarity to each word

# We can get the words for those indexes

# + jupyter={"outputs_hidden": false}
model.vocab[indexes]
# -

# There is a helper function to create a combined response as a numpy [record array](http://docs.scipy.org/doc/numpy/user/basics.rec.html)

# + jupyter={"outputs_hidden": false}
model.generate_response(indexes, metrics)
# -

# It is easy to turn that numpy array into a pure Python response:

# + jupyter={"outputs_hidden": false}
model.generate_response(indexes, metrics).tolist()
# -

# ### Phrases

# Since we trained the model with the output of `word2phrase`, we can ask for the similarity of "phrases", i.e. combined words such as "Los Angeles"

# + jupyter={"outputs_hidden": false}
indexes, metrics = model.similar('los_angeles')
model.generate_response(indexes, metrics).tolist()
# -

# ### Analogies

# It's possible to do more complex queries, like analogies such as `king - man + woman = queen`.
# This method returns the same as `cosine`: the indexes of the words in the vocabulary and the metric

# + jupyter={"outputs_hidden": false}
indexes, metrics = model.analogy(pos=['king', 'woman'], neg=['man'])
indexes, metrics

# + jupyter={"outputs_hidden": false}
model.generate_response(indexes, metrics).tolist()
# -

# ### Clusters

# + jupyter={"outputs_hidden": false}
clusters = word2vec.load_clusters('../data/text8-clusters.txt')
# -

# We can get the cluster number for individual words

# + jupyter={"outputs_hidden": false}
clusters.vocab
# -

# We can get all the words grouped in a specific cluster

# + jupyter={"outputs_hidden": false}
clusters.get_words_on_cluster(90).shape

# + jupyter={"outputs_hidden": false}
clusters.get_words_on_cluster(90)[:10]
# -

# We can add the clusters to the word2vec model and generate a response that includes the clusters

# + jupyter={"outputs_hidden": false}
model.clusters = clusters

# + jupyter={"outputs_hidden": false}
indexes, metrics = model.analogy(pos=["paris", "germany"], neg=["france"])

# + jupyter={"outputs_hidden": false}
model.generate_response(indexes, metrics).tolist()

# + jupyter={"outputs_hidden": false}

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#
format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import io import requests url="https://raw.githubusercontent.com/fbarth/ds-saint-paul/master/data/base_train.csv" s=requests.get(url).content X_train = pd.read_csv(io.StringIO(s.decode('utf-8')), sep=",") X_train = X_train.drop(columns=['Unnamed: 0']) url="https://raw.githubusercontent.com/fbarth/ds-saint-paul/master/data/base_train_answer.csv" s=requests.get(url).content df_diagnosis = pd.read_csv(io.StringIO(s.decode('utf-8')), sep=",") df_diagnosis = df_diagnosis.drop(columns=['Unnamed: 0']) y_train = df_diagnosis['diagnosis'].ravel() # joining info and diagnosis into one df df_full = pd.concat([df_diagnosis, X_train], axis=1) df_full.head() # - print(X_train.shape) print(y_train.shape) # + import seaborn as sns import numpy as np from sklearn.ensemble import GradientBoostingClassifier from sklearn.metrics import recall_score, make_scorer from sklearn.model_selection import cross_val_score, cross_val_predict # loop to find best number of estimators min_estimators = 100 max_estimators = 1000 step = 50 result = [] best_score = 0 best_estimator = 0 for i in range(min_estimators, max_estimators+step, step): clf = GradientBoostingClassifier( n_estimators=i, random_state=0) s = make_scorer(recall_score, pos_label='M') scores = cross_val_score(clf, X_train, y_train, cv=5, scoring=s) if (scores.mean() > best_score): best_estimator = i best_score = scores.mean() result.append((i, scores.mean())) # converting result into dataframe estimators = np.array(result)[:,0] score = np.array(result)[:,1] d = {'estimators': estimators, 'score': score} df_scores = pd.DataFrame(d) print(f'Best estimator: {best_estimator}') print(df_scores) # plotting results sns.set_theme(style="dark") sns.set_palette("colorblind") sns.lineplot( data=df_scores, x="estimators", y="score" ) # best and smallest number of estimators using above loop was 200 # - clf = GradientBoostingClassifier( n_estimators=200, random_state=0) s = make_scorer(recall_score, pos_label='M') scores = cross_val_score(clf, X_train, y_train, cv=5, scoring=s) y_pred = cross_val_predict(clf, X_train, y_train, cv=5) print("recall_score: %0.5f (+/- %0.5f)" % (scores.mean(), scores.std())) # + from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report, plot_confusion_matrix print(confusion_matrix(y_train, y_pred)) print(classification_report(y_train, y_pred)) # - # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.6.2 # language: julia # name: julia-1.6 # --- # # Exercises: Plotting # ### Installing a package: `Primes` # # Load the `Primes` packages (source code at https://github.com/JuliaMath/Primes.jl). # # Verify that you can now use the function `primes` to grab all the primes under `100`. If unsure, maybe `?primes` will help you? using Pkg # use Pkg to "add" the package "Primes" using Primes BLANK # ### Simple plots # # Given `x = -10:10` plot y vs. x for # $ # y=x^2 # $ # + using Plots x = -10:10 y = BLANK plot(BLANK) # - # ### Mount Bruno # # Let's visualize Mount Bruno! # # Read the 2D topological data of Mount Bruno from the file `../data/bruno.csv`. Next we need some plotting engine for `Plots`, I recommend `Plotly`/`PlotlyJS` for this task. As a final touch, see what `surface()` function can do with your array. 
# ## Advanced: Animations # # Create an animation of our epidemic simulation. Fill in the skeleton code below with the plotting functions. # The following two lines load the epidemic functions from a file include("../epidemic_simple.jl") # + using Plots # Set the size of the animation window default(size = (400, 300)) # The map of cells (a 2D array) cells = make_cells(64, 64) # Build the animation frames by running simulation steps and generating a plot anim = @animate for i ∈ 1:50 # run update function here BLANK # Then call plot to create a frame plot(BLANK, legend=false, border=:none) end gif(anim, "pandemic.gif", fps = 5) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="2QR-3dEjBmy9" colab_type="text" # ## Drive Mounter # + id="mQj80MyKBlcD" colab_type="code" outputId="0c5120ca-6337-43b1-87d5-cbf53df5c1cd" colab={"base_uri": "https://localhost:8080/", "height": 122} from google.colab import drive drive.mount('/content/drive') # + [markdown] id="kpuTVZx4mM2z" colab_type="text" # ## Set up tensorBoard # + id="Jm6tjIXixazT" colab_type="code" outputId="71022ceb-dc5b-4899-b788-b2126cee6f7b" colab={"base_uri": "https://localhost:8080/", "height": 51} # %load_ext tensorboard # + id="YDTBWvJwyXXj" colab_type="code" outputId="c65366c7-ef32-4947-fe19-274f1bc0454c" colab={"base_uri": "https://localhost:8080/", "height": 821} # %tensorboard --logdir logs # + id="VvO5IFcczJxK" colab_type="code" outputId="c2318788-e96d-4e4f-add1-a4fed8d72120" colab={"base_uri": "https://localhost:8080/", "height": 51} from tensorboard import notebook notebook.list() # + id="2sVsUoYTzKjH" colab_type="code" outputId="9807e88b-d4f7-40ef-9a20-d91bd5b7f099" colab={"base_uri": "https://localhost:8080/", "height": 1000} notebook.display(port=6006, height=1000) # + [markdown] id="-NBU1GlMXXdL" colab_type="text" # ## Modelling # + id="5Q9yZ9dwXWen" colab_type="code" outputId="e15cf403-6462-472e-c0e1-ab8000d99c6e" colab={"base_uri": "https://localhost:8080/", "height": 34} import tensorflow as tf import numpy as np import keras from keras.models import Model, load_model from keras.layers import Input ,BatchNormalization , Activation from keras.layers.convolutional import Conv2D, UpSampling2D from keras.layers.pooling import MaxPooling2D from keras.layers.merge import concatenate def Convolution(input_tensor,filters): x = Conv2D(filters=filters,kernel_size=(3, 3),padding = 'same',strides=(1, 1))(input_tensor) x = BatchNormalization()(x) x = Activation('relu')(x) return x def model(input_shape): inputs = Input((input_shape)) conv_1 = Convolution(inputs,32) maxp_1 = MaxPooling2D(pool_size = (2, 2), strides = (2, 2), padding = 'same') (conv_1) conv_2 = Convolution(maxp_1,64) maxp_2 = MaxPooling2D(pool_size = (2, 2), strides = (2, 2), padding = 'same') (conv_2) conv_3 = Convolution(maxp_2,128) maxp_3 = MaxPooling2D(pool_size = (2, 2), strides = (2, 2), padding = 'same') (conv_3) conv_4 = Convolution(maxp_3,256) maxp_4 = MaxPooling2D(pool_size = (2, 2), strides = (2, 2), padding = 'same') (conv_4) conv_5 = Convolution(maxp_4,512) upsample_6 = UpSampling2D((2, 2)) (conv_5) upsample_6 = concatenate([upsample_6, conv_4]) conv_6 = Convolution(upsample_6,256) upsample_7 = UpSampling2D((2, 2)) (conv_6) upsample_7 = concatenate([upsample_7, conv_3]) conv_7 = Convolution(upsample_7,128) upsample_8 = UpSampling2D((2, 2)) (conv_7) upsample_8 = concatenate([upsample_8, 
conv_2]) conv_8 = Convolution(upsample_8,64) upsample_9 = UpSampling2D((2, 2)) (conv_8) upsample_9 = concatenate([upsample_9, conv_1]) conv_9 = Convolution(upsample_9,32) outputs = Conv2D(1, (1, 1), activation='sigmoid') (conv_9) model = Model(inputs=[inputs], outputs=[outputs]) return model # + [markdown] id="bz07lh2tBbeF" colab_type="text" # ## Dataset loading # + id="cmhOZtMZTiBp" colab_type="code" colab={} path='drive/My Drive/MS_data_k_fold/' # + id="-lDhPZE6UKj2" colab_type="code" colab={} import os ext="Prepared_data" fldr=path+ext # + id="ulq3RxsGVn6K" colab_type="code" colab={} import os import numpy as np import pandas as pd def return_sets(i,fldr): sub_fldr=fldr+'/'+str(i) k=os.listdir(sub_fldr) for fl in k: if 'test' in fl: test_X=sub_fldr+'/'+fl+'/X.npy' X_test=np.load(test_X) test_Y=sub_fldr+'/'+fl+'/Y.npy' Y_test=np.load(test_Y) if 'train' in fl: train_X=sub_fldr+'/'+fl+'/X.npy' X_train=np.load(train_X) train_Y=sub_fldr+'/'+fl+'/Y.npy' Y_train=np.load(train_Y) if 'validate' in fl: validate_X=sub_fldr+'/'+fl+'/X.npy' X_validate=np.load(validate_X) validate_Y=sub_fldr+'/'+fl+'/Y.npy' Y_validate=np.load(validate_Y) return X_test,Y_test,X_train,Y_train,X_validate,Y_validate # + [markdown] id="yzWO21F2dQzY" colab_type="text" # ## Performance Metrices # + id="olNo4xImdLHQ" colab_type="code" colab={} from keras import backend as K import numpy as np import tensorflow as tf # Computing Dice_Coefficient def dice_coef(y_true, y_pred, smooth=1.0): y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum(y_true_f * y_pred_f) return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth) # Computing Precision def precision(y_true, y_pred): true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = true_positives / (predicted_positives + K.epsilon()) return precision # Computing Sensitivity def sensitivity(y_true, y_pred): true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) possible_positives = K.sum(K.round(K.clip(y_true, 0, 1))) return true_positives / (possible_positives + K.epsilon()) # Computing Specificity def specificity(y_true, y_pred): true_negatives = K.sum(K.round(K.clip((1-y_true) * (1-y_pred), 0, 1))) possible_negatives = K.sum(K.round(K.clip(1-y_true, 0, 1))) return true_negatives / (possible_negatives + K.epsilon()) # + [markdown] id="oc2iNvecegZ4" colab_type="text" # ## Accuracy Plots # + id="XD8LoTpnegEg" colab_type="code" colab={} import numpy as np import matplotlib.pyplot as plt # Accuracy vs Epoch def Accuracy_Graph(history,i,fldr): plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) #plt.title('Model accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Validation'], loc='upper left') plt.subplots_adjust(top=1.00, bottom=0.0, left=0.0, right=0.95, hspace=0.25, wspace=0.35) file_1=fldr+'/'+str(i)+'/Accuracy.png' plt.savefig(file_1) plt.show() # Dice Similarity Coefficient vs Epoch def Dice_coefficient_Graph(history,i,fldr): plt.plot(history.history['dice_coef']) plt.plot(history.history['val_dice_coef']) #plt.title('Dice_Coefficient') plt.ylabel('Dice_Coefficient') plt.xlabel('Epoch') plt.legend(['Train', 'Validation'], loc='upper left') plt.subplots_adjust(top=1.00, bottom=0.0, left=0.0, right=0.95, hspace=0.25, wspace=0.35) file_1=fldr+'/'+str(i)+'/Dice_coef.png' plt.savefig(file_1) plt.show() # Loss vs Epoch def Loss_Graph(history,i,fldr): plt.plot(history.history['loss']) 
plt.plot(history.history['val_loss']) #plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Validation'], loc='upper left') plt.subplots_adjust(top=1.00, bottom=0.0, left=0.0, right=0.95, hspace=0.25, wspace=0.35) file_1=fldr+'/'+str(i)+'/Loss.png' plt.savefig(file_1) plt.show() # + [markdown] id="_7UfgkOdcDoR" colab_type="text" # ## Model executer # + id="oDs3xd0_cdIc" colab_type="code" colab={} import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from sklearn.model_selection import train_test_split import json #import keras from tensorflow.keras.models import Model, load_model from tensorflow.keras.layers import Input ,BatchNormalization , Activation from tensorflow.keras.layers import Conv2D, UpSampling2D from tensorflow.keras.layers import MaxPooling2D from tensorflow.keras.layers import concatenate from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint from tensorflow.keras.optimizers import Adam import datetime def executer(type_1,model,X_test,Y_test,X_train,Y_train,X_validate,Y_validate,fldr,i): logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S")) tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1) #model = model(input_shape = (256,256,5)) model.summary() file_1=fldr+'/'+str(i)+'/'+'Modified_UNet_'+type_1+'.h5' checkpointer = ModelCheckpoint(file_1, verbose=1) callback_list=[checkpointer,tensorboard_callback] record_file=fldr+'/'+str(i)+'/record.json' if os.path.exists(record_file): f=open(record_file,'r') data=json.load(f) else: data={} # Compiling the model k_adam=Adam(lr=0.001) model.compile(optimizer=k_adam, loss='binary_crossentropy', metrics=['accuracy',dice_coef,precision,sensitivity,specificity]) # Fitting the model over the data history = model.fit(X_train,Y_train,validation_data=(X_validate,Y_validate),batch_size=32,epochs=60,validation_split=0.20,verbose=1,initial_epoch=0,callbacks=callback_list) # Saving the model model.save(file_1) history.history # Evaluating the model on the training and testing data train_eval=model.evaluate(x=X_train, y=Y_train, batch_size=32 , verbose=1, sample_weight=None, steps=None) test_eval=model.evaluate(x=X_test, y=Y_test, batch_size=32, verbose=1, sample_weight=None, steps=None) data[str(i)]={} data[str(i)]["training"]=train_eval data[str(i)]["testing"]=test_eval with open(record_file, "w") as outfile: json.dump(data, outfile) # Plotting the Graphs of Accuracy, Dice_coefficient, Loss at each epoch on Training and Testing data return history # + [markdown] id="aIIklmGAn1ey" colab_type="text" # ## Caller # + id="L3Ue8STwkbtr" colab_type="code" colab={} def caller(): path='drive/My Drive/MS_data_k_fold/' list_c=['Major'] ext='Prepared_data' mdl=model(input_shape = (256,256,5)) for i in list_c: j=3 while j<=3: fldr=path+i+'/'+ext X_test,Y_test,X_train,Y_train,X_validate,Y_validate=return_sets(j,fldr) history=executer(i,mdl,X_test,Y_test,X_train,Y_train,X_validate,Y_validate,fldr,j) Accuracy_Graph(history,j,fldr) Dice_coefficient_Graph(history,j,fldr) Loss_Graph(history,j,fldr) j+=1 # + [markdown] id="qgZnQuDdp-Sc" colab_type="text" # ## main # + id="WHaf85knp8aK" colab_type="code" outputId="36331780-508c-4555-abd9-d788c27996d9" colab={"base_uri": "https://localhost:8080/", "height": 1000} caller() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # !pip 
install pyswarms # + import numpy as np import matplotlib.pyplot as plt from sklearn.datasets import load_iris import pyswarms as ps # %load_ext autoreload # %autoreload 2 iris = load_iris() X = iris.data y = iris.target # Forward propagation def forward_prop(params): # Neural network architecture n_inputs = 4 n_hidden = 20 n_classes = 3 # Roll-back the weights and biases W1 = params[0:80].reshape((n_inputs,n_hidden)) b1 = params[80:100].reshape((n_hidden,)) W2 = params[100:160].reshape((n_hidden,n_classes)) b2 = params[160:163].reshape((n_classes,)) # Perform forward propagation z1 = X.dot(W1) + b1 # Pre-activation in Layer 1 a1 = np.tanh(z1) # Activation in Layer 1 z2 = a1.dot(W2) + b2 # Pre-activation in Layer 2 logits = z2 # Logits for Layer 2 # Compute for the softmax of the logits exp_scores = np.exp(logits) probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) # Compute for the negative log likelihood N = 150 # Number of samples corect_logprobs = -np.log(probs[range(N), y]) loss = np.sum(corect_logprobs) / N return loss def f(x): """Higher-level method to do forward_prop in the whole swarm. Inputs ------ x: numpy.ndarray of shape (n_particles, dimensions) The swarm that will perform the search Returns ------- numpy.ndarray of shape (n_particles, ) The computed loss for each particle """ n_particles = x.shape[0] j = [forward_prop(x[i]) for i in range(n_particles)] return np.array(j) # Initialize swarm options = {'c1': 0.5, 'c2': 0.3, 'w':0.9} # Call instance of PSO dimensions = (4 * 20) + (20 * 3) + 20 + 3 optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options) # Perform optimization cost, pos = optimizer.optimize(f, iters=1000) def predict(X, pos): """ Use the trained weights to perform class predictions. Inputs ------ X: numpy.ndarray Input Iris dataset pos: numpy.ndarray Position matrix found by the swarm. Will be rolled into weights and biases. 
""" # Neural network architecture n_inputs = 4 n_hidden = 20 n_classes = 3 # Roll-back the weights and biases W1 = pos[0:80].reshape((n_inputs,n_hidden)) b1 = pos[80:100].reshape((n_hidden,)) W2 = pos[100:160].reshape((n_hidden,n_classes)) b2 = pos[160:163].reshape((n_classes,)) # Perform forward propagation z1 = X.dot(W1) + b1 # Pre-activation in Layer 1 a1 = np.tanh(z1) # Activation in Layer 1 z2 = a1.dot(W2) + b2 # Pre-activation in Layer 2 logits = z2 # Logits for Layer 2 y_pred = np.argmax(logits, axis=1) return y_pred (predict(X, pos) == y).mean() # - # + [markdown] colab_type="text" id="BcLhx7R4YuUv" # # MNIST in Swift for TensorFlow (Perceptron) # # Blog post: https://rickwierenga.com/blog/s4tf/s4tf-mnist.html # + [markdown] colab_type="text" id="lvZux3i-YzNv" # ## Importing dependencies # + colab={} colab_type="code" id="8GHBoSjRZuqt" %install-location $cwd/swift-install %install '.package(url: "https://github.com/tensorflow/swift-models", .branch("tensorflow-0.6"))' Datasets # + colab={} colab_type="code" id="dvaI1fPZRfQH" import TensorFlow import Foundation import Datasets # + colab={} colab_type="code" id="0awo42kpWWaE" import Python let plt = Python.import("matplotlib.pylab") let np = Python.import("numpy") # + colab={} colab_type="code" id="hxyQaWVajlrA" %include "EnableIPythonDisplay.swift" IPythonDisplay.shell.enable_matplotlib("inline") # + [markdown] colab_type="text" id="gFp59EhiY3Y4" # ## Loading MNIST # # [tensorflow/swift-models](https://github.com/tensorflow/swift-models) # + colab={} colab_type="code" id="BBzeC88oZ1dS" let batchSize = 512 # + colab={} colab_type="code" id="cmEkbL2UHP2r" let mnist = MNIST(batchSize: batchSize, flattening: false, normalizing: true) # + [markdown] colab_type="text" id="2ppaI_0SZKh2" # ## Constructing the network # + colab={} colab_type="code" id="k-hGkSXkRux3" struct Model: Layer { var flatten = Flatten() var hiddenLayer = Dense(inputSize: 28 * 28, outputSize: 300, activation: relu) var outputLayer = Dense(inputSize: 300, outputSize: 10, activation: softmax) @differentiable func callAsFunction(_ input: Tensor) -> Tensor { return input.sequenced(through: flatten, hiddenLayer, outputLayer) } } # + colab={} colab_type="code" id="5XCYHPH_qPaz" var model = Model() # + [markdown] colab_type="text" id="qAXVzcGFZTAn" # ## Training # + colab={} colab_type="code" id="px-gCWe7gSJp" let epochs = 10 var trainHistory = np.zeros(epochs) var valHistory = np.zeros(epochs) # + colab={} colab_type="code" id="GvM422SwOyvN" let optimizer = Adam(for: model) # + colab={} colab_type="code" id="bdqKlxmpHkc7" for epoch in 0..= mnist.trainingSize ? (mnist.trainingSize - ((i - 1) * batchSize)) : batchSize let images = mnist.trainingImages.minibatch(at: i, batchSize: thisBatchSize) let labels = mnist.trainingLabels.minibatch(at: i, batchSize: thisBatchSize) let (_, gradients) = valueWithGradient(at: model) { model -> Tensor in let logits = model(images) return softmaxCrossEntropy(logits: logits, labels: labels) } optimizer.update(&model, along: gradients) } // Evaluate model Context.local.learningPhase = .inference var correctTrainGuessCount = 0 var totalTrainGuessCount = 0 for i in 0..<(mnist.trainingSize / batchSize)+1 { let thisBatchSize = i * batchSize >= mnist.trainingSize ? 
(mnist.trainingSize - ((i - 1) * batchSize)) : batchSize let images = mnist.trainingImages.minibatch(at: i, batchSize: thisBatchSize) let labels = mnist.trainingLabels.minibatch(at: i, batchSize: thisBatchSize) let logits = model(images) let correctPredictions = logits.argmax(squeezingAxis: 1) .== labels correctTrainGuessCount += Int(Tensor(correctPredictions).sum().scalarized()) totalTrainGuessCount += thisBatchSize } let trainAcc = Float(correctTrainGuessCount) / Float(totalTrainGuessCount) trainHistory[epoch] = PythonObject(trainAcc) var correctValGuessCount = 0 var totalValGuessCount = 0 for i in 0..<(mnist.testSize / batchSize)+1 { let thisBatchSize = i * batchSize >= mnist.testSize ? (mnist.testSize - ((i - 1) * batchSize)) : batchSize let images = mnist.testImages.minibatch(at: i, batchSize: thisBatchSize) let labels = mnist.testLabels.minibatch(at: i, batchSize: thisBatchSize) let logits = model(images) let correctPredictions = logits.argmax(squeezingAxis: 1) .== labels correctValGuessCount += Int(Tensor(correctPredictions).sum().scalarized()) totalValGuessCount += thisBatchSize } let valAcc = Float(correctValGuessCount) / Float(totalValGuessCount) valHistory[epoch] = PythonObject(valAcc) print("\(epoch) | Training accuracy: \(trainAcc) | Validation accuracy: \(valAcc)") } # + [markdown] colab_type="text" id="WjUY0gwXTDX-" # ## Inspecting training history # + colab={} colab_type="code" id="b_f-DNeqp9qO" plt.plot(trainHistory) plt.title("Training History") plt.show() # + colab={} colab_type="code" id="1AkaTk_3THsK" plt.plot(valHistory) plt.title("Validation History") plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np from scipy import spatial import folium from IPython.display import IFrame # + import math def cartesian(latitude, longitude, elevation = 0): # Convert to radians latitude = latitude * (math.pi / 180) longitude = longitude * (math.pi / 180) R = 6371 # 6378137.0 + elevation # relative to centre of the earth X = R * math.cos(latitude) * math.cos(longitude) Y = R * math.cos(latitude) * math.sin(longitude) Z = R * math.sin(latitude) return (X, Y, Z) # - # Put here the coordinates of the centre of the area of analysis # For Sydney: c_point = (-33.882882592547034, 151.2063073174587) # For Zurich: #c_point = (47.372133, 8.516359) # #### Import the dataframe with the POIs, including the calculated POI density # Then, import the dataframe with the nodes # Import the dataframe with the POIs poi_df = pd.read_csv(r'C:\Users\demdr\Desktop\Testing the thesis functions\Project data\Spatial database\POI_df.csv') # Import the dataframe with the nodes node_csv = pd.read_csv(r'C:\Users\demdr\Desktop\Testing the thesis functions\Project data\Spatial database\Urban network analysis_nodes.csv') node_csv # #### Create arrays to store information regarding the distance from the closest poi and its POI density Closest_Poi_Density_array = np.zeros(len(node_csv)) Closest_Poi_Distance_array = np.zeros(len(node_csv)) Closest_Poi_Index_array = np.zeros(len(node_csv)) # #### Construct a kd-tree for the poi data places = [] for index, row in poi_df.iterrows(): coordinates = [row['Latitude'], row['Longitude']] cartesian_coord = cartesian(*coordinates) places.append(cartesian_coord) tree = spatial.KDTree(places) # #### Query the kd-tree that has the spatial proximity information for the 
poi data # For each street network node, find the closest POI and store the distance from it,and the POI density # + node_csv['Closest Poi Density']=0 node_csv['Closest Poi Distance']=0 node_csv['Closest Poi Index']=0 for item in node_csv.index: this_pt = (node_csv.loc[item, 'y'],node_csv.loc[item, 'x']) closest = tree.query([cartesian(this_pt[0],this_pt[1])], p = 2) closest_distance, closest_poi_index = closest[0][0]*1000, closest[1][0] closest_poi = (poi_df.loc[poi_df.index[closest_poi_index], 'Latitude'],poi_df.loc[poi_df.index[closest_poi_index], 'Longitude']) closest_poi_density = poi_df.loc[poi_df.index[closest_poi_index], 'Density'] node_density = closest_poi_density Closest_Poi_Density_array[item] = node_density Closest_Poi_Distance_array[item] = closest_distance Closest_Poi_Index_array[item] = closest_poi_index # - # #### Transform the POI density so that it takes into account the distance from the closest POI # The logic is that in some suburban areas the closest POI is far away # # So the nodes that are far away from the closest POI get a reduced POI density value instead of the initial one # # + node_csv['Closest Poi Density']=Closest_Poi_Density_array node_csv['Closest Poi Distance']=Closest_Poi_Distance_array node_csv['Closest Poi Index']=Closest_Poi_Index_array node_csv['Closest Poi Density_with distance decay']=Closest_Poi_Distance_array new_dens =(node_csv['Closest Poi Density']/np.log1p(node_csv['Closest Poi Distance']).apply(np.ceil)).apply(np.ceil).values**2 node_csv['Closest Poi Density_with distance decay']=new_dens #this results in some cases where the new density is larger than the old one. So we use the following to replace it max_density = node_csv['Closest Poi Density'] node_csv.loc[node_csv[node_csv['Closest Poi Density_with distance decay']>max_density].index, 'Closest Poi Density_with distance decay']=node_csv.loc[node_csv[node_csv['Closest Poi Density_with distance decay']>max_density].index, 'Closest Poi Density'] node_csv # - # #### Visualise POI density # + osm_and_poi = folium.Map(location = [c_point[0],c_point[1]], zoom_start = 20, tiles='CartoDB dark_matter' ) osm_and_poi.save("POI density projected on street network nodes.html" ) for item in node_csv.index: this_pt = (node_csv.loc[item, 'y'],node_csv.loc[item, 'x']) node_density = node_csv.loc[item,'Closest Poi Density_with distance decay'] this_color_poi='green' this_color_node='green' # colour scheme for visualisation if node_density>=50: this_color_node='darkpurple' elif (node_density>40) & (node_density<=50): this_color_node='purple' elif (node_density>30) & (node_density<=40): this_color_node='darkred' elif (node_density>20) & (node_density<=30): this_color_node='red' elif (node_density>10) & (node_density<=20): this_color_node = 'orange' elif (node_density>5) & (node_density<=10): this_color_node = 'beige' else: this_color_node = 'green' folium.CircleMarker([this_pt[0],this_pt[1]], color=this_color_node, fill=True, radius=0.5, opacity=0.5, popup = 'node ' + str(item)).add_to(osm_and_poi) osm_and_poi.save("POI density projected on street network nodes.html" ) IFrame(src='./POI density projected on street network nodes.html', width=1500, height=1500) # - # Replace the old 'Urban network analysis_nodes' csv with the modified one, which now has the POI density projected on the street network nodes node_csv.to_csv(r'C:\Users\demdr\Desktop\Testing the thesis functions\Project data\Spatial database\Urban network analysis_nodes.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # 
format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Семинар по numpy # # В этом семинаре мы познакомимся с первой библиотекой, которая понадобится нам в курсе - библиотекой numpy. Это библиотека для работы с матрицами, и она может быть также полезна для матричных вычислений в других дисциплинах. Но сначала познакомимся с интерфейсом jupyter notebook. Это инструмент, позволяющий писать код в интерактивном режиме, рисовать графики и оформлять материалы. # # ## Очень краткий обзор интерфейса jupyter notebook # Jupyter notebook состоит из ячеек. # В ячейку можно записывать код, выполнять, а потом использовать созданные переменные / определенные функции и т. д. в других ячейках: x = [1, 2, 3] print(x) x = 9 y = 3 # Выполнить ячейку: shift + enter, показать аргументы функции: shift + tab. Есть много других горячих клавиш, см. Help -> Keyboard Shortcuts (вверху интерфейса). Там же есть Help -> User Interface Tool! # # Обратите внимание на кнопки + (добавить ячейку), ножницы (удалить ячейку), стрелки, меняющие ячейки местами, кнопку остановить выполнение. # # Ноутбук сохраняется автоматически. Чтобы скачать: File -> Download as -> ipynb. # # Этот текст написан в ячейке типа Markdown, она позволяет красиво оформлять код. Переключение типа ячейки справа от кнопки стоп (черный квадрат). Ячейки с кодом имеют тип Code. # ## Создание массивов в numpy # # Numpy - библиотека для работы с матрицами. Импортируем библиотеку: import numpy as np # Предположим, что мы провели опрос, в котором было четыре вопроса, на каждый можно ответить Да, Нет, Воздержался. В итоге получим таблицу размера 4 на 3. Создадим такую таблицу в numpy: l = [[30, 2, 0], [3, 27, 1], [28, 1, 1], [6, 17, 5]] ar = np.array(l) ar # выводится созданное в последней строке, если в ней нет присваивания (знак =) # Можно создавать массивы из нулей (np.zeros) или из единиц (np.ones): A = np.ones((7, 9)) A # Также часто используется создание векторов из натуральных чисел: vec = np.arange(15) vec # И случайно сгенерированные массивы: r = np.random.rand(3, 5) r # ### Размерности массивов # Размерности: ar.shape[0] A.shape vec.shape # В numpy нулевая размерность отвечает за строки, первая - за столбцы. В нашей матрице ar 4 строки и 3 столбца. Вообще в numpy можно создавать массивы любых размерностей точно таким же образом, как мы сделали выше - numpy не различает векторы, матрицы и тензоры, все они называются массивами. Например, vec имеет длину 15 только по одной размерности, поэтому его shape равен (15,). Shape - это обычный кортеж языка python. # Можно вытянуть все ответы в одну строку: ar.ravel() # Можно наоборот: преобразовать вектор в матрицу: vec.reshape(3, 5) # Обратите внимание, что числа записываются по строкам. vec.reshape(3, 5).shape # По одной из осей можно указывать -1, тогда библиотека сама посчитает число элементов по этой размерности: vec.reshape(3, -1) # Аналогичным образом можно дублироать функционал функции ravel: vec.reshape(-1) # ### Операции с массивами # Можно выделить три группы операций с массивами в numpy: # * поэлементные # * матричные # * агрегирующие # Поэлементные выполняются между массивами одной формы (shape), хотя ниже мы обсудим некое обощение правила одной формы. vec + vec + vec A * 10 x = np.array([1, 2, 3]) y = np.array([-1, 1, -1]) x * y # Обратите внимание, что * - это поэлементное умножение, а не матричное! np.exp(x) # Матричные операции - операции из линейной алгебры. 
Например, матричное произведение: A = np.random.rand(7, 8) B = np.random.rand(8, 3) A.dot(B) # Можно писать и так: np.dot(A, B) # И так: A @ B # Проверим форму: A.dot(B).shape # Обращение матрицы: np.linalg.inv(np.random.rand(3, 3)) # Модуль np.linalg содержит много полезных матричных функций, их можно посмотреть в документации модуля. # Агрегирующие операции агрерируют информацию в троках, столбцах, во всем массиве и т. д. Самые популярные такие операции - суммирование np.sum, усреднение np.mean, медиана np.median, максимум np.max и минимум np.min. # Число полученных ответов на вопросы (всего): np.sum(ar) # Пробуем выяснить число респондентов. Для этого просуммируем матрицу по строкам (это делается с помощью указания axis=1): np.sum(ar, axis = 1) np.sum(ar, axis = 1).shape # По столбцам: axis=0, по строкам: axis=1. # # В результате суммирования получился вектор (размерность на 1 меньше, чем у исходной матрицы). Можно указать keepdims=True, чтобы сохранть размерности: np.sum(ar, axis = 1, keepdims=True).shape # Задание для студентов: посчитать сумму по строкам, используя матричное произведение. np.sum(A.dot(B), axis=1) # Считаем число ответов "да", "нет", "воздержался" двумя способами: np.sum(ar, axis=0) ones = np.ones(4) np.dot(ones, ar) # ### Индексация # Для индексации ставим [ ] и через запятую перечисляем действия с осями. В матрице 0 - по вертикали, 1 - по горизонтали ar[1, 1] # выбрать 1 элемент ar # вывели для проверки ar[:, 2] # выделить столбец ar[:, -1] # выделить последний столбец ar[0] # выделить строку ar[:, ::2] # выделить все столбы с четными номерами # Можно делать логическую индексацию, чтобы выбирались только те элементы, для которых стоит True. Выберем ответы на вопросы с номерами, кратными 2: ar[np.arange(ar.shape[0])%2==0] # ### Добавление оси # # Для удобного перехода между размерностями используют добавление оси. Вектор можно сделать матрицей с размером 1 по одной из размерностей. ones[:, np.newaxis] ones[:, np.newaxis].shape # вместо вектора с формой (4,) стала матрциа с формой (4, 1) ones[np.newaxis, :] # вместо вектора с формой (4,) стала матрциа с формой (1, 4) ones[np.newaxis, :].shape # ### Добавление оси в поэлементных операциях # В поэлементных операциях можно использовать не только массивы в одинаковым в точности размером. В общем виде условие такое: по каждой размерности либо размер совпадает, либо в одном из массивов размер 1. Например, матрица размера (4, 3) и вектор размера (4, 1). В этом случае при выполнении операции столбец будет как бы "дублироваться" для каждого столбца в первой матрице. 
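# For instance, a minimal check of this broadcasting rule (an illustrative sketch added here, using only numpy as already imported): an array of shape (4, 3) combined element-wise with a column of shape (4, 1).

demo = np.arange(12).reshape(4, 3)          # shape (4, 3)
col = np.array([[1], [10], [100], [1000]])  # shape (4, 1)
demo * col                                  # the column is broadcast across all 3 columns, result shape (4, 3)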
Воспользуемся этим, чтобы найти долю каждого ответа на все вопросы: sums = ar.sum(axis=1) # всего ответов на каждый вопрос sums.shape sums[:, np.newaxis].shape # добавили ось ar / sums[:, np.newaxis] # поделили число каждого варианта на общее число ответов на вопрос # ### Объединение массивов # Добавляем новый вопрос в таблицу: row = np.array([5, 12, 15]) row = row[np.newaxis, :] # конкретно тут можно без увеличения размерности # но в других случаях может быть ошибка, лучше добавлять ar = np.vstack((ar, row)) # Добавляем новый столбец в таблицу - максимальное число ответов: mx = np.max(ar, 1) mx mx.shape mx = mx[:, np.newaxis] ar = np.hstack ((ar, mx)) ar # Удаление строки (аналогично можно удалять столбец): np.delete(ar, np.arange(3, 4), axis=1) # ### Задания для студентов # Выделите строки, у которых ответов "нет" больше, чем ответов "да": ar[ar[:, 0], # + pycharm={"name": "#%%\n"} import numpy as np import pyxdf import matplotlib.pyplot as plt f = r"C:\Users\noam\Documents\CurrentStudy\sub-P001\ses-S001\eeg\sub-P001_ses-S001_task-Default_run-001_eeg_old3.xdf" f # + pycharm={"name": "#%%\n"} data, header = pyxdf.load_xdf(f) # + pycharm={"name": "#%%\n"} data # + pycharm={"name": "#%%\n"} header # + pycharm={"name": "#%%\n"} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="mMlSeyeEJKrc" # ## Example code to reproduce the results of *Developing a Victorious Strategy to the Second Strong Gravitational lensing Data Challenge* by Bom et al. # # This notebook is used with a subsample of the ISGLC dataset, using the HJY+VIS(repeated) configuration. The network will be retrained on a given number of images, using the same number for validation and the rest for testing. # + id="F3r1lKG4JKrf" import os os.environ['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID' os.environ['CUDA_VISIBLE_DEVICES'] = '1' import tensorflow as tf from tensorflow import keras import cv2 import sklearn.metrics as sk_metrics import numpy as np import matplotlib.pyplot as plt import h5py import pandas as pd import scipy from PIL import Image from tensorflow_addons import optimizers # + [markdown] id="JdlgGoMTJKrh" # Include the RADAM optimizer to tf namespace so the model can be loaded # + id="4FOfQ9CUJKrh" tf.keras.optimizers.RectifiedAdam = optimizers.RectifiedAdam # + [markdown] id="OV0v5vwtJKri" # Set the parameter: # # - n_img: number of images available for training (same number for validation). Evaluation will be done on the rest # - batch_size: batch size to feed model.fit. 
Must be smaller than n_img # - n_epochs: number of epochs for training # + id="Le9q_PjTJKri" n_img = 150 batch_size = 50 n_epochs = 10 # + id="c7i9rvLXJKrj" def plot_all_bands(img, ax=None): img_norm = np.zeros_like(img) for i in range(img.shape[-1]): norm = ((img[:,:,i:i+1] - img[:,:,i:i+1].min()) / np.ptp(img[:,:,i:i+1]))#*255 #norm = cv2.normalize(img[:,:,i:i+1], np.zeros_like(img[:,:,i:i+1]), 0, 1.0, cv2.NORM_MINMAX) img_norm[:,:,i:i+1] = norm if ax == None: f, ax = plt.subplots() ax.imshow(img_norm, interpolation='nearest', cmap='gray') return None def resize_imgs(imgs, newsize): #newsize is a square result = np.zeros((imgs.shape[0], newsize, newsize, imgs.shape[-1])) for i, img in enumerate(imgs): for j in range(imgs.shape[-1]): resized = cv2.resize(img[:,:,j], (newsize, newsize), interpolation = cv2.INTER_NEAREST) result[i,:,:,j] = resized return result def combine_bands(imgs, alone=0): n_bands = imgs.shape[-1] assert alone < n_bands, 'chosen band must be between 0 and {}'.format(n_bands - 1) solitary = imgs[:,:,:,alone:alone+1] rest_ids = [iid for iid in range(n_bands) if iid != alone] #print(rest_ids) rest = np.zeros((imgs.shape[0], imgs.shape[1], imgs.shape[2], n_bands-1)) for j, band in enumerate(rest_ids): rest[:,:,:,j] = imgs[:,:,:,band] return rest, solitary def fbeta(rec, prec, beta2=0.001): return (1+beta2) * (rec*prec) / ((beta2*prec) + rec) def normalize(arr): result = np.zeros_like(arr) if arr.shape[-1] > 1: for i in range(arr.shape[-1]): maxi = np.max(arr[:,:,:,i]) mini = np.min(arr[:,:,:,i]) new_arr = (arr[:,:,:,i] - mini) / np.ptp(arr[:,:,:,i]) result[:,:,:,i] = new_arr else: maxi = np.max(arr[:,:,:,0]) mini = np.min(arr[:,:,:,0]) new_arr = (arr[:,:,:,:1] - mini) / np.ptp(arr[:,:,:,:1]) result = new_arr return result def create_input_data(data, alone=0, norm=True): hjy, vis = combine_bands(data, alone) hjy_res = resize_imgs(hjy, 66) vis_res = resize_imgs(vis, 200) if norm: hjy_res = normalize(hjy_res) vis_res = normalize(vis_res) pad = np.zeros_like(vis_res) vis_res = np.concatenate([vis_res, pad, pad], axis=-1) return hjy_res, vis_res # + id="FVQIx_rNJKrk" # %config Completer.use_jedi=False # + [markdown] id="Khyo75Y_JKrl" # Set the directories where the data and model are, load them. One-hot encode the labels since the I SGLC provided them as 0 or 1. # + id="V-xSRxdnJKrl" data_dir = '/tf/bernardo/EscolaCBPF/lensdata' model_dir = '/tf/bernardo/Challenge2/data' # + id="EjOO2cbrJKrm" imgs = np.load(os.path.join(data_dir, 'imgs_challenge1_git.npy')) is_lens = np.load(os.path.join(data_dir, 'labels_challenge1_git.npy')) is_lens = keras.utils.to_categorical(is_lens) # + [markdown] id="Yrf_rmZiJKrm" # Prepare the data. Since the images are in four bands, we choose one of them to be the equivalent of VIS, while the rest are the equivalent to HJY. We found the best results when $r$ was alone, so we choose its index in `combine_bands` and `create_input_data`, as the images are $ugri$. `combine_bands` only combine the bands this way for plotting, while `create_input_data` also resizes them to the format of the II SGLC (200x200 and 66x66) and normalizes them. 
# + id="ltl5DNdhJKrm" hjy_plot, vis_plot = combine_bands(imgs, alone=2) hjy, vis = create_input_data(imgs, alone=2) # + [markdown] id="06ptT0qGJKrn" # Select example lens and no lens for visualization # + id="EjJCOgMqJKrn" outputId="c6186ef6-c779-4e20-f2d6-f89a747e1a0d" lens = np.where(is_lens == 1)[0] nolens = np.where(is_lens == 0)[0] id_lens = np.random.choice(lens) id_nolens = np.random.choice(nolens) f, ax = plt.subplots(2,2,figsize=[10,10], gridspec_kw={'wspace':0.01, 'hspace':0.02}) ax[0,0].imshow(vis_plot[id_lens], interpolation='nearest', cmap='gray') ax[0,0].set_axis_off() ax[0,0].set_title('R', fontsize=20) ax[0,0].annotate('LENS', (10,10), xytext=(-25,50), annotation_clip=False, fontsize=20, weight='bold') plot_all_bands(hjy_plot[id_lens], ax=ax[0,1]) ax[0,1].set_axis_off() ax[0,1].set_title('UGI', fontsize=20) ax[1,0].imshow(vis_plot[id_nolens], interpolation='nearest', cmap='gray') ax[1,0].set_axis_off() ax[1,0].annotate('NO LENS', (10,10), xytext=(-40,50), annotation_clip=False, fontsize=20, weight='bold') plot_all_bands(hjy_plot[id_nolens], ax=ax[1,1]) ax[1,1].set_axis_off() plt.show() # + [markdown] id="GJpfijcyJKrn" # Load the model. Select random `n_img` from the dataset, split in train/validation/test # + id="EeuI20bOJKro" model = keras.models.load_model(os.path.join(model_dir, 'efn2_hjy_vis_newpre_repeated_aug_new_fold5')) # + id="mWGtP5VlJKro" ids_train = np.random.choice(hjy.shape[0], n_img*2, replace=False) hjy_train = hjy[ids_train[:n_img]] vis_train = vis[ids_train[:n_img]] y_train = is_lens[ids_train[:n_img]] hjy_val = hjy[ids_train[n_img:]] vis_val = vis[ids_train[n_img:]] y_val = is_lens[ids_train[n_img:]] ids_test = np.array([i for i in range(hjy.shape[0]) if i not in ids_train]) hjy_test = hjy[ids_test] vis_test = vis[ids_test] y_test = is_lens[ids_test] # + id="BCjNvh7gJKro" outputId="67f3fa73-dedb-48b1-bfc0-3dd5ae3341bd" save_name = '/tf/bernardo/Challenge2/data/test_model' checkpoint = keras.callbacks.ModelCheckpoint(save_name, monitor='val_loss', save_best_only=True, verbose=1) history = model.fit([vis_train, hjy_train], y_train, batch_size=batch_size, epochs=n_epochs, validation_data=([vis_val, hjy_val], y_val), callbacks=[checkpoint]) # + [markdown] id="1eQ57T-uJKrp" # Load the best model, with the lowest validation loss # + id="F1n2lqMeJKrp" outputId="8137dcdd-d47e-4307-a8fc-2fbba469e535" best_model = keras.models.load_model(save_name) # + [markdown] id="R6b5fJoFJKrp" # Predict on the test set and get the metrics # + id="1Fm1_5dTJKrp" preds = best_model.predict([vis_test, hjy_test]) # + id="BYKge4bnJKrp" fpr, tpr, _ = sk_metrics.roc_curve(y_test[:,1], preds[:,1]) auroc = sk_metrics.roc_auc_score(y_test[:,1], preds[:,1]) precision, recall, _ = sk_metrics.precision_recall_curve(y_test[:,1], preds[:,1]) prauc = sk_metrics.average_precision_score(y_test[:,1], preds[:,1]) # + id="L9ZFK054JKrq" outputId="050f25f6-2674-4fc9-d201-ae390fbba018" f, ax = plt.subplots(1,2, figsize=[14,6]) ax[0].plot(fpr, tpr, 'k-', label = 'AUC={:2.3f}'.format(auroc)) ax[0].set_xlabel('False Positive Rate') ax[0].set_ylabel('True Positive Rate') ax[0].legend(loc='lower right') ax[1].plot(recall, precision, 'k-', label = 'AUC={:2.3f}'.format(prauc)) ax[1].set_xlabel('Recall') ax[1].set_ylabel('Precision') ax[1].legend(loc='lower left') plt.tight_layout() plt.show() # + id="3WTf8k4OJKrq" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python 
# name: python3 # --- # + [markdown] toc=true #


    # # - # # Loading Libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # %matplotlib inline import sklearn import warnings warnings.filterwarnings("ignore") # # Loading Data df = pd.read_csv('Datasets/applicant.csv',names=[ "Age","Workclass","fnlwgt","Education","Education-Num","Marital Status", "Occupation","Relationship","Race","Gender","Capital Gain","Capital Loss", "Hours per week","Country","Target"], sep=r'\s*,', engine='python', na_values='?') df.head() df.shape df.info() df = df.replace('?',np.nan) df = df.replace(' ?',np.nan) for i in list(df.columns): print(i) print(df.loc[:,i].unique()) print('=====================================================') df.isna().sum() df.dropna(inplace=True) df.shape df.info() # # Visualization plt.figure(figsize=(10,7)) plt.subplot(1,2,1) sns.scatterplot(data = df , x = 'Age' , y = 'Capital Gain',hue = 'Target',palette='flare',alpha=0.3); plt.subplot(1,2,2) sns.scatterplot(data = df , x = 'Age' , y = 'Capital Loss',hue = 'Target',palette='flare',alpha = 0.3); sns.countplot(data = df , x = 'Target'); # The dataset is imbalanced # # Data Preprocessing # let's make age as category df['Age'] = pd.cut(df['Age'],bins = 20,labels = ["Age_" + str(i) for i in range(20)]) cat_features = [] num_features = [] for i in list(df.columns): if (df.dtypes[i]) ==('int64'): num_features.append(i) else: cat_features.append(i) num_features cat_features from sklearn.preprocessing import OrdinalEncoder,LabelEncoder,StandardScaler sc = StandardScaler() oe = OrdinalEncoder() le = LabelEncoder() # + df[cat_features] = oe.fit_transform(df[cat_features]) # - X = df.drop('Target',axis = 1) y = df['Target'] # + from sklearn.model_selection import train_test_split X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,random_state=42) # - X_train.shape,y_train.shape # + X_train.loc[:,num_features] = sc.fit_transform(X_train.loc[:,num_features]) X_test.loc[:,num_features] = sc.transform(X_test.loc[:,num_features]) # - # # Modelling from sklearn.svm import SVC sv = SVC(kernel = 'rbf',C = 1.0) sv.fit(X_train,y_train) y_pred = sv.predict(X_test) # # Evaluation from sklearn.metrics import accuracy_score,confusion_matrix accuracy_score(y_pred , y_test) sns.heatmap(confusion_matrix(y_test,y_pred),annot=True) plt.ylabel('True Class') plt.xlabel('Predicted Class'); # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Recorded data import pickle from matplotlib import pyplot as plt import numpy as np import tensorflow as tf from tensorflow.keras import datasets, layers, models # read data data = {} with open('data-test.pkl', 'rb') as f: data = pickle.load(f) print(len(data)) print(data[0]["stack"]) plt.imshow(data[0]["stack"][0]) plt.imshow(data[50]["stack"][0]) for d in data[0:20]: print(d["value"]) print(d["stack"][0].shape) # # Sample model train # + model = models.Sequential() model.add(layers.Conv2D( 32, (3, 3), activation='relu', input_shape=(40, 120, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.Flatten()) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(2, activation='softmax')) model.compile(optimizer='adam', 
loss='binary_crossentropy', metrics=['accuracy']) model.summary() # + x = [] y = [] for d in data: tmp = [d["stack"][0]/255, d["stack"][1]/255, d["stack"][2]/255] x.append(np.stack(tmp, axis=2)) y.append(np.array(d["value"])) x = np.array(x) y = np.array(y) print(len(x)) print(x.shape) print(len(y)) print(y.shape) print(y[20]) plt.imshow(x[20]) # - xx = np.array([x[9], x[0]]) yy = np.array([y[9], y[0]]) print(yy) history = model.fit(x, y, epochs=20) # + plt.plot(history.history['acc'], label='accuracy') plt.plot(history.history['loss'], label = 'val_loss') plt.xlabel('Epoch') plt.ylabel('Accuracy') plt.ylim([0.5, 1]) plt.legend(loc='lower right') # - test_loss, test_acc = model.evaluate(x, y, verbose=2) print(test_loss) print(test_acc) print(model.predict(np.array([x[10]]))) print(model.predict_classes(np.array([x[10]]))) tf.keras.models.save_model(model, 'model.tf') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="eKEJ8sTQx4uV" # # **Naive Bayes Example** # # Predciting whether or not a person makes under $50K / year. # 0. Notebook Set Up # 1. Data Preprocessing # 2. Exploratory Data Analysis # 3. Performance Metrics: Basic Model # 4. Performance Metrics: Range of Split Percentages # 5. Performance Metrics: Tuned Hyperparameters # + [markdown] id="3d_WNx1wyOgO" # ## **0. Notebook Set Up** # + [markdown] id="Hg02-dg9z4lg" # Import files from google drive. # + colab={"base_uri": "https://localhost:8080/"} id="SICYlH6AxUwq" outputId="6a727bad-6768-47cc-e008-f2400b662d02" from google.colab import drive drive.mount('/content/drive', force_remount = True) # + [markdown] id="AZPu34Z50AgN" # Import required Python modules. # + colab={"base_uri": "https://localhost:8080/"} id="cROsJcyw0Ks6" outputId="dcbda89f-e593-4571-9c0a-0ebd9d604e59" # %%time import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.naive_bayes import GaussianNB from sklearn.metrics import f1_score from sklearn.metrics import accuracy_score as acc_score from sklearn.metrics import roc_auc_score as roc_score from sklearn.preprocessing import OrdinalEncoder # + [markdown] id="e-OIXFox0Nly" # Functions for Data Pre-Processing # + colab={"base_uri": "https://localhost:8080/"} id="-MJdYMb20Rla" outputId="10dc01e4-efc9-485d-e5a5-a447b00da24c" # %%time # Replace labels for columns. Ugly but gets the job done. def replace_labels(df): df.columns = ['age', 'workclass', 'final_weight', 'education', 'education_num', 'marital_status', 'occupation', 'relationship', 'race', 'sex', 'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', '<=50K'] return df # Split features according to data type. Necessary? def split_column_types(df): num_labels = [] cat_labels = [] for feature in df.columns: if df[feature].dtype == 'int64': num_labels.append(feature) else : cat_labels.append(feature) return num_labels, cat_labels # returns a list containing names of columns cotaining # categorical data. 
def make_cat_labels(df): cat_labels = [] for feature in df.columns: if df[feature].dtype != 'int64': cat_labels.append(feature) return cat_labels # Remove leading and trailing spaces from the df def strip_df(df, cat_labels): for feature in cat_labels: df[feature] = df[feature].str.strip() return df # Helper function for remove_null_records def check_for_null(df, cat_labels): missing_list = [] for feature in cat_labels: count_list = df[feature].value_counts().index.tolist() if count_list.count('?'): missing_list.append(feature) return missing_list # Helper function for remove_null_records def insert_null_values(df, cat_labels): missing_list = check_for_null(df, cat_labels) for feature in missing_list: df[feature].replace('?', np.NaN, inplace = True) return df # Helper function for process_cat_data def remove_null_records(df, cat_labels): return insert_null_values(df, cat_labels).dropna() # Cleans categorical data by stripping spaces # removing null records. def process_cat_data(df): cat_labels = make_cat_labels(df) df = strip_df(df, cat_labels) df = remove_null_records(df, cat_labels) return df # Changes data type of target vector from string to integer def convert_target_vector(df): df['<=50K'].replace('<=50K', 1, inplace = True) df['<=50K'].replace('>50K', 0, inplace = True) return df # + [markdown] id="kvlCAnP70Yyj" # Functions for producing distributions of a model's performance metrics. # + colab={"base_uri": "https://localhost:8080/"} id="flUOmDsc08wy" outputId="1815b6d4-213b-4c0b-909d-ca417698f9bf" # %%time # Here to remind me of my computer's hard work <3 def progress_bar(trial, no_trials): if trial != no_trials - 1: if trial % (no_trials / 20) == 0: print('#', end = '') else : print('#') # Helper function for type_error() def make_test_dict(): df = pd.DataFrame() gnb = GaussianNB() t_pct = 0.5 no_trials = 100 return {'df': df, 'gnb': gnb, 't_pct': t_pct, 'no_trials': no_trials} # Simple sanity check. def type_error(model): test = make_test_dict() if type(model) != type(test): return 1 if len(model) == 0: return 1 for feature in test.keys(): if type(test[feature]) != type(model[feature]): return 1 # Returns dictionary that contains resulting metrics distributions def model_dist(model): if type_error(model): print('type error: check definition of "model"') return # lists to store distributions of different metrics. f1_dist = [] acc_dist = [] roc_dist = [] # Do that one thing. TODO: look this up model['df'] = model['df'].sample(frac = 1).reset_index(drop = True) # Instantiate the tranformer ec = OrdinalEncoder() for x in range(model['no_trials']): X = model["df"].drop(columns = '<=50K') y = model["df"]["<=50K"] X = ec.fit_transform(X) # Split into training/testing set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = model['t_pct'], stratify = y) # fit the model model['gnb'].fit(X_train, y_train) # Make prediction y_pred = model['gnb'].predict(X_test) # Update individual metrics f1_dist.append(f1_score(y_test, y_pred)) acc_dist.append(acc_score(y_test, y_pred)) roc_dist.append(roc_score(y_test, y_pred)) # Print small progress bar. progress_bar(x, model['no_trials']) return {'f1': f1_dist, 'acc': acc_dist, 'roc': roc_dist, 't_pct': model['t_pct']} def make_model_dict(train, t_pct, no_trials): return {'df': train, 'gnb': GaussianNB(), 't_pct': t_pct, 'no_trials': no_trials} # Returns a dictionary containing three df, that each have # one of the three metrics distributions for evaluating model performance. 
def split_dist(train, t_range, no_trials): split_dist = [] for t_pct in range(t_range['l_bound'], t_range['u_bound'], t_range['incr']): model = make_model_dict(train, t_pct / 100, no_trials) split_dist.append(model_dist(model)) return pd.DataFrame(split_dist) # + [markdown] id="vdJ7ijd71CxI" # Functions for graphing distributions of model metrics. # + colab={"base_uri": "https://localhost:8080/"} id="N39yZZL-1CFk" outputId="01e1084c-7524-4ea9-cb67-f65da5853c6c" # %%time def metrics_histplot(metrics): # Graph perfomance distributions for model metrics. perf_fig, axes = plt.subplots(1, 3, figsize = (30, 10)); perf_fig.suptitle('Perfomance Metrics ', y = 1.02); perf_fig.tight_layout() scores = list(metrics.keys())[:3] index = 0 for score in scores: sns.histplot(ax = axes[index], x = metrics[score]) axes[index].set_title(score) index += 1 def metrics_boxplot(metrics): # Graph perfomance distributions for model metrics. perf_fig, axes = plt.subplots(1, 3, figsize = (30, 10)); perf_fig.suptitle('Perfomance Metrics ', y = 1.02); perf_fig.tight_layout() index = 0 scores = list(metrics.keys())[:3] for score in scores: sns.boxplot(ax = axes[index], y = metrics[score]) axes[index].set_title(score) index += 1 # Graph performance metrics across a range of test/train # split values. Please enter all values as integers. # They all represent percentages. # TODO: 1. use dicts to simplify the parameters. # 2. finish # def graph_metrics_by_split(t_range): # for t_pct in range(t_range['l_bound'], t_range['u_bound'], t_range['incr']): # + [markdown] id="cE1bumz21QH9" # ## **1. Data Preprocessing** # + colab={"base_uri": "https://localhost:8080/"} id="ZA5dN1G81L1k" outputId="068a4276-43f9-4541-902f-73417b7bb73a" # %%time # Load and clean training data. path = '/content/drive/My Drive/adult.csv' train = replace_labels(pd.read_csv(path)) train = process_cat_data(train) train = convert_target_vector(train) # + [markdown] id="rUEMezqv1nTP" # ## **2. Exploratory Data Analysis** # + id="1Ssqmb_e1tfW" # + [markdown] id="vs2jPD5N1uCQ" # ## **3. Performance Metrics: Basic Model** # # + id="Ps5lsR6112cd" colab={"base_uri": "https://localhost:8080/"} outputId="6f6257eb-e611-470e-d57c-728fcf13737e" # %%time # Run model and produce distributions for performance metrics. model = {'df': train, 'gnb': GaussianNB(), 't_pct': 0.5, 'no_trials': 5} metrics = model_dist(model) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="zFgN5ggrMRE-" outputId="5ff0e8d9-12e0-4992-ded3-3e3f524cccdc" # Graph performance metrics for basic model. metrics_histplot(metrics) metrics_boxplot(metrics) # + [markdown] id="tMRJVAD_YLFt" # ## **4. Performance Metrics: Range of Split Percentages** # # + colab={"base_uri": "https://localhost:8080/"} id="49dWyViLYWbd" outputId="8e26e203-d585-4bd4-9ec7-f662d6bc1256" # %%time # Range of split percentages to loop over. t_range = {'l_bound': 5, 'u_bound': 50, 'incr': 5} # Produce dictionary containing dictionaries that store the # performance metrics for each t_pct value. dist_split = split_dist(train, t_range, 25) #TODO: re-write split_dist to that each record holds the metrics single for a single run, # not the entire minte carlo simulation ditribution. This should solve issues with graphing. 
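# A possible way to address the TODO above (a hedged sketch, not part of the original notebook):
# a long-format variant of `split_dist` in which every returned row holds the metrics of a single
# run, so seaborn can group boxplots by `t_pct` directly. It relies only on `make_model_dict` and
# `model_dist` as defined earlier; the name `split_dist_long` is made up for illustration.

# +
def split_dist_long(train, t_range, no_trials):
    # One row per Monte Carlo run: columns are t_pct, f1, acc, roc.
    rows = []
    for t_pct in range(t_range['l_bound'], t_range['u_bound'], t_range['incr']):
        model = make_model_dict(train, t_pct / 100, no_trials)
        metrics = model_dist(model)
        for f1, acc, roc in zip(metrics['f1'], metrics['acc'], metrics['roc']):
            rows.append({'t_pct': metrics['t_pct'], 'f1': f1, 'acc': acc, 'roc': roc})
    return pd.DataFrame(rows)

# Usage sketch (same arguments as the cell above):
# dist_long = split_dist_long(train, t_range, 25)
# sns.boxplot(x='t_pct', y='f1', data=dist_long)
# -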
# + colab={"base_uri": "https://localhost:8080/", "height": 328} id="FTMQgX3iC3Mb" outputId="2e30815f-135b-45aa-ccfc-726128035d56" dist_split # + colab={"base_uri": "https://localhost:8080/", "height": 877} id="sn59fJa2qefi" outputId="3a7b64aa-f448-4824-8bf0-dabec69beda0" # for t_pct in dist_split.keys(): # metrics_boxplot(dist_split[t_pct]) # # TODO: Refactor to be nice and tidy like Dr. H's # # Should give clear visual comparisons of each t_pct value. # print(type(dist_split.iat[0,0])) # print(dist_split) split_figs, axes = plt.subplots(1, 10, figsize = (50, 10)) for t_pct in range(int((t_range['u_bound'] - t_range['l_bound']) / t_range['incr'])): print(t_pct) sns.boxplot(y = 'f1', x = 't_pct', data = dist_split) # sns.boxplot(ax = axes[t_pct], y = dist_split.iat[0, t_pct]) # dist_split # + colab={"base_uri": "https://localhost:8080/", "height": 510} id="ZzR6Qs3B5i3b" outputId="af8ec470-549b-44a3-ccfe-bdd344e7e8bb" print(type(dist_split)) sns.boxplot(x = 't_pct', y = 'f1', data = dist_split) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import mdtraj as md import sys np.set_printoptions(threshold=sys.maxsize) # + t = md.load('OP19_7.pdb') origin = 0 #index for first origin atom atomspermol = 6 #how many atoms per mole nmols = 3200 #total number of molecules cutoff = 5 #cutoff in angstroms atom_indices = np.zeros(nmols, dtype=int) for i in range(len(atom_indices)): atom_indices[i] = int(i*atomspermol+origin) t = t.atom_slice(atom_indices) #get PBCs bounds. This only works for orthohombic PBCs for the time being. Might implement more general solution later. pbc_vectors = np.diag(np.asarray((t.unitcell_vectors*10))[0]) coords = t.xyz*10 #defines coordinates. multiply by 10 to get angstroms, since mdtraj converts to nm. #computes distances from two sets of xyz coordinates def distance(a,b,bounds): a = np.asarray((a)) b = np.asarray((b)) min_dists = np.min(np.dstack(((a - b) % bounds, (b - a) % bounds)), axis = 2) dist = np.sqrt(np.sum(min_dists ** 2, axis = 1)) return dist # + neighbors = np.zeros((nmols,nmols)) for i in range(nmols): for j in range(i+1,nmols): if distance(coords[0][i],coords[0][j], pbc_vectors) <= cutoff: neighbors[i][j] = 1 neighbors[j][i] = 1 nneighbors = np.sum(neighbors,axis=1) # - # + f = open('OP19_7.cop','r') g = open('normalOPs','w') orderparameters = [] for x in f: orderparameters.append(x) f.close() bondops = np.array(orderparameters[(10+nmols):(10+2*nmols)],dtype=float) normalops = bondops/(nneighbors+1) np.savetxt('newOPs.txt',normalops) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the "../input/" directory. 
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory import os #print(os.listdir("../input/Data/Stocks")) # Any results you write to the current directory are saved as output. # + _uuid="f8325e01c1bdd8621b4c3674ad04daaceb9c64e8" DATA_PATH = '../input/Data/Stocks' # + _uuid="de8db9021cb0526b3a1b069e34db4ba40dd19e20" class TradeEnv(): def reset(self): self.data = self.gen_universe() self.pos = 0 self.game_length = self.data.shape[0] self.returns = [] # return first state return self.data[0,:-1,:] def step(self,allocation): ret = np.sum(allocation * self.data[self.pos,-1,:]) self.returns.append(ret) mean = 0 std = 1 if len(self.returns) >= 20: mean = np.mean(self.returns[-20:]) std = np.std(self.returns[-20:]) + 0.0001 sharpe = mean / std if (self.pos +1) >= self.game_length: return None, sharpe, True, {} else: self.pos +=1 return self.data[self.pos,:-1,:], sharpe, False, {} def gen_universe(self): stocks = os.listdir(DATA_PATH) stocks = np.random.permutation(stocks) frames = [] idx = 0 while len(frames) < 100: try: stock = stocks[idx] frame = pd.read_csv(os.path.join(DATA_PATH,stock),index_col='Date') frame = frame.loc['2005-01-01':].Close frames.append(frame) except: # catch *all* exceptions e = sys.exc_info()[0] idx += 1 df = pd.concat(frames,axis=1,ignore_index=False) df = df.pct_change() df = df.fillna(0) batch = df.values episodes = [] for i in range(batch.shape[0] - 101): eps = batch[i:i+101] episodes.append(eps) data = np.stack(episodes) assert len(data.shape) == 3 assert data.shape[-1] == 100 return data # + _uuid="d91c72d82ff718ce383e90d559914dd6ebd970c7" class RandomTrader(): def get_action(self): action = np.random.rand(100) * 2 - 1 action = action * (np.abs(action) / np.sum(np.abs(action))) return action # + _uuid="ad47a700332256944cb25d5c8d61e644d539f1a5" import sys #import gym import numpy as np from scipy.stats import norm from keras.layers import Dense, Input, Lambda, LSTM from keras.models import Model from keras.optimizers import Adam from keras import backend as K from collections import deque import random EPISODES = 3000 # A2C(Advantage Actor-Critic) agent for the Cartpole class A2CAgent: def __init__(self, state_size, state_seq_length, action_size): # if you want to see Cartpole learning, then change to True self.render = False self.state_size = state_size self.state_seq_length = state_seq_length self.action_size = action_size self.value_size = 1 self.exp_replay = deque(maxlen=2000) # get gym environment name # these are hyper parameters for the A3C self.actor_lr = 0.0001 self.critic_lr = 0.001 self.discount_factor = .9 # create model for actor and critic network self.actor, self.critic = self.build_model() # method for training actor and critic network #self.optimizer = [self.actor_optimizer(), self.critic_optimizer()] self.optimize_actor = self.actor_optimizer() #5 self.optimize_critic = self.critic_optimizer() def build_model(self): state = Input(batch_shape=(None, self.state_seq_length, self.state_size)) x = LSTM(120,return_sequences=True)(state) x = LSTM(100)(x) actor_input = Dense(100, activation='relu', kernel_initializer='he_uniform')(x) # actor_hidden = Dense(self.hidden2, activation='relu')(actor_input) mu = Dense(self.action_size, activation='tanh', kernel_initializer='he_uniform')(actor_input) sigma_0 = Dense(self.action_size, activation='softplus', kernel_initializer='he_uniform')(actor_input) sigma = Lambda(lambda x: x + 0.0001)(sigma_0) critic_input = Dense(30, activation='relu', 
kernel_initializer='he_uniform')(x) # value_hidden = Dense(self.hidden2, activation='relu')(critic_input) state_value = Dense(1, activation='linear', kernel_initializer='he_uniform')(critic_input) actor = Model(inputs=state, outputs=(mu, sigma)) critic = Model(inputs=state, outputs=state_value) actor._make_predict_function() critic._make_predict_function() actor.summary() critic.summary() return actor, critic def actor_optimizer(self): action = K.placeholder(shape=(None, 1)) advantages = K.placeholder(shape=(None, 1)) # mu = K.placeholder(shape=(None, self.action_size)) # sigma_sq = K.placeholder(shape=(None, self.action_size)) mu, sigma_sq = self.actor.output pdf = 1. / K.sqrt(2. * np.pi * sigma_sq) * K.exp(-K.square(action - mu) / (2. * sigma_sq)) log_pdf = K.log(pdf + K.epsilon()) entropy = K.sum(0.5 * (K.log(2. * np.pi * sigma_sq) + 1.)) exp_v = log_pdf * advantages exp_v = K.sum(exp_v + 0.01 * entropy) actor_loss = -exp_v optimizer = Adam(lr=self.actor_lr) updates = optimizer.get_updates(self.actor.trainable_weights, [], actor_loss) train = K.function([self.actor.input, action, advantages], [], updates=updates) return train # make loss function for Value approximation def critic_optimizer(self): discounted_reward = K.placeholder(shape=(None, 1)) value = self.critic.output loss = K.mean(K.square(discounted_reward - value)) optimizer = Adam(lr=self.critic_lr) updates = optimizer.get_updates(self.critic.trainable_weights, [], loss) train = K.function([self.critic.input, discounted_reward], [], updates=updates) return train # using the output of policy network, pick action stochastically def get_action(self, state): mu, sigma_sq = self.actor.predict(np.reshape(state, [1, self.state_seq_length,self.state_size])) # sigma_sq = np.log(np.exp(sigma_sq + 1)) epsilon = np.random.randn(self.action_size) # action = norm.rvs(loc=mu, scale=sigma_sq,size=1) action = mu + np.sqrt(sigma_sq) * epsilon action = np.clip(action, -2, 2) return action # update policy network every episode def train_model(self, state, action, reward, next_state, done): self.exp_replay.append((state, action, reward, next_state, done)) (state, action, reward, next_state, done) = random.sample(self.exp_replay,1)[0] target = np.zeros((1, self.value_size)) advantages = np.zeros((1, self.action_size)) value = self.critic.predict(state)[0] next_value = self.critic.predict(next_state)[0] if done: advantages[0] = reward - value target[0][0] = reward else: advantages[0] = reward + self.discount_factor * (next_value) - value target[0][0] = reward + self.discount_factor * next_value self.optimize_actor([state, action, advantages]) self.optimize_critic([state, target]) # + _uuid="64e1f7c58461094e08bdebaab5cc4056c20cbf92" state_size = 100 state_seq_length = 100 action_size = 100 # + _uuid="dd9f5a8d682bf91161b7b8967c885219bc0b98a4" import time # + _uuid="2544132fb716e5567889e80bfbd34be798e89224" def run_experiment(): start = time.time() env = TradeEnv() agent = A2CAgent(state_size, state_seq_length, action_size) epochs = 10 reward_hist = [] print('Setup: {:.4f}'.format(time.time() - start)) for e in range(epochs): start = time.time() state = env.reset() state = np.reshape(state, [1,state_seq_length, state_size]) done = False total_reward = 0 print('Game Start: {:.4f}'.format(time.time() - start)) while not done: start = time.time() action = agent.get_action(state) print('Get Action: {:.4f}'.format(time.time() - start)) start = time.time() next_state, reward, done, info = env.step(action) print('Step: {:.4f}'.format(time.time() - start)) 
start = time.time() next_state = np.reshape(next_state, [1,state_seq_length, state_size]) agent.train_model(state, action, reward, next_state, done) print('Train: {:.4f}'.format(time.time() - start)) total_reward += reward state = next_state print(total_reward) reward_hist.append(total_reward) return reward_hist # + _uuid="d41408802fcb5b48eae55409638e1f1bbdd0b19e" # Running training takes very long #import matplotlib.pyplot as plt #reward_hist = run_experiment() #plt.plot(reward_hist) # + _uuid="1b803c08f8165217248959fff7b4bea51adb524f" # + _uuid="8bc07bf631b369dc9539e58530c4a04aecb24ecd" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ![](./assets/ITDP_PrestigeLogo.png) # # ## Limpieza Inicial de Datos, Unión de Tablas y Formateo de Fechas # # Se requiere de un conjunto de datos limpio, es decir, que no se presenten entradas nulas o NaN’s, que el formato de fechas sea el mismo para todos los valores y que los atributos estén en forma de columnas i.e. que cada variable meteorológica o de contaminantes estén en una columna separada, entre otras propiedades que se describirán a continuación. # # El proceso de limpieza de datos consiste en hacer un conjunto de manipulaciones a la tablas para generar un dataset óptimo. A continuación, se muestra el diagrama de la limpieza de datos realizada: # # # # # __Pasos y descripción general del notebook__ # # 1. __Descarga de Tablas:__ Los datos de contaminantes y meteorología son descargados por separado. Los datos usados para el entrenamiento son verificados de manera manual por la SEDEMA. En este notebook vamos a juntar los archivos de contaminación y meoteorología de cada año en un solo archivo, también se eliminan las entradas vacías. # # # 2. __Convertir a tabla con variables por columna__: Se pasa de tener una columna que indica el atributo medido y otro el valor de la medición a una columna por cada atribute que indica el valor de la medición. # # # 3. __Formateo de Fechas:__ se arreglará el formato de fechas al formato **YY/m/d hh:mm** con horas de 0 a 23 y también vamos a generar columnas de información temporal con parámetros como hora, día y mes para cada medición # # # - __Datos recibidos:__ [Meteorología,](http://www.aire.cdmx.gob.mx/default.php?opc='aKBhnmI='&opcion=Zw==) # [Contamianción](http://www.aire.cdmx.gob.mx/default.php?opc='aKBhnmI='&opcion=Zg==) # - __Responsable:__ # - __Contacto:__ # # ___ # + import numpy as np import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') import pandas as pd import matplotlib import seaborn as sns from datetime import datetime, timedelta from datetime import timedelta import datetime as dt from tqdm import tqdm # - # ## Presentación de los datos utilizados # # El Sistema de Monitoreo Atmosférico de la Ciudad de México presenta de forma horaria desde el año 1986 las condiciones meteorológicas y de contaminación que describen la atmósfera de la zona metropolitana. La información descrita se presenta de dos formas: puede ser una base de datos revisada por expertos de la SEDEMA para descartar mediciones de fuentes atípicas de contaminación tales como incendios o desperfectos en las estaciones de monitoreo, o no revisada, obteniendo directamente la medición como se midió en la estación de monitoreo. 
Esta falta de consistencia de la información puede generar valores erróneos en el pronóstico generado, limitando el desempeño de los modelos. Por este motivo, los datos de monitoreo usados para el entrenamiento de los modelos son los datos revisados por los expertos. # # Para el entrenamiento de los modelos los datos usados abarcan el periodo de enero 2014 hasta diciembre 2018, accesibles en la sección de datos Horarios por contaminante y de datos horarios de Meteorología. Las variables meteorológicas y de contaminación utilizadas para el desarrollo del modelo se muestra en la siguiente tabla: # # # # Las estaciones en operación se distribuyen en el área metropolitana, concentrándose en la zona central de la CDMX. En la siguiente figura se muestra la posición geográfica de las estaciones. # # # # Como parte del proceso de la generación de los modelos de pronóstico de contamianción, es necesario realizar un conjunto de operaciones a los datos obtenidos de la página de [Monitoreo de Calidad del Aire de la Ciudad de México](http://www.aire.cdmx.gob.mx/default.php). Como se mencionó en el archivo de metodología, los datos a usar son los datos verificados por los expertos de la SEDEMA. Los datos para meteorología y contaminanción se pueden obtener acontinuación: # # - [Meteorología](http://www.aire.cdmx.gob.mx/default.php?opc='aKBhnmI='&opcion=Zw==) # - [Contamianción](http://www.aire.cdmx.gob.mx/default.php?opc='aKBhnmI='&opcion=Zg==) # # # # Juntaremos los dataframes con una PivotTable y las agruparemos por el momento de la medición # #### Definimos tres funciones para formatear el formato de las fechas: # # Convertir el formato de 1 a 24 horas al formato de 0 a 23 horas. Por defecto python trabaja con el formato de 0 a 23 horas, es conveniente trabajar en este formato debido a que muchas de las funciones implementadas en python u otras librerias suponen que este es el formato de las fechas. # # El formato original de las fechas, es d/m/YY h:m y el formato despuésde aplicar la función es YY/m/d hh:mm. def time_converter(x): x0 = x.split(" ")[0] x0 = x0.split("/") x1 = x.split(" ")[1] if x1[:].endswith("24:00"): # Notemos que cuando la hora es 24, es necesario convertirla a 00 sin embargo también es necesario # esta fecha se desplazará al siguiente día, es deicr, si se tiene '19-05-01 24:00', al terminar con "24", # se sustituirá por '19-05-02 00:00' # Considerando esto, se aplica lasiguiente función: fecha_0 = x0[2]+"-"+x0[1]+"-"+x0[0]+" 23:00" date = datetime.strptime(fecha_0, "%Y-%m-%d %H:%M") new_time = date + timedelta(hours=1) return new_time.strftime('%Y-%m-%d %H:%M') else: return x0[2]+"-"+x0[1]+"-"+x0[0]+" "+ x1[:] # Definamos el año a limpiar: target = "meteorologia" target = "contaminantes" anio = "2020" # A continuación se define una función que realizará los siguientes procesos: # # - Leer el archivo de contaminantes o meteorología del año seleccionado. # - Eliminar las entradas vacías # - Hacer una tabla pivote para pasar de una columna con el nombre del atributo y su valor a una columna por atributo. # - Convertir la columna fecha de d/m/yy hh:mm a yy/mm/dd hh:mm y pasar del formato de horas de 1..24 a 0...23. 
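# + [markdown]
# A quick, illustrative sanity check of `time_converter`; the example dates below are
# hypothetical and chosen only to exercise the 1..24 to 0..23 conversion described above.

# +
print(time_converter("01/05/2019 24:00"))  # 24:00 rolls over to the next day: 2019-05-02 00:00
print(time_converter("01/05/2019 13:00"))  # other hours are only reordered:   2019-05-01 13:00
# -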
# # + active="" # met_2018 = pd.read_csv(str('./datasets/' + target + "/" + target + "_" + str(anio) + ".csv"),header=10) # leer archivo # + active="" # if "cve_station" in met_2018.columns or "cve_parameter" in met_2018.columns: # met_2018.rename(columns={'cve_station': 'id_station', 'cve_parameter': 'id_parameter'}, inplace=True) # checar nombre columbas # + active="" # met_2018['hora'] = met_2018['date'].astype(str).str[-5:-3].astype(int) # met_2018 = met_2018.dropna(subset=["value"]).reset_index(drop=True)#PM25 # + active="" # sns.distplot(met_2018["hora"], bins=24, kde=False, rug=True); # + active="" # for hora in tqdm(range(1,25)): # valores por estación # estaciones.loc[:,hora] = met_2018[met_2018["hora"]==hora]["id_station"].value_counts().values # - # ### Juntemos este proceso en una función, se aplicará a meteorología y contaminantes def formateo_csv(target, anio): #leemos el archivo met_2018 = pd.read_csv(str('./data/raw/' + target + "/" + target + "_" + str(anio) + ".csv"),header=10) if "cve_station" in met_2018.columns or "cve_parameter" in met_2018.columns: met_2018.rename(columns={'cve_station': 'id_station', 'cve_parameter': 'id_parameter'}, inplace=True) #eliminamos las entradas vacías met_2018 = met_2018.dropna(how='any') met_2018 = met_2018.drop(['unit'], axis=1) met_ACO = met_2018 met_ACO = met_ACO.reset_index(drop=False) met_ACO = met_ACO[["date","id_station","id_parameter","value"]] # nos quedamos con las siguientes columnas: #Hacer una tabla pivote para pasar de una columna con el nombre del atributo # y su valor a una columna por atributo. met_ACO_hour = pd.pivot_table(met_ACO,index=["date","id_station"],columns=["id_parameter"]) met_ACO_hour = met_ACO_hour.reset_index(drop=False) met_ACO_hour.columns = met_ACO_hour.columns.droplevel() met_ACO_hour["id_station"] = met_ACO_hour.iloc[:,1] met_ACO_hour["date"] = met_ACO_hour.iloc[:,0] #eliminamos la columna vacía met_ACO_hour = met_ACO_hour.drop([""],axis=1) # Convertir la columna fecha de d/m/yy hh:mm a yy/mm/dd hh:mm y pasar del formato de horas de 1..24 a 0...23. met_ACO_hour['date'] = met_ACO_hour.apply(lambda row: time_converter(row['date']), axis=1) met_ACO_hour['date'] = pd.to_datetime(met_ACO_hour['date'], format='%Y-%m-%d %H:%M') met_ACO_hour = met_ACO_hour.rename(columns={'date': 'fecha'}) return(met_ACO_hour) # Ejecutamos la función anterior para los datos de metereología y contaminantes: target1 = "meteorologia" anio = "2019" meteorologia = formateo_csv(target1, anio) target2 = "contaminantes" contaminacion = formateo_csv(target2, anio) meteorologia.head() # ### Merge de Dataframes # Juntamos los dataframes generados, así podremos trabajar con ambos archivos a la vez: data_hour_merge = pd.merge(meteorologia, contaminacion, on=["fecha","id_station"],how="outer") # Generamos 3 columnas con la información temporal del momento en que se tomó la medición # en la columna de fecha se elimina la información de hora y minuto. 
data_hour_merge['hora'] = data_hour_merge['fecha'].astype(str).str[10:13].astype(int) data_hour_merge['dia'] = data_hour_merge['fecha'].astype(str).str[8:10].astype(int) data_hour_merge['mes'] = data_hour_merge['fecha'].astype(str).str[5:7].astype(int) # data_hour_merge['fecha'] = data_hour_merge['fecha'].astype(str).str[0:10] data_hour_merge.head(5) # ## Una vez que corroboramos el correcto funcionamiento del proceso, podemos juntar los pasos anteriores en una función y así agilizar el proceso de la limpieza de cada año: def data_parser(anio_1): print(anio_1) target1 = "meteorologia" meteorologia = formateo_csv(target1, anio_1) target2 = "contaminantes" contaminacion = formateo_csv(target2, anio_1) data_hour_merge = pd.merge(meteorologia, contaminacion, on=["fecha","id_station"],how="outer") data_hour_merge['hora'] = data_hour_merge['fecha'].astype(str).str[10:13] data_hour_merge['dia'] = data_hour_merge['fecha'].astype(str).str[8:10] data_hour_merge['mes'] = data_hour_merge['fecha'].astype(str).str[5:7] # data_hour_merge['fecha'] = data_hour_merge['fecha'].astype(str).str[0:10] data_hour_merge.to_csv(str("./data/processed/met_cont_hora/cont_hora"+ str(anio_1) +".csv"), index=False) # Corremos la función desde el 2012 al 2019: [data_parser(str(anio)) for anio in range(2019,2021)] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Minimum number of Coins for Change # # ### Using Dynamic Programming # # If solution exists -> Dynamic Programming will find it and it will be able to find OPTIMAL Solution, compared to Recusion and Greedy. # + coins = [2, 3, 5, 10] w = 15 coins_mat = [] # - A = [[None for i in range(16)] for j in coins] A[0] = [1 if j % coins[0] == 0 else 0 for j in range(len(A[0]))] A[0] A[0][15] # + def get_num_of_coins(coins, w): A = [[None for i in range(w + 1)] for j in coins] A[0] = [1 if j % coins[0] == 0 else 0 for j in range(len(A[0]))] num_coins = len(A) for i in range(1, num_coins): for j in range(w+1): if coins[i] > j: A[i][j] = A[i - 1][j] else: A[i][j] = A[i - 1][j] + A[i][j - coins[i]] print(A) return A[len(A) - 1][len(A[0]) - 1] def exclude_include_sum(coin, sum): # + coins = [2, 3, 5, 10] w = 15 min_coins = get_num_of_coins(coins, w) print(min_coins) coins = [1, 5, 10] w = 7 min_coins = get_num_of_coins(coins, w) print(min_coins) coins = [2, 3, 5] w = 6 min_coins = get_num_of_coins(coins, w) print(min_coins) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Practice import numpy as np import torch import torch.nn as nn def calc_iou(box1, box2): """ Param: box1, box2 Return: Intersection over Union of two boxes Each boxes should be like [x1, y1, x2, y2], and x1 <= x2, y1 <= y2 """ (ax1, ay1, ax2, ay2) = box1 (bx1, by1, bx2, by2) = box2 assert (ax1 <= ax2) & (ay1 <= ay2) assert (bx1 <= bx2) & (by1 <= by2) cx1 = max(ax1, bx1) cy1 = max(ay1, by1) cx2 = min(ax2, bx2) cy2 = min(ay2, by2) assert (cx1 <= cx2) & (cy1 <= cy2) a_area = (ax2 - ax1) * (ay2 - ay1) b_area = (bx2 - bx1) * (by2 - by1) c_area = (cx2 - cx1) * (cy2 - cy1) union_area = a_area + b_area - c_area intersecion_area = c_area smooth = 1e-6 # print(intersecion_area) return (intersecion_area + smooth) / (union_area + smooth) def calc_iou_many_to_one(boxes, 
ground_truth): """ Param: boxes: shape([N, 4]), ground_truth: shape([4]) Return: IoU of boxes over on ground truth box Each boxes should be like [x1, y1, x2, y2], and x1 <= x2, y1 <= y2 """ (gt_x1, gt_y1, gt_x2, gt_y2) = ground_truth boxes_x1s = boxes[:, 0] boxes_y1s = boxes[:, 1] boxes_x2s = boxes[:, 2] boxes_y2s = boxes[:, 3] assert (gt_x1 <= gt_x2) & (gt_y1 <= gt_y2) assert (boxes_x1s <= boxes_x2s).all() & (boxes_y1s <= boxes_y2s).all() inter_x1s = torch.max(boxes_x1s, gt_x1) inter_y1s = torch.max(boxes_y1s, gt_y1) inter_x2s = torch.min(boxes_x2s, gt_x2) inter_y2s = torch.min(boxes_y2s, gt_y2) assert (inter_x1s <= inter_x2s).all() & (inter_y1s <= inter_y2s).all() gt_area = (gt_x2 - gt_x1) * (gt_y2 - gt_y1) box_areas = (boxes_x2s - boxes_x1s) * (boxes_y2s - boxes_y1s) intersect_areas = (inter_x2s - inter_x1s) * (inter_y2s - inter_y1s) union_area = gt_area + box_areas - intersect_areas intersecion_area = intersect_areas smooth = 1e-6 return (intersecion_area + smooth) / (union_area + smooth) def determine_anchor_label(anchors, ground_truth, pos_threshold=0.7, neg_threshold=0.3): """ Determine a label of anchors. Params: Anchors: array of [x1, y1, x2, y2]. shape([N, 4]) ground_truth: ground truth bbox. shape([4]) pos_threshold: IoU Threshold used to determine positive anchor neg_threshold: IoU Threshold used to determine negative anchor Return: Tensor of integer values denoting the label of anchors. shape([N]) Positive: 1 Negative: 0 Neither positive or negative: -1 """ num_of_anchors = anchors.shape[0] labels = -torch.ones(num_of_anchors) ious = calc_iou_many_to_one(anchors, ground_truth) print(ious) # First positive condition: Highest IoU with ground truth max_index = torch.argmax(ious).item() labels[max_index] = 1 # Second positive condition: Higher than pos_threshold or equal wihh pos_threshold IoU with ground truth positive_flags = torch.ge(ious, pos_threshold) labels[positive_flags] = 1 # Negative condition: Among non-positive anchors, less than neg_threshold IoU negative_flags = torch.eq(labels, -1) & torch.lt(ious, neg_threshold) labels[negative_flags] = 0 return labels def rpn_loss_cls(preds, labels): """ Classification loss of RPN Layer. Log loss between probability that anchor is object and binary ground truth label Params: preds: Probabilities that anchors are objects labels: Labels that anchors are objects """ assert torch.all(torch.ge(preds, 0.0)) assert torch.all(torch.le(preds, 1.0)) binary_cross_entropy = nn.BCELoss(reduction='none') output = binary_cross_entropy(preds, labels) return output def smooth_L1(ti, ti_star): """ smooth L1 function: 0.5 * (x^2) if abs(x) < 1 abs(x) - 0.5 otherwise Params: ti: shape([N]) ti_star: shape([N]) Return: score: shape([N]) """ abs_sub = torch.abs(ti - ti_star) smaller_than_1 = torch.where(abs_sub < 1) greater_than_1 = torch.where(abs_sub >= 1) abs_sub[smaller_than_1] = torch.pow(abs_sub[smaller_than_1], 2) / 2 abs_sub[greater_than_1] = abs_sub[greater_than_1] - 0.5 return abs_sub def rpn_loss_reg(pred_boxes, anchor_boxes, gt_box): # TODO: gt_box? or gt_boxes? """ Regression loss of RPN Layer. Params: pred_boxes: Predicted boxes by RPN layer. shape([N, 4]) anchor_boxes: Anchor boxes used by the predictions. shape([N, 4]) gt_box: Ground truth box of image. 
shape([4]) """ x = pred_boxes[:, 0] y = pred_boxes[:, 1] w = pred_boxes[:, 2] - pred_boxes[:, 0] h = pred_boxes[:, 3] - pred_boxes[:, 1] x_a = anchor_boxes[:, 0] y_a = anchor_boxes[:, 1] w_a = anchor_boxes[:, 2] - anchor_boxes[:, 0] h_a = anchor_boxes[:, 3] - anchor_boxes[:, 1] x_star = gt_box[0] y_star = gt_box[1] w_star = gt_box[2] - gt_box[0] h_star = gt_box[3] - gt_box[1] t_x = (x - x_a) / w_a t_y = (y - y_a) / h_a t_w = torch.log(w/w_a) t_h = torch.log(h/h_a) t_x_star = (x_star - x_a) / w_a t_y_star = (y_star - y_a) / h_a t_w_star = torch.log(w_star/w_a) t_h_star = torch.log(h_star/h_a) losses = torch.zeros(anchor_boxes.shape[0]) losses += smooth_L1(t_x, t_x_star) losses += smooth_L1(t_y, t_y_star) losses += smooth_L1(t_w, t_w_star) losses += smooth_L1(t_h, t_h_star) return losses def multitask_loss(pred_probs, pred_boxes, anchor_boxes, gt_box, anchor_num=9, balance=10): """ L(p, t) = (1/N_cls) * sigma{L_cls(pi, pi_star)} + lambda * (1/N_reg) * sigma{pi_star * L_reg(ti, ti_star)} """ # Positive: 1 Negative: 0 Neither positive or negative: -1 labels = determine_anchor_label(anchor_boxes, gt_box) # Only get positive and negative anchors valid_indices = torch.where(labels > -0.5) valid_labels = labels[valid_indices] # pi_star valid_pred_probs = pred_probs[valid_indices] valid_pred_boxes = pred_boxes[valid_indices] valid_anchor_boxes = anchor_boxes[valid_indices] cls_loss = rpn_loss_cls(valid_pred_probs, valid_labels) reg_loss = rpn_loss_reg(valid_pred_boxes, valid_anchor_boxes, gt_box) positive_reg_loss = reg_loss * valid_labels n_cls = anchor_boxes.shape[0] / anchor_num n_reg = anchor_boxes.shape[0] cls_term = torch.sum(cls_loss) / n_cls reg_term = torch.sum(positive_reg_loss) / n_reg * balance return cls_term + reg_term len(pred_boxes) # + gt_box = torch.tensor([2.0, 2.0, 5.0, 5.0]) pred_boxes = torch.tensor([ [2.0, 2.0, 5.0, 4.5], [1.0, 4.0, 3.0, 6.0], [2.0, 2.0, 5.0, 6.0], [2.0, 2.0, 4.0, 4.0], [3.0, 3.0, 4.0, 4.0] ]) anchor_boxes = torch.tensor([ [2.0, 2.0, 5.0, 4.3], [1.0, 4.0, 3.0, 6.0], [4.0, 4.0, 6.0, 6.0], [2.0, 2.0, 5.0, 4.5], [3.0, 3.0, 4.0, 4.0] ]) # - pred_probs = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.5]) multitask_loss(pred_probs, pred_boxes, anchor_boxes, gt_box, anchor_num=2, balance=3) rpn_loss_reg(pred_boxes, anchor_boxes, gt_box) # + x = pred_boxes[:, 0] y = pred_boxes[:, 1] w = pred_boxes[:, 2] - pred_boxes[:, 0] h = pred_boxes[:, 3] - pred_boxes[:, 1] x_a = anchor_boxes[:, 0] y_a = anchor_boxes[:, 1] w_a = anchor_boxes[:, 2] - anchor_boxes[:, 0] h_a = anchor_boxes[:, 3] - anchor_boxes[:, 1] x_star = gt_box[0] y_star = gt_box[1] w_star = gt_box[2] - gt_box[0] h_star = gt_box[3] - gt_box[1] t_x = (x - x_a) / w_a t_y = (y - y_a) / h_a t_w = torch.log(w/w_a) t_h = torch.log(h/h_a) t_x_star = (x_star - x_a) / w_a t_y_star = (y_star - y_a) / h_a t_w_star = torch.log(w_star/w_a) t_h_star = torch.log(h_star/h_a) # - (x - x_a) / w_a x_a labels = determine_anchor_label(many_boxes, ground_truth) labels ret = calc_iou_many_to_one(many_boxes, ground_truth) ret labels = -torch.ones(many_boxes.shape[0]) labels a = torch.randn(5) print(a) max_index = torch.argmax(a).item() max_index labels torch.where(labels <= 0) labels[torch.where(labels <= 0)] torch.abs(labels) import torch.nn as nn loss = nn.BCELoss() in_tensor = torch.randn(3) target = torch.empty(3).random_(2) in_tensor target output = loss(in_tensor, target) output # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: 'Python 3.9.7 64-bit (''data_analysis'': conda)' # language: python # name: python3 # --- # ### Slicing, Broadcasting, and Array Types # + # You have data for a variety of professions, and you want to increase the salaries of just the data scientists by 10 percent every other year. # - import numpy as np # Data: yearly salary in $(1000) [2025, 2026, 2027] dataScientist = [130,132,137] productManager = [127,140,145] designer = [118,118,127] softwareEngineer = [129,131,137] employees = np.array([dataScientist, productManager, designer, softwareEngineer]) employees # + # One liner employees[0,::2] = employees[0,::2] * 1.1 # Result print(employees) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Race Time Non-parametric Regression # Spring 2019 AME-70790 Final Project # # () # # Reference: ., & . (1994). Kernel smoothing. Chapman and Hall/CRC. # ___ # As an application of the local polynomial regression techniques we have learned, we will look at the prediction of record [Scottish hill race times](http://www.scottishhillracing.co.uk/). # Hill running, also known as fell running, is an off-road running competition over upland country where the gradient climbed is a significant component of the race difficulty. # Such races occur all around the *hilly* terrain of Scotland. # The goal of this data will be to predict record race times for male and female competitors given the total distance of the race and the elevation climbed. # The data consists of 77 different races from 2000 and was provided by the [Rdatasets library](https://vincentarelbundock.github.io/Rdatasets/datasets.html). # The data has already been preprocessed into a numpy array that contains race length (miles), elevation (feet), record male time (hours) and record female time (hours) for each entry. # Null timing values are represented by a negative number. import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.cm as cm from scipy.stats import multivariate_normal from IPython.display import display from mpl_toolkits.mplot3d import Axes3D # + plt.close("all") np.random.seed(123) # Load data and create pandas datafram data = np.load('data/race_2000_data.npy') data_m = data[:,:3] data_f = np.delete(np.delete(data, np.where(data[:,3] < 0), 0), 2, 1) # Remove null values in female data dataset = pd.DataFrame({'Length (mi)':data[:,0].astype(int),'Elevation (ft)':data[:,1],\ 'Time Male (hr)':data[:,2], 'Time Female (hr)':data[:,3]}) with pd.option_context('display.max_rows',7): display(dataset) # - # For simplicity we will use multivariate local linear regression with the loss function: # $$\mathcal{L} = \sum_{i=1}^{n}\left(t_{i}-\beta_{0} + \beta_{1}(x_{i}-x) + \beta_{2}(y_{i}-y)\right)^{2}K_{h}(x_{i}-x, y_{i}-y),$$ # where the inputs are the length of the race and elevation change. # We will once again use the standard multivariate Gaussian kernel: # $$K(\textbf{x})=\mathcal{N}(\textbf{x}|0,H), \quad H=\left[\begin{matrix} 5 & 0 \\ 0 & 5e5 \end{matrix}\right].$$ # Large bandwidths had to be used due to the sparseness of the data. 
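# At each prediction point $(x, y)$ the loss above is an ordinary weighted least-squares problem,
# so the local coefficients have the standard closed form
# $$\hat{\beta} = \left(X^{T}WX\right)^{-1}X^{T}W\,\textbf{t},$$
# where $X$ holds a column of ones together with the centered inputs $(x_{i}-x,\; y_{i}-y)$,
# $W$ is the diagonal matrix of kernel weights $K_{h}(x_{i}-x, y_{i}-y)$, and $\textbf{t}$ is the
# vector of record times. This is what the cell below evaluates at every grid point, with a small
# ridge term added to $X^{T}WX$ so the inversion stays well conditioned.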
# + H = np.array([[5,0],[0,5e5]]) # Kernel Bandwidth # Predict n_pred = 50 length = np.linspace(2, 10, n_pred) elev = np.linspace(500, 3000, n_pred) x1, x2 = np.meshgrid(length, elev) x_pred = np.stack([np.reshape(x1, (-1)), np.reshape(x2, (-1))], axis=0) betas_m = np.zeros((3,x_pred.shape[1])) betas_f = np.zeros((3,x_pred.shape[1])) # Predict male times for i, x0 in enumerate(x_pred.T): # Male time X = np.stack([np.ones(data_m.shape[0]), data_m[:,0]-x0[0], data_m[:,1]-x0[1]], axis=1) W = np.diag(multivariate_normal.pdf(data_m[:,:2]-x0, cov=H)) # Add a little jitter to the matrix to ensure invertability betas_m[:,i] = np.linalg.inv(X.T.dot(W).dot(X) + 1e-7*np.eye(X.shape[1])).dot(X.T).dot(W).dot(data_m[:,2]) # Female time X = np.stack([np.ones(data_f.shape[0]), data_f[:,0]-x0[0], data_f[:,1]-x0[1]], axis=1) W = np.diag(multivariate_normal.pdf(data_f[:,:2]-x0, cov=H)) betas_f[:,i] = np.linalg.inv(X.T.dot(W).dot(X) + 1e-7*np.eye(X.shape[1])).dot(X.T).dot(W).dot(data_f[:,2]) # + fig = plt.figure(figsize=(10,5)) ax = [] ax.append(plt.subplot2grid((1, 2), (0, 0), projection='3d')) ax.append(plt.subplot2grid((1, 2), (0, 1), projection='3d')) zpred = np.reshape(betas_m[0], (n_pred, n_pred)) ax[0].plot_surface(x1, x2, zpred, cmap=cm.inferno, alpha=1.0) ax[0].scatter(data_m[:,0], data_m[:,1], data_m[:,2], s=10, c='k', marker='x', alpha=1.0, label='Training Data') zpred = np.reshape(betas_f[0], (n_pred, n_pred)) ax[1].plot_surface(x1, x2, zpred, cmap=cm.inferno, alpha=1.0) ax[1].scatter(data_f[:,0], data_f[:,1], data_f[:,2], s=10, c='k', marker='x', alpha=1.0) for ax0 in ax: ax0.view_init(elev=25., azim=150) ax0.set_xlabel('Length (mi)') ax0.set_ylabel('Elevation (ft)') ax0.set_zlabel('Time (hr)') ax0.set_xlim([1,10]) ax0.set_ylim([500, 3000]) ax0.set_zlim([0, 1.6]) ax[0].set_title('Male') ax[0].legend() ax[1].set_title('Female') # Save and show figure plt.savefig('figs/09_scottish_hill_races.pdf') plt.savefig('figs/09_scottish_hill_races.png') plt.tight_layout() plt.show() # - # (Left to right) The male time regression model prediction with the training points and the female time regression model with training points. # Since a large bandwidth had to be used, the results are very smooth however the predicted trend is what is expected. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Матричные факторизации # В данной работе вам предстоит познакомиться с практической стороной матричных разложений. # Работа поделена на 4 задания: # 1. Вам необходимо реализовать SVD разложения используя SGD на explicit данных # 2. Вам необходимо реализовать матричное разложения используя ALS на implicit данных # 3. Вам необходимо реализовать матричное разложения используя BPR(pair-wise loss) на implicit данных # 4. 
Вам необходимо реализовать матричное разложения используя WARP(list-wise loss) на implicit данных # # + import implicit import pandas as pd import numpy as np import scipy.sparse as sp from tqdm import trange from lightfm.datasets import fetch_movielens # - # В данной работе мы будем работать с explicit датасетом movieLens, в котором представленны пары user_id movie_id и rating выставленный пользователем фильму # # Скачать датасет можно по ссылке https://grouplens.org/datasets/movielens/1m/ ratings = pd.read_csv('data/ml-1m/ratings.dat', delimiter='::', header=None, names=['user_id', 'movie_id', 'rating', 'timestamp'], usecols=['user_id', 'movie_id', 'rating'], engine='python') movie_info = pd.read_csv('data/ml-1m/movies.dat', delimiter='::', header=None, names=['movie_id', 'name', 'category'], engine='python') ratings['user_id'] -= 1 ratings['movie_id'] -= 1 movie_info['movie_id'] -= 1 # Explicit данные ratings.head(10) # Для того, чтобы преобразовать текущий датасет в Implicit, давайте считать что позитивная оценка это оценка >=4 implicit_ratings = ratings.loc[(ratings['rating'] >= 4)] implicit_ratings.head(10) # Удобнее работать с sparse матричками, давайте преобразуем DataFrame в CSR матрицы users = implicit_ratings["user_id"] movies = implicit_ratings["movie_id"] user_item = sp.coo_matrix((np.ones_like(users), (users, movies))) user_item_t_csr = user_item.T.tocsr() user_item_csr = user_item.tocsr() # В качестве примера воспользуемся ALS разложением из библиотеки implicit # # Зададим размерность латентного пространства равным 64, это же определяет размер user/item эмбедингов model = implicit.als.AlternatingLeastSquares(factors=64, iterations=100, calculate_training_loss=True) # В качестве loss здесь всеми любимый RMSE model.fit(user_item_t_csr) # Построим похожие фильмы по 1 movie_id = Истории игрушек movie_info.head(5) get_similars = lambda item_id, model : pd.concat([movie_info[movie_info["movie_id"] == x[0]] for x in model.similar_items(item_id)], axis=0) # Как мы видим, симилары действительно оказались симиларами. # # Качество симиларов часто является хорошим способом проверить качество алгоритмов. # # P.S. Если хочется поглубже разобраться в том как разные алгоритмы формируют разные латентные пространства, рекомендую загружать полученные вектора в tensorBoard и смотреть на сформированное пространство get_similars(0, model) # Давайте теперь построим рекомендации для юзеров # # Как мы видим юзеру нравится фантастика, значит и в рекомендациях ожидаем увидеть фантастику get_user_history = lambda user_id, dataset : pd.concat([movie_info[movie_info["movie_id"] == x] for x in dataset[dataset["user_id"] == user_id]["movie_id"]], axis=0) get_user_history(3, implicit_ratings) # Получилось! # # Мы действительно порекомендовали пользователю фантастику и боевики, более того встречаются продолжения тех фильмов, которые он высоко оценил get_recommendations = lambda user_id, model, dataset : pd.concat([movie_info[movie_info["movie_id"] == x[0]] for x in model.recommend(user_id, dataset)], axis=0) get_recommendations(3, model, user_item_csr) # Теперь ваша очередь реализовать самые популярные алгоритмы матричных разложений # # Что будет оцениваться: # 1. Корректность алгоритма # 2. Качество получившихся симиларов # 3. 
Качество итоговых рекомендаций для юзера # Base MF class # + class MF: def __init__(self, size, factors, reg_param, max_iter, user_label='user_id', item_label='movie_id', target_label='rating'): self.users_sz = size[0] self.items_sz = size[1] self.factors = factors self.max_iter = max_iter self.reg_param = reg_param self.users_m = np.random.uniform(0.0, 1 / np.sqrt(factors), (self.users_sz, factors)) self.items_m = np.random.uniform(0.0, 1 / np.sqrt(factors), (self.items_sz, factors)) self.user_label = user_label self.item_label = item_label self.target_label = target_label def similar_items(self, item_id, amount=10): distances = np.linalg.norm(self.items_m - self.items_m[item_id], axis=1) return list(zip(np.argsort(distances), list(sorted(distances))))[:amount] def recommend(self, user_id, data, amount=10): ratings = self.users_m[user_id] @ self.items_m.T non_zeros = data[user_id].nonzero()[1] return list(filter(lambda x: x[0] not in non_zeros, zip(np.argsort(ratings), list(sorted(ratings)))))[-amount:] # - # ### Задание 1. Не использую готовые решения, реализовать SVD разложение используя SGD на explicit данных # + class SVD(MF): def __init__(self, size, factors, learning_rate, reg_param, max_iter, user_bias_param=1e-6, item_bias_param=1e-6, user_label='user_id', item_label='movie_id', target_label='rating'): super().__init__(size, factors, reg_param, max_iter, user_label, item_label, target_label) self.learning_rate = learning_rate self.user_bias_param = user_bias_param self.item_bias_param = item_bias_param def fit(self, train): self.users_bias = np.zeros(self.users_sz) self.items_bias = np.zeros(self.items_sz) self.mu = train[self.target_label].mean() with trange(self.max_iter, desc='Learning...', leave=True) as t: for i in t: user, item, rating = train.iloc[np.random.randint(len(train))][:] error = (self.users_m[user, :] @ self.items_m[item, :] + self.users_bias[user] + self.items_bias[item] + self.mu) - rating self.users_bias[user] -= self.learning_rate * (error + self.user_bias_param * self.users_bias[user]) self.items_bias[item] -= self.learning_rate * (error + self.item_bias_param * self.items_bias[item]) self.users_m[user, :] -= self.learning_rate * (error * self.items_m[item, :] + self.reg_param * self.users_m[user, :]) self.items_m[item, :] -= self.learning_rate * (error * self.users_m[user, :] + self.reg_param * self.items_m[item, :]) if i % 10000 == 0: self.A = (self.users_m @ self.items_m.T) + self.users_bias[:,np.newaxis] + self.items_bias + self.mu measured = self.A[train[self.user_label], train[self.item_label]] rmse = np.sqrt(np.mean((measured - train[self.target_label].to_numpy())**2)) t.set_description("RMSE: " + str(rmse)) t.refresh() self.A = (self.users_m @ self.items_m.T) + self.users_bias[:, np.newaxis] + self.items_bias + self.mu # + tags=[] svd = SVD((np.max(ratings['user_id']) + 1, np.max(ratings['movie_id']) + 1), factors=64, learning_rate=1e-2, reg_param=1e-5, max_iter=int(1e7)) svd.fit(ratings) # - get_similars(0, svd) users = ratings["user_id"] movies = ratings["movie_id"] explicit_user_item_csr = sp.coo_matrix((np.ones_like(users), (users, movies))).tocsr() get_user_history(3, ratings) get_recommendations(3, svd, explicit_user_item_csr) # ### Задание 2. 
Не использую готовые решения, реализовать матричное разложение используя ALS на implicit данных # + class ALS(MF): def fit(self, train): train_arr = train.toarray() rows, cols = train.nonzero() reg_matrix = self.reg_param * sp.identity(self.factors) identity_items = sp.identity(self.items_sz) identity_users = sp.identity(self.users_sz) with trange(self.max_iter, desc='Learning...', leave=True) as t: self.A = self.users_m @ self.items_m.T measured = self.A[rows, cols] rmse = np.sqrt(np.mean((measured - train_arr[rows, cols])**2)) t.set_description("RMSE: " + str(rmse)) t.refresh() for i in t: items_t = self.items_m.T item_x_itemt = items_t @ self.items_m for user in range(self.users_sz): cur_user_ratings = train_arr[user, :] confidence_m = sp.diags(10 * cur_user_ratings) Cu = items_t @ sp.csr_matrix.dot(confidence_m, self.items_m) self.users_m[user, :] = np.linalg.inv(item_x_itemt + Cu + reg_matrix) @ items_t @ (confidence_m + identity_items) @ cur_user_ratings users_t = self.users_m.T user_x_usert = users_t @ self.users_m for item in range(self.items_sz): cur_item_users = train_arr[:, item] confidence_m = sp.diags(10 * cur_item_users) Ci = users_t @ sp.csr_matrix.dot(confidence_m, self.users_m) self.items_m[item, :] = np.linalg.inv(user_x_usert + Ci + reg_matrix) @ users_t @ (confidence_m + identity_users) @ cur_item_users self.A = self.users_m @ self.items_m.T measured = self.A[rows, cols] rmse = np.sqrt(np.mean((measured - train_arr[rows, cols])**2)) t.set_description("RMSE: " + str(rmse)) t.refresh() # + als = ALS(user_item_csr.shape, factors=64, reg_param=1e-2, max_iter=5) als.fit(user_item_csr) # - get_similars(0, als) get_user_history(3, implicit_ratings) get_recommendations(3, als, user_item_csr) # ### Задание 3. Не использую готовые решения, реализовать матричное разложение BPR на implicit данных # + from collections import defaultdict from numpy.random import choice, randint class BPR(MF): def __init__(self, size, factors, learning_rate, reg_param, max_iter, user_label='user_id', item_label='movie_id', target_label='rating'): super().__init__(size, factors, reg_param, max_iter, user_label, item_label, target_label) self.learning_rate = learning_rate def fit(self, train): train_arr = train.toarray() positives = defaultdict(list) rows, cols = train.nonzero() for i, j in zip(rows, cols): positives[i].append(j) with trange(self.max_iter, desc='Learning...', leave=True) as t: for i in t: for user in range(self.users_sz): for i in positives[user]: j = self.get_j(user, positives) rating = (self.users_m[user, :] @ self.items_m[i, :].T) - (self.users_m[user, :] @ self.items_m[j, :].T) exponent = np.exp(-rating) dsigmoid = exponent / (1 + exponent) self.users_m[user, :] = self.users_m[user, :] + self.learning_rate * (dsigmoid * (self.items_m[i, :] - self.items_m[j, :]) + self.reg_param * self.users_m[user, :]) self.items_m[i, :] = self.items_m[i, :] + self.learning_rate * (dsigmoid * self.users_m[user, :] + self.reg_param * self.items_m[i, :]) self.items_m[j, :] = self.items_m[j, :] + self.learning_rate * (-dsigmoid * self.users_m[user, :] + self.reg_param * self.items_m[j, :]) self.A = self.users_m @ self.items_m.T measured = self.A[rows, cols] rmse = np.sqrt(np.mean((measured - train_arr[rows, cols])**2)) t.set_description("RMSE: " + str(rmse)) t.refresh() self.A = self.users_m @ self.items_m.T def get_j(self, user, positives): j = randint(self.items_sz) while j in positives[user]: j = randint(self.items_sz) return j # + bpr = BPR(user_item_csr.shape, factors=64, learning_rate=1e-2, 
reg_param=1e-5, max_iter=150) bpr.fit(user_item_csr) # - get_similars(0, bpr) get_recommendations(3, bpr, user_item_csr) # ### Задание 4. Не использую готовые решения, реализовать матричное разложение WARP на implicit данных class WARP(BPR): def __init__(self, size, factors, learning_rate, reg_param, max_iter, max_sampled, user_label='user_id', item_label='movie_id', target_label='rating'): super().__init__(size, factors, learning_rate, reg_param, max_iter, user_label, item_label, target_label) self.max_sampled = max_sampled def get_j(self, user, positives): j = randint(self.items_sz) highest_rating = self.users_m[user, :] @ self.items_m[j, :].T for _ in range(self.max_sampled): candidate = randint(self.items_sz) while candidate in positives[user]: candidate = randint(self.items_sz) candidate_rating = self.users_m[user, :] @ self.items_m[candidate, :].T if highest_rating < candidate_rating: highest_rating = candidate_rating j = candidate return j # + warp = WARP(user_item_csr.shape, factors=64, learning_rate=1e-3, reg_param=1e-3, max_iter=40, max_sampled=50) warp.fit(user_item_csr) # - get_similars(0, warp) get_recommendations(3, warp, user_item_csr) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.9.7 64-bit (''tfGpuEnv'': conda)' # name: python3 # --- # # The COVIDNetX challenge # drawing # # The following is a classification challenge using the [COVID-X dataset](https://github.com/lindawangg/COVID-Net/blob/master/docs/COVIDx.md). # The goal is to predict whether a person has COVID-19 or not based on chest X-RAY images. # # There are two different categories: `positive` and `negative`. `positive` means a person has COVID-19, `negative` means a person # has not COVID-19. # # The metric we use is F1 (https://en.wikipedia.org/wiki/F1_score). The goal is to maximize F1. # # The data contains images with their associated labels. 
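# An illustrative sketch of the evaluation metric: scikit-learn's `f1_score` with
# `pos_label='positive'` computes this score for binary string labels. The `y_true`/`y_pred`
# arrays below are hypothetical, and scikit-learn is assumed to be available even though it is
# not imported elsewhere in this notebook.

# +
from sklearn.metrics import f1_score

y_true = ['positive', 'negative', 'positive', 'negative']  # ground-truth labels
y_pred = ['positive', 'negative', 'negative', 'negative']  # model predictions
print(f1_score(y_true, y_pred, pos_label='positive'))      # 2/3 here; higher is better
# -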
# ## EDA import numpy as np import pandas as pd import matplotlib.pyplot as plt from PIL import Image import seaborn as sns # ### Load Data # + data_dir = 'data/' # data_dir = 'data_subset/' df = pd.read_csv(data_dir+'train.csv') test_df = pd.read_csv(data_dir+'submission_valid.csv') # - (df.label.value_counts())/len(df) sns.countplot(data=df, x='label'); # ### Add "img_shape" Column df['img_shape'] = df.image.apply(lambda img: Image.open(data_dir+'train/'+img).size) df.sample(10) # ### Image Shapes fig = plt.figure(figsize=(15,4), dpi=150) sns.countplot(data=df.sample(frac=0.1, random_state=42), x='img_shape'); plt.xticks(rotation=90); df.img_shape.value_counts()/len(df) # **NOTE**: 87% images have shape: 1024x1024 # #### Covid "positive" Image Shapes pos = df[df.label=='positive'] fig = plt.figure(figsize=(15,4), dpi=150) sns.countplot(data=pos.sample(frac=0.1, random_state=42), x='img_shape'); plt.xticks(rotation=90); pos.img_shape.value_counts()/len(pos) pos_non_1024x1024 = pos[pos.img_shape!=(1024,1024)] print(f"% of covid positive images which are not of 1024x1024 shape: {len(pos_non_1024x1024)/len(pos)}") # **NOTE**: 99% X-ray images of covid positive cases are _not_ of 1024x1024 shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py27] # language: python # name: conda-env-py27-py # --- # + import os import sys import numpy as np import matplotlib.pyplot as plt import matplotlib.patches as patches import seaborn as sns from skimage import io from skimage.util import view_as_windows from keras.utils import np_utils from keras.optimizers import SGD from keras.callbacks import CSVLogger, ModelCheckpoint from keras.layers import Dense from PIL import Image, ImageDraw sys.path.append('/home/tanuj/Workspace/power-grid-detection') # %matplotlib inline Image.MAX_IMAGE_PIXELS = None # + import config from utils.model.helpers import get_model_from_json from utils.img.helpers import sliding_window from utils.dataset.helpers import get_image_collection from utils.img.collection import ImageCollection from utils.dataset.data_generator import DataGenerator # - lr = 0.01 batch_size = 32 batch_size=256 n_epochs=500 input_shape=(140, 140, 1) name = 'cnn_140_1_thr_dil_ero_lr_%f_conv_freeze' % lr model = get_model_from_json('cnn_140_1_thr_dil_ero_lr_0.100000_final.json') model.load_weights('/home/tanuj/Workspace/power-grid-detection/training/cnn_140_1_thr_dil_ero_lr_0.100000_training_weights_best.hdf5') model for layer in model.layers: print(layer) if not isinstance(layer, Dense): layer.trainable = False model.summary() # + optimizer = SGD(lr=lr) print('compiling model...') model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy']) print('done.') # - csv_logger = CSVLogger('%s_training.log' % name) best_model_checkpointer = ModelCheckpoint(filepath=("./%s_training_weights_best.hdf5" % name), verbose=1, save_best_only=True) print('Initializing data generators...') train_data_gen = DataGenerator(dataset_file=config.train_data_file, batch_size=batch_size) validation_data_gen = DataGenerator(dataset_file=config.validation_data_file, batch_size=batch_size) test_data_gen = DataGenerator(dataset_file=config.test_data_file, batch_size=batch_size) print('done.') print('Fitting model...') history = model.fit_generator(train_data_gen, nb_epoch=n_epochs, samples_per_epoch=train_data_gen.n_batches * batch_size, validation_data=validation_data_gen, 
nb_val_samples=validation_data_gen.n_samples, verbose=1, callbacks=[csv_logger, best_model_checkpointer]) print('done.') # + print('Evaluating model...') score = model.evaluate_generator(test_data_gen, val_samples=test_data_gen.n_samples) print('done.') print('Test score:', score[0]) print('Test accuracy:', score[1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] run_control={"frozen": false, "read_only": false} # # Identify user's affiliation with IBM # Author: # Last modified: 2017-05-28 # + [markdown] run_control={"frozen": false, "read_only": false} # # Roadmap # 1. Tag native tweets for keyword 'ibm' in 'text' field (multiprocessing) # 2. Aggreate number of tweets and number of IBM tweets for each user # 3. Identify affiliation based on the proportion of IBM tweets # + [markdown] run_control={"frozen": false, "read_only": false} # # Steps # + run_control={"frozen": false, "read_only": false} """ Initialization """ ''' Data analysis modules: pandas, matplotlib, numpy, and etc ''' # %matplotlib inline # %config InlineBackend.figure_format = 'retina' # render double resolution plot output for Retina screens import matplotlib.pyplot as plt import pandas as pd import numpy as np ''' Standard modules, MongoDB modules ''' import os, sys, json, datetime, pickle, multiprocessing, logging from pprint import pprint import pymongo from pymongo import IndexModel, ASCENDING, DESCENDING ''' Custom tool modules ''' import mongodb # module for setting up connection with (local) MongoDB database import multiprocessing_workers # module for splitting workloads between processes import utilities # module for various custom utility functions from config import * # import all global configuration variables ''' Misc ''' NB_NAME = '20170504-user_affiliation_2' # + [markdown] heading_collapsed=true run_control={"frozen": false, "read_only": false} # ## Tag native tweets for keyword 'ibm' in 'text' field (multiprocessing) # + hidden=true run_control={"frozen": true, "read_only": true} # %%time # """ # Use multiprocessing to tag the 'text' field of native tweets for keyword 'ibm' # Worker function 'worker_tag_kws_in_tw' is wrapped in multiprocessing_workers.py # """ # if 0 == 1: # multiprocessing.log_to_stderr(logging.DEBUG) # # procedure_name = 'tag_{}_text_ibm'.format(TW_NT_COL) # kw_lst = ['ibm'] # process_n = multiprocessing.cpu_count() - 1 # set processes number to CPU numbers minus 1 # suffix = 'json' # inter_files = utilities.gen_inter_filenames_list(NB_NAME, procedure_name, process_n, suffix) # # jobs = [] # for batch_i in range(process_n): # p = multiprocessing.Process(target=multiprocessing_workers.worker_tag_kws_in_tw, # args=(DB_NAME, TW_NT_COL, batch_i, process_n, inter_files[batch_i], kw_lst), # name='Process-{}/{}'.format(batch_i, process_n)) # jobs.append(p) # # for job in jobs: # job.start() # # for job in jobs: # job.join() # + hidden=true run_control={"frozen": true, "read_only": true} # %%time # """ # Build a new collection for keyword 'ibm' tag on 'text' field of native tweets # Register in config: # TW_NT_TXT_IBM_TAG_COL # """ # if 0 == 1: # procedure_name = 'tag_{}_text_ibm'.format(TW_NT_COL) # kw_lst = ['ibm'] # process_n = multiprocessing.cpu_count() - 1 # set processes number to CPU numbers minus 1 # suffix = 'json' # inter_files = utilities.gen_inter_filenames_list(NB_NAME, procedure_name, process_n, 
suffix) # # tw_nt_txt_ibm_tag_col = mongodb.initialize(db_name=DB_NAME, collection_name=TW_NT_TXT_IBM_TAG_COL) # for inter_file in inter_files: # print('Reading {}...'.format(inter_file), end=' ') # lines = open(inter_file).readlines() # parsed_jsons = [json.loads(line) for line in lines] # print('Importing into {}.{}...'.format(DB_NAME, TW_NT_TXT_IBM_TAG_COL)) # tw_nt_txt_ibm_tag_col.insert_many(parsed_jsons) # print('Done') # + hidden=true run_control={"frozen": false, "read_only": false} """ Build compound index 'user_id'-'id' on new colleciton """ if 0 == 1: tw_nt_txt_ibm_tag_col = mongodb.initialize(db_name=DB_NAME, collection_name=TW_NT_TXT_IBM_TAG_COL) index_lst = [('user_id', pymongo.ASCENDING), ('id', pymongo.ASCENDING)] print('Building compond index {}...'.format(index_lst)) tw_nt_txt_ibm_tag_col.create_index(keys=index_lst) print('Done') # + hidden=true run_control={"frozen": false, "read_only": false} """ Check how many native tweets are tagged as having 'ibm' keyword in 'text' field """ if 0 == 1: tw_nt_txt_ibm_tag_col = mongodb.initialize(db_name=DB_NAME, collection_name=TW_NT_TXT_IBM_TAG_COL) tw_nt_num = tw_nt_txt_ibm_tag_col.count() tw_nt_txt_ibm_num = tw_nt_txt_ibm_tag_col.count(filter={'X_0': {'$eq': True}}) print('Native tweets tagged as having "ibm" keyword: {} ({:.2%} out of total) '.format(tw_nt_txt_ibm_num, tw_nt_txt_ibm_num / tw_nt_num)) # + [markdown] run_control={"frozen": false, "read_only": false} # ## Aggreate number of tweets and number of IBM tweets for each user # + [markdown] run_control={"frozen": false, "read_only": false} # Aggregate information into list of dictionaries. Example: # ```{'user_id': id of the user, # 'tweets_num': number of tweets (native) authored by the user, # 'ibm_tweets_num': number of IBM tweets (native) authored by the user}``` # + run_control={"frozen": false, "read_only": false} """ Aggreate 'tweets_num' and 'ibm_tweets_num' for each user and write to tmp pickle """ user_tw_ibmtw_num_lst_pkl = os.path.join(TMP_DIR, '{}-{}'.format(NB_NAME, 'user_tw_ibmtw_num.lst.pkl')) if 0 == 1: print("Building pickle from database...") data_lst = [] group_dict = {'$group': {'_id': '$user_id', 'tweets_num': {'$sum': 1}, 'ibm_tweets_num': {'$sum': {'$cond': ['$X_0', 1, 0]}}}} # if 'X_0' is true, add count for IBM tweets project_dict = {'$project': {'_id': 0, 'user_id': '$_id', 'tweets_num': 1, 'ibm_tweets_num': 1}} ppl_lst = [group_dict, project_dict] print('Aggreating on collection "{}"'.format(TW_NT_TXT_IBM_TAG_COL)) tw_nt_ibm_tag_col = mongodb.initialize(db_name=DB_NAME, collection_name=TW_NT_TXT_IBM_TAG_COL) cursor = tw_nt_ibm_tag_col.aggregate(pipeline=ppl_lst, allowDiskUse=True) # Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in. 
for doc in cursor: data_lst.append(doc) with open(user_tw_ibmtw_num_lst_pkl, 'wb') as f: pickle.dump(data_lst, f) print('Done') # + [markdown] run_control={"frozen": false, "read_only": false} # ## Identify affiliation based on the proportion of IBM tweets # + [markdown] run_control={"frozen": false, "read_only": false} # ### Plot the proportion of IBM tweets against number of tweets # + run_control={"frozen": false, "read_only": false} """ Load pickled data """ if 1 == 1: data_lst = [] with open(user_tw_ibmtw_num_lst_pkl, 'rb') as f: data_lst = pickle.load(f) df_user_nt = pd.DataFrame(data=data_lst, columns=['user_id', 'tweets_num', 'ibm_tweets_num'], # explicitly pass in names of columns dtype=int) # compute the proportion of IBM tweets df_user_nt['ibm_tweets_prop'] = df_user_nt['ibm_tweets_num'] / df_user_nt['tweets_num'] # + run_control={"frozen": false, "read_only": false} pcls = np.arange(start=0.1, stop=1, step=0.1) df_user_nt.describe() # + run_control={"frozen": false, "read_only": false} df_user_nt[df_user_nt['ibm_tweets_num'] >= 1].describe() # + run_control={"frozen": false, "read_only": false} """ Plot the proportion of IBM tweets against number of tweets """ ibm_tweets_prop_fig = os.path.join(FIG_DIR, 'ibm_tweets_prop.png') if 1 == 1: ''' Prepare data ''' # set minimum tweets_num so ibm_tweets_prop would be meaningful df_tmp = df_user_nt[df_user_nt['tweets_num'] >= 3] X = df_tmp['tweets_num'] Y = df_tmp['ibm_tweets_prop'] ''' Plot ''' fig, ax = plt.subplots(figsize=(12, 8)) title_fontdict = {'weight': 'bold', 'size': 'x-large'} # ax.set_title('Proportion of IBM tweets', fontdict=title_fontdict) label_fontdict = {'size': 'large'} ax.set_xlabel('Number of tweets', fontdict=label_fontdict) ax.set_ylabel('Proportion of IBM tweets', fontdict=label_fontdict) # plt.axvline(x=6, color='g') # plt.axhline(y=0.0475, color='g') # plt.axhline(y=0.2, color='g') plt.xscale('log') # plt.yscale('log') plt.plot(X, Y ,'b.') ''' Save figure ''' plt.savefig(ibm_tweets_prop_fig, dpi=200) # + [markdown] run_control={"frozen": false, "read_only": false} # ### Identifying affiliation with different combinations of (ibm_tweets_num, ibm_tweets_prop) # + [markdown] run_control={"frozen": false, "read_only": false} # We are looking for users: # 1. ~~with _reasonable_ 'tweets_num' value. So the 'ibm_tweets_prop' value would be meaningful.~~ ?? # 2. with _high_ 'ibm_tweets_prop' value, which means the user frequently mentions keyword 'ibm' in her tweets # + run_control={"frozen": false, "read_only": false} """ Load in IBM/non-IBM users identified by 'descritption' field. 
Check the differences of 'ibm_tweets_num' and 'ibm_tweets_prop' between them """ if 1 == 1: ''' Load pickled data ''' user_nt_ibm_desc_ids_lst = [] with open(USER_NT_IBM_DESC_IDS_LST_PKL, 'rb') as f: user_nt_ibm_desc_ids_lst = pickle.load(f) print('List length: "{}"'.format(len(user_nt_ibm_desc_ids_lst))) user_nt_nonibm_desc_ids_lst = [] with open(USER_NT_NONIBM_DESC_IDS_LST_PKL, 'rb') as f: user_nt_nonibm_desc_ids_lst = pickle.load(f) print('List length: "{}"'.format(len(user_nt_nonibm_desc_ids_lst))) ''' Subset users ''' user_nt_ibm_desc_cond = df_user_nt['user_id'].isin(user_nt_ibm_desc_ids_lst) user_nt_nonibm_desc_cond = df_user_nt['user_id'].isin(user_nt_nonibm_desc_ids_lst) df_user_nt_ibm_desc = df_user_nt[user_nt_ibm_desc_cond] df_user_nt_nonibm_desc = df_user_nt[user_nt_nonibm_desc_cond] # + [markdown] run_control={"frozen": false, "read_only": false} # Check distribution the of 'tweets_num', 'ibm_tweets_num', and 'ibm_tweets_prop' # + run_control={"frozen": false, "read_only": false} """ IBM users identified based on 'description' field """ df_user_nt_ibm_desc[['tweets_num', 'ibm_tweets_num', 'ibm_tweets_prop']].describe() # np.arange(start=0.1, stop=1, step=0.1) # + run_control={"frozen": false, "read_only": false} """ non-IBM users identified based on 'description' field """ df_user_nt_nonibm_desc[['tweets_num', 'ibm_tweets_num', 'ibm_tweets_prop']].describe() # np.arange(start=0.1, stop=1, step=0.1) # + [markdown] run_control={"frozen": false, "read_only": false} # We can see that, for IBM/non-IBM users identified based on their 'description' field # 1. IBM users have slightly higher 'tweets_num'. # 2. IBM users have significantly higher 'ibm_tweets_num' and 'ibm_tweets_prop' # # This means 'ibm_tweets_num'/'ibm_tweets_prop' can be used to identify IBM users as a supplement of the 'description' field based method # + [markdown] run_control={"frozen": false, "read_only": false} # ### Different conditions # + [markdown] run_control={"frozen": false, "read_only": false} # We set three identification conditions: # 1. Cond1: mimimal requirements ('ibm_tweets_num' >= 1) # 2. Cond2: IBM tweets prop >= first quartile ('ibm_tweets_prop' >= 0.063002) # 3. Cond3: IBM tweets prop >= median ('ibm_tweets_prop' >= 0.562500) # # **Note**: To make 'ibm_tweets_prop' value meaningful and not affected by large amount of users with few tweets, we set 'tweets_num' >= 3, which is the median of 'tweets_num' of IBM users from 'description' field based method. 
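# + [markdown] run_control={"frozen": false, "read_only": false}
# The cutoffs used in the cells below (tweets_num >= 3, ibm_tweets_prop >= 0.063002 and >= 0.562500) are copied by hand from the describe() output above. A minimal sketch of how they could be derived programmatically instead, assuming `df_user_nt_ibm_desc` from the cells above is still in memory and that 0.063002 / 0.562500 are the first quartile and median of its 'ibm_tweets_prop' column (as that describe() output suggests):

# + run_control={"frozen": false, "read_only": false}
"""
Derive the thresholds from the 'description' field based IBM users (sketch)
"""
if 1 == 1:
    min_tweets_num = int(df_user_nt_ibm_desc['tweets_num'].median())      # expected: 3
    prop_q1 = df_user_nt_ibm_desc['ibm_tweets_prop'].quantile(0.25)       # expected: ~0.063002
    prop_median = df_user_nt_ibm_desc['ibm_tweets_prop'].quantile(0.50)   # expected: ~0.562500
    print('tweets_num floor (median): {}'.format(min_tweets_num))
    print('ibm_tweets_prop Q1: {:.6f}, median: {:.6f}'.format(prop_q1, prop_median))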
# + run_control={"frozen": false, "read_only": false} df_user_nt[df_user_nt['tweets_num'] >= 3].describe(np.arange(0.1, 1, 0.1)) # + run_control={"frozen": false, "read_only": false} """ Users with >=3 tweets and >=1 IBM tweets """ user_nt_ibm_tw_prop_cond1 = (df_user_nt['ibm_tweets_num'] >= 1) & (df_user_nt['tweets_num'] >= 3) df_user_nt_ibm_tw_prop_1 = df_user_nt[user_nt_ibm_tw_prop_cond1] df_user_nt_ibm_tw_prop_1.describe() # + run_control={"frozen": true, "read_only": true} # """ # Make pickle # Register in config: # USER_NT_IBM_TW_PROP_1_IDS_LST_PKL # USER_NT_NONIBM_TW_PROP_1_IDS_LST_PKL # """ # if 1 == 1: # ibm_user_ids_lst = list(df_user_nt_ibm_tw_prop_1['user_id']) # # print('Dumping to pickle: {}...'.format(USER_NT_IBM_TW_PROP_1_IDS_LST_PKL)) # with open(USER_NT_IBM_TW_PROP_1_IDS_LST_PKL, 'wb') as f: # pickle.dump(ibm_user_ids_lst, f) # # df_rest_user = df_user_nt[~ user_nt_ibm_tw_prop_cond1] # rest_user_ids_lst = list(df_rest_user['user_id']) # print('Dumping to pickle: {}...'.format(USER_NT_NONIBM_TW_PROP_1_IDS_LST_PKL)) # with open(USER_NT_NONIBM_TW_PROP_1_IDS_LST_PKL, 'wb') as f: # pickle.dump(rest_user_ids_lst, f) # # print('Done') # + run_control={"frozen": false, "read_only": false} """ Users with >=3 tweets and IBM tweets proportion >= first quartile """ user_nt_ibm_tw_prop_cond2 = (df_user_nt['ibm_tweets_prop'] >= 0.063002) & (df_user_nt['tweets_num'] >= 3) # user_nt_ibm_tw_prop_cond2 = (df_user_nt['ibm_tweets_prop'] > 0.063002) & (df_user_nt['tweets_num'] > 1) df_user_nt_ibm_tw_prop_2 = df_user_nt[user_nt_ibm_tw_prop_cond2] df_user_nt_ibm_tw_prop_2.describe() # + run_control={"frozen": true, "read_only": true} # """ # Make pickle # Register in config: # USER_NT_IBM_TW_PROP_2_IDS_LST_PKL # USER_NT_NONIBM_TW_PROP_2_IDS_LST_PKL # """ # if 0 == 1: # ibm_user_ids_lst = list(df_user_nt_ibm_tw_prop_2['user_id']) # # print('Dumping to pickle: {}...'.format(USER_NT_IBM_TW_PROP_2_IDS_LST_PKL)) # with open(USER_NT_IBM_TW_PROP_2_IDS_LST_PKL, 'wb') as f: # pickle.dump(ibm_user_ids_lst, f) # # df_rest_user = df_user_nt[~ user_nt_ibm_tw_prop_cond2] # rest_user_ids_lst = list(df_rest_user['user_id']) # print('Dumping to pickle: {}...'.format(USER_NT_NONIBM_TW_PROP_2_IDS_LST_PKL)) # with open(USER_NT_NONIBM_TW_PROP_2_IDS_LST_PKL, 'wb') as f: # pickle.dump(rest_user_ids_lst, f) # # print('Done') # + run_control={"frozen": false, "read_only": false} """ Users with >=3 tweets and IBM tweets proportion >= median """ user_nt_ibm_tw_prop_cond3 = (df_user_nt['ibm_tweets_prop'] >= 0.562500) & (df_user_nt['tweets_num'] >= 3) df_user_nt_ibm_tw_prop_3 = df_user_nt[user_nt_ibm_tw_prop_cond3] df_user_nt_ibm_tw_prop_3.describe() # + run_control={"frozen": false, "read_only": false} """ Make pickle Register in config: USER_NT_IBM_TW_PROP_3_IDS_LST_PKL USER_NT_NONIBM_TW_PROP_3_IDS_LST_PKL """ if 0 == 1: ibm_user_ids_lst = list(df_user_nt_ibm_tw_prop_3['user_id']) print('Dumping to pickle: {}...'.format(USER_NT_IBM_TW_PROP_3_IDS_LST_PKL)) with open(USER_NT_IBM_TW_PROP_3_IDS_LST_PKL, 'wb') as f: pickle.dump(ibm_user_ids_lst, f) df_rest_user = df_user_nt[~ user_nt_ibm_tw_prop_cond3] rest_user_ids_lst = list(df_rest_user['user_id']) print('Dumping to pickle: {}...'.format(USER_NT_NONIBM_TW_PROP_3_IDS_LST_PKL)) with open(USER_NT_NONIBM_TW_PROP_3_IDS_LST_PKL, 'wb') as f: pickle.dump(rest_user_ids_lst, f) print('Done') # + [markdown] run_control={"frozen": false, "read_only": false} # ### Compare with 'description' field based method # + run_control={"frozen": false, "read_only": false} """ Check the overlap 
with 'description' field based method """ if 1 == 1: user_nt_ibm_desc_ids_lst = [] with open(USER_NT_IBM_DESC_IDS_LST_PKL, 'rb') as f: user_nt_ibm_desc_ids_lst = pickle.load(f) user_nt_ibm_tw_prop_1_ids_lst = [] with open(USER_NT_IBM_TW_PROP_1_IDS_LST_PKL, 'rb') as f: user_nt_ibm_tw_prop_1_ids_lst = pickle.load(f) user_nt_ibm_tw_prop_2_ids_lst = [] with open(USER_NT_IBM_TW_PROP_2_IDS_LST_PKL, 'rb') as f: user_nt_ibm_tw_prop_2_ids_lst = pickle.load(f) user_nt_ibm_tw_prop_3_ids_lst = [] with open(USER_NT_IBM_TW_PROP_3_IDS_LST_PKL, 'rb') as f: user_nt_ibm_tw_prop_3_ids_lst = pickle.load(f) user_nt_ibm_desc_ids_set = set(user_nt_ibm_desc_ids_lst) user_nt_ibm_tw_prop_1_ids_set = set(user_nt_ibm_tw_prop_1_ids_lst) user_nt_ibm_tw_prop_2_ids_set = set(user_nt_ibm_tw_prop_2_ids_lst) user_nt_ibm_tw_prop_3_ids_set = set(user_nt_ibm_tw_prop_3_ids_lst) common_num_1 = len(user_nt_ibm_desc_ids_set.intersection(user_nt_ibm_tw_prop_1_ids_set)) common_num_2 = len(user_nt_ibm_desc_ids_set.intersection(user_nt_ibm_tw_prop_2_ids_set)) common_num_3 = len(user_nt_ibm_desc_ids_set.intersection(user_nt_ibm_tw_prop_3_ids_set)) print('IBM users in M1: {}'.format(len(user_nt_ibm_desc_ids_lst))) print('IBM users in M2_1: {}; common IBM users with M1: {}'.format(len(user_nt_ibm_tw_prop_1_ids_lst), common_num_1)) print('IBM users in M2_2: {}; common IBM users with M1: {}'.format(len(user_nt_ibm_tw_prop_2_ids_lst), common_num_2)) print('IBM users in M2_3: {}; common IBM users with M1: {}'.format(len(user_nt_ibm_tw_prop_3_ids_lst), common_num_3)) # + run_control={"frozen": false, "read_only": false} """ Plot different choices of IBM tweets proportion affect the common users identified with 'description' field based method """ if 0 == 1: ''' Prepare data ''' ibm_tweets_prop_lst = np.arange(0.005, 0.995, 0.0025) cmm_ibm_user_pct_lst = [] total_users_pct_lst = [] data_lst = [] with open(user_tw_ibmtw_num_lst_pkl, 'rb') as f: data_lst = pickle.load(f) df_user_nt = pd.DataFrame(data=data_lst, columns=['user_id', 'tweets_num', 'ibm_tweets_num'], # explicitly pass in names of columns dtype=int) # compute the proportion of IBM tweets df_user_nt['ibm_tweets_prop'] = df_user_nt['ibm_tweets_num'] / df_user_nt['tweets_num'] user_nt_ibm_desc_ids_lst = [] with open(USER_NT_IBM_DESC_IDS_LST_PKL, 'rb') as f: user_nt_ibm_desc_ids_lst = pickle.load(f) user_nt_ibm_tw_prop_1_ids_lst = [] with open(USER_NT_IBM_TW_PROP_1_IDS_LST_PKL, 'rb') as f: user_nt_ibm_tw_prop_1_ids_lst = pickle.load(f) for ibm_tweets_prop in ibm_tweets_prop_lst: user_nt_ibm_tw_num = len(user_nt_ibm_tw_prop_1_ids_lst) select_cond = df_user_nt['ibm_tweets_prop'] > ibm_tweets_prop df_tmp = df_user_nt[select_cond] tmp_user_ids_set = set(df_tmp['user_id']) cmm_ibm_user_pct = len(set(user_nt_ibm_desc_ids_lst).intersection(tmp_user_ids_set)) / len(user_nt_ibm_desc_ids_lst) cmm_ibm_user_pct_lst.append(cmm_ibm_user_pct) row_n = df_tmp.shape[0] # number of users identified total_users_pct = row_n / user_nt_ibm_tw_num total_users_pct_lst.append(total_users_pct) ''' Plot ''' fig, ax1 = plt.subplots() fig.set_size_inches((12,8)) ax1.set_title('IBM tweets proportions') X = ibm_tweets_prop_lst Y1 = cmm_ibm_user_pct_lst ax1.plot(X, Y1, 'b-') ax1.set_ylabel("% of IBM users in common with method1", color='blue') ax1.set_ylim(0, 1) ax1.set_xticks(np.arange(0, 1.1, 0.1)) X_vals = ax1.get_xticks() ax1.set_xticklabels(['{:.1%}'.format(X_val) for X_val in X_vals]) ax1.grid() Y1_vals = ax1.get_yticks() ax1.set_yticklabels(['{:.1%}'.format(Y1_val) for Y1_val in Y1_vals]) ax2 = ax1.twinx() Y2 = 
total_users_pct_lst ax2.plot(X, Y2, 'g-') ax2.set_ylabel("% of IBM tweets users", color='g') # ax2.set_ylim(0, 0.1) Y2_vals = ax2.get_yticks() ax2.set_yticklabels(['{:.1%}'.format(Y2_val) for Y2_val in Y2_vals]) ''' Save figure ''' # + [markdown] heading_collapsed=true run_control={"frozen": false, "read_only": false} # ## Check accounts @Natasha_D_G and @jameskobielus # + [markdown] hidden=true run_control={"frozen": false, "read_only": false} # While by using method_1, we can identify all users have keyword 'ibm' in their description field as affiliated with IBM, method_1 is limited that not all users would explicitly express their affiliation with IBM in their description field. # So, the users identified by method_1 should be a subset of actual users affiliated with IBM. In other words, there are more users affiliated with IBM. # The idea we use here is that, if a person frequently mention keyword 'ibm' in tweets, he or she is highly likely affiliated with IBM. We use the proportion of tweets with keyword 'ibm' to identify these users. # + hidden=true run_control={"frozen": false, "read_only": false} if 1 == 1: screen_name_1 = 'Natasha_D_G' screen_name_2 = 'jameskobielus' id_1 = 39413322 id_2 = 14072398 user_nt_ibm_tw_prop_1_ids_lst = [] with open(USER_NT_IBM_TW_PROP_1_IDS_LST_PKL, 'rb') as f: user_nt_ibm_tw_prop_1_ids_lst = pickle.load(f) user_nt_ibm_tw_prop_2_ids_lst = [] with open(USER_NT_IBM_TW_PROP_2_IDS_LST_PKL, 'rb') as f: user_nt_ibm_tw_prop_2_ids_lst = pickle.load(f) user_nt_ibm_tw_prop_3_ids_lst = [] with open(USER_NT_IBM_TW_PROP_3_IDS_LST_PKL, 'rb') as f: user_nt_ibm_tw_prop_3_ids_lst = pickle.load(f) print('User {} exists in M2_1 IBM-user set? {}'.format(screen_name_1, id_1 in set(user_nt_ibm_tw_prop_1_ids_lst))) print('User {} exists in M2_2 IBM-user set? {}'.format(screen_name_1, id_1 in set(user_nt_ibm_tw_prop_2_ids_lst))) print('User {} exists in M2_3 IBM-user set? {}'.format(screen_name_1, id_1 in set(user_nt_ibm_tw_prop_3_ids_lst))) print('User {} exists in M2_1 IBM-user set? {}'.format(screen_name_2, id_2 in set(user_nt_ibm_tw_prop_1_ids_lst))) print('User {} exists in M2_2 IBM-user set? {}'.format(screen_name_2, id_2 in set(user_nt_ibm_tw_prop_2_ids_lst))) print('User {} exists in M2_3 IBM-user set? {}'.format(screen_name_2, id_2 in set(user_nt_ibm_tw_prop_3_ids_lst))) # + [markdown] hidden=true run_control={"frozen": false, "read_only": false} # Since @jameskobielus () is not identified as IBM-user by our method_2, we check the tweets we captured for him. # + hidden=true run_control={"frozen": false, "read_only": false} """ How many tweets we have in our database authored by @jameskobielus? """ tw_nt_txt_ibm_tag_col = mongodb.initialize(db_name=DB_NAME, collection_name=TW_NT_TXT_IBM_TAG_COL) jameskobielus_nt_num = tw_nt_txt_ibm_tag_col.count(filter={'user_id': id_2}) jameskobielus_ibm_nt_num = tw_nt_txt_ibm_tag_col.count(filter={'user_id': id_2, 'X_0': True}) print('{} native tweets we have for user {}'.format(jameskobielus_nt_num, screen_name_2)) print('{} native IBM tweets we have for user {}'.format(jameskobielus_ibm_nt_num, screen_name_2)) # + hidden=true run_control={"frozen": false, "read_only": false} """ Check the content of these tweets """ cursor = updated_col.find(filter={'user.id': id_2}, projection={'_id':0, 'id': 1, 'text': 1}) count = 0 for doc in cursor: count += 1 tweet_id = doc['id'] tweet_text = doc['text'] print('{}. 
({})'.format(count, tweet_id)) print(tweet_text) # + [markdown] hidden=true run_control={"frozen": false, "read_only": false} # After manully checking all his tweets, 9 tweets were found including 'ibm' keyword. # + [markdown] run_control={"frozen": false, "read_only": false} # # Notes # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Kudi Tutorial # First install the Kudi library following the instructions on the github page. # # The you need to create two Path instances with the output files: # # "sn2_h.dat" and sn2_ch3.dat" # # Once you have done that, you can plot different global and local properties along # the reaction path. To analyze the effect of the Methyl subsituent, plot the # energy profile and reaction force profile, which is defined as the negative derivative of # the reaction energy. # # Also plot the two key bond distances in the Sn2 reaction and the HOMO and LUMO orbitals. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # TensorFlow y Redes Neuronales # *Esta notebook fue creada originalmente como un blog post por [](http://relopezbriega.com.ar/) en [Matemáticas, Analisis de datos y Python](http://relopezbriega.github.io). El contenido esta bajo la licencia BSD.* # # ## Sobre TensoFlow # # [TensorFlow](https://www.tensorflow.org/) es una biblioteca open source desarrollada por Google que nos permite realizar cálculos numéricos usando diagramas de flujo de datos. Los nodos del [grafo](https://es.wikipedia.org/wiki/Grafo) representan operaciones matemáticas, mientras que los arcos del [grafo](https://es.wikipedia.org/wiki/Grafo) representan los arreglos de datos multidimensionales ([tensores](https://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial)) comunicados entre ellos. Esta arquitectura flexible nos permite realizar los cálculos en más de un CPU o GPU utilizando la misma API. # # ## ¿Qué es un diagrama de flujo de datos? # # Los diagramas de flujo de datos describen cálculos matemáticos con un [grafo](https://es.wikipedia.org/wiki/Grafo) de nodos y arcos. Los nodos normalmente implementan operaciones matemáticas, pero también pueden representar los puntos para alimentarse de datos, devolver resultados, o leer / escribir variables persistentes. Los arcos o aristas describen las relaciones de entrada / salida entre los nodos. Estos arcos están representados por los arreglos de datos multidimensionales o [tensores](https://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial). El flujo de los tensores a través del [grafo](https://es.wikipedia.org/wiki/Grafo) es de donde [TensorFlow](https://www.tensorflow.org/) recibe su nombre. Los nodos se asignan a los dispositivos computacionales y se ejecutan de forma asincrónica y en paralelo una vez que todos los [tensores](https://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial) en los arcos de entrada están disponibles. # # # ## Introducción a TensorFlow # # Para poder utilizar [TensorFlow](https://www.tensorflow.org/) primero es necesario entender cómo la librería: # # * Representa cálculos en forma de [grafos](https://es.wikipedia.org/wiki/Grafo). # * Ejecuta los [grafos](https://es.wikipedia.org/wiki/Grafo) en el contexto de Sesiones. 
# * Representa los datos como [tensores](https://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial). # * Mantiene el estado con variables. # * Se alimenta de datos y devuelve los resultados de cada operación. # # # ### Funcionamiento general # # [TensorFlow](https://www.tensorflow.org/) es un sistema de programación en el que representamos cálculos en forma de [grafos](https://es.wikipedia.org/wiki/Grafo). Los nodos en el [grafo](https://es.wikipedia.org/wiki/Grafo) se llaman *ops* (abreviatura de operaciones). Una *op* tiene cero o más [tensores](https://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial), realiza algún cálculo, y produce cero o más [tensores](https://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial). # # Un [grafo](https://es.wikipedia.org/wiki/Grafo) de [TensorFlow](https://www.tensorflow.org/) es una descripción de cálculos. Para calcular cualquier cosa dentro de [TensorFlow](https://www.tensorflow.org/), el [grafo](https://es.wikipedia.org/wiki/Grafo) debe ser lanzado dentro de una *sesión*. La *Sesión* coloca las operaciones del [grafo](https://es.wikipedia.org/wiki/Grafo) en los diferentes *dispositivos*, tales como CPU o GPU, y proporciona métodos para ejecutarlas. # # ### Creando un Grafo # # Para construir un [grafo](https://es.wikipedia.org/wiki/Grafo) simple, podemos comenzar con *ops* que no necesitan ningún dato de entrada, como son las *constantes* y luego le pasamos su salida a *ops* que realizan cálculos. # + # importamos la libreria import tensorflow as tf # importamos librerías adicionales import numpy as np import matplotlib.pyplot as plt import matplotlib.cm as cm import pandas as pd # %matplotlib inline # - # #### Constantes # # Podemos construir *ops* de *constantes* utilizando `constant`, su API es bastante simple: # # `constant(value, dtype=None, shape=None, name='Const')` # # Le debemos pasar un valor, el cual puede ser cualquier tipo de [tensor](https://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial) (un escalar, un vector, una matriz, etc) y luego opcionalmente le podemos pasar el tipo de datos, la forma y un nombre. # + # Creación de Constantes # El valor que retorna el constructor es el valor de la constante. # creamos constantes a=2 y b=3 a = tf.constant(2) b = tf.constant(3) # creamos matrices de 3x3 matriz1 = tf.constant([[1, 3, 2], [1, 0, 0], [1, 2, 2]]) matriz2 = tf.constant([[1, 0, 5], [7, 5, 0], [2, 1, 1]]) # + # Realizamos algunos cálculos con estas constantes suma = tf.add(a, b) mult = tf.mul(a, b) cubo_a = a**3 # suma de matrices suma_mat = tf.add(matriz1, matriz2) # producto de matrices mult_mat = tf.matmul(matriz1, matriz2) # - # #### Sesiones # # Ahora que ya definimos algunas *ops constantes* y algunos cálculos con ellas, debemos lanzar el [grafo](https://es.wikipedia.org/wiki/Grafo) dentro de una *Sesión*. Para realizar esto utilizamos el objeto `Session`. Este objeto va a encapsular el ambiente en el que las operaciones que definimos en el [grafo](https://es.wikipedia.org/wiki/Grafo) van a ser ejecutadas y los [tensores](https://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial) son evaluados. 
# + # Todo en TensorFlow ocurre dentro de una Sesión # creamos la sesion y realizamos algunas operaciones con las constantes # y lanzamos la sesión with tf.Session() as sess: print("Suma de las constantes: {}".format(sess.run(suma))) print("Multiplicación de las constantes: {}".format(sess.run(mult))) print("Constante elevada al cubo: {}".format(sess.run(cubo_a))) print("Suma de matrices: \n{}".format(sess.run(suma_mat))) print("Producto de matrices: \n{}".format(sess.run(mult_mat))) # - # Las *Sesiones* deben ser cerradas para liberar los recursos, por lo que es una buena práctica incluir la *Sesión* dentro de un bloque "with" que la cierra automáticamente cuando el bloque termina de ejecutar. # # Para ejecutar las *operaciones* y evaluar los [tensores](https://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial) utilizamos `Session.run()`. # #### Variables persistentes # # Las *Variables* mantienen el estado a través de las ejecuciones del [grafo](https://es.wikipedia.org/wiki/Grafo). Son buffers en memoria que contienen [tensores](https://es.wikipedia.org/wiki/C%C3%A1lculo_tensorial). Se deben inicializar explícitamente y se pueden guardar en el disco para luego restaurar su estado de necesitarlo. Se crean utilizando el objeto `Variable`. # + # Creamos una variable y la inicializamos con 0 estado = tf.Variable(0, name="contador") # Creamos la op que le va a sumar uno a la Variable `estado`. uno = tf.constant(1) nuevo_valor = tf.add(estado, uno) actualizar = tf.assign(estado, nuevo_valor) # Las Variables deben ser inicializadas por la operación `init` luego de # lanzar el grafo. Debemos agregar la op `init` a nuestro grafo. init = tf.initialize_all_variables() # Lanzamos la sesion y ejecutamos las operaciones with tf.Session() as sess: # Ejecutamos la op `init` sess.run(init) # imprimir el valor de la Variable estado. print(sess.run(estado)) # ejecutamos la op que va a actualizar a `estado`. for _ in range(3): sess.run(actualizar) print(sess.run(estado)) # - # #### Variables simbólicas (contenedores) # # Las *Variables simbólicas* o *Contenedores* nos van a permitir alimentar a las operaciones con los datos durante la ejecución del [grafo](https://es.wikipedia.org/wiki/Grafo). Estos *contenedores* deben ser alimentados antes de ser evaluados en la sesión, sino obtendremos un error. # + # Ejemplo variables simbólicas en los grafos # El valor que devuelve el constructor representa la salida de la # variable (la entrada de la variable se define en la sesion) # Creamos un contenedor del tipo float. Un tensor de 4x4. x = tf.placeholder(tf.float32, shape=(4, 4)) y = tf.matmul(x, x) with tf.Session() as sess: # print(sess.run(y)) # ERROR: va a fallar porque no alimentamos a x. rand_array = np.random.rand(4, 4) print(sess.run(y, feed_dict={x: rand_array})) # ahora esta correcto. # - # Ahora ya conocemos en líneas generales como es la mecánica detrás del funcionamiento de [TensorFlow](https://www.tensorflow.org/) y como deberíamos proceder para crear las *operaciones* dentro de los [grafos](https://es.wikipedia.org/wiki/Grafo). Veamos si podemos implementar modelos de neuronas simples con la ayuda de esta librería. # # ## Ejemplo de neuronas simples # # Una neurona simple, va a tener una forma similar al siguiente diagrama: # # # # En donde sus componentes son: # # * $x_1, x_2, \dots, x_n$: son los datos de entrada en la neurona, los cuales también puede ser que sean producto de la salida de otra neurona de la red. 
# # * $x_0$: Es la unidad de sesgo; un valor constante que se le suma a la entrada de la función de activación de la neurona. Generalmente tiene el valor 1. Este valor va a permitir cambiar la función de activación hacia la derecha o izquierda, otorgándole más flexibilidad para aprender a la neurona. # # * $w_0, w_1, w_2, \dots, w_n$: Los pesos relativos de cada entrada. Tener en cuenta que incluso la unidad de sesgo tiene un peso. # # * a: La salida de la neurona. Que va a ser calculada de la siguiente forma: # # $$a = f\left(\sum_{i=0}^n w_i \cdot x_i \right)$$ # # Aquí $f$ es la ***función de activación*** de la neurona. Esta función es la que le otorga tanta flexibilidad a las [redes neuronales](https://es.wikipedia.org/wiki/Red_neuronal_artificial) y le permite estimar complejas relaciones no lineales en los datos. Puede ser tanto una [función lineal](https://es.wikipedia.org/wiki/Funci%C3%B3n_lineal), una [función logística](https://es.wikipedia.org/wiki/Funci%C3%B3n_log%C3%ADstica), [hiperbólica](https://es.wikipedia.org/wiki/Funci%C3%B3n_hiperb%C3%B3lica), etc. # # Ahora que ya conocemos como se construye una neurona tratemos de implementar con este modelo las funciones lógicas **AND, OR y XNOR**. Podemos pensar a estas funciones como un problema de clasificación en el que la salida va a ser 0 o 1, de acuerdo a la combinación de las diferentes entradas. # # Las podemos modelar linealmente con la siguiente función de activación: # # $$f(x) = \left\{ # \begin{array}{ll} # 0 & \mbox{si } x < 0 \\ # 1 & \mbox{si } x \ge 0 # \end{array} # \right.$$ # # ### Neurona AND # # La neurona AND puede ser modelada con el siguiente esquema: # # # # La salida de esta neurona entonces va a ser: # # $$a = f(-1.5 + x_1 + x_2)$$ # # Veamos como la podemos implementar en [TensorFlow](https://www.tensorflow.org/). # + # Neurona con TensorFlow # Defino las entradas entradas = tf.placeholder("float", name='Entradas') datos = np.array([[0, 0] ,[1, 0] ,[0, 1] ,[1, 1]]) # Defino las salidas uno = lambda: tf.constant(1.0) cero = lambda: tf.constant(0.0) with tf.name_scope('Pesos'): # Definiendo pesos y sesgo pesos = tf.placeholder("float", name='Pesos') sesgo = tf.placeholder("float", name='Sesgo') with tf.name_scope('Activacion'): # Función de activación activacion = tf.reduce_sum(tf.add(tf.matmul(entradas, pesos), sesgo)) with tf.name_scope('Neurona'): # Defino la neurona def neurona(): return tf.case([(tf.less(activacion, 0.0), cero)], default=uno) # Salida a = neurona() # path de logs logs_path = '/tmp/tensorflow_logs/neurona' # - # Lanzar la Sesion with tf.Session() as sess: # para armar el grafo summary_writer = tf.train.SummaryWriter(logs_path, graph=sess.graph) # para armar tabla de verdad x_1 = [] x_2 = [] out = [] act = [] for i in range(len(datos)): t = datos[i].reshape(1, 2) salida, activ = sess.run([a, activacion], feed_dict={entradas: t, pesos:np.array([[1.],[1.]]), sesgo: -1.5}) # armar tabla de verdad en DataFrame x_1.append(t[0][0]) x_2.append(t[0][1]) out.append(salida) act.append(activ) tabla_info = np.array([x_1, x_2, act, out]).transpose() tabla = pd.DataFrame(tabla_info, columns=['x1', 'x2', 'f(x)', 'x1 AND x2']) tabla # Aquí podemos ver los datos de entrada de $x_1$ y $x_2$, el resultado de la función de activación y la decisión final que toma la neurona de acuerdo este último resultado. Como podemos ver en la tabla de verdad, la neurona nos dice que $x_1$ `and` $x_2$ solo es verdad cuando ambos son verdaderos, lo que es correcto. 
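# Como verificación rápida, e independiente de TensorFlow, podemos reproducir la misma tabla de verdad con NumPy puro (un boceto mínimo que usa los mismos pesos [1, 1] y el mismo sesgo -1.5 de la celda anterior).

# +
# Verificación de la neurona AND con NumPy
entradas_np = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
pesos_np = np.array([1.0, 1.0])
sesgo_np = -1.5

activacion_np = entradas_np.dot(pesos_np) + sesgo_np   # suma ponderada más el sesgo
salida_np = (activacion_np >= 0).astype(float)         # f(x): 0 si x < 0, 1 si x >= 0

pd.DataFrame({'x1': entradas_np[:, 0], 'x2': entradas_np[:, 1],
              'f(x)': activacion_np, 'x1 AND x2': salida_np})
# -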
# # ### Neurona OR # # La neurona OR puede ser modelada con el siguiente esquema: # # # # La salida de esta neurona entonces va a ser: # # $$a = f(-0.5 + x_1 + x_2)$$ # # Como se puede ver a simple vista, el modelo de esta neurona es similar a la de la neurona AND, con el único cambio en el valor del *sesgo*, por lo tanto solo tendríamos que cambiar ese valor en nuestro modelo anterior para crear esta nueva neurona. # Neurona OR, solo cambiamos el valor del sesgo with tf.Session() as sess: # para armar el grafo summary_writer = tf.train.SummaryWriter(logs_path, graph=sess.graph) # para armar tabla de verdad x_1 = [] x_2 = [] out = [] act = [] for i in range(len(datos)): t = datos[i].reshape(1, 2) salida, activ = sess.run([a, activacion], feed_dict={entradas: t, pesos:np.array([[1.],[1.]]), sesgo: -0.5}) # sesgo ahora -0.5 # armar tabla de verdad en DataFrame x_1.append(t[0][0]) x_2.append(t[0][1]) out.append(salida) act.append(activ) tabla_info = np.array([x_1, x_2, act, out]).transpose() tabla = pd.DataFrame(tabla_info, columns=['x1', 'x2', 'f(x)', 'x1 OR x2']) tabla # Como vemos, cambiando simplemente el peso del *sesgo*, convertimos a nuestra neurona AND en una neurona OR. Como muestra la tabla de verdad, el único caso en que $x_1$ `OR` $x_2$ es falso es cuando ambos son falsos. # # ### Red Neuronal XNOR # # El caso de la función XNOR, ya es más complicado y no puede modelarse utilizando una sola neurona como hicimos con los ejemplos anteriores. $x_1$ `XNOR` $x_2$ va a ser verdadero cuando ambos son verdaderos o ambos son falsos, para implementar esta función lógica debemos crear una red con dos capas, la primer capa tendrá dos neuronas cuya salida servirá de entrada para una nueva neurona que nos dará el resultado final. Esta red la podemos modelar de acuerdo al siguiente esquema: # # # # Veamos entonces si podemos implementar este modelo en [TensorFlow](https://www.tensorflow.org/). 
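# En concreto, con la misma función de activación escalón $f$ de antes, la descomposición que vamos a usar es:
#
# $$a_1 = f(0.5 - x_1 - x_2) \qquad a_2 = f(-1.5 + x_1 + x_2)$$
#
# $$a_3 = f(-0.5 + a_1 + a_2)$$
#
# donde $a_1$ se comporta como un NOR, $a_2$ como un AND y $a_3$ como un OR de las dos salidas anteriores, de modo que $a_3$ es verdadera justamente cuando $x_1$ y $x_2$ son ambas verdaderas o ambas falsas. Estos pesos y sesgos son los que definimos en los diccionarios `pesos` y `sesgo` de la celda siguiente.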
# + # Red Neuronal XNOR con TensorFlow # Defino las entradas entradas = tf.placeholder("float", name='Entradas') datos = np.array([[0, 0] ,[1, 0] ,[0, 1] ,[1, 1]]) # Defino las salidas uno = lambda: tf.constant(1.0) cero = lambda: tf.constant(0.0) with tf.name_scope('Pesos'): # Definiendo pesos y sesgo pesos = { 'a1': tf.constant([[-1.0], [-1.0]], name='peso_a1'), 'a2': tf.constant([[1.0], [1.0]], name='peso_a2'), 'a3': tf.constant([[1.0], [1.0]], name='peso_a3') } sesgo = { 'a1': tf.constant(0.5, name='sesgo_a1'), 'a2': tf.constant(-1.5, name='sesgo_a2'), 'a3': tf.constant(-0.5, name='sesgo_a3') } with tf.name_scope('Red_neuronal'): # Defino las capas def capa1(entradas, pesos, sesgo): # activacion a1 a1 = tf.reduce_sum(tf.add(tf.matmul(entradas, pesos['a1']), sesgo['a1'])) a1 = tf.case([(tf.less(a1, 0.0), cero)], default=uno) # activacion a2 a2 = tf.reduce_sum(tf.add(tf.matmul(entradas, pesos['a2']), sesgo['a2'])) a2 = tf.case([(tf.less(a2, 0.0), cero)], default=uno) return a1, a2 def capa2(entradas, pesos, sesgo): # activacion a3 a3 = tf.reduce_sum(tf.add(tf.matmul(entradas, pesos['a3']), sesgo['a3'])) a3 = tf.case([(tf.less(a3, 0.0), cero)], default=uno) return a3 # path de logs logs_path = '/tmp/tensorflow_logs/redXNOR' # - # Sesion red neuronal XNOR with tf.Session() as sess: # para armar el grafo summary_writer = tf.train.SummaryWriter(logs_path, graph=sess.graph) # para armar tabla de verdad x_1 = [] x_2 = [] out = [] for i in range(len(datos)): t = datos[i].reshape(1, 2) # obtenos resultados 1ra capa a1, a2 = sess.run(capa1(entradas, pesos, sesgo), feed_dict={entradas: t}) # pasamos resultados a la 2da capa ent_a3 = np.array([[a1, a2]]) salida = sess.run(capa2(ent_a3, pesos, sesgo)) # armar tabla de verdad en DataFrame x_1.append(t[0][0]) x_2.append(t[0][1]) out.append(salida) tabla_info = np.array([x_1, x_2, out]).transpose() tabla = pd.DataFrame(tabla_info, columns=['x1', 'x2', 'x1 XNOR x2']) tabla # Como vemos, la [red neuronal](https://es.wikipedia.org/wiki/Red_neuronal_artificial) nos da el resultado correcto para la función lógica XNOR, solo es verdadera si ambos valores son verdaderos, o ambos son falsos. # # Hasta aquí implementamos simples neuronas y les pasamos los valores de sus pesos y sesgo a mano; esto es sencillo para los ejemplos; pero en la vida real, si queremos utilizar [redes neuronales](https://es.wikipedia.org/wiki/Red_neuronal_artificial) necesitamos implementar un procesos que vaya actualizando los pesos a medida que la red vaya aprendiendo con el entrenamiento. Este proceso se conoce con el nombre de [propagación hacia atrás](https://es.wikipedia.org/wiki/Propagaci%C3%B3n_hacia_atr%C3%A1s) o [backpropagation](https://es.wikipedia.org/wiki/Propagaci%C3%B3n_hacia_atr%C3%A1s). # # ### Propagación hacia atrás # # La [propagación hacia atrás](https://es.wikipedia.org/wiki/Propagaci%C3%B3n_hacia_atr%C3%A1s) o [backpropagation](https://es.wikipedia.org/wiki/Propagaci%C3%B3n_hacia_atr%C3%A1s) es un algoritmo que funciona mediante la determinación de la pérdida (o error) en la salida y luego propagándolo de nuevo hacia atrás en la red. De esta forma los pesos se van actualizando para minimizar el error resultante de cada neurona. Este algoritmo es lo que les permite a las [redes neuronales](https://es.wikipedia.org/wiki/Red_neuronal_artificial) aprender. # # Veamos un ejemplo de como podemos implementar una [red neuronal](https://es.wikipedia.org/wiki/Red_neuronal_artificial) que pueda aprender por sí sola con la ayuda de [TensorFlow](https://www.tensorflow.org/). 
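# Antes de pasar al ejemplo, la idea central se puede resumir en la regla de actualización por descenso del gradiente: si $E$ es la función de pérdida (o error) y $\eta$ la tasa de aprendizaje, cada peso se ajusta en la dirección opuesta a su gradiente,
#
# $$w_{ij}^{(l)} \leftarrow w_{ij}^{(l)} - \eta \, \frac{\partial E}{\partial w_{ij}^{(l)}}$$
#
# y las derivadas parciales se calculan capa por capa, desde la salida hacia la entrada, aplicando la regla de la cadena (de ahí el nombre de propagación hacia atrás). En el ejemplo siguiente no hace falta programar este cálculo a mano: el optimizador de TensorFlow se encarga de calcular los gradientes y actualizar los pesos.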
# # ## Ejemplo de Perceptron multicapa para reconocer dígitos escritos # # En este ejemplo vamos a construir un [peceptron multicapa](https://es.wikipedia.org/wiki/Perceptr%C3%B3n_multicapa) para clasificar dígitos escritos. Antes de pasar a la construcción del modelo, exploremos un poco el conjunto de datos con el que vamos a trabajar en la clasificación. # # # ### MNIST dataset # # [MNIST](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) es un simple conjunto de datos para reconocimiento de imágenes por computadora. Se compone de imágenes de dígitos escritos a mano como los siguientes: # # # # Para más información sobre el dataset pueden visitar el siguiente [enlace](http://colah.github.io/posts/2014-10-Visualizing-MNIST/), en donde hacen un análisis detallado del mismo. # importando el dataset from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # ### Explorando MNIST dataset # forma del dataset 55000 imagenes mnist.train.images.shape # cada imagen es un array de 28x28 con cada pixel # definido como escala de grises. digito1 = mnist.train.images[0].reshape((28, 28)) # visualizando el primer digito plt.imshow(digito1, cmap = cm.Greys) plt.show() # valor correcto mnist.train.labels[0].nonzero()[0][0] # visualizando imagenes de 5 en 5 def visualizar_imagenes(dataset, cant_img): img_linea = 5 lineas = int(cant_img / img_linea) imagenes = [] for i in range(lineas): datos = [] for img in dataset[img_linea* i:img_linea* (i+1)]: datos.append(img.reshape((28,28))) imgs = np.hstack(datos) imagenes.append(imgs) data = np.vstack(imagenes) plt.imshow(data, cmap = cm.Greys ) plt.show() # visualizando los primeros 30 dígitos plt.figure(figsize=(8, 8)) visualizar_imagenes(mnist.train.images, 30) # ### Construyendo el perceptron multicapa # # Ahora que ya conocemos los datos con los que vamos a trabajar, ya estamos en condiciones de construir el modelo. Vamos a construir un [peceptron multicapa](https://es.wikipedia.org/wiki/Perceptr%C3%B3n_multicapa) que es una de las redes neuronales más simples. El modelo va a tener dos capas ocultas, que se van a activar con la [función de activación](https://es.wikipedia.org/wiki/Funci%C3%B3n_de_activaci%C3%B3n) ReLU y vamos a optimizar los pesos reduciendo la [entropía cruzada](https://es.wikipedia.org/wiki/Entrop%C3%ADa_cruzada) utilizando el algoritmo [Adam](http://arxiv.org/abs/1412.6980) que es un método para [optimización estocástica](https://en.wikipedia.org/wiki/Stochastic_optimization). # + # Parametros tasa_aprendizaje = 0.001 epocas = 15 lote = 100 display_step = 1 logs_path = "/tmp/tensorflow_logs/perceptron" # Parametros de la red n_oculta_1 = 256 # 1ra capa de atributos n_oculta_2 = 256 # 2ra capa de atributos n_entradas = 784 # datos de MNIST(forma img: 28*28) n_clases = 10 # Total de clases a clasificar (0-9 digitos) # input para los grafos x = tf.placeholder("float", [None, n_entradas], name='DatosEntrada') y = tf.placeholder("float", [None, n_clases], name='Clases') # - # Creamos el modelo def perceptron_multicapa(x, pesos, sesgo): # Función de activación de la capa escondida capa_1 = tf.add(tf.matmul(x, pesos['h1']), sesgo['b1']) # activacion relu capa_1 = tf.nn.relu(capa_1) # Función de activación de la capa escondida capa_2 = tf.add(tf.matmul(capa_1, pesos['h2']), sesgo['b2']) # activación relu capa_2 = tf.nn.relu(capa_2) # Salida con activación lineal salida = tf.matmul(capa_2, pesos['out']) + sesgo['out'] return salida # + # Definimos los pesos y sesgo de cada capa. 
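# 'h1' conecta la entrada (784 pixeles) con la 1ra capa oculta (256 unidades),
# 'h2' conecta las dos capas ocultas y 'out' conecta la 2da capa oculta con las
# 10 clases de salida; 'b1', 'b2' y 'out' son los sesgos correspondientes.
# Todos se inicializan al azar con una distribución normal.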
pesos = { 'h1': tf.Variable(tf.random_normal([n_entradas, n_oculta_1])), 'h2': tf.Variable(tf.random_normal([n_oculta_1, n_oculta_2])), 'out': tf.Variable(tf.random_normal([n_oculta_2, n_clases])) } sesgo = { 'b1': tf.Variable(tf.random_normal([n_oculta_1])), 'b2': tf.Variable(tf.random_normal([n_oculta_2])), 'out': tf.Variable(tf.random_normal([n_clases])) } with tf.name_scope('Modelo'): # Construimos el modelo pred = perceptron_multicapa(x, pesos, sesgo) with tf.name_scope('Costo'): # Definimos la funcion de costo costo = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y)) with tf.name_scope('optimizador'): # Algoritmo de optimización optimizar = tf.train.AdamOptimizer( learning_rate=tasa_aprendizaje).minimize(costo) with tf.name_scope('Precision'): # Evaluar el modelo pred_correcta = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # Calcular la precisión Precision = tf.reduce_mean(tf.cast(pred_correcta, "float")) # Inicializamos todas las variables init = tf.initialize_all_variables() # Crear sumarización para controlar el costo tf.scalar_summary("Costo", costo) # Crear sumarización para controlar la precisión tf.scalar_summary("Precision", Precision) # Juntar los resumenes en una sola operación merged_summary_op = tf.merge_all_summaries() # - # Lanzamos la sesión with tf.Session() as sess: sess.run(init) # op to write logs to Tensorboard summary_writer = tf.train.SummaryWriter( logs_path, graph=tf.get_default_graph()) # Entrenamiento for epoca in range(epocas): avg_cost = 0. lote_total = int(mnist.train.num_examples/lote) for i in range(lote_total): lote_x, lote_y = mnist.train.next_batch(lote) # Optimización por backprop y funcion de costo _, c, summary = sess.run([optimizar, costo, merged_summary_op], feed_dict={x: lote_x, y: lote_y}) # escribir logs en cada iteracion summary_writer.add_summary(summary, epoca * lote_total + i) # perdida promedio avg_cost += c / lote_total # imprimir información de entrenamiento if epoca % display_step == 0: print("Iteración: {0: 04d} costo = {1:.9f}".format(epoca+1, avg_cost)) print("Optimización Terminada!\n") print("Precisión: {0:.2f}".format(Precision.eval({x: mnist.test.images, y: mnist.test.labels}))) print("Ejecutar el comando:\n", "--> tensorboard --logdir=/tmp/tensorflow_logs ", "\nLuego abir http://0.0.0.0:6006/ en el navegador") # Como vemos [TensorFlow](https://www.tensorflow.org/) nos da mucha flexibilidad para construir el modelo, modificando muy pocas líneas podríamos cambiar el algoritmo de optimización o el calculo del error y obtener otros resultados; de esta forma vamos a poder personalizar el modelo para alcanzar mayores niveles de precisión. # # ## TensorBoard # # Otra gran herramienta que nos proporciona [TensorFlow](https://www.tensorflow.org/) es [TensorBoard](https://www.tensorflow.org/versions/r0.8/how_tos/graph_viz/index.html) que nos permite visualizar nuestros [grafos](https://es.wikipedia.org/wiki/Grafo) y nos ayudan a alcanzar un mayor entendimiento del flujo de cálculos que ocurre en nuestro modelo. # # Para crear la información de la que se va a nutrir el [TensorBoard](https://www.tensorflow.org/versions/r0.8/how_tos/graph_viz/index.html), podemos definir algunos *scopes* utilizando `tf.name_scope`; también podemos incluir algunos gráficos sumarizados con `tf.scalar_summary` y luego llamamos a la función `tf.train.SummaryWriter` dentro de una *Sesión*. # # Luego podemos iniciar el *board* con el comando `tensorboard --logdir=logpath` como se puede ver en la salida del último ejemplo. 
# # Los [grafos](https://es.wikipedia.org/wiki/Grafo) de los casos que vimos por ejemplo, se ven así. # # # # Los invito a explorar la herramienta y adentrarse en el fascinante mundo de las [redes neuronales](https://es.wikipedia.org/wiki/Red_neuronal_artificial). # # Saludos! # *Este post fue escrito utilizando IPython notebook. Pueden descargar este [notebook](https://github.com/relopezbriega/relopezbriega.github.io/blob/master/downloads/IntroTensorFlow.ipynb) o ver su version estática en [nbviewer](http://nbviewer.ipython.org/github/relopezbriega/relopezbriega.github.io/blob/master/downloads/IntroTensorFlow.ipynb).* # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import xetrapal, pandas from xetrapal import gdastras a=xetrapal.karma.load_xpal_smriti("/opt/xpal-data//avxpal.json") avxpal=xetrapal.Xetrapal(a) config=xetrapal.karma.load_config_json(a.configfile) pygsheetsconfig = xetrapal.karma.load_config_json(config['Pygsheets']['avdrive']) gd = gdastras.gd_get_googledriver(pygsheetsconfig) # + gd.spreadsheet_titles() # - ndhmgraph = gd.open("NDHM Graph") #rels=ndhmgraph.work edgesheet=ndhmgraph.worksheet_by_title("edges") nodesheet=ndhmgraph.worksheet_by_title("nodes") edges = edgesheet.get_as_df(include_tailing_empty=False) edges # nodes = nodesheet.get_as_df(include_tailing_empty=False).dropna() # + p=pandas.DataFrame() p['id']=pandas.array(list(set(list(edges.source.unique())+list(edges.target.unique())))) nodesheet.set_dataframe(p,(1,1)) # - import d3fdgraph d3fdgraph.plot_force_directed_graph(edges[["source", "target", "weight"]], link_width_scale=10,node_radius=10) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="kk-3YAoFKzMn" # !pip install sagemaker==2.88.0 s3fs joblib scikit-learn==1.0.2 xgboost # + id="jFpnVi_bK94a" import sagemaker from sagemaker.session import Session from sagemaker.feature_store.feature_group import FeatureGroup # import os # os.environ["AWS_ACCESS_KEY_ID"] = "" # os.environ["AWS_SECRET_ACCESS_KEY"] = "" # os.environ["AWS_DEFAULT_REGION"] = "us-east-1" role = "arn:aws:iam:::role/sagemaker-iam-role" FEATURE_GROUP_NAME = "telcom-customer-features" sagemaker_session = sagemaker.Session() region = sagemaker_session.boto_region_name s3_bucket_name = "feast-demo-mar-2022" customers_feature_group = FeatureGroup( name=FEATURE_GROUP_NAME, sagemaker_session=sagemaker_session ) # + id="9sCMx5R1LDGn" get_latest_snapshot_query = customers_feature_group.athena_query() query = f"""SELECT * FROM (SELECT *, row_number() OVER (PARTITION BY customerid ORDER BY event_timestamp desc, Api_Invocation_Time DESC, write_time DESC) AS row_num FROM "{get_latest_snapshot_query.table_name}") WHERE row_num = 1 and NOT is_deleted;""" # + id="JoRSQVHxLF5a" get_latest_snapshot_query.run(query_string=query, output_location=f"s3://{s3_bucket_name}/output") get_latest_snapshot_query.wait() # + id="6QUv8b_ZLJpc" churn_data = get_latest_snapshot_query.as_dataframe() churn_data = churn_data.drop(columns=["event_timestamp", "write_time", "api_invocation_time", "is_deleted", "row_num"]) # + id="5aoYaFpwLP9s" import boto3 from datetime import date s3 = boto3.client('s3') s3.download_file(s3_bucket_name, f"model-repo/customer-churn-v0.0", "customer-churn-v0.0") # + 
id="59yoPIzxLShg" features = churn_data.drop(['customerid', 'churn'], axis=1) loaded_model = joblib.load('/content/customer-churn-v0.0') prediction = loaded_model.predict(features) prediction.tolist() # + id="MxetQ8AxLcxr" file_name = f"customer_churn_prediction_{date.today()}.parquet" churn_data["predicted_churn"] = prediction.tolist() s3_url = f's3://{s3_bucket_name}/prediction_results/{file_name}' churn_data.to_parquet(s3_url) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import mdtraj as md from nglview import datafiles import nglview as nv traj0 = md.load('../data/tz2.pdb') # traj1 = md.load(datafiles.XTC, top=datafiles.PDB) trajlist = [nv.MDTrajTrajectory(traj) for traj in [traj0, traj1]] trajlist view = nv.NGLWidget(trajlist) view # - # clear repr: trpcage view._clear_repr(component=1) # clear repr: trpzip2 view._clear_repr(component=0) # try back: trpcage view.add_line(component=1) # + # try back: trpzip2 view.add_cartoon(color='residueindex') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6 # language: python # name: python36 # --- # Copyright (c) Microsoft Corporation. All rights reserved. # # Licensed under the MIT License. # ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/azure-arcadia/spark_session_on_synapse_spark_pool.png) # # Interactive Spark Session on Synapse Spark Pool # ### Install package # !pip install -U "azureml-synapse" # For JupyterLab, please additionally run: # !jupyter lab build --minimize=False # ## PLEASE restart kernel and then refresh web page before starting spark session. # ## 0. How to leverage Spark Magic for interactive Spark experience # show help # %synapse ? # ## 1. Start Synapse Session synapse_compute_name=os.getenv("SYNAPSE_COMPUTE_NAME", "") # + # use Synapse compute linked to the Compute Instance's workspace with an aml envrionment. # conda dependencies specified in the environment will be installed before the spark session started. # %synapse start -c $synapse_compute_name -e AzureML-Minimal # + # use Synapse compute from anther workspace via its config file # # %synapse start -c -f config.json # + # use Synapse compute from anther workspace via subscription_id, resource_group and workspace_name # # %synapse start -c -s -r -w # + # start a spark session with an AML environment, # # %synapse start -c -s -r -w -e AzureML-Minimal # - # ## 2. Data prepration # # Three types of datastore are supported in synapse spark, and you have two ways to load the data. 
# # # | Datastore Type | Data Acess | # |--------------------|-------------------------------| # | Blob | Credential | # | Adlsgen1 | Credential & Credential-less | # | Adlsgen2 | Credential & Credential-less | # ### Example 1: Data loading by HDFS path # **Read data from Blob** # # ```python # # setup access key or sas token # # sc._jsc.hadoopConfiguration().set("fs.azure.account.key..blob.core.windows.net", "") # sc._jsc.hadoopConfiguration().set("fs.azure.sas...blob.core.windows.net", "sas token") # # df = spark.read.parquet("wasbs://@.blob.core.windows.net/") # ``` # # **Read data from Adlsgen1** # # ```python # # setup service pricinpal which has access of the data # # If no data Credential is setup, the user identity will be used to do access control # # sc._jsc.hadoopConfiguration().set("fs.adl.account..oauth2.access.token.provider.type","ClientCredential") # sc._jsc.hadoopConfiguration().set("fs.adl.account..oauth2.client.id", "") # sc._jsc.hadoopConfiguration().set("fs.adl.account..oauth2.credential", "") # sc._jsc.hadoopConfiguration().set("fs.adl.account..oauth2.refresh.url", "https://login.microsoftonline.com//oauth2/token") # # df = spark.read.csv("adl://.azuredatalakestore.net/") # ``` # # **Read data from Adlsgen2** # # ```python # # setup service pricinpal which has access of the data # # If no data Credential is setup, the user identity will be used to do access control # # sc._jsc.hadoopConfiguration().set("fs.azure.account.auth.type..dfs.core.windows.net","OAuth") # sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth.provider.type..dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider") # sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth2.client.id..dfs.core.windows.net", "") # sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth2.client.secret..dfs.core.windows.net", "") # sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth2.client.endpoint..dfs.core.windows.net", "https://login.microsoftonline.com//oauth2/token") # # df = spark.read.csv("abfss://@.dfs.core.windows.net/") # ``` # + # %%synapse from pyspark.sql.functions import col, desc df = spark.read.option("header", "true").csv("wasbs://@dprep.windows.net/Titanic.csv") df.filter(col('Survived') == 1).groupBy('Age').count().orderBy(desc('count')).show(10) # - # ### Example 2: Data loading by AML Dataset # # You can create tabular data by following the [guidance](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-register-datasets) and use to_spark_dataframe() to load the data. # # ```text # # %%synapse # # import azureml.core # print(azureml.core.VERSION) # # from azureml.core import Workspace, Dataset # ws = Workspace.get(name='', subscription_id='', resource_group='') # ds = Dataset.get_by_name(ws, "") # df = ds.to_spark_dataframe() # # # You can do more data transformation on spark dataframe # ``` # ## 3. Session Metadata # After session started, you can check the session's metadata, find the links to Synapse portal. # %synapse meta # ## 4. Stop Session # When current session reach the status timeout, dead or any failure, you must explicitly stop it before start new one. 
# %synapse stop # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # name: ir # --- # + [markdown] colab_type="text" id="view-in-github" # Open In Colab # + [markdown] id="yaU2RH1eTY7v" # # Test EpiEstim # + [markdown] id="Sj4Jj74HTZne" # https://cran.r-project.org/web/packages/EpiEstim/index.html # # https://cran.r-project.org/web/packages/EpiEstim/vignettes/demo.html # # + id="zlJqP9QCTUyu" install.packages("EpiEstim", lib='/content') install.packages("incidence", lib='/content') library(EpiEstim, lib.loc='/content') library(incidence, lib.loc='/content') library(ggplot2) # + id="yxFc1CddWjKQ" data(Flu2009) # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="0cIfonoHXh0V" outputId="ba9c4666-a144-41ad-e819-9d54a4a6fcf2" ## incidence: head(Flu2009$incidence) # + colab={"base_uri": "https://localhost:8080/", "height": 35} id="DlaPrk09Xh3O" outputId="e408f737-b26e-43b0-95b4-6529a032288b" ## serial interval (SI) distribution: Flu2009$si_distr # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="lW9NMKLMXh6P" outputId="bae64d31-e0db-4c63-9341-432d4efe7310" ## interval-ceonsored serial interval data: ## each line represents a transmission event, ## EL/ER show the lower/upper bound of the symptoms onset date in the infector ## SL/SR show the same for the secondary case ## type has entries 0 corresponding to doubly interval-censored data ## (see Reich et al. Statist. Med. 2009). head(Flu2009$si_data) # + colab={"base_uri": "https://localhost:8080/", "height": 437} id="BiAidnA8Xh94" outputId="41e09ba8-4540-4d58-deb6-913b3039733b" plot(as.incidence(Flu2009$incidence$I, dates = Flu2009$incidence$dates)) # + [markdown] id="wo_3lbS8Z6Nu" # We can run estimate_R on the incidence data to estimate the reproduction number R. For this, we need to specify i) the time window(s) over which to estimate R and ii) information on the distribution of the serial interval. # # For i), the default behavior is to estimate R over weekly sliding windows. This can be changed through the config$t_start and config$t_end arguments (see below, “Changing the time windows for estimation”). For ii), there are several options, specified in the method argument. # # The simplest is the parametric_si method, where you only specify the mean and standard deviation of the SI. # + [markdown] id="t1UKQInuZ-Yc" # # Estimating R on sliding weekly windows, with a parametric serial interval # In this example, we only specify the mean and standard deviation of the serial interval. In that case an offset gamma distribution is used for the serial interval. 
In the following example, we use the mean and standard deviation of the serial interval for flu from Ferguson et al., Nature, 2005: # + colab={"base_uri": "https://localhost:8080/", "height": 69} id="jACCIuMNZzik" outputId="c99e1ceb-75b2-4ad4-d02a-06ff6179c46a" res_parametric_si <- estimate_R(Flu2009$incidence, method="parametric_si", config = make_config(list( mean_si = 2.6, std_si = 1.5)) ) # + colab={"base_uri": "https://localhost:8080/", "height": 224} id="xwoB6RfyZzlT" outputId="1419865e-5bcf-4795-e1af-b6e0be54e3ef" head(res_parametric_si$R) # + colab={"base_uri": "https://localhost:8080/", "height": 437} id="SfrEWBjWZzoS" outputId="aa851096-a88e-48d0-bd4b-92c9db981a0d" plot(res_parametric_si, legend = FALSE) # + id="ZQ7ytSTnZzrT" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import sys import numpy import pandas import matplotlib import scipy import seaborn import sklearn print("python : {}".format(sys.version)) print("numpy : {}".format(numpy.__version__)) print("pandas : {}".format(pandas.__version__)) print("matplotlib : {}".format(matplotlib.__version__)) print("scipy : {}".format(scipy.__version__)) print("seaborn : {}".format(seaborn.__version__)) print("sklearn : {}".format(sklearn.__version__)) # - import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # + #loading data sets download it from kaggle approx.of size 150mb data=pd.read_csv("creditcard.csv") #exploring datasets print(data.shape) print(data.columns) #"class" in columns has two value 0=credit_card_transaction; 1=fraudaluent_transaction # - print(data.describe()) #minimising data to 10% data=data.sample(frac=0.1,random_state=1) print(data.shape) #ploting histogram of each parameter of data data.hist(figsize= (20,20)) plt.show() # + #determining no. 
of fraud cases in datasets fraud=data[data['Class']==1] valid=data[data['Class']==0] outlier_fraction =len(fraud)/float(len(valid)) print(outlier_fraction) print("Fraud cases: {}".format(len(fraud))) print("Valid cases: {}".format(len(valid))) # + #correlation matrix corrmap=data.corr() fig=plt.figure(figsize=(12,9)) sns.heatmap(corrmap,vmax=.8,square=True) plt.show() # - #getting all the column fromthe dataframe columns=data.columns.tolist() print(columns) # + # filtering the data-column which we do not need columns=[c for c in columns if c not in ['Class']] print(columns) # store the variable we'll be predicting target="Class" X=data[columns] # columns in whichwe are intrested in Y=data[target] # target column 0= valid transaction; 1=fraud transaction # print the shape print(X.shape) print(Y.shape) # + from sklearn.metrics import classification_report,accuracy_score from sklearn.ensemble import IsolationForest from sklearn.neighbors import LocalOutlierFactor #calculates on the density of loacls # define random-state state=1 # define the outlier detection methods in Dictionary data-type classifiers = { "Isolation Forest": IsolationForest(max_samples=len(X), contamination = outlier_fraction, random_state=state), "Local Outlier Factor": LocalOutlierFactor( n_neighbors = 20, contamination = outlier_fraction ) } # + # fitting the model n_outliers=len(fraud) for i, (clf_name, clf) in enumerate(classifiers.items()): #fit the data and tag outliers if clf_name=="Local Outlier Factor": y_pred=clf.fit_predict(X) score_pred=clf.negative_outlier_factor_ else: clf.fit(X) score_pred=clf.decision_function(X) y_pred=clf.predict(X) #reshape the prediction values to 0 for valid and 1 for fraud y_pred[y_pred == 1] = 0 y_pred[y_pred == -1] = 1 n_errors=(y_pred !=Y).sum() # Run the classification matrix print("(): {}".format(clf_name,n_errors)) print(accuracy_score(Y,y_pred)) print(classification_report(Y,y_pred)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="pAM7jZb3iyUu" executionInfo={"status": "ok", "timestamp": 1614335466757, "user_tz": -60, "elapsed": 556, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgdXvq_fTRzxq1W4xPc6kpcov6LGHBk-CJycAyf=s64", "userId": "04563411618181728218"}} import numpy as np import pandas as pd import matplotlib.pyplot as plt plt.style.use('seaborn') # + id="9TPgGTNBi20i" executionInfo={"status": "ok", "timestamp": 1614335675741, "user_tz": -60, "elapsed": 442, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgdXvq_fTRzxq1W4xPc6kpcov6LGHBk-CJycAyf=s64", "userId": "04563411618181728218"}} X = np.load('X.npy') y = np.load('y.npy') # + id="V01BTOf-6-pe" executionInfo={"status": "ok", "timestamp": 1614335676709, "user_tz": -60, "elapsed": 348, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgdXvq_fTRzxq1W4xPc6kpcov6LGHBk-CJycAyf=s64", "userId": "04563411618181728218"}} def filter(array, inf=-1, sup=2.5): array = array[array > inf] array = array[array < sup] return array # + id="0oi2VX6Wz2pw" executionInfo={"status": "ok", "timestamp": 1614335676935, "user_tz": -60, "elapsed": 334, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgdXvq_fTRzxq1W4xPc6kpcov6LGHBk-CJycAyf=s64", "userId": "04563411618181728218"}} reactivity = y[:, :, 0].mean(axis=1) deg_Mg_pH10 = y[:, :, 
1].mean(axis=1) deg_pH10 = y[:, :, 2].mean(axis=1) deg_Mg_50C = y[:, :, 3].mean(axis=1) deg_50C = y[:, :, 4].mean(axis=1) # + colab={"base_uri": "https://localhost:8080/", "height": 586} id="0AaFHQqK2Xyc" executionInfo={"status": "ok", "timestamp": 1614335678831, "user_tz": -60, "elapsed": 2019, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgdXvq_fTRzxq1W4xPc6kpcov6LGHBk-CJycAyf=s64", "userId": "04563411618181728218"}} outputId="ea8446c6-2aa4-4839-9023-aeff31433b74" targets = [reactivity, deg_Mg_pH10, deg_pH10, deg_Mg_50C, deg_50C] colors = ['pink', 'purple', 'cyan', 'red', 'orange'] names = ['reactivity', 'deg_Mg_pH10', 'deg_pH10', 'deg_Mg_50C', 'deg_50C'] def plot_targets(targets, colors, names): plt.figure(figsize=(35, 11)) for i in range(5): plt.subplot(2, 3, i+1) plt.hist(targets[i], bins=50, density=True, edgecolor='white', color=colors[i], label=names[i]) plt.legend(loc='upper right') plt.show() plot_targets(targets, colors, names) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="pCVb275DKzl4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="f666fbef-8150-4efa-9fc2-b8a82d1e4b88" executionInfo={"status": "ok", "timestamp": 1581456838193, "user_tz": -60, "elapsed": 699, "user": {"displayName": "\u0144ski", "photoUrl": "", "userId": "06453494618211679268"}} print('Hello Github :)') # + id="o9UObwUNLDqH" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt # ## Generate Dataset # + and_data = [ {'x1': 0, 'x2': 0, 'y' : 0}, {'x1': 1, 'x2': 0, 'y' : 0}, {'x1': 0, 'x2': 1, 'y' : 0}, {'x1': 1, 'x2': 1, 'y' : 1}, ] and_data = pd.DataFrame.from_dict(and_data) plt.scatter(and_data['x1'], and_data['x2'], c=and_data['y']) print(and_data.shape) and_data.head() # + or_data = [ {'x1': 0, 'x2': 0, 'y' : 0}, {'x1': 1, 'x2': 0, 'y' : 1}, {'x1': 0, 'x2': 1, 'y' : 1}, {'x1': 1, 'x2': 1, 'y' : 1}, ] or_data = pd.DataFrame.from_dict(or_data) plt.scatter(or_data['x1'], or_data['x2'], c=or_data['y']) print(or_data.shape) or_data.head() # + xor_data = [ {'x1': 0, 'x2': 0, 'y' : 0}, {'x1': 1, 'x2': 0, 'y' : 1}, {'x1': 0, 'x2': 1, 'y' : 1}, {'x1': 1, 'x2': 1, 'y' : 0}, ] xor_data = pd.DataFrame.from_dict(xor_data) plt.scatter(xor_data['x1'], xor_data['x2'], c=xor_data['y']) print(xor_data.shape) xor_data.head() # - # ### Define Sigmoid def sigmoid(n): return 1 / (1 + np.exp(-n)) # + xx = np.linspace(-10, 10, num=21) yy = sigmoid(xx) plt.plot(xx, yy) # - # ### Single layer Neural Network def gradient_descent(X, y, num_epoch=100, learning_rate=1.0): num_features = X.shape[1] w = np.random.uniform(-1.0, 1.0, size=num_features) b = np.random.uniform(-1.0, 1.0) for epoch in range(num_epoch): y_predict = X.dot(w) + b y_predict = sigmoid(y_predict) predict = y_predict >= 0.5 error = (predict != y).mean() print("{0} error = {1:.5f}".format(epoch, error)) if error == 0.00: break w = w - learning_rate * (y_predict - y).dot(X) b = b - learning_rate * (y_predict - y).mean() return w, b, error # ### And Data # + data = and_data X = data[["x1", "x2"]].values y = data["y"].values w, b, error = 
gradient_descent(X, y) print("----" * 10) print("error = {0:.5f}".format(error)) # + xx = np.linspace(0.0, 1.0, num=11) yy = -1.0 * (w[0] * xx + b) / w[1] plt.scatter(data["x1"], data["x2"], c=y) plt.plot(xx, yy) # - # ### Or Data # + data = or_data X = data[["x1", "x2"]].values y = data["y"].values w, b, error = gradient_descent(X, y) print("----" * 10) print("error = {0:.5f}".format(error)) # + xx = np.linspace(0.0, 1.0, num=11) yy = -1.0 * (w[0] * xx + b) / w[1] plt.scatter(data["x1"], data["x2"], c=y) plt.plot(xx, yy) # - # ### XOR Data # + data = xor_data X = data[["x1", "x2"]].values y = data["y"].values w, b, error = gradient_descent(X, y) print("----" * 10) print("error = {0:.5f}".format(error)) # + xx = np.linspace(0.0, 1.0, num=11) yy = -1.0 * (w[0] * xx + b) / w[1] plt.scatter(data["x1"], data["x2"], c=y) plt.plot(xx, yy) # + x1 = data["x1"].values x2 = data["x2"].values # np.invert(x1) flipped_x1 = (x1 == 0).astype('int') flipped_x2 = (x2 == 0).astype('int') new_x1 = flipped_x1 & x2 new_x2 = x1 & flipped_x2 plt.scatter(new_x1, new_x2, c=y) new_xor_data = pd.DataFrame({'x1': new_x1, 'x2': new_x2, 'y': y}) new_xor_data # + data = new_xor_data X = data[["x1", "x2"]].values y = data["y"].values w, b, error = gradient_descent(X, y) print("----" * 10) print("error = {0:.5f}".format(error)) # + xx = np.linspace(0.0, 1.0, num=11) yy = -1.0 * (w[0] * xx + b) / w[1] plt.scatter(data["x1"], data["x2"], c=y) plt.plot(xx, yy) # - # ### Polynomial # + x1 = xor_data["x1"] x2 = xor_data["x2"] x3 = x1 * x1 x4 = x1 * x2 x5 = x2 * x2 polynomial_xor_data = { 'x1': x1, 'x2': x2, 'x3': x3, 'x4': x4, 'x5': x5, 'y': y, } polynomial_xor_data = pd.DataFrame(polynomial_xor_data) polynomial_xor_data # + data = polynomial_xor_data X = data[["x1", "x2", "x3", "x4", "x5"]].values y = data["y"].values w, b, error = gradient_descent(X, y) print("----" * 10) print("error = {0:.5f}".format(error)) # - # ### Multi-layer Neural Network # + data = xor_data X = data[["x1", "x2"]].values y = data["y"].values y = y.reshape(4, 1) # + num_epoch = 300 learning_rate = 1.0 w1 = np.random.uniform(low=0.0, high=1.0, size=(2, 3)) w2 = np.random.uniform(low=0.0, high=1.0, size=(3, 1)) b1 = np.random.uniform(low=0.0, high=1.0, size=(1, 3)) b2 = np.random.uniform(low=0.0, high=1.0, size=(1, 1)) for epoch in range(num_epoch): a1 = X.dot(w1) + b1 z1 = sigmoid(a1) a2 = z1.dot(w2) + b2 z2 = sigmoid(a2) y_predict = z2 predict = y_predict >= 0.5 error = (predict != y).mean() if epoch % 10 == 0: print("{0:3} error = {1:.5f}".format(epoch, error)) if error == 0.00: break d2 = z2 - y d1 = d2.dot(w2.T) * z1 * (1 - z1) w2 = w2 - learning_rate * z1.T.dot(d2) w1 = w1 - learning_rate * X.T.dot(d1) b2 = b2 - learning_rate * d2.mean(axis=0) b1 = b1 - learning_rate * d1.mean(axis=0) print("----" * 10) print("{0:3} error = {1:.5f}".format(epoch, error)) # - # ### Result # + a1 = X.dot(w1) + b1 z1 = sigmoid(a1) a2 = z1.dot(w2) + b2 z2 = sigmoid(a2) y_predict = z2 predict = y_predict >= 0.5 data["y(predict)"] = predict.astype('int') data # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Training # # ### Building and compiling a model # + # %%capture import h5py from chimeranet.models import ChimeraPPModel # probe shape of dataset and set embedding dimension dataset_path = 'example-dataset-train.hdf5' with h5py.File(dataset_path, 'r') as f: _, T, F, C = 
f['y/embedding'].shape D = 20 cm = ChimeraPPModel(T, F, C, D) # build_model returns Keras' Model object model = cm.build_model() model.compile( 'rmsprop', loss={ 'embedding': cm.loss_deepclustering(), 'mask': cm.loss_mask() }, loss_weights={ 'embedding': 0.9, 'mask': 0.1 } ) # - # ### Training a model # + # load train and validation data dataset_validation_path = 'example-dataset-validation.hdf5' with h5py.File(dataset_path, 'r') as f: x_train = f['x'][()] y_train = {'mask': f['y/mask'][()], 'embedding': f['y/embedding'][()]} with h5py.File(dataset_validation_path, 'r') as f: x_validation = f['x'][()] y_validation = {'mask': f['y/mask'][()], 'embedding': f['y/embedding'][()]} # train model by model.fit function model.fit( x=x_train, y=y_train, validation_data=(x_validation, y_validation), batch_size=32, epochs=10 ) # save the model model_path = 'example-model.hdf5' model.save(model_path) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import matplotlib import matplotlib.pyplot as plt import scipy import astropy.units as u from astropy.coordinates import AltAz from astropy.coordinates import SkyCoord, EarthLocation from astropy.time import Time import numpy as np from ctapipe.io import event_source from ctapipe.calib import CameraCalibrator from ctapipe.coordinates import GroundFrame, TiltedGroundFrame from ctapipe.image import tailcuts_clean, dilate from ctapipe.reco.ImPACT import ImPACTReconstructor, guess_shower_depth from ctapipe.image import hillas_parameters, HillasParameterizationError from ctapipe.reco import HillasReconstructor from ctapipe.reco.energy_regressor import EnergyRegressor from ctapipe.io.containers import ReconstructedShowerContainer, ReconstructedEnergyContainer from ctapipe.visualization import CameraDisplay from astropy.table import Table import copy # - # %load_ext autoreload # %autoreload 2 from CREED_VTK import CREED_VTK from ctapipe.reco import HillasReconstructor from IPython.core.display import display, HTML display(HTML("")) # + class HillasNotFinite(Exception): """ Error to be raised when hillas parameters are not finite """ pass def reconstruction(event): location = EarthLocation.of_site('Roque de los Muchachos') obstime = Time('2018-11-01T02:00') horizon_frame = AltAz(location=location, obstime=obstime) telescope_pointings = {} features = {} hillas_dict = {} pointing_azimuth = {} pointing_altitude = {} cleaned_dict = {} array_pointing = SkyCoord( az=event.mc.az, alt=event.mc.alt, frame=AltAz() ) for tel_id in event.r0.tels_with_data: telescope_pointings[tel_id] = SkyCoord( alt=event.mc.tel[tel_id].altitude_raw * u.rad, az=event.mc.tel[tel_id].azimuth_raw * u.rad, frame=horizon_frame ) dl1 = event.dl1.tel[tel_id] camera = event.inst.subarray.tels[tel_id].camera mask = tailcuts_clean(camera, dl1.image, boundary_thresh=4, picture_thresh=8, min_number_picture_neighbors=2) telescope_type_name = event.inst.subarray.tels[tel_id].optics #dl1.cleaned = copy.deepcopy(dl1.image[0]) cleaned = copy.deepcopy(dl1.image) # cleaned = dl1.cleaned cleaned[~mask] = 0 if cleaned.sum() > 0: try: h = hillas_parameters(camera, cleaned) if not all(map(np.isfinite, h.values())): raise HillasNotFinite("bad Hillas parameters") hillas_dict[tel_id] = h cleaned_dict[tel_id] = cleaned except HillasNotFinite: pass else: pass if len(hillas_dict) < 2: print("mono") reconstruction = None else: # reconstruction, 
hillas_planes, correct_angle_hillas = hillas_reco.predict(hillas_dict, event.inst, array_pointing, telescopes_pointings=telescope_pointings) reconstruction = hillas_reco.predict(hillas_dict, event.inst, array_pointing, telescopes_pointings=telescope_pointings) return reconstruction, hillas_dict, cleaned_dict # + pwd = "/home/thomas/Programs/astro/CTAPIPE_DAN/" #filename= "Data/gamma_20deg_180deg_run100___cta-prod3-lapalma3-2147m-LaPalma.simtel.gz" filename= "/home/thomas/Programs/astro/CTAPIPE_DAN/Data/gamma_20deg_0deg_run101___cta-prod3-lapalma3-2147m-LaPalma.simtel.gz" #filename= "/home/thomas/Programs/astro/CTAPIPE_DAN/Data/gamma_20deg_0deg_run100___cta-prod3_desert-2150m-Paranal-merged.simtel.gz" # filename= "/home/thomas/Programs/astro/Divergent/Data/gamma_20deg_180deg_run10___cta-prod3-demo-2147m-LaPalma-baseline.simtel.gz" # North # layout = np.loadtxt(pwd+'Utils/CTA.prod3Nb.3AL4-BN15.lis', usecols=0, dtype=int) # FlashCam # layout = np.loadtxt(pwd+'CTA.prod3Sb.3HB9-FG.lis', usecols=0, dtype=int) # NectarCam # layout = np.loadtxt(pwd+'CTA.prod3Sb.3HB9-NG.lis', usecols=0, dtype=int) if "Paranal" in filename: layout = [4, 5, 6, 11] print("PARANAL WITH {0}".format(layout)) elif "palma" in filename: # layout = [5, 6, 7, 8] layout = np.loadtxt(pwd+'Utils/CTA.prod3Nb.3AL4-BN15.lis', usecols=0, dtype=int) layout = set(layout) print("LAPALMA WITH {0}".format(layout)) # SELECT ONLY THE MSTs # layout = list(np.arange(5,20,1)) # layout = [5,6,7,8] # # layout = [1,2,3,4] # if "Paranal" in filename: # layout = [4, 5, 6, 11] # print("PARANAL WITH {0}".format(layout)) # elif "palma" in filename: # layout = [5, 6, 7, 8] # print("LAPALMA WITH {0}".format(layout)) source = event_source(filename) source.max_events = 2 source.allowed_tels = layout events = [copy.deepcopy(event) for event in source] # + # #!more /home/thomas/Programs/astro/CTAPIPE_DAN/Utils/Utils/CT # - # Calibration calibrator = CameraCalibrator() for event in events: calibrator(event) for counter, event in enumerate(events): if len(event.r0.tels_with_data) > 1: # event = events[10] print(counter, event.count, event.mc.energy, event.r0.tels_with_data) # + ## Find "big" event #events_amplitude = [] #for event in events: # event_amplitude = 0 # for tel_id in event.r0.tels_with_data: # if event.dl1.tel[tel_id].image is not None: # event_amplitude += event.dl1.tel[tel_id].image[0].sum() # events_amplitude.append(event_amplitude) #events_amplitude = np.array(events_amplitude) # #mm = events_amplitude.argmax() #print(mm) event = events[1] # Hillas reconstruction cleaning_level = { 'ASTRICam': (5, 7, 2), # (5, 10)? 'LSTCam': (3.5, 7.5, 2), # ?? (3, 6) for Abelardo... 'FlashCam': (4, 8, 2), # there is some scaling missing? 
} reco = HillasReconstructor() print('Id: {}, E = {:1.3f}, Telescopes: {}'.format(event.count, event.mc.energy, len(event.r0.tels_with_data))) # Hillas reconstruction hillas_reco = HillasReconstructor() reco, hillas_dict, cleaned_dict = reconstruction(event) # - # + # event.r0.tels_with_data # # print(event.mc.tel[37].azimuth_raw) # print(event.mc.tel[37].altitude_raw*180/np.pi) # event.mc.tel[37] # # event.inst.subarray.optics_types # # info = {'CHEC': [], # 'ASTRICam': [], # 'LSTCam': [], # 'FlashCam': [], # 'DigiCam': [], # 'NectarCam': [], # 'SCTCam': [], # } # # # list(info.keys()) # info # from astropy.io import ascii # ascii.write(info, names=list(info.keys()), output="file.dat") # event = events[4] # for tel_id in event.r0.tels_with_data: # telescope = event.inst.subarray.tel[tel_id] # #info[telescope.camera.cam_id] = [telescope.camera.pix_x, telescope.camera.pix_y ] # print("{0}; \t{1} \t {2} ".format(telescope.optics.identifier[0], telescope.camera.cam_id, telescope.camera.pix_x.shape)) # # + code_folding=[] ncol = 4 nrow = len(event.r0.tels_with_data)//ncol + 1 * (len(event.r0.tels_with_data)%ncol > 0) fs = 10 fig, axes = plt.subplots(nrow, ncol, figsize=(fs*ncol, fs*nrow)) #hillas_dict = {} #cleaned_dict = {} for ii, tel_id in enumerate(hillas_dict.keys()): camera = event.inst.subarray.tel[tel_id].camera image = event.dl1.tel[tel_id].image cleaned = cleaned_dict[tel_id] disp = CameraDisplay(camera, image, ax = axes.ravel()[ii]) disp.axes.set_title("{0}_{1}".format(str(tel_id), disp.geom.cam_id)) cleaned_dict[tel_id] = cleaned if cleaned.sum() > 0: hillas = hillas_parameters(camera, cleaned) hillas_dict[tel_id] = hillas #disp = CameraDisplay(camera, cleaned, ax=axes.ravel()[ii]) #disp.overlay_moments(hillas, color='RED', linewidth=3, zorder=10) plt.show() # - arr_point = SkyCoord( alt=event.mcheader.run_array_direction[1], az=event.mcheader.run_array_direction[0], frame=AltAz() ) arr_point.az.value layout # + #render = CREED_VTK(event, telescopes_ids=list(hillas_dict.keys())) render = CREED_VTK(event, telescopes_ids=list(layout)) # render.event_type(clean_level = "None") # render.event_type(clean_level = "clean", clean_dict=cleaned_dict) #render.add_arrows_camera_frame() render.add_gnd_tels() render.add_gnd_frame(size=1000) render.add_tilted_frame(size=1000) render.add_tilted_tels() render.camera_view(elev=20) render.tel_labels() # render.add_impact_point(label="X mc", status="mc", frame="ground") #render.add_impact_point(label="X mc", status="mc", frame="ground") #gnd_reco_pos = GroundFrame(x= reco.core_x, y = reco.core_y, z = 0 * u.m) # render.add_impact_point(reco.core_x.value, reco.core_y.value, label="+ reco") # render.add_impact_point(label="+ reco", gnd_reco_pos=gnd_reco_pos) #render.add_impact_point(status="reco", label="+ reco", gnd_reco_pos=gnd_reco_pos, frame="ground") #render.plot_hillas_lines(hillas_dict=hillas_dict, length=500, frame="ground") render.show(width= 1600, height=1000) # - print(hillas_planes.keys()) tel_id = 5 tel_id_2 = 6 print(hillas_planes[tel_id].a, hillas_planes[tel_id].b, hillas_planes[tel_id].c, hillas_planes[tel_id].norm, hillas_planes[tel_id].weight, hillas_planes[tel_id].pos) print(hillas_planes[tel_id_2].a, hillas_planes[tel_id_2].b, hillas_planes[tel_id_2].c, hillas_planes[tel_id_2].norm, hillas_planes[tel_id_2].weight, hillas_planes[tel_id_2].pos) # + code_folding=[12] # %matplotlib notebook from mpl_toolkits.mplot3d import Axes3D # plt3d = plt.figure(figsize=(10,10)).gca(projection='3d') fig = plt.figure(figsize=(10, 10)) plt3d = 
fig.add_subplot(111, projection='3d')

colors = ['r', 'g', 'blue', 'yellow']
names = ['a', 'b', 'c', 'norm']

tels_list = [60, 62]

for tel_id in tels_list:  # hillas_planes.keys():
    point = hillas_planes[tel_id].c
    normal = hillas_planes[tel_id].norm

    # a plane is a*x+b*y+c*z+d=0
    # [a,b,c] is the normal. Thus, we have to calculate
    # d and we're set
    d = -point.dot(normal)

    # create x,y
    xx, yy = np.meshgrid(np.linspace(-1, 1, 2), np.linspace(-1, 1, 2))

    # calculate corresponding z
    z = (-normal[0] * xx - normal[1] * yy - d) * 1. / normal[2]

    # plot the surface
    plt3d.plot_surface(xx, yy, z, alpha = 0.2)

    for i, point in enumerate([hillas_planes[tel_id].a, hillas_planes[tel_id].b, hillas_planes[tel_id].c, hillas_planes[tel_id].norm]):
        color_choosen = colors[i]
        x_pos = point[0]/(np.linalg.norm(point))
        y_pos = point[1]/(np.linalg.norm(point))
        z_pos = point[2]/(np.linalg.norm(point))
        plt3d.quiver(0, 0, 0, point[0], point[1], point[2], normalize=True, color = color_choosen, arrow_length_ratio=0.2)
        plt3d.scatter(xs = x_pos, ys = y_pos, zs = z_pos, s = 50, c = color_choosen)
        plt3d.text(x_pos, y_pos, z_pos, '%s' % names[i]+" "+ str(tel_id), size=20, zorder=1, color='k')

plt3d.scatter(0, 0, 0, marker="P", s=100, c = "magenta")

u = np.linspace(0, 2 * np.pi, 30)
v = np.linspace(0, np.pi, 30)
x = 1 * np.outer(np.cos(u), np.sin(v))
y = 1 * np.outer(np.sin(u), np.sin(v))
z = 1 * np.outer(np.ones(np.size(u)), np.cos(v))

# Plot the unit sphere
plt3d.plot_surface(x, y, z, color='b', alpha = 0.1)

plt3d.set_xlim((-1, 1))
plt3d.set_ylim((-1, 1))
plt3d.set_zlim((-1, 1))

plt.show()
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Challenge 05 | SONDA
# ## Objective
# The challenge consists of implementing a Machine Learning algorithm for binary classification, able to identify whether or not a customer will be lost (churn).
from PIL import Image
image = Image.open("Ranking11.png")
image

# ## Importing the libraries

# + colab={"base_uri": "https://localhost:8080/"} id="BjwKH552p4x9" outputId="e776a6fa-fc72-4a86-ba28-1401b88062a5"
# !pip install dython

# + id="4sA5TwtEm9nD"
from dython import nominal
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from imblearn.over_sampling import SMOTE, SMOTENC
from imblearn.pipeline import Pipeline as imbpipeline
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.compose import ColumnTransformer, make_column_selector as selector
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, f1_score, confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split, cross_validate, cross_val_score, StratifiedKFold, KFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler, MinMaxScaler, LabelEncoder
# -

# ## Reading and analyzing the dataset

# + id="KuZJIqA3nAx3"
df = pd.read_csv('dataset.csv')

# + colab={"base_uri": "https://localhost:8080/"} id="Rpd3dmYvVAHk" outputId="1b671029-3a60-4cde-b50a-88c5d28a0f3f"
df['CHURN'].unique()

# + id="dOo2mSeFnkOL"
dfNa = df.dropna()

# + id="sEEu-gn-tN-e"
dfteste = df.fillna(df.mode().iloc[0])

# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="O-InXLl0c8Xs" outputId="80d6d047-4075-4b59-ebd8-a5def28f17f5"
nominal.associations(dfNa, figsize=(20,10), mark_columns=True)

# + colab={"base_uri": "https://localhost:8080/", "height": 719} id="9m2dpUqtn7ob" outputId="c502a63c-b610-4509-bf78-9327ed118f66"
nominal.associations(dfteste, figsize=(20,10), mark_columns=True);

# + [markdown] id="jdC3cNRp9nBZ"
# I analyzed the association matrix and removed the columns with the weakest relationships overall and with the CHURN column

# + id="XgXOKzw2QZYN"
class DropColumns(BaseEstimator, TransformerMixin):
    def __init__(self, columns):
        self.columns = columns

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        data = X.copy()
        return data.drop(labels=self.columns, axis='columns')


# + id="SZdw7xIumDb2"
class DummyEstimator(BaseEstimator):
    def fit(self): pass

    def score(self): pass
# -

# ## Splitting the variables into training and test sets

# + id="ABv9378cQlmv"
X = df.drop('CHURN', axis=1, inplace=False)
y = df.CHURN

# + id="l5UuODhfQtvn"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state = 1)

# + id="cNsOGEz9dWeT"
unwanted_columns = ['ID', 'GENDER', 'SENIORCITIZEN', 'PARTNER', 'DEPENDENTS', 'PHONESERVICE', 'MULTIPLELINES']
Num_features = ['TENURE', 'MONTHLYCHARGES']
Cat_features = list((set(X.columns) - set(unwanted_columns) - set(Num_features)))

numeric_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='most_frequent', missing_values=np.nan)),
    ('scaler', StandardScaler())])

categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='most_frequent', missing_values=np.nan)),
    ('onehot', OneHotEncoder(drop='first', sparse=False, handle_unknown='ignore'))])

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, Num_features),
        ('cat', categorical_transformer, Cat_features)
    ]
)

dropCol = DropColumns(unwanted_columns)
# -

# ## Testing and validating the model

# + colab={"base_uri": "https://localhost:8080/"} id="r5ySvWnR_eQF" outputId="bcec7898-951a-43b3-a0b8-30fe52fdfcdc"
pip_LR = imbpipeline(steps=[
    ('drop_columns', dropCol),
    ('pre', preprocessor),
    ('smote',
     SMOTE(random_state=42)),
    ('clf', LogisticRegression(max_iter=30000))
])

pip_LR.fit(X_train, y_train)

# + colab={"base_uri": "https://localhost:8080/"} id="lcpmWYngFumU" outputId="11c78cf2-0261-447d-c5ac-bd4dff4a0a68"
y_pred_test = pip_LR.predict(X_test)
print(classification_report(y_test, y_pred_test))

# + id="gXt4G5I4GKoS"
stratified_kfold = StratifiedKFold(n_splits=5)
kf = KFold(n_splits=5, random_state=1, shuffle=True)

param_grid = {'clf__C': [0.00001, 0.0001, 0.001, 0.01, 0.1, 1, 10, 100, 1000]}

gridsearchLR = GridSearchCV(estimator=pip_LR, param_grid=param_grid, scoring='f1_micro', cv=kf, n_jobs=-1)

best_modelLR = gridsearchLR.fit(X_train, y_train)

# + colab={"base_uri": "https://localhost:8080/"} id="FbQ3ATodNbgb" outputId="dedbb05c-6230-49c9-a1a4-84fa85a0ee21"
y_pred_testGRID = best_modelLR.best_estimator_.predict(X_test)
print(classification_report(y_test, y_pred_testGRID))  # how did this get worse????

# + colab={"base_uri": "https://localhost:8080/"} id="rq7Tu-DgkBG9" outputId="55d46cc3-3495-41a2-9ea2-beb9c4e880a9"
confusion_matrix(y_test, y_pred_test)

# + colab={"base_uri": "https://localhost:8080/"} id="IdW-wgCwF3Ul" outputId="7354ec70-89da-4d83-bf64-1e5a7ae5e11a"
pip_LR.score(X_train, y_train)
# -

# ## Computing the answers

Alvo = pd.read_csv('ANSWERS.csv')
XAns = Alvo.drop('CHURN', axis=1, inplace=False)
y_pred_Ans = pip_LR.predict(XAns)  # the pipeline takes care of this warning
Alvo['CHURN'] = y_pred_Ans
Alvo.to_csv('ANSWERS.csv', index=False)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] id="nX_W0YPqzyRc" colab_type="text"
# # Analysis of App Profiles for the App Store and Google Play Markets
#
# My aim in this project is to find mobile app profiles that are profitable for the App Store and Google Play markets. My job is to enable mobile app developers to make data-driven decisions with respect to the kind of apps they build.
#
# I will suppose that the main source of revenue consists of in-app ads. This means that the revenue for any given app is mostly influenced by the number of users that use the app.
#
# My goal for this project is to analyze data to help developers understand what kinds of apps are likely to attract more users.

# + [markdown] id="nj3qhQSnzyRg" colab_type="text"
# ## Opening and Exploring the Data
#
# As of January 2020, there were approximately 2 million iOS apps available on the App Store, and 2.1 million Android apps on Google Play.
#
# Collecting data for over four million apps requires a significant amount of time and money, so we'll try to analyze a sample of data instead. To avoid spending resources on collecting new data ourselves, we should first try to see whether we can find any relevant existing data at no cost. Luckily, there are two data sets that seem suitable for our purpose:
#
# - [A data set](https://www.kaggle.com/lava18/google-play-store-apps) containing data about approximately ten thousand Android apps from Google Play. You can download the data set directly from [this link](https://dq-content.s3.amazonaws.com/350/googleplaystore.csv).
# - [A data set](https://www.kaggle.com/ramamet4/app-store-apple-data-set-10k-apps) containing data about approximately seven thousand iOS apps from the App Store. You can download the data set directly from [this link](https://dq-content.s3.amazonaws.com/350/AppleStore.csv).
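#
# If the two CSV files are not already in the working directory, the cell below is a minimal sketch for fetching them from the direct links above; the target file names match the ones opened in the next cell.

# +
from urllib.request import urlretrieve

# download both datasets next to the notebook (skip this cell if the files already exist)
urlretrieve("https://dq-content.s3.amazonaws.com/350/googleplaystore.csv", "googleplaystore.csv")
urlretrieve("https://dq-content.s3.amazonaws.com/350/AppleStore.csv", "AppleStore.csv")
# -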
# # Let's start by opening the two data sets and then continue with exploring the data. # + id="ZNzXBSALzyRl" colab_type="code" colab={} from csv import reader # Google Play dataset with open("googleplaystore.csv") as f: android_data = list(reader(f)) # Apple Store dataset with open("AppleStore.csv") as f: ios_data = list(reader(f)) android_cols = android_data[0] android_data = android_data[1:] ios_cols = ios_data[0] ios_data = ios_data[1:] # + [markdown] id="aVNUM6G2zyRr" colab_type="text" # To make it easier to explore the two data sets, I'll first write a function named `explore_data()` that I can use repeatedly to explore rows in a more readable way. I'll also add an option for our function to show the number of rows and columns for any data set. # + id="SiW2GuwezyRs" colab_type="code" colab={} def explore_data(dataset, start, end, rows_and_columns=False): dataset_slice = dataset[start:end] for row in dataset_slice: print(row) print('\n') # adds a new (empty) line after each row if rows_and_columns: print('Number of rows:', len(dataset)) print('Number of columns:', len(dataset[0])) # + [markdown] id="vTlm-FjczyRv" colab_type="text" # Now let's take a look at the first few rows of each dataset. # + id="FoMhUn5SzyRw" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 642} outputId="bde2d770-9092-4397-bb85-01a88a097fce" n_rows = 3 print("First {} rows of the".format(n_rows), "Google Play Store", "dataset\n") print(android_cols) print("\n") explore_data(android_data, 0, n_rows, True) print("\n") print("\n") print("First {} rows of the".format(n_rows), "Apple Store", "dataset\n") print(ios_cols) print("\n") explore_data(ios_data, 0, n_rows, True) # + [markdown] id="UE3vWssczyR0" colab_type="text" # The Google Play data set has 10841 apps and 13 columns. At a quick glance, the columns that might be useful for the purpose of our analysis are `'App'`, `'Category'`, `'Reviews'`, `'Installs'`, `'Type'`, `'Price'`, and `'Genres'`. # # We have 7197 iOS apps in this data set, and the columns that seem interesting are: `'track_name'`, `'currency'`, `'price'`, `'rating_count_tot'`, `'rating_count_ver'`, and `'prime_genre'`. Not all column names are self-explanatory in this case, but details about each column can be found in the data set [documentation](https://www.kaggle.com/ramamet4/app-store-apple-data-set-10k-apps/home). # + [markdown] id="XEkJ769CzyR6" colab_type="text" # ## Data Cleaning # # Now we need to process the data to make some analysis. # # ### Deleting Wrong Data # # I build a function called `clean_dataset()` to analyze row by row the data set and print an error when there is a missing element and eventually remove the row. # + id="FrEFsxP_zyR7" colab_type="code" colab={} def clean_dataset(data, clean=False): LEN_COL_DATA = len(data[0]) LEN_DATA = len(data) for idx, row in enumerate(data): len_row = len(row) if len_row != LEN_COL_DATA: print("Row number", len_row, "contains missing values\n") if clean: del data[idx] print("Removed bad row\n") if LEN_DATA == len(data): print("No bad rows!\n") # + [markdown] id="lt_Y-spPzyR-" colab_type="text" # Now I'm looking on the entire data sets to see if there are some missing values. 
# # # + id="egbI7oCjzyR_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 218} outputId="c556537c-d207-443c-ca41-1507969a742d" print("Google Play Store", "dataset\n") clean_dataset(android_data, True) print("\n") print("Apple Store", "dataset\n") clean_dataset(ios_data, True) # + [markdown] id="ogtsjo0kzySC" colab_type="text" # ### Removing Duplicate Entries # # Let's build a function called `duplicate_in_dataset()` to analyze row by row the data set and tell us if there are some duplicate rows. # + id="auCVio7ZzySD" colab_type="code" colab={} def duplicate_in_dataset(data, NAME_COL=0): duplicate_apps = [] unique_apps = [] for app in data: name = app[NAME_COL] if name in unique_apps: duplicate_apps.append(name) else: unique_apps.append(name) print("Number of duplicate apps:", len(duplicate_apps)) print("\n") print("Examples of duplicate apps:", duplicate_apps[:15]) # + [markdown] id="Wk_wRbcyzySI" colab_type="text" # For the *Google Play Store* the column that contains the name of the APP is *App* at index 0. # # For the *Apple Store* the column that contains the name of the APP is *track_name* at index 1. # + id="IsGRT84FzySK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 272} outputId="8a3e4202-d23a-46d3-f0ff-44fb940a48b4" print("Google Play Store", "dataset\n") duplicate_in_dataset(android_data, 0) print("\n") print("Apple Store", "dataset\n") duplicate_in_dataset(ios_data, 1) # + [markdown] id="t3T74HO0zySQ" colab_type="text" # We don't want to count certain apps more than once when we analyze data, so we need to remove the duplicate entries and keep only one entry per app. One thing we could do is remove the duplicate rows randomly, but we could probably find a better way. # # The main difference happens on the number of reviews. The different numbers show that the data was collected at different times. We can use this to build a criterion for keeping rows. We won't remove rows randomly, but rather we'll keep the rows that have the highest number of reviews because the higher the number of reviews, the more reliable the ratings. # # To do that, we will: # # - Create a dictionary where each key is a unique app name, and the value is the highest number of reviews of that app # - Use the dictionary to create a new data set, which will have only one entry per app (and we only select the apps with the highest number of reviews) # # Let's build a function `remove_duplicates()` to do this. # + id="hH8yXBRpzySS" colab_type="code" colab={} def remove_duplicates(data, NAME_COL, RATING_COL): # create an empty dict to store unique APPs rows dict_rows = {} for row in data: # save name and rating for next comparison name = row[NAME_COL] rating = float(row[RATING_COL]) # if we don't have already a row for that app save it if name not in dict_rows: dict_rows[name] = row # else compare the rating stored to check if its greater elif rating > float(dict_rows[name][RATING_COL]): dict_rows[name] = row # finally merge all the rows stored as a new dataset data_new = list(dict_rows.values()) return data_new # + [markdown] id="nFzR0RTwzySV" colab_type="text" # For the *Google Play Store* the column that contains the number of ratings of the APP is *Reviews* at index 3. # # For the *Apple Store* the column that contains the number of ratings of the APP is *rating_count_tot* at index 5. 
# + id="bDYqAIqNzySW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 185} outputId="d11e1bfd-a054-4544-e10e-77bdb1b0aa8c" print("Google Play Store", "dataset\n") android_clean = remove_duplicates(android_data, 0, 3) print("Removed:", len(android_data) - len(android_clean),"rows\n") print("\n") print("Apple Store", "dataset\n") ios_clean = remove_duplicates(ios_data, 1, 5) print("Removed:", len(ios_data) - len(ios_clean),"rows\n") # + [markdown] id="ioLk2sUSzySa" colab_type="text" # ### Removing Non-English Apps # # If you explore the data sets enough, you'll notice the names of some of the apps suggest they are not directed toward an English-speaking audience. # # We're not interested in keeping these kind of apps, so we'll remove them. One way to go about this is to remove each app whose name contains a symbol that is not commonly used in English text — English text usually includes letters from the English alphabet, numbers composed of digits from 0 to 9, punctuation marks (., !, ?, ;, etc.), and other symbols (+, *, /, etc.). # # All these characters that are specific to English texts are encoded using the ASCII standard. Each ASCII character has a corresponding number between 0 and 127 associated with it, and we can take advantage of that to build a function `normal_string()` that checks an app name and tells us whether it contains non-ASCII characters more than a fixed thereshold. # # We built this function below, and we use the built-in `ord()` function to find out the corresponding encoding number of each character. # + id="jxPW_xoVzySb" colab_type="code" colab={} def normal_string(string, LIMIT=3): count = 0 for a in string: if ord(a) > 127: # it is a non-English character count += 1 if count > LIMIT: return False # if it has finished the for loop there aren't non-English characters return True # + [markdown] id="I-II4VNuzySh" colab_type="text" # Check the output of the function for some examples: # + id="L2GadvB6zySi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="ac32dbe4-fc9e-4201-db36-745465b01f16" print(normal_string("Instagram")) print(normal_string("爱奇艺PPS -《欢乐颂2》电视剧热播")) print(normal_string("Docs To Go™ Free Office Suite")) print(normal_string("Instachat 😜")) # + [markdown] id="PdNcNamgzySl" colab_type="text" # Let's build a function called `english_dataset()` to create a new dataset containing only English apps using the `normal_string()` function. # + id="9C_eTjFezySm" colab_type="code" colab={} def english_dataset(data, NAME_COL=0): data_new = [] for row in data: name = row[NAME_COL] if normal_string(name): data_new.append(row) return data_new # + [markdown] id="m9v22wUqzySp" colab_type="text" # The function is still not perfect, and very few non-English apps might get past our filter, but this seems good enough at this point in our analysis — we shouldn't spend too much time on optimization at this point. # # Use the above new function `english_dataset()` to create new datasets. 
# + id="eICu3tPYzySq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 185} outputId="fe14dac4-743d-4b12-a978-ce57cd99ec6b" print("Google Play Store", "dataset\n") android_clean_english = english_dataset(android_clean, 0) print("Removed:", len(android_clean) - len(android_clean_english),"non-English apps\n") print("\n") print("Apple Store", "dataset\n") ios_clean_english = english_dataset(ios_clean, 1) print("Removed:", len(ios_clean) - len(ios_clean_english),"non-English apps\n") # + [markdown] id="ubJDzVmYzySt" colab_type="text" # ### Isolating the Free Apps # # As we mentioned in the introduction, we only study apps that are free to download and install, and our main source of revenue consists of in-app ads. Our data sets contain both free and non-free apps, and we'll need to isolate only the free apps for our analysis. Below, we isolate the free apps for both our data sets. # # Let's build a function called `free_dataset()` to create a new dataset containing only free apps. # + id="Q4G3kvKDzySv" colab_type="code" colab={} def free_dataset(data, PRICE_COL): data_new = [] for row in data: # check if the APP is free if row[PRICE_COL] == "0.0" or row[PRICE_COL] == "0": data_new.append(row) return data_new # + [markdown] id="MEOLV7glzySy" colab_type="text" # For the *Google Play Store* the column that contains the price of the APP is *Price* at index 7. # # For the *Apple Store* the column that contains the price of the APP is *price* at index 4. # + id="j2KnyyQLzySz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 185} outputId="6c9b128f-2a6a-4522-fe0c-723331963450" print("Google Play Store", "dataset\n") android_final = free_dataset(android_clean_english, 7) print("Removed:", len(android_clean_english) - len(android_final),"paid apps\n") print("\n") print("Apple Store", "dataset\n") ios_final = free_dataset(ios_clean_english, 4) print("Removed:", len(ios_clean_english) - len(ios_final),"paid apps\n") # + [markdown] id="4Uo4zeGdzyS2" colab_type="text" # Final datasets: # + id="gz5H5Hv7zyS2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 507} outputId="0fa7acf6-690e-464a-f399-52fb6a2b4985" print("First {} rows of the".format(n_rows), "Google Play Store", "dataset\n") explore_data(android_final, 0, n_rows, True) print("\n") print("First {} rows of the".format(n_rows), "Apple Store", "dataset\n") explore_data(ios_final, 0, n_rows, True) # + [markdown] id="sOPAojtFzyS4" colab_type="text" # ## Data Analysis # # + [markdown] id="aaAqbsA2Dtk7" colab_type="text" # ### Most Common Apps by Genre # # As I mentioned in the introduction, my aim is to determine the kinds of apps that are likely to attract more users because our revenue is highly influenced by the number of people using the apps. # # To minimize risks and overhead, the validation strategy for an app idea is comprised of three steps: # # 1. Build a minimal Android version of the app, and add it to Google Play. # 2. If the app has a good response from users, develop it further. # 3. If the app is profitable after six months, also build an iOS version of the app and add it to the App Store. # # Because the end goal is to add the app on both the App Store and Google Play, we need to find app profiles that are successful on both markets. For instance, a profile that might work well for both markets might be a productivity app that makes use of gamification. # # Let's begin the analysis by getting a sense of the most common genres for each market. 
For this, we'll build a frequency table for the `prime_genre` column of the App Store data set, and the `Genres` and `Category` columns of the Google Play data set. # # I'll build two functions we can use to analyze the frequency tables: # # - One function to generate frequency tables that show percentages # - Another function that we can use to display the percentages in a descending order # + id="zTEHShVEzyS6" colab_type="code" colab={} def freq_table(dataset, index): table = {} for row in dataset: val = row[index] if val in table: table[val] += 1 else: table[val] = 1 return table def display_table(dataset, index): table = freq_table(dataset, index) table_display = [] for key in table: key_val_as_tuple = (table[key], key) table_display.append(key_val_as_tuple) table_sorted = sorted(table_display, reverse = True) for entry in table_sorted: print(entry[1], ':', entry[0]) # + [markdown] id="QqaJpm9qDC9Y" colab_type="text" # We start by examining the frequency table for the `prime_genre` column of the App Store data set. # + id="jc1EmHq-zyS9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 403} outputId="1ca7be5b-bdd0-480a-d1d8-1b546389e6b7" display_table(ios_final, 11) # + [markdown] id="XpGySRstDIJx" colab_type="text" # We can see that among the free English apps, more than a half (58.16%) are games. Entertainment apps are close to 8%, followed by photo and video apps, which are close to 5%. Only 3.66% of the apps are designed for education, followed by social networking apps which amount for 3.29% of the apps in our data set. # # The general impression is that App Store (at least the part containing free English apps) is dominated by apps that are designed for fun (games, entertainment, photo and video, social networking, sports, music, etc.), while apps with practical purposes (education, shopping, utilities, productivity, lifestyle, etc.) are more rare. However, the fact that fun apps are the most numerous doesn't also imply that they also have the greatest number of users — the demand might not be the same as the offer. # # Let's continue by examining the `Genres` and `Category` columns of the Google Play data set (two columns which seem to be related). # + id="V-pUsqpxzyTA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 571} outputId="e93ce24c-6acd-46a4-f7f3-844937f8b252" display_table(android_final, 1) # Category # + [markdown] id="fCjmPqbkDQZD" colab_type="text" # The landscape seems significantly different on Google Play: there are not that many apps designed for fun, and it seems that a good number of apps are designed for practical purposes (family, tools, business, lifestyle, productivity, etc.). However, if we investigate this further, we can see that the family category (which accounts for almost 19% of the apps) means mostly games for kids. # # # Even so, practical apps seem to have a better representation on Google Play compared to App Store. This picture is also confirmed by the frequency table we see for the `Genres` column: # + id="zXyPnGArzyTC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="4f67c657-403a-45de-a037-6a45faf9c68c" display_table(android_final, 9) # Genres # + [markdown] id="kqdqkNr9DbfE" colab_type="text" # The difference between the `Genres` and the `Category` columns is not crystal clear, but one thing we can notice is that the `Genres` column is much more granular (it has more categories). 
We're only looking for the bigger picture at the moment, so we'll only work with the `Category` column moving forward. # # Up to this point, we found that the App Store is dominated by apps designed for fun, while Google Play shows a more balanced landscape of both practical and for-fun apps. Now we'd like to get an idea about the kind of apps that have most users. # + [markdown] id="4itFZkw4DgM_" colab_type="text" # ### Most Popular Apps by Genre on the App Store # # One way to find out what genres are the most popular (have the most users) is to calculate the average number of installs for each app genre. For the Google Play data set, we can find this information in the `Installs` column, but for the App Store data set this information is missing. As a workaround, we'll take the total number of user ratings as a proxy, which we can find in the `rating_count_tot` app. # # Below, we calculate the average number of user ratings per app genre on the App Store: # + id="kGAYQ7O3zyTF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 403} outputId="32776e0f-ff26-4bef-d471-0455bee5735f" ios_genre_freq_table = freq_table(ios_final, 11) for genre in ios_genre_freq_table: total = 0 len_genre = 0 for row in ios_final: if row[11] == genre: total += float(row[5]) len_genre += 1 print(genre, ":", round(total/len_genre)) # + [markdown] id="VVlrDyBpD-H1" colab_type="text" # On average, navigation apps have the highest number of user reviews, but this figure is heavily influenced by Waze and Google Maps, which have close to half a million user reviews together: # + colab_type="code" id="rdNDNVbVdw4X" outputId="4d9f8744-a164-4ac0-a56c-ba8eb5ecf6f9" colab={"base_uri": "https://localhost:8080/", "height": 118} for app in ios_final: if app[-5] == 'Navigation': print(app[1], ':', app[5]) # print name and number of ratings # + [markdown] colab_type="text" id="55kQqN_qdw4Z" # The same pattern applies to social networking apps, where the average number is heavily influenced by a few giants like Facebook, Pinterest, Skype, etc. Same applies to music apps, where a few big players like Pandora, Spotify, and Shazam heavily influence the average number. # # Our aim is to find popular genres, but navigation, social networking or music apps might seem more popular than they really are. The average number of ratings seem to be skewed by very few apps which have hundreds of thousands of user ratings, while the other apps may struggle to get past the 10,000 threshold. We could get a better picture by removing these extremely popular apps for each genre and then rework the averages, but we'll leave this level of detail for later. # # Reference apps have 74,942 user ratings on average, but it's actually the Bible and Dictionary.com which skew up the average rating: # + colab_type="code" id="T2H7TxY6dw4a" outputId="7c81bbba-365c-47bc-e8cf-64beec498733" colab={"base_uri": "https://localhost:8080/", "height": 319} for app in ios_final: if app[-5] == 'Reference': print(app[1], ':', app[5]) # + [markdown] colab_type="text" id="RClnbNGSdw4d" # However, this niche seems to show some potential. One thing we could do is take another popular book and turn it into an app where we could add different features besides the raw version of the book. This might include daily quotes from the book, an audio version of the book, quizzes about the book, etc. On top of that, we could also embed a dictionary within the app, so users don't need to exit our app to look up words in an external app. 
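#
# Before developing that book-plus-dictionary app idea further, the cell below is a small added sketch that reworks the Reference average without the extreme titles, as suggested earlier; the 100,000-rating cutoff is just an assumed threshold, and the column indices are the same ones used in the genre loop above.

# +
reference_under_100k = []
for app in ios_final:
    # app[-5] is prime_genre and app[5] is rating_count_tot, as in the cells above
    if app[-5] == 'Reference' and float(app[5]) < 100000:
        reference_under_100k.append(float(app[5]))
sum(reference_under_100k) / len(reference_under_100k)
# -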
# # This idea seems to fit well with the fact that the App Store is dominated by for-fun apps. This suggests the market might be a bit saturated with for-fun apps, which means a practical app might have more of a chance to stand out among the huge number of apps on the App Store. # # Other genres that seem popular include weather, book, food and drink, or finance. The book genre seem to overlap a bit with the app idea we described above, but the other genres don't seem too interesting to us: # # - Weather apps — people generally don't spend too much time in-app, and the chances of making profit from in-app adds are low. Also, getting reliable live weather data may require us to connect our apps to non-free APIs. # # - Food and drink — examples here include Starbucks, Dunkin' Donuts, McDonald's, etc. So making a popular food and drink app requires actual cooking and a delivery service, which is outside the scope of our company. # # - Finance apps — these apps involve banking, paying bills, money transfer, etc. Building a finance app requires domain knowledge, and we don't want to hire a finance expert just to build an app. # # Now let's analyze the Google Play market a bit. # + [markdown] id="fkw_5EcKEM2p" colab_type="text" # ### Most Popular Apps by Genre on Google Play # # For the Google Play market, we actually have data about the number of installs, so we should be able to get a clearer picture about genre popularity. However, the install numbers don't seem precise enough — we can see that most values are open-ended (100+, 1,000+, 5,000+, etc.): # + colab_type="code" id="k9XN-3Oxdw4e" outputId="4c059bef-c6a1-4c8b-8be2-a65f8fb32676" colab={"base_uri": "https://localhost:8080/", "height": 370} display_table(android_final, 5) # the Installs columns # + [markdown] colab_type="text" id="MS8oEkvcdw4g" # One problem with this data is that is not precise. For instance, we don't know whether an app with 100,000+ installs has 100,000 installs, 200,000, or 350,000. However, we don't need very precise data for our purposes — we only want to get an idea which app genres attract the most users, and we don't need perfect precision with respect to the number of users. # # We're going to leave the numbers as they are, which means that we'll consider that an app with 100,000+ installs has 100,000 installs, and an app with 1,000,000+ installs has 1,000,000 installs, and so on. # # To perform computations, however, we'll need to convert each install number to `float` — this means that we need to remove the commas and the plus characters, otherwise the conversion will fail and raise an error. We'll do this directly in the loop below, where we also compute the average number of installs for each genre (category). # + colab_type="code" outputId="68579b74-d620-445b-eccb-8c66726673a7" id="RUs1BELB42qM" colab={"base_uri": "https://localhost:8080/", "height": 571} android_category_freq_table = freq_table(android_final, 1) for category in android_category_freq_table: total = 0 len_category = 0 for row in android_final: if row[1] == category: total += float(row[5].replace(",", "").replace("+", "")) len_category += 1 print(category, ":", round(total/len_category)) # + [markdown] colab_type="text" id="jmJZ_ZNAdw4m" # On average, communication apps have the most installs: 38,456,119. 
This number is heavily skewed up by a few apps that have over one billion installs (WhatsApp, Facebook Messenger, Skype, Google Chrome, Gmail, and Hangouts), and a few others with over 100 and 500 million installs: # + colab_type="code" id="yDgqLB3xdw4p" outputId="8b5839ac-aac6-4396-85ff-4a816a258717" colab={"base_uri": "https://localhost:8080/", "height": 470} for app in android_final: if app[1] == 'COMMUNICATION' and (app[5] == '1,000,000,000+' or app[5] == '500,000,000+' or app[5] == '100,000,000+'): print(app[0], ':', app[5]) # + [markdown] colab_type="text" id="GkB0Ewlsdw47" # If we removed all the communication apps that have over 100 million installs, the average would be reduced roughly ten times: # + colab_type="code" id="GVS8X24bdw48" outputId="0eaeadd0-c078-4f0b-fba5-a21dc4fb7d34" colab={"base_uri": "https://localhost:8080/", "height": 34} under_100_m = [] for app in android_final: n_installs = app[5] n_installs = n_installs.replace(',', '') n_installs = n_installs.replace('+', '') if (app[1] == 'COMMUNICATION') and (float(n_installs) < 100000000): under_100_m.append(float(n_installs)) sum(under_100_m) / len(under_100_m) # + [markdown] colab_type="text" id="VGf0n6cydw5A" # We see the same pattern for the video players category, which is the runner-up with 24,727,872 installs. The market is dominated by apps like Youtube, Google Play Movies & TV, or MX Player. The pattern is repeated for social apps (where we have giants like Facebook, Instagram, Google+, etc.), photography apps (Google Photos and other popular photo editors), or productivity apps (Microsoft Word, Dropbox, Google Calendar, Evernote, etc.). # # Again, the main concern is that these app genres might seem more popular than they really are. Moreover, these niches seem to be dominated by a few giants who are hard to compete against. # # The game genre seems pretty popular, but previously we found out this part of the market seems a bit saturated, so we'd like to come up with a different app recommendation if possible. # # The books and reference genre looks fairly popular as well, with an average number of installs of 8,767,811. It's interesting to explore this in more depth, since we found this genre has some potential to work well on the App Store, and our aim is to recommend an app genre that shows potential for being profitable on both the App Store and Google Play. # # Let's take a look at some of the apps from this genre and their number of installs: # + colab_type="code" id="EsjrE2Nudw5B" outputId="bc245ebd-ea99-4aaa-9f67-4fc94050f24b" colab={"base_uri": "https://localhost:8080/", "height": 1000} for app in android_final: if app[1] == 'BOOKS_AND_REFERENCE': print(app[0], ':', app[5]) # + [markdown] colab_type="text" id="j4vZcL-Udw5G" # The book and reference genre includes a variety of apps: software for processing and reading ebooks, various collections of libraries, dictionaries, tutorials on programming or languages, etc. It seems there's still a small number of extremely popular apps that skew the average: # + colab_type="code" id="sPYrMGhKdw5H" outputId="95ec4ea1-8b98-422f-ec55-2dd7ad72cb20" colab={"base_uri": "https://localhost:8080/", "height": 101} for app in android_final: if app[1] == 'BOOKS_AND_REFERENCE' and (app[5] == '1,000,000,000+' or app[5] == '500,000,000+' or app[5] == '100,000,000+'): print(app[0], ':', app[5]) # + [markdown] colab_type="text" id="jGuGD7ODdw5K" # However, it looks like there are only a few very popular apps, so this market still shows potential. 
Let's try to get some app ideas based on the kind of apps that are somewhere in the middle in terms of popularity (between 1,000,000 and 100,000,000 downloads): # + colab_type="code" id="s9pL6QCddw5K" outputId="c6b8fa6c-42a1-410a-b87c-c0d93dd3f0a8" colab={"base_uri": "https://localhost:8080/", "height": 823} for app in android_final: if app[1] == 'BOOKS_AND_REFERENCE' and (app[5] == '1,000,000+' or app[5] == '5,000,000+' or app[5] == '10,000,000+' or app[5] == '50,000,000+'): print(app[0], ':', app[5]) # + [markdown] colab_type="text" id="MnArt0Mbdw5M" # This niche seems to be dominated by software for processing and reading ebooks, as well as various collections of libraries and dictionaries, so it's probably not a good idea to build similar apps since there'll be some significant competition. # # We also notice there are quite a few apps built around the book Quran, which suggests that building an app around a popular book can be profitable. It seems that taking a popular book (perhaps a more recent book) and turning it into an app could be profitable for both the Google Play and the App Store markets. # # However, it looks like the market is already full of libraries, so we need to add some special features besides the raw version of the book. This might include daily quotes from the book, an audio version of the book, quizzes on the book, a forum where people can discuss the book, etc. # + [markdown] id="VZcWb5Q7EvQp" colab_type="text" # ## Conclusions # # In this project, we analyzed data about the App Store and Google Play mobile apps with the goal of recommending an app profile that can be profitable for both markets. # # We concluded that taking a popular book (perhaps a more recent book) and turning it into an app could be profitable for both the Google Play and the App Store markets. The markets are already full of libraries, so we need to add some special features besides the raw version of the book. This might include daily quotes from the book, an audio version of the book, quizzes on the book, a forum where people can discuss the book, etc. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="crPVEDSN71sl" # Open In Colab # # + colab={"base_uri": "https://localhost:8080/"} id="akHUDpl5RRWk" cellView="form" outputId="ee99fdd9-0c33-43b4-9f57-af543e6a724a" #@title Download data and files. 
# !git clone https://github.com/Khamies/LSTM-Language-Generator.git import os os.chdir("LSTM-Language-Generator") # + id="bsX47UMWT0NT" import torch from data.ptb import PTB from model import LSTM_Language # + id="u6QyBIaJWGLI" # Settings torch.manual_seed(1000) device = "cuda" if torch.cuda.is_available() else "cpu" batch_size = 32 bptt = 60 lr = 0.001 embed_size = 300 hidden_size = 256 latent_size = 16 lstm_layer=1 # + [markdown] id="GcXo1SRDfIVm" # ## Load the data # + id="k2uVseOtT_4J" # Load the data train_data = PTB(data_dir="./data", split="train", create_data= False, max_sequence_length= bptt) test_data = PTB(data_dir="./data", split="test", create_data= False, max_sequence_length= bptt) valid_data = PTB(data_dir="./data", split="valid", create_data= False, max_sequence_length= bptt) # Batchify the data train_loader = torch.utils.data.DataLoader( dataset= train_data, batch_size= batch_size, shuffle= True) test_loader = torch.utils.data.DataLoader( dataset= test_data, batch_size= batch_size, shuffle= True) valid_loader = torch.utils.data.DataLoader( dataset= valid_data, batch_size= batch_size, shuffle= True) # + [markdown] id="mZ2jOq3se8-x" # ## Load the model # + colab={"base_uri": "https://localhost:8080/"} id="jw2_Fc_9Vhh5" outputId="1e0886bb-8969-4d43-efbd-082656d391cb" vocab_size = train_data.vocab_size model = LSTM_Language(vocab_size = vocab_size, embed_size = embed_size, hidden_size = hidden_size, latent_size = latent_size).to(device) checkpoint = torch.load("models/LSTM_lang.pt") model.load_state_dict(checkpoint["model"]) # + [markdown] id="7s4CNs1J4otu" # ##Sample Generation # + colab={"base_uri": "https://localhost:8080/"} id="PKhUHL_7Vg5X" cellView="code" outputId="f3b2e51f-3e87-4eb6-f26d-6cd1aa70a639" sos = "" sample = model.inference(10 , sos) print(sample) # + id="a_XvIj4l4QMi" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #Copyright 2020 , # #Licensed under the Apache License, Version 2.0 (the "License"); #you may not use this file except in compliance with the License. #You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # #Unless required by applicable law or agreed to in writing, software #distributed under the License is distributed on an "AS IS" BASIS, #WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. #See the License for the specific language governing permissions and #limitations under the License. import autogluon as ag from autogluon import TabularPrediction as task import pandas as pd import numpy as np import re import warnings warnings.filterwarnings("ignore") # + testdf = pd.read_csv('../../Benchmark-Labeled-Data/data_test.csv') test_metadata = pd.read_csv('../../RawCSV/Metadata/meta_data.csv') test_merged = pd.merge(testdf,test_metadata,on='Record_id') y_true = test_merged.y_act.values.tolist() # + # We first change the AutoGluon's default_learner's fit function to make sure that we just return the ML feature types rather than building their entire ML pipeline. 
# Just replace the fit function with this codeblock in project_env/lib/python3.7/site-packages/autogluon/utils/tabular/ml/learner/default_learner.py # project_env is my python virtual environment directory # def fit(self, X: DataFrame, X_test: DataFrame = None, scheduler_options=None, hyperparameter_tune=True, # feature_prune=False, holdout_frac=0.1, num_bagging_folds=0, num_bagging_sets=1, stack_ensemble_levels=0, # hyperparameters=None, time_limit=None, save_data=False, save_bagged_folds=True, verbosity=2): # """ Arguments: # X (DataFrame): training data # X_test (DataFrame): data used for hyperparameter tuning. Note: final model may be trained using this data as well as training data # hyperparameter_tune (bool): whether to tune hyperparameters or simply use default values # feature_prune (bool): whether to perform feature selection # scheduler_options (tuple: (search_strategy, dict): Options for scheduler # holdout_frac (float): Fraction of data to hold out for evaluating validation performance (ignored if X_test != None, ignored if kfolds != 0) # num_bagging_folds (int): kfolds used for bagging of models, roughly increases model training time by a factor of k (0: disabled) # num_bagging_sets (int): number of repeats of kfold bagging to perform (values must be >= 1), # total number of models trained during bagging = num_bagging_folds * num_bagging_sets # stack_ensemble_levels : (int) Number of stacking levels to use in ensemble stacking. Roughly increases model training time by factor of stack_levels+1 (0: disabled) # Default is 0 (disabled). Use values between 1-3 to improve model quality. # Ignored unless kfolds is also set >= 2 # hyperparameters (dict): keys = hyperparameters + search-spaces for each type of model we should train. # """ # if hyperparameters is None: # hyperparameters = {'NN': {}, 'GBM': {}} # # TODO: if provided, feature_types in X, X_test are ignored right now, need to pass to Learner/trainer and update this documentation. # if time_limit: # self.time_limit = time_limit # logger.log(20, f'Beginning AutoGluon training ... 
Time limit = {time_limit}s') # else: # self.time_limit = 1e7 # logger.log(20, 'Beginning AutoGluon training ...') # logger.log(20, f'AutoGluon will save models to {self.path}') # logger.log(20, f'Train Data Rows: {len(X)}') # logger.log(20, f'Train Data Columns: {len(X.columns)}') # if X_test is not None: # logger.log(20, f'Tuning Data Rows: {len(X_test)}') # logger.log(20, f'Tuning Data Columns: {len(X_test.columns)}') # time_preprocessing_start = time.time() # logger.log(20, 'Preprocessing data ...') # X_before=X # X, y, X_test, y_test, holdout_frac, num_bagging_folds = self.general_data_processing(X, X_test, holdout_frac, num_bagging_folds) # time_preprocessing_end = time.time() # self.time_fit_preprocessing = time_preprocessing_end - time_preprocessing_start # logger.log(20, f'\tData preprocessing and feature engineering runtime = {round(self.time_fit_preprocessing, 2)}s ...') # if time_limit: # time_limit_trainer = time_limit - self.time_fit_preprocessing # else: # time_limit_trainer = None # df = pd.DataFrame(columns=['column', 'feature_type']) # for col in X_before.columns: # if col=='label_target': continue # df_col = X_before[col] # numeric = False # if df_col.dtypes == 'int64' or df_col.dtypes == 'float64': numeric = True # text = False # X_unique = df_col.unique() # num_unique = len(X_unique) # num_rows = len(df_col) # unique_ratio = num_unique / num_rows # if unique_ratio <= 0.01: text = False # else: # import re # avg_words = np.mean([len(re.sub(' +', ' ', value).split(' ')) if isinstance(value, str) else 0 for value in X_unique]) # if avg_words < 3: text = False # else: text = True # unusable = False # col_val = df_col # num_unique = len(col_val.unique()) # if num_unique == 1: unusable = True # date = False # try: # df_col.apply(pd.to_datetime) # date = True # except: date = False # curpred=0 # if unusable: curpred = 7 # elif text: curpred = 3 # elif date: curpred = 2 # if date and numeric and (not unusable): curpred = 0 # if (not numeric) and curpred == 0: # check_unusable_for_categ=False # rank = df_col.value_counts().sort_values(ascending=True) # rank = rank[rank >= 3] # rank = rank.reset_index() # val_list = list(rank['index'].values) # if len(val_list) <= 1: check_unusable_for_categ = True # if check_unusable_for_categ: curpred = 7 # else: curpred = 1 # df.loc[len(df)] = [col, curpred] # df.to_csv("AutoGluon_predictions.csv",index=False, mode='a', header=False) # return self.feature_generator.features # + def Load_GLUON(dataDownstream): df = pd.DataFrame(columns=['column', 'feature_type']) df.to_csv('AutoGluon_predictions.csv',index=False) import time time.sleep(3) train= copy.deepcopy(dataDownstream) train['label_target'] = 1 train_data = task.Dataset(df=train) label_column = 'label_target' try: features = task.fit(train_data=train_data, label=label_column) except: AlwaysTrue = 1 agl_predictions = pd.read_csv('AutoGluon_predictions.csv') predictions = agl_predictions['feature_type'].values.tolist() curpred=0 if(len(predictions)==0): curpred=7 else: curpred=predictions[0] return curpred cntExceptions = 0 y_agl = [0]*1985 prv_csv_name,csv_name = '','' exception_indices = [] for index, row in test_merged.iterrows(): if index%100==0: print(index) col = row['Attribute_name'] prv_csv_name = csv_name csv_name = '../../RawCSV/RawCSVFiles/' + row['name'] if prv_csv_name != csv_name: df = pd.read_csv(csv_name,encoding='latin1') try: df_col = df[[col]] except KeyError: y_agl[index]=1 exception_indices.append(row) cntExceptions += 1 continue y_agl[index] = Load_GLUON(df_col) # + 
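# Score the AutoGluon-derived feature types against the hand-labelled ground truth.
# y_agl was filled by the patched fit above and only takes the codes 0 (numeric),
# 1 (categorical), 2 (datetime), 3 (sentence) and 7 (not-generalizable);
# dict_label_true below maps the ground-truth label strings onto the same integer codes.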
from sklearn.metrics import accuracy_score from sklearn.metrics import confusion_matrix dict_label_true = { 'numeric': 0, 'categorical': 1, 'datetime': 2, 'sentence': 3, 'url': 4, 'embedded-number': 5, 'list': 6, 'not-generalizable': 7, 'context-specific': 8 } y_true = [dict_label_true[str(i)] for i in y_true] print(accuracy_score(y_true, y_agl)) print(confusion_matrix(y_true, y_agl)) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import ROOT import ostap.fixes.fixes from ostap.core.core import cpp, Ostap from ostap.core.core import pwd, cwd, ROOTCWD from ostap.core.core import rootID, funcID, funID, fID, histoID, hID, dsID from ostap.core.core import VE from ostap.histos.histos import h1_axis, h2_axes, h3_axes from ostap.histos.graphs import makeGraph, hToGraph, hToGraph2, hToGraph3, lw_graph import ostap.trees.trees import ostap.trees.cuts import ostap.histos.param import ostap.histos.compare import ostap.io.root_file import ostap.math.models import ostap.fitting.roofit import ostap.fitting.models as Models # - canv = ROOT.TCanvas("canv","canv",900,450) rfile = ROOT.TFile("new.root","READ") ds = rfile["tree"] ds.draw("T/35310.:energy","T/35310.>4") print(ds.GetEntries()) canv.Draw() ds.Draw("T/energy","T/energy>33000 && T/energy<38000") canv.Draw() ds.Draw("(energy - T/35310.)/energy","T/energy>33000 && T/energy<38000") canv.Draw() ds.Draw("energy - T/35310.","T/energy>33000 && T/energy<38000") canv.Draw() h2 = ROOT.TH1F("h2",";T_{TPC}, MeV;entries",120,4.4,5.6) h2.SetLineColor(2) h2.SetLineWidth(2) h3 = ROOT.TH1F("h3",";T_{TPC}, MeV;entries",120,4.4,5.6) h3.SetLineColor(4) h3.SetLineWidth(2) ds.Draw("(T/35310.)>>h2","energy3<100") ds.Draw("(T/35310.)>>h3","energy3>100") h2.Draw("hist") h3.Draw("hist same") canv.Draw() ha = h2.asym(h3) ha.GetXaxis().SetRangeUser(4.925,5.025) ha.GetYaxis().SetTitle("Asymmetry") fa = ROOT.TF1("fa","(x-[0])*[1]",4.925,5.025) fa.SetParameter(0,4.97) ha.Fit(fa) ha.Draw() ROOT.gPad.SetGridx() ROOT.gPad.SetGridy() T23 = VE(fa.GetParameter(0),fa.GetParError(0)**2) print(T23) canv.Draw() h2t = ROOT.TH1F("h2t",";T_{true}, MeV;entries",120,4.4,5.6) h2t.SetLineColor(2) h2t.SetLineWidth(2) h3t = ROOT.TH1F("h3t",";T_{true}, MeV;entries",120,4.4,5.6) h3t.SetLineColor(4) h3t.SetLineWidth(2) ds.Draw("energy>>h2t","energy3<100") ds.Draw("energy>>h3t","energy3>100") h2t.Draw("hist") h3t.Draw("hist same") canv.Draw() hat = h2t.asym(h3t) hat.GetXaxis().SetRangeUser(4.925,5.025) hat.GetYaxis().SetTitle("Asymmetry") fat = ROOT.TF1("fa","(x-[0])*[1]",4.925,5.025) fat.SetParameter(0,4.97) hat.Fit(fat) hat.Draw() ROOT.gPad.SetGridx() ROOT.gPad.SetGridy() T23t = VE(fat.GetParameter(0),fat.GetParError(0)**2) print(T23t) canv.Draw() print("True T23 (7000 ev) = " + str(T23t) + "\t\t" + str(1e4*T23t.prec().value() ) ) print("TPC T23 (7000 ev) = " + str(T23 ) + "\t\t" + str(1e4*T23 .prec().value() ) ) print( " (True - TPC) : " + str( (T23.value()-T23t.value())*1000 ) + " keV" ) print( " Deviation : " + str((T23.value()-T23t.value())/T23.error()) + " sigma") hT = ROOT.TH1F("hT",";T_{True}, MeV;entries",40,4.980-0.04,4.980+0.04) ds.draw("energy>>hT","energy>(4.980-0.04) && energy<(4.980+0.04)") hT.blue() hT.Draw("e1") hT.Fit("pol1") canv.Draw() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 # language: python # name: python3 # --- # ## The most Trending programming languages used in egypt last 3 years !!! # We Will Go Throudg data analysis using Stackoverflow’s 2017,2018 and 2019 Annual Developer Survey. # Markdown Monster icon # ### Introduction # + import numpy as np import pandas as pd from collections import Counter # to visualise al the columns in the dataframe pd.pandas.set_option('display.max_columns', None) # to make plots import matplotlib.pyplot as plt # %matplotlib inline # to change plot style # sns.set(style="darkgrid") # to ignore warnings import warnings warnings.filterwarnings("ignore") # %matplotlib inline # - df2019=pd.read_csv("2019/survey_results_public.csv") df2018=pd.read_csv("2018/survey_results_public.csv") df2017=pd.read_csv("2017/survey_results_public.csv") # ### UnderstandingData # let's have a quick look on Data df2018.info() df2017.head() # ### 1-What are The kinds of developers? def display_bar_chart(df, column, title): ''' Displays a bar chart with a title Returns:None ''' status_vals = df[column].value_counts() dfp=(status_vals[:10]/df.shape[0]) dfp.sort_values().plot(kind="barh"); plt.title(title); display_bar_chart(df2017, "Professional", "kinds of developers") # ### 2-what is egyptien developers ratio all over the wold? dfco = df2017.query("Country=='Egypt'") ratio=dfco.shape[0]/df2017.shape[0]*100 print("egyption developers",ratio,"%") # ### 3-what is the ratio of males and females devolpers in egypt? d1=df2017.query("Country=='Egypt'") d1=d1["Gender"].value_counts() fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.axis('equal') explode = (0.1, 0.1, 0) d1.plot(kind="pie", shadow=True,explode=explode,autopct='%1.2f%%') # ### 4-What programming languages are most used to work in egypt last 3 years? # ### 5-which programming languages are most required in egypt last 3 years? def get_countery_data(df,country_col,country_name,columns): ''' grouping by countery ''' df_copy =df for column in columns: df_copy=df_copy[df_copy[country_col]==country_name].dropna(subset=[column]) return df_copy egypt2017=get_countery_data(df2017, 'Country', 'Egypt', ['HaveWorkedLanguage', 'WantWorkLanguage']) egypt2018=get_countery_data(df2018, 'Country', 'Egypt', ['LanguageWorkedWith', 'LanguageDesireNextYear']) egypt2019=get_countery_data(df2019, 'Country', 'Egypt', ['LanguageWorkedWith', 'LanguageDesireNextYear']) def split_column(df, column): ''' Split column by ;, returns a splited series. ''' df_copy = df column_series = df_copy[column].apply(lambda x: x.split(';')) return column_series # + # Splitting the Data Frame by column into a Series. worked_lang_2017 = split_column(egypt2017, 'HaveWorkedLanguage') wanted_lang_2017 = split_column(egypt2017, 'WantWorkLanguage') worked_lang_2018 = split_column(egypt2018, 'LanguageWorkedWith') wanted_lang_2018 = split_column(egypt2018, 'LanguageDesireNextYear') worked_lang_2019 = split_column(egypt2019, 'LanguageWorkedWith') wanted_lang_2019 = split_column(egypt2019, 'LanguageDesireNextYear') # + def disarray(array_list): ''' Flat a nested list, returns a flat list. ''' objects = [] for row in array_list: for obj in row: objects.append(obj.strip()) return objects # Flatting nested list objects. 
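# e.g. disarray([['C#', ' Java'], ['Python']]) returns ['C#', 'Java', 'Python'].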
list_worked_languages_2017 = disarray(worked_lang_2017) list_wanted_languages_2017 = disarray(wanted_lang_2017) list_worked_languages_2018 = disarray(worked_lang_2018) list_wanted_languages_2018 = disarray(wanted_lang_2018) list_worked_languages_2019 = disarray(worked_lang_2019) list_wanted_languages_2019 = disarray(wanted_lang_2019) # + def group_list(data_list, year): ''' Group by count to a list, returns a result dict. ''' grouped_list = dict(Counter(data_list)) grouped_dict = [{'Programming Language':key, 'Count': value, 'Year': year} for key, value in grouped_list.items()] return grouped_dict # Groping a list and creating a dict. dict_worked_languages_2017 = group_list(list_worked_languages_2017, '2017') dict_wanted_languages_2017 = group_list(list_wanted_languages_2017, '2017') dict_worked_languages_2018 = group_list(list_worked_languages_2018, '2018') dict_wanted_languages_2018 = group_list(list_wanted_languages_2018, '2018') dict_worked_languages_2019 = group_list(list_worked_languages_2019, '2019') dict_wanted_languages_2019 = group_list(list_wanted_languages_2019, '2019') # - dict_worked_languages_2019 df1_1=pd.DataFrame(dict_worked_languages_2017).sort_values(by=['Count'], ascending=False).head(10) df1_2=pd.DataFrame(dict_wanted_languages_2017).sort_values(by=['Count'], ascending=False).head(10) df2_1=pd.DataFrame(dict_worked_languages_2018).sort_values(by=['Count'], ascending=False).head(10) df2_2=pd.DataFrame(dict_wanted_languages_2018).sort_values(by=['Count'], ascending=False).head(10) df3_1=pd.DataFrame(dict_worked_languages_2019).sort_values(by=['Count'], ascending=False).head(10) df3_2=pd.DataFrame(dict_wanted_languages_2019).sort_values(by=['Count'], ascending=False).head(10) df3_1 ax1=df1_1.plot.bar(x="Programming Language",y="Count",edgecolor="orange") plt.title("top 10 programming languages worked in egypt 2017"); ax1=df1_2.plot.bar(x="Programming Language",y="Count",color="orange",edgecolor="blue") plt.title("top 10 programming languages wanted in egypt 2017"); ax1=df2_1.plot.bar(x="Programming Language",y="Count",edgecolor="orange") plt.title("top 10 programming languages worked in egypt 2018"); ax1=df2_2.plot.bar(x="Programming Language",y="Count",color="orange",edgecolor="blue") plt.title("top 10 programming languages wanted in egypt 2018"); ax1=df3_1.plot.bar(x="Programming Language",y="Count",edgecolor="orange") plt.title("top 10 programming languages worked in egypt 2019"); ax1=df3_2.plot.bar(x="Programming Language",y="Count",color="orange",edgecolor="blue") plt.title("top 10 programming languages wanted in egypt 2019"); # ### Evaluate the Results # We then looked at the most used programming languages vs wanted in 2017,2018 and 2019 in Egypt and it’s clear that Younger programming languages like Python have been well-deserved to be learned, but the oldest ones still have their value and are being much demanded. # ### References # # Stackoverflow Developer Survey Data: https://insights.stackoverflow.com/survey
    # Medium link: https://medium.com/@himamohamed9688/what-are-the-most-trending-programming-languages-in-egypt-in-the-last-3-years-52789bc8d236 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # 说明: # 给定字符串s,返回给定字符串可拆分为的唯一子字符串的最大数量。 # 您可以将字符串s拆分为任何非空子字符串列表, # 其中子字符串的串联构成了原始字符串。 # 但是,必须拆分子字符串,以使所有子字符串都是唯一的。 # 子字符串是字符串中连续的字符序列。 # # Example 1: # Input: # s = "ababccc" # Output: # 5 # Explanation: # One way to split maximally is ['a', 'b', 'ab', 'c', 'cc']. # Splitting like ['a', 'b', 'a', 'b', 'c', 'cc'] is not valid as you have 'a' and 'b' multiple times. # # Example 2: # Input: s = "aba" # Output: 2 # Explanation: One way to split maximally is ['a', 'ba']. # # Example 3: # Input: s = "aa" # Output: 1 # Explanation: It is impossible to split the string any further. # # Constraints: # 1、1 <= s.length <= 16 # 2、s contains only lower case English letters. # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # #
# # Sentiment Classification of Movie Reviews
#
# # INF-396 Introduction to Data Science
#
# Author:
#
# # Language: Python
#
# # Topics:
#
# - Sentiment Analysis
# - Feed Forward, RNN, LSTM
# - GLOVE Embedding
#
    # + ## Puede encontrar información sobre keras en https://keras.io/getting_started/intro_to_keras_for_engineers/ ## Puede encontrar información sobre sklearn en https://scikit-learn.org/stable/ import os import numpy as np import pickle # - # Primero, carguemos en memoria el conjunto de datos y veamos algunos de sus datos classes = ['negative', 'positive'] X_train, y_train, X_test, y_test = pickle.load(open('movies_review_train_test.dat', 'rb')) # ## Modelos de redes neuronales from tensorflow import keras from tensorflow.keras.models import Sequential, Model from tensorflow.keras.layers import Dense, Input, Embedding, LSTM, SimpleRNN, Add, Lambda, Dropout from tensorflow.keras.optimizers import Adam import tensorflow.keras.backend as K # Definición de parámetros batch_size = 32 epochs = 2 num_classes = 2 max_words = 70 embedding_size = 30 # ### Preprocesamiento # A continuación preprocesaremos los datos, realizando tokenización. # + from tensorflow.keras.preprocessing.text import Tokenizer # Tokenizing text tokenizer = Tokenizer() tokenizer.fit_on_texts(X_train + X_test) word_index = tokenizer.word_index num_words = len(tokenizer.word_index) + 1 # - # Transformando texto a secuencias de indices X_tr = tokenizer.texts_to_sequences(X_train) X_te = tokenizer.texts_to_sequences(X_test) from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.utils import to_categorical # Se agregan 0 a textos que tienen menos de max_words palabras y se cortan los que tienen mas X_tr_pad = pad_sequences(X_tr, maxlen=max_words) X_te_pad = pad_sequences(X_te, maxlen=max_words) # Se convierten label a version categorica y_train = to_categorical(y_train, num_classes=num_classes) y_test = to_categorical(y_test, num_classes=num_classes) # # Entrenando los modelos # # ## Red neuronal feedforward # # Primero entrenaremos una red neuronal feedforward directamente en el texto tokenizado, utilizando un embedding simple. 
# + document_input = Input(shape=(max_words, ), dtype='int32') x = Embedding(num_words, embedding_size, input_length=max_words)(document_input) f = Lambda(lambda x: K.sum(x,axis=1))(x) y = Dense(20, activation='relu')(f) output = Dense(num_classes, activation='softmax')(y) # Construyendo el modelo model = Model(inputs=[document_input], outputs=[output]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Entrenando el modelo history = model.fit(X_tr_pad, y_train, batch_size=batch_size, epochs=2, verbose=1, validation_split=0.2) # - loss, acc = model.evaluate(X_te_pad, y_test) print('Accuracy en datos de test: {0}'.format(acc)) # # Red Neuronal Recurrente Simple con Embedding # + document_input = Input(shape=(max_words, ), dtype='int32') # Embedding Layer embedding = Embedding(input_dim=num_words, output_dim=embedding_size, input_length=max_words, trainable=True)(document_input) rnn = SimpleRNN(10,return_sequences=False)(embedding) # Aplicando una capa densa luego de la RNN output = Dense(num_classes, activation='softmax')(rnn) # Construyendo el modelo model = Model(inputs=[document_input], outputs=[output]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Entrenando el modelo history = model.fit(X_tr_pad, y_train, batch_size=batch_size, epochs=2, verbose=1, validation_split=0.2) # - loss, acc = model.evaluate(X_te_pad, y_test) print('Accuracy en datos de test: {0}'.format(acc)) # ## Dropout para evitar el sobre ajuste # + document_input = Input(shape=(max_words, ), dtype='int32') # Embedding Layer embedding = Embedding(input_dim=num_words, output_dim=embedding_size, input_length=max_words, trainable=True)(document_input) rnn = SimpleRNN(10,dropout=0.4,return_sequences=False)(embedding) ''' ----------- ''' # Aplicando una capa densa luego de la RNN output = Dense(num_classes, activation='softmax')(rnn) # Construyendo el modelo model = Model(inputs=[document_input], outputs=[output]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Entrenando el modelo history = model.fit(X_tr_pad, y_train, batch_size=batch_size, epochs=2, verbose=1, validation_split=0.2) # - # ## Explique aquí Dropout # # Dropout significa "ignorar" algunas neuronas al azar (cantidad de acuerdo al dropout rate) para los pasos forward propagation o backward propagation. Esto ayuda a prevenir el overfitting, ya que la red se ve forzada a poder generalizar con un subconjunto de neuronas y a no desarrollar codependencias entre neuronas. 
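# The short cell below is an illustrative sketch (not part of the original notebook) of the behaviour
# described above, assuming the TensorFlow 2.x Keras stack already imported here; the toy tensor and
# standalone layer are made up for demonstration. The rate 0.4 matches the `dropout=0.4` used in the
# SimpleRNN above.
# +
import tensorflow as tf

# Toy activations: a single batch of 8 units, all equal to 1.0.
activations = tf.ones((1, 8))
drop = tf.keras.layers.Dropout(rate=0.4)

# training=True: roughly 40% of the units are zeroed at random and the surviving ones are
# rescaled by 1 / (1 - 0.4), so the expected sum of activations is preserved.
print(drop(activations, training=True).numpy())

# training=False (inference): dropout is disabled and the activations pass through unchanged.
print(drop(activations, training=False).numpy())
# -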
loss, acc = model.evaluate(X_te_pad, y_test) print('Accuracy en datos de test: {0}'.format(acc)) # # Red LSTM con Embedding # + document_input = Input(shape=(max_words, ), dtype='int32') # Embedding Layer embedding = Embedding(input_dim=num_words, output_dim=embedding_size, input_length=max_words, trainable=True) embedding_output = embedding(document_input) rnn = LSTM(10,dropout=0.4)(embedding_output) output = Dense(num_classes, activation='softmax')(rnn) # Construyendo el modelo model = Model(inputs=[document_input], outputs=[output]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Entrenando el modelo history = model.fit(X_tr_pad, y_train, batch_size=batch_size, epochs=2, verbose=1, validation_split=0.2) # - loss, acc = model.evaluate(X_te_pad, y_test) print('Accuracy en datos de test: {0}'.format(acc)) # %matplotlib inline from matplotlib import pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) ax.plot(history.history['loss']) ax.plot(history.history['val_loss']) ax.set_xlabel('epoch') ax.set_ylabel('loss') fig.legend(['train', 'validation'], loc='upper left') # ### Escriba aquí sus conclusiones # # El error de validación se mantiene más o menos constante. Sin embargo, la accuracy (métrica que estamos optimizando) si aumenta. # # Red LSTM con GLOVE embedding # # Descargue los embeddings pre entrenados desde https://nlp.stanford.edu/projects/glove/ # archivo glove.6B.zip y extraiga los datos para utilizarlos en esta sección # + # Funciones para cargar los vectores GLOVE def glorot_uniform_np(shape): fan_in, fan_out = shape[0],shape[1] s = np.sqrt(6. / (fan_in + fan_out)) return np.random.uniform(-s, s, size=shape) def load_word_vectors(embeddings_index, glove_file): print('Indexing word vectors.') for line in glove_file: values = line.split() word = values[0] coefs = np.asarray(values[1:], dtype='float32') embeddings_index[word] = coefs glove_file.close() print('Found %s word vectors.' 
% len(embeddings_index)) # + # Cargando los vectores GLOVE ## LOAD PRETRAINED WORD VECTORS GLOVE_FILE = 'glove.6B.100d.txt' embeddings_index = {} glove_file = open(GLOVE_FILE) load_word_vectors(embeddings_index, glove_file) embedding_matrix = np.zeros((len(word_index) + 1, 100)) for word, i in word_index.items(): embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector embeddings_index = {} # + from tensorflow.keras.initializers import Constant document_input = Input(shape=(max_words, ), dtype='int32') embedding = Embedding(input_dim=num_words, output_dim=100, input_length=max_words, trainable=True,embeddings_initializer=Constant(embedding_matrix)) embedding_output = embedding(document_input) # + f = Lambda(lambda x: K.sum(x,axis=1))(embedding_output) y = Dense(20, activation='relu')(f) output = Dense(num_classes, activation='softmax')(y) # Construyendo el modelo model = Model(inputs=[document_input], outputs=[output]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Entrenando el modelo history = model.fit(X_tr_pad, y_train, batch_size=batch_size, epochs=2, verbose=1, validation_split=0.2) # - loss, acc = model.evaluate(X_te_pad, y_test) print('Accuracy en datos de test: {0}'.format(acc)) # + rnn = SimpleRNN(10,return_sequences=False)(embedding_output) # Aplicando una capa densa luego de la RNN output = Dense(num_classes, activation='softmax')(rnn) # Construyendo el modelo model = Model(inputs=[document_input], outputs=[output]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Entrenando el modelo history = model.fit(X_tr_pad, y_train, batch_size=batch_size, epochs=2, verbose=1, validation_split=0.2) # - loss, acc = model.evaluate(X_te_pad, y_test) print('Accuracy en datos de test: {0}'.format(acc)) # + rnn = SimpleRNN(10,return_sequences=False)(embedding_output) # Aplicando una capa densa luego de la RNN output = Dense(num_classes, activation='softmax')(rnn) # Construyendo el modelo model = Model(inputs=[document_input], outputs=[output]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Entrenando el modelo history = model.fit(X_tr_pad, y_train, batch_size=batch_size, epochs=2, verbose=1, validation_split=0.2) # - loss, acc = model.evaluate(X_te_pad, y_test) print('Accuracy en datos de test: {0}'.format(acc)) # + rnn = SimpleRNN(10,dropout=0.4,return_sequences=False)(embedding_output) # Aplicando una capa densa luego de la RNN output = Dense(num_classes, activation='softmax')(rnn) # Construyendo el modelo model = Model(inputs=[document_input], outputs=[output]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Entrenando el modelo history = model.fit(X_tr_pad, y_train, batch_size=batch_size, epochs=2, verbose=1, validation_split=0.2) # - loss, acc = model.evaluate(X_te_pad, y_test) print('Accuracy en datos de test: {0}'.format(acc)) # + rnn = LSTM(10,dropout=0.4)(embedding_output) output = Dense(num_classes, activation='softmax')(rnn) # Construyendo el modelo model = Model(inputs=[document_input], outputs=[output]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Entrenando el modelo history = model.fit(X_tr_pad, y_train, batch_size=batch_size, epochs=3, verbose=1, validation_split=0.2) # - loss, acc = model.evaluate(X_te_pad, y_test) print('Accuracy en datos de test: {0}'.format(acc)) # # Visualizando Words Embedding con PCA 
(opcional, 15 ptos.) # # Este ejercicio es más difícil, pero resolverlo aclarará muchas dudas sobre el funcionamiento de Keras. # Visualice los embeddings entrenados utilizando PCA. # Para esto debe seguir los siguientes pasos: # 1. Obtenga los pesos de la capa de embedding, estos tendran tamaño (num_vocab, embedding_dim). Puede obtener la correspondencia entre vectores y palabras utilizando el word_index. # 2. Ajuste un modelo PCA (https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) y quédese con los dos componentes principales. # 3. Visualice algunas palabras tomadas alreatoriamente en un gráfico de dos dimensiones y concluya. # from sklearn.decomposition import PCA import random embedding_weights = embedding.get_weights()[0] pca = PCA(n_components=2,svd_solver="full") pca.fit(embedding_weights) new_embeddings = pca.transform(embedding_weights) words_embeddings = {w:new_embeddings[idx] for w, idx in word_index.items()} fig = plt.figure(figsize=(15,15), facecolor='white') ax = fig.add_subplot(111) cont = 0 random.seed(0) while(cont < 20): random_number = random.randint(0,len(word_index.items())) palabra_random = list(words_embeddings.keys())[random_number] ax.scatter(words_embeddings[palabra_random][0],words_embeddings[palabra_random][1], marker='o',label=palabra_random) ax.annotate(palabra_random, (words_embeddings[palabra_random][0],words_embeddings[palabra_random][1])) cont += 1 ax.set_xlabel('component 1') ax.set_ylabel('component 2') ax.legend() # Se puede observar como el modelo agrupa palabras positivas en la parte superior y palabras negativas en la parte inferior. Considerando que sólo se están ocupando 2 componentes, hace una buena división. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib.pyplot as plt import numpy from scipy.interpolate import interp1d from typing import Tuple na = 11037 np = 11034 xa = 139 xp = 239 theta = numpy.linspace(0.2, 0.99, 100) def profile_likelihood(theta: numpy.ndarray, na: float, np: float, xa: float, xp: float) -> numpy.ndarray: ratio = na * theta / (na * theta + np) log_like = xa * numpy.log(ratio) + xp * numpy.log(1 - ratio) like = numpy.exp(log_like) like /= numpy.max(like) return like def likelihood_interval(theta: numpy.ndarray, likelihood: numpy.ndarray, cutoff: float) -> Tuple[float, float]: # intersection points occur below and above the maximum likelihood estimate mle_index = numpy.argmax(likelihood) interp_below_max = interp1d(likelihood[:mle_index], theta[:mle_index]) interp_above_max = interp1d(likelihood[mle_index:], theta[mle_index :]) lower_int = numpy.round(interp_below_max(cutoff).flatten()[0], 2) upper_int = numpy.round(interp_above_max(cutoff).flatten()[0], 2) return (lower_int, upper_int) def plot_profile_likelihood( theta: numpy.ndarray, likelihood: numpy.array, title='') -> None: plt.plot(theta, likelihood) plt.axhline(y=0.15, linewidth=1) if title.startswith("(b)"): plt.axvline(x=1.0, linewidth=1) plt.xlabel(r'$\theta$') plt.ylabel('Likelihood') plt.title(title); likelihood = profile_likelihood(theta, na, np, xa, xp) plot_profile_likelihood(theta, likelihood, title='(a) Heart attacks') ci = likelihood_interval(theta, likelihood, 0.15) print(f"95% CI = ", ci) na = 11037 np = 11034 xa = 119 xp = 98 theta = numpy.linspace(0.7, 1.99, 100) likelihood = profile_likelihood(theta, na, np, xa, xp) 
plot_profile_likelihood(theta, likelihood, title='(b) Strokes') ci = likelihood_interval(theta, likelihood, 0.15) print(f"95% CI = ", ci) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="FPtdCTumRZhz" # ## **Scandinanimal - The scandinavian deer classification app** # + [markdown] id="t3fSPsHuRkgD" # Have you ever been to the scandinavian forests and wondered if you are facing a moose, caribou or an elk? Upload a picture and you will find out. Unfortunately this app can not handle the bavarian "Wolpertinger", but it can help you if you are Santa Claus and looking for a new reliable caribou. # + id="JwI3b58FQ9MU" from fastai.vision.all import * from fastai.vision.widgets import * # + id="b-LYfgMDXyoY" download_url('https://github.com/pds2021/a5-fino-git/releases/download/1/deer_classifier_seraphin.pkl', 'deer_classifier_seraphin.pkl') learn_inf = load_learner('deer_classifier_seraphin.pkl', cpu=True) btn_upload = widgets.FileUpload() btn_run = widgets.Button(description='Classify') out_pl = widgets.Output() lbl_pred = widgets.Label() def on_click_classify(change): img = PILImage.create(btn_upload.data[-1]) out_pl.clear_output() with out_pl: display(img.to_thumb(128,128)) pred,pred_idx,probs = learn_inf.predict(img) lbl_pred.value = f'Prediction: {pred}; Probability: {probs[pred_idx]:.04f}' btn_run.on_click(on_click_classify) VBox([widgets.Label('Select your scandinavian deer!'), btn_upload, btn_run, out_pl, lbl_pred]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import sys from __future__ import division # + import numpy as np from phasor.utilities.ipynb.displays import * from phasor.utilities.ipynb.sympy import * import declarative from declarative.bunch import ( DeepBunch ) import phasor.math.dispatched as dmath #import phasor.math.dispatch_sympy # + import phasor.utilities.version as version print(version.foundations_version()) from phasor.utilities.np import logspaced from phasor import optics from phasor import base from phasor import signals from phasor import system from phasor import readouts from phasor.signals.pade_fit.pade_fit import ratsvd import scipy.signal # + F_nyquist = 16384 / 2 F_AC = F_nyquist * 2 * np.arange(0, 1001) / 301 F_AC = logspaced(1, 7000, 500) ZPKz = ( [ .9, -.3, .1+.95j, .1-.95j, ], [ #.9, .99, #.97, -0.03+.95j, -.03-.95j, #-.2, ], 10 ) b,a = scipy.signal.zpk2tf(*ZPKz) Fb = mplfigB(Nrows=2) w, h = scipy.signal.freqz_zpk(*ZPKz , worN = F_AC / F_nyquist * np.pi) F_exact = h Fb.ax0.loglog(F_AC, abs(h), label = 'discrete') Fb.ax1.semilogx(F_AC, np.angle(h), label = 'discrete') w, h = scipy.signal.freqz(b, a , worN = F_AC / F_nyquist * np.pi) #F_exact = h Fb.ax0.loglog(F_AC, abs(h), label = 'discrete') Fb.ax1.semilogx(F_AC, np.angle(h), label = 'discrete') Fb.ax0.legend() # - b R = np.random.randn(len(F_AC)) + 1j*np.random.randn(len(F_AC)) F_noise = F_exact * (1 + R / 10) fitz = ratsvd(F_AC, F_noise, W = 10, order_a = 10, order_b = 10, F_nyquist= F_nyquist, max_size = 100, ms_method = 'sum', #aorder_min = 300, ) #fits # + Fb = mplfigB(Nrows=2) w, h = scipy.signal.freqz(b, a, worN = F_AC / F_nyquist * np.pi) Fb.ax0.loglog(F_AC, abs(h), label = 'discrete') Fb.ax1.semilogx(F_AC, np.angle(h), label = 'discrete') h = F_exact 
Fb.ax0.loglog(F_AC, abs(F_exact), label = 'Fex') Fb.ax1.semilogx(F_AC, np.angle(F_exact), label = 'Fex') h = F_exact Fb.ax0.loglog(F_AC, abs(F_noise), label = 'Fnoise') Fb.ax1.semilogx(F_AC, np.angle(F_noise), label = 'Fnoise') for ftup in fitz[:4]: res, b_fit, a_fit, order_b, order_a, F_rescale = ftup w, h = scipy.signal.freqz(b_fit, a_fit, worN = F_AC / F_rescale * np.pi) Fb.ax0.loglog( F_AC, abs(h), label = 'fit {order_b} ({res})'.format( order_a = order_a, order_b = order_b, res = res, ), #color = 'green', ) Fb.ax1.semilogx(F_AC, np.angle(h), label = 'fit') Fb.ax0.legend(ncol = 3, fontsize = 6) # - fits = ratsvd(F_AC, F_exact, W = 10, order_a = 10, order_b = 10, F_nyquist=None, max_size = 100, ms_method = 'sum', #aorder_min = 300, ) fits # + Fb = mplfigB(Nrows=2) w, h = scipy.signal.freqz(b, a, worN = F_AC / F_nyquist * np.pi) Fb.ax0.loglog(F_AC, abs(h), label = 'discrete') Fb.ax1.semilogx(F_AC, np.angle(h), label = 'discrete') h = F_exact Fb.ax0.loglog(F_AC, abs(F_exact), label = 'Fex') Fb.ax1.semilogx(F_AC, np.angle(F_exact), label = 'Fex') h = F_exact Fb.ax0.loglog(F_AC, abs(F_noise), label = 'Fnoise') Fb.ax1.semilogx(F_AC, np.angle(F_noise), label = 'Fnoise') for ftup in fits[:4]: res, b_fit, a_fit, order_b, order_a, F_scale = ftup w, h = scipy.signal.freqs(b_fit, a_fit, worN = F_AC / F_rescale * 2 * np.pi) Fb.ax0.loglog( F_AC, abs(h), label = 'fit {order_b} ({res})'.format( order_a = order_a, order_b = order_b, res = res, ), #color = 'green', ) Fb.ax1.semilogx(F_AC, np.angle(h), label = 'fit') Fb.ax0.legend(ncol = 3, fontsize = 6) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.6.7 64-bit (''energyanalytics'': conda)' # language: python # name: python36764bitenergyanalyticsconda420549f17c0a492fa1ad380efb26bb87 # --- # + import pandas as pd import datetime import numpy as np import matplotlib.pyplot as plt import seaborn as sns import missingno as msno import re # - originalDF = pd.read_excel('UT Completion and Sequencing.xlsx') # ## Cleaning Data nan values # + ## Compl. Type has a couple nan values originalDF['Compl. Type'] = originalDF['Compl. Type'].replace("No Data", np.nan).replace("unknown, probably hybrid", np.nan).replace("Not indicated",np.nan) # - DatabaseDF = originalDF # ## Using Missingno to visualize incomplete Data msno.matrix(DatabaseDF) # Our data appears to be relatively complete at first glace. The only variables that have noticibly many missing values are : Stages, Best# Mo BOPD, Compl. Type, and Fluid Type from DI. # # It should be noted however that for Stages we are not sure if 0 should be interpreted as a NaN value msno.heatmap(DatabaseDF) # I wasn't expecting this to give meaningfull information, since really we don't care why the data isn't present. It is interesting to note that higher Best# Mo BOPD does not allways gaurentee the existence of lower Month BOPD, which means that there are some interesting holes in our data. 
# That's kind of all that Missingno can do, if I'm being perfectly honest its not super usefull for us # ## Variable comparabilities # + tags=["outputPrepend"] ## 2 variable comparison colList = list(DatabaseDF.columns.values) totalEntries = len(DatabaseDF) Heatmap = [] for column in colList: rowList = [] for row in colList: missing = len((DatabaseDF[(DatabaseDF[column].isna()) | DatabaseDF[row].isna()])) percentAvailable = 1 - (missing/totalEntries) rowList.append(percentAvailable) Heatmap.append(rowList) mask = np.zeros_like(Heatmap) mask[np.triu_indices_from(mask)] = True heatmapDF = pd.DataFrame(Heatmap, columns = colList, index = colList) with sns.axes_style("white"): plt.subplots(figsize = (15,10)) sns.heatmap(heatmapDF, annot = True, mask = mask, square = True) # - # This table shows us what percent of total entries have both values, which gives us an idea of what values we will be able to compare. For the most part most 2 value pairs are present within the entire dataset except for the best Mo production, those are the main soucre of error. # ## Compl. Type Repurposing # + def completionTypeParser(string, stages): if (string == "nan"): return np.nan, np.nan, np.nan, np.nan, np.nan, np.nan if (stages == "nan"): stages = 0 integers = list(map(int, re.findall(r'\d+',string))) integerCount = len(integers) typeString = '' typeCount = [np.nan] * 5 if (integerCount == 0): ## need to determine if uncountable if "and" not in string: ## it is uncountable if "orts" in string: typeString = "Frac Ports" typeCount[2] = stages elif "Sleeves" in string: typeString = "Sleeves" typeCount[0] = stages elif "&" in string: typeString = "P & P" typeCount[1] = stages elif "CT" in string: typeString = "CT" typeCount[4] = stages elif "Tubing" in string: typeString = "CT" typeCount[4] = stages else: typeString = string else: typeString = "Sleeves and P & P" elif (integerCount == 2): ## Determine which to set ## Need to determine if this is worth checking ##if (sum(integers) != stages): ##print(string , " ", stages) ##raise Exception("Stages should equal sum of completions") if "&" in string: ##P&P type if "eeve" in string: typeString = "Sleeves and P & P" typeCount[0] = integers[0] typeCount[1] = integers[1] elif "eater" in string: typeString = "Repeater Ports and P & P" typeCount[3] = integers[0] typeCount[1] = integers[1] elif "ort" in string: typeString = "Frac Ports and P & P" typeCount[2] = integers[0] typeCount[1] = integers[1] else: typeString = "P & P and CT" typeCount[1] = integers[0] typeCount[4] = integers[1] else: ##not P&P type typeString = "Frac Ports and Repeater Ports" typeCount[2] = integers[0] typeCount[3] = integers[1] else: raise Exception("String should contain either two or no numbers") return typeString, typeCount[0], typeCount[1], typeCount[2], typeCount[3], typeCount[4] # + completionTypeData = [] for index, row in DatabaseDF.iterrows(): parseResults = completionTypeParser(str(row['Compl. Type']), row['Stages']) completionTypeData.append(parseResults); completionTypeDF = pd.DataFrame(completionTypeData, columns= ["Completion Type", "Sleeves", "P&P", "Frac Ports", "Repeater Ports", "CT"]) DatabaseDF = DatabaseDF.join(completionTypeDF) # - DatabaseDF.head() # ## Adding the Year the well was drilled # Adding a column recording the year from the date for potentially future clustering of the data based upon the year the well was drilled. 
DatabaseDF["Year Drilled"] = DatabaseDF["Date Fracd"].dt.year DatabaseDF.head() # ## Exporting to CSV DatabaseDF.to_csv('CleanedDataset.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="dvosFKszd4FQ" # # **Machine Learning - case: estilo musical** # + [markdown] id="ofrAoZQ9NpfC" # #### **Importando as bibliotecas necessárias** # + id="r0TVvSNAMakd" import sklearn import pandas as pd import numpy as np # para ignorar os avisos de mensagens das funções import warnings warnings.filterwarnings("ignore") # pacote para trabalhar com data frame (tabelas) import pandas as pd # pacote mais básico para vizualização gráfica import matplotlib.pyplot as plt # um dos pacotes para fazer gráficos import seaborn as sns # sklearn - pacote análise de dados # existe grande parte dos métodos mais famosos from sklearn.model_selection import train_test_split # função para fazer avaliação dos modelos from sklearn.metrics import classification_report # + [markdown] id="_aCnz7Ooa4Tt" # #### **Explicação da base de dados** # # * **Base de dados**: devido à natureza artística da música, as classificações são frequentemente arbitrárias e controversas, e alguns gêneros podem se sobrepor. # * **Variáveis**: # * filename - nome do arquivo conforme fornecido no conjunto de dados marsyas; # * tempo - a velocidade com que uma passagem de música é tocada; # * beats - unidade rítmica na música; # * chroma_stft - transformada de Fourier de curto tempo; # * rmse - erro de raiz quadrada média; # * spectral_centroid - indica onde o "centro de massa" do espectro está localizado; # * spectral_bandwidth - é o intervalo de comprimento de onda em que uma quantidade espectral irradiada não é inferior a metade de seu valor máximo; # * rolloff - roll-off é a inclinação de uma função de transmissão com frequência; # * zero_crossing_rate - a taxa na qual o sinal muda de positivo para negativo ou de volta; # * mfcc1 a mfcc20 - coeficientes cepstrais de frequência de Mel (MFCCs) são coeficientes que coletivamente constituem um MFC; # * label - o estilo musical da música. # * **Objetivo**: treinar uma modelo e saiba a qual gênero sua música favorita pertence. # * A variável resposta será: label (nome do estilo musical). # * **Técnica usada**: modelos ensembles: Bagging, Random Forest, AdaBoost, Gradiente Estocástico Boosting e Voting Ensemble. # # [base de dados](https://www.kaggle.com/insiyeah/musicfeatures) # + [markdown] id="osvPPHa0a4UQ" # #### **Carregando os dados** # + id="kYXkBU76dTsn" colab={"base_uri": "https://localhost:8080/"} outputId="82d65ef3-8ef5-48d0-b26e-dbd19dca4739" from google.colab import drive drive.mount('/content/drive') # + id="B_EXLQNwa4UR" colab={"base_uri": "https://localhost:8080/"} outputId="ab2f000b-22da-4492-9a4f-50c43e37bef1" df = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Estagio_TAO_Python/Datasets/data_music.csv', sep = ",") # tamanho da base de dados print(df.shape) # + [markdown] id="6wYu8UDLmbJr" # Temos na base 30 variáveis, sendo 29 variáveis explciativas ($X$) e 1 variável resposta ($y$). Em cada linha temos dados de uma música. 
# + id="nNH1eSkD-5m5" colab={"base_uri": "https://localhost:8080/", "height": 223} outputId="26d9b9e6-93d7-402b-a1e3-c3ffb752e36c" # vizualizando as cinco primeiras linhas da base de dados df.head(5) # + id="KJtGI5HD4Fru" colab={"base_uri": "https://localhost:8080/"} outputId="9e84bc9c-fd35-4ae6-e53b-e250c36018c2" df.label.unique() # + [markdown] id="ocLsSkr5mnwc" # Nesta base de dados sobre música temos 10 estilos musicais, mostrados acima. # + [markdown] id="H8gO9oww0cDM" # #### **Informações da base de dados** # + [markdown] id="55ltrJS0jvFT" # **Informações sobre os tipos das variáveis da base de dados** # + id="ufotTlW---BG" colab={"base_uri": "https://localhost:8080/"} outputId="7aa82993-b884-4ca8-be0a-0d273f7ad692" df.info() # + [markdown] id="8Gy6GvkQm3cC" # A base de dados tem 1000 linhas de dados, não havendo dados faltantes. A maiori dos dados é numérico (inteiro ou real), com exceção da label que é a variável resposta que contém o estilo musical da música e do filename, que tem o código da música. # + [markdown] id="KDFdwlvbPqSP" # **Quantidade de dados por estilo musical** # + id="j8CyQXWcPAD7" colab={"base_uri": "https://localhost:8080/", "height": 357} outputId="75151014-cbed-4dab-860f-2c181b7a2aa2" geral = pd.DataFrame([df['label'].value_counts(normalize = False), df['label'].value_counts(normalize = True)*100]).T geral.columns = ['valor', 'proporção'] geral # + [markdown] id="TdmR5NGAnppy" # Verificamos que as classes referente ao estilo musical tem mesma proporção de dados. # + [markdown] id="fXnr0t6UFdoR" # #### **Análise descritiva dos dados** # + [markdown] id="lgBDTkU5oHZr" # A seguir temos as estatísticas de cada variável da base de dados. # + id="X7iHz2pzFdoT" colab={"base_uri": "https://localhost:8080/", "height": 911} outputId="f03208c0-d770-4da3-c1ac-e7621d6fcc26" df.describe().T # + [markdown] id="NT8cxXj-FdoV" # **correlação entre as variáveis** # + id="ruE2atf8FdoV" colab={"base_uri": "https://localhost:8080/", "height": 630} outputId="1ff8a882-e635-42b6-dae9-fbdc414ad875" corr = df.corr(method = 'spearman') corr.style.background_gradient(cmap='coolwarm') # + [markdown] id="0PitKbBMoPir" # * Com a matriz de correlação, podemos notar que há grande relação entre a vaariável tempo que se refere a velocidade da música e tempo que é o tempo total da música, o que instintivamente se espera porque o tempo total da música depende da velocidade. # # * Também verificamos correlação das variáveis "spectral_centroid", "spectral_bandwidth", "rolloff", "zero_crossing_rate", "mfcc1" com as variáveis "spectral_bandwidth", "rolloff", "zero_crossing_rate" entre si. O que referência a onda sonora e o espectro da onda, como as variáveis tem referência a aspectos semelhantes, faz sentido tem correlação entre estas variáveis. # # * Há outras correlações que são grande, mas nada que se deve ter um estudo separado. 
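# As a complement to the coloured matrix above, this small snippet (an illustrative addition,
# reusing only the `corr` data frame and the `np` import defined earlier) lists the strongest
# pairwise Spearman correlations explicitly instead of reading them off the gradient.
# +
# Keep each pair once (upper triangle, diagonal excluded), then rank by absolute correlation.
strongest = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1)).stack()
print(strongest.abs().sort_values(ascending=False).head(10))
# -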
# + [markdown] id="eNGvvYvLFdoW" # **Gráfico box plot para verificar existência de outliers** # + id="pZnDFMIr7PS1" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="18659caf-5dc4-4277-e4d9-983bf3f6d442" sns.boxplot(x = 'value', y = 'variable', data = df[['spectral_centroid', 'spectral_bandwidth', 'rolloff']].melt()) plt.show() # + id="3PL6vLSY6oDz" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="af7722dc-73a2-4a51-905c-4ec2a4fd7f49" sns.boxplot(x = 'value', y = 'variable', data = df[['mfcc1']].melt()) plt.show() # + id="x728L2B97bKr" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="73076d95-5fde-4718-bd0c-5d7b6a30aaa0" sns.boxplot(x = 'value', y = 'variable', data = df[['tempo', 'mfcc2']].melt()) plt.show() # + id="Y40GmKYdFdoX" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="0a59ef1a-6daa-4d6c-8154-a4075129aeca" sns.boxplot(x = 'value', y = 'variable', data = df.drop(['filename', 'label', 'tempo', 'mfcc2', 'spectral_centroid', 'spectral_bandwidth', 'rolloff', 'mfcc1'], axis = 1).iloc[:, 0:28].melt()) plt.show() # + [markdown] id="7v_9PPioKSOY" # Pelos bloxplot acima verificamos que existe outliers na marioria das variáveis. Para remover os outliers utilizamos o código a seguir. Neste código removemos os valores abaixo de $Q1 - 1.5 IQR$ e acima de $Q3 + 1.5 IQR$, para cada uma das variáveis. # + id="jKdgbCC2lQT-" for i in range(0, 26): data = df.drop(['filename', 'label'], axis = 1) name_column = data.columns[i] Q1 = data.describe().unstack()[name_column, '25%'] Q3 = data.describe().unstack()[name_column, '75%'] IQR = Q3 - Q1 df_extra = np.array(data[name_column].values.tolist()) df[name_column] = np.where(df_extra < Q1 - 1.5 * IQR, data.iloc[:, i].mean(), df_extra).tolist() df[name_column] = np.where(df_extra > Q3 + 1.5 * IQR, data.iloc[:, i].mean(), df_extra).tolist() # + id="9OxKtTIc8YqR" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="25f102ce-2e2b-462f-862c-3f6f499b50df" sns.boxplot(x = 'value', y = 'variable', data = df[['spectral_centroid', 'spectral_bandwidth', 'rolloff']].melt()) plt.show() # + id="CwSNJzuQ8YqU" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="337bb8aa-0545-4290-f91a-1785c23bf2d7" sns.boxplot(x = 'value', y = 'variable', data = df[['mfcc1']].melt()) plt.show() # + id="V2woTit68YqV" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="0eeec82a-02c3-4c00-aadb-e3225b527468" sns.boxplot(x = 'value', y = 'variable', data = df[['tempo', 'mfcc2']].melt()) plt.show() # + id="P0x8U1PP8YqW" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="6edaa982-8a3e-44de-c819-fea60e6d674d" sns.boxplot(x = 'value', y = 'variable', data = df.drop(['filename', 'label', 'tempo', 'mfcc2', 'spectral_centroid', 'spectral_bandwidth', 'rolloff', 'mfcc1'], axis = 1).iloc[:, 0:28].melt()) plt.show() # + [markdown] id="G-6hXuBFLPe9" # A maioria dos outliers foram removidos. # + [markdown] id="qKXxrWKDYrx5" # #### Normalizando as variáveis # + [markdown] id="L8ud55j3qcH7" # Neste passo separamos a variável explicativa ($X$) da variável resposta ($y$). Em seguinda normalizamos as variáveis, usando 'StandardScaler', para que fiquem na mesma escala. 
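# Before applying it to the real data below, here is a tiny illustration with made-up values of the
# transformation StandardScaler performs: each column is rescaled to (x - mean) / std, so it ends up
# with zero mean and unit variance.
# +
import numpy as np
from sklearn.preprocessing import StandardScaler

toy = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
toy_scaled = StandardScaler().fit_transform(toy)
print(toy_scaled.mean(axis=0))  # approximately [0. 0.]
print(toy_scaled.std(axis=0))   # approximately [1. 1.]
# -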
# + id="cXQkI926h2e5" X = df.drop(['filename', 'label'], axis = 1) y = df['label'] # + id="pBfjgrOo-tJh" colab={"base_uri": "https://localhost:8080/"} outputId="23fb05da-2070-4874-b5f4-b2fa76a8f13b" X.info() # + id="gbEW7yunYwRV" from sklearn.preprocessing import StandardScaler scaler = StandardScaler() X = scaler.fit_transform(X) X = pd.DataFrame(X, columns = df.drop(['filename', 'label'], axis = 1).columns) # + [markdown] id="pyXrKMtldQ6g" # #### **Análise de Componentes Principais (PCA)** # + [markdown] id="U2QQ1lAHr7P8" # Usamos a análise de componentes principais para reduzir o tamanho da base de dados. # + id="mEq3pPyJRE2S" from sklearn.decomposition import PCA # escolhendo número de dimensões pca = PCA() pca.fit(X) cumsum = np.cumsum(pca.explained_variance_ratio_) d = np.argmax(cumsum >= 0.99999999) + 1 # + id="cOLA5TB_Spez" colab={"base_uri": "https://localhost:8080/", "height": 279} outputId="f87a3205-b3e9-4853-df5a-651cb471b0f7" plt.plot(cumsum) plt.ylabel('Variância explicada') plt.xlabel('Dimensões') plt.show() # + id="Q3ZIeFjodQdZ" from sklearn.decomposition import PCA pca = PCA(n_components = d) principalComponents = pca.fit_transform(X) X = pd.DataFrame(data = principalComponents) #, columns = ['principal component 1', 'principal component 2']) # + [markdown] id="D3JlW6Nrh72o" # #### **Dados em treinamento e teste** # + [markdown] id="YQT7836ssRkr" # Separamos dividimos os dados em dados de treinamento e dados de teste, sendo 30% para teste e 70% para treinamento. # + id="lyU4IEWDNSOa" # todas as colunas da base de dados X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, shuffle = 'False', random_state = 376) # + [markdown] id="PytgdNvGBWnN" # #### **Treinamento dos modelos Ensemble** # + [markdown] id="T3WDOeL2setE" # * Agora com os dados organizados e arrumados aplicamos os modelos Ensemble. # # * Os modelos usados são os modelos ensemble. Os modelos ensemble são técnicas avançadas para resolver problemas conplexos, em que modelos independentes são conbinados para se conseguir um resultado melhor. A relação dos modelos usados é a seguinte: # * Bagging; # * Random Forest; # * AddaBoost; # * Gradiente Estocástico Boosting; # * Voting Ensemble. # # * Em cada aplicação de modelo, são apresentados os melhores modelos parâmetros usados em cada modelo ou o melhor modelo com seus parâmetros como no caso do modelo Voting Ensemble. 
# + [markdown] id="9n2DKzF9sutb" # ##### **Modelo Bagging** # + id="gDhmQobB9vfJ" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="86dd86e3-bcea-43b6-b276-d7ddb8348ce3" from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import BaggingClassifier from sklearn import model_selection kfold = model_selection.KFold(n_splits = 10, random_state = 376) model_1 = BaggingClassifier(base_estimator = DecisionTreeClassifier(), n_estimators = 100, random_state = 376) print(model_1.fit(X_train, y_train), '\n') # predizendo a nota de novos alunos y_pred = model_1.predict(X_test) from sklearn.metrics import accuracy_score, confusion_matrix, classification_report results_1 = accuracy_score(y_test, y_pred) print('Acurácia modelo Bagging: {}.'.format(results_1)) print("Melhores parâmetros:") print(model_1.base_estimator, '\n') cm = confusion_matrix(y_test, y_pred) sns.heatmap(cm, annot = True) plt.show() print(classification_report(y_test, y_pred)) # + [markdown] id="oLZ9xWsSszjC" # ##### **Random Forest Classification** # + id="cJDtAyet_aDZ" colab={"base_uri": "https://localhost:8080/", "height": 853} outputId="d1084a39-292e-4440-f939-cd5a91a2bed8" from sklearn.ensemble import RandomForestClassifier kfold = model_selection.KFold(n_splits = 10, random_state = 376) model_rf = RandomForestClassifier(n_estimators = 100, max_features = 5) print(model_rf.fit(X_train, y_train), '\n') # predizendo a nota de novos alunos y_pred = model_rf.predict(X_test) from sklearn.metrics import accuracy_score, confusion_matrix, classification_report results_rf = accuracy_score(y_test, y_pred) print('Acurácia modelo Random Forest: {}.'.format(results_rf)) print("Melhores parâmetros:") print(model_rf.base_estimator, '\n') cm = confusion_matrix(y_test, y_pred) sns.heatmap(cm, annot = True) plt.show() print(classification_report(y_test, y_pred)) # + [markdown] id="80QNwCr2s3Qy" # ##### **Modelo AdaBoost** # + id="X2_YCYO6_aqd" colab={"base_uri": "https://localhost:8080/", "height": 672} outputId="ac430e9d-3558-47d8-cb71-01fac2a797b1" from sklearn.ensemble import AdaBoostClassifier kfold_ada = model_selection.KFold(n_splits = 10, random_state = 376) model_ada = AdaBoostClassifier(n_estimators = 30, random_state = 376) print(model_ada.fit(X_train, y_train), '\n') # predizendo a nota de novos alunos y_pred = model_ada.predict(X_test) from sklearn.metrics import accuracy_score, confusion_matrix, classification_report results_ada = accuracy_score(y_test, y_pred) print('Acurácia modelo AdaBoost: {}.'.format(results_ada)) print("Melhores parâmetros:") print(model_ada.base_estimator, '\n') cm = confusion_matrix(y_test, y_pred) sns.heatmap(cm, annot = True) plt.show() print(classification_report(y_test, y_pred)) # + [markdown] id="KxrVm5AXs6It" # ##### **Modelo Gradiente Estocástico Boosting** # + id="wZHzKobd_ata" colab={"base_uri": "https://localhost:8080/", "height": 954} outputId="d2165e84-a01b-4cf5-9ef5-afb4a4c86d4e" from sklearn.ensemble import GradientBoostingClassifier kfold_sgb = model_selection.KFold(n_splits = 10, random_state = 376) model_sgb = GradientBoostingClassifier(n_estimators = 100, random_state = 376) print(model_sgb.fit(X_train, y_train), '\n') # predizendo a nota de novos alunos y_pred = model_sgb.predict(X_test) from sklearn.metrics import accuracy_score, confusion_matrix, classification_report results_sgb = accuracy_score(y_test, y_pred) print('Acurácia modelo Gradiente Estocástico Boosting: {}.'.format(results_sgb)) print("Melhores parâmetros:") print(model_sgb.get_params, 
'\n') cm = confusion_matrix(y_test, y_pred) sns.heatmap(cm, annot = True) plt.show() print(classification_report(y_test, y_pred)) # + [markdown] id="hlrnUVMMs98t" # ##### **Modelo Voting Ensemble** # + id="xy9OL7lh_n2q" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="8c69757a-fef3-4539-ee6b-7b1a45343526" from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.svm import SVC from sklearn.ensemble import VotingClassifier kfold_vc = model_selection.KFold(n_splits = 10, random_state = 376) # Lines 2 to 8 estimators = [] mod_lr = LogisticRegression() estimators.append(('logistic', mod_lr)) mod_dt = DecisionTreeClassifier() estimators.append(('cart', mod_dt)) mod_sv = SVC() estimators.append(('svm', mod_sv)) # Lines 9 to 11 ensemble = VotingClassifier(estimators) results_vc = model_selection.cross_val_score(ensemble, X, y, cv = kfold_vc) print(results_vc.mean()) from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.svm import SVC from sklearn.ensemble import VotingClassifier kfold_vc = model_selection.KFold(n_splits = 10, random_state = 376) # Lines 2 to 8 estimators = [] mod_lr = LogisticRegression() estimators.append(('logistic', mod_lr)) mod_dt = DecisionTreeClassifier() estimators.append(('cart', mod_dt)) mod_sv = SVC() estimators.append(('svm', mod_sv)) # Lines 9 to 11 ensemble = VotingClassifier(estimators) print(ensemble.fit(X_train, y_train), '\n') # predizendo a nota de novos alunos y_pred = ensemble.predict(X_test) from sklearn.metrics import accuracy_score, confusion_matrix, classification_report results_vc = accuracy_score(y_test, y_pred) print('Acurácia modelo Voting Ensemble: {}.'.format(results_vc)) print("Melhores parâmetros:") print(ensemble.named_estimators_, '\n') cm = confusion_matrix(y_test, y_pred) sns.heatmap(cm, annot = True) plt.show() print(classification_report(y_test, y_pred)) # + [markdown] id="xcKTDYbFNDLd" # #### **Resumo dos modelos** # + [markdown] id="risO09FzvARt" # Por último apresentamos os resultados da acurácia dos modelos em forma de tabela e gráfico. 
# + id="eB4eYlKF_n5E" colab={"base_uri": "https://localhost:8080/", "height": 203} outputId="7c2c6009-afd5-47ae-8dd6-dafd9ff672e6" import pandas as pd results_ac = [] results_ac.append(round(results_1.mean(), 4)) results_ac.append(round(results_rf.mean(), 4)) results_ac.append(round(results_sgb.mean(), 4)) results_ac.append(round(results_ada.mean(), 4)) results_ac.append(round(results_vc.mean(), 4)) names = ['Bagging', 'Random Forest', 'Boosting Adaptável', 'AdaBoost', 'Voting Ensemble'] results_final = pd.DataFrame([names, results_ac]).T results_final.columns = ['Modelo Ensemble', 'Acurácia'] results_final results_final.sort_values(['Acurácia'], ascending = False) # + id="ZzV8kMM__n7o" colab={"base_uri": "https://localhost:8080/", "height": 281} outputId="4ab8dbc5-82e2-4758-87bc-c0546601424d" plt.scatter(results_final['Acurácia'], results_final['Modelo Ensemble']) for i in range(0, len(results_final['Acurácia'])): plt.annotate(results_final['Acurácia'][i], xy = (results_final['Acurácia'][i], 0.05 + i), xytext = (results_final['Acurácia'][i] - 0.14, -0.08 + i)) plt.title('Acurácia dos modelos Ensemble') axes = plt.gca() axes.set_xlim([0, 1]) # plt.set_ylim([ymin,ymax]) plt.show() # + [markdown] id="YSkss1PTvGy9" # * Pelos resultados o melhor modelo é o Voting Ensemble com $63.00%$ de acurácia, neste modelo utiliza-se os modelos Regressão Logística, Árvore de Decisão e Máquina de Vetor Suporte (SVM). Apesar disso o modelo Random Forest também teve acurácia próxima. E o pior foi o modelo AdaBoost com acurácia de $15.33\%$. # # * Apesar do modelo Voting Ensemble se apresentar melhor entre os modelos apresentados, pode haver modelos melhores. Porém, o objetivo do estudo foi a aplciação dos métodos Ensemble. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="view-in-github" # Open In Colab # + [markdown] colab_type="text" id="XIu-8nkCV4zm" # # How to train your DragoNN tutorial 5: # ## Functional variant characterization for non-coding SNPs within the SPI1 motif # # This tutorial is a supplement to the DragoNN manuscript. # # This tutorial will take 2 - 3 hours if executed on a GPU. # # ## Outline #
#     1. Input data: SPI1 ChIP-seq and experimental bQTL data
#     2. Genome-wide classification and regression labels for SPI1 TF ChIP-seq
#     3. Optional: Download pre-generated models and test-set predictions
#     4. Genome-wide classification for SPI1
#     5. Genome-wide regression for SPI1
#     6. Genome-wide interpretation of true positive predictions in SPI1, with DeepLIFT
#     7. Recovering bQTL effect sizes: Classification vs Regression
#     8. Model-predicted SNP effect sizes vs bQTL effect sizes
#     9. Kat's architecture: Classification Model
#     10. Kat's architecture: Regression Model
#     11. Conclusions
#     12. Save tutorial outputs
    # Github issues on the [dragonn repository](https://github.com/kundajelab/dragonn) with feedback, questions, and discussion are always welcome. # # + colab={} colab_type="code" id="72XlYRZBluGr" # If you don't have bedtools installed in your environment (i.e. Google Colab), uncomment and run the command below # #!apt-get install bedtools # #!pip install pybedtools # + colab={} colab_type="code" id="FmftiCCDV4zo" #uncomment the lines below if you are running this tutorial from Google Colab # #!pip install dragonn>=0.2.2 # + colab={} colab_type="code" id="fyLzeiF5V4zq" # Making sure our results are reproducible from numpy.random import seed seed(1234) from tensorflow import set_random_seed set_random_seed(1234) # + colab={} colab_type="code" id="8M6gdfuJV4zu" #load dragonn tutorial utilities # %reload_ext autoreload # %autoreload 2 # %matplotlib inline import warnings warnings.filterwarnings('ignore') from dragonn.tutorial_utils import * # + [markdown] colab_type="text" id="djDLAi21V4zy" # ## Input data # Home # # This tutorial uses the same in vivo SPI1 transcription factor CHiP-seq dataset that was used in [Tutorial 4](https://colab.research.google.com/github/kundajelab/dragonn/blob/keras_2.2_tensorflow_1.6_purekeras/paper_supplement/PrimerTutorial%204%20-%20Interpreting%20predictive%20sequence%20features%20in%20in-vivo%20TF%20binding%20events.ipynb). Our goal is to compare predicted variant effect sizes from classification and regression models against experimental bQTL data. The bQTL data in this way serves as a "gold-standard" validation that in silico mutagenesis on the deep learning inputs leads to correct variant effect size prediction. We will use bQTL data that has been intersected with SPI1 CISBP genome motif annotations. # + colab={"base_uri": "https://localhost:8080/", "height": 413} colab_type="code" id="O707uf21V4zy" outputId="f5ca2dbc-9594-4a62-aa67-97190945d622" # SPI1, optimal IDR thresholded peaks, Myers lab, hg19 # https://www.encodeproject.org/experiments/ENCSR000BGQ/ # !wget -O SPI1.narrowPeak.gz http://mitra.stanford.edu/kundaje/projects/dragonn/dragonn_gm12878_pipeline/spi1_ENCSR000BGQ/cromwell-executions/chip/bb0c3c5a-3889-43fe-a218-05851cecc74a/call-reproducibility_idr/execution/optimal_peak.regionPeak.gz #Fold change bigWig track for the SPI1 dataset: # !wget -O SPI1.pooled.fc.bigWig http://mitra.stanford.edu/kundaje/projects/dragonn/dragonn_gm12878_pipeline/spi1_ENCSR000BGQ/cromwell-executions/chip/bb0c3c5a-3889-43fe-a218-05851cecc74a/call-macs2_pooled/execution/ENCFF000OBU.Rep1.merged.nodup.pooled_x_ENCFF000OCW.Control.Rep1.merged.nodup.fc.signal.bigwig ## Download the hg19 chromsizes file (We only use chroms 1 -22, X, Y for training) # !wget http://mitra.stanford.edu/kundaje/projects/dragonn/hg19.chrom.sizes ## Download the hg19 fasta reference genome (and corresponding .fai index) # !wget http://mitra.stanford.edu/kundaje/projects/dragonn/hg19.genome.fa.gz # !wget http://mitra.stanford.edu/kundaje/projects/dragonn/hg19.genome.fa.gz.fai # !wget http://mitra.stanford.edu/kundaje/projects/dragonn/hg19.genome.fa.gz.gzi # + colab={"base_uri": "https://localhost:8080/", "height": 215} colab_type="code" id="-YwnqCV-V4z2" outputId="85791b16-8647-45ff-cd8b-cf281d618350" # Download bQTL experimental data for SPI1 loci # !wget http://mitra.stanford.edu/kundaje/projects/dragonn/SPI1.bQTLs.txt.gz # + [markdown] colab_type="text" id="sp9mi-6_V4z4" # ## Generating genome-wide classification and regression labels # Home # + [markdown] colab_type="text" id="Zmt5OJP_V4z5" # 
We will use the *genomewide_labels* function from the [seqdataloader](https://github.com/kundajelab/seqdataloader) package to generate positive and negative labels for the TF-ChIPseq peaks across the genome. We will treat each sample as a task for the model and compare the performance of the model on SPI1 task in the single-tasked and multi-tasked setting. # + colab={} colab_type="code" id="SLGpH2rOV4z6" from seqdataloader import * # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="u2wpRugxV4z7" outputId="c695d444-b6d0-408b-f294-6afcdc6b1033" ## seqdataloader accepts an input file, which we call SPI1.tasks.tsv, with task names in column 1, corresponding ## peak files in column 2, and the signal track in column 3. In this tutorial, the task file will have a single task entry for the SPI1 TF CHiP-seq with open("SPI1.task.tsv",'w') as f: f.write("SPI1\tSPI1.narrowPeak.gz\tSPI1.pooled.fc.bigWig\n") f.close() # !cat SPI1.task.tsv # + [markdown] colab_type="text" id="-pqz2oVGV4z_" # With the parameter configuration below, seqdataloader splits the genome into 1kb regions, with a stride of 50. Each 1kb region is centered at a 200 bp bin, with a left flank of 400 bases and a right flank of 400 bases. # # * In the classification case, each 200 bp bin is labeled as positive if a narrowPeak summit overlaps with it. The bin is labeled negative if there is no overlap with the narrowPeak. # * In the regression case, the asinh(mean coverage) in the 200 bp bin is computed. # # + [markdown] colab_type="text" id="e--f8QWuV4z_" # **Note**: The label generation may take 10 - 15 minutes to complete. If you prefer not to wait, you can download the # pre-generated classification and regression labels for the training, validation, and test sets by uncommenting the code below: # + colab={"base_uri": "https://localhost:8080/", "height": 1205} colab_type="code" id="LMG9IzPnV40A" outputId="aca1e05d-dc62-416e-b22f-b68a53aaf3f7" ## Classification labels # ! wget http://mitra.stanford.edu/kundaje/projects/dragonn/SPI1.train.classification.hdf5 # ! wget http://mitra.stanford.edu/kundaje/projects/dragonn/SPI1.valid.classification.hdf5 # ! wget http://mitra.stanford.edu/kundaje/projects/dragonn/SPI1.test.classification.hdf5 ## Regression labels # ! wget http://mitra.stanford.edu/kundaje/projects/dragonn/SPI1.train.regression.hdf5 # ! wget http://mitra.stanford.edu/kundaje/projects/dragonn/SPI1.valid.regression.hdf5 # ! wget http://mitra.stanford.edu/kundaje/projects/dragonn/SPI1.test.regression.hdf5 # + [markdown] colab_type="text" id="_lleRdAaV40B" # If you prefer to generate the labels from scratch, execute the two code cell below: # + colab={} colab_type="code" id="erh3B4hIV40D" # Generate genome-wide classification labels #1) Training set: all chromosomes with the exception of 1,2, and 19 in our training set. Also, the dataset does not # include chromosome Y, so we exclude it as well. 
train_set_params={ 'task_list':"SPI1.task.tsv", 'outf':"SPI1.train.classification.hdf5", 'output_type':'hdf5', 'chrom_sizes':'hg19.chrom.sizes', 'chroms_to_exclude':['chr1','chr2','chr19','chrY'], 'bin_stride':50, 'left_flank':400, 'right_flank':400, 'bin_size':200, 'threads':4, 'subthreads':4, 'allow_ambiguous':False, 'labeling_approach':'peak_summit_in_bin_classification' } genomewide_labels(train_set_params) #2) Validation set: Chromosome 1 valid_set_params={'task_list':"SPI1.task.tsv", 'outf':"SPI1.valid.classification.hdf5", 'output_type':'hdf5', 'chrom_sizes':'hg19.chrom.sizes', 'chroms_to_keep':'chr1', 'bin_stride':50, 'left_flank':400, 'right_flank':400, 'bin_size':200, 'threads':1, 'subthreads':4, 'allow_ambiguous':False, 'labeling_approach':'peak_summit_in_bin_classification' } genomewide_labels(valid_set_params) #3) Test set: Chromosomes 2, 19 test_set_params={ 'task_list':"SPI1.task.tsv", 'outf':"SPI1.test.classification.hdf5", 'output_type':'hdf5', 'chrom_sizes':'hg19.chrom.sizes', 'chroms_to_keep':['chr2','chr19'], 'bin_stride':50, 'left_flank':400, 'right_flank':400, 'bin_size':200, 'threads':2, 'subthreads':4, 'allow_ambiguous':False, 'labeling_approach':'peak_summit_in_bin_classification' } genomewide_labels(test_set_params) # + colab={} colab_type="code" id="PBqbysuAV40G" # Generate regression labels genome-wide #1) Training set: all chromosomes with the exception of 1,2, and 19 in our training set train_set_params={ 'task_list':"SPI1.task.tsv", 'outf':"SPI1.train.regression.hdf5", 'output_type':'hdf5', 'chrom_sizes':'hg19.chrom.sizes', 'chroms_to_exclude':['chr1','chr2','chr19','chrY'], 'bin_stride':50, 'left_flank':400, 'right_flank':400, 'bin_size':200, 'threads':4, 'subthreads':4, 'allow_ambiguous':False, 'labeling_approach':'all_genome_bins_regression' } genomewide_labels(train_set_params) #2) Validation set: Chromosome 1 valid_set_params={'task_list':"SPI1.task.tsv", 'outf':"SPI1.valid.regression.hdf5", 'output_type':'hdf5', 'chrom_sizes':'hg19.chrom.sizes', 'chroms_to_keep':'chr1', 'bin_stride':50, 'left_flank':400, 'right_flank':400, 'bin_size':200, 'threads':1, 'subthreads':4, 'allow_ambiguous':False, 'labeling_approach':'all_genome_bins_regression' } genomewide_labels(valid_set_params) #3) Test set: Chromosomes 2, 19 test_set_params={ 'task_list':"SPI1.task.tsv", 'outf':"SPI1.test.regression.hdf5", 'output_type':'hdf5', 'chrom_sizes':'hg19.chrom.sizes', 'chroms_to_keep':['chr2','chr19'], 'bin_stride':50, 'left_flank':400, 'right_flank':400, 'bin_size':200, 'threads':2, 'subthreads':4, 'allow_ambiguous':False, 'labeling_approach':'all_genome_bins_regression' } genomewide_labels(test_set_params) # + [markdown] colab_type="text" id="x1m9HsgXV40J" # Let's examine the files that were generated: # + colab={"base_uri": "https://localhost:8080/", "height": 390} colab_type="code" id="Q0SgKnZtV40J" outputId="5e5501e6-61e3-44e1-f6ca-94e70f3ff63f" #The code generates bed file outputs with a label of 1 or 0 for each 1kb # genome bin for each task. Note that the bins are shifted with a stride of 50. 
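# (Because output_type was set to "hdf5" above, the labels are written to HDF5 files rather
#  than bed files, which is why pd.read_hdf is used here to preview them. As an optional,
#  hedged sanity check, the fraction of positive bins in a slice of the training labels can
#  be inspected by uncommenting the lines below:)
# preview = pd.read_hdf("SPI1.train.classification.hdf5", start=0, stop=500000)
# print((preview > 0).mean())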
pd.read_hdf("SPI1.train.classification.hdf5",start=1000000,stop=1000010) # + colab={"base_uri": "https://localhost:8080/", "height": 390} colab_type="code" id="weH35tmhV40N" outputId="a8aea293-f560-400f-cb2f-2d0085625934" pd.read_hdf("SPI1.train.regression.hdf5",start=1000000,stop=1000010) # + [markdown] colab_type="text" id="FKHBBFpRV40Q" # ## Optional: Download pre-generated models and test-set predictions # Home # # Next, we will train classification and regression models to predict TF CHiP-seq peaks for SPI1. If you want to skip straight to model interpretation and bQTL analysis, you can download the pre-trained models by uncommenting the # block of code below. # + colab={} colab_type="code" id="CqyIROINV40R" from keras.models import load_model ## Download classification model # #! wget http://mitra.stanford.edu/kundaje/projects/dragonn/SPI1.classification.model.hdf5 spi1_classification_model=load_model("SPI1.kat.classification.model.hdf5") ## Download regression model # #! wget http://mitra.stanford.edu/kundaje/projects/dragonn/SPI1.regression.model.hdf5 spi1_regression_model=load_model("SPI1.kat.regression.model.hdf5") ## Get test set classification model and regression model predictions #import h5py #test_set_predictions=h5py.File("SPI1.test.predictions.hdf5") #spi1_test_classification_predictions=test_set_predictions['classification'].value #spi1_test_regression_predictions=test_set_predictions['regression'].value # + [markdown] colab_type="text" id="FZcwz5AmV40U" # ## Genome-wide classification model # Home # # + colab={} colab_type="code" id="ZeBBukYGV40V" #To prepare for model training, we import the necessary functions and submodules from keras from keras.models import Sequential from keras.layers.core import Dropout, Reshape, Dense, Activation, Flatten from keras.layers.convolutional import Conv2D, MaxPooling2D from keras.optimizers import Adadelta, SGD, RMSprop; import keras.losses; from keras.constraints import maxnorm; from keras.layers.normalization import BatchNormalization from keras.regularizers import l1, l2 from keras.callbacks import EarlyStopping, History from keras import backend as K K.set_image_data_format('channels_last') # + colab={} colab_type="code" id="CXtJhYf2V40Z" from concise.metrics import tpr, tnr, fpr, fnr, precision, f1 def initialize_classification_model(ntasks=1): #Define the model architecture in keras (regularized, 3-layer convolution model followed by 1 dense layer) model=Sequential() model.add(Conv2D(filters=15,kernel_size=(1,10),input_shape=(1,1000,4))) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(MaxPooling2D(pool_size=(1,35))) model.add(Conv2D(filters=15,kernel_size=(1,10))) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Conv2D(filters=15,kernel_size=(1,10))) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(ntasks)) model.add(Activation("sigmoid")) ##compile the model, specifying the Adam optimizer, and binary cross-entropy loss. 
model.compile(optimizer='adam',loss='binary_crossentropy', metrics=[tpr, tnr, fpr, fnr, precision, f1]) return model # + [markdown] colab_type="text" id="XY-g6ik9V40c" # We create generators for the training and validation data: # + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="5PrSMLLiV40d" outputId="d7ae2438-6e8e-4fc0-c1b3-6bde39667604" #create the generators, upsample positives to ensure they constitute 30% of each batch from dragonn.generators import * spi1_train_classification_gen=DataGenerator("SPI1.train.classification.hdf5","hg19.genome.fa.gz",upsample_ratio=0.3, batch_size=256) spi1_valid_classification_gen=DataGenerator("SPI1.valid.classification.hdf5","hg19.genome.fa.gz",upsample_ratio=0.3, batch_size=256) # + colab={} colab_type="code" id="NLRfBhebV40e" #Train the SPI1 classification model spi1_classification_model=initialize_classification_model() ## use the keras fit_generator function to train the model with early stopping after 3 epochs history_classification=spi1_classification_model.fit_generator(spi1_train_classification_gen, validation_data=spi1_valid_classification_gen, steps_per_epoch=10000, validation_steps=5000, epochs=150, verbose=1, use_multiprocessing=True, workers=40, max_queue_size=100, callbacks=[EarlyStopping(patience=3,restore_best_weights=True),History()]) # + colab={} colab_type="code" id="DN2bFz_aV40h" outputId="323bc266-fc71-4e30-e277-b2db3c3e9a0a" ## Plot the learning curves for SPI1 from dragonn.tutorial_utils import plot_learning_curve plot_learning_curve(history_classification) # + [markdown] colab_type="text" id="YoP336y_V40k" # We now measure how well the model performed by calculating performance metrics on the test splits across the whole genome. # + colab={} colab_type="code" id="ZF6VNZH3V40k" outputId="101ca0a8-db49-44cc-c133-e799b070e1e0" from dragonn.generators import * spi1_test_classification_gen=DataGenerator("SPI1.test.classification.hdf5", "hg19.genome.fa.gz", upsample=False, add_revcomp=False, batch_size=1000, tasks=['SPI1']) spi1_test_classification_predictions=spi1_classification_model.predict_generator(spi1_test_classification_gen, max_queue_size=5000, workers=40, use_multiprocessing=True, verbose=1) spi1_test_classification_truth=spi1_test_classification_gen.data # + colab={} colab_type="code" id="1QGwsaLsV40n" outputId="829e46ff-12f4-42c9-c18c-2d9c66a22e97" spi1_test_classification_predictions.shape # + colab={} colab_type="code" id="9LAq8j_MV40r" outputId="d1b31200-f1a6-493b-ad99-6c0c62b65645" spi1_test_classification_truth.shape # + colab={} colab_type="code" id="X83Mt6VXV40t" outputId="b7b4fe79-812d-463a-f2f1-611122a6fa0e" ## Generate a ClassificationResult object to print performance metrics on held-out test set from dragonn.metrics import ClassificationResult print(ClassificationResult(spi1_test_classification_truth.values.astype(bool),spi1_test_classification_predictions)) # + colab={} colab_type="code" id="jBvkHg7zV40y" #save the models spi1_classification_model.save("SPI1.classification.model.hdf5") # - # + [markdown] colab_type="text" id="b0sibNElV403" # ## Genome-wide regression model # Home # + colab={} colab_type="code" id="nM0-1aa9V404" def initialize_regression_model(ntasks=1): #Define the model architecture in keras (regularized, 3-layer convolution model followed by 1 dense layer) model=Sequential() model.add(Conv2D(filters=15,kernel_size=(1,10),input_shape=(1,1000,4))) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(MaxPooling2D(pool_size=(1,35))) 
model.add(Conv2D(filters=10,kernel_size=(1,10))) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Conv2D(filters=5,kernel_size=(1,10))) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(ntasks)) ##compile the model, specifying the Adam optimizer, and binary cross-entropy loss. model.compile(optimizer='adam',loss='mse') return model # + colab={} colab_type="code" id="zO-uKK8GV407" outputId="04c1a178-b8ee-4b14-c944-328c17ecec9c" #we want to determine a threshold for upsampling the non-zero bins in a given batch # extract 5 million datapoints from the training data and observe the distribution of non-zero signal values sample=pd.read_hdf("SPI1.train.regression.hdf5",start=0,stop=5000000) nonzero_sample=sample[sample.max(axis=1)>0] print(nonzero_sample.shape) nonzero_sample.hist(bins=100) # + [markdown] colab_type="text" id="F5Raqyz3V40_" # This suggests that 0.1 is a reasonable threshold for upsampling signal bins in regression # + colab={} colab_type="code" id="JSjGIlYTV41B" #create the generators, no upsampling of positives is used for regression. from dragonn.generators import * spi1_train_regression_gen=DataGenerator("SPI1.train.regression.hdf5","hg19.genome.fa.gz",upsample_ratio=0.3,upsample_thresh=0.01) spi1_valid_regression_gen=DataGenerator("SPI1.valid.regression.hdf5","hg19.genome.fa.gz",upsample_ratio=0.3,upsample_thresh=0.01) # + colab={} colab_type="code" id="P3pSs4TYV41E" outputId="bb50b654-2336-4b6d-d16c-a8a15672f379" #Train the SPI1 regression model spi1_regression_model=initialize_regression_model() ## use the keras fit_generator function to train the model with early stopping after 3 epochs history_regression=spi1_regression_model.fit_generator(spi1_train_regression_gen, validation_data=spi1_valid_regression_gen, steps_per_epoch=10000, validation_steps=5000, epochs=150, verbose=1, use_multiprocessing=True, workers=40, max_queue_size=100, callbacks=[EarlyStopping(patience=3,restore_best_weights=True),History()]) # + colab={} colab_type="code" id="EFxneNJqV41H" outputId="472b64dc-f85d-49e6-8bd0-4d023374e04a" plot_learning_curve(history_regression) # + colab={} colab_type="code" id="_o3I4gN7V41K" outputId="da95db30-a6ad-4043-86c6-ed587a44277b" from dragonn.generators import * spi1_test_regression_gen=DataGenerator("SPI1.test.regression.hdf5", "hg19.genome.fa.gz", upsample=False, add_revcomp=False, batch_size=1000, tasks=['SPI1']) spi1_test_regression_predictions=spi1_regression_model.predict_generator(spi1_test_regression_gen, max_queue_size=5000, workers=40, use_multiprocessing=True, verbose=1) spi1_test_regression_truth=spi1_test_regression_gen.data # + colab={} colab_type="code" id="QMbEZo8XGd2m" ## find the indices of the non-zero coverage bins nonzero_bins=spi1_test_regression_truth.max(axis=1)>0 # + colab={} colab_type="code" id="-byNK4qMV41N" outputId="aa18b7e6-6ee2-439d-ca0b-d68d2a7aa8a1" #Calculate spearman and pearson correlation between truth labels and predictions from scipy.stats import pearsonr, spearmanr corr_pearson=pearsonr(spi1_test_regression_truth,spi1_test_regression_predictions) corr_spearman=spearmanr(spi1_test_regression_truth,spi1_test_regression_predictions) print("Pearson correlation on test set:"+str(corr_pearson)) print("Spearman correlation on test set:"+str(corr_spearman)) # + colab={} colab_type="code" id="YipLiXRFGd2t" outputId="f7e2d334-30b4-4661-d8bb-23832c8c5763" # Calculate the spearman and pearson correlation, restricted to non-zero bins 
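# (Both pearsonr and spearmanr return a (correlation coefficient, p-value) pair, which is
#  what the print statements in this cell display.)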
corr_pearson_nonzero_bins=pearsonr(spi1_test_regression_truth[nonzero_bins],spi1_test_regression_predictions[nonzero_bins]) corr_spearman_nonzero_bins=spearmanr(spi1_test_regression_truth[nonzero_bins],spi1_test_regression_predictions[nonzero_bins]) print("Pearson correlation on test set:"+str(corr_pearson_nonzero_bins)) print("Spearman correlation on test set:"+str(corr_spearman_nonzero_bins)) # + colab={} colab_type="code" id="kVEfZ1d0V41O" #There is some overfitting, let's save this model and see if we can do better spi1_regression_model.save("SPI1.regression.model.hdf5") # + colab={} colab_type="code" id="UGW2FQrgGd2y" outputId="e3069d0a-65d3-4c8c-be0d-eef77ed83581" spi1_test_regression_truth.values[0:10].squeeze() # + colab={} colab_type="code" id="SKa34qjbGd21" test_df=pd.DataFrame({"Observed":list(spi1_test_regression_truth.values.squeeze()), "Predicted":list(spi1_test_regression_predictions.squeeze())}) # + colab={} colab_type="code" id="L1tUvhU5Gd27" test_df_nonzero=pd.DataFrame({"Observed":list(spi1_test_regression_truth[nonzero_bins].values.squeeze()), "Predicted":list(spi1_test_regression_predictions[nonzero_bins].squeeze())}) # + colab={} colab_type="code" id="Xqvik10lGd29" outputId="4d35b751-13ce-4555-f798-916765d4e981" import plotnine from plotnine import * print((ggplot(test_df,aes(x="Observed",y="Predicted")) +geom_bin2d(bins=100) +theme_bw() +xlab("Observed asinh(mean coverage in FC bigWig") +ylab("Model prediction") +ggtitle("SPI1 regression model test set prediction"))) print((ggplot(test_df_nonzero,aes(x="Observed",y="Predicted")) +geom_bin2d(bins=100) +theme_bw() +xlab("Observed asinh(mean coverage in FC bigWig") +ylab("Model prediction") +ggtitle("SPI1 regression model test set prediction: bins with nonzero coverage"))) # + colab={} colab_type="code" id="TOBvin88Gd3A" outputId="5fc16ecd-16d2-47f0-a39f-e91244e4d9cb" # Plot observed vs predicted regression values plt.scatter(spi1_test_regression_truth, spi1_test_regression_predictions, alpha=0.01) plt.xlabel("Observed asinh(mean coverage in FC bigWig)") plt.ylabel("Model prediction") plt.title("SPI1 regression model test set prediction") plt.show() # + colab={} colab_type="code" id="WCgR4DmGGd3C" outputId="82246537-1073-48b1-81f7-1dc8f2a4b178" # Plot observed vs predicted regression values for the nonzero bins plt.scatter(spi1_test_regression_truth, spi1_test_regression_predictions, alpha=0.01) plt.xlabel("Observed asinh(mean coverage in FC bigWig) for bins ") plt.ylabel("Model prediction") plt.title("SPI1 regression model test set prediction: bins with nonzero coverage") plt.show() # + [markdown] colab_type="text" id="48jGNSBtV41R" # ## Genome-wide interpretation of true positive predictions in SPI1, with DeepLIFT # Home # # ### Classification Model # + colab={} colab_type="code" id="RWumi0mQV41S" #get the true positive predictions with a threshold of 0.9 (i.e. 
high confidence true positive predictions) spi1_test_classification_truth_bool=spi1_test_classification_truth.values.astype(bool) true_pos_spi1=spi1_test_classification_truth[spi1_test_classification_truth_bool*spi1_test_classification_predictions >0.9] true_pos_spi1.head # + colab={} colab_type="code" id="ecJa0a2HV41U" outputId="bb860e8e-f083-41e3-ed79-e66bb8aec2ac" true_pos_spi1.shape # + colab={} colab_type="code" id="UH5WvDmXV41W" outputId="c549e164-1087-4f2b-93bd-7f0677075090" from dragonn.utils import one_hot_from_bed deep_lift_input_spi1=one_hot_from_bed([i for i in true_pos_spi1.index],"hg19.genome.fa.gz") deep_lift_input_spi1.shape # + colab={} colab_type="code" id="41dInIg-V41Y" from dragonn.tutorial_utils import deeplift # + colab={} colab_type="code" id="hViDtFpGV41a" deep_lift_scores_spi1=deeplift(spi1_classification_model,deep_lift_input_spi1) # + colab={} colab_type="code" id="mFkS3UwpV41c" outputId="953cdbc2-d7c0-4a17-d6d1-227acb47c325" deep_lift_scores_spi1.shape # + [markdown] colab_type="text" id="skCZQ8j0V41e" # Let's plot a few of the DeepLIFT tracks and see if the model successfully learned SPI1: # + colab={} colab_type="code" id="Srt-dPa_V41e" from dragonn.tutorial_utils import plot_seq_importance # + colab={} colab_type="code" id="nhcSoq48V41h" outputId="fd5de43b-d65f-4abb-d998-7115c5ad1c1e" plot_seq_importance(deep_lift_scores_spi1[0],deep_lift_input_spi1[0]) # + colab={} colab_type="code" id="eyejvCeWV41j" outputId="9985d83c-d05f-4979-f472-bb352cce2f80" plot_seq_importance(deep_lift_scores_spi1[1],deep_lift_input_spi1[1]) # + colab={} colab_type="code" id="CyuioSW_V41o" outputId="75f83e18-bf46-475d-c43b-22ab8cdc56d9" plot_seq_importance(deep_lift_scores_spi1[2],deep_lift_input_spi1[2]) # + [markdown] colab_type="text" id="AfkuGJTGV41r" # Let's zoom in to the center of one sequence so that it is easier to distinguish the motif: # + colab={} colab_type="code" id="YCr3vtPeV41s" outputId="a04a10b2-ff18-44cb-f918-005872cb3b59" plot_seq_importance(deep_lift_scores_spi1[2].squeeze()[550:650],deep_lift_input_spi1[2].squeeze()[550:650]) # + [markdown] colab_type="text" id="tiw5YPInV41v" # If we query the sequence "CACTTCCCCT" in the [TomTom](http://meme-suite.org/tools/tomtom) software from the MEME suite, we find that the motif is a good match for SPIB: # SPI12TomTom # # + [markdown] colab_type="text" id="kEUxg-r1V41x" # ### Regression model # + colab={} colab_type="code" id="bEJudwWXV41y" outputId="f2395f38-be1f-4b76-95a3-91cc85efa471" #Sanity-check that the model is learning the SPI1 motif by running DeepLIFT on True Positives with high confidence (>0.9) #get the true positive predictions true_pos=spi1_test_regression_truth[(spi1_test_regression_truth.values*spi1_test_regression_predictions)>2] true_pos.shape # + colab={} colab_type="code" id="z5dpuvzLV413" outputId="c0db98aa-22bf-4e44-f15f-0a75df7c9239" deep_lift_input=one_hot_from_bed([i for i in true_pos.index],"hg19.genome.fa.gz") deep_lift_input.shape # + colab={} colab_type="code" id="D-3kc90ZV416" outputId="514a9b2b-0f73-45ac-fe0c-b695bb182dbb" help(deeplift) # + colab={} colab_type="code" id="YeuZFMZqV41-" deep_lift_scores_spi1=deeplift(spi1_regression_model,deep_lift_input_spi1,target_layer_idx=-1) # + colab={} colab_type="code" id="8qigmzDOV41-" outputId="1ef6a324-0b04-4000-a658-87c75cb0c50d" plot_seq_importance(deep_lift_scores_spi1[0],deep_lift_input_spi1[0]) # + colab={} colab_type="code" id="EPHe9I8VV42A" outputId="b7ee3839-2d4e-466b-e2f7-ede00db8a16b" 
plot_seq_importance(deep_lift_scores_spi1[1],deep_lift_input_spi1[1]) # + colab={} colab_type="code" id="bogEKZN2V42C" outputId="6465c01d-b842-4217-9766-aca7a14797b2" plot_seq_importance(deep_lift_scores_spi1[2],deep_lift_input_spi1[2]) # + colab={} colab_type="code" id="Ck63kAsAV42F" outputId="5353b1fb-b881-4515-f931-96f49f196045" plot_seq_importance(deep_lift_scores_spi1[2].squeeze()[550:650],deep_lift_input_spi1[2].squeeze()[550:650]) # + [markdown] colab_type="text" id="PWqxtR6NV42I" # The motif learned by the regression model matches the canonical SPI1 motif, though the deepLIFT tracks are noisier compared to those for the classification model. # # + [markdown] colab_type="text" id="-PGA_k3RV42J" # ## Recovering bQTL effect sizes: Classification vs Regression # Home # + colab={} colab_type="code" id="GROAoPZDV42J" from dragonn.generators import * bqtl_ref_gen=BQTLGenerator("SPI1.bQTLs.txt.gz","hg19.genome.fa.gz","POSTallele") bqtl_alt_gen=BQTLGenerator("SPI1.bQTLs.txt.gz","hg19.genome.fa.gz","ALTallele") # + colab={} colab_type="code" id="GJq0Ic_8V42L" outputId="f27e9d3e-87a2-479f-90ca-320bf3066fc0" bqtl_ref_classification_predictions=spi1_classification_model.predict_generator(bqtl_ref_gen, max_queue_size=5000, workers=40, use_multiprocessing=True, verbose=1) # + colab={} colab_type="code" id="xlPJpmmLV42L" outputId="bf8b8ff1-0268-49af-ce7d-9d8875e2ae3f" bqtl_alt_classification_predictions=spi1_classification_model.predict_generator(bqtl_alt_gen, max_queue_size=5000, workers=40, use_multiprocessing=True, verbose=1) bqtl_ref_classification_truth=bqtl_ref_gen.data['pvalue'] # + colab={} colab_type="code" id="MGXJaSvPV42N" outputId="32d58541-15ab-4c82-90d5-b3118ed18520" print(bqtl_ref_classification_predictions.shape) print(bqtl_alt_classification_predictions.shape) print(bqtl_ref_classification_truth.shape) # + colab={} colab_type="code" id="1NpOM03tV42P" outputId="9e58fdff-c60a-463a-fa43-2a6979d96064" bqtl_ref_regression_predictions=spi1_regression_model.predict_generator(bqtl_ref_gen, max_queue_size=5000, workers=40, use_multiprocessing=True, verbose=1) bqtl_alt_regression_predictions=spi1_regression_model.predict_generator(bqtl_alt_gen, max_queue_size=5000, workers=40, use_multiprocessing=True, verbose=1) # + colab={} colab_type="code" id="hi_Pr6jcV42Q" outputId="62710f23-734d-4ea4-b9a9-e1c28948d597" plt.scatter(bqtl_ref_classification_predictions, bqtl_alt_classification_predictions, alpha=0.01) plt.xlabel("Ref") plt.ylabel("Alt") plt.title("BQTL Classification Model Predictions") plt.show() # + colab={} colab_type="code" id="tg-ZRo1tV42R" outputId="9cb78eaf-ed5f-4fe4-ca02-6f927b07bcca" plt.scatter(bqtl_ref_regression_predictions, bqtl_alt_regression_predictions, alpha=0.01) plt.xlabel("Ref") plt.ylabel("Alt") plt.title("BQTL Regression Model Predictions") plt.show() # + [markdown] colab_type="text" id="yxG07_SzV42T" # ## Model-predicted SNP effect sizes vs bQTL effect sizes # Home # + colab={} colab_type="code" id="YdFUgn60V42T" logpval=np.log10(bqtl_ref_classification_truth.values) delta=bqtl_alt_classification_predictions-bqtl_ref_classification_predictions # + [markdown] colab_type="text" id="WdQsnAM7Gd4B" # ## Kat's Model Architecture (Classification) # Home # + colab={} colab_type="code" id="7yIyOg-AGd4B" from concise.metrics import tpr, tnr, fpr, fnr, precision, f1 from keras.constraints import max_norm def initialize_kat_classification_model(ntasks=1): #Define the model architecture in keras (regularized, 3-layer convolution model followed by 1 dense layer) 
model=Sequential() model.add(Conv2D(filters=50,kernel_size=(1,15),padding="same", kernel_constraint=max_norm(7.0,axis=-1),input_shape=(1,1000,4))) model.add(BatchNormalization(axis=-1)) model.add(Activation('relu')) model.add(Conv2D(filters=50,kernel_size=(1,15),padding="same")) model.add(BatchNormalization(axis=-1)) model.add(Activation('relu')) model.add(Conv2D(filters=50,kernel_size=(1,13),padding="same")) model.add(BatchNormalization(axis=-1)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(1,40))) model.add(Flatten()) model.add(Dense(50)) model.add(BatchNormalization(axis=-1)) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Dense(ntasks)) model.add(Activation("sigmoid")) ##compile the model, specifying the Adam optimizer, and binary cross-entropy loss. model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[tpr, tnr, fpr, fnr, precision, f1]) return model # + colab={} colab_type="code" id="EG64R1eNGd4C" #create the generators, upsample positives to ensure they constitute 30% of each batch from dragonn.generators import * spi1_train_classification_gen=DataGenerator("SPI1.train.classification.hdf5","hg19.genome.fa.gz",upsample_ratio=0.3, batch_size=256) spi1_valid_classification_gen=DataGenerator("SPI1.valid.classification.hdf5","hg19.genome.fa.gz",upsample_ratio=0.3, batch_size=256) # + colab={} colab_type="code" id="3SeWRHimGd4D" outputId="7516c239-52e8-4c19-dafc-ee32c264808c" #Train the SPI1 classification model spi1_kat_classification_model=initialize_kat_classification_model() ## use the keras fit_generator function to train the model with early stopping after 3 epochs history_kat_classification=spi1_kat_classification_model.fit_generator(spi1_train_classification_gen, validation_data=spi1_valid_classification_gen, steps_per_epoch=10000, validation_steps=5000, epochs=150, verbose=1, use_multiprocessing=True, workers=40, max_queue_size=100, callbacks=[EarlyStopping(patience=3,restore_best_weights=True),History()]) # + colab={} colab_type="code" id="a3D2uhXwGd4E" outputId="b5450d6e-447b-4a4a-e5c0-19890b4949bc" ## Plot the learning curves for SPI1 from dragonn.tutorial_utils import plot_learning_curve plot_learning_curve(history_kat_classification) # + colab={} colab_type="code" id="Jq6tLmcTGd4G" outputId="9757df93-a9e1-45c9-85f9-8cc5c51fd926" from dragonn.generators import * spi1_test_classification_gen=DataGenerator("SPI1.test.classification.hdf5", "hg19.genome.fa.gz", upsample=False, add_revcomp=False, batch_size=1000, tasks=['SPI1']) spi1_test_classification_predictions=spi1_kat_classification_model.predict_generator(spi1_test_classification_gen, max_queue_size=5000, workers=40, use_multiprocessing=True, verbose=1) spi1_test_classification_truth=spi1_test_classification_gen.data # + colab={} colab_type="code" id="-fWPY4DoGd4I" outputId="ab351e3b-7f55-465f-8f74-a5e8b7981d18" ## Generate a ClassificationResult object to print performance metrics on held-out test set from dragonn.metrics import ClassificationResult print(ClassificationResult(spi1_test_classification_truth.values.astype(bool),spi1_test_classification_predictions)) # + [markdown] colab_type="text" id="dPWznGgeGd4K" # ## Kat's Model Architecture (Regression) # Home # + colab={} colab_type="code" id="2Nt_b4_BGd4K" def initialize_kat_regression_model(ntasks=1): #Define the model architecture in keras (regularized, 3-layer convolution model followed by 1 dense layer) model=Sequential() model.add(Conv2D(filters=50,kernel_size=(1,15),padding="same", 
kernel_constraint=max_norm(7.0,axis=-1),input_shape=(1,1000,4))) model.add(BatchNormalization(axis=-1)) model.add(Activation('relu')) model.add(Conv2D(filters=50,kernel_size=(1,15),padding="same")) model.add(BatchNormalization(axis=-1)) model.add(Activation('relu')) model.add(Conv2D(filters=50,kernel_size=(1,13),padding="same")) model.add(BatchNormalization(axis=-1)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(1,40))) model.add(Flatten()) model.add(Dense(50)) model.add(BatchNormalization(axis=-1)) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Dense(ntasks)) ##compile the model, specifying the Adam optimizer, and binary cross-entropy loss. model.compile(optimizer='adam',loss='mse') return model # + colab={} colab_type="code" id="Wm8P3ABcGd4L" #create the generators, no upsampling of positives is used for regression. from dragonn.generators import * spi1_train_regression_gen=DataGenerator("SPI1.train.regression.hdf5","hg19.genome.fa.gz",upsample_ratio=0.3,upsample_thresh=0.01) spi1_valid_regression_gen=DataGenerator("SPI1.valid.regression.hdf5","hg19.genome.fa.gz",upsample_ratio=0.3,upsample_thresh=0.01) # + colab={} colab_type="code" id="O2xnIWHXGd4M" outputId="716c5c00-0b5b-417d-aa33-72cd483c7546" #Train the SPI1 regression model spi1_kat_regression_model=initialize_kat_regression_model() ## use the keras fit_generator function to train the model with early stopping after 3 epochs history_kat_regression=spi1_kat_regression_model.fit_generator(spi1_train_regression_gen, validation_data=spi1_valid_regression_gen, steps_per_epoch=10000, validation_steps=5000, epochs=150, verbose=1, use_multiprocessing=True, workers=40, max_queue_size=100, callbacks=[EarlyStopping(patience=3,restore_best_weights=True),History()]) # + colab={} colab_type="code" id="S2sQWdgtGd4O" outputId="3db4acbc-8de5-4d24-96da-3c4c41eac26a" plot_learning_curve(history_kat_regression) # + colab={} colab_type="code" id="JvSYpEFZGd4P" outputId="7fcf68b2-7f0e-4191-ae45-71948da1cd3f" from dragonn.generators import * spi1_test_regression_gen=DataGenerator("SPI1.test.regression.hdf5", "hg19.genome.fa.gz", upsample=False, add_revcomp=False, batch_size=1000, tasks=['SPI1']) spi1_test_regression_predictions=spi1_kat_regression_model.predict_generator(spi1_test_regression_gen, max_queue_size=5000, workers=40, use_multiprocessing=True, verbose=1) spi1_test_regression_truth=spi1_test_regression_gen.data # + colab={} colab_type="code" id="2vIyYceoGd4R" ## find the indices of the non-zero coverage bins nonzero_bins=spi1_test_regression_truth.max(axis=1)>0 # + colab={} colab_type="code" id="pJ8NbtgWGd4T" outputId="8a0c8f6c-512f-419e-dd49-8002895cdeb7" #Calculate spearman and pearson correlation between truth labels and predictions from scipy.stats import pearsonr, spearmanr corr_pearson=pearsonr(spi1_test_regression_truth,spi1_test_regression_predictions) corr_spearman=spearmanr(spi1_test_regression_truth,spi1_test_regression_predictions) print("Pearson correlation on test set:"+str(corr_pearson)) print("Spearman correlation on test set:"+str(corr_spearman)) # + colab={} colab_type="code" id="QpnVA7VsGd4U" outputId="94a59775-862e-48e4-f036-d7d3fb4d7654" # Calculate the spearman and pearson correlation, restricted to non-zero bins corr_pearson_nonzero_bins=pearsonr(spi1_test_regression_truth[nonzero_bins],spi1_test_regression_predictions[nonzero_bins]) corr_spearman_nonzero_bins=spearmanr(spi1_test_regression_truth[nonzero_bins],spi1_test_regression_predictions[nonzero_bins]) print("Pearson correlation on 
test set:"+str(corr_pearson_nonzero_bins)) print("Spearman correlation on test set:"+str(corr_spearman_nonzero_bins)) # + colab={} colab_type="code" id="KCaEkJICGd4V" test_df=pd.DataFrame({"Observed":list(spi1_test_regression_truth.values.squeeze()), "Predicted":list(spi1_test_regression_predictions.squeeze())}) # + colab={} colab_type="code" id="9kktZbRCGd4W" test_df_nonzero=pd.DataFrame({"Observed":list(spi1_test_regression_truth[nonzero_bins].values.squeeze()), "Predicted":list(spi1_test_regression_predictions[nonzero_bins].squeeze())}) # + colab={} colab_type="code" id="af-4fxu0Gd4X" outputId="bc675856-7680-4a00-d016-a4e0fe3e6d55" import plotnine from plotnine import * print((ggplot(test_df,aes(x="Observed",y="Predicted")) +geom_bin2d(bins=100) +theme_bw() +xlab("Observed asinh(mean coverage in FC bigWig") +ylab("Model prediction") +ggtitle("SPI1 regression model test set prediction"))) print((ggplot(test_df_nonzero,aes(x="Observed",y="Predicted")) +geom_bin2d(bins=100) +theme_bw() +xlab("Observed asinh(mean coverage in FC bigWig") +ylab("Model prediction") +ggtitle("SPI1 regression model test set prediction: bins with nonzero coverage"))) # + [markdown] colab_type="text" id="cZnRq3SzGd4Z" # ## Kat's Model DeepLIFT profiles (Classification) # + colab={} colab_type="code" id="UdWiImsYGd4Z" spi1_test_classification_truth_bool=spi1_test_classification_truth.values.astype(bool) true_pos_spi1=spi1_test_classification_truth[spi1_test_classification_truth_bool*spi1_test_classification_predictions >0.9] # + colab={} colab_type="code" id="6iSq-J-iGd4a" outputId="578fdbcd-8cd1-4d5d-c6e2-4d28234b6610" from dragonn.utils import one_hot_from_bed deep_lift_input_spi1=one_hot_from_bed([i for i in true_pos_spi1.index],"hg19.genome.fa.gz") deep_lift_input_spi1.shape # + colab={} colab_type="code" id="XRIkb8oFGd4c" from dragonn.tutorial_utils import deeplift, plot_seq_importance deep_lift_scores_spi1=deeplift(spi1_kat_classification_model,deep_lift_input_spi1) # + colab={} colab_type="code" id="sCAY5G5JGd4g" outputId="580aca0a-9a35-4cc8-84bc-15e9bf399d56" plot_seq_importance(deep_lift_scores_spi1[0],deep_lift_input_spi1[0]) plot_seq_importance(deep_lift_scores_spi1[1],deep_lift_input_spi1[1]) plot_seq_importance(deep_lift_scores_spi1[2],deep_lift_input_spi1[2]) # + colab={} colab_type="code" id="fefAyEp4Gd4h" outputId="dca31e18-7f19-48bd-e76d-86a98e6cb221" plot_seq_importance(deep_lift_scores_spi1[2].squeeze()[400:500],deep_lift_input_spi1[2].squeeze()[400:500]) # + [markdown] colab_type="text" id="NaizmyYbGd4i" # If we query the sequence "GTTTCACTTCTGCAAA" in the [TomTom](http://meme-suite.org/tools/tomtom) software from the MEME suite, we find that the motif is a good match (p=3.55e-03) for SPIB: # SPI12TomTom # + [markdown] colab_type="text" id="RFlubWfKGd4i" # ## Kat's Model DeepLIFT profiles (Regression) # + colab={} colab_type="code" id="5DdIebdVGd4j" outputId="450f7622-0471-49af-fcb4-cd442ec04c57" #Sanity-check that the model is learning the SPI1 motif by running DeepLIFT on True Positives with high confidence (>0.9) #get the true positive predictions true_pos=spi1_test_regression_truth[(spi1_test_regression_truth.values*spi1_test_regression_predictions)>4] true_pos.shape # + colab={} colab_type="code" id="My8t2AvgGd4k" outputId="65d55c2a-48ca-4878-f554-5dcf00e33c9f" deep_lift_input=one_hot_from_bed([i for i in true_pos.index],"hg19.genome.fa.gz") deep_lift_input.shape # + colab={} colab_type="code" id="PzURc6r9Gd4l" 
deep_lift_scores_spi1=deeplift(spi1_regression_model,deep_lift_input_spi1,target_layer_idx=-1) # + colab={} colab_type="code" id="tqrN9urUGd4n" outputId="709f3bef-fe1b-40d3-b775-d20740a84deb" plot_seq_importance(deep_lift_scores_spi1[0],deep_lift_input_spi1[0]) plot_seq_importance(deep_lift_scores_spi1[1],deep_lift_input_spi1[1]) plot_seq_importance(deep_lift_scores_spi1[2],deep_lift_input_spi1[2]) # + colab={} colab_type="code" id="leSwvZWFGd4n" outputId="848dd89f-eaa3-4872-ae04-3301ac0fff7d" plot_seq_importance(deep_lift_scores_spi1[2].squeeze()[400:500],deep_lift_input_spi1[2].squeeze()[400:500]) # + [markdown] colab_type="text" id="QMBWHvnZV42V" # ## Conclusions # Home # + [markdown] colab_type="text" id="PFyKUGQnV42X" # ## Save tutorial outputs # Home # # We save the models and test set predictions generated in this tutorial to an hdf5 file so that they can be loaded more readily in the future. # + colab={} colab_type="code" id="WBNQNhg3V42X" #save the models #spi1_kat_classification_model.save("SPI1.kat.classification.model.hdf5") #spi1_kat_regression_model.save("SPI1.kat.regression.model.hdf5") #spi1_classification_model.save("SPI1.classification.model.hdf5") #spi1_regression_model.save("SPI1.regression.model.hdf5") #save the test predictions import h5py test_set_predictions=h5py.File("SPI1.test.kat.predictions.hdf5",'w') test_set_predictions.create_dataset("classification",data=spi1_test_classification_predictions) test_set_predictions.create_dataset("regression",data=spi1_test_regression_predictions) test_set_predictions.close() # + colab={} colab_type="code" id="LOPcHHY5V42a" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Python y Rendimiento # # Python es un lenguaje interpretado de alto nivel que es muy conveniente para prototipar y hacer análisis # exploratorio # # Sin embargo, esta conveniencia tiene un costo: # # > Python tiene un menor **rendimiento** a igual **complejidad** en comparación a lenguajes compilados de bajo nivel # # Podemos ser más específicos y hablar de **eficiencia** de diversas índoles: # # - Eficiencia temporal: Tiempo para completar una tarea (tiempo en la CPU) # - Eficiencia espacial: Utilización de espacio (memoria RAM, disco) # # Ambos son factores críticos en algunas aplicaciones, por ejemplo aplicaciones con muchos datos o mucho cómputo # # Existe entonces una necesidad por **mejorar el rendimiento de nuestro código**. En esta serie de lecciones veremos distintas formas de **optimizar** código escrito en Python # ## ¿Qué es la optimización de códigos/software? # # Se refiere a modificar una rutina computacional para mejorar su eficiencia, es decir reducir sus tiempos de ejecución y/o consumo de recursos # # El aspecto que se intenta modificar es aquel que limita nuestro programa. Un programa puede estar # # - limitado en CPU (compute-bound) # - limitado en memoria (memory-bound) # - limitado en ancho de banda (bandwidth-bound) # - limitado en entrada/salida (I/O bound) # # En el ámbito de la computación científica lo más común es enfrentar programas que están **límitados en CPU**, es decir # # - programas que utilizan la mayor parte de su tiempo haciendo cálculos # - programas que mejoran su rendimiento con la velocidad del CPU # # # :::{note} # # En ciertos casos podemos disminuir el tiempo de ejecución de una rutina incrementando el uso de memoria. 
Esto podría convertir un programa que es limitado en CPU a uno que es limitado en memoria. # # ::: # # ## Consejos antes de optimizar # # Considera las siguientes preguntas antes de comenzar el (a veces arduo) proceso de optimizar tus códigos # # **¿Cuándo optimizar?** # # `Si:` # # tu rutina está incompleta o no entrega el resultado esperado # # `Entonces:` # # No es momento de optimizar # # Dicho de otro modo, para casi cualquier rutina que escribamos, deberíamos considerar la secuencia # # 1. que (la rutina) corra # 1. que (la rutina) retorne el resultado correcto # 1. (opcionalmente) que (la rutina) tenga un buen rendimiento # # :::{important} # # La optimización está asociada al último punto y se lleva a cabo luego de cumplir los dos primeros # # ::: # # En la práctica hay que considerar que optimizar podría: # # - Hacer el código más complicado y menos legible # - Introducir bugs # - Tomar tiempo y bastante dedicación # # Por lo tanto debemos evitar optimizar de forma prematura # # ```{epigraph} # La optimización prematura es la raíz de todos los males # # -- [](http://wiki.c2.com/?PrematureOptimization) # ``` # # **¿Por qué optimizar?** # # En la secuencia mostrada anteriormente, podemos notar que el último punto no es esencial como lo son los dos primeros # # Optimizar solo es necesario si nuestro código: # # - no entrega el resultado correcto en el tiempo requerido # - requiere más memoria que la disponible en el hardware donde va a correr # # **¿Dónde optimizar?** # # Se debe evitar gastar tiempo optimizando rutinas que influyan poco en el rendimiento total del programa # # La optimización debería concentrarse en las secciones más lentas y/o costosas: **cuellos de botella** # # Previo a optimizar debemos hacer un ***profiling*** de nuestro código para encontrar las secciones críticas # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #importing numpy package import numpy as np A = np.array([[3,4],[2,1]]) B = np.array([[1,5],[3,7]]) print('A:\n',A) print('B:\n',B) #Addition of matrix A + B np.add(A,B) #Subtraction of matrix A - B np.subtract(A,B) # Scalar addition print(A) A + 2 np.add(A,2) A - 2 np.subtract(A,2) # Scalar Multiplication A * 2 np.multiply(A, 2) #This is not matrix multiplication print('A:\n',A) print('B:\n',B) np.multiply(A,B) A * B #This is not matrix multiplication #Matrix Multiplication np.matmul(A,B) #This is matrix matrix multiplication # ### Real time example on Matrix Multiplication # The local shop sells 3 types of fruits.
    # Apples cost Rs. 3 each
    # Cherries cost Rs. 4 each
    # Blueberries cost Rs. 2 each
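    # (The attached image is not available here; one plausible reading of the cell below is that
    # C holds the unit prices [3, 4, 2] and each column of D holds the quantities of apples,
    # cherries and blueberries sold on one of four days, so np.matmul(C, D) gives the revenue per
    # day: the first entry is 3*13 + 4*8 + 2*6 = 83.)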

    # # ![image.png](attachment:image.png) # Matrix Multiplication example C = np.array([[3,4,2]]) D = np.array([[13,9,7,15],[8,7,4,6],[6,4,0,3]]) print('C:\n',C, 'Shape: ',C.shape) print('D:\n',D, 'Shape: ',D.shape) np.matmul(C,D) #Matrix Multiplication is not commutative EF != FE E = np.array([[1,1],[100,100]]) F = np.array([[-1,1],[1,-1]]) np.matmul(E,F) np.matmul(F,E) #Matrix transpose print('A:\n',A) print('A Transpose:\n',A.T) np.transpose(A) np.transpose(np.transpose(A)) # Two times transpose gives the actual matrix # ### Transposition rule # ##### (AB)T = BT x AT # Here T stands for Transpose print('A:\n',A) print('B:\n',B) NewMatrix = np.matmul(A,B) np.transpose(NewMatrix) np.matmul(np.transpose(B), np.transpose(A)) #This satisfies the property np.matmul(np.transpose(A), np.transpose(B)) #As per rule this is incorrect # ### Special Matrices #Symmetric sMatrix = np.array([[1,1,-1],[1,2,0],[-1,0,5]]) sMatrix sMatrix.T #Matrix transpose gives us the same matrix as it is a symmetric matrix. aij = aji # Skew Symmetric Matrix ssMatrix = np.array([[0,1,-2],[-1,0,3],[2,-3,0]]) ssMatrix ssMatrix.T #Matrix transpose gives us the same negative matrix as it is skew symmetric matrix. aij = -aji #Diagonal Matrix dMatrix = np.array([[12,0,0],[0,18,0],[0,0,26]]) dMatrix #Convert a vector to diagonal matrix a = np.array([12,18,26]) np.diag(a) #Finding Determinant of a matrix mat = np.array([[3,1],[1,2]]) mat np.linalg.det(mat) mat2 = np.array([[2,-3,1],[2,0,-1],[1,4,5]]) sol = np.linalg.det(mat2) sol np.around(sol) #Round the array elements #Identity Matrix imatrix = np.identity(3) imatrix mat2 np.matmul(mat2,imatrix) np.matmul(mat2,imatrix).astype(int) # #### Inverse of a matrix A A_inv = np.linalg.inv(A) A_inv # A X A_inv = I np.matmul(A,A_inv) # #### Solving Linear System of equations # x + y + z = 3
    # 2x + 3y + 7z = 0
    # x + 3y - 2z = 17
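    # (np.linalg.solve expects the coefficient matrix A and the right-hand-side vector b as two
    # separate arguments, not an augmented matrix; the solution it returns can be verified with
    # np.allclose(np.matmul(A, x), b), where x is the returned vector.)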
    A = np.array([[1,1,1],[2,3,7],[1,3,-2]]) b = np.array([3,0,17]) np.linalg.solve(A,b) # Pass the augmented matrix # Rank of a matrix np.linalg.matrix_rank(A) # #### EigenValues and EigenVectors A w, v = np.linalg.eig(A) # w - eigenvalue, v - eigenvectors w v # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import pandas_datareader.data as web def get_closes(tickers, start_date=None, end_date=None, freq=None): import pandas as pd pd.core.common.is_list_like = pd.api.types.is_list_like import pandas_datareader.data as web closes = pd.DataFrame(columns = tickers, index=web.YahooDailyReader(symbols=tickers[0], start=start_date, end=end_date, interval=freq).read().index) for ticker in tickers: df = web.YahooDailyReader(symbols=ticker, start=start_date, end=end_date, interval=freq).read() closes[ticker]=df['Adj Close'] closes.index_name = 'Date' closes = closes.sort_index() return closes # + asset1 = '^MXX' asset2 = 'BIMBOA.mx' asset3 = 'ELEKTRA.MX' asset4 = 'GMEXICOB.MX' asset5 = 'WALMEX.MX' names = [asset1,asset2,asset3,asset4,asset5] start = '2019-08-19' end = '2020-08-19' # - # Precios diarios daily_closes = get_closes(tickers = names, start_date = start, end_date = end, freq = 'd') daily_closes daily_closes.to_csv("Daily Closes.csv") daily_closes.to_excel("Daily Closes.xlsx") # + #daily_ret = daily_closes.pct_change().dropna() #daily_ret # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Discussion 04 # # In this discussion you will look at thecode below and give the user advice on what they need to change to make this code more **generic** as a Python Toolbox. The current snippet is the code the user uses by copying and pasting it into the `Python Window` in ArcGIS Pro. # # Your goal is to provide feedback on how they can make it a Python Toolbox. # # Post your suggestions on what they need to add to make it a generic tool as a Python Toolbox. Then post comment 1-2 meaningful posts on other's suggestions. 
# ```python # import arcpy # from arcpy import sharing # from arcpy import mp # import os # import shutil # # # Use Inputs # # # # # prj = r"CURRENT" # ArcGIS Pro Path # wrksp = r"c:\staging" # Folder # service_name = "airmonitor2" # String # map_name = "Map1" # String - Finds the Map by the Name # summary = "This is the Summary" # String - Summary of the Map # tags = "Tag1,Tag2,Tag3" # String # description = "This is a description" # String # credits = "data credits" # strings # # End User Inputs # ### # # # Local Variables # # # sddraft = r"%s\dataset.sddraft" % wrksp # sdfile = r"%s\%s.sd" % (wrksp, service_name.replace(" ", "_")) # # ## Logic # # # # if os.path.isdir(wrksp): # shutil.rmtree(wrksp) # os.makedirs(wrksp) # # aprx = mp.ArcGISProject(prj) # m = aprx.listMaps("Map")[0] # print(m) # sharing_draft = m.getWebLayerSharingDraft("HOSTING_SERVER", "FEATURE", service_name) # sharing_draft.summary = summary # sharing_draft.tags = tags # sharing_draft.description = description # sharing_draft.credits = credits # sharing_draft.exportToSDDraft(sddraft) # arcpy.StageService_server(sddraft, sdfile) # print(sdfile) # ``` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Pandas # # [Pandas](http://pandas.pydata.org/) is a an open source library providing high-performance, easy-to-use data structures and data analysis tools. Pandas is particularly suited to the analysis of _tabular_ data, i.e. data that can can go into a table. In other words, if you can imagine the data in an Excel spreadsheet, then Pandas is the tool for the job. # # A [recent analysis](https://stackoverflow.blog/2017/09/06/incredible-growth-python/) of questions from Stack Overflow showed that python is the fastest growing and most widely used programming language in the world (in developed countries). # # ![python growth](https://zgab33vy595fw5zq-zippykid.netdna-ssl.com/wp-content/uploads/2017/09/growth_major_languages-1-1024x878.png) # # A [follow-up analysis](https://stackoverflow.blog/2017/09/14/python-growing-quickly/) showed that this growth is driven by the data science packages such as numpy, matplotlib, and especially pandas. # # ![pandas growth](https://zgab33vy595fw5zq-zippykid.netdna-ssl.com/wp-content/uploads/2017/09/related_tags_over_time-1-1024x1024.png) # # The exponential growth of pandas is due to the fact that it _just works_. It saves you time and helps you do science more efficiently and effictively. 
# # ### Pandas capabilities (from the Pandas website): # # * A fast and efficient DataFrame object for data manipulation with integrated indexing; # * Tools for reading and writing data between in-memory data structures and different formats: CSV and text files, Microsoft Excel, SQL databases, and the fast HDF5 format; # * Intelligent data alignment and integrated handling of missing data: gain automatic label-based alignment in computations and easily manipulate messy data into an orderly form; # * Flexible reshaping and pivoting of data sets; # * Intelligent label-based slicing, fancy indexing, and subsetting of large data sets; # * Columns can be inserted and deleted from data structures for size mutability; # * Aggregating or transforming data with a powerful group by engine allowing split-apply-combine operations on data sets; # * High performance merging and joining of data sets; # * Hierarchical axis indexing provides an intuitive way of working with high-dimensional data in a lower-dimensional data structure; # * Time series-functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging. Even create domain-specific time offsets and join time series without losing data; # * Highly optimized for performance, with critical code paths written in Cython or C. # * Python with pandas is in use in a wide variety of academic and commercial domains, including Finance, Neuroscience, Economics, Statistics, Advertising, Web Analytics, and more. # # In this lecture, we will go over the basic capabilities of Pandas. It is a very deep library, and you will need to dig into the [documentation](http://pandas.pydata.org/pandas-docs/stable/) for more advanced usage. # # Pandas was created by [](http://wesmckinney.com/). Many of the examples here are drawn from Wes McKinney's book [Python for Data Analysis](http://shop.oreilly.com/product/0636920023784.do), which includes a github repo of [code samples](https://github.com/wesm/pydata-book). import pandas as pd import numpy as np from matplotlib import pyplot as plt # %matplotlib inline # ## Pandas Data Structures: Series # # A Series represents a one-dimensional array of data. The main difference between a Series and numpy array is that a Series has an _index_. The index contains the labels that we use to access the data. # # There are many ways to [create a Series](https://pandas.pydata.org/pandas-docs/stable/dsintro.html#series). We will just show a few. names = ['Ryan', 'Chiara', 'Johnny'] values = [35, 36, 1.8] ages = pd.Series(values, index=names) ages # Series have built in plotting methods. ages.plot(kind='bar') # Arithmetic operations and most numpy function can be applied to Series. # An important point is that the Series keep their index during such operations. np.log(ages) / ages**2 # We can access the underlying index object if we need to: ages.index # We can get values back out using the index via the `.loc` attribute ages.loc['Johnny'] # Or by raw position using `.iloc` ages.iloc[2] # If we need to, we can always get the raw data back out as well ages.values ages.index # ## Pandas Data Structures: DataFrame # # There is a lot more to Series, but they are limit to a single "column". A more useful Pandas data structure is the DataFrame. A DataFrame is basically a bunch of series that share the same index. It's a lot like a table in a spreadsheet. # # Below we create a DataFrame. 
# first we create a dictionary data = {'age': [35, 36, 1.8], 'height': [180, 155, 83], 'weight': [72.5, np.nan, 11.3]} df = pd.DataFrame(data, index=['Ryan', 'Chiara', 'Johnny']) df # Pandas handles missing data very elegantly, keeping track of it through all calculations. df.info() # A wide range of statistical functions are available on both Series and DataFrames. df.min() df.mean() df.std() df.describe() # We can get a single column as a Series using python's getitem syntax on the DataFrame object. df['height'] # ...or using attribute syntax. df.height # New columns can easily be added to DataFrames df['density'] = df.weight / df.height df # ## Merging Data # # Pandas supports a wide range of methods for merging different datasets. These are described extensively in the [documentation](https://pandas.pydata.org/pandas-docs/stable/merging.html). Here we just give a few examples. education = pd.Series(['PhD', 'PhD', None, 'masters'], index=['Ryan', 'Chiara', 'Johnny', 'Takaya'], name='education') # returns a new DataFrame df.join(education) # returns a new DataFrame df.join(education, how='right') # returns a new DataFrame df.reindex(['Ryan', 'Chiara', 'Johnny', 'Takaya', 'Kerry']) # We can also index using a boolean series. This is very useful adults = df[df.age > 18] adults df['is_adult'] = df.age > 18 df # ## Plotting # # DataFrames have all kinds of [useful plotting](https://pandas.pydata.org/pandas-docs/stable/visualization.html) built in. df.plot(kind='scatter', x='age', y='height', grid=True) df.plot(kind='bar') # ## Time Indexes # # Indexes are very powerful. They are a big part of why Pandas is so useful. There are different indices for different types of data. Time Indexes are especially great! two_years = pd.date_range(start='2014-01-01', end='2016-01-01', freq='D') timeseries = pd.Series(np.sin(2 *np.pi *two_years.dayofyear / 365), index=two_years) timeseries.plot() # We can use python's slicing notation inside `.loc` to select a date range. timeseries.loc['2015-01-01':'2015-07-01'].plot() # ## Stock Market Data # # * oops - Ryan's links don't work, so we will use static files # # Now we read some stock market data from Google finance. I have created direct links to Google and Apple stock price data. # #!curl -L -o goog.csv http://tinyurl.com/rces-goog # #!curl -L -o aapl.csv http://tinyurl.com/rces-aapl-csv # ! cp /home/pangeo/notebooks/GOOGL.csv goog.csv # ! cp /home/pangeo/notebooks/AAPL.csv aapl.csv # ! head goog.csv # We can see that this is well-formated, tidy CSV data, ready for immediate ingestion into Pandas. # We use Pandas' amazing [read_csv](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) function to do this. goog = pd.read_csv('goog.csv') goog.head() # Not bad! But we can do better by giving read_csv some hints. goog = pd.read_csv('goog.csv', parse_dates=[0], index_col=0) goog.head() goog.info() goog.Close.plot() aapl = pd.read_csv('aapl.csv', parse_dates=[0], index_col=0) aapl.info() aapl_close = aapl.Close.rename('aapl') goog_close = goog.Close.rename('goog') stocks = pd.concat([aapl_close, goog_close], axis=1) stocks.head() stocks.plot() # Pandas knows how to take correlations. And [tons of other computations](https://pandas.pydata.org/pandas-docs/stable/computation.html). stocks.corr() # Because it understands times, it can do really cool stuff like resampling. 
# + # resample by taking the mean over each month fig, ax = plt.subplots() stocks.resample('MS').mean().plot(ax=ax, colors=['r', 'b']) # and each year stocks.resample('AS').mean().plot(ax=ax, colors=['r', 'b']) # - # The string `QS` means "month start. The string `AS` mean "year start". There is a long list of possible [frequency aliases](https://pandas.pydata.org/pandas-docs/stable/timeseries.html#timeseries-offset-aliases). # # We can also apply other reduction operations with resample. These are described in the [resample docs](https://pandas.pydata.org/pandas-docs/stable/timeseries.html#resampling). # get resample object rs = stocks.goog.resample('MS') # standard deviation of each month rs.std().plot() # ## Temperature Data # # We download some timeseries data from the [Berkeley Earth(http://berkeleyearth.org/) surface temperature dataset. This is timeseries data from various locations around earth. Let's get our local temperatures. # ! curl -o nyc_temp.txt http://berkeleyearth.lbl.gov/auto/Local/TAVG/Text/40.99N-74.56W-TAVG-Trend.txt # If we examine this data, we see it is NOT a well formated CSV file. Loading it will be a bit painful, but Pandas makes the job retatively easy. # ! head -72 nyc_temp.txt | tail -8 # + ##### http://berkeleyearth.lbl.gov/locations/40.99N-74.56W # http://berkeleyearth.lbl.gov/auto/Local/TAVG/Text/40.99N-74.56W-TAVG-Trend.txt #temp = pd.read_csv('nyc_temp.txt') col_names = ['year', 'month', 'monthly_anom'] + 10*[] temp = pd.read_csv('nyc_temp.txt', header=None, usecols=[0, 1, 2], names=col_names, delim_whitespace=True, comment='%') temp.head() # - # need a day date_df = temp.drop('monthly_anom', axis=1) date_df['day'] = 1 date_index = pd.DatetimeIndex(pd.to_datetime(date_df)) temp = temp.set_index(date_index).drop(['year', 'month'], axis=1) temp.head() temp.plot() fig, ax = plt.subplots() temp.plot(ax=ax) temp.resample('AS').mean().plot(ax=ax) temp.resample('10AS').mean().plot(ax=ax) # Pandas can do both time-based resampling and operation over fixed-length rolling windows. These are very similar but distinct; see [discussion in Pandas docs](https://pandas.pydata.org/pandas-docs/stable/computation.html#time-aware-rolling-vs-resampling). # + # more advanced operation on rolling windows def difference_max_min(data): return data.max() - data.min() rw = temp.rolling('365D') rw.apply(difference_max_min).plot() # - # To create a "climatology" (i.e. the average of all same months), we can use Pandas' [groupby](https://pandas.pydata.org/pandas-docs/stable/groupby.html) functionality. # diurnal cycle has been removed! temp.groupby(temp.index.month).mean().plot() # find the hottest years temp.groupby(temp.index.year).mean().sort_values('monthly_anom', ascending=False).head(10) # ## Groupby # # Now we will explore groupby's capabilities more in a public dataset from the City of New York: the [Rat Information Portal](The Rat Information Portal)! # https://data.cityofnewyork.us/Health/Rats/amyk-xiv9 rats = pd.read_csv('https://data.cityofnewyork.us/api/views/amyk-xiv9/rows.csv', parse_dates=['APPROVED_DATE', 'INSPECTION_DATE']) rats.info() rats.head() # Let's do some grouping to explore the data. rats.groupby('INSPECTION_TYPE')['INSPECTION_TYPE'].count() rats.groupby('BORO_CODE')['BORO_CODE'].count().head() rats.groupby('STREET_NAME')['STREET_NAME'].count().head(20) # This dataset clearly needs some cleaning. We can Pandas' [text features](https://pandas.pydata.org/pandas-docs/stable/text.html) to strip the whitespace out of the data. 
# clean up street name street_names_cleaned = rats.STREET_NAME.str.strip() street_names_cleaned.groupby(street_names_cleaned).count().head(20) count = street_names_cleaned.groupby(street_names_cleaned).count() count.sort_values(ascending=False).head(20) # To get a better idea of the geography, let's plot the locations of the inspections. But first let's look at the statistics. rats[['LATITUDE', 'LONGITUDE']].describe() # There are clearly some weird outliers in the location data. We need to strip these out before plotting. valid_latlon = rats[(rats.LATITUDE > 30) & (rats.LONGITUDE < -70)] valid_latlon.plot.hexbin('LONGITUDE', 'LATITUDE', C='BORO_CODE', cmap='Set1') # https://github.com/pandas-dev/pandas/issues/10678 valid_latlon.plot.hexbin('LONGITUDE', 'LATITUDE', sharex=False) valid_latlon.plot.hexbin('LONGITUDE', 'LATITUDE', sharex=False, bins='log', cmap='magma') manhattan_rats = valid_latlon[valid_latlon.BORO_CODE==1] manhattan_rats.plot.hexbin('LONGITUDE', 'LATITUDE', sharex=False, bins='log', cmap='magma') # + inspection_date = pd.DatetimeIndex(rats.INSPECTION_DATE) fig, ax = plt.subplots() rats.groupby(inspection_date.weekday)['JOB_ID'].count().plot(kind='bar', ax=ax) ax.set_xlabel('weekday'); # - fig, ax = plt.subplots() rats.groupby(inspection_date.hour)['JOB_ID'].count().plot(kind='bar', ax=ax) ax.set_xlabel('hour'); fig, ax = plt.subplots() rats.groupby(inspection_date.month)['JOB_ID'].count().plot(kind='bar', ax=ax) ax.set_xlabel('month') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="B9bb2Wunewvi" # # 背景介绍 # 随着人工智能技术的不断发展,人工智能技术不仅提高了社会生产水平,也提高了人们的生活质量。无论是从自动驾驶领域中的智能视觉到工业领域的智能机器人,还是医疗领域中的智能辅助诊断系统,都是人工智能技术在实际社会场景中的应用落地。计算机视觉中的手写字识别更是应用在了各个领域,比如教育行业的学生试卷主观题自动识别,办公自动化领域的文字批量自动处理,以及公安刑侦领域的手写字迹相似度判别,可见手写字识别的应用广泛且重要。与此同时,由于手写字形态种类多样,尤其是汉字数量大,这些都对手写字识别提出了更高的要求。 # # 本文采用基于多层感知机的神经网络进行手写字识别实验. # # # 理论知识 # ## 多层感知机(Multilayer Perceptron) # 在介绍多层感知机之前,我们先介绍多层感知机的基础单元——单个感知机,也称感知机.感知机由两层神经元组成,一层输入层神经元和一层输出层神经元。对于从输入层输入的向量$x$,经过输出层上的线性变换$f(x)=x^Tw+b$后直接输出. # 但是由于感知机只有输出层,并没有非线性结构,就算这样多次复合,出来仍是一个线性变换,所以其学习能力非常有限,一般只能学习解决线性可分问题.对于简单的“异或问题”,感知机就已经无能为力了. # 【【【【【【图:异或问题】】】】】】】】 # # 也就是说,我们需要更强大的工具。单个感知机的缺点主要来源于他仅有线性结构对于很多非线性问题,我们尝试在每个感知机做完仿射变换之后,再经过一个非线性函数,这样感知机就有了非线性表达能力。 # # 对于更加复杂的问题,研究者们提出将这样的单层感知机(一排感知机),做多层,这样网络的表达能力就已经非常强大,也就是多层感知机(MLP). 
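# A minimal sketch of the point above, assuming PyTorch is available: a single linear unit
# (a bare perceptron) cannot fit XOR, while one ReLU hidden layer makes it learnable. The
# layer sizes, optimizer settings and the `fit` helper here are illustrative choices only.
# +
import torch
from torch import nn

# XOR: four points with labels 0/1; the two classes are not linearly separable.
X_xor = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y_xor = torch.tensor([[0.], [1.], [1.], [0.]])

def fit(model, steps=1000, lr=0.05):
    """Full-batch training with BCE-with-logits loss; returns the final loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X_xor), y_xor)
        loss.backward()
        opt.step()
    return loss.item()

torch.manual_seed(0)
linear = nn.Linear(2, 1)                                          # affine map only
mlp = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))  # affine -> nonlinearity -> affine

print("linear model loss:", fit(linear))        # stays near ln(2) ~ 0.69: XOR is not linearly separable
print("one-hidden-layer MLP loss:", fit(mlp))   # typically drops close to 0
# -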
# 【【【【【【图:多层感知机】】】】】】】】 # # 多层感知机的算法流程如下: # 【【【【【【多层感知机流程latex】】】】】】】】 # # # # # 模型设计 # # 训练技巧 # # 数据测试 # # 结果分析 # + id="PZz2vgfDeviP" # !pip3 install d2l==0.17.1 # + id="3F1VvMHYe3W6" import torch import torchvision from torch import nn from torch.nn import functional as F from torch.utils import data from torchvision import transforms # %matplotlib inline from d2l import torch as d2l d2l.use_svg_display() from IPython import display # + [markdown] id="F1qTVLe9HJxD" # ### 必要函数 # + id="BH3jB13eEAiB" class Accumulator: """在`n`个变量上累加""" def __init__(self, n): """Defined in :numref:`sec_softmax_scratch`""" self.data = [0.0] * n def add(self, *args): self.data = [a + float(b) for a, b in zip(self.data, args)] def reset(self): self.data = [0.0] * len(self.data) def __getitem__(self, idx): return self.data[idx] def train_epoch_ch3(net, train_iter, loss, updater): """训练模型一个迭代周期 Defined in :numref:`sec_softmax_scratch`""" # 将模型设置为训练模式 if isinstance(net, torch.nn.Module): net.train() # 训练损失总和、训练准确度总和、样本数 metric = Accumulator(3) for X, y in train_iter: # 计算梯度并更新参数 y_hat = net(X) l = loss(y_hat, y) if isinstance(updater, torch.optim.Optimizer): # 使用PyTorch内置的优化器和损失函数 updater.zero_grad() l.sum().backward() updater.step() else: # 使用定制的优化器和损失函数 l.sum().backward() updater(X.shape[0]) metric.add(float(l.sum()), accuracy(y_hat, y), y.numel()) # 返回训练损失和训练精度 return metric[0] / metric[2], metric[1] / metric[2] class Animator: """在动画中绘制数据""" def __init__(self, xlabel=None, ylabel=None, legend=None, xlim=None, ylim=None, xscale='linear', yscale='linear', fmts=('-', 'm--', 'g-.', 'r:'), nrows=1, ncols=1, figsize=(3.5, 2.5)): """Defined in :numref:`sec_softmax_scratch`""" # 增量地绘制多条线 if legend is None: legend = [] d2l.use_svg_display() self.fig, self.axes = d2l.plt.subplots(nrows, ncols, figsize=figsize) if nrows * ncols == 1: self.axes = [self.axes, ] # 使用lambda函数捕获参数 self.config_axes = lambda: d2l.set_axes( self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend) self.X, self.Y, self.fmts = None, None, fmts def add(self, x, y): # 向图表中添加多个数据点 if not hasattr(y, "__len__"): y = [y] n = len(y) if not hasattr(x, "__len__"): x = [x] * n if not self.X: self.X = [[] for _ in range(n)] if not self.Y: self.Y = [[] for _ in range(n)] for i, (a, b) in enumerate(zip(x, y)): if a is not None and b is not None: self.X[i].append(a) self.Y[i].append(b) self.axes[0].cla() for x, y, fmt in zip(self.X, self.Y, self.fmts): self.axes[0].plot(x, y, fmt) self.config_axes() display.display(self.fig) display.clear_output(wait=True) def accuracy(y_hat, y): """计算预测正确的数量 Defined in :numref:`sec_softmax_scratch`""" if len(y_hat.shape) > 1 and y_hat.shape[1] > 1: y_hat = d2l.argmax(y_hat, axis=1) cmp = d2l.astype(y_hat, y.dtype) == y return float(d2l.reduce_sum(d2l.astype(cmp, y.dtype))) def evaluate_accuracy(net, data_iter): """计算在指定数据集上模型的精度 Defined in :numref:`sec_softmax_scratch`""" if isinstance(net, torch.nn.Module): net.eval() # 将模型设置为评估模式 metric = Accumulator(2) # 正确预测数、预测总数 with torch.no_grad(): for X, y in data_iter: metric.add(accuracy(net(X), y), d2l.size(y)) return metric[0] / metric[1] # + id="p39JDRixgrsF" def train_ch3(net, train_iter, test_iter, loss, num_epochs, updater): """ 训练模型 """ animator = Animator(xlabel='epoch', xlim=[1, num_epochs], ylim=[0.3, 1.2], legend=['train loss', 'train acc', 'test acc']) for epoch in range(num_epochs): train_metrics = train_epoch_ch3(net, train_iter, loss, updater) test_acc = evaluate_accuracy(net, test_iter) animator.add(epoch + 1, train_metrics + (test_acc,)) train_loss, 
train_acc = train_metrics assert train_loss < 0.5, train_loss assert train_acc <= 1 and train_acc > 0.7, train_acc assert test_acc <= 1 and test_acc > 0.7, test_acc print("测试集准确率为:",test_acc) # + id="YG8nWIFzwmW_" def predict_ch3(net, test_iter, n=6): """ 预测标签 """ for X, y in test_iter: break trues = get_mnist_labels(y) preds = get_mnist_labels(d2l.argmax(net(X), axis=1)) titles = [true +'\n' + pred for true, pred in zip(trues, preds)] d2l.show_images( d2l.reshape(X[0:n], (n, 28, 28)), 1, n, titles=titles[0:n]) # + [markdown] id="s3w_bnYQhrOw" # ## 读取数据 # + id="CMUDDQcGfwsX" def load_data_mnist(batch_size, transform, resize=None): """ 下载MNIST数据集,增加了transform参数,用于特征增强,然后将其加载到内存中 num_workers: 进程数 """ trans = transform if resize: trans.insert(0, transforms.Resize(resize)) trans = transforms.Compose(trans) mnist_train = torchvision.datasets.MNIST( root="../data", train=True, transform=trans, download=True) mnist_test = torchvision.datasets.MNIST( root="../data", train=False, transform=trans, download=True) return (data.DataLoader(mnist_train, batch_size, shuffle=True, num_workers=4), data.DataLoader(mnist_test, batch_size, shuffle=False, num_workers=4)) # + id="GQy2HgtLfJZ9" colab={"base_uri": "https://localhost:8080/"} outputId="58258658-8dfd-4961-f368-893562465d2b" batch_size = 18 transform = transforms.ToTensor() train_iter, test_iter = load_data_mnist(batch_size, transform) # + [markdown] id="ni5VxhDBh3pL" # ## 数据集介绍 # MNIST数据集 [LeCun et al., 1998] 是图像分类中广泛使用的数据集之一. # + id="BTjWG9Hdh2ud" # 通过ToTensor实例将图像数据从PIL类型变换成32位浮点数格式, # 并除以255使得所有像素的数值均在0到1之间 from torch.utils import data from torchvision import transforms # + id="pG3atqBrmn2G" trans = transforms.ToTensor() mnist_train = torchvision.datasets.MNIST( root="../data", train=True, transform=trans, download=True) mnist_test = torchvision.datasets.MNIST( root="../data", train=False, transform=transforms.ToTensor(), download=True) # + [markdown] id="8k6Z6btQiO0W" # 本文采用的MNIST数据集是图像分类中广泛使用的数据集之一。MNIST由10个类别的手写数字图像组成,10个类别分别为0、1、2、3、4、5、6、7、8、9的手写数字。每个类别由训练数据集中的6000张图像和测试数据集中的1000张图像组成。因此,训练集和测试集分别包含60000和10000张图像。测试集只用于评估模型性能,不参与模型训练。MNIST数据集图像均为灰度图像,每张输入图像的高度和宽度均为28像素。 # + colab={"base_uri": "https://localhost:8080/"} id="xznbNVHGgicN" outputId="669c886f-5b21-42f6-c575-5112a7a77795" len(mnist_train), len(mnist_test) # + colab={"base_uri": "https://localhost:8080/"} id="NIbNIrUCiOmr" outputId="3261d460-92a5-4acc-e2e2-3bea786daabf" mnist_train[0][0].shape # + [markdown] id="FweOfWmEiUtY" # 每个输入图像的高度和宽度均为28像素. 数据集由灰度图像组成,其通道数为1. 
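# A quick sanity check of the description above, assuming the `mnist_train`/`mnist_test`
# objects defined earlier and a torchvision version that exposes the labels as a `.targets`
# tensor. Note that MNIST is only roughly balanced: each digit has about 6,000 training and
# 1,000 test images rather than exactly that many.
# +
import torch

train_counts = torch.bincount(mnist_train.targets, minlength=10)
test_counts = torch.bincount(mnist_test.targets, minlength=10)
print("training images per digit:", train_counts.tolist())  # each entry is roughly 6,000
print("test images per digit:", test_counts.tolist())       # each entry is roughly 1,000
# -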
# + id="ArNd9rAZjiC1" # 以下函数用于在数字标签索引及其文本名称之间进行转换 def get_mnist_labels(labels): """返回MNIST数据集的文本标签""" text_labels = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'] return [text_labels[int(i)] for i in labels] # + id="7_zfwGOqiKmp" import PIL def show_images(imgs, num_rows, num_cols, titles=None, scale=1.5): """绘制图像列表""" figsize = (num_cols * scale, num_rows * scale) _, axes = d2l.plt.subplots(num_rows, num_cols, figsize=figsize) axes = axes.flatten() for i, (ax, img) in enumerate(zip(axes, imgs)): if torch.is_tensor(img): # 图片张量 ax.imshow(img.numpy()) else: # PIL图片 ax.imshow(img) ax.axes.get_xaxis().set_visible(False) ax.axes.get_yaxis().set_visible(False) if titles: ax.set_title(titles[i]) return axes # + colab={"base_uri": "https://localhost:8080/", "height": 0} id="7h17kBBcjETG" outputId="3bc6c678-83e2-45a8-ec9d-f8dc8ca71e8b" X, y = next(iter(data.DataLoader(mnist_train, batch_size=20))) show_images(X.reshape(20, 28, 28), 2, 10, titles=get_mnist_labels(y)) # + [markdown] id="j4M5DudmlSZR" # ## 手动实现MLP # ### 参数初始化 # + [markdown] id="dp9BgMB3mDHF" # 由于MNIST中的每个图像由$28×28=784$个灰度像素值组成,所有图像共分为10个类别. # 忽略像素之间的空间结构,我们可以将每个图像视为具有784个输入特征和10个类的简单分类数据集. # # 首先,我们将实现一个具有单隐藏层的多层感知机,它包含256个隐藏单元.通常,我们选择2的若干次幂作为层的宽度,因为内存在硬件中的分配和寻址方式,这么做往往可以在计算上更高效。 # + id="A_v_wOPSlSFg" num_inputs, num_outputs, num_hiddens = 784, 10, 800 W1 = nn.Parameter(torch.randn( num_inputs, num_hiddens, requires_grad=True) * 0.01) b1 = nn.Parameter(torch.zeros(num_hiddens, requires_grad=True)) W2 = nn.Parameter(torch.randn( num_hiddens, num_outputs, requires_grad=True) * 0.01) b2 = nn.Parameter(torch.zeros(num_outputs, requires_grad=True)) params = [W1, b1, W2, b2] # + id="0vDjwYtHkNdH" # 激活函数 def relu(X): a = torch.zeros_like(X) return torch.max(X, a) # + id="OF4aZjM-miK0" # 模型 def net(X): X = X.reshape((-1, num_inputs)) H = relu(X@W1 + b1) # 这里“@”代表矩阵乘法 return (H@W2 + b2) # + id="SKANSYssmjvu" # 交叉熵损失函数 loss = nn.CrossEntropyLoss() # + id="Lo--yu-bmsh6" num_epochs, lr = 10, 0.1 updater = torch.optim.SGD(params, lr=lr) #优化器选择SGD train_ch3(net, train_iter, test_iter, loss, num_epochs, updater) # + id="AnXa48L9nZzD" def predict_ch3(net, test_iter, n=6): """ 预测标签 """ for X, y in test_iter: break trues = get_mnist_labels(y) preds = get_mnist_labels(d2l.argmax(net(X), axis=1)) titles = [true +'\n' + pred for true, pred in zip(trues, preds)] d2l.show_images(d2l.reshape(X[0:n], (n, 28, 28)), 1, n, titles=titles[0:n]) return [trues,preds] # + id="PrP9NNk5m9mA" predict_ch3(net, test_iter) # + [markdown] id="4ERul0YwntMg" # ## 损失评价 # + id="TN_K-W3cnsCd" def evaluate_loss(net, data_iter, loss): """评估给定数据集上模型的损失 Defined in :numref:`sec_model_selection`""" metric = d2l.Accumulator(2) # 损失的总和,样本数量 for X, y in data_iter: out = net(X) y = d2l.reshape(y, out.shape) l = loss(out, y) metric.add(d2l.reduce_sum(l), d2l.size(l)) return metric[0] / metric[1] # + [markdown] id="N89WSrqhouEC" # # Baseline # ## 模型 # + colab={"base_uri": "https://localhost:8080/"} id="0KDbU57nnLaW" outputId="368f35a2-3131-4f29-8867-122bbbb9b545" net = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)) def init_weights(m): if type(m) == nn.Linear: nn.init.normal_(m.weight, std=0.01) net.apply(init_weights) # + id="aY_t368ZpTDN" def train_ch3(net, train_iter, test_iter, loss, num_epochs, updater): """ 训练模型 """ animator = Animator(xlabel='epoch', xlim=[1, num_epochs], ylim=[0.3, 1.2], legend=['train loss', 'train acc', 'test acc']) for epoch in range(num_epochs): train_metrics = train_epoch_ch3(net, train_iter, loss, updater) test_acc = 
evaluate_accuracy(net, test_iter) animator.add(epoch + 1, train_metrics + (test_acc,)) train_loss, train_acc = train_metrics assert train_loss < 0.5, train_loss assert train_acc <= 1 and train_acc > 0.7, train_acc assert test_acc <= 1 and test_acc > 0.7, test_acc print("测试集准确率为:",test_acc) # + id="P-jVPqkKo-Iv" colab={"base_uri": "https://localhost:8080/", "height": 284} outputId="3f23b849-95e0-48cc-e7ef-bff41c32b58d" batch_size, lr, num_epochs = 256, 0.1, 10 loss = nn.CrossEntropyLoss() trainer = torch.optim.SGD(net.parameters(), lr=lr) trans = [transforms.ToTensor()] train_iter, test_iter = load_data_mnist(batch_size, trans) train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer) # + [markdown] id="wiRFTnK47Uie" # # Baseline + 图像增强 # + id="wtx_G1OY7aDn" from torchvision.transforms import autoaugment, transforms # + colab={"base_uri": "https://localhost:8080/"} id="gpaNCvSnBA_X" outputId="296cf458-6df0-47f1-80ad-bea120efafb3" net = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)) def init_weights(m): if type(m) == nn.Linear: nn.init.normal_(m.weight, std=0.01) net.apply(init_weights) # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="vRN99cjb8lpp" outputId="b631af6e-afcc-4815-de2b-4324b694c3e9" batch_size, lr, num_epochs = 256, 0.05, 50 loss = nn.CrossEntropyLoss() trainer = torch.optim.SGD(net.parameters(), lr=lr) transform = [transforms.ToTensor(), #transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3. / 4., 4. / 3.)), transforms.RandomHorizontalFlip(), transforms.Normalize((0.5,), (0.5,))] train_iter, test_iter = load_data_mnist(batch_size, transform) train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer) # + [markdown] id="ndsF41TcEMYA" # ### 增加训练轮数 # + id="vEdOHwqQEPxn" net = nn.Sequential(nn.Flatten(), nn.Linear(784,1000), nn.ReLU(), nn.Linear(1000,2300), nn.ReLU(), nn.Linear(2300,600), nn.ReLU(), nn.Linear(600,80), nn.ReLU(), nn.Linear(80,10)) def init_weights(m): if type(m) == nn.Linear: nn.init.normal_(m.weight, std=0.01) net.apply(init_weights) batch_size, lr, num_epochs = 80, 0.03, 100 loss = nn.CrossEntropyLoss() trainer = torch.optim.SGD(net.parameters(), lr=lr) # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="tPpqNu3_EUyO" outputId="27a9c6fe-a356-4a36-db19-aeece7a42353" trans = [transforms.ToTensor()] train_iter, test_iter = load_data_mnist(batch_size, trans) train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer) # + [markdown] id="eGfXWe2fzHHr" # ## 多层网络 # + [markdown] id="OHM8rte5Ibpa" # ## dropout网络 # + id="gX_Krl_qIfkA" net = nn.Sequential(nn.Flatten(), nn.Linear(784,800), nn.ReLU(), nn.Linear(800,256), nn.Dropout(p=0.2), nn.Linear(256,32), nn.ReLU(), nn.Linear(32,10), ) def init_weights(m): if type(m) == nn.Linear: nn.init.normal_(m.weight, std=0.01) net.apply(init_weights) batch_size, lr, num_epochs = 128, 0.05, 20 loss = nn.CrossEntropyLoss() trainer = torch.optim.SGD(net.parameters(), lr=lr) # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="-GK9NXEKI2kX" outputId="4708317d-fc8f-4515-a436-d97b9d6d904d" trans = [transforms.ToTensor()] train_iter, test_iter = load_data_mnist(batch_size, trans) train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer) # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="47NpYvDQzYGH" outputId="bf03ed83-d63d-44f6-842a-7be86f4fedb4" trans = [transforms.ToTensor()] train_iter, test_iter = load_data_mnist(batch_size, trans) train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer) # + 
[markdown] id="JBASVwbXgYVv" # ### 更换激活函数 # + id="t0_Ly91XgX64" net = nn.Sequential(nn.Flatten(), nn.Linear(784,256), nn.ReLU(), nn.Linear(256,64), nn.ReLU(), nn.Linear(64,32), nn.ReLU(), nn.Linear(32,10)) def init_weights(m): if type(m) == nn.Linear: nn.init.normal_(m.weight, std=0.01) net.apply(init_weights) batch_size, lr, num_epochs = 100, 0.05, 20 loss = nn.CrossEntropyLoss() trainer = torch.optim.SGD(net.parameters(), lr=lr) # + colab={"base_uri": "https://localhost:8080/", "height": 284} id="Y4LZODUegiGC" outputId="ab733f4e-9f62-411e-90f2-1b04778aabae" trans = [transforms.ToTensor(), #transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3. / 4., 4. / 3.)), transforms.RandomHorizontalFlip(), transforms.Normalize((0.5,), (0.5,))] train_iter, test_iter = load_data_mnist(batch_size, trans) train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer) # + id="iqGRjTbsxFu6" def load_data_test(batch_size, transform, resize=None): """ 下载MNIST数据集,增加了transform参数,用于特征增强,然后将其加载到内存中 num_workers: 进程数 """ trans = transform if resize: trans.insert(0, transforms.Resize(resize)) trans = transforms.Compose(trans) mnist_test = torchvision.datasets.MNIST( root="../data", train=False, transform=trans, download=True) return (data.DataLoader(mnist_test, batch_size, shuffle=False,num_workers=4)) # + colab={"base_uri": "https://localhost:8080/"} id="nuXdAUZQxfA9" outputId="14c63294-894c-4770-86ca-0af5a214ca43" trans = [transforms.ToTensor()] test = load_data_test(batch_size, trans, resize=None) # + colab={"base_uri": "https://localhost:8080/", "height": 234} id="j5RtmzBhurZn" outputId="1b35b87a-15cc-4322-e25b-5c86eaa00e51" A = predict_ch3(net, test, n=6) # + id="0uho3jchMrH2" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="fbBRdxRO7e3S" # 1. **Combines a pretrained Densenet169 Model with the Final Layer trained seperately.** # # 2. 
**Final Layer weights are loaded into the model and then entire model is trained.** # # + [markdown] id="oxBVKIs7zbAU" # ## Get Data # + id="wdCKIRnFzJAk" from IPython.display import clear_output from google.colab import files files.upload() # !pip install -q kaggle # !mkdir -p ~/.kaggle # !cp kaggle.json ~/.kaggle/ # !ls ~/.kaggle # !chmod 600 /root/.kaggle/kaggle.json # !kaggle datasets download -d jackstapleton/petfinder-pretrained-images-nocrop # !mkdir ~/.data # !unzip -q petfinder-pretrained-images-nocrop.zip -d /.data clear_output() # + id="vAcY29SW3mRO" from google.colab import drive drive.mount("/content/gdrive") clear_output() # + [markdown] id="jRO1nmX73yxE" # ## Library Imports # + id="Ujg_cbzw30XF" import os import re import gc import pickle import random as r import numpy as np import pandas as pd import matplotlib.pyplot as plt import torch from torch import nn, optim from torch.utils.data import Dataset from torch.utils.data import DataLoader as DL from torch.nn.utils import weight_norm as WN from torchvision import models, transforms from time import time from sklearn.model_selection import KFold from sklearn.metrics import mean_squared_error from sklearn.preprocessing import StandardScaler import warnings warnings.filterwarnings("ignore") # + [markdown] id="D7f3Fu1q31oH" # ## Constants and Utilities # + id="5CO2Y8R135S_" SEED = 49 DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") NUM_FEATURES = 1664 TRANSFORM = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), ]) IMAGE_PATH = "/.data" STATE_PATH = "/content/gdrive/My Drive/Temp" verbose = True DEBUG = False sc_y = StandardScaler() # + id="BrpUJpuR4EHC" def breaker(num=50, char="*") -> None: print("\n" + num*char + "\n") def get_targets() -> np.ndarray: df = pd.read_csv("/content/gdrive/My Drive/train.csv", engine="python") targets = df["Pawpularity"].copy().values return targets.reshape(-1, 1) def show_graphs(L: list, title=None) -> None: TL, VL = [], [] for i in range(len(L)): TL.append(L[i]["train"]) VL.append(L[i]["valid"]) x_Axis = np.arange(1, len(L) + 1) plt.figure() plt.plot(x_Axis, TL, "r", label="train") plt.plot(x_Axis, VL, "b", label="valid") plt.grid() plt.legend() if title: plt.title("{} Loss".format(title)) else: plt.title("Loss") plt.show() # + [markdown] id="1KA-btTd4Iob" # ## Dataset Template and Build Dataloader # + id="raIbkq-_4KY6" class DS(Dataset): def __init__(self, images=None, targets=None, transform=None): self.images = images self.targets = targets self.transform = transform def __len__(self): return self.images.shape[0] def __getitem__(self, idx): return self.transform(self.images[idx]), torch.FloatTensor(self.targets[idx]) def build_dataloaders(tr_images: np.ndarray, va_images: np.ndarray, tr_targets: np.ndarray, va_targets: np.ndarray, batch_size: int, seed: int, transform=None): if verbose: breaker() print("Building Train and Validation DataLoaders ...") tr_data_setup = DS(images=tr_images, targets=tr_targets, transform=transform) va_data_setup = DS(images=va_images, targets=va_targets, transform=transform) dataloaders = { "train" : DL(tr_data_setup, batch_size=batch_size, shuffle=True, generator=torch.manual_seed(seed)), "valid" : DL(va_data_setup, batch_size=batch_size, shuffle=False) } return dataloaders # + [markdown] id="gO-E8pae4zfA" # ## Build Model # + id="zJbYa-Fq41wt" def build_model(IL: int, seed: int, path: str): class Model(nn.Module): def __init__(self, IL=None): super(Model, self).__init__() 
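            # The lines below build the two halves of the network: `features` is a pretrained
            # DenseNet-169 with its classifier head removed, followed by global average pooling
            # and a flatten, so every image becomes a NUM_FEATURES (1664) vector; `predictor`
            # is a BatchNorm1d over those features plus a weight-normalised Linear layer that
            # outputs the single Pawpularity score. Further down, `get_optimizer` returns SGD
            # with momentum over the trainable parameters, `forward` accepts either one batch
            # or a pair of batches through the shared backbone, and the state dict loaded from
            # `path` is copied into the predictor's parameters.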
self.features = models.densenet169(pretrained=True, progress=False) self.features = nn.Sequential(*[*self.features.children()][:-1]) self.features.add_module("Adaptive Average Pool", nn.AdaptiveAvgPool2d(output_size=(1, 1))) self.features.add_module("Flatten", nn.Flatten()) self.predictor = nn.Sequential() self.predictor.add_module("BN", nn.BatchNorm1d(num_features=IL, eps=1e-5)) self.predictor.add_module("FC", WN(nn.Linear(in_features=IL, out_features=1))) def get_optimizer(self, lr=1e-3, wd=0.0): params = [p for p in self.parameters() if p.requires_grad] return optim.SGD(params, lr=lr, momentum=0.9, weight_decay=wd) def forward(self, x1, x2=None): if x2 is not None: x1 = self.features(x1) x2 = self.features(x2) return self.predictor(x1), self.predictor(x2) else: x1 = self.features(x1) return self.predictor(x1) model = Model(IL=IL) pretrained_state_dict = torch.load(path, map_location=DEVICE)["model_state_dict"] model.predictor.BN.weight = nn.Parameter(pretrained_state_dict["predictor.BN.weight"]) model.predictor.BN.bias = nn.Parameter(pretrained_state_dict["predictor.BN.bias"]) model.predictor.FC.bias = nn.Parameter(pretrained_state_dict["predictor.FC.bias"]) model.predictor.FC.weight_g = nn.Parameter(pretrained_state_dict["predictor.FC.weight_g"]) model.predictor.FC.weight_v = nn.Parameter(pretrained_state_dict["predictor.FC.weight_v"]) return model # + [markdown] id="Q0niMi7JJONc" # ## Fit and Predict # + id="u_CBK6ytJQCm" def fit(model=None, optimizer=None, scheduler=None, epochs=None, early_stopping_patience=None, dataloaders=None, verbose=False) -> tuple: name = "./state.pt" breaker() print("Training ...") breaker() Losses = [] bestLoss = {"train" : np.inf, "valid" : np.inf} start_time = time() for e in range(epochs): e_st = time() epochLoss = {"train" : np.inf, "valid" : np.inf} for phase in ["train", "valid"]: if phase == "train": model.train() else: model.eval() lossPerPass = [] for X, y in dataloaders[phase]: X, y = X.to(DEVICE), y.to(DEVICE) optimizer.zero_grad() with torch.set_grad_enabled(phase == "train"): output = model(X) loss = torch.nn.MSELoss()(output, y) if phase == "train": loss.backward() optimizer.step() lossPerPass.append(loss.item()) epochLoss[phase] = np.mean(np.array(lossPerPass)) Losses.append(epochLoss) if early_stopping_patience: if epochLoss["valid"] < bestLoss["valid"]: bestLoss = epochLoss BLE = e + 1 torch.save({"model_state_dict": model.state_dict(), "optim_state_dict": optimizer.state_dict()}, name) early_stopping_step = 0 else: early_stopping_step += 1 if early_stopping_step > early_stopping_patience: if verbose: print("\nEarly Stopping at Epoch {}".format(e)) break if epochLoss["valid"] < bestLoss["valid"]: bestLoss = epochLoss BLE = e + 1 torch.save({"model_state_dict": model.state_dict(), "optim_state_dict": optimizer.state_dict()}, name) if scheduler: scheduler.step(epochLoss["valid"]) if verbose: print("Epoch: {} | Train Loss: {:.5f} | Valid Loss: {:.5f} | Time: {:.2f} seconds".format(e+1, epochLoss["train"], epochLoss["valid"], time()-e_st)) if verbose: breaker() print("Best Validation Loss at Epoch {}".format(BLE)) breaker() print("Time Taken [{} Epochs] : {:.2f} minutes".format(len(Losses), (time()-start_time)/60)) breaker() print("Training Completed") breaker() return Losses, BLE, name ##################################################################################################### def predict_batch(model=None, dataloader=None, mode="test", path=None) -> np.ndarray: model.load_state_dict(torch.load(path, 
map_location=DEVICE)["model_state_dict"]) model.to(DEVICE) model.eval() y_pred = torch.zeros(1, 1).to(DEVICE) if re.match(r"valid", mode, re.IGNORECASE): for X, _ in dataloader: X = X.to(DEVICE) with torch.no_grad(): output = model(X) y_pred = torch.cat((y_pred, output.view(-1, 1)), dim=0) elif re.match(r"test", mode, re.IGNORECASE): for X in dataloader: X = X.to(DEVICE) with torch.no_grad(): output = model(X) y_pred = torch.cat((y_pred, output.view(-1, 1)), dim=0) return y_pred[1:].detach().cpu().numpy() # + [markdown] id="UdxevwnmJWKa" # # Train # + id="WVPG30fTJXhv" def train(images: np.ndarray, targets: np.ndarray, n_splits: int, batch_size: int, lr: float, wd: float, epochs: int, early_stopping: int, path: str, patience=None, eps=None) -> list: fold = 1 for tr_idx, va_idx in KFold(n_splits=n_splits, shuffle=True, random_state=SEED).split(images): fold += 1 if fold == 8: break tr_images, va_images = images[tr_idx], images[va_idx] tr_targets, va_targets = targets[tr_idx], targets[va_idx] tr_targets = sc_y.fit_transform(tr_targets) va_targets = sc_y.transform(va_targets) dataloaders = build_dataloaders(tr_images, va_images, tr_targets, va_targets, batch_size, SEED, transform=TRANSFORM) model = build_model(IL=NUM_FEATURES, seed=SEED, path=path).to(DEVICE) optimizer = model.get_optimizer(lr=lr, wd=wd) scheduler = None if isinstance(patience, int) and isinstance(eps, float): scheduler = model.get_plateau_scheduler(optimizer, patience, eps) L, _, name = fit(model=model, optimizer=optimizer, scheduler=scheduler, epochs=epochs, early_stopping_patience=early_stopping, dataloaders=dataloaders, verbose=verbose) y_pred = predict_batch(model=model, dataloader=dataloaders["valid"], mode="valid", path=name) RMSE = np.sqrt(mean_squared_error(sc_y.inverse_transform(y_pred), sc_y.inverse_transform(va_targets))) if verbose: print("Validation RMSE : {:.5f}".format(RMSE)) breaker() show_graphs(L) # + [markdown] id="yBUy00LcKG_I" # ## Main # + id="WQYu9Db6KITD" def main(): breaker() print("Clean Memory , {} Objects Collected ...".format(gc.collect())) ########### Params ########### if DEBUG: n_splits = 3 patience, eps = 5, 1e-8 epochs, early_stopping = 5, 5 batch_size = 64 lr = 1e-5 wd = 1e-5 else: n_splits = 10 patience, eps = 5, 1e-8 epochs, early_stopping = 25, 8 batch_size = 64 lr = 1e-5 wd = 1e-5 ############################## if verbose: breaker() print("Loading Data ...") images = np.load(os.path.join(IMAGE_PATH, "Images_224x224.npy")) targets = get_targets() pretrained_ann_path = os.path.join(STATE_PATH, "Fold_8_state.pt") # Without Scheduler train(images, targets, n_splits, batch_size, lr, wd, epochs, early_stopping, pretrained_ann_path, patience=None, eps=None) # With Scheduler # train(images, targets, n_splits, batch_size, lr, wd, epochs, early_stopping, pretrained_ann_path, patience=patience, eps=eps) breaker() # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="FZ_PwKpxMZSs" outputId="2d325d55-4ca3-4fb4-f05a-89a618bd57ab" main() # --- # jupyter: # jupytext: # formats: ipynb,py:percent # text_representation: # extension: .py # format_name: percent # format_version: '1.3' # jupytext_version: 1.14.4 # kernelspec: # display_name: Master's Thesis # language: python # name: masters-thesis # --- # %% [markdown] # # Computing the DTW Distance On UCR Datasets # %% import csv import time import numpy as np import pandas as pd from dtw import dtw from tqdm.notebook import tqdm # %% files = !(find ../UCRArchive_2018/ -type f -name "*TRAIN.tsv" -exec ls -al {} \; | sort -k 5 -n | sed 's/ 
\+/\t/g' | cut -f 9) files # %% len(files) # %% ress = [] with open("../logs/create_features_dtw.csv", "w") as log_file: writer = csv.writer(log_file, delimiter=",", quotechar='"') writer.writerow(["name", "train_distance_time", "test_distance_time"]) for file_name in tqdm(files, desc="Files processing"): np.random.seed(42) name = file_name.split("/")[-1].replace("_TRAIN.tsv", "") train_frame = pd.read_csv(file_name, delimiter="\t", header=None).interpolate( limit_direction="backward", axis=1 ) test_frame = pd.read_csv( file_name.replace("TRAIN.tsv", "TEST.tsv"), delimiter="\t", header=None ).interpolate(limit_direction="backward", axis=1) start_time = time.monotonic() train_dtw = pd.DataFrame( [ [ dtw(w[~np.isnan(w)], x[~np.isnan(x)]).distance for w in train_frame.values[:, 1:] ] for x in tqdm( train_frame.values[:, 1:], desc=f"{name} Train frame", leave=False ) ] ) train_timer = time.monotonic() - start_time train_dtw.to_csv( file_name.replace("TRAIN.tsv", "TRAIN_train_dtw.csv"), header=None, index=None, ) start_time = time.monotonic() test_dtw = pd.DataFrame( [ [ dtw(w[~np.isnan(w)], x[~np.isnan(x)]).distance for w in train_frame.values[:, 1:] ] for x in tqdm( test_frame.values[:, 1:], desc=f"{name} Test frame", leave=False ) ] ) test_timer = time.monotonic() - start_time test_dtw.to_csv( file_name.replace("TRAIN.tsv", "TEST_train_dtw.csv"), header=None, index=None, ) log = [name, train_timer, test_timer] writer.writerow(log) print(*log) # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.2.0 # language: julia # name: julia-1.2 # --- # # Generic programming # From [Wikipedia](https://en.wikipedia.org/wiki/Generic_programming): # > **Generic programming** is a style of computer programming in which algorithms are written in terms of types *to-be-specified-later* that are then *instantiated* when needed for specific types provided as parameters. # # Example: Vandermonde matrix (revisited) # [Vandermonde matrix:](https://en.wikipedia.org/wiki/Vandermonde_matrix) # \begin{align}V=\begin{bmatrix}1&\alpha _{1}&\alpha _{1}^{2}&\dots &\alpha _{1}^{n-1}\\1&\alpha _{2}&\alpha _{2}^{2}&\dots &\alpha _{2}^{n-1}\\1&\alpha _{3}&\alpha _{3}^{2}&\dots &\alpha _{3}^{n-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&\alpha _{m}&\alpha _{m}^{2}&\dots &\alpha _{m}^{n-1}\end{bmatrix}\end{align} # ### Keep function argument types generic if possible function vander_naive(x::Vector) m = length(x) V = Matrix{Float64}(undef, m, m) for j = 1:m V[j,1] = 1.0 end for i= 2:m for j = 1:m V[j,i] = x[j] * V[j,i-1] end end return V end x = rand(3) vander_naive(x) vander_naive(1:3) # The annotation `::Vector` in the the function signature is unnecessarily specific. function vander_naive(x::AbstractVector) m = length(x) V = Matrix{Float64}(undef, m, m) for j = 1:m V[j,1] = 1.0 end for i= 2:m for j = 1:m V[j,i] = x[j] * V[j,i-1] end end return V end vander_naive(1:3) # ### Avoid explicit typing if possible vander_naive([1,2,3]) # Why is the result a matrix of floating point numbers....? # # Even worse: vander_naive(rand(ComplexF64, 3)) # We can easily cover those cases as well by only slightly modifying our code. 
function vander_almost_generic(x::AbstractVector) T = eltype(x) m = length(x) V = Matrix{T}(undef, m, m) for j = 1:m V[j,1] = 1.0 end for i= 2:m for j = 1:m V[j,i] = x[j] * V[j,i-1] end end return V end vander_almost_generic([1,2,3]) vander_almost_generic(rand(ComplexF64, 3)) vander_almost_generic(["Stadt", "Land", "Fluss"]) function vander_generic(x::AbstractVector{T}) where T # this is the same as just x::AbstractVector m = length(x) V = Matrix{T}(undef, m, m) for j = 1:m V[j,1] = one(T) end for i= 2:m for j = 1:m V[j,i] = x[j] * V[j,i-1] end end return V end vander_generic(["Stadt", "Land", "Fluss"]) # One more level of generality, just because we can. :) vander_generic([3, "Stadt", 4 + 5im]) function vander_supergeneric(x::AbstractVector{T}) where T m = length(x) V = Matrix{T}(undef, m, m) for j = 1:m V[j,1] = one(x[j]) end for i= 2:m for j = 1:m V[j,i] = x[j] * V[j,i-1] end end return V end vander_supergeneric([3, "Stadt", 4 + 5im]) # ### And all of this comes at no performance penality using BenchmarkTools x = rand(Float64, 100); @btime vander_naive($x); @btime vander_supergeneric($x); # Actually, for this specific example **our generic code is faster** in a few cases inasmuch as type conversions are unnecessary. x = rand(Int, 100); @btime vander_naive($x); @btime vander_supergeneric($x); x = rand(Bool, 100); @btime vander_naive($x); @btime vander_supergeneric($x); # On the other hand, sometimes it is worth converting to a different type to dispatch to a faster method or to utilize magic like compiler optimizations. x = rand(Float32, 100); @btime vander_naive($x); @btime vander_supergeneric($x); # # Arbitrary precision computations # Let's say you have implemented the following crazily complex physics code as part of your thesis project which takes in a number and spits out the answer to life, the universe, and everything. function answer_to_life_universe_and_everything(x::Integer) m = sin((2*x)^100) c = 42 E = m*c^2 a = sqrt(abs(E)) b = atan(m) c = a^2 + b^2 answer = sqrt(1764)/(1+exp(-c)) end answer_to_life_universe_and_everything(2) # Alright, apparently the answer is 21! # # The author: "I checked the code multiple times, it is correct. So let's publish." # Without changing a line of code we can check the *correctness* (in the numerical sense) of our result using [arbitrary precision arithmetics](https://docs.julialang.org/en/latest/manual/integers-and-floating-point-numbers/#Arbitrary-Precision-Arithmetic-1). big(2) typeof(big(2)) answer_to_life_universe_and_everything(big(2)) # # Let's get a bit more fancy! using Interact @manipulate for n in 1:20 [i*j for i in 1:n, j in 1:n] end function insert_block(A::AbstractMatrix, i, j, what=7) B = copy(A) B[i:i+2, j:j+2] .= what B end A = fill(0, 9, 9) insert_block(A, 3, 5) # this returns the new matrix # + A = fill(0, 10, 10) n = size(A, 1) @manipulate for i in 1:n-2, j in 1:n-2 insert_block(A, i, j) end # - # ### Let's add some color! # Our function `insert_block` is generic. Since the first argument `A isa AbstractArray`, we can index into it and set new values. Pretty much every value type is fine! 
using Colors @manipulate for n in 1:80 distinguishable_colors(n) end colors = distinguishable_colors(10) colors[1] # + A = fill(colors[1], 10, 10) n = size(A, 1) @manipulate for i in 1:n-2, j in 1:n-2 insert_block(A, i, j, colors[4]) end # - # # Generic Programming + Multiple Dispatch + JIT # The possibility to write generic algorithms that compile to fast machine code in combination with multiple dispatch leads to an ([unreasonable](https://pretalx.com/juliacon2019/talk/BCYWZJ/)) amount of code reuse. This sharing of code comes in two forms: # 1. **Sharing types** among a wide variety of packages implementing different algorithms; # 2. **Sharing generic algorithms** that work for different package-defined types implementing common abstractions. # drawing # 1. **Sharing types:** DataStructures.jl, OrderedCollections.jl, StaticArrays.jl, Colors.jl, Measurements.jl ... # As of the time of this writing, **804 packages** depend on the data types provided in [DataStructures.jl](https://juliacollections.github.io/DataStructures.jl/latest/). # # **851 packages** reuse type implementations in [OrderedCollections.jl](https://github.com/JuliaCollections/OrderedCollections.jl). # # **That's about every third package!** # 2. **Sharing generic algorithms:** StatsBase.jl, SortingAlgorithms.jl, GenericLinearAlgebra.jl, ... # # Emergent features # `Measurement` type from [Measurements.jl]() and differential equation solver from [OrdinaryDiffEq.jl](https://github.com/JuliaDiffEq/OrdinaryDiffEq.jl) (i.e. [DifferentialEquations.jl](https://github.com/JuliaDiffEq/DifferentialEquations.jl)) # + using OrdinaryDiffEq, Measurements, PyPlot #Half-life of Carbon-14 is 5730 years. c = 5.730 ± 2 #Setup u0 = 1.0 ± 0.1 tspan = (0.0 ± 0.0, 1.0 ± 0.0) #Define the problem radioactivedecay(u,p,t) = -c*u #Pass to solver prob = ODEProblem(radioactivedecay,u0,tspan) sol = solve(prob, Tsit5(), reltol=1e-8, abstol=1e-8) # + # analytic solution u = u0 .* exp.(-c .* sol.t); # plot solution ts = getfield.(sol.t, :val) solvals = getfield.(sol, :val) solerrs = getfield.(sol, :err); errorbar(ts, solvals, yerr=solerrs) plot(ts, getfield.(u, :val), color="red", lw=2) ylabel("u(t)") xlabel("t"); # - # Note that, in some sense, **Julia implemented that feature by itself**. # # The authors of Measurements.jl and DifferentialEquations.jl never had any collabration on this. # # It **just works**. # # Core messages of this Notebook # # * It is simple to write **type-generic code** in Julia and you should do it. # * Generally, **generic code is just as fast as specific code**. 
# * Generic Programming + Multiple Dispatch + JIT = **lots of code sharing and emergent features** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: tf # language: python # name: tf # --- import random as rd import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap import statistics as stat import math import scipy.stats as st # + N=200 N1 = sum(np.random.randint(0,2,size = N) == 0) N2 = N-N1 mean1 = np.array([2.5,2.5]) cov1 = np.array([[2,-0.8],[-0.8,2]]) X1 = np.random.multivariate_normal(mean1, cov1, N1) label1 = 1 mean2 = np.array([0,0]) cov2 = np.array([[1,0],[0,1]]) X2 = np.random.multivariate_normal(mean2, cov2, N2) label2 = 0 x1,x2 = [],[] for i in range(N1): x1 += [list(X1[i,]) + [label1]] for i in range(N2): x2 += [list(X2[i,]) + [label2]] X = np.array(x1+x2) # - # ### Grid of size 15*15 x1_min, x1_max=min(X[:,0]), max(X[:,0]) x2_min, x2_max=min(X[:,1]), max(X[:,1]) Neval=15;h1=(x1_max-x1_min)/Neval;h2=(x2_max-x2_min)/Neval x1Eval, x2Eval=np.meshgrid(np.arange(x1_min, x1_max, h1), np.arange(x2_min, x2_max, h2)); plt.plot(X1[:,0],X1[:,1],'ro') plt.plot(X2[:,0],X2[:,1],'bo') plt.plot(x1Eval,x2Eval,'kx') plt.show() # ### Random Forest # #### Model Calibration on train set # + #Random forest from sklearn.ensemble import RandomForestClassifier from sklearn import tree from sklearn.tree import DecisionTreeClassifier from sklearn.tree import export_text #tree = tree.DecisionTreeClassifier() from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X[:,0:2], X[:,2], test_size=1/3) RF = RandomForestClassifier(max_depth=2, random_state=0, oob_score = True) RFfit = RF.fit(X_train, y_train); # - score=RF.score; OOB=RF.oob_score_ IF=RF.feature_importances_ perf = {'train_accuracy' : score(X_train,y_train), 'test_accuracy' : score(X_test, y_test), 'OOB_score' : OOB, 'Features_importance' : IF} perf # #### map plt.figure(figsize=(15,5)) plt.subplot(132) plt.scatter(X[:,0],X[:,1],c = RFfit.predict(X[:,0:2]),cmap = 'Spectral') plt.title('Predicted Model ld') plt.subplot(131) plt.scatter(X[:,0],X[:,1],c = X[:,2],cmap = 'Spectral') plt.title('True Model') ax = plt.subplot(133) ax.set_title('MAP RF C. 
Decision boudaries') tt = np.array(list(zip(list(x1Eval.flat),list(x2Eval.flat)))) Z = RFfit.predict(tt) Z = Z.reshape(x1Eval.shape) cm = plt.cm.RdBu ax.contourf(x1Eval, x2Eval, Z, cmap=cm, alpha=.8); #ax.scatter(x1Eval,x2Eval,c='gray',marker='x') plt.show() # ### ADABOOST # ##### Overall data ##Boosting from sklearn.ensemble import AdaBoostClassifier from sklearn import metrics Ab = AdaBoostClassifier(n_estimators=100, random_state=0) Abfit=Ab.fit(X[:,0:2], X[:,2]) y_pred=Abfit.predict(X[:,0:2]); E_all=(X[:,2] != y_pred).sum()/len(X[:,2]) print("Boost Error on the complete training set %5.2f->",E_all) # ##### Separating train and test # + #### splitting data X_train, X_test, y_train, y_test = train_test_split(X[:,0:2], X[:,2], test_size=1/3) #### Training data Ab = AdaBoostClassifier(n_estimators=100, random_state=0) Abfit=Ab.fit(X_train, y_train) y_pred_train=Abfit.predict(X_train); E_train=(y_train != y_pred_train).sum()/len(y_train) print("Boost Error on the complete training set %5.2f->",E_train) #### prediction on test data y_pred_test=Abfit.predict(X_test); E_test=(y_test != y_pred_test).sum()/len(y_test) print("Boost Error on the complete training set %5.2f->",E_test) # - # #### map ADABOOST plt.figure(figsize=(15,5)) plt.subplot(132) plt.scatter(X[:,0],X[:,1],c = Abfit.predict(X[:,0:2]),cmap = 'Spectral') plt.title('Predicted Model ld') plt.subplot(131) plt.scatter(X[:,0],X[:,1],c = X[:,2],cmap = 'Spectral') plt.title('True Model') ax = plt.subplot(133) ax.set_title('MAP ADABOOST C. Decision boudaries') tt = np.array(list(zip(list(x1Eval.flat),list(x2Eval.flat)))) Z = Abfit.predict(tt) Z = Z.reshape(x1Eval.shape) cm = plt.cm.RdBu ax.contourf(x1Eval, x2Eval, Z, cmap=cm, alpha=.8); #ax.scatter(x1Eval,x2Eval,c='gray',marker='x') plt.show() # #### Structure of the successive Adaboost trees import matplotlib.pyplot as plt Ad = AdaBoostClassifier(n_estimators=200, random_state=0) Ad.fit(X[:,0:2], X[:,2]) Ad_seq_errors = [] for Ad_train_predict in Ad.staged_predict(X[:,0:2]): Ad_seq_errors.append(metrics.accuracy_score(Ad_train_predict, X[:,2])) plt.figure(figsize=(15, 5)) plt.plot(Ad_seq_errors); plt.title('Adaboost underlaying tree Accuracy') plt.ylabel('Error'); plt.xlabel('Number of Trees') # ### Gradient Boosting # #### Model calibration from sklearn.datasets import make_classification from sklearn.ensemble import GradientBoostingClassifier GB = GradientBoostingClassifier(random_state=0) GBfit=GB.fit(X[:,0:2], X[:,2]) y_pred=GBfit.predict(X[:,0:2]) E_all=(X[:,2] != y_pred).sum()/len(X[:,2]) print("Gradient Boosting Error on the complete training set %5.2f->",E_all) # + #### splitting data X_train, X_test, y_train, y_test = train_test_split(X[:,0:2], X[:,2], test_size=1/3) #### Training data GB = GradientBoostingClassifier(random_state=0) GBfit=GB.fit(X_train, y_train) y_pred_train=GBfit.predict(X_train); E_train=(y_train != y_pred_train).sum()/len(y_train) print("Boost Error on the complete training set %5.2f->",E_train) #### prediction on test data y_pred_test=GBfit.predict(X_test); E_test=(y_test != y_pred_test).sum()/len(y_test) print("Boost Error on the complete training set %5.2f->",E_test) # - # #### map Gradient Boosting plt.figure(figsize=(15,5)) plt.subplot(132) plt.scatter(X[:,0],X[:,1],c = GBfit.predict(X[:,0:2]),cmap = 'Spectral') plt.title('Predicted Model ld') plt.subplot(131) plt.scatter(X[:,0],X[:,1],c = X[:,2],cmap = 'Spectral') plt.title('True Model') ax = plt.subplot(133) ax.set_title('MAP ADABOOST C. 
Decision boudaries') tt = np.array(list(zip(list(x1Eval.flat),list(x2Eval.flat)))) Z = GBfit.predict(tt) Z = Z.reshape(x1Eval.shape) cm = plt.cm.RdBu ax.contourf(x1Eval, x2Eval, Z, cmap=cm, alpha=.8); #ax.scatter(x1Eval,x2Eval,c='gray',marker='x') plt.show() # #### Structure of the successive Adaboost trees import matplotlib.pyplot as plt GB = GradientBoostingClassifier(random_state=0) GB.fit(X[:,0:2], X[:,2]) # + GB_seq_errors = [] for GB_train_predict in GB.staged_predict(X[:,0:2]): GB_seq_errors.append(metrics.accuracy_score(GB_train_predict, X[:,2])) plt.figure(figsize=(15, 5)) plt.plot(GB_seq_errors); plt.title('Gradient Boosting underlaying tree Accuracy') plt.ylabel('Error'); plt.xlabel('Number of Trees') # - # #### Stacking # + from sklearn.ensemble import StackingClassifier from sklearn.linear_model import LogisticRegression from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.tree import DecisionTreeClassifier # get a stacking ensemble of models def stack_model(): # define the base models level0 = list() level0.append(('bayes', GaussianNB())) level0.append(('lda', LinearDiscriminantAnalysis(n_components=1))) level0.append(('qda', QuadraticDiscriminantAnalysis())) level0.append(('knn', KNeighborsClassifier(n_neighbors=5))) level0.append(('logreg', LogisticRegression(C=1e5))) level0.append(('Tree', DecisionTreeClassifier())) # define meta learner model level1 = DecisionTreeClassifier(random_state=0) # define the stacking ensemble model = StackingClassifier(estimators=level0, final_estimator=level1, cv=5) return model # - stack = stack_model() stackfit = stack.fit(X[:,0:2], X[:,2]) y_pred=stackfit.predict(X[:,0:2]); E_all=(X[:,2] != y_pred).sum()/len(X[:,2]) print("Stack Error on the complete training set %5.2f->",E_all) # + #### splitting data X_train, X_test, y_train, y_test = train_test_split(X[:,0:2], X[:,2], test_size=1/3) #### Training data stack = stack_model() stackfit=stack.fit(X_train, y_train) y_pred_train=stackfit.predict(X_train); E_train=(y_train != y_pred_train).sum()/len(y_train) print("Stack Error on the complete training set %5.2f->",E_train) #### prediction on test data y_pred_test=stackfit.predict(X_test); E_test=(y_test != y_pred_test).sum()/len(y_test) print("Stack Error on the complete test set %5.2f->",E_test) # - plt.figure(figsize=(15,5)) plt.subplot(132) plt.scatter(X[:,0],X[:,1],c = stackfit.predict(X[:,0:2]),cmap = 'Spectral') plt.title('Predicted Model ld') plt.subplot(131) plt.scatter(X[:,0],X[:,1],c = X[:,2],cmap = 'Spectral') plt.title('True Model') ax = plt.subplot(133) ax.set_title('MAP stacking C. Decision boudaries') tt = np.array(list(zip(list(x1Eval.flat),list(x2Eval.flat)))) Z = stackfit.predict(tt) Z = Z.reshape(x1Eval.shape) cm = plt.cm.RdBu ax.contourf(x1Eval, x2Eval, Z, cmap=cm, alpha=.8); #ax.scatter(x1Eval,x2Eval,c='gray',marker='x') plt.show() # #### Classification models for a real application: the Heart dataset. 
#Application SA Heart#################################### import pandas as pd import numpy as np tab=pd.read_csv('SAheart.txt') #print(tab) np.shape(tab)## (462, 11) Y=tab["chd"] Xnum=tab.loc[:,['famhist','sbp','tobacco','ldl','adiposity','typea','obesity','alcohol','age']] Xnum['famhist'] = (Xnum['famhist'] == 'Present')*1.0 X=Xnum#.to_numpy(); X.shape # + # get a stacking ensemble of models from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler import numpy as np params = {'tree' : {'max_depth': [None,1,3, 5, 7], 'min_samples_split': [2, 5, 10]}, 'RF' : {'n_estimators' : [int(x) for x in np.linspace(10,200, num = 25) ], 'max_features' : ['auto', 'sqrt'], 'max_depth' : [None,1,3, 5, 7], 'bootstrap' : [False,True]}, # [int(x) for x in np.linspace(10, 110, num = 11)], #'oob_score' : [F]}, 'Adaboost' : {'n_estimators' : [int(x) for x in np.linspace(10,200, num = 30)], 'learning_rate' : [0.001,0.01,0.1,0.2,0.5]}, #'max_depth' : [3, 5, 7,9, None]}, 'GB' : {'learning_rate' : [0.5, 0.25, 0.1, 0.05, 0.01, 0.001], 'n_estimators' : [int(x) for x in np.linspace(10,200, num = 30) ], 'max_depth' : [3, 5, 7,None,9]} } def stack_model(): # define the base models level0 = list() level0.append(('bayes', GaussianNB())) level0.append(('lda', LinearDiscriminantAnalysis(n_components=1))) level0.append(('qda', QuadraticDiscriminantAnalysis())) level0.append(('knn', KNeighborsClassifier(n_neighbors=5))) #level0.append(('logreg', make_pipeline(StandardScaler(), LogisticRegression(C = 1e5)))) level0.append(('Tree', DecisionTreeClassifier())) # define meta learner model level1 = DecisionTreeClassifier(random_state=0) # define the stacking ensemble model = StackingClassifier(estimators=level0, final_estimator=level1, cv=5) return model # get a list of models to evaluate def get_models(): models = dict() models['tree'] = DecisionTreeClassifier() models['RF'] = RandomForestClassifier() models['Adaboost'] = AdaBoostClassifier() models['GB'] = GradientBoostingClassifier() #models['Stacking'] = stack_model() return models def evaluate_model(model, X, y): cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1) scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise') return scores def scoring_function(y_true, y_pred): accuracy = metrics.accuracy_score(y_true=y_true,y_pred=y_pred) precision = metrics.precision_score(y_true=y_true, y_pred=y_pred) f1_score = metrics.f1_score(y_true=y_true,y_pred=y_pred) recall = metrics.recall_score(y_true=y_true,y_pred=y_pred) perf = {'accuracy' : accuracy,'precision' : precision, 'f1_score' : f1_score, 'recall' : recall} return perf # + ff = {} ff['fdg'] = 2 ff['sff'] = 3 ff models.items() # - # + #### splitting data import pandas as pd from sklearn.model_selection import KFold from sklearn import metrics from sklearn.metrics import make_scorer from sklearn.model_selection import GridSearchCV import time models = get_models() stackModel = stack_model() perf = {} model_trained = {} X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=1/4) scoring = {'accuracy': make_scorer(accuracy_score),'prec': 'precision','f1_score' : 'f1', 'recall' : 'recall'} i = 1 for name, model in models.items(): print("-----------------tunning hyperparameters model {} : {}----------------".format(i, name)) i+=1 start = time.time() model_i = GridSearchCV(model, param_grid=params[name],cv = 3, scoring="accuracy", refit='accuracy', verbose=2, n_jobs=4 ,return_train_score=True).fit(X_train, y_train) 
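    # `model_i` is now a fitted GridSearchCV: a 3-fold search over `params[name]`, scored on
    # accuracy and refit on the full training split with the best parameters. The lines below
    # log the elapsed time, keep the tuned search object in `model_trained[name]`, and score
    # its held-out test predictions with `scoring_function` (accuracy, precision, F1, recall).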
print('Execution time {} : {} seconds'.format(name,time.time() - start)) model_trained[name] = model_i y_pred = model_i.predict(X_test) perf[name] = scoring_function(y_test, y_pred) stackModel.fit(X_train, y_train) y_pred = stackModel.predict(X_test) perf['stack'] = scoring_function(y_test, y_pred) # - print(np.mean(model_trained["tree"].best_estimator_.predict(X_test) == y_test)) print(np.mean(model_trained["GB"].predict(X_test) == y_test)) X_train.shape model_trained["Adaboost"].best_estimator_ model_trained["GB"].best_estimator_ pd.DataFrame(perf) # + import pandas as pd from sklearn.model_selection import KFold from sklearn import metrics from sklearn.metrics import make_scorer models = get_models() kf = KFold(n_splits= 10,shuffle=True) data_train = pd.DataFrame() data_test = pd.DataFrame() j = 0 for train_index, test_index in kf.split(X): X_train, X_test = X[train_index], X[test_index] y_train, y_test = Y[train_index], Y[test_index] ### model #tree = DecisionTreeClassifier(min_samples_split= 30, # min_samples_leaf = 10) #bagging = BaggingClassifier(n_estimators=20, # base_estimator=tree, # random_state=0) #RF = RandomForestClassifier(max_depth=5, # random_state=0) for name, model in models.items(): modelfit = model.fit(X_train, y_train) predxclass_train= modelfit.predict(X_train) predxclass_test = modelfit.predict(X_test) #predxclass_train = np.argmax(pY_train, axis=1) #predxclass_test = np.argmax(pY_test, axis=1) data_test.loc[j,name] = metrics.accuracy_score(y_test,predxclass_test) data_train.loc[j,name] = metrics.accuracy_score(y_train,predxclass_train) j+=1 #data = [accuracy_train, accuracy_test, precision_train, precision_test, recall_train, recall_test, f1_train, f1_test, auc_train, auc_test] plt.figure(figsize=(25,8)) ax = plt.subplot(121) ax.boxplot(data_train,labels = list(data_train.columns),showmeans=True) ax = plt.subplot(122) ax.boxplot(data_test,labels = list(data_train.columns),showmeans=True) plt.show() # - models = get_models() data_train model from sklearn.model_selection import cross_validate from sklearn.metrics import make_scorer from sklearn.metrics import accuracy_score kf = KFold(n_splits= 5,shuffle=True) scoring = {'accuracy': make_scorer(accuracy_score),'prec': 'precision','f1_score' : 'f1', 'recall' : 'recall'} ss = cross_validate(model, X_train, y_train, scoring= scoring, cv=kf, n_jobs=-1, error_score='raise', return_train_score=True) cross_val_score(model, X_train, y_train, scoring='accuracy', cv=kf, n_jobs=-1, error_score='raise') pd.DataFrame(ss) ss['test_prec'] param_grid = {'max_depth': [None,3, 5, 10], 'min_samples_split': [2, 5, 10]} mm = models['RF'] # explicitly require this experimental feature from sklearn.experimental import enable_halving_search_cv # noqa # now you can import normally from model_selection from sklearn.model_selection import HalvingGridSearchCV base_estimator = RandomForestClassifier(random_state=0) sh = HalvingGridSearchCV(base_estimator, param_grid, cv=5, factor=2, resource='n_estimators', max_resources=30,refit=True).fit(X_train, y_train) from sklearn.model_selection import GridSearchCV scoring = {'accuracy': make_scorer(accuracy_score),'prec': 'precision','f1_score' : 'f1', 'recall' : 'recall'} gs = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid={'min_samples_split': range(2, 60, 1)},cv = 5, scoring=scoring, refit='f1_score', return_train_score=True).fit(X_train, y_train) results = gs.cv_results_ sum(sh.predict(X_test)!= y_test)/len(y_test) sum(gs.predict(X_test)!= y_test)/len(y_test) gs.best_estimator_ # + 
plt.figure(figsize=(13, 13)) plt.title("GridSearchCV evaluating using multiple scorers simultaneously", fontsize=16) plt.xlabel("min_samples_split") plt.ylabel("Score") ax = plt.gca() ax.set_xlim(0, 402) ax.set_ylim(0.73, 1) # Get the regular numpy array from the MaskedArray X_axis = np.array(results['param_min_samples_split'].data, dtype=float) for scorer, color in zip(sorted(scoring), ['g', 'k']): for sample, style in (('train', '--'), ('test', '-')): sample_score_mean = results['mean_%s_%s' % (sample, scorer)] sample_score_std = results['std_%s_%s' % (sample, scorer)] ax.fill_between(X_axis, sample_score_mean - sample_score_std, sample_score_mean + sample_score_std, alpha=0.1 if sample == 'test' else 0, color=color) ax.plot(X_axis, sample_score_mean, style, color=color, alpha=1 if sample == 'test' else 0.7, label="%s (%s)" % (scorer, sample)) best_index = np.nonzero(results['rank_test_%s' % scorer] == 1)[0][0] best_score = results['mean_test_%s' % scorer][best_index] # Plot a dotted vertical line at the best score for that scorer marked by x ax.plot([X_axis[best_index], ] * 2, [0, best_score], linestyle='-.', color=color, marker='x', markeredgewidth=3, ms=8) # Annotate the best score for that scorer ax.annotate("%0.2f" % best_score, (X_axis[best_index], best_score + 0.005)) plt.legend(loc="best") plt.grid(False) plt.show() # - results # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Laboratorio 1 # 123972 \ # 151280 \ # 119523 # ## Data profiling - EDA import re import pandas as pd import numpy as np from pandas_profiling import ProfileReport import csv import requests # + import sys sys.path.append('./../') # OPTIONAL: Load the "autoreload" extension so that code can change # %load_ext autoreload #Own Library modules import src # OPTIONAL: always reload modules so that as you change code in src, it gets loaded # %autoreload 2 from src.algorithms import eda as eda_eq # + # para leer el archivo como local agua = pd.read_csv('../data/consumo-agua.csv') # Para leer el archivo directo desde la url: #CSV_URL = 'https://datos.cdmx.gob.mx/explore/dataset/consumo-agua/download/?format=csv&timezone=America/Mexico_City&lang=es&use_labels_for_header=true&csv_separator=%2C' #agua_liga = pd.read_csv(CSV_URL) #agua = agua_liga # Otra forma de extraerlo desde la liga, pero menos eficiente # CSV_URL = 'https://datos.cdmx.gob.mx/explore/dataset/consumo-agua/download/?format=csv&timezone=America/Mexico_City&lang=es&use_labels_for_header=true&csv_separator=%2C' # r = requests.get(CSV_URL) # with requests.Session() as s: # download = s.get(CSV_URL) # decoded_content = download.content.decode('utf-8') # cr = csv.reader(decoded_content.splitlines(), delimiter=',') # my_list = list(cr) # agua = pd.DataFrame(my_list[1:len(my_list)], columns = my_list[0]) # - agua.head() # ### ¿Cuántas variables tenemos? len(agua.columns) # ### ¿Cuántas observaciones tenemos? len(agua) # ### ¿Cuántas observaciones únicas tenemos por variable? 
tabla_nunicos = pd.DataFrame(agua.nunique()) tabla_nunicos.columns = ['número de elementos únicos'] tabla_nunicos['número de elementos únicos'] = tabla_nunicos['número de elementos únicos'].map('{:,}'.format) tabla_nunicos # #### Transformaciones que deberán realizarse más adelantte: # Geo Point object --> split y dos float # Geo Shape object # consumo_total_mixto float64 # anio int64 --> Eliminar # nomgeo object --> Categorica # consumo_prom_dom float64 # consumo_total_dom float64 # alcaldia object --> Categorica # colonia object --> Categorica # consumo_prom_mixto float64 # consumo_total float64 # consumo_prom float64 # consumo_prom_no_dom float64 # bimestre int64 --> Categorica # consumo_total_no_dom float64 # gid int64 --> Categorica # indice_des object --> Categorica # ### ¿Cuántas variables numéricas tenemos? eda_eq.cuenta_tipo_de_dato(agua,'numerico') # ### ¿Cuántas variables de fecha tenemos? eda_eq.cuenta_tipo_de_dato(agua,'Date') # ### ¿Cuántas variables categóricas tenemos? eda_eq.cuenta_tipo_de_dato(agua,'category') # ### ¿Cuántas variables de texto tenemos? eda_eq.cuenta_tipo_de_dato(agua,'object') # ### Genera el profiling de cada variable (propio) # #### Perfilamiento general profiling_general = eda_eq.genera_profiling_general(agua) #profiling_general['Resultado'] = profiling_general['Resultado'].map('{:,}'.format) profiling_general['Resultado'][1] = '{:,.0f}'.format(profiling_general['Resultado'][1]) profiling_general['Resultado'][2] = '{:,.0f}'.format(profiling_general['Resultado'][2]) profiling_general['Resultado'][7] = '{:,.0f}'.format(profiling_general['Resultado'][7]) profiling_general eda_eq.cuenta_nulos_por_columnas(agua) eda_eq.cuenta_nulos_por_renglones(agua) eda_eq.cuenta_nulos_por_renglones_tabla(agua) # ## Perfilamiento por variable profiling_numerico,profiling_categoricas,profiling_texto = eda_eq.genera_profiling_por_variable(agua) #pd.set_option('display.max_colwidth', -1) profiling_numerico profiling_categoricas profiling_texto # ### Genera el profiling de cada variable (Pandas profiling) # + #profile = ProfileReport(agua, title='Pandas Profiling Report') # + #profile # + #profile.to_file(output_file = '../results/pandas_profiling_inicial.html') # - # ### ¿Cuántas alcadías tienes? ¿Cuántos nomgeo tienes? ¿Identificas algún error? alcaldias = pd.unique(agua['alcaldia']) alcaldias_n = agua['alcaldia'].nunique() print(alcaldias) print(alcaldias_n) nomgeos = pd.unique(agua['nomgeo']) nomgeos_n = agua['nomgeo'].nunique() print(nomgeos) print(nomgeos_n) agua.loc[agua["nomgeo"].str.contains('Talpan', case = False, na = None), "nomgeo"] = 'Tlalpan' nomgeos = pd.unique(agua['nomgeo']) nomgeos_n = agua['nomgeo'].nunique() print(nomgeos) print(nomgeos_n) # ### ¿Qué conocemos ahora de este set de datos por variable? # * El dataframe tiene 17 columnas. # * La variable `anio` tiene solamente un valor en toda la base de datos, por lo que no tendría sentido conservarla. # * La variable `nomgeo` contenía un error, poniendo como otra categoría la palabra `talpan`, dicho error fue corregido. # * `gid`indica el ID de cada observación. # * En realidad, deberían ser 8 variables numéricas # * Hay 26,318 valores faltantes en toda la base de datos, que corresponden al 2.2% de los datos. # * No hay valores duplicados # * Hay 5 columnas que tienen NA's --> `consumo_total_mixto, consumo_prom_mixto, consumo_prom_dom, consumo_total_dom y Geo Shape` # * En la variable consumo_total_no_dom y consumo_prom_no_dom los faltantes están expresados con ceros. 
--> **pregunta para cliente para ver si es correcto** # * La observación que tiene más valores faltantes es el 54,555, con 5 valores nulos. # # ------------------------------------- # ##### Profiling numérico: # * La variable `consumo_total_mixto` tiene una desviación estándar muy alta (casi el doble de la media), hay una gran cantidad de ceros (incluso, llegan a ser por lo menos el 25% de los datos) --> son 17,715 ceros # * El cuartil 75% de la variable `consumo_total_mixto`está en un valor de 233, mientras que el máximo está en 23,404 --> podría indicar valores atípicos. # * La variable `consumo_prom_dom` tiene 9,861 ceros. # * El cuartil 75% de la variable `consumo_prom_dom`está en un valor de 36.25, mientras que el máximo está en 7,796 --> podría indicar valores atípicos. # * El cuartil 75% de la variable `consumo_total_dom`está en un valor de 1,261, mientras que el máximo está en 95,060 --> podría indicar valores atípicos. # * `consumo_total_mixto`y `consumo_prom_mixto` tienen la misma cantidad de ceros # * El cuartil 75% de la variable `consumo_prom_mixto`está en un valor de 61, mientras que el máximo está en 11,702 --> podría indicar valores atípicos. # * El cuartil 75% de la variable `consumo_total`está en un valor de 1,808, mientras que el máximo está en 119,727 --> podría indicar valores atípicos. # * La variable `consumo_prom`tiene una desviación estándar de 1,069 y una media de 111. Adicionalmente, el cuartil 75% está en un valor de 45, mientras el máximo está en 89,691. # * `consumo_prom_no_dom`y `consumo_total_no_dom`tienen ambas una media no tan alta, pero una desviación estándar muy grande y un máximo muy elevado. # # ------------------------------------- # ##### Profiling categórico: # * `nomgeo`y `alcaldia`son variables redundantes, no es necesario mantener ambas. # * Iztapalapa, y Cuauhtémoc son las alcaldías con más registros. --> verificar con cliente si es correcto. # * Hay muchísimas colonias # * Solo se tiene información de los primeros 3 bimestres del año # * `gid`indica el ID de cada observación. # * En cuanto al índice de desarrollo (`indice_des`), las tres clasificaciones con mayor cantidad de registros son: # - bajo --> incluye el 41% de las observaciones # - popular # - alto # # Entre estas últimas tres, forman poco más del 80% de los registros, por lo que la categoría "medio" no está tan lejos de ellas. # # # ------------------------------------- # ##### Profiling fecha: # # No hay variables de tipo fecha. # # # ------------------------------------- # ##### Profiling texto: # # No hay variables que debieran estar como de tipo texto, sino más bien categóricas. # # ------------------------------------- # # ##### Comentarios para FE # * Geo Point debe ser separada y dividida en `latitud` y `longitud`. # * Geo Shape indica información para gráficas geoespaciales. # * `nomgeo, alcaldía, colonia, bimestre, gid e indice_des` deberían estar en formato categórico # * anio debería ser eliminada por tener un solo valor # * Solo se tiene información de los primeros 3 bimestres del año # # ### Transformar las variables a formato estándar: minúsculas, sin espacios en blanco, sin signos de puntuación. agua = eda_eq.EstandarizaFormato(agua) agua.head() agua = agua.astype({"bimestre":'category', "indice_des":'category', "nomgeo":'category', "alcaldia":'category',"colonia":'category', "gid":'category'}) agua.head() # ### Agregar la variable latitud y longitud. 
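# The `geo_point` column stores both coordinates in a single comma-separated string, so the split below takes the first part as the new `latitud` column and the second part as `longitud`.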
new = agua['geo_point'].str.split(",", n = 1, expand = True) agua["latitud"]= new[0] agua["longitud"]= new[1] # ### Pasar la variable latitud y longitud a numérica -si no la tomó como numérica-. # Revisamos si está como tipo numérico agua.dtypes agua = agua.astype({"latitud":'float64', "longitud":'float64'}) agua.dtypes # ### Eliminar la columna geo_point -una vez que creaste la variable latitud y longitud. agua.drop(columns =["geo_point"], inplace = True) agua.head() # ### Eliminar la columna geo_shape. agua.drop(columns =["geo_shape"], inplace = True) agua.head() # ### Cambiar a minúsculas las columnas alcaldía, colonia e indice_des. # hecho con las funciones anteriores # ### Volver a correr el proceso de identificación de variables numéricas, categóricas, texto y fechas. # #### Variables numéricas eda_eq.cuenta_tipo_de_dato(agua,'numerico') # #### Variables categóricas eda_eq.cuenta_tipo_de_dato(agua,'category') # #### Variables de texto eda_eq.cuenta_tipo_de_dato(agua,'object') # #### Variables de tipo fecha eda_eq.cuenta_tipo_de_dato(agua,'Date') # ### Genera el data profiling por variable (puedes ocupar el paquete pandas-profiling de pandas) # #### Perfilamiento propio --> General profiling_general = eda_eq.genera_profiling_general(agua) #profiling_general['Resultado'] = profiling_general['Resultado'].map('{:,}'.format) profiling_general['Resultado'][1] = '{:,.0f}'.format(profiling_general['Resultado'][1]) profiling_general['Resultado'][2] = '{:,.0f}'.format(profiling_general['Resultado'][2]) profiling_general['Resultado'][7] = '{:,.0f}'.format(profiling_general['Resultado'][7]) profiling_general eda_eq.cuenta_nulos_por_columnas(agua) eda_eq.cuenta_nulos_por_renglones(agua) eda_eq.cuenta_nulos_por_renglones_tabla(agua) # #### Perfilamiento propio --> Por variable profiling_numerico,profiling_categoricas,profiling_texto = eda_eq.genera_profiling_por_variable(agua) profiling_numerico profiling_categoricas profiling_texto # ### Genera el profiling de cada variable (Pandas profiling) # + #profile = ProfileReport(agua, title='Pandas Profiling Report') # + #profile # + #profile.to_file(output_file = '../results/pandas_profiling_final.html') # - # Queremos generar el data profiling de estos datos. # # * ¿Cuántas variables tenemos? # * ¿Cuántas observaciones tenemos? # * ¿Cuántas observaciones únicas tenemos por variable? # * ¿Cuántas variables numéricas tenemos? # * ¿Cuántas variables de fecha tenemos? # * ¿Cuántas variables categóricas tenemos? # * ¿Cuántas variables de texto tenemos? # * Generea el profiling de cada variable # * ¿Qué conocemos ahora de este set de datos por variable? # * ¿Cuántas alcadías tienes? ¿Cuántos nomgeo tienes? ¿Identificas algún error? # # * Transformar las variables a formato estándar: minúsculas, sin espacios en blanco, sin signos de puntuación. # * Agregar la variable latitud y longitud. # * Pasar la variable latitud y longitud a numérica -si no la tomó como numérica-. # * Eliminar la columna geo_point -una vez que creaste la variable latitud y longitud. # * Eliminar la columna geo_shape. # * Cambiar a minúsculas las columnas alcaldía, colonia e indice_des. # * Volver a correr el proceso de identificación de variables numéricas, categóricas, texto y fechas. 
# * Genera el data profiling por variable (puedes ocupar el paquete pandas-profiling de pandas) agua.head() agua.drop(columns =["nomgeo", "anio", "colonia"], inplace = True) agua.head() # + #agua['indice_des'].unique() # - indices_des_que_hay = list(agua['indice_des'].unique()) indices_des_que_hay indice_alto = agua.loc[agua["indice_des"] == 'alto'] indice_medio = agua.loc[agua["indice_des"] == 'medio'] indice_popular = agua.loc[agua["indice_des"] == 'popular'] indice_bajo = agua.loc[agua["indice_des"] == 'bajo'] print(len(indice_alto)) tabla_alto = eda_eq.cuenta_nulos_por_columnas(indice_alto) tabla_alto['indice'] = 'alto' tabla_alto print(len(indice_medio)) tabla_medio = eda_eq.cuenta_nulos_por_columnas(indice_medio) tabla_medio['indice'] = 'medio' tabla_medio print(len(indice_popular)) tabla_popular = eda_eq.cuenta_nulos_por_columnas(indice_popular) tabla_popular['indice'] = 'popular' tabla_popular print(len(indice_bajo)) tabla_bajo = eda_eq.cuenta_nulos_por_columnas(indice_bajo) tabla_bajo['indice'] = 'bajo' tabla_bajo eda_eq.CreaTablaConteoPorcentaje(agua,'indice_des',True) tabla_todos_NA = tabla_alto.append(tabla_medio) tabla_todos_NA = tabla_todos_NA.append(tabla_popular) tabla_todos_NA = tabla_todos_NA.append(tabla_bajo) tabla_todos_NA['categoria']= tabla_todos_NA.index tabla_todos_NA # + import seaborn as sns import matplotlib.pyplot as plt chart = sns.barplot(x = 'categoria', y = 'Missing Values', hue = 'indice', data = tabla_todos_NA ) #Poner el título a la gráfica chart.set_title("Cantidad absoluta de NA's por variable y su distinción por indice de desarrollo") # Rotar labels del eje x chart.set_xticklabels(chart.get_xticklabels(), rotation=10) #Mover cuadro de leyendas afuera de la gráfica plt.legend(bbox_to_anchor=(1.01, 1),borderaxespad=0) plt.tight_layout() # + chart = sns.barplot(x = 'categoria', y = '% del Total', hue = 'indice', data = tabla_todos_NA ) #Poner el título a la gráfica chart.set_title("Cantidad relativa de NA's por variable y su distinción por indice de desarrollo") # Rotar labels del eje x chart.set_xticklabels(chart.get_xticklabels(), rotation=10) #Mover cuadro de leyendas afuera de la gráfica plt.legend(bbox_to_anchor=(1.01, 1),borderaxespad=0) plt.tight_layout() # - # ## GEDA # ### Análisis de ceros # + ### Libreria necesaria #imports del notebook anterior import re import pandas as pd import numpy as np from pandas_profiling import ProfileReport import csv import requests #disable warnings messages import warnings warnings.simplefilter("ignore") #imports para graficar import matplotlib.pyplot as plt import seaborn as sns #importando librería propia import sys sys.path.append('./../') # OPTIONAL: Load the "autoreload" extension so that code can change # %load_ext autoreload #Own Library modules import src # OPTIONAL: always reload modules so that as you change code in src, it gets loaded # %autoreload 2 from src.algorithms import eda as eda_eq # - agua = eda_eq.prepara_dataset(agua) variables_numericas = np.array(["consumo_total_mixto", "consumo_prom_dom", "consumo_total_dom", "consumo_prom_mixto", "consumo_total", "consumo_prom", "consumo_prom_no_dom", "consumo_total_no_dom"]) print(variables_numericas) # Los siguientes histogramas muestran la **gran densidad en niveles bajos** (ceros y/o cercanos a dicho valor), además de que se logra apreciar que todas las variables poseen colas pesadas, pues hay valores muy altos para todas las variables. 
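# +
# A quick numeric check of the heavy tails noted above, to complement the
# per-variable histograms that follow: compare the 75th percentile of each
# numeric column with its maximum. This assumes the `agua` DataFrame and the
# `variables_numericas` array defined earlier in this notebook.
tail_check = agua[variables_numericas].describe().loc[['75%', 'max']].T
tail_check['max / p75'] = tail_check['max'] / tail_check['75%']
tail_check.sort_values('max / p75', ascending=False)
# -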
for column in variables_numericas: plt.figure() sns.distplot(agua[column]) # En la siguiente sección se realizará un análisis detallado en aquellas observaciones donde no hubo medición en el consumo de agua, es decir, en donde el **consumo fue cero** para el periodo en cuestión. # Filtrar por suma por renglones igual a cero agua["suma"] = agua[variables_numericas].sum(axis=1) agua_no_zero = agua[agua.suma == 0] agua_no_zero = agua_no_zero[agua.columns.difference(np.append(variables_numericas, 'suma'))] n_zero = agua_no_zero.shape[0] n_agua = agua.shape[0] print(n_zero/n_agua) # Los valores que contienen puros ceros en las variables numéricas corresponden aproximadamente al $3.4\%$ de los datos, se preguntará a cliente porqué tenemos observaciones sin consumo de agua para dichos periodos y lugares: # * ¿Esto se debe a un error de medición o captura de los datos? # * ¿Son establecimientos que no son habitados o fueron deshabitados a lo largo del periodo en cuestión? # * Alguna otra opción. variables_categoricas = np.array(['alcaldia', 'bimestre', 'colonia', 'gid', 'indice_des', 'nomgeo']) for column in variables_categoricas: aux = pd.crosstab(index = agua_no_zero[column], columns='count') aux = aux.sort_values('count', ascending = False) print(column) print(aux.head()) # * La alcaldía de Iztapalapa tiene una cantidad de ceros "fuera de lo común", en el sentido de que se encuentra muy por encima de los demás valores. # * El índice de desarrollo popular tiene una cantidad de ceros "fuera de lo común". # * Entre las demás variables, la cantidad de ceros no es fuera de lo común (los rangos de dispersión no son muy altos). cross_col_alc = pd.crosstab(index = agua_no_zero['colonia'], columns = agua_no_zero['alcaldia']) sns.heatmap(cross_col_alc, cmap = sns.diverging_palette(20, 220, n = 200), center = 0).set_title('Distribución de ceros por alcaldías y colonias') # Cabe mencionar que no todas las colonias se encuentran en cada una de las alcaldías, lo cual puede confundir al lector, esta tabla muestra la cantidad de ceros que cada una de las colonias de distintas alcaldías. Nótese que las alcaldías de Iztapalapa y Tlalpan contienen una mayor cantidad de ceros, esto se debe a que son las que contienen más colonias. La siguiente tabla muestra la cantidad de colonias por alcaldía. n_by_alcaldia = agua_no_zero.groupby("alcaldia")["colonia"].count() n_by_alcaldia = n_by_alcaldia.sort_values(ascending = False) n_by_alcaldia cross_col_alc = pd.crosstab(index = agua_no_zero['alcaldia'], columns = agua_no_zero['indice_des']) sns.heatmap(cross_col_alc, cmap = sns.diverging_palette(20, 220, n = 200), center = 0).set_title('Distribución de ceros por índice de darrrollo y colonias') # Se observa que hay un patrón: independientemente de la alcaldía, si hay ceros en el nivel popular, entonces hay ceros en el nivel bajo y viceversa, por lo que son estos $2$ ídices de desarrollo que son más afectados por la cantidad de ceros, lo cual no es para sorprenderse, debido a que son los que presentan mayor densidad. Sin embargo, para el caso de Tlalpan, se incluye el índice medio, donde se concluye que los 3 niveles son afectados, sin importar la densidad. 
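# +
# A short sketch that complements the heatmaps above by quantifying, per alcaldía
# and per índice de desarrollo, the share of records whose consumption columns are
# all zero. It assumes the `agua` and `agua_no_zero` DataFrames built earlier in
# this notebook (despite its name, `agua_no_zero` holds the rows where `suma` == 0).
zero_share_alcaldia = (agua_no_zero.groupby('alcaldia').size()
                       / agua.groupby('alcaldia').size()).sort_values(ascending=False)
zero_share_indice = (agua_no_zero.groupby('indice_des').size()
                     / agua.groupby('indice_des').size()).sort_values(ascending=False)
print(zero_share_alcaldia.round(3))
print(zero_share_indice.round(3))
# -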
n_by_alcaldia = agua_no_zero.groupby("indice_des")["indice_des"].count() n_by_alcaldia = n_by_alcaldia.sort_values(ascending = False) n_by_alcaldia cross_col_alc = pd.crosstab(index = agua_no_zero['alcaldia'], columns = agua_no_zero['bimestre']) sns.heatmap(cross_col_alc, cmap = sns.diverging_palette(20, 220, n = 200), center = 0).set_title('Distribución de ceros por índice de darrrollo y colonias') # Las colonias presentan la misma cantidad de ceros a lo largo del semestre, de lo que concluímos que la presencia de ceros en las observaciones no se debe a un tema temporal, sino que esto se presenta de manera recurrente cada periodo. agua_no_zero = agua[agua.suma == 0] sns.scatterplot(x = agua_no_zero['longitud'], y = agua_no_zero['latitud'], hue = agua_no_zero['indice_des']).set_title('Longitud y latitud (ceros)') # Observamos que se pueden observar ciertas zonas en donde hay una mayor agrupación de datos, esto se debe a la densidad en las zonas, no a que haya un fallo en la medición. # Conclusión: # * Las alcaldías con mayor cantidad de ceros son Iztapalapa, seguida de Tlalpan, este punto no es para alarmarse, pues se debe a que son las que presentan una mayor densidad. # * A lo largo del tiempo, las distintas alcaldías han tenido el mismo comportamiento en relación a los ceros. # * Los principales índices de desarrollo afectados por este "suceso" son popular, seguido del bajo, lo cual se debe a la densidad en dichos índices. # * Por último, concluimos que la presencia de ceros no se debe a que en una zona haya fallos en la medición o errores de captura, pues es un fenómeno que se presenta de manera similar a lo largo de los tiempos en la misma cantidad para los distintos lugares. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import os from sklearn.ensemble import RandomForestRegressor from sklearn.model_selection import train_test_split, KFold, cross_val_score from sklearn.metrics import mean_absolute_error from sklearn.impute import KNNImputer from sklearn.preprocessing import MinMaxScaler, StandardScaler from sklearn.pipeline import Pipeline from sklearn.decomposition import PCA from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score from sklearn.model_selection import GridSearchCV import statsmodels.api as sm from statsmodels.tsa.arima_model import ARIMA, ARIMAResults from statsmodels.tsa.arima_process import ArmaProcess from statsmodels.stats.diagnostic import acorr_ljungbox from statsmodels.tsa.statespace.sarimax import SARIMAX plt.style.use('ggplot') pd.set_option('max_columns', None) pd.set_option('max_rows', None) # - # ## Common Code # Files supplied by the competition for model training X_train = pd.read_csv('../../data/dengue_features_train.csv') y_train = pd.read_csv('../../data/dengue_labels_train.csv', usecols=['total_cases']) X_train.head() # + active="" # # - # Files supplied by the competition for submission X_test = pd.read_csv('../../data/dengue_features_test.csv') y_test = pd.read_csv('../../data/submission_format.csv') def create_submission_file(pipeline, filename_comment): next_file_id = generate_next_submission_fileid() X_test_processed = data_preprocess(X_test) y_submit_pred = np.rint(pipeline.predict(X_test_processed)) y_test['total_cases'] = y_submit_pred 
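    # The rounded predictions above are still floats; the cast that follows turns
    # total_cases into an integer column, presumably because the submission format
    # expects whole case counts.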
y_test['total_cases'] = y_test['total_cases'].astype(int) filename = f'../../data/dengue_submission_{next_file_id}_{filename_comment}.csv' y_test.to_csv(filename, index = False) return y_submit_pred, filename def generate_next_submission_fileid(): files_found = [] for file in os.listdir("../../data"): if file.startswith("dengue_submission"): files_found.append(file[18:20]) return f'{int(sorted(files_found).pop()) + 1 :02}' # ## Notebook-specific code # Autocorrelate precipitation to total cases and add as a new colums to original dataset X_train['week_start_date'] = pd.to_datetime(X_train['week_start_date']) X_test['week_start_date'] = pd.to_datetime(X_test['week_start_date']) df_train = pd.merge(X_train, y_train, left_index= True, right_index=True) df_train.set_index('week_start_date', inplace = True) precip_subset = df_train[df_train['city']=='sj'][['precipitation_amt_mm','total_cases']] precip_subset.head() n_offset = 12 for i in range(-1,-n_offset,-1): precip_subset[f'prc_{i}'] = precip_subset['precipitation_amt_mm'].shift(i) precip_subset.head(10) correlation = precip_subset.corr() fig, ax = plt.subplots(figsize = (12,12)) sns.heatmap(correlation, annot=True, ax=ax, fmt=".2f") # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python3.5 # language: python # name: python3.5 # --- # + import numpy as np import pandas as pd from numpy import array from math import sqrt from pyspark.mllib.clustering import KMeans, KMeansModel from pyspark.ml.feature import VectorAssembler from pyspark.ml.clustering import KMeans import matplotlib.pyplot as plt from pyspark.sql.functions import col # - spark.version sqlContext = SQLContext(sc) batting=sqlContext.read.csv("/home/anup/bdProject/bd/stats_batting_2.csv", header=True, mode="DROPMALFORMED") bowling=sqlContext.read.csv("/home/anup/bdProject/bd/stats_bowling_1 .csv", header=True, mode="DROPMALFORMED") bowling.show() batting.show() # + batting.na.drop() bowling=bowling.na.drop() FEATURES_COL=["runs_scored","balls_faced","times_out","batting_average","strike_rate"] FEATURES_COL_BOWL=["runs_conceded","wickets_taken","overs_bowled","bowling_average","economy_rate","bowling_strike_rate"] # df_feat = batting.select(*(batting[c].cast("float").alias(c) for c in batting.columns[1:])) # df_feat_bowl = bowling.select(*(bowling[c].cast("float").alias(c) for c in bowling.columns[1:])) # df_feat_bowl.show() # df_feat.show() # + for col in batting.columns: if col in FEATURES_COL: batting = batting.withColumn(col,batting[col].cast('float')) batting.show() for col in bowling.columns: if col in FEATURES_COL_BOWL: bowling = bowling.withColumn(col,bowling[col].cast('float')) bowling.show() # + vecAssembler = VectorAssembler(inputCols=FEATURES_COL, outputCol="features") df_kmeans = vecAssembler.transform(batting).select('player_name', 'features') df_kmeans.show() vecAssembler_bowl = VectorAssembler(inputCols=FEATURES_COL_BOWL, outputCol="features_bowl") df_kmeans_bowl = vecAssembler_bowl.transform(bowling).select('player_name', 'features_bowl') df_kmeans_bowl.show() # + cost = np.zeros(20) for k in range(2,20): kmeans = KMeans().setK(k).setSeed(1).setFeaturesCol("features") model = kmeans.fit(df_kmeans.sample(False,0.1, seed=42)) cost[k] = model.computeCost(df_kmeans) # requires Spark 2.0 or later cost_bowl = np.zeros(20) for k in range(2,20): kmeans = KMeans().setK(k).setSeed(1).setFeaturesCol("features_bowl") model = 
kmeans.fit(df_kmeans_bowl.sample(False,0.1, seed=42)) cost_bowl[k] = model.computeCost(df_kmeans_bowl) # requires Spark 2.0 or later # + fig, ax = plt.subplots(1,1, figsize =(8,6)) ax.plot(range(2,20),cost[2:20]) ax.set_xlabel('k') ax.set_ylabel('cost') fig, ax = plt.subplots(1,1, figsize =(8,6)) ax.plot(range(2,20),cost_bowl[2:20]) ax.set_xlabel('k') ax.set_ylabel('cost_bowl') # + k = 10 kmeans = KMeans().setK(k).setSeed(1).setFeaturesCol("features") model = kmeans.fit(df_kmeans) centers = model.clusterCenters() print("Cluster Centers: ") for center in centers: print(center) k = 7 kmeans_bowl = KMeans().setK(k).setSeed(1).setFeaturesCol("features_bowl") model_bowl = kmeans_bowl.fit(df_kmeans_bowl) centers_bowl = model_bowl.clusterCenters() print("Cluster Centers: ") for center in centers_bowl: print(center) # + transformed = model.transform(df_kmeans).select('player_name', 'prediction') rows = transformed.collect() print(rows[:3]) transformed = model_bowl.transform(df_kmeans_bowl).select('player_name', 'prediction') rows_bowl = transformed.collect() print(rows_bowl[:3]) # + df_pred = sqlContext.createDataFrame(rows) df_pred.show() df_pred_bowl = sqlContext.createDataFrame(rows_bowl) df_pred_bowl.show() # + bat_cluster=df_pred.toPandas() bat_cluster.to_csv("/home/anup/bdProject/bd/batting_cluster.csv") bowl_cluster=df_pred_bowl.toPandas() bowl_cluster.to_csv("/home/anup/bdProject/bd/bowling_cluster.csv") # - # df_pred = df_pred.join(batting, 'player_name') # df_pred_bowl = df_pred_bowl.join(bowling, 'player_name') df_pred.show() df_pred_bowl.show() # + # pddf_pred = df_pred.toPandas().set_index('player_name') # pddf_pred.to_csv("/home/gajendra/Documents/Big_Data/Project/stats_batting_prediction.csv") # - batting_prob=sqlContext.read.csv("/home/anup/bdProject/bd/r_prob.csv", header=True, mode="DROPMALFORMED") bowling_prob=sqlContext.read.csv("/home/anup/bdProject/bd/wicket.csv",header=True,mode="DROPMALFORMED") # batting_prob.show() # bowling_prob.show() # + batting_prob_clus=batting_prob.join(df_pred_bowl,df_pred_bowl.player_name==batting_prob.bowler,'leftouter').select(batting_prob["*"],df_pred_bowl["prediction"]) batting_prob_clus=batting_prob_clus.selectExpr("_c0 as index","batsman as batsman","bowler as bowler","P0 as P0","P1 as P1","P2 as P2","P3 as P3","P4 as P4","P6 as P6","prediction as bowler_cluster") batting_prob_clus=batting_prob_clus.join(df_pred,df_pred.player_name==batting_prob_clus.batsman,'leftouter').select(batting_prob_clus["*"],df_pred["prediction"]) batting_prob_clus=batting_prob_clus.selectExpr("index as index","batsman as batsman","bowler as bowler","P0 as P0","P1 as P1","P2 as P2","P3 as P3","P4 as P4","P6 as P6","bowler_cluster as bowler_cluster","prediction as batsman_cluster") bowling_prob_clus=bowling_prob.join(df_pred_bowl,df_pred_bowl.player_name==bowling_prob.bowler,'leftouter').select(bowling_prob["*"],df_pred_bowl["prediction"]) bowling_prob_clus=bowling_prob_clus.selectExpr("_c0 as index","batsman as batsman","bowler as bowler","no_of_times as no_of_times","no_of_balls as no_of_balls","p_out as p_out","p_not_out as p_not_out","prediction as bowler_cluster") bowling_prob_clus=bowling_prob_clus.join(df_pred,df_pred.player_name==bowling_prob_clus.batsman,'leftouter').select(bowling_prob_clus["*"],df_pred["prediction"]) bowling_prob_clus=bowling_prob_clus.selectExpr("index as index","batsman as batsman","bowler as bowler","no_of_times as no_of_times","no_of_balls as no_of_balls","p_out as p_out","p_not_out as p_not_out","bowler_cluster as 
bowler_cluster","prediction as batsman_cluster") batting_prob_clus.show() # bowling_prob_clus.show() # pddf_pred = bowling_prob_clus.toPandas() # pddf_pred1 = bowling_prob.toPandas() # print(pddf_pred.shape) # print(pddf_pred1.shape) # print(pddf_pred) # - pd_bowling_prob=bowling_prob_clus.toPandas() pd_batting_prob=batting_prob_clus.toPandas() pd_batting_prob.to_csv("/home/anup/bdProject/bd/pd_batting_prob.csv") pd_bowling_prob.to_csv("/home/anup/bdProject/bd/pd_bowling_prob.csv") df=pd.read_csv('/home/anup/bdProject/bd/pd_batting_prob.csv') df2=pd.read_csv('/home/anup/bdProject/bd/pd_bowling_prob.csv') #res=df.groupby(['player_out','bowler'])['wicket_kind'].count().reset_index(name='number_of_times') res = df.groupby(["bowler_cluster","batsman_cluster"])['P0','P1','P2','P3','P4','P6'].mean().round(2).reset_index() res2 = df2.groupby(["bowler_cluster","batsman_cluster"])['p_not_out'].mean().round(2).reset_index() ref = df.groupby(["batsman","bowler"])['P0','P1','P2','P3','P4','P6'].mean().round(2).reset_index() ref2 = df2.groupby(["batsman","bowler"])['p_not_out'].mean().round(2).reset_index() final_ref=ref final_ref=final_ref.merge(ref2,how='outer',left_on=["batsman","bowler"],right_on=["batsman","bowler"]) final=res final['p_not_out']=res2['p_not_out'] final_ref.to_csv('/home/anup/bdProject/bd/pp_prob.csv',index=None) final.to_csv('/home/anup/bdProject/bd/clusters_prob.csv',index=None) # + #-------------------------- # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- trips=pd.read_csv('trips.csv') import redis import pandas as pd # + trips=pd.read_csv('trips.csv') date_sorted_trips = trips.sort_values(by='end_date') date_sorted_trips.head() # - for trip in date_sorted_trips.itertuples(): print(trip.end_date, '', trip.bike_number, '', trip.end_station_name) current_bike_locations = redis.Redis(host='redis', port='6379') current_bike_locations.keys() for trip in date_sorted_trips.itertuples(): current_bike_locations.set(trip.bike_number, trip.end_station_name) current_bike_locations.keys() current_bike_locations.get('92') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:mypy36] # language: python # name: conda-env-mypy36-py # --- # # Automatic Lab Evaluator # # ## Assessment based on student-provided results # # * # * # Version History: # # Version 0.1 (Dec. 2016) # - Firts python 2 version and python 3 adaptation # Version 0.2 (Dec. 2017) # - All configurable parameters in the first and second code cell. # - Managing multiple mat files in students' zip files. # - Corrected bug in readdatafiles (new student variables were not properly added to the dataframe) # - Managing multiple class lists in Spanish and English. # - External evaluation functions # - New format of students report. # - Uses ExamProject class. # - Integrated student groups # + import numpy as np import pandas as pd import os from os.path import isfile, join import scipy.io as sio import scipy import zipfile as zp import shutil import difflib import csv import glob # Evaluation libraries from lib.dbEvaluatorB12 import * from lib.dbSolverB12 import * # - # ## 1. Configurable parameters: # + # ######### # Libraries # ######### # Use the inherited examProject class corresponding to the exam to be evaluated. 
import lib.examProjectB3 as ex

# #################
# Files and folders
# #################

# Project path
# project_path = '../LabEvaluationProjects/ProjectB3_1718_GSCGT/'
# project_path = '../LabEvaluationProjects/ProjectB3_1718/'
project_path = 'prb12'

# Exam_name. An exam evaluation project may contain several exams. Write here which one of them you want to evaluate.
# exam_label = 'ExLabB12_0'
exam_label = 'ExLabB12_1'

# Expected name of the students' results file.
# This is used to disambiguate situations where the student uploaded multiple mat files
# (e.g. the input data file provided with the exam statement, or .mat files in .DS_STORE folders)
results_fname = 'results.mat'

# Output file name with the final notes
finalnotes_fname = 'student_notes.xlsx'

# ####
# Exam
# ####

# Penalties:
p_nocode = 0.75
p_noresults = 0.75
p_delay = 0.25   # score reduction per minute.

exam = ex.ExamProjectB3(project_path)
exam.load()

# Paths to input and output files
class_list_path = exam.f_struct['class_list']
all_students_path = exam.f_struct['all_students']
data4st_path = exam.f_struct['data4students']
results_path = exam.f_struct['student_results'] + exam_label + '/'
output_path = exam.f_struct['eval_results'] + exam_label + '/'
csv_questions_list = exam.f_struct['exam_statement'] + exam_label + '/' + exam_label + '.csv'

# +
# List of exam questions from the database
print(csv_questions_list)
with open(csv_questions_list, 'r') as f:
    reader = csv.reader(f)
    questions = list(reader)[0]

# If the file is not available, you can write the list of questions by hand
# questions = ['F0_estimate_06', 'F1_model_01', 'F2_predict_03', 'F4_lms_02']
print("Questions in the exam: {0}".format(questions))
# -

# ## 2. Read datafiles for all students
#
# Student datafiles can be in any of the following formats:
#
# * `'.zip'`: When uncompressed, the zip may contain one or several matlab files. All matlab files are read and incorporated into a pandas DataFrame where each student is a column, and each index is a variable available for the exam solution
# * `'.mat'`: All data variables for the students are given in a single matlab file

# +
def getFileName(fpath):
    return fpath.split('/')[-1]

def readData4st(datafiles_path):
    ''' This function is used for reading the matlab data files provided to students '''
    # Read matlab files in the input directory tree
    datafiles = glob.glob(datafiles_path + '**/*.mat', recursive=True)
    df = pd.DataFrame()

    # Read files
    print('Processing {0} files in {1} ...'.format(len(datafiles), datafiles_path))
    for dtfile in sorted(datafiles):
        # The tag can be the NIA, the student's name or just the beginning of some other file
        tag = getFileName(dtfile).split('.')[0]

        # Load matlab data file
        data = sio.loadmat(dtfile, squeeze_me=True)

        # Read all variable names and the corresponding data values
        idx = []
        val = []
        for var in [el for el in data.keys() if not el.startswith('_')]:
            idx.append(var)
            val.append(data[var])

        # Add to dataframe
        df2 = pd.DataFrame()
        df2[tag] = pd.Series(val, index=idx)
        df = pd.concat([df, df2], axis=1)

    df.sort_index(axis=1, inplace=True)
    return df

# +
# Read students' data.
print(data4st_path)
student_data = readData4st(data4st_path)
print('')
print('Number of students in dataframe:', str(student_data.shape[1]))
print('Number of variables read:', str(student_data.shape[0]))
print('Displaying data for first students ... ')
student_data[student_data.columns[:7]]
student_data['100318675']
# -

# ## 2.
Read answers provided by students def readdatafiles(datafiles_path, splitsymbol): ''' This function is used for reading both the data files provided to students and the response files provided by students ''' # Read file paths datafiles = glob.glob(datafiles_path + '**/*.*', recursive=True) # datafiles = [f for f in os.listdir(datafiles_path) if isfile(join(datafiles_path, f))] temporary_dir = './tmp' df = pd.DataFrame() # Read files print('Processing {0} files in {1} ...'.format(len(datafiles), datafiles_path)) for dtfile in sorted(datafiles): idx = [] val = [] makedf = True # This is a default flag. If it remains True, a new column will be added to the df # The tag can be the NIA, the student's name or just the begining of some other file tag = getFileName(dtfile).split(splitsymbol)[0] # tag = dtfile.split(splitsymbol)[0] if dtfile.endswith('.zip'): # Read names of .mat files zpobj = zp.ZipFile(dtfile) # zpobj = zp.ZipFile(join(datafiles_path, dtfile)) mat_fnames = [f for f in zpobj.namelist() if f.endswith('mat')] # mat file selection. This is to disambiguate cases with multiple files n = len(mat_fnames) if n == 0: print (' WARNING: {} has not delivered any mat file'.format(tag)) fname = None else: if n > 1: print(' WARNING: {} has provided multiple mat files:'.format(tag)) print(' {0}'.format(mat_fnames)) # Define a nested set of criteria to select a single mat file form multiple options: criteria = [mat_fnames] criteria.append([f for f in criteria[0] if '.ipynb_checkpoints' not in f]) criteria.append([f for f in criteria[1] if f[0].isalnum()]) criteria.append([f for f in criteria[2] if getFileName(f)[0].isalnum()]) criteria.append([f for f in criteria[3] if getFileName(f)[0].isalpha()]) criteria.append([f for f in criteria[4] if f.endswith(results_fname)]) # Selecte the file according to the most restrictive criterium with non empty members. for c in reversed(criteria): if len(c) > 0: # We take the first file in the list (an arbitrary choice) fname = c[0] break if n > 1: print(' Selected file: {}'.format(fname)) # Read the selected mat file, if any if fname is not None: # Matlab files are extracted to a temporal subfolder zpobj.extract(fname, temporary_dir) data = sio.loadmat(join(temporary_dir, fname), squeeze_me=True) # Read all variable names and the corresponding data values for var in [el for el in data.keys() if not el.startswith('_')]: idx.append(var) val.append(data[var]) # Remove temporary directory, if it has been created if os.path.exists(temporary_dir): shutil.rmtree(temporary_dir) elif dtfile.endswith('.mat'): # This block of code was removed from the original notebook. # I have rescued it from another notebook # data = sio.loadmat(join(datafiles_path, dtfile), squeeze_me=True) data = sio.loadmat(dtfile, squeeze_me=True) # Read all variable names and the corresponding data values for var in [el for el in data.keys() if not el.startswith('_')]: idx.append(var) val.append(data[var]) elif dtfile.endswith('m') or dtfile.endswith('py') or dtfile.endswith('.ipynb'): print(' WARNING: {} has provided a code file only:'.format(tag)) print(' {0}'.format(dtfile)) else: makedf = False if os.path.isfile(dtfile): print(' File ignored: {0}'.format(dtfile)) if makedf: df2 = pd.DataFrame() df2[tag] = pd.Series(val, index = idx) df = pd.concat([df, df2], axis=1) df.sort_index(axis=1, inplace=True) return df # ### 2.1. Requested variable names. # # In order to get the names of the requested variables, we solve the exam with an arbitrary set of variables. 
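# (Any single student's data is enough for this: `solveExam` is called below only to obtain the keys of the solution dictionary, which are the expected names of the result variables.)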
# + data = student_data[student_data.columns[0]].to_dict() print(questions) solution, scoring_ex = solveExam(questions, data) truenames = list(solution.keys()) print(truenames) # - # ### 2.2. Read student results into panda dataframe # + # Read student results student_results = readdatafiles(results_path, splitsymbol='_') # Build a set of indices containing the expected variable names and all other variables provided by students newindex = truenames + [el for el in student_results.index.tolist() if el not in truenames] student_results = student_results.reindex(newindex) print('') print('Number of students in dataframe:', str(student_results.shape[1])) print('Number of variables read:', str(student_results.shape[0])) print('Displaying data for first students ... ') student_results[student_results.columns[0:7]] # - # ### 2.3. Common Mistakes on variable names # # In view of all variable names provided by all students, we may decide to allow alternative names for variables without any penalty # + # print(student_results) # + print('Number of students in dataframe:', str(student_results.shape[1])) print('\nDisplaying number of missing data per variable name.') print('Those with a large number are potential common mistakes for a variable name') student_results.isnull().sum(axis=1) # + ########################################### # EXAM DEPENDENT VARIABLE #Dictionary with accepted mistakes in the following format # Expected variable name : Accepted mistake if exam_label == 'ExLabB12_0': Mistakes = {'xnVal': 'xnTest', 'wp': 'w', 'EAPval':'EAP'} elif exam_label == 'ExLabB12_1': Mistakes = {'sEst': 'sTest', 'xnTest': 'xnTest.mat', 'xnVal': 'xnTest', 'we': 'w2', 'w5': 'we', 'w': 'w1', 'PFAx1': 'PFAX1', 'uNP': 'etaNPX1', 'uNP': 'etanNPx1', 'etaNPx1': 'uNP'} elif exam_label == 'ExLabB12_2': Mistakes = {'xnTest': 'xnTest.mat', 'xmVal': 'xnTest', 'xnTrain': 'xnTrain.mat', 'xmTrain': 'xnTest', 'we': 'we3', 'w3': 'we4', 'm0': 'mo'} ########################################## # Fill and empty variable by the value of its accepted mistake. for el in Mistakes: print(el) # The following 'if is necessary because some of the mistakes in the dictionary may not happen. if Mistakes[el] in student_results.index.tolist(): # print(student_results.loc[Mistakes[el]]) student_results.loc[el] = student_results.loc[el].fillna(student_results.loc[Mistakes[el]]) # Remove rows with the wrong variables. for el in student_results.index.tolist(): if el not in truenames: student_results.drop(el, inplace=True) student_results.head(40) # - # ### 2.4. Name to NIA dictionary # # Finally, since datafiles are created by NIA and results are available per student name, we need to create a dictionary connecting them. # # Student names are taken from one or several student lists. Using multiple list is useful when the same exam is stated to multiple groups, or in the frequent situation where students from one group carry out the exam of another group. # + # Select xls file names in the class list folder print("Reading class lists...") xls_files = [f for f in os.listdir(class_list_path) if f.endswith('.xls') or f.endswith('.xlsx')] if len(xls_files) > 1: print(" There are {} excel files in the class_list folder.".format(len(xls_files))) print(" All students will be merged in a single list.") # Load all xls files into dataframes groups = [] for g in xls_files: df = pd.read_excel(class_list_path + g) # Translate column names form Spanish to English. # This is required to concatenate student lists in different languages. 
df.rename(columns={'Dirección de correo': 'Email address', 'Apellido(s)': 'Surname', 'Nombre': 'First name'}, inplace=True) groups.append(df) # Concatenate class lists (we do not expect duplicated NIU's in different lists) student_NIA_names = pd.concat(groups) print("Done. {0} students in the lists".format(len(student_NIA_names))) student_NIA_names.sort_values('Surname') #.head() # + # UTF-8 encoding of everything # AFAIK, this is no longer needed in Python 3, but I left it just in case... for fld in student_NIA_names.keys(): if fld != 'NIU': student_NIA_names[fld] = student_NIA_names[fld].str.encode('utf8') # Build dictionary NIA: name NIA_name = {} for el in student_results.columns.tolist(): # Find the student name in student_NIA_names that is most similar to el sim_list = [] for idx, NIA in enumerate(student_NIA_names['NIU'].values): std_name = str(student_NIA_names['First name'].values.tolist()[idx]) + ' ' + \ str(student_NIA_names['Surname'].values.tolist()[idx]) sim_list.append(difflib.SequenceMatcher(a=el.lower(), b=std_name.lower()).ratio()) max_sim = max(sim_list) max_idx = sim_list.index(max_sim) NIA_name[student_NIA_names['NIU'].values.tolist()[max_idx]] = el # Build reverse dictionary name: NIA name_NIA = {NIA_name[el]: el for el in NIA_name} # - # ### 2.5. Group of each student # We will include the information about the group in the final dataframe of results so as to make the separation of evaluation reports easier. NIA_group = pd.read_csv(all_students_path)[['NIA', 'group']] NIA_group.sort_values(['NIA']).head() # At this point we have: # # * student_data: dataframe with data given to the students. Each index is a variable, and each column a NIA # * student_results: dataframe with student results. Each index is a variable, and each column a name # * NIA_name: NIA to name dictionary # * name_NIA: name to NIA dictionary # * NIA_group: dataframe # ## 3. Exam evaluation # # To carry out the evaluation of the exam, we use the external evaluation libraries. # # Function evaluateExam computes the correct solutions for the given data and compares them with the responses provided by the students. # + df = pd.DataFrame() print('Evaluating all students... ') for NIA in NIA_name: name = NIA_name[NIA] print('Evaluating {0} {1} ...'.format(NIA, name)) # Evaluate the exam from the data provided to the student and the student response dataex = student_data[str(NIA)].to_dict() response = student_results[name].to_dict() exam_report = evaluateExam(questions, dataex, response) # Convert exam_report, which is a nested dictionary, into a pandas dataframe # Note that all this conversion to and from dictionaries can be avoided if evaluateExam # worked with dataframes. This is a pending task. ex = {} # Note that we take the last 2 characters of the group name only. ex[('', 'Group')] = NIA_group[NIA_group['NIA'] == NIA]['group'].tolist()[0][-2:] for v in exam_report: for w in exam_report[v]: ex[(v,w)] = exam_report[v][w] df[NIA_name[NIA]] = pd.Series(ex) # Take the transpose to place students in rows, and restate the original variable ordering # This is because pd.Series does not preserve the order. cols = list(ex.keys()) df = df.T[cols] # Pretty print results df[df.columns[:]].head(100) # - # ### 3.1. Penalties # # In addition to the evaluation of the results file provided by the student, the final mark depends on other factors: # # 1. If the student uploaded the code files # 2. Delays in delivering the files during the exam. # 3. 
Errors in the delivering process (use of e-mail, incorrect file types, etc). # # The following function is used to identify the code uploaded by the student. def detectCode(datafiles_path, splitsymbol): ''' This function is used to check if the student has uploaded a python or a matlab code file ''' # Read file paths # datafiles = [f for f in os.listdir(datafiles_path) if isfile(join(datafiles_path, f))] datafiles = glob.glob(datafiles_path + '**/*.*', recursive=True) # Read files df = pd.DataFrame() print('Processing {0} files in {1} ...'.format(len(datafiles), datafiles_path)) for dtfile in datafiles: # This is a flag. If it remains True, a new column will be added to the df makedf = True # The tag can be the NIA, the student's name or just the begining of some other file # tag = dtfile.split(splitsymbol)[0] tag = getFileName(dtfile).split(splitsymbol)[0] if tag in name_NIA: if dtfile.endswith('.zip'): # Read names of .mat files # files_in_zip = zp.ZipFile(join(datafiles_path, dtfile)).namelist() files_in_zip = zp.ZipFile(dtfile).namelist() # mat file selection. This is to disambiguate cases with multiple files n_mat = len([f for f in files_in_zip if f.endswith('.m')]) n_py = len([f for f in files_in_zip if f.endswith('.py') or f.endswith('.ipynb')]) if n_py * n_mat > 0: print('WARNING: {} has delivered both matlab and python code'.format(name)) if n_py > 0: code = 'Py' elif n_mat > 0: code = 'Mat' else: code = 'None' elif dtfile.endswith('.py') or dtfile.endswith('.ipynb'): code = 'Py' elif dtfile.endswith('.m'): code = 'Mat' else: code = 'None' df2 = pd.DataFrame() df2[tag] = pd.Series(code, index = ['Code']) df = pd.concat([df, df2], axis=1) elif os.path.isfile(dtfile): print(' File ignored: {0}'.format(dtfile)) return df.T # + # Identify the code delivered by the students code_data = detectCode(results_path, splitsymbol='_') code_data[code_data.columns][:].head() # Add the code data to the evaluation dataframe df['Delivery', 'Code'] = code_data df['Delivery', 'Delay'] = 0.0 df['Delivery', 'Factor'] = 1.0 # Penalties for students that did not delivered any code. df.loc[df['Delivery', 'Code'] == 'None', ('Delivery', 'Factor')] = 0.5 # + # This cell contains project specific instructions. # PENALTIES: if project_path == '../LabEvaluationProjects/ProjectB3_1718/': # STUDENTS THAT DID NOT DELIVER ANY RESULTS. # : (no e-mail) Delivers code only. # Results generated with penalty df.at['', ('Delivery', 'Factor')] = p_noresults # : (no e-mail) Does not deliver results file. However, code computes some variables. # Results generated with penalty df.at['', ('Delivery', 'Factor')] = p_noresults # : (e-mail) His computer get blocked and could not generate results file # savemat command incorrect. Code generated without penalty. df.at['', ('Delivery', 'Factor')] = 1.0 # : (no e-mail) entrega un fichero Lab12.7z, pero cambia el nombre por Lab12zip # Results generated with penalty. df.at['', ('Delivery', 'Factor')] = p_noresults # : (e-mail) Does not deliver results file. Code does not compute any of the variables # : (no e-mail) Delivers multiple code versions. # (no e-mail) No results file. The code is completely wrong. # : compressed files with .7z. Changed without penalty. elif project_path == '../LabEvaluationProjects/ProjectB3_1718_Gbil/': # NO INCIDENTS IN THIS GROUP pass if project_path == 'prb12': # : # (1) python does not recognize the delivered file as zip. However, I could decompress with # the unarchiver. 
zip file re-generated without penalty # (2) the mat file is actualy a .ipynb with the extension changed. # : the .zip file cannot be read in any way. I have changed the extension to # .unk. # : delivers a .7z file. File .zip generated without penalty #  delivers a .7z file. File .zip generated without penalty pass # if exam_label == 'ExLabB12_0': # df.drop('mTrain', axis=1, inplace=True) # - # Now we are ready to compute the final score df['Final', 'Score'] = (df['Exam', 'Score'] - p_delay * df['Delivery', 'Delay']) * df['Delivery', 'Factor'] df[df.columns] # .head() # ## 4. Save results # Save to excel file. if not os.path.exists(output_path): os.makedirs(output_path) df.to_excel(output_path + finalnotes_fname, columns=df.columns) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### First Try of Predicting Salary # # For the last two questions regarding what are related to relationships of variables with salary and job satisfaction - Each of these questions will involve not only building some sort of predictive model, but also finding and interpretting the influential components of whatever model we build. # # To get started let's read in the necessary libraries and take a look at some of our columns of interest. # + import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score, mean_squared_error import WhatHappened as t import seaborn as sns # %matplotlib inline df = pd.read_csv('./survey_results_public.csv') df.head() # - # Now take a look at the summary statistics associated with the quantitative variables in your dataset. df.describe() # #### Question 1 # # **1.** Use the above to match each variable (**a**, **b**, **c**, **d**, **e**, or **f**) as the appropriate key that describes the value in the **desc_sol** dictionary. # + a = 40 b = 'HoursPerWeek' c = 'Salary' d = 'Respondent' e = 10 f = 'ExpectedSalary' desc_sol = {'A column just listing an index for each row': #letter here, 'The maximum Satisfaction on the scales for the survey': #letter here, 'The column with the most missing values': #letter here, 'The variable with the highest spread of values': #letter here} # Check your solution t.describe_check(desc_sol) # - # A picture can often tell us more than numbers. df.hist(); # Often a useful plot is a correlation matrix - this can tell you which variables are related to one another. sns.heatmap(df.corr(), annot=True, fmt=".2f"); # #### Question 2 # # **2.** Use the scatterplot matrix above to match each variable (**a**, **b**, **c**, **d**, **e**, **f**, or **g**) as the appropriate key that describes the value in the **scatter_sol** dictionary. # + a = 0.65 b = -0.01 c = 'ExpectedSalary' d = 'No' e = 'Yes' f = 'CareerSatisfaction' g = -0.15 scatter_sol = {'The column with the strongest correlation with Salary': #letter here, 'The data suggests more hours worked relates to higher salary': #letter here, 'Data in the ______ column meant missing data in three other columns': #letter here, 'The strongest negative relationship had what correlation?': #letter here} t.scatter_check(scatter_sol) # - # Here we move our quantitative variables to an X matrix, which we will use to predict our response. We also create our response. 
We then split our data into training and testing data. Then when starting our four step process, our fit step breaks. # # ### Remember from the Video, this code will break! # + # Consider only numerica variables X = df[['CareerSatisfaction', 'HoursPerWeek', 'JobSatisfaction', 'StackOverflowSatisfaction']] y = df['Salary'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .30, random_state=42) #Four steps: #Instantiate lm_model = LinearRegression(normalize=True) #Fit - why does this break? lm_model.fit(X_train, y_train) #Predict #Score # - # #### Question 3 # # **3.** Use the results above to match each variable (**a**, **b**, **c**, **d**, **e**, or **f** ) as the appropriate key that describes the value in the **lm_fit_sol** dictionary. # + a = 'it is a way to assure your model extends well to new data' b = 'it assures the same train and test split will occur for different users' c = 'there is no correct match of this question' d = 'sklearn fit methods cannot accept NAN values' e = 'it is just a convention people do that will likely go away soon' f = 'python just breaks for no reason sometimes' lm_fit_sol = {'What is the reason that the fit method broke?': #letter here, 'What does the random_state parameter do for the train_test_split function?': #letter here, 'What is the purpose of creating a train test split?': #letter here} t.lm_fit_check(lm_fit_sol) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from Bio import Entrez #Entrez.esearch(db,id,rettype,retmode) #Entrez.efetch(db,id,rettype,retmode) Entrez.email = '' handle = Entrez.esearch(db='gene', term='fumarase homo sapiens', sort='relevance', idtype='acc') #print(handle.read()) fastafile = '' for i in Entrez.read(handle)['IdList']: h = Entrez.efetch(db='gene',id=i, rettype='fasta', retmode='text') seq = h.read() print(seq) fastafile = seq # if i == 0: # fastafile = seq # print('yea') # break print(fastafile) # + from Bio import SeqIO for lines in fastafile: print(lines) # + #Enzymes #GLycolysis:(1) triosephosphate isomerase, (2) phosphoglycerate kinase, #(3)Phosphoenolpyruvate carboxylase, (4) Pyruvate kinase (also in pentose phosphate) #Pentose phosphate:(1) Glucose-6-phosphate dehydrogenase (also in glycolysis), #(2) Gluconolactonase, (3) 6-phosphogluconate dehydrogenase #(4) transaldolase #TCA cyle:(3)Malate dehydrogenase (also in glycolysis), (2) Fumarase , # (4)Citrate synthase (1)Succinate dehydrogenase # + Enzyme Function GLycolysis:(1) triosephosphate isomerase catalyzes the reversible interconversion of the triose phosphate isomers dihydroxyacetone phosphate and D-glyceraldehyde 3-phosphate EC 5.3.1.1 (2) phosphoglycerate kinase catalyzes the reversible transfer of a phosphate group from 1,3-bisphosphoglycerate to ADP producing 3-phosphoglycerate and ATP EC 2.7.2.3 (3)Phosphoenolpyruvate carboxylase, an enzyme in the family of carboxy-lyases found in plants and some bacteria that catalyzes the addition of bicarbonate (HCO3−) to phosphoenolpyruvate (PEP) to form the four-carbon compound oxaloacetate and inorganic phosphate EC 4.1.1.31 (4) Pyruvate kinase (also in pentose phosphate) It catalyzes the transfer of a phosphate group from phosphoenolpyruvate to adenosine diphosphate, yielding one molecule of pyruvate and one molecule of ATP. 
EC 2.7.1.40 ------------------# Pentose phosphate: (1) Glucose-6-phosphate dehydrogenase (also in glycolysis), D-glucose 6-phosphate + NADP⁺ ⇌ 6-phospho-D-glucono-1,5-lactone + NADPH + H⁺ EC 1.1.1.49 (2) Gluconolactonase, This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. EC 3.1.1.17 (3) 6-phosphogluconate dehydrogenase It forms ribulose 5-phosphate from 6-phosphogluconate. It is an oxidative carboxylase that catalyses the decarboxylating reduction of 6-phosphogluconate into ribulose 5-phosphate in the presence of NADP EC 1.1.1.44 (4) transaldolase transaldolase:catalyzes the following reaction sedoheptulose 7-phosphate + glyceraldehyde 3-phosphate erythrose 4-phosphate + fructose 6-phosphate. EC 2.2.1.2 ------------------# #TCA Cycle: (1) Succinate dehydrogenase This enzyme catalyzes the oxidation of succinate to fumarate with the reduction of ubiquinone to ubiquinol EC 1.3.5.1 (2) Fumarase It catalyzes the reversible hydration/dehydration of fumarate to malate EC 4.2.1.2 (3) Malate dehydrogenase (also in glycolysis) It reversibly catalyzes the oxidation of malate to oxaloacetate using the reduction of NAD+ to NADH. EC 1.1.1.37 (4)Citrate synthase It catalyzes the condensation reaction of the two-carbon acetate residue from acetyl coenzyme A and a molecule of four-carbon oxaloacetate to form the six-carbon citrate EC 2.3.3.1 ### TCA cycle - explanation from wikipedia # + # Gene description #Humans #GLycolysis:(1) triosephosphate isomerase, # This gene encodes an enzyme, consisting of two identical proteins, which catalyzes the isomerization of glyceraldehydes 3-phosphate (G3P) and dihydroxy-acetone phosphate (DHAP) in glycolysis and gluconeogenesis. Mutations in this gene are associated with triosephosphate isomerase deficiency. Pseudogenes have been identified on chromosomes 1, 4, 6 and 7 #(2) phosphoglycerate kinase #The protein encoded by this gene is a glycolytic enzyme that catalyzes the conversion of 1,3-diphosphoglycerate to 3-phosphoglycerate. The encoded protein may also act as a cofactor for polymerase alpha. Additionally, this protein is secreted by tumor cells where it participates in angiogenesis by functioning to reduce disulfide bonds in the serine protease, plasmin, which consequently leads to the release of the tumor blood vessel inhibitor angiostatin. The encoded protein has been identified as a moonlighting protein based on its ability to perform mechanistically distinct functions. #(3)Phosphoenolpyruvate carboxylase #This gene is a main control point for the regulation of gluconeogenesis. The cytosolic enzyme encoded by this gene, along with GTP, catalyzes the formation of phosphoenolpyruvate from oxaloacetate, with the release of carbon dioxide and GDP. The expression of this gene can be regulated by insulin, glucocorticoids, glucagon, cAMP, and diet. #(4) Pyruvate kinase (also in pentose phosphate) # This gene encodes a protein involved in glycolysis. The encoded protein is a pyruvate kinase that catalyzes the transfer of a phosphoryl group from phosphoenolpyruvate to ADP, generating ATP and pyruvate. This protein has been shown to interact with thyroid hormone and may mediate cellular metabolic effects induced by thyroid hormones. This protein has been found to bind Opa protein, a bacterial outer membrane protein involved in gonococcal adherence to and invasion of human cells, suggesting a role of this protein in bacterial pathogenesis. 
#Pentose phosphate: #(1) Glucose-6-phosphate dehydrogenase (also in glycolysis) # This gene encodes glucose-6-phosphate dehydrogenase. This protein is a cytosolic enzyme encoded by a housekeeping X-linked gene whose main function is to produce NADPH, a key electron donor in the defense against oxidizing agents and in reductive biosynthetic reactions. G6PD is remarkable for its genetic diversity. Many variants of G6PD, mostly produced from missense mutations, have been described with wide ranging levels of enzyme activity and associated clinical symptoms. #(2) Gluconolactonase #protein encoded by this gene is a highly conserved, calcium-binding protein, that is preferentially expressed in the liver and kidney. It may have an important role in calcium homeostasis. Studies in rat indicate that this protein may also play a role in aging, as it shows age-associated down-regulation. This gene is part of a gene cluster on chromosome Xp11.3-Xp11.23. #(3) 6-phosphogluconate dehydrogenase # This gene, which encodes a member of the serine/threonine kinase family, regulates cell polarity and functions as a tumor suppressor. Mutations in this gene have been associated with Peutz-Jeghers syndrome, an autosomal dominant disorder characterized by the growth of polyps in the gastrointestinal tract, pigmented macules on the skin and mouth, and other neoplasms #(4) transaldolase # Transaldolase 1 is a key enzyme of the nonoxidative pentose phosphate pathway providing ribose-5-phosphate for nucleic acid synthesis and NADPH for lipid biosynthesis. This pathway can also maintain glutathione at a reduced state and thus protect sulfhydryl groups and cellular integrity from oxygen radicals. The functional gene of transaldolase 1 is located on chromosome 11 and a pseudogene is identified on chromosome 1 but there are conflicting map locations. The second and third exon of this gene were developed by insertion of a retrotransposable element. This gene is thought to be involved in multiple sclerosis #TCA Cycle: (1) Succinate dehydrogenase des1 (2) Fumarase (3) Malate dehydrogenase (also in glycolysis) (4)Citrate synthase #------------------# #E coli # GLycolysis: # (1)triosephosphate isomerase #Binds TrxA #(2) phosphoglycerate kinase #Phosphoglycerate kinase is one of the proteins induced by anaerobiosis. #(3)Phosphoenolpyruvate carboxylase #Mutant has reduced growth rate with little acetate excreation, decrease glucose consumption and a decreased carbon dioxide evolution rate. #(4) Pyruvate kinase (also in pentose phosphate) #Pyruvate kinase I and pyruvate kinase II differ in physical and chemical properties as well as in their kinetic behavior #Pentose phosphate: #(1) Glucose-6-phosphate dehydrogenase (also in glycolysis), # ATP-regulated binding and release of polypeptide substrates. [More information is available at EcoGene: EG10241]. Hsc56 exhibits specificity toward Hsc62, as Hsc56 does not activate DnaK or Hsc66 ATPase activity #(2) Gluconolactonase may be an issue did no show in database # #(3) 6-phosphogluconate dehydrogenase # A null mutation in the gnd gene encoding 6-phosphogluconate dehydrogenase does not affect the growth rate significantly. [More information is available at EcoCyc: EG10411]. 
#(4) transaldolase #Transaldolase is an enzyme of the pentose phosphate pathway, where it catalyzes the reversible interconversion of glyceraldehyde-3-phosphate and sedoheptulose-7-phosphate to fructose-6-phosphate and erythrose-4-phosphate #TCA Cycle: (1) Succinate dehydrogenase (2) Fumarase (3) Malate dehydrogenase (also in glycolysis) (4)Citrate synthase #------------------# Drosphilia GLycolysis: (1)triosephosphate isomerase (2) phosphoglycerate kinase (3)Phosphoenolpyruvate carboxylase (4) Pyruvate kinase (also in pentose phosphate) Pentose phosphate: (1) Glucose-6-phosphate dehydrogenase (also in glycolysis), (2) Gluconolactonase may be an issue did no show in database (3) 6-phosphogluconate dehydrogenase (4) transaldolase TCA Cycle: (1) Succinate dehydrogenase (2) Fumarase (3) Malate dehydrogenase (also in glycolysis) (4)Citrate synthase # -*- coding: utf-8 -*- # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.1.0 # language: julia # name: julia-1.1 # --- # # Moving body # This notebook demonstrates the use of conformally-mapped bodies and edge conditions to generate flows in which vorticity is released from one or two edges. using PotentialFlow using Plots pyplot() clibrary(:colorbrewer) default(grid = false) # ## Set up # The following function computes the right-hand side of the evolution equation. At the beginning of every time-step, we first determine the bound vortex sheet strength required to satisfy the no-flow-through condition, then velocity of all vortex elements. Finally, we need to transform the computed velocities so that they apply to the elements in the circle plane. function compute_ẋ!(ẋ, x, t) body, ambient_sys = x motion = ẋ[1] # update the instantaneous motion of the body with the current motion motion.ċ, motion.c̈, motion.α̇, motion.α̈ = motion.kin(t) Bodies.enforce_no_flow_through!(body, motion, ambient_sys, t) # Zero the velocity reset_velocity!(ẋ, x) # Compute the self-induced velocity of the system self_induce_velocity!(ẋ, x, t) # Modify the velocity so that it provides the rate of change in the circle plane. Bodies.transform_velocity!(ẋ, ẋ, x, body) end # Once we have advected all the vortex elements, we release new blobs from the designated edges of the body. This function adds new blobs to the set with the correct strength to enforce edge conditions. 
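# The placement rule used by this function can be written compactly: with $\varphi = 1/3$ the fractional distance from the edge point to the previously released blob, each new element is created in the physical plane at
#
# $$z_{\mathrm{new}} = \varphi\, z_{\mathrm{blob}} + (1 - \varphi)\, z_{\mathrm{edge}},$$
#
# and then mapped back to the circle plane (where all elements are stored) before its strength is set by the edge conditions.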
# + function shed_new_vorticity!(blobs, edge1, edge2, body, motion, t, spcrit1 = 0.0, spcrit2 = 0.0) # assume that sheet positions are in the circle plane ϕ = 1/3 # fractional distance from the edge point to the previous blob # Location edges in physical plane zedge1 = body.zs[edge1] # body.c + body.m.z[edge1]*exp(im*body.α) zedge2 = body.zs[edge2] # body.c + body.m.z[edge2]*exp(im*body.α) # positions of previously-released blobs, transformed to physical plane zblob1 = conftransform(blobs[end-1].z,body) zblob2 = conftransform(blobs[end].z,body) # positions of new blobs in physical plane z1 = ϕ*zblob1 + (1-ϕ)*zedge1 z2 = ϕ*zblob2 + (1-ϕ)*zedge2 # positions of new blobs in circle plane ζ1 = inverse_conftransform(z1,body) ζ2 = inverse_conftransform(z2,body) # new blobs, with unit strength for now blob1 = Vortex.Blob(ζ1, 1.0, δ) blob2 = Vortex.Blob(ζ2, 1.0, δ) Bodies.enforce_no_flow_through!(body, motion, blobs, t) # need blobs in circle plane # Determine the strengths of the new blobs Γ1, Γ2 = Bodies.vorticity_flux(body, edge1, edge2, (body,blobs), [blob1], [blob2], t, spcrit1, spcrit2); # Add the new blobs to the list push!(blobs, Vortex.Blob(ζ1, Γ1, blobs[1].δ), Vortex.Blob(ζ2, Γ2, blobs[1].δ)) end # - # ## Set up the body # ### A bent plate L = 1 nv = 4 # number of vertices θ = range(0,π,length=nv) xv = [0.5L*cos.(θ);reverse(0.5L*cos.(θ[2:end-1]))] yv = [0.5L*sin.(θ);reverse(0.5L*sin.(θ[2:end-1]))] p = Bodies.Polygon(xv,yv); c = 0.0+0.0im α₀ = -π/2 # angle of attack (this will become the mean angle in oscillatory pitching/heaving) ċ = 1 # translation velocity b = Bodies.ConformalBody(p,c,α₀) plot(b) # #### Designate edges from which to release vortices. You can select one or two. edgeindices = findall(b.m.angle .== minimum(b.m.angle)) # Look for vertices that have interior angle = 0 kLE = edgeindices[1] # edge 1 for releasing vorticity kTE = edgeindices[2] # edge 2 for releasing vorticity # Check that these are the correct edges by checking their positions b.zs[[kLE,kTE]] # ### Set up the motion. Here are two possibilities: # #### Simple steady translation motion = Plates.RigidBodyMotion(ċ, 0.0) α = α₀; # #### Oscillatory pitching and heaving # If you simply desire steady translation, then do not run this cell or the next one. # # Take note of the χ parameter below. If it is less than 1, then there is a tendency for a plate or airfoil to generate thrust rather than drag. # + a = 0.25 # location of pitch axis, a = 0.5 is leading edge ϕ = 0 #π/2 # phase lag of pitch to heave A = 0.025 # amplitude/chord fstar = 1/π # fc/U Δα = 10π/180 # amplitude of pitching K = π*fstar # reduced frequency, K = πfc/U χ = Δα/atan(2π*A*fstar) println("f* = ",fstar) println("Δα = ",Δα*180/π) println("α(1/4) = ",abs(atan(2*K*A)-Δα)*180/π) println("A/c = ",A) println("K = ",K) println("χ = ",χ) oscil = RigidBodyMotions.PitchHeave(real(ċ),a,K,ϕ,α₀,Δα,A); motion = Plates.RigidBodyMotion(oscil); α = oscil.α(0.0); # - # #### Plot of the effective angle of attack for oscillatory pitching and heaving trange = 0:0.01:(4/fstar) Vy = map(x -> imag(x[1]),oscil.(trange)) αeff = atan.(-Vy)+oscil.α.(trange) plot(trange,αeff*180/π,ylim=(-140,40),ylabel="Effective angle (deg)",xlabel="Convective time") # ### Initialize the problem Δt = 5e-3; # time step # We place the initial blobs near the edges of the body. # # #### NOTE: if you find you are getting an error, you might try changing the sign of Δz₀. 
# + b = Bodies.ConformalBody(p,c,α) edge1 = kLE # leading edge index edge2 = kTE # trailing edge index # blob radius δ = 0.02/abs(b.m.constant) # locations of edges in physical plane zedge1, zedge2 = b.zs[[edge1,edge2]] # Vector to add to these edges. This determines the initial placement of the first vortex elements relative # to the edges. Δz₀ = -3*im*Δt*exp(im*α) #Δz₀ = -3*Δt+0.001im # locations of initial blobs in circle plane ζblob = inverse_conftransform(Δz₀ .+ [zedge1, zedge2],b) # create the blobs, for now with unit strength blobs = Vortex.Blob.(ζblob, 1.0, δ) # - # We then adjust the circulation of the vortex blobs to satisfy the edge conditions. # In this library, the vorticity flux from the edge of the body is determined through the edge suction parameter. # The Kutta condition simply corresponds to the suction parameter being zero at the edge, whereas Inf suppresses shedding altogether from the edge. # # #### NOTE: For problems involving small angle of attack, it is best to set spcrit1 to Inf to suppress vortex shedding from the leading edge. # + # critical edge suction parameters spcrit1 = 0 # leading edge. Make this Inf if you want to suppress vortex shedding from the leading edge. spcrit2 = 0 # trailing edge Bodies.enforce_no_flow_through!(b, motion, (), 0) sys = (b,) # This determines the circulations that enforce the edge conditions Γ1, Γ2 = Bodies.vorticity_flux(b, edge1, edge2, sys, [blobs[1]], [blobs[2]], 0, spcrit1, spcrit2); # Now create the blobs with the correct circulations blobs = Vortex.Blob.(ζblob, [Γ1, Γ2], δ) # This creates the image blobs, so that no-penetration condition is enforced Bodies.enforce_no_flow_through!(b, motion, blobs, 0) # Set up the initial system ambient_sys = blobs sys = (b, ambient_sys) nothing # - # Set up initial data structures for the solution # + t = 0.0 sys₊ = deepcopy(sys) # Used for storage during time-marching #ẋs = [(motion, allocate_velocity(ambient_sys)) for k = 1:4] # For RK4 ẋs = (motion, allocate_velocity(ambient_sys)) # For forward Euler method # Storage time = Float64[] imp = ComplexF64[] blob_z = conftransform(ambient_sys,b) track = [deepcopy((b,blob_z))] push!(imp,Elements.impulse((b,blob_z))) tsamp = 0.25 # Rate at which to save system data in `track` array nothing # - # ## Time-Marching # We use forward Euler to evolve the system and apply filtering on the trailing edge vortex sheets to suppress small-scale instabilities. 
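# A single forward Euler step advances the state (body motion together with the blob positions) as
#
# $$x_{n+1} = x_n + \Delta t\, \dot{x}(x_n, t_n),$$
#
# which is the update applied by `TimeMarching.forward_euler!` inside the loop below; new vorticity is then shed from the edges and the impulse is recorded at every step.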
# + tf = 0.25 T = 0:Δt:tf for tloc in T b_now, ambient_ω_ζ = sys motion, ambient_u = ẋs resize!(sys₊[2], length(sys[2])) TimeMarching.forward_euler!(sys₊, sys, t, Δt, compute_ẋ!, advect!, ẋs) global sys₊, sys = sys, sys₊ t += Δt shed_new_vorticity!(sys[2], edge1, edge2, sys[1], ẋs[1], t, spcrit1, spcrit2) # save stuff push!(time,t) b_now, ambient_ω_ζ = deepcopy(sys) blob_z = conftransform(ambient_ω_ζ,b_now) if isapprox(mod(t,tsamp),0.0;atol=1e-8) || isapprox(mod(t,tsamp),tsamp;atol=1e-8) push!(track,deepcopy((b_now,blob_z))) end Bodies.enforce_no_flow_through!(b_now, motion, ambient_ω_ζ, t) push!(imp,Elements.impulse((b_now,blob_z))) end b_now, ambient_ω_ζ = sys blob_z = conftransform(ambient_ω_ζ,b_now); # - # ### Plot the flow field tkfont = Plots.font("Times New Roman",15) ps = plot(track[end],legend=false,markerstrokewidth=0,color=:RdBu_r,clim=(-0.025/(2π),0.025/(2π)),markersize=3,tickfont=tkfont,ratio=1,xlim=(-1,3),ylim=(-1,1)) # # ### Plot the force coefficients force = -diff(imp)/Δt tkfont = Plots.font("Times New Roman",12) plot(time,2*imag.(force),tickfont=tkfont,label="Cy",xlim=(0,5)) plot!(time,2*real.(force),label="Cx") using Statistics println("Mean lift coefficient = ",-Statistics.mean(2*imag.(force))) println("Mean drag coefficient = ",-Statistics.mean(2*real.(force))) # ### Plotting streamlines # Set up a polar grid on which to plot # + rmax = 5.0 # largest radial coordinate (smallest is 1) ϵ = 0.00001 # small offset from the surface of the unit circle nth = 400 # number of circumferential points dth = 2π/nth θ = range(0,2π,length=nth+1) dr = dth r = [1+ϵ] while maximum(r) < rmax push!(r,r[end]+dr) dr = r[end]*dth end # - # Plot the streamlines of the current system of body and vortices tkfont = Plots.font("Times New Roman",15) ps = streamlines(r,θ,sys) plot!(ps,b,legend=false,markerstrokewidth=0,color=:RdBu_r,clim=(-0.025/(2π),0.025/(2π)),markersize=3,tickfont=tkfont) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Problem 52 # ## Permuted multiples # # It can be seen that the number, $125874$, and its double, $251748$, contain exactly the same digits, but in a different order. # # Find the smallest positive integer, $x$, such that $2x, 3x, 4x, 5x$, and $6x$, contain the same digits. # # OEIS Sequence: [A133220](https://oeis.org/A133220) # # ## Solution # + pycharm={"name": "#%%\n"} def compute(n: int) -> int: i = 1 while True: for j in range(10 ** i, 10 ** (i + 1) // n): j_digits = sorted(str(j)) for k in range(2, n + 1): k_digits = sorted(str(j * k)) if j_digits != k_digits: break else: return j i += 1 # + pycharm={"name": "#%%\n"} compute(2) # + pycharm={"name": "#%%\n"} compute(6) # + pycharm={"name": "#%%\n"} # %timeit -n 100 -r 1 -p 6 compute(6) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # The Easiest Way to Create an Interactive Dashboard in Python # # This notebook supports the blog post # # **The Easiest Way to Create an Interactive Dashboard in Python. Turn Pandas pipelines into a # dashboard using hvplot `.interactive`** # # by ** and **. 
# # ![Data App](assets/easy-dataframe-dashboards.gif) # # Source: https://github.com/sophiamyang/hvplot_interactive # ## Import and configure packages # # Please note that in **Colab** you will need to `!pip install panel hvplot` and add `comms='colab'` to `pn.extension`. # + # # !pip install panel==0.12.6 hvplot==0.7.3 # + import panel as pn pn.extension('tabulator', sizing_mode="stretch_width") # pn.extension('tabulator', sizing_mode="stretch_width", comms="colab") # + import hvplot.pandas import holoviews as hv hv.extension('bokeh') # - # ## Define function to determine environment def environment(): try: get_ipython() return "notebook" except: return "server" environment() # ## Define Color Palette PALETTE = ["#ff6f69", "#ffcc5c", "#88d8b0", ] pn.Row( pn.layout.HSpacer(height=50, background=PALETTE[0]), pn.layout.HSpacer(height=50, background=PALETTE[1]), pn.layout.HSpacer(height=50, background=PALETTE[2]), ) # ## Load Data from bokeh.sampledata.autompg import autompg_clean as df df.head() # ## Define DataFrame Pipeline ( df[ (df.cyl == 4) & (df.mfr.isin(['ford','chevrolet'])) ] .groupby(['origin', 'cyl', 'mfr', 'yr'])['hp'].mean() .to_frame() .reset_index() .sort_values(by='yr') ).head(1) # ## Make DataFrame Pipeline Interactive idf = df.interactive() # Define [Panel widgets](https://panel.holoviz.org/reference/index.html#widgets) cylinders = pn.widgets.IntSlider(name='Cylinders', start=4, end=8, step=2) mfr = pn.widgets.ToggleGroup( name='MFR', options=['ford', 'chevrolet', 'honda', 'toyota', 'audi'], value=['ford', 'chevrolet', 'honda', 'toyota', 'audi'], button_type='success') yaxis = pn.widgets.RadioButtonGroup( name='Y axis', options=['hp', 'weight'], button_type='success' ) # Combine pipeline and widgets ipipeline = ( idf[ (idf.cyl == cylinders) & (idf.mfr.isin(mfr)) ] .groupby(['origin', 'mpg'])[yaxis].mean() .to_frame() .reset_index() .sort_values(by='mpg') .reset_index(drop=True) ) ipipeline.head() # ## Pipe to Table if environment()=="server": theme="fast" else: theme="simple" itable = ipipeline.pipe(pn.widgets.Tabulator, pagination='remote', page_size=10, theme=theme) itable # Check out the [Tabulator Reference Guide](https://panel.holoviz.org/reference/widgets/Tabulator.html) for more inspiration. # ## Pipe to hvplot ihvplot = ipipeline.hvplot(x='mpg', y=yaxis, by='origin', color=PALETTE, line_width=6, height=400) ihvplot # ## Layout using Template # # Here we use the [FastListTemplate](https://panel.holoviz.org/reference/templates/FastListTemplate.html#templates-gallery-fastlisttemplate). template = pn.template.FastListTemplate( title='Interactive DataFrame Dashboards with hvplot .interactive', sidebar=[cylinders, 'Manufacturers', mfr, 'Y axis' , yaxis], main=[ihvplot.panel(), itable.panel()], accent_base_color="#88d8b0", header_background="#88d8b0", ) # template.show() template.servable(); # Please note that to get the Tabulator table styled nicely for dark mode you can set `theme='fast'` when you define the `itable`. It won't work in the notebook though. # # To *serve the notebook* run `panel serve hvplot_interactive.ipynb`. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Module 8: Cluster Analysis # # The following tutorial contains Python examples for solving classification problems. 
You should refer to Chapters 7 and 8 of the "Introduction to Data Mining" book to understand some of the concepts introduced in this tutorial. The notebook can be downloaded from http://www.cse.msu.edu/~ptan/dmbook/tutorials/tutorial8/tutorial8.ipynb. # # Cluster analysis seeks to partition the input data into groups of closely related instances so that instances that belong to the same cluster are more similar to each other than to instances that belong to other clusters. In this tutorial, we will provide examples of using different clustering techniques provided by the scikit-learn library package. # # Read the step-by-step instructions below carefully. To execute the code, click on the corresponding cell and press the SHIFT-ENTER keys simultaneously. # # ## 8.1 K-means Clustering # # The k-means clustering algorithm represents each cluster by its corresponding cluster centroid. The algorithm would partition the input data into *k* disjoint clusters by iteratively applying the following two steps: # 1. Form *k* clusters by assigning each instance to its nearest centroid. # 2. Recompute the centroid of each cluster. # # In this section, we perform k-means clustering on a toy example of movie ratings dataset. We first create the dataset as follows. # + import pandas as pd ratings = [['john',5,5,2,1],['mary',4,5,3,2],['bob',4,4,4,3],['lisa',2,2,4,5],['lee',1,2,3,4],['harry',2,1,5,5]] titles = ['user','Jaws','Star Wars','Exorcist','Omen'] movies = pd.DataFrame(ratings,columns=titles) movies # - # In this example dataset, the first 3 users liked action movies (Jaws and Star Wars) while the last 3 users enjoyed horror movies (Exorcist and Omen). Our goal is to apply k-means clustering on the users to identify groups of users with similar movie preferences. # # The example below shows how to apply k-means clustering (with k=2) on the movie ratings data. We must remove the "user" column first before applying the clustering algorithm. The cluster assignment for each user is displayed as a dataframe object. # + from sklearn import cluster data = movies.drop('user',axis=1) k_means = cluster.KMeans(n_clusters=2, max_iter=50, random_state=1) k_means.fit(data) labels = k_means.labels_ pd.DataFrame(labels, index=movies.user, columns=['Cluster ID']) # - # The k-means clustering algorithm assigns the first three users to one cluster and the last three users to the second cluster. The results are consistent with our expectation. We can also display the centroid for each of the two clusters. centroids = k_means.cluster_centers_ pd.DataFrame(centroids,columns=data.columns) # Observe that cluster 0 has higher ratings for the horror movies whereas cluster 1 has higher ratings for action movies. The cluster centroids can be applied to other users to determine their cluster assignments. # + import numpy as np testData = np.array([[4,5,1,2],[3,2,4,4],[2,3,4,1],[3,2,3,3],[5,4,1,4]]) labels = k_means.predict(testData) labels = labels.reshape(-1,1) usernames = np.array(['paul','kim','liz','tom','bill']).reshape(-1,1) cols = movies.columns.tolist() cols.append('Cluster ID') newusers = pd.DataFrame(np.concatenate((usernames, testData, labels), axis=1),columns=cols) newusers # - # To determine the number of clusters in the data, we can apply k-means with varying number of clusters from 1 to 6 and compute their corresponding sum-of-squared errors (SSE) as shown in the example below. The "elbow" in the plot of SSE versus number of clusters can be used to estimate the number of clusters. 
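# For reference, the quantity scikit-learn reports as `inertia_` is exactly this sum of squared errors,
#
# $$SSE = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2,$$
#
# where $\mu_i$ is the centroid of cluster $C_i$. The cell below records it for $k = 1, \dots, 6$ so the elbow can be read off the plot.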
# + import matplotlib.pyplot as plt # %matplotlib inline numClusters = [1,2,3,4,5,6] SSE = [] for k in numClusters: k_means = cluster.KMeans(n_clusters=k) k_means.fit(data) SSE.append(k_means.inertia_) plt.plot(numClusters, SSE) plt.xlabel('Number of Clusters') plt.ylabel('SSE') # - # ## 8.2 Hierarchical Clustering # # This section demonstrates examples of applying hierarchical clustering to the vertebrate dataset used in Module 6 (Classification). Specifically, we illustrate the results of using 3 hierarchical clustering algorithms provided by the Python scipy library: (1) single link (MIN), (2) complete link (MAX), and (3) group average. Other hierarchical clustering algorithms provided by the library include centroid-based and Ward's method. # + import pandas as pd data = pd.read_csv('datasets/vertebrate.csv',header='infer') data # - # ### 8.2.1 Single Link (MIN) # + from scipy.cluster import hierarchy import matplotlib.pyplot as plt # %matplotlib inline names = data['Name'] Y = data['Class'] X = data.drop(['Name','Class'],axis=1) Z = hierarchy.linkage(X.values, 'single') dn = hierarchy.dendrogram(Z,labels=names.tolist(),orientation='right') # - # ### 8.2.2 Complete Link (MAX) Z = hierarchy.linkage(X.values, 'complete') dn = hierarchy.dendrogram(Z,labels=names.tolist(),orientation='right') # ### 8.3.3 Group Average Z = hierarchy.linkage(X.values, 'average') dn = hierarchy.dendrogram(Z,labels=names.tolist(),orientation='right') # ## 8.3 Density-Based Clustering # # Density-based clustering identifies the individual clusters as high-density regions that are separated by regions of low density. DBScan is one of the most popular density based clustering algorithms. In DBScan, data points are classified into 3 types---core points, border points, and noise points---based on the density of their local neighborhood. The local neighborhood density is defined according to 2 parameters: radius of neighborhood size (eps) and minimum number of points in the neighborhood (min_samples). # # For this approach, we will use a noisy, 2-dimensional dataset originally created by Karypis et al. [1] for evaluating their proposed CHAMELEON algorithm. The example code shown below will load and plot the distribution of the data. # + import pandas as pd data = pd.read_csv('datasets/chameleon.data', delimiter=' ', names=['x','y']) data.plot.scatter(x='x',y='y') # - # We apply the DBScan clustering algorithm on the data by setting the neighborhood radius (eps) to 15.5 and minimum number of points (min_samples) to be 5. The clusters are assigned to IDs between 0 to 8 while the noise points are assigned to a cluster ID equals to -1. # + from sklearn.cluster import DBSCAN db = DBSCAN(eps=15.5, min_samples=5).fit(data) core_samples_mask = np.zeros_like(db.labels_, dtype=bool) core_samples_mask[db.core_sample_indices_] = True labels = pd.DataFrame(db.labels_,columns=['Cluster ID']) result = pd.concat((data,labels), axis=1) result.plot.scatter(x='x',y='y',c='Cluster ID', colormap='jet') # - # ## 8.4 Spectral Clustering # # One of the main limitations of the k-means clustering algorithm is its tendency to seek for globular-shaped clusters. Thus, it does not work when applied to datasets with arbitrary-shaped clusters or when the cluster centroids overlapped with one another. Spectral clustering can overcome this limitation by exploiting properties of the similarity graph to overcome such limitations. To illustrate this, consider the following two-dimensional datasets. 
# + import pandas as pd data1 = pd.read_csv('datasets/2d_data.txt', delimiter=' ', names=['x','y']) data2 = pd.read_csv('datasets/elliptical.txt', delimiter=' ', names=['x','y']) fig, (ax1,ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,5)) data1.plot.scatter(x='x',y='y',ax=ax1) data2.plot.scatter(x='x',y='y',ax=ax2) # - # Below, we demonstrate the results of applying k-means to the datasets (with k=2). # + from sklearn import cluster k_means = cluster.KMeans(n_clusters=2, max_iter=50, random_state=1) k_means.fit(data1) labels1 = pd.DataFrame(k_means.labels_,columns=['Cluster ID']) result1 = pd.concat((data1,labels1), axis=1) k_means2 = cluster.KMeans(n_clusters=2, max_iter=50, random_state=1) k_means2.fit(data2) labels2 = pd.DataFrame(k_means2.labels_,columns=['Cluster ID']) result2 = pd.concat((data2,labels2), axis=1) fig, (ax1,ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,5)) result1.plot.scatter(x='x',y='y',c='Cluster ID',colormap='jet',ax=ax1) ax1.set_title('K-means Clustering') result2.plot.scatter(x='x',y='y',c='Cluster ID',colormap='jet',ax=ax2) ax2.set_title('K-means Clustering') # - # The plots above show the poor performance of k-means clustering. Next, we apply spectral clustering to the datasets. Spectral clustering converts the data into a similarity graph and applies the normalized cut graph partitioning algorithm to generate the clusters. In the example below, we use the Gaussian radial basis function as our affinity (similarity) measure. Users need to tune the kernel parameter (gamma) value in order to obtain the appropriate clusters for the given dataset. # + from sklearn import cluster import pandas as pd spectral = cluster.SpectralClustering(n_clusters=2,random_state=1,affinity='rbf',gamma=5000) spectral.fit(data1) labels1 = pd.DataFrame(spectral.labels_,columns=['Cluster ID']) result1 = pd.concat((data1,labels1), axis=1) spectral2 = cluster.SpectralClustering(n_clusters=2,random_state=1,affinity='rbf',gamma=100) spectral2.fit(data2) labels2 = pd.DataFrame(spectral2.labels_,columns=['Cluster ID']) result2 = pd.concat((data2,labels2), axis=1) fig, (ax1,ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,5)) result1.plot.scatter(x='x',y='y',c='Cluster ID',colormap='jet',ax=ax1) ax1.set_title('Spectral Clustering') result2.plot.scatter(x='x',y='y',c='Cluster ID',colormap='jet',ax=ax2) ax2.set_title('Spectral Clustering') # - # ## 8.5 Summary # # This tutorial illustrates examples of using different Python's implementation of clustering algorithms. Algorithms such as k-means, spectral clustering, and DBScan are designed to create disjoint partitions of the data whereas the single-link, complete-link, and group average algorithms are designed to generate a hierarchy of cluster partitions. # # References: # [1] , , and . CHAMELEON: A Hierarchical Clustering Algorithm Using Dynamic Modeling. IEEE Computer 32(8): 68-75, 1999. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Training models on Paperspace with nvidia docker # # What is Paperspace? # # - Easy way to train your Deep Learning models on cloud GPUs if you don't have one already # - Cheap cloud GPUs # - Trial GPU hours # # # Introduction # # How do you get started? 
# # https://www.youtube.com/watch?v=swXhAI6DF0E # # #### ^ Follow this tutorial but # # - Create a machine using *Ubuntu 16.04* template # - Choose a machine type with GPU # # # Once you have created a machine, you should have recieved a root user password from Paperspace in your email. # # Installing nvidia-docker # # Why nvidia-docker? # # - It is the best way to run our models in consistent environment as our local environment # - It is very easy to get started # # # ##### Steps # # - So far you must have created a GPU enabled Ubuntu 16.04 machine # - Run below command to install nvidia-docker + docker # # ```bash # wget -O - -q 'https://gist.githubusercontent.com/dte/8954e405590a360614dcc6acdb7baa74/raw/d1b5a01ed0b9252654016d2a9a435dc8b4c045e7/install-CUDA-docker-nvidia-docker.sh' | sudo bash # ``` # # - Then you must restart your machine # # ```bash # sudo shutdown -r now # ``` # # Training || running Jupyter Notebook || just bash # # #### 1. Jupyter notebook # # ```bash # sudo nvidia-docker run --rm --name tf-notebook -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu jupyter notebook --allow-root # ``` # # - Note that you must enable **Public IP* to acce # # # #### 2. Python file # # ```bash # sudo nvidia-docker run -d -it -v $(pwd)/030-chatbot:/notebooks -v /inputs:/inputs -v /output:/output tensorflow/tensorflow:1.0.0-gpu python chatbot_simple.py # ``` # # Above command will do # # - Run a container in daemonised mode # - Interactive shell enabled # - Mount `030-chatbot`, `/inputs` and `/output` folders from host # - Run `python chatbot_simple.py` # - Create a container using `tensorflow:1.0.0-gpu # # # #### 3. TF container bash access # # # TF 1.0.0 version # # ```bash # sudo nvidia-docker run -it tensorflow/tensorflow:1.0.0-gpu bash # ``` # # TF latest version # # ```bash # sudo nvidia-docker run -it tensorflow/tensorflow:latest-gpu bash # ``` # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Exploring US Data Flights Exercise Solutions # # * Below are 5 asnwers for all the questions asked in flights.ipynb. # * The code assumes that the files "515364771_T_ONTIME_REPORTING.csv" and "L_UNIQUE_CARRIERS.csv" are located in the same folder as this notebook. # * Each exercise reads the dataset anew, so they can be run independantly. # # >
    # > Α.Μ. 3160245 # # ## Question 1 # # * We are called to present 3 tables. The first contains the number of delays along with chance of having a delayed flight for every airport in the dataset. # * The dataset contains a multitude of cells without actual data (NaN values). Since we will only be utilising the origin and departure delay columns, we will exclusively check for NaN values in the latter. # * We proceed to count the total number of flights. This number will be used to find the probability of delay later. # * We remove the flights with no delay, along with the outliers, by grouping the airports by their origin name and using the quantile function. # * We now create a dataframe for the "misery index", the airports sorted by their number of delayed flights. # * To illustrate the probabiities easier, we round them up, reducing the number of decimal places # + import pandas as pd import numpy as np # QUESTION 1 # read datatset and create index. Remove unnecessary columns # DEP_DELAY appears to contain the sum of the specific delays mentioned on the rest of the columns # since we only care about the sum, we can safely drop the rest dataset = pd.read_csv("515364771_T_ONTIME_REPORTING.csv", sep=",") df = dataset.set_index(['ORIGIN']) # update the dataframe, dropping NaN values from the DEP_DELAY column. make note of the total flights df = df[np.isfinite(df.DEP_DELAY)] flight_num = len(df) # remove flights with no departure delay, along with the outliers df = df.loc[df.DEP_DELAY > 0] df = df.loc[df.groupby('ORIGIN').size() > df.groupby('ORIGIN').size().quantile(0.01)] # create new dataframe that will contain the misery index, along with the average and median delay of each airport # first add the number of delays of each airport flights = df.groupby('ORIGIN').size().to_frame() flights.columns = ['NUMBER OF DELAYS'] # next, calculate the chance of delay flights['CHANCE OF DELAY'] = (flights['NUMBER OF DELAYS']/flight_num).round(3) flights = flights.sort_values(by = ['CHANCE OF DELAY'], ascending = False) flights # - # * We return on the original dataframe to calculate the mean and median values of the delayed flights. # * We group the airlines by their origin name and use mean()/median() on their respective departure delay cells # * These values are stored and shown from a new dataframe, sorted by the average delay column. # + # calculate the average and median delay of each airport mean_med = pd.DataFrame(columns = ['MEAN', 'MEDIAN']) mean_med['MEAN'] = df.groupby('ORIGIN')['DEP_DELAY'].mean().round(1) mean_med['MEDIAN'] = df.groupby('ORIGIN')['DEP_DELAY'].median() # sort by average mean_med = mean_med.sort_values(by = ['MEAN'], ascending = False) mean_med # - # * With the two dataframes in hand, we concatenate and sort them by the airport's chance of a delayed flight. # concatenate the two dataframes and sort by the probability of having a delayed departure airport_delay = pd.concat([flights, mean_med], sort = False, axis = 1).reindex(flights.index) airport_delay = airport_delay.sort_values(by = ['CHANCE OF DELAY'], ascending = False) airport_delay # ## Question 2 # # * For this question, we will be mostly using the same code as above. # * Instead of aiports, we will be presenting airline carriers. # * We will also read the full name of the airlines from another cvs file. For each dataframe we create, we will merge those names with the airline codes. 
# + # QUESTION 2 import pandas as pd import numpy as np # create a dataframe with a new index to represent the carriers # also load carrier names from another csv file df = pd.read_csv("515364771_T_ONTIME_REPORTING.csv", sep=",") df = df.set_index(['CARRIER']) carrier_names = pd.read_csv("L_UNIQUE_CARRIERS.csv", sep=",") carrier_names = carrier_names.rename(columns = {'Code' : 'CARRIER'}).set_index('CARRIER') # update the dataframe, dropping NaN values from the DEP_DELAY column. make note of the total flights df = df[np.isfinite(df.DEP_DELAY)] flight_num = len(df) # remove flights with no departure delay. this time we keep the outliers df = df.loc[df.DEP_DELAY > 0] # create new dataframe that will contain the misery index, along with the average and median delay of each carrier # first add the number of delays of each carrier carriers = df.groupby('CARRIER').size().to_frame() carriers.columns = ['NUMBER OF DELAYS'] # next, calculate the chance of delay carriers['CHANCE OF DELAY'] = (carriers['NUMBER OF DELAYS']/flight_num).round(3) # include the names of the carriers by merginf the two dataframes carriers = carriers.merge(carrier_names, how = 'inner', on = 'CARRIER') carriers = carriers[['Description', 'NUMBER OF DELAYS', 'CHANCE OF DELAY']].rename(columns = {'Description' : 'CARRIER NAME'}) # sort the carriers by their chance of havinf a delay and print carriers = carriers.sort_values(by = ['CHANCE OF DELAY'], ascending = False) carriers # - # * Same code once more, only our mean/median dataframe is merged with the airline names' dataframe. # + # calculate the average and median delay of each carrier mean_med_c = pd.DataFrame(columns = ['MEAN', 'MEDIAN']) mean_med_c['MEAN'] = df.groupby('CARRIER')['DEP_DELAY'].mean().round(1) mean_med_c['MEDIAN'] = df.groupby('CARRIER')['DEP_DELAY'].median() # sort by average mean_med_c = mean_med_c.sort_values(by = ['MEAN'], ascending = False) # include the names of the carriers mean_med_c = mean_med_c.merge(carrier_names, how = 'inner', on = 'CARRIER') mean_med_c = mean_med_c[['Description', 'MEAN', 'MEDIAN']].rename(columns = {'Description' : 'CARRIER NAME'}) mean_med_c # - # * As in question 1, the final dataframe concatenates the previous two, now merged with the airline names. # concatenate the two dataframes and sort by the probability of having a delayed departure carrier_delay = pd.concat([carriers, mean_med_c], sort = False, axis = 1).reindex(carriers.index) carrier_delay = carrier_delay.sort_values(by = ['CHANCE OF DELAY'], ascending = False) # include the names of the carriers carrier_delay = carrier_delay.merge(carrier_names, how = 'inner', on = 'CARRIER') carrier_delay = carrier_delay[['Description', 'NUMBER OF DELAYS', 'CHANCE OF DELAY', 'MEAN', 'MEDIAN']].rename(columns = {'Description' : 'CARRIER NAME'}) carrier_delay # ## Question 3 # # * To create the histogram, first we read our dataset and as before, drop the NaN values in the departure delay column. # * We also drop all the unecessary columns of the dataset, keeping only the origin and destination columns. # * We group those by origin name and count the number of flights for each one. # * Finally, we create the histogram. # + # QUESTION 3 import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # create the dataframe df = pd.read_csv("515364771_T_ONTIME_REPORTING.csv", sep=",") # update the dataframe, dropping NaN values from the DEP_DELAY column. 
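# (np.isfinite is False for NaN, so the filter on the next line keeps only rows with a recorded departure delay)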
df = df[np.isfinite(df['DEP_DELAY'])] df = df[['ORIGIN', 'DEST']].set_index('ORIGIN') df = df.groupby('ORIGIN').count() df.hist(bins = [0, 50000, 100000, 150000, 200000, 250000, 300000, 350000, 400000], edgecolor = 'black', figsize = [15, 10]) plt.xticks(fontsize = 14) plt.yticks(fontsize = 12) plt.xlabel("Flights range", fontsize=16) plt.ylabel("Number of airports with departing flights", fontsize=14) plt.show() # - # ## Question 4 # # * To create this plot, first we will create a new dataframe with the two necessary columns. # * This time, we'll be keeping the fl_date column, after we parse it with parse_dates. This will allow us to group flights based on their dates. # * "Number of flights" will be filled with the total number of flights of every airport, for each month. # * "Number of delayed flights" will only count the delayed flights of every airport, for each month. # * Before we create our plot, we write a list with month names. We'll use them as tick labels on the x-axis of the plot. # * We create the plot and use a couple of matplotlib function to better visualise the data. # + # QUESTION 4 import pandas as pd import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # create the dataframe. parse_dates parses the given dates of the FL_date column and allows # for easier grouping of our data in months df = pd.read_csv("515364771_T_ONTIME_REPORTING.csv", sep=",", parse_dates = ['FL_DATE']) # update the dataframe, dropping NaN values from the DEP_DELAY column. df = df[np.isfinite(df['DEP_DELAY'])] # fill the dataframe with the number of flights and the number of delays for each month pf = pd.DataFrame(columns = ['NUMBER OF FLIGHTS', 'NUMBER OF DELAYED FLIGHTS']) pf['NUMBER OF FLIGHTS'] = df.groupby(df['FL_DATE'].dt.month).size() pf['NUMBER OF DELAYED FLIGHTS'] = df.loc[df['DEP_DELAY'] > 0].groupby(df['FL_DATE'].dt.month).size() # create plot, replace x-axis labels with months (instead of index numbers) months = ['JAN','FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG', 'SEP', 'OCT', 'NOV', 'DEC'] pf_plot = pf.plot(x_compat = True, figsize = [15, 10]) pf_plot.set_xticklabels(months) plt.xticks([x for x in range(1, 13)], fontsize = 14) plt.xlabel("2018", fontsize = 16) plt.ylabel("Number of Departures", fontsize = 14) plt.show() # - # ## Question 5 # # * After the usual preprocessing, we keep the four columns that concern our problem: Carrier code, origin and destination names and the departure delay. # * We group the first three and find the mean values of the delays. # * We sort those values and then keep only the first one for each origin-dest-carrier. # * Meaning, we now have the carrier with the best mean delay for the specific origin - destination set. # * We remove the carrier from the index of the table (we'll be using it on the next cell). # * We print the first 25 rows of our resulting table. # + # QUESTION 5 import pandas as pd import numpy as np import matplotlib.pyplot as plt # create the default dataframe df = pd.read_csv("515364771_T_ONTIME_REPORTING.csv", sep=",") # update the dataframe, dropping NaN values from the DEP_DELAY column. 
df = df[np.isfinite(df['DEP_DELAY'])] df = df[['CARRIER', 'ORIGIN', 'DEST', 'DEP_DELAY']].set_index(['ORIGIN', 'DEST']) df = df.groupby(['ORIGIN', 'DEST', 'CARRIER']).mean().round(2).rename(columns = {'DEP_DELAY' : 'AVERAGE DELAY OF CARRIER'}) df = df.sort_values(['ORIGIN', 'DEST', 'AVERAGE DELAY OF CARRIER'], ascending = True) df = df[~df.index.get_level_values(1).duplicated()] df = df.reset_index(level = ['CARRIER']) df.head(25) # - # * The above table is obviously not that useful on its current state. Finding a specific row can be time consuming. # * Below is a piece of code that recieves input (the desired origin and destination set) and searches the dataframe for the best carrier. # * The input is case-agnostic. # user can now search for the best airline code for a specific origin-destination point print("Enter the origin code of your departure airport") org = input().upper() print("Enter the destination code of your destination") dest = input().upper() print("Finding best airline carrier...\n") answer = df.loc[(df.index.get_level_values(0) == org) & (df.index.get_level_values(1) == dest)] if answer.empty: print("No carrier found! Origin/Destination might not exist. Check for mistakes on your input.") else: print(answer) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="11La92D7sMoN" # + id="5ddnT-ChTgyy" outputId="6b4b48af-dd1e-4a4b-81a9-21ee2b2479b4" colab={"base_uri": "https://localhost:8080/", "height": 34} from google.colab import drive drive.mount('/content/drive', force_remount=True) # + id="-odkR3PcJu7u" dir = "/content/drive/My Drive/InfraredSolarModules/" # + id="1jwqCXRHTuDJ" import numpy as np import keras as k import json import PIL # + id="ucFbuQLaUXU4" from keras.applications.vgg16 import preprocess_input from keras.applications.vgg16 import VGG16 from keras.applications.resnet50 import preprocess_input from keras.preprocessing import image # + id="dDBPe4nNUC4U" outputId="894ac318-044b-4ff2-c462-1655f049c47b" colab={"base_uri": "https://localhost:8080/", "height": 1000} def imagegen(): f = open('/content/drive/My Drive/InfraredSolarModules/module_metadata.json',) meta = json.load(f) f.close() img = [] label = [] for i in range(20000): print("doing image"+str(i)) img.append(np.array(k.preprocessing.image.load_img('/content/drive/My Drive/InfraredSolarModules/' + meta[str(i)]['image_filepath'], target_size=(224, 224)))) if meta[str(i)]['anomaly_class'] == 'No-Anomaly': label.append('No-Anomaly') else: label.append('Anomaly') return np.array(img), np.array(label) imgs, labels = imagegen() print(imgs.shape) print(len(imgs)) print(len(imgs.shape)) # + id="YPD0nHYmS9yv" def imageLoader(imgs, batch_size): L = len(imgs) # this line is just to make the generator infinite, keras needs that while True: batch_start = 0 batch_end = batch_size while batch_start < L: print(batch_start) limit = min(batch_end, L) X = imgs[batch_start:limit] # Y = labels[batch_start:limit] # break yield X batch_start += batch_size batch_end += batch_size # + id="au1bXOVO-D_4" outputId="32af1358-9bf3-4919-e7fb-68843762f5be" colab={"base_uri": "https://localhost:8080/", "height": 1000} def preprocess_image(imgs): preprocessed_images = [] counter = 1 for image_item in imgs: print("doing image {}".format(counter)) # image_item = image.img_to_array(image_item) image_item = np.expand_dims(image_item, axis=0) image_item = preprocess_input(image_item) 
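        # note: the processed array is never appended to preprocessed_images or returned,
        # so this loop only exercises preprocess_input; the later model.predict call is fed the raw imgs instead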
counter+=1 preprocess_image(imgs) # + id="KG2KvH56UcIB" model = VGG16(weights="imagenet", include_top=False, classes=2) # + id="2f5EWYv3TII0" outputId="72e40c06-2175-41db-ee44-2122b21813d4" colab={"base_uri": "https://localhost:8080/", "height": 1000} # print("1", labels[0]) # print("2", imgs[0].shape) # features = model.predict_generator(imageLoader(imgs, labels, 32)) # print(model.predict(imgs[5]).shape) # print(model.predict(imgs[-5]).shape) # print("I PASS") features = model.predict(imageLoader(imgs,32),steps=625) # + id="dsYpUGo703wM" outputId="ff3e2e37-44cd-44c5-d417-e4340038285d" colab={"base_uri": "https://localhost:8080/", "height": 34} print(features.shape) # + id="PSF0prVENbCy" outputId="023874c1-f171-4f68-e2d1-c540530e468e" colab={"base_uri": "https://localhost:8080/", "height": 34} type(imgs) # + id="b4L8_k6FcJ8o" outputId="be192cea-a9a6-4f25-998e-ce0b75776a75" colab={"base_uri": "https://localhost:8080/", "height": 34} imgs.shape # + id="JMTVzTjEZ79e" from keras.applications.vgg16 import preprocess_input from keras.applications.vgg16 import VGG16 from keras.applications.resnet50 import preprocess_input from keras.preprocessing import image model = VGG16(weights="imagenet", include_top=False, classes=2) # def preprocess_image(imgs): # preprocessed_images = [] # counter = 1 # for image_item in imgs: # print("doing image {}".format(counter)) # # image_item = image.img_to_array(image_item) # image_item = np.expand_dims(image_item, axis=0) # image_item = preprocess_input(image_item) # counter+=1 # preprocess_image(imgs) # features = model.predict(imgs) # print(features.shape) # + id="BJYe9UCHvyKv" from sklearn import svm from sklearn.model_selection import train_test_split from sklearn.linear_model import SGDClassifier features = features.reshape(20000,7*7*512) svm_classifier = SGDClassifier() X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.35) # + id="d_wq7lXwsYSc" outputId="f5fc626d-ec51-4dcd-fb4e-915543cbf4d8" colab={"base_uri": "https://localhost:8080/", "height": 272} print(X_train.shape) print(X_train.shape) print(y_train) # + id="DoIiQEamsWOo" outputId="e1802710-2d47-4534-a8c3-9ca19b0dd151" colab={"base_uri": "https://localhost:8080/", "height": 51} svm_classifier.fit(X_train, y_train) y_pred = svm_classifier.predict(X_test) from sklearn.metrics import confusion_matrix confusion_matrix(y_test, y_pred) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: DeepF-kernel # language: python # name: myenv # --- import numpy as np foo = np.array([]) if foo.any: print("okay") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from pyhht.visualization import plot_imfs import numpy as np from pyhht.emd import EMD import matplotlib.pyplot as plt from tftb.generators import fmconst from scipy import angle, unwrap from scipy.signal import hilbert t = np.linspace(0, 10, 100) modes = np.cos(2 * 3.14 * 50 * t) + np.cos(2 * 3.14 * 100 * t) x = modes + t decomposer = EMD(x) imfs = decomposer.decompose() plot_imfs(x, imfs, t); print('%.3f' % decomposer.io()) hs4 = hilbert(imfs[0]) plt.plot(t, imfs[0], 'b') #plt.plot( np.imag(hs4),np.real(hs4), 'b') omega_imfs_zero = unwrap(angle(hs4)) # unwrapped instantaneous phase f_inst_imfs_zero = np.diff(omega_imfs_zero) # 
instantaneous frequency #plt.plot(t[1:], f_inst_imfs_zero, "b") hs5 = hilbert(imfs[1]) plt.plot(t, imfs[1], 'g') #plt.plot( np.imag(hs5),np.real(hs5), 'g') omega_imfs_one = unwrap(angle(hs5)) # unwrapped instantaneous phase f_inst_imfs_one = np.diff(omega_imfs_one) # instantaneous frequency #plt.plot(t[1:], f_inst_imfs_one, "g") plt.show() # + x = np.linspace(0, 2 * np.pi, 1000) s1=np.sin(x) s2=np.sin(2*x) hs1 = hilbert(s1) hs2 = hilbert(s2) plt.plot(x, s1, 'b') plt.plot(x, s2, 'g') plt.plot( np.imag(hs1),np.real(hs1), 'b') plt.plot( np.imag(hs2),np.real(hs2), 'g') omega_s1 = unwrap(angle(hs1)) # unwrapped instantaneous phase omega_s2 = unwrap(angle(hs2)) # unwrapped instantaneous phase f_inst_s1 = np.diff(omega_s1) # instantaneous frequency f_inst_s2 = np.diff(omega_s2) # instantaneous frequency plt.plot(x[1:], f_inst_s1, "b") plt.plot(x[1:], f_inst_s2, "g") plt.show() # - plt.plot(t, imfs[0], 'b') plt.plot(t, imfs[1], 'g') plt.plot(t, imfs[2], 'r') # + from mpl_toolkits.mplot3d import Axes3D from matplotlib import pyplot as plt import numpy x=numpy.array([1,2,3,4,5]) y=numpy.array([5,4,7,1,7]) r=numpy.array([0.1,0.6,0.4,1.0,0.3]) fig = plt.figure(figsize=(7,5)) ax = Axes3D(fig) # plot data line1 = ax.plot(x,y,r,'ok') #modify axes ax.set_xlim(0,6) ax.set_ylim(0,8) ax.minorticks_on() ax.tick_params(axis='both',which='minor') ax.tick_params(axis='both',which='major') #display plt.show() # + from mpl_toolkits import mplot3d # %matplotlib inline import numpy as np import matplotlib.pyplot as plt def f(x, y): return np.sin(np.sqrt(x ** 2 + y ** 2)) x = np.linspace(-6, 6, 30) y = np.linspace(-6, 6, 30) X, Y = np.meshgrid(x, y) Z = f(X, Y) fig = plt.figure() ax = plt.axes(projection='3d') ax.contour3D(X, Y, Z, 50, cmap='binary') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z'); # - Axes3D.plot_surface(X, Y, Z, *args, **kwargs) # + from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt import numpy as np X = np.array(abs(f_inst_imf_one[1:1140])) Y = np.array(a1[1:1140]) Z = np.array(10000000*pxx_den[1:1140]) x = np.reshape(X, (9, 12)) y = np.reshape(Y, (9, 12)) z = np.reshape(Z, (9, 12)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_surface(x, y, z) ax.set_xlabel('X Label') ax.set_ylabel('Y Label') ax.set_zlabel('Z Label') plt.show() # - import numpy as np import pylab as pl rate = 30.0 t = a1 x = IMF[0], p = 20*np.log10(np.abs(np.fft.rfft(x))) f = np.linspace(0, rate/2, len(p)) plt.plot(f, p) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import matplotlib.pyplot as plt # + # Plot Lidar NIS nis_lidar = [] with open("lidarNis.txt", "r") as file: nis_lidar = file.read().splitlines() nis_lidar = list(map(float, nis_lidar)) threshold_laser = [5.991]*len(nis_lidar) fig = plt.figure() plt.plot(nis_lidar) plt.title("NIS (LIDAR)") plt.xlabel("Measurement Index") plt.ylabel("NIS") plt.plot(threshold_laser) plt.show() fig.savefig('lidarNis.png') # Plot Radar NIS nis_radar = [] with open("radarNis.txt", "r") as file: nis_radar = file.read().splitlines() nis_radar = list(map(float, nis)) threshold_radar = [7.815]*len(nis_lidar) fig = plt.figure() plt.plot(threshold_radar) plt.plot(nis_radar) plt.title("NIS (RADAR)") plt.xlabel("Measurement Index") plt.ylabel("NIS") plt.show() fig.savefig('radarNis.png') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: 
light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:data science] # language: python # name: conda-env-data_science-py # --- import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import seaborn as sns import matplotlib.pyplot as plt from sklearn.preprocessing import OrdinalEncoder country = pd.read_csv("../dataset/country.csv") league = pd.read_csv("../dataset/league.csv") match = pd.read_csv("../dataset/match.csv") player = pd.read_csv("../dataset/player.csv") player_attributes = pd.read_csv("../dataset/player_attributes.csv") team = pd.read_csv("../dataset/team.csv", encoding = "ISO-8859-1") team_attributes = pd.read_csv("team_attributes_GrpByNewTEAMID.csv") dogdata = pd.read_csv("../groupscores/dogdata_new.csv") dogs = dogdata["dog"].to_list() dogdata["dogscore"]= (dogdata["homerevenue"] + dogdata["awayrevenue"])/(dogdata["homegames"] + dogdata["awaygames"])-1 dogdata = dogdata.sort_values(by = "dogscore", ascending = False) good_dogs = dogdata[:200]["dog"].to_list() good_dogs dogdata dogdata[dogdata["dogscore"]>0] team_attributes.columns simp_team_attributes = pd.DataFrame() simp_team_attributes["team_id"] = team_attributes['new_id'] simp_team_attributes["buildUpScore"] =team_attributes[["buildUpPlaySpeed", "buildUpPlayPassing"]].mean(axis = 1) simp_team_attributes["chanceCreationScore"] =team_attributes[["chanceCreationPassing", "chanceCreationCrossing","chanceCreationShooting"]].mean(axis = 1) simp_team_attributes["defenceScore"] = team_attributes[["defencePressure", "defenceAggression","defenceTeamWidth"]].mean(axis = 1) simp_team_attributes["is_dog"]= simp_team_attributes["team_id"].isin(dogs) simp_team_attributes["is_gooddog"] = simp_team_attributes["team_id"].isin(good_dogs) simp_dog_attributes = simp_team_attributes[simp_team_attributes.is_dog] simp_good_dog_attributes = simp_dog_attributes[simp_dog_attributes.is_gooddog] simp_bad_dog_attributes = simp_dog_attributes[~simp_dog_attributes.is_gooddog] team_attributes["is_dog"]= team_attributes["new_id"].isin(dogs) team_attributes["is_gooddog"] = team_attributes["new_id"].isin(good_dogs) dog_attributes = team_attributes[team_attributes.is_dog] good_dog_attributes = dog_attributes[dog_attributes.is_gooddog] bad_dog_attributes = dog_attributes[~dog_attributes.is_gooddog] # # 3D scatter plot # + from mpl_toolkits import mplot3d import numpy as np import matplotlib.pyplot as plt font = {'family' : 'normal', 'size' : 12} plt.rc('font', **font) fig = plt.figure(figsize = (10, 7)) ax = plt.axes(projection ="3d") x = simp_bad_dog_attributes.buildUpScore y = simp_bad_dog_attributes.chanceCreationScore z = simp_bad_dog_attributes.defenceScore ax.scatter(x, y, z) plt.title("Profitable underdogs' profile") x = simp_good_dog_attributes.buildUpScore y = simp_good_dog_attributes.chanceCreationScore z = simp_good_dog_attributes.defenceScore ax.scatter(x, y, z,color ='red') ax.set_xlabel('buildUpScore') ax.set_ylabel('chanceCreationScore') ax.set_zlabel('defenceScore') ax.legend(['other underdogs', 'profitable underdogs']) # show plot plt.show() fig.savefig('underdogs.pdf', bbox_inches='tight') # - num_col = ['buildUpPlaySpeed', 'buildUpPlayPassing', 'chanceCreationPassing', 'chanceCreationCrossing', 'chanceCreationShooting', 'defencePressure', 'defenceAggression', 'defenceTeamWidth'] cat_col = ['buildUpPlayPositioningClass','chanceCreationPositioningClass', 'defenceDefenderLineClass'] good_dog_attributes dogscore = dogdata[["dog","dogscore"]] 
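# index the per-team profitability scores by team id so they can be joined onto the attribute table below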
dogscore = dogscore.set_index(["dog"]) dog_attributes = dog_attributes.set_index("new_id") dog_num = dog_attributes[num_col+cat_col].join(dogscore) dog_num # + num_features_all = pd.melt(dog_attributes[num_col]) num_features_highdogs = pd.melt(good_dog_attributes[num_col]) font = {'family' : 'normal', 'weight' : 'normal', 'size' : 20} plt.rc('font', **font) df = pd.concat([num_features_all, num_features_highdogs],keys = ["all dogs","good dogs"]) df = df.reset_index() df = df.drop(["level_1"], axis =1) df = df.rename({"level_0": "category"}, axis =1) # Initialize the figure with a logarithmic x axis f, ax = plt.subplots(figsize=(20, 15)) # Plot the orbital period with horizontal boxes sns.boxplot(y="variable", x="value", hue = "category", data = df, whis=[0, 100], width=.6, palette="vlag") # Tweak the visual presentation ax.xaxis.grid(True) ax.set(ylabel="") plt.legend(loc='upper right') sns.despine(trim=True, left=True) plt.savefig('underdogs2.pdf', bbox_inches='tight') # - cat_col = ['buildUpPlayPositioningClass','chanceCreationPositioningClass', 'defenceDefenderLineClass'] cat_features_all = dog_attributes[cat_col] cat_features_highdogs = good_dog_attributes[cat_col] df = pd.concat([pd.melt(cat_features_all),pd.melt(cat_features_highdogs)],keys = ["all","high-dogs"]) df = df.reset_index() df = df.drop(["level_1"], axis =1) df = df.rename({"level_0": "category"}, axis =1) # + #sns.set_theme(style="ticks") # Initialize the figure with a logarithmic x axis f, ax = plt.subplots(figsize=(30, 20)) #ax.set_xscale("log") # Plot the orbital period with horizontal boxes sns.barplot(y="variable", x="value", hue = "category", data = df) # Tweak the visual presentation ax.xaxis.grid(True) ax.set(ylabel="") sns.despine(trim=True, left=True) # - # # Linear regression dog_lr = dog_num for i in num_col: X = dog_lr[i] X = (X - X.min())/(X.max()- X.min()) dog_lr[i] = X # + from sklearn.linear_model import LinearRegression y = dog_lr.dogscore X = dog_lr.drop(["dogscore"], axis = 1) reg = LinearRegression().fit(X, y) reg.coef_ # - # # Correlations for i in num_col + cat_col: print(dog_num["dogscore"].corr(dog_num[i])) num_col + cat_col # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings('ignore') # %matplotlib inline data=pd.read_csv('Mall_Customers.csv') data.head() data.info() sns.FacetGrid(data,hue='Genre',size=5,palette='Set2').map(plt.scatter,'Annual Income (k$)','Spending Score (1-100)').add_legend() plt.title('Annual Income and Spending Score') plt.show() data=data.drop(columns='CustomerID') sns.pairplot(data,kind='scatter') sns.countplot(data['Genre']) def dist(a): sns.FacetGrid(data,height=6).map(sns.distplot,a).add_legend() data.columns dist('Age') plt.title('Distribution of Age') plt.show() dist('Annual Income (k$)') plt.title('Annual Income Distribution') plt.show() dist('Spending Score (1-100)') plt.title('Distribution ') plt.show() from sklearn.preprocessing import LabelEncoder le=LabelEncoder() data['Genre']=le.fit_transform(data['Genre']) # Checking the optimal number of Clustures. 
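# In addition to the elbow method in the following cell, the silhouette score gives a complementary, purely illustrative check on the number of clusters: it is bounded in [-1, 1] and higher is better, which can be easier to read than the elbow. This is a minimal sketch assuming the same `data` frame and scikit-learn's `silhouette_score`.
# +
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

sil_scores = []
for k in range(2, 11):  # silhouette is undefined for a single cluster
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(data)
    sil_scores.append(silhouette_score(data, labels))
plt.plot(range(2, 11), sil_scores, 'bx-')
plt.xlabel('Values of K')
plt.ylabel('Silhouette score')
plt.title('Silhouette score per number of clusters')
plt.show()
# -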
from sklearn.cluster import KMeans inertias = [] for i in range(1, 15): km = KMeans(n_clusters=i).fit(data) inertias.append(km.inertia_) plt.plot(range(1, 15), inertias, 'bx-') plt.xlabel('Values of K') plt.ylabel('Inertia') plt.title('The Elbow Method using Inertia') plt.show() km = KMeans(n_clusters=5).fit(data) y_km = km.fit_predict(data) n_cluster, km_count = np.unique(y_km, return_counts=True) plt.bar(n_cluster, km_count) plt.ylabel('No of customer') plt.xlabel('Clustering') plt.title('Customer segmentation by 3 groups') # + plt.scatter(data['Annual Income (k$)'], data['Spending Score (1-100)'], c=y_km, s=100) plt.xlabel('Annual Income') plt.ylabel('Spending Score') plt.title('Customer segmentation into 5 clusters') plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # Output is cleared due to confidentiality of the information. import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import os from jupyterthemes import jtplot jtplot.style() # + import pandas as pd import numpy as np from googleapiclient.discovery import build from google_auth_oauthlib.flow import InstalledAppFlow,Flow from google.auth.transport.requests import Request import os import pickle os.chdir(r'C:\Users\luc57.DESKTOP-NB5DC80\AE\ipynb') SCOPES = ['https://www.googleapis.com/auth/spreadsheets'] # here enter the id of your google sheet SAMPLE_SPREADSHEET_ID_input = '' SAMPLE_RANGE_NAME = 'A1:ZZ30000' def main(): global values_input, service creds = None if os.path.exists('token.pickle'): with open('token.pickle', 'rb') as token: creds = pickle.load(token) if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) # here enter the name of your downloaded JSON file creds = flow.run_local_server(port=0) with open('token.pickle', 'wb') as token: pickle.dump(creds, token) service = build('sheets', 'v4', credentials=creds) # Call the Sheets API sheet = service.spreadsheets() result_input = sheet.values().get(spreadsheetId=SAMPLE_SPREADSHEET_ID_input, range=SAMPLE_RANGE_NAME).execute() values_input = result_input.get('values', []) if not values_input and not values_expansion: print('No data found.') main() concatdf=pd.DataFrame(values_input[1:], columns=values_input[0]) #file must be a google sheets, not a normal xlsx uploaded to gdrive # - concatdf = pd.read_excel(r'C:\Users\luc57.DESKTOP-NB5DC80\AE\ipynb\concatdf.xlsx',index_col=0) concatdf.dtypes # # Data Wrangling concatdf.day.nunique() #convert column day to datetime concatdf['day']=pd.to_datetime(concatdf['day']) concatdf['day'].nunique() concatdf.month # + from datetime import datetime #correct the month variable concatdf['month'] = concatdf['day'].dt.month concatdf['month'] = concatdf['month'].astype(str) def month_name(month_number): datetime_object = datetime.strptime(month_number, "%m") return datetime_object.strftime("%B") concatdf['month']=concatdf['month'].apply(month_name) concatdf['month'] # - cols = ['orders','product_price','gross_sales','discounts','net_sales', 'taxes','total_sales','average_order_value', 'units_per_transaction','net_quantity', 'ordered_item_quantity'] concatdf[cols] = concatdf[cols].apply(pd.to_numeric, errors='coerce', axis=1) concatdf.drop(columns=[''],inplace=True) concatdf.dtypes 
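# A small aside on the `errors='coerce'` conversion used above: values that cannot be parsed as numbers (blanks, "n/a", stray text) silently become NaN instead of raising, so it is worth counting how many NaNs the coercion introduced. The toy series below is made up purely for illustration; it is not the real sales data.
# +
toy_sales = pd.Series(["12.5", "7", "n/a", "", "3.2"])
coerced = pd.to_numeric(toy_sales, errors="coerce")
print(coerced.tolist())      # [12.5, 7.0, nan, nan, 3.2]
print(coerced.isna().sum())  # 2 values could not be parsed
# a similar check on the real frame: concatdf[cols].isna().sum()
# -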
concatdf.product_variant # # Determine Last Transactions s = concatdf.groupby("customer_name")["day"].transform("max") grouped_data=concatdf[concatdf["day"].eq(s)].groupby("customer_name").agg(lambda d: ", ".join(d.unique())) grouped_data grouped_data[['product_variant','discount_code','channel']] # + #get total net sales net_sales=concatdf.groupby('customer_name')['net_sales'].sum().reset_index() #get earliest transaction date earliest_trans=concatdf.groupby('customer_name')['day'].min().reset_index() #get last transaction date last_trans=concatdf.groupby('customer_name')['day'].max().reset_index() #rename column name earliest_trans.columns=['customer_name','earliest_transaction_date'] last_trans.columns=['customer_name','last_transaction_date'] #convert the column to a datetime variable last_trans['last_transaction_date']=pd.to_datetime(last_trans['last_transaction_date']) earliest_trans['earliest_transaction_date']=pd.to_datetime(earliest_trans['earliest_transaction_date']) # - earliest_trans.sort_values(by='earliest_transaction_date') import datetime as dt today=pd.to_datetime(dt.date.today()) #create new column to determine last purchase date last_trans['last_purchase_in_days']=today-last_trans['last_transaction_date'] last_trans['last_purchase_in_days']=last_trans['last_purchase_in_days'].apply(lambda x: x.days) #get the product variants purchased on the last day and discount code used product_variants=pd.DataFrame(grouped_data.product_variant.tolist(),columns=['product_variants']) discount_codes=pd.DataFrame(grouped_data.discount_code.tolist(),columns=['discount_codes']) channels=pd.DataFrame(grouped_data.channel.tolist(),columns=['channels']) order_count=concatdf.groupby(['customer_name'])['order_name'].agg(pd.Series.nunique).to_frame() last_trans = pd.merge(last_trans, order_count,on='customer_name',how='left') last_trans.rename(columns={'order_name':'order_count'},inplace=True) last_trans['product_variants']=product_variants last_trans['discount_codes']=discount_codes last_trans['channels']=channels last_trans last_trans.product_variants.value_counts() # # Determine churners last_trans['Churn']=None # + #Churn last_trans['Churn']=np.where((last_trans['product_variants'].str.contains('50ml')) & (last_trans['last_purchase_in_days']>150), 'Churned', last_trans['Churn'] ) last_trans['Churn']=np.where((last_trans['product_variants'].str.contains('30ml')) & (last_trans['last_purchase_in_days']>90), 'Churned', last_trans['Churn'] ) #Survived last_trans['Churn']=np.where((last_trans['product_variants'].str.contains('50ml')) & (last_trans['last_purchase_in_days']<=150) & (last_trans['order_count'] >1), 'Survived', last_trans['Churn'] ) last_trans['Churn']=np.where((last_trans['product_variants'].str.contains('30ml')) & (last_trans['last_purchase_in_days']<=90) & (last_trans['order_count'] >1), 'Survived', last_trans['Churn'] ) # Too Early to Identify last_trans['Churn']=np.where((last_trans['product_variants'].str.contains('30ml')) & (last_trans['last_purchase_in_days']<=90) & (last_trans['order_count'] ==1), 'Too Early to Identify', last_trans['Churn'] ) last_trans['Churn']=np.where((last_trans['product_variants'].str.contains('50ml')) & (last_trans['last_purchase_in_days']<=150) & (last_trans['order_count'] ==1), 'Too Early to Identify', last_trans['Churn'] ) # - last_trans['Churn'].value_counts() last_trans date_trans = pd.merge(last_trans, earliest_trans, on='customer_name',how='left') date_trans # # Create DF to Include All Disc. 
Codes and Product Variants concatdf.columns product_variant_and_discount_code=concatdf[['customer_name','day','product_variant','discount_code','channel']] product_variant_and_discount_code product_variant_and_discount_code['cc'] = (product_variant_and_discount_code.groupby('customer_name').\ cumcount() + 1).astype(str) product_variant_and_discount_code = product_variant_and_discount_code.pivot( \ index=['customer_name'], columns='cc', values=['day','product_variant','discount_code','channel']) product_variant_and_discount_code.columns = ['_'.join(col) for col in product_variant_and_discount_code.columns] product_variant_and_discount_code product_variant_and_discount_code.to_excel('customer_data_one_line.xlsx') # # Create customer_df shipping_city=concatdf.groupby(['customer_name'])['shipping_city'].agg(pd.Series.mode).to_frame() shipping_country=concatdf.groupby(['customer_name'])['shipping_country'].agg(pd.Series.mode).to_frame() shipping_postal_code=concatdf.groupby(['customer_name'])['shipping_postal_code'].agg( pd.Series.mode).to_frame() channel_unique=concatdf.groupby(['customer_name'])['channel'].agg( pd.Series.nunique).to_frame() average_order_value=concatdf.groupby(['customer_name'])['average_order_value'].agg( pd.Series.mean).to_frame() product_variant_unique=concatdf.groupby(['customer_name'])['product_variant'].agg( pd.Series.nunique).to_frame() quantity_count=concatdf.groupby(['customer_name'])['product_id'].agg( pd.Series.count).to_frame() disc_code_unique=concatdf.groupby(['customer_name'])['discount_code'].agg( pd.Series.nunique).to_frame() net_sales= concatdf.groupby('customer_name')['net_sales'].sum() customer_df = pd.concat([shipping_city,shipping_country,shipping_postal_code, product_variant_unique,channel_unique, order_count,quantity_count,disc_code_unique, average_order_value,net_sales],axis=1) customer_df customer_df.rename(columns={'order_name': 'order_quantity', 'product_id':'purchase_quantity', 'channel':'number_of_channels', 'product_variant':'number_of_product_variants', 'discount_code':'number_of_discount_codes'},inplace=True) # # Merge date_trans with customer_df customer_df = pd.merge(customer_df, date_trans, on='customer_name',how='left') customer_df customer_df.drop(columns=['discount_codes','channels','product_variants','order_quantity'],inplace=True) #these columns are dropped because they only contain information from the last day of transactions # # Merge with order_name, disc_code, channel, product_variant #concatenate the following variables to later store them in customer_df filter_column = concatdf.groupby(by='customer_name', sort=False).agg( ','.join) filter_column = filter_column[['order_name','discount_code','channel','product_variant']] customer_df = pd.merge(customer_df, filter_column, on='customer_name',how='left') customer_df for column in customer_df.columns: print("\n" + column) print(customer_df[column].value_counts()) customer_df.Churn.value_counts() print('Churn Rate: ',1983/(1983+513+245)) customer_df.loc[customer_df['customer_name']==''] customer_df.product_variant customer_df.columns customer_df.iloc[5] # As order name, discount code, channel and product variant may contain duplicate values, we join them. # ## Join duplicate values in the string customer_df['channel_2']=customer_df['channel'].apply(lambda x: ', '.join(dict.fromkeys(y for y in x.split(',')).keys())) customer_df[['channel','channel_2']][:20] # The result looks good, so we can apply the same concept to the other variables mentioned above. 
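# For reference, the idiom above works because `dict.fromkeys` keeps only the first occurrence of each key while preserving insertion order (guaranteed since Python 3.7), so joining its keys drops duplicates without reordering. A tiny, self-contained illustration with a made-up channel string:
raw = "Online Store,POS,Online Store,POS,Online Store"  # hypothetical comma-joined value
deduped = ", ".join(dict.fromkeys(part for part in raw.split(",")))
print(deduped)  # "Online Store, POS" - duplicates dropped, original order kept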
customer_df['order_name']=customer_df['order_name'].apply(lambda x: ', '.join(dict.fromkeys(y for y in x.split(',')).keys())) customer_df['discount_code']=customer_df['discount_code'].apply(lambda x: ', '.join(dict.fromkeys(y for y in x.split(',')).keys())) customer_df['product_variant']=customer_df['product_variant'].apply(lambda x: ', '.join(dict.fromkeys(y for y in x.split(',')).keys())) customer_df['channel']=customer_df['channel_2'] customer_df.drop(columns=['channel_2'],inplace=True) customer_df[['customer_name','order_name','discount_code','product_variant','channel']] # # Define customer group according to by the numbers segmentation customer_df[['customer_name','last_purchase_in_days','order_count']] # best: >=4 orders, <=30 days ago # recent: 1 order, <=30 days ago # loyal: >=4 orders, 1-6 months ago # # promising: 2-3 orders, 30 to 180 days # # defecting: 1 order, 1-6 months ago 30 to 180 days # at risk : 180 to 360 ago # dormant: >360 days ago # # # Combine with By the Numbers segments at_risk = pd.read_csv(r'C:\Users\luc57.DESKTOP-NB5DC80\AE\ipynb\By_the_numbers\at_risk_customers.csv') best = pd.read_csv(r'C:\Users\luc57.DESKTOP-NB5DC80\AE\ipynb\By_the_numbers\best_customers.csv') defecting = pd.read_csv(r'C:\Users\luc57.DESKTOP-NB5DC80\AE\ipynb\By_the_numbers\defecting_customers.csv') dormant = pd.read_csv(r'C:\Users\luc57.DESKTOP-NB5DC80\AE\ipynb\By_the_numbers\dormant_customers.csv') loyal = pd.read_csv(r'C:\Users\luc57.DESKTOP-NB5DC80\AE\ipynb\By_the_numbers\loyal_customers.csv') promising = pd.read_csv(r'C:\Users\luc57.DESKTOP-NB5DC80\AE\ipynb\By_the_numbers\promising_customers.csv') recent = pd.read_csv(r'C:\Users\luc57.DESKTOP-NB5DC80\AE\ipynb\By_the_numbers\recent_customers.csv') at_risk.head() #create the variable customer_name at_risk['customer_name']=at_risk['first_name']+str(' ')+at_risk['last_name'] best['customer_name']=best['first_name']+str(' ')+best['last_name'] defecting['customer_name']=defecting['first_name']+str(' ')+defecting['last_name'] dormant['customer_name']=dormant['first_name']+str(' ')+dormant['last_name'] loyal['customer_name']=loyal['first_name']+str(' ')+loyal['last_name'] promising['customer_name']=promising['first_name']+str(' ')+promising['last_name'] recent['customer_name']=recent['first_name']+str(' ')+recent['last_name'] #convert the string to upper cases at_risk['customer_name']=at_risk['customer_name'].str.upper() best['customer_name']=best['customer_name'].str.upper() defecting['customer_name']=defecting['customer_name'].str.upper() dormant['customer_name']=dormant['customer_name'].str.upper() loyal['customer_name']=loyal['customer_name'].str.upper() promising['customer_name']=promising['customer_name'].str.upper() recent['customer_name']=recent['customer_name'].str.upper() # We only need the customer_group and verified_email, then merge on customer_name. 
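# Since the same steps (build `customer_name`, upper-case it, keep the needed columns) are applied to each of the seven segment frames, the repetition above and below can also be written as one loop. This is only an equivalent, more compact sketch of those per-segment transformations, using a hypothetical `compact_segments` dict to hold the results.
# +
segments = {"at_risk": at_risk, "best": best, "defecting": defecting, "dormant": dormant,
            "loyal": loyal, "promising": promising, "recent": recent}
compact_segments = {}
for segment_name, seg in segments.items():
    seg = seg.copy()  # work on a copy so the original frames stay untouched
    seg["customer_name"] = (seg["first_name"] + " " + seg["last_name"]).str.upper().astype(str)
    compact_segments[segment_name] = seg[["customer_name", "customer_group", "verified_email"]]
# -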
at_risk = at_risk[['customer_name','customer_group','verified_email']] best = best[['customer_name','customer_group','verified_email']] defecting = defecting[['customer_name','customer_group','verified_email']] dormant = dormant[['customer_name','customer_group','verified_email']] loyal = loyal[['customer_name','customer_group','verified_email']] promising = promising[['customer_name','customer_group','verified_email']] recent = recent[['customer_name','customer_group','verified_email']] at_risk['customer_name']=at_risk['customer_name'].astype(str) best['customer_name'] = best['customer_name'].astype(str) defecting['customer_name'] = defecting['customer_name'].astype(str) dormant['customer_name'] = dormant['customer_name'].astype(str) loyal['customer_name'] = loyal['customer_name'].astype(str) promising['customer_name'] = promising['customer_name'].astype(str) recent['customer_name'] = recent['customer_name'].astype(str) recent.dtypes # ## Concatenate the different customer groups customer_groups = pd.concat([at_risk, best, defecting, dormant, loyal, promising, recent]) print(len(customer_groups)) customer_groups customer_groups = customer_groups.groupby(by='customer_name', sort=False).agg( ','.join) #customer_groups.columns = customer_groups.columns.droplevel(0) customer_groups.reset_index() # ## Merge customer_df and customer_groups customer_df = pd.merge(customer_df, customer_groups, on='customer_name', how='left') print(len(customer_df.columns)) print(len(customer_df)) customer_df.customer_group.value_counts() # + customer_df['customer_group']=np.where(customer_df['customer_group']=='at_risk,at_risk', 'at_risk',customer_df['customer_group']) customer_df['customer_group']=np.where(customer_df['customer_group']=='defecting,defecting', 'defecting',customer_df['customer_group']) customer_df['customer_group']=np.where(customer_df['customer_group']=='dormant,dormant', 'dormant',customer_df['customer_group']) customer_df['customer_group']=np.where(customer_df['customer_group']=='promising,promising', 'promising',customer_df['customer_group']) # - customer_df.customer_group.value_counts() # ## Fill in the na values customer_df.isna().sum() # ### Fillna for customer_group cust_group_na=customer_df.loc[customer_df['customer_group'].isna()] print(len(cust_group_na)) cust_group_na[['customer_name','last_purchase_in_days','order_count']] # + cust_group_na['customer_group']=np.where(cust_group_na['last_purchase_in_days']>365, 'dormant', cust_group_na['customer_group']) cust_group_na['customer_group']=np.where((cust_group_na['last_purchase_in_days']<=30) & (cust_group_na['order_count']>=4), 'best', cust_group_na['customer_group']) cust_group_na['customer_group']=np.where((cust_group_na['last_purchase_in_days']<=30) & (cust_group_na['order_count']==1), 'recent', cust_group_na['customer_group']) cust_group_na['customer_group']=np.where((cust_group_na['last_purchase_in_days']<=180) & (cust_group_na['order_count']>=4), 'loyal', cust_group_na['customer_group']) cust_group_na['customer_group']=np.where((cust_group_na['last_purchase_in_days']<=180) & (cust_group_na['order_count']==1), 'defecting', cust_group_na['customer_group']) cust_group_na['customer_group']=np.where((cust_group_na['last_purchase_in_days']<=180) & (cust_group_na['order_count']<=3), 'promising', cust_group_na['customer_group']) cust_group_na['customer_group']=np.where(cust_group_na['last_purchase_in_days']>=180, 'at_risk', cust_group_na['customer_group']) cust_group_na['customer_group'].value_counts() # - customer_df= 
customer_df.loc[~customer_df['customer_group'].isnull()] print(len(customer_df)) # Concatenate the dataframe which initially have null customer groups with the initial sliced dataframe. customer_df = pd.concat([customer_df,cust_group_na],axis=0) print('Number of customers: ',len(customer_df)) print('') print(customer_df.customer_group.value_counts()) print('') print(customer_df.isna().sum()) # ### Fillna for Churn customer_df.loc[customer_df['Churn'].isna()][['customer_name','last_purchase_in_days','order_count', 'product_variant']] na_churn = customer_df.loc[customer_df['Churn'].isna()] print(len(na_churn)) # + #Churn na_churn['Churn']=np.where((na_churn['product_variant'].str.contains('50ml')) & (na_churn['last_purchase_in_days']>150), 'Churned', na_churn['Churn'] ) na_churn['Churn']=np.where((na_churn['product_variant'].str.contains('30ml')) & (na_churn['last_purchase_in_days']>90), 'Churned', na_churn['Churn'] ) #Survived na_churn['Churn']=np.where((na_churn['product_variant'].str.contains('50ml')) & (na_churn['last_purchase_in_days']<=150) & (na_churn['order_count'] >1), 'Survived', na_churn['Churn'] ) na_churn['Churn']=np.where((na_churn['product_variant'].str.contains('30ml')) & (na_churn['last_purchase_in_days']<=90) & (na_churn['order_count'] >1), 'Survived', na_churn['Churn'] ) # Too Early to Identify na_churn['Churn']=np.where((na_churn['product_variant'].str.contains('30ml')) & (na_churn['last_purchase_in_days']<=90) & (na_churn['order_count'] ==1), 'Too Early to Identify', na_churn['Churn'] ) na_churn['Churn']=np.where((na_churn['product_variant'].str.contains('50ml')) & (na_churn['last_purchase_in_days']<=150) & (na_churn['order_count'] ==1), 'Too Early to Identify', na_churn['Churn'] ) na_churn.Churn.value_counts() # - na_churn.loc[na_churn['Churn'].isna()][['customer_name','last_purchase_in_days', 'product_variant','order_count']] # + na_churn['Churn']= np.where((na_churn['product_variant']!='Sample') & (na_churn['last_purchase_in_days']>120),'Churn', na_churn['Churn']) na_churn['Churn']= np.where((na_churn['product_variant']!='Sample') & (na_churn['last_purchase_in_days']<=120) & (na_churn['order_count']==1), 'Too Early to Identify', na_churn['Churn']) na_churn['Churn']= np.where((na_churn['product_variant']!='Sample') & (na_churn['last_purchase_in_days']<=120) & (na_churn['order_count']>1), 'Survived', na_churn['Churn']) na_churn['Churn']= np.where(na_churn['product_variant']=='Sample', 'Sample-Taker', na_churn['Churn']) na_churn['Churn'].value_counts() # - # Concatenate the revised churn and the original customer_df. 
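# Before concatenating below, note that the churn rules applied in the np.where chains above can also be read as a single row-wise function, similar in spirit to the segmentor function used later for the customer groups. This is only a sketch restating those rules (it assumes a variant string matches one bottle size, and it uses the label 'Churned' throughout, where the fallback cell above writes 'Churn'); the column name Churn_check is hypothetical.
# +
def label_churn(row):
    """Restate the rules above: the bottle size sets the repurchase window in days."""
    days = row["last_purchase_in_days"]
    orders = row["order_count"]
    variant = str(row["product_variant"])
    if variant == "Sample":
        return "Sample-Taker"
    if "50ml" in variant:
        window = 150
    elif "30ml" in variant:
        window = 90
    else:
        window = 120  # fallback window used above for the remaining non-sample variants
    if days > window:
        return "Churned"
    return "Survived" if orders > 1 else "Too Early to Identify"

# e.g. customer_df["Churn_check"] = customer_df.apply(label_churn, axis=1)
# -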
customer_df = customer_df.loc[~customer_df['Churn'].isnull()] print(len(customer_df)) customer_df = pd.concat([customer_df, na_churn],axis=0) print(len(customer_df)) print('') print(customer_df.Churn.value_counts()) # # No more na values customer_df.isnull().sum() # ## Fix the city variable again customer_df['shipping_city']=customer_df['shipping_city'].astype(str) customer_df.dtypes customer_df.shipping_city.value_counts()[:50] # + #get a list of unique cities shipping_city = customer_df.shipping_city.tolist() def unique_city(x): return list(dict.fromkeys(x)) unique_city_list =unique_city(shipping_city) print(unique_city_list) # - top_20_shipping_cities = customer_df.shipping_city.value_counts()[:20].index.tolist() top_20_shipping_cities customer_df['shipping_city_rev']=customer_df['shipping_city'] customer_df.loc[~customer_df['shipping_city_rev'].isin(top_20_shipping_cities), 'shipping_city_rev']='Others' #if the shipping city is not in the top 50 shipping cities, the value is others. customer_df['shipping_city']=customer_df['shipping_city_rev'] customer_df.drop(columns=['shipping_city_rev'],inplace=True) customer_df.shipping_city.value_counts() # # Fix the customers which listed two countries. customer_df.shipping_country = customer_df.shipping_country.astype(str) customer_df.loc[customer_df['shipping_country'].str.contains('Netherlands')]\ [['customer_name','last_transaction_date','shipping_country','order_name']] customer_df.shipping_country.value_counts() # ## Value Distribution for column in customer_df.columns: print("\n" + column) print(customer_df[column].value_counts()) # # Reorder the columns customer_df.columns customer_df=customer_df.reindex(columns=['customer_name','Churn','customer_group', 'shipping_country', 'shipping_city', 'shipping_postal_code','order_name', 'channel','discount_code','product_variant', 'average_order_value','net_sales', 'order_count','purchase_quantity', 'number_of_channels', 'number_of_discount_codes','number_of_product_variants', 'earliest_transaction_date','last_transaction_date', 'last_purchase_in_days']) customer_df.to_excel('customer_df.xlsx') customer_df customer_df.loc[customer_df['customer_group']=='defecting']\ [['customer_name','customer_group','Churn','last_purchase_in_days','order_count', 'product_variant']] customer_df.Churn.value_counts() # + def segmentor(row): if (row["last_purchase_in_days"] <= 30) & (row["order_count"] >= 4): return "best" elif (row["last_purchase_in_days"] <= 180) & (row["order_count"] >= 4): return "loyal" elif (row["last_purchase_in_days"] <= 30) & (row["order_count"] == 1): return "recent" elif (row["last_purchase_in_days"] <= 180) & (row["order_count"] == 1): return "defecting" elif (row["last_purchase_in_days"] <= 180) & (row["order_count"] <= 3): return "promising" elif row["last_purchase_in_days"] <= 360: return "at_risk" elif row["last_purchase_in_days"] >= 360: return "dormant" customer_df["segment"] = customer_df.apply(lambda x: segmentor(x), axis=1) # - customer_df.loc[~(customer_df['segment'] == customer_df['customer_group'])]\ [['customer_name','last_purchase_in_days','order_count','segment','customer_group']][250:300] customer_df.drop(columns=['customer_group'],inplace=True) customer_df.columns customer_df.to_excel('customer_df.xlsx') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="4L_iAbXksgQl" # """ # # author: , # # index: s12623, 
# # email: # # To run module: # # import module into Google Colaboratory notebook and run. # # This module recognize type of animal according to given image of animal. # # Keras model is build as classification type and contains two types of classification neural network architecture. # # """ # + [markdown] id="pDagT1Ehspzd" # **First model** # + colab={"base_uri": "https://localhost:8080/"} id="Zn5tfQjbwUY-" executionInfo={"status": "ok", "timestamp": 1611883193216, "user_tz": -60, "elapsed": 2086, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} outputId="b7983c33-dcbc-4077-fe01-1a4d24dddf57" # %tensorflow_version 2.x import numpy as np import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Flatten, Dense, Dropout, Conv2D, MaxPooling2D from keras.utils import to_categorical import matplotlib.pyplot as plt import random print('Tensorflow version: ', tf.__version__) # + colab={"base_uri": "https://localhost:8080/"} id="Y8vl9oqd1RfG" executionInfo={"status": "ok", "timestamp": 1611883198711, "user_tz": -60, "elapsed": 7559, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} outputId="d24df4b6-7b5b-4a88-8dd0-3ab20d1c8422" (X_train, y_train), (X_test, y_test) = tf.keras.datasets.cifar10.load_data() # + colab={"base_uri": "https://localhost:8080/"} id="_Kt23fiZDPZt" executionInfo={"status": "ok", "timestamp": 1611883683045, "user_tz": -60, "elapsed": 491883, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} outputId="03aff38b-9b7b-4b70-d5cf-0f5dfde2984f" print('before') print('X_train size: ', len(X_train)) print('y_train size: ', len(y_train)) print() print('still work:') i = 0 while not i == len(y_train): if y_train[i][0] in {0, 1, 8, 9}: X_train = np.delete(X_train, i, 0) y_train = np.delete(y_train, i, 0) i = i - 1 i = i + 1 if (i % (len(y_train) // 100)) == 0: print('{:.0%}'.format(i / len(y_train)), end=' ') if (i % (len(y_train) // 10)) == 0: print() print() print('done.') print() print('actual:') print('X_train size: ', len(X_train)) print('y_train size: ', len(y_train)) # + colab={"base_uri": "https://localhost:8080/"} id="cMh3kjD09P_c" executionInfo={"status": "ok", "timestamp": 1611883697801, "user_tz": -60, "elapsed": 506626, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} outputId="63f130c0-a14d-40f0-a1ce-4eb548818fc2" print('before') print('X_test size: ', len(X_test)) print('y_test size: ', len(y_test)) print() print('still work:') i = 0 while not i == len(y_test): if y_test[i][0] in {0, 1, 8, 9}: X_test = np.delete(X_test, i, 0) y_test = np.delete(y_test, i, 0) i = i - 1 i = i + 1 if (i % (len(y_test) // 100)) == 0: print('{:.0%}'.format(i / len(y_test)), end=' ') if (i % (len(y_test) // 10)) == 0: print() print() print('done.') print() print('actual:') print('X_test size: ', len(X_test)) print('y_test size: ', len(y_test)) # + colab={"base_uri": "https://localhost:8080/"} id="lUeFrLkygV7L" executionInfo={"status": "ok", "timestamp": 1611883698307, "user_tz": -60, "elapsed": 507116, "user": {"displayName": "", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} outputId="cf2df54c-c8ee-4d31-8fc8-731eef5608a7" idx = 0 for y in y_train: if y[0] == 2: y_train[idx][0] = 0 if y[0] == 3: y_train[idx][0] = 1 if y[0] == 4: y_train[idx][0] = 2 if y[0] == 5: y_train[idx][0] = 3 if y[0] == 6: y_train[idx][0] = 4 if y[0] == 7: y_train[idx][0] = 5 idx = idx + 1 idx = 0 for y in y_test: if y[0] == 2: y_test[idx][0] = 0 if y[0] == 3: y_test[idx][0] = 1 if y[0] == 4: y_test[idx][0] = 2 if y[0] == 5: y_test[idx][0] = 3 if y[0] == 6: y_test[idx][0] = 4 if y[0] == 7: y_test[idx][0] = 5 idx = idx + 1 y_train_cat = to_categorical(y_train) y_test_cat = to_categorical(y_test) print(y_train_cat.shape, y_test_cat.shape) # + colab={"base_uri": "https://localhost:8080/"} id="dRt-L-JHutq2" executionInfo={"status": "ok", "timestamp": 1611883704012, "user_tz": -60, "elapsed": 512806, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} outputId="1780efb5-8968-4b5b-e25f-a570b7cb4e7d" model = Sequential() model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(32, 32, 3))) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(filters=128, kernel_size=(3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(1024, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(6, activation='softmax')) model.summary() model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy']) # + colab={"base_uri": "https://localhost:8080/"} id="lbxywzjjt7xw" executionInfo={"status": "ok", "timestamp": 1611883778162, "user_tz": -60, "elapsed": 586943, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} outputId="94e86a58-8f80-45d7-bbed-adc9b001aeb3" model.fit(X_train, y_train_cat, epochs=20, validation_data=(X_test, y_test_cat)) # + colab={"base_uri": "https://localhost:8080/", "height": 356} id="M8vYE3DdVoBW" executionInfo={"status": "ok", "timestamp": 1611883778751, "user_tz": -60, "elapsed": 587518, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} outputId="7f27e2c6-9c7a-46b0-ec64-6408ea5e4870" f, axarr = plt.subplots() f.set_size_inches(16, 5) animals_cifar10_map = { 0: 'bird', 1: 'cat', 2: 'deer', 3: 'dog', 4: 'frog', 5: 'horse' } i = random.randrange(0, len(X_test)) print(animals_cifar10_map[y_test[i][0]]) img = X_test[i] axarr.imshow(img) img_rgb = X_test[i].reshape(1,32,32,3) propability = model.predict(img_rgb) max_prop_idx = np.argmax(propability) predicted_animal = animals_cifar10_map[max_prop_idx] print('prediction of animal: ', predicted_animal) # + [markdown] id="66Y5g6q2swuo" # **Second model** # + colab={"base_uri": "https://localhost:8080/"} id="1_OOUAOws0OD" executionInfo={"status": "ok", "timestamp": 1611883860121, "user_tz": -60, "elapsed": 604, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} 
outputId="0ae8966f-fc15-4a07-ddd3-4e8a0c795448" model = Sequential() model.add(Conv2D(filters=32, kernel_size=(2, 2), activation='relu', input_shape=(32, 32, 3))) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(filters=64, kernel_size=(2, 2), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(filters=128, kernel_size=(2, 2), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(2048, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(1024, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(6, activation='softmax')) model.summary() model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy']) # + colab={"base_uri": "https://localhost:8080/"} id="Hzai5Wypt7GK" executionInfo={"status": "ok", "timestamp": 1611883956424, "user_tz": -60, "elapsed": 80299, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} outputId="da3e3e72-2550-40dd-846c-48d761faf0bc" model.fit(X_train, y_train_cat, epochs=20, validation_data=(X_test, y_test_cat)) # + colab={"base_uri": "https://localhost:8080/", "height": 356} id="CTelacrot71L" executionInfo={"status": "ok", "timestamp": 1611884010309, "user_tz": -60, "elapsed": 716, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gg75vIikNETLJCNaBd2LU4PCIaqbUU-ZYwJTMLkB5k=s64", "userId": "00227173073049198188"}} outputId="6feb4e57-904c-44d3-9d2f-0a16e0bf6d1c" f, axarr = plt.subplots() f.set_size_inches(16, 5) animals_cifar10_map = { 0: 'bird', 1: 'cat', 2: 'deer', 3: 'dog', 4: 'frog', 5: 'horse' } i = random.randrange(0, len(X_test)) print(animals_cifar10_map[y_test[i][0]]) img = X_test[i] axarr.imshow(img) img_rgb = X_test[i].reshape(1,32,32,3) propability = model.predict(img_rgb) max_prop_idx = np.argmax(propability) predicted_animal = animals_cifar10_map[max_prop_idx] print('prediction of animal: ', predicted_animal) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf import matplotlib.pyplot as plt import numpy as np # + def gelu1(x): cdf = 0.5 * (1.0 + tf.tanh( (np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3))))) return x * cdf def gelu2(input_tensor): cdf = 0.5 * (1.0 + tf.math.erf(input_tensor / tf.sqrt(2.0))) return input_tensor * cdf # + x = np.linspace(-5, 5, 100) y1 = gelu1(x) y2 = gelu2(x) plt.figure(figsize=(15, 10)) plt.plot(x, y1, '.') plt.plot(x, y2) plt.title("Gelu activation comparison") plt.legend(["Gelu BioBERT", "Gelu Google BERT"]); # + x = np.linspace(-10, 10, 300) y = (gelu1(x).numpy() - gelu2(x).numpy()) plt.figure(figsize=(15, 10)) plt.plot(x, y, '.-') plt.title("Difference Gelu1 - Gelu2"); # + x = np.linspace(-10, 10, 300) y1 = np.diff(gelu1(x), prepend=[0]) y2 = np.diff(gelu2(x), prepend=[0]) plt.figure(figsize=(15, 10)) plt.plot(x, y1, '.-') plt.plot(x, y2) plt.title("Difference Gelu1 - Gelu2"); # + x = np.linspace(-10, 10, 300) y1 = np.diff(gelu1(x), prepend=[0, 0], n=2) y2 = np.diff(gelu2(x), prepend=[0, 0], n=2) plt.figure(figsize=(15, 10)) plt.plot(x, y1, '.-') plt.plot(x, y2) plt.title("Difference Gelu1 - Gelu2"); # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # 
kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import os import warnings from functools import partial import multiprocessing.pool from six.moves import xrange import tensorflow as tf import math import sys sys.path.insert(0, '../') from utils import * from create_synthetic_dataset import * os.environ["CUDA_VISIBLE_DEVICES"]="0" # %load_ext autoreload # %autoreload 2 # + def placeholder_inputs(): """Generate placeholder variables to represent the input tensors. Returns: images_placeholder: Images placeholder. labels_placeholder: Labels placeholder. """ images_placeholder = tf.placeholder(tf.float32, shape=(None, model.IM_SIZE, model.IM_SIZE, model.IM_SIZE, model.CHANNELS)) labels_placeholder = tf.placeholder(tf.int64, shape=(None)) return images_placeholder, labels_placeholder # - X,y = create_synthetic_dataset() # + ## Import the model definition: import s_lri_model as model #import s_lri_deep2_model as model #import sse_lri_model as model #import bispectrum_lri_model as model #import g_lri_model as model #import g_ri_model as model #import conv_separable_model as model #import g_lri_sep_model as model ## All parameters for training is_shconv = True # To compare with a normal CNN (shconv=False) is_trainable = True # To test freezing the conv weights is_hn=True # If false: polar-separable is_augment=False # Right angle rotation augmentation at training M = 4 # number of orientations (only needed for S-LRI). 1, 4, 24 or 72 nf1 = 2 # number of filters in first layer batch_size = 16 stride = 1 lr = 1e-3 max_steps = 50000 degreeMax = 1 # maxmum degree of the SHs ksize = 7 # - list_test_acc = [] for _ in xrange(10): n_class = 2 tf.reset_default_graph() # Placeholders xph, yph = placeholder_inputs() learning_rate = tf.placeholder(tf.float32, name='learning_rate') # Construct model and optimizer pred = model.build_model(X = xph, batch_size = batch_size, n_class = n_class, nf1 = nf1, ksize = ksize, stride = stride, is_trainable = is_trainable, degreeMax = degreeMax, is_shconv = is_shconv, is_hn = is_hn, M = M ) loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=yph)) # Evaluation criteria correct_pred = tf.equal(tf.argmax(pred, 1), yph) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) # Optimizer optim = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=0.99, beta2=0.9999) grads_and_vars = optim.compute_gradients(loss) train_op = optim.apply_gradients(grads_and_vars) # Configure tensorflow session init_global = tf.global_variables_initializer() init_local = tf.local_variables_initializer() config = tf.ConfigProto() config.gpu_options.allow_growth = True config.log_device_placement = False sess = tf.Session(config=config) sess.run([init_global, init_local]) # shuffle np.random.seed(0) idx = np.arange(0,len(X)) np.random.shuffle(idx) X_shuf = X[idx] y_shuf = y[idx] # Split 4/5 train 1/5 test ntest = int(X_shuf.shape[0]/5) ntrain = int(X_shuf.shape[0]*4/5) X_train, X_test = X_shuf[:ntrain], X_shuf[ntrain:ntrain+ntest] y_train, y_test = y_shuf[:ntrain], y_shuf[ntrain:ntrain+ntest] print('X_train shape:',X_train.shape, 'X_test shape:',X_test.shape, 'y_train shape:',y_train.shape, 'y_test shape:',y_test.shape) print('Starting training loop...') for step in xrange(max_steps): Xb, Yb = next_batch(batch_size, X_train, y_train, is_augment) feed_dict = {xph: Xb, yph: Yb, learning_rate: lr} __, loss_train, acc_train = sess.run([train_op, loss, accuracy], feed_dict=feed_dict) if (step) % 5000 == 0 
and step!=0 or (step + 1) == max_steps: print('Step %d'%(step)) print('Training Data Eval:') print ("accuracy training: " + "{:.5f}".format(acc_train)) print ("loss training: " + "{:.5f}".format(loss_train)) # Training Data Eval: ############ nb_train_steps = int(math.ceil(float(y_train.shape[0])/(batch_size))) acc_train_all=0. for step_t in xrange(nb_train_steps): Xt=X_train[step_t*batch_size:(step_t+1)*batch_size] Yt=y_train[step_t*batch_size:(step_t+1)*batch_size] feed_dict = {xph: Xt, yph: Yt} accuracy_,pred_ = sess.run([accuracy,pred], feed_dict=feed_dict) acc_train_all += accuracy_*Xt.shape[0] acc_train_all=acc_train_all/X_train.shape[0] print ("accuracy training: " + "{:.5f}".format(acc_train_all)) print('Test Data Eval:') ############ nb_test_steps = int(math.ceil(float(y_test.shape[0])/(batch_size))) acc_test=0. for step_t in xrange(nb_test_steps): Xt=X_test[step_t*batch_size:(step_t+1)*batch_size] Yt=y_test[step_t*batch_size:(step_t+1)*batch_size] feed_dict = {xph: Xt, yph: Yt} accuracy_,pred_ = sess.run([accuracy,pred], feed_dict=feed_dict) acc_test += accuracy_*Xt.shape[0] acc_test=acc_test/X_test.shape[0] print ("accuracy test: " + "{:.5f}".format(acc_test)) list_test_acc.append(acc_test) print('list_test_acc: ',list_test_acc, np.mean(list_test_acc)) sess.close() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lab 7 - Inference - Solution % matplotlib inline # This will make all the `matplotlib` images appear in the notebook. # + import numpy as np import time import seaborn as sns import matplotlib.pyplot as plt import scipy.stats as stats sns.set(style="whitegrid") # - # ## Directions # # **Failure to follow the directions will result in a "0"** # # The due dates for each are indicated in the Syllabus and the course calendar. If anything is unclear, please email the official email for the course or ask questions in the Lab discussion area on Blackboard. # # The Labs also present technical material that augments the lectures and "book". You should read through the entire lab at the start of each module. # # ### General Instructions # # 1. You will be submitting your assignment to Blackboard. If there are no accompanying files, you should submit *only* your notebook and it should be named using *only* your JHED id: fsmith79.ipynb for example if your JHED id were "fsmith79". If the assignment requires additional files, you should name the *folder/directory* your JHED id and put all items in that folder/directory, ZIP it up (only ZIP...no other compression), and submit it to Blackboard. # # * do **not** use absolute paths in your notebooks. All resources should appear in the same directory as the rest of your assignments. # * the directory **must** be named your JHED id and **only** your JHED id. # # 2. Data Science is as much about what you write (communicating) as the code you execute (researching). In many places, you will be required to execute code and discuss both the purpose and the result. Additionally, Data Science is about reproducibility and transparency. This includes good communication with your team and possibly with yourself. Therefore, you must show **all** work. # # 3. Avail yourself of the Markdown/Codecell nature of the notebook. If you don't know about Markdown, look it up. Your notebooks should not look like ransom notes. Don't make everything bold. 
Clearly indicate what question you are answering. # # 4. Submit a cleanly executed notebook. The first code cell should say `In [1]` and each successive code cell should increase by 1 throughout the notebook. # ## Bayesian Inference # # Really, there are only a few classical problems in statistical inference. However, we use the Bayes Theorem as the basis for solving all of them: # # $$P(H|D) = \frac{P(D|H)P(H)}{P(D)}$$ # # You only need to identify what $H$ relates to...what is it? Is it some parameter of a distribution? Some property of a model (coefficients, error rate, etc.). For some formulations, we are more specific and specify $H$ as some parameter or parameters, $\theta$: # # $$P(\theta|D) = \frac{P(D|\theta)P(\theta)}{P(D)}$$ # # In the text we saw how we could estimate the posterior distribution using four methods: Grid, Exact, Monte Carlo and Bootstrap. For this Lab, we'll concentrate on the Bootstrap method for the reasons specified in the text. # ## Statistical inference of a proportion in a Bernoulli Trial # # **1\. Suppose we have a coin that shows up heads 60% of the time ($\theta=p=0.6$). Generate 100 samples from this Binomial distribution (either as True/False or 1/0).** # + np.random.seed([1244875]) theta = 0.6 data = [1 if np.random.rand() < theta else 0 for _ in range( 100)] print( data[0:20]) # - # This is the synthetic data. At this point, we pretend that this is data we collected from the real world and we have no idea what $\theta$ really is. # # Understanding that inference is not certain, our goal is to make inferences about this parameter's value using this data we just "collected." Normally, the first thing we do is just calculate the parameter from our data. An *estimate* of some real world parameter is often given a "hat", for example $\theta$ becomes $\hat{\theta}$. Sometimes, it goes from Greek to Latin as in $\sigma$ to $s$ and sometimes it gets an adornment as well as in $\mu$ to $\bar{x}$. # # **2\. Calculate $\hat{theta}$.** theta_est = np.mean( data) print( theta_est) # But we know that this $\hat{\theta}$ is not necessarily the "true" value. We want to know all the values that are supported by the data we collected and the degree to which they are supported...how confident we are in them. This is basically what we get when we calculate a posterior distribution over $\theta$ based on the data. # # And this is where the **(Non-Parameteric Bayesian) Bootstrap** estimate of that posterior distribution comes in. In the text we established *theoretically* how we went from a single data set to an estimate of the posterior distribution of our parameters. Now we're going to do it for reals. Use the data we have to "bootstrap" an estimate of the posterior probability distribution over $\theta$, $P(\theta|D)$ which is "given the data we observed, how much are we to believe in the various values of $\theta$ and how much should we believe in them?" Remember that belief is quantified as probability. # # **3\. Generate the Bootstrap of the posterior distribution of $\hat{\theta}$ and answer the following questions:** # # First, we write a simple function to do our bootstrap sampling for us. It takes the data, a metric function and the number of bootstrap samples as the arguments. A metric function can be anything we like but it will most likely be something like `np.mean`, `np.var`, etc., it is whatever function we use to calculate our parameter/statistics. 
def bootstrap_sample( data, f, n=100): result = [] m = len( data) for _ in range( n): sample = np.random.choice( data, len(data), replace=True) r = f( sample) result.append( r) return np.array( result) # Now we use the function by supplying the data we "collected", our metric function `np.mean`, and indicating that we want 1000 bootstrap samples. This returns the data we can use as our posterior distribution of the proportion. posterior = bootstrap_sample( data, np.mean, 1000) # If we like, we can plot a histogram of this posterior distribution: # + figure = plt.figure(figsize=(10, 6)) # first element is width, second is height. axes = figure.add_subplot(1, 1, 1) axes.hist( posterior, density=True) axes.set_ylabel( "Density") axes.set_xlabel( "$\hat{theta}$") axes.set_title( "Posterior Distribution of $\hat{theta}$") plt.show() plt.close() # - # Note that while the data is discrete and boolean (true/false), the parameter $\theta$ is continuous. You might also notice that our distribution appears to be normally distributed. Based on the Central Limit Theorem, this is what we'd expect. # **4\. What is the 90% Credible Interval (Bayesian Confidence Interval) for $\hat{\theta}$? Interpret it.** # # Although we'll often plot the posterior distribution, the real payoff from having it is to be able to do computations with it. There are a number of functions we can use for that purpose, for example, `mquantiles`. `mquantiles` is normally used to summarize the distributions of data but in this case, our data is estimates of $\theta$. stats.mstats.mquantiles( posterior, [0.05, 0.95]) # An important part of Data Science and assignments in this course is interpreting the results. This is not purely a coding class. Therefore, you should always, *always* interpret your results: # # There is a 90% probability that the value of $\theta$ is between 0.59 and 0.74 based on the data. # # Of course, there's nothing magical about only looking at the 90% confidence/credible interval and you can look at other ranges of interest as well. # **5\. In Bayesian Statistics, we often identify a range of possible values for a parameter that we consider the same. This is known as the ROPE (Region of Practical Equivalence). We know that a fair coin would have $\theta$ of 0.5 but we're unlikely to get an exact value of 0.5 from our data. If the ROPE is 0.48-0.52, what is the probability that our coin's $\theta$ lies in that range and is thus "fair"?** np.mean((0.48 <= posterior) & (posterior <= 0.52)) # One of the downsides to the Bootstrap approach is that we do not follow "Cromwell's Dictum" and we can get events with zero probability. We should just interpret these events as really having very small probabilities. # # Of course, now that we have this posterior distribution we can answer all kinds of (possibly) interesting and relevant questions to our problem. Let's stick with the basics, for now. # ## Exercises # # **Exercise 1.** # # In addition to estimates of the posterior distribution of parameters such as $\theta$, we are often interested in the posterior distribution of the *difference* of two $\theta$s. For example, we might be interested in the proportion of men who smoke versus the proportion of women who smoke. # # We can model $\theta_{men}$ and $\theta_{women}$ just as if they were coin biases and generate synthetic data just as if they were coin flips. Using the Non-Parametric Bootstrap, we can generate posterior distributions for $\hat{\theta}_{men}$ and $\hat{\theta}_{women}$ as well as $d$, the *difference*.
# # These are the steps: # # 1. Generate synthetic data using $\theta_{men}$ = 0.23 and $\theta_{women}$ = 0.34, with 100 observations each. # 2. Generate the bootstrap data for each. # 3. Generate difference data. You can do this by simply subtracting, element by element, one bootstrap sample from the other, $\theta_{men}$ - $\theta_{women}$. # 4. Plot the distributions of all three. # 5. Calculate the 90% Bayesian Confidence Interval of all three **and interpret them**. # 6. Determine a ROPE for the difference and tell me what's the probability that the "true" value of the difference falls in the ROPE. # # Use as many Markdown Cells and Code Cells as you need; it should look nice (not like a ransom note). # **Step 1. Generate the synthetic data.** # # Note that in this case I just generate *exact* data with the values we want. You can't always do this but for most problems involving boolean values (live/die, for/against, etc.), we can generate the data explicitly. # # This just shows you another way to do it. # + np.random.seed([9235274]) men_theta = 0.23 men_data = [1] * 23 + [0] * 77 np.random.shuffle( men_data) print( "men's data: ", men_data[0:20]) women_theta = 0.34 women_data = [1] * 34 + [0] * 66 np.random.shuffle( women_data) print( "women's data: ", women_data[0:20]) # - # This isn't really a step in inference at all. This is just to generate data that we would either already have or, if we didn't have access to the raw data (for example, from a journal article or a newspaper story), data that matches the characteristics of the data in the article or story. # # The first thing we should do is see what the main statistics of this data are. This is where the *meaningful* part comes from. Based on how we derived it, it should be exactly what we want: mens_mean = np.mean( men_data) print( "men's mean=", mens_mean) womens_mean = np.mean( women_data) print( "women's mean=", womens_mean) print( "difference=", mens_mean - womens_mean) # The men's mean is 11 percentage points below the women's mean, as we saw at the start. # **Step 2\. Generate the bootstrap data for each** # # This begins the actual inference step. The Bootstrap samples are an estimation of the posterior distribution, usually arrived at using Bayes Rule: men_posterior = bootstrap_sample( men_data, np.mean, 1000) women_posterior = bootstrap_sample( women_data, np.mean, 1000) # **Step 3. Calculate the difference.** # # Each element of each posterior is just a different possible theta. If we match the possible thetas by index and subtract, we get a possible difference. As a result, we have generated a bootstrap sample of the *differences*. difference = men_posterior - women_posterior # **Step 4. Plot the posterior distributions** # # The `hist` plotting function will take care of turning the raw observations of $\theta$s and the difference into an actual probability distribution (density). # # We can actually plot these side by side by adjusting the number of plots in the `add_subplot` command: # + figure = plt.figure(figsize=(20, 6)) # first element is width, second is height.
axes = figure.add_subplot(1, 3, 1) axes.hist( men_posterior, density=True) axes.set_ylabel( "Density") axes.set_xlabel( "$\hat{theta}$") axes.set_title( "Posterior Distribution of Men's $\hat{theta}$") axes = figure.add_subplot(1, 3, 2) axes.hist( women_posterior, density=True) axes.set_ylabel( "Density") axes.set_xlabel( "$\hat{theta}$") axes.set_title( "Posterior Distribution of Women's $\hat{theta}$") axes = figure.add_subplot(1, 3, 3) axes.hist( difference, density=True) axes.set_ylabel( "Density") axes.set_xlabel( "difference in $\hat{theta}$") axes.set_title( "Posterior Distribution of Difference in $\hat{theta}$s") plt.show() plt.close() # - # **Step 5** print( "90% BCI for men's theta:", stats.mstats.mquantiles( men_posterior, [0.05, 0.95])) print( "90% BCI for women's theta:", stats.mstats.mquantiles( women_posterior, [0.05, 0.95])) print( "90% BCI for difference:", stats.mstats.mquantiles( difference, [0.05, 0.95])) # The BCI, or Bayesian Confidence Interval or Credible Interval (or, when it's obvious from context, just Confidence Interval), is the range of theta values (means and the difference, in this case) that contains 90% of the probability density/mass. # # Based on the data (and the priors), there is a 90% probability that the men's percentage is between 16 and 30%. There is a 90% probability that the women's percentage is between 26 and 42%. Finally, there is a 90% chance that the difference between the two percentages/rates is -21 to 0 percentage *points*. # **Step 6** # # Again, what difference is and is not significant will generally be problem specific. Thinking about this problem, if someone reported something in the newspaper and you saw "20% of men and 21% of women", you'd probably be inclined to think that those aren't very different. Suppose you thought non-smoking campaigns might need to be gender based. Would that difference encourage you? What about 5 percentage points? Given the cost of different campaigns, I would be inclined to think that a difference of 10 percentage points at least would be necessary. # # So let's define a ROPE of about 9-11 percentage points for the difference in absolute terms. Since we know that the men's rate appears to be lower, we're trying to see the probability that the difference is between (-11, -9) percentage points: np.mean((-0.11 <= difference) & (difference <= -0.09)) # So there's only a 14.2% probability that the men's rate is really as much as 9 to 11 percentage points lower, given the data and priors. This might be our initial report. We can also report on things like, what is the probability that the difference is less than 0? np.mean(difference<0) # So the probability that the difference is less than zero (that men's rates are lower than women's rates) is high (94.8%); it's just not clear the rate is sufficiently lower to warrant a separate anti-smoking campaign. # Some other things to think about might include: what would it take for our ROPE to have a 50% probability? Would we simply need more data (increase the observations to 1000 and see what happens)? # # **Notes** # # The basic idea of using bootstrap sampling to estimate a posterior distribution will stay with us throughout the entire semester. This will be our fundamental approach to statistical inference (there are other approaches and there are other *Bayesian* approaches). The important thing is to understand 1. why and 2. the dimensions along which the problems can vary, such as: # # 1. The nature of data. The data may take on a variety of different types.
# We've looked primarily at boolean or Bernoulli data. However, the data might be categorical (more than two discrete outcomes), counts, real values, etc. This means that there may be more than one $\theta$. For example, the Normal distribution has two $\theta$s: the mean, $\mu$, and the variance, $\sigma^2$. We don't often see inferential claims about the variance, but that doesn't mean we couldn't use the Bootstrap to test them. But you should think even more broadly than this. A linear regression has many $\theta$s: the coefficients, the coefficient of determination, the error of the regression, etc. A decision tree has a structure and an error rate.
# 2. A related concept is variability. We may have two true values, 0.23 and 0.24, but the variability of the data may not permit us to distinguish between them.
# 3. Another dimension is the amount of data. We may not be able to get a "good" inference because we have not collected enough data.
#
# And, of course, all of these will and do interact. And a lot of experimental design is based on trying to limit variability (by "holding other things constant") and to get the "right" amount of data to support the inference we want to make.
#
# These exercises investigate some of these dimensions.

# **Exercise 2**
#
# **1\. Repeat the guide example (coin flips) with $\theta = 0.05$ and discuss. Were the credible intervals the same size? Was your estimate of $\theta$ as good? What does this say about statistical inference on relatively rare events or extreme values?**

np.random.seed([87928356])

# **Step 1. Generate the data using $\theta=0.05$:**
#
# Note that I switch back to random generation, as in the initial example. I could have generated exact data here.

theta = 0.05
data = [1 if np.random.rand() < theta else 0 for _ in range( 100)] # we could use the fixed amount as well
np.random.shuffle( data)
print( data[0:20])

# **Step 2. Calculate the estimate of theta.**

theta_est = np.mean( data)
print( theta_est)

# **Step 3. Calculate the bootstrap sampling of $\theta$.**

posterior = bootstrap_sample( data, np.mean, 1000)

# **Step 4. Plot the posterior distribution of $\theta$.**

# +
figure = plt.figure(figsize=(10, 6)) # first element is width, second is height.

axes = figure.add_subplot(1, 1, 1)
axes.hist( posterior, density=True)
axes.set_ylabel( "Density")
axes.set_xlabel( "$\hat{theta}$")
axes.set_title( "Posterior Distribution of $\hat{theta}$")

plt.show()
# -

# **Step 5. Calculate the 90% Bayesian Confidence Interval and interpret it.**

stats.mstats.mquantiles( posterior, [0.05, 0.95])

# Based on the data, there is a 90% probability that the true proportion is between 4 and 12%.
#
# **Step 6. Using the ROPE of 48 to 52% for a "fair" coin, determine if this coin is fair.**

np.mean((0.48 <= posterior) & (posterior <= 0.52))

# Using the Bootstrap method, we can't really conclude that the probability is *truly* zero, but it does seem that it is highly unlikely that the coin is fair...

# **Discuss**
#
# 1. Both coins were determined not to be fair, although certainly one was less fair than the other!
# 2. Looking at the BCI (Bayesian Confidence Interval), we can see that for $\theta$ = 0.67, the 90% BCI was 0.59-0.74, while for $\theta=0.05$, the 90% BCI was 0.04-0.12. The spread is a lot smaller for the second (8 percentage points) than for the original (15 percentage points).
# 3. This points out a general problem with inference involving proportions. The more extreme the value, the more certain we are about it. But this doesn't particularly make sense.
This is an artifact of the floor/ceiling effect (that $p$ cannot be lower than 0 or higher than 1). In fact, in Frequentist statistics, it is generally a rule of thumb that you should not make inferences about $p$ that are in the ranges (0, 0.05) and (0.95, 1.00). # # Can you think of an experiment that would test your intuitions about inferences for a rare event? # ** Statistical Inference for a single real valued $\theta$** # # **Exercise 3** # # We can do the same thing for a real valued data (like weights, heights, etc.) and the $\theta$'s that describe such distributions. If we have a normal distribution, there are two such $\theta$s, $\mu$, the mean, and $\sigma$, the standard deviation. Remember, however, that we often think of the dispersion of our data as a percent of the mean or the *coefficient of variation*, v. # **a\. Generate 50 observations from a normal distribution with $\mu=102.7$ and $v=5\%$.** # # You should refer to the previous Lab for generating synthetic data from the normal distribution and working with $v$, the coefficient of variation. np.random.seed([2386431651]) # + def to_std( mu, v): return mu * v mu = 102.7 v = 0.05 s = to_std( mu, v) xs = np.random.normal( mu, s, 50) print( xs[0:20]) # - # **b. What is $\bar{x}$?** x_bar = np.mean( xs) print( "sample mean =", x_bar) # **c. Generate the Bootstrap estimate of the posterior distribution of $\bar{x}$.** posterior = bootstrap_sample( xs, np.mean, 1000) # We can chart the posterior as before. As we will learn in the Visualization module, chart everything. # + figure = plt.figure(figsize=(10, 6)) # first element is width, second is height. axes = figure.add_subplot(1, 1, 1) axes.hist( posterior, density=True) axes.set_ylabel( "Density") axes.set_xlabel( "x-bar") axes.set_title( "Posterior Distribution of x-bar") plt.show() # - # **d. What is the 90% Credible Interval (Bayesian Confidence Interval) for $\hat{\theta}$?** stats.mstats.mquantiles( posterior, [0.05, 0.95]) # There is a 90% probability, given the prior and data, that the value of the mean lies between 101.7 and 104.1. # **e. Define a ROPE of "about 100". What is the probability that $\bar{x}$ falls within the ROPE?.** # # In general, defining a ROPE will be problem dependent, but for now I'd say anything in the range 98 to 102 would be "about 100": np.mean((98.0 <= posterior) & (posterior <= 102)) # There's about an 11% chance the value is "about" 100, *given the data*. # **Exercise 4\. Repeat Steps 1-5 with $v=25\%$.** np.random.seed([484716248]) # **a. Generate synthetic data** mu = 102.7 v = 0.25 s = to_std( mu, v) xs = np.random.normal( mu, s, 50) print( xs[0:20]) # **b. What is $\bar{x}$?** x_bar = np.mean( xs) print( "sample mean =", x_bar) # **c. Generate the Bootstrap estimate of the posterior distribution of $\bar{x}$.** posterior = bootstrap_sample( xs, np.mean, 1000) # + figure = plt.figure(figsize=(10, 6)) # first element is width, second is height. axes = figure.add_subplot(1, 1, 1) axes.hist( posterior, density=True) axes.set_ylabel( "Density") axes.set_xlabel( "x-bar") axes.set_title( "Posterior Distribution of x-bar") plt.show() # - # This distribution is a lot more spread out than before and the center is not over the actual mean as it was; clearly the larger variance is having an effect on the data we "sampled". # # **d. 
What is the 90% Credible Interval (Bayesian Confidence Interval) for $\hat{\theta}$?**

stats.mstats.mquantiles( posterior, [0.05, 0.95])

# There is a 90% probability, based on the data and our prior, that the mean lies between 100.9 and 112.1. Previously, it was 101.7 to 104.1. The increased dispersion of the data has made it more difficult to "zero in" on the mean.

# **e. Define a ROPE of "about 100". What is the probability that $\bar{x}$ falls within the ROPE?**
#
# We can use the same ROPE since our idea of "about 100" hasn't changed: anything in the range 98 to 102 would be "about 100":

np.mean((98.0 <= posterior) & (posterior <= 102))

# The probability that the mean is in the ROPE (98 to 102) has gone from 10.5% to 8.5%, given the data.

# **Exercise 5\. Repeat Steps 1-5 with $v=25\%$ and 500 samples.**

np.random.seed([484716248])

# The key difference here is that we keep the larger coefficient of variation from the previous exercise but increase the sample size from 50 to 500 observations.
#
# **a. Generate synthetic data**

mu = 102.7
v = 0.25
s = to_std( mu, v)
xs = np.random.normal( mu, s, 500) # increase sample size
print( xs[0:20])

# Remember, this step is not actually part of inference; it stands in for data that would actually be collected. In a few circumstances, you might need to generate data if you don't have access to the raw data.

# **b. What is $\bar{x}$?**

x_bar = np.mean( xs)
print( "sample mean =", x_bar)

# **c. Generate the Bootstrap estimate of the posterior distribution of $\bar{x}$.**

posterior = bootstrap_sample( xs, np.mean, 1000)

# +
figure = plt.figure(figsize=(10, 6)) # first element is width, second is height.

axes = figure.add_subplot(1, 1, 1)
axes.hist( posterior, density=True)
axes.set_ylabel( "Density")
axes.set_xlabel( "x-bar")
axes.set_title( "Posterior Distribution of x-bar")

plt.show()
# -

# Even with 10x more data, we don't really see a bootstrap distribution of the mean that centers over the true mean. An interesting exercise would be: how large a sample *do* you need?

# **d. What is the 90% Credible Interval (Bayesian Confidence Interval) for $\hat{\theta}$?**

stats.mstats.mquantiles( posterior, [0.05, 0.95])

# Our original 90% BCI was 101.7 to 104.1 (for a "true" mean of 102.7), or 104.1 - 101.7 = 2.4 points. After the increase in dispersion, we saw the 90% BCI go to 100.9 to 112.1, or 11.2 points. In this experiment we see that there is a 90% probability, given the data and priors, that the "true" mean lies in the range 102.3 to 106.1, or 3.8 points.
#
# So by increasing the sample size, we have become more confident in our range. It's worth noting, however, that the true mean is almost out of that range. As mentioned above, it would be interesting to see how large a sample we would need to make everything work out "perfectly" (a small 90% confidence interval that contains the "true" value about dead center).
#
# **e. Define a ROPE of "about 100". What is the probability that $\bar{x}$ falls within the ROPE?**
#
# We can use the same ROPE since our idea of "about 100" hasn't changed: anything in the range 98 to 102 would be "about 100". Again, where did "about 100" come from? It might be a stakeholder's claim about the number or a manufacturer's claim, or you might have arrived at the value analytically and you're trying to do an experiment to verify it.

np.mean((98.0 <= posterior) & (posterior <= 102))

# We've gone from 10.5% to 8.5% to 2.7% probability that the mean is "really" 100.
This is an interesting way to see how given some hypothesis, "about 100", how the different dispersion of the data and sample size might affect your ability to determine the probability of the data supporting that hypothesis. # ** Statistical Inference for a two real valued $\theta$s** # # **Exercise 6. Following the text, apply the Bootstrap to make inferences about the difference between $\mu_1$ and $\mu_2$** # # 1. Data set 1 has $\mu_1=102.7$ and $v_1=10\%$ and 100 observations. # 2. Data set 2 has $\mu_2=104.2$ and $v_2=5\%$ and 100 observations. # # Pay special consideration to formulating your ROPE for the *difference* between the two parameters and making inferences about it. np.random.seed([67366372]) # **a. Generate synthetic data for using the parameters above.** # # Since these are normally distributed variables, we can use the same code as above but we'll prepend `x1_` and `x2_` to each: # + x1_mu = 102.7 x1_v = 0.10 x1_s = to_std( x1_mu, x1_v) x1_xs = np.random.normal( x1_mu, x1_s, 100) print( x1_xs[0:20]) x2_mu = 104.2 x2_v = 0.05 x2_s = to_std( x2_mu, x2_v) x2_xs = np.random.normal( x2_mu, x2_s, 100) print( x2_xs[0:20]) # - # **b. Generate the bootstrap data for each.** x1_posterior = bootstrap_sample( x1_xs, np.mean, 1000) x2_posterior = bootstrap_sample( x2_xs, np.mean, 1000) # **b'. Generate difference data. You can do this by simply subtracting, element by element, one bootstrap sample from the other.** # # This is just like before. Each element of x{i}\_posterior is an independent estimate of the corresponding $\bar{x_i}$ and we can create a corresponding estimate of the difference: difference = x1_posterior - x2_posterior # + figure = plt.figure(figsize=(20, 6)) # first element is width, second is height. axes = figure.add_subplot(1, 3, 1) axes.hist( x1_posterior, density=True) axes.set_ylabel( "Density") axes.set_xlabel( "$\hat{theta}$") axes.set_title( "Posterior Distribution of x1 $\hat{theta}$") axes = figure.add_subplot(1, 3, 2) axes.hist( x2_posterior, density=True) axes.set_ylabel( "Density") axes.set_xlabel( "$\hat{theta}$") axes.set_title( "Posterior Distribution of x2 $\hat{theta}$") axes = figure.add_subplot(1, 3, 3) axes.hist( difference, density=True) axes.set_ylabel( "Density") axes.set_xlabel( "difference in $\hat{theta}$") axes.set_title( "Posterior Distribution of Difference in $\hat{theta}$s") plt.show() plt.close() # - # **c. Calculate the 90% Bayesian Confidence Interval of all three **and interpret them**. print( "x1 90% BIC:", stats.mstats.mquantiles( x1_posterior, [0.05, 0.95])) print( "x2 90% BIC:", stats.mstats.mquantiles( x2_posterior, [0.05, 0.95])) print( "difference 90% BIC", stats.mstats.mquantiles( difference, [0.05, 0.95])) # There is a 90% probability, given the data and priors, that the mean value of $x_1$ lies between 101.0 and 104.1. Similarly, there is a 90% probability that the mean value of $x_2$ lies between 104.2 and 106.2. There is a 90% probability that the difference in means between the two data sets lies between -4.5 and -0.9, again, given the data and the priors. # # Since we know the real values were 102.7 and 104.2, respectively, we can see how uncertain statistical inference actually is. While all the values are in their respective 90% BIC, the differences are a bit exaggerated. This might be because of the higher variability on $x_1$, the number of data points and the number of bootstrap samples. # **e. 
Determine a ROPE for the difference**
#
# The typical ROPE for a difference is centered at 0 (unless we expect something else) and includes an interval of values that we would consider "about zero". For this problem, let's take about 1% (roughly one unit) on either side as that range, so -1 to 1 is our ROPE:

np.mean((-1.0 <= difference) & (difference <= 1.0))

# So based on this ROPE, there is only a 6.7% probability, given the data and priors, that the difference between the means of the two data sets is about zero.

# ## Discussion
#
# 1\. Discuss the similarities and differences in your results for Exercises 3-5. What do you think caused them, given that they all have the same mean?

# This is discussed along the way, but the chief differences are probably due to:
#
# 1. Sampling variation - this is not an artifact of the synthetic data. In the real world, you will experience sampling variation.
# 2. Variance - although the means are the same, the variances are different. More dispersed data will have more sampling variation, and this impedes our ability to make inferences from it.
# 3. Sample sizes - given the dispersion and its effect on sampling variation, larger sample sizes are the antidote. However, it's not always clear how big a sample size you need. We'll talk about this more in Experimental Design.

# 2\. Why are we interested in estimating the posterior distribution?

# The posterior distribution is the estimate of the probability of all our hypotheses, given the data. We basically want to know, given the data, how believable each hypothesis is. As Sherlock Holmes would have it, "Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth." The quote is interesting because $P(D|H=h_1)$ may actually *be* very improbable, but if it is larger than $P(D|H=h_i)$ for every other $h_i$, then $h_1$ is still the most probable hypothesis (and because of normalization will actually end up with a high probability).
#
# In the case of categorical or discrete outcomes, our hypotheses are simple: either Elvis had an identical twin or he did not. We can just use Bayes Rule with the data. The posterior distribution is easy to calculate. However, in the case of numeric hypotheses about parameter values and descriptive statistics we wish to use as models, there are an infinite number of hypotheses (values for the parameter). Because of this, we have to deal with ranges, either the ROPE or Credible/Confidence Intervals or others of our own design and interest.
#
# We can only get these if we calculate the posterior distribution. Still, the posterior distribution of the mean is not different in essence from the posterior distribution of the possibilities, identical twin or not identical twin. It's just because numeric hypotheses involve continuous values that we need to work with ranges.

# 3\. In the previous Lab, we talked about how Systems Theory related to the variability of a variable. How then is "keeping other things the same" in experimental design or comparison related both to inference and Systems Theory?

# Previously we noted that a measurement of a variable has dispersion because there are all these other factors affecting it that we don't model directly, and thus we end up with uncertainty. Basically, we measure the outcome of the coin but not all the factors that went into the result; we measure someone's height but not all the factors that went into their height, etc.
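# As a small illustration of this point (not part of the original Lab steps), here is a sketch in which a "measured" value is just a base value plus many small factors that we never model individually; the dispersion we observe is the footprint of those unmodeled factors. The constants below are made up for illustration only.

# +
# Illustrative sketch only: one measurement = base value + many small, unmodeled
# influences. The number of factors and their sizes are arbitrary choices.
np.random.seed([1838572])
n_measurements, n_factors = 1000, 50
unmodeled = np.random.uniform(-0.5, 0.5, size=(n_measurements, n_factors))
measurements = 170.0 + unmodeled.sum(axis=1)   # e.g., a "height" in cm
print("mean =", np.mean(measurements), "std =", np.std(measurements))
# -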
#
# Now, especially in the context of comparisons (differences in $\theta$) between two groups, we want to make sure that the only difference between the two groups is the intervention or characteristic we're isolating. If we sampled only young women and only old men and presented it as the difference in voting patterns of women and men, then we would probably end up with incorrect inferences. This is because we didn't keep all the factors affecting voting patterns the same, except for the one of interest. This is an extreme case. We can still be in trouble if:
#
# 1. Our samples are biased (they don't reflect the population at large): young women are overrepresented in our sample and young men are underrepresented; old women are underrepresented and old men are overrepresented.
# 2. They reflect the population at large but the populations are biased. If there are more young women than young men and more old men than older women, then what we think we are measuring by looking at men v. women might really be old v. young.
#
# In those situations where we want to test the efficacy of an intervention (a new drug or marketing campaign), this is the idea behind randomized experiments. You randomly assign people (or anything, really) to the two (or more) groups and then draw conclusions based on the thing (or things) in your control.

# ## On Your Own
#
# We have only scratched the surface here. What you really want to understand is how variability and the amount of data you have interact, especially when looking at *differences* in proportions and means.
#
# Based on the experiments above, two things tend to happen. First, the bounds of the Credible/Confidence Interval can change. They can get bigger or smaller, and they can contain the "true" value or not, with lesser or greater probability.
#
# Second, the probability assigned to the ROPE changes, that is, the probability that the value of interest falls in the ROPE.
#
# What you want to see, under controlled circumstances, is how the sample size and dispersion of the data interact to affect your conclusions.
#
# To do this, you could take the examples above and:
#
# 1. decrease $v$.
# 2. increase $v$.
# 3. decrease observations.
# 4. increase observations.
# 5. change the difference in the real $\theta$s, both for normal ($\mu$) and Bernoulli distributions, keeping the other factors fixed to see what differences are and are not detectable with those factors (variability and data).
# 6. change the ROPE...for example, suppose we *did* believe the mean was "around 102". How would these experiments affect your conclusions?
# 7. do the same experiment over with a different random seed!
#
# You can write a helper function that does all of these steps at once to more quickly see what's going on. Additionally, make hypotheses ahead of time about what you think will happen.

# *your work here*

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# # Data Science Fundamentals in Python 3
#
# ## Introduction
#
# The purpose of this example is to practice different components of a data science process. We will use the classic 1974 *Motor Trend* car road tests (`mtcars`) dataset.
#
# We are going to practice:
# 1. How a machine learns using Stochastic Gradient Descent (SGD).
# 2. How to load data into a Pandas dataframe and explore it.
# 3. How to train different models using the Scikit-learn library and compare their performances:
# - A linear model using all variables
# - A Gradient Boosting Machine (GBM) model

# ## Section 1. Practice on Machine Learning using Stochastic Gradient Descent (SGD)
#
# ### 1.1 Generate the data

import numpy as np
import pandas as pd

MU, SIGMA = 6, 2
SIGMA_NOISE = 0.05
NUM_OBS = 100

x = np.random.normal(MU, SIGMA, NUM_OBS)
noise = np.random.normal(0, SIGMA_NOISE, NUM_OBS)

A, B = 3.5, 8.5
y = A + B * x + noise

# ### 1.2 Split the Data into Training and Testing

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=123)

# ### 1.3 Start Training the Model
#
# Here is the formula used to derive the slopes for coefficients $a$ and $b$. With the prediction $\hat{y}_i = a + b x_i$ and the squared error for a single observation $E_i = (y_i - \hat{y}_i)^2$, the slopes (gradients) are
#
# $$\frac{\partial E_i}{\partial a} = -2\,(y_i - \hat{y}_i), \qquad \frac{\partial E_i}{\partial b} = -2\,x_i\,(y_i - \hat{y}_i),$$
#
# and each SGD step moves a coefficient against its slope: $a \leftarrow a - \eta\,\frac{\partial E_i}{\partial a}$ and $b \leftarrow b - \eta\,\frac{\partial E_i}{\partial b}$, where $\eta$ is the learning rate.

from random import shuffle

NUM_EPOCHS = 40
LEARNING_RATE = 0.01
NUM_TRAINING_OBS = len(X_train)

a_hat, b_hat = np.random.normal(0, 1, 2)
print(a_hat, b_hat)

sse_progress = [0] * NUM_EPOCHS
a_progress = [0] * NUM_EPOCHS
b_progress = [0] * NUM_EPOCHS
train_index = list(range(NUM_TRAINING_OBS))

for k in range(NUM_EPOCHS):
    shuffle(train_index)
    SSE = 0
    for i in train_index:
        y_hat = a_hat + b_hat * X_train[i]
        delta = y_train[i] - y_hat
        SSE += delta**2
        slope_a = 2 * delta * (-1)
        slope_b = 2 * delta * (-X_train[i])
        a_hat = a_hat - slope_a * LEARNING_RATE
        b_hat = b_hat - slope_b * LEARNING_RATE
    sse_progress[k] = SSE
    a_progress[k] = a_hat
    b_progress[k] = b_hat
    print("Epoch = {0}, SSE={1}".format(k, round(SSE,4)))

print("In the end, the learned coefficients are {0} and {1}.".format(round(a_hat, 4), round(b_hat, 4)))

# ### 1.4 Plot the Training Progress

# +
# %matplotlib inline
import matplotlib.pyplot as plt

plt.figure(1)

plt.subplot(3,1,1)
plt.plot(range(1, NUM_EPOCHS+1), a_progress)
plt.xlabel("Training Epochs")
plt.ylabel("Coefficient a")

plt.subplot(3,1,2)
plt.plot(range(1, NUM_EPOCHS+1), b_progress)
plt.xlabel("Training Epochs")
plt.ylabel("Coefficient b")

plt.subplot(3,1,3)
plt.plot(range(2, NUM_EPOCHS+1), sse_progress[1:])
plt.xlabel("Training Epochs")
plt.ylabel("SSE")
# -

# ## Section 2. Train Models Using Scikit-Learn Library

# ### 2.1 Prepare Data
#
# We'll start by loading the `mtcars` sample dataset and displaying its description:

# !pip install pydataset --disable-pip-version-check -q # install a Python package containing the dataset

import pydataset
from pydataset import data

df = data('mtcars')
df.head()

# We can also quickly examine the distribution of values and the first few rows of the dataset:

# ### 2.2 Explore the Data in More Detail

df.describe()

# ### 2.3 Get a More Detailed Report of the Data Using Pandas Profiling

# !pip install pandas-profiling
import pandas_profiling

# Drop the row index of the data to avoid special characters in the row index
df1 = df.reset_index(drop=True)
pandas_profiling.ProfileReport(df1)

# ### 2.4 Split the Data into Training and Testing
#
# The goal for the machine learning models in this tutorial will be to predict each car's gas mileage (`mpg`) from the car's other features.
#
# We will split the records into training and test datasets: each model will be fitted using the training data, and evaluated using the withheld test data.
# + # split the dataset into features available for prediction (X) and value to predict (y) y = df['mpg'].values X = df.drop('mpg', 1).values feature_names = df.drop('mpg', 1).columns # save 30% of the records for the test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123) X_train.shape # - # As you can see from the description above, the number of predictive features available in this dataset (10) is comparable to the number of records (22). Such conditions tend to produce overfitted models that give exceptional predictions on their own training data, but poor predictions on the withheld test data. We will see an example of an overfitted model below. # ### 2.5 Fit Models # #### 2.5.1 Linear Regression Model # The following lines of code fit a linear model (without regularization) using all of the original features: # + from sklearn.linear_model import LinearRegression lm = LinearRegression() lm.fit(X_train, y_train) # - # Below, we print the R-squared value for the true vs. predicted `mpg` values in the *training* set. We also show the fitted coefficients for different features. # + import pandas as pd from sklearn.metrics import r2_score # print R^2 for the training set print('The R-squared value for the training set is: {:0.4f}'.format(r2_score(y_train, lm.predict(X_train)))) # - # Notice that the model performs very well on the training data to which it was fitted. (Predictions of the model account for 89% of the variance in `mpg` values.) Some of the feature coefficients may reflect our intuition: for example, heavy cars tend to have worse gas mileage ($\beta_{\textrm{wt}} = -5.0$), and cars with manual transmissions tend to have better gas mileage ($\beta_{\textrm{am}} = 5.2$). # # Now, let's check the model's performance on the test dataset: # + import numpy as np predicted = lm.predict(X_test) r_squared = r2_score(y_test, predicted) mae = np.mean(abs(predicted - y_test)) rmse = np.sqrt(np.mean((predicted - y_test)**2)) rae = np.mean(abs(predicted - y_test)) / np.mean(abs(y_test - np.mean(y_test))) rse = np.mean((predicted - y_test)**2) / np.mean((y_test - np.mean(y_test))**2) # Create a data frame for storing results from each model summary_df = pd.DataFrame(index = ['R-squared', 'Mean Absolute Error', 'Root Mean Squared Error', 'Relative Absolute Error', 'Relative Squared Error']) summary_df['Linear Regression, all variables'] = [r_squared, mae, rmse, rae, rse] summary_df # - # Notice that the R-squared value for true vs. predicted `mpg` of the test set is much lower than it was for the training set. (Granted, our test set is not very large, so some fluctuation is expected.) This is indicative of model overfitting. 
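# Before moving on to the gradient boosting model, you may want to see the fitted coefficients that the discussion above refers to (for example $\beta_{\textrm{wt}}$ and $\beta_{\textrm{am}}$). The original cell only printed R-squared, so here is a minimal sketch; it only assumes the `lm`, `feature_names`, and `pd` objects already defined in this notebook.

# +
# Sketch: pair each fitted coefficient with its feature name for inspection.
coefficients = pd.DataFrame({'feature': feature_names, 'coefficient': lm.coef_})
print(coefficients.sort_values('coefficient'))
print('intercept: {:0.3f}'.format(lm.intercept_))
# -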
# ### Gradient Boosting Machine Regression Model # + from sklearn.ensemble import GradientBoostingRegressor from time import time params = {'alpha':0.9, 'criterion': 'friedman_mse', 'learning_rate': 0.01, 'loss': 'ls', 'max_depth': 2, 'min_samples_leaf': 1, 'min_samples_split': 2, 'n_estimators': 200, 'random_state': 123, 'subsample': 1, 'verbose': 0} params['random_state'] = 123 params['loss'] = 'ls' gbm = GradientBoostingRegressor(**params) gbm.fit(X_train, y_train) # - # Now we can check the model's performance on the test data: # + predicted = gbm.predict(X_test) r_squared = r2_score(y_test, predicted) mae = np.mean(abs(predicted - y_test)) rmse = np.sqrt(np.mean((predicted - y_test)**2)) rae = np.mean(abs(predicted - y_test)) / np.mean(abs(y_test - np.mean(y_test))) rse = np.mean((predicted - y_test)**2) / np.mean((y_test - np.mean(y_test))**2) summary_df['Gradient Boosted Machine Regression'] = [r_squared, mae, rmse, rae, rse] summary_df # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction # # Often you'll have hundreds or thousands of features after various encodings and feature generation. This can lead to two problems. First, the more features you have, the more likely you are to overfit to the training and validation sets. This will cause your model to perform worse at generalizing to new data. # # Secondly, the more features you have, the longer it will take to train your model and optimize hyperparameters. Also, when building user-facing products, you'll want to make inference as fast as possible. Using fewer features can speed up inference at the cost of predictive performance. # # To help with these issues, you'll want to use feature selection techniques to keep the most informative features for your model. # # We'll show that in this lesson. To start, here is the code you've seen so far. 
# + # %matplotlib inline import itertools import matplotlib.pyplot as plt import numpy as np import pandas as pd import lightgbm as lgb from sklearn.preprocessing import LabelEncoder from sklearn import metrics ks = pd.read_csv('../input/kickstarter-projects/ks-projects-201801.csv', parse_dates=['deadline', 'launched']) # Drop live projects ks = ks.query('state != "live"') # Add outcome column, "successful" == 1, others are 0 ks = ks.assign(outcome=(ks['state'] == 'successful').astype(int)) # Timestamp features ks = ks.assign(hour=ks.launched.dt.hour, day=ks.launched.dt.day, month=ks.launched.dt.month, year=ks.launched.dt.year) # Label encoding cat_features = ['category', 'currency', 'country'] encoder = LabelEncoder() encoded = ks[cat_features].apply(encoder.fit_transform) data_cols = ['goal', 'hour', 'day', 'month', 'year', 'outcome'] baseline_data = ks[data_cols].join(encoded) cat_features = ['category', 'currency', 'country'] interactions = pd.DataFrame(index=ks.index) for col1, col2 in itertools.combinations(cat_features, 2): new_col_name = '_'.join([col1, col2]) # Convert to strings and combine new_values = ks[col1].map(str) + "_" + ks[col2].map(str) label_enc = LabelEncoder() interactions[new_col_name] = label_enc.fit_transform(new_values) baseline_data = baseline_data.join(interactions) launched = pd.Series(ks.index, index=ks.launched, name="count_7_days").sort_index() count_7_days = launched.rolling('7d').count() - 1 count_7_days.index = launched.values count_7_days = count_7_days.reindex(ks.index) baseline_data = baseline_data.join(count_7_days) def time_since_last_project(series): # Return the time in hours return series.diff().dt.total_seconds() / 3600. df = ks[['category', 'launched']].sort_values('launched') timedeltas = df.groupby('category').transform(time_since_last_project) timedeltas = timedeltas.fillna(timedeltas.max()) baseline_data = baseline_data.join(timedeltas.rename({'launched': 'time_since_last_project'}, axis=1)) def get_data_splits(dataframe, valid_fraction=0.1): valid_fraction = 0.1 valid_size = int(len(dataframe) * valid_fraction) train = dataframe[:-valid_size * 2] # valid size == test size, last two sections of the data valid = dataframe[-valid_size * 2:-valid_size] test = dataframe[-valid_size:] return train, valid, test def train_model(train, valid): feature_cols = train.columns.drop('outcome') dtrain = lgb.Dataset(train[feature_cols], label=train['outcome']) dvalid = lgb.Dataset(valid[feature_cols], label=valid['outcome']) param = {'num_leaves': 64, 'objective': 'binary', 'metric': 'auc', 'seed': 7} print("Training model!") bst = lgb.train(param, dtrain, num_boost_round=1000, valid_sets=[dvalid], early_stopping_rounds=10, verbose_eval=False) valid_pred = bst.predict(valid[feature_cols]) valid_score = metrics.roc_auc_score(valid['outcome'], valid_pred) print(f"Validation AUC score: {valid_score:.4f}") return bst # - # # Univariate Feature Selection # # The simplest and fastest methods are based on univariate statistical tests. For each feature, measure how strongly the target depends on the feature using a statistical test like $\chi^2$ or ANOVA. # # From the scikit-learn feature selection module, `feature_selection.SelectKBest` returns the K best features given some scoring function. For our classification problem, the module provides three different scoring functions: $\chi^2$, ANOVA F-value, and the mutual information score. The F-value measures the linear dependency between the feature variable and the target. 
This means the score might underestimate the relation between a feature and the target if the relationship is nonlinear. The mutual information score is nonparametric and so can capture nonlinear relationships. # # With `SelectKBest`, we define the number of features to keep, based on the score from the scoring function. Using `.fit_transform(features, target)` we get back an array with only the selected features. # + from sklearn.feature_selection import SelectKBest, f_classif feature_cols = baseline_data.columns.drop('outcome') # Keep 5 features selector = SelectKBest(f_classif, k=5) X_new = selector.fit_transform(baseline_data[feature_cols], baseline_data['outcome']) X_new # - # However, I've done something wrong here. The statistical tests are calculated using all of the data. This means information from the validation and test sets could influence the features we keep, introducing a source of leakage. This means we should select features using only a training set. # + feature_cols = baseline_data.columns.drop('outcome') train, valid, _ = get_data_splits(baseline_data) # Keep 5 features selector = SelectKBest(f_classif, k=5) X_new = selector.fit_transform(train[feature_cols], train['outcome']) X_new # - # You should notice that the selected features are different than when I used the entire dataset. Now we have our selected features, but it's only the feature values for the training set. To drop the rejected features from the validation and test sets, we need to figure out which columns in the dataset were kept with `SelectKBest`. To do this, we can use `.inverse_transform` to get back an array with the shape of the original data. # Get back the features we've kept, zero out all other features selected_features = pd.DataFrame(selector.inverse_transform(X_new), index=train.index, columns=feature_cols) selected_features.head() # This returns a DataFrame with the same index and columns as the training set, but all the dropped columns are filled with zeros. We can find the selected columns by choosing features where the variance is non-zero. # + # Dropped columns have values of all 0s, so var is 0, drop them selected_columns = selected_features.columns[selected_features.var() != 0] # Get the valid dataset with the selected features. valid[selected_columns].head() # - # # L1 regularization # # Univariate methods consider only one feature at a time when making a selection decision. Instead, we can make our selection using all of the features by including them in a linear model with L1 regularization. This type of regularization (sometimes called Lasso) penalizes the absolute magnitude of the coefficients, as compared to L2 (Ridge) regression which penalizes the square of the coefficients. # # As the strength of regularization is increased, features which are less important for predicting the target are set to 0. This allows us to perform feature selection by adjusting the regularization parameter. We choose the parameter by finding the best performance on a hold-out set, or decide ahead of time how many features to keep. # # For regression problems you can use `sklearn.linear_model.Lasso`, or `sklearn.linear_model.LogisticRegression` for classification. These can be used along with `sklearn.feature_selection.SelectFromModel` to select the non-zero coefficients. Otherwise, the code is similar to the univariate tests. 
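# As an aside before the fixed-`C` example that follows: the paragraph above mentions choosing the regularization strength on a hold-out set. Below is a minimal sketch of that selection loop, not the lesson's own code. It assumes the `get_data_splits` function and `baseline_data` frame defined earlier; the candidate `C` values are arbitrary, and scoring here uses validation AUC from the logistic model itself rather than the LightGBM model used elsewhere in this lesson.

# +
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

train, valid, _ = get_data_splits(baseline_data)
feature_cols = train.columns.drop('outcome')

# Smaller C = stronger L1 penalty = fewer non-zero coefficients (fewer features kept).
# solver="liblinear" is specified because the current default solver does not support L1.
# (In practice you may also want to standardize the features first.)
for C in [0.01, 0.1, 1.0]:
    logistic = LogisticRegression(C=C, penalty="l1", solver="liblinear", random_state=7)
    logistic.fit(train[feature_cols], train['outcome'])
    n_kept = (logistic.coef_ != 0).sum()
    valid_score = metrics.roc_auc_score(
        valid['outcome'], logistic.predict_proba(valid[feature_cols])[:, 1])
    print(f"C={C}: {n_kept} features kept, validation AUC={valid_score:.4f}")
# -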
# + from sklearn.linear_model import LogisticRegression from sklearn.feature_selection import SelectFromModel train, valid, _ = get_data_splits(baseline_data) X, y = train[train.columns.drop("outcome")], train['outcome'] # Set the regularization parameter C=1 logistic = LogisticRegression(C=1, penalty="l1", random_state=7).fit(X, y) model = SelectFromModel(logistic, prefit=True) X_new = model.transform(X) X_new # - # Similar to the univariate tests, we get back an array with the selected features. Again, we will want to convert these to a DataFrame so we can get the selected columns. # + # Get back the kept features as a DataFrame with dropped columns as all 0s selected_features = pd.DataFrame(model.inverse_transform(X_new), index=X.index, columns=X.columns) # Dropped columns have values of all 0s, keep other columns selected_columns = selected_features.columns[selected_features.var() != 0] # - # In this case with the L1 parameter `C=1`, we're dropping the `time_since_last_project` column. # # In general, feature selection with L1 regularization is more powerful the univariate tests, but it can also be very slow when you have a lot of data and a lot of features. Univariate tests will be much faster on large datasets, but also will likely perform worse. # # # Your Turn # Do **[feature selection](#$NEXT_NOTEBOOK_URL$)** to find the most important features in the model you've built in previous lessons. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py38] * # language: python # name: conda-env-py38-py # --- # # Mooring Export -> Excel (each column/page is an instrument) # + from erddapy import ERDDAP import pandas as pd import numpy as np server_url = 'http://ecofoci-field.pmel.noaa.gov:8080/erddap' d = ERDDAP(server=server_url, protocol='tabledap', response='csv', ) d.dataset_id='datasets_Mooring_19bsm2a_preliminary' d.variables = [ 'timeseries_id', 'temperature', 'depth', 'time', # 'Instrument_Identifier' ] # + df = d.to_pandas( index_col='time (UTC)', parse_dates=True, ).dropna() df.columns = [x[1].split()[0] for x in enumerate(df.columns)] # - df # + jupyter={"outputs_hidden": true} tags=[] ### for prawler export df['samplenum'] = df.groupby('profile_id').cumcount() dft = df.pivot(index='samplenum',columns='profile_id', values='Salinity (PSU)') df.reset_index() import openpyxl with pd.ExcelWriter('19bs2c.xlsx') as writer: df.tz_localize(None).reset_index().pivot(index='samplenum',columns='profile_id', values='time (UTC)').to_excel(writer, sheet_name='Time') df.pivot(index='samplenum',columns='profile_id', values='Salinity (PSU)').to_excel(writer, sheet_name='Salinity') df.pivot(index='samplenum',columns='profile_id', values='depth (m)').to_excel(writer, sheet_name='depth') # - #all depths import openpyxl with pd.ExcelWriter('19bsm2a.xlsx') as writer: for instrument,dfg in df.groupby('timeseries_id'): dfg.tz_localize(None).to_excel(writer, sheet_name=instrument) # ### Occasionally a filtered dataset is more appropriate so run the filter then export import EcoFOCIpy.math.lanzcos as lanzcos #<- instrument specific dfm= df.resample('1H').mean() dfm['filt35hr'] = lanzcos.lanzcos(dfm['temperature'].values,1,35) + df['temperature'].mean() dfm[['temperature','filt35hr']].plot() dfm['filt35hr'][(dfm.index.hour == 0) | (dfm.index.hour == 6) | (dfm.index.hour == 12) | (dfm.index.hour == 18)].plot(marker='.',markersize=.1) import openpyxl with 
pd.ExcelWriter('19bsm2a_f35.xlsx') as writer: for instrument,dfg in df.groupby('timeseries_id'): dfm= dfg.resample('1H').mean() dfm['filt35hr'] = lanzcos.lanzcos(dfm['temperature'].values,1,35) + df['temperature'].mean() dfm['filt35hr'][(dfm.index.hour == 0) | (dfm.index.hour == 6) | (dfm.index.hour == 12) | (dfm.index.hour == 18)].tz_localize(None).to_excel(writer, sheet_name=instrument) import openpyxl with pd.ExcelWriter('19bsm2a_f35_3hr.xlsx') as writer: for instrument,dfg in df.groupby('timeseries_id'): if instrument in ['19bsm2a_sc_0006m']: dfm= dfg.resample('1H').mean().interpolate() dfm['filt35hr'] = lanzcos.lanzcos(dfm['temperature'].values,1,35) + df['temperature'].mean() dfm['filt35hr'][(dfm.index.hour == 0) | (dfm.index.hour == 6) | (dfm.index.hour == 12) | (dfm.index.hour == 18)].tz_localize(None).to_excel(writer, sheet_name=instrument) for instrument,dfg in df.groupby('timeseries_id'): print(instrument) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="1CRGkVUOcnvc" # ## Preprocessing # + colab={"base_uri": "https://localhost:8080/"} id="ti0ZR-f3zEvS" outputId="9eef41fa-6b6c-4e36-8dd1-f187e4a9d00e" pip install tensorflow # + colab={"base_uri": "https://localhost:8080/", "height": 496} id="BLhcSleWcnvg" outputId="1d1b566a-928d-4e20-d212-07f22773c29d" # Import our dependencies from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler import pandas as pd import tensorflow as tf # Import and read the charity_data.csv. import pandas as pd application_df = pd.read_csv("charity_data.csv") application_df.head() # + colab={"base_uri": "https://localhost:8080/"} id="FQ_nN-S6cnvj" outputId="c2f95a78-4ae2-41ea-d036-c670f4fda8f2" # Drop the non-beneficial ID columns, 'EIN' and 'NAME'. application_df = application_df.drop(["EIN", "NAME"], 1) # + colab={"base_uri": "https://localhost:8080/", "height": 267} id="gRDa7XqV0DwC" outputId="a145b097-5132-4738-cfc0-11c3c4985487" application_df.head() # + colab={"base_uri": "https://localhost:8080/"} id="1Z6S2LXxcnvj" outputId="b7ff7343-2762-4df2-dff7-3e290dd4daef" # Determine the number of unique values in each column. application_df.nunique() # + [markdown] id="_7NdbQ2n1kIO" # Application_Type and Classification are the focus as they have values greater than ten. 
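# (A small aside, not part of the original flow: you can confirm which columns have more than ten unique values programmatically rather than reading the table above; the categorical columns among these are the binning candidates.)

# Sketch: list the columns of `application_df` (defined above) whose number of
# unique values exceeds 10.
unique_counts = application_df.nunique()
print(unique_counts[unique_counts > 10])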
# + colab={"base_uri": "https://localhost:8080/"} id="1kLeHIyNcnvk" outputId="22bcb06a-b924-4fc2-b2c8-4957763f09f0" # Look at APPLICATION_TYPE value counts for binning application_counts = application_df["APPLICATION_TYPE"].value_counts() application_counts # + [markdown] id="PiR7Tfwo2aL8" # Suggest cutting of at 500, less will be binned as other # + colab={"base_uri": "https://localhost:8080/"} id="5oKYg6PFcnvl" outputId="1b5dfd6f-65d8-4566-c6e0-34187763eb3b" # Choose a cutoff value and create a list of application types to be replaced # use the variable name `application_types_to_replace` application_types_to_replace = list(application_counts[application_counts <500].index) # Replace in dataframe for app in application_types_to_replace: application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other") # Check to make sure binning was successful application_df['APPLICATION_TYPE'].value_counts() # + colab={"base_uri": "https://localhost:8080/"} id="6_nmfMOzcnvl" outputId="5f3a14b6-7bc2-476a-8df8-d997ebe9c309" # Look at CLASSIFICATION value counts for binning class_counts = application_df["CLASSIFICATION"].value_counts() class_counts # + [markdown] id="lUakpk-R3TTM" # Cutting off where the classification_counts is >1. # + colab={"base_uri": "https://localhost:8080/"} id="6w3nEq13cnvm" outputId="8e7d4f6c-b002-45c0-8bfa-25e9bbee7a5c" # You may find it helpful to look at CLASSIFICATION value counts >1 class_counts[class_counts>1] # + [markdown] id="BpEuYIWH3rtK" # Update the cutoff to > 1000 for classification_counts # + colab={"base_uri": "https://localhost:8080/"} id="JQJTAvDCcnvm" outputId="1286371a-7568-47da-c4c9-7c1dda40cafd" # Choose a cutoff value and create a list of classifications to be replaced # use the variable name `classifications_to_replace` classifications_to_replace = list(class_counts[class_counts < 1000].index) # Replace in dataframe for cls in classifications_to_replace: application_df['CLASSIFICATION'] = application_df['CLASSIFICATION'].replace(cls,"Other") # Check to make sure binning was successful application_df['CLASSIFICATION'].value_counts() # + colab={"base_uri": "https://localhost:8080/", "height": 287} id="Fm-Rv3JPcnvn" outputId="c418173b-72ec-45a5-d80c-2b91b0886010" # Convert categorical data to numeric with `pd.get_dummies` appication_with_dummies_df = pd.get_dummies(application_df) appication_with_dummies_df.head(5) # + id="rJdroEvFcnvo" # Split our preprocessed data into our features and target arrays X = appication_with_dummies_df.drop(["IS_SUCCESSFUL"],axis="columns").values y = appication_with_dummies_df["IS_SUCCESSFUL"].values # Split the preprocessed data into a training and testing dataset X_train, X_test, y_train, y_test = train_test_split(X,y) # + id="4g0in-Qscnvo" # Create a StandardScaler instances scaler = StandardScaler() # Fit the StandardScaler X_scaler = scaler.fit(X_train) # Scale the data X_train_scaled = X_scaler.transform(X_train) X_test_scaled = X_scaler.transform(X_test) # + [markdown] id="fabODgRscnvp" # ## Compile, Train and Evaluate the Model # + colab={"base_uri": "https://localhost:8080/"} id="GQ7yS0jMcnvp" outputId="bbc55720-92b8-4ba9-b3cd-71a94037b22b" # Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer. 
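# (Added note, not from the original notebook): the hidden-layer sizes used below
# (80 and 30) are tunable hyperparameters, not requirements. One common rule of
# thumb is to start the first hidden layer at roughly 2-3x the number of input
# features, follow it with a smaller second layer, and then adjust by experiment.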
number_input_features = len(X_train[0])
layer1Nodes = 80
layer2Nodes = 30

nn = tf.keras.models.Sequential()

# First hidden layer
nn.add( tf.keras.layers.Dense(units=layer1Nodes, input_dim=number_input_features, activation='relu') )

# Second hidden layer
nn.add( tf.keras.layers.Dense(units=layer2Nodes, input_dim=layer1Nodes, activation='relu') )

# Output layer
nn.add( tf.keras.layers.Dense(units=1, input_dim=layer2Nodes, activation='sigmoid') )

# Check the structure of the model
nn.summary()

# + id="OtNIIoDjcnvp"
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer='adam', metrics=['accuracy'])

# + colab={"base_uri": "https://localhost:8080/"} id="bxKt1rAGcnvq" outputId="4b791f02-5348-4e98-f003-b91feaa1dca5"
# Train the model (the original cell repeated the compile call here; training requires fit;
# the epoch count is a reasonable choice, not a requirement)
fit_model = nn.fit(X_train_scaled, y_train, epochs=100)

# + colab={"base_uri": "https://localhost:8080/"} id="8yc1aQ9xcnvq" outputId="e6c9c80d-6856-489b-ad81-5355a27ec3b6"
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")

# + id="IOY1j8wgcnvr"
# Export our model to HDF5 file
nn.save("AlphabetSoupCharity.h5")

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3.8.6 64-bit
#     metadata:
#       interpreter:
#         hash: 604522f1918a4f70ab804f3225c4c25389e7cf7d629ed4133598ebf924cdb86e
#     name: python3
# ---

print('hello world')

# +
def hello():
    print('world!')

print(hello)
# -

x = 4
print(hex(id(x)))

# Create a function taking a positive integer as its parameter and returning a string containing the Roman Numeral representation of that integer.
#
# Modern Roman numerals are written by expressing each digit separately, starting with the left-most digit and skipping any digit with a value of zero. In Roman numerals 1990 is rendered: 1000=M, 900=CM, 90=XC; resulting in MCMXC. 2008 is written as 2000=MM, 8=VIII; or MMVIII. 1666 uses each Roman symbol in descending order: MDCLXVI.
#
# | Symbol | Value |
# | --- | --- |
# | I | 1 |
# | V | 5 |
# | X | 10 |
# | L | 50 |
# | C | 100 |
# | D | 500 |
# | M | 1,000 |
#
# solution(1000) # should return 'M'

# + tags=[]
def solution(n):
    values = [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1]
    literal = ["M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"]
    roman = ''
    for i in range(len(values)):
        cDig = n // values[i]
        if (cDig):
            roman += cDig * literal[i]
            n = n % values[i]
    return roman

# solution(56)
print(solution(516))
# -

# Using recursion (best practice)
def solution(x):
    table = [
        (1000,"M"), (900,"CM"), (500,"D"), (400,"CD"),
        (100,"C"), (90,"XC"), (50,"L"), (40,"XL"),
        (10,"X"), (9,"IX"), (5,"V"), (4,"IV"), (1,"I")
    ]
    for num, rep in table:
        if x >= num:
            return rep + solution(x-num)
    return str()

# +
# Find whether the number is prime or not
def is_prime(num):
    # print(list(range(2, num)))
    for n in range(2, num):
        if (num % n == 0):
            return False
    return True

is_prime(9)

# +
# Efficient method, fewer numbers to check
def is_prime(num):
    # Eliminates all even numbers
    if num % 2 == 0 and num > 2:
        return False
    # check for all numbers starting from 3, in
    # step of 2 like 3, 5, 7...
# no need to check the full range only upto # the square root of the number, so the number of checks reduced for n in range(3, int(num**(1/2))+1, 2): if (num % n == 0): return False return True is_prime() # + # need to consider cases like , -ve values, 0 and 1 def is_prime(num): # Eliminates all numbers less tham 2 if num < 2: return False # Eliminates all even numbers if num % 2 == 0 and num > 2: return False # check for all numbers starting from 3, in # step of 2 up to the sq. root of given number. for n in range(3, int(num**(1/2))+1, 2): if (num % n == 0): return False return True # - print ('testing') # Given two arrays of strings a1 and a2 return a sorted array r in lexicographical order of the strings of a1 which are substrings of strings of a2. # # #Example 1: a1 = \["arp", "live", "strong"\] # # a2 = \["lively", "alive", "harp", "sharp", "armstrong"\] # # returns \["arp", "live", "strong"\] # # #Example 2: a1 = \["tarp", "mice", "bull"\] # # a2 = \["lively", "alive", "harp", "sharp", "armstrong"\] # # returns \[\] # + def in_array(array1, array2): r = [] for w2 in array2: for w1 in array1: if (w1 in w2) and (w1 not in r) : r.append(w1) return sorted(r) a1 = ["live", "arp", "strong"] a2 = ["lively", "alive", "harp", "sharp", "armstrong"] # r = ['arp', 'live', 'strong'] print(in_array(a1, a2)) in_array(a1, a2) # + # Good solution def in_array(a1, a2): return sorted(set(s1 for s1 in a1 if any(s1 in s2 for s2 in a2))) a1 = ["live", "arp", "strong"] a2 = ["lively", "alive", "harp", "sharp", "armstrong"] # r = ['arp', 'live', 'strong'] print(in_array(a1, a2)) in_array(a1, a2) # - # + # encryption def encrypt(text, n): str1 = '' str2 = '' for i in range(0, len(text), 2): str1 += text[i] if i < len(text) - 1: str2 += text[i + 1] text = str2 + str1 n -= 1 if n != 0: return encrypt(text, n) return text # encrypt("This is a test!", 0) # "This is a test!" encrypt("This is a test!", 10) # "hsi etTi sats!" # + def decrypt(text, n): print(n % 4) return (encrypt(text, 4 - (n % 4))) decrypt("s eT ashi tist!", 10) # + # decription def decrypt(text, n): eIStr = text[int(len(text)/2):] oIStr = text[:int(len(text)/2)] orStr = "".join([a + b for a,b in zip(eIStr, oIStr)]) if len(eIStr) > len(oIStr): orStr = orStr + eIStr[-1] n -= 1 if n != 0: return decrypt(orStr, n) return orStr decrypt("hskt svr neetn!Ti aai eyitrsig", 1) # + d = "patti" s = 'Thendi' f = zip(d, s) # print(list(f)) print([a + b for a,b in f]) # print the one with max length print(max(d,s, key=len)[min(len(d),len(s)):]) # [min(len(d),len(s)):] # + def encrypt(text, n): if text and n > 0: str1, str2 = '','' for i in range(0, len(text), 2): str1 += text[i] if i < len(text) - 1: str2 += text[i + 1] text = str2 + str1 n -= 1 return encrypt(text, n) return text # encrypt("This", 1) encrypt("hskt svr neetn!Ti aai eyitrsig", 4) # + def decrypt(encrypted_text, n): if len(encrypted_text)%2 == 0: c = 4 else: c = 5 return (encrypt(encrypted_text, c - (n % c))) # decrypt("hsi etTi sats!", 1) # encrypt("This is a test!", -1) # hsi etTi sats! 
# decrypt("hsi etTi sats!", 1) # decrypt("hsi etTi sats!", 1) decrypt("This is a test!", 1) # + def encrypt(text, n): if text and n > 0: str1, str2 = '','' for i in range(0, len(text), 2): str1 += text[i] if i < len(text) - 1: str2 += text[i + 1] text = str2 + str1 n -= 1 return encrypt(text, n) return text def decrypt(encrypted_text, n): if encrypted_text and n > 0: eIStr = encrypted_text[int(len(encrypted_text)/2):] oIStr = encrypted_text[:int(len(encrypted_text)/2)] orStr = "".join([a + b for a,b in zip(eIStr, oIStr)]) if len(eIStr) > len(oIStr): orStr = orStr + eIStr[-1] n -= 1 return decrypt(orStr, n) return encrypted_text # + def add(a, b): return a + b add(1, 2) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import json from pathlib import Path from typing import Any, Mapping import numpy as np # + def main(filepath: Path): people = get_people(filepath) friends_difference = get_account_difference(people) for person_id, diff in friends_difference.items(): print(f"{person_id}: {diff:0.2f} EUR") def get_account_difference( people: Mapping[str, Mapping[str, Any]] ) -> Mapping[int, float]: result = {} for person in people.values(): if person["friends"] is not None: friends_account = [ people[friend]["bank_account"] for friend in person["friends"] ] median = np.median(friends_account) else: median = None if median is None: result[person["id"]] = 0 else: result[person["id"]] = person["bank_account"] - median return result def get_people(filepath: Path) -> Mapping[str, Mapping[str, Any]]: with open(filepath) as fp: people = json.loads(fp.read()) id2person = {} for person in people: id2person[person["id"]] = person return id2person if __name__ == "__main__": main(Path("people.json")) # + def get_people(filepath: Path) -> Mapping[str, Mapping[str, Any]]: with open(filepath) as fp: people = json.loads(fp.read()) print(people) id2person = {} for person in people: id2person[person["id"]] = person return id2person get_people('people.json') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # !pip3 install -U pip && pip3 install -r requirements.txt # !pip3 install -v -e . # !pip uninstall -y torch torchvision torchaudio # May need to change in the future if Colab no longer uses CUDA 11.0 # !pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html # !apt-get update # !apt install -y libgl1-mesa-glx # !pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI' # !python tools/train.py -f exps/example/yolox_voc/yolox_voc_l.py -d 0 -b 18 --fp16 -o -c models/yolox_l.pth # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="view-in-github" # Open In Colab # - # # Tutorial 1: Decoding Neural Responses # **Week 2, Day 1: Deep Learning** # # **By Neuromatch Academy** # # **Content creators**: , # # **Content reviewers**: , , , , , # # **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** # #

    # --- # # Tutorial Objectives # # *Estimated timing of tutorial: 1 hr, 20 minutes* # # In this tutorial, we'll use deep learning to decode stimulus information from the responses of sensory neurons. Specifically, we'll look at the activity of ~20,000 neurons in mouse primary visual cortex responding to oriented gratings recorded in [this study](https://www.biorxiv.org/content/10.1101/679324v2.abstract). Our task will be to decode the orientation of the presented stimulus from the responses of the whole population of neurons. We could do this in a number of ways, but here we'll use deep learning. Deep learning is particularly well-suited to this problem for a number of reasons: # * The data are very high-dimensional: the neural response to a stimulus is a ~20,000 dimensional vector. Many machine learning techniques fail in such high dimensions, but deep learning actually thrives in this regime, as long as you have enough data (which we do here!). # * As you'll be able to see below, different neurons can respond quite differently to stimuli. This complex pattern of responses will, therefore, require non-linear methods to be decoded, which we can easily do with non-linear activation functions in deep networks. # * Deep learning architectures are highly flexible, meaning we can easily adapt the architecture of our decoding model to optimize decoding. Here, we'll focus on a single architecture, but you'll see that it can easily be modified with few changes to the code. # # More concretely, our goal will be learn how to: # * Build a deep feed-forward network using PyTorch # * Evaluate the network's outputs using PyTorch built-in loss functions # * Compute gradients of the loss with respect to each parameter of the network using automatic differentiation # * Implement gradient descent to optimize the network's parameters # # + cellView="form" # @title Tutorial slides # @markdown These are the slides for all videos in this tutorial. from IPython.display import IFrame IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/vb7c4/?direct%26mode=render%26action=download%26mode=render", width=854, height=480) # + cellView="form" # @title Video 1: Decoding from neural data using feed-forward networks in pytorch from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1Xa4y1a7Jz", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="SlrbMvvBOzM", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) # - # This video covers the decoding task we will use in these tutorials, a linear network with one hidden layer, and how to build this in Pytorch. # # Generalized linear models were used as decoding and encoding models in W1D4 Machine Learning. A model that decodes a variable from neural activity can tell us *how much information* a brain area contains about that variable. 
An encoding model is a model from an input variable, like visual stimulus, to neural activity. The encoding model is meant to approximate the same transformation that the brain performs on input variables and therefore help us understand *how the brain represents information*. Today we will use deep neural networks to build these models because deep neural networks can approximate a wide range of non-linear functions and can be easily fit. # # # --- # # Setup # # + cellView="both" # Imports import os import numpy as np import torch from torch import nn from torch import optim import matplotlib as mpl from matplotlib import pyplot as plt # + cellView="form" #@title Data retrieval and loading import hashlib import requests fname = "W3D4_stringer_oribinned1.npz" url = "https://osf.io/683xc/download" expected_md5 = "436599dfd8ebe6019f066c38aed20580" if not os.path.isfile(fname): try: r = requests.get(url) except requests.ConnectionError: print("!!! Failed to download data !!!") else: if r.status_code != requests.codes.ok: print("!!! Failed to download data !!!") elif hashlib.md5(r.content).hexdigest() != expected_md5: print("!!! Data download appears corrupted !!!") else: with open(fname, "wb") as fid: fid.write(r.content) # + cellView="form" #@title Figure Settings # %config InlineBackend.figure_format = 'retina' plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle") # + cellView="form" # @title Plotting Functions def plot_data_matrix(X, ax): """Visualize data matrix of neural responses using a heatmap Args: X (torch.Tensor or np.ndarray): matrix of neural responses to visualize with a heatmap ax (matplotlib axes): where to plot """ cax = ax.imshow(X, cmap=mpl.cm.pink, vmin=np.percentile(X, 1), vmax=np.percentile(X, 99)) cbar = plt.colorbar(cax, ax=ax, label='normalized neural response') ax.set_aspect('auto') ax.set_xticks([]) ax.set_yticks([]) def plot_decoded_results(train_loss, test_loss, test_labels, predicted_test_labels): """ Plot decoding results in the form of network training loss and test predictions Args: train_loss (list): training error over iterations test_labels (torch.Tensor): n_test x 1 tensor with orientations of the stimuli corresponding to each row of train_data, in radians predicted_test_labels (torch.Tensor): n_test x 1 tensor with predicted orientations of the stimuli from decoding neural network """ # Plot results fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 6)) # Plot the training loss over iterations of GD ax1.plot(train_loss) # Plot the testing loss over iterations of GD ax1.plot(test_loss) ax1.legend(['train loss', 'test loss']) # Plot true stimulus orientation vs. 
predicted class ax2.plot(stimuli_test.squeeze(), predicted_test_labels, '.') ax1.set_xlim([0, None]) ax1.set_ylim([0, None]) ax1.set_xlabel('iterations of gradient descent') ax1.set_ylabel('negative log likelihood') ax2.set_xlabel('true stimulus orientation ($^o$)') ax2.set_ylabel('decoded orientation bin') ax2.set_xticks(np.linspace(0, 360, n_classes + 1)) ax2.set_yticks(np.arange(n_classes)) class_bins = [f'{i * 360 / n_classes: .0f}$^o$ - {(i + 1) * 360 / n_classes: .0f}$^o$' for i in range(n_classes)] ax2.set_yticklabels(class_bins); # Draw bin edges as vertical lines ax2.set_ylim(ax2.get_ylim()) # fix y-axis limits for i in range(n_classes): lower = i * 360 / n_classes upper = (i + 1) * 360 / n_classes ax2.plot([lower, lower], ax2.get_ylim(), '-', color="0.7", linewidth=1, zorder=-1) ax2.plot([upper, upper], ax2.get_ylim(), '-', color="0.7", linewidth=1, zorder=-1) plt.tight_layout() def plot_train_loss(train_loss): plt.plot(train_loss) plt.xlim([0, None]) plt.ylim([0, None]) plt.xlabel('iterations of gradient descent') plt.ylabel('mean squared error') plt.show() # + cellView="form" # @title Helper Functions def load_data(data_name=fname, bin_width=1): """Load mouse V1 data from Stringer et al. (2019) Data from study reported in this preprint: https://www.biorxiv.org/content/10.1101/679324v2.abstract These data comprise time-averaged responses of ~20,000 neurons to ~4,000 stimulus gratings of different orientations, recorded through Calcium imaging. The responses have been normalized by spontaneous levels of activity and then z-scored over stimuli, so expect negative numbers. They have also been binned and averaged to each degree of orientation. This function returns the relevant data (neural responses and stimulus orientations) in a torch.Tensor of data type torch.float32 in order to match the default data type for nn.Parameters in Google Colab. This function will actually average responses to stimuli with orientations falling within bins specified by the bin_width argument. This helps produce individual neural "responses" with smoother and more interpretable tuning curves. Args: bin_width (float): size of stimulus bins over which to average neural responses Returns: resp (torch.Tensor): n_stimuli x n_neurons matrix of neural responses, each row contains the responses of each neuron to a given stimulus. As mentioned above, neural "response" is actually an average over responses to stimuli with similar angles falling within specified bins. stimuli: (torch.Tensor): n_stimuli x 1 column vector with orientation of each stimulus, in degrees. This is actually the mean orientation of all stimuli in each bin. 
""" with np.load(data_name) as dobj: data = dict(**dobj) resp = data['resp'] stimuli = data['stimuli'] if bin_width > 1: # Bin neural responses and stimuli bins = np.digitize(stimuli, np.arange(0, 360 + bin_width, bin_width)) stimuli_binned = np.array([stimuli[bins == i].mean() for i in np.unique(bins)]) resp_binned = np.array([resp[bins == i, :].mean(0) for i in np.unique(bins)]) else: resp_binned = resp stimuli_binned = stimuli # Return as torch.Tensor resp_tensor = torch.tensor(resp_binned, dtype=torch.float32) stimuli_tensor = torch.tensor(stimuli_binned, dtype=torch.float32).unsqueeze(1) # add singleton dimension to make a column vector return resp_tensor, stimuli_tensor def identityLine(): """ Plot the identity line y=x """ ax = plt.gca() lims = np.array([ax.get_xlim(), ax.get_ylim()]) minval = lims[:, 0].min() maxval = lims[:, 1].max() equal_lims = [minval, maxval] ax.set_xlim(equal_lims) ax.set_ylim(equal_lims) line = ax.plot([minval, maxval], [minval, maxval], color="0.7") line[0].set_zorder(-1) def get_data(n_stim, train_data, train_labels): """ Return n_stim randomly drawn stimuli/resp pairs Args: n_stim (scalar): number of stimuli to draw resp (torch.Tensor): train_data (torch.Tensor): n_train x n_neurons tensor with neural responses to train on train_labels (torch.Tensor): n_train x 1 tensor with orientations of the stimuli corresponding to each row of train_data, in radians Returns: (torch.Tensor, torch.Tensor): n_stim x n_neurons tensor of neural responses and n_stim x 1 of orientations respectively """ n_stimuli = train_labels.shape[0] istim = np.random.choice(n_stimuli, n_stim) r = train_data[istim] # neural responses to this stimulus ori = train_labels[istim] # true stimulus orientation return r, ori def stimulus_class(ori, n_classes): """Get stimulus class from stimulus orientation Args: ori (torch.Tensor): orientations of stimuli to return classes for n_classes (int): total number of classes Returns: torch.Tensor: 1D tensor with the classes for each stimulus """ bins = np.linspace(0, 360, n_classes + 1) return torch.tensor(np.digitize(ori.squeeze(), bins)) - 1 # minus 1 to accomodate Python indexing # - # --- # # Section 1: Load and visualize data # #
    # Click here for text recap of relevant part of video # # We will be exploring neural activity in mice while they view oriented grating stimuli on a screen in front of them. We record neural activity using a technique called two-photon calcium imaging, which allows us to record many thousands of neurons simultaneously. The neurons light up when they fire. We then convert this imaging data to a matrix of neural responses indexed by the stimuli presented. For the purposes of this tutorial we are going to bin the neural responses and compute each neuron’s tuning curve. We used bins of 1 degree. We will use the response of all neurons in a single bin to try to predict which stimulus was shown. So we are going to be using the responses of 24,000 neurons to try to predict 360 different possible stimulus conditions corresponding to each degree of orientation - which means we're in the regime of big data! # #
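# As a minimal, hedged illustration of the binning idea described above (using made-up arrays rather than the real recordings, which are loaded in the next cell), a single neuron's tuning curve can be computed by averaging its responses within each orientation bin:

# +
# Toy sketch of binned tuning curves (hypothetical data, not the Stringer et al. recordings)
import numpy as np

rng = np.random.default_rng(0)
toy_stimuli = rng.uniform(0, 360, size=1000)   # stimulus orientations in degrees
toy_resp = rng.normal(size=(1000, 5))          # responses of 5 made-up neurons

bin_width = 1
bins = np.digitize(toy_stimuli, np.arange(0, 360 + bin_width, bin_width))
tuning_curve = np.array([toy_resp[bins == i, 0].mean() for i in np.unique(bins)])  # neuron 0
print(tuning_curve.shape)  # one averaged response per non-empty orientation bin
# -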
    # # In the next cell, we have provided code to load the data and plot the matrix of neural responses. # # Next to it, we plot the tuning curves of three randomly selected neurons. These tuning curves are the averaged response of each neuron to oriented stimuli within 1$^\circ$, and since there are 360$^\circ$ in total, we have 360 responses. # # In the recording, there were actually thousands of stimuli shown, but in practice we often create these tuning curves because we want to visualize averaged responses with respect to the variable we varied in the experiment, in this case stimulus orientation. # + cellView="form" #@title #@markdown Execute this cell to load and visualize data # Load data resp_all, stimuli_all = load_data() # argument to this function specifies bin width n_stimuli, n_neurons = resp_all.shape print(f'{n_neurons} neurons in response to {n_stimuli} stimuli') fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(2 * 6, 5)) # Visualize data matrix plot_data_matrix(resp_all[:, :100].T, ax1) # plot responses of first 100 neurons ax1.set_xlabel('stimulus') ax1.set_ylabel('neuron') # Plot tuning curves of three random neurons ineurons = np.random.choice(n_neurons, 3, replace=False) # pick three random neurons ax2.plot(stimuli_all, resp_all[:, ineurons]) ax2.set_xlabel('stimulus orientation ($^o$)') ax2.set_ylabel('neural response') ax2.set_xticks(np.linspace(0, 360, 5)) plt.tight_layout() # - # We will split our data into a training set and test set. In particular, we will have a training set of orientations (`stimuli_train`) and the corresponding responses (`resp_train`). Our testing set will have held-out orientations (`stimuli_test`) and the corresponding responses (`resp_test`). # + cellView="form" #@title #@markdown Execute this cell to split into training and test sets # Set random seeds for reproducibility np.random.seed(4) torch.manual_seed(4) # Split data into training set and testing set n_train = int(0.6 * n_stimuli) # use 60% of all data for training set ishuffle = torch.randperm(n_stimuli) itrain = ishuffle[:n_train] # indices of data samples to include in training set itest = ishuffle[n_train:] # indices of data samples to include in testing set stimuli_test = stimuli_all[itest] resp_test = resp_all[itest] stimuli_train = stimuli_all[itrain] resp_train = resp_all[itrain] # - # --- # # Section 2: Deep feed-forward networks in *pytorch* # # #
    # Click here for text recap of relevant part of video # # We can build a linear network with no hidden layers, where the stimulus prediction $y$ is a product of weights $\mathbf{W}_{out}$ and neural responses $\mathbf{r}$ with an added term $\mathbf{b}$ which is called the bias term. When you fit a linear model such as this you minimize the squared error between the predicted stimulus $y$ and the true stimulus $\tilde{y}$, this is the “loss function”. # \begin{align} # L &= (y - \tilde{y})^2 \\ # &= ((\mathbf{W}^{out} \mathbf{r} + \mathbf{b}) - \tilde{y})^2 # \end{align} # The solution to minimizing this loss function in a linear model can be found in closed form, and you learned how to solve this linear regression problem in the first week if you remember. If we use a simple linear model for this data we are able to predict the stimulus within 2-3 degrees. Let’s see if we can predict the neural activity better with a deep network. # # Let’s add a hidden layer with $M$ units to this linear model, where now the output $y$ is as follows: # \begin{align} # \mathbf{h} &= \mathbf{W}^{in} \mathbf{r} + \mathbf{b}^{in}, && [\mathbf{W}^{in}: M \times N,\, \mathbf{b}^{in}: M \times 1], \\ # y &= \mathbf{W}^{out} \mathbf{h} + \mathbf{b}^{out}, && [\mathbf{W}^{out}: 1 \times M,\, \mathbf{b}^{in}: 1 \times 1], # \end{align} # # # Note this linear network with one hidden layer where $M$ hidden units is less than $N$ inputs is equivalent to performing [reduced rank regression](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3444519/), a technique that is useful for regularizing your regression model. # # Adding this hidden layer means the model now has a depth of $1$. The number of units $M$ is termed the width of the network. Increasing the depth and the width of the network can increase the expressivity of the model -- in other words how well it can fit complex non-linear functions. Many state-of-the-art models now have close to 100 layers! But for now let’s start with a model with a depth of $1$ and see if we can improve our prediction of the stimulus. See [bonus section 1](#b1) for a deeper discussion of what this choice entails, and when one might want to use deeper/shallower and wider/narrower architectures. # # The $M$-dimensional vector $\mathbf{h}$ denotes the activations of the **hidden layer** of the network. The blue components of this diagram denote the **parameters** of the network, which we will later optimize with gradient descent. These include all the weights and biases $\mathbf{W}^{in}, \mathbf{b}^{in}, \mathbf{W}^{out}, \mathbf{b}^{out}$. The **weights** are matrices of size (# of outputs, # of inputs) that are multiplied by the input of each layer, like the regression coefficients in linear regression. The **biases** are vectors of size (# of outputs, 1), like the intercept term in linear regression (see W1D3 for more details on multivariate linear regression). # #
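# The reduced-rank point above can be checked directly. The sketch below uses toy dimensions, and a K-dimensional output rather than the scalar output used in this tutorial, purely for illustration: two stacked linear layers collapse into a single linear map whose rank is at most the hidden width $M$.

# +
# Toy check: a linear-linear network is one linear map of rank <= M (illustration only)
import numpy as np

N, M, K = 50, 5, 20                       # N inputs, M hidden units, K outputs (toy sizes)
rng = np.random.default_rng(0)
W_in, b_in = rng.normal(size=(M, N)), rng.normal(size=(M, 1))
W_out, b_out = rng.normal(size=(K, M)), rng.normal(size=(K, 1))

r = rng.normal(size=(N, 1))               # a made-up "response" vector
y_two_layer = W_out @ (W_in @ r + b_in) + b_out
y_collapsed = (W_out @ W_in) @ r + (W_out @ b_in + b_out)

print(np.allclose(y_two_layer, y_collapsed))      # True: the same linear map
print(np.linalg.matrix_rank(W_out @ W_in))        # at most M = 5
# -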

    # # We'll now build a simple deep neural network that takes as input a vector of neural responses and outputs a single number representing the decoded stimulus orientation. # # Let $\mathbf{r}^{(n)} = \begin{bmatrix} r_1^{(n)} & r_2^{(n)} & \ldots & r_N^{(n)} \end{bmatrix}^T$ denote the vector of neural responses (of neurons $1, \ldots, N$) to the $n$th stimulus. The network we will use is described by the following set of equations: # \begin{align} # \mathbf{h}^{(n)} &= \mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in}, && [\mathbf{W}^{in}: M \times N,\, \mathbf{b}^{in}: M \times 1], \\ # y^{(n)} &= \mathbf{W}^{out} \mathbf{h}^{(n)} + \mathbf{b}^{out}, && [\mathbf{W}^{out}: 1 \times M,\, \mathbf{b}^{in}: 1 \times 1], # \end{align} # where $y^{(n)}$ denotes the scalar output of the network: the decoded orientation of the $n$th stimulus. # # # # # # ### Section 2.1: Introduction to PyTorch # # *Estimated timing to here from start of tutorial: 16 min* # # Here, we'll use the **PyTorch** package to build, run, and train deep networks of this form in Python. PyTorch uses a data type called a `torch.Tensor`. `torch.Tensor`'s are effectively just like a `numpy` arrays, except that they have some important attributes and methods needed for automatic differentiation (to be discussed below). They also come along with infrastructure for easily storing and computing with them on GPU's, a capability we won't touch on here but which can be really useful in practice. # #
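# As a quick, hedged warm-up (toy values only, nothing from the dataset), the cell below shows how `torch.Tensor`s behave like NumPy arrays while also supporting the gradient tracking we will rely on later:

# +
# torch.Tensor basics: NumPy-like operations plus gradient tracking (toy values)
import numpy as np
import torch

a_np = np.arange(6, dtype=np.float32).reshape(2, 3)
a = torch.from_numpy(a_np)                 # build a tensor from a NumPy array
b = torch.tensor([[1.0, 2.0, 3.0]])        # build a tensor directly

print(a @ b.T)                             # matrix product, just like NumPy
print(a.numpy().shape)                     # and back to NumPy when needed

w = torch.ones(3, requires_grad=True)      # this tensor will track gradients
loss = (w * torch.tensor([1.0, 2.0, 3.0])).sum()
loss.backward()                            # automatic differentiation
print(w.grad)                              # tensor([1., 2., 3.])
# -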
    # Click here for text recap of relevant part of video # # First we import the pytorch library called `torch` and its neural network module `nn`. Next we will create a class for the deep network called DeepNet. A class has functions which are called methods. A class in python is initialized using a method called `__init__`. In this case the init method is declared to takes two inputs (other than the `self` input which represents the class itself), which are `n_inputs` and `n_hidden`. In our case `n_inputs` is the number of neurons we are using to do the prediction, and `n_hidden` is the number of hidden units. We first call the super function to invoke the `nn.Module`’s init function. Next we add the hidden layer `in_layer` as an attribute of the class. It is a linear layer called `nn.Linear` with size `n_inputs` by `n_hidden`. Then we add a second linear layer `out_layer` of size `n_hidden` by `1`, because we are predicting one output - the orientation of the stimulus. PyTorch will initialize all weights and biases randomly. # # Note the number of hidden units `n_hidden` is a parameter that we are free to vary in deciding how to build our network. See [Bonus Section 1](#b1) for a discussion of how this architectural choice affects the computations the network can perform. # # Next we add another method to the class called `forward`. This is the method that runs when you call the class as a function. It takes as input `r` which is the neural responses. Then `r` is sent through the linear layers `in_layer` and `out_layer` and returns our prediction `y`. Let’s create an instantiation of this class called `net` with 200 hidden units with `net = DeepNet(n_neurons, 200)`. Now we can run the neural response through the network to predict the stimulus (`net(r)`); running the “net” this way calls the forward method. # # #
    # # The next cell contains code for building the deep network we defined above and in the video using the `nn.Module` base class for deep neural network models (documentation [here](https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=nn%20module#torch.nn.Module)). # + cellView="both" class DeepNet(nn.Module): """Deep Network with one hidden layer Args: n_inputs (int): number of input units n_hidden (int): number of units in hidden layer Attributes: in_layer (nn.Linear): weights and biases of input layer out_layer (nn.Linear): weights and biases of output layer """ def __init__(self, n_inputs, n_hidden): super().__init__() # needed to invoke the properties of the parent class nn.Module self.in_layer = nn.Linear(n_inputs, n_hidden) # neural activity --> hidden units self.out_layer = nn.Linear(n_hidden, 1) # hidden units --> output def forward(self, r): """Decode stimulus orientation from neural responses Args: r (torch.Tensor): vector of neural responses to decode, must be of length n_inputs. Can also be a tensor of shape n_stimuli x n_inputs, containing n_stimuli vectors of neural responses Returns: torch.Tensor: network outputs for each input provided in r. If r is a vector, then y is a 1D tensor of length 1. If r is a 2D tensor then y is a 2D tensor of shape n_stimuli x 1. """ h = self.in_layer(r) # hidden representation y = self.out_layer(h) return y # - # ## Section 2.2: Activation functions # # *Estimated timing to here from start of tutorial: 25 min* # + cellView="form" # @title Video 2: Nonlinear activation functions from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV1m5411h7V5", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="JAdukDCQALA", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) # - # This video covers adding a nonlinear activation funciton, specifically a Rectified Linear Unit (ReLU), to the linear network. # #
    # Click here for text recap of video # # Note that the deep network we constructed above comprises solely **linear** operations on each layer: each layer is just a weighted sum of all the elements in the previous layer. It turns out that linear hidden layers like this aren't particularly useful, since a sequence of linear transformations is actually essentially the same as a single linear transformation. We can see this from the above equations by plugging in the first one into the second one to obtain # \begin{equation} # y^{(n)} = \mathbf{W}^{out} \left( \mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in} \right) + \mathbf{b}^{out} = \mathbf{W}^{out}\mathbf{W}^{in} \mathbf{r}^{(n)} + \left( \mathbf{W}^{out}\mathbf{b}^{in} + \mathbf{b}^{out} \right) # \end{equation} # In other words, the output is still just a weighted sum of elements in the input -- the hidden layer has done nothing to change this. # # To extend the set of computable input/output transformations to more than just weighted sums, we'll incorporate a **non-linear activation function** in the hidden units. This is done by simply modifying the equation for the hidden layer activations to be # \begin{equation} # \mathbf{h}^{(n)} = \phi(\mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in}) # \end{equation} # where $\phi$ is referred to as the activation function. Using a non-linear activation function will ensure that the hidden layer performs a non-linear transformation of the input, which will make our network much more powerful (or *expressive*, see [Bonus Section 1](#b1)). In practice, deep networks *always* use non-linear activation functions. # # The most common non-linearity used is the rectified linear unit (or ReLU), which is a max(0, x) function. At the beginning of neural network development, researchers experimented with different non-linearities such as sigmoid and tanh functions, but in the end they found that RELU activation functions worked the best. It works well because the gradient is able to back-propagate through the network as long as the input is positive - the gradient is 1 for all values of x greater than 0. If you use a saturating non-linearity then the gradients will be very small in the saturating regimes, reducing the effective computing regime of the unit. # #
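# The point about saturating non-linearities can be seen directly with autograd. This small hedged sketch (toy values only) compares the gradient that flows through `torch.relu` and through a saturating `torch.sigmoid` for a large positive input:

# +
# Gradient through ReLU vs. a saturating nonlinearity (toy illustration)
import torch

x = torch.tensor([5.0], requires_grad=True)

torch.relu(x).backward()
print(x.grad)             # tensor([1.]): the gradient passes through unchanged

x.grad = None             # clear the stored gradient
torch.sigmoid(x).backward()
print(x.grad)             # roughly 0.0066: almost no gradient in the saturated regime
# -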
    # # # # #### Coding Exercise 2.2: Nonlinear Activations # # Create a new class `DeepNetReLU` by modifying our above deep network model to add a **non-linear activation** function $\phi$: # \begin{equation} # \mathbf{h}^{(n)} = \phi(\mathbf{W}^{in} \mathbf{r}^{(n)} + \mathbf{b}^{in}) # \end{equation} # # We'll use the linear rectification function: # \begin{equation} # \phi(x) = # \begin{cases} # x & \text{if } x > 0 \\ # 0 & \text{else} # \end{cases} # \end{equation} # which can be implemented in PyTorch using `torch.relu()`. Hidden layers with this activation function are typically referred to as "**Re**ctified **L**inear **U**nits", or **ReLU**'s. # # Initialize this network with 10 hidden units and run on an example stimulus. # # **Hint**: you only need to modify the `forward()` method of the above `DeepNet()` class to include `torch.relu()`. # # # We then initialize and run this network. We use it to decode stimulus orientation (true stimulus given by `ori`) from a vector of neural responses `r` to the very first stimulus. Note that when the initialized network class is called as a function on an input (e.g. `net(r)`), its `.forward()` method is called. This is a special property of the `nn.Module` class. # # Note that the decoded orientations at this point will be nonsense, since the network has been initialized with random weights. Below, we'll learn how to optimize these weights for good stimulus decoding. # # + class DeepNetReLU(nn.Module): """ network with a single hidden layer h with a RELU """ def __init__(self, n_inputs, n_hidden): super().__init__() # needed to invoke the properties of the parent class nn.Module self.in_layer = nn.Linear(n_inputs, n_hidden) # neural activity --> hidden units self.out_layer = nn.Linear(n_hidden, 1) # hidden units --> output def forward(self, r): ############################################################################ ## TO DO for students: write code for computing network output using a ## rectified linear activation function for the hidden units # Fill out function and remove raise NotImplementedError("Student exercise: complete DeepNetReLU forward") ############################################################################ h = ... # h is size (n_inputs, n_hidden) y = ... 
# y is size (n_inputs, 1) return y # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize a deep network with M=200 hidden units net = DeepNetReLU(n_neurons, 200) # Get neural responses (r) to and orientation (ori) to one stimulus in dataset r, ori = get_data(1, resp_train, stimuli_train) # using helper function get_data # Decode orientation from these neural responses using initialized network out = net(r) # compute output from network, equivalent to net.forward(r) print('decoded orientation: %.2f degrees' % out) print('true orientation: %.2f degrees' % ori) # + # to_remove solution class DeepNetReLU(nn.Module): """ network with a single hidden layer h with a RELU """ def __init__(self, n_inputs, n_hidden): super().__init__() # needed to invoke the properties of the parent class nn.Module self.in_layer = nn.Linear(n_inputs, n_hidden) # neural activity --> hidden units self.out_layer = nn.Linear(n_hidden, 1) # hidden units --> output def forward(self, r): h = torch.relu(self.in_layer(r)) # h is size (n_inputs, n_hidden) y = self.out_layer(h) # y is size (n_inputs, 1) return y # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize a deep network with M=200 hidden units net = DeepNetReLU(n_neurons, 200) # Get neural responses (r) to and orientation (ori) to one stimulus in dataset r, ori = get_data(1, resp_train, stimuli_train) # using helper function get_data # Decode orientation from these neural responses using initialized network out = net(r) # compute output from network, equivalent to net.forward(r) print('decoded orientation: %.2f degrees' % out) print('true orientation: %.2f degrees' % ori) # - # You should see that the decoded orientation is 0.17 $^{\circ}$ while the true orientation is 139.00 $^{\circ}$. # --- # # Section 3: Loss functions and gradient descent # # + cellView="form" # @title Video 3: Loss functions & gradient descent from ipywidgets import widgets out2 = widgets.Output() with out2: from IPython.display import IFrame class BiliVideo(IFrame): def __init__(self, id, page=1, width=400, height=300, **kwargs): self.id=id src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page) super(BiliVideo, self).__init__(src, width, height, **kwargs) video = BiliVideo(id="BV19k4y1271n", width=854, height=480, fs=1) print('Video available at https://www.bilibili.com/video/{0}'.format(video.id)) display(video) out1 = widgets.Output() with out1: from IPython.display import YouTubeVideo video = YouTubeVideo(id="aEtKpzEuviw", width=854, height=480, fs=1, rel=0) print('Video available at https://youtube.com/watch?v=' + video.id) display(video) out = widgets.Tab([out1, out2]) out.set_title(0, 'Youtube') out.set_title(1, 'Bilibili') display(out) # - # This video covers loss functions, gradient descent, and how to implement these in Pytorch. # # # # ### Section 3.1: Loss functions # # *Estimated timing to here from start of tutorial: 40 min* # # Because the weights of the network are currently randomly chosen, the outputs of the network are nonsense: the decoded stimulus orientation is nowhere close to the true stimulus orientation. We'll shortly write some code to change these weights so that the network does a better job of decoding. # # But to do so, we first need to define what we mean by "better". 
One simple way of defining this is to use the squared error # \begin{equation} # L = (y - \tilde{y})^2 # \end{equation} # where $y$ is the network output and $\tilde{y}$ is the true stimulus orientation. When the decoded stimulus orientation is far from the true stimulus orientation, $L$ will be large. We thus refer to $L$ as the **loss function**, as it quantifies how *bad* the network is at decoding stimulus orientation. # #
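# As a small, hedged sanity check (toy numbers, not the tutorial data), the squared-error loss defined above averaged over several samples is just the mean of squared differences, which is what the built-in `nn.MSELoss` used below computes:

# +
# Squared-error loss by hand vs. nn.MSELoss (toy numbers)
import torch
from torch import nn

y = torch.tensor([[10.0], [20.0], [30.0]])        # made-up decoded orientations
y_tilde = torch.tensor([[12.0], [18.0], [33.0]])  # made-up true orientations

loss_fn = nn.MSELoss()
print(loss_fn(y, y_tilde))                 # tensor(5.6667)
print(((y - y_tilde) ** 2).mean())         # same value computed by hand
# -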
    # Click here for text recap of relevant part of video # # First we run the neural responses through the network `net` to get the output `out`. Then we declare our loss function, we will use the built in `nn.MSELoss` function for this purpose: `loss_fn = nn.MSELoss()`. This loss function takes two inputs, the network output `out` and the true stimulus orientations `ori` and finds the mean squared error: `loss = loss_fn(out, ori)`. Specifically, it will take as arguments a **batch** of network outputs $y_1, y_2, \ldots, y_P$ and corresponding target outputs $\tilde{y}_1, \tilde{y}_2, \ldots, \tilde{y}_P$, and compute the **mean squared error (MSE)** # \begin{equation} # L = \frac{1}{P}\sum_{n=1}^P \left(y^{(n)} - \tilde{y}^{(n)}\right)^2 # \end{equation} # where $P$ is the number of different stimuli in a batch, called the *batch size*. # # # **Computing MSE** # # # Evaluate the mean squared error for a deep network with $M=10$ rectified linear units, on the decoded orientations from neural responses to 20 random stimuli. # + # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize a deep network with M=10 hidden units net = DeepNetReLU(n_neurons, 10) # Get neural responses to first 20 stimuli in the data set r, ori = get_data(20, resp_train, stimuli_train) # Decode orientation from these neural responses out = net(r) # Initialize PyTorch mean squared error loss function (Hint: look at nn.MSELoss) loss_fn = nn.MSELoss() # Evaluate mean squared error loss = loss_fn(out, ori) print('mean squared error: %.2f' % loss) # - # You should see a mean squared error of 42949.14. # ### Section 3.2: Optimization with gradient descent # *Estimated timing to here from start of tutorial: 50 min* # #
    # Click here for text recap of relevant part of video # # # Next we minimize this loss function using gradient descent. In **gradient descent** we compute the gradient of the loss function with respect to each parameter (all W’s and b’s). We then update the parameters by subtracting the learning rate times the gradient. # # Let’s visualize this loss function $L$ with respect to a weight $w$. If the gradient is positive (the slope $\frac{dL}{dw}$ > 0) as in this case then we want to move in the opposite direction which is negative. So we update the $w$ accordingly in the negative direction on each iteration. Once the iterations complete the weight will ideally be at a value that minimizes the cost function. # # In reality these cost functions are not convex like this one and depend on hundreds of thousands of parameters. There are tricks to help navigate this rocky cost landscape such as adding momentum or changing the optimizer but we won’t have time to get into that today. There are also ways to change the architecture of the network to improve optimization, such as including skip connections. These skip connections are used in residual networks and allow for the optimization of many layer networks. # #
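# Before the PyTorch exercise, here is a minimal, hedged sketch of the update rule just described, applied to a toy one-dimensional loss $L(w) = (w - 3)^2$ (illustration only, not part of the tutorial pipeline):

# +
# Gradient descent on a toy 1-D loss L(w) = (w - 3)^2
import torch

w = torch.tensor([0.0], requires_grad=True)
learning_rate = 0.1

for _ in range(50):
    loss = (w - 3.0) ** 2            # evaluate the loss
    if w.grad is not None:
        w.grad.zero_()               # clear the previous gradient
    loss.backward()                  # compute dL/dw with autograd
    with torch.no_grad():
        w -= learning_rate * w.grad  # step down the gradient

print(w.item())                      # close to 3, the minimum of the loss
# -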
    # # # # # # + cellView="form" #@markdown Execute this cell to view gradient descent gif from IPython.display import Image Image(url='https://github.com/NeuromatchAcademy/course-content/blob/master/tutorials/static/grad_descent.gif?raw=true') # - # We'll use the **gradient descent (GD)** algorithm to modify our weights to reduce the loss function, which consists of iterating three steps. # # 1. **Evaluate the loss** on the training data, # ``` # out = net(train_data) # loss = loss_fn(out, train_labels) # ``` # where `train_data` are the network inputs in the training data (in our case, neural responses), and `train_labels` are the target outputs for each input (in our case, true stimulus orientations). # 2. **Compute the gradient of the loss** with respect to each of the network weights. In PyTorch, we can do this with the `.backward()` method of the loss `loss`. Note that the gradients of each parameter need to be cleared before calling `.backward()`, or else PyTorch will try to accumulate gradients across iterations. This can again be done using built-in optimizers via the method `.zero_grad()`. Putting these together we have # ``` # optimizer.zero_grad() # loss.backward() # ``` # 3. **Update the network weights** by descending the gradient. In Pytorch, we can do this using built-in optimizers. We'll use the `optim.SGD` optimizer (documentation [here](https://pytorch.org/docs/stable/optim.html#torch.optim.SGD)) which updates parameters along the negative gradient, scaled by a learning rate. To initialize this optimizer, we have to tell it # * which parameters to update, and # * what learning rate to use # # For example, to optimize *all* the parameters of a network `net` using a learning rate of .001, the optimizer would be initialized as follows # ``` # optimizer = optim.SGD(net.parameters(), lr=.001) # ``` # where `.parameters()` is a method of the `nn.Module` class that returns a [Python generator object](https://wiki.python.org/moin/Generators) over all the parameters of that `nn.Module` class (in our case, $\mathbf{W}^{in}, \mathbf{b}^{in}, \mathbf{W}^{out}, \mathbf{b}^{out}$). # # After computing all the parameter gradients in step 2, we can then update each of these parameters using the `.step()` method of this optimizer, # ``` # optimizer.step() # ``` # In the next exercise, we'll give you a code skeleton for implementing the GD algorithm. Your job will be to fill in the blanks. # # For the mathematical details of the GD algorithm, see [bonus section 2.1](#b21). # # In this case we are using gradient descent (not *stochastic* gradient descent) because we are computing the gradient over ALL training data at once. Normally there is too much training data to do this in practice, and for instance the neural responses may be divided into sets of 20 stimuli. An **epoch** in deep learning is defined as the forward and backward pass of all the training data through the network. We will run the forward and backward pass of the network here for 20 **epochs**, in practice training may require thousands of epochs. # # See [bonus section 2.2](#b22) for a more detailed discussion of stochastic gradient descent. # #### Coding Exercise 3.2: Gradient descent in PyTorch # # Complete the function `train()` that uses the gradient descent algorithm to optimize the weights of a given network. 
This function takes as input arguments # * `net`: the PyTorch network whose weights to optimize # * `loss_fn`: the PyTorch loss function to use to evaluate the loss # * `train_data`: the training data to evaluate the loss on (i.e. neural responses to decode) # * `train_labels`: the target outputs for each data point in `train_data` (i.e. true stimulus orientations) # # We will then train a neural network on our data and plot the loss (mean squared error) over time. When we run this function, behind the scenes PyTorch is actually changing the parameters inside this network to make the network better at decoding, so its weights will now be different than they were at initialization. # + def train(net, loss_fn, train_data, train_labels, n_epochs=50, learning_rate=1e-4): """Run gradient descent to opimize parameters of a given network Args: net (nn.Module): PyTorch network whose parameters to optimize loss_fn: built-in PyTorch loss function to minimize train_data (torch.Tensor): n_train x n_neurons tensor with neural responses to train on train_labels (torch.Tensor): n_train x 1 tensor with orientations of the stimuli corresponding to each row of train_data n_epochs (int, optional): number of epochs of gradient descent to run learning_rate (float, optional): learning rate to use for gradient descent Returns: (list): training loss over iterations """ # Initialize PyTorch SGD optimizer optimizer = optim.SGD(net.parameters(), lr=learning_rate) # Placeholder to save the loss at each iteration train_loss = [] # Loop over epochs for i in range(n_epochs): ###################################################################### ## TO DO for students: fill in missing code for GD iteration raise NotImplementedError("Student exercise: write code for GD iterations") ###################################################################### # compute network output from inputs in train_data out = ... # compute network output from inputs in train_data # evaluate loss function loss = loss_fn(out, train_labels) # Clear previous gradients ... # Compute gradients ... # Update weights ... 
# Store current value of loss train_loss.append(loss.item()) # .item() needed to transform the tensor output of loss_fn to a scalar # Track progress if (i + 1) % (n_epochs // 5) == 0: print(f'iteration {i + 1}/{n_epochs} | loss: {loss.item():.3f}') return train_loss # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize network with 10 hidden units net = DeepNetReLU(n_neurons, 10) # Initialize built-in PyTorch MSE loss function loss_fn = nn.MSELoss() # Run gradient descent on data train_loss = train(net, loss_fn, resp_train, stimuli_train) # Plot the training loss over iterations of GD plot_train_loss(train_loss) # + # to_remove solution def train(net, loss_fn, train_data, train_labels, n_epochs=50, learning_rate=1e-4): """Run gradient descent to opimize parameters of a given network Args: net (nn.Module): PyTorch network whose parameters to optimize loss_fn: built-in PyTorch loss function to minimize train_data (torch.Tensor): n_train x n_neurons tensor with neural responses to train on train_labels (torch.Tensor): n_train x 1 tensor with orientations of the stimuli corresponding to each row of train_data n_epochs (int, optional): number of epochs of gradient descent to run learning_rate (float, optional): learning rate to use for gradient descent Returns: (list): training loss over iterations """ # Initialize PyTorch SGD optimizer optimizer = optim.SGD(net.parameters(), lr=learning_rate) # Placeholder to save the loss at each iteration train_loss = [] # Loop over epochs for i in range(n_epochs): # compute network output from inputs in train_data out = net(train_data) # compute network output from inputs in train_data # evaluate loss function loss = loss_fn(out, train_labels) # Clear previous gradients optimizer.zero_grad() # Compute gradients loss.backward() # Update weights optimizer.step() # Store current value of loss train_loss.append(loss.item()) # .item() needed to transform the tensor output of loss_fn to a scalar # Track progress if (i + 1) % (n_epochs // 5) == 0: print(f'iteration {i + 1}/{n_epochs} | loss: {loss.item():.3f}') return train_loss # Set random seeds for reproducibility np.random.seed(1) torch.manual_seed(1) # Initialize network with 10 hidden units net = DeepNetReLU(n_neurons, 10) # Initialize built-in PyTorch MSE loss function loss_fn = nn.MSELoss() # Run gradient descent on data train_loss = train(net, loss_fn, resp_train, stimuli_train) # Plot the training loss over iterations of GD with plt.xkcd(): plot_train_loss(train_loss) # - # **We can further improve our model - please see the Bonus Tutorial when you have time to dive deeper into this model by evaluating and improving its performance by visualizing the weights, looking at performance on test data, switching to a new loss function and adding regularization.** # --- # # Summary # # *Estimated timing of tutorial: 1 hour, 20 minutes* # # We have now covered a number of common and powerful techniques for applying deep learning to decoding from neural data, some of which are common to almost any machine learning problem: # * Building and training deep networks using the **PyTorch** `nn.Module` class and built-in **optimizers** # * Choosing **loss functions** # # An important aspect of this tutorial was the `train()` function we wrote in coding exercise 3.2. Note that it can be used to train *any* network to minimize *any* loss function on *any* training data. This is the power of using PyTorch to train neural networks and, for that matter, **any other model**! 
There is nothing in the `nn.Module` class that forces us to use `nn.Linear` layers that implement neural network operations. You can actually put anything you want inside the `.__init__()` and `.forward()` methods of this class. As long as its parameters and computations involve only `torch.Tensor`'s, and the model is differentiable, you'll then be able to optimize the parameters of this model in exactly the same way we optimized the deep networks here. # # What kinds of conclusions can we draw from these sorts of analyses? If we can decode the stimulus well from visual cortex activity, that means that there is information about this stimulus available in visual cortex. Whether or not the animal uses that information to make decisions is not determined from an analysis like this. In fact mice perform poorly in orientation discrimination tasks compared to monkeys and humans, even though they have information about these stimuli in their visual cortex. Why do you think they perform poorly in orientation discrimination tasks? # # See [this paper](https://www.biorxiv.org/content/10.1101/679324v2) for some potential hypotheses, but this is totally an open question! # --- # # Bonus # # ## Bonus Section 1: Neural network *depth*, *width* and *expressivity* # # Two important architectural choices that always have to be made when constructing deep feed-forward networks like those used here are # * the number of hidden layers, or the network's *depth* # * the number of units in each layer, or the layer *widths* # # Here, we restricted ourselves to networks with a single hidden layer with a width of $M$ units, but it is easy to see how this code could be adapted to arbitrary depths. Adding another hidden layer simply requires adding another `nn.Linear` module to the `__init__()` method and incorporating it into the `.forward()` method. # # The depth and width of a network determine the set of input/output transormations that it can perform, often referred to as its *expressivity*. The deeper and wider the network, the more *expressive* it is; that is, the larger the class of input/output transformations it can compute. In fact, it turns out that an infinitely wide *or* infinitely deep networks can in principle [compute (almost) *any* input/output transformation](https://en.wikipedia.org/wiki/Universal_approximation_theorem). # # A classic mathematical demonstration of the power of depth is given by the so-called [XOR problem](https://medium.com/@jayeshbahire/the-xor-problem-in-neural-networks-50006411840b#:~:text=The%20XOr%2C%20or%20%E2%80%9Cexclusive%20or,value%20if%20they%20are%20equal.). This toy problem demonstrates how even a single hidden layer can drastically expand the set of input/output transformations a network can perform, relative to a shallow network with no hidden layers. The key intuition is that the hidden layer allows you to represent the input in a new format, which can then allow you to do almost anything you want with it. The *wider* this hidden layer, the more flexibility you have in this representation. In particular, if you have more hidden units than input units, then the hidden layer representation of the input is higher-dimensional than the raw data representation. This higher dimensionality effectively gives you more "room" to perform arbitrary computations in. It turns out that even with just this one hidden layer, if you make it wide enough you can actually approximate any input/output transformation you want. 
See [here](http://neuralnetworksanddeeplearning.com/chap4.html) for a neat visual demonstration of this. # # In practice, however, it turns out that increasing depth seems to grant more expressivity with fewer units than increasing width does (for reasons that are not well understood). It is for this reason that truly *deep* networks are almost always used in machine learning, which is why this set of techniques is often referred to as *deep* learning. # # That said, there is a cost to making networks deeper and wider. The bigger your network, the more parameters (i.e. weights and biases) it has, which need to be optimized! The extra expressivity afforded by higher width and/or depth thus carries with it (at least) two problems: # * optimizing more parameters usually requires more data # * a more highly parameterized network is more prone to overfit to the training data, so requires more sophisticated optimization algorithms to ensure generalization # ## Bonus Section 2: Gradient descent # # ### Bonus Section 2.1: Gradient descent equations # # Here we provide the equations for the three steps of the gradient descent algorithm, as applied to our decoding problem: # # 1. **Evaluate the loss** on the training data. For a mean squared error loss, this is given by # \begin{equation} # L = \frac{1}{P}\sum_{n=1}^P (y^{(n)} - \tilde{y}^{(n)})^2 # \end{equation} # where $y^{(n)}$ denotes the stimulus orientation decoded from the population response $\mathbf{r}^{(n)}$ to the $n$th stimulus in the training data, and $\tilde{y}^{(n)}$ is the true orientation of that stimulus. $P$ denotes the total number of data samples in the training set. In the syntax of our `train()` function above, $\mathbf{r}^{(n)}$ is given by `train_data[n, :]` and $\tilde{y}^{(n)}$ by `train_labels[n]`. # # 2. **Compute the gradient of the loss** with respect to each of the network weights. In our case, this entails computing the quantities # \begin{equation} # \frac{\partial L}{\partial \mathbf{W}^{in}}, \frac{\partial L}{\partial \mathbf{b}^{in}}, \frac{\partial L}{\partial \mathbf{W}^{out}}, \frac{\partial L}{\partial \mathbf{b}^{out}} # \end{equation} # Usually, we would require lots of math in order to derive each of these gradients, and lots of code to compute them. But this is where PyTorch comes to the rescue! Using a cool technique called [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), PyTorch automatically calculates these gradients when the `.backward()` function is called. # # More specifically, when this function is called on a particular variable (e.g. `loss`, as above), PyTorch will compute the gradients with respect to each network parameter. These are computed and stored behind the scenes, and can be accessed through the `.grad` attribute of each of the network's parameters. As we saw above, however, we actually never need to look at or call these gradients when implementing gradient descent, as this can be taken care of by PyTorch's built-in optimizers, like `optim.SGD`. # # 3. 
**Update the network weights** by descending the gradient: # \begin{align} # \mathbf{W}^{in} &\leftarrow \mathbf{W}^{in} - \alpha \frac{\partial L}{\partial \mathbf{W}^{in}} \\ # \mathbf{b}^{in} &\leftarrow \mathbf{b}^{in} - \alpha \frac{\partial L}{\partial \mathbf{b}^{in}} \\ # \mathbf{W}^{out} &\leftarrow \mathbf{W}^{out} - \alpha \frac{\partial L}{\partial \mathbf{W}^{out}} \\ # \mathbf{b}^{out} &\leftarrow \mathbf{b}^{out} - \alpha \frac{\partial L}{\partial \mathbf{b}^{out}} # \end{align} # where $\alpha$ is called the **learning rate**. This **hyperparameter** of the SGD algorithm controls how far we descend the gradient on each iteration. It should be as large as possible so that fewer iterations are needed, but not too large so as to avoid parameter updates from skipping over minima in the loss landscape. # # While the equations written down here are specific to the network and loss function considered in this tutorial, the code provided above for implementing these three steps is completely general: no matter what loss function or network you are using, exactly the same commands can be used to implement these three steps. # # The way that the gradients are calculated is called **backpropagation**. We have a loss function: # \begin{align} # L &= (y - \tilde{y})^2 \\ # &= (\mathbf{W}^{out} \mathbf{h} - \tilde{y})^2 # \end{align} # where $\mathbf{h} = \phi(\mathbf{W}^{in} \mathbf{r} + \mathbf{b}^{in})$ # You may see that $\frac{\partial L}{\partial \mathbf{W}^{out}}$ is simple to calculate as it is on the outside of the equation (it is also a vector in this case, not a matrix, so the derivative is standard): # \begin{equation} # \frac{\partial L}{\partial \mathbf{W}^{out}} = 2 (\mathbf{h} - \tilde{y}) # \end{equation} # Now let's compute the derivative with respect to $\mathbf{W}^{in}$ using the chain rule. Note it is only positive if the output is positive due to the RELU $\phi$: # \begin{align} # \frac{\partial L}{\partial \mathbf{W}^{in}} &= \begin{cases} # \frac{\partial L}{\partial \mathbf{W}^{out}} \frac{\partial \mathbf{h}}{\partial \mathbf{W}^{in}} & \text{if } \mathbf{h} > 0 \\ # 0 & \text{else} # \end{cases} \\ # &= \begin{cases} # 2 (\mathbf{h} - \tilde{y}) \mathbf{r}^\top & \text{if } \mathbf{h} > 0 \\ # 0 & \text{else} # \end{cases} # \end{align} # It is most efficient to compute the derivative once for the last layer, then once for the next layer and multiply by the previous layer's derivative and so on using the chain rule. Each of these operations is relatively fast, making training of deep networks feasible. # # The command `loss.backward()` computes these gradients for the defined `loss` with respect to each network parameter. The computation is done using [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), which implements backpropagation. Note that this works no matter how big/small the network is, allowing us to perform gradient descent for any deep network model built using PyTorch. # # ### Bonus Section 2.2: *Stochastic* gradient descent (SGD) vs. gradient descent (GD) # # In this tutorial, we used the gradient descent algorithm, which differs in a subtle yet very important way from the more commonly used **stochastic gradient descent (SGD)** algorithm. The key difference is in the very first step of each iteration, where in the GD algorithm we evaluate the loss *at every data sample in the training set*. 
In SGD, on the other hand, we evaluate the loss only at a random subset of data samples from the full training set, called a **mini-batch**. At each iteration, we randomly sample a mini-batch to perform steps 1-3 on. All the above equations still hold, but now the $P$ data samples $\mathbf{r}^{(n)}, \tilde{y}^{(n)}$ denote a mini-batch of $P$ random samples from the training set, rather than the whole training set. # # There are several reasons why one might want to use SGD instead of GD. The first is that the training set might be too big, so that we actually can't actually evaluate the loss on every single data sample in it. In this case, GD is simply infeasible, so we have no choice but to turn to SGD, which bypasses the restrictive memory demands of GD by sub-sampling the training set into smaller mini-batches. # # But, even when GD is feasible, SGD turns out to often be better. The stochasticity induced by the extra random sampling step in SGD effectively adds some noise in the search for local minima of the loss function. This can be really useful for avoiding potential local minima, and enforce that whatever minimum is converged to is a good one. This is particularly important when networks are wider and/or deeper, in which case the large number of parameters can lead to overfitting. # # Here, we used only GD because (1) it is simpler, and (2) it suffices for the problem being considered here. Because we have so many neurons in our data set, decoding is not too challenging and doesn't require a particularly deep or wide network. The small number of parameters in our deep networks therefore can be optimized without a problem using GD. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## CLAHE of Black and White Image: # Contrast Limited Adaptive Histogram Equalization(CLAHE) of a Black and White Image is fairly straight forward. The image after CLAHE has Cumulative Distribution Function(CDF) as approximately a **straight line**. 
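# Before the full example below, a small hedged sketch of what the two CLAHE parameters do: `clipLimit` caps how strongly any histogram bin is amplified (limiting noise amplification), and `tileGridSize` sets the grid of local regions that are equalized independently. The printed values are only indicative; the effect depends on the input image.

# +
# Hedged sketch: effect of clipLimit on local contrast (uses the same 'lc.jpeg' as below)
import cv2

img = cv2.imread('lc.jpeg', 0)      # grayscale
for clip in (1, 2, 4):
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    out = clahe.apply(img)
    print(clip, out.std())          # a higher clip limit typically yields higher contrast
# -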
#Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import cv2
import matplotlib.image as mpimg

# +
#Reading the original image, here 0 implies that the image is read as grayscale
image = cv2.imread('lc.jpeg', 0)

#Generating the histogram of the original image
hist, bins = np.histogram(image.flatten(), 256, [0, 256])

#Generating the cumulative distribution function of the original image
cdf = hist.cumsum()
cdf_normalized = cdf * hist.max() / cdf.max()

# +
#Creating CLAHE
clahe = cv2.createCLAHE(clipLimit=2, tileGridSize=(8, 8))

#Apply CLAHE to the original image
image_clahe = clahe.apply(image)

#Generating the histogram of the image after applying CLAHE
hist_clahe, bins_clahe = np.histogram(image_clahe.flatten(), 256, [0, 256])

#Generating the cumulative distribution function of the CLAHE image
cdf_clahe = hist_clahe.cumsum()
cdf_clahe_normalized = cdf_clahe * hist_clahe.max() / cdf_clahe.max()

# +
#Plotting the Original and CLAHE Image, Histogram and CDF
fig, axs = plt.subplots(2, 2)

# The images are single-channel (grayscale), so convert with GRAY2RGB for display
axs[0, 0].imshow(cv2.cvtColor(image, cv2.COLOR_GRAY2RGB))
axs[0, 0].axis('off')
axs[0, 0].set_title('Original Image')

axs[0, 1].imshow(cv2.cvtColor(image_clahe, cv2.COLOR_GRAY2RGB))
axs[0, 1].axis('off')
axs[0, 1].set_title('CLAHE Image')

axs[1, 0].plot(cdf_normalized, color='b')
axs[1, 0].hist(image.flatten(), 256, [0, 256], color='r')
axs[1, 0].legend(('cdf', 'histogram'), loc='upper left')

axs[1, 1].plot(cdf_clahe_normalized, color='b')
axs[1, 1].hist(image_clahe.flatten(), 256, [0, 256], color='r')
axs[1, 1].legend(('cdf_clahe', 'histogram_clahe'), loc='upper left')

# Hide x labels and tick labels for top plots and y ticks for right plots.
for ax in axs.flat:
    ax.label_outer()
# -

# ## Histogram Equalization of Color Images
# Histogram Equalization of color images is a little complicated. OpenCV loads color images in BGR color space. With this color space, it is not possible to equalize the histogram without affecting the color information, because all 3 channels contain color information. Therefore you have to convert the BGR image to a color space like YCrCb.
    # In YCrCb color space, the **Y channel** of the image only contains intensity information where as Cr and Cb channels contain all the color information of the image. Therefore only the Y channel should be processed to get an image after applying CLAHE without changing any color information. # + #Reading the original image, here 1 implies that image is read as color image_c = cv2.imread('lc.jpeg', 1) #Generating the histogram of the original image hist_c,bins_c = np.histogram(image_c.flatten(),256,[0,256]) #Generating the cumulative distribution function of the original image cdf_c = hist_c.cumsum() cdf_c_normalized = cdf_c * hist_c.max()/ cdf_c.max() # + #Converting the image to YCrCb image_yuv = cv2.cvtColor(image_c, cv2.COLOR_BGR2YUV) #Creating CLAHE clahe = cv2.createCLAHE(clipLimit=2, tileGridSize=(8,8)) # Applying Histogram Equalization on the original imageof the Y channel image_yuv[:,:,0] = clahe.apply(image_yuv[:,:,0]) # convert the YUV image back to RGB format image_c_clahe = cv2.cvtColor(image_yuv, cv2.COLOR_YUV2BGR) #Generating the histogram of the image after applying CLAHE hist_c_clahe, bins_c_clahe = np.histogram(image_c_clahe.flatten(),256,[0,256]) #Generating the cumulative distribution function of the original image cdf_c_clahe = hist_c_clahe.cumsum() cdf_c_clahe_normalized = cdf_c_clahe * hist_c_clahe.max()/ cdf_c_clahe.max() # + #Plotting the Original and Histogram Equalized Image, Histogram and CDF fig, axs = plt.subplots(2, 2) axs[0, 0].imshow(cv2.cvtColor(image_c, cv2.COLOR_BGR2RGB)) axs[0, 0].axis('off') axs[0, 0].set_title('Original Image') axs[0, 1].imshow(cv2.cvtColor(image_c_clahe, cv2.COLOR_BGR2RGB)) axs[0, 0].axis('off') axs[0, 1].set_title('Image after CLAHE') axs[1, 0].plot(cdf_c_normalized, color = 'b') axs[1, 0].hist(image_c.flatten(),256,[0,256], color = 'r') axs[1, 0].legend(('cdf','histogram'), loc = 'upper left') axs[1, 1].plot(cdf_c_clahe_normalized, color = 'b') axs[1, 1].hist(image_c_clahe.flatten(),256,[0,256], color = 'r') axs[1, 1].legend(('cdf_clahe','histogram_clahe'), loc = 'upper left') # Hide x labels and tick labels for top plots and y ticks for right plots. for ax in axs.flat: ax.label_outer() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="VfGzS7lzmZr3" # # Text Pre Processor - Experimento # # Este é um componente que utiliza a biblioteca [nltk](https://www.nltk.org/) e [ftfy](https://pypi.org/project/ftfy/) e [regex](https://docs.python.org/3/library/re.html) para pré processar textos que entrrão em outros componentes. # # Este notebook apresenta: # - como usar o [SDK](https://platiagro.github.io/sdk/) para carregar datasets, salvar modelos e outros artefatos. # - como declarar parâmetros e usá-los para criar componentes reutilizáveis. # + [markdown] colab_type="text" id="tD7cOYh6mZr5" # ## Declaração de parâmetros e hiperparâmetros # # Declare parâmetros com o botão na barra de ferramentas.
    # O parâmetro `dataset` identifica os conjuntos de dados. Você pode importar arquivos de dataset com o botão na barra de ferramentas. # + colab={} colab_type="code" id="jK9Bd6X7mZr6" tags=["parameters"] # parâmetros dataset = "/tmp/data/imdb-2.csv" #@param {type:"string"} target = "sentiment" #@param {type:"string", label:"Atributo alvo", description:"Seu modelo será treinado para prever os valores do alvo."} language = "english" #@param ["portuguese", "english"] {type:"string", label:"Linguagem", description:"Linguagem da qual os stopwords pertencem. Deve ser a mesma utilizada no dataset."} # selected features to perform the model filter_type = "incluir" #@param ["incluir","remover"] {type:"string",label:"Modo de seleção das features", description:"Se deseja informar quais features deseja incluir no modelo, selecione a opção [incluir]. Caso deseje informar as features que não devem ser utilizadas, selecione [remover]."} #model_features = "review" #@param {type:"string",multiple:true,label:"Features para incluir/remover no modelo",description:"Seu modelo será feito considerando apenas as features selecionadas. Caso nada seja especificado, todas as features serão utilizadas"} model_features = "review" #@param {type:"string"} # preprocessamento case = "Lower" #@param ["Lower","Upper","NotApply"] {type:"string",label:"Aplicação de casing", description:"Caixa baixa, caixa alta ou não aplicação de caixa"} remove_stop_words = True #@param {type:"boolean",label:"Remoção de Stop Words", description:"Remoção de palavras, conjunções, artigos e outros"} remove_top_words = True #@param {type:"boolean",label:"Remoção de Top Words", description:"Remoção dea porcentagem palavras mais frequentes no texto"} top_words_percentage = 0.01 #@param {type:"number",label:"Porcentagem de Top Words", description:"Porcentagem das palavras mais frequentes no texto"} stemming = False #@param {type:"boolean",label:"Stemming"} lemmatization = True #@param {type:"boolean",label:"Lemmatization"} remove_punctuation = True #@param {type:"boolean",label:"Remoção de pontuação"} remove_line_braks = True #@param {type:"boolean",label:"Remoção de quebras de lina",description:"Remoção de quebras de linha por \n e \r"} remove_accents = True #@param {type:"boolean",label:"Remoção de acentos"} remove_html = True #@param {type:"boolean",label:"Remoção de HTML"} remove_css = True #@param {type:"boolean",label:"Remoção de CSS"} # + [markdown] colab_type="text" id="nsJMgpBRmZr_" # ## Acesso ao conjunto de dados # # O conjunto de dados utilizado nesta etapa será o mesmo carregado através da plataforma.
    # O tipo da variável retornada depende do arquivo de origem: # - [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) para CSV e compressed CSV: .csv .csv.zip .csv.gz .csv.bz2 .csv.xz # - [Binary IO stream](https://docs.python.org/3/library/io.html#binary-i-o) para outros tipos de arquivo: .jpg .wav .zip .h5 .parquet etc # + colab={} colab_type="code" id="dzan1wl1mZr_" outputId="b893eabd-dbe5-46a8-bc26-81a96418dc8d" import pandas as pd df = pd.read_csv(dataset) print(df) # + [markdown] colab_type="text" id="duCQT-2YmZsF" # ## Acesso aos metadados do conjunto de dados # # Utiliza a função `stat_dataset` do [SDK da PlatIAgro](https://platiagro.github.io/sdk/) para carregar metadados.
    # Por exemplo, arquivos CSV possuem `metadata['featuretypes']` para cada coluna no conjunto de dados (ex: categorical, numerical, or datetime). # + colab={} colab_type="code" id="qyNVXA7CmZsG" import numpy as np #from platiagro import stat_dataset #metadata = stat_dataset(name=dataset) #featuretypes = metadata["featuretypes"] columns = df.columns.to_numpy() #featuretypes = np.array(featuretypes) target_index = np.argwhere(columns == target) columns = np.delete(columns, target_index) #featuretypes = np.delete(featuretypes, target_index) # + [markdown] colab_type="text" id="qtyfRV_0mZsJ" # ## Remoção de linhas com valores faltantes no atributo alvo # # Caso haja linhas em que o atributo alvo contenha valores faltantes, é feita a remoção dos casos faltantes. # + colab={} colab_type="code" id="Vh8OXGLImZsK" df.dropna(subset = [target],inplace=True) df.dropna(subset = [model_features],inplace=True) y = df[target].to_numpy() # + [markdown] colab_type="text" id="bRlxsXqwmZsO" # ## Filtragem das features # # Seleciona apenas as features que foram declaradas no parâmetro model_features. Se nenhuma feature for especificada, todo o conjunto de dados será utilizado para a modelagem. # + colab={} colab_type="code" id="B2viBFvUmZsP" if filter_type == 'incluir': if len(model_features) >= 1: columns_index = (np.where(np.isin(columns,model_features)))[0] columns_index.sort() columns_to_filter = columns[columns_index] #featuretypes = featuretypes[columns_index] else: columns_to_filter = columns else: if len(model_features) >= 1: columns_index = (np.where(np.isin(columns,model_features)))[0] columns_index.sort() columns_to_filter = np.delete(columns,columns_index) #featuretypes = np.delete(featuretypes,columns_index) else: columns_to_filter = columns # keep the features selected df_model = df[columns_to_filter] X = df_model.to_numpy() # + [markdown] colab_type="text" id="dUySX0csmZsT" # ## Codifica labels do atributo alvo # # As labels do atributo alvo são convertidos em números inteiros ordinais com valor entre 0 e n_classes-1. # + colab={} colab_type="code" id="OBbq--yEmZsU" from sklearn.preprocessing import LabelEncoder label_encoder = LabelEncoder() y = label_encoder.fit_transform(y) # + [markdown] colab_type="text" id="tqtHW62PmZsc" # ## Definição dos filtros # + colab={"base_uri": "https://localhost:8080/", "height": 373} colab_type="code" id="Ic4xH_S3mZsg" outputId="37afeb30-6cb0-43d3-c272-e9ed67a8fbb9" import string from collections import defaultdict from functools import reduce from re import sub import unidecode from ftfy import fix_text from nltk.stem import PorterStemmer, WordNetLemmatizer from nltk.tokenize import word_tokenize def tokenize_text(text_list: list = None): """Tokenize Text without the hyperparâmeters defined. Args: text_list (list): a list of texts to be used. Returns: A list of tokenized text without punctuation. 
""" tokenize_list = list() for text in text_list: text = text[0] text = fix_text(text) text = sub("<.*?>", " ", text) if remove_html else text text = sub("{.*?}", " ", text) if remove_css else text text = unidecode.unidecode(text) if remove_accents else text text = sub("/\r\n|\n|\r|", "", text) if remove_line_braks else text text = ( sub("[" + string.punctuation + "]", "", text) if remove_punctuation else text ) text = sub(" +", " ", text) # only to avoid multiple spaces text = text.split(" ") tokenize_list.append(text) return tokenize_list def top_tokens_stopwords(sentence_list: list, percentage: float = 0.01): """Selects the most relevant stops words of the tokerized texts. Args: sentence_list (list): list of tokens. percentage (float): percentage threshold. """ percentage = top_words_percentage vocabulary = defaultdict(int) for sample in sentence_list: for token in sample: vocabulary[token] += 1 all_tokens = sorted(vocabulary.items(), key=lambda token: token[1], reverse=True) top_tokens = all_tokens[: int(len(all_tokens) * percentage)] return [token[0] for token in top_tokens] def remove_specific_tokens(sentence_list: list, tokens_to_be_removed: list = None): """Removes specific tokens from a token list. Args: sentence_list (list): list of tokens from which other tokens will be removed. tokens_to_be_removed (list): list of tokens that need to be removed. """ sentence_list_ = list() sentence_list_ = [x for x in sentence_list if x not in tokens_to_be_removed] return sentence_list_ def apply_stemming(sentence_list: list): ps = PorterStemmer() sentence_list = [ [ps.stem(word) for word in token_list] for token_list in sentence_list ] return sentence_list def apply_lemmatization(sentence_list: list): lemmatizer = WordNetLemmatizer() sentence_list = [ [lemmatizer.lemmatize(word) for word in token_list] for token_list in sentence_list ] return sentence_list def apply_casing(sentence_list: list, case: str): if case == "Lower": sentence_list = [ [word.lower() for word in token_list] for token_list in sentence_list ] elif case == "Upper": sentence_list = [ [word.upper() for word in token_list] for token_list in sentence_list ] else: pass return sentence_list def token_restructuring(sentence_list: list): """Reduce a nested list of tokens to a single list (1D). Args: sentence_list (list): list to be work on. 
""" return reduce(lambda x, y: x + y, sentence_list) # + [markdown] colab_type="text" id="miiSextFmZso" # ## Aplicação dos filtros # + [markdown] colab_type="text" id="QUZAOGE8mZso" # Baixando stop words do idioma especificado e a wordnet para lemmatizaton # + colab={} colab_type="code" id="oWSBNLFamZsp" outputId="13c97fdc-8719-4ba0-a457-e51dc2ee196c" import nltk if remove_stop_words: # Download stopwords from nltk nltk.download('stopwords') # Get a list of stopwords for the defined language stopwords = nltk.corpus.stopwords.words(language) if lemmatization: nltk.download('wordnet') # + [markdown] colab_type="text" id="nB4iSRIOmZst" # Aplicando pré processamento # + colab={} colab_type="code" id="fmu-I2rTmZsw" vocab = tokenize_text(X) top_tokens = top_tokens_stopwords(vocab) if remove_top_words else None vocab = remove_specific_tokens(vocab,top_tokens) if remove_top_words else vocab vocab = remove_specific_tokens(vocab,stopwords) if remove_stop_words else vocab vocab = apply_stemming(vocab) if stemming else vocab vocab = apply_lemmatization(vocab) if lemmatization else vocab vocab = apply_casing(vocab,case) text = [' '.join(tokens) for tokens in vocab] # + [markdown] colab_type="text" id="l0K6F2vSmZsz" # ## Salva alterações no conjunto de dados # # O conjunto de dados será salvo (e sobrescrito com as respectivas mudanças) localmente, no container da experimentação, utilizando a função `pandas.DataFrame.to_csv`.
    # + colab={} colab_type="code" id="HnP9SI5wmZs0" outputId="df0b124e-41cd-49ca-cf11-6459746b7fa1" y_dec = list(label_encoder.inverse_transform(y)) new_columns = ['review', 'sentiment'] df_result = pd.DataFrame(list(zip(text, y_dec)),columns = new_columns) #save dataset changes df_result.to_csv(dataset, index=False) df_result # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Query and consolidate median household income and population data by Illinois census tract # + # packages import pandas as pd import numpy as np import matplotlib.pyplot as plt import matplotlib.ticker as tkr import os from datetime import datetime import requests, json # display all columns and rows pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) # display plots in ipynb # %matplotlib inline # - # # Median household income # request data from census api r = requests.get("https://api.census.gov/data/2017/acs/acs5?get=NAME,B19013_001E&for=tract:*&in=state:17&in=county:*") # json to dictionary income_data = r.json() income_data[:5] # looking at first five elements # + # convert dictionary to df income_df = pd.DataFrame(income_data[1:], columns = income_data[0]).rename(columns={ "B19013_001E":"median_household_income"}) # display first five rows income_df.head() # - # split the NAME column on the comma income_df[["census_tract_name", "county_name", "state_name"]] = income_df["NAME"].str.split(", ", expand = True) # now that the column is split, remove "NAME" column del income_df["NAME"] # create a fips (five digit code for the county) column income_df["fips"] = income_df["state"] + income_df["county"] # #### create a census tract code (this consists of state|county|tract) # create a census tract code by pasting 'state', 'county' and 'tract' columns income_df['census_tract'] = income_df['state'] + income_df['county'] + income_df['tract'] # display first five rows income_df.head() # info on the table income_df.info() # median_household_income to numeric income_df["median_household_income"] = pd.to_numeric(income_df["median_household_income"]) # descriptive stats on the table # .apply portion removes scientific notation income_df.describe().apply(lambda s: s.apply(lambda x: format(x, 'f'))) # #### It was a bit surprising to see a min value of -666666666.000000. Code below will show what values appear below 0. # + # unique median_household print(income_df.median_household_income[income_df.median_household_income < 0].unique()) # count of said unique values under 0 print(income_df.median_household_income[income_df.median_household_income < 0].value_counts()) # - # what census_tract values have this median household income? income_df[income_df.median_household_income.isin([-666666666])] # + # do any of those census_tract values appear more than once? negative_census_tract = income_df[income_df.median_household_income.isin([-666666666])]['census_tract'] # sum ocurrence of these census tracts across entire df income_df['census_tract'].isin(negative_census_tract.tolist()).sum() # - # #### So it was just that one very negative value, and it occurred for 13 records. The code below will filter out the -666666666 from the df. 
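# #### Aside (illustrative, not used in the rest of the analysis): -666666666 is the annotation value the ACS API returns when an estimate could not be computed (for example, too few sample observations). An alternative to dropping those rows would be to keep the tracts and mark the value as missing.

# +
# alternative sketch: convert the sentinel to NaN instead of filtering the rows out
income_df_nan = income_df.copy()
income_df_nan['median_household_income'] = income_df_nan['median_household_income'].replace(-666666666, np.nan)
print(income_df_nan['median_household_income'].isna().sum())  # 13 tracts, matching the count above
# -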
# subset income_df to rows with median_household_income > 0 income_df_sub = income_df.query('median_household_income > 0') income_df_sub.shape # # Bring in population # second request - this time for population r2 = requests.get("https://api.census.gov/data/2017/acs/acs5?get=NAME,B01001_001E&for=tract:*&in=state:17&in=county:*") print(r2) # + # json to dictionary pop_data = r2.json() pop_data[:5] # looking at first five rows # convert dictionary to df pop_df = pd.DataFrame(pop_data[1:], columns = pop_data[0]).rename(columns={ "B01001_001E":"population"}) # display first five rows pop_df.head() # - # #### As with median household income, split the "NAME" column. # split the NAME column on the comma pop_df[["census_tract_name", "county_name", "state_name"]] = pop_df["NAME"].str.split(", ", expand = True) # now that the column is split, remove "NAME" column del pop_df["NAME"] # create a fips (five digit code for the county) column pop_df["fips"] = pop_df["state"] + pop_df["county"] # #### Also, as with income, create a census tract number from state, county, and tract pop_df['census_tract'] = pop_df['state'] + pop_df['county'] + pop_df['tract'] # + # info print(pop_df.info()) # display first five rows as well pop_df.head() # - # population to numeric pop_df["population"] = pd.to_numeric(pop_df["population"]) # descriptive stats on table pop_df.describe(include='all') # How many census tracts have 0 people? # sort by lowest population pop_df['population'].value_counts().reset_index().sort_values(by = 'index').rename( columns = {'index':'population', 'population':'record count'})[:10] # ## Combine income and pop tables # Code below will allow for displaying multiple objects from one cell output. Code pulled from [Jake Vanderplas' book](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html) class display(object): """Display HTML representation of multiple objects""" template = """

<div style="float: left; padding: 10px;">
    <p style='font-family:"Courier New", Courier, monospace'>{0}</p>{1}
    </div>
    """ def __init__(self, *args): self.args = args def _repr_html_(self): return '\n'.join(self.template.format(a, eval(a)._repr_html_()) for a in self.args) def __repr__(self): return '\n\n'.join(a + '\n' + repr(eval(a)) for a in self.args) # display first five rows of income and population dataframes display('income_df_sub.head()', 'pop_df.head()') # is the tract a unique key? display("pop_df['census_tract'].value_counts().reset_index().head()", "income_df['census_tract'].value_counts().reset_index().head()") # join two tables on 'tract' and 'fips' combined = pd.merge(income_df_sub, pop_df, on = 'census_tract', how = 'left', suffixes = ('_income_df', '_pop_df')) print(combined.shape) combined.head() # # Explore combined table # looking at histogram of median household income in Illinois combined.median_household_income.hist() # what is the median value? combined.median_household_income.median() # Important caveat on above median: It is the median value looking at tract level, but would be different if looking statewide (about $62,000). # What are the unique values in the FIPS column? # + # count of unique values in 'fips' column print(combined['fips_income_df'].nunique()) # unique values in 'fips' column print(combined['fips_income_df'].unique()) # - # What census tract has the highest income? # return row with max value for 'median_household_income' combined.iloc[combined['median_household_income'].idxmax()] # # Final clean up and export combined.head() # #### Define columns to keep and select those columns combined.columns # + # mask to be used to filter cols_to_keep = ['fips_income_df', 'tract_income_df', 'census_tract', 'county_name_income_df', 'state_name_income_df', 'median_household_income', 'population'] # filter on above mask combined_2 = combined[cols_to_keep] # - # display first five rows combined_2.head() # + # rename the columns combined_final = combined_2.rename(columns = {'fips_income_df': 'fips', 'tract_income_df': 'tract', 'county_name_income_df': 'county_name', 'state_name_income_df': 'state'}) # print shape of final file print(combined_final.shape) # display first five rows combined_final.head() # - # #### Write above table to CSV # write to outputs folder (one level up, then outputs) # using to_excel to maintain 'fips', 'tract', and 'census_tract' columns as character combined_final.to_excel("../outputs/Illinois_Median_Household_Income_and_Population_by_Tract.xlsx", index = False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Install # !pip install mimikit # ## Imports and Source Code for Redundance Rate # + from mimikit.freqnet import FreqNetNetwork from mimikit.data import FileType from mimikit.utils import audio, signal from mimikit import NeptuneConnector import torch import numpy as np from random import randint # %matplotlib inline import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = (20, 6) # functions we need to compute the redundance rate def cosine_similarity(X, Y, eps=1e-10): """ safely computes the cosine similarity between matrices X and Y. 
Shapes: ------- X : (*, N, D) Y : (*, M, D) D_xy : (*, N, M) Notes: ------ The need for this function arises from the fact that torch.nn.CosineSimilarity only computes the diagonal of D_xy, as in cosine_sim(output, target) """ if not isinstance(eps, torch.Tensor): eps = torch.tensor(eps).to(X) dot_prod = torch.matmul(X, Y.transpose(-2, -1)) norms = torch.norm(X, p=2, dim=-1).unsqueeze_(-1) * torch.norm(Y, p=2, dim=-1).unsqueeze_(-2) cos_theta = dot_prod / torch.maximum(norms, eps) return cos_theta def angular_distance(X, Y, eps=1e-10): """ angular distance is a valid distance metric based on the cosine similarity see https://en.wikipedia.org/wiki/Cosine_similarity#Angular_distance_and_similarity Shapes: ------- X : (*, N, D) Y : (*, M, D) D_xy : (*, N, M) """ if not isinstance(eps, torch.Tensor): eps = torch.tensor(eps).to(X) def safe_acos(x): # torch.acos returns nan near -1 and 1... see https://github.com/pytorch/pytorch/issues/8069 return torch.acos(torch.clamp(x, min=-1+eps/2, max=1-eps/2)) have_negatives = torch.any(X < 0) or torch.any(Y < 0) cos_theta = cosine_similarity(X, Y, eps) pi = torch.acos(torch.zeros(1)).item() * 2 D_xy = (1 + int(not have_negatives)) * safe_acos(cos_theta) / pi return D_xy def nearest_neighbor(X, Y): """ computes nearest neighbor by angular distance """ D_xy = angular_distance(X, Y) dists, nn = torch.min(D_xy, dim=-1) return dists, nn def torch_frame(x, frame_size, hop_length): """ helper to reshape an array into frames """ N = x.size(-1) org_size = x.size()[:-1] tmp_0 = np.prod(tuple(org_size)) new_dims = (1 + int((N - frame_size) / hop_length), frame_size) framed = torch.as_strided(x.reshape(-1, N), (tmp_0, *new_dims), (N, hop_length, 1)) return framed.reshape(*org_size, *new_dims) def repeat_rate(x, frame_size, hop_length): """ frames x and compute repeat-rate per frame """ framed = torch_frame(x, frame_size, hop_length) uniques = torch.tensor([torch.unique(row).size(0) for row in framed.reshape(-1, framed.size(-1))]) return (1 - (uniques-1) / (frame_size-1)).reshape(framed.size()[:-1], -1) # - # ![title](imgs/redundance-rate.png) # ## Setup DB & model # + nep_con = NeptuneConnector(user="k-tonal", setup=dict( db="data-and-base-notebooks/DAT-55", model="experiment-2/EX2-217" )) db_name = "MyMelodies.h5" path_to_db = "./" + db_name path_to_model = "./models/" # uncomment the ones you need : # nep_con.download_experiment("model", destination=path_to_model, artifacts="states/") # db = nep_con.download_database("db", db_name) # db = FileType(path_to_db) db.metadata # - # ### Load a Model # + epoch = 99 path_to_ckpt = path_to_model + nep_con.setup["model"].split("/")[-1] + "/states/epoch=%i.ckpt" % epoch model = FreqNetNetwork.load_from_checkpoint(path_to_ckpt, data_object=db.fft) # - # ## Generate single output # + prompt_length = 64 n_steps = 2048 # prompt index : i = randint(0, model.data.shape[0] - prompt_length) output = model.generate(model.data[i:i+prompt_length], time_domain=False, n_steps=n_steps).squeeze(0) wrt = torch.from_numpy(model.data[i+prompt_length:i+prompt_length+n_steps]).to(output).unsqueeze(0) audio(output.squeeze().numpy().T, hop_length=db.fft.attrs["hop_length"]) # - # ## Compute RR over time at mutiple levels for a single output # + # compute nearest neighbors: with torch.no_grad(): _, neighbs = nearest_neighbor(output[:, prompt_length:], wrt) # for plotting multiple levels of locality, we have one hop_length for several frame_sizes frame_size = (8, 32, 128) hop_length = 2 # compute rr and plot for fs in frame_size: with 
torch.no_grad(): r = repeat_rate(neighbs, fs, hop_length) plt.plot(r.squeeze().cpu().numpy(), label="frame size = "+str(fs)) axes = plt.gca() axes.set_ylim([-0.1, 1.1]) plt.legend() plt.ylabel('Redundance Rate') plt.xlabel('Time') plt.title('Local Redundance Rate over Time') None # - # ## Generate outputs for regularly spaced prompt indices # # > if, in the next cell, `n_prompts * n_steps` is too big, this will crash the RAM! # + # params for all prompts : prompt_length = 64 n_steps = 600 # number of prompts we will score : n_prompts = 500 # and their indices : indices = range(0, db.fft.shape[0]-prompt_length-n_steps, db.fft.shape[0] // n_prompts) # compute prompts = torch.from_numpy(np.stack([db.fft[i:i+prompt_length] for i in indices])) wrts = torch.from_numpy(np.stack([db.fft[i+prompt_length:i+prompt_length+n_steps] for i in indices])) outputs = model.generate(prompts, time_domain=False, n_steps=n_steps).squeeze(0) # - # ## Compute the mean RR for each prompt index and plot # + # compute nearest neighbors: with torch.no_grad(): _, neighbs = nearest_neighbor(outputs[:, prompt_length:], wrts) # multiple levels of locality : frame_size = (8, 32, 64) hop_length = 1 # compute rr and plot scores = {} for fs in frame_size: with torch.no_grad(): r = repeat_rate(neighbs, fs, hop_length).mean(dim=-1) scores[fs] = {i: x for i, x in zip(range(r.size(-1)), r.squeeze().cpu().numpy())} plt.plot(list(indices), r.squeeze().cpu().numpy(), label="frame size = "+str(fs)) plt.legend() axes = plt.gca() axes.set_ylim([-0.1, 1.1]) plt.ylabel('Mean Local Redundance Rate') plt.xlabel('Prompt Index') plt.title("Output's Scores") None # - # ## Listen to the "bests" and "worsts" outputs # + # pick a frame_size : fs = frame_size[0] srtd = sorted(list(scores[fs].keys()), key=lambda k: scores[fs][k]) bests = srtd[:4] worsts = srtd[-4:] print() print("Less redundants :") print() for i in bests: print("Prompt index =", list(indices)[i], "score =", scores[fs][i]) audio(outputs[i].squeeze().numpy().T, hop_length=db.fft.attrs["hop_length"]) print() print("Most redundants :") print() for i in worsts: print("Prompt index =", list(indices)[i], "score =", scores[fs][i]) audio(outputs[i].squeeze().numpy().T, hop_length=db.fft.attrs["hop_length"]) # - # ## Select prompts closest to some target score # + # pick a target score : my_target_score = 0.555 # pick a frame_size : fs = frame_size[0] srtd = sorted(list(scores[fs].keys()), key=lambda k: abs(scores[fs][k] - my_target_score)) bests = srtd[:4] print() print("Closests to target score :") print() for i in bests: print("Prompt index =", list(indices)[i], "score =", scores[fs][i]) audio(outputs[i].squeeze().numpy().T, hop_length=db.fft.attrs["hop_length"]) # --- # jupyter: # jupytext: # split_at_heading: true # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #default_exp foundation # - #export from fastcore.imports import * from fastcore.test import * from nbdev.showdoc import * # # Core # # > Basic functions used in the fastai library # export defaults = SimpleNamespace() # ## Metaclasses # See this [blog post](https://realpython.com/python-metaclasses/) for more information about metaclasses. 
# - `PrePostInitMeta` ensures that the classes defined with it run `__pre_init__` and `__post_init__` (without having to write `self.__pre_init__()` and `self.__post_init__()` in the actual `init` # - `NewChkMeta` gives the `PrePostInitMeta` functionality and ensures classes defined with it don't re-create an object of their type whenever it's passed to the constructor # - `BypassNewMeta` ensures classes defined with it can easily be casted form objects they subclass. # **NB: While a class is being defined from a metaclass, the `cls` arg in `__new__` of the metaclass points to the metaclass itself while `base` points to the defined class.** # # **This behaviour changes as soon as the class has been defined. When it is now being instantiated, the `cls` in all the dunder methods of the metaclass now point to the class being instantaited instead of the metaclass ** #export def _rm_self(sig): # literally remove `self` from the class' __init__ signature # this takes the signature of a class' __init__, gets it's args/params dict, # removes 'self' from the dict and then replaces the initial signature with the one without the 'self' sigd = dict(sig.parameters) sigd.pop('self') return sig.replace(parameters=sigd.values()) inspect.signature(object.__init__) class A(type): def __new__(cls, name, bases, dict): print('creating __new__') res = super().__new__(cls, name, bases, dict) print(res, cls, name, bases, dict) print(res) return res class B(metaclass = A): ... # the `cls` arg should point to the metaclass while `base` points to the defined class # + #export # this is the metaclass that handles `self`-stripping classes that are instatiated by this meta class FixSigMeta(type): "A metaclass that fixes the signature on classes that override __new__" def __new__(cls, name, bases, dict): # we want to change the __new__ from `type` in our new metaclass `FixSigMeta` # to represent the base class that will be inheriting this meta res = super().__new__(cls, name, bases, dict) # if the init of the child class(base class) is not a slot wrapper(init of an object) ie the class has # it's own init, strip that init signature of it's `self` arg by calling `_rm_self` if res.__init__ is not object.__init__: res.__signature__ = _rm_self(inspect.signature(res.__init__)) # return this self-stripped base class return res # - class A(type): def __call__(cls, *args, **kwargs): print(cls) class B(metaclass=A): ... 
B(3) #it should point to the defined class type(B.__new__(B)) #export # this is the metaclass inheriting a metaclass that makes sure the optional `__pre_init__` and `__post_init__` are run if they exist # We can use it to avoid ever having to call super() again in the init class PrePostInitMeta(FixSigMeta): "A metaclass that calls optional `__pre_init__` and `__post_init__` methods" def __call__(cls, *args, **kwargs): #define a new subclass of the base class # inotherwords # create a new definition of the base(defined) class inheriting this meta res = cls.__new__(cls) # if this subclass is of the same type with it's super class if type(res)==cls: # check if this subclass has `__pre_init__` if it does, call it if hasattr(res,'__pre_init__'): res.__pre_init__(*args,**kwargs) # now then run the init res.__init__(*args,**kwargs) # #if the class has a __post_init__, run it if hasattr(res,'__post_init__'): res.__post_init__(*args,**kwargs) # return the completely run class return res show_doc(PrePostInitMeta, title_level=3) # + class _T(metaclass=PrePostInitMeta): def __pre_init__(self): self.a = 0; assert self.a==0 def __init__(self,b=0): self.a += 1; assert self.a==1 def __post_init__(self): self.a += 1; assert self.a==2 t = _T() test_eq(t.a, 2) #the post init if it exists is always the last to run # - #export # NOTE: According to Jeremy, this is mostly used in L and fp16 # this is a meta that tries to make sure that if a subclass of a defined class is passed as an argument to the defined class, it knows and hence doesn't perform any operation # it ensures if the class instance is passed as an arg to the class, the class does not change class NewChkMeta(FixSigMeta): "Metaclass to avoid recreating object passed to constructor" def __call__(cls, x=None, *args, **kwargs): # if there's no args or kwarg input when the class is instantiated but x exists and it is an child of the defined class(ie it is an instance) if not args and not kwargs and x is not None and isinstance(x,cls): # break out of the and return the child (instance of defined class) as is x._newchk = 1 return x res = super().__call__(*((x,) + args), **kwargs) res._newchk = 0 return res class _T(metaclass=NewChkMeta): "Testing" def __init__(self, o=None, b=1): self.foo = getattr(o,'foo',0) + 1 self.b = b # + # building a full class with `type` # R = type('R', (object, ), {'foo': 10}) def init(self, _a=0): #the _a is just there so that self will be recognized, doesn't change much unless another arg is added self.foo = _a R = type('R', (_T, ), {'__init__': init}) #inorder to attach an init to a class defined with type, we have to use another func `init` # - issubclass(R, _T) R().foo #foo is an attribute of this which means that the `__init__` works! 
R.__new__(_T) # how to create a new instance of a class _T(R.__new__(_T)) # + class _T2(): def __init__(self, o): self.foo = getattr(o,'foo',0) + 1 t = _T(1) test_eq(t.foo,1) t2 = _T(t) test_eq(t2.foo,1) test_is(t,t2) t3 = _T(t, b=2) test_eq(t3.b, 2) assert not t3 is t t = _T2(1) test_eq(t.foo,1) t2 = _T2(t) test_eq(t2.foo,2) test_eq(_T.__doc__, "Testing") test_eq(str(inspect.signature(_T)), '(o=None, b=1)') # - #export # Not used anywhere else class BypassNewMeta(FixSigMeta): "Metaclass: casts `x` to this class if it's of type `cls._bypass_type`, initializing with `_new_meta` if available" def __call__(cls, x=None, *args, **kwargs): if hasattr(cls, '_new_meta'): x = cls._new_meta(x, *args, **kwargs) elif not isinstance(x,getattr(cls,'_bypass_type',object)) or len(args) or len(kwargs): x = super().__call__(*((x,)+args), **kwargs) if cls!=x.__class__: x.__class__ = cls return x # + class T0: pass class _T(T0, metaclass=BypassNewMeta): _bypass_type=T0 def __init__(self,x): self.x=x t = T0() t.a = 1 t2 = _T(t) test_eq(type(t2), _T) test_eq(t2.a,1) test_is(t2,t) t = _T(2) t.x = 2 # - # ## Foundational functions def f(mad='mad'): f = str(mad) + ' oh!' return f f # We can create a copy of this function simply by using `FunctionType(f.__code__, f.__globals__, f.__name__, f.__defaults__, f.__closure__)` f_ = FunctionType(f.__code__, f.__globals__, f.__name__, f.__defaults__, f.__closure__) f_ f(), f_() copy_f = copy(f) copy_f() f.__dict__ #most funcs contain empty __dict__s by default # + #demonstrating MethodType # we change a normal method to a class method class A: def __init__(self): return None # method defined outside class def foo(self): print('Hi') print(MethodType(foo, A)) setattr(A, foo.__name__, MethodType(foo, A)) A.foo() #it is a class method which can be called without instantiating the class # - #export # func makes an exact copy of another func and also resets it `__qualname__` def copy_func(f): "Copy a non-builtin function (NB `copy.copy` does not work for this)" if not isinstance(f,FunctionType): return copy(f) fn = FunctionType(f.__code__, f.__globals__, f.__name__, f.__defaults__, f.__closure__) fn.__dict__.update(f.__dict__) return fn # **NB**: I can't quite see the need for `copy_func` because unlike what he says, copy.copy work perfectly well here. # # **EDIT**: Jeremy is the smart guy here stupid... The reason why Jeremy uses `copy_func` is because `copy.copy` doesn't copy the reset the __qualname__ of a copied func if it hs been changed. 
I have done a simulation of this difference in a cell below # + def func(b: [float, int], a): return None print(f'initial: {func.__qualname__}') func.__qualname__ = '_T4.func' print(f'changed qual: {func.__qualname__}') print(' ') func_b = copy(func) func_b_ = copy_func(func) print(f'using copy.copy: {func_b.__qualname__}') #copy doesn't reset it print(f'using copy_func: {func_b_.__qualname__}') # copy func reset it # - # The trick to patching successfully is to chang the `__name__` of the func from `func_name` to the be `class_name.func_name` instead # + #export # this decorator (notice the wrapper _inner) is used to add a new method to a predefined class or list of classes either # as a normal method classmethod or as a cls property def patch_to(cls, as_prop=False, cls_method=False): "Decorator: add `f` to `cls`" # make sure it is a collection so we can loop over it if not isinstance(cls, (tuple,list)): cls=(cls,) def _inner(f): for c_ in cls: # create a copy of the func to be patched nf = copy_func(f) # `functools.update_wrapper` when passing patched function to `Pipeline`, so we do it manually # this is to update the info of the func `f` being wrapped so it won't be overridden by the wrapper # for o in functools.WRAPPER_ASSIGNMENTS: setattr(nf, o, getattr(f,o)) Jeremy style(from scratch method) # OR functools.update_wrapper(_inner, f) #Tendo style(pythonic style) # Note the reason why I'm not using a decorator @functools.wraps is because i don't have an _inner_inner # func inside here. When patching we dont want the func to be called, we just want to pass it into the class as a method # This may be the most important bit, it sets the copy of the func being pathed `nf` as a method to the class `c_`4 # Note: this isn't set to the main method because we may want to set the `f` as a class method or something else as shown below nf.__qualname__ = f"{c_.__name__}.{f.__name__}" # If we want this patched class to be a classmethod, We set the c.f.__name__ = method_bound(c_.nf) if cls_method: setattr(c_, f.__name__, MethodType(nf, c_)) else: setattr(c_, f.__name__, property(nf) if as_prop else nf) return f return _inner # + class _T3(int): pass @patch_to(_T3) def func1(x, a): return x+a @patch_to(_T3, cls_method=True) def from_func2(cls, a): return cls(a) t = _T3(1) #reason we can do this is because we are using int's __init__ test_eq(t.func1(2), 3) t2 = _T3(2) test_eq(t2, 2) # + _T3.from_func2(2) #A class method can be called without instantiating the method #Tendo test test_eq(_T3.from_func2(2), 2) # - # If `cls` is a tuple, `f` is added to all types in the tuple. # + class _T4(int): pass @patch_to((_T3,_T4)) def func2(x, a): return x+2*a t = _T3(1) test_eq(t.func2(1), 3) t = _T4(1) test_eq(t.func2(1), 3) # - def h(a: [float, int], b:float): ... print(h.__annotations__) # the annotations for every annotated func are stored in dunder annotations next(iter(h.__annotations__.values())) # we only want to annotation of the very first arg # This decorator uses `patch_to` to patch a func to a class which is based on the annotation of the # very first param of the func #export def patch(f): "Decorator: add `f` to the first parameter's class (based on f's type annotations)" cls = next(iter(f.__annotations__.values())) # only the first param's/arg's type return patch_to(cls)(f) # + @patch def func(x:_T3, a): "test" return x+a t = _T3(1) test_eq(t.func(3), 4) test_eq(t.func.__qualname__, '_T3.func') # - # If annotation is a tuple, the function is added to all types in the tuple. 
@patch def func3(x:(_T3,_T4), a): "test" return x+2*a # + @patch def func3(x:(_T3,_T4), a): "test" return x+2*a t = _T3(1) test_eq(t.func3(2), 5) test_eq(t.func3.__qualname__, '_T3.func3') t = _T4(1) test_eq(t.func3(2), 5) test_eq(t.func3.__qualname__, '_T4.func3') # - # decorator used to patch a method to a class based on the type of it's first arg as a class property #export def patch_property(f): "Decorator: add `f` as a property to the first parameter's class (based on f's type annotations)" cls = next(iter(f.__annotations__.values())) return patch_to(cls, as_prop=True)(f) # + @patch_property def prop(x:_T3): return x+1 t = _T3(1) test_eq(t.prop, 2) # a class property is a method that behaves like an attribute in the class. If it has a setter it can be changed dynamically # - # how to make a func param/arg inspect.Parameter('key', inspect.Parameter.KEYWORD_ONLY, default='value') # this is used to make a func kwargs(key:value) param that has a default key-value of d(None) #export def _mk_param(n,d=None): return inspect.Parameter(n, inspect.Parameter.KEYWORD_ONLY, default=d) # representation of the params in a function inspect.signature(f), str(inspect.signature(f)) # get the parameter dict or mapping proxy inspect.signature(f).parameters, dict(inspect.signature(f).parameters), _mk_param('new_param', 'Hello') # test to make sure func signatures match def test_sig(f, b): test_eq(str(inspect.signature(f)), b) # a decorator that allows us to use *kwargs while writing our function ans ensuring that autocomplete works becuase it saves # the signatures of the kwargs to the func's signature #export def use_kwargs_dict(keep=False, **kwargs): "Decorator: replace `**kwargs` in signature with `names` params" def _f(f): # obtain the Parameter signature sig = inspect.signature(f) # change this signature to a dict (mapping proxy) and then change it to a normal dict sigd = dict(sig.parameters) # remove the kwargs param from the dict k = sigd.pop('kwargs') # make new params using the key-value pairs passed into the decorator. Ensure that there is no key clash with the keys # already existing in the func s2 = {n:_mk_param(n,d) for n,d in kwargs.items() if n not in sigd} # update the the signature dict of the function with these new Param key-value pairs sigd.update(s2) if keep: sigd['kwargs'] = k # replace the signature of the func f.__signature__ = sig.replace(parameters=sigd.values()) return f return _f @use_kwargs_dict(y=1,z=None) def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1, *, y=1, z=None)') #this decorator is different from `use_kwargs_dict` because attaches non keyword args to the func signature dict #export def use_kwargs(names, keep=False): "Decorator: replace `**kwargs` in signature with `names` params" def _f(f): sig = inspect.signature(f) sigd = dict(sig.parameters) k = sigd.pop('kwargs') s2 = {n:_mk_param(n) for n in names if n not in sigd} #<---this is the different bit. 
the default value `d` of None is used sigd.update(s2) if keep: sigd['kwargs'] = k f.__signature__ = sig.replace(parameters=sigd.values()) return f return _f # + @use_kwargs(['y', 'z']) def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1, *, y=None, z=None)') @use_kwargs(['y', 'z'], keep=True) def foo(a, *args, b=1, **kwargs): pass test_sig(foo, '(a, *args, b=1, y=None, z=None, **kwargs)') # + #extra test Tendo @use_kwargs_dict(y_=1, z_=2) @use_kwargs(['y', 'z'], keep=True) def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1, *, y=None, z=None, y_=1, z_=2)') class Foo: def __init__(self): return None @classmethod @use_kwargs_dict(y=1, z=3) def say(self, b, c=9, **kwargs): None test_sig(Foo.say, '(b, c=9, *, y=1, z=3)') # - # this decorator is used to transfer signature from one func `f` to another `to`. # This helps us with autocompletion when we a superclassing classes and using `**kwargs` # It can be nested and you can also select the arg signatures to transfer # export def delegates(to=None, keep=False, but=None): "Decorator: replace `**kwargs` in signature with params from `to`" if but is None: but = [] def _f(f): # if you're trying to delegate from one class to another. what you need to delegate is the classes' `__init__` # NOTE: in this case you don't pass arguments to the the class being delegated to if to is None: to_f,from_f = f.__base__.__init__,f.__init__ else: to_f,from_f = to,f from_f = getattr(from_f,'__func__',from_f) to_f = getattr(to_f,'__func__',to_f) # `__delwrap__` is a flag to prevent multiple nesting of delegates. If it occurs, break out if hasattr(from_f,'__delwrap__'): return f sig = inspect.signature(from_f) sigd = dict(sig.parameters) k = sigd.pop('kwargs') # ensure that the no value from the key-value kwargs are empty(not set) and that the key is not in `but` which # should be excluded from delegation s2 = {k:v for k,v in inspect.signature(to_f).parameters.items() if v.default != inspect.Parameter.empty and k not in sigd and k not in but} # update the signatures sigd.update(s2) # this if else loop is present because inorder for us to use nested delegates, there has to be # a **kwargs(ie the `keep` must be set to True) if keep: sigd['kwargs'] = k # if no keep don't allow nesting; set the flag else: from_f.__delwrap__ = to_f # replace them from_f.__signature__ = sig.replace(parameters=sigd.values()) # since no copying was done from `f` to `from_f` we may as well return `from_f` here if we are not delegating inside a class return f return _f # + def basefoo(e, c=2): pass @delegates(basefoo) # delegate all key:value pairs from basefoo to foo def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1, c=2)') @delegates(basefoo, keep=True) def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1, c=2, **kwargs)') @delegates(basefoo, but= ['c']) # delegate all args but c def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1)') class _T(): def foo(cls, a=1, b=2): pass @classmethod @delegates(foo) def bar(cls, c=3, **kwargs): pass test_sig(_T.bar, '(c=3, a=1, b=2)') # + class BaseFoo: def __init__(self, c, d=2, e=3): pass #note only kwargs are delegated @delegates() #note that i have no args class Foo(BaseFoo): # this super class we want to delagate from must be inherited def __init__(self, a, b=1, **kwargs): super().__init__(**kwargs) test_sig(Foo, '(a, b=1, d=2, e=3)') # + #tendo test (nested delegation) @delegates(BaseFoo.__init__) @delegates(basefoo, keep=True) #the kwargs must be kept while nesting def foo(a, b=1, **kwargs): pass test_sig(foo, '(a, b=1, c=2, d=2, 
e=3)') # - MethodType(foo, 1) print(isinstance(_T.foo,MethodType)) print(isinstance(_T.bar,MethodType)) print(isinstance(MethodType(foo, 1),MethodType)) # bind method foo to the class 1 (#numbers.Integral) #export def method(f): "Mark `f` as a method" # `1` is a dummy instance since Py3 doesn't allow `None` any more return MethodType(f, 1) g = {'r': '3', 'w': '6'} g.pop('f', '3'), g #this decorator makes it possible for us to replace a method in a class with any other function so long as the # method is in the `cls._methods` list of the class `cls` and the `cls.__init__` has `kwargs` #export def _funcs_kwargs(cls, as_method): old_init = cls.__init__ def _init(self, *args, **kwargs): for k in cls._methods: # if the method in cls._methods is in the `kwargs` of the instantiated class select the value of the kwargs (ie the new method) arg = kwargs.pop(k,None) #it returns None if it cannot find the key in the kwargs dict if arg is not None: # do you want the func to be replaced to be a normal method and not a class method? if as_method: arg = method(arg) # if the method we want to replace with is a classmethod from another class, change to become # a class method of the present class by binding it to the present class using `MethodType` if isinstance(arg,MethodType): arg = MethodType(arg.__func__, self) # change the method in the class with this new method or classmethod setattr(self, k, arg) # set the signature if the present class init old_init(self, *args, **kwargs) # update the signature of the present init to prevent what happens with decorators and signatures functools.update_wrapper(_init, old_init) # set all the cls._methods to be kwargs with value of None so that we can pass all into signature cls.__init__ = use_kwargs(cls._methods)(_init) if hasattr(cls, '__signature__'): cls.__signature__ = _rm_self(inspect.signature(cls.__init__)) return cls #export def funcs_kwargs(as_method=False): "Replace methods in `cls._methods` with those from `kwargs`" if callable(as_method): return _funcs_kwargs(as_method, False) # we have a closure so it is a decorator return partial(_funcs_kwargs, as_method=as_method) # + #tendo test class New(): def a(self): return 'I am the replacer from func a' @classmethod def b(self, a, **kwargs): return 'I am the replacer from func b' @funcs_kwargs(as_method=True) class _F(): _methods=['bar'] def __init__(self, **kwargs): assert not kwargs def foo(cls, a=1, b=2): return 'Hi' # @classmethod # def bar(cls, c=3, **kwargs): # return 'Hello' def lambda_(a): return 'i am the replacer lambda_' f = _F() # print(f'Initial f.bar: {f.bar().upper()}') f = _F(bar = lambda_) #normal func replacement test_eq(f.bar(), 'i am the replacer lambda_') f = _F() # print(f'Initial f.bar: {f.bar().upper()}') f = _F(bar = New.b) #classmethod replacement test_eq(f.bar(), 'I am the replacer from func b') # + @funcs_kwargs class T: _methods=['b'] def __init__(self, f=1, **kwargs): assert not kwargs def a(self): return 1 def b(self): return 2 t = T() test_eq(t.a(), 1) test_eq(t.b(), 2) t = T(b = lambda:3) test_eq(t.b(), 3) # so long as the method is in `_methods`, it'll have a default value of None when you check the class's signature thus overriding the class' __init__ args test_sig(T, '(f=1, *, b=None)') #the b=None happens due to the `use_kwargs` that was called in `_funcs_kwargs` test_fail(lambda: T(a = lambda:3)) # a is not in T._methods def _f(self,a=1): return a+1 @funcs_kwargs(True) class T: _methods=['b'] t = T(b = _f) test_eq(t.b(2), 3) class T2(T): def __init__(self,a): 
super().__init__(b = lambda self:3) #calling super on `T` should override the `b` which was added to the `T` class by `func_kwargs` self.a=a t = T2(a=1) test_eq(t.b(), 3) test_sig(T2, '(a)') #We have `T2`'s normal signature which is expected def _g(a=1): return a+1 class T3(T): b = staticmethod(_g) #funcs_kwargs can be used to override staticmethods also (static methods by default aren't bound to the class) t = T3() test_eq(t.b(2), 3) # + #hide #test it works with PrePostInitMeta class A(metaclass=PrePostInitMeta): pass @funcs_kwargs class B(A): _methods = ['m1'] def __init__(self, **kwargs): pass test_sig(B, '(*, m1=None)') # - # Runtime type checking is handy, so let's make it easy! @contextmanager def working_directory(path): "Change working directory to `path` and return to previous on exit." prev_cwd = Path.cwd() os.chdir(path) try: yield finally: os.chdir(prev_cwd) # + #def is_listy(x): return isinstance(x,(list,tuple,Generator)) # + class T: _methods=['b'] def __init__(self, f=1, **kwargs): assert not kwargs @classmethod def a(self, v=8): return 1 def b(self): return 2 vars(T)['a'].__func__ # + # this func ensures that a class and all it's public methods are properly documented #export def add_docs(cls, cls_doc=None, **docs): "Copy values from `docs` to `cls` docstrings, and confirm all public methods are documented" if cls_doc is not None: cls.__doc__ = cls_doc # the `**docs` arg contains key-value pairs for method_name:method for k,v in docs.items(): # get the method in the class from it's name f = getattr(cls,k) # check if the method has a __func__ attr if not, set it if hasattr(f,'__func__'): f = f.__func__ # required for class methods f.__doc__ = v # check for methods/callables in the class's namespace that aren't inner methods(ie don't start with '_') # and do not have docs(ie no `__doc__` attr) and list them # List of public callables without docstring nodoc = [c for n,c in vars(cls).items() if callable(c) and not n.startswith('_') and c.__doc__ is None] # if there is a callable without docs, raise an exception showing that callable assert not nodoc, f"Missing docs: {nodoc}" # if the cls has a `__doc__` but is None or empty raise an exception showing that the class has no doc assert cls.__doc__ is not None, f"Missing class docs: {cls}" # - #export def docs(cls): "Decorator version of `add_docs`, using `_docs` dict" add_docs(cls, **cls._docs) return cls # + class _T: def f(self): pass @classmethod def g(cls): pass add_docs(_T, cls_doc="a", f="f", g="g") #specify the docs as kwargs test_eq(_T.__doc__, "a") test_eq(_T.f.__doc__, "f") test_eq(_T.g.__doc__, "g") # - # Tendo test @docs class _T: _docs = {'cls_doc':'a', 'f':'f', 'g':'g'} #specify the doc as a dict def f(self): pass @classmethod def g(cls): pass # everything now has a doc test_eq(_T.__doc__, "a") test_eq(_T.f.__doc__, "f") test_eq(_T.g.__doc__, "g") show_doc(is_iter) # `show_doc` is a n nbdev method assert is_iter([1]) assert not is_iter(array(1)) assert is_iter(array([1,2])) assert (o for o in range(3)) #export class _Arg: def __init__(self,i): self.i = i arg0 = _Arg(0) arg1 = _Arg(1) arg2 = _Arg(2) arg3 = _Arg(3) arg4 = _Arg(4) # + # partial?? 
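# (illustrative aside, not from the original notebook) a reminder of what plain
# functools.partial does: it fixes leading positional args and keyword args, but
# offers no placeholders for reordering later arguments — that is the gap that
# `bind` below fills with arg0/arg1/...
from functools import partial
def _add3(a, b, c): return (a, b, c)
test_eq(partial(_add3, 1, c=3)(2), (1, 2, 3))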
# - #export class bind: "Same as `partial`, except you can use `arg0` `arg1` etc param placeholders" def __init__(self, fn, *pargs, **pkwargs): self.fn,self.pargs,self.pkwargs = fn,pargs,pkwargs self.maxi = max((x.i for x in pargs if isinstance(x, _Arg)), default=-1) def __call__(self, *args, **kwargs): args = list(args) kwargs = {**self.pkwargs,**kwargs} #what is the purpose of kwargs tendo asks...doesnt't seem to do anything for k,v in kwargs.items(): if isinstance(v,_Arg): kwargs[k] = args.pop(v.i) fargs = [args[x.i] if isinstance(x, _Arg) else x for x in self.pargs] + args[self.maxi+1:] return self.fn(*fargs, **kwargs) def myfn(a,b,c,d=1,e=2): return(a,b,c,d,e) test_eq(bind(myfn, arg1, 17, arg0, e=3)(19,14), (14,17,19,1,3)) test_eq(bind(myfn, 17, arg0, e=3)(19,14), (17,19,14,1,3)) test_eq(bind(myfn, 17, e=3)(19,14), (17,19,14,1,3)) test_eq(bind(myfn)(17,19,14), (17,19,14,1,2)) test_eq(bind(myfn, 17,19,14,e=arg0)(3), (17,19,14,1,3)) # ## GetAttr - #export def custom_dir(c, add:list): "Implement custom `__dir__`, adding `add` to `cls`" return dir(type(c)) + list(c.__dict__.keys()) + add #export class GetAttr: "Inherit from this to have all attr accesses in `self._xtra` passed down to `self.default`" _default='default' def _component_attr_filter(self,k): if k.startswith('__') or k in ('_xtra',self._default): return False xtra = getattr(self,'_xtra',None) return xtra is None or k in xtra def _dir(self): return [k for k in dir(getattr(self,self._default)) if self._component_attr_filter(k)] def __getattr__(self,k): if self._component_attr_filter(k): attr = getattr(self,self._default,None) if attr is not None: return getattr(attr,k) raise AttributeError(k) def __dir__(self): return custom_dir(self,self._dir()) # def __getstate__(self): return self.__dict__ def __setstate__(self,data): self.__dict__.update(data) # Inherit from `GetAttr` to have attr access passed down to an instance attribute. # This makes it easy to create composites that don't require callers to know about their components. # # You can customise the behaviour of `GetAttr` in subclasses via; # - `_default` # - By default, this is set to `'default'`, so attr access is passed down to `self.default` # - `_default` can be set to the name of any instance attribute that does not start with dunder `__` # - `_xtra` # - By default, this is `None`, so all attr access is passed down # - You can limit which attrs get passed down by setting `_xtra` to a list of attribute names # + class _C(GetAttr): # allow all attributes to get passed to `self.default` (by leaving _xtra=None) def __init__(self,a): self.default = a def foo(self): noop t = _C('Hi') test_eq(t.lower(), 'hi') test_eq(t.upper(), 'HI') assert 'lower' in dir(t) assert 'upper' in dir(t) # + class _C(GetAttr): _xtra = ['lower'] # specify which attributes get passed to `self.default` def __init__(self,a): self.default = a def foo(self): noop t = _C('Hi') test_eq(t.default, 'Hi') test_eq(t.lower(), 'hi') test_fail(lambda: t.upper()) assert 'lower' in dir(t) assert 'upper' not in dir(t) # + class _C(GetAttr): _default = '_data' # use different component name; `self._data` rather than `self.default` def __init__(self,a): self._data = a def foo(self): noop t = _C('Hi') test_eq(t._data, 'Hi') test_eq(t.lower(), 'hi') test_eq(t.upper(), 'HI') assert 'lower' in dir(t) assert 'upper' in dir(t) # - class _C(GetAttr): _default = 'data' # use a bad component name; i.e. 
self.data does not exist def __init__(self,a): self.default = a def foo(self): noop # TODO: should we raise an error when we create a new instance ... t = _C('Hi') test_eq(t.default, 'Hi') # ... or is it enough for all GetAttr features to raise errors test_fail(lambda: t.data) test_fail(lambda: t.lower()) test_fail(lambda: t.upper()) test_fail(lambda: dir(t)) # + #hide # I don't think this test is essential to the docs but it probably makes sense to # check that everything works when we set both _xtra and _default to non-default values class _C(GetAttr): _xtra = ['lower', 'upper'] _default = 'data' def __init__(self,a): self.data = a def foo(self): noop t = _C('Hi') test_eq(t.data, 'Hi') test_eq(t.lower(), 'hi') test_eq(t.upper(), 'HI') assert 'lower' in dir(t) assert 'upper' in dir(t) # + #hide # when consolidating the filter logic, I choose the previous logic from # __getattr__ k.startswith('__') rather than # _dir k.startswith('_'). class _C(GetAttr): def __init__(self): self.default = type('_D', (), {'_under': 1, '__dunder': 2})() t = _C() test_eq(t.default._under, 1) test_eq(t._under, 1) # _ prefix attr access is allowed on component assert '_under' in dir(t) test_eq(t.default.__dunder, 2) test_fail(lambda: t.__dunder) # __ prefix attr access is not allowed on component assert '__dunder' not in dir(t) assert t.__dir__ is not None # __ prefix attr access is allowed on composite assert '__dir__' in dir(t) # - class B: def __init__(self): self.a = A() @funcs_kwargs class A(GetAttr): wif=after_iter= noops _methods = 'wif after_iter'.split() _default = 'dataset' def __init__(self, **kwargs): pass a = A() b = A(wif=a.wif) # + #Failing test. TODO Jeremy, not sure what you were testing here #a = A() #b = A(wif=a.wif) #tst = pickle.dumps(b) #c = pickle.loads(tst) # - #export def delegate_attr(self, k, to): "Use in `__getattr__` to delegate to attr `to` without inheriting from `GetAttr`" if k.startswith('_') or k==to: raise AttributeError(k) try: return getattr(getattr(self,to), k) except AttributeError: raise AttributeError(k) from None # + class _C: f = 'Hi' def __getattr__(self, k): return delegate_attr(self, k, 'f') t = _C() test_eq(t.lower(), 'hi') # - # ## L - # + #export def _is_array(x): return hasattr(x,'__array__') or hasattr(x,'iloc') def _listify(o): if o is None: return [] if isinstance(o, list): return o if isinstance(o, str) or _is_array(o): return [o] if is_iter(o): return list(o) return [o] # - # export def coll_repr(c, max_n=10): "String repr of up to `max_n` items of (possibly lazy) collection `c`" return f'(#{len(c)}) [' + ','.join(itertools.islice(map(repr,c), max_n)) + ( '...' 
if len(c)>10 else '') + ']' test_eq(coll_repr(range(1000), 5), '(#1000) [0,1,2,3,4...]') # export def mask2idxs(mask): "Convert bool mask or index list to index `L`" if isinstance(mask,slice): return mask mask = list(mask) if len(mask)==0: return [] it = mask[0] if hasattr(it,'item'): it = it.item() if isinstance(it,(bool,NoneType,np.bool_)): return [i for i,m in enumerate(mask) if m] return [int(i) for i in mask] # just for tests import torch test_eq(mask2idxs([False,True,False,True]), [1,3]) test_eq(mask2idxs(array([False,True,False,True])), [1,3]) test_eq(mask2idxs(torch.tensor([False,True,False,True])), [1,3]) test_eq(mask2idxs(array([1,2,3])), [1,2,3]) #export listable_types = typing.Collection,Generator,map,filter,zip #export class CollBase: "Base class for composing a list of `items`" def __init__(self, items): self.items = items def __len__(self): return len(self.items) def __getitem__(self, k): return self.items[list(k) if isinstance(k,CollBase) else k] def __setitem__(self, k, v): self.items[list(k) if isinstance(k,CollBase) else k] = v def __delitem__(self, i): del(self.items[i]) def __repr__(self): return self.items.__repr__() def __iter__(self): return self.items.__iter__() #export def cycle(o): "Like `itertools.cycle` except creates list of `None`s if `o` is empty" o = _listify(o) return itertools.cycle(o) if o is not None and len(o) > 0 else itertools.cycle([None]) test_eq(itertools.islice(cycle([1,2,3]),5), [1,2,3,1,2]) test_eq(itertools.islice(cycle([]),3), [None]*3) test_eq(itertools.islice(cycle(None),3), [None]*3) test_eq(itertools.islice(cycle(1),3), [1,1,1]) #export def zip_cycle(x, *args): "Like `itertools.zip_longest` but `cycle`s through elements of all but first argument" return zip(x, *map(cycle,args)) test_eq(zip_cycle([1,2,3,4],list('abc')), [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'a')]) #export def is_indexer(idx): "Test whether `idx` will index a single item in a list" return isinstance(idx,int) or not getattr(idx,'ndim',1) #export def negate_func(f): "Create new function that negates result of `f`" def _f(*args, **kwargs): return not f(*args, **kwargs) return _f def f(a): return a>0 test_eq(f(1),True) test_eq(negate_func(f)(1),False) test_eq(negate_func(f)(a=-1),True) #export class L(CollBase): "Behaves like a list of `items` but can also index with list of indices or masks" _default='items' def __init__(self, items=None, *rest, use_list=False, match=None): if rest: items = (items,)+rest if items is None: items = [] if (use_list is not None) or not _is_array(items): items = list(items) if use_list else _listify(items) if match is not None: if is_coll(match): match = len(match) if len(items)==1: items = items*match else: assert len(items)==match, 'Match length mismatch' super().__init__(items) @property def _xtra(self): return None def _new(self, items, *args, **kwargs): return type(self)(items, *args, use_list=None, **kwargs) def __getitem__(self, idx): return self._get(idx) if is_indexer(idx) else L(self._get(idx), use_list=None) def copy(self): return self._new(self.items.copy()) def _get(self, i): if is_indexer(i) or isinstance(i,slice): return getattr(self.items,'iloc',self.items)[i] i = mask2idxs(i) return (self.items.iloc[list(i)] if hasattr(self.items,'iloc') else self.items.__array__()[(i,)] if hasattr(self.items,'__array__') else [self.items[i_] for i_ in i]) def __setitem__(self, idx, o): "Set `idx` (can be list of indices, or mask, or int) items to `o` (which is broadcast if not iterable)" if isinstance(idx, int): self.items[idx] = o else: idx = idx if 
isinstance(idx,L) else _listify(idx) if not is_iter(o): o = [o]*len(idx) for i,o_ in zip(idx,o): self.items[i] = o_ def __iter__(self): return iter(self.items.itertuples() if hasattr(self.items,'iloc') else self.items) def __contains__(self,b): return b in self.items def __reversed__(self): return self._new(reversed(self.items)) def __invert__(self): return self._new(not i for i in self) def __eq__(self,b): return False if isinstance(b, (str,dict,set)) else all_equal(b,self) def __repr__(self): return repr(self.items) if _is_array(self.items) else coll_repr(self) def __mul__ (a,b): return a._new(a.items*b) def __add__ (a,b): return a._new(a.items+_listify(b)) def __radd__(a,b): return a._new(b)+a def __addi__(a,b): a.items += list(b) return a def sorted(self, key=None, reverse=False): if isinstance(key,str): k=lambda o:getattr(o,key,0) elif isinstance(key,int): k=itemgetter(key) else: k=key return self._new(sorted(self.items, key=k, reverse=reverse)) @classmethod def split(cls, s, sep=None, maxsplit=-1): return cls(s.split(sep,maxsplit)) @classmethod def range(cls, a, b=None, step=None): if is_coll(a): a = len(a) return cls(range(a,b,step) if step is not None else range(a,b) if b is not None else range(a)) def map(self, f, *args, **kwargs): g = (bind(f,*args,**kwargs) if callable(f) else f.format if isinstance(f,str) else f.__getitem__) return self._new(map(g, self)) def filter(self, f, negate=False, **kwargs): if kwargs: f = partial(f,**kwargs) if negate: f = negate_func(f) return self._new(filter(f, self)) def argwhere(self, f, negate=False, **kwargs): if kwargs: f = partial(f,**kwargs) if negate: f = negate_func(f) return self._new(i for i,o in enumerate(self) if f(o)) def unique(self): return L(dict.fromkeys(self).keys()) def enumerate(self): return L(enumerate(self)) def val2idx(self): return {v:k for k,v in self.enumerate()} def itemgot(self, *idxs): x = self for idx in idxs: x = x.map(itemgetter(idx)) return x def attrgot(self, k, default=None): return self.map(lambda o:getattr(o,k,default)) def cycle(self): return cycle(self) def map_dict(self, f=noop, *args, **kwargs): return {k:f(k, *args,**kwargs) for k in self} def starmap(self, f, *args, **kwargs): return self._new(itertools.starmap(partial(f,*args,**kwargs), self)) def zip(self, cycled=False): return self._new((zip_cycle if cycled else zip)(*self)) def zipwith(self, *rest, cycled=False): return self._new([self, *rest]).zip(cycled=cycled) def map_zip(self, f, *args, cycled=False, **kwargs): return self.zip(cycled=cycled).starmap(f, *args, **kwargs) def map_zipwith(self, f, *rest, cycled=False, **kwargs): return self.zipwith(*rest, cycled=cycled).starmap(f, **kwargs) def concat(self): return self._new(itertools.chain.from_iterable(self.map(L))) def shuffle(self): it = copy(self.items) random.shuffle(it) return self._new(it) def append(self,o): return self.items.append(o) def remove(self,o): return self.items.remove(o) def count (self,o): return self.items.count(o) def reverse(self ): return self.items.reverse() def pop(self,o=-1): return self.items.pop(o) def clear(self ): return self.items.clear() def index(self, value, start=0, stop=sys.maxsize): return self.items.index(value, start, stop) def sort(self, key=None, reverse=False): return self.items.sort(key=key, reverse=reverse) def reduce(self, f, initial=None): return reduce(f, self) if initial is None else reduce(f, self, initial) def sum(self): return self.reduce(operator.add) def product(self): return self.reduce(operator.mul) inspect.signature(L) #export _docs = 
{o:"Passthru to `list` method" for o in 'append count remove reverse sort pop clear index'.split()} add_docs(L, __getitem__="Retrieve `idx` (can be list of indices, or mask, or int) items", range="Same as `range`, but returns an `L`. Can pass a collection for `a`, to use `len(a)`", split="Same as `str.split`, but returns an `L`", copy="Same as `list.copy`, but returns an `L`", sorted="New `L` sorted by `key`. If key is str then use `attrgetter`. If key is int then use `itemgetter`", unique="Unique items, in stable order", val2idx="Dict from value to index", filter="Create new `L` filtered by predicate `f`, passing `args` and `kwargs` to `f`", argwhere="Like `filter`, but return indices for matching items", map="Create new `L` with `f` applied to all `items`, passing `args` and `kwargs` to `f`", map_dict="Like `map`, but creates a dict from `items` to function results", starmap="Like `map`, but use `itertools.starmap`", itemgot="Create new `L` with item `idx` of all `items`", attrgot="Create new `L` with attr `k` of all `items`", cycle="Same as `itertools.cycle`", enumerate="Same as `enumerate`", zip="Create new `L` with `zip(*items)`", zipwith="Create new `L` with `self` zip with each of `*rest`", map_zip="Combine `zip` and `starmap`", map_zipwith="Combine `zipwith` and `starmap`", concat="Concatenate all elements of list", shuffle="Same as `random.shuffle`, but not inplace", reduce="Wrapper for `functools.reduce`", sum="Sum of the items", product="Product of the items", **_docs) #export Sequence.register(L); # You can create an `L` from an existing iterable (e.g. a list, range, etc) and access or modify it with an int list/tuple index, mask, int, or slice. All `list` methods can also be used with `L`. t = L(range(12)) test_eq(t, list(range(12))) test_ne(t, list(range(11))) t.reverse() test_eq(t[0], 11) t[3] = "h" test_eq(t[3], "h") t[3,5] = ("j","k") test_eq(t[3,5], ["j","k"]) test_eq(t, L(t)) test_eq(L(L(1,2),[3,4]), ([1,2],[3,4])) t # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: 'Python 3.8.5 64-bit (''base'': conda)' # name: python385jvsc74a57bd0e660af3098993ea5160fd9270f74e6db286e749e474b6bbf115c5a1892218496 # --- # Daten importieren aus Excel # ------ # + import pandas as pd data_input = pd.read_excel('../data/raw/U bung kNN Klassifizierung Ecoli.xlsx', sheet_name=0) data_output = pd.read_excel('../data/raw/U bung kNN Klassifizierung Ecoli.xlsx', sheet_name=1) # - data_output # Train und Test Datensätze erstellen, normalisieren und kNN-Classifier trainieren # --------- # + X = data_input.loc[:,data_input.columns != "Target"] y = data_input["Target"] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20) #, random_state=0) # - data_input # + #Normalisieren der Daten from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() scaler.fit(X_train) X_train_norm = scaler.transform(X_train) X_test_norm = scaler.transform(X_test) # - from sklearn.neighbors import KNeighborsClassifier knnclf = KNeighborsClassifier(n_neighbors=7) knnclf.fit(X_train_norm, y_train) # Test Datensätze erstellen und kNN-Classifier verwenden # --------- X_output = data_output y_pred = knnclf.predict(X_output) #Normalisieren der Daten X_output_norm = scaler.transform(X_output) y_pred_norm = knnclf.predict(X_output_norm) print(X_output) print(y_pred) print(X_output_norm) print(y_pred_norm) # Akkuranz bestimmen # 
# Determine accuracy
# ----------------

# The classifier was trained on normalized features, so score it on the normalized test set.
score = knnclf.score(X_test_norm, y_test)
print('knn-Classifier scores with {}% accuracy'.format(score*100))

# Determine the optimal k value
# ---------------

# y_pred = knnclf.predict(X_test)
#
# from sklearn.metrics import classification_report, confusion_matrix
# print(confusion_matrix(y_test, y_pred))
# print(classification_report(y_test, y_pred))

# +
import matplotlib.pyplot as plt
import numpy as np

error = []

# Calculate the error for K values from 1 to 39
for i in range(1, 40):
    knn = KNeighborsClassifier(n_neighbors=i)
    knn.fit(X_train_norm, y_train)
    pred_i = knn.predict(X_test_norm)
    error.append(np.mean(pred_i != y_test))
# -

plt.figure(figsize=(12, 6))
plt.plot(range(1, 40), error, color='red', linestyle='dashed', marker='o',
         markerfacecolor='blue', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K Value')
plt.ylabel('Mean Error')

# Decision Tree
# ----------

# +
from sklearn.tree import DecisionTreeClassifier

dt = DecisionTreeClassifier(max_depth=3, criterion="entropy", min_samples_split=2)
dt.fit(X_train, y_train)

score = dt.score(X_test, y_test)
print('Decision Tree scores with {}% accuracy'.format(score*100))

# +
from sklearn.model_selection import GridSearchCV

# tree parameters to be tested
tree_para = {'criterion': ['gini','entropy'],
             'max_depth': [i for i in range(1,20)],
             'min_samples_split': [i for i in range(2,20)]}

# GridSearchCV object
grd_clf = GridSearchCV(dt, tree_para, cv=5)

# fits trees with all the different parameter combinations on our data
grd_clf.fit(X_train, y_train)

# best parameters that were found
best_parameters = grd_clf.best_params_
print(best_parameters)
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] id="wwKqWGYJF4jX"
# # "Reproducing Reformer: Our Amazing Submission & Team Experience"
#
# > A fastai collaboration with amazing peeps for participating in the 2020 Papers With Code Reproducibility Challenge
#
# - badges: true
# - categories: [nlp, reformer, transformers, language-modelling]
# - image: images/papers_with_code.png
# -

# *Where we all met?* **[here](https://forums.fast.ai/t/reproducibility-challenge-2020-fastai-folks-interested/80336)**❤️

# ### The Challenge

# Way back in October 2020 the Papers With Code [ML Reproducibility Challenge 2020](https://paperswithcode.com/rc2020) was launched and shared in the fast.ai [forums](https://forums.fast.ai/). A few of us jumped at the chance to test our ML knowledge and push our skills.
Fast forward 110 days since that initial post and we delivered [our Reformer Reproducibility submission](https://openreview.net/forum?id=3s8Y7dHYkN-) via OpenReview!!🤩 # # Our Whole Project is Documented here : [Project](https://arampacha.github.io/reformer_fastai/) # # The Wandb reports we made : [reports](https://wandb.ai/fastai_community/reformer-fastai/reports/Reformer-Reproducibility-Report---Vmlldzo0MzQ1OTg) # # # # ```Here are a few reflections on our experience: what we enjoyed, tools we used and what we would have done differently:``` # # # ### TLDR; # # * Working as a team pushes your motivation, your skills and your throughput # * [nbdev](https://nbdev.fast.ai/) for development, Weights & Biases for tracking and Discord for communication # * We could have better used task/project management tools more, maybe we needed a different tool # * Next time we’ll start experiments sooner and maybe pick a more practical paper # * It was a massive learning experience and a lot of fun # # ![](my_icons/20210211_reformer_reproducibiliy/flags.png) # # ### Why participate # # Implementing code from scratch is much more enjoyable and meaningful when there is a direct application, e.g. working towards this reproducibility challenge. Spending weeks and months focussed on a single paper forces you to understand the paper down to the last full stop. It also gives you a great appreciation of how difficult writing a good paper is, you see almost every word and sentence is chosen carefully to communicate a particular concept, problem or model setting. # # ### N heads are better than one a.k.a. Multihead Attention # # Our team was distributed across 6 countries and everyone had a somewhat different background, set of skills and personality. This mix was definitely beneficial for getting things done much more smoothly. Having 2 x N eyes researching implementation information or reviewing code really improved coverage and sped up the entire process. It also makes debugging much faster! # # ![](my_icons/20210211_reformer_reproducibiliy/doh.png) # # # Writing code that the entire team will use also meant writing cleaner code with more tests so that it was as clear as possible for your teammates. And finally, during a long project like this it’s easy to get distracted or lazy, however seeing everyone else delivering great work quickly pulls you back into line! # # ![](my_icons/20210211_reformer_reproducibiliy/christmas.png) # # ### Good tools Are key for us : A good tool improves the way you work. A great tool improves the way you think. # # Read more: https://www.wisesayings.com/tool-quotes/#ixzz6mZj38LCP # # # **nbdev** # # The [nbdev](https://nbdev.fast.ai/) literate programming environment from fast.ai was super convenient to minimise the project’s development friction. Writing tests as we developed meant that we caught multiple bugs early and auto-generation of docs lends itself immensely to the reproducibility of your code. Most of us will be using this again for our next projects. # # **Weights & Biases** # # Weights & Biases generously gave us a team account which enabled us all to log our experiments to a single project. Being directly able to link your runs and results to the final report was really nice. Also it's pretty exciting monitoring 10+ experiments live! # # **Discord** # # A Discord server worked really well for all our chat and voice communication. Frequent calls to catchup and agree on next steps were super useful. 
Todo lists and core pieces of code often ended up as pinned messages for quick reference and linking Github activity to a channel was useful for keeping an eye on new commits to the repo. # # **Overleaf** # # When it came to writing the final report in latex, Overleaf was a wonderful tool for collaborative editing. # # **ReviewNB** # # The ReviewNB app on GitHub was very useful for visualizing diffs in notebooks. # # ![](my_icons/20210211_reformer_reproducibiliy/cuts.png) # # # ### Learn from the best # # The Reformer architecture had several complex parts, and having [Phil Wang's](https://github.com/lucidrains/reformer-pytorch) and [HuggingFace's](https://huggingface.co/transformers/model_doc/reformer.html) # Github code was very helpful to understand design decisions and fix issues. # # ### Things we can improve for the next time # # **Start experiments early** # # We started our experiments quite late in the project; as we aimed to reimplement Reformer in Pytorch (with reference to existing implementations) about ~90% of our time was spent on ensuring our implementation was faithful to the paper and that it was working correctly. In retrospect starting experiments earlier would have allowed more in depth exploration of what we observed while testing. Full scale experiments have a way of inducing problems you didn’t foresee during the implementation phase... # # **Task distribution and coordination** # # When working in a distributed and decentralized team, efficient task allocation and tracking is important. Early in the project todo lists lived in people’s heads, or were quickly buried under 50 chat messages. This was suboptimal for a number of reasons, including that it made involving new people in the project more challenging as they could not easily identify where they could best contribute. # # We made a switch to Trello to better track open tasks. It worked reasonably well however its effectiveness was probably proportional to how much time a couple of team members had to review the kanban board, advocate for its use and focus the team’s attention there. The extra friction associated with needing to use another tool unconnected to Github or Discord was probably the reason for why we didn’t use it as much as we could have. Integrating Trello into our workflow or giving Github Projects a trial could have been useful. # # # **More feedback** # # We had originally intended to get feedback from the fastai community during the project. In the end we were too late in sharing our material, so there wasn’t time for much feedback. Early feedback would have been very useful and the project might have benefited from some periodic summary of accomplishments and current problems. We could have solicited additional feedback from the authors too. # # **Distributed training** # # This was our first exposure to distributed training and unfortunately we had a lot of issues with it. We were also unable to log the results from distributed runs properly to Weights & Biases. This slowed down our experiment iteration speed and is why we could not train our models for as long as we would have preferred. # # ![](my_icons/20210211_reformer_reproducibiliy/gcp.png) # # **Choice of paper to reproduce** # # It would have been useful to calculate a rough estimate of the compute budget the paper’s experiments required before jumping into it. In the latter stages of the project we realised that we would be unable to fully replicate some of the paper’s experiments, but instead had to run scaled down versions. 
In addition, where your interest sits on the spectrum between theoretical and practical papers should be considered when selecting a paper for the challenge.
#
# **More tools**
#
# We could have tried even more handy tools such as [knockknock](https://github.com/huggingface/knockknock) to alert us when models finish training, and Github Projects for task management.
#
# ### Some final thoughts
#
# We came out of this project even more motivated than when we entered; a great indication that it was both enjoyable and useful for us! Our advice would be to not hesitate to join events like this one and challenge yourself, and to try to find one or more other folks in the forums or Discord to work with. After successfully delivering our submission to the challenge we are all eager to work together again on our next project. Stay tuned for more!
#
# ![](my_icons/20210211_reformer_reproducibiliy/launch.png)

# + [markdown] id="odjoARyxw-yE"
# # Thanks for Reading This Far 🙏
#
# As always, I would love to hear your feedback on what could have been written better or clearer. You can find me on Twitter & LinkedIn: **[twitter](https://twitter.com/PriyanK_7n)** **[Linkedin](https://www.linkedin.com/in/priyank-n-707019195)**
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## Comparing CHIME admission predictions to our actual admit data

# We've been using the [CHIME](https://github.com/CodeForPhilly/chime) model since its early release days and have been trying to tailor things for our specific needs. We have had hospital administrators express concern due to the "census=0 at time=0" problem and were excited to see that the model enhancement addressing this became available on the `develop` branch yesterday. The approach taken seems really solid and it's nice (from a UI perspective) that it only requires one additional input.
#
# So, I took our actual admits (from 2/20/2020 to a few days ago), fit an exponential growth model, and got the implied growth rate and implied doubling time. Plotting actual admits vs. predicted (just using the simple exp growth model) gave a very nice fit. Then I used the implied doubling time of 3.61 in the CHIME model, together with our other hospital-specific inputs (market share and population), and got a very nice match with the exponential fit to admits and thus to the actual admits. Everyone was super happy to see this!
#
# The rest of this notebook includes:
#
# * the actual admits by date data,
# * our CHIME input files,
# * the comparisons of our actuals with the CHIME predictions,
# * our plans going forward with respect to COVID modeling/analysis.
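# Before diving in, here is the relationship the rest of the notebook leans on (a minimal sketch,
# not from the original analysis): for exponential growth `a * exp(b * t)`, the implied doubling
# time is `ln(2) / b`, so a quoted doubling time can be converted back to a daily growth rate.

import numpy as np

doubling_time = 3.61                       # the implied doubling time quoted above (days)
growth_rate = np.log(2.0) / doubling_time  # corresponding intrinsic daily growth rate
print('growth rate implied by a {}-day doubling time: {:.3f} per day'.format(doubling_time, growth_rate))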
# + # from argparse import ( # Action, # ArgumentParser, # ) # from datetime import datetime # from penn_chime.settings import DEFAULTS # from penn_chime.parameters import Parameters, RateDays # from penn_chime.models import SimSirModel import numpy as np import pandas as pd from pandas import Timestamp import json import matplotlib.pyplot as plt from scipy.optimize import curve_fit # - # %matplotlib inline # ## Fit exponential growth model to our admits by date # As we are yet to hit capacity (getting close) and are still in the exponential growth phase, I'm going to: # # * fit an exponential growth model to our admits by date for 2020-02-20 - 2020-03-25) # * use fitted growth parameter to compute our doubling time, # * use that doubling time in the new CHIME model # * also use new CHIME model with our actual date of first admit # * compare fits of CHIME model projections to our actual admit by date data # Here are our actual admits by date. # + jupyter={"source_hidden": true} # actual_df = pd.read_csv("act_admits_20200220.csv", parse_dates = ['date']) actual_df = pd.DataFrame({'date': {0: Timestamp('2020-02-20 00:00:00'), 1: Timestamp('2020-02-21 00:00:00'), 2: Timestamp('2020-02-22 00:00:00'), 3: Timestamp('2020-02-23 00:00:00'), 4: Timestamp('2020-02-24 00:00:00'), 5: Timestamp('2020-02-25 00:00:00'), 6: Timestamp('2020-02-26 00:00:00'), 7: Timestamp('2020-02-27 00:00:00'), 8: Timestamp('2020-02-28 00:00:00'), 9: Timestamp('2020-02-29 00:00:00'), 10: Timestamp('2020-03-01 00:00:00'), 11: Timestamp('2020-03-02 00:00:00'), 12: Timestamp('2020-03-03 00:00:00'), 13: Timestamp('2020-03-04 00:00:00'), 14: Timestamp('2020-03-05 00:00:00'), 15: Timestamp('2020-03-06 00:00:00'), 16: Timestamp('2020-03-07 00:00:00'), 17: Timestamp('2020-03-08 00:00:00'), 18: Timestamp('2020-03-09 00:00:00'), 19: Timestamp('2020-03-10 00:00:00'), 20: Timestamp('2020-03-11 00:00:00'), 21: Timestamp('2020-03-12 00:00:00'), 22: Timestamp('2020-03-13 00:00:00'), 23: Timestamp('2020-03-14 00:00:00'), 24: Timestamp('2020-03-15 00:00:00'), 25: Timestamp('2020-03-16 00:00:00'), 26: Timestamp('2020-03-17 00:00:00'), 27: Timestamp('2020-03-18 00:00:00'), 28: Timestamp('2020-03-19 00:00:00'), 29: Timestamp('2020-03-20 00:00:00'), 30: Timestamp('2020-03-21 00:00:00'), 31: Timestamp('2020-03-22 00:00:00'), 32: Timestamp('2020-03-23 00:00:00'), 33: Timestamp('2020-03-24 00:00:00'), 34: Timestamp('2020-03-25 00:00:00')}, 'act_occ': {0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2, 7: 2, 8: 2, 9: 2, 10: 2, 11: 3, 12: 3, 13: 3, 14: 4, 15: 6, 16: 6, 17: 6, 18: 6, 19: 6, 20: 6, 21: 11, 22: 18, 23: 28, 24: 37, 25: 62, 26: 89, 27: 124, 28: 182, 29: 238, 30: 287, 31: 352, 32: 461, 33: 572, 34: 658}, 'act_admits': {0: 1, 1: 0, 2: 0, 3: 0, 4: 0, 5: 1, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0, 11: 1, 12: 0, 13: 0, 14: 1, 15: 2, 16: 0, 17: 0, 18: 0, 19: 0, 20: 0, 21: 5, 22: 7, 23: 10, 24: 9, 25: 25, 26: 29, 27: 44, 28: 67, 29: 66, 30: 71, 31: 82, 32: 128, 33: 152, 34: 143}}) # - # Get x and y for model fitting x = np.array(actual_df.index) y = np.array(actual_df['act_admits']) # ### Play around with curve_fit from scipy # Trying a few values for initial parameter values. Seems getting stable estimates. curve_fit(lambda t,a,b: a*np.exp(b*t), x, y, p0=(.1, 0.10)) curve_fit(lambda t,a,b: a*np.exp(b*t), x, y, p0=(.2, 0.10)) curve_fit(lambda t,a,b: a*np.exp(b*t), x, y, p0=(1, 0.10)) curve_fit(lambda t,a,b: a*np.exp(b*t), x, y, p0=(.1, 0.30)) # ### Fit model simple exp growth model to our admits # Want to see what our intrinsic admit growth rate looks like. 
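# (A small aside before the fit, not in the original notebook: the `p0` sweep above can be
# summarized programmatically. This sketch assumes the same `x`, `y`, `np` and `curve_fit`
# already defined in this notebook.)

# Fit the two-parameter model for several initial guesses and look at the spread of the growth rate b
p0_guesses = [(.1, 0.10), (.2, 0.10), (1, 0.10), (.1, 0.30)]
b_estimates = [curve_fit(lambda t, a, b: a * np.exp(b * t), x, y, p0=p0)[0][1] for p0 in p0_guesses]
print('fitted growth rates across p0 guesses:', np.round(b_estimates, 4))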
# Function to fit def func(x, a, b, c): return a * np.exp(b * x) + c # Do the fit and compute quantities of interest. popt, pcov = curve_fit(func, x, y, p0=(1, 0.10, 0)) print(popt) intrinsic_growth_rate_adm = popt[1] implied_doubling_time = np.log(2.0) / intrinsic_growth_rate_adm print("intrinsic_growth_rate_adm = {:3f}".format(intrinsic_growth_rate_adm)) print("implied_doubling_time = {:3f}".format(implied_doubling_time)) plt.plot(x, func(x, *popt), 'g--', label='fit: a={:5.3f}, b={:5.3f}, c={:5.3f}'.format(*popt)) plt.plot(x, y, 'b--') plt.xlabel('days since Feb 20') plt.ylabel('admits') plt.legend() plt.show() # ## Use CHIME with above params - doubling time approach # Trying doubling time first. Here's input file used. # # --current-hospitalized 658 # --doubling-time 3.61 # --hospitalized-day 7 # --hospitalized-rate 0.025 # --icu-days 9 # --icu-rate 0.0075 # --market_share 0.32 # --infectious-days 14 # --n-days 120 # --relative-contact-rate 0.31 # --population 5026226 # --ventilated-day 10 # --ventilated-rate 0.005 # + # dt3p61_df = pd.read_csv("using_chime_cli/2020-03-30_projected_admits_dt3.61.csv", # parse_dates = ['date'], # index_col = 0) # dt3p61_df = dt3p61_df.replace({np.nan: None}) # Barf out a dictionary so I can create hard coded data version # dt3p61_df.to_dict() # - # Here's the results as a dictionary. # + jupyter={"source_hidden": true} admits_df_dt_dict = {'day': {0: -34, 1: -33, 2: -32, 3: -31, 4: -30, 5: -29, 6: -28, 7: -27, 8: -26, 9: -25, 10: -24, 11: -23, 12: -22, 13: -21, 14: -20, 15: -19, 16: -18, 17: -17, 18: -16, 19: -15, 20: -14, 21: -13, 22: -12, 23: -11, 24: -10, 25: -9, 26: -8, 27: -7, 28: -6, 29: -5, 30: -4, 31: -3, 32: -2, 33: -1, 34: 0, 35: 1, 36: 2, 37: 3, 38: 4, 39: 5, 40: 6, 41: 7, 42: 8, 43: 9, 44: 10, 45: 11, 46: 12, 47: 13, 48: 14, 49: 15, 50: 16, 51: 17, 52: 18, 53: 19, 54: 20, 55: 21, 56: 22, 57: 23, 58: 24, 59: 25, 60: 26, 61: 27, 62: 28, 63: 29, 64: 30, 65: 31, 66: 32, 67: 33, 68: 34, 69: 35, 70: 36, 71: 37, 72: 38, 73: 39, 74: 40, 75: 41, 76: 42, 77: 43, 78: 44, 79: 45, 80: 46, 81: 47, 82: 48, 83: 49, 84: 50, 85: 51, 86: 52, 87: 53, 88: 54, 89: 55, 90: 56, 91: 57, 92: 58, 93: 59, 94: 60, 95: 61, 96: 62, 97: 63, 98: 64, 99: 65, 100: 66, 101: 67, 102: 68, 103: 69, 104: 70, 105: 71, 106: 72, 107: 73, 108: 74, 109: 75, 110: 76, 111: 77, 112: 78, 113: 79, 114: 80, 115: 81, 116: 82, 117: 83, 118: 84, 119: 85, 120: 86, 121: 87, 122: 88, 123: 89, 124: 90, 125: 91, 126: 92, 127: 93, 128: 94, 129: 95, 130: 96, 131: 97, 132: 98, 133: 99, 134: 100, 135: 101, 136: 102, 137: 103, 138: 104, 139: 105, 140: 106, 141: 107, 142: 108, 143: 109, 144: 110, 145: 111, 146: 112, 147: 113, 148: 114, 149: 115, 150: 116, 151: 117, 152: 118, 153: 119, 154: 120}, 'date': {0: Timestamp('2020-02-25 00:00:00'), 1: Timestamp('2020-02-26 00:00:00'), 2: Timestamp('2020-02-27 00:00:00'), 3: Timestamp('2020-02-28 00:00:00'), 4: Timestamp('2020-02-29 00:00:00'), 5: Timestamp('2020-03-01 00:00:00'), 6: Timestamp('2020-03-02 00:00:00'), 7: Timestamp('2020-03-03 00:00:00'), 8: Timestamp('2020-03-04 00:00:00'), 9: Timestamp('2020-03-05 00:00:00'), 10: Timestamp('2020-03-06 00:00:00'), 11: Timestamp('2020-03-07 00:00:00'), 12: Timestamp('2020-03-08 00:00:00'), 13: Timestamp('2020-03-09 00:00:00'), 14: Timestamp('2020-03-10 00:00:00'), 15: Timestamp('2020-03-11 00:00:00'), 16: Timestamp('2020-03-12 00:00:00'), 17: Timestamp('2020-03-13 00:00:00'), 18: Timestamp('2020-03-14 00:00:00'), 19: Timestamp('2020-03-15 00:00:00'), 20: Timestamp('2020-03-16 00:00:00'), 21: Timestamp('2020-03-17 
00:00:00'), 22: Timestamp('2020-03-18 00:00:00'), 23: Timestamp('2020-03-19 00:00:00'), 24: Timestamp('2020-03-20 00:00:00'), 25: Timestamp('2020-03-21 00:00:00'), 26: Timestamp('2020-03-22 00:00:00'), 27: Timestamp('2020-03-23 00:00:00'), 28: Timestamp('2020-03-24 00:00:00'), 29: Timestamp('2020-03-25 00:00:00'), 30: Timestamp('2020-03-26 00:00:00'), 31: Timestamp('2020-03-27 00:00:00'), 32: Timestamp('2020-03-28 00:00:00'), 33: Timestamp('2020-03-29 00:00:00'), 34: Timestamp('2020-03-30 00:00:00'), 35: Timestamp('2020-03-31 00:00:00'), 36: Timestamp('2020-04-01 00:00:00'), 37: Timestamp('2020-04-02 00:00:00'), 38: Timestamp('2020-04-03 00:00:00'), 39: Timestamp('2020-04-04 00:00:00'), 40: Timestamp('2020-04-05 00:00:00'), 41: Timestamp('2020-04-06 00:00:00'), 42: Timestamp('2020-04-07 00:00:00'), 43: Timestamp('2020-04-08 00:00:00'), 44: Timestamp('2020-04-09 00:00:00'), 45: Timestamp('2020-04-10 00:00:00'), 46: Timestamp('2020-04-11 00:00:00'), 47: Timestamp('2020-04-12 00:00:00'), 48: Timestamp('2020-04-13 00:00:00'), 49: Timestamp('2020-04-14 00:00:00'), 50: Timestamp('2020-04-15 00:00:00'), 51: Timestamp('2020-04-16 00:00:00'), 52: Timestamp('2020-04-17 00:00:00'), 53: Timestamp('2020-04-18 00:00:00'), 54: Timestamp('2020-04-19 00:00:00'), 55: Timestamp('2020-04-20 00:00:00'), 56: Timestamp('2020-04-21 00:00:00'), 57: Timestamp('2020-04-22 00:00:00'), 58: Timestamp('2020-04-23 00:00:00'), 59: Timestamp('2020-04-24 00:00:00'), 60: Timestamp('2020-04-25 00:00:00'), 61: Timestamp('2020-04-26 00:00:00'), 62: Timestamp('2020-04-27 00:00:00'), 63: Timestamp('2020-04-28 00:00:00'), 64: Timestamp('2020-04-29 00:00:00'), 65: Timestamp('2020-04-30 00:00:00'), 66: Timestamp('2020-05-01 00:00:00'), 67: Timestamp('2020-05-02 00:00:00'), 68: Timestamp('2020-05-03 00:00:00'), 69: Timestamp('2020-05-04 00:00:00'), 70: Timestamp('2020-05-05 00:00:00'), 71: Timestamp('2020-05-06 00:00:00'), 72: Timestamp('2020-05-07 00:00:00'), 73: Timestamp('2020-05-08 00:00:00'), 74: Timestamp('2020-05-09 00:00:00'), 75: Timestamp('2020-05-10 00:00:00'), 76: Timestamp('2020-05-11 00:00:00'), 77: Timestamp('2020-05-12 00:00:00'), 78: Timestamp('2020-05-13 00:00:00'), 79: Timestamp('2020-05-14 00:00:00'), 80: Timestamp('2020-05-15 00:00:00'), 81: Timestamp('2020-05-16 00:00:00'), 82: Timestamp('2020-05-17 00:00:00'), 83: Timestamp('2020-05-18 00:00:00'), 84: Timestamp('2020-05-19 00:00:00'), 85: Timestamp('2020-05-20 00:00:00'), 86: Timestamp('2020-05-21 00:00:00'), 87: Timestamp('2020-05-22 00:00:00'), 88: Timestamp('2020-05-23 00:00:00'), 89: Timestamp('2020-05-24 00:00:00'), 90: Timestamp('2020-05-25 00:00:00'), 91: Timestamp('2020-05-26 00:00:00'), 92: Timestamp('2020-05-27 00:00:00'), 93: Timestamp('2020-05-28 00:00:00'), 94: Timestamp('2020-05-29 00:00:00'), 95: Timestamp('2020-05-30 00:00:00'), 96: Timestamp('2020-05-31 00:00:00'), 97: Timestamp('2020-06-01 00:00:00'), 98: Timestamp('2020-06-02 00:00:00'), 99: Timestamp('2020-06-03 00:00:00'), 100: Timestamp('2020-06-04 00:00:00'), 101: Timestamp('2020-06-05 00:00:00'), 102: Timestamp('2020-06-06 00:00:00'), 103: Timestamp('2020-06-07 00:00:00'), 104: Timestamp('2020-06-08 00:00:00'), 105: Timestamp('2020-06-09 00:00:00'), 106: Timestamp('2020-06-10 00:00:00'), 107: Timestamp('2020-06-11 00:00:00'), 108: Timestamp('2020-06-12 00:00:00'), 109: Timestamp('2020-06-13 00:00:00'), 110: Timestamp('2020-06-14 00:00:00'), 111: Timestamp('2020-06-15 00:00:00'), 112: Timestamp('2020-06-16 00:00:00'), 113: Timestamp('2020-06-17 00:00:00'), 114: Timestamp('2020-06-18 
00:00:00'), 115: Timestamp('2020-06-19 00:00:00'), 116: Timestamp('2020-06-20 00:00:00'), 117: Timestamp('2020-06-21 00:00:00'), 118: Timestamp('2020-06-22 00:00:00'), 119: Timestamp('2020-06-23 00:00:00'), 120: Timestamp('2020-06-24 00:00:00'), 121: Timestamp('2020-06-25 00:00:00'), 122: Timestamp('2020-06-26 00:00:00'), 123: Timestamp('2020-06-27 00:00:00'), 124: Timestamp('2020-06-28 00:00:00'), 125: Timestamp('2020-06-29 00:00:00'), 126: Timestamp('2020-06-30 00:00:00'), 127: Timestamp('2020-07-01 00:00:00'), 128: Timestamp('2020-07-02 00:00:00'), 129: Timestamp('2020-07-03 00:00:00'), 130: Timestamp('2020-07-04 00:00:00'), 131: Timestamp('2020-07-05 00:00:00'), 132: Timestamp('2020-07-06 00:00:00'), 133: Timestamp('2020-07-07 00:00:00'), 134: Timestamp('2020-07-08 00:00:00'), 135: Timestamp('2020-07-09 00:00:00'), 136: Timestamp('2020-07-10 00:00:00'), 137: Timestamp('2020-07-11 00:00:00'), 138: Timestamp('2020-07-12 00:00:00'), 139: Timestamp('2020-07-13 00:00:00'), 140: Timestamp('2020-07-14 00:00:00'), 141: Timestamp('2020-07-15 00:00:00'), 142: Timestamp('2020-07-16 00:00:00'), 143: Timestamp('2020-07-17 00:00:00'), 144: Timestamp('2020-07-18 00:00:00'), 145: Timestamp('2020-07-19 00:00:00'), 146: Timestamp('2020-07-20 00:00:00'), 147: Timestamp('2020-07-21 00:00:00'), 148: Timestamp('2020-07-22 00:00:00'), 149: Timestamp('2020-07-23 00:00:00'), 150: Timestamp('2020-07-24 00:00:00'), 151: Timestamp('2020-07-25 00:00:00'), 152: Timestamp('2020-07-26 00:00:00'), 153: Timestamp('2020-07-27 00:00:00'), 154: Timestamp('2020-07-28 00:00:00')}, 'hospitalized': {0: None, 1: 0.28310821138712994, 2: 0.3430340403402965, 3: 0.41564313266782427, 4: 0.5036192828166834, 5: 0.6102138938691066, 6: 0.7393658901014439, 7: 0.8958468995540741, 8: 1.085437002594766, 9: 1.3151374385949, 10: 1.5934279793565027, 11: 1.9305782560452016, 12: 2.3390242135148918, 13: 2.8338231158105103, 14: 3.43320319903985, 15: 4.1592272272046715, 16: 5.038592919617905, 17: 6.103597549204952, 18: 7.393299012493187, 19: 8.954911373580837, 20: 10.845479270080574, 21: 13.133882546212703, 22: 15.903229828765962, 23: 19.25370707026245, 24: 23.30595363282464, 25: 28.205043104884034, 26: 34.125146891872674, 27: 41.274952895726756, 28: 49.903895118044005, 29: 60.3092167003449, 30: 72.8438300739748, 31: 87.92484145003954, 32: 106.04245647560066, 33: 127.76875812204004, 34: 153.76552008345325, 35: 127.5049331581938, 36: 142.2877266121809, 37: 158.63482615737303, 38: 176.67412495195842, 39: 196.53430931726325, 40: 218.34193474172184, 41: 242.21768814864552, 42: 268.2717294677164, 43: 296.5980214432375, 44: 327.26758740164365, 45: 360.3206868042234, 46: 395.75797223429663, 47: 433.53079268911233, 48: 473.5309385458658, 49: 515.5802819927629, 50: 559.4209467298679, 51: 604.706829223006, 52: 650.9974693056074, 53: 697.7554001685166, 54: 744.3481587643518, 55: 790.0560654465609, 56: 834.0866472890848, 57: 875.596156957934, 58: 913.7180270085083, 59: 947.597332840558, 60: 976.4294937028408, 61: 999.500638078216, 62: 1016.2264418250197, 63: 1026.185958778653, 64: 1029.1471134764506, 65: 1025.0811546081823, 66: 1014.1644246070756, 67: 996.7671419985454, 68: 973.4303074816527, 69: 944.833098791274, 70: 911.7540113491086, 71: 875.0294070747515, 72: 835.5130292054711, 73: 794.0395005323662, 74: 751.3939893489078, 75: 708.2892732897271, 76: 665.3505158595982, 77: 623.1073125607436, 78: 581.9920279282705, 79: 542.3431445542957, 80: 504.4122541061988, 81: 468.3733894451106, 82: 434.3335700139796, 83: 402.3436577267457, 84: 372.40885649037233, 85: 
344.4984070666796, 86: 318.5542142402992, 87: 294.49828891607467, 88: 272.23899428868026, 89: 251.67615708483936, 90: 232.70514840330728, 91: 215.2200605620237, 92: 199.1161127547384, 93: 184.29141439253726, 94: 170.64820481264178, 95: 158.09367458071208, 96: 146.540458982563, 97: 135.90687986843113, 98: 126.11699859854707, 99: 117.10053087197592, 100: 108.79266386944073, 101: 101.13380739517015, 102: 94.06930345974979, 103: 87.54911284379341, 104: 81.5274924441037, 105: 75.96267345022352, 106: 70.8165474620546, 107: 66.05436539195944, 108: 61.64445226452518, 109: 57.55793972794345, 110: 53.76851712935604, 111: 50.2522013102207, 112: 46.987124783809115, 113: 43.953341621730935, 114: 41.13265015619254, 115: 38.50843147424894, 116: 36.06550261535449, 117: 33.78998336255609, 118: 31.669175533330414, 119: 29.69145371140621, 120: 27.846166413051833, 121: 26.123546741007885, 122: 24.51463164472807, 123: 23.011188971926455, 124: 21.605651562691488, 125: 20.291057701106183, 126: 19.06099730089773, 127: 17.909563258494014, 128: 16.831307460532116, 129: 15.821200982427401, 130: 14.87459805973049, 131: 13.98720345561742, 132: 13.15504288567172, 133: 12.374436195139424, 134: 11.641973015059191, 135: 10.954490651281958, 136: 10.309053985918581, 137: 9.702937193142132, 138: 9.133607091607699, 139: 8.598707974124409, 140: 8.096047771403391, 141: 7.6235854215410646, 142: 7.179419329724624, 143: 6.761776814848418, 144: 6.36900444953062, 145: 5.999559210518783, 146: 5.652000363472324, 147: 5.324982015183196, 148: 5.017246271629119, 149: 4.7276169473771015, 150: 4.454993776700576, 151: 4.1983470817431225, 152: 3.956712857354432, 153: 3.729188236451592, 154: None}, 'icu': {0: None, 1: 0.08493246341613891, 2: 0.1029102121020889, 3: 0.12469293980034735, 4: 0.151085784845005, 5: 0.1830641681607319, 6: 0.2218097670304333, 7: 0.2687540698662221, 8: 0.32563110077843005, 9: 0.3945412315784696, 10: 0.4780283938069512, 11: 0.579173476813561, 12: 0.7017072640544666, 13: 0.8501469347431523, 14: 1.0299609597119552, 15: 1.2477681681614021, 16: 1.511577875885373, 17: 1.8310792647614849, 18: 2.217989703747957, 19: 2.6864734120742457, 20: 3.253643781024172, 21: 3.9401647638638164, 22: 4.770968948629786, 23: 5.776112121078732, 24: 6.9917860898473885, 25: 8.461512931465208, 26: 10.237544067561815, 27: 12.382485868718016, 28: 14.971168535413186, 29: 18.092765010103488, 30: 21.853149022192426, 31: 26.37745243501187, 32: 31.81273694268017, 33: 38.330627436612026, 34: 46.12965602503599, 35: 38.25147994745811, 36: 42.68631798365431, 37: 47.59044784721193, 38: 53.00223748558756, 39: 58.96029279517887, 40: 65.50258042251653, 41: 72.66530644459374, 42: 80.48151884031495, 43: 88.97940643297113, 44: 98.18027622049307, 45: 108.096206041267, 46: 118.72739167028921, 47: 130.05923780673356, 48: 142.0592815637599, 49: 154.6740845978288, 50: 167.82628401896022, 51: 181.41204876690173, 52: 195.29924079168225, 53: 209.32662005055454, 54: 223.3044476293057, 55: 237.0168196339687, 56: 250.22599418672507, 57: 262.6788470873803, 58: 274.115408102552, 59: 284.2791998521675, 60: 292.92884811085287, 61: 299.85019142346385, 62: 304.8679325475068, 63: 307.85578763359626, 64: 308.74413404293585, 65: 307.5243463824527, 66: 304.24932738212283, 67: 299.03014259956404, 68: 292.02909224449564, 69: 283.4499296373815, 70: 273.5262034047337, 71: 262.50882212242414, 72: 250.653908761642, 73: 238.2118501597097, 74: 225.4181968046724, 75: 212.4867819869187, 76: 199.60515475787903, 77: 186.9321937682234, 78: 174.5976083784808, 79: 162.7029433662883, 80: 
151.32367623185928, 81: 140.5120168335343, 82: 130.30007100419243, 83: 120.70309731802445, 84: 111.72265694711132, 85: 103.34952212000462, 86: 95.56626427208904, 87: 88.3494866748224, 88: 81.67169828660553, 89: 75.50284712544997, 90: 69.81154452099327, 91: 64.56601816860712, 92: 59.73483382642189, 93: 55.287424317761186, 94: 51.19446144379072, 95: 47.4281023742151, 96: 43.9621376947689, 97: 40.77206396053043, 98: 37.83509957956449, 99: 35.13015926159096, 100: 32.6377991608333, 101: 30.340142218548866, 102: 28.220791037927484, 103: 26.26473385313511, 104: 24.45824773323329, 105: 22.78880203506742, 106: 21.244964238616376, 107: 19.81630961758856, 108: 18.493335679357187, 109: 17.267381918380124, 110: 16.130555138810447, 111: 15.075660393065846, 112: 14.096137435140918, 113: 13.186002486520009, 114: 12.339795046858852, 115: 11.552529442271409, 116: 10.819650784607804, 117: 10.136995008764645, 118: 9.500752660002037, 119: 8.907436113420774, 120: 8.353849923916641, 121: 7.837064022302003, 122: 7.3543894934191485, 123: 6.903356691576847, 124: 6.481695468808539, 125: 6.087317310331856, 126: 5.718299190268227, 127: 5.372868977548933, 128: 5.049392238159271, 129: 4.746360294726402, 130: 4.4623794179206016, 131: 4.1961610366852256, 132: 3.946512865701152, 133: 3.712330858541464, 134: 3.4925919045181217, 135: 3.2863471953860426, 136: 3.0927161957733915, 137: 2.9108811579462786, 138: 2.7400821274804907, 139: 2.579612392237322, 140: 2.428814331418834, 141: 2.287075626463775, 142: 2.153825798917751, 143: 2.02853304445307, 144: 1.9107013348602775, 145: 1.7998677631530882, 146: 1.6956001090438804, 147: 1.5974946045553224, 148: 1.5051738814872806, 149: 1.418285084213494, 150: 1.3364981330105363, 151: 1.2595041245222092, 152: 1.1870138572066935, 153: 1.118756470936205, 154: None}, 'ventilated': {0: None, 1: 0.056621642277425936, 2: 0.0686068080680593, 3: 0.0831286265335649, 4: 0.10072385656333666, 5: 0.1220427787738213, 6: 0.14787317802028874, 7: 0.17916937991081486, 8: 0.21708740051895334, 9: 0.26302748771898, 10: 0.3186855958713004, 11: 0.3861156512090407, 12: 0.4678048427029777, 13: 0.5667646231621024, 14: 0.6866406398079699, 15: 0.8318454454409343, 16: 1.0077185839235812, 17: 1.2207195098409906, 18: 1.4786598024986377, 19: 1.790982274716166, 20: 2.1690958540161134, 21: 2.626776509242543, 22: 3.180645965753192, 23: 3.8507414140524863, 24: 4.661190726564929, 25: 5.641008620976808, 26: 6.825029378374537, 27: 8.25499057914535, 28: 9.98077902360879, 29: 12.061843340068993, 30: 14.56876601479496, 31: 17.584968290007893, 32: 21.20849129512013, 33: 25.55375162440801, 34: 30.753104016690656, 35: 25.500986631638735, 36: 28.457545322436204, 37: 31.726965231474647, 38: 35.334824990391674, 39: 39.30686186345264, 40: 43.668386948344335, 41: 48.44353762972912, 42: 53.65434589354335, 43: 59.31960428864738, 44: 65.45351748032886, 45: 72.0641373608446, 46: 79.15159444685924, 47: 86.70615853782247, 48: 94.70618770917316, 49: 103.1160563985527, 50: 111.88418934597372, 51: 120.9413658446008, 52: 130.1994938611217, 53: 139.5510800337031, 54: 148.86963175287042, 55: 158.0112130893126, 56: 166.8173294578166, 57: 175.11923139158714, 58: 182.7436054017012, 59: 189.51946656811148, 60: 195.285898740568, 61: 199.90012761564319, 62: 203.24528836500443, 63: 205.23719175573117, 64: 205.8294226952903, 65: 205.0162309216353, 66: 202.83288492141537, 67: 199.35342839970872, 68: 194.6860614963307, 69: 188.96661975825464, 70: 182.35080226982242, 71: 175.00588141494973, 72: 167.10260584109437, 73: 158.80790010647345, 74: 150.27879786978158, 
75: 141.65785465794488, 76: 133.07010317191998, 77: 124.62146251214836, 78: 116.39840558565449, 79: 108.4686289108595, 80: 100.88245082124013, 81: 93.67467788902195, 82: 86.86671400279556, 83: 80.46873154534933, 84: 74.481771298073, 85: 68.89968141333702, 86: 63.71084284806057, 87: 58.899657783214934, 88: 54.44779885773551, 89: 50.33523141696696, 90: 46.541029680662184, 91: 43.04401211240384, 92: 39.82322255094823, 93: 36.858282878508355, 94: 34.129640962526544, 95: 31.61873491614369, 96: 29.30809179651351, 97: 27.181375973686954, 98: 25.22339971970814, 99: 23.420106174394284, 100: 21.758532773888874, 101: 20.226761479032575, 102: 18.81386069195105, 103: 17.509822568758864, 104: 16.305498488821286, 105: 15.192534690044338, 106: 14.16330949241001, 107: 13.21087307839298, 108: 12.328890452905398, 109: 11.511587945587053, 110: 10.753703425873027, 111: 10.050440262043594, 112: 9.397424956761824, 113: 8.790668324345459, 114: 8.226530031238326, 115: 7.7016862948494245, 116: 7.213100523071263, 117: 6.757996672511581, 118: 6.3338351066659015, 119: 5.938290742281424, 120: 5.569233282610186, 121: 5.2247093482028495, 122: 4.902926328944887, 123: 4.602237794386383, 124: 4.321130312537207, 125: 4.058211540220327, 126: 3.812199460180637, 127: 3.581912651698076, 128: 3.3662614921058776, 129: 3.1642401964854803, 130: 2.9749196119473713, 131: 2.797440691123484, 132: 2.631008577133798, 133: 2.4748872390282486, 134: 2.3283946030105653, 135: 2.190898130257665, 136: 2.0618107971831705, 137: 1.9405874386302455, 138: 1.8267214183206304, 139: 1.7197415948239725, 140: 1.6192095542801326, 141: 1.5247170843085769, 142: 1.4358838659454705, 143: 1.3523553629693197, 144: 1.2738008899059423, 145: 1.1999118421035746, 146: 1.1304000726950107, 147: 1.0649964030362753, 148: 1.0034492543263696, 149: 0.9455233894741468, 150: 0.8909987553406609, 151: 0.8396694163484426, 152: 0.7913425714714322, 153: 0.7458376472905002, 154: None}} # - dt3p61_df = pd.DataFrame(admits_df_dt_dict) dt3p61_df.head() # Need to shift the index on the CHIME output since it infers a first hospitalized date and that date, while hopefully close, is not guaranteed to match our actual date. num_days_to_shift = (dt3p61_df.date[0] - actual_df.date[0]).days print('Shifting CHIME output index by {} days'.format(num_days_to_shift)) dt3p61_df.index = dt3p61_df.index + num_days_to_shift #.shift() only works for DateTime like index dt3p61_df.head(10) chime_y_dt = np.array(dt3p61_df.hospitalized)[:len(x)] plt.plot(x, func(x, *popt), 'g--', label='exp fit: a={:5.3f}, b={:5.3f}, c={:5.3f}'.format(*popt)) plt.plot(x, y, 'b:', label = 'actual') plt.plot(x, chime_y_dt, 'r-', label = 'chime') plt.title('Model calibration using user specified doubling time') plt.xlabel('days since Feb 20') plt.ylabel('admits') plt.legend() plt.savefig("model_calibration_dt3p61.png") plt.show() # ### Nice! # ## Use CHIME with above params - actual first admit date approach # # Now let's see how things look if we use our actual first admit date. Here's input file used. 
# # --current-hospitalized 658 # --date-first-hospitalized 2020-02-20 # --hospitalized-day 7 # --hospitalized-rate 0.025 # --icu-days 9 # --icu-rate 0.0075 # --market_share 0.32 # --infectious-days 14 # --n-days 120 # --relative-contact-rate 0.31 # --population 5026226 # --ventilated-day 10 # --ventilated-rate 0.005 # + # fd20200220_df = pd.read_csv("using_chime_cli/2020-03-30_projected_admits_fd20200220.csv", # parse_dates = ['date'], # index_col = 0) # fd20200220_df = fd20200220_df.replace({np.nan: None}) # # Barf out a dictionary so I can create hard coded data version # fd20200220_df.to_dict() # + jupyter={"source_hidden": true} admits_df_fd_dict = {'day': {0: -39, 1: -38, 2: -37, 3: -36, 4: -35, 5: -34, 6: -33, 7: -32, 8: -31, 9: -30, 10: -29, 11: -28, 12: -27, 13: -26, 14: -25, 15: -24, 16: -23, 17: -22, 18: -21, 19: -20, 20: -19, 21: -18, 22: -17, 23: -16, 24: -15, 25: -14, 26: -13, 27: -12, 28: -11, 29: -10, 30: -9, 31: -8, 32: -7, 33: -6, 34: -5, 35: -4, 36: -3, 37: -2, 38: -1, 39: 0, 40: 1, 41: 2, 42: 3, 43: 4, 44: 5, 45: 6, 46: 7, 47: 8, 48: 9, 49: 10, 50: 11, 51: 12, 52: 13, 53: 14, 54: 15, 55: 16, 56: 17, 57: 18, 58: 19, 59: 20, 60: 21, 61: 22, 62: 23, 63: 24, 64: 25, 65: 26, 66: 27, 67: 28, 68: 29, 69: 30, 70: 31, 71: 32, 72: 33, 73: 34, 74: 35, 75: 36, 76: 37, 77: 38, 78: 39, 79: 40, 80: 41, 81: 42, 82: 43, 83: 44, 84: 45, 85: 46, 86: 47, 87: 48, 88: 49, 89: 50, 90: 51, 91: 52, 92: 53, 93: 54, 94: 55, 95: 56, 96: 57, 97: 58, 98: 59, 99: 60, 100: 61, 101: 62, 102: 63, 103: 64, 104: 65, 105: 66, 106: 67, 107: 68, 108: 69, 109: 70, 110: 71, 111: 72, 112: 73, 113: 74, 114: 75, 115: 76, 116: 77, 117: 78, 118: 79, 119: 80, 120: 81, 121: 82, 122: 83, 123: 84, 124: 85, 125: 86, 126: 87, 127: 88, 128: 89, 129: 90, 130: 91, 131: 92, 132: 93, 133: 94, 134: 95, 135: 96, 136: 97, 137: 98, 138: 99, 139: 100, 140: 101, 141: 102, 142: 103, 143: 104, 144: 105, 145: 106, 146: 107, 147: 108, 148: 109, 149: 110, 150: 111, 151: 112, 152: 113, 153: 114, 154: 115, 155: 116, 156: 117, 157: 118, 158: 119, 159: 120}, 'date': {0: Timestamp('2020-02-20 00:00:00'), 1: Timestamp('2020-02-21 00:00:00'), 2: Timestamp('2020-02-22 00:00:00'), 3: Timestamp('2020-02-23 00:00:00'), 4: Timestamp('2020-02-24 00:00:00'), 5: Timestamp('2020-02-25 00:00:00'), 6: Timestamp('2020-02-26 00:00:00'), 7: Timestamp('2020-02-27 00:00:00'), 8: Timestamp('2020-02-28 00:00:00'), 9: Timestamp('2020-02-29 00:00:00'), 10: Timestamp('2020-03-01 00:00:00'), 11: Timestamp('2020-03-02 00:00:00'), 12: Timestamp('2020-03-03 00:00:00'), 13: Timestamp('2020-03-04 00:00:00'), 14: Timestamp('2020-03-05 00:00:00'), 15: Timestamp('2020-03-06 00:00:00'), 16: Timestamp('2020-03-07 00:00:00'), 17: Timestamp('2020-03-08 00:00:00'), 18: Timestamp('2020-03-09 00:00:00'), 19: Timestamp('2020-03-10 00:00:00'), 20: Timestamp('2020-03-11 00:00:00'), 21: Timestamp('2020-03-12 00:00:00'), 22: Timestamp('2020-03-13 00:00:00'), 23: Timestamp('2020-03-14 00:00:00'), 24: Timestamp('2020-03-15 00:00:00'), 25: Timestamp('2020-03-16 00:00:00'), 26: Timestamp('2020-03-17 00:00:00'), 27: Timestamp('2020-03-18 00:00:00'), 28: Timestamp('2020-03-19 00:00:00'), 29: Timestamp('2020-03-20 00:00:00'), 30: Timestamp('2020-03-21 00:00:00'), 31: Timestamp('2020-03-22 00:00:00'), 32: Timestamp('2020-03-23 00:00:00'), 33: Timestamp('2020-03-24 00:00:00'), 34: Timestamp('2020-03-25 00:00:00'), 35: Timestamp('2020-03-26 00:00:00'), 36: Timestamp('2020-03-27 00:00:00'), 37: Timestamp('2020-03-28 00:00:00'), 38: Timestamp('2020-03-29 00:00:00'), 39: Timestamp('2020-03-30 
00:00:00'), 40: Timestamp('2020-03-31 00:00:00'), 41: Timestamp('2020-04-01 00:00:00'), 42: Timestamp('2020-04-02 00:00:00'), 43: Timestamp('2020-04-03 00:00:00'), 44: Timestamp('2020-04-04 00:00:00'), 45: Timestamp('2020-04-05 00:00:00'), 46: Timestamp('2020-04-06 00:00:00'), 47: Timestamp('2020-04-07 00:00:00'), 48: Timestamp('2020-04-08 00:00:00'), 49: Timestamp('2020-04-09 00:00:00'), 50: Timestamp('2020-04-10 00:00:00'), 51: Timestamp('2020-04-11 00:00:00'), 52: Timestamp('2020-04-12 00:00:00'), 53: Timestamp('2020-04-13 00:00:00'), 54: Timestamp('2020-04-14 00:00:00'), 55: Timestamp('2020-04-15 00:00:00'), 56: Timestamp('2020-04-16 00:00:00'), 57: Timestamp('2020-04-17 00:00:00'), 58: Timestamp('2020-04-18 00:00:00'), 59: Timestamp('2020-04-19 00:00:00'), 60: Timestamp('2020-04-20 00:00:00'), 61: Timestamp('2020-04-21 00:00:00'), 62: Timestamp('2020-04-22 00:00:00'), 63: Timestamp('2020-04-23 00:00:00'), 64: Timestamp('2020-04-24 00:00:00'), 65: Timestamp('2020-04-25 00:00:00'), 66: Timestamp('2020-04-26 00:00:00'), 67: Timestamp('2020-04-27 00:00:00'), 68: Timestamp('2020-04-28 00:00:00'), 69: Timestamp('2020-04-29 00:00:00'), 70: Timestamp('2020-04-30 00:00:00'), 71: Timestamp('2020-05-01 00:00:00'), 72: Timestamp('2020-05-02 00:00:00'), 73: Timestamp('2020-05-03 00:00:00'), 74: Timestamp('2020-05-04 00:00:00'), 75: Timestamp('2020-05-05 00:00:00'), 76: Timestamp('2020-05-06 00:00:00'), 77: Timestamp('2020-05-07 00:00:00'), 78: Timestamp('2020-05-08 00:00:00'), 79: Timestamp('2020-05-09 00:00:00'), 80: Timestamp('2020-05-10 00:00:00'), 81: Timestamp('2020-05-11 00:00:00'), 82: Timestamp('2020-05-12 00:00:00'), 83: Timestamp('2020-05-13 00:00:00'), 84: Timestamp('2020-05-14 00:00:00'), 85: Timestamp('2020-05-15 00:00:00'), 86: Timestamp('2020-05-16 00:00:00'), 87: Timestamp('2020-05-17 00:00:00'), 88: Timestamp('2020-05-18 00:00:00'), 89: Timestamp('2020-05-19 00:00:00'), 90: Timestamp('2020-05-20 00:00:00'), 91: Timestamp('2020-05-21 00:00:00'), 92: Timestamp('2020-05-22 00:00:00'), 93: Timestamp('2020-05-23 00:00:00'), 94: Timestamp('2020-05-24 00:00:00'), 95: Timestamp('2020-05-25 00:00:00'), 96: Timestamp('2020-05-26 00:00:00'), 97: Timestamp('2020-05-27 00:00:00'), 98: Timestamp('2020-05-28 00:00:00'), 99: Timestamp('2020-05-29 00:00:00'), 100: Timestamp('2020-05-30 00:00:00'), 101: Timestamp('2020-05-31 00:00:00'), 102: Timestamp('2020-06-01 00:00:00'), 103: Timestamp('2020-06-02 00:00:00'), 104: Timestamp('2020-06-03 00:00:00'), 105: Timestamp('2020-06-04 00:00:00'), 106: Timestamp('2020-06-05 00:00:00'), 107: Timestamp('2020-06-06 00:00:00'), 108: Timestamp('2020-06-07 00:00:00'), 109: Timestamp('2020-06-08 00:00:00'), 110: Timestamp('2020-06-09 00:00:00'), 111: Timestamp('2020-06-10 00:00:00'), 112: Timestamp('2020-06-11 00:00:00'), 113: Timestamp('2020-06-12 00:00:00'), 114: Timestamp('2020-06-13 00:00:00'), 115: Timestamp('2020-06-14 00:00:00'), 116: Timestamp('2020-06-15 00:00:00'), 117: Timestamp('2020-06-16 00:00:00'), 118: Timestamp('2020-06-17 00:00:00'), 119: Timestamp('2020-06-18 00:00:00'), 120: Timestamp('2020-06-19 00:00:00'), 121: Timestamp('2020-06-20 00:00:00'), 122: Timestamp('2020-06-21 00:00:00'), 123: Timestamp('2020-06-22 00:00:00'), 124: Timestamp('2020-06-23 00:00:00'), 125: Timestamp('2020-06-24 00:00:00'), 126: Timestamp('2020-06-25 00:00:00'), 127: Timestamp('2020-06-26 00:00:00'), 128: Timestamp('2020-06-27 00:00:00'), 129: Timestamp('2020-06-28 00:00:00'), 130: Timestamp('2020-06-29 00:00:00'), 131: Timestamp('2020-06-30 00:00:00'), 132: 
Timestamp('2020-07-01 00:00:00'), 133: Timestamp('2020-07-02 00:00:00'), 134: Timestamp('2020-07-03 00:00:00'), 135: Timestamp('2020-07-04 00:00:00'), 136: Timestamp('2020-07-05 00:00:00'), 137: Timestamp('2020-07-06 00:00:00'), 138: Timestamp('2020-07-07 00:00:00'), 139: Timestamp('2020-07-08 00:00:00'), 140: Timestamp('2020-07-09 00:00:00'), 141: Timestamp('2020-07-10 00:00:00'), 142: Timestamp('2020-07-11 00:00:00'), 143: Timestamp('2020-07-12 00:00:00'), 144: Timestamp('2020-07-13 00:00:00'), 145: Timestamp('2020-07-14 00:00:00'), 146: Timestamp('2020-07-15 00:00:00'), 147: Timestamp('2020-07-16 00:00:00'), 148: Timestamp('2020-07-17 00:00:00'), 149: Timestamp('2020-07-18 00:00:00'), 150: Timestamp('2020-07-19 00:00:00'), 151: Timestamp('2020-07-20 00:00:00'), 152: Timestamp('2020-07-21 00:00:00'), 153: Timestamp('2020-07-22 00:00:00'), 154: Timestamp('2020-07-23 00:00:00'), 155: Timestamp('2020-07-24 00:00:00'), 156: Timestamp('2020-07-25 00:00:00'), 157: Timestamp('2020-07-26 00:00:00'), 158: Timestamp('2020-07-27 00:00:00'), 159: Timestamp('2020-07-28 00:00:00')}, 'hospitalized': {0: None, 1: 0.26063568643129265, 2: 0.30994780361612073, 3: 0.3685887684065661, 4: 0.4383230045117259, 5: 0.5212484848145089, 6: 0.6198597166427717, 7: 0.7371225920430557, 8: 0.8765633262449724, 9: 1.042374119124572, 10: 1.2395386602445295, 11: 1.4739811704548382, 12: 1.752743346298255, 13: 2.0841943635917453, 14: 2.478280021331585, 15: 2.9468181858690787, 16: 3.503848948669013, 17: 4.166049359819457, 18: 4.953224263771183, 19: 5.8888866603933, 20: 7.000943153724173, 21: 8.322502431663082, 22: 9.892827322227184, 23: 11.758453746026738, 24: 13.974502735048445, 25: 16.606214451970516, 26: 19.73073556093623, 27: 23.4391929676366, 28: 27.839087261695056, 29: 33.05703727755348, 30: 39.241901779511544, 31: 46.568293595526285, 32: 55.24048311496336, 33: 65.49665859068324, 34: 77.6134657096501, 35: 91.9106826692108, 36: 108.75579230321215, 37: 128.56808097881174, 38: 151.8217152864736, 39: 179.04701216563217, 40: 145.471884626862, 41: 159.89548608342258, 42: 175.57774145036728, 43: 192.59146131063633, 44: 211.00537625107518, 45: 230.88182970492718, 46: 252.27405479023946, 47: 275.22301889684917, 48: 299.7538372915992, 49: 325.871781288452, 50: 353.55793840149266, 51: 382.7646217812385, 52: 413.41067385325096, 53: 445.37686313927225, 54: 478.50163108998186, 55: 512.5775030663808, 56: 547.3485281814219, 57: 582.5091485306383, 58: 617.7049099303331, 59: 652.53540357582, 60: 686.559761745466, 61: 719.3049141850042, 62: 750.2766433517609, 63: 778.9732615463307, 64: 804.9014849968097, 65: 827.5938220979386, 66: 846.6265558166923, 67: 861.6372185615802, 68: 872.3403652742453, 69: 878.5404732462266, 70: 880.1409473938803, 71: 877.148481220298, 72: 869.6723905561047, 73: 857.9189566707864, 74: 842.1812346273109, 75: 822.8251481088338, 76: 800.2729584926857, 77: 774.9853354032492, 78: 747.4432607351192, 79: 718.1308808985123, 80: 687.5202114417625, 81: 656.0583319680553, 82: 624.1574266080788, 83: 592.1877605181289, 84: 560.4734609846673, 85: 529.2908068526449, 86: 498.86862592948825, 87: 469.3903523203698, 88: 440.9972945424269, 89: 413.79269846236997, 90: 387.8462441458069, 91: 363.19868157745805, 92: 339.8663781875548, 93: 317.8456150332277, 94: 297.1165245464035, 95: 277.6466091512229, 96: 259.393816417185, 97: 242.30917337058415, 98: 226.33900132020423, 99: 211.42674449165497, 100: 197.51445236410652, 101: 184.54395820241191, 102: 172.45779603748818, 103: 161.19989621212153, 104: 150.71609632036416, 105: 
140.9545004800166, 106: 131.86571578634175, 107: 123.40299077035887, 108: 115.52227690087601, 109: 108.18223072100956, 110: 101.34417114614187, 111: 94.97200378067646, 112: 89.03212181977871, 113: 83.49329116675653, 114: 78.3265257760213, 115: 73.50495789424167, 116: 69.00370677465979, 117: 64.79974855121691, 118: 60.871789243719824, 119: 57.20014229659864, 120: 53.766611604485654, 121: 50.554380627931096, 122: 47.54790793235588, 123: 44.73282927802211, 124: 42.09586623506766, 125: 39.62474118575483, 126: 37.30809849574871, 127: 35.135431582348254, 128: 33.097015572573575, 129: 31.18384522476117, 130: 29.387577778747072, 131: 27.700480400802924, 132: 26.11538189517887, 133: 24.625628365924054, 134: 23.22504252604267, 135: 21.907886367196628, 136: 20.668826919878484, 137: 19.502904852044594, 138: 18.405505670336424, 139: 17.372333306724613, 140: 16.399385888449615, 141: 15.482933505816616, 142: 14.619497807085281, 143: 13.80583326327178, 144: 13.038909959897866, 145: 12.315897783417313, 146: 11.634151882921287, 147: 10.991199296695411, 148: 10.384726643984322, 149: 9.812568790446676, 150: 9.272698403874527, 151: 8.763216324688983, 152: 8.282342681595765, 153: 7.828408689922072, 154: 7.399849074812664, 155: 6.99519506735669, 156: 6.613067925834912, 157: 6.252172938868171, 158: 5.91129387068213, 159: None}, 'icu': {0: None, 1: 0.07819070592938776, 2: 0.09298434108483616, 3: 0.11057663052196992, 4: 0.13149690135351766, 5: 0.1563745454443528, 6: 0.18595791499283126, 7: 0.22113677761291692, 8: 0.2629689978734915, 9: 0.3127122357373717, 10: 0.3718615980733588, 11: 0.4421943511364508, 12: 0.5258230038894771, 13: 0.6252583090775232, 14: 0.7434840063994756, 15: 0.884045455760722, 16: 1.0511546846007054, 17: 1.2498148079458378, 18: 1.4859672791313532, 19: 1.7666659981179915, 20: 2.1002829461172516, 21: 2.4967507294989235, 22: 2.9678481966681516, 23: 3.5275361238080194, 24: 4.192350820514534, 25: 4.981864335591156, 26: 5.919220668280865, 27: 7.031757890290983, 28: 8.351726178508521, 29: 9.917111183266053, 30: 11.772570533853454, 31: 13.97048807865788, 32: 16.572144934489017, 33: 19.64899757720497, 34: 23.284039712895023, 35: 27.57320480076325, 36: 32.62673769096364, 37: 38.5704242936435, 38: 45.546514585942106, 39: 53.714103649689605, 40: 43.64156538805866, 41: 47.968645825026776, 42: 52.67332243511015, 43: 57.77743839319089, 44: 63.301612875322455, 45: 69.26454891147819, 46: 75.68221643707182, 47: 82.56690566905479, 48: 89.92615118747972, 49: 97.76153438653569, 50: 106.06738152044771, 51: 114.82938653437168, 52: 124.023202155975, 53: 133.6130589417817, 54: 143.5504893269947, 55: 153.77325091991432, 56: 164.2045584544262, 57: 174.75274455919183, 58: 185.3114729791, 59: 195.76062107274586, 60: 205.96792852363976, 61: 215.7914742555013, 62: 225.08299300552792, 63: 233.69197846389991, 64: 241.47044549904285, 65: 248.27814662938184, 66: 253.9879667450073, 67: 258.4911655684737, 68: 261.7021095822738, 69: 263.56214197386726, 70: 264.04228421816515, 71: 263.1445443660896, 72: 260.9017171668311, 73: 257.37568700123575, 74: 252.65437038819252, 75: 246.84754443265047, 76: 240.08188754780716, 77: 232.49560062097225, 78: 224.232978220537, 79: 215.43926426955383, 80: 206.25606343252912, 81: 196.81749959041645, 82: 187.24722798242328, 83: 177.65632815543813, 84: 168.14203829539932, 85: 158.7872420557942, 86: 149.6605877788479, 87: 140.81710569611099, 88: 132.29918836272736, 89: 124.13780953870993, 90: 116.3538732437428, 91: 108.95960447323704, 92: 101.9599134562668, 93: 95.35368450996977, 94: 89.13495736391815, 
95: 83.29398274536835, 96: 77.81814492515514, 97: 72.6927520111767, 98: 67.90170039605982, 99: 63.42802334749649, 100: 59.254335709232684, 101: 55.36318746072357, 102: 51.73733881124645, 103: 48.359968863636816, 104: 45.214828896107065, 105: 42.28635014400607, 106: 39.55971473590398, 107: 37.020897231106574, 108: 34.65668307026499, 109: 32.45466921630032, 110: 30.403251343843287, 111: 28.491601134201122, 112: 26.70963654593652, 113: 25.047987350026236, 114: 23.497957732804927, 115: 22.05148736827323, 116: 20.701112032398672, 117: 19.43992456536398, 118: 18.26153677311595, 119: 17.160042688978134, 120: 16.129983481348972, 121: 15.166314188378234, 122: 14.2643723797064, 123: 13.419848783407359, 124: 12.628759870520296, 125: 11.887422355726812, 126: 11.192429548724249, 127: 10.540629474700836, 128: 9.929104671775349, 129: 9.355153567426896, 130: 8.816273333626668, 131: 8.310144120239785, 132: 7.834614568553661, 133: 7.387688509776126, 134: 6.967512757813893, 135: 6.572365910156805, 136: 6.200648075966456, 137: 5.850871455611923, 138: 5.52165170110311, 139: 5.2116999920144735, 140: 4.919815766535977, 141: 4.6448800517464415, 142: 4.385849342124857, 143: 4.1417499789804415, 144: 3.9116729879715417, 145: 3.694769335023011, 146: 3.4902455648752952, 147: 3.2973597890122615, 148: 3.115417993190931, 149: 2.943770637135458, 150: 2.7818095211623586, 151: 2.6289648974070587, 152: 2.4847028044769104, 153: 2.348522606978804, 154: 2.219954722444527, 155: 2.098558520205188, 156: 1.9839203777519288, 157: 1.875651881660815, 158: 1.7733881612057305, 159: None}, 'ventilated': {0: None, 1: 0.052127137286258514, 2: 0.06198956072322415, 3: 0.07371775368131317, 4: 0.08766460090234518, 5: 0.10424969696290183, 6: 0.12397194332855432, 7: 0.14742451840861126, 8: 0.17531266524899436, 9: 0.20847482382491436, 10: 0.24790773204890604, 11: 0.2947962340909676, 12: 0.3505486692596516, 13: 0.4168388727183485, 14: 0.495656004266317, 15: 0.5893636371738151, 16: 0.700769789733803, 17: 0.8332098719638923, 18: 0.9906448527542356, 19: 1.1777773320786604, 20: 1.400188630744834, 21: 1.6645004863326154, 22: 1.9785654644454385, 23: 2.3516907492053445, 24: 2.794900547009691, 25: 3.3212428903941067, 26: 3.946147112187244, 27: 4.687838593527316, 28: 5.567817452339014, 29: 6.611407455510701, 30: 7.8483803559023, 31: 9.313658719105256, 32: 11.048096622992666, 33: 13.099331718136668, 34: 15.522693141930006, 35: 18.38213653384217, 36: 21.751158460642415, 37: 25.713616195762338, 38: 30.364343057294718, 39: 35.80940243312642, 40: 29.094376925372416, 41: 31.97909721668452, 42: 35.11554829007349, 43: 38.518292262127204, 44: 42.201075250215005, 45: 46.17636594098548, 46: 50.4548109580478, 47: 55.044603779369936, 48: 59.950767458319895, 49: 65.17435625769042, 50: 70.71158768029841, 51: 76.5529243562478, 52: 82.68213477065012, 53: 89.07537262785446, 54: 95.70032621799636, 55: 102.51550061327612, 56: 109.46970563628406, 57: 116.50182970612808, 58: 123.54098198606675, 59: 130.50708071516374, 60: 137.31195234909342, 61: 143.86098283700062, 62: 150.05532867035208, 63: 155.79465230926644, 64: 160.98029699936203, 65: 165.51876441958802, 66: 169.32531116333848, 67: 172.32744371231502, 68: 174.4680730548498, 69: 175.70809464924469, 70: 176.0281894787763, 71: 175.42969624405987, 72: 173.9344781112209, 73: 171.58379133415747, 74: 168.43624692546248, 75: 164.56502962176637, 76: 160.05459169853745, 77: 154.99706708064878, 78: 149.4886521470244, 79: 143.62617617970318, 80: 137.50404228835214, 81: 131.21166639361036, 82: 124.83148532161611, 83: 
118.43755210362451, 84: 112.09469219693436, 85: 105.85816137053007, 86: 99.77372518589708, 87: 93.87807046407488, 88: 88.19945890848341, 89: 82.75853969247419, 90: 77.56924882916155, 91: 72.63973631549106, 92: 67.97327563751243, 93: 63.569123006645896, 94: 59.42330490927907, 95: 55.529321830245856, 96: 51.87876328343646, 97: 48.46183467411811, 98: 45.26780026403957, 99: 42.285348898331904, 100: 39.50289047282149, 101: 36.908791640482384, 102: 34.49155920749672, 103: 32.23997924242485, 104: 30.143219264071373, 105: 28.190900096004047, 106: 26.373143157268714, 107: 24.680598154071962, 108: 23.10445538017484, 109: 21.636446144201727, 110: 20.26883422922947, 111: 18.994400756134386, 112: 17.806424363957372, 113: 16.698658233350216, 114: 15.665305155203896, 115: 14.700991578849425, 116: 13.800741354931233, 117: 12.959949710241744, 118: 12.174357848744874, 119: 11.440028459319365, 120: 10.753322320898404, 121: 10.110876125584582, 122: 9.50958158647245, 123: 8.946565855603694, 124: 8.419173247014443, 125: 7.924948237150602, 126: 7.461619699149196, 127: 7.0270863164696475, 128: 6.61940311451508, 129: 6.236769044950961, 130: 5.877515555751415, 131: 5.540096080159856, 132: 5.2230763790348655, 133: 4.925125673185903, 134: 4.645008505208351, 135: 4.381577273439689, 136: 4.133765383975516, 137: 3.9005809704076455, 138: 3.681101134067831, 139: 3.4744666613451045, 140: 3.2798771776906506, 141: 3.0965867011627783, 142: 2.9238995614168743, 143: 2.7611666526545378, 144: 2.6077819919792087, 145: 2.4631795566838264, 146: 2.32683037658444, 147: 2.1982398593390826, 148: 2.076945328795773, 149: 1.9625137580896987, 150: 1.8545396807758152, 151: 1.7526432649374328, 152: 1.6564685363182434, 153: 1.5656817379858694, 154: 1.4799698149618052, 155: 1.3990390134713377, 156: 1.322613585166437, 157: 1.2504345877759988, 158: 1.1822587741353343, 159: None}} # - fd20200220_df = pd.DataFrame(admits_df_dt_dict) fd20200220_df.head() num_days_to_shift = (fd20200220_df.date[0] - actual_df.date[0]).days print('Shifting CHIME output index by {} days'.format(num_days_to_shift)) fd20200220_df.index = fd20200220_df.index + num_days_to_shift #.shift() only works for DateTime like index fd20200220_df.head(10) chime_y_fd = np.array(fd20200220_df.hospitalized)[:len(x)] plt.plot(x, func(x, *popt), 'g--', label='exp fit: a={:5.3f}, b={:5.3f}, c={:5.3f}'.format(*popt)) plt.plot(x, y, 'b:', label = 'actual') plt.plot(x, chime_y_fd, 'r-', label = 'chime') plt.title('Model calibration using user specified first admit date') plt.xlabel('days since Feb 20') plt.ylabel('admits') plt.legend() plt.savefig("model_calibration_fd20200220.png") plt.show() # ### Great, results still look good.. # Are predictions identical? Seems like they should be. For the first date approach, chime said implied doubling was 4.0, but of course that's with a different time 0. 
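# A quick aside before that comparison: the index-alignment step above (adding `num_days_to_shift` to the integer index, since the frame has a plain RangeIndex rather than a datetime index) is easy to check with a tiny self-contained sketch. The frame names and dates below are toy values for illustration only, not the CHIME output.
# +
import pandas as pd

# hypothetical model output starting Feb 25 vs. actuals starting Feb 20
toy_model = pd.DataFrame({'date': pd.date_range('2020-02-25', periods=5), 'admits': [1, 2, 4, 8, 16]})
toy_actual = pd.DataFrame({'date': pd.date_range('2020-02-20', periods=10), 'admits': range(10)})

toy_offset = (toy_model.date[0] - toy_actual.date[0]).days  # 5 days
toy_model.index = toy_model.index + toy_offset              # row i now means "day i since Feb 20"
toy_model.head()
# -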
chime_y_dt - chime_y_fd plt.plot(x, func(x, *popt), 'g--', label='exp fit: a={:5.3f}, b={:5.3f}, c={:5.3f}'.format(*popt)) plt.plot(x, y, 'b:', label = 'actual') plt.plot(x, chime_y_fd, 'r-', label = 'chime (first date)') plt.plot(x, chime_y_dt, 'b-', label = 'chime (doubling time)') plt.title('chime prediction comparison using doubling time vs first admit date') plt.xlabel('days since Feb 20') plt.ylabel('admits') plt.legend() plt.savefig("model_calibration_btvsfd.png") plt.show() # ## Moving forward # As our hospital system is going to be running up against capacity constraints very soon, we are trying to develop **finite capacity models** to help with the tactical decisions needed during this upcoming phase. We'll share whatever we come up with at https://github.com/misken/c19. # # A few other nice-to-have things that I'm working on are: # # - with one of the earlier versions I created a tailored version of the CLI as well as a way to evaluate hundreds of scenarios quickly by iterating over ranges of inputs and using models.py as a library. Since the CLI is still changing, I don't really want to redo this yet, but I will create a wrapper of some sort so that we can easily add some custom inputs. A few simple such inputs are a scenario (str) label and an output path (I used to use the --prefix arg as a workaround, but that seems to have vanished). We like to try out numerous scenarios on a given date, so just labelling with the date isn't sufficient. # - wondering whether the fact that social distancing practices likely changed when our state went into shelter-in-place mode on March 23 has any practical impact on model projections going forward. Each state (or even more local area) would have its own date as to when social distancing really kicked in. May try to extend the early growth stage model to capture this nuance. # Thank you to everyone at the [CodeForPhilly CHIME project](https://codeforphilly.org/projects/chime) and at [UPenn's Predictive Healthcare group](http://predictivehealthcare.pennmedicine.org/). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Example notebook for pyPROFILE # ## Setup # + # imports import matplotlib.pyplot as plt import numpy as np import pandas as pd from pyPROFILE import model_profile import contextlib from time import time @contextlib.contextmanager def timer(msg='timer'): tic = time() yield return print(f"{msg}: {time() - tic:.2f} s") # - # ## Find the optimal profile of net rate of production, R # # The function find_optimal_profile returns a pandas DataFrame containing the initial data, the modeled concentration and the optimal profile of R.
with timer('Model profile'): df = model_profile("./Data/Example.csv", n_r = 20) # ## Results visualization # + # Plot the ground truth C_columns = [col for col in df.columns if (col[0]=='C' and col!='C_model')] C_GT = df[C_columns].values fig, ax = plt.subplots(1,2,figsize=[10,10], sharey=True) plt.gca().invert_yaxis() plt.sca(ax[0]) _ = plt.plot(C_GT,df['X'],'.b') plt.ylabel('Depth') plt.xlabel('Concentration [$g/cm^3$]') # Plot the model _ = plt.plot(df['C_model'],df['X'],'-r') # Plot the profile of net rate of production (R) plt.sca(ax[1]) _ = plt.plot(df['R'],df['X'],'-k') _ = plt.xlabel('net rate of production, R [$g/cm^3/s$]') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # The libraries used to build the model, train it, and handle the dataset must be imported first import scipy import numpy as np import matplotlib.pyplot as plt import pandas as pd import itertools from pandas.plotting import scatter_matrix from sklearn import model_selection from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB, MultinomialNB from sklearn.svm import LinearSVC from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer # The dataset used in this problem must be loaded; the pandas library is used to read the CSV file #load data GPAM_Challenge dataset dataset = pd.read_csv('olist_order_reviews_dataset.csv') # The dataset contains many rows without comments, and it needs to be cleaned before being used to train the model; otherwise there will be "trash" rows that cannot help predict the product's rating, since the comment is blank. #take the blank comment rows out dataset = dataset.dropna(subset=['review_comment_message']) # Print information about the dataset, such as its shape, summary statistics, the first 10 rows, and the rating distribution (number of 1, 2, 3, 4 and 5 star reviews) print(dataset.shape) print(dataset.head(10)) print(dataset.describe()) print(dataset.groupby('review_score').size()) # A histogram is a good way to visualize the rating distribution, i.e. how many samples there are in each class (1, 2, 3, 4, 5). Since the class distribution is not uniform, the histogram suggests that accuracy is not a very good metric to rely on. #check data distribution dataset.hist() plt.show() # Extract the values of the dataset to an array array = dataset.values # Slice the comments column into a single array # split comments column X = array[:,4] # The rating column will be split out of the dataset array and cast to int, just to be sure #split rating column Y = array[:,2] Y = Y.astype(int) # The dataset has to be split into training and test data; the proportion used is 30% for the test set and 70% for the training set validation_size = 0.3 # A seed is needed when splitting the data; 5 was chosen arbitrarily #random seed to split the dataset seed = 5 # The text needs to be vectorized so it can be consumed by the models; to do so a vectorizer will be created and fit on the comments (X). A small toy illustration of what the vectorizer produces is shown below, before fitting it on the full dataset.
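# As a quick illustration of the feature space (toy sentences, not the review data): with `ngram_range=(1,2)` the learned vocabulary contains both single words and adjacent word pairs. This sketch just prints the vocabulary of a tiny hypothetical corpus so the features are easier to picture.
# +
from sklearn.feature_extraction.text import TfidfVectorizer

toy_comments = ["produto muito bom", "entrega muito atrasada"]  # hypothetical example comments
toy_vectorizer = TfidfVectorizer(ngram_range=(1, 2))
toy_matrix = toy_vectorizer.fit_transform(toy_comments)

print(sorted(toy_vectorizer.vocabulary_))  # unigrams and bigrams
print(toy_matrix.shape)                    # (2 documents, number of features)
# -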
# # Vectorizer to split text vectorizer = TfidfVectorizer(ngram_range=(1,2)) x_vectorized = vectorizer.fit_transform(X) # Now the dataset will be split into a training set and a validation set. # The vectorized dataset is used, along with the validation size (30%) and the seed (5) defined above #split the dataset between train and validation X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(x_vectorized, Y, test_size=validation_size, random_state=seed) # Just to check the size of each set, training and validation print("Training set has {} samples.".format(X_train.shape[0])) print("Test set has {} samples.".format(X_validation.shape[0])) # The models will be created, trained and tested to check which one has better metrics on the dataset #Linear Classifier linearClassificer = LinearSVC() linearClassificer.fit(X_train, Y_train) linearPrediction = linearClassificer.predict(X_validation) # Check the linear classifier accuracy (it is not the recommended metric; I tried f1-score but got some errors I could not solve) print("Accuracy") print("LinearSVC:",accuracy_score(Y_validation,linearPrediction)) #KNeighbors Classifier classifierKNN = KNeighborsClassifier(n_neighbors=2) classifierKNN.fit(X_train, Y_train) predictionKNN = classifierKNN.predict(X_validation) print("Accuracy KNN:", accuracy_score(Y_validation, predictionKNN)) #MultinomialNB Classifier multinomialClassifier = MultinomialNB() multinomialClassifier.fit(X_train, Y_train) predictionMUB = multinomialClassifier.predict(X_validation) print("MultinomialNB:",accuracy_score(Y_validation, predictionMUB)) # Since the highest accuracy comes from the linear classifier, it was chosen # A function to plot the classifier's confusion matrix needs to be built def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) if normalize: cm = np.around((cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]),decimals=2) print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # create confusion matrix with the best model confusionMatrixLSVC = confusion_matrix(Y_validation, linearPrediction) # The plotted confusion matrix can now be analysed # + classes = ['1', '2', '3', '4', '5'] plt.figure() plot_confusion_matrix(confusionMatrixLSVC, classes, normalize=False, title='Confusion Matrix - SVM') plt.show() # - # As the confusion matrix shows, the main diagonal contains the counts of correctly predicted values, while the off-diagonal entries are the values that were wrongly predicted # # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.3.0 # language: julia # name: julia-1.3 # --- # # Spiral Matrix # # Given the size, return a square matrix of numbers in spiral order.
# # The matrix should be filled with natural numbers, starting from 1 # in the top-left corner, increasing in an inward, clockwise spiral order, # like these examples: # # ###### Spiral matrix of size 3 # # ```text # 1 2 3 # 8 9 4 # 7 6 5 # ``` # # ###### Spiral matrix of size 4 # # ```text # 1 2 3 4 # 12 13 14 5 # 11 16 15 6 # 10 9 8 7 # ``` # # ## Source # # Reddit r/dailyprogrammer challenge #320 [Easy] Spiral Ascension. [https://www.reddit.com/r/dailyprogrammer/comments/6i60lr/20170619_challenge_320_easy_spiral_ascension/](https://www.reddit.com/r/dailyprogrammer/comments/6i60lr/20170619_challenge_320_easy_spiral_ascension/) # # ## Version compatibility # This exercise has been tested on Julia versions >=1.0. # # ## Submitting Incomplete Solutions # It's possible to submit an incomplete solution so you can see how others have completed the exercise. # ## Your solution # + # submit function spiral_matrix(n::Int) end # - # ## Test suite # + using Test # include("spiral-matrix.jl") @testset "Different valid values" begin @testset "Empty spiral" begin @test spiral_matrix(0) == Matrix{Int}(undef,0,0) end @testset "Trivial spiral" begin @test spiral_matrix(1) == reshape([1],(1,1)) end @testset "Spiral of size 2" begin @test spiral_matrix(2) == [1 2; 4 3] end @testset "Spiral of size 3" begin @test spiral_matrix(3) == [1 2 3; 8 9 4; 7 6 5] end @testset "Spiral of size 4" begin @test spiral_matrix(4) == [1 2 3 4; 12 13 14 5; 11 16 15 6; 10 9 8 7] end @testset "Spiral of size 5" begin @test spiral_matrix(5) == [1 2 3 4 5; 16 17 18 19 6; 15 24 25 20 7; 14 23 22 21 8; 13 12 11 10 9] end end # - # ## Prepare submission # To submit your exercise, you need to save your solution in a file called `spiral-matrix.jl` before using the CLI. # You can either create it manually or use the following functions, which will automatically write every notebook cell that starts with `# submit` to the file `spiral-matrix.jl`. # # + # using Pkg; Pkg.add("Exercism") # using Exercism # Exercism.create_submission("spiral-matrix") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # Libraries import re # # Load article # # Download the text from [**here**](https://drive.google.com/file/d/1PEUMaDaPye5pxlA-SZTsnZ4k7WSzdLLe/view?usp=sharing) and load it into python using the following code snippet. # Load lines into list with open("Article.txt", encoding='utf-8') as f: lines = f.readlines() lines+=list(lines) # list of lines lines_list = [] for line in lines: if len(line) > 1: lines_list.append(line) # # Tasks # ### Print the first 20 lines of article # + tags=[] lines_list[:20] # - # ## Print out every line from the file that... # ... that has 'q' for line in lines_list: if re.findall(r"q", line): print(line) # ... that starts with 'H' for line in lines_list: if re.findall(r"^H", line): print(line) # ... that has 'wh' for line in lines_list: if re.findall(r"wh", line): print(line) # ... that has an 'q' or a 'Q' for line in lines_list: if re.findall(r"q|Q", line): print(line) # ... that has a '*' in it for line in lines_list: if re.findall(r"\*", line): print(line) # ... that has a '*' (star) in it # + # # - # ... that starts with an 'T' or an 't' for line in lines_list: if re.findall(r"T|t", line): print(line) # ... that starts with number for line in lines_list: if re.findall(r"^\d", line): print(line) # ... 
that has both 'a' and 'e' and 'i' and 'o' and 'u' in it # + # # - # ... that has an 'a' and somewhere later an 'e' # + # # - # ... that does not have an 'i' for line in lines_list: if not re.findall(r"i", line): print(line) # ... that does not have an 'i' nor 'z' for line in lines_list: if not re.findall(r"i|z", line): print(line) # ... that has an 'x' but not 'y' for line in lines_list: if re.findall(r"(?!x)(y)", line): print(line) # ... that has at least 2 consecutive vowels (a, e, i, o, u) like in the word "bear" # + # # - # ... that has at least 3 vowels # + # # - # ... that has at least 30 characters # + # # - # ... has the same word appear twice in the same line # + # # - # ## Print all the words # Words with either 'Bar' or 'Baz' in them # + # # - # Words with either 'Whe' or 'The' in them # + # # - # Words containing a double character (e.g. 'oo') # + # # - # ## Cleanup string codes so they contain only numbers # * Remove slash and spaces # + codes = ['2373/ 8293', ' 8292342 / 8263', '12/903820 ', '8203184 / 02342 '] # # - # ## Switch order of the numbers in string # * Preserve slash / # * Remove spaces # + codes = ['2373/ 8293', ' 8292342 / 8263', '12/903820 ', '8203184 / 02342 '] # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %load_ext autoreload # %autoreload 2 import pdb # + import os import matplotlib from tqdm import tqdm from utils.config import opt from data.dataset import Dataset, TestDataset, inverse_normalize from model import FasterRCNNVGG16 from torch.autograd import Variable from torch.utils import data as data_ from trainer import FasterRCNNTrainer from utils import array_tool as at from utils.vis_tool import visdom_bbox from utils.eval_tool import eval_detection_voc # fix for ulimit # https://github.com/pytorch/pytorch/issues/973#issuecomment-346405667 import resource rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) resource.setrlimit(resource.RLIMIT_NOFILE, (20480, rlimit[1])) matplotlib.use('agg') import numpy as np import cupy as cp from model.utils.bbox_tools import bbox2loc, bbox_iou, loc2bbox from model.utils.nms import non_maximum_suppression from collections import namedtuple import time from torch.nn import functional as F from model.utils.creator_tool import AnchorTargetCreator, ProposalTargetCreator from torch import nn import torch as t from torch.autograd import Variable from utils import array_tool as at from utils.vis_tool import Visualizer from utils.config import opt # from torchnet.meter import ConfusionMeter, AverageValueMeter # + dataset = Dataset(opt) print('load data') dataloader = data_.DataLoader(dataset, \ batch_size=1, \ shuffle=True, \ pin_memory=True,\ num_workers=opt.num_workers) testset = TestDataset(opt) test_dataloader = data_.DataLoader(testset, batch_size=1, num_workers=opt.test_num_workers, shuffle=False, \ pin_memory=True ) # - faster_rcnn = FasterRCNNVGG16() print('model construct completed') trainer = FasterRCNNTrainer(faster_rcnn).cuda() best_map = 0 lr_ = opt.lr img, bbox_, label_, scale = next(iter(dataloader)) img.shape,bbox_.shape, label_.shape, scale.shape img, bbox, label = img.cuda().float(), bbox_.cuda(), label_.cuda() # + deletable=false editable=false run_control={"frozen": true} # trainer(img, bbox_, label_, scale) # - imgs, bboxes, labels, scale = img, bbox_, label_, scale # + n = bboxes.shape[0] if n != 1: raise ValueError('Currently 
only batch size 1 is supported.') _, _, H, W = imgs.shape img_size = (H, W) # - features = trainer.faster_rcnn.extractor(imgs) features.shape trainer.faster_rcnn.rpn rpn_locs, rpn_scores, rois, roi_indices, anchor = trainer.faster_rcnn.rpn(features, img_size, scale) rpn_locs.shape, rpn_scores.shape, rois.shape, roi_indices.shape, anchor.shape bbox = bboxes[0] label = labels[0] rpn_score = rpn_scores[0] rpn_loc = rpn_locs[0] roi = rois sample_roi, gt_roi_loc, gt_roi_label = trainer.proposal_target_creator( roi, at.tonumpy(bbox), # at = array_tools,tensor to numpy 用不着了,在pytorch0.4里 at.tonumpy(label), trainer.loc_normalize_mean, trainer.loc_normalize_std) sample_roi.shape,gt_roi_loc.shape, gt_roi_label.shape sample_roi_index = t.zeros(len(sample_roi)) roi_cls_loc, roi_score = trainer.faster_rcnn.head( features, sample_roi, sample_roi_index) roi_cls_loc.shape, roi_score.shape gt_rpn_loc, gt_rpn_label = trainer.anchor_target_creator( at.tonumpy(bbox), anchor, img_size) gt_rpn_loc.shape, gt_rpn_label.shape from trainer import _fast_rcnn_loc_loss gt_rpn_label = at.tovariable(gt_rpn_label).long() gt_rpn_loc = at.tovariable(gt_rpn_loc) # + run_control={"marked": true} rpn_loc_loss = _fast_rcnn_loc_loss(rpn_loc, gt_rpn_loc, gt_rpn_label.data, trainer.rpn_sigma) # + run_control={"marked": true} rpn_loc_loss # - rpn_score.shape,gt_rpn_label.shape rpn_cls_loss = F.cross_entropy(rpn_score, gt_rpn_label.cuda(), ignore_index=-1) _gt_rpn_label = gt_rpn_label[gt_rpn_label > -1] _rpn_score = at.tonumpy(rpn_score)[at.tonumpy(gt_rpn_label) > -1] # self.rpn_cm.add(at.totensor(_rpn_score, False), _gt_rpn_label.data.long()) _gt_rpn_label.shape,_rpn_score.shape # ------------------ ROI losses (fast rcnn loss) -------------------# n_sample = roi_cls_loc.shape[0] n_sample roi_cls_loc.shape # + run_control={"marked": false} roi_cls_loc = roi_cls_loc.view(n_sample, -1, 4) # - roi_cls_loc.shape gt_roi_label # + run_control={"marked": false} roi_loc = roi_cls_loc[t.arange(0, n_sample).long().cuda(), \ at.totensor(gt_roi_label).long()] # - roi_loc.shape # + run_control={"marked": false} gt_roi_label = at.tovariable(gt_roi_label).long() gt_roi_loc = at.tovariable(gt_roi_loc) # - # + roi_loc_loss = _fast_rcnn_loc_loss( roi_loc.contiguous(), gt_roi_loc, gt_roi_label.data, self.roi_sigma) roi_cls_loss = nn.CrossEntropyLoss()(roi_score, gt_roi_label.cuda()) # self.roi_cm.add(at.totensor(roi_score, False), gt_roi_label.data.long()) losses = [rpn_loc_loss, rpn_cls_loss, roi_loc_loss, roi_cls_loss] losses = losses + [sum(losses)] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import altair as alt import pandas as pd source = pd.DataFrame({ 'a': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I'], 'b': [28, 55, 43, 91, 81, 53, 19, 87, 52] }) chart = alt.Chart(source).mark_bar().encode( x='a', y='b' ) chart # - chart.save('chart_chrome.png') chart.save('chart_ff.png', webdriver='firefox') import altair as alt from vega_datasets import data source = data.stocks() chart = alt.Chart(source).mark_line().encode( x='date', y='price', color='symbol', # strokeDash='symbol', ).properties(background='white') chart.save("test.png", scale_factor=2.0, webdriver='firefox') alt.__version__ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # 
language: python # name: python3 # --- # ### Getters and Setters # So far we have seen how the `__get__` method is called when we assign an instance of a descriptors to a class attribute. # # But we can access that attribute either from the class itself, or the instance - as we saw in the last lecture, both accesses end up calling the `__get__` method. # But what changes are the arguments passed to the method. Let's explore this: # + from datetime import datetime class TimeUTC: def __get__(self, instance, owner_class): print(f'__get__ called, self={self}, instance={instance}, owner_class={owner_class}') return datetime.utcnow().isoformat() # + class Logger1: current_time = TimeUTC() class Logger2: current_time = TimeUTC() # - # Now let's access `current_time` from the class itself: Logger1.current_time # As you can see, the `instance` was `None` - this was because we called the descriptor from the `Logger1` class, not an instance of it. The `owner_class` tells us this descriptor instance is defined in the `Logger1` class. # # The same holds if we use `Logger2`: Logger2.current_time # But if we call the descriptor via an instance instead: l1 = Logger1() print(hex(id(l1))) l1.current_time # As you can see, `instance` is now the `l1` instance, and the owner class is still `Logger1`. # # The sme holds for instance of `Logger2`: l2 = Logger2() print(hex(id(l2))) l2.current_time # This means that we can differentiate, inside our `__get__` method whether the descriptor was accessed via the class or via an instance. # # Typically when a descriptor is access from the class we return the descriptor instance, and when accessed from the instance we return the instance specific value we want: # + from datetime import datetime class TimeUTC: def __get__(self, instance, owner_class): if instance is None: # called from class return self else: # called from instance return datetime.utcnow().isoformat() # - class Logger: current_time = TimeUTC() Logger.current_time l = Logger() l.current_time # This is consistent with the way properties work: class Logger: @property def current_time(self): return datetime.utcnow().isoformat() Logger.current_time # This returned the property instance, whereas calling it from an instance: l = Logger() l.current_time # Now, there is one subtle point we have to understand when we create multiple instances of a class that uses a descriptor as a class attribute. # Since the descriptor is assigned to an **class attribute**, all instances of the class will **share** the same descriptor instance! # + class TimeUTC: def __get__(self, instance, owner_class): if instance is None: # called from class return self else: # called from instance print(f'__get__ called in {self}') return datetime.utcnow().isoformat() class Logger: current_time = TimeUTC() # - l1 = Logger() l2 = Logger() # But look at the `current_time` for each of those instances l1.current_time, l2.current_time # As you can see the **same** instance of `TimeUTC` was used. 
# This does not matter in this particular example, since we just return the current time, but watch what happens if our property relies on some kind of state in the descriptor: class Countdown: def __init__(self, start): self.start = start + 1 def __get__(self, instance, owner): if instance is None: return self else: self.start -= 1 return self.start class Rocket: countdown = Countdown(10) # Now let's say we want to launch two rockets: rocket1 = Rocket() rocket2 = Rocket() # And let's start the countdown for each one: rocket1.countdown rocket2.countdown rocket1.countdown # As you can see, the current countdown value is shared by both `rocket1` and `rocket2` instances of `Rocket` - this is because the `Countdown` instance is a class attribute of `Rocket`. So we have to be careful how we deal with instance level state. # The `__set__` method works in a similar way to `__get__` but it is used when we assign a value to the class attribute. class IntegerValue: def __set__(self, instance, value): print(f'__set__ called, instance={instance}, value={value}') def __get__(self, instance, owner_class): if instance is None: print('__get__ called from class') else: print(f'__get__ called, instance={instance}, owner_class={owner_class}') class Point2D: x = IntegerValue() y = IntegerValue() Point2D.x p = Point2D() p.x p.x = 100 # So, where should we store the values `x` and `y`? # # Many "tutorials" I see on the web naively store the value in the descriptor itself: class IntegerValue: def __set__(self, instance, value): self._value = int(value) def __get__(self, instance, owner_class): if instance is None: return self else: return self._value class Point2D: x = IntegerValue() y = IntegerValue() # At first blush, this seems to work just fine: p1 = Point2D() p1.x = 1.1 p1.y = 2.2 p1.x, p1.y # But, remember the point I was making about the instance of the descriptor (`IntegeraValue` in this case) being shared by all instances of the class (`Point2D` in this case)? p2 = Point2D() p2.x, p2.y # And of course if we set the value: p2.x = 100.9 p2.x, p1.x # So, obviously using the descriptor instance dictionary for storage at the instance level is probably not going to work in most cases! # And this is the reason both the `__get__` and `__set__` methods need to know which instance we are dealing with. 
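# As a closing sketch (one common approach, not necessarily the one this lecture series uses next): because `__get__` and `__set__` receive the instance, the descriptor can store each value on the instance itself, keyed by the attribute name captured with `__set_name__` (Python 3.6+). The class names below mirror the example above, but this is illustrative code, not part of the original lecture.
# +
class IntegerValue:
    def __set_name__(self, owner, name):
        # remember which class attribute this descriptor instance was assigned to
        self.name = name

    def __set__(self, instance, value):
        # store the value on the instance, not on the shared descriptor
        instance.__dict__[self.name] = int(value)

    def __get__(self, instance, owner_class):
        if instance is None:
            return self
        return instance.__dict__.get(self.name)


class Point2D:
    x = IntegerValue()
    y = IntegerValue()


p1, p2 = Point2D(), Point2D()
p1.x, p1.y = 1.1, 2.2
p2.x, p2.y = 100.9, 200.9
p1.x, p1.y, p2.x, p2.y  # (1, 2, 100, 200) -- each instance now keeps its own values
# -
# Because `IntegerValue` is a data descriptor (it defines `__set__`), attribute lookup still goes through `__get__` even though the value also lives in the instance `__dict__`.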
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # Insert Multiple Data # + from pymongo import MongoClient client = MongoClient( "mongodb.fastcamp.us", ) # + db = client["dobestan"] collection = db["zigbang"] collection.remove() collection = db["zigbang"] # - assert collection.count() is 0 import requests import json response = requests.get("https://api.zigbang.com/v1/items?detail=true&item_ids=3377348&item_ids=3661969&item_ids=3540767&item_ids=3648625&item_ids=3592832&item_ids=3628879&item_ids=3561673&item_ids=3609638&item_ids=3673409&item_ids=3592285&item_ids=3604128&item_ids=3600327&item_ids=3673218&item_ids=3672485&item_ids=3470573&item_ids=3343690&item_ids=3307263&item_ids=3656804&item_ids=2898516&item_ids=3673576&item_ids=3551788&item_ids=3675742&item_ids=3430678&item_ids=3637311&item_ids=3341551&item_ids=3598153&item_ids=3372490&item_ids=3537735&item_ids=3528364&item_ids=3598268&item_ids=3485646&item_ids=3533615&item_ids=3552246&item_ids=3577436&item_ids=3636629&item_ids=3500969&item_ids=3521848&item_ids=3502500") zigbang_dict = json.loads(response.text) zigbang_dict['items'] collection.insert_many(zigbang_dict['items']) collection.count() collection.find_one() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline # Importing standard Qiskit libraries from qiskit import QuantumCircuit, execute, Aer, IBMQ, BasicAer from qiskit.compiler import transpile, assemble from qiskit.tools.jupyter import * from qiskit.visualization import * from ibm_quantum_widgets import * from qiskit_textbook.tools import random_state, array_to_latex from qiskit.compiler import transpile, assemble from qiskit.tools.jupyter import * from qiskit.visualization import * from qiskit.quantum_info import Statevector, random_statevector from qiskit.visualization import plot_state_qsphere, state_visualization, plot_bloch_multivector from qiskit.visualization import plot_state_city, plot_state_paulivec, plot_state_hinton # Loading your IBM Q account(s) provider = IBMQ.load_account() provider # - from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit from numpy import pi from math import pi, atan, sqrt, sin, cos, acos #from coldquanta.qiskit_tools.coldquanta_sim_backend import ColdQuantaSimBackend # + # #!pip install coldquanta # - plot_bloch_vector([1,0,0]) plot_bloch_vector([0,0,1]) # + # hadamard qreg_q = QuantumRegister(1, 'q') creg_c = ClassicalRegister(1, 'c') qc = QuantumCircuit(qreg_q, creg_c) qc.h(0) sv = Statevector.from_instruction(qc).data plot_state_qsphere(sv) # + quantum_state = sv plot_state_city(quantum_state) # - plot_state_hinton(quantum_state) plot_bloch_multivector(quantum_state) # + qreg_q = QuantumRegister(1, 'q') creg_c = ClassicalRegister(1, 'c') qc = QuantumCircuit(qreg_q, creg_c) qc.reset(qreg_q[0]) qc.ry(2*pi/3, qreg_q[0]) sv = Statevector.from_instruction(qc).data plot_state_qsphere(sv) # - plot_state_city(sv) plot_state_hinton(sv) plot_bloch_multivector(sv) # + import numpy as np n = 6 qc = QuantumCircuit(n, n) for i in range(n-1): qc.h(i) qc.rz(2*np.pi/n, i) sv = Statevector.from_instruction(qc).data plot_state_qsphere(sv) # + n = 6 qc = QuantumCircuit(n, n) for i in range(n-1): qc.h(i) qc.rz(3*np.pi/n, i) sv = 
Statevector.from_instruction(qc).data plot_state_qsphere(sv) # - from qiskit.visualization import plot_bloch_vector, plot_bloch_multivector from qiskit.visualization import plot_bloch_multivector(sv) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # 00. Capture and Mimic First Run # # Procedure: # # Settings > Apps > Meijer. # # 1. Force Stop. # 2. Clear Data. # # Start from complete scratch for mitmproxy. # # 1. Launch mitmproxy, if not already running. # 2. Launch Meijer app. # There is a lot of 'chatter', requests blocked by pihole don't need replicated. # # Right now just working on replicating requests, not comprehending them. import requests # #### Request 1 request=dict() request["url"] = "https://meijer.122.2o7.net/id" request["headers"] = { 'user-agent': 'Dalvik/2.1.0 (Linux; U; Android 5.0.2; HTCONE Build/LRX22G)' } r = requests.get(**request) assert r.status_code==200 r.json() # An ID used for? # #### Request 2 request=dict() request["url"] = "https://static.meijer.com/mobileassets/info/mma_config.json" request["headers"] = { 'Platform': 'Android', 'Version': '5.10.0', 'Build': '51000000', 'User-Agent': 'okhttp/3.8.0' } r = requests.get(**request) assert r.status_code==200 r.json() # Determine if the user needs to update. # #### Request 3 # # The ```Authorization``` header is a base64 encoded string generated from: # # ```./res/values/strings.xml: drAqas76Re7RekeBanaMaNEMah7paDE5``` import base64 # Working Backwards. AUTH="" base64.decodebytes(AUTH.encode("UTF-8")) # Working Forward. account_services_secret="drAqas76Re7RekeBanaMaNEMah7paDE5" AUTH_=base64.encodebytes("mma:{}".format(account_services_secret).encode("UTF-8")).decode("UTF-8").strip() assert AUTH==AUTH_ request=dict() request["url"] = "https://login.meijer.com/as/token.oauth2" request["headers"] = { 'Authorization': 'Basic {}'.format(AUTH), 'Platform': 'Android', 'Version': '5.10.0', 'Build': '51000000', 'User-Agent': 'okhttp/3.8.0' } request["params"] = { 'grant_type': 'client_credentials', 'scope': 'openid', } r = requests.post(**request) assert r.status_code==200 r.json().keys() # Determine if the user needs to update. access_token = r.json()["access_token"] # We'll need this later. # #### Request 4 # # Get ads and dynamic content for the Meijer App. 
# # Relevant source file: ```./smali/com/meijer/mobile/serverapi/rxjava/DigitalMperksMMAAPIObservable.smali`` request=dict() request["url"] = "https://mperksservices.meijer.com/dgtlmPerksMMA/api/cms/home/content" request["headers"] = { 'Accept': 'application/vnd.meijer.digitalmperks.homecontent-v1.0+json', 'Authorization': 'Bearer {}'.format(access_token), 'Content-Type': 'application/vnd.meijer.digitalmperks.homecontent-v1.0+json', 'Platform': 'Android', 'Version': '5.10.0', 'Build': '51000000', 'User-Agent': 'okhttp/3.8.0' } r = requests.get(**request) r r.json() import IPython.display IPython.display.Image(url=r.json()["specialOffersBannerURL"]) IPython.display.Image(url=r.json()["topBanner"]["bannerURL"]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="mZWOhSLNRInc" # # Timesketch and Jupyter # # This is a small notebook that is built to demonstrate how to interact with Timesketch from a jupyter notebook in order to do some additional exploration of the data. # # Jupyter can greatly complement investigations by providing the analyst with access to the powers of using python to manipulate the data stored in Timeskech. Additionally it provides developers with the ability to do research on the data in order to speed up developments of analyzers, aggregators and graphing. The purpose of this notebook is simply to briefly introduce the powers of jupyter notebooks to analysts and developers, with the hope of inspiring more to take advantage of this powerful platform. It is also possible to use [colab](https://colab.research.google.com) to do these explorations, and a colab copy of this notebook can be found [here](https://colab.research.google.com/github/google/timesketch/blob/master/notebooks/colab-timesketch-demo.ipynb). # # Each code cell (denoted by the [] and grey color) can be run simply by hitting "shift + enter" inside it. In order to run the notebook you'll need to install the jupyter notebook on your machine, and start it (with a python3 kernel). Then you can connect to your local runtime. It is also possible to use [mybinder](https://mybinder.org/v2/gh/google/timesketch/master?urlpath=notebooks) to start up a docker instance with the jupyter notebook. Remember, you can easily add new code cells, or modify the code that is already there to experiment. # # ## README # # If you want to have your own copy of the notebook to make some changes or do some other experimentation you can simply select "File / Save as" button. # # If you want to connect the notebook to your own Timesketch instance (that is if it is not publicly reachable) you simply run the jupyter notebook binary on a machine that can reach your instance, and configure the SERVER/USER/PASSWORD parameters below to match yours. # # Once you have your local runtime setup you should be able to reach your local Timesketch instance. # # + [markdown] colab_type="text" id="lWnyv25sRUyk" # # Import Libraries # # We need to start by importing some libraries that we'll use in this notebook. # + colab={} colab_type="code" id="nh4pwmG-RI2w" import altair as alt # For graphing. import numpy as np # Never know when this will come in handy. import pandas as pd # We will be using pandas quite heavily. 
from timesketch_api_client import client # + [markdown] colab_type="text" id="uMYsDkCgYK5y" # ## Connect to TS # + [markdown] colab_type="text" id="9IUzNc8yRanl" # And now we can start creating a timesketch client. The client is the object used to connect to the TS server and provides the API to interact with it. # # This will connect to the public demo of timesketch, you may want to change these parameters to connect to your own TS instance. # + cellView="form" colab={} colab_type="code" id="1QQFoUFWRP4N" SERVER = 'https://demo.timesketch.org' USER = 'demo' PASSWORD = '' ts_client = client.TimesketchApi(SERVER, USER, PASSWORD) # + [markdown] colab_type="text" id="aKC_0-qlfzYA" # If you are running a Jupyter notebook and not JupyterLab you'll need to uncomment and run the cell below, otherwise there is no action needed. # + # This works in a Jupyter notebook settings. Uncomment if you are using a jupyter notebook. # (you'll need to have installed vega) #alt.renderers.enable('notebook') # + [markdown] colab_type="text" id="pX1B1gccRv9L" # ### Let's Explore # And now we can start to explore. The first thing is to get all the sketches that are available. Most of the operations you want to do with TS are available in the sketch API. # + colab={} colab_type="code" id="QN2r9x3uRvRG" sketches = ts_client.list_sketches() # + [markdown] colab_type="text" id="r6Qmi4v_SGX1" # Now that we've got a lis of all available sketches, let's print out the names of the sketches as well as the index into the list, so that we can more easily choose a sketch that interests us. # + colab={} colab_type="code" id="Wn0zDL6SRuYY" for i, sketch in enumerate(sketches): print('[{0:d}] {1:s}'.format(i, sketch.name)) # + [markdown] colab_type="text" id="c2ykfp9unWUK" # Another way is to create a dictionary where the keys are the names of the sketchces and values are the sketch objects. # + colab={} colab_type="code" id="GzQjPxiem7di" sketch_dict = dict((x.name, x) for x in sketches) # + colab={} colab_type="code" id="J8Rvpi-XnEZ8" sketch_dict # + [markdown] colab_type="text" id="NK0EqQ-1Siaa" # Let's now take a closer look at some of the data we've got in the "Greendale" investigation. # + colab={} colab_type="code" id="lT6Oh0GRSFg1" gd_sketch = sketch_dict.get('The Greendale incident - 2019', sketches[0]) # + [markdown] colab_type="text" id="KM5Gum5kWfim" # Now that we've connected to a sketch we can do all sorts of things. # # Try doing: `gd_sketch.` # # In colab you can use TAB completion to get a list of all attributes of the object you are working with. See a function you may want to call? Try calling it with `gd_sketch.function_name?` and hit enter.. let's look at an example: # # # + colab={} colab_type="code" id="e7oEZ80sYzc7" # gd_sketch.explore? # + [markdown] colab_type="text" id="8h2S3dXBY6c0" # This way you'll get a list of all the parameters you may want or need to use. You can also use tab completion as soon as you type, `gd_sketch.e` will give you all options that start with an `e`, etc. # # You can also type `gd_sketch.explore()` and get a pop-up with a list of what parameters this function provides. # # But for now, let's look at what views are available to use here: # + colab={} colab_type="code" id="AYgCmg_yZOO7" views = gd_sketch.list_views() for index, view in enumerate(views): print('[{0:d}] {1:s}'.format(index, view.name)) # + [markdown] colab_type="text" id="Tt5SWKzgZZe5" # You can then start to query the API to get back results from these views. Let's try one of them... 
# # Word of caution, try to limit your search so that you don't get too many results back. The API will happily let you get all the results back as you choose, but the more records you get back the longer the API call will take (10k events per API call). # + colab={} colab_type="code" id="yY8jk_UzSpCE" # You can change this number if you would like to test out another view. # The way the code works is that it checks first of you set the "view_text", and uses that to pick a view, otherwise the number is used. view_number = 1 view_text = '[phishy_domains] Phishy Domains' if view_text: for index, view in enumerate(views): if view.name == view_text: view_number = index break print('Fetching data from : {0:s}'.format(views[view_number].name)) print( ' Query used : {0:s}'.format(views[view_number].query_string if views[view_number].query_string else views[view_number].query_dsl)) # + [markdown] colab_type="text" id="zLgLBXXMlDKa" # If you want to issue this query, then you can run the cell below, otherwise you can change the view_number to try another one. # + colab={} colab_type="code" id="MmlF6oYcj8wh" greendale_frame = gd_sketch.explore(view=views[view_number], as_pandas=True) # + [markdown] colab_type="text" id="jEki5_BmZpKu" # Did you notice the "`as_pandas=True`" parameter that got passed to the "`explore`" function? That means that the data that we'll get back is a pandas DataFrame that we can now start exploring. # # Let's start with seeing how many entries we got back. # + colab={} colab_type="code" id="1_fjRL4XZ-XW" greendale_frame.shape # + [markdown] colab_type="text" id="XWjrJnivaSo9" # This tells us that the view returned back 670 events with 12 columns. Let's explore the first few entries, just so that we can wrap our head around what we got back. # + colab={} colab_type="code" id="ymR_NtseaRrO" greendale_frame.head(5) # + [markdown] colab_type="text" id="bdPUgvNtl82r" # Let's look at what columns we got back... and maybe create a slice that contains fewer columns. # + colab={} colab_type="code" id="zuu7VFCAmB9e" greendale_frame.columns # + colab={} colab_type="code" id="Dqv6gXlKmEUW" greendale_slice = greendale_frame[['datetime', 'timestamp_desc', 'tag', 'message', 'label']] greendale_slice.head(4) # + [markdown] colab_type="text" id="D-5xReYEHxKc" # Since this is a result from the analyzers we have few extra fields we can pull in. # # When running `gd_sketch.explore?` did you notice the field called `return_fields`: # # ``` # return_fields: List of fields that should be included in the # response. # ``` # # We can use that to specify what fields we would like to get back. Let's add few more fields (you can see what fields are available in the UI) # # # + colab={} colab_type="code" id="EMvcyumPH4eW" greendale_frame = gd_sketch.explore(view=views[view_number], return_fields='datetime,message,source_short,tag,timestamp_desc,url,domain,human_readable', as_pandas=True) # + [markdown] colab_type="text" id="_mvDnsFfIful" # Let's briefly look at these events. # + colab={} colab_type="code" id="ip3vimOaIhhY" greendale_slice = greendale_frame[['datetime', 'timestamp_desc', 'tag', 'human_readable', 'url', 'domain']] greendale_slice.head(5) # + [markdown] colab_type="text" id="PKioTKqoI85E" # OK,.... since this is a phishy domain analyzer, and all the results we got back are essentially from that analyzer, let's look at few things. First of all let's look at the tags tha are available. 
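# A brief aside on the technique used in the next cell (shown here with hypothetical toy data, not the sketch's events): the `tag` column holds a *list* of tags per event, so it is first joined into a single string with `.str.join`, which makes simple `.str.contains` filtering possible.
# +
import pandas as pd

toy_events = pd.DataFrame({
    'domain': ['a.com', 'b.xyz'],  # made-up domains for illustration
    'tag': [['phishy-domain', 'whitelisted-domain'], ['phishy-domain']],
})
toy_events['tag_string'] = toy_events.tag.str.join('|')
toy_events[~toy_events.tag_string.str.contains('whitelisted-domain')]
# -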
# + colab={} colab_type="code" id="MFMRBxtRJDcK" greendale_frame['tag_string'] = greendale_frame.tag.str.join('|') greendale_frame.tag_string.unique() # + [markdown] colab_type="text" id="KP_joLR7JVJ8" # OK... so we've got some that are part of the whitelisted-domain... let's look at those the domains that are marked as "phishy" yet excluding those that are whitelisted. # + colab={} colab_type="code" id="h9l165w_JdtT" greendale_frame[~greendale_frame.tag_string.str.contains('whitelisted-domain')].domain.value_counts() # + [markdown] colab_type="text" id="VADR_gpAJzMz" # OK... now we get to see all the domains that the domain analyzer considered to be potentially "phishy"... is there a domain that stands out??? what about that grendale one? # + colab={} colab_type="code" id="-1PBtCtjJ5Ag" greendale_slice[greendale_slice.domain == 'grendale.xyz'] # + [markdown] colab_type="text" id="4PbDcNMzJ-8S" # OK... this seems odd.. let's look at few things, a the `human_readable` string as well as the URL... # - greendale_slice[greendale_slice.domain == 'grendale.xyz'] # + colab={} colab_type="code" id="vxnMcThfKEOs" grendale = greendale_slice[greendale_slice.domain == 'grendale.xyz'] string_set = set() for string_list in grendale.human_readable: new_list = [x for x in string_list if 'phishy_domains' in x] _ = list(map(string_set.add, new_list)) for entry in string_set: print('Human readable string is: {0:s}'.format(entry)) print('') print('Counts for URL connections to the grendale domain:') grendale_count = grendale.url.value_counts() for index in grendale_count.index: print('[{0:d}] {1:s}'.format(grendale_count[index], index)) # + [markdown] colab_type="text" id="1QH7LFksLuLd" # We can start doing a lot more now if we want to... let's look at when these things occurred... # + colab={} colab_type="code" id="-EShYajvL1RE" grendale_array = grendale.url.unique() greendale_slice[greendale_slice.url.isin(grendale_array)] # + [markdown] colab_type="text" id="DrvPXI8zMETu" # OK... we can then start to look at surrounding events.... let's look at one date in particular... "2015-08-29 12:21:06" # + colab={} colab_type="code" id="kwJrIsErMNGv" query_dsl = """ { "query": { "bool": { "filter": { "bool": { "should": [ { "range": { "datetime": { "gte": "2015-08-29T12:20:06", "lte": "2015-08-29T12:22:06" } } } ] } }, "must": [ { "query_string": { "query": "*" } } ] } }, "size": 10000, "sort": { "datetime": "asc" } } """ data = gd_sketch.explore(query_dsl=query_dsl, return_fields='message,human_readable,datetime,timestamp_desc,source_short,data_type,tags,url,domain', as_pandas=True) # + colab={} colab_type="code" id="6NSD4izoNExQ" data[['datetime', 'message', 'human_readable', 'url']].head(4) # + [markdown] colab_type="text" id="gEQ9yZbTNzjJ" # Let's find the grendale and just look at events two seconds before/after # + colab={} colab_type="code" id="wA9lQ1JANdGg" data[(data.datetime > '2015-08-29 12:21:04') & (data.datetime < '2015-08-29 12:21:08')][['datetime', 'message', 'timestamp_desc']] # - # ## Let's look at aggregation # # Timesketch also has aggregation capabilities that we can call from the client. Let's take a quick look. # # Start by checking out whether there are any stored aggregations that we can just take a look at. # # You can also store your own aggregations using the `.save()` function on the aggregation object. However we are not going to do that in this colab. gd_sketch.list_aggregations() # OK, so there are some aggregations stored. Let's just pick one of those to take a closer look at. 
aggregation = gd_sketch.list_aggregations()[0] # Now we've got an aggregation object that we can take a closer look at. aggregation.name aggregation.description # OK, so from the name, we can determine that this has to do with top 10 visited domains. We can also look at all of the stored aggregations pd.DataFrame([{'name': x.name, 'description': x.description} for x in gd_sketch.list_aggregations()]) # Let's look at the aggregation visually, both as a table and a chart. aggregation.table aggregation.chart # We can also take a look at what aggregators can be used, if we want to run our own custom aggregator. gd_sketch.list_available_aggregators() # Now we can see that there are at least the "field_bucket" and "query_bucket" aggregators that we can look at. The `field_bucket` one is a terms bucket aggregation, which means we can take any field in the dataset and aggregate on that. # # So if we want to for instance see the top 20 domains that were visited we can just ask for an aggregation of the field `domain` and limit it to 20 records (which will be the top 20). Let's do that: aggregator = gd_sketch.run_aggregator( aggregator_name='field_bucket', aggregator_parameters={'field': 'domain', 'limit': 20, 'supported_charts': 'barchart'}) # Now we've got an aggregation object that we can take a closer look at... let's look at the data it stored. What we were trying to get out was the top 20 domains that were visited. aggregator.table # Or we can look at this visually... as a chart aggregator.chart # We can also do something a bit more complex. The other aggregator, the `query_bucket` works in a similar way, except you can filter the results first. We want to aggregate all the domains that have been tagged with the phishy domain tag. tag_aggregator = gd_sketch.run_aggregator( aggregator_name='query_bucket', aggregator_parameters={ 'field': 'domain', 'query_string': 'tag:"phishy-domain"', 'supported_charts': 'barchart', } ) # Let's look at the results. tag_aggregator.table # We can also look at all the tags in the timeline. What tags have been applied and how frequent are they. gd_sketch.run_aggregator( aggregator_name='field_bucket', aggregator_parameters={ 'field': 'tag', 'limit': 10, } ).table # And then to see what are the most frequent applications executed on the machine. # # Since not all of the execution events have the same fields in them we'll have to create few tables here... let's start with looking at what data types are there. gd_sketch.run_aggregator( aggregator_name='query_bucket', aggregator_parameters={ 'field': 'data_type', 'query_string': 'tag:"application_execution"', 'supported_charts': 'barchart', } ).table # And then we can do a summary for each one. gd_sketch.run_aggregator( aggregator_name='query_bucket', aggregator_parameters={ 'field': 'path', 'query_string': 'tag:"application_execution"', 'supported_charts': 'barchart', } ).table gd_sketch.run_aggregator( aggregator_name='query_bucket', aggregator_parameters={ 'field': 'link_target', 'query_string': 'tag:"application_execution"', 'supported_charts': 'barchart', } ).table # + [markdown] colab_type="text" id="0tA4pHRLOj5g" # ## Let's look at logins... # # Let's do a search to look at login entries... 
# + colab={} colab_type="code" id="vbsb58imPVCQ" login_data = gd_sketch.explore( 'data_type:"windows:evtx:record" AND event_identifier:4624', return_fields='datetime,timestamp_desc,human_readable,message,tag,event_identifier,computer_name,record_number,recovered,strings,username', as_pandas=True ) # + [markdown] colab_type="text" id="YydMkgThab09" # This will produce quite a bit of events... let's look at how many. # + colab={} colab_type="code" id="-PxPYIA5Pvks" login_data.shape # + [markdown] colab_type="text" id="ZsM5me3KQwp4" # Let's look at usernames.... # + colab={} colab_type="code" id="kfuzWPfJQynH" login_data.username.value_counts() # + [markdown] colab_type="text" id="ccCwN92sQ5qb" # The login analyzer in the demo site wasn't checked in, and therefore didn't extract all those usernames. Let's do this manually for logon entries. # + colab={} colab_type="code" id="cg1xPWKXa9QY" login_data['account_name'] = login_data.message.str.extract(r'Account Name:.+Account Name:\\t\\t([^\\]+)\\n', expand=False) login_data['account_domain'] = login_data.message.str.extract(r'Account Domain:.+Account Domain:\\t\\t([^\\]+)\\n', expand=False) login_data['process_name'] = login_data.message.str.extract(r'Process Name:.+Process Name:\\t\\t([^\\]+)\\n', expand=False) login_data['date'] = pd.to_datetime(login_data.datetime) # + [markdown] colab_type="text" id="5ApoSLqLfXbX" # What accounts have logged in: # + colab={} colab_type="code" id="tKF9UR3mbSC-" login_data.account_name.value_counts() # + [markdown] colab_type="text" id="9fy6EAURSpvK" # Let's look at all the computers in there... # + colab={} colab_type="code" id="DFU56oKaSUMC" login_data.computer_name.value_counts() # + [markdown] colab_type="text" id="8IApcGWwfaMS" # Let's graph.... and you can then interact with the graph... try zomming in, etc. # # First we'll define a graph function that we can then call with parameters... # + colab={} colab_type="code" id="HwU4K4MnaYdt" def GraphLogins(data_frame, machine_name=None): if machine_name: data_slice = data_frame[data_frame.computer_name == machine_name] title = 'Accounts Logged In - {0:s}'.format(machine_name) else: data_slice = data_frame title = 'Accounts Logged In' data_grouped = data_slice[['account_name', 'date']].groupby('account_name', as_index=False).count() data_grouped['count'] = data_grouped.date del data_grouped['date'] return alt.Chart(data_grouped, width=400).mark_bar().encode( x='account_name', y='count', tooltip=['account_name', 'count'] ).properties( title=title ).interactive() # + [markdown] colab_type="text" id="9stCil8TgXhq" # Start by graphing all machines # + colab={} colab_type="code" id="T-oUET5AgYyW" GraphLogins(login_data) # + [markdown] colab_type="text" id="drEG2TTSncoS" # Or we can look at this for a particular machine: # + colab={} colab_type="code" id="SP1vf_xBUr2a" GraphLogins(login_data, 'Student-PC1.internal.greendale.edu') # + [markdown] colab_type="text" id="2axKKhp7Unfe" # Or we can look at this as a scatter plot... # # First we'll define a function that munches the data for us. This function will essentially graph all logins in a day with a scatter plot, using colors to denote the count value. # # **This graph will be very interactive... 
try selecting a time period by clicking with the mouse on the upper graph and drawing a selection.** # + colab={} colab_type="code" id="D9DiG03hazwY" login_data['day'] = login_data['date'].dt.strftime('%Y-%m-%d') def GraphScatterLogin(data_frame, machine_name=''): if machine_name: data_slice = data_frame[data_frame.computer_name == machine_name] title = 'Accounts Logged In - {0:s}'.format(machine_name) else: data_slice = data_frame title = 'Accounts Logged In' login_grouped = data_slice[['day', 'computer_name', 'account_name', 'message']].groupby(['day', 'computer_name', 'account_name'], as_index=False).count() login_grouped['count'] = login_grouped.message del login_grouped['message'] brush = alt.selection_interval(encodings=['x']) click = alt.selection_multi(encodings=['color']) color = alt.Color('count:Q') chart1 = alt.Chart(login_grouped).mark_point().encode( x='day', y='account_name', color=alt.condition(brush, color, alt.value('lightgray')), ).properties( title=title, width=600 ).add_selection( brush ).transform_filter( click ) chart2 = alt.Chart(login_grouped).mark_bar().encode( x='count', y='account_name', color=alt.condition(brush, color, alt.value('lightgray')), tooltip=['count'], ).transform_filter( brush ).properties( width=600 ).add_selection( click ) return chart1 & chart2 # + [markdown] colab_type="text" id="Z4s-lEHxhQXH" # OK, let's start by graphing for all logins... # + colab={} colab_type="code" id="PuaJmcJMhShS" GraphScatterLogin(login_data) # + [markdown] colab_type="text" id="dpLDsdSGhT1r" # And now just for the Student-PC1 # + colab={} colab_type="code" id="2XaBqZqRVIoL" GraphScatterLogin(login_data, 'Student-PC1.internal.greendale.edu') # + [markdown] colab_type="text" id="z0f-qAxyhYa4" # And now it is your time to shine, experiment with python pandas, the graphing library and other data science techniques. # + colab={} colab_type="code" id="HejGxei3hfnM" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Apply Emotion Tracker # Nov 14th 2019 # # Testing that the emotion tracker works for utterances in corpus. 
# + # import required modules and set up environment import os # replace file path below with your own local convokit os.chdir('/Users/marianneaubin/Documents/Classes/CS6742/cs6742-fork') import convokit from convokit import Corpus, Parser, EmoTracker, Transformer import nltk # - # load corpus corpus = convokit.Corpus(filename='../datasets/democrats-filtered-labelled-small') corpus.print_summary_stats() et = EmoTracker(); corpus = et.transform(corpus) categories = ["emotional", "sadness", "negative_emotion", "shame", "violence", "rage", "pain", "anger", "disgust", "hate", "love", "politics"] counter = 1 emotional_dist_dict = {"emotional": 0, "sadness": 0, "negative_emotion": 0, "shame": 0, "violence":0, "rage":0, "pain":0, "anger":0, "disgust":0, "hate":0, "love":0, "politics":0} for conv_id in corpus.conversations: conv = corpus.get_conversation(conv_id) for utt in conv.iter_utterances(): if utt.meta["analysis"] != None: for cat in categories: if (utt.meta["analysis"][cat] != 0.0): emotional_dist_dict[cat] = emotional_dist_dict[cat] + 1 counter = counter + 1 print(emotional_dist_dict) utter_ids = corpus.get_utterance_ids() utter_ids[7000] # + pol_word_counter = 0 total_word_counter = 0 counter = 0 for x in range(0,3237456): utt = corpus.get_utterance(utter_ids[x]) pol_word_counter = pol_word_counter + utt.meta['num_pol_refs'] total_word_counter = total_word_counter + utt.meta['num_pol_refs_incidence'] counter = counter + 1 if counter % 100000 == 0: print(counter, " completed") print("total # of words:", pol_word_counter) print(total_word_counter) # + counter = 0 pol_words = {} for x in range(0, 3237456): utt = corpus.get_utterance(utter_ids[x]) utt_pol_words = utt.meta['pol_words'] for y in utt_pol_words: if (y not in pol_words.keys()): pol_words[y] = 1 else: pol_words[y] = pol_words[y] + 1 counter = counter + 1 if counter % 100000 == 0: print(counter, " completed") print(pol_words) # + freqs = [] large_pol_words = [] for x in pol_words.keys(): if pol_words[x] > 5000: freqs.append(pol_words[x]) large_pol_words.append(x) freqs, large_pol_words = (list(x) for x in zip(*sorted(zip(freqs, large_pol_words)))) freqs.reverse() large_pol_words.reverse() print(freqs) # + import matplotlib.pyplot as plt import numpy as np import matplotlib.pyplot as plt objects = large_pol_words y_pos = np.arange(len(objects)) plt.bar(y_pos, freqs, align='center', alpha=0.5) plt.xticks(y_pos, objects, rotation='vertical') plt.ylabel('Frequency') plt.title('Most commonly used political words in Corpus') plt.show() ## ISSUE: right now, this only works for single words. ## will need to change the transformer so that it can also ## compute multiple words e.g. 
"bill of rights" # + ## read in csv of relevant utts for sandy hook import csv #with open('../sandyhook_utterance_ids.csv', 'r') as f: with open('../auroratheater_utterance_ids.csv', 'r') as f: reader = csv.reader(f) sh_list = list(reader) # + ## recompute political freqs pol_words_sh = {} for x in sh_list: utt = corpus.get_utterance(x[0]) utt_pol_words = utt.meta['pol_words'] for y in utt_pol_words: if (y not in pol_words_sh.keys()): pol_words_sh[y] = 1 else: pol_words_sh[y] = pol_words_sh[y] + 1 print(pol_words_sh) # + freqs_sh = [] large_pol_words_sh = [] for x in pol_words_sh.keys(): freqs_sh.append(pol_words_sh[x]) large_pol_words_sh.append(x) freqs_sh, large_pol_words_sh = (list(x) for x in zip(*sorted(zip(freqs_sh, large_pol_words_sh)))) freqs_sh.reverse() large_pol_words_sh.reverse() print(freqs_sh) # + objects = large_pol_words_sh y_pos = np.arange(len(objects)) plt.figure(figsize=(10,4)) plt.bar(y_pos, freqs_sh, align='center', alpha=0.5) plt.xticks(y_pos, objects, rotation='vertical') plt.ylabel('Frequency') plt.title('Most commonly used political words in Sandy Hook Corpus') plt.show() # - print(corpus.get_utterance(utter_ids[60])) # ## Sandy Hook analysis # Now we try to establish a time series of how many words there are per day after December 14, 2012 (Sandy Hook shooting day). Timestamp: 1355461200 # + from datetime import datetime import pandas as pd ps = 7 #Sandy hook dates #start_date = '2012-12-14' #end_date = '2012-12-22' start_date = '2012-07-20' end_date = '2012-07-28' num_posts_sh = [0] * ps times = pd.date_range(start=start_date,end=end_date,periods=ps+1) times = np.array(times) bin_times = times[:-1] # convert to datetime object times_temp = [] for i,x in enumerate(times): times_temp.append(pd.to_datetime(x)) times = times_temp for i,x in enumerate(sh_list): utt = corpus.get_utterance(x[0]) posted_time = datetime.fromtimestamp(utt.timestamp) y = 0 while (posted_time > times[y]): y = y + 1 ## this gives us the timeframe to mark it as num_posts_sh[y-1] = num_posts_sh[y-1] + 1 print(num_posts_sh) # - plt.plot(bin_times,num_posts_sh) plt.xticks(rotation='vertical') plt.ylabel('Number of posts') plt.title('Number of posts per day 7 days after Sandy Hook') plt.show() # + from nltk.tokenize import word_tokenize ## function takes in utterance and counts total words def count_total_words(utt): if utt.text != None: tokenized = word_tokenize(utt.text.lower()) return len(tokenized) # - ##check above function works as expected print(count_total_words(corpus.get_utterance(utter_ids[400080]))) print(corpus.get_utterance(utter_ids[400080])) # + ## next, I want the incidence rate per day of political words inc_rate_sh = [0]*ps total_pol_words_sh = [0]*ps total_words_sh = [0]*ps for i,x in enumerate(sh_list): utt = corpus.get_utterance(x[0]) posted_time = datetime.fromtimestamp(utt.timestamp) y = 0 while (posted_time > times[y]): y = y + 1 ## this gives us the timeframe to mark it as total_pol_words_sh[y-1] = total_pol_words_sh[y-1] + utt.meta['num_pol_refs'] total_words_sh[y-1] = total_words_sh[y-1] + count_total_words(utt) if y == 0: print(count_total_words(utt)) print(total_pol_words_sh) print(total_words_sh) for i in range(0, ps): if total_words_sh[i] != 0: inc_rate_sh[i] = total_pol_words_sh[i]/total_words_sh[i] print(inc_rate_sh) # - plt.plot(bin_times,inc_rate_sh) plt.xticks(rotation='vertical') plt.ylabel('Political words/total words') plt.title('Incidence rate of political words in utterances') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py 
# format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import tensorflow as tf x = tf.Variable(0) increment_x = tf.assign(x, x + 1) # + sess = tf.Session() initializer = tf.global_variables_initializer() sess.run(initializer) saver = tf.train.Saver(max_to_keep=4, keep_checkpoint_every_n_hours=2) saver.save(sess, './saved_models/my-test-model') # - for counter in range(5): output = sess.run([increment_x]) print(output) saver.save(sess, './saved_models/my-test-model',global_step=counter,write_meta_graph=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="0A284inMFZqg" # # Variables # Una variable es como una caja donde ponemos guardar valores: # * numeros enteros (int) # * números con decimales (float) # * booleanos (True, False) # * cadenas alfanuméricas (string) # * ... # # Ejemplo: x = 5 # # El sistema operativo reserva una dirección de memoria RAM donde guardar esa variable. # # Luego se puede cambiar el valor almacenado en la variable: # # Ejemplo: x = 7 # # Se dice que hemos **asignado** un nuevo valor a la variable. # + colab={"base_uri": "https://localhost:8080/"} id="IqeAqipMG7rD" outputId="dff36516-5bb5-4fae-b8dc-dbb108ea2e74" x = 5 print("x =",x) x = 7 print("x =",x) # + colab={"base_uri": "https://localhost:8080/"} id="BoZ3cNctIHOW" outputId="46cb4fcd-f5ea-4820-b4e1-c8ece6acd916" edad = 14 # int, número entero nota_examen = 7.3 # float, número flotnte (con decimales) ciudadNacimiento = "Madrid" # string, cadena alfanumérica avion_aterrizado = True # boobleano, variable de tipo boolean, True, False edad_hermanos = [9,12,14] # list, variable de tipo lista. Las listas van entre corchetes [] print(edad) print(nota_examen) print(ciudadNacimiento) print(avion_aterrizado) print(edad_hermanos) # + [markdown] id="gXIy9WmPmMVL" # ## Tipos de variables # Con la función ``````type()`````` podemos ver de qué tipo es una variable. # + colab={"base_uri": "https://localhost:8080/"} id="5NVB5d_QmXAD" outputId="0ed6ce6b-1333-400b-8f06-a54c424521a9" print("edad = ", edad) print(type(edad)) print("nota_examen = ", nota_examen) print(type(nota_examen)) print("ciudadNacimiento = ", ciudadNacimiento) print(type(ciudadNacimiento)) print("avion_aterrizado = ", avion_aterrizado) print(type(avion_aterrizado)) print("edad_hermanos = ", edad_hermanos) print(type(edad_hermanos)) # + [markdown] id="8SbGCIoBki-I" # ## Nombres de variables ilegales # * No pueden empezar por número # * No pueden llevar símbolos como $€@%&? (la barra baja si se admite) # * No pueden tener espacios en blanco # * No valen palabras reservadas del lenguaje (and, or, True, import, ...) # + id="sO40EHRZJ4YY" #100Montaditos = 2.5 # error, no se puede empezar por número #money$ = 20 # error, no llevar simbolos #Elon Musk = "Sudáfrica" # error, no valen espacios, por convenio se usan minúsculas #import = 100 # error, palabra reservada del lenguaje _clave = "Voyager" # si se puede comenzar por barra baja # + [markdown] id="KrgyE0wV3IC_" # ## Caracteres no recomendados # * No se recomienda usar letras acentuadas áéíóú ÁÉÍÓÚ üá # * No se recomenda usar ñ Ñ ç Ç # * No se recomienda usar mayúsculas # # + [markdown] id="f1blFCkLLkj6" # ### Reto 2.1. 
Creación y asignación de variables # Crea unas cuantas variable de diferente tipo: # * int # * float # * string # * boolean # * list # Inicializa las variables con algún valor. # Mustra en pantalla el valor de cada variable. # Cambia el valor de las variables. # Imprime el nuevo valor de cada variable. # + [markdown] id="6pTGUkhaMzTP" # ### Reto 2.2. Precio con descuento # - Crea una variable con el precio de un juguete que inicialmente es de 20 euros. # - Imprime su precio # - El juguete se pone de oferta un 35%, asigna a la variable el nuevo precio # - Imprime el nuevo precio # + [markdown] id="gsWJLj4tOyw4" # ## Operar con variables # Podemos operar con variables usando: # * los operadores aritméticos (+,-,*,/,//,%) # * los operadores lógicos (not, and, or) # * los comparadores (==, !=,<,>,<=,>=). # + colab={"base_uri": "https://localhost:8080/"} id="RX4u7Exwg3ku" outputId="1f8b253e-3d38-4de5-bb67-a1db8d484790" x = 5 y = 2 print("x+y =", x+y) print(f"La resta de {x}-{y} es {x-y}") # la f significa format, permite formatear lo que imprimimos print(f"Si elevas {x} a {y} el resultado es {x**y}") # El cuadrado de 5 es 5*5=25 # + colab={"base_uri": "https://localhost:8080/"} id="RkMS8ssQPXRk" outputId="e04bb52d-d587-4e65-ac23-e9957bea757a" # precio de los productos en la cesta de la compra leche = 4.6 lechuga = 1.2 pan = 0.8 huevos = 4.2 total = leche + lechuga + pan + huevos print(f"El importe todal de la cesta de la compra es {total} euros.") # + [markdown] id="uiLHQjoMpYIc" # ### Reto 2.3 Calcular el precio con IVA # Sabemos que el precio de un dron es de 200 € antes de aplicar el impuesto del IVA y que el impuesto aplicable es del 21%. Calcular el precio de venta al público que incluye ya el impuesto. # Usar variables. # + [markdown] id="bNvB5pW8rpqf" # ### Reto 2.4 Comprar unos refrescos # Me han dado un billete de 20 € para ira a comprar unas latas de refrescos. El precio de cada lata antes de aplicar el IVA es de 1.2 €. El IVA aplicable a las bebidas es del 10%. # ¿Cuántas latas podré comprar? # ¿Cuánto me ha sobrado? # Usar variables. # + [markdown] id="3x2W2FLitawf" # ## Contadores # Un contador es una variable que se va incrementando, habitualmente de uno en uno. # Por ejemplo, cuando cumplimos un año más nuestra edad se incrementa en un año. # + colab={"base_uri": "https://localhost:8080/"} id="JNaNOjvetz4M" outputId="221ed85b-8a0b-4536-a9e2-2c2d634e93db" edad = 15 edad = edad + 1 # contador. La variable edad es igual al antiguo valor de la variable edad más uno. print(edad) # + colab={"base_uri": "https://localhost:8080/"} id="ZjdupJ0iuITs" outputId="4b45f1e4-197b-4d95-95db-dc8939ef362f" vidas = 3 # comenzamos un juego con 3 vidas print(vidas) vidas = vidas + 1 # si en el juego ganamos una vida podemos incrementar su valor en uno print(vidas) vidas += 1 # esta es una forma abreviada de escribir lo mismo que antes. Incrementa las vidas en una más. print(vidas) vidas -= 1 # perdemos una vida. 
Es una forma abreviada de escribir vidas = vidas - 1 print(vidas) # + colab={"base_uri": "https://localhost:8080/"} id="B33qx3lbuzfE" outputId="0f37e5af-3e03-414b-a956-3f5ca7127856" # vamos a ir anotando la página de un libro por la que vamos y las que leemos cada día, para ver como se incrementa pagina = 100 # página por la que comenamos a anotar el valor, para saber por donde vamos en la lectura del libro print(pagina) pagina += 12 # hoy he leido 12 páginas print(pagina) pagina += 18 # hoy he leido 18 páginas más print(pagina) pagina -= 5 # he retrocedido 5 páginas ya que quiero volver a leerlas, porque ayer no me enteré bien de esa parte print(pagina) # + [markdown] id="6G_03q_fw9Wu" # ### Reto 2.5. Cobros y pagos # * Inicialmente tengo 20 €. # * Me dan 50 € por mi cumpleaños. # * Me gasto en ir al cine 15 € # * Me encuentro una moneda de 2 €. # * Compro un juego de 17 €. # * Compro un comic por 8 €. # ¿Cuánto dinero tengo al final? # Utiliza variables, haciendo los pasos intermedios. # + [markdown] id="OhVFpZ6b0kJ6" # ### Reto 2.6. Variables permitidas # Comprueba si la siguientes variable son o no válidas. # * IVA = 21 # * input = 100 # * myCity = "Madrid" # * _100_Km = "Repostar" # * velocidad luz = 300000 # * velocidad_luz = 300_000 # * 101Dalmatas = "Disney" # * ciudad_de_la_luz = "París" # * nombre@email ="" # * puntos-totales = 117 # * global = "Viaje al mundo en 80 días" # * número_daño = 4 # no se recomienda el uso de la ñ ni acentos # + [markdown] id="OIjoTBLZ6LQl" # ## Variables booleanas # Una variable puede tomar el valor True o False. # + colab={"base_uri": "https://localhost:8080/"} id="yf5aiwAv6WYB" outputId="fab0acce-ee94-4273-efb7-3d68e8837b28" huevos = 100 docenas = huevos//12 sobran = huevos%12 justos = sobran==0 # justos es una variable booleana print(f"En {huevos} huevos hay {docenas} docenas y sobran {sobran} huevos.") print(f"¿Están justos? {justos}") # + [markdown] id="Tjayh3HH76JO" # ### Reto 2.7. Plazas de un autobús # * Un autobús tiene 37 plazas. # * Inicialmente viajan 20 personas. # * Crea una variable booleana que diga si quedan plazas libres. # * Luego suben al autobús 12 personas. # * En la siguiente parada bajan 4 y quieren subir 8, ¿será posible? # * En la siguiente parada bajan 3 y quieren subir 5, ¿será posible? # * Usa variables siguiendo el proceso paso a paso. # + [markdown] id="iqSp8fUu8fsX" # ### Reto 2.8. Usando variables booleanas # * Crea un caso con dos variables booleanas, por ejemplo en relación a si hemos pasado un nivel de un juego y si nos quedan recursos. # * Usa ambas variables booleanas y prueba a combinarlas con and, or, not. # + [markdown] id="Y1WEmFNH_20U" # ## Intercambio de valores de dos variables # Tenemos dos variables: # * x = 3 # * y = 5 # deseamos intercambiar sus valores. # Podemos resolverlo usando cualquiera de los siguientes métodos. # # ### Método 1 # Usando una variable auxiliar que podemos llamar aux, que recogerá el valor de x mientras y toma su valor. # + colab={"base_uri": "https://localhost:8080/"} id="friJc-pJAuVl" outputId="c76b5196-3adc-48a2-ec7b-46f4f109dc2f" x = 3 y = 5 aux = x x = y y = aux print("nuevo valor de x:", x) print("nuevo valor de y;", y) # + [markdown] id="rbZEJF3ABOMN" # ### Método 2 # Python tiene un método estupendo que permite intercambiar los valores de x e y sin usar una variable auxiliar. Muchos otros lenguajes no tienen este método. 
# + colab={"base_uri": "https://localhost:8080/"} id="E4pGtDLhBgBm" outputId="d9519eb5-424c-42fd-b4a0-c8c859c1b911" x = 3 y = 5 x,y = y,x # forma maravillosa de permutar valores entre variables print("nuevo valor de x:", x) print("nuevo valor de y;", y) # + [markdown] id="iuQoOJRkCIzB" # ## Asignaciones múltiples # + colab={"base_uri": "https://localhost:8080/"} id="ya5YgilKCOuk" outputId="f470af7e-036b-46eb-bffa-638e850030f3" # asignamos las edades de Ana y de Eva. ana, eva = 14, 16 print(f"La edad de Ana es {ana} y la edad de Eva es {eva}.") # + colab={"base_uri": "https://localhost:8080/"} id="Uav_N3qjCgZP" outputId="a7086336-79f6-43aa-a5b1-8a99a5dde089" # En un juego los tres coches parten con el mismo número de puntos rojo = verde = azul = 100 print(f"El rojo tiene {rojo}, el verde tiene {verde} y el azul tiene {azul} puntos.") # + [markdown] id="ZCld-pdaD63S" # ### Reto 2.9. Juego de cartas # En un juego de cartas cuando sale la carta "reverse" los jugadores deben intercambiar sus cartas, y por tanto sus puntos. # Hay dos jugadores que tienen estos puntos: # * luis = 80 # * raul = 45 # Crear una tirada de una carta donde se pueda sacar o no la carta "reverse", controlar este posible resultado con una variable booleana. # Si se da la situación de "reverse" intercambie los puntos de los jugadores. # Pruebe a realizar la permuta usando varios métodos. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # States of Mexico into the Spatial Feature Registry # # #### This code is in progress. The code registers the States of Mexico into the Spatial Feature Registry (SFR within Data Distilleries GC2 instance) using the following workflow. All data are retained from the source (unaltered), three registration fields are added (_id, reg_date, reg_source) and data are exported to a GeoJSON file. The GeoJSON file is then uploaded to ScienceBase to document the final data as it is represented in the SFR. Currently we are uploading data to the SFR using a manual process, with plans to automate this step in the future. # # #### General workflow involves: # 1: Retrieve Data From Source (ScienceBase Item: https://www.sciencebase.gov/catalog/item/5ab57393e4b081f61ab781f4) # 2: Create GeoDataFrame and identify native crs # 3: Define Variables needed throughout process # 4: Create new ScienceBase item to describe registration process # 5: Build and export GeoJSON representation of the data. This process includes the addition of two registration fields that document information about registration (reg_source-> points to new SB item), and a registered uuid (_id). # 6: Upload GeoJSON file to new ScienceBase item to document what was registered into SFR, along with additional information about when and how registration occured. This process will likely change as we introduce a more systematic way of tracking prov. During this step the user will upload data to GC2 as well (SFR schema). Currently this process is done manually through the UI. 
# # Code by: (USGS) # # Date: 20180330 #Import Needed Packages import geopandas as gpd import urllib.request as ur import subprocess import geojson from sfr_load_utils import * # #### Step 1: Retrieve data from source # + ### Step 1: Retrieve Dataset from ScienceBase #Geostatistical Framework of Mexico dataset stored at https://www.sciencebase.gov/catalog/item/5ab555c6e4b081f61ab78093 #Define url of zipped shapefile download downloadUrl ='https://www.sciencebase.gov/catalog/file/get/5ab57393e4b081f61ab781f4?f=__disk__a3%2Fb6%2F61%2Fa3b6610d637a38e0f76ca42a86b07607b2abd7c7' #Download government unit file to local directory ur.urlretrieve(downloadUrl, '889463142683_s.zip') #In working directory unzips file subprocess.call(r'"C:\Program Files\7-Zip\7z.exe" x ' + '889463142683_s.zip' ) # - # #### Step 2: Import shapefile into GeoDataFrame and identify native crs #GC2 currently does not hand the original epsg of 6372, transforming in python ran into issues so ESRI arcpy was used to do this step. import arcpy input_file = 'conjunto_de_datos/areas_geoestadisticas_estatales.shp' file = 'mexico_states_4326.shp' out_coord_sys = arcpy.SpatialReference('WGS 1984') arcpy.Project_management(input_file, file, out_coord_sys) #Create GeoDataFrame from downloaded shapefile df = gpd.read_file(file) #Eventually will need a coded method to extract the epsg number (used as variable later), might be tricky given how this is returned df.crs # #### Step 3: Define Variables #User Defined Variables epsg = {'code':'4326'} #starts as https://epsg.io/6372 but GC2 can't render this so transformed to 4326 (see above) expected_geom_type = 'MultiPolygon' outfile_name = 'mexico_states' source_sbitem = '5ab57393e4b081f61ab781f4' list_tags = ['Jurisdictional Units','Area Beyond National Jurisdiction','BIS Spatial Feature Registry','Mexico'] date = '2018-04-06' # #### Step 4: Create SB Item to describe SFR Registration # + #Build SB Item to house SFR GeoJSON File, including description of item. #This step outputs source_uri (uri to the new sb item that describes the data) to be included as registration information. #Turns list of tags into json format accepted by SB sb_tags = build_sb_tags(list_tags) #Create SB session and log in sb = sb_login() #Creates JSON needed to build and describe new SB item item_info = sfr_item_info(sb,source_sbitem, sb_tags, date) #Builds new SB item new_item = build_new_sfr_sbitem(sb,item_info) #URI of new SB item. This is inserted into GEOJSON so we have a direct connection in SFR to documentation... this step may not #be needed as we build prov capabilities. source_uri = str(new_item['link']['url']) print (source_uri) # - # #### Step 5: Build and export GeoJSON representation of data. Add registration id and source_uri (newly created SB item). Verify that the correct number of features were included in the GeoJSON dataset. #verify correct number of features collection = df_to_geojson(df, epsg, source_uri, expected_geom_type) print (verify_correct_count(collection, df)) #export_geojson(outfile_name, collection) #Add file to SB Item file = export_geojson(outfile_name, collection) outfile_zip = zip_geojson(outfile_name) # #### Step 6: Upload GeoJSON file to ScienceBase Item and also upload to GC2 using UI (make sure to specify UTF-8 encoding and MultiPolygon). sb.upload_file_to_item(new_item, outfile_zip) # + #Currently the new SB item needs to have some additional information uploaded. The UI can be used for this for now but in the future we will want to build as much as we can into this process. 
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="gfH3ez32gU_E" # $\Large{\text{Ensemble methods}}$ # + [markdown] id="h4UiPgD5ge2J" # Let us first generate a synthetic data set. # + [markdown] id="X4QxZRrBweDl" # We shall generate $n$ ($n$ being even) samples where the feature vector of each sample is 2-dimensional of the form $x^i = (x^i_1,x^i_2), i \in \{1,2,\ldots,n\}$. We assume that $\frac{n}{2}$ samples are from a spiral shaped data set called $S_1$ and other $\frac{n}{2}$ samples are from a different spiral called $S_2$. For each sample $x^i$ we have the following labeling scheme: # # $ # \begin{align} # y^i = \begin{cases} # +1 \text{ if } x^i \in S_1 \\ # -1 \text{ if } x^i \in S_2. # \end{cases} # \end{align} # $ # # Here the spirals $S_1$ and $S_2$ are associated with the parametric forms: # $x_1 = r(\varphi) \cos \varphi$ and $x_2 = r(\varphi) \sin \varphi$ where$\varphi$ the angle and $r(\varphi)$ is a (monotonically increasing) radius function depending on the angle $\varphi$. The coordinates are $x_1$ and $x_2$. # # # + colab={"base_uri": "https://localhost:8080/", "height": 318} id="J5c_jPIZ82v7" outputId="dd3b1673-b0b9-4057-a1c2-2667560b27f0" import numpy as np from numpy import pi import matplotlib.pyplot as plt num_samples = 600 angle = np.linspace(0,2*pi,int(num_samples/2)) mean = [0.0, 0.0] cov = [ [6.0, 6.0], [6.0, 6.0] ] X = np.zeros( (num_samples, 2) ) r_1 = 2*angle + pi data_1 = np.array([np.cos(angle)*r_1, np.sin(angle)*r_1]).T #print(data_1.shape) X_1 = data_1 + np.random.multivariate_normal(mean, cov, int(num_samples/2)) #np.random.randn(int(num_samples/2),2) X[:int(num_samples/2),:] = X_1 r_2 = -2*angle - pi data_2 = np.array([np.cos(angle)*r_2, np.sin(angle)*r_2]).T X_2 = data_2 + np.random.multivariate_normal(mean, cov, int(num_samples/2)) #np.random.randn(int(num_samples/2),2) X[int(num_samples/2):,:] = X_2 y = np.ones(num_samples) y[int(num_samples/2):] = -1*y[int(num_samples/2)] #print(y) print(X.shape) print(y.shape) figure, axes = plt.subplots(1) plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='o', color='blue') plt.title( 'Data from two spirals' ) plt.show() # + [markdown] id="0hl33pubBLXT" # Consider an input space $\mathcal{X} \subseteq {\mathbb{R}}^d$ and the output space $\mathcal{Y} = \{+1,-1\}$. Assume a fixed (but unknown) distribution $P(X,Y)$ over $\mathcal{X} \times \mathcal{Y}$. # # Let us assume that there are 15 base classifiers $C_1, C_2, \ldots, C_{15},$ where each classifier has an error rate of $\epsilon = 0.25$ on some sample from a fixed data distribution $P(X,Y)$. # # To predict the label for a test sample $\hat{x}$, we adopt the following inference procedure (called $\textbf{Ensemble Classifier}(\textbf{EC})$): # # 1. Predict $C_i(\hat{x})$ using each classifier $C_i, \ i \in \{1,2,\ldots,15\}$. # 2. Predict the final label $\hat{y} = \arg\max_{y \in \mathcal{Y}} \sum_{i=1}^{15} \delta(y==C_i(\hat{x}))$ where $\delta(p)$ is the indicator function given by: # # $ # \delta(p) = # \begin{cases} # 1 \text{ if } p \text{ is true. } \\ # 0 \text{ if } p \text{ is false. 
} # \end{cases} # $ # # # # $\textbf{Question:}$ What would be error rate of the classifier obtained from the above inference algorithm $\textbf{EC}$? # + [markdown] id="aaENz3hYEWrn" # $\textbf{One possible answer:}$ # # Suppose the classifiers are assumed to be independent, then the $\textbf{EC}$ classifier would make an error only when more than half of the classifiers (i.e. more than 7 classifiers) make error in the prediction. Hence we may write the error rate of $\textbf{EC}$ as: # # $ # \begin{align} # \text{error}_{\textbf{EC}} = \sum_{i=8}^{15} \begin{pmatrix} 15 \\ i \end{pmatrix} {\epsilon}^{i} (1-\epsilon)^{15-i} # \end{align} # $ # # which is approximately $0.017$. Note that this error rate is considerably smaller than the individual error rates of the classifiers. # + colab={"base_uri": "https://localhost:8080/", "height": 296} id="o2sIAgVOJiyr" outputId="cdfd772e-de88-4fd1-ee2d-14ef69a57756" #compute the error rates of EC for different error rates of C_i import math def comb(n, k): return math.factorial(n) // math.factorial(k) // math.factorial(n - k) epsilons = np.linspace(0,1,11) errors = np.zeros(epsilons.shape) num_classifiers = 15 eps_idx=0 for epsilon in epsilons: error_EC = 0 for j in np.arange(np.ceil(num_classifiers/2),num_classifiers+1): err = comb(num_classifiers,j)*math.pow(epsilon,j)*math.pow(1-epsilon,num_classifiers-j) error_EC += err errors[eps_idx] = error_EC eps_idx+=1 figure, ax = plt.subplots(1) plt.plot(epsilons, errors, marker='o') ax.set_xlabel('$\epsilon$') ax.set_ylabel('$Error_{EC}$') ax.set_xlim(0, 1) ax.set_ylim(errors.min()-0.1,errors.max()+0.1) ax.set_xticks(epsilons) ax.set_yticks(np.linspace(0,1,11)) ax.set_title('Error rate of Ensemble Classifier vs $\epsilon$') plt.show() # + [markdown] id="7grX2yxiONby" # $\textbf{Important to note:}$ # # # # 1. The base classifiers $C_1, C_2, \ldots, C_{15}$ are assumed to be independent. # 2. The error rate $\epsilon$ of each base classifier must be less than $0.5$ for the ensemble classifier to behave better. $\textbf{What is meant by a base classifier having an error rate less than } 0.5$? # # # + [markdown] id="u4dNY5VdTC2O" # $\large{\text{Ways of building an ensemble classifier}}$ # # $\textbf{Create multiple data partitions from training data}$ # # 1. Resample the original training data $D$ (using sampling with replacement) and create different data paritions $D_1, D_2, \ldots, D_M$. # 2. Train different classifiers $C_i$ on respective data partition $D_i$, $i \in \{1,2,\ldots,M\}$. # 3. For a test data point $\hat{x}$ predict the label $\hat{y}=\text{MajorityVote}(C_1(\hat{x}),C_2(\hat{x}), \ldots, C_M(\hat{x}))$. 
# # # # # # # + colab={"base_uri": "https://localhost:8080/", "height": 913} id="OVeI1P5_V7dm" outputId="daff8935-aa9c-48cd-b460-995ace3bfa19" from urllib.request import urlopen from PIL import Image img = Image.open(urlopen('https://github.com/balamurugan-palaniappan-CEP/AIML_CEP_2021/raw/main/images/ensemble_classifier.png')) img # + id="S9by3ooWWvsy" np.random.seed(1000) #Create an index array indexarr = np.arange(num_samples) #index array np.random.shuffle(indexarr) #shuffle the indices #print('shuffled indices of samples:') #print(indexarr) # + colab={"base_uri": "https://localhost:8080/"} id="9uPT5KOvWycn" outputId="860ee9e5-551c-4f1f-f371-dedc36ec4765" #Use the samples corresponding to first 80% of indexarr for training num_train = int(0.8*num_samples) #Use the remaining 20% samples for testing num_test = num_samples-num_train print('num_train: ',num_train, 'num_test: ', num_test) # + colab={"base_uri": "https://localhost:8080/"} id="0mZ6_O4jW01e" outputId="d62327fa-ea5b-41df-f12a-6534bc478c41" #Use the first 80% of indexarr to create the train data features and train labels train_X = X[indexarr[0:num_train]] train_y = y[indexarr[0:num_train]] print('shape of train data features:') print(train_X.shape) print('shape of train data labels') print(train_y.shape) # + colab={"base_uri": "https://localhost:8080/"} id="M79SAdaEW3KD" outputId="8884ea7f-9664-42f7-e794-e591d5f64210" #Use remaining 20% of indexarr to create the test data and test labels test_X = X[indexarr[num_train:num_samples]] test_y = y[indexarr[num_train:num_samples]] print('shape of test data features:') print(test_X.shape) print('shape of test data labels') print(test_y.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="vY_mFzGtip_u" outputId="3f3c1e37-f553-4948-a1b3-6c4ba7c960fc" #Let us now use three different base classifiers and check the decision boundary from sklearn.svm import LinearSVC #import linear SVM from sklearn.neighbors import KNeighborsClassifier from sklearn import tree #decision tree from scikit learn from sklearn.svm import SVC from sklearn.linear_model import LogisticRegression # creating an object of LogisticRegression class clf_list = [] #clf_linearsvc = LinearSVC(C=1.0) clf_neigh = KNeighborsClassifier(n_neighbors=1, metric='euclidean') #weights='uniform' (default) or 'distance' clf_svc = SVC(kernel='rbf', gamma=1) clf_tree = tree.DecisionTreeClassifier(criterion='entropy') #clf_logit = LogisticRegression(C=1.0) # C is set to be large number in order to remove the inbuilt regularization #clf_list.append(clf_linearsvc) clf_list.append(clf_neigh) clf_list.append(clf_svc) clf_list.append(clf_tree) #clf_list.append(clf_logit) clf_names = ['Nearest Neighbors', 'Kernel SVM', 'Decision Tree'] num_classifiers = 3 # create a mesh to plot in h=0.05 #mesh step size x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1 x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, h), np.arange(x2_min, x2_max, h)) for i in range(num_classifiers): print('classifier:',clf_list[i]) indexarr = np.arange(train_X.shape[0]) #index array for train data np.random.shuffle(indexarr) #shuffle the indices #we shall choose 60% of the data partition_prop = 0.6 num_samples_partition = int(partition_prop*train_X.shape[0]) X_partition = train_X[indexarr[0:num_samples_partition]] y_partition = train_y[indexarr[0:num_samples_partition]] base_clf = clf_list[i] base_clf_model = base_clf.fit(X_partition,y_partition.ravel()) #test accuracy from sklearn.metrics import 
accuracy_score test_y_predicted = base_clf_model.predict(test_X) test_acc = accuracy_score(test_y, test_y_predicted) print('test accuracy from classifier:',clf_names[i],' is:', test_acc) if i == 0: Z_all_clf = base_clf_model.predict(np.c_[xx1.ravel(), xx2.ravel()]) # Put the result into a color plot Z = Z_all_clf.reshape(xx1.shape) test_pred_all_clf = test_y_predicted fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,6)) ax1.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8) # Plot also the training points ax1.scatter(train_X[:, 0], train_X[:, 1], c=train_y, cmap=plt.cm.coolwarm) # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green') xlabel = 'x1' + str('\n')+'Classifier='+str(clf_names[i]) ax1.set_xlabel(xlabel) ax1.set_ylabel('x2') ax1.set_xlim(xx1.min(), xx1.max()) ax1.set_ylim(xx2.min(), xx2.max()) ax1.set_xticks(()) ax1.set_yticks(()) ax1.set_title('decision boundary with training points') #plot the test points along with decision boundaries ax2.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8) # Plot also the test points ax2.scatter(test_X[:, 0], test_X[:, 1], c=test_y, cmap=plt.cm.coolwarm) # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green') xlabel = 'x1' + str('\n')+'Classifier='+str(clf_names[i]) ax2.set_xlabel(xlabel) ax2.set_ylabel('x2') ax2.set_xlim(xx1.min(), xx1.max()) ax2.set_ylim(xx2.min(), xx2.max()) ax2.set_xticks(()) ax2.set_yticks(()) ax2.set_title('decision boundary with test points') plt.show() elif i ==1: Z_base_clf = base_clf_model.predict(np.c_[xx1.ravel(), xx2.ravel()]) Z_all_clf = np.column_stack( (Z_all_clf,Z_base_clf) ) # Put the result into a color plot Z = Z_base_clf.reshape(xx1.shape) test_pred_all_clf = np.column_stack( (test_pred_all_clf,test_y_predicted) ) fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,6)) ax1.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8) # Plot also the training points ax1.scatter(train_X[:, 0], train_X[:, 1], c=train_y, cmap=plt.cm.coolwarm) # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green') xlabel = 'x1' + str('\n')+'Classifier='+str(clf_names[i]) ax1.set_xlabel(xlabel) ax1.set_ylabel('x2') ax1.set_xlim(xx1.min(), xx1.max()) ax1.set_ylim(xx2.min(), xx2.max()) ax1.set_xticks(()) ax1.set_yticks(()) ax1.set_title('decision boundary with training points') #plot the test points along with decision boundaries ax2.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8) # Plot also the test points ax2.scatter(test_X[:, 0], test_X[:, 1], c=test_y, cmap=plt.cm.coolwarm) # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green') xlabel = 'x1' + str('\n')+'Classifier='+str(clf_names[i]) ax2.set_xlabel(xlabel) ax2.set_ylabel('x2') ax2.set_xlim(xx1.min(), xx1.max()) ax2.set_ylim(xx2.min(), xx2.max()) ax2.set_xticks(()) ax2.set_yticks(()) ax2.set_title('decision boundary with test points') plt.show() elif i==2: Z_base_clf = base_clf_model.predict(np.c_[xx1.ravel(), xx2.ravel()]) Z_all_clf = np.column_stack( (Z_all_clf,Z_base_clf) ) test_pred_all_clf = np.column_stack( (test_pred_all_clf,test_y_predicted) ) Z = 
Z_base_clf.reshape(xx1.shape) fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,6)) ax1.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8) # Put the result into a color plot # Plot also the training points ax1.scatter(train_X[:, 0], train_X[:, 1], c=train_y, cmap=plt.cm.coolwarm) # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green') xlabel = 'x1' + str('\n')+'Classifier='+str(clf_names[i]) ax1.set_xlabel(xlabel) ax1.set_ylabel('x2') ax1.set_xlim(xx1.min(), xx1.max()) ax1.set_ylim(xx2.min(), xx2.max()) ax1.set_xticks(()) ax1.set_yticks(()) ax1.set_title('decision boundary with training points') #plot the test points along with decision boundaries ax2.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8) # Plot also the test points ax2.scatter(test_X[:, 0], test_X[:, 1], c=test_y, cmap=plt.cm.coolwarm) # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green') xlabel = 'x1' + str('\n')+'Classifier='+str(clf_names[i]) ax2.set_xlabel(xlabel) ax2.set_ylabel('x2') ax2.set_xlim(xx1.min(), xx1.max()) ax2.set_ylim(xx2.min(), xx2.max()) ax2.set_xticks(()) ax2.set_yticks(()) ax2.set_title('decision boundary with test points') plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="ZfdDPeOBiv4O" outputId="8d6dbb38-1357-4824-8ab1-8281e8d884c2" Z_all_clf = np.array(Z_all_clf) print(Z_all_clf.shape) test_pred_all_clf = np.array(test_pred_all_clf) print(test_pred_all_clf.shape) # + id="fBaQJbI7i0VX" colab={"base_uri": "https://localhost:8080/"} outputId="3bd36f18-a178-4397-e0b3-9b791c7fa1f5" from scipy import stats Z_all_clf = Z_all_clf.astype(int) test_pred_all_clf = test_pred_all_clf.astype(int) Z, counts = stats.mode(Z_all_clf, axis=1) test_pred, counts = stats.mode(test_pred_all_clf, axis=1) test_acc = accuracy_score(test_y, test_pred) print('test accuracy from ensemble classifier is:', test_acc) # for i in range(Z_all_clf.shape[0]): # #print(Z_all_clf[i]) # if (i+1)%100000 == 0: # print('*') # unique, counts = np.unique(Z_all_clf[i].astype(int), return_counts=True) # Z[i] = unique[np.argmax(counts)] # + colab={"base_uri": "https://localhost:8080/", "height": 399} id="Xm_l6L47jF4R" outputId="aa76c490-8492-4986-c0f2-8d576502753f" # Put the result into a color plot Z = Z.reshape(xx1.shape) fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,6)) ax1.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8) # Put the result into a color plot # Plot also the training points ax1.scatter(train_X[:, 0], train_X[:, 1], c=train_y, cmap=plt.cm.coolwarm) # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green') xlabel = 'x1' + str('\n')+'Ensemble Classifier' ax1.set_xlabel(xlabel) ax1.set_ylabel('x2') ax1.set_xlim(xx1.min(), xx1.max()) ax1.set_ylim(xx2.min(), xx2.max()) ax1.set_xticks(()) ax1.set_yticks(()) ax1.set_title('decision boundary with training points') #plot the test points along with decision boundaries ax2.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8) # Plot also the test points ax2.scatter(test_X[:, 0], test_X[:, 1], c=test_y, cmap=plt.cm.coolwarm) # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', 
color='green') xlabel = 'x1' + str('\n')+'Ensemble Classifier' ax2.set_xlabel(xlabel) ax2.set_ylabel('x2') ax2.set_xlim(xx1.min(), xx1.max()) ax2.set_ylim(xx2.min(), xx2.max()) ax2.set_xticks(()) ax2.set_yticks(()) ax2.set_title('decision boundary with test points') plt.show() # + [markdown] id="ApvzhCr20KDq" # $\large{\text{Ways of building an ensemble classifier}}$ # # $\textbf{Create multiple attribute partitions from training data}$ # # 1. Resample the attributes from original training data $D$ (using sampling with replacement) and create different feature paritions $F_1, F_2, \ldots, F_M$. Note that the number of samples in these partitions might be same as that in $D$ or might be different. # 2. Train different classifiers $C_i$ on respective feature partition $F_i$, $i \in \{1,2,\ldots,M\}$. # 3. For a test data point $\hat{x}$ first create feature partitions based on $F_1, F_2, \ldots, F_M$ and predict the label $\hat{y}=\text{MajorityVote}(C_1(F_1(\hat{x})),C_2(F_2(\hat{x})), \ldots, C_M(F_M(\hat{x})))$. # # + id="Ty9A7B-B_9Em" colab={"base_uri": "https://localhost:8080/", "height": 910} outputId="e3dbe5a2-0892-4d16-aaae-21687bb63fa7" from urllib.request import urlopen from PIL import Image img = Image.open(urlopen('https://github.com/balamurugan-palaniappan-CEP/AIML_CEP_2021/raw/main/images/ensemble_classifier_RF.png')) img # + colab={"base_uri": "https://localhost:8080/"} id="w9ax-MqJ1UET" outputId="bc56d2a6-5e4d-4c7e-bcd1-864788adb24e" from sklearn.ensemble import RandomForestClassifier clf_rf = RandomForestClassifier(n_estimators = 100, random_state=0) clf_model = clf_rf.fit(train_X, train_y) test_y_predicted = clf_model.predict(test_X) test_acc = accuracy_score(test_y, test_y_predicted) print('test accuracy from RF classifier: is:', test_acc) # + colab={"base_uri": "https://localhost:8080/", "height": 399} id="r0oKPUy1DtgR" outputId="c0c7c8d1-b73f-44fc-f191-78bdca63e559" Z = clf_model.predict(np.c_[xx1.ravel(), xx2.ravel()]) Z = Z.reshape(xx1.shape) fig, (ax1, ax2) = plt.subplots(1,2, figsize=(16,6)) ax1.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8) # Put the result into a color plot # Plot also the training points ax1.scatter(train_X[:, 0], train_X[:, 1], c=train_y, cmap=plt.cm.coolwarm) # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green') xlabel = 'x1' + str('\n')+'RF Classifier' ax1.set_xlabel(xlabel) ax1.set_ylabel('x2') ax1.set_xlim(xx1.min(), xx1.max()) ax1.set_ylim(xx2.min(), xx2.max()) ax1.set_xticks(()) ax1.set_yticks(()) ax1.set_title('decision boundary with training points') #plot the test points along with decision boundaries ax2.contourf(xx1, xx2, Z, cmap=plt.cm.coolwarm, alpha=0.8) # Plot also the test points ax2.scatter(test_X[:, 0], test_X[:, 1], c=test_y, cmap=plt.cm.coolwarm) # plt.scatter(X[:int(num_samples/2),0],X[:int(num_samples/2),1], marker='o', color='red') # plt.scatter(X[int(num_samples/2):,0],X[int(num_samples/2):,1], marker='s', color='green') xlabel = 'x1' + str('\n')+'RF Classifier' ax2.set_xlabel(xlabel) ax2.set_ylabel('x2') ax2.set_xlim(xx1.min(), xx1.max()) ax2.set_ylim(xx2.min(), xx2.max()) ax2.set_xticks(()) ax2.set_yticks(()) ax2.set_title('decision boundary with test points') plt.show() # + [markdown] id="IwIBxXMnJ2Xs" # $\large{\text{Exercise}}$ # # # 1. 
For the two spirals dataset considered above, try the ensemble of the following classifiers # # # # * Nearest Neighbor with $3$ nearest neighbors and Manhattan metric # * Nearest Neighbor with $5$ nearest neighbors and Chebyshev metric # * Nearest Neighbor with $7$ nearest neighbors weighted by the Euclidean distance # * Nearest Neighbor with $11$ nearest neighbors weighted by the Chebyshev metric # * Kernel SVM with polynomial kernel with a suitable $p$ # * Kernel SVM with sigmoid kernel with a suitable $\gamma$ # * Decision tree with gini metric # # # Analyze the training set and test set performance obtained by each classifier and by the ensemble classifier. # # # 2. Write suitable code to obtain the type of features used in each tree in the random forest. Write suitable code to get the individual predictions from the trees in the random forest. # # # + id="douKORyGQS7U" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Grab USDA county-level maize data # + import requests import pandas as pd import numpy as np import geopandas as gp api_key = '210BA222-FC6E-3FB2-B4D7-DA2DAA1CC829' # + state_alphas = ["AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DE", "FL", "GA", "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME", "MD", "MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH", "NJ", "NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI", "SC", "SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI", "WY"] needless_info = ["CV (%)", "agg_level_desc", "asd_desc", "begin_code", "asd_code", "class_desc", "commodity_desc", "congr_district_code", "state_name", "country_code", "country_name", "domain_desc", "domaincat_desc", "end_code", "freq_desc", "group_desc", "load_time", "location_desc", "prodn_practice_desc", "reference_period_desc", "region_desc", "sector_desc", "short_desc", "source_desc", "statisticcat_desc", "unit_desc", "util_practice_desc", "watershed_code", "watershed_desc", "week_ending", "zip_5", "county_ansi", "state_ansi"] def get_corn(states, yields=True): """ Grabs county-level corn data from USDA API. Input: states = list of U.S. state codes yields = boolean, if True return yields in bushels/acre, else return area in acres Output: pandas dataframe """ data = pd.DataFrame() for state in states: print("Now grabbing: " + state) if yields: dat = requests.get("http://quickstats.nass.usda.gov/api/api_GET/?key=" + api_key + "&source_desc=SURVEY§or_desc=CROPS&group_desc=FIELD CROPS&commodity_desc=CORN&statisticcat_desc=YIELD&util_practice_desc=GRAIN&unit_desc=BU / ACRE&agg_level_desc=COUNTY&year__GE=1950&state_alpha=" + state) else: dat = requests.get("http://quickstats.nass.usda.gov/api/api_GET/?key=" + api_key + "&source_desc=SURVEY§or_desc=CROPS&group_desc=FIELD CROPS&commodity_desc=CORN&statisticcat_desc=AREA HARVESTED&util_practice_desc=GRAIN&unit_desc=ACRES&agg_level_desc=COUNTY&year__GE=1950&state_alpha=" + state) if dat.status_code == 200: print("Data grabbed from USDA successfully...") dat = dat.json() dat = pd.DataFrame(dat["data"]) data = pd.concat([data, dat], ignore_index=True) print("Filled!") else: print("Data grabbed from USDA unsuccessfully. Error code " + str(dat.status_code) + ". 
Skipping.") return data.drop(columns=needless_info) # + jupyter={"outputs_hidden": true} # Get data usda_county_yield = get_corn(state_alphas) print('\n\n\n Now for the areas...\n\n\n') usda_county_area = get_corn(state_alphas, yields=False) # + # Merge and tidy usda_county_yield.rename(columns = {'Value':'yield', 'state_fips_code':'state'}, inplace=True) usda_county_yield['fips'] = usda_county_yield['state'] + usda_county_yield['county_code'] usda_county_yield.drop(columns = ['county_code'], inplace=True) usda_county_area.rename(columns = {'Value':'area', 'state_fips_code':'state'}, inplace=True) usda_county_area['fips'] = usda_county_area['state'] + usda_county_area['county_code'] usda_county_area.drop(columns = ['county_code'], inplace=True) usda_county_all = pd.merge(usda_county_yield.drop_duplicates(subset=['fips','year']).query('county_name != "OTHER (COMBINED) COUNTIES"'), usda_county_area.drop_duplicates(subset=['fips','year']).query('county_name != "OTHER (COMBINED) COUNTIES"'), how='inner') # - # County shapefile with lat, lon of centroid county_shapefile = gp.read_file('../other/plotting_tools/counties_contig_plot.shp') county_shapefile_lat_lon = county_shapefile.to_crs("EPSG:4326") county_shapefile_lat_lon['lat'] = county_shapefile_lat_lon['geometry'].geometry.centroid.y county_shapefile_lat_lon['lon'] = county_shapefile_lat_lon['geometry'].geometry.centroid.x # Merge county_all = pd.merge(county_shapefile_lat_lon.drop(columns='geometry'), usda_county_all, on = 'fips', how='inner') # To numeric county_all['yield'] = county_all['yield'].astype(float) county_all['area'] = county_all['area'].apply(lambda x: float(x.replace(",","")) if type(x) == str else x) # Log yield county_all['log_yield'] = county_all['yield'].apply(lambda x: np.log(x) if x > 0.0 else np.nan) county_all # Save county_all.to_csv("../data/usda/maize_county_yield_area.csv", index=False) import pandas as pd county_all = pd.read_csv("../data/usda/maize_county_yield_area.csv") county_all = county_all.query('year >= 1956 and year <= 2005') county_all = county_all.query('lon >= -100') (county_all.groupby('fips').count()['log_yield'].values == 50).sum() county_all # + ############# NOT NEEDED # # Build index structure for final dataframe (including years) # all_fips = county_all['fips'].unique() # years = np.arange(1950, 2021, 1) # all_fips = [[fips]*len(years) for fips in all_fips] # all_fips = np.ndarray.flatten(np.asarray(all_fips)) # years = [years] * len(all_fips) # years = np.ndarray.flatten(np.asarray(years)) # tuples = list(zip(*[all_fips, years])) # index = pd.MultiIndex.from_tuples(tuples) # # Build empty dataframe with complete indexing # all_index = pd.DataFrame(index = index) # all_index.index.names = ["fips", "year"] # # Merge yields so that empty county/year pairings are "NaN" # county_all_indexed = pd.merge(all_index, county_all, on = ['fips', 'year'], how = "outer") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:anaconda3] # language: python # name: conda-env-anaconda3-py # --- # ## methylprep v1.6.3: Comparing noob_meth / noob_unmeth with sesame and minfi import methylprep import pandas as pd from pathlib import Path PATH = '../example_data/' meth = pd.read_csv(Path(PATH,'GSE69852/9247377085/9247377085/9247377085_R04C02_processed.csv')).set_index('IlmnID')[['beta_value']] meth = meth.rename(columns={'beta_value':'9247377085_R04C02'}) ses = 
pd.read_csv(Path(PATH,'GSE69852/sesame_betas.csv')).set_index('Unnamed: 0')[['9247377085_R04C02']] ses.index.name = 'IlmnID' minfi = pd.read_csv(Path(PATH,'minfi/minfi_noob_betas.csv')).set_index('Unnamed: 0')[['9247377085_R04C02']] minfi.index.name = 'IlmnID' minfi = minfi.sort_index() meth162 = pd.read_csv(Path(PATH,'GSE69852/test_v162/9247377085/9247377085_R04C02_processed.csv')).set_index('IlmnID')[['beta_value']] meth162 = meth162.rename(columns={'beta_value':'9247377085_R04C02'}) (meth-ses).plot.hist(bins=300, xlim=(-0.05, 0.05), title="methylprep 1.6.3 vs sesame") (meth162-ses).plot.hist(bins=300, xlim=(-0.05, 0.05), title="methylprep 1.6.2 vs sesame") print((meth-ses).mean(), (meth-ses).min(), (meth-ses).max()) print((meth162-ses).mean(), (meth162-ses).min(), (meth162-ses).max()) import numpy as np _filter = (meth - ses) > 0.3 meth[_filter].dropna() ignore = meth[_filter].dropna().index meth_revised = meth[ ~meth.index.isin(ignore) ] ses_revised = ses[ ~ses.index.isin(ignore) ] print((meth_revised-ses_revised).mean(), (meth_revised-ses_revised).min(), (meth_revised-ses_revised).max()) ses minfi.sort_index() (meth-minfi).plot.hist(bins=300, xlim=(-0.05, 0.05), title="methylprep 1.6.3 vs minfi") (meth162-minfi).plot.hist(bins=300, xlim=(-0.05, 0.05), title="methylprep 1.6.2 vs minfi") print((meth-minfi).mean(), (meth-minfi).min(), (meth-minfi).max()) print((meth162-minfi).mean(), (meth162-minfi).min(), (meth162-minfi).max()) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="Okfr_uhwhS1X" colab_type="text" # # Lambda School Data Science - A First Look at Data # # # + [markdown] id="9dtJETFRhnOG" colab_type="text" # ## Lecture - let's explore Python DS libraries and examples! # # The Python Data Science ecosystem is huge. You've seen some of the big pieces - pandas, scikit-learn, matplotlib. What parts do you want to see more of? # + id="WiBkgmPJhmhE" colab_type="code" colab={} # More Scikit-learn! # + [markdown] id="002YYgH_CQqm" colab_type="text" # # + [markdown] id="lOqaPds9huME" colab_type="text" # ## Assignment - now it's your turn # # Pick at least one Python DS library, and using documentation/examples reproduce in this notebook something cool. It's OK if you don't fully understand it or get it 100% working, but do put in effort and look things up. # + id="TGUS79cOhPWj" colab_type="code" colab={} # I,ll reproduce a quick example of implementing a regression on housing data. # Source: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/basic_regression.ipynb#scrollTo=f-OHX4DiXd8x # The original source explained what it was doing in text cells. I copied only # their code cells, so I'll add my own text cells to explain what's going on. # + [markdown] id="P7r4igooHU2b" colab_type="text" # This example uses Keras, " a high-level API to build and train deep learning models" in TensorFlow. 
# + id="RHtX8R36CvlJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="39ee07c3-9fde-48e6-9aa2-531f150890f1" from __future__ import absolute_import, division, print_function import tensorflow as tf from tensorflow import keras import numpy as np print(tf.__version__) # + [markdown] id="pvA3XlbmHqIs" colab_type="text" # The dataset also comes from TensorFlow, and consists of Boston housing data from the 70s, so we can predict housing prices based on various neighborhood features. Note how this dataset is already divided into treaining and testing. # + id="97Oqh_7fDJcw" colab_type="code" colab={} boston_housing = keras.datasets.boston_housing (train_data, train_labels), (test_data, test_labels) = boston_housing.load_data() # Shuffle the training set order = np.argsort(np.random.random(train_labels.shape)) train_data = train_data[order] train_labels = train_labels[order] # + id="u6PA8P4YEU5r" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 51} outputId="cb74f728-f8b3-41d3-8180-86462ec6b71f" print("Training set: {}".format(train_data.shape)) # 404 examples, 13 features print("Testing set: {}".format(test_data.shape)) # 102 examples, 13 features # + [markdown] id="VH_XOYRXIp3R" colab_type="text" # We move it into Pandas to check on the contents. # + id="XyP_xhY1EnWM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="3eb48c93-8000-4eac-bbd0-8d604894b1ba" import pandas as pd column_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT'] df = pd.DataFrame(train_data, columns=column_names) df.head() # + [markdown] id="ytptYkhYI0_5" colab_type="text" # All the features in this dataset vary over different ranges, so the next cell normalizes them by subtracting the mean and dividing by the standard deviation # + id="w2uyWAUMFEO9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 68} outputId="fdd93586-830a-4335-a8be-bf90fe63b07c" # Test data is *not* used when calculating the mean and std mean = train_data.mean(axis=0) std = train_data.std(axis=0) train_data = (train_data - mean) / std test_data = (test_data - mean) / std print(train_data[0]) # First training sample, normalized # + [markdown] id="PqOdMUSEJrUY" colab_type="text" # With the data normalized, we now build a deep learning model with " two densely connected hidden layers, and an output layer that returns a single, continuous value." All of this is internal to TensorFlow, but it's good to know how to interface with it. I have no idea what the internals are doing here. # + id="I2RuUy4FFJlr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="c27c1649-ef82-4267-ee34-06503c209f69" def build_model(): model = keras.Sequential([ keras.layers.Dense(64, activation=tf.nn.relu, input_shape=(train_data.shape[1],)), keras.layers.Dense(64, activation=tf.nn.relu), keras.layers.Dense(1) ]) optimizer = tf.train.RMSPropOptimizer(0.001) model.compile(loss='mse', optimizer=optimizer, metrics=['mae']) return model model = build_model() model.summary() # + [markdown] id="k75dlU9RKsPx" colab_type="text" # The model trains over 500 epochs, which I assume means "instances of tweaking the parameters in the layer". It uses the object "history" to record stats about the model at each epoch, thta can be graphed later. 
# + id="HaTpb_qRFafn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="9544e6cb-12bd-41d8-b3d2-a58defa5e373" # Display training progress by printing a single dot for each completed epoch class PrintDot(keras.callbacks.Callback): def on_epoch_end(self, epoch, logs): if epoch % 100 == 0: print('') print('.', end='') EPOCHS = 500 # Store training stats history = model.fit(train_data, train_labels, epochs=EPOCHS, validation_split=0.2, verbose=0, callbacks=[PrintDot()]) # + [markdown] id="3aghOBCDK-5r" colab_type="text" # First, the model is trained over 500 epochs and we graph its mean absolute error as it goes, so that we can see how the model stops improving after a certain number of epochs. I can't figure out the exact meaning of 'val_mean_absolute_error', the green line. # + id="-1_k-DPCFqHf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 365} outputId="3db47689-62e2-4f2a-89e8-fb02f4571948" import matplotlib.pyplot as plt def plot_history(history): plt.figure() plt.xlabel('Epoch') plt.ylabel('Mean Abs Error [1000$]') plt.plot(history.epoch, np.array(history.history['mean_absolute_error']), label='Train Loss') plt.plot(history.epoch, np.array(history.history['val_mean_absolute_error']), label = 'Val loss') plt.legend() plt.ylim([0, 5]) plot_history(history) # + [markdown] id="XZxdizVDLxuk" colab_type="text" # We create a parameter that tracks how the model is getting better, and stops when it plateaus. # + id="fvP44_HdGEnN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 436} outputId="f3523896-da76-422f-c162-77897e61453f" model = build_model() # The patience parameter is the amount of epochs to check for improvement early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=20) history = model.fit(train_data, train_labels, epochs=EPOCHS, validation_split=0.2, verbose=0, callbacks=[early_stop, PrintDot()]) plot_history(history) # + [markdown] id="jarGzCUYMFMr" colab_type="text" # Testing the model on the test set: # + id="T0-DMhFTGcpR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0709ed4c-ef9b-45ba-add3-6dca2f031c6d" [loss, mae] = model.evaluate(test_data, test_labels, verbose=0) print("Testing set Mean Abs Error: ${:7.2f}".format(mae * 1000)) # + [markdown] id="y2vYNlU9Mapq" colab_type="text" # And finally, making predictions based on the test set and plotting those predictions against the actual values. # + id="detWiGgOGeYe" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 361} outputId="b32f35c0-f50a-4332-8233-2e8e7c24342d" test_predictions = model.predict(test_data).flatten() plt.scatter(test_labels, test_predictions) plt.xlabel('True Values [1000$]') plt.ylabel('Predictions [1000$]') plt.axis('equal') plt.xlim(plt.xlim()) plt.ylim(plt.ylim()) _ = plt.plot([-100, 100], [-100, 100]) # + id="ycGI2rnMGhjn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 361} outputId="233a34cc-ab4e-49d6-fb5d-409da75e8eed" error = test_predictions - test_labels plt.hist(error, bins = 50) plt.xlabel("Prediction Error [1000$]") _ = plt.ylabel("Count") # + [markdown] id="2AiHYOsMMh0d" colab_type="text" # And that's it! # + [markdown] id="BT9gdS7viJZa" colab_type="text" # ### Assignment questions # # After you've worked on some code, answer the following questions in this text block: # # 1. Describe in a paragraph of text what you did and why, as if you were writing an email to somebody interested but nontechnical. 
# # I took a small amount of housing data from the 1970s in Boston, MA, and used it to train a black box called a "deep learning model". I don't know how the model works (hence 'black box'), but I cleaned up some data and fed it to the model. The model looked at the data 500 times and tried to better understand it. I asked the model to graph its progress as it went, and then asked it to stop trying to understand the data once it didn't seem to be understanding it any better (which happened after about 200 tries). Once the model was trained on all the training data, I asked it to make predictions for the test data and plotted the results to see how effective it had been on average. # # 2. What was the most challenging part of what you did? # # Finding an example online that was comprehensible without being trivial. This one is a relatively boring problem, but it's my first time trying to interface with TensorFlow. I expect that I'll want to use that tool a lot later on. # # 3. What was the most interesting thing you learned? # # That it can be quite this simple to play with deep learning, at least if you know what you're doing. Also, it's good to see that buzzword actually applied to something real, and with the details worked out of how to build a simple model. # # 4. What area would you like to explore with more time? # # What on Earth the deep learning model is doing on the inside. Some diagrams would be useful. # # + id="TTB3rLAlGg0P" colab_type="code" colab={} # + [markdown] id="_XXg2crAipwP" colab_type="text" # ## Stretch goals and resources # # Following are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub (and since this is the first assignment of the sprint, open a PR as well). # # - [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/) # - [scikit-learn documentation](http://scikit-learn.org/stable/documentation.html) # - [matplotlib documentation](https://matplotlib.org/contents.html) # - [Awesome Data Science](https://github.com/bulutyazilim/awesome-datascience) - a list of many types of DS resources # # Stretch goals: # # - Find and read blogs, walkthroughs, and other examples of people working through cool things with data science - and share with your classmates! # - Write a blog post (Medium is a popular place to publish) introducing yourself as somebody learning data science, and talking about what you've learned already and what you're excited to learn more about # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:seg_competition] # language: python # name: conda-env-seg_competition-py # --- # # 06 - Facies Classifier # # # # This is an extension / amalgamation of prior entries. The workflow remains not dissimilar to those completed previously, this is: # - Load and set strings to integers # - Cursory data examination, this workbook does not attempt to detail the full data analysis # - Group data by well and brute force feature creation # - Feature creation focuses on bringing results from adjacent samples into features # - Look at some ratios between features # - Leaving out two wells at a time, use TPOT to generate a pipeline for prediction. # - Modal vote on fitted model predicting on the test data set. 
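# The modal vote mentioned in the last step above simply takes, for each test sample, the
# class predicted most often across the fitted pipelines. A minimal standalone illustration
# with toy predictions (not the competition pipeline itself):

# +
import numpy as np
from scipy.stats import mode

toy_predictions = np.array([
    [1, 2, 2, 3],   # predictions from model A
    [1, 2, 3, 3],   # predictions from model B
    [2, 2, 3, 3],   # predictions from model C
])
# most common class per column, i.e. per test sample
voted = np.ravel(mode(toy_predictions, axis=0)[0])
print(voted)  # [1 2 3 3]
# -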
# + import pandas as pd import bokeh.plotting as bk import numpy as np from sklearn import preprocessing from sklearn.model_selection import train_test_split from tpot import TPOTClassifier, TPOTRegressor import sys sys.path.append('~/home/slygeorge/Documents/Python/SEG ML Competition') from IPython.core.display import display, HTML display(HTML("")) bk.output_notebook() # - from scipy.stats import mode from sklearn.ensemble import VotingClassifier, RandomForestClassifier, GradientBoostingClassifier, ExtraTreesClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import LogisticRegression from sklearn.svm import LinearSVC from sklearn.feature_selection import SelectFwe, SelectKBest, f_classif, SelectPercentile from sklearn.model_selection import train_test_split from sklearn.naive_bayes import BernoulliNB, GaussianNB from sklearn.pipeline import make_pipeline, make_union from sklearn.preprocessing import FunctionTransformer, MinMaxScaler, Binarizer, Normalizer, StandardScaler from xgboost import XGBClassifier models = [ make_pipeline( MinMaxScaler(), XGBClassifier(learning_rate=0.02, max_depth=5, min_child_weight=20, n_estimators=500, subsample=0.19) ), make_pipeline( make_union(VotingClassifier([("est", LogisticRegression(C=0.13, dual=False, penalty="l1"))]), FunctionTransformer(lambda X: X)), RandomForestClassifier(n_estimators=500) ), make_pipeline( Binarizer(threshold=0.72), RandomForestClassifier(n_estimators=500) ), make_pipeline( make_union(VotingClassifier([("est", GradientBoostingClassifier(learning_rate=1.0, max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), BernoulliNB(alpha=28.0, binarize=0.85, fit_prior=True) ), make_pipeline( Normalizer(norm="l1"), make_union(VotingClassifier([("est", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)), SelectKBest(k=47, score_func=f_classif), SelectFwe(alpha=0.05, score_func=f_classif), RandomForestClassifier(n_estimators=500) ), make_pipeline( make_union(VotingClassifier([("est", LinearSVC(C=0.26, dual=False, penalty="l2"))]), FunctionTransformer(lambda X: X)), RandomForestClassifier(n_estimators=500) ), make_pipeline( Normalizer(norm="l2"), make_union(VotingClassifier([("est", ExtraTreesClassifier(criterion="entropy", max_features=0.3, n_estimators=500))]), FunctionTransformer(lambda X: X)), GaussianNB() ), make_pipeline( make_union(VotingClassifier([("est", BernoulliNB(alpha=49.0, binarize=0.06, fit_prior=True))]), FunctionTransformer(lambda X: X)), StandardScaler(), make_union(VotingClassifier([("est", GradientBoostingClassifier(learning_rate=0.87, max_features=0.87, n_estimators=500))]), FunctionTransformer(lambda X: X)), ExtraTreesClassifier(criterion="entropy", max_features=0.001, n_estimators=500) ), make_pipeline( make_union(VotingClassifier([("est", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)), BernoulliNB(alpha=1e-06, binarize=0.09, fit_prior=True) ), make_pipeline( Normalizer(norm="max"), MinMaxScaler(), RandomForestClassifier(n_estimators=500) ), make_pipeline( SelectPercentile(percentile=18, score_func=f_classif), RandomForestClassifier(n_estimators=500) ), make_pipeline( SelectKBest(k=50, score_func=f_classif), RandomForestClassifier(n_estimators=500) ), make_pipeline( XGBClassifier(learning_rate=0.51, max_depth=10, min_child_weight=20, n_estimators=500, subsample=1.0) ), make_pipeline( make_union(VotingClassifier([("est", KNeighborsClassifier(n_neighbors=5, weights="uniform"))]), FunctionTransformer(lambda X: 
X)), RandomForestClassifier(n_estimators=500) ), make_pipeline( StandardScaler(), SelectPercentile(percentile=19, score_func=f_classif), LinearSVC(C=0.02, dual=False, penalty="l1") ), make_pipeline( XGBClassifier(learning_rate=0.01, max_depth=10, min_child_weight=20, n_estimators=500, subsample=0.36) )] # + train_path = '../training_data.csv' test_path = '../validation_data_nofacies.csv' facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS'] def feature_extraction(file_path, retain_class = True): # Read training data to dataframe test = pd.read_csv(file_path) if 'Facies' in test.columns: test.rename(columns={'Facies': 'class'}, inplace=True) # Set string features to integers for i, value in enumerate(test['Formation'].unique()): test.loc[test['Formation'] == value, 'Formation'] = i for i, value in enumerate(test['Well Name'].unique()): test.loc[test['Well Name'] == value, 'Well Name'] = i # The first thing that will be done is to upsample and interpolate the training data, # the objective here is to provide significantly more samples to train the regressor on and # also to capture more of the sample interdependancy. upsampled_arrays = [] test['orig_index'] = test.index # Use rolling windows through upsampled frame, grouping by well name. # Empty list to hold frames mean_frames = [] above = [] below = [] for well, group in test.groupby('Well Name'): # Empty list to hold rolling frames constructor_list = [] for f in resample_factors: working_frame = group[['Depth', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']] mean_frame = working_frame.rolling(window = f, center = True).mean().interpolate(method = 'index', limit_direction = 'both', limit = None) mean_frame.columns = ['Mean_{0}_{1}'.format(f, column) for column in mean_frame.columns] max_frame = working_frame.rolling(window = f, center = True).max().interpolate(method = 'index', limit_direction = 'both', limit = None) max_frame.columns = ['Max_{0}_{1}'.format(f, column) for column in max_frame.columns] min_frame = working_frame.rolling(window = f, center = True).min().interpolate(method = 'index', limit_direction = 'both', limit = None) min_frame.columns = ['Min_{0}_{1}'.format(f, column) for column in min_frame.columns] std_frame = working_frame.rolling(window = f, center = True).std().interpolate(method = 'index', limit_direction = 'both', limit = None) std_frame.columns = ['Std_{0}_{1}'.format(f, column) for column in std_frame.columns] var_frame = working_frame.rolling(window = f, center = True).var().interpolate(method = 'index', limit_direction = 'both', limit = None) var_frame.columns = ['Var_{0}_{1}'.format(f, column) for column in var_frame.columns] diff_frame = working_frame.diff(f, axis = 0).interpolate(method = 'index', limit_direction = 'both', limit = None) diff_frame.columns = ['Diff_{0}_{1}'.format(f, column) for column in diff_frame.columns] rdiff_frame = working_frame.sort_index(ascending = False).diff(f, axis = 0).interpolate(method = 'index', limit_direction = 'both', limit = None).sort_index() rdiff_frame.columns = ['Rdiff_{0}_{1}'.format(f, column) for column in rdiff_frame.columns] skew_frame = working_frame.rolling(window = f, center = True).skew().interpolate(method = 'index', limit_direction = 'both', limit = None) skew_frame.columns = ['Skew_{0}_{1}'.format(f, column) for column in skew_frame.columns] f_frame = pd.concat((mean_frame, max_frame, min_frame, std_frame, var_frame, diff_frame, rdiff_frame), axis = 1) constructor_list.append(f_frame) well_frame = 
pd.concat(constructor_list, axis = 1) well_frame['Well Name'] = well # orig index is holding the original index locations, to make extracting the results trivial well_frame['orig_index'] = group['orig_index'] df = group.sort_values('Depth') u = df.shift(-1).fillna(method = 'ffill') b = df.shift(1).fillna(method = 'bfill') above.append(u[div_columns]) below.append(b[div_columns]) mean_frames.append(well_frame.fillna(method = 'bfill').fillna(method = 'ffill')) frame = test frame.index = frame['orig_index'] frame.drop(['orig_index', 'Well Name'], axis = 1, inplace = True) for f in mean_frames: f.index = f['orig_index'] rolling_frame = pd.concat(mean_frames, axis = 0) above_frame = pd.concat(above) above_frame.columns = ['above_'+ column for column in above_frame.columns] below_frame = pd.concat(below) below_frame.columns = ['below_'+ column for column in below_frame.columns] upsampled_frame = pd.concat((frame, rolling_frame, above_frame, below_frame), axis = 1) features = [feature for feature in upsampled_frame.columns if 'class' not in feature] std_scaler = preprocessing.StandardScaler().fit(upsampled_frame[features]) train_std = std_scaler.transform(upsampled_frame[features]) train_std_frame = upsampled_frame for i, column in enumerate(features): train_std_frame.loc[:, column] = train_std[:, i] upsampled_frame_std = train_std_frame for feature in div_columns: for f in div_columns: if f == feature: continue upsampled_frame['{0}_{1}'.format(feature, f)] = upsampled_frame[f] / upsampled_frame[feature] return upsampled_frame_std, features train_data_set, features = feature_extraction(train_path) test_data_set, test_features = feature_extraction(test_path) # - train_data_set.head() # + lpgo = LeavePGroupsOut(2) split_list = [] fitted_models = [] for train, val in lpgo.split(train_data_set[features], train_data_set['class'], groups = train_data_set['Well Name']): hist_tr = np.histogram(train_data_set.loc[train, 'class'], bins = np.arange(len(facies_labels) + 1) + 0.5) hist_val = np.histogram(train_data_set.loc[val, 'class'], bins = np.arange(len(facies_labels) + 1) + 0.5) if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0): split_list.append({'train': train, 'val': val}) for s, split in enumerate(split_list): print('Split %d' % s) print(' training: %s' % (train_data_set['Well Name'].loc[split['train']].unique())) print(' validation: %s' % (train_data_set['Well Name'].loc[split['val']].unique())) # + fitted_models = [] r = [] for i, split in enumerate(split_list): # Select training and validation data from current split X_tr = train_data_set.loc[split['train'], features] X_v = train_data_set.loc[split['val'], features] y_tr = train_data_set.loc[split['train'], 'class'] y_v = train_data_set.loc[split['val'], 'class'] # Fit model from split fitted_models.append(models[i].fit(X_tr, y_tr)) # Predict for model r.append(fitted_models[-1].predict(test_data_set[test_features])) results = mode(np.vstack(r))[0][0] test_data_set['Facies'] = results # - test_data_set.iloc[:, ::-1].head() test_data_set.iloc[:, ::-1].to_csv('06 - Combined Models.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction to PyCaret - An open source low-code ML library # # ## This notebook consists 2 parts # - Classification part using Titanic DataSet # - Regression part using House Price Regression DataSet # 
![](https://pycaret.org/wp-content/uploads/2020/03/Divi93_43.png) # # You can reach pycaret website and documentation from https://pycaret.org # # PyCaret is an open source, low-code machine learning library in Python that allows you to go from preparing your data to deploying your model within seconds in your choice of notebook environment. # # PyCaret being a low-code library makes you more productive. With less time spent coding, you and your team can now focus on business problems. # # PyCaret is simple and easy to use machine learning library that will help you to perform end-to-end ML experiments with less lines of code. # # PyCaret is a business ready solution. It allows you to do prototyping quickly and efficiently from your choice of notebook environment. # # # let's install pycaret ! # + _kg_hide-output=true # !pip install pycaret # - # # Part 1 Classification # # ![](https://www.sciencealert.com/images/articles/processed/titanic-1_1024.jpg) # # We start by loading the libraries # + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" import numpy as np import pandas as pd # - # # Read our files # + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" train = pd.read_csv('../input/titanic/train.csv') test = pd.read_csv('../input/titanic/test.csv') sub = pd.read_csv('../input/titanic/gender_submission.csv') # - # # Import whole classification from pycaret.classification import * # # let's see what we're dealing with train.head() train.info() # # Set up our dataset (preprocessing) # + clf1 = setup(data = train, target = 'Survived', numeric_imputation = 'mean', categorical_features = ['Sex','Embarked'], ignore_features = ['Name','Ticket','Cabin'], silent = True) #quite intuitive isn't it ? # - # # Compare the models compare_models() # # let's create a Light GBM Model lgbm = create_model('lightgbm') # # Let's tune it! tuned_lightgbm = tune_model('lightgbm') # # Learning Curve plot_model(estimator = tuned_lightgbm, plot = 'learning') # # AUC Curve plot_model(estimator = tuned_lightgbm, plot = 'auc') # # Confusion Matrix plot_model(estimator = tuned_lightgbm, plot = 'confusion_matrix') # # Feature Importance plot_model(estimator = tuned_lightgbm, plot = 'feature') # # whole thing! evaluate_model(tuned_lightgbm) # # Interpretation interpret_model(tuned_lightgbm) # # Predictions predict_model(tuned_lightgbm, data=test) predictions = predict_model(tuned_lightgbm, data=test) predictions.head() sub['Survived'] = round(predictions['Score']).astype(int) sub.to_csv('submission.csv',index=False) sub.head() # # Extra: Blending made easy! 
# + logr = create_model('lr'); xgb = create_model('xgboost'); #blending 3 models blend = blend_models(estimator_list=[tuned_lightgbm,logr,xgb]) # - # # Part2 - Regression # ![](https://encrypted-tbn0.gstatic.com/images?q=tbn%3AANd9GcSYeyNpaoAW-3rFX9-ORmiJ-uLAAswYBRhszs2QzllV7MCfFPvk&usqp=CAU) # # Import Whole Regression from pycaret.regression import * # # let's see the data train = pd.read_csv('../input/house-prices-advanced-regression-techniques/train.csv') test = pd.read_csv('../input/house-prices-advanced-regression-techniques/test.csv') sample= pd.read_csv('../input/house-prices-advanced-regression-techniques/sample_submission.csv') train.head() train.info() # # Set up our dataset (preprocessing) reg = setup(data = train, target = 'SalePrice', numeric_imputation = 'mean', categorical_features = ['MSZoning','Exterior1st','Exterior2nd','KitchenQual','Functional','SaleType', 'Street','LotShape','LandContour','LotConfig','LandSlope','Neighborhood', 'Condition1','Condition2','BldgType','HouseStyle','RoofStyle','RoofMatl', 'MasVnrType','ExterQual','ExterCond','Foundation','BsmtQual','BsmtCond', 'BsmtExposure','BsmtFinType1','BsmtFinType2','Heating','HeatingQC','CentralAir', 'Electrical','GarageType','GarageFinish','GarageQual','GarageCond','PavedDrive', 'SaleCondition'] , ignore_features = ['Alley','PoolQC','MiscFeature','Fence','FireplaceQu','Utilities'], normalize = True, silent = True) # # let's compare different regression models! compare_models() # # let's do CatBoost cb = create_model('catboost') # # gotta tune it tuned_cb = tune_model('catboost') # # SHAP Values (impact on model output) interpret_model(tuned_cb) predictions = predict_model(tuned_cb, data = test) sample['SalePrice'] = predictions['Label'] sample.to_csv('submission_house_price.csv',index=False) sample.head() # # thank you very much for checking my notebook! 
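# As a follow-up, a tuned PyCaret model can be refit on the full training data and saved to
# disk for later reuse. The sketch below assumes `tuned_cb` from the regression section above
# and `from pycaret.regression import *` already executed; the file name is arbitrary.

# +
final_cb = finalize_model(tuned_cb)            # refit the tuned pipeline on all training data
save_model(final_cb, 'house_price_catboost')   # writes house_price_catboost.pkl
loaded_cb = load_model('house_price_catboost') # reload later without re-training
# -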
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + #import os #os.environ["MODIN_ENGINE"] = "dask" import numpy as np np.random.seed(1337) import pandas as pd from matplotlib.pyplot import * from datetime import datetime import networkx as nx matplotlib.pyplot.style.use('classic') # !python --version # + sp500 = pd.read_csv('^GSPC.csv', header = 0, index_col = 'Date') sp500.index = pd.to_datetime(sp500.index, format = '%d-%m-%y') sp500 = sp500[1:] #sp500 = sp500.resample('W').mean() #sp500.head() print(len(sp500)) #import nifty50 data nifty = pd.read_csv('^NSEI.csv', header = 0, index_col = 'Date') nifty.index = pd.to_datetime(nifty.index, format = '%d-%m-%y') nifty = nifty.reindex(index = sp500.index, method = 'bfill') nifty.fillna(method = 'bfill', inplace=True) #nifty = nifty.resample('W').mean() #nifty.head() print(len(nifty)) sing_sti = pd.read_csv('^sti_d.csv', header = 0, index_col = 'Date') sing_sti.index = pd.to_datetime(sing_sti.index, format = '%Y-%m-%d') sing_sti = sing_sti.reindex(index = sp500.index, method = 'bfill') sing_sti.fillna(method = 'bfill', inplace=True) print(len(sing_sti)) uk_100 = pd.read_csv('^ukx_d.csv', header = 0, index_col = 'Date') uk_100.index = pd.to_datetime(uk_100.index, format = '%Y-%m-%d') uk_100 = uk_100.reindex(index = sp500.index, method = 'bfill') uk_100.fillna(method = 'bfill', inplace=True) print(len(uk_100)) hangseng = pd.read_csv('^hsi_d.csv', header = 0, index_col = 'Date') hangseng.index = pd.to_datetime(hangseng.index, format = '%Y-%m-%d') hangseng = hangseng.reindex(index = sp500.index, method = 'bfill') hangseng.fillna(method = 'bfill', inplace=True) print(len(hangseng)) nikkei = pd.read_csv('^nkx_d.csv', header = 0, index_col = 'Date') nikkei.index = pd.to_datetime(nikkei.index, format = '%Y-%m-%d') nikkei = nikkei.reindex(index = sp500.index, method = 'bfill') nikkei.fillna(method = 'bfill', inplace=True) print(len(nikkei)) shanghai_comp = pd.read_csv('^shc_d.csv', header = 0, index_col = 'Date') shanghai_comp.index = pd.to_datetime(shanghai_comp.index, format = '%Y-%m-%d') shanghai_comp = shanghai_comp.reindex(index = sp500.index, method = 'bfill') shanghai_comp.fillna(method = 'bfill', inplace=True) print(len(shanghai_comp)) inr = pd.read_csv('DEXINUS.csv', header = 0, index_col = 'DATE') inr.index = pd.to_datetime(inr.index, format = '%Y-%m-%d') inr = inr.reindex(index = sp500.index, method = 'bfill') inr.fillna(method = 'bfill', inplace=True) print(len(inr)) cny = pd.read_csv('DEXCHUS.csv', header = 0, index_col = 'DATE') cny.index = pd.to_datetime(cny.index, format = '%Y-%m-%d') cny = cny.reindex(index = sp500.index, method = 'bfill') cny.fillna(method = 'bfill', inplace=True) print(len(cny)) jpy = pd.read_csv('DEXJPUS.csv', header = 0, index_col = 'DATE') jpy.index = pd.to_datetime(jpy.index, format = '%Y-%m-%d') jpy = jpy.reindex(index = sp500.index, method = 'bfill') jpy.fillna(method = 'bfill', inplace=True) print(len(jpy)) sgd = pd.read_csv('DEXSIUS.csv', header = 0, index_col = 'DATE') sgd.index = pd.to_datetime(sgd.index, format = '%Y-%m-%d') sgd = sgd.reindex(index = sp500.index, method = 'bfill') sgd.fillna(method = 'bfill', inplace=True) print(len(sgd)) hkd = pd.read_csv('DEXHKUS.csv', header = 0, index_col = 'DATE') hkd.index = pd.to_datetime(hkd.index, format = '%Y-%m-%d') hkd = hkd.reindex(index = sp500.index, method = 'bfill') hkd.fillna(method = 
'bfill', inplace=True) print(len(hkd)) gbp = pd.read_csv('DEXUSUK.csv', header = 0, index_col = 'DATE') gbp.index = pd.to_datetime(gbp.index, format = '%Y-%m-%d') gbp = gbp.reindex(index = sp500.index, method = 'bfill') gbp.fillna(method = 'bfill', inplace=True) print(len(gbp)) # - inr.iloc[:, 0] = pd.to_numeric(inr.iloc[:, 0].replace({'.':'0'})) cny.iloc[:, 0] = pd.to_numeric(cny.iloc[:, 0].replace({'.':'0'})) jpy.iloc[:, 0] = pd.to_numeric(jpy.iloc[:, 0].replace({'.':'0'})) sgd.iloc[:, 0] = pd.to_numeric(sgd.iloc[:, 0].replace({'.':'0'})) hkd.iloc[:, 0] = pd.to_numeric(hkd.iloc[:, 0].replace({'.':'0'})) gbp.iloc[:, 0] = pd.to_numeric(gbp.iloc[:, 0].replace({'.':'0'})) gbp = 1/gbp # + df = pd.DataFrame(index = sp500.index) df['nifty'] = nifty['Close'] df['sing_sti'] = sing_sti['Close'] df['hangseng'] = hangseng['Close'] df['nikkei'] = nikkei['Close'] df['shanghai_comp'] = shanghai_comp['Close'] df['sp500'] = sp500['Close'] df['uk_100'] = uk_100['Close'] df = df.transpose() df_1 = pd.DataFrame(index = sp500.index) df_1['inr'] = inr df_1['sgd'] = sgd df_1['hkd'] = hkd df_1['jpy'] = jpy df_1['cny'] = cny df_1['gbp'] = gbp df_1['usd'] = 1 df_1 = df_1.transpose() # - df_1['base'] = 'usd' df_1 = df_1.reset_index() df_exp = df_1.set_index(['index', 'base']) df_exp = df_exp.reset_index() df_exp.set_index(['index', 'base'], inplace = True) df_exp for currency_fix, base_fix in df_exp.index: for curr, base in df_exp.index[0:7]: df_exp.loc[(curr, currency_fix), :] = df_exp.loc[(curr, base), :]/df_exp.loc[(currency_fix, base_fix), :] print(curr,base) df_exp df['base'] = ['inr', 'sgd', 'hkd', 'jpy', 'cny', 'usd', 'gbp'] df = df.reset_index() df_index = df.set_index(['index', 'base']) df_index # + for index, base in df_index.index[0:7]: for curr, base_curr in df_exp.loc[(slice(None), base), :].index: df_index.loc[(index, curr), :] = df_index.loc[(index, base), :]*df_exp.loc[(curr, base_curr), :] df_index # - # # Creating edgelist for network (first attempt) # ## (Shelved) df_edges = pd.DataFrame(columns = ['Source', 'Target']) df_edges.set_index(['Source', 'Target'], inplace = True) df_edges ''' WARNING: Loop runs for 10-15 mins; output file is stored as csv. 
Request admin (Working on incorporating modin to the code:for parallel processing) ''' for i in range(0, len(df_index.columns), 2): corr_df = df_index.iloc[:, i:i+20].T.corr() k = -1 for index, row in corr_df.iterrows(): k = k+1 if (k//48 == 1): print(k//48) for j in range(k, len(corr_df.index)): df_edges.loc[((str(index[0]) + '/' + str(index[1])), (str(corr_df.index[j][0]) + '/' + str(corr_df.index[j][1]))), df_index.columns[i]] = row[j] df_edges.to_csv('daily_corr_networkx.csv') df_edges = df_edges.reset_index() df_edges.iloc[:, 5] G = nx.from_pandas_edgelist(df_edges, "Source", "Target", edge_attr = df_edges.columns[6]) type(G) # ## -------------------------------------------------------------------------------------------------- # # Creating weighted grpahs[Stored in dictionary: indexed using timestamps] (Second Attempt) df_corr = df_index.iloc[:, 0:20].T.corr() df_corr G = nx.from_pandas_adjacency(df_corr) # + G_cache = {} for i in range(0, len(df_index.columns) - 2, 2): corr_df = df_index.iloc[:, i:i+20].T.corr() G_cache[df_index.columns[i]] = nx.from_pandas_adjacency(corr_df) # - G = G_cache[list(G_cache)[10]] list(G_cache)[10] G[('nifty', 'inr')][('sp500', 'usd')] print(nx.info(G)) nx.draw(G) # ## --------------------------------------------------------------------------------------------------- # # Community Detection (Asyncronous Fluid communities) c = nx.algorithms.community.asyn_fluidc(G, k = 10, seed = 1337) tuple(sorted(comm) for comm in next(c)) # + import itertools l = [] for communities in itertools.islice(c, 10): l.append(tuple(sorted(comm) for comm in communities)) # - l[4] comm_cache = {} for timestamp in list(G_cache): G = G_cache[timestamp] comm_iter = nx.algorithms.community.asyn_fluidc(G, k = 10, seed = 1337) comm = [] for communities in itertools.islice(comm_iter, 10): comm.append(tuple(sorted(c) for c in communities)) comm_cache[timestamp] = comm comm_cache[list(comm_cache)[7]] # ## ----------------------------------------------------------------------------------------------------- # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # "Hello,World" 第一个Flask应用 # 创建microblog目录作为整个项目目录: # # `mkdir microblog` # # 初始项目结构图: # ``` # microblog/ # venv/ # app/ # __init__.py # routes.py # microblog.py # ``` # # 创建一个名为app的包来存放整个应用 # # `mkdir app` # # 在app下创建文件__init__.py,输入如下的代码: # # ```python # from flask import Flask # # app = Flask(__name__) # # from app import routes # ``` # # 从flask中导入的类Flask,并以此类创建了一个应用程序对象app。 # # 传递给Flask类的`__name__`变量是一个Python预定义的变量,它表示当前调用它的模块的名字。 # # 当需要加载相关的资源,如模板文件,Flask就使用这个位置作为起点来计算绝对路径。 # # # 其一,这里有两个实体名为app。 app包由app目录和`__init__.py`脚本来定义构成,并在from app import routes语句中被引用。 # # app变量被定义为`__init__.py`脚本中的Flask类的一个实例,以至于它成为app包的属性。 # # 其二,routes模块是在底部导入的,而不是在脚本的顶部。 # # 最下面的导入是解决循环导入的问题,这是Flask应用程序的常见问题。 # # 你将会看到routes模块需要导入在这个脚本中定义的app变量,因此将routes的导入放在底部可以避免由于这两个文件之间的相互引用而导致的错误。 # # # app/routes.py中的第一个视图函数的代码: # # ```python # from app import app # # @app.route('/') # @app.route('/index') # def index(): # return "Hello, World!" 
# ``` # # @app.route修饰器在作为参数给出的URL和函数之间创建一个关联。 # # 要完成应用程序,你需要在定义Flask应用程序实例的顶层(译者注:也就是microblog目录下)创建一个命名为microblog.py的Python脚本。 # 它仅拥有一个导入应用程序实例的行: # `from app import app` # # Flask应用程序实例被称为app,是app包的成员。from app import app语句从app包导入其成员app变量。 如果你觉得这很混乱,你可以重命名包或者变量。 # # 在运行之前,需要通过设置FLASK_APP环境变量告诉Flask如何导入它: # * linux: # `(venv) $ export FLASK_APP=microblog.py` # * windows: # `(venv) $ set FLASK_APP=microblog.py` # # # 运行应用 # ``` # flask run # * Serving Flask app "microblog" # * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit) # ``` # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # ## Testing for Kaggle # + import pandas as pd import numpy as np import seaborn as sns # %matplotlib inline # - pwd # cd ../Vim/ data = pd.read_csv('train.csv') data.head() data.tail() data.shape() clean_data = data.fillna(0) clean_data.head() sns.pairplot(data, x_vars = ['v1'], y_vars = 'target', size = 7) data.head() data v1= data['v1'] v1 type(data) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np np.random.seed(seed=1) N = 200 K = 3 T = np.zeros((N,3), dtype=np.uint8) X = np.zeros((N,2)) X_range0 = [-3, 3] X_range1 = [-3, 3] Mu = np.array([[-.5, -.5], [.5, 1.0], [1, -.5]]) Sig = np.array([[.7, .7], [.8, .3],[.3, .8]]) Pi = np.array([0.4, 0.8, 1]) for n in range(N): wk = np.random.rand() for k in range(K): if wk < Pi[k]: T[n, k] = 1 break for k in range(2): X[n, k] = np.random.randn() * Sig[T[n, :] == 1, k] + \ Mu[T[n, :] == 1, k] # + TestRatio = 0.5 X_n_training = int(N * TestRatio) X_train = X[:X_n_training, :] X_test = X[X_n_training:, :] T_train = T[:X_n_training, :] T_test = T[X_n_training:, :] np.savez('class_data.npz', X_train = X_train, T_train=T_train, X_test = X_test, T_test = T_test, X_range0 = X_range0, X_range1 = X_range1) # + import matplotlib.pyplot as plt # %matplotlib inline def Show_data(x, t): wk, n = t.shape c = [[0, 0, 0], [.5, .5, .5], [1, 1, 1]] for i in range(n): plt.plot(x[t[:, i] == 1, 0], x[t[:, i] == 1, 1], linestyle = 'none', marker='o', markeredgecolor='black', color=c[i], alpha=0.8) plt.grid(True) plt.figure(1, figsize=(8, 3.7)) plt.subplot(1, 2, 1) Show_data(X_train, T_train) plt.xlim(X_range0) plt.ylim(X_range1) plt.title('Training Data') plt.subplot(1, 2, 2) Show_data(X_test, T_test) plt.xlim(X_range0) plt.ylim(X_range1) plt.title('Test Data') plt.show() # + def Sigmoid(x): y = 1 / (1+ np.exp(-x)) return y def FNN(wv, M, K, x): N, D = x.shape w = wv[:M*(D+1)] w = w.reshape(M, (D+1)) v = wv[M*(D+1):] v = v.reshape((K, M+1)) b = np.zeros((N, M+1)) z = np.zeros((N, M+1)) a = np.zeros((N,K)) y = np.zeros((N,K)) for n in range(N): for m in range(M): b[n, m] = np.dot(w[m, :], np.r_[x[n, :], 1]) z[n, m] = Sigmoid(b[n, m]) z[n, M] = 1 wkz = 0 for k in range(K): a[n, k] = np.dot(v[k, :], z[n, :]) wkz = wkz + np.exp(a[n, k]) for k in range(K): y[n, k] = np.exp(a[n, k]) / wkz return y, a, z, b WV = np.ones(15) M = 2 K = 3 FNN(WV, M, K, X_train[:2, :]) # + def CE_FNN(wv, M, K, x, t): N, D = x.shape y, a, z, b = FNN(wv, M, K, x) ce = -np.dot(np.log(y.reshape(-1)), t.reshape(-1)) / N return ce WV = np.ones(15) M = 2 K = 3 CE_FNN(WV, M, K, X_train[:2, :], T_train[:2, :]) # + def dCE_FNN_num(wv, M, K, x, t): epsilon = 0.001 dwv 
= np.zeros_like(wv) for iwv in range(len(wv)): wv_modified = wv.copy() wv_modified[iwv] = wv[iwv] - epsilon mse1 = CE_FNN(wv_modified, M, K, x, t) wv_modified[iwv] = wv[iwv] + epsilon mse2 = CE_FNN(wv_modified, M, K, x, t) dwv[iwv] = (mse2 -mse1) / (2 * epsilon) return dwv def Show_WV(wv, M): N = wv.shape[0] plt.bar(range(1, M*3 +1), wv[:M *3], align="center", color = 'black') plt.bar(range(M*3+1, N+1), wv[M*3:], align="center", color = 'cornflowerblue') plt.xticks(range(1, N+1)) plt.xlim(0, N+1) M = 2 K = 3 nWV = M*3 + K*(M+1) np.random.seed(1) WV = np.random.normal(0, 1, nWV) dWV = dCE_FNN_num(WV, M, K, X_train[:2, :], T_train[:2, :]) print(dWV) plt.figure(1, figsize=(5, 3)) Show_WV(dWV, M) plt.show() # + import time def Fit_FNN_num(wv_init, M, K, x_train, t_train, x_test, t_test, n, alpha): wvt = wv_init err_train = np.zeros(n) err_test = np.zeros(n) wv_hist = np.zeros((n, len(wv_init))) epsilon = 0.001 for i in range(n): wvt = wvt -alpha*dCE_FNN_num(wvt, M, K, x_train, t_train) err_train[i] = CE_FNN(wvt, M, K, x_train, t_train) err_test[i] = CE_FNN(wvt, M, K, x_test, t_test) wv_hist[i, :] =wvt return wvt, wv_hist, err_train, err_test startTime = time.time() M = 2 K = 3 np.random.seed(1) WV_init = np.random.normal(0, 0.01, M*3 + K*(M+1)) N_step = 1000 alpha = 0.5 WV, WV_hist, Err_train, Err_test = Fit_FNN_num( WV_init, M, K, X_train, T_train, X_test, T_test, N_step, alpha) calculation_time = time.time() -startTime print("Calculation time:{0:.3f} sec".format(calculation_time)) # - plt.figure(1, figsize =(3, 3)) plt.plot(Err_train, 'black', label = 'training') plt.plot(Err_test, 'cornflowerblue', label = 'test') plt.legend() plt.show() plt.figure(1, figsize = (3,3)) plt.plot(WV_hist[:, :M*3], 'black') plt.plot(WV_hist[:, M*3:], 'cornflowerblue') plt.show() # + def show_FNN(wv, M, K): xn = 60 x0 = np.linspace(X_range0[0], X_range0[1], xn) x1 = np.linspace(X_range1[0], X_range1[1], xn) xx0, xx1 = np.meshgrid(x0, x1) x = np.c_[np.reshape(xx0, xn * xn, 1), np.reshape(xx1, xn * xn, 1)] y, a, z, b = FNN(wv, M, K, x) plt.figure(1, figsize=(4,4)) for ic in range(K): f = y[:, ic] f = f.reshape(xn, xn) f = f.T cont = plt.contour(xx0, xx1, f, levels=[0.8, 0.9], colors = ['cornflowerblue', 'black']) cont.clabel(fmt = '%1.1f', fontsize=9) plt.xlim(X_range0) plt.ylim(X_range1) plt.figure(1, figsize=(3,3)) Show_data(X_test, T_test) show_FNN(WV, M, K) plt.show() # + def dCE_FNN(wv, M, K, x ,t): N, D = x.shape w = wv[:M * (D+1)] w = w.reshape(M, (D+1)) v = wv[M*(D+1):] v = v.reshape((K, M+1)) y, a, z, b = FNN(wv, M, K, x) dwv = np.zeros_like(wv) dw = np.zeros((M, D+1)) dv = np.zeros((K, M+1)) delta1 = np.zeros(M) delta2 = np.zeros(K) for n in range(N): for k in range(K): delta2[k] = (y[n, k] - t[n,k]) for j in range(M): delta1[j] = z[n,j] * (1 - z[n, j])*np.dot(v[:, j], delta2) for k in range(K): dv[k, :] = dv[k, :] + delta2[k] * z[n, :] / N for j in range(M): dw[j,:] = dw[j, :] +delta1[j] * np.r_[x[n, :], 1] / N dwv = np.c_[dw.reshape((1, M*(D+1))), \ dv.reshape((1, K*(M+1)))] dwv = dwv.reshape(-1) return dwv def Show_dWV(wv, M): N = wv.shape[0] plt.bar(range(1, M*3 +1), wv[:M*3], align="center", color= 'black') plt.bar(range(M*3+1, N+1), wv[M*3:], align="center", color='cornflowerblue') plt.xticks(range(1, N+1)) plt.xlim(0, N+1) M = 2 K = 3 N = 2 nWV = M*3 + K*(M+1) np.random.seed(1) WV = np.random.normal(0, 1, nWV) dWV_ana = dCE_FNN(WV, M, K, X_train[:N, :], T_train[:N, :]) print("analytical dWV") print(dWV_ana) dWV_num = dCE_FNN_num(WV, M, K, X_train[:N, :], T_train[:N, :]) print("numerical dWV") 
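# (added sketch) sanity check that the analytic backprop gradient matches the
# central-difference gradient; small discrepancies from finite-difference truncation
# error (epsilon = 0.001 above) are expected
print("max abs difference:", np.max(np.abs(dWV_ana - dWV_num)))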
print(dWV_num) plt.figure(1, figsize=(8,3)) plt.subplots_adjust(wspace=0.5) plt.subplot(1, 2, 1) Show_dWV(dWV_ana, M) plt.title('analitical') plt.subplot(1, 2, 2) Show_dWV(dWV_num, M) plt.title('numerical') plt.show() # + import time def Fit_FNN(wv_init, M, K, x_train, t_train, x_test, t_test, n , alpha): wv = wv_init.copy() err_train = np.zeros(n) err_test = np.zeros(n) wv_hist = np.zeros((n, len(wv_init))) epsilon = 0.001 for i in range(n): wv = wv- alpha*dCE_FNN(wv, M, K, x_train, t_train) err_train[i] = CE_FNN(wv, M, K, x_train, t_train) err_test[i] = CE_FNN(wv, M, K, x_test, t_test) wv_hist[i, :] = wv return wv, wv_hist, err_train, err_test startTime = time.time() M = 2 K = 3 np.random.seed(1) WV_init = np.random.normal(0, 0.01, M*3 + K*(M+1)) N_step = 1000 alpha = 1 WV, WV_hist, ERR_train, ERR_test = Fit_FNN( WV_init, M, K, X_train, T_train, X_test, T_test, N_step, alpha) calculation_time = time.time() - startTime print("Calculation time:{0:.3f} sec".format(calculation_time)) # + plt.figure(1, figsize=(12,3)) plt.subplots_adjust(wspace=0.5) plt.subplot(1,3,1) plt.plot(Err_train, 'black', label = 'training') plt.plot(Err_test, 'cornflowerblue', label = 'test') plt.legend() plt.subplot(1,3,2) plt.plot(WV_hist[:, :M*3], 'black') plt.plot(WV_hist[:, M*3:], 'cornflowerblue') plt.subplot(1, 3,3) Show_data(X_test, T_test) M = 2 K = 3 show_FNN(WV, M, K) plt.show() # + from mpl_toolkits.mplot3d import Axes3D def show_activation3d(ax, v, v_ticks, title_str): f = v.copy() f = f.reshape(xn, xn) f = f.T ax.plot_surface(xx0, xx1, f, color= 'blue', edgecolor = 'black', rstride = 1, cstride = 1, alpha =0.5) ax.view_init(70, -110) ax.set_xticklabels([]) ax.set_yticklabels([]) ax.set_zticks(v_ticks) ax.set_title(title_str, fontsize=18) M = 2 K = 3 xn = 15 x0 = np.linspace(X_range0[0], X_range0[1], xn) x1 = np.linspace(X_range1[0], X_range1[1], xn) xx0, xx1 = np.meshgrid(x0, x1) x = np.c_[np.reshape(xx0, xn * xn, 'A'), np.reshape(xx1, xn*xn,'A')] y, a, z, b = FNN(WV, M, K, x) fig = plt.figure(1, figsize = (12, 9)) plt.subplots_adjust(left=0.075, bottom = 0.05, right = 0.95, top = 0.95, wspace = 0.4, hspace = 0.4) for m in range(M): ax = fig.add_subplot(3, 4, 1 + m * 4, projection = '3d') show_activation3d(ax, b[:,m],[-10,10], '$b_{0:d}$'.format(m)) ax = fig.add_subplot(3, 4, 2 + m * 4, projection = '3d') show_activation3d(ax, z[:, m], [0, 1], '$z_{0:d}$'.format(m)) for k in range(K): ax = fig.add_subplot(3, 4, 3+ k * 4, projection = '3d') show_activation3d(ax, a[:, k], [-5, 5], '$a_{0:d}$'.format(k)) ax = fig.add_subplot(3, 4, 4+k*4, projection='3d') show_activation3d(ax, y[:,k],[0,1], '$y_{0:d}$'.format(k)) plt.show() # - # %reset # + import numpy as np import matplotlib.pyplot as plt import time np.random.seed(1) import keras.optimizers from keras.models import Sequential from keras.layers.core import Dense, Activation outfile = np.load('class_data.npz') X_train = outfile['X_train'] T_train = outfile['T_train'] X_test = outfile['X_test'] T_test = outfile['T_test'] X_range0 = outfile['X_range0'] X_range1 = outfile['X_range1'] # - def Show_data(x, t): wk, n = t.shape c = [[0, 0, 0], [.5, .5, .5], [1, 1, 1]] for i in range(n): plt.plot(x[t[:, i] == 1, 0], x[t[:, i] == 1, 1], linestyle = 'none', marker='o', markeredgecolor = 'black', color = c[i], alpha = 0.8) plt.grid(True) # + np.random.seed(1) model = Sequential() model.add(Dense(2, input_dim = 2, activation = 'sigmoid', kernel_initializer = 'uniform')) model.add(Dense(3, activation = 'softmax', kernel_initializer = 'uniform')) sgd = 
keras.optimizers.SGD(lr = 1, momentum = 0.0, decay = 0.0, nesterov = False) model.compile(optimizer = sgd, loss = 'categorical_crossentropy', metrics = ['accuracy']) startTime = time.time() history = model.fit(X_train, T_train, epochs= 1000, batch_size = 100, verbose = 0, validation_data = (X_test, T_test)) score = model.evaluate(X_test, T_test, verbose = 0) print('cross entropy {0:3.2f}, accuracy {1:3.2f}'.format(score[0], score[1])) calculation_time = time.time() - startTime print("Calculation time:{0:.3f} sec".format(calculation_time)) # + plt.figure(1, figsize = (12, 3)) plt.subplots_adjust(wspace = 0.5) plt.subplot(1, 3, 1) plt.plot(history.history['loss'], 'black', label = 'training') plt.plot(history.history['val_loss'], 'cornflowerblue', label = 'test') plt.legend() plt.subplot(1, 3, 2) plt.plot(history.history['accuracy'], 'black', label = 'training') plt.plot(history.history['val_accuracy'], 'cornflowerblue', label='test') plt.legend() plt.subplot(1, 3, 3) Show_data(X_test, T_test) xn = 60 x0 = np.linspace(X_range0[0], X_range0[1], xn) x1 = np.linspace(X_range1[0], X_range1[1], xn) xx0, xx1 = np.meshgrid(x0, x1) x = np.c_[np.reshape(xx0, xn*xn, 'F'), np.reshape(xx1, xn * xn , 'F')] y = model.predict(x) K = 3 for ic in range(K): f = y[:, ic] f = f.reshape(xn, xn) f = f.T cont = plt.contour(xx0, xx1, f, levels = [0.5, 0.9], colors=[ 'cornflowerblue', 'black']) cont.clabel(fmt='%1.1f', fontsize = 9) plt.xlim(X_range0) plt.ylim(X_range1) plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import scipy as sp dU = 2 dX = 12 acc = gains = np.ones(dU) dt = 0.02 Fd = np.vstack([ np.hstack([ np.eye(dU), dt * np.eye(dU), np.zeros((dU, dX - dU*2)), dt ** 2 * np.diag(gains) ]), np.hstack([ np.zeros((dU, dU)), np.eye(dU), np.zeros((dU, dX - dU*2)), dt * np.diag(gains) ]), np.zeros((dX - dU*2, dX+dU)) ]) fc = np.hstack([acc * dt ** 2, acc * dt, np.zeros((dX - dU*2))]) Fd.shape np.hstack([ np.eye(dU), dt * np.eye(dU), np.zeros((dU, dX - dU*2)), dt ** 2 * np.diag(gains) ]) (dt ** 2 * np.diag(gains)).shape np.zeros((dU, dX - dU*2)).shape np.hstack([ np.zeros((dU, dU)), np.eye(dU), np.zeros((dU, dX - dU*2)), dt * np.diag(gains) ]) np.diag(np.eye(5,5)) Ltt = np.diag(np.hstack([ 1 * np.ones(dU), 1 * 2 * np.ones(dU), np.zeros(dX - dU*2), np.ones(dU) ])) Ltt.shape np.hstack([ 1 * np.ones(dU), 1 * 2 * np.ones(dU), np.zeros(dX - dU*2), np.ones(dU) ]) x0 = np.ones(12) lt = -Ltt.dot(np.r_[x0, np.zeros(dU)]) lt.shape import scipy.linalg as linalg # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] pycharm={"name": "#%% md\n"} # # Let's grab some information # # In this notebook, we try our very best to get some key information # from the text versions of the pdpc decisions. # This key information is meant to enhance the pdpc decisions database. # # ## Table of contents # 1. Citation # 2. DP Case Number # 3. Cases cited in the decision # 4. 
Result (warning, directions, penalties) # + pycharm={"is_executing": false, "name": "#%% \n"} # We set up the environment import pandas as pd import spacy # %run ../set_up_zeekerDB.ipynb # + # We sample some decisions to check if our code is working sample = data_collection.aggregate( [ { "$sample": { "size": 7 } } ] ) sample_df = pd.DataFrame([ {'id': result['_id'], 'respondent': result['respondent']} for result in sample]) sample_df # + def citation_search(text): import re citation_search = re.compile(r'\s+\[\d{4}]\s+(?:\d\s+)?[A-Z|()]+\s+\d+[\s.]?\s+') match = re.search(citation_search, text) print(match) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + active="" # --- Day 10: The Stars Align --- # # It's no use; your navigation system simply isn't capable of providing walking directions in the arctic circle, and certainly not in 1018. # # The Elves suggest an alternative. In times like these, North Pole rescue operations will arrange points of light in the sky to guide missing Elves back to base. Unfortunately, the message is easy to miss: the points move slowly enough that it takes hours to align them, but have so much momentum that they only stay aligned for a second. If you blink at the wrong time, it might be hours before another message appears. # # You can see these points of light floating in the distance, and record their position in the sky and their velocity, the relative change in position per second (your puzzle input). The coordinates are all given from your perspective; given enough time, those positions and velocities will move the points into a cohesive message! # # Rather than wait, you decide to fast-forward the process and calculate what the points will eventually spell. # # For example, suppose you note the following points: # # position=< 9, 1> velocity=< 0, 2> # position=< 7, 0> velocity=<-1, 0> # position=< 3, -2> velocity=<-1, 1> # position=< 6, 10> velocity=<-2, -1> # position=< 2, -4> velocity=< 2, 2> # position=<-6, 10> velocity=< 2, -2> # position=< 1, 8> velocity=< 1, -1> # position=< 1, 7> velocity=< 1, 0> # position=<-3, 11> velocity=< 1, -2> # position=< 7, 6> velocity=<-1, -1> # position=<-2, 3> velocity=< 1, 0> # position=<-4, 3> velocity=< 2, 0> # position=<10, -3> velocity=<-1, 1> # position=< 5, 11> velocity=< 1, -2> # position=< 4, 7> velocity=< 0, -1> # position=< 8, -2> velocity=< 0, 1> # position=<15, 0> velocity=<-2, 0> # position=< 1, 6> velocity=< 1, 0> # position=< 8, 9> velocity=< 0, -1> # position=< 3, 3> velocity=<-1, 1> # position=< 0, 5> velocity=< 0, -1> # position=<-2, 2> velocity=< 2, 0> # position=< 5, -2> velocity=< 1, 2> # position=< 1, 4> velocity=< 2, 1> # position=<-2, 7> velocity=< 2, -2> # position=< 3, 6> velocity=<-1, -1> # position=< 5, 0> velocity=< 1, 0> # position=<-6, 0> velocity=< 2, 0> # position=< 5, 9> velocity=< 1, -2> # position=<14, 7> velocity=<-2, 0> # position=<-3, 6> velocity=< 2, -1> # # Each line represents one point. Positions are given as pairs: X represents how far left (negative) or right (positive) the point appears, while Y represents how far up (negative) or down (positive) the point appears. # # At 0 seconds, each point has the position given. Each second, each point's velocity is added to its position. So, a point with velocity <1, -2> is moving to the right, but is moving upward twice as quickly. 
If this point's initial position were <3, 9>, after 3 seconds, its position would become <6, 3>. # # Over time, the points listed above would move like this: # # Initially: # ........#............. # ................#..... # .........#.#..#....... # ...................... # #..........#.#.......# # ...............#...... # ....#................. # ..#.#....#............ # .......#.............. # ......#............... # ...#...#.#...#........ # ....#..#..#.........#. # .......#.............. # ...........#..#....... # #...........#......... # ...#.......#.......... # # After 1 second: # ...................... # ...................... # ..........#....#...... # ........#.....#....... # ..#.........#......#.. # ...................... # ......#............... # ....##.........#...... # ......#.#............. # .....##.##..#......... # ........#.#........... # ........#...#.....#... # ..#...........#....... # ....#.....#.#......... # ...................... # ...................... # # After 2 seconds: # ...................... # ...................... # ...................... # ..............#....... # ....#..#...####..#.... # ...................... # ........#....#........ # ......#.#............. # .......#...#.......... # .......#..#..#.#...... # ....#....#.#.......... # .....#...#...##.#..... # ........#............. # ...................... # ...................... # ...................... # # After 3 seconds: # ...................... # ...................... # ...................... # ...................... # ......#...#..###...... # ......#...#...#....... # ......#...#...#....... # ......#####...#....... # ......#...#...#....... # ......#...#...#....... # ......#...#...#....... # ......#...#..###...... # ...................... # ...................... # ...................... # ...................... # # After 4 seconds: # ...................... # ...................... # ...................... # ............#......... # ........##...#.#...... # ......#.....#..#...... # .....#..##.##.#....... # .......##.#....#...... # ...........#....#..... # ..............#....... # ....#......#...#...... # .....#.....##......... # ...............#...... # ...............#...... # ...................... # ...................... # # After 3 seconds, the message appeared briefly: HI. Of course, your message will be much longer and will take many more seconds to appear. # # What message will eventually appear in the sky? 
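#
# One compact way to pull the four numbers out of each input line is a single regular
# expression; this is an alternative sketch to the string-slicing parser in `read_input()`
# defined below, shown on one sample line.

# +
import re

sample_line = 'position=< 9,  1> velocity=< 0,  2>'
x, y, vx, vy = map(int, re.findall(r'-?\d+', sample_line))
print(x, y, vx, vy)  # 9 1 0 2
# -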
# # + example_input = """position=< 9, 1> velocity=< 0, 2> position=< 7, 0> velocity=<-1, 0> position=< 3, -2> velocity=<-1, 1> position=< 6, 10> velocity=<-2, -1> position=< 2, -4> velocity=< 2, 2> position=<-6, 10> velocity=< 2, -2> position=< 1, 8> velocity=< 1, -1> position=< 1, 7> velocity=< 1, 0> position=<-3, 11> velocity=< 1, -2> position=< 7, 6> velocity=<-1, -1> position=<-2, 3> velocity=< 1, 0> position=<-4, 3> velocity=< 2, 0> position=<10, -3> velocity=<-1, 1> position=< 5, 11> velocity=< 1, -2> position=< 4, 7> velocity=< 0, -1> position=< 8, -2> velocity=< 0, 1> position=<15, 0> velocity=<-2, 0> position=< 1, 6> velocity=< 1, 0> position=< 8, 9> velocity=< 0, -1> position=< 3, 3> velocity=<-1, 1> position=< 0, 5> velocity=< 0, -1> position=<-2, 2> velocity=< 2, 0> position=< 5, -2> velocity=< 1, 2> position=< 1, 4> velocity=< 2, 1> position=<-2, 7> velocity=< 2, -2> position=< 3, 6> velocity=<-1, -1> position=< 5, 0> velocity=< 1, 0> position=<-6, 0> velocity=< 2, 0> position=< 5, 9> velocity=< 1, -2> position=<14, 7> velocity=<-2, 0> position=<-3, 6> velocity=< 2, -1>""" with open('input/day10.txt', 'r') as f: actual_input = f.read() actual_input = actual_input.strip() print(actual_input[0:10]) # + def read_input(input): points = [] for row in input.split('\n'): c = row.split(' velocity') x = float(c[0][c[0].find('<') + 1: c[0].find(',')]) y = float(c[0][c[0].find(',') + 1: c[0].find('>')]) vx = float(c[1][c[1].find('<') + 1: c[1].find(',')]) vy = float(c[1][c[1].find(',') + 1: c[1].find('>')]) star = {'x':x, 'y':y, 'vx':vx, 'vy':vy} points.append(star) return points print(read_input(example_input)) # + import numpy as np def get_positions(points, tick): current_positions = [] for point in points: x = point['x'] + (tick * point['vx']) y = point['y'] + (tick * point['vy']) current_positions.append((x,y)) return current_positions def draw_sky(points): min_position_x = np.min([i[0] for i in points]) max_position_x = np.max([i[0] for i in points]) min_position_y = np.min([i[1] for i in points]) max_position_y = np.max([i[1] for i in points]) for j in range(int(max_position_y - min_position_y + 1)): row = '' for i in range(int(max_position_x - min_position_x + 1)): current_point = (min_position_x + i, min_position_y + j) if current_point in points: row = row + '#' else: row = row + '.' print(row) points = read_input(example_input) for i in range(4): pos = get_positions(points, i) draw_sky(pos) print() print() # + code_folding=[] def get_size(points): min_position_x = np.min([i[0] for i in points]) max_position_x = np.max([i[0] for i in points]) min_position_y = np.min([i[1] for i in points]) max_position_y = np.max([i[1] for i in points]) return max_position_x - min_position_x + 1, max_position_y - min_position_y + 1 def get_area(width, height): return width * height def get_compact(stars): tick = 0 size = get_area(*get_size(get_positions(stars, 0))) while tick < 100000: currentsize = get_area(*get_size(get_positions(stars, tick))) if currentsize > size: draw_sky(get_positions(stars, tick - 1)) break else: size = currentsize tick = tick + 1 return tick - 1 points = read_input(example_input) print(get_compact(points)) # - points = read_input(actual_input) print(get_compact(points)) # + active="" # --- Part Two --- # # Good thing you didn't have to wait, because that would have taken a long time - much longer than the 3 seconds in the example above. 
# # Impressed by your sub-hour communication capabilities, the Elves are curious: exactly how many seconds would they have needed to wait for that message to appear? # # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] hide_input=false # # Applying the data_block API # + hide_input=false from fastai import * from fastai.vision import * # - path = Path('data/aircrafts') path.ls() # (path/'train').ls() # (path/'test').ls() # **Transformations** # We can get a lot more specific with this. For example, for satellite images you can use # `planet_tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)` tfms = get_transforms(do_flip=False) data = (ImageList.from_folder(path) #Where to find the data? -> in path and its subfolders .split_by_folder() #How to split in train/valid? -> use the folders .label_from_folder() #How to label? -> depending on the folder of the filenames .add_test_folder() #Optionally add a test set (here default name is test) .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 64 .databunch()) #Finally? -> use the defaults for conversion to ImageDataBunch # Alternatively, split by random instead of by folder using split_by_rand_pct() data = (ImageList.from_folder(path) #Where to find the data? -> in planet 'train' folder .split_by_rand_pct() #How to split in train/valid? -> randomly with the default 20% in valid .label_from_folder() #How to label? .transform(tfms, size=64) #Data augmentation? -> use tfms with a size of 128 .databunch()) #Finally -> use the defaults for conversion to databunch # We can also split up the source and the data itself # and normalize using these `imagenet_stats` np.random.seed(42) src = (ImageList.from_folder(path) .split_by_folder() .label_from_folder()) data = (src.transform(tfms, size=64) .databunch().normalize(imagenet_stats)) # ## View data # Look at the data from the created databunch data.show_batch(3, figsize=(6,6), hide_axis=False) data.c # Classes are inferred from folder names data.classes, data.c, len(data.train_ds), len(data.valid_ds) # data.valid_ds.classes # data.train_ds.classes data.classes, data.c, len(data.train_ds), len(data.valid_ds) # ## Train model arch = models.resnet50 learn = cnn_learner(data, arch, metrics=accuracy) learn.lr_find() lr_find(learn) learn.recorder.plot() lr = 0.01 learn.fit_one_cycle(5, slice(lr)) learn.save('stage-1-rn50') learn.unfreeze() learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(5, slice(1e-3, lr/5)) learn.save('stage-2-rn50') learn.unfreeze() learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(5, slice(1e-4, lr/5)) learn.show_results(rows=3, figsize=(9,9)) # ## Interpretation learn.load('stage-2-rn50'); # + interp = ClassificationInterpretation.from_learner(learn) losses,idxs = interp.top_losses() len(data.valid_ds)==len(losses)==len(idxs) # - interp.plot_top_losses(9, figsize=(15,11)) interp.most_confused() # Try with `learn.predict()` using an actual image. 
# Steps taken from https://github.com/npatta01/web-deep-learning-classifier/blob/master/notebooks/1_train.ipynb plane_url = "https://upload.wikimedia.org/wikipedia/commons/5/5e/ANA_777-300_Taking_off_from_JFK.jpg" url = plane_url def fetch_image(url): response = requests.get(url) img = open_image(BytesIO(response.content)) return img img = fetch_image(plane_url) pred_class, pred_idx, outputs = learn.predict(img) pred_class, pred_idx, outputs import pprint def predict(url): img = fetch_image(url) pred_class,pred_idx,outputs = learn.predict(img) res = zip (learn.data.classes, outputs.tolist()) predictions = sorted(res, key=lambda x:x[1], reverse=True) top_predictions = predictions[0:5] pprint.pprint(top_predictions) return img.resize(500) predict(plane_url) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + #default_exp nbtemplate # + #all_flag # remove this cell to use with tests # - # #hide # optional user defined test_flag here from google.colab import drive drive.mount('/content/drive') # #hide # %load_ext autoreload # %autoreload 2 # #hide # !pip install nbdev # !pip install fastcore # #hide # add repo name then remove this comment # %cd /content/drive/My drive/ # #hide #not deps but we need them to use nbdev and run tests from nbdev import * from nbdev.showdoc import * from fastcore.test import * # # Notebook template # # > A minimal template for a new nbdev-Colab notebook # #hide # optional build script code - not required if you manage your project from another dedicated notebook. from nbdev.export import notebook2script notebook2script() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:ptlesson] * # language: python # name: conda-env-ptlesson-py # --- # + # 모형 최적화: 파라미터 튜닝 # ref: https://datascienceschool.net/view-notebook/ff4b5d491cc34f94aea04baca86fbef8/ import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from sklearn.datasets import make_classification from matplotlib import rc import matplotlib as mpl import warnings warnings.filterwarnings(action="ignore") rc('font', family="AppleGothic") # %matplotlib inline ### Validation Curve: 성능 기준 세팅 from sklearn.datasets import load_digits from sklearn.svm import SVC from sklearn.model_selection import validation_curve digits= load_digits() X, y = digits.data, digits.target param_range = np.logspace(-6, -1, 10) # - # %%time train_scores, test_scores = validation_curve( SVC(), X, y, param_name="gamma", param_range=param_range, cv=10, scoring="accuracy", n_jobs=1 # number of jobs to run in parallel: -1 means using all processors ) # + train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) mpl.rcParams['font.family'] = 'DejaVu Sans' plt.semilogx(param_range, train_scores_mean, label="Training score", color="r") plt.fill_between(param_range, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.2, color="r") plt.semilogx(param_range, test_scores_mean, label="Cross-validation score", color="g") plt.fill_between(param_range, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.2, color="g") plt.legend(loc="best") 
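# The shaded bands drawn above span one standard deviation around the mean training and cross-validation scores.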
plt.xlabel("$\gamma$") plt.ylabel("Score") plt.title("Validation Curve w/ SVM") plt.show() # + ### Grid Search from sklearn.model_selection import GridSearchCV from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler # substracts mean, unit variant from sklearn.svm import SVC pipe_svc = Pipeline([('scl', StandardScaler()), ('clf', SVC(random_state=1))]) param_range = [ np.exp(-4+i) for i in range(8) ] param_grid = [ {'clf__C' : param_range, 'clf__kernel': ['linear']}, {'clf__C' : param_range, 'clf__gamma': param_range, 'clf__kernel':['rbf']} ] gs = GridSearchCV(estimator=pipe_svc, param_grid=param_grid, scoring='accuracy', cv=10, n_jobs=1) # %time gs = gs.fit(X, y) # - gs.cv_results_["params"] gs.cv_results_["mean_test_score"] print(gs.best_score_) print(gs.best_params_) # + ### Parameter Grid: 임의의 파라미터 조합하여 탐색할 경우 from sklearn.model_selection import ParameterGrid # for example, param_grid = {'a':[1,2], 'b': [True, False]} list(ParameterGrid(param_grid)) # - param_grid = [{'kernel': ['linear']}, {'kernel': ['rbf'], 'gamma': [1,10]}] list(ParameterGrid(param_grid)) # + ### 병렬 처리: GridSearchCV나 validation_curve의 n_jobs 값을 늘려주면 parallel processor 사용 가능 # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from dlinputs import utils import numpy as np assert utils.make_gray(np.zeros((400, 300, 3))).shape == (400, 300) assert utils.make_gray(np.zeros((400, 300))).shape == (400, 300) assert utils.make_gray(np.zeros((400, 300, 4))).shape == (400, 300) assert utils.make_rgb(np.zeros((400, 300, 3))).shape == (400, 300, 3) assert utils.make_rgb(np.zeros((400, 300))).shape == (400, 300, 3) assert utils.make_rgb(np.zeros((400, 300, 4))).shape == (400, 300, 3) assert utils.make_rgba(np.zeros((400, 300, 3))).shape == (400, 300, 4) assert utils.make_rgba(np.zeros((400, 300))).shape == (400, 300, 4) assert utils.make_rgba(np.zeros((400, 300, 4))).shape == (400, 300, 4) d = dict(a=1, b=2, c=3) rd = utils.invert_mapping(d) assert rd[1] == "a" d = utils.get_string_mapping("a=x:b=y:c=z") assert d["a"] == "x" image = np.zeros((400, 300, 3)) png = utils.pildumps(image) image1 = utils.pilreads(png, "rgb") assert image.shape == image1.shape assert (image == image1).all() image = np.zeros((400, 300, 3)) png = utils.pildumps(image) image1 = utils.pilreads(png, "gray") assert image.shape[:2] == image1.shape sample = dict(png=np.zeros((400, 300, 3))) raw = utils.autoencode(sample) sample1 = utils.autodecode(raw) assert (sample["png"] == sample1["png"]).all() samples = [dict(png=np.zeros((400, 300, 3)))] * 10 batch = utils.samples_to_batch(samples) assert batch["png"].shape == (10, 400, 300, 3) import imp imp.reload(utils) samples = [dict(png=np.zeros((400, x, 3))) for x in [200, 100, 350, 150]] batch = utils.samples_to_batch(samples, expand=True) assert batch["png"].shape == (4, 400, 350, 3), batch["png"].shape # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline import numpy as np import seaborn as sns import warnings warnings.filterwarnings('ignore') ori=pd.read_csv('website_data_20190225.csv') ori.drop(['STATE','DISTRICT','WLCODE','SITE_TYPE','TEH_NAME'],axis=1,inplace=True) 
ori.replace(to_replace="'0",value=0,inplace=True) ori.head() # + dup_df=pd.DataFrame().reindex_like(ori) dup_df.dropna(inplace=True) # j=0 # for i in range(0,ori.shape[0]): # if ori['STATE'][i]=='RJ': # dup_df.loc[j] = ori.iloc[i] # j+=1 # dup_df.drop(['STATE'],axis=1,inplace=True) # j=0 # for i in range(0,ori.shape[0]): # if ori['DISTRICT'][i]=='Ajmer': # dup_df.loc[j] = ori.iloc[i] # j+=1 # dup_df.drop(['DISTRICT'],axis=1,inplace=True) j=0 for i in range(0,ori.shape[0]): if ori['BLOCK_NAME'][i]=='Arain': dup_df.loc[j] = ori.iloc[i] j+=1 dup_df.drop(['BLOCK_NAME'],axis=1,inplace=True) j=0 for i in range(0,ori.shape[0]): if ori['SITE_NAME'][i]=='Sanpla': dup_df.loc[j] = ori.iloc[i] j+=1 dup_df.drop(['SITE_NAME'],axis=1,inplace=True) dup_df.head() # - for i in range(0,dup_df.shape[0]): dup_df['MONSOON'][i]=float(dup_df['MONSOON'][i]) dup_df['POMRB'][i]=float(dup_df['POMRB'][i]) dup_df['POMKH'][i]=float(dup_df['POMKH'][i]) dup_df['PREMON'][i]=float(dup_df['PREMON'][i]) dup_df['YEAR_OBS'][i]=int(dup_df['YEAR_OBS'][i]) first=list(dup_df['MONSOON']) second=list(dup_df['POMRB']) third=list(dup_df['POMKH']) fourth=list(dup_df['PREMON']) dup_df['MONSOON']=pd.core.frame.DataFrame(x+y+z+w for x, y,z,w in zip(first, second, third, fourth)) dup_df.drop(['POMRB','POMKH','PREMON'],axis=1,inplace=True) dup_df.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="DUkwlNp0LExi" outputId="1189f3b5-6442-47ed-d46f-a100aab587b0" # !pip install pylast # !pip install pillow # + id="kiqqqT4uLIJs" import pylast import requests from PIL import Image import urllib.request # + id="aT7tz9LOMIGB" API_KEY = "" API_SECRET = "07ca82ca31889f736dd709c4501506b8" # + id="_ehpYgvXMyLn" network = pylast.LastFMNetwork( api_key = API_KEY, api_secret = API_SECRET, ) # + colab={"base_uri": "https://localhost:8080/"} id="MdFMHqlCTIxR" outputId="1e1ae5ae-cfb4-41ca-cf1f-0015862bdf79" network.get_top_tracks()[0] # + id="nqrRVp0UTjOI" top = network.get_album("",'good 4 u') # + colab={"base_uri": "https://localhost:8080/", "height": 317} id="qbqwtZzlT7fa" outputId="c496b304-2a38-4431-cead-9a5fa48313a6" sample = requests.get(top.get_cover_image(),stream=True).raw Image.open(sample) # + id="u4p5NYhtVbpn" # --- # jupyter: # jupytext: # text_representation: # extension: .jl # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Julia 1.7.2 # language: julia # name: julia-1.7 # --- pwd() using Pkg; Pkg.activate("../../FermiCG/") using FermiCG, NPZ readdir() # + # load integrals from disk h0 = npzread("cr_qubit_scf_integrals_h0.npz.npy") h1 = npzread("cr_qubit_scf_integrals_h1.npz.npy") h2 = npzread("cr_qubit_scf_integrals_h2.npz.npy") ints = InCoreInts(h0, h1, h2) ints_original = deepcopy(ints); print(" Integrals have the following sizes: h0= ", size(h0), " h1= ", size(h1), " h2= ", size(h2)) # + # Define clusters - this should probably be done in the python notebook, and then just read in here clusters_in = [ (1:10), # metal (11:16), # Benzene 1 (17:22), # Benzene 1 (23:28), # Benzene 1 (29:34), # Benzene 1 ] clusters = [Cluster(i,collect(clusters_in[i])) for i = 1:length(clusters_in)] init_fspace = [ (5,5), (3,3), (3,3), (3,3), (3,3)]; display(clusters) display(init_fspace) # + tags=[] rdm1 = zeros(size(ints.h1)) e_cmf, 
U, da1, db1 = FermiCG.cmf_oo(ints, clusters, init_fspace, rdm1, rdm1, verbose=0, gconv=1e-6, method="cg", sequential=true); # + tags=[] using JLD2 # rotate integrals to new cmf optimized basis ints = FermiCG.orbital_rotation(ints_original,U); @save "cr_qubit_cmf_5clusters.jld2" ints clusters init_fspace da1 db1 Da = da1 Db = db1 max_roots = 20 # # form Cluster data @time cluster_bases = FermiCG.compute_cluster_eigenbasis(ints, clusters, verbose=0, max_roots=max_roots, init_fspace=init_fspace, rdm1a=Da, rdm1b=Db) clustered_ham = FermiCG.extract_ClusteredTerms(ints, clusters) cluster_ops = FermiCG.compute_cluster_ops(cluster_bases, ints); FermiCG.add_cmf_operators!(cluster_ops, cluster_bases, ints, Da, Db); # - @save "cr_qubit_cmf_5clusters_bases_M20.jld2" ints clusters init_fspace da1 db1 cluster_bases # + # Create BST state v = FermiCG.BSTstate(clusters, FockConfig(init_fspace), cluster_bases, R=3) FermiCG.add_single_excitons!(v, FockConfig(init_fspace), cluster_bases) FermiCG.randomize!(v) FermiCG.orthonormalize!(v) e_ci, v = FermiCG.tucker_ci_solve(v, cluster_ops, clustered_ham); # + jupyter={"source_hidden": true} tags=[] using Printf display(v, root=2) for ei in 1:length(e_ci) @printf(" %5i %12.8f %12.8f eV\n", ei, e_ci[ei]+ints.h0, (e_ci[ei]-e_ci[1])*27.21165) end # + tags=[] e_var, v_var = FermiCG.block_sparse_tucker(v, cluster_ops, clustered_ham, max_iter = 20, max_iter_pt = 200, nbody = 4, H0 = "Hcmf", thresh_var = 1e-1, thresh_foi = 1e-3, thresh_pt = 1e-3, ci_conv = 1e-5, do_pt = true, resolve_ss = false, tol_tucker = 1e-4); # - @time e2 = FermiCG.compute_pt2_energy(v_var, cluster_ops, clustered_ham, thresh_foi=1e-5) @time e2 = FermiCG.compute_pt2_energy(v_var, cluster_ops, clustered_ham, thresh_foi=1e-6) @time e2 = FermiCG.compute_pt2_energy(v_var, cluster_ops, clustered_ham, thresh_foi=1e-7) # + tags=[] v1 = deepcopy(v_var); display(v1,root=3) # + tags=[] e2, v2 = FermiCG.block_sparse_tucker(v1, cluster_ops, clustered_ham, max_iter = 20, max_iter_pt = 200, nbody = 4, H0 = "Hcmf", thresh_var = 1e-1, thresh_foi = 1e-4, thresh_pt = 1e-3, ci_conv = 1e-5, do_pt = true, resolve_ss = false, tol_tucker = 1e-4); # + jupyter={"outputs_hidden": true} tags=[] display(v2, root=1, thresh=.1) # + tags=[] e3, v3 = FermiCG.block_sparse_tucker(v2, cluster_ops, clustered_ham, max_iter = 20, max_iter_pt = 200, nbody = 4, H0 = "Hcmf", thresh_var = 1e-1, thresh_foi = 1e-5, thresh_pt = 1e-4, ci_conv = 1e-5, do_pt = true, resolve_ss = false, tol_tucker = 1e-4); # + jupyter={"outputs_hidden": true} tags=[] # rotate integrals to new cmf optimized basis ints = FermiCG.orbital_rotation(ints_original,U); Da = da1 Db = db1 max_roots = 50 # # form Cluster data cluster_bases = FermiCG.compute_cluster_eigenbasis(ints, clusters, verbose=1, max_roots=max_roots, init_fspace=init_fspace, rdm1a=Da, rdm1b=Db, delta_elec=2) clustered_ham = FermiCG.extract_ClusteredTerms(ints, clusters) cluster_ops = FermiCG.compute_cluster_ops(cluster_bases, ints); FermiCG.add_cmf_operators!(cluster_ops, cluster_bases, ints, Da, Db); # + jupyter={"outputs_hidden": true} tags=[] # Create BST state v = FermiCG.BSTstate(clusters, FockConfig(init_fspace), cluster_bases, R=3) FermiCG.add_single_excitons!(v, FockConfig(init_fspace), cluster_bases) FermiCG.randomize!(v) FermiCG.orthonormalize!(v) e_ci, v = FermiCG.tucker_ci_solve(v, cluster_ops, clustered_ham); # + tags=[] e3_50, v3_50 = FermiCG.block_sparse_tucker(v, cluster_ops, clustered_ham, max_iter = 20, max_iter_pt = 200, nbody = 4, H0 = "Hcmf", thresh_var = 1e-1, thresh_foi = 1e-5, 
thresh_pt = 1e-4, ci_conv = 1e-5, do_pt = true, resolve_ss = false, tol_tucker = 1e-4); # + jupyter={"source_hidden": true} tags=[] # rotate integrals to new cmf optimized basis ints = FermiCG.orbital_rotation(ints_original,U); Da = da1 Db = db1 max_roots = 100 # # form Cluster data @time cluster_bases = FermiCG.compute_cluster_eigenbasis(ints, clusters, verbose=0, max_roots=max_roots, init_fspace=init_fspace, rdm1a=Da, rdm1b=Db) clustered_ham = FermiCG.extract_ClusteredTerms(ints, clusters) @time cluster_ops = FermiCG.compute_cluster_ops(cluster_bases, ints); FermiCG.add_cmf_operators!(cluster_ops, cluster_bases, ints, Da, Db); # + jupyter={"outputs_hidden": true, "source_hidden": true} tags=[] # Create BST state v = FermiCG.BSTstate(clusters, FockConfig(init_fspace), cluster_bases, R=3) FermiCG.add_single_excitons!(v, FockConfig(init_fspace), cluster_bases) FermiCG.randomize!(v) FermiCG.orthonormalize!(v) e_ci, v = FermiCG.tucker_ci_solve(v, cluster_ops, clustered_ham); # + jupyter={"source_hidden": true} tags=[] e3_50, v3_50 = FermiCG.block_sparse_tucker(v, cluster_ops, clustered_ham, max_iter = 20, max_iter_pt = 200, nbody = 4, H0 = "Hcmf", thresh_var = 1e-1, thresh_foi = 1e-5, thresh_pt = 1e-4, ci_conv = 1e-5, do_pt = true, resolve_ss = false, tol_tucker = 1e-4); # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import csv import requests from xml.dom import minidom from ContentDownloader import ContentDownloader from tqdm import tqdm from ipdb import set_trace import pandas as pd import numpy as np from bs4 import BeautifulSoup from readability import Document def loadXML(url = 'https://www.salvageautosauction.com/sitemap.xml',name=''): # url of rss feed # creating HTTP response object from given url resp = requests.get(url) # saving the xml file with open('csv/'+url.split('/')[-1], 'wb') as f: f.write(resp.content) def parseXML(xmlfile): xmldoc = minidom.parse(name) itemlist = xmldoc.getElementsByTagName('loc') print('URLs found: ',len(itemlist),'\dProcessing XML file ...') urls = [] for item in tqdm(itemlist): url = item.firstChild.nodeValue if 'vehicle_detail' in url: urls.append(url) return urls def find_bid(page): try: soup = BeautifulSoup(page, 'html.parser') dom = soup.findAll("p", {"class": "text-center"})[1] #return dom.contents[0] #return ''.join([s for s in dom.contents[0].split() if s.isdigit()]) return int(''.join([s for s in dom.contents[0] if s.isdigit()])) except: return np.nan def format_data(df): res_df = pd.DataFrame() for index, row in tqdm(df.iterrows()): pagetable = pd.read_html(row['Text'])[0] item_name = row['URL'].split('vehicle_detail/')[1].split('/')[0] pagetable.columns = ['Vehicle Name',item_name] pagetable = pagetable.set_index('Vehicle Name').T pagetable['bid'] = find_bid(page) res_df = res_df.append(pagetable,sort=False) return res_df # + import csv import requests from xml.dom import minidom from ContentDownloader import ContentDownloader from tqdm import tqdm from ipdb import set_trace import pandas as pd import numpy as np from bs4 import BeautifulSoup from readability import Document def loadXML(url = 'https://www.salvageautosauction.com/sitemap.xml',name=''): # url of rss feed # creating HTTP response object from given url resp = requests.get(url) # saving the xml file with open('csv/'+url.split('/')[-1], 'wb') as f: f.write(resp.content) def parseXML(xmlfile): xmldoc = 
minidom.parse(name) itemlist = xmldoc.getElementsByTagName('loc') print('URLs found: ',len(itemlist),'\dProcessing XML file ...') urls = [] for item in tqdm(itemlist): url = item.firstChild.nodeValue if 'vehicle_detail' in url: urls.append(url) return urls def find_bid(page): try: soup = BeautifulSoup(page, 'html.parser') dom = soup.findAll("p", {"class": "text-center"})[1] #return dom.contents[0] #return ''.join([s for s in dom.contents[0].split() if s.isdigit()]) return int(''.join([s for s in dom.contents[0] if s.isdigit()])) except: return np.nan def format_data(df): res_df = pd.DataFrame() for index, row in tqdm(df.iterrows()): pagetable = pd.read_html(row['Text'])[0] item_name = row['URL'].split('vehicle_detail/')[1].split('/')[0] pagetable.columns = ['Vehicle Name',item_name] pagetable = pagetable.set_index('Vehicle Name').T pagetable['bid'] = find_bid(page) res_df = res_df.append(pagetable,sort=False) res_df.to_csv('csv/formated_date.csv') return res_df # + xlm_name = 'SAA_sitemap.xml' text_file_name = 'SalvageAutos.csv' loadXML(name=xlm_name) urls = parseXML(xlm_name) scraped_data = ContentDownloader.run_url_download(batch_size=100,urls_list=urls[:300] ,path_to_csv=text_file_name) #scraped_data = pd.read_csv('csv/SalvageAutos.csv') res_df = format_data(scraped_data) # - res_df.to_csv('csv/formatted_date.csv') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #Import the necessary library import numpy as np #Manually add the Summer Olympics, London 2012 dataset as arrays countries = np.array(['Great Britain','China', 'Russia', 'United States', 'Korea', 'Japan', 'Germany']) gold = np.array([29,38,24,46,13,7,11]) silver = np.array([17,28,25,28,8,14,11]) Bronze = np.array([19,22,32,29,7,17,14]) #Find the country name with maximum gold medals max_gold = np.argmax(gold) name_of_country_with_max_gold = countries[max_gold] print(name_of_country_with_max_gold) #Find the countries with more than 20 gold medals print(countries[gold>20]) #Evaluate the dataset and print the name of each country with its gold medals and total number of medals for i in range(len(countries)): print("Name of the country: {} with gold medals: {} and total medals: {}".format(countries[i], gold[i],np.add(gold,silver,Bronze)[i])) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Effect of UBI on Supplemental Poverty Measure # # $500 per person. # ## Setup # ### Imports import pandas as pd import numpy as np import microdf as mdf import os # ### Load data # # Looks for the 2019 March Supplement in a `~/data` folder. ASEC_F = '~/data/cpspb/asec/prod/data/2019/pppub19.csv' if not os.path.isfile(os.path.expanduser(ASEC_F)): # !mkdir ~/data # !wget -O ~/data/asecpub19csv.zip http://thedataweb.rm.census.gov/pub/cps/march/asecpub19csv.zip # !unzip ~/data/asecpub19csv.zip -d ~/data SPM_COLS = ['povthreshold', 'resources', 'poor', 'numper', 'numkids', 'numadults', 'id'] OTHER_COLS = ['A_AGE', 'MARSUPWT', 'PRCITSHP'] cols = ['SPM_' + i.upper() for i in SPM_COLS] + OTHER_COLS raw = pd.read_csv(ASEC_F, usecols=cols) # ## Preprocess df = raw.copy(deep=True) df.columns = map(str.lower, df.columns) # Add true weight by dividing by 100. df['w'] = df.marsupwt / 100. 
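# As a quick optional check (not part of the original analysis), the person weights should sum to roughly the size of the population covered by the ASEC, i.e. on the order of a few hundred million people.
df.w.sum()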
# Add citizenship indicator, per https://www2.census.gov/programs-surveys/cps/techdocs/cpsmar19.pdf. df['is_citizen'] = df.prcitshp != 5 df['is_kid'] = df.a_age < 18 df['is_adult'] = df.a_age >= 18 df['is_kid_citizen'] = df.is_citizen & df.is_kid df['is_adult_citizen'] = df.is_citizen & df.is_adult # Number of citizens per SPM unit. spmu = df.groupby('spm_id')[['is_kid', 'is_adult', 'is_kid_citizen', 'is_adult_citizen']].sum() spmu.columns = ['spm_nu18', 'spm_n18', 'spm_numkidcitizens', 'spm_numadultcitizens'] df = df.merge(spmu, on='spm_id') # ## Analysis # Percent citizen. mdf.weighted_mean(df, 'is_citizen', 'w') mdf.weighted_mean(df[df.a_age < 18], 'is_citizen', 'w') mdf.weighted_mean(df[df.a_age >= 18], 'is_citizen', 'w') def rounded_pct(x): print(str((x * 100).round(1)) + '%') def print_pov(df, weight='w'): rounded_pct(mdf.weighted_mean(df, 'spm_poor', weight)) def ubi_pov(ubi_adult=0, ubi_kid=0, ubi_adult_citizen=0, ubi_kid_citizen=0): resources = ( df.spm_resources + ubi_adult * df.spm_n18 + ubi_kid * df.spm_nu18 + ubi_adult_citizen * df.spm_numadultcitizens + ubi_kid_citizen * df.spm_numkidcitizens) is_pov = resources < df.spm_povthreshold return (is_pov * df.w).sum() / df.w.sum() def ubi_cost(ubi_adult=0, ubi_kid=0, ubi_adult_citizen=0, ubi_kid_citizen=0): ubi_adult_cost = (ubi_adult * df.is_adult * df.w).sum() ubi_kid_cost = (ubi_kid * df.is_kid * df.w).sum() ubi_adult_citizen_cost = (ubi_adult_citizen * df.is_adult_citizen * df.w).sum() ubi_kid_citizen_cost = (ubi_kid_citizen * df.is_kid_citizen * df.w).sum() return ubi_adult_cost + ubi_kid_cost + ubi_adult_citizen_cost + ubi_kid_citizen_cost ubi_cost(6000, 6000) / 1e12 ubi_cost(0, 0, 6000, 6000) / 1e12 ubi_cost(7800, 0) / 1e12 ubi_pov(0, 0) ubi_pov(6000, 6000) ubi_pov(0, 0, 6000, 6000) 1 - ubi_pov(6000, 6000) / ubi_pov(0, 0) ubi_pov(7800, 0) 1 - ubi_pov(7800, 0) / ubi_pov(0, 0) ubi_cost(12000, 0) / 1e12 # ## Miscellaneous print_pov(df[df.is_citizen]) print_pov(df[~df.is_citizen]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Description about LUNA16 dataset # The LUNA16 dataset was created for the challenge of LUng Nodule Analysis 2016 including 888 CT scans, which were gathered from LIDC-IDRI with slice thickness less than 3mm. There are totally 36,378 annotations of nodules that were marked by more than one radiologists, while there are 2,290, 1,602, 1,186, and 777 nodules annotated by at least 1, 2, 3, or 4 radiologists, respectively. Nodules annotated by at least 3 radiologists are regarded as true nodules, whose annotations of diameters and positions are the average of annotation in LIDC-IDRI. # # # Data preprocessing # Before training the model, we should remove unwanted parts of chest CT Scans wich the dataset of luna16 contains, also we should normalize all of the 3d images to be in the same characteristics as "spacing" between voxels. # If you're new to medical imaging (just like me), read this [wiki page](https://www.slicer.org/wiki/Coordinate_systems#Image_coordinate_system) to acquaint yourself with the coordinate system of medical images. 
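# As a small concrete example of that coordinate system: the nodule annotations in the dataset's `annotations.csv` are given in world coordinates (millimetres), while the arrays we work with are indexed by voxel. Converting between the two only needs the scan's origin and spacing. The sketch below is illustrative only; it assumes all three arguments are ordered (z, y, x), matching the `load_itk` helper defined later in this notebook, and it ignores any non-trivial direction matrix.
# +
import numpy as np

def world_to_voxel(world_coord, origin, spacing):
    # shift by the scan origin, then divide by the voxel size along each axis
    return np.rint((np.asarray(world_coord) - origin) / spacing).astype(int)

# e.g. world_to_voxel([-120.5, 40.0, 65.3], origin, spacing) once a scan has been loaded
# -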
# # For a quick review to those of you who have had experience with medical images, you may remember some of it looking at this image: # ![coordinate_system](https://www.slicer.org/w/img_auth.php/d/de/Image_Coordinats.png) # * **what is slice thickness?** # # according to [this](https://www.materialise.com/en/faq/what-difference-between-slice-thickness-and-slice-increment), Slice thickness and slice increment are central concepts that surround CT/MRI imaging. Slice thickness refers to the (often axial) resolution of the scan (2 mm in the illustration). Slice Increment refers to the movement of the table/scanner for scanning the next slice (varying from 1 mm to 4 mm in the illustration). # ![alt text](http://www.materialise.com/sites/default/files/image-uploads/pages/Medical/slice_increment.jpg) # # # * **What are axial, sagittal, coronal?** # # These are image reconstruction planes. # The picture below shows everything needed: # # ![axial, sagittal, coronal](https://www.ipfradiologyrounds.com/_images/reconstruction-planes.png) # # # imports from PIL import Image from glob import glob import pandas as pd import skimage, os from skimage.morphology import ball, disk, dilation, binary_erosion, remove_small_objects, erosion, closing, reconstruction, binary_closing from skimage.measure import label,regionprops, perimeter from skimage.morphology import binary_dilation, binary_opening, convex_hull_image from skimage.filters import roberts, sobel from skimage import measure, feature from skimage.segmentation import clear_border from skimage import data from scipy import ndimage as ndi import matplotlib.pyplot as plt from mpl_toolkits.mplot3d.art3d import Poly3DCollection import pydicom import scipy.misc import numpy as np import SimpleITK as sitk # # Load image # # To load medical images of the dataset (.mhd files) we will use the SimpleITK package, which its usage is as simple as the below method. def load_itk(filename): # Reads the image using SimpleITK itkimage = sitk.ReadImage(filename) # Convert the image to a numpy array first and then shuffle the dimensions to get axis in the order z,y,x image_array = sitk.GetArrayFromImage(itkimage) # Read the origin of the ct_scan, will be used to convert the coordinates from world to voxel and vice versa. origin = np.array(list(reversed(itkimage.GetOrigin()))) # Read the spacing along each dimension spacing = np.array(list(reversed(itkimage.GetSpacing()))) return image_array, origin, spacing # Now we want to read a sample file. # At first, you should download [dataset](http://academictorrents.com/collection/luna-lung-nodule-analysis-16---isbi-2016-challenge) (which may take a long time, but it is ok to download just a single subset of it plus CSV files for the first steps). The description of the dataset is available at [challenge page](https://luna16.grand-challenge.org/data/). # ### List all available *.mhd files at subset0 of the dataset luna_subset_path = '/Users/mostafa/Desktop/dsb_analyse/input/subset0/' file_list=glob(luna_subset_path+"*.mhd") # Now let's take a look at voxel values to know the distribution of a sample of them. 
# + from matplotlib import pyplot as plt img, origin, spacing = load_itk(file_list[0]) first_patient_pixels = img plt.hist(first_patient_pixels.flatten(), bins=80, color='c') plt.xlabel("Hounsfield Units (HU)") plt.ylabel("Frequency") plt.show() # - # # Resample all images to isomorphic spacing # Since CT Scans have different spacings between voxels, which is because of that the different devices have different configurations, so we unify all of the 3d images spacings into isomorphic form of 1mm in each axis (or sagittal, coronal, and axial plane). def resample(image, previous_spacing, new_spacing=[1,1,1]): # Determine current pixel spacing spacing = np.array(previous_spacing, dtype=np.float32) resize_factor = spacing / new_spacing new_real_shape = image.shape * resize_factor new_shape = np.round(new_real_shape) real_resize_factor = new_shape / image.shape new_spacing = spacing / real_resize_factor image = scipy.ndimage.interpolation.zoom(image, real_resize_factor, mode='nearest') return image, new_spacing img2, spacing2 = resample(img, spacing) print(img.shape, spacing) print(img2.shape, spacing2) # So, as you see, the image shapes have been changed as well. # # Removing other body organs from image # # At the next step we would like to clean up the 3d array of CT Scan, because we like to tell our neural network to pay attention to really important parts, and not to get lost looking at a large amount of unnecessary information. # # To reach this, we will go through these steps: # 1. Convert the original 3d image into a binary image. # 2. Remove the blobs connected to the border of the image. # 3. Label the connected points of the image. # 4. Keep the labels with 2 largest areas and segment two lungs. # 5. Fill in the small holes inside the mask of lungs which we separate right and left lung. # 6. Fill the convex hull of each lung. # 7. Joint two separated right and left lungs. # 8. Closure operation with a disk of radius 10. This operation is to keep nodules attached to the lung wall. # 9. Superimpose the binary mask on the input image. # # # In summary, we segment lungs and make a mask from them, then throw away the voxels out of that mask at the original image. # (I have tried these steps to be similar to ones at [the paper of grt123 team](https://arxiv.org/abs/1711.08324), part B. Preprocessing) # # I have to add that some of the steps are borrowed from the luna16 challenge tutorial for preprocessing [here](https://www.kaggle.com/gzuidhof/full-preprocessing-tutorial). # But the key step is to fill the convex hull, which is from the mentioned paper. def get_segmented_lungs(im, plot=False): ''' This funtion segments the lungs from the given 2D slice. ''' plt_number = 0 # Original image label: 0 if plot: f, plots = plt.subplots(12, 1, figsize=(5, 40)) plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(im, cmap=plt.cm.bone) plt_number += 1 # Step 1: Convert into a binary image. # image label: 1 binary = im < -604 if plot: plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(binary, cmap=plt.cm.bone) plt_number += 1 # Step 2: Remove the blobs connected to the border of the image. # image label: 2 cleared = clear_border(binary) if plot: plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(cleared, cmap=plt.cm.bone) plt_number += 1 # Step 3: Label the image. 
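# (Labelling assigns a distinct integer id to every connected region of the binary mask, so the two largest regions, i.e. the lungs, can be picked out by area in the next step.)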
# image label: 3 label_image = label(cleared) if plot: plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(label_image, cmap=plt.cm.bone) plt_number += 1 # Step 4: Keep the labels with 2 largest areas and segment two lungs. # image label: 4 areas = [r.area for r in regionprops(label_image)] areas.sort() labels = [] if len(areas) > 2: for region in regionprops(label_image): if region.area < areas[-2]: for coordinates in region.coords: label_image[coordinates[0], coordinates[1]] = 0 else: coordinates = region.coords[0] labels.append(label_image[coordinates[0], coordinates[1]]) else: labels = [1, 2] if plot: plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(label_image, cmap=plt.cm.bone) plt_number += 1 # Step 5: Fill in the small holes inside the mask of lungs which we seperate right and left lung. r and l are symbolic and they can be actually left and right! # image labels: 5, 6 r = label_image == labels[0] l = label_image == labels[1] r_edges = roberts(r) l_edges = roberts(l) r = ndi.binary_fill_holes(r_edges) l = ndi.binary_fill_holes(l_edges) if plot: plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(r, cmap=plt.cm.bone) plt_number += 1 plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(l, cmap=plt.cm.bone) plt_number += 1 # Step 6: convex hull of each lung # image labels: 7, 8 r = convex_hull_image(r) l = convex_hull_image(l) if plot: plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(r, cmap=plt.cm.bone) plt_number += 1 plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(l, cmap=plt.cm.bone) plt_number += 1 # Step 7: joint two separated right and left lungs. # image label: 9 sum_of_lr = r + l binary = sum_of_lr > 0 if plot: plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(binary, cmap=plt.cm.bone) plt_number += 1 # Step 8: Closure operation with a disk of radius 10. This operation is # to keep nodules attached to the lung wall. # image label: 10 selem = disk(10) binary = binary_closing(binary, selem) if plot: plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(binary, cmap=plt.cm.bone) plt_number += 1 # Step 9: Superimpose the binary mask on the input image. # image label: 11 get_high_vals = binary == 0 im[get_high_vals] = 0 if plot: plots[plt_number].axis('off') plots[plt_number].set_title(f'{plt_number}') plots[plt_number].imshow(im, cmap=plt.cm.bone) plt_number += 1 return im from matplotlib import pyplot as plt tmp_2d_img = get_segmented_lungs(img2[200,:,:], True) # So, Let's transform whole 3d image. # # Remember img2 is resampled: img3 = np.asarray([get_segmented_lungs(im) for im in img2]) # Let's see the result of a single slice of the resulting image. from matplotlib import pyplot as plt plt.imshow(img3[201,:,:], cmap=plt.cm.bone) plt.show() # Compared to first image which was: from matplotlib import pyplot as plt plt.imshow(first_patient_pixels[int(201*733/366),:,:], cmap=plt.cm.bone) plt.show() # Well done! # # Now let's go furthur to the next step of preprocessing. # # # Normalizing the image # Based on the image below (which is from [here](http://radclass.mudr.org/content/hounsfield-units-scale-hu-ct-numbers)), we don't need a big range of voxel values. 
# ![hu](./figs/hu_values.png) # # It is pretty simple, we make a bonding on voxel values to be inside [-1200.0, 600.0] and if it was out of the bounding, it would be decreased or increased to be equal to 600 or -1200. # # Then we will scale values to be inside [0.0, 255.0]. It is like a tradition in deep learning applications. # def normalize(image): MIN_BOUND = -1200 MAX_BOUND = 600. image2 = (image - MIN_BOUND) / (MAX_BOUND - MIN_BOUND) image2[image2 > 1] = 1. image2[image2 < 0] = 0. image2 *= 255. return image2 img4 = normalize(img3) plt.imshow(img4[201,:,:], cmap=plt.cm.bone) plt.show() # # Zero centering # Then we will zero center images. But it is not per image, it is based on the mean of the whole dataset. # # After some searches, I found out that the mean value of Luna16 is almost 0.25 (before scaling to [0.0, 255.0]). # (This is another tradition;) ) # # **Warning: Do not zero center with the mean per image (like is done in some kernels on here). The CT scanners are calibrated to return accurate HU measurements. There is no such thing as an image with lower contrast or brightness like in normal pictures.** def zero_center(image): PIXEL_MEAN = 0.25 * 256 image2 = image - PIXEL_MEAN return image2 img5 = zero_center(img4) plt.imshow(img5[200,:,:], cmap=plt.cm.bone) plt.show() # # The End # # We have covered almost all of the preprocessing steps of [the paper of grt123 team](https://arxiv.org/abs/1711.08324) and the `preprocess` method of [CTScan class](https://github.com/s-mostafa-a/Luna16/blob/master/preprocess/_ct_scan.py). # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + from config import * ds = '36p' # Denoising Strategy: ct = 'pearson' #Correlation Type negative_corr = False tm = 'gce' # Thresholding Method: # It could be 'GCE' or 'user_defined' tv = 0.05 # Thresholding Value, For user defined tm enodes_name = 'all' # - import src.group_level_analysis.group_level_analysis as gla from src.group_level_analysis.general import * selected_set = 0 group_couples,snd,sep,all_sub_list,all_subjects,subjects_list,sg,str_loc,str_glob,strlist,tm,sgn,measures,RAWmeasures,intersect,removed,removed_subL,r,cols,subL,subLRAW,subL_adj,subLRAW_adj,meanM,meanG,meanM_bin,meanG_bin,meanM_adj,meanG_adj = general(ds,ct,negative_corr,tm,tv,enodes_name,selected_set,'mean_net','load') sortednd = { 'Frontoparietal': [14,28,57,62,72,78,80,81,82,83,84,85,90,96,110,143,144,170,178,194,208,242,246,252,253,256,258,260,261,262,263,264,265,268,270,276,277,290,312,323,324,328,349,350,358], 'Default': [13,25,26,27,29,30,31,32,33,34,60,61,63,64,65,66,67,68,69,70,71,73,74,75,76,86,88,93,97,118,122,127,128,129,131,132,138,148,149,150,154,160,161,169,175,176,179,193,205,209,210,211,212,213,214,240,241,243,244,245,247,248,249,250,251,254,255,266,273,302,307,308,309,311,329,330,340,341,344,355,356,359], 'Limbic': [87,89,91,92,109,117,121,130,133,134,163,164,165,171,267,269,271,272,297,299,301,310,313,314,343,345,351], 'Somatomotor': [7,8,11,23,35,38,39,40,50,51,52,53,54,55,99,100,101,102,103,106,114,123,124,167,172,173,174,187,188,203,215,218,219,220,230,231,232,233,234,235,279,280,281,282,283,284,286,294,303,304,318,347,352,353,354], 'VentralAttention': [24,36,37,42,43,56,58,59,77,98,104,105,107,108,111,112,113,146,147,166,168,177,191,204,207,216,217,222,223,236,237,238,239,257,278,285,287,288,289,291,292,293,326,327,346,348,357], 
'DorsalAttention': [1,9,10,16,41,44,45,46,47,48,49,79,94,95,115,116,135,136,137,139,140,142,156,189,190,196,206,221,224,225,226,227,228,229,259,274,275,295,296,315,316,317,319,320,322,325,336], 'Visual':[0,2,3,4,5,6,12,15,17,18,19,20,21,22,120,125,126,141,145,151,152,153,155,157,158,159,162,180,181,182,183,184,185,186,192,195,197,198,199,200,201,202,298,300,305,306,321,331,332,333,334,335,337,338,339,342] } sortedndLR = {} for Mod in nd: ndLR[Mod + '_L'] = [i for i in nd[Mod] if i<=180] ndLR[Mod + '_R'] = [i for i in nd[Mod] if i> 180] modulecolor = { 'Frontoparietal':'cd8221', 'Default':'d54758', 'Limbic':'909771', 'Somatomotor':'498ec4', 'VentralAttention':'d84bf3', 'DorsalAttention':'007a0d', 'Visual':'901d9e' } atlasdir=rootdir + '/references/HCP-MMP1' labelfile=os.path.join(atlasdir,'MMP_yeo2011_networks.csv') labeldata=pd.read_csv(labelfile) enc = labeldata.T.loc['MMP'] mod7idx = labeldata.T.loc['Yeo7'] mod7name = labeldata.T.loc['YeoDesc7'] sortedmod7idx = {} for i in range(360): if mod7idx[i] == 1: sortedmod7idx[i] = 7 if mod7idx[i] == 2: sortedmod7idx[i] = 4 if mod7idx[i] == 3: sortedmod7idx[i] = 6 if mod7idx[i] == 4: sortedmod7idx[i] = 5 if mod7idx[i] == 5: sortedmod7idx[i] = 3 if mod7idx[i] == 6: sortedmod7idx[i] = 1 if mod7idx[i] == 7: sortedmod7idx[i] = 2 import json def savejson(dic, path): with open(path, 'w') as fp: json.dump(dic, fp) # ### Toy Network toyjson = {'nodes': [{'id': 'L_V1_ROI', 'group': 1}, {'id': 'L_MST_ROI', 'group': 3}, {'id': 'L_V6_ROI', 'group': 2}, {'id': 'L_V2_ROI', 'group': 2}, {'id': 'L_V3_ROI', 'group': 4} ], 'links': [{'source': 'L_V1_ROI', 'target': 'L_V6_ROI', 'value': 331}, {'source': 'L_V1_ROI', 'target': 'L_V2_ROI', 'value': 741}, {'source': 'L_V2_ROI', 'target': 'L_V3_ROI', 'value': 621}, {'source': 'L_V2_ROI', 'target': 'L_V6_ROI', 'value': 457}, {'source': 'L_V3_ROI', 'target': 'L_V1_ROI', 'value': 403} ] } # ### Convert to JSON def json360_7_bin(M): jsondic = {"nodes":[], "links": []} for i in range(360): if i==119: continue nodesdic = {"id": enc[i], "group": int(sortedmod7idx[i])} jsondic["nodes"].append(nodesdic) for j in range(i+1,360): if j==119: continue if M[i][j]!= 0.0: linksdic = {"source": enc[i], "target": enc[j], "value": np.around(M[i][j],3)} jsondic["links"].append(linksdic) return jsondic def json360_7LR_bin(M): jsondic = {"nodes":[], "links": []} for i in range(360): if i==119: continue elif i <180: name = mod7name[i] + '_L' else: name = mod7name[i] + '_R' nodesdic = {"id": enc[i], "group": list(sortedndLR.keys()).index(name)} jsondic["nodes"].append(nodesdic) for j in range(i+1,360): if j==119: continue if M[i][j]!= 0.0: linksdic = {"source": enc[i], "target": enc[j], "value": np.around(M[i][j],3)} jsondic["links"].append(linksdic) return jsondic def json360_7_w(M): jsondic = {"nodes":[], "links": []} for i in range(360): if i==119: continue nodesdic = {"id": enc[i], "group": int(sortedmod7idx[i])} jsondic["nodes"].append(nodesdic) for j in range(i+1,360): if j==119: continue if M[i][j]!= 0.0: linksdic = {"source": enc[i], "target": enc[j], "value": np.around(M[i][j],3)} v = int(10* np.around(M[i][j],1)) jsondic["links"].extend([linksdic]*v) return jsondic def json360_7LR_w(M): jsondic = {"nodes":[], "links": []} for i in range(360): if i==119: continue elif i <180: name = mod7name[i] + '_L' else: name = mod7name[i] + '_R' nodesdic = {"id": enc[i], "group": list(sortedndLR.keys()).index(name)} jsondic["nodes"].append(nodesdic) for j in range(i+1,360): if j==119: continue if M[i][j]!= 0.0: linksdic = {"source": enc[i], 
"target": enc[j], "value": np.around(M[i][j],3)} v = int(10* np.around(M[i][j],1)) jsondic["links"].extend([linksdic]*v) return jsondic # ### Different Networks # delete Connections def rem_edges(M, remtype): newM = M.copy() if remtype == "L": for i in range(180,360): newM[i] = 0 newM = newM.T newM[i] = 0 newM = newM.T if remtype == "R": for i in range(180): newM[i] = 0 newM = newM.T newM[i] = 0 newM = newM.T if remtype == "intra": for Mod in sortednd: for i in sortednd[Mod]: for j in sortednd[Mod]: newM[i,j] = 0 if remtype == "intraLR": for Mod in sortedndLR: for i in sortedndLR[Mod]: for j in sortedndLR[Mod]: newM[i,j] = 0 if remtype == "inter": newM = newM - rem_edges(M, remtype = "intra") return newM remtypes = ["non", "L", "R", "intra", "intraLR"] # ### Save JSON Files # pth = rootdir + '/data/json' # for g in subjects_groups: # for remtype in remtypes: # M = rem_edges(meanM_bin[g], remtype) # if remtype not in ['L', 'R', 'intra']: # savejson(json360_7LR_bin(M), # pth + '/net360/group-%s_layout-7LR-bin_remtype-%s.json'%(g,remtype)) # savejson(json360_7LR_w(M), # pth + '/net360/group-%s_layout-7LR-w_remtype-%s.json'%(g,remtype)) # if remtype not in ['intraLR']: # savejson(json360_7_bin(M), # pth + '/net360/group-%s_layout-7-bin_remtype-%s.json'%(g,remtype)) # savejson(json360_7_w(M), # pth + '/net360/group-%s_layout-7-w_remtype-%s.json'%(g,remtype)) # # 1 7f1819 # # 2 e95924 # # 3 e6e01c # # 4 89c97a # # 5 3cc2db # # 6 3c54a4 # # 7 252160 # pth = rootdir + '/data/json' # for g in subjects_groups: # M = rem_edges(meanM_bin[g], remtype="inter") # savejson(json360_7LR_w(M), # pth + '/net360/group-%s_layout-7LR-w_remtype-%s.json'%(g,"inter")) # M = rem_edges(meanM_bin[g], remtype="intra") # savejson(json360_7LR_w(M), # pth + '/net360/group-%s_layout-7LR-w_remtype-%s.json'%(g,"intra")) # ### Activated and disactivated nodes def json360_7_w1(M1,M2): jsondic = {"nodes":[], "links": []} for i in range(360): if i==119: continue nodesdic = {"id": enc[i], "group": int(sortedmod7idx[i])} jsondic["nodes"].append(nodesdic) for j in range(i+1,360): if j==119: continue if M1[i][j]!= 0.0: linksdic = {"source": enc[i], "target": enc[j], "value": np.around(M1[i][j],3), "linktype": "activated" } v = int(10* np.around(M1[i][j],1)) jsondic["links"].extend([linksdic]*v) if M2[i][j]!= 0.0: linksdic = {"source": enc[i], "target": enc[j], "value": np.around(M2[i][j],3), "linktype": "deactivated" } v = int(10* np.around(M2[i][j],1)) jsondic["links"].extend([linksdic]*v) return jsondic pth = rootdir + '/data/json' for g in subjects_groups[1:]: activatedM = meanM_bin[g] - meanM_bin['CN'] deactivatedM = meanM_bin['CN'] - meanM_bin[g] activatedM = (activatedM > np.mean(activatedM) + 2 * np.std(activatedM)) * activatedM deactivatedM = (deactivatedM > np.mean(deactivatedM) + 2 * np.std(deactivatedM)) * deactivatedM M1 = rem_edges(activatedM, remtype="intra") M2 = rem_edges(deactivatedM, remtype="intra") savejson(json360_7_w1(M1,M2), pth + '/alternated-%s_layout-7-w_remtype-%s.json'%(g,"intra")) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Training the Models # ### Prepare the functions # + from preprocessing import preprocessing as pre import numpy as np import pandas as pd import matplotlib.pyplot as plt from sklearn.pipeline import make_pipeline from sklearn.svm import SVC from sklearn.neighbors import KNeighborsClassifier from 
sklearn.ensemble import RandomForestClassifier from sklearn.linear_model import LogisticRegression from xgboost import XGBClassifier from sklearn.model_selection import KFold from sklearn.model_selection import GridSearchCV from sklearn.model_selection import StratifiedKFold from sklearn.model_selection import train_test_split from sklearn.preprocessing import Imputer from sklearn.metrics import make_scorer from sklearn.metrics import average_precision_score from sklearn.metrics import accuracy_score from sklearn.metrics import mean_squared_error from sklearn.preprocessing import StandardScaler import warnings from sklearn.exceptions import DataConversionWarning from sklearn.utils.testing import ignore_warnings from sklearn.exceptions import ConvergenceWarning warnings.simplefilter(action='ignore', category=FutureWarning) warnings.filterwarnings(action='ignore', category=DataConversionWarning) # + def prepare_data(dta): # dta = dta.drop(["q"], axis=1) X = dta.loc[:, dta.columns != 'actual'] y = dta.loc[:, dta.columns == 'actual'] return X,y def ML_logistic(X_other, X_test, y_other, y_test, kf, random_state): param_grid={"C":np.logspace(-5,4,num=100), "penalty":["l1","l2"]} logreg = LogisticRegression(solver="saga", max_iter=10000,n_jobs=-1,random_state=random_state) grid = GridSearchCV(logreg,param_grid=param_grid, scoring=make_scorer(accuracy_score), cv=kf, return_train_score=True,iid=True) grid.fit(X_other,y_other) return grid, grid.score(X_test, y_test) @ignore_warnings(category=ConvergenceWarning) def ML_svm(X_other, X_test, y_other, y_test, kf): param_grid = {'C': np.logspace(-4,4,num=30), 'gamma': np.logspace(-4,4,num=30)} svc = SVC(probability=True, max_iter=10000) grid = GridSearchCV(svc, param_grid=param_grid, scoring=make_scorer(accuracy_score),cv=kf, return_train_score=True,iid=True) grid.fit(X_other,y_other) return grid, grid.score(X_test, y_test) def ML_randomforest(X_other, X_test, y_other, y_test, kf, random_state): import random param_grid = {'max_features': ['auto', 'sqrt', 'log2'], 'max_depth': random.sample(range(2, 100), 10), 'min_samples_split': random.sample(range(2, 100), 10)} reg = RandomForestClassifier(random_state=random_state, n_jobs=-1, n_estimators=100) grid = GridSearchCV(reg,param_grid=param_grid, scoring=make_scorer(accuracy_score),cv=kf, return_train_score=True,iid=True) grid.fit(X_other,y_other) return grid, grid.score(X_test, y_test) def ML_XGB(X_other, X_test, y_other, y_test, kf, random_state): import random param_grid = {'gamma': np.logspace(-4,4,num=10), 'max_depth':random.sample(range(10, 25), 4), 'n_estimators': random.sample(range(200, 1000), 10)} xgb = XGBClassifier(learning_rate = 0.01, n_jobs=-1, random_state=random_state) grid = GridSearchCV(xgb,param_grid=param_grid, scoring=make_scorer(accuracy_score),cv=kf, return_train_score=True,iid=True) grid.fit(X_other,y_other) return grid, grid.score(X_test, y_test) def ML_nn(X_other, X_test, y_other, y_test, kf): import random param_grid = {'n_neighbors': random.sample(range(1, 100), 20), 'weights':["distance", "uniform"], 'metric': ["euclidean", "manhattan"]} knc = KNeighborsClassifier(n_jobs=-1) grid = GridSearchCV(knc,param_grid=param_grid, scoring=make_scorer(accuracy_score),cv=kf, return_train_score=True,iid=True) grid.fit(X_other,y_other) return grid, grid.score(X_test, y_test) def results(grid, test_score, ML_type,x, dta_name, time): from datetime import datetime import pytz central = pytz.timezone("US/eastern") print(' ') print('###########################################', "Data 
Set:",dta_name, '###########################################') print('###########################################', "ML Type:", ML_type, '###########################################') results = pd.DataFrame(grid.cv_results_) results["Run-Time (minutes)"]=time/60 dateTimeObj = datetime.now().astimezone(central) print("Completion Time:",dateTimeObj.hour, ':', dateTimeObj.minute, ':', dateTimeObj.second) print('Run time (minutes):',time/60) print('CV MSE:',-np.around(results[results['rank_test_score'] == 1]['mean_test_score'].values[0],2), '+/-',np.around(results[results['rank_test_score'] == 1]['std_test_score'].values[0],2)) print('test MSE:',-np.around(test_score,2)) print('Random State:',x) print(grid.best_estimator_) print('Best Accuracy:', grid.best_score_) print('Best Index:', grid.best_index_) print('########################################################################################################') print(' ') return results def ML_pipeline(X, y, random_state, n_folds,x, dta_name): from timeit import default_timer as timer from datetime import datetime import pytz central = pytz.timezone("US/eastern") dateTimeObj = datetime.now().astimezone(central) print("Start Time:",dateTimeObj.hour, ':', dateTimeObj.minute, ':', dateTimeObj.second) # import pdb; pdb.set_trace() # X.index = X["question"] # y.index = X["question"] X_other, X_test, y_other, y_test = train_test_split(X, y, test_size=0.2, random_state = random_state) X_other, enc_onehot, s_scaler = pre.preprocess_Xtrain(pre,X_other) X_test = pre.preprocess_Xtest(pre,X_test, enc_onehot, s_scaler) y_other, le = pre.preprocess_ytrain(pre, y_other) y_test = pre.preprocess_ytest(pre, y_test, le) columns = [] for col in X_other.columns: if "[" in col: col = re.sub("[", "\(", col) elif "]" in col: col = re.sub("]", "\)", col) columns.append(col) X_other.columns = columns X_test.columns = columns y_test.columns = ["actual"] y_other.columns = ["actual"] X_other = X_other.drop(["real_expert_number"],axis=1) X_test = X_test.drop(["real_expert_number"],axis=1) kf = KFold(n_splits=n_folds,shuffle=True,random_state=random_state) # Log Regression start = timer() log_grid, log_grid_score = ML_logistic(X_other, X_test, y_other, y_test, kf, random_state) end = timer() results_log = results(log_grid, log_grid_score, "Log Regression",x, dta_name, (end - start)) # SVM Regression start = timer() svm_grid, svm_grid_score = ML_svm(X_other, X_test, y_other, y_test, kf) end = timer() results_svm = results(svm_grid, svm_grid_score, "SVM Regression",x,dta_name,(end - start)) # Random Forest Regression start = timer() randf_grid, randf_grid_score = ML_randomforest(X_other, X_test, y_other, y_test, kf=kf, random_state=random_state) end = timer() results_randf = results(randf_grid, randf_grid_score, "Random Forest Classifier",x,dta_name,(end - start)) # # XGB Regressor # start = timer() # xgb_grid, xgb_grid_score = ML_XGB(X_other, X_test, y_other, y_test, # kf=kf, random_state=random_state) # end = timer() # results_xgb = results(xgb_grid, xgb_grid_score, "XGB Regressor",x,dta_name,(end - start)) # Nearest Neighbor Forest Regression start = timer() nn_grid, nn_grid_score = ML_nn(X_other, X_test, y_other, y_test, kf) end = timer() results_nn = results(nn_grid, nn_grid_score, "Nearest Neighbor",x,dta_name,(end - start)) ml_dta = pd.concat([results_log, results_svm, results_randf, results_nn]) return ml_dta def run_ML(dataset): import re count = 0 count = count+1 dta = dataset dta_name = "with_topics" X,y = prepare_data(dta) for x in range(1,10): 
print("Round:",x,"of 10") ml_dta = ML_pipeline(X,y,218*x,5,x*218, dta_name) if x == 1: result_ML_dta = ml_dta.copy() else: result_ML_dta = pd.concat([result_ML_dta, ml_dta]) total = result_ML_dta.copy() return total # - # ### Extra Data # + import re # extra data with_topic = pd.read_csv(f"Final_Datasets/not_preprocessed.csv") # q = with_topic["Unnamed: 0"] # with_topic = with_topic.set_index('Unnamed: 0') with_topic.columns = ['question', 'actual', 'confidence_(0.5, 0.55]', 'confidence_(0.55, 0.6]', 'confidence_(0.6, 0.65]', 'confidence_(0.65, 0.7]', 'confidence_(0.7, 0.75]', 'confidence_(0.75, 0.8]', 'confidence_(0.8, 0.85]', 'confidence_(0.85, 0.9]', 'confidence_(0.9, 0.95]', 'confidence_(0.95, 1.0]', 'sc_(0.0, 0.05]', 'sc_(0.05, 0.1]', 'sc_(0.1, 0.15]', 'sc_(0.15, 0.2]', 'sc_(0.2, 0.25]', 'sc_(0.25, 0.3]', 'sc_(0.3, 0.35]', 'sc_(0.35, 0.4]', 'sc_(0.4, 0.45]', 'sc_(0.45, 0.5]', 'sc_(0.5, 0.55]', 'sc_(0.55, 0.6]', 'sc_(0.6, 0.65]', 'sc_(0.65, 0.7]', 'sc_(0.7, 0.75]', 'sc_(0.75, 0.8]', 'sc_(0.8, 0.85]', 'sc_(0.85, 0.9]', 'sc_(0.9, 0.95]', 'sc_(0.95, 1.0]', 'number_of_participants', 'own', 'topic', 'mean_sc', 'std_sc', 'min_sc', 'p25_sc', 'p50_sc', 'p75_sc', 'max_sc', 'mean_conf', 'std_conf', 'min_conf', 'p25_conf', 'p50_conf', 'p75_conf', 'max_conf', 'skew_own', 'skew_sc', 'skew_conf', 'skew_row_average', 'skew_row_skew', 'minority_percentage', 'real_expert_number', 'P_expert_number_rT', 'P_expert_number_rF', 'MEAN_P_expert_rT', 'MEAN_P_expert_rF', 'prelec_prediction', 'majority_prediction'] RUNTHISDATAFRAME = with_topic RUNTHISDATAFRAME = RUNTHISDATAFRAME.loc[:,~RUNTHISDATAFRAME.columns.duplicated()] # - # # Run the Algorithm results = run_ML(RUNTHISDATAFRAME) results.to_csv("[1st_training]_raw_results.csv") # # Choosing the best parameters for each model top = results[results["rank_test_score"] == 1] log = top.dropna(subset=['param_penalty']) log = log.reset_index() svm = top.dropna(subset=['param_gamma']) svm = svm.reset_index() rf = top.dropna(subset=['param_max_depth']) rf = rf.reset_index() nn = top.dropna(subset=['param_n_neighbors']) nn = nn.reset_index() best_rows = [] best_rows.append(log.loc[log['mean_test_score'].idxmax()]) best_rows.append(svm.loc[svm['mean_test_score'].idxmax()]) best_rows.append(rf.loc[rf['mean_test_score'].idxmax()]) best_rows.append(nn.loc[nn['mean_test_score'].idxmax()]) best_results = pd.DataFrame(best_rows, index =['log', 'svm', 'rf', 'nn']) results.to_csv("[1st_training]_raw_results.csv") best_results.to_csv("training_best_results.csv") # # Standard Deviation and Mean # + top = results[results["rank_test_score"] == 1] log = top.dropna(subset=['param_penalty']) log = log.groupby("Run-Time (minutes)").mean() # log = log.mean() sd(log["mean_test_score"]) svm = top.dropna(subset=['param_gamma']) svm = svm.groupby("Run-Time (minutes)").mean() # svm = svm.mean() rf = top.dropna(subset=['param_max_depth']) rf = rf.groupby("Run-Time (minutes)").mean() # rf = rf.mean() nn = top.dropna(subset=['param_n_neighbors']) nn = nn.groupby("Run-Time (minutes)").mean() # nn = nn.mean() # - log["mean_test_score"].std() svm["mean_test_score"].std() rf["mean_test_score"].std() nn["mean_test_score"].std() # # Feature Importance # Change the outputs for the function def ML_randomforest(X_other, X_test, y_other, y_test, kf, random_state): import random param_grid = {'max_features': ['auto', 'sqrt', 'log2'], 'max_depth': random.sample(range(2, 100), 10), 'min_samples_split': random.sample(range(2, 100), 10)} reg = RandomForestClassifier(random_state=random_state, n_jobs=-1, 
n_estimators=100) grid = GridSearchCV(reg,param_grid=param_grid, scoring=make_scorer(accuracy_score),cv=kf, return_train_score=True,iid=True) grid.fit(X_other,y_other) return grid, X_test, y_test # + import re # extra data with_topic = pd.read_csv(f"Final_Datasets/not_preprocessed.csv") # q = with_topic["Unnamed: 0"] # with_topic = with_topic.set_index('Unnamed: 0') with_topic.columns = ['question', 'actual', 'confidence_(0.5, 0.55]', 'confidence_(0.55, 0.6]', 'confidence_(0.6, 0.65]', 'confidence_(0.65, 0.7]', 'confidence_(0.7, 0.75]', 'confidence_(0.75, 0.8]', 'confidence_(0.8, 0.85]', 'confidence_(0.85, 0.9]', 'confidence_(0.9, 0.95]', 'confidence_(0.95, 1.0]', 'sc_(0.0, 0.05]', 'sc_(0.05, 0.1]', 'sc_(0.1, 0.15]', 'sc_(0.15, 0.2]', 'sc_(0.2, 0.25]', 'sc_(0.25, 0.3]', 'sc_(0.3, 0.35]', 'sc_(0.35, 0.4]', 'sc_(0.4, 0.45]', 'sc_(0.45, 0.5]', 'sc_(0.5, 0.55]', 'sc_(0.55, 0.6]', 'sc_(0.6, 0.65]', 'sc_(0.65, 0.7]', 'sc_(0.7, 0.75]', 'sc_(0.75, 0.8]', 'sc_(0.8, 0.85]', 'sc_(0.85, 0.9]', 'sc_(0.9, 0.95]', 'sc_(0.95, 1.0]', 'number_of_participants', 'own', 'topic', 'mean_sc', 'std_sc', 'min_sc', 'p25_sc', 'p50_sc', 'p75_sc', 'max_sc', 'mean_conf', 'std_conf', 'min_conf', 'p25_conf', 'p50_conf', 'p75_conf', 'max_conf', 'skew_own', 'skew_sc', 'skew_conf', 'skew_row_average', 'skew_row_skew', 'minority_percentage', 'real_expert_number', 'P_expert_number_rT', 'P_expert_number_rF', 'MEAN_P_expert_rT', 'MEAN_P_expert_rF', 'prelec_prediction', 'majority_prediction'] RUNTHISDATAFRAME = with_topic RUNTHISDATAFRAME = RUNTHISDATAFRAME.loc[:,~RUNTHISDATAFRAME.columns.duplicated()] dta = RUNTHISDATAFRAME X,y = prepare_data(dta) X_other, X_test, y_other, y_test = train_test_split(X, y, test_size=0.2, random_state = 1090) X_other, enc_onehot, s_scaler = pre.preprocess_Xtrain(pre,X_other) X_test = pre.preprocess_Xtest(pre,X_test, enc_onehot, s_scaler) y_other, le = pre.preprocess_ytrain(pre, y_other) y_test = pre.preprocess_ytest(pre, y_test, le) columns = [] for col in X_other.columns: if "[" in col: col = re.sub("[", "\(", col) elif "]" in col: col = re.sub("]", "\)", col) columns.append(col) X_other.columns = columns X_test.columns = columns y_test.columns = ["actual"] y_other.columns = ["actual"] X_other = X_other.drop(["real_expert_number"],axis=1) X_test = X_test.drop(["real_expert_number"],axis=1) kf = KFold(n_splits=5,shuffle=True,random_state=1090) z = [] for i in range(1,10): z.append(i*218) rand = sum(z)/9 #1090 grid, X_test, y_test = ML_randomforest(X_other, X_test, y_other, y_test, kf=kf, random_state=rand) # - import pickle file = open('grid.save', 'wb') pickle.dump((grid,X_test,y_test),file) file.close() # + jupyter={"outputs_hidden": true} ftr_names = X_other.columns import pickle file = open('grid.save', 'rb') grid, X_test, y_test = pickle.load(file) file.close() nr_runs = 10 scores = np.zeros([len(ftr_names),nr_runs]) test_score = grid.score(X_test,y_test) print('test score = ',test_score) print('test baseline = ',np.sum(y_test == 0)/len(y_test)) # loop through the features for i in range(len(ftr_names)): print('shuffling '+str(ftr_names[i])) acc_scores = [] for j in range(nr_runs): X_test_shuffled = X_test.copy() X_test_shuffled[ftr_names[i]] = np.random.permutation(X_test[ftr_names[i]].values) acc_scores.append(grid.score(X_test_shuffled,y_test)) print(' shuffled test score:',np.around(np.mean(acc_scores),3),'+/-',np.around(np.std(acc_scores),3)) scores[i] = acc_scores # + jupyter={"outputs_hidden": true} sorted_indcs = np.argsort(np.mean(scores,axis=1))[::-1] 
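# order the features by their mean score after shuffling before building the horizontal box plot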
plt.rcParams.update({'font.size': 14}) plt.figure(figsize=(15,30)) plt.boxplot(scores[sorted_indcs].T,labels=ftr_names[sorted_indcs],vert=False) plt.axvline(test_score,label='test score') plt.title("Permutation Importances (test set)") plt.xlabel('score with perturbed feature') plt.legend() plt.tight_layout() plt.show() # - # # Confusion Matrix for the best Model # + # https://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html import numpy as np import matplotlib.pyplot as plt from sklearn.utils.multiclass import unique_labels from sklearn.metrics import confusion_matrix def plot_confusion_matrix(y_true, y_pred, classes, normalize=False, title=None, cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if not title: if normalize: title = 'Normalized confusion matrix' else: title = 'Confusion matrix, without normalization' # Compute confusion matrix cm = confusion_matrix(y_true, y_pred) # Only use the labels that appear in the data classes = np.array(classes) classes = classes[unique_labels(y_true, y_pred)] if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] fig, ax = plt.subplots() im = ax.imshow(cm, interpolation='nearest', cmap=cmap) ax.figure.colorbar(im, ax=ax) # We want to show all ticks... ax.set(xticks=np.arange(cm.shape[1]), yticks=np.arange(cm.shape[0]), # ... and label them with the respective list entries xticklabels=classes, yticklabels=classes, title=title, ylabel='True label', xlabel='Predicted label') # Rotate the tick labels and set their alignment. plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor") # Loop over data dimensions and create text annotations. fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. 
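# Annotate each cell with its count (or rate, when normalized); the text is drawn white
# on cells darker than half the maximum value and black otherwise, so it stays readable.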
for i in range(cm.shape[0]): for j in range(cm.shape[1]): ax.text(j, i, format(cm[i, j], fmt), ha="center", va="center", color="white" if cm[i, j] > thresh else "black") fig.tight_layout() return ax # + with_topic = pd.read_csv(f"Final_Datasets/not_preprocessed.csv") # q = with_topic["Unnamed: 0"] # with_topic = with_topic.set_index('Unnamed: 0') with_topic.columns = ['question', 'actual', 'confidence_(0.5, 0.55]', 'confidence_(0.55, 0.6]', 'confidence_(0.6, 0.65]', 'confidence_(0.65, 0.7]', 'confidence_(0.7, 0.75]', 'confidence_(0.75, 0.8]', 'confidence_(0.8, 0.85]', 'confidence_(0.85, 0.9]', 'confidence_(0.9, 0.95]', 'confidence_(0.95, 1.0]', 'sc_(0.0, 0.05]', 'sc_(0.05, 0.1]', 'sc_(0.1, 0.15]', 'sc_(0.15, 0.2]', 'sc_(0.2, 0.25]', 'sc_(0.25, 0.3]', 'sc_(0.3, 0.35]', 'sc_(0.35, 0.4]', 'sc_(0.4, 0.45]', 'sc_(0.45, 0.5]', 'sc_(0.5, 0.55]', 'sc_(0.55, 0.6]', 'sc_(0.6, 0.65]', 'sc_(0.65, 0.7]', 'sc_(0.7, 0.75]', 'sc_(0.75, 0.8]', 'sc_(0.8, 0.85]', 'sc_(0.85, 0.9]', 'sc_(0.9, 0.95]', 'sc_(0.95, 1.0]', 'number_of_participants', 'own', 'topic', 'mean_sc', 'std_sc', 'min_sc', 'p25_sc', 'p50_sc', 'p75_sc', 'max_sc', 'mean_conf', 'std_conf', 'min_conf', 'p25_conf', 'p50_conf', 'p75_conf', 'max_conf', 'skew_own', 'skew_sc', 'skew_conf', 'skew_row_average', 'skew_row_skew', 'minority_percentage', 'real_expert_number', 'P_expert_number_rT', 'P_expert_number_rF', 'MEAN_P_expert_rT', 'MEAN_P_expert_rF', 'prelec_prediction', 'majority_prediction'] RUNTHISDATAFRAME = with_topic RUNTHISDATAFRAME = RUNTHISDATAFRAME.loc[:,~RUNTHISDATAFRAME.columns.duplicated()] dta = RUNTHISDATAFRAME X,y = prepare_data(dta) X, enc_onehot, s_scaler = pre.preprocess_Xtrain(pre,X) columns = [] for col in X.columns: if "[" in col: col = re.sub("[", "\(", col) elif "]" in col: col = re.sub("]", "\)", col) columns.append(col) X.columns = columns # - grid, X_test, y_test = ML_randomforest(X_other, X_test, y_other, y_test, kf=kf, random_state=872) predictions = grid.best_estimator_.predict(X_test) y_pred = predictions y_true = y["actual"] plot_confusion_matrix(y_test,y_pred,classes=['True','False']) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #import libraries import requests from datetime import datetime as dt from datetime import timedelta import pandas as pd # + # get events from n days ago iso = 562 #Look in ACLED documentation ISO code for number this is for Niger limit = 400 api_url = 'https://api.acleddata.com/acled/read?terms=accept&iso={}'.format(iso) print (api_url, type(api_url)) #creates request according to ACLED format specifications - p. 13 # - response = requests.get(api_url) data = response.json() data.keys() data['count'] # ### From the documentation we know this is the max return --- How can we get all the results? 
# + # Let's mkae a function that updates our search to get the new pages def ping_acled(api_url): ''' Takes one parameter search term for API ''' response = requests.get(api_url) data = response.json() return data # + results = [] # empty data strcture to store results num_results = 500 # condition to continue adding pages count = 0 # tracker of results page = 1 #Per the documentation each page will give us more results while num_results == 500: #if less 500 or 0 we know we have all the results print ("starting ", page, " ", num_results) #just to see our progress api_url = 'https://api.acleddata.com/acled/read?terms=accept&iso={}&page={}'.format(iso,page) #the search data = ping_acled(api_url) #call the previous function results.append(data['data']) #store in our results count += data['count'] #Track number of results num_results = data['count'] #update our condition page += 1 #update our page variable print ("Total Results ", count) #Track our progress # - #Now I want to put them together into one giant result super_list = [] for res in results: super_list += res print (len(super_list)) #convert it into an pandas data frame or just use your data structure and do more stuff niger_res = pd.DataFrame(super_list) niger_res.head() # ### Do the right thing, take some time to look at the codebook and see what these columns are niger_res.columns # ### Homework --- Make a map of some ACLED Data (absolutely use the code from the Global Terrorism Database exercise) # + # Import additional libraries from bokeh.plotting import figure, output_notebook, show #builds interactive graphs for python from bokeh.models import Range1d import math #this is used in graphic section to use the irrational number pi output_notebook() #Allows inline plotting for Juptyer notebook #Imports necessary aspects of Bokeh for plotting on a map from bokeh.tile_providers import get_provider, Vendors from pyproj import Transformer tile_provider = get_provider('STAMEN_TERRAIN') # + # niger_res.info() # + # Take the data results from Niger and get the lat/long of the attacks and the type of event country_map = niger_res[["latitude", 'longitude', 'event_type']] country_map[['latitude', 'longitude']] = country_map[['latitude', 'longitude']].astype(float) #see the data this time first 7 rows country_map.head(7) # + events_by_type = {} #make an empty datastructure (dictionary) to fill #This loop goes through each row and counts the number of entries by group for index, row in country_map.iterrows(): if row["event_type"] in events_by_type.keys(): events_by_type[row["event_type"]] += 1 #if group is in the dictionary add 1 attack else: events_by_type[row["event_type"]] = 1 #add group name to dictionary if not in dictionary events_by_type # - events = set(country_map.event_type) events #create pyproj transformer to convert form lat/long to web mercator transformer = Transformer.from_crs('epsg:4326','epsg:3857') # + map_dict = {} # empty dictionary to track events by lat long nan_count = {} # some data doesn't have a lat/long so we need to know what we are losing # Iterate through tables and associate group with lat/long for idx, row in country_map.iterrows(): if row['event_type'] in map_dict.keys(): if math.isnan(row["latitude"]): #This counts no data if row['event_type'] in nan_count.keys(): nan_count[row['event_type']] += 1 else: nan_count[row['event_type']] = 1 else: #This has to convert the lat/long to a mercator projection point = transformer.transform(row["latitude"],row["longitude"]) 
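# transform() maps (lat, lon) in EPSG:4326 to (x, y) metres in EPSG:3857 (web mercator),
# the coordinate system Bokeh's tile maps expect.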
map_dict[row['event_type']].append([point[0],point[1]]) #BOTH the if an else statement do the same thing but since it is a dictionary one needs to add the group name first else: if math.isnan(row["latitude"]): nan_count[row['event_type']] = 1 else: point = transformer.transform(row["latitude"],row["longitude"]) map_dict[row['event_type']] =[[point[0],point[1]]] #This tells how many attacks we are losing nan_count # + # Create the bounding box for Syria pts = [(23, -1), (12, 17)] bbox = [] for pt in transformer.itransform(pts): bbox.append(pt) # bbox # - NPA_x = [] NPA_y = [] for pt in map_dict["Violence against civilians"]: NPA_x.append(pt[0]) NPA_y.append(pt[1]) # + #Plots the bounding box p = figure(x_range=(bbox[0][0], bbox[1][0]),y_range=(bbox[0][1], bbox[1][1]),x_axis_type="mercator", y_axis_type="mercator") #add the map form the Bokeh map vendor in this case Stamen_Terrain --- see documentation p.add_tile(tile_provider) # Places a circle for each converted lat/long attack p.circle(x = NPA_x, y = NPA_y, color= "firebrick") #shows the plot show(p) # + ### This is the ACLED homework - I'm also using much ### of this code for my project - Cheers! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import datetime as dt import os import matplotlib.pyplot as plt import numpy as np from coropy.compartmental_models import SEIRModel, SEIRDModel # + # data DATA_PATH = os.path.join(os.pardir, 'data') COUNTRY = 'CRO' DATA = os.path.join(DATA_PATH, COUNTRY) CONFIRMED_CASES_PATH = os.path.join(DATA_PATH, COUNTRY, 'confirmed_cases.dat') RECOVERED_CASES_PATH = os.path.join(DATA_PATH, COUNTRY, 'recovered_cases.dat') DEATH_CASES_PATH = os.path.join(DATA_PATH, COUNTRY, 'death_cases.dat') TESTS_PATH = os.path.join(DATA_PATH, COUNTRY, 'tests.dat') confirmed_cases = np.loadtxt(CONFIRMED_CASES_PATH) recovered_cases = np.loadtxt(RECOVERED_CASES_PATH) death_cases = np.loadtxt(DEATH_CASES_PATH) daily_tests = np.loadtxt(TESTS_PATH) eff_dates=[dt.datetime(2020, 2, 25), dt.datetime(2020, 6, 1), dt.datetime(2020, 8, 8)] # + eff_population_scaler = 1 first_wave_eff_population = 2200 S0 = first_wave_eff_population * eff_population_scaler E0 = 3 * confirmed_cases[0] I0 = confirmed_cases[0] R0 = recovered_cases[0] D0 = death_cases[0] IC = (S0, E0, I0, R0, D0) S_tot, E_tot, I_tot, R_tot, D_tot = [], [], [], [], [] # past wave(s) start_idx = 0 for start_date, end_date in zip(eff_dates[:-1], eff_dates[1:]): end_idx = start_idx+abs((end_date - start_date).days) model = SEIRDModel() _, _ = model.fit(confirmed_cases[start_idx:end_idx], recovered_cases[start_idx:end_idx], death_cases[start_idx:end_idx], IC) (S, E, I, R, D) = model.simulate() S_tot.extend(S.tolist()) E_tot.extend(E.tolist()) I_tot.extend(I.tolist()) R_tot.extend(R.tolist()) D_tot.extend(D.tolist()) eff_population_scaler += 1 S0 = S0 * eff_population_scaler IC = (S0, 3 * I[-1], I[-1], R[-1], D[-1]) # update initial conditions start_idx = end_idx # update indexing # current wave model = SEIRDModel() _, _ = model.fit(confirmed_cases[start_idx:], recovered_cases[start_idx:], death_cases[start_idx:], IC) (S, E, I, R, D) = model.simulate() S_tot.extend(S.tolist()) E_tot.extend(E.tolist()) I_tot.extend(I.tolist()) R_tot.extend(R.tolist()) D_tot.extend(D.tolist()) # visualize fig, ax = plt.subplots() ax.plot(I_tot, 'b-', label='$I_{fit}(t)$') ax.plot(confirmed_cases - recovered_cases - 
death_cases, 'b.', label='$I_{data}(t)$') ax.plot(R_tot, 'r-', label='$R_{fit}(t)$') ax.plot(recovered_cases, 'r.', label='$R_{data}(t)$') ax.plot(D_tot, 'g-', label='$D_{fit}(t)$') ax.plot(death_cases, 'g.', label='$D_{data}(t)$') ax.set_xlabel('$\Delta t$ days from Feb, $25^{th}$') ax.set_ylabel('$N$') ax.legend() ax.grid() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + import requests from bs4 import BeautifulSoup comic_dictionary = {} # We will add each comics urls and other text info into a dictionary comic_index = 2000 # loop from 2000 - 2490 # - base_url = "https://xkcd.com/" + str(comic_index) + '/' page = requests.get(base_url) # page # page.status_code # page.content soup = BeautifulSoup(page.content, 'html.parser') # print(soup.prettify()) # list(soup.children) # [type(item) for item in list(soup.children)] comic = list(soup.find_all(id="comic")) # comic # print (base_url) # + comic_title = str(comic[0]) comic_title = comic_title.split(sep = "src")[0] comic_title = comic_title.split(sep = "alt")[1] comic_title = comic_title[2:len(comic_title)-2] print (comic_title) # + comic_alt = str(comic[0]) comic_alt = comic_alt.split(sep = "src")[2] print (comic_alt) comic_alt = comic_alt.split(sep = "title")[1] comic_alt = comic_alt[2:len(comic_alt)-15] print (comic_alt) # + comic_url_src = str(comic[0]) comic_url_src = comic_url_src.split(sep = "src")[1] comic_url_src = "http://" + comic_url_src[4:len(comic_url_src)-2] # print (comic_url_src) # - comic_dictionary[comic_index] = [base_url, comic_title, comic_alt, comic_url_src] comic_dictionary[comic_index] # + for comic_index in range(2000,2100): ## make the Url and wget it base_url = "https://xkcd.com/" + str(comic_index) + '/' page = requests.get(base_url) # page # page.status_code # page.content soup = BeautifulSoup(page.content, 'html.parser') # print(soup.prettify()) # list(soup.children) # [type(item) for item in list(soup.children)] comic = list(soup.find_all(id="comic")) # comic # print (base_url) ## String processing steps ## Step 1: Extract the Title comic_title = str(comic[0]) comic_title = comic_title.split(sep = "src")[0] comic_title = comic_title.split(sep = "alt")[1] comic_title = comic_title[2:len(comic_title)-2] ## Step 2: Extract the comic's alt-text comic_alt = str(comic[0]) comic_alt = comic_alt.split(sep = "src")[2] print (comic_alt) comic_alt = comic_alt.split(sep = "title")[1] comic_alt = comic_alt[2:len(comic_alt)-15] ## Step 3: Extract the url for the image comic_url_src = str(comic[0]) comic_url_src = comic_url_src.split(sep = "src")[1] comic_url_src = "http://" + comic_url_src[4:len(comic_url_src)-2] ## Now add all this data into the dictionary comic_dictionary[comic_index] = [base_url, comic_title, comic_alt, comic_url_src] # - comic_dictionary prin # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Fixed-Width File Reader Sample Usage # **Fixed-width files**: each field in a column is padded to a given number of bytes.
    # **Cython**: a Python-to-C/C++ compiler, used here to generate the Python bindings for the C++ code.
    # **Python vs C++**: simple but slow vs fast but complicated. # ## Goals # * C++ project to read fixed-width files into in-memory Apache Arrow tables. # * Includes Cython-made Python bindings for accessibility and integration with Python projects. # # We use Apache Arrow's CSV reader as a base since it required only fairly minimal changes for this application. # * Modify the reader to optionally decode on read-in. # * Modify the parser to use field widths instead of a delimiter to separate the input stream into fields. # * Modify the converter to optionally convert COBOL-formatted numeric types to standard numeric types. # # ## Use # For installation, see this repo's README. # # First, we'll create an example fixed-width file, including COBOL-formatted numbers, null values and odd spacing. # %%file sample.fwf aa bb cc hello 123} 3.56 hi 9129A NaN spaces N/A 7.8 NA 3{ 0 # Now we can run the module on the fixed-width file. See the README for a full list of options. Note that the converter uses preset lists/maps for null values and the COBOL values below. These can be modified through: # * convert_options.null_values = \[some new list\] # * convert_options.\[pos/neg\]\_values = {some new mapping} # + import pyfwfr as pf convert_options = pf.ConvertOptions(is_cobol=True, strings_can_be_null=True) parse_options = pf.ParseOptions([13, 7, 4]) table = pf.read_fwf('sample.fwf', parse_options, convert_options=convert_options) for column in table.columns: print(column) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + [markdown] id="yFkzcmtll543" # # Lab Assignment # # For this project you will measure and analyze the performance of two types of heat exchanger over a range of operating conditions in both co-current and counter-flow configurations. One the heat exchangers must be the double pipe exchanger. For the second you can choose any of the other four heat exchangers mounted on the GUNT WL 315C equipment. # # ## Experiments # # 1. Familiarize yourself with GUNT WL 315C equipment (30-45 minutes). # * Using the schematic mounted on the equipment, trace all lines so you are understand the valve settings need to bring hot and cold water into the equipment, route flow to a specific heat exchanger whilst keeping flow away from all other heat exchangers, change flows from co-current to counter current flow. # * Identify [ball valves and their operation](https://en.wikipedia.org/wiki/Ball_valve), including the multi-port ball valves used to configure co-current and counter-current operation. # * Turn on the instrumentation and laboratory computer. Familiarize yourself with setting the instruments to display and log data from a selected heat exchanger. # * Operate the equipment by varying flowrates, heat-exchangers, and display options. # #
    # # 2. Plan a set of experiments for each heat exchanger / flow configuration that you will operating. You will need to collect measurements over the full operating range of the equipment. Design your experiments by selecting at least 3 levels for the cold water flow, 3 levels for the hot water flow, and a desired number of replications for each operational setting. You will repeat this set of experiments for each heat exchanger, for co-current and counter-current flow. # # 3. Collect your data. You can use the data logging feature of the GUNT WL 315 C software installed on the laboratory computer. As back up, you should manually record your data into your laboatory notebook. Use "Tidy Data" principles for data collection. # # ## Analysis # # The analysis of your data should include the following elements: # # 1. Compute the enthalpy change for the hot and cold streams for every measurement. Compute the heat exchanger efficiency. Describe your findings. # # 2. Compute the heat transfer coefficient $U$ using the log mean temperature difference observed for each measurement. Create a chart of $U$ as functions of hot and cold water flowrates. Describe your findings. # # 3. For each heat exchanger, flow configuration combination, fit the model for the heat transfer coefficient described in the prior notebooks. Use the model to predict $U$ for each measurement. How well do your measurements of $U$ fit this model? Do you observe any systematic error in the comparison of the model predictions to the measurements? # # 4. For the double-pipe heat exchanger, using the inlet water flow rates and temperatures, and the heat transfer coefficient predicted by the model you fit above, compute the temperature profile and the estimated heat duty. Overlay your measured temperatures on this plot. How well does the model predict the measurements? Do this for at least one set of flowrates for co-current and counter-current operation. # # ## Progress Report # # Your progress report is due prior to the start of the second lab session. The progress report describe the experimental design, a preliminary analysis (items 1 and 2 above) for at least one heat exchanger in at least one flow configuration, and a description of any additional measurements you need to do in the second lab session. Any questions regarding the further analysis should be addressed following submission of the progress report. # # ## Final Report # # This is a "writing intensive" lab report. Special attention should be given to the instructions delivered during the third lab session. All analytical elements should be completed and included in the report. Data tables and any Python coding should be included in appendices to the report. # # Keep in mind that the audience for this report would be an engineer familiar with heat exchangers and seeking specific information on the your modeling and your analysis of the performance of this particular set of equipment. 
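#
# Below is a minimal sketch of analysis items 1 and 2 (heat duties, efficiency, and $U$ from the log-mean temperature difference), assuming counter-current flow, constant water properties, illustrative column names, and a placeholder heat-transfer area. None of these values come from the GUNT WL 315C documentation, so substitute your own measurements and equipment data.

# +
import numpy as np
import pandas as pd

CP_WATER = 4.18   # kJ/(kg K), assumed constant over the measured temperature range
AREA = 0.05       # m^2, placeholder heat-transfer area -- replace with the value for your exchanger

def lmtd(dT1, dT2):
    """Log-mean temperature difference of the two terminal temperature differences."""
    dT1, dT2 = np.asarray(dT1, dtype=float), np.asarray(dT2, dtype=float)
    return np.where(np.isclose(dT1, dT2), dT1, (dT1 - dT2) / np.log(dT1 / dT2))

def add_duty_and_U(df):
    """Append heat duties, efficiency, and U to a tidy table with one row per measurement.

    Assumed columns: m_hot, m_cold in kg/s and T_hot_in, T_hot_out, T_cold_in,
    T_cold_out in degrees C, for counter-current operation.
    """
    df = df.copy()
    df["Q_hot"] = df["m_hot"] * CP_WATER * (df["T_hot_in"] - df["T_hot_out"])      # kW released by hot stream
    df["Q_cold"] = df["m_cold"] * CP_WATER * (df["T_cold_out"] - df["T_cold_in"])  # kW absorbed by cold stream
    df["efficiency"] = df["Q_cold"] / df["Q_hot"]   # one common definition: fraction of released heat recovered
    # Counter-current terminal differences: hot inlet faces cold outlet, hot outlet faces cold inlet.
    dT1 = df["T_hot_in"] - df["T_cold_out"]
    dT2 = df["T_hot_out"] - df["T_cold_in"]
    df["U"] = (df["Q_hot"] * 1000.0) / (AREA * lmtd(dT1, dT2))  # W/(m^2 K)
    return df
# -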
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python2.7 with DSX Spark 2.0.2 # language: python # name: python2 # --- # + [markdown] deletable=true editable=true # ### Get current directory # + deletable=true editable=true import os os.getcwd() # + [markdown] deletable=true editable=true # ### Set the directory to main project folder # + deletable=true editable=true os.chdir('/user-home/1001/DSX_Projects/sms-spam-filter-using-hortonworks/') # + [markdown] deletable=true editable=true # ### Create the Spam Filter Scikit package folder # + deletable=true editable=true # ! mkdir SpamFilterScikit # ! mkdir SpamFilterScikit/SpamFilterScikit # + [markdown] deletable=true editable=true # ### Copying the scripts created to the package folder # + deletable=true editable=true # ! cp scripts/setup2.py SpamFilterScikit/ # ! cp scripts/__init__.py SpamFilterScikit/SpamFilterScikit/ # ! cp scripts/LRModelScikit.py SpamFilterScikit/SpamFilterScikit/ # + [markdown] deletable=true editable=true # ### Building the Spam Filter Scikit egg # + deletable=true editable=true os.chdir('/user-home/1001/DSX_Projects/sms-spam-filter-using-hortonworks/SpamFilterScikit') # ! python setup2.py bdist_egg # + [markdown] deletable=true editable=true # ### Egg file is created and available in the dist folder of the package # + deletable=true editable=true # ! ls dist # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # least square solution(more equations less solutions) # A(N*M) , W(M*1) , Y(N*1) # N>M (NO EXACT SOLUTION) # Aw=y # w=inv((A.T).A).(A.T).y # + # minimum norm solution(more parameters less equations) # A(N*M) , W(M*1) , Y(N*1) # NOpen In Colab # + [markdown] id="l9aCgUyFm4Cl" colab_type="text" # ![alt text](https://drive.google.com/uc?id=1SMxU6VSM8zIwEIzMaMTztlqBdMjdei4Q) # + [markdown] id="gpluge9Wm4Co" colab_type="text" # ### Projekt zaliczeniowy. # # Uniwersytet Ekonomiczny we Wrocławiu. # DATA SCIENCE - Zaawansowana analiza danych. # # I semestr. # # $Pawel~Tometczak$ # # + [markdown] id="KLQOxdiom4Cq" colab_type="text" # W projekcie przeanalizowano dane dotyczące przedsiębiorstwa $XXX$ sprzedajacego produkty różnego rodzaju. Poddano analizie wartość dokonanych zakupów w kolejnych latach - podział na płeć, sprzedaż w kontekście analizy zarówno ilościowej, jak i jakościowej, a także popularność miast sprzedaży i poszczególne produkty, pod kątem ich udziału zarówno w wielkości sprzedaży, jak i w przychodach przedsiębiorstwa. Określono sezonowość sprzedaż w poszczególnych latach w rozłożeniu na miesiące. 
# + [markdown] id="i678W-rjm4Cs" colab_type="text" # Zaczynamy od wczytania potrzebnych pakietów i bibliotek: # + id="YD6FZRk9m4Cu" colab_type="code" colab={} import pandas as pd import numpy as np import seaborn as sns import scipy.stats from scipy.stats import kstest from scipy import stats from scipy.stats import skew import random import matplotlib.pyplot as plt # %matplotlib inline #from google.colab import files #uploaded = files.upload() # + [markdown] id="23cxXjnum4Cz" colab_type="text" # Zmieniam precyzję wyświetlania liczb zmiennoprzecinkowych, liczby będą wyświetlane w Pandas z precyzją do trzech miejsc po przecinku: # + id="CEOOc2Pjm4C1" colab_type="code" colab={} pd.set_option('float_format', '{:.3f}'.format) # + [markdown] id="uOAYzh_Jm4C5" colab_type="text" # ### Wczytanie danych i oczyszczenie, normalizacja tabel dla wygodniejszej dalszej pracy: # + id="-23S3nOgm4C7" colab_type="code" colab={} products_raw = pd.read_csv("products.txt", sep='\t', engine='python', index_col = 'PRODUCTID') products_raw.columns = products_raw.columns.str.lower() products_raw.index.name = products_raw.index.name.lower() customers_raw = pd.read_csv("customers.txt", sep='\t', engine='python', index_col = 'customerid') orders_raw = pd.read_csv("orders.txt", encoding='cp1251', sep='\t', engine='python') orders_raw.orderdate = pd.to_datetime(orders_raw.orderdate) orderlines_raw = pd.read_csv("orderlines.txt",delimiter="\t", encoding="cp1251", index_col='orderlineid') orderlines_raw.shipdate = pd.to_datetime(orderlines_raw.shipdate) orderlines_raw.billdate = pd.to_datetime(orderlines_raw.billdate) orderlines_raw['year'] = orderlines_raw.shipdate.dt.year #dane dodatkowe do analizy dużej bazy (littleBigData) - TABELE calendar_raw_bd = pd.read_csv("calendar.txt", encoding='cp1251', sep='\t',low_memory=False) campaigns_raw_bd = pd.read_csv("campaigns.txt", encoding='cp1251', sep='\t',low_memory=False) zip1_raw_bd = pd.read_csv("ZipCensus.txt", encoding='cp1251', sep='\t',low_memory=False) zip2_raw_bd = pd.read_csv("zipcounty.txt", encoding='cp1251', sep='\t',low_memory=False) products_raw_bd = products_raw.reset_index() customers_raw_bd = customers_raw.reset_index() orders_raw_bd = orders_raw BigData = pd.merge(orders_raw_bd, customers_raw_bd, on='customerid', how='outer') # + id="TPG3Vw_Gm4C_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 264} outputId="3e0ac774-01df-45bf-f9cb-040be4434b71" # połączenie dataframe products i orderline. 
Metoda łączenia: INNER JOIN i klucz: 'productid' orderlines_raw_bd = pd.read_csv("orderlines.txt",delimiter="\t", encoding="cp1251") little_data = pd.merge(orderlines_raw_bd, products_raw_bd, on='productid', how='inner') # ujednolicenie wielkości liter w nazwach kolumn zbiorów produktów i pozycji zamówień products_raw_bd.columns = map(str.lower, products_raw_bd.columns) BigData = pd.merge(little_data, BigData, on='orderid', how='inner') differenc_table = BigData.numorderlines != BigData.numunits_y BigData[differenc_table].head(3) # + id="-Z1sVsUzm4DG" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 117} outputId="fab68641-1e13-47ab-e693-f002a06e8733" #dodanie kolumn z miesiącami i latami BigData.shipdate = pd.to_datetime(BigData.shipdate) BigData.billdate = pd.to_datetime(BigData.billdate) BigData['year'] = BigData.shipdate.dt.year BigData.shipdate = pd.to_datetime(BigData.shipdate) BigData.billdate = pd.to_datetime(BigData.billdate) BigData['month'] = BigData.shipdate.dt.month BigData.head(1) # + id="NRXTw74FpF0z" colab_type="code" colab={} #from google.colab import drive #drive.mount('/content/drive') # + id="oz7BK5pUm4DL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 510} outputId="4721d542-19f0-40dd-aeb8-9989a0c81b68" #sprawdzam jakiego typu zmienne występują w zbiorze (zmienne całkowite, #zmiennoprzecinkowe, kategoryczne porządkowe, kategoryczne nominalne, zmienne typu logicznego, daty). BigData.dtypes # + id="A9OmJGRAm4DP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 317} outputId="6676a488-1bcb-40a7-aa34-e4ab19f31892" #podstawowe podsumowanie zbioru BigData BigData.describe() # + id="rNjCXtEAm4DU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 917} outputId="6d975341-1f4e-48cd-8fae-ab8867f63ed8" #sprawdzam czy w zbiorze występują braki danych nulls_summary = pd.DataFrame(BigData.isnull().any(), columns=['Nulls']) nulls_summary['Braki danych [ilość]'] = pd.DataFrame(BigData.isnull().sum()) nulls_summary['Braki danych [%]'] = round((BigData.isnull().mean()*100),2) nulls_summary # + [markdown] id="BX5ZCCYZm4Db" colab_type="text" # ### Wyznaczam współczynniki skośności poszczególnych zmiennych numerycznych. # + [markdown] id="wJD3EHgWm4Dc" colab_type="text" # # # # # $$\Large {wspolczynnik}~{skosnosci} = 3 * \frac{srednia - mediana}{odchylenie~standardowe}$$ # # # # Współczynnik o wartości $0$, to rozkład symetryczny. # Współczynnik o wartości ujemnej to rozkład lewostronnie skośny (wydłużone lewe ramię rozkładu; średnia mniejsza od mediany). # Współczynnik o wartości dodatniej to rozkład prawostronnie skośny (wydłużone prawe ramię rozkładu; średniej większa od mediany). # # # + id="40mcp_epm4De" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 306} outputId="d4abc8bd-2bb0-4f20-8fa7-051764587b35" #skośność rozkładów, oczywiście interesują mnie zmienne numeryczne, które pozwolą na analizę BigData.skew() # + [markdown] id="W5pprmNZm4Di" colab_type="text" # ### Obserwacje odstające # Zgodnie z regułą, obserwacją odstającą jest obserwacja, która przyjmuje wartość: # Większą niż półtora odstępu międzykwartylowego $(Q3 – Q1)$, od trzeciego kwartyla$(Q3)$. # Mniejszą niż półtora odstępu międzykwartylowego $(Q3 – Q1)$, od pierwszego kwartyla$(Q1)$. 
# + id="tsrbvwHhm4Dj" colab_type="code" colab={} outputId="592c9b7b-7d7a-41fc-934d-8e195ef1b568" #sprawdzam gdzie występują wartości odstające Q_first = BigData.quantile(0.25) Q_third = BigData.quantile(0.75) iqr = Q_third-Q_first low_boundary = (Q_first - 1.5 * iqr) upp_boundary = (Q_third + 1.5 * iqr) num_of_outliers_L = (BigData[iqr.index] < low_boundary).sum() num_of_outliers_U = (BigData[iqr.index] > upp_boundary).sum() wartosci_odstajace = pd.DataFrame({'niska_granica':low_boundary, 'wysoka_granica':upp_boundary,\ 'wartosci_odstajace_L':num_of_outliers_L, 'wartosci_odstajace_U':num_of_outliers_U}) wartosci_odstajace # + id="aHWHIS4bm4Dn" colab_type="code" colab={} outputId="c136ead5-9cb9-4c53-d182-3a571dbe7e4c" #sprawdzam liczności zmiennych kategorycznych for col in BigData.select_dtypes(['object', 'category']): print(BigData[col].value_counts()) # + [markdown] id="ZW7Jo_12m4Dr" colab_type="text" # ### Dane klientów # + id="IL1Ylhc3m4Ds" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 235} outputId="0b424ae5-3952-4b62-949c-912968c0fd14" #customers_raw - czyszczenie danych customers_raw.head() # + id="7aVNcGZim4Dy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 142} outputId="79741341-adca-4d85-eb0c-7888da338f7d" # INFO-Data Frame, w którym znajdują się wszystkie niezbędne informacje: # nazwa zmiennej, informacja o jej typie, #informacja czy dana zmienna zawiera braki, #liczba braków, oraz procentowa brakującyh wartości w zmiennej def informacja(df): info1 = pd.DataFrame(df.dtypes, columns=['Dtype']) info1['Nulls'] = pd.DataFrame(df.isnull().any()) info1['Sum_of_nulls'] = pd.DataFrame(df.isnull().sum()) info1['Per_of_nulls'] = round((df.apply(pd.isnull).mean()*100),2) info1.Dtype = info1.Dtype.astype(str) return(info1) informacja(customers_raw) # + id="zZI8gRfnm4D3" colab_type="code" colab={} outputId="97301065-2fa1-4a94-944c-053355634e7d" customers = customers_raw.dropna() customers.head() #customers zmienna ostateczna zbioru customers.txt # + [markdown] id="q34_ZEL6m4D8" colab_type="text" # ### Dane o produktach # + id="LkYoDeprm4D9" colab_type="code" colab={} outputId="50e8b939-4d22-491f-bace-2359bafd4a08" #products_raw sprawdzanie i czyszczenie danych po wybraniu kolumn products_raw.head() # + id="KupGsWdom4EA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="2fc03af3-c18a-4e60-8669-fb73f5a9c36a" informacja(products_raw) # + id="Zbx1rzn7m4EE" colab_type="code" colab={} outputId="358f5c38-c7ab-4546-801d-e3def3c0b849" #sprawdzam jakie kolumny zawiera Data Frame products_raw.columns # + id="sl4odqCLm4EL" colab_type="code" colab={} outputId="514eb9c0-b75b-4dea-c28a-0c3a986f7820" #wybieram interesujące mnie kolumny products_df = products_raw[['productgroupcode','productgroupname', 'fullprice']]#, 'instockflag' products_df.info() # + id="aZ1BJRQFm4EP" colab_type="code" colab={} outputId="a469a0f2-aeff-4461-ae75-e1986b40b7bd" products_df.head() #products_df zmienna ostateczna zbioru products.txt # + [markdown] id="EoyRt83Bm4ET" colab_type="text" # ### Dane o zamówieniach # + id="TslMBhm1m4EU" colab_type="code" colab={} outputId="e29ad2ab-5742-4352-d1b1-c1b6f4c477de" # orders_raw- czyszczenie danych orders_raw.head() # + id="ioJFSUbym4EX" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 390} outputId="80172be4-91c5-4db0-b0a4-87ce448c264e" informacja(orders_raw) # + id="0HzpJLs6m4Ed" colab_type="code" colab={} outputId="7ef936a4-0de7-4223-97b2-ed380bc716a9" #procentowa 
znikomość niepełnych danych orders = orders_raw.dropna() orders.head() #orders ostateczna zmienna orders.txt # + id="wcEXqTyBm4Eh" colab_type="code" colab={} outputId="ed0a9bad-4483-43af-8c5b-d73f7d4b028e" orders.info() # + [markdown] id="PdGATafem4El" colab_type="text" # ### Dane o liście zamówień # + id="5IcYy83tm4Em" colab_type="code" colab={} outputId="9bd1e303-8a36-468a-8033-1de516d67036" orderlines_raw.head() # + id="WxADRriZm4Er" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="d8a86f78-e72f-4a8f-fc93-a66f27ebed7b" informacja(orderlines_raw) # + id="5m_AO28Qm4Ev" colab_type="code" colab={} outputId="2ae9cf8c-f9b0-4a60-edcf-c3477ecefb97" orderlines = orderlines_raw.dropna() orderlines.info() #ostateczna zmienna orderlines.txt # + [markdown] id="OBArcKJum4E0" colab_type="text" # ## Statystyki podstawowe # + [markdown] id="pHQHuJXSm4E1" colab_type="text" # #### Płeć klientów # # Spośród wszystkich klientów przedsiębiorstwa 55,7% stanowią mężczyźni a 44,3% kobiety. # Dane dotyczące płci klientów zostały przedstawione w postaci wykresu kołowego z zaznaczeniem etykiet danych. # + id="anjmX4Vcm4E2" colab_type="code" colab={} outputId="619e31f6-1213-4196-d699-f4a25351eb1d" plec = {"Mężczyźni":customers.gender[customers.gender == "M"].count(),"Kobiety": customers.gender[customers.gender == "F"].count()} plec_df = pd.DataFrame({"genders": plec}) plec_df.plot.pie(subplots=True, figsize=(6,6), autopct='%1.1f%%', colors = ['#ff9999', '#66b3ff'],startangle=80) _ = plt.title('Płeć klientów') # + id="eJawXlrRm4E7" colab_type="code" colab={} #łączę pliki customers i orders, dodając kolumny z latami i miesiącami customers_and_orders = pd.merge(customers, orders, on="customerid") customers_and_orders = customers_and_orders.dropna() customers_and_orders['month'] = orders['orderdate'].dt.month customers_and_orders['year'] = orders['orderdate'].dt.year customers_and_orders = customers_and_orders.dropna() customers_and_orders["year"] = customers_and_orders["year"].astype("int64") customers_and_orders["month"] = customers_and_orders["month"].astype("int64") # + id="AX5_au2Jm4E-" colab_type="code" colab={} outputId="86aa96fa-c487-43b5-e6c2-2ad6f1dc4be9" customers_and_orders.head(2) # + [markdown] id="KtwJ3iajm4FC" colab_type="text" # ### Sprzedaż kolejnych latach - podział na płeć # # W latach 2009-2011 sprzedaż w firmie $XXX$ wyraźnie rosła. W $2012$ roku sprzedaż była na stałym poziomie i zaczęła podnosić się w latach $2013-2014$. Od roku $2015$ zaobserwowana spadek sprzedaży. # Przez wszystkie lata udział mężczyzn w zakupach był większy niż kobiet. 
# + id="u3PMB_kEm4FC" colab_type="code" colab={} outputId="f319763f-fbd0-49a9-dc9e-2f924d1f98dc" plot1=BigData.groupby(['year', 'gender']).sum().loc[2009:2016].totalprice_x plot1_a = pd.DataFrame(plot1) sw = plot1_a.reset_index() _= sns.catplot(x= "year", data = sw, y="totalprice_x",\ hue = "gender",kind="bar",aspect=2.5)\ .set(title = 'Wartość dokonanych zakupów w kolejnych latach - podział na płeć'\ , ylabel = 'Sprzedaż', xlabel = 'Rok' ) # + id="-7O5YWBDm4FH" colab_type="code" colab={} outputId="1f624256-f4e4-40bb-83a4-bfb5af588333" #zyski w kolejnych latach year_total_price = orderlines.loc[:, ["year", "totalprice"]].groupby('year').sum().reset_index() year_total_price.head(8) # + id="an305unom4FL" colab_type="code" colab={} outputId="c28cecde-3322-4055-b683-dfbd34ab153c" _ =year_total_price.plot(x = "year", y = "totalprice" ,kind="bar",figsize=(13,5.5))\ .set(title = 'Sprzedaż w poszczególnych latach',\ ylabel = 'Sprzedaż', xlabel = 'Rok') # + [markdown] id="m-y8Aik8m4FP" colab_type="text" # ### Sprzedaż kolejnych latach # # Wyraźnie można zaobserwować sezonowość sprzedaży w poszczególnych miesiącah. Szczególnie widoczne to jesr w grudniu od $2013$ roku. # + id="yfZAd1psm4FQ" colab_type="code" colab={} outputId="5eafa2dc-4f6f-459b-e00f-fdb21a1e0ec4" #sprzedaż w poszczególnych latach z podziałem na miesiące plot2=BigData.groupby(['year', 'month']).sum().loc[2009:2016].totalprice_x plot2_a = pd.DataFrame(plot2) sw2 = plot2_a.reset_index() sw2=sw2.pivot("month", "year", "totalprice_x") f, ax = plt.subplots(figsize=(12, 9)) _=sns.heatmap(sw2, annot=False, linewidths=.5, ax=ax, cmap="Greens")\ .set(title = 'Sprzedaż w poszczególnych latach w rozłożeniu na miesiące',\ ylabel = 'Miesiąc', xlabel = 'Rok') # + [markdown] id="Y2GmEWImm4FT" colab_type="text" # ### Sprzedaż z podziałem na grupy produktów # # Najczęściej kupowanym produktem były te z grupy $ARTWORK$, łącznie sprzedano ich za kwotę $1~427~905$ dolarów. Jest to też produkt, którego średnia cena jest najwyższa. # Najmniej dochodowym artykułem są produkty z grupy $CALENDAR$. # # + id="IyWq8W1Gm4FU" colab_type="code" colab={} outputId="0aaa6533-f643-4d1a-9f0f-58548f618ffb" #ilość zamówień (grupowanie wg produktu) select = ["productgroupname","fullprice","numorders"] orders_a = orderlines_raw.groupby("productid").agg({"orderid" : "nunique"}).rename(columns={'orderid': 'numorders'}) product_orders = pd.merge(orders_a, products_df, on ="productid")[select] product_orders.sort_values(by = "numorders") product_orders = product_orders[product_orders.fullprice != 0] product_orders.head() # + id="Ivv1MbZpm4FX" colab_type="code" colab={} outputId="f2c0af62-1e40-4972-8b31-9d5d8540a6ca" #pogrupowanie produktów ze względu na rodzaj i obliczenia odnośnie ceny product_ordersA = product_orders.groupby("productgroupname").agg({'fullprice': [np.mean, np.std,np.median, np.sum]}) product_ordersA # + [markdown] id="I7q390Qim4Fa" colab_type="text" # Średnią arytmetyczną ciągu wartości $x_1, \dots, x_n$ nazwyamy: # $$\Large\bar{x}= \frac{x_{1} + x_{2} + \ \ldots \ + x_{n}}{n} = \frac{1}{n} \sum_{i=1}^{n} x_{i}$$ # # # Medianą ciągu wartości $x_1, \dots, x_n$ nazywamy środkowy wyraz tego ciągu, gdy $n$ jest liczbą nieparzystą, w przeciwnym przypadku średnią arytmetyczną dwóch wyrazów środkowych. # # $\Large Mediana = Me = \left\{\begin{array} {lll} x_{(k+1)} & \hbox{ dla } & n=2k+1 \\[1mm] \frac{x_{(k)}+x_{(k+1)}}{2} & \hbox{ dla } & n=2k. 
\end{array} \right.$, # gdzie $x_{(1)} \le x_{(2)} \le \dots \le x_{(n)}.$ # # # Odchyleniem standardowym ciągu $x_1, \dots, x_n$ nazywamy wartość: # # $$\Large s_n = \sqrt{\frac{1}{n} \sum_{i=1}^{n} ( x_{i}-\bar{x})^{2}}$$ # + id="iwwcygmzm4Fb" colab_type="code" colab={} outputId="911d6ece-d719-4eeb-f715-e8c6e120b30d" #wykres pudełkowy - produkty i cena całkowita sns.set(style="whitegrid", palette="pastel") _=sns.catplot(x ="productgroupname", y ="fullprice", kind="boxen",aspect=2.5, data =product_orders ).set(yscale="log")\ .set(title = 'Rozkład ceny całkowitej ze względu na grupę produktów'\ , ylabel = 'Cena całkowita', xlabel = 'Rodzaj produktu' ) # + id="msqxsjN1m4Fg" colab_type="code" colab={} outputId="66ee5118-60ab-462b-a6c6-007d02689aa3" #ilość artykułów a cena do zapłaty _ =product_orders.plot(y = "numorders", x = "fullprice" , kind="scatter", s=40, figsize=(15,8), xlim =[0.1, 100000], ylim =[0.1,10000], loglog= True,alpha=0.6, color = 'b') # + id="b-pmt-Fjm4Fk" colab_type="code" colab={} outputId="25460205-0b62-4e64-ea50-c55eb122ade9" # produkty z podziałem na rok select = ["productid","year","productgroupname"] product_years = pd.merge(products_df, orderlines, on ="productid")[select].groupby(["year","productgroupname"]).agg({"productid" : "count"}) \ .rename(columns={'productid': 'count'}).sort_values(by =["year","productgroupname"]) product_years.head() # + id="q6pbQqNpm4Fm" colab_type="code" colab={} outputId="0bde0ffd-49c6-433b-dd82-82296ed12c88" #ilość sprzedanych produktów z poszczególnych kategorii na przesrzeni lat sns.set(style="whitegrid", palette='Set2') product_years_index = product_years.reset_index() _ =sns.catplot(x= "year", data = product_years_index, y="count", hue = "productgroupname",kind="bar",aspect=3)\ .set(title ='Ilość sprzedanych produktów w kolejnych latach', ylabel = 'Ilość', xlabel = 'Lata') # + [markdown] id="xoUki7o_m4Fr" colab_type="text" # ### Miasta z największym przychodem # # Analiza wykazuje, że dziesięć, przynoszących nawiększy przychód, miast to: New York, Brooklyn, Chicago, Washington, Los Angeles, San Francisco, Houston, Atlanta, Dallas, Seattle. # Liderem jeat $NEW~YORK$ z przychodem $1~571~296.06$ dolarów. # + id="xbpCF40Vm4Fr" colab_type="code" colab={} outputId="e6dfc24c-c749-4b49-aba9-04b83c0b8acc" city_totalprice = orders[['city', 'totalprice']] city_totalprice_gr = city_totalprice.groupby('city').sum().reset_index().sort_values(by='totalprice', ascending=False).head(10) city_totalprice_gr # + id="IxfSUZTEm4Fx" colab_type="code" colab={} outputId="e4555d50-96d8-41eb-8f13-0ecb493f5227" plt.figure(figsize=(15,6)) _ =sns.barplot(x="city", y = 'totalprice', data = city_totalprice_gr)#.set(yscale="log") # + [markdown] id="ZX5KdpCnm4F1" colab_type="text" # ## Karty kredytowe # + [markdown] id="kfi_8P7fm4F2" colab_type="text" # Na wykresie poniże widać jak popularne są karty kredytowe i ich udział w zakupach z podziałem na płeć. # Zarówno wśród kobiet i mężczyzn nabardziej popularną kartą test $VI$. Kobiety rzadziej używają kart $MC$ i $AE$ niż mężczyźni. # Najmniej popularne karty to $DB$ i $OC$. 
# + id="2xzn3fQkm4F3" colab_type="code" colab={} outputId="39342662-baa0-4842-9ae2-8a3da8d9bcea" #Rodzaj używanej karty płatniczej - podział we względu na płeć gender_pay_typ = customers_and_orders[["orderid","gender","paymenttype",'year']] plot_pay_g = gender_pay_typ.groupby(["paymenttype","gender",'year']).agg({"orderid" : "count"}).rename(columns={'orderid': 'count'}).sort_values(by =["paymenttype","gender",'year']) plot_pay_g_index = plot_pay_g.reset_index() _ =sns.catplot(x= "gender", data = plot_pay_g_index, y="count", hue = "paymenttype",kind="bar",aspect=2)\ .set(title ='Rodzaj używanej karty płatniczej', ylabel = 'Ilość', xlabel = 'Płeć') # + [markdown] id="zkQsXN68m4F6" colab_type="text" # #### Rodzaje płatności i ich udział w zakupach w kolejnych latach # # Poddano analizie rodzaje płatności oraz ich udział w zakupach w latach $2009-2016$. Największą popularnością cieszyły się karty $VI$, a najmniejszą $OC$. W stosunku do sumy rodzajów płatności stałym zainteresowaniem cieszą się płatności typu $AE$ oraz $MC$. Płatność lkartą $DB$ była popularna w latach $2009 - 2011$ działalności firmy, poczym zauważenie spadła do $2016$ roku. # # + id="wUD8tNLAm4F7" colab_type="code" colab={} outputId="9649cf0a-6684-4716-9a9d-c2eb3ccbc691" #ilość płatności poszczególnymi kartami w latach 2009-2016 _ =sns.catplot(x= "year", data = plot_pay_g_index, y="count", hue = "paymenttype",kind="bar",aspect=3)\ .set(title ='Rodzaj używanej karty płatniczej', ylabel = 'Ilość płatności', xlabel = 'Lata') # + [markdown] id="SJKvJXhKm4F-" colab_type="text" # Największą ilość płatności odnotowana kartą $VI$. # + id="zKuu0-9fm4F_" colab_type="code" colab={} outputId="e7595aff-e13d-4851-c19f-59fc096345ab" #Karty kredytowe popularność tabela pay_typ = customers_and_orders.groupby('paymenttype').orderid.count().sort_values(ascending=False) pay_typ_df = pd.DataFrame(pay_typ).reset_index() tabela_karty = pay_typ_df.rename(columns={'orderid': 'ilość płatności','paymenttype': 'rodzaj karty'}) tabela_karty.groupby('rodzaj karty').head(6) # + id="kzUx51tbm4GD" colab_type="code" colab={} outputId="ba8e6724-34d2-40be-fb49-65e1f1db71c3" #popularność kart kredytowych procentowo #z poszukiwań w internecie uzyskałem prawdopodobne nazwy banków oferujące poszczególne karty karty = {"1FirstBank":customers_and_orders.paymenttype[customers_and_orders.paymenttype == "VI"].count(), "MC Bank":customers_and_orders.paymenttype[customers_and_orders.paymenttype == "MC"].count(), "American Express":customers_and_orders.paymenttype[customers_and_orders.paymenttype == "AE"].count(), "Deutsche Bank":customers_and_orders.paymenttype[customers_and_orders.paymenttype == "DB"].count(), "Orange County":customers_and_orders.paymenttype[customers_and_orders.paymenttype == "OC"].count(), "OTHER":customers_and_orders.paymenttype[customers_and_orders.paymenttype == "??"].count()} karty_df = pd.DataFrame({"paymenttype": karty}) karty_df.plot.pie(subplots=True, figsize=(9,9), autopct='%1.1f%%',startangle=75) _ = plt.title('Popularność kart karedytowych') # + [markdown] id="G2RxoI2bm4GH" colab_type="text" # # Tabele # + id="r1_X55grm4GI" colab_type="code" colab={} outputId="a821c12e-93bc-460f-c0ad-ec71c5bb7785" # ilości sprzedanych towarów - podział z uwzględnieniem kategorii produktów i lat tab_prod_year = pd.pivot_table(BigData, values='numunits_x', index=['productgroupname'],\ columns=['year'],aggfunc={'numunits_x': np.sum}, fill_value='0',\ margins=True, margins_name='Sum') tab_prod_year.sort_values(by=['Sum'], ascending=False, axis=0) # + 
id="jw-PLPI3m4GK" colab_type="code" colab={} outputId="f05e0a66-dd7c-4639-d31c-de30c65ca491" # ilości sprzedanych towarów - TOP 10 miast, podział z uwzględnieniem kategorii produktów sum_units = orderlines['numunits'].sum() pd.reset_option("display.float_format") tab_city_prod = pd.pivot_table(BigData, values='numunits_x', index=['city'],\ columns=['productgroupname'], aggfunc={'numunits_x': np.sum},\ fill_value='0', margins=True, margins_name='Sum') tab_city_prod['Percent'] = tab_city_prod['Sum'] / sum_units * 100 tab_city_prod.sort_values(by=['Sum'], ascending=False, axis=0).head(11) # + id="HOB4yhPum4GN" colab_type="code" colab={} outputId="ec964af8-3487-4779-ac35-45f69a25ff1c" #sezonowość sprzedaży produktów tab_month_prod = pd.pivot_table(BigData, values='numunits_x', index=['productgroupname'],\ columns=['month'], aggfunc={'numunits_x': np.sum}, fill_value='0',\ margins=True, margins_name='Sum') tab_month_prod .sort_values(by=['Sum'], ascending=False, axis=0) # + id="-cWz5cRrm4GP" colab_type="code" colab={} outputId="0c068af1-5f82-4e5b-c6a7-57165fea9f56" #sezonowość sprzedaży w kolejnych miesiącach z podziałem na płeć _ = sns.relplot(y="totalprice", x="month", data =customers_and_orders, hue = 'gender', kind="line",aspect=3) # + id="Im1pLDFam4GS" colab_type="code" colab={} outputId="9d9e1de6-9bb3-473d-bb22-1e313b458e8d" sns.set(style="whitegrid", palette="pastel") _=sns.catplot(x ="month", y ="totalprice", kind="box",aspect=3, data =customers_and_orders, hue = 'gender', showfliers = False )\ .set(title = 'Rozkład ceny z podziałem na kobiety i męższczyzn w kolejnych miesiącach',\ xlabel = 'Miesiące', ylabel = "Cena całkowita") # + [markdown] id="h8ZAI-Iem4GV" colab_type="text" # ## Testy statystyczne # + [markdown] id="J55VGRYqm4GV" colab_type="text" # ### Przy pomocy testu Shapiro-Wilka zbadam czy próba (ilość produktów w jednym zamówieniu) pochodzi z rozkładu normalnego # # Przy pomocy testu Shapiro-Wilka możemy przetestować, czy dana próba $X = (x_1, \dots, x_n)$ pochodzi z rozkładu normalnego. # Formułujemy w tym celu hipotezę zerową i hipotezę alternatywną # # $H_0$: „dane pochodzą z rozkładu normalnego”. # # $H_1$: „dane nie pochodzą z rozkładu normalnego”. # # $$\Large W=\frac{(\sum_{i=1}^{n}a_{i}x_{(i)})^2}{\sum_{i=1}^{n}(x_i-\overline{x})^2}$$ # # # Jako miarę istotności wnioskowania, ustalamy poziom istotności $\alpha$ jako maksymalne prawdopodobieństwo popełnienia błędu pierwszego rodzaju, czyli odrzucenia $H_0$ mimo jej prawdziwości. Zazwyczaj przyjmujemy w tym celu wartość $\alpha=0.05$ lub mniejszą. # # Wynikiem testu Shapiro-Wilka jest para liczb: # wartość statystyki testowej $W$ i p-wartość $p$, czyli prawdopodobieństwo otrzymania naszych (lub bardziej ekstremalnych) danych z rozkładu normalnego. # # Hipotezę zerową możemy odrzucić jeżeli $p<α$. 
# + id="DyF99sNZm4GW" colab_type="code" colab={} outputId="78e2dc3f-9167-4d27-dd60-88bae54a1c46" #wykres rozkładu normalnego sns.set(); np.random.seed(0) g = np.random.randn(100) ax = sns.distplot(g, hist=False, color ='g').set(title = "Wykres rozkładu normalnego") # + id="fNJ0Wedmm4GZ" colab_type="code" colab={} #dane do testu to ilość produktów w jednym zamówieniu pd.set_option('float_format', '{:.3f}'.format) test_dane = BigData[[ 'orderid', 'numunits_x']] test_dane_gr = test_dane.groupby(by = 'orderid').sum() lista = test_dane_gr['numunits_x'].tolist() # + id="9eiYCmEUm4Gg" colab_type="code" colab={} outputId="8e5e5357-02e7-4fe3-b13b-5a447c1f4438" #ponieważ test Shapiro-Wilka sprawda się w próbach N < 500, #niekiedy w literaturze podawana jest nawet wartość N < 100 #na potrzeby przeprowadzenie testu wybiorę losowo z całej próby 500 przypadków lista2 = random.sample(lista, 500) alpha = 0.05 X = lista2 W, p = scipy.stats.shapiro(X) if (p < alpha) : print("p < alpha \nDane nie pochodzą z rozkładu normalnego.") else : print( "Nie można odrzucić hipotezy o normalności rozkładu.") print("W=",W," ","p=",p) # + id="rJOcWu7Cm4Gl" colab_type="code" colab={} outputId="02b68ec0-c35c-4cc9-d2c1-9e3956429dd3" _= sns.kdeplot(lista2, shade=True, color ='g').set(title = "Wykres rozkładu dla badanych danych") # + [markdown] id="3NUrZNm2m4Gp" colab_type="text" # Test potwierdza: # $H_1$: „dane nie pochodzą z rozkładu normalnego”. # + [markdown] id="5R7Sh7iqm4Gq" colab_type="text" # ### Test Kołmogorowa-Smirnowa # # Stosowany alternatywnie do testu Shapiro-Wilka przy prubach większych. # # Przy pomocy testu Kołmogorowa-Smirnowa możemy przetestować, czy dana próba $X = (x_1, \dots, x_n)$ pochodzi ze wskazanego, znanego rozkładu. Działamy podobnie jak w teście Shapiro-Wilka. # # $$\Large K_n = \sup_{x} |(F_{n}-F) (x)|$$ # # # Wynikiem testu Kołmogorowa-Smirnowa jest para liczb: # wartość statystyki testowej $D$ i p-wartość $p$, czyli prawdopodobieństwo otrzymania naszych (lub bardziej ekstremalnych) danych ze wskazanego rozkładu. # Hipotezę zerową możemy odrzucić jeżeli $p<α.$ # + id="EFQzuZf0m4Gr" colab_type="code" colab={} outputId="7bae2722-1544-4afc-8d72-bcfc3ddf72a8" #dane do testu Kołmogorowa-Smirnowa to wszystkie zamówienie i liczba artykułów w jednym zamówieniu alpha = 0.05 x = lista D, p = kstest(x, 'norm', args=(np.mean(x),np.std(x,ddof=1))) if (p < alpha) : print("Dane nie pochodzą z rozkładu normalnego.") else : print ("Nie można odrzucić hipotezy o normalności rozkładu") print("D=",D," ","p=",p) # + [markdown] id="2wx22HaAm4Gu" colab_type="text" # __________________________________________________________________________________________________________________ # + [markdown] id="MD8nKNxBm4Gv" colab_type="text" # ### Podsumowanie # # W przedsiębiorstwie $XXX$ większość klientów, a co za tym idzie zamówień, stanowią mężczyźni. Jest to jednak stosunek około $55$% do $45$%. Największą popularnością cieszą się produkty z grupy $BOOK$ i $ARTWORK$, z których te drugie przynoszą największe zyski dla firmy. Zdecydowanie największą sprzedaż firma $XXX$ posiada w Nowym Jorku, w którym sprzedaż produktów znacznie przewyższa sprzedaż w innych miastach. W $2014$ roku przedsiębiorstwo osiągneło najwyższą sprzedaż a co za tym idzie zyski, które , niestyty, w kolejnych dwóch latach zaczęły spadać. # Analiza wyników sprzedaży pozwala na wyciąganie wniosków przez zarząd firmy $XXX$ dotyczących m.in.: pracy handlowców i systemów motywacyjnych, produktów, klientów oraz systemu planowania. 
Analiza struktury klientów jest równie ważna jak analiza produktów. Dzięki niej zarząd może ocenić, którzy klienci najbardziej przyczyniają się do wzrostu poziomu sprzedaży oraz jej rentowności. # Warto zwrócić również uwagę na lokalizacę najwyższej sprzedaży. $Nowy York$ nie jest miastem o wiele większym niż $Los$ $Angeles$ czy $Chicago$. Tym bardziej zastanawiająca jest zatrważająca przewaga w sprzedaży. # + [markdown] id="chkiYFu_m4Gw" colab_type="text" # __________________________________________________________________________________________________________________ # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Working with JAX numpy and calculating perplexity: Ungraded Lecture Notebook # Normally you would import `numpy` and rename it as `np`. # # However in this week's assignment you will notice that this convention has been changed. # # Now standard `numpy` is not renamed and `trax.fastmath.numpy` is renamed as `np`. # # The rationale behind this change is that you will be using Trax's numpy (which is compatible with JAX) far more often. Trax's numpy supports most of the same functions as the regular numpy so the change won't be noticeable in most cases. # # + import numpy import trax import trax.fastmath.numpy as np # Setting random seeds numpy.random.seed(32) # - # One important change to take into consideration is that the types of the resulting objects will be different depending on the version of numpy. With regular numpy you get `numpy.ndarray` but with Trax's numpy you will get `jax.interpreters.xla.DeviceArray`. These two types map to each other. So if you find some error logs mentioning DeviceArray type, don't worry about it, treat it like you would treat an ndarray and march ahead. # # You can get a randomized numpy array by using the `numpy.random.random()` function. # # This is one of the functionalities that Trax's numpy does not currently support in the same way as the regular numpy. numpy_array = numpy.random.random((5,10)) print(f"The regular numpy array looks like this:\n\n {numpy_array}\n") print(f"It is of type: {type(numpy_array)}") # You can easily cast regular numpy arrays or lists into trax numpy arrays using the `trax.fastmath.numpy.array()` function: trax_numpy_array = np.array(numpy_array) print(f"The trax numpy array looks like this:\n\n {trax_numpy_array}\n") print(f"It is of type: {type(trax_numpy_array)}") # Hope you now understand the differences (and similarities) between these two versions and numpy. **Great!** # # The previous section was a quick look at Trax's numpy. However this notebook also aims to teach you how you can calculate the perplexity of a trained model. # # ## Calculating Perplexity # The perplexity is a metric that measures how well a probability model predicts a sample and it is commonly used to evaluate language models. It is defined as: # # $$P(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i| w_1,...,w_{n-1})}}$$ # # As an implementation hack, you would usually take the log of that formula (so the computation is less prone to underflow problems). You would also need to take care of the padding, since you do not want to include the padding when calculating the perplexity (to avoid an artificially good metric). 
# # After taking the logarithm of $P(W)$ you have: # # $$log P(W) = {\log\left(\sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i| w_1,...,w_{n-1})}}\right)}$$ # # # $$ = \log\left(\left(\prod_{i=1}^{N} \frac{1}{P(w_i| w_1,...,w_{n-1})}\right)^{\frac{1}{N}}\right)$$ # # $$ = \log\left(\left({\prod_{i=1}^{N}{P(w_i| w_1,...,w_{n-1})}}\right)^{-\frac{1}{N}}\right)$$ # # $$ = -\frac{1}{N}{\log\left({\prod_{i=1}^{N}{P(w_i| w_1,...,w_{n-1})}}\right)} $$ # # $$ = -\frac{1}{N}{{\sum_{i=1}^{N}{\log P(w_i| w_1,...,w_{n-1})}}} $$ # # You will be working with a real example from this week's assignment. The example is made up of: # - `predictions` : log probabilities for each element in the vocabulary for 32 sequences with 64 elements (after padding). # - `targets` : 32 observed sequences of 64 elements (after padding). # + from trax import layers as tl # Load from .npy files predictions = numpy.load('predictions.npy') targets = numpy.load('targets.npy') # Cast to jax.interpreters.xla.DeviceArray predictions = np.array(predictions) targets = np.array(targets) # Print shapes print(f'predictions has shape: {predictions.shape}') print(f'targets has shape: {targets.shape}') # - # Notice that the predictions have an extra dimension with the same length as the size of the vocabulary used. # # Because of this you will need a way of reshaping `targets` to match this shape. For this you can use `trax.layers.one_hot()`. # # Notice that `predictions.shape[-1]` will return the size of the last dimension of `predictions`. reshaped_targets = tl.one_hot(targets, predictions.shape[-1]) #trax's one_hot function takes the input as one_hot(x, n_categories, dtype=optional) print(f'reshaped_targets has shape: {reshaped_targets.shape}') # By calculating the product of the predictions and the reshaped targets and summing across the last dimension, the total log propbability of each observed element within the sequences can be computed: log_p = np.sum(predictions * reshaped_targets, axis= -1) # Now you will need to account for the padding so this metric is not artificially deflated (since a lower perplexity means a better model). For identifying which elements are padding and which are not, you can use `np.equal()` and get a tensor with `1s` in the positions of actual values and `0s` where there are paddings. non_pad = 1.0 - np.equal(targets, 0) print(f'non_pad has shape: {non_pad.shape}\n') print(f'non_pad looks like this: \n\n {non_pad}') # By computing the product of the log probabilities and the non_pad tensor you remove the effect of padding on the metric: real_log_p = log_p * non_pad print(f'real log probabilities still have shape: {real_log_p.shape}') # You can check the effect of filtering out the padding by looking at the two log probabilities tensors: print(f'log probabilities before filtering padding: \n\n {log_p}\n') print(f'log probabilities after filtering padding: \n\n {real_log_p}') # Finally, to get the average log perplexity of the model across all sequences in the batch, you will sum the log probabilities in each sequence and divide by the number of non padding elements (which will give you the negative log perplexity per sequence). After that, you can get the mean of the log perplexity across all sequences in the batch. 
log_ppx = np.sum(real_log_p, axis=1) / np.sum(non_pad, axis=1) log_ppx = np.mean(-log_ppx) print(f'The log perplexity and perplexity of the model are respectively: {log_ppx} and {np.exp(log_ppx)}') # **Congratulations on finishing this lecture notebook!** Now you should have a clear understanding of how to work with Trax's numpy and how to compute the perplexity to evaluate your language models. **Keep it up!** # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] extended_cell={"type": "singlechoice"} nbgrader={"grade": true, "grade_id": "task", "locked": false, "points": 0, "schema_version": 3, "solution": true, "task": false} # ### Singlechoice Question # # - Choice 1 # - Choice 2 # - Choice 3 # # Hint: Add the choices as list items, then select the correct answer! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import warnings warnings.filterwarnings('ignore') from bayes_opt import BayesianOptimization # use sklearn's default parameters for theta and random_start gp_params = {"alpha": 1e-5, "n_restarts_optimizer": 2} # ## 1. 定义需要最大化的函数f,以及参数边界bo.bounds bo = BayesianOptimization(lambda x, y: -x ** 2 - (y - 1) ** 2 + 1, {'x': (-4, 4), 'y': (-3, 3)}) # 边界bo.bounds # ## 2. 先验知识: 添加已知点(可不加) bo.initialize( { 'target': [-1, -1], 'x': [1, 1], 'y': [0, 2] } ) # ## 3. 添加算法探索点 # bo.init_points bo.explore({'x': [-1, 3], 'y': [-2, 2]}) # ## 4. 初始化后,maximize使得算法最优 # - ucb: kappa # - ei: xi # - poi: xi # init_points=5, n_iter=25, acq='ucb'|'ei'|'poi', kappa=2.576, xi=0.0 bo.maximize(n_iter=5, kappa=1) # ## 5. 若不能满足需求,添加更多探索点(从3.继续) bo.explore({'x': [0.6], 'y': [-0.23]}) bo.maximize(n_iter=5, acq='ei', **gp_params) # Finally, we take a look at the final results. print(bo.res['max']) # print(bo.res['all']) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.9.7 ('base') # language: python # name: python3 # --- # # Set e Fronzenset # - Set: sequenza mutabile # - Fronzenset: sequenza immutabile {1, 2, 3, 2, 1} # sequenza ove gli elementi devono essere univoci set('abracadabra') frozenset('abracadabra') # + # Tra tutte i metodi di set solo copy() è supportato da fronzenset, dato che gli altri metodi mutano il set # - # # Grafici # # ```python # pip install pygal # ``` # + import pygal def draw_xy(title, xvals, yvals): """ Draw xy plot with given x and y values. """ coords = [(xval, yval) for xval, yval in zip(xvals, yvals)] xyplot = pygal.XY(height=400) xyplot.title = title xyplot.add("Data", coords) xyplot.render_in_browser() xvals = [0, 1, 3, 5, 6, 7, 9, 11, 12, 15] yvals = [4, 3, 1, 2, 2, 4, 5, 2, 1, 4] draw_xy("My XY Plot", xvals, yvals) # - # ## matplotlib (recommended) # # ```python # import matplotlib.pyplot as plt # ``` # # Raccolta di funzioni che fanno funzionare matplotlib come MATLAB import matplotlib.pyplot as plt plt.plot([1, 2, 3, 4]) plt.ylabel('some numbers') plt.show() # Se matplotlib fosse limitato a lavorare con gli elenchi, sarebbe abbastanza inutile per l'elaborazione numerica. # Generalmente, si usano array numpy. # In effetti, tutte le sequenze vengono convertite internamente in array numpy. 
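#
# For example, a minimal sketch that plots numpy arrays directly (the values are made up for illustration):

# +
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0.0, 5.0, 0.2)                          # evenly spaced sample points as a numpy array
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')   # three curves in one call
plt.show()
# -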
# # ```python # import numpy as np # import matplotlib.pyplot as plt # ``` # # Pandas # # La libreria Pandas viene utilizzata per analisi dei dati. # # ```python # from pandas import DataFrame, read_csv # import matplotlib.pyplot as plt # import pandas as pd # ``` # + # Il set di dati consisterà in 5 nomi di bambini e il numero di nascite registrate per quell'anno (1880). names = ['Bob','Jessica','Mary','John','Mel'] births = [968, 155, 77, 578, 973] # Per unire questi due elenchi insieme useremo la funzione zip. BabyDataSet = list(zip(names,births)) print(BabyDataSet) # - # --- # + import pandas as pd from numpy import random names = ['Bob','Jessica','Mary','John','Mel'] random.seed(500) random_names = [names[random.randint(low=0,high=len(names))] for i in range(1000)] print(random_names[:10]) births = [random.randint(low=0,high=1000) for i in range(1000)] print(births[:10]) BabyDataSet = list(zip(random_names,births)) df = pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births']) print(df[:10]) df['Names'].unique() for x in df['Names'].unique(): print(x) print(df['Names'].describe()) name = df.groupby('Names') df = name.sum() print(df) Sorted = df.sort_values(['Births'], ascending=False) print(Sorted) print(Sorted.head(1)) df['Births'].plot.bar() # - # --- # + import pandas as pd d = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] df = pd.DataFrame(data=d, columns=['Numbers']) print(df) # - df['Numbers'] = df['Numbers'] + 1 print(df) # --- # + import pandas as pd d = {'one':[1,1],'two':[2,2]} i = ['a','b'] # Create dataframe df = pd.DataFrame(data = d, index = i) print(df) print(df.index) # - stack = df.stack() print(stack) print(stack.index) # Doppia indicizzazione transpose = df.T print(transpose) # --- import pandas as pd d = {'one':[1,1,1,1,1], 'two':[2,2,2,2,2], 'letter':['a','a', 'b', 'b', 'c']} df = pd.DataFrame(d) print(df) one = df.groupby('letter') print(one.sum()) letterone = df.groupby(['letter','one']).sum() print(letterone) print(letterone.index) letterone = df.groupby(['letter','one'], as_index=False).sum() print(letterone) # # Animazioni # # Esistono due modi per creare animazioni utilizzando Matplotlib: # - Utilizzo della funzione pause() # - Utilizzo della funzione FuncAnimation() # ## Pause() # + from matplotlib import pyplot as plt x = [] y = [] for i in range(100): x.append(i) y.append(i) plt.xlim(0, 100) plt.ylim(0, 100) plt.plot(x, y, color = 'green') plt.pause(0.01) plt.show() # visualizzazione non è ottimale su Jupyter Notebook # - # ## FuncAnimation() # + from matplotlib import pyplot as plt from matplotlib.animation import FuncAnimation import numpy as np x = [] y = [] figure, ax = plt.subplots() # Setting limits for x and y axis ax.set_xlim(0, 100) ax.set_ylim(0, 12) # Since plotting a single graph line, = ax.plot(0, 0) def animation_function(i): x.append(i * 15) y.append(i) line.set_xdata(x) line.set_ydata(y) return line, animation = FuncAnimation(figure, func = animation_function, frames = np.arange(0, 10, 0.1), interval = 10) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np sample_list = [1,2,3] list_array = np.array(sample_list) list_array tup = (1,2,3) my_tup_array = np.array(tup) my_tup_array def multiply_loops(A, B): c=np.zeros((A.shape[0], B.shape[1])) for i in range(A.shape[0]): for k in range(B.shape[1]): c[i,k] = 0 for j in range(B.shape[0]): n = A[i,j] * 
B[j,k] c[i,k] += n return c def multiply_vector(A, B): return A @ B X = np.random.random((100, 100)) Y = np.random.random((100, 100)) # %timeit multiply_loops(X, Y) # %timeit multiply_vector(X, Y) tup_dim = (3,4) my_zeros_array = np.zeros(tup_dim, dtype = np.int16) my_zeros_array tup_dim = (3,4) my_ones_array = np.ones(tup_dim,dtype = np.int16) my_ones_array tup_dim = (3,4) my_seven_array = np.full(tup_dim,7, dtype = np.int16) my_seven_array my_identity_array = np.identity(4) my_identity_array my_rand_array = np.random.rand(3,4) my_rand_array tup_dim = (3,4) my_uninitialized_array = np.empty(tup_dim) my_uninitialized_array my_range_array = np.arange(10,30,5) my_range_array my_spaced_array = np.linspace(0,30,6) my_spaced_array array_A = np.array([ [3,4,6], [0,8,1]]) array_A.ndim array_A array_A.shape array_A.size array_A.dtype array_A.itemsize my_array = np.array([[1, 4, 5, 6], [7, 8, 9, 10], [11, 12, 14, 16]]) my_array.ndim my_array.shape my_array.dtype my_array.size import os import pandas as pd # defining housing.csv file path HOUSING_PATH = '/cxldata/datasets/project/housing' # reading the large housing.csv file using pandas housing_raw = pd.read_csv(os.path.join(HOUSING_PATH, "housing.csv")) # extracting only a few rows (5 rows) of data from the pandas dataframe 'my_df' my_df = housing_raw.iloc[ : 5] # creating a new small csv file - 'housing_short.csv' - containing the above extracted 5 rows of data my_df.to_csv('housing_short.csv', index=False) # + import numpy as np import os # using pandas to load a large csv file (housing.csv) and creating a new smaller csv file out of it after extracting only a few rows of data from it. import pandas as pd HOUSING_PATH = '/cxldata/datasets/project/housing' housing_raw = pd.read_csv(os.path.join(HOUSING_PATH, "housing.csv")) my_df = housing_raw.iloc[ : 5] my_df.to_csv('housing_short.csv', index=False) # loading the smaller csv file housing_short.csv using NumPy's loadtxt() function FILE = 'housing_short.csv' # defining load_housing_data() function, which takes filename (FILE) as input and loads this file using NumPy's loadtxt() function def load_housing_data(file =FILE ): return np.loadtxt(file, dtype={'names': ('longitude','latitude','housing_median_age','total_rooms','total_bedrooms','population','households','median_income','median_house_value','ocean_proximity'), 'formats': ('f8', 'f8', 'f8', 'f8', 'f8', 'f8', 'f8', 'f8', 'f8', '|S15')}, delimiter=',', skiprows=1, unpack=True) # calling the above defined load_housing_data() function, which returns various column values as NumPy arrays longitude_arr, latitude_arr, housing_median_age_arr, total_rooms_arr, total_bedrooms_arr, population_arr, households_arr, median_income_arr, median_house_value_arr, ocean_proximity_arr = load_housing_data() # printing the values of NumPy array - median_house_value_arr - that we got above. 
# median_house_value_arr contains values of median_house_value column of the csv file - housing_short.csv print(median_house_value_arr) # - HOUSING_PATH = '/cxldata/datasets/project/housing' FILE = os.path.join(HOUSING_PATH, "housing.csv") def load_housing_dataset(file=FILE): return np.genfromtxt(file, dtype={'names': ('longitude','latitude','housing_median_age','total_rooms','total_bedrooms','population','households','median_income','median_house_value','ocean_proximity'),'formats': ('f8', 'f8', 'f8', 'f8', 'f8', 'f8', 'f8', 'f8', 'f8', '|S15')}, delimiter=',', skip_header=1, filling_values = 99999999, unpack=False) result_arr = load_housing_dataset() print(len(result_arr)) # + my_arr = np.array([ [0,1], [2,3] ]) my_arr.resize((3,4)) # - print(my_arr) # + my_arr = np.array([[1,2,3,4], [5,6,7,8]]) my_arr.resize((3,4)) # - my_arr # + import numpy as np my_arr = np.arange(9) my_new_arr = my_arr.reshape(-1,3) # - my_new_arr # + my_first_arr = np.array([1,2,3,4,5,6,7,8]) my_new_arr = my_first_arr.reshape((2,4)) # - print(my_new_arr) print(my_first_arr) import numpy as np my_second_arr = np.arange(9) my_second_arr my_updated_arr = my_second_arr.reshape(-1,3) print(my_updated_arr) print(my_second_arr) import numpy as np from sklearn.datasets import load_sample_image china = load_sample_image("china.jpg") china.shape print(china) image = china[150:220,130:250] image.shape height, width, channels = image.shape image_grayscale = image.mean(axis=2).astype(np.float32) image_grayscale.shape images = image_grayscale.reshape(1, height, width, 1) images.shape # + import numpy as np my_orig_arr = np.array((1, 15, 3, 9, 26, 7, 89, 12)) # - my_rev_arr = my_orig_array[::-1] print(my_rev_arr) my_orig_arr2 = np.array((2, 9, 17, 13, 1, 4, 20, 57)) my_orig_arr2[2:5] = -6 print(my_orig_arr2) arr = [1, 3, 5, 10, 4, 15] arr[1 : 4] = -3 apple = np.array((1, 8, 23, 3, 18, 91, 7, 15)) apple_slice = apple[1:4] print(apple_slice) print(apple) apple_slice[1] = 99999 print(apple_slice) print(apple) apple_slice_new = apple[2:5].copy() apple_slice_new[1] = 222222 print(apple_slice_new) print(apple) from sklearn.datasets import load_sample_image import numpy as np china = load_sample_image("china.jpg") portion = china[120:250,110:230] portion.shape portion multi_arr = np.arange(12).reshape(3,4) print(multi_arr) rows_wanted = np.array( [True, False, True] ) multi_arr_portion = multi_arr[rows_wanted, : ] print(multi_arr_portion) import numpy as np my_multi_arr = np.arange(20).reshape(4,5) my_multi_arr_portion = my_multi_arr[2:4,2:5] print(my_multi_arr_portion) my_2d_arr = np.arange(20).reshape(4,5) my_1d_arr = my_2d_arr.ravel() my_1d_arr a_arr = np.array([60,70,80,90]) b_arr = np.arange(4) c_arr = a_arr + b_arr print(c_arr) d_arr = np.array([60,70,80,90]) e_arr = np.arange(4) f_arr = d_arr - e_arr print(f_arr) A_arr = np.array([ [ 5,9], [4, 7] ]) B_arr = np.array( [ [2, 8], [1, 6] ] ) M_arr = A_arr * B_arr print(M_arr) C_arr = np.array([ [ 5,9], [4, 7] ]) D_arr = np.array( [ [2, 8], [1, 6] ] ) P_arr = np.dot(C_arr,D_arr) print(P_arr) R_div = np.array([ [ 25,65], [40, 14] ]) S_div = np.array([ [2, 10], [4, 5] ] ) T_div = R_div/S_div print(T_div) R_int_div = np.array([ [ 25,65], [40, 70] ]) S_int_div = np.array([ [2, 10], [8, 3] ] ) T_int_div = R_int_div//S_int_div print(T_int_div) R_mod = np.array([ [ 20,65], [40, 70] ]) S_mod = np.array([ [2, 10], [8, 3] ] ) T_mod = R_mod%S_mod print(T_mod) R_exp = np.array([ [ 4, 2], [3, 4 ] ] ) S_exp = np.array( [ [2, 10], [4, 5] ] ) T_exp = R_exp**S_exp print(T_exp) U = np.array([ [ 22, 
45], [90, 4 ] ] ) U[U<45] X_stat = np.array([ [ 1, 2, 3], [ 4, -5, 6] ]) print(f'Mean = {X_stat.mean()}') print(f'Mean = {X_stat.var()}') print(f'Mean = {X_stat.std()}') print(f'Mean = {X_stat.min()}') print(f'Mean = {X_stat.max()}') print(f'Mean = {X_stat.sum()}') print(f'Mean = {X_stat.prod()}') Z = np.arange(18).reshape(2,3,3) print(Z) print(Z.sum(axis=2)) N = np.arange(6).reshape(3,2) print(N) print("Transpose = ", N.T) X_broad = np.ones((3,3)) print(X_broad) Y_broad = np.arange(3) print(Y_broad) Z_broad = X_broad+Y_broad print(Z_broad) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Module - 2: Data visualization and Technical Analysis # ###### Loading required libraries import pandas as pd # data loading tool import matplotlib.pyplot as plt #ploting tool import seaborn as sns import numpy as np # ## 2.1 Loading dataset and changing the Date format mod2_data = pd.read_csv('week2.csv') del mod2_data['Unnamed: 0'] #deleting Unnammed column mod2_data.Date = pd.to_datetime(mod2_data['Date']) mod2_data= mod2_data.set_index('Date') print(mod2_data.index.dtype == "datetime64[ns]") mod2_data.head() fig, ax = plt.subplots(figsize=(10, 6)) sns.lineplot(data=mod2_data, x=mod2_data.index, y='Close Price', ax=ax) plt.tight_layout() plt.show() # ###### News: # 1. Between 2017-10 and 2018-02. # ***Tata Power Company (TPCL) has posted an improved performance in 1QFY18 with its consolidated net profit rising by 126% YoY to Rs1.64bn (vs. Rs0.72bn in 1QFY17) due to strong performance by the coal subsidiaries, renewable business and better operational performance. Notably, renewable business generated Rs1.09bn PAT in 1QFY18 compared to Rs0.26bn in 1QFY17. Consolidated revenue rose by 2% YoY to Rs67.2bn mainly due to improved revenue from Welspun Renewable Energy (WREPL).*** # Source:https://www.moneycontrol.com/news/business/stocks/buy-tata-power-target-of-rs-88-reliance-securities-2370965.html # 2. Between 2018-02 and 2018-11. # ***Tata Power Company's third quarter consolidated profit is expected to fall 22 percent to Rs 466 crore compared to Rs 599 crore in year-ago quarter.Revenue from operations may grow 11 percent to Rs 7,445 crore compared to Rs 6,684 crore in same quarter last fiscal, according to average of estimates of analysts polled by CNBC-TV18. Operating profit is likely to increase 15 percent year-on-year to Rs 1,611 crore and margin may expand 70 basis points to 21.6 percent in Q3.Year-on-year profit comparison may not be valid due to (1) higher interest cost in Q3FY18 to fund renewable asset acquisition and (2) tax reversal in Q3FY17, despite stable operations in the core distribution business.Analysts expect generation volumes to remain sluggish and realisations to remain flattish. They further expect coal business and renewable business to maintain strong momentum.More than numbers, the Street will watch out for restructuring news. Tata Power had guided for simplification of group structure in FY18 at the beginning of the year.*** # Source:https://www.moneycontrol.com/news/business/earnings/tata-power-q3-profit-seen-down-22-generation-volumes-may-remain-sluggish-2507829.html # 3. Between 2018-10 and 2019-01. # ***Tata Power, HPCL join hands to set up EV charging stations*** # source:https://www.moneycontrol.com/news/india/tata-power-hpcl-join-hands-to-set-up-ev-charging-stations-2991981.html # 4. 
Between 2019-01 and 2019-03. # ***Fuel cost of the company rose to Rs 3,189.87 crore from Rs 2,491.24 crore in the year-ago period. Similarly, the finance cost rose to Rs 1,013.96 crore from Rs 855.28 crore a year ago.*** # Source:https://www.moneycontrol.com/news/business/tata-power-q3-profit-plunges-67-to-rs-205-cr-in-q3-3445841.html # 5. After 2019-04. # ***Tata Power Q4 net drops 92% to Rs 107.32 cr; declares dividend of Rs 1.30/share*** # Source:https://www.moneycontrol.com/news/business/tata-power-q4-net-drops-92-to-rs-107-32-cr-declares-dividend-of-rs-1-30share-3924591.html # ## 2.2 Stem plot fig, ax = plt.subplots(figsize=(10, 6)) ax.stem(mod2_data.index, mod2_data.Day_Perc_Change, 'g', label='Percente Change') plt.tight_layout() plt.legend() plt.show() # ## 2.3 Daily volume and comparison with %stem plot # + volume_scaled = mod2_data['No. of Trades'] - mod2_data['No. of Trades'].min() volume_scaled = volume_scaled/volume_scaled.max()*mod2_data.Day_Perc_Change.max() fig, ax = plt.subplots(figsize=(10, 6)) ax.plot(mod2_data.index, volume_scaled, label='Volume') ax.set_xlabel('Date') plt.legend(loc=2) plt.tight_layout() plt.show() # + fig, ax = plt.subplots(figsize=(10, 6)) ax.stem(mod2_data.index, mod2_data.Day_Perc_Change , 'g', label='Percente Change') ax.plot(mod2_data.index, volume_scaled, 'k', label='Volume') ax.set_xlabel('Date') plt.legend(loc=2) plt.tight_layout() plt.show() # - # ###### Relationship between volume and daily percentage change # As the volume increases the percentage change becomes positive and vice versa. # ## 2.4 Pie chart and Bar plot # + gridsize = (2, 6) fig = plt.figure(figsize=(14, 10)) ax1 = plt.subplot2grid(gridsize, (0, 0), colspan=2, rowspan=1) ax2 = plt.subplot2grid(gridsize, (0, 3), colspan=3) ax3 = plt.subplot2grid(gridsize, (1, 0), colspan=6) mod2_data['ones'] = np.ones((mod2_data.shape[0])) sums = mod2_data.ones.groupby(mod2_data.Trend).sum() explod = [0.2, 0.2, 0.5, 0, 0, 0, 0 ,0,0] ax1.pie(sums, labels=sums.index, autopct='%1.1f%%', explode=explod) ax2.title.set_text('Trend') mod2_data = mod2_data.drop(['ones'], axis=1) bard1 = mod2_data[['Trend', 'Total Traded Quantity']].groupby(['Trend'], as_index=False).mean() bar1 = sns.barplot("Trend", 'Total Traded Quantity', data=bard1, ci=None, ax=ax2) for item in bar1.get_xticklabels(): item.set_rotation(45) ax2.set_ylabel('') ax2.title.set_text('Trend to mean of Total Traded Quantity') bard2 = mod2_data[['Trend', 'Total Traded Quantity']].groupby(['Trend'], as_index=False).median() bar2 = sns.barplot("Trend", 'Total Traded Quantity', data=bard2, ci=None, ax=ax3) for item in bar2.get_xticklabels(): item.set_rotation(45) ax3.set_ylabel('') ax3.title.set_text('Trend to meadian of Total Traded Quantity') plt.tight_layout() plt.show() # - # ## 2.5 Daily returns fig, ax = plt.subplots(figsize=(10, 6)) ax.hist(mod2_data.Day_Perc_Change, bins=50) ax.set_ylabel('Percent Change') plt.show() # ## 2.6 Correlation # + five_stocks = ['AMARAJABAT.csv', 'CUMMINSIND.csv', 'JINDALSTEL.csv', 'MRPL.csv', 'VOLTAS.csv'] dfs = {} for i in five_stocks: stock = i.split('.')[0] temp_df = pd.read_csv(i) temp_df = temp_df[temp_df["Series"] == "EQ"] temp_df['Day_Perc_Change'] = temp_df['Close Price'].pct_change()*100 temp_df = temp_df['Day_Perc_Change'] temp_df = temp_df.drop(temp_df.index[0]) dfs[stock] = temp_df dfs = pd.DataFrame(dfs) sns.pairplot(dfs) plt.show() # - # There is no correlation among almost all the stocks, which is good thing. 
To get the profit from the stock market, the company's stocks trend should be independent from the other stock's trend # ## 2.7 Volatility rolling1 = dfs.rolling(7).std() rolling1.dropna() fig, ax = plt.subplots(figsize=(15, 5)) ax.plot(np.arange(len(rolling1.VOLTAS)), rolling1.VOLTAS, 'k') plt.title('VOLTAS Volatility') plt.show() # ## 2.8 Comparing with Nifty50 nifty = pd.read_csv('Nifty50.csv') nifty['Day_Perc_Change'] = nifty['Close'].pct_change()*100 rolling2 = nifty['Day_Perc_Change'].rolling(7).std() rolling2 = rolling2.dropna() fig, ax = plt.subplots(figsize=(15, 5)) ax.plot(np.arange(len(rolling1.VOLTAS)), rolling1.VOLTAS, label = 'VOLTAS') ax.plot(np.arange(len(rolling2)), rolling2, 'k', label = 'Nifty') plt.legend() plt.tight_layout() plt.show() # ## 2.9 Trade calls nifty['roll21'] = nifty['Close'].rolling(21).mean() nifty['roll34'] = nifty['Close'].rolling(34).mean() nifty = nifty.dropna() # + nifty.Date = pd.to_datetime(nifty['Date']) fig, ax = plt.subplots(figsize=(15, 7)) def cross(values): l=[] were = values[0] flag = True for i, ele in enumerate(values): if were==ele: l.append(0) else: l.append(1) were = ele return l nifty['buy'] = nifty['roll21'] > nifty['roll34'] nifty['sell'] = nifty['roll21'] < nifty['roll34'] nifty['buy_change'] = np.array(cross(nifty.buy.values.reshape(1, len(nifty.buy)).flatten())) #reshaping from (461, ) nifty['sell_change'] = np.array(cross(nifty.sell.values.reshape(1, len(nifty.sell)).flatten())) #reshaping from(461, ) nifty['buy'] = nifty['buy_change'].where(nifty['buy']==True) nifty['buy'] = nifty['roll21'].where(nifty['buy']==1) nifty['sell'] = nifty['sell_change'].where(nifty['sell']==True) nifty['sell'] = nifty['roll21'].where(nifty['sell']==1) ax.plot(nifty.Date, nifty.Close, 'r') ax.plot(nifty.Date, nifty.roll34, 'b', label='34_SMA') ax.plot(nifty.Date, nifty.roll21, 'g', label='21_SMA') ax.plot(nifty.Date, nifty.buy, "g^") ax.plot(nifty.Date, nifty.sell, "kv") ax.set_xlabel('Date') plt.legend(loc=2) plt.tight_layout() plt.show() # - # ## 2.10 Trade call - using Bollinger band # + nifty['roll14'] = nifty['Close'].rolling(14).mean() std = nifty['Close'].rolling(14).std() nifty['upper_band'] = nifty['roll14']+2*std nifty['lower_band'] = nifty['roll14']-2*std fig, ax = plt.subplots(figsize=(15, 7)) ax.plot(nifty.Date, nifty['Close'],'k' ,label = 'avg price') ax.plot(nifty.Date, nifty.roll14, label= 'roll14') ax.plot(nifty.Date, nifty.upper_band, 'r', label = 'upper_band') ax.plot(nifty.Date, nifty.lower_band,'g', label = 'lower_band') ax.set_xlabel('Date') plt.legend(loc=2) plt.tight_layout() plt.show() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="XtzD0O2JZ2vI" # # ML 101 # # ## RDP and line simplification # # RDP is a line simplifaction algorithm that can be used to reduce the number of points. 
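#
# For intuition, here is a minimal NumPy sketch of the classic recursive Ramer-Douglas-Peucker procedure. It is only an illustration, separate from the `knee` package installed below, and `rdp_classic` / `_perp_dist` are made-up names.

# +
import numpy as np

def _perp_dist(pt, a, b):
    # perpendicular distance from pt to the line through a and b
    d = b - a
    denom = np.hypot(d[0], d[1])
    if denom == 0.0:
        return np.hypot(pt[0] - a[0], pt[1] - a[1])
    return abs(d[0] * (a[1] - pt[1]) - d[1] * (a[0] - pt[0])) / denom

def rdp_classic(points, epsilon):
    # keep the endpoints; recurse on the farthest point if it deviates by more than epsilon
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    dists = np.array([_perp_dist(p, a, b) for p in points[1:-1]])
    k = int(dists.argmax()) + 1
    if dists[k - 1] <= epsilon:
        return np.vstack([a, b])
    left = rdp_classic(points[:k + 1], epsilon)
    right = rdp_classic(points[k:], epsilon)
    return np.vstack([left[:-1], right])   # drop the duplicated split point

xs = np.arange(0, 10, 0.1)
pts = np.column_stack([xs, np.sin(xs)])
print(f'{len(pts)} points reduced to {len(rdp_classic(pts, epsilon=0.05))}')
# -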
# + id="GkcFqqXpYbF-" # !pip install git+git://github.com/mariolpantunes/knee@main#egg=knee # + id="-3lXwxOCaCcq" # %matplotlib inline import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [16, 9] import numpy as np import knee.linear_fit as lf import knee.rdp as rdp # generate some data x = np.arange(0, 100, 0.1) y = np.sin(x) print(len(x)) plt.plot(x,y) plt.show() # + id="up00OKbCas7Q" # generate the points points = np.array([x,y]).T reduced, removed = rdp.rdp(points, 0.01, cost=lf.Linear_Metrics.rpd) reduced_points = points[reduced] print(f'Lenght = {len(reduced_points)}') plt.plot(x,y) x = reduced_points[:,0] y = reduced_points[:,1] plt.plot(x,y, 'ro') plt.show() # + [markdown] id="pAUoQ0ATjaw0" # ## Knee detection # # Elbow/knee detection is a method to select te ideal cut-off point in a performance curve # + id="sSKtjKj3jnkq" # generate some data x = np.arange(0.1, 10, 0.1) y = 1/(x**2) plt.plot(x,y) plt.show() # + id="bpW1ujGyj0LQ" import knee.lmethod as lmethod # generate the points points = np.array([x,y]).T idx = lmethod.knee(points) plt.plot(x,y) plt.plot(x[idx], y[idx], 'ro') plt.show() # + id="NPwzleXkkfDF" import knee.dfdt as dfdt # generate the points points = np.array([x,y]).T idx = dfdt.knee(points) print(idx) plt.plot(x,y) plt.plot(x[idx], y[idx], 'ro') plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # ![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png) # [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/models_hub/Train_a_Spark_NLP_Model.ipynb) # # + id="WZEkgBuabyGF" # %%capture # Setup Spark NLP on Colab # !wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/colab_setup.sh -O - | bash # + colab={"base_uri": "https://localhost:8080/"} id="7i4Z9OrgcE4b" outputId="579e109c-2eb9-4a6c-aadb-8ad4c14b76c1" import sparknlp spark = sparknlp.start() # for GPU training >> sparknlp.start(gpu = True) # for Spark 2.3 =>> sparknlp.start(spark23 = True) import pyspark.sql.functions as F from sparknlp.annotator import * from sparknlp.base import * import sparknlp from sparknlp.pretrained import PretrainedPipeline print("Spark NLP version", sparknlp.version()) print("Apache Spark version:", spark.version) # + id="ooWKiaQEcUPB" #download training data # !wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp/master/src/test/resources/conll2003/eng.train # !wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp/master/src/test/resources/conll2003/eng.testa # + id="jn72rDuTcW_l" from sparknlp.training import CoNLL training_data = CoNLL().readDataset(spark, './eng.train') testing_data= CoNLL().readDataset(spark, './eng.testa') # + colab={"base_uri": "https://localhost:8080/"} id="1RW2pBP_cdgP" outputId="a70f0a9d-b7b8-4d4c-8bf9-98dd85e06917" import pyspark.sql.functions as F training_data.select(F.explode(F.arrays_zip('token.result', 'pos.result', 'label.result')).alias("cols")) \ .select(F.expr("cols['0']").alias("token"), F.expr("cols['1']").alias("pos"), F.expr("cols['2']").alias("ner_label")).show(truncate=50) # + [markdown] id="cLule_H4rDmv" # ## 1. 
Create Spark NLP train pipeline # + id="Z6mvq8Avcjyx" colab={"base_uri": "https://localhost:8080/"} outputId="3b029733-eaf7-4eb4-d1ad-2b1dd3c2761a" # !mkdir ner_logs # + id="zijeZRPrcmE4" colab={"base_uri": "https://localhost:8080/"} outputId="4bf81eeb-db55-4229-d792-08a93b81fc6a" # You can use any word embeddings you want (Glove, Elmo, Bert, custom etc.) embeddings = WordEmbeddingsModel.pretrained('glove_100d')\ .setInputCols(["document", "token"])\ .setOutputCol("embeddings") nerTagger = NerDLApproach()\ .setInputCols(["sentence", "token", "embeddings"])\ .setLabelColumn("label")\ .setOutputCol("ner")\ .setMaxEpochs(1)\ .setLr(0.003)\ .setBatchSize(32)\ .setRandomSeed(0)\ .setVerbose(1)\ .setValidationSplit(0.2)\ .setEvaluationLogExtended(True) \ .setEnableOutputLogs(True)\ .setIncludeConfidence(True)\ .setOutputLogsPath('ner_logs') # if not set, logs will be written to ~/annotator_logs # .setGraphFolder('graphs') >> put your graph file (pb) under this folder if you are using a custom graph generated thru 4.1 NerDL-Graph.ipynb notebook # .setEnableMemoryOptimizer() >> if you have a limited memory and a large conll file, you can set this True to train batch by batch ner_converter = NerConverter() \ .setInputCols(['document', 'token', 'ner']) \ .setOutputCol('ner_chunk') ner_pipeline = Pipeline(stages=[ embeddings, nerTagger, ner_converter ]) # + [markdown] id="6lJ8fCjmrLtw" # ## 2. Train model # + colab={"base_uri": "https://localhost:8080/"} id="rsJO74W-czVS" outputId="74055cac-d50c-470b-e420-4178278e873d" # %%time ner_model = ner_pipeline.fit(training_data) ner_model.stages[-1].write().overwrite().save('outputs/ner_wiki_glove100d_en') # + colab={"base_uri": "https://localhost:8080/"} id="3TIDdUYHdG7y" outputId="7eb373a6-f632-4e28-a31d-e069926b5fdf" import pyspark.sql.functions as F predictions = ner_model.transform(testing_data) predictions.select(F.explode(F.arrays_zip('token.result','label.result','ner.result')).alias("cols")) \ .select(F.expr("cols['0']").alias("token"), F.expr("cols['1']").alias("ground_truth"), F.expr("cols['2']").alias("prediction")).show(truncate=False) # + [markdown] id="M7gTMzBXSJY1" # ## 3. Benchmark # + id="hV8fcONKdMgF" colab={"base_uri": "https://localhost:8080/"} outputId="07be85e0-501f-447f-cdb9-38dca72d7fd3" from sklearn.metrics import classification_report preds_df = predictions.select(F.explode(F.arrays_zip('token.result','label.result','ner.result')).alias("cols")) \ .select(F.expr("cols['0']").alias("token"), F.expr("cols['1']").alias("ground_truth"), F.expr("cols['2']").alias("prediction")).toPandas() print(classification_report(preds_df['ground_truth'], preds_df['prediction'])) # + [markdown] id="562zPiw4SlRg" # ## 4. 
Saving model and Zipping it # + id="cwIo061j8KPm" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="e5ae8d70-69f5-4ab3-cf93-9280aadfdc1d" import shutil shutil.make_archive("/content/outputs/ner_wiki_glove100d_en", 'zip', "/content/outputs/ner_wiki_glove100d_en") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.6.5 64-bit ('venv') # language: python # name: python36564bitvenv9e08aef190a246478ffb70d57feaa30a # --- # + import numpy as np import skfuzzy as fuzz from skfuzzy import control as ctrl # the crisp values for egg size is the antecedent eggSize = ctrl.Antecedent(np.arange(40, 70, 0.5), 'size') # the boiling time is the consequent boilTime = ctrl.Consequent(np.arange(3, 6, 0.1), 'time') # Determine the fuzzy sets for antecedents eggSize['large'] = fuzz.zmf(eggSize.universe, 40, 70) eggSize['small'] = fuzz.smf(eggSize.universe, 40, 70) # Determine the fuzzy sets for antecedents boilTime['long'] = fuzz.zmf(boilTime.universe, 3, 6) boilTime['short'] = fuzz.smf(boilTime.universe, 3, 6) # view the fuzzy set. eggSize.view() # uncoment to view graphic (it can fails in windows) # Simple conditipnal rules "IF antecendent THEN consequence" rule1 = ctrl.Rule(eggSize['large'], boilTime['long']) rule2 = ctrl.Rule(eggSize['small'], boilTime['short']) # view the fuzzy set. boilTime.view() # controller boilerController = ctrl.ControlSystem([rule1, rule2]) # simulation boilerSimulator = ctrl.ControlSystemSimulation(boilerController) # input # rawEggSize = input("please input the weight of the egg (from 40 to 70 grams)") rawEggSize = 40.33 # ensure that is in range clippeEggSize = float(np.clip(rawEggSize, 40, 70)) # load the value in the simulator boilerSimulator.input['size'] = rawEggSize # process boilerSimulator.compute() # output resultTime = boilerSimulator.output['time'] print("you need %1.2f minutes to boil that egg" % resultTime) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + def read_csv(S): f= open(S,'r') data = f.read() data_list = data.split("\n") string_list = data_list[1:-1] #print(d) #for i in d : final_list = [] for i in string_list : int_fields = [] string_fields = i.split(',') for y in string_fields : x = int(y) int_fields.append(x) final_list.append(int_fields) for x in final_list: return final_list # + cdc_list = read_csv("US_births_1994-2003_CDC_NCHS.csv") print(cdc_list[:10]) # + def month_births(l) : births_per_month = {} for i in l : month =i[2] births = i[4] if month in births_per_month : births_per_month[month] = births_per_month[month]+births else : births_per_month[month] = births return births_per_month cdc_month_births = month_births(cdc_list) print(cdc_month_births) # + def dow_births(l) : births_per_month = {} for i in l : month =i[3] births = i[4] if month in births_per_month : births_per_month[month] = births_per_month[month]+births else : births_per_month[month] = births return births_per_month cdc_day_births = dow_births(cdc_list) print(cdc_day_births) # + def calc_counts(data, column): births_per_month = {} if column==3 : for i in data : month =i[column] births = i[4] if month in births_per_month : births_per_month[month] = births_per_month[month]+births else : births_per_month[month] = births return births_per_month if column==2: for i in 
data : month =i[column] births = i[4] if month in births_per_month : births_per_month[month] = births_per_month[month]+births else : births_per_month[month] = births return births_per_month if column==1: for i in data : month =i[column] births = i[4] if month in births_per_month : births_per_month[month] = births_per_month[month]+births else : births_per_month[month] = births return births_per_month if column == 0 : for i in data : month =i[column] births = i[4] if month in births_per_month : births_per_month[month] = births_per_month[month]+births else : births_per_month[month] = births return births_per_month cdc_year_births = calc_counts(cdc_list,0) print(cdc_year_births) cdc_month_births = calc_counts(cdc_list,1) #print(cdc_month_births) cdc_dom_births = calc_counts(cdc_list,2) #print(cdc_dom_births) cdc_dow_births = calc_counts(cdc_list,3) #print(cdc_dow_births) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## 2019-01-22: Initial ICA Exploration # # ### Authors # * () # # ### Notes # # * In theory, ICA should "learn" spectra features that are common across multiple materials. Projection of a new spectra onto these "component spectra" could be used to construct the input vector to a supervised learning system. # # * The independent components can be computed from the mixing matrix (FastICA.mixing_). Note that FastICA automatically "whitens" the training dataset, so the mean spectra (FastICA.mean_) needs to be added to each column of the mixing matrix. # # * To compute the representation (i.e., coefficients) of a spectra with respect to the independent components, multiply the spectra by the unmixing matrix (FastICA.components_). 
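#
# A quick synthetic check of the two notes above (a sketch with made-up signals; `sources`, `mixing` and the offset are arbitrary):

# +
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two independent signals
mixing = rng.uniform(0.5, 1.5, (2, 2))
X = sources @ mixing + 5.0                               # mixed observations with an offset

ica = FastICA(n_components=2, random_state=0)
ica.fit(X)

# Representation: multiplying the centered data by the unmixing matrix matches transform()
coef = ica.transform(X)
manual = (X - ica.mean_) @ ica.components_.T
print(np.allclose(coef, manual))                         # expect True

# Reconstruction: the mixing matrix plus the mean recovers the original observations
print(np.allclose(coef @ ica.mixing_.T + ica.mean_, X))  # expect True
# -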
# ## Preparations # + # --- Imports # Standard libraries import os import re # External packages import matplotlib.pyplot as plt import numpy import pandas from sklearn.decomposition import FastICA # + # --- Configuration Parameters # Data directory data_dir = os.environ['DATA_DIR'] # Materials materials = { 'actinolite': 0, 'alunite': 1, 'calcite': 2, } # - # ### Data Preparation # + # --- Load data from files # Get file list data_files = [os.path.join(data_dir, file_name) for file_name in os.listdir(data_dir) if not file_name.startswith('.') and os.path.isfile(os.path.join(data_dir, file_name))] # Initialize spectra dataset spectra_data = pandas.DataFrame() # Initialize material labels class_labels = [] # Load data files for file_name in data_files: # Read data into DataFrame raw_data = pandas.read_csv(file_name) # Clean up header spectra_id = raw_data.columns[0].strip() raw_data.columns = [spectra_id] # Replace missing values (set to -1.23e+34 in raw data) with 0 raw_data[spectra_id][raw_data[spectra_id] < 0] = 0.0 # Append spectra spectra_data[spectra_id] = raw_data[spectra_id] # Assign class label for material, label in materials.items(): if re.search(material, spectra_id, re.IGNORECASE): class_labels.append(label) break # Calculate dataset parameters spectrum_length, num_spectra = spectra_data.shape # Convert labels to numpy array class_labels = numpy.array(class_labels) class_labels.resize([num_spectra, 1]) # - # ## Data Exploration # + # --- Plot spectra by material for material_name, material_id in materials.items(): # Get indices for spectra for material spectra_indices = numpy.argwhere(class_labels==material_id)[:, 0] # Plot spectra in their own figure plt.figure() plt.title(material_name) plt.plot(spectra_data.iloc[:, spectra_indices]) # - # ## ICA Exploration # + # --- Generate ICA model # ICA Parameters num_components = 5 # Create FastICA object ica = FastICA(n_components=num_components) # Fit ICA model X = spectra_data.values.T S = ica.fit_transform(X) # Compute independent spectra components # Note: mixing (not unmixing) matrix holds independent components mean_spectra = ica.mean_.reshape([spectrum_length, 1]) spectra_components = ica.mixing_ + numpy.tile(mean_spectra, [1, num_components]) # Display results print("Number of components generated:", num_components) print("Number of fitting iterations:", ica.n_iter_) # Display independent spectra components for i in range(spectra_components.shape[1]): plt.title('Component {}'.format(i)) plt.plot(spectra_components[:, i]) plt.figure() # + # --- Compute representation for spectra # Get unmixing matrix unmixing_matrix = ica.components_ # spectra_data[0] print("Coefficients from fit_transform():", S[0, :]) coefficients = numpy.dot(unmixing_matrix, X[0, :].reshape([spectrum_length, 1]) - mean_spectra).T print("Coefficients from multiplying by unmixing matrix:", coefficients) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: qiim2-2018.11 # language: python # name: qiim2-2018.11 # --- # ## author: # want to be able to easily join all datasets on id, so have all info per sample_id import pandas as pd import numpy as np pd.set_option('display.height', 1000) pd.set_option('display.max_rows', 500) pd.set_option('display.max_columns', 100) pd.set_option('display.width', 200) pd.options.display.max_seq_items = 200 # **DIRECTORY WHERE TO EXPORT ALL CLEANED UP DATA** output_dir = 'join_wflow_output_data/' # ### read in 
metadata meta_df = pd.read_csv('data/raw.2.21.agp_metadata.txt',sep='\t', low_memory=False) meta_df.shape meta_df.head(3) # + #meta_df.columns # - meta_df[['sample_name', 'survey_id', 'qiita_study_id']].head(2) meta_df[['sample_name', 'survey_id']].sort_values('sample_name').head(2) meta_df[meta_df['sample_name'].str.contains('Blank')].shape # ### read in drug data dense drug_df = pd.read_csv('data/drugbank_drug_data.csv') print(drug_df.shape) drug_df.head(2) drug_df.sort_values('sample_name').head(2) # ### create dataframes with joinable ids def meta_df_id_clean(meta_df): meta_df['sample_id'] = meta_df['sample_name'].apply(lambda x: x.split('.')[1]) meta_df['sample_id'] = pd.to_numeric(meta_df['sample_id'], errors='coerce', downcast='integer') meta_df_clean = meta_df.dropna(subset=['sample_id']).reset_index() meta_df_clean['sample_id'] = meta_df_clean['sample_id'].apply(lambda x: int(x)) return meta_df_clean def drug_df_id_clean(drug_df): drug_df['sample_id'] = drug_df['sample_name'].apply(lambda x: str(x).split('.')[0]) drug_df['sample_id'] = drug_df['sample_id'].astype('int') return drug_df # clean metadata sample_id that will match with drug_id meta_clean_id_df = meta_df_id_clean(meta_df) print(meta_clean_id_df.shape) meta_clean_id_df[['sample_id', 'sample_name']].head(2) # clean drug_df drug_clean_id_df = drug_df_id_clean(drug_df) print(drug_clean_id_df.shape) drug_clean_id_df.head(2) # try joing drug_data and metadata on id merge_df = pd.merge(meta_clean_id_df[['sample_name', 'sample_id']], drug_clean_id_df, how='left', on='sample_id') merge_df.shape merge_df.head() drug_clean_id_df.to_csv(output_dir + '4.16.drug_clean.csv', index=False) # lets split vioscreen and metadata columns col_names = pd.Series(meta_clean_id_df.columns) meta_col_names = list(col_names[~col_names.str.contains('vioscreen')]) vio_col_names = list(col_names[col_names.str.contains('vioscreen')]) vio_col_names = vio_col_names + ['sample_name', 'sample_id', 'survey_id'] print(len(meta_col_names)) print(len(vio_col_names)) meta_agp_df = meta_clean_id_df[meta_col_names] print(meta_agp_df.shape) meta_agp_df[['sample_id']].head(2) vio_df = meta_clean_id_df[vio_col_names] print(vio_df.shape) vio_df.head(2) # test merges to make sure works before exporting datasets to csv merge_df2 = pd.merge(meta_agp_df[['sample_name', 'sample_id']], drug_clean_id_df, how='left', on='sample_id') merge_df2.shape merge_df2.head(2) merge_df3 = pd.merge(vio_df[['sample_name', 'sample_id']], drug_clean_id_df, how='left', on='sample_id') merge_df3.shape merge_df3.head(2) # export to .csv meta_agp_df.to_csv(output_dir + 'all_body_4.16.agp_only_meta.csv', index=False) vio_df.to_csv(output_dir + 'all_body_4.16.vioscreen_only_meta.csv', index=False) # ## biom table data join biom_df = pd.read_pickle('data/greengenes_all_body_4.10.rar1000.biom_data.pkl') print(type(biom_df)) biom_df.shape # + #biom_df.head(3) # + biom_df['sample_id'] = biom_df['sample_name'].apply(lambda x: x.split('.')[1]) biom_df['sample_id'] = pd.to_numeric(biom_df['sample_id'], errors='coerce', downcast='integer') biom_join_df = biom_df.dropna(subset=['sample_id']) biom_join_df['sample_id'] = biom_join_df['sample_id'].apply(lambda x: int(x)) # - print(biom_join_df.shape) # + #biom_join_df.head() # - # #### test merge between biom data and metadata where we do left join from metadata merge_df4 = pd.merge(meta_agp_df[['sample_name', 'sample_id', 'body_site']], biom_join_df[['sample_name', 'sample_id']], how='left', on='sample_id') print(merge_df4.shape) merge_df4.head(2) 
merge_df4.isna().sum() # any nulls above in sample_name_y are the number of missing sample_ids in biom OTU df from samples in the metadata, it is expected there are some since the biom df is not filtered on feces only merge4_df = merge_df4.dropna() print(merge4_df.shape) merge_df4.body_site.value_counts() # we want to filter the biom dataframe on sample_ids only found in our cleaned up metadata df final_biom_df = biom_join_df[biom_join_df['sample_id'].isin(merge4_df['sample_id'])] print(final_biom_df.shape) # + #biom_join_df.head() # - # #### looking good! # change name of pkl file depending on date and rar sample level final_biom_df.to_pickle(output_dir + 'greengenes_all_body_4.10.rar1000_clean_biom.pkl') bdf = pd.read_pickle(output_dir + 'greengenes_all_body_4.10.rar1000_clean_biom.pkl') print(bdf.shape) # + #bdf.head(1) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="3PturmDN40NY" import numpy as np import os from time import time import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split from keras.preprocessing.image import load_img, array_to_img from keras.models import Model from keras.layers import Conv2D, SeparableConv2D from keras.layers import BatchNormalization, Activation, advanced_activations from keras.layers import Input, MaxPooling2D, Add from keras.layers import Conv2DTranspose, UpSampling2D # + id="tLtNuCH7l2rx" TRAIN_MODE = 1 EPOCH = 1000 BATCH_SIZE = 32 # + id="r25FCPYv41Kd" with open('/content/drive/MyDrive/IDEC/arrays.npy', 'rb') as f: X_data = np.load(f) Y_data = np.load(f) # + colab={"base_uri": "https://localhost:8080/"} id="bf3al1tr45EE" outputId="75636429-9c67-4db6-b86d-ccb9f8070107" X_train, X_test, Y_train, Y_test = train_test_split(X_data, Y_data, test_size = 0.2) print("Train Input Data :", X_train.shape) print("Train Output Data :", Y_train.shape) print("Test Input Data :", X_test.shape) print("Test Output Data :", Y_test.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 449} id="KOFvl5al5EW0" outputId="ef65fe73-21c1-45f1-b950-0b903b6aa05c" # Image show after Scaling flg, spot = plt.subplots(1,2,figsize=(15,10)) # Color Image img = array_to_img(X_data[1154]) spot[0].imshow(img) # Edge Image img = array_to_img(Y_data[1154]) spot[1].imshow(img) # + [markdown] id="x5fQo4xo69Sd" # Design Network # + id="tKMPgs7X5Ewd" # U-Net Encoder def build_encoder(): inputs = Input(shape=(160, 160, 3)) x = Conv2D(filters=32, kernel_size=3, strides=2, padding='same')(inputs) x = BatchNormalization()(x) # Normalizatoin : mean = 0, deviation = 1 x = Activation('relu')(x) jump = x for filters in [64, 128, 256]: #x = advanced_activations.LeakyReLU(alpha=0.2)(x) # Conv1 x = Activation('relu')(x) x = SeparableConv2D(filters, kernel_size=3, padding='same')(x) x = BatchNormalization()(x) # Conv2 x = Activation('relu')(x) x = SeparableConv2D(filters, kernel_size=3, padding='same')(x) x = BatchNormalization()(x) # Pooling x = MaxPooling2D(pool_size=3, strides=2, padding='same')(x) # Residual residual = Conv2D(filters, kernel_size=1, strides=2, padding='same')(jump) x = Add()([x, residual]) jump = x return inputs, x # + id="JMb-WpfR7ERb" # U-Net Decoder def build_decoder(inputs, x): # Residual jump = x # De-Conv for filters in [256, 128, 64, 32]: # Conv1 x = Activation('relu')(x) x = Conv2DTranspose(filters, kernel_size=3, padding='same')(x) x = BatchNormalization()(x) # Conv2 x = 
Activation('relu')(x) x = Conv2DTranspose(filters, kernel_size=3, padding='same')(x) x = BatchNormalization()(x) x = UpSampling2D(size=2)(x) # Residual residual = UpSampling2D(size=2)(jump) residual = Conv2D(filters, kernel_size=1, padding='same')(residual) x = Add()([x, residual]) jump = x outputs = Conv2D(filters=3, kernel_size=3, activation='softmax', padding='same')(x) model = Model(inputs, outputs) return model # + id="TOt41x0LDjOR" inputs, link = build_encoder() model = build_decoder(inputs, link) # + [markdown] id="b3ovK_gdoN7_" # Train Model # + colab={"base_uri": "https://localhost:8080/"} id="Swza4Pa0fmVG" outputId="15cc9d0b-1781-4b4d-819d-e6e1772770fd" # Train Model if TRAIN_MODE: model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy') print("Start Train\n") begin = time() model.fit(X_train, Y_train, BATCH_SIZE, EPOCH, verbose=1) end = time() print("Learning TIme : {:.2f}".format(end-begin)) model.save_weights('segment.h5') else: model.load_weights('segment.h5') # + [markdown] id="oevjOqGHopKu" # Test Model # + colab={"base_uri": "https://localhost:8080/", "height": 333} id="dLXTw40OoEMe" outputId="3dc0b764-8dc9-4180-c0df-0715474117f8" which = 232 fig, spot = plt.subplots(1, 3, figsize=(15,8)) img = array_to_img(X_test[which]) spot[0].imshow(img) img = array_to_img(Y_test[which]) spot[1].imshow(img) pred = model.predict(X_test, verbose=1) mask = np.argmax(pred[which], axis=2) mask = np.expand_dims(mask, axis=2) img = array_to_img(mask) spot[2].imshow(img) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Convolutional Neural Networks # # CNNs are a twist on the neural network concept designed specifically to process data with spatial relationships. In the deep neural networks we've seen so far every node is always connected to every other node in the subsequent layer. While spatial relationships CAN be captured, as we've seen with out results on MNIST, the networks were not explicitly built with the assumption that spatial relationships definitely exist. Artificial neural networks are perfectly appropriate for data where the relationships are not spatial. # # But, for data such as images, it seems crazy to ignore the spatial relationships! For the vast majority of image data, neighboring pixels combined with each other tell us much more than combining the pixels in opposite corners or the image. CNN's rely on the assumption that our data has spatial relationships, and they have produced state-of-the-art results especially in image processing and computer vision. # # The fundamental unit of a CNN is a "convolution": # # ![](img/convolution.png) # # > Image Source: https://github.com/PetarV-/TikZ/tree/master/2D%20Convolution # # The key component of the convolution is called the kernel, which is a matrix. K in the image above. The kernel has a shape, 3x3 in this example, but we can define the for each convolution. We "slide" the kernel across every 3x3 section of the image performing item-by-item multiplication, for example in the above image the 4 highlighted in green is produced by taking the values highlighted in red, multiplying the values in the same position in the kernel, and summing the result of those multiplications. 
Specifically: # # # ``` # position: [0,0] [0,1] [0,2] [1,0] [1,1] [1,2] [2,0] [2,1] [2,2] # operation: (1*1) + (0*0) + (0*1) + (1*0) + (1*1) + (0*0) + (1*1) + (1*0) + (1*1) == 4 # ``` # # This value is (optionally, but typically) then passed through a non-linearity like ReLU or Sigmoid before it is passed to the next layer. # # > Side note: In the literature, you'll discover that in a "true" convolution the kernel is inverted prior to the multiply+sum operation, and that this operation without the inversion is actually called "cross correlation" by most mathematicians. This matters in some contexts but we typically ignore it in deep learning because the values of the kernel are the things that are fine tuned, and storing them as "pre-inverted" matrixes is computationally efficent compared to inverting the kernel repeatedly. # # Here is a helpful animation to visualize convolutions: # # ![](img/animated-conv.gif) # # > Image source: https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d # A convolutional layer has a few important properties: # # * **Number of kernels** -- this is similar to the number of nodes in an ANN # * Each kernel will be separately trained on the input data. # * Each kernel will produce an output layer, sometimes called a feature map. # * These feature maps are used as input to the next layer. # * **Kernel size** -- these are almost always 3x3 or 5x5. # * Bigger kernels are more computationally expensive. # * Bigger kernals have a wider "field of view" which can be helpful. # * Dialted convolutions can capture a wider field of view at a lower computational cost (see additional resources). # * **Padding** -- notice above that a convolution produces a smaller output layer than the input layer by 1 pixel in each direction. Padding the input (typically with 0 values) allows the convolution to produce an output with the same size as the input. # * Downsampling to smaller sizes isn't always bad. # * It reduces the computational costs at the next layer. # * If we don't pad, it limits the possible depth of the network esp. for small inputs # * Padding tends to preserve information at the borders. If your images have important features on the edges, padding can improve performance # * **Stride** -- in the above we "slide" the kernel over by 1 pixel at every step. Increasing the stride increases the amount we slide by. # * Stride is typically set to 1. # * Higher values reduce the amount of information captured. # * Higher values are more computationally efficent, as fewer values are combined per convolution. # One last important concept before we build a CNN: pooling. Pooling is a tactic used to decrease the resolution of our feature maps, and it is largely an issue of computational efficency. There are 2 popular kinds, max pooling and average pooling. Pooling layers use a window size, say 2x2, and take either the max or average value within each window to produce the output layer. The windows are almost always square, and the stride size is almost always set to the size of the window: # # ![](img/maxpool.jpeg) # # > Image source: https://cs231n.github.io/convolutional-networks/ # # It is worth noting that pooling has fallen out of favor in a lot of modern architectures. Many machine learning practitioners have started downsampling through convolutions with larger stride sizes instead of pooling. # ### Building Our First CNN # # Let's use Keras to build a CNN now. 
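#
# Before jumping into Keras, the sliding-window operation described above can be written in a few lines of plain NumPy. This is only a sketch (technically cross-correlation, per the side note); the toy 5x5 input and 3x3 kernel are made up and are not necessarily the exact values in the figure, but the top-left output also works out to 4.

# +
import numpy as np

def conv2d_valid(image, kernel):
    # slide the kernel over the image (stride 1, no padding) and sum the elementwise products
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])
print(conv2d_valid(image, kernel))   # 3x3 output; note the un-padded result is smaller than the input
# -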
# + # Setting up MNST, this should look familiar: import numpy as np from matplotlib import pyplot as plt from tensorflow.keras.datasets import fashion_mnist # BUT NOTE, this is not mnist, but fashion_mnist from tensorflow.keras.models import Sequential # Note, new layers from tensorflow.keras.layers import Dense, MaxPooling2D, Conv2D, Flatten, Dropout, SpatialDropout2D, GlobalAveragePooling2D from tensorflow.keras.utils import to_categorical # For examining results from sklearn.metrics import confusion_matrix import seaborn as sn num_classes = 10 image_size = 784 (training_images, training_labels), (test_images, test_labels) = fashion_mnist.load_data() training_data = training_images.reshape(training_images.shape[0], image_size) test_data = test_images.reshape(test_images.shape[0], image_size) training_labels = to_categorical(training_labels, num_classes) test_labels = to_categorical(test_labels, num_classes) # - # ## Critically, Our Training/Test Data will Remain 2D # # And with RGB images it will be 3D: Hight by width by color-cchannels: 1 in b/w, 3 in RGB, and if working with something like CMYK images for some reason, it'd be four. As we'll see, deeper layers in a CNN will commonly have many "color channel" layers, although we'll call them feature-maps or filter-maps since they're the result of computation rather than strictly the color values. conv_training_data = training_images.reshape(60000, 28, 28, 1) conv_test_data = test_images.reshape(10000, 28, 28, 1) # ## This should look pretty familiar, just for plotting our training epochs def plot_training_history(history, model, eval_images=False): figure = plt.figure() plt.subplot(1, 2, 1) plt.plot(history.history['accuracy']) plt.plot(history.history['val_accuracy']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['training', 'validation'], loc='best') plt.tight_layout() plt.subplot(1, 2, 2) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['training', 'validation'], loc='best') plt.tight_layout() figure.tight_layout() plt.show() if eval_images: loss, accuracy = model.evaluate(conv_test_data, test_labels, verbose=False) else: loss, accuracy = model.evaluate(test_data, test_labels, verbose=False) print(f'Test loss: {loss:.3}') print(f'Test accuracy: {accuracy:.3}') # # Fashion MNIST is a More Challenging Dataset # # This time, we're using a new dataset called "Fashion MNIST". Like the handwritten digits dataset, this is a set of grayscale images each 28 by 28 pixels. However, the subject of these images is very different from the handwritten digits dataset. Instead, these are images of fashion objects. # # This dataset was built as a "drop in" replacement for MNIST because neural networks can solve the MNIST digits problem a bit too easily. # # Let's take a look at some: # Lets visualize the first 100 images from the dataset for i in range(100): ax = plt.subplot(10, 10, i+1) ax.axis('off') plt.imshow(training_images[i], cmap='Greys') # + i = 0 # So we can look at one at a time... 
# So we can see the label label_map = { 0: 'T-shirt/top', 1: 'Trouser', 2: 'Pullover', 3: 'Dress', 4: 'Coat', 5: 'Sandal', 6: 'Shirt', 7: 'Sneaker', 8: 'Bag', 9: 'Ankle boot' } # - label = np.argmax(training_labels[i]) plt.title(label_map[label]) plt.imshow(training_images[i], cmap='Greys') i += 1 # Once again, there are 10 classes of image: # # 0 T-shirt/top # 1 Trouser # 2 Pullover # 3 Dress # 4 Coat # 5 Sandal # 6 Shirt # 7 Sneaker # 8 Bag # 9 Ankle boot # # As you might guess, this is a bigger challenge than the handwritten digits. Firstly, at 28 by 28 pixels much more fidelity is lost in this dataset compared to the digits dataset. Secondly, more pixels matter. In the digits dataset, we rarely care about the weight of the pixel, more or less what matters is if it's white or something else—we mostly cared about the edges between where someone had drawn and where they had not. Now internal differences in grayscale intensity are more informative, and comprise a larger amount of the image. # # ## For Comparison, A Traditional ANN # # Let's quickly verify that a standard ANN that worked well in the context of MNIST does less well in Fashion MNIST: # + # Recall from the Optimizers section that we were able to get # ~97% test accuracy with this network on regular MNIST: model = Sequential() model.add(Dense(units=512, activation='relu', input_shape=(image_size,))) model.add(Dense(units=256, activation='relu')) model.add(Dense(units=128, activation='relu')) model.add(Dense(units=64, activation='relu')) model.add(Dense(units=num_classes, activation='softmax')) model.summary() # - model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) history = model.fit(training_data, training_labels, batch_size=128, epochs=10, verbose=True, validation_split=.1) plot_training_history(history, model) # ### Notes # # Not bad, but not nearly as good as we were able to achieve with regular MNIST. And, looks like during the last few epochs we may have started to overfit. Some regularization e.g. Dropout could help. # # ## Now, Lets Build a CNN # + # The model is still sequentail, nothing new here. model = Sequential() # add model layers. The first parameter is the number of filters to make at each layer. # Meaning here the result of the first layer is 64 different "feature maps" or "activation maps" # Note, typically we still have to specify the input shape. # There are some caveats to this detailed below, though. model.add(Conv2D(64, kernel_size=(5, 5), activation='relu', padding='same', input_shape=(28, 28, 1))) model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', padding='same',)) # Flatten turns our final convolutional layer's output shape from (28, 28, 32) => 1x25088 (!) model.add(Flatten()) # The tail of the network is a standard ANN and can also be deep or shallow depending # on the network architecture. model.add(Dense(512)) model.add(Dense(num_classes, activation='softmax')) # Lets fit it with identical parameters and see what happens... model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.summary() # - # # 12,870,826 Parameters?! # # That flatten layer into a 512 layer is exploding the parameter count. We're going to have to solve this problem or buy a very nice GPU (or both). This is something CNN designers must be aware of, that the parameter counts can explode when we transition to the flat ANN portion of the network. 
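#
# As a quick back-of-the-envelope check of where those 12,870,826 parameters come from:
#
# * `Conv2D(64, 5x5)` on a 1-channel input: 64 * (5*5*1 + 1) = 1,664
# * `Conv2D(32, 3x3)` on 64 feature maps: 32 * (3*3*64 + 1) = 18,464
# * `Flatten` of the 28 x 28 x 32 output: a 25,088-long vector (no parameters of its own)
# * `Dense(512)` after the flatten: 25,088 * 512 + 512 = 12,845,568
# * `Dense(10)` output layer: 512 * 10 + 10 = 5,130
#
# 1,664 + 18,464 + 12,845,568 + 5,130 = 12,870,826, and almost all of it sits in the flatten-to-dense transition.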
#
# We'll use heavy striding as well as a new type of layer called GlobalAveragePooling to mitigate this problem soon.

# +
# Note the use of conv_training_data, shaped (n_samples, 28, 28, 1), instead of training_data, shaped (n_samples, 784)
# training_data      => one 784-long vector per sample
# conv_training_data => one 28 x 28 x 1 matrix per sample
history = model.fit(
    conv_training_data,
    training_labels,
    batch_size=128,
    epochs=10,
    verbose=True,
    validation_split=.1
)

plot_training_history(history, model, eval_images=True)
# -

# # Notes
#
# ### Expensive Computation
#
# It took **forever**. CNNs tend to be more computationally intensive than ANNs, both because the transition from the convolutional portion to the ANN portion explodes the number of weights, and because the convolution operation itself is expensive.
#
# We will address some of this with striding/pooling layers and with global average pooling instead of Flatten. CNN operations are also highly parallelizable, so a good GPU can make a huge difference. This saved version of the notebook was executed on a commodity CPU.
#
# ### Overfitting
#
# CNNs, like all their neural network cousins, have a strong capacity for overfitting. We can apply standard dropout on the final ANN layers as usual, and we'll introduce something called "spatial dropout", a regularization technique specific to convolutional layers.
#
# ### Performance
#
# Minute for minute the ANN improved faster, but epoch for epoch the CNN did better. Both of these measures ultimately matter, for different reasons. A CNN can learn the features of an image having seen that image fewer times, although it looks at each image "more carefully" during each epoch. This gets us closer to "few shot" and "one shot" learning, which matters for online learning for example, and is a major goal of AI research.
#
# At budget time, though, the minute-for-minute measure matters more. You're paying for the energy or CPU/GPU time you use, and CNNs tend to be more expensive in terms of cost per percentage point of accuracy. However, the state-of-the-art CNNs all outperform the state-of-the-art ANNs in terms of top-line accuracy, so if you NEED a highly accurate system you'll probably have to accept the additional costs of a CNN.
#
# ## A Helpful View Into The Performance:
#
# Let's look at a "confusion matrix" to see what kinds of misclassifications our network made. This can be very useful in situations where some types of misclassification are "acceptable", such as a Sneaker classified as an Ankle Boot. In other cases, say "malignant tumor" vs "benign tumor", such a miss is not acceptable.
#
# Knowing **how** your model fails can be very useful, and an important part of risk assessment.

# +
# When did our evaluator do poorly?
predictions = model.predict(conv_test_data)
cm = confusion_matrix(np.argmax(predictions, axis=1), np.argmax(test_labels, axis=1))

plt.figure(figsize = (15, 15))
name_labels = [
    'T-shirt/top',
    'Trouser',
    'Pullover',
    'Dress',
    'Coat',
    'Sandal',
    'Shirt',
    'Sneaker',
    'Bag',
    'Ankle boot'
]

sn.heatmap(cm, annot=True, xticklabels=name_labels, yticklabels=name_labels)
plt.show()
# -

# # A Less Naive CNN
#
# We're going to apply a few best practices to our CNN architecture now to address the problems we saw in our naive network.
#
# ## Striding (Formerly Pooling)
#
# When we use strides we'll be reducing the size of the output filter maps compared to the input features. 2x2 strides (with padding) cut the height and width of the input in half! This will have two major impacts on our network:
#
# 1. Like in ANNs, we're trying to force the network to condense the relevant information into a smaller size, leaving behind noise and extracting signal. This is part of why CNNs are sometimes referred to as "feature extractors."
#
# 2. This can have an enormous impact on the speed of the network. By shrinking the filter maps at each layer we can afford to use more layers.
#
# > Note: Earlier in the history of CNNs it was more common to use an unstrided convolutional layer followed by a max pooling layer to achieve these same two goals. That has become much less popular: striding has been shown to have roughly the same effect in terms of condensing information, and a strided convolution is significantly faster computationally than an unstrided convolution followed by pooling.
#
# ## Dropout and Spatial Dropout
#
# Regular Dropout can be used on the ANN sections of our networks. Spatial dropout is a very similar idea that drops out entire feature maps (channels) at a time rather than individual values.
#
# ## Global Average Pooling
#
# Global average pooling is an alternative to Flatten. Instead of preserving all the values and simply reshaping them into a flat vector, global average pooling returns the average of all the values in each filter map. This does lose some of the spatial information and isn't always appropriate (Flatten is better for object localization, for example), but it really helps with the parameter explosion when transitioning to dense layers.
#
# > Note: GAP layers also allow us to accept arbitrary input sizes, rather than a fixed input size, since the number of parameters after the pooling layer becomes independent of the number of pixels in the input; only the number of filters in the final convolutional layer matters. This isn't used that commonly because simply reshaping images is pretty easy (as you'll see in later sections), but it is an interesting property that can be useful.

# +
# Let's make a few small changes: strided convolutions, spatial dropout, and global average pooling.
model = Sequential()

# Note, strides will shrink the output size.
model.add(Conv2D(64, kernel_size=(5, 5), strides=(2,2), activation='relu', padding='same', input_shape=(28, 28, 1)))

# Note, we INCREASED the number of filters as we're decreasing the size of each one.
# This is fairly common, especially if the number of classes is very high.
# It has been shown that individual filters tend to focus on individual classes, esp.
# w/ GlobalAveragePooling and at later layers.
model.add(Conv2D(128, kernel_size=(3, 3), strides=(2,2), activation='relu', padding='same'))
model.add(SpatialDropout2D(rate=0.2))
model.add(Conv2D(256, kernel_size=(3, 3), strides=(2,2), activation='relu', padding='same'))
model.add(SpatialDropout2D(rate=0.2))

# GAP instead of Flatten, which will be a huge parameter-saving move.
model.add(GlobalAveragePooling2D())

model.add(Dense(128, activation='relu'))
model.add(Dense(64, activation='relu'))

# A bit of standard dropout
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

model.summary()
# -

# ## Notes:
#
# Way fewer parameters, nice.
#
# Note the shape of the outputs: the final convolutional layer is only producing 4x4 feature maps, but 256 of them.
#
# Let's see how it does...
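# Before we train it, here's a quick check of what GlobalAveragePooling2D actually computes (a minimal NumPy sketch; the 4x4x256 shape simply mirrors the model summary above):

# +
# Each of the 256 feature maps collapses to its single mean value,
# so one sample's (4, 4, 256) output becomes just (256,).
feature_maps = np.random.rand(4, 4, 256)        # stand-in for one sample's final conv output
pooled = feature_maps.mean(axis=(0, 1))         # average over the two spatial dimensions

print(feature_maps.shape, '=>', pooled.shape)   # (4, 4, 256) => (256,)

# Compare the sizes the following Dense layer would have to connect to:
print('Flatten:', 4 * 4 * 256, 'values   GAP:', 256, 'values')
# -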
# +
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(conv_training_data, training_labels, batch_size=128, epochs=10, verbose=True, validation_split=.1)
plot_training_history(history, model, eval_images=True)
# -

# ## Notes:
#
# Much faster, as we had hoped.
#
# Much less overfitting, as we had hoped.
#
# Better top-line validation and test accuracies.
#
# Win, win, win.
#
# For fun, let's look at the confusion matrix again.

# +
predictions = model.predict(conv_test_data)
cm = confusion_matrix(np.argmax(predictions, axis=1), np.argmax(test_labels, axis=1))

plt.figure(figsize = (15, 15))
sn.heatmap(cm, annot=True, xticklabels=name_labels, yticklabels=name_labels)
plt.show()
# -

# 90% is pretty respectable, especially considering how speedy training was, and given that we didn't apply any data augmentation. [Some state of the art networks get around 93-95% accuracy](https://github.com/zalandoresearch/fashion-mnist). It's also worth noting that we mostly fail by confusing pullovers with coats, and shirts with T-shirts/tops. Those misses are kind of acceptable; it's not like we're calling shoes coats or something like that.
#
# ## Mini-Exercise:
#
# Using what you've learned, build a CNN architecture and train it. See if you can do better than the network we just trained!

# +
# Your code here... don't forget to plot your training epochs and maybe look at the confusion matrix!

# One possible starting point: a deeper stack of strided convolutions with spatial dropout.
model = Sequential()

# Note, strides will shrink the output size.
model.add(Conv2D(64, kernel_size=(5, 5), strides=(2,2), activation='relu', padding='same', input_shape=(28, 28, 1)))

# Note, we INCREASED the number of filters as we're decreasing the size of each one.
# This is fairly common, especially if the number of classes is very high.
# It has been shown that individual filters tend to focus on individual classes, esp.
# w/ GlobalAveragePooling and at later layers.
model.add(Conv2D(256, kernel_size=(3, 3), strides=(2,2), activation='relu', padding='same'))
model.add(SpatialDropout2D(rate=0.2))
model.add(Conv2D(32, kernel_size=(3, 3), strides=(2,2), activation='relu', padding='same'))
model.add(SpatialDropout2D(rate=0.2))
model.add(Conv2D(32, kernel_size=(3, 3), strides=(2,2), activation='relu', padding='same'))
model.add(SpatialDropout2D(rate=0.2))
model.add(Conv2D(256, kernel_size=(3, 3), strides=(2,2), activation='relu', padding='same'))
model.add(SpatialDropout2D(rate=0.2))

# GAP instead of Flatten, which will be a huge parameter-saving move.
model.add(GlobalAveragePooling2D())

model.add(Dense(256, activation='relu'))
model.add(Dense(64, activation='relu'))

# A bit of standard dropout
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

model.summary()

# +
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(conv_training_data, training_labels, batch_size=128, epochs=10, verbose=True, validation_split=.1)
plot_training_history(history, model, eval_images=True)
# -

# # Skip Layers
#
# Skip layers allow us to send information directly from earlier layers to later layers without it being transformed by the layers in between. This tactic was invented to solve a variant of the "vanishing gradient" problem. With skip layers we can forward signals from the original image and layer them on top of activation maps produced at later layers.
# We also use the skip layers during backpropagation, which helps the gradient values flow backwards through the network without becoming so diminished that the early layers never change.
#
# This tactic is only really needed as our networks become very deep. Just like ANNs with Sigmoid layers, CNNs can suffer from a vanishing gradient problem, and these skip layers were a massive innovation in the history of CNNs that has really allowed "deep" learning with CNNs to thrive.
#
# In the following example, the skip layers are overkill and probably won't even help much. But for completeness we should see how to build them, as nearly every SOTA CNN architecture includes this concept.

# +
# To use these, we need to use the non-sequential Model format
from tensorflow.keras.layers import Input, Add
from tensorflow.keras.models import Model

# First we make an input layer...
inputs = Input(shape=(28, 28, 1))

# Each subsequent layer is called like a function with the layer(s) that should be the
# inputs to this layer. These three each have a single input.
cnn_1 = Conv2D(32, kernel_size=(3, 3), strides=(2,2), activation='relu', padding='same')(inputs)
cnn_2 = Conv2D(32, kernel_size=(3, 3), strides=(1,1), activation='relu', padding='same')(cnn_1)
cnn_3 = Conv2D(32, kernel_size=(3, 3), strides=(1,1), activation='relu', padding='same')(cnn_2)

# But this Add layer takes 2 inputs.
# For Add layers, the two inputs being combined must be exactly the same shape.
# Those layers are lined up and added "pixel-wise" or "matrix-cell-wise" to produce the
# output layer, which will also be the same shape as the 2 input layers.
add = Add()([cnn_1, cnn_3])

cnn_4 = Conv2D(32, kernel_size=(3, 3), strides=(1,1), activation='relu', padding='same')(add)
flat = GlobalAveragePooling2D()(cnn_4)
dense_1 = Dense(256, activation='relu')(flat)
drop = Dropout(rate=0.2)(dense_1)
classifier = Dense(num_classes, activation='softmax')(drop)

model = Model(inputs=inputs, outputs=classifier)
model.summary()
# -

# ## Notes:
#
# Pay special attention to the Add layer's "connected to" section in the summary. Nothing else here is particularly new or interesting, but this concept is VERY important to SOTA CNNs. We'll see more advanced architectures applying this idea quite a lot.

# +
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(conv_training_data, training_labels, batch_size=128, epochs=5, verbose=True, validation_split=.1)
plot_training_history(history, model, eval_images=True)
# -

# # Performance notes:
#
# We didn't expect this to do any better than the networks above. It has very little regularization, it's quite small parameter-wise, and it serves only as a simple example of skip layers.

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .ps1
#       format_name: light
#       format_version: '1.5'
#       jupytext_version: 1.14.4
#   kernelspec:
#     display_name: .NET (PowerShell)
#     language: PowerShell
#     name: .net-powershell
# ---

# # T1562 - Impair Defenses
# Adversaries may maliciously modify components of a victim environment in order to hinder or disable defensive mechanisms. This not only involves impairing preventative defenses, such as firewalls and anti-virus, but also detection capabilities that defenders can use to audit activity and identify malicious behavior. This may also span both native defenses as well as supplemental capabilities installed by users and administrators.
# # Adversaries could also target event aggregation and analysis mechanisms, or otherwise disrupt these procedures by altering other system components. # ## Atomic Tests: # Currently, no tests are available for this technique. # ## Detection # Monitor processes and command-line arguments to see if security tools or logging services are killed or stop running. Monitor Registry edits for modifications to services and startup programs that correspond to security tools. Lack of log events may be suspicious. # # Monitor environment variables and APIs that can be leveraged to disable security measures. # ## Shield Active Defense # ### Application Diversity # Present the adversary with a variety of installed applications and services. # # Application diversity is presenting multiple software targets to the adversary. On a single target system, defenders can configure multiple different services or user software applications. On a target network, defenders can present systems with a variety of operating systems, operating system versions, applications, and services. # #### Opportunity # There is an opportunity to study the adversary and collect first-hand observations about them and their tools. # #### Use Case # A defender can plant AV or monitoring tools which are easy for an adversary to remove. If an adversary removes these, they may be enticed to act more openly believing they have removed monitoring from the system. # #### Procedures # Use a mix of vulnerable and nonvulnerable software on a system to allow you to see what exploits the adversary leverages in their attacks. # Install Anti-virus or other end-point detection tools on systems to see if an adversary takes note of them and if so, how they react. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] colab_type="text" id="uDI0ZLrS9jAX" # #

    Phystech School of Applied Mathematics and Informatics (PSAMI), MIPT

    # + [markdown] colab_type="text" id="k0ygS84T9jAY" # --- # + [markdown] colab_type="text" id="HnjQZLuC9jAY" #

    The Rosenblatt Perceptron

    (a neuron with a threshold activation function)

    # + [markdown] colab_type="text" id="543-uGN-9jAZ" # --- # + [markdown] colab_type="text" id="1JBsLVMI9jAa" # В данном ноутбуке Вам нужно будет: # # - самостоятельно реализовать класс **`Perceptron()`** -- нейрон пороговой функцией активации # - обучить и протестировать Ваш перцептрон на сгенерированных и реальных данных (файлы с реальными данными помещены в папку /data в этой же директории) # - сравнить качество работы Вашего класса с классом из библиотеки `scikit-learn` (`sklearn.linear_model.Perceptron()`) # + [markdown] colab_type="text" id="cOAHk8eO9jAb" #

    Introduction

    # + [markdown] colab_type="text" id="bF22tUW79jAc" # Почти любой алгоритм машинного обучения, решающий задачу *классификации* или *регрессии*, работает так: # # 1. (*стадия инициализации*) Задаются его **гиперпараметры**, то есть те величины, которые не "выучиваются" алгоритмом в процессе обучения самостоятельно # 2. (*стадия обучения*) Алгоритм запускается на данных, **обучаясь** на них и меняя свои **параметры** (не путать с *гипер*параметрами) каким-то определённым образом (например, с помощью *метода градиентного спуска* или *метода коррекции ошибки*), исходя из функции потерь (её называют *loss function*). Функция потерь, по сути, говорит, где и как ошибается модель # 3. (*стадия предсказания*) Модель готова, и теперь с помощью неё можно делать **предсказания** на новых объектах # + colab={} colab_type="code" id="3hxoVvmN9jAd" from matplotlib import pyplot as plt from matplotlib.colors import ListedColormap # тут лежат разные штуки для цветовой магии import numpy as np import pandas as pd # + [markdown] colab_type="text" id="jHd4CZjS9jAg" #

    The Perceptron class

    # + [markdown] colab_type="text" id="PObIs0OB9jAh" # В даном разделе будет решаться задача **бинарной классификации** с помощью перцептрона: # - *Входные данные*: матрица $X$ размера $(n, m)$ и столбец $y$ из нулей и единиц размера $(n, 1)$. Строкам матрицы соответствуют объекты, столбцам - признаки (то есть строка $i$ есть набор признаков (*признаковое описание*) объекта $X_i$). # - *Выходные данные*: столбец $\hat{y}$ из нулей и единиц размера $(n, 1)$ - предсказания алгоритма. # + [markdown] colab_type="text" id="wkd_24Zr9jAi" # Модель нейрона в биологии и в deep learning: # # ![title](http://lamda.nju.edu.cn/weixs/project/CNNTricks/imgs/neuron.png) # + [markdown] colab_type="text" id="TwRqMBVPcy0j" # \**картинка из http://cs231n.github.io/neural-networks-1/* # + [markdown] colab_type="text" id="82qIny-49jAi" # Чтобы понять, как мы будем обновлять параметры модели (веса), нужно знать, какую функцию потерь мы оптимизируем (находим минимум). В данном случае мы решаем задачу бинарной классификации (2 класса: 1 или 0), возьмём в качестве функции потерь среднеквадратичную ошибку: # # $$Loss(w, x) = \frac{1}{2n}\sum_{i=1}^{n} (\hat{y_i} - y_i)^2 = \frac{1}{2n}\sum_{i=1}^{n} (f(w \cdot X_i) - y_i)^2$$ # # Здесь $w \cdot X_i$ - скалярное произведение, а $f(w \cdot X_i)$ - пороговая функция: # # $$ # f(z) = # \begin{cases} # 1, &\text{если } w \cdot X_i > 0 \\ # 0, &\text{если } w \cdot X_i \le 0 # \end{cases} # $$ # # **Примечание:** В формуле предполагается, что $b$ - свободный член - является частью вектора весов: $w_0$. Тогда, если к $X$ приписать слева единичный столбец, в скалярном произведении $b$ будет именно как свободный член (лучше распишите это -- станет понятнее). При реализации класса `Perceptron()` $b$ нужно считать отдельно (чтобы было нагляднее). # + [markdown] colab_type="text" id="6wGYvpsv9jAj" # ** Реализуйте функцию потерь $Loss$: ** # + colab={} colab_type="code" id="KIMPzh0B9jAk" def Loss(y_pred, y): return # Ваш код здесь # + [markdown] colab_type="text" id="5QrUrljB9jAn" # Поскольку у *пороговой функции* не существует производной (вы её график видели? Выглядит он, конечно, простым, но производная таких не любит), то мы не можем использовать градиентный спуск, ведь: # # # # $$ \frac{\partial Loss}{\partial w} = \frac{1}{n} X^T\left(f(w \cdot X) - y\right)f'(w \cdot X)$$ # # где $f^{'}(w \cdot X)$ - в точке 0 посчитать не получится. Но ведь хочется как-то обновлять веса, иначе как обучить алгоритм отличать груши от яблок? # # Поэтому предлагается обновлять так: # # $$w^{j+1} = w^{j} - \alpha\Delta{w^{j}}$$ # # где: # # $$\Delta{w} = \frac{1}{n}X^T(\hat{y} - y) = \frac{1}{n}X^T(f(w^j \cdot X) - y)$$ # # (не забудьте, что при $w_0 = b$ признак $x_0$ = 1), где $w \cdot X$ - матричное произведение столбца весов $w$ на матрицу объектов-признаков $X$, а индекс $j$ -- номер итерации градиентного спуска. # # Это правило является неким частным случаем градиентного спуска для данного случая (*[правило Хебба](https://ru.wikipedia.org/wiki/%D0%94%D0%B5%D0%BB%D1%8C%D1%82%D0%B0-%D0%BF%D1%80%D0%B0%D0%B2%D0%B8%D0%BB%D0%BE)*, *[метод коррекции ошибки](https://ru.wikipedia.org/wiki/%D0%9C%D0%B5%D1%82%D0%BE%D0%B4_%D0%BA%D0%BE%D1%80%D1%80%D0%B5%D0%BA%D1%86%D0%B8%D0%B8_%D0%BE%D1%88%D0%B8%D0%B1%D0%BA%D0%B8)*). # + [markdown] colab_type="text" id="Zrm01BR69jAo" # Теперь, вооружившись всеми формулами и силой духа, нужно написать свой класс **`Perceptron()`**. Уже есть код класса и немного кода реализации. По-максимуму используйте **Numpy** при реализации, т.к. 
будет проверяться и скорость работы Вашего алгоритма. # # *Примечание*: В коде ниже `y_pred` - это $\hat{y}$ из формул выше # + colab={"base_uri": "https://localhost:8080/", "height": 132} colab_type="code" id="rLBDN2G89jAo" outputId="a9b7d64e-e7fe-4ce4-ee46-608991580501" executionInfo={"status": "error", "timestamp": 1539420667947, "user_tz": -300, "elapsed": 761, "user": {"displayName": "\u0413\u0440\u0438\u0433\u043e\u0440\u0438\u0439 \u041b\u0435\u043b\u0435\u0439\u0442\u043d\u0435\u0440", "photoUrl": "", "userId": "07179937308049589303"}} class Perceptron: def __init__(self, w=None, b=0): """ :param: w -- вектор весов :param: b -- смещение """ # Пока что мы не знаем размер матрицы X, а значит не знаем, сколько будет весов self.w = w self.b = b def activate(self, x): return x > 0 def forward_pass(self, X): """ Эта функция рассчитывает ответ перцептрона при предъявлении набора объектов :param: X -- матрица объектов размера (n, m), каждая строка - отдельный объект :return: вектор размера (n, 1) из нулей и единиц с ответами перцептрона """ n = X.shape[0] y_pred = np.zeros((n, 1)) # y_pred(icted) - предсказанные классы # Ваш код здесь return y_pred def backward_pass(self, X, y, y_pred, learning_rate=0.005): """ Обновляет значения весов перцептрона в соответствие с этим объектом :param: X -- матрица объектов размера (n, m) y -- вектор правильных ответов размера (n, 1) learning_rate - "скорость обучения" (символ alpha в формулах выше) В этом методе ничего возвращать не нужно, только правильно поменять веса с помощью градиентного спуска. """ # Ваш код здесь def fit(self, X, y, num_epochs=300): """ Спускаемся в минимум :param: X -- матрица объектов размера (n, m) y -- вектор правильных ответов размера (n, 1) num_epochs -- количество итераций обучения :return: Loss_values -- вектор значений функции потерь """ self.w = np.zeros((X.shape[1], 1)) # столбец (m, 1) self.b = 0 # смещение (свободный член) losses = [] # значения функции потерь на различных итерациях обновления весов for i in range(num_epochs): # Ваш код здесь return losses # + [markdown] colab_type="text" id="XlWXLoHQ9jAr" # Класс готов. Посмотрим, правильно ли ведёт себя Ваш перцептрон. Далее идут несколько ячеек с тестовым кодом, Вам нужно просто запустить их и проверить, чтобы результаты запуска совпадали с соответствующими числами из таблиц: # + [markdown] colab_type="text" id="GnrccB6H9jAs" # **Проверка forward_pass():** # + colab={"base_uri": "https://localhost:8080/", "height": 238} colab_type="code" id="9LZgkcHv9jAt" outputId="dc10527e-b3ab-4065-ff8a-20bb5df39b4e" executionInfo={"status": "error", "timestamp": 1539420677714, "user_tz": -300, "elapsed": 616, "user": {"displayName": "\u0413\u0440\u0438\u0433\u043e\u0440\u0438\u0439 \u041b\u0435\u043b\u0435\u0439\u0442\u043d\u0435\u0440", "photoUrl": "", "userId": "07179937308049589303"}} w = np.array([1., 2.]).reshape(2, 1) b = 2. 
X = np.array([[1., 2., -1.], [3., 4., -3.2]]) perceptron = Perceptron(w, b) y_pred = perceptron.forward_pass(X.T) print ("y_pred = " + str(y_pred)) # + [markdown] colab_type="text" id="RlPOE9ia9jAv" # |Должно быть|| # |------|-------| # |**y_pred**|[1, 1, 0]| # + [markdown] colab_type="text" id="1rgBqV9D9jAv" # **Проверка backward_pass():** # + colab={} colab_type="code" id="9RkAnK0P9jAw" y = np.array([1, 0, 1]).reshape(3, 1) # + colab={"base_uri": "https://localhost:8080/", "height": 221} colab_type="code" id="Be7OJp8c9jA1" outputId="bc1d3643-d2f1-4024-dea3-d98cc9c50ff1" executionInfo={"status": "error", "timestamp": 1539420682215, "user_tz": -300, "elapsed": 619, "user": {"displayName": "\u0413\u0440\u0438\u0433\u043e\u0440\u0438\u0439 \u041b\u0435\u043b\u0435\u0439\u0442\u043d\u0435\u0440", "photoUrl": "", "userId": "07179937308049589303"}} perceptron.backward_pass(X.T, y, y_pred) print ("w = " + str(perceptron.w)) print ("b = " + str(perceptron.b)) # + [markdown] colab_type="text" id="nNThF8MT9jA4" # |Должно быть|| # |-|-| # |**w**| [[ 0.995], [1.988]] | # |**b**| 2.0 | # + [markdown] colab_type="text" id="EDjsmpZp9jA5" # Посмотрим, как меняется функция потерь в течение процесса обучения на реальных данных - датасет "Яблоки и Груши": # + colab={"base_uri": "https://localhost:8080/", "height": 878} colab_type="code" id="aPzhL2L99jA5" outputId="ad70b9a6-f5a3-4c7a-cf5b-4adc7c725f22" executionInfo={"status": "error", "timestamp": 1539420685187, "user_tz": -300, "elapsed": 711, "user": {"displayName": "\u0413\u0440\u0438\u0433\u043e\u0440\u0438\u0439 \u041b\u0435\u043b\u0435\u0439\u0442\u043d\u0435\u0440", "photoUrl": "", "userId": "07179937308049589303"}} data = pd.read_csv("./data/apples_pears.csv") # + colab={} colab_type="code" id="q7cWGg5S9jA7" data.head() # + colab={} colab_type="code" id="V6T8WK2w9jA-" plt.figure(figsize=(10, 8)) plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=data['target'], cmap='rainbow') plt.title('Яблоки и груши', fontsize=15) plt.xlabel('симметричность', fontsize=14) plt.ylabel('желтизна', fontsize=14) plt.show(); # + [markdown] colab_type="text" id="JYSpUvQM9jBE" # **Вопрос:** Какой класс соответствует яблокам (какого они цвета на графике)? # + [markdown] colab_type="text" id="MO0fW5R29jBF" # **Ответ:** <Ваш ответ> # + [markdown] colab_type="text" id="X6m0IAdu9jBF" # Обозначим, что здесь признаки, а что - классы: # + colab={} colab_type="code" id="ARYN13Io9jBG" X = data.iloc[:,:2].values # матрица объекты-признаки y = data['target'].values.reshape((-1, 1)) # классы (столбец из нулей и единиц) # + [markdown] colab_type="text" id="MRCQeKtH9jBI" # **Вывод функции потерь** # Функция потерь должна убывать и в итоге стать близкой к 0 # + colab={} colab_type="code" id="sIR0g6mQ9jBJ" # %%time perceptron = # Ваш код здесь losses = # Ваш код здесь plt.figure(figsize=(10, 8)) plt.plot(losses) plt.title('Функция потерь', fontsize=15) plt.xlabel('номер итерации', fontsize=14) plt.ylabel('$Loss(\hat{y}, y)$', fontsize=14) plt.show() # + [markdown] colab_type="text" id="gnzSyV_j9jBN" # Посмотрим, как перцептрон классифицировал объекты из выборки: # + colab={} colab_type="code" id="bNhsJbuY9jBO" plt.figure(figsize=(10, 8)) plt.scatter(data.iloc[:, 0], data.iloc[:, 1], c=perceptron.forward_pass(X).ravel(), cmap='spring') plt.title('Яблоки и груши', fontsize=15) plt.xlabel('симметричность', fontsize=14) plt.ylabel('желтизна', fontsize=14) plt.show(); # + [markdown] colab_type="text" id="aE8iMuT39jBQ" #

    Predicting Gender from Voice

    # + [markdown] colab_type="text" id="hQHD1i1W9jBR" # В этой задаче нужно сравнить качество работы Вашего перцептрона и алгоритма из библиотеки `sklearn` на датасете с сайта [Kaggle](https://www.kaggle.com) - [Gender Recognition by Voice](https://www.kaggle.com/primaryobjects/voicegender). В данном датасете в качестве признаков выступают различные звуковые характеристики голоса, а в качестве классов - пол (мужчина/женщина). Подробнее о самих признаках можно почитать [на странице датасета](https://www.kaggle.com/primaryobjects/voicegender) (на английском). Нашей целью пока что является просто протестировать на этих данных два алгоритма. # + [markdown] colab_type="text" id="2duv7On99jBR" # **! Обратите внимание на имя функции из sklearn - skPerceptron** (это сделано, чтобы не совпадало с именем вашего класса) # + colab={} colab_type="code" id="YaLaxBHR9jBS" import pandas as pd from sklearn.linear_model import Perceptron as skPerceptron from sklearn.metrics import accuracy_score # + colab={} colab_type="code" id="IaNjHU7Q9jBU" data_path = './data/voice.csv' data = pd.read_csv(data_path) data['label'] = data['label'].apply(lambda x: 1 if x == 'male' else 0) # + colab={} colab_type="code" id="eU1EZFzM9jBW" data.head() # + colab={} colab_type="code" id="QCSK3sfX9jBY" # Чтобы перемешать данные. Изначально там сначала идут все мужчины, потом все женщины data = data.sample(frac=1) # + colab={} colab_type="code" id="VKY1jHT79jBZ" X_train = data.iloc[:int(len(data)*0.7), :-1] # матрица объекты-признаки y_train = data.iloc[:int(len(data)*0.7), -1] # истинные значения пола (мужчина/женщина) X_test = data.iloc[int(len(data)*0.7):, :-1] # матрица объекты-признаки y_test = data.iloc[int(len(data)*0.7):, -1] # истинные значения пола (мужчина/женщина) # + [markdown] colab_type="text" id="DDsWavYZ9jBe" # Тут нужно натренировать Ваш перцептрон и перцептрон из `sklearn` на этих данных: # + colab={} colab_type="code" id="4Z8Rh-bu9jBf" # Ваш код здесь # + [markdown] colab_type="text" id="3qsolz149jBh" # Сравним доли правильных ответов (на тестовых данных): # + colab={} colab_type="code" id="l6R3cXLO9jBi" print('Точность (доля правильных ответов, из 100%) моего перцептрона: {:d}'.format(accuracy_score(<Ваш код здесь>) * 100)) print('Точность (доля правильных ответов) перцептрона из sklearn: {:.1f} %'.format(accuracy_score(<Ваш код здесь>) * 100)) # + [markdown] colab_type="text" id="CSasTfW09jBj" # **Вопрос:** Хорошее ли качество показывает перцептрон? Как Вы думаете, почему? Можете писать любые мысли на этот счёт. # + [markdown] colab_type="text" id="uWaJ-92j9jBj" # **Ответ:**<Ваш ответ> # + [markdown] colab_type="text" id="yyX2G7VvdvMC" # ### Важно # + [markdown] colab_type="text" id="sTZFLUqFdv57" # Стоит понимать, что перцептрон сам по себе не используется в приложениях. Мы продемонстрровали его вам, чтобы вы знали, с чего всё начиналось. На самом деле это просто один нейрон с пороговой функцией активации, который не используется в многослойных нейросетях и каких-либо прикладных задачах, но всё же является хорошим учебным примером, помогающим понять то, как обновляются веса в соответствие с ошибками и перейти к рассмотрению более полезных моделей (нейронов с другими функциями активации). # + [markdown] colab_type="text" id="MC0FNrfq9jBl" #

    Useful Links

    # + [markdown] colab_type="text" id="AZLMhq149jBl" # 1). Lecture Notes Стэнфордского университета: http://cs231n.github.io/neural-networks-1/ # 2). [Википедия про перцептрон](https://ru.wikipedia.org/wiki/%D0%9F%D0%B5%D1%80%D1%86%D0%B5%D0%BF%D1%82%D1%80%D0%BE%D0%BD) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="VnHGTkrunq6T" executionInfo={"status": "ok", "timestamp": 1630500974932, "user_tz": -330, "elapsed": 642, "user": {"displayName": "CE137 - ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} outputId="4c3c039e-4525-442e-ecbf-d08b6041adab" from google.colab import drive drive.mount('/content/drive') # + id="qfZ9zKYXn7mt" executionInfo={"status": "ok", "timestamp": 1630500976601, "user_tz": -330, "elapsed": 1671, "user": {"displayName": "CE137 - ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} import nltk import matplotlib.pyplot as plt import pandas as pd # + colab={"base_uri": "https://localhost:8080/"} id="efz-9nBPN-Lw" executionInfo={"status": "ok", "timestamp": 1630500976602, "user_tz": -330, "elapsed": 11, "user": {"displayName": "CE137 - Jeet Trivedi", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} outputId="73cc84cc-135e-47d3-f1bc-51d87812e7d6" # text analysis text = """Discussing climate, sustainability, and preserving the natural world with President @EmmanuelMacron today in Paris. #BezosEarthFund #ClimatePledge""" import re import string from nltk.corpus import stopwords from nltk.stem import PorterStemmer from nltk.tokenize import TweetTokenizer remove_link = re.sub(f'https?:\/\/.*[\r\n]*', '', text) remove_link = re.sub(r'#', '', remove_link) print(remove_link) # + colab={"base_uri": "https://localhost:8080/"} id="-A7WllK8O2BD" executionInfo={"status": "ok", "timestamp": 1630500976602, "user_tz": -330, "elapsed": 9, "user": {"displayName": "CE137 - ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} outputId="fc6d760b-9e96-469a-fb57-d09098ab1405" print('\033[92m' + text) print('\033[92m' + remove_link) # + colab={"base_uri": "https://localhost:8080/"} id="naw66Y2BO8hg" executionInfo={"status": "ok", "timestamp": 1630500977370, "user_tz": -330, "elapsed": 774, "user": {"displayName": "CE137 - ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} outputId="b2eacdbf-15c2-4493-fd99-b03d5d0bd573" from nltk.tokenize import sent_tokenize text = """Hello Mr. steve, how you doing? whats up? The weather is great, and city is awesome. how you doing? The sky is pinkish-blue. 
You shouldn't eat cardboard, how you doing?""" nltk.download('punkt') tokenized_text = sent_tokenize(text) print(tokenized_text) # + colab={"base_uri": "https://localhost:8080/"} id="8UubfzEhPaZl" executionInfo={"status": "ok", "timestamp": 1630500977371, "user_tz": -330, "elapsed": 5, "user": {"displayName": "CE137 - ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} outputId="7a5080a6-7da9-4c04-a804-c614112cfb80" from nltk.tokenize import word_tokenize tokenized_word = word_tokenize(text) print(tokenized_word) # + colab={"base_uri": "https://localhost:8080/", "height": 330} id="hV1elgAaPn7E" executionInfo={"status": "ok", "timestamp": 1630500978092, "user_tz": -330, "elapsed": 723, "user": {"displayName": "CE137 - ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} outputId="d6457241-b80b-471b-bb1b-dd6d8f98ef6c" # frequency distribution from nltk.probability import FreqDist fredist = FreqDist(tokenized_word) fredist.most_common(4) #plotting frequency distribution import matplotlib.pyplot as plt fredist.plot(30, cumulative = False, color = 'blue') plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="FhB3jZWXQdKd" executionInfo={"status": "ok", "timestamp": 1630500978092, "user_tz": -330, "elapsed": 11, "user": {"displayName": "CE137 - ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} outputId="d057000f-c6e5-4d9b-cddf-48f62f7f8e75" from nltk.corpus import stopwords nltk.download('stopwords') stop_words = set(stopwords.words('english')) print(stop_words) # + colab={"base_uri": "https://localhost:8080/"} id="ooeyACgUQtpK" executionInfo={"status": "ok", "timestamp": 1630500978093, "user_tz": -330, "elapsed": 10, "user": {"displayName": "CE137 - ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} outputId="e61d7fa8-a6dd-4399-b60e-c2973a2da7d3" filtered_sentence = [] for word in tokenized_word: if word not in stop_words: filtered_sentence.append(word) print('Tokenized Sentence : \n', tokenized_word) print('\nFiltered Sentence : \n', filtered_sentence) # + colab={"base_uri": "https://localhost:8080/"} id="aopr6n44RKrM" executionInfo={"status": "ok", "timestamp": 1630500978094, "user_tz": -330, "elapsed": 9, "user": {"displayName": "CE137 - ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} outputId="dbd3f337-7bb2-411e-93cc-307d5719521e" from nltk.stem import PorterStemmer from nltk.tokenize import sent_tokenize, word_tokenize ps = PorterStemmer() stemmed_sentence = [] for word in filtered_sentence: stemmed_sentence.append(ps.stem(word)) print('Filtered Sentence : \n', filtered_sentence) print('\nStemmed Sentence : \n', stemmed_sentence) # + colab={"base_uri": "https://localhost:8080/"} id="FDsRBcF8TNGv" executionInfo={"status": "ok", "timestamp": 1630500980503, "user_tz": -330, "elapsed": 2416, "user": {"displayName": "CE137 - ", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjZC2bbr7T78zgxxjNKGVdv8bW0Fu-H40Pm5weg=s64", "userId": "07444974201945981837"}} outputId="9685b30e-bfb3-4533-d5d8-1af5bb939ee5" # stemming and lemmatization from nltk.stem.wordnet import WordNetLemmatizer from nltk.stem.porter import PorterStemmer nltk.download('wordnet') 
lemmatizer = WordNetLemmatizer() ps = PorterStemmer() word = 'crying' print('Lemmatized Word : ', lemmatizer.lemmatize(word, 'v')) print('Stemmed word : ', ps.stem(word)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.10 64-bit # name: python3 # --- import math import numpy as np import torch from torch import nn from d2l import torch as d2l # + MAX_DEGREE = 20 N_TRAIN = 100 N_TEST = 100 true_w = np.zeros(MAX_DEGREE) # Allocate lots of empty space true_w[0:4] = np.array([5, 1.2, -3.4, 5.6]) features = np.random.normal(size=(N_TRAIN + N_TEST, 1)) np.random.shuffle(features) poly_features = np.power(features, np.arange(MAX_DEGREE).reshape(1, -1)) for i in range(MAX_DEGREE): poly_features[:, i] /= math.gamma(i + 1) labels = np.dot(poly_features, true_w) labels += np.random.normal(scale=0.1, size=labels.shape) # + true_w, features, poly_features, labels = [torch.tensor(x, dtype= d2l.float32) for x in [true_w, features, poly_features, labels]] features[:2], poly_features[:2, :], labels[:2] # - def evaluate_loss(net, data_iter, loss): metric = d2l.Accumulator(2) for X, y in data_iter: out = net(X) y = y.reshape(out.shape) l = loss(out, y) metric.add(l.sum(), l.numel()) return metric[0] / metric[1] def train(train_features, test_features, train_labels, test_labels, num_epochs=400): loss = nn.MSELoss() input_shape = train_features.shape[-1] net = nn.Sequential(nn.Linear(input_shape, 1, bias=False)) batch_size = min(10, train_labels.shape[0]) train_iter = d2l.load_array((train_features, train_labels.reshape(-1, 1)), batch_size) test_iter = d2l.load_array((test_features, test_labels.reshape(-1, 1)), batch_size, is_train=False) trainer = torch.optim.SGD(net.parameters(), lr=0.01) animator = d2l.Animator(xlabel='epoch', ylabel='loss', yscale='log', xlim=[1, num_epochs], ylim=[1e-3, 1e2], legend=['train', 'test']) for epoch in range(num_epochs): d2l.train_epoch_ch3(net, train_iter, loss, trainer) if epoch == 0 or (epoch + 1) % 20 == 0: animator.add(epoch + 1, (evaluate_loss( net, train_iter, loss), evaluate_loss(net, test_iter, loss))) print('weight:', net[0].weight.data.numpy()) train(poly_features[:N_TRAIN, :4], poly_features[N_TRAIN:, :4], labels[:N_TRAIN], labels[N_TRAIN:]) train(poly_features[:N_TRAIN, :2], poly_features[N_TRAIN:, :2], labels[:N_TRAIN], labels[N_TRAIN:]) train(poly_features[:N_TRAIN, :], poly_features[N_TRAIN:, :], labels[:N_TRAIN], labels[N_TRAIN:], num_epochs=1500) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Excercises Electric Machinery Fundamentals # ## Chapter 4 # ## Problem 4-2 # + slideshow={"slide_type": "skip"} # %pylab notebook # %precision 0 # - # ### Description # Given a 13.8-kV, 50-MVA, 0.9-power-factor-lagging, 60-Hz, four-pole, Y-connected synchronous machine with: # # * a synchronous reactance of $2.5\,\Omega$ # * an armature resistance of $0.2\,\Omega$. # * at 60 Hz, its friction and windage losses are 1 MW # * its core losses are 1.5 MW. # * The field circuit has a dc voltage of 120 V, # * the maximum $I_F$ is 10 A. The current of the field circuit is adjustable over the range from 0 to 10 A. 
# # The OCC of this generator is shown in Figure P4-1 below # Vl = 13.8e3 # [V] PF = 0.9 Xs = 2.5 # [Ohm] Ra = 0.2 # [Ohm] P = 50e6 # [W] Pf_w = 1.0e6 # [W] Pcore = 1.5e6 # [W] Pstray = 0 # [W] n_m = 1800 # [r/min] # #### (a) # * How much field current is required to make the terminal voltage $V_T$ (or line voltage $V_L$ ) equal to 13.8 kV when the generator is running at no load? # # #### (b) # * What is the internal generated voltage $E_A$ of this machine at rated conditions? # # #### (c) # * What is the phase voltage $V_\phi$ of this generator at rated conditions? # # #### (d) # * How much field current is required to make the terminal voltage $V_T$ equal to 13.8 kV when the generator is running at rated conditions? # # #### (e) # Suppose that this generator is running at rated conditions, and then the load is removed without changing the field current. # # * What would the terminal voltage of the generator be? # # #### (f) # * How much steady-state power and torque must the generator’s prime mover be capable of supplying to handle the rated conditions? # # #### (g) # * Construct a capability curve for this generator. # ### SOLUTION # #### (a) # If the no-load terminal voltage is 13.8 kV, the required field current can be read directly from the open-circuit characteristic. It is $\underline{\underline{I_F = 3.50\,A}}$. # #### (b) # This generator is Y-connected, so $I_L = I_A$ . At rated conditions, the line and phase current in this generator is: # # $$I_A = I_L = \frac{P}{\sqrt{3}V_L}$$ ia = P / (sqrt(3) * Vl) Ia_angle = -arccos(PF) Ia = ia * (cos(Ia_angle) + sin(Ia_angle)*1j) print('Ia = {:.0f} A ∠{:.1f}°'.format(abs(Ia), Ia_angle/pi *180)) # The phase voltage of this machine is: # $$V_\phi = V_T / \sqrt{3}$$ V_phase = Vl / sqrt(3) print('V_phase = {:.0f} V'.format(V_phase)) # The internal generated voltage of the machine is: # $$\vec{E}_A = \vec{V}_\phi + R_A\vec{I}_A + jX_S\vec{I}_A$$ Ea = V_phase + Ra*Ia + Xs*1j*Ia Ea_angle = arctan(Ea.imag/Ea.real) print(''' Ea = {:.0f} V ∠{:.1f}° =================='''.format(abs(Ea), Ea_angle/pi*180)) # #### (c) # The phase voltage of the machine at rated conditions is: print(''' V_phase = {:.0f} V ================'''.format(V_phase)) # #### (d) # The equivalent open-circuit terminal voltage corresponding to an $E_A$ of the value calculated in **(b)** is: Vt_oc = sqrt(3) * abs(Ea) print('Vt_oc = {:.0f} kV'.format(Vt_oc/1000)) # From the OCC, the required field current is $\underline{\underline{I_F = 10\,A}}$. # #### (e) # If the load is removed without changing the field current then $V_\phi = E_A$: abs(Ea) # The corresponding terminal voltage would be $\underline{\underline{V_T = 20\,kV}}$. # #### (f) # The input power to this generator is equal to the output power plus losses. The rated output power is: Pout = P*PF print('Pout = {:.0f} MW'.format(Pout/1e6)) # $$P_{CU} = 3I^2_AR_A$$ Pcu = 3 * abs(Ia)**2 * Ra print('Pcu = {:.1f} MW'.format(Pcu/1e6)) Pin = Pout +Pcu + Pf_w + Pcore + Pstray print('Pin = {:.1f} MW'.format(Pin/1e6)) # Therefore the prime mover must be capable of supplying $P_{in}$. Since the generator is a four-pole 60 Hz machine, to must be turning at 1800 r/min. 
The required torque is: # # $$\tau_{app} = \frac{P_{in}}{\omega_m}$$ w_m = n_m * (2*pi/60.0) tau_app = Pin / w_m print(''' tau_app = {:.0f} Nm ==================='''.format(tau_app)) # #### (e) # The rotor current limit of the capability curve would be drawn from an origin of: # # $$Q = -\frac{3V^2_\phi}{X_S}$$ Q = - (3 * V_phase**2) / Xs print('Q = {:.2f} Mvar'.format(Q/1e6)) # The radius of the rotor current limit is: # # $$D_E = \frac{3V_\phi E_A}{X_S}$$ De = (3 * V_phase * abs(Ea)) / Xs print('De = {:.0f} Mvar'.format(De/1e6)) # The stator current limit is a circle at the origin of radius: # # $$S = 3V_\phi I_A$$ S = 3 * V_phase * abs(Ia) print('S = {:.0f} Mvar'.format(S/1e6)) # Get points for stator current limit: theta = arange(-95,95) # angle in degrees rad = theta * pi/180 # angle in radians s_curve = S * ( cos(rad) + sin(rad)*1j) # Get points for rotor current limit: orig = Q*1j theta = arange(65,115) # angle in degrees rad = theta * pi / 180 # angle in radians r_curve = orig + De * ( cos(rad) + sin(rad)*1j ) # Plot the capability diagram: fig= figure() ax=fig.add_subplot(1, 1, 1) ax.plot(real(s_curve/1e6),imag(s_curve/1e6),'b') ax.plot(real(r_curve/1e6),imag(r_curve/1e6),'r--') ax.set_title('Synchronous Generator Capability Diagram') ax.set_xlabel('Power (MW)') ax.set_ylabel('Reactive Power (Mvar)') ax.set_aspect('equal', 'datalim') ax.legend(('stator current limit', 'rotor current limit'), loc=3); ax.grid() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %matplotlib inline import matplotlib.pyplot as plt import math import numpy as np from skimage.draw import (line, polygon, circle, circle_perimeter, ellipse, ellipse_perimeter, bezier_curve) img = np.zeros((512, 512, 3), dtype=np.double) rr, cc = line(125, 125, 10, 300) img[rr, cc, 1] = 1 poly = np.array(( (300, 300), (480, 320), (380, 430), (220, 590), (300, 300), )) rr, cc = polygon(poly[:, 0], poly[:, 1], img.shape) img[rr, cc, 1] = 1 rr, cc = circle(250, 250, 50, img.shape) img[rr, cc, :] = (1, 1, 0) rr, cc = ellipse(300, 300, 150, 200, img.shape) img[rr, cc, 2] = 1 rr, cc = circle_perimeter(200, 300, 15) img[rr, cc, :] = (0.5, 0, 0) rr, cc = bezier_curve(70, 100, 10, 10, 150, 100, 1) img[rr, cc, :] = (1, 0, 0) rr, cc = ellipse_perimeter(120, 400, 60, 20, orientation=math.pi / 4.) img[rr, cc, :] = (1, 0, 1) rr, cc = ellipse_perimeter(120, 400, 60, 20, orientation=-math.pi / 4.) img[rr, cc, :] = (0, 0, 1) rr, cc = ellipse_perimeter(120, 400, 60, 20, orientation=math.pi / 2.) 
img[rr, cc, :] = (1, 1, 1) plt.imshow(img) plt.title('Shapes') plt.xticks([]) plt.yticks([]) plt.show() # + from skimage.draw import line_aa, circle_perimeter_aa img = np.zeros((512, 512, 3), dtype=np.double) rr, cc, val = line_aa(12, 12, 220, 250) img[rr, cc, :] = [1, 1, 1] rr, cc, val = circle_perimeter_aa(90, 90, 60) img[rr, cc, :] = [1, 1, 1] plt.imshow(img) plt.title('Shapes') plt.xticks([]) plt.yticks([]) plt.show() # + from skimage.draw import random_shapes image0, _ = random_shapes((128, 128), max_shapes=1, shape='rectangle', multichannel=False) image1, _ = random_shapes((128, 128), max_shapes=1, shape='circle') image2, _ = random_shapes((128, 128), max_shapes=10, intensity_range=((100, 255),)) image3, _ = random_shapes((128, 128), min_shapes=5, max_shapes=10, min_size=20, allow_overlap=True) images = [image0, image1, image2, image3] for i in range(4): plt.subplot(2, 2, i+1) if i == 0: plt.imshow(images[i], cmap='gray') else: plt.imshow(images[i]) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:tsfuse] # language: python # name: conda-env-tsfuse-py # --- # # Getting started # + raw_mimetype="text/restructuredtext" active="" # The example below shows the basic usage of TSFuse. # - # ## Data representation # + raw_mimetype="text/restructuredtext" active="" # As input, the system requires a dataset where each instance is a window that consists of time series and a label. # - # ### Time series # + raw_mimetype="text/restructuredtext" active="" # Time series are represented using a dictionary where each entry represents a univariate or multivariate time series. As an example, let's create a dictionary with two univariate time series: # + import pandas as pd from tsfuse.data import Collection X = { 'x1': Collection(pd.DataFrame({ 'id': [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5], 'time': [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], 'data': [1, 2, 3, 1, 2 ,3, 1, 2, 3, 3, 2, 1, 3, 2, 1, 3, 2, 1], })), 'x2': Collection(pd.DataFrame({ 'id': [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5], 'time': [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], 'data': [1, 2, 3, 1, 2 ,3, 1, 2, 3, 1, 2, 3, 1, 2 ,3, 1, 2, 3], })), } # + raw_mimetype="text/restructuredtext" active="" # The two univariate time series are named `x1` and `x2` and each series is represented as a :class:`~tsfuse.data.Collection` object. Each ``Collection`` is initialized with a DataFrame that has three columns: # # - `id` which is the identifier of each instance, i.e., each window, # - `time` which contains the time stamps, # - `data` contains the time series data itself. # # For multivariate time series data, there can be multiple columns similar to the `data` column. For example, the data of a tri-axial accelerometer would have three columns `x`, `y`, `z` instead of `data` as it simultaneously measures the `x`, `y`, `z` acceleration. 
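#
# For instance, a single three-sample window from such a sensor could be built like this (a hypothetical sketch; the column names `x`, `y`, `z` and the values are purely illustrative)::
#
#     Collection(pd.DataFrame({
#         'id':   [0, 0, 0],
#         'time': [0, 1, 2],
#         'x': [0.01, 0.02, 0.03],
#         'y': [9.78, 9.80, 9.79],
#         'z': [0.10, 0.09, 0.11],
#     }))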
# - # ### Labels # + raw_mimetype="text/restructuredtext" active="" # There should be one target value for each window, so we create a Series where the index contains all unique `id` values of the time series data and the data consists of the labels: # - y = pd.Series(index=[0, 1, 2, 3, 4, 5], data=[0, 0, 0, 1, 1, 1]) # ## Feature construction # + raw_mimetype="text/restructuredtext" active="" # To construct features, TSFuse provides a :meth:`~tsfuse.construct` function which takes time series data `X` and target data `y` as input, and returns a DataFrame where each column corresponds to a feature. In addition, this function can return a computation graph which contains all transformation steps required to compute the features for new data: # - from tsfuse import construct features, graph = construct(X, y, transformers='minimal', return_graph=True) # + raw_mimetype="text/restructuredtext" active="" # The DataFrame with the constructed features looks like this: # - features # + raw_mimetype="text/restructuredtext" active="" # And this is the corresponding computation graph: # - graph # + raw_mimetype="text/restructuredtext" active="" # To apply this computation graph, simply call :func:`~tsfuse.computation.Graph.transform` with a time series dictionary ``X`` as argument: # - graph.transform(X) # ## Feature computation # + raw_mimetype="text/restructuredtext" active="" # Instead of automatically generating a computation graph using the :meth:`~tsfuse.construct` function, it is also possible to define a custom computation graph. To build such a graph, create a :class:`~tsfuse.computation.Graph` object as follows: # - from tsfuse.computation import Graph graph = Graph() # + raw_mimetype="text/restructuredtext" active="" # First, the graph needs placeholders for the input data. These can be created by adding :class:`~tsfuse.computation.Input` nodes to the graph: # - from tsfuse.computation import Input input1 = graph.add_node(Input('x1')) input2 = graph.add_node(Input('x2')) # + raw_mimetype="text/restructuredtext" active="" # Then, add transformer nodes to create new data from the input data. Each transformer has one or more parent nodes which are used as input data. For example, we can define the features of cell [4] as follows: # + from tsfuse.transformers import * graph.add_node(Min(Ratio(input1, input2)), i=2) graph.add_node(Max(Ratio(input1, input2)), i=3) graph.add_node(Mean(Ratio(input1, input2)), i=4) graph.add_node(StandardDeviation(Ratio(input1, input2)), i=4) graph.add_node(Variance(Ratio(input1, input2)), i=5) graph.add_node(Skewness(Ratio(input1, input2)), i=6) graph.add_node(Kurtosis(Ratio(input1, input2)), i=7) graph # + raw_mimetype="text/restructuredtext" active="" # This is the same graph as the one we constructed previously, and we can use it in the same way to compute features: # - graph.transform(X) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] toc=true #

    Table of Contents

    # # - import numpy as np import matplotlib.pyplot as plt # %matplotlib inline # ## Create the dataset z = np.linspace(0, 1, 20) z_plot = np.linspace(0, 1, 21) def get_y(x1, x2, x3): x1 = np.atleast_1d(x1); x2 = np.atleast_1d(x2); x3 = np.atleast_1d(x3) tmp = (x1.reshape(x1.shape[0], 1) * np.sin(1 * np.pi * z.reshape(1, 20)) + x2.reshape(x1.shape[0], 1) * np.sin(4 * np.pi * z.reshape(1, 20)) + x3.reshape(x1.shape[0], 1) * np.sin(5 * np.pi * z.reshape(1, 20))) tmp = (tmp.T - np.mean(tmp, axis=1)).T rain = np.maximum(0, np.sum(tmp[:, 10:], axis=1).reshape((-1, 1))) * 10 tmp[:, 10:] = tmp[:, 10:] - (rain / 10.) return np.concatenate([tmp, rain], axis=1) # create sample data n_samples = 10000 x1 = np.random.normal(size=n_samples) x2 = np.random.normal(size=n_samples) x3 = np.random.normal(size=n_samples) y = get_y(x1, x2, x3) y.shape i = np.random.randint(0, n_samples) plt.plot(y[i], z_plot); np.sum(y[i]), y[i, -1] plt.hist(y[:, -1]) # Create test and valid data x = np.stack([x1, x2, x3], axis=1) # Add noise to xs x += (np.random.rand(x.shape[0], x.shape[1]) - 0.2) * 0.6 i_split = int(0.8*n_samples) x_train = x[:i_split] y_train = y[:i_split] x_valid = x[i_split:] y_valid = y[i_split:] # ## Create neural network import keras from keras.models import Sequential from keras.layers import Dense from keras.optimizers import Adam import tensorflow as tf config = tf.ConfigProto() config.gpu_options.allow_growth = True # Allocates as much memory as needed. keras.backend.tensorflow_backend.set_session(tf.Session(config=config)) model = Sequential([ Dense(200, input_shape=(3,), activation='relu'), Dense(200, activation='relu'), Dense(21) ]) model.summary() model.compile(Adam(0.001), 'mse') model.fit(x_train, y_train, batch_size=512, epochs=10, validation_data=(x_valid, y_valid)) y_pred = model.predict(x_valid) i = np.random.randint(0, n_samples*0.2) plt.plot(y_valid[i], z_plot, label='true') plt.plot(y_pred[i], z_plot, label='pred') np.mean(np.abs(np.mean(y_pred[:, :], axis=1))), np.mean(np.abs(np.mean(y_valid[:], axis=1))) plt.hist(y_pred[:, -1], bins=50); # ### Now with constraints import keras.backend as K def transform(x): """x[sample, z-1]""" x_rain = K.reshape(K.maximum(0., x[:, -1]), (-1, 1)) x_first = - K.reshape(K.sum(x[:, :-1], axis=1), (-1, 1)) - x_rain return K.concatenate([x_first, x[:, :-1], x_rain], axis=1) from keras.losses import mean_squared_error def transformed_mse(y_true, y_pred): return mean_squared_error(y_true, transform(y_pred)) model_transform = Sequential([ Dense(200, input_shape=(3,), activation='relu'), Dense(200, activation='relu'), Dense(21-1) ]) model_transform.compile(Adam(0.001), transformed_mse) model_transform.fit(x_train, y_train, batch_size=512, epochs=10, validation_data=(x_valid, y_valid)) # + y_pred_trans = model_transform.predict(x_valid) y_rain = np.maximum(0, y_pred_trans[:, -1]).reshape((-1, 1)) y_first = -y_pred_trans[:, :-1].sum(axis=1).reshape((-1, 1)) - y_rain y_pred_trans = np.concatenate([y_first, y_pred_trans[:, :-1], y_rain], axis=1) # - i = np.random.randint(0, 200) i = 1948 plt.plot(y_valid[i], z_plot, label='true') plt.plot(y_pred_trans[i], z_plot, label='pred') plt.legend(); np.sum(y_pred_trans[i]), np.sum(y_valid[i]) max_error = np.argmax(np.mean(np.abs((y_valid - y_pred_trans)), axis=1)) max_error y_pred_trans.shape np.mean(np.abs(np.mean(y_pred_trans, axis=1))), np.mean(np.abs(np.mean(y_valid, axis=1))) plt.hist(y_pred_trans[:, -1], bins=50); plt.hist(y_valid[:, -1], bins=50); # --- # jupyter: # jupytext: # text_representation: # extension: 
.py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Chapter 5 Simplifications from sympy import * x, y, z = symbols('x, y, z') init_printing(use_unicode=True) # ## 5.1 単純化 # `sympy`のどんな数式も`simplify()`で簡単な形にできる!: simplify(sin(x)**2 + cos(x)**2) simplify((x**3 + x**2 - x - 1) / (x**2 + 2*x + 1)) simplify(gamma(x) / gamma(x-2)) #ガンマ関数(特殊関数) # #### 注意点:その1 simplify(x**2 + 2*x + 1) # ---> **因数分解できない!!!** 因数分解は`factor()`関数を使う: factor(x**2 + 2*x + 1) # #### 注意点:その2 # `simplify()`は遅い! # #### 解決策 # # - `simplify()`は「ある程度」簡単な形にまでしか変形できないので、確実に式を簡単にしたいなら、その用途に応じた適切な関数を使うべき! # # - インタラクティブシェルで`simplify`の挙動を見てから**個別の関数**(以下) を使って簡単にしよう. # ## 5.2 多項式 / 有理式 # ### 5.2.1 `expand`関数 # 多項式を展開し、必要ならば項をキャンセルする. expand((x + 1)**2) expand((x + 2)*(x - 3)) # 「式を展開する」ことで「式が簡単になる」ことがある。 expand((x + 1)*(x - 2) - (x - 1)*x) #式がキャンセルし合う # ### 5.2.2 `factor`関数 # 数式を可能な限り因数分解する factor(x**3 - x**2 + x - 1) factor(x**2*z + 4*x*y*z + 4*y**2*z) factor_list(x**2*z + 4*x*y*z + 4*y**2*z) #(変数or定数, べき) # #### 三角関数程度の式なら、関数`factor`, `expand`で対応可能 expand((cos(x) + sin(x))**2) factor(cos(x)**2 + 2*cos(x)*sin(x) + sin(x)**2) # ### 5.2.3 `collect`関数 # 特定の変数でまとめたり、特定次の係数を取り出す. expr = x*y + x -3 + 2*x**2 - z*x**2 + x**3 expr collected_expr = collect(expr, x) #xでまとめる. collected_expr # さらに以下のようにcoeffメソッドで特定次を取り出せる. collected_expr.coeff(x, 2) #xの2次だけ取り出す. # ### 5.2.4 `cancel`関数 # 有理式を簡単にする cancel((x**2 + 2*x + 1) / (x**2 + x)) expr = 1/x + (2*x/2 - 2) /(x - 4) expr cancel(expr) #分母を通分する factor(expr) #factorも同じような操作をする. expr = (x*y**2 - 2*x*y*z + x*z**2 + y**2 - 2*y*z + z**2) / (x**2 - 1) expr cancel(expr) factor(expr) #factorも同じような変形をする. # **コメント** # # 式を単にキャンセルさせてシンプルにさせたいときは、`factor()`より`cancel()`のほうが効率的 # ### 5.2.5 `apart`関数 # 有理式(分数)を部分分数分解する x = symbols('x') expr = (4*x**3 + 21*x**2 + 10*x + 12) / (x**4 + 5*x**3 + 5*x**2 + 4*x) expr apart(expr) # ## 5.3 三角関数 # **コメント**: 逆三角関数は頭に"a"を付ける: acos, asin, atan, etc... acos(x) cos(acos(x)) asin(1) # ### 5.3.1 `trigsimp`関数 # 三角関数の表式を、公式を用いて可能な限りシンプルな形にする. trigsimp(sin(x)**2 + cos(x)**2) trigsimp(sin(x)**4 - 2*cos(x)**2*sin(x)**2 + cos(x)**4) trigsimp(sin(x)*tan(x)/sec(x)) trigsimp(cosh(x)**2-sinh(x)**2) # ### 5.3.2 `expand_trig`関数 # 三角関数の式を展開する。 `trigsimp`と`expand_trig`は完全に逆の操作をする expand_trig(sin(x + y)) expand_trig(tan(2*x)) # ## 5.4 べき乗 x, y = symbols('x y', positive=True) #変数が正であると仮定 a, b = symbols('a, b', real = True) #変数が実数であると仮定 z, t, c = symbols('z t c') # **コメント**: `sqrt(x)`と`x**Rational(1,2)`, `x**0.5`, `x**(1/2)`は同じ sqrt(x) x**Rational(1,2) x**(0.5) x**(1/2) # ### 5.4.1 `powsimp` 関数 # 冪が変数(`Sympy`シンボル)のときに限り、シンプルな形にする powsimp(x**a*x**b) #これ以上簡単にできない. powsimp(x**a*y**a) # 変数の仮定にかかわらず実行させたいとき: powsimp(t**c*z**c) # を powsimp(t**c*z**c, force=True) # とする. `t` もしくは `z` が負になっても強制的にこの変形は行われる. (z*t)**2 #冪が整数、有理数, 2のとき. sqrt(x*y) #同じ # **注意** このような式に対しては`powsimp`は使えない: powsimp(z**2*t**2) #指数が整数 sqrt(x*y) # --->冪が変数のときに`powsimp`で簡単にできる. # ### 5.4.2 `expand_power_expr`関数, `expand_power_base`関数 # べき乗を展開する. `powsimp`関数と逆の操作 expand_power_exp(x**(a + b)) expand_power_base((x*y)**a) # **注意** これも`powsimp()`と同様で、変形できないときは元の式を返す: expand_power_base((z*t)**c) # `t*z`が正という条件を`symbols`でつけていれば展開できるが、 # 今回のようにそうと限らないときは展開してくれない. 強制的に行うには expand_power_base((z*t)**c, force=True) # とする. 
また冪が数のときは x**2*x**3 expand_power_exp(x**5) # のように変形できない。 # ### 5.4.3 `powdenest`関数 # べき乗のべき乗を展開 (x**a)**b #カッコを外して展開 powdenest((x**a)**b) powdenest((z**a)**b) powdenest((z**a)**b, force=True) # ## 5.5 指数関数、対数関数 ln(x) #ln(x)とlog(x)は同じ. log(x) x, y = symbols('x y', positive=True) n = symbols('n', real=True) # ### 5.5.1 `expand_log`関数 # 対数関数を展開する expand_log(log(x*y)) expand_log(log(x/y)) expand_log(log(x**2)) expand_log(log(x**n)) expand_log(log(z*t)) # **注意** これまでと同様にして、正でない変数は展開できないので、そのときは`Force=True`オプションを付ける。 expand_log(log(z**2)) expand_log(log(z**2), force=True) # ### 5.5.2 `logcombine`関数 # 対数関数をシンプルにする. logcombine(log(x) + log(y)) #対数関数を簡単にする logcombine(n*log(x)) logcombine(n*log(z)) logcombine(n*log(z), force=True) # ## 5.6 特殊関数 x, y, z = symbols('x y z') k, m, n = symbols('k m n') # ### 5.6.1 階乗 factorial(n) factorial(10) # ### 5.6.2 組み合わせ (Combination) binomial(n, k) #nCk combsimp(factorial(n) / factorial(n - 3)) #シンプルにする combsimp(binomial(n + 1, k + 1) / binomial(n, k)) # ### 5.6.3 ガンマ関数 gamma(z) combsimp(gamma(x)*gamma(1 - x)) #ガンマ関数にも使える # ### 5.6.4 一般化された超幾何関数 hyper([1, 2], [3], z) # ### 5.6.5 関数を別の関数で書き換える tan(x).rewrite(sin) #tanをsinで書き換える factorial(x).rewrite(gamma) #階乗をガンマ関数で書き換える # ### 5.6.6 特殊関数をいくつかの恒等式で書き換える expand_func(gamma(x + 3)) # 次は[Chapter6 Calculus](https://hiroyuki827.github.io/SymPy_tutorial/Chapter6_Calculus.html)へ! # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.8.12 ('aiffel_3.8') # language: python # name: python3 # --- # # 7. Object Detection # # **Object detection 문제와 이를 해결하기 위한 다양한 detection 모델들을 알아본다.** # ## 7-1. 들어가며 # ```bash # $ mkdir -p ~/aiffel/object_detection/images # ``` # ## 7-2. 용어 정리 # ```bash # $ ln -s ~/data/person.jpg ~/aiffel/object_detection/images/ # ``` # + from PIL import Image, ImageDraw import os img_path=os.getenv('HOME')+'/aiffel/object_detection/images/person.jpg' img = Image.open(img_path) # [[YOUR CODE]] draw = ImageDraw.Draw(img) draw.rectangle((130, 30, 670, 450), outline=(0,255,0), width=2) img # - # ```python # # 정답 코드 # # from PIL import Image, ImageDraw # import os # # img_path=os.getenv('HOME')+'/aiffel/object_detection/images/person.jpg' # img = Image.open(img_path) # # draw = ImageDraw.Draw(img) # draw.rectangle((130, 30, 670, 450), outline=(0,255,0), width=2) # # img # ``` # ## 7-3. Localization # + import tensorflow as tf from tensorflow import keras output_num = 1+4+3 # object_prob 1, bbox coord 4, class_prob 3 input_tensor = keras.layers.Input(shape=(224, 224, 3), name='image') base_model = keras.applications.resnet.ResNet50( input_tensor=input_tensor, include_top=False, weights='imagenet', pooling=None, ) x = base_model.output # [[YOUR CODE]] preds = keras.layers.Conv2D(output_num, 1, 1)(x) localize_model=keras.Model(inputs=base_model.input, outputs=preds) localize_model.summary() # - # ```python # # 정답 코드 # # import tensorflow as tf # from tensorflow import keras # # output_num = 1+4+3 # object_prob 1, bbox coord 4, class_prob 3 # # input_tensor = keras.layers.Input(shape=(224, 224, 3), name='image') # base_model = keras.applications.resnet.ResNet50( # input_tensor=input_tensor, # include_top=False, # weights='imagenet', # pooling=None, # ) # x = base_model.output # preds = keras.layers.Conv2D(output_num, 1, 1)(x) # localize_model=keras.Model(inputs=base_model.input, outputs=preds) # # localize_model.summary() # ``` # ## 7-4. Detection (1) 슬라이딩 윈도우, 컨볼루션 # ## 7-5. 
Detection (2) 앵커 박스, NMS # ## 7-6. Detection Architecture # ## 7-7. Anchor # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda root] # language: python # name: conda-root-py # --- # !python --version import json from pprint import pprint with open('data/goWOgenes.cyjs') as f: tree = json.load(f) # + tree.keys() pprint(tree['elements']['nodes'][1]) nodes = tree['elements']['nodes'] # + bc = map(lambda x : x['data']['BetweennessCentrality'], nodes) cc = map(lambda x : x['data']['ClosenessCentrality'], nodes) # print(min(bc)) print(max(bc)) # print(min(cc)) print(max(cc)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="lIYdn1woOS1n" outputId="cece5524-0d94-4cc4-b260-23e2f0ecc744" # + id="Y2upWg_Qvax1" import functools import datasets import torchtext import torch import torch.nn as nn import torch.optim as optim # + colab={"base_uri": "https://localhost:8080/"} id="ZIVeVqVUvdcK" outputId="db0dbf36-4a75-4d30-bcef-8cb52b7a5b30" train_data, test_data = datasets.load_dataset('imdb', split=['train', 'test']) # + colab={"base_uri": "https://localhost:8080/"} id="2f6USIQOvkV_" outputId="3444d1ad-e2eb-4766-92d8-ea4d85f7011f" train_data, test_data # + id="3Av1UUtZxoSL" tokenizer = torchtext.data.utils.get_tokenizer('basic_english') # + id="ju4G0eKEx2RO" def tokenize_data(example, tokenizer): tokens = {'tokens': tokenizer(example['text'])} return tokens # + colab={"base_uri": "https://localhost:8080/"} id="T6RecxI4xs7s" outputId="0c97c5cb-7bca-4664-98fa-5b665ab510c9" train_data = train_data.map(tokenize_data, fn_kwargs={'tokenizer': tokenizer}) test_data = test_data.map(tokenize_data, fn_kwargs={'tokenizer': tokenizer}) # + colab={"base_uri": "https://localhost:8080/"} id="-Foj4qesxqiz" outputId="09fc27eb-0f50-45df-aacb-debf53948cdc" train_data, test_data # + id="OfzYwhN2wFcA" train_valid_data = train_data.train_test_split(test_size=0.25) train_data = train_valid_data['train'] valid_data = train_valid_data['test'] # + colab={"base_uri": "https://localhost:8080/"} id="ovGkrOJkwZsC" outputId="b5ca62c6-96e5-4eca-8c0f-558fd087dee1" len(train_data), len(valid_data), len(test_data) # + id="9VMeQG_FxUVO" min_freq = 3 special_tokens = ['', ''] vocab = torchtext.vocab.build_vocab_from_iterator(train_data['tokens'], min_freq=min_freq, specials=special_tokens) # + colab={"base_uri": "https://localhost:8080/"} id="rbroBAClxXGB" outputId="91c5da92-7f97-4ad8-a946-6de1899e64a2" len(vocab) # + colab={"base_uri": "https://localhost:8080/"} id="3bKHqCxPyQSb" outputId="35b3c437-f0f8-43fb-8f10-b968c597d5b4" vocab.get_itos()[:10] # + colab={"base_uri": "https://localhost:8080/"} id="uStvd2szyUGR" outputId="d4c9d2a4-86a9-413f-9200-5f1a27ece925" unk_index = vocab[''] unk_index # + colab={"base_uri": "https://localhost:8080/"} id="gd5R8NCJyws4" outputId="46666a6d-56ff-42e3-ebe2-6f87930270c7" pad_index = vocab[''] pad_index # + id="_syj_YR8yp7B" vocab.set_default_index(unk_index) # + id="ENlE1eAM0lHe" def numericalize_data(example, vocab): ids = {'ids': [vocab[token] for token in example['tokens']]} return ids # + colab={"base_uri": "https://localhost:8080/", "height": 164, "referenced_widgets": ["ab96a5a035b140aa9958ada5eeaa8edc", "e393253753984998b533c18502dd4477", 
"9c6b3d3c5c5a4c0b940c0aec3d833f58", "a24b5ac5e18b48d48467217c8ea67463", "b699f48ad80442d38481d4f960cf23a3", "707426cd59454f53b955a2ed9e46b90e", "", "38e9f642f0714b95ba51da7cafbd1ff2", "852f269e78204293afba0e452e43eba2", "c87dea49f30b471d9128b2b2082adf89", "", "6c44fd275fa34a2ab02b16519e48aca7", "", "398dcca3803c46beb2854bd52d66e322", "", "0cf052dd17f54df19b0e5de1891ec1a4", "", "", "", "d52ef97893144dffa64d1da2b8ea52e6", "d0832872ea074689b8c7e3b8b647e68d", "", "eeac1f30dad64508ba7f1a9473f390bd", "958e2685a018455b9ea02803ca96b464"]} id="ux_YLzDA069-" outputId="11d67399-ace5-49ed-8327-0f85de3386a6" train_data = train_data.map(numericalize_data, fn_kwargs={'vocab': vocab}) valid_data = valid_data.map(numericalize_data, fn_kwargs={'vocab': vocab}) test_data = test_data.map(numericalize_data, fn_kwargs={'vocab': vocab}) # + id="GAXGqlXT1FD9" train_data.set_format(type='torch', columns=['ids', 'label']) valid_data.set_format(type='torch', columns=['ids', 'label']) test_data.set_format(type='torch', columns=['ids', 'label']) # + id="qCmyoFKAvmnj" class NBoW(nn.Module): def __init__(self, vocab_size, embedding_dim, output_dim, pad_index): super().__init__() self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_index) self.fc = nn.Linear(embedding_dim, output_dim) def forward(self, text): # text = [batch size, seq len] embedded = self.embedding(text) # embedded = [batch size, seq len, embedding dim] pooled = embedded.mean(dim=1) # pooled = [batch size, embedding dim] prediction = self.fc(pooled) # prediction = [batch size, output dim] return prediction # + id="lPym0qxrwC7e" vocab_size = len(vocab) embedding_dim = 256 output_dim = 2 model = NBoW(vocab_size, embedding_dim, output_dim, pad_index) # + colab={"base_uri": "https://localhost:8080/"} id="Prvx6C3TFyI4" outputId="c9069fbe-ec6e-423d-ff59-820f934c6432" vectors = torchtext.vocab.FastText() # + colab={"base_uri": "https://localhost:8080/"} id="e3gI343FIETN" outputId="44ed85d4-3e22-4bf0-e86e-0d7c01e33c35" vectors = torchtext.vocab.GloVe() # + id="xLg8TKFCzAfL" optimizer = optim.Adam(model.parameters()) # + id="XJrIlwlfzQqY" criterion = nn.CrossEntropyLoss() # + colab={"base_uri": "https://localhost:8080/"} id="KbEREGWmzR7J" outputId="44ec5013-ab39-4f35-b425-250f7e4093ef" device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') device # + id="IBtd-0I3zVTo" model = model.to(device) criterion = criterion.to(device) # + id="TNaDUz3M2QDM" def collate(batch, pad_index): batch_ids = [i['ids'] for i in batch] batch_ids = nn.utils.rnn.pad_sequence(batch_ids, padding_value=pad_index, batch_first=True) batch_labels = [i['label'] for i in batch] batch_labels = torch.stack(batch_labels) batch = {'ids': batch_ids, 'labels': batch_labels} return batch # + id="LYsAzjrV0AnH" batch_size = 512 collate = functools.partial(collate, pad_index=pad_index) train_dataloader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, collate_fn=collate) valid_dataloader = torch.utils.data.DataLoader(valid_data, batch_size=batch_size, collate_fn=collate) test_dataloader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, collate_fn=collate) # + id="mKeLtjK5zX41" def train(dataloader, model, criterion, optimizer, device): model.train() epoch_loss = 0 epoch_accuracy = 0 for batch in dataloader: tokens = batch['ids'].to(device) labels = batch['labels'].to(device) predictions = model(tokens) loss = criterion(predictions, labels) accuracy = get_accuracy(predictions, labels) optimizer.zero_grad() loss.backward() optimizer.step() 
epoch_loss += loss.item() epoch_accuracy += accuracy.item() return epoch_loss / len(dataloader), epoch_accuracy / len(dataloader) # + id="3gJHwUZZ0NC6" def evaluate(dataloader, model, criterion, device): model.eval() epoch_loss = 0 epoch_accuracy = 0 with torch.no_grad(): for batch in dataloader: tokens = batch['ids'].to(device) labels = batch['labels'].to(device) predictions = model(tokens) loss = criterion(predictions, labels) accuracy = get_accuracy(predictions, labels) epoch_loss += loss.item() epoch_accuracy += accuracy.item() return epoch_loss / len(dataloader), epoch_accuracy / len(dataloader) # + id="DOPRg4Gg5L44" def get_accuracy(predictions, labels): batch_size = predictions.shape[0] predicted_classes = predictions.argmax(1, keepdim=True) correct_predictions = predicted_classes.eq(labels.view_as(predicted_classes)).sum() accuracy = correct_predictions.float() / batch_size return accuracy # + colab={"base_uri": "https://localhost:8080/"} id="y0T-FtNN0PO3" outputId="4781c655-bd66-4326-df6d-0d9dbedbaac5" n_epochs = 10 for epoch in range(n_epochs): train_loss, train_acc = train(train_dataloader, model, criterion, optimizer, device) valid_loss, valid_acc = evaluate(valid_dataloader, model, criterion, device) print(f'epoch: {epoch+1}') print(f'train_loss: {train_loss:.3f}, train_acc: {train_acc:.3f}') print(f'valid_loss: {valid_loss:.3f}, valid_acc: {valid_acc:.3f}') # + id="OD_BKHY_42XL" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Global TF Kernel (Python 3) # language: python # name: global-tf-python-3 # --- # + import keras import tensorflow as tf from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras.optimizers import Adadelta from sklearn.preprocessing import StandardScaler, MinMaxScaler, Normalizer # - batch_size = 128 num_classes = 10 epochs = 12 # + (x_train, y_train), (x_test, y_test) = mnist.load_data() img_rows, img_cols = 28, 28 x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1) x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1) input_shape = (img_rows, img_cols, 1) x_train = x_train.astype('float32') x_test = x_test.astype('float32') # - # same thing as applying minmaxscaler in the (0,1) range x_train = x_train / 255 x_test = x_test / 255 # one hot encoding y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) with tf.device('/cpu:0'): model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss="categorical_crossentropy", optimizer=Adadelta(), metrics=['accuracy']) model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # COVID19 Global Forecasting (Week 5) # ## Problem Statement # # In this challenge, you will be predicting the daily number of confirmed 
COVID19 cases in various locations across the world, as well as the number of resulting fatalities, for future dates. This latest challenge includes US state county data. # ## Data # # * This evaluation data for this competition comes from John Hopkins CSSE, which is uninvolved in the competition. # * See their README for a description of how the data was collected. # * They are currently updating the data daily. # ## Solution # ### Libraries # + # Standard libraries import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import time # Libraries for EDA and preprocessing import datetime as dt from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import MinMaxScaler # libraies for Model from xgboost import XGBRegressor from sklearn.model_selection import GridSearchCV # libraries for Evaluations from sklearn.model_selection import cross_val_score from sklearn.model_selection import StratifiedKFold # + ### Load data to variables train_data = pd.read_csv('Datasets/covid19-global-forecasting-week-5/train.csv') test_data = pd.read_csv('Datasets/covid19-global-forecasting-week-5/test.csv') # + # print the first 2 rows of train_data train_data.head(2) # + # print the first two rows of test_data test_data.head(2) # - # ### EDA and preprocessing # + # Information about train_data train_data.info() # - # **Our train_data contains 9 features which contains 1 int value, 5 object values, and 2 float values** # + # size of our data set train_data.shape # Contains 754934 data # + # check for any null values in our train data set train_data.isna().sum() # - # **We can see that our train_data has null values in following columns with respective numbers:** # # * County with 69760 # * Province_State with 40766 # + # lets check the information about test_data test_data.info() # - # **Test data contains all the columns in train_data except for Target_values, It contains float64(1), int64(2), object(5)** # + # find if there is any null value in the test data set test_data.isna().sum() # - # **Test data also contains 28800 null values of county and 16830 values of Provice_state** # + # Remove the null value data from our train_data set train_data.drop(['Id', 'County', 'Province_State'], axis = 1, inplace = True) train_data.head() # - # Remove the null value data from our test_data set test_data.drop(['ForecastId','County', 'Province_State'], axis =1, inplace = True ) test_data.head() # + # Check if there any null values present in train_data train_data.isna().sum() # - train_data.describe() # + # lets count the target values train_data['Target'].value_counts() # + fig, ax= plt.subplots(figsize = (20,10)) ax.pie(x=train_data.groupby(by=["Country_Region"])["TargetValue"].sum(), startangle=90) label =train_data['Country_Region'].unique() plt.title("Countries with COVID-19 Cases", fontsize = 24) fig.legend(label) plt.tight_layout() plt.show() # - # visualization of comfirmed and fatal cases fig, ax= plt.subplots(figsize = (20,10)) ax.pie(x=train_data.groupby(by=["Target"])["TargetValue"].sum(), autopct='%1.1f%%', startangle=90) label =train_data['Target'].unique() plt.title("Comfirmed and Fatal COVID-19 Cases", fontsize = 24) fig.legend(label) plt.tight_layout() plt.show() sns.countplot(x ='Target', data = train_data); # Finding the latest case in the countries last_date = train_data.Date.max() df=train_data[train_data["Date"]==last_date] df # calculating the sum of targetvalue of country region 
df=df.groupby(by=["Country_Region"],as_index=False)["TargetValue"].sum() df # 5 countries with large target value. countries = df.nlargest(5,"TargetValue") countries cases=train_data.groupby(by=["Date","Country_Region"],as_index=False)["TargetValue"].sum() cases cases=cases.merge(countries,on="Country_Region") cases # visualization of the target value with the country plt.figure(figsize=(15,10)) sns.set(style="darkgrid") sns.lineplot(x="Date",y="TargetValue_x",hue="Country_Region",data=cases); # finding the correlation train_data.corr() train_data.drop(["Target"],inplace=True,axis=1) test_data.drop(["Target"],inplace=True,axis=1) train_data # + # transform the country_region variable lab = LabelEncoder() train_data['Country_Region'] = lab.fit_transform(train_data['Country_Region']) train_data.head() # - test_data['Country_Region'] = lab.fit_transform(test_data['Country_Region']) test_data.head() train_data.Date=train_data.Date.apply(lambda x:x.split("-")) test_data.Date=test_data.Date.apply(lambda x:x.split("-")) # + # split Date into month and day def monthday(data): month = [] day= [] for date in data.Date: month.append(int(date[1])) day.append(int(date[2])) data['month'] = month data['day'] = day data = data.drop('Date', axis =1 ) return data # - train_data = monthday(train_data) test_data = monthday(test_data) train_data.head() # ### Modelling # + # Split data into y scaler = MinMaxScaler() y = train_data['TargetValue'].values train_data.drop(['TargetValue'], axis = 1, inplace = True) # - train_data.head() # Transforming features by scaling each feature to a given range. X = scaler.fit_transform(train_data) X # initialize the xgb xgb = XGBRegressor() # + # perform the measure performance = cross_val_score(xgb, X, y, cv=10, scoring ='neg_mean_absolute_error', n_jobs = -1) mae = -performance # - mae mae.mean() # + # transform the test data test_data= scaler.fit_transform(test_data) test_data # - xgb.fit(X,y) prediction = xgb.predict(test_data) prediction = np.around(prediction) prediction xgb_1500=XGBRegressor(n_estimators=1500,max_depth=15) xgb_1500.fit(X,y) prediction=xgb_1500.predict(test_data) prediction=np.around(prediction) prediction submission=pd.read_csv("Datasets/covid19-global-forecasting-week-5/submission.csv") submission.head() test_copy=pd.read_csv('Datasets/covid19-global-forecasting-week-5/test.csv') output = pd.DataFrame({'Id': test_copy.ForecastId , 'TargetValue': prediction}) output.head() a=output.groupby(['Id'])['TargetValue'].quantile(q=0.05).reset_index() b=output.groupby(['Id'])['TargetValue'].quantile(q=0.5).reset_index() c=output.groupby(['Id'])['TargetValue'].quantile(q=0.95).reset_index() a.columns=['Id','q0.05'] b.columns=['Id','q0.5'] c.columns=['Id','q0.95'] a=pd.concat([a,b['q0.5'],c['q0.95']],1) a['q0.05']=a['q0.05'] a['q0.5']=a['q0.5'] a['q0.95']=a['q0.95'] sub=pd.melt(a, id_vars=['Id'], value_vars=['q0.05','q0.5','q0.95']) sub['variable']=sub['variable'].str.replace("q","", regex=False) sub['ForecastId_Quantile']=sub['Id'].astype(str)+'_'+sub['variable'] sub['TargetValue']=sub['value'] sub=sub[['ForecastId_Quantile','TargetValue']] sub.reset_index(drop=True,inplace=True) sub.to_csv("submission.csv",index=False) sub.head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # %load_ext autoreload # %autoreload 2 import sys, os sys.path.insert(0, '..') from lib import models, graph, 
coarsening, utils import tensorflow as tf import numpy as np import time # %matplotlib inline # + flags = tf.app.flags FLAGS = flags.FLAGS # Graphs. flags.DEFINE_integer('number_edges', 8, 'Graph: minimum number of edges per vertex.') flags.DEFINE_string('metric', 'euclidean', 'Graph: similarity measure (between features).') # TODO: change cgcnn for combinatorial Laplacians. flags.DEFINE_bool('normalized_laplacian', True, 'Graph Laplacian: normalized.') flags.DEFINE_integer('coarsening_levels', 4, 'Number of coarsened graphs.') # Directories. flags.DEFINE_string('dir_data', os.path.join('..', 'data', 'mnist'), 'Directory to store data.') # - # # Feature graph # + def grid_graph(m, corners=False): z = graph.grid(m) dist, idx = graph.distance_sklearn_metrics(z, k=FLAGS.number_edges, metric=FLAGS.metric) A = graph.adjacency(dist, idx) # Connections are only vertical or horizontal on the grid. # Corner vertices are connected to 2 neightbors only. if corners: import scipy.sparse A = A.toarray() A[A < A.max()/1.5] = 0 A = scipy.sparse.csr_matrix(A) print('{} edges'.format(A.nnz)) print("{} > {} edges".format(A.nnz//2, FLAGS.number_edges*m**2//2)) return A t_start = time.process_time() A = grid_graph(28, corners=False) A = graph.replace_random_edges(A, 0) graphs, perm = coarsening.coarsen(A, levels=FLAGS.coarsening_levels, self_connections=False) L = [graph.laplacian(A, normalized=True) for A in graphs] print('Execution time: {:.2f}s'.format(time.process_time() - t_start)) graph.plot_spectrum(L) del A # - # # Data # + from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets(FLAGS.dir_data, one_hot=False) train_data = mnist.train.images.astype(np.float32) val_data = mnist.validation.images.astype(np.float32) test_data = mnist.test.images.astype(np.float32) train_labels = mnist.train.labels val_labels = mnist.validation.labels test_labels = mnist.test.labels t_start = time.process_time() train_data = coarsening.perm_data(train_data, perm) val_data = coarsening.perm_data(val_data, perm) test_data = coarsening.perm_data(test_data, perm) print('Execution time: {:.2f}s'.format(time.process_time() - t_start)) del perm # - # # Neural networks # + #model = fc1() #model = fc2(nhiddens=100) #model = cnn2(K=5, F=10) # K=28 is equivalent to filtering with fgcnn. 
#model = fcnn2(F=10) #model = fgcnn2(L[0], F=10) #model = lgcnn2_2(L[0], F=10, K=10) #model = cgcnn2_3(L[0], F=10, K=5) #model = cgcnn2_4(L[0], F=10, K=5) #model = cgcnn2_5(L[0], F=10, K=5) if False: K = 5 # 5 or 5^2 t_start = time.process_time() mnist.test._images = graph.lanczos(L, mnist.test._images.T, K).T mnist.train._images = graph.lanczos(L, mnist.train._images.T, K).T model = lgcnn2_1(L, F=10, K=K) print('Execution time: {:.2f}s'.format(time.process_time() - t_start)) ph_data = tf.placeholder(tf.float32, (FLAGS.batch_size, mnist.train.images.shape[1], K), 'data') # + common = {} common['dir_name'] = 'mnist/' common['num_epochs'] = 20 common['batch_size'] = 100 common['decay_steps'] = mnist.train.num_examples / common['batch_size'] common['eval_frequency'] = 30 * common['num_epochs'] common['brelu'] = 'b1relu' common['pool'] = 'mpool1' C = max(mnist.train.labels) + 1 # number of classes model_perf = utils.model_perf() # - if True: name = 'softmax' params = common.copy() params['dir_name'] += name params['regularization'] = 5e-4 params['dropout'] = 1 params['learning_rate'] = 0.02 params['decay_rate'] = 0.95 params['momentum'] = 0.9 params['F'] = [] params['K'] = [] params['p'] = [] params['M'] = [C] model_perf.test(models.cgcnn(L, **params), name, params, train_data, train_labels, val_data, val_labels, test_data, test_labels) # Common hyper-parameters for networks with one convolutional layer. common['regularization'] = 0 common['dropout'] = 1 common['learning_rate'] = 0.02 common['decay_rate'] = 0.95 common['momentum'] = 0.9 common['F'] = [10] common['K'] = [20] common['p'] = [1] common['M'] = [C] if True: name = 'fgconv_softmax' params = common.copy() params['dir_name'] += name params['filter'] = 'fourier' params['K'] = [L[0].shape[0]] model_perf.test(models.cgcnn(L, **params), name, params, train_data, train_labels, val_data, val_labels, test_data, test_labels) if True: name = 'sgconv_softmax' params = common.copy() params['dir_name'] += name params['filter'] = 'spline' model_perf.test(models.cgcnn(L, **params), name, params, train_data, train_labels, val_data, val_labels, test_data, test_labels) # With 'chebyshev2' and 'b2relu', it corresponds to cgcnn2_2(L[0], F=10, K=20). if True: name = 'cgconv_softmax' params = common.copy() params['dir_name'] += name params['filter'] = 'chebyshev5' # params['filter'] = 'chebyshev2' # params['brelu'] = 'b2relu' model_perf.test(models.cgcnn(L, **params), name, params, train_data, train_labels, val_data, val_labels, test_data, test_labels) # Common hyper-parameters for LeNet5-like networks. common['regularization'] = 5e-4 common['dropout'] = 0.5 common['learning_rate'] = 0.02 # 0.03 in the paper but sgconv_sgconv_fc_softmax has difficulty to converge common['decay_rate'] = 0.95 common['momentum'] = 0.9 common['F'] = [32, 64] common['K'] = [25, 25] common['p'] = [4, 4] common['M'] = [512, C] # Architecture of TF MNIST conv model (LeNet-5-like). # Changes: regularization, dropout, decaying learning rate, momentum optimizer, stopping condition, size of biases. # Differences: training data randomization, init conv1 biases at 0. 
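# For readability, our reading of the hyper-parameter lists passed to models.cgcnn below (an interpretation of this library's interface, not stated explicitly in this notebook):
#   F - number of graph-convolutional filters per layer, e.g. [32, 64] builds two conv layers;
#   K - Chebyshev polynomial order (filter support size) used at each conv layer;
#   p - graph pooling factor applied after each conv layer (4 roughly corresponds to two levels of coarsening);
#   M - sizes of the fully-connected layers, the last entry being the C output classes.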
if True: name = 'fgconv_fgconv_fc_softmax' # 'Non-Param' params = common.copy() params['dir_name'] += name params['filter'] = 'fourier' params['K'] = [L[0].shape[0], L[2].shape[0]] model_perf.test(models.cgcnn(L, **params), name, params, train_data, train_labels, val_data, val_labels, test_data, test_labels) if True: name = 'sgconv_sgconv_fc_softmax' # 'Spline' params = common.copy() params['dir_name'] += name params['filter'] = 'spline' model_perf.test(models.cgcnn(L, **params), name, params, train_data, train_labels, val_data, val_labels, test_data, test_labels) if True: name = 'cgconv_cgconv_fc_softmax' # 'Chebyshev' params = common.copy() params['dir_name'] += name params['filter'] = 'chebyshev5' model_perf.test(models.cgcnn(L, **params), name, params, train_data, train_labels, val_data, val_labels, test_data, test_labels) model_perf.show() if False: grid_params = {} data = (train_data, train_labels, val_data, val_labels, test_data, test_labels) utils.grid_search(params, grid_params, *data, model=lambda x: models.cgcnn(L,**x)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + colab={"base_uri": "https://localhost:8080/"} id="i4WE1j8_I-_A" executionInfo={"status": "ok", "timestamp": 1635264764911, "user_tz": -420, "elapsed": 722, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="5c31b0d3-44d2-4f97-f9e1-20bda0030b2b" # Mount to Google Drive either # to read or write to Google Drive from google.colab import drive drive.mount('/content/drive') # + [markdown] id="hoYnlSnjcn9H" # # I. 
Read data and EDA # + id="vAExi4hjJut0" executionInfo={"status": "ok", "timestamp": 1635264765626, "user_tz": -420, "elapsed": 7, "user": {"displayName": " \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} # Import the necessary packages import matplotlib.pyplot as plt import numpy as np from sklearn.datasets import fetch_20newsgroups from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer from sklearn.pipeline import Pipeline from sklearn.linear_model import SGDClassifier from sklearn.model_selection import GridSearchCV from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import classification_report # + colab={"base_uri": "https://localhost:8080/"} id="r3TY-gu7K45f" executionInfo={"status": "ok", "timestamp": 1635264766221, "user_tz": -420, "elapsed": 600, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="dda5d94d-ffc0-4306-df44-50a8bab88351" # Load training documents print("[INFO] Loading training data ...") docs_train = fetch_20newsgroups(subset='train', shuffle=True, random_state=42) # + id="1tEkjgOJRlDd" executionInfo={"status": "ok", "timestamp": 1635264766223, "user_tz": -420, "elapsed": 29, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} class_names = docs_train.target_names # + colab={"base_uri": "https://localhost:8080/"} id="bT0mMEIScTEC" executionInfo={"status": "ok", "timestamp": 1635264766223, "user_tz": -420, "elapsed": 26, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="4c12eb70-9619-4508-dabb-6641205204a4" # Load tesing documents print("[INFO] Loading testing data ...") docs_test = fetch_20newsgroups(subset='test', shuffle=True) # + colab={"base_uri": "https://localhost:8080/"} id="vfTD0EZRM8KL" executionInfo={"status": "ok", "timestamp": 1635264766224, "user_tz": -420, "elapsed": 17, "user": {"displayName": "l\u00e2\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="f183a1d9-cea5-4a54-b577-94d1a3da9c0d" print("Number of category: ", len(class_names)) print("Number of train documents: ", len(docs_train.target)) print("Number of test documents: ", len(docs_test.target)) print("Categories: \n", '\n'.join(class_names)) # + id="LOdysacrsPc0" executionInfo={"status": "ok", "timestamp": 1635264766225, "user_tz": -420, "elapsed": 10, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} # Distribution of class _, train_class_dist = np.unique(docs_train.target, return_counts=True) _, test_class_dist = np.unique(docs_test.target, return_counts=True) # + colab={"base_uri": "https://localhost:8080/", "height": 618} id="hIJsHQaHxK5o" executionInfo={"status": "ok", "timestamp": 1635264767761, "user_tz": -420, "elapsed": 1545, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="f809ca28-d1c7-40d5-c499-102311942cca" import seaborn as sns plt.figure() plt.subplot(121) sns.barplot(train_class_dist, class_names) plt.figure() plt.subplot(122) sns.barplot(test_class_dist, class_names) # + colab={"base_uri": "https://localhost:8080/"} id="Q6cqp2JDNHm4" executionInfo={"status": "ok", "timestamp": 1635264767763, "user_tz": -420, "elapsed": 30, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="9f759f3e-4300-48c0-f94b-11c6799b061f" docs_train.data[2].split('\n')[0:5] # + colab={"base_uri": "https://localhost:8080/"} id="5fHlo-HrSXpU" executionInfo={"status": "ok", "timestamp": 1635264767764, "user_tz": -420, "elapsed": 28, "user": {"displayName": "l\u\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="432f3ffe-165b-45d2-874d-98229d1dde4f" docs_train.target # + [markdown] id="NWfUIp6EVKJb" # # II. Extracting features from text files # + [markdown] id="pDmsEdARaWlM" # # 1. SVM # + id="82Lqq0xdfJiC" executionInfo={"status": "ok", "timestamp": 1635264767765, "user_tz": -420, "elapsed": 20, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} pipeline_list = [('bow', CountVectorizer(stop_words='english')), ('tfidf', TfidfTransformer(sublinear_tf=True, norm='l2')), ('svm', SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, max_iter=5, random_state=42, tol=None))] parameters_svm = {'bow__ngram_range': [(1, 1), (1, 2)], 'tfidf__use_idf': (True, False), 'svm__alpha': (1e-2, 1e-3)} # + id="LxIto8sCWCdw" executionInfo={"status": "ok", "timestamp": 1635264767766, "user_tz": -420, "elapsed": 20, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} # define model's pipeline text_clf = Pipeline(pipeline_list) gs_text_clf = GridSearchCV(text_clf, parameters_svm, n_jobs=-1) # + colab={"base_uri": "https://localhost:8080/"} id="6ePVA34xaf6x" executionInfo={"status": "ok", "timestamp": 1635265016241, "user_tz": -420, "elapsed": 248494, "user": {"displayName": "l\u00\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="cf3c03ed-0f80-46cd-f221-3acd34da016b" print("[INFO] training SVM model ...") gs_text_clf.fit(docs_train.data, docs_train.target) # + colab={"base_uri": "https://localhost:8080/"} id="eqHA0RGXoZyy" executionInfo={"status": "ok", "timestamp": 1635265016242, "user_tz": -420, "elapsed": 11, "user": {"displayName": "l\u\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="29631091-526e-435f-912f-9f112863cb90" gs_text_clf.best_score_ gs_text_clf.best_params_ # + id="LLFCDNJ0b8cS" executionInfo={"status": "ok", "timestamp": 1635265019323, "user_tz": -420, "elapsed": 3088, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": 
"https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} # predict on the test documents predicted_svm = gs_text_clf.predict(docs_test.data) # + colab={"base_uri": "https://localhost:8080/"} id="_vjaIwsJbMuU" executionInfo={"status": "ok", "timestamp": 1635265019324, "user_tz": -420, "elapsed": 18, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="3835f566-a7f3-4677-d1ee-e7" print("[INFO] evaluating SVM model...") print(classification_report(predicted_svm, docs_test.target, target_names=class_names)) # + [markdown] id="_hdSxP5daN6X" # # 2. Random Forest # + id="TtpbC3bvdFAO" executionInfo={"status": "ok", "timestamp": 1635265019325, "user_tz": -420, "elapsed": 17, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} # Thes best parameters for two first model pipeline_list = [('bow', CountVectorizer(stop_words='english', ngram_range=(1, 2))), ('tfidf', TfidfTransformer(sublinear_tf=True, norm='l2')), ('rf', RandomForestClassifier(n_estimators=400, max_depth=45, random_state=42))] # Chưa Grid Search tối ưu =)) # + id="surMWl0ZMjZ5" executionInfo={"status": "ok", "timestamp": 1635265019326, "user_tz": -420, "elapsed": 17, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} # define model's pipeline text_clf = Pipeline(pipeline_list) # + colab={"base_uri": "https://localhost:8080/"} id="kOER1foiMqe8" executionInfo={"status": "ok", "timestamp": 1635265091056, "user_tz": -420, "elapsed": 71746, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="304419ba-a834-4dcd-f5ae-a8a7bd586ae7" print("[INFO] training Random Forest model ...") text_clf.fit(docs_train.data, docs_train.target) # + id="KwHZCHH7Okoi" executionInfo={"status": "ok", "timestamp": 1635265100137, "user_tz": -420, "elapsed": 9086, "user": {"displayName": "l\u00\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} predicted_rf = text_clf.predict(docs_test.data) # + colab={"base_uri": "https://localhost:8080/"} id="3TGDT5zCM6-N" executionInfo={"status": "ok", "timestamp": 1635265100140, "user_tz": -420, "elapsed": 16, "user": {"displayName": "l\u00e2m nguy\u1ec5n \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} outputId="90e07d29-bcdf-4f55-fce6-10b6f9ffde47" print("[INFO] evaluating Random Forest model...") print(classification_report(predicted_rf, docs_test.target, target_names=class_names)) # + id="kKTLf1LzOsum" executionInfo={"status": "ok", "timestamp": 1635265100141, "user_tz": -420, "elapsed": 14, "user": {"displayName": " \u0111\u00ecnh", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GgveoyIIIA5GiG5cYJYZfzYHaCvdw8TQH6Hix6q=s64", "userId": "00806905388804607111"}} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light 
# format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- #

# # Facial Emotion Recognition - Live Prediction
# A project for the French Employment Agency
# Telecom ParisTech 2018-2019
    # # I. Context # The aim of this notebook is to explore facial emotion recognition techniques from a live webcam video stream. # # The data set used for training is the Kaggle FER2013 emotion recognition data set : https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data # # The models explored include : # - Manual filters # - Deep Learning Architectures # - DenseNet Inspired Architectures # # This model will be combined with voice emotion recongition as well as psychological traits extracted from text inputs, and should provide a benchmark and a deep analysis of both verbal and non-verbal insights for candidates seeking for a job and their performance during an interview. # # II. General imports # Versions used : # + active="" # Python : 3.6.5 # Tensorflow : 1.10.1 # Keras : 2.2.2 # Numpy : 1.15.4 # OpenCV : 4.0.0 # + ### General imports ### import numpy as np import pandas as pd import matplotlib.pyplot as plt from time import time from time import sleep import re import os import argparse from collections import OrderedDict import matplotlib.animation as animation ### Image processing ### from scipy.ndimage import zoom from scipy.spatial import distance import imutils from scipy import ndimage import cv2 import dlib from __future__ import division from imutils import face_utils ### Deep Learning models ### import tensorflow.keras from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img from tensorflow.keras.callbacks import TensorBoard from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten from tensorflow.keras.layers import Conv2D, MaxPooling2D, SeparableConv2D from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D, SeparableConv2D, ZeroPadding2D, UpSampling2D, BatchNormalization, Input, GlobalAveragePooling2D, AveragePooling2D #from tensorflow. keras.utils import np_utils from tensorflow.keras.regularizers import l2 from tensorflow.keras.optimizers import SGD, RMSprop from tensorflow.keras.utils import to_categorical from tensorflow.keras.layers import BatchNormalization from tensorflow.keras import models from keras.utils.vis_utils import plot_model from tensorflow.keras.layers import Input, GlobalAveragePooling2D from tensorflow.keras.models import Model from tensorflow.keras import layers from tensorflow.keras.applications import densenet from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau ### Build SVM models ### from sklearn.preprocessing import OneHotEncoder from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn import svm ### Same trained models ### import h5py from keras.models import model_from_json import pickle ### Visualization ### from ggplot import * import time from sklearn.manifold import TSNE # - # # III. Import datas path = '/Users/maelfabien/filrouge_pole_emploi/Video/' local_path = '/Users/maelfabien/Desktop/LocalDB/Videos/' X_train = np.load(local_path + "X_train.npy") X_test = np.load(local_path + "X_test.npy") y_train = np.load(local_path + "y_train.npy") y_test = np.load(local_path + "y_test.npy") shape_x = 48 shape_y = 48 nRows,nCols,nDims = X_train.shape[1:] input_shape = (nRows, nCols, nDims) classes = np.unique(y_train) nClasses = len(classes) nRows, nCols, nDims # # IV. Detect Faces # First of all, we need to detect the faces inside an image. 
This will allow us to : # - focus on the region of the face # - stop the prediction if no face is recognized. # # To do so, we use OpenCV faceCascade classifier. Object Detection using Haar feature-based cascade classifiers is an effective object detection method proposed by and in their paper, "Rapid Object Detection using a Boosted Cascade of Simple Features" in 2001. It is a machine learning based approach where a cascade function is trained from a lot of positive and negative images. It is then used to detect objects in other images. # # The term "Cascade" comes from the fact that when a window is explored and no face edge is identified, the region is left apart and we move on to the next one using Adaboost classifier. This makes the overall process very efficient. def detect_face(frame): #Cascade classifier pre-trained model cascPath = '/usr/local/lib/python3.7/site-packages/cv2/data/haarcascade_frontalface_default.xml' faceCascade = cv2.CascadeClassifier(cascPath) #BGR -> Gray conversion gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) #Cascade MultiScale classifier detected_faces = faceCascade.detectMultiScale(gray,scaleFactor=1.1,minNeighbors=6, minSize=(shape_x, shape_y), flags=cv2.CASCADE_SCALE_IMAGE) coord = [] for x, y, w, h in detected_faces : if w > 100 : sub_img=frame[y:y+h,x:x+w] cv2.rectangle(frame,(x,y),(x+w,y+h),(0, 255,255),1) coord.append([x,y,w,h]) return gray, detected_faces, coord #Extraire les features faciales def extract_face_features(faces, offset_coefficients=(0.075, 0.05)): gray = faces[0] detected_face = faces[1] new_face = [] for det in detected_face : #Region dans laquelle la face est détectée x, y, w, h = det #X et y correspondent à la conversion en gris par gray, et w, h correspondent à la hauteur/largeur #Offset coefficient, np.floor takes the lowest integer (delete border of the image) horizontal_offset = np.int(np.floor(offset_coefficients[0] * w)) vertical_offset = np.int(np.floor(offset_coefficients[1] * h)) #gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) #gray transforme l'image extracted_face = gray[y+vertical_offset:y+h, x+horizontal_offset:x-horizontal_offset+w] #Zoom sur la face extraite new_extracted_face = zoom(extracted_face, (shape_x / extracted_face.shape[0],shape_y / extracted_face.shape[1])) #cast type float new_extracted_face = new_extracted_face.astype(np.float32) #scale new_extracted_face /= float(new_extracted_face.max()) #print(new_extracted_face) new_face.append(new_extracted_face) return new_face # Initial picture : trump = '/Users/maelfabien/MER/Video/Test_Images/trump.jpg' trump_face = cv2.imread(trump, cv2.COLOR_BGR2RGB) plt.imshow(trump_face) # Extracted face : face = extract_face_features(detect_face(trump_face))[0] plt.imshow(face) # # V. 
Load model and visualize layers def entry_flow(inputs) : x = Conv2D(32, 3, strides = 2, padding='same')(inputs) x = BatchNormalization()(x) x = Activation('relu')(x) x = Conv2D(64,3,padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) previous_block_activation = x for size in [64, 128, 256] : x = Activation('relu')(x) x = SeparableConv2D(size, 3, padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(size, 3, padding='same')(x) x = BatchNormalization()(x) x = MaxPooling2D(3, strides=2, padding='same')(x) residual = Conv2D(size, 1, strides=2, padding='same')(previous_block_activation) x = tensorflow.keras.layers.Add()([x, residual]) previous_block_activation = x return x def middle_flow(x, num_blocks=8) : previous_block_activation = x for _ in range(num_blocks) : x = Activation('relu')(x) x = SeparableConv2D(256, 3, padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(256, 3, padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(256, 3, padding='same')(x) x = BatchNormalization()(x) x = tensorflow.keras.layers.Add()([x, previous_block_activation]) previous_block_activation = x return x def exit_flow(x) : previous_block_activation = x x = Activation('relu')(x) x = SeparableConv2D(256, 3, padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(1024, 3, padding='same')(x) x = BatchNormalization()(x) x = MaxPooling2D(3, strides=2, padding='same')(x) residual = Conv2D(1024, 1, strides=2, padding='same')(previous_block_activation) x = tensorflow.keras.layers.Add()([x, residual]) x = Activation('relu')(x) x = SeparableConv2D(728, 3, padding='same')(x) x = BatchNormalization()(x) x = Activation('relu')(x) x = SeparableConv2D(1024, 3, padding='same')(x) x = BatchNormalization()(x) x = GlobalAveragePooling2D()(x) x = Dense(7, activation='softmax', activity_regularizer=l2(0.001))(x) return x inputs = Input(shape=(shape_x, shape_y, 1)) outputs = exit_flow(middle_flow(entry_flow(inputs))) model = Model(inputs, outputs) model.load_weights(local_path + 'savedmodels/xception_2.hdf5') model.save(local_path +'final_xception.h5') # creates a HDF5 file 'my_model.h5' # returns a compiled model # identical to the previous one from tensorflow.keras.models import load_model model = load_model(local_path +'final_xception.h5') model. # + active="" # with open(local_path + 'savedmodels/xception_2.h5','r') as f: # json = f.read() # model = model_from_json(json) # # model.load_weights(local_path + 'savedmodels/xception_2.h5') # print("Loaded model from disk") # # model.load_weights(local_path + 'savedmodels/xception_2.h5') # print("Loaded model from disk") # # model = load_model(local_path + 'savedmodels/xception_2.h5') # - # # VI. 
Visualize layers layer_outputs = [layer.output for layer in model.layers[:12]] # Extracts the outputs of the top 12 layers activation_model = models.Model(inputs=model.input, outputs=layer_outputs) # + layer_names = [] for layer in model.layers[:12]: layer_names.append(layer.name) # Names of the layers, so you can have them as part of your plot images_per_row = 16 # + trump = '/Users/maelfabien/MER/Video/Test_Images/trump.jpg' trump_face = cv2.imread(trump) face = extract_face_features(detect_face(trump_face))[0] to_predict = np.reshape(face.flatten(), (1,48,48,1)) res = model.predict(to_predict) activations = activation_model.predict(to_predict) # - for layer_name, layer_activation in zip(layer_names, activations): # Displays the feature maps n_features = layer_activation.shape[-1] # Number of features in the feature map size = layer_activation.shape[1] #The feature map has shape (1, size, size, n_features). n_cols = n_features // images_per_row # Tiles the activation channels in this matrix display_grid = np.zeros((size * n_cols, images_per_row * size)) for col in range(n_cols): # Tiles each filter into a big horizontal grid for row in range(images_per_row): channel_image = layer_activation[0,:, :,col * images_per_row + row] channel_image -= channel_image.mean() # Post-processes the feature to make it visually palatable channel_image /= channel_image.std() channel_image *= 64 channel_image += 128 channel_image = np.clip(channel_image, 0, 255).astype('uint8') display_grid[col * size : (col + 1) * size, # Displays the grid row * size : (row + 1) * size] = channel_image scale = 1. / size plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0])) plt.title(layer_name) plt.grid(False) plt.imshow(display_grid, aspect='auto', cmap='viridis') # # VII. Making a prediction on an image hanks = '/Users/maelfabien/MER/Video/Test_Images/hanks_vs.jpg' hanks_face = cv2.imread(hanks) plt.figure(figsize=(12,12)) plt.imshow(cv2.cvtColor(hanks_face, cv2.COLOR_BGR2RGB)) plt.figure(figsize=(12,12)) plt.imshow(detect_face(hanks_face)[0], 'gray') plt.show() for face in extract_face_features(detect_face(hanks_face)) : plt.figure(figsize=(10,10)) plt.imshow(face, 'gray') plt.show() for face in extract_face_features(detect_face(hanks_face)) : to_predict = np.reshape(face.flatten(), (1,48,48,1)) res = model.predict(to_predict) result_num = np.argmax(res) print(result_num) # This corresponds to the Happy Labels which is a good prediction. # # IX. Enhanced Visualization # This basic step is now woring properly and results are quite satisfying. There are lots of sources of improvements we'll try to implement over time : # - add features from manually selected filters (e.g Gabor filters) # - take into account the frequency of eye blinks # - take into account the symetry of the keypoints on a face # - display all the keypoints of the face # - align the face by scaling of the facial features # - add emojis translating the emotion # ## a. Frequency of eye blink def eye_aspect_ratio(eye): A = distance.euclidean(eye[1], eye[5]) B = distance.euclidean(eye[2], eye[4]) C = distance.euclidean(eye[0], eye[3]) ear = (A + B) / (2.0 * C) return ear thresh = 0.25 frame_check = 20 face_detect = dlib.get_frontal_face_detector() predictor_landmarks = dlib.shape_predictor("/Users/maelfabien/Desktop/LocalDB/Videos/landmarks/shape_predictor_68_face_landmarks.dat") (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"] (rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"] # ## b. 
Detect Keypoints to plot them # + (nStart, nEnd) = face_utils.FACIAL_LANDMARKS_IDXS["nose"] (mStart, mEnd) = face_utils.FACIAL_LANDMARKS_IDXS["mouth"] (jStart, jEnd) = face_utils.FACIAL_LANDMARKS_IDXS["jaw"] (eblStart, eblEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eyebrow"] (ebrStart, ebrEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eyebrow"] # - # ## c. Face Alignment # + code_folding=[] desiredLeftEye=(0.35, 0.35) def align(gray, rect): # convert the landmark (x, y)-coordinates to a NumPy array shape = predictor(gray, rect) shape = shape_to_np(shape) # extract the left and right eye (x, y)-coordinates (lStart, lEnd) = FACIAL_LANDMARKS_IDXS["left_eye"] (rStart, rEnd) = FACIAL_LANDMARKS_IDXS["right_eye"] leftEyePts = shape[lStart:lEnd] rightEyePts = shape[rStart:rEnd] # compute the center of mass for each eye leftEyeCenter = leftEyePts.mean(axis=0).astype("int") rightEyeCenter = rightEyePts.mean(axis=0).astype("int") # compute the angle between the eye centroids dY = rightEyeCenter[1] - leftEyeCenter[1] dX = rightEyeCenter[0] - leftEyeCenter[0] angle = np.degrees(np.arctan2(dY, dX)) - 180 # compute the desired right eye x-coordinate based on the # desired x-coordinate of the left eye desiredRightEyeX = 1.0 - desiredLeftEye[0] # determine the scale of the new resulting image by taking # the ratio of the distance between eyes in the *current* # image to the ratio of distance between eyes in the # *desired* image dist = np.sqrt((dX ** 2) + (dY ** 2)) desiredDist = (desiredRightEyeX - desiredLeftEye[0]) desiredDist *= self.desiredFaceWidth scale = desiredDist / dist # compute center (x, y)-coordinates (i.e., the median point) # between the two eyes in the input image eyesCenter = ((leftEyeCenter[0] + rightEyeCenter[0]) // 2, (leftEyeCenter[1] + rightEyeCenter[1]) // 2) # grab the rotation matrix for rotating and scaling the face M = cv2.getRotationMatrix2D(eyesCenter, angle, scale) # update the translation component of the matrix tX = self.desiredFaceWidth * 0.5 tY = self.desiredFaceHeight * self.desiredLeftEye[1] M[0, 2] += (tX - eyesCenter[0]) M[1, 2] += (tY - eyesCenter[1]) # apply the affine transformation (w, h) = (self.desiredFaceWidth, self.desiredFaceHeight) #output = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_CUBIC) output = cv2.warpAffine(gray, M, (w, h), flags=cv2.INTER_CUBIC) # return the aligned face return output # - # ## d. Final Prediction # + #Lancer la capture video video_capture = cv2.VideoCapture(0) while True: # Capture frame-by-frame ret, frame = video_capture.read() face_index = 0 gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) rects = face_detect(gray, 1) #gray, detected_faces, coord = detect_face(frame) try : for (i, rect) in enumerate(rects): shape = predictor_landmarks(gray, rect) shape = face_utils.shape_to_np(shape) # Identify face coordinates (x, y, w, h) = face_utils.rect_to_bb(rect) face = gray[y:y+h,x:x+w] #Zoom on extracted face face = zoom(face, (shape_x / face.shape[0],shape_y / face.shape[1])) #Cast type float face = face.astype(np.float32) #Scale face /= float(face.max()) face = np.reshape(face.flatten(), (1, 48, 48, 1)) #Make Prediction prediction = model.predict(face) prediction_result = np.argmax(prediction) # Rectangle around the face cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2) cv2.putText(frame, "Face #{}".format(i + 1), (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2) for (j, k) in shape: cv2.circle(frame, (j, k), 1, (0, 0, 255), -1) # 12. 
Add prediction probabilities cv2.putText(frame, "----------------",(40,100 + 180*i), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 0) cv2.putText(frame, "Emotional report : Face #" + str(i+1),(40,120 + 180*i), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 0) cv2.putText(frame, "Angry : " + str(round(prediction[0][0],3)),(40,140 + 180*i), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 0) cv2.putText(frame, "Disgust : " + str(round(prediction[0][1],3)),(40,160 + 180*i), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 0) cv2.putText(frame, "Fear : " + str(round(prediction[0][2],3)),(40,180 + 180*i), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 1) cv2.putText(frame, "Happy : " + str(round(prediction[0][3],3)),(40,200 + 180*i), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 1) cv2.putText(frame, "Sad : " + str(round(prediction[0][4],3)),(40,220 + 180*i), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 1) cv2.putText(frame, "Surprise : " + str(round(prediction[0][5],3)),(40,240 + 180*i), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 1) cv2.putText(frame, "Neutral : " + str(round(prediction[0][6],3)),(40,260 + 180*i), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 1) # draw extracted face in the top right corner #frame[face_index * shape_x: (face_index + 1) * shape_x, -1 * shape_y - 1:-1, :] = cv2.cvtColor(face * 255, cv2.COLOR_GRAY2RGB) # 13. Annotate main image with a label if prediction_result == 0 : cv2.putText(frame, "Angry",(x+w-10,y-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) elif prediction_result == 1 : cv2.putText(frame, "Disgust",(x+w-10,y-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) elif prediction_result == 2 : cv2.putText(frame, "Fear",(x+w-10,y-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) elif prediction_result == 3 : cv2.putText(frame, "Happy",(x+w-10,y-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) elif prediction_result == 4 : cv2.putText(frame, "Sad",(x+w-10,y-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) elif prediction_result == 5 : cv2.putText(frame, "Surprise",(x+w-10,y-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) else : cv2.putText(frame, "Neutral",(x+w-10,y-10), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2) # 5. Eye Detection and Blink Count leftEye = shape[lStart:lEnd] rightEye = shape[rStart:rEnd] # Compute Eye Aspect Ratio leftEAR = eye_aspect_ratio(leftEye) rightEAR = eye_aspect_ratio(rightEye) ear = (leftEAR + rightEAR) / 2.0 # And plot its contours leftEyeHull = cv2.convexHull(leftEye) rightEyeHull = cv2.convexHull(rightEye) cv2.drawContours(frame, [leftEyeHull], -1, (0, 255, 0), 1) cv2.drawContours(frame, [rightEyeHull], -1, (0, 255, 0), 1) # Compute total blinks and frequency #if ear < thresh: #flag += 1 #cv2.putText(frame, "Blink", (10, 200), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 0) #cv2.putText(frame, "Total blinks : " + str(flag), (40, 280 + 180*i), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 0) #cv2.putText(frame, "Blink Frequency : " + str(int(flag/j)), (40, 220), cv2.FONT_HERSHEY_SIMPLEX, 0.5, 155, 0) # 6. Detect Nose nose = shape[nStart:nEnd] noseHull = cv2.convexHull(nose) cv2.drawContours(frame, [noseHull], -1, (0, 255, 0), 1) # 7. Detect Mouth mouth = shape[mStart:mEnd] mouthHull = cv2.convexHull(mouth) cv2.drawContours(frame, [mouthHull], -1, (0, 255, 0), 1) # 8. Detect Jaw jaw = shape[jStart:jEnd] jawHull = cv2.convexHull(jaw) cv2.drawContours(frame, [jawHull], -1, (0, 255, 0), 1) # 9. 
Detect Eyebrows ebr = shape[ebrStart:ebrEnd] ebrHull = cv2.convexHull(ebr) cv2.drawContours(frame, [ebrHull], -1, (0, 255, 0), 1) ebl = shape[eblStart:eblEnd] eblHull = cv2.convexHull(ebl) cv2.drawContours(frame, [eblHull], -1, (0, 255, 0), 1) cv2.putText(frame,'Number of Faces : ' + str(len(rects)),(40, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, 155, 1) cv2.imshow('Video', frame) except : pass if cv2.waitKey(1) & 0xFF == ord('q'): break # When everything is done, release the capture video_capture.release() cv2.destroyAllWindows() # + fig = plt.figure() ax1 = fig.add_subplot(1,1,1) def animate(i) : graph_data = emotion xs = [] ys = [] for emo in graph_data: xs.append(emo[0]) ys.append(emo[1]) ax1.clear() ax1.plot(xs,ys) ani = animation.FuncAnimation(fig, animate, interval=1000) plt.show() # - # # X. Sources # - Visualization : https://github.com/JostineHo/mememoji/blob/master/data_visualization.ipynb # - State of the art Architecture : https://github.com/amineHorseman/facial-expression-recognition-using-cnn # - Eyes Tracking : https://www.pyimagesearch.com/2017/04/24/eye-blink-detection-opencv-python-dlib/ # - Face Alignment : https://www.pyimagesearch.com/2017/05/22/face-alignment-with-opencv-and-python/ # - C.Pramerdorfer, and M.Kampel.Facial Expression Recognition using Con-volutional Neural Networks: State of the Art. Computer Vision Lab, TU Wien. https://arxiv.org/pdf/1612.02903.pdf # - A Brief Review of Facial Emotion Recognition Based # on Visual Information : https://www.mdpi.com/1424-8220/18/2/401/pdf # - Going deeper in facial expression recognition using deep neural networks : https://ieeexplore.ieee.org/document/7477450 # --- # jupyter: # jupytext: # text_representation: # extension: .r # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: R # language: R # name: ir # --- # # R cheat sheet # ![](https://www.r-project.org/Rlogo.png) # **ToC** # - [Basics](#Basics) # - [Concatenation](#Concatenation) # - [Numeric operations](#Numeric-operations) # - [List all variables in memory](#List-all-variables-in-memory) # - [Getting help](#Getting-help) # ## Basics # Concatenation, length of vectors, numeric operations, matrix creation # # ### Concatenation x = c(1,2,3,4,5,6) #concatenate to create a vector x length(x) # ### Numeric operations # vector 1 + vector 2 is a element by element summation y = c(8,9,10,11,12,14) x_y = x+y x_y # ### List all variables in memory # use `ls()` ls() # remove variables using `rm(var_name)` rm('x_y') ls() # ### Getting help # use `?symbol` to get help ?ls # ### Build a matrix # use `matrix(data, nrow, ncol)` method to build a nd matrix m1 = matrix(data= x, nrow=2, ncol=3) m1 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] toc=true #

    # # - from jupyterthemes import get_themes from jupyterthemes.stylefx import set_nb_theme themes = get_themes() set_nb_theme(themes[3]) # + # 1. magic for inline plot # 2. magic to print version # 3. magic so that the notebook will reload external python modules # 4. magic to enable retina (high resolution) plots # https://gist.github.com/minrk/3301035 # %matplotlib inline # %load_ext watermark # %load_ext autoreload # %autoreload 2 # %config InlineBackend.figure_format='retina' import os import json import time import numpy as np import pandas as pd # %watermark -a 'Ethen' -d -t -v -p numpy,pandas,pyarrow,sklearn # - # # Rossman GBT Modeling # ## Data Preparation # We've done most of our data preparation and feature engineering in the previous notebook, we'll still perform some additional ones here, but this notebook focuses on getting the data ready for fitting a Gradient Boosted Tree model. For the model, we will be leveraging lightgbm. # + data_dir = 'cleaned_data' path_train = os.path.join(data_dir, 'train_clean.parquet') path_test = os.path.join(data_dir, 'test_clean.parquet') engine = 'pyarrow' df_train = pd.read_parquet(path_train, engine) df_test = pd.read_parquet(path_test, engine) print('train dimension: ', df_train.shape) print('test dimension: ', df_test.shape) df_train.head() # - # We've pulled most of our configurable parameters outside into a json configuration file. In the ideal scenario, we can move all of our code into a python script and only change the configuration file to experiment with different type of settings to see which one leads to the best overall performance. # + config_path = os.path.join('config', 'gbt_training_template.json') with open(config_path) as f: config_file = json.load(f) config_file # + # extract settings from the configuration file into local variables columns = config_file['columns'] num_cols = columns['num_cols_pattern'] cat_cols = columns['cat_cols_pattern'] id_cols = columns['id_cols'] label_col = columns['label_col'] weights_col = columns['weights_col'] model_task = config_file['model_task'] model_type = config_file['model_type'] model_parameters = config_file['model_parameters'][model_type] model_hyper_parameters = config_file['model_hyper_parameters'][model_type] model_fit_parameters = config_file['model_fit_parameters'][model_type] search_parameters = config_file['search_parameters'] # - # Here, we will remove all records where the store had zero sale / was closed (feel free to experiment with not excluding the zero sales record and see if improves performance) # # We also perform a train/validation split. The validation split will be used in our hyper-parameter tuning process and for early stopping. Notice that because this is a time series application, where we are trying to predict different stores' daily sales. It's important to not perform a random train/test split, but instead divide the training and validation set based on time/date. # # Our training data is already sorted by date in decreasing order, hence we can create the validation set by checking how big is our test set and select the top-N observations to create a validation set that has similar size to our test set. Here we're saying similar size and not exact size, because we make sure that all the records from the same date falls under either training or validation set. 
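#
# As a minimal, self-contained sketch of this idea (the toy `Date`/`Sales` values and the
# cutoff date below are made-up assumptions; the real split in the next cells is instead
# derived from the size of the test set), a time-based fold assignment can be handed to
# sklearn's `PredefinedSplit`, where -1 marks rows that always stay in training and 0 marks
# the single validation fold.

# +
import numpy as np
import pandas as pd
from sklearn.model_selection import PredefinedSplit

# four illustrative daily-sales records, sorted by date
toy = pd.DataFrame({
    'Date': pd.to_datetime(['2015-06-17', '2015-06-18', '2015-06-19', '2015-06-20']),
    'Sales': [5263, 6064, 8314, 13995],
})

# rows on or after the cutoff date form the single validation fold (0),
# rows before it always stay in the training split (-1)
cutoff = pd.Timestamp('2015-06-19')
toy_fold = np.where(toy['Date'] >= cutoff, 0, -1)

cv = PredefinedSplit(test_fold=toy_fold)
for train_idx, val_idx in cv.split():
    print('train rows:', train_idx, 'validation rows:', val_idx)
# -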
# + df_train = df_train[df_train[label_col] != 0].reset_index(drop=True) mask = df_train['Date'] == df_train['Date'].iloc[len(df_test)] val_index = df_train.loc[mask, 'Date'].index.max() val_index # - # The validation fold we're creating is used for [sklearn's PredefinedSplit](https://scikit-learn.org/stable/modules/cross_validation.html#predefined-fold-splits-validation-sets), where we set the index to 0 for all samples that are part of the validation set, and to -1 for all other samples. val_fold = np.full(df_train.shape[0], fill_value=-1) val_fold[:(val_index + 1)] = 0 val_fold # Here, we assign the validation fold back to the original dataframe to illustrate the point, this is technically not required for the rest of the pipeline. Notice in the dataframe that we've printed out, the last record's date, 2015-06-18 is different from the rest, and the record's `val_fold` takes on a value of -1. This means that all records including/after the date 2015-06-19 will become our validation set. df_train['val_fold'] = val_fold df_train[(val_index - 2):(val_index + 2)] # We proceed to extracting the necessary columns both numerical and categorical that we'll use for modeling. # + # the model id is used as the indicator when saving the model model_id = 'gbt' input_cols = num_cols + cat_cols df_train = df_train[input_cols + [label_col]] # we will perform the modeling at the log-scale df_train[label_col] = np.log(df_train[label_col]) df_test = df_test[input_cols + id_cols] print('train dimension: ', df_train.shape) print('test dimension: ', df_test.shape) df_train.head() # + for cat_col in cat_cols: df_train[cat_col] = df_train[cat_col].astype('category') df_test[cat_col] = df_test[cat_col].astype('category') df_train.head() # - # ## Model Training # We use a helper class to train a boosted tree model, generate the prediction on our test set, create the submission file, check the feature importance of the tree-based model and also make sure we can save and re-load the model. # + from gbt_module.model import GBTPipeline model = GBTPipeline(input_cols, cat_cols, label_col, weights_col, model_task, model_id, model_type, model_parameters, model_hyper_parameters, search_parameters) model # - start = time.time() model.fit(df_train, val_fold, model_fit_parameters) elapsed = time.time() - start print('elapsed minutes: ', elapsed / 60) pd.DataFrame(model.model_tuned_.cv_results_) # + # we logged our label, remember to exponentiate it back to the original scale prediction_test = model.predict(df_test[input_cols]) df_test[label_col] = np.exp(prediction_test) submission_cols = id_cols + [label_col] df_test[submission_cols] = df_test[submission_cols].astype('int') submission_dir = 'submission' if not os.path.isdir(submission_dir): os.makedirs(submission_dir, exist_ok=True) submission_file = 'rossmann_submission_{}.csv'.format(model_id) submission_path = os.path.join(submission_dir, submission_file) df_test[submission_cols].to_csv(submission_path, index=False) df_test[submission_cols].head() # - model.get_feature_importance() # + model_checkpoint = os.path.join('models', model_id + '.pkl') model.save(model_checkpoint) loaded_model = GBTPipeline.load(model_checkpoint) # print the cv_results_ again to ensure the checkpointing works pd.DataFrame(loaded_model.model_tuned_.cv_results_) -- ## Cryptol AES Implementation ## -- Copyright (c) 2010-2015, Galois Inc. -- [www.cryptol.net](http://www.cryptol.net) -- You can freely use this source code for educational purposes. 
-- -- This is a fairly close implementation of the [FIPS-197 standard](http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf). -- ### Key size ### -- -- Let `Nk` be the number of blocks in the key. This must be one of `4` (AES128), `6` (AES192), or `8` (AES256). -- -- Aside from this line, no other code below needs to change for implementing AES128, AES192, or AES256. type Nk = 4 type AESKeySize = (Nk*32) -- Number of blocks and rounds type Nb = 4 type Nr = 6 + Nk -- Helper type definitions type GF28 = [8] type State = [4][Nb]GF28 type RoundKey = State type KeySchedule = (RoundKey, [Nr-1]RoundKey, RoundKey) -- $GF(2^8)$ operations -- + gf28Add : {n} (fin n) => [n]GF28 -> GF28 gf28Add ps = sums ! 0 where sums = [zero] # [ p ^ s | p <- ps | s <- sums ] irreducible = <| x^^8 + x^^4 + x^^3 + x + 1 |> gf28Mult : (GF28, GF28) -> GF28 gf28Mult (x, y) = pmod(pmult x y) irreducible gf28Pow : (GF28, [8]) -> GF28 gf28Pow (n, k) = pow k where sq x = gf28Mult (x, x) odd x = x ! 0 pow i = if i == 0 then 1 else if odd i then gf28Mult(n, sq (pow (i >> 1))) else sq (pow (i >> 1)) gf28Inverse : GF28 -> GF28 gf28Inverse x = gf28Pow (x, 254) gf28DotProduct : {n} (fin n) => ([n]GF28, [n]GF28) -> GF28 gf28DotProduct (xs, ys) = gf28Add [ gf28Mult (x, y) | x <- xs | y <- ys ] gf28VectorMult : {n, m} (fin n) => ([n]GF28, [m][n]GF28) -> [m]GF28 gf28VectorMult (v, ms) = [ gf28DotProduct(v, m) | m <- ms ] gf28MatrixMult : {n, m, k} (fin m) => ([n][m]GF28, [m][k]GF28) -> [n][k]GF28 gf28MatrixMult (xss, yss) = [ gf28VectorMult(xs, yss') | xs <- xss ] where yss' = transpose yss -- - -- The affine transform and its inverse -- + xformByte : GF28 -> GF28 xformByte b = gf28Add [b, (b >>> 4), (b >>> 5), (b >>> 6), (b >>> 7), c] where c = 0x63 xformByte' : GF28 -> GF28 xformByte' b = gf28Add [(b >>> 2), (b >>> 5), (b >>> 7), d] where d = 0x05 -- - -- The S-box sbox : [256]GF28 sbox = [ 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, 0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4, 0x72, 0xc0, 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15, 0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, 0x07, 0x12, 0x80, 0xe2, 0xeb, 0x27, 0xb2, 0x75, 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84, 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, 0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, 0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c, 0x9f, 0xa8, 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, 0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, 0xc4, 0xa7, 0x7e, 0x3d, 0x64, 0x5d, 0x19, 0x73, 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb, 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, 0xc2, 0xd3, 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79, 0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a, 0xae, 0x08, 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, 0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, 0x61, 0x35, 0x57, 0xb9, 0x86, 0xc1, 0x1d, 0x9e, 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94, 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf, 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, 0x41, 0x99, 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16 ] -- The SubBytes transform and its inverse -- + SubByte : GF28 -> GF28 SubByte b = xformByte (gf28Inverse b) SubByte' : GF28 -> 
GF28 SubByte' b = sbox@b SubBytes : State -> State SubBytes state = [ [ SubByte' b | b <- row ] | row <- state ] InvSubByte : GF28 -> GF28 InvSubByte b = gf28Inverse (xformByte' b) InvSubBytes : State -> State InvSubBytes state =[ [ InvSubByte b | b <- row ] | row <- state ] -- - -- The ShiftRows transform and its inverse -- + ShiftRows : State -> State ShiftRows state = [ row <<< shiftAmount | row <- state | shiftAmount <- [0 .. 3] ] InvShiftRows : State -> State InvShiftRows state = [ row >>> shiftAmount | row <- state | shiftAmount <- [0 .. 3] ] -- - -- The MixColumns transform and its inverse -- + MixColumns : State -> State MixColumns state = gf28MatrixMult (m, state) where m = [[2, 3, 1, 1], [1, 2, 3, 1], [1, 1, 2, 3], [3, 1, 1, 2]] InvMixColumns : State -> State InvMixColumns state = gf28MatrixMult (m, state) where m = [[0x0e, 0x0b, 0x0d, 0x09], [0x09, 0x0e, 0x0b, 0x0d], [0x0d, 0x09, 0x0e, 0x0b], [0x0b, 0x0d, 0x09, 0x0e]] -- - -- The AddRoundKey transform AddRoundKey : (RoundKey, State) -> State AddRoundKey (rk, s) = rk ^ s -- Key expansion -- + Rcon : [8] -> [4]GF28 Rcon i = [(gf28Pow (<| x |>, i-1)), 0, 0, 0] SubWord : [4]GF28 -> [4]GF28 SubWord bs = [ SubByte b | b <- bs ] RotWord : [4]GF28 -> [4]GF28 RotWord [a0, a1, a2, a3] = [a1, a2, a3, a0] NextWord : ([8],[4][8],[4][8]) -> [4][8] NextWord(i, prev, old) = old ^ mask where mask = if i % `Nk == 0 then SubWord(RotWord(prev)) ^ Rcon (i / `Nk) else if (`Nk > 6) && (i % `Nk == 4) then SubWord(prev) else prev ExpandKeyForever : [Nk][4][8] -> [inf]RoundKey ExpandKeyForever seed = [ transpose g | g <- groupBy`{4} (keyWS seed) ] keyWS : [Nk][4][8] -> [inf][4][8] keyWS seed = ret where ret = seed # [ NextWord(i, prev, old) | i <- [ `Nk ... ] | prev <- drop`{Nk-1} ret | old <- ret ] -- - checkKey = take`{16} (drop`{8} (keyWS ["abcd", "defg", "1234", "5678"])) checkKey2 = [transpose g | g <- groupBy`{4}checkKey] checkKey checkKey2 -- + ExpandKey : [AESKeySize] -> KeySchedule ExpandKey key = (keys @ 0, keys @@ [1 .. (Nr - 1)], keys @ `Nr) where seed : [Nk][4][8] seed = split (split key) keys = ExpandKeyForever seed fromKS : KeySchedule -> [Nr+1][4][32] fromKS (f, ms, l) = [ formKeyWords (transpose k) | k <- [f] # ms # [l] ] where formKeyWords bbs = [ join bs | bs <- bbs ] -- - -- AES rounds and inverses -- + AESRound : (RoundKey, State) -> State AESRound (rk, s) = AddRoundKey (rk, MixColumns (ShiftRows (SubBytes s))) AESFinalRound : (RoundKey, State) -> State AESFinalRound (rk, s) = AddRoundKey (rk, ShiftRows (SubBytes s)) AESInvRound : (RoundKey, State) -> State AESInvRound (rk, s) = InvMixColumns (AddRoundKey (rk, InvSubBytes (InvShiftRows s))) AESFinalInvRound : (RoundKey, State) -> State AESFinalInvRound (rk, s) = AddRoundKey (rk, InvSubBytes (InvShiftRows s)) -- - -- Converting a 128 bit message to a `State` and then back -- + msgToState : [128] -> State msgToState msg = transpose (split (split msg)) stateToMsg : State -> [128] stateToMsg st = join (join (transpose st)) -- - -- AES Encryption aesEncrypt : ([128], [AESKeySize]) -> [128] aesEncrypt (pt, key) = stateToMsg (AESFinalRound (kFinal, rounds ! 0)) where (kInit, ks, kFinal) = ExpandKey key state0 = AddRoundKey(kInit, msgToState pt) rounds = [state0] # [ AESRound (rk, s) | rk <- ks | s <- rounds ] -- AES Decryption aesDecrypt : ([128], [AESKeySize]) -> [128] aesDecrypt (ct, key) = stateToMsg (AESFinalInvRound (kFinal, rounds ! 
0)) where (kFinal, ks, kInit) = ExpandKey key state0 = AddRoundKey(kInit, msgToState ct) rounds = [state0] # [ AESInvRound (rk, s) | rk <- reverse ks | s <- rounds ] test1 where (test1,_,_) = ExpandKey 0x3243f6a8885a308d313198a2e0370734 aesEncrypt (0x3243f6a8885a308d313198a2e0370734, 0x2b7e151628aed2a6abf7158809cf4f3c) == 0x3925841d02dc09fbdc118597196a0b32 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Importing Libraries and Supporting python files import torch from torch.autograd import Variable import torch.nn.functional as F import torch.nn as nn import torch.utils.data import torchvision.transforms as transforms import torchvision.datasets as datasets import torchvision.models as models import torchvision import copy import os import numpy as np from options import Parameters from collections import OrderedDict # Calling parameters class from Arguments file args = Parameters() # # Embedding Network # #### The embedding sub-network consists of a deep convolutional network for feature extraction and a nonparametric one-shot classifier # # CNet # 1. Given an input image I, we # use a residual network to produce its feature representation fθemb (I) # 2. a fully-connected # layer on top of the embedding sub-network with a crossentropy loss (CELoss), that outputs |Cbase| scores class Identity(nn.Module): def __init__(self): super(Identity, self).__init__() def forward(self, x): return x class resnet(nn.Module): def __init__(self): super(resnet,self).__init__() ''' Update the resnet model last layer to match the weights ''' resnet18 = torchvision.models.resnet18(pretrained=False) num_features=resnet18.fc.in_features resnet18.fc=nn.Linear(num_features,64) ''' Updated the resnet model according to the weights(.t7) ''' state_dict = torch.load(r'softRandom.t7',map_location=torch.device('cpu')) names=[] for k , v in resnet18.state_dict().items(): names.append(k) i=0 new_state_dict = OrderedDict() for k, v in state_dict.items(): new_state_dict[names[i]] = v i=i+1 resnet18.load_state_dict(new_state_dict) ''' Store the last layer weights before removing ''' self.fc_layer_weight=resnet18.state_dict()[names[-2]] # resnet_updated.fc=Identity() ''' Create a sequence of layers ''' self.conv1=resnet18.conv1 self.conv1.load_state_dict(resnet18.conv1.state_dict()) self.bn1=resnet18.bn1 self.bn1.load_state_dict(resnet18.bn1.state_dict()) self.relu=resnet18.relu self.maxpool=resnet18.maxpool self.layer1=resnet18.layer1 self.layer1.load_state_dict(resnet18.layer1.state_dict()) self.layer2=resnet18.layer2 self.layer2.load_state_dict(resnet18.layer2.state_dict()) self.layer3=resnet18.layer3 self.layer3.load_state_dict(resnet18.layer3.state_dict()) self.layer4=resnet18.layer4 self.layer4.load_state_dict(resnet18.layer4.state_dict()) self.avgpool=resnet18.avgpool def forward(self,x): x=self.conv1(x) x=self.bn1(x) x=self.relu(x) x=self.maxpool(x) layer1 = self.layer1(x) # (, 64L, 56L, 56L) layer2 = self.layer2(layer1) # (, 128L, 28L, 28L) layer3 = self.layer3(layer2) # (, 256L, 14L, 14L) layer4 = self.layer4(layer3) # (,512,7,7) x = self.avgpool(layer4) # (,512,1,1) x = x.view(x.size(0), -1) return x # # Deformation Network class Flatten(nn.Module): def __init__(self): super(Flatten, self).__init__() def forward(self, x): return x.view(x.size(0), -1) class DeformationNetwork(nn.Module): ''' Two branch's performance are similar one branch's 
So we use one branch here Deeper attention network do not bring in benifits So we use small network here ''' def __init__(self): super(DeformationNetwork,self).__init__() def conv(inp,out): return nn.Sequential(nn.Conv2d(inp,out,3,padding=1), nn.BatchNorm2d(out), nn.ReLU(), nn.MaxPool2d(2) ) self.encoder=nn.Sequential(conv(6,32), #'6*224*224' conv(32,64),#'6*224*224' conv(64,64),#'6*224*224' conv(64,32),#'6*224*224' conv(32,16), Flatten() ) def forward(self,x): """ inputs: Batchsize*3*224*224 outputs: Batchsize*100 """ outputs=self.encoder(x) return outputs from torchsummary import summary deform=DeformationNetwork() summary(deform,(6,224,224)) # # IDEMENET # + class IDeMeNet(nn.Module): def __init__(self): super(IDeMeNet,self).__init__() self.deform=DeformationNetwork() self.embedding=resnet() #patch weight for Linear combination of probe and gallery images #defualt patch size if 3*3 self.patch=nn.Sequential(nn.Linear(784,3*3)) ''' FC layer in Embedding Network use the weight we stored in to a separate variable before making in to identity (in other words remove) ''' self.fc=nn.Linear(512,64) self.fc.weight=torch.nn.parameter.Parameter(self.embedding.fc_layer_weight) def forward(self,probe,gallery=1,syn_embedding=None,fixSquare=1,oneSquare=1,mode=None): if mode=='deform_embedding': batch_size=probe.size(0) feature=self.deform(torch.cat((probe,gallery),1)) weight=self.patch(feature) ''' Reshape weights to perform linear operation ''' patch_weight=weight.view(batch_size,3*3,1,1,1) patch_weight=patch_weight.expand(batch_size,3*3,3,224,224) patch_weight=patch_weight*fixSquare #[batch,9,3,224,224] patch_weight=torch.sum(patch_weight,dim=1) #[batch 3,224,224] img_syn=patch_weight*probe+(oneSquare-patch_weight)*gallery syn_embedding=self.embedding(img_syn) return syn_embedding,weight,feature elif mode=='fully_connected': fc_output=self.fc(syn_embedding) return fc_output elif mode=='feature_extraction': feature=self.embedding(probe) return feature IDeMeNet=IDeMeNet() # - # print("N.o GPU's using ",torch.cuda.device_count()) # IDeMeNet=nn.DataParallel(IDeMeNet) # # IDeMeNet=IDeMeNet.cuda() # # Set the optimization parameters for training # + assert args.train_from_scratch==True optimizer_deform=torch.optim.Adam([ {'params':IDeMeNet.deform.parameters()}, {'params':IDeMeNet.patch.parameters(),'lr':args.LR} ],lr=args.LR,eps=1e-04) optimizer_classifer=torch.optim.Adam([ {'params':IDeMeNet.embedding.parameters(),'lr':args.LR*0.1}, {'params':IDeMeNet.fc.parameters(),'lr':args.LR} ],lr=args.LR*0.2,eps=1e-05) # - # # The paper suggests to uses learing rate scheduler which led to better performance # + from torch.optim import lr_scheduler deform_lr_scheduler=lr_scheduler.StepLR(optimizer_deform,step_size=40,gamma=0.5) embedding_lr_scheduler=lr_scheduler.StepLR(optimizer_classifer,step_size=40,gamma=0.5) # - # # Load the train,test and validation datasets # + from data_loading import OneShot_Imagenet def worker_init_fn(worker_id): np.random.seed(np.random.get_state()[1][0] + worker_id) image_datasets={x:OneShot_Imagenet(path=r'C:\Users\vsankepa\Desktop\Untitled Folder 1',type_=x,ways=args.trainways if x=='train' else args.ways, shots=args.shots,test_num=args.test_num,epoch=args.epoch,gallery_img=args.gallery_img) for x in ['train','test','val']} # - dataloaders={x:torch.utils.data.DataLoader(image_datasets[x], batch_size=1,shuffle=True if x=='train' else False # ,num_workers=n_threads, commented for GPU # worker_init_fn=worker_init_fn ) for x in ['train','test','val']} # + #Load Gallery Images # - 
gallery=image_datasets['test'].gallery # train or test does not matter it will give the same data check the function gallery_feature=image_datasets['test'].get_features(IDeMeNet,args.batch_size) #torch.Size([1920, 512]) # # Supporting Functions def extract_feature(model,probe_images,requires_grad): batch=(len(probe_images)+args.batch_size-1)//args.batch_size for i in range(batch): features=model(Variable(probe_images[i*args.batch_size:(i+1)*args.batch_size],requires_grad=requires_grad),mode='feature_extraction') if i==0: # print(i) all_features=features # print(all_features.shape) else: all_features=torch.cat((all_features,features),dim=0) return all_features # # Perform Linear operation of Embeddings of support and gallery images # # ## Creating a weight matrix # + ###################################################################### # Weight matrix pre-process patch_xl = [] patch_xr = [] patch_yl = [] patch_yr = [] if args.patch_size == 3: point = [0,74,148,224] elif args.patch_size == 5: point = [0,44,88,132,176,224] elif args.patch_size == 7: point = [0,32,64,96,128,160,192,224] for i in range(args.patch_size): for j in range(args.patch_size): patch_xl.append(point[i]) patch_xr.append(point[i+1]) patch_yl.append(point[j]) patch_yr.append(point[j+1]) fixSquare = torch.zeros(1,args.patch_size*args.patch_size,3,224,224).float() for i in range(args.patch_size*args.patch_size): fixSquare[:,i,:,patch_xl[i]:patch_xr[i],patch_yl[i]:patch_yr[i]] = 1.00 fixSquare = fixSquare #.cuda() oneSquare = torch.ones(1,3,224,224).float() oneSquare = oneSquare #.cuda() # - def euclidean_dist(x, y): # x: N x D # y: M x D n = x.size(0) #192 m = y.size(0) # 5 d = x.size(1) #512 # assert d == y.size(1) x = x.unsqueeze(1).expand(n, m, d) # [192,5,512] y = y.unsqueeze(0).expand(n, m, d) #[192,5,512] # To accelerate training, but observe little effect return (torch.pow(x - y, 2)).sum(2) # # As we have less data we are augmenting dataset based on the euclidean distance between classes for support dataset # + ''' probe Image is same as support Image .Probe Image before augmenting dataset based on distance and Support Image after augmenting the dataset ''' def aug_images_basedOnDistance(support_images,support_features,support_group,support_class,ways): ''' Calculate the distance between the support/probe features and gallery features IMP: this step will separate gallery and probe/support images based on the distance with out this step gallery and probe/support images are same ''' support_center=support_features.view(ways,args.shots,-1).mean(dim=1) # [5,5,512] --> [5,512] batch_size=len(gallery_feature)//10 #1920//10=192 dists=euclidean_dist(gallery_feature[:batch_size],support_center) with torch.no_grad(): for i in range(1,10): #adding one will include distances=euclidean_dist(gallery_feature[i*batch_size:(i+1)*batch_size],support_center) dists=torch.cat((dists,distances),dim=0) dists=dists.transpose(0,1) ## [ways,ways*Gallery_size] check self.train_data in loadData probe_images=torch.FloatTensor(ways*args.shots*(1+args.augnum),3,224,224) # [5,5,6,3,224,224] gallery_images=torch.FloatTensor(ways*args.shots*(1+args.augnum),3,224,224) # [5,5,6,3,224,224] probe_group=torch.FloatTensor(ways*args.shots*(1+args.augnum),1)#way number # [5,5,6,1] probe_class=torch.FloatTensor(ways*args.shots*(1+args.augnum),1) # class # [5,5,6,1] _,distance_ind=torch.topk(dists,args.chooseNum,dim=1,largest=False) #returns top chooseNum distances in ascending order. 
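    # Note on the loop below: slot 0 of every (1 + augnum) block keeps the original support
    # image paired with itself, while each of the `args.augnum` extra slots pairs a
    # (possibly horizontally flipped) copy of the probe image with a gallery image sampled
    # at random from the `args.chooseNum` gallery entries closest, in Euclidean distance,
    # to that class's mean support feature.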
#I think he are we are using gallery images which are close to original images in euclidean distance for i in range(ways): for j in range(args.shots): probe_images[i*args.shots*(1+args.augnum)+j*(1+args.augnum)+0]=support_images[i*args.shots+j] probe_group[i*args.shots*(1+args.augnum)+j*(1+args.augnum)+0]=support_group[i*args.shots+j] probe_class[i*args.shots*(1+args.augnum)+j*(1+args.augnum)+0]=support_class[i*args.shots+j] gallery_images[i*args.shots*(1+args.augnum)+j*(1+args.augnum)+0]=support_images[i*args.shots+j] for k in range(args.augnum): p=np.random.randint(0,2) if p==0: #1+k is because k starts from 0 and oth position has an image already probe_images[i*args.shots*(1+args.augnum)+j*(1+args.augnum)+1+k]=torch.flip(probe_images[i*args.shots+j],[2]) else: probe_images[i*args.shots*(1+args.augnum)+j*(1+args.augnum)+1+k]=probe_images[i*args.shots+j] probe_group[i*args.shots*(1+args.augnum)+j*(1+args.augnum)+1+k]=support_group[i*args.shots+j] probe_class[i*args.shots*(1+args.augnum)+j*(1+args.augnum)+1+k]=support_class[i*args.shots+j] choose=np.random.randint(0,args.chooseNum) # train or test does not matter it will select all train images for gallery gallery_images[i*args.shots*(1+args.augnum)+j*(1+args.augnum)+1+k]=image_datasets['test'].get_gallery_images(gallery[distance_ind[i][choose]]) return probe_images,gallery_images,probe_group,probe_class # - # # Training Function # + def train_model(model,num_epochs=25): summary=dict() num_epochs=20 best_model_wts = copy.deepcopy(model.state_dict()) best_loss = float('inf') emb_loss=nn.CrossEntropyLoss() for epoch in range(num_epochs): print('Running Epoch -->',epoch) for phase in ['train','test']: if phase=='train': deform_lr_scheduler.step() embedding_lr_scheduler.step() model.train(False) loss,acc=0.0,0 classifier_loss,classifier_acc=0,0 weights={} for k in range(args.patch_size*args.patch_size): weights[str(k)]=[] count=0 for i , (support_images,support_group,support_class,test_images,test_group,test_class) in enumerate(dataloaders['train']): ''' x = torch.tensor([1, 2, 3, 4]) # (4) x.unsqueeze(1).shape #(4,1) x=x.unsqueeze(0).shape #(1,4) x.squueze(0).shape # (4) ''' count=count+1 support_images=support_images.squeeze(0) #torch.Size([25, 3, 224, 224]) support_group=support_group.squeeze(0) #torch.Size([25, 1]) support_class=support_class.squeeze(0) #torch.Size([25, 1]) test_images=test_images.squeeze(0) #torch.Size([25, 3, 224, 224]) test_class=test_class.squeeze(0) #torch.Size([25, 3, 224, 224]) ways=int(support_images.size(0)//args.shots) support_features=extract_feature(model,support_images,requires_grad=True) #torch.Size([25, 512]) test_features=extract_feature(model,test_images,requires_grad=True) #torch.Size([25, 512]) probe_images,gallery_images,probe_group,probe_class=aug_images_basedOnDistance(support_images,support_features,support_group,support_class,ways=ways) batch=len(probe_images+args.batch_size-1)//args.batch_size first=True for b in range(batch): if b==batch-1: remaining=probe_images.size(0)-b*args.batch_size syn_embedding,patch_weight,features=model(Variable(probe_images[b*args.batch_size:],requires_grad=True), Variable(gallery_images[b*args.batch_size:],requires_grad=True), fixSquare=Variable(fixSquare.expand(remaining,args.patch_size*args.patch_size,3,224,224),requires_grad=False), oneSquare=Variable(oneSquare.expand(remaining,3,224,224),requires_grad=False), mode='deform_embedding' ) _cls = model(None,syn_embedding=syn_embedding,gallery=None,fixSquare=1,oneSquare=1,mode='fully_connected') else: 
syn_embedding,patch_weight,features=model(Variable(probe_images[b*args.batch_size:(b+1)*args.batch_size],requires_grad=True), Variable(gallery_images[b*args.batch_size:(b+1)*args.batch_size],requires_grad=True), fixSquare=Variable(fixSquare.expand(args.batch_size,args.patch_size*args.patch_size,3,224,224),requires_grad=False), oneSquare=Variable(oneSquare.expand(args.batch_size,3,224,224),requires_grad=False), mode='deform_embedding' ) _cls = model(None,syn_embedding=syn_embedding,gallery=None,fixSquare=1,oneSquare=1,mode='fully_connected') if b==0: all_syn_embedding=syn_embedding all_patch_weight=patch_weight all_features=features all_classes=_cls else: all_syn_embedding=torch.cat((all_syn_embedding,syn_embedding),dim=0) all_patch_weight=torch.cat((all_patch_weight,patch_weight),dim=0) all_features=torch.cat((all_features,features),dim=0) all_classes=torch.cat((all_classes,_cls),dim=0) all_patch_weight=all_patch_weight.transpose(1,0) for k in range(args.patch_size*args.patch_size): weights[str(k)]=weights[str(k)]+all_patch_weight[k].reshape(-1).tolist() syn_embedding_mean=all_syn_embedding.view(ways,args.shots*(1+args.augnum),-1).mean(1) #[ways,512] dists=euclidean_dist(test_features,syn_embedding_mean) #[ways*test_num,ways] log_prob=F.log_softmax(-dists,dim=1).view(ways,args.test_num,-1) # [ways,test_num,ways]] loss_val=-log_prob.gather(2,test_group.view(ways,args.test_num,1)).view(-1).mean() val,ind=log_prob.max(2) # 0- columns , 1-rows ,2- shape dim-0*dim-1 acc_val=torch.eq(ind,test_group.view(ways,args.test_num)).float().mean() loss+=loss_val.item() acc+=acc_val.item() #back propagation in training phase if phase=='train': if args.fix_deform==0: optimizer_deform.zero_grad() loss_val.backward(retain_graph=True) optimizer_deform.step() ind,pred=torch.max(all_classes,1) probe_class=probe_class.view(probe_class.size(0)) entropy_loss=emb_loss(all_classes,probe_class.long()) if epoch!=0 and args.fix_emb==True: optimizer_classifer.zero_grad() entropy_loss.backward() optimizer_classifer.step() classifier_loss+=entropy_loss.item() classifier_acc+=torch.eq(pred,probe_class.view(-1)).float().mean() epoch_loss=classifier_loss/float(count) epoch_acc=classifier_acc/float(count) epoch_classifier_loss=classifier_loss/float(count) epoch_classifier_acc=classifier_acc/float(count) summary[str(epoch)+'-'+phase]={ phase+'loss': epoch_loss, phase+'accuracy': epoch_acc, phase+'_classifier_loss': epoch_classifier_loss, phase+'_classifier_acc': epoch_classifier_acc, } print('{} Loss: {:.4f} Accuracy: {:.4f}'.format( phase, epoch_loss,epoch_acc)) # deep copy the model if phase == 'test' and epoch_loss < best_loss: best_loss = epoch_loss if torch.cuda.device_count() > 1: best_model_wts = copy.deepcopy(model.module.state_dict()) else: best_model_wts = copy.deepcopy(model.state_dict()) if epoch%2 == 0 : torch.save(best_model_wts,os.path.join(os.getcwd(),'saved_models/'+str(args.tensorname)+'.t7')) # load best model weights model.load_state_dict(best_model_wts) return model,summary # + IDeMeNet,summary = train_model(IDeMeNet, num_epochs=120) with open('summary.txt', 'w') as f: print(summary, file=f) if torch.cuda.device_count() > 1: torch.save(IDeMeNet.module.state_dict(),os.path.join(os.getcwd(),'saved_models/'+str(args.tensorname)+'.t7')) else: torch.save(IDeMeNet.state_dict(),os.path.join(os.getcwd(),'saved_models/'+str(args.tensorname)+'.t7')) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: 'Python 3.8.10 64-bit (''venv'': venv)' # language: python # name: python3 # --- # + import torch import gym from ppo import Agent # import numpy env = gym.make('CartPole-v1') agent = Agent(env.action_space.n, env.observation_space.shape) agent.load_models('model/actor','model/critic') # + import time import numpy obs = env.reset() total_reward = 0 last_action = 1 counter = 0 env._max_episode_steps = 4000 while True: action, _, _ = agent.choose_action(obs) # print(action) if last_action == action: counter += 1 if counter % 100 == 0: counter = 0 action = 1 else: counter = 0 last_action = action n_obs, reward, done, _ = env.step(action) total_reward += reward env.render() time.sleep(0.1) obs = n_obs if done: print("reward:", total_reward) break # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="6Si1xoyKJfR7" # Importing the dependencies # + id="qxT_JlHfpYW3" executionInfo={"status": "ok", "timestamp": 1636634885230, "user_tz": -330, "elapsed": 1266, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} import pandas as pd import numpy as np from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import accuracy_score # + [markdown] id="pgWcxd8hM0By" # Data collection and Data processing # + id="ksiCftdCMJnr" executionInfo={"status": "ok", "timestamp": 1636634886462, "user_tz": -330, "elapsed": 11, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} sonar_data=pd.read_csv("/content/Copy of sonar data.csv",header=None) # + colab={"base_uri": "https://localhost:8080/", "height": 226} id="8hribBItNQtU" executionInfo={"status": "ok", "timestamp": 1636634887173, "user_tz": -330, "elapsed": 719, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="c595e1b2-3be1-4ba3-9a40-c38b025ee2b8" sonar_data.head() # + colab={"base_uri": "https://localhost:8080/"} id="iki7JXNGNkNg" executionInfo={"status": "ok", "timestamp": 1636621251418, "user_tz": -330, "elapsed": 805, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="80568b40-44af-4c7d-d996-a549e921eaf6" # number of rpows and columns sonar_data.shape # + colab={"base_uri": "https://localhost:8080/", "height": 320} id="RczX4XsBOQ-t" executionInfo={"status": "ok", "timestamp": 1636621284269, "user_tz": -330, "elapsed": 631, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="08561a6a-af23-44f4-9b37-2e3f7c0b65b3" sonar_data.describe() #describe --> statstical measures of the data # + colab={"base_uri": "https://localhost:8080/"} id="eCs-w6DdOu0A" executionInfo={"status": "ok", "timestamp": 1636621432066, "user_tz": -330, "elapsed": 571, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="1ae33900-1679-42e9-e28e-2a83f739e181" sonar_data[60].value_counts() # + [markdown] id="43hVSqmbPn6d" # M-->Mine # R-->Rock # + colab={"base_uri": "https://localhost:8080/", "height": 163} id="OVGyP6gqPSwB" 
executionInfo={"status": "ok", "timestamp": 1636621898377, "user_tz": -330, "elapsed": 587, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="6bbfb14d-68f1-4132-fd8a-4b197baed178" sonar_data.groupby(60).mean() # + id="YmV6FeWxP2vI" executionInfo={"status": "ok", "timestamp": 1636622158977, "user_tz": -330, "elapsed": 550, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} # separating data and Labels X = sonar_data.drop(columns=60,axis=1) Y = sonar_data[60] # + colab={"base_uri": "https://localhost:8080/"} id="in8JsZ1rSEco" executionInfo={"status": "ok", "timestamp": 1636622183671, "user_tz": -330, "elapsed": 565, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="fd14bff4-ebee-48d9-9fd7-a58ef12f0788" print(X,Y) # + [markdown] id="LIp4Yj06SSVb" # Training and Testing data # + id="uvwtOc94SHzi" executionInfo={"status": "ok", "timestamp": 1636622523526, "user_tz": -330, "elapsed": 590, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} X_train,X_test,Y_train,Y_test=train_test_split(X,Y,test_size=0.1,stratify=Y,random_state=1) # + colab={"base_uri": "https://localhost:8080/"} id="61IRn7a9Sjbh" executionInfo={"status": "ok", "timestamp": 1636622512140, "user_tz": -330, "elapsed": 8, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="0bcd7976-36f4-4d79-c5d4-15bda11fd8e1" print(X.shape,X_train.shape,X_test.shape) # + colab={"base_uri": "https://localhost:8080/"} id="xMPjxuqUTWsv" executionInfo={"status": "ok", "timestamp": 1636622590304, "user_tz": -330, "elapsed": 794, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="db345d58-fa99-4f52-b355-1e6483ea0968" print(Y.shape,Y_train.shape,Y_test.shape) # + colab={"base_uri": "https://localhost:8080/"} id="wp0ZQqNPTtsn" executionInfo={"status": "ok", "timestamp": 1636622622485, "user_tz": -330, "elapsed": 597, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="0a6c73a0-30ba-4dbd-da99-dcafaea6f6b9" print(X_train,Y_train) # + [markdown] id="QVyJAfs2UC70" # Model Regression--> Logistic Regression # + id="x5sqdKhlT1Zz" executionInfo={"status": "ok", "timestamp": 1636622764506, "user_tz": -330, "elapsed": 691, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} model=LogisticRegression() # + colab={"base_uri": "https://localhost:8080/"} id="gXJFgjUuUX5j" executionInfo={"status": "ok", "timestamp": 1636622788305, "user_tz": -330, "elapsed": 656, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="ed432781-d52d-48a8-89c0-1b94a36a6081" model.fit(X_train,Y_train) # + [markdown] id="Y2WCbZiyUjRc" # Model Evaluation # + id="9kJBqhJMUd2w" executionInfo={"status": "ok", "timestamp": 1636623274709, "user_tz": -330, "elapsed": 616, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} #accuracy on training data 
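#accuracy_score simply returns the fraction of predictions that match the given labels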
X_train_prediction = model.predict(X_train) training_data_accuracy=accuracy_score(X_train_prediction,Y_train) # + colab={"base_uri": "https://localhost:8080/"} id="p2h4PSpuUsDw" executionInfo={"status": "ok", "timestamp": 1636623277138, "user_tz": -330, "elapsed": 13, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="979eb549-7677-4abf-934b-e0561d990918" print("accuracy on training data :",training_data_accuracy.round(2)) # + id="kdeMhOSPVsj_" executionInfo={"status": "ok", "timestamp": 1636623696205, "user_tz": -330, "elapsed": 592, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} #accuracy of test data X_test_prediction=model.predict(X_test) test_data_accuracy=accuracy_score(X_test_prediction,Y_test) # + colab={"base_uri": "https://localhost:8080/"} id="BK8vC5NPXy7n" executionInfo={"status": "ok", "timestamp": 1636623743532, "user_tz": -330, "elapsed": 645, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="6a3108ef-bd26-4c7e-fc0c-95ecbfb35041" print("accuracy od test data :",test_data_accuracy) # + colab={"base_uri": "https://localhost:8080/"} id="X2dlBg5yYHRr" executionInfo={"status": "ok", "timestamp": 1636623883859, "user_tz": -330, "elapsed": 585, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="fb46aa00-7ef2-445c-c9e3-8dce8bac3fca" # + colab={"base_uri": "https://localhost:8080/"} id="eZI1hfABYpju" executionInfo={"status": "ok", "timestamp": 1636625781129, "user_tz": -330, "elapsed": 795, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="d8aef585-98a2-4298-a6f7-701b292e0d22" input_data=(0.0453,0.0523,0.0843,0.0689,0.1183,0.2583,0.2156,0.3481,0.3337,0.2872,0.4918,0.6552,0.6919,0.7797,0.7464,0.9444,1.0000,0.8874,0.8024,0.7818,0.5212,0.4052,0.3957,0.3914,0.3250,0.3200,0.3271,0.2767,0.4423,0.2028,0.3788,0.2947,0.1984,0.2341,0.1306,0.4182,0.3835,0.1057,0.1840,0.1970,0.1674,0.0583,0.1401,0.1628,0.0621,0.0203,0.0530,0.0742,0.0409,0.0061,0.0125,0.0084,0.0089,0.0048,0.0094,0.0191,0.0140,0.0049,0.0052,0.0044) #chaning the input_data to numpy array input_data_as_numpy_array=np.asarray(input_data) #reshape the np array as we are predicting for one instance input_data_reshaped=input_data_as_numpy_array.reshape(1,-1) prediction=model.predict(input_data_reshaped) print(prediction) if (prediction[0]=='R'): print('The object is a Rock') else: print('The object is a mine') # + colab={"base_uri": "https://localhost:8080/"} id="DjizaIG8eq7y" executionInfo={"status": "ok", "timestamp": 1636625785979, "user_tz": -330, "elapsed": 641, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="228b025c-e9ef-4dfc-8a32-1ec20243ae23" if (prediction[0]=='R'): print('The object is a Rock') else: print('The object is a mine') # + colab={"base_uri": "https://localhost:8080/"} id="Ub4U0AYifnm5" executionInfo={"status": "ok", "timestamp": 1636625814513, "user_tz": -330, "elapsed": 653, "user": {"displayName": "", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s64", "userId": "09496595787393543044"}} outputId="6d4f4d50-fd2f-4c49-f512-1a6920f4d522" prediction # + id="BoBH_JHwgAo2" # 
--- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import math import nltk import string import operator from evaluate import evaluate # # Who will win 2016 election? # ## Using Naive Bayes Model for Sentiment Analysis to Analyze Tweets about 2016 Election # Generally speaking, sentiment analysis aims to determine the attitude of a speaker or a writer with respect to some topic or the overall contextual polarity of a document. # # In this tutorial, we are going to implement a simple Naive Bayes Sentiment Classifier to classify text as being positive or negative. Then we are going to use the classifier we built to do simple analysis on tweets of 2016 election. Hope this will help you gain some insights into the public views of the presidential candidates. (Note this tutorial is just for educational purpose, no political views involved.) # # The tutorial is divided into several parts. Firstly, you will process sentiment labelled data into bag-of-word representation. Secondly, you will learn to build a Naive Bayes Classifier based on the statistical property of text. We will also cover the evaluation of binary classifier. Finally, we will scrape tweets about 2016 election using Twitter search API, apply the classifier, and calculate sentiment score distribution and top words mentioned. # ## Q0: Enviroment Setup # # Before the tutorial starts, it is important to make sure you have all the dataset and python modules that needed. # # Please download the dataset from [here](https://www.dropbox.com/sh/0g3wl2gdmvqmoi4/AABU1i71efGkg5LR_-lQo58sa?dl=0). # # It contains of 100M+ data and python modules we need for this tutorial. The downloaded folder also contains the same Jupyter Notebook for this tutorial. You can also run it to see the result. Note this tutorial cannot be executed without this downloaded folder since it contains python modules and datasets. Please make sure you download the file before trying to run it locally. # # ## Q1: Data Processing # # In the first part of this tutorial: we will do data processing on sentiment labelled data. The dataset we use for training and validation comes from multiple datasets across the web: [Sentiment Labelled Sentences Data Set for Yelp, Amazon, imdb](https://archive.ics.uci.edu/ml/datasets/Sentiment+Labelled+Sentences), [University of Michigan Sentiment Analysis competition on Kaggle](https://inclass.kaggle.com/c/si650winter11), [Twitter Sentiment Analysis Training Corpus Dataset](http://thinknook.com/wp-content/uploads/2012/09/Sentiment-Analysis-Dataset.zip) and manully labelled data. The total number of labelled text is 1,580,857. The total number of text used for validation is 5,000. # # The dataset format is: # # text + \t + score (0 for negative, 1 for positive) # We define following function data_process(training_data_file) to tokenize the raw data into bag-of-word representation. 
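#
# To make that format concrete, the cell below builds two made-up labelled lines in the
# `text<TAB>score` layout described above; the sentences are purely illustrative and are
# not taken from the training corpus.

# +
# hypothetical examples of the "text<TAB>score" format (1 = positive, 0 = negative)
example_labelled_lines = [
    "the battery life is great\t1",
    "worst customer service ever\t0",
]

# each line splits into the raw text and an integer sentiment label
example_pairs = [(line.split("\t")[0], int(line.split("\t")[1])) for line in example_labelled_lines]
example_pairs
# -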
# + def process_line(line): ''' process each line: rules: lower all character, change "'s" to "s", eliminate all other punctuation characters Input: line Output: transformed line ''' line = line.rstrip().lower().replace("'s", "").replace("'", "") for char in string.punctuation: line = line.replace(char, " ") return line def data_process(training_data_file): """ process all labelled data Inputs training_data_file: a file contains all labelled data file name Outputs vocabulary: dictionary, key is token, value is the frequency it appears in the training data sentiment_data: dictionay, key is the processed text, value is the label (0 for negative, 1 for positive) total: total number of valid processed data positive: total number of positive labelled data negative: total number of negative labelled data """ training_data = open(training_data_file) stopwords = set(nltk.corpus.stopwords.words('english')) vocabulary = {} sentiment_data = {} total = 0 positive = 0 negative = 0 # process each training data file for file in training_data: labelled_data_file = open(file.rstrip()) labelled_data = '' for line in labelled_data_file: line = process_line(line) labelled_data += (line + " ") if "\t" in labelled_data: try: # get text and its label text = labelled_data.split("\t")[0] label = int(labelled_data.split('\t')[1]) sentiment_data[text] = label # update labelled data count total += 1 positive += label # tokenize and update term frequency for token in text.split(" "): if len(token) == 0 or token in stopwords: continue if token in vocabulary: vocabulary[token] += 1 else: vocabulary[token] = 1 labelled_data = '' except ValueError: labelled_data = '' continue labelled_data_file.close() training_data.close() negative = total - positive return (vocabulary, sentiment_data, total, positive, negative) # - vocabulary, sentiment_data, total, positive, negative = data_process("data/training_data.txt") print "unique tokens: " + str(len(vocabulary)) print "positive data: " + str(positive) + ", negative data: " + str(negative) # After running this call, We can tell that we have 670483 unique tokans and 1580857 labelled data. Next we are going to build a Naive Bayes classifier based on these processed text. # # unique tokens: 670483 # positive data: 795693, negative data: 793061 # ## Q2: Build a Naive Bayes Sentiment Classifier # # In the main portion of this tutorial, you will learn to build a Naive Bayes Sentiment Classifier. Naive Bayes Claasifier is known for creating simple yet well performing models, especially for text classification. It uses words in the text as features and assumes conditional independence across words. Therefore, the probabilty of the text under a certail class can be estimated by the words in the text. # # Here are steps to build a Naive Bayes Classifier: # # 1. Estimate the prior probability $P(c)$ of each class c ∈ C. $P(c)$ can be obtained by dividing the number of data in class c by the total number of labelled data. # 2. Estimate the probability distribution $P(w|c)$ for all words w and classes c. This can be done by dividing the total term frequency of w in class c by the total number of words in c. However, estimating probability only according to observed data may lead to many zero probabilities, so recall from class, we introduce Laplace Smoothing when estimating $P(w|c) = \frac{tf + α}{C + αD}$, where C is the total number of words in class c, D is the total vocabulary count. # 3. 
Use Bayes rule to estimate $P(c|d)$, the probablity of class c given a text d, we have: # # $P(c|d) = \frac{P(d|c)P(c)}{P(d)}$. # # Assume the words' positions don't matter and assume words are conditional independent (Naive Bayes rule), we have: # # $P(d|c) = P(w_1,w_2,w_3..w_N|c) = P(w_1|c)P(w_2|c)...P(w_N|c)$. # # # # Finally, we can simply classify a text d to be its most likely class label, which is c with the highest probability $P(c|d)$, using Naive Bayes rule, we can have following formula. # # $\underset{c\in C}{\operatorname{argmax}}P(c|d) = \underset{c\in C}{\operatorname{argmax}}\frac{P(d|c)P(c)}{P(d)} # = \underset{c\in C}{\operatorname{argmax}} P(d|c)P(c) # = \underset{c\in C}{\operatorname{argmax}} P(w_1|c)P(w_2|c)...P(w_N|c)P(c)$. # class sentiment_classifier: """ Naive Bayes Sentiment Classifier Attributes: vocabulary: dictionary, key is token, value is the frequency it appears in the training data total: total number of valid processed data positive: total number of positive labelled data negative: total number of negative labelled data vocabulary_pos: dictionary, key is token, value is term frequency it appears in class positive vocabulary_neg: dictionary, key is token, value is term frequency it appears in class negative count_pos: total vocabulary in positive class count_neg: total vocabulary in negative class Inputs for _init_ vocabulary: dictionary, key is token, value is the frequency it appears in the training data sentiment_data: dictionay, key is the processed text, value is the label (0 for negative, 1 for positive) total: total number of valid processed data positive: total number of positive labelled data negative: total number of negative labelled data """ def __init__(self, vocabulary, sentiment_data, total, positive, negative): self.positive = positive self.negative = negative self.total = total self.vocabulary = vocabulary self.vocabulary_pos = {} self.vocabulary_neg = {} self.count_pos = 0 self.count_neg = 0 # get vocabulary and term frequency for each class for text in sentiment_data.keys(): label = sentiment_data[text] if label is 1: for token in text.split(" "): if len(token) == 0: continue self.count_pos += 1 if token in self.vocabulary_pos: self.vocabulary_pos[token] += 1 else: self.vocabulary_pos[token] = 1 else: for token in text.split(" "): if len(token) == 0: continue self.count_neg += 1 if token in self.vocabulary_neg: self.vocabulary_neg[token] += 1 else: self.vocabulary_neg[token] = 1 # get probability p(w|c) for each word and class, using Laplace smooth q = 1 total_vocabulary = len(vocabulary) for token in vocabulary.keys(): tf_pos = 0.0 if token in self.vocabulary_pos: tf_pos = self.vocabulary_pos[token] p_pos = float(tf_pos + q) / (self.count_pos + q * total_vocabulary) self.vocabulary_pos[token] = p_pos tf_neg = 0.0 if token in self.vocabulary_neg: tf_neg = self.vocabulary_neg[token] p_neg = float(tf_neg + q) / (self.count_neg + q * total_vocabulary) self.vocabulary_neg[token] = p_neg def getScore(self, line, dict_pos, dict_neg): """ get sentiment score for a text line Inputs line: text dict_pos: dictionary used to store vocabulary occurred in positive class dict_neg: dictionary used to store vocabulary occurred in negative class Outputs score: sentiment score, defined as log(p(w|pos)) - log(p(d|neg)) if score is positive, means it is a positive sentiment score if score is negative, means it is a negative sentiment score """ line = process_line(line) p_pos = math.log(float(self.positive) / self.total) p_neg = math.log(float(self.negative) 
/ self.total) tokens = line.split(" ") # multiply all probability for words in the text (using log fuction) for token in tokens: if token not in self.vocabulary: continue p_pos += math.log(self.vocabulary_pos[token]) p_neg += math.log(self.vocabulary_neg[token]) # output positive score for positive text if p_neg < p_pos: score = p_pos - p_neg for token in tokens: if token in set(nltk.corpus.stopwords.words('english')) or len(token) == 0: continue if token in dict_pos: dict_pos[token] += 1 else: dict_pos[token] = 1 # output negative score for negative text else: score = p_pos - p_neg for token in tokens: if token in set(nltk.corpus.stopwords.words('english')) or len(token) == 0: continue if token in dict_neg: dict_neg[token] += 1 else: dict_neg[token] = 1 return score classifier = sentiment_classifier(vocabulary, sentiment_data, total, positive, negative) # ## Q3: Evaluate Setiment Classifier # # After building the classifier, it is important to evaluate the performance of our sentiment classifier. To evaluate a binary classfier, we can compute true_positive(TP), false_positive(FP), true_negative(TN) and false negative(FN). These can be arranged into a 2×2 contingency table to clear show the performance of the classifier. # # We have assigned 5,000 labelled text as our validation data and define following function evaluate to calculate true_positive, false_positive, true_negative, false_negative. Due to the code limit, I have included the evaluate function in the evaluate.py attached to this tutorial. ''' Evalute functoin specs: Args: classifier: trained naive bayes classifier Returns: (true_positive, false_positive, true_negative, false_negative) metrics used for evaluation ''' true_positive, false_positive, true_negative, false_negative = evaluate(classifier) print true_positive, false_positive, true_negative, false_negative print "accuracy: " + str((true_negative + true_positive) / float(true_negative+true_positive+false_positive+false_negative)) # Accuracy Evaluation for classifier is shown in the following table: # # | | **positive** | **negative** | # |----------|-------------| # | **true_label** | 1134 |2827| # |**false_label**|310|729| # In addition, precision and recall are the two most widely used metrics when evaluating text classifier. Here I will introduce the concept of precision and recall. # # Precision: the number of true positives divided by the total number of elements labeled as belonging to the positive class. P = TP / (TP + FP) # # Recall: the number of true positives divided by the total number of elements that actually belong to the positive class. R = TP / (TP + FN) # # Precision and Recall for classifier: # # | ** Precision ** | ** Recall ** | # |----------|-------------| # |0.7853|0.6087| # We find that the precision is higher which means most items labeled as positive do indeed belong to positive. However, recall is low which means that we will not be able to correctly label all the positive data. This might hurt the overall performance of the classifier as we will see a large proportion of false_negative. In fact, there is an inverse relationship between precision and recall, where it is possible to increase one at the cost of reducing the other. So when designing a classfier, one must take the trade-off between precision and recall into consideration. # ## Q4: Scrape Tweets using Twitter Search API # # # Next, we are going to scrape some tweets about both and on Twitter for sentiment analysis. 
# ## Q4: Scrape Tweets Using the Twitter Search API
#
# Next, we are going to scrape some tweets about both candidates on Twitter for sentiment analysis. We will use the search API in the Twython library to search for tweets associated with a given hashtag. Please make sure you have installed Twython before trying to run the code. If you have not, simply type the following command in your terminal:
#
# pip install Twython
#
# To make sure the code can run successfully, the installation is included in the following cell. If needed, run the cell to execute the installation; if it causes problems, simply comment it out.

# +
'''
This code installs Twython on your computer.
You can also install it manually with 'pip install Twython'.
If this goes wrong, you can comment out this code.
'''
import pip

def install(package):
    pip.main(['install', package])

install('twython')
# -

# Note that the Twitter search API is rate limited (roughly 180 search calls per 15-minute window). So I had to call this function multiple times on different dates to get enough data, which takes a long time to run. After calling this function, I also manually removed the duplicate tweets from the files (a small sketch of doing this step programmatically is given at the end of this section). The unique tweets are stored in trump_unique_tweets.txt and hillary_unique_tweets.txt separately for evaluation.
#
# The following function getTwitter shows how to scrape tweets for a specific hashtag. It is only for demonstration; the full scraping program takes over an hour to run because of the rate limit.

from twython import Twython
import time

def getTwitter(twitter, hashtag, another_candidate, file):
    '''
    Search for hashtag on Twitter and write the tweets into file.
    Input:
        twitter: Twitter API object
        hashtag: tag to search for
        file: file to write the tweets to
        another_candidate: we only want tweets that mention a single candidate;
                           if the other candidate appears in the tweet, we discard it
    Output:
        total: total count of tweets written to the file
    '''
    tweets = set()
    MAX_ATTEMPTS = 10
    f = open(file, 'w')
    total = 0
    for i in range(0, MAX_ATTEMPTS):
        if 0 == i:
            results = twitter.search(q=hashtag, count='100', lang='en')
        else:
            results = twitter.search(q=hashtag, include_entities='true', max_id=next_max_id)
        for result in results['statuses']:
            tweet = process_line(result['text'])
            # drop tweets that mention both candidates
            if tweet not in tweets and another_candidate not in tweet:
                tweets.add(tweet)
                f.write(tweet.encode("utf-8"))
                total += 1
        try:
            next_results_url_params = results['search_metadata']['next_results']
            next_max_id = next_results_url_params.split('max_id=')[1].split('&')[0]
        except:
            break
        time.sleep(0.5)
    f.close()
    return total

# +
# Fill in your own Twitter API credentials here; do not distribute real keys.
TWITTER_APP_KEY = ''
TWITTER_APP_KEY_SECRET = ''
TWITTER_ACCESS_TOKEN = ''
TWITTER_ACCESS_TOKEN_SECRET = ''

twitter = Twython(app_key=TWITTER_APP_KEY,
                  app_secret=TWITTER_APP_KEY_SECRET,
                  oauth_token=TWITTER_ACCESS_TOKEN,
                  oauth_token_secret=TWITTER_ACCESS_TOKEN_SECRET)

print("number of trump tweets: " + str(getTwitter(twitter, "#donaldtrump", "hillary", "trump_tweets.txt")))
print("number of hillary tweets: " + str(getTwitter(twitter, "#hillaryclinton", "trump", "hillary_tweets.txt")))
# -
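# As mentioned above, the duplicate tweets were removed manually before evaluation. The cell below is a minimal sketch of doing the same step programmatically; it assumes one tweet per line in the scraped file, and the helper deduplicate_tweets is illustrative rather than part of the original scraping code. The file names match the ones used in this tutorial.

# +
# Illustrative helper (not from the original tutorial code):
# keep only the first occurrence of each line in a scraped tweet file.
def deduplicate_tweets(in_path, out_path):
    seen = set()
    with open(in_path) as f_in, open(out_path, 'w') as f_out:
        for line in f_in:
            tweet = line.strip()
            if tweet and tweet not in seen:
                seen.add(tweet)
                f_out.write(tweet + "\n")
    return len(seen)

# Example usage with the files produced above:
# deduplicate_tweets('trump_tweets.txt', 'trump_unique_tweets.txt')
# deduplicate_tweets('hillary_tweets.txt', 'hillary_unique_tweets.txt')
# -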
# ## Q5: Sentiment Analysis on Election Tweets
#
# Finally, here comes the most fun part. We will apply the naive Bayes sentiment classifier to real tweets and see how the public feels about the two presidential candidates. While computing the sentiment score for the tweets, I also record the top words that appear in positive and negative tweets. In total, I gathered 256 unique tweets for one candidate and 379 unique tweets for the other.

# +
import operator
import matplotlib
import matplotlib.pyplot as plt

def evaluateElection(classifier, filename):
    """
    Process all election data and output sentiment scores.

    Inputs
        classifier: sentiment classifier
        filename: file containing tweets
    Outputs
        none. Prints the distribution of scores for the topic.
    """
    f = open(filename)
    dic_neg = {}
    dic_pos = {}
    result = []
    for line in f:
        score = classifier.getScore(line, dic_pos, dic_neg)
        result.append(score)
    f.close()
    sort_top_negative_word = sorted(dic_neg.items(), key=operator.itemgetter(1), reverse=True)[0:20]
    n, bins, patches = plt.hist(result, 50, density=True, facecolor='green', alpha=0.75)
    plt.title("score distribution for " + filename)
    plt.show()
    print("Top negative words mentioned in " + filename + ":")
    print(sort_top_negative_word)
# -

evaluateElection(classifier, 'data/hillary_unique_tweets.txt')
evaluateElection(classifier, 'data/trump_unique_tweets.txt')

# Next, we plot the sentiment scores for the tweets. The histogram shows the distribution of scores over all tweets. The distribution is roughly normal: most people are neutral towards the candidates, while there are some strong supporters and strong opponents.
#
# The top words in the negative tweets reflect people's complaints about each presidential candidate. For one candidate, the top words (apart from some http tokens) include "sexual"; for the other, they include "wikileaks". This shows that sentiment analysis is also an effective way to mine the hot words around a topic.
#
# However, the result is not always reliable. By manually inspecting the output, I found that the classifier wrongly labelled many negative examples as positive. The likely reason is that the training data has little to do with politics: most of it comes from product reviews and tweets about daily life, and only the manually labelled portion is about politics.

# ## Final Take-Away Notes for this Tutorial

# ### 1. The training dataset is very important.
# I started with a single dataset of around 1,000 labelled examples, and the accuracy was only 0.6 on the validation data. When I added more training data, both the accuracy and the robustness of the classifier improved greatly. Alignment between training data and validation data also helps: if you are building a sentiment classifier for Twitter, a Twitter dataset helps the most, while Yelp data might not be that helpful.

# ### 2. The power of the naive Bayes model
# The naive Bayes model assumes conditional independence across words. This assumption looks oversimplified and unreasonable at first glance, yet experiments show that this simple method works remarkably well, especially for text classification.
#
# Using only labelled data and per-word probabilities, we can build a naive Bayes model without much computational cost. Easy to use and it performs well! So next time you build a text classifier, do not forget about the naive Bayes model.

# ### 3. Other sentiment classification methods
# Besides naive Bayes, there are other popular sentiment classification methods. Many people use SVMs for sentiment classification; they often perform better than naive Bayes, but the computational cost is definitely higher.

# ### Hope you enjoy the tutorial, and welcome any suggestions and further discussion. Thanks!
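# ### Appendix: a minimal SVM baseline (sketch)
# Take-away note 3 mentions SVMs as a popular alternative. The cell below is a small, self-contained sketch of a TF-IDF plus linear SVM text classifier built with scikit-learn. It is not part of this tutorial's pipeline, and the tiny texts/labels lists are placeholder examples rather than the tutorial's training data.

# +
# Hedged sketch of an SVM-based sentiment classifier using scikit-learn.
# The training lists below are illustrative placeholders only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

texts = ["great movie loved it", "terrible service never again",
         "what a wonderful day", "this product is awful"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

svm_clf = make_pipeline(TfidfVectorizer(), LinearSVC())
svm_clf.fit(texts, labels)

print(svm_clf.predict(["awful terrible experience", "wonderful great fun"]))
# likely output: [0 1]
# -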
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- spec_no_data = { "$schema": "https://vega.github.io/schema/vega-lite/v5.json", "data": {"name": "data"}, "mark": "bar", "encoding": { "x": {"aggregate": "sum", "field": "yield"}, "y": {"field": "variety"}, "color": {"field": "site"} } } from vega.widget import VegaWidget import requests import json req = requests.get("https://raw.githubusercontent.com/vega/vega/master/docs/data/barley.json") values = json.loads(req.text) #data widget = VegaWidget(spec=spec_no_data) display(widget) # %time widget.update('data', insert=values) import pandas as pd URL = "https://forge.scilab.org/index.php/p/rdataset/source/file/368b19abcb4292c56e4f21079f750eb76b325907/csv/lattice/barley.csv" df = pd.read_csv(URL) widget = VegaWidget(spec=spec_no_data) display(widget) # %time widget.update("data", insert=df) import pandas as pd URL = "https://forge.scilab.org/index.php/p/rdataset/source/file/368b19abcb4292c56e4f21079f750eb76b325907/csv/lattice/barley.csv" df = pd.read_csv(URL) widget = VegaWidget(spec=spec_no_data) widget.compression = 'zlib' display(widget) # %time widget.update('data', insert=df) import pandas as pd from ipytablewidgets import LZ4Compressor URL = "https://forge.scilab.org/index.php/p/rdataset/source/file/368b19abcb4292c56e4f21079f750eb76b325907/csv/lattice/barley.csv" df = pd.read_csv(URL) widget = VegaWidget(spec=spec_no_data) widget.compression = LZ4Compressor(2) display(widget) # %time widget.update('data', insert=df) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt import matplotlib matplotlib.pyplot.style.use('seaborn') matplotlib.rcParams['figure.figsize'] = (15, 5) # %matplotlib inline import math import scipy from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # + def z_score(v, mean, std): return (v - mean) / float(std) def get_from_ztable(v): return 1 - scipy.stats.norm.sf(v) def get_proportion(v1, v2, mean, std): return get_from_ztable(z_score(v2, mean, std)) - get_from_ztable(z_score(v1, mean, std)) # - mean=170.4 std=10 z = z_score(170.5, mean=mean, std=std) z get_from_ztable(z) get_proportion(170.5, 180.3, mean=mean, std=std) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="rPwHmLbhkxMe" # # Lab-5(Dissimilarity Matrix for Binary Attributes) # + id="yrEs4wgQklW1" import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sbs # + colab={"base_uri": "https://localhost:8080/", "height": 224} id="nWHD6rQDl4mI" outputId="2289996c-57f7-47d2-cf22-fd2e97dd61d8" url="https://raw.githubusercontent.com/Anasuya-Sahoo/DMDW-Lab/main/student-mat.csv" df=pd.read_csv(url) df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="hbS-hoKCmGTE" outputId="f36c9ca8-736c-4e90-eec8-19f253f2b63a" #extract the dataset from the original dataset 
dfs=df[['schoolsup','famsup','paid','activities','nursery','romantic','internet','higher']] dfs.head() # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="B3826yQJnfic" outputId="2f72a01c-9bb8-4360-c03d-fd29812425c3" #convert binary into 0,1 format dfs=dfs.replace('no',0) dfs=dfs.replace(to_replace='yes',value=1) dfs.head() # + colab={"base_uri": "https://localhost:8080/"} id="SZ3Dg272oO70" outputId="5582e10f-fd52-4428-a9c2-1807e5810d96" # create obj and find the distance or the dissimilarity matrix using scipy n=np.array(dfs[['schoolsup','famsup']]) n=n.reshape(-1,2)# -1 => numpy will calculate whatever will be the no. and 2 => n.shape # + colab={"base_uri": "https://localhost:8080/"} id="7hS_Z6xLpAA8" outputId="ce200b6e-e868-4aac-a541-ace13d159e9d" m=np.array(dfs[['romantic','internet']]) m=m.reshape(-1,2) m.shape # + id="vLGzxNcjpKBx" from scipy.spatial import distance # + colab={"base_uri": "https://localhost:8080/"} id="cYoJiS2BpkcL" outputId="70a869c0-bc32-4371-b76c-23cbba0e30e6" dist_matrix=distance.cdist(n,m) dist_matrix.shape # + colab={"base_uri": "https://localhost:8080/"} id="_mYSZjzup9Df" outputId="56172d15-a885-4120-9647-182c924cb55c" print(dist_matrix) # + colab={"base_uri": "https://localhost:8080/", "height": 278} id="EtyuHTMxqfVe" outputId="892b1256-2105-47b4-eb02-14c3c64a3ae7" sbs.heatmap(dist_matrix) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 131} id="-BCZvRYbqrMI" outputId="cca1f26e-2f77-4796-a26d-40bd68e61d4d" #numerical attribute #extract df.head(2) # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="mG9rleUwrW4C" outputId="d3db3583-f51f-4bf8-dde2-630f73ae6c6c" numeric=df[['age','Medu','Fedu','studytime','failures']] numeric.head() # + colab={"base_uri": "https://localhost:8080/"} id="mI0j8-Hbrwu1" outputId="c946d204-aedf-4cee-ca28-348489542ed6" num1=np.array(numeric[['age','failures']]) num1.reshape(-1,2) num1.shape # + colab={"base_uri": "https://localhost:8080/"} id="9jHvN3sdsK9x" outputId="866555c4-777c-4e2c-cc17-38db9affb134" num2=np.array(numeric[['Fedu','Medu']]) num2.reshape(-1,2) num2.shape # + colab={"base_uri": "https://localhost:8080/"} id="qqJnJa_SsW7G" outputId="9b572f08-8d50-476a-ac95-66d149735bce" #Euclidean distance dist_matrix=distance.cdist(num1,num2) print(dist_matrix) # + colab={"base_uri": "https://localhost:8080/", "height": 278} id="ehpalU-esokb" outputId="cbb4d306-f90e-467d-d168-ed24234fb118" sbs.heatmap(dist_matrix) plt.show() # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="eLjkKQxzsumI" outputId="385928a9-de6e-42a0-b367-f43ead86ef4e" #Nominal Attributes(name or chars or string) nomi=df[['Mjob','Fjob','reason','guardian']] nomi.head() # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="7_sX_woDtdU8" outputId="832c1912-55a0-4a5a-e710-ff2ad111983e" nomi=nomi.replace('at_home','home') nomi.head() # + id="YoFh1hn3t22j" # 1st convert into categorical/ ordinal nomi=nomi.astype('category') # + id="so4t45IBuvRo" # labelencoder gives a unique and normalised nalue like from 0,1,2 etc from sklearn.preprocessing import LabelEncoder lb=LabelEncoder() # + id="fZCm_GJZvAzV" #fit the labelencoder and return the label value nomi['guardian']=lb.fit_transform(nomi['guardian']) nomi['Mjob']=lb.fit_transform(nomi['Mjob']) nomi['Fjob']=lb.fit_transform(nomi['Fjob']) nomi['reason']=lb.fit_transform(nomi['reason']) # + colab={"base_uri": "https://localhost:8080/", "height": 204} id="8fHpe0QTvofj" outputId="84c6966f-d8b6-431e-d4ce-15133602ca45" nomi.head() # + 
colab={"base_uri": "https://localhost:8080/"} id="YlFeXeEtwAYr" outputId="eef2dd01-9cec-40ff-ee94-50fb6e623e89" nom1=np.array(nomi) nom1.reshape(-1,2) nom1.shape # + colab={"base_uri": "https://localhost:8080/"} id="w9jIzQEuwoxr" outputId="36fa4db0-3899-45ae-8e5f-8518ffcf804f" nom2=np.array(nomi) nom2.reshape(-1,2) nom2.shape # + colab={"base_uri": "https://localhost:8080/"} id="0XCWXrJZwuHA" outputId="91b6ac56-2ca1-42d0-d3e6-ed446269afae" dist_matrix2=distance.cdist(nom1,nom2) dist_matrix2.shape # + colab={"base_uri": "https://localhost:8080/"} id="f7a4uH_wxltG" outputId="abc7ad0e-bf97-4458-a102-d92d01d216aa" print(dist_matrix2) # + colab={"base_uri": "https://localhost:8080/", "height": 278} id="rBFCBPbxxutI" outputId="98da7018-1794-45ed-857d-43ad71955988" sbs.heatmap(dist_matrix2) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ## Wave equation # + raw_mimetype="text/latex" active="" # Using our framework, we want to infer the constant c from the Wave Equation. The Wave Equation is given by # # \begin{align*} # \frac{\partial^2 u}{\partial t^2} = c\nabla^2u, # \end{align*} # # where $u = u(x_1,x_2, \dotsc, x_n, t)$ and $c>0$ is some constant \phantomsection\label{\detokenize{index:id17}}{\hyperref[\detokenize{index:lamoureux2006mathematics}]{\sphinxcrossref{{[}17{]}}}}. In one spatial dimension it boils down to: # # \begin{align} \label{4} # \frac{\partial^2 u}{\partial t^2} - c\frac{\partial^2 u}{\partial x^2} = 0. # \end{align} # # We generate the data from a solution of the equation (\ref{4}) corresponding to $c=1$ and get an estimation of $c = 1.0003$. # - # #### Problem Setup # + raw_mimetype="text/latex" active="" # \begin{align*} # u_{tt} - c u_{xx} = 0 # \end{align*} # - # The general solution is given by: # $u(x,t) = F(x-ct) + G(x+ct)$ with F, G some functions. # # Take $F(x) = x^2$ and $G(x) = \sin(x)$ and $c=1$. # # Thus: $u(x,t) = (x-t)^2 + \sin(x + t)$. # # Set $f = 0$. # # Consider $u$ to be a Gaussian process: # # $u \sim \mathcal{GP}(0, k_{uu}(x_i, x_j; \tilde{\theta}))$ with the hyperparameters $\tilde{\theta} = \{\theta, l_x, l_t\}$. # # And the linear operator: # # $\mathcal{L}_x^c = \frac{d^2}{dt^2} \cdot - c \frac{d^2}{dx^2} \cdot$ # # so that # # $\mathcal{L}_x^c u = f$ # # Problem at hand: Estimate $c$ (should be $c = 1$ in the end). # # # #### Step 1: Simulate data # + nbsphinx="hidden" import time import numpy as np import sympy as sp from scipy.optimize import minimize import matplotlib.pyplot as plt import warnings # - # $x \in [0, 1]^n, \; t \in [0,1]^n$ # + def get_simulated_data(n = 20): t = np.random.rand(n) x = np.random.rand(n) y_u = np.multiply(x-t, x-t) + np.sin(x+t) y_f = 0*x return(x, t, y_u, y_f) (x, t, y_u, y_f) = get_simulated_data() # - # #### Step 2: Evaluate kernels # # 1) $k_{uu}(y_i, y_j; \tilde{\theta}) = \theta exp(-\frac{1}{2l_x}(x_i-x_j)^2 - \frac{1}{2l_t}(t_i-t_j)^2)$, where $y_i = (x_i, t_i)$, $y_j = (x_j, t_j)$. 
# + nbsphinx="hidden" x_i, x_j, t_i, t_j, theta, l_x, l_t, c = sp.symbols('x_i x_j t_i t_j theta l_x l_t c') kuu_sym = theta*sp.exp(-1/(2*l_x)*((x_i - x_j)**2) - 1/(2*l_t)*((t_i - t_j)**2)) kuu_fn = sp.lambdify((x_i, x_j, t_i, t_j, theta, l_x, l_t), kuu_sym, "numpy") def kuu(x, t, theta, l_x, l_t): k = np.zeros((x.size, x.size)) for i in range(x.size): for j in range(x.size): k[i,j] = kuu_fn(x[i], x[j], t[i], t[j], theta, l_x, l_t) return k # - # 2) $k_{ff}(y_i,y_j; \tilde{\theta}, c) # = \mathcal{L}_{y_i}^c \mathcal{L}_{y_j}^c k_{uu}(y_i, y_j; \tilde{\theta}) \\ # = \frac{d^4}{dt_i^2 dt_j^2}k_{uu} - c\frac{d^4}{dt_i^2 dx_j^2}k_{uu} - c\frac{d^4}{dx_i^2 dt_j^2}k_{uu} + c^2\frac{d^4}{dx_i^2 dx_j^2}k_{uu}$ # + nbsphinx="hidden" kff_sym = sp.diff(kuu_sym, t_i, t_i, t_j, t_j) \ - c*sp.diff(kuu_sym, t_i, t_i, x_j, x_j) \ - c*sp.diff(kuu_sym, x_i, x_i, t_j, t_j) \ + c**2*sp.diff(kuu_sym, x_i, x_i, x_j, x_j) kff_fn = sp.lambdify((x_i, x_j, t_i, t_j, theta, l_x, l_t, c), kff_sym, "numpy") def kff(x, t, theta, l_x, l_t, c): k = np.zeros((x.size, x.size)) for i in range(x.size): for j in range(x.size): k[i,j] = kff_fn(x[i], x[j], t[i], t[j], theta, l_x, l_t, c) return k # - # 3) $k_{fu}(y_i,y_j;\tilde{\theta}, c) # = \mathcal{L}_{\tilde{x}_i}^c k_{uu}(y_i, y_j; \tilde{\theta}) # = \frac{d^2}{dt_i^2}k_{uu} - c\frac{d^2}{dx_i^2}k_{uu}$ # + nbsphinx="hidden" kfu_sym = sp.diff(kuu_sym, t_i, t_i) - c*sp.diff(kuu_sym, x_i, x_i) kfu_fn = sp.lambdify((x_i, x_j, t_i, t_j, theta, l_x, l_t, c), kfu_sym, "numpy") def kfu(x, t, theta, l_x, l_t, c): k = np.zeros((x.size, x.size)) for i in range(x.size): for j in range(x.size): k[i,j] = kfu_fn(x[i], x[j], t[i], t[j], theta, l_x, l_t, c) return k # - # 4) $k_{uf}(y_i, y_j; \tilde{\theta}, c)$ is given by the transpose of $k_{fu}(y_i, y_j; \tilde{\theta}, c)$. 
# + nbsphinx="hidden" def kuf(x, t, theta, l_x, l_t, c): return kfu(x, t, theta, l_x, l_t, c).T # - # #### Steps 3 and 4: Compute NLML and optimize the hyperparameters # + nbsphinx="hidden" def nlml(params, x, t, y1, y2, s): params = np.exp(params) K = np.block([ [kuu(x, t, params[0], params[1], params[2]) + s*np.identity(x.size), kuf(x, t, params[0], params[1], params[2], params[3])], [kfu(x, t, params[0], params[1], params[2], params[3]), kff(x, t, params[0], params[1], params[2], params[3]) + s*np.identity(x.size)] ]) y = np.concatenate((y1, y2)) val = 0.5*(np.log(abs(np.linalg.det(K))) + np.mat(y) * np.linalg.inv(K) * np.mat(y).T) return val.item(0) # + nbsphinx="hidden" def minimize_restarts(x,t,y_u,y_f,n=10): nlml_wp = lambda params: nlml(params, x, t, y_u, y_f, 1e-7) all_results = [] for it in range(0,n): all_results.append(minimize(nlml_wp, np.random.rand(4), method="Nelder-Mead")) filtered_results = [m for m in all_results if 0==m.status] return min(filtered_results, key = lambda x: x.fun) # - m = minimize_restarts(x, t, y_u, y_f, 5) np.exp(m.x[3]) # This is the optimized value for our parameter c # #### Step 5: Plotting the behavior for varied parameters # The logarithms of the optimal hyperparameters are given by (arranged in $[\theta, l_x, l_t, c]$): m.x # We want to plot the behavior of the nlml-function around the minimizer: # + nbsphinx="hidden" lin0 = np.linspace(5, 9, 200) # Set to 200 lin1 = np.linspace(0, 5, 200) lin3 = np.linspace(0, 0.1, 200) res0 = [nlml((q, m.x[1], m.x[2], m.x[3]), x, t, y_u, y_f, 1e-7) for q in lin0] res1 = [nlml((m.x[0], q, m.x[2], m.x[3]), x, t, y_u, y_f, 1e-7) for q in lin1] res2 = [nlml((m.x[0], m.x[1], q, m.x[3]), x, t, y_u, y_f, 1e-7) for q in lin1] res3 = [nlml((m.x[0], m.x[1], m.x[2], q), x, t, y_u, y_f, 1e-7) for q in lin3] def show_1(lin0, lin1, lin3, res0, res1, res2, res3): f, (ax1, ax2) = plt.subplots(ncols=2, nrows=2, figsize=(13,7)) f.suptitle("Local behavior of the nlml around the optimum") ax1[0].plot(lin0, res0) ax1[0].set(xlabel= r"Value of $\theta$", ylabel= "nlml") ax1[1].plot(lin1, res1) ax1[1].set(xlabel= r"Value of $l_x$", ylabel= "nlml") ax2[0].plot(lin1, res2) ax2[0].set(xlabel= r"Value of $l_t$", ylabel= "nlml") ax2[1].plot(lin3, res3) ax2[1].set(xlabel= r"Value of c", ylabel= "nlml") plt.show() # - show_1(lin0, lin1, lin3, res0, res1, res2, res3); # + nbsphinx="hidden" lin = np.linspace(0, 10, 50) res = [nlml((q, m.x[1], m.x[2], m.x[3]), x, t, y_u, y_f, 1e-7) for q in lin] plt.plot(lin, res); # + nbsphinx="hidden" lin = np.linspace(0, 10, 50) res = [nlml((m.x[0], q, m.x[2], m.x[3]), x, t, y_u, y_f, 1e-7) for q in lin] plt.plot(lin, res); # + nbsphinx="hidden" lin = np.linspace(0, 10, 50) res = [nlml((m.x[0], m.x[1], q, m.x[3]), x, t, y_u, y_f, 1e-7) for q in lin] plt.plot(lin, res); # + nbsphinx="hidden" lin = np.linspace(-1, 1, 50) res = [nlml((m.x[0], m.x[1], m.x[2], q), x, t, y_u, y_f, 1e-7) for q in lin] plt.plot(lin, res); # - # #### Step 6: Analysis of the error # In this section we want to analyze the error of our algorithm using two different ways and plot its time complexity. 
# + nbsphinx="hidden" res = np.zeros((5,25)) timing = np.zeros((5,25)) # Needed for L2-Norm-calculation: in columns 25 vectors for five runs X = np.zeros((25, 25, 5)) T = X warnings.filterwarnings("ignore") for k in range(5): for n in range(25): start_time = time.time() (x, t, y_u, y_f) = get_simulated_data(n) # Storing the x and t-values for j in range(n): X[j][n][k] = x[j] T[j][n][k] = t[j] m = minimize(nlml, np.random.rand(4), args=(x, t, y_u, y_f, 1e-7), method="Nelder-Mead") res[k][n] = np.exp(m.x[3]) timing[k][n] = time.time() - start_time # - # **1. Plotting the error in our estimate for c:** # The error is given by $| c_{estimate} - c_{true} |$. # + nbsphinx="hidden" lin = np.linspace(8, res.shape[1] - 1, res.shape[1] - 8) ones = np.ones(res.shape[1]) est = np.repeat(0.043, len(lin)) def show_2(lin, ones, est): f, (ax1, ax2) = plt.subplots(ncols=2, nrows=2, figsize=(13,7)) to_del = np.linspace(0, 7, 8) a0 = np.delete(np.abs(res[0,:] - ones), to_del) ax1[0].plot(lin, a0) ax1[0].plot(lin, est, linestyle='dashed', color='green') ax1[0].set(xlabel= r"Number of data points", ylabel= "Error") a1 = np.delete(np.abs(res[1,:] - ones), to_del) ax1[1].plot(lin, a1) ax1[1].plot(lin, est, linestyle='dashed', color='green') ax1[1].set(xlabel= r"Number of data points", ylabel= "Error") a2 = np.delete(np.abs(res[2,:] - ones), to_del) ax2[0].plot(lin, a2) ax2[0].plot(lin, est, linestyle='dashed', color='green') ax2[0].set(xlabel= r"Number of data points", ylabel= "Error") a3 = np.delete(np.abs(res[3,:] - ones), to_del) ax2[1].plot(lin, a3) ax2[1].plot(lin, est, linestyle='dashed', color='green') ax2[1].set(xlabel= r"Number of data points", ylabel= "Error") plt.show() # + nbsphinx="hidden" show_2(lin, ones, est) # - # We ran the algorithm five times and plotted the respective outcomes in different colors: # + nbsphinx="hidden" lin = np.linspace(8, res.shape[1] - 1, res.shape[1] - 8) def show_3(lin, ones, res): plt.figure(figsize=(5,3)) for i in range(res.shape[0]): to_del = np.linspace(0, 7, 8) a_i = np.delete(np.abs(res[i,:] - ones), to_del) plt.plot(lin, a_i) plt.suptitle('Error in our estimate for c') plt.ylabel('Error') plt.xlabel('Number of data points') est1 = np.repeat(0.041, len(lin)) plt.plot(lin, est1, color='blue', linestyle='dashed') plt.show(); # - show_3(lin, ones, res) # We see that for n sufficiently large (in this case $n \geq 10$), we can assume the error to be bounded by 0.041. # **2. Plotting the error between the solution and the approximative solution:** # + raw_mimetype="text/latex" active="" # Another approach of plotting the error is by calculating the difference between the approximative solution and the true solution. # # That is: Let $\tilde{c}$ be the parameter, resulting from our algorithm. Set $\Omega := \{(x_i, t_i) \; \vert \; x_i \in x, t_i \in t\} \subseteq [0,1] \times [0,1]$. # # Then we can calculate the solution of the PDE # \begin{align} \label{sol} # \frac{d^2}{dt^2}\tilde{u}(x,t) - \tilde{c}\frac{d^2}{dx^2}\tilde{u}(x,t) = 0. # \end{align} # # and set the error to $\lVert \tilde{u}(x,t) - u(x,t) \rVert_{\Omega}$. The norm can be chosen freely. # # In our case, finding the solution to a given $\tilde{c}$ is not difficult. 
It is given by # \begin{align}\label{sol2} # \tilde{u}(x,t) = u(x,\sqrt{\tilde{c}}t) = (x-\sqrt{\tilde{c}}t)^2 + \sin(x+\sqrt{\tilde{c}}t) # \end{align} # # We thus get: # \begin{align*} # \lVert \tilde{u}(x,t) - u(x,t) \rVert_{\Omega} = \lVert (x-\sqrt{\tilde{c}}t)^2 + \sin(x+\sqrt{\tilde{c}}t) - (x-t)^2 - \sin(x+t) \rVert_{\Omega} # \end{align*} # # With the $L^2$-norm, this is # \begin{align*} # (\sum_{(x_i,t_i) \in \Omega} \vert (x_i-\sqrt{\tilde{c}}t_i)^2 + \sin(x_i+\sqrt{\tilde{c}}t_i) - (x_i-t_i)^2 - \sin(x_i+t_i) \vert^2 )^{1/2} # \end{align*} # + raw_mimetype="text/latex" active="" # \textit{Short proof} of $(\ref{sol2})$: # # We assume $\tilde{c} \geq 0$ and want to find some $\alpha \in \mathbb{R}$ such that # \begin{align*} # \frac{d^2}{dt^2}\tilde{u}(x,\alpha t) - \tilde{c}\frac{d^2}{dx^2}\tilde{u}(x,\alpha t) = 0. # \end{align*} # # By setting $\alpha = \sqrt{\tilde{c}}$ we have: # \begin{align*} # \frac{d^2}{dt^2}\tilde{u}(x,\alpha t) - \tilde{c}\frac{d^2}{dx^2}\tilde{u}(x,\alpha t) &= \alpha^2 [\frac{d^2}{dt^2}u(x,t)](x, \alpha t) - \tilde{c}[\frac{d^2}{dx^2}u(x,t)](x, \alpha t) \\ &= \tilde{c} \left(\frac{d^2}{dt^2}u(x,t) - \frac{d^2}{dx^2}u(x,t) \right)(x, \alpha t) \stackrel{(\ref{sol})}{=} 0. # \end{align*} # # + nbsphinx="hidden" lin = np.linspace(8, res.shape[1] - 1, res.shape[1] - 8) ones = np.ones(res.shape[1]) diff = np.ndarray(res.shape[1]) def show_4(lin, ones, res, diff): plt.figure(figsize=(5,3)) to_del = np.linspace(0, 7, 8) for i in range(res.shape[0]): for j in range(res.shape[1]): diff[j] = np.linalg.norm((X[:,j,i] - np.sqrt(res[i,j])*T[:,j,i])**2 + \ np.sin(X[:,j,i]+np.sqrt(res[i,j])*T[:,j,i])-(X[:,j,i]-T[:,j,i])**2 - \ np.sin(X[:,j,i]+T[:,j,i])) diff_i = np.delete(diff, to_del) plt.suptitle('$L^2$-error in our estimate for c') plt.plot(lin, diff_i) plt.ylabel('Error') plt.xlabel('Number of data points') est = np.repeat(0.015, len(lin)) plt.plot(lin, est, color='blue', linestyle='dashed') plt.show() # - show_4(lin, ones, res, diff) # The $L^2$-error is in our case bounded by 0.015 for $n \geq 10$. # **3. Plotting the execution time:** # + nbsphinx="hidden" lin = np.linspace(1, timing.shape[1], timing.shape[1]) for i in range(timing.shape[0]): plt.plot(lin, timing[i,:]) plt.ylabel('Execution time in seconds') plt.xlabel('Number of data points') plt.show() # + nbsphinx="hidden" lin = np.linspace(1, timing.shape[1], timing.shape[1]) def show_5(lin, timing): plt.figure(figsize=(5,3)) for i in range(timing.shape[0]): plt.suptitle('Execution time of our algorithm') plt.plot(lin, timing[i,:]) plt.ylabel('Seconds') plt.xlabel('Number of data points') est = lin**(1.33) plt.plot(lin, est, color='blue', linestyle='dashed') plt.show() # - show_5(lin, timing) # Curiously, the time complexity seems to be around $\mathcal{O}(n^{4/3})$ (blue-dashed line). # # Assuming an equal amount of function evaluations in the Nelder-Mead algorithm for different values of n, # we would have been expecting a time complexity of $\mathcal{O}(n^3)$, due to the computation of the inverse of an $n\times n$-matrix in every evaluation of $\textit{nlml}$. This could probably be seen with larger values of n. 
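# As a quick sanity check on the closed-form solution $\tilde{u}(x,t) = (x-\sqrt{\tilde{c}}t)^2 + \sin(x+\sqrt{\tilde{c}}t)$ used above, the short cell below verifies symbolically that it satisfies $\tilde{u}_{tt} - \tilde{c}\,\tilde{u}_{xx} = 0$. This is only an illustrative check and not part of the original analysis; the symbols x_s, t_s, c_s are local names chosen to avoid clashing with the data arrays x and t defined earlier.

# +
# Illustrative symbolic check of the solution form used in the error analysis.
import sympy as sp

x_s, t_s, c_s = sp.symbols('x t c', positive=True)
u_tilde = (x_s - sp.sqrt(c_s)*t_s)**2 + sp.sin(x_s + sp.sqrt(c_s)*t_s)
residual = sp.diff(u_tilde, t_s, 2) - c_s*sp.diff(u_tilde, x_s, 2)
print(sp.simplify(residual))  # prints 0, confirming u_tt - c*u_xx = 0
# -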
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np df = pd.read_csv("C:/Users/avakh/Desktop/sml_4_data/data/countries-aggregated_csv.csv") df.head() df1=df.loc[df['Country'] =='Costa Rica'] df2=df1[['Date','Confirmed']] df2.reset_index(drop=True, inplace=True) df2.set_index('Date',inplace=True) from statsmodels.tsa.arima_model import ARIMA # + import statsmodels.api as sm import matplotlib.pyplot as plt # - import statsmodels.api as sm model=sm.tsa.statespace.SARIMAX(df2['Confirmed'],order=(1, 1, 1),seasonal_order=(1,1,1,12)) results=model.fit() l1=results.predict(start=552,end=600,dynamic=True) l1 df2 type(l1) l1.to_frame() frames = [df2, l1] result = pd.concat(frames) result result[['Confirmed',0]].plot(figsize=(12,8)) df7=df.loc[df['Country'] =='Afghanistan'] df8=df7[['Date','Deaths']] df8.reset_index(drop=True, inplace=True) df8.set_index('Date',inplace=True) import statsmodels.api as sm model=sm.tsa.statespace.SARIMAX(df8['Deaths'],order=(1, 1, 1),seasonal_order=(1,1,1,12)) results=model.fit() l2=results.predict(start=552,end=600,dynamic=True) l2.to_frame() frames = [df8, l2] result = pd.concat(frames) result[['Deaths',0]].plot(figsize=(12,8)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #### 1. One nuclear reactor having a nominal power of 800MW generate an annual amount of 514k toe (1k =1000) of electricity. What is the correct capacity factor of this power unit ? # ##### Ans: 85% # #### 2. The above capacity factor is mainly due to # ##### Ans: the maintenance of the reactor # #### 3. We consider an offshore wind farm of 150 turbines of 6MW of nominal power each. The annual capacity factor of this large wind farm is around 35%. The annual amount of electric energy produced will be around: # ##### Ans: 238k toe # #### 4. How many offshore wind turbines of 6MW (same capacity factor as Q3) will be needed to produce the same amount of annual energy as a nuclear reactor of 800MW having a capacity factor of 78% # ##### Ans: 297 # #### 5. We consider the above offshore wind farm of 150 turbines of 6MW of nominal power each. The annual capacity factor is around 35%. The typical seasonal variability of the energy production for this wind farm is given by the following graph: # # # # #### This graph gives, in percentage, a relative value of the energy produced each month. The monthly mean energy (the annual energy /12) correspond to 100. Find which statements are true. 
# ##### Ans: # - The maximal power output of the wind farm is 900MW # - The maximal monthly production is around 340 GWh in december # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="N4cIPNBc1zhH" colab={"base_uri": "https://localhost:8080/"} outputId="26dc2151-b1a8-498d-8fb6-0fd220cc331e" import numpy as np import time import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision from torch.utils.data.sampler import SubsetRandomSampler import torchvision.transforms as transforms import matplotlib.pyplot as plt import torchvision.models from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.ensemble import RandomForestClassifier from sklearn.metrics import confusion_matrix from sklearn.externals import joblib from sklearn.metrics import accuracy_score from sklearn.metrics import plot_confusion_matrix # + colab={"base_uri": "https://localhost:8080/"} id="fu6cwQ4KkvEx" outputId="6f9b3ed0-1383-479a-8617-a47eaa8de717" from google.colab import drive drive.mount('/content/gdrive') # + id="PgGbz85umHM7" def loadData(): np.random.seed(100) #Ensuring data is a 224x224 image, used the centercrop function to crop at center transform = transforms.Compose([transforms.Resize((224,224)), transforms.ToTensor()]) numWorkers = 1 batchSize = 1 classes = ['COVID-19', 'Normal', 'Pneumonial-Bacterial', 'Pneumonial-Viral'] #datasetPath = '/content/gdrive/MyDrive/APS360/ProgressReport/APS360SampleData' datasetPath = '/content/gdrive/MyDrive/Winter2021/APS360/project' sampleSet = torchvision.datasets.ImageFolder(datasetPath, transform=transform) print(len(sampleSet)) #All the data loaded is valid so we can use any index train = int((len(sampleSet) * 0.8)) #val = int((len(sampleSet) * 0.1)) test = int((len(sampleSet) * 0.2)) # Used the random_split data function to split the dataset into a 70, 20, 10 proportion trainData, testData = torch.utils.data.random_split(sampleSet, [train,test],generator=torch.Generator().manual_seed(100)) print(trainData, testData) #Load all the data trainLoader = torch.utils.data.DataLoader(trainData, batch_size=batchSize, num_workers= numWorkers, shuffle=True) # valLoader = torch.utils.data.DataLoader(valData, batch_size=batchSize, # num_workers= numWorkers, # shuffle=True) testLoader = torch.utils.data.DataLoader(testData, batch_size=batchSize, num_workers= numWorkers, shuffle=True) return trainLoader, testLoader # + colab={"base_uri": "https://localhost:8080/"} id="_8KJu6JHmrCo" outputId="3fdf390c-f0ac-45f2-edf9-e00b977f509d" test = loadData() trainLoader = test[0] # valLoader = test[1] testLoader = test[1] print(len(trainLoader),len(testLoader)) # + id="vKlC52onnH0J" # classes = ['COVID-19', 'Normal', 'Pneumonial-Bacterial', 'Pneumonial-Viral'] # dataiter = iter(trainLoader) # images, labels = dataiter.next() # image = np.transpose(images[0], (1, 2, 0)) # label = classes[labels[0]] # for images, labels in trainLoader: # print(classes[labels[0]]) # + id="EDq66NPmu9QV" #links #https://towardsdatascience.com/dealing-with-multiclass-data-78a1a27c5dcc #https://www.codementor.io/@agarrahul01/multiclass-classification-using-random-forest-on-scikit-learn-library-hkk4lwawu # + colab={"base_uri": "https://localhost:8080/"} id="KqcEK0KIvhqa" outputId="568c3d76-2abf-430c-815e-3b928b00c6ed" train_x = [] train_y = [] for x, y in 
trainLoader: train_x.append(x) train_y.append(y) train_x = torch.stack(train_x) train_y = torch.stack(train_y) train_x = train_x.reshape(760, 224*224*3) print(train_x.shape) print(train_y.shape) # + colab={"base_uri": "https://localhost:8080/"} id="QJKjltiUztfK" outputId="e5d3119e-58dc-4dd5-849a-a988c461189f" test_x = [] test_y = [] for x, y in testLoader: test_x.append(x) test_y.append(y) test_x = torch.stack(test_x) test_x = test_x.reshape(190, 224*224*3) test_y = torch.stack(test_y) print(test_x.shape) print(test_y.shape) # + colab={"base_uri": "https://localhost:8080/"} id="nDdciTglxv5h" outputId="35d647c9-17c4-4137-a0de-ee7af619a1a8" # Create a Gaussian Classfier model = RandomForestClassifier(n_estimators = 220, oob_score=True, criterion = 'entropy', random_state = 100) model.fit(train_x,train_y) #predict2 = model.predict(train_x) predict = model.predict(test_x) value = accuracy_score(test_y,predict) #value2 = accuracy_score(train_y,predict2) print(value) #print(value2) # + colab={"base_uri": "https://localhost:8080/"} id="rd3YP9o9MYK1" outputId="d7d872a0-4d60-41fc-a31c-e368b5e25672" print(predict) print(trainLoader) # + colab={"base_uri": "https://localhost:8080/", "height": 333} id="iro1imeBNpzJ" outputId="e74b4f95-64e9-4b9e-9a39-5732357acf4b" labels = [0,1,2,3] conf_mat = confusion_matrix(test_y, predict, normalize='true') print(conf_mat) # Visualize it as a heatmap import seaborn seaborn.heatmap(conf_mat) plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="y8GLNgFNOy0S" outputId="737635f9-11ff-466b-aa5d-7e5c62efbeb1" print(test_y.shape) predictions = torch.from_numpy(predict) print(predictions.shape) # + colab={"base_uri": "https://localhost:8080/", "height": 580} id="cytUWUP1QCKS" outputId="d136c9cc-aed9-496a-8673-9180eb5da276" #plt.figure(figsize=(100, 100)) fig, ax = plt.subplots(figsize=(10, 10)) classes = ['COVID-19', 'Normal', 'Pneumonial-Bacterial', 'Pneumonial-Viral'] disp = plot_confusion_matrix(model, test_x, test_y, display_labels=classes, cmap=plt.cm.Blues, ax=ax) #disp.ax_.set_adjustable() #print(disp.confusion_matrix) plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import unittest import numpy as np import pandas as pd import numpy.testing as np_testing import pandas.testing as pd_testing import os import import_ipynb from sklearn.datasets import load_digits from sklearn.model_selection import train_test_split from sklearn import tree from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.metrics import precision_score from sklearn.metrics import recall_score import sys class Test(unittest.TestCase): def _dirname_if_file(self, filename): if os.path.isdir(filename): return filename else: return os.path.dirname(os.path.abspath(filename)) def setUp(self): import Activity3_02 self.activity = Activity3_02 self.digits = load_digits() self.X = pd.DataFrame(self.digits.data) self.Y = pd.DataFrame(self.digits.target) self.X_train, self.X_test, self.Y_train, self.Y_test = train_test_split(self.X,self.Y, test_size=0.2, random_state=0) self.model = tree.DecisionTreeClassifier(random_state=0) self.model = self.model.fit(self.X_train, self.Y_train) self.Y_pred = self.model.predict(self.X_test) def test_input_frames(self): np_testing.assert_equal(self.activity.digits.data, self.digits.data) def test_model_prediction(self): 
self.activity_Y_pred = self.activity.model.predict(self.X_test) np_testing.assert_almost_equal(self.Y_pred, self.activity_Y_pred) def test_model_precision(self): self.Y_test_2 = self.Y_test[:] self.Y_test_2[self.Y_test_2 != 6] = 1 self.Y_test_2[self.Y_test_2 == 6] = 0 self.Y_pred_2 = self.Y_pred self.Y_pred_2[self.Y_pred_2 != 6] = 1 self.Y_pred_2[self.Y_pred_2 == 6] = 0 self.precision = precision_score(self.Y_test_2, self.Y_pred_2) np_testing.assert_equal(self.precision, self.activity.precision) def test_model_recall(self): self.Y_test_2 = self.Y_test[:] self.Y_test_2[self.Y_test_2 != 6] = 1 self.Y_test_2[self.Y_test_2 == 6] = 0 self.Y_pred_2 = self.Y_pred self.Y_pred_2[self.Y_pred_2 != 6] = 1 self.Y_pred_2[self.Y_pred_2 == 6] = 0 self.recall = recall_score(self.Y_test_2, self.Y_pred_2) np_testing.assert_equal(self.recall, self.activity.recall) if __name__ == '__main__': unittest.main(argv=['first-arg-is-ignored'], exit=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # importing the libraries import cv2 # importing the opencv for detecting the face import numpy as np # for array manipulation # - # importing classifiers # for classifing the face face_classifier =cv2.CascadeClassifier(cv2.data.haarcascades +'haarcascade_frontalface_default.xml') face_classifier # + # reading_image # reading the image, it read the image and store the pixcel value of the image image = cv2.imread('faceDetect1.jpg') # converting the color image to black and white image gray_image = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY) # - # detecting multiple faces in a image # detectMultiScale - Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles faces = face_classifier.detectMultiScale(gray_image) # + for (x,y,w,h) in faces: # draw rectangle around face region cv2.rectangle(image,(x,y),(x+w,y+h),(0,255,0),2) # - # saving the detected images cv2.imwrite('DetectedFaces.png',image) # + # displaying the detected images cv2.imshow('DetectedFaces',image) cv2.waitKey(0) # closing all the windows cv2.destroyAllWindows() # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # # A Guided Tour of Ray Core: Remote Objects # [*Remote Objects*](https://docs.ray.io/en/latest/walkthrough.html#objects-in-ray) # implement a [*shared-memory object store*](https://en.wikipedia.org/wiki/Shared_memory) pattern. # # Objects are immutable, and can be accessed from anywhere on the cluster, as they are stored in the cluster shared memory. # # # # In general, small objects are stored in their owner’s **in-process store** while large objects are stored in the **distributed object store**. This decision is meant to reduce the memory footprint and resolution time for each object. Note that in the latter case, a placeholder object is stored in the in-process store to indicate the object has been promoted to shared memory. 
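# To make the small-object versus large-object distinction above concrete, the short sketch below puts one tiny list and one multi-megabyte NumPy array into the object store and reads both back. It is an illustrative aside rather than part of this walkthrough; which store Ray uses is an internal decision based on object size (a threshold on the order of 100 KiB is an implementation detail, so treat the comment about promotion as approximate).

# +
# Illustrative aside: a tiny object vs. a large object.
# Both are stored and retrieved the same way from the user's point of view;
# Ray decides internally whether the value lives in the owner's in-process
# store or is promoted to the distributed shared-memory object store.
import numpy as np
import ray

ray.init(ignore_reinit_error=True)

small_ref = ray.put([1, 2, 3])               # a few bytes
large_ref = ray.put(np.zeros((1000, 1000)))  # ~8 MB, promoted to shared memory

print(ray.get(small_ref))
print(ray.get(large_ref).shape)
# -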
# # [Ray Architecture Reference](https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview#) # # --- # First, let's start Ray… # + import logging import ray ray.init( ignore_reinit_error=True, logging_level=logging.ERROR, ) # - # ## Remote Objects example # To start, we'll define a remote object... # + # %%time num_list = [ 23, 42, 93 ] obj_ref = ray.put(num_list) obj_ref # - # Then retrieve the value of this object reference. This follows an object resolution protocol. # # # # Small objects are resolved by copying them directly from the owner’s **in-process store**. For example, if the owner calls `ray.get`, the system looks up and deserializes the value from the local in-process store. If the owner submits a dependent task, it inlines the object by copying the value directly into the task description. Note that these objects are local to the owner process: if a borrower attempts to resolve the value, the object is promoted to shared memory, where it can be retrieved through the distributed object resolution protocol described next. # # Resolving a large object. The object x is initially created on Node 2, e.g., because the task that returned the value ran on that node. This shows the steps when the owner (the caller of the task) calls `ray.get`: # # 1) Lookup object’s locations at the owner. # 2) Select a location and send a request for a copy of the object. # 3) Receive the object. # # # + # %%time ray.get(obj_ref) # - # Let's combine use of a remote function with a remote object, to illustrate *composable futures*: @ray.remote def my_function (num_list): return sum(num_list) # In other words, the remote function `myfunction()` will sum the list of integers in the remote object `num_list`: # + # %%time calc_ref = my_function.remote(obj_ref) # + # %%time ray.get(calc_ref) # - # You can gather the values of multiple object references in parallel using collections: # + # %%time ray.get([ray.put(i) for i in range(3)]) # - # Now let's set a timeout to return early from attempted access of a remote object that is blocking for too long... 
# + import time @ray.remote def long_running_function (): time.sleep(10) return 42 # + # %%time from ray.exceptions import GetTimeoutError obj_ref = long_running_function.remote() try: ray.get(obj_ref, timeout=6) except GetTimeoutError: print("`get` timed out") # - # Then shutdown Ray ray.shutdown() # ## References # # [Ray Architecture Reference](https://docs.google.com/document/d/1lAy0Owi-vPz2jEqBSaHNQcy2IBSDEHyXNOQZlGuj93c/preview#) # Ray 1.x Architecture Technical Paper # # [Ray Internals: A peek at ray,get](https://www.youtube.com/watch?v=a1kNnQu6vGw) # # [Ray Internals: Object management with Ownership Model](https://www.anyscale.com/events/2021/06/22/ray-internals-object-management-with-the-ownership-model) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:python35] # language: python # name: conda-env-python35-py # --- # + import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.utils import np_utils from keras.layers import Convolution2D, MaxPooling2D np.random.seed(25) # - (X_train, y_train), (X_test, y_test) = mnist.load_data() print("X_train original shape", X_train.shape) print("y_train original shape", y_train.shape) print("X_test original shape", X_test.shape) print("y_test original shape", y_test.shape) plt.imshow(X_train[0], cmap='gray') plt.title('Class '+ str(y_train[0])) # + X_train = X_train.reshape(X_train.shape[0], 28, 28, 1) X_test = X_test.reshape(X_test.shape[0], 28, 28, 1) X_train = X_train.astype('float32') X_test = X_test.astype('float32') X_train/=255 X_test/=255 X_train.shape # + number_of_classes = 10 Y_train = np_utils.to_categorical(y_train, number_of_classes) Y_test = np_utils.to_categorical(y_test, number_of_classes) y_train[0], Y_train[0] # + # Three steps to Convolution # 1. Convolution # 2. Activation # 3. Polling # Repeat Steps 1,2,3 for adding more hidden layers # 4. 
After that make a fully connected network # This fully connected network gives ability to the CNN # to classify the samples model = Sequential() model.add(Convolution2D(32, 3, 3, input_shape=(28,28,1))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Convolution2D(32,3,3)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(512)) model.add(Activation('relu')) model.add(Dropout(0.2)) model.add(Dense(10)) model.add(Activation('softmax')) # - model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X_train, Y_train, batch_size=128, nb_epoch=5, validation_data=(X_test, Y_test)) score = model.evaluate(X_test, Y_test) print() print('Test accuracy: ', score[1]) # + predictions = model.predict_classes(X_test) predictions = list(predictions) actuals = list(y_test) sub = pd.DataFrame({'Actual': actuals, 'Predictions': predictions}) sub.to_csv('./output_cnn.csv', index=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="enYLtsWu9ShD" colab_type="text" # #Common API # + id="-PiQMdOsS55I" colab_type="code" colab={} # + id="c-s6bow7kWWg" colab_type="code" colab={} import string import numpy as np import pandas as pd # + [markdown] id="Olubw13U9I1N" colab_type="text" # #Connect to Google Drive # + id="BXGGqt9_9g2v" colab_type="code" colab={} from pathlib import Path, PurePath from google.colab import drive # + id="hQxDXQoIjJqR" colab_type="code" colab={} # set data paths, this requires local drive to have a folder calld "COVID-19" with the metadata.csv file # returns a string to the local path setup def setup_local_data(): drive.mount('/content/drive') drive_path=PurePath('/content/drive/My Drive') input_dir = drive_path/'COVID-19' print(list(Path(input_dir).glob('*'))) return input_dir # + id="TMd3hEbLj7rs" colab_type="code" colab={} setup_local_data() # + [markdown] id="pwdWn9XQ9NcW" colab_type="text" # #Unzip JSON # + id="T4Od_SRKj-Rm" colab_type="code" colab={} def unzip(): from zipfile import ZipFile zf = ZipFile('/content/drive/My Drive/COVID-19/TopicModel/covid_data_full_v5.json.zip', 'r') zf.extractall('/content/drive/My Drive/COVID-19/') zf.close() # + id="PpYvUdSyk1Q7" colab_type="code" colab={} #unzip() # + id="xxJNAEnZk2qZ" colab_type="code" colab={} def read_json_to_pandas(): pd_json=pd.read_json (r'/content/drive/My Drive/COVID-19/TopicModel/covid_data_full_v5.json') return pd_json # + id="LBRYeINgmIlQ" colab_type="code" outputId="ec74d1ce-6615-4a10-be80-e8924671dafc" colab={"base_uri": "https://localhost:8080/", "height": 589} All_json_docs=read_json_to_pandas() All_json_docs # + id="eOr65HflmKLs" colab_type="code" outputId="5fd64b66-aee8-47bc-f303-0faa2f9de0b0" colab={"base_uri": "https://localhost:8080/", "height": 55} All_json_docs['text'][33372] # + id="STUfY7LnmY4o" colab_type="code" outputId="67b93527-b364-4113-96dd-51434e4dfd58" colab={"base_uri": "https://localhost:8080/", "height": 55} All_json_docs['abstract'][33372] # + id="hOcbd6pXmtjT" colab_type="code" colab={} def count_words(sent): num_words = len(sent.split()) return num_words # + id="JGg_KChNnS_L" colab_type="code" outputId="4beffcb8-1236-4c02-b439-a98090ad46c9" colab={"base_uri": "https://localhost:8080/", "height": 35} 
count_words(All_json_docs['abstract'][33372]) # + id="arwmmg41nVID" colab_type="code" outputId="5889ebae-fa70-4765-e057-9e1d1f9cbca0" colab={"base_uri": "https://localhost:8080/", "height": 35} count_words(All_json_docs['text'][33372]) # + [markdown] id="77qMMBCTrQwR" colab_type="text" # #Preprocessing and cleaned # + id="HNQ_b4opp0XD" colab_type="code" outputId="4e77198f-0bdc-4967-dee6-2a1a57dcdec5" colab={"base_uri": "https://localhost:8080/", "height": 89} import nltk nltk.download('stopwords') nltk.download('punkt') from nltk.corpus import stopwords from nltk.tokenize import word_tokenize # + id="eGu-SSqHndDE" colab_type="code" colab={} def convert_lower_case(data): import numpy as np return np.char.lower(data) # + id="pwqUJjkAp_K6" colab_type="code" colab={} def remove_stop_words(data): stop_words = stopwords.words('english') words = word_tokenize(str(data)) new_text = "" for w in words: if w not in stop_words and len(w) > 1: new_text = new_text + " " + w return new_text # + id="5aR0FOH8p_hZ" colab_type="code" colab={} def remove_punctuation(data): symbols = "!\"#$%&()*+-./:;<=>?@[\]^_`{|}~\n" for i in range(len(symbols)): data = np.char.replace(data, symbols[i], ' ') data = np.char.replace(data, " ", " ") data = np.char.replace(data, ',', '') return data # + id="QGmR1QWRqCLI" colab_type="code" colab={} def remove_apostrophe(data): return np.char.replace(data, "'", "") # + id="d9SsskF2qFh4" colab_type="code" colab={} def remove_numbers(data): from string import digits remove_digits = str.maketrans('', '', digits) res = data.translate(remove_digits) return res # + id="iLmJmcwp0Rql" colab_type="code" colab={} def remove_symboles(data): import re regex = re.compile('[^ s\ a-zA-Z]')#,'[\t\n\r\f\v]') data_no_sympoles=regex.sub('',str(data)) #print(type(data_no_sympoles)) return data_no_sympoles # + id="nLibXUm4Cw6w" colab_type="code" outputId="63a96ddd-2993-444a-9bbd-9a8c2b05eeac" colab={"base_uri": "https://localhost:8080/", "height": 55} remove_symboles(All_json_docs['text'][33372]) # + id="5qGZGtmkqw_c" colab_type="code" colab={} def preprocess(data): data = convert_lower_case(data) data = remove_punctuation(data) #remove comma seperately data = remove_apostrophe(data) data = remove_stop_words(data) data = remove_numbers(data) data = remove_punctuation(data) data = remove_symboles(data) return data # + id="q1bCqt0wq1OM" colab_type="code" colab={} dete_cleaned=preprocess(All_json_docs['text'][33372]) # + id="QZ7yI3JWCaoH" colab_type="code" outputId="e80c81db-d4ec-4712-80fe-ce25da749d10" colab={"base_uri": "https://localhost:8080/", "height": 55} dete_cleaned # + id="0QleE3aJrYyp" colab_type="code" colab={} #len(dete_cleaned) #dete_cleaned.size # + id="G_qR7V_68Fww" colab_type="code" colab={} def cleaned_COVID19(All_json_docs): All_json_docs['cleaned_text'] = All_json_docs['text'].apply(preprocess) covid_dataset_dropnan=All_json_docs.dropna() All_json_docs.to_csv(r'/content/drive/My Drive/COVID-19/TopicModel/covid_data_full_v5_cleaned.csv',index=False,sep=',',header=True) covid_dataset_dropnan.to_csv(r'/content/drive/My Drive/COVID-19/TopicModel/covid_data_full_v5_cleaned_dropnan.csv',index=False,sep=',',header=True) return All_json_docs # + id="NQ6PkhlfsrMI" colab_type="code" colab={} # + id="RM6u0CkAt5Ns" colab_type="code" colab={} cleaned_COVID19(All_json_docs) # + [markdown] id="dLpkZABp9rHi" colab_type="text" # #Topic Modelling # + [markdown] id="QGNncyzBLBA6" colab_type="text" # ##Subset # + id="kXBNeguc9yne" colab_type="code" colab={} def read_covid_dataset(columns,n): data = 
pd.read_csv("/content/drive/My Drive/COVID-19/TopicModel/covid_data_full_v5_cleaned_dropnan.csv",usecols= columns, nrows=n,index_col=0)#usecols= ['_id','text','cleaned_text'] return data # + id="KrtDaWEB-ERq" colab_type="code" colab={} columns=['_id','cleaned_text'] covid_dataset=read_covid_dataset(columns,10) covid_dataset.to_csv(r'/content/drive/My Drive/COVID-19/TopicModel/covid_data_full_v5_cleaned_dropnan_10.csv',index=False,sep=',',header=True) # + id="-zKgDJxNz0ql" colab_type="code" outputId="036ef17c-1720-4e95-e199-83d2b5b896dc" colab={"base_uri": "https://localhost:8080/", "height": 390} covid_dataset # + id="ArQ5w_WUFZD-" colab_type="code" colab={} def all_words(covid_dataset): all_words=[] for line in covid_dataset['cleaned_text']: wod_line=line.split(' ') for word in wod_line: all_words.append(word) print(len(all_words)) return all_words # + id="_GFNsE1PKzrL" colab_type="code" outputId="2b14a07f-985e-42f2-ed62-2f1f65a99a5e" colab={"base_uri": "https://localhost:8080/", "height": 35} total_words=all_words(covid_dataset) # + id="DccYY6nfLOLj" colab_type="code" outputId="a6521f68-b113-4cf5-9ccd-0422460e1756" colab={"base_uri": "https://localhost:8080/", "height": 35} type(total_words) # + id="yyGheK4nFJ2C" colab_type="code" outputId="1ab97753-b5d7-4737-9079-c7dd07019fd9" colab={"base_uri": "https://localhost:8080/", "height": 35} def unique_words(x): pure_list=list(dict.fromkeys(x)) import csv with open('/content/drive/My Drive/COVID-19/TopicModel/all_terms_10.csv',mode='w', newline='') as myfile: wr = csv.writer(myfile, quoting=csv.QUOTE_NONE,delimiter=',') wr.writerow(pure_list) return len(pure_list) mylist = unique_words(total_words) mylist # + id="m-KjYqIyPbtf" colab_type="code" outputId="27f052dd-dc99-454e-99aa-e180ed2e64ed" colab={"base_uri": "https://localhost:8080/", "height": 115} data = pd.read_csv("/content/drive/My Drive/COVID-19/TopicModel/all_terms_10.csv") data # + id="2mii2izIV_MW" colab_type="code" outputId="b47a4d0d-2cb3-43f5-9f90-5fe253a6cab0" colab={"base_uri": "https://localhost:8080/", "height": 35} #data[] s=data[data.columns[1]] type(s) s.name # + id="9Tntc2fLVxbo" colab_type="code" outputId="f213864a-2f9d-4a35-dc3d-b935a54a3f13" colab={"base_uri": "https://localhost:8080/", "height": 151} data.where(df =='a') # + [markdown] id="spS-TQl-9uZ2" colab_type="text" # ## TF-IDF # + id="8OSNU-4YBh-D" colab_type="code" outputId="371f221c-a96c-40a5-9d77-0caa83f2e1b3" colab={"base_uri": "https://localhost:8080/", "height": 53} import nltk nltk.download('punkt') def tokenizer(text): tokens = nltk.word_tokenize(text) return tokens # + id="jsyEp7sh_lr2" colab_type="code" colab={} def tfidf(): from sklearn.feature_extraction.text import TfidfVectorizer v = TfidfVectorizer(tokenizer=tokenizer,analyzer ='word') x = v.fit_transform(covid_dataset.cleaned_text.values) data=x.toarray() headers=v.get_feature_names() df = pd.DataFrame (data, columns =headers,index=covid_dataset.index.values) df.to_csv(r'/content/drive/My Drive/COVID-19/TopicModel/covid_TFIDF_10.csv')#,index=True,sep=',',header=True) return df # + id="UeypoCdJFWLp" colab_type="code" outputId="6a9ad7dd-4e90-493a-ce45-6d100a074fb0" colab={"base_uri": "https://localhost:8080/", "height": 197} covid_dataset.index.values # + id="AX9b0Dnwt6W0" colab_type="code" outputId="ca00d214-69b9-4816-efe3-06b42879f0d6" colab={"base_uri": "https://localhost:8080/", "height": 408} tfidf() # + id="eJ4R5D6qVSO_" colab_type="code" outputId="0b6f9a4c-2aa7-4391-953f-827ac1524c00" colab={"base_uri": "https://localhost:8080/", "height": 
215} df['corona'] # + [markdown] id="o__uHgxauqEJ" colab_type="text" # #**CorelationCoeffitien matrix (H)** # https://towardsdatascience.com/word-embedding-using-bert-in-python-dd5a86c00342 # + [markdown] id="n8HCgjGiD5tF" colab_type="text" # ##FastText # https://fasttext.cc/docs/en/english-vectors.html # # https://medium.com/@h_bushroh/text-similarity-with-fasttext-word-embeddings-c765d97df682 # https://github.com/facebookresearch/fastText/blob/master/docs/pretrained-vectors.md # # https://github.com/facebookresearch/fastText/blob/master/docs/pretrained-vectors.md # # https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.zip # # https://github.com/ncbi-nlp/BioSentVec # # + id="3tS6tLMzKfyP" colab_type="code" colab={} import os os.chdir('/content/drive/My Drive/COVID-19/TopicModel/') # + id="-11tfB6IKboO" colab_type="code" outputId="3cd6bd23-bd9f-456e-e9db-5b66ecbbc769" colab={"base_uri": "https://localhost:8080/", "height": 269} # !ls # + [markdown] id="lWxli8joOpYT" colab_type="text" # ###Download # + id="KTs2jLUxFbuL" colab_type="code" outputId="0d064f1c-d89c-4a7e-cfe6-ed15d9fa2fa0" colab={"base_uri": "https://localhost:8080/", "height": 215} # !wget https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.en.zip # + id="pN8OXacQN047" colab_type="code" outputId="54dff748-f2c6-4d01-dc3e-f720eb118e0f" colab={"base_uri": "https://localhost:8080/", "height": 71} # !unzip '/content/drive/My Drive/COVID-19/TopicModel/wiki.en.zip' # + [markdown] id="Vw16jYskyrZA" colab_type="text" # ###Load Model # + id="EZ6WQ5A2RWnp" colab_type="code" outputId="c367cb91-e3f2-4dd5-f6eb-2442c9b243ad" colab={"base_uri": "https://localhost:8080/", "height": 73} #https://radimrehurek.com/gensim/models/deprecated/keyedvectors.html #https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/Word2Vec_FastText_Comparison.ipynb #from gensim.models.fasttext import FastText from gensim.models import Word2Vec, KeyedVectors fast_text_vec = KeyedVectors.load_word2vec_format('/content/drive/My Drive/COVID-19/TopicModel/wiki.en.vec') #fast_text_model=FastText.load_fasttext_format('/content/drive/My Drive/COVID-19/TopicModel/wiki.en.vec') # + id="EJxpDbEv8ex6" colab_type="code" outputId="69028c79-92c8-43b8-92ca-cefa10d9faaf" colab={"base_uri": "https://localhost:8080/", "height": 253} fast_text_vec.most_similar('zot') # + id="ojs5sNFCAjdE" colab_type="code" outputId="162e4bc8-0408-4df1-f744-3f5c4e3bffb3" colab={"base_uri": "https://localhost:8080/", "height": 35} fast_text_vec.cosine_similarities(fast_text_vec.get_vector('dog'),[fast_text_vec.get_vector('cat'),fast_text_vec.get_vector('bit')]) # + id="mMew-MUXEz4q" colab_type="code" colab={} data = pd.read_csv("/content/drive/My Drive/COVID-19/TopicModel/all_terms_10.csv") data # + id="FbqafQTRE-cV" colab_type="code" outputId="aed231c3-f8a9-4658-daa5-9d2b0b5bda42" colab={"base_uri": "https://localhost:8080/", "height": 125} all_words=data.columns[1:] len(all_words) all_words # + id="o9NZcxjHFtzZ" colab_type="code" outputId="4c25c210-0db1-4ebb-9b85-b67f89f83837" colab={"base_uri": "https://localhost:8080/", "height": 1000} all_fast_text_vec_list=[] for w in all_words: print(type(w)) all_fast_text_vec_list.append(fast_text_vec.get_vector(w)) #all_fast_text_vec_list # + id="sy05cX9QFRfd" colab_type="code" colab={} for w in all_words: # + id="ZKqNBrG5Ez0q" colab_type="code" colab={} s=data[data.columns[1]] type(s) s.name # + [markdown] id="FpmlEbRpMdVX" colab_type="text" # ##Bio word2vec # https://github.com/ncbi-nlp/BioSentVec # 
https://github.com/jakerochlinmarcus/biomedical-word-embeddings # + [markdown] id="PDPW9ir_Pz3y" colab_type="text" # ###Download # + id="lHTrN80oP7zi" colab_type="code" colab={} import os os.chdir('/content/drive/My Drive/COVID-19/TopicModel/') # + id="l4yhPPmJQAWt" colab_type="code" outputId="e1dfb6d7-30b7-4f72-c34d-8475b38615eb" colab={"base_uri": "https://localhost:8080/", "height": 269} # !ls # + id="oa8ViHCZEzvi" colab_type="code" outputId="c7918006-6f93-4396-d1d4-336f8ba8bb00" colab={"base_uri": "https://localhost:8080/", "height": 215} # !wget https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/BioSentVec/BioWordVec_PubMed_MIMICIII_d200.vec.bin # + id="XfwfteT3Yj1S" colab_type="code" outputId="65d14e6f-daf0-4232-fdaa-26e0d779e8ab" colab={"base_uri": "https://localhost:8080/", "height": 215} # !wget http://evexdb.org/pmresources/vec-space-models/PubMed-and-PMC-w2v.bin # + [markdown] id="3FeXVpUSR531" colab_type="text" # ###Load # + id="ImEgtjq5lIsa" colab_type="code" outputId="1e06fca4-2386-4999-b6c7-a05ca8f10d01" colab={"base_uri": "https://localhost:8080/", "height": 35} # !ls # + id="KwCza2MGlONB" colab_type="code" colab={} # !mv ./PubMed-and-PMC-w2v.bin '/content/drive/My Drive/COVID-19/TopicModel/' # + id="6z9ghyHfR7mM" colab_type="code" outputId="8f07abca-9166-4217-f87e-387b0603c01a" colab={"base_uri": "https://localhost:8080/", "height": 73} from gensim.models import Word2Vec, KeyedVectors Bio_text_vec = KeyedVectors.load_word2vec_format('/content/drive/My Drive/COVID-19/TopicModel/PubMed-and-PMC-w2v.bin', binary=True) # + id="tZmFy5f4FhuF" colab_type="code" outputId="e8f1665f-4d6f-4f06-d739-9de3dca4939c" colab={"base_uri": "https://localhost:8080/", "height": 253} Bio_text_vec.most_similar('zot') # + id="9b8sSEuGJgbE" colab_type="code" outputId="89ec5557-b2ed-4161-8661-6944760367eb" colab={"base_uri": "https://localhost:8080/", "height": 115} def H1(): data = pd.read_csv("/content/drive/My Drive/COVID-19/TopicModel/all_terms_10.csv") all_words=data.columns[1:] len(all_words) all_words all_Bio_text_vec_list=[] Bio_text_word=[] Not_Bio_text_word=[] for w in all_words: #print(type(w)) try: all_Bio_text_vec_list.append(Bio_text_vec.get_vector(w)) Bio_text_word.append(w) except: Not_Bio_text_word.append(w) pass print(len(all_Bio_text_vec_list))#5751 print(len(Bio_text_word))#5751 print(len(Not_Bio_text_word))#508 all_Bio_word_cos=[] for w in Bio_text_word: cos=Bio_text_vec.cosine_similarities(Bio_text_vec.get_vector(w),all_Bio_text_vec_list) all_Bio_word_cos.append(cos) df_H1 = pd.DataFrame(all_Bio_word_cos, index =Bio_text_word,columns=Bio_text_word) df_H1.to_csv(r'/content/drive/My Drive/COVID-19/TopicModel/H1_10doc_Bio.csv') return df_H1 # + id="cbWox1VUql0c" colab_type="code" outputId="ac413ebf-c2ea-48a1-f2e6-90d46726dbe4" colab={"base_uri": "https://localhost:8080/", "height": 439} H1() # + id="lDgoe5RW3aN3" colab_type="code" colab={} def H2_50(): data_H1 = pd.read_csv('/content/drive/My Drive/COVID-19/TopicModel/H1_10doc_Bio.csv',index_col=0)#,header=None) for col in data_H1.columns: data_H1.loc[data_H1[col] < .50, col] = 0 data_H1.to_csv(r'/content/drive/My Drive/COVID-19/TopicModel/H2_10doc_Bio_50.csv') return data_H1 # + id="RE3jXE4A3yge" colab_type="code" outputId="cc7b709b-53f3-4dbd-d025-12c17deda3a4" colab={"base_uri": "https://localhost:8080/", "height": 439} H2_50() # + id="338Q5MA-5FtD" colab_type="code" outputId="e61b5858-4269-46a7-9083-026f22fe0f5c" colab={"base_uri": "https://localhost:8080/", "height": 233} #(data_H1 == 0).astype(int).sum(axis=1) # + [markdown]
id="958l3WS27rNW" colab_type="text" # #**Topic Doument** # + [markdown] id="C6TPJyXm_lCe" colab_type="text" # ##Inverse Martix H # + id="qPqft5H98WMV" colab_type="code" colab={} v=wHT--->wv/H or w=vHi # + id="1A_Fu4LL8cFa" colab_type="code" colab={} def inv_H2(): data_H2 = pd.read_csv('/content/drive/My Drive/COVID-19/TopicModel/H2_10doc_Bio_50.csv',index_col=0) df_inv = pd.DataFrame(np.linalg.pinv(data_H2.values), data_H2.columns, data_H2.index) df_inv.to_csv(r'/content/drive/My Drive/COVID-19/TopicModel/H2_Inv_Bio_50.csv') return df_inv # + id="dDopbZKF8cbW" colab_type="code" outputId="7e3bb1d4-3f77-4164-a3b2-d20b2d331860" colab={"base_uri": "https://localhost:8080/", "height": 609} inv_H2() # + id="VT8Y9PIy9Bsy" colab_type="code" colab={} def W(): inv_H2 = pd.read_csv('/content/drive/My Drive/COVID-19/TopicModel/H2_Inv_Bio_50.csv',index_col=0) inv_H2.sort_index(axis=1, inplace=True) inv_H2.sort_index(axis=0, inplace=True) tfIdf_bio = pd.read_csv('/content/drive/My Drive/COVID-19/TopicModel/covid_TFIDF_10.csv',index_col=0) tfIdf_bio_2=tfIdf_bio[inv_H2.columns] W=tfIdf_bio_2.dot(inv_H2) W.index=tfIdf_bio.index W.to_csv(r'/content/drive/My Drive/COVID-19/TopicModel/W_Bio_50.csv') return W # + id="hJpjm3xSCl8u" colab_type="code" outputId="cb8bf18b-636a-41f4-f161-c39b09c5671d" colab={"base_uri": "https://localhost:8080/", "height": 55} '''inv_H2 = pd.read_csv('/content/drive/My Drive/COVID-19/TopicModel/H2_Inv_Bio_50.csv',index_col=0) inv_H2.sort_index(axis=1, inplace=True) inv_H2.sort_index(axis=0, inplace=True) tfIdf_bio = pd.read_csv('/content/drive/My Drive/COVID-19/TopicModel/covid_TFIDF_10.csv',index_col=0)#,usecols=inv_H2.columns) tfIdf_bio_2=tfIdf_bio[inv_H2.columns] tfIdf_bio_2''' # + id="plnX0ptwYqet" colab_type="code" colab={} #tfIdf_bio_2.dot(inv_H2) # + id="a4o-lhcrTHIP" colab_type="code" outputId="d58c33ee-ac90-49cb-fc12-c36cc58c88e2" colab={"base_uri": "https://localhost:8080/", "height": 578} #tfIdf_bio[inv_H2.columns] W()# Doc-topic() topic is column, Doc is row) # + id="fkIO5869C4ET" colab_type="code" colab={} # + [markdown] id="KAsoaDaSPiPB" colab_type="text" # #**Searching** # + id="JDjVm3w5RXfA" colab_type="code" outputId="88860f7c-3bac-4bee-953b-1757aa79f478" colab={"base_uri": "https://localhost:8080/", "height": 73} from gensim.models import Word2Vec, KeyedVectors Bio_text_vec = KeyedVectors.load_word2vec_format('/content/drive/My Drive/COVID-19/TopicModel/PubMed-and-PMC-w2v.bin', binary=True) # + id="cCqHGOLNoy5F" colab_type="code" colab={} search="coronavirus,corona virus,covid-19,2019-ncov,ncov,sars-cov-2,sars,sars-cov,mers,mers-cov,severe acute respiratory syndrome,middle east respiratory syndrome" all_search_list=search.split(",") Search_NOT="animal,equine,porcine,calves,dog,canine,feline,bat,camel" all_Search_NOT_list=Search_NOT.split(",") Search_AND="incubation period,onset,exposure" all_Search_AND_list=Search_AND.split(",") # + id="2SZsirKBRI8N" colab_type="code" colab={} # + id="g8hVnsX_Rfc6" colab_type="code" outputId="6c40d72e-c06f-4bb5-9e29-ec91633d724e" colab={"base_uri": "https://localhost:8080/", "height": 91} all_most_similar=dict() for w in all_search_list: try: s=Bio_text_vec.most_similar(w.lower()) #search_list.append(w) #print(w) #print(s) most_similar_list=[] for i in range(len(s)): if s[i][1]>=.5: #print(s[i][1]) most_similar_list.append(s[i]) all_most_similar[w]=most_similar_list except: #print("Not") pass print("Research topic",all_most_similar) # + id="797ASEraRIrq" colab_type="code" colab={} doc_topic_df = pd.read_csv('/content/drive/My 
Drive/COVID-19/TopicModel/W_Bio_50.csv',index_col=0) # + id="3eH3ShwJa1pA" colab_type="code" outputId="ff7d1546-9b16-46bd-d322-9944cc50e746" colab={"base_uri": "https://localhost:8080/", "height": 35} print(all_most_similar.get(all_search_list[1])) # + id="lrbgCqbAa8vP" colab_type="code" colab={} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + # #! pip install scrapy # - from scrapy import Selector html ='''

    <html>
      <body>
        <div class="hello datacamp">
          <p>Hello World!</p>
        </div>
        <p>Enjoy DataCamp!</p>
      </body>
    </html>

    ''' sel= Selector(text=html) sel.xpath("//p") # Selector and SelectorList objects allow for chaining when using the xpath method . below are the same 21 & 22 sel.xpath('/html/body/div[1]') sel.xpath('/html').xpath('./body/div[1]') sel.xpath("//*") sel.xpath('//*[contains(@class,"hello datacamp")]/p') listext=sel.xpath("//p")[1] listext.extract() sel.xpath("//p").extract_first() div=Selector(text=html) div.xpath("//div").extract() import requests url= "https://www.datacamp.com/courses/all" html=requests.get(url).content sel=Selector(text=html) sel.xpath('//p[contains(@class,"course-block__description")]').extract() # + # Import a scrapy Selector from scrapy import Selector # Import requests import requests # Create the string html containing the HTML source html = requests.get( url ).content # Create the Selector object sel from html sel = Selector( text=html ) # Print out the number of elements in the HTML document print( 'There are {0} elements in the HTML document'.format( len( sel.xpath('//*'))) print( "You have found: ", len( sel.xpath('//*') ) ) # - sel.css('p.course-block__description').extract() sel.xpath('//p[contains(@class,"course-block__description")]/text()').extract() sel.xpath('//p[contains(@class,"course-block__description")]//text()').extract() len (sel.xpath('//p[contains(@class,"course-block__description")]//text()').extract()) sel.css('p.course-block__description::text').extract() element= sel.css('p.course-block__description::text').extract() for elm in element: print(elm) # + # Create an XPath string to the desired text. xpath = '//p[@id="p3"]//text()' # Create a CSS Locator string to the desired text. css_locator = 'p#p3 ::text' # Print the text from our selections print_results( xpath, css_locator ) # - from scrapy import Selector # + # Get the URL to the website loaded in response this_url = response.url # Get the title of the website loaded in response this_title = response.css('title::text').extract_first() # Print out our findings print_url_title( this_url, this_title ) # + # Create a SelectorList of the course titles crs_title_els = response.css('h4::text') # Extract the course titles crs_titles = crs_title_els.extract() # Print out the course titles for el in crs_titles: print( ">>", el ) # %capture # - import scrapy from scrapy.crawler import CrawlerProcess # Create the spider class class YourSpider(scrapy.Spider): name = "your_spider" # start_requests method def start_requests(self): pass # parse method def parse(self, response): pass class DCspider( scrapy.Spider ): name ='dc_spider' def start_requests( self ): urls = [ 'https://www.datacamp.com/courses/all' ] for url in urls: yield scrapy.Request( url = url, callback = self.parse ) def parse( self, response ): # simple example: write out the html html_file ='DC_courses.html' with open( html_file,'wb' ) as fout: fout.write( response.body ) # + # Import the scrapy library import scrapy # Create the Spider class class DCdescr( scrapy.Spider ): name = 'dcdescr' # start_requests method def start_requests( self ): yield scrapy.Request( url = url_short, callback = self.parse ) # First parse method def parse( self, response ): links = response.css( 'div.course-block > a::attr(href)' ).extract() # Follow each of the extracted links for link in links: yield response.follow(url=link, callback=self.parse_descr) # Second parsing method def parse_descr( self,response ): # Extract course description course_descr = response.css( 'p.course__description::text' ).extract_first() # For now, just yield the course 
description yield course_descr # Inspect the spider #inspect_spider( DCdescr ) # + # Import scrapy import scrapy # Import the CrawlerProcess: for running the spider from scrapy.crawler import CrawlerProcess # Create the Spider class class DC_Chapter_Spider(scrapy.Spider): name = "dc_chapter_spider" # start_requests method def start_requests(self): url_shor ='https://www.datacamp.com/courses/all' yield scrapy.Request(url = url_short, callback = self.parse_front) # First parsing method def parse_front(self, response): course_blocks = response.css('div.course-block') course_links = course_blocks.xpath('./a/@href') links_to_follow = course_links.extract() for url in links_to_follow: yield response.follow(url = url, callback = self.parse_pages) # Second parsing method def parse_pages(self, response): crs_title = response.xpath('//h1[contains(@class,"title")]/text()') crs_title_ext = crs_title.extract_first().strip() ch_titles = response.css('h4.chapter__title::text') ch_titles_ext = [t.strip() for t in ch_titles.extract()] dc_dict[ crs_title_ext ] = ch_titles_ext # Initialize the dictionary **outside** of the Spider class dc_dict = dict() # Run the Spider process = CrawlerProcess() process.crawl(DC_Chapter_Spider) process.start() # Print a preview of courses #previewCourses(dc_dict) # - # parse method def parse(self, response): # Extracted course titles crs_titles = response.xpath('//h4[contains(@class,"block__title")]/text()').extract() # Extracted course descriptions crs_descrs = response.xpath('//p[contains(@class,"block__description")]/text()').extract() # Fill in the dictionary for crs_title, crs_descr in zip(crs_titles, crs_descrs): dc_dict[crs_title] = crs_descr # + # Import scrapy import scrapy # Import the CrawlerProcess from scrapy.crawler import CrawlerProcess # Create the Spider class class YourSpider(scrapy.Spider): name = 'yourspider' # start_requests method def start_requests( self ): yield scrapy.Request(url = url_short, callback=self.parse) def parse(self, response): # My version of the parser you wrote in the previous part crs_titles = response.xpath('//h4[contains(@class,"block__title")]/text()').extract() crs_descrs = response.xpath('//p[contains(@class,"block__description")]/text()').extract() for crs_title, crs_descr in zip( crs_titles, crs_descrs ): dc_dict[crs_title] = crs_descr # Initialize the dictionary **outside** of the Spider class dc_dict = dict() # Run the Spider process = CrawlerProcess() process.crawl(YourSpider) process.start() # Print a preview of courses previewCourses(dc_dict) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import pandas as pd import numpy as np from sklearn.preprocessing import StandardScaler from sklearn.model_selection import StratifiedKFold from lightgbm import LGBMClassifier from sklearn import metrics import warnings from scipy import stats warnings.filterwarnings('ignore') df_train = pd.read_csv('./train/train.csv') df_test = pd.read_csv('./test/test.csv') X_train = np.asarray(df_train.drop(['ID', 'label'], axis = 1)) y_train = np.asarray(df_train['label'].map({'围网':0, '拖网':1,'刺网':2})) scaler = StandardScaler() scaler.fit(X_train) X_train = scaler.transform(X_train) X_test = scaler.transform(np.asarray(df_test.drop('ID',axis = 1))) fold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42) params = { 'n_estimators': 5000, 'boosting_type': 'gbdt', 
'objective': 'multiclass', 'num_class': 3, 'early_stopping_rounds': 100, } clf = LGBMClassifier(**params) models = [] pred = np.zeros((len(X_test), 5)) for index, (train_idx, val_idx) in enumerate(fold.split(X_train, y_train)): model = clf.fit(X_train[train_idx], y_train[train_idx], eval_set = [(X_train[val_idx], y_train[val_idx])], early_stopping_rounds = 500, verbose = False) models.append(model) val_pred = model.predict(X_train[val_idx]) val_y = y_train[val_idx] print(index, 'val f1', metrics.f1_score(val_y, val_pred, average='macro')) pred[:, index] = model.predict(X_test) pred = [int(stats.mode(num)[0][0]) for num in pred] df_test['label'] = pred df_test['label'] = df_test['label'].map({0:'围网',1:'拖网',2:'刺网'}) res = df_test[['ID', 'label']] res.to_csv('./result/result.csv', index = None, header = None, encoding = 'utf-8') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import multiprocessing import numpy as np import matplotlib.pyplot as plt import cnn import utils import hmm # - cpus = multiprocessing.cpu_count() // 2 print(f'CPUs: {cpus}') cpus = 2 # + x = 3 y = np.array(['a', 'b', 'c']) s = np.zeros(x) s[0] = 1.0 real_hmm_rand = hmm.random_hmm( x=x, y=''.join(y), s=s ) print(f'real_hmm_rand A:\n{real_hmm_rand.a}\n') print(f'real_hmm_rand B:\n{real_hmm_rand.b}') # + epoch_size = 100 batch_size = 100 seq_len = 20 rand_data_gen = utils.HMMDataGenerator( real_hmm_rand, epoch_size, batch_size, seq_len ) # - t_hmm = real_hmm_rand t_gen = rand_data_gen # ## CNN 1 model = cnn.CNNModel(t_gen.input_shape()) model.summary() utils.plot_model(model, 'images/cnn1_arch.png') # + epochs = 20 history = model.fit_generator( generator=t_gen, epochs=epochs, callbacks=utils.callbacks('cnn1'), use_multiprocessing=True, workers=cpus ) # - utils.plot_acc(history, 'images/cnn1_acc.png') utils.plot_loss(history, 'images/cnn1_loss.png') test_X = t_hmm.simulate(seq_len, reset_before=True)[1] print(test_X) test_X = np.array([t_gen._encode_hmm_outputs(test_X)]) print(test_X) p = model.predict(test_X) print(p) pred_real = bool(round(p[0][0])) print(f'Predict - Real?: {pred_real}') model2 = cnn.CNNModel2(t_gen.input_shape()) model2.summary() utils.plot_model(model2, to_file='images/cnn2_arch.png') # + epochs = 20 history2 = model2.fit_generator( generator=t_gen, epochs=epochs, callbacks=utils.callbacks('cnn2'), # validation_data=v_gen, use_multiprocessing=True, workers=cpus ) # - utils.plot_acc(history2, 'images/cnn2_acc.png') utils.plot_loss(history2, 'images/cnn2_loss.png') # # Model 3 model3 = cnn.CNNModel3(t_gen.input_shape()) model3.summary() utils.plot_model(model3, 'images/cnn3_arch.png') # + epochs = 20 history3 = model3.fit_generator( generator=t_gen, epochs=epochs, callbacks=utils.callbacks('cnn3'), # validation_data=v_gen, use_multiprocessing=True, workers=cpus ) # - utils.plot_acc(history3, 'images/cnn3_acc.png') utils.plot_loss(history3, 'images/cnn3_loss.png') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # + # Import libraries - REQUIRES pip version 9.0.3 import pandas import os from os.path import join import sys # Using Cobrapy 0.13.0 import cobra import cobra.test from cobra import Model from cobra import Metabolite from cobra import Reaction from 
cobra.io import write_sbml_model # Estabish handler for logger import logging logging.basicConfig() logger = logging.getLogger('logger') # Verbose exception printing # %xmode # - # Initialize model toy = Model('toy_model') # + # Define reactions and metabolites cpdA_e = Metabolite( 'cpdA_e', name='compound A', compartment='e') cpdA_c = Metabolite( 'cpdA_c', name='compound A', compartment='c') rxn1 = Reaction('rxn1') rxn1.name = 'Transport' rxn1.gene_reaction_rule = 'gene1' rxn1.lower_bound = 0. rxn1.upper_bound = 1000. rxn1.add_metabolites({ cpdA_e: -1.0, cpdA_c: 1.0 }) cpdB_c = Metabolite( 'cpdB_c', name='compound B', compartment='c') rxn2 = Reaction('rxn2') rxn2.name = 'Metabolite conversion 1' rxn2.gene_reaction_rule = 'gene2' rxn2.lower_bound = -1000. rxn2.upper_bound = 1000. rxn2.add_metabolites({ cpdA_c: -1.0, cpdB_c: 1.0 }) cpdC_c = Metabolite( 'cpdC_c', name='compound C', compartment='c') rxn3 = Reaction('rxn3') rxn3.name = 'Metabolite conversion 2' rxn3.gene_reaction_rule = 'gene3' rxn3.lower_bound = 0. rxn3.upper_bound = 1000. rxn3.add_metabolites({ cpdB_c: -1.0, cpdC_c: 1.0 }) cpdD_c = Metabolite( 'cpdD_c', name='compound D', compartment='c') rxn4 = Reaction('rxn4') rxn4.name = 'Metabolite conversion 3' rxn4.gene_reaction_rule = 'gene4' rxn4.lower_bound = 0. rxn4.upper_bound = 1000. rxn4.add_metabolites({ cpdC_c: -1.0, cpdD_c: 1.0 }) cpdE_c = Metabolite( 'cpdE_c', name='compound E', compartment='c') rxn5 = Reaction('rxn5') rxn5.name = 'Metabolite conversion 4' rxn5.gene_reaction_rule = 'gene5' rxn5.lower_bound = 0. rxn5.upper_bound = 1000. rxn5.add_metabolites({ cpdC_c: -1.0, cpdE_c: 1.0 }) rxn6 = Reaction('rxn6') rxn6.name = 'Metabolite conversion 5' rxn6.lower_bound = 0. rxn6.upper_bound = 1000. rxn6.add_metabolites({ cpdE_c: -1.0, cpdD_c: 1.0 }) biomass_cpd = Metabolite( 'biomass_cpd', name='Biomass', compartment='c') biomass_rxn = Reaction('biomass_rxn') biomass_rxn.name = 'Biomass reaction' biomass_rxn.lower_bound = 0. biomass_rxn.upper_bound = 1000. 
biomass_rxn.add_metabolites({ cpdD_c: -1.0, biomass_cpd: 1.0 }) toy.add_reactions([rxn1,rxn2,rxn3,rxn4,rxn5,rxn6,biomass_rxn]) # - # Add input and output exchanges toy.add_boundary(cpdA_e, type='exchange', reaction_id='EX_cpdA_e', lb=-1000.0, ub=1000.0) toy.add_boundary(biomass_cpd, type='demand', reaction_id='EX_biomass', lb=None, ub=1000.0) # + #toy.objective = 'biomass_rxn' # - toy # Write to an SBML for later use cobra.io.write_sbml_model(toy, 'data/toy_model.sbml') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + slideshow={"slide_type": "slide"} b_parameters = { "B_1": "D76.V1-level1", "B_2": "D76.V8", "B_3": "*None*", "B_4": "*None*", "B_5": "*None*" } m_parameters = { "M_1": "D76.M1", "M_2": "D76.M2", "M_3": "D76.M3", } f_parameters = { "F_D76.V1": ["*All*"], "F_D76.V10": ["*All*"], "F_D76.V2": ["K00-K92"], "F_D76.V27": ["*All*"], "F_D76.V9": ["*All*"] } i_parameters = { "I_D76.V1": "*All* (All Dates)", "I_D76.V10": "*All* (The United States)", "I_D76.V2": "K00-K92 (Diseases of the digestive system)", "I_D76.V27": "*All* (The United States)", "I_D76.V9": "*All* (The United States)" } v_parameters = { "V_D76.V1": "", "V_D76.V10": "", "V_D76.V11": "*All*", "V_D76.V12": "*All*", "V_D76.V17": "*All*", "V_D76.V19": "*All*", "V_D76.V2": "", "V_D76.V20": "*All*", "V_D76.V21": "*All*", "V_D76.V22": "*All*", "V_D76.V23": "*All*", "V_D76.V24": "*All*", "V_D76.V25": "*All*", "V_D76.V27": "", "V_D76.V4": "*All*", "V_D76.V5": ["15-24", "25-34", "35-44"], "V_D76.V51": "*All*", "V_D76.V52": "*All*", "V_D76.V6": "00", "V_D76.V7": "*All*", "V_D76.V8": "*All*", "V_D76.V9": "" } o_parameters = { "O_V10_fmode": "freg", "O_V1_fmode": "freg", "O_V27_fmode": "freg", "O_V2_fmode": "freg", "O_V9_fmode": "freg", "O_aar": "aar_none", "O_aar_pop": "0000", "O_age": "D76.V5", "O_javascript": "on", "O_location": "D76.V9", "O_precision": "9", "O_rate_per": "100000", "O_show_totals": "false", "O_timeout": "300", "O_title": "Digestive Disease Deaths, by Year and Race", "O_ucd": "D76.V2", "O_urban": "D76.V19" } vm_parameters = { "VM_D76.M6_D76.V10": "", "VM_D76.M6_D76.V17": "*All*", "VM_D76.M6_D76.V1_S": "*All*", "VM_D76.M6_D76.V7": "*All*", "VM_D76.M6_D76.V8": "*All*" } misc_parameters = { "action-Send": "Send", "finder-stage-D76.V1": "codeset", "finder-stage-D76.V1": "codeset", "finder-stage-D76.V2": "codeset", "finder-stage-D76.V27": "codeset", "finder-stage-D76.V9": "codeset", "stage": "request" } # + slideshow={"slide_type": "slide"} def createParameterList(parameterList): parameterString = "" for key in parameterList: parameterString += "\n" parameterString += "" + key + "\n" if isinstance(parameterList[key], list): for value in parameterList[key]: parameterString += "" + value + "\n" else: parameterString += "" + parameterList[key] + "\n" parameterString += "\n" return parameterString # + slideshow={"slide_type": "slide"} xml_request = "\n" xml_request += createParameterList(b_parameters) xml_request += createParameterList(m_parameters) xml_request += createParameterList(f_parameters) xml_request += createParameterList(i_parameters) xml_request += createParameterList(o_parameters) xml_request += createParameterList(vm_parameters) xml_request += createParameterList(v_parameters) xml_request += createParameterList(misc_parameters) xml_request += "" # + slideshow={"slide_type": "slide"} import requests url = 
"https://wonder.cdc.gov/controller/datarequest/D76" response = requests.post(url, data={"request_xml": xml_request, "accept_datause_restrictions": "true"}) if response.status_code == 200: data = response.text else: print("something went wrong") # + slideshow={"slide_type": "slide"} import bs4 as bs import pandas as pd # + code_folding=[] slideshow={"slide_type": "slide"} def xml2df(xml_data): root = bs.BeautifulSoup(xml_data,"lxml") all_records = [] row_number = 0 rows = root.find_all("r") for row in rows: if row_number >= len(all_records): all_records.append([]) for cell in row.find_all("c"): if 'v' in cell.attrs: try: all_records[row_number].append(float(cell.attrs["v"].replace(',',''))) except ValueError: all_records[row_number].append(cell.attrs["v"]) else: if 'r' not in cell.attrs: all_records[row_number].append(cell.attrs["l"]) else: for row_index in range(int(cell.attrs["r"])): if (row_number + row_index) >= len(all_records): all_records.append([]) all_records[row_number + row_index].append(cell.attrs["l"]) else: all_records[row_number + row_index].append(cell.attrs["l"]) row_number += 1 return all_records # + slideshow={"slide_type": "slide"} data_frame = xml2df(data) pd.DataFrame(data=data_frame, columns=["Year", "Race", "Deaths", "Population", "Crude Rate Per 100,000"]) # --- # jupyter: # jupytext: # text_representation: # extension: .ps1 # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: PowerShell # name: powershell # --- # + [markdown] azdata_cell_guid="baa1b762-f2a9-461f-a126-49f0d3b5e4f0" # # Import Existing Azure SQL Server Resources # # + [markdown] azdata_cell_guid="1ccbf203-d568-408c-a641-7e5cfa93802a" # The notebook will help accomplish the below steps as a part of Importing existing Azure SQL Server Resources # # - [ ] Connect to Azure Subscription # - [ ] Choose Resource Group (Read access required) # - [ ] Choose Microsoft SQL Server Resources to import # - [ ] Choose/Create Migration Storage # - [ ] Install Application + Data Portability function # - [ ] Install ADP Azure Batch processing pipeline # - [ ] Store SqlPackage.exe in Migration Storage for ADP function to hand to Az Batch # # Execute: # # - [ ] Check all prerequisites # - [ ] Kick off ADP service # # Monitor: # # - [ ] Check import status. # + [markdown] azdata_cell_guid="09f648ab-67e8-4127-94bd-649d5e970321" # ## Set Variables for the Notebook # + azdata_cell_guid="01888595-0d1c-445b-ba85-dd12caa30192" tags=[] # ADP Resource $AdpSubscription = "" # Azure Subscription ID/Name # The bacpac files and ADP Resources are assumed to be in the same subscription $AdpResourceGroup = "" # Azure Resource Group which contains the ADP Resources # SQL Server $TargetResourceGroupName = "" # Azure ResourceGroup into which the sql server backup needs to be restored $StorageAccountName = "" $ContainerName = "" $LogicalSQLServerName = "" # New sql server name $LSqlServerPassword = "" # Set Variables for ADP Resources $AdpFunc = $AdpResourceGroup + "Control" $AdpBatch = $AdpResourceGroup.ToLower() + "batch" $AdpVNET = $AdpResourceGroup + "Vnet" # + [markdown] azdata_cell_guid="4a35cf8e-6598-43e8-a0da-2c0c369df548" # ## Notebook Functions # Defines logical functions for the rest of the notebook. Function blocks are combined in a single cell that can be collapsed for readability or expanded for further examination. Nothing is executed until called later in the notebook. As a result, this cell is a requirement for any of the other cells below it. 
# + azdata_cell_guid="4730aec5-7aa6-4a2a-baf4-696dc74aa898" tags=["hide_input"] # Expand cell to view framework function Login-Azure { # query azure locations to test for existing az login session exists with valid access tocken $azureLocations = az account list-locations -o JSON 2>$null | ConvertFrom-Json if (!$azureLocations){ #If there are no az locations, there is no existing az login session $subscriptions = az login -o JSON | ConvertFrom-Json # Login } else { $subscriptions = az account list -o JSON | ConvertFrom-Json # getting subscriptions for the user to use in gridview } if(![string]::IsNullOrWhiteSpace($AdpSubscription)) #If there is a subscription specified by user in the variables section { $specified_Subscription= az account show --subscription $AdpSubscription -o json |ConvertFrom-Json if (!$specified_Subscription) #if specified subscription is not valid { $currentUser= az ad signed-in-user show --query "{displayName:displayName,UPN:userPrincipalName}" -o json|ConvertFrom-Json # get current logged in user infomration Write-Host "Refer below for the list of subscriptions for logged in account '$($currentUser.UPN)'`n" az account list --query "[].{Name:name,SubscriptionID:id}" -o table # list subscriptions under current logged in account } else { # if specified subscription is valid Write-Output "Using subscription... '$($specified_Subscription.name)' ... '$($specified_Subscription.id)'" } } else { # if no subscription is specified, users are given a gridview to select subscription from $selectedSubscription = $subscriptions | Select-Object -Property Name, Id | Out-GridView -PassThru $SubscriptionId = $selectedSubscription.Id $Subscription = $selectedSubscription.Name $AdpSubscription = $subscription Write-Output "Using subscription... '$AdpSubscription' ... 
'$SubscriptionId'" } } function Verify-ADPResources { [CmdletBinding()] param( [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$Subscription, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$ADPResourceGroupName, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$FunctionName, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$BatchAccountName, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$VNetName, [Parameter (Mandatory=$false)] [ValidateNotNullOrEmpty()] [string]$ApplicationName="SqlPackageWrapper", [Parameter (Mandatory=$false)] [ValidateNotNullOrEmpty()] [string]$ApplicationPackageVersionName="1", [Parameter (Mandatory=$false)] [ValidateNotNullOrEmpty()] [string]$SubNetName="default" ) # validate Subscription $specified_Subscription= az account show --subscription $Subscription -o json | ConvertFrom-Json if(!$specified_Subscription){ $currentUser= az ad signed-in-user show --query "{displayName:displayName,UPN:userPrincipalName}" -o json|ConvertFrom-Json # get current logged in user information Write-Host "Refer below for the list of subscriptions for logged in account '$($currentUser.UPN)'`n" az account list --query "[].{Name:name,SubscriptionID:id}" -o table # list subscriptions under current logged in account return } # validate ResourceGroup $specified_ResourceGroup= az group show -n $ADPResourceGroupName --subscription $Subscription -o json | ConvertFrom-Json if(!$specified_ResourceGroup) { return } $Installed = [ordered]@{} # ordered hash to store status of installation $countError=0 #Verify if VNet exists $specified_VNet= az network vnet show -n $VNetName -g $ADPResourceGroupName --subscription $Subscription -o JSON 2>$null |ConvertFrom-Json if(!$specified_VNet) { $Installed['VNET']="Not Found" $countError++ } else { $existingVnetSubnet = az network vnet subnet show -n $SubNetName --vnet-name $VNetName -g $ADPResourceGroupName --subscription $Subscription -o JSON 2>$null |ConvertFrom-Json if(!$existingVnetSubnet){ $Installed['VNET']="Default Subnet under"+ $VNetName + "Not Found" $countError++ } else { $Installed['VNET']="Installed" } } #Verify if FunctionApp Exists $specified_FunctionApp = az functionapp show -n $FunctionName -g $ADPResourceGroupName --subscription $Subscription -o JSON 2>$null | ConvertFrom-Json if(!$specified_FunctionApp) { $Installed['FunctionApp']="Not Installed" $countError++ } else { $Installed['FunctionApp']="Installed" } #check if Batch account exists $specified_BatchAccount = az batch account show -n $BatchAccountName -g $ADPResourceGroupName --subscription $Subscription -o JSON 2>$null | ConvertFrom-Json if(!$specified_BatchAccount) { $Installed['Batch']="Not Installed" $countError++ } else { $appPackageInstalled = az batch application package show --application-name $ApplicationName --version-name $ApplicationPackageVersionName -n $BatchAccountName -g $ADPResourceGroupName --subscription $Subscription -o JSON 2>$null | ConvertFrom-Json $connectedToStorage= $specified_BatchAccount.autoStorage if($connectedToStorage -and $appPackageInstalled){ # BatchAccount connected to storageaccount and applicationpackage is installed $Installed['Batch']="Installed" $Installed['Batch_ApplicationPackage']="Installed" $Installed['Batch_StorageAccount']="Connected to storage- "+$connectedToStorage.storageAccountId.Split("/")[-1] } if(!$connectedToStorage) { $Installed['Batch_StorageAccount']='Not Found' $countError++ } if(!$appPackageInstalled) { 
$Installed['Batch_ApplicationPackage']="Not Found" $countError++ } } if ($countError -gt 0){ Write-Output "ADP Resources are not installed correctly. Please refer the list below and use the Bootstrap NB to install ADP Resources" } $Installed if ($countError -eq 0){ Write-Output "`nFound all ADP Resources." } } function Prepare-InputForImportFunction { [CmdletBinding()] param( [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$Subscription, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$ADPResourceGroupName, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$FunctionName, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$BatchAccountName, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$BackupFiles_StorageAccount, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$BackupFiles_ContainerName, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$VNetName, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$TargetRGName, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$SqlServerName, [Parameter (Mandatory=$true)] [ValidateNotNullOrEmpty()] [string]$SqlServerPassword ) $Result = @{} # Build Header ## get Function key $FunctionAppID =az functionapp show -n $FunctionName -g $ADPResourceGroupName --subscription $Subscription --query "[id]" -o JSON 2>$null | ConvertFrom-Json $DefaultHostKey = az rest --method post --uri "$FunctionAppID/host/default/listKeys?api-version=2018-11-01" --query "[functionKeys.default]" -o JSON 2>$null | ConvertFrom-Json ## Build Json Object for Headers $headers = @{ 'x-functions-key' = $DefaultHostKey } $Result['Header']=$headers # Build string for Function URL $specified_Subscription= az account show --subscription $Subscription -o json |ConvertFrom-Json #Get SpecifiedSubscriptionID $SubscriptionID= $specified_Subscription.id $FunctionUrl = 'https://'+ $FunctionName +'.azurewebsites.net/api/subscriptions/'+ $SubscriptionID +'/resourceGroups/' + $ADPResourceGroupName + '/Import' $Result['FunctionURL']=$FunctionUrl # Set parameter variables for Body ## Get BatchAccountURL $specified_Batch = az batch account show -n $BatchAccountName -g $ADPResourceGroupName --subscription $Subscription -o JSON 2>$null | ConvertFrom-Json $BatchAccountURL = 'https://' + $specified_Batch.accountEndpoint ## Get default SubNet ID for specified VNet $specified_VNet_SubNet = az network vnet subnet show -g $ADPResourceGroupName --vnet-name $VNetName -n 'default' --subscription $Subscription -o JSON |ConvertFrom-Json $VNetSubNetID = $specified_VNet_SubNet.id ## Create access token to source sql server $targetAccessToken = az account get-access-token --resource=https://database.windows.net --query accessToken $targetAccessToken ## Build JSon object for Body $Body = @{ batchAccountUrl = $BatchAccountURL VNetSubnetId= $VNetSubNetID storageAccountName = $BackupFiles_StorageAccount containerName = $BackupFiles_ContainerName targetSqlServerResourceGroupName = $TargetRGName targetSqlServerName = $SQLServerName userName = $SqlServerLogin targetAccessToken = $targetAccessToken sqlAdminPassword = } $json = $Body | ConvertTo-Json $Result['Body']=$json $Result } function Provision-FuncRBAC { [CmdletBinding()] param ( [Parameter(Mandatory=$true)][ValidateNotNullOrEmpty()][string]$Subscription, [Parameter(Mandatory=$true)][ValidateNotNullOrEmpty()][string]$ResourceGroupName, [Parameter(Mandatory=$true)][ValidateNotNullOrEmpty()][string]$FunctionName, 
[Parameter(Mandatory=$true)][ValidateNotNullOrEmpty()][string]$ScopeRGName, [Parameter(Mandatory=$false)][ValidateNotNullOrEmpty()][string]$Role="Contributor" ) # Get the scope resource group's ID $scopeID = az group show --resource-group $ScopeRGName --subscription $Subscription --query "[id]" -o JSON | ConvertFrom-Json if(!$scopeID) { Write-Output "Provision-FuncRBAC failed." return } else { Write-Output "Found scope '$ScopeRGName' with ID... '$scopeID'" } # Get the az function principal id $app_PrincipalID = az functionapp show -n $FunctionName --resource-group $ResourceGroupName --subscription $Subscription --query "[identity.principalId]" -o JSON | ConvertFrom-Json if(!$app_PrincipalID) { Write-Output "Provision-FuncRBAC failed." return } else { Write-Output "Found principal id of Azure function '$FunctionName'... '$app_PrincipalID'" } # Verify if a role assignment has been created for function $app_RoleAssignmentDefinition= az role assignment list --subscription $Subscription --assignee $app_PrincipalID --scope $scopeID --query "[].roleDefinitionName" -o JSON 2>$null | ConvertFrom-Json if($app_RoleAssignmentDefinition -eq $Role) { Write-Output "Found Role Assignment for Principal ID.. '$app_PrincipalID' with Role.. '$app_RoleAssignmentDefinition' . No work needed" } else { # Continue to setup RBAC, once we verify an assignment is not setup and all the resources exist Write-Output "Creating new role assignment by running: 'az functionapp identity assign -n $FunctionName --role $Role -g $ResourceGroupName --scope $scopeID --subscription $Subscription'" Write-Warning "If your account does not have the access to assign new roles as Owner or User Access Administrator for the resource group, than you will need to contact your Azure AD Administrator to assign a service principle using the commands above" az functionapp identity assign -n $FunctionName --role $Role -g $ResourceGroupName --scope $scopeID --subscription $Subscription } } Write-Host "Helper Functions Created successfully" # + [markdown] azdata_cell_guid="5cd37536-c2c5-4b97-8383-fa761d7cbda3" # ## Connect to Azure Account # Run the below cell to login to an Azure account. Be sure to check the Windows Taskbar for a login dialog box underneath the notebook or other windows or by pressing Alt+TAB. # + azdata_cell_guid="e9cd7ac2-ff0b-43b4-baf8-b71d3128885c" Login-Azure # + [markdown] azdata_cell_guid="bec05e08-67ba-4071-8459-5b32dc7f876a" # ## Verify ADP Resources # Verify if ADP resources exists in specified Resource Group # + azdata_cell_guid="e89f6eb9-fcbc-4b7d-bcd1-37f1eb52cc02" tags=["hide_input"] Verify-ADPResources -Subscription $AdpSubscription -ADPResourceGroupName $AdpResourceGroup ` -BatchAccountName $AdpBatch -FunctionName $AdpFunc -VNetName $AdpVNET # + [markdown] azdata_cell_guid="0c95bb17-b3cf-4a8b-8aa6-690ac6139e37" # ## Verify RBAC of Azure Function # Roles based access control is a function of Azure that assigns services to a role with a specific access scope (or area of access). The ADP Orchestrator function requires Contributor access over the Resource Group where the SQL Server to be exported exists. The function below will attempt to create the role assignment. Any user executing this notebook will need to have Owner or User Access Administrator permissions to the Resource Group to assign the permission. Otherwise, contact your Azure AD Administrator. 
# + azdata_cell_guid="c374e57c-51ec-4a3f-9966-1e50cefc8510" Provision-FuncRBAC -FunctionName $AdpFunc -ScopeRGName $TargetResourceGroupName -ResourceGroupName $AdpResourceGroup -Subscription $AdpSubscription # + [markdown] azdata_cell_guid="b517742f-fa3d-4a4f-9ec0-b320c71738d4" # ## Prepare input variable for Orchestrator Import Function # + azdata_cell_guid="bfba288e-3466-4c57-9f3c-5281753601cf" tags=["hide_input"] $InputForImportFunction = Prepare-InputForImportFunction -Subscription $AdpSubscription -ADPResourceGroupName $AdpResourceGroup ` -BatchAccountName $AdpBatch -FunctionName $AdpFunc -TargetRGName $TargetResourceGroupName ` -VNetName $AdpVNET -BackupFiles_StorageAccount $StorageAccountName -BackupFiles_ContainerName $ContainerName ` -SqlServerName $LogicalSQLServerName -SqlServerPassword $ Write-Host "Setting parameter variables for Import Function Call..." $InputForImportFunction.Header $InputForImportFunction.FunctionURL $InputForImportFunction.Body # + [markdown] azdata_cell_guid="8f615b19-1e1d-405f-9f4a-ad7cc303487a" # ## Start Import of SQL Server # Run the cell to start import from specified backup files # + azdata_cell_guid="7e251aa5-7a92-4212-8c81-394c2058f1fa" tags=["hide_input"] $importResponse = Invoke-RestMethod -Method 'Post' -Headers $InputForImportFunction.Header -Uri $InputForImportFunction.FunctionURL -Body $InputForImportFunction.Body -ContentType 'application/json' $importResponse # + [markdown] azdata_cell_guid="cb1988a9-9797-49f0-8ddb-1634df48f027" # ## Get Status of Import Operation # Run the cell to get import operation status # + azdata_cell_guid="3c59dc7b-2648-46ee-b57e-d9e99580a093" tags=["hide_input"] $statusCheckResponse = Invoke-RestMethod -Method 'Get' -Uri $importResponse.statusQueryGetUri Write-Host "Orchestrator Request: " $statusCheckResponse.name Write-Host "`tOrchestrator Status: " $statusCheckResponse.runtimeStatus $outputParams = $statusCheckResponse.output if ($outputParams) { $batchJobID = $outputParams.Item2[2] $containerUrl = $outputParams.Item2[3] Write-Host "`tCreated Import Batch Job ID: " $batchJobId $azBatchLogin = az batch account login --name $AdpBatch --resource-group $AdpResourceGroup -o JSON | ConvertFrom-Json $jobStatus = az batch job show --job-id $batchJobID -o JSON | ConvertFrom-Json Write-Host "Import Job running on Pool: " $jobStatus.poolInfo.poolId Write-Host "`Import Request Status: " $jobStatus.state $taskList = az batch task list --job-id $batchJobId -o JSON | ConvertFrom-Json if ($taskList) { foreach ($task in $taskList) { Write-Host "`tDatabase Import Task ID: " $task.id Write-Host "`t`tStatus: " $task.state $taskExecution = $task.executionInfo if ($taskExecution) { Write-Host "`t`tResult: " $taskExecution.result } } } } # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # t-test # # ___ # # There may be situations where the standard deviation of the population is unknown, and the sample size is small. In all such cases, we use the T-distribution. This distribution is also called *Student’s T distribution*. # # The following are the chief characteristics of the T-distribution: # + The T-distribution is similar in shape to a normal distribution, except that it is slightly flatter. # + The sample size is small, generally less than 30. # + The T-distribution uses the concept of degrees of freedom. 
The degrees of freedom are the number of observations in a statistical test that can be estimated independently. # # Example: Suppose we have three numbers x, y and z. If we know the mean is 5. we can say the sum of numbers should be 5*3 = 15. Now we have freedom to choose any number as x and y, but not z. z should be choosen in a way such that the numbers add upto 15 so the mean turns to be 5. So even though we have 3 numbers we have freedom to choose only 2. i.e. we have 2 degrees of freedom. # # # + As the sample size decreases, the degrees of freedom reduce, or in other words, the certainty with which the population parameter can be predicted from the sample parameter reduces.The degrees of freedom (df) in the T-distribution is the number of samples (n) -1, or in other words, df = n - 1 # # ![](data/t_dist.png) # # The formula for the critical test statistic in a one-sample t-test is given by the following # equation: # $$t = \frac{\overline x - \mu}{\frac{s}{\sqrt n}}$$ # # where $\overline x$ is the sample mean, $\mu$ is the population mean, $s$ is the sample standard deviation and $n$ is the sample size. # # ## One-sample t-test # # A one-sample t-test is similar to a one-sample z-test, with the following differences: # 1. The size of the sample is small (<30). # 2. The population standard deviation is not known; we use the sample standard deviation(s) to calculate the standard error. # 3. The critical statistic here is the t-statistic, given by the following formula: # $$t = \frac{\overline x - \mu}{\frac{s}{\sqrt n}}$$ # A coaching institute, preparing students for an exam, has 200 students, and the average score of the students in the practice tests is 80. It takes a sample of nine students and records their scores; it seems that the average score has now increased. These are the scores of these ten students: 80, 87, 80, 75, 79, 78, 89, 84, 88. Conduct a hypothesis test at a 5% significance level to verify if there is a significant increase in the average score. # # $H_0:\mu = 80$ # $H_1:\mu > 80$ # + import numpy as np import scipy.stats as stats sample = np.array([80,87,80,75,79,78,89,84,88]) stats.ttest_1samp(sample,80) # - # Since the p-value is greater than 0.05, we fail to reject the null hypothesis. Hence, we cannot conclude that the average score of students has changed. # ## Two-sample t-test # # A two-sample t-test is used when we take samples from two populations, where both the sample sizes are less than 30, and both the population standard deviations are unknown. Formula: # # $$t = \frac{\overline x_1 - \overline x_2}{\sqrt{S_p^2(\frac{1}{n_1}+\frac{1}{n_2})}}$$ # # Where $x_1$ and $x_2$ are the sample means # # The degrees of freedom: $df=n_1 + n_2 − 2$ # # The pooled variance $S_p^2 = \frac{(n_1 -1)S_1^2 + (n_2-1)S_2^2}{n_1+n_2-2}$ # # A coaching institute has centers in two different cities. It takes a sample of ten students from each center and records their # scores, which are as follows: # # |Center A:| 80, 87, 80, 75, 79, 78, 89, 84, 88| # |---------|-----------------------------------| # |Center B:| 81, 74, 70, 73, 76, 73, 81, 82, 84| # # Conduct a hypothesis test at a 5% significance level, and verify if there a significant difference in the average scores of the # students in these two centers. 
# # $H_0:\mu_1 = \mu_2$ # $H_1:\mu_1 != \mu_2$ # + a = np.array([80,87,80,75,79,78,89,84,88]) b = np.array([81,74,70,73,76,73,81,82,84]) stats.ttest_ind(a,b) # - # We can conclude that there is a significant difference in the average scores of students in the two centers of the coaching # institute since the p-value is less than 0.05 # ## Two-sample t-test for paired samples # # This test is used to compare population means from samples that are dependent on each other, that is, sample values are measured twice using the same test group. # # + A measurement taken at two different times (e.g., pre-test and post-test score with an intervention administered between the two time points) # + A measurement taken under two different conditions (e.g., completing a test under a "control" condition and an "experimental" condition) # # This equation gives the critical value of the test statistic for a paired two-sample t-test: # # $$t = \frac{\overline d}{s/\sqrt{n}}$$ # # Where $\overline d$ is the average of the difference between the elements of the two samples. Both # the samples have the same size, $n$. # S = standard deviation of the differences between the elements of the two samples = # $$\sqrt{\frac{\sum d^2 -((\sum d)^2/ n)}{n -1}}$$ # The coaching institute is conducting a special program to improve the performance of the students. The scores of the same set of students are compared before and after the special program. Conduct a hypothesis test at a 5% significance level to verify if the scores have improved because of this program. # + a = np.array([80,87,80,75,79,78,89,84,88]) b = np.array([81,89,83,81,79,82,90,82,90]) stats.ttest_rel(a,b) # - # We can conclude, at a 5% significance level, that the average score has improved after the # special program was conducted since the p-value is less than 0.05 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] slideshow={"slide_type": "slide"} # # Aproximações e Erros de Arredondamento # # _Prof. Dr. 
_ # + [markdown] slideshow={"slide_type": "fragment"} # ## **Erros de Arredondamento** # ### Épsilon de Máquina # # - #Calcula o épsilon de máquina epsilon = 1 while (epsilon+1)>1: epsilon = epsilon/2 epsilon = 2 * epsilon print(epsilon) # Aproximação de uma função por Série de Taylor # # + tags=[] import numpy as np import matplotlib.pyplot as plt def f(x): return -0.1*x**4 -0.15*x**3 -0.5*x**2 -0.25*x +1.2 def df(x): return -0.4*x**3 -0.45*x**2 -1.0*x -0.25 def ddf(x): return -1.2*x**2 -0.9*x -1.0 def dddf(x): return -2.4*x -0.9 def d4f(x): return -2.4 x1 = 0 x2 = 1 # Aproximação de ordem zero fO_0 = f(x1) # Valor previsto erroO_0 = f(x2) - fO_0 # valor exato menos o valor previsto # Aproximação de primeira ordem fO_1 = f(x1) + df(x1)*(x2-x1) # Valor previsto erroO_1 = f(x2) - fO_1 # valor exato menos o valor previsto # Aproximação de segunda ordem fO_2 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 # Valor previsto erroO_2 = f(x2) - fO_2 # valor exato menos o valor previsto # Aproximação de terceira ordem fO_3 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 # Valor previsto 3!=3*2*1=6 erroO_3 = f(x2) - fO_3 # valor exato menos o valor previsto # Aproximação de quarta ordem fO_4 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 + (d4f(x1)/24)*(x2-x1)**4 # Valor previsto 4!=4*3*2*1=24 erroO_4 = f(x2) - fO_4 # valor exato menos o valor previsto print('Ordem ~f(x) Erro') print('0 {0:8f} {1:3f}'.format(fO_0, erroO_0)) print('1 {0:8f} {1:8f}'.format(fO_1, erroO_1)) print('2 {0:8f} {1:8f}'.format(fO_2, erroO_2)) print('3 {0:8f} {1:8f}'.format(fO_3, erroO_3)) print('4 {0:8f} {1:8f}'.format(fO_4, erroO_4)) # Plotagem dos gráficos xx = np.linspace(-2,2.0,40) yy = f(xx) plt.plot(xx,yy,'b',x2,fO_0,'*',x2,fO_1,'*r', x2, fO_2, '*g', x2, fO_3,'*y', x2, fO_4, '*r') plt.savefig('exemplo1.png') plt.show() # + tags=[] # Exercício do dia 17/08/2020 import numpy as np import matplotlib.pyplot as plt def f(x): return np.sin(x) def df(x): return np.cos(x) def ddf(x): return -np.sin(x) def dddf(x): return -np.cos(x) def d4f(x): return np.sin(x) x1 = np.pi/2 x2 = 3*np.pi/4 # igual a pi/2 +pi/4 # Aproximação de ordem zero fO_0 = f(x1) # Valor previsto erroO_0 = f(x2) - fO_0 # valor exato menos o valor previsto # Aproximação de primeira ordem fO_1 = f(x1) + df(x1)*(x2-x1) # Valor previsto erroO_1 = f(x2) - fO_1 # valor exato menos o valor previsto # Aproximação de segunda ordem fO_2 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 # Valor previsto erroO_2 = f(x2) - fO_2 # valor exato menos o valor previsto # Aproximação de terceira ordem fO_3 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 # Valor previsto 3!=3*2*1=6 erroO_3 = f(x2) - fO_3 # valor exato menos o valor previsto # Aproximação de quarta ordem fO_4 = f(x1) + df(x1)*(x2-x1) + (ddf(x1)/2)*(x2-x1)**2 + (dddf(x1)/6)*(x2-x1)**3 + (d4f(x1)/24)*(x2-x1)**4 # Valor previsto 4!=4*3*2*1=24 erroO_4 = f(x2) - fO_4 # valor exato menos o valor previsto print('Ordem ~f(x) Erro') print('0 {0:8f} {1:3f}'.format(fO_0, erroO_0)) print('1 {0:8f} {1:8f}'.format(fO_1, erroO_1)) print('2 {0:8f} {1:8f}'.format(fO_2, erroO_2)) print('3 {0:8f} {1:8f}'.format(fO_3, erroO_3)) print('4 {0:8f} {1:8f}'.format(fO_4, erroO_4)) # Plotagem dos gráficos xx = np.linspace(0,2.*np.pi,40) yy = f(xx) plt.plot(xx,yy,'b',x2,fO_0,'*',x2,fO_1,'*b', x2, fO_2, '*g', x2, fO_3,'*y', x2, fO_4, '*r') plt.savefig('exemplo2.png') plt.show() # + [markdown] slideshow={"slide_type": "fragment"} # ### Exercício - Aula 
17/08/2020 # Utilizando o exemplo anterior faça expansões de Taylor para a função seno, de ordem zero até ordem 4, a partir de $x = \pi/2$ com $h = \pi/4$, ou seja, para estimar o valor da função em $x_{i+1} = 3 \pi/4$. E responda os check de verificação no AVA: # # 1. Check: Qual o erro da estimativa de ordem zero? # 2. Check: Qual o erro da estimativa de quarta ordem? # # + [markdown] slideshow={"slide_type": "fragment"} # ### Exercício - Aula 24/08/2020 # Utilizando os exemplo e exercícios anteriores faça os gráficos das expansões de Taylor para as funções estudadas, de ordem zero até ordem 4, salve o arquivo em formato png e faça o upload no AVA. # + [markdown] slideshow={"slide_type": "fragment"} # ## Referências # + [markdown] slideshow={"slide_type": "fragment"} # . (2013). **Numerical Methods in Engineering With Python 3**. Cambridge: Cambridge.
    # Brasil, R.M.L.R.F, ., . (2015) **Métodos Numéricos e Computacionais na Prática de Engenharias e Ciências**, São Paulo: # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import scipy as sp import pandas as pd import seaborn as sns import matplotlib as mpl import matplotlib.pyplot as plt # %matplotlib inline mpl.rcParams['figure.figsize'] = (8, 4) df = pd.read_table('consistency.tsv') df.head() df.shape[0] # #### Simple sum of taxa df.iloc[:, 3:].sum() # #### Sum taxa above threshold def sum_above(df, th): dfx = df.query('count >= %d' % th) print('%d taxa >= %d' % (dfx.shape[0], th)) print(dfx.iloc[:, 3:].sum()) sum_above(df, 2) sum_above(df, 5) sum_above(df, 10) sum_above(df, 20) sum_above(df, 50) sum_above(df, 100) # #### Weight by taxon size dfs = df.query('count > 1')[['rank', 'taxon']].copy() for tree in df.columns[3:]: dfs[tree] = df[tree] * df['count'] dfs.head() dfs.iloc[:, 2:].sum() # #### Sum by rank ranks = ['kingdom', 'phylum', 'class', 'order', 'family', 'genus', 'species'] for rank in ranks: print(rank) print(dfs.query('rank == "%s"' % rank).iloc[:, 2:].sum()) print() # #### Summarize dfr = pd.DataFrame() for rank in ranks[1:]: df_ = dfs.query('rank == "%s"' % rank).iloc[:, 2:].sum() df_.name = rank dfr = pd.concat([dfr, df_], axis=1, sort=True) dfr dfr.astype(int) # #### Plot th = 10 dfc = df.query('count >= %d' % th).drop(['count'], axis=1).melt( ['rank', 'taxon'], var_name='method', value_name='consistency') dfc.head() dfc = dfc.query('rank != "kingdom"') dfc = dfc[dfc['method'].isin(['astral', 'concat.cons', 'concat.rand'])] dfc.shape dfc['rank'].value_counts() ax = sns.barplot(x='method', y='consistency', hue='rank', data=dfc) ax.set_yscale('log') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0); fig = ax.get_figure() fig.tight_layout() fig.savefig('output.pdf', bbox_to_inches='tight') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Lidar remote sensing of snow # # ## Intro ASO # # See an overview of ASO operations [here](https://www.cbrfc.noaa.gov/report/AWRA2019_Pres3.pdf) # # ASO set-up: Riegl Q1560 dual laser scanning lidar 1064nm (image credit ASO) # # # # # # ASO data collection (image credit ASO) # # # # # # # # Laser reflections together create a 3D point cloud of the earth surface (image credit ASO) # # # Point clouds can be classified and processed using specialised software such as [pdal](https://pdal.io/). # # We won't cover that here, because ASO has already processed all the snow depth datasets for us. # # ASO rasterises the point clouds to produce snow depth maps as rasters. Point clouds can also be rasterised to create canopy height models (CHMs) or digital terrain models (DTMs). These formats allow us to analyse the information easier. 
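#
# To build some intuition for what "rasterising a point cloud" means, below is a minimal,
# self-contained sketch that grids a toy point cloud by averaging the elevations of the
# points falling in each cell. It is illustrative only: ASO's production pipeline (and
# tools such as pdal) additionally classify, filter and interpolate the points, and the
# `rasterize_points` helper plus the toy data are invented for this example.

# +
import numpy as np

def rasterize_points(x, y, z, cell_size, bounds):
    """Toy gridding routine: average z of all points that fall in each cell."""
    xmin, ymin, xmax, ymax = bounds
    ncols = int(np.ceil((xmax - xmin) / cell_size))
    nrows = int(np.ceil((ymax - ymin) / cell_size))
    cols = np.clip(((x - xmin) / cell_size).astype(int), 0, ncols - 1)
    rows = np.clip(((ymax - y) / cell_size).astype(int), 0, nrows - 1)
    z_sum = np.zeros((nrows, ncols))
    z_cnt = np.zeros((nrows, ncols))
    np.add.at(z_sum, (rows, cols), z)   # accumulate point elevations per cell
    np.add.at(z_cnt, (rows, cols), 1)   # count points per cell
    with np.errstate(invalid='ignore'):
        return z_sum / z_cnt            # NaN where no points landed

# toy "point cloud": 1000 random points over a 10 m x 10 m patch
rng = np.random.default_rng(0)
xpts = rng.uniform(0, 10, 1000)
ypts = rng.uniform(0, 10, 1000)
zpts = 0.5 + 0.1 * xpts                 # fake surface heights with a gentle slope
grid = rasterize_points(xpts, ypts, zpts, cell_size=1.0, bounds=(0, 0, 10, 10))
print(grid.shape)                       # (10, 10)
# -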
# ASO states "Snow depths in exposed areas are within 1-2 cm at the 50 m scale" # # # # However, point-to-point variability can exist between manual and lidar measurements due to: # - vegetation, particularly shrubs # - geo-location accuracy of manual measurements # - combination of both in forests # ## Basic data inspection # ### Import the packages needed for this tutorial # + # general purpose data manipulation and analysis import numpy as np # packages for working with raster datasets import rasterio from rasterio.mask import mask from rasterio.plot import show from rasterio.enums import Resampling import xarray # allows us to work with raster data as arrays # packages for working with geospatial data import geopandas as gpd import pycrs from shapely.geometry import box # import packages for viewing the data import matplotlib.pyplot as pyplot # - #define paths import os CURDIR = os.path.dirname(os.path.realpath("__file__")) # matplotlib functionality # %matplotlib inline # # %matplotlib notebook # The command *%matplotlib notebook* allows you to plot data interactively, which makes things way more interesting. If you want, you can test to see if this works for you. If not, go back to *%matplotlib inline* # ## Data overview and visualisation # + # open the raster fparts_SD_GM_3m = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clipped.tif" SD_GM_3m = rasterio.open(fparts_SD_GM_3m) # - # check the CRS - is it consistent with other datasets we want to use? SD_GM_3m.crs # ASO datasets are in EPSG: 32612. However, you might find other SnowEx datasets are in EPGS:26912. This can be changed using reproject in rioxarray. See [here](https://corteva.github.io/rioxarray/stable/examples/reproject.html) for an example. # For now, we'll stay in 32612. # With the above raster open, you can look at the different attributes of the raster. For example, the cellsize: SD_GM_3m.res # The raster boundaries... SD_GM_3m.bounds # And the dimensions. Note this is in pixels, not in meters. To get the total size, you can multiply the dimensions by the resolution. print(SD_GM_3m.width,SD_GM_3m.height) # rasterio.open allows you to quickly look at the data... fig1, ax1 = pyplot.subplots(1, figsize=(5, 5)) show((SD_GM_3m, 1), cmap='Blues', interpolation='none', ax=ax1) # While this can allow us to very quickly visualise the data, it doesn't show us a lot about the data itself. # # We can also open the data from the geotiff as as a data array, giving us more flexibility in the data analysis. # First, close the rasterio file SD_GM_3m.close() # Now we can re-open the data as an array and visualise it using pyplot. # + dat_array_3m = xarray.open_rasterio(fparts_SD_GM_3m) # plot the raster fig2, ax2 = pyplot.subplots() pos2 = ax2.imshow(dat_array_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5); ax2.set_title('GM Snow Depth 3m') fig2.colorbar(pos2, ax=ax2) # - # We set the figure to display the colorbar with a maximum of 1.5m. But you can see in the north of the area there are some very deep snow depths. np.nanmax(dat_array_3m) # Optional - use the interactive plot to pan and zoom in and out to have a look at the snow depth distribution across the Grand Mesa. This should work for you if you run your notebook locally. # We can clip the larger domain to a smaller areas to better visualise the snow depth distributions in the areas we're interested in. # # Depending on the field site, you could look at distributions in different slope classes, vegetation classes (bush vs forest vs open) or aspect classes. 
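#
# As a sketch of how such a per-class comparison could be set up: align a classification
# raster to the snow-depth grid, then pull each class out with a boolean mask. The
# tutorial data does not ship a slope/vegetation/aspect raster, so the `veg_class` array
# below is a made-up stand-in (it simply flags cells deeper than 1 m) purely to give the
# loop two groups to summarise; no-data cells are assumed to be stored as NaN.

# +
depth = dat_array_3m.data[0, :, :]            # snow-depth band opened above

veg_class = np.zeros(depth.shape, dtype=int)  # stand-in class raster: class 0 everywhere...
veg_class[depth > 1.0] = 1                    # ...with deep cells flagged as class 1

for label, name in [(0, 'class 0'), (1, 'class 1')]:
    vals = depth[(veg_class == label) & np.isfinite(depth)]
    if vals.size:
        print(f'{name}: n={vals.size}, mean={vals.mean():.2f} m, median={np.median(vals):.2f} m')
# -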
# # For now, we'll focus on a forest-dominated area and use the canopy height model (CHM) to clip the snow depth data. # ## Canopy height models # # We will use an existing raster of a canopy height model (CHM) to clip our snow depth map. This CHM is an area investigated by [Mazzotti et al. 2019](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019WR024898). You can also access the data [here](https://www.envidat.ch/#/metadata/als-based-snow-depth). # load the chm chm = xarray.open_rasterio('data/CHM_20160926GMb_700x700_EPSG32612.tif') # check the crs is the same as the snow depth data chm.crs # Don't forget that if the coordinate systems in your datasets don't match then you will need to transform one of them. You can change the coordinate systems using the links above. (Note, I've already transformed this dataset from EPSG 32613). # Let's have a quick look at the chm data as an xarray. chm # You can see the resolution of the CHM is 0.5m, which is much higher than the snow depth dataset. # # Can you think why we would want to have CHM at such a high resolution? # There are two main reasons: # - resolution high enough to represent individual trees # - maximum canopy height can mis-represented in lower resolution CHMs # We can extract simple statistics from the dataset the same way you would with a numpy dataset. For example: chm.data.max() # plot the CHM, setting the maximum color value to the maximum canopy height in the dataset fig3, ax3 = pyplot.subplots() pos3 = ax3.imshow(chm.data[0,:,:], cmap='viridis', vmin=0, vmax=chm.data.max()) ax3.set_title('CHM Grand Mesa B') fig3.colorbar(pos3, ax=ax3) # If you play around and zoom in, you can see individual trees. If you were wanting to investigate the role of canopy structure at the individual tree level on snow depth distribution, this is the level of detail you would want to work with. # ## Clipping rasters # Let's clip the snow depth dataset to the same boundaries as the CHM. # One way to clip the snow depth raster is to use another raster as an area of interest. # # We will use the CHM as a mask, following [this](https://automating-gis-processes.github.io/CSC18/lessons/L6/clipping-raster.html) tutorial. # # You can also use shapefiles (see [here](https://rasterio.readthedocs.io/en/latest/topics/masking-by-shapefile.html) for another example) if you want to use more complicated geometry, or you can manually define your coordinates. # # We can extract the boundaries of the CHM and create a bounding box using the Shapely package bbox = box(chm.x.min(),chm.y.min(),chm.x.max(),chm.y.max()) print(bbox) # If you want to come back and do this later, you don't need a raster or shapefile. If you only know the min/max coordinates of the area you're interested in, that's fine too. # + # bbox = box(minx,miny,maxx,maxy) # - # You could also add a buffer around your CHM, if you wanted to see a bigger area: # + #buffer = 200 #bbox = box(cb[0]-buffer,cb[1]-buffer,cb[2]+buffer,cb[3]+buffer) # - # But for now let's just stay with the same limits as the CHM. # We need to put the bounding box into a geodataframe geo = gpd.GeoDataFrame({'geometry': bbox}, index=[0], crs=chm.crs) # And then extract the coordinates to a format that we can use with rasterio. def getFeatures(gdf): """Function to parse features from GeoDataFrame in such a manner that rasterio wants them""" import json return [json.loads(gdf.to_json())['features'][0]['geometry']] coords = getFeatures(geo) print(coords) # After all that, we're ready to clip the raster. 
# We do this using the mask function from rasterio, specifying crop=True.
#
# We also need to re-open the dataset as a rasterio object.

SD_GM_3m.close()
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)

out_img, out_transform = mask(SD_GM_3m, coords, crop=True)

# We also need to copy the meta information across to the new raster
out_meta = SD_GM_3m.meta.copy()
epsg_code = int(SD_GM_3m.crs.data['init'][5:])

# And update the metadata with the new dimensions etc.
out_meta.update({"driver": "GTiff",
                 "height": out_img.shape[1],
                 "width": out_img.shape[2],
                 "transform": out_transform,
                 "crs": pycrs.parse.from_epsg_code(epsg_code).to_proj4()}
                )

# Next, we should save this new raster. Let's call the area 'GMb', to match the name of the CHM.

# +
out_tif = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clip_GMb.tif"

with rasterio.open(out_tif, "w", **out_meta) as dest:
    dest.write(out_img)
# -

# To check the result is correct, we can read the data back in.

SD_GMb_3m = xarray.open_rasterio(out_tif)

# plot the new SD map
fig4, ax4 = pyplot.subplots()
pos4 = ax4.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax4.set_title('GMb Snow Depth 3m')
fig4.colorbar(pos4, ax=ax4)

# Here's an aerial image of the same area. What patterns do you see in the snow depth map when compared to the aerial image?
#
# (Image from Google Earth)
#
# If you plotted snow depth compared to canopy height, what do you think you'd see in the graph?

# ## Raster resolution

# ASO also creates a 50m SD data product. So, let's have a look at that in the same area.

SD_GM_50m = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m.tif")

out_img_50, out_transform_50 = mask(SD_GM_50m, coords, crop=True)
out_meta_50 = SD_GM_50m.meta.copy()
epsg_code_50 = int(SD_GM_50m.crs.data['init'][5:])

out_meta_50.update({"driver": "GTiff",
                    "height": out_img_50.shape[1],
                    "width": out_img_50.shape[2],
                    "transform": out_transform_50,
                    "crs": pycrs.parse.from_epsg_code(epsg_code_50).to_proj4()}
                   )

# +
out_tif_50 = "data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif"

with rasterio.open(out_tif_50, "w", **out_meta_50) as dest:
    dest.write(out_img_50)
# -

SD_GM_50m.close()
SD_GMb_50m = xarray.open_rasterio(out_tif_50)

# Now we have the two rasters clipped to the same area, we can compare them.

# +
### plot them side by side with minimum and maximum values of 0 m and 1.5 m
fig5, ax5 = pyplot.subplots()
pos5 = ax5.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax5.set_title('GMb Snow Depth 3m')
fig5.colorbar(pos5, ax=ax5)

fig6, ax6 = pyplot.subplots()
pos6 = ax6.imshow(SD_GMb_50m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax6.set_title('GM Snow Depth 50m')
fig6.colorbar(pos6, ax=ax6)
# -

# Let's have a look at the two resolutions next to each other. What do you notice?

# We can look at the data in more detail. For example, histograms show us the snow depth distribution across the area.

# +
# plot histograms of the snow depth distributions across a range from 0 to 1.5 m in 2.5 cm increments
fig7, ax7 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_3m.data.flatten(), bins=np.arange(0, 1.5 + 0.025, 0.025));
ax7.set_title('GM Snow Depth 3m')
ax7.set_xlim((0,1.5))

fig8, ax8 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_50m.data.flatten(), bins=np.arange(0, 1.5 + 0.025, 0.025));
ax8.set_title('GM Snow Depth 50m')
ax8.set_xlim((0,1.5))
# -

# Things to think about:
# - What are the maximum and minimum snow depths between the two datasets?
# - Does the distribution in snow depths across the area change with resolution? # - How representative are the different datasets for snow depth at different process scales? Can you see the forest in the 50m data? # - There are snow free areas in the 3m data, but not in the 50m. What do you think this means for validating modelled snow depletion? SD_GMb_3m.close() SD_GMb_50m.close() chm.close() # ## Resampling # If you are looking to compare your modelled snow depth, you can resample your lidar snow depth to the same resolution as your model. # You can see the code [here](https://rasterio.readthedocs.io/en/latest/topics/resampling.html) # # Let's say we want to sample the whole domain at 250 m resolution. # + # Resample your raster # select your upscale_factor - this is related to the resolution of your raster # upscale_factor = old_resolution/desired_resolution upscale_factor = 50/250 SD_GMb_50m_rio = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif") # resample data to target shape using the bilinear method new_res = SD_GMb_50m_rio.read( out_shape=( SD_GMb_50m_rio.count, int(SD_GMb_50m_rio.height * upscale_factor), int(SD_GMb_50m_rio.width * upscale_factor) ), resampling=Resampling.bilinear ) # scale image transform transform = SD_GMb_50m_rio.transform * SD_GMb_50m_rio.transform.scale( (SD_GMb_50m_rio.width / new_res.shape[-1]), (SD_GMb_50m_rio.height / new_res.shape[-2]) ) # display the raster fig9, ax9 = pyplot.subplots() pos9 = ax9.imshow(new_res[0,:,:], cmap='Blues', vmin=0, vmax=1.5) ax9.set_title('GM Snow Depth 50m') fig9.colorbar(pos9, ax=ax9) # - # Play around with different upscaling factors and see what sort of results you get. How do the maximum and minimum values across the area change? # Other possibilities: # - Load the 3 m dataset and resample from the higher resolution. # - You can clip to larger areas, such as a model domain, to resample to larger pixel sizes. # - Load another dataset and see if you see the same patterns. SD_GMb_50m_rio.close() # # Other things to think about # # This tutorial was just to get you started thinking about lidar datasets. ASO also collected data for SnowEx on Grand Mesa in 2017. They've also collected data in numerous other locations across the years that you also have access you. # # The geospatial tutorial showed you how to extract values from rasters to points. Using these methods, you could extract the ASO values to the manual snow depth measurements to assess the performance of the lidar snow depth data product. # ## Additional datasets # # ASO have collected many datasets, in numerous locations. # # If you're interested in the 2020 season here are some .zip files to access ASO 2020 data for other sites/campaigns # In these folders are snow depth and SWE data products. These links are to directly download .zip folders. 
# # [Grand Mesa Feb 1-2](https://asopublic.s3-us-west-1.amazonaws.com/USCO/GM/2020/0201/ASO_GrandMesa_mosaic_2020Feb1-2_AllData_and_Reports.zip) # # [Grand Mesa Feb 13](https://asopublic.s3-us-west-1.amazonaws.com/USCO/GM/2020/0213/ASO_GrandMesa_mosaic_2020Feb13_AllData_and_Reports.zip) # # [East River Feb 14-20](https://asopublic.s3-us-west-1.amazonaws.com/USCO/GE/2020/0214/ASO_EastRiver_mosaic_2020Feb14-20_AllData_and_Reports.zip) # # [Taylor River Feb 20](https://asopublic.s3-us-west-1.amazonaws.com/USCO/GT/2020/0220/ASO_TaylorRiver_mosaic_2020Feb20_AllData_and_Reports.zip) # # [Reynolds Creek Feb 18-19](https://asopublic.s3-us-west-1.amazonaws.com/USID/RY/2020/0218/ASO_Reynolds_mosaic_2020Feb18-19_AllData_and_Reports.zip) # # So far, there are no snow-off campaigns from the 2020 season due to covid. # # But you can find ASO bare earth DTMs and other ASO data, including 3 m and 50 m snow depth, SWE across other sites and years [here](https://nsidc.org/data/aso/data-summaries) # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import cv2 import numpy as np import csv from keras.models import Sequential from keras.layers import Flatten, Dense, Lambda, Cropping2D import keras from keras.layers.convolutional import MaxPooling2D from keras.layers.convolutional import Convolution2D from keras.layers import pooling lines = [] with open('./data/driving_log.csv') as csvfile: reader = csv.reader(csvfile) for line in reader: lines.append(line) correction_num = 0.2 images = [] measurements = [] # ['center', 'left', 'right', 'steering', 'throttle', 'brake', 'speed'] for line in lines: if line[0] == 'center': continue # center source_path = line[0] filename = source_path.split('/')[-1] current_path = './data/IMG/' + filename image = cv2.imread(current_path) images.append(image) images.append(cv2.flip(image, 1)) measurement = float(line[3]) measurements.append(measurement) measurements.append(measurement * -1.0) # left source_path = line[1] filename = source_path.split('/')[-1] current_path = './data/IMG/' + filename image = cv2.imread(current_path) images.append(image) images.append(cv2.flip(image, 1)) measurement = float(line[3]) measurements.append(measurement + correction_num) measurements.append((measurement+correction_num)* -1.0) # right source_path = line[2] filename = source_path.split('/')[-1] current_path = './data/IMG/' + filename image = cv2.imread(current_path) images.append(image) images.append(cv2.flip(image, 1)) measurement = float(line[3]) measurements.append(measurement-correction_num) measurements.append((measurement-correction_num)* -1.0) X_train = np.array(images) y_train = np.array(measurements) model = Sequential() model.add(Lambda(lambda x: x / 255.0 -0.5, input_shape=(160,320,3))) model.add(Cropping2D(cropping=((70,25), (0,0)))) model.add(Convolution2D(6,5,5, activation='relu')) model.add(MaxPooling2D()) model.add(Convolution2D(6,5,5, activation='relu')) model.add(MaxPooling2D()) model.add(Flatten()) model.add(Dense(120)) model.add(Dense(84)) model.add(Dense(1)) model.compile(loss = 'mse', optimizer='adam') print('Printing...') model.fit(X_train, y_train, validation_split = 0.2, shuffle=True, nb_epoch=5) model.save('model.h5') print('DOne') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + colab={"base_uri": "https://localhost:8080/"} id="O5FBkGt4rys6" outputId="c427f718-028f-4591-bbf3-846d4d44339c" lista = [1,5,8,2,12,7,9,25] def recursividad(x): suma = 0 if len(x) == 0: suma =0 else: for i in lista: suma = i + suma print(suma) recursividad(lista) # + colab={"base_uri": "https://localhost:8080/"} id="v82N3DlnugPL" outputId="42bf1266-38bf-40fb-82d8-db549d042cf5" def fnRec(x): rec=0 if x == 0: return rec else: print(x) fnRec(x-1) fnRec(100) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Siamese network in keras to detect pairs of scatter plots that are similar. # + import numpy as np import os import glob import random from PIL import Image from tensorflow.keras.models import Model from tensorflow.keras.layers import Input from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import Dense from tensorflow.keras.layers import Dropout from tensorflow.keras.layers import GlobalAveragePooling2D from tensorflow.keras.layers import MaxPooling2D from tensorflow.keras.layers import Lambda from tensorflow.keras.layers import Flatten, BatchNormalization import tensorflow as tf import tensorflow.keras.backend as K import matplotlib.pyplot as plt from tensorflow.keras.optimizers import Nadam from tensorflow.keras.optimizers import Adam # - POS_LABEL = 0 # Pair of plots that match NEG_LABEL = 1 # Pair of plots that do not match #If you reverse the labels, you have to change the Contrastive Loss function. SZ = 128 MARGIN = 5.0 # ## Siamese Model # + def feature_extract(inputShape): inputs = Input(inputShape) x = Conv2D(4, (3, 3), padding="same", activation="relu")(inputs) x = BatchNormalization()(x) x = Conv2D(8, (3, 3), padding="same", activation="relu")(inputs) x = BatchNormalization()(x) x = Conv2D(8, (3, 3), padding="same", activation="relu")(inputs) x = BatchNormalization()(x) x = Flatten()(x) x = Dense(500, activation="relu")(x) x = Dense(500, activation="relu")(x) outputs = Dense(5)(x) return Model(inputs, outputs,name='sister_network') return model def euclidean_distance(vectors): (featsA, featsB) = vectors sumSquared = K.sum(K.square(featsA - featsB), axis=1, keepdims=True) return K.sqrt(K.maximum(sumSquared, K.epsilon())) def contrastive_loss(y,preds): y = tf.cast(y, preds.dtype) squaredPreds = K.square(preds) squaredMargin = K.square(K.maximum(MARGIN - preds, 0.0)) loss = K.mean((1-y) * squaredPreds + y * squaredMargin) return loss # - # ## Make pairs of images for training and testing # + def get_positive_pairs(path='./mol_data/*'): #both images of same digit positive_pairs = [] all_fam_dirs = glob.glob(path) for famdir in all_fam_dirs: mol_files = glob.glob(famdir+'/*.png') for ff1 in mol_files: for ff2 in mol_files: if ff1 < ff2: positive_pairs.append((ff1,ff2)) return positive_pairs def get_negative_pairs(path='./mol_data/*',cnt=100): #images are from different digits negative_pairs = [] all_fam_dirs = glob.glob(path) random.shuffle(all_fam_dirs) all_fam_dirs_rev = all_fam_dirs[::-1] #reversed for famdir1,famdir2 in zip(all_fam_dirs,all_fam_dirs_rev): if famdir1!=famdir2: mol_files_1 = glob.glob(famdir1+'/*.png') mol_files_2 = glob.glob(famdir2+'/*.png') for ff1 in mol_files_1: for ff2 in mol_files_2: negative_pairs.append((ff1,ff2)) if len(negative_pairs) >= cnt: 
break return negative_pairs def read_img(img_path): img = Image.open(img_path) img = img.convert('L') img = img.resize((SZ,SZ)) img = np.asarray(img,dtype=np.float32)/255.0 return img def build_paired_data(path,shuffle): positive_pairs = get_positive_pairs(path) negative_pairs = get_negative_pairs(path,len(positive_pairs)) print('Got ',len(positive_pairs),'positive_pairs') print('Got ',len(negative_pairs),'negative_pairs') if shuffle: random.shuffle(positive_pairs) random.shuffle(negative_pairs) positive_labels = [POS_LABEL]*len(positive_pairs) negative_labels = [NEG_LABEL]*len(negative_pairs) all_pairs = positive_pairs + negative_pairs all_labels = positive_labels + negative_labels data = list(zip(all_pairs,all_labels)) random.shuffle(data) print('Loading data size',len(data)) pairImages = [] pairLabels = [] pairNames = [] for image_pair,label in data: img0 = read_img(image_pair[0]) img1 = read_img(image_pair[1]) pairImages.append([img0,img1]) pairLabels.append([label]) #very important to have labels as shape `batch_size` x 1 pairNames.append([image_pair[0],image_pair[1]]) return np.expand_dims(np.array(pairImages),axis=-1), np.array(pairLabels), np.array(pairNames) pairData, labelData, pairNames = build_paired_data('./mol_data/*',True) print(pairData.shape, labelData.shape) # - # ## Compute prediction accuracy def get_accuracy(model,ImgArr0,ImgArr1,labelArr): PP = model.predict_on_batch([ImgArr0, ImgArr1]) preds = (PP > MARGIN).astype(int) acc = np.sum(preds==labelArr)/len(labelArr) return acc # ## Build and compile the model # + # specify the shape of the inputs for our network IMG_SHAPE = (SZ, SZ, 1) # specify the batch size and number of epochs BATCH_SIZE = 16 EPOCHS = 30 imgA = Input(shape=IMG_SHAPE) imgB = Input(shape=IMG_SHAPE) featureExtractor = feature_extract(IMG_SHAPE) featureExtractor.summary() featsA = featureExtractor(imgA) featsB = featureExtractor(imgB) distance = Lambda(euclidean_distance)([featsA, featsB]) model = Model(inputs=[imgA, imgB], outputs=distance) print(model.summary()) optm = Adam(lr=0.0005) print("Compiling model...") model.compile(loss=contrastive_loss, optimizer=optm) # - # ## Training and testing # + #Train-test split : 80:20 ratio k = int(len(pairData)*0.80) imgArr0_train = pairData[:k,0] imgArr1_train = pairData[:k,1] label_train = labelData[:k] imgArr0_test = pairData[k:,0] imgArr1_test = pairData[k:,1] label_test = labelData[k:] print('Size of training data',imgArr0_train.shape,imgArr1_train.shape) print('Size of testing data',imgArr0_test.shape,imgArr1_test.shape) history = model.fit( [imgArr0_train, imgArr1_train], label_train, validation_data=([imgArr0_test, imgArr1_test], label_test) , batch_size=BATCH_SIZE, epochs=EPOCHS) # - # ## Training statistics # + print('Final Training accuracy=',get_accuracy(model,imgArr0_train, imgArr1_train, label_train)) print('Final Validation accuracy=',get_accuracy(model,imgArr0_test, imgArr1_test, label_test)) print('\nTraining and validation losses...') plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.show() # - # ## Plot 10 pairs of images and their dissimilarities. 
# + fig, axs = plt.subplots(10, 2, figsize=(10,40)) #, sharex=True, sharey=True) A = imgArr0_test[:10] B = imgArr1_test[:10] distances = model.predict_on_batch([A,B]) i = 0 for img0, img1 in zip(A,B): img0 = np.squeeze(img0) img1 = np.squeeze(img1) axs[i,0].imshow(img0,cmap='gray') axs[i,1].imshow(img1,cmap='gray') text = 'Dissm: {:.2f}'.format(float(distances[i])) axs[i,0].text(140, 50, text) i += 1 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import numpy as np import xarray as xr import netCDF4 as nc # - def f(x, omega=1): """simple sine function""" return np.sin(2 * np.pi * omega *x) times = np.linspace(0, 10, 100) sine = f(times) # + # create DataArray with metadata metadata = {'timestep': times[1] - times[0], 'creation_data': '19-07-17', 'list': ['a', 'b', 2, 3]} df = xr.DataArray(sine, dims='time', coords={'time': times}, name='sine', attrs=metadata) df.attrs # - df.to_netcdf('sine.nc') new_df = xr.open_dataarray('sine.nc') new_df.attrs all(new_df == df) # ## Open with netCDF nc_df = nc.Dataset('sine.nc') nc_df.variables new_sine = np.asarray(nc_df['sine'][:]) sum(sine - new_sine) nc_df['sine'].__dict__['timestep'] # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="8FwIjTpsUq0w" colab_type="code" outputId="67cbae8b-e23e-4d2d-868f-ea31e8b747f4" colab={"base_uri": "https://localhost:8080/", "height": 1000} # !apt-get install -y xvfb python-pygame python-opengl import os # !pip install pygame # !git clone https://github.com/ntasfi/PyGame-Learning-Environment.git os.chdir('PyGame-Learning-Environment/') # !pip install -e . 
# !pip install pyvirtualdisplay gym_ple pygame from pyvirtualdisplay import Display display = Display(visible=0, size=(500, 500)) display.start() # + id="MjU9ljeYUq01" colab_type="code" outputId="3f4043cd-2138-4f0b-cbf7-bfc5457f063a" colab={"base_uri": "https://localhost:8080/", "height": 122} from google.colab import drive drive.mount('/data/') from pathlib import Path base_dir=Path('/data/My Drive') # + colab_type="code" id="BYsosv23FmVF" outputId="2f3c98be-cc12-4374-fa77-f417e75a3e5c" colab={"base_uri": "https://localhost:8080/", "height": 139} # %matplotlib inline # %config InlineBackend.figure_format = 'retina' import gym_ple import gym import math import random import numpy as np import matplotlib import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = [16,9] from collections import namedtuple from itertools import count from PIL import Image import torch import torch.nn as nn import torch.optim as optim import torch.nn.functional as F import torchvision.transforms as T # ENV='CartPole-v0' # ENV='SpaceInvadersNoFrameskip-v4' # ENV='PongNoFrameskip-v4' ENV='FlappyBird-v0' torch.manual_seed(0) np.random.seed(0) env = gym.make(ENV) env.seed(0) # set up matplotlib is_ipython = 'inline' in matplotlib.get_backend() if is_ipython: from IPython import display plt.ion() # if gpu is to be used device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # + colab_type="code" id="tEcusKlvFmVI" colab={} from collections import deque Transition = namedtuple('Transition', ('state', 'action', 'reward', 'next_state', 'done')) class PriorityReplayMemory: # modified from https://github.com/susantamoh84/Deep-Reinforcement-Learning-Hands-On/blob/master/Chapter07/bench/prio_buffer_bench.py def __init__(self, capacity, alpha=0.6, beta_start=0.4, beta_frames=100000): self.alpha = alpha self.buffer = deque(maxlen=capacity) self.priorities = deque(maxlen=capacity) self.beta_start = beta_start self.beta_frames = beta_frames self.frame = 1 self.min_prio = 0.1 def beta_by_frame(self): self.frame += 1 return min(1.0, self.beta_start + self.frame * (1.0 - self.beta_start) / self.beta_frames) def __len__(self): return len(self.buffer) def push(self, sample): max_prio = max(self.priorities) if self.priorities else 1.0 self.buffer.append(sample) self.priorities.append(max_prio ** self.alpha) def sample(self, batch_size): probs = np.array(self.priorities, dtype=np.float32) probs /= probs.sum() total = len(self.buffer) indices = np.random.choice(total, batch_size, p=probs, replace=True) samples = [self.buffer[idx] for idx in indices] beta = self.beta_by_frame() weights = (total * probs[indices]) ** (-beta) weights /= weights.max() return samples, indices, torch.Tensor(weights).to(device) def update_priorities(self, batch_indices, batch_priorities): for idx, prio in zip(batch_indices, batch_priorities): self.priorities[idx] = (prio + self.min_prio)** self.alpha class ReplayMemory: def __init__(self, capacity): self.buffer = deque(maxlen=capacity) def push(self, sample): """Saves a transition.""" self.buffer.append(sample) def sample(self, batch_size): indices = np.random.choice(len(self.buffer), batch_size, replace=True) samples = [self.buffer[idx] for idx in indices] return samples, None, torch.Tensor([1/len(self.buffer) for _ in range(batch_size)]).to(device) def update_priorities(self, batch_indices, batch_priorities): pass def __len__(self): return len(self.buffer) # + colab_type="code" id="2Y0rue0jFmVK" colab={} class DQN(nn.Module): def __init__(self, h, w): super().__init__() self.conv1 = 
nn.Conv2d(3, 16, kernel_size=5, stride=2) self.bn1 = nn.BatchNorm2d(16) self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2) self.bn2 = nn.BatchNorm2d(32) self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2) self.bn3 = nn.BatchNorm2d(32) # Number of Linear input connections depends on output of conv2d layers # and therefore the input image size, so compute it. def conv2d_size_out(size, kernel_size = 5, stride = 2): return (size - (kernel_size - 1) - 1) // stride + 1 convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w))) convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h))) linear_input_size = convw * convh * 32 self.head = nn.Linear(linear_input_size, 2) # 448 or 512 # Called with either one element to determine next action, or a batch # during optimization. Returns tensor([[left0exp,right0exp]...]). def forward(self, x): x = F.relu(self.bn1(self.conv1(x))) x = F.relu(self.bn2(self.conv2(x))) x = F.relu(self.bn3(self.conv3(x))) return self.head(x.view(x.size(0), -1)) class DDQN(nn.Module): # 3 frames def __init__(self, h, w): super().__init__() self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2) self.bn1 = nn.BatchNorm2d(16) self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2) self.bn2 = nn.BatchNorm2d(32) self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2) self.bn3 = nn.BatchNorm2d(32) # Number of Linear input connections depends on output of conv2d layers # and therefore the input image size, so compute it. def conv2d_size_out(size, kernel_size = 5, stride = 2): return (size - (kernel_size - 1) - 1) // stride + 1 convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w))) convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h))) linear_input_size = convw * convh * 32 self.val = nn.Linear(linear_input_size, 1) self.adv = nn.Linear(linear_input_size, 2) # Called with either one element to determine next action, or a batch # during optimization. Returns tensor([[left0exp,right0exp]...]). def forward(self, x): x = F.relu(self.bn1(self.conv1(x))) x = F.relu(self.bn2(self.conv2(x))) x = F.relu(self.bn3(self.conv3(x))) x=x.view(x.size(0), -1) val=self.val(x) adv=self.adv(x) x=val+adv-adv.mean(1,keepdim=True) return x class DDQN2(nn.Module): # 3 frames, change convs, add fc def __init__(self, h, w): super().__init__() self.conv1 = nn.Conv2d(3, 16, kernel_size=8, stride=4) self.bn1 = nn.BatchNorm2d(16) self.conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2) self.bn2 = nn.BatchNorm2d(32) self.conv3 = nn.Conv2d(32, 64, kernel_size=3, stride=1) self.bn3 = nn.BatchNorm2d(64) # Number of Linear input connections depends on output of conv2d layers # and therefore the input image size, so compute it. def conv2d_size_out(size, kernel_size = 5, stride = 2, padding=0): return (size - kernel_size +2*padding ) // stride + 1 convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w,8,4),4,2),3,1) convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h,8,4),4,2),3,1) linear_input_size = convw * convh * 64 fc_output_size=32 self.fc=nn.Linear(linear_input_size, fc_output_size) self.val = nn.Linear(fc_output_size, 1) self.adv = nn.Linear(fc_output_size, 2) # Called with either one element to determine next action, or a batch # during optimization. Returns tensor([[left0exp,right0exp]...]). 
def forward(self, x): x = F.relu(self.bn1(self.conv1(x))) x = F.relu(self.bn2(self.conv2(x))) x = F.relu(self.bn3(self.conv3(x))) x = F.relu(self.fc(x.view(x.size(0), -1))) val=self.val(x) adv=self.adv(x) x=val+adv-adv.mean(1,keepdim=True) return x class DDQN3(nn.Module): # 3 frames, no bn, add 2 fc, add dropout def __init__(self, h, w): super().__init__() self.conv1 = nn.Conv2d(3, 16, kernel_size=8, stride=4) self.conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2) self.conv3 = nn.Conv2d(32, 64, kernel_size=3, stride=1) # Number of Linear input connections depends on output of conv2d layers # and therefore the input image size, so compute it. def conv2d_size_out(size, kernel_size = 8, stride = 4, padding=0): return (size - kernel_size +2*padding ) // stride + 1 convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w,8,4),4,2),3,1) convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h,8,4),4,2),3,1) linear_input_size = convw * convh * 64 fc_output_size=512 self.dropout=nn.Dropout() self.fc_val=nn.Linear(linear_input_size, fc_output_size) self.fc_adv=nn.Linear(linear_input_size, fc_output_size) self.val = nn.Linear(fc_output_size, 1) self.adv = nn.Linear(fc_output_size, N_ACTION) # Called with either one element to determine next action, or a batch # during optimization. Returns tensor([[left0exp,right0exp]...]). def forward(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = F.relu(self.conv3(x)) x = self.dropout(x) x_val = F.relu(self.fc_val(x.view(x.size(0), -1))) x_adv = F.relu(self.fc_adv(x.view(x.size(0), -1))) val=self.val(x_val) adv=self.adv(x_adv) x=val+adv-adv.mean(1,keepdim=True) return x class DDQN4(nn.Module): # 4 frames, def __init__(self, h, w): super().__init__() self.conv1 = nn.Conv2d(4, 32, kernel_size=8, stride=4) self.bn1 = nn.BatchNorm2d(32) self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2) self.bn2 = nn.BatchNorm2d(64) self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=1) self.bn3 = nn.BatchNorm2d(64) # Number of Linear input connections depends on output of conv2d layers # and therefore the input image size, so compute it. def conv2d_size_out(size, kernel_size = 6, stride = 2, padding=0): return (size - kernel_size +2*padding ) // stride + 1 convw = conv2d_size_out(conv2d_size_out(conv2d_size_out(w,8,4),4,2),3,1) convh = conv2d_size_out(conv2d_size_out(conv2d_size_out(h,8,4),4,2),3,1) linear_input_size = convw * convh * 64 fc_output_size=512 self.fc_val=nn.Linear(linear_input_size, fc_output_size) self.fc_adv=nn.Linear(linear_input_size, fc_output_size) self.val = nn.Linear(fc_output_size, 1) self.adv = nn.Linear(fc_output_size, N_ACTION) # Called with either one element to determine next action, or a batch # during optimization. Returns tensor([[left0exp,right0exp]...]). def forward(self, x): x = x.float() x = F.relu(self.bn1(self.conv1(x))) x = F.relu(self.bn2(self.conv2(x))) x = F.relu(self.bn3(self.conv3(x))) x_val = F.relu(self.fc_val(x.view(x.size(0), -1))) x_adv = F.relu(self.fc_adv(x.view(x.size(0), -1))) val=self.val(x_val) adv=self.adv(x_adv) x=val+adv-adv.mean(1,keepdim=True) return x # + colab_type="code" id="-uDdMgQ1FmVM" outputId="1e34cb9f-cfb0-4fb6-b492-ef7a8abab539" colab={"base_uri": "https://localhost:8080/", "height": 538} process_pic = T.Compose([ T.ToPILImage(), # T.Grayscale(), T.Resize((84,84)), T.ToTensor(), ]) def get_screen_flappy(): # Returned screen requested by gym is 400x600x3, but is sometimes larger # such as 800x1200x3. Transpose it into torch order (CHW). 
screen = env.render(mode='rgb_array').transpose((2, 0, 1)) _, screen_height, screen_width = screen.shape screen = screen[:, :int(screen_height * 0.8)] screen = screen.mean(0).astype('uint8') # (values,counts) = np.unique(screen,return_counts=True) # ind=np.argmax(counts) # bg_color=values[ind] # print(bg_color) # bg_color=[157,94] # screen[screen==157] = 0 # screen[screen==94] = 0 screen = torch.from_numpy(screen) screen = process_pic(screen.unsqueeze(0)) screen = (screen*255).type(torch.uint8) return screen.unsqueeze(0).to(device) def get_screen_cartpole(): # Returned screen requested by gym is 400x600x3, but is sometimes larger # such as 800x1200x3. Transpose it into torch order (CHW). screen = env.render(mode='rgb_array').transpose((2, 0, 1)) _, screen_height, screen_width = screen.shape screen = screen[:, int(screen_height*0.4):int(screen_height * 0.8)] screen = screen.mean(0).astype('uint8') screen = torch.from_numpy(screen) screen = process_pic(screen.unsqueeze(0)) screen = (screen*255).type(torch.uint8) return screen.unsqueeze(0).to(device) def get_screen_space(): # Returned screen requested by gym is 400x600x3, but is sometimes larger # such as 800x1200x3. Transpose it into torch order (CHW). screen = env.render(mode='rgb_array').transpose((2, 0, 1)) _, screen_height, screen_width = screen.shape screen = screen[:, int(screen_height*0.2):int(screen_height)] screen = np.ascontiguousarray(screen, dtype=np.float32) / 255 screen = torch.from_numpy(screen) screen = process_pic(screen) return screen.unsqueeze(0).to(device) get_screen=get_screen_flappy # get_screen=get_screen_cartpole env.reset() for _ in range(3): plt.clf() display.clear_output(wait=True) plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).squeeze(2).numpy(),cmap='gray') plt.pause(0.0001) action = env.action_space.sample() env.step(action) # + colab_type="code" id="En4AONkeFmVQ" outputId="3b17e4dd-2828-49dc-d9f2-4a9c1f3c95bf" colab={"base_uri": "https://localhost:8080/", "height": 221} BATCH_SIZE = 32 GAMMA = 0.99 EPS_START = 1 EPS_END = 0.005 ESP_END2 = 0.1 FRAME_SKIP = 2 EPS_DECAY = 300000 EPS_DECAY2 = 200000 LOG_DECAY = False TARGET_UPDATE = 1000 PLOT_INTERVAL = 50 REPLAY_SIZE= 100000 SAVE_CHECKPOINT=500 FULL_RANDOM=40000 OBSERVE = 20000 LR = 1e-6 USE_PRIORITY_REPLAY = True N_ACTION = env.action_space.n USE_BONUS_REWARD = False LIMIT_MAX_REWARD = 250 CLIP_NORM=0.5 init_screen = get_screen() _, _, screen_height, screen_width = init_screen.shape policy_net = DDQN4(screen_height, screen_width).to(device) target_net = DDQN4(screen_height, screen_width).to(device) print(policy_net) target_net.load_state_dict(policy_net.state_dict()) target_net.eval() optimizer = optim.Adam(policy_net.parameters(),lr=LR) # optimizer = optim.RMSprop(policy_net.parameters(),lr=LR) if USE_PRIORITY_REPLAY: memory = PriorityReplayMemory(REPLAY_SIZE) else: memory = ReplayMemory(REPLAY_SIZE) def select_action(state): global action_q policy_net.eval() sample = random.random() if LOG_DECAY: eps_threshold = EPS_END + (EPS_START - EPS_END) * math.exp(-1. 
* (steps_done - OBSERVE)/ EPS_DECAY) else: if not EPS_DECAY2: eps_threshold = max( EPS_START - (EPS_START - EPS_END)/EPS_DECAY *(steps_done - OBSERVE),EPS_END) else: # 0-EPS_DECAY: reduce threshold to EPS_END+ESP_END2, EPS_DECAY-EPS_DECAY2: reduce threshold to EPS_END if steps_done-OBSERVE eps_threshold: with torch.no_grad(): t=policy_net(state) action_q=t.max().item() return t.max(1)[1].view(1, 1) else: return torch.tensor([[random.randrange(N_ACTION)]], device=device, dtype=torch.long) import time def plot_durations(): if i_episode%PLOT_INTERVAL==0: global action_qs plt.clf() display.clear_output(wait=True) rewards_t = torch.tensor(total_rewards, dtype=torch.float) bonus_rewards_t = torch.tensor(total_bonus_rewards, dtype=torch.float) action_qs_t = torch.tensor(action_qs, dtype=torch.float) plt.title('Training...episode:{},steps:{},time used:{}s'.format(i_episode,steps_done,round(time_used))) plt.xlabel('Episode') plt.ylabel('Duration') # plt.plot(action_qs_t.numpy(),label='Q') if USE_BONUS_REWARD: plt.plot(bonus_rewards_t.numpy(),label='bonus_reward') plt.plot(rewards_t.numpy(),label='reward') # Take 100 episode averages and plot them too if len(rewards_t) >= 100: means = rewards_t.unfold(0, 100, 1).mean(1).view(-1) means = torch.cat((torch.zeros(99), means)) plt.plot(means.numpy(),label='reward_mean') if len(action_qs_t) >= 100: means = action_qs_t.unfold(0, 100, 1).mean(1).view(-1) means = torch.cat((torch.zeros(99), means)) plt.plot(means.numpy(),label='Q_mean') plt.legend() plt.savefig(base_dir/'dqn'/'plot2.png') plt.pause(0.00001) # pause a bit so that plots are updated plt.close() # + colab_type="code" id="Yd5H9tJ2FmVS" colab={} def optimize_model(): policy_net.train() if len(memory) < BATCH_SIZE: return if steps_doneLIMIT_MAX_REWARD: True # Move to the next state state = next_state # Perform one step of the optimization (on the target network) optimize_model() if done: total_rewards.append(total_reward) total_bonus_rewards.append(total_bonus_reward) if episode_q: action_qs.append(sum(episode_q)/len(episode_q)) else: action_qs.append(0) plot_durations() break if steps_done % TARGET_UPDATE==0 and not is_full_random(): target_net.load_state_dict(policy_net.state_dict()) # if steps_done%100==0: # print(np.array(list(memory.priorities)).max(),np.array(list(memory.priorities)).mean(),len(memory.buffer)) if i_episode and i_episode%SAVE_CHECKPOINT==0 and not is_full_random(): save_checkpoint() if len(total_rewards)>100: last_100_avg_rewards=sum(total_rewards[-100:])/100 if last_100_avg_rewards>best_avg_score: best_avg_score=last_100_avg_rewards torch.save(policy_net,temp_policy) time_used+=time.time()-start_time print('Complete') env.render() env.close() # plt.ioff() # plt.show() print(sum(total_rewardsepisode_durations[-100:])/100) # + colab_type="code" id="5MCiiS7I3frv" colab={} # + colab_type="code" id="Xk3ouHnqr1wd" colab={} import gym import matplotlib.pyplot as plt from matplotlib import animation, rc import PIL from IPython import display env = gym.make(ENV) env.reset() for _ in range(2000): plt.clf() display.clear_output(wait=True) plt.imshow(env.render(mode='rgb_array')) # only call this once plt.pause(0.0001) action = env.action_space.sample() env.step(action) # + colab_type="code" id="nX3tcCwFrqtl" colab={} import gym import matplotlib.pyplot as plt import PIL from IPython import display import torch from PIL import Image # env = gym.make(ENV) # for i_episode in range(1): # observation = env.reset() # for t in range(100): # plt.clf() # plt.imshow(env.render(mode='rgb_array')) 
# plt.axis('off') # display.clear_output(wait=True) # display.display(plt.gcf()) # print(observation) # action = env.action_space.sample() # observation, reward, done, info = env.step(action) # if done: # print("Episode finished after {} timesteps".format(t+1)) # break # display.clear_output(wait=True) env = gym.make(ENV) env.reset() model=torch.load('best_score.pt') model.eval() last_screen = get_screen() current_screen = get_screen() img = plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).squeeze(2).numpy(),cmap='gray') def select_action_by_model(state): with torch.no_grad(): # t.max(1) will return largest column value of each row. # second column on max result is index of where max element was # found, so we pick action with the larger expected reward. return model(state).max(1)[1].view(1, 1) screens = deque([current_screen] * 4, 4) state = torch.cat(list(screens), dim=1) total_reward=0 for t in count(): # Select and perform an action action = select_action_by_model(state) _, reward, done, _ = env.step(action.item()) reward = torch.tensor([reward], device=device) total_reward+=reward.item() # Observe new state last_screen = current_screen current_screen = get_screen() screens.append(current_screen) # plt.imshow(current_screen.to('cpu').squeeze(0).permute(1, 2, 0)) # plt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).squeeze(2).numpy(),cmap='gray') if not done: next_state = torch.cat(list(screens), dim=1) else: next_state = None # Store the transition in memory state = next_state if t%5==0: plt.imshow(get_raw_screen().cpu().squeeze(0).permute(1, 2, 0).squeeze(2).numpy()) display.clear_output(wait=True) plt.pause(0.0001) print(t,action.item(),total_reward) im = Image.fromarray(get_raw_screen().cpu().squeeze(0).permute(1, 2, 0).squeeze(2).numpy()) im.save("./video/{:05d}.png".format(t)) if next_state is None: plt.imshow(get_raw_screen().cpu().squeeze(0).permute(1, 2, 0).squeeze(2).numpy()) display.clear_output(wait=True) plt.pause(0.0001) print(t,action.item(),total_reward) break # img = plt.imshow(env.render(mode='rgb_array')) # only call this once # for _ in range(100): # img.set_data(env.render(mode='rgb_array')) # just update the data # display.display(plt.gcf()) # display.clear_output(wait=True) # action = env.action_space.sample() # env.step(action) # + [markdown] colab_type="text" id="jCISTPImFmVX" # Here is the diagram that illustrates the overall resulting data flow. # # .. figure:: /_static/img/reinforcement_learning_diagram.jpg # # Actions are chosen either randomly or based on a policy, getting the next # step sample from the gym environment. We record the results in the # replay memory and also run optimization step on every iteration. # Optimization picks a random batch from the replay memory to do training of the # new policy. "Older" target_net is also used in optimization to compute the # expected Q values; it is updated occasionally to keep it current. 
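# As a compact illustration of that loop, here is a hedged sketch of a single optimisation step written against the `policy_net`, `target_net`, `memory`, `optimizer`, `Transition`, `GAMMA`, `BATCH_SIZE` and `device` objects defined earlier. It is a simplified stand-in (uniform replay weights, Huber loss, no gradient clipping) and assumes every stored transition keeps a tensor in `next_state`, with the `done` flag switching off the bootstrap term; it is not the exact training step used in the runs above.

# +
def illustrative_dqn_step():
    # wait until the replay buffer holds at least one batch
    if len(memory) < BATCH_SIZE:
        return

    samples, _, _ = memory.sample(BATCH_SIZE)
    batch = Transition(*zip(*samples))

    states = torch.cat(batch.state)
    actions = torch.cat(batch.action)          # shape (B, 1), long
    rewards = torch.cat(batch.reward)          # shape (B,)
    next_states = torch.cat(batch.next_state)  # assumes terminal transitions also store a tensor
    dones = torch.tensor(batch.done, device=device, dtype=torch.float32)

    # Q(s, a) from the online network for the actions actually taken
    q_sa = policy_net(states).gather(1, actions).squeeze(1)

    # bootstrap target from the occasionally-updated ("older") target network
    with torch.no_grad():
        q_next = target_net(next_states).max(1)[0]
    targets = rewards + GAMMA * q_next * (1.0 - dones)

    loss = F.smooth_l1_loss(q_sa, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# -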
# # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="s28r0_CGIk8C" import string import re from numpy import array, argmax, random, take import pandas as pd from keras.models import Sequential from keras.layers import Dense, LSTM, Embedding, Bidirectional, RepeatVector, TimeDistributed from keras.preprocessing.text import Tokenizer from keras.callbacks import ModelCheckpoint from keras.preprocessing.sequence import pad_sequences from keras.models import load_model from keras import optimizers import matplotlib.pyplot as plt % matplotlib inline pd.set_option('display.max_colwidth', 200) # + id="VZP6TMkjKbHQ" def read_text(filename): file = open(filename, mode='rt', encoding='utf-8') text = file.read() file.close() return text # + id="cXVDHT6BKnUF" def to_lines(text): sents = text.strip().split('\n') sents = [i.split('\t') for i in sents] return sents # + id="vJxzpjGlKsk4" data = read_text("/content/fra.txt") fra_eng = to_lines(data) fra_eng = array(fra_eng) # + id="JRSEnbtMK956" fra_eng = fra_eng[:50000,:] # + colab={"base_uri": "https://localhost:8080/"} id="q-vlOTkeLE_4" outputId="66a92e9b-eb38-4006-886a-dc8c6595fedf" fra_eng # + id="YPbk_5nSMzJK" fra_eng[:,0] = [s.translate(str.maketrans('', '', string.punctuation)) for s in fra_eng[:,0]] fra_eng[:,1] = [s.translate(str.maketrans('', '', string.punctuation)) for s in fra_eng[:,1]] # + colab={"base_uri": "https://localhost:8080/"} id="aAU0IKbkM6Su" outputId="5bb90be6-9790-4fce-f076-6f2558071723" fra_eng # + id="IuEBH8-4NG7d" for i in range(len(fra_eng)): fra_eng[i,0] = fra_eng[i,0].lower() fra_eng[i,1] = fra_eng[i,1].lower() # + colab={"base_uri": "https://localhost:8080/"} id="UuhC9o5CNPO7" outputId="23d8d4e3-0f47-4dcb-de2f-38e47dd323a6" fra_eng # + id="gB1jaIgZNSKY" eng_l = [] fra_l = [] for i in fra_eng[:,1]: eng_l.append(len(i.split())) for i in fra_eng[:,0]: fra_l.append(len(i.split())) # + id="MBlG9qSzNhJ3" length_df = pd.DataFrame({'eng':eng_l, 'fra':fra_l}) # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="m2oyY1XONrKK" outputId="5ba071ae-5a34-4d16-e330-2ffabdf68b07" length_df.hist(bins = 30) plt.show() # + id="Jjs1ouI-ODuc" def tokenization(lines): tokenizer = Tokenizer() tokenizer.fit_on_texts(lines) return tokenizer # + colab={"base_uri": "https://localhost:8080/"} id="YN7GlgEiOhPj" outputId="c0eabace-2964-4acd-846d-fa09569132f7" eng_tokenizer = tokenization(fra_eng[:, 1]) eng_vocab_size = len(eng_tokenizer.word_index) + 1 eng_length = 8 print('English Vocabulary Size: %d' % eng_vocab_size) # + colab={"base_uri": "https://localhost:8080/"} id="tSZBaTQmO3br" outputId="8d03f829-6b0a-4d63-f79d-330b9fc45bab" fra_tokenizer = tokenization(fra_eng[:, 0]) fra_vocab_size = len(fra_tokenizer.word_index) + 1 fra_length = 8 print('French Vocabulary Size: %d' % fra_vocab_size) # + id="FtCUksm-PL2Y" def encode_sequences(tokenizer, length, lines): # integer encode sequences seq = tokenizer.texts_to_sequences(lines) # pad sequences with 0 values seq = pad_sequences(seq, maxlen=length, padding='post') return seq # + id="U_oUdQVtPQgp" from sklearn.model_selection import train_test_split train, test = train_test_split(fra_eng, test_size=0.2, random_state = 12) # + id="fDqhF-RxPYCx" trainX = encode_sequences(fra_tokenizer, fra_length, train[:, 0]) trainY = encode_sequences(eng_tokenizer, eng_length, train[:, 1]) # + id="8t89bIinPw_P" testX = 
encode_sequences(fra_tokenizer, fra_length, test[:, 0]) testY = encode_sequences(eng_tokenizer, eng_length, test[:, 1]) # + id="ajPeLddKQAsB" def build_model(in_vocab, out_vocab, in_timesteps, out_timesteps, units): model = Sequential() model.add(Embedding(in_vocab, units, input_length=in_timesteps, mask_zero=True)) model.add(LSTM(units)) model.add(RepeatVector(out_timesteps)) model.add(LSTM(units, return_sequences=True)) model.add(Dense(out_vocab, activation='softmax')) return model # + id="olpw1zEXQHXL" model = build_model(fra_vocab_size, eng_vocab_size, fra_length, eng_length, 512) rms = optimizers.RMSprop(lr=0.001) model.compile(optimizer=rms, loss='sparse_categorical_crossentropy') # + colab={"base_uri": "https://localhost:8080/"} id="6TDnK23QQSCh" outputId="88082fb3-1f63-4c24-b365-c603e9aad306" filename = 'model.h1.152' checkpoint = ModelCheckpoint(filename, monitor='val_loss', verbose=1, save_best_only=True, mode='min') history = model.fit(trainX, trainY.reshape(trainY.shape[0], trainY.shape[1], 1), epochs=30, batch_size=512, validation_split = 0.2, callbacks=[checkpoint], verbose=1) # + colab={"base_uri": "https://localhost:8080/", "height": 265} id="X1s0YcDCQgXU" outputId="ab647726-ea9d-4caa-a819-df9036201646" plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.legend(['train','validation']) plt.show() # + colab={"base_uri": "https://localhost:8080/"} id="I8dIm7kPQmEw" outputId="aed2798f-5777-4bdb-ccc1-fc52dd4ba07d" model = load_model('model.h1.152') preds = model.predict_classes(testX.reshape((testX.shape[0],testX.shape[1]))) # + id="9Z-dVZenQ4SU" def get_word(n, tokenizer): for word, index in tokenizer.word_index.items(): if index == n: return word return None # + id="qnHGmWUuRDGM" preds_text = [] for i in preds: temp = [] for j in range(len(i)): t = get_word(i[j], eng_tokenizer) if j > 0: if (t == get_word(i[j-1], eng_tokenizer)) or (t == None): temp.append('') else: temp.append(t) else: if(t == None): temp.append('') else: temp.append(t) preds_text.append(' '.join(temp)) # + id="uHi6XArbRLOs" pred_df = pd.DataFrame({'actual' : test[:,0], 'predicted' : preds_text}) # + id="GXvbDz7JRO0f" pd.set_option('display.max_colwidth', 200) # + colab={"base_uri": "https://localhost:8080/", "height": 787} id="JWqfH7RNRTGF" outputId="89a7bb44-b9d7-4da7-f942-6a1ba57f0499" pred_df.head(25) # + colab={"base_uri": "https://localhost:8080/", "height": 787} id="s_QaTqoBRVnI" outputId="999e1453-5031-4896-b6c4-a1f145b3d13d" pred_df.tail(25) # + colab={"base_uri": "https://localhost:8080/", "height": 787} id="zBsx-bfjRYVN" outputId="b7a48241-5643-4659-eff7-4b94225e2131" pred_df.tail(25) # + colab={"base_uri": "https://localhost:8080/", "height": 787} id="5hGIB2RDRaqi" outputId="0a70b3dc-d19b-4377-a726-a9aed88fde68" pred_df.sample(25) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Integral cross section # At electron energies $11.94eV$ and $12.14eV$ comes resonant scattering by exciting $E(^3\Sigma_g^+ \nu)$ stanja molekula $N_2$. Based on symmetry, cross section follows relation: # # \begin{equation} # \sigma(\theta) = A + B \cos^2\theta # \end{equation} # # Based on this formula and data we can calculate integral cross section. 
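# For completeness, with this angular form the integral over the full solid angle has a closed form,
#
# \begin{equation}
# \sigma_{\mathrm{int}} = \int_0^{2\pi}\!\int_0^{\pi} \left(A + B\cos^2\theta\right)\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi = 2\pi\left(2A + \frac{2B}{3}\right) = 4\pi\left(A + \frac{B}{3}\right),
# \end{equation}
#
# so when the simple $A + B\cos^2\theta$ shape holds, the fitted $A$ and $B$ already determine the integral cross section up to this geometric factor.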
# # Analysis # + import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy.optimize import curve_fit from scipy.integrate import quad # - data_1 = pd.DataFrame({ "theta": np.array([15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]), "cross section": np.array([7.17, 8.03, 7.76, 6.59, 5.68, 4.31, 2.68, 3.59, 2.2, 2.36, 3.49, 4.18]) }) data_2 = pd.DataFrame({ "theta": np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 128]), "cross section": np.array([7.2, 6.98, 5.76, 4.56, 3.01, 2.86, 1.93, 1.2, 1.2, 1.52, 2.19, 2.9, 4.06]) }) def cross_section_fit(theta, *args): return args[0] + args[1] * np.cos(np.pi * (theta - args[2]) / args[3]) ** 2 popt_1, _ = curve_fit(cross_section_fit, data_1["theta"], data_1["cross section"], p0=[2.5, 5.3, -50, 140], maxfev=8000) popt_2, _ = curve_fit(cross_section_fit, data_2["theta"], data_2["cross section"], p0=[2.5, 5.3, -50, 140], maxfev=8000) th = np.linspace(5, 135, 200) th_ = np.linspace(0, 180, 500) # + fig, ax = plt.subplots(2) fig.set_figwidth(15) fig.set_figheight(10) ax[0].scatter(data_1["theta"], data_1["cross section"], c='b', edgecolor='k') ax[0].plot(th, cross_section_fit(th, *popt_1)) ax[1].scatter(data_2["theta"], data_2["cross section"], c='b', edgecolor='k') ax[1].plot(th, cross_section_fit(th, *popt_2)) ax[0].set_title(r"Resonant scattering with $E=11.94eV$") ax[1].set_title(r"Resonant scattering with $E=12.14eV$") for axx in ax: axx.set_ylabel("Cross section") axx.set_xlabel("Angle") axx.grid() # - # ## Extrapolated curves # + fig, ax = plt.subplots() fig.set_figwidth(15) fig.set_figheight(5) ax.plot(th_, cross_section_fit(th_, *popt_1), label=r'$E=11.94eV$') ax.plot(th_, cross_section_fit(th_, *popt_2), label=r'$E=12.14eV$') ax.legend() ax.grid() ax.set_title(r"Resonant scattering") ax.set_ylabel("Cross section") ax.set_xlabel("Angle") # - # ## Integrated def integrate(theta, curve, args): fun = lambda x: curve(x, *args) step_by_step = np.fromiter((quad(fun, a, b)[0] for a, b in zip(theta[:-1], theta[1:])), dtype=np.float64) return np.fromiter((np.sum(step_by_step[:i]) for i in range(step_by_step.shape[0])), dtype=np.float64) i_1 = integrate(th_, cross_section_fit, popt_1) i_2 = integrate(th_, cross_section_fit, popt_2) # + fig, ax = plt.subplots() fig.set_figwidth(15) fig.set_figheight(5) ax.plot(th_[1:], i_1, label=r'$E=11.94eV$') ax.plot(th_[1:], i_2, label=r'$E=12.14eV$') ax.legend() ax.grid() ax.set_title(r"Resonant scattering") ax.set_ylabel("Integrated cross section") ax.set_xlabel("Angle") # - # # Conclusion - What is cross section?? 
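# As a hedged follow-up to the cumulative curves above, one way to reduce each fit to a single number is to integrate the extrapolated angular distribution over the full solid angle, $\sigma_{tot} = 2\pi\int_0^{\pi}\sigma(\theta)\sin\theta\,\mathrm{d}\theta$. The helper below (our own name, `total_cross_section`) does this numerically with the fitted parameters; the result is in the same relative units as the measured cross sections.

# +
# Solid-angle integral of a fitted curve; the fit works in degrees, so convert inside the integrand.
def total_cross_section(args):
    integrand = lambda theta_deg: (cross_section_fit(theta_deg, *args)
                                   * np.sin(np.radians(theta_deg)) * np.pi / 180.0)
    return 2 * np.pi * quad(integrand, 0, 180)[0]

print("E = 11.94 eV:", total_cross_section(popt_1))
print("E = 12.14 eV:", total_cross_section(popt_2))
# -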
# --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + id="5XWjPOj4LGhc" # Import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import tensorflow as tf from tensorflow import keras # + id="T6kLEGTLLwuQ" from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Activation, Dense, Flatten, BatchNormalization, Conv2D, MaxPool2D, Dropout from tensorflow.keras.optimizers import Adam from tensorflow.keras.metrics import categorical_crossentropy from tensorflow.keras.preprocessing.image import ImageDataGenerator from sklearn.metrics import confusion_matrix # + id="Ts7DxuvVMIHw" import matplotlib.image as mpimg # %matplotlib inline from tensorflow.keras.preprocessing import image_dataset_from_directory # for reproducibility SEED = 42 np.random.seed(SEED) tf.random.set_seed(SEED) # + colab={"base_uri": "https://localhost:8080/"} id="-CbS8ecdMKrJ" outputId="c5f2199a-231a-4611-8645-f54d80fafc31" # Define file paths # ! git clone https://github.com/chibykelaw/Hamoye_capstone_project_smote.git train_path = 'Hamoye_capstone_project_smote/Data/train/' val_path = 'Hamoye_capstone_project_smote/Data/val/' test_path = 'Hamoye_capstone_project_smote/Data/test/' # + colab={"base_uri": "https://localhost:8080/"} id="8bo6ZnHRMM6a" outputId="974d2a1a-63d1-46b4-92a2-1df1795c07a4" # generate train, test and validation sets from directory train_ds = image_dataset_from_directory(train_path, label_mode = 'categorical', image_size = (528, 528)) val_ds = image_dataset_from_directory(val_path, label_mode = 'categorical', image_size = (528, 528)) test_ds = image_dataset_from_directory(test_path, label_mode = 'categorical', shuffle= False, image_size = (528, 528)) # + colab={"base_uri": "https://localhost:8080/", "height": 324} id="N5-iTFzgQGMx" outputId="4bc0088b-bbfb-4cdc-c507-9bc0906de60d" # View dataset class_names = train_ds.class_names plt.figure(figsize=(10, 10)) for images, labels in train_ds.take(1): for i in range(9): ax = plt.subplot(3, 3, i + 1) plt.imshow(images[i].numpy().astype("uint8")) plt.title(class_names[np.argmax(labels[i], axis=None, out=None)]) plt.axis("off") # + colab={"base_uri": "https://localhost:8080/", "height": 281} id="1wizO4teVNkV" outputId="2dd40590-7234-4e22-a638-85619aabf99b" for images, label in train_ds.take(1): first_image = images[0] plt.imshow(first_image.numpy().astype("uint8")) plt.title(class_names[np.argmax(label[i], axis=None, out=None)]) plt.axis('off') # + colab={"base_uri": "https://localhost:8080/"} id="yd1zQEocZsel" outputId="8f369907-ed8a-4b51-8ada-e5072e280ed7" # make predictions using a pretrained model from keras.applications.efficientnet import EfficientNetB6 from tensorflow.keras.applications.efficientnet import decode_predictions from tensorflow.keras.preprocessing import image x = image.img_to_array(first_image) x = np.expand_dims(x, axis=0) base_model = EfficientNetB6(weights='imagenet') pred = base_model.predict(x) print('Predicted:', decode_predictions(pred)) # + id="k_K7L7UAfxx_" from tensorflow.keras import Model from tensorflow.keras.layers import Dense, GlobalAveragePooling2D # + colab={"base_uri": "https://localhost:8080/"} id="9F_FnEhahYTv" outputId="de1cc965-6157-445f-9c03-a56a34ce6180" from tensorflow.keras import Model from tensorflow.keras.layers import Dense, GlobalAveragePooling2D base_model = EfficientNetB6(weights='imagenet', 
include_top=False, input_shape=(528, 528,3)) # freeze extraction layers base_model.trainable = False # add custom top layers x = base_model.output x = GlobalAveragePooling2D()(x) predictions = Dense(4, activation='softmax')(x) model = Model(inputs=base_model.input, outputs=predictions) # confirm unfrozen layers for layer in model.layers: if layer.trainable==True: print(layer) # + colab={"base_uri": "https://localhost:8080/"} id="Qq-TTQ2QlI0R" outputId="1b02e1c8-c86e-4b16-d3b0-62b4bcc5185f" # create callbacks fr training my_callbacks = tf.keras.callbacks.ModelCheckpoint('weights.h5', save_best_only = True, save_weights_only = True) #compile the model model.compile(optimizer='adam', loss='categorical_crossentropy',metrics=['accuracy']) # train the model on the new data for a few epochs history = model.fit(train_ds, epochs=10, validation_data=val_ds, callbacks=[my_callbacks]) # + id="c1HKWa1PxYgg" # save results results = history.history # + colab={"base_uri": "https://localhost:8080/", "height": 246} id="9Xbw9942rOHy" outputId="839858a1-0ae9-4ff8-8d01-42a9339afbcb" # plot results n_epochs = len(results['loss']) plt.figure(figsize=[14,4]) plt.grid(True) plt.subplot(1,2,1) plt.plot(range(1, n_epochs+1), results['loss'], label='Training') plt.plot(range(1, n_epochs+1), results['val_loss'], label='Validation') plt.xlabel('Epoch'); plt.ylabel('Loss'); plt.title('Loss') plt.legend() plt.grid(True) plt.subplot(1,2,2) plt.plot(range(1, n_epochs+1), results['accuracy'], label='Training') plt.plot(range(1, n_epochs+1), results['val_accuracy'], label='Validation') plt.xlabel('Epoch'); plt.ylabel('Accuracy'); plt.title('Accuracy') plt.legend() plt.grid(True) plt.show() # + id="yHQZ5xghrPNa" model.load_weights('weights.h5') # + colab={"base_uri": "https://localhost:8080/"} id="XL0HMdxCrYez" outputId="2612f2cd-f9a9-4cd7-cb34-e617956f3468" predictions = model.evaluate(test_ds) # + colab={"base_uri": "https://localhost:8080/"} id="OInK4dAzrZTo" outputId="062d5111-925b-4cd4-aba1-d3049ca559a8" # make predictions on the unseen data predictions = model.predict(test_ds) predictions # + colab={"base_uri": "https://localhost:8080/"} id="-pTbp1yssCqG" outputId="5d4b2574-b5f8-4a1b-ed65-2d81920a12f7" # save the index of the highest probability predictions = predictions.argmax(axis=1) predictions # + colab={"base_uri": "https://localhost:8080/"} id="nQ8FznuArir7" outputId="4977501c-6bf0-42b0-d5b5-6c061a51b619" # get the actual values test_images = list(test_ds.unbatch().as_numpy_iterator()) y_true = np.array([i[1] for i in test_images]) y_true = y_true.argmax(axis=1) y_true # + colab={"base_uri": "https://localhost:8080/"} id="BGDuKZTvsWU4" outputId="87ffebcc-a0d7-432c-f467-120ff5e3a693" # calculate f1_score from sklearn.metrics import f1_score f1_score(y_true,predictions,average='macro') # + colab={"base_uri": "https://localhost:8080/"} id="bV8ukWhSsoaF" outputId="4fc16d88-e6d6-4388-a093-76c5ace707a0" # get the confusion matrix from sklearn.metrics import confusion_matrix confusion_matrix(y_true,predictions) # + [markdown] id="ghMF-ERxzQJ_" # **This looks like a perfect model with perfect accuracy** # + colab={"base_uri": "https://localhost:8080/"} id="gCCJPdSBzPKC" outputId="40115543-daad-4ced-b49e-510359aff0ab" # Save the model as .pkl import pickle pickle.dump(model, open('EfficientNetB6_1.0.pkl', 'wb')) # + id="i2u2eaLwzlPU" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: 
python
#     name: python3
# ---

# # Matplotlib Guide — Basic Bar Charts
#
# ## WeChat official account: 可视化图鉴

import matplotlib
print(matplotlib.__version__)  # check the Matplotlib version

import pandas as pd
print(pd.__version__)  # check the pandas version

import numpy as np
print(np.__version__)  # check the numpy version

import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['SimHei']  # enable Chinese font rendering

# Note: the code has been fully tested in the following environment:
# - Python 3.7.1
# - Matplotlib == 3.0.2
# - pandas == 0.23.4
# - numpy == 1.15.4
#
# Because of version differences there may be small syntax changes; if you get an error, first check your spelling and that your versions match!

# ### Basic bar chart — stacked bar chart

# +
x = ['周一', '周二', '周三', '周四', '周五', '周六', '周日']
y1 = [7,6,5,4,3,2,1]
y2 = [1,2,3,3,1,1,0.5]

plt.figure(figsize=(10,7))  # set the figure size
plt.bar(x, y1, label="系列1", edgecolor='black')
plt.bar(x, y2, label="系列2", edgecolor='black')
plt.legend(loc=1, fontsize=14)  # set the legend position
plt.ylabel('我是Y轴', fontsize=16)
plt.xlabel('我是X轴', fontsize=16)
plt.title("基础柱状图——堆叠柱状图", fontsize=18)
plt.show()
# -

# ### Horizontal stacked bar chart
#
# `plt.barh` is used the same way as `plt.bar` in the basic bar chart `MA_A_01`

# +
x = ['周一', '周二', '周三', '周四', '周五', '周六', '周日']
y1 = [7,6,5,4,3,2,1]
y2 = [1,2,3,3,1,1,0.5]

plt.figure(figsize=(10,7))  # set the figure size
plt.barh(x, y1, label="系列1", edgecolor='black')
plt.barh(x, y2, label="系列2", edgecolor='black')
plt.legend(loc=1, fontsize=14)  # set the legend position
plt.ylabel('我是Y轴', fontsize=14)
plt.xlabel('我是X轴', fontsize=14)
plt.title("基础柱状图——堆叠柱状图", fontsize=18)
plt.show()
# -

# ### Stacked bar chart — adding percentage labels

# +
x = ['周一', '周二', '周三', '周四', '周五', '周六', '周日']
y1 = [7,6,5,4,3,2,1]
y2 = [1,2,3,3,1,1,0.5]

plt.figure(figsize=(9,6), dpi=100)  # set the figure size
plt.bar(x, y1, label="系列1", edgecolor='black')
plt.bar(x, y2, label="系列2", edgecolor='black')
plt.legend(loc=1, fontsize=14)  # set the legend position
plt.ylabel('我是Y轴', fontsize=16)
plt.xlabel('我是X轴', fontsize=16)
plt.title("基础柱状图——堆叠柱状图", fontsize=18)

for x1, y1, y2 in zip(x, y1, y2):
    p1 = y1/(y1+y2)
    p2 = y2/(y1+y2)
    plt.text(x1, y2 + (y1 - y2)*0.4, '{:.0%}'.format(p1), ha='center', fontsize=15)
    plt.text(x1, y2*0.4, '{:.0%}'.format(p2), ha='center', fontsize=15)

plt.show()
# -

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# ## What is Data Preparation?
# - Data preparation is the process of cleaning and transforming raw data prior to processing and analysis. It is an important step that often involves reformatting data, correcting data, and combining data sets to enrich the data.
#
# - Data preparation is often a lengthy undertaking for data professionals or business users, but it is an essential prerequisite for putting data in context, turning it into insights, and eliminating bias caused by poor data quality.
#
# - For example, the data preparation process usually includes standardizing data formats, enriching source data, and/or removing outliers.
#
# 76% of data scientists say that data preparation is the worst part of their job, but efficient, accurate business decisions can only be made with clean data. Data preparation helps:
#
# - Fix errors quickly — Data preparation helps catch errors before processing. After data has been removed from its original source, these errors become more difficult to understand and correct.
# - Produce top-quality data — Cleaning and reformatting datasets ensures that all data used in analysis will be high quality.
# - Make better business decisions — Higher quality data that can be processed and analyzed more quickly and efficiently leads to more timely, efficient and high-quality business decisions.
# # Additionally, as data and data processes move to the cloud, data preparation moves with it for even greater benefits, such as: # - Superior scalability — Cloud data preparation can grow at the pace of the business. Enterprise don’t have to worry about the underlying infrastructure or try to anticipate their evolutions. # - Future proof — Cloud data preparation upgrades automatically so that new capabilities or problem fixes can be turned on as soon as they are released. This allows organizations to stay ahead of the innovation curve without delays and added costs. # - Accelerated data usage and collaboration — Doing data prep in the cloud means it is always on, doesn’t require any technical installation, and lets teams collaborate on the work for faster results. # - Additionally, a good, cloud-native data preparation tool will offer other benefits (like an intuitive and simple to use GUI) for easier and more efficient preparation. # ## Data Preparation Steps # ### Remove Unwanted observations # The first step to data cleaning is removing unwanted observations from your dataset. # # **This includes duplicate or irrelevant observations.** # # **Duplicate observations** # Duplicate observations most frequently arise during data collection, such as when you: # - Combine datasets from multiple places # - Scrape data # - Receive data from clients/other departments # # **Irrelevant observations** # Irrelevant observations are those that don’t actually fit the specific problem that you’re trying to solve. # - For example, if you were building a model for Single-Family homes only, you wouldn't want observations for Apartments in there. # - This is also a great time to review your charts from Exploratory Analysis. You can look at the distribution charts for categorical features to see if there are any classes that shouldn’t be there. # - Checking for irrelevant observations before engineering features can save you many headaches down the road. # ### Fix Structural Errors # The next bucket under data cleaning involves fixing structural errors. # # Structural errors are those that arise during measurement, data transfer, or other types of **"poor housekeeping."** # # For instance, you can check for **typos** or **inconsistent capitalization.** This is mostly a concern for categorical features, and you can look at your bar plots to check. # # Here's an example: # ![image.png](attachment:image.png) # As you can see: # # - 'composition' is the same as 'Composition' # - 'asphalt' should be 'Asphalt' # - 'shake-shingle' should be 'Shake Shingle' # - 'asphalt,shake-shingle' could probably just be 'Shake Shingle' as well # # After we replace the typos and inconsistent capitalization, the class distribution becomes much cleaner: # ![image.png](attachment:image.png) # # Finally, check for mislabeled classes, i.e. separate classes that should really be the same. # # - e.g. If ’N/A’ and ’Not Applicable’ appear as two separate classes, you should combine them. # - e.g. ’IT’ and ’information_technology’ should be a single class. # ### Filter Unwanted Outliers # Outliers can cause problems with certain types of models. For example, linear regression models are less robust to outliers than decision tree models. # # In general, if you have a **legitimate** reason to remove an outlier, it will help your model’s performance. # # However, outliers are **innocent until proven guilty.** You should never remove an outlier just because it’s a "big number." That big number could be very informative for your model. 
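# Before moving on, here is a quick pandas sketch of the structural fixes described above. The `roof` column and the tiny DataFrame are made-up stand-ins for illustration, not part of any dataset used here.

# +
import pandas as pd

# toy data with the typos and inconsistent capitalisation discussed above
df = pd.DataFrame({'roof': ['composition', 'Composition', 'asphalt', 'Asphalt',
                            'shake-shingle', 'asphalt,shake-shingle']})

# map every variant of a class to a single canonical label
df['roof'] = df['roof'].replace({
    'composition': 'Composition',
    'asphalt': 'Asphalt',
    'shake-shingle': 'Shake Shingle',
    'asphalt,shake-shingle': 'Shake Shingle',
})

df['roof'].value_counts()  # the class distribution is now much cleaner
# -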
# # We can’t stress this enough: you must have a good reason for removing an outlier, such as suspicious measurements that are unlikely to be real data. # # you can check outliers by the help of percentiles # - In Numeric col # - eg housing_Data.describe(percentiles=[.05,.25,.5,.75,.90,.95,.99]) # - eg. By using box plot , # ###### plt.figure(figsize=(17, 20)) # ###### plt.subplot(5,3,1) # ###### sns.boxplot(y = 'ScreenPorch', palette='Set3', data = housing_Data) # # # # ## Handle Missing Data # Missing data is a deceptively tricky issue in applied machine learning. # # First, just to be clear, **you cannot simply ignore missing values in your dataset.** You must handle them in some way for the very practical reason that most algorithms do not accept missing values. # # #### "Common sense" is not sensible here # Unfortunately, from our experience, the 2 most commonly recommended ways of dealing with missing data actually suck. # # They are: # # - 1. **Dropping** observations that have missing values # - 2. **Imputing** the missing values based on other observations # # Dropping missing values is sub-optimal because when you drop observations, you ***drop information.*** # # - The fact that the value was missing may be informative in itself. # - Plus, in the real world, you often need to make predictions on new data even if some of the features are missing! # # Imputing missing values is sub-optimal because the value was originally missing but you filled it in, which always leads to a loss in information, no matter how sophisticated your imputation method is. # # - Again, ***"missingness"*** is almost always informative in itself, and you should ***tell your algorithm *** if a value was missing. # - Even if you build a model to impute your values, you’re not adding any real information. You’re just reinforcing the patterns already provided by other features. # # In short, you should always tell your algorithm that a value was missing because ***missingness is informative.*** # # So how can you do so? # # ### Missing categorical data # The best way to handle missing data for categorical features is to simply label them as ’Missing’! # # - You’re essentially adding a new class for the feature. # - This tells the algorithm that the value was missing. # - This also gets around the technical requirement for no missing values. # # ### Missing numeric data # For missing numeric data, you should **flag and fill** the values. # # - Flag the observation with an indicator variable of missingness. # - Then, fill the original missing value with 0 just to meet the technical requirement of no missing values. # - By using this technique of flagging and filling, you are essentially **allowing the algorithm to estimate the optimal constant for missingness,** instead of just filling it in with the mean. # ### Some of the command which will help you During cleaning process. 
# # #### Missing value in % # round(100*(df.isnull().sum()/len(df.Id)), 2) # # #### Return missing value in Categorical column only # # missing =round(100*df.select_dtypes(include='object').isnull().sum()/len(df.Id)),2) # missing.loc[missing > 0] # # #### Return missing value in Numeric Column only # missing =round(100*(df.select_dtypes(include=['int64','float']).isnull().sum()/len(df.Id)),2) # # missing.loc[missing > 0] # # #### Drop the columns where all elements are missing values: # df.dropna(axis=1, how='all') # # #### Drop the columns where any of the elements are missing values # df.dropna(axis=1, how='any') # # #### Keep only the rows which contain 2 missing values maximum # df.dropna(thresh=2) # # #### Drop the columns where any of the elements are missing values # df.dropna(axis=1, how='any') # # #### Fill all missing values with the mean of the particular column # df.fillna(df.mean()) # # #### Fill any missing value in column 'A' with the column median # df['A'].fillna(df['A'].median()) # # #### Fill any missing value in column 'Depeche' with the column mode # df['Depeche'].fillna(df['Depeche'].mode()) # # # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # notebook for writing edr browse products # # This is designed to be run on a fully-created version of the # EDR data collection. It simply opens every FITS file, converts # it to JPEG, fills out a simple label, and writes the JPEG and # XML files out to a duplicate of the EDR directory structure. # # ## performance tips # Performance tip: Parallelizing this probably won't help that # much unless you're in an unusual operating environment, because # your most likely bottleneck is IOPS -- encoding these teensy # arrays as JPEGs and inserting a few lines of text requires very # little working memory, processing power, or throughput. Use a very # fast disk if you want to speed it up. If you do have a good reason # to parallelize it, I recommend using ```pathos``` or simply # running multiple instances of this notebook; Python vanilla # ```multiprocessing``` will fail when attempting to pickle parts # of this pipeline. # + import datetime as dt import fs.copy from fs.osfs import OSFS from clem_conversion import ClemBrowseWriter # + # root of the to-be-created EDR browse directory tree browse_fs = OSFS('~/buckets/clem_output/browse/edr/') # root of the already-created EDR browse directory tree data_fs = OSFS('~/buckets/clem_output/data/edr/') # + pycharm={"name": "#%%\n"} # make the whole directory tree, avoiding tedious directory- # making later. will take a minute; there are a million or # so directories. 
fs.copy.copy_structure(data_fs, browse_fs) # + pycharm={"name": "#%%\n"} browse_start_time = dt.datetime.now() for ix, file in enumerate(data_fs.walk.files(filter=['*.fits'])): if ix % 1000 == 0: print(file) print(str((dt.datetime.now() - browse_start_time).total_seconds())) browse_start_time = dt.datetime.now() path, fn = fs.path.split(file) pds4_root = fn[:-5] output_path = browse_fs.getsyspath(path) ClemBrowseWriter( pds4_root, "edr" ).write_pds4(output_path + '/', verbose=False) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Book Linkage Tasks # # **Date:** 15/1/2019 # # **Author:** # # The aim of this notebook is to close out one of the major record linkage tasks for *Mapping Print, Charting Enlightenment*. There are three sets of book data that need to be more thoroughly joined to the rest of the database: # # * **The 'List of banned books' (BNF Ms Fr 21928-9):** This lists banned titles and was used by book inspectors in eighteenth-century France to police the trade. Each item on the list must be assigned a `super_book_code`. # * **The 'Bastille Register' (BNF Arsenal MS 10305):** This lists books found in the Bastille after it was stormed during the revolution. In principle it records information about particular editions, so each book in the list should be assigned a `book_code`. # * **The MMF-2 database:** This is a bibliographic database of French fiction from the eighteenth century. It records every known edition of every French novel from the period 1700-1800. Each record in the database should therefore assigned a `book_code` in the wider *MPCE* architecture. # # The data has all be drawn from the databases and preprocessed in R, transforming a record linkage task into a deduplication task. For each of the three tasks above, a csv has been prepared. A model will be trained to detect which rows refer to the same book, based on as much of the available information as possible. 
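# Before diving in: the `dedupe` package works on records represented as a dictionary keyed by record id, with missing values given as `None`. The helper `dedupe_initialise` presumably builds this structure from the DataFrame internally; the snippet below is only a rough sketch of that conversion (an assumption, since the helper module itself is not shown here).

# +
import pandas as pd

def frame_to_records(frame):
    "Rough sketch: convert a DataFrame into {row_id: {field: value}} with NaNs mapped to None."
    return frame.where(pd.notnull(frame), None).to_dict(orient='index')

# e.g. frame_to_records(bb_df[['super_book_title', 'author_name']])
# -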
# Import main libraries import dedupe as dd import pandas as pd import os as os import time import numpy as np import random from dedupe_helper_functions import dedupe_initialise, run_deduper, save_clusters import json # ## Section 1: The List of Banned Books # + # Cell 1.1 Set main paths # Root folder for banned books data bbr = "banned_books_list/" # Set paths bb_input = bbr + "banned_books_list_dddata.csv" bb_output = bbr + "banned_books_deduped.csv" bb_settings = bbr + "banned_books_learned_settings" bb_training = bbr + "banned_books_training.json" # + # Cell 1.2 Import data bb_df = pd.read_csv(bb_input, encoding = "utf-8") print(f'bb_df has {bb_df.shape[0]} rows and {bb_df.shape[1]} columns:\n{", ".join(list(bb_df))}\n') print(f'{len(bb_df[pd.isnull(bb_df.super_book_code)])} books require super_book_codes.') # + # Cell 1.3 Initialise deduper # Define the fields that the model will examine bb_fields = [ {'field':'super_book_title', 'type': 'String'}, {'field':'author_name', 'type': 'String', 'has missing':True} ] # Create Dedupe object bb_model = dedupe_initialise(bb_df, bb_fields, training_file = bb_training) # - # Cell 1.4 Label training pairs dd.consoleLabel(bb_model) # + # Cell 1.5 Run the Deduper bb_model, matches = run_deduper(bb_model, bb_df, bb_settings, bb_training, recall_weight = 1.5) # - # Save the results _ = save_clusters(matches, bb_df, bb_output) bb_df[pd.notnull(bb_df.cluster) & pd.notnull(bb_df.ID)] # ## Section 2: The Bastille Register # + # Cell 2.1 Paths and data # Root folder for banned books data basr = "bastille_register/" # Paths bas_input = basr + "bastille_register_dddata.csv" bas_output = basr + "bas_reg_deduped.csv" bas_settings = basr + "bas_reg_learned_settings" bas_training = basr + "bas_reg_training.json" # Data bas_df = pd.read_csv(bas_input, encoding = "utf-8") print(f'bas_df has {bas_df.shape[0]} rows and {bas_df.shape[1]} columns:\n{", ".join(list(bas_df))}\n') print(f'{len(bas_df[pd.isnull(bas_df.book_code)])} books require book_codes.') # + # Cell 2.2 Initialise Deduper # Fields that the model will examine bas_fields = [ {'field':'full_book_title', 'type': 'String'}, {'field':'author_name', 'type': 'String', 'has missing':True}, {'field':'stated_publication_years', 'type':'DateTime', 'has missing':True}, {'field':'stated_publication_places', 'type': 'String', 'has missing':True}, ] # Create Dedupe object bas_model = dedupe_initialise(bas_df, bas_fields, training_file = bas_training) # - # Cell 2.3 Label tranining examples dd.consoleLabel(bas_model) # The labelling in this instance was rough and difficult. There are so many editions, and so little information about them, that it is impossible to quickly establish a 'ground truth'. Indeed, it is impossible in many cases to establish a ground truth at all. _, matches = run_deduper(bas_model, bas_df, bas_settings, bas_training, recall_weight = 2) _ = save_clusters(matches, bas_df, bas_output) # ## Section 3: The Big One (MMF-2) # # This section is only provisional, because at the time of writing I do not have 's latest version of the MMF-2 database. But since he is largely working on books to 1720, it is unlikely that too many new editions from our other FBTEE datasets will be missing. 
# + # Cell 3.1 Paths and data # Root folder for banned books data mmfr = "mmf_2/" # Paths mmf_input = mmfr + "mmf_2_dddata.csv" mmf_output = mmfr + "mmf_deduped.csv" mmf_settings = mmfr + "mmf_settings" mmf_training = mmfr + "mmf_training.json" # Data mmf_df = pd.read_csv(mmf_input, encoding = "utf-8") print(f'bas_df has {mmf_df.shape[0]} rows and {mmf_df.shape[1]} columns:\n{", ".join(list(mmf_df))}\n') print(f'{len(mmf_df[pd.isnull(mmf_df.book_code)])} books require book_codes.') # + # Cell 2.2 Initialise Deduper # Need to do some light preprocessing on the date column mmf_df['date'] = mmf_df.date.astype(str).str.replace("\.0", "") # Fields that the model will examine mmf_fields = [ {'field':'long_title', 'type':'String'}, {'field':'publisher', 'type':'String', 'has missing':True}, {'field':'place', 'type':'String', 'has missing':True}, {'field':'date', 'type':'DateTime', 'has missing':True}, {'field':'author_name', 'type':'String', 'has missing':True} ] # Create Dedupe object mmf_model = dedupe_initialise(mmf_df, mmf_fields) # - # Cell 2.3 Label tranining examples dd.consoleLabel(mmf_model) _, mmf_matches = run_deduper(mmf_model, mmf_df, mmf_settings, mmf_training, recall_weight = 1.5) _ = save_clusters(mmf_matches, mmf_df, mmf_output) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="ejoGZZRKFArL" import numpy as np import pandas as pd import os import pathlib # + colab={"base_uri": "https://localhost:8080/"} id="Ulmix5uzJ6wL" outputId="cac19930-b9d9-4364-8320-47126ea42bba" data_root = pathlib.Path('/content/drive/MyDrive/tobaco_OCR/') print(data_root) for item in data_root.iterdir(): print(item) # + id="yFa0ssSKFgEK" colab={"base_uri": "https://localhost:8080/"} outputId="1ff8daba-71e3-4dc8-f951-d84de6409cae" def get_file_paths_and_labels(data_root): text_paths = [str(path) for path in data_root.glob('*/*.txt')] return text_paths text_paths = get_file_paths_and_labels(data_root) print(text_paths) # + colab={"base_uri": "https://localhost:8080/"} id="jLdJ6h4VK3oA" outputId="a728267e-b295-4fad-ae19-1121fea3b63c" texts = [] for this_text in text_paths: with open(this_text) as f: lines = f.readlines() lines = ' '.join(lines) texts.append(lines) print(len(texts)) # + colab={"base_uri": "https://localhost:8080/", "height": 206} id="5h4AM27PRWI2" outputId="572f828e-62e6-434b-944c-418be52029b7" df = pd.DataFrame(list(zip(text_paths, texts)), columns =['Path', 'text']) df.head() # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="O90R-3vvCrY6" outputId="7577d34f-b04d-4e17-cefb-23b84d39852c" import re df['text'] = [re.sub(r'[^\w\s]','',s) for s in df['text']] df['text'] = [s.replace('\n','') for s in df['text']] df # + colab={"base_uri": "https://localhost:8080/"} id="vR30wSq68Di8" outputId="939996d7-5222-41ae-fdb3-2bc9c7709bf7" ## Tokenize, Lemmatize, stopwords removal import spacy import nltk nlp = spacy.load("en", disable=['parser', 'tagger', 'ner']) from nltk.corpus import stopwords nltk.download('stopwords') stops = stopwords.words("english") def normalize(comment, lowercase, remove_stopwords): if lowercase: comment = comment.lower() comment = nlp(comment) lemmatized = list() for word in comment: lemma = word.lemma_.strip() if lemma: if not remove_stopwords or (remove_stopwords and lemma not in stops): lemmatized.append(lemma) return " 
".join(lemmatized) df['text'] = df['text'].apply(normalize, lowercase=True, remove_stopwords=True) # + colab={"base_uri": "https://localhost:8080/", "height": 423} id="QNV0grylE14s" outputId="f6d25272-6afd-4011-af95-0838beb17704" df # + colab={"base_uri": "https://localhost:8080/", "height": 497} id="D9euiWtsFH9s" outputId="78f71d5f-05c5-430a-e7ef-a8ab8c77f110" #Tokenize import nltk nltk.download('punkt') nltk.download('wordnet') df['text'] = [nltk.word_tokenize(s) for s in df['text']] df # + id="i11N0u1EIsNT" from gensim.models.fasttext import FastText word_tokens = df['text'] # Defining values for parameters embedding_size = 300 window_size = 5 min_word = 5 down_sampling = 1e-2 # # %%time fast_Text_model = FastText(word_tokens, size=embedding_size, window=window_size, min_count=min_word, sample=down_sampling, workers = 4, sg=1, iter=100) # + id="8RoqxjW6NXgF" # !mkdir -p saved_model fast_Text_model.save('saved_model/my_model') # + colab={"base_uri": "https://localhost:8080/"} id="5P_SqxIN9Q77" outputId="1ab82c36-3579-4956-94d8-7fe2db63ef48" # !zip -r '/content/saved_model.zip' '/content/saved_model' # + colab={"base_uri": "https://localhost:8080/", "height": 16} id="1lVoXxje_vy8" outputId="d13ed10a-53b6-4501-d8e3-2e5f70ab5d2f" from google.colab import files files.download('/content/saved_model.zip') # + id="_pxQTmt7JJlJ" # from gensim.models import Word2Vec # # Save fastText gensim model # fast_Text_model.save("model/ft_model_yelp") # # Load saved gensim fastText model # fast_Text_model = Word2Vec.load("model/ft_model_yelp") # + colab={"base_uri": "https://localhost:8080/", "height": 677} id="fwNFNhv15Ng7" outputId="fb16e36c-e91b-403a-943f-e86a119e8f87" # !pip install fasttext import fasttext import fasttext.util ft = fasttext.load_model('cc.en.300.bin') # + id="RS3LlDHtKQgP" # Check word embedding for a perticular word l = fast_Text_model.wv['chicken'] print(len(l)) print(l) # + colab={"base_uri": "https://localhost:8080/", "height": 206} id="TXZ15kZbMtkA" outputId="afd21c64-b83c-4cc6-adad-e093540a7342" def vectorize_str(token_list): embedd = [] for word in token_list: embedd.append(fast_Text_model.wv['word']) df['text'] = [fast_Text_model.wv['chicken'] for word df['text']] # + colab={"base_uri": "https://localhost:8080/"} id="6vesJp81MzCH" outputId="fd3a718b-8422-4328-d947-b51a5e01a3da" biglist = [] for this_list in df['text']: for i in this_list: biglist.append(i) print(len(biglist)) print(len(set(biglist))) # + id="r_uQP3d8NwG8" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Documents, spans and tokens # ## Step 1 - Tokens from spacy.lang.en import English # Import the English language class and create the nlp object nlp = English() #Process the text doc = nlp("I like tree kangaroos and narwhals.") #Select the first token first_token = doc[0] #Print the first token's text print(first_token.text) # ## Step 2 - Spans #A slice of the Doc for tree kangaroos tree_kangaroos = doc[2:4] print(tree_kangaroos.text) #A slice of the Doc for "tree kangaroos and narwhals" without the '.' 
tree_kangaroos_and_narwhals = doc[2:6] print(tree_kangaroos_and_narwhals.text) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- from __future__ import division # %load_ext autoreload # %autoreload 2 from django.conf import settings import cPickle as pickle import csv import os import sys import numpy as np import pandas as pd sys.path.append('c:\dev\opinion\opinion\python\opinion') import utils.fileutils as fileutils import utils.metrics as metrics import nltk rootdir = r"C:\dev\opinion\papers\sentiplus" indir = os.path.join(rootdir, "Hybrid") # + # Hot Encode DSOSE values and then Change the previous Base file into the new one #infile = os.path.join(indir, "DSOSE_ResultsConsolidated_RF.xls") infile = os.path.join(indir, "ConsolidatedWithBestTrainedClfsFromThreeSETools.xls") df = pd.read_excel(infile, "Sheet1", encoding="ISO-8859-1") dsoseHotEncodes = [] for index, row in df.iterrows(): dsose = row["DSOSE"] if dsose == "p": dsoseHotEncodes.append(1) elif dsose == "n": dsoseHotEncodes.append(-1) else: dsoseHotEncodes.append(0) df["DSOSE_HotEncoded"] = (pd.Series(dsoseHotEncodes)).values df.to_excel(infile, index=False) # - # + def computePerformanceOverallOfLearner(infile, learnerCol, filenames): #infile = os.path.join(indir, "ResultsConsolidated_"+algo+".xls") df = fileutils.readExcel(infile, "Sheet1", encoding="ISO-8859-1") exps = [] gots = [] labels = set() for index, row in df.iterrows(): fname = row["File"] fname = fname.split("_")[0] if fname not in filenames: #print fname, " not in filenmaes" #return continue else: exp = row["ManualLabel"] got = row[learnerCol] labels.add(exp) exps.append(exp) gots.append(got) computer = metrics.PerformanceMultiClass(exps, gots, labels = list(labels)) for label in labels: pr = computer.precision(label) re = computer.recall(label) f1 = 2*pr*re/(pr+re) print "Label = %s. Precision = %.3f. Recall = %.3f. F1 = %.3f"%(label, pr, re, f1) f1_macro = computer.f1_macro_average() pr_macro = computer.precision_macro_average() rec_macro = computer.recall_macro_average() f1_micro, _, _ = computer.compute_micro_average() print "F1 Macro = %.3f. Micro = %.3f"%(f1_macro, f1_micro) print "Macro Precision = %.3f. Recall = %.3f"%(pr_macro, rec_macro) print "-------------------------------" def computePerformancOfLearner(infile, learnerCol, filenames): #infile = os.path.join(indir, "ResultsConsolidated_"+algo+".xls") df = fileutils.readExcel(infile, "Sheet1", encoding="ISO-8859-1") exps = dict() gots = dict() labels = dict() for index, row in df.iterrows(): fname = row["File"] fname = fname.split("_")[0] if fname not in filenames: #print fname, " not in filenmaes" continue else: if fname not in exps: exps[fname] = [] gots[fname] = [] labels[fname] = set() exp = row["ManualLabel"] got = row[learnerCol] labels[fname].add(exp) exps[fname].append(exp) gots[fname].append(got) for fname in filenames: computer = metrics.PerformanceMultiClass(exps[fname], gots[fname], labels = list(labels[fname])) for label in labels[fname]: pr = computer.precision(label) re = computer.recall(label) f1 = 2*pr*re/(pr+re) print "File %s. Label = %s. Precision = %.2f. Recall = %.2f. F1 = %.2f"%(fname, label, pr, re, f1) f1_macro = computer.f1_macro_average() f1_micro, _, _ = computer.compute_micro_average() print "File = %s. F1 Macro = %.2f. 
Micro = %.2f"%(fname, f1_macro, f1_micro) print "-------------------------------" # - filenames = ["DatasetLinJIRA", "BenchmarkUddinSO", "DatasetLinAppReviews", "DatasetLinSO", "DatasetSenti4SDSO", "OrtuJIRA"] infile = os.path.join(indir, "ConsolidatedWithBestTrainedClfsFromThreeSETools.xls") learnerCol = "DsoLabelFullTextW2V" print learnerCol, ": Overall Performance" print "-"*80 computePerformanceOverallOfLearner(infile, learnerCol, filenames) print learnerCol, ": By File Performance" print "-"*80 computePerformancOfLearner(infile, learnerCol, filenames) infile = os.path.join(indir, "ConsolidatedWithBestTrainedClfsFromThreeSETools.xls") learnerCol = "DsoLabelFullText" print learnerCol, ": Overall Performance" print "-"*80 computePerformanceOverallOfLearner(infile, learnerCol, filenames) print learnerCol, ": By File Performance" print "-"*80 computePerformancOfLearner(infile, learnerCol, filenames) infile = os.path.join(indir, "ConsolidatedWithBestTrainedClfsFromThreeSETools.xls") learnerCol = "Senti4SD" print learnerCol, ": Overall Performance" print "-"*80 computePerformanceOverallOfLearner(infile, learnerCol, filenames) print learnerCol, ": By File Performance" print "-"*80 computePerformancOfLearner(infile, learnerCol, filenames) infile = os.path.join(indir, "ConsolidatedWithBestTrainedClfsFromThreeSETools.xls") learnerCol = "SentiCR" print learnerCol, ": Overall Performance" print "-"*80 computePerformanceOverallOfLearner(infile, learnerCol, filenames) print learnerCol, ": By File Performance" print "-"*80 computePerformancOfLearner(infile, learnerCol, filenames) infile = os.path.join(indir, "ConsolidatedWithBestTrainedClfsFromThreeSETools.xls") learnerCol = "SentistrengthSE" print learnerCol, ": Overall Performance" print "-"*80 computePerformanceOverallOfLearner(infile, learnerCol, filenames) print learnerCol, ": By File Performance" print "-"*80 computePerformancOfLearner(infile, learnerCol, filenames) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Demystifying Neural Network # # ### Training Neural Network - Backpropagation # - Intro : complexity in computation # - what is backpropatagion? What's the goal? # - How to calculate backpropagation # - feed forward # - error calculation # - backward propagation # - code implementation of back propagation # I thought that the topic of backpropagation deserves a whole chapter since I struggled to fully understand the concept. You know one of those concepts that you just think about it, it seems like it makes sense.But then once you actually sit down and try to write it out or even implement it via code, you just get stuck. This was one of those concepts for me. # # I found a great source to understand backpropagation by [](https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/). If you want more detailed expalanation with additional information, please visit his website. In this chapter, I am going to use his example described in his guide to calculate myself backpropagation, and end with code implementation of backpropagation. # ____________________ # ### Intro : complexity in computation # I describe briefly last time how weights are updated via gradient descent. However, I didn't delve into how those gradients are calculated. I mean, we know that we would take slope of the cost function, and how much $w$ wiggles, as we wiggle a cost function. 
# But what if there are millions or even trillions of weights? (which is said to be a common case in deep networks) Since any neuron in one layer can be connected to any neuron in the next layer, wouldn't computation grow exponentially?
#
# Thanks to backpropagation (which will be explained shortly), computation is not going to blow up exponentially. This is because weights and biases close to the input layer can influence the cost function only by going through the layers on the right side of the network. If you look at the figure below, $w_1$, $w_2$, $w_3$, $w_4$ can influence the cost function only by going through $h_1$ and $h_2$.
#
# That said, many of the computations that we need to do in one column of neurons have already been computed in the column to its right. Therefore, computation is not going to grow exponentially. Professor Winston, during one of his neural net lectures, said "what's computed is computed and need not be recomputed" (I really enjoy his random but intellectual jokes during his lectures).
#
# - amount of computation:
# 1. depth: if we increase the number of layers (depth), the amount of computation will increase **linearly**.
# 2. width: any neuron in one column can be connected to any neuron in the next column. The amount of computation is proportional to the number of connections: $w^2$
# ____________________
# ### What is backpropagation? What's the goal?
# Backpropagation is a method for training a neural network by minimizing a cost function. The name comes from the fact that it uses the "can't-stop-won't-stop" chain rule to 'propagate' an error computed at the output (commonly referred to as the top layer, since we are 'backpropagating') backwards all the way down to the bottom layers. Backpropagation computes the contribution that each parameter (weights and biases) makes to the loss value by calculating derivatives (gradients).
#
# We can think of backpropagation as the workhorse of the gradient descent algorithm. We are repeatedly changing the positions of the weights to find a minimum of the cost function. This enables a neural network to optimize its weights so that the network can learn how to correctly map inputs to outputs.
# ____________________
# ### How is backpropagation calculated?
#
# Now, let's go through a few steps of backpropagation. I've read about backpropagation enough, but in most cases one learns most not by staring at an arbitrary chain rule formula but by writing it out and calculating it oneself. So I'm going to do that now.
#
# ____________________
# #### Feed Forward
# Note that at the neurons $h_i$, the inputs are multiplied by the weights $w_i$ and summed together with the biases $b_i$. At the neurons $a_i$ and $z_i$, the values calculated at $h_i$ go through a sigmoid activation function $$a(h) = \sigma(h) = \frac{1}{1+e^{-h}}$$
# *Forgive me for some rounding errors. I just wanted to write them all out rather than typing precise numbers.*
#
# Thus, $h_1 = w_1 * x_1 + w_2 * x_2 + b_1$, which is $0.15 * 0.05 + 0.2 * 0.1 + 0.35 * 1 = 0.3775$
#
# Using $h_1$ as the input, $a_1 = \frac{1}{1+e^{-0.3775}} = 0.5932$
#
# Going through the same process as above, we can also get the value of $a_2 = 0.5968$
#
# Using the values calculated above as inputs, we also calculate the values for $h_3$, $h_4$, $z_1$, $z_2$.
#
# $$h_3 = w_5 * a_1 + w_6 * a_2 + b_2$$
# which is,
# $$0.4 * 0.5932 + 0.45 * 0.5968 + 0.6 * 1 = 1.1059$$
#
# Then, this value also goes through a sigmoid activation function and outputs $z_1$.
# $$z_1 = \frac{1}{1+e^{-1.1059}} = 0.7514$$
#
# Going through the same process above, we get the value of $z_2 = 0.7729$
#
# ____________________
# #### Error calculation
# Using the mean squared error function that I introduced in part 1, we can calculate the total error of our neural network.
#
# $$E_{total} = \sum_i \dfrac{1}{2}(predicted_{z_i} - true_{z_i})^2$$
# $$\dfrac{1}{2}(predicted_{z_1} - true_{z_1})^2 = \dfrac{1}{2}(0.7514 - 0.01)^2 = 0.2748$$
#
# Repeating the above process for $z_2$, we get an error of $0.0236$
#
# $$E_{total} = 0.2748 + 0.0236 = 0.2984$$
# ____________________
# #### Backpropagation calculation
# *Note that I wrote the equations starting from the output layer and ending at the bottom layer*
# ##### top layer
# - Let's first consider how $w_5$ contributes to the loss value above.
# - $E_{total}$ refers to the sum of errors generated at the output neurons
#
# By the chain rule,
# $$\dfrac{\partial E_{total}}{\partial w_5} = \dfrac{\partial E_{total}}{\partial z_1} * \dfrac{\partial z_1}{\partial h_3} * \dfrac{\partial h_3}{\partial w_5}$$
#
# Let's break the above equation down into its three factors.
#
# - Let's look at the first term: $\dfrac{\partial E_{total}}{\partial z_1}$
#
# $$E_{total} = \dfrac{1}{2}(predicted_{z_1} - desired_{z_1})^2 + \dfrac{1}{2}(predicted_{z_2} - desired_{z_2})^2$$
#
# $$\dfrac{\partial E_{total}}{\partial z_1} = 2 * \dfrac{1}{2}(predicted_{z_1} - desired_{z_1})^{2-1} + 0 = predicted_{z_1} - desired_{z_1}$$
#
# $$\dfrac{\partial E_{total}}{\partial z_1} = 0.7514 - 0.01 = 0.7414$$
#
# - Now let's calculate the second term: $\dfrac{\partial z_1}{\partial h_3}$
# Since $z_1 = \frac{1}{1+e^{-h_3}}$, its derivative is $\dfrac{\partial z_1}{\partial h_3} = z_1 (1-z_1) = 0.7514 (1-0.7514) = 0.1868$
# - Then the last term: $\dfrac{\partial h_3}{\partial w_5}$
# $h_3 = w_5 * a_1 + w_6 * a_2 + b_2$
# $\dfrac{\partial h_3}{\partial w_5} = 1 * a_1 + 0 + 0 = a_1 = 0.5933$
# - Finally, we multiply all the calculated derivatives above to get $\dfrac{\partial E_{total}}{\partial w_5}$
# $\dfrac{\partial E_{total}}{\partial w_5} = \dfrac{\partial E_{total}}{\partial z_1} * \dfrac{\partial z_1}{\partial h_3} * \dfrac{\partial h_3}{\partial w_5} = 0.7414 * 0.1868 * 0.5932 = 0.0822$
#
# This derivative is subtracted from the current weight $w_5$ to decrease the error. Let's set the learning rate $\eta$ to 0.5.
# $w_5^+ = w_5 - \eta * \dfrac{\partial E_{total}}{\partial w_5} = 0.4 - 0.5 * 0.0822 = 0.3589$
#
# Our weight $w_5$ is updated!
#
# This process can also be repeated for $w_6, w_7, w_8$.
# #### bottom layer
# - Now for the bottom layer. The above was a lengthy process. However, rest assured: most of the calculations will be reused.
# - Let's consider $w_1$
#
#
#
# $$\dfrac{\partial E_{total}}{\partial w_1} = \dfrac{\partial E_{total}}{\partial a_1} * \dfrac{\partial a_1}{\partial h_1} * \dfrac{\partial h_1}{\partial w_1}$$
#
# $\dfrac{\partial E_{total}}{\partial a_1}$ can be expanded to $\dfrac{\partial E_{z_1}}{\partial a_1} + \dfrac{\partial E_{z_2}}{\partial a_1}$
#
# Through the figure above, we can see that $w_1$ can influence the cost function only by going through the top layer. This will be reflected in the chain rule.
# We can reuse the value of $\dfrac{\partial E_{z_1}}{\partial h_3}$ that we calculated above.
# $$\dfrac{\partial E_{z_1}}{\partial h_3} = \dfrac{\partial E_{total}}{\partial z_1} * \dfrac{\partial z_1}{\partial h_3} = 0.7414 * 0.1868 = 0.1385$$
# $\dfrac{\partial h_3}{\partial a_1}$ is equal to $w_5$ as $h_3 = w_5 * a_1 + w_6 * a_2 + b_2$
#
# Thus,
# $$\dfrac{\partial h_3}{\partial a_1} = w_5 * 1 + w_6 * 0 + 0 = w_5 = 0.40$$
#
# Plugging in the above calculated values,
# $$\dfrac{\partial E_{z_1}}{\partial a_1} = \dfrac{\partial E_{z_1}}{\partial h_3} * \dfrac{\partial h_3}{\partial a_1} = 0.1385 * 0.40 = 0.0554$$
#
# Repeating the above process to get $\dfrac{\partial E_{z_2}}{\partial a_1}$, we get the value of $-0.0190$
# Therefore,
# $$\dfrac{\partial E_{total}}{\partial a_1} = \dfrac{\partial E_{z_1}}{\partial a_1} + \dfrac{\partial E_{z_2}}{\partial a_1} = 0.0554 - 0.0190 = 0.0364$$
# Now we have to calculate $\dfrac{\partial a_1}{\partial h_1}$ and $\dfrac{\partial h_1}{\partial w_1}$. We are almost there!
# $$a_1 = \frac{1}{1+e^{-h_1}}$$
# $$\dfrac{\partial a_1}{\partial h_1} = a_1 (1- a_1) = 0.5933 (1 - 0.5933) = 0.2413$$
# For $\dfrac{\partial h_1}{\partial w_1}$, since $h_1 = w_1 * x_1 + w_2 * x_2 + b_1$,
# $\dfrac{\partial h_1}{\partial w_1} = x_1 = 0.05$
#
# Plugging everything we calculated into the formula: $\dfrac{\partial E_{total}}{\partial w_1} = \dfrac{\partial E_{total}}{\partial a_1} * \dfrac{\partial a_1}{\partial h_1} * \dfrac{\partial h_1}{\partial w_1}$
# $$\dfrac{\partial E_{total}}{\partial w_1} = 0.0364 * 0.2413 * 0.05 = 0.000439$$
# With this value, we can now update $w_1$ as below:
# $$w_1^+ = w_1 - \eta * \dfrac{\partial E_{total}}{\partial w_1} = 0.15 - 0.5 * 0.000439 = 0.1498$$
#
# This can be repeated for $w_2, w_3, w_4$ also.
# ### Let's conclude with backpropagation implementation code
# - 's book is a great source to learn neural network code implementation.
# +
import random
import numpy as np

class NeuralNetwork(object):
    """The list `sizes` gives the number of neurons from the input layer to the
    output layer. If the argument is [3, 3, 2], the network has 3 input neurons,
    3 hidden-layer neurons and 2 output neurons."""

    def __init__(self, sizes):
        self.n_layers = len(sizes)
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        # each weight matrix has shape (neurons in next layer, neurons in previous layer)
        self.weights = [np.random.randn(y, x) for (x, y) in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a):
        for b, w in zip(self.biases, self.weights):
            a = self.sigmoid(np.dot(w, a) + b)
        return a

    def sigmoid(self, z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoidPrime(self, z):
        return self.sigmoid(z) * (1.0 - self.sigmoid(z))

    def costPrime(self, output_activations, y):
        return (output_activations - y)

    def backpropagation(self, x, y):
        d_b = [np.zeros(b.shape) for b in self.biases]
        d_w = [np.zeros(w.shape) for w in self.weights]

        # forward pass: store every pre-activation z and activation a
        activation = x
        activations = [x]
        zs = []
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation) + b
            zs.append(z)
            activation = self.sigmoid(z)
            activations.append(activation)

        # backward pass: error at the output layer
        delta = self.costPrime(activations[-1], y) * self.sigmoidPrime(zs[-1])
        d_b[-1] = delta
        d_w[-1] = np.dot(delta, activations[-2].transpose())

        # propagate the error backwards through the remaining layers
        for layer in range(2, self.n_layers):
            z = zs[-layer]
            sg_prime = self.sigmoidPrime(z)
            delta = np.dot(self.weights[-layer + 1].transpose(), delta) * sg_prime
            d_b[-layer] = delta
            d_w[-layer] = np.dot(delta, activations[-layer - 1].transpose())
        return (d_b, d_w)

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3.7.8 64-bit
#     name: python3
# ---

# +
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import tensorflow_datasets as tfds
from tensorflow.keras import layers
import matplotlib.cm as cm
import random
import glob
import os
from skimage.segmentation import chan_vese
import numpy as np
import PIL
from PIL import Image
from tensorboard.plugins.hparams import api as hp

# Display
# from IPython.display import Image, display
import matplotlib.pyplot as plt
import matplotlib.cm as cm
# -

import my_functions as mf

from tf_explain.callbacks.grad_cam import GradCAMCallback

batch_size = 32
img_height = 180
img_width = 180
image_size = (img_height, img_width)

# +
train_ds = tf.keras.utils.image_dataset_from_directory(
    'datasets/ct_scan_3/train',
    labels='inferred',
    label_mode='int',
    class_names=None,
    color_mode='rgb',
    batch_size=batch_size,
    image_size=image_size,
    shuffle=True,
    seed=100,
    interpolation='bilinear',
    follow_links=False,
    crop_to_aspect_ratio=False)

val_ds = tf.keras.utils.image_dataset_from_directory(
    'datasets/ct_scan_3/val',
    labels='inferred',
    label_mode='int',
    class_names=None,
    color_mode='rgb',
    batch_size=batch_size,
    image_size=image_size,
    shuffle=True,
    seed=100,
    interpolation='bilinear',
    follow_links=False,
    crop_to_aspect_ratio=False)
# -

mf.check_dataset(train_ds)

import pandas as pd
covid_metadata = pd.read_csv('datasets\ct_scan_3\meta_data_covid.csv')
non_covid_metadata = pd.read_csv('datasets\ct_scan_3\meta_data_normal.csv')

covid_metadata.head()

non_covid_metadata.head()

# # Each patient diagnosed with covid goes either to the training dataset or to the validation dataset

## Testing: select all images of a given covid patient
covid_metadata.head()

covid_metadata['Patient ID'].unique()

df.loc[df['Patient ID'] == 'patient103']
df.loc[df['Patient ID'] == 'patient101']['File name'][0] from shutil import copy base_src_folder = 'datasets\\ct_scan_3\\full-COVID-positive\\' src = base_src_folder + df.loc[df['Patient ID'] == 'patient101']['File name'][0] dst = 'datasets\\ct_scan_3\\train' copy(src, dst) len(covid_metadata['Patient ID'].unique()) def copy_all_images_from_one_patient(patient_id, src_folder, dst_folder, metadata): try: patient_dataset = metadata.loc[metadata['Patient ID'] == patient_id] files = patient_dataset['File name'] for file in files: src = src_folder + '\\' + file copy(src, dst_folder) except Exception as e: print('Copy error: {}'.format(e)) patient_id = 'P040' src_folder = 'datasets\\ct_scan_3\\full-COVID-positive' dst_folder = 'datasets\\ct_scan_3\\train' metadata = covid_metadata copy_all_images_from_one_patient(patient_id, src_folder, dst_folder, metadata) # Testando um split aleatório dos pacientes na esperança que a aleatoriedade salve a diferença de split final covid_metadata['Patient ID'].unique() def random_split_patients(unique_patients_ids, validation_split): number_of_patients = len(unique_patients_ids) number_of_train_patients = int(number_of_patients * (1 - validation_split)) np.random.shuffle(unique_patients_ids) train_patients_ids = unique_patients_ids[:number_of_train_patients] val_patients_ids = unique_patients_ids[number_of_train_patients:] return train_patients_ids, val_patients_ids train_patients_ids, val_patients_ids = random_split_patients(covid_metadata['Patient ID'].unique(), 0.2) len(train_patients_ids) + len(val_patients_ids) == len(covid_metadata['Patient ID'].unique()) # --------------- train_covid_patients_ids, val_covid_patients_ids = random_split_patients(covid_metadata['Patient ID'].unique(), 0.2) for patient in train_covid_patients_ids: copy_all_images_from_one_patient( patient_id=patient, src_folder='datasets\\ct_scan_3\\full-COVID-positive', dst_folder = 'datasets\\ct_scan_3\\train\\COVID-positive', metadata = covid_metadata) for patient in val_covid_patients_ids: copy_all_images_from_one_patient( patient_id=patient, src_folder='datasets\\ct_scan_3\\full-COVID-positive', dst_folder = 'datasets\\ct_scan_3\\val\\COVID-positive', metadata = covid_metadata) train_non_covid_patients_ids, val_non_covid_patients_ids = random_split_patients(non_covid_metadata['Patient ID'].unique(), 0.2) for patient in train_non_covid_patients_ids: copy_all_images_from_one_patient( patient_id=patient, src_folder='datasets\\ct_scan_3\\full-COVID-negative', dst_folder = 'datasets\\ct_scan_3\\train\\COVID-negative', metadata = non_covid_metadata) for patient in val_non_covid_patients_ids: copy_all_images_from_one_patient( patient_id=patient, src_folder='datasets\\ct_scan_3\\full-COVID-negative', dst_folder = 'datasets\\ct_scan_3\\val\\COVID-negative', metadata = non_covid_metadata) 5462 + 6307 + 1431 + 1283 == 6893 + 7590 train_ds = tf.keras.utils.image_dataset_from_directory( 'datasets/ct_scan_3/train', labels='inferred', label_mode='int', class_names=None, color_mode='rgb', batch_size=batch_size, image_size=image_size, shuffle=True, seed=100, interpolation='lanczos3', follow_links=False, crop_to_aspect_ratio=False) val_ds = tf.keras.utils.image_dataset_from_directory( 'datasets/ct_scan_3/val', labels='inferred', label_mode='int', class_names=None, color_mode='rgb', batch_size=batch_size, image_size=image_size, shuffle=True, seed=100, interpolation='lanczos3', follow_links=False, crop_to_aspect_ratio=False) 2714 / 11769 mf.check_dataset(train_ds) mf.check_dataset(val_ds) # --- # jupyter: 
# jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # "Introduction to variational Monte Carlo with neural network quantum states" # # > "Monte Carlo methods are extremely powerful tools to deal with problems that have vast phase spaces, such as quantum many-body systems. Here, we introduce a specific technique: variational Monte Carlo (VMC), which we illustrate using restricted Boltzmann machines to represent the many-body quantum states. Everything is built from scratch with numpy!" # # - toc: true # - branch: master # - badges: true # - comments: true # - author: # - categories: [Variational Monte Carlo, Machine Learning, Neural network quantum states] # - image: images/vmc_intro.png # + #hide ## Optional: uncomment and run this cell in colab to update packages # # !pip install -U numpy # # !pip install -U tqdm # # !pip install -U matplotlib # - #hide import numpy as np from tqdm.auto import tqdm from copy import deepcopy import matplotlib.pyplot as plt # %matplotlib inline # ## Monte Carlo Integration # # The main power of Monte Carlo methods comes from the capability of computing high-dimensional integrals in large spaces. In physics, this allows us to compute expectation values of the form $$\langle f\rangle = \int dx p(x)f(x) \ \ \ \text{or} \ \ \ \langle f \rangle = \sum_{x} p(x)f(x)$$ for continuous and discrete sytems, respectively. Where $p$ is the probability distribution over states $x$ and $f$ is a function of the state, such as its corresponding energy. # # Physics is benevolent and, generally, the systems of interest only span a tiny bit of their phase space, meaning that $p(x)\simeq 0$ for most states $x$ and, therefore, most of the terms in the previous sum have a meaningless contribution. With Monte Carlo, rather than accounting for all the possible states $x$, we approximate the expectation value by sampling from $p(x)$. Hence, $$\langle f \rangle \approx \frac{1}{N}\sum_{i=1}^Nf(x_i),$$ where $x_i$ are sampled according to $p(x)$. This is called importance sampling and it allows us to obtain reasonably good approximations with a limitted amount of samples. # # ### Energy expectation of a quantum system # # In quantum phsyics, a quantity of utmost interest is the expected value of the energy of a system under the action of a Hamiltonian $H$ and a wave function $\Psi(x)$ in a given basis $x$ $$\langle H \rangle = \frac{\langle\Psi^*|H|\Psi\rangle}{\langle \Psi|\Psi\rangle} = \frac{\int dx\Psi^*(x)H\Psi(x)}{\int dx\Psi^*(x)\Psi(x)}.$$ # # From now on, we will omit the dependency on the state and denote $\Psi\equiv\Psi(x)$ unless needed for clarification. By introducing a term $\frac{\Psi}{\Psi}$ into the numerator, we can rewrite the integral in a convenient way for Monte Carlo integration $$\langle H \rangle = \frac{\int \Psi^*\frac{\Psi}{\Psi}H\Psi}{\int \Psi^*\Psi} = \frac{\int |\Psi|^2 \frac{H\Psi}{\Psi}}{\int |\Psi|^2} = \int \rho E_L,$$ # where $\rho=\frac{|\Psi|^2}{\int|\Psi|^2}$ is the probability density and $E_L=\frac{H\Psi}{\Psi}$ is the so-called **local energy**. 
# # Hence, the expected energy can be computed via Monte Carlo integration as the expectation value of the local energy over the probability distribution $\rho=\frac{|\Psi|^2}{\int|\Psi|^2}$, such that $$\langle H\rangle \approx \frac{1}{N}\sum_{k=1}^NE_L(x_k)$$ # # > Note: $x$ can be any convenient basis for the given problem at hand, it does not refer to position. In the example case that we will be solving in this tutorial, we take the basis $\sigma^z$ for a spin system. # ## Importance Sampling # # One of the most important aspects for Monte Carlo integration is the way that importance sampling is done. Markov Chain Monte Carlo (MCMC) is an efficient approach to perform sampling in many dimensions when the probability density $p(x)$ is dominated by a small part of the whole state space. # # Samples are drawn iteratively, forming a Markov Chain, starting from any given state. In order to properly compute the expected value, the Markov Chain needs to converge to the **stationary distribution** $p(x)$ regardless of the initial state. # # Let $t(x\rightarrow x')$ be the probability to transition from state $x$ to $x'$ such that $\sum_{x'}t(x\rightarrow x')=1$, and $p_s(x)$ the probability to be in state $x$ at step $s$. Then, $p_{s+1}(x) = \sum_{x'}p_s(x')t(x'\rightarrow x)$. A stationary probability is obtained when $p_s(x)$ is independent of the step and, therefore, $$p(x) = \sum_{x'}p(x')t(x'\rightarrow x).$$ # # If a Markov chain is irreducible, the stationary distribution is unique and, if it is also aperiodic, it converges to it. A sufficient condition for stationarity is satsifying the **detailed balance condition** $p(x)t(x\rightarrow x') = p(x')t(x'\rightarrow x)$. # # The **Metropolis-Hastings** algorithm {% cite HastingsBiometrika1970%} is built to satisfy detailed balance. This is a very simple algorithm in which we split the transition probability $t(x\rightarrow x')$ into two factors: the probability to propose or choose the next state $c(x\rightarrow x')$ and the probability to accept the next state $a(x\rightarrow x')$ such that $$t(x\rightarrow x') = c(x\rightarrow x')a(x\rightarrow x').$$ Detailed balance is fulfilled by taking $a(x\rightarrow x')=\min\left\{1, \frac{p(x')}{p(x)}\frac{c(x'\rightarrow x)}{c(x\rightarrow x')}\right\}$. # # Generally, the probability to propose a state is symmetric $c(x\rightarrow x')=c(x'\rightarrow x)$, as it can be, for instance, the case of randomly flipping a spin in a lattice. In these cases, the acceptance probability is simplified $$a(x\rightarrow x') = \min\left\{1, \frac{p(x')}{p(x)}\right\}.$$ # # A Markov Chain is generated by iterating over the following two steps: # 1. With a state $x$, propose a new state $x'$ with probability $c(x\rightarrow x')$ # 2. Accept $x'$ with probability $a(x\rightarrow x')$. If rejected, the next state is $x$. # # The time it takes for the Markov Chain to converge to the stationary distribution is called **thermalisation**. In other words, thermalisation is the time it takes to the Markov Chain to forget its initial state. With MCMC we need to wait for the thermalisation to finish before we can start drawing samples from the desired probability distribution. These samples, though, will be highly correlated between one another, thus requiring careful error analysis to be properly handled. We will deal with this later on. 
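#
# > Note: here is a tiny toy illustration of the Metropolis step just described, separate from the spin sampler used later in this notebook. We sample a 1D Gaussian target with a symmetric Gaussian proposal, so the acceptance probability reduces to $\min\left\{1, \frac{p(x')}{p(x)}\right\}$. The target, step size and number of samples are arbitrary choices made only for this illustration.

# +
rng = np.random.default_rng(0)

def toy_target(x):
    "Unnormalised Gaussian density; the normalisation cancels in the acceptance ratio."
    return np.exp(-0.5*x**2)

x_cur = 0.0
toy_samples = []
for _ in range(10000):
    x_new = x_cur + 0.5*rng.standard_normal()   # symmetric proposal c(x -> x')
    if rng.random() <= min(1.0, toy_target(x_new)/toy_target(x_cur)):
        x_cur = x_new                           # accept; otherwise keep the old state
    toy_samples.append(x_cur)

print(np.mean(toy_samples), np.std(toy_samples))  # should be close to 0 and 1
# -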
# # ### Metropolis-Hastings for quantum systems # # As it was previously introduced, the expectation value of the energy can be obtained by sampling configurations according to the distribution $\rho=\frac{|\Psi|^2}{\int|\Psi|^2}$. Hence, we want to create a Markov Chain that converges to the stationary distribution $\rho$ and, therefore, the acceptance probabilities need to be defined accordingly $$a(x\rightarrow x') = \min\left\{ # 1, \frac{\rho(x')}{\rho(x)}\right\} = \min\left\{1, \frac{|\Psi(x')|^2}{|\Psi(x)|^2}\right\}.$$ Notice that the normalisation factor $\int|\Psi|^2$ cancels out. Thus, we never have to worry about normalising probabilities, which would, most times, make the computation intractable. # ## Example - Monte Carlo Integration # # With this, we have the tools to compute the expectation value of an observable of a quantum many-body system. As an example, we will take the quantum Ising spin model $$H = J\sum_{i=1}^{n-1}\sigma_{i}^z\sigma_{i+1}^z + B\sum_{i=1}^n\sigma_{i}^x,$$ where $\sigma_i^z, \sigma_i^x$ are the Pauli matrices acting on the $i$-th site, with open boundary conditions. # # The only thing that is missing is a trial wave function $\Psi$ to perform the sampling. We will take the Restricted Boltzmann Machine (RBM) ansatz, as introduced in {% cite CarleoScience2017%}, of the form $$\Psi(x) = e^{b^Tx}\prod_{i=1}^{n_h}2\cosh(c_i + W_{i\cdot}x),$$ where $b, c, W$ are the visible biases, hidden biases and weight matrix, respectively, and $n_h$ denotes the number of hidden units. # # For now, we can just take this as a functional ansatz without diving much further into RBMs. The only thing that we need to know is that RBMs have two layers: a visible and a hidden layer. The visible layer corresponds to the physical system, while the hidden layer provides a set of auxiliary parameters that mediate the interaction between physical units. Therefore, the size of the visible layer is fixed by our problem and we need to choose the size of the hidden layer $n_h$. The higher $n_h$, the higher the representability of the ansatz, at the cost of more expensive computations. Since RBMs are universal approximators, we can always improve our solution by increasing $n_h$. # > Note: The code is by no means optimized. In fact, almost everything here presented is *very* suboptimal. It is meant to be easily readable and resemble the equations as much as possible, in order to provide an idea of the overall procedure. class RBM: "Super simple implementation of an RBM with complex parameters." def __init__(self, n_visible, n_hidden): self.n_visible = n_visible self.n_hidden = n_hidden self.reset() def reset(self): "Reinitializes the complex parameters at random." b = np.random.randn(self.n_visible) + 1j*np.random.randn(self.n_visible) # visible bias c = np.random.randn(self.n_hidden) + 1j*np.random.random(self.n_hidden) # hidden bias W = (np.random.randn(self.n_hidden, self.n_visible) + 1j*np.random.randn(self.n_hidden, self.n_visible)) # weights self.params = np.concatenate((b, c, W.ravel())) / 10 @property def b(self): return self.params[:self.n_visible] @property def c(self): return self.params[self.n_visible:self.n_visible+self.n_hidden] @property def W(self): return np.reshape(self.params[self.n_visible+self.n_hidden:], (self.n_hidden, self.n_visible)) def p(self, v): "Probability amplitude of a visible state `v`. We don't need it for Monte Carlo." 
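#
# Since the Metropolis acceptance step only ever involves ratios of amplitudes, it is worth writing out what that ratio looks like for this ansatz (this is essentially the quantity that the `p_ratio` method of the implementation below evaluates, in log form for numerical stability): $$\frac{\Psi(x')}{\Psi(x)} = e^{b^T(x'-x)}\prod_{i=1}^{n_h}\frac{\cosh(c_i + W_{i\cdot}x')}{\cosh(c_i + W_{i\cdot}x)},$$ so neither the normalisation of $\Psi$ nor the constant factors of $2$ ever need to be computed explicitly.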
return np.exp(np.conj(self.b) @ v)*np.prod(np.cosh(self.c + self.W @ v))*2**self.n_hidden def p_ratio(self, v1, v2): "Probability ratio between state `v2` and reference `v1`" f1 = np.cosh(self.c + self.W @ v1) f2 = np.cosh(self.c + self.W @ v2) log_diff = np.conj(self.b) @ (v2-v1) + sum(np.log(f2/f1)) # log of ratio for numerical stability return np.exp(log_diff) def p_ratios(self, v1, v2): "Probability ratio between list of states `v2` and reference state `v1`." return [self.p_ratio(v1, v) for v in v2] # Let us define the physical system by choosing the number of spins, the coefficients of the Hamiltonian and the parameters of our wave function. For now, provided that we need a starting point for our trial wave function, we simply take a set of random parameters for our RBM ansatz. np.random.seed(7) n = 10 # Number of spins J, B = -2, -1 # Hamiltonian n_visible, n_hidden = n, 2*n # RBM size: twice as many hidden neurons psi = RBM(n_visible, n_hidden) # Randomly initialize our anstaz # With this, we define some functions to make our code more readable. # + def local_energy(x, psi): "Local energy of Ising spin model." # Interaction term couplings = (x[:-1]==x[1:])*2-1 e_interaction = J*sum(couplings) # Transverse field states_with_flip = [flip(x, i) for i in range(len(x))] e_field = B*sum(psi.p_ratios(x, states_with_flip)) return e_interaction + e_field def flip(x, i): "flips i-th bit of x" xflip = deepcopy(x) xflip[i] = 1-xflip[i] return xflip # - # And now we are ready to do the Monte Carlo integration. When dealing with spins, new states are obtained by flipping spins, so we need to choose the total amount of samples to draw `n_samples` and the amount of spin flips `n_flips` performed to propose a new configuration. # + n_samples = 50000 n_flips = 1 state = np.random.randint(0, 2, n) # initial random state states, energies = [], [] for k in tqdm(range(n_samples)): # Sample new state spin_idx = np.random.randint(0, n, n_flips) new_state = flip(state, spin_idx) if np.random.random() <= np.abs(psi.p_ratio(state, new_state))**2: state = deepcopy(new_state) # Accept new state states.append(state) energies.append(np.real(local_energy(state, psi))) # - #collapse-hide plt.figure(figsize=(12, 5)) plt.plot(energies[:1000]) # Plot some plt.grid() plt.tick_params(labelsize=15) plt.ylabel("Local energy", fontsize=20) plt.xlabel("Sample", fontsize=20) plt.title(f"E = {np.mean(energies):.4f} +- {np.std(energies)/np.sqrt(n_samples):.4f}"); # With this random wave function, we do not observe any thermalisation. The result is the expected energy obtained with our wave function ansatz. Later on, we will see how to optimize its parameters to find the ground state energy. # ## Statistical analysis and autocorrelation time # # Because of the nature of Markov chains, measurements are always correlated to a certain degree. Given that new states are obtained by modifying the previous ones, consecutive states can be highly correlated, although the correlation fades as the number of steps between states increases. The distance at which we can consider two states to be uncorrlated in the Markov chain is the **autocorrelation time** $\tau$. # # The statistical error $\epsilon$ is obtained via $$\epsilon = \sqrt{\frac{s_f^2}{N}}, \ s_f^2 = \frac{1}{N-1}\sum_{i=1}^N\left(f(X_i) - \langle f \rangle\right)^2.$$ These quantities, however, are well defined for uncorrelated samples. 
Hence, knowing the autocorrelation time $\tau$, we can compute our estimation values by exclusively taking samples every $\tau$ (or $\geq\tau$) steps. # # ### Binning analysis # # Knowing the autocorrelation time is extremely important. However, finding the autocorrelation function is too costly and difficult to analyse so, in practice, we rely on the binning analysis of the time series to estimate both $\tau$ and $\epsilon$. The main idea is that averages over chunks of the time-series, which are longer than the autocorrelation time, are independent of each other, thus providing the right error estimates. # # Provided that we do not have any prior knowledge about the autocorrelation time, we have to use blocks of increasing lengths until the error estimate converges. We cut the time series into $N_B$ blocks of fixed length $k$ for several values of $k=2^0,2^1, 2^2, \dots$ With this, we can compute the block average of the $i$-th block $$\langle f \rangle_{B_i}=\frac{1}{k}\sum_{t=1}^kf(x_{(i-1)k+t}).$$ All the blocks have a mean $$\langle f\rangle_B = \frac{1}{N_B}\sum_{i=1}^{N_B}\langle f\rangle_{B_i},$$ which, when the block length $k$ is larger than the autocorrelation time $\tau$, allows us to compute the squared statistical error $$\epsilon^2\approx\frac{s_B^2}{N_B}=\frac{1}{N_B(N_B-1)}\sum_{i=1}^{N_B}\left(\langle f\rangle_{B_i} - \langle f\rangle_B\right)^2.$$ If the blocks are independent, $\frac{s_B^2}{N_B}$ remains constant for increasing values of $k$, although for large $k$ (low $N_B\sim100$) statistical fluctuations emerge. # # The integrated autocorrelation time can be inferred from the binning analysis results as $$\tau=\frac{1}{2}\frac{\frac{s_B^2}{N_B}(k\rightarrow \infty)}{\frac{s_B^2}{N_B}(k=1)}.$$ Bear in mind that the autocorrelation time can **change between quantities**. # ## Example - Binning analysis # # Let us continue with our previous example and perform the binning analysis in order to properly infer the error and the autocorrelation time out of the Markov Chain that we have already generated. def bin_averages(x, bs): "Bins time-series `x` into chunks of size `bs` and takes their means." nb = len(x)//bs bin_avg = [np.mean(x[b*bs:(b+1)*bs]) for b in range(nb)] return np.array(bin_avg) # + ks = [2**k for k in range(12)] errors, means, bin_avgs = [], [], [] for k in ks: bin_avg = bin_averages(energies, k) error = np.sqrt(np.var(bin_avg)/len(bin_avg)) errors.append(error) means.append(bin_avg.mean()) bin_avgs.append(bin_avg) # - #collapse-hide plt.figure(figsize=(12, 5)) plt.plot(ks, np.array(errors)**2, 'o-') plt.grid() plt.tick_params(labelsize=15) plt.ylabel(r"$s_B^2/N_B$", fontsize=20) plt.xlabel("Bin size", fontsize=20); # With the binning analysis, we see that the squared error converges for a bin size of $\sim100$. For very large bin sizes, the low number of bins incurs some statistical fluctuations. Thus, the result and statistical errors are properly computed for bin sizes $64\leq k\leq1000$. We can take any of these values as valid, although the smaller the bin size, the lower the overall computational cost. Hence, for future calculations, we will try to keep a bin size that is well converged but not too large, e.g. between $100$ and $200$. #collapse-hide print(f"{means[-3]:.4f} +- {errors[-3]:.4f} for k={ks[-3]}") # As shown before, we can also use these results to infer the autocorrelation time. tau = (errors[-3]/errors[0])**2; tau # Let us see the bin averages with the mean and statistical error. 
# + #collapse-hide k_idx = -3 # Choose the bin size bins = np.arange(len(bin_avgs[k_idx])) plt.figure(figsize=(12, 5)) plt.scatter(bins, bin_avgs[k_idx], s=50, marker='s') plt.hlines(means[k_idx], bins[0], bins[-1], linestyles='--', alpha=0.7) plt.fill_between(bins, means[k_idx]-errors[k_idx], means[k_idx]+errors[k_idx], color='k', alpha=0.1) plt.grid() plt.tick_params(labelsize=15) plt.title(f"Bin size {ks[k_idx]}", fontsize=20) plt.ylabel("Local energy", fontsize=20) plt.xlabel("Bin", fontsize=20); k_idx = -2 # Choose the bin size bins = np.arange(len(bin_avgs[k_idx])) plt.figure(figsize=(12, 5)) plt.scatter(bins, bin_avgs[k_idx], s=50, marker='s') plt.hlines(means[k_idx], bins[0], bins[-1], linestyles='--', alpha=0.7) plt.fill_between(bins, means[k_idx]-errors[k_idx], means[k_idx]+errors[k_idx], color='k', alpha=0.1) plt.grid() plt.tick_params(labelsize=15) plt.title(f"Bin size {ks[k_idx]}", fontsize=20) plt.ylabel("Local energy", fontsize=20) plt.xlabel("Bin", fontsize=20); # - # ## Monte Carlo optimization # # Now that we are able to reliably compute expectation values, we can use the same methods to optimize our variational ansatz according to a target function. In the previous examples, we have taken a completely random wave function, which is to be the starting point of our optimization process to find the ground state wave function. # # We use the **stochastic reconfiguration** (SR) method {% cite SorellaJCP2007%} to optimize the parameters of the ansatz, which approximates the natural gradient {% cite AmariBook2006%}. Let our parametrized wavefunction be $\Psi_\theta$ with parameters $\theta$. With SR, the parameter update rule is $\theta_{t+1}=\theta_t + \alpha S_t^{-1}F_t$, where $\alpha$ is the learning rate, $S$ is an estimation of the Fischer information matrix and $F$ is an estimation of the gradient of a given cost function. The term $S^{-1}F$ is the natural gradient estimation of the cost function with respect to the parameters $\theta$. # # Given that the ground state is the one with the lowest possible energy of the system, our cost function will be the expected energy of the system under our parametrized wave function. This way, the parameters of our ansatz will be tuned to approximate the ground state of the system as best as possible. # # Let us define the variational derivatives $O$ with respect to the $k$-th parameter $\theta_k$ of our variational ansatz $\Psi_\theta$ as the log-derivative of the wave function # $$O_k(x) = \frac{\partial}{\partial\theta_k}\log\left(\Psi_\theta(x)\right)=\frac{1}{\Psi_\theta(x)}\frac{\partial\Psi_\theta(x)}{\partial\theta_k}$$ # With this, we can define $S$ as the covariance matrix of the variational derivatives $O_k$ and compute $F$ in terms of the previously introduced local energy $E_L(x)$: # $$ S_{kk'}=\langle O_k^*O_{k'}\rangle-\langle O_k^*\rangle\langle O_{k'}\rangle$$ # $$ F_k= \langle E_LO_k^*\rangle - \langle E_L\rangle\langle O_k^*\rangle$$ # As an extra step, there can be introduced a regularization term to increase stability throught the optimization by removing the null diagonal terms of $S$ such that $S_{kk} = S_{kk}+\lambda$. # ## Example - Monte Carlo optimization # # Let us put everything together to find the ground state energy of the Ising Hamiltonian used in the previous examples. 
For the case of an RBM ansatz, the variational derivatives of the parameters can be obtained pretty easily: # $$O_{b_j}(s) = \frac{\partial}{\partial b_j}\log\left(\Psi(s)\right) = s_j $$ # $$O_{c_i}(s) = \frac{\partial}{\partial c_i}\log\left(\Psi(s)\right) = \tanh\left[\theta_i(s)\right]$$ # $$O_{W_{ij}}(s) = \frac{\partial}{\partial W_{ij}}\log\left(\Psi(s)\right) = s_j\tanh\left[\theta_i(s)\right]$$ # where $\theta_i(s)$ is the argument of the hyperbolic cosine $\theta_i(s) = c_i + \sum_j W_{ij}s_j$. # # We have seen that the autocorrelation time was $\tau\sim 5$. In order to keep the simulation cheap, rather than computing bin averages of $\sim100-200$ samples, we will draw samples every few steps, spaced further apart than $\tau$, so that subsequent measurements are already uncorrelated. # # We define our custom functions to compute the variational derivatives and covariances, as well as a function that will sample a bunch of states at once. # + def variational_derivative(x, psi): "Computes variational derivatives for SR" theta = psi.c + psi.W @ x Ob = x Oc = np.tanh(theta) Ow = Oc[:, None] @ x[None, :] return np.concatenate((Ob, Oc, Ow.ravel())) def covariance(x1, x2): "Computes the covariance between `x1` and `x2`." samples = x1.shape[1] m1 = np.mean(x1, axis=1) m2 = np.mean(x2, axis=1) if len(x2.shape)>1 else np.mean(x2) return (x1 @ x2.T)/samples - m1[:,None] @ m2[None,:] def sample_block(psi, bs, x0=None, n_flips=1): "Sample `bs` states according to `psi`." state = np.random.randint(0, 2, psi.n_visible) if x0 is None else x0 states = [] for _ in range(bs): spin_idx = np.random.randint(0, psi.n_visible, n_flips) new_state = flip(state, spin_idx) if np.random.random() <= np.abs(psi.p_ratio(state, new_state))**2: state = deepcopy(new_state) # Accept new state states.append(state) return states # - # With `sample_block` we will be taking one energy sample every $N=10>\tau$ steps. However, we could also take the mini-block averages as samples, which can be easily implemented in the code below (a sketch of this variant is given right after this discussion). Taking the mini-block average is probably a better practice, although it is more computationally expensive, since it still requires computing the local energy for each state. Another way to lower the computational cost is to increase the number of spins that are flipped to propose new configurations. # # It all boils down to a trade-off between computational cost and accuracy. For this case, a sample every few steps works just fine. I encourage the reader to try out different block sizes, e.g. $\text{bs}\in[1, 15]$, and to test mini-block averages and different numbers of spin flips. Keeping everything else constant, small block sizes should result in slower and less stable convergence to the minimal energy due to the high correlation between samples. On the other hand, excessively large block sizes suffer from a lack of statistics. # # When comparing different parameters, remember to set a random seed in order to always start with the same initial condition. 
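# As a side note, the mini-block-average variant mentioned above can be sketched as follows. This is only an illustration that reuses the `sample_block`, `local_energy` and `variational_derivative` helpers defined earlier; the function name `sample_block_averages` is made up for this sketch and is not used in the optimisation loop below.

# +
def sample_block_averages(psi, bs, x0=None, n_flips=1):
    "Mini-block variant: average the local energy and the variational derivatives over a whole block."
    states = sample_block(psi, bs, x0=x0, n_flips=n_flips)
    EL_block = np.mean([local_energy(s, psi) for s in states])
    O_block = np.mean([variational_derivative(s, psi) for s in states], axis=0)
    return EL_block, O_block, states[-1]  # return the last state so the next block continues the chain
# -

# Inside the optimisation loop below, `EL[k], O[:, k], state = sample_block_averages(psi, bs, x0=state, n_flips=n_flips)` would then replace the single-sample lines, at the cost of evaluating the local energy for every state in the block.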
#hide np.random.seed(7) psi.reset() # + learning_iterations = 275 lr = 1e-2 n_blocks = 150 thermalise = int(0.1*n_blocks) Nb = n_blocks - thermalise bs = 10 n_flips = 1 energies = [] for it in tqdm(range(learning_iterations)): EL, O = np.zeros(Nb, dtype=complex), np.zeros((len(psi.params), Nb), dtype=complex) states = sample_block(psi, thermalise*bs, n_flips=n_flips) state = states[-1] for k in range(Nb): batch = sample_block(psi, bs, x0=state, n_flips=n_flips) states += batch state = batch[-1] EL[k] = local_energy(state, psi) O[:, k] = variational_derivative(state, psi) energies.append(EL.mean()) F = covariance(O.conj(), EL[None,:]) # Gradient S = covariance(O.conj(), O) # Fisher info Sinv = np.linalg.pinv(S, rcond=1e-5) # (pseudo)Inversion d_params = lr*Sinv @ F psi.params -= d_params.squeeze() # - # Let us plot the expected energy over the optimization steps. #collapse-hide plt.figure(figsize=(12, 5)) plt.plot(np.real(energies)) plt.grid() plt.tick_params(labelsize=15) plt.ylabel(r"$\langle H\rangle$", fontsize=20) plt.xlabel("Learning iteration", fontsize=20); # The algorithm has converged to the ground state energy at around 150 iterations. Let us see the local energies sampled during the last optimization step. #collapse-hide plt.figure(figsize=(12, 5)) plt.plot(np.real(EL)) plt.grid() plt.tick_params(labelsize=15) plt.ylabel(r"$E_L$", fontsize=20) plt.xlabel("Sample", fontsize=20); # We can see how, by the end of the optimization, the wavefunction mainly provides the ground state and, with some small probability, other states are sampled. To report the ground state energy, we take the result of the last 100 optimization steps, analogously to the binning analysis (the imaginary part should average to zero). #collapse-hide bins = 100 energy = np.mean(np.real(energies[-bins:])) statistical_error = np.std(energies[-bins:])/np.sqrt(bins) print(f"The obtained energy is {energy:.4f}+-{statistical_error:.4f}") # Notice that we have taken the quantum Ising model in the ferromagnetic phase. Therefore, the ground state is found when all spins are aligned (either all 0 or 1). #collapse-hide print(f"The ground state is {state}") # We can compare this with exact diagonalization, provided that we are solving a small system, in order to see the actual error. Let us build the Hamiltonian matrix and diagonalize it. # + def tensor_prod(idx, s, size=10): "Tensor product of `s` acting on indexes `idx`. Fills rest with Id." Id = np.array([[1,0],[0,1]]) idx, s = np.array(idx), np.array(s) matrices = [Id if k not in idx else s for k in range(size)] prod = matrices[0] for k in range(1, size): prod = np.kron(prod, matrices[k]) return prod sx = np.array([[0,1],[1,0]]) sz = np.array([[1,0],[0,-1]]) H = (J*sum([tensor_prod([k, k+1], sz, size=n) for k in range(n-1)]) + B*sum([tensor_prod(k, sx, size=n) for k in range(n)])) e_vals, e_vecs = np.linalg.eigh(H) # - #collapse-hide relative_error = np.abs((energy-e_vals[0])/e_vals[0]) print(f"The exact ground state energy is {e_vals[0]:.4f}") print(f"Relative error between variational energy {energy:.4f} and exact solution {e_vals[0]:.4f}: {relative_error*100:.4f}%") # We see that the error is of the order of $0.2\%$, which means that our ansatz can accurately represent the exact ground state of the system at hand. 
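# > Note: as a quick sanity check of these numbers, in the $\sigma^z$ basis the product state with all spins aligned has energy $\langle H\rangle = J(n-1) = -18$ for $J=-2$ and $n=10$ (the transverse-field term averages to zero in such a state), so by the variational principle the exact ground state energy must lie below $-18$ (strictly below, since that product state is not an eigenstate for $B\neq0$). This gives a cheap order-of-magnitude check on both the variational and the exact results above.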
# # Hopefully, this was helpful to anyone starting with Monte Carlo methods :) # ## References # {% bibliography --cited %} # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # %matplotlib inline # # ======================== # Whats New 0.99 Axes Grid # ======================== # # Create RGB composite images. # # + import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1.axes_rgb import RGBAxes def get_demo_image(): # prepare image delta = 0.5 extent = (-3, 4, -4, 3) x = np.arange(-3.0, 4.001, delta) y = np.arange(-4.0, 3.001, delta) X, Y = np.meshgrid(x, y) Z1 = np.exp(-X**2 - Y**2) Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2) Z = (Z1 - Z2) * 2 return Z, extent def get_rgb(): Z, extent = get_demo_image() Z[Z < 0] = 0. Z = Z / Z.max() R = Z[:13, :13] G = Z[2:, 2:] B = Z[:13, 2:] return R, G, B fig = plt.figure() ax = RGBAxes(fig, [0.1, 0.1, 0.8, 0.8]) r, g, b = get_rgb() kwargs = dict(origin="lower", interpolation="nearest") ax.imshow_rgb(r, g, b, **kwargs) ax.RGB.set_xlim(0., 9.5) ax.RGB.set_ylim(0.9, 10.6) plt.show() # - # ------------ # # References # """""""""" # # The use of the following functions, methods, classes and modules is shown # in this example: # # import mpl_toolkits mpl_toolkits.axes_grid1.axes_rgb.RGBAxes mpl_toolkits.axes_grid1.axes_rgb.RGBAxes.imshow_rgb # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Supervised learning (without calculus!) # This tutorial wants to show the (possibly) most naive way to train the most naive neural network on the [MNIST database](https://en.wikipedia.org/wiki/MNIST_database). For our model we only need to known matrix multiplication, for loops. Importantly, no knowledge of calculus (gradient descent) is required and coding syntax is explained in detail. The reader expect to: # # - present key ideas in machine learning in simple setting, # - be impressed by the power of simple matrix multiplication, # - build intuition for issues such as overfitting and # - have plenty of extensions of the model to experiment. # # (This tutorial was originally made as a concrete application of the linear algebra course at Otago University.) # # ## Dataset and packages # We first import some basic packages along with the training/testing and test dataset and we reduce the size of the training dataset. (Note that tensorflow is only used to import the data.) # + # We import the packages that we will use import matplotlib.pyplot as plt import tensorflow as tf import numpy as np from numpy.linalg import norm as Euclid_norm from time import time #We import the dataset mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 size_train_small = 200 x_train, y_train = x_train[:size_train_small], y_train[:size_train_small] # - # ## Data visualisation # # Our aim is to "train" a "machine" capable of recognising the ten digits $\{0,1,2,3,4,5,6,7,8,9\}$ when hand-written. It is natural to show the machine a bunch of these digits and then to train it to choose correctly by rewarding the machine when correct, and penalising it otherwise (much like humans do). 
Once trained we send the machine into the unkown territory of the testing data set, and see how well it performs. Lukily, images are matrices (tabular arrays) where each position represent a pixel and the value of this entry is the color (and machines love matrices). Let's see this. fig, ax = plt.subplots(1,5,figsize=(10,4)) fig.suptitle('.jpg files as 28x28 matrices with the respective labels', fontsize=15) for j in range(5): ax[j].imshow(x_train[j:j+1][0]) ax[j].set_title(f'Label: {y_train[j]}'); plt.show(); print("To see the first image in matrix notation type x_train[0] in a cell and run it. "); # ## The neural network # # So an image is a $28\times 28$ matrix, or equivalenty a $28^2\times 1$ vector/matrix, obtained by flattening it (stack all rows in one line). For each image/vector the machine needs to return a number between 0 and 9 (according to what the image is understood to be). What is the easiest (reasonable) way to obtain ten evaluations out of a $28^2\times 1$ vector $x$. Of course it is matrix multiplication by a $10\times 28^2$ matrix `W`. The matrix `W` is called the weights matrix (it weights `x` to to measure its value out of 10 different options) and our model follows the simple decision ruled # # # **PREDICTION**: `x` is the digit equal to the index of the biggest entry in `Wx`. # # # For example if `Wx` $ = [0,0,0,1,0,3,-\pi, 7,100]^T$, then the machine tells us that `x` is the picture of a 9. That simple. # # # **TRAINING** = finding a good `W`. What's "good"? It's up to us to decide. For example, an option is to tell the machine that $W x$ needs to have a very high third entry if the label of $x$ is 2, this is the same as asking that # # - $\|W x-y \|_2$ is small, # - $y = [0,0,C,0,0,0,0,0,0,0]$, # - $C>0$ is very big and # # where $\|$ `x` $ \|_2=(x_0^2+...+x^2_9)^{\frac12}$ is the 10-$d$ Euclidean norm. So we require $\|W x-y \|_2$ to be samll for all images in the traning set by requiring that the loss function # # $$ # L(W)= \frac1n\sum_{i=1}^n\|Wx_i - y_i\|_2 # $$ # # is small, where $\{(x_i,y_i)\}_{i=1}^n$ are the pairs of images with their vector label and $n$ is `size_train_small`. Mathematically, we want a `W` that solves the optimisation problem $\arg\min_W L(W)$. SoHow do we train this model? With a # # **Trivialized gradient descent alghoritym**: # # Initialize at the entry `(i, j) = (0, 0)`, then # # 1. Take the current `W` and create a copy `W_temp` that equals `W` apart from `W_temp[i,j] = W[i,j] + 1`, # 2. if `L(W_temp) < L(W)` make `W_temp` the new `W`. # 3. Fix a new entry `(i,j)` to move horizontally in the $10\times 28^2$ matrix `W` (skipping lines once at the last column and staring back from `(0,0)` once all entries have been visited) # 4. start back at 1. # ## Model construction # # We first flatten all images and construct labeld vectors in `y_label`. #Flatten images in training set x_train = x_train.reshape(x_train.shape[0],28*28) x_test = x_test.reshape(x_test.shape[0],28**2) #Create 10x1 vectors with a 1-entry corresponding the respective #image label y_label = np.zeros((y_train.shape[0], 10)) for i in range(y_train.shape[0]): y_label[i,y_train[i]] = 1 # The following is the class cotaining our model. It performs three main tasks: # # I. 
`nl = NL()`: the model is created and named `nl`, and the size of each image input vector is set to 28*28, the output vector size to 10, the weight matrix `W` is initialised to a $10\times28*28$ matrix where each entry is a 0, the number of times we run through all the entries of `W` in our gradiant descent (in `self.steps`). # # II. `nl.fit(x_train,y_label)`: this is the trivialized gradient descent, check the description of the steps! # # III. `nl.predict(x_test):` the decision making process. The `nl` computes the matrix product `W@x` with the current `W` for each image `x` in `x_test` and returns its decisions, i.e. the index of the biggest entry in `W@x`. # # The remaining two functions simply compute the average correct predictions of `nl` and allow to return the current weights matrix `W`. class NL: def __init__(self): self.size_in = 28*28 #Lenght of a flattened image self.size_out = 10 #Ten possible choices for the possible digit in the image self.weights = np.zeros((self.size_out, self.size_in)) #Initialize W as a zero matrix self.steps = 10 #How many times the descent algorythm is performed on all entries of W def fit(self,x_train,y_label): """Train the neural network by performing the trivialized gradient descent.""" t_start = time() #Time at start of training steps = self.steps*(self.size_in*self.size_out) #Number of iteration of the descent algo W = self.weights h, C = 2., 10**5. #Set the learning step h and the big constant C to scale y_label y_label = C*y_label #Algorythm starts here: for i in range(steps): #Steps 3 and 4 happen here #Step 1: make a copy of W W_temp = W.copy() #Change the current entry of W_temp by h j = i % (self.size_in*self.size_out -1) W_temp.reshape(np.prod(W_temp.shape))[j] = W_temp.reshape(np.prod(W_temp.shape))[j] + h #Step 2: Create a matrix contaning all vectors Wx - y Wx_minus_y = np.ones(x_train.shape[0]*10).reshape(x_train.shape[0],10) for m in range(x_train.shape[0]): Wx_minus_y[m] = W@x_train[m]- y_label[m] #and create a matrix contaning all vectors W_temp x - y W_temp_x_minus_y = np.ones(x_train.shape[0]*10).reshape(x_train.shape[0],10) for m in range(x_train.shape[0]): W_temp_x_minus_y[m] = W_temp@x_train[m] - y_label[m] #and then compute the loss under the current and temporary weight matrices Loss_W = Euclid_norm(Wx_minus_y,axis=0).mean() Loss_W_temp = Euclid_norm(W_temp_x_minus_y,axis=0).mean() #and decide whether we descend with W_temp if Loss_W_temp < Loss_W: W = W_temp t_end = time() #Time at end of training self.weights = W # The training has finished and the last W is set to be the model weight matrix print("Training time:", np.round(t_end-t_start,2),"s") def predict(self,x_test): """Predict the digit in each image x in x_test by returning the position where W@x is the highest. """ y_pred = np.ones(x_test.shape[0]) for i in range(x_test.shape[0]): y_pred[i] = np.where(self.weights@x_test[i] == np.max(self.weights@x_test[i]))[0][0] return y_pred def return_weights(self): return self.weights def accuracy(self,y_test,y_pred): return len(np.where((y_pred - y_test) == 0.)[0])/len(y_test) # ## Training and predictions # # So initialise/instantiate the model and check its predictions before training, which are of course goiong to be pretty bad and expected to be around 10% accuracy (why?). nl = NL() #Compute prediction accuracy without training y_pred = nl.predict(x_test) print("Probability of correct prediction before training:\n", nl.accuracy(y_pred,y_test)) # Finally, we are ready to train (or fit) the model. 
So lets feed to `nl` the training images with the respective labels and let do its homework. nl.fit(x_train,y_label) # Now that the model is trained, let's check its predictions. y_pred_in_sample = nl.predict(x_train) print("Probability of correct prediction after training\n\n- on traning set:", nl.accuracy(y_pred_in_sample,y_train)) y_pred_out_of_sample = nl.predict(x_test) print("\n- on test set:", nl.accuracy(y_pred_out_of_sample,y_test)) # ## Discussion and exercises # # So we trained the machine to recognise digits with more than 60% accuracy with rather trivial mathematics!!! This is very exciting because 60% is so much higher than 10%, meaning that the machine did learn a good deal about digits during its first five minutes of training. # # However, note that on the training set the machine, or better, the weight matrix performs much better, close to 90%. This is a classic case of overfitting, mainly due to the small sized training set that we provided. So we leave the reader with some questions to think about: # # - How can we interpret the rows of `W`? # - How can you fix the overfitting? # - Can you speed up the calculations to allow for bigger size traning sets (would dropouts make sense here)? # - Can you improve the model by allowing $h$ and $-h$ learning steps? # ## Tensorflow equivalent # # We now train our model using TensorFlow on the same dataset and then on the full dataset. We first reimport the data and create a small and a large training set. Then instantiate the first neural network, display its description and its performance before traning. (Notice that the tf model performs the flattening for us.) # + tags=[] #reimport data and create a big and small dataset (x_train_big, y_train_big), (x_test, y_test) = mnist.load_data() size_train_big = 20000 x_train_big, y_train_big = x_train_big[:size_train_big], y_train_big[:size_train_big] x_train_small, y_train_small = x_train[:size_train_small], y_train[:size_train_small] #Model instantiation and summary model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(10,use_bias=False) ]) print("This is the model we instantiated:\n") print(model.summary()) #Choose a loss function loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy']) #Check accuracy before training print('\n\nUntrained accuracy:', np.round(model.evaluate(x_test, y_test, verbose=0)[1],4)) #Train the model print(f"\n\nNow we fit/train the model on the small data set", f"with {size_train_small} images.\n") model.fit(x_train, y_train, epochs=5) #Check accuracy in/out-of-sample print("\n\nProbability of correct prediction after training\n\n- on traning set:", np.round(model.evaluate(x_train, y_train, verbose=0)[1],4)) print("\n- on test set:", np.round(model.evaluate(x_test, y_test, verbose=0)[1],4),"\n"); # - # So the TensorFlow training, though blindingly fast and using gradient descent, yields a model performs similarly to our naively trained model! In particular it seems to be also suffering from overfitting. Lets try now to train the same model on a bigger set. 
# + model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(10,use_bias=False) ]) model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy']) print(f"\n\nWe fit/train the model on the bit data set", f"with {size_train_big} images.\n\n") model.fit(x_train_big, y_train_big, epochs=5) print("\n\nProbability of correct prediction after training\n\n- on traning set:", np.round(model.evaluate(x_train_big, y_train_big, verbose=0)[1],4)) print("\n- on test set:", np.round(model.evaluate(x_test, y_test, verbose=0)[1],4)); # - # Great news! The in-sample and out-of-sample accuracy are high and very close, so we can conclude that the model is not overfitting the traning set. # # # Have fun! # # # --- # Author: # # Github: [ltoniazzi](https://github.com/ltoniazzi) # ### Future additions # # - Speed up this code with JAX. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import os import sys root_path = os.path.abspath("../../../") if root_path not in sys.path: sys.path.append(root_path) # + from _Dist.NeuralNetworks.f_AutoNN.NN import Auto nn = Auto( "Adult", data_info={"file_type": "csv"}, model_param_settings={"max_epoch": 20} ).fit() # + # %matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.rcParams["figure.figsize"] = (18, 8) el, il = nn.log["epoch_loss"], nn.log["iter_loss"] ee_base = np.arange(len(el)) ie_base = np.linspace(0, len(el) - 1, len(il)) plt.figure() plt.plot(ie_base, il, label="Iter loss") plt.plot(ee_base, el, linewidth=3, label="Epoch loss") plt.legend() plt.show() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: conda_pytorch_p36 # language: python # name: conda_pytorch_p36 # --- # # Distributed data parallel BERT model training with PyTorch and SMDataParallel # # SMDataParallel is a new capability in Amazon SageMaker to train deep learning models faster and cheaper. SMDataParallel is a distributed data parallel training framework for PyTorch, TensorFlow, and MXNet. # # This notebook example shows how to use SMDataParallel with PyTorch(version 1.6.0) on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train a BERT model using [Amazon FSx for Lustre file-system](https://aws.amazon.com/fsx/lustre/) as data source. # # # The outline of steps is as follows: # # 1. Stage dataset in [Amazon S3](https://aws.amazon.com/s3/). Original dataset for BERT pretraining consists of text passages from BooksCorpus (800M words) (Zhu et al. 2015) and English Wikipedia (2,500M words). Please follow original guidelines by NVidia to prepare training data in hdf5 format - # https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md#getting-the-data # 2. Create Amazon FSx Lustre file-system and import data into the file-system from S3 # 3. Build Docker training image and push it to [Amazon ECR](https://aws.amazon.com/ecr/) # 4. Configure data input channels for SageMaker # 5. Configure hyper-prarameters # 6. Define training metrics # 7. Define training job, set distribution strategy to SMDataParallel and start training # # **NOTE:** With large traning dataset, we recommend using (Amazon FSx)[https://aws.amazon.com/fsx/] as the input filesystem for the SageMaker training job. 
FSx file input to SageMaker significantly cuts down training start up time on SageMaker because it avoids downloading the training data each time you start the training job (as done with S3 input for SageMaker training job) and provides good data read throughput. # # # **NOTE:** This example requires SageMaker Python SDK v2.X. # # ## Amazon SageMaker Initialization # # Initialize the notebook instance. Get the aws region, sagemaker execution role. # # The IAM role arn used to give training and hosting access to your data. See the [Amazon SageMaker Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) for how to create these. Note, if more than one role is required for notebook instances, training, and/or hosting, please replace the sagemaker.get_execution_role() with the appropriate full IAM role arn string(s). As described above, since we will be using FSx, please make sure to attach `FSx Access` permission to this IAM role. # + # %%time # ! python3 -m pip install --upgrade sagemaker import sagemaker from sagemaker import get_execution_role from sagemaker.estimator import Estimator import boto3 sagemaker_session = sagemaker.Session() bucket = sagemaker_session.default_bucket() role = get_execution_role() # provide a pre-existing role ARN as an alternative to creating a new role print(f'SageMaker Execution Role:{role}') client = boto3.client('sts') account = client.get_caller_identity()['Account'] print(f'AWS account:{account}') session = boto3.session.Session() region = session.region_name print(f'AWS region:{region}') # - # ## Prepare SageMaker Training Images # # 1. SageMaker by default use the latest [Amazon Deep Learning Container Images (DLC)](https://github.com/aws/deep-learning-containers/blob/master/available_images.md) PyTorch training image. In this step, we use it as a base image and install additional dependencies required for training BERT model. # 2. In the Github repository https://github.com/HerringForks/DeepLearningExamples.git we have made PyTorch-SMDataParallel BERT training script available for your use. This repository will be cloned in the training image for running the model training. # # ### Build and Push Docker Image to ECR # # Run the below command build the docker image and push it to ECR. image = "" # Example: bert-smdataparallel-sagemaker tag = "" # Example: pt1.6 # !pygmentize ./Dockerfile # !pygmentize ./build_and_push.sh # %%time # ! chmod +x build_and_push.sh; bash build_and_push.sh {region} {image} {tag} # ### Training script # # In the Github repository https://github.com/HerringForks/DeepLearningExamples.git we have made PyTorch-SMDataParallel BERT training script available for your use. Clone the repository. # !rm -rf DeepLearningExamples # !git clone https://github.com/HerringForks/DeepLearningExamples # ## Configure hyperparameters for your training # !pygmentize train.sh # ## Preparing FSx Input for SageMaker # # 1. Download and prepare your training dataset on S3. # 2. Follow the steps listed here to create a FSx linked with your S3 bucket with training data - https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-fs-linked-data-repo.html. Make sure to add an endpoint to your VPC allowing S3 access. # 3. Follow the steps listed here to configure your SageMaker training job to use FSx https://aws.amazon.com/blogs/machine-learning/speed-up-training-on-amazon-sagemaker-using-amazon-efs-or-amazon-fsx-for-lustre-file-systems/ # # # ### Important Caveats # # 1. 
You need use the same `subnet` and `vpc` and `security group` used with FSx when launching the SageMaker notebook instance. The same configurations will be used by your SageMaker training job. # 2. Make sure you set appropriate inbound/output rules in the `security group`. Specically, opening up these ports is necessary for SageMaker to access the FSx filesystem in the training job. https://docs.aws.amazon.com/fsx/latest/LustreGuide/limit-access-security-groups.html # 3. Make sure `SageMaker IAM Role` used to launch this SageMaker training job has access to `AmazonFSx`. # + from sagemaker.inputs import FileSystemInput subnets=[''] # Should be same as Subnet used for FSx. Example: subnet-01aXXXX security_group_ids=[''] # Should be same as Security group used for FSx. sg-075ZZZZZZ file_system_id= '' # FSx file system ID with your training dataset. Example: 'fs-0bYYYYYY' file_system_directory_path= 'YOUR_MOUNT_PATH_FOR_TRAINING_DATA' # NOTE: '/fsx/' will be the root mount path. Example: '/fsx/bert/pt/phase1' file_system_access_mode = "ro" file_system_type = 'FSxLustre' train_fs = FileSystemInput(file_system_id=file_system_id, file_system_type=file_system_type, directory_path=file_system_directory_path, file_system_access_mode=file_system_access_mode) # - # ## SageMaker PyTorch Estimator function options # # In the following code block, you can update the estimator function to use a different instance type, instance count, and distrubtion strategy. You're also passing in the training script you reviewed in the previous cell. # # **Instance types** # # SMDataParallel supports model training on SageMaker with the following instance types only: # 1. ml.p3.16xlarge # 1. ml.p3dn.24xlarge [Recommended] # 1. ml.p4d.24xlarge [Recommended] # # **Instance count** # # To get the best performance and the most out of SMDataParallel, you should use at least 2 instances, but you can also use 1 for testing this example. # # **Distribution strategy** # # Note that to use DDP mode, you update the the `distribution` strategy, and set it to use `smdistributed dataparallel`. # + from sagemaker.pytorch import PyTorch docker_image = f"{account}.dkr.ecr.{region}.amazonaws.com/{image}:{tag}" # YOUR_ECR_IMAGE_BUILT_WITH_ABOVE_DOCKER_FILE instance_type = "ml.p3dn.24xlarge" # Other supported instance type: ml.p3.16xlarge, ml.p4d.24xlarge instance_count = 2 # You can use 2, 4, 8 etc. # This job name is used as prefix to the sagemaker training job. Makes it easy for your look for your training job in SageMaker Training job console. 
job_name = 'pt-bert-smdataparallel-N%d-%s' % (instance_count, instance_type.split(".")[1]) print("Job name: ", job_name) estimator = PyTorch(base_job_name=job_name, source_dir=".", entry_point="train.sh", role=role, image_uri=docker_image, framework_version='1.6.0', py_version='py3', instance_count=instance_count, instance_type=instance_type, sagemaker_session=sagemaker_session, subnets=subnets, security_group_ids=security_group_ids, debugger_hook_config=False, # Training using SMDataParallel Distributed Training Framework distribution={'smdistributed':{ 'dataparallel':{ 'enabled': True } } } ) # + # !pygmentize train.sh estimator.fit(train_fs) # - model_data = estimator.model_data print("Storing {} as model_data".format(model_data)) # %store model_data # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Introduction # In this notebook we will test the implementation of the AlexNet class provided in the `alexnet.py` file. This is part of [this](https://kratzert.github.io/2017/02/24/finetuning-alexnet-with-tensorflow.html) blog article on how to finetune AlexNet with TensorFlow 1.0. # # To run this notebook you have to download the `bvlc_alexnet.npy` file from [here](http://www.cs.toronto.edu/~guerzhoy/tf_alexnet/), which stores the pretrained weigts of AlexNet. # # The idea to validate the implementation is to create an AlexNet graph with the provided script and load all pretrained weights into the variables (so no finetuneing!), to see if everything is wired up correctly. # + #some basic imports and setups import os import cv2 import numpy as np import tensorflow.compat.v1 as tf tf.compat.v1.disable_eager_execution() import matplotlib.pyplot as plt #mean of imagenet dataset in BGR imagenet_mean = np.array([104., 117., 124.], dtype=np.float32) current_dir = os.getcwd() image_dir = os.path.join(current_dir, 'images') # %matplotlib inline # - # Set up the GPU in the condition of allocation exceeds system memory with the reminding message: Could not # create cuDNN handle... The following lines of code can avoids the sudden stop of the runtime. gpus = tf.config.experimental.list_physical_devices('GPU') for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) # + #get list of all images img_files = [os.path.join(image_dir, f) for f in os.listdir(image_dir) if f.endswith('.jpeg')] #load all images imgs = [] for f in img_files: imgs.append(cv2.imread(f)) #plot images fig = plt.figure(figsize=(15,6)) for i, img in enumerate(imgs): fig.add_subplot(1,3,i+1) plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) plt.axis('off') # - # First we will create placeholder for the dropout rate and the inputs and create an AlexNet object. Then we will link the activations from the last layer to the variable `score` and define an op to calculate the softmax values. # + from alexnet import AlexNet from caffe_classes import class_names #placeholder for input and dropout rate x = tf.placeholder(tf.float32, [1, 227, 227, 3]) keep_prob = tf.placeholder(tf.float32) #create model with default config ( == no skip_layer and 1000 units in the last layer) model = AlexNet(x, keep_prob, 1000, []) #define activation of last layer as score score = model.fc8 #create op to calculate softmax softmax = tf.nn.softmax(score) # - # Now we will start a TensorFlow session and load pretrained weights into the layer weights. 
Then we will loop over all images and calculate the class probability for each image and plot the image again, together with the predicted class and the corresponding class probability. with tf.Session() as sess: # Initialize all variables sess.run(tf.global_variables_initializer()) # Load the pretrained weights into the model model.load_initial_weights(sess) # Create figure handle fig2 = plt.figure(figsize=(15,6)) # Loop over all images for i, image in enumerate(imgs): # Convert image to float32 and resize to (227x227) img = cv2.resize(image.astype(np.float32), (227,227)) # Subtract the ImageNet mean img -= imagenet_mean # Reshape as needed to feed into model img = img.reshape((1,227,227,3)) # Run the session and calculate the class probability probs = sess.run(softmax, feed_dict={x: img, keep_prob: 1}) # Get the class name of the class with the highest probability class_name = class_names[np.argmax(probs)] # Plot image with class name and prob in the title fig2.add_subplot(1,3,i+1) plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) plt.title("Class: " + class_name + ", probability: %.4f" %probs[0,np.argmax(probs)]) plt.axis('off') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # #Exploring Ensemble Methods # In this homework we will explore the use of boosting. You will: # # Use SFrames to do some feature engineering. # Train a boosted ensemble of decision-trees (gradient boosted trees) on the lending club dataset. # Predict whether a loan will default along with prediction probabilities (on a validation set). # Evaluate the trained model and compare it with a baseline. # Find the most positive and negative loans using the learned model. # Explore how the number of trees influences classification performance. # # #Load the Lending Club dataset import pandas as pd import numpy as np loans = pd.read_csv('/Users/April/Downloads/lending-club-data.csv') # safe_loans = 1 => safe # safe_loans = -1 => risky loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1) loans = loans.drop('bad_loans', axis = 1) # Selecting features # The features we will be using are described in the code comments below. Extract these feature columns and target column from the dataset. We will only use these features. 
target = 'safe_loans' features = ['grade', # grade of the loan (categorical) 'sub_grade_num', # sub-grade of the loan as a number from 0 to 1 'short_emp', # one year or less of employment 'emp_length_num', # number of years of employment 'home_ownership', # home_ownership status: own, mortgage or rent 'dti', # debt to income ratio 'purpose', # the purpose of the loan 'payment_inc_ratio', # ratio of the monthly payment to income 'delinq_2yrs', # number of delinquincies 'delinq_2yrs_zero', # no delinquincies in last 2 years 'inq_last_6mths', # number of creditor inquiries in last 6 months 'last_delinq_none', # has borrower had a delinquincy 'last_major_derog_none', # has borrower had 90 day or worse rating 'open_acc', # number of open credit accounts 'pub_rec', # number of derogatory public records 'pub_rec_zero', # no derogatory public records 'revol_util', # percent of available credit being used 'total_rec_late_fee', # total late fees received to day 'int_rate', # interest rate of the loan 'total_rec_int', # interest received to date 'annual_inc', # annual income of borrower 'funded_amnt', # amount committed to the loan 'funded_amnt_inv', # amount committed by investors for the loan 'installment', # monthly payment owed by the borrower ] # #Skipping observations with missing values # Recall from the lectures that one common approach to coping with missing values is to skip observations that contain missing values. loans = loans[[target] + features].dropna() loans = pd.get_dummies(loans) import json with open('/Users/April/Desktop/datasci_course_materials-master/assignment1/train index.json', 'r') as f: # Reads the list of most frequent words train_idx = json.load(f) with open('/Users/April/Desktop/datasci_course_materials-master/assignment1/validation index.json', 'r') as f1: # Reads the list of most frequent words validation_idx = json.load(f1) train_data = loans.iloc[train_idx] validation_data = loans.iloc[validation_idx] # #Gradient boosted tree classifier # Now, let's use the built-in scikit learn gradient boosting classifier (sklearn.ensemble.GradientBoostingClassifier) to create a gradient boosted classifier on the training data. You will need to import sklearn, sklearn.ensemble, and numpy. # You will have to first convert the SFrame into a numpy data matrix. See the API for more information. You will also have to extract the label column. Make sure to set max_depth=6 and n_estimators=5. # #Making predictions # Just like we did in previous sections, let us consider a few positive and negative examples from the validation set. We will do the following: # # Predict whether or not a loan is likely to default. # Predict the probability with which the loan is likely to default. # First, let's grab 2 positive examples and 2 negative examples. # + validation_safe_loans = validation_data[validation_data[target] == 1] validation_risky_loans = validation_data[validation_data[target] == -1] sample_validation_data_risky = validation_risky_loans[0:2] sample_validation_data_safe = validation_safe_loans[0:2] sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky) sample_validation_data # - # For each row in the sample_validation_data, write code to make model_5 predict whether or not the loan is classified as a safe loan. 
(Hint: if you are using scikit-learn, you can use the .predict() method) import sklearn import sklearn.ensemble import numpy from sklearn.ensemble import GradientBoostingClassifier sample_model = GradientBoostingClassifier(n_estimators=5, max_depth=6) X = train_data.drop('safe_loans',1) X.columns sample_model.fit(X, train_data['safe_loans']) sample_model.predict(sample_validation_data.drop('safe_loans',1)) # Quiz question: What percentage of the predictions on sample_validation_data did model_5 get correct? # #Prediction Probabilities # For each row in the sample_validation_data, what is the probability (according model_5) of a loan being classified as safe? (Hint: if you are using scikit-learn, you can use the .predict_proba() method) sample_model.predict_proba(sample_validation_data.drop('safe_loans',1)) # Quiz Question: Which loan has the highest probability of being classified as a safe loan? # # Checkpoint: Can you verify that for all the predictions with probability >= 0.5, the model predicted the label +1? # #Evaluating the model on the validation data # Evaluate the accuracy of the model_5 on the validation_data. (Hint: if you are using scikit-learn, you can use the .score() method) sample_model.score(validation_data.drop('safe_loans',1), validation_data['safe_loans']) # Calculate the number of false positives made by the model on the validation_data. predict_safeloans = sample_model.predict(validation_data.drop('safe_loans',1)) predict_safeloans sum(predict_safeloans > validation_data['safe_loans']) # #Comparison with decision trees # false negative sum(predict_safeloans < validation_data['safe_loans']) # Quiz Question: Using the same costs of the false positives and false negatives, what is the cost of the mistakes made by the boosted tree model (model_5) as evaluated on the validation_set? cost = 20000*1653+10000*1491 print cost # #Most positive & negative loans # In this section, we will find the loans that are most likely to be predicted safe. validation_data['predictions'] = sample_model.predict_proba(validation_data.drop('safe_loans',1))[:,1] validation_data[['grade_A','grade_B','grade_C','grade_D','predictions']].sort('predictions', ascending = False).head(5) validation_data[['grade_A','grade_B','grade_C','grade_D','predictions']].sort('predictions', ascending = False).tail(5) # #Effects of adding more trees # In this assignment, we will train 5 different ensemble classifiers in the form of gradient boosted trees. # Train models with 10, 50, 100, 200, and 500 trees. Use the n_estimators parameter to control the number of trees. Remember to keep max_depth = 6. # # Call these models model_10, model_50, model_100, model_200, and model_500, respectively. This may take a few minutes to run. 
model_10 = GradientBoostingClassifier(n_estimators=10, max_depth=6) model_10.fit(train_data.drop('safe_loans',1), train_data['safe_loans']) model_50 = GradientBoostingClassifier(n_estimators=50, max_depth=6) model_50.fit(train_data.drop('safe_loans',1), train_data['safe_loans']) model_100 = GradientBoostingClassifier(n_estimators=100, max_depth=6) model_100.fit(train_data.drop('safe_loans',1), train_data['safe_loans']) model_200 = GradientBoostingClassifier(n_estimators=200, max_depth=6) model_200.fit(train_data.drop('safe_loans',1), train_data['safe_loans']) model_500 = GradientBoostingClassifier(n_estimators=500, max_depth=6) model_500.fit(train_data.drop('safe_loans',1), train_data['safe_loans']) model_10.score(validation_data.drop(['safe_loans','predictions'],1), validation_data['safe_loans']) model_50.score(validation_data.drop(['safe_loans','predictions'],1), validation_data['safe_loans']) model_100.score(validation_data.drop(['safe_loans','predictions'],1), validation_data['safe_loans']) model_200.score(validation_data.drop(['safe_loans','predictions'],1), validation_data['safe_loans']) model_500.score(validation_data.drop(['safe_loans','predictions'],1), validation_data['safe_loans']) # #Plot the training and validation error vs. number of trees # In this section, we will plot the training and validation errors versus the number of trees to get a sense of how these models are performing. We will compare the 10, 50, 100, 200, and 500 tree models. You will need matplotlib in order to visualize the plots. import matplotlib.pyplot as plt # %matplotlib inline def make_figure(dim, title, xlabel, ylabel, legend): plt.rcParams['figure.figsize'] = dim plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) if legend is not None: plt.legend(loc=legend, prop={'size':15}) plt.rcParams.update({'font.size': 16}) plt.tight_layout() # Steps to follow: # # Step 1: Calculate the classification error for each model on the training data (train_data). # Step 2: Store the training errors into a list (called training_errors) that looks like this: [train_err_10, train_err_50, ..., train_err_500] # Step 3: Calculate the classification error of each model on the validation data (validation_data). 
# Step 4: Store the validation classification error into a list (called validation_errors) that looks like this:[validation_err_10, validation_err_50, ..., validation_err_500] # train_err_10 = 1 - model_10.score(train_data.drop('safe_loans',1), train_data['safe_loans']) train_err_50 = 1 - model_50.score(train_data.drop('safe_loans',1), train_data['safe_loans']) train_err_100 = 1 - model_100.score(train_data.drop('safe_loans',1), train_data['safe_loans']) train_err_200 = 1 - model_200.score(train_data.drop('safe_loans',1), train_data['safe_loans']) train_err_500 = 1 - model_500.score(train_data.drop('safe_loans',1), train_data['safe_loans']) validation_err_10 = 1 - model_10.score(validation_data.drop(['safe_loans','predictions'],1), validation_data['safe_loans']) validation_err_50 = 1 - model_50.score(validation_data.drop(['safe_loans','predictions'],1), validation_data['safe_loans']) validation_err_100 = 1 - model_100.score(validation_data.drop(['safe_loans','predictions'],1), validation_data['safe_loans']) validation_err_200 = 1 - model_200.score(validation_data.drop(['safe_loans','predictions'],1), validation_data['safe_loans']) validation_err_500 = 1 - model_500.score(validation_data.drop(['safe_loans','predictions'],1), validation_data['safe_loans']) training_errors = [train_err_10, train_err_50, train_err_100, train_err_200, train_err_500] validation_errors = [validation_err_10, validation_err_50, validation_err_100, validation_err_200, validation_err_500] # + plt.plot([10, 50, 100, 200, 500], training_errors, linewidth=4.0, label='Training error') plt.plot([10, 50, 100, 200, 500], validation_errors, linewidth=4.0, label='Validation error') make_figure(dim=(10,5), title='Error vs number of trees', xlabel='Number of trees', ylabel='Classification error', legend='best') # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + [markdown] id="n6IUghH57AdO" # ## Ingest # + id="Mn3OsftD68wb" # + [markdown] id="JdyAy9PD7CxL" # ## EDA # + id="ZkxADuBY7Dji" # + [markdown] id="zNgO3xIv7EDf" # ## Modeling # + id="Qza8MKMj7E_G" # + [markdown] id="Qn0HLikM7Fih" # ## Conclusion # + id="HM_KKZ0m7Gf-" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # ### Check if results are binary correct import numpy as np import pandas as pd # First we read in two reference files, one for the intel results, one for the gnu results df_intel = pd.read_csv('reference_intel.csv', delimiter=' ') df_gnu = pd.read_csv('benchmark_i5.csv', delimiter=' ') # take the rows containing all the various matrix and vector sizes values_intel = df_intel[['n','Nx','Ny','exblas_i']] values_gnu = df_gnu[['n','Nx','Ny','exblas_i']].iloc[0:32] values_intel.set_index(['n','Nx'], inplace=True) values_gnu.set_index(['n','Nx'], inplace=True) # Make a dictionary of files and compilertypes that we want to check files = {'knl_mpi1':'intel', 'knl_mpi2':'intel', 'knl_mpi4':'intel', 'skl_mpi1':'intel', 'skl_mpi2':'intel', 'skl_mpi4':'intel', 'p100_mpi1':'gnu', 'p100_mpi2':'gnu', 'p100_mpi4':'gnu', 'v100_mpi1':'gnu', 'v100_mpi2':'gnu', 'v100_mpi4':'gnu', 'i5':'gnu','gtx1060':'gnu'} # Now, go through all the files and all rows and compare the result (the 
exblas_i column) to the # corresponding reference value for f,k in files.items(): df=pd.read_csv('benchmark_'+f+'.csv', delimiter=' ') Passed = True; Err = False print( "Checking", f , k) ref = values_gnu if k == 'intel' : ref = values_intel for i in df.index: try: if df.loc[i,'exblas_i'] != ref.loc[(df.loc[i,'n'],df.loc[i,'Nx']),'exblas_i'] and not pd.isnull(df.loc[i,'exblas_i']): Passed = False print( "Wrong result at n = ",df.loc[i,'n']," N = ",df.loc[i,'Nx'], " Difference is ", df.loc[i,'exblas_i']-ref.loc[(df.loc[i,'n'],df.loc[i,'Nx']),'exblas_i']) except KeyError: Err = True continue if Passed : print( "PASSED") else: print( "FAILED") if Err: print( " There was a Key error") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) import configparser config = configparser.ConfigParser() config.read('config.ini') ip = config['DEFAULT']['IP'] port = config['DEFAULT']['MongoDB-Port'] contain_string = config['DEFAULT']['Contain-String'] import pymongo from pymongo import MongoClient client = MongoClient(ip, int(port)) # - db_twitter = client["Twitter"] collections_twitter = db_twitter.collection_names() # + dic_collection = {} for i in collections_twitter: if i.startswith("20") and contain_string in i: dic_collection[i] = "{:,}".format(db_twitter[i].find({}).count()) for key in sorted(dic_collection): print("%s: %s" % (key, dic_collection[key])) # - for collection in sorted(dic_collection): exist = 0 # get information of index of each collection index_information = db_twitter[collection].index_information() # check if collection has "text_text" index for index in index_information: if index == "text_text": exist = 1 print("Text index exists in " + collection) # if "text_text" index not exist then create one if exist == 0: print("No text index exists in " + collection) print("creating text index....") db_twitter[collection].create_index([("text", pymongo.TEXT)]) print("Text index for " + collection + " is done.") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="HQ9cafQyzIaB" colab={"base_uri": "https://localhost:8080/"} outputId="40d50351-cdc3-44cf-cb41-dafa4c28fbfd" # !pip install jovian --upgrade --quiet # + colab={"base_uri": "https://localhost:8080/"} id="_hEkjvFWawDN" outputId="c591854a-5bfc-4f83-fa0c-ddec09b0cde0" import jovian jovian.commit() # + [markdown] id="x_ep1ELNzIaH" # # US Accidents Exploratory Data Analysis # TODO - talk about EDA # # TODO - talk about the dataset (source, what it contains, how it will be useful) # - Kaggle # - informaiton about accidents # - can use useful to prevent accidents # - mention that this does not contain data about New York # + id="qxY027N1zIaI" pip install opendatasets --upgrade --quiet # + colab={"base_uri": "https://localhost:8080/"} id="uSe-5QAozIaJ" outputId="98ffe383-2307-4810-eb22-19968c049ecb" import opendatasets as od download_url = 'https://www.kaggle.com/sobhanmoosavi/us-accidents' od.download(download_url) # + id="DgPsvZePzIaJ" data_filename = './us-accidents/US_Accidents_Dec20_Updated.csv' # + [markdown] id="WGRWTzSM4WEL" # ##Data Preparation 
and Cleaning # 1. Load the file using Pandas # 2. Look at some information about the data & the columns # 3. Fix any missing or incorrect values # + id="9OtTQgtgzIaK" import pandas as pd import seaborn as sns # + id="UpSItfRA2RUG" df = pd.read_csv(data_filename) # + colab={"base_uri": "https://localhost:8080/", "height": 530} id="5NPPXKbw2RPP" outputId="e2afbcca-f168-4a0e-ab1b-2693820b5a29" df.head(5) # + id="vb0PLggS_iza" # sns.heatmap(df.corr()) # + colab={"base_uri": "https://localhost:8080/"} id="ozQna60t2RL3" outputId="25d3e43b-c697-47cd-f19b-1af41b48b2fd" df.info() # + colab={"base_uri": "https://localhost:8080/", "height": 317} id="gh_jNDcc2RFg" outputId="d3370904-c888-42ac-e0ba-2d71eb775680" df.describe() # + colab={"base_uri": "https://localhost:8080/"} id="gcC1mv153aK_" outputId="5e73029c-d6fe-400e-f2bc-be2d3db76305" numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64'] numeric_df = df.select_dtypes(include=numerics) len(numeric_df.columns) # + colab={"base_uri": "https://localhost:8080/"} id="RreZbaTh3aHt" outputId="256680b1-6772-458f-b00b-b01f81ae4e56" missing_percentages = df.isna().sum().sort_values(ascending=False) / len(df) missing_percentages # + colab={"base_uri": "https://localhost:8080/"} id="YH3-XlbC4yuK" outputId="b6d7da18-d0cc-4c16-81c6-2fea380b5d54" type(missing_percentages) # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="E_dTDqiR40V_" outputId="ed06c700-88f5-492d-dfcb-0033f8307593" missing_percentages[missing_percentages != 0].plot(kind='barh') # + id="6lSSATKr40Rq" # + id="RH98WFWS40NH" # + [markdown] id="6ByhCXxW5JxX" # ##Exploratory Analysis and Visualization # Columns we'll analyze: # # - City # - Start Time # - Start Lat, Start Lng # - Temperature # - Weather Condition # + colab={"base_uri": "https://localhost:8080/"} id="CeYu5HPL5Q9d" outputId="1db81e9c-61a6-489b-8550-75697f453dd9" df.columns # + [markdown] id="9FOf_JL-5XNA" # ##City # + colab={"base_uri": "https://localhost:8080/"} id="zPjLB_Qh5RWY" outputId="d73de7a1-beae-4fe8-d950-cfd61af401db" df.City # + colab={"base_uri": "https://localhost:8080/"} id="pCyb8U7d5gMW" outputId="db5c8eeb-227c-436a-8201-d78b26381dd0" cities = df.City.unique() len(cities) # + colab={"base_uri": "https://localhost:8080/"} id="U9y32SYb5gJp" outputId="a0a8fafc-9b99-4ffc-a84a-84099c2f4d23" cities_by_accident = df.City.value_counts() cities_by_accident # + colab={"base_uri": "https://localhost:8080/"} id="_DlEKw5Y5gGp" outputId="606a64bd-00f3-4a38-ac06-4a7fa1fe22b5" cities_by_accident[:20] # + colab={"base_uri": "https://localhost:8080/"} id="--_O6ayk55TM" outputId="dd959858-4b84-4a5e-f670-184dfabeb6f6" cities_by_accident[:20] # + colab={"base_uri": "https://localhost:8080/"} id="Z_PRjz7R56eM" outputId="fba81985-13c0-4063-b65f-50943b3cced5" type(cities_by_accident) # + colab={"base_uri": "https://localhost:8080/", "height": 282} id="Ka6orW9w56S7" outputId="6f30ec60-23c1-4e07-934f-b1f4102ad5a4" cities_by_accident[:20].plot(kind='barh') # + id="bFshw_gw6DQC" import seaborn as sns sns.set_style("darkgrid") # + colab={"base_uri": "https://localhost:8080/", "height": 300} id="8kYV3y346DBl" outputId="6a26d4c8-e211-479a-c2b1-2742abaad378" sns.histplot(cities_by_accident, log_scale=True) # + colab={"base_uri": "https://localhost:8080/"} id="3FajZuMx6IdU" outputId="41d0f4e6-5d7c-4114-f206-e1cc153887fa" cities_by_accident[cities_by_accident == 1] # + [markdown] id="4wCpKzQS8nRC" # ##summary and conclusion # Insights: # - No data from New York # - The number of accidents per city decreases 
exponentially
# - Less than 5% of cities have more than 1000 yearly accidents.
# - Over 1200 cities have reported just one accident.

# + [markdown] id="mEEXADcB6QEY"
# ## Start_Time

# + colab={"base_uri": "https://localhost:8080/"} id="g83axEGS6IZ9" outputId="863044e9-492e-4b16-c6b1-e2cfbde5187a"
df.Start_Time

# + id="8v8CPBMT6Ob9"
df.Start_Time = pd.to_datetime(df.Start_Time)  ## converting into a datetime object

# + colab={"base_uri": "https://localhost:8080/"} id="GUVV4Sjb_n4d" outputId="ee11a6ad-f7d7-4709-f315-32ccf1419919"
df.Start_Time[0]

# + colab={"base_uri": "https://localhost:8080/", "height": 297} id="IRUhjC4YBgfC" outputId="67bde664-1d61-46c2-a53e-d7dfdf2515ca"
sns.histplot(df.Start_Time.dt.hour)

# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="eTihTKmK6OVL" outputId="0884798a-1552-42d4-d23d-37ef544d9334"
sns.distplot(df.Start_Time.dt.hour, bins=24, kde=False, norm_hist=True)  # norm_hist for percentage

# + [markdown] id="SNPMGV3W7RZr"
# - A high percentage of accidents occur between 6 am and 10 am (probably people in a hurry to get to work).
# - The next highest percentage is 3 pm to 6 pm.

# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="b2Wfp7KO7MdG" outputId="5f014406-2461-4d49-ba98-d4eabfc541e0"
sns.distplot(df.Start_Time.dt.dayofweek, bins=7, kde=False, norm_hist=True)

# + [markdown] id="MFWmalS07kgi"
# - A high percentage of accidents occur between Monday and Friday (probably people commuting to work on these days).

# + [markdown] id="roGyjF8sEJfU"
# Is the distribution of accidents by hour the same on weekends as on weekdays?

# + [markdown] id="USMZIF2LEJXY"
#

# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="TLXq8A5V6cOs" outputId="66ba76c7-f7f2-42da-a84f-b94920ee6dbe"
sundays_start_time = df.Start_Time[df.Start_Time.dt.dayofweek == 6]
sns.distplot(sundays_start_time.dt.hour, bins=24, kde=False, norm_hist=True)

# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="MtnSiQ2oE1IT" outputId="4628d8ec-1cc4-44cb-c06a-df7706e49ab0"
Mondays_start_time = df.Start_Time[df.Start_Time.dt.dayofweek == 0]  # Monday is 0 in pandas dayofweek
sns.distplot(Mondays_start_time.dt.hour, bins=24, kde=False, norm_hist=True)

# + [markdown] id="bNezJSayFe3r"
# On Sundays, the peak occurs between 10 am and 3 pm, unlike weekdays.

# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="rSs2EzucFeg7" outputId="f1a430de-d81b-4994-a74a-8d80003148f8"
sns.distplot(df.Start_Time.dt.month, bins=12, kde=False, norm_hist=True)

# + id="bVGTLIWIFQDE"
df_2019 = df[df.Start_Time.dt.year == 2019]

# + colab={"base_uri": "https://localhost:8080/", "height": 353} id="N127xtjUE1EY" outputId="e006244c-371d-45a5-95b7-f62cf31c17fe"
sns.distplot(df_2019.Start_Time.dt.month, bins=12, kde=False, norm_hist=True)

# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="TwAoYIFQHM2c" outputId="b9471b03-9a7a-420f-a33a-81c1f4f2b60e"
df_2018 = df[df.Start_Time.dt.year == 2018]
sns.distplot(df_2018.Start_Time.dt.month, bins=12, kde=False, norm_hist=True)

# + colab={"base_uri": "https://localhost:8080/", "height": 351} id="x5kbs8wMHMx1" outputId="973bf904-4dfe-498e-866a-defd736b14c1"
df_2016 = df[df.Start_Time.dt.year == 2016]
sns.distplot(df_2016.Start_Time.dt.month, bins=12, kde=False, norm_hist=True)

# + [markdown] id="4Z7AyoFuE0r2"
# Can you explain the month-wise trend of accidents?
# - Much data is missing for 2016. Maybe even 2017.
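# A quick illustrative check of the note above (a minimal sketch, not an original cell of this notebook): counting records per calendar year makes it easy to see whether 2016 (and possibly 2017) is genuinely sparse or simply not fully covered. It assumes `df.Start_Time` has already been converted with `pd.to_datetime`, as done earlier.

# +
# Number of recorded accidents per year; an unusually small count for the
# early years points to incomplete coverage rather than fewer accidents.
df.Start_Time.dt.year.value_counts().sort_index()
# -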
# + colab={"base_uri": "https://localhost:8080/"} id="8Xlrct5sJDqv" outputId="8c49d870-3b78-4148-e4bc-51a254abefed" df_2016.info() df.info() # + [markdown] id="l3d1_uXROpFw" # Start lag,star log # + colab={"base_uri": "https://localhost:8080/"} id="F9DOk87MOlP4" outputId="cb49e49a-407a-4bfa-eea3-62cf19d5b386" df.Start_Lng # + colab={"base_uri": "https://localhost:8080/", "height": 299} id="jdKpjNKv6cMR" outputId="967276dc-6770-47f3-ac54-5d0d6ad32202" sns.scatterplot(x=df.Start_Lng, y=df.Start_Lat,s=1) # + id="wEkxf1O6IjdO" import folium # + colab={"base_uri": "https://localhost:8080/", "height": 768} id="BnJtWbejQIqo" outputId="f70234fb-965e-400e-a2ae-fa67ee5556ff" folium.Map(location=[38.9,-77.05],zoom_start=12) # + id="qfes9QnAQ5OB" lat, lon = df.Start_Lat[0], df.Start_Lng[0] # + id="wjwgSPvsRy_O" for x in df[['Start_Lat', 'Start_Lng']].sample(100).iteritems(): # + colab={"base_uri": "https://localhost:8080/", "height": 768} id="GYW8zMNiQIoT" outputId="3fc98452-5c74-4b26-ea4a-4f93a6e195d9" map=folium.Map() for lat, lon marker = folium.Marker((lat,lon)) marker.add_to(map) map # + colab={"base_uri": "https://localhost:8080/"} id="YxeDa6YwTIT_" outputId="32ff7a37-d86b-4199-e975-66840be75fce" list(zip(list(df.Start_Lat), list(df.Start_Lng))) # + id="0342WKk3UbYi" from folium.plugins import HeatMap # + id="4RLYekR0U5WG" sample_df = df.sample(int(0.000001 * len(df))) lat_lon_pairs = list(zip(list(df.Start_Lat), list(df.Start_Lng))) # + id="4KUqlZhOSfFb" map = folium.Map() HeatMap(lat_lon_pairs).add_to(map) map # + id="xT3_fjVcQIls" # + id="rGGVQk0DQIi3" # + [markdown] id="RLCC1FjM9ZZD" # ##Questions and Answers # 1. Are there more accidents in warmer or colder areas? # 2. Which 5 states have the highest number of accidents? How about per capita? # 3. Does New York show up in the data? If yes, Why is the count lower if this the most populated city. # 4. Among teh top 100 cities in number of accidents, which states do they belong to most frequently. # 5. what time of the day are accidents most frequent in? # 6. Which days of the week have teh most accidents? # 7. Which months have the most accidents? # 8. What is the trend of accidents year over year (decreasing/incresing)? # 9. 
# + colab={"base_uri": "https://localhost:8080/"} id="WIIFygJ9Y04Y" outputId="8448826d-f78b-4940-eb0b-0dd08ac0c9fb" # + id="8cRHREojYtjL" # + colab={"base_uri": "https://localhost:8080/"} id="cwopSj6BYtgn" outputId="5c58200c-efd1-4152-82d7-b242cec73d74" # + id="jeB79kcZYtd6" # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # open FIA data points # + # Mostly-standard imports import os import sys sys.path.append('/content') import xarray as xr import tempfile import numpy as np import shutil import urllib import matplotlib.pyplot as plt import pandas as pd import geopandas as gpd from shapely.geometry import Point from glob import glob import utm import src.azuretools as azt #from mpl_toolkits.basemap import Basemap # Less-common-but-still-pip-installable imports from netCDF4 import Dataset # pip install progressbar2, not progressbar import progressbar # This will contain just the .nc files nasadem_file_list = None temp_dir = os.path.join(tempfile.gettempdir(),'nasadem') os.makedirs(temp_dir,exist_ok=True) # %matplotlib inline # %load_ext autoreload # %autoreload 2 # + blob = '/home/datablob' fiafile = f'{blob}/fia_no_pltcn.csv' plots = gpd.GeoDataFrame(pd.read_csv(fiafile)) plots.Latitude = plots.LAT.astype('float') plots.Longitude = plots.LON.astype('float') geometry = [Point(xy) for xy in zip(plots.Longitude, plots.Latitude)] crs = {'init': 'epsg:4326'} geo_df = gpd.GeoDataFrame(plots, crs=crs, geometry=geometry) #subset by STATECD statecd = pd.read_csv(glob(f'{blob}/supp_data/*.csv')[0]) query = 'CA' code = statecd[statecd['STATEAB']== query]['STATECD'].values[0] subdf = geo_df[geo_df['STATECD']==code] fields = ['INDEX', 'INVYR', 'LAT', 'LON'] subdf = subdf.astype({'LAT':'float', 'LON':'float', 'INVYR':'int32'}) subdf = gpd.GeoDataFrame(subdf, geometry=gpd.points_from_xy(subdf.LON, subdf.LAT)) subdf = subdf.set_crs(epsg=4326) subdf = subdf.rename(columns={'Unnamed: 0':'INDEX'}) #statebounds for reducing query on cloud statebounds = glob(f'{blob}/supp_data/shp/*.shp')[0] states = gpd.read_file(statebounds) state = states[states['STPOSTAL']==query] state_shape = state.geometry # - ''' run this if you are hoping to build your URL list ''' url_list = [] for i,s in subdf.iterrows(): tile_of_interest = [s['geometry'].y,s['geometry'].x] tile_name = lat_lon_to_nasadem_tile(tile_of_interest[0],tile_of_interest[1]) url = azt.nasadem_blob_root + tile_name url_list.append(url) subdf['DEM_URL'] = url_list # # selecting by number of cells # + ''' if we want to select by number of cells around the center point ''' cell_buffer = 10 if not os.path.exists(f'{blob}/training_tiles/NASADEM/{query}'): os.makedirs(f'{blob}/training_tiles/NASADEM/{query}') for sample_index in range(len(subdf)): sample_point = gpd.GeoDataFrame(subdf.iloc[sample_index]).T sample_point.crs = {'init':'epsg:4269'} tile_of_interest = [sample_point.geometry.y.values[0], sample_point.geometry.x.values[0]] tile_name = azt.lat_lon_to_nasadem_tile(tile_of_interest[0],\ tile_of_interest[1],\ nasadem_file_list) url = azt.nasadem_blob_root + tile_name fn = azt.download_url(url,progress_updater = azt.DownloadProgressBar()) fh = xr.open_dataset(fn, engine='h5netcdf') yi = np.argmin(np.abs(fh.lat - float(sample_point['LAT'])).values) xi = np.argmin(np.abs(fh.lon - float(sample_point['LON'])).values) #set the area around those point defined as c data_chunk = 
fh.isel(lat=slice(yi-cell_buffer,yi+cell_buffer),\ lon=slice(xi-cell_buffer,xi+cell_buffer)) outfilename = f'{blob}/training_tiles/NASADEM/{query}/{sample_point.INDEX.values[0]}.nc' data_chunk.to_netcdf(outfilename) # - # # selecting by distance # + ''' if we want to select by distance around center point ''' if not os.path.exists(f'{blob}/training_tiles/NASADEM/{query}'): os.makedirs(f'{blob}/training_tiles/NASADEM/{query}') #given we know the utm values can we go look around 1km? buffer_dist = 500 #500 meters both sides of the center for 1km image for sample_index in range(len(subdf)): sample_point = gpd.GeoDataFrame(subdf.iloc[sample_index]).T sample_point.crs = {'init':'epsg:4269'} tile_of_interest = [sample_point.geometry.y.values[0], sample_point.geometry.x.values[0]] tile_name = azt.lat_lon_to_nasadem_tile(tile_of_interest[0],\ tile_of_interest[1],\ nasadem_file_list) url = azt.nasadem_blob_root + tile_name fn = azt.download_url(url,progress_updater = azt.DownloadProgressBar()) fh = xr.open_dataset(fn, engine='h5netcdf') e_utm_values, n_utm_values, zone, hemi = utm.from_latlon(latitude=float(sample_point.geometry.y), \ longitude=float(sample_point.geometry.x)) bounds = {'miny':n_utm_values-buffer_dist, \ 'maxy':n_utm_values+buffer_dist, \ 'minx':e_utm_values-buffer_dist, \ 'maxx':e_utm_values+buffer_dist} bbox = {'lowerleft':(bounds['miny'], bounds['minx'], zone, hemi),\ 'upperleft':(bounds['maxy'], bounds['minx'], zone, hemi),\ 'upperright':(bounds['maxy'], bounds['maxx'], zone, hemi),\ 'lowerright':(bounds['miny'], bounds['maxx'], zone, hemi)} #transform bounds to lat lon bbox_latlon = {k:utm.to_latlon(v[1], v[0], zone_number=v[2], zone_letter=v[3]) for k,v in bbox.items()} minx, maxx, miny, maxy = bbox_latlon['lowerleft'][1],bbox_latlon['lowerright'][1],\ bbox_latlon['lowerleft'][0],bbox_latlon['upperright'][0] minyi = np.argmin(np.abs(fh.lat - miny).values) maxyi = np.argmin(np.abs(fh.lat - maxy).values) minxi = np.argmin(np.abs(fh.lon - minx).values) maxxi = np.argmin(np.abs(fh.lon - maxx).values) #set the area around those point defined as c data_chunk = fh.isel(lat=slice(maxyi, minyi),\ lon=slice(minxi, maxxi)) outfilename = f'{blob}/training_tiles/NASADEM/{query}/{sample_point.INDEX.values[0]}.nc' data_chunk.to_netcdf(outfilename) # + fh = xr.open_dataset(fn, engine='h5netcdf') heights = fh['NASADEM_HGT'][:] lons = fh.variables['lon'][:] lats = fh.variables['lat'][:] min_height = np.min(heights) max_height = np.max(heights) height_units = fh['NASADEM_HGT'].units fh.close() print('Height ranges from {} {} to {} {}'.format(min_height,height_units, max_height,height_units)) extent = [np.min(lons), np.max(lons), np.min(lats), np.max(lats)] plt.imshow(heights,extent=extent) plt.xlabel('Longitude') plt.ylabel('Latitude') cb = plt.colorbar() cb.set_label('Height ({})'.format(height_units)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="jyJ3Yev3hzwQ" # Open In Colab # + [markdown] id="3Ezk4xyjiVNV" # ## Notes # # This notebook is designed to be run in **Colab (Google Colaboratory)**, not in your local Jupyter Notebook environment. # # So, if you want to run it locally, please rewrite the PATH settings of each cell and the file input/output code appropriately. 
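# As a hedged sketch of the note above (not part of the original notebook): one common way to adapt a Colab notebook for local use is to branch on the runtime and set the working directory accordingly. The local path below is a placeholder assumption, not a path used elsewhere in this notebook.

# +
import importlib.util
import os

IN_COLAB = importlib.util.find_spec("google.colab") is not None  # True only inside Colab
QEPPI_DIR = "/content/QEPPI" if IN_COLAB else os.path.expanduser("~/QEPPI")  # hypothetical local checkout
# -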
# + id="XirDETzAV0s4" # !pip install rdkit-pypi from rdkit import rdBase from rdkit import Chem from rdkit.Chem import Draw print(rdBase.rdkitVersion) # + id="W9iCv8X5GxSd" # !git clone https://github.com/ohuelab/QEPPI.git # + id="Bo609XfrOINp" #Move the working directory to path import os path = "/content/QEPPI" os.chdir(path) # + [markdown] id="510NKIkmlFPn" # # Calculation QEPPI for Your "SMILES" # # + id="RLZj8WajlikM" #@markdown ##### Example SMILES: **Idasanutlin** is a potent and selective p53-MDM2 inhibitor #@markdown ##### Replace with the "SMILES" you are interested in. SMILES = "COC1=CC(=CC=C1NC(=O)[C@@H]1N[C@@H](CC(C)(C)C)[C@@](C#N)([C@H]1C1=CC=CC(Cl)=C1F)C1=CC=C(Cl)C=C1F)C(O)=O" #@param {type:"string"} if len(SMILES) == 0: print("Please enter SMILES.") else: # !python calc_QEPPI.py --smiles '$SMILES' mol = Chem.MolFromSmiles(SMILES) img = Draw.MolToImage(mol) img # + [markdown] id="MNIh6cFLH_0B" # # Calculation QEPPI for Your "SDF file" # If you upload the SDF, the result of calculating QEPPI will be returned as CSV with SMILES. # + id="k5zXO7IRToX_" from google.colab import files print("Please your SDF file upload") uploaded = files.upload() file_name = list(uploaded.keys())[0] # !python calc_QEPPI.py --sdf $file_name # !rm -rf $file_name files.download("result.csv") # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- from astrometry.util.fits import * # %matplotlib inline import pylab as plt import numpy as np import json from astrometry.util.util import Tan, Sip, fit_sip_wcs_py from astrometry.util.starutil_numpy import radectoxyz, arcsec_between from scipy.interpolate import InterpolatedUnivariateSpline # %load_ext autoreload # %aimport petal_metrology # %aimport platescale_ss from platescale_ss import Tpsfunc, Rps, Mps, Sps, Tps from petal_metrology import get_petal petal_id = 7 petal = get_petal(petal_id) plt.plot(petal.fifs.x.ravel(), petal.fifs.y.ravel(), 'b.', label='FIDs') plt.plot(petal.gif1.x, petal.gif1.y, 'r.', label='GIF1') plt.plot(petal.gif2.x, petal.gif2.y, 'm.', label='GIF2') plt.legend() plt.title('Petal %i: Fiducials' % petal_id) plt.xlabel('Petal-local x (mm)') plt.xlabel('Petal-local y (mm)') plt.axis('equal'); plt.plot(petal.gif1.x, petal.gif1.y, 'r.') plt.plot(petal.gif2.x, petal.gif2.y, 'b.') plt.title('Petal %i: GIFs' % petal_id) plt.axis('equal'); plt.plot(petal.gfa.gif_1_mm_x, petal.gfa.gif_1_mm_y, 'r.', label='GIF1') plt.plot(petal.gfa.gif_2_mm_x, petal.gfa.gif_2_mm_y, 'b.', label='GIF2') cx,cy = petal.gfa_pix_to_gfa_mm(petal.ccdbpx, petal.ccdbpy) plt.plot(cx, cy, 'k-', label='CCD bounds') plt.legend() plt.title('GFA metrology local positions') plt.axis('equal'); g1x,g1y = petal.gfa_mm_to_focal_mm(petal.gfa.gif_1_mm_x, petal.gfa.gif_1_mm_y) g2x,g2y = petal.gfa_mm_to_focal_mm(petal.gfa.gif_2_mm_x, petal.gfa.gif_2_mm_y) plt.plot(petal.ccdbx, petal.ccdby, 'k-') plt.plot(petal.ccdbx[0], petal.ccdby[0], 'ko') plt.plot(g1x, g1y, 'kx') plt.plot(petal.gif1.x, petal.gif1.y, 'r.') plt.plot(g2x, g2y, 'gx') plt.plot(petal.gif2.x, petal.gif2.y, 'b.') plt.title('Transformed GFA CCD location (petal-local)') plt.xlabel('Petal-local x (mm)') plt.ylabel('Petal-local y (mm)') plt.axis('equal'); print('Scatter in GIF1 fit positions: %.3f mm' % (np.mean(np.hypot(petal.gif1.x - g1x, petal.gif1.y - g1y)))) print('Scatter in GIF2 fit positions: %.3f mm' % (np.mean(np.hypot(petal.gif2.x - g2x, petal.gif2.y 
- g2y)))) plt.figure(figsize=(8,4)) plt.subplot(1,2,1) plt.plot(g1x, g1y, 'kx') plt.plot(petal.gif1.x, petal.gif1.y, 'r.') plt.axis('equal') plt.subplot(1,2,2) plt.plot(g2x, g2y, 'gx') plt.plot(petal.gif2.x, petal.gif2.y, 'b.') plt.axis('equal'); plt.plot(Rps, Mps, 'b-', label='Medial (radial direction)') plt.plot(Rps, Sps, 'r-', label='Saggital (tangential direction)') plt.legend() plt.xlabel('Radius (mm)') plt.title('Plate scale') plt.ylabel('Plate scale (um/arcsec)'); plt.plot(Rps, Tps, 'r-') plt.xlabel('Focal plane radius (mm)') plt.ylabel('Field of view Theta (deg)'); # Now fit a SIP polynomial distortion model for how the echo22 optics projects onto GFA CCD pixels. # # Evaluate a grid of points in CCD space and in RA,Dec, using the metrology transformations to go from GFA CCD pixels # to focal plane coordinates, and from there to Theta and RA,Dec. # + x0 = min([min(petal.gfa.gif_1_pix_x), min(petal.gfa.gif_2_pix_x), 0]) - 100 y0 = min([min(petal.gfa.gif_1_pix_y), min(petal.gfa.gif_2_pix_y), 0]) - 100 x1 = max([max(petal.gfa.gif_1_pix_x), max(petal.gfa.gif_2_pix_x), petal.ccdw]) + 100 y1 = max([max(petal.gfa.gif_1_pix_y), max(petal.gfa.gif_2_pix_y), petal.ccdh]) + 100 ccdgridpx, ccdgridpy = np.meshgrid(np.linspace(x0, x1, 20), np.linspace(y0, y1, 20)) ccdgridpx = ccdgridpx.ravel() ccdgridpy = ccdgridpy.ravel() gridx, gridy = petal.gfa_pix_to_focal_mm(ccdgridpx, ccdgridpy) gridr = np.hypot(gridx, gridy) crpixx,crpixy = (petal.ccdw+1.)/2., (petal.ccdh+1.)/2. crx,cry = petal.gfa_pix_to_focal_mm(crpixx, crpixy) # + plt.plot(gridx, gridy, 'g.') plt.plot(gridx[0], gridy[0], 'go') plt.plot(crx, cry, 'go') plt.plot(petal.ccdbx, petal.ccdby, 'k-') plt.plot(petal.ccdbx[0], petal.ccdby[0], 'ko') plt.plot(g1x, g1y, 'kx') plt.plot(petal.gif1.x, petal.gif1.y, 'r.') plt.plot(g2x, g2y, 'gx') plt.plot(petal.gif2.x, petal.gif2.y, 'b.') plt.title('Distortion-fitting grid') plt.xlabel('Petal x (mm)') plt.ylabel('Petal y (mm)') plt.axis('equal'); # - theta = Tpsfunc(gridr) gridu = theta * gridx / gridr gridv = theta * gridy / gridr crr = np.hypot(crx, cry) crd = Tpsfunc(crr) cru = crd * crx / crr crv = crd * cry / crr griddec = gridv gridra = -gridu / np.cos(np.deg2rad(griddec)) starxyz = radectoxyz(gridra, griddec) fieldxy = np.vstack((ccdgridpx, ccdgridpy)).T weights = np.ones(len(gridra)) crdec = crv[0] crra = -cru[0] / np.cos(np.deg2rad(crdec)) ps = 0.2/3600. tan_in = Tan(crra, crdec, crpixx, crpixy, -ps, 0., 0., ps, float(petal.ccdw), float(petal.ccdh)) sip_order = inv_order = 4 sip = fit_sip_wcs_py(starxyz, fieldxy, weights, tan_in, sip_order, inv_order) #for order in range(2,7): # sip = fit_sip_wcs_py(starxyz, fieldxy, weights, tan_in, order, order) # ok,fitx,fity = sip.radec2pixelxy(gridra, griddec) # d = np.hypot(fitx - ccdgridpx, fity - ccdgridpy) # print('SIP order', order, '-> mean fit error', np.mean(d), 'pixels, max', np.max(d)) # fitr,fitd = sip.pixelxy2radec(ccdgridpx, ccdgridpy) # d = np.array([arcsec_between(r1,d1,r2,d2) for r1,d1,r2,d2 in zip(fitr, fitd, gridra, griddec)]) # print(' mean', np.mean(d), 'arcsec, max', np.max(d)) CRVAL = sip.crval # + # Don't write here -- use the gif-fif.py driver scripts instead. It also plugs in the GIF pixel positions to the SIP header. #sip.write_to('sip-petal%i.fits' % petal_id) # - # Moving the CRVAL to (1,0), demo sip.crval = [1., 0.] 
plt.plot(sip.crval[0], sip.crval[1], 'go') gpx1,gpy1 = petal.focal_mm_to_gfa_pix(petal.gif1.x, petal.gif1.y) gpx2,gpy2 = petal.focal_mm_to_gfa_pix(petal.gif2.x, petal.gif2.y) grx,gry = np.meshgrid(np.linspace(x0, x1, 10), np.linspace(y0, y1, 10)) grr,grd = sip.pixelxy2radec(grx,gry) plt.plot(grr, grd, 'g.') g1r,g1d = sip.pixelxy2radec(gpx1, gpy1) g2r,g2d = sip.pixelxy2radec(gpx2, gpy2) plt.plot(g1r, g1d, 'r.') plt.plot(g2r, g2d, 'b.') bra,bdec = sip.pixelxy2radec(petal.ccdbpx, petal.ccdbpy) plt.plot(bra, bdec, 'k-') plt.plot(bra[0], bdec[0], 'ko') xl,xh = plt.xlim() plt.xlim(xh,xl) plt.axis('equal'); plt.figure(figsize=(6,5)) # HACK -- we subtract 1 from RA to center the plot at 0,0! plt.plot(grr - 1., grd, 'g.') gtr,gtd = sip.wcstan.pixelxy2radec(grx,gry) # NOTE flipped sign on the RA quiver direction plt.quiver(grr-1, grd, -(grr-gtr), grd-gtd) print('Max arrow length: %.1f arcsec' % (np.max(np.hypot(grr-gtr, grd-gtd))*3600.)) plt.plot(g1r-1, g1d, 'r.', label='GIF1') plt.plot(g2r-1, g2d, 'b.', label='GIF2') plt.plot(bra-1, bdec, 'k-', label='CCD bounds') plt.plot(bra[0]-1, bdec[0], 'ko') plt.legend() plt.title('Distortion seen by GFA CCD') plt.axis('equal'); plt.xlabel('RA (deg)') plt.ylabel('Dec (deg)') xl,xh = plt.xlim() plt.xlim(xh,xl) plt.savefig('distortion.pdf') petal.gfa.gif_1_pix_x, petal.gfa.gif_1_pix_y petal.gfa.gif_2_pix_x, petal.gfa.gif_2_pix_y xx = np.linspace(x0, x1, 500) yy = np.linspace(y0, y1, 500) # GIF 1 is at (w, large +y) r1y = yy r1x = np.zeros_like(yy) + petal.ccdw r1sip,d1sip = sip.pixelxy2radec(r1x, r1y) r1tan,d1tan = sip.wcstan.pixelxy2radec(r1x, r1y) # GIF 2 is at (-med x, 0) r2x = xx r2y = np.zeros_like(xx) r2sip,d2sip = sip.pixelxy2radec(r2x, r2y) r2tan,d2tan = sip.wcstan.pixelxy2radec(r2x, r2y) dist1 = np.array([arcsec_between(r1,d1,r2,d2) for r1,d1,r2,d2 in zip(r1sip,d1sip,r1tan,d1tan)]) dist2 = np.array([arcsec_between(r1,d1,r2,d2) for r1,d1,r2,d2 in zip(r2sip,d2sip,r2tan,d2tan)]) plt.plot(r1y, dist1, 'r-') plt.plot(r2x, dist2, 'b-'); plt.axvline(gpy1[0], color='r', label='GIF1') plt.axvline(gpx2[0], color='b', label='GIF2') plt.legend() plt.xlabel('CCD coordinate (pixels)') plt.ylabel('Difference from TANgent plane to Echo22 (arcsec)'); # OLD STUFF NOT USED from focalplane_ss import focal_surface_ss from scipy.interpolate import InterpolatedUnivariateSpline rz = np.array([[float(w) for w in line.split('\t')] for line in focal_surface_ss.split('\n')]) Ropt,Zopt = rz[:,0], rz[:,1] Zoptics = InterpolatedUnivariateSpline(Ropt, Zopt) Spoly = np.array([9.95083E-06, 9.99997E-01, 1.79466E-07, 1.76983E-09, 7.24320E-11, -5.74381E-13, 3.28356E-15, -1.10626E-17, 1.89154E-20, -1.25367E-23]) radius = np.linspace(0, 420, 421) S = np.zeros_like(radius) for i,s in enumerate(Spoly): S += s * radius**i # DESI-0530-v16 -- focal plane layout #device_location_id device_type X Y Z design_ss = '''0 POS 28.134375 5.201437 -0.082419 1 POS 38.551296 5.201448 -0.152664 2 POS 33.343493 14.253354 -0.132583 3 POS 48.968106 5.201461 -0.245266 4 POS 43.760371 14.255653 -0.214058 5 POS 59.384810 5.201480 -0.360537 6 POS 54.177144 14.256291 -0.318053 7 POS 48.967142 23.345351 -0.298069 8 POS 69.800289 5.201506 -0.498824 9 POS 64.592763 14.249463 -0.444873 10 POS 59.383858 23.337718 -0.413611 11 FIF 54.182759 32.395574 -0.404797 12 POS 80.215689 5.201540 -0.660539 13 POS 75.008287 14.241437 -0.594928 14 POS 69.800405 23.299254 -0.552079 15 POS 64.599448 32.359504 -0.531959 16 POS 90.630976 5.201583 -0.846101 17 POS 85.423703 14.236912 -0.768639 18 POS 80.215599 23.295915 -0.714161 19 POS 
75.016034 32.323704 -0.682426 20 POS 69.786205 41.395803 -0.673227 21 POS 101.046105 5.201632 -1.055932 22 POS 95.838980 14.237022 -0.966431 23 POS 90.630637 23.299708 -0.900149 24 POS 85.432557 32.329622 -0.856874 25 POS 80.200656 41.359380 -0.835814 26 POS 111.461023 5.201688 -1.290443 27 POS 106.254079 14.235969 -1.188700 28 POS 101.045606 23.296309 -1.110389 29 POS 95.848134 32.326855 -1.055354 30 POS 90.614980 41.368525 -1.022718 31 POS 85.361306 50.431334 -1.012965 32 POS 121.875677 5.201751 -1.550013 33 POS 116.668954 14.238986 -1.435858 34 POS 111.460452 23.297031 -1.345338 35 POS 106.263304 32.326987 -1.278351 36 POS 101.029161 41.366600 -1.233845 37 POS 95.776854 50.441734 -1.212562 38 NON 90.539726 59.568980 -1.215569 39 POS 132.290020 5.201820 -1.834975 40 POS 127.083563 14.242436 -1.708249 41 POS 121.875109 23.312745 -1.605420 42 POS 116.677631 32.345810 -1.526342 43 POS 111.443232 41.368033 -1.469701 44 POS 106.192152 50.442824 -1.436591 45 POS 100.955815 59.573706 -1.428027 46 POS 142.704022 5.201895 -2.145600 47 POS 137.497877 14.241743 -2.006158 48 POS 132.289546 23.328109 -1.890884 49 POS 127.093215 32.371519 -1.799643 50 POS 121.857174 41.401574 -1.730894 51 POS 116.607190 50.444640 -1.685510 52 POS 111.371701 59.574862 -1.665163 53 POS 106.147709 68.622231 -1.667694 54 POS 153.117665 5.201978 -2.482086 55 POS 147.911879 14.235958 -2.329817 56 POS 142.703742 23.330238 -2.201928 57 POS 137.506269 32.376499 -2.098242 58 POS 132.271120 41.413641 -2.017254 59 POS 127.021981 50.454549 -1.959713 60 POS 121.787408 59.563824 -1.927184 61 POS 116.563412 68.623975 -1.918115 62 POS 163.530944 5.202069 -2.844554 63 POS 158.325567 14.241167 -2.679433 64 POS 153.117660 23.321177 -2.538751 65 POS 147.918667 32.352849 -2.422347 66 POS 142.685037 41.415900 -2.329129 67 POS 137.436514 50.486772 -2.259620 68 POS 132.202886 59.588974 -2.214985 69 POS 126.978911 68.612156 -2.193495 70 POS 121.778363 77.687710 -2.197208 71 POS 173.943870 5.202169 -3.233037 72 POS 168.738956 14.240451 -3.055031 73 POS 163.531281 23.302369 -2.901472 74 POS 158.330800 32.347757 -2.772446 75 FIF 153.098887 41.393265 -2.666555 76 POS 147.850932 50.487573 -2.584842 77 POS 142.618180 59.596960 -2.528094 78 POS 137.394166 68.637151 -2.494814 79 POS 132.194081 77.680593 -2.486192 80 POS 184.356463 5.202278 -3.647476 81 POS 179.152070 14.237881 -3.456608 82 POS 173.944624 23.313286 -3.290323 83 POS 168.742846 32.358998 -3.148597 84 POS 163.512702 41.389721 -3.030031 85 POS 158.265131 50.453724 -2.935460 86 POS 153.033251 59.588635 -2.866668 87 POS 147.809242 68.640326 -2.821366 88 POS 142.609571 77.685414 -2.800809 89 POS 137.409794 86.716472 -2.803969 90 POS 194.768763 5.202397 -4.087734 91 POS 189.564948 14.231336 -3.884058 92 POS 184.357651 23.280410 -3.704860 93 POS 179.154623 32.342155 -3.550453 94 POS 173.926432 41.409409 -3.419611 95 POS 168.679209 50.459776 -3.312354 96 POS 163.448048 59.556355 -3.230675 97 POS 158.224107 68.634312 -3.173432 98 POS 153.024816 77.664324 -3.140525 99 POS 147.825218 86.706198 -3.131882 100 POS 142.616722 95.751246 -3.147111 101 POS 205.180829 5.202527 -4.553598 102 POS 199.977654 14.228149 -4.337229 103 POS 194.770456 23.256961 -4.145219 104 POS 189.566012 32.307850 -3.977977 105 POS 184.340094 41.387352 -3.834629 106 POS 179.093132 50.462417 -3.715005 107 POS 173.862650 59.551524 -3.620789 108 POS 168.638734 68.605303 -3.550822 109 POS 163.439840 77.654626 -3.506024 110 POS 158.240184 86.685466 -3.485072 111 POS 153.031586 95.726599 -3.488175 112 POS 215.592744 5.202667 -5.044797 
113 POS 210.390253 14.233781 -4.815894 114 POS 205.183093 23.257197 -4.611264 115 POS 199.977245 32.285080 -4.431206 116 POS 194.753631 41.348039 -4.275158 117 POS 189.506912 50.462612 -4.143312 118 POS 184.277019 59.546670 -4.036566 119 POS 179.053197 68.607518 -3.954311 120 POS 173.854426 77.638308 -3.896961 121 POS 168.655095 86.670127 -3.863853 122 POS 163.446281 95.705959 -3.854716 123 POS 158.248634 104.741309 -3.870224 124 POS 226.004616 5.202820 -5.561013 125 POS 220.802775 14.229185 -5.319695 126 POS 215.595622 23.261744 -5.102621 127 POS 210.388311 32.317634 -4.910212 128 POS 205.167155 41.341782 -4.741456 129 POS 199.920516 50.426528 -4.596705 130 POS 194.691621 59.521908 -4.477607 131 POS 189.467524 68.577093 -4.382801 132 POS 184.268924 77.622912 -4.313382 133 POS 179.069504 86.655195 -4.268042 134 POS 173.860831 95.690899 -4.246718 135 POS 168.663659 104.721625 -4.250040 136 POS 236.416418 5.202984 -6.101909 137 POS 231.215143 14.228888 -5.848342 138 POS 226.008091 23.252494 -5.618881 139 POS 220.799357 32.296346 -5.413915 140 POS 215.580565 41.367331 -5.233235 141 POS 210.334018 50.389100 -5.075337 142 POS 205.105538 59.491820 -4.943851 143 POS 199.882451 68.552724 -4.836730 144 POS 194.682930 77.592933 -4.754841 145 POS 189.483958 86.638469 -4.697482 146 POS 184.275264 95.675817 -4.663955 147 POS 179.078456 104.707107 -4.655151 148 POS 173.885775 113.739788 -4.670876 149 POS 246.838139 5.203163 -6.667712 150 FIF 241.627317 14.223609 -6.401469 151 POS 236.420299 23.253682 -6.159827 152 POS 231.210442 32.278714 -5.942410 153 POS 225.994948 41.335119 -5.749412 154 POS 220.747479 50.412015 -5.579621 155 POS 215.519290 59.455303 -5.435037 156 POS 210.296516 68.523290 -5.315623 157 POS 205.096945 77.569992 -5.221518 158 POS 199.897878 86.608920 -5.151744 159 POS 194.689627 95.659188 -5.106229 160 POS 189.493000 104.691658 -5.085266 161 POS 184.300546 113.724981 -5.088871 162 POS 179.106358 122.756412 -5.116722 163 POS 257.252581 5.203355 -7.257219 164 POS 252.044444 14.225931 -6.979102 165 POS 246.832142 23.247567 -6.725043 166 POS 241.621641 32.280091 -6.495493 167 POS 236.409626 41.304861 -6.290191 168 POS 231.161858 50.379780 -6.107991 169 POS 225.932910 59.445972 -5.951322 170 POS 220.710026 68.489860 -5.819230 171 POS 215.510811 77.540650 -5.712918 172 POS 210.312055 86.586048 -5.631009 173 POS 205.104645 95.629603 -5.573119 174 POS 199.907269 104.672739 -5.540138 175 POS 194.715187 113.707386 -5.531610 176 POS 189.520745 122.739141 -5.547294 177 POS 267.667769 5.203564 -7.870659 178 POS 262.457076 14.223228 -7.580439 179 POS 257.245228 23.248210 -7.314422 180 POS 252.034981 32.272538 -7.072788 181 POS 246.822518 41.305017 -6.855427 182 POS 241.576687 50.349591 -6.660775 183 POS 236.347185 59.413428 -6.491778 184 POS 231.128043 68.474233 -6.347859 185 POS 225.924551 77.507637 -6.228817 186 POS 220.726030 86.556806 -6.134746 187 POS 215.518882 95.606560 -6.064743 188 POS 210.321473 104.643918 -6.019405 189 POS 205.129745 113.686869 -5.998894 190 POS 199.935558 122.719129 -6.002410 191 POS 194.744335 131.752958 -6.030497 192 POS 278.094967 5.203789 -8.508655 193 POS 272.880840 14.222591 -8.206283 194 POS 267.662804 23.247127 -7.927934 195 POS 262.450433 32.272277 -7.674202 196 POS 257.238503 41.294646 -7.444760 197 POS 251.993690 50.347747 -7.238108 198 POS 246.762246 59.383505 -7.056492 199 POS 241.542032 68.441362 -6.900271 200 POS 236.339412 77.492988 -6.769400 201 POS 231.139839 86.523335 -6.662744 202 POS 225.930951 95.573444 -6.580426 203 POS 220.735635 104.618025 
-6.523143 204 POS 215.544261 113.658685 -6.490394 205 POS 210.350327 122.700956 -6.481986 206 POS 205.159459 131.735505 -6.497923 207 POS 288.520497 5.204032 -9.170480 208 POS 283.299843 14.229781 -8.855715 209 POS 278.079780 23.248036 -8.565241 210 POS 272.860653 32.273705 -8.299135 211 POS 267.649767 41.298301 -8.057807 212 POS 262.411205 50.331000 -7.839225 213 POS 257.182799 59.378796 -7.645829 214 POS 251.957037 68.411117 -7.476803 215 POS 246.753235 77.459791 -7.333682 216 POS 241.554656 86.509371 -7.215298 217 POS 236.346852 95.544478 -7.120554 218 POS 231.149812 104.586756 -7.050948 219 POS 225.958811 113.631634 -7.006136 220 POS 220.765092 122.670975 -6.985460 221 POS 215.574184 131.715951 -6.989494 222 POS 210.382581 140.750850 -7.017600 223 POS 298.953468 5.204294 -9.857001 224 POS 293.728561 14.218382 -9.529742 225 POS 288.505535 23.257544 -9.227068 226 POS 283.281475 32.275235 -8.948564 227 POS 278.067188 41.301079 -8.695034 228 POS 272.827144 50.336734 -8.464336 229 POS 267.600450 59.355858 -8.258622 230 POS 262.380923 68.406696 -8.078133 231 POS 257.168712 77.430867 -7.922044 232 POS 251.967640 86.476933 -7.791341 233 POS 246.762098 95.530283 -7.684954 234 POS 241.563171 104.560869 -7.602848 235 POS 236.373441 113.597456 -7.545742 236 POS 231.179947 122.643407 -7.513065 237 POS 225.989315 131.685151 -7.504825 238 POS 220.798536 140.727742 -7.521000 239 FIF 215.603297 149.762750 -7.561100 240 POS 309.394888 5.204578 -10.568833 241 POS 304.165392 14.217045 -10.228797 242 POS 298.933687 23.243163 -9.913162 243 POS 293.710750 32.275395 -9.622605 244 POS 288.488660 41.300648 -9.356448 245 POS 283.262855 50.323803 -9.114380 246 POS 278.029621 59.359249 -8.896349 247 POS 272.801211 68.375285 -8.702714 248 POS 267.589952 77.423349 -8.534976 249 POS 262.380822 86.446658 -8.391294 250 POS 257.175223 95.496859 -8.272705 251 POS 251.987934 104.541895 -8.179454 252 POS 246.787326 113.571952 -8.109454 253 POS 241.594950 122.609516 -8.064452 254 POS 236.404576 131.653657 -8.044111 255 POS 231.216804 140.694437 -8.048185 256 POS 226.028648 149.734666 -8.076585 257 POS 319.850121 5.204885 -11.307161 258 POS 314.612103 14.217002 -10.953653 259 POS 309.378502 23.237608 -10.625208 260 POS 304.147696 32.261157 -10.321578 261 POS 298.923722 41.293825 -10.043004 262 POS 293.694739 50.321298 -9.788509 263 POS 288.474419 59.345236 -9.558944 264 POS 283.240527 68.372772 -9.352929 265 POS 278.012139 77.388951 -9.171448 266 POS 272.809994 86.429959 -9.016376 267 POS 267.593405 95.455660 -8.884476 268 POS 262.401352 104.508495 -8.778994 269 POS 257.211709 113.548234 -8.697715 270 POS 252.014008 122.580228 -8.640111 271 POS 246.820044 131.619729 -8.607269 272 POS 241.633103 140.662841 -8.599280 273 POS 236.444935 149.703018 -8.615484 274 POS 231.256874 158.743266 -8.656044 275 POS 330.318532 5.205216 -12.073034 276 POS 325.078996 14.216518 -11.705999 277 POS 319.828921 23.235072 -11.363253 278 POS 314.595961 32.254673 -11.046598 279 POS 309.365682 41.280926 -10.754879 280 POS 304.135730 50.314371 -10.487863 281 POS 298.912497 59.340436 -10.245747 282 POS 293.686595 68.356517 -10.027751 283 POS 288.459291 77.376609 -9.834108 284 POS 283.236538 86.392020 -9.665035 285 POS 278.024213 95.437310 -9.521578 286 POS 272.821097 104.465024 -9.402689 287 POS 267.630443 113.512587 -9.309372 288 POS 262.441442 122.550130 -9.240250 289 POS 257.245276 131.585800 -9.194988 290 POS 252.057751 140.623250 -9.174607 291 POS 246.867458 149.670397 -9.178725 292 POS 241.677540 158.713293 -9.207083 293 POS 340.797188 5.205575 
-12.867597 294 POS 335.552207 14.218306 -12.486083 295 POS 330.298035 23.224970 -12.129266 296 POS 325.054239 32.249735 -11.798497 297 POS 319.819860 41.272437 -11.493384 298 POS 314.589074 50.294871 -11.213326 299 POS 309.371478 59.320811 -10.958902 300 POS 304.150100 68.340233 -10.728680 301 POS 298.925421 77.362295 -10.522758 302 POS 293.676715 86.384136 -10.339693 303 POS 288.453718 95.402776 -10.182688 304 POS 283.254608 104.435641 -10.051900 305 POS 278.057515 113.471479 -9.945668 306 POS 272.870897 122.512376 -9.864567 307 POS 267.682215 131.551279 -9.807628 308 POS 262.490887 140.589317 -9.774842 309 POS 257.291318 149.636802 -9.766236 310 POS 252.099270 158.687269 -9.782557 311 POS 246.917324 167.729911 -9.823532 312 POS 351.280618 5.205963 -13.691992 313 POS 346.034733 14.214559 -13.295560 314 POS 340.778866 23.232078 -12.924261 315 POS 335.530289 32.234896 -12.579013 316 POS 330.285806 41.261496 -12.259553 317 POS 325.050884 50.288478 -11.965927 318 POS 319.828324 59.302608 -11.697953 319 POS 314.601034 68.336209 -11.454655 320 POS 309.373021 77.352030 -11.235694 321 FIF 304.144678 86.373300 -11.041361 322 POS 298.918599 95.390299 -10.871596 323 POS 293.697158 104.410906 -10.726660 324 POS 288.499421 113.443777 -10.607980 325 POS 283.304821 122.479109 -10.513948 326 POS 278.114386 131.526696 -10.444905 327 POS 272.923365 140.568834 -10.400054 328 POS 267.728162 149.605054 -10.379125 329 POS 262.526411 158.657932 -10.382758 330 POS 257.345257 167.706617 -10.411861 331 NON 252.162106 176.750314 -10.465030 332 POS 361.783127 5.206384 -14.549105 333 POS 356.530930 14.220045 -14.136500 334 POS 351.271491 23.232906 -13.749722 335 POS 346.019309 32.245241 -13.389574 336 POS 340.768571 41.247595 -13.055163 337 POS 335.525469 50.277241 -12.747047 338 POS 330.294391 59.301199 -12.465020 339 POS 325.070982 68.316073 -12.208444 340 POS 319.851287 77.335954 -11.977046 341 POS 314.623303 86.355671 -11.769773 342 POS 309.393767 95.374515 -11.587014 343 POS 304.167566 104.400012 -11.429209 344 POS 298.948599 113.421707 -11.296325 345 POS 293.750911 122.455935 -11.189651 346 POS 288.553193 131.494549 -11.107533 347 POS 283.363186 140.541106 -11.050585 348 POS 278.169146 149.589855 -11.017886 349 POS 272.969703 158.626925 -11.008879 350 POS 267.784582 167.675147 -11.025557 351 POS 262.605756 176.725473 -11.067107 352 POS 372.304572 5.206842 -15.440826 353 POS 367.046904 14.222657 -15.011204 354 POS 361.784479 23.239621 -14.608186 355 POS 356.519189 32.251191 -14.231495 356 POS 351.267336 41.266855 -13.882142 357 POS 346.014080 50.267603 -13.558386 358 POS 340.775422 59.295836 -13.261679 359 POS 335.555016 68.318525 -12.991660 360 POS 330.326002 77.329411 -12.745911 361 POS 325.099923 86.353919 -12.525580 362 POS 319.868254 95.368418 -12.329484 363 POS 314.637249 104.387907 -12.158275 364 POS 309.427165 113.415438 -12.013390 365 POS 304.208643 122.438262 -11.892373 366 POS 299.007270 131.474777 -11.797446 367 POS 293.812414 140.513049 -11.727514 368 POS 288.618081 149.564277 -11.682528 369 POS 283.434521 158.608349 -11.662486 370 POS 278.235112 167.644897 -11.665619 371 POS 273.048024 176.699716 -11.694696 372 POS 267.869980 185.752885 -11.748763 373 POS 382.851708 5.207341 -16.369647 374 POS 377.581275 14.225697 -15.921480 375 POS 372.310360 23.245507 -15.500886 376 POS 367.036487 32.265703 -15.107224 377 POS 361.774214 41.273042 -14.741197 378 POS 356.521510 50.285321 -14.402351 379 POS 351.271943 59.289571 -14.089663 380 POS 346.050117 68.310962 -13.805129 381 POS 340.826055 77.330498 
-13.545841 382 POS 335.598169 86.342713 -13.311368 383 POS 330.367518 95.360056 -13.101863 384 POS 325.135653 104.372163 -12.917097 385 POS 319.908601 113.396362 -12.757811 386 POS 314.691225 122.433475 -12.624339 387 POS 309.471704 131.456899 -12.515035 388 POS 304.271509 140.492776 -12.432075 389 POS 299.077457 149.538764 -12.374455 390 POS 293.895890 158.584419 -12.342237 391 POS 288.714729 167.629767 -12.334575 392 POS 283.515091 176.664566 -12.349843 393 POS 278.343637 185.717251 -12.392174 394 POS 393.424937 5.207884 -17.337325 395 POS 388.140778 14.229285 -16.869737 396 POS 382.863296 23.247053 -16.430889 397 POS 377.591794 32.264029 -16.020274 398 POS 372.323203 41.277959 -15.637166 399 POS 367.059580 50.285702 -15.281291 400 POS 361.805862 59.294163 -14.952744 401 POS 356.557594 68.302202 -14.650789 402 POS 351.332946 77.326055 -14.376874 403 POS 346.106481 86.351068 -14.128452 404 POS 340.874021 95.367865 -13.904804 405 POS 335.640990 104.385298 -13.706357 406 POS 330.406184 113.397881 -13.532738 407 POS 325.172714 122.420562 -13.384453 408 POS 319.951289 131.457285 -13.262356 409 POS 314.740071 140.488302 -13.165650 410 POS 309.544573 149.527366 -13.095060 411 POS 304.359611 158.566940 -13.049907 412 POS 299.178985 167.613923 -13.029974 413 POS 293.998533 176.660957 -13.034708 414 POS 288.801639 185.695643 -13.062499 415 POS 283.637442 194.747780 -13.117824 416 POS 404.081201 5.208477 -18.350559 417 POS 398.738956 14.225389 -17.858705 418 POS 393.449077 23.249638 -17.400470 419 POS 388.167805 32.268975 -16.971185 420 POS 382.890716 41.283512 -16.570017 421 POS 377.618145 50.294466 -16.196593 422 POS 372.348713 59.303226 -15.850413 423 POS 367.092929 68.311732 -15.532043 424 POS 361.850852 77.323843 -15.241188 425 POS 356.626281 86.349467 -14.978049 426 POS 351.396985 95.375716 -14.740313 427 POS 346.159685 104.396074 -14.527379 428 POS 340.921924 113.416595 -14.339816 429 POS 335.673666 122.435172 -14.176645 430 POS 330.441604 131.452755 -14.039819 431 POS 325.231465 140.491382 -13.930301 432 POS 320.024196 149.522514 -13.845693 433 POS 314.833968 158.562743 -13.787499 434 POS 309.652699 167.606302 -13.754882 435 POS 304.467332 176.653076 -13.746898 436 POS 299.297680 185.703512 -13.764947 437 POS 294.103927 194.734795 -13.805322 438 NON 288.953309 203.790235 -13.874466 439 FIF 409.474266 14.197003 -18.898949 440 POS 404.122544 23.218728 -18.415962 441 POS 398.819158 32.244354 -17.966249 442 POS 393.534784 41.265881 -17.546494 443 POS 388.254349 50.281413 -17.154860 444 POS 382.975029 59.289875 -16.790698 445 POS 377.695835 68.298438 -16.453664 446 POS 372.431600 77.312536 -16.144837 447 POS 367.188181 86.331367 -15.864379 448 POS 361.959983 95.356349 -15.611525 449 POS 356.724409 104.385028 -15.384085 450 POS 351.492977 113.402714 -15.182459 451 POS 346.256488 122.421077 -15.005980 452 POS 341.024728 131.441075 -14.855323 453 POS 335.785391 140.454435 -14.729134 454 POS 330.586206 149.495602 -14.632116 455 POS 325.380123 158.528685 -14.559383 456 POS 320.190631 167.570784 -14.513241 457 POS 315.006255 176.615530 -14.492562 458 POS 309.825442 185.662528 -14.497202 459 POS 304.656415 194.714323 -14.527854 460 NON 299.461566 203.749679 -14.580945 461 ETC 409.459957 32.248592 -18.998638 462 POS 404.147873 41.278355 -18.558247 463 POS 398.858533 50.299485 -18.148110 464 POS 393.573850 59.315593 -17.766165 465 POS 388.290327 68.325855 -17.411751 466 POS 383.007996 77.333682 -17.084608 467 POS 377.741454 86.343664 -16.785829 468 POS 372.508215 95.364327 -16.516812 469 POS 367.283208 
104.387676 -16.274875 470 POS 362.050127 113.406552 -16.058209 471 POS 356.818155 122.425394 -15.867516 472 POS 351.580840 131.442079 -15.702025 473 POS 346.344975 140.462498 -15.562338 474 POS 341.111212 149.483456 -15.448292 475 POS 335.907947 158.521088 -15.362587 476 POS 330.709526 167.562969 -15.302688 477 POS 325.519300 176.606974 -15.268721 478 NON 320.337006 185.652998 -15.260616 479 NON 315.158929 194.701275 -15.278089 480 NON 309.989652 203.755799 -15.321672 481 NON 304.801298 212.798938 -15.388569 482 FIF 404.225409 59.321591 -18.784086 483 POS 398.935554 68.338881 -18.411840 484 POS 393.644277 77.349185 -18.066866 485 POS 388.358798 86.355441 -17.749620 486 POS 383.089415 95.372899 -17.461089 487 POS 377.852793 104.397078 -17.202413 488 POS 372.624629 113.420212 -16.970917 489 POS 367.386042 122.443874 -16.764765 490 POS 362.154678 131.464561 -16.585215 491 POS 356.909760 140.486544 -16.430457 492 POS 351.668982 149.510741 -16.301908 493 POS 346.435663 158.528282 -16.199388 494 POS 341.232939 167.564785 -16.125586 495 POS 336.040651 176.610884 -16.078466 496 FIF 330.848478 185.657667 -16.056824 497 NON 325.667527 194.706751 -16.061564 498 NON 320.491841 203.758362 -16.092228 499 NON 315.326169 212.817798 -16.149454 500 NON 404.312382 77.368452 -19.089997 501 ETC 399.011224 86.384671 -18.754597 502 POS 393.721763 95.393281 -18.447464 503 POS 388.443108 104.409809 -18.168602 504 POS 383.204612 113.440147 -17.920656 505 POS 377.954604 122.477680 -17.698495 506 POS 372.712972 131.501382 -17.503138 507 POS 367.484156 140.523431 -17.335089 508 POS 362.238745 149.545467 -17.191674 509 POS 357.000577 158.568158 -17.074874 510 POS 351.768135 167.583942 -16.984134 511 POS 346.569076 176.623367 -16.922842 512 POS 341.374433 185.674350 -16.888103 513 NON 336.188661 194.721628 -16.879595 514 NON 331.012369 203.769815 -16.897523 515 NON 325.846164 212.829023 -16.942429 516 NON 320.686786 221.887768 -17.013540 517 FIF 393.926712 113.390039 -18.918818 518 POS 388.652210 122.444328 -18.678902 519 POS 383.386290 131.492023 -18.466484 520 POS 378.135810 140.515642 -18.281365 521 POS 372.908118 149.543582 -18.124855 522 POS 367.654716 158.566307 -17.992142 523 POS 362.419686 167.587405 -17.887145 524 POS 357.190057 176.613581 -17.808869 525 POS 351.991106 185.657962 -17.759959 526 POS 346.797988 194.710169 -17.737838 527 NON 341.616862 203.760230 -17.742532 528 NON 336.441256 212.812569 -17.773693 529 NON 331.277862 221.875049 -17.832287 530 NON 326.068172 230.969235 -17.915023 531 NON 383.551879 149.556603 -19.092997 532 OPT 378.312514 158.597059 -18.947493 533 OPT 373.055789 167.618750 -18.826166 534 FIF 367.822967 176.645359 -18.733541 535 NON 362.597006 185.675302 -18.667928 536 NON 357.397057 194.719958 -18.631448 537 NON 352.205528 203.771149 -18.622179 538 NON 347.032787 212.829122 -18.641002 539 NON 341.854447 221.889013 -18.685643 540 NON 336.655826 230.974664 -18.756303 541 GIF 295.568779 207.451460 -14.494325 542 GIF 357.223911 202.680763 -19.097053 543 TB0 23.104558 12.080089 -81.920000 544 TB1 425.250167 25.264515 -102.920000 545 TB2 390.667592 169.867103 -102.920000''' def parse_design_ss(ss): design = [] for line in ss.split('\n'): words = line.split('\t') design.append((words[1], float(words[2]), float(words[3]), float(words[4]))) return design design = parse_design_ss(design_ss) D = fits_table() D.device = np.array([d[0] for d in design]) D.x = np.array([d[1] for d in design]) D.y = np.array([d[2] for d in design]) D.z = np.array([d[3] for d in design]) # Drop tooling balls D = 
D[np.array([not(d.startswith('TB')) for d in D.device])] plt.scatter(D.x, D.y, c=D.z, s=1); #plt.colorbar() plt.scatter(xyz[:,0], xyz[:,1], c=xyz[:,2], edgecolors='k'); plt.axis('equal'); plt.plot(tp[:,0], tp[:,1], 'k.', label='POS') # + plt.figure(figsize=(8,8)) plt.clf() plt.subplots_adjust(left=0.08, bottom=0.08, top=0.95, right=0.99) P = D[D.device == 'POS'] vp = np.vstack((P.x, P.y)).T tp = np.dot(R, (vp-vc).T).T + pc # gfa size (mountain) h,w = 1032, 2048 pixx = np.array([1, 1, w, w, 1]) pixy = np.array([1, h, h, 1, 1]) ccdx = t.mm_x_coeffs[0] + t.mm_x_coeffs[1]*pixx + t.mm_x_coeffs[2]*pixy ccdy = t.mm_y_coeffs[0] + t.mm_y_coeffs[1]*pixx + t.mm_y_coeffs[2]*pixy plt.plot(ph[:,0], ph[:,1], 'ro', label='GIF1') plt.plot(pi[:,0], pi[:,1], 'co', label='GIF2') plt.plot(th[:,0], th[:,1], 'kx', label='FIF'); plt.plot(tp[:,0], tp[:,1], 'k.', label='POS') plt.plot(ccdx, ccdy, 'b-', label='GFA CCD bounds') plt.legend() plt.axis('equal'); plt.xlabel('"x" (mm)') plt.ylabel('"y" (mm)') plt.title('GFA/GIF/FIF tiedown, petal 4') plt.savefig('/tmp/1.png') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] azdata_cell_guid="5353c044-9920-478b-b1f8-e98119b73a21" # Migrate a Database to a Azure SQL Managed Instance # ============================================= # # Description # ----- # # Copies the database from an on-premises SQL instance to an Azure SQL Managed Instance. # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # #### 1. Importing libraries and data import pandas as pd import numpy as np import seaborn as sns import matplotlib import matplotlib.pyplot as plt import os import sklearn from sklearn.cluster import KMeans # Here is where you import the k-means algorithm from scikit-learn. import pylab as pl # PyLab is a convenience module that bulk imports matplotlib. import openpyxl # + # This option ensures the graphs you create are displayed in your notebook without the need to "call" them specifically. # %matplotlib inline # - path = r'C:\Users\ df_airbnb = pd.read_csv(os.path.join(path, '2 - Airbnb - Data', 'Prepared Data', 'df_airbnb_v2_rev.csv')) # df_airbnb.dtypes df_airbnb.columns # Dropping unnamed column df_airbnb1 = df_airbnb.drop(columns = ['Unnamed: 0','Unnamed: 0.1']) # #### 2. Elbow technique num_cl = range(1, 10) # Defines the range of potential clusters in the data. kmeans = [KMeans(n_clusters=i) for i in num_cl] # Defines k-means clusters in the range assigned above. # + score = [kmeans[i].fit(df_airbnb2).score(df_airbnb2) for i in range(len(kmeans))] # Creates a score that represents # a rate of variation for the given cluster option. score # + # Plot the elbow curve using PyLab. pl.plot(num_cl,score) pl.xlabel('Number of Clusters') pl.ylabel('Score') pl.title('Elbow Curve') pl.show() # - # ##### In the Airbnb dataset, there's a large jump from two to three on the x-axis, but after that, the curve straightens out. This means that the optimal count for your clusters is three. # + # Create the k-means object. kmeans = KMeans(n_clusters = 3, n_jobs = -1) # + # Fit the k-means object to the data. 
kmeans.fit(df_airbnb2)
# -

df_airbnb2['clusters'] = kmeans.fit_predict(df_airbnb2)

df_airbnb2.head()

df_airbnb2['clusters'].value_counts()

# +
# Plot the clusters for the "price" and "min nights" variables.
plt.figure(figsize=(12,8))
ax = sns.scatterplot(x=df_airbnb2['min_nights'], y=df_airbnb2['price'], hue=kmeans.labels_, s=100)
# Here, you're subsetting `X` for the x and y arguments to avoid using their labels.
# `hue` takes the value of the attribute `kmeans.labels_`, which is the result of running the k-means algorithm.
# `s` represents the size of the points you want to see in the plot.

ax.grid(False)  # This removes the grid from the background.
plt.xlabel('minimum nights')  # Label x-axis.
plt.ylabel('price per night')  # Label y-axis.
plt.show()
# -

# #### There is no visible linear connection between the variables. Most of the dots are concentrated between the values 0 and 150, meaning that most of the bookings happen for shorter periods of time (confirming that Airbnb is mostly used for short stays, even if there is a pattern of stays lasting more than 1 month).

# +
# Plot the clusters for the "nr reviews" and "price" variables.
plt.figure(figsize=(12,8))
ax = sns.scatterplot(x=df_airbnb2['nr_reviews'], y=df_airbnb2['price'], hue=kmeans.labels_, s=100)

ax.grid(False)
plt.xlabel('number of reviews')
plt.ylabel('price')
plt.show()
# -

df_airbnb2.loc[df_airbnb2['clusters'] == 2, 'cluster'] = 'dark purple'
df_airbnb2.loc[df_airbnb2['clusters'] == 1, 'cluster'] = 'purple'
df_airbnb2.loc[df_airbnb2['clusters'] == 0, 'cluster'] = 'pink'

df_airbnb2.groupby('cluster').agg({'price':['mean', 'median'],
                                   'min_nights':['mean', 'median'],
                                   'nr_reviews':['mean', 'median'],
                                   'availability_365':['mean', 'median']})

# ##### All the clusters have a minimum stay of 6 nights and an average price above 65 euros. The mean availability_365 across the clusters is between 86 and 132, but if we look at the median of the dark purple and pink clusters, the median is "0", meaning that there are a lot of accommodations published online but not actually available.

# ##### Since the listings and data were collected in March 2021, it might be that, due to the pandemic, the accommodation description was still online but the calendar was not available because of the local lockdown restrictions.
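# A small illustrative follow-up to the availability observation above (a sketch, assuming the `cluster` and `availability_365` columns created in the cells above): the share of listings in each cluster that report zero availability over the year.

# +
# Fraction of listings per cluster with availability_365 == 0.
df_airbnb2['availability_365'].eq(0).groupby(df_airbnb2['cluster']).mean()
# -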
df_airbnb2.to_excel(os.path.join(path,'2 - Airbnb - Data','Prepared Data','berlin_cane_cluster.xlsx')) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt # %matplotlib inline m,n =200,3 adv = (100+100*np.random.rand(m,n)).round(2) adv_df = pd.DataFrame(adv,columns=['TV','Radio','Newspaper']) adv_df.head() sales = (1 + 2*adv[:,0:1] + 3*adv[:,1:2] + 4*adv[:,2:3] + np.random.randn(m,1)).round(2) # np.concatenate((adv,sales)) sales[:5] plt.plot(adv[:, 2:3], sales, 'b.') plt.xlabel('Newspaper') plt.ylabel('Sales') plt.show() adv X = np.c_[np.ones((m,1)),adv] X[:5] sales.shape y = sales theta_hat = (np.linalg.inv(X.T.dot(X))).dot(X.T.dot(y)) theta_hat # ## Predicted value example = X[100].dot(theta_hat) example # ## Actual value sales[100] from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(adv,y) lin_reg.coef_ lin_reg.intercept_ lin_reg.predict([adv[100]]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- # # Propogate Errors # # This notebook takes you through the steps of how to propogate errors for through the neural network model # # * required packages: `numpy h5py keras` # * data files: # - starnet_cnn.h5 # - mean_and_std.npy # - test_data.h5 # - apStar_combined_main.h5 # + import numpy as np from keras.models import load_model import h5py import tensorflow as tf import time import keras.backend as K import subprocess datadir= "" # - # Define path variables for your keras model, denormalization data, and test data model_path = datadir + 'starnet_cnn.h5' denormalization_path = datadir + 'mean_and_std.npy' test_data_path = datadir + 'test_data.h5' # **Define functions to:** # # 1. compute the jacobian matrix # 2. compute the covariance # 3. 
compute the variance # # Note: these functions can be combined into one, but they are separated here to allow users to extract intermediate results for analysis # + def calc_jacobian(model,spectra,denormalize=None): if denormalize==None: y_list = tf.unstack(model.output) else: y_list = tf.unstack(denormalize(model.output[0])) J = [tf.gradients(y, model.input) for y in y_list] jacobian_func = [K.function([model.input, K.learning_phase()], j_) for j_ in J] jacobian = np.array([jf([spectra,False]) for jf in jacobian_func])[:,0,0,:] ''' for i in range(len(spectra)): jacobian = np.array([jf([spectra,False]) for jf in jacobian_func])[:,:,0,:,0] np.save('temp/temp_jacobian_'+str(i)+'.npy',jacobian) if i%int(0.1*len(spectra))==0: print('Jacobians completed: '+str(i)) for i in range(len(spectra)): if i==0: jacobian = np.load('temp/temp_jacobian_'+str(i)+'.npy') else: jacobian = np.concatenate((jacobian,np.load('temp/temp_jacobian_'+str(i)+'.npy'))) subprocess.check_output(['rm','temp/temp_jacobian_'+str(i)+'.npy']) ''' return jacobian def calc_covariance(model,spectra,err_spectra,denormalize=None): jac_matrix = calc_jacobian(model,spectra,denormalize) err_spectra[err_spectra > 6] = 0 jac_matrix = np.nan_to_num(jac_matrix) covariance = np.einsum('ij,jl->il',(jac_matrix*(err_spectra**2)),jac_matrix.T) return covariance def calc_variance(model,spectra,err_spectra,denormalize=None): covariance = calc_covariance(model,spectra,err_spectra,denormalize) return np.diagonal(covariance, offset=0) # - # ** Create a denormalization function ** # + mean_and_std = np.load(denormalization_path) mean_labels = mean_and_std[0] std_labels = mean_and_std[1] num_labels = mean_and_std.shape[1] def denormalize(lb_norm): return ((lb_norm*std_labels)+mean_labels) # - # **Load the StarNet model** model = load_model(model_path) # ** Load Test Data ** # # The error propagation technique takes some time, so for the purpose of example, we will only use the first 100 spectra in the test set # + num_test = 300 f = h5py.File(test_data_path, 'r') test_spectra = f['spectrum'] test_err_spectra = f['error_spectrum'] test_ap_ids = f['Ap_ID'][0:num_test] test_labels = np.column_stack((f['TEFF'][0:num_test],f['LOGG'][0:num_test],f['FE_H'][0:num_test])) print('Test set contains ' + str(len(test_ap_ids))+' stars') # - # ** Compute predictions and errors for the test set ** # # **Steps:** # 1. compute predictions # # \begin{equation} # h_(\textbf{x},\textbf{W}) = h_{1}(\textbf{x},\textbf{W}),...,h_{j}(\textbf{x},\textbf{W})) # \end{equation} # # j = 3 # # 2. compute jacobian matrix # # \begin{equation} # Jac = \frac{\partial h_{j}(\textbf{x},\textbf{W})}{\partial \textbf{x}} = (\frac{\partial h_{j}(\textbf{x},\textbf{W})}{\partial x_{1}},...,\frac{\partial h_{j}(\textbf{x},\textbf{W})}{\partial x_{n}}) # \end{equation} # # j = 1,...,3 # # n = 7214 # # 3. compute covariance matrix # # \begin{equation} # Cov = Jac \times \Delta \textbf{x}^2 \times Jac^T # \end{equation} # # # 4. obtain propagated variance due to error spectrum from the diagonal of the covariance matrix # # \begin{equation} # \sigma_{\mathrm{prop}}^2 \approx diag(Cov) # \end{equation} # # # 5. determine which region of the label-space the labels are within to obtain the intrinsic scatter in the corresponding bin. These values have been predetermined from training StarNet on synthetic data and applying it to a test set of synthetic data # # \begin{equation} # \sigma_{\mathrm{int}} # \end{equation} # # 6. 
combine propagated error with the intrinsic scatter term # # \begin{equation} # \Delta h_{j} = \sqrt{\sigma_{\mathrm{prop}}^2 + \sigma_{\mathrm{int}}^2} # \end{equation} variance = np.zeros((len(test_labels),3)) predictions = np.zeros(test_labels.shape) print('Making predictions and computing propagated variance for '+str(len(test_labels))+' spectra') time_start = time.time() for i in range(len(test_labels)): spectrum = test_spectra[i:i+1] err_spectrum = test_err_spectra[i:i+1] variance[i] = calc_variance(model,spectrum,err_spectrum,denormalize) predictions[i] = denormalize(model.predict(spectrum)) if i%int(0.1*len(test_labels))==0: print('\n'+str(i+1)+' completed.\n'+str(time.time()-time_start)+' seconds elapsed.') print('\nAll '+str(i+1)+' completed.\n'+str(time.time()-time_start)+' seconds elapsed.') f.close() # ** Create intrinsic scatter arrays (predetermined) ** scatter_terms = np.array([[ 2.85209088e+01, 2.30193645e+01, 2.10676180e+01, 1.91357425e+01, 1.72090644e+01, 1.58693655e+01, 1.52684102e+01, 1.42387830e+01, 1.64239293e+01, 2.18981017e+01], [ 3.86073715e-02, 3.04916170e-02, 2.44161726e-02, 2.25093310e-02, 2.35929675e-02, 2.36922221e-02, 2.58764773e-02, 2.80946934e-02, 3.34534390e-02, 3.56641714e-02], [ 3.90793092e-02, 2.43149947e-02, 2.25292707e-02, 1.81974298e-02, 1.58638867e-02, 1.46142515e-02, 1.36038125e-02, 1.25392930e-02, 1.24740228e-02, 1.53680421e-02]]) scatter_ranges = np.array([[ 3.50000000e+03, 3.95000000e+03, 4.40000000e+03, 4.85000000e+03, 5.30000000e+03, 5.75000000e+03, 6.20000000e+03, 6.65000000e+03, 7.10000000e+03, 7.55000000e+03, 8.00000000e+03], [ 0.00000000e+00, 5.00000000e-01, 1.00000000e+00, 1.50000000e+00, 2.00000000e+00, 2.50000000e+00, 3.00000000e+00, 3.50000000e+00, 4.00000000e+00, 4.50000000e+00, 5.00000000e+00], [ -2.50000000e+00, -2.20000000e+00, -1.90000000e+00, -1.60000000e+00, -1.30000000e+00, -1.00000000e+00, -7.00000000e-01, -4.00000000e-01, -1.00000000e-01, 2.00000000e-01, 5.00000000e-01]]) # ** assign each spectrum an intrinsic scatter term depending on which region of the parameter-space the prediction lies ** # + scatter_errs = np.empty(test_labels.shape) for i in range(scatter_terms.shape[0]): for j in range(scatter_terms.shape[1]): current_min = scatter_ranges[i,j] current_max = scatter_ranges[i,j+1] current_scatter = scatter_terms[i,j] index = np.where((test_labels[:,i]>current_min)&(test_labels[:,i]", apart["name"]) for k in keylist: if k != "name": the_value = apart[k] if k == "resource": the_value = apart[k]["resourceType"] + "/" + apart[k]["id"] print(k, "-->", the_value) else: print(k, "-->", the_value) else: print("Evaluation failure, return code -->", resp.status_code) # + #Run CQL expression against every patient in the patient_list def run_cql_over_patientlist(cql_expression, patient_list): for p in patient_list: print("Running against", p) run_inline_cql(cql_expression, p) # + cql_expression = "34 + 12" subject = "Patient/17dc3b9c5d0-e4448978-ee07-4a33-adee-98542009ec60" run_inline_cql(cql_expression, subject) # + cql_expression = "'dog' + 'house'" subject = "Patient/17dc3b9c5d0-e4448978-ee07-4a33-adee-98542009ec60" run_inline_cql(cql_expression, subject) # - cql_expression = "34 + 56" subject_list = patient_library["patientsTwo"] run_cql_over_patientlist(cql_expression, subject_list) cql_expression = "[Patient] p where p.gender = 'male'" subject = "Patient/17dc3b9caba-658e8e4c-8503-4a9f-ad12-754c9f2022f8" run_inline_cql(cql_expression, subject) cql_expression = "[Patient] p where p.gender = 'male'" 
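# The same retrieve expression is evaluated below once per patient in the "patientsTwo" list;
# only subjects whose gender is 'male' return a matching Patient resource.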
run_cql_over_patientlist(cql_expression, patient_library["patientsTwo"]) # ### Create and evaluate Library resources that contain cql code # + # Helper that will encode cql into base64 import base64 def encode_to_base64(cql_source): message_bytes = cql_source.encode('ascii') base64_bytes = base64.b64encode(message_bytes) base64_cql = base64_bytes.decode('ascii') return base64_cql # + # Encode the cql source cql_source = '''library "PatientsByAgeGender" version '1.0.0' using FHIR version '4.0.1' context Patient define "Patient is Male": Patient.gender.value = 'male' define "Patient is Female": Patient.gender.value = 'female' define "Initial Population": "Patient is Male" define "OlderThan40": AgeInYears() >= 40 define "OlderMales": "OlderThan40" and "Patient is Male"''' encoded_cql = encode_to_base64(cql_source) print(encoded_cql) # + # Instantiate a library resource with the base64 encoded cql content # Be sure the name/version of the resource matches the name/version of the cql source above library_resource = '''{ "resourceType": "Library", "id": "example2", "name": "PatientsByAgeGender", "version": "1.0.0", "title": "Example library 2", "status": "active", "type": { "coding": [ { "code": "logic-library" } ] }, "date": "2021-12-13", "description": "Example 2 logic", "content": [ { "contentType": "text/cql", "data": "ENCODED_CQL" } ] }''' library_resource = library_resource.replace("ENCODED_CQL", encoded_cql) # - library_resource # + #Post a library resource from os import listdir from os import path headers = {'Content-Type': 'application/json'} response = requests.post(fhir_server_url + "/Library", auth=(username, password), verify=False, data=library_resource, #data=file.read(), headers=headers) library_id = response.headers["location"].split("v4/")[1].split("/_")[0] print(library_id) # + # Get a list of ALL Libraries in fhir server #https://dlr-ext-fhir.wh-health-patterns.dev.watson-health.ibm.com/fhir/Library def get_library_list(): liblist = [] resp = requests.get(fhir_server_url + "/Library", auth=(username, password), verify=False) print("Status code", resp.status_code) resp_json = resp.json() print("Found", resp_json["total"], "libraries") for e in resp_json["entry"]: libid = e["fullUrl"].split("v4/")[1] #print(pid) liblist.append(libid) return liblist library_list = get_library_list() print(library_list) # - def run_library_evaluation(library_id, subject, results_table): url = fhir_server_url + "/" + library_id + "/$evaluate" #print(url) query_params = { "subject": subject } resp = requests.get(url, auth=(username, password), params=query_params, verify=False) if resp.status_code == 200: print("Evaluation successful") result_json = resp.json() print(result_json.keys()) return_stuff = None for p in result_json["parameter"]: if p["name"] == "return": return_stuff = p["part"] break if return_stuff: if results_table["headerrow"] == None: #create header row first by getting all parts in order headerrow = [] for col in return_stuff: headerrow.append(col["name"]) results_table["headerrow"] = headerrow data_row = [] data_row.append(subject) for apart in return_stuff: keylist = apart.keys() #print(keylist) the_value = None for k in keylist: if "value" in k: the_value = apart[k] print(apart["name"], "-->", the_value) data_row.append(the_value) break #print(apart["name"], "-->", the_value) #data_row.append(the_value) results_table["datarows"].append(data_row) else: print("Evaluation failure, return code -->", resp.status_code) results_table = { "headerrow":None, "datarows": [] } subject = 
"Patient/17dc3b9c5d0-e4448978-ee07-4a33-adee-98542009ec60" library_id = "Library/17dc3bc5e89-8fdb55be-078c-4a52-9a0d-3f425e6c242f" run_library_evaluation(library_id, subject, results_table) results_table = { "headerrow":None, "datarows": [] } subject = "Patient/17dc3b9caba-658e8e4c-8503-4a9f-ad12-754c9f2022f8" run_library_evaluation(library_id, subject, results_table) def run_library_over_patientlist(library_id, patient_list, results_table): for p in patient_list: print("Running against", p) run_library_evaluation(library_id, p, results_table) # + results_table = { "headerrow":None, "datarows": [] } run_library_over_patientlist(library_id, patient_library["patientsOne"] + patient_library["patientsTwo"], results_table) # - results_table.keys() # + import csv def create_csv(data_table, filename): with open(filename, 'w', newline='') as f: writer = csv.writer(f) writer.writerow(data_table["headerrow"]) for arow in data_table["datarows"]: writer.writerow(arow) print(filename, "created") create_csv(results_table, "results.csv") # + #Use pandas dataframe to visualize the results import pandas as pd data = pd.read_csv("results.csv") data.head(20) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # name: python3 # --- # + [markdown] id="view-in-github" colab_type="text" # Open In Colab # + id="NYvEVdxZjZpV" # #!curl -o myPic.png https://d1qsx5nyffkra9.cloudfront.net/sites/default/files/article-image/eminence-organics-acne-face-mapping.jpg import cv2 as cv from matplotlib import pyplot as plt import numpy as np import dlib from google.colab.patches import cv2_imshow # + id="HUemp2wholSF" colab={"base_uri": "https://localhost:8080/", "height": 229} outputId="9c74b39b-7b67-401b-9e57-dd55be99b646" detector = dlib.get_frontal_face_detector() predictor = dlib.shape_predictor("drive/My Drive/Colab Notebooks/shape_predictor_68_face_landmarks.dat") def createBox(img, points, scale = 5, masked= False, cropped = True): if masked: mask = np.zeros_like(img) mask = cv.fillPoly(mask, [points], (255,255,255)) img = cv.bitwise_and(img, mask) if cropped: cropBox = cv.boundingRect(points) x,y,w,h = cropBox imgCrop = img[y:y+h,x:x+w] imgCrop = cv.resize(imgCrop, (0,0), None, scale, scale) return imgCrop else: return mask img = cv.imread("drive/My Drive/Memories/myPic.jpg", 1) img = cv.resize(img, (0,0), None, 2, 2) imgOriginal= img.copy() imgGray = cv.cvtColor(img, cv.COLOR_BGR2GRAY) faces = detector(imgGray) for face in faces: x1,y1 = face.left(),face.top() x2,y2 = face.right(),face.bottom() cv.rectangle(imgOriginal, (x1,y1),(x2,y2), (0,255,0)) landmarks = predictor(imgGray, face) myPoints = [] for n in range(68): x = landmarks.part(n).x y = landmarks.part(n).y myPoints.append([x,y]) #cv.circle(imgOriginal, (x,y), 3, (255,255,255), cv.FILLED) cv.putText(imgOriginal, str(n), (x,y-4), cv.FONT_HERSHEY_COMPLEX_SMALL, 0.6, (0,0,0), 1) print(myPoints) myPoints= np.array(myPoints) lips = createBox(img, myPoints[42:48], masked = True, cropped = False) colorLips = np.zeros_like(lips) colorLips[:] = 0,0,255 colorLips = cv.bitwise_and(lips, colorLips) colorLips = cv.GaussianBlur(colorLips, (7,7), 50) colorLips = cv.addWeighted(img, 1, colorLips, 0.4, 0) cv2_imshow(colorLips) cv2_imshow(result) cv2_imshow(img) cv2_imshow(imgOriginal) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # 
display_name: Python 3.9.7 ('base') # language: python # name: python3 # --- # + import dongsw.extension.pandas as pd from dongsw.extension.sklearn.metrics import mean_squared_error, root_mean_squared_error_ data = pd.DataFrame({'1':[1,2,3], '2':[2,3,4]}) data # - pd.rolling_apply_(data, 2, lambda x: x.iloc[1]) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [default] # language: python # name: python3 # --- # + # %matplotlib inline # %config InlineBackend.figure_format = 'retina' # %load_ext autoreload # %autoreload 2 import numpy as np import matplotlib.pyplot as plt import matplotlib.pylab as pylab from simplified_monorotor import Monorotor import plotting import testing import trajectories pylab.rcParams['figure.figsize'] = 10,10 # - # # PD controller # # #### TODO 1 - Implement PD controller # # Implement the PD Controller math in `thrust_control`: # # $$ # \begin{align} # e &= z_{\text{target}} - z_{\text{actual}} \\ # \dot{e} &= \dot{z}_{\text{target}} - \dot{z}_{\text{actual}} \\ # \bar{u}_1 &= k_p e + k_d \dot{e} \\ # u_1 &= m(g - \bar{u}_1) # \end{align} # $$ # + class PDController: def __init__(self, k_p, k_d, m): self.k_p = k_p self.k_d = k_d self.vehicle_mass = m self.g = 9.81 def thrust_control(self, z_target, z_actual, z_dot_target, z_dot_actual, z_dot_dot_ff=0.0): # IGNORE this for now. # # TODO # implement PD controller math shown above # and return a commanded thrust. # # You can ignore the z_dot_dot_ff parameter for now. e = z_target - z_actual e_dot = z_dot_target - z_dot_actual u_bar = self.k_p * e + self.k_d * e_dot u = self.vehicle_mass * (self.g - u_bar) return u testing.pd_controller_test(PDController, feed_forward=False) # - # #### TODO 2 - Adjust parameters # # Start by running the code below with K_D = 0.0. This will remind you what a P-controlled trajectory looks like. Then try slowly increasing K_D to 0.5, 1.0, 2.0, etc... # # What value of K_D gives a reasonable trajectory? # # Is there a problem with setting K_D too high? 
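# One way to answer these questions empirically is to sweep a few K_D values and overlay
# the resulting step responses (a sketch reusing the `Monorotor` and `PDController` defined
# above; the gains, mass error and K_D values below are arbitrary choices):

# +
total_time, dt = 3.0, 0.001
t = np.linspace(0.0, total_time, int(total_time / dt))
z_path = -np.ones(t.shape[0])          # same constant target used in the cell below

plt.figure()
for k_d in [0.0, 0.5, 1.0, 2.0, 8.0]:
    drone = Monorotor()
    controller = PDController(20.0, k_d, drone.m * 1.5)   # same perceived-mass error as below
    z_history = []
    for z_target in z_path:
        u = controller.thrust_control(z_target, drone.z, 0.0, drone.z_dot)
        drone.thrust = u
        drone.advance_state(dt)
        z_history.append(drone.z)
    plt.plot(t, z_history, label='K_D = %.1f' % k_d)
plt.plot(t, z_path, 'k--', label='target')
plt.gca().invert_yaxis()               # flip the axis so that gaining altitude points up
plt.legend()
plt.show()
# -

# In general, a moderate K_D damps the oscillation left by the pure P controller, while a very
# large K_D overdamps the response and amplifies any noise in the velocity estimate.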
# + MASS_ERROR = 1.5 K_P = 20.0 K_D = 5 # preparation drone = Monorotor() perceived_mass = drone.m * MASS_ERROR controller = PDController(K_P, K_D, perceived_mass) # generate trajectory total_time = 3.0 dt = 0.001 t=np.linspace(0.0,total_time,int(total_time/dt)) z_path= -np.ones(t.shape[0]) z_dot_path = np.zeros(t.shape[0]) # run simulation history = [] for z_target, z_dot_target in zip(z_path, z_dot_path): z_actual = drone.z z_dot_actual = drone.z_dot u = controller.thrust_control(z_target, z_actual, z_dot_target, z_dot_actual) drone.thrust = u drone.advance_state(dt) history.append(drone.X) # generate plots z_actual = [h[0] for h in history] plotting.compare_planned_to_actual(z_actual, z_path, t) # - # [Solution](/notebooks/PD%20Controller%20Solution.ipynb) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- # + a4_width_mm = 210 a4_height_mm = 297 letter_width_in = 8.5 letter_height_in = 11 inch_to_mm = 25.4 print("A4 纸张的尺寸是 {} 毫米 × {} 毫米,".format( a4_width_mm, a4_height_mm)) print(" 或 {} 英寸 x {} 英寸。".format( a4_width_mm/inch_to_mm, a4_height_mm/inch_to_mm)) print(" 约 {:2.2f} 英寸 x {:2.2f} 英寸。".format( a4_width_mm/inch_to_mm, a4_height_mm/inch_to_mm)) print("美国信纸的尺寸是 {} 英寸 × {} 英寸,".format( letter_width_in, letter_height_in)) print(" 或 {:3.1f} 毫米 x {:3.1f} 毫米。".format( letter_width_in * inch_to_mm, letter_height_in * inch_to_mm)) # - from math import sqrt print("√2 = ", sqrt(2)) print("2 的平方根 = ", 2**.5) #%% A系列纸张的理论尺寸 w = 1_000 / 2**.25 #下划线"_"分隔仅为阅读方便,无实际编程作用 h = 1_000 * 2**.25 # 2**.25是2的4次方根即2的0.25次幂 print("A系列纸张的理论尺寸:") for i in range(0, 11): # {:>3} 表示 3 个字符向右看齐。宽高四舍五入到整数 print("{:>3}: {:3.0f} 毫米 x {:4.0f} 毫米".format('A'+str(i), w, h), end=' ') # {:6.2f}: 包括小数点,浮点数展示 6 个字符向右看齐,小数点后面保留 2 位数 print(" {:6.2f} 毫米 x {:7.2f} 毫米".format(w, h), end=' ') # 用 int() 丢弃小数部分,宽高仅保留整数部分 print(" {:3.0f}毫米 x {:4.0f}毫米".format(int(w), int(h))) tmp = w #暂存原宽度备作下一个高度 w = h/2 #新宽度为原高度的一半 h = tmp #新高度为原宽度 #%% B0 至 B10 系列纸张的理论尺寸 = ISO 216 标准尺寸 w = 1000 h = 1414 print("B系列纸张的理论和 ISO 216 标准尺寸:") i = 0 while i < 11: b_i = "{:>3} {:-4}开".format('B'+str(i), 2**i) print("{}: {:4.0f}毫米 x {:4.0f}毫米".format(b_i, int(w), int(h))) tmp = w #暂存原宽度备作下一个高度 w = h/2 #新宽度为原高度的一半 h = tmp #新高度为原宽度 i += 1 # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # + [markdown] papermill={"duration": 0.01538, "end_time": "2021-02-01T04:58:21.655913", "exception": false, "start_time": "2021-02-01T04:58:21.640533", "status": "completed"} tags=[] id="OFvFuyyiPrqH" # ## Import Libraries # + papermill={"duration": 2.87553, "end_time": "2021-02-01T04:58:24.545749", "exception": false, "start_time": "2021-02-01T04:58:21.670219", "status": "completed"} tags=[] id="YxtM_LwuPrqQ" import pandas as pd import numpy as np import gensim from gensim.utils import simple_preprocess from gensim.parsing.porter import PorterStemmer from gensim.models import Word2Vec import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report,confusion_matrix import matplotlib.pyplot as plt import seaborn as sns # + colab={"base_uri": "https://localhost:8080/"} id="8USJxBg7Q4dj" 
outputId="4ae5975e-e936-436d-cbcb-8d588b26e705" from google.colab import drive drive.mount('/content/gdrive') # + colab={"base_uri": "https://localhost:8080/"} id="5y3wKiQYRETn" outputId="9b968371-94f5-4e55-e307-fa6b2c08062d" # !ls gdrive/MyDrive/ML/ # + papermill={"duration": 1.031796, "end_time": "2021-02-01T04:58:25.592794", "exception": false, "start_time": "2021-02-01T04:58:24.560998", "status": "completed"} tags=[] id="0jU-xB5kPrqS" # Read dataset train_df = pd.read_csv("gdrive/MyDrive/ML/Train.csv") # + papermill={"duration": 0.036974, "end_time": "2021-02-01T04:58:25.644824", "exception": false, "start_time": "2021-02-01T04:58:25.607850", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 195} id="q7YBAfySPrqS" outputId="fdb6e36e-58dd-4712-f4bf-223af175a505" # first few lines train_df.head() # + papermill={"duration": 0.023942, "end_time": "2021-02-01T04:58:25.684126", "exception": false, "start_time": "2021-02-01T04:58:25.660184", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/"} id="YrYS7R6yPrqT" outputId="2bdca47e-0678-428c-86e1-13d4abe6bea1" # shape of dataset train_df.shape # + [markdown] papermill={"duration": 0.015097, "end_time": "2021-02-01T04:58:25.714778", "exception": false, "start_time": "2021-02-01T04:58:25.699681", "status": "completed"} tags=[] id="SKlV_y2LPrqU" # Let's see weather our dataset is balanced or imbalanced # + papermill={"duration": 0.158348, "end_time": "2021-02-01T04:58:25.890994", "exception": false, "start_time": "2021-02-01T04:58:25.732646", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 350} id="vQw-165APrqU" outputId="3f19a37f-8519-4706-8cb5-18e12ca3ff76" sns.countplot(train_df['label']) # + [markdown] papermill={"duration": 0.016548, "end_time": "2021-02-01T04:58:25.924439", "exception": false, "start_time": "2021-02-01T04:58:25.907891", "status": "completed"} tags=[] id="4ZuDWFbgPrqV" # From above graph we can see that our dataset is balanced dataset.So that's great # + [markdown] papermill={"duration": 0.016561, "end_time": "2021-02-01T04:58:25.957738", "exception": false, "start_time": "2021-02-01T04:58:25.941177", "status": "completed"} tags=[] id="ZKT5srlQPrqV" # ## Tokenization # + [markdown] papermill={"duration": 0.016597, "end_time": "2021-02-01T04:58:25.990816", "exception": false, "start_time": "2021-02-01T04:58:25.974219", "status": "completed"} tags=[] id="l20cUX_zPrqV" # **Tokenization** is the process of tokenizing or splitting a string, text into a list of tokens. # + papermill={"duration": 35.389385, "end_time": "2021-02-01T04:59:01.396866", "exception": false, "start_time": "2021-02-01T04:58:26.007481", "status": "completed"} tags=[] id="Ym0wvgv1PrqV" train_df['tokenized_text'] = [simple_preprocess(line, deacc=True) for line in train_df['text']] # + [markdown] papermill={"duration": 0.016473, "end_time": "2021-02-01T04:59:01.430797", "exception": false, "start_time": "2021-02-01T04:59:01.414324", "status": "completed"} tags=[] id="iQywj9ajPrqW" # ## Stemming # + [markdown] papermill={"duration": 0.016586, "end_time": "2021-02-01T04:59:01.464066", "exception": false, "start_time": "2021-02-01T04:59:01.447480", "status": "completed"} tags=[] id="t8ZTECI9PrqW" # **Stemming** is the process of reducing a word to its word stem that affixes to suffixes and prefixes or to the roots of words known as a lemma.The input to the stemmer is tokenized words. 
# A stemming algorithm reduces the words “chocolates”, “chocolatey”, “choco” to the root word, “chocolate” # + papermill={"duration": 59.843074, "end_time": "2021-02-01T05:00:01.323932", "exception": false, "start_time": "2021-02-01T04:59:01.480858", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 402} id="lGSH_yThPrqX" outputId="c1036984-ace7-4ebe-a279-a38bd3eb85b6" porter_stemmer = PorterStemmer() train_df['stemmed_tokens'] = [[porter_stemmer.stem(word) for word in tokens] for tokens in train_df['tokenized_text']] train_df # + [markdown] papermill={"duration": 0.017135, "end_time": "2021-02-01T05:00:01.358373", "exception": false, "start_time": "2021-02-01T05:00:01.341238", "status": "completed"} tags=[] id="eqy9dfqCPrqX" # ## Train-Test Split # + papermill={"duration": 0.050324, "end_time": "2021-02-01T05:00:01.425955", "exception": false, "start_time": "2021-02-01T05:00:01.375631", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/"} id="KxjLX3DdPrqX" outputId="665c8be1-8eea-418e-88d1-ac2f8d5e11bc" # Train Test Split Function def split_train_test(train_df, test_size=0.3, shuffle_state=True): X_train, X_test, Y_train, Y_test = train_test_split(train_df[['stemmed_tokens']], train_df['label'], shuffle=shuffle_state, test_size=test_size, random_state=15) print("Value counts for Train sentiments") print(Y_train.value_counts()) print("Value counts for Test sentiments") print(Y_test.value_counts()) print(type(X_train)) print(type(Y_train)) X_train = X_train.reset_index() X_test = X_test.reset_index() Y_train = Y_train.to_frame() Y_train = Y_train.reset_index() Y_test = Y_test.to_frame() Y_test = Y_test.reset_index() print(X_train.head()) return X_train, X_test, Y_train, Y_test # Call the train_test_split X_train, X_test, Y_train, Y_test = split_train_test(train_df) # + [markdown] papermill={"duration": 0.017508, "end_time": "2021-02-01T05:00:01.461749", "exception": false, "start_time": "2021-02-01T05:00:01.444241", "status": "completed"} tags=[] id="Y61nHW_IPrqY" # ## Word2Vec Model Creation # + [markdown] papermill={"duration": 0.017476, "end_time": "2021-02-01T05:00:01.497168", "exception": false, "start_time": "2021-02-01T05:00:01.479692", "status": "completed"} tags=[] id="A0ogL9EwPrqZ" # **word2vec** algorithm uses a neural network model to learn word associations from a large corpus of text. Once trained, such a model can detect synonymous words or suggest additional words for a partial sentence. As the name implies, word2vec represents each distinct word with a particular list of numbers called a vector. 
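# A tiny illustration of that idea on a made-up two-sentence corpus (a sketch; it uses the same
# gensim 3.x-style API -- the `size=` argument and `model.wv` -- that the training cell below
# relies on):

# +
toy_model = Word2Vec([['cats', 'like', 'milk'], ['dogs', 'like', 'bones']],
                     min_count=1, size=10, window=2, sg=0)
print(toy_model.wv['cats'].shape)               # every word becomes a 10-dimensional vector
print(toy_model.wv.similarity('cats', 'dogs'))  # cosine similarity between two word vectors
# -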
# + papermill={"duration": 117.576898, "end_time": "2021-02-01T05:01:59.091812", "exception": false, "start_time": "2021-02-01T05:00:01.514914", "status": "completed"} tags=[] colab={"base_uri": "https://localhost:8080/", "height": 454} id="0MYGP7IYPrqZ" outputId="56341d3d-4f68-4574-febd-bc44ae4e395f" size = 500 window = 3 min_count = 1 workers = 3 # 0 for CBOW, 1 for skip-gram sg = 0 OUTPUT_FOLDER = '/kaggle/working/' # Function to train word2vec model def make_word2vec_model(train_df, padding, sg, min_count, size, workers, window): if padding: #print(len(train)) temp_df = pd.Series(train_df['stemmed_tokens']).values temp_df = list(temp_df) temp_df.append(['pad']) #print(str(size)) word2vec_file = OUTPUT_FOLDER + '2ata' + '_PAD.model' w2v_model = Word2Vec(temp_df, min_count = min_count, size = size, workers = workers, window = window, sg = sg) #w2v_model.save(word2vec_file) return w2v_model, word2vec_file # Train Word2vec model w2vmodel, word2vec_file = make_word2vec_model(train_df, padding=True, sg=sg, min_count=min_count, size=size, workers=workers, window=window) # + [markdown] papermill={"duration": 0.017826, "end_time": "2021-02-01T05:01:59.128451", "exception": false, "start_time": "2021-02-01T05:01:59.110625", "status": "completed"} tags=[] id="2uKaNf21Prqa" # ## Padding # + [markdown] papermill={"duration": 0.017726, "end_time": "2021-02-01T05:01:59.164016", "exception": false, "start_time": "2021-02-01T05:01:59.146290", "status": "completed"} tags=[] id="eailXT7kPrqa" # All the neural networks require to have inputs that have the same shape and size. However, when we pre-process and use the texts as inputs for our model e.g. LSTM, not all the sentences have the same length. In other words, naturally, some of the sentences are longer or shorter. We need to have the inputs with the same size, this is where the padding is necessary. 
# **Padding** will make all sentences of same length by inserting 0 in the end or bigenning of the sentences # + papermill={"duration": 0.044877, "end_time": "2021-02-01T05:01:59.226871", "exception": false, "start_time": "2021-02-01T05:01:59.181994", "status": "completed"} tags=[] id="SZAtDGPbPrqa" outputId="d79fdcc1-fc73-4983-e5db-70a1aad68cae" max_sen_len = train_df.stemmed_tokens.map(len).max() padding_idx = w2vmodel.wv.vocab['pad'].index print(padding_idx) def make_word2vec_vector_cnn(sentence): padded_X = [padding_idx for i in range(max_sen_len)] i = 0 for word in sentence: if word not in w2vmodel.wv.vocab: padded_X[i] = 0 else: padded_X[i] = w2vmodel.wv.vocab[word].index i += 1 return torch.tensor(padded_X, dtype=torch.long, device=device).view(1, -1) # + papermill={"duration": 0.025094, "end_time": "2021-02-01T05:01:59.270003", "exception": false, "start_time": "2021-02-01T05:01:59.244909", "status": "completed"} tags=[] id="92QCoM01Prqa" outputId="3b5090fe-db77-45f4-9d12-85389455ce30" max_sen_len # + papermill={"duration": 0.025654, "end_time": "2021-02-01T05:01:59.314047", "exception": false, "start_time": "2021-02-01T05:01:59.288393", "status": "completed"} tags=[] id="XCqLqCHSPrqb" outputId="7b92d483-a06a-4995-c8ff-c78de7b45a59" padding_idx # + [markdown] papermill={"duration": 0.018659, "end_time": "2021-02-01T05:01:59.351560", "exception": false, "start_time": "2021-02-01T05:01:59.332901", "status": "completed"} tags=[] id="v6ODWVYFPrqb" # ## CNN model # + papermill={"duration": 0.377412, "end_time": "2021-02-01T05:01:59.747977", "exception": false, "start_time": "2021-02-01T05:01:59.370565", "status": "completed"} tags=[] id="jhIsD_u6Prqb" outputId="69b74536-cfe6-4dcc-b416-d46bf7ac0e80" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") device # + papermill={"duration": 0.031464, "end_time": "2021-02-01T05:01:59.799039", "exception": false, "start_time": "2021-02-01T05:01:59.767575", "status": "completed"} tags=[] id="_KVKyJ9-Prqb" EMBEDDING_SIZE = 500 NUM_FILTERS = 10 #torch.nn.Conv2d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T, T]], #stride: Union[T, Tuple[T, T]] = 1, padding: Union[T, Tuple[T, T]] = 0, #dilation: Union[T, Tuple[T, T]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros') class CnnTextClassifier(nn.Module): def __init__(self, vocab_size, num_classes, window_sizes=(1,2,3,5)): super(CnnTextClassifier, self).__init__() w2vmodel = gensim.models.KeyedVectors.load(OUTPUT_FOLDER + '2ata_PAD.model') weights = w2vmodel.wv # With pretrained embeddings self.embedding = nn.Embedding.from_pretrained(torch.FloatTensor(weights.vectors), padding_idx=w2vmodel.wv.vocab['pad'].index) # like a python list, it was designed to store any desired number of nn.Module self.convs = nn.ModuleList([ nn.Conv2d(1, NUM_FILTERS, [window_size, EMBEDDING_SIZE], padding=(window_size - 1, 0)) for window_size in window_sizes ]) self.fc = nn.Linear(NUM_FILTERS * len(window_sizes), num_classes) def forward(self, x): x = self.embedding(x) # [B, T, E] # Apply a convolution + max_pool layer for each window size x = torch.unsqueeze(x, 1) xs = [] for conv in self.convs: x2 = torch.tanh(conv(x)) x2 = torch.squeeze(x2, -1) x2 = F.max_pool1d(x2, x2.size(2)) xs.append(x2) x = torch.cat(xs, 2) # FC x = x.view(x.size(0), -1) logits = self.fc(x) probs = F.softmax(logits, dim = 1) return probs # + papermill={"duration": 0.026038, "end_time": "2021-02-01T05:01:59.844386", "exception": false, "start_time": "2021-02-01T05:01:59.818348", "status": 
"completed"} tags=[] id="f2Y5Zi3UPrqc" def make_target(label): if label == 0: return torch.tensor([0], dtype=torch.long, device=device) elif label == 1: return torch.tensor([1], dtype=torch.long, device=device) # + [markdown] papermill={"duration": 0.01963, "end_time": "2021-02-01T05:01:59.883419", "exception": false, "start_time": "2021-02-01T05:01:59.863789", "status": "completed"} tags=[] id="ZbfTWpYBPrqc" # ## Train the model # + papermill={"duration": 5.154448, "end_time": "2021-02-01T05:02:05.058040", "exception": false, "start_time": "2021-02-01T05:01:59.903592", "status": "completed"} tags=[] id="uOLwCpUrPrqd" outputId="be36ad59-bac1-4989-a2c0-79e724f7a931" NUM_CLASSES = 2 VOCAB_SIZE = len(w2vmodel.wv.vocab) print(VOCAB_SIZE) cnn_model = CnnTextClassifier(vocab_size=VOCAB_SIZE, num_classes=NUM_CLASSES) cnn_model.to(device) loss_function = nn.CrossEntropyLoss() optimizer = optim.Adam(cnn_model.parameters(), lr=0.0001) num_epochs = 5 # + papermill={"duration": 1206.785601, "end_time": "2021-02-01T05:22:11.863925", "exception": false, "start_time": "2021-02-01T05:02:05.078324", "status": "completed"} tags=[] id="UBaaSYrxPrqd" outputId="4d362b85-916e-4701-9f76-9dc14d3ecea5" # Open the file for writing loss loss_file_name = OUTPUT_FOLDER + '1cnn_class_big_loss_with_padding.csv' f = open(loss_file_name,'w') f.write('iter, loss') f.write('\n') losses = [] cnn_model.train() for epoch in range(num_epochs): print("Epoch" + str(epoch + 1)) train_loss = 0 for index, row in X_train.iterrows(): # Clearing the accumulated gradients cnn_model.zero_grad() # Make the bag of words vector for stemmed tokens bow_vec = make_word2vec_vector_cnn(row['stemmed_tokens']) # Forward pass to get output probs = cnn_model(bow_vec) # Get the target label #print(Y_train['label'][index]) target = make_target(Y_train['label'][index]) # Calculate Loss: softmax --> cross entropy loss loss = loss_function(probs, target) train_loss += loss.item() # Getting gradients w.r.t. 
parameters loss.backward() # Updating parameters optimizer.step() print(f'train_loss : {train_loss / len(X_train)}') print("Epoch ran :"+ str(epoch+1)) f.write(str((epoch+1)) + "," + str(train_loss / len(X_train))) f.write('\n') train_loss = 0 torch.save(cnn_model, OUTPUT_FOLDER + 'cnn_big_model_500_with_padding.pth') f.close() print("Input vector") print("Probs") print(probs) print(torch.argmax(probs, dim=1).cpu().numpy()[0]) # + [markdown] papermill={"duration": 0.021574, "end_time": "2021-02-01T05:22:11.908648", "exception": false, "start_time": "2021-02-01T05:22:11.887074", "status": "completed"} tags=[] id="OK-xJXPxPrqf" # ## Model Test # + papermill={"duration": 49.988305, "end_time": "2021-02-01T05:23:01.919057", "exception": false, "start_time": "2021-02-01T05:22:11.930752", "status": "completed"} tags=[] id="zOkIjLXfPrqi" outputId="fcb08291-4a67-4713-cc42-81ab8fcf8805" bow_cnn_predictions = [] original_lables_cnn_bow = [] cnn_model.eval() loss_df = pd.read_csv(OUTPUT_FOLDER + '1cnn_class_big_loss_with_padding.csv') print(loss_df.columns) # loss_df.plot('loss') y_pred_list = [] y_true_list = [] with torch.no_grad(): for index, row in X_test.iterrows(): #print(row['stemmed_tokens']) bow_vec = make_word2vec_vector_cnn(row['stemmed_tokens']) #print(bow_vec) probs = cnn_model(bow_vec) #print(probs.data) _, predicted = torch.max(probs.data, 1) bow_cnn_predictions.append(predicted.cpu().numpy()[0]) original_lables_cnn_bow.append(make_target(Y_test['label'][index]).cpu().numpy()[0]) print(confusion_matrix(original_lables_cnn_bow, bow_cnn_predictions)) #print(original_lables_cnn_bow) print(classification_report(original_lables_cnn_bow,bow_cnn_predictions)) loss_file_name = OUTPUT_FOLDER + '1cnn_class_big_loss_with_padding.csv' loss_df = pd.read_csv(loss_file_name) print(loss_df.columns) plt_500_padding_30_epochs = loss_df[' loss'].plot() fig = plt_500_padding_30_epochs.get_figure() fig.savefig(OUTPUT_FOLDER + '1loss_plt_500_padding_30_epochs.pdf') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # Predicting Cancellation Rates # # In this notebook, you will build a machine learning model to predict whether or not a customer cancelled a hotel booking. You will be introduced to the `scikit-learn` framework to do machine learning in Python. # # We will use a dataset on hotel bookings from the article ["Hotel booking demand datasets"](https://www.sciencedirect.com/science/article/pii/S2352340918315191), published in the Elsevier journal, [Data in Brief](https://www.sciencedirect.com/journal/data-in-brief). The abstract of the article states # # > This data article describes two datasets with hotel demand data. One of the hotels (H1) is a resort hotel and the other is a city hotel (H2). Both datasets share the same structure, with 31 variables describing the 40,060 observations of H1 and 79,330 observations of H2. Each observation represents a hotel booking. Both datasets comprehend bookings due to arrive between the 1st of July of 2015 and the 31st of August 2017, including bookings that effectively arrived and bookings that were canceled. # # For convenience, the two datasets have been combined into a single csv file `data/hotel_bookings.csv`. Let us start by importing all the functions needed to import, visualize and model the data. 
# + # Data imports import pandas as pd import numpy as np # Visualization imports # %config InlineBackend.figure_format = 'svg' import matplotlib.pyplot as plt import plotly.express as px plt.rcParams['figure.figsize'] = [8, 4] # ML Imports from sklearn.model_selection import train_test_split, KFold, cross_validate, cross_val_score from sklearn.pipeline import Pipeline from sklearn.compose import ColumnTransformer from sklearn.preprocessing import LabelEncoder, OneHotEncoder from sklearn.impute import SimpleImputer from sklearn.ensemble import RandomForestClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import accuracy_score # - # ## 0. Get the data # # The first step in any machine learning workflow is to get the data and explore it. hotel_bookings = pd.read_csv('data/hotel_bookings.csv') hotel_bookings.head() # Let us look at the number of bookings by month. bookings_by_month = hotel_bookings.groupby('arrival_date_month', as_index=False)[['hotel']].count().rename(columns={"hotel": "nb_bookings"}) months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'] fig = px.bar( bookings_by_month, x='arrival_date_month', y='nb_bookings', title=f'Hotel Bookings by Month', category_orders={"arrival_date_month": months} ) fig.show(config={"displayModeBar": False}) # Our objective is to build a model that predicts whether or not a user cancelled a hotel booking. # # ## 1. Split the data into training and test sets. # # Let us start by defining a split to divide the data into training and test sets. The basic idea is to train the model on a portion of the data and test its performance on the other portion that has not been seen by the model. This is done in order to prevent __overfitting__. We will use four-fold cross validation with shuffling. split = KFold(n_splits=4, shuffle=True, random_state=1234) # ## 2. Choose a class of models, and hyperparameters. # # The next step is to choose a class of models and specify hyperparameters. This is just for starters and we will see later how we can specify a range of values for hyperparameters and tune the model for optimal performance! We will pick the simple, yet very effective Decision Tree and Random Forest models. # We will use `scikit-learn` to fit the models and evaluate their performance. from IPython.display import Image Image("http://scikit-learn.org/dev/_static/ml_map.png", width=750) models = [ ("Decision Tree", DecisionTreeClassifier(random_state=1234)), ("Random Forest", RandomForestClassifier(random_state=1234,n_jobs=-1)), ] # ## 3. Preprocess the data # # The next step is to set up a pipeline to preprocess the features. We will impute all missing values with a constant, and one-hot encode all categorical features. 
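# As a tiny illustration of what constant imputation plus one-hot encoding does
# (a sketch on made-up data, separate from the actual pipeline built in the next cell):

# +
toy = pd.DataFrame({"children": [2.0, np.nan, 1.0], "meal": ["BB", np.nan, "HB"]})
# Numeric column: missing values become the constant 0 by default.
print(SimpleImputer(strategy="constant").fit_transform(toy[["children"]]))
# Categorical column: missing values become "Unknown", then each category gets its own 0/1 column.
meal_filled = SimpleImputer(strategy="constant", fill_value="Unknown").fit_transform(toy[["meal"]])
print(OneHotEncoder(handle_unknown="ignore").fit_transform(meal_filled).toarray())
# -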
# + # Preprocess numerical features: features_num = [ "lead_time", "arrival_date_week_number", "arrival_date_day_of_month", "stays_in_weekend_nights", "stays_in_week_nights", "adults", "children", "babies", "is_repeated_guest" , "previous_cancellations", "previous_bookings_not_canceled", "agent", "company", "required_car_parking_spaces", "total_of_special_requests", "adr" ] transformer_num = SimpleImputer(strategy="constant") # Preprocess categorical features: features_cat = [ "hotel", "arrival_date_month", "meal", "market_segment", "distribution_channel", "reserved_room_type", "deposit_type", "customer_type" ] transformer_cat = Pipeline(steps=[ ("imputer", SimpleImputer(strategy="constant", fill_value="Unknown")), ("onehot", OneHotEncoder(handle_unknown='ignore')) ]) # Create a preprocessing pipeline preprocessor = ColumnTransformer(transformers=[ ("num", transformer_num, features_num), ("cat", transformer_cat, features_cat) ]) # - # ## 4. Fit the models and evaluate performance # # Finally, we will fit the Decision Tree and Random Forest models on the training data and use 4-fold cross-validation to evaluate their performance. features = features_num + features_cat X = hotel_bookings[features] y = hotel_bookings["is_canceled"] for name, model in models: # Compose data preprocessing and model into a single pipeline steps = Pipeline(steps=[ ('preprocessor', preprocessor), ('model', model) ]) # Compute cross validation accuracy for each model cv_results = cross_val_score(steps, X, y, cv=split, scoring="accuracy", n_jobs=-1) # output: min_score = round(np.min(cv_results), 4) max_score = round(np.max(cv_results), 4) mean_score = round(np.mean(cv_results), 4) std_dev = round(np.std(cv_results), 4) print(f"[{name}] Cross Validation Accuarcy Score: {mean_score} +/- {std_dev} (std) min: {min_score}, max: {max_score}") # ## Data Dictionary # # |variable |class |description | # |:-------------------------------|:-----------|:------------------------------------------------------------------------------------------------| # |adr |numeric |Average daily rate | # |adults |integer |Number of adults | # |agent |categorical |The id of the travel agency | # |arrival_date_day_of_month |integer |Day of the month of the arrival date | # |arrival_date_month |categorical |Month of arrival date with 12 categories: “January” to “December” | # |arrival_date_week_number |integer |Week number of the arrival date | # |arrival_date_year |integer |Year of arrival date | # |assigned_room_type |categorical |The code for type of room assigned | # |babies |integer |Number of babies | # |booking_changes |integer |The number of changes made to the booking | # |children |integer |Number of children | # |company |categorical |The id of the company making the booking | # |country |categorical |The country of originin ISO 3155-3:2013 format | # |customer_type |categorical |The type of booking: Contract / Group / Transient / Transient-Party | # |days_in_waiting_list |integer |The number of days the booking was in the waiting list | # |deposit_type |categorical |The type of deposit: No Deposit / Non Refund / Refundable | # |distribution_channel |categorical |The booking distribution channel: TA / TO etc. 
| # |is_cancelled |categorical |A boolean indicating if the booking was cancelled (1) or not (0) | # |is_repeated_guest |categorical |A boolean indicating if it was a repeated guest (1) or not (0) | # |lead_time |integer |The number of days between the booking date and arrival date | # |market_segment |categorical |A designation for the market segment: TA. TO | # |meal |categorical |The type of meal booked: Bed & Breakfast (BB), Half Board (HB), and Full Board (FB) | # |previous_bookings_not_cancelled |integer |The number of previous bookings not cancelled by the customer prior to the current booking | # |previous_cancellations |integer |The number of previous bookings that were cancelled by the customer prior to the current booking | # |required_car_parking_spaces |integer |The number of car parking spaces required by the customer | # |reservation_status |categorical |The last status of the reservation: Canceled / Check-Out / No-Show | # |reservation_status_date |date |The date at which the last status was set. | # |reserved_room_type |categorical |The code of room type reserved. | # |stays_in_weekend_nights |integer |The number of weekend nights stayed or booked to stay | # |stays_in_week_nights |integer |The number of week nights stayed or booked to stay | # |total_of_special_requests |integer |The number of special requests made by the customer | # + # Formação Cientista de Dados - e # Gráfico 3D # - # Importação das bibliotecas import pandas as pd import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import axes3d # Carregamento da base de dados base = pd.read_csv('orchard.csv') base # Criação do gráfico 3D, indicando o atributo projection = '3d' e passando três atributos (decrease, rowpos e colpos) figura = plt.figure() eixo = figura.add_subplot(1, 1, 1, projection = '3d') eixo.scatter(base.decrease, base.rowpos, base.colpos) eixo.set_xlabel('decrease') eixo.set_ylabel('rowpos') eixo.set_zlabel('colpos') # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: commons # language: python # name: commons # --- # + import pandas as pd train = pd.read_csv("../../data/interim/train.csv") train.head() # - from sklearn.preprocessing import KBinsDiscretizer kbins = KBinsDiscretizer(n_bins=5, encode='ordinal', strategy='quantile') # ‘uniform’, ‘quantile’, ‘kmeans’ train['amount_discretized'] = kbins.fit_transform(train[['amount']].values) agg_values = train.groupby(by=['amount_discretized']).mean() columns_to_agg = ['v1'] agg_values = agg_values[columns_to_agg] agg_values.columns = [x + "_mean_given_amount" for x in agg_values.columns] train = train.merge(agg_values, how='left', on=['amount_discretized']) train.drop(['amount_discretized'], axis=1, inplace=True) print(train.shape) train.head() from sklearn.base import BaseEstimator, TransformerMixin class AggByAmount(BaseEstimator, TransformerMixin): # Inputs: bins, encode, strategy ('uniform', 'quantile', 'kmeans'), number of top features, mean/max/min # Top features order: ['v1', 'v4', 'v10', 'v7', 'v18', 'v11', 'v20', 'amount', 'v3', 'v16', 'v13', 'v14', 'v8', 'v9', 'v19', 'v2', 'v5', 'v12', 'v26', 'v24', 'v25', 'v27', 'v17', 'v22', 'v23', 'v6', 'v15', 'v21'] def __init__(self, n_bins=5, encode='ordinal', strategy='quantile', columns_to_agg=['v1']): self.n_bins = n_bins self.encode = encode self.strategy = strategy self.columns_to_agg = columns_to_agg self.kbins = None self.initial_columns = None def fit(self, X, y=None): 
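        # Learn the bin edges from the 'amount' column and remember the original input columns,
        # so that transform() can drop them later and return only the new aggregate feature(s).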
self.kbins = KBinsDiscretizer(n_bins=self.n_bins, encode=self.encode, strategy=self.strategy) self.kbins.fit(X[['amount']].values) self.initial_columns = list(X.columns) return self def transform(self, X, y=None): X['amount_discretized'] = self.kbins.transform(X[['amount']].values) agg_values = X.groupby(by=['amount_discretized']).mean() agg_values = agg_values[self.columns_to_agg] agg_values.columns = [x + "_mean_given_amount" for x in agg_values.columns] X = X.merge(agg_values, how='left', on=['amount_discretized']) X.drop(self.initial_columns + ['amount_discretized'], axis=1, inplace=True) return X agg_by_amount = AggByAmount() agg_by_amount.fit(train) agg_by_amount.transform(train).head() # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- import numpy as np my_list1 = [1,2,3,4] # First we imported numpy as np to save keystrokes. We created a list and then assigned it to an array by using the 'np.array()' method to more easily use tab or shift tab for more info. my_array1 = np.array(my_list1) # Calling my_array1 then produces our list in parens. my_array1 # We can combine lists in our array similarly as shown below. my_list2 = [11,22,33,44] my_lists = [my_list1, my_list2] my_array2 = np.array(my_lists) my_array2 # .shape method is to see rows and columns of the arrays and dtype is data type. It is important to note that when using .zeros method you should use just float or int without adding the 32 or 64 as the data type shows. my_array2.shape my_array2.dtype my_zeros = np.zeros((3,3),int) my_zeros my_zeros.dtype np.eye(6) np.arange(1,15,2) 5/2 arr1 = np.array([[1,2,3,4], [8,9,10,11]]) arr1 * arr1 # import numpy as np arr = np.arange(0,11) arr arr[8] arr[1:5] arr slice_of_arr = arr[0:6] slice_of_arr # Important note about arrays. There is only one copy unless you create another, so anything you do changes the original copy. # To make a copy do the following: arr_copy = arr.copy() arr_copy arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45])) arr_2d arr_2d[1] arr_2d[1][0] # Slicing the 3x3 is also possible using slices divided by a comma shown below: arr_2d[:2,1:] # The :2 takes the 0 - 1 lists of |5, 10, 15,|and |20, 25, 30] and the 1: gives us the 1 and 2 elements of each. Remember that everything starts with zero. # Below we create another array reshaped into a matrix. To transform the matrix (i.e. make the colomn size the row size and vice versa), we use the transform method 'T'. arr = np.arange(50).reshape((10,5)) arr arr.T # We can do dot multiplication which is done with matrices in mathematics by using 'np.dot'. In dot multiplication we multiply the first value in the first row of our first matrix by the first value in the first column of our second matrix and continue until the entire first row of our first matrix is multiplied by the entire first column of our second matrix. Then we sum the products for our matrix product. We then do the same for the 2nd row of the first matrix and the 2nd column of the 2nd matrix and so on until completion. This gives us the standard mathematics formula of [mxn X nxp = mxp.] This means a 1x3 X 3x1 gives us a 1x1, but a 3x1 X 1x3 gives us a 3x3. Below is our original array multiplied by our transposed one. Remember that the number of values in the row must equal the values in the column in order to do dot multiplication. 
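# For example, `arr` above is 10 x 5, so `arr.T` is 5 x 10 and `np.dot(arr.T, arr)` in the next
# cell is (5 x 10) X (10 x 5) = 5 x 5.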
I'll need to get into more math studies while doing this course and that will slow me down. I'll spend more time doing this to compensate. np.dot(arr.T, arr) # It is also possible to make 3D matrices by using the reshape method though it isn't used often. arr3d = np.arange(50).reshape((5,5,2)) arr3d # Transposing a 3d matrix uses .tranpose and the output must be specified. arr3d.transpose((1,0,2)) arr = np.array([[1,2,3]]) arr arr.swapaxes(0,1) # Universal array functions arr = np.arange(11) arr import matplotlib.pyplot as plt # %matplotlib inline # We imported matplotlib.pyplot to visualize and we set it to plt to save characters. We have to include the %matplotlib inline to allow us to see everythin in Jupyter points = np.arange(-5,5,.01) dx,dy = np.meshgrid(points, points) dx dy z = (np.sin(dx) + np.sin(dy)) z plt.imshow(z) # + plt.imshow(z) plt.colorbar() plt.title('plot for sin of x plus y') # - # np.where gives you a shorthand for booleans in numpy. arr = np.array([[1,2,3],[4,5,6],[7,8,9]]) arr arr.sum(0) # sum will add the values of an array over a given axis. In the case above, 0 would be the columns and 1 would be the rows. We can also get the mean by using the method below. We can do it by axis or leave empty paren to get the mean of all. arr.mean() arr.std() arr.var() arr = np.arange(5) # Below is how we save and load arrays. First we created one and saved it using np.save. Note that you must pass a name in quotes and the array you want to save. arr np.save('myarray',arr) # Then we created another array under the original name. By using np.load and passing the array name we gave our first array ending in .npy, we can retrieve our original. arr = np.arange(10) arr np.load('myarray.npy') # We can also save multiple arrays under a single name by zipping them using np.savez and passing our created name ending in npz in quotes followed by the files we want saved separated by commas. arr1 = np.load('myarray.npy') arr2 = arr np.savez('ziparrays.npz',x = arr1, y = arr2) zipped = np.load('ziparrays.npz') # To call them, you must use the name of the zipped variable in quotes and within square brackets as shown below: zipped['x'] zipped['y'] # Below is how we can save a matrix as a txt file. arr = np.array([[1,2,3],[4,5,6],[7,8,9]]) arr np.savetxt('textarray.txt', arr, delimiter=',') arr = np.loadtxt('textarray.txt', delimiter = ',') arr # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 # language: python # name: python3 # --- # # ECG Federated 1D-CNN Client Side # This code is the server part of ECG federated 1D-CNN model for **multi** client and a server. 
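# In this setup the notebook acts as one of the federated clients: it connects to the central
# server over a socket, receives the current global weights at the start of each round, trains
# locally on its own shard of the ECG data, and sends the updated weights back.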
users = 2 # number of clients # + import os import h5py import socket import struct import pickle import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.utils.data import Dataset, DataLoader from torch.optim import Adam # for image # import matplotlib.pyplot as plt # import numpy as np import time from tqdm import tqdm # - root_path = '../../models/' # ## Cuda # device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") device = "cpu" print(device) client_order = int(input("client_order(start from 0): ")) num_traindata = 13244 // users # ## Data load class ECG(Dataset): def __init__(self, train=True): if train: # total: 13244 with h5py.File(os.path.join(root_path, 'ecg_data', 'train_ecg.hdf5'), 'r') as hdf: self.x = hdf['x_train'][num_traindata * client_order : num_traindata * (client_order + 1)] self.y = hdf['y_train'][num_traindata * client_order : num_traindata * (client_order + 1)] else: with h5py.File(os.path.join(root_path, 'ecg_data', 'test_ecg.hdf5'), 'r') as hdf: self.x = hdf['x_test'][:] self.y = hdf['y_test'][:] def __len__(self): return len(self.x) def __getitem__(self, idx): return torch.tensor(self.x[idx], dtype=torch.float), torch.tensor(self.y[idx]) # ## Making Batch Generator batch_size = 32 # ### `DataLoader` for batch generating # `torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False)` train_dataset = ECG(train=True) test_dataset = ECG(train=False) trainloader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) testloader = DataLoader(test_dataset, batch_size=batch_size) # ### Number of total batches train_total_batch = len(trainloader) print(train_total_batch) test_batch = len(testloader) print(test_batch) # ## Pytorch layer modules for *Conv1D* Network # # # # ### `Conv1d` layer # - `torch.nn.Conv1d(in_channels, out_channels, kernel_size)` # # ### `MaxPool1d` layer # - `torch.nn.MaxPool1d(kernel_size, stride=None)` # - Parameter `stride` follows `kernel_size`. # # ### `ReLU` layer # - `torch.nn.ReLU()` # # ### `Linear` layer # - `torch.nn.Linear(in_features, out_features, bias=True)` # # ### `Softmax` layer # - `torch.nn.Softmax(dim=None)` # - Parameter `dim` is usually set to `1`. 
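# A quick shape check for these layer types (a sketch; the input length of 130 matches the
# per-beat length implied by the size comments in the model class below):

# +
x = torch.randn(1, 1, 130)                  # (batch, channels, length)
x = nn.Conv1d(1, 16, 7)(x); print(x.shape)  # -> torch.Size([1, 16, 124])
x = nn.MaxPool1d(2)(x); print(x.shape)      # -> torch.Size([1, 16, 62])
print(nn.Softmax(dim=1)(torch.randn(2, 5)).sum(dim=1))  # each row of probabilities sums to 1
# -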
# # ## Construct 1D-CNN ECG classification model class EcgConv1d(nn.Module): def __init__(self): super(EcgConv1d, self).__init__() self.conv1 = nn.Conv1d(1, 16, 7) # 124 x 16 self.relu1 = nn.LeakyReLU() self.pool1 = nn.MaxPool1d(2) # 62 x 16 self.conv2 = nn.Conv1d(16, 16, 5) # 58 x 16 self.relu2 = nn.LeakyReLU() self.conv3 = nn.Conv1d(16, 16, 5) # 54 x 16 self.relu3 = nn.LeakyReLU() self.conv4 = nn.Conv1d(16, 16, 5) # 50 x 16 self.relu4 = nn.LeakyReLU() self.pool4 = nn.MaxPool1d(2) # 25 x 16 self.linear5 = nn.Linear(25 * 16, 128) self.relu5 = nn.LeakyReLU() self.linear6 = nn.Linear(128, 5) self.softmax6 = nn.Softmax(dim=1) def forward(self, x): x = self.conv1(x) x = self.relu1(x) x = self.pool1(x) x = self.conv2(x) x = self.relu2(x) x = self.conv3(x) x = self.relu3(x) x = self.conv4(x) x = self.relu4(x) x = self.pool4(x) x = x.view(-1, 25 * 16) x = self.linear5(x) x = self.relu5(x) x = self.linear6(x) x = self.softmax6(x) return x ecg_net = EcgConv1d() ecg_net.to(device) criterion = nn.CrossEntropyLoss() rounds = 400 # default local_epochs = 1 # default lr = 0.001 optimizer = Adam(ecg_net.parameters(), lr=lr) # ## Socket initialization # ### Required socket functions # + def send_msg(sock, msg): # prefix each message with a 4-byte length in network byte order msg = pickle.dumps(msg) msg = struct.pack('>I', len(msg)) + msg sock.sendall(msg) def recv_msg(sock): # read message length and unpack it into an integer raw_msglen = recvall(sock, 4) if not raw_msglen: return None msglen = struct.unpack('>I', raw_msglen)[0] # read the message data msg = recvall(sock, msglen) msg = pickle.loads(msg) return msg def recvall(sock, n): # helper function to receive n bytes or return None if EOF is hit data = b'' while len(data) < n: packet = sock.recv(n - len(data)) if not packet: return None data += packet return data # - # ### Set host address and port number host = input("IP address: ") port = 10080 max_recv = 100000 # ### Open the client socket s = socket.socket() s.connect((host, port)) # ## SET TIMER start_time = time.time() # store start time print("timmer start!") msg = recv_msg(s) rounds = msg['rounds'] client_id = msg['client_id'] local_epochs = msg['local_epoch'] send_msg(s, len(train_dataset)) # + # update weights from server # train for r in range(rounds): # loop over the dataset multiple times weights = recv_msg(s) ecg_net.load_state_dict(weights) ecg_net.eval() for local_epoch in range(local_epochs): for i, data in enumerate(tqdm(trainloader, ncols=100, desc='Round '+str(r+1)+'_'+str(local_epoch+1))): # get the inputs; data is a list of [inputs, labels] inputs, labels = data inputs = inputs.to(device) labels = labels.clone().detach().long().to(device) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = ecg_net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() msg = ecg_net.state_dict() send_msg(s, msg) print('Finished Training') # - end_time = time.time() #store end time print("Training Time: {} sec".format(end_time - start_time)) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3.7.4 64-bit # name: python374jvsc74a57bd07945e9a82d7512fbf96246d9bbc29cd2f106c1a4a9cf54c9563dadf10f2237d4 # --- # # Chipotle ** # # ### Step 1. 
Import the necessary libraries

# +
# remember to %matplotlib inline
import pandas as pd
import matplotlib as mp
import numpy as np
import scipy.stats as stats
import seaborn as sns

sns.set_context('notebook')
sns.set_style('darkgrid')
# -

# ### Step 2. Import the dataset of `chipotle` and assign it to a variable called chipo.

# +
chipo=pd.read_csv("https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv",sep="\t")
chipo
# -

# ### Step 3. See the first 10 entries

chipo[0:10]

# ### Step 4. Create a histogram of the top 5 items bought

# +
sns.histplot(data=chipo.sort_values('quantity', ascending=False).groupby('item_name').head(1)[0:5],y="item_name",x="quantity")
# a little help here, I don't know how to make this plot, it's driving me CRAZY
# -

chipo.sort_values("quantity", ascending=False).groupby(by="item_name")

# ### Step 5. Create a scatterplot with the number of items ordered per order price
#
# Make sure you get the same labels and title
# #### Hint: Price should be on the X-axis and Items ordered on the Y-axis

# +
chipo["item_price"]=chipo["item_price"].apply(lambda x:x.lstrip("$"))
# -

chipo["item_price"] = chipo["item_price"].astype(float)
chipo

sns.scatterplot(data=chipo.groupby("order_id").sum("item_price"), x="item_price", y="quantity",color="green")

# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---

# + [markdown] colab_type="text" id="SKD7jSvIn1T9"
# # Importing Modules
#
# The necessary modules are: os, opencv, numpy, tqdm, matplotlib, imgaug, tensorflow and sklearn

# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="nNtz_YYVnzcZ" outputId="21000b0d-93c5-4dcf-96f3-4bedc5d90a3c"
import os
import cv2
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
import imgaug.augmenters as iaa
from imgaug.augmentables.segmaps import SegmentationMapsOnImage
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, concatenate, BatchNormalization, Activation, add
from tensorflow.keras.models import Model, model_from_json
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import ELU, LeakyReLU
from tensorflow.keras.utils import plot_model
from tensorflow.keras import backend as K
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# + [markdown] colab_type="text" id="xLkeS7Lxo_Po"
# # Constructing Training and Test Datasets

# + [markdown] colab_type="text" id="gHo6ymwFpJzY"
# ## Loading the Images
#
# We first load all the images and the corresponding segmentation masks.
# # They are stored in two lists X, Y and respectively # # Moreover, the images are resized to 64x64 # + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" id="YbeQ1aytHTm3" outputId="d43c1e52-d889-4a4f-96aa-5ef4d20b649f" img_files = next(os.walk('data/cxr'))[2] msk_files = next(os.walk('data/masks'))[2] img_files.sort() msk_files.sort() print(len(img_files)) print(len(msk_files)) X = [] Y = [] for img_fl in tqdm(img_files): if(img_fl.split('.')[-1]=='png'): img = cv2.imread('data/cxr/{}'.format(img_fl), cv2.IMREAD_GRAYSCALE) resized_img = cv2.resize(img,(64, 64), interpolation = cv2.INTER_CUBIC) X.append(resized_img) msk = cv2.imread('data/masks/{}'.format(img_fl), cv2.IMREAD_GRAYSCALE) resized_msk = cv2.resize(msk,(64, 64), interpolation = cv2.INTER_CUBIC) Y.append(resized_msk) # - # ## Data augmentation # # We generate more data for training using the original dataset. # # Operations: # # * dropout pixels # * change scale # * rotate # * horizontal flip # * crop # * contrast # * and others # + imgs = [] masks = [] for i in range(len(X)): # Load an example image (uint8, 64x64). image = np.array(X[i], dtype=np.uint8) # Define an example segmentation map (int32, 64x64). segmap = np.array(Y[i]) segmap = segmap / 255 segmap = np.around(segmap, decimals=0) segmap = np.array(segmap, dtype=np.int32) segmap = SegmentationMapsOnImage(segmap, shape=image.shape) # Define our augmentation pipeline. seq = iaa.Sequential([ iaa.Dropout([0.0, 0.004]), # drop 0% to 0.4% of all pixels iaa.Sharpen((0.0, 1.0)), # sharpen the image iaa.Affine( scale={"x": (0.9, 1.1), "y": (0.9, 1.1)}, translate_percent={"x": (-0.05, 0.05), "y": (-0.05, 0.05)}, rotate=(-10, 10), shear=(-8, 8) ), iaa.ElasticTransformation(alpha=0.1, sigma=0.03), iaa.Fliplr(0.5), iaa.LinearContrast((0.75, 1.5)), iaa.Crop(percent=(0, 0.05)), iaa.Sometimes( 0.5, iaa.GaussianBlur(sigma=(0, 0.5)) ), ], random_order=True) # Augment images and segmaps. images_aug = [] segmaps_aug = [] for _ in range(15): #e.g. range(15) = 15 new image are generate per 1 original image images_aug_i, segmaps_aug_i = seq(image=image, segmentation_maps=segmap) images_aug.append(images_aug_i) segmaps_aug.append(segmaps_aug_i) for image_aug, segmap_aug in zip(images_aug, segmaps_aug): imgs.append(image_aug) mask = cv2.cvtColor(segmap_aug.draw(size=image_aug.shape[:2])[0], cv2.COLOR_BGR2GRAY) mask = cv2.medianBlur(mask, 5) masks.append(mask) # + [markdown] colab_type="text" id="jYgEz9HQpgsR" # ## Train-Test Split # # The X, Y lists are converted to numpy arrays for convenience. # Furthermore, the images are divided by 255 to bring down the pixel values to [0...1] range. On the other hand the segmentations masks are converted to binary (0 or 1) values. 
# # Using Sklearn *train_test_split* we split the data randomly into 80% training and 20% testing data # + colab={"base_uri": "https://localhost:8080/", "height": 139} colab_type="code" id="CxFIZv3715Dt" outputId="5a93db60-ed50-4122-8aa0-b26e42a601ca" print(len(imgs)) print(len(masks)) X = np.array(imgs) Y = np.array(masks) X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1, random_state=3) Y_train = Y_train.reshape((Y_train.shape[0],Y_train.shape[1],Y_train.shape[2],1)) Y_test = Y_test.reshape((Y_test.shape[0],Y_test.shape[1],Y_test.shape[2],1)) X_train = X_train / 255 X_test = X_test / 255 Y_train = np.where(Y_train == 0, 0, 1) Y_test = np.where(Y_test == 0, 0, 1) print(X_train.shape) print(Y_train.shape) print(X_test.shape) print(Y_test.shape) # + [markdown] colab_type="text" id="C-Ajp2QVrMti" # # MultiResUNet Model # + [markdown] colab_type="text" id="21cbmiojrYrU" # ## Model Definition # # The MultiResUNet model as described in the [paper](https://arxiv.org/abs/1902.04049) can be found [here](https://github.com/nibtehaz/MultiResUNet/blob/master/MultiResUNet.py) # + colab={} colab_type="code" id="2nX7I1Wf_zEy" def conv2d_bn(x, filters, num_row, num_col, padding='same', strides=(1, 1), activation='relu', name=None): ''' 2D Convolutional layers Arguments: x {keras layer} -- input layer filters {int} -- number of filters num_row {int} -- number of rows in filters num_col {int} -- number of columns in filters Keyword Arguments: padding {str} -- mode of padding (default: {'same'}) strides {tuple} -- stride of convolution operation (default: {(1, 1)}) activation {str} -- activation function (default: {'relu'}) name {str} -- name of the layer (default: {None}) Returns: [keras layer] -- [output layer] ''' x = Conv2D(filters, (num_row, num_col), strides=strides, padding=padding, use_bias=False)(x) x = BatchNormalization(axis=3, scale=False)(x) if(activation == None): return x x = Activation(activation, name=name)(x) return x def trans_conv2d_bn(x, filters, num_row, num_col, padding='same', strides=(2, 2), name=None): ''' 2D Transposed Convolutional layers Arguments: x {keras layer} -- input layer filters {int} -- number of filters num_row {int} -- number of rows in filters num_col {int} -- number of columns in filters Keyword Arguments: padding {str} -- mode of padding (default: {'same'}) strides {tuple} -- stride of convolution operation (default: {(2, 2)}) name {str} -- name of the layer (default: {None}) Returns: [keras layer] -- [output layer] ''' x = Conv2DTranspose(filters, (num_row, num_col), strides=strides, padding=padding)(x) x = BatchNormalization(axis=3, scale=False)(x) return x def MultiResBlock(U, inp, alpha = 1.67): ''' MultiRes Block Arguments: U {int} -- Number of filters in a corrsponding UNet stage inp {keras layer} -- input layer Returns: [keras layer] -- [output layer] ''' W = alpha * U shortcut = inp shortcut = conv2d_bn(shortcut, int(W*0.167) + int(W*0.333) + int(W*0.5), 1, 1, activation=None, padding='same') conv3x3 = conv2d_bn(inp, int(W*0.167), 3, 3, activation='relu', padding='same') conv5x5 = conv2d_bn(conv3x3, int(W*0.333), 3, 3, activation='relu', padding='same') conv7x7 = conv2d_bn(conv5x5, int(W*0.5), 3, 3, activation='relu', padding='same') out = concatenate([conv3x3, conv5x5, conv7x7], axis=3) out = BatchNormalization(axis=3)(out) out = add([shortcut, out]) out = Activation('relu')(out) out = BatchNormalization(axis=3)(out) return out def ResPath(filters, length, inp): ''' ResPath Arguments: filters {int} -- [description] length {int} -- 
length of ResPath inp {keras layer} -- input layer Returns: [keras layer] -- [output layer] ''' shortcut = inp shortcut = conv2d_bn(shortcut, filters, 1, 1, activation=None, padding='same') out = conv2d_bn(inp, filters, 3, 3, activation='relu', padding='same') out = add([shortcut, out]) out = Activation('relu')(out) out = BatchNormalization(axis=3)(out) for i in range(length-1): shortcut = out shortcut = conv2d_bn(shortcut, filters, 1, 1, activation=None, padding='same') out = conv2d_bn(out, filters, 3, 3, activation='relu', padding='same') out = add([shortcut, out]) out = Activation('relu')(out) out = BatchNormalization(axis=3)(out) return out def MultiResUnet(height, width, n_channels): ''' MultiResUNet Arguments: height {int} -- height of image width {int} -- width of image n_channels {int} -- number of channels in image Returns: [keras model] -- MultiResUNet model ''' inputs = Input((height, width, n_channels)) mresblock1 = MultiResBlock(32, inputs) pool1 = MaxPooling2D(pool_size=(2, 2))(mresblock1) mresblock1 = ResPath(32, 4, mresblock1) mresblock2 = MultiResBlock(32*2, pool1) pool2 = MaxPooling2D(pool_size=(2, 2))(mresblock2) mresblock2 = ResPath(32*2, 3, mresblock2) mresblock3 = MultiResBlock(32*4, pool2) pool3 = MaxPooling2D(pool_size=(2, 2))(mresblock3) mresblock3 = ResPath(32*4, 2, mresblock3) mresblock4 = MultiResBlock(32*8, pool3) pool4 = MaxPooling2D(pool_size=(2, 2))(mresblock4) mresblock4 = ResPath(32*8, 1, mresblock4) mresblock5 = MultiResBlock(32*16, pool4) up6 = concatenate([Conv2DTranspose( 32*8, (2, 2), strides=(2, 2), padding='same')(mresblock5), mresblock4], axis=3) mresblock6 = MultiResBlock(32*8, up6) up7 = concatenate([Conv2DTranspose( 32*4, (2, 2), strides=(2, 2), padding='same')(mresblock6), mresblock3], axis=3) mresblock7 = MultiResBlock(32*4, up7) up8 = concatenate([Conv2DTranspose( 32*2, (2, 2), strides=(2, 2), padding='same')(mresblock7), mresblock2], axis=3) mresblock8 = MultiResBlock(32*2, up8) up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=( 2, 2), padding='same')(mresblock8), mresblock1], axis=3) mresblock9 = MultiResBlock(32, up9) conv10 = conv2d_bn(mresblock9, 1, 1, 1, activation='sigmoid') model = Model(inputs=[inputs], outputs=[conv10]) return model # + [markdown] colab_type="text" id="2frZlpmFsv1f" # ## Auxiliary Functions # + [markdown] colab_type="text" id="degBGBWYsyNG" # ### Custom Metrics # # Since Keras does not have build-in support for computing Dice Coefficient or Jaccard Index (at the time of writing), the following functions are declared # + colab={} colab_type="code" id="xq8qfLqDA6q2" def dice_coef(y_true, y_pred): smooth = 0.0 y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum(y_true_f * y_pred_f) return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth) def jacard(y_true, y_pred): y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum ( y_true_f * y_pred_f) union = K.sum ( y_true_f + y_pred_f - y_true_f * y_pred_f) return intersection/union # + [markdown] colab_type="text" id="VtvyeBXy8Mk3" # ### Saving Model # # Function to save the model # + colab={} colab_type="code" id="GCfX9MUYALar" def saveModel(model): model_json = model.to_json() try: os.makedirs('models') except: pass fp = open('models/modelP.json','w') fp.write(model_json) model.save_weights('models/modelW.h5') # + [markdown] colab_type="text" id="AH3znjx-8vDq" # ### Evaluate the Model # # We evaluate the model on test data (X_test, Y_test). 
# # We compute the values of Jaccard Index and Dice Coeficient, and save the predicted segmentation of first 10 images. The best model is also saved # # (This could have been done using keras call-backs as well) # + colab={} colab_type="code" id="Tkit_YYvBQ7V" def evaluateModel(model, X_test, Y_test, batchSize): try: os.makedirs('results') except: pass yp = model.predict(x=X_test, batch_size=batchSize, verbose=1) yp = np.round(yp,0) for i in range(10): plt.figure(figsize=(20,10)) plt.subplot(1,3,1) plt.imshow(X_test[i]) plt.title('Input') plt.subplot(1,3,2) plt.imshow(Y_test[i].reshape(Y_test[i].shape[0],Y_test[i].shape[1])) plt.title('Ground Truth') plt.subplot(1,3,3) plt.imshow(yp[i].reshape(yp[i].shape[0],yp[i].shape[1])) plt.title('Prediction') intersection = yp[i].ravel() * Y_test[i].ravel() union = yp[i].ravel() + Y_test[i].ravel() - intersection jacard = (np.sum(intersection)/np.sum(union)) plt.suptitle('Jacard Index'+ str(np.sum(intersection)) +'/'+ str(np.sum(union)) +'='+str(jacard)) plt.savefig('results/'+str(i)+'.png',format='png') plt.close() jacard = 0 dice = 0 for i in range(len(Y_test)): yp_2 = yp[i].ravel() y2 = Y_test[i].ravel() intersection = yp_2 * y2 union = yp_2 + y2 - intersection jacard += (np.sum(intersection)/np.sum(union)) dice += (2. * np.sum(intersection) ) / (np.sum(yp_2) + np.sum(y2)) jacard /= len(Y_test) dice /= len(Y_test) print('Jacard Index : '+str(jacard)) print('Dice Coefficient : '+str(dice)) fp = open('models/log.txt','a') fp.write(str(jacard)+'\n') fp.close() fp = open('models/best.txt','r') best = fp.read() fp.close() if(jacard>float(best)): print('***********************************************') print('Jacard Index improved from '+str(best)+' to '+str(jacard)) print('***********************************************') fp = open('models/best.txt','w') fp.write(str(jacard)) fp.close() saveModel(model) # + [markdown] colab_type="text" id="80IwnKtM9NHY" # ### Training the Model # # The model is trained and evaluated after each epochs # + colab={} colab_type="code" id="wlOOh0-nA05L" def trainStep(model, X_train, Y_train, X_test, Y_test, epochs, batchSize): for epoch in range(epochs): print('Epoch : {}'.format(epoch+1)) model.fit(x=X_train, y=Y_train, batch_size=batchSize, epochs=1, verbose=1) evaluateModel(model,X_test, Y_test,batchSize) return model # + [markdown] colab_type="text" id="4wJ5V9AJ9Ygu" # ## Define Model, Train and Evaluate # + colab={"base_uri": "https://localhost:8080/", "height": 1105} colab_type="code" id="GJqeuZPhDZSK" outputId="b1344374-f69e-4805-a22d-b8152f07b754" model = MultiResUnet(height=64, width=64, n_channels=1) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[dice_coef, jacard, 'accuracy']) saveModel(model) fp = open('models/log.txt','w') fp.close() fp = open('models/best.txt','w') fp.write('-1.0') fp.close() trainStep(model, X_train, Y_train, X_test, Y_test, epochs=35, batchSize=32) # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 3 (ipykernel) # language: python # name: python3 # --- from os import path import pandas as pd import numpy as np from eis.EISDataIO import eis_dataframe_from_csv, ECM_from_raw_strings from sklearn import preprocessing # + # Load training data # here = !pwd train_data_path = path.join(path.dirname(here[0]), "train_data.csv") eis_data = eis_dataframe_from_csv(train_data_path) # + # Drop ECM parameters from df for classification exercise #sample = 
eis_data.loc[eis_data.Circuit=="L-R-RCPE"] sample = eis_data sample # + # Split X into Z_real and Z_imag sample2 = sample z_count = sample2.Z.size sample2["Z_real"] = "" sample2["Z_imag"] = "" for x in range(z_count): z_real_x = np.array(sample2.Z[x].real) sample2.Z_real[x] = z_real_x z_imag_x = np.array(sample2.Z[x].imag) sample2.Z_imag[x] = z_imag_x sample2 = sample2.drop(['Parameters','Z'], axis=1) sample2 = sample2[['freq','Z_real','Z_imag','Circuit']] sample2 # + # Encode string labels to integers le = preprocessing.LabelEncoder() le.fit(sample2.Circuit) print(list(le.classes_)) le.transform(sample2.Circuit) sample2['Circuit_int']="" Circuit_int = le.transform(sample2.Circuit) sample2.Circuit_int = Circuit_int sample2 # + from eis.EISDataIO import eis_dataframe_from_csv, ECM_from_raw_strings from eis.EISPlot import plot_eis for x in range(len(le.classes_)): sample3 = eis_data.loc[eis_data.Circuit==list(le.classes_)[x]] print(list(le.classes_)[x]) for y in range(5): sample_i = sample3.iloc[y] frequencies = sample_i.freq impedances = sample_i.Z plot_eis(frequencies, impedances) # - # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python 2 # language: python # name: python2 # --- import numpy as np import pandas as pd import matplotlib.pyplot as plt from bokeh.charts import Scatter, show from bokeh.plotting import figure from bokeh.io import output_notebook output_notebook() df = pd.read_csv('../MA490-MachineLearning-FinalProject//sineData5.csv') # + df['Actual'] = np.zeros(100) for i in range(100): df['Actual'].values[i] = np.sin(df['Input'].values[i]) df.head() # - fig = Scatter(df[20:80], x='Input',y='Prediction',color='blue') f = figure() f.line(df.Input.values[18:82], df.Prediction.values[18:82], color='blue') x = np.linspace(-3*np.pi-np.pi/2, 3*np.pi+np.pi/2, 100) y = np.sin(x) f.line(x,y,color='red') x = [-3*np.pi,-3*np.pi] y = [-3,3] f.line(x,y,color='black') x = [3*np.pi,3*np.pi] y = [-3,3] f.line(x,y,color='black') show(f) # ##### sine(x): Calculates the sine of x. # Using skflow and its library I trained a TensorFlowDNNRegressor with 9 hidden units. #
    Process: # * Originally trained using random data by picking random numbers between 0 and 1000 and converting to radians # * Trained over 10,000 iterations. # * Used 2 hidden units. # * Had very high error. # # * Next used 0 to 720 degrees and fed all the values to the net. # * Trained over 10,000 iterations. # * Used 2 hidden units. # * Still had very high error, didn't seem to learn very well. # # * Generated 10000 numbers between -π to π and fed the neural network the sine taylor expansion (9 of the terms) for each value. # * Trained over 10,000 iterations. # * Used 9 hidden units. # * Was much more accurate than the previous attempts, can predict in the range between -pi and pi almost spot on. # * As you increase the range the prediction becomes less accurate and heads off to infinite and can't seem to generalize the sine curve. # * Generated 10000 numbers between -3π to 3π # * Trained over 100,000 iterations # * Used 9 hidden units # * Still accurate between -3π and 3π yet outside of that range still expands to infinity # --- # jupyter: # jupytext: # text_representation: # extension: .py # format_name: light # format_version: '1.5' # jupytext_version: 1.14.4 # kernelspec: # display_name: Python [conda env:py38] * # language: python # name: conda-env-py38-py # --- # # M2 Statistics (JPL-MUR vs Peggy) # # 2016+ (post algorithm change) # May-Aug (or May-Aug) # - should validate "std of daily temperature" when making daily means of m2 # - statistics of the correlation, mean difference and standard deviation # # 2013+ for ice retreat period # - statistics when sea ice > 80%, 60% ... use subsurface? (11m) from erddapy import ERDDAP import xarray as xa import pandas as pd import numpy as np # + server_url='http://akutan.pmel.noaa.gov:8080/erddap' e = ERDDAP(server=server_url) dfm2={} for dataset_id in ['datasets_Mooring_19bsm2a_preliminary', # 'datasets_Mooring_19bsitaepr2a_final', 'datasets_Mooring_18bsm2a_final', 'datasets_Mooring_17bsm2a_final', 'datasets_Mooring_16bsm2a_final', 'datasets_Mooring_15bsm2a_final', 'datasets_Mooring_14bsm2a_final', 'datasets_Mooring_13bsm2a_final', 'datasets_Mooring_12bsm2a_final', 'datasets_Mooring_10bsm2a_final']: print(f'{dataset_id}') try: e = ERDDAP(server=server_url, protocol='tabledap', response='csv' ) e.dataset_id=dataset_id except HTTPError: print('Failed to generate url {}'.format(dataset_id)) continue try: dftemp = e.to_pandas( index_col='time (UTC)', parse_dates=True, skiprows=(1,) # units information can be dropped. 
) dftemp.columns = [x[1].split()[0] for x in enumerate(dftemp.columns)] ##resample as daily data dfm2.update({dataset_id: dftemp}) except: pass # - dfm2['datasets_Mooring_19bsm2a_preliminary'] = dfm2['datasets_Mooring_19bsm2a_preliminary'].loc['2019-4-26':'2019-9-20'] dfm2['datasets_Mooring_19bsm2a_preliminary'][dfm2['datasets_Mooring_19bsm2a_preliminary'].temperature < -2] =np.nan sst_m2 = pd.DataFrame() for mooring in sorted(list(dfm2.keys())): dint = np.nan try: dint = dfm2[mooring].depth.unique()[(dfm2[mooring].depth.unique() > 0)].min() if np.isnan(dfm2[mooring][(dfm2[mooring].depth == dint)].temperature).all(): dint = dfm2[mooring].depth.unique()[(dfm2[mooring].depth.unique() > 3)].min() print(dint) sst_m2 = pd.concat([sst_m2,dfm2[mooring][(dfm2[mooring].depth == dint)].dropna(subset=['temperature'],axis=0)[['depth','temperature']]]) except: print(f'{mooring} failed') sst_m2_daily = sst_m2.resample('1D').mean() sst_m2_std = sst_m2.resample('1D').std() sst_m2_std.temperature.plot(figsize=(16,2)) sst_m2_daily_hr = sst_m2.resample('1H').mean() sst_m2_daily_hr = sst_m2_daily_hr[sst_m2_daily_hr.index.hour==9] sst_m2_daily_hr.index = sst_m2_daily_hr.index.round('1D') #mur files sstfiles = '/Volumes/MobileSSD/in_and_outbox/data_sets/podaac_MUR/M2_highres/' mdf_hres = xa.open_mfdataset(sstfiles+'201[0123456789]*.nc') m2point=[56.867,180+(180-164.05)] m2pointW=[56.867,-164.05] mdf_hres_m2 = mdf_hres.sel(lat=m2pointW[0], lon=m2pointW[1], method="nearest").load() #mod and manip mdf_hres_m2_df = mdf_hres_m2.to_dataframe().drop(['lat','lon'],axis=1) mdf_hres_m2_df['analysed_sst'] = mdf_hres_m2_df['analysed_sst']-273.15 mdf_hres_m2_df.index = mdf_hres_m2_df.index.tz_localize('utc').round('1D') m2_merged = mdf_hres_m2_df.join(sst_m2_daily) m2_merged_hr = mdf_hres_m2_df.join(sst_m2_daily_hr) m2_merged.plot(figsize=(16,2)) m2_merged_hr.plot(figsize=(16,2)) #choose just 2018 year = '2012-5' year2 = '2012-8' (m2_merged.loc[year:year2]['analysed_sst'] - m2_merged.loc[year:year2]['temperature']).plot() (m2_merged_hr.loc[year:year2]['analysed_sst'] - m2_merged_hr.loc[year:year2]['temperature']).plot() import seaborn as sns import statsmodels.api as sm #daily averaged 2017 mean of the differences print(f"Mean: {(m2_merged.loc[year:year2]['analysed_sst'] - m2_merged.loc[year:year2]['temperature']).mean()}") print(f"STD: {(m2_merged.loc[year:year2]['analysed_sst'] - m2_merged.loc[year:year2]['temperature']).std()}") # + data = m2_merged.loc[year:year2].dropna(axis=0, how='any',subset=['analysed_sst','temperature']) y = data.analysed_sst x = sm.add_constant(data.temperature) mod = sm.OLS(y, x).fit() print(mod.summary()) # - sns.pairplot(data=m2_merged.loc[year:year2],corner=True) # ## Now look at same stats for a year with ice in region # # - 2013 (never greater than 60% and sporadic # - 2012 has a smoother curve and closer to 80% # - 2017,2011 a little ice # - 2010 crazy rapid retreat? 
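# The per-period comparison above (mean difference, standard deviation of the differences, and an OLS fit of analysed_sst on temperature) is repeated below for the ice season. A small helper like the following captures the same computation; this is only a sketch, not part of the original notebook, and the name diff_stats is hypothetical.

# +
def diff_stats(df, y_col='analysed_sst', x_col='temperature'):
    # drop rows where either series is missing, then compare satellite vs mooring
    d = df.dropna(axis=0, how='any', subset=[y_col, x_col])
    diff = d[y_col] - d[x_col]
    fit = sm.OLS(d[y_col], sm.add_constant(d[x_col])).fit()
    return {'mean_diff': diff.mean(), 'std_diff': diff.std(),
            'r_squared': fit.rsquared, 'n': len(d)}

# e.g. diff_stats(m2_merged.loc[year:year2])
# -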
mdf_hres_m2_df.sea_ice_fraction.plot()

mdf_hres_m2_df['2010':'2013'].sea_ice_fraction.plot(figsize=(12,2))

# +
server_url='http://akutan.pmel.noaa.gov:8080/erddap'
e = ERDDAP(server=server_url)

dfm2c={}
for dataset_id in ['datasets_Mooring_13bs2c_final',
                   'datasets_Mooring_12bs2c_final',
                   'datasets_Mooring_11bs2c_final',
                   'datasets_Mooring_10bs2c_final',
                   'datasets_Mooring_09bs2c_final',]:
    print(f'{dataset_id}')
    try:
        e = ERDDAP(server=server_url,
                   protocol='tabledap',
                   response='csv'
                   )
        e.dataset_id = dataset_id
    except HTTPError:
        print('Failed to generate url {}'.format(dataset_id))
        continue
    try:
        dftemp = e.to_pandas(
            index_col='time (UTC)',
            parse_dates=True,
            skiprows=(1,)  # units information can be dropped.
        )
        dftemp.columns = [x[1].split()[0] for x in enumerate(dftemp.columns)]
        ## resample as daily data
        dfm2c.update({dataset_id: dftemp})
    except:
        pass
# -

sst_m2c = pd.DataFrame()
for mooring in sorted(list(dfm2c.keys())):
    dint = np.nan
    try:
        dint = dfm2c[mooring].depth.unique()[(dfm2c[mooring].depth.unique() > 0)].min()
        if np.isnan(dfm2c[mooring][(dfm2c[mooring].depth == dint)].temperature).all():
            dint = dfm2c[mooring].depth.unique()[(dfm2c[mooring].depth.unique() > 9)].min()
        print(dint)
        sst_m2c = pd.concat([sst_m2c,dfm2c[mooring][(dfm2c[mooring].depth == dint)].dropna(subset=['temperature'],axis=0)[['depth','temperature']]])
    except:
        print(f'{mooring} failed')

# +
sst_m2c_daily = sst_m2c.resample('1D').mean()
sst_m2c_std = sst_m2c.resample('1D').std()
sst_m2c_std.temperature.plot(figsize=(16,2))

sst_m2c_daily_hr = sst_m2c.resample('1H').mean()
sst_m2c_daily_hr = sst_m2c_daily_hr[sst_m2c_daily_hr.index.hour==9]
sst_m2c_daily_hr.index = sst_m2c_daily_hr.index.round('1D')

m2_merged = mdf_hres_m2_df.join(sst_m2c_daily)
m2_merged_hr = mdf_hres_m2_df.join(sst_m2c_daily_hr)

m2_merged.plot(figsize=(16,2))
m2_merged_hr.plot(figsize=(16,2))
# -

# choose the early-2012 ice season
year = '2012-1'
year2 = '2012-4'

(m2_merged.loc[year:year2]['analysed_sst'] - m2_merged.loc[year:year2]['temperature']).plot(figsize=(16,2))

(m2_merged_hr.loc[year:year2]['analysed_sst'] - m2_merged_hr.loc[year:year2]['temperature']).plot(figsize=(16,2))

m2_merged.loc[year:year2].plot(figsize=(16,2))

m2_merged_hr.loc[year:year2].plot(figsize=(16,2))

m2_merged_ice = m2_merged[m2_merged.sea_ice_fraction >= 0]

# mean and std of the daily-averaged differences for the selected period
m2_merged_ice

print(f"Mean: {(m2_merged_ice.loc[year:year2]['analysed_sst'] - m2_merged_ice.loc[year:year2]['temperature']).mean()}")
print(f"STD: {(m2_merged_ice.loc[year:year2]['analysed_sst'] - m2_merged_ice.loc[year:year2]['temperature']).std()}")

# +
data = m2_merged_ice.loc[year:year2].dropna(axis=0, how='any',subset=['analysed_sst','temperature'])

y = data.analysed_sst
x = sm.add_constant(data.temperature)

mod = sm.OLS(y, x).fit()
print(mod.summary())
# -
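# The notes at the top of this notebook call for statistics when the sea-ice fraction exceeds 80% or 60%. The cell below is a sketch of one way to do that (not part of the original analysis), reusing the merged frame and simple thresholds on sea_ice_fraction.

# +
for _thresh in (0.8, 0.6):
    _sel = m2_merged[m2_merged.sea_ice_fraction >= _thresh]
    _diff = _sel['analysed_sst'] - _sel['temperature']
    print(f"sea_ice_fraction >= {_thresh:.0%}: n={_diff.count()}, "
          f"mean diff={_diff.mean():.2f}, std={_diff.std():.2f}")
# -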